Search Results

Search found 19539 results on 782 pages for 'pretty print'.

Page 114/782

  • How to pull one commit at a time from a remote git repository?

    - by Norman Ramsey
    I'm trying to set up a darcs mirror of a git repository. I have something that works OK, but there's a significant problem: if I push a whole bunch of commits to the git repo, those commits get merged into a single darcs patchset. I really want to make sure each git commit gets set up as a single darcs patchset. I bet this is possible by doing some kind of git fetch followed by interrogation of the local copy of the remote branch, but my git fu is not up to the job. Here's the (ksh) code I'm using now, more or less:

        git pull -v   # pulls all the commits from remote --- bad!

        # gets information about only the last commit pulled -- bad!
        author="$(git log HEAD^..HEAD --pretty=format:"%an <%ae>")"
        logfile=$(mktemp)
        git log HEAD^..HEAD --pretty=format:"%s%n%b%n" > $logfile

        # add all new files to darcs and record a patchset. this part is OK
        darcs add -q --umask=0002 -r .
        darcs record -a -A "$author" --logfile="$logfile"
        darcs push -a
        rm -f $logfile

    My idea is:

    1. Try git fetch to get a local copy of the remote branch (not sure exactly what arguments are needed).
    2. Somehow interrogate the local copy to get a hash for every commit since the last mirroring operation (I have no idea how to do this).
    3. Loop through all the hashes, pulling just that commit and recording the associated patchset (I'm pretty sure I know how to do this if I get my hands on the hash).

    I'd welcome either help fleshing out the scenario above or suggestions about something else I should try. Ideas?
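
    For step 2, the enumeration is the part I imagine looking roughly like this if the mirroring were driven from Python instead of ksh (an untested sketch; it assumes Python 2.7 for subprocess.check_output and that the remote branch is origin/master):

        import subprocess

        def new_commit_hashes(upstream='origin/master'):
            # Fetch the remote branch without touching the working tree,
            # then list the commits not yet mirrored, oldest first.
            subprocess.check_call(['git', 'fetch', 'origin'])
            out = subprocess.check_output(
                ['git', 'rev-list', '--reverse', 'HEAD..' + upstream])
            return out.split()

    Each hash could then be fast-forwarded to one at a time (git merge --ff-only <hash>) before recording the corresponding darcs patchset.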

    Read the article

  • Show me your Linq to SQL architectures!

    - by Brad Heller
    I've been using Linq to SQL for a new implementation that I've been working on. I have about 5000 lines of code and am a little ways from a solid demo. I've been pretty satisfied with Linq to SQL so far -- the tools are excellent and pretty painless and it allows you to get a DAL up and running quickly. That said, there are some major drawbacks that I just keep hitting over and over again. Namely, how to handle separation of concerns between my DAL and my business layer, and juggling that with different data contexts. Here is the architecture I've been using:

    My repositories do all my data access and they return Linq to SQL objects. Each of my Linq to SQL objects implements an IDetachable interface. A typical implementation looks like this:

        partial class PaymentDetail : IDetachable
        {
            #region IDetachable Members
            public bool IsAttached
            {
                get { return PropertyChanging != null; }
            }

            public void Detach()
            {
                if (IsAttached)
                {
                    PropertyChanged = null;
                    PropertyChanging = null;
                    Transaction.Detach();
                }
            }
            #endregion
        }

    Every time I do a DAL operation in my repository I "detach" when I'm done with the object (and it should theoretically detach from any child objects) to remove the DataContext's context. Like I said, this works pretty well, but there are some edge cases that seem to be a big pain in the ass. For instance, my Transaction object has many PaymentDetails. Even when there are no PaymentDetails in that collection, it's still attached to the DataContext's context! Thus, if I try to update (I update by Attach()ing to the object and then SubmitChanges()) I get that dreaded "An attempt has been made to Attach or Add an entity that is not new, perhaps having been loaded from another DataContext. This is not supported." message.

    Anyway, I'm starting to doubt that this technology was a good gamble. Has anyone got a decent architecture that they're willing to share? I'd really love to use this technology but I feel like I spend 1/3 of my time just debugging its quirks!

    Read the article

  • MS Access: Why is ADODB.Recordset.BatchUpdate so much slower than Application.ImportXML?

    - by apenwarr
    I'm trying to run the code below to insert a whole lot of records (from a file with a weird file format) into my Access 2003 database from VBA. After many, many experiments, this code is the fastest I've been able to come up with: it does 10000 records in about 15 seconds on my machine. At least 14.5 of those seconds (ie. almost all the time) is in the single call to UpdateBatch. I've read elsewhere that the JET engine doesn't support UpdateBatch. So maybe there's a better way to do it.

    Now, I would just think the JET engine is plain slow, but that can't be it. After generating the 'testy' table with the code below, I right clicked it, picked Export, and saved it as XML. Then I right clicked, picked Import, and reloaded the XML. Total time to import the XML file? Less than one second, ie. at least 15x faster. Surely there's an efficient way to insert data into Access that doesn't require writing a temp file?

        Sub TestBatchUpdate()
            CurrentDb.Execute "create table testy (x int, y int)"

            Dim rs As New ADODB.Recordset
            rs.CursorLocation = adUseServer
            rs.Open "testy", CurrentProject.AccessConnection, _
                    adOpenStatic, adLockBatchOptimistic, adCmdTableDirect

            Dim n, v
            n = Array(0, 1)
            v = Array(50, 55)

            Debug.Print "starting loop", Time
            For i = 1 To 10000
                rs.AddNew n, v
            Next i
            Debug.Print "done loop", Time

            rs.UpdateBatch
            Debug.Print "done update", Time

            CurrentDb.Execute "drop table testy"
        End Sub

    I would be willing to resort to C/C++ if there's some API that would let me do fast inserts that way. But I can't seem to find it. It can't be that Application.ImportXML is using undocumented APIs, can it?

    Read the article

  • Base class deleted before subclass during python __del__ processing

    - by Oddthinking
    Context

    I am aware that if I ask a question about Python destructors, the standard argument will be to use contexts instead. Let me start by explaining why I am not doing that. I am writing a subclass to logging.Handler. When an instance is closed, it posts a sentinel value to a Queue.Queue. If it doesn't, a second thread will be left running forever, waiting for Queue.Queue.get() to complete. I am writing this with other developers in mind, so I don't want a failure to call close() on a handler object to cause the program to hang. Therefore, I am adding a check in __del__() to ensure the object was closed properly. I understand circular references may cause it to fail in some circumstances. There's not a lot I can do about that.

    Problem

    Here is some simple example code:

        explicit_delete = True

        class Base:
            def __del__(self):
                print "Base class cleaning up."

        class Sub(Base):
            def __del__(self):
                print "Sub-class cleaning up."
                Base.__del__(self)

        x = Sub()
        if explicit_delete:
            del x
        print "End of thread"

    When I run this I get, as expected:

        Sub-class cleaning up.
        Base class cleaning up.
        End of thread

    If I set explicit_delete to False in the first line, I get:

        End of thread
        Sub-class cleaning up.
        Exception AttributeError: "'NoneType' object has no attribute '__del__'" in <bound method Sub.__del__ of <__main__.Sub instance at 0x00F0B698>> ignored

    It seems the definition of Base is removed before x.__del__() is called. The Python documentation on __del__() warns that the subclass needs to call the base class to get a clean deletion, but here that appears to be impossible. Can you see where I made a bad step?
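
    One workaround I am experimenting with is keeping a reference to the base class in a default argument, so it is still reachable at interpreter shutdown after the module-level name Base has been set to None; I am not sure it is the sanctioned approach:

        class Sub(Base):
            # _base is bound when the method is defined, so it survives the
            # module global Base being torn down (set to None) at shutdown.
            def __del__(self, _base=Base):
                print "Sub-class cleaning up."
                _base.__del__(self)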

    Read the article

  • Textually diffing JSON

    - by Richard Levasseur
    As part of my release processes, I have to compare some JSON configuration data used by my application. As a first attempt, I just pretty-printed the JSON and diff'ed them (using kdiff3 or just diff). As that data has grown, however, kdiff3 confuses different parts in the output, making additions look like giant modifies, odd deletions, etc. It makes it really hard to figure out what is different. I've tried other diff tools, too (meld, kompare, diff, a few others), but they all have the same problem. Despite my best efforts, I can't seem to format the JSON in a way that the diff tools can understand.

    Example data:

        [
            {
                "name": "date",
                "type": "date",
                "nullable": true,
                "state": "enabled"
            },
            {
                "name": "owner",
                "type": "string",
                "nullable": false,
                "state": "enabled",
            }
            ...lots more...
        ]

    The above probably wouldn't cause the problem (the problem occurs when there begin to be hundreds of lines), but that's the gist of what is being compared. That's just a sample; the full objects are 4-5 attributes, and some attributes have 4-5 attributes in them. The attribute names are pretty uniform, but their values pretty varied.

    In general, it seems like all the diff tools confuse the closing "}" with the next object's closing "}". I can't seem to break them of this habit. I've tried adding whitespace, changing indentation, and adding some "BEGIN" and "END" strings before and after the respective objects, but the tools still get confused.
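
    For what it's worth, the pretty-printing step itself is just this (a sketch that sorts the keys so attribute order can never show up as a diff; it assumes the json module, or simplejson on older Pythons):

        import json

        def normalize(path):
            # Re-serialize with sorted keys and fixed indentation so two dumps
            # of equivalent data are textually identical line for line.
            with open(path) as f:
                data = json.load(f)
            return json.dumps(data, sort_keys=True, indent=4)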

    Read the article

  • What Scheme Does Ghuloum Use?

    - by Don Wakefield
    I'm trying to work my way through Compilers: Backend to Frontend (and Back to Front Again) by Abdulaziz Ghuloum. It seems abbreviated from what one would expect in a full course/seminar, so I'm trying to fill in the pieces myself. For instance, I have tried to use his testing framework in the R5RS flavor of DrScheme, but it doesn't seem to like the macro stuff:

        src/ghuloum/tests/tests-driver.scm:6:4: read: illegal use of open square bracket

    I've read his intro paper on the course, An Incremental Approach to Compiler Construction, which gives a great overview of the techniques used, and mentions a couple of Schemes with features one might want to implement for 'extra credit', but he doesn't mention the Scheme he uses in the course.

    Update

    I'm still digging into the original question (investigating options such as Petit Scheme suggested by Eli below), but found an interesting link relating to Ghuloum's work, so I am including it here. Ikarus Scheme (http://en.wikipedia.org/wiki/Ikarus_(Scheme_implementation)) is the actual implementation of Ghuloum's ideas, and appears to have been part of his Ph.D. work. It's supposed to be one of the first implementations of R6RS. I'm trying to install Ikarus now, but the configure script doesn't want to recognize my system's install of libgmp.so, so my problems are still unresolved.

    Example

    The following example seems to work in PLT 2.4.2 running in DrEd using the Pretty Big language:

        (require lang/plt-pretty-big)
        (load "/Users/donaldwakefield/ghuloum/tests/tests-driver.scm")
        (load "/Users/donaldwakefield/ghuloum/tests/tests-1.1-req.scm")

        (define (emit-program x)
          (unless (integer? x) (error "---"))
          (emit "    .text")
          (emit "    .globl scheme_entry")
          (emit "    .type scheme_entry, @function")
          (emit "scheme_entry:")
          (emit "    movl $~s, %eax" x)
          (emit "    ret"))

    Attempting to replace the require directive with #lang scheme results in the error message

        foo.scm:7:3: expand: unbound identifier in module in: emit

    which appears to be due to a failure to load tests-driver.scm. Attempting to use #lang r6rs disables the REPL, which I'd really like to use, so I'm going to try to continue with Pretty Big. My thanks to Eli Barzilay for his patient help.

    Read the article

  • Send document to printer from web page

    - by FiveTools
    I have a webpage that activates a print job on a printer. This works in the localhost environment but does not work when the application is deployed to the webserver. I'm using the PrintDocument class from the .NET System.Drawing.Printing namespace. I'm now assuming the printer has to be available to the application on the remote server? Any suggestions on how I would get this to work?

        PrintDocument pd = new PrintDocument();
        PaperSource ps = new PaperSource();
        pd.DefaultPageSettings.PaperSize =
            new System.Drawing.Printing.PaperSize("Custom", 1180, 850);
        pd.PrintPage += new PrintPageEventHandler(this.pd_PrintPage);

        // Set your printer's name. Obtain from
        // System's Printer Dialog Box.
        pd.PrinterSettings.PrinterName = "Okidata ML 321 Turbo/D (IBM)";

        //PrintPreviewDialog dlgPrintPvw = new PrintPreviewDialog();
        //dlgPrintPvw.Document = pd;
        //dlgPrintPvw.Focus();
        //dlgPrintPvw.ShowDialog();

        pd.Print();

    Read the article

  • How to display external database data in VBulletin 4 Forums Custom PHP Block?

    - by NJTechGuy
    Hi guys! I want to display a data feed from an external database in a sidebar in the forums section. PHP block code:

        $host = 'db.123.net';
        $dbUser = 'db49';
        $dbPass = 'iReVbY';
        $db = 'db6578h8';

        mysql_connect("$host", "$dbUser", "$dbPass") or die(mysql_error());
        mysql_select_db("$db") or die(mysql_error());

        ob_start();
        $result = mysql_query("SELECT id, title from abc") or die(mysql_error());
        while ($row = mysql_fetch_array($result)) {
            print "<center>";
            print "<a href=\"http://abc.com/?id=" . $row['id'] . "\"></a>";
            print "</center>";
        }
        $output .= ob_get_contents();
        return $output;
        ob_end_clean();

    How do I return an array to display in a PHP block in the sidebar (forums section)? Please help me out of this! Thank you.

    Read the article

  • getting a tiled image collection on the iPad (deepzoom)

    - by Chris B
    I have a set of tiled image collections created via Microsoft's Deep Zoom Composer, and a Silverlight app that currently consumes them for display via MultiScaleImage - it's all working pretty well. I'd just like to get some experience with iPad programming and have a couple of ideas for some iPad applications. All my ideas rely on me being able to display/manipulate these tiled image sets on the iPad. I just picked up an iMac to facilitate this.

    I'm not seeing any Objective-C / Cocoa Touch libraries for this though, so am assuming I will have to roll my own. (I saw the Seadragon Ajax component, which is pretty slick, but I'm dealing with collections here, which it doesn't support. I would also like to roll this as a native app just to get the experience.) The only open source project I found for displaying/manipulating the tiled image sets was OpenZoom - a Flash component. I'm not too familiar with ActionScript either (Python, Java, C#, and C are the only languages I have really used), but briefly inspecting the code I didn't really have any issues with it and can probably use it for hints on how to swap the tiles in and out, etc.

    But, as I'm pretty new to Objective-C/Cocoa Touch, some pointers in the right direction would be appreciated.

    1) Are there any other projects out there I am missing, or is OpenZoom my best bet for some reference?
    2) Should I be trying to do this display in the UIKit framework, or should I do it as an OpenGL display?
    3) Any other suggestions/pointers that I didn't think to ask.

    Read the article

  • Nicely representing a floating-point number in python

    - by dln385
    I want to represent a floating-point number as a string rounded to some number of significant digits, and never using the exponential format. Essentially, I want to display any floating-point number and make sure it “looks nice”. There are several parts to this problem:

    - I need to be able to specify the number of significant digits.
    - The number of significant digits needs to be variable, which can't be done with the string formatting operator.
    - I need it to be rounded the way a person would expect, not something like 1.999999999999.

    I've figured out one way of doing this, though it looks like a workaround and it's not quite perfect. (The maximum precision is 15 significant digits.)

        >>> def f(number, sigfig):
                return ("%.15f" % (round(number, int(-1 * floor(log10(number)) + (sigfig - 1))))).rstrip("0").rstrip(".")

        >>> print f(0.1, 1)
        0.1
        >>> print f(0.0000000000368568, 2)
        0.000000000037
        >>> print f(756867, 3)
        757000

    Is there a better way to do this? Why doesn't Python have a built-in function for this?
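
    The closest I have gotten to a cleaner version computes the number of decimal places directly instead of always formatting to 15 digits (a sketch that ignores zero and negative numbers):

        from math import floor, log10

        def to_sigfig(number, sigfig):
            # how many digits after the decimal point keep `sigfig` significant digits
            decimals = sigfig - 1 - int(floor(log10(abs(number))))
            rounded = round(number, decimals)
            s = '%.*f' % (max(decimals, 0), rounded)
            if '.' in s:
                # strip trailing zeros only when there is a fractional part
                s = s.rstrip('0').rstrip('.')
            return s

    This gives '0.1', '0.000000000037', and '757000' for the three examples above, but it still tops out at whatever precision the underlying float carries.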

    Read the article

  • Python halts while iteratively processing my 1GB csv file

    - by Dan
    I have two files:

    - metadata.csv: contains an ID, followed by vendor name, a filename, etc.
    - hashes.csv: contains an ID, followed by a hash

    The ID is essentially a foreign key of sorts, relating file metadata to its hash. I wrote this script to quickly extract out all hashes associated with a particular vendor. It craps out before it finishes processing hashes.csv:

        stored_ids = []

        # this file is about 1 MB
        entries = csv.reader(open(options.entries, "rb"))
        for row in entries:
            # row[2] is the vendor
            if row[2] == options.vendor:
                # row[0] is the ID
                stored_ids.append(row[0])

        # this file is 1 GB
        hashes = open(options.hashes, "rb")

        # I iteratively read the file here,
        # just in case the csv module doesn't do this.
        for line in hashes:
            # not sure if stored_ids contains strings or ints here...
            # this probably isn't the problem though
            if line.split(",")[0] in stored_ids:
                # if it's one of the IDs we're looking for,
                # print the file and hash to STDOUT
                print "%s,%s" % (line.split(",")[2], line.split(",")[4])

        hashes.close()

    This script gets about 2000 entries through hashes.csv before it halts. What am I doing wrong? I thought I was processing it line by line.

    PS: the csv files are the popular HashKeeper format and the files I am parsing are the NSRL hash sets. http://www.nsrl.nist.gov/Downloads.htm#converter

    UPDATE: working solution below. Thanks everyone who commented!

        entries = csv.reader(open(options.entries, "rb"))
        stored_ids = dict((row[0], 1) for row in entries if row[2] == options.vendor)

        hashes = csv.reader(open(options.hashes, "rb"))
        matches = dict((row[2], row[4]) for row in hashes if row[0] in stored_ids)

        for k, v in matches.iteritems():
            print "%s,%s" % (k, v)
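
    For the record, a set appears to work just as well as the dict above; the original version looked like it halted because every `in stored_ids` test was a linear scan over a list, done once per line of the 1 GB file:

        entries = csv.reader(open(options.entries, "rb"))
        stored_ids = set(row[0] for row in entries if row[2] == options.vendor)
        # set membership is a hash lookup, so the per-line test is O(1)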

    Read the article

  • Margin on the top in LaTex

    - by Tim
    I am now writing a thesis which is required to have 1.5 inches on the left (binding) side, and 1 inch on the other three sides. I just had my print-out checked in the binding office and I was told the margins are not okay; especially the one on the top is not enough, a little less than 1 inch.

    I wonder which commands are responsible for the margins on the four sides, especially the one on the top? I provide links to two files I believe control the layout and format of the thesis: jhu12.clo and thesis.cls. Or, instead of modifying the LaTeX files, is there some setting when printing the PDF file to control these margins on the print-out? Thanks and regards!

    EDIT: This is the print-out of the command \layout. It shows "one inch + \voffset" for the top (the No. 2 item). But the staff at the binding office used his ruler to show it is less than 1 inch. How to adjust the margins in LaTeX then? Thanks!

    Read the article

  • Consistent Equals() results, but inconsistent TreeMap.containsKey() result

    - by smessing
    I have the following object Node:

        private class Node implements Comparable<Node> {
            private String guid;
            ...
            public boolean equals(Node o) {
                return (this == o);
            }

            public int hashCode() {
                return guid.hashCode();
            }
            ...
        }

    And I use it in the following TreeMap:

        TreeMap<Node, TreeSet<Edge>> nodes = new TreeMap<Node, TreeSet<Edge>>();

    Now, the tree map is used in a class called Graph to store nodes currently in the graph, along with a set of their edges (from the class Edge). My problem is when I try to execute:

        public containsNode(n) {
            for (Node x : nodes.keySet()) {
                System.out.println("HASH CODE: ");
                System.out.print(x.hashCode() == n.hashCode());
                System.out.println("EQUALS: ");
                System.out.print(x.equals(n));
                System.out.println("CONTAINS: ");
                System.out.print(nodes.containsKey(n));
                System.out.println("N: " + n);
                System.out.println("X: " + x);
            }
        }

    I sometimes get the following:

        HASHCODE: true
        EQUALS: true
        CONTAINS: false
        N: foo
        X: foo

    Anyone have an idea as to what I'm doing wrong? I'm still new to all this, so I apologize in advance if I'm overlooking something simple (I know hashCode() doesn't really matter for TreeMap, but I figured I'd include it).

    Read the article

  • Problem when reading input in C

    - by gcx
    I've made a linked list. Its elements keep both the previous and next items' addresses. It gets commands from an input file. It detects the command and uses the following statement as a parameter (text: add_to_front john - means: add_to_front(john)).

    Code: http://pastebin.com/KcAm1y3L

    When I try to give the commands from an input file it gives me the same output over and over. However, if I write the inputs in main() manually, it works. For example, with the input file:

        add_to_front john
        add_to_back jane
        add_to_back jane
        print

    (unfortunately) the output is:

        >add_to_front john
        >add_to_back jane
        >add_to_back jane
        >print
        jane jane jane

    Although, if I write

        add_to_front(john);
        add_to_back(jane);
        add_to_back(jane);
        print();

    instead of this command check:

        while (scanf("%s", command) != EOF) {
            if (strcmp(command, "add_to_front") == 0) {
                gets(parameter);
                add_to_front(parameter);
            }
            else if (strcmp(command, "add_to_back") == 0) {
                gets(parameter);
                add_to_back(parameter);
            }
            else if (strcmp(command, "remove_from_back") == 0)
                remove_from_back(parameter);
            ...
            printf(" HUH?\n");
            }
        }

    in main(), it gives the correct output. I know it's a lot to ask but this thing has been bothering me for 2 days. What do you think I'm doing wrong?

    Read the article

  • How to convert a printer driver to a stand-alone console application which can generate a printer file

    - by Thorbjørn Ravn Andersen
    I have a situation where the only way to generate a certain data file is to print it manually to FILE: under Windows and save it in a file for further processing. I would really like to have a small stand-alone program which embeds this binary printer driver so I can run it from a batch file and have it generate that binary file for me; we could then fully automate the "save file in Visio, 'print' it, upload it to the final destination, and trigger a remote test" sequence. Is this possible with a suitable Windows SDK? I am a Java programmer, so I do not know Visual Studio and the possibilities with MSDN - yet! - but I'd appreciate pointers.

    EDIT: I have the installation files for that printer driver, both 32 and 64 bit. Older versions may include a 16 bit driver.

    EDIT: The "print to FILE:" functionality is just what was recommended by the documentation. I have played a little bit with using the LPR protocol to see what it can do. I'd still prefer the "invoke small binary" approach.

    Read the article

  • How do I find, count, and display unique elements of an array using Perl?

    - by Luke
    I am a novice Perl programmer and would like some help. I have an array list whose elements I am trying to split on the pipe into two scalar elements. From there I would like to keep only the lines that read 'PJ RER Apts to Share' as the first element. Then I want to print out the second element only once, while counting each time that element appears. I wrote the piece of code below but can't figure out where I am going wrong. It might be something small that I am just overlooking. Any help would be greatly appreciated.

    ## CODE ##

        my @data = (
            'PJ RER Apts to Share|PROVIDENCE',
            'PJ RER Apts to Share|JOHNSTON',
            'PJ RER Apts to Share|JOHNSTON',
            'PJ RER Apts to Share|JOHNSTON',
            'PJ RER Condo|WEST WARWICK',
            'PJ RER Condo|WARWICK'
        );

        foreach my $line (@data) {
            $count = @data;
            chomp($line);
            @fields = split(/\|/, $line);
            if (($fields[0] =~ /PJ RER Apts to Share/g)) {
                @array2 = $fields[1];
                my %seen;
                my @uniq = grep { ! $seen{$_}++ } @array2;
                my $count2 = scalar(@uniq);
                print "$array2[0] ($count2)", "\n";
            }
        }
        print "$count", "\n";

    ## OUTPUT ##

        PROVIDENCE (1)
        JOHNSTON (1)
        JOHNSTON (1)
        JOHNSTON (1)
        6

    Read the article

  • how do I paste text to a line by line text filter like awk, without having stdin echo to the screen?

    - by Barton Chittenden
    I have a text in an email on a Windows box that looks something like this:

        100 some random text
        101 some more random text
        102 lots of random text, all different
        103 lots of random text, all the same

    I want to extract the numbers, i.e. the first word on each line. I've got a terminal running bash open on my Linux box. If these were in a text file, I would do this:

        awk '{print $1}' mytextfile.txt

    I would like to paste these in, and get my numbers out, without creating a temp file. My naive first attempt looked like this:

        $ awk '{print $1}'
        100 some random text
        100
        101 some more random text
        101
        102 lots of random text, all different
        103 lots of random text, all the same
        102
        103

    The buffering of stdin and stdout makes a hash of this. I wouldn't mind if stdin all printed first, followed by all of stdout; this is what would happen if I were to paste into 'sort', for example, but awk and sed are a different story.

    A little more thought gave me this: open two terminals. Create a fifo file. Read from the fifo on one terminal, write to it on the other. This does in fact work, but I'm lazy. I don't want to open a second terminal. Is there a way in the shell that I can hide the text echoed to the screen when I'm passing it in to a pipe, so that I paste this:

        100 some random text
        101 some more random text
        102 lots of random text, all different
        103 lots of random text, all the same

    but see this?

        $ awk '{print $1}'
        100
        101
        102
        103

    Read the article

  • Perl Parallel::ForkManager wait_all_children() takes excessively long time

    - by zhang18
    I have a script that uses Parallel::ForkManager. However, the wait_all_children() call takes an incredibly long time even after all child processes are completed. The way I know is by printing out some timestamps (see below). Does anyone have any idea what might be causing this (I have 16 CPU cores on my machine)?

        my $pm = Parallel::ForkManager->new(16);

        for my $i (1..16) {
            $pm->start($i) and next;
            ... do something within the child-process ...
            print (scalar localtime), " Process $i completed.\n";
            $pm->finish();
        }

        print (scalar localtime), " Waiting for some child process to finish.\n";
        $pm->wait_all_children();
        print (scalar localtime), " All processes finished.\n";

    Clearly, I'll get the "Waiting for some child process to finish" message first, with a timestamp of, say, 7:08:35. Then I'll get a list of "Process i completed" messages, with the last one at 7:10:30. However, I do not receive the "All processes finished" message until 7:16:33(!). Why is there a 6-minute delay between 7:10:30 and 7:16:33? Thx!

    Read the article

  • Python: query a class's parent-class after multiple derivations ("super()" does not work)

    - by henry
    Hi, I have built a class system that uses multiple derivations of a base class (object - class1 - class2 - class3):

        class class1(object):
            def __init__(self):
                print "class1.__init__()"
                object.__init__(self)

        class class2(class1):
            def __init__(self):
                print "class2.__init__()"
                class1.__init__(self)

        class class3(class2):
            def __init__(self):
                print "class3.__init__()"
                class2.__init__(self)

        x = class3()

    It works as expected and prints:

        class3.__init__()
        class2.__init__()
        class1.__init__()

    Now I would like to replace the three lines

        object.__init__(self)
        ...
        class1.__init__(self)
        ...
        class2.__init__(self)

    with something like this:

        currentParentClass().__init__()
        ...
        currentParentClass().__init__()
        ...
        currentParentClass().__init__()

    So basically, I want to create a class system where I don't have to type "classXYZ.doSomething()". As mentioned above, I want to get the "current class's parent class". Replacing the three lines with

        super(type(self), self).__init__()

    does NOT work (it always returns the parent class of the current instance - class2) and will result in an endless loop printing:

        class3.__init__()
        class2.__init__()
        class2.__init__()
        class2.__init__()
        class2.__init__()
        ...

    So is there a function that can give me the current class's parent class? Thank you for your help! Henry

    Edit: @Lennart, ok, maybe I got you wrong, but at the moment I think I didn't describe the problem clearly enough. This example might explain it better. Let's create another child class:

        class class4(class3):
            pass

    Now what happens if we derive an instance from class4?

        y = class4()

    I think it clearly executes

        super(class3, self).__init__()

    which we can translate to this:

        class2.__init__(y)

    This is definitely not the goal (that would be class3.__init__(y)). And since I make lots of parent-class function calls, I do not want to re-implement all of my functions with different base-class names in my super() calls.
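
    For comparison, the standard cooperative pattern still names the defining class explicitly in every super() call, which is exactly the repetition I am trying to get rid of:

        class class1(object):
            def __init__(self):
                print "class1.__init__()"
                super(class1, self).__init__()

        class class2(class1):
            def __init__(self):
                print "class2.__init__()"
                super(class2, self).__init__()

        class class3(class2):
            def __init__(self):
                print "class3.__init__()"
                super(class3, self).__init__()

        x = class3()   # prints class3, class2, class1 in that order

    Because the first argument is the class the method is defined in (not type(self)), a class4 derived from class3 keeps working without an endless loop.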

    Read the article

  • Saving "heavy" figure to PDF in MATLAB - rendering problem

    - by yuk
    I generate a figure in MATLAB with a large number of points (100000+) and want to save it into a PDF file. With the zbuffer or painters renderer I get a very large and slowly-opening file (over 4 MB) - all points are in vector format. Using the OpenGL renderer rasterizes the figure in the PDF, which is OK for the plot but not good for the text labels. The file size is about 150 KB.

    Try this simplified code, for example:

        x = linspace(1,10,100000);
        y = sin(x) + randn(size(x));
        plot(x,y,'.')
        set(gcf,'Renderer','zbuffer')
        print -dpdf -r300 testpdf_zb
        set(gcf,'Renderer','painters')
        print -dpdf -r300 testpdf_pa
        set(gcf,'Renderer','opengl')
        print -dpdf -r300 testpdf_op

    The actual figure is much more complex, with several axes and different types of plots. Is there a way to rasterize the figure but keep the text labels as vectors?

    Another problem with OpenGL is that it does not work in terminal mode (-nosplash -nodesktop) under Mac OS X. It looks like OpenGL is not supported, and I have to use terminal mode for automation. The MATLAB version I run is 2007b on Mac OS X Server 10.4.

    Read the article

  • Spaceship objects

    - by Jam
    I'm trying to make a program which creates a spaceship, and I'm using the status() method to display the ship's name and fuel values. However, it doesn't seem to be working. I think I may have messed something up with the status() method. I'm also trying to make it so that I can change the fuel values, but I don't want to create a new method to do so. I think I've taken a horrible wrong turn somewhere in there. Help please!

        class Ship(object):
            def __init__(self, name="Enterprise", fuel=0):
                self.name = name
                self.fuel = fuel
                print "The spaceship", name, "has arrived!"

            def status():
                print "Name: ", self.name
                print "Fuel level: ", self.fuel
            status = staticmethod(status)

        def main():
            ship1 = Ship(raw_input("What would you like to name this ship?"))
            fuel_level = raw_input("How much fuel does this ship have?")
            if fuel_level < 0:
                self.fuel = 0
            else:
                self.fuel(fuel_level)

            ship2 = Ship(raw_input("What would you like to name this ship?"))
            fuel_level2 = raw_input("How much fuel does this ship have?")
            if fuel_level2 < 0:
                self.fuel = 0
            else:
                self.fuel(fuel_level2)

            ship3 = Ship(raw_input("What would you like to name this ship?"))
            fuel_level3 = raw_input("How much fuel does this ship have?")
            if fuel_level3 < 0:
                self.fuel = 0
            else:
                self.fuel(fuel_level3)

            Ship.status()

        main()
        raw_input("Press enter to exit.")
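
    What I think I am aiming for is something closer to this, with status() as an ordinary instance method and the fuel changed by plain attribute assignment, but I am not sure that is the intended design (the ship name here is just an example):

        class Ship(object):
            def __init__(self, name="Enterprise", fuel=0):
                self.name = name
                self.fuel = fuel
                print "The spaceship", name, "has arrived!"

            def status(self):
                # instance method: reads this particular ship's attributes
                print "Name: ", self.name
                print "Fuel level: ", self.fuel

        ship1 = Ship("Example Ship")
        ship1.fuel = 100    # change the fuel by assigning to the attribute
        ship1.status()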

    Read the article

  • Using pam_python in a script running with mod_python

    - by markys
    Hi! I would like to develop a web interface to allow users of a Linux system to do certain tasks related to their account. I decided to write the backend of the site using Python and mod_python on Apache. To authenticate the users, I thought I could use python_pam to query the PAM service. I adapted the example bundled with the module and got this:

        # out is the output stream used to print debug
        def auth(username, password, out):

            def pam_conv(aut, query_list, user_data):
                out.write("Query list: " + str(query_list) + "\n")
                # List to store the responses to the different queries
                resp = []
                for item in query_list:
                    query, qtype = item
                    # If PAM asks for an input, give the password
                    if qtype == PAM.PAM_PROMPT_ECHO_ON or qtype == PAM.PAM_PROMPT_ECHO_OFF:
                        resp.append((str(password), 0))
                    elif qtype == PAM.PAM_PROMPT_ERROR_MSG or qtype == PAM.PAM_PROMPT_TEXT_INFO:
                        resp.append(('', 0))
                out.write("Our response: " + str(resp) + "\n")
                return resp

            # If username or password is undefined, fail
            if username is None or password is None:
                return False

            service = 'login'
            pam_ = PAM.pam()
            pam_.start(service)
            # Set the username
            pam_.set_item(PAM.PAM_USER, str(username))
            # Set the conversation callback
            pam_.set_item(PAM.PAM_CONV, pam_conv)

            try:
                pam_.authenticate()
                pam_.acct_mgmt()
            except PAM.error, resp:
                out.write("Error: " + str(resp) + "\n")
                return False
            except:
                return False

            # If we get here, the authentication worked
            return True

    My problem is that this function does not behave the same whether I use it in a simple script or through mod_python. To illustrate this, I wrote these simple test cases:

        my_username = "markys"
        my_good_password = "lalala"
        my_bad_password = "lololo"

        def handler(req):
            req.content_type = "text/plain"
            req.write("1- " + str(auth(my_username, my_good_password, req)) + "\n")
            req.write("2- " + str(auth(my_username, my_bad_password, req)) + "\n")
            return apache.OK

        if __name__ == "__main__":
            print "1- " + str(auth(my_username, my_good_password, sys.__stdout__))
            print "2- " + str(auth(my_username, my_bad_password, sys.__stdout__))

    The result from the script is:

        Query list: [('Password: ', 1)]
        Our response: [('lalala', 0)]
        1- True
        Query list: [('Password: ', 1)]
        Our response: [('lololo', 0)]
        Error: ('Authentication failure', 7)
        2- False

    but the result from mod_python is:

        Query list: [('Password: ', 1)]
        Our response: [('lalala', 0)]
        Error: ('Authentication failure', 7)
        1- False
        Query list: [('Password: ', 1)]
        Our response: [('lololo', 0)]
        Error: ('Authentication failure', 7)
        2- False

    I don't understand why the auth function does not return the same value given the same inputs. Any idea where I got this wrong? Here is the original script, if that could help you. Thanks a lot!

    Read the article

  • HTTP: can GET and POST requests from a same machine come from different IPs?

    - by NoozNooz42
    I'm pretty sure I remember reading -- but cannot find back the links anymore -- about this: on some ISPs (including at least one big ISP in the U.S.) it is possible for a user's GET and POST requests to appear to come from different IPs. (Note that this is totally programming related, and I'll give an example below.)

    I'm not talking about having your IP address dynamically change between two requests. I'm talking about this:

        IP 1: 123.45.67.89
        IP 2: 101.22.33.44

    The same user makes a GET, then a POST, then a GET again, then a POST again, and the servers see this:

        - GET from IP 1
        - POST from IP 2
        - GET from IP 1
        - POST from IP 2

    So although it's the same user, the webserver sees different IPs for the GET and the POSTs. Surely, given that HTTP is a stateless protocol, this is perfectly legit, right?

    I'd like to find back the explanation as to how/why certain ISPs have their networks configured such that this may happen. I'm asking because someone asked me to implement the following IP filter, and I'm pretty sure it is fundamentally broken code (wreaking havoc for the users of at least one major American ISP). Here's a Java servlet filter that is supposed to protect against some attacks. The reasoning is that:

        "For any session filter checks that IP address in the request is the same that
        was used when session was created. So in this case session ID could not be
        stolen for forming fake sessions."

        http://www.servletsuite.com/servlets/protectsessionsflt.htm

    However, I'm pretty sure this is inherently broken because there are ISPs where you may see GET and POST coming from different IPs. Any info on this subject is very welcome.

    Read the article

  • Return a list of imported Python modules used in a script?

    - by Jono Bacon
    Hi All, I am writing a program that categorizes a list of Python files by which modules they import. As such, I need to scan the collection of .py files and return a list of which modules they import. As an example, if one of the files has the following lines:

        import os
        import sys, gtk

    I would like it to return:

        ["os", "sys", "gtk"]

    I played with modulefinder and wrote:

        from modulefinder import ModuleFinder

        finder = ModuleFinder()
        finder.run_script('testscript.py')

        print 'Loaded modules:'
        for name, mod in finder.modules.iteritems():
            print '%s ' % name,

    but this returns more than just the modules used in the script. As an example, for a script which merely has:

        import os
        print os.getenv('USERNAME')

    the modules returned from the ModuleFinder script are:

        tokenize heapq __future__ copy_reg sre_compile _collections cStringIO _sre
        functools random cPickle __builtin__ subprocess cmd gc __main__ operator array
        select _heapq _threading_local abc _bisect posixpath _random os2emxpath tempfile
        errno pprint binascii token sre_constants re _abcoll collections ntpath threading
        opcode _struct _warnings math shlex fcntl genericpath stat string warnings
        UserDict inspect repr struct sys pwd imp getopt readline copy bdb types strop
        _functools keyword thread StringIO bisect pickle signal traceback difflib marshal
        linecache itertools dummy_thread posix doctest unittest time sre_parse os pdb dis

    ...whereas I just want it to return 'os', as that was the module used in the script. Can anyone help me achieve this?
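
    At the moment I am considering just parsing the source for import statements instead, something along these lines (a sketch that assumes Python 2.6+ for the ast module):

        import ast

        def imported_modules(path):
            # Walk the parsed source and collect only names that appear in
            # `import ...` and `from ... import ...` statements.
            tree = ast.parse(open(path).read())
            mods = []
            for node in ast.walk(tree):
                if isinstance(node, ast.Import):
                    mods.extend(alias.name for alias in node.names)
                elif isinstance(node, ast.ImportFrom) and node.module:
                    mods.append(node.module)
            return mods

    For the two-line example above this returns ['os', 'sys', 'gtk'], without pulling in anything those modules themselves import.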

    Read the article

  • use proxy in python to fetch a webpage

    - by carmao
    I am trying to write a function in Python to use a public anonymous proxy and fetch a webpage, but I got a rather strange error. The code (I have Python 2.4):

        import urllib2

        def get_source_html_proxy(url, pip, timeout):
            # timeout in seconds (maximum number of seconds willing for the code to wait
            # in case there is a proxy that is not working, then it gives up)
            proxy_handler = urllib2.ProxyHandler({'http': pip})
            opener = urllib2.build_opener(proxy_handler)
            opener.addheaders = [('User-agent', 'Mozilla/5.0')]
            urllib2.install_opener(opener)
            req = urllib2.Request(url)
            sock = urllib2.urlopen(req)

            timp = 0  # a counter that is going to measure the time until the result
                      # (webpage) is returned
            while 1:
                data = sock.read(1024)
                timp = timp + 1
                if len(data) < 1024:
                    break
                timpLimita = 50000000 * timeout
                if timp == timpLimita:  # 5 millions is about 1 second
                    break
            if timp == timpLimita:
                print IPul + ": Connection is working, but the webpage is fetched in more than 50 seconds. This proxy returns the following IP: " + str(data)
                return str(data)
            else:
                print "This proxy " + IPul + "= good proxy. " + "It returns the following IP: " + str(data)
                return str(data)

        # Now, I call the function to test it for one single proxy (IP:port) that does
        # not support user and password (a public high anonymity proxy).
        # (I put a proxy that I know is working - slow, but is working)
        rez = get_source_html_proxy("http://www.whatismyip.com/automation/n09230945.asp", "93.84.221.248:3128", 50)
        print rez

    The error:

        Traceback (most recent call last):
          File "./public_html/cgi-bin/teste5.py", line 43, in ?
            rez=get_source_html_proxy("http://www.whatismyip.com/automation/n09230945.asp", "93.84.221.248:3128", 50)
          File "./public_html/cgi-bin/teste5.py", line 18, in get_source_html_proxy
            sock=urllib2.urlopen(req)
          File "/usr/lib64/python2.4/urllib2.py", line 130, in urlopen
            return _opener.open(url, data)
          File "/usr/lib64/python2.4/urllib2.py", line 358, in open
            response = self._open(req, data)
          File "/usr/lib64/python2.4/urllib2.py", line 376, in _open
            '_open', req)
          File "/usr/lib64/python2.4/urllib2.py", line 337, in _call_chain
            result = func(*args)
          File "/usr/lib64/python2.4/urllib2.py", line 573, in lambda
            r, proxy=url, type=type, meth=self.proxy_open: \
          File "/usr/lib64/python2.4/urllib2.py", line 580, in proxy_open
            if '@' in host:
        TypeError: iterable argument required

    I do not know why the character "@" is an issue (I have no such character in my code. Should I have?) Thanks in advance for your valuable help.

    Read the article
