Search Results

Search found 9228 results on 370 pages for 'hg import'.

Page 321/370

  • How to determine subprocess.Popen() failed when shell=True

    - by Malcolm
    Windows version of Python 2.6.4: Is there any way to determine if subprocess.Popen() fails when using shell=True?

    Popen() fails as expected when shell=False:

        >>> import subprocess
        >>> p = subprocess.Popen( 'Nonsense.application', shell=False )
        Traceback (most recent call last):
          File "<pyshell#258>", line 1, in <module>
            p = subprocess.Popen( 'Nonsense.application' )
          File "C:\Python26\lib\subprocess.py", line 621, in __init__
            errread, errwrite)
          File "C:\Python26\lib\subprocess.py", line 830, in _execute_child
            startupinfo)
        WindowsError: [Error 2] The system cannot find the file specified

    But when shell=True, there appears to be no way to determine if a Popen() call was successful or not:

        >>> p = subprocess.Popen( 'Nonsense.application', shell=True )
        >>> p
        <subprocess.Popen object at 0x0275FF90>
        >>> p.pid
        6620
        >>> p.returncode
        >>>

    Ideas appreciated. Regards, Malcolm
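    A possible workaround, not part of the original question: with shell=True the Popen() call only starts the shell, so the failure shows up later as the shell's exit status. Waiting for the process and checking returncode is a minimal sketch of that idea (the exact exit code and error text come from cmd.exe, so treat them as assumptions):

        import subprocess

        p = subprocess.Popen('Nonsense.application', shell=True,
                             stdout=subprocess.PIPE, stderr=subprocess.PIPE)
        out, err = p.communicate()   # blocks until the shell exits
        if p.returncode != 0:        # cmd.exe exits non-zero when the command is not found
            print 'command failed:', p.returncode, err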


  • How can I share variables between a base class and subclass in Perl?

    - by Jonathan
    I have a base class like this:

        package MyClass;
        use vars qw/$ME list of vars/;
        use Exporter;
        @ISA = qw/Exporter/;
        @EXPORT_OK = qw/ many variables & functions/;
        %EXPORT_TAGS = (all => \@EXPORT_OK );

        sub my_method { }
        sub other_methods etc { }

        --- more code---

    I want to subclass MyClass, but only for one method.

        package MySubclass;
        use MyClass;
        use vars qw/@ISA/;
        @ISA = 'MyClass';

        sub my_method { --- new method }

    And I want to call this MySubclass like I would the original MyClass, and still have access to all of the variables and functions from Exporter. However I am having problems getting the Exporter variables from the original class, MyClass, to export correctly. Do I need to run Exporter again inside the subclass? That seems redundant and unclear.

    Example file:

        #!/usr/bin/perl
        use MySubclass /$ME/;

        -- rest of code

    But I get compile errors when I try to import the $ME variable. Any suggestions?


  • Fast JSON serialization (and comparison with Pickle) for cluster computing in Python?

    - by user248237
    I have a set of data points, each described by a dictionary. The processing of each data point is independent and I submit each one as a separate job to a cluster. Each data point has a unique name, and my cluster submission wrapper simply calls a script that takes a data point's name and a file describing all the data points. That script then accesses the data point from the file and performs the computation.

    Since each job has to load the set of all points only to retrieve the point to be run, I wanted to optimize this step by serializing the file describing the set of points into an easily retrievable format. I tried using jsonpickle, with the following method, to serialize a dictionary describing all the data points to file:

        def json_serialize(obj, filename, use_jsonpickle=True):
            f = open(filename, 'w')
            if use_jsonpickle:
                import jsonpickle
                json_obj = jsonpickle.encode(obj)
                f.write(json_obj)
            else:
                simplejson.dump(obj, f, indent=1)
            f.close()

    The dictionary contains very simple objects (lists, strings, floats, etc.) and has a total of 54,000 keys. The JSON file is ~20 megabytes in size. It takes ~20 seconds to load this file into memory, which seems very slow to me.

    I switched to using pickle with the same exact object, and found that it generates a file that's about 7.8 megabytes in size and can be loaded in ~1-2 seconds. This is a significant improvement, but it still seems like loading of a small object (less than 100,000 entries) should be faster. Aside from that, pickle is not human readable, which was the big advantage of JSON for me.

    Is there a way to use JSON to get similar or better speed-ups? If not, do you have other ideas on structuring this? (Is the right solution to simply "slice" the file describing each event into a separate file and pass that on to the script that runs a data point in a cluster job? It seems like that could lead to a proliferation of files.) Thanks.
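    A small sketch of one further speed-up, not taken from the original post (the helper names are made up): the default pickle protocol is the slow text-based one, so explicitly using cPickle with binary protocol 2 usually cuts both the file size and the load time; a readable JSON copy can still be kept alongside it.

        import cPickle as pickle

        def pickle_serialize(obj, filename):
            # binary protocol 2 is much faster and more compact than the default protocol 0
            f = open(filename, 'wb')
            pickle.dump(obj, f, protocol=2)
            f.close()

        def pickle_load(filename):
            f = open(filename, 'rb')
            try:
                return pickle.load(f)
            finally:
                f.close()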


  • Functional way to get a matrix from text

    - by Elazar Leibovich
    I'm trying to solve some Google Code Jam problems, where an input matrix is typically given in this form:

        2 3                  #matrix dimensions
        1 2 3 4 5 6 7 8 9    # all 3 elements in the first row
        2 3 4 5 6 7 8 9 0    # each element is composed of three integers

    where each element of the matrix is composed of, say, three integers. So this example should be converted to

        #!scala
        Array(
          Array(A(1,2,3),A(4,5,6),A(7,8,9)),
          Array(A(2,3,4),A(5,6,7),A(8,9,0)),
        )

    An imperative solution would be of the form

        #!python
        input = """2 3
        1 2 3 4 5 6 7 8 9
        2 3 4 5 6 7 8 9 0
        """
        lines = input.split('\n')
        print lines[0]
        m,n = (int(x) for x in lines[0].split())
        array = []
        row = []
        A = []
        for line in lines[1:]:
            for elt in line.split():
                A.append(elt)
                if len(A)== 3:
                    row.append(A)
                    A = []
            array.append(row)
            row = []
        from pprint import pprint
        pprint(array)

    A functional solution I've thought of is

        #!scala
        def splitList[A](l:List[A],i:Int):List[List[A]] = {
          if (l.isEmpty) return List[List[A]]()
          val (head,tail) = l.splitAt(i)
          return head :: splitList(tail,i)
        }

        def readMatrix(src:Iterator[String]):Array[Array[TrafficLight]] = {
          val Array(x,y) = src.next.split(" +").map(_.trim.toInt)
          val mat = src.take(x).toList.map(_.split(" ").
              map(_.trim.toInt)).
              map(a => splitList(a.toList,3).
                map(b => TrafficLight(b(0),b(1),b(2))
              ).toArray
            ).toArray
          return mat
        }

    But I really feel it's the wrong way to go because:

    - I'm using the functional List structure for each line, and then convert it to an array.
    - The whole code seems much less efficient.
    - I find it longer, less elegant and much less readable than the Python solution. It is harder to tell which of the map functions operates on what, as they all use the same semantics.

    What is the right functional way to do that?
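    Not from the original question, but the same grouping idea can be written functionally in Python (to match the question's second snippet) with the standard "grouper" idiom of zipping three references to one iterator; the helper name read_matrix is made up for this sketch:

        def read_matrix(lines):
            it = iter(lines)
            rows, cols = (int(x) for x in next(it).split())
            def triples(tokens):
                flat = iter(tokens)
                return zip(flat, flat, flat)   # group the flat token stream in threes
            return [triples(int(x) for x in next(it).split()) for _ in range(rows)]

        matrix = read_matrix(input.splitlines())
        # -> [[(1, 2, 3), (4, 5, 6), (7, 8, 9)], [(2, 3, 4), (5, 6, 7), (8, 9, 0)]]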


  • CharField values disappearing after save (readonly field)

    - by jamida
    I'm implementing a simple "grade book" application where the teacher is able to update the grades without being allowed to change the students' names (at least not on the update-grade page). To do this I'm using one of the read-only tricks, the simplest one. The problem is that after the SUBMIT the view is re-displayed with blank values for the students. I'd like the students' names to re-appear. Below is the simplest example that exhibits this problem. (This is poor DB design, I know; I've extracted just the relevant parts of the code to showcase the problem. In the real example, student is in its own table but the problem still exists there.)

    models.py:

        class Grade1(models.Model):
            student = models.CharField(max_length=50, unique=True)
            finalGrade = models.CharField(max_length=3)

        class Grade1OForm(ModelForm):
            student = forms.CharField(max_length=50, required=False)

            def __init__(self, *args, **kwargs):
                super(Grade1OForm,self).__init__(*args, **kwargs)
                instance = getattr(self, 'instance', None)
                if instance and instance.id:
                    self.fields['student'].widget.attrs['readonly'] = True
                    self.fields['student'].widget.attrs['disabled'] = 'disabled'

            def clean_student(self):
                instance = getattr(self,'instance',None)
                if instance:
                    return instance.student
                else:
                    return self.cleaned_data.get('student',None)

            class Meta:
                model=Grade1

    views.py:

        from django.forms.models import modelformset_factory

        def modifyAllGrades1(request):
            gradeFormSetFactory = modelformset_factory(Grade1, form=Grade1OForm, extra=0)
            studentQueryset = Grade1.objects.all()
            if request.method=='POST':
                myGradeFormSet = gradeFormSetFactory(request.POST, queryset=studentQueryset)
                if myGradeFormSet.is_valid():
                    myGradeFormSet.save()
                    info = "successfully modified"
            else:
                myGradeFormSet = gradeFormSetFactory(queryset=studentQueryset)
            return render_to_response('grades/modifyAllGrades.html',locals())

    template:

        <p>{{ info }}</p>
        <form method="POST" action="">
          <table>
            {{ myGradeFormSet.management_form }}
            {% for myform in myGradeFormSet.forms %}
              {# myform.as_table #}
              <tr>
                {% for field in myform %}
                  <td>
                    {{ field }} {{ field.errors }}
                  </td>
                {% endfor %}
              </tr>
            {% endfor %}
          </table>
          <input type="submit" value="Submit">
        </form>
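    One detail worth noting, offered as a guess rather than a confirmed answer: browsers never submit disabled inputs, so the bound formset that is re-rendered after the POST has no student values to display. A minimal sketch of a workaround is to rebuild the formset from the database once the save succeeds, reusing the question's own names:

        def modifyAllGrades1(request):
            gradeFormSetFactory = modelformset_factory(Grade1, form=Grade1OForm, extra=0)
            studentQueryset = Grade1.objects.all()
            if request.method == 'POST':
                myGradeFormSet = gradeFormSetFactory(request.POST, queryset=studentQueryset)
                if myGradeFormSet.is_valid():
                    myGradeFormSet.save()
                    info = "successfully modified"
                    # re-query so the read-only student names come from the DB,
                    # not from the (empty) POST data of the disabled inputs
                    myGradeFormSet = gradeFormSetFactory(queryset=Grade1.objects.all())
            else:
                myGradeFormSet = gradeFormSetFactory(queryset=studentQueryset)
            return render_to_response('grades/modifyAllGrades.html', locals())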


  • Is it possible to convert a 40-character SHA1 hash to a 20-character SHA1 hash?

    - by ewitch
    My problem is a bit hairy, and I may be asking the wrong questions, so please bear with me...

    I have a legacy MySQL database which stores the user passwords and salts for a membership system. Both of these values have been hashed using the Ruby framework, roughly like this:

        hashedsalt = Digest::SHA1.hexdigest("--#{Time.now.to_s}--#{login}--")
        hashedpassword = Digest::SHA1.hexdigest("#{hashedsalt}:#{password}")

    So both values are stored as 40-character strings (varchar(40)) in MySQL.

    Now I need to import all of these users into the ASP.NET membership framework for a new web site, which uses a SQL Server database. It is my understanding that, the way I have ASP.NET membership configured, the user passwords and salts are also stored in the membership database (in table aspnet_Membership) as SHA1 hashes, which are then Base64 encoded (see here for details) and stored as nvarchar(128) data. But from the length of the Base64-encoded strings that are stored (28 characters) it seems that the SHA1 hashes that ASP.NET membership generates are only 20 characters long, rather than 40. From some other reading I have been doing, I am thinking this has to do with the number of bits per character/character set/encoding or something related.

    So is there some way to convert the 40-character SHA1 hashes to 20-character hashes which I can then transfer to the new ASP.NET membership data table? I'm pretty familiar with ASP.NET membership by now, but I feel like I'm just missing this one piece. However, it may also be known that SHA1 in Ruby and SHA1 in .NET are incompatible, so I'm fighting a losing battle...

    Thanks in advance for any insight.
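    A note that is not in the original question but may explain the length difference: a SHA1 digest is always 20 bytes; the Ruby code stores it as 40 hexadecimal characters, while ASP.NET membership Base64-encodes the 20 raw bytes, which gives 28 characters. Converting between the two representations is just a re-encoding, sketched here in Python with an example digest:

        import base64
        import binascii

        hex_hash = 'a94a8fe5ccb19ba61c4c0873d391e987982fbbd3'   # 40 hex characters from MySQL
        raw_bytes = binascii.unhexlify(hex_hash)                 # the underlying 20 bytes
        b64_hash = base64.b64encode(raw_bytes)                   # 28-character Base64 string

    Whether the converted hashes actually validate still depends on both systems hashing the same salt/password string with the same text encoding, which this sketch does not address.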


  • manipulating textbox value

    - by chameios
    Hello all, I am new to the programming world of dojo and web applications. I am trying to accomplish a task where I want to manipulate the textbox value with some text. I tried everything, including some code from dojocampus, but even this code does not do anything. I have also tried to create an instance with dojo.widget.byId and dijit.byId and then tried instance.value = 'newtext' and everything else that I could find, but for some reason the textbox is not updating. Please help me.

        <html>
        <head>
        <title>Dojo example</title>
        <style type="text/css">
          @import "pathtodojo/dijit/themes/nihilo/nihilo.css";
        </style>
        <style type="text/css">
        </style>
        <script type="text/javascript" src="pathtodojo/dojo/dojo.js"
                djConfig="parseOnLoad:true, isDebug: true"></script>
        <script>
          dojo.require("dijit.form.TextBox");

          function init() {
            var box0 = dijit.byId("value0Box");
            var box1 = dijit.byId("value1Box");
            box1.attr("value", box0.attr("value") + " modified");
            dojo.connect(box0, "onChange", function(){
              box1.attr("value", box0.attr("value") + " modified");
            });
          }
          dojo.addOnLoad(init);
        </script>
        <body class="nihilo">
          A textbox with a value:
          <input id="value0Box" dojoType="dijit.form.TextBox" value="Some value" intermediateChanges="true"></input>
          <br>
          A textbox set with a value from the above textbox:
          <input id="value1Box" dojoType="dijit.form.TextBox"></input>
          <br>
        </body>
        </html>

    regards
    C


  • Django's post_save signal behaves weirdly with models using multi-table inheritance

    - by hekevintran
    I am noticing an odd behavior in the way Django's post_save signal works when using a model that has multi-table inheritance.

    I have these two models:

        class Animal(models.Model):
            category = models.CharField(max_length=20)

        class Dog(Animal):
            color = models.CharField(max_length=10)

    I have a post-save callback called echo_category:

        def echo_category(sender, **kwargs):
            print "category: '%s'" % kwargs['instance'].category

        post_save.connect(echo_category, sender=Dog)

    I have this fixture:

        [
            {
                "pk": 1,
                "model": "animal.animal",
                "fields": { "category": "omnivore" }
            },
            {
                "pk": 1,
                "model": "animal.dog",
                "fields": { "color": "brown" }
            }
        ]

    In every part of the program except the post_save callback the following is true:

        from animal.models import Dog
        Dog.objects.get(pk=1).category == u'omnivore' # True

    When I run syncdb and the fixture is installed, the echo_category function is run. The output from syncdb is:

        $ python manage.py syncdb --noinput
        Installing json fixture 'initial_data' from '~/my_proj/animal/fixtures'.
        category: ''
        Installed 2 object(s) from 1 fixture(s)

    The weird thing here is that the dog object's category attribute is an empty string. Why is it not 'omnivore' like it is everywhere else? As a temporary (hopefully) workaround I reload the object from the database in the post_save callback:

        def echo_category(sender, **kwargs):
            instance = kwargs['instance']
            instance = sender.objects.get(pk=instance.pk)
            print "category: '%s'" % instance.category

        post_save.connect(echo_category, sender=Dog)

    This works, but it is not something I like because I must remember to do it when the model inherits from another model, and it must hit the database again. The other weird thing is that I must use instance.pk to get the primary key; the normal 'id' attribute does not work (I cannot use instance.id). I do not know why this is. Maybe this is related to the reason why the category attribute is not doing the right thing?
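    A guess at what is going on, not taken from the original post: fixture loading saves each table's row "raw", one model at a time, so the Dog instance handed to post_save only has the dog table's own columns populated at that moment. If your Django version passes the raw flag to signal handlers, a minimal sketch of a guard looks like this:

        def echo_category(sender, **kwargs):
            if kwargs.get('raw', False):
                # raw saves come from fixture loading; inherited fields such as
                # category are not populated on the instance at this point
                return
            print "category: '%s'" % kwargs['instance'].category

        post_save.connect(echo_category, sender=Dog)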


  • Doctesting functions that receive and display user input - Python (tearing my hair out)

    - by GlenCrawford
    Howdy! I am currently writing a small application with Python (3.1), and like a good little boy, I am doctesting as I go. However, I've come across a method that I can't seem to doctest. It contains an input(), and because of that, I'm not entirely sure what to place in the "expecting" portion of the doctest. Example code to illustrate my problem follows:

        """
        >>> getFiveNums()
        Howdy. Please enter five numbers, hit <enter> after each one
        Please type in a number:
        Please type in a number:
        Please type in a number:
        Please type in a number:
        Please type in a number:
        """

        import doctest

        numbers = list() # stores 5 user-entered numbers (strings, for now) in a list

        def getFiveNums():
            print("Howdy. Please enter five numbers, hit <enter> after each one")
            for i in range(5):
                newNum = input("Please type in a number:")
                numbers.append(newNum)
            print("Here are your numbers: ", numbers)

        if __name__ == "__main__":
            doctest.testmod(verbose=True)

    When running the doctests, the program stops executing immediately after printing the "Expecting" section, waits for me to enter five numbers one after another (without prompts), and then continues. I don't know what, if anything, I can place in the Expecting section of my doctest to be able to test a method that receives and then displays user input. So my question (finally) is: is this function doctestable?
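    Not part of the original question, but one common workaround (sketched here with made-up names) is to swap input() for a canned sequence while the test runs, so nothing interactive is left for the test machinery to stumble over:

        import builtins

        def run_getFiveNums_with_fake_input():
            # feed a fixed sequence of answers instead of prompting the user
            fake_answers = iter(['1', '2', '3', '4', '5'])
            real_input = builtins.input
            builtins.input = lambda prompt='': next(fake_answers)
            try:
                getFiveNums()
            finally:
                builtins.input = real_input   # always restore the real input()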


  • How to use EffectUpdate?

    - by coma
    So, this is my sample:

        <?xml version="1.0" encoding="utf-8"?>
        <s:Application xmlns:fx="http://ns.adobe.com/mxml/2009"
                       xmlns:mx="library://ns.adobe.com/flex/mx"
                       xmlns:s="library://ns.adobe.com/flex/spark">

            <fx:Style>
                @namespace s "library://ns.adobe.com/flex/spark";
                @namespace mx "library://ns.adobe.com/flex/mx";

                s|Application {
                    background-color: #333333;
                }

                #info {
                    padding-top: 5;
                    padding-right: 5;
                    padding-bottom: 5;
                    padding-left: 5;
                    font-size: 22;
                    background-color: #ffffff;
                }

                #plane {
                    corner-radius: 8;
                    background-color: #1c1c1c;
                }
            </fx:Style>

            <fx:Script>
                import mx.events.*;

                private var steps:uint = 0;

                private function effectUpdateHandler(event:EffectEvent):void {
                    info.text = "rotationY: " + plane.rotationY + " ; steps: " + steps;
                    steps++;
                }
            </fx:Script>

            <fx:Declarations>
                <s:Rotate3D id="spin"
                            target="{plane}"
                            autoCenterTransform="true"
                            angleYFrom="0"
                            angleYTo="360"
                            repeatCount="10"
                            effectUpdate="effectUpdateHandler(event)" />
            </fx:Declarations>

            <s:VGroup horizontalAlign="center" gap="50" width="100%">
                <s:Label id="info" width="100%"/>
                <s:BorderContainer id="plane" width="200" height="200" click="spin.play()"/>
            </s:VGroup>

        </s:Application>

    and it doesn't make me happy.


  • How to deploy EJB on server?

    - by shekhar
    Hi, I have been learning EJB3 for the last few days. I have many questions regarding EJB, application servers and deployment of EJB. To start with, I have created one simple hello-world stateless session bean, but I don't know how to deploy it on a server. It has a single bean class, a bean interface and one servlet client. I have used Eclipse to develop this project. None of the books that I read gives step-by-step details about how to put an EJB on a server and how to access those beans.

    I have the JBoss 6 server and I also have the JEE bundle downloaded from the Sun website. Does this JEE bundle contain the Glassfish server, or do I need to download it separately? Can anyone please give me step-by-step details of how to put my bean and its client on a server (JBoss or JEE)? And why do we need to include the bean interface class in the EJB client code? I mean, either we need to keep the client and bean in the same package, or, if we keep them in separate packages, we need to import the bean interfaces in the client code. Am I right?

    Thanks and Regards, Chandrashekhar


  • xmlrpc client call in python does not come back

    - by Jack Ha
    Using Python 2.6.4, Windows.

    With the following script I want to test a certain XML-RPC server. I call a non-existent function and hope for a traceback with an error. Instead, the function does not return. What could be the cause?

        import xmlrpclib
        s = xmlrpclib.Server("http://127.0.0.1:80", verbose=True)
        s.functioncall()

    The output is:

        send: 'POST /RPC2 HTTP/1.0\r\nHost: 127.0.0.1:80\r\nUser-Agent: xmlrpclib.py/1.0.1 (by www.pythonware.com)\r\nContent-Type: text/xml\r\nContent-Length: 106\r\n\r\n'
        send: "<?xml version='1.0'?>\n<methodCall>\n<methodName>functioncall</methodName>\n<params>\n</params>\n</methodCall>\n"
        reply: 'HTTP/1.1 200 OK\r\n'
        header: Content-Type: text/xml
        header: Cache-Control: no-cache
        header: Content-Length: 376
        header: Date: Tue, 30 Mar 2010 13:27:21 GMT
        body: '<?xml version="1.0"?>\r\n<methodResponse>\r\n<fault>\r\n<value>\r\n<struct>\r\n<member>\r\n<name>faultCode</name>\r\n<value><i4>1</i4></value>\r\n</member>\r\n<member>\r\n<name>faultString</name>\r\n<value><string>PVSS00ctrl (2), 2010.03.30 15:27:21.395, CTRL, SEVERE, 72, Function not defined, functioncall , , \n</string></value>\r\n</member>\r\n</struct>\r\n</value>\r\n</fault>\r\n</methodResponse>\r\n'

    (Here the program hangs and does not return until I kill the server.)

    Edit: the server is written in C++, using its own XML-RPC library.


  • cx_Oracle makes subprocess give OSError

    - by Shrikant Sharat
    I am trying to use the cx_Oracle module with Python 2.6.6 on Ubuntu Maverick, with Oracle 11gR2 Enterprise Edition. I am able to connect to my Oracle DB just fine, but once I do that, the subprocess module does not work anymore. Here is an iPython session that reproduces the problem...

        In [1]: import subprocess as sp, cx_Oracle as dbh

        In [2]: sp.call(['whoami'])
        sharat
        Out[2]: 0

        In [3]: con = dbh.connect('system', 'password')

        In [4]: con.close()

        In [5]: sp.call(['whomai'])
        ---------------------------------------------------------------------------
        OSError                                   Traceback (most recent call last)
        /home/sharat/desk/calypso-launcher/<ipython console> in <module>()

        /usr/lib/python2.6/subprocess.pyc in call(*popenargs, **kwargs)
            468     retcode = call(["ls", "-l"])
            469     """
        --> 470     return Popen(*popenargs, **kwargs).wait()
            471
            472

        /usr/lib/python2.6/subprocess.pyc in __init__(self, args, bufsize, executable, stdin, stdout, stderr, preexec_fn, close_fds, shell, cwd, env, universal_newlines, startupinfo, creationflags)
            621                                 p2cread, p2cwrite,
            622                                 c2pread, c2pwrite,
        --> 623                                 errread, errwrite)
            624
            625         if mswindows:

        /usr/lib/python2.6/subprocess.pyc in _execute_child(self, args, executable, preexec_fn, close_fds, cwd, env, universal_newlines, startupinfo, creationflags, shell, p2cread, p2cwrite, c2pread, c2pwrite, errread, errwrite)
           1134
           1135             if data != "":
        -> 1136                 _eintr_retry_call(os.waitpid, self.pid, 0)
           1137                 child_exception = pickle.loads(data)
           1138                 for fd in (p2cwrite, c2pread, errread):

        /usr/lib/python2.6/subprocess.pyc in _eintr_retry_call(func, *args)
            453     while True:
            454         try:
        --> 455             return func(*args)
            456         except OSError, e:
            457             if e.errno == errno.EINTR:

        OSError: [Errno 10] No child processes

    So, the call to sp.call works fine before connecting to Oracle, but breaks after that, even though I have closed the connection to the database. Looking around, I found http://bugs.python.org/issue1731717 as somewhat related to this issue, but I am not dealing with threads here. I don't know if cx_Oracle is. Moreover, the above issue mentions that adding a time.sleep(1) fixes it, but that didn't help me. Any help appreciated. Thanks.
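    A possible lead, offered as an assumption rather than a verified fix: some client libraries change the process's SIGCHLD handling when they spawn helper processes, and an ignored SIGCHLD makes the os.waitpid() call inside subprocess fail with "No child processes". Restoring the default disposition after connecting is a small sketch of that workaround (Unix-only):

        import signal
        import subprocess as sp
        import cx_Oracle as dbh

        con = dbh.connect('system', 'password')
        con.close()

        # put SIGCHLD back to its default so subprocess can reap its children
        signal.signal(signal.SIGCHLD, signal.SIG_DFL)

        print sp.call(['whoami'])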


  • mysql-python stopped working

    - by MAC
    This is a rather dumb question, but I am looking at a bizarre situation. I am running Fedora and have Python 2.6.5 installed. The other day I installed MySQL-python using yum (because I do not have the setuptools module, so I cannot build it from source). Anyway, yesterday I wrote my entire data access layer in Python and it was running fine; I did test it. Today, however, it gives me:

        ImportError: No module named MySQLdb

    The only thing I ever changed was that I installed Eclipse and PyDev. Any ideas on what went wrong and how I fix it? I tried removing and re-installing MySQL-python, but that did not help. I did the following:

        import sys
        print sys.path

    and it shows me all the paths, which basically pertain to /usr/local/lib/python2.6. However, I was trying to find where the MySQLdb module is installed, and it seems that it's installed in /usr/lib/python2.5/site-packages. Now I have no idea why it got installed there, why it was working earlier, and why it stopped working now. Any ideas on how I should fix it? I did try copying the site-packages folder over to the python2.6 folder, but that did not work. Help!!
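    A quick diagnostic that is not from the original post, but narrows this kind of problem down: print which interpreter is actually running and which site-packages directories it searches. If /usr/local/lib/python2.6 is a separately built Python, a package installed for the system Python will not be visible to it, and copying a compiled extension between Python versions generally does not work.

        import sys

        print sys.executable            # which python binary is running
        print sys.version
        for p in sys.path:
            if 'site-packages' in p:    # where this interpreter looks for packages
                print p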


  • How to implement a python REPL that nicely handles asynchronous output?

    - by andy
    I have a Python-based app that can accept a few commands in a simple read-eval-print loop. I'm using raw_input('> ') to get the input. On Unix-based systems, I also import readline to make things behave a little better. All this is working fine.

    The problem is that there are asynchronous events coming in, and I'd like to print output as soon as they happen. Unfortunately, this makes things look ugly. The '> ' prompt doesn't show up again after the output, and if the user is halfway through typing something, it chops their text in half. It should probably redraw the user's text-in-progress after printing something.

    This seems like it must be a solved problem. What's the proper way to do this? Also note that some of my users are Windows-based. TIA

    Edit: The accepted answer works under Unixy platforms (when the readline module is available), but if anyone knows how to make this work under Windows, it would be much appreciated!
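    For reference, a rough sketch of the usual readline-based approach (Unix-only, and the escape codes assume a VT100-style terminal): wipe the current line, print the asynchronous message, then redraw the prompt plus whatever the user had already typed.

        import sys
        import readline

        def async_print(message, prompt='> '):
            # erase the in-progress line, print the message on its own line,
            # then restore the prompt and the user's partial input
            sys.stdout.write('\r\x1b[K')
            sys.stdout.write(message + '\n')
            sys.stdout.write(prompt + readline.get_line_buffer())
            sys.stdout.flush()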


  • Project Euler (P14): recursion problems

    - by sean mcdaid
    Hi, I'm doing the Collatz sequence problem in Project Euler (problem 14). My code works with numbers below 100000, but with bigger numbers I get a stack overflow error. Is there a way I can refactor the code to use tail recursion, or otherwise prevent the stack overflow? The code is below:

        import java.util.*;

        public class v4 {

            // use a HashMap to store computed number, and chain size
            static HashMap<Integer, Integer> hm = new HashMap<Integer, Integer>();

            public static void main(String[] args) {
                hm.put(1, 1);
                final int CEILING_MAX = Integer.parseInt(args[0]);
                int len = 1;
                int max_count = 1;
                int max_seed = 1;
                for (int i = 2; i < CEILING_MAX; i++) {
                    len = seqCount(i);
                    if (len > max_count) {
                        max_count = len;
                        max_seed = i;
                    }
                }
                System.out.println(max_seed + "\t" + max_count);
            }

            // find the size of the hailstone sequence for N
            public static int seqCount(int n) {
                if (hm.get(n) != null) {
                    return hm.get(n);
                }
                if (n == 1) {
                    return 1;
                } else {
                    int length = 1 + seqCount(nextSeq(n));
                    hm.put(n, length);
                    return length;
                }
            }

            // Find the next element in the sequence
            public static int nextSeq(int n) {
                if (n % 2 == 0) {
                    return n / 2;
                } else {
                    return n * 3 + 1;
                }
            }
        }
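    An aside not found in the original question: the JVM does not perform tail-call elimination, so the usual fix is to walk the chain with a loop and memoize on the way back; the intermediate Collatz values can also exceed the range of a 32-bit int, so a 64-bit type is safer in nextSeq. A language-agnostic sketch of the iterative idea, written in Python here for brevity:

        def seq_count(n, cache={1: 1}):
            # follow the chain until a number with a known length is reached,
            # then fill in the lengths for everything passed along the way
            chain = []
            while n not in cache:
                chain.append(n)
                n = n // 2 if n % 2 == 0 else 3 * n + 1
            length = cache[n]
            for m in reversed(chain):
                length += 1
                cache[m] = length
            return length

        print(max(range(2, 1000000), key=seq_count))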


  • Help understanding the Single Responsibility Principle

    - by user204588
    I'm trying to understand what a responsibility actually is, so I want to use an example of something I'm currently working on. I have an app that imports product information from one system into another system. The user of the app gets to choose various settings for which product fields in one system they want to use in the other system.

    So I have a class, say ProductImporter, and its responsibility is to import products. This class is large, probably too large. The methods in this class are complex; one example is getDescription. This method doesn't simply grab a description from the other system, but sets a product description based on various settings chosen by the user. If I were to add a setting and a new way to get a description, this class could change.

    So, is that two responsibilities? Is there one that imports products and one that gets a description? It would seem this way, but then almost every method I have would end up in its own class, and that seems like overkill. I really need a good description of this principle because it's hard for me to completely understand. I don't want needless complexity.


  • Using XmlSerializer deserialize complex type elements are null

    - by Jean Bastos
    I have the following schema:

        <?xml version="1.0"?>
        <xsd:schema xmlns:xsd="http://www.w3.org/2001/XMLSchema"
                    xmlns:tipos="http://www.ginfes.com.br/tipos_v03.xsd"
                    targetNamespace="http://www.ginfes.com.br/servico_consultar_situacao_lote_rps_resposta_v03.xsd"
                    xmlns="http://www.ginfes.com.br/servico_consultar_situacao_lote_rps_resposta_v03.xsd"
                    attributeFormDefault="unqualified"
                    elementFormDefault="qualified">
          <xsd:import schemaLocation="tipos_v03.xsd" namespace="http://www.ginfes.com.br/tipos_v03.xsd" />
          <xsd:element name="ConsultarSituacaoLoteRpsResposta">
            <xsd:complexType>
              <xsd:choice>
                <xsd:sequence>
                  <xsd:element name="NumeroLote" type="tipos:tsNumeroLote" minOccurs="1" maxOccurs="1"/>
                  <xsd:element name="Situacao" type="tipos:tsSituacaoLoteRps" minOccurs="1" maxOccurs="1"/>
                </xsd:sequence>
                <xsd:element ref="tipos:ListaMensagemRetorno" minOccurs="1" maxOccurs="1"/>
              </xsd:choice>
            </xsd:complexType>
          </xsd:element>
        </xsd:schema>

    and the following class:

        [System.CodeDom.Compiler.GeneratedCodeAttribute("xsd", "2.0.50727.3038")]
        [System.SerializableAttribute()]
        [System.Diagnostics.DebuggerStepThroughAttribute()]
        [System.ComponentModel.DesignerCategoryAttribute("code")]
        [System.Xml.Serialization.XmlTypeAttribute(AnonymousType = true, Namespace = "http://www.ginfes.com.br/servico_consultar_situacao_lote_rps_envio_v03.xsd")]
        [System.Xml.Serialization.XmlRootAttribute(Namespace = "http://www.ginfes.com.br/servico_consultar_situacao_lote_rps_envio_v03.xsd", IsNullable = false)]
        public partial class ConsultarSituacaoLoteRpsEnvio
        {
            [System.Xml.Serialization.XmlElementAttribute(Order = 0)]
            public tcIdentificacaoPrestador Prestador { get; set; }

            [System.Xml.Serialization.XmlElementAttribute(Order = 1)]
            public string Protocolo { get; set; }
        }

    I use the following code to deserialize the object:

        XmlSerializer respSerializer = new XmlSerializer(typeof(ConsultarSituacaoLoteRpsResposta));
        StringReader reader = new StringReader(resp);
        ConsultarSituacaoLoteRpsResposta respModel = (ConsultarSituacaoLoteRpsResposta)respSerializer.Deserialize(reader);

    No error occurs, but the properties of the objects are null. Does anyone know what is happening?


  • Create a dynamic project template in VS 2010?

    - by jonhobbs
    This might sound a bit of an odd question, but I know what I want to achieve, I just don't know if it's possible.

    Firstly, I'd like to be able to create a Visual Studio project that the two developers who work with me can use as a basis for all new websites. I want to drop all the common files that we use in there, like jQuery, CMS files etc., so that every time they start a new project they don't have to worry about all of that stuff. I guess to do this I just set up a project and use "File > Export Template"?

    Now, here's the tricky bit... When you open up one of the default templates in VS it asks you a few questions, such as whether you want to use a master page or whether you want to use code-behind. What I would like to do is set up something similar, so that when you use the project template it asks you what version of jQuery you want to use so that it can import the right file, or, for example, it might ask you if you want to include certain user controls that the CMS contains. If you tick the box then the folder with the necessary user controls would be put in your new project for you.

    I know MS can do this, but can a user like me include functionality like that in my own project template? Hope that makes sense.


  • C#+BDE+DBF problem

    - by Drabuna
    I have a huge problem: I have lots of .dbf files (~50,000) and I need to import them into an Oracle database. I open the connection like this:

        OleDbConnection oConn = new OleDbConnection();
        OleDbCommand oCmd = new OleDbCommand();
        oConn.ConnectionString = @"Provider=Microsoft.Jet.OLEDB.4.0;Data Source=" + directory + ";Extended Properties=dBASE IV;User ID=Admin;Password=";
        oCmd.Connection = oConn;
        oCmd.CommandText = @"SELECT * FROM " + tablename;
        try
        {
            oConn.Open();
            resultTable.Load(oCmd.ExecuteReader());
        }
        catch (Exception ex)
        {
            MessageBox.Show(ex.Message);
        }
        oConn.Close();
        oCmd.Dispose();
        oConn.Dispose();

    I read them in a loop, and then insert into Oracle. Everything's fine. BUT: there are about 1,000 files that I can't open; they raise the exception "not a table". So I googled and installed the Borland Database Engine. Now everything works fine... but no. Now, when I'm reading files, on the 1024th file the exception "System resource exceeded" is raised. But I have lots of free resources. When I remove BDE, everything's fine again, with no "System resource exceeded" error, but I can't read all the files. Help please.

    PS: Tried using ODBC but nothing changes.


  • Add new types to Go

    - by nevalu
    I'm trying to add new types that can be managed/used like Go's core types. Creating new types is very useful for validating data before sending it to a non-SQL DBMS, or for checking data from a form.

    Go uses universal constants to define them at the global level:

        var DateType = universe.DefineType("date", universePos, &dateType{})

    In this case they're defined to be called from a package like types:

        var Date = &dateType{}

    I get these errors:

        test.go:58: o.lit undefined (cannot refer to unexported field lit)
        test.go:62: *dateType is not Type missing Pos() token.Position

    The code is based on:

        http://github.com/tav/go/blob/master/src/pkg/exp/eval/value.go
        http://github.com/tav/go/blob/master/src/pkg/exp/eval/type.go

        package main

        import (
            "exp/eval"
            "fmt"
            // "go/token"
        )

        // http://github.com/tav/go/blob/master/src/pkg/exp/eval/value.go

        type DateValue interface {
            eval.Value
            Get(*eval.Thread) string
            Set(*eval.Thread, string)
        }

        /* Date */

        type dateV string

        func (v *dateV) String() string { return fmt.Sprint(*v) }

        func (v *dateV) Assign(t *eval.Thread, o eval.Value) {
            *v = dateV(o.(DateValue).Get(t))
        }

        func (v *dateV) Get(*eval.Thread) string { return string(*v) }

        func (v *dateV) Set(t *eval.Thread, x string) { *v = dateV(x) }

        // http://github.com/tav/go/blob/master/src/pkg/exp/eval/type.go

        type Type interface {
            eval.Type
            // isDate returns true if this is a date type.
            isDate() bool
        }

        /* Common type */

        type commonType struct{}

        // added
        func (commonType) isDate() bool { return false }

        /* Date */

        type dateType struct {
            commonType
        }

        // * It should not be an universal constant
        //var universePos = token.Position{"<universe>", 0, 0, 0} // added
        //var DateType = universe.DefineType("date", universePos, &dateType{})

        var Date = &dateType{}

        func (t *dateType) compat(o Type, conv bool) bool {
            t2, ok := o.lit().(*dateType)
            return ok && t == t2
        }

        func (t *dateType) lit() Type { return t }

        func (t *dateType) isDate() bool { return true }

        func (t *dateType) String() string { return "<date>" }

        func (t *dateType) Zero() eval.Value {
            res := dateV("")
            return &res
        }

        /* Named types */

        /*
        type NamedType struct {
            eval.NamedType
            Def Type
        }
        */

        type NamedType struct { // added
            // token.Position
            Name string
            // Underlying type. If incomplete is true, this will be nil.
            // If incomplete is false and this is still nil, then this is
            // a placeholder type representing an error.
            Def Type
            // True while this type is being defined.
            incomplete bool
            methods    map[string]eval.Method
        }

        func (t *NamedType) isDate() bool { return t.Def.isDate() }

        /* *********************** */

        func main() {
            print("foo")
        }


  • Am I correctly extracting JPEG binary data from this mysqldump?

    - by Glenn
    I have a very old .sql backup of a vBulletin site that I ran around 8 years ago. I am trying to see the file attachments that are stored in the DB. The script below extracts them all, and the output is verified to be JPEG by hex dumping and checking the SOI (start of image) and EOI (end of image) bytes (FFD8 and FFD9, respectively) according to the JPEG wiki page. But when I try to open them with evince, I get this message: "Error interpreting JPEG image file (JPEG datastream contains no image)". What could be going on here?

    Some background info:

    - the sqldump is around 8 years old
    - vBulletin 2.x was the software that stored the info
    - most likely PHP 4
    - most likely MySQL 4.0, possibly even 3.x
    - the column datatype these attachments are stored in is mediumtext

    My Python 3.1 script:

        #!/usr/bin/env python3.1

        import re

        trim_l = re.compile(b"""^INSERT INTO attachment VALUES\('\d+', '\d+', '\d+', '(.+)""")
        trim_r = re.compile(b"""(.+)', '\d+', '\d+'\);$""")
        extractor = re.compile(b"""^(.*(?:\.jpe?g|\.gif|\.bmp))', '(.+)$""")

        with open('attachments.sql', 'rb') as fh:
            for line in fh:
                data = trim_l.findall(line)[0]
                data = trim_r.findall(data)[0]
                data = extractor.findall(data)
                if data:
                    name, data = data[0]
                    try:
                        filename = 'files/%s' % str(name, 'UTF-8')
                        ah = open(filename, 'wb')
                        ah.write(data)
                    except UnicodeDecodeError:
                        continue
                    finally:
                        ah.close()
        fh.close()

    Update: the JPEG wiki page says FF bytes are section markers, with the next byte indicating the section type. I see some that are not listed in the wiki page (specifically, I see a lot of 5C bytes, so FF5C). But the list is of "common markers", so I'm trying to find a more complete list. Any guidance here would also be appreciated.
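    A hunch, not from the original post: 5C is the backslash character, and mysqldump escapes binary column values (backslash, quotes, NUL, newline, carriage return, Ctrl-Z) before writing the INSERT statements, so the extracted bytes may still contain those escape sequences. A rough sketch of undoing the common escapes before writing the file; the exact escape set is an assumption about how the dump was produced, and in the loop above it would be applied as ah.write(unescape(data)):

        import re

        # mysqldump-style escape sequences and the bytes they stand for
        MYSQL_ESCAPES = {
            b"\\0": b"\x00",
            b"\\n": b"\n",
            b"\\r": b"\r",
            b"\\Z": b"\x1a",
            b"\\'": b"'",
            b'\\"': b'"',
            b"\\\\": b"\\",
        }

        def unescape(blob):
            # replace every backslash escape with its literal byte
            return re.sub(b"\\\\.", lambda m: MYSQL_ESCAPES.get(m.group(0), m.group(0)), blob)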


  • virtualenv macosX --no-site-package ignored

    - by Tristram Gräbener
    Hello, I'm having problems with Mac OS X and virtualenv. It seems to ignore --no-site-packages. Using exactly the same commands on Linux (Arch Linux) it works. It is Mac OS X 10.5 with Python 2.5.

        curl -o virtualenv.py 'http://bitbucket.org/ianb/virtualenv/raw/tip/virtualenv.py'

    Create a new environment:

        python virtualenv.py --no-site-packages foo
        New python executable in foo/bin/python
        Installing setuptools...........................done.

    Activate it:

        source foo/bin/activate

    Try to install something in it. Despite virtualenv, it looks for the system-wide install:

        easy_install cherrypy
        Searching for cherrypy
        Best match: CherryPy 3.1.2
        Adding CherryPy 3.1.2 to easy-install.pth file
        Using /Library/Python/2.5/site-packages
        Processing dependencies for cherrypy
        Finished processing dependencies for cherrypy

    Yet it doesn't find the module:

        (foo)guidage-multimodal:~ tristram$ python
        Python 2.5.1 (r251:54863, Feb 6 2009, 19:02:12)
        [GCC 4.0.1 (Apple Inc. build 5465)] on darwin
        Type "help", "copyright", "credits" or "license" for more information.
        >>> import cherrypy
        Traceback (most recent call last):
          File "<stdin>", line 1, in <module>
        ImportError: No module named cherrypy

    I tried pip after looking at http://stackoverflow.com/questions/1382925/virtualenv-no-site-packages-and-pip-still-finding-global-packages but it fails installing psycopg2 (some problems with gcc). Also, I would like to be able to have a setup.py (from distribute) that does the whole work.


  • Reading UTF-8 XML and writing it to a file with Python

    - by Harri
    I'm trying to parse a UTF-8 XML file and save some parts of it to another file. The problem is that this is my first Python script ever, and I'm totally confused about the character encoding problems I'm finding. My script fails immediately when it tries to write a non-ASCII character to a file, but it can print it to the command prompt (at least at some level).

    Here's the XML (the parts that matter, at least; it's a *.resx file which contains UI strings):

        <?xml version="1.0" encoding="utf-8"?>
        <root>
          <resheader name="foo">
            <value>bar</value>
          </resheader>
          <data name="lorem" xml:space="preserve">
            <value>ipsum öä</value>
          </data>
        </root>

    And here's my Python script:

        from xml.dom.minidom import parse

        names = []
        values = []

        def getStrings(path):
            dom = parse(path)
            data = dom.getElementsByTagName("data")
            for i in range(len(data)):
                name = data[i].getAttribute("name")
                names.append(name)
                value = data[i].getElementsByTagName("value")
                values.append(value[0].firstChild.nodeValue.encode("utf-8"))

        def writeToFile():
            with open("uiStrings-fi.py", "w") as f:
                for i in range(len(names)):
                    line = names[i] + '="'+ values[i] + '"' #varName='varValue'
                    f.write(line)
                    f.write("\n")

        getStrings("ResourceFile.fi-FI.resx")
        writeToFile()

    And here's the traceback:

        Traceback (most recent call last):
          File "GenerateLanguageFiles.py", line 24, in <module>
            writeToFile()
          File "GenerateLanguageFiles.py", line 19, in writeToFile
            line = names[i] + '="'+ values[i] + '"' #varName='varValue'
        UnicodeDecodeError: 'ascii' codec can't decode byte 0xc3 in position 2: ordinal not in range(128)

    How should I fix my script so it reads and writes UTF-8 characters properly? The files I'm trying to generate would be used in test automation with Robot Framework.
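    One way to resolve the mix of byte strings and unicode strings that the traceback points at, offered as a sketch rather than the only fix: keep the values as unicode by dropping the .encode("utf-8") call in getStrings, and let a codecs file object do the encoding on the way out.

        import codecs

        def writeToFile():
            # codecs.open() gives a file object that accepts unicode strings
            # and encodes them as UTF-8 when writing
            f = codecs.open("uiStrings-fi.py", "w", encoding="utf-8")
            try:
                for name, value in zip(names, values):
                    f.write(u'%s="%s"\n' % (name, value))
            finally:
                f.close()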


  • MySQL-python 1.2.3 and OS X 10.5: 64- or 32-bit?

    - by Dave Everitt
    I've been happily using Django and MySQL in development on an existing machine running OS X 10.4 Tiger, and have set up a similar environment in 10.5 Leopard on a new 64-bit MacBook, with a working MySQL and Python 2.6.4. However, now I want them to communicate, and easy_install MySQL-python gave ld warnings that the file is not of the required architecture, which led me to test my Python 2.4.6 install (from the Mac OS X disc image):

        >>> import sys
        >>> sys.maxint
        2147483647

    Ah. So my Python install appears to be 32-bit and (I think?) won't install MySQL-python for my 64-bit MySQL. There are lots of hacks out there for MySQL-python on OS X (mostly 1.2.2), but - after hours of reading - I'm pretty sure they won't fix this architecture mismatch. So I'm stuck because I can't decide whether to:

    - give up, remove the 64-bit MySQL install (thorough methods, please?) and use the 32-bit MySQL disc image instead;
    - re-install Python in 64-bit mode from the tarball, --with-universal archs-64-bit and --enable-universalsdk= as detailed in Python.org's 2.6 news.

    So my questions for anyone who has encountered this issue are:

    - Is installing 64-bit Python on OS X 10.5 worth bothering with?
    - If so, (naive, lazy question!) how are the two required arguments combined?
    - If I just skip along in 32-bit (as on my working setup), what am I missing?

    I'm after a hassle-free install that's easy to reproduce on other machines (possible student use), so I'd really welcome your opinions, please!

