Search Results

Search found 11935 results on 478 pages for 'knowledge module'.

Page 186/478

  • Loading 2 Singletons With Dependencies when an app is opened (appDelegate / appDidBecomeActive) iPhone SDK

    - by taber
    I'm trying to load two standard-issue style singletons (http://cocoawithlove.com/2008/11/singletons-appdelegates-and-top-level.html) when my iPhone app is loaded. Here's my code:

        - (void)applicationDidFinishLaunching:(UIApplication *)application {
            // first, restore user prefs
            [AppState loadState];
            // then, initialize the camera
            [[CameraModule sharedCameraModule] initCamera];
        }

    My "camera module" has code that checks a property of the AppState singleton. But I think what's happening is a race condition where the camera module singleton is trying to access the AppState property while it's in the middle of being initialized (so the property is nil, and it's re-initializing AppState). I'd really like to keep these separate, instead of just throwing one (or both) into the App Delegate.

    Has anyone else seen something like this? What kind of workaround did you use, or what would you suggest? Thanks in advance! Here's the loadState method:

        + (void)loadState {
            @synchronized([AppState class]) {
                NSArray *paths = NSSearchPathForDirectoriesInDomains(NSDocumentDirectory, NSUserDomainMask, YES);
                NSString *documentsDirectory = [paths objectAtIndex:0];
                NSString *file = [documentsDirectory stringByAppendingPathComponent:@"prefs.archive"];
                Boolean saveFileExists = [[NSFileManager defaultManager] fileExistsAtPath:file];
                if (saveFileExists) {
                    sharedAppState = [[NSKeyedUnarchiver unarchiveObjectWithFile:file] retain];
                } else {
                    [AppState sharedAppState];
                }
            }
        }

  • How to set UCS2 in numpy?

    - by mindcorrosive
    I'm trying to build numpy 1.2.1 as a module for a third-party Python interpreter (custom-built, py2.4, Linux x86_64) so that I can make calls to numpy from within it. Let's call this one interpreter A. The thing is, the system-wide Python interpreter from the vendor (also py2.4, let's call it B) is built with --enable-unicode=ucs4, while the custom one is built with UCS2. Needless to say, when I build the module with B, I get an error when I try to import numpy in A -- it complains about an undefined symbol, _PyUnicodeUCS4_IsWhiteSpace.

    I've searched around and apparently there's no way around this but to compile a custom Python interpreter -- which I did (let's call it interpreter C), properly specifying the unicode string length (verifiable through sys.maxunicode). I managed to build numpy with C as well, surprisingly enough, but the problem still persists when I try to import it in interpreter C. Previously, when I built numpy using B, there were no problems when importing it in B, but A would complain.

    Perhaps there's an option when building numpy to specify the length of Unicode strings to be used, as when configuring Python builds? Or am I doing something else wrong? A few notes:

    - Upgrading to newer versions of Python and/or numpy is not an option - interpreter A will stay on this version of the grammar for the foreseeable future.
    - It is not possible to start interpreter A in standalone mode to build numpy with it, as it needs some other libraries preloaded.

    I know that this whole thing is a mess, but I'd appreciate any help I can get to make this work. If you need more information, please let me know, I'd be happy to oblige. Thanks to everybody for their time in advance.
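
    Not an answer to the build-flag question, but a quick sanity check that is easy to get wrong in a multi-interpreter setup like this: a sketch for confirming which Unicode width each interpreter (A, B, C) was actually built with, since the extension has to be built by an interpreter whose width matches the one that will import it.

        import sys

        # On a UCS-4 (--enable-unicode=ucs4) build sys.maxunicode is 0x10FFFF (1114111);
        # on a UCS-2 build it is 0xFFFF (65535).
        if sys.maxunicode > 0xFFFF:
            print("UCS-4 build")
        else:
            print("UCS-2 build")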

  • Directly call distutils' or setuptools' setup() function with command name/options, without parsing

    - by Ryan B. Lynch
    I'd like to call Python's distutils' or setuptools' setup() function in a slightly unconventional way, but I'm not sure whether distutils is meant for this kind of usage. As an example, let's say I currently have a 'setup.py' file which looks like this (lifted verbatim from the distutils docs -- the setuptools usage is almost identical):

        from distutils.core import setup
        setup(name='Distutils',
              version='1.0',
              description='Python Distribution Utilities',
              author='Greg Ward',
              author_email='[email protected]',
              url='http://www.python.org/sigs/distutils-sig/',
              packages=['distutils', 'distutils.command'],
             )

    Normally, to build just the .spec file for an RPM of this module, I could run python setup.py bdist_rpm --spec-only, which parses the command line and calls the 'bdist_rpm' code to handle the RPM-specific stuff. The .spec file ends up in './dist'.

    How can I change my setup() invocation so that it runs the 'bdist_rpm' command with the '--spec-only' option, WITHOUT parsing command-line parameters? Can I pass the command name and options as parameters to setup()? Or can I manually construct a command line, and pass that as a parameter, instead?

    NOTE: I already know that I could call the script in a separate process, with an actual command line, using os.system() or the subprocess module or something similar. I'm trying to avoid using any kind of external command invocations. I'm looking specifically for a solution that runs setup() in the current interpreter.

    For background, I'm converting some release-management shell scripts into a single Python program. One of the tasks is running 'setup.py' to generate a .spec file for further pre-release testing. Running 'setup.py' as an external command, with its own command line options, seems like an awkward method, and it complicates the rest of the program. I feel like there may be a more Pythonic way.
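
    One avenue worth checking (a sketch, not a definitive answer): distutils exposes a script_args argument that stands in for sys.argv[1:], both as a keyword to setup() and via distutils.core.run_setup(), which runs an existing setup.py in the current interpreter without spawning a process.

        from distutils.core import run_setup

        # Equivalent in spirit to "python setup.py bdist_rpm --spec-only",
        # but executed in-process; the path to setup.py is an assumption.
        dist = run_setup('setup.py', script_args=['bdist_rpm', '--spec-only'])
        print(dist.commands)   # commands that were run, e.g. ['bdist_rpm']

    Alternatively, a setup.py can pass script_args directly to setup() itself, e.g. setup(..., script_args=['bdist_rpm', '--spec-only']), which bypasses command-line parsing in the same way.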

  • Celery tasks not working with gevent

    - by Novarg
    When I use celery + gevent for tasks that use the subprocess module, I get the following stacktrace:

        Traceback (most recent call last):
          File "/home/venv/admin/lib/python2.7/site-packages/celery/task/trace.py", line 228, in trace_task
            R = retval = fun(*args, **kwargs)
          File "/home/venv/admin/lib/python2.7/site-packages/celery/task/trace.py", line 415, in __protected_call__
            return self.run(*args, **kwargs)
          File "/home/webapp/admin/webadmin/apps/loggingquarantine/tasks.py", line 107, in release_mail_task
            res = call_external_script(popen_obj.communicate)
          File "/home/webapp/admin/webadmin/apps/core/helpers.py", line 42, in call_external_script
            return func_to_call(*args, **kwargs)
          File "/usr/lib64/python2.7/subprocess.py", line 740, in communicate
            return self._communicate(input)
          File "/usr/lib64/python2.7/subprocess.py", line 1257, in _communicate
            stdout, stderr = self._communicate_with_poll(input)
          File "/usr/lib64/python2.7/subprocess.py", line 1287, in _communicate_with_poll
            poller = select.poll()
        AttributeError: 'module' object has no attribute 'poll'

    My manage.py looks like the following (the monkeypatch is done there):

        #!/usr/bin/env python
        from gevent import monkey
        import sys
        import os

        if __name__ == "__main__":
            if not 'celery' in sys.argv:
                monkey.patch_all()
            os.environ.setdefault("DJANGO_SETTINGS_MODULE", "webadmin.settings")
            from django.core.management import execute_from_command_line
            sys.path.append(".")
            execute_from_command_line(sys.argv)

    Is there a reason why the celery tasks act as if they weren't patched properly?

    P.S. The strange thing is that my local setup on Mac OS works fine, while I get these exceptions under CentOS (all package versions are the same, init and config scripts too).
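
    One detail that may matter (an assumption about the cause, not a confirmed diagnosis): gevent's patch_all() removes select.poll(), and subprocess decides at import time whether poll() is available. If subprocess gets imported before the monkeypatch runs -- or the worker ends up only partially patched, as the 'celery' guard in the manage.py above suggests -- communicate() ends up calling a poll() that no longer exists, which is exactly the AttributeError shown. A minimal sketch of the ordering that avoids it:

        # patch first, before anything that imports subprocess/select
        from gevent import monkey
        monkey.patch_all()

        import subprocess  # now sees the gevent-patched select module

        proc = subprocess.Popen(['echo', 'ok'], stdout=subprocess.PIPE)
        print(proc.communicate())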

  • Using "ocamlfind" to make the OCaml compiler and toplevel find (project specific) libraries

    - by CharlieP
    Hello, I'm trying to use ocamlfind with both the OCaml compiler and toplevel. From what I understood, I need to place the required libraries in the _tags file at the root of my project, so that the ocamlfind tool will take care of loading them - allowing me to open them in my modules like so:

        open Sdl
        open Sdlvideo
        open Str

    Currently, my _tags file looks like this:

        <*>: pkg_sdl,pkg_str

    I can apparently launch the ocamlfind command with the ocamlc or ocamlopt argument, provided I want to compile my project, but I did not see an option to launch the toplevel in the same manner. Is there any way to do this (something like "ocamlfind ocaml")?

    I also don't know how to place my project-specific modules in the _tags file: imagine I have a module named Land. I am currently using the #use "land.ml" directive to open the file and load the module, but it has been suggested that this is not good practice. What syntax should I use in _tags to specify that it should be loaded by ocamlfind (considering land.ml is not in the ocamlfind search path)?

    Thank you, Charlie P.

    Edit: According to the first answer to this post, the _tags file is not to be used with ocamlfind. The questions above still stand, there is just a new one to add to the list: what is the correct way to specify the libraries to ocamlfind?

  • Getting exception when trying to monkey patch pymongo.connection._Pool

    - by Creotiv
    I use pymongo 1.9 on Ubuntu 10.10 with Python 2.6.6. When I try to monkey patch pymongo.connection._Pool, I get an error on connection:

        AutoReconnect: could not find master/primary

    But when I change the _Pool class in the pymongo.connection module itself, it works fine. Even if I copy the _Pool implementation from the pymongo.connection module and try to monkey patch with the same code, it still gives the same exception.

    I need to remove threading.local from the _Pool class, because I use gevent and I need to implement a pool for all mongo connections (for all threads). I use this code:

        import os      # used by socket() and return_socket() below
        import pymongo

        class GPool:
            """A simple connection pool.

            Uses thread-local socket per thread. By calling return_socket() a
            thread can return a socket to the pool. Right now the pool size is
            capped at 10 sockets - we can expose this as a parameter later, if
            needed.
            """
            # Non thread-locals
            __slots__ = ["sockets", "socket_factory", "pool_size", "sock"]
            #sock = None

            def __init__(self, socket_factory):
                self.pool_size = 10
                if not hasattr(self, "sock"):
                    self.sock = None
                self.socket_factory = socket_factory
                if not hasattr(self, "sockets"):
                    self.sockets = []

            def socket(self):
                # we store the pid here to avoid issues with fork /
                # multiprocessing - see
                # test.test_connection:TestConnection.test_fork for an example
                # of what could go wrong otherwise
                pid = os.getpid()
                if self.sock is not None and self.sock[0] == pid:
                    return self.sock[1]
                try:
                    self.sock = (pid, self.sockets.pop())
                except IndexError:
                    self.sock = (pid, self.socket_factory())
                return self.sock[1]

            def return_socket(self):
                if self.sock is not None and self.sock[0] == os.getpid():
                    # There's a race condition here, but we deliberately
                    # ignore it. It means that if the pool_size is 10 we
                    # might actually keep slightly more than that.
                    if len(self.sockets) < self.pool_size:
                        self.sockets.append(self.sock[1])
                    else:
                        self.sock[1].close()
                    self.sock = None

        pymongo.connection._Pool = GPool

  • A question about DOM parser used with Python

    - by fixxxer
    I'm using the following Python code to search for a node in an XML file and change the value of an attribute of one of its children. The changes happen correctly when the node is displayed using toxml(). But when it is written to a file, the attributes rearrange themselves (as seen in the source and the final XML below). Could anyone explain how and why this happens?

    Python code:

        #!/usr/bin/env python
        import xml
        from xml.dom.minidom import parse

        dom = parse("max.xml")
        #print "Please enter the store name:"
        for sku in dom.getElementsByTagName("node"):
            if sku.getAttribute("name") == "store":
                sku.childNodes[1].childNodes[5].setAttribute("value", "Delhi,India")
                print sku.toxml()

        xml.dom.ext.PrettyPrint(dom, open("new.xml", "w"))

    A part of the source XML:

        <node name='store' node_id='515' module='mpx.lib.node.simple_value.SimpleValue'
              config_builder='' inherant='false' description='Configurable Value'>
          <match>
            <property name='1' value='point'/>
            <property name='2' value='0'/>
            <property name='val' value='Store# 09204 Staten Island, NY'/>
            <property name='3' value='str'/>
          </match>
        </node>

    Final XML:

        <node config_builder="" description="Configurable Value" inherant="false"
              module="mpx.lib.node.simple_value.SimpleValue" name="store" node_id="515">
          <match>
            <property name="1" value="point"/>
            <property name="2" value="0"/>
            <property name="val" value="Delhi,India"/>
            <property name="3" value="str"/>
          </match>
        </node>
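
    For what it's worth, attribute order carries no meaning in XML, and serializers are free to emit attributes in whatever order they like -- sorting them alphabetically, as the final XML above shows, is a common choice -- so the reordering is cosmetic rather than a data change. A sketch that stays within the standard library (no PyXML xml.dom.ext needed), in case that is preferable:

        from xml.dom.minidom import parse

        dom = parse("max.xml")
        for sku in dom.getElementsByTagName("node"):
            if sku.getAttribute("name") == "store":
                sku.childNodes[1].childNodes[5].setAttribute("value", "Delhi,India")

        # minidom can serialize the tree itself; note it may also normalize
        # attribute order and quoting on output.
        out = open("new.xml", "w")
        dom.writexml(out)
        out.close()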

  • Problem running a Python program, error: Name 's' is not defined.

    - by Sergio Tapia
    Here's my code:

        #This is a game to guess a random number.
        import random
        guessTaken = 0

        print("Hello! What's your name kid")
        myName = input()

        number = random.randint(1, 20)
        print("Well, " + myName + ", I'm thinking of a number between 1 and 20.")

        while guessTaken < 6:
            print("Take a guess.")
            guess = input()
            guess = int(guess)
            guessTaken = guessTaken + 1
            if guess < number:
                print("You guessed a little bit too low.")
            if guess > number:
                print("You guessed a little too high.")
            if guess == number:
                break

        if guess == number:
            guessTaken = str(guessTaken)
            print("Well done " + myName + "! You guessed the number in " + guessTaken + " guesses!")

        if guess != number:
            number = str(number)
            print("No dice kid. I was thinking of this number: " + number)

    This is the error I get:

        Name error: Name 's' is not defined.

    I think the problem may be that I have Python 3 installed, but the program is being interpreted by Python 2.6. I'm using Linux Mint if that can help you guys help me. I'm using Geany as the IDE and pressing F5 to test it. It may be loading 2.6 by default, but I don't really know. :(

    Edit:

        Error 1 is: File "GuessingGame.py", line 8, in <Module> myName = input()
        Error 2 is: File <string>, line 1, in <Module>
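
    A short sketch of what seems to be going on (assuming the script really is being run by Python 2.6, as suspected): on Python 2, input() is effectively eval(raw_input()), so typing a bare word makes the interpreter try to evaluate it as a variable -- hence "name 's' is not defined". Using raw_input() on Python 2, or running the script with python3 where input() already returns a string, avoids it:

        import sys

        if sys.version_info[0] < 3:
            read_line = raw_input   # Python 2: returns the typed text as a string, no eval
        else:
            read_line = input       # Python 3: input() already returns a string

        print("Hello! What's your name kid")
        myName = read_line()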

  • Speed up writing C programs using a subset of Python syntax

    - by psihodelia
    I am constantly trying to optimize my time. Writing C code takes a lot of time and requires many more keystrokes than, say, writing a Python program. However, in order to speed up the time required to create a C program, one can automate many things. I'd like to write my programs using something like Python but with C semantics. That means all keywords are C keywords, but the syntax is optimized. For example, this C code:

        #include "dsplib.h"
        #include "coeffs.h"

        #define MODULENAME "dsplib"
        #define NUM_SAMPLES 320

        typedef float t_Vec;

        typedef struct s_Inter {
            char *pc_Name;
            struct s_Inter *px_Next;
        } t_Inter;

        typedef struct s_DspLibControl {
            t_Vec f_Y;
        } t_DspLibControl;

        void v_DspLibName(void)
        {
            printf("Module: %s", MODULENAME);
            printf("\n");
        }

        int v_DspLibInitInterControl(t_DspLibControl *px_Con)
        {
            int y;
            px_Con->f_Y = 0.0;
            for(int i=0;i<10;i++)
            {
                y += i * i;
            }
            return y;
        }

    in an optimized, pythonized version could look like:

        include dsplib, coeffs
        define MODULENAME="dsplib", NUM_SAMPLES=320

        typedef float t_Vec

        typedef struct s_Inter:
            char *pc_Name
            struct s_Inter *px_Next
        t_Inter

        typedef struct s_DspLibControl:
            t_Vec f_Y
        t_DspLibControl

        v_DspLibName():
            printf("Module: %s", MODULENAME); printf("\n")

        int v_DspLibInitInterControl(t_DspLibControl *px_Con):
            int y
            px_Con->f_Y = 0.0
            for int i=0;i<10;i++:
                y += i * i
            return y

    My question is: do you know of any Vim script which allows translating such pythonized C code into standard C? For example, someone writes C code using the pythonized syntax; once she decides to translate the pythonized blocks into standard C, she selects such blocks and presses some key. And she doesn't save the pythonized code, of course -- Vim translates it into standard C.
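
    The block-structure part of the translation is mechanical, which is what makes such a mapping plausible. A toy sketch of just that step in Python (assumptions: 4-space indentation, one statement per line, no ':' inside strings or comments; it deliberately ignores the missing semicolons and typedef tails and only shows how a trailing ':' can become an opening brace and a dedent a closing one):

        def pythonized_to_c(lines):
            out, stack = [], [0]
            for line in lines + [None]:
                if line is None:
                    while len(stack) > 1:          # flush any blocks still open
                        stack.pop()
                        out.append(" " * stack[-1] + "}")
                    break
                stripped = line.strip()
                if not stripped:
                    out.append("")
                    continue
                indent = len(line) - len(line.lstrip())
                while indent < stack[-1]:          # a dedent closes blocks
                    stack.pop()
                    out.append(" " * stack[-1] + "}")
                if stripped.endswith(":"):         # a trailing ':' opens a block
                    out.append(" " * indent + stripped[:-1] + " {")
                    stack.append(indent + 4)
                else:
                    out.append(line.rstrip())
            return "\n".join(out)

        print(pythonized_to_c([
            "int f(int n):",
            "    int y = 0;",
            "    for(int i=0;i<n;i++):",
            "        y += i * i;",
            "    return y;",
        ]))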

  • Which apache worker to use with passenger and how?

    - by Millisami
    I have this config in my apache2.conf:

        <IfModule mpm_prefork_module>
            StartServers          5
            MinSpareServers       5
            MaxSpareServers      10
            MaxClients          150
            MaxRequestsPerChild   0
        </IfModule>

        # worker MPM
        # StartServers: initial number of server processes to start
        # MaxClients: maximum number of simultaneous client connections
        # MinSpareThreads: minimum number of worker threads which are kept spare
        # MaxSpareThreads: maximum number of worker threads which are kept spare
        # ThreadsPerChild: constant number of worker threads in each server process
        # MaxRequestsPerChild: maximum number of requests a server process serves
        <IfModule mpm_worker_module>
            StartServers          2
            MaxClients           15
            MinSpareThreads       4
            MaxSpareThreads       5
            ThreadsPerChild      15
            MaxRequestsPerChild 50000
        </IfModule>

    Now I'm confused here. Which module gets loaded under which conditions? The Phusion guys have suggested using the worker module. Since both are present in the Apache conf file, do I have to comment out the mpm_prefork_module block or leave it as it is?

    Following is my Passenger conf file for Apache:

        LoadModule passenger_module /usr/lib/ruby/gems/1.8/gems/passenger-2.2.4/ext/apache2/mod_passenger.so
        PassengerRoot /usr/lib/ruby/gems/1.8/gems/passenger-2.2.4
        PassengerRuby /usr/bin/ruby1.8
        PassengerMaxPoolSize 3
        PassengerPoolIdleTime 999999
        RailsFrameworkSpawnerIdleTime 0
        RailsAppSpawnerIdleTime 0

    I'm running just a single Rails 2.3.2 app on a 256MB slice at Slicehost. I'm not quite satisfied with the performance yet. Are the settings above any good?

  • Demangling typeclass functions in GHC profiler output

    - by Paul Kuliniewicz
    When profiling a Haskell program compiled with GHC, the names of typeclass functions are mangled in the .prof file to distinguish one instance's implementations of them from another's. How can I demangle these names to find out which type's instance it is?

    For example, suppose I have the following program, where types Fast and Slow both implement Show:

        import Data.List (foldl')

        sum' = foldl' (+) 0

        data Fast = Fast
        instance Show Fast where
            show _ = show $ sum' [1 .. 10]

        data Slow = Slow
        instance Show Slow where
            show _ = show $ sum' [1 .. 100000000]

        main = putStrLn (show Fast ++ show Slow)

    I compile with -prof -auto-all -caf-all and run with +RTS -p. In the .prof file that gets generated, I see that the top cost centers are:

        COST CENTRE    MODULE    %time    %alloc
        show_an9       Main       71.0     83.3
        sum'           Main       29.0     16.7

    And in the tree, I likewise see (omitting irrelevant lines):

                                                   individual       inherited
        COST CENTRE    MODULE    no.   entries   %time  %alloc    %time  %alloc
        main           Main      232         1     0.0     0.0    100.0   100.0
        show_an9       Main      235         1    71.0    83.3    100.0   100.0
        sum'           Main      236         0    29.0    16.7     29.0    16.7
        show_anx       Main      233         1     0.0     0.0      0.0     0.0

    How do I figure out that show_an9 is Slow's implementation of show and not Fast's?

  • Django + Virtualenv: manage.py commands fail with ImportError of project name.

    - by Bartek
    This is blowing my mind because it's probably an easy solution, but I can't figure out what could be causing it.

    So I have a new dev box and am setting everything up. I installed virtualenv and created a new environment for my project under ~/.virtualenvs/projectname. Then I cloned my project from GitHub into my projects directory. Nothing fancy here. There are no .pyc files sitting around, so it's a clean slate of code. Then I activated my virtualenv and installed Django via pip. All looks good so far.

    Then I run python manage.py syncdb within my project dir. This is where I get confused:

        ImportError: No module named projectname

    So I figured I may have had some references to projectname within my code. So I grep (ack, actually) through my code base and I find nothing of the sort. So now I'm at a loss: given this environment, why am I getting an ImportError on a module named projectname that isn't referenced anywhere in my code? I look forward to a solution .. thanks guys!
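
    A quick diagnostic worth running from the project directory inside the activated virtualenv (a sketch; the cause is an assumption, but old-style Django layouts import the project as a package, so the usual suspects are a DJANGO_SETTINGS_MODULE or a settings string such as ROOT_URLCONF that still says 'projectname.something', or the project's parent directory missing from sys.path):

        import os
        import pprint
        import sys

        print(os.environ.get("DJANGO_SETTINGS_MODULE"))   # does it mention projectname?
        pprint.pprint(sys.path)                           # is the project's parent dir on it?

        import projectname   # reproduces the failing import directly, outside manage.py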

  • What is the easiest way to read wav-files using Python [summary]?

    - by Roman
    I want to use Python to access a wav file and write its content in a form which allows me to analyze it (let's say arrays).

    I heard that "audiolab" is a suitable tool for that (it transforms numpy arrays into wav and vice versa). I installed audiolab, but I had a problem with the version of numpy (I could not "from numpy.testing import Tester"). I had version 1.1.1 of numpy, so I installed a newer version (1.4.0). But then I got a new set of errors:

        Traceback (most recent call last):
          File "test.py", line 7, in
            import scikits.audiolab
          File "/usr/lib/python2.5/site-packages/scikits/audiolab/__init__.py", line 25, in
            from pysndfile import formatinfo, sndfile
          File "/usr/lib/python2.5/site-packages/scikits/audiolab/pysndfile/__init__.py", line 1, in
            from _sndfile import Sndfile, Format, available_file_formats, available_encodings
          File "numpy.pxd", line 30, in scikits.audiolab.pysndfile._sndfile (scikits/audiolab/pysndfile/_sndfile.c:9632)
        ValueError: numpy.dtype does not appear to be the correct type object

    I gave up on audiolab and thought that I could use the "wave" package to read in a wav file. I asked a question about that, but people recommended using scipy instead. OK, I decided to focus on scipy (I have version 0.6.0). But when I tried to do the following:

        from scipy.io import wavfile
        x = wavfile.read('/usr/share/sounds/purple/receive.wav')

    I get the following:

        Traceback (most recent call last):
          File "test3.py", line 4, in <module>
            from scipy.io import wavfile
          File "/usr/lib/python2.5/site-packages/scipy/io/__init__.py", line 23, in <module>
            from numpy.testing import NumpyTest
        ImportError: cannot import name NumpyTest

    So, I gave up on scipy. Can I just use the wave package? I do not need much. I just need to have the content of the wav file in a human-readable format and then I will figure out what to do with it.
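
    For the "just give me the samples" case, the standard library really is enough -- a minimal sketch using only wave and array (assumptions: 16-bit PCM data, and the path from the scipy attempt above; other sample widths need a different array type code):

        import array
        import wave

        w = wave.open('/usr/share/sounds/purple/receive.wav', 'rb')
        print(w.getnchannels(), w.getsampwidth(), w.getframerate(), w.getnframes())

        frames = w.readframes(w.getnframes())   # raw interleaved bytes
        samples = array.array('h')              # 'h' = signed 16-bit integers
        samples.fromstring(frames)              # on Python 3 use frombytes()
        w.close()

        print(samples[:10])                     # first few sample values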

  • How can I execute a .sql from C#?

    - by J. Pablo Fernández
    For some integration tests I want to connect to the database and run a .sql file that has the schema needed for the tests to actually run, including GO statements. How can I execute the .sql file? (Or is this totally the wrong way to go?)

    I've found a post in the MSDN forum showing this code:

        using System.Data.SqlClient;
        using System.IO;
        using Microsoft.SqlServer.Management.Common;
        using Microsoft.SqlServer.Management.Smo;

        namespace ConsoleApplication1
        {
            class Program
            {
                static void Main(string[] args)
                {
                    string sqlConnectionString = "Data Source=(local);Initial Catalog=AdventureWorks;Integrated Security=True";
                    FileInfo file = new FileInfo("C:\\myscript.sql");
                    string script = file.OpenText().ReadToEnd();
                    SqlConnection conn = new SqlConnection(sqlConnectionString);
                    Server server = new Server(new ServerConnection(conn));
                    server.ConnectionContext.ExecuteNonQuery(script);
                }
            }
        }

    but on the last line I'm getting this error:

        System.Reflection.TargetInvocationException: Exception has been thrown by the target of an invocation.
        --- System.TypeInitializationException: The type initializer for '' threw an exception.
        --- .ModuleLoadException: The C++ module failed to load during appdomain initialization.
        --- System.DllNotFoundException: Unable to load DLL 'MSVCR80.dll': The specified module could not be found. (Exception from HRESULT: 0x8007007E).

    I was told to go and download that DLL from somewhere, but that sounds very hacky. Is there a cleaner way? Is there another way to do it? What am I doing wrong? I'm doing this with Visual Studio 2008, SQL Server 2008, .NET 3.5 SP1 and C# 3.0.

  • How to convert from hex-encoded string to a "human readable" string?

    - by John Jensen
    I'm using the Net-SNMP bindings for Python and I'm attempting to grab an ARP cache from a Brocade switch. Here's what my code looks like:

        #!/usr/bin/env python
        import netsnmp

        def get_arp():
            oid = netsnmp.VarList(netsnmp.Varbind('ipNetToMediaPhysAddress'))
            res = netsnmp.snmpwalk(oid, Version=2, DestHost='10.0.1.243', Community='public')
            return res

        arp_table = get_arp()
        print arp_table

    The SNMP code itself is working fine. Output from snmpwalk looks like this:

        <snip>
        IP-MIB::ipNetToMediaPhysAddress.128.10.200.6.158 = STRING: 0:1b:ed:a3:ec:c1
        IP-MIB::ipNetToMediaPhysAddress.129.10.200.6.162 = STRING: 0:1b:ed:a4:ac:c1
        IP-MIB::ipNetToMediaPhysAddress.130.10.200.6.166 = STRING: 0:1b:ed:38:24:1
        IP-MIB::ipNetToMediaPhysAddress.131.10.200.6.170 = STRING: 74:8e:f8:62:84:1
        </snip>

    But my output from the Python script yields a tuple of hex-encoded strings that looks like this:

        ('\x00$8C\x98\xc1', '\x00\x1b\xed;_A', '\x00\x1b\xed\xb4\x8f\x81', '\x00$86\x15\x81',
         '\x00$8C\x98\x81', '\x00\x1b\xed\x9f\xadA', ...etc)

    I've spent some time googling and came across the struct module and the .decode("hex") string method, but the .decode("hex") method doesn't seem to work:

        Python 2.7.3 (default, Apr 10 2013, 06:20:15)
        [GCC 4.6.3] on linux2
        Type "help", "copyright", "credits" or "license" for more information.
        >>> hexstring = '\x00$8C\x98\xc1'
        >>> newstring = hexstring.decode("hex")
        Traceback (most recent call last):
          File "<stdin>", line 1, in <module>
          File "/usr/lib/python2.7/encodings/hex_codec.py", line 42, in hex_decode
            output = binascii.a2b_hex(input)
        TypeError: Non-hexadecimal digit found

    And the documentation for struct is a bit over my head.
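
    A sketch of the conversion (the assumption being that each tuple element is the raw 6-byte MAC address rather than a hex *string*, which is why .decode("hex") complains about non-hexadecimal digits): format each byte individually and join with colons.

        def pretty_mac(raw):
            # Python 2 byte strings: each character is one byte of the address
            return ':'.join('%02x' % ord(ch) for ch in raw)

        print(pretty_mac('\x00\x1b\xed\xa3\xec\xc1'))   # -> 00:1b:ed:a3:ec:c1

        arp_table = ('\x00$8C\x98\xc1', '\x00\x1b\xed;_A')
        print([pretty_mac(mac) for mac in arp_table])   # -> ['00:24:38:43:98:c1', '00:1b:ed:3b:5f:41']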

  • Django generic relation field reports that all() is getting an unexpected keyword argument when no args are passed

    - by Joshua
    I have a model which can be attached to other models:

        class Attachable(models.Model):
            content_type = models.ForeignKey(ContentType)
            object_pk = models.TextField()
            content_object = generic.GenericForeignKey(ct_field="content_type", fk_field="object_pk")

            class Meta:
                abstract = True

        class Flag(Attachable):
            user = models.ForeignKey(User)
            flag = models.SlugField()
            timestamp = models.DateTimeField()

    I'm creating a generic relationship to this model in another model:

        flags = generic.GenericRelation(Flag)

    I try to get objects from this generic relation like so:

        self.flags.all()

    This results in the following exception:

        >>> obj.flags.all()
        Traceback (most recent call last):
          File "<console>", line 1, in <module>
          File "/usr/local/lib/python2.6/dist-packages/django/db/models/manager.py", line 105, in all
            return self.get_query_set()
          File "/usr/local/lib/python2.6/dist-packages/django/contrib/contenttypes/generic.py", line 252, in get_query_set
            return superclass.get_query_set(self).filter(**query)
          File "/usr/local/lib/python2.6/dist-packages/django/db/models/query.py", line 498, in filter
            return self._filter_or_exclude(False, *args, **kwargs)
          File "/usr/local/lib/python2.6/dist-packages/django/db/models/query.py", line 516, in _filter_or_exclude
            clone.query.add_q(Q(*args, **kwargs))
          File "/usr/local/lib/python2.6/dist-packages/django/db/models/sql/query.py", line 1675, in add_q
            can_reuse=used_aliases)
          File "/usr/local/lib/python2.6/dist-packages/django/db/models/sql/query.py", line 1569, in add_filter
            negate=negate, process_extras=process_extras)
          File "/usr/local/lib/python2.6/dist-packages/django/db/models/sql/query.py", line 1737, in setup_joins
            "Choices are: %s" % (name, ", ".join(names)))
        FieldError: Cannot resolve keyword 'object_id' into field. Choices are: content_type, flag, id, nestablecomment, object_pk, timestamp, user

        >>> obj.flags.all(object_pk=obj.pk)
        Traceback (most recent call last):
          File "<console>", line 1, in <module>
        TypeError: all() got an unexpected keyword argument 'object_pk'

    What have I done wrong?
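
    For what it's worth, the first traceback says the reverse lookup is searching for a field called object_id, which is GenericRelation's default, while Flag stores its key in object_pk; and the second traceback is expected regardless, since all() takes no filter arguments (filtering is done with filter()). A sketch of telling the relation which fields the generic foreign key actually uses (the Article model here is hypothetical, standing in for "another model"):

        from django.contrib.contenttypes import generic
        from django.db import models

        class Article(models.Model):
            title = models.CharField(max_length=100)
            flags = generic.GenericRelation(
                Flag,
                content_type_field="content_type",
                object_id_field="object_pk",     # instead of the default "object_id"
            )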

  • Testing subpackage modules in Python 3

    - by Mitchell Model
    I have been experimenting with various uses of hierarchies like this and the differences between absolute and relative imports, and can't figure out how to do routine things with the package, subpackages, and modules without simply putting everything on sys.path. I have a two-level package hierarchy:

        MyApp/
            __init__.py
            Application/
                __init__.py
                Module1
                Module2
                ...
            Domain/
                __init__.py
                Module1
                Module2
                ...
            UI/
                __init__.py
                Module1
                Module2
                ...

    I want to be able to do the following:

    1. Run test code in a module's "if main" block when the module imports from other modules in the same directory.
    2. Have one or more test code modules in each subpackage that run unit tests on the modules in the subpackage.
    3. Have a set of unit tests that reside someplace reasonable, but outside the subpackages -- either in a sibling package, at the top-level package, or outside the top-level package (though all these might end up doing is running the tests in each subpackage).
    4. "Enter" the structure from any of the three subpackage levels, e.g. run code that just uses Domain modules, run code that just uses Application modules (but Application uses code from both Application and Domain modules), and run code from GUI that uses code from both GUI and Application; for instance, Application test code would import Application modules but not Domain modules.
    5. After developing the bulk of the code without subpackages, continue developing and testing after organizing the modules into this hierarchy.

    I know how to use relative imports so that external code that puts MyApp on its sys.path can import MyApp, import any subpackages it wants, and import things from their modules, while the modules in each subpackage can import other modules from the same subpackage or from sibling packages. However, the development needs listed above seem incompatible with subpackage structuring -- in other words, I can't have it both ways: a well-structured multi-level package hierarchy used from the outside and also used from within, in particular for testing, but also because modules from one design level (in particular the UI) should not import modules from a design level below the next one down.

    Sorry for the long essay, but I think it fairly represents the struggles a lot of people have had adapting to the new relative import mechanisms.
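
    On points 1 and 2 above, one pattern that keeps relative imports working is to run a test module as part of the package rather than as a bare script -- a sketch (the file name Domain/tests.py is an assumption):

        # MyApp/Domain/tests.py
        # Run from the directory *containing* MyApp with:
        #     python -m MyApp.Domain.tests
        # Running it that way preserves the package context, so the relative
        # import below resolves; running "python tests.py" directly would not.
        import unittest

        from . import Module1


        class Module1Tests(unittest.TestCase):
            def test_importable(self):
                self.assertTrue(hasattr(Module1, "__name__"))


        if __name__ == "__main__":
            unittest.main()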

  • Read attributes of MSBuild custom tasks via events in the Logger

    - by gt
    I am trying to write an MSBuild logger module which logs information when receiving TaskStarted events about the task and its parameters. The build is run with the command:

        MSBuild.exe /logger:MyLogger.dll build.xml

    Within build.xml is a sequence of tasks, most of which have been custom written to compile a (C++ or C#) solution, and are accessed with the following custom task:

        <DoCompile Desc="Building MyProject 1" Param1="$(Param1Value)" />
        <DoCompile Desc="Building MyProject 2" Param1="$(Param1Value)" />
        <!-- etc -->

    The custom build task DoCompile is defined as:

        public class DoCompile : Microsoft.Build.Utilities.Task
        {
            [Required]
            public string Description
            {
                set { _description = value; }
            }
            // ... more code here ...
        }

    Whilst the build is running, as each task starts, the logger module receives IEventSource.TaskStarted events, subscribed to as follows:

        public class MyLogger : Microsoft.Build.Utilities.Logger
        {
            public override void Initialize(Microsoft.Build.Framework.IEventSource eventSource)
            {
                eventSource.TaskStarted += taskStarted;
            }

            private void taskStarted(object sender, Microsoft.Build.Framework.TaskStartedEventArgs e)
            {
                // write e.TaskName, attributes and e.Timestamp to log file
            }
        }

    The problem I have is that in the taskStarted() method above, I want to be able to access the attributes of the task for which the event was fired. I only have access to the logger code and cannot change either the build.xml or the custom build tasks. Can anyone suggest a way I can do this?

  • ByteFlow installation Error on Windows

    - by Patrick
    Hi folks, when I try to install ByteFlow on my Windows development machine, I get the following MySQL error, and I don't know what to do. Please give me some suggestions. Thank you so much!

        E:\byteflow-5b6d964917b5>manage.py syncdb
        !!! Read about DEBUG in settings_local.py and then remove me !!!
        !!! Read about DEBUG in settings_local.py and then remove me !!!
        J:\Program Files\Python26\lib\site-packages\MySQLdb\converters.py:37: DeprecationWarning: the sets module is deprecated
          from sets import BaseSet, Set
        Creating table auth_permission
        Creating table auth_group
        Creating table auth_user
        Creating table auth_message
        Creating table django_content_type
        Creating table django_session
        Creating table django_site
        Creating table django_admin_log
        Creating table django_flatpage
        Creating table actionrecord
        Creating table blog_post
        Traceback (most recent call last):
          File "E:\byteflow-5b6d964917b5\manage.py", line 11, in <module>
            execute_manager(settings)
          File "J:\Program Files\Python26\lib\site-packages\django\core\management\__init__.py", line 362, in execute_manager
            utility.execute()
          File "J:\Program Files\Python26\lib\site-packages\django\core\management\__init__.py", line 303, in execute
            self.fetch_command(subcommand).run_from_argv(self.argv)
          File "J:\Program Files\Python26\lib\site-packages\django\core\management\base.py", line 195, in run_from_argv
            self.execute(*args, **options.__dict__)
          File "J:\Program Files\Python26\lib\site-packages\django\core\management\base.py", line 222, in execute
            output = self.handle(*args, **options)
          File "J:\Program Files\Python26\lib\site-packages\django\core\management\base.py", line 351, in handle
            return self.handle_noargs(**options)
          File "J:\Program Files\Python26\lib\site-packages\django\core\management\commands\syncdb.py", line 78, in handle_noargs
            cursor.execute(statement)
          File "J:\Program Files\Python26\lib\site-packages\django\db\backends\util.py", line 19, in execute
            return self.cursor.execute(sql, params)
          File "J:\Program Files\Python26\lib\site-packages\django\db\backends\mysql\base.py", line 84, in execute
            return self.cursor.execute(query, args)
          File "J:\Program Files\Python26\lib\site-packages\MySQLdb\cursors.py", line 166, in execute
            self.errorhandler(self, exc, value)
          File "J:\Program Files\Python26\lib\site-packages\MySQLdb\connections.py", line 35, in defaulterrorhandler
            raise errorclass, errorvalue
        _mysql_exceptions.OperationalError: (1071, 'Specified key was too long; max key length is 767 bytes')

  • How do I make the manifest available during a Maven/Surefire unittest run "mvn test" ?

    - by Ernst de Haan
    How do I make the manifest available during a Maven/Surefire unit test run ("mvn test")?

    I have an open-source project that I am converting from Ant to Maven, including its unit tests. Here's the project source repository with the Maven project: http://github.com/znerd/logdoc

    My question pertains to the primary module, called "base". This module has a unit test that tests the behaviour of the static method getVersion() in the class org.znerd.logdoc.Library. This method returns:

        Library.class.getPackage().getImplementationVersion()

    The getImplementationVersion() method returns the value of a setting in the manifest file. So far, so good. I have tested this in the past and it works well, as long as the manifest is indeed available on the classpath at the path META-INF/MANIFEST.MF (either on the file system or inside a JAR file).

    Now my challenge is that the manifest file is not available when I run the unit tests:

        mvn test

    Surefire runs the unit tests, but my unit test fails with a message indicating that Library.getVersion() returned null. When I check for the JAR, I find that it has not even been generated. Maven/Surefire runs the unit tests against the classes, before the resources are added to the classpath.

    So can I either run the unit tests against the JAR (implicitly requiring the JAR to be generated first), or can I make sure the resources (including the manifest file) are generated/copied under target/classes before the unit tests are run?

    Note that I use Maven 2.2.0 and Java 1.6.0_17 on Mac OS X 10.6.2, with JUnit 4.8.1.

  • Python csv reader acting weird

    - by PylonsN00b
    So OK, if I run this wrong code:

        csvReader1 = csv.reader(file('new_categories.csv', "rU"), delimiter=',')
        for row1 in csvReader1:
            print row1[0]
            print row1[8]
            category_sku = str(row[8])
            if category_sku == sku:
                classifications["Craft"] = row[0]
                classifications["Theme"] = row[1]

    I get:

        Knitting
        391
        Traceback (most recent call last):
          File "upload_all_inventory_ebay.py", line 403, in <module>
            inventory_item_list = get_item_list(product)
          File "upload_all_inventory_ebay.py", line 294, in get_item_list
            category_sku = str(row[8])
        NameError: global name 'row' is not defined

    where Knitting and 391 are exactly right. Of course I need to refer to row[8] as row1[8]... OK, so I do this:

        csvReader1 = csv.reader(file('new_categories.csv', "rU"), delimiter=',')
        for row1 in csvReader1:
            print row1[0]
            print row1[8]
            category_sku = str(row1[8])
            if category_sku == sku:
                classifications["Craft"] = row1[0]
                classifications["Theme"] = row1[1]

    And I get this:

        ...........
        Crochet
        107452
        Knitting
        107454
        Knitting
        107455
        Knitting
        107456
        Knitting
        107457
        Crochet
        108200
        Crochet
        108201
        Crochet
        108205
        Crochet
        108213
        Crochet
        108214
        Crochet
        108217
        108432
        Quilt
        108451
        108482
        108488
        Scrapbooking
        108711
        Knitting
        122363
        Needlework
        Beading
        Crafts & Decorating
        Crochet
        Crochet
        Crochet
        Traceback (most recent call last):
          File "upload_all_inventory_ebay.py", line 403, in <module>
            inventory_item_list = get_item_list(product)
          File "upload_all_inventory_ebay.py", line 292, in get_item_list
            print row1[0]
        IndexError: list index out of range

    where the output you see there is every effing thing in column 0 and column 1! Why? And WHY is row1[0] out of range if it wasn't before? YAY Fridays!
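
    A sketch of a more defensive version (the assumption being that the IndexError comes from blank or short rows -- rows with fewer than 9 columns -- later in the file; the first run never got that far because it died on the NameError immediately):

        import csv

        sku = "107455"          # hypothetical value; comes from elsewhere in the real script
        classifications = {}

        reader = csv.reader(open('new_categories.csv', 'rU'), delimiter=',')
        for row1 in reader:
            if len(row1) < 9:   # skip blank/short rows instead of raising IndexError
                continue
            if str(row1[8]) == sku:
                classifications["Craft"] = row1[0]
                classifications["Theme"] = row1[1]
                break

        print(classifications)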

  • Need advice on Zend Framework application architecture, or rather the approach to dealing with modules

    - by simple
    Let me start with the things that I did and how I am using some things to get results. I have set up a modular structure like this:

        application/
            /configs
            /layouts
            /models
            /modules
                /users
                /profile
                    /frontend
                    /backend
                        /controllers
                        /views
            ....

    I wrote a plugin that applies changes with FrontController->setModuleControllerDirectoryName() and FrontController->addModuleDirectory(), and it all works. I change all the directories according to whether an admin page is requested in the URL or not (it is something like /admin/some/some).

    Let's say I have a single layout for anything that is related to profile viewing, in this case the "Profile" module. The Profile layout is divided into three parts. In the layout I was pulling in the Profile/PhotoController's index action with action():

        $this->action('index', 'photo', 'profile');

    Then I faced a few issues:

    1. Can get passed params inside the Photo controller when calling (profile/profile/index);
    2. Found out that the Action() helper is evil because it starts another dispatching loop =) --- and now I am thinking that my approach of plugging another module's controllers into the layout is also evil =).

    Anyhow, how should I deal with plugging some controllers (another module's controllers) into the layout?

  • Google Maps in Drupal: reference to gmap object from JavaScript

    - by user280817
    Is there a way to obtain JavaScript references to the Google maps that are embedded into Drupal pages by the GMap module? I want to be able to manipulate the maps in these pages - to pan and zoom them - but I cannot find a reference to an embedded map object. I've dissected the relevant JavaScript objects Drupal.gmap and Drupal.settings.gmap with no success, unless I've overlooked something.

    The Drupal GMap module doesn't seem to explicitly provide references (within its API) to the GMap objects that it embeds into pages. It just generates themed text which is interpolated into the page.

    The technique of passing the HTML ID of the map container to either the GMap2 object constructor or the similar Drupal.gmap.getMap() function in order to obtain a map reference doesn't appear to work: both simply return an instance of a new map, one having the same dimensions and basic characteristics of the original map, but apparently sans all of its overlays (which could contain markers). And I have to call setCenter() on it before I can use it, which initializes the structure, so I know it has no overlays.

  • Magento payment process: how it works in general

    - by spirytus
    Hi everyone, I've got a question and I hope this is the right place to ask :). I don't quite understand how payment works in Magento.

    A client goes to checkout and, let's say, wants to pay as a guest, so they provide an address etc. and finally get to the payment methods. Then I want clients to pay by credit card. I already have a module installed for the gateway (bank?) of my choice. At that point I would expect users to be redirected to a third-party page (bank hosted) where they give all the details, and only after that be returned to my Magento site with an appropriate message. In Magento, however, it seems like they need to provide CC numbers and details on the Magento checkout page.

    I don't understand: would I (or the payment module I installed) then need to transfer all the credit card details to the bank? I would have to have the checkout page on an SSL connection and a static IP, right? The thing is, I want to avoid touching CC numbers at any point and would love to have that handled by a bank page. I do like the idea of keeping the Magento interface all the way through without redirecting to another page, though; the only problem is I'm not sure I would be able to set it all up properly.

    If anyone could explain the possible options to me, what the common way to do it is, and how the whole process works, that would be very much appreciated. I did my research and looked all over Google and various forums, but I still need someone's help. Please let me know if some parts of my question are not quite clear; I will try to explain better if necessary.

  • Using ASP.NET session state with Silverlight (PRISM)

    - by Jon Andersen
    Hi, The scenario: I have a PRISM application developed in Silverlight (4), and I'm using a ASP.NET server side application to host several web-services (which, in turn, accesses WCF-services, but that's not really important here). The Silverlight application must be able to call the web services cross-domain (meaning that the web services isn't necessarily on the same server hosting the silverlight application). The Silverlight application consists of several modules, each accessing the ASP.NET web-services. I do not have much experience with Silverlight and PRISM, but as far as I can see, this is not a very unusual scenario... The problem: My challange is, that when 2 different modules access the web-services, I get 2 new sessions on the web-server. I would have thought that since both modules live on the same HTML-page (and then also in the same browser session), they would get the same session on the web-server...? I have tried to make the web-service Proxy-client globally available in the container (using Unity), by registering an instance (using Container.RegisterInstance), and then getting this instance whenever a module needs to make a web-service call (using Container.Resolve), but this doesn't seem to help. However, any calls made within the same module always gets the same session on the server. Can anyone see what I'm missing here...? Thanks! Jon
