Search Results

Search found 15187 results on 608 pages for 'boost python'.

Page 315/608 | < Previous Page | 311 312 313 314 315 316 317 318 319 320 321 322  | Next Page >

  • A map and set which uses contiguous memory and has a reserve function

    - by edA-qa mort-ora-y
    I use several maps and sets. The lack of contiguous memory, and the high number of (de)allocations, is a performance bottleneck. I need a mainly STL-compatible map and set class which can use a contiguous block of memory for internal objects (or multiple blocks). It also needs to have a reserve function so that I can preallocate for expected sizes. Before I write my own I'd like to check what is available first. Is there something in Boost which does this? Does somebody know of an available implementation elsewhere? Intrusive collection types are not usable here, as the same objects need to exist in several collections. As far as I know STL memory pools are per-type, not per-instance. These global pools are not efficient with respect to memory locality in multi-cpu/core processing. Object pools don't work, as the types will be shared between instances but their pool should not be. A hash map may be an option in some cases.

    Read the article

  • Parallel version of loop not faster than serial version

    - by Il-Bhima
    I'm writing a program in C++ to perform a simulation of a particular system. For each timestep, the biggest part of the execution is taken up by a single loop. Fortunately this is embarrassingly parallel, so I decided to use Boost Threads to parallelize it (I'm running on a 2-core machine). I would expect a speedup close to 2 times the serial version, since there is no locking. However I am finding that there is no speedup at all. I implemented the parallel version of the loop as follows: Wake up the two threads (they are blocked on a barrier). Each thread then performs the following: atomically fetch and increment a global counter; retrieve the particle with that index; perform the computation on that particle, storing the result in a separate array; wait on a job-finished barrier. The main thread waits on the job-finished barrier. I used this approach since it should provide good load balancing (since each computation may take differing amounts of time). I am really curious as to what could possibly cause this slowdown. I always read that atomic variables are fast, but now I'm starting to wonder whether they have performance costs after all. If anybody has some ideas what to look for or any hints I would really appreciate it. I've been bashing my head on it for a week, and profiling has not revealed much.

    Read the article

  • What makes deployment successful for some users and unsuccessful for others?

    - by Julien
    I am trying to deploy a Visual C++ application (developed with Microsoft Visual Studio 2008) using a Setup and Deployment Project. After installation, users on some target computers get the following error message after launching the application executable: “This application has failed to start because the application configuration is incorrect. Reinstalling the application may fix the problem.” Another user could run the application properly after installation. I cannot find the root cause of this problem, despite spending several hours on the Visual Studio help files and online forums (most postings date back to 2006). Does anyone at Stack Overflow have a suggestion? Additional details appear below. The application uses FLTK 1.1.9 for a GUI library, as well as some Boost 1.39 libraries (regex, lexical_cast, date_time, math). I made sure I am trying to deploy the release version (not the debug version) of the application. The Runtime Library in the Code Generation settings is Multi-threaded DLL (/MD). Dependency Walker for myapp.exe lists the following DLLs: wsock32.dll, comctl32.dll, kernel32.dll, user32.dll, gdi32.dll, shell32.dll, ole32.dll, msvcp90.dll, msvcr90.dll. In the Setup and Deployment Project, I add the following DLLs to the File System on Target Machine: fltkdlld.dll, and a folder named Microsoft.VC90.CRT with msvcm90.dll, msvcp90.dll, msvcr90.dll and Microsoft.VC90.CRT.manifest. The installation process on the target computers getting the error message requires having the .NET Framework 3.5 installed first. Any suggestions? Thanks in advance!

    Read the article

  • What is the nicest way to parse this in C++?

    - by ereOn
    Hi, In my program, I have a list of "server addresses" in the following format: host[:port] The brackets here indicate that the port is optional. host can be a hostname, an IPv4 or IPv6 address. port, if present, can be a numeric port number or a service string (like: "http" or "ssh"). If port is present and host is an IPv6 address, host must be in "bracket-enclosed" notation (Example: [::1]). Here are some valid examples: localhost localhost:11211 127.0.0.1:http [::1]:11211 ::1 [::1] And some invalid examples: ::1:80 // Invalid: Is this the IPv6 address ::1:80 and a default port, or the IPv6 address ::1 and the port 80? ::1:http // This is not ambiguous, but for simplicity's sake, let's consider this forbidden as well. My goal is to separate such entries into two parts (obviously host and port). I don't care if either the host or port is invalid as long as they don't contain a : (290.234.34.34.5 is ok for host, it will be rejected in the next process); I just want to separate the two parts, or if there is no port part, to know it somehow. I tried to do something with std::stringstream but everything I come up with seems hacky and not really elegant. How would you do this in C++? I don't mind answers in C but C++ is preferred. Any Boost solution is welcome as well. Thank you.

    Read the article

  • Calling a method at a specific interval rate in C++

    - by Aetius
    This is really annoying me as I have done it before, about a year ago and I cannot for the life of me remember what library it was. Basically, the problem is that I want to be able to call a method a certain number of times or for a certain period of time at a specified interval. One example would be I would like to call a method "x" starting from now, 10 times, once every 0.5 seconds. Alternatively, call method "x" starting from now, 10 times, until 5 seconds have passed. Now I thought I used a boost library for this functionality but I can't seem to find it now and feeling a bit annoyed. Unfortunately I can't look at the code again as I'm not in possession of it any more. Alternatively, I could have dreamt this all up and it could have been proprietary code. Assuming there is nothing out there that does what I would like, what is currently the best way of producing this behaviour? It would need to be high-resolution, up to a millisecond. It doesn't matter if it blocks the thread that it is executed from or not. Thanks!

    Read the article

  • Loading specific files from arbitrary directories?

    - by Haydn V. Harach
    I want to load foo.txt. foo.txt might exist in the data/bar/ directory, or it might exist in the data/New Folder/ directory. There might be a different foo.txt in both of these directories, in which case I would want to either load one and ignore the other according to some order that I've sorted the directories by (perhaps manually, perhaps by date of creation), or else load them both and combine the results somehow. The latter (combining the results of both/all foo.txt files) is circumstantial and beyond the scope of this question, but something I want to be able to do in the future. I'm using SDL and boost::filesystem. I want to keep my list of dependencies as small as possible, and as cross-platform as possible. I'm guessing that my best bet would be to get a list of every directory (within the data/ folder), sort/filter this list, then when I go to load foo.txt, search for it in each potential directory? This sounds like it would be very inefficient if I have dozens of potential directories to search through every time. What's the best way to go about accomplishing this? Bonus: What if I want some of the directories to be archives? i.e. considering both data/foo/ and data/bar.zip to be valid, and pulling foobar.txt from either one without caring.

    Read the article

  • Safe way for getting/finding a vertex in a graph with custom properties -> good programming practice

    - by Shadow
    Hi, I am writing a graph class using the Boost Graph Library. I use custom vertex and edge properties and a map to store/find the vertices/edges for a given property. I'm satisfied with how it works so far. However, I have a small problem that I'm not sure how to solve "nicely". The class provides a method Vertex getVertex(Vertexproperties v_prop) and a method bool hasVertex(Vertexproperties v_prop). The question now is: would you judge this as good programming practice in C++? My opinion is that I first have to check whether something is available before I can get it. So, before getting a vertex with a desired property, one has to check that hasVertex() would return true for those properties. However, I would like to make getVertex() a bit more robust. At the moment it will segfault when one calls getVertex() directly without first checking whether the graph has a corresponding vertex. A first idea was to return a NULL pointer or a pointer that points past the last stored vertex. For the latter, I haven't found out how to do this. But even with this "robust" version, one would have to check for correctness after getting a vertex, or one would also run into a segfault when dereferencing that vertex pointer, for example. Therefore I am wondering if it is "ok" to let getVertex() segfault if one does not check for availability beforehand?

    Read the article

  • Incremental PCA

    - by smichak
    Hi, Lately, I've been looking into an implementation of an incremental PCA algorithm in python - I couldn't find something that would meet my needs so I did some reading and implemented an algorithm I found in some paper. Here is the module's code - the relevant paper on which it is based is mentioned in the module's documentation. I would appreciate any feedback from people who are interested in this. Micha #!/usr/bin/env python """ Incremental PCA calculation module. Based on P.Hall, D. Marshall and R. Martin "Incremental Eigenalysis for Classification" which appeared in British Machine Vision Conference, volume 1, pages 286-295, September 1998. Principal components are updated sequentially as new observations are introduced. Each new observation (x) is projected on the eigenspace spanned by the current principal components (U) and the residual vector (r = x - U(U.T*x)) is used as a new principal component (U' = [U r]). The new principal components are then rotated by a rotation matrix (R) whose columns are the eigenvectors of the transformed covariance matrix (D=U'.T*C*U) to yield p + 1 principal components. From those, only the first p are selected. """ __author__ = "Micha Kalfon" import numpy as np _ZERO_THRESHOLD = 1e-9 # Everything below this is zero class IPCA(object): """Incremental PCA calculation object. General Parameters: m - Number of variables per observation n - Number of observations p - Dimension to which the data should be reduced """ def __init__(self, m, p): """Creates an incremental PCA object for m-dimensional observations in order to reduce them to a p-dimensional subspace. @param m: Number of variables per observation. @param p: Number of principle components. @return: An IPCA object. """ self._m = float(m) self._n = 0.0 self._p = float(p) self._mean = np.matrix(np.zeros((m , 1), dtype=np.float64)) self._covariance = np.matrix(np.zeros((m, m), dtype=np.float64)) self._eigenvectors = np.matrix(np.zeros((m, p), dtype=np.float64)) self._eigenvalues = np.matrix(np.zeros((1, p), dtype=np.float64)) def update(self, x): """Updates with a new observation vector x. @param x: Next observation as a column vector (m x 1). """ m = self._m n = self._n p = self._p mean = self._mean C = self._covariance U = self._eigenvectors E = self._eigenvalues if type(x) is not np.matrix or x.shape != (m, 1): raise TypeError('Input is not a matrix (%d, 1)' % int(m)) # Update covariance matrix and mean vector and centralize input around # new mean oldmean = mean mean = (n*mean + x) / (n + 1.0) C = (n*C + x*x.T + n*oldmean*oldmean.T - (n+1)*mean*mean.T) / (n + 1.0) x -= mean # Project new input on current p-dimensional subspace and calculate # the normalized residual vector g = U.T*x r = x - (U*g) r = (r / np.linalg.norm(r)) if not _is_zero(r) else np.zeros_like(r) # Extend the transformation matrix with the residual vector and find # the rotation matrix by solving the eigenproblem DR=RE U = np.concatenate((U, r), 1) D = U.T*C*U (E, R) = np.linalg.eigh(D) # Sort eigenvalues and eigenvectors from largest to smallest to get the # rotation matrix R sorter = list(reversed(E.argsort(0))) E = E[sorter] R = R[:,sorter] # Apply the rotation matrix U = U*R # Select only p largest eigenvectors and values and update state self._n += 1.0 self._mean = mean self._covariance = C self._eigenvectors = U[:, 0:p] self._eigenvalues = E[0:p] @property def components(self): """Returns a matrix with the current principal components as columns. 
""" return self._eigenvectors @property def variances(self): """Returns a list with the appropriate variance along each principal component. """ return self._eigenvalues def _is_zero(x): """Return a boolean indicating whether the given vector is a zero vector up to a threshold. """ return np.fabs(x).min() < _ZERO_THRESHOLD if __name__ == '__main__': import sys def pca_svd(X): X = X - X.mean(0).repeat(X.shape[0], 0) [_, _, V] = np.linalg.svd(X) return V N = 1000 obs = np.matrix([np.random.normal(size=10) for _ in xrange(N)]) V = pca_svd(obs) print V[0:2] pca = IPCA(obs.shape[1], 2) for i in xrange(obs.shape[0]): x = obs[i,:].transpose() pca.update(x) U = pca.components print U

    Read the article

  • I don't understand how call_once works

    - by SABROG
    Please help me understand how work call_once Here is thread-safe code. I don't understand why this need Thread Local Storage and global_epoch variables. Variable _fast_pthread_once_per_thread_epoch can be changed to constant/enum like {FAST_PTHREAD_ONCE_INIT, BEING_INITIALIZED, FINISH_INITIALIZED}. Why needed count calls in global_epoch? I think this code can be rewriting with logc: if flag FINISH_INITIALIZED do nothing, else go to block with mutexes and this all. #ifndef FAST_PTHREAD_ONCE_H #define FAST_PTHREAD_ONCE_H #include #include typedef sig_atomic_t fast_pthread_once_t; #define FAST_PTHREAD_ONCE_INIT SIG_ATOMIC_MAX extern __thread fast_pthread_once_t _fast_pthread_once_per_thread_epoch; #ifdef __cplusplus extern "C" { #endif extern void fast_pthread_once( pthread_once_t *once, void (*func)(void) ); inline static void fast_pthread_once_inline( fast_pthread_once_t *once, void (*func)(void) ) { fast_pthread_once_t x = *once; /* unprotected access */ if ( x _fast_pthread_once_per_thread_epoch ) { fast_pthread_once( once, func ); } } #ifdef __cplusplus } #endif #endif FAST_PTHREAD_ONCE_H Source fast_pthread_once.c The source is written in C. The lines of the primary function are numbered for reference in the subsequent correctness argument. #include "fast_pthread_once.h" #include static pthread_mutex_t mu = PTHREAD_MUTEX_INITIALIZER; /* protects global_epoch and all fast_pthread_once_t writes */ static pthread_cond_t cv = PTHREAD_COND_INITIALIZER; /* signalled whenever a fast_pthread_once_t is finalized */ #define BEING_INITIALIZED (FAST_PTHREAD_ONCE_INIT - 1) static fast_pthread_once_t global_epoch = 0; /* under mu */ __thread fast_pthread_once_t _fast_pthread_once_per_thread_epoch; static void check( int x ) { if ( x == 0 ) abort(); } void fast_pthread_once( fast_pthread_once_t *once, void (*func)(void) ) { /*01*/ fast_pthread_once_t x = *once; /* unprotected access */ /*02*/ if ( x _fast_pthread_once_per_thread_epoch ) { /*03*/ check( pthread_mutex_lock(µ) == 0 ); /*04*/ if ( *once == FAST_PTHREAD_ONCE_INIT ) { /*05*/ *once = BEING_INITIALIZED; /*06*/ check( pthread_mutex_unlock(µ) == 0 ); /*07*/ (*func)(); /*08*/ check( pthread_mutex_lock(µ) == 0 ); /*09*/ global_epoch++; /*10*/ *once = global_epoch; /*11*/ check( pthread_cond_broadcast(&cv;) == 0 ); /*12*/ } else { /*13*/ while ( *once == BEING_INITIALIZED ) { /*14*/ check( pthread_cond_wait(&cv;, µ) == 0 ); /*15*/ } /*16*/ } /*17*/ _fast_pthread_once_per_thread_epoch = global_epoch; /*18*/ check (pthread_mutex_unlock(µ) == 0); } } This code from BOOST: #ifndef BOOST_THREAD_PTHREAD_ONCE_HPP #define BOOST_THREAD_PTHREAD_ONCE_HPP // once.hpp // // (C) Copyright 2007-8 Anthony Williams // // Distributed under the Boost Software License, Version 1.0. 
(See // accompanying file LICENSE_1_0.txt or copy at // http://www.boost.org/LICENSE_1_0.txt) #include #include #include #include "pthread_mutex_scoped_lock.hpp" #include #include #include namespace boost { struct once_flag { boost::uintmax_t epoch; }; namespace detail { BOOST_THREAD_DECL boost::uintmax_t& get_once_per_thread_epoch(); BOOST_THREAD_DECL extern boost::uintmax_t once_global_epoch; BOOST_THREAD_DECL extern pthread_mutex_t once_epoch_mutex; BOOST_THREAD_DECL extern pthread_cond_t once_epoch_cv; } #define BOOST_ONCE_INITIAL_FLAG_VALUE 0 #define BOOST_ONCE_INIT {BOOST_ONCE_INITIAL_FLAG_VALUE} // Based on Mike Burrows fast_pthread_once algorithm as described in // http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2007/n2444.html template void call_once(once_flag& flag,Function f) { static boost::uintmax_t const uninitialized_flag=BOOST_ONCE_INITIAL_FLAG_VALUE; static boost::uintmax_t const being_initialized=uninitialized_flag+1; boost::uintmax_t const epoch=flag.epoch; boost::uintmax_t& this_thread_epoch=detail::get_once_per_thread_epoch(); if(epoch #endif I right understand, boost don't use atomic operation, so code from boost not thread-safe?

    Read the article

  • One letter game problem?

    - by Alex K
    Recently at a job interview I was given the following problem: Write a script capable of running on the command line as python It should take in two words on the command line (or optionally if you'd prefer it can query the user to supply the two words via the console). Given those two words: a. Ensure they are of equal length b. Ensure they are both words present in the dictionary of valid words in the English language that you downloaded. If so compute whether you can reach the second word from the first by a series of steps as follows a. You can change one letter at a time b. Each time you change a letter the resulting word must also exist in the dictionary c. You cannot add or remove letters If the two words are reachable, the script should print out the path which leads as a single, shortest path from one word to the other. You can /usr/share/dict/words for your dictionary of words. My solution consisted of using breadth first search to find a shortest path between two words. But apparently that wasn't good enough to get the job :( Would you guys know what I could have done wrong? Thank you so much. import collections import functools import re def time_func(func): import time def wrapper(*args, **kwargs): start = time.time() res = func(*args, **kwargs) timed = time.time() - start setattr(wrapper, 'time_taken', timed) return res functools.update_wrapper(wrapper, func) return wrapper class OneLetterGame: def __init__(self, dict_path): self.dict_path = dict_path self.words = set() def run(self, start_word, end_word): '''Runs the one letter game with the given start and end words. ''' assert len(start_word) == len(end_word), \ 'Start word and end word must of the same length.' self.read_dict(len(start_word)) path = self.shortest_path(start_word, end_word) if not path: print 'There is no path between %s and %s (took %.2f sec.)' % ( start_word, end_word, find_shortest_path.time_taken) else: print 'The shortest path (found in %.2f sec.) is:\n=> %s' % ( self.shortest_path.time_taken, ' -- '.join(path)) def _bfs(self, start): '''Implementation of breadth first search as a generator. The portion of the graph to explore is given on demand using get_neighboors. Care was taken so that a vertex / node is explored only once. ''' queue = collections.deque([(None, start)]) inqueue = set([start]) while queue: parent, node = queue.popleft() yield parent, node new = set(self.get_neighbours(node)) - inqueue inqueue = inqueue | new queue.extend([(node, child) for child in new]) @time_func def shortest_path(self, start, end): '''Returns the shortest path from start to end using bfs. ''' assert start in self.words, 'Start word not in dictionnary.' assert end in self.words, 'End word not in dictionnary.' paths = {None: []} for parent, child in self._bfs(start): paths[child] = paths[parent] + [child] if child == end: return paths[child] return None def get_neighbours(self, word): '''Gets every word one letter away from the a given word. We do not keep these words in memory because bfs accesses a given vertex only once. ''' neighbours = [] p_word = ['^' + word[0:i] + '\w' + word[i+1:] + '$' for i, w in enumerate(word)] p_word = '|'.join(p_word) for w in self.words: if w != word and re.match(p_word, w, re.I|re.U): neighbours += [w] return neighbours def read_dict(self, size): '''Loads every word of a specific size from the dictionnary into memory. 
''' for l in open(self.dict_path): l = l.decode('latin-1').strip().lower() if len(l) == size: self.words.add(l) if __name__ == '__main__': import sys if len(sys.argv) not in [3, 4]: print 'Usage: python one_letter_game.py start_word end_word' else: g = OneLetterGame(dict_path = '/usr/share/dict/words') try: g.run(*sys.argv[1:]) except AssertionError, e: print e

    Read the article

  • Why doesn't my QsciLexerCustom subclass work in PyQt4 using QsciScintilla?

    - by Jon Watte
    My end goal is to get Erlang syntax highlighting in QsciScintilla using PyQt4 and Python 2.6. I'm running on Windows 7, but will also need Ubuntu support. PyQt4 is missing the necessary wrapper code for the Erlang lexer/highlighter that "base" scintilla has, so I figured I'd write a lightweight one on top of QsciLexerCustom. It's a little bit problematic, because the Qsci wrapper seems to really want to talk about line+index rather than offset-from-start when getting/setting subranges of text. Meanwhile, the lexer gets arguments as offset-from-start. For now, I get a copy of the entire text, and split that up as appropriate. I have the following lexer, and I apply it with setLexer(). It gets all the appropriate calls when I open a new file and sets this as the lexer, and prints a bunch of appropriate lines based on what it's doing... but there is no styling in the document. I tried making all the defined styles red, and the document is still stubbornly black-on-white, so apparently the styles don't really "take effect" What am I doing wrong? If nobody here knows, what's the appropriate discussion forum where people might actually know these things? (It's an interesting intersection between Python, Qt and Scintilla, so I imagine the set of people who would know is small) Let's assume prefs.declare() just sets up a dict that returns the value for the given key (I've verified this -- it's not the problem). Let's assume scintilla is reasonably properly constructed into its host window QWidget. Specifically, if I apply a bundled lexer (such as QsciLexerPython), it takes effect and does show styled text. prefs.declare('font.name.margin', "MS Dlg") prefs.declare('font.size.margin', 8) prefs.declare('font.name.code', "Courier New") prefs.declare('font.size.code', 10) prefs.declare('color.editline', "#d0e0ff") class LexerErlang(Qsci.QsciLexerCustom): def __init__(self, obj = None): Qsci.QsciLexerCustom.__init__(self, obj) self.sci = None self.plainFont = QtGui.QFont() self.plainFont.setPointSize(int(prefs.get('font.size.code'))) self.plainFont.setFamily(prefs.get('font.name.code')) self.marginFont = QtGui.QFont() self.marginFont.setPointSize(int(prefs.get('font.size.code'))) self.marginFont.setFamily(prefs.get('font.name.margin')) self.boldFont = QtGui.QFont() self.boldFont.setPointSize(int(prefs.get('font.size.code'))) self.boldFont.setFamily(prefs.get('font.name.code')) self.boldFont.setBold(True) self.styles = [ Qsci.QsciStyle(0, QtCore.QString("base"), QtGui.QColor("#000000"), QtGui.QColor("#ffffff"), self.plainFont, True), Qsci.QsciStyle(1, QtCore.QString("comment"), QtGui.QColor("#008000"), QtGui.QColor("#eeffee"), self.marginFont, True), Qsci.QsciStyle(2, QtCore.QString("keyword"), QtGui.QColor("#000080"), QtGui.QColor("#ffffff"), self.boldFont, True), Qsci.QsciStyle(3, QtCore.QString("string"), QtGui.QColor("#800000"), QtGui.QColor("#ffffff"), self.marginFont, True), Qsci.QsciStyle(4, QtCore.QString("atom"), QtGui.QColor("#008080"), QtGui.QColor("#ffffff"), self.plainFont, True), Qsci.QsciStyle(5, QtCore.QString("macro"), QtGui.QColor("#808000"), QtGui.QColor("#ffffff"), self.boldFont, True), Qsci.QsciStyle(6, QtCore.QString("error"), QtGui.QColor("#000000"), QtGui.QColor("#ffd0d0"), self.plainFont, True), ] print("LexerErlang created") def description(self, ix): for i in self.styles: if i.style() == ix: return QtCore.QString(i.description()) return QtCore.QString("") def setEditor(self, sci): self.sci = sci Qsci.QsciLexerCustom.setEditor(self, sci) print("LexerErlang.setEditor()") def 
styleText(self, start, end): print("LexerErlang.styleText(%d,%d)" % (start, end)) lines = self.getText(start, end) offset = start self.startStyling(offset, 0) print("startStyling()") for i in lines: if i == "": self.setStyling(1, self.styles[0]) print("setStyling(1)") offset += 1 continue if i[0] == '%': self.setStyling(len(i)+1, self.styles[1]) print("setStyling(%)") offset += len(i)+1 continue self.setStyling(len(i)+1, self.styles[0]) print("setStyling(n)") offset += len(i)+1 def getText(self, start, end): data = self.sci.text() print("LexerErlang.getText(): " + str(len(data)) + " chars") return data[start:end].split('\n') Applied to the QsciScintilla widget as follows: _lexers = { 'erl': (Q.SCLEX_ERLANG, LexerErlang), 'hrl': (Q.SCLEX_ERLANG, LexerErlang), 'html': (Q.SCLEX_HTML, Qsci.QsciLexerHTML), 'css': (Q.SCLEX_CSS, Qsci.QsciLexerCSS), 'py': (Q.SCLEX_PYTHON, Qsci.QsciLexerPython), 'php': (Q.SCLEX_PHP, Qsci.QsciLexerHTML), 'inc': (Q.SCLEX_PHP, Qsci.QsciLexerHTML), 'js': (Q.SCLEX_CPP, Qsci.QsciLexerJavaScript), 'cpp': (Q.SCLEX_CPP, Qsci.QsciLexerCPP), 'h': (Q.SCLEX_CPP, Qsci.QsciLexerCPP), 'cxx': (Q.SCLEX_CPP, Qsci.QsciLexerCPP), 'hpp': (Q.SCLEX_CPP, Qsci.QsciLexerCPP), 'c': (Q.SCLEX_CPP, Qsci.QsciLexerCPP), 'hxx': (Q.SCLEX_CPP, Qsci.QsciLexerCPP), 'tpl': (Q.SCLEX_CPP, Qsci.QsciLexerCPP), 'xml': (Q.SCLEX_XML, Qsci.QsciLexerXML), } ... inside my document window class ... def addContentsDocument(self, contents, title): handler = self.makeScintilla() handler.title = title sci = handler.sci sci.append(contents) self.tabWidget.addTab(sci, title) self.tabWidget.setCurrentWidget(sci) self.applyLexer(sci, title) EventBus.bus.broadcast('command.done', {'text': 'Opened ' + title}) return handler def applyLexer(self, sci, title): (language, lexer) = language_and_lexer_from_title(title) if lexer: l = lexer() print("making lexer: " + str(l)) sci.setLexer(l) else: print("setting lexer by id: " + str(language)) sci.SendScintilla(Qsci.QsciScintillaBase.SCI_SETLEXER, language) linst = sci.lexer() print("lexer: " + str(linst)) def makeScintilla(self): sci = Qsci.QsciScintilla() sci.setUtf8(True) sci.setTabIndents(True) sci.setIndentationsUseTabs(False) sci.setIndentationWidth(4) sci.setMarginsFont(self.smallFont) sci.setMarginWidth(0, self.smallFontMetrics.width('00000')) sci.setFont(self.monoFont) sci.setAutoIndent(True) sci.setBraceMatching(Qsci.QsciScintilla.StrictBraceMatch) handler = SciHandler(sci) self.handlers[sci] = handler sci.setMarginLineNumbers(0, True) sci.setCaretLineVisible(True) sci.setCaretLineBackgroundColor(QtGui.QColor(prefs.get('color.editline'))) return handler Let's assume the rest of the application works, too (because it does :-)

    Read the article

  • wxPython problems with wrapping StaticText

    - by Scott B
    Hello. I am having an issue with wxPython. A simplified version of the code is posted below (white space, comments, etc removed to reduce size - but the general format to my program is kept roughly the same). When I run the script, the static text correctly wraps as it should, but the other items in the panel do not move down (they act as if the statictext is only one line and thus not everything is visible). If I manually resize the window/frame, even just a tiny amount, everything gets corrected and displays as it is should. I took screen shots to show this behavior, but I just created this account and thus don't have the required 10 reputation points to be allowed to post pictures. Why does it not display correctly to begin with? I've tried all sorts of combination's of GetParent().Refresh() or Update() and GetTopLevelParent().Update() or Refresh(). I've tried everything I can think of but cannot get it to display correctly without manually resizing the frame/window. Once re-sized, it works exactly as I want it to. Information: Windows XP Python 2.5.2 wxPython 2.8.11.0 (msw-unicode) Any suggestions? Thanks! Code: #! /usr/bin/python import wx class StaticWrapText(wx.PyControl): def __init__(self, parent, id=wx.ID_ANY, label='', pos=wx.DefaultPosition, size=wx.DefaultSize, style=wx.NO_BORDER, validator=wx.DefaultValidator, name='StaticWrapText'): wx.PyControl.__init__(self, parent, id, pos, size, style, validator, name) self.statictext = wx.StaticText(self, wx.ID_ANY, label, style=style) self.wraplabel = label #self.wrap() def wrap(self): self.Freeze() self.statictext.SetLabel(self.wraplabel) self.statictext.Wrap(self.GetSize().width) self.Thaw() def DoGetBestSize(self): self.wrap() #print self.statictext.GetSize() self.SetSize(self.statictext.GetSize()) return self.GetSize() class TestPanel(wx.Panel): def __init__(self, *args, **kwargs): # Init the base class wx.Panel.__init__(self, *args, **kwargs) self.createControls() def createControls(self): # --- Panel2 ------------------------------------------------------------- self.Panel2 = wx.Panel(self, -1) msg1 = 'Below is a List of Files to be Processed' staticBox = wx.StaticBox(self.Panel2, label=msg1) Panel2_box1_v1 = wx.StaticBoxSizer(staticBox, wx.VERTICAL) Panel2_box2_h1 = wx.BoxSizer(wx.HORIZONTAL) Panel2_box3_v1 = wx.BoxSizer(wx.VERTICAL) self.wxL_Inputs = wx.ListBox(self.Panel2, wx.ID_ANY, style=wx.LB_EXTENDED) sz = dict(size=(120,-1)) wxB_AddFile = wx.Button(self.Panel2, label='Add File', **sz) wxB_DeleteFile = wx.Button(self.Panel2, label='Delete Selected', **sz) wxB_ClearFiles = wx.Button(self.Panel2, label='Clear All', **sz) Panel2_box3_v1.Add(wxB_AddFile, 0, wx.TOP, 0) Panel2_box3_v1.Add(wxB_DeleteFile, 0, wx.TOP, 0) Panel2_box3_v1.Add(wxB_ClearFiles, 0, wx.TOP, 0) Panel2_box2_h1.Add(self.wxL_Inputs, 1, wx.ALL|wx.EXPAND, 2) Panel2_box2_h1.Add(Panel2_box3_v1, 0, wx.ALL|wx.EXPAND, 2) msg = 'This is a long line of text used to test the autowrapping ' msg += 'static text message. ' msg += 'This is a long line of text used to test the autowrapping ' msg += 'static text message. ' msg += 'This is a long line of text used to test the autowrapping ' msg += 'static text message. ' msg += 'This is a long line of text used to test the autowrapping ' msg += 'static text message. 
' staticMsg = StaticWrapText(self.Panel2, label=msg) Panel2_box1_v1.Add(staticMsg, 0, wx.ALL|wx.EXPAND, 2) Panel2_box1_v1.Add(Panel2_box2_h1, 1, wx.ALL|wx.EXPAND, 0) self.Panel2.SetSizer(Panel2_box1_v1) # --- Combine Everything ------------------------------------------------- final_vbox = wx.BoxSizer(wx.VERTICAL) final_vbox.Add(self.Panel2, 1, wx.ALL|wx.EXPAND, 2) self.SetSizerAndFit(final_vbox) class TestFrame(wx.Frame): def __init__(self, *args, **kwargs): # Init the base class wx.Frame.__init__(self, *args, **kwargs) panel = TestPanel(self) self.SetClientSize(wx.Size(500,500)) self.Center() class wxFileCleanupApp(wx.App): def __init__(self, *args, **kwargs): # Init the base class wx.App.__init__(self, *args, **kwargs) def OnInit(self): # Create the frame, center it, and show it frame = TestFrame(None, title='Test Frame') frame.Show() return True if __name__ == '__main__': app = wxFileCleanupApp() app.MainLoop() EDIT: See my post below for a solution that works!

    Read the article

  • Data munging and data import scripting

    - by morpheous
    I need to write some scripts to carry out some tasks on my server (running Ubuntu Server 8.04 LTS). The tasks are to be run periodically, so I will be running the scripts as cron jobs. I have divided the tasks into "group A" and "group B" because (in my mind at least) they are a bit different. Task Group A: import data from a file and possibly reformat it - by reformatting, I mean doing things like sanitizing the data, possibly normalizing it and/or running calculations on 'columns' of the data - then import the munged data into a database. For now, I am using MySQL for the vast majority of imports, although some files will be imported into an SQLite database. Note: the files will be mostly text files, although some of the files are in a binary format (my own proprietary format, written by a C++ application I developed). Task Group B: extract data from the database, perform calculations on the data and either insert or update tables in the database. My coding experience is primarily as a C/C++ developer, although I have been using PHP as well for the last 2 years or so. I am from a Windows background so I am still finding my feet in the Linux environment. My question is this - I need to write scripts to perform the tasks I described above. Although I suppose I could write a few C++ applications to be used in the shell scripts, I think it may be better to write them in a scripting language (maybe this is a flawed assumption?). My thinking is that it would be easier to modify things in a script - no need to rebuild etc. for changes to functionality. Additionally, data munging in C++ tends to involve more lines of code than "natural" scripting languages such as Perl, Python etc. Assuming that the majority of people on here agree that scripting is the way to go, herein lies my dilemma: which scripting language to use for the tasks above (given my background). My gut instinct tells me that Perl (shudder) would be the most obvious choice for performing all of the above tasks. BUT (and that is a big BUT) the mere mention of Perl makes my toes curl, as I had a very, very bad experience with it a while back. The syntax seems quite unnatural to me - despite how many times I have tried to learn it - so if possible, I would really like to give it a miss. PHP (which I already know), I am also not sure is a good candidate for scripting on the CLI (I have not seen many examples of how to do this, so I may be wrong). The last thing I must mention is that IF I have to learn a new language in order to do this, I cannot afford (time constraint) to spend more than a day learning the key commands/features required in order to do this (I can always learn the details of the language later, once I have actually deployed the scripts). So, which scripting language would you recommend (PHP, Python, Perl, [insert your favorite here]) - and most importantly WHY? Or, should I just stick to writing little C++ applications that I call in a shell script? Lastly, if you have suggested a scripting language, can you please show with a FEW lines (Perl mongers - I'm looking in your direction [nothing too cryptic!] ;) ) how I can use the language you suggested to do what I want to do. Hopefully, the lines you present will convince me that it can be done easily and elegantly in the language you suggested.

    Read the article
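
    For the question above, a minimal Python sketch of a "group A" style task: read a delimited text file, sanitize the rows, and load them into a database. The input file name, table layout and the choice of the standard-library sqlite3 module (rather than MySQL) are all assumptions for illustration.

        import csv
        import sqlite3

        def sanitize(row):
            """Strip whitespace and coerce the numeric column; reject malformed rows."""
            try:
                return (row[0].strip(), float(row[1]))
            except (IndexError, ValueError):
                return None

        conn = sqlite3.connect("measurements.db")
        conn.execute("CREATE TABLE IF NOT EXISTS readings (sensor TEXT, value REAL)")

        with open("readings.csv") as f:            # hypothetical input file
            rows = [sanitize(r) for r in csv.reader(f)]
            clean = [r for r in rows if r is not None]

        conn.executemany("INSERT INTO readings (sensor, value) VALUES (?, ?)", clean)
        conn.commit()
        conn.close()

    Scheduling it is then a single crontab line, e.g. 0 2 * * * /usr/bin/python /home/user/import_readings.py (paths hypothetical).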

  • TDD - beginner problems and stumbling blocks

    - by Noufal Ibrahim
    While I've written unit tests for most of the code I've done, I only recently got my hands on a copy of TDD by Example by Kent Beck. I have always regretted certain design decisions I made, since they prevented the application from being 'testable'. I read through the book and while some of it looks alien, I felt that I could manage it and decided to try it out on my current project, which is basically a client/server system where the two pieces communicate via USB: one on the gadget and the other on the host. The application is in Python. I started off and very soon got entangled in a mess of rewrites and tiny tests which I later figured didn't really test anything. I threw away most of them and now have a working application for which the tests have all coagulated into just 2. Based on my experiences, I have a few questions which I'd like to ask. I gained some information from http://stackoverflow.com/questions/1146218/new-to-tdd-are-there-sample-applications-with-tests-to-show-how-to-do-tdd but have some specific questions which I'd like answers to/discussion on. Kent Beck uses a list which he adds to and strikes out from to guide the development process. How do you make such a list? I initially had a few items like "server should start up", "server should abort if channel is not available" etc. but they got mixed and finally now, it's just something like "client should be able to connect to server" (which subsumed server startup etc.). How do you handle rewrites? I initially selected a half-duplex system based on named pipes so that I could develop the application logic on my own machine and then later add the USB communication part. It then moved to become a socket-based thing and then moved from using raw sockets to using the Python SocketServer module. Each time things changed, I found that I had to rewrite considerable parts of the tests, which was annoying. I'd figured that the tests would be a somewhat invariable guide during my development. They just felt like more code to handle. I needed a client and a server to communicate through the channel to test either side. I could mock one of the sides to test the other, but then the whole channel wouldn't be tested and I worry that I'd miss that. This detracted from the whole red/green/refactor rhythm. Is this just lack of experience or am I doing something wrong? The "fake it till you make it" approach left me with a lot of messy code that I later spent a lot of time refactoring and cleaning up. Is this the way things work? At the end of the session, I now have my client and server running with around 3 or 4 unit tests. It took me around a week to do it. I think I could have done it in a day if I were writing the unit tests after the code. I fail to see the gain. I'm looking for comments and advice from people who have implemented large non-trivial projects completely (or almost completely) using this methodology. It makes sense to me to follow this way after I have something already running and want to add a new feature, but doing it from scratch seems too tiresome and not worth the effort. P.S.: Please let me know if this should be community wiki and I'll mark it like that. Update 0: All the answers were equally helpful. I picked the one I did because it resonated with my experiences the most. Update 1: Practice Practice Practice!

    Read the article
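
    On the mocking point raised in the question above, a minimal sketch of the kind of test that exercises client logic against a fake transport instead of the real USB/socket link. Every name here (FakeChannel, Client, the HELLO/OK protocol) is hypothetical, not taken from the original project; the channel itself would still need its own integration-level test.

        import unittest

        class FakeChannel(object):
            """Stands in for the USB/socket transport; records what the client sends."""
            def __init__(self, canned_reply):
                self.sent = []
                self.canned_reply = canned_reply
            def send(self, data):
                self.sent.append(data)
            def receive(self):
                return self.canned_reply

        class Client(object):
            """Toy client: says hello and reports whether the server acknowledged."""
            def __init__(self, channel):
                self.channel = channel
            def connect(self):
                self.channel.send(b"HELLO")
                return self.channel.receive() == b"OK"

        class ClientTest(unittest.TestCase):
            def test_client_sends_hello_and_accepts_ok(self):
                channel = FakeChannel(canned_reply=b"OK")
                self.assertTrue(Client(channel).connect())
                self.assertEqual(channel.sent, [b"HELLO"])

        if __name__ == "__main__":
            unittest.main()

    The design choice is the usual trade-off: fast unit tests cover the protocol logic, and a small number of slower end-to-end tests cover the real channel.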

  • Sphinx error: Unknown directive type "automodule" or "autoclass"

    - by user1130381
    I need to document my Python project using Sphinx, but I can't use autodoc. When I configured my project I selected the "extension autodoc" option, but now if I use .. autoclass:: Class, I get an error: ERROR: Unknown directive type "autoclass". I have configured the PYTHONPATH and that part is fine now, but I still have this problem. My index file is: .. ATOM documentation master file, created by sphinx-quickstart on Thu Nov 22 15:24:42 2012. You can adapt this file completely to your liking, but it should at least contain the root toctree directive. Welcome to ATOM's documentation! Contents: .. toctree:: :maxdepth: 2 .. automodule:: atom Indices and tables :ref:genindex :ref:modindex :ref:search Thank you - I need someone to tell me how to find the problem, or how I can fix this! Sorry for my English, I am just starting to learn it.

    Read the article
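
    The usual causes of that error are autodoc not being listed in conf.py, or the package not being importable when Sphinx runs. A minimal sketch of the two conf.py pieces to check; the relative path is an assumption about where the atom package lives.

        # conf.py -- excerpt

        import os
        import sys

        # Make the package importable by autodoc; adjust the relative path to
        # wherever the 'atom' package actually lives (assumption: one level up).
        sys.path.insert(0, os.path.abspath('..'))

        # 'sphinx.ext.autodoc' must appear here, otherwise .. automodule:: and
        # .. autoclass:: are reported as unknown directives.
        extensions = ['sphinx.ext.autodoc']

    With that in place, .. automodule:: atom (optionally with :members:) in index.rst should be picked up on the next sphinx-build run.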

  • Adaptive Threshold in OpenCV (Version 1 - the swig version)

    - by Neil Benn
    Hello, I'm trying to get adaptive thresholding working in the Python binding to OpenCV (the SWIG one - I cannot get OpenCV 2.0 working, as I am using a BeagleBoard and the cross compiling is not working yet). I have a greyscale image (ccg.jpg) and the following code: import opencv from opencv import highgui img = highgui.cvLoadImage("ccg.png") img_bw = opencv.cvCreateImage(opencv.cvGetSize(img), opencv.IPL_DEPTH_8U, 1) opencv.cvAdaptiveThreshold(img, img_bw, 125, opencv.CV_ADAPTIVE_THRESH_MEAN_C, opencv.CV_THRESH_BINARY, 7, 10) When I run this I get the error: RuntimeError: openCV Error: Status=Formats of input arguments do not match function name=cvAdaptiveThreshold error message= file_name=cvadapthresh.cpp line=122 I've also tried making the source and dest arguments the same (greyscale), and I get the error 'Unsupported format or combination of formats'. Does anyone have any clues as to where I could be going wrong? Cheers, Neil

    Read the article
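
    The "formats of input arguments do not match" status usually means cvAdaptiveThreshold is being handed a 3-channel image: cvLoadImage returns a BGR image by default even for a grey PNG, while the destination is 8-bit single-channel. A sketch of one possible fix under the old SWIG bindings, loading the image as single-channel greyscale first; this is untested against that exact binding version and keeps the question's threshold parameters as-is.

        import opencv
        from opencv import highgui

        # Second argument 0 requests a single-channel (greyscale) load, so the
        # source already matches the 8-bit, 1-channel destination image.
        img = highgui.cvLoadImage("ccg.png", 0)
        img_bw = opencv.cvCreateImage(opencv.cvGetSize(img), opencv.IPL_DEPTH_8U, 1)

        opencv.cvAdaptiveThreshold(img, img_bw, 125,
                                   opencv.CV_ADAPTIVE_THRESH_MEAN_C,
                                   opencv.CV_THRESH_BINARY, 7, 10)

        # Write the result out just to inspect it.
        highgui.cvSaveImage("ccg_bw.png", img_bw)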

  • Redirect logging output using custom logging handler

    - by mridang
    Hi Guys, I'm using a module in my Python app that writes a lot of messages using the logging module. Initially I was using this in a console application and it was pretty easy to get the logging output to display on the console using a console handler. Now I've developed a GUI version of my app using wxPython and I'd like to display all the logging output in a custom control - a multi-line TextCtrl. Is there a way I could create a custom logging handler so I can redirect all the logging output there and display the logging messages wherever/however I want - in this case, in a wxPython app? Thanks

    Read the article
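
    A custom handler is straightforward: subclass logging.Handler and have emit() append the formatted record to the control. A minimal sketch; the TextCtrl attribute name and the use of wx.CallAfter for thread safety are assumptions, not details from the original app.

        import logging
        import wx

        class WxTextCtrlHandler(logging.Handler):
            """Routes log records to a multi-line wx.TextCtrl."""
            def __init__(self, text_ctrl):
                logging.Handler.__init__(self)
                self.text_ctrl = text_ctrl

            def emit(self, record):
                msg = self.format(record)
                # CallAfter keeps GUI updates on the main thread even if
                # logging happens from worker threads.
                wx.CallAfter(self.text_ctrl.AppendText, msg + "\n")

        # Typical wiring inside the frame's __init__ (names hypothetical):
        # handler = WxTextCtrlHandler(self.log_ctrl)
        # handler.setFormatter(logging.Formatter("%(asctime)s %(levelname)s %(message)s"))
        # logging.getLogger().addHandler(handler)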

  • PHP Frameworks (CodeIgniter, Yii, CakePHP) vs. Django

    - by niting
    I have to develop a site which has to accommodate around 2000 users a day, and speed is a criterion for it. Moreover, the site is a user-oriented one where the user will be able to log in, check his profile and register for specific events he/she wants to participate in. The site is to be hosted on a VPS server. Although I have pretty good experience with Python and PHP, I have no idea how to use either of the frameworks. We have plenty of time to experiment and learn one of the above frameworks. Could you please specify which one would be preferred for such a scenario considering speed, features, and security of the site. Thanks, niting

    Read the article

  • Easy_install of wxPython has "setup script" error

    - by vgm64
    I have an install of Python 2.5 that Fink placed in /sw/bin/. I use the easy_install command sudo /sw/bin/easy_install wxPython to try to install wxPython, and I get an error while trying to process wxPython-src-2.8.9.1.tar.bz2 that there is no setup script. easy_install has worked for several other installations until this one. Any help on why it's busting now? EDIT: The error occurs before dumping back to the shell prompt. Reading http://wxPython.org/download.php Best match: wxPython src-2.8.9.1 Downloading http://downloads.sourceforge.net/wxpython/wxPython-src-2.8.9.1.tar.bz2 Processing wxPython-src-2.8.9.1.tar.bz2 error: Couldn't find a setup script in /tmp/easy_install-tNg6FG/wxPython-src-2.8.9.1.tar.bz2

    Read the article

  • urlencode an array of values

    - by Ikke
    I'm trying to urlencode a dictionary in Python with urllib.urlencode. The problem is, I have to encode an array. The result needs to be: criterias%5B%5D=member&criterias%5B%5D=issue #unquoted: criterias[]=member&criterias[]=issue But the result I get is: criterias=%5B%27member%27%2C+%27issue%27%5D #unquoted: criterias=['member',+'issue'] I have tried several things, but I can't seem to get the right result. import urllib criterias = ['member', 'issue'] params = { 'criterias[]': criterias, } print urllib.urlencode(params) If I use cgi.parse_qs to decode a correct query string, I get this as the result: {'criterias[]': ['member', 'issue']} But if I encode that result, I get the wrong result back. Is there a way to produce the expected result?

    Read the article
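
    For the question above, urllib.urlencode has a second argument for exactly this case: with doseq=True each element of a sequence value gets its own key=value pair, which yields the criterias[]=... form being asked for. A short sketch, in Python 2 syntax to match the question:

        import urllib

        criterias = ['member', 'issue']
        params = {'criterias[]': criterias}

        # doseq=True expands list values into repeated parameters.
        print urllib.urlencode(params, doseq=True)
        # criterias%5B%5D=member&criterias%5B%5D=issue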

  • Why allow concatenation of string literals?

    - by Caspin
    I recently got bit by a subtle bug. const char * int2str[] = { "zero", // 0 "one", // 1 "two" // 2 "three",// 3 nullptr }; assert( int2str[1] == "one"_s ); // passes assert( int2str[2] == "two"_s ); // fails If you have godlike code review powers you'll notice I forgot the , after "two". After the considerable effort to find that bug I've got to ask why would anyone ever want this behavior? I can see how this might be useful for macro magic, but then why is this a "feature" in a modern language like Python? Have you ever used string literal concatenation in production code?

    Read the article

  • django admin: Add a "remove file" field for Image- or FileFields

    - by w-
    I was hunting around the net for a way to easily allow users to blank out imagefield/filefields they have set in the admin. I found this: http://www.djangosnippets.org/snippets/894/ What was really interesting to me here was the code posted in the comment by rfugger: remove_the_file = forms.BooleanField(required=False) def save(self, *args, **kwargs): object = super(self.__class__, self).save(*args, **kwargs) if self.cleaned_data.get('remove_the_file'): object.the_file = '' return object When I tried to use this in my own form, I basically added this to my admin.py, which already had a BlahAdmin: class BlahModelForm(forms.ModelForm): class Meta: model = Blah remove_img01 = forms.BooleanField(required=False) def save(self, *args, **kwargs): object = super(self.__class__, self).save(*args, **kwargs) if self.cleaned_data.get('remove_img01'): object.img01 = '' return object When I run it I get this error: maximum recursion depth exceeded while calling a Python object at this line: object = super(self.__class__, self).save(*args, **kwargs) When I think about it for a bit, it seems obvious that it is just infinitely calling itself, causing the error. My problem is I can't figure out the correct way of doing this. Any suggestions? Thanks

    Read the article
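
    The recursion in the question above comes from super(self.__class__, self): the admin generates a subclass of the form, so self.__class__ is that generated subclass and the super() call resolves straight back into the same save() method forever. Naming the class explicitly breaks the loop. A sketch of the corrected form, with the field and model names taken from the question and the import path an assumption:

        from django import forms
        from myapp.models import Blah  # assumption: replace 'myapp' with the real app

        class BlahModelForm(forms.ModelForm):
            class Meta:
                model = Blah

            remove_img01 = forms.BooleanField(required=False)

            def save(self, *args, **kwargs):
                # Name the class explicitly instead of using self.__class__,
                # otherwise a generated subclass re-enters this method forever.
                obj = super(BlahModelForm, self).save(*args, **kwargs)
                if self.cleaned_data.get('remove_img01'):
                    obj.img01 = ''
                return obj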

  • Comparison of the multiprocessing module and pyro?

    - by fivebells
    I use pyro for basic management of parallel jobs on a compute cluster. I just moved to a cluster where I will be responsible for using all the cores on each compute node. (On previous clusters, each core has been a separate node.) The python multiprocessing module seems like a good fit for this. I notice it can also be used for remote-process communication. If anyone has used both frameworks for remote-process communication, I'd be grateful to hear how they stack up against each other. The obvious benefit of the multiprocessing module is that it's built-in from 2.6. Apart from that, it's hard for me to tell which is better.

    Read the article
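
    On the remote-communication point in the question above, multiprocessing exposes this through multiprocessing.managers. A minimal sketch of a server sharing a job queue with remote clients; the address, port and authkey are placeholder values, and the Python 2 Queue module matches the era of the question.

        from multiprocessing.managers import BaseManager
        import Queue  # Python 2; use 'queue' on Python 3

        job_queue = Queue.Queue()

        class JobManager(BaseManager):
            pass

        # Expose the queue under a typeid that clients can request.
        JobManager.register('get_jobs', callable=lambda: job_queue)

        if __name__ == '__main__':
            # Server side: serve the queue on the network.
            manager = JobManager(address=('', 50000), authkey='change-me')
            manager.get_server().serve_forever()

        # Client side (separate process or host), sketched:
        # m = JobManager(address=('server-host', 50000), authkey='change-me')
        # m.connect()
        # jobs = m.get_jobs()
        # jobs.put('work item')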

  • User management API

    - by Akshey
    Hi, I am developing an application suite where users will need to connect to a server and, depending on their account type, will be given some services. The server will run Linux. Can you please suggest some user management APIs which I can use to develop the server program? By user management I mean user authentication and other related functionality. I prefer to work in C++ or Python, but any other language should not be a problem. Please note that this application suite is not web based. Due to security issues, I do not want to give each user a separate account on the Linux server. Thanks, Akshey

    Read the article

  • Getting BeautifulSoup to find a specific <p>

    - by Ryan
    I'm trying to put together a basic HTML scraper for a variety of scientific journal websites, specifically trying to get the abstract or introductory paragraph. The current journal I'm working on is Nature, and the article I've been using as my sample can be seen at http://www.nature.com/nature/journal/v463/n7284/abs/nature08715.html. I can't get the abstract out of that page, however. I'm searching for everything between the <p class="lead">...</p> tags, but I can't seem to figure out how to isolate them. I thought it would be something simple like from BeautifulSoup import BeautifulSoup import re import urllib2 address="http://www.nature.com/nature/journal/v463/n7284/full/nature08715.html" html = urllib2.urlopen(address).read() soup = BeautifulSoup(html) abstract = soup.find('p', attrs={'class' : 'lead'}) print abstract Using Python 2.5, BeautifulSoup 3.0.8, running this returns 'None'. I have no option of using anything else that needs to be compiled/installed (like lxml). Is BeautifulSoup confused, or am I?

    Read the article
