Search Results

Search found 13450 results on 538 pages for 'recent items'.

  • How do you read a file line by line in your language of choice?

    - by Jon Ericson
    I got inspired to try out Haskell again based on a recent answer. My big block is that reading a file line by line (a task made simple in languages such as Perl) seems complicated in a functional language. How do you read a file line by line in your favorite language? So that we are comparing apples to other types of apples, please write a program that numbers the lines of the input file. So if your input is:

        Line the first.
        Next line.
        End of communication.

    The output would look like:

        1 Line the first.
        2 Next line.
        3 End of communication.

    I will post my Haskell program as an example. Ken commented that this question does not specify how errors should be handled. I'm not overly concerned about it because:

    - Most answers did the obvious thing and read from stdin and wrote to stdout. The nice thing is that it puts the onus on the user to redirect those streams the way they want. So if stdin is redirected from a non-existent file, the shell will take care of reporting the error, for instance.
    - The question is more aimed at how a language does IO than how it handles exceptions.

    But if necessary error handling is missing in an answer, feel free to either edit the code to fix it or make a note in the comments.
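
    For illustration, here is a minimal sketch of the line-numbering task in Python (this snippet is mine, not one of the thread's answers). It reads stdin and writes stdout, following the redirect-friendly convention described above:

        import sys

        # Number each line of standard input, starting at 1.
        for number, line in enumerate(sys.stdin, 1):
            sys.stdout.write("%d %s" % (number, line))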

  • A couple of questions about NHibernate's GuidCombGenerator

    - by Eyvind
    The following code can be found in the NHibernate.Id.GuidCombGenerator class. The algorithm creates sequential (comb) guids based on combining a "random" guid with a DateTime. I have a couple of questions related to the lines that I have marked with *1) and *2) below:

        private Guid GenerateComb()
        {
            byte[] guidArray = Guid.NewGuid().ToByteArray();

            // *1)
            DateTime baseDate = new DateTime(1900, 1, 1);
            DateTime now = DateTime.Now;

            // Get the days and milliseconds which will be used to build the byte string
            TimeSpan days = new TimeSpan(now.Ticks - baseDate.Ticks);
            TimeSpan msecs = now.TimeOfDay;

            // *2)
            // Convert to a byte array
            // Note that SQL Server is accurate to 1/300th of a millisecond so we divide by 3.333333
            byte[] daysArray = BitConverter.GetBytes(days.Days);
            byte[] msecsArray = BitConverter.GetBytes((long) (msecs.TotalMilliseconds / 3.333333));

            // Reverse the bytes to match SQL Servers ordering
            Array.Reverse(daysArray);
            Array.Reverse(msecsArray);

            // Copy the bytes into the guid
            Array.Copy(daysArray, daysArray.Length - 2, guidArray, guidArray.Length - 6, 2);
            Array.Copy(msecsArray, msecsArray.Length - 4, guidArray, guidArray.Length - 4, 4);

            return new Guid(guidArray);
        }

    First of all, for *1), wouldn't it be better to have a more recent date as the baseDate, e.g. 2000-01-01, so as to make room for more values in the future? Regarding *2), why would we care about the accuracy for DateTimes in SQL Server, when we only are interested in the bytes of the datetime anyway, and never intend to store the value in an SQL Server datetime field? Wouldn't it be better to use all the accuracy available from DateTime.Now?

  • GitHub solution for personal repo

    - by Luke Maurer
    So I've got my private SVN repo on my home server, and it has maybe 30 different modules thrown together in it, ranging from abortive throw-away larks to a few endeavors that might actually go somewhere someday. But a recent filesystem failure (BTW, never ever EVER use XFS without a battery-backed hardware RAID) has me spooked and thinking of using a DVCS for all that. I've also just had quite the swig of the Git koolaid, and I've been working with GitHub of late, so that's where I'm looking right now.

    Of course, it would be silly to shell out major cash for a separate private Git repo for every little project, and I don't want to have to be selective about what I throw up there (I love all my children :-D ), so I'll have to be somewhat creative about this. I can happily use SSH to my home box to use Git the way I've been using SVN, and I'm thinking from there I could amalgamate everything into, say, a big project with 30 submodules, which I then push to GitHub.

    What'd be a sane way to set this up? Does using submodules sound feasible? How do I sync it all to my private GitHub repo? Cron job? Git hook? I'd love to hear it if anyone's done something similar. I'm not really married to Git or GitHub, so a sufficiently compelling feature of another solution might sway me. But if your answer does involve a different system (especially a different VCS), be advised it'll be a tougher sell :-)

  • Python many-to-one mapping (creating equivalence classes)

    - by Adam Matan
    Hi, I have a project of converting one database to another. One of the original database columns defines the row's category. This column should be mapped to a new category in the new database. For example, let's assume the original categories are: parrot, spam, cheese_shop, Cleese, Gilliam, Palin. Now that's a little verbose for me, and I want to have these rows categorized as sketch or actor - that is, define all the sketches and all the actors as two equivalence classes.

        >>> monty={'parrot':'sketch', 'spam':'sketch', 'cheese_shop':'sketch', 'Cleese':'actor', 'Gilliam':'actor', 'Palin':'actor'}
        >>> monty
        {'Gilliam': 'actor', 'Cleese': 'actor', 'parrot': 'sketch', 'spam': 'sketch', 'Palin': 'actor', 'cheese_shop': 'sketch'}

    That's quite awkward - I would prefer having something like:

        monty={ ('parrot','spam','cheese_shop'): 'sketch', ('Cleese', 'Gilliam', 'Palin') : 'actors'}

    But this, of course, sets the entire tuple as a key:

        >>> monty['parrot']
        Traceback (most recent call last):
          File "<pyshell#29>", line 1, in <module>
            monty['parrot']
        KeyError: 'parrot'

    Any ideas how to create an elegant many-to-one dictionary in Python? Thanks, Adam
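
    One straightforward approach (a sketch of my own, not taken from the thread): keep the readable tuple-keyed spelling, then expand it once into a flat many-to-one dict:

        # Expand a tuple-keyed mapping into an ordinary one-key-per-member dict.
        groups = {('parrot', 'spam', 'cheese_shop'): 'sketch',
                  ('Cleese', 'Gilliam', 'Palin'): 'actor'}

        monty = dict((member, category)
                     for members, category in groups.items()
                     for member in members)

        print monty['parrot']  # -> 'sketch'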

  • How to ship numpy with web2py application under myapp/modules?

    - by Newbie07
    I am having the following error while importing numpy from applications/myapp/modules:

        Traceback (most recent call last):
          File "/home/mdipierro/make_web2py/web2py/gluon/restricted.py", line 212, in restricted
          File "D:/web2py_win/web2py/applications/myapp/controllers/default.py", line 13, in <module>
          File "/home/mdipierro/make_web2py/web2py/gluon/custom_import.py", line 100, in custom_importer
          File "applications\myapp\modules\numpy\__init__.py", line 137, in <module>
          File "/home/mdipierro/make_web2py/web2py/gluon/custom_import.py", line 81, in custom_importer
        ImportError: Cannot import module 'add_newdocs'

    I tried adding 'applications.myapp.modules.' to the 'import add_newdocs' statement in numpy\__init__.py, and the error propagates to other subsequent imports (i.e. add_newdocs imports some other stuff, and I get the ImportError again for those imports). So I narrowed the problem down to the "working directory" of the import statement. However, I do not wish to add 'applications.myapp.modules.' to every import statement inside the package, since that would be impractical and hard to edit if someone decides to rename the app later on. How do I make the import work smoothly? NOTE: It is necessary for me to put the numpy package in the app to ensure ease of deployment.
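
    One workaround that avoids editing the package (a sketch under my own assumptions about the layout, not a tested fix; web2py's request.folder points at the app directory): put the app's modules folder on sys.path early, e.g. in a model file, so the bundled package's absolute imports resolve unchanged:

        # e.g. in models/0_numpy_path.py (hypothetical file name)
        import os
        import sys

        modules_path = os.path.join(request.folder, 'modules')
        if modules_path not in sys.path:
            sys.path.insert(0, modules_path)  # plain 'import numpy' now finds the bundled copy

        import numpy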

  • When I try to run vim in the command line I get Python errors

    - by Eduan
    When I try running vim in the Terminal (so as to follow @romainl's suggestion in my other question) I get lots of Python errors, which all boil down to:

        IOError: invalid Python installation: unable to open /usr/include/python2.7/pyconfig.h (No such file or directory)

    Why is this? I can use Python, or even Sublime Text, without any problems. The full list of errors is the following:

        Traceback (most recent call last):
          File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site.py", line 565, in <module>
          File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site.py", line 547, in main
          File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site.py", line 278, in addusersitepackages
          File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site.py", line 253, in getusersitepackages
          File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site.py", line 243, in getuserbase
          File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/sysconfig.py", line 523, in get_config_var
          File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/sysconfig.py", line 419, in get_config_vars
          File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/sysconfig.py", line 298, in _init_posix
        IOError: invalid Python installation: unable to open /usr/include/python2.7/pyconfig.h (No such file or directory)

    Extra info: I am on Mac OS X Mountain Lion (OS 10.8).

    EDIT: I tried @BobDunakey's idea, with no success; the idea was to use sudo. I still get the same errors.

    EDIT 2: I was able to solve the problem thanks to Zirak's solution, which is the following: http://clearfix.be/2012/08/05/fix-mountain-lion-10-8-python-ioerror-pyconfig-h-error/

  • Show last 4 table entries mysql php

    - by user272899
    I have a movie database, kind of like a blog, and I want to display the last 4 created entries. I have a column in my table for the timestamp, called 'dateadded'. Using this code, how would I display only the 4 most recent entries in the table?

        <?php
        //connect to database
        mysql_connect($mysql_hostname,$mysql_user,$mysql_password);
        @mysql_select_db($mysql_database) or die("<b>Unable to connect to specified database</b>");
        //query database
        $query = "SELECT * FROM movielist";
        $result=mysql_query($query) or die('Error, insert query failed');
        $row=0;
        $numrows=mysql_num_rows($result);
        while($row<$numrows)
        {
            $id=mysql_result($result,$row,"id");
            $imgurl=mysql_result($result,$row,"imgurl");
            $imdburl=mysql_result($result,$row,"imdburl");
        ?>
        <div class="moviebox rounded"><a href="http://<?php echo $domain; ?>/viewmovie?movieid=<?php echo $id; ?>" rel="facebox">
            <img src="<?php echo $imgurl; ?>" />
            <form method="get" action="">
                <input type="text" name="link" class="link" style="display:none" value="http://us.imdb.com/Title?<?php echo $imdburl; ?>"/>
            </form>
        </a></div>
        <?php
            $row++;
        }
        ?>
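
    The usual fix for "newest N rows" is to let MySQL do the ordering and limiting rather than walking every row. The key is the SQL itself; sketched below in Python for illustration (connection details are placeholders of mine), since the same query drops straight into the PHP above:

        import MySQLdb  # assumes the MySQLdb driver; any MySQL client works the same way

        conn = MySQLdb.connect(host="localhost", user="user", passwd="secret", db="mydb")
        cur = conn.cursor()

        # Sort by the timestamp column, newest first, and keep only 4 rows.
        cur.execute("SELECT id, imgurl, imdburl FROM movielist "
                    "ORDER BY dateadded DESC LIMIT 4")

        for movie_id, imgurl, imdburl in cur.fetchall():
            print movie_id, imgurl, imdburl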

  • Table cell doesn't obey vertical-align CSS declaration when it contains a floated element

    - by mikez302
    I am trying to create a table, where each cell contains a big floated h1 on the left side, and a larger amount of small text to the right of the big text, vertically centered. However, the small text is showing up at the top of each cell, despite that it has a "vertical-align: middle" declaration. When I remove the big floated element, everything looks fine. I tested it in recent versions of IE, Firefox, and Safari, and this happened in every case. Why is this happening? Does anyone know of a way around it? Here is an example:

        <!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN" "http://www.w3.org/TR/html4/loose.dtd">
        <html><head>
        <meta http-equiv='Content-Type' content='text/html; charset=UTF-8'>
        <title>vertical-align test</title>
        <style type="text/css">
            td { border: solid black 1px; vertical-align: middle; font-size: 12px}
            h1 { font-size: 40px; float: left}
        </style>
        </head>
        <body>
        <table>
            <tr>
                <td><h1>1</h1>The quick brown fox jumps over the lazy dog.</td>
                <td>The quick brown fox jumps over the lazy dog.</td>
            </tr>
        </table>
        </body></html>

    Notice that the small text in the first cell is at the top for some reason, but the text in the 2nd cell is vertically centered.

  • How to access non-first matches with xpath in Selenium RC?

    - by Gj
    I have 20 labels in my page:

        In [85]: sel.get_xpath_count("//label")
        Out[85]: u'20'

    And I can get the first one by default:

        In [86]: sel.get_text("xpath=//label")
        Out[86]: u'First label:'

    But, unlike the xpath docs I've found, I'm getting an error trying to subscript the xpath to get to the second label's text:

        In [87]: sel.get_text("xpath=//label[2]")
        ERROR: An unexpected error occurred while tokenizing input
        The following traceback may be corrupted or invalid
        The error message is: ('EOF in multi-line statement', (216, 0))
        ERROR: An unexpected error occurred while tokenizing input
        The following traceback may be corrupted or invalid
        The error message is: ('EOF in multi-line statement', (1186, 0))
        ---------------------------------------------------------------------------
        Exception                                 Traceback (most recent call last)
        /Users/me/<ipython console> in <module>()
        /Users/me/selenium.pyc in get_text(self, locator)
           1187         'locator' is an element locator
           1188         """
        -> 1189         return self.get_string("getText", [locator,])
           1190
           1191
        /Users/me/selenium.pyc in get_string(self, verb, args)
            217
            218     def get_string(self, verb, args):
        --> 219         result = self.do_command(verb, args)
            220         return result[3:]
            221
        /Users/me/selenium.pyc in do_command(self, verb, args)
            213         #print "Selenium Result: " + repr(data) + "\n\n"
            214         if (not data.startswith('OK')):
        --> 215             raise Exception, data
            216         return data
            217
        Exception: ERROR: Element xpath=//label[2] not found

    What gives?
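
    For what it's worth, this is standard XPath precedence rather than a Selenium quirk: //label[2] means "every label that is the second label child of its parent", which matches nothing when each label is the only one in its container. Parenthesizing first collects the document-wide match list. A small sketch continuing the question's own session object:

        # //label[2]   -> 2nd label *within each parent* (often no match at all)
        # (//label)[2] -> 2nd label in the document-wide match list
        second_label = sel.get_text("xpath=(//label)[2]")
        print second_label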

  • How to react when asked a question you already know during an interview

    - by DevNull
    The short story: If you are asked a tough algorithmic/puzzle question during an interview, whose solution is already known to you, do you:

    1. Honestly tell the interviewer that you know this question already? - This could result in bursting the interviewer's ego and him increasing the complexity level of the subsequent questions.
    2. Do an Oscar-deserving performance and act as if you are thinking and trying hard and slowly getting to the solution? - Depending on your acting skills, this could majorly impress the interviewer, making the rest of the interview easier.

    The long story: OK, this question comes as a result of what happened to me in a recent telephonic interview that I gave - the interview was supposed to be all algorithmic. The interviewer started with an algorithmic question which I had luckily already seen here on Stack Overflow. The best solution to that problem is not very intuitive and is more of a you-get-it-if-you-know-it kind. Now, just so as not to disappoint the interviewer too much, I took a few seconds as if I was pondering the problem and then blurted out the answer, which I knew too well having read and admired it on SO already. But I guess that gave it away to the interviewer that I already knew this question, and since then he started asking me for more efficient solutions. I kept coming up with approaches (even if not correct or more efficient, I did touch a lot of different data structures and algos), and he kept asking for more efficient solutions, and generally seemed put off by my initial salvo, which was unexpected. What should I have done? Cheers!

  • WPF deployment strategy dilemma: ClickOnce (limited customization) + auto-update vs installer (unlimited customization)

    - by black sensei
    Hello Experts!!! I've been facing a deployment problem. I've built a WPF application with Visual Studio 2008 and created an installer (msi), which works fine. But then it's a pain to add automatic updates to it.

    I've seen this article at windowsclient.net; it seems to be pretty old, but could have been the perfect thing for me. Then I looked at the .NET Application Updater Block v2.0, which uses Enterprise Library June 2005, and for some reason it's not installing on my machine. I thought I would need to use a more recent Enterprise Library, so I installed and compiled Enterprise Library 4.1 (October 2008), but nothing better happened. So I decided to give ClickOnce deployment a try. After struggling with it, it was almost perfect.

    I realized that when I was testing the updates provided by ClickOnce on my machine, which is XP, I didn't notice the need to have the SQLite DLL in the GAC; surely it was already there. I noticed when I moved to Vista that there is a problem. After checking the net, I know it's impossible to add a DLL to the Global Assembly Cache. Now I'm stuck; I think I've hit a wall. Can anyone share some of his experience? I'm willing to try the Updater Block if I can get help. Thanks for reading this!!

  • AJAX vs ActiveX/Flash for browser-based game

    - by iconiK
    I have been following the usage of JavaScript for the past few years, and with the release of extremely fast scripting engines (V8, SquirrelFish Extreme, TraceMonkey, etc.) the possibilities of JavaScript have increased dramatically. However, the usage share of Internet Explorer coupled with its total lack of support for recent standards makes me want to drop a bomb on Microsoft's HQ, as it creates a huge amount of problems for any website.

    The game will need to be pretty dynamic client-side, with animations and other eye candy, but not a full-blown game like those that run directly in the OS using DirectX or OpenGL. However, this might be a little stretch for JavaScript, and will certainly feel extremely slow in Internet Explorer (given that the current IE engine can be hundreds of times slower than SFX; we'll have to see what IE9 brings). Would it be better to just do the whole thing in Flash? I know this means requiring the plug-in, AND I have no experience whatsoever with Flash (other than browsing YouTube :P). It also means I can't just output directly from PHP; I would have to use XML or some other format to pass data to it (JSON is directly integrated in JS, and PHP can deal with it easily).

    Another idea would be to provide an alternative interface just for IE, though I don't know how (ActiveX maybe? or with Flash, but then why not just provide it to all browsers), or to not support it at all and require the use of other browsers, although this is plain stupid from a business perspective. So here I am, wondering what approach to take and thus asking for your advice. How should I build the client-side? AJAX in all browsers, Flash in all browsers, or a mix (AJAX for "modern" browsers and something else for the "grandpa": IE)?

  • Automated Oracle Schema Migration Tool

    - by Dave Jarvis
    What are some tools (commercial or OSS) that provide a GUI-based mechanism for creating schema upgrade scripts? To be clear, here are the tool responsibilities:

    - Obtain connection to recent schema version (called "source").
    - Obtain connection to previous schema version (called "target").
    - Compare all schema objects between source and target.
    - Create a script to make the target schema equivalent to the source schema ("upgrade script").
    - Create a rollback script to revert the source schema, used if the upgrade script fails (at any point).
    - Create individual files for schema objects.

    The software must:

    - Use ALTER TABLE instead of DROP and CREATE for renamed columns.
    - Work with Oracle 10g or greater.
    - Create scripts that can be batch executed (via command-line).
    - Have a trivial installation process.
    - (Bonus) Create scripts that can be executed with SQL*Plus.

    Here are some examples (from StackOverflow, ServerFault, and Google searches):

    - Change Manager
    - Oracle SQL Developer

    Software that does not meet the criteria, or cannot be evaluated, includes:

    - TOAD
    - PL/SQL Developer - Invalid SQL*Plus statements. Does not produce ALTER statements.
    - SQL Fairy - No installer. Complex installation process. Poorly documented.
    - DBDiff - Crippled data set evaluation, poor customer support.
    - OrbitDB - Crippled data set evaluation.
    - SchemaCrawler - No easily identifiable download version for Oracle databases.
    - SQL Compare - SQL Server, not Oracle.
    - LiquiBase - Requires changing the development process. No installer. Manually edit config files. Does not recognize its own baseUrl parameter.

    The only acceptable crippling of the evaluation version is by time. Crippling by restricting the number of tables and views hides possible bugs that are only visible in the software during the attempt to migrate hundreds of tables and views.

  • python-iptables: Cryptic error when allowing incoming TCP traffic on port 1234

    - by Lucas Kauffman
    I wanted to write an iptables script in Python. Rather than calling iptables itself, I wanted to use the python-iptables package. However I'm having a hard time getting some basic rules set up. I wanted to use the filter chain to accept incoming TCP traffic on port 1234. So I wrote this:

        import iptc

        chain = iptc.Chain(iptc.TABLE_FILTER, "INPUT")
        rule = iptc.Rule()
        target = iptc.Target(rule, "ACCEPT")
        match = iptc.Match(rule, 'tcp')
        match.dport = '1234'
        rule.add_match(match)
        rule.target = target
        chain.insert_rule(rule)

    However when I run this I get this thrown back at me:

        Traceback (most recent call last):
          File "testing.py", line 9, in <module>
            chain.insert_rule(rule)
          File "/usr/local/lib/python2.6/dist-packages/iptc/__init__.py", line 1133, in insert_rule
            self.table.insert_entry(self.name, rbuf, position)
          File "/usr/local/lib/python2.6/dist-packages/iptc/__init__.py", line 1166, in new
            obj.refresh()
          File "/usr/local/lib/python2.6/dist-packages/iptc/__init__.py", line 1230, in refresh
            self._free()
          File "/usr/local/lib/python2.6/dist-packages/iptc/__init__.py", line 1224, in _free
            self.commit()
          File "/usr/local/lib/python2.6/dist-packages/iptc/__init__.py", line 1219, in commit
            raise IPTCError("can't commit: %s" % (self.strerror()))
        iptc.IPTCError: can't commit: Invalid argument
        Exception AttributeError: "'NoneType' object has no attribute 'get_errno'" in <bound method Table.__del__ of <iptc.Table object at 0x7fcad56cc550>> ignored

    Does anyone have experience with python-iptables that could enlighten me on what I did wrong?
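
    My best guess (an assumption mirroring the iptables CLI, where --dport is only valid together with -p tcp): the rule never declares its protocol, so the kernel rejects the tcp match with EINVAL. A sketch with the protocol field set, otherwise following the question's code:

        import iptc

        chain = iptc.Chain(iptc.TABLE_FILTER, "INPUT")
        rule = iptc.Rule()
        rule.protocol = 'tcp'  # assumed fix: tie the rule to TCP before matching on dport
        match = iptc.Match(rule, 'tcp')
        match.dport = '1234'
        rule.add_match(match)
        rule.target = iptc.Target(rule, "ACCEPT")
        chain.insert_rule(rule)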

  • mod_python req.subprocess_env not "seeing" PythonOptions

    - by Brandon
    I'm having trouble getting an environment variable out of the Apache config. (Don't ask why it's being done this way; I didn't originally code it.) This is what I have in the Apache config:

        <Location "/var/www">
            SetHandler python-program
            PythonHandler mod_python.publisher
            PythonOption MYSQL_PWD ###########
            PythonDebug On
        </Location>

    This is the problem code:

        #this is the problem code in question.
        def index(req):
            req.add_common_vars()
            os.environ["MYSQL_PWD"] = req.subprocess_env["MYSQL_PWD"]
            req.content_type = "text/html"
            statText = getStatText()

    Here is the traceback I'm getting from executing this:

        Traceback (most recent call last):
          File "/usr/lib/python2.5/site-packages/mod_python/importer.py", line 1537, in HandlerDispatch
            default=default_handler, arg=req, silent=hlist.silent)
          File "/usr/lib/python2.5/site-packages/mod_python/importer.py", line 1229, in _process_target
            result = _execute_target(config, req, object, arg)
          File "/usr/lib/python2.5/site-packages/mod_python/importer.py", line 1128, in _execute_target
            result = object(arg)
          File "/usr/lib/python2.5/site-packages/mod_python/publisher.py", line 213, in handler
            published = publish_object(req, object)
          File "/usr/lib/python2.5/site-packages/mod_python/publisher.py", line 425, in publish_object
            return publish_object(req,util.apply_fs_data(object, req.form, req=req))
          File "/usr/lib/python2.5/site-packages/mod_python/util.py", line 554, in apply_fs_data
            return object(**args)
          File "/var/www/admin/Stat.py", line 299, in index
            os.environ["MYSQL_PWD"] = req.subprocess_env["MYSQL_PWD"]
        KeyError: 'MYSQL_PWD'
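
    If I remember mod_python correctly, values set with PythonOption are surfaced through req.get_options(), not req.subprocess_env; the latter carries CGI-style variables, which is what SetEnv populates. A sketch of both routes (my reading of the API, not tested against this setup):

        import os

        def index(req):
            req.content_type = "text/html"
            # Route 1: PythonOption values live in the request's options table.
            os.environ["MYSQL_PWD"] = req.get_options()["MYSQL_PWD"]
            return getStatText()  # as in the question

        # Route 2: keep req.subprocess_env, but declare the variable with
        #   SetEnv MYSQL_PWD ###########
        # in the Apache config instead of PythonOption.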

  • Celery tasks not working with gevent

    - by Novarg
    When I use celery + gevent for tasks that use the subprocess module, I'm getting the following stacktrace:

        Traceback (most recent call last):
          File "/home/venv/admin/lib/python2.7/site-packages/celery/task/trace.py", line 228, in trace_task
            R = retval = fun(*args, **kwargs)
          File "/home/venv/admin/lib/python2.7/site-packages/celery/task/trace.py", line 415, in __protected_call__
            return self.run(*args, **kwargs)
          File "/home/webapp/admin/webadmin/apps/loggingquarantine/tasks.py", line 107, in release_mail_task
            res = call_external_script(popen_obj.communicate)
          File "/home/webapp/admin/webadmin/apps/core/helpers.py", line 42, in call_external_script
            return func_to_call(*args, **kwargs)
          File "/usr/lib64/python2.7/subprocess.py", line 740, in communicate
            return self._communicate(input)
          File "/usr/lib64/python2.7/subprocess.py", line 1257, in _communicate
            stdout, stderr = self._communicate_with_poll(input)
          File "/usr/lib64/python2.7/subprocess.py", line 1287, in _communicate_with_poll
            poller = select.poll()
        AttributeError: 'module' object has no attribute 'poll'

    My manage.py looks like the following (doing the monkeypatch there):

        #!/usr/bin/env python
        from gevent import monkey
        import sys
        import os

        if __name__ == "__main__":
            if not 'celery' in sys.argv:
                monkey.patch_all()
            os.environ.setdefault("DJANGO_SETTINGS_MODULE", "webadmin.settings")
            from django.core.management import execute_from_command_line
            sys.path.append(".")
            execute_from_command_line(sys.argv)

    Is there a reason why celery tasks act like they weren't patched properly? P.S. The strange thing is that my local setup on MacOS works fine, while I get such exceptions under CentOS (all package versions are the same, init and config scripts too).
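
    A plausible explanation (an educated guess from the trace, not a confirmed diagnosis): subprocess caches hasattr(select, 'poll') at import time, and gevent's monkey.patch_all() removes select.poll; if subprocess is imported before the patch runs, communicate() still takes the poll code path and hits exactly this AttributeError, and differing import orders could explain the MacOS/CentOS difference. Under that assumption, a sketch using gevent's own cooperative subprocess (gevent 1.0+):

        from gevent import monkey
        monkey.patch_all()

        # gevent.subprocess is a cooperative drop-in for the stdlib module,
        # so communicate() never touches select.poll at all.
        from gevent import subprocess

        popen_obj = subprocess.Popen(['echo', 'hello'], stdout=subprocess.PIPE)
        out, err = popen_obj.communicate()
        print out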

  • Difference in F# and Clojure when calling redefined functions

    - by Michiel Borkent
    In F#:

        > let f x = x + 2;;
        val f : int -> int
        > let g x = f x;;
        val g : int -> int
        > g 10;;
        val it : int = 12
        > let f x = x + 3;;
        val f : int -> int
        > g 10;;
        val it : int = 12

    In Clojure:

        1:1 user=> (defn f [x] (+ x 2))
        #'user/f
        1:2 user=> (defn g [x] (f x))
        #'user/g
        1:3 user=> (g 10)
        12
        1:4 user=> (defn f [x] (+ x 3))
        #'user/f
        1:5 user=> (g 10)
        13

    Note that in Clojure the most recent version of f gets called in the last line. In F#, however, the old version of f is still called. Why is this, and how does this work?
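
    For intuition, Python behaves like the Clojure side here: a global name inside a function body is looked up each time the function runs, so rebinding f changes what g calls. A small analogy of my own, not from the question:

        def f(x):
            return x + 2

        def g(x):
            return f(x)   # 'f' is resolved at call time, like a Clojure var

        print g(10)       # 12

        def f(x):         # rebinding the name, as defn rebinds the var
            return x + 3

        print g(10)       # 13; F#'s let instead creates a new binding that
                          # shadows the old one, so g keeps calling the old f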

  • help('modules') crashing?

    - by Chris
    I was trying to install a module for OpenCV and added an opencv.pth file to the folder beyond my site.py file. I have since deleted it, and no change. When I try to run help('modules'), I get the following error:

        Please wait a moment while I gather a list of all available modules...

        /System/Library/Frameworks/Python.framework/Versions/2.5/Extras/lib/python/twisted/words/im/__init__.py:8: UserWarning: twisted.im will be undergoing a rewrite at some point in the future.
          warnings.warn("twisted.im will be undergoing a rewrite at some point in the future.")
        /System/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/pkgutil.py:110: DeprecationWarning: The wxPython compatibility package is no longer automatically generated or actively maintained. Please switch to the wx package as soon as possible.
          __import__(name)
        Traceback (most recent call last):
          File "<stdin>", line 1, in <module>
          File "/System/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site.py", line 348, in __call__
            return pydoc.help(*args, **kwds)
          File "/System/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/pydoc.py", line 1644, in __call__
            self.help(request)
          File "/System/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/pydoc.py", line 1681, in help
            elif request == 'modules': self.listmodules()
          File "/System/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/pydoc.py", line 1802, in listmodules
            ModuleScanner().run(callback)
          File "/System/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/pydoc.py", line 1853, in run
            for importer, modname, ispkg in pkgutil.walk_packages():
          File "/System/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/pkgutil.py", line 110, in walk_packages
            __import__(name)
          File "/BinaryCache/wxWidgets/wxWidgets-11~262/Root/System/Library/Frameworks/Python.framework/Versions/2.5/Extras/lib/python/wxaddons/__init__.py", line 180, in import_hook
          File "/Library/Python/2.5/site-packages/ctypes_opencv/__init__.py", line 19, in <module>
            from ctypes_opencv.cv import *
          File "/BinaryCache/wxWidgets/wxWidgets-11~262/Root/System/Library/Frameworks/Python.framework/Versions/2.5/Extras/lib/python/wxaddons/__init__.py", line 180, in import_hook
          File "/Library/Python/2.5/site-packages/ctypes_opencv/cv.py", line 2567, in <module>
            ('desc', CvMat_r, 1), # CvMat* desc
          File "/Library/Python/2.5/site-packages/ctypes_opencv/cxcore.py", line 114, in cfunc
            return CFUNCTYPE(result, *atypes)((name, dll), tuple(aflags))
        AttributeError: dlsym(0x2674d10, cvCreateFeatureTree): symbol not found

    What gives?!

  • Which key value store is the most promising/stable?

    - by Mike Trpcic
    I'm looking to start using a key/value store for some side projects (mostly as a learning experience), but so many have popped up in the recent past that I've got no idea where to begin. Just listing from memory, I can think of:

    - CouchDB
    - MongoDB
    - Riak
    - Redis
    - Tokyo Cabinet
    - Berkeley DB
    - Cassandra
    - MemcacheDB

    And I'm sure that there are more out there that have slipped through my search efforts. With all the information out there, it's hard to find solid comparisons between all of the competitors. My criteria and questions are:

    - (Most important) Which do you recommend, and why?
    - Which one is the fastest?
    - Which one is the most stable?
    - Which one is the easiest to set up and install?
    - Which ones have bindings for Python and/or Ruby?

    Edit: So far it looks like Redis is the best solution, but that's only because I've gotten one solid response (from ardsrk). I'm looking for more answers like his, because they point me in the direction of useful, quantitative information. Which key-value store do you use, and why?

    Edit 2: If anyone has experience with CouchDB, Riak, or MongoDB, I'd love to hear your experiences with them (and even more so if you can offer a comparative analysis of several of them).

  • use proxy in python to fetch a webpage

    - by carmao
    I am trying to write a function in Python to use a public anonymous proxy and fetch a webpage, but I got a rather strange error. The code (I have Python 2.4):

        import urllib2

        def get_source_html_proxy(url, pip, timeout):
            # timeout in seconds (maximum number of seconds willing for the code to wait in
            # case there is a proxy that is not working, then it gives up)
            proxy_handler = urllib2.ProxyHandler({'http': pip})
            opener = urllib2.build_opener(proxy_handler)
            opener.addheaders = [('User-agent', 'Mozilla/5.0')]
            urllib2.install_opener(opener)
            req = urllib2.Request(url)
            sock = urllib2.urlopen(req)
            timp = 0
            # a counter that is going to measure the time until the result (webpage)
            # is returned
            while 1:
                data = sock.read(1024)
                timp = timp + 1
                if len(data) < 1024:
                    break
                timpLimita = 50000000 * timeout
                if timp == timpLimita:
                    # 5 millions is about 1 second
                    break
            if timp == timpLimita:
                print IPul + ": Connection is working, but the webpage is fetched in more than 50 seconds. This proxy returns the following IP: " + str(data)
                return str(data)
            else:
                print "This proxy " + IPul + "= good proxy. " + "It returns the following IP: " + str(data)
                return str(data)

        # Now, I call the function to test it for one single proxy (IP:port) that does
        # not support user and password (a public high anonymity proxy)
        # (I put a proxy that I know is working - slow, but is working)
        rez = get_source_html_proxy("http://www.whatismyip.com/automation/n09230945.asp", "93.84.221.248:3128", 50)
        print rez

    The error:

        Traceback (most recent call last):
          File "./public_html/cgi-bin/teste5.py", line 43, in ?
            rez=get_source_html_proxy("http://www.whatismyip.com/automation/n09230945.asp", "93.84.221.248:3128", 50)
          File "./public_html/cgi-bin/teste5.py", line 18, in get_source_html_proxy
            sock=urllib2.urlopen(req)
          File "/usr/lib64/python2.4/urllib2.py", line 130, in urlopen
            return _opener.open(url, data)
          File "/usr/lib64/python2.4/urllib2.py", line 358, in open
            response = self._open(req, data)
          File "/usr/lib64/python2.4/urllib2.py", line 376, in _open
            '_open', req)
          File "/usr/lib64/python2.4/urllib2.py", line 337, in _call_chain
            result = func(*args)
          File "/usr/lib64/python2.4/urllib2.py", line 573, in
            lambda r, proxy=url, type=type, meth=self.proxy_open: \
          File "/usr/lib64/python2.4/urllib2.py", line 580, in proxy_open
            if '@' in host:
        TypeError: iterable argument required

    I do not know why the character "@" is an issue (I have no such character in my code. Should I have?) Thanks in advance for your valuable help.
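
    A hedged guess at the cause: urllib2 parses the proxy string with splittype/splithost, and without a scheme "93.84.221.248:3128" splits badly, leaving host as None; the 'in' test on None then raises exactly "TypeError: iterable argument required". If that's right, the fix is just to include the scheme:

        import urllib2

        pip = '93.84.221.248:3128'
        # Prefix the scheme so urllib2's proxy_open can split a host out of the URL.
        proxy_handler = urllib2.ProxyHandler({'http': 'http://' + pip})
        opener = urllib2.build_opener(proxy_handler)
        opener.addheaders = [('User-agent', 'Mozilla/5.0')]
        print opener.open('http://www.whatismyip.com/automation/n09230945.asp').read()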

  • E-Commerce Security: Only Credit Card Fields Encrypted?!

    - by bizarreunprofessionalanddangerous
    I'd like your opinions on how a major bricks-and-mortar company is running the security for its shopping Web site. After a recent update, when you are logged into your shopping account, the session is now not secured. No 'https', no browser 'lock'. All the personal contact info, shopping history -- and if I'm not mistaken, submit and change password -- are being sent unencrypted. There is a small frame around the credit card fields that is https. There's a little notice: "Our website is secure. Our website uses frames and because of this the secure icon will not appear in your browser."

    On top of this, the most prominent login fields for the site are broken, and haven't gotten fixed for a week or longer (giving the distinct impression they have no clue what's going on and can't be trusted with anything). Now is it just me -- or is this simply incomprehensible for a billion dollar company, significant shopping site, in the year 2010? No lock. "We use frames" (maybe they forgot "Best viewed in IE4"). Customers complaining, as you can see from their FAQ "explaining" why you aren't seeing https.

    I'm getting nowhere trying to convince customer service that they REALLY need to do something about this, and am about to head for the CEO. But I just want to make sure this is as BIZARRE and unprofessional and dangerous a situation as I think it is. (I'm trying to visualize what their Web technical team consists of. I'm getting A) some customer service reps who were given a 3-hour training course on Web site maintenance, B) a 14-year-old boy in his bedroom masquerading as a major technical services company, C) a guy in a hut in a jungle with an e-commerce book from 1996.)

  • 404 when getting private YouTube video even when logged in with the owner's account using gdata-python

    - by Gonzalo
    If a YouTube video is set as private and I try to fetch it using the gdata Python API, a 404 RequestError is raised, even though I have done a programmatic login with the account that owns that video:

        from gdata.youtube import service

        yt_service = service.YouTubeService(email=my_email, password=my_password,
                                            client_id=my_client_id, source=my_source,
                                            developer_key=my_developer_key)
        yt_service.ProgrammaticLogin()
        yt_service.GetYouTubeVideoEntry(video_id='IcVqemzfyYs')

        ---------------------------------------------------------------------------
        RequestError                              Traceback (most recent call last)
        <ipython console>
        /usr/lib/python2.4/site-packages/gdata/youtube/service.pyc in GetYouTubeVideoEntry(self, uri, video_id)
            203         elif video_id and not uri:
            204             uri = '%s/%s' % (YOUTUBE_VIDEO_URI, video_id)
        --> 205         return self.Get(uri, converter=gdata.youtube.YouTubeVideoEntryFromString)
            206
            207     def GetYouTubeContactFeed(self, uri=None, username='default'):
        /usr/lib/python2.4/site-packages/gdata/service.pyc in Get(self, uri, extra_headers, redirects_remaining, encoding, converter)
           1100                 'body': result_body}
           1101             else:
        -> 1102                 raise RequestError, {'status': server_response.status,
           1103                     'reason': server_response.reason, 'body': result_body}
           1104
        RequestError: {'status': 404, 'body': 'Video not found', 'reason': 'Not Found'}

    This happens every time, unless I go into my YouTube account (through the YouTube website) and set it public; after that I can set it as private and back to public using the Python API. Am I missing a step, or is there another (or any) way to fetch a YouTube video set as private from the API? Thanks in advance.

  • "ImportError: cannot import name Exchange" when using Celery w/ Kombu installed

    - by alukach
    I'm trying to create a Celery worker node on an Amazon EC2 Windows 2008 R2 instance. Due to a plugin we require for another Python module, we are required to use Python 3. I've installed Python 3.2 and the necessary modules and run Celery, but for some reason it states that it can't import Exchange and Queue from Kombu, despite the fact that Kombu is installed and can be imported by Python.

        Microsoft Windows [Version 6.1.7601]
        Copyright (c) 2009 Microsoft Corporation.  All rights reserved.

        C:\Users\Administrator>celery
        Traceback (most recent call last):
          File "C:\Python32\Scripts\celery-script.py", line 9, in <module>
            load_entry_point('celery==3.0.11', 'console_scripts', 'celery')()
          File "C:\Python32\lib\site-packages\celery\__main__.py", line 13, in main
            from celery.bin.celery import main
          File "C:\Python32\lib\site-packages\celery\bin\celery.py", line 21, in <module>
            from celery.utils import term
          File "C:\Python32\lib\site-packages\celery\utils\__init__.py", line 22, in <module>
            from kombu import Exchange, Queue
        ImportError: cannot import name Exchange

        C:\Users\Administrator>python
        Python 3.2 (r32:88445, Feb 20 2011, 21:29:02) [MSC v.1500 32 bit (Intel)] on win32
        Type "help", "copyright", "credits" or "license" for more information.
        >>> from kombu import Exchange, Queue
        >>> print('WTF?!?')
        WTF?!?
        >>>

    This problem seems similar to this and this, but I have not been able to remedy the issue. Any ideas?

  • Python IOError: Not a gzipped file (Gzip and Blowfish Encrypt/Compress)

    - by notbad.jpeg
    I'm having some problems with Python's built-in library gzip. I've looked through almost every other Stack question about it, and none of them seem to work. My problem is that when I try to decompress, I get an IOError. The error I'm getting:

        Traceback (most recent call last):
          File "mymodule.py", line 61, in
            return gz.read()
          File "/usr/lib/python2.7/gzip.py", line 245, in read
            self._read(readsize)
          File "/usr/lib/python2.7/gzip.py", line 287, in _read
            self._read_gzip_header()
          File "/usr/lib/python2.7/gzip.py", line 181, in _read_gzip_header
            raise IOError, 'Not a gzipped file'
        IOError: Not a gzipped file

    This is my code to send it over SMB. It might not make sense why I do things this way, but it's normally in a while loop and memory efficient; I just simplified it.

        buffer = cStringIO.StringIO(output)  # output is from a subprocess call
        small_buffer = cStringIO.StringIO()
        small_string = buffer.read()  # need a string to write to buffer
        gzip_obj = gzip.GzipFile(fileobj=small_buffer, compresslevel=6, mode='wb')
        gzip_obj.write(small_string)
        compressed_str = small_buffer.getvalue()
        blowfish = Blowfish.new('abcd', Blowfish.MODE_ECB)
        remainder = '|' * (8 - (len(compressed_str) % 8))
        compressed_str += remainder
        encrypted = blowfish.encrypt(compressed_str)
        # I send it over SMB, then retrieve it later

    Then this is the code that retrieves it:

        # buffer is a cStringIO object filled with data from smb retrieval
        decrypter = Blowfish.new('abcd', Blowfish.MODE_ECB)
        value = buffer.getvalue()
        decrypted = decrypter.decrypt(value)
        buff = cStringIO.StringIO(decrypted)
        buff.seek(0)
        gz = gzip.GzipFile(fileobj=buff)
        return gz.read()

    Here's the problem line: return gz.read()
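
    One thing that stands out to me (my reading of the snippet, not a confirmed fix from the thread): the GzipFile is never closed before getvalue(), so zlib's buffered output and the gzip CRC/length trailer never reach small_buffer, leaving a truncated stream; the '|' padding also survives decryption and trails the gzip data. A sketch of the write path with the close added:

        import cStringIO
        import gzip

        small_string = "example payload"  # in the question this comes from the subprocess output

        small_buffer = cStringIO.StringIO()
        gzip_obj = gzip.GzipFile(fileobj=small_buffer, compresslevel=6, mode='wb')
        gzip_obj.write(small_string)
        gzip_obj.close()  # flushes the buffered deflate data and writes the trailer

        compressed_str = small_buffer.getvalue()
        # ... encrypt as before; on the read side, consider recording the pad
        # length so the '|' bytes can be stripped before handing gzip the data.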

  • Reading numpy arrays outside of Python

    - by Abiel
    In a recent question I asked about the fastest way to convert a large numpy array to a delimited string. My reason for asking was because I wanted to take that plain text string and transmit it (over HTTP, for instance) to clients written in other programming languages. A delimited string of numbers is obviously something that any client program can work with easily. However, it was suggested that because string conversion is slow, it would be faster on the Python side to do base64 encoding on the array and send it as binary. This is indeed faster. My question now is, (1) how can I make sure my encoded numpy array will travel well to clients on different operating systems and different hardware, and (2) how do I decode the binary data on the client side. For (1), my inclination is to do something like the following:

        import numpy as np
        import base64

        x = np.arange(100, dtype=np.float64)
        base64.b64encode(x.tostring())

    Is there anything else I need to do? For (2), I would be happy to have an example in any programming language, where the goal is to take the numpy array of floats and turn them into a similar native data structure. Assume we have already done base64 decoding and have a byte array, and that we also know the numpy dtype, dimensions, and any other metadata which will be needed. Thanks.
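
    On (1), one portability point worth noting: tostring() emits the array in the machine's native byte order, so pinning an explicit little-endian dtype such as '<f8' on both ends removes one source of cross-platform surprises. A sketch of the full round trip under that assumption (decoding shown in Python, but any language can do the same byte reinterpretation):

        import numpy as np
        import base64

        x = np.arange(100, dtype='<f8')           # '<f8' = little-endian float64
        encoded = base64.b64encode(x.tostring())  # raw bytes -> base64 text

        # "Client" side: base64-decode, then reinterpret the bytes using the
        # agreed dtype (and reshape with the separately transmitted dimensions).
        raw = base64.b64decode(encoded)
        decoded = np.frombuffer(raw, dtype='<f8')
        assert (decoded == x).all()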
