Search Results

Search found 15187 results on 608 pages for 'boost python'.


  • Boost Solr results based on the field that contained the hit

    - by TomFor
    Hi, I was browsing the web looking for an indexing and search framework and stumbled upon Solr. A functionality that we absolutely need is to boost results based on which field contained the hit. A small example: consider a record like this: <movie> <title>The Dark Knight</title> <alternative_title>Batman Begins 2</alternative_title> <year>2008</year> <director>Christopher Nolan</director> <plot>Batman, Gordon and Harvey Dent are forced to deal with the chaos unleashed by an anarchist mastermind known only as the Joker, as it drives each of them to their limits.</plot> </movie> I want to combine, for example, the title, alternative_title and plot fields into one search field, which isn't too difficult after looking at the Solr/Lucene documentation and tutorials. However, I also want movies that have a hit in title to score higher than hits on alternative_title, and those in turn should score higher than hits in the plot field. Is there any way to indicate this kind of scoring in the XML, or do we need to develop some custom scoring algorithm? Please also note that the example I've given is fictional and the real data will probably contain 100+ fields. Thanks in advance, Tom
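    A minimal sketch of one common way to do this with Solr's dismax/edismax query parser: per-field boosts in the qf parameter. The Solr URL, core name and query terms below are assumptions for illustration.

```python
import requests  # assumption: the requests library is available

params = {
    "q": "dark knight joker",
    "defType": "edismax",  # the dismax parser also supports qf on older Solr releases
    # per-field weights: a hit in title outscores alternative_title, which outscores plot
    "qf": "title^10 alternative_title^4 plot^1",
    "fl": "title,score",
    "wt": "json",
}
resp = requests.get("http://localhost:8983/solr/movies/select", params=params)
for doc in resp.json()["response"]["docs"]:
    print("%s  %.3f" % (doc["title"], doc["score"]))
```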

    Read the article

  • Using mem_fun_ref with boost::shared_ptr

    - by BlueRaja
    Following the advice of this page, I'm trying to get shared_ptr to call IUnknown::Release() instead of delete: IDirectDrawSurface* dds; ... //Allocate dds return shared_ptr<IDirectDrawSurface>(dds, mem_fun_ref(&IUnknown::Release)); error C2784: 'std::const_mem_fun1_ref_t<_Result,_Ty,_Arg std::mem_fun_ref(Result (_thiscall _Ty::* )(_Arg) const)' : could not deduce template argument for 'Result (_thiscall _Ty::* )(Arg) const' from 'ULONG (_cdecl IUnknown::* )(void)' error C2784: 'std::const_mem_fun_ref_t<_Result,_Ty std::mem_fun_ref(Result (_thiscall _Ty::* )(void) const)' : could not deduce template argument for 'Result (_thiscall _Ty::* )(void) const' from 'ULONG (__cdecl IUnknown::* )(void)' error C2784: 'std::mem_fun1_ref_t<_Result,_Ty,_Arg std::mem_fun_ref(Result (_thiscall _Ty::* )(_Arg))' : could not deduce template argument for 'Result (_thiscall _Ty::* )(Arg)' from 'ULONG (_cdecl IUnknown::* )(void)' error C2784: 'std::mem_fun_ref_t<_Result,_Ty std::mem_fun_ref(Result (_thiscall _Ty::* )(void))' : could not deduce template argument for 'Result (_thiscall _Ty::* )(void)' from 'ULONG (__cdecl IUnknown::* )(void)' error C2661: 'boost::shared_ptr::shared_ptr' : no overloaded function takes 2 arguments I have no idea what to make of this. My limited template/functor knowledge led me to try typedef ULONG (IUnknown::*releaseSignature)(void); shared_ptr<IDirectDrawSurface>(dds, mem_fun_ref(static_cast<releaseSignature>(&IUnknown::Release))); But to no avail. Any ideas?

    Read the article

  • compile Boost as static Universal binary lib

    - by Albert
    I want to have a static Universal binary lib of Boost. (Preferably the latest stable version, that is 1.43.0, or newer.) I found many Google hits with similar problems and possible solutions. However, most of them seem outdated. Also, none of them really worked. Right now, I am trying sudo ./bjam --toolset=darwin --link=static --threading=multi \ --architecture=combined --address-model=32_64 \ --macosx-version=10.4 --macosx-version-min=10.4 \ install That compiles and installs fine. However, the produced binaries seem broken. az@ip245 47 (openlierox) %file /usr/local/lib/libboost_signals.a /usr/local/lib/libboost_signals.a: current ar archive random library az@ip245 49 (openlierox) %lipo -info /usr/local/lib/libboost_signals.a input file /usr/local/lib/libboost_signals.a is not a fat file Non-fat file: /usr/local/lib/libboost_signals.a is architecture: x86_64 az@ip245 48 (openlierox) %otool -hv /usr/local/lib/libboost_signals.a Archive : /usr/local/lib/libboost_signals.a /usr/local/lib/libboost_signals.a(trackable.o): Mach header magic cputype cpusubtype caps filetype ncmds sizeofcmds flags MH_MAGIC_64 X86_64 ALL 0x00 OBJECT 3 1536 SUBSECTIONS_VIA_SYMBOLS /usr/local/lib/libboost_signals.a(connection.o): Mach header magic cputype cpusubtype caps filetype ncmds sizeofcmds flags MH_MAGIC_64 X86_64 ALL 0x00 OBJECT 3 1776 SUBSECTIONS_VIA_SYMBOLS /usr/local/lib/libboost_signals.a(named_slot_map.o): Mach header magic cputype cpusubtype caps filetype ncmds sizeofcmds flags MH_MAGIC_64 X86_64 ALL 0x00 OBJECT 3 1856 SUBSECTIONS_VIA_SYMBOLS /usr/local/lib/libboost_signals.a(signal_base.o): Mach header magic cputype cpusubtype caps filetype ncmds sizeofcmds flags MH_MAGIC_64 X86_64 ALL 0x00 OBJECT 3 1776 SUBSECTIONS_VIA_SYMBOLS /usr/local/lib/libboost_signals.a(slot.o): Mach header magic cputype cpusubtype caps filetype ncmds sizeofcmds flags MH_MAGIC_64 X86_64 ALL 0x00 OBJECT 3 1616 SUBSECTIONS_VIA_SYMBOLS Any suggestion on how to get this right?

    Read the article

  • Boost Unit testing memory reuse causing tests that should fail to pass

    - by Knyphe
    We have started using the boost unit testing library for a large existing code base, and I have run into some trouble with unit tests incorrectly passing, seemingly due to the reuse of memory on the stack. Here is my situation: BOOST_AUTO_TEST_CASE(test_select_base_instantiation_default) { SelectBase selectBase(); BOOST_CHECK_EQUAL( selectBase.getSelectType(), false); BOOST_CHECK_EQUAL( selectBase.getTypeName(_T("")); BOOST_CHECK_EQUAL( selectBase.getEntityType(), -1); BOOST_CHECK_EQUAL( selectBase.getDataPos(), -1); } BOOST_AUTO_TEST_CASE(test_select_base_instantiation_default) { SelectBase selectBase(true, _T("abc")); BOOST_CHECK_EQUAL( selectBase.getSelectType(), false); BOOST_CHECK_EQUAL( selectBase.getTypeName(_T("abc")); BOOST_CHECK_EQUAL( selectBase.getEntityType(), -1); BOOST_CHECK_EQUAL( selectBase.getDataPos(), -1); } The first test passed correctly, initializing all the variables. The constructor in the second unit test did not correctly set EntityType or DataPosition, but the unit test passed. I was able to get it to fail by placing some variables on the stack in the second test, like so: BOOST_AUTO_TEST_CASE(test_select_base_instantiation_default) { int a, b; SelectBase selectBase(true, _T("abc")); BOOST_CHECK_EQUAL( selectBase.getSelectType(), false); BOOST_CHECK_EQUAL( selectBase.getTypeName(_T("abc")); BOOST_CHECK_EQUAL( selectBase.getEntityType(), -1); BOOST_CHECK_EQUAL( selectBase.getDataPos(), -1); } If there is only one int, only the dataPos CHECK_EQUAL fails, but if there are two, both EntityType and DataPos fail, so it seems pretty clear that this is an issue with the variables being created on the same stack memory or some such. Is there a good way to clear the memory between each unit test, or am I potentially using the library incorrectly or writing bad tests? Any help would be appreciated.

    Read the article

  • Compose path (with boost::filesystem)

    - by ypnos
    I have a file that describes input data, which is split into several other files. In my descriptor file, I first give the path A that tells where all the other files are found. The originator may set either a relative (to the location of the descriptor file) or absolute path. When my program is called, the user gives the name of the descriptor file. It may not be in the current working directory, so the filename B given may also contain directories. For my program to always find the input files in the right places, I need to combine this information. If the path A given is absolute, I need to use just that one. If it is relative, I need to concatenate it to path B (i.e. the directory portion of the filename). I thought boost::filesystem::complete might do the job for me. Unfortunately, it seems it does not. I also did not understand how to test whether a given path is absolute or not. Any ideas?

    Read the article

  • Is boost::shared_ptr<XXX> thread-safe?

    - by sxingfeng
    I have a question about boost::shared_ptr. There are lots of threads. class CResource { xxxxxx }; class CResourceBase { public: void SetResource(shared_ptr<CResource> res) { m_Res = res; } shared_ptr<CResource> GetResource() { return m_Res; } private: shared_ptr<CResource> m_Res; }; CResourceBase base; //---------------------------------------------- Thread A: while (true) { ...... shared_ptr<CResource> nowResource = base.GetResource(); nowResource->doSomeThing(); ... } Thread B: shared_ptr<CResource> nowResource; base.SetResource(nowResource); ... //----------------------------------------------------------- If thread A does not care whether nowResource is the newest, will this part of the code have a problem? I mean, if thread B has not finished SetResource, could thread A get a broken smart pointer from GetResource? Another question: what does thread-safe mean here? If I do not care about whether the resource is the newest, will the shared_ptr nowResource crash the program when nowResource is released, or will releasing it corrupt the shared_ptr?

    Read the article

  • boost multi_index partial indexes

    - by Gokul
    Hi, I want to implement, inside a Boost multi-index container, two sets of keys with the same search criteria but different eviction criteria. Say I have two sets of data with the same search condition, but one set needs an MRU (Most Recently Used) list of 100 and the other set requires an MRU of 200. Say the entry is like this: class Student { int student_no; char sex; std::string address; }; The search criterion is student_no, but for sex='m' we need an MRU of 200 and for sex='f' we need an MRU of 100. Now I have a solution wherein I introduce a new ordered index to maintain ordering. For example, the IndexSpecifierList will be something like this: typedef multi_index_container< Student, indexed_by< ordered_unique< member<Student, int, &Student::student_no> >, ordered_unique< composite_key< Student, member<Student, char, &Student::sex>, member<Student, int, &Student::sex_specific_student_counter> > > > > student_set; Now every time I insert a new one, I have to take an equal_range for it using index 2 and remove the oldest one, and if something gets re-used, I have to update it by incrementing the counter. Is there a better solution to this kind of problem? Thanks, Gokul.

    Read the article

  • Python bindings for a vala library

    - by celil
    I am trying to create python bindings to a vala library using the following IBM tutorial as a reference. My initial directory has the following two files: test.vala using GLib; namespace Test { public class Test : Object { public int sum(int x, int y) { return x + y; } } } test.override %% headers #include <Python.h> #include "pygobject.h" #include "test.h" %% modulename test %% import gobject.GObject as PyGObject_Type %% ignore-glob *_get_type %% and try to build the python module source test_wrap.c using the following code build.sh #/usr/bin/env bash valac test.vala -CH test.h python /usr/share/pygobject/2.0/codegen/h2def.py test.h > test.defs pygobject-codegen-2.0 -o test.override -p test test.defs > test_wrap.c However, the last command fails with an error $ ./build.sh Traceback (most recent call last): File "/usr/share/pygobject/2.0/codegen/codegen.py", line 1720, in <module> sys.exit(main(sys.argv)) File "/usr/share/pygobject/2.0/codegen/codegen.py", line 1672, in main o = override.Overrides(arg) File "/usr/share/pygobject/2.0/codegen/override.py", line 52, in __init__ self.handle_file(filename) File "/usr/share/pygobject/2.0/codegen/override.py", line 84, in handle_file self.__parse_override(buf, startline, filename) File "/usr/share/pygobject/2.0/codegen/override.py", line 96, in __parse_override command = words[0] IndexError: list index out of range Is this a bug in pygobject, or is something wrong with my setup? What is the best way to call code written in vala from python? EDIT: Removing the extra line fixed the current problem, but now as I proceed to build the python module, I am facing another problem. Adding the following C file to the existing two in the directory: test_module.c #include <Python.h> void test_register_classes (PyObject *d); extern PyMethodDef test_functions[]; DL_EXPORT(void) inittest(void) { PyObject *m, *d; init_pygobject(); m = Py_InitModule("test", test_functions); d = PyModule_GetDict(m); test_register_classes(d); if (PyErr_Occurred ()) { Py_FatalError ("can't initialise module test"); } } and building with the following script build.sh #/usr/bin/env bash valac test.vala -CH test.h python /usr/share/pygobject/2.0/codegen/h2def.py test.h > test.defs pygobject-codegen-2.0 -o test.override -p test test.defs > test_wrap.c CFLAGS="`pkg-config --cflags pygobject-2.0` -I/usr/include/python2.6/ -I." LDFLAGS="`pkg-config --libs pygobject-2.0`" gcc $CFLAGS -fPIC -c test.c gcc $CFLAGS -fPIC -c test_wrap.c gcc $CFLAGS -fPIC -c test_module.c gcc $LDFLAGS -shared test.o test_wrap.o test_module.o -o test.so python -c 'import test; exit()' results in an error: $ ./build.sh ***INFO*** The coverage of global functions is 100.00% (1/1) ***INFO*** The coverage of methods is 100.00% (1/1) ***INFO*** There are no declared virtual proxies. ***INFO*** There are no declared virtual accessors. ***INFO*** There are no declared interface proxies. Traceback (most recent call last): File "<string>", line 1, in <module> ImportError: ./test.so: undefined symbol: init_pygobject Where is the init_pygobject symbol defined? What have I missed linking to?

    Read the article

  • SQLite, python, unicode, and non-utf data

    - by Nathan Spears
    I started by trying to store strings in sqlite using python, and got the message: sqlite3.ProgrammingError: You must not use 8-bit bytestrings unless you use a text_factory that can interpret 8-bit bytestrings (like text_factory = str). It is highly recommended that you instead just switch your application to Unicode strings. Ok, I switched to Unicode strings. Then I started getting the message: sqlite3.OperationalError: Could not decode to UTF-8 column 'tag_artist' with text 'Sigur Rós' when trying to retrieve data from the db. More research and I started encoding it in utf8, but then 'Sigur Rós' starts looking like 'Sigur Rós' note: My console was set to display in 'latin_1' as @John Machin pointed out. What gives? After reading this, describing exactly the same situation I'm in, it seems as if the advice is to ignore the other advice and use 8-bit bytestrings after all. I didn't know much about unicode and utf before I started this process. I've learned quite a bit in the last couple hours, but I'm still ignorant of whether there is a way to correctly convert 'ó' from latin-1 to utf-8 and not mangle it. If there isn't, why would sqlite 'highly recommend' I switch my application to unicode strings? I'm going to update this question with a summary and some example code of everything I've learned in the last 24 hours so that someone in my shoes can have an easy(er) guide. If the information I post is wrong or misleading in any way please tell me and I'll update, or one of you senior guys can update. Summary of answers Let me first state the goal as I understand it. The goal in processing various encodings, if you are trying to convert between them, is to understand what your source encoding is, then convert it to unicode using that source encoding, then convert it to your desired encoding. Unicode is a base and encodings are mappings of subsets of that base. utf_8 has room for every character in unicode, but because they aren't in the same place as, for instance, latin_1, a string encoded in utf_8 and sent to a latin_1 console will not look the way you expect. In python the process of getting to unicode and into another encoding looks like: str.decode('source_encoding').encode('desired_encoding') or if the str is already in unicode str.encode('desired_encoding') For sqlite I didn't actually want to encode it again, I wanted to decode it and leave it in unicode format. Here are four things you might need to be aware of as you try to work with unicode and encodings in python. The encoding of the string you want to work with, and the encoding you want to get it to. The system encoding. The console encoding. The encoding of the source file Elaboration: (1) When you read a string from a source, it must have some encoding, like latin_1 or utf_8. In my case, I'm getting strings from filenames, so unfortunately, I could be getting any kind of encoding. Windows XP uses UCS-2 (a Unicode system) as its native string type, which seems like cheating to me. Fortunately for me, the characters in most filenames are not going to be made up of more than one source encoding type, and I think all of mine were either completely latin_1, completely utf_8, or just plain ascii (which is a subset of both of those). So I just read them and decoded them as if they were still in latin_1 or utf_8. It's possible, though, that you could have latin_1 and utf_8 and whatever other characters mixed together in a filename on Windows. 
Sometimes those characters can show up as boxes, other times they just look mangled, and other times they look correct (accented characters and whatnot). Moving on. (2) Python has a default system encoding that gets set when python starts and can't be changed during runtime. See here for details. Dirty summary ... well here's the file I added: \# sitecustomize.py \# this file can be anywhere in your Python path, \# but it usually goes in ${pythondir}/lib/site-packages/ import sys sys.setdefaultencoding('utf_8') This system encoding is the one that gets used when you use the unicode("str") function without any other encoding parameters. To say that another way, python tries to decode "str" to unicode based on the default system encoding. (3) If you're using IDLE or the command-line python, I think that your console will display according to the default system encoding. I am using pydev with eclipse for some reason, so I had to go into my project settings, edit the launch configuration properties of my test script, go to the Common tab, and change the console from latin-1 to utf-8 so that I could visually confirm what I was doing was working. (4) If you want to have some test strings, eg test_str = "ó" in your source code, then you will have to tell python what kind of encoding you are using in that file. (FYI: when I mistyped an encoding I had to ctrl-Z because my file became unreadable.) This is easily accomplished by putting a line like so at the top of your source code file: # -*- coding: utf_8 -*- If you don't have this information, python attempts to parse your code as ascii by default, and so: SyntaxError: Non-ASCII character '\xf3' in file _redacted_ on line 81, but no encoding declared; see http://www.python.org/peps/pep-0263.html for details Once your program is working correctly, or, if you aren't using python's console or any other console to look at output, then you will probably really only care about #1 on the list. System default and console encoding are not that important unless you need to look at output and/or you are using the builtin unicode() function (without any encoding parameters) instead of the string.decode() function. I wrote a demo function I will paste into the bottom of this gigantic mess that I hope correctly demonstrates the items in my list. Here is some of the output when I run the character 'ó' through the demo function, showing how various methods react to the character as input. My system encoding and console output are both set to utf_8 for this run: '?' = original char <type 'str'> repr(char)='\xf3' '?' = unicode(char) ERROR: 'utf8' codec can't decode byte 0xf3 in position 0: unexpected end of data 'ó' = char.decode('latin_1') <type 'unicode'> repr(char.decode('latin_1'))=u'\xf3' '?' = char.decode('utf_8') ERROR: 'utf8' codec can't decode byte 0xf3 in position 0: unexpected end of data Now I will change the system and console encoding to latin_1, and I get this output for the same input: 'ó' = original char <type 'str'> repr(char)='\xf3' 'ó' = unicode(char) <type 'unicode'> repr(unicode(char))=u'\xf3' 'ó' = char.decode('latin_1') <type 'unicode'> repr(char.decode('latin_1'))=u'\xf3' '?' = char.decode('utf_8') ERROR: 'utf8' codec can't decode byte 0xf3 in position 0: unexpected end of data Notice that the 'original' character displays correctly and the builtin unicode() function works now. Now I change my console output back to utf_8. '?' = original char <type 'str'> repr(char)='\xf3' '?' = unicode(char) <type 'unicode'> repr(unicode(char))=u'\xf3' '?' 
= char.decode('latin_1') <type 'unicode'> repr(char.decode('latin_1'))=u'\xf3' '?' = char.decode('utf_8') ERROR: 'utf8' codec can't decode byte 0xf3 in position 0: unexpected end of data Here everything still works the same as last time but the console can't display the output correctly. Etc. The function below also displays more information that this and hopefully would help someone figure out where the gap in their understanding is. I know all this information is in other places and more thoroughly dealt with there, but I hope that this would be a good kickoff point for someone trying to get coding with python and/or sqlite. Ideas are great but sometimes source code can save you a day or two of trying to figure out what functions do what. Disclaimers: I'm no encoding expert, I put this together to help my own understanding. I kept building on it when I should have probably started passing functions as arguments to avoid so much redundant code, so if I can I'll make it more concise. Also, utf_8 and latin_1 are by no means the only encoding schemes, they are just the two I was playing around with because I think they handle everything I need. Add your own encoding schemes to the demo function and test your own input. One more thing: there are apparently crazy application developers making life difficult in Windows. #!/usr/bin/env python # -*- coding: utf_8 -*- import os import sys def encodingDemo(str): validStrings = () try: print "str =",str,"{0} repr(str) = {1}".format(type(str), repr(str)) validStrings += ((str,""),) except UnicodeEncodeError as ude: print "Couldn't print the str itself because the console is set to an encoding that doesn't understand some character in the string. See error:\n\t", print ude try: x = unicode(str) print "unicode(str) = ",x validStrings+= ((x, " decoded into unicode by the default system encoding"),) except UnicodeDecodeError as ude: print "ERROR. unicode(str) couldn't decode the string because the system encoding is set to an encoding that doesn't understand some character in the string." print "\tThe system encoding is set to {0}. See error:\n\t".format(sys.getdefaultencoding()), print ude except UnicodeEncodeError as uee: print "ERROR. Couldn't print the unicode(str) because the console is set to an encoding that doesn't understand some character in the string. See error:\n\t", print uee try: x = str.decode('latin_1') print "str.decode('latin_1') =",x validStrings+= ((x, " decoded with latin_1 into unicode"),) try: print "str.decode('latin_1').encode('utf_8') =",str.decode('latin_1').encode('utf_8') validStrings+= ((x, " decoded with latin_1 into unicode and encoded into utf_8"),) except UnicodeDecodeError as ude: print "The string was decoded into unicode using the latin_1 encoding, but couldn't be encoded into utf_8. See error:\n\t", print ude except UnicodeDecodeError as ude: print "Something didn't work, probably because the string wasn't latin_1 encoded. See error:\n\t", print ude except UnicodeEncodeError as uee: print "ERROR. Couldn't print the str.decode('latin_1') because the console is set to an encoding that doesn't understand some character in the string. See error:\n\t", print uee try: x = str.decode('utf_8') print "str.decode('utf_8') =",x validStrings+= ((x, " decoded with utf_8 into unicode"),) try: print "str.decode('utf_8').encode('latin_1') =",str.decode('utf_8').encode('latin_1') except UnicodeDecodeError as ude: print "str.decode('utf_8').encode('latin_1') didn't work. 
The string was decoded into unicode using the utf_8 encoding, but couldn't be encoded into latin_1. See error:\n\t", validStrings+= ((x, " decoded with utf_8 into unicode and encoded into latin_1"),) print ude except UnicodeDecodeError as ude: print "str.decode('utf_8') didn't work, probably because the string wasn't utf_8 encoded. See error:\n\t", print ude except UnicodeEncodeError as uee: print "ERROR. Couldn't print the str.decode('utf_8') because the console is set to an encoding that doesn't understand some character in the string. See error:\n\t",uee print print "Printing information about each character in the original string." for char in str: try: print "\t'" + char + "' = original char {0} repr(char)={1}".format(type(char), repr(char)) except UnicodeDecodeError as ude: print "\t'?' = original char {0} repr(char)={1} ERROR PRINTING: {2}".format(type(char), repr(char), ude) except UnicodeEncodeError as uee: print "\t'?' = original char {0} repr(char)={1} ERROR PRINTING: {2}".format(type(char), repr(char), uee) print uee try: x = unicode(char) print "\t'" + x + "' = unicode(char) {1} repr(unicode(char))={2}".format(x, type(x), repr(x)) except UnicodeDecodeError as ude: print "\t'?' = unicode(char) ERROR: {0}".format(ude) except UnicodeEncodeError as uee: print "\t'?' = unicode(char) {0} repr(char)={1} ERROR PRINTING: {2}".format(type(x), repr(x), uee) try: x = char.decode('latin_1') print "\t'" + x + "' = char.decode('latin_1') {1} repr(char.decode('latin_1'))={2}".format(x, type(x), repr(x)) except UnicodeDecodeError as ude: print "\t'?' = char.decode('latin_1') ERROR: {0}".format(ude) except UnicodeEncodeError as uee: print "\t'?' = char.decode('latin_1') {0} repr(char)={1} ERROR PRINTING: {2}".format(type(x), repr(x), uee) try: x = char.decode('utf_8') print "\t'" + x + "' = char.decode('utf_8') {1} repr(char.decode('utf_8'))={2}".format(x, type(x), repr(x)) except UnicodeDecodeError as ude: print "\t'?' = char.decode('utf_8') ERROR: {0}".format(ude) except UnicodeEncodeError as uee: print "\t'?' = char.decode('utf_8') {0} repr(char)={1} ERROR PRINTING: {2}".format(type(x), repr(x), uee) print x = 'ó' encodingDemo(x) Much thanks for the answers below and especially to @John Machin for answering so thoroughly.
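    Here is a minimal, self-contained Python 2 sketch of the round trip described above, using an in-memory database and made-up table/column names; sqlite3 wants unicode going in, and text_factory controls what comes back out.

```python
import sqlite3

raw = 'Sigur R\xf3s'            # a latin-1 bytestring, e.g. read from a filename
artist = raw.decode('latin_1')  # decode with the *source* encoding -> unicode

conn = sqlite3.connect(':memory:')
conn.execute("CREATE TABLE tracks (tag_artist TEXT)")
conn.execute("INSERT INTO tracks VALUES (?)", (artist,))  # pass unicode, not bytes

# By default rows come back as unicode objects, unmangled:
print repr(conn.execute("SELECT tag_artist FROM tracks").fetchone()[0])

# If a table already holds non-UTF-8 bytes (the OperationalError case above),
# a lenient text_factory can decode them instead of raising:
conn.text_factory = lambda b: unicode(b, 'utf_8', 'replace')
```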

    Read the article

  • What's the best way to create a static utility class in Python? Is using metaclasses a code smell?

    - by rsimp
    Ok so I need to create a bunch of utility classes in python. Normally I would just use a simple module for this but I need to be able to inherit in order to share common code between them. The common code needs to reference the state of the module using it so simple imports wouldn't work well. I don't like singletons, and classes that use the classmethod decorator do not have proper support for python properties. One pattern I see used a lot is creating an internal python class prefixed with an underscore and creating a single instance which is then explicitly imported or set as the module itself. This is also used by fabric to create a common environment object (fabric.api.env). I've realized another way to accomplish this would be with metaclasses. For example: #util.py class MetaFooBase(type): @property def file_path(cls): raise NotImplementedError def inherited_method(cls): print cls.file_path #foo.py from util import * import env class MetaFoo(MetaFooBase): @property def file_path(cls): return env.base_path + "relative/path" def another_class_method(cls): pass class Foo(object): __metaclass__ = MetaFoo #client.py from foo import Foo file_path = Foo.file_path I like this approach better than the first pattern for a few reasons: First, instantiating Foo would be meaningless as it has no attributes or methods, which insures this class acts like a true single interface utility, unlike the first pattern which relies on the underscore convention to dissuade client code from creating more instances of the internal class. Second, sub-classing MetaFoo in a different module wouldn't be as awkward because I wouldn't be importing a class with an underscore which is inherently going against its private naming convention. Third, this seems to be the closest approximation to a static class that exists in python, as all the meta code applies only to the class and not to its instances. This is shown by the common convention of using cls instead of self in the class methods. As well, the base class inherits from type instead of object which would prevent users from trying to use it as a base for other non-static classes. It's implementation as a static class is also apparent when using it by the naming convention Foo, as opposed to foo, which denotes a static class method is being used. As much as I think this is a good fit, I feel that others might feel its not pythonic because its not a sanctioned use for metaclasses which should be avoided 99% of the time. I also find most python devs tend to shy away from metaclasses which might affect code reuse/maintainability. Is this code considered code smell in the python community? I ask because I'm creating a pypi package, and would like to do everything I can to increase adoption.
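    For comparison, a minimal sketch of the underscore-class-plus-single-instance pattern mentioned above (the fabric.api.env style); module and attribute names are illustrative only. Ordinary properties work here because clients talk to an instance, not to the class.

```python
# util.py (hypothetical module)
import os

class _Env(object):
    """Internal class; client code imports the shared 'env' instance below."""
    def __init__(self):
        self.base_path = os.getcwd()   # mutable state shared by all importers

    @property
    def file_path(self):
        return os.path.join(self.base_path, "relative/path")

env = _Env()   # the single shared object other modules import

# client.py (hypothetical usage)
#   from util import env
#   env.base_path = "/srv/project"
#   print env.file_path
```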

    Read the article

  • Getting an error while installing mod_wsgi on CentOS 6.3 with Python 2.7

    - by user825904
    In initially installed yum install mod_wsgi and i think it was linked with python 2.6 Now is there any way to link it with 2.7 I tried configuring from the source and i get this error apxs -c -I/usr/local/include/python2.7 -DNDEBUG mod_wsgi.c -L/usr/local/lib -L/usr/local/lib/python2.7/config -lpython2.7 -lpthread -ldl -lutil -lm /usr/lib64/apr-1/build/libtool --silent --mode=compile gcc -prefer-pic -O2 -g -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector --param=ssp-buffer-size=4 -m64 -mtune=generic -Wformat-security -fno-strict-aliasing -DLINUX=2 -D_REENTRANT -D_GNU_SOURCE -pthread -I/usr/include/httpd -I/usr/include/apr-1 -I/usr/include/apr-1 -I/usr/local/include/python2.7 -DNDEBUG -c -o mod_wsgi.lo mod_wsgi.c && touch mod_wsgi.slo In file included from /usr/local/include/python2.7/Python.h:8, from mod_wsgi.c:142: /usr/local/include/python2.7/pyconfig.h:1161:1: warning: "_POSIX_C_SOURCE" redefined In file included from /usr/include/sys/types.h:26, from /usr/include/apr-1/apr-x86_64.h:127, from /usr/include/apr-1/apr.h:19, from /usr/include/httpd/ap_config.h:25, from /usr/include/httpd/httpd.h:43, from mod_wsgi.c:34: /usr/include/features.h:162:1: warning: this is the location of the previous definition In file included from /usr/local/include/python2.7/Python.h:8, from mod_wsgi.c:142: /usr/local/include/python2.7/pyconfig.h:1183:1: warning: "_XOPEN_SOURCE" redefined In file included from /usr/include/sys/types.h:26, from /usr/include/apr-1/apr-x86_64.h:127, from /usr/include/apr-1/apr.h:19, from /usr/include/httpd/ap_config.h:25, from /usr/include/httpd/httpd.h:43, from mod_wsgi.c:34: /usr/include/features.h:164:1: warning: this is the location of the previous definition mod_wsgi.c: In function ‘wsgi_server_group’: mod_wsgi.c:991: warning: unused variable ‘value’ mod_wsgi.c: In function ‘Log_isatty’: mod_wsgi.c:1665: warning: unused variable ‘result’ mod_wsgi.c: In function ‘Log_writelines’: mod_wsgi.c:1802: warning: unused variable ‘msg’ mod_wsgi.c: In function ‘Adapter_output’: mod_wsgi.c:3087: warning: unused variable ‘n’ mod_wsgi.c: In function ‘Adapter_file_wrapper’: mod_wsgi.c:4138: warning: unused variable ‘result’ mod_wsgi.c: In function ‘wsgi_python_term’: mod_wsgi.c:5850: warning: unused variable ‘tstate’ mod_wsgi.c:5849: warning: unused variable ‘interp’ mod_wsgi.c: In function ‘wsgi_python_child_init’: mod_wsgi.c:7050: warning: unused variable ‘l’ mod_wsgi.c:6948: warning: unused variable ‘interp’ mod_wsgi.c: In function ‘wsgi_add_import_script’: mod_wsgi.c:7701: warning: unused variable ‘error’ mod_wsgi.c: In function ‘wsgi_add_handler_script’: mod_wsgi.c:8179: warning: unused variable ‘dconfig’ mod_wsgi.c:8178: warning: unused variable ‘sconfig’ mod_wsgi.c: In function ‘wsgi_hook_handler’: mod_wsgi.c:9375: warning: suggest parentheses around assignment used as truth value mod_wsgi.c:9377: warning: suggest parentheses around assignment used as truth value mod_wsgi.c:9379: warning: suggest parentheses around assignment used as truth value mod_wsgi.c:9383: warning: suggest parentheses around assignment used as truth value mod_wsgi.c:9403: warning: suggest parentheses around assignment used as truth value mod_wsgi.c:9405: warning: suggest parentheses around assignment used as truth value mod_wsgi.c:9408: warning: suggest parentheses around assignment used as truth value mod_wsgi.c: In function ‘wsgi_daemon_worker’: mod_wsgi.c:10819: warning: unused variable ‘duration’ mod_wsgi.c:10818: warning: unused variable ‘start’ mod_wsgi.c: In function 
‘wsgi_hook_daemon_handler’: mod_wsgi.c:13172: warning: unused variable ‘i’ mod_wsgi.c:13170: warning: unused variable ‘elts’ mod_wsgi.c:13169: warning: unused variable ‘head’ mod_wsgi.c: At top level: mod_wsgi.c:8142: warning: ‘wsgi_set_user_authoritative’ defined but not used mod_wsgi.c:15251: warning: ‘wsgi_hook_check_user_id’ defined but not used /usr/lib64/apr-1/build/libtool --silent --mode=link gcc -o mod_wsgi.la -rpath /usr/lib64/httpd/modules -module -avoid-version mod_wsgi.lo -L/usr/local/lib -L/usr/local/lib/python2.7/config -lpython2.7 -lpthread -ldl -lutil -lm /usr/bin/ld: /usr/local/lib/libpython2.7.a(abstract.o): relocation R_X86_64_32 against `.rodata.str1.8' can not be used when making a shared object; recompile with -fPIC /usr/local/lib/libpython2.7.a: could not read symbols: Bad value collect2: ld returned 1 exit status apxs:Error: Command failed with rc=65536 . make: *** [mod_wsgi.la] Error 1 Waiting for Graham

    Read the article

  • OSX: Python packages fail to install, error message "/usr/local/bin: File Exists"

    - by kylehotchkiss
    I keep trying to install Django and other Python packages, and I keep getting the exact same error message: Installing django-admin.py script to /usr/local/bin error: /usr/local/bin: File exists So I looked to make sure that my /usr/local folder is okay. At first glance it appears okay, until I try cd-ing into bin. It says it can't because it's not a directory. Peculiar, I thought, so then I tried an ls: Anchorage:local khotchkiss$ ls -a -l total 26168 drwxr-xr-x 6 root wheel 204 Dec 26 20:18 . drwxr-xr-x@ 14 root wheel 476 Feb 24 12:54 .. -rwxr-xr-x@ 1 root wheel 13395080 Oct 22 23:04 bin drwxr-xr-x 8 root wheel 272 Dec 26 20:18 git drwxr-xr-x 4 root wheel 136 Dec 18 11:31 include drwxr-xr-x 12 root wheel 408 Dec 18 11:31 lib And I haven't a clue what the 'bin' is, why it's so large, and why it's preventing me from installing Python packages. Any clue?

    Read the article

  • Deploying a Python application, from a PHP developer

    - by user1218776
    I'm a little confused about the deployment process for Python. Let's say you create a brand new project with virtualenv: source bin/activate, pip install a few libraries, write a simple hello world app, pip freeze the dependencies. When I deploy this code onto a machine, do I first need to activate (source) the virtualenv on that machine before installing dependencies? I don't mean to sound like a total noob, but in the PHP world I don't have to worry about this because it's already part of the project: all the dependencies are registered with the autoloader in place. The steps would be: rsync the files (or any other method), source bin/activate, pip install the dependencies from the pip freeze output file. It feels awkward, or just wrong and very error-prone. What are the correct steps to take? I've searched around, but it seems many tutorials/articles assume that anyone reading them has past Python experience (imo).
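    A minimal sketch of those steps as a script (host, paths and the requirements filename are assumptions). Note that the virtualenv never needs to be "sourced" on the target machine: calling its bin/pip and bin/python directly has the same effect as activating it.

```python
import subprocess

HOST = "user@example.com"      # assumption
APP_DIR = "/srv/myapp"         # assumption

def ssh(cmd):
    subprocess.check_call(["ssh", HOST, cmd])

# 1. copy the code (requirements.txt is the pip freeze output, kept with the project)
subprocess.check_call(["rsync", "-az", "--exclude", "env", "./", HOST + ":" + APP_DIR])
# 2. create the virtualenv once (no-op if it already exists)
ssh("test -d %s/env || virtualenv %s/env" % (APP_DIR, APP_DIR))
# 3. install pinned dependencies through the env's own pip, no activation needed
ssh("%s/env/bin/pip install -r %s/requirements.txt" % (APP_DIR, APP_DIR))
# 4. run the app with the env's interpreter instead of sourcing bin/activate
ssh("%s/env/bin/python %s/hello.py" % (APP_DIR, APP_DIR))
```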

    Read the article

  • Problem installing the sqlite3 module for Python 2.6 on an Ubuntu system

    - by Hoang
    I need to run the sqlite3 module on Python 2.6 on an Ubuntu system. How do I install this module for Python 2.6? Somehow I don't have this module; it raises the error: >>> import sqlite3 Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/usr/local/lib/python2.6/sqlite3/__init__.py", line 24, in <module> from dbapi2 import * File "/usr/local/lib/python2.6/sqlite3/dbapi2.py", line 27, in <module> from _sqlite3 import * ImportError: No module named _sqlite3
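    The usual cause is a Python built from source before the SQLite development headers were present (install libsqlite3-dev and rebuild, and the _sqlite3 extension gets compiled). Until then, a common fallback of that era was the separately installable pysqlite2 package, which exposes the same DB-API; a small sketch:

```python
try:
    import sqlite3                           # stdlib wrapper around _sqlite3
except ImportError:
    from pysqlite2 import dbapi2 as sqlite3  # fallback: easy_install pysqlite

conn = sqlite3.connect(':memory:')
print conn.execute("SELECT sqlite_version()").fetchone()[0]
```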

    Read the article

  • Integrate Python Projects Into Xcode

    - by Vynile
    Hi! I'm a Mac user, and one of my hobbies is programming. I use Xcode, the integrated IDE of Mac OS X. I started to learn the Python programming language, and I want to use Xcode for developing my scripts. I searched the internet for weeks, but I didn't find anything helpful. Firstly, I want to update the integrated interpreter of Mac OS X, which is at version 2.6. And secondly, I want to create a Python project in Xcode easily, like I do with C & C++ projects. Can you help me? I really need help! Cordially.

    Read the article

  • Running Python scripts in a browser

    - by sunwukung
    I want to start learning Python - and I'm having trouble getting scripts to load up in a browser (using Wamp). So far I've tried the following: 1: add the following lines to httpd.conf: AddHandler cgi-script .py Options ExecCGI I navigate to localhost/path/to/script/myscript.py but get an Internal Server Error. 2: downloaded mod_wsgi-win32-ap22py26-3.0.so, renamed it to mod_wsgi (running Wamp with Apache 2.2), and added the following lines to httpd.conf: AddHandler mod_wsgi .py WSGIScriptAlias /wsgi/ "path/to/my/pythonscripts/folder/" but when I navigate to the script it renders the script in its entirety, i.e. #!c:/Python26/python.exe -u print "hello world" I managed to get CherryPy working, but ideally I want to learn the language in a relatively raw context before digging into a framework. Can anyone give me some pointers?
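    For the CGI route, a 500 usually means the script died or printed no header block; a minimal sketch of a well-formed CGI script (the Windows interpreter path is taken from the question and may need adjusting):

```python
#!c:/Python26/python.exe -u
# Headers first, then a blank line, then the body -- omitting the
# Content-Type header is a classic cause of "Internal Server Error".
print "Content-Type: text/html"
print
print "<html><body><h1>hello world from CGI</h1></body></html>"
```

    As for the second attempt: the raw source shows up because .py is still being served as plain content; mod_wsgi normally needs a LoadModule line and registers its handler as AddHandler wsgi-script .py rather than AddHandler mod_wsgi .py.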

    Read the article

  • FCGI & recompiling python code without restarting apache.

    - by Zayatzz
    Hello. At one hosting company, they used to run Python projects with FCGI. They had set it up so that when I changed the django.fcgi file, which put Django & my project on the Python path, my project code was instantly recompiled. Because of that, a friend set up hosting for our shared project on his server using FastCGI. It has been set up and the Python scripts execute as they should, but what we do not know is how to set it up so that my project is recompiled when my setup file has been changed. Alan
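    A hedged sketch of the usual trick for this kind of setup: the .fcgi wrapper watches its own modification time and exits when the file is touched, so the FastCGI manager spawns a fresh process (and therefore re-imports your project code) on the next request. Paths, the settings module and the old-style Django runfastcgi import are assumptions to adapt.

```python
#!/usr/bin/env python
import os, sys, threading, time

SELF = os.path.abspath(__file__)
STARTED = os.stat(SELF).st_mtime

def _watch():
    while True:
        time.sleep(1)
        if os.stat(SELF).st_mtime != STARTED:
            os._exit(3)          # die; the FastCGI process manager restarts us

watcher = threading.Thread(target=_watch)
watcher.daemon = True
watcher.start()

sys.path.insert(0, "/home/alan/myproject")                    # assumption
os.environ["DJANGO_SETTINGS_MODULE"] = "myproject.settings"   # assumption

from django.core.servers.fastcgi import runfastcgi            # Django-of-that-era FCGI entry point
runfastcgi(method="threaded", daemonize="false")
```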

    Read the article

  • Set default open-with app to a Python program on a Mac

    - by Vincent
    I use the open source application UliPad to edit restructured text files (rst). It is a Python application that I launch via Terminal like so: python32 UliPad.py I have python32 as an alias to the 32bit install of Python on my machine. I have several versions installed. First I would like a way to launch UliPad like other OS X apps. Not really sure how to do this. Secondly, I would like to set all .rst files to be opened with UliPad.py. Is there a way to do this? I know how to choose the default app in Finder but not sure how to choose UliPad as that app.
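    One workable approach (a sketch; every path below is an assumption): wrap the Terminal invocation in a tiny launcher script, turn that into a double-clickable app with Automator's "Application" template or Platypus, and pick that app in Finder's "Open with" dialog for .rst files.

```python
#!/usr/bin/env python
# ulipad_open.py -- hypothetical wrapper that forwards the chosen files to UliPad
import os
import subprocess
import sys

PYTHON32 = "/usr/local/bin/python2.6"                    # whatever 'python32' aliases to
ULIPAD = os.path.expanduser("~/apps/ulipad/UliPad.py")   # assumed install location

subprocess.call([PYTHON32, ULIPAD] + sys.argv[1:])       # pass through the files Finder hands us
```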

    Read the article

  • Packaging python software with custom dependencies

    - by viraptor
    Hi, I'm looking for a good way to package a Python application that is going to be deployed on a Debian server. The application itself depends on some modules which are not included in the base Debian repository, although they might be in the future. This creates some problems... I depend on some patches to those modules. If the original module gets installed one day, the application will break. However, if I install everything I need in a virtualenv just for that application, I lose the ability to upgrade Python itself (in case of security updates). The third option would be to rename my fork of the upstream module and just treat it as a completely separate one. But that would mean changing the code (not much work, but it wouldn't be that clean / universal anymore). Are there any other options that I missed? Are there any pros / cons I didn't see in the solutions above?
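    One more option, sketched below: vendor the patched modules inside the application itself, so a later system-wide install of the unpatched package can never shadow them, while the interpreter stays the distro's and keeps getting security updates. The directory and module names are illustrative.

```python
# myapp/__init__.py (hypothetical): prefer bundled, patched copies in vendor/
import os
import sys

_VENDOR = os.path.join(os.path.dirname(os.path.abspath(__file__)), "vendor")
if _VENDOR not in sys.path:
    sys.path.insert(0, _VENDOR)      # patched packages win over site-packages

import somepatchedmodule             # hypothetical dependency, resolved from vendor/ first
```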

    Read the article

  • Getting .py scripts to open up in Python's IDLE

    - by Jahkr
    Hi there, I'm new to Python. I use Python 2.7 and I am running Windows Vista (64-bit). How do I make it so that when I click on .py scripts they open up in IDLE, so I can edit them in a snap? Ya know... without having to open IDLE by itself. Heh. I got all the way to C:\Python27\Lib\idlelib but I don't see the IDLE application. Then when I right-click, choose "Default open with" and select the idle.bat file... I get this: Thanks!

    Read the article

  • Python and mod_wsgi path issue

    - by jasonh
    I have an AIX 6.1 system on which I've compiled and installed: Apache 2.2.21 (into /usr/local/mercurial), Python 2.7.2 (into /usr/local/bin and /usr/local/lib), mod_wsgi 3.3 (with the AIX fix #1 described here), Mercurial 2.0 (system-wide). However, when Apache starts, I get the following message in error_log: IOError: invalid Python installation: unable to open /usr/local/bin/lib/python2.7/config/Makefile (No such file or directory) See the problem? bin/lib doesn't exist. /usr/local/lib/python2.7/config/Makefile does exist, though. However, I can't figure out where it's getting that path from. Here are the environment variables I've got: PYTHONHOME=/usr/local/bin PYTHONPATH=/usr/local/lib/python2.7 LIBPATH="/usr/local/mercurial/lib:$LIBPATH" PATH=/usr/local/bin:/usr/local/lib:$PATH LDR_CNTRL="MAXDATA=0x80000000" AIXTHREAD_SCOPE=S AIXTHREAD_MUTEX_DEBUG=OFF AIXTHREAD_RWLOCK_DEBUG=OFF AIXTHREAD_COND_DEBUG=OFF SPINLOOPTIME=1000 YIELDLOOPTIME=8 MALLOCMULTIHEAP=considersize,heaps:8 I've tried all sorts of combinations with and without PYTHONHOME, PYTHONLIB and PATH in envvars. My PATH, in case it matters, is: /usr/bin:/etc:/usr/sbin:/usr/ucb:/usr/bin/X11:/sbin:/usr/opt/ifor/bin:/usr/local/bin:.
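    The failing path is PYTHONHOME with lib/python2.7/... appended, which suggests PYTHONHOME should point at the installation prefix (/usr/local here), not at the bin directory, since the interpreter builds the library path from that prefix itself. A small throwaway WSGI app (a framework-free sketch) can confirm which prefix and path the embedded interpreter actually picked up:

```python
import sys

def application(environ, start_response):
    # Dump the interpreter's view of its own installation for debugging.
    body = "prefix=%s\nexec_prefix=%s\nsys.path:\n  %s\n" % (
        sys.prefix, sys.exec_prefix, "\n  ".join(sys.path))
    start_response("200 OK", [("Content-Type", "text/plain"),
                              ("Content-Length", str(len(body)))])
    return [body]
```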

    Read the article

  • What are Python libraries to work with git without installing git?

    - by Arash
    I want to develop an application using Python. It should be able to work with git repositories (show diffs, ...). I need a Python library for working with .git repositories (creating, cloning, committing, ...) without installing git on my system. It would be nice if you gave your own opinion of each library you suggest. Information about how good its documentation is, how bug-free it is, and whether it is under active development is appreciated. Thanks in advance
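    Dulwich is the usual answer here: a pure-Python implementation of the git file formats and protocols, so no git binary is required (GitPython, by contrast, shells out to the git executable). A hedged sketch using its porcelain layer; the API is from memory, so check the Dulwich docs for exact signatures.

```python
from dulwich import porcelain

repo = porcelain.init("/tmp/demo-repo")                 # create a new repository
with open("/tmp/demo-repo/hello.txt", "w") as f:
    f.write("hello\n")
porcelain.add(repo, ["/tmp/demo-repo/hello.txt"])       # stage the file
porcelain.commit(repo, message="first commit",
                 author="Arash <arash@example.com>")    # hypothetical identity
porcelain.log(repo)                                     # prints the history to stdout
```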

    Read the article
