Search Results

Search found 27655 results on 1107 pages for 'visual python'.


  • virtualenv on Mac OS X: --no-site-packages ignored

    - by Tristram Gräbener
    Hello, I'm having problems with Mac OS X and virtualenv: it seems to ignore --no-site-packages. Using exactly the same commands on Linux (Arch Linux) it works. This is Mac OS X 10.5 with Python 2.5.

    curl -o virtualenv.py 'http://bitbucket.org/ianb/virtualenv/raw/tip/virtualenv.py'

    Create a new environment:
    python virtualenv.py --no-site-packages foo
    New python executable in foo/bin/python
    Installing setuptools...........................done.

    Activate it:
    source foo/bin/activate

    Try to install something in it. Despite virtualenv, it looks for the system-wide install:
    easy_install cherrypy
    Searching for cherrypy
    Best match: CherryPy 3.1.2
    Adding CherryPy 3.1.2 to easy-install.pth file
    Using /Library/Python/2.5/site-packages
    Processing dependencies for cherrypy
    Finished processing dependencies for cherrypy

    Yet it doesn't find the module:
    (foo)guidage-multimodal:~ tristram$ python
    Python 2.5.1 (r251:54863, Feb 6 2009, 19:02:12) [GCC 4.0.1 (Apple Inc. build 5465)] on darwin
    Type "help", "copyright", "credits" or "license" for more information.
    >>> import cherrypy
    Traceback (most recent call last):
      File "<stdin>", line 1, in <module>
    ImportError: No module named cherrypy

    I tried pip after looking at http://stackoverflow.com/questions/1382925/virtualenv-no-site-packages-and-pip-still-finding-global-packages, but it fails installing psycopg2 (some problems with gcc). I would also like to have a setup.py (from distribute) that does the whole work.
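
    One way to narrow this down (a suggestion, not part of the original question): the easy_install output above is using /Library/Python/2.5/site-packages, i.e. the system-wide easy_install rather than the one inside foo/bin, so it is worth checking whether the environment itself is actually isolated. A minimal, hypothetical check script, run with the environment's interpreter (foo/bin/python check_env.py):

    import sys
    print sys.prefix                                        # should point inside .../foo
    print [p for p in sys.path if 'site-packages' in p]     # system site-packages should not appear here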

    Read the article

  • Python3 and ftplib uploading files

    - by Teifion
    My Python 2 script uploads files nicely using this method, but Python 3 is presenting problems and I'm stuck as to where to go next (googling hasn't helped).

    from ftplib import FTP
    ftp = FTP(ftp_host, ftp_user, ftp_pass)
    ftp.storbinary('STOR myfile.txt', open('myfile.txt'))

    The error I get is:

    Traceback (most recent call last):
      File "/Library/WebServer/CGI-Executables/rob3/functions/cli_f.py", line 12, in upload
        ftp.storlines('STOR myfile.txt', open('myfile.txt'))
      File "/Library/Frameworks/Python.framework/Versions/3.1/lib/python3.1/ftplib.py", line 454, in storbinary
        conn.sendall(buf)
    TypeError: must be bytes or buffer, not str

    I tried altering the code to:

    from ftplib import FTP
    ftp = FTP(ftp_host, ftp_user, ftp_pass)
    ftp.storbinary('STOR myfile.txt'.encode('utf-8'), open('myfile.txt'))

    But instead I got this:

    Traceback (most recent call last):
      File "/Library/WebServer/CGI-Executables/rob3/functions/cli_f.py", line 12, in upload
        ftp.storbinary('STOR myfile.txt'.encode('utf-8'), open('myfile.txt'))
      File "/Library/Frameworks/Python.framework/Versions/3.1/lib/python3.1/ftplib.py", line 450, in storbinary
        conn = self.transfercmd(cmd)
      File "/Library/Frameworks/Python.framework/Versions/3.1/lib/python3.1/ftplib.py", line 358, in transfercmd
        return self.ntransfercmd(cmd, rest)[0]
      File "/Library/Frameworks/Python.framework/Versions/3.1/lib/python3.1/ftplib.py", line 329, in ntransfercmd
        resp = self.sendcmd(cmd)
      File "/Library/Frameworks/Python.framework/Versions/3.1/lib/python3.1/ftplib.py", line 244, in sendcmd
        self.putcmd(cmd)
      File "/Library/Frameworks/Python.framework/Versions/3.1/lib/python3.1/ftplib.py", line 179, in putcmd
        self.putline(line)
      File "/Library/Frameworks/Python.framework/Versions/3.1/lib/python3.1/ftplib.py", line 172, in putline
        line = line + CRLF
    TypeError: can't concat bytes to str

    Can anybody point me in the right direction?
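
    The usual Python 3 fix (a sketch, not taken from the original post) is to keep the command as a str but open the local file in binary mode, so storbinary() gets bytes to send over the data connection:

    from ftplib import FTP

    ftp = FTP(ftp_host, ftp_user, ftp_pass)      # ftp_host/ftp_user/ftp_pass as defined in the script
    with open('myfile.txt', 'rb') as f:          # 'rb' is the key change: the file yields bytes, not str
        ftp.storbinary('STOR myfile.txt', f)
    ftp.quit()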

    Read the article

  • Longer execution through Java shell than console?

    - by czuk
    I have a script in Python which does some computations. When I run this script in a console it takes about 7 minutes to complete, but when I run it through a Java shell it takes three times longer. I use the following code to execute the script in Java:

    this.p = Runtime.getRuntime().exec("script.py --batch", envp);
    this.input = new BufferedReader(new InputStreamReader(p.getInputStream()));
    this.output = new BufferedWriter(new OutputStreamWriter(p.getOutputStream()));
    this.error = new BufferedReader(new InputStreamReader(p.getErrorStream()));

    Do you have any suggestion why the Python script runs three times longer under Java than in a console?

    Update: the computation goes as follows. Java sends data to Python. Python reads the data. Python generates a decision tree --- this is a long operation. Python sends a confirmation that the tree is ready. Java receives the confirmation. Later there is a series of communications between Java and Python, but it takes only a few seconds.
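
    One thing worth ruling out (an assumption, not something stated in the question): when the script's stdout is a pipe to Java rather than a terminal, Python block-buffers it, so the confirmation line can sit unsent long after the tree is built, and the child can also stall if the Java side never drains the output and error streams it opened. A minimal sketch of the Python side of that fix, using a hypothetical send() helper inside script.py:

    import sys

    def send(msg):
        # write one protocol line and flush it so the Java BufferedReader
        # sees it immediately instead of waiting for the buffer to fill
        sys.stdout.write(msg + '\n')
        sys.stdout.flush()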

    Read the article

  • Why isn't pyinstaller making me an .exe file?

    - by Matt Miller
    I am attempting to follow this guide to make a simple Hello World script into an .exe file. I have Windows Vista with an AMD 64-bit processor I have installed Python 2.6.5 (Windows AMD64 version) I have set the PATH (if that's the right word) so that the command line recognizes Python I have installed UPX (there only seems to be a 32-bit version for Windows) and pasted a copy of upx.exe into the Python26 folder as instructed. I have installed Pywin (Windows AMD 64 Python 2.6 version) I have run Pyinstaller's Configure.py. It gives some error messages but seems to complete. I don't know if this is what's causing the problem, so the following is what it says when I run it: C:\Python26\Pyinstaller\branches\py26winConfigure.py I: read old config from C:\Python26\Pyinstaller\branches\py26win\config.dat I: computing EXE_dependencies I: Finding TCL/TK... I: Analyzing C:\Python26\DLLs_tkinter.pyd W: Cannot get binary dependencies for file: W: C:\Python26\DLLs_tkinter.pyd W: Traceback (most recent call last): File "C:\Python26\Pyinstaller\branches\py26win\bindepend.py", line 608, in get Imports return _getImports_pe(pth) File "C:\Python26\Pyinstaller\branches\py26win\bindepend.py", line 275, in _ge tImports_pe importva, importsz = datadirs[1] IndexError: list index out of range I: Analyzing C:\Python26\DLLs_ctypes.pyd W: Cannot get binary dependencies for file: W: C:\Python26\DLLs_ctypes.pyd W: Traceback (most recent call last): File "C:\Python26\Pyinstaller\branches\py26win\bindepend.py", line 608, in get Imports return _getImports_pe(pth) File "C:\Python26\Pyinstaller\branches\py26win\bindepend.py", line 275, in _ge tImports_pe importva, importsz = datadirs[1] IndexError: list index out of range I: Analyzing C:\Python26\DLLs\select.pyd W: Cannot get binary dependencies for file: W: C:\Python26\DLLs\select.pyd W: Traceback (most recent call last): File "C:\Python26\Pyinstaller\branches\py26win\bindepend.py", line 608, in get Imports return _getImports_pe(pth) File "C:\Python26\Pyinstaller\branches\py26win\bindepend.py", line 275, in _ge tImports_pe importva, importsz = datadirs[1] IndexError: list index out of range I: Analyzing C:\Python26\DLLs\unicodedata.pyd W: Cannot get binary dependencies for file: W: C:\Python26\DLLs\unicodedata.pyd W: Traceback (most recent call last): File "C:\Python26\Pyinstaller\branches\py26win\bindepend.py", line 608, in get Imports return _getImports_pe(pth) File "C:\Python26\Pyinstaller\branches\py26win\bindepend.py", line 275, in _ge tImports_pe importva, importsz = datadirs[1] IndexError: list index out of range I: Analyzing C:\Python26\DLLs\bz2.pyd W: Cannot get binary dependencies for file: W: C:\Python26\DLLs\bz2.pyd W: Traceback (most recent call last): File "C:\Python26\Pyinstaller\branches\py26win\bindepend.py", line 608, in get Imports return _getImports_pe(pth) File "C:\Python26\Pyinstaller\branches\py26win\bindepend.py", line 275, in _ge tImports_pe importva, importsz = datadirs[1] IndexError: list index out of range I: Analyzing C:\Python26\python.exe I: Dependent assemblies of C:\Python26\python.exe: I: amd64_Microsoft.VC90.CRT_1fc8b3b9a1e18e3b_9.0.21022.8_none I: Searching for assembly amd64_Microsoft.VC90.CRT_1fc8b3b9a1e18e3b_9.0.21022.8_ none... 
I: Found manifest C:\Windows\WinSxS\Manifests\amd64_microsoft.vc90.crt_1fc8b3b9a 1e18e3b_9.0.21022.8_none_750b37ff97f4f68b.manifest I: Searching for file msvcr90.dll I: Found file C:\Windows\WinSxS\amd64_microsoft.vc90.crt_1fc8b3b9a1e18e3b_9.0.21 022.8_none_750b37ff97f4f68b\msvcr90.dll I: Searching for file msvcp90.dll I: Found file C:\Windows\WinSxS\amd64_microsoft.vc90.crt_1fc8b3b9a1e18e3b_9.0.21 022.8_none_750b37ff97f4f68b\msvcp90.dll I: Searching for file msvcm90.dll I: Found file C:\Windows\WinSxS\amd64_microsoft.vc90.crt_1fc8b3b9a1e18e3b_9.0.21 022.8_none_750b37ff97f4f68b\msvcm90.dll I: Adding Microsoft.VC90.CRT\Microsoft.VC90.CRT.manifest I: Adding Microsoft.VC90.CRT\msvcr90.dll I: Adding Microsoft.VC90.CRT\msvcp90.dll I: Adding Microsoft.VC90.CRT\msvcm90.dll W: Cannot get binary dependencies for file: W: C:\Python26\python.exe W: Traceback (most recent call last): File "C:\Python26\Pyinstaller\branches\py26win\bindepend.py", line 608, in get Imports return _getImports_pe(pth) File "C:\Python26\Pyinstaller\branches\py26win\bindepend.py", line 275, in _ge tImports_pe importva, importsz = datadirs[1] IndexError: list index out of range I: Analyzing C:\Windows\WinSxS\Manifests\amd64_microsoft.vc90.crt_1fc8b3b9a1e18e 3b_9.0.21022.8_none_750b37ff97f4f68b.manifest I: Analyzing C:\Windows\WinSxS\amd64_microsoft.vc90.crt_1fc8b3b9a1e18e3b_9.0.210 22.8_none_750b37ff97f4f68b\msvcr90.dll W: Cannot get binary dependencies for file: W: C:\Windows\WinSxS\amd64_microsoft.vc90.crt_1fc8b3b9a1e18e3b_9.0.21022.8_none_ 750b37ff97f4f68b\msvcr90.dll W: Traceback (most recent call last): File "C:\Python26\Pyinstaller\branches\py26win\bindepend.py", line 608, in get Imports return _getImports_pe(pth) File "C:\Python26\Pyinstaller\branches\py26win\bindepend.py", line 275, in _ge tImports_pe importva, importsz = datadirs[1] IndexError: list index out of range I: Analyzing C:\Windows\WinSxS\amd64_microsoft.vc90.crt_1fc8b3b9a1e18e3b_9.0.210 22.8_none_750b37ff97f4f68b\msvcp90.dll W: Cannot get binary dependencies for file: W: C:\Windows\WinSxS\amd64_microsoft.vc90.crt_1fc8b3b9a1e18e3b_9.0.21022.8_none_ 750b37ff97f4f68b\msvcp90.dll W: Traceback (most recent call last): File "C:\Python26\Pyinstaller\branches\py26win\bindepend.py", line 608, in get Imports return _getImports_pe(pth) File "C:\Python26\Pyinstaller\branches\py26win\bindepend.py", line 275, in _ge tImports_pe importva, importsz = datadirs[1] IndexError: list index out of range I: Analyzing C:\Windows\WinSxS\amd64_microsoft.vc90.crt_1fc8b3b9a1e18e3b_9.0.210 22.8_none_750b37ff97f4f68b\msvcm90.dll W: Cannot get binary dependencies for file: W: C:\Windows\WinSxS\amd64_microsoft.vc90.crt_1fc8b3b9a1e18e3b_9.0.21022.8_none_ 750b37ff97f4f68b\msvcm90.dll W: Traceback (most recent call last): File "C:\Python26\Pyinstaller\branches\py26win\bindepend.py", line 608, in get Imports return _getImports_pe(pth) File "C:\Python26\Pyinstaller\branches\py26win\bindepend.py", line 275, in _ge tImports_pe importva, importsz = datadirs[1] IndexError: list index out of range I: could not find TCL/TK I: testing for Zlib... I: ... Zlib available I: Testing for ability to set icons, version resources... I: ... resource update available I: Testing for Unicode support... I: ... Unicode available I: testing for UPX... I: ...UPX available I: computing PYZ dependencies... I: done generating C:\Python26\Pyinstaller\branches\py26win\config.dat My Python script (named Hello.py) is the same as the example: #!/usr/bin/env python for i in xrange(10000): print "Hello, World!" 
    This is my BAT file, in the same directory:

    set PIP=C:\Python26\Pyinstaller\branches\py26win\
    python %PIP%Makespec.py --onefile --console --upx --tk Hello.py
    python %PIP%Build.py Hello.spec

    When I run Hello.bat in the command prompt several files are made, none of which are an .exe file, and the following is displayed:

    C:\My Files>set PIP=C:\Python26\Pyinstaller\branches\py26win\
    C:\My Files>python C:\Python26\Pyinstaller\branches\py26win\Makespec.py --onefile --console --upx --tk Hello.py
    wrote C:\My Files\Hello.spec
    now run Build.py to build the executable
    C:\My Files>python C:\Python26\Pyinstaller\branches\py26win\Build.py Hello.spec
    I: Dependent assemblies of C:\Python26\python.exe:
    I:   amd64_Microsoft.VC90.CRT_1fc8b3b9a1e18e3b_9.0.21022.8_none
    Traceback (most recent call last):
      File "C:\Python26\Pyinstaller\branches\py26win\Build.py", line 1359, in <module>
        main(args[0], configfilename=opts.configfile)
      File "C:\Python26\Pyinstaller\branches\py26win\Build.py", line 1337, in main
        build(specfile)
      File "C:\Python26\Pyinstaller\branches\py26win\Build.py", line 1297, in build
        execfile(spec)
      File "Hello.spec", line 3, in <module>
        pathex=['C:\My Files'])
      File "C:\Python26\Pyinstaller\branches\py26win\Build.py", line 292, in __init__
        raise ValueError, "script '%s' not found" % script
    ValueError: script 'C:\Python26\Pyinstaller\branches\py26win\support\useTK.py' not found

    I have limited knowledge with the command prompt, so please take baby steps with me if I need to do something there.

    Read the article

  • Creating an ASP.NET report using Visual Studio 2010 - Part 2

    - by rajbk
    We continue building our report in this three part series. Creating an ASP.NET report using Visual Studio 2010 - Part 1 Creating an ASP.NET report using Visual Studio 2010 - Part 3 Creating the Client Report Definition file (RDLC) Add a folder called “RDLC”. This will hold our RDLC report.   Right click on the RDLC folder, select “Add new item..” and add an “RDLC” name of “Products”. We will use the “Report Wizard” to walk us through the steps of creating the RDLC.   In the next dialog, give the dataset a name called “ProductDataSet”. Change the data source to “NorthwindReports.DAL” and select “ProductRepository(GetProductsProjected)”. The fields that are returned from the method are shown on the right. Click next.   Drag and drop the ProductName, CategoryName, UnitPrice and Discontinued into the Values container. Note that you can create much more complex grouping using this UI. Click Next.   Most of the selections on this screen are grayed out because we did not choose a grouping in the previous screen. Click next. Choose a style for your report. Click next. The report graphic design surface is now visible. Right click on the report and add a page header and page footer. With the report design surface active, drag and drop a TextBox from the tool box to the page header. Drag one more textbox to the page header. We will use the text boxes to add some header text as shown in the next figure. You can change the font size and other properties of the textboxes using the formatting tool bar (marked in red). You can also resize the columns by moving your cursor in between columns and dragging. Adding Expressions Add two more text boxes to the page footer. We will use these to add the time the report was generated and page numbers. Right click on the first textbox in the page footer and select “Expression”. Add the following expression for the print date (note the = sign at the left of the expression in the dialog below) "© Northwind Traders " & Format(Now(),"MM/dd/yyyy hh:mm tt") Right click on the second text box and add the following for the page count.   Globals.PageNumber & " of " & Globals.TotalPages Formatting the page footer is complete.   We are now going to format the “Unit Price” column so it displays the number in currency format.  Right click on the [UnitPrice] column (not header) and select “Text Box Properties..” Under “Number”, select “Currency”. Hit OK. Adding a chart With the design surface active, go to the toolbox and drag and drop a chart control. You will need to move the product list table down first to make space for the chart contorl. The document can also be resized by dragging on the corner or at the page header/footer separator. In the next dialog, pick the first chart type. This can be changed later if needed. Click OK. The chart gets added to the design surface.   Click on the blue bars in the chart (not legend). This will bring up drop locations for dropping the fields. Drag and drop the UnitPrice and CategoryName into the top (y axis) and bottom (x axis) as shown below. This will give us the total unit prices for a given category. That is the best I could come up with as far as what report to render, sorry :-) Delete the legend area to get more screen estate. Resize the chart to your liking. Change the header, x axis and y axis text by double clicking on those areas. We made it this far. Let’s impress the client by adding a gradient to the bar graph :-) Right click on the blue bar and select “Series properties”. 
Under “Fill”, add a color and secondary color and select the Gradient style. We are done designing our report. In the next section you will see how to add the report to the report viewer control, bind to the data and make it refresh when the filter criteria are changed.   Creating an ASP.NET report using Visual Studio 2010 - Part 3

    Read the article

  • Using the Static Code Analysis feature of Visual Studio (Premium/Ultimate) to find memory leakage problems

    - by terje
    Memory for managed code is handled by the garbage collector, but if you use any kind of unmanaged code, like native resources of any kind, open files, streams and window handles, your application may leak memory if these are not properly handled.  To handle such resources the classes that own these in your application should implement the IDisposable interface, and preferably implement it according to the pattern described for that interface. When you suspect a memory leak, the immediate impulse would be to start up a memory profiler and start digging into that.   However, before you follow that impulse, do a Static Code Analysis run with a ruleset tuned to finding possible memory leaks in your code.  If you get any warnings from this, fix them before you go on with the profiling. How to use a ruleset In Visual Studio 2010 (Premium and Ultimate editions) you can define your own rulesets containing a list of Static Code Analysis checks.   I have defined the memory checks as shown in the lists below as ruleset files, which can be downloaded – see bottom of this post.  When you get them, you can easily attach them to every project in your solution using the Solution Properties dialog. Right click the solution, and choose Properties at the bottom, or use the Analyze menu and choose “Configure Code Analysis for Solution”: In this dialog you can now choose the Memorycheck ruleset for every project you want to investigate.  Pressing Apply or Ok opens every project file and changes the projects code analysis ruleset to the one we have specified here. How to define your own ruleset  (skip this if you just download my predefined rulesets) If you want to define the ruleset yourself, open the properties on any project, choose Code Analysis tab near the bottom, choose any ruleset in the drop box and press Open Clear out all the rules by selecting “Source Rule Sets” in the Group By box, and unselect the box Change the Group By box to ID, and select the checks you want to include from the lists below. Note that you can change the action for each check to either warning, error or none, none being the same as unchecking the check.   Now go to the properties window and set a new name and description for your ruleset. Then save (File/Save as) the ruleset using the new name as its name, and use it for your projects as detailed above. It can also be wise to add the ruleset to your solution as a solution item. That way it’s there if you want to enable Code Analysis in some of your TFS builds.   Running the code analysis In Visual Studio 2010 you can either do your code analysis project by project using the context menu in the solution explorer and choose “Run Code Analysis”, you can define a new solution configuration, call it for example Debug (Code Analysis), in for each project here enable the Enable Code Analysis on Build   In Visual Studio Dev-11 it is all much simpler, just go to the Solution root in the Solution explorer, right click and choose “Run code analysis on solution”.     The ruleset checks The following list is the essential and critical memory checks.  CheckID Message Can be ignored ? 
    Link to description with fix suggestions
    CA1001 | Types that own disposable fields should be disposable | Can be ignored: No | http://msdn.microsoft.com/en-us/library/ms182172.aspx
    CA1049 | Types that own native resources should be disposable | Can be ignored: only if the pointers assumed to point to unmanaged resources point to something else | http://msdn.microsoft.com/en-us/library/ms182173.aspx
    CA1063 | Implement IDisposable correctly | Can be ignored: No | http://msdn.microsoft.com/en-us/library/ms244737.aspx
    CA2000 | Dispose objects before losing scope | Can be ignored: No | http://msdn.microsoft.com/en-us/library/ms182289.aspx
    CA2115 (note 1) | Call GC.KeepAlive when using native resources | Can be ignored: see description | http://msdn.microsoft.com/en-us/library/ms182300.aspx
    CA2213 | Disposable fields should be disposed | Can be ignored: if you are not responsible for the release, or if Dispose occurs at a deeper level | http://msdn.microsoft.com/en-us/library/ms182328.aspx
    CA2215 | Dispose methods should call base class dispose | Can be ignored: only if the call to base happens at a deeper calling level | http://msdn.microsoft.com/en-us/library/ms182330.aspx
    CA2216 | Disposable types should declare a finalizer | Can be ignored: only if the type does not implement IDisposable for the purpose of releasing unmanaged resources | http://msdn.microsoft.com/en-us/library/ms182329.aspx
    CA2220 | Finalizers should call base class finalizers | Can be ignored: No | http://msdn.microsoft.com/en-us/library/ms182341.aspx
    Notes: 1) Does not result in a memory leak, but may cause the application to crash.

    The list below is a set of optional checks that may be enabled for your ruleset, because the issues these point to often happen as a result of attempting to fix up the warnings from the first set. (Columns: ID | Message | Type of fault | Can be ignored? | Link to description with fix suggestions.)
    CA1060 | Move P/invokes to NativeMethods class | Security | No | http://msdn.microsoft.com/en-us/library/ms182161.aspx
    CA1816 | Call GC.SuppressFinalize correctly | Performance | Sometimes, see description | http://msdn.microsoft.com/en-us/library/ms182269.aspx
    CA1821 | Remove empty finalizers | Performance | No | http://msdn.microsoft.com/en-us/library/bb264476.aspx
    CA2004 | Remove calls to GC.KeepAlive | Performance and maintainability | Only if not technically correct to convert to SafeHandle | http://msdn.microsoft.com/en-us/library/ms182293.aspx
    CA2006 | Use SafeHandle to encapsulate native resources | Security | No | http://msdn.microsoft.com/en-us/library/ms182294.aspx
    CA2202 | Do not dispose of objects multiple times | Exception (System.ObjectDisposedException) | No | http://msdn.microsoft.com/en-us/library/ms182334.aspx
    CA2205 | Use managed equivalents of Win32 API | Maintainability and complexity | Only if the replacement doesn't provide needed functionality | http://msdn.microsoft.com/en-us/library/ms182365.aspx
    CA2221 | Finalizers should be protected | Incorrect implementation, only possible in MSIL coding | No | http://msdn.microsoft.com/en-us/library/ms182340.aspx

    Downloadable ruleset definitions: I have defined three rulesets, one called Inmeta.Memorycheck with the rules in the first list above, one called Inmeta.Memorycheck.Optionals containing the rules in the second list, and a last one called Inmeta.Memorycheck.All containing the sum of the first two. All three rulesets can be found in the zip archive "Inmeta.Memorycheck" downloadable from here.
    Links to some other resources relevant to Static Code Analysis:
    • MSDN Magazine article by Mickey Gousset on Static Code Analysis in VS2010
    • MSDN: Analyzing Managed Code Quality by Using Code Analysis (root of the documentation for this)
    • Preventing generated code from being analyzed using attributes
    • Online training course on Using Code Analysis with VS2010
    • Blogpost by Tatham Oddie on custom code analysis rules
    • How to write custom rules, from the Microsoft Code Analysis Team Blog
    • Microsoft Code Analysis Team Blog

    Read the article

  • Selling Visual Studio ALM

    - by Tarun Arora
    Introduction As a consultant I have been selling Application Lifecycle Management services using Visual Studio and Team Foundation Server. I’ve been contacted various times by friends working in organization telling me that ALM processes in their company were benchmarked when dinosaurs walked the earth. Most of these individuals already know the great features Microsoft ALM tools offer and are keen to start a conversation with the CIO but don’t exactly know where to start. It is very important how you engage in your first conversation, if you start the conversation with ‘There is this great tooling from Microsoft which offers amazing features to boost developer productivity, … ‘ from experience I can tell you the reply from your CIO would be ‘I already know! Our existing landscape has a combination of bleeding edge open source and cutting edge licensed tools which already cover these features quite well, more over Microsoft products have a high licensing cost associated to them.’ You will always find it harder to sell by feature, the trick is to highlight the gap in the existing processes & tools and then highlight the impact of these gaps to the overall development processes, by now you would have captured enough attention to show off how the ALM tooling offered by Microsoft not only fills those gaps but offers great value adds to take their development practices to the next level. Rangers ALM Assessment Guide Image 1 – Welcome! First look at the Rangers ALM assessment guide Most organization already have some processes in place to cover aspects of ALM. How do you go about proving that there isn’t enough cover in place? This is where Visual Studio ALM Rangers ALM Assessment guide can help. The ALM assessment guide is really a tool that helps you gather information about Development practices and processes within a customer's environment. Several questionnaires are used to identify the current state of individual development lifecycle areas and decide on a desired state for those processes. It also presents guidance and roll-up summaries to help with recommendations moving forward. The ALM Rangers assessment guide can be downloaded from here. Image 2 – ALM Assessment guide divided into different functions of SDLC The assessment guide is divided into different functions of Software Development Lifecycle (listed below), this gives you the ability to access how mature the company is in different areas of SDLC. Architecture & Design Requirement Engineering & UX Development Software Configuration Management Governance Deployment & Operations Testing & Quality Assurance Project Planning & Management Each section has a set of questions, fill in the assessment by selecting “Never/Sometimes/Always” from the Answer column in the question sheets.  Each answer has weightage to the overall score. Each question has a link next to it, clicking the link takes you to the Reference sheet which gives you more details about the question along with a reason for “why you need to ask this question?”, “other ways to phrase the question” and “what to expect as an answer from the customer”. The trick is to engage the customer in a discussion. You need to probe a lot, listen to the customer and have a discussion with several team members, preferably without management to ensure that you receive candid feedback. 
This reminds me of a funny incident when during an ALM review a customer told me that they have a sophisticated semi-automated application deployment process, further discussions revealed that deployment actually involved 72 manual configuration steps per production node. Such observations can be recorded in the Issue Brainstorming worksheet for further consideration later. It is also worth mentioning the different levels of ALM maturity to the customer. By default the desired state of ALM maturity is set to Standard, it is possible to set a desired state by area, you should strive for Advanced or Dynamic, it always helps by explaining the classification and advantages. Image 3 – ALM levels by description The ALM assessment guide helps you arrive at a quantitative measure of the company’s ALM maturity. The resultant graph plotted on a spider’s web shows you the company’s current state of ALM maturity and the desired state of ALM maturity. Further since the results are classified by area you can immediately spot the areas where the customer needs immediate help. Image 4 – The spiders web! The red cross icons are areas shouting out for immediate attention, the yellow exclamation icons are areas that need improvement. These icons are calculated on the difference between the Current State of ALM maturity VS the Desired state of ALM maturity. Image 5 – Results by area Conclusion To conclude the Rangers ALM assessment guide gives you the ability to, Measure the customer’s current ALM maturity level Understand the ALM maturity level the customer desires to achieve Capture a healthy list of issues the customer wants to brainstorm further Now What’s next…? Download and get started with the Rangers ALM Assessment Guide. If you have successfully captured the above listed three pieces of information you are in a great state to make recommendations on the identified areas highlighting the benefits that Visual Studio ALM tools would offer. In the next post I will be covering how to take the ALM assessment results as the base to actually convert your recommendation into a sell.  Remember to subscribe to http://feeds.feedburner.com/TarunArora. I would love to hear your feedback! If you have any recommendations on things that I should consider or any questions or feedback, feel free to leave a comment. *** A special thanks goes out to fellow ranges Willy, Ethem and Philip for reviewing the blog post and providing valuable feedback. ***

    Read the article

  • Algorithm design, "randomising" timetable schedule in Python although open to other languages.

    - by S1syphus
    Before I start I should add that I am a musician and not a native programmer; this was undertaken to make my life easier. Here is the situation: at work I'm given a new CSV file, each of which contains a list of sound files, their length, and the minimum total amount of time they must be played. From this file I create a playlist of exactly 60 minutes. Each sample is played at least its minimum number of instances, but spread out from the others, so there will never be a period where one sound is played twice in a row or in close proximity to itself. Secondly, if the minimum instances of each sound have been used and there is still time within the 60 minutes, it needs to fill the remaining time using sounds until 60 minutes is reached, while adhering to the above. The smallest duration possible is 15 seconds, and then multiples of 15 seconds. Here is what I came up with in Python and the problems I'm having with it; as one user said, it's buggy due to the random library used in it. So I'm guessing a total rethink is on the table, and here is where I need your help. What is the best way to solve this? I have had a brief look at things like knapsack and bin packing algorithms; while both are relevant, neither is quite appropriate and maybe a bit beyond me.

    Read the article

  • How do I create a named temporary file on windows in Python?

    - by Chris B.
    I've got a Python program that needs to create a named temporary file which will be opened and closed a couple times over the course of the program, and should be deleted when the program exits. Unfortunately, none of the options in tempfile seem to work: TemporaryFile doesn't have a visible name NamedTemporaryFile creates a file-like object. I just need a filename. I've tried closing the object it returns (after setting delete = False) but I get stream errors when I try to open the file later. SpooledTemporaryFile doesn't have a visible name mkstemp returns both the open file object and the name; it doesn't guarantee the file is deleted when the program exits mktemp returns the filename, but doesn't guarantee the file is deleted when the program exits I've tried using mktemp1 within a context manager, like so: def get_temp_file(suffix): class TempFile(object): def __init__(self): self.name = tempfile.mktemp(suffix = '.test') def __enter__(self): return self def __exit__(self, ex_type, ex_value, ex_tb): if os.path.exists(self.name): try: os.remove(self.name) except: print sys.exc_info() return TempFile() ... but that gives me a WindowsError(32, 'The process cannot access the file because it is being used by another process'). The filename is used by a process my program spawns, and even though I ensure that process finishes before I exit, it seems to have a race condition out of my control. What's the best way of dealing with this? 1 I don't need to worry about security here; this is part of a testing module, so the most someone nefarious could do is cause our unit tests to spuriously fail. The horror!
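
    One pattern that tends to work on Windows (a sketch, with run_child_process() as a hypothetical stand-in for whatever spawns the other process): create the file with mkstemp, close the OS-level descriptor immediately so the child can open the path without sharing conflicts, and delete it yourself afterwards.

    import os
    import tempfile

    fd, path = tempfile.mkstemp(suffix='.test')
    os.close(fd)                     # release our handle so the child can open, and we can later delete, the file
    try:
        run_child_process(path)      # hypothetical: hand only the path to the spawned process
    finally:
        if os.path.exists(path):
            os.remove(path)          # can still fail if the child hasn't closed it yet; retry or ignore as needed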

    Read the article

  • In Python epoll can I avoid the errno.EWOULDBLOCK, errno.EAGAIN ?

    - by davyzhang
    I wrote an epoll wrapper in Python. It works fine, but recently I found the performance is not ideal for sending large packets. I looked into the code and found there are actually a LOT of these errors:

    Traceback (most recent call last):
      File "/Users/dawn/Documents/workspace/work/dev/server/sandbox/single_point/tcp_epoll.py", line 231, in send_now
        num_bytes = self.sock.send(self.response)
    error: [Errno 35] Resource temporarily unavailable

    Previously I silenced it, as the documentation suggests, so my sending function was written this way:

    def send_now(self):
        '''send message at once'''
        st = time.time()
        times = 0
        while self.response != '':
            try:
                num_bytes = self.sock.send(self.response)
                l.info('msg wrote %s %d : %r size %r', self.ip, self.port, self.response[:num_bytes], num_bytes)
                self.response = self.response[num_bytes:]
            except socket.error, e:
                if e[0] in (errno.EWOULDBLOCK, errno.EAGAIN):
                    # here I printed it, but I silence it in normal days
                    # print 'would block, again %r', tb.format_exc()
                    break
                else:
                    l.warning('%r %r socket error %r', self.ip, self.port, tb.format_exc())
                    # must break or cause dead loop
                    break
            except:
                # other exceptions
                l.warning('%r %r msg write error %r', self.ip, self.port, tb.format_exc())
                break
            times += 1
        et = time.time()

    I googled it, and it says this is caused by the network send buffer temporarily running out. So how can I manually and efficiently detect this condition instead of letting it go to the exception path? It costs too much time to raise and handle the exception.
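
    EAGAIN/EWOULDBLOCK on a non-blocking socket is not really an error to be avoided; it is the kernel reporting that its send buffer is currently full. The usual epoll pattern (a sketch using hypothetical names, not the wrapper's actual API) is to keep the unsent tail and register interest in EPOLLOUT, resuming the send when the socket becomes writable instead of retrying until the exception fires:

    import errno
    import select
    import socket

    def try_send(epoll, sock, data):
        '''Send what fits now; return whatever is left to send on the next EPOLLOUT.'''
        while data:
            try:
                sent = sock.send(data)
                data = data[sent:]
            except socket.error, e:
                if e.args[0] in (errno.EAGAIN, errno.EWOULDBLOCK):
                    # kernel send buffer is full: ask epoll to wake us when writable
                    epoll.modify(sock.fileno(), select.EPOLLIN | select.EPOLLOUT)
                    break
                raise
        return data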

    Read the article

  • Maintaining a Python web application: heavier vs lighter framework?

    - by Tiberiu Ana
    Five+ years from now, you are hired to support and extend a data-centric web application written in Python that hasn't been kept up to date. Would you rather prefer it was written in the current version of Django/Pylons at the time, using the available standard components, or kept minimal with something like CherryPy/web.py and a few library dependencies? Heavy framework Advantages: standard approach to application design and structure, as encouraged by framework; less application code to worry about. Disadvantages: requires learning the framework to understand how things work; broken things in old version of framework difficult to fix; upgrading to new version potentially difficult due to changing APIs; finding relevant documentation/help potentially difficult due to changing APIs. Light framework Advantages: most application code is directly "visible"; only needed features are implemented; architecture should be simpler to understand; less need to upgrade external dependencies; easier to upgrade external dependencies. Disadvantages: some reinventing the wheel; non-standard design and structure (with the associated unique issues and bugs). I will update the list with any helpful answers.

    Read the article

  • How to implement a python REPL that nicely handles asynchronous output?

    - by andy
    I have a Python-based app that can accept a few commands in a simple read-eval-print loop. I'm using raw_input('> ') to get the input. On Unix-based systems, I also import readline to make things behave a little better. All this is working fine. The problem is that there are asynchronous events coming in, and I'd like to print output as soon as they happen. Unfortunately, this makes things look ugly: the '> ' prompt string doesn't show up again after the output, and if the user is halfway through typing something, it chops their text in half. It should probably redraw the user's text-in-progress after printing something. This seems like it must be a solved problem. What's the proper way to do this? Also note that some of my users are Windows-based. TIA. Edit: The accepted answer works under Unixy platforms (when the readline module is available), but if anyone knows how to make this work under Windows, it would be much appreciated!
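
    A common Unix-only workaround (a sketch that assumes the readline module is available, so it does not answer the Windows part of the question): when an asynchronous event arrives, clear the current terminal line, print the message, then redraw the prompt plus whatever the user had typed so far.

    import sys
    import readline

    def async_print(msg, prompt='> '):
        # hypothetical helper to be called from the event handler
        buf = readline.get_line_buffer()     # text the user has typed so far
        sys.stdout.write('\r\x1b[K')         # return to column 0 and clear the line
        sys.stdout.write(msg + '\n')
        sys.stdout.write(prompt + buf)       # redraw the prompt and the pending input
        sys.stdout.flush()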

    Read the article

  • Elegant way to take basename of directory in Python?

    - by user248237
    I have several scripts that take as input a directory name, and my program creates files in those directories. Sometimes I want to take the basename of a directory given to the program and use it to make various files in the directory. For example, # directory name given by user via command-line output_dir = "..." # obtained by OptParser, for example my_filename = output_dir + '/' + os.path.basename(output_dir) + '.my_program_output' # write stuff to my_filename The problem is that if the user gives a directory name with a trailing slash, then os.path.basename will return the empty string, which is not what I want. What is the most elegant way to deal with these slash/trailing slash issues in python? I know I can manually check for the slash at the end of output_dir and remove it if it's there, but there seems like there should be a better way. Is there? Also, is it OK to manually add '/' characters? E.g. output_dir + '/' os.path.basename() or is there a more generic way to build up paths? Thanks.
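
    One standard-library answer (a sketch, not from the original post): normalize the path first, since os.path.normpath() drops a trailing slash so os.path.basename() no longer returns an empty string, and build the result with os.path.join() instead of concatenating '/' by hand.

    import os

    output_dir = "results/run1/"     # hypothetical user input with a trailing slash
    base = os.path.basename(os.path.normpath(output_dir))
    my_filename = os.path.join(output_dir, base + '.my_program_output')
    # base == 'run1'; my_filename == 'results/run1/run1.my_program_output'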

    Read the article

  • Yet another Python Windows CMD mklink problem ... can't get it to work!

    - by Felix Dombek
    OK I have just posted another question which outlined my program but the specific problem was different. Now, my program just stops working without any message whatsoever. I'd be grateful if someone could help me here. I want to create symlinks for each file in a directory structure, all in one large flat folder, and have the following code by now: # loop over directory structure: # for all items in current directory, # if item is directory, recurse into it; # else it's a file, then create a symlink for it def makelinks(folder, targetfolder, cmdprocess = None): if not cmdprocess: cmdprocess = subprocess.Popen("cmd", stdin = subprocess.PIPE, stdout = subprocess.PIPE, stderr = subprocess.PIPE) print(folder) for name in os.listdir(folder): fullname = os.path.join(folder, name) if os.path.isdir(fullname): makelinks(fullname, targetfolder, cmdprocess) else: makelink(fullname, targetfolder, cmdprocess) #for a given file, create one symlink in the target folder def makelink(fullname, targetfolder, cmdprocess): linkname = os.path.join(targetfolder, re.sub(r"[\/\\\:\*\?\"\<\>\|]", "-", fullname)) if not os.path.exists(linkname): try: os.remove(linkname) print("Invalid symlink removed:", linkname) except: pass if not os.path.exists(linkname): cmdprocess.stdin.write("mklink " + linkname + " " + fullname + "\r\n") So this is a top-down recursion where first the folder name is printed, then the subdirectories are processed. If I run this now over some folder, the whole thing just stops after 10 or so symbolic links. Here is the output: D:\Musik\neu D:\Musik\neu\# Electronic D:\Musik\neu\# Electronic\# tag & reencode D:\Musik\neu\# Electronic\# tag & reencode\ChillOutMix D:\Musik\neu\# Electronic\# tag & reencode\Unknown D&B D:\Musik\neu\# Electronic\# tag & reencode\Unknown D&B 2 The program still seems to run but no new output is generated. It created 9 symlinks for some files in the # tag & reencode and the first three files in the ChillOutMix folder. The cmd.exe Window is still open and empty, and shows in its title bar that it is currently processing the mklink command for the third file in ChillOutMix. I tried to insert a time.sleep(2) after each cmdprocess.stdin.write in case Python is just too fast for the cmd process, but it doesn't help. Does anyone know what the problem might be?
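
    One likely culprit (an assumption, not something confirmed in the question): the long-lived cmd.exe is started with stdout and stderr as pipes that are never read, so once mklink has produced enough output the pipe buffer fills and the child stalls, which would match the program freezing after a handful of links. A minimal sketch that sidesteps this by running one short-lived cmd /c mklink per file and letting its output go straight to the console:

    import subprocess

    def makelink(fullname, linkname):
        # runs: cmd /c mklink <link> <target>; raises CalledProcessError if mklink fails
        subprocess.check_call(['cmd', '/c', 'mklink', linkname, fullname])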

    Read the article

  • How is it that json serialization is so much faster than yaml serialization in python?

    - by guidoism
    I have code that relies heavily on YAML for cross-language serialization, and while working on speeding some stuff up I noticed that YAML was insanely slow compared to other serialization methods (e.g., pickle, json). What really blows my mind is that json is so much faster than yaml when the output is nearly identical.

    >>> import yaml, cjson; d={'foo': {'bar': 1}}
    >>> yaml.dump(d, Dumper=yaml.SafeDumper)
    'foo: {bar: 1}\n'
    >>> cjson.encode(d)
    '{"foo": {"bar": 1}}'
    >>> import yaml, cjson;
    >>> timeit("yaml.dump(d, Dumper=yaml.SafeDumper)", setup="import yaml; d={'foo': {'bar': 1}}", number=10000)
    44.506911039352417
    >>> timeit("yaml.dump(d, Dumper=yaml.CSafeDumper)", setup="import yaml; d={'foo': {'bar': 1}}", number=10000)
    16.852826118469238
    >>> timeit("cjson.encode(d)", setup="import cjson; d={'foo': {'bar': 1}}", number=10000)
    0.073784112930297852

    PyYAML's CSafeDumper and cjson are both written in C, so it's not like this is a C vs Python speed issue. I've even added some random data to it to see if cjson is doing any caching, but it's still way faster than PyYAML. I realize that YAML is a superset of JSON, but how could the YAML serializer be two orders of magnitude slower with such simple input?

    Read the article

  • Convert array to CSV/TSV-formatted string in Python.

    - by dreeves
    Python provides csv.DictWriter for outputting CSV to a file. What is the simplest way to output CSV to a string or to stdout? For example, given a 2D array like this: [["a b c", "1,2,3"], ["i \"comma-heart\" you", "i \",heart\" u, too"]] return the following string: "a b c, \"1, 2, 3\"\n\"i \"\"comma-heart\"\" you\", \"i \"\",heart\"\" u, too\"" which when printed would look like this: a b c, "1,2,3" "i ""heart"" you", "i "",heart"" u, too" (I'm taking csv.DictWriter's word for it that that is in fact the canonical way to output that array as CSV. Excel does parse it correctly that way, though Mathematica does not. From a quick look at the wikipedia page on CSV it seems Mathematica is wrong.) One way would be to write to a temp file with csv.DictWriter and read it back with csv.DictReader. What's a better way? TSV instead of CSV It also occurs to me that I'm not wedded to CSV. TSV would make a lot of the headaches with delimiters and quotes go away: just replace tabs with spaces in the entries of the 2D array and then just intersperse tabs and newlines and you're done. Let's include solutions for both TSV and CSV in the answers to make this as useful as possible for future searchers.
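
    For the string case, a minimal sketch using only the standard library (Python 2 style to match the question): csv.writer accepts any object with a write() method, so a StringIO, or sys.stdout, can stand in for the temp file, and the TSV variant is the same call with a tab delimiter.

    import csv
    from StringIO import StringIO

    rows = [["a b c", "1,2,3"], ['i "comma-heart" you', 'i ",heart" u, too']]

    buf = StringIO()
    csv.writer(buf).writerows(rows)
    csv_string = buf.getvalue()          # same quoting rules csv.DictWriter would apply

    tsv_buf = StringIO()
    csv.writer(tsv_buf, delimiter='\t').writerows(rows)
    tsv_string = tsv_buf.getvalue()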

    Read the article

  • What is the best way to translate this recursive python method into Java?

    - by Simucal
    In another question I was provided with a great answer involving generating certain sets for the Chinese Postman Problem. The answer provided was: def get_pairs(s): if not s: yield [] else: i = min(s) for j in s - set([i]): for r in get_pairs(s - set([i, j])): yield [(i, j)] + r for x in get_pairs(set([1,2,3,4,5,6])): print x This will output the desire result of: [(1, 2), (3, 4), (5, 6)] [(1, 2), (3, 5), (4, 6)] [(1, 2), (3, 6), (4, 5)] [(1, 3), (2, 4), (5, 6)] [(1, 3), (2, 5), (4, 6)] [(1, 3), (2, 6), (4, 5)] [(1, 4), (2, 3), (5, 6)] [(1, 4), (2, 5), (3, 6)] [(1, 4), (2, 6), (3, 5)] [(1, 5), (2, 3), (4, 6)] [(1, 5), (2, 4), (3, 6)] [(1, 5), (2, 6), (3, 4)] [(1, 6), (2, 3), (4, 5)] [(1, 6), (2, 4), (3, 5)] [(1, 6), (2, 5), (3, 4)] This really shows off the expressiveness of Python because this is almost exactly how I would write the pseudo-code for the algorithm. I especially like the usage of yield and and the way that sets are treated as first class citizens. However, there in lies my problem. What would be the best way to: 1.Duplicate the functionality of the yield return construct in Java? Would it instead be best to maintain a list and append my partial results to this list? How would you handle the yield keyword. 2.Handle the dealing with the sets? I know that I could probably use one of the Java collections which implements that implements the Set interface and then using things like removeAll() to give me a set difference. Is this what you would do in that case? Ultimately, I'm looking to reduce this method into as concise and straightforward way as possible in Java. I'm thinking the return type of the java version of this method will likely return a list of int arrays or something similar. How would you handle the situations above when converting this method into Java?

    Read the article

  • Where is my python script spending time? Is there "missing time" in my cprofile / pstats trace?

    - by fmark
    I am attempting to profile a long running python script. The script does some spatial analysis on raster GIS data set using the gdal module. The script currently uses three files, the main script which loops over the raster pixels called find_pixel_pairs.py, a simple cache in lrucache.py and some misc classes in utils.py. I have profiled the code on a moderate sized dataset. pstats returns: p.sort_stats('cumulative').print_stats(20) Thu May 6 19:16:50 2010 phes.profile 355483738 function calls in 11644.421 CPU seconds Ordered by: cumulative time List reduced from 86 to 20 due to restriction <20> ncalls tottime percall cumtime percall filename:lineno(function) 1 0.008 0.008 11644.421 11644.421 <string>:1(<module>) 1 11064.926 11064.926 11644.413 11644.413 find_pixel_pairs.py:49(phes) 340135349 544.143 0.000 572.481 0.000 utils.py:173(extent_iterator) 8831020 18.492 0.000 18.492 0.000 {range} 231922 3.414 0.000 8.128 0.000 utils.py:152(get_block_in_bands) 142739 1.303 0.000 4.173 0.000 utils.py:97(search_extent_rect) 745181 1.936 0.000 2.500 0.000 find_pixel_pairs.py:40(is_no_data) 285478 1.801 0.000 2.271 0.000 utils.py:98(intify) 231922 1.198 0.000 2.013 0.000 utils.py:116(block_to_pixel_extent) 695766 1.990 0.000 1.990 0.000 lrucache.py:42(get) 1213166 1.265 0.000 1.265 0.000 {min} 1031737 1.034 0.000 1.034 0.000 {isinstance} 142740 0.563 0.000 0.909 0.000 utils.py:122(find_block_extent) 463844 0.611 0.000 0.611 0.000 utils.py:112(block_to_pixel_coord) 745274 0.565 0.000 0.565 0.000 {method 'append' of 'list' objects} 285478 0.346 0.000 0.346 0.000 {max} 285480 0.346 0.000 0.346 0.000 utils.py:109(pixel_coord_to_block_coord) 324 0.002 0.000 0.188 0.001 utils.py:27(__init__) 324 0.016 0.000 0.186 0.001 gdal.py:848(ReadAsArray) 1 0.000 0.000 0.160 0.160 utils.py:50(__init__) The top two calls contain the main loop - the entire analyis. The remaining calls sum to less than 625 of the 11644 seconds. Where are the remaining 11,000 seconds spent? Is it all within the main loop of find_pixel_pairs.py? If so, can I find out which lines of code are taking most of the time?
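
    One reading of the numbers above (plus a short sketch, not part of the original post): the second row reports tottime of about 11065 seconds for phes() itself, so the "missing" time is being spent on lines inside that function rather than in anything it calls. Sorting the stats by internal time makes that explicit, and a line-level profiler such as line_profiler (kernprof) can then attribute the cost to individual lines of phes().

    import pstats

    p = pstats.Stats('phes.profile')
    p.sort_stats('time').print_stats(10)   # 'time' sorts by tottime: time spent inside each function itself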

    Read the article

  • Return a list of imported Python modules used in a script?

    - by Jono Bacon
    Hi All, I am writing a program that categorizes a list of Python files by which modules they import. As such I need to scan the collection of .py files ad return a list of which modules they import. As an example, if one of the files I import has the following lines: import os import sys, gtk I would like it to return: ["os", "sys", "gtk"] I played with modulefinder and wrote: from modulefinder import ModuleFinder finder = ModuleFinder() finder.run_script('testscript.py') print 'Loaded modules:' for name, mod in finder.modules.iteritems(): print '%s ' % name, but this returns more than just the modules used in the script. As an example in a script which merely has: import os print os.getenv('USERNAME') The modules returned from the ModuleFinder script return: tokenize heapq __future__ copy_reg sre_compile _collections cStringIO _sre functools random cPickle __builtin__ subprocess cmd gc __main__ operator array select _heapq _threading_local abc _bisect posixpath _random os2emxpath tempfile errno pprint binascii token sre_constants re _abcoll collections ntpath threading opcode _struct _warnings math shlex fcntl genericpath stat string warnings UserDict inspect repr struct sys pwd imp getopt readline copy bdb types strop _functools keyword thread StringIO bisect pickle signal traceback difflib marshal linecache itertools dummy_thread posix doctest unittest time sre_parse os pdb dis ...whereas I just want it to return 'os', as that was the module used in the script. Can anyone help me achieve this?
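
    A lighter-weight alternative to modulefinder (a sketch, not from the original post): parse each file with the ast module and collect only the names written in its own import statements, so the transitive dependency tree never shows up.

    import ast

    def imported_modules(path):
        with open(path) as f:
            tree = ast.parse(f.read(), filename=path)
        names = []
        for node in ast.walk(tree):
            if isinstance(node, ast.Import):
                names.extend(alias.name for alias in node.names)
            elif isinstance(node, ast.ImportFrom) and node.module:
                names.append(node.module)
        return names

    # imported_modules('testscript.py') would give ['os', 'sys', 'gtk'] for the example above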

    Read the article

  • What is the easiest way to read wav-files using Python [summary]?

    - by Roman
    I want to use Python to access a wav-file and write its content in a form which allows me to analyze it (let's say arrays). I heard that "audiolab" is a suitable tool for that (it transforms numpy arrays into wav and vica versa). I have installed the "audiolab" but I had a problem with the version of numpy (I could not "from numpy.testing import Tester"). I had 1.1.1. version of numpy. I have installed a newer version on numpy (1.4.0). But then I got a new set of errors: Traceback (most recent call last): File "test.py", line 7, in import scikits.audiolab File "/usr/lib/python2.5/site-packages/scikits/audiolab/init.py", line 25, in from pysndfile import formatinfo, sndfile File "/usr/lib/python2.5/site-packages/scikits/audiolab/pysndfile/init.py", line 1, in from _sndfile import Sndfile, Format, available_file_formats, available_encodings File "numpy.pxd", line 30, in scikits.audiolab.pysndfile._sndfile (scikits/audiolab/pysndfile/_sndfile.c:9632) ValueError: numpy.dtype does not appear to be the correct type object I gave up to use audiolab and thought that I can use "wave" package to read in a wav-file. I asked a question about that but people recommended to use scipy instead. OK, I decided to focus on scipy (I have 0.6.0. version). But when I tried to do the following: from scipy.io import wavfile x = wavfile.read('/usr/share/sounds/purple/receive.wav') I get the following: Traceback (most recent call last): File "test3.py", line 4, in <module> from scipy.io import wavfile File "/usr/lib/python2.5/site-packages/scipy/io/__init__.py", line 23, in <module> from numpy.testing import NumpyTest ImportError: cannot import name NumpyTest So, I gave up to use scipy. Can I use just wave package? I do not need much. I just need to have content of wav-file in human readable format and than I will figure out what to do with that.
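
    If audiolab and scipy keep fighting with the installed numpy, the standard-library wave module alone can get the samples into a plain array (a minimal sketch that assumes 16-bit PCM; check getsampwidth() and getnchannels() for other files):

    import array
    import wave

    w = wave.open('/usr/share/sounds/purple/receive.wav', 'rb')
    frames = w.readframes(w.getnframes())
    w.close()
    samples = array.array('h', frames)   # 'h' = signed 16-bit samples; channels are interleaved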

    Read the article

  • How do I create a python module from a fortran program with f2py?

    - by Lars Hellemo
    I am trying to read some SMPS files with Python, and found a Fortran implementation, so I thought I would give f2py a shot. The problem is that I have no experience with Fortran. I have successfully installed gfortran and f2py on my Linux box and ran the example on the f2py page, but I have some trouble compiling and running the larger program. There are two files, one with a file reader wrapper and one with all the logic. They seem to call each other, but when I compile and link, or try f2py, I get errors that they somehow can't find each other:

    f95 -c FILEWR~1.F
    f95 -c SMPSREAD.F90
    f95 -o smpsread SMPSREAD.o FILEWR~1.o
    FILEWR~1.o: In function `file_wrapper_':
    FILEWR~1.F:(.text+0x3d): undefined reference to `chopen_'
    /usr/lib/gcc/i486-linux-gnu/4.4.1/libgfortranbegin.a(fmain.o): In function `main':
    (.text+0x27): undefined reference to `MAIN__'
    collect2: ld returned 1 exit status

    I also tried changing the name to FILE_WRAPPER.F, but that did not help. With f2py I found out I had to include a comment to get it to accept free format, saved this as a new file, and tried:

    f2py -c -m smpsread smpsread.f90

    I get a lot of output and warnings, but the error seems to be this one:

    getctype: No C-type found in "{'typespec': 'type', 'attrspec': ['allocatable'], 'typename': 'node', 'dimension': [':']}", assuming void.

    The Fortran 90 SMPS reader can be found here. Any help or suggestions appreciated.

    Read the article

  • Setting an Excel Range with an Array using Python and comtypes?

    - by technomalogical
    Using comtypes to drive Python, it seems some magic is happening behind the scenes that is not converting tuples and lists to VARIANT types: # RANGE(“C14:D21”) has values # Setting the Value on the Range with a Variant should work, but # list or tuple is not getting converted properly it seems >>>from comtypes.client import CreateObject >>>xl = CreateObject("Excel.application") >>>xl.Workbooks.Open(r'C:\temp\my_file.xlsx') >>>xl.Visible = True >>>vals=tuple([(x,y) for x,y in zip('abcdefgh',xrange(8))]) # creates: #(('a', 0), ('b', 1), ('c', 2), ('d', 3), ('e', 4), ('f', 5), ('g', 6), ('h', 7)) >>>sheet = xl.Workbooks[1].Sheets["Sheet1"] >>>sheet.Range["C14","D21"].Value() (('foo',1),('foo',2),('foo',3),('foo',4),('foo',6),('foo',6),('foo',7),('foo',8)) >>>sheet.Range["C14","D21"].Value[()] = vals # no error, this blanks out the cells in the Range According to the comtypes docs: When you pass simple sequences (lists or tuples) as VARIANT parameters, the COM server will receive a VARIANT containing a SAFEARRAY of VARIANTs with the typecode VT_ARRAY | VT_VARIANT. This seems to be inline with what MSDN says about passing an array to a Range's Value. I also found this page showing something similar in C#. Can anybody tell me what I'm doing wrong? EDIT I've come up with a simpler example that performs the same way (in that, it does not work): >>>from comtypes.client import CreateObject >>>xl = CreateObject("Excel.application") >>>xl.Workbooks.Add() >>>sheet = xl.Workbooks[1].Sheets["Sheet1"] # at this point, I manually typed into the range A1:B3 >>> sheet.Range("A1","B3").Value() ((u'AAA', 1.0), (u'BBB', 2.0), (u'CCC', 3.0)) >>>sheet.Range("A1","B3").Value[()] = [(x,y) for x,y in zip('xyz',xrange(3))] # Using a generator expression, per @Mike's comment # However, this still blanks out my range :(

    Read the article

  • Python/Sqlite program, write as browser app or desktop app?

    - by ChrisC
    I am in the planning stages of rewriting an Access db I wrote several years ago in a full fledged program. I have very slight experience coding, but not enough to call myself a programmer by far. I'll definitely be learning as I go, so I'd like to keep everything as simple as possible. I've decided on Python and SQLite for my program, but I need help on my next decision. Here is my situation 1) It'll be a desktop program, run locally on each machine, all Windows 2) I would really like a nice looking GUI with colors, nice screens, menus, lists, etc, 3) I'm thinking about using a browser interface because (a) from what I've read, browser apps can look really great, and (b) I understand there are lots of free tools to assist in setting up the GUI/GUI code with drag and drop tools, so that helps my "keep it simple" goal. 4) I want the program to be totally portable so it runs completely from one single folder on a user's PC, with no installation(s) needed for it to run (If I did it as a browser app, isn't there the possibility that a user's browser settings could affect or break the app. How likely is this?) For my situation, should I make it a desktop app or browser app?

    Read the article

  • Best way to convert a Unicode URL to ASCII (UTF-8 percent-escaped) in Python?

    - by benhoyt
    I'm wondering what's the best way -- or if there's a simple way with the standard library -- to convert a URL with Unicode chars in the domain name and path to the equivalent ASCII URL, encoded with the domain as IDNA and the path %-encoded, as per RFC 3986. I get the URL from the user in UTF-8. So if they've typed in http://➡.ws/♥ I get 'http://\xe2\x9e\xa1.ws/\xe2\x99\xa5' in Python. And what I want out is the ASCII version: 'http://xn--hgi.ws/%E2%99%A5'. What I do at the moment is split the URL up into parts via a regex, then manually IDNA-encode the domain, and separately encode the path and query string with different urllib.quote() calls.

    # url is UTF-8 here, eg: url = u'http://➡.ws/♥'.encode('utf-8')
    match = re.match(r'([a-z]{3,5})://(.+\.[a-z0-9]{1,6})'
                     r'(:\d{1,5})?(/.*?)(\?.*)?$', url, flags=re.I)
    if not match:
        raise BadURLException(url)
    protocol, domain, port, path, query = match.groups()
    try:
        domain = unicode(domain, 'utf-8')
    except UnicodeDecodeError:
        return ''  # bad UTF-8 chars in domain
    domain = domain.encode('idna')
    if port is None:
        port = ''
    path = urllib.quote(path)
    if query is None:
        query = ''
    else:
        query = urllib.quote(query, safe='=&?/')
    url = protocol + '://' + domain + port + path + query
    # url is ASCII here, eg: url = 'http://xn--hgi.ws/%E3%89%8C'

    Is this correct? Any better suggestions? Is there a simple standard-library function to do this?
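
    A shorter standard-library sketch (an assumption about the approach, not a drop-in replacement for the code above; asciify_url is a made-up name, and userinfo and fragments are ignored for brevity): let urlparse.urlsplit do the splitting, IDNA-encode only the hostname, and quote the path and query separately.

    import urllib
    import urlparse

    def asciify_url(url):
        parts = urlparse.urlsplit(url.decode('utf-8'))     # url arrives as UTF-8 bytes
        host = parts.hostname.encode('idna')
        if parts.port:
            host = '%s:%d' % (host, parts.port)
        path = urllib.quote(parts.path.encode('utf-8'))
        query = urllib.quote(parts.query.encode('utf-8'), safe='=&?/')
        return urlparse.urlunsplit((str(parts.scheme), host, path, query, ''))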

    Read the article
