Search Results

Search found 48774 results on 1951 pages for 'linux development'.


  • Debugging an HttpModule on the ASP.NET development server

    - by Caveatrob
    Hi, I want to integrate some HTTP modules into my ASP.NET application (v3.5, Visual Studio 2008), and I'm not sure how to debug or use such modules while debugging in the ASP.NET development server that starts when I run the web app. Do I need to include the module source in the solution, or can I just drop the DLL into bin? I'm from the 1.1 world and am not yet used to the ASP.NET development server.

    Read the article

  • Least intrusive antivirus software for development PC?

    - by poppavein
    What is the least intrusive and most effective antivirus software for a Windows PC that is used for software development (lots of small files and lots of disk I/O)? The software should support running from the command line so that a virus scan can be included in the build process. Edit: I understand that prevention techniques work better than any antivirus, but the employer demands that commercial AV software be used in the development environment (I'm looking for a replacement for the horrible Symantec Antivirus).

    Read the article

  • How to update Xcode to install "UNIX Development Support"

    - by Oscar Reyes
    I installed Xcode a long time ago. Apparently I didn't check the "UNIX Development Support" checkbox back then. Now I want to have it, but when I click on the installation this is what appears: the "UNIX Development Support" checkbox is disabled. Q: How can I install the UNIX Development Support? Is there a way to run some script that creates all the needed links from /Developer/ to /usr/bin? Thanks in advance.

    Read the article

  • Java development in Ubuntu

    - by Veera
    I'm a newbie to Linux systems, and recently I started using Ubuntu 10.04. When I do Java development on Windows, I usually keep my project files under some drive (D: for example), in a development folder such as D:\projects\myproj. But I'm a bit confused by Ubuntu's folder structure. So, I just want to know: how do you organize your projects in Ubuntu? Under which folder do you keep your project files?

    Read the article

  • Web Development Design Method

    - by Raj
    Hi, can anyone tell me which web development method is currently in use, and also point me to a related article or any other guide for it? I am working on a web development project in ASP.NET and SQL Server (a clothing website). As it is my first project after finishing my studies, I don't have much idea which process I should follow.

    Read the article

  • iPhone/Android app development tutorials?

    - by Decipher
    Hi, I'm looking at jumping on the bandwagon and knocking out a couple of iPhone and Android apps. I'm not a complete beginner, as I have programmed in quite a few languages, with my current focus on PHP and Drupal development, but I have not developed for either the iPhone or Android before. I have downloaded the respective SDKs, but I was hoping to find some good-quality tutorials with step-by-step guides to getting your development environment up and building your first app.

    Read the article

  • emacs setup for web development

    - by fig-gnuton
    What should a complete emacs setup have if you want to use it for web development? This SO question is about using emacs for Rails; however, it mostly leaves out things like HTML, CSS, and JavaScript. This emacs wiki page has suggestions for emulating aspects of TextMate, but it isn't specific to web development.

    Read the article

  • Can Core Data be used on Linux?

    - by glenc
    This might be a stupid question, but I was wondering whether or not you can use the Core Data libraries on Linux at all. I'm planning how to build the server side of an iPhone app that I'm working on, and have found that you can use PyObjC to get access to Core Data in a Python environment, e.g. use Core Data in a TurboGears web application. At this point I'm thinking that you would have to run the web server on Mac OS X, because I can't find any evidence on the internet that you can access the Objective-C libraries on Linux. I've always written webapps on Linux, but I will obviously make the jump to an OS X server if it allows me to use the same datastore implementation on the iPhone and the server, the only job remaining being the Core Data <- Web Services XML translation that has to happen on the wire.

    Read the article

  • Using .lib and .dll files in Linux

    - by smile
    Hi all, I have to make a project run successfully on a Linux machine. Right now the project works very well on a Windows machine, where it compiles and runs fine. The project uses one ".lib" and one ".dll" file to do its tasks on Windows. Can I use the same .lib file and .dll file on the Linux machine to build the project successfully? I am compiling the project with G++ and using a GNU Makefile to do the task. What should I do if I cannot use the .lib and .dll files on the Linux machine? Thanks in advance, Shivakumar.Konidela
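
    A note on this one: .lib and .dll are Windows-specific binary formats, so they cannot be consumed by a Linux build directly; the usual route is to rebuild the library from source (or obtain a vendor-provided Linux build) as a shared object (.so). A minimal sketch of loading such a rebuilt library at runtime - the library name libtask.so and the entry point run_task are hypothetical:

      #include <dlfcn.h>   // dlopen/dlsym/dlclose; link with -ldl
      #include <cstdio>

      int main()
      {
          // Load the shared object at runtime (the Linux counterpart of LoadLibrary).
          void* handle = dlopen( "./libtask.so", RTLD_LAZY );
          if ( !handle )
          {
              std::fprintf( stderr, "dlopen: %s\n", dlerror( ) );
              return 1;
          }

          // Look up a C-linkage entry point (the counterpart of GetProcAddress).
          using task_fn = int (*)( );
          task_fn run_task = reinterpret_cast<task_fn>( dlsym( handle, "run_task" ) );
          if ( run_task )
              std::printf( "run_task returned %d\n", run_task( ) );

          dlclose( handle );
          return 0;
      }

    Alternatively, if the .so is available at build time, it can simply be linked in the Makefile with -L. -ltask and no dlopen code at all.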

    Read the article

  • Problem getting data with MySQL under Linux

    - by aelshereay
    Hi all, recently I started to use Linux (Ubuntu 9.10) instead of Windows. I am working on a Java web application with Spring and MySQL, using JPA. Before installing Linux I made a backup file of the database; then I installed Linux, installed the MySQL Query Browser and Administrator tools, and restored the backup file using the Admin tool. I got all the tables back, ran a simple SELECT statement against one of the tables, got results normally, and everything seemed to work just fine. There is a USER table, and there's a namedQuery defined to get a user by userName. The problem is that when I pass a correct userName I still get nothing! I really don't know what the problem is - the application was working perfectly under Windows! Please, can anyone help me to solve this problem? Thank you in advance.

    Read the article

  • Emacs/xterm color annoyance on Linux

    - by tgamblin
    I'm using emacs in a console window, both on my local Linux box and on the login node of a remote cluster. I use emacs regularly, and I've got the foreground color set to white in my .emacs file like so:

      (set-foreground-color "white")
      (set-background-color "black")

    However, when I run emacs, the foreground isn't white; it's grey and very hard to read. On my Mac, emacs in a console window with the same settings shows up as proper white. But on both Linux boxes, in Konsole and xterm, it's grey. In case it matters, I've got TERM set to xterm-color, the desktop is running RHEL 5, and the cluster node is running RHEL 4 (CentOS). Is this some default in how Linux sets up terminal colors? How do I get white to be white? Note: this is with console emacs, not emacs under X. That's emacs -nw if you have DISPLAY set.

    Read the article

  • What is the fastest way to copy the contents of a DVD to hard disk using Linux?

    - by Ritesh
    I have gone through some links which talk about the fastest way of copying files on Windows using FILE_FLAG_NO_BUFFERING and FILE_FLAG_OVERLAPPED. They also discuss how read and write requests with buffer sizes of 256 KB and 128 KB are faster than 1 MB. The link for that is: Explanation for tiny reads (overlapped, buffered) outperforming large contiguous reads? I am looking for a similar method in Linux that allows me to copy the contents of my DVD to hard disk quickly. So I wanted to know: are there file-operation flags in Linux which would give me the best result, or which way of copying is best on Linux? My code is all in C++.
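
    For comparison with the Windows flags mentioned above, a minimal sketch of a plain sequential copy on Linux, using a 256 KB buffer and posix_fadvise to hint sequential access. The device path /dev/sr0 and the buffer size are illustrative assumptions, not benchmarked recommendations:

      #include <fcntl.h>
      #include <unistd.h>
      #include <cstdio>
      #include <vector>

      int main()
      {
          int in  = open( "/dev/sr0", O_RDONLY );                         // DVD device (assumed path)
          int out = open( "dvd.iso", O_WRONLY | O_CREAT | O_TRUNC, 0644 );
          if ( in < 0 || out < 0 ) { perror( "open" ); return 1; }

          // Hint that we will read sequentially, so the kernel can read ahead aggressively.
          posix_fadvise( in, 0, 0, POSIX_FADV_SEQUENTIAL );

          std::vector<char> buf( 256 * 1024 );
          ssize_t n;
          while ( ( n = read( in, buf.data( ), buf.size( ) ) ) > 0 )
              if ( write( out, buf.data( ), n ) != n ) { perror( "write" ); return 1; }

          close( in );
          close( out );
          return n < 0 ? 1 : 0;   // n < 0 means the last read failed
      }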

    Read the article

  • Transferring Subversion changes between Linux and Windows

    - by andreas buykx
    Hi all, what is the best way to transfer changes that include new and deleted directories and/or new and deleted (actually moved) files in those directories from a Subversion repository on Linux to Windows? I do my development on Linux using a Subversion repository, but I have to test my changes on Windows as well. My Windows machine has a TortoiseSVN working copy, which I tried to patch with svn diff output. This failed miserably, since my patch contains a renamed (i.e. deleted and added under a different name) directory, a new directory, and the files in there. Am I doing things wrong by just applying the svn diff output as a patch in TortoiseSVN? For now I think that my best option is to have the Windows tree at the same SVN revision as the Linux tree and just copy the entire changed directory over the existing directory. Would that work?

    Read the article

  • How to build a Linux system from kernel to UI layer

    - by mohit
    Hi, I have been looking into the MeeGo, Maemo, and Android architectures. They all have a Linux kernel, build some libraries on top of it, and then build middle-layer libraries [e.g. telephony, media, etc.]. Suppose I want to build my own system: say, a Linux kernel with some binaries like glibc, D-Bus, ..., a UI toolkit like GTK+ and its binaries. I want to compile every project from source to create my own customized Linux system for desktop, netbook, and handheld devices [starting with the netbook first :)]. How can I build my own customized system, from kernel to UI?

    Read the article

  • C/C++ memory usage API in Linux/Windows

    - by minjang
    I'd like to obtain memory-usage information, both per process and system-wide. On Windows, it's pretty easy: GetProcessMemoryInfo and GlobalMemoryStatusEx do these jobs greatly and very easily. For example, GetProcessMemoryInfo gives the "PeakWorkingSetSize" of the given process, and GlobalMemoryStatusEx returns the system-wide available memory. However, I need to do it on Linux. I'm trying to find Linux system APIs that are equivalent to GetProcessMemoryInfo and GlobalMemoryStatusEx. I found getrusage, but the 'ru_maxrss' (maximum resident set size) field in struct rusage is just zero - it is not implemented. Also, I have no idea how to get system-wide free memory. As a current workaround, I'm using system("ps -p %my_pid -o vsz,rsz") and manually logging the output to a file, but it's dirty and not convenient for processing the data. I'd like to know some proper Linux APIs for this purpose.
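
    A minimal sketch of the usual Linux substitutes, assuming a kernel that exposes the standard /proc/[pid]/status fields: per-process peaks can be parsed from /proc/self/status (VmPeak/VmHWM), and system-wide totals come from sysinfo():

      #include <sys/sysinfo.h>   // sysinfo()
      #include <fstream>
      #include <iostream>
      #include <string>

      int main()
      {
          // Per process: VmPeak = peak virtual size, VmHWM = peak resident set size
          // (roughly the counterpart of PeakWorkingSetSize).
          std::ifstream status( "/proc/self/status" );
          for ( std::string line; std::getline( status, line ); )
              if ( line.compare( 0, 6, "VmPeak" ) == 0 || line.compare( 0, 5, "VmHWM" ) == 0 )
                  std::cout << line << '\n';

          // System-wide: a rough counterpart of GlobalMemoryStatusEx.
          struct sysinfo si;
          if ( sysinfo( &si ) == 0 )
              std::cout << "free RAM: " << ( si.freeram * si.mem_unit ) / 1024 << " kB of "
                        << ( si.totalram * si.mem_unit ) / 1024 << " kB\n";
          return 0;
      }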

    Read the article

  • Control Debug Level in C++ Library - Linux

    - by rursw1
    Hi all, I have a C++ library which is used on both Linux and Windows. I want to enable the user to control the debug level (0 - no debug, 1 - only critical errors, ..., 5 - informative debug information). The debug log is printed to a text file. On Windows, I can do this using a registry value (a DWORD DebugLevel). What would be a good replacement that also works on Linux (without third-party tools, for example a Linux "registry")? Thanks in advance!
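
    One conventional replacement is an environment variable (or a small configuration file under $HOME). A minimal sketch - the variable name MYLIB_DEBUG_LEVEL is purely illustrative:

      #include <cstdlib>

      // Returns the debug level 0..5, read once from the environment.
      int debug_level()
      {
          static const int level = []
          {
              const char* env = std::getenv( "MYLIB_DEBUG_LEVEL" );
              if ( !env ) return 0;                          // default: no debug output
              int v = std::atoi( env );
              return v < 0 ? 0 : ( v > 5 ? 5 : v );          // clamp to the documented 0..5 range
          }( );
          return level;
      }

    A user would then start the program as MYLIB_DEBUG_LEVEL=3 ./app, which mirrors the Windows registry approach without needing any system-wide store, and works the same on both platforms.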

    Read the article

  • 'pip install carbon' looks like it works, but pip disagrees afterward

    - by fennec
    I'm trying to use pip to install the package carbon, a package related to statistics collection. When I run pip install carbon, it looks like everything works. However, pip is unconvinced that the package is actually installed. (This ultimately causes trouble because I'm using Puppet, and have a rule to install carbon using pip, and when puppet asks pip "is this package installed?" it says "no" and it reinstalls it again.) How do I figure out what's preventing pip from recognizing the success of this installation? Here is the output of the regular install:

      root@statsd:/opt/graphite# pip install carbon
      Downloading/unpacking carbon
        Downloading carbon-0.9.9.tar.gz
        Running setup.py egg_info for package carbon
          package init file 'lib/twisted/plugins/__init__.py' not found (or not a regular file)
      Requirement already satisfied (use --upgrade to upgrade): twisted in /usr/local/lib/python2.7/dist-packages (from carbon)
      Requirement already satisfied (use --upgrade to upgrade): txamqp in /usr/local/lib/python2.7/dist-packages (from carbon)
      Requirement already satisfied (use --upgrade to upgrade): zope.interface in /usr/local/lib/python2.7/dist-packages (from twisted->carbon)
      Requirement already satisfied (use --upgrade to upgrade): distribute in /usr/local/lib/python2.7/dist-packages (from zope.interface->twisted->carbon)
      Installing collected packages: carbon
        Running setup.py install for carbon
          package init file 'lib/twisted/plugins/__init__.py' not found (or not a regular file)
          changing mode of build/scripts-2.7/validate-storage-schemas.py from 664 to 775
          changing mode of build/scripts-2.7/carbon-aggregator.py from 664 to 775
          changing mode of build/scripts-2.7/carbon-cache.py from 664 to 775
          changing mode of build/scripts-2.7/carbon-relay.py from 664 to 775
          changing mode of build/scripts-2.7/carbon-client.py from 664 to 775
          changing mode of /opt/graphite/bin/validate-storage-schemas.py to 775
          changing mode of /opt/graphite/bin/carbon-aggregator.py to 775
          changing mode of /opt/graphite/bin/carbon-cache.py to 775
          changing mode of /opt/graphite/bin/carbon-relay.py to 775
          changing mode of /opt/graphite/bin/carbon-client.py to 775
      Successfully installed carbon
      Cleaning up...
      root@statsd:/opt/graphite# pip freeze | grep carbon
      root@statsd:

    Here is the verbose version of the install:

      root@statsd:/opt/graphite# pip install carbon -v
      Downloading/unpacking carbon
        Using version 0.9.9 (newest of versions: 0.9.9, 0.9.9, 0.9.8, 0.9.7, 0.9.6, 0.9.5)
        Downloading carbon-0.9.9.tar.gz
        Running setup.py egg_info for package carbon
          running egg_info
          creating pip-egg-info/carbon.egg-info
          writing requirements to pip-egg-info/carbon.egg-info/requires.txt
          writing pip-egg-info/carbon.egg-info/PKG-INFO
          writing top-level names to pip-egg-info/carbon.egg-info/top_level.txt
          writing dependency_links to pip-egg-info/carbon.egg-info/dependency_links.txt
          writing manifest file 'pip-egg-info/carbon.egg-info/SOURCES.txt'
          warning: manifest_maker: standard file '-c' not found
          package init file 'lib/twisted/plugins/__init__.py' not found (or not a regular file)
          reading manifest file 'pip-egg-info/carbon.egg-info/SOURCES.txt'
          writing manifest file 'pip-egg-info/carbon.egg-info/SOURCES.txt'
      Requirement already satisfied (use --upgrade to upgrade): twisted in /usr/local/lib/python2.7/dist-packages (from carbon)
      Requirement already satisfied (use --upgrade to upgrade): txamqp in /usr/local/lib/python2.7/dist-packages (from carbon)
      Requirement already satisfied (use --upgrade to upgrade): zope.interface in /usr/local/lib/python2.7/dist-packages (from twisted->carbon)
      Requirement already satisfied (use --upgrade to upgrade): distribute in /usr/local/lib/python2.7/dist-packages (from zope.interface->twisted->carbon)
      Installing collected packages: carbon
        Running setup.py install for carbon
          running install
          running build
          running build_py
          creating build
          creating build/lib.linux-i686-2.7
          creating build/lib.linux-i686-2.7/carbon
          copying lib/carbon/amqp_publisher.py -> build/lib.linux-i686-2.7/carbon
          copying lib/carbon/manhole.py -> build/lib.linux-i686-2.7/carbon
          copying lib/carbon/instrumentation.py -> build/lib.linux-i686-2.7/carbon
          copying lib/carbon/cache.py -> build/lib.linux-i686-2.7/carbon
          copying lib/carbon/management.py -> build/lib.linux-i686-2.7/carbon
          copying lib/carbon/relayrules.py -> build/lib.linux-i686-2.7/carbon
          copying lib/carbon/events.py -> build/lib.linux-i686-2.7/carbon
          copying lib/carbon/protocols.py -> build/lib.linux-i686-2.7/carbon
          copying lib/carbon/conf.py -> build/lib.linux-i686-2.7/carbon
          copying lib/carbon/rewrite.py -> build/lib.linux-i686-2.7/carbon
          copying lib/carbon/hashing.py -> build/lib.linux-i686-2.7/carbon
          copying lib/carbon/writer.py -> build/lib.linux-i686-2.7/carbon
          copying lib/carbon/client.py -> build/lib.linux-i686-2.7/carbon
          copying lib/carbon/util.py -> build/lib.linux-i686-2.7/carbon
          copying lib/carbon/service.py -> build/lib.linux-i686-2.7/carbon
          copying lib/carbon/amqp_listener.py -> build/lib.linux-i686-2.7/carbon
          copying lib/carbon/routers.py -> build/lib.linux-i686-2.7/carbon
          copying lib/carbon/storage.py -> build/lib.linux-i686-2.7/carbon
          copying lib/carbon/log.py -> build/lib.linux-i686-2.7/carbon
          copying lib/carbon/__init__.py -> build/lib.linux-i686-2.7/carbon
          copying lib/carbon/state.py -> build/lib.linux-i686-2.7/carbon
          creating build/lib.linux-i686-2.7/carbon/aggregator
          copying lib/carbon/aggregator/receiver.py -> build/lib.linux-i686-2.7/carbon/aggregator
          copying lib/carbon/aggregator/rules.py -> build/lib.linux-i686-2.7/carbon/aggregator
          copying lib/carbon/aggregator/buffers.py -> build/lib.linux-i686-2.7/carbon/aggregator
          copying lib/carbon/aggregator/__init__.py -> build/lib.linux-i686-2.7/carbon/aggregator
          package init file 'lib/twisted/plugins/__init__.py' not found (or not a regular file)
          creating build/lib.linux-i686-2.7/twisted
          creating build/lib.linux-i686-2.7/twisted/plugins
          copying lib/twisted/plugins/carbon_relay_plugin.py -> build/lib.linux-i686-2.7/twisted/plugins
          copying lib/twisted/plugins/carbon_aggregator_plugin.py -> build/lib.linux-i686-2.7/twisted/plugins
          copying lib/twisted/plugins/carbon_cache_plugin.py -> build/lib.linux-i686-2.7/twisted/plugins
          copying lib/carbon/amqp0-8.xml -> build/lib.linux-i686-2.7/carbon
          running build_scripts
          creating build/scripts-2.7
          copying and adjusting bin/validate-storage-schemas.py -> build/scripts-2.7
          copying and adjusting bin/carbon-aggregator.py -> build/scripts-2.7
          copying and adjusting bin/carbon-cache.py -> build/scripts-2.7
          copying and adjusting bin/carbon-relay.py -> build/scripts-2.7
          copying and adjusting bin/carbon-client.py -> build/scripts-2.7
          changing mode of build/scripts-2.7/validate-storage-schemas.py from 664 to 775
          changing mode of build/scripts-2.7/carbon-aggregator.py from 664 to 775
          changing mode of build/scripts-2.7/carbon-cache.py from 664 to 775
          changing mode of build/scripts-2.7/carbon-relay.py from 664 to 775
          changing mode of build/scripts-2.7/carbon-client.py from 664 to 775
          running install_lib
          copying build/lib.linux-i686-2.7/carbon/amqp_publisher.py -> /opt/graphite/lib/carbon
          copying build/lib.linux-i686-2.7/carbon/manhole.py -> /opt/graphite/lib/carbon
          copying build/lib.linux-i686-2.7/carbon/amqp0-8.xml -> /opt/graphite/lib/carbon
          copying build/lib.linux-i686-2.7/carbon/instrumentation.py -> /opt/graphite/lib/carbon
          copying build/lib.linux-i686-2.7/carbon/cache.py -> /opt/graphite/lib/carbon
          copying build/lib.linux-i686-2.7/carbon/management.py -> /opt/graphite/lib/carbon
          copying build/lib.linux-i686-2.7/carbon/relayrules.py -> /opt/graphite/lib/carbon
          copying build/lib.linux-i686-2.7/carbon/events.py -> /opt/graphite/lib/carbon
          copying build/lib.linux-i686-2.7/carbon/protocols.py -> /opt/graphite/lib/carbon
          copying build/lib.linux-i686-2.7/carbon/conf.py -> /opt/graphite/lib/carbon
          copying build/lib.linux-i686-2.7/carbon/rewrite.py -> /opt/graphite/lib/carbon
          copying build/lib.linux-i686-2.7/carbon/hashing.py -> /opt/graphite/lib/carbon
          copying build/lib.linux-i686-2.7/carbon/writer.py -> /opt/graphite/lib/carbon
          copying build/lib.linux-i686-2.7/carbon/client.py -> /opt/graphite/lib/carbon
          copying build/lib.linux-i686-2.7/carbon/util.py -> /opt/graphite/lib/carbon
          copying build/lib.linux-i686-2.7/carbon/aggregator/receiver.py -> /opt/graphite/lib/carbon/aggregator
          copying build/lib.linux-i686-2.7/carbon/aggregator/rules.py -> /opt/graphite/lib/carbon/aggregator
          copying build/lib.linux-i686-2.7/carbon/aggregator/buffers.py -> /opt/graphite/lib/carbon/aggregator
          copying build/lib.linux-i686-2.7/carbon/aggregator/__init__.py -> /opt/graphite/lib/carbon/aggregator
          copying build/lib.linux-i686-2.7/carbon/service.py -> /opt/graphite/lib/carbon
          copying build/lib.linux-i686-2.7/carbon/amqp_listener.py -> /opt/graphite/lib/carbon
          copying build/lib.linux-i686-2.7/carbon/routers.py -> /opt/graphite/lib/carbon
          copying build/lib.linux-i686-2.7/carbon/storage.py -> /opt/graphite/lib/carbon
          copying build/lib.linux-i686-2.7/carbon/log.py -> /opt/graphite/lib/carbon
          copying build/lib.linux-i686-2.7/carbon/__init__.py -> /opt/graphite/lib/carbon
          copying build/lib.linux-i686-2.7/carbon/state.py -> /opt/graphite/lib/carbon
          copying build/lib.linux-i686-2.7/twisted/plugins/carbon_relay_plugin.py -> /opt/graphite/lib/twisted/plugins
          copying build/lib.linux-i686-2.7/twisted/plugins/carbon_aggregator_plugin.py -> /opt/graphite/lib/twisted/plugins
          copying build/lib.linux-i686-2.7/twisted/plugins/carbon_cache_plugin.py -> /opt/graphite/lib/twisted/plugins
          byte-compiling /opt/graphite/lib/carbon/amqp_publisher.py to amqp_publisher.pyc
          byte-compiling /opt/graphite/lib/carbon/manhole.py to manhole.pyc
          byte-compiling /opt/graphite/lib/carbon/instrumentation.py to instrumentation.pyc
          byte-compiling /opt/graphite/lib/carbon/cache.py to cache.pyc
          byte-compiling /opt/graphite/lib/carbon/management.py to management.pyc
          byte-compiling /opt/graphite/lib/carbon/relayrules.py to relayrules.pyc
          byte-compiling /opt/graphite/lib/carbon/events.py to events.pyc
          byte-compiling /opt/graphite/lib/carbon/protocols.py to protocols.pyc
          byte-compiling /opt/graphite/lib/carbon/conf.py to conf.pyc
          byte-compiling /opt/graphite/lib/carbon/rewrite.py to rewrite.pyc
          byte-compiling /opt/graphite/lib/carbon/hashing.py to hashing.pyc
          byte-compiling /opt/graphite/lib/carbon/writer.py to writer.pyc
          byte-compiling /opt/graphite/lib/carbon/client.py to client.pyc
          byte-compiling /opt/graphite/lib/carbon/util.py to util.pyc
          byte-compiling /opt/graphite/lib/carbon/aggregator/receiver.py to receiver.pyc
          byte-compiling /opt/graphite/lib/carbon/aggregator/rules.py to rules.pyc
          byte-compiling /opt/graphite/lib/carbon/aggregator/buffers.py to buffers.pyc
          byte-compiling /opt/graphite/lib/carbon/aggregator/__init__.py to __init__.pyc
          byte-compiling /opt/graphite/lib/carbon/service.py to service.pyc
          byte-compiling /opt/graphite/lib/carbon/amqp_listener.py to amqp_listener.pyc
          byte-compiling /opt/graphite/lib/carbon/routers.py to routers.pyc
          byte-compiling /opt/graphite/lib/carbon/storage.py to storage.pyc
          byte-compiling /opt/graphite/lib/carbon/log.py to log.pyc
          byte-compiling /opt/graphite/lib/carbon/__init__.py to __init__.pyc
          byte-compiling /opt/graphite/lib/carbon/state.py to state.pyc
          byte-compiling /opt/graphite/lib/twisted/plugins/carbon_relay_plugin.py to carbon_relay_plugin.pyc
          byte-compiling /opt/graphite/lib/twisted/plugins/carbon_aggregator_plugin.py to carbon_aggregator_plugin.pyc
          byte-compiling /opt/graphite/lib/twisted/plugins/carbon_cache_plugin.py to carbon_cache_plugin.pyc
          running install_data
          copying conf/storage-schemas.conf.example -> /opt/graphite/conf
          copying conf/rewrite-rules.conf.example -> /opt/graphite/conf
          copying conf/relay-rules.conf.example -> /opt/graphite/conf
          copying conf/carbon.amqp.conf.example -> /opt/graphite/conf
          copying conf/aggregation-rules.conf.example -> /opt/graphite/conf
          copying conf/carbon.conf.example -> /opt/graphite/conf
          running install_egg_info
          running egg_info
          creating lib/carbon.egg-info
          writing requirements to lib/carbon.egg-info/requires.txt
          writing lib/carbon.egg-info/PKG-INFO
          writing top-level names to lib/carbon.egg-info/top_level.txt
          writing dependency_links to lib/carbon.egg-info/dependency_links.txt
          writing manifest file 'lib/carbon.egg-info/SOURCES.txt'
          warning: manifest_maker: standard file '-c' not found
          reading manifest file 'lib/carbon.egg-info/SOURCES.txt'
          writing manifest file 'lib/carbon.egg-info/SOURCES.txt'
          removing '/opt/graphite/lib/carbon-0.9.9-py2.7.egg-info' (and everything under it)
          Copying lib/carbon.egg-info to /opt/graphite/lib/carbon-0.9.9-py2.7.egg-info
          running install_scripts
          copying build/scripts-2.7/validate-storage-schemas.py -> /opt/graphite/bin
          copying build/scripts-2.7/carbon-aggregator.py -> /opt/graphite/bin
          copying build/scripts-2.7/carbon-cache.py -> /opt/graphite/bin
          copying build/scripts-2.7/carbon-relay.py -> /opt/graphite/bin
          copying build/scripts-2.7/carbon-client.py -> /opt/graphite/bin
          changing mode of /opt/graphite/bin/validate-storage-schemas.py to 775
          changing mode of /opt/graphite/bin/carbon-aggregator.py to 775
          changing mode of /opt/graphite/bin/carbon-cache.py to 775
          changing mode of /opt/graphite/bin/carbon-relay.py to 775
          changing mode of /opt/graphite/bin/carbon-client.py to 775
          writing list of installed files to '/tmp/pip-9LuJTF-record/install-record.txt'
      Successfully installed carbon
      Cleaning up...
        Removing temporary dir /opt/graphite/build...
      root@statsd:/opt/graphite#

    For reference, this is pip 1.0 from /usr/lib/python2.7/dist-packages (python 2.7).

    Read the article

  • How to run multiple Linux installations on a single VirtualBox?

    - by NoCatharsis
    I just started playing with VirtualBox to check out different versions of Linux. I was wondering if it would be possible (and recommended) to set up a single virtual machine with multiple partitions: one for each separate version of Linux, one for swap, and one fully shared /home partition. I'm still new to Linux, but I've read this is a good way to partition so you can share common applications between Linux installations; however, I think everything I've read pertains to literal partitions and not virtual machines.

    Read the article

  • A way of doing real-world test-driven development (and some thoughts about it)

    - by Thomas Weller
    Lately, I exchanged some arguments with Derick Bailey about some details of the red-green-refactor cycle of the Test-driven development process. In short, the issue revolved around the fact that it’s not enough to have a test red or green, but it’s also important to have it red or green for the right reasons. While for me it’s sufficient to initially have a NotImplementedException in place, Derick argues that this is not totally correct (see these two posts: Red/Green/Refactor, For The Right Reasons and Red For The Right Reason: Fail By Assertion, Not By Anything Else). And he’s right. But on the other hand, I had no idea how his insights could have any practical consequence for my own individual interpretation of the red-green-refactor cycle (which is not really red-green-refactor, at least not in its pure sense; see the rest of this article). This made me think deeply for some days. In the end I found out that the ‘right reason’ changes in my understanding depending on what development phase I’m in.

    To make this clear (at least I hope it becomes clear…) I started to describe my way of working in some detail, and then something strange happened: the scope of the article slightly shifted from focusing ‘only’ on the ‘right reason’ issue to something more general, which you might describe as ‘doing real-world TDD in .NET, with massive use of third-party add-ins’. This is because I feel that there is a more general statement about Test-driven development to make: it’s high time to speak about the ‘How’ of TDD, not always only the ‘Why’. Much has been said about this, and I myself also contributed to that (see here: TDD is not about testing, it's about how we develop software). But always justifying what you do is very unsatisfying in the long run; it is inherently defensive, and it costs time and effort that could be used for better and more important things. And frankly: I’m somewhat sick and tired of repeating time and again that the test-driven way of software development is highly preferable for many reasons - I don’t want to spend my time exclusively on stating the obvious… So, again, let’s say it clearly: TDD is programming, and programming is TDD. Other ways of programming (code-first, sometimes called cowboy-coding) are exceptional and need justification. I know that there are many people out there who will disagree with this radical statement, and I also know that it’s not a description of the real world but more of a mission statement. But nevertheless I’m absolutely sure that in some years this statement will be nothing but a platitude.

    Side note: Some parts of this post read as if I were paid by JetBrains (the manufacturer of the ReSharper add-in - R#), but I swear I’m not. Rather, I think that Visual Studio is just not production-complete without it, and I wouldn’t even consider doing professional work without having this add-in installed...

    The three parts of a software component

    Before I go into some details, I should first describe my understanding of what belongs to a software component (assembly, type, or method) during the production process (i.e. the coding phase). Roughly, I come up with three parts: First, we need to have some initial sort of requirement. This can be a multi-page formal document, a vague idea in some programmer’s brain of what might be needed, or anything in between. Either way, there has to be some sort of requirement, be it explicit or not. At the C# micro-level, the best way that I have found to formulate it is to define interfaces for just about everything, even for internal classes, and to provide them with exhaustive xml comments. The next step then is to re-formulate these requirements in an executable form; this is specific to the respective programming language. For C#/.NET, the Gallio framework (which includes MbUnit) in conjunction with the ReSharper add-in for Visual Studio is my toolset of choice. The third part, finally, is the production code itself. Its development is entirely driven by the requirements and their executable formulation. This is the delivery; the two other parts are ‘only’ there to make its production possible, to give it decent quality and reliability, and to significantly reduce related costs down the maintenance timeline.

    So while the first two parts are not really relevant for the customer, they are very important for the developer. The customer (or in Scrum terms: the Product Owner) is not interested at all in how the product is developed; he is only interested in the fact that it is developed as cost-effectively as possible, and that it meets his functional and non-functional requirements. The rest is solely a matter of the developer’s craftsmanship, and this is what I want to talk about during the remainder of this article…

    An example

    To demonstrate my way of doing real-world TDD, I decided to show the development of a (very) simple Calculator component. The example is deliberately trivial and silly, as examples always are. I am totally aware of the fact that real life is never that simple, but I only want to show some development principles here…

    The requirement

    As already said above, I start with writing down some words on the initial requirement, and I normally use interfaces for that, even for internal classes - the typical question “intf or not” doesn’t even come to mind. I need them for my usual workflow, and using them automatically produces highly componentized and testable code anyway. To think about their usage in every single situation would slow down the production process unnecessarily. So this is what I begin with:

      namespace Calculator
      {
          /// <summary>
          /// Defines a very simple calculator component for demo purposes.
          /// </summary>
          public interface ICalculator
          {
              /// <summary>
              /// Gets the result of the last successful operation.
              /// </summary>
              /// <value>The last result.</value>
              /// <remarks>
              /// Will be <see langword="null" /> before the first successful operation.
              /// </remarks>
              double? LastResult { get; }

          } // interface ICalculator

      } // namespace Calculator

    So, I’m not beginning with a test, but with a sort of code declaration - and still I insist on being 100% test-driven. There are three important things here: First, starting this way gives me a method signature, which allows me to use IntelliSense and AutoCompletion and thus eliminates the danger of typos - one of the most regular, annoying, time-consuming, and therefore expensive sources of error in the development process. Second, in my understanding the interface definition as a whole is more of a readable requirement document and technical documentation than anything else, so this is at least as much about documentation as about coding; the documentation must completely describe the behavior of the documented element. Third, I normally use an IoC container or some sort of self-written provider-like model in my architecture. In either case, I need my components defined via service interfaces anyway. I will use the LinFu IoC framework here, for no other reason than that it is very simple to use.

    The ‘Red’ (pt. 1)

    First I create a folder for the project’s third-party libraries and put the LinFu.Core dll there. Then I set up a test project (via a Gallio project template) and add references to the Calculator project and the LinFu dll. Finally I’m ready to write the first test, which will look like the following:

      namespace Calculator.Test
      {
          [TestFixture]
          public class CalculatorTest
          {
              private readonly ServiceContainer container = new ServiceContainer();

              [Test]
              public void CalculatorLastResultIsInitiallyNull()
              {
                  ICalculator calculator = container.GetService<ICalculator>();

                  Assert.IsNull(calculator.LastResult);
              }

          } // class CalculatorTest

      } // namespace Calculator.Test

    This is basically the executable formulation of (part of) what the interface definition states.

    Side note: There’s one principle of TDD that is just plain wrong in my eyes: I’m talking about the “Red is ‘does not compile’” thing. How could a compiler error ever be interpreted as a valid test outcome? I never understood that; it just makes no sense to me. (Or, in Derick’s terms: this reason is as wrong as a reason ever could be…) A compiler error tells me: your code is incorrect - but nothing more. Instead, the ‘Red’ part of the red-green-refactor cycle has a clearly defined meaning to me: it means that the test works as intended and fails only if its assumptions are not met for some reason.

    Back to our Calculator. When I execute the above test with R#, the Gallio plugin tells me that the test is red for the wrong reason: there’s no implementation that the IoC container could load, of course. So let’s fix that. With R#, this is very easy: first, create an ICalculator-derived type; next, implement the interface members; and finally, move the new class to its own file. So far my ‘work’ was six mouse clicks long; the only thing that’s left to do manually here is to add the IoC-specific wiring declaration and also to make the respective class non-public, which I regularly do to force my components to communicate exclusively via interfaces. This is what my Calculator class looks like as of now:

      using System;
      using LinFu.IoC.Configuration;

      namespace Calculator
      {
          [Implements(typeof(ICalculator))]
          internal class Calculator : ICalculator
          {
              public double? LastResult
              {
                  get
                  {
                      throw new NotImplementedException();
                  }
              }
          }
      }

    Back to the test fixture, we have to put our IoC container to work:

      [TestFixture]
      public class CalculatorTest
      {
          #region Fields

          private readonly ServiceContainer container = new ServiceContainer();

          #endregion // Fields

          #region Setup/TearDown

          [FixtureSetUp]
          public void FixtureSetUp()
          {
              container.LoadFrom(AppDomain.CurrentDomain.BaseDirectory, "Calculator.dll");
          }

          ...

    Because I have an R# live template defined for the setup/teardown method skeleton as well, the only manual coding here again is the IoC-specific stuff: two lines, not more…

    The ‘Red’ (pt. 2)

    Now the execution of the above test tells me that the method under test is called. And this is the point where Derick and I seem to have somewhat different views on the subject: of course, the test still is worthless regarding the red/green outcome (or: it’s still red for the wrong reasons, in that it gives a false negative). But as far as I am concerned, I’m not really interested in the test outcome at this point of the red-green-refactor cycle. Rather, I only want to assert that my test actually calls the right method. If that’s the case, I will happily go on to the ‘Green’ part…

    The ‘Green’

    Making the test green is quite trivial. Just make LastResult an automatic property:

      [Implements(typeof(ICalculator))]
      internal class Calculator : ICalculator
      {
          public double? LastResult { get; private set; }
      }

    One more round…

    Now on to something slightly more demanding (cough…). Let’s state that our Calculator exposes an Add() method:

      ...

      /// <summary>
      /// Adds the specified operands.
      /// </summary>
      /// <param name="operand1">The operand1.</param>
      /// <param name="operand2">The operand2.</param>
      /// <returns>The result of the addition.</returns>
      /// <exception cref="ArgumentException">
      /// Argument <paramref name="operand1"/> is &lt; 0.<br/>
      /// -- or --<br/>
      /// Argument <paramref name="operand2"/> is &lt; 0.
      /// </exception>
      double Add(double operand1, double operand2);

      } // interface ICalculator

    A remark: I sometimes hear the complaint that xml comment stuff like the above is hard to read. That’s certainly true, but irrelevant to me, because I read xml code comments with the CR_Documentor tool window. Apart from that, I’m heavily using xml code comments (see e.g. here for a detailed guide) because there is the possibility of automating help generation with nightly CI builds (using MS Sandcastle and the Sandcastle Help File Builder) and then publishing the results to some intranet location. This way, a team always has first-class, up-to-date technical documentation at hand about the current codebase. (And, also very important for speeding things up and avoiding typos: you have IntelliSense/AutoCompletion and R# support, and the comments are subject to compiler checking…)

    Back to our Calculator again: two more R# clicks implement the Add() skeleton:

      ...

      public double Add(double operand1, double operand2)
      {
          throw new NotImplementedException();
      }

      } // class Calculator

    As we have stated in the interface definition (which actually serves as our requirement document!), the operands are not allowed to be negative. So let’s start implementing that. Here’s the test:

      [Test]
      [Row(-0.5, 2)]
      public void AddThrowsOnNegativeOperands(double operand1, double operand2)
      {
          ICalculator calculator = container.GetService<ICalculator>();

          Assert.Throws<ArgumentException>(() => calculator.Add(operand1, operand2));
      }

    As you can see, I’m using a data-driven unit test method here, mainly for these two reasons: Because I know that I will have to do the same test for the second operand in a few seconds, I save myself from implementing another test method for this purpose; rather, I will only have to add another Row attribute to the existing one. And from the test report below, you can see that the argument values are explicitly printed out.

    This can be a valuable documentation feature even when everything is green: one can quickly review what values were tested exactly - the complete Gallio HTML report (as it will be produced by the Continuous Integration runs) shows these values in a quite clear format (see below for an example).

    Back to our Calculator development again: at the moment the test result tells us we’re red again, because there is not yet an implementation… Next we go on and implement the necessary parameter verification to become green again, and then we do the same thing for the second operand. To make a long story short, here’s the test and the method implementation at the end of the second cycle:

      // in CalculatorTest:

      [Test]
      [Row(-0.5, 2)]
      [Row(295, -123)]
      public void AddThrowsOnNegativeOperands(double operand1, double operand2)
      {
          ICalculator calculator = container.GetService<ICalculator>();

          Assert.Throws<ArgumentException>(() => calculator.Add(operand1, operand2));
      }

      // in Calculator:

      public double Add(double operand1, double operand2)
      {
          if (operand1 < 0.0)
          {
              throw new ArgumentException("Value must not be negative.", "operand1");
          }
          if (operand2 < 0.0)
          {
              throw new ArgumentException("Value must not be negative.", "operand2");
          }
          throw new NotImplementedException();
      }

    So far, we have sheltered our method from unwanted input, and now we can safely operate on the parameters without further caring about their validity (this is my interpretation of the Fail Fast principle, which is covered here in more detail). Now we can think about the method’s successful outcomes. First let’s write another test for that:

      [Test]
      [Row(1, 1, 2)]
      public void TestAdd(double operand1, double operand2, double expectedResult)
      {
          ICalculator calculator = container.GetService<ICalculator>();

          double result = calculator.Add(operand1, operand2);

          Assert.AreEqual(expectedResult, result);
      }

    Again, I’m regularly using row-based test methods for these kinds of unit tests. The pattern shown above proved to be extremely helpful for my development work; I call it the Defined-Input/Expected-Output test idiom: you define your input arguments together with the expected method result. There are two major benefits from that way of testing: In the course of refining a method, it’s very likely that you will come up with additional test cases. In our case, we might add tests for some edge cases like ‘one of the operands is zero’ or ‘the sum of the two operands causes an overflow’, or maybe there’s an external test protocol that has to be fulfilled (e.g. an ISO norm for medical software), and this results in the need of testing against additional values. In all these scenarios we only have to add another Row attribute to the test. And remember that the argument values are written to the test report, so as a side effect this produces valuable documentation. (This can become especially important if the fulfillment of some sort of external requirements has to be proven.)

    So your test method might look something like this in the end:

      [Test, Description("Arguments: operand1, operand2, expectedResult")]
      [Row(1, 1, 2)]
      [Row(0, 999999999, 999999999)]
      [Row(0, 0, 0)]
      [Row(0, double.MaxValue, double.MaxValue)]
      [Row(4, double.MaxValue - 2.5, double.MaxValue)]
      public void TestAdd(double operand1, double operand2, double expectedResult)
      {
          ICalculator calculator = container.GetService<ICalculator>();

          double result = calculator.Add(operand1, operand2);

          Assert.AreEqual(expectedResult, result);
      }

    And this will produce a corresponding HTML report (with Gallio). Not bad for the amount of work we invested in it, huh? There might be scenarios where reports like that can be useful for demonstration purposes during a Scrum sprint review…

    The last requirement to fulfill is that the LastResult property is expected to store the result of the last operation. I don’t show this here; it’s trivial enough and brings nothing new…

    And finally: Refactor (for the right reasons)

    To demonstrate my way of going through the refactoring portion of the red-green-refactor cycle, I added another method to our Calculator component, namely Subtract(). Here’s the code (tests and production):

      // CalculatorTest.cs:

      [Test, Description("Arguments: operand1, operand2, expectedResult")]
      [Row(1, 1, 0)]
      [Row(0, 999999999, -999999999)]
      [Row(0, 0, 0)]
      [Row(0, double.MaxValue, -double.MaxValue)]
      [Row(4, double.MaxValue - 2.5, -double.MaxValue)]
      public void TestSubtract(double operand1, double operand2, double expectedResult)
      {
          ICalculator calculator = container.GetService<ICalculator>();

          double result = calculator.Subtract(operand1, operand2);

          Assert.AreEqual(expectedResult, result);
      }

      [Test, Description("Arguments: operand1, operand2, expectedResult")]
      [Row(1, 1, 0)]
      [Row(0, 999999999, -999999999)]
      [Row(0, 0, 0)]
      [Row(0, double.MaxValue, -double.MaxValue)]
      [Row(4, double.MaxValue - 2.5, -double.MaxValue)]
      public void TestSubtractGivesExpectedLastResult(double operand1, double operand2, double expectedResult)
      {
          ICalculator calculator = container.GetService<ICalculator>();

          calculator.Subtract(operand1, operand2);

          Assert.AreEqual(expectedResult, calculator.LastResult);
      }

      ...

      // ICalculator.cs:

      /// <summary>
      /// Subtracts the specified operands.
      /// </summary>
      /// <param name="operand1">The operand1.</param>
      /// <param name="operand2">The operand2.</param>
      /// <returns>The result of the subtraction.</returns>
      /// <exception cref="ArgumentException">
      /// Argument <paramref name="operand1"/> is &lt; 0.<br/>
      /// -- or --<br/>
      /// Argument <paramref name="operand2"/> is &lt; 0.
      /// </exception>
      double Subtract(double operand1, double operand2);

      ...

      // Calculator.cs:

      public double Subtract(double operand1, double operand2)
      {
          if (operand1 < 0.0)
          {
              throw new ArgumentException("Value must not be negative.", "operand1");
          }

          if (operand2 < 0.0)
          {
              throw new ArgumentException("Value must not be negative.", "operand2");
          }

          return (this.LastResult = operand1 - operand2).Value;
      }

    Obviously, the argument validation stuff that was produced during the red-green part of our cycle duplicates the code from the previous Add() method. So, to avoid code duplication and minimize the number of code lines of the production code, we do an Extract Method refactoring. One more time, this is only a matter of a few mouse clicks (and giving the new method a name) with R#. Having done that, our production code finally looks like this:

      using System;
      using LinFu.IoC.Configuration;

      namespace Calculator
      {
          [Implements(typeof(ICalculator))]
          internal class Calculator : ICalculator
          {
              #region ICalculator

              public double? LastResult { get; private set; }

              public double Add(double operand1, double operand2)
              {
                  ThrowIfOneOperandIsInvalid(operand1, operand2);

                  return (this.LastResult = operand1 + operand2).Value;
              }

              public double Subtract(double operand1, double operand2)
              {
                  ThrowIfOneOperandIsInvalid(operand1, operand2);

                  return (this.LastResult = operand1 - operand2).Value;
              }

              #endregion // ICalculator

              #region Implementation (Helper)

              private static void ThrowIfOneOperandIsInvalid(double operand1, double operand2)
              {
                  if (operand1 < 0.0)
                  {
                      throw new ArgumentException("Value must not be negative.", "operand1");
                  }

                  if (operand2 < 0.0)
                  {
                      throw new ArgumentException("Value must not be negative.", "operand2");
                  }
              }

              #endregion // Implementation (Helper)

          } // class Calculator

      } // namespace Calculator

    But is the above worth the effort at all? It’s obviously trivial and not very impressive. All our tests were green (for the right reasons), and refactoring the code did not change anything. It’s not immediately clear how this refactoring work adds value to the project. Derick puts it like this: “STOP! Hold on a second… before you go any further and before you even think about refactoring what you just wrote to make your test pass, you need to understand something: if your done with your requirements after making the test green, you are not required to refactor the code. I know… I’m speaking heresy, here. Toss me to the wolves, I’ve gone over to the dark side! Seriously, though… if your test is passing for the right reasons, and you do not need to write any test or any more code for you class at this point, what value does refactoring add?”

    Derick immediately answers his own question: “So why should you follow the refactor portion of red/green/refactor? When you have added code that makes the system less readable, less understandable, less expressive of the domain or concern’s intentions, less architecturally sound, less DRY, etc, then you should refactor it.”

    I couldn’t state it more precisely. From my personal perspective, I’d add the following: you have to keep in mind that real-world software systems are usually quite large, and there are dozens or even hundreds of occasions where micro-refactorings like the above can be applied. It’s the sum of them all that counts. And to have good overall quality of the system (e.g. in terms of the Code Duplication Percentage metric) you have to be pedantic on the individual, seemingly trivial cases. My job regularly requires the reading and understanding of ‘foreign’ code, so code quality/readability really makes a HUGE difference for me - sometimes it can even be the difference between project success and failure…

    Conclusions

    The development process described above emerged over the years, and there were mainly two things that guided its evolution (you might call them eternal principles, personal beliefs, or anything in between): First, test-driven development is the normal, natural way of writing software; code-first is exceptional. So ‘doing TDD or not’ is not a question. And good, stable code can only reliably be produced by doing TDD (yes, I know: many will strongly disagree here again, but I’ve never seen high-quality code - and high-quality code is code that stood the test of time and causes low maintenance costs - that was produced code-first…). Second, it’s the production code that pays our bills in the end (though I have seen customers these days who demand an acceptance test battery as part of the final delivery - things seem to be going in the right direction…). The test code serves ‘only’ to make the production code work. But it’s the number of delivered features which solely counts at the end of the day, no matter how much test code you wrote or how good it is.

    With these two things in mind, I tried to optimize my coding process for coding speed - or, in business terms: productivity - without sacrificing the principles of TDD (more than I’d do either way…). As a result, I consider a ratio of about 3-5/1 for test code vs. production code as normal and desirable; in other words, roughly 60-80% of my code is test code. (This might sound heavy, but that is mainly due to the fact that software development standards are only beginning to evolve. The entire software development profession is, historically speaking, very young - we are only at the very beginning, and there are no viable standards yet. If you think about software development as a kind of casting process, where the test code is the mold and the resulting production code is the final product, then the above ratio no longer sounds extraordinary…)

    Although the above might look like very much unnecessary work at first sight, it’s not. With the aid of the mentioned add-ins, doing all the above is a matter of minutes, sometimes seconds (while writing this post took hours and days…). The most important thing is to have the right tools at hand. Slow developer machines or the lack of a tool or something like that - for ‘saving’ a few 100 bucks - is just not acceptable and a very bad decision in business terms (though I have seen and heard exactly that quite a few times…). Production of high-quality products needs the usage of high-quality tools. This is a platitude that every craftsman knows…

    The round-trip described here will take me about five to ten minutes in my real-world development practice. I guess it’s about 30% more time compared to developing the ‘traditional’ (code-first) way. But the product manufactured this way is of much higher quality and massively reduces maintenance costs, which is by far the single biggest cost factor, as I showed in this previous post: It's the maintenance, stupid! (or: Something is rotten in developerland.). In the end, this is a highly cost-effective way of software development… But on the other hand, there clearly is a trade-off here: coding speed vs. code quality/later maintenance costs. The development method described here might be a perfect fit for the overwhelming majority of software projects, but there certainly are some scenarios where it’s not - e.g. if time-to-market is crucial for a software project. So this is a business decision in the end. It’s just that you have to know what you’re doing and what consequences this might have…

    Some last words

    First, I’d like to thank Derick Bailey again. His two aforementioned posts (which I strongly recommend reading) inspired me to think deeply about my own personal way of doing TDD and to clarify my thoughts about it. I wouldn’t have done that without this inspiration - I really enjoy that kind of discussion… I agree with him in all respects. But I don’t know (yet?) how to bring his insights into the described production process without slowing things down. The method described above proved to be very “good enough” in my practical experience. But of course, I’m open to suggestions here… My rationale for now is: if the test is initially red during the red-green-refactor cycle, the ‘right reason’ is that it actually calls the right method, but this method is not yet operational. Later on, when the cycle is finished and the tests become part of the regular, automated Continuous Integration process, ‘red’ certainly must occur for the ‘right reason’: in this phase, ‘red’ MUST mean nothing but an unfulfilled assertion - Fail By Assertion, Not By Anything Else!
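
    To restate the closing point in a language-neutral form, here is a minimal sketch in plain C++ with assert, rather than the Gallio/MbUnit stack used in the article; all names are illustrative only. A test that dies in a not-implemented stub is red because the code under test aborted; once the stub is implemented, red should only ever mean a failed assertion:

      #include <cassert>
      #include <stdexcept>

      // Stub phase: calling add() aborts with an exception, so a test is 'red'
      // only because the production code is not yet operational.
      double add(double a, double b)
      {
          throw std::logic_error("not implemented");   // NotImplementedException analogue
      }

      int main()
      {
          // Assertion phase: once add() returns a real value, this assertion is
          // the only thing that should ever be able to turn the test red.
          assert(add(1, 1) == 2);
          return 0;
      }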

    Read the article

  • Agile Development Requires Agile Support

    - by Matt Watson
    Agile development

    Agile development has become the standard methodology for application development. The days of long-term planning with giant Gantt waterfall charts and detailed requirements are fading away. For years the product planning process frustrated product owners and businesses because, no matter the plan, nothing ever went to plan. Agile development throws the detailed planning out the window and instead focuses on giving developers some basic requirements and pointing them in the right direction. Constant collaboration via quick iterations with the end users, product owners, and the development team helps ensure the project is done correctly.

    The various agile development methodologies have helped greatly with creating products faster, but not without causing new problems. Complicated application deployments now occur weekly or monthly. Most of the products are web-based and deployed on a software-as-a-service model. System performance and availability of these apps become mission-critical. This is all much different from the old process of mailing new releases of client-server apps on CD once per quarter or year. The steady stream of new products and product enhancements puts a lot of pressure on IT operations to keep up with the software deployments and to add infrastructure capacity. The problem is most operations teams still move slowly, thanks to change orders, documentation, procedures, testing, and other processes. Operations can slow the process down and push back on the development team in some organizations. The DevOps movement is trying to solve some of these problems by integrating the development and operations teams more closely.

    Rapid change introduces new problems

    The rapid product change ultimately creates some application problems along the way. Higher rates of change increase the likelihood of new application defects. Delivering applications as a software service also means that scalability of applications is critical. Development teams struggle to keep up with application defects and scalability concerns in their applications. Fixing application problems is a never-ending job for agile development teams. Fixing problems before your customers find them, and fixing them quickly, is critical. Most companies really struggle with this due to the divide between the development and operations groups. Fixing application problems typically requires querying databases, looking at log files, reviewing config files, reviewing error logs, and other similar tasks. It becomes difficult to work on new features when your lead developers are working on defects from the last product version.

    Developers need more visibility

    The problem is most developers are not given access to see server and application information in the production environments. The operations team doesn’t trust giving all the developers the keys to the kingdom to log in to production and poke around the servers. The challenge is to either give them no access, or potentially too much access. Those with access can still waste time figuring out the location of the application and how to connect to it over VPN. In addition, reproducing problems in test environments takes too much time and isn't always possible. System administrators spend a lot of time helping developers track down server information. Most companies give key developers access to all of the production resources so they can help resolve application defects. The problem is only those key people have access, and they become a bottleneck. They end up spending 25-50% of their time on a daily basis trying to solve application issues because they are the only ones with access. These key employees’ time is best spent on strategic new projects, not addressing application defects. This job should fall to entry-level developers, provided they have access to all the information they need to troubleshoot the problems.

    The solution to agile application support is giving all the developers limited access to the production environment and all the server information they need to see. Some companies create their own solutions internally to collect log files, centralize errors, or do other things to address the problem. Some developers even have access to server monitoring or other tools. But the key is giving them access to everything they need so they can see the full picture, and giving that access to the whole team. Giving access to everyone scales up the application support team and creates collaboration around providing improved application support.

    Stackify enables agile application support

    Stackify has created a solution that can give all developers a secure, read-only view of the entire production server environment without console or remote desktop access. They provide a web application that gives real-time visibility into the important information that developers need to see. An application-centric view enables them to see all of their apps across multiple datacenters and environments. They don’t need to know where the application is deployed, just the name of the application to find it and dig in to see more. All your developers can see server health, application health, log files, config files, the Windows Event Viewer, deployment history, application notes, and much more. They can receive email and text alerts when problems arise, and even safely query your production databases. Stackify enables companies that do agile development to scale up their application support team by getting more team members involved. The lead developers can spend more time on new projects. Application issues can be fixed more quickly than ever. Operations can spend less time helping developers collect server information. Agile application support starts with Stackify. Visit Stackify.com to learn more.

    Read the article

  • Best C++ development environment in Linux

    - by Bruce
    I have some experience with Eclipse and Qt Creator and am somewhat disappointed in their debuggers, less so in their editors. On Windows, I like Visual Studio for debugging and SlickEdit for editing (SE is also available on Linux). Is there an IDE that is somehow better than the two mentioned?

    Read the article

  • How do you play or record audio (to .WAV) on Linux in C++? [closed]

    - by Jacky Alcine
    Hello, I've been looking for a way to play and record audio on a Linux (preferably Ubuntu) system. I'm currently working on a front-end to a voice recognition toolkit that'll automate a few steps required to adapt a voice model for PocketSphinx and Julius. Suggestions of alternative means of audio input/output are welcome, as well as a fix to the bug shown below. Here is the current code I've used so far to play a .WAV file:

      void Engine::sayText( const string OutputText )
      {
          string audioUri = "temp.wav";
          string requestUri = this->getRequestUri( OPENMARY_PROCESS, OutputText.c_str( ) );
          int error, audioStream;
          pa_simple* pulseConnection;
          pa_sample_spec simpleSpecs;
          simpleSpecs.format = PA_SAMPLE_S16LE;
          simpleSpecs.rate = 44100;
          simpleSpecs.channels = 2;

          eprintf( E_MESSAGE, "Generating audio for '%s' from '%s'...", OutputText.c_str( ), requestUri.c_str( ) );
          FILE* audio = this->getHttpFile( requestUri, audioUri );
          fclose( audio );
          eprintf( E_MESSAGE, "Generated audio." );

          if ( ( audioStream = open( audioUri.c_str( ), O_RDONLY ) ) < 0 )
          {
              fprintf( stderr, __FILE__": open() failed: %s\n", strerror( errno ) );
              goto finish;
          }

          if ( dup2( audioStream, STDIN_FILENO ) < 0 )
          {
              fprintf( stderr, __FILE__": dup2() failed: %s\n", strerror( errno ) );
              goto finish;
          }

          close( audioStream );

          pulseConnection = pa_simple_new( NULL, "AudioPush", PA_STREAM_PLAYBACK, NULL,
                                           "openMary C++", &simpleSpecs, NULL, NULL, &error );

          for ( int i = 0; ; i++ )
          {
              const int bufferSize = 1024;
              uint8_t audioBuffer[bufferSize];
              ssize_t r;

              eprintf( E_MESSAGE, "Buffering %d..", i );

              /* Read some data ... */
              if ( ( r = read( STDIN_FILENO, audioBuffer, sizeof ( audioBuffer ) ) ) <= 0 )
              {
                  if ( r == 0 ) /* EOF */
                      break;

                  eprintf( E_ERROR, __FILE__": read() failed: %s\n", strerror( errno ) );
                  if ( pulseConnection )
                      pa_simple_free( pulseConnection );
              }

              /* ... and play it */
              if ( pa_simple_write( pulseConnection, audioBuffer, ( size_t ) r, &error ) < 0 )
              {
                  fprintf( stderr, __FILE__": pa_simple_write() failed: %s\n", pa_strerror( error ) );
                  if ( pulseConnection )
                      pa_simple_free( pulseConnection );
              }

              usleep( 2 );
          }

          /* Make sure that every single sample was played */
          if ( pa_simple_drain( pulseConnection, &error ) < 0 )
          {
              fprintf( stderr, __FILE__": pa_simple_drain() failed: %s\n", pa_strerror( error ) );
              if ( pulseConnection )
                  pa_simple_free( pulseConnection );
          }

      finish:   /* this label was missing in the snippet as posted; without it the gotos above do not compile */
          return;
      }

    NOTE: If you want the rest of the code for this file, you can download it here directly from Launchpad.
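
    For the recording side of the question, a minimal sketch using the same PulseAudio simple API, but with PA_STREAM_RECORD instead of PA_STREAM_PLAYBACK. The function name, fixed read count, and output handling are illustrative only, and the captured data is raw PCM - a proper .WAV header would still have to be written separately:

      #include <pulse/simple.h>
      #include <pulse/error.h>
      #include <cstdint>
      #include <cstdio>

      // Capture a short burst of 16-bit stereo PCM at 44.1 kHz and append it to 'out'.
      int recordSome( FILE* out )
      {
          pa_sample_spec spec;
          spec.format = PA_SAMPLE_S16LE;
          spec.rate = 44100;
          spec.channels = 2;

          int error = 0;
          pa_simple* s = pa_simple_new( NULL, "Recorder", PA_STREAM_RECORD, NULL,
                                        "capture", &spec, NULL, NULL, &error );
          if ( !s )
          {
              fprintf( stderr, "pa_simple_new() failed: %s\n", pa_strerror( error ) );
              return -1;
          }

          uint8_t buffer[1024];
          for ( int i = 0; i < 1000; i++ )   // ~1000 reads of 1 KB each; adjust as needed
          {
              if ( pa_simple_read( s, buffer, sizeof ( buffer ), &error ) < 0 )
              {
                  fprintf( stderr, "pa_simple_read() failed: %s\n", pa_strerror( error ) );
                  break;
              }
              fwrite( buffer, 1, sizeof ( buffer ), out );   // raw PCM, no WAV header yet
          }

          pa_simple_free( s );
          return 0;
      }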

    Read the article
