Search Results

Search found 8706 results on 349 pages for 'projects'.


  • solution for an offline server

    - by dashmug
    I'm trying to set up a development server at work that will ideally be able to test-drive a couple of projects in PHP, Rails, or Django (not always running at the same time). I develop the apps locally on a Mac and then I'll put the projects up on this server for testing with my actual users (non-techies) before deploying to a production server.

    My problem is that we have a very poor internet connection (almost negligible) at work, and the usual apt-get/yum/ports (make, clean, install) processes for setting up servers always get their packages from online repositories somewhere. I know I could probably download the source and then compile it myself, but that's going to be too much of a hassle for me. I'm thinking about two solutions:

    Plan A: Run a server VM on my Mac and then use this VM as the source repository for the offline server. I've read about Ubuntu's apt-proxy and it seems to be good enough, though I haven't tried it yet. I'm not sure if this is possible, but can I simply do apt-get install nginx --download-only so that the package and its dependencies are downloaded into my VM, and my server can use the VM as the source repo for apt-get?

    Plan B: Run a server VM on my Mac (which I can set up/update easily when I'm home) and then clone the VM to the offline development server. Maybe I should simply make the server a VM host so I can just copy the VM over. I think this is okay for the first-time setup, but subsequent updates will take too long (cloning the VM image).

    If I were working on Windows, I imagine it'd be easier because most services have an installer file that I can download and then run on the server. If you could suggest another way, it would be much appreciated.

    Update: From Michael Hampton's answer, I found a possible solution, which is apt-cacher. I also found this page on Ubuntu's website. I wonder if there is a better tool than this one.

    Read the article
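
    For what it's worth, a minimal sketch of how Plan A could look with stock apt tools - assuming the VM has internet access whenever packages are first fetched; the host names, IP and paths below are placeholders, not details from the question:

        # On the internet-connected VM: download a package plus its dependencies
        # without installing them (the .debs land in /var/cache/apt/archives/).
        sudo apt-get install --download-only nginx

        # Option 1: copy the cached packages to the offline server and install there.
        scp /var/cache/apt/archives/*.deb user@offline-server:/tmp/debs/
        ssh user@offline-server 'sudo dpkg -i /tmp/debs/*.deb'

        # Option 2: run apt-cacher (or apt-cacher-ng) on the VM and point the offline
        # server's apt at it, e.g. in /etc/apt/apt.conf.d/02proxy on the offline box:
        #     Acquire::http::Proxy "http://192.168.1.10:3142/";
        # Anything already pulled through the cache can then be installed with plain
        # apt-get while offline; new packages still need the VM to have connectivity.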

  • How can I write an excel formula to do row based calculations; where certain conditions need to be met?

    - by BDY
    I am given an Excel sheet that contains around 200 tasks (described in rows 2-201 in column A). Each task can be eligible for a maximum of two projects (there are 4 projects in total, called "P1"-"P4" - drop-down lists in columns B and D), each with a specific %-rate allocation (columns C and E - column C refers to the project in column B, and column E refers to the project in column D). Column F shows the number of work days spent on each task.

    Example in row 2: Task 1 (column A); P1 (column B); 80% (column C); P3 (column D); 20% (column E); 3 (column F).

    I need to know the sum of the working days spent on project P3, respecting the %-rate for eligibility. I know how to calculate it for each task (each row) - e.g. for Task 1:

        =IF(B2="P3";C2*F2)+IF(D2="P3";E2*F2)

    However, instead of repeating this for each task, I need a formula that adds them all together. Unfortunately the following formula shows me an error:

        =IF(B2:B201="P3";C2:C201*F2:F201)+IF(D2:D201="P3";E2:E201*F2:F201)

    Can anyone help please? Thank you!!

    Read the article
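
    A formula-level sketch of one way to add these up in a single cell (ranges exactly as in the question, semicolon separators kept): SUMPRODUCT multiplies the P3 condition, the %-rate and the day count row by row and sums the result, so no per-row IF is needed:

        =SUMPRODUCT((B2:B201="P3")*C2:C201*F2:F201)+SUMPRODUCT((D2:D201="P3")*E2:E201*F2:F201)

    Alternatively, the original IF-based approach works if each IF is given a 0 fallback, wrapped in SUM(...), and confirmed as an array formula with Ctrl+Shift+Enter, e.g. =SUM(IF(B2:B201="P3";C2:C201*F2:F201;0))+SUM(IF(D2:D201="P3";E2:E201*F2:F201;0)).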

  • Is there any equivalent of `--depth immediates` in `git`?

    - by ???
    Currently, I'm trying to set up a git front-end to the Subversion repository. My Subversion repository is a single large repository which consists of several co-related projects:

        svn-root
        |-- project1
        |   |-- branches
        |   |-- tags
        |   `-- trunk
        |-- project2
        |   |-- branches
        |   |-- tags
        |   `-- trunk
        `-- project3
            |-- branches
            |-- tags
            `-- trunk

    Because files sometimes need to be moved between different projects, I don't want to break the repository up into separate ones. I'm going to use git-svn to set up a git front-end, but I don't see how exactly to map the svn structure onto a git structure. The two systems treat branches and tags very differently and I doubt it is possible.

    To simplify the problem, I would just git svn clone the whole root directory and let the branches/tags/trunk directories sit there. But this will definitely result in too many files in the branches and tags directories. In Subversion, it's easy to set the depth of a checkout to immediates, which will only check out the branch/tag names, without the directory contents, but I don't know if this can be done in git. The git-svn side has me confused. I hope there's a more elegant solution.

    Read the article
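
    One hedged observation: git has no direct counterpart of svn's --depth immediates, but a common way to avoid materialising every branch and tag as a working-tree directory is to clone each project with git-svn's standard-layout options, so branches and tags become refs rather than files on disk. A sketch, with the repository URL as a placeholder:

        # One git repository per project; branches and tags are imported as
        # remote refs, not as directories in the working copy.
        git svn clone https://svn.example.com/svn-root/project1 \
            --trunk=trunk --branches=branches --tags=tags project1

        cd project1
        git branch -r    # lists the imported svn branches/tags as refs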

  • Internal Server Error with mod_wsgi [django] on windows xp

    - by sacabuche
    When I run the development server everything works very well; even an empty project running under mod_wsgi gives me no problem. But when I put up my own project I get an Internal Server Error (500). In my Apache conf I put:

        WSGIScriptAlias /codevents C:/django/apache/CODEvents.wsgi
        <Directory "C:/django/apache">
            Order allow,deny
            Allow from all
        </Directory>

        Alias /codevents/media C:/django/projects/CODEvents/html
        <Directory "C:/django/projects/CODEvents/html">
            Order allow,deny
            Allow from all
        </Directory>

    In CODEvents.wsgi:

        import os, sys
        sys.path.append('C:\\Python26\\Lib\\site-packages\\django')
        sys.path.append('C:\\django\\projects')
        sys.path.append('C:\\django\\apps')
        sys.path.append('C:\\django\\projects\\CODEvents')
        os.environ['DJANGO_SETTINGS_MODULE'] = 'CODEvents.settings'
        import django.core.handlers.wsgi
        application = django.core.handlers.wsgi.WSGIHandler()

    And my error_log says (every line carries the prefix "[Mon May 24 23:31:39 2010] [error] [client 127.0.0.1]", stripped below for readability):

        mod_wsgi (pid=1848): Exception occurred processing WSGI script 'C:/django/apache/CODEvents.wsgi'.
        Traceback (most recent call last):
          File "C:\\Python26\\lib\\site-packages\\django\\core\\handlers\\wsgi.py", line 241, in __call__
            response = self.get_response(request)
          File "C:\\Python26\\lib\\site-packages\\django\\core\\handlers\\base.py", line 142, in get_response
            return self.handle_uncaught_exception(request, resolver, exc_info)
          File "C:\\Python26\\lib\\site-packages\\django\\core\\handlers\\base.py", line 166, in handle_uncaught_exception
            return debug.technical_500_response(request, *exc_info)
          File "C:\\Python26\\lib\\site-packages\\django\\views\\debug.py", line 58, in technical_500_response
            html = reporter.get_traceback_html()
          File "C:\\Python26\\lib\\site-packages\\django\\views\\debug.py", line 137, in get_traceback_html
            return t.render(c)
          File "C:\\Python26\\lib\\site-packages\\django\\template\\__init__.py", line 173, in render
            return self._render(context)
          File "C:\\Python26\\lib\\site-packages\\django\\template\\__init__.py", line 167, in _render
            return self.nodelist.render(context)
          File "C:\\Python26\\lib\\site-packages\\django\\template\\__init__.py", line 796, in render
            bits.append(self.render_node(node, context))
          File "C:\\Python26\\lib\\site-packages\\django\\template\\debug.py", line 72, in render_node
            result = node.render(context)
          File "C:\\Python26\\lib\\site-packages\\django\\template\\debug.py", line 89, in render
            output = self.filter_expression.resolve(context)
          File "C:\\Python26\\lib\\site-packages\\django\\template\\__init__.py", line 579, in resolve
            new_obj = func(obj, *arg_vals)
          File "C:\\Python26\\lib\\site-packages\\django\\template\\defaultfilters.py", line 693, in date
            return format(value, arg)
          File "C:\\Python26\\lib\\site-packages\\django\\utils\\dateformat.py", line 281, in format
            return df.format(format_string)
          File "C:\\Python26\\lib\\site-packages\\django\\utils\\dateformat.py", line 30, in format
            pieces.append(force_unicode(getattr(self, piece)()))
          File "C:\\Python26\\lib\\site-packages\\django\\utils\\dateformat.py", line 187, in r
            return self.format('D, j M Y H:i:s O')
          File "C:\\Python26\\lib\\site-packages\\django\\utils\\dateformat.py", line 30, in format
            pieces.append(force_unicode(getattr(self, piece)()))
          File "C:\\Python26\\lib\\site-packages\\django\\utils\\encoding.py", line 66, in force_unicode
            s = unicode(s)
          File "C:\\Python26\\lib\\site-packages\\django\\utils\\functional.py", line 206, in __unicode_cast
            return self.__func(*self.__args, **self.__kw)
          File "C:\\Python26\\lib\\site-packages\\django\\utils\\translation\\__init__.py", line 55, in ugettext
            return real_ugettext(message)
          File "C:\\Python26\\lib\\site-packages\\django\\utils\\functional.py", line 55, in _curried
            return _curried_func(*(args+moreargs), **dict(kwargs, **morekwargs))
          File "C:\\Python26\\lib\\site-packages\\django\\utils\\translation\\__init__.py", line 36, in delayed_loader
            return getattr(trans, real_name)(*args, **kwargs)
          File "C:\\Python26\\lib\\site-packages\\django\\utils\\translation\\trans_real.py", line 276, in ugettext
            return do_translate(message, 'ugettext')
          File "C:\\Python26\\lib\\site-packages\\django\\utils\\translation\\trans_real.py", line 266, in do_translate
            _default = translation(settings.LANGUAGE_CODE)
          File "C:\\Python26\\lib\\site-packages\\django\\utils\\translation\\trans_real.py", line 176, in translation
            default_translation = _fetch(settings.LANGUAGE_CODE)
          File "C:\\Python26\\lib\\site-packages\\django\\utils\\translation\\trans_real.py", line 159, in _fetch
            app = import_module(appname)
          File "C:\\Python26\\lib\\site-packages\\django\\utils\\importlib.py", line 35, in import_module
            __import__(name)
          File "C:\\Python26\\lib\\site-packages\\django\\contrib\\admin\\__init__.py", line 1, in <module>
            from django.contrib.admin.helpers import ACTION_CHECKBOX_NAME
          File "C:\\Python26\\lib\\site-packages\\django\\contrib\\admin\\helpers.py", line 1, in <module>
            from django import forms
          File "C:\\Python26\\lib\\site-packages\\django\\forms\\__init__.py", line 17, in <module>
            from models import *
          File "C:\\Python26\\lib\\site-packages\\django\\forms\\models.py", line 6, in <module>
            from django.db import connections
          File "C:\\Python26\\lib\\site-packages\\django\\db\\__init__.py", line 75, in <module>
            connection = connections[DEFAULT_DB_ALIAS]
          File "C:\\Python26\\lib\\site-packages\\django\\db\\utils.py", line 91, in __getitem__
            backend = load_backend(db['ENGINE'])
          File "C:\\Python26\\lib\\site-packages\\django\\db\\utils.py", line 49, in load_backend
            raise ImproperlyConfigured(error_msg)
        TemplateSyntaxError: Caught ImproperlyConfigured while rendering: 'django.db.backends.postgresql_psycopg2' isn't an available database backend.
        Try using django.db.backends.XXX, where XXX is one of:
            'dummy', 'mysql', 'oracle', 'postgresql', 'postgresql_psycopg2', 'sqlite3'
        Error was: cannot import name utils

    Please help me!!

    Read the article
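
    A hedged guess at the cause rather than a confirmed fix: the final ImportError ("cannot import name utils") is raised while Django tries to import the postgresql_psycopg2 backend, and one thing that can produce import shadowing of this kind is putting the Django package directory itself (C:\Python26\Lib\site-packages\django) on sys.path - its subpackages then become importable as top-level modules, and a second Django copy on the path has a similar effect. Since site-packages is already on sys.path, that entry is not needed. A minimal sketch of CODEvents.wsgi without it (all paths as in the question):

        import os
        import sys

        # Project code only; the Django package directory is deliberately not
        # appended, because site-packages is already on sys.path.
        sys.path.append('C:\\django\\projects')
        sys.path.append('C:\\django\\apps')
        sys.path.append('C:\\django\\projects\\CODEvents')

        os.environ['DJANGO_SETTINGS_MODULE'] = 'CODEvents.settings'

        import django.core.handlers.wsgi
        application = django.core.handlers.wsgi.WSGIHandler()

    If the error persists, it is also worth confirming that psycopg2 imports cleanly under the same Python that mod_wsgi embeds, since a backend whose import fails is reported as "isn't an available database backend" in exactly this way.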

  • Hosted bug tracking system with mercurial repositories (Summary of options & request for opinions)

    - by Mark Booth
    The Question

    What hosted mercurial repository/bug tracking system or systems have you used? Would you recommend it to others? Are there serious flaws, either in the repository hosting or the bug tracking features, that would make it difficult to recommend it? Do you have any other experiences with it or opinions of it that you would like to share? If you have used other, non-mercurial hosted repository/bug tracking systems, how do they compare? (If I understand correctly, the best format for this type of community-wiki style question is one answer per option, if you have experienced several.)

    Background

    I have been looking into options for setting up a bug/issue tracking database and found some valuable advice in this thread and this. But then I got to thinking that a hosted solution might not only solve the problem of tracking bugs, but might also solve the problem we have accessing our mercurial source code repositories while at customer sites around the world. Since we currently have no way to serve mercurial repositories over ssl, when I am at a customer site I have to connect my laptop via VPN to my work network and access the mercurial repositories over a samba share (even if it is just to sync twice a day). This is excruciatingly slow on high-latency networks and can be impossible with some customers' firewalls. Even if we could run a TRAC or Redmine server here (thanks turnkey), I'm not sure it would be much quicker, as our internet connection is over-stretched as it is. What I would like is for developers to be able to push/pull to/from a remote repository, servicing engineers to be able to pull from a remote repository, and customers (both internal and external) to be able to submit bug/issue reports.

    Initial options

    The two options I found were Assembla and Jira. Looking at Assembla I thought the 'group' price looked reasonable, but after enquiring, found that each workspace could only contain a single repository. Since each of our products might have up to a dozen repositories (mostly for libraries) which need to be managed separately for each product, I could see it getting expensive really quickly. On the plus side, it appears that 'users' are just workspace members, so you can have as many client users (people who can only submit support tickets and track their own tickets) as you like without using up your user allocation. Jira only charges based on the number of users; unfortunately client users also count towards this if you want them to be able to track their tickets. If you only want clients to be able to submit untracked issues, you can let them submit anonymously, but that doesn't feel very professional to me.

    More options

    Looking through the MercurialHosting page that @Paidhi suggested, I've added the options which appear to offer private repositories, along with another that I found with a web search. Prices are as per their websites today (29th March 2010); corrections welcome in the future. Anyway, here is my summary, according to the information given on their websites:

    Assembla, http://www.assembla.com/, looks to be a reasonable price, but suffers from only one repository per workspace, so three projects with 6 repos each would use up most of the spaces associated with a $99/month professional account (20 spaces). Bug tracking is based on Trac. Mercurial+Trac support was announced in a blog entry in 2007, but they only list SVN and Git on their Features web page. Cost: $24, $49, $99 & $249/month for 40, 40, unlimited, unlimited users and 1, 10, 20, 100 workspaces. SSL based push/pull? Website https login.

    BitBucket, http://bitbucket.org/plans/, is primarily a mercurial hosting site for open source projects, with SSL support, but they have an integrated bug tracker and they are cheap for private repositories. It has its own issue tracker, but also integrates with Lighthouse & FogBugz. Cost: $0, $5, $12, $50 & $100/month for 1, 5, 15, 25 & 150 private repositories. SSL based push/pull. No https on website login, but supports OpenID, so you can choose an OpenID provider with https login.

    Codebase HQ, http://www.codebasehq.com/, supports Hg and is almost as cheap as BitBucket. Cost: £5, £13, £21 & £40/month for 3, 15, 30 & 60 active projects, unlimited repositories, unlimited users (except 10 users at £5/month) and 0.5, 2, 4 & 10GB. SSL based push/pull? Website https login?

    Firefly, http://www.activestate.com/firefly/, by ActiveState looks interesting, but the website is a little light on details, such as whether you can only have one repository per project or not. Cost: $9, $19 & $39/month for 1, 5 & 30 private projects, with a 0.5, 1.5 & 3 GB storage limit. SSL based push/pull? Website https login.

    Jira, http://www.atlassian.com/software/jira/, isn't limited by the number of repositories you can have, but by 'user'. It could work out quite expensive if we want client users to be able to track their issues, since a full user account would need to be created for them. Also, while there is a Mercurial extension to support Jira, there is no 'Advanced integration' for Mercurial from Atlassian Fisheye. Cost: $150, $300, $400, $500, $700/month for 10, 25, 50, 100, 100+ users. SSL based push/pull? Website https login.

    Kiln & FogBugz On Demand, http://fogcreek.com/Kiln/IntrotoOnDemand.html, integrates Kiln's mercurial DVCS features with FogBugz, where the combined package is much cheaper than the component parts. Also, the FogBugz integration is supposedly excellent. *8') Cost: $30/developer/month ($5/d/m more than either on their own). SSL based push/pull?

    SourceRepo, http://sourcerepo.com/, also supports Hg and is even cheaper than BitBucket & Codebase. Cost: $4, $7 & $13/month for 1, unlimited & unlimited repositories/trac/redmine instances and 500MB, 1GB & 3GB storage. SSL based push/pull. Website https login.

    Edit: 29th March 2010 & Bounty

    I split this question into sections, made the questions themselves more explicit, added other options from the research I have done since my first posting, and made this community wiki, since I now understand what CW is for. *8') Also, I've added a bounty to encourage people to offer their opinions. At the end of the bounty period, I will award the bounty to whoever writes the best review (good or bad), irrespective of the number of up/down votes it gets. Given that it's probably more important to avoid bad providers than to find the absolute best one, 'bad reviews' could be considered more important than good ones.

    Read the article

  • Am I just not understanding TDD unit testing (Asp.Net MVC project)?

    - by KallDrexx
    I am trying to figure out how to correctly and efficiently unit test my ASP.NET MVC project. When I started on this project I bought Pro ASP.NET MVC, and with that book I learned about TDD and unit testing. After seeing the examples, and given that I work as a software engineer in QA at my current company, I was amazed at how awesome TDD seemed to be. So I started working on my project and went gung-ho writing unit tests for my database layer, business layer, and controllers. Everything got a unit test prior to implementation. At first I thought it was awesome, but then things started to go downhill. Here are the issues I started encountering.

    I ended up writing application code just to make it possible for unit tests to be performed. I don't mean this in a good way, as in my code was broken and I had to fix it so the unit tests pass. I mean that abstracting out the database to a mock database is impossible due to the use of LINQ for data retrieval (using the generic repository pattern). The reason is that with linq-to-sql or linq-to-entities you can do joins just by doing:

        var objs = from p in _container.Projects
                   select p.Objects;

    However, if you mock the database layer out, in order to have that LINQ pass the unit test you must change it to:

        var objs = from p in _container.Projects
                   join o in _container.Objects on p.Id equals o.ProjectId
                   select o;

    Not only does this mean you are changing your application logic just so you can unit test it, but you are making your code less efficient for the sole purpose of testability and giving up a lot of the advantages of using an ORM in the first place. Furthermore, since a lot of the IDs for my models are database generated, I had to write additional code to handle the non-database tests, since IDs were never generated and I still had to handle those cases for the unit tests to pass, yet they would never occur in real scenarios. Thus I ended up throwing out my database unit testing.

    Writing unit tests for controllers was easy as long as I was returning views. However, the major part of my application (and the one that would benefit most from unit testing) is a complicated ajax web application. For various reasons I decided to change the app from returning views to returning JSON with the data I needed. After this my unit tests became extremely painful to write, as I have not found any good way to write unit tests for non-trivial JSON. After pounding my head and wasting a ton of time trying to find a good way to unit test the JSON, I gave up and deleted all of my controller unit tests (all controller actions are focused on this part of the app so far).

    So finally I was left with testing the service layer (BLL). Right now I am using EF4; however, I had this issue with linq-to-sql as well. I chose the EF4 model-first approach because, to me, it makes sense to do it that way (define my business objects and let the framework figure out how to translate them into the sql backend). This was fine at the beginning, but now it is becoming cumbersome due to relationships. For example, say I have Project, User, and Object entities. An Object must be associated to a project, and a project must be associated to a user. This is not only a database-specific rule; these are my business rules as well. However, say I want a unit test that checks I am able to save an object (for a simple example). I now have to do the following just to make sure the save worked:

        User usr = new User { Name = "Me" };
        _userService.SaveUser(usr);

        Project prj = new Project { Name = "Test Project", Owner = usr };
        _projectService.SaveProject(prj);

        Object obj = new Object { Name = "Test Object" };
        _objectService.SaveObject(obj);

        // Perform verifications

    There are several issues with having to do all this just to perform one unit test. For starters, if I add a new dependency, such as all projects must belong to a category, I must go into EVERY single unit test that references a project, add code to save the category, then add code to add the category to the project. This can be a HUGE effort down the road for a very simple business logic change, and yet almost none of the unit tests I will be modifying for this requirement are actually meant to test that feature/requirement. If I then add verifications to my SaveProject method, so that projects cannot be saved unless they have a name with at least 5 characters, I have to go through every Object and Project unit test to make sure that the new requirement doesn't make any unrelated unit tests fail. If there is an issue in the UserService.SaveUser() method, it will cause all project and object unit tests to fail, and the cause won't be immediately noticeable without digging through the exceptions. Thus I have removed all service layer unit tests from my project.

    I could go on and on, but so far I have not seen any way for unit testing to actually help me and not get in my way. I can see specific cases where I can, and probably will, implement unit tests, such as making sure my data verification methods work correctly, but those cases are few and far between. Some of my issues can probably be mitigated, but not without adding extra layers to my application, and thus making more points of failure just so I can unit test. Thus I have no unit tests left in my code. Luckily I heavily use source control, so I can get them back if I need to, but I just don't see the point.

    Everywhere on the internet I see people talking about how great TDD unit tests are, and I'm not just talking about the fanatical people. The few people who dismiss TDD/unit tests give bad arguments, claiming they are more efficient debugging by hand through the IDE, or that their coding skills are so amazing that they don't need it. I recognize that both of those arguments are utter bollocks, especially for a project that needs to be maintainable by multiple developers, but valid rebuttals to TDD seem to be few and far between. So the point of this post is to ask: am I just not understanding how to use TDD and automatic unit tests?

    Read the article

  • Erroneous/Incorrect C2248 error using Visual Studio 2010

    - by Dylan Bourque
    I'm seeing what I believe to be an erroneous/incorrect compiler error using the Visual Studio 2010 compiler. I'm in the process of up-porting our codebase from Visual Studio 2005, and I ran across a construct that was building correctly before but now generates a C2248 compiler error. Obviously, the code snippet below has been genericized, but it is a compilable example of the scenario. The ObjectPtr<T> C++ template comes from our codebase and is the source of the error in question.

    What appears to be happening is that the compiler is generating a call to the copy constructor for ObjectPtr<T> when it shouldn't (see my comment block in the SomeContainer::Foo() method below). For this code construct, there is a public cast operator for SomeUsefulData * on ObjectPtr<SomeUsefulData>, but it is not being chosen inside the true expression of the ?: operator. Instead, I get the two errors in the block quote below. Based on my knowledge of C++, this code should compile. Has anyone else seen this behavior? If not, can someone point me to a clarification of the compiler resolution rules that would explain why it's attempting to generate a copy of the object in this case?

    Thanks in advance, Dylan Bourque

    Visual Studio build output:

        c:\projects\objectptrtest\objectptrtest.cpp(177): error C2248: 'ObjectPtr<T>::ObjectPtr' : cannot access private member declared in class 'ObjectPtr<T>'
            with [ T=SomeUsefulData ]
        c:\projects\objectptrtest\objectptrtest.cpp(25) : see declaration of 'ObjectPtr<T>::ObjectPtr'
            with [ T=SomeUsefulData ]
        c:\projects\objectptrtest\objectptrtest.cpp(177): error C2248: 'ObjectPtr<T>::ObjectPtr' : cannot access private member declared in class 'ObjectPtr<T>'
            with [ T=SomeUsefulData ]
        c:\projects\objectptrtest\objectptrtest.cpp(25) : see declaration of 'ObjectPtr<T>::ObjectPtr'
            with [ T=SomeUsefulData ]

    Below is a minimal, compilable example of the scenario:

        #include <stdio.h>
        #include <tchar.h>

        template<class T>
        class ObjectPtr
        {
        public:
            ObjectPtr<T> (T* pObj = NULL, bool bShared = false) : m_pObject(pObj), m_bObjectShared(bShared) {}
            ~ObjectPtr<T> () { Detach(); }

        private:
            // private, unimplemented copy constructor and assignment operator
            // to guarantee that ObjectPtr<T> objects are not copied
            ObjectPtr<T> (const ObjectPtr<T>&);
            ObjectPtr<T>& operator = (const ObjectPtr<T>&);

        public:
            T * GetObject () { return m_pObject; }
            const T * GetObject () const { return m_pObject; }
            bool HasObject () const { return (GetObject()!=NULL); }
            bool IsObjectShared () const { return m_bObjectShared; }
            void ObjectShared (bool bShared) { m_bObjectShared = bShared; }
            bool IsNull () const { return !HasObject(); }
            void Attach (T* pObj, bool bShared = false)
            {
                Detach();
                if (pObj != NULL) {
                    m_pObject = pObj;
                    m_bObjectShared = bShared;
                }
            }
            void Detach (T** ppObject = NULL)
            {
                if (ppObject != NULL) {
                    *ppObject = m_pObject;
                    m_pObject = NULL;
                    m_bObjectShared = false;
                }
                else {
                    if (HasObject()) {
                        if (!IsObjectShared())
                            delete m_pObject;
                        m_pObject = NULL;
                        m_bObjectShared = false;
                    }
                }
            }
            void Detach (bool bDeleteIfNotShared)
            {
                if (HasObject()) {
                    if (bDeleteIfNotShared && !IsObjectShared())
                        delete m_pObject;
                    m_pObject = NULL;
                    m_bObjectShared = false;
                }
            }
            bool IsEqualTo (const T * pOther) const { return (GetObject() == pOther); }

        public:
            T * operator -> () { ASSERT(HasObject()); return m_pObject; }
            const T * operator -> () const { ASSERT(HasObject()); return m_pObject; }
            T & operator * () { ASSERT(HasObject()); return *m_pObject; }
            const T & operator * () const { ASSERT(HasObject()); return (const T &)(*m_pObject); }
            operator T * () { return m_pObject; }
            operator const T * () const { return m_pObject; }
            operator bool() const { return (m_pObject!=NULL); }
            ObjectPtr<T>& operator = (T * pObj) { Attach(pObj, false); return *this; }
            bool operator == (const T * pOther) const { return IsEqualTo(pOther); }
            bool operator == (T * pOther) const { return IsEqualTo(pOther); }
            bool operator != (const T * pOther) const { return !IsEqualTo(pOther); }
            bool operator != (T * pOther) const { return !IsEqualTo(pOther); }
            bool operator == (const ObjectPtr<T>& other) const { return IsEqualTo(other.GetObject()); }
            bool operator != (const ObjectPtr<T>& other) const { return !IsEqualTo(other.GetObject()); }
            bool operator == (int pv) const { return (pv==NULL)? IsNull() : (LPVOID(m_pObject)==LPVOID(pv)); }
            bool operator != (int pv) const { return !(*this == pv); }

        private:
            T * m_pObject;
            bool m_bObjectShared;
        };

        // Some concrete type that holds useful data
        class SomeUsefulData
        {
        public:
            SomeUsefulData () {}
            ~SomeUsefulData () {}
        };

        // Some concrete type that holds a heap-allocated instance of SomeUsefulData
        class SomeContainer
        {
        public:
            SomeContainer (SomeUsefulData* pUsefulData) { m_pData = pUsefulData; }
            ~SomeContainer ()
            {
                // nothing to do here
            }

        public:
            bool EvaluateSomeCondition ()
            {
                // fake condition check to give us an expression
                // to use in ?: operator below
                return true;
            }
            SomeUsefulData* Foo ()
            {
                // this usage of the ?: operator generates a C2248
                // error b/c it's attempting to call the copy
                // constructor on ObjectPtr<T>
                return EvaluateSomeCondition() ? m_pData : NULL;
                /**********[ DISCUSSION ]**********
                The following equivalent constructs compile w/out error and behave correctly:
                (1) explicit cast to SomeUsefulData* as a compiler hint
                        return EvaluateSomeCondition() ? (SomeUsefulData *)m_pData : NULL;
                (2) if/else instead of ?:
                        if (EvaluateSomeCondition())
                            return m_pData;
                        else
                            return NULL;
                (3) skip the condition check and return m_pData as a SomeUsefulData* directly
                        return m_pData;
                **********[ END DISCUSSION ]**********/
            }

        private:
            ObjectPtr<SomeUsefulData> m_pData;
        };

        int _tmain(int argc, _TCHAR* argv[])
        {
            return 0;
        }

    Read the article
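
    A hedged note on what may be happening here (a reading of the error, not a verified analysis of the VS2010 front end): with `cond ? m_pData : NULL`, the compiler has to find a common type for an ObjectPtr<SomeUsefulData> lvalue and the integer constant NULL. One candidate conversion is int -> SomeUsefulData* -> ObjectPtr<SomeUsefulData> via the converting constructor, and materialising that temporary needs the copy constructor, which is intentionally private - hence C2248. The question's own workaround (1) avoids any ObjectPtr temporary by making the true branch a plain pointer first:

        // Inside SomeContainer::Foo(): cast first, so the conditional operator's
        // common type is SomeUsefulData* and no ObjectPtr temporary is required.
        return EvaluateSomeCondition() ? static_cast<SomeUsefulData*>(m_pData) : NULL;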

  • Software: Launching League of Legends spectator mode from Command Line (Mac)

    - by Alex Popov
    Background (tl;dr at the end)

    League of Legends has a spectator mode, in which you can watch someone else's game (essentially a replay) with a 3 minute delay. Popular LoL website OP.GG has figured out a clever way of hosting these spectator games on their own servers, thereby making them replayable, as opposed to only being available while the game is on (as Riot does it). If you request a replay from OP.GG, it sends a batch file which looks for where League is situated and then the magic happens:

        @start "" "League of Legends.exe" "8394" "LoLLauncher.exe" "" "spectator fspectate.op.gg:4081 tjJbtRLQ/HMV7HuAxWV0XsXoRB4OmFBr 1391881421 NA1"

    This works fine on Windows. I'm trying to get it to work on Mac (which has an official client). First I tried running the same command by hand (split for convenience):

        /Applications/ ... /LeagueOfLegends.app/ ... /LeagueofLegends 8393 LoLLauncher \
            /Applications/ ... /LolClient spectator fspectate.op.gg:4081 tjJbtRLQ/HMV7HuAxWV0XsXoRB4OmFBr 1391881421 NA1

    Running this, however, just starts the LoLLauncher, which closes all the active League processes. Exactly the same thing happens if I just call:

        /Applications/ ... /LeagueOfLegends.app/ ... /LeagueofLegends

    Next I tried seeing what actually happens when spectator mode is initiated, so I ran:

        $ ps -axf | grep -i lol

    which showed:

        UID  PID   PPID  C STIME   TTY TIME    CMD
        503  3085  1     0 Wed02pm ??  0:00.00 (LolClient)
        503  24607 1     0 9:19am  ??  0:00.98 /Applications/League of Legends.app/Contents/LOL/RADS/system/UserKernel.app/Contents/MacOS/UserKernel updateandrun lol_launcher LoLLauncher.app
        503  24610 24607 0 9:19am  ??  1:08.76 /Applications/League of Legends.app/Contents/LoL/RADS/projects/lol_launcher/releases/0.0.0.122/deploy/LoLLauncher.app/Contents/MacOS/LoLLauncher
        503  24611 24610 0 9:19am  ??  1:23.02 /Applications/League of Legends.app/Contents/LoL/RADS/projects/lol_air_client/releases/0.0.0.127/deploy/bin/LolClient -runtime .\ -nodebug META-INF\AIR\application.xml .\ -- 8393
        503  24927 24610 0 9:44am  ??  0:03.37 /Applications/League of Legends.app/Contents/LoL/RADS/solutions/lol_game_client_sln/releases/0.0.0.117/deploy/LeagueOfLegends.app/Contents/MacOS/LeagueofLegends 8394 LoLLauncher /Applications/League of Legends.app/Contents/LoL/RADS/projects/lol_air_client/releases/0.0.0.127/deploy/bin/LolClient spectator 216.133.234.17:8088 Yn1oMX/n3LpXNebibzUa1i3Z+s2HV0ul 1400781241 NA1

    Of interest:

    - There is (LolClient), which I cannot kill by its PID.
    - UserKernel updateandrun lol_launcher LoLLauncher.app is launched first.
    - LoLLauncher is launched by the UserKernel (as we can see from the PPID).
    - The very long command (PID 24927) is how spectator mode is launched, and it is also launched by the UserKernel.
    - Spectator mode is launched in exactly the same way that the OP.GG .bat wanted to, with the only difference that spectator mode connects to Riot instead of OP.GG's spectate server.

    I tried attaching GDB to the LolClient, but I couldn't get anything meaningful from it since it's an Adobe AIR application (and I've never used GDB with code other than my own). Next I ran

        dtruss -a -b 100m -f -p $PID

    on everything I could think of: the LolClient, the LoLLauncher and the UserKernel, and skimmed the half a million lines produced. I found stuff like the GET request used to get the information of the game to spectate, but I could not see any launch of the equivalent of League of Legends.exe with spectator options. Finally, I ran

        lsof | grep -i lol

    to see if anything else was opened in the process, but didn't find anything that seemed appropriate. Open were UserKernel, LoLLauncher, LolClient, Adobe AIR, LeagueofLegends and then Bugsplat, all of which are expected. None of this seemed especially relevant to figuring out how LeagueofLegends was opened into spectator mode.

    It obviously can be done, since spectator mode is accessible from within the client. It seems likely that it can be done from the CLI, since Windows can do it and the clients are supposed to be equal - unless I'm missing something in the difference between how UNIX and Windows handle CLI application launches. My question is whether there are any other things I can try to figure out how to launch spectator mode myself.

    tl;dr: Trying to get into spectator mode from the CLI. It's possible on Windows (see the first code block) but it just restarts League on Mac. What else can I try to find what call is made, and how to reproduce it?

    PS: Please let me know how I can improve this question or its formatting. I'd love to use StackOverflow/SuperUser, but as the guys said on the podcast this week (Ep. 59) it's very intimidating. Sorry for posting this on StackOverflow the first time :(

    Read the article

  • Rails 2.3.5 and oauth-plugin

    - by pgb
    I started a Rails application from scratch, using Rails 2.3.5, and installed oauth-plugin. The installation was done by running:

        script/plugin install git://github.com/pelle/oauth-plugin.git

    Now, when I try to start the server, I get the following errors:

        => Rails 2.3.5 application starting on http://0.0.0.0:3000
        /System/Library/Frameworks/Ruby.framework/Versions/1.8/usr/lib/ruby/gems/1.8/gems/activesupport-2.3.5/lib/active_support/dependencies.rb:440:in `load_missing_constant': uninitialized constant Rails::Railtie (NameError)
            from /System/Library/Frameworks/Ruby.framework/Versions/1.8/usr/lib/ruby/gems/1.8/gems/activesupport-2.3.5/lib/active_support/dependencies.rb:80:in `const_missing'
            from /Users/Pablo/Projects/test.oauth/vendor/plugins/oauth-plugin/lib/oauth-plugin.rb:16
            from /Library/Ruby/Site/1.8/rubygems/custom_require.rb:31:in `gem_original_require'
            from /Library/Ruby/Site/1.8/rubygems/custom_require.rb:31:in `require'
            from /System/Library/Frameworks/Ruby.framework/Versions/1.8/usr/lib/ruby/gems/1.8/gems/activesupport-2.3.5/lib/active_support/dependencies.rb:156:in `require'
            from /System/Library/Frameworks/Ruby.framework/Versions/1.8/usr/lib/ruby/gems/1.8/gems/activesupport-2.3.5/lib/active_support/dependencies.rb:521:in `new_constants_in'
            from /System/Library/Frameworks/Ruby.framework/Versions/1.8/usr/lib/ruby/gems/1.8/gems/activesupport-2.3.5/lib/active_support/dependencies.rb:156:in `require'
            from /Users/Pablo/Projects/test.oauth/vendor/plugins/oauth-plugin/rails/init.rb:1:in `evaluate_init_rb'
            from /System/Library/Frameworks/Ruby.framework/Versions/1.8/usr/lib/ruby/gems/1.8/gems/rails-2.3.5/lib/rails/plugin.rb:158:in `evaluate_init_rb'
            from /System/Library/Frameworks/Ruby.framework/Versions/1.8/usr/lib/ruby/gems/1.8/gems/activesupport-2.3.5/lib/active_support/core_ext/kernel/reporting.rb:11:in `silence_warnings'
            from /System/Library/Frameworks/Ruby.framework/Versions/1.8/usr/lib/ruby/gems/1.8/gems/rails-2.3.5/lib/rails/plugin.rb:154:in `evaluate_init_rb'
            from /System/Library/Frameworks/Ruby.framework/Versions/1.8/usr/lib/ruby/gems/1.8/gems/rails-2.3.5/lib/rails/plugin.rb:48:in `load'
            from /System/Library/Frameworks/Ruby.framework/Versions/1.8/usr/lib/ruby/gems/1.8/gems/rails-2.3.5/lib/rails/plugin/loader.rb:38:in `load_plugins'
            from /System/Library/Frameworks/Ruby.framework/Versions/1.8/usr/lib/ruby/gems/1.8/gems/rails-2.3.5/lib/rails/plugin/loader.rb:37:in `each'
            from /System/Library/Frameworks/Ruby.framework/Versions/1.8/usr/lib/ruby/gems/1.8/gems/rails-2.3.5/lib/rails/plugin/loader.rb:37:in `load_plugins'
            from /System/Library/Frameworks/Ruby.framework/Versions/1.8/usr/lib/ruby/gems/1.8/gems/rails-2.3.5/lib/initializer.rb:369:in `load_plugins'
            from /System/Library/Frameworks/Ruby.framework/Versions/1.8/usr/lib/ruby/gems/1.8/gems/rails-2.3.5/lib/initializer.rb:165:in `process'
            from /System/Library/Frameworks/Ruby.framework/Versions/1.8/usr/lib/ruby/gems/1.8/gems/rails-2.3.5/lib/initializer.rb:113:in `send'
            from /System/Library/Frameworks/Ruby.framework/Versions/1.8/usr/lib/ruby/gems/1.8/gems/rails-2.3.5/lib/initializer.rb:113:in `run'
            from /Users/Pablo/Projects/test.oauth/config/environment.rb:9
            from /Library/Ruby/Site/1.8/rubygems/custom_require.rb:31:in `gem_original_require'
            from /Library/Ruby/Site/1.8/rubygems/custom_require.rb:31:in `require'
            from /System/Library/Frameworks/Ruby.framework/Versions/1.8/usr/lib/ruby/gems/1.8/gems/activesupport-2.3.5/lib/active_support/dependencies.rb:156:in `require'
            from /System/Library/Frameworks/Ruby.framework/Versions/1.8/usr/lib/ruby/gems/1.8/gems/activesupport-2.3.5/lib/active_support/dependencies.rb:521:in `new_constants_in'
            from /System/Library/Frameworks/Ruby.framework/Versions/1.8/usr/lib/ruby/gems/1.8/gems/activesupport-2.3.5/lib/active_support/dependencies.rb:156:in `require'
            from /System/Library/Frameworks/Ruby.framework/Versions/1.8/usr/lib/ruby/gems/1.8/gems/rails-2.3.5/lib/commands/server.rb:84
            from /Library/Ruby/Site/1.8/rubygems/custom_require.rb:31:in `gem_original_require'
            from /Library/Ruby/Site/1.8/rubygems/custom_require.rb:31:in `require'
            from script/server:3

    I can't figure out why this is failing. Am I missing a dependency? What other information can I provide to help others help me figure this out?

    Read the article
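
    A hedged observation rather than a confirmed fix: Rails::Railtie only exists in Rails 3, and the NameError is raised from the freshly installed plugin's own lib/oauth-plugin.rb, which suggests that the plugin's current master targets Rails 3 while this application runs 2.3.5. One option, sketched below, is to reinstall the plugin pinned to a Rails 2.x-compatible revision; the branch name used here is a placeholder, not a verified ref - check the plugin's repository for the actual Rails 2 line:

        # remove the Rails 3-only checkout installed above
        rm -rf vendor/plugins/oauth-plugin

        # reinstall, pinned to a Rails 2.x-compatible branch or tag
        # ("rails2x" is a placeholder, not a verified branch name)
        script/plugin install git://github.com/pelle/oauth-plugin.git -r rails2x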

  • adobe flash builder (flex4): Error #2025 or Error: addChild() is not available in this class. Instead

    - by user306584
    Hi, I'm a complete newbie to Flex, so apologies for my dumbness. I've searched for an answer but haven't found anything that seems to do the trick.

    What I'm trying to do: port this example http://www.adobe.com/devnet/air/flex/articles/flex_air_codebase_print.html to Flash Builder 4. All seems to be fine but for one thing. When I use the original code for the AIR application:

        <?xml version="1.0" encoding="utf-8"?>
        <s:WindowedApplication xmlns:fx="http://ns.adobe.com/mxml/2009"
                               xmlns:s="library://ns.adobe.com/flex/spark"
                               xmlns:mx="library://ns.adobe.com/flex/mx"
                               creationComplete="onApplicationComplete()">
            <fx:Script>
                <![CDATA[
                    private static const neededForCompilation:AirGeneralImplementation = null;

                    private function onApplicationComplete():void
                    {
                        var can:MainCanvas = new MainCanvas();
                        this.addChild(can);
                        can.labelMessage = "Loaded in an AIR Application ";
                    }
                ]]>
            </fx:Script>
            <fx:Declarations>
                <!-- Place non-visual elements (e.g., services, value objects) here -->
            </fx:Declarations>
        </s:WindowedApplication>

    I get this run time error:

        Error: addChild() is not available in this class. Instead, use addElement() or modify the skin, if you have one.
            at spark.components.supportClasses::SkinnableComponent/addChild()[E:\dev\4.0.0\frameworks\projects\spark\src\spark\components\supportClasses\SkinnableComponent.as:1038]

    If I substitute the code with this.addElement(can); everything loads well, but the first time I press any of the buttons on the main canvas I get the following run time error:

        ArgumentError: Error #2025: The supplied DisplayObject must be a child of the caller.
            at flash.display::DisplayObjectContainer/getChildIndex()
            at mx.managers::SystemManager/getChildIndex()[E:\dev\4.0.0\frameworks\projects\framework\src\mx\managers\SystemManager.as:1665]
            at mx.managers.systemClasses::ActiveWindowManager/mouseDownHandler()[E:\dev\4.0.0\frameworks\projects\framework\src\mx\managers\systemClasses\ActiveWindowManager.as:437]

    Here's the super simple code for the main canvas:

        <?xml version="1.0" encoding="utf-8"?>
        <s:Application xmlns:fx="http://ns.adobe.com/mxml/2009"
                       xmlns:s="library://ns.adobe.com/flex/spark"
                       xmlns:mx="library://ns.adobe.com/flex/mx"
                       minWidth="955" minHeight="600"
                       creationComplete="init();">
            <fx:Declarations>
                <!-- Place non-visual elements (e.g., services, value objects) here -->
            </fx:Declarations>
            <fx:Script source="main.as" />
            <mx:Label id="lblMessage" text="The UI from the shared Flex app BothCode" x="433" y="112"/>
            <s:Button x="433" y="141" click="saveFile();" label="Save File"/>
            <s:Button x="601" y="141" click="GeneralFactory.getGeneralInstance().airOnlyFunctionality();" label="Air Only"/>
        </s:Application>

    Any help would be immensely appreciated. And any pointers on how to set up a project that can compile in both AIR and Flash while sharing the same code, all for Flex 4, would also be immensely appreciated. Thank you!

    Read the article

  • Flex 4, chart, Error 1009

    - by Stephane
    Hello, I am new to Flex development and I am stuck with this error:

        TypeError: Error #1009: Cannot access a property or method of a null object reference.
            at mx.charts.series::LineSeries/findDataPoints()[E:\dev\4.0.0\frameworks\projects\datavisualization\src\mx\charts\series\LineSeries.as:1460]
            at mx.charts.chartClasses::ChartBase/findDataPoints()[E:\dev\4.0.0\frameworks\projects\datavisualization\src\mx\charts\chartClasses\ChartBase.as:2202]
            at mx.charts.chartClasses::ChartBase/mouseMoveHandler()[E:\dev\4.0.0\frameworks\projects\datavisualization\src\mx\charts\chartClasses\ChartBase.as:4882]

    I am trying to create a chart where I can move the points with the mouse. I notice that the error doesn't occur if I move the point very slowly. I have tried to use the debugger and print some debug output everywhere, without success. All the behaviours were OK until I added modifyData. Please let me know if you have some experience with this kind of error; I will appreciate any help. Is it also possible to suppress the error being thrown? After the error occurs, if I click the "dismiss all errors" button the component works great. Here is the simple code of the chart:

        <fx:Declarations>
        </fx:Declarations>

        <fx:Script>
            <![CDATA[
                import mx.charts.HitData;
                import mx.charts.events.ChartEvent;
                import mx.charts.events.ChartItemEvent;
                import mx.collections.ArrayCollection;

                [Bindable]
                private var selectItem:Object;
                [Bindable]
                private var chartMouseY:int;
                [Bindable]
                private var hitData:Boolean = false;
                [Bindable]
                private var profitPeriods:ArrayCollection = new ArrayCollection([
                    { period: "Qtr1 2006", profit: 32 },
                    { period: "Qtr2 2006", profit: 47 },
                    { period: "Qtr3 2006", profit: 62 },
                    { period: "Qtr4 2006", profit: 35 },
                    { period: "Qtr1 2007", profit: 25 },
                    { period: "Qtr2 2007", profit: 55 } ]);

                public function chartMouseUp(e:MouseEvent):void {
                    if (hitData) {
                        hitData = false;
                    }
                }

                private function chartMouseMove(e:MouseEvent):void {
                    if (hitData) {
                        var p:Point = new Point(linechart.mouseX, linechart.mouseY);
                        var d:Array = linechart.localToData(p);
                        chartMouseY = d[1];
                        modifyData();
                    }
                }

                public function modifyData():void {
                    var idx:int = profitPeriods.getItemIndex(selectItem);
                    var item:Object = profitPeriods.getItemAt(idx);
                    item.profit = chartMouseY;
                    profitPeriods.setItemAt(item, idx);
                }

                public function chartMouseDown(e:MouseEvent):void {
                    if (!hitData) {
                        var hda:Array = linechart.findDataPoints(e.currentTarget.mouseX, e.currentTarget.mouseY);
                        if (hda[0]) {
                            selectItem = hda[0].item;
                            hitData = true;
                        }
                    }
                }
            ]]>
        </fx:Script>

        <s:layout>
            <s:HorizontalLayout horizontalAlign="center" verticalAlign="middle" />
        </s:layout>

        <s:Panel title="LineChart Control">
            <s:VGroup>
                <s:HGroup>
                    <mx:LineChart id="linechart" color="0x323232" height="500" width="377"
                                  mouseDown="chartMouseDown(event)"
                                  mouseMove="chartMouseMove(event)"
                                  mouseUp="chartMouseUp(event)"
                                  showDataTips="true"
                                  dataProvider="{profitPeriods}">
                        <mx:horizontalAxis>
                            <mx:CategoryAxis categoryField="period"/>
                        </mx:horizontalAxis>
                        <mx:series>
                            <mx:LineSeries yField="profit" form="segment" displayName="Profit"/>
                        </mx:series>
                    </mx:LineChart>
                    <mx:Legend dataProvider="{linechart}" color="0x323232"/>
                </s:HGroup>
                <mx:Form>
                    <mx:TextArea id="DEBUG" height="200" width="300"/>
                </mx:Form>
            </s:VGroup>
        </s:Panel>

    UPDATE 30/2010: the null object is _renderData.filteredCache of the LineSeries; the code called just before the error is ChartBase's default mouseMoveHandler on the LineChart. Is it possible to remove that handler, or to provide a filteredCache?

    Read the article

  • NHibernate MySQL Composite-Key

    - by LnDCobra
    I am trying to create a composite key that mimicks the set of PrimaryKeys in the built in MySQL.DB table. The Db primary key is as follows: Field | Type | Null | ---------------------------------- Host | char(60) | No | Db | char(64) | No | User | char(16) | No | This is my DataBasePrivilege.hbm.xml file <?xml version="1.0" encoding="utf-8" ?> <hibernate-mapping xmlns="urn:nhibernate-mapping-2.2" assembly="TGS.MySQL.DataBaseObjects" namespace="TGS.MySQL.DataBaseObjects"> <class name="TGS.MySQL.DataBaseObjects.DataBasePrivilege,TGS.MySQL.DataBaseObjects" table="db"> <composite-id name="CompositeKey" class="TGS.MySQL.DataBaseObjects.DataBasePrivilegePrimaryKey, TGS.MySQL.DataBaseObjects"> <key-property name="Host" column="Host" type="char" length="60" /> <key-property name="DataBase" column="Db" type="char" length="64" /> <key-property name="User" column="User" type="char" length="16" /> </composite-id> </class> </hibernate-mapping> The following are my 2 classes for my composite key: namespace TGS.MySQL.DataBaseObjects { public class DataBasePrivilege { public virtual DataBasePrivilegePrimaryKey CompositeKey { get; set; } } public class DataBasePrivilegePrimaryKey { public string Host { get; set; } public string DataBase { get; set; } public string User { get; set; } public override bool Equals(object obj) { if (ReferenceEquals(null, obj)) return false; if (ReferenceEquals(this, obj)) return true; if (obj.GetType() != typeof (DataBasePrivilegePrimaryKey)) return false; return Equals((DataBasePrivilegePrimaryKey) obj); } public bool Equals(DataBasePrivilegePrimaryKey other) { if (ReferenceEquals(null, other)) return false; if (ReferenceEquals(this, other)) return true; return Equals(other.Host, Host) && Equals(other.DataBase, DataBase) && Equals(other.User, User); } public override int GetHashCode() { unchecked { int result = (Host != null ? Host.GetHashCode() : 0); result = (result*397) ^ (DataBase != null ? DataBase.GetHashCode() : 0); result = (result*397) ^ (User != null ? User.GetHashCode() : 0); return result; } } } } And the following is the exception I am getting: Execute System.InvalidCastException: Unable to cast object of type 'System.Object[]' to type 'TGS.MySQL.DataBaseObjects.DataBasePrivilegePrimaryKey'. 
at (Object , GetterCallback ) at NHibernate.Bytecode.Lightweight.AccessOptimizer.GetPropertyValues(Object target) at NHibernate.Tuple.Component.PocoComponentTuplizer.GetPropertyValues(Object component) at NHibernate.Type.ComponentType.GetPropertyValues(Object component, EntityMode entityMode) at NHibernate.Type.ComponentType.GetHashCode(Object x, EntityMode entityMode) at NHibernate.Type.ComponentType.GetHashCode(Object x, EntityMode entityMode, ISessionFactoryImplementor factory) at NHibernate.Engine.EntityKey.GenerateHashCode() at NHibernate.Engine.EntityKey..ctor(Object identifier, String rootEntityName, String entityName, IType identifierType, Boolean batchLoadable, ISessionFactoryImplementor factory, EntityMode entityMode) at NHibernate.Engine.EntityKey..ctor(Object id, IEntityPersister persister, EntityMode entityMode) at NHibernate.Event.Default.DefaultLoadEventListener.OnLoad(LoadEvent event, LoadType loadType) at NHibernate.Impl.SessionImpl.FireLoad(LoadEvent event, LoadType loadType) at NHibernate.Impl.SessionImpl.Get(String entityName, Object id) at NHibernate.Impl.SessionImpl.Get(Type entityClass, Object id) at NHibernate.Impl.SessionImpl.Get[T](Object id) at TGS.MySQL.DataBase.DataProvider.GetDatabasePrivilegeByHostDbUser(String host, String db, String user) in C:\Documents and Settings\Michal\My Documents\Visual Studio 2008\Projects\TGS\TGS.MySQL.DataBase\DataProvider.cs:line 20 at TGS.UserAccountControl.UserAccountManager.GetDatabasePrivilegeByHostDbUser(String host, String db, String user) in C:\Documents and Settings\Michal\My Documents\Visual Studio 2008\Projects\TGS\TGS.UserAccountControl\UserAccountManager.cs:line 10 at TGS.UserAccountControlTest.UserAccountManagerTest.CanGetDataBasePrivilegeByHostDbUser() in C:\Documents and Settings\Michal\My Documents\Visual Studio 2008\Projects\TGS\TGS.UserAccountControlTest\UserAccountManagerTest.cs:line 12 I am new to NHibernate and any help would be appreciated. I just can't see where it is getting the object[] from? Is the composite key supposed to be object[]?
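    One thing worth checking, based purely on the bottom of that stack trace: "Unable to cast object of type 'System.Object[]'" during Session.Get usually means the id being passed in is an object array (for example new object[] { host, db, user }) rather than an instance of the composite-id class; NHibernate does not build the key object for you. A sketch of what DataProvider.GetDatabasePrivilegeByHostDbUser could look like - the sessionFactory field is assumed, since that code is not shown in the question:

        using NHibernate;
        using TGS.MySQL.DataBaseObjects;

        public class DataProvider
        {
            private readonly ISessionFactory sessionFactory;

            public DataProvider(ISessionFactory sessionFactory)
            {
                this.sessionFactory = sessionFactory;
            }

            public DataBasePrivilege GetDatabasePrivilegeByHostDbUser(string host, string db, string user)
            {
                // Pass an instance of the composite-id class, not an object[].
                var key = new DataBasePrivilegePrimaryKey { Host = host, DataBase = db, User = user };

                using (ISession session = sessionFactory.OpenSession())
                {
                    return session.Get<DataBasePrivilege>(key);
                }
            }
        }

    If the real code already passes a DataBasePrivilegePrimaryKey and the cast still happens, the next suspect would be the reflection optimizer seen higher up the trace, but the object[] in the message points strongly at the argument handed to Get().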

    Read the article

  • If 'Architect' is a dirty word - what's the alternative; when not everyone can actually design a goo

    - by Andras Zoltan
    Now - I'm a developer first and foremost; but whenever I sit down to work on a big project with lots of interlinking components and areas, I will forward-plan my interfaces, base classes etc as best I can - putting on my Architect hat. For a few weeks I've been doing this for a huge project - designing whole swathes of interfaces etc for a business-wide platform that we're developing. The basic structure is a couple of big projects that consists of service and data interfaces, with some basic implementations of all of these. On their own, these assemblies are useless though, as they are simply intended intended as a scaffold on which to build a business-specific implementation (we have a lot of businesses). Therefore, the design of the core platform is absolutely crucial, since consumers of the system are not intended to know which implementation they are actually using. In the past it's not worked so well, but after a few proof-of-concepts and R&D projects this new platform is now growing nicely and is already proving itself. Then somebody else gets involved in the project - he's a TDD man who sees code-level architecture as an irrelevance and is definitely from the camp that 'architect' is a dirty word - I should add that our working relationship is very good despite this :) He's open about the fact that he can't architect in advance and obviously TDD really helps him because it allows him to evolve his systems over time. That I get, and totally understand; but it means that his coding style, basically, doesn't seem to be able to honour the architecture that I've been putting in place. Now don't get me wrong - he's an awesome coder; but the other day he needed to extend one of his components (an implementation of a core interface) to bring in an extra implementation-specific dependency; and in doing so he extended the core interface as well as his implementation (he uses ReSharper), thus breaking the independence of the whole interface. When I pointed out his error to him, he was dismayed. Being test-first, all that mattered to him was that he'd made his tests pass, and just said 'well, I need that dependency, so can't we put it in?'. Of course we could put it in, but I was frustrated that he couldn't see that refactoring the generic interface to incorporate an implementation-specific feature was just wrong! But it is all very Charlie Brown to him (you know the sound the adults make when they're talking to the children) - as far as he's concerned we don't need to worry about it because we can always refactor. The problem is, the culture of test-write-refactor is all very well and good - but not when you're dealing with a platform that is going to be shared out among so many projects that you could never get them all in one place to make the refactorings work. In my opinion, sometimes you actually have to think about what you're doing, and not just let nature take its course. Am I simply fulfilling the role of Architect as a dirty word here? I believe that architecture is important and should be thought about before code gets written; unless it's a particularly small project. But when you're working in a team of people who don't think that way, or even can't think that way how can you actually get this across? Is it a case of simply making the architecture off-limits to changes by other people? I don't want to start having bloody committees just to be able to grow the system; but equally I don't want to be the only one responsible for it all. 
Do you think the architect role is a waste of time? Is it at odds with TDD and other practices? Can this mix of different practices be made to work, or should I just be a lot less precious (and in so doing allow a generic platform to become useless!)? Or do I just lay down the law? Any ideas/experiences/views gratefully received.

    Read the article

  • A methodology that allows for a single Java code base covering many different versions?

    - by Thorbjørn Ravn Andersen
    I work in a small shop where we have a LOT of legacy Cobol code and where a methology has been adopted to allow us to minimize forking and branching as much as possible. For a given release we have three levels: CORE - bottom layer, this code is common to all releases GROUP - optional code common to several customers. CUSTOMER - optional code specific for a single customer. When a program is needed, it is first searched for in CUSTOMER, then in GROUP and finally in CORE. A given application for us invokes many programs which all are looked for in this sequence (think exe files and PATH under Windows). We also have Java programs interacting with this legacy code, and as the core-group-customer lookup mehchanism does not lend it self easily to Java it has tended to grow in a CVS branch for each customer, requiring much too much maintainance. The Java part and the backend part tend to be developed in parallel. I have been assigned to figure out a way to make the two worlds meet. Essentially we want a Java enviornment which allows us to have a single code base with sources for each release, where we easily can select a group and a customer and work with the application as it goes for that customer, and then easily switch to another codeset and THAT customer. I was thinking of perhaps a scenario with an Eclipse project for each core, customer, and group and then use Project Sets to select those we need for a given scenario. The problem I cannot get my head about, is how we would create robust code in the CORE projects which will work regardless of which group and customer is selected. A Factory class which knows which sub class of a passed Class object to invoke instead of each and every new? Others must have had similar code base management problems. Anybody with experiences to share? EDIT: The conclusion to this problem above has been that CVS needs to be replaced with a source code management system better suited for dealing with many branches concurrently and the migration of source from one component to the other while keeping history. Inspired by the recent migration by slf4j and logback we are currently looking at git as it handles branches very well. We've considered subversion and mercurial too but git appears to be better for single location, multibranched projects. I've asked about Perforce in another question, but my personal inclination is towards open source solutions for something as crucial as this. EDIT: After some more pondering, we've found that our actual pain point is that we use branches in CVS, and that branches in CVS are the easiest to work with if you branch ALL files! The revised conclusion is that we can do this with CVS alone, by switching to a forest of java projects, each corresponding to one of the levels above, and use the Eclipse build paths to tie them together so each CUSTOMER version pulls in the appropriate GROUP and CORE project. We still want to switch to a better versioning system but this is so important a decision so we want to delay it as much as possible. EDIT: I now have a proof-of-concept implementation of the CORE-GROUP-CUSTOMER concept using Google Guice 2.0 - the @ImplementedBy tag is just what we need. I wonder what everybody else does? Using if's all over the place? EDIT: Now I also need this functionality for web applications. Guice was until the JSR-330 is in place. Anybody with versioning experience? 
EDIT: JSR-330/299 is now in place with the JEE6 reference implementation Weld (based on JBoss Seam). I have reimplemented the proof-of-concept with Weld and can see that if we use @Alternative along with ... in beans.xml we get the behaviour we desire, i.e. we can provide a new implementation for a given piece of functionality in CORE without changing a single bit in the CORE jars. Initial reading of the Servlet 3.0 specification indicates that it may support the same functionality for web application resources (not code). We will now do initial testing on the real application.
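    To make the @Alternative idea concrete, here is a minimal sketch; the interface and class names are invented for illustration and are not from the actual platform. The CORE jar ships an interface, a default bean and the injection point; a CUSTOMER jar ships an @Alternative bean and switches it on in its own beans.xml, so CORE is never recompiled:

        // PricingRule.java (CORE jar)
        public interface PricingRule {
            java.math.BigDecimal priceFor(String productCode);
        }

        // DefaultPricingRule.java (CORE jar)
        public class DefaultPricingRule implements PricingRule {
            public java.math.BigDecimal priceFor(String productCode) {
                return java.math.BigDecimal.TEN; // placeholder default behaviour
            }
        }

        // QuoteService.java (CORE jar) - consumes whichever implementation is active
        public class QuoteService {
            @javax.inject.Inject
            private PricingRule pricingRule;
        }

        // AcmePricingRule.java (CUSTOMER jar, package com.example.customer)
        @javax.enterprise.inject.Alternative
        public class AcmePricingRule implements PricingRule {
            public java.math.BigDecimal priceFor(String productCode) {
                return java.math.BigDecimal.ONE; // customer-specific behaviour
            }
        }

    and in the customer jar's META-INF/beans.xml (abbreviated):

        <beans>
            <alternatives>
                <class>com.example.customer.AcmePricingRule</class>
            </alternatives>
        </beans>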

    Read the article

  • Run unit tests in Jenkins / Hudson in automated fashion from dev to build server

    - by Kevin Donde
    We are currently running a Jenkins (Hudson) CI server to build and package our .net web projects and database projects. Everything is working great but I want to start writing unit tests and then only passing the build if the unit tests pass. We are using the built in msbuild task to build the web project. With the following arguments ... MsBuild Version .NET 4.0 MsBuild Build File ./WebProjectFolder/WebProject.csproj Command Line Arguments ./target:Rebuild /p:Configuration=Release;DeployOnBuild=True;PackageLocation=".\obj\Release\WebProject.zip";PackageAsSingleFile=True We need to run automated tests over our code that run automatically when we build on our machines (post build event possibly) but also run when Jenkins does a build for that project. If you run it like this it doesn't build the unit tests project because the web project doesn't reference the test project. The test project would reference the web project but I'm pretty sure that would be butchering our automated builds as they exist primarily to build and package our deployments. Running these tests should be a step in that automated build and package process. Options ... Create two Jenkins jobs. one to run the tests ... if the tests pass another build is triggered which builds and packages the web project. Put the post build event on the test project. Build the solution instead of the project (make sure the solution contains the required tests) and put post build events on any test projects that would run the nunit console to run the tests. Then use the command line to copy all the required files from each of the bin and content directories into a package. Just build the test project in jenkins instead of the web project in jenkins. The test project would reference the web project (depending on what you're testing) and build it. Problems ... There's two jobs and not one. Two things to debug not one. One to see if the tests passed and one to build and compile the web project. The tests could pass but the build could fail if its something that isn't used by what you're testing ... This requires us to know exactly what goes into the build. Right now msbuild does it all for us. If you have multiple teams working on a project everytime an extra folder is created you have to worry about the possibly brittle command line statements. This seems like a corruption of our main purpose here. The tests should be a step in this process not the overriding most important thing in this process. I'm also not 100% sure that a triggered build is the same as a normal build does it do all the same things as a normal build. Move all the correct files in the same way move them all into the same directories etc. Initial problem. We want to run our tests whenever our main project is built. But adding a post build event to the web project that runs against the test project doesn't work because the web project doesn't reference the test project and won't trigger a build of this project. I could go on ... but that's enough ... We've spent about a week trying to make this work nicely but haven't succeeded. Feel free to edit this if you feel you can get a better response ...
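    For what it is worth, one common way out of this - a sketch only, with placeholder paths and project names - is a variant of options 2 and 3 above: let Jenkins build the solution (so the test project compiles even though the web project does not reference it), then add a second build step that runs nunit-console. NUnit's console runner returns a non-zero exit code when a test fails, which is what makes Jenkins mark the build as failed:

        REM Build step 1 - build the whole solution so the test project is compiled too
        "%WINDIR%\Microsoft.NET\Framework\v4.0.30319\MSBuild.exe" MySolution.sln /t:Rebuild /p:Configuration=Release

        REM Build step 2 - run the unit tests; a failing test fails the Jenkins build
        "C:\Program Files (x86)\NUnit 2.5.10\bin\net-2.0\nunit-console.exe" Tests\bin\Release\Tests.dll /xml=nunit-result.xml

    The packaging arguments (DeployOnBuild, PackageLocation) can stay on the web project's own MSBuild step, and the NUnit plugin for Jenkins can publish nunit-result.xml for trend reports. Treat the paths above as guesses to adapt, not as the one true layout.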

    Read the article

  • Java multiple class compositing and boiler plate reduction

    - by h2g2java
    We all know why Java does/should not have multiple inheritance. So this is not questioning about what has already been debated till-cows-come-home. This discusses what we would do when we wish to create a class that has the characteristics of two or more other classes. Probably, most of us would do this to "inherit" from three classes. For simplicity, I left out the constructor.: class Car extends Vehicle { final public Transport transport; final public Machine machine; } So that, Car class directly inherits methods and objects of Vehicle class, but would have to refer to transport and machine explicitly to refer to objects instantiated in Transport and Machine. Car car = new Car(); car.drive(); // from Vehicle car.transport.isAmphibious(); // from Transport car.machine.getCO2Footprint(); // from Machine I thought this was a good idea until when I encounter frameworks that require setter and getter methods. For example, the XML <Car amphibious='false' footPrint='1000' model='Fordstatic999'/> would look for the methods setAmphibious(..), setFootPrint(..) and setModel(..). Therefore, I have to project the methods from Transport and Machine classes class Car extends Vehicle { final public Transport transport; final public Machine machine; public void setAmphibious(boolean b){ this.transport.setAmphibious(b); } public void setFootPrint(String fp){ this.machine.setFootPrint(fp); } } This is OK, if there were just a few characteristics. Right now, I am trying to adapt all of SmartGWT into GWT UIBinder, especially those classes that are not a GWT widget. There are lots of characteristics to project. Wouldn't it be nice if there exists some form of annotation framework that is like this: class Car extends Vehicle @projects {Transport @projects{Machine @projects Guzzler}} { /* No need to explicitly instantiate Transport, Machine or Guzzler */ .... } Where, in case of common names of characteristics exist, the characteristics of Machine would take precedence Guzzler's, and Transport's would have precedence over Machine's, and Vehicle's would have precedence over Transport's. The annotation framework would then instantiate Transport, Machine and Guzzler as hidden members of Car and expand to break-out the protected/public characteristics, in the precedence dictated by the @project annotation sequence, into actual source code or into byte-code. Preferably into byte-code. So that the setFootPrint method is found in both Machine and Guzzler, only that of Machine's would be projected. Questions: Don't you think this is a good idea to have such a framework? Does such a framework already exist? Tell me where/what. Is there an eclipse plugin that does it? Is there a proposal or plan anywhere that you know about such an annotation framework? It would be wonderful too, if the annotation/plugin framework lets me specify that boolean, int, or whatever else needs to be converted from String and does the conversion/parsing for me too. Please advise, somebody. I hope wording of my question was clear enough. Thx. Edited: To avoid OO enthusiasts jumping to conclusion, I have renamed the title of this question.
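    On the question of whether such a framework already exists: Project Lombok's @Delegate annotation generates, at compile time, the same forwarding methods that are written by hand above. A sketch against the post's own Car/Vehicle/Transport/Machine types - Lombok is my suggestion here, not something the post uses, and clashing method names between the delegated types still have to be resolved manually (for example with @Delegate's excludes):

        import lombok.experimental.Delegate; // lombok.Delegate in older Lombok releases

        class Car extends Vehicle {
            // Lombok generates public forwarding methods (setAmphibious, setFootPrint, ...)
            // for every public method of Transport and Machine.
            @Delegate private final Transport transport;
            @Delegate private final Machine machine;

            Car(Transport transport, Machine machine) {
                this.transport = transport;
                this.machine = machine;
            }
        }

    It does not implement the precedence rules sketched in the @projects idea (Guzzler overridden by Machine, Machine by Transport, and so on), so overlapping signatures would still need explicit excludes or hand-written overrides.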

    Read the article

  • How do I solve "Two different CRTLDLLs are loaded" when using packages in C++ Builder 2010?

    - by David M
    Hi, We are trying to split up our monolithic EXE into a combination of an EXE and several packages. So far, we have one package that we're trying to use, and when running the EXE Codeguard shows the following error on startup: CG Error Two different CRTLDLLs are loaded. CG might report false errors (C:\Windows\system32\CC32100MT.DLL) (D:\Projects\Foo\Bar.bpl) OK I read this as two different runtime libraries being loaded - one, the correct one (CC32100MT.dll), one incorrect, which is the package we're trying to use. Continuing to run the program shows odd errors, especially casting between classes or passing a pointer to a class as a parameter in a method that crosses the EXE/DLL boundary. Codeguard itself doesn't show any other errors at all though. How do we solve this? Some more details We've looked at as many things as we (the developer working on this and I) can collectively think of: Each project is built using runtime packages. The EXE host lists Bar in its package list. Each project is set to compile with dynamic RTL. However, changing this does not solve the problem. The package is linked to the EXE via its BPI file, but linking via a LIB makes no difference either. The EXE and BPL are compiled with the same project settings, where the same options exist for both types of project. We think, anyway :) There is only one copy of the BPL and BPI on the system: it's definitely linking to the right one. Examining the EXE and BPL with Depends and TDump show they are both using C:\Windows\system32\CC32100MT.DLL. They should both be using the one RTL. Creating a new project (a plain VCL forms application) and linking to the BPL (via its BPI) works fine. Something in the process of adding all the files and LIBs that make our EXE contain the code it needs to changes this, but we haven't been able to figure out what. The LIBs all either correspond to DLLs we use (flat C interface, usually look as though they were built with MSVC) or are simple projects with lots of related files, compiled to a lib for the purpose of linking into the EXE - these correspond roughly to the areas of the program we want to split to BPLs, by the way. There don't seem to be project options for the LIB projects that would affect RTL linking, unless we've missed them. I have exhaustively hunted through Depends and looked at all RTL and CC32*.dll files the EXE and every single DLL references. All are identical: rtl140.bpl and CC32100MT.DLL. Fully qualified paths show they are the same files, too. Everything should be using the one same run-time library. We're stumped. Absolutely stumped. We've had other problems using BPLs (they seem to be surprisingly tricky things, especially using C++) but have managed to solve them all. This one we've had no luck at all and we'd really appreciate any insights :) We're using C++Builder 2010 (as part of RAD Studio actually, but with little Delphi code apart from components.)

    Read the article

  • Same source, multiple targets with different resources (Visual Studio .Net 2008)

    - by Mike Bell
    A set of software products differ only by their resource strings, binary resources, and by the strings / graphics / product keys used by their Visual Studio Setup projects. What is the best way to create, organize, and maintain them? i.e. All the products essentially consist of the same core functionality customized by graphics, strings, and other resource data to form each product. Imagine you are creating a set of products like "Excel for Bankers", Excel for Gardeners", "Excel for CEOs", etc. Each product has the the same functionality, but differs in name, graphics, help files, included templates etc. The environment in which these are being built is: vanilla Windows.Forms / Visual Studio 2008 / C# / .Net. The ideal solution would be easy to maintain. e.g. If I introduce a new string / new resource projects I haven't added the resource to should fail at compile time, not run time. (And subsequent localization of the products should also be feasible). Hopefully I've missed the blindingly-obvious and easy way of doing all this. What is it? ============ Clarification(s) ================ By "product" I mean the package of software that gets installed by the installer and sold to the end user. Currently I have one solution, consisting of multiple projects, (including a Setup project), which builds a set of assemblies and create a single installer. What I need to produce are multiple products/installers, all with similar functionality, which are built from the same set of assemblies but differ in the set of resources used by one of the assemblies. What's the best way of doing this? ------------ The 95% Solution ----------------- Based upon Daminen_the_unbeliever's answer, a resource file per configuration can be achieved as follows: Create a class library project ("Satellite"). Delete the default .cs file and add a folder ("Default") Create a resource file in the folder "MyResources" Properties - set CustomToolNamespace to something appropriate (e.g. "XXX") Make sure the access modifier for the resources is "Public". Add the resources. Edit the source code. Refer to the resources in your code as XXX.MyResources.ResourceName) Create Configurations for each product variant ("ConfigN") For each product variant, create a folder ("VariantN") Copy and Paste the MyResources file into each VariantN folder Unload the "Satellite" project, and edit the .csproj file For each "VariantN/MyResources" <Compile> or <EmbeddedResource> tag, add a Condition="'$(Configuration)' == 'ConfigN'" attribute. Save, Reload the .csproj, and you're done... This creates a per-configuration resource file, which can (presumably) be further localized. Compile error messages are produced for any configuration that where a a resource is missing. The resource files can be localized using the standard method (create a second resources file (MyResources.fr.resx) and edit .csproj as before). The reason this is a 95% solution is that resources used to initialize forms (e.g. Form Titles, button texts) can't be easily handled in the same manner - the easiest approach seems to be to overwrite these with values from the satellite assembly.
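    As a concrete illustration of the .csproj edit described in the 95% solution, the hand-edited part of the Satellite project file would end up looking roughly like this (VariantN and ConfigN are the placeholders used above; the exact metadata Visual Studio generates may differ slightly):

        <ItemGroup>
          <EmbeddedResource Include="Variant1\MyResources.resx" Condition="'$(Configuration)' == 'Config1'">
            <Generator>PublicResXFileCodeGenerator</Generator>
            <CustomToolNamespace>XXX</CustomToolNamespace>
          </EmbeddedResource>
          <EmbeddedResource Include="Variant2\MyResources.resx" Condition="'$(Configuration)' == 'Config2'">
            <Generator>PublicResXFileCodeGenerator</Generator>
            <CustomToolNamespace>XXX</CustomToolNamespace>
          </EmbeddedResource>
        </ItemGroup>

    The Default\MyResources.resx entry would presumably need its own Condition (or to be dropped once the variants exist) so that only one copy of the resource class is compiled per configuration - the write-up does not spell that detail out.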

    Read the article

  • CakePHP access indirectly related model - beginner's question

    - by user325077
    Hi everyone, I am writing a CakePHP application to log the work I do for various clients, but after trying for days I seem unable to get it to do what I want. I have read most of the book CakePHP's website. and googled for all I'm worth, so I presume I am missing something obvious! Every 'log item' belongs to a 'sub-project, which in turn belongs to a 'project', which in turn belongs to a 'sub-client' which finally belongs to a client. These are the 5 MySQL tables I am using: mysql> DESCRIBE log_items; +-----------------+--------------+------+-----+---------+----------------+ | Field | Type | Null | Key | Default | Extra | +-----------------+--------------+------+-----+---------+----------------+ | id | int(11) | NO | PRI | NULL | auto_increment | | date | date | NO | | NULL | | | time | time | NO | | NULL | | | time_spent | int(11) | NO | | NULL | | | sub_projects_id | int(11) | NO | MUL | NULL | | | title | varchar(100) | NO | | NULL | | | description | text | YES | | NULL | | | created | datetime | YES | | NULL | | | modified | datetime | YES | | NULL | | +-----------------+--------------+------+-----+---------+----------------+ mysql> DESCRIBE sub_projects; +-------------+--------------+------+-----+---------+----------------+ | Field | Type | Null | Key | Default | Extra | +-------------+--------------+------+-----+---------+----------------+ | id | int(11) | NO | PRI | NULL | auto_increment | | name | varchar(100) | NO | | NULL | | | projects_id | int(11) | NO | MUL | NULL | | | created | datetime | YES | | NULL | | | modified | datetime | YES | | NULL | | +-------------+--------------+------+-----+---------+----------------+ mysql> DESCRIBE projects; +----------------+--------------+------+-----+---------+----------------+ | Field | Type | Null | Key | Default | Extra | +----------------+--------------+------+-----+---------+----------------+ | id | int(11) | NO | PRI | NULL | auto_increment | | name | varchar(100) | NO | | NULL | | | sub_clients_id | int(11) | NO | MUL | NULL | | | created | datetime | YES | | NULL | | | modified | datetime | YES | | NULL | | +----------------+--------------+------+-----+---------+----------------+ mysql> DESCRIBE sub_clients; +------------+--------------+------+-----+---------+----------------+ | Field | Type | Null | Key | Default | Extra | +------------+--------------+------+-----+---------+----------------+ | id | int(11) | NO | PRI | NULL | auto_increment | | name | varchar(100) | NO | | NULL | | | clients_id | int(11) | NO | MUL | NULL | | | created | datetime | YES | | NULL | | | modified | datetime | YES | | NULL | | +------------+--------------+------+-----+---------+----------------+ mysql> DESCRIBE clients; +----------+--------------+------+-----+---------+----------------+ | Field | Type | Null | Key | Default | Extra | +----------+--------------+------+-----+---------+----------------+ | id | int(11) | NO | PRI | NULL | auto_increment | | name | varchar(100) | NO | | NULL | | | created | datetime | YES | | NULL | | | modified | datetime | YES | | NULL | | +----------+--------------+------+-----+---------+----------------+ I have set up the following associations in CakePHP: LogItem belongsTo SubProjects SubProject belongsTo Projects Project belongsTo SubClients SubClient belongsTo Clients Client hasMany SubClients SubClient hasMany Projects Project hasMany SubProjects SubProject hasMany LogItems Using 'cake bake' I have created the models, controllers (index, view add, edit and delete) and views, and things seem to function - as in 
I am able to perform simple CRUD operations successfully. The Question When editing a 'log item' at www.mydomain/log_items/edit I am presented with the view you would all expect; namely the columns of the log_items table with the appropriate textfields/select boxes etc. I would also like to incorporate select boxes to choose the client, sub-client, project and sub-project in the 'log_items' edit view. Ideally the 'sub-client' select box should populate itself depending upon the 'client' chosen, the 'project' select box should also populate itself depending on the 'sub-client' selected, and so on. I guess the way to go about populating the select boxes with relevant options is Ajax, but I am unsure of how to go about actually accessing a model from the child view of an indirectly related model, for example how to create a 'sub-client' select box in the 'log_items' edit view. I have found this example: http://forum.phpsitesolutions.com/php-frameworks/cakephp/ajax-cakephp-dynamically-populate-html-select-dropdown-box-t29.html where someone achieves something similar for US states, counties and cities. However, I noticed in the database schema - which is downloadable from the link above - that the database tables don't have any foreign keys, so now I'm wondering if I'm going about things in the correct manner. Any pointers and advice would be very much appreciated. Kind regards, Chris
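    For the cascading selects, the usual CakePHP 1.x pattern is a small controller action that returns only the dependent options and is called via Ajax when the parent select changes. A sketch using the models from the question - the action name, the 'ajax' layout and the view are my assumptions, not existing code:

        <?php
        class SubClientsController extends AppController {
            var $name = 'SubClients';

            // e.g. GET /sub_clients/by_client/42 - list the sub-clients of one client
            function by_client($clientId = null) {
                $this->layout = 'ajax';
                $subClients = $this->SubClient->find('list', array(
                    'fields' => array('SubClient.id', 'SubClient.name'),
                    'conditions' => array('SubClient.clients_id' => $clientId)
                ));
                $this->set('subClients', $subClients);
            }
        }
        ?>

    The matching view would emit <option> tags (or JSON) from $subClients, and the log_items edit view swaps them into the sub-client select with $ajax->observeField() or plain JavaScript on the client select. Repeating the same idea one level down gives the client -> sub-client -> project -> sub-project cascade.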

    Read the article

  • Upgrading from TFS 2010 RC to TFS 2010 RTM done

    - by Martin Hinshelwood
    Today is the big day, with the Launch of Visual Studio 2010 already done in Asia, and rolling around the world towards us, we are getting ready for the RTM (Released). We have had TFS 2010 in Production for nearly 6 months and have had only minimal problems. Update 12th April 2010  – Added Scott Hanselman’s tweet about the MSDN download release time. SSW was the first company in the world outside of Microsoft to deploy Visual Studio 2010 Team Foundation Server to production, not once, but twice. I am hoping to make it 3 in a row, but with all the hype around the new version, and with it being a production release and not just a go-live, I think there will be a lot of competition. Developers: MSDN will be updated with #vs2010 downloads and details at 10am PST *today*! @shanselman - Scott Hanselman Same as before, we need to Uninstall 2010 RC and install 2010 RTM. The installer will take care of all the complexity of actually upgrading any schema changes. If you are upgrading from TFS 2008 to TFS2010 you can follow our Rules To Better TFS 2010 Migration and read my post on our successes.   We run TFS 2010 in a Hyper-V virtual environment, so we have the advantage of running a snapshot as well as taking a DB backup. Done - Snapshot the hyper-v server Microsoft does not support taking a snapshot of a running server, for very good reason, and Brian Harry wrote a post after my last upgrade with the reason why you should never snapshot a running server. Done - Uninstall Visual Studio Team Explorer 2010 RC You will need to uninstall all of the Visual Studio 2010 RC client bits that you have on the server. Done - Uninstall TFS 2010 RC Done - Install TFS 2010 RTM Done - Configure TFS 2010 RTM Pick the Upgrade option and point it at your existing “tfs_Configuration” database to load all of the existing settings Done - Upgrade the SharePoint Extensions Upgrade Build Servers (Pending) Test the server The back out plan, and you should always have one, is to restore the snapshot. Upgrading to Team Foundation Server 2010 – Done The first thing you need to do is off the TFS server and then log into the Hyper-v server and create a snapshot. Figure: Make sure you turn the server off and delete all old snapshots before you take a new one I noticed that the snapshot that was taken before the Beta 2 to RC upgrade was still there. You should really delete old snapshots before you create a new one, but in this case the SysAdmin (who is currently tucked up in bed) asked me not to. I guess he is worried about a developer messing up his server Turn your server on and wait for it to boot in anticipation of all the nice shiny RTM’ness that is coming next. The upgrade procedure for TFS2010 is to uninstal the old version and install the new one. Figure: Remove Visual Studio 2010 Team Foundation Server RC from the system.   Figure: Most of the heavy lifting is done by the Uninstaller, but make sure you have removed any of the client bits first. Specifically Visual Studio 2010 or Team Explorer 2010.  Once the uninstall is complete, this took around 5 minutes for me, you can begin the install of the RTM. Running the 64 bit OS will allow the application to use more than 2GB RAM, which while not common may be of use in heavy load situations. Figure: It is always recommended to install the 64bit version of a server application where possible. 
I do not think it is likely, with SharePoint 2010 and Exchange 2010  and even Windows Server 2008 R2 being 64 bit only, I do not think there will be another release of a server app that is 32bit. You then need to choose what it is you want to install. This depends on how you are running TFS and on how many servers. In our case we run TFS and the Team Foundation Build Service (controller only) on out TFS server along with Analysis services and Reporting Services. But our SharePoint server lives elsewhere. Figure: This always confuses people, but in reality it makes sense. Don’t install what you do not need. Every extra you install has an impact of performance. If you are integrating with SharePoint you will need to run this install on every Front end server in your farm and don’t forget to upgrade your Build servers and proxy servers later. Figure: Selecting only Team Foundation Server (TFS) and Team Foundation Build Services (TFBS)   It is worth noting that if you have a lot of builds kicking off, and hence a lot of get operations against your TFS server, you can use a proxy server to cache the source control on another server in between your TFS server and your build servers. Figure: Installing Microsoft .NET Framework 4 takes the most time. Figure: Now run Windows Update, and SSW Diagnostic to make sure all your bits and bobs are up to date. Note: SSW Diagnostic will check your Power Tools, Add-on’s, Check in Policies and other bits as well. Configure Team Foundation Server 2010 – Done Now you can configure the server. If you have no key you will need to pick “Install a Trial Licence”, but it is only £500, or free with a MSDN subscription. Anyway, if you pick Trial you get 90 days to get your key. Figure: You can pick trial and add your key later using the TFS Server Admin. Here is where the real choices happen. We are doing an Upgrade from a previous version, so I will pick Upgrade the same as all you folks that are using the RC or TFS 2008. Figure: The upgrade wizard takes your existing 2010 or 2008 databases and upgraded them to the release.   Once you have entered your database server name you can click “List available databases” and it will show what it can upgrade. Figure: Select your database from the list and at this point, make sure you have a valid backup. At this point you have not made ANY changes to the databases. At this point the configuration wizard will load configuration from your existing database if you have one. If you are upgrading TFS 2008 refer to Rules To Better TFS 2010 Migration. Mostly during the wizard the default values will suffice, but depending on the configuration you want you can pick different options. Figure: Set the application tier account and Authentication method to use. We use NTLM to keep things simple as we host our TFS server externally for our remote developers.  Figure: Setting your TFS server URL’s to be the remote URL’s allows the reports to be accessed without using VPN. Very handy for those remote developers. Figure: Detected the existing Warehouse no problem. Figure: Again we love green ticks. It gives us a warm fuzzy feeling. Figure: The username for connecting to Reporting services should be a domain account (if you are on a domain that is). Figure: Setup the SharePoint integration to connect to your external SharePoint server. You can take the option to connect later.   You then need to run all of your readiness checks. These check can save your life! 
it will check all of the settings that you have entered as well as checking all the external services are configures and running properly. There are two reasons that TFS 2010 is so easy and painless to install where previous version were not. Microsoft changes the install to two steps, Install and configuration. The second reason is that they have pulled out all of the stops in making the install run all the checks necessary to make sure that once you start the install that it will complete. if you find any errors I recommend that you report them on http://connect.microsoft.com so everyone can benefit from your misery.   Figure: Now we have everything setup the configuration wizard can do its work.  Figure: Took a while on the “Web site” stage for some point, but zipped though after that.  Figure: last wee bit. TFS Needs to do a little tinkering with the data to complete the upgrade. Figure: All upgraded. I am not worried about the yellow triangle as SharePoint was being a little silly Exception Message: TF254021: The account name or password that you specified is not valid. (type TfsAdminException) Exception Stack Trace:    at Microsoft.TeamFoundation.Management.Controls.WizardCommon.AccountSelectionControl.TestLogon(String connectionString)    at System.ComponentModel.BackgroundWorker.WorkerThreadStart(Object argument) [Info   @16:10:16.307] Benign exception caught as part of verify: Exception Message: TF255329: The following site could not be accessed: http://projects.ssw.com.au/. The server that you specified did not return the expected response. Either you have not installed the Team Foundation Server Extensions for SharePoint Products on this server, or a firewall is blocking access to the specified site or the SharePoint Central Administration site. For more information, see the Microsoft Web site (http://go.microsoft.com/fwlink/?LinkId=161206). (type TeamFoundationServerException) Exception Stack Trace:    at Microsoft.TeamFoundation.Client.SharePoint.WssUtilities.VerifyTeamFoundationSharePointExtensions(ICredentials credentials, Uri url)    at Microsoft.TeamFoundation.Admin.VerifySharePointSitesUrl.Verify() Inner Exception Details: Exception Message: TF249064: The following Web service returned an response that is not valid: http://projects.ssw.com.au/_vti_bin/TeamFoundationIntegrationService.asmx. This Web service is used for the Team Foundation Server Extensions for SharePoint Products. Either the extensions are not installed, the request resulted in HTML being returned, or there is a problem with the URL. Verify that the following URL points to a valid SharePoint Web application and that the application is available: http://projects.ssw.com.au. If the URL is correct and the Web application is operating normally, verify that a firewall is not blocking access to the Web application. (type TeamFoundationServerInvalidResponseException) Exception Data Dictionary: ResponseStatusCode = InternalServerError I’ll look at SharePoint after, probably the SharePoint box just needs a restart or a kick If there is a problem with SharePoint it will come out in testing, But I will definatly be passing this on to Microsoft.   Upgrading the SharePoint connector to TFS 2010 You will need to upgrade the Extensions for SharePoint Products and Technologies on all of your SharePoint farm front end servers. To do this uninstall  the TFS 2010 RC from it in the same way as the server, and then install just the RTM Extensions. 
Figure: Only install the SharePoint Extensions on your SharePoint front end servers. TFS 2010 supports both SharePoint 2007 and SharePoint 2010.   Figure: When you configure SharePoint it uploads all of the solutions and templates. Figure: Everything is uploaded successfully. Figure: TFS even remembered the settings from the previous installation, fantastic.   Upgrading the Team Foundation Build Servers to TFS 2010 Just like on the SharePoint servers, you will need to upgrade the Build Server to the RTM. Just uninstall TFS 2010 RC and then install only the Team Foundation Build Services component. Unlike on the SharePoint server, you will probably have some version of Visual Studio installed. You will need to remove this as well. (Coming Soon) Connecting Visual Studio 2010 / 2008 / 2005 and Eclipse to TFS2010 If you have developers still on Visual Studio 2005 or 2008 you will need to download the respective compatibility pack: Visual Studio Team System 2005 Service Pack 1 Forward Compatibility Update for Team Foundation Server 2010 Visual Studio Team System 2008 Service Pack 1 Forward Compatibility Update for Team Foundation Server 2010 If you are using Eclipse you can download the new Team Explorer Everywhere install for connecting to TFS. Get your developers to check that you have the latest version of your applications with SSW Diagnostic, which will check for Service Packs and hot fixes to Visual Studio as well.   Technorati Tags: TFS,TFS2010,TFS 2010,Upgrade

    Read the article

  • Processing Email in Outlook

    - by Daniel Moth
    A. Why Goal 1 = Help others: Have at most a 24-hour response turnaround to internal (from colleague) emails, typically achieving same day response. Goal 2 = Help projects: Not to implicitly pass/miss an opportunity to have impact on electronic discussions around any project on the radar. Not achieving goals 1 & 2 = Colleagues stop relying on you, drop you off conversations, don't see you as a contributing resource or someone that cares, you are perceived as someone with no peripheral vision. Note this is perfect if all you are doing is cruising at your job, trying to fly under the radar, with no ambitions of having impact beyond your absolute minimum 'day job'. B. DON'T: Leave unread email lurking around Don't: Receive or process all incoming emails in a single folder ('inbox' or 'unread mail'). This is actually possible if you receive a small number of emails (e.g. new to the job, not working at a company like Microsoft). Even so, with (your future) success at any level (company, community) comes large incoming email, so learn to deal with it. With large volumes, it is best to let the system help you by doing some categorization and filtering on your behalf (instead of trying to do that in your head as you process the single folder). See later section on how to achieve this. Don't: Leave emails as 'unread' (or worse: read them, then mark them as unread). Often done by individuals who think they possess super powers ("I can mentally cache and distinguish between the emails I chose not to read, the ones that are actually new, and the ones I decided to revisit in the future; the fact that they all show up the same (bold = unread) does not confuse me"). Interactions with this super-powered individuals typically end up with them saying stuff like "I must have missed that email you are talking about (from 2 weeks ago)" or "I am a bit behind, so I haven't read your email, can you remind me". TIP: The only place where you are "allowed" unread email is in your Deleted Items folder. Don't: Interpret a read email as an email that has been processed. Doing that, means you will always end up with fake unread email (that you have actually read, but haven't dealt with completely so you then marked it as unread) lurking between actual unread email. Another side effect is reading the email and making a 'mental' note to action it, then leaving the email as read, so the only thing left to remind you to carry out the action is… you. You are not super human, you will forget. This is a key distinction. Reading (or even scanning) a new email, means you now know what needs to be done with it, in order for it to be truly considered processed. Truly processing an email is to, for example, write an email of your own (e.g. to reply or forward), or take a non-email related action (e.g. create calendar entry, do something on some website), or read it carefully to gain some knowledge (e.g. it had a spec as an attachment), or keep it around as reference etc. 'Reading' means that you know what to do, not that you have done it. An email that is read is an email that is triaged, not an email that is resolved. Sometimes the thing that needs to be done based on receiving the email, you can (and want) to do immediately after reading the email. That is fine, you read the email and you processed it (typically when it takes no longer than X minutes, where X is your personal tolerance – mine is roughly 2 minutes). 
Other times, you decide that you don't want to spend X minutes at that moment, so after reading the email you need a quick system for "marking" the email as to be processed later (and you still leave it as 'read' in outlook). See later section for how. C. DO: Use Outlook rules and have multiple folders where incoming email is automatically moved to Outlook email rules are very powerful and easy to configure. Use them to automatically file email into folders. Here are mine (note that if a rule catches an email message then no further rules get processed): "personal" Email is either personal or business related. Almost all personal email goes to my gmail account. The personal emails that end up on my work email account, go to a dedicated folder – that is achieved via a rule that looks at the email's 'From' field. For those that slip through, I use the new Outlook 2010  quick step of "Conversation To Folder" feature to let the slippage only occur once per conversation, and then update my rules. "External" and "ViaBlog" The remaining external emails either come from my blog (rule on the subject line) or are unsolicited (rule on the domain name not being microsoft) and they are filed accordingly. "invites" I may do a separate blog post on calendar management, but suffice to say it should be kept up to date. All invite requests end up in this folder, so that even if mail gets out of control, the calendar can stay under control (only 1 folder to check). I.e. so I can let the organizer know why I won't be attending their meeting (or that I will be). Note: This folder is the only one that shows the total number of items in it, instead of the total unread. "Inbox" The only email that ends up here is email sent TO me and me only. Note that this is also the only email that shows up above the systray icon in the notification toast – all other emails cannot interrupt. "ToMe++" Email where I am on the TO line, but there are other recipients as well (on the TO or CC line). "CC" Email where I am on the CC line. I need to read these, but nobody is expecting a response or action from me so they are not as urgent (and if they are and follow up with me, they'll receive a link to this). "@ XYZ" Emails to aliases that are about projects that I directly work on (and I wasn't on the TO or CC line, of course). Test: these projects are in my commitments that I get measured on at the end of the year. "Z Mass" and subfolders under it per distribution list (DL) Emails to aliases that are about topics that I am interested in, but not that I formally own/contribute to. Test: if I unsubscribed from these aliases, nobody could rightfully complain. "Admin" folder, which resides under "Z Mass" folder Emails to aliases that I was added typically by an admin, e.g. broad emails to the floor/group/org/building/division/company that I am a member of. "BCC" folder, which resides under "Z Mass" Emails where I was not on the TO or the CC line explicitly and the alias it was sent to is not one I explicitly subscribed to (or I have been added to the BCC line, which I briefly touched on in another post). When there are only a few quick minutes to catch up on email, read as much as possible from these folders, in this order: Invites, Inbox, ToMe++. Only when these folders are all read (remember that doesn't mean that each email in them has been fully dealt with), we can move on to the @XYZ and then the CC folders. Only when those are read we can go on to the remaining folders. 
Note that the typical flow in the "Z Mass" subfolders is to scan subject lines and use the new Ctrl+Delete Outlook 2010 feature to ignore conversations. D. DO: Use Outlook Search folders in combination with categories As you process each folder, when you open a new email (i.e. click on it and read it in the preview pane) the email becomes read and stays read and you have to decide whether: It can take 2 minutes to deal with for good, right now, or It will take longer than 2 minutes, so it needs to be postponed with a clear next step, which is one of ToReply – there may be intermediate action steps, but ultimately someone else needs to receive email about this Action – no email is required, but I need to do something ReadLater – no email is required from the quick scan, but this is too long to fully read now, so it needs to be read it later WaitingFor – the email is informing of an intermediate status and 'promising' a future email update. Need to track. SomedayMaybe – interesting but not important, non-urgent, non-time-bound information. I may want to spend part of one of my weekends reading it. For all these 'next steps' use Outlook categories (right click on the email and assign category, or use shortcut key). Note that I also use category 'WaitingFor' for email that I send where I am expecting a response and need to track it. Create a new search folder for each category (I dragged the search folders into my favorites at the top left of Outlook, above my inboxes). So after the activity of reading/triaging email in the normal folders (where the email arrived) is done, the result is a bunch of emails appearing in the search folders (configure them to show the total items, not the total unread items). To actually process email (that takes more than 2 minutes to deal with) process the search folders, starting with ToReply and Action. E. DO: Get into a Routine Now you have a system in place, get into a routine of using it. Here is how I personally use mine, but this part I keep tweaking: Spend short bursts of time (between meetings, during boring but mandatory meetings and, in general, 2-4 times a day) aiming to have no unread emails (and in the process deal with some emails that take less than 2 minutes). Spend around 30 minutes at the end of each day processing most urgent items in search folders. Spend as long as it takes each Friday (or even the weekend) ensuring there is no unnecessary email baggage carried forward to the following week. F. Other resources Official Outlook help on: Create custom actions rules, Manage e-mail messages with rules, creating a search folder. Video on ignoring conversations (Ctrl+Del). Official blog post on Quick Steps and in particular the Move Conversation to folder. If you've read "Getting Things Done" it is very obvious that my approach to email management is driven by GTD. A very similar approach was described previously by ScottHa (also influenced by GTD), worth reading here. He also described how he sets up 2 outlook rules ('invites' and 'external') which I also use – worth reading that too. Comments about this post welcome at the original blog.

    Read the article

  • Processing Email in Outlook

    - by Daniel Moth
    A. Why Goal 1 = Help others: Have at most a 24-hour response turnaround to internal (from colleague) emails, typically achieving same day response. Goal 2 = Help projects: Not to implicitly pass/miss an opportunity to have impact on electronic discussions around any project on the radar. Not achieving goals 1 & 2 = Colleagues stop relying on you, drop you off conversations, don't see you as a contributing resource or someone that cares, you are perceived as someone with no peripheral vision. Note this is perfect if all you are doing is cruising at your job, trying to fly under the radar, with no ambitions of having impact beyond your absolute minimum 'day job'. B. DON'T: Leave unread email lurking around Don't: Receive or process all incoming emails in a single folder ('inbox' or 'unread mail'). This is actually possible if you receive a small number of emails (e.g. new to the job, not working at a company like Microsoft). Even so, with (your future) success at any level (company, community) comes large incoming email, so learn to deal with it. With large volumes, it is best to let the system help you by doing some categorization and filtering on your behalf (instead of trying to do that in your head as you process the single folder). See later section on how to achieve this. Don't: Leave emails as 'unread' (or worse: read them, then mark them as unread). Often done by individuals who think they possess super powers ("I can mentally cache and distinguish between the emails I chose not to read, the ones that are actually new, and the ones I decided to revisit in the future; the fact that they all show up the same (bold = unread) does not confuse me"). Interactions with this super-powered individuals typically end up with them saying stuff like "I must have missed that email you are talking about (from 2 weeks ago)" or "I am a bit behind, so I haven't read your email, can you remind me". TIP: The only place where you are "allowed" unread email is in your Deleted Items folder. Don't: Interpret a read email as an email that has been processed. Doing that, means you will always end up with fake unread email (that you have actually read, but haven't dealt with completely so you then marked it as unread) lurking between actual unread email. Another side effect is reading the email and making a 'mental' note to action it, then leaving the email as read, so the only thing left to remind you to carry out the action is… you. You are not super human, you will forget. This is a key distinction. Reading (or even scanning) a new email, means you now know what needs to be done with it, in order for it to be truly considered processed. Truly processing an email is to, for example, write an email of your own (e.g. to reply or forward), or take a non-email related action (e.g. create calendar entry, do something on some website), or read it carefully to gain some knowledge (e.g. it had a spec as an attachment), or keep it around as reference etc. 'Reading' means that you know what to do, not that you have done it. An email that is read is an email that is triaged, not an email that is resolved. Sometimes the thing that needs to be done based on receiving the email, you can (and want) to do immediately after reading the email. That is fine, you read the email and you processed it (typically when it takes no longer than X minutes, where X is your personal tolerance – mine is roughly 2 minutes). 
Other times, you decide that you don't want to spend X minutes at that moment, so after reading the email you need a quick system for "marking" the email as to be processed later (and you still leave it as 'read' in outlook). See later section for how. C. DO: Use Outlook rules and have multiple folders where incoming email is automatically moved to Outlook email rules are very powerful and easy to configure. Use them to automatically file email into folders. Here are mine (note that if a rule catches an email message then no further rules get processed): "personal" Email is either personal or business related. Almost all personal email goes to my gmail account. The personal emails that end up on my work email account, go to a dedicated folder – that is achieved via a rule that looks at the email's 'From' field. For those that slip through, I use the new Outlook 2010  quick step of "Conversation To Folder" feature to let the slippage only occur once per conversation, and then update my rules. "External" and "ViaBlog" The remaining external emails either come from my blog (rule on the subject line) or are unsolicited (rule on the domain name not being microsoft) and they are filed accordingly. "invites" I may do a separate blog post on calendar management, but suffice to say it should be kept up to date. All invite requests end up in this folder, so that even if mail gets out of control, the calendar can stay under control (only 1 folder to check). I.e. so I can let the organizer know why I won't be attending their meeting (or that I will be). Note: This folder is the only one that shows the total number of items in it, instead of the total unread. "Inbox" The only email that ends up here is email sent TO me and me only. Note that this is also the only email that shows up above the systray icon in the notification toast – all other emails cannot interrupt. "ToMe++" Email where I am on the TO line, but there are other recipients as well (on the TO or CC line). "CC" Email where I am on the CC line. I need to read these, but nobody is expecting a response or action from me so they are not as urgent (and if they are and follow up with me, they'll receive a link to this). "@ XYZ" Emails to aliases that are about projects that I directly work on (and I wasn't on the TO or CC line, of course). Test: these projects are in my commitments that I get measured on at the end of the year. "Z Mass" and subfolders under it per distribution list (DL) Emails to aliases that are about topics that I am interested in, but not that I formally own/contribute to. Test: if I unsubscribed from these aliases, nobody could rightfully complain. "Admin" folder, which resides under "Z Mass" folder Emails to aliases that I was added typically by an admin, e.g. broad emails to the floor/group/org/building/division/company that I am a member of. "BCC" folder, which resides under "Z Mass" Emails where I was not on the TO or the CC line explicitly and the alias it was sent to is not one I explicitly subscribed to (or I have been added to the BCC line, which I briefly touched on in another post). When there are only a few quick minutes to catch up on email, read as much as possible from these folders, in this order: Invites, Inbox, ToMe++. Only when these folders are all read (remember that doesn't mean that each email in them has been fully dealt with), we can move on to the @XYZ and then the CC folders. Only when those are read we can go on to the remaining folders. 
Note that the typical flow in the "Z Mass" subfolders is to scan subject lines and use the new Ctrl+Delete Outlook 2010 feature to ignore conversations.

D. DO: Use Outlook search folders in combination with categories

As you process each folder, when you open a new email (i.e. click on it and read it in the preview pane) the email becomes read and stays read, and you have to decide whether it can take 2 minutes to deal with for good, right now, or it will take longer than 2 minutes, so it needs to be postponed with a clear next step, which is one of:

ToReply – there may be intermediate action steps, but ultimately someone else needs to receive email about this.
Action – no email is required, but I need to do something.
ReadLater – no email is required from the quick scan, but this is too long to fully read now, so it needs to be read later.
WaitingFor – the email is informing of an intermediate status and 'promising' a future email update. Need to track.
SomedayMaybe – interesting but not important, non-urgent, non-time-bound information. I may want to spend part of one of my weekends reading it.

For all these 'next steps' use Outlook categories (right click on the email and assign a category, or use a shortcut key). Note that I also use the category 'WaitingFor' for email that I send where I am expecting a response and need to track it.

Create a new search folder for each category (I dragged the search folders into my favorites at the top left of Outlook, above my inboxes). So after the activity of reading/triaging email in the normal folders (where the email arrived) is done, the result is a bunch of emails appearing in the search folders (configure them to show the total items, not the total unread items). To actually process email (that takes more than 2 minutes to deal with), process the search folders, starting with ToReply and Action.

E. DO: Get into a routine

Now that you have a system in place, get into a routine of using it. Here is how I personally use mine, but this part I keep tweaking:

Spend short bursts of time (between meetings, during boring but mandatory meetings and, in general, 2-4 times a day) aiming to have no unread emails (and in the process deal with some emails that take less than 2 minutes).
Spend around 30 minutes at the end of each day processing the most urgent items in the search folders.
Spend as long as it takes each Friday (or even the weekend) ensuring there is no unnecessary email baggage carried forward to the following week.

F. Other resources

Official Outlook help on: Create custom actions rules, Manage e-mail messages with rules, creating a search folder.
Video on ignoring conversations (Ctrl+Del).
Official blog post on Quick Steps and in particular the Move Conversation to Folder feature.
If you've read "Getting Things Done" it is very obvious that my approach to email management is driven by GTD. A very similar approach was described previously by ScottHa (also influenced by GTD), worth reading here. He also described how he sets up two Outlook rules ('invites' and 'external') which I also use – worth reading that too.

Comments about this post are welcome at the original blog.

    Read the article

  • Mauritius Software Craftsmanship Community

    There we go! I finally managed to push myself forward and pick up an old (actually too old) idea that I have had since I arrived here in Mauritius more than six years ago. I'm talking about a community for all kinds of ICT-connected people. In the past (back in Germany), I used to be involved in various community activities. For example, I was part of the Microsoft Community Leader/Influencer Program (CLIP) in Germany due to an FAQ on Visual FoxPro, actually Active FoxPro Pages (AFP) to be more precise. Then in 2003/2004 I approached the person responsible for the dFPUG user group in Speyer in order to assist him in organising monthly user group meetings. Well, he handed over management completely, and attended our meetings regularly.

Why did it take you so long? Well, I don't want to bother you with the details, but the short version is that I was too busy with either the job (building up new companies) or private life (got married, and we have two lovely children, eh, 'monsters') or even both. But now is the time when I am starting to look for new fields, given the fact that I have gained some spare time. My businesses are up and running, the kids are in school, and I am finally in a position where I can commit myself again to community activities. And I love to do that!

Why a new user group? Good question... and 'easy' to answer. Since back in 2007 I have done my usual research, eh, Google searches, to see whether there were existing user groups in Mauritius and in which fields of interest. And yes, there are! If I recall correctly, there are communities for PHP, Drupal, Python (just recently), Oracle, and Linux (of which there even used to be two). But... either they do not exist anymore, they are dormant, or there is only a low heartbeat, frankly speaking. And yes, I went to meetings of the Linux User Group Meta (Mauritius) back in 2010/2011 and just recently. I really like the setup and the way the LUGM is organised. It's just that I have a slightly different point of view on how a user group or community should organise itself and how to approach future members. Don't get me wrong, I'm not criticizing others who are doing a very good job; I'm only saying that I'd like to do it differently. The last meeting of the LUGM was awesome; read my feedback about it.

Ok, so what's up with the 'Mauritius Software Craftsmanship Community', or MSCC for short? As I've already written in my article on 'Communities - The importance of exchange and discussion', I think it is essential in a world of IT to stay 'connected' with a good number of other people in the same field. There is so much dynamism and daily news that it is almost impossible to keep track of it all. The MSCC is going to provide a common platform to exchange experience and share knowledge with each other. You might be a newbie who wants to know what to expect working as a software developer, or as a database administrator, or maybe as an IT systems administrator; or you're an experienced geek who loves to share ideas or solutions you implemented to solve a specific problem; or you're the business (or HR) guy looking for 'fresh' blood to reinforce your existing team. Or... you're just interested and you'd like to communicate with like-minded people.

Meetup of 26.06.2013 @ L'arabica: Of course there are laptops around. Free WiFi, power outlet, coffee, code and Linux in one go.

The MSCC is technology-agnostic and spans an umbrella over any kind of technology, simply because you can't ignore other technologies anymore in a connected IT world like ours.
A front-end developer for iOS applications should have the chance to connect with a Python back-end coder, and eventually with a DBA for MySQL or PostgreSQL, and exchange their experience. Furthermore, I'm a huge fan of cross-platform development, and it is very pleasant to have pure Web developers - with all that HTML5, CSS3, JavaScript and JS libraries stuff - and passionate C# or Java coders at the same table. This diversity of knowledge can assist and boost your personal situation. And last but not least, there are projects and open positions 'flying' around... People might like to hear others' opinions about an employer or get new impulses on how to tackle an issue at their workplace, etc. This is about community. And that's how I see the MSCC in general - free of any limitations, be it by programming language or technology. Having the chance to exchange experience and to discuss certain aspects of technology saves you time and money, and it's a pleasure to enjoy compared to dusty books and remote online resources. It's human!

Organising meetups (meetings, get-togethers, gatherings - you name it!)

As of writing this article, the MSCC is currently meeting every Wednesday for the weekly 'Code & Coffee' session at various locations (suggestions are welcome!) in Mauritius. This might change in the future, but especially at the beginning I think it is very important to create awareness in the Mauritian IT world. Yes, we are here! Come and join us! ;-)

The MSCC's main online presence is located at Meetup.com because it allows me to handle the organisation of events and meeting appointments very easily, and any member can have a look at who else is involved, so that an exchange of contacts is possible at any time. In combination with the other entities (G+ Communities, FB Pages or Groups) I advertise and manage all future activities here: Mauritius Software Craftsmanship Community.

This is a community for those who care about and are proud of what they do. For those developers, regardless of how experienced they are, who want to improve and master their craft. This is a community for those who believe that being average is just not good enough. I know, there are not many 'craftsmen' yet, but it's a start... Let's see how it looks by the end of the year.

There are free smartphone apps for Android and iOS from Meetup.com that allow you to keep track of meetings and to stay informed on the latest updates. And last but not least, there is a Trello workspace to collect and share ideas and provide downloads of slides, etc. Trello is also available as a free smartphone app.

Sharing is caring! As mentioned, the #MSCC is present in various social media networks in order to cover as many people as possible here in Mauritius. Following is an overview of the current networks:

Twitter - Latest updates and quickies
Google+ - Community channel
Facebook - Community Page
LinkedIn - Community Group
Trello - Collaboration workspace to share and develop ideas

Hopefully, this covers the majority of computer-related people in Mauritius. Please spread the word about the #MSCC among your colleagues, your friends and other interested 'geeks'.

Your future looks bright

Running and participating in a user group or any kind of community usually provides quite a number of advantages for anyone.
On the one hand, it is very joyful for me to organise appointments and get in touch with people who might be interested in presenting a little demo of their projects or of a recent problem they had to tackle, and on the other hand there are lots of companies that have various support programs or sponsorships especially tailored for user groups. At the moment, I already have a couple of gimmicks that I would like to hand out in small contests or raffles during one of the upcoming meetings, and as said, companies provide all kinds of goodies, books free of charge, or sometimes even licenses for communities. Meeting other software developers or IT guys also opens up your point of view on the local market, and there might be interesting projects or job offers available, too. A community like the Mauritius Software Craftsmanship Community is great for freelancers, the self-employed, students and of course employees.

Meetings will be organised on a regular basis, and I'm open to all kinds of suggestions from you. Please leave a comment here in the blog or join the conversations in the above-mentioned social networks. Let's get this community up and running, my fellow Mauritians!

Recent updates

The MSCC is now officially participating in the O'Reilly UK User Group program and we are allowed to request review copies of recent titles. Additionally, we have a discount code for any books or ebooks that you might like to order on shop.oreilly.com. More applications for user group sponsorship programs are pending, and I'm looking forward to a couple of announcements very soon.

And... we need some kind of 'corporate identity' - over at the MSCC website there is a call for action (or better said, a contest with prizes) to create a unique design for the MSCC. This would include a decent colour palette, a logo, graphical banners for Meetup, Google+, Facebook, LinkedIn, etc. and of course badges for our craftsmen to add to their personal blogs and websites. Please spread the word and contribute. Thanks!

    Read the article

  • Visual Studio 2013 Static Code Analysis in depth: What? When and How?

    - by Hosam Kamel
    In this post I'll illustrate in detail the following points:

What is static code analysis?
When to use it?
Supported platforms
Supported Visual Studio versions
How to use it:
  Run Code Analysis manually
  Run Code Analysis automatically
  Run Code Analysis while checking in source code to TFS version control (TFVC)
  Run Code Analysis as part of Team Build
Understand the Code Analysis results & learn how to fix them
Create your custom rule set
Q & A
References

What is static code analysis?

The Static Code Analysis feature of Visual Studio performs static analysis on your code to help developers identify potential design, globalization, interoperability, performance, security and many other categories of potential problems, according to Microsoft's rules that mainly target best practices in writing code. A large set of those rules is included with Visual Studio, grouped into different categories targeting specific coding issues such as security, design, interoperability and globalization.

Static here means analyzing the source code without executing it. This type of analysis can be performed through automated tools (like the Visual Studio 2013 Code Analysis tool) or manually through code review, which is already supported in Visual Studio 2012 and 2013 (check the Using Code Review to Improve Quality video on Channel9). There is also dynamic analysis, which is performed on executing programs using software testing techniques such as code coverage, for example.

When to use?

Running the code analysis tool at regular intervals during your development process can enhance the quality of your software; examining your code for a set of common defects and violations is always a good programming practice. In addition, code analysis can find defects in your code that are difficult to discover through testing, allowing you to achieve a first-level quality gate for your application during the development phase, before you release it to the testing team.

Supported platforms

.NET Framework, native (C and C++) and database applications.

Supported Visual Studio versions

All versions of Visual Studio starting with Visual Studio 2013 (except Visual Studio Test Professional); check the feature comparisons. Creating and modifying a custom rule set requires Visual Studio Premium or Ultimate.

How to use?

Code Analysis can be run manually at any time from within the Visual Studio IDE, or even set up to run automatically as part of a Team Build or a check-in policy for Team Foundation Server.

Run Code Analysis manually

To run code analysis manually on a project, on the Analyze menu click Run Code Analysis on your project, or simply right-click the project name in Solution Explorer and choose Run Code Analysis from the context menu.

Run Code Analysis automatically

To run code analysis each time that you build a project, select Enable Code Analysis on Build on the project's property pages (the project-file properties this option maps to are sketched below).
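For reference, the Enable Code Analysis on Build checkbox roughly corresponds to MSBuild properties in the project file. The snippet below is a sketch and not from the original post; the ruleset file name shown is the default "Microsoft Managed Recommended Rules" set, and in a real .csproj you would typically see one such PropertyGroup per build configuration:

    <!-- Sketch: what "Enable Code Analysis on Build" roughly corresponds to in a .csproj -->
    <PropertyGroup Condition=" '$(Configuration)|$(Platform)' == 'Debug|AnyCPU' ">
      <RunCodeAnalysis>true</RunCodeAnalysis>
      <CodeAnalysisRuleSet>MinimumRecommendedRules.ruleset</CodeAnalysisRuleSet>
    </PropertyGroup>

Editing these properties by hand and using the property page checkbox should lead to the same result; verify against your own project file.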
Run Code Analysis while checking in source code to TFS version control (TFVC)

Team Foundation Version Control (TFVC) provides a way for organizations to enforce practices that lead to better code and more efficient group development, through check-in policies: rules that are set at the team project level and enforced on developer computers before code is allowed to be checked in. (This is available only if you're using Team Foundation Server.)

Required permissions on Team Foundation Server: you must have the Edit project-level information permission set to Allow; typically your account must be part of Project Administrators or Project Collection Administrators. For more information about Team Foundation permissions check http://msdn.microsoft.com/en-us/library/ms252587(v=vs.120).aspx

In Team Explorer, right-click the team project name, point to Team Project Settings, and then click Source Control. In the Source Control dialog box, select the Check-in Policy tab. Click Add to create a new check-in policy, or double-click the existing Code Analysis item in the Policy Type list to change the policy. Check or uncheck the policy options based on the configuration you need to enforce, as illustrated below:

Enforce check-in to only contain files that are part of current solution: code analysis can run only on files specified in solution and project configuration files. This policy guarantees that all code that is part of a solution is analyzed.
Enforce C/C++ Code Analysis (/analyze): requires that all C or C++ projects be built with the /analyze compiler option to run code analysis before they can be checked in.
Enforce Code Analysis for Managed Code: requires that all managed projects run code analysis and build before they can be checked in.

Check the Code Analysis rule set reference on MSDN.

What is a Rule Set? A rule set is a group of code analysis rules; for example, the Microsoft.Design group contains the rule "Do not declare static members on generic types" (a minimal .ruleset file is sketched after this section). Once you have configured the Code Analysis policy, it is enabled for all the team members in this project; whenever a team member checks in any source code to TFVC, the policy section will highlight the Code Analysis policy.

TFS is a very extensible platform, so you can simply implement your own custom Code Analysis check-in policy; check this link for more details http://msdn.microsoft.com/en-us/library/dd492668.aspx, but you also have to be aware of compatibility between different TFS versions, check http://msdn.microsoft.com/en-us/library/bb907157.aspx
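As a concrete illustration of the rule set concept above, here is a minimal custom .ruleset file. This is a sketch rather than content from the original post: the Name and Description values are examples, CA1000 is the ID of "Do not declare static members on generic types", and CA1062 ("Validate arguments of public methods") is included only to show how a rule can be escalated to an error.

    <?xml version="1.0" encoding="utf-8"?>
    <RuleSet Name="Contoso Custom Rules" Description="Rules enforced at check-in (example)" ToolsVersion="12.0">
      <Rules AnalyzerId="Microsoft.Analyzers.ManagedCodeAnalysis" RuleNamespace="Microsoft.Rules.Managed">
        <!-- CA1000: Do not declare static members on generic types (reported as a warning) -->
        <Rule Id="CA1000" Action="Warning" />
        <!-- CA1062: Validate arguments of public methods (escalated to an error, so it fails the build) -->
        <Rule Id="CA1062" Action="Error" />
      </Rules>
    </RuleSet>

Pointing the project's Code Analysis settings (or the CodeAnalysisRuleSet property shown earlier) at a file like this makes the build use your custom selection of rules.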
Run Code Analysis as part of Team Build

With Team Foundation Build (TFBuild), you can create and manage build processes that automatically compile and test your applications, and perform other important functions. Code Analysis can be enabled in the build definition by selecting the correct value for the build process parameter "Perform Code Analysis". Once configured, kick off your build definition to queue a new build; Code Analysis will run as part of the build workflow and you will be able to see code analysis warnings as part of the build report.

Understand the Code Analysis results & learn how to fix them

Now that you have gone through the Code Analysis configurations and the different ways of running it, we will go through the Code Analysis results, how to understand them and how to resolve them. The Code Analysis window in Visual Studio shows all the analysis results based on the rule sets you configured in the project file properties. Let's dig into what each result item contains:

1 Check ID – The unique identifier for the rule. CheckId and Category are used for in-source suppression of a warning.
2 Title – The title of the warning message.
3 Description – A description of the problem or suggested fix.
4 File Name – The file name and the line of code that violate the code analysis rule set.
5 Category – The code analysis category for this error.
6 Warning/Error – Depends on how you configure it in the rule set; the default is the Warning level.
7 Action – One of the following:
Copy: copies the warning information to the clipboard.
Create Work Item: if you're connected to Team Foundation Server you can create a work item (most probably a Task or a Bug) and assign it to a developer to fix a certain code analysis warning.
Suppress Message: there are times when you might decide not to fix a code analysis warning. You might decide that resolving the warning requires too much recoding in relation to the probability that the issue will arise in any real-world implementation of your code. Or you might believe that the analysis that is used in the warning is inappropriate for the particular context. You can suppress individual warnings so that they no longer appear in the Code Analysis window. Two options are available: In Source inserts a SuppressMessage attribute in the source file above the method that generated the warning, which makes the suppression more discoverable (see the sketch after the Q & A section below); In Suppression File adds a SuppressMessage attribute to the GlobalSuppressions.cs file of the project, which can make the management of suppressions easier. Note that the SuppressMessage attribute added to GlobalSuppressions.cs also targets the method that generated the warning. It does not suppress the warning globally.

Visual Studio makes it very easy to fix a code analysis warning: if you are not sure how to fix it, all you have to do is click on the Check ID hyperlink and you'll be directed to MSDN online (or a local copy, based on the configuration you chose while installing Visual Studio), where you will find all the information about the warning, including how to fix it.

Create a Custom Code Analysis Rule Set

The Microsoft standard rule sets provide groups of rules that are organized by function and depth. For example, the Microsoft Basic Design Guidelines Rules and the Microsoft Extended Design Guidelines Rules contain rules that focus on usability and maintainability issues, with added emphasis on naming rules in the Extended rule set. You can create and modify a custom rule set to meet specific project needs associated with code analysis. To create a custom rule set, you open one or more standard rule sets in the rule set editor. Creating and modifying a custom rule set requires Visual Studio Premium or Ultimate. You can check How to: Create a Custom Rule Set on MSDN for more details: http://msdn.microsoft.com/en-us/library/dd264974.aspx

Q & A

Visual Studio static code analysis vs. FxCop vs. StyleCop: http://www.excella.com/blog/stylecop-vs-fxcop-difference-between-code-analysis-tools/
Code Analysis for SharePoint Apps and SPDisposeCheck? This post lists some of the rule sets you can run specifically for SharePoint applications and how to integrate SPDisposeCheck as well.
Code Analysis for SQL Server Database Projects? This post illustrates how to run static code analysis on T-SQL through SSDT.
ReSharper 8 vs. Visual Studio 2013? This document lists some of the features that are provided by ReSharper 8 but are missing or not as fully implemented in Visual Studio 2013.
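To make the In Source suppression option concrete, here is a small sketch of the kind of attribute the Suppress Message command inserts. The class, method and justification text are invented for illustration; only the attribute itself and rule CA1062 ("Validate arguments of public methods", in the Microsoft.Design category) are real.

    using System.Diagnostics.CodeAnalysis;

    public static class StringHelpers
    {
        // In-source suppression of a single warning, scoped to this method only.
        // Note: SuppressMessage is conditional on the CODE_ANALYSIS compilation symbol,
        // which Visual Studio typically defines when code analysis runs on the build.
        [SuppressMessage("Microsoft.Design", "CA1062:ValidateArgumentsOfPublicMethods",
            Justification = "Callers are internal and always pass a non-null value.")]
        public static int CountWords(string text)
        {
            // Without the suppression (or a null check), CA1062 would warn here,
            // because 'text' is dereferenced without being validated.
            return text.Split(' ').Length;
        }
    }

The In Suppression File option generates the same attribute in GlobalSuppressions.cs as an assembly-level attribute, with Scope and Target values pointing back at the method that produced the warning.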
References

A Few Billion Lines of Code Later: Using Static Analysis to Find Bugs in the Real World – http://cacm.acm.org/magazines/2010/2/69354-a-few-billion-lines-of-code-later/fulltext
What is New in Code Analysis for Visual Studio 2013 – http://blogs.msdn.com/b/visualstudioalm/archive/2013/07/03/what-is-new-in-code-analysis-for-visual-studio-2013.aspx
Analyze the code quality of Windows Store apps using Visual Studio static code analysis – http://msdn.microsoft.com/en-us/library/windows/apps/hh441471.aspx
[Hands-on lab] Using Code Analysis with Visual Studio 2012 to Improve Code Quality – http://download.microsoft.com/download/A/9/2/A9253B14-5F23-4BC8-9C7E-F5199DB5F831/Using%20Code%20Analysis%20with%20Visual%20Studio%202012%20to%20Improve%20Code%20Quality.docx

Originally posted at "Hosam Kamel | Developer & Platform Evangelist" http://blogs.msdn.com/hkamel

    Read the article
