Search Results

Search found 1598 results on 64 pages for 'warnings'.

Page 30/64

  • SQL Source Control with Vault Support Officially Released

    - by Ajarn Mark Caldwell
    HOORAY!  It is officially here!  Today, Red-Gate officially released SQL Source Control version 2.1 with support for Vault. While we have been happily and successfully running the beta version (a.k.a. the Early Access release) of Red-Gate SQL Source Control with support for Vault for quite a while, it is good to have the official RTM (or GOLD, or PROD, or whatever you call your “no-longer-in-beta”) release of the product. As a courtesy to those who have not already read the series, allow me to provide you with these links to my previous posts about this fantastic tool. Using SQL Source Control with Fortress or Vault – Part 1 – Introduction and initial thoughts about the tool and source controlling SQL code in general. Using SQL Source Control with Fortress or Vault – Part 2 – Additional details about included features and a few warnings. Source Control and SQL Development – Part 3 – How we did it in the good ol’ days before this product came along. Using SQL Source Control and Vault Professional Part 4 – A few closing thoughts.

    Read the article

  • Are R&D mini-projects a good activity for interns?

    - by dukeofgaming
    I'm going to be in charge of hiring some interns for our software department soon (automotive infotainment systems) and I'm designing an internship program. The main productive activity "menu" I'm planning for them consists of: verification testing; writing unit tests (automated, with an xUnit-compliant framework [several languages in our projects]); documenting code; updating the wiki; updating diagrams & design docs; helping with low-priority tickets (supervised/mentored); hunting down & cleaning compiler/run-time warnings; and refactoring/cleaning code against our coding standards. But I also have this idea that having them do small R&D projects would be good to test their talent and get them to have fun. These mini-projects would be: experimental implementations & optimizations; proof-of-concept implementations for new technologies; small papers (~2-5 pages) doing formal research on the previous two points; and apps (from a mini-project pool). These kinds of projects would be pre-defined and very concrete, although new ideas from the interns themselves would be very welcome. Even if a project is too big or is abandoned, the idea would also be to lay the groundwork so it can be picked up again by another intern or intern team. While I think this is good in concept, I don't know if it would be good in practice, as obviously this would diminish their productivity on "real work" (work with immediate value to the company), but I think it could help bring aboard very bright people and get them to want to stay in the future (which, I think, is the end goal for any internship program). My question here is whether these activities are too open-ended or difficult for the average intern to accomplish, and whether R&D is an efficient use of an intern's time or whether it makes more sense to assign project work to interns instead.

    Read the article

  • Terminal closing itself after 14.04 upgrade

    - by David
    All was fine in 12.04; in this case I'm using VirtualBox on Windows. In recent days the warning message about my Ubuntu version no longer being supported had been coming up pretty often, so yesterday I finally decided to upgrade. The upgrade process ran OK, no errors, no warnings. After rebooting the errors started to happen. Just after booting up there were some errors about video, gnome, and video textures (sorry, I didn't pay attention at the time, so I don't remember them well). Luckily that went away after installing the VirtualBox Guest Additions. But the big problem here is that I can't use the terminal. It opens OK when pressing Ctrl+Alt+T, but most commands cause it to close instantly. For example, df, ls, mv, cd... usually work, although it has closed a few times. But 'find' causes an instant close. 'apt-get update' kills it too, just after it gets the package list from the sources, when it starts processing them. I've tried xterm; everything works and I have none of those problems. I have tried reinstalling konsole, bash-static, and bash-completion, but nothing worked. I have no idea what to do, as there is no error message to search for the cause. It seems to be something related to bash, but that's all I know.
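
    Since xterm behaves normally, one way to narrow this down is to check whether bash itself or one of its startup files is at fault. A minimal sketch of commands one might try (package names are the standard Ubuntu ones; nothing here is specific to this report):

        # In the failing terminal (Ctrl+Alt+T), start a bash that skips ~/.bashrc and ~/.profile;
        # if 'find' and 'apt-get update' survive in it, the problem is in a startup file, not bash
        bash --norc --noprofile

        # From xterm, reinstall bash and its completion scripts in case the upgrade left them corrupted
        sudo apt-get update
        sudo apt-get install --reinstall bash bash-completion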

    Read the article

  • recent unreliable wireless connection with Netgear KWGR614 router

    - by gabkdlly
    Recently, my internet connection over wireless ( via a Netgear KWGR614 router ) has become unreliable, on both a Dell laptop running Ubuntu 10.04 as well as my Desktop running Ubuntu 10.10 . The problem does not seem to occur on a laptop running Windows Vista, nor on a Desktop running Windows 7 ( this machine is connected with an ethernet cable ). The problem does not seem to occur on my Openmoko Freerunner ( running Android 1.5 ), though I hardly ever use this device to connect over WLAN, so the problem may have just slipped by. On my main Ubuntu Desktop, I have tried the following wireless devices: a Longshine PCI card ( an old device with an RTL8180L chip ) a D-Link DWL-510 PCI card ( this device threw warnings in dmesg ) a USB device from MSI ( US54EX ). Usually my wireless network shows up in the network manager with a normal signal strength, even when the connection speed is slow or the connection gets reset ( asking me to click connect to re-authenticate my wireless connection ). I know that this router is susceptible to denial of service attacks, as I have previously been able to disrupt its operation by putting an nmap scan into a while loop. Is it possible that someone is jamming my signal ?

    Read the article

  • Synaptics touchpad problem when disabling it and then enabling it

    - by CYREX
    My girlfriend has an HP dv6000. In Ubuntu 10.10 32-bit I use the Synaptics touchpad on it and all is good, but when I disable it and enable it the problem starts. When I press the disable button for the Synaptics touchpad it disables the mouse AND the keyboard. After re-enabling it, the keyboard keys and mouse clicks do not work. If I click on the panel below, for example the Applications, Places or System buttons, the focus gets stuck there forever. I can open Nautilus by clicking on it, but I cannot use the menus, the ALT+F2 function, see the wireless connections, lower the sound through the panel, etc. Here comes the weird part. If I press CTRL+ALT+F1 (or any other tty for that matter) and then come back to CTRL+ALT+F7 where the GUI is, everything works perfectly again. This started about a week ago but she only told me just now. I checked dmesg, which has for some time now been throwing warnings about "Skipping EDID probe due to cached edid", but from what I could find out this did not create the problem in the first place. NOTE: I do not need to log in when I do CTRL+ALT+F1; I just need to change to another tty and then come back to F7. What could be causing this problem?
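
    As a hedged aside, one way to toggle only the touchpad (and leave the keyboard alone) is through xinput rather than the desktop's disable button; the device name below is a typical Synaptics name and is only an assumption until checked against the output of xinput list:

        # List input devices and note the touchpad's exact name or id
        xinput list

        # Disable, then re-enable, just the touchpad (replace the name with the one shown above)
        xinput set-prop "SynPS/2 Synaptics TouchPad" "Device Enabled" 0
        xinput set-prop "SynPS/2 Synaptics TouchPad" "Device Enabled" 1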

    Read the article

  • How to install Tor (Web Browser) in Ubuntu 12.10?

    - by Zignd
    I would like to install Tor, but I'm having some problems. I know that someone will say "This question is an exact duplicate of How to install tor?", but it's not, because that other question cannot be applied to Ubuntu 12.10, as the deb command is not available anymore. I did some research, and even the resources available at Tor's official website cannot be applied to Ubuntu 12.10. I tried to use the deb command (as the above question says: deb http://deb.torproject.org/torproject.org <DISTRIBUTION> main) and the Terminal says deb: command not found, and when I try to install it, it says E: Unable to locate package deb. I've also tried to use the ppa: ubun-tor, but it's not compatible with Quantal Quetzal, because it's too old. I've also tried to use sudo apt-get install tor, but the browser icon doesn't show up after installation, and when I run the command tor in the Terminal I get the following error message: Nov 26 10:59:25.731 [notice] Tor v0.2.3.22-rc (git-4a0c70a817797420) running on Linux. Nov 26 10:59:25.731 [notice] Tor can't help you if you use it wrong! Learn how to be safe at https://www.torproject.org/download/download#warning Nov 26 10:59:25.731 [notice] Read configuration file "/etc/tor/torrc". Nov 26 10:59:25.737 [notice] Initialized libevent version 2.0.19-stable using method epoll (with changelist). Good. Nov 26 10:59:25.737 [notice] Opening Socks listener on 127.0.0.1:9050 Nov 26 10:59:25.737 [warn] Could not bind to 127.0.0.1:9050: Address already in use. Is Tor already running? Nov 26 10:59:25.737 [warn] Failed to parse/validate config: Failed to bind one of the listener ports. Nov 26 10:59:25.737 [err] Reading config failed--see warnings above. Thanks in advance.
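
    For context, the deb ... line quoted above is an APT source entry, not a command; it belongs in a sources file. A rough sketch of the usual repository setup for 12.10 follows (the quantal codename and key id should be verified against the current instructions on torproject.org; note that the tor package is only the daemon, while the Tor Browser Bundle is a separate download, which would explain the missing browser icon):

        # Add the Tor project repository for Ubuntu 12.10 (codename quantal)
        echo "deb http://deb.torproject.org/torproject.org quantal main" | \
            sudo tee /etc/apt/sources.list.d/tor.list

        # Import the repository signing key (key id is an assumption; verify it on torproject.org)
        gpg --keyserver keys.gnupg.net --recv 886DDD89
        gpg --export A3C4F0F979CAA22CDBA8F512EE8CBC9E886DDD89 | sudo apt-key add -

        sudo apt-get update
        sudo apt-get install tor

        # "Address already in use" usually means the tor daemon is already running as a service
        sudo service tor status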

    Read the article

  • How (and when) to move users to mysqli and PDO_MYSQL?

    - by cj
    An important discussion on the PHP "internals" development mailing list is taking place. It's one that you should take some note of. It concerns the next step in transitioning PHP applications away from the very old mysql extension and towards adopting the much better mysqli extension or PDO_MYSQL driver for PDO. This would allow the mysql extension to, at some as-yet undetermined time in the future, be removed. Both mysqli and PDO_MYSQL have been around for many years, and have various advantages: http://php.net/manual/en/mysqlinfo.api.choosing.php The initial RFC for this next step is at https://wiki.php.net/rfc/mysql_deprecation I would expect the RFC to change substantially based on current discussion. The crux of that discussion is the timing of the next step of deprecation. There is also discussion of the carrot approach (showing users the benefits of moving) and the stick approach (displaying warnings when the mysql extension is used). As always, there is a lot of guesswork going on as to what MySQL APIs are in current use by PHP applications, how those applications are deployed, and what their upgrade cycle is. This is where you can add your weight to the discussion - and also help by spreading the word to move to mysqli or PDO_MYSQL. An example of such a 'carrot' is the excellent summary at Ulf Wendel's blog: http://blog.ulf-wendel.de/2012/php-mysql-why-to-upgrade-extmysql/ I want to repeat that no time frame for the eventual removal of the mysql extension is set. I expect it to be some years away.
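
    For anyone sizing up their own codebase ahead of this change, a quick, hedged sketch of how to see which MySQL APIs a PHP build loads and where the legacy ext/mysql calls live (the project path and file extension are placeholders):

        # Which MySQL-related extensions does this PHP build load?
        php -m | grep -iE 'mysql|pdo'

        # Locate legacy ext/mysql calls (mysql_connect, mysql_query, ...) in a project tree
        grep -rnE '\bmysql_[a-z_]+\s*\(' --include='*.php' /var/www/myapp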

    Read the article

  • New site not appearing in index after change of address, no feedback from google webmaster tools

    - by Duffy
    Our change of address seems to not be taking effect. Here's the story so far: We're a web company and our product is called The New Hive. Our site used to be at thenewhive.com, but we decided to switch to newhive.com (drop the "the", it's cleaner). So the timeline of what I've tried, starting on July 29th: used 301 redirects for all pages (e.g. thenewhive.com/tag/art = newhive.com/tag/art) At this point we noticed that we had disappeared from search results when searching "The New Hive", the front page used to be all links to our site plus a couple news articles about the company. So on August 5th I: verified new domain in webmaster tools (old domain was already verified) submitted a change of address request on August 5th with Webmaster Tools / Configuration / Change of Address Then after another week, on August 13th I did this: Went to Webmaster Tools / Health / Fetch as google fetched our homepage and a couple sub pages, all successfully clicked "Submit to Index" for homepage As of today (August 23rd) we're still not showing up in the index. We're getting no warnings or feedback of any kind from the dashboard so I'm inclined to think something's broken with the dashboard rather than that something's wrong with our site from an SEO perspective. From the dashboard: No new messages or recent critical issues. Crawl Errors: No data available. From Health - Index Status: Total indexed 0 Ever crawled 42,490 Not selected 12 Blocked by robots 0 I'm really at a loss here, any help would be appreciated.
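
    One quick sanity check worth doing in a case like this is confirming that the old URLs really answer with permanent redirects to the new domain; a small sketch (any representative URL will do, and curl is assumed to be available):

        # -I fetches only the headers; look for "301 Moved Permanently" and a Location: header
        # pointing at newhive.com
        curl -I http://thenewhive.com/tag/art

        # Follow the whole redirect chain and show just the status and Location lines
        curl -sIL http://thenewhive.com/ | grep -E '^(HTTP|Location)'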

    Read the article

  • recent unreliable wireless connection

    - by gabkdlly
    Recently, my internet connection over wireless ( via a Netgear KWGR614 router ) has become unreliable, on both a Dell laptop running Ubuntu 10.04 as well as my Desktop running Ubuntu 10.10 . The problem does not seem to occur on a laptop running Windows Vista, nor on a Desktop running Windows 7 ( this machine is connected with an ethernet cable ). The problem does not seem to occur on my Openmoko Freerunner ( running Android 1.5 ), though I hardly ever use this device to connect over WLAN, so the problem may have just slipped by. On my main Ubuntu Desktop, I have tried the following wireless devices: a Longshine PCI card ( an old device with an RTL8180L chip ) a D-Link DWL-510 PCI card ( this device threw warnings in dmesg ) a USB device from MSI ( US54EX ). Usually my wireless network shows up in the network manager with a normal signal strength, even when the connection speed is slow or the connection gets reset ( asking me to click connect to re-authenticate my wireless connection ). I have observed this problem with a Netgear KWGR614 Router ( with the manufacturers firmware ), as well as with a TP-LINK TL-WR741ND router running OpenWrt. Taking a look at my routers logs, I find many instances of the following line: Tuesday,04 Jan 2011 03:53:01 [TCP SYN Flood][Deny access policy matched, dropping packet] I know that the Netgear router is susceptible to denial of service attacks, as I have previously been able to disrupt its operation by putting an nmap scan into a while loop. I use WEP or WPA to encrypt the wireless network. Is it possible that someone is jamming my signal ?
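
    As a hedged diagnostic sketch, it can help to watch the link itself while the drops happen and to check the kernel log afterwards (the interface name wlan0 is an assumption; use the name shown by iwconfig):

        # Watch signal quality and bit rate once per second
        watch -n1 'iwconfig wlan0 | grep -iE "quality|rate"'

        # After a drop, look for driver errors or deauthentication messages
        dmesg | tail -n 50
        grep -iE 'deauth|wlan0' /var/log/syslog | tail -n 20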

    Read the article

  • Areas of support needed when attempting to roll out a new software system

    In general, I think most people tend to be resistant to new systems, or to change generally, because they fear the unknown. Change means that their normal routine will be interrupted until they learn the new routine well enough that it becomes the normal one. In addition, the fear of failure also generates resistance to change. Why would a worker want to move from a process that has worked successfully for them in the past? Their fears overshadow any benefits a new system or business process will bring to their work life. Areas of support needed when attempting to roll out a new software system: Executive/Upper Management Support If there is no support from the top of an organization, how will employees be supportive of the new system? Proper Training Employees need to train on a new system prior to its rollout. The more training employees receive on a new system, the more comfortable they will be with it and the more accepting of the change, because they can see how the changes will benefit them. Employee Incentives One way to reinforce the need for employees to use a new system is to offer incentives to ensure that the system will be used. Employee Discipline/Termination If employees adamantly refuse to use the new system after several warnings, then they need to be formally reprimanded. If this does not work, the employer is forced to replace them.

    Read the article

  • How effectively "sell" a good design in large meetings

    - by User1
    Many times I have witnessed a sad tragedy. Here's what happens: (1) a team design review for a new project; (2) I see a simple design that has quite a few holes; (3) I casually mention the holes and ways to avoid them; (4) the warnings are ignored with comments like "that will 'never' happen in real life"; (5) eventually the things that will 'never' happen, happen; (6) an emergency team design review for the broken project. So what do I do? Copping the "I told you so" attitude is not going to win friends and influence people. Sometimes years go by and the comments from step 3 are forgotten anyway. I definitely don't want to be the annoying pest reminding the world of the gotchas. I often sit back and watch the Titanic sail off to Europe. It's frustrating to see bad designs move forward. It's also frustrating that I can't seem to convince others of the impending peril of the current path. I do worst in team meetings where everyone has different ways of understanding different terms. Also, egos tend to win over reason and thought. I'm looking for good tactics to convince groups of people to use some new and complicated ideas.

    Read the article

  • Weekend Project: Build a Fireball Launcher

    - by Jason Fitzpatrick
    What’s more fun than playing with fire? Shooting it from your hands. Put on your robe and wizard hat, make a stop at the hardware store, and spend the weekend trying to convince your friends you’ve acquired supernatural powers. Over at MAKE Magazine, Joel Johnson explains the impetus for his project: A stalwart of close-quarter magicians for years, the electronic flash gun is a simple device: a battery-powered, hand-held ignitor that uses a “glo-plug” to light a bit of flash paper and cotton, shooting a fireball a few feet into the air. You can buy one from most magic shops for around $50, but if you build one on your own, you’ll not only save a few bucks, you’ll also learn how easy it is to add fire effects to almost any electronics project. (And what gadget couldn’t stand a little more spurting flame?) The parts list is minimal but the end effect is pretty fantastic. Hit up the link below for the full build guide, plenty of warnings, and a weekend project that’s sure to impress.

    Read the article

  • Trouble dual booting Ubuntu 14.04 & Windows 8

    - by AkBKukU
    My motherboard (MSI G45-Z87) has EFI, and I still can't figure out how to make stuff work with it. I had Ubuntu working with Windows 8 before 14.04 came out, and I did a clean install of Ubuntu when it was released, as an upgrade. Since then I haven't been able to boot Windows, but I don't use it anyway, so it didn't affect me. I tried getting it working today so I could use some Adobe software. The last time I had tried to do something with the boot setup, it was giving me warnings that my boot files were too far into the drive. So I followed this guide to use gparted and boot-repair to add a boot partition. I was able to reboot Ubuntu after that. I then proceeded to install Windows 8.1 on a different drive. Now the computer only boots straight into Windows, and if I manually select Ubuntu to boot (but not the drive Ubuntu is on), it stops on a black screen after showing the Ubuntu logo. I've run boot-repair in several different ways but have had no luck. Here is the boot summary info from its recommended settings. I could really use some help.
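
    On a UEFI board it is usually worth checking what the firmware and GRUB actually see; a hedged sketch from a working Ubuntu session (the ESP mount point /boot/efi and the bootloader id are assumptions, and boot-repair may already have done some of this):

        # Show the firmware boot entries and their order
        sudo efibootmgr -v

        # Regenerate grub.cfg so os-prober can pick up the new Windows 8.1 install
        sudo update-grub

        # If the ubuntu entry is missing or stale, reinstall the EFI GRUB image
        sudo grub-install --target=x86_64-efi --efi-directory=/boot/efi --bootloader-id=ubuntu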

    Read the article

  • Error: kernel headers not found. (But they are in place)

    - by Guandalino
    I'm trying to install the Guest Additions in VirtualBox 4.04. The host OS is Ubuntu desktop 11.04 64-bit, the guest OS is Ubuntu server 11.10 64-bit. $ sudo ./VBoxLinuxAdditions.run After some output this line is printed: The headers for the current running kernel were not found. But the headers are installed, at least according to dpkg: $ dpkg --get-selections | grep linux-headers linux-headers-3.0.0-12 install linux-headers-3.0.0-12-server install linux-headers-server install The running kernel is: $ uname -a Linux foobar 3.0.0-12-server #20-Ubuntu SMP Fri Oct 7 16:36:30 UTC 2011 x86_64 x86_64 x86_64 GNU/Linux How do I fix things so that the Guest Additions installer is able to find the kernel headers? Update: added full output. The headers for the current running kernel were not found. If the module compilation fails then this could be the reason. Building the main Guest Additions module ...done. Building the shared folder support module ...fail! (Look at /var/log/vboxadd-install.log to find out what went wrong) Installing the Window System drivers ...fails! (Could not find the X.Org or XFree86 Window System). I don't care about failure #2, because that's a server and I don't need an X server. But I need shared folder support. Some further detail: $ tail /var/log/vboxadd-install.log .......... cc1: some warnings being treated as errors make[2]: *** [/tmp/vbox.0/vfsmod.o] Error 1 make[1]: *** [_module_/tmp/vbox.0] Error 2 make: *** [vboxsf] Error 2
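
    Before digging into the module build failure it is worth ruling out a missing toolchain, since the installer needs more than the headers alone; a hedged sketch for the 11.10 server guest (package names are the standard Ubuntu ones):

        # Install the headers matching the running kernel plus the build tools
        sudo apt-get update
        sudo apt-get install build-essential dkms linux-headers-$(uname -r)

        # Re-run the installer and check its log if a module still fails to build
        sudo ./VBoxLinuxAdditions.run
        tail -n 50 /var/log/vboxadd-install.log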

    Read the article

  • Need help: jBPM installation issue

    - by Kouky
    This is Ms Kahina. I'm getting started with jBPM & Java EE, and I tried to install jBPM 5.4 by following all the steps stated in the jBPM user guide, as below: 1- I've installed Java (JDK 1.7) and Ant (Apache Ant 1.9.4). 2- I've set both the JAVA_HOME and ANT_HOME environment variables. 3- I've downloaded the jBPM 5.4 full installer and run the installation script "ant install.demo" and after that the start script "ant start.demo". But unfortunately jBPM 5.4 is not working. When running the installation script I get a success message, but with the following warnings: [copy] Warning: Could not find file C:\jbpm-installer\db\task-persistence.xml to copy. [copy] Warning: Could not find file C:\jbpm-installer\db\Taskorm.xml to copy. The start demo was successful. However, when I try to open any tool provided with jBPM, such as the jBPM-console, I get the following messages: "Address not found", or sometimes an HTTP 404 status, even though the welcome page of JBoss AS opens at http://localhost:8080/. I need your assistance to sort out this issue so I can move forward in my project, as I have been blocked at the installation stage for more than a week now. I don't know if this is related to JBoss AS7 or to something else I haven't found out. Looking forward to hearing from you. Thanks Kahina
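
    A rough sketch of the first checks that might help here; the server log path is an assumption based on the default jbpm-installer layout bundling JBoss AS 7, so adjust it to whatever directory the installer actually created:

        # Confirm the environment the ant scripts rely on
        echo "$JAVA_HOME" "$ANT_HOME"
        java -version
        ant -version

        # Start the demo again and watch the application server log for deployment errors
        ant start.demo
        tail -f jboss-as-7.*/standalone/log/server.log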

    Read the article

  • Procurement and E-Business Suite Product Analyzers .. Can you use this tool to resolve your SR?

    - by LindaJ-Oracle
    Procurement and E-Business Suite Product Analyzers (Doc ID 1545562.1). Analyzers are query/read-only tools with easy-to-read HTML output. The tools are delivered by EBS Support via My Oracle Support document IDs for ease of use. The Analyzer scripts are meant to be run as part of your production maintenance program by your sysadmin, or by designated end users. The result is an easy-to-read HTML output that provides recommendations, solutions and early warnings about items that should be reviewed and corrected. Each analyzer can be run on demand or scheduled for repeatability and emailed to critical reviewers. There are several Analyzers available for E-Business Suite Applications Technology Group, Financials, and Manufacturing, including some of the following topics. Review them all at (Doc ID 1545562.1). Workflow Concurrent Processing Clone Log Parser Utility (Rapid Clone) Invoices, Payments, Accounting, Suppliers and EBTax Validate Data before Period Close EBTax Setup Payables Trial Balance Internet Expenses AutoInvoice Post-Process ASCP Performance PO Approval iProcurement Items For the Procurement-specific Analyzers, access them directly at: R12 IP Item Analyzer Diagnostic Script (Doc ID 1586248.1) R12: PO Approval Analyzer Diagnostic Script (Doc ID 1525670.1)

    Read the article

  • C++ function templates, function name confusion. This is funny [migrated]

    - by nashmaniac
    Alright, so here's the program, and it works absolutely right:

        #include <iostream>
        using namespace std;

        template <typename T> void Swap(T &a, T &b);

        int main() {
            int i = 10;
            int j = 20;
            cout << "i, j = " << i << " , " << j << endl;
            Swap(i, j);
            cout << "i, j = " << i << " , " << j << endl;
        }

        template <typename T> void Swap(T &a, T &b) {
            T temp;
            temp = a;
            a = b;
            b = temp;
        }

    But when I change the function's name from Swap to swap it generates an error saying:

        error: call of overloaded 'swap(int&, int&)' is ambiguous
        note: candidates are: void swap(T&, T&) [with T = int]
        ||=== Build finished: 1 errors, 0 warnings ===|

    What happened? Is it a rule that function templates have to start with a capital letter?

    Read the article

  • Headphones not working in ubuntu 12.04LTS

    - by mursalat
    So after a million warnings about my no-longer-supported Ubuntu OS, last night I finally upgraded to 12.04 - it went somewhat smoothly (sadly). After my installation I got all excited about the new look when I log in, and all the new shebang. I installed Chrome and went to YouTube to check out some of my music and test Flash in the process. The sound worked awesomely; however, when I plugged in my headphones, I could hear only a buzz-like sound when the bass drops or there is a loud noise, and maybe I didn't even hear that from the headphones but from some other source. Sadly, my headphones are not working, and I am a noob at fixing this stuff on Ubuntu. I've done loads of programming, but when it comes to Linux drivers and settings I seem to get frustrated. So I would really appreciate any help, people! IN SUMMARY: My headphones are not working; my laptop's internal speakers are working fine. To be as helpful as I can, I have pasted the output from lspci -v to http://pastebin.com/VQNzDkZs I have also checked the volume levels in alsamixer and none are muted. If you need any more information, please just ask, and I will be checking this post every 3 to 4 hours! Cheers!
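
    A hedged sketch of the usual first checks when the speakers work but the jack does not (card and device numbers are assumptions; take them from the aplay output):

        # List playback devices, then send a test tone directly to the first card
        aplay -l
        speaker-test -c 2 -D plughw:0,0

        # In alsamixer, press F6 to select the right card and make sure the 'Headphone'
        # channel (and 'Auto-Mute Mode', if present) is neither muted nor at zero
        alsamixer -c 0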

    Read the article

  • Compilation problem [closed]

    - by Misery
    I am trying to compile a program called Triangle Mesh Generator. It is open source code written in ANSI C. I need it to be a callable lib, so I am using a switch (#define) created by its author. However I get this error: /usr/lib/gcc/x86_64-linux-gnu/4.6/../../../x86_64-linux-gnu/crt1.o: In function `_start': collect2: ld returned 1 exit status make: *** [triangle] Error 1 I am not sure why such an error occurs. Worth adding is that compiling it as a standalone program is successful. I am using GCC on 12.04. The lines quoted above constitute the total output. What I put in the terminal is just make in the proper folder. There are no other errors, warnings, or other messages. Link to the sources I found some additional instructions. I'll get back after reading them :] EDIT: I have found some additional instructions that let me compile it. Thanks for the help. I am closing the question now. Regards
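
    The linker message above is the kind you typically get when an object file without a main() is linked as a program. For Shewchuk's Triangle the library switch is, as far as I recall, the TRILIBRARY define, so a hedged sketch of building it as a static library instead of an executable might look like this (flags and file names are assumptions based on the distributed makefile):

        # Compile triangle.c as a library object (no main) and archive it
        gcc -O2 -DTRILIBRARY -c triangle.c -o triangle.o
        ar rcs libtriangle.a triangle.o

        # Link the library into your own program
        gcc -o myprog myprog.c libtriangle.a -lm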

    Read the article

  • Override methods should call base method?

    - by Trevor Pilley
    I'm just running NDepend against some code that I have written and one of the warnings is Overrides of Method() should call base.Method(). The places this occurs are where I have a base class with virtual properties and methods that provide default behaviour but which can be overridden by an inheriting class that doesn't call the base method. For example, in the base class I have a property defined like this: protected virtual char CloseQuote { get { return '"'; } } And then in an inheriting class which uses a different close quote: protected override char CloseQuote { get { return ']'; } } Not all classes which inherit from the base class use different quote characters, hence my initial design. The alternatives I thought of were to have get/set properties in the base class with the defaults set in the constructor: protected BaseClass() { this.CloseQuote = '"'; } protected char CloseQuote { get; set; } public InheritingClass() { this.CloseQuote = ']'; } Or make the base class require the values as constructor args: protected BaseClass(char closeQuote, ...) { this.CloseQuote = closeQuote; } protected char CloseQuote { get; private set; } public InheritingClass() : base(closeQuote: ']', ...) { } Should I use virtual in a scenario where the base implementation may be replaced instead of extended, or should I opt for one of the alternatives I thought of? If so, which would be preferable and why?

    Read the article

  • How to fix GRUB on dualboot with Windows7 and Ubuntu?

    - by b_oliv
    I am a relatively recent user of Linux. I had several releases of Ubuntu installed on my laptop working in dual-boot and never had any issues. Recently, I installed openSUSE because I thought it would be necessary for an assignment at my university. It turns out it wasn't, so I returned to Ubuntu and decided to burn the new .iso to a CD and install it. The problem is that during the installation process I almost certainly messed up the partitions, and now, whenever I try to load Windows 7, it tells me that a required device is inaccessible. So, I reinstalled Ubuntu again, and now all I get is that I am redirected to the GRUB menu without any warnings. I tried creating a Windows Recovery Disk, but it gives me an "Unexpected I/O error". I suspect it is because it was downloaded from the Internet and maybe some files weren't there. I tried everything without success, so I decided to ask here, in the hope that I can receive some help and also learn how to help others with this in the future. Here is my boot info summary: http://paste.ubuntu.com/1344990/ Also, I might add that in the boot-repair advanced options, the "repair Windows boot files" box is locked, so I can't check it. EDIT: Apparently the box is locked because, from what I understood after reading the boot-repair information, everything is fine with my Windows boot files... I still need some guidance though.
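
    A hedged sketch of the checks that usually come next in this situation, mostly to confirm what GRUB and the partition table still know about the Windows install (device names will differ per machine):

        # See which partitions exist and which ones hold Windows boot files
        sudo fdisk -l
        sudo os-prober

        # Regenerate the GRUB menu after any partition changes
        sudo update-grub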

    Read the article

  • Lost access to the unity interface how to fix? (ubuntu 11.10)

    - by Tal Galili
    OK, this is embarrassing: I installed the Compiz Config Settings Manager and tried to set it so that the transition time between changing tabs (using Alt+Tab) would be shorter. By accident I unticked the checkbox for something else, and it asked me about a conflict - I pressed the "x" button to close the window, and as a result I stopped seeing the Unity interface. That is, I cannot see any buttons on the left side. I went to the terminal (Ctrl+Alt+F1) and ran ccsm. As a result I got the following error:

        $ ccsm
        /usr/lib/python2.7/site-packages/gtk-2.0/gtk/__init__.py:57: GtkWarning: could not open display
          warnings.warn(str(e), _gtk.Warning)
        Traceback (most recent call last):
          File "/usr/bin/ccsm", line 93, in <module>
            import ccm
          File "/usr/lib/python2.7/site-packages/ccm/__init__.py", line 1, in <module>
            from ccm.Conflicts import *
          File "/usr/lib/python2.7/site-packages/ccm/Conflicts.py", line 26, in <module>
            from ccm.Constants import *
          File "/usr/lib/python2.7/site-packages/ccm/Constants.py", line 29, in <module>
            CurrentScreenNum = gtk.gdk.display_get_default().get_default_screen().get_number()
        AttributeError: 'NoneType' object has no attribute 'get_default_screen'

    What should I do next? Thanks.
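
    The traceback only says that ccsm could not open a display, which is expected on a text console. A hedged sketch of pointing the tools at the running graphical session instead, and of resetting Unity on 11.10 (the display number :0 and the unity --reset option are assumptions to verify):

        # From the tty, aim the tools at the existing X session
        DISPLAY=:0 ccsm

        # Reset the Unity/Compiz configuration and restart Unity (11.10-era command)
        DISPLAY=:0 unity --reset &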

    Read the article

  • How to fix legacy code that uses <string.h> unsafely?

    - by Snowbody
    We've got a bunch of legacy code, written in straight C (some of which is K&R!), which for many, many years has been compiled using Visual C 6.0 (circa 1998) on an XP machine. We realize this is unsustainable, and we're trying to move it to a modern compiler. Political constraints dictate that the most recent compiler allowed is VC++ 2005. When compiling the project, there are many warnings about the unsafe string manipulation functions used (sprintf(), strcpy(), etc.). Reviewing some of these places shows that the code is indeed unsafe; it does not check for buffer overflows. The compiler warning recommends that we move to using sprintf_s(), strcpy_s(), etc. However, these are Microsoft-created (and proprietary) functions and aren't available on (say) gcc (although we're primarily a Windows shop, we do have some clients on various flavors of *NIX). How ought we to proceed? I don't want to roll our own string libraries. I only want to go over the code once. I'd rather not switch to C++ if we can help it.
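
    Whichever replacement strategy wins, a single pass usually starts with an inventory of the unsafe calls so the scale of the change is known; a hedged sketch (the function list and file extensions are assumptions to adjust):

        # Locate and count the classic unsafe calls across the tree
        grep -rnE '\b(strcpy|strcat|sprintf|vsprintf|gets|scanf)\s*\(' \
            --include='*.c' --include='*.h' . | tee unsafe_calls.txt
        wc -l unsafe_calls.txt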

    Read the article

  • OpenVPN not connecting

    - by LandArch
    There have been a number of post similar to this, but none seem to satisfy my need. Plus I am a Ubuntu newbie. I followed this tutorial to completely set up OpenVPN on Ubuntu 12.04 server. Here is my server.conf file ################################################# # Sample OpenVPN 2.0 config file for # # multi-client server. # # # # This file is for the server side # # of a many-clients <-> one-server # # OpenVPN configuration. # # # # OpenVPN also supports # # single-machine <-> single-machine # # configurations (See the Examples page # # on the web site for more info). # # # # This config should work on Windows # # or Linux/BSD systems. Remember on # # Windows to quote pathnames and use # # double backslashes, e.g.: # # "C:\\Program Files\\OpenVPN\\config\\foo.key" # # # # Comments are preceded with '#' or ';' # ################################################# # Which local IP address should OpenVPN # listen on? (optional) local 192.168.13.8 # Which TCP/UDP port should OpenVPN listen on? # If you want to run multiple OpenVPN instances # on the same machine, use a different port # number for each one. You will need to # open up this port on your firewall. port 1194 # TCP or UDP server? proto tcp ;proto udp # "dev tun" will create a routed IP tunnel, # "dev tap" will create an ethernet tunnel. # Use "dev tap0" if you are ethernet bridging # and have precreated a tap0 virtual interface # and bridged it with your ethernet interface. # If you want to control access policies # over the VPN, you must create firewall # rules for the the TUN/TAP interface. # On non-Windows systems, you can give # an explicit unit number, such as tun0. # On Windows, use "dev-node" for this. # On most systems, the VPN will not function # unless you partially or fully disable # the firewall for the TUN/TAP interface. dev tap0 up "/etc/openvpn/up.sh br0" down "/etc/openvpn/down.sh br0" ;dev tun # Windows needs the TAP-Win32 adapter name # from the Network Connections panel if you # have more than one. On XP SP2 or higher, # you may need to selectively disable the # Windows firewall for the TAP adapter. # Non-Windows systems usually don't need this. ;dev-node MyTap # SSL/TLS root certificate (ca), certificate # (cert), and private key (key). Each client # and the server must have their own cert and # key file. The server and all clients will # use the same ca file. # # See the "easy-rsa" directory for a series # of scripts for generating RSA certificates # and private keys. Remember to use # a unique Common Name for the server # and each of the client certificates. # # Any X509 key management system can be used. # OpenVPN can also use a PKCS #12 formatted key file # (see "pkcs12" directive in man page). ca "/etc/openvpn/ca.crt" cert "/etc/openvpn/server.crt" key "/etc/openvpn/server.key" # This file should be kept secret # Diffie hellman parameters. # Generate your own with: # openssl dhparam -out dh1024.pem 1024 # Substitute 2048 for 1024 if you are using # 2048 bit keys. dh dh1024.pem # Configure server mode and supply a VPN subnet # for OpenVPN to draw client addresses from. # The server will take 10.8.0.1 for itself, # the rest will be made available to clients. # Each client will be able to reach the server # on 10.8.0.1. Comment this line out if you are # ethernet bridging. See the man page for more info. ;server 10.8.0.0 255.255.255.0 # Maintain a record of client <-> virtual IP address # associations in this file. 
If OpenVPN goes down or # is restarted, reconnecting clients can be assigned # the same virtual IP address from the pool that was # previously assigned. ifconfig-pool-persist ipp.txt # Configure server mode for ethernet bridging. # You must first use your OS's bridging capability # to bridge the TAP interface with the ethernet # NIC interface. Then you must manually set the # IP/netmask on the bridge interface, here we # assume 10.8.0.4/255.255.255.0. Finally we # must set aside an IP range in this subnet # (start=10.8.0.50 end=10.8.0.100) to allocate # to connecting clients. Leave this line commented # out unless you are ethernet bridging. server-bridge 192.168.13.101 255.255.255.0 192.168.13.105 192.168.13.200 # Configure server mode for ethernet bridging # using a DHCP-proxy, where clients talk # to the OpenVPN server-side DHCP server # to receive their IP address allocation # and DNS server addresses. You must first use # your OS's bridging capability to bridge the TAP # interface with the ethernet NIC interface. # Note: this mode only works on clients (such as # Windows), where the client-side TAP adapter is # bound to a DHCP client. ;server-bridge # Push routes to the client to allow it # to reach other private subnets behind # the server. Remember that these # private subnets will also need # to know to route the OpenVPN client # address pool (10.8.0.0/255.255.255.0) # back to the OpenVPN server. push "route 192.168.13.1 255.255.255.0" push "dhcp-option DNS 192.168.13.201" push "dhcp-option DOMAIN blahblah.dyndns-wiki.com" ;push "route 192.168.20.0 255.255.255.0" # To assign specific IP addresses to specific # clients or if a connecting client has a private # subnet behind it that should also have VPN access, # use the subdirectory "ccd" for client-specific # configuration files (see man page for more info). # EXAMPLE: Suppose the client # having the certificate common name "Thelonious" # also has a small subnet behind his connecting # machine, such as 192.168.40.128/255.255.255.248. # First, uncomment out these lines: ;client-config-dir ccd ;route 192.168.40.128 255.255.255.248 # Then create a file ccd/Thelonious with this line: # iroute 192.168.40.128 255.255.255.248 # This will allow Thelonious' private subnet to # access the VPN. This example will only work # if you are routing, not bridging, i.e. you are # using "dev tun" and "server" directives. # EXAMPLE: Suppose you want to give # Thelonious a fixed VPN IP address of 10.9.0.1. # First uncomment out these lines: ;client-config-dir ccd ;route 10.9.0.0 255.255.255.252 # Then add this line to ccd/Thelonious: # ifconfig-push 10.9.0.1 10.9.0.2 # Suppose that you want to enable different # firewall access policies for different groups # of clients. There are two methods: # (1) Run multiple OpenVPN daemons, one for each # group, and firewall the TUN/TAP interface # for each group/daemon appropriately. # (2) (Advanced) Create a script to dynamically # modify the firewall in response to access # from different clients. See man # page for more info on learn-address script. ;learn-address ./script # If enabled, this directive will configure # all clients to redirect their default # network gateway through the VPN, causing # all IP traffic such as web browsing and # and DNS lookups to go through the VPN # (The OpenVPN server machine may need to NAT # or bridge the TUN/TAP interface to the internet # in order for this to work properly). 
;push "redirect-gateway def1 bypass-dhcp" # Certain Windows-specific network settings # can be pushed to clients, such as DNS # or WINS server addresses. CAVEAT: # http://openvpn.net/faq.html#dhcpcaveats # The addresses below refer to the public # DNS servers provided by opendns.com. ;push "dhcp-option DNS 208.67.222.222" ;push "dhcp-option DNS 208.67.220.220" # Uncomment this directive to allow different # clients to be able to "see" each other. # By default, clients will only see the server. # To force clients to only see the server, you # will also need to appropriately firewall the # server's TUN/TAP interface. ;client-to-client # Uncomment this directive if multiple clients # might connect with the same certificate/key # files or common names. This is recommended # only for testing purposes. For production use, # each client should have its own certificate/key # pair. # # IF YOU HAVE NOT GENERATED INDIVIDUAL # CERTIFICATE/KEY PAIRS FOR EACH CLIENT, # EACH HAVING ITS OWN UNIQUE "COMMON NAME", # UNCOMMENT THIS LINE OUT. ;duplicate-cn # The keepalive directive causes ping-like # messages to be sent back and forth over # the link so that each side knows when # the other side has gone down. # Ping every 10 seconds, assume that remote # peer is down if no ping received during # a 120 second time period. keepalive 10 120 # For extra security beyond that provided # by SSL/TLS, create an "HMAC firewall" # to help block DoS attacks and UDP port flooding. # # Generate with: # openvpn --genkey --secret ta.key # # The server and each client must have # a copy of this key. # The second parameter should be '0' # on the server and '1' on the clients. ;tls-auth ta.key 0 # This file is secret # Select a cryptographic cipher. # This config item must be copied to # the client config file as well. ;cipher BF-CBC # Blowfish (default) ;cipher AES-128-CBC # AES ;cipher DES-EDE3-CBC # Triple-DES # Enable compression on the VPN link. # If you enable it here, you must also # enable it in the client config file. comp-lzo # The maximum number of concurrently connected # clients we want to allow. ;max-clients 100 # It's a good idea to reduce the OpenVPN # daemon's privileges after initialization. # # You can uncomment this out on # non-Windows systems. user nobody group nogroup # The persist options will try to avoid # accessing certain resources on restart # that may no longer be accessible because # of the privilege downgrade. persist-key persist-tun # Output a short status file showing # current connections, truncated # and rewritten every minute. status openvpn-status.log # By default, log messages will go to the syslog (or # on Windows, if running as a service, they will go to # the "\Program Files\OpenVPN\log" directory). # Use log or log-append to override this default. # "log" will truncate the log file on OpenVPN startup, # while "log-append" will append to it. Use one # or the other (but not both). ;log openvpn.log ;log-append openvpn.log # Set the appropriate level of log # file verbosity. # # 0 is silent, except for fatal errors # 4 is reasonable for general usage # 5 and 6 can help to debug connection problems # 9 is extremely verbose verb 3 # Silence repeating messages. At most 20 # sequential messages of the same message # category will be output to the log. ;mute 20 I am using Windows 7 as the Client and set that up accordingly using the OpenVPN GUI. 
That conf file is as follows: ############################################## # Sample client-side OpenVPN 2.0 config file # # for connecting to multi-client server. # # # # This configuration can be used by multiple # # clients, however each client should have # # its own cert and key files. # # # # On Windows, you might want to rename this # # file so it has a .ovpn extension # ############################################## # Specify that we are a client and that we # will be pulling certain config file directives # from the server. client # Use the same setting as you are using on # the server. # On most systems, the VPN will not function # unless you partially or fully disable # the firewall for the TUN/TAP interface. dev tap0 up "/etc/openvpn/up.sh br0" down "/etc/openvpn/down.sh br0" ;dev tun # Windows needs the TAP-Win32 adapter name # from the Network Connections panel # if you have more than one. On XP SP2, # you may need to disable the firewall # for the TAP adapter. ;dev-node MyTap # Are we connecting to a TCP or # UDP server? Use the same setting as # on the server. proto tcp ;proto udp # The hostname/IP and port of the server. # You can have multiple remote entries # to load balance between the servers. blahblah.dyndns-wiki.com 1194 ;remote my-server-2 1194 # Choose a random host from the remote # list for load-balancing. Otherwise # try hosts in the order specified. ;remote-random # Keep trying indefinitely to resolve the # host name of the OpenVPN server. Very useful # on machines which are not permanently connected # to the internet such as laptops. resolv-retry infinite # Most clients don't need to bind to # a specific local port number. nobind # Downgrade privileges after initialization (non-Windows only) user nobody group nobody # Try to preserve some state across restarts. persist-key persist-tun # If you are connecting through an # HTTP proxy to reach the actual OpenVPN # server, put the proxy server/IP and # port number here. See the man page # if your proxy server requires # authentication. ;http-proxy-retry # retry on connection failures ;http-proxy [proxy server] [proxy port #] # Wireless networks often produce a lot # of duplicate packets. Set this flag # to silence duplicate packet warnings. ;mute-replay-warnings # SSL/TLS parms. # See the server config file for more # description. It's best to use # a separate .crt/.key file pair # for each client. A single ca # file can be used for all clients. ca "C:\\Program Files\OpenVPN\config\\ca.crt" cert "C:\\Program Files\OpenVPN\config\\ChadMWade-THINK.crt" key "C:\\Program Files\OpenVPN\config\\ChadMWade-THINK.key" # Verify server certificate by checking # that the certicate has the nsCertType # field set to "server". This is an # important precaution to protect against # a potential attack discussed here: # http://openvpn.net/howto.html#mitm # # To use this feature, you will need to generate # your server certificates with the nsCertType # field set to "server". The build-key-server # script in the easy-rsa folder will do this. ns-cert-type server # If a tls-auth key is used on the server # then every client must also have the key. ;tls-auth ta.key 1 # Select a cryptographic cipher. # If the cipher option is used on the server # then you must also specify it here. ;cipher x # Enable compression on the VPN link. # Don't enable this unless it is also # enabled in the server config file. comp-lzo # Set log file verbosity. verb 3 # Silence repeating messages ;mute 20 Not sure whats left to do.
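
    When the tunnel refuses to come up, the server's own log usually says why; a hedged sketch of the first checks on the Ubuntu 12.04 server (paths match the config quoted above; on Ubuntu the init script normally logs to syslog):

        # Is the service running, and is anything listening on TCP 1194?
        sudo service openvpn status
        sudo netstat -tlnp | grep 1194

        # Run the server in the foreground with more verbosity to watch the handshake
        sudo openvpn --config /etc/openvpn/server.conf --verb 5

        # Or follow the log while the Windows client tries to connect
        grep ovpn /var/log/syslog | tail -n 50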

    Read the article

  • VS 2012 Code Review – Before Check In OR After Check In?

    - by Tarun Arora
    “Is Code Review Important and Effective?” There is a consensus across the industry that code review is an effective and practical way to collar code inconsistency and possible defects early in the software development life cycle. Among others some of the advantages of code reviews are, Bugs are found faster Forces developers to write readable code (code that can be read without explanation or introduction!) Optimization methods/tricks/productive programs spread faster Programmers as specialists "evolve" faster It's fun “Code review is systematic examination (often known as peer review) of computer source code. It is intended to find and fix mistakes overlooked in the initial development phase, improving both the overall quality of software and the developers' skills. Reviews are done in various forms such as pair programming, informal walkthroughs, and formal inspections.” Wikipedia No where does the definition mention whether its better to review code before the code has been committed to version control or after the commit has been performed. No matter which side you favour, Visual Studio 2012 allows you to request for a code review both before check in and also request for a review after check in. Let’s weigh the pros and cons of the approaches independently. Code Review Before Check In or Code Review After Check In? Approach 1 – Code Review before Check in Developer completes the code and feels the code quality is appropriate for check in to TFS. The developer raises a code review request to have a second pair of eyes validate if the code abides to the recommended best practices, will not result in any defects due to common coding mistakes and whether any optimizations can be made to improve the code quality.                                             Image 1 – code review before check in Pros Everything that gets committed to source control is reviewed. Minimizes the chances of smelly code making its way into the code base. Decreases the cost of fixing bugs, remember, the earlier you find them, the lesser the pain in fixing them. Cons Development Code Freeze – Since the changes aren’t in the source control yet. Further development can only be done off-line. The changes have not been through a CI build, hard to say whether the code abides to all build quality standards. Inconsistent! Cumbersome to track the actual code review process.  Not every change to the code base is worth reviewing, a lot of effort is invested for very little gain. Approach 2 – Code Review after Check in Developer checks in, random code reviews are performed on the checked in code.                                                      Image 2 – Code review after check in Pros The code has already passed the CI build and run through any code analysis plug ins you may have running on the build server. Instruct the developer to ensure ZERO fx cop, style cop and static code analysis before check in. Code is cleaner and smell free even before the code review. No Offline development, developers can continue to develop against the source control. Cons Bad code can easily make its way into the code base. Since the review take place much later in the cycle, the cost of fixing issues can prove to be much higher. Approach 3 – Hybrid Approach The community advocates a more hybrid approach, a blend of tooling and human accountability quotient.                                                               Image 3 – Hybrid Approach 1. Code review high impact check ins. 
It is not possible to review everything, by setting up code review check in policies you can end up slowing your team. More over, the code that you are reviewing before check in hasn't even been through a green CI build either. 2. Tooling. Let the tooling work for you. By running static analysis, fx cop, style cop and other plug ins on the build agent, you can identify the real issues that in my opinion can't possibly be identified using human reviews. Configure the tooling to report back top 10 issues every day. Mandate the manual code review of individuals who keep making it to this list of shame more often. 3. During Merge. I would prefer eliminating some of the other code issues during merge from Main branch to the release branch. In a scrum project this is still easier because cheery picking the merges is a possibility and the size of code being reviewed is still limited. Let the tooling work for you, if some one breaks the CI build often, put them on a gated check in build course until you see improvement. If some one appears on the top 10 list of shame generated via the build then ensure that all their code is reviewed till you see improvement. At the end of the day, the goal is to ensure that the code being delivered is top quality. By enforcing a code review before any check in, you force the developer to work offline or stay put till the review is complete. What do the experts say? So I asked a few expects what they thought of “Code Review quality gate before Checking in code?" Terje Sandstrom | Microsoft ALM MVP You mean a review quality gate BEFORE checking in code????? That would mean a lot of code staying either local or in shelvesets, and not even been through a CI build, and a green CI build being the main criteria for going further, f.e. to the review state. I would not like code laying around with no checkin’s. Having a requirement that code is checked in small pieces, 4-8 hours work max, and AT LEAST daily checkins, a manual code review comes second down the lane. I would expect review quality gates to happen before merging back to main, or before merging to release.  But that would all be on checked-in code.  Branching is absolutely one way to ease the pain.   Another way we are using is automatic quality builds, running metrics, coverage, static code analysis.  Unfortunately it takes some time, would be great to be on CI’s – but…., so it’s done scheduled every night. Based on this we get, among other stuff,  top 10 lists of suspicious code, which is then subjected to reviews.  If a person seems to be very popular on these top 10 lists, we subject every check in from that person to a review for a period. That normally helps.   None of the clients I have can afford to have every checkin reviewed, so we need to find ways around it. I don’t disagree with the nicety of having all the code reviewed, but I find it hard to find those resources in today’s enterprises. David V. Corbin | Visual Studio ALM Ranger I tend to agree with both sides. I hate having code that is not checked in, but at the same time hate having “bad” code in the repository. I have found that branching is one approach to solving this dilemma. Code is checked into the private/feature branch before the review, but is not merged over to the “official” branch until after the review. I advocate both, depending on circumstance (especially team dynamics)   - The “pre-checkin” is usually for elements that may impact the project as a whole. Think of it as another “gate” along with passing unit tests. 
- The “post-checkin” may very well not be at the changeset level, but correlates to a review at the “user story” level.   Again, this depends on team dynamics in play…. Robert MacLean | Microsoft ALM MVP I do not think there is no right answer for the industry as a whole. In short the question is why do you do reviews? Your question implies risk mitigation, so in low risk areas you can get away with it after check in while in high risk you need to do it before check in. An example is those new to a team or juniors need it much earlier (maybe that is before checkin, maybe that is soon after) than seniors who have shipped twenty sprints on the team. Abhimanyu Singhal | Visual Studio ALM Ranger Depends on per scenario basis. We recommend post check-in reviews when: 1. We don't want to block other checks and processes on manual code reviews. Manual reviews take time, and some pieces may not require manual reviews at all. 2. We need to trace all changes and track history. 3. We have a code promotion strategy/process in place. For risk mitigation, post checkin code can be promoted to Accepted branches. Or can be rejected. Pre Checkin Reviews are used when 1. There is a high risk factor associated 2. Reviewers are generally (most of times) have immediate availability. 3. Team does not have strict tracking needs. Simply speaking, no single process fits all scenarios. You need to select what works best for your team/project. Thomas Schissler | Visual Studio ALM Ranger This is an interesting discussion, I’m right now discussing details about executing code reviews with my teams. I see and understand the aspects you brought in, but there is another side as well, I’d like to point out. 1.) If you do reviews per check in this is not very practical as a hard rule because this will disturb the flow of the team very often or it will lead to reduce the checkin frequency of the devs which I would not accept. 2.) If you do later reviews, for example if you review PBIs, it is not easy to find out which code you should review. Either you review all changesets associate with the PBI, but then you might review code which has been changed with a later checkin and the dev maybe has already fixed the issue. Or you review the diff of the latest changeset of the PBI with the first but then you might also review changes of other PBIs. Jakob Leander | Sr. Director, Avanade In my experience, manual code review: 1. Does not get done and at the very least does not get redone after changes (regardless of intentions at start of project) 2. When a project actually do it, they often do not do it right away = errors pile up 3. Requires a lot of time discussing/defining the standard and for the team to learn it However code review is very important since e.g. even small memory leaks in a high volume web solution have big consequences In the last years I have advocated following approach for code review - Architects up front do “at least one best practice example” of each type of component and tell the team. Copy from this one. This should include error handling, logging, security etc. - Dev lead on project continuously browse code to validate that the best practices are used. Especially that patterns etc. are not broken. You can do this formally after each sprint/iteration if you want. Once this is validated it is unlikely to “go bad” even during later code changes Agree with customer to rely on static code analysis from Visual Studio as the one and only coding standard. 
This has HUUGE benefits - You can easily tweak to reach the level you desire together with customer - It is easy to measure for both developers/management - It is 100% consistent across code base - It gets validated all the time so you never end up getting hammered by a customer review in the end - It is easy to tell the developer that you do not want code back unless it has zero errors = minimize communication You need to track this at least during nightly builds and make sure team sees total # issues. Do not allow #issues it to grow uncontrolled. On the project I run I require code analysis to have run on code before checkin (checkin rule). This means -  You have to have clean compile (or CA wont run) so this is extra benefit = very few broken builds - You can change a few of the rules to compile as errors instead of warnings. I often do this for “missing dispose” issues which you REALLY do not want in your app Tip: Place your custom CA rules files as part of solution. That  way it works when you do branching etc. (path to CA file is relative in VS) Some may argue that CA is not as good as manual inspection. But since manual inspection in reality suffers from the 3 issues in start it is IMO a MUCH better (and much cheaper) approach from helicopter perspective Tirthankar Dutta | Director, Avanade I think code review should be run both before and after check ins. There are some code metrics that are meant to be run on the entire codebase … Also, especially on multi-site projects, one should strive to architect in a way that lets men manage the framework while boys write the repetitive code… scales very well with the need to review less by containment and imposing architectural restrictions to emphasise the design. Bruno Capuano | Microsoft ALM MVP For code reviews (means peer reviews) in distributed team I use http://www.vsanywhere.com/default.aspx  David Jobling | Global Sr. Director, Avanade Peer review is the only way to scale and its a great practice for all in the team to learn to perform and accept. In my experience you soon learn who's code to watch more than others and tune the attention. Mikkel Toudal Kristiansen | Manager, Avanade If you have several branches in your code base, you will need to merge often. This requires manual merging, when a file has been changed in both branches. It offers a good opportunity to actually review to changed code. So my advice is: Merging between branches should be done as often as possible, it should be done by a senior developer, and he/she should perform a full code review of the code being merged. As for detecting architectural smells and code smells creeping into the code base, one really good third party tools exist: Ndepend (http://www.ndepend.com/, for static code analysis of the current state of the code base). You could also consider adding StyleCop to the solution. Jesse Houwing | Visual Studio ALM Ranger I gave a presentation on this subject on the TechDays conference in NL last year. See my presentation and slides here (talk in Dutch, but English presentation): http://blog.jessehouwing.nl/2012/03/did-you-miss-my-techdaysnl-talk-on-code.html  I’d like to add a few more points: - Before/After checking is mostly a trust issue. If you have a team that does diligent peer reviews and regularly talk/sit together or peer review, there’s no need to enforce a before-checkin policy. The peer peer-programming and regular feedback during development can take care of most of the review requirements as long as the team isn’t under stress. 
- Under stress, enforce pre-checkin reviews, it might sound strange, if you’re already under time or budgetary constraints, but it is under such conditions most real issues start to be created or pile up. - Use tools to catch most common errors, Code Analysis/FxCop was already mentioned. HP Fortify, Resharper, Coderush etc can help you there. There are also a lot of 3rd party rules you can add to Code Analysis. I’ve written a few myself (http://fccopcontrib.codeplex.com) and various teams from Microsoft have added their own rules (MSOCAF for SharePoint, WSSF for WCF). For common errors that keep cropping up, see if you can define a rule. It’s much easier. But more importantly make sure you have a good help page explaining *WHY* it's wrong. If you have small feature or developer branches/shelvesets, you might want to review pre-merge. It’s still better to do peer reviews and peer programming, but the most important thing is that bad quality code doesn’t make it into the important branch. So my philosophy: - Use tooling as much as possible. - Make sure the team understands the tooling and the importance of the things it flags. It’s too easy to just click suppress all to ignore the warnings. - Under stress, tighten process, it’s under stress that the problems of late reviews will really surface - Most importantly if you do reviews do them as early as possible, but never later than needed. In other words, pre-checkin/post checking doesn’t really matter, as long as the review is done before the code is released. It’ll just be much more expensive to fix any review outcomes the later you find them. --- I would love to hear what you think!

    Read the article
