Search Results

Search found 294 results on 12 pages for 'inspection'.


  • Start a VPN session using a Terminal script

    - by craibuc
    I use an OS X Terminal session to start a VPN connection. The command I run at the prompt is:

        /etc/netlock/cvc -c ::

    This works as expected. I would like to save it to a script file that I can simply double-click to start. I created a file, 'vpn.command', added the command above, saved it, and gave it execute permission:

        chmod +x vpn.command

    When I double-click the file, Terminal opens a bash shell, executes the command, then exits. Upon closer inspection, the command is now '/etc/netlock/cvc -c ::; exit;'. Why is the extra '; exit;' appended to my command? Also, is there a way to execute another command, /etc/netlock/cvc -d, when the Terminal session is being closed, so I can shut down the VPN automatically?
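
    A minimal sketch of what such a .command file could look like, assuming the /etc/netlock/cvc client behaves as described above; the trap runs the disconnect command when the shell exits:

        #!/bin/bash
        # vpn.command -- sketch only; the cvc paths are taken from the question above.
        cleanup() { /etc/netlock/cvc -d; trap - EXIT HUP INT TERM; }
        # EXIT covers normal exits; HUP covers the Terminal window being closed.
        trap cleanup EXIT HUP INT TERM
        /etc/netlock/cvc -c ::
        # Keep the shell alive so the VPN stays up until the window is closed.
        read -r -p "VPN connected. Close this window to disconnect."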

    Read the article

  • init.d service died

    - by jerluc
    Adapting some code from a Linux forum, I've added a service script to /etc/init.d on my Ubuntu Natty server to start/stop/restart node.js. It literally was working the first day I made it, but then today, after viewing my website this morning, the server threw a 404, and on further inspection the node.js process was gone. So I went to start the service again, only this time node.js didn't start at all, and ever since I haven't been able to get my service script working. Below is the entire script:

        #!/bin/sh
        #
        # Node Server Startup
        #
        case "$1" in
        start)
            echo -n "Starting node: "
            daemon node /usr/local/www/server.js
            echo
            touch /var/lock/subsys/node
            ;;
        stop)
            echo -n "Shutting down node: "
            killall node
            echo
            rm -f /var/lock/subsys/node
            rm -f /var/run/node.pid
            ;;
        status)
            status node
            ;;
        restart)
            $0 stop
            $0 start
            ;;
        reload)
            echo -n "Reloading node: "
            killall node -HUP
            echo
            ;;
        *)
            echo "Usage: $0 {start|stop|restart|reload|status}"
            exit 1
        esac
        exit 0

    Thanks for any help!
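
    Worth noting: `daemon` and `status` are helper functions from Red Hat's /etc/init.d/functions and do not exist on a stock Ubuntu system, which could explain why `start` now fails silently. A minimal Ubuntu-style sketch using start-stop-daemon, assuming node lives at /usr/local/bin/node and the entry point is /usr/local/www/server.js as above:

        #!/bin/sh
        # Sketch only: replaces the Red Hat 'daemon'/'status' helpers with start-stop-daemon.
        case "$1" in
        start)
            start-stop-daemon --start --background --make-pidfile \
                --pidfile /var/run/node.pid \
                --exec /usr/local/bin/node -- /usr/local/www/server.js
            ;;
        stop)
            start-stop-daemon --stop --pidfile /var/run/node.pid
            ;;
        esac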

    Read the article

  • Restrict only some plugins to specific sites in Google Chrome

    - by Christian
    I am looking for a way to set up Google Chrome so that it will run a certain plug-in (Java, what else?) only on whitelisted sites, but other plug-ins (like the PDF viewer) everywhere. From playing with the policies available for Chrome, I think there are basically two levels of plug-in management:

    1. List of disabled plug-ins / enabled plug-ins: controls whether a plug-in exists for the browser at all. This pair of policies applies to plug-ins, but not to sites.
    2. Default plug-in settings / allow plug-ins on sites: controls on which sites plug-ins can run. This pair of policies applies to sites, but not to individual plug-ins, and it cannot override the first pair.

    There appears to be no way to configure Chrome so that some plug-ins run only on whitelisted sites while others run everywhere by default. I have also looked at filtering content at the firewall/proxy level, but I'm not convinced it can be done securely there: filtering by URLs (file names) or content types can be circumvented trivially, and identification by content inspection cannot be safe either.

    Read the article

  • Reset Windows XP machine dates

    - by KL
    What steps would you take to make a Windows XP machine appear as if it hasn't been logged on to since some past date? Not to a forensic level here, just to a casual inspection. FYI, this is not intended to do anything destructive to someone else's machine. This is for my own use, believe it or not. And no, I'm not trying to hide from daddy that I used his computer. If you want to be sarcastic here, please come up with something halfway amusing, thanks.

    Read the article

  • What’s New In Microsoft Security Essentials 2.0 And How To Upgrade To 2.0

    - by Gopinath
    Since Microsoft released Microsoft Security Essentials (MSE) a couple of years ago, I have stopped worrying about antivirus programs on all my Windows PCs. MSE is just awesome, and it's the best free antivirus available in the market. Microsoft released version 2.0 of MSE yesterday with enhanced security features and more love for Windows users. The new features introduced in this version are:

    - New protection engine: a heuristic scanning engine is introduced to improve virus detection and cleaning.
    - Network inspection system: monitors network traffic as we browse and protects us from malicious scripts and programs.
    - Better integration with Windows Firewall.

    With this upgrade, MSE is an irresistible antivirus application to have on every Windows PC.

    How To Upgrade MSE 1.0 to 2.0: Generally, upgrading Microsoft applications is child's play. All one would normally need to do is use the Help -> Check for upgrades menu option and follow the wizard to complete the upgrade. The Microsoft Security Essentials 1.0 to 2.0 upgrade is expected to work the same way, but somehow it's not working for me in India; perhaps MSE 2.0 has not been released for Indian users yet. Whatever the reason may be, it's very easy to upgrade MSE 1.0 to 2.0 manually. Just download the installer from Microsoft (link below) and run it, choosing the Upgrade option while the installer is executing, to have MSE 2.0 installed on your PC.

    MSE 2.0 download link: you can download Microsoft Security Essentials 2.0 at the Microsoft Download Center. This article was originally published at Tech Dreams.

    Read the article

  • Ruby gem installation error after OSX Yosemite and Xcode 6 installation

    - by Andres Trevino
    I tried installing a gem like I did before installing Yosemite, but now I'm getting an error:

        /Library/Ruby/Site/2.0.0/rubygems/ext/builder.rb:159:in `synchronize': ERROR: Failed to build gem native extension. (Gem::Ext::BuildError)
        ERROR: Failed to build gem native extension.
        deadlock; recursive locking

    This is the command I ran:

        sudo gem install mysql2

    This is the message that appears in the terminal:

        Gem files will remain installed in /Library/Ruby/Gems/2.0.0/gems/autotest-fsevent-0.2.9 for inspection.
        Results logged to /Library/Ruby/Gems/2.0.0/extensions/universal-darwin-14/2.0.0/autotest-fsevent-0.2.9/gem_make.out

        from /Library/Ruby/Site/2.0.0/rubygems/ext/builder.rb:159:in `build_extension'
        from /Library/Ruby/Site/2.0.0/rubygems/ext/builder.rb:198:in `block in build_extensions'
        from /Library/Ruby/Site/2.0.0/rubygems/ext/builder.rb:195:in `each'
        from /Library/Ruby/Site/2.0.0/rubygems/ext/builder.rb:195:in `build_extensions'
        from /Library/Ruby/Site/2.0.0/rubygems/specification.rb:1436:in `block in build_extensions'
        from /Library/Ruby/Site/2.0.0/rubygems/user_interaction.rb:45:in `use_ui'
        from /Library/Ruby/Site/2.0.0/rubygems/specification.rb:1434:in `build_extensions'
        from /Library/Ruby/Site/2.0.0/rubygems/stub_specification.rb:60:in `build_extensions'
        from /Library/Ruby/Site/2.0.0/rubygems/basic_specification.rb:56:in `contains_requirable_file?'
        from /Library/Ruby/Site/2.0.0/rubygems/specification.rb:925:in `block in find_inactive_by_path'
        from /Library/Ruby/Site/2.0.0/rubygems/specification.rb:924:in `each'
        from /Library/Ruby/Site/2.0.0/rubygems/specification.rb:924:in `find'
        from /Library/Ruby/Site/2.0.0/rubygems/specification.rb:924:in `find_inactive_by_path'
        from /Library/Ruby/Site/2.0.0/rubygems.rb:185:in `try_activate'
        from /Library/Ruby/Site/2.0.0/rubygems/core_ext/kernel_require.rb:132:in `rescue in require'
        from /Library/Ruby/Site/2.0.0/rubygems/core_ext/kernel_require.rb:144:in `require'
        from /Library/Ruby/Site/2.0.0/rubygems.rb:601:in `load_yaml'
        from /Library/Ruby/Site/2.0.0/rubygems/config_file.rb:328:in `load_file'
        from /Library/Ruby/Site/2.0.0/rubygems/config_file.rb:197:in `initialize'
        from /Library/Ruby/Site/2.0.0/rubygems.rb:289:in `new'
        from /Library/Ruby/Site/2.0.0/rubygems.rb:289:in `configuration'
        from /Library/Ruby/Site/2.0.0/rubygems/ext/builder.rb:63:in `run'
        from /Library/Ruby/Site/2.0.0/rubygems/ext/ext_conf_builder.rb:38:in `block in build'
        from /System/Library/Frameworks/Ruby.framework/Versions/2.0/usr/lib/ruby/2.0.0/tempfile.rb:324:in `open'
        from /Library/Ruby/Site/2.0.0/rubygems/ext/ext_conf_builder.rb:17:in `build'
        from /Library/Ruby/Site/2.0.0/rubygems/ext/builder.rb:161:in `block (2 levels) in build_extension'
        from /Library/Ruby/Site/2.0.0/rubygems/ext/builder.rb:160:in `chdir'
        from /Library/Ruby/Site/2.0.0/rubygems/ext/builder.rb:160:in `block in build_extension'
        from /Library/Ruby/Site/2.0.0/rubygems/ext/builder.rb:159:in `synchronize'
        from /Library/Ruby/Site/2.0.0/rubygems/ext/builder.rb:159:in `build_extension'
        from /Library/Ruby/Site/2.0.0/rubygems/ext/builder.rb:198:in `block in build_extensions'
        from /Library/Ruby/Site/2.0.0/rubygems/ext/builder.rb:195:in `each'
        from /Library/Ruby/Site/2.0.0/rubygems/ext/builder.rb:195:in `build_extensions'
        from /Library/Ruby/Site/2.0.0/rubygems/specification.rb:1436:in `block in build_extensions'
        from /Library/Ruby/Site/2.0.0/rubygems/user_interaction.rb:45:in `use_ui'
        from /Library/Ruby/Site/2.0.0/rubygems/specification.rb:1434:in `build_extensions'
        from /Library/Ruby/Site/2.0.0/rubygems/stub_specification.rb:60:in `build_extensions'
        from /Library/Ruby/Site/2.0.0/rubygems/basic_specification.rb:56:in `contains_requirable_file?'
        from /Library/Ruby/Site/2.0.0/rubygems/specification.rb:925:in `block in find_inactive_by_path'
        from /Library/Ruby/Site/2.0.0/rubygems/specification.rb:924:in `each'
        from /Library/Ruby/Site/2.0.0/rubygems/specification.rb:924:in `find'
        from /Library/Ruby/Site/2.0.0/rubygems/specification.rb:924:in `find_inactive_by_path'
        from /Library/Ruby/Site/2.0.0/rubygems.rb:185:in `try_activate'
        from /Library/Ruby/Site/2.0.0/rubygems/core_ext/kernel_require.rb:132:in `rescue in require'
        from /Library/Ruby/Site/2.0.0/rubygems/core_ext/kernel_require.rb:144:in `require'
        from /Library/Ruby/Site/2.0.0/rubygems.rb:601:in `load_yaml'
        from /Library/Ruby/Site/2.0.0/rubygems/config_file.rb:328:in `load_file'
        from /Library/Ruby/Site/2.0.0/rubygems/config_file.rb:197:in `initialize'
        from /Library/Ruby/Site/2.0.0/rubygems/gem_runner.rb:74:in `new'
        from /Library/Ruby/Site/2.0.0/rubygems/gem_runner.rb:74:in `do_configuration'
        from /Library/Ruby/Site/2.0.0/rubygems/gem_runner.rb:39:in `run'
        from /usr/bin/gem:21:in `<main>'

    I am using OS X 10.10 and the Xcode 6 beta. Do any of you guys have any idea as to what to do about this? Thanks in advance.
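
    A possible first step, offered as a hedged sketch rather than a confirmed fix: the deadlock is raised while RubyGems tries to rebuild the previously installed autotest-fsevent extension, so aligning the command-line tools with the new Xcode and removing or rebuilding that gem is a reasonable place to start:

        # Sketch only; the Xcode path is an assumption and may differ for a beta install.
        sudo xcode-select --switch /Applications/Xcode.app/Contents/Developer
        xcode-select --install               # (re)install the command-line tools if missing
        sudo gem uninstall autotest-fsevent  # the gem whose extension keeps failing to rebuild
        sudo gem install mysql2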

    Read the article

  • How do I create a popup banner before login with Lightdm?

    - by Rich Loring
    When Ubuntu was using GNOME, I was able to create a popup banner like the one below before the login screen, using zenity in /etc/gdm/Init/Default. The line of code would be like this:

        if [ -f "/usr/bin/zenity" ]; then
            /usr/bin/zenity --info --text="`cat /etc/issue`" --no-wrap
        else
            xmessage -file /etc/issue -button ok -geometry 540X480
        fi

    How can I accomplish this with Unity?

        NOTICE TO USERS

        This is a Federal computer system (and/or it is directly connected to a BNL local network system) and is the property of the United States Government. It is for authorized use only. Users (authorized or unauthorized) have no explicit or implicit expectation of privacy. Any or all uses of this system and all files on this system may be intercepted, monitored, recorded, copied, audited, inspected, and disclosed to authorized site, Department of Energy, and law enforcement personnel, as well as authorized officials of other agencies, both domestic and foreign. By using this system, the user consents to such interception, monitoring, recording, copying, auditing, inspection, and disclosure at the discretion of authorized site or Department of Energy personnel. Unauthorized or improper use of this system may result in administrative disciplinary action and civil and criminal penalties. By continuing to use this system you indicate your awareness of and consent to these terms and conditions of use. LOG OFF IMMEDIATELY if you do not agree to the conditions stated in this warning.
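
    A sketch of the LightDM equivalent, assuming the stock greeter: lightdm.conf accepts a greeter-setup-script, which runs as root once the greeter is up, so a zenity dialog launched from it appears before anyone logs in:

        #!/bin/sh
        # /usr/local/bin/login-banner.sh -- sketch only.
        # Hook it up in /etc/lightdm/lightdm.conf under [SeatDefaults]:
        #     greeter-setup-script=/usr/local/bin/login-banner.sh
        if [ -x /usr/bin/zenity ]; then
            /usr/bin/zenity --info --text="$(cat /etc/issue)" --no-wrap
        else
            xmessage -file /etc/issue -button ok
        fi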

    Read the article

  • Microsoft Security Essentials 2.0 Kills Viruses Dead. Download It Now.

    - by The Geek
    Microsoft’s Security Essentials has been our favorite anti-malware application for a while—it’s free, unobtrusive, and it doesn’t slow your PC down, but now it’s even better with the new 2.0 release, which adds network filtering, heuristic protection, and more. Just to be clear and direct with you: we absolutely recommend Microsoft Security Essentials as your anti-malware / anti-virus utility over any other option—and how can you argue? It’s totally free!

    New features in 2.0; here is everything in the latest release that makes it even more of a must-download:

    - Network Traffic Inspection integrates into the network system and monitors traffic at a low level without slowing down your PC, so it can detect threats before they even reach your PC.
    - Internet Explorer Integration blocks malicious scripts before IE even starts running them—clearly a big security advantage.
    - Heuristic Scanning Engine finds malware that hasn't been previously detected by scanning for certain types of attacks, providing protection beyond virus definitions alone.

    These new features put MSE on par with other anti-malware applications, especially the heuristic scanning, which had been the only complaint anybody could make against MSE in the past.

    Read the article

  • Build a gem with native extension (Gem::Installer::ExtensionBuildError)

    - by Arnaud Leymet
    I have the following configuration:

        uname -a:       Linux 2.6.24.2 i686 GNU/Linux (Ubuntu)
        ruby -v:        ruby 1.9.0 (2007-12-25 revision 14709) [i486-linux]
        rails -v:       Rails 3.0.0.beta3
        gem -v:         1.3.5
        rake --version: rake, version 0.8.7
        make -v:        GNU Make 3.81

    gem env reports:

        RUBYGEMS VERSION: 1.3.5
        RUBY VERSION: 1.9.0 (2007-12-25 patchlevel 0) [i486-linux]
        INSTALLATION DIRECTORY: /usr/lib/ruby1.9/gems/1.9.0
        RUBY EXECUTABLE: /usr/bin/ruby1.9
        EXECUTABLE DIRECTORY: /usr/bin
        RUBYGEMS PLATFORMS: ruby, x86-linux
        GEM PATHS: /usr/lib/ruby1.9/gems/1.9.0, /root/.gem/ruby/1.9.0
        GEM CONFIGURATION: :update_sources => true, :verbose => true, :benchmark => false, :backtrace => false, :bulk_threshold => 1000
        REMOTE SOURCES: http://gems.rubyforge.org/

    And when I try this simple command:

        gem install nokogiri

    here is what I get:

        # gem install nokogiri
        Building native extensions.  This could take a while...
        ERROR:  Error installing nokogiri:
                ERROR: Failed to build gem native extension.

        /usr/bin/ruby1.9 extconf.rb
        checking for iconv.h in /opt/local/include/,/opt/local/include/libxml2,/opt/local/include,/opt/local/include,/opt/local/include/libxml2,/usr/local/include,/usr/local/include/libxml2,/usr/include,/usr/include/libxml2,/usr/include,/usr/include/libxml2... yes
        checking for libxml/parser.h in /opt/local/include/,/opt/local/include/libxml2,/opt/local/include,/opt/local/include,/opt/local/include/libxml2,/usr/local/include,/usr/local/include/libxml2,/usr/include,/usr/include/libxml2,/usr/include,/usr/include/libxml2... yes
        checking for libxslt/xslt.h in /opt/local/include/,/opt/local/include/libxml2,/opt/local/include,/opt/local/include,/opt/local/include/libxml2,/usr/local/include,/usr/local/include/libxml2,/usr/include,/usr/include/libxml2,/usr/include,/usr/include/libxml2... yes
        checking for libexslt/exslt.h in /opt/local/include/,/opt/local/include/libxml2,/opt/local/include,/opt/local/include,/opt/local/include/libxml2,/usr/local/include,/usr/local/include/libxml2,/usr/include,/usr/include/libxml2,/usr/include,/usr/include/libxml2... yes
        checking for xmlParseDoc() in -lxml2... yes
        checking for xsltParseStylesheetDoc() in -lxslt... yes
        checking for exsltFuncRegister() in -lexslt... yes
        checking for xmlRelaxNGSetParserStructuredErrors()... yes
        checking for xmlRelaxNGSetParserStructuredErrors()... yes
        checking for xmlRelaxNGSetValidStructuredErrors()... yes
        checking for xmlSchemaSetValidStructuredErrors()... yes
        checking for xmlSchemaSetParserStructuredErrors()... yes
        creating Makefile
        make
        cc -I. -I/usr/include/libxml2 -I/usr/include -I/usr/include/ruby-1.9.0/i486-linux -I/usr/include/ruby-1.9.0 -I. -DHAVE_XMLRELAXNGSETPARSERSTRUCTUREDERRORS -DHAVE_XMLRELAXNGSETPARSERSTRUCTUREDERRORS -DHAVE_XMLRELAXNGSETVALIDSTRUCTUREDERRORS -DHAVE_XMLSCHEMASETVALIDSTRUCTUREDERRORS -DHAVE_XMLSCHEMASETPARSERSTRUCTUREDERRORS -I/opt/local/include/ -I/opt/local/include/libxml2 -I/opt/local/include -D_FILE_OFFSET_BITS=64 -fPIC -fno-strict-aliasing -g -fPIC -g -DXP_UNIX -O3 -Wall -Wcast-qual -Wwrite-strings -Wconversion -Wmissing-noreturn -Winline -o xml_document_fragment.o -c xml_document_fragment.c
        In file included from ./nokogiri.h:75,
                         from ./xml_document_fragment.h:4,
                         from xml_document_fragment.c:1:
        ./xml_document.h:5:16: error: st.h: No such file or directory
        make: *** [xml_document_fragment.o] Error 1

        Gem files will remain installed in /usr/lib/ruby1.9/gems/1.9.0/gems/nokogiri-1.4.1 for inspection.
        Results logged to /usr/lib/ruby1.9/gems/1.9.0/gems/nokogiri-1.4.1/ext/nokogiri/gem_make.out

    The gem_make.out file contains exactly the same information as above. If I try another gem:

        gem install gherkin

    here is what I get:

        # gem install gherkin
        Building native extensions.  This could take a while...
        ERROR:  Error installing gherkin:
                ERROR: Failed to build gem native extension.

        /usr/bin/ruby1.9 extconf.rb
        checking for main() in -lc... yes
        creating Makefile
        make
        cc -I. -I/usr/include/ruby-1.9.0/i486-linux -I/usr/include/ruby-1.9.0 -I. -D_FILE_OFFSET_BITS=64 -fPIC -fno-strict-aliasing -g -fPIC -o gherkin_lexer_ar.o -c gherkin_lexer_ar.c
        /Users/aslakhellesoy/scm/gherkin/tasks/../ragel/i18n/ar.c.rl:11:16: error: re.h: No such file or directory
        make: *** [gherkin_lexer_ar.o] Error 1

        Gem files will remain installed in /usr/lib/ruby1.9/gems/1.9.0/gems/gherkin-1.0.30 for inspection.
        Results logged to /usr/lib/ruby1.9/gems/1.9.0/gems/gherkin-1.0.30/ext/gherkin_lexer_ar/gem_make.out

    In fact, whenever I try to install a gem with a native extension, I get the same type of error. Does that ring a bell for anyone?
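
    Both failures die on a missing Ruby-internal header (st.h, re.h), which usually points at incomplete Ruby development headers rather than at the gems themselves; it may also matter that 1.9.0 was a development release many gems never supported. A sketch of a first step on Ubuntu, not a confirmed fix:

        # st.h and re.h ship with the Ruby development package on Ubuntu.
        sudo apt-get install ruby1.9-dev
        gem install nokogiri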

    Read the article

  • How to improve designer and developer work flow?

    - by mbdev
    I work in a small startup with two front-end developers and one designer. Currently the process starts with the designer sending a PNG file with the whole page design, plus assets if needed. My task as a front-end developer is to convert it to an HTML/CSS page. My workflow currently looks like this:

    1. Lay out the distinct parts using HTML elements.
    2. Style each element very roughly (floats, minimal fonts and padding) so I can modify it using inspection.
    3. Using Chrome Developer Tools (inspect), add/change CSS attributes while updating the CSS file.
    4. Refresh the page after X amount of changes.
    5. Use Pixel Perfect to refine the design further.
    6. Sit with the designer to make the last adjustments.

    Inferring the paddings, margins, and font sizes by trial and error takes a lot of time, and I feel the process could become more efficient, but I'm not sure how to improve it. Using PSD files is not an option, since buying Photoshop for each developer is currently not being considered. A design guide is also not available, since the design is still evolving and new features keep being introduced. Ideas for improving the process above, and hearing how the process looks in your company, would be great.

    Read the article

  • Floating point undesirable in highly critical code?

    - by Kirt Undercoffer
    Question 11 in the Software Quality section of "IEEE Computer Society Real-World Software Engineering Problems" (Naveda, Seidman) lists floating-point computation as undesirable because "the accuracy of the computations cannot be guaranteed". This is in the context of computing acceleration for an emergency braking system for a high-speed train. This thinking seems to invoke possible errors in small differences between measurements of a moving object, but small differences at slow speeds aren't a problem (or shouldn't be), and small differences between two measurements at high speed are irrelevant. Can there really be a problem with small roundoff errors during deceleration for an emergency braking system? This problem has been observed with airplane braking systems, resulting in hydroplaning, but could it actually happen in the context of a high-speed train? The concern about floating-point errors seems not to be well founded in this context. Any insight? The floating point is used for acceleration, so perhaps the concern is inching over a speed limit? But floating point should be just fine if they use a double in whatever implementation language. The actual problem in the text states:

        During the inspection of the code for the emergency braking system of a new high speed train (a highly critical, real-time application), the review team identifies several characteristics of the code. Which of these characteristics are generally viewed as undesirable?

    1. The code contains three recursive functions (well, that one is obvious).
    2. The computation of acceleration uses floating point arithmetic. All other computations use integer arithmetic.
    3. The code contains one linked list that uses dynamic memory allocation (the second obvious problem).
    4. All inputs are checked to determine that they are within expected bounds before they are used.

    Read the article

  • TODO Formatting

    - by charlie.mott
    Article source: http://geekswithblogs.net/charliemott

    TODOs should only be used for a short period of time, to remind you that something needs to be done; they should then be addressed as soon as possible. In order to know who owns a TODO task and how long it's been outstanding, my company uses the following standard for TODO formatting:

        Format: // TODO: Owner Initials – Date Created – Description of task.
        Sample: // TODO: CM – 2012/01/20 – Move this class to a new location so it can be reused.

    Using this pattern makes it easy to use the ReSharper TODO explorer.

    The Carrot. To make it easy for developers to apply this rule, a code snippet can be created in Visual Studio. Even better, I created a ReSharper template. This gives the facility to use the current-user-name and current-date macros, so the formatting actually comes out like this:

        Sample: // TODO: cmott – 2012/01/20 – Move this class to a new location so it can be reused.

    The Stick. How do you enforce such a rule? I tried to create a custom ReSharper Highlighting Pattern to perform custom code analysis inspection for deviations from this pattern. However, I did not have any success: the find dialog would not accept // text. If I work it out, I will update this blog post.

    StyleCop. Instead, I created a custom StyleCop rule, following the approach used in the StyleCop Contrib project. It provides a simple-to-use base class and an easy-to-use unit testing framework. I will upload this TODO format analyzer as a patch to that project.

    Read the article

  • How do you save/export changes made in Firebug?

    - by blunders
    Using Firebug to edit CSS, how do I save/export the changes made to the CSS? Tools: Firefox, Firebug.

    MAJOR UPDATE: If you know of a way to lock the forward/back/refresh on a Firefox tab, please let me know. Otherwise, I've given up on using Firebug/FireDiff as an IDE for CSS. It's nice, but lol... press Backspace at the wrong time and ALL your work is gone... funny. So, I really like the browser highlighting of CSS/HTML in Firebug. Know any good CSS editors that do this? I really had hoped Firebug would work, but for now I only see it as being good for ad-hoc inspection and testing; meaning, using it for what it's made for.

    UPDATES:

    @Lèse majesté: Just as an update, the Web Developer add-on does let you edit CSS, but it does not let you edit/save CSS changes made by Firebug. Meaning, you can use Firebug to identify and maybe test changes, but it does not let you save those changes. Here's a "how to" covering how to use them together: FF + FB + WD.

    @Lèse majesté: Still playing around with FireDiff. It works okay; I found one bug already (although I'm just working around it), and there's no "how to" I've been able to find, so I'm just trying every feature and clicking around... For example, to export a diff you must be over the last item in the list, right-click, and select "Save Diff". The .diff is just a text file; no idea at this point why the extension is .diff.

    Read the article

  • ReSharper 7.1 update

    - by TATWORTH
    JetBrains have announced ReSharper 7.1, a considerable update to the powerful .NET developer productivity tool for Visual Studio. They invite you to download ReSharper 7.1 and take it for a free 30-day trial. I urge you to try this excellent Visual Studio add-on. Here is their announcement:

    Following this update, ReSharper 7 brings even more value to all .NET developers, such as more ways to refactor, inspect, clean up, review and generate code. Feature highlights of ReSharper 7 now include:

    - Full integration with Visual Studio 2012, while maintaining support for Visual Studio 2005, 2008, and 2010.
    - Performance and bug fixes: since releasing version 7.0 this summer, we have fixed over 300 performance problems and bugs.
    - New code inspections and contract annotations for more robust .NET code quality analysis. Sharing ReSharper code inspection results with teammates has been streamlined as well, for the purposes of code review.
    - Improved tooling for .NET code maintenance, including the top-requested Extract Class refactoring that helps decrease code complexity, as well as a way to remove unused assembly references across the entire solution.
    - Enhanced code formatter: we have implemented some of the most demanded code formatter improvements so far. For example, ReSharper 7.1 is able to format XML doc comments and chained method calls.
    - Additional code exploration features helping visualize hierarchies of polymorphic members and CSS styles.
    - An extended and fine-tuned code generation toolset.

    In terms of support for specific technologies and frameworks, ReSharper 7 is on the cutting edge as well, providing:

    - Support for VB.NET, refined with the Extract Class refactoring, new quick-fixes and improved IntelliSense.
    - XAML support, considerably enhanced in terms of code completion, typing assistance, naming style control, and code generation.
    - An extensive pack of functionality for developers looking to create Windows Store applications for Windows 8.
    - An INotifyPropertyChanged interface support pack to improve the productivity of Windows Forms, WPF and Silverlight application developers.
    - An extended web development toolset, including improvements to JavaScript support, and initial support for ASP.NET 4.5 and ASP.NET MVC 4.
    - Addition of two previously unsupported Microsoft development technologies: LightSwitch and SharePoint.

    For details on features and improvements in ReSharper 7 and a 30-day free trial, please read What's New in ReSharper 7.

    Read the article

  • Attempting to install Ubuntu 11.10

    - by Orin
    I installed version 9 some time ago and have since forgotten the process for partitioning, or the layout is different. I have 5 partitions, but only Windows XP installed on the PC in question; it occupies one of those 5 partitions, which is NTFS, 34444 MB, on a 40 GB hard drive. My first question is: is there a way to get a screenshot of the partitioner while I am running the demo session straight from disc? These 5 partitions are fragmenting the other roughly 4 GB needed to install. I get an error message which says to go back and make sure one partition has at least 2.5 GB or so, but I have no idea what I am supposed to set these remaining 4 partitions to in order to proceed. I have read up on install guides and understand that one partition must be "/" (root) and another swap, but I have had no luck with the correct combination so far. A few screenshots will no doubt help you answer, as I'm baffled as to which specific details to give; each partition shows various settings on inspection, and I don't really feel like writing it all down manually and then posting the specs for each one.

    Read the article

  • VS 2012 Code Review – Before Check In OR After Check In?

    - by Tarun Arora
    “Is code review important and effective?” There is a consensus across the industry that code review is an effective and practical way to catch code inconsistencies and possible defects early in the software development life cycle. Among other things, code reviews mean that:

    - Bugs are found faster.
    - Developers are forced to write readable code (code that can be read without explanation or introduction!).
    - Optimization methods/tricks/productive programs spread faster.
    - Programmers as specialists "evolve" faster.
    - It's fun.

        "Code review is systematic examination (often known as peer review) of computer source code. It is intended to find and fix mistakes overlooked in the initial development phase, improving both the overall quality of software and the developers' skills. Reviews are done in various forms such as pair programming, informal walkthroughs, and formal inspections." (Wikipedia)

    Nowhere does the definition mention whether it is better to review code before it has been committed to version control or after the commit has been performed. No matter which side you favour, Visual Studio 2012 allows you to request a code review both before check-in and after check-in. Let's weigh the pros and cons of the two approaches independently.

    Approach 1 – Code review before check-in. The developer completes the code and feels its quality is appropriate for check-in to TFS. The developer raises a code review request to have a second pair of eyes validate whether the code abides by the recommended best practices, will not result in defects due to common coding mistakes, and whether any optimizations can be made to improve the quality. (Image 1 – code review before check-in)

    Pros:
    - Everything that gets committed to source control is reviewed, which minimizes the chances of smelly code making its way into the code base.
    - It decreases the cost of fixing bugs; remember, the earlier you find them, the less painful they are to fix.

    Cons:
    - Development code freeze: since the changes aren't in source control yet, further development can only be done offline.
    - The changes have not been through a CI build, so it is hard to say whether the code abides by all build quality standards.
    - The actual review process is inconsistent and cumbersome to track.
    - Not every change to the code base is worth reviewing, so a lot of effort is invested for very little gain.

    Approach 2 – Code review after check-in. The developer checks in, and random code reviews are performed on the checked-in code. (Image 2 – code review after check-in)

    Pros:
    - The code has already passed the CI build and run through any code analysis plug-ins you may have on the build server. Instruct developers to ensure zero FxCop, StyleCop and static code analysis issues before check-in, and the code is cleaner and smell-free even before the code review.
    - No offline development; developers can continue to develop against source control.

    Cons:
    - Bad code can easily make its way into the code base.
    - Since the review takes place much later in the cycle, the cost of fixing issues can prove much higher.

    Approach 3 – Hybrid approach. The community advocates a more hybrid approach, a blend of tooling and human accountability. (Image 3 – hybrid approach)

    1. Code review high-impact check-ins. It is not possible to review everything, and by setting up code review check-in policies you can end up slowing your team down. Moreover, the code that you are reviewing before check-in hasn't even been through a green CI build.

    2. Tooling. Let the tooling work for you. By running static analysis, FxCop, StyleCop and other plug-ins on the build agent, you can identify the real issues that, in my opinion, can't possibly be identified through human reviews. Configure the tooling to report back the top 10 issues every day, and mandate manual code review for the individuals who keep making it onto this list of shame.

    3. During merge. I would prefer eliminating some of the remaining code issues during the merge from the Main branch to the release branch. In a Scrum project this is easier, because cherry-picking the merges is a possibility and the size of the code being reviewed is still limited.

    Let the tooling work for you: if someone breaks the CI build often, put them on a gated check-in build course until you see improvement. If someone keeps appearing on the build-generated top-10 list of shame, ensure that all of their code is reviewed until you see improvement. At the end of the day, the goal is to ensure that the code being delivered is top quality; by enforcing a code review before any check-in, you force the developer to work offline or stay put until the review is complete.

    What do the experts say? So I asked a few experts what they thought of a code review quality gate before checking in code.

    Terje Sandstrom | Microsoft ALM MVP: "You mean a review quality gate BEFORE checking in code????? That would mean a lot of code staying either local or in shelvesets, and not even having been through a CI build, with a green CI build being the main criterion for going further, f.e. to the review state. I would not like code lying around with no check-ins. With a requirement that code is checked in in small pieces, 4-8 hours' work max, and AT LEAST daily check-ins, a manual code review comes second down the lane. I would expect review quality gates to happen before merging back to main, or before merging to release, but that would all be on checked-in code. Branching is absolutely one way to ease the pain. Another way we are using is automatic quality builds, running metrics, coverage and static code analysis. Unfortunately it takes some time (it would be great to have it on CIs), so it is scheduled to run every night. Based on this we get, among other things, top-10 lists of suspicious code, which are then subjected to reviews. If a person seems to be very popular on these top-10 lists, we subject every check-in from that person to a review for a period. That normally helps. None of the clients I have can afford to have every check-in reviewed, so we need to find ways around it. I don't disagree with the nicety of having all the code reviewed, but I find it hard to find those resources in today's enterprises."

    David V. Corbin | Visual Studio ALM Ranger: "I tend to agree with both sides. I hate having code that is not checked in, but at the same time hate having 'bad' code in the repository. I have found that branching is one approach to solving this dilemma. Code is checked into the private/feature branch before the review, but is not merged over to the 'official' branch until after the review. I advocate both, depending on circumstance (especially team dynamics). The pre-check-in review is usually for elements that may impact the project as a whole; think of it as another gate, along with passing unit tests. The post-check-in review may very well not be at the changeset level, but correlate to a review at the user-story level. Again, this depends on the team dynamics in play."

    Robert MacLean | Microsoft ALM MVP: "I do not think there is one right answer for the industry as a whole. In short, the question is: why do you do reviews? Your question implies risk mitigation, so in low-risk areas you can get away with reviewing after check-in, while in high-risk areas you need to do it before check-in. For example, those new to a team, or juniors, need it much earlier (maybe that is before check-in, maybe soon after) than seniors who have shipped twenty sprints on the team."

    Abhimanyu Singhal | Visual Studio ALM Ranger: "It depends on the scenario. We recommend post-check-in reviews when: 1. we don't want to block other checks and processes on manual code reviews, since manual reviews take time and some pieces may not require them at all; 2. we need to trace all changes and track history; 3. we have a code promotion strategy/process in place, so for risk mitigation, post-check-in code can be promoted to Accepted branches, or rejected. Pre-check-in reviews are used when: 1. there is a high risk factor associated; 2. reviewers generally (most of the time) have immediate availability; 3. the team does not have strict tracking needs. Simply speaking, no single process fits all scenarios; you need to select what works best for your team/project."

    Thomas Schissler | Visual Studio ALM Ranger: "This is an interesting discussion; I'm right now discussing the details of executing code reviews with my teams. I see and understand the aspects you brought in, but there is another side I'd like to point out. 1. If you review every check-in, this is not very practical as a hard rule, because it will disturb the flow of the team very often, or it will reduce the check-in frequency of the devs, which I would not accept. 2. If you do later reviews, for example reviewing PBIs, it is not easy to work out which code you should review. Either you review all changesets associated with the PBI, but then you might review code which was changed in a later check-in where the dev has already fixed the issue; or you review the diff of the latest changeset of the PBI against the first, but then you might also review changes belonging to other PBIs."

    Jakob Leander | Sr. Director, Avanade: "In my experience, manual code review: 1. does not get done, and at the very least does not get redone after changes (regardless of intentions at the start of the project); 2. when a project actually does it, they often do not do it right away, so errors pile up; 3. requires a lot of time spent discussing/defining the standard and for the team to learn it. However, code review is very important, since even small memory leaks in a high-volume web solution have big consequences. In the last years I have advocated the following approach to code review. Architects up front produce at least one best-practice example of each type of component, including error handling, logging, security etc., and tell the team to copy from it. The dev lead on the project continuously browses the code to validate that the best practices are used, especially that patterns are not broken; you can do this formally after each sprint/iteration if you want, and once validated, the code is unlikely to go bad even during later changes. Agree with the customer to rely on static code analysis from Visual Studio as the one and only coding standard. This has HUUGE benefits: you can easily tweak it, together with the customer, to reach the level you desire; it is easy to measure for both developers and management; it is 100% consistent across the code base; it gets validated all the time, so you never end up getting hammered by a customer review at the end; and it is easy to tell a developer that you do not want code back unless it has zero errors, which minimizes communication. You need to track this at least during nightly builds and make sure the team sees the total number of issues; do not allow the count to grow uncontrolled. On the projects I run I require code analysis to have run on code before check-in (a check-in rule). This means you have to have a clean compile (or CA won't run), so as an extra benefit there are very few broken builds. You can also change a few of the rules to compile as errors instead of warnings; I often do this for missing-dispose issues, which you REALLY do not want in your app. Tip: place your custom CA rules files as part of the solution; that way they keep working when you branch, since the path to the CA file is relative in VS. Some may argue that CA is not as good as manual inspection, but since manual inspection in reality suffers from the three issues above, CA is IMO a MUCH better (and much cheaper) approach from a helicopter perspective."

    Tirthankar Dutta | Director, Avanade: "I think code review should be run both before and after check-ins. There are some code metrics that are meant to be run on the entire code base. Also, especially on multi-site projects, one should strive to architect in a way that lets men manage the framework while boys write the repetitive code; that scales very well with the need to review less, by containment and by imposing architectural restrictions to emphasise the design."

    Bruno Capuano | Microsoft ALM MVP: "For code reviews (meaning peer reviews) in a distributed team I use http://www.vsanywhere.com/default.aspx."

    David Jobling | Global Sr. Director, Avanade: "Peer review is the only way to scale, and it's a great practice for everyone in the team to learn to perform and accept. In my experience you soon learn whose code to watch more than others and tune your attention accordingly."

    Mikkel Toudal Kristiansen | Manager, Avanade: "If you have several branches in your code base, you will need to merge often. This requires manual merging when a file has been changed in both branches, and it offers a good opportunity to actually review the changed code. So my advice is: merge between branches as often as possible, have a senior developer do it, and have him/her perform a full code review of the code being merged. As for detecting architectural smells and code smells creeping into the code base, one really good third-party tool exists: NDepend (http://www.ndepend.com/), for static code analysis of the current state of the code base. You could also consider adding StyleCop to the solution."

    Jesse Houwing | Visual Studio ALM Ranger: "I gave a presentation on this subject at the TechDays conference in NL last year; see my presentation and slides here (talk in Dutch, but the slides are in English): http://blog.jessehouwing.nl/2012/03/did-you-miss-my-techdaysnl-talk-on-code.html. I'd like to add a few more points. Before/after check-in is mostly a trust issue: if you have a team that does diligent peer reviews and regularly talks or sits together, there's no need to enforce a before-check-in policy; peer programming and regular feedback during development can cover most of the review requirements, as long as the team isn't under stress. Under stress, enforce pre-check-in reviews; it might sound strange if you're already under time or budget constraints, but it is under such conditions that most real issues are created or pile up. Use tools to catch the most common errors: Code Analysis/FxCop was already mentioned, and HP Fortify, ReSharper, CodeRush etc. can help too. There are also a lot of third-party rules you can add to Code Analysis; I've written a few myself (http://fccopcontrib.codeplex.com) and various teams at Microsoft have added their own rules (MSOCAF for SharePoint, WSSF for WCF). For common errors that keep cropping up, see if you can define a rule; it's much easier, but more importantly, make sure you have a good help page explaining WHY it's wrong. If you have small feature or developer branches/shelvesets, you might want to review pre-merge; it's still better to do peer reviews and peer programming, but the most important thing is that bad-quality code doesn't make it into the important branch. So my philosophy: use tooling as much as possible; make sure the team understands the tooling and the importance of the things it flags (it's too easy to just click 'suppress all' to ignore the warnings); under stress, tighten the process, because it's under stress that the problems of late reviews really surface; and most importantly, if you do reviews, do them as early as possible, but never later than needed. In other words, pre-check-in versus post-check-in doesn't really matter, as long as the review is done before the code is released; it will just be much more expensive to fix any review outcomes the later you find them."

    I would love to hear what you think!

    Read the article

  • SubSonic 2.2 missing stored procedures in StoredProcedures.cs when generated with sonic.exe

    - by Mark
    We are trying to move from SubSonic 2.0.3 to 2.2 (not using .NET 3.5). When we regenerate the project using SubCommander\sonic.exe and try to compile, we get errors reporting missing members (which should have been generated automatically from the stored procedures we have). On closer inspection, my StoredProcedures.cs file is missing some (not all) of the automatically generated methods for my classes. As an example, I have two procs:

        [dbo]._ClassA_Func1
        [dbo]._ClassA_Func2

    Only one of these is being generated in the StoredProcedures.cs file. I have checked the permissions of both procs using fn_my_permissions, and they seem identical. Does anyone have any ideas on what I can check? Thanks -- Mark

    Read the article

  • Cannot install Fast debugger in Netbeans 6.8 for Ruby 1.9

    - by Bragaadeesh
    Hi, I am using NetBeans 6.8 with Ruby 1.9.1 installed on Windows XP. I tried to install the fast debugger and I get the following error:

        Building native extensions.  This could take a while...
        ERROR:  Error installing ruby-debug-ide:
                ERROR: Failed to build gem native extension.

        D:/Ruby19/bin/ruby.exe mkrf_conf.rb
        Building native extensions.  This could take a while...

        Gem files will remain installed in D:/Ruby19/lib/ruby/gems/1.9.1/gems/ruby-debug-ide-0.4.9 for inspection.
        Results logged to D:/Ruby19/lib/ruby/gems/1.9.1/gems/ruby-debug-ide-0.4.9/ext/gem_make.out

    Has anyone else faced this problem before? Please help. Thanks.
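
    One hedged suggestion, not a confirmed diagnosis: on Windows, building native gems generally requires the RubyInstaller DevKit (a MinGW toolchain) to be installed and bound to the Ruby in D:/Ruby19. A sketch, run from the directory where the DevKit was unpacked:

        ruby dk.rb init       # generate config.yml listing the Ruby installs to bind
        ruby dk.rb install    # bind the DevKit to those installs
        gem install ruby-debug-ide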

    Read the article

  • 404 Error - HEAD request on default page

    - by Matt
    I am working on a project where we are about to go to internal release, so we are cleaning up the small problems before then. I was looking at our logs and noticed a high number of 404 errors. On further inspection, it seems that all of them are related to HEAD requests. I haven't been able to find any substantive information about the preferred method for handling this in a standards-compliant manner. Is there anything out there that points out the proper way to handle that?
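
    For what it's worth, a quick way to see exactly what those clients get back (example.com standing in for the real host): per HTTP/1.1, a HEAD response should carry the same status and headers as the corresponding GET, just without a body, so comparing the two on the default page shows whether the server is special-casing HEAD:

        curl -I http://example.com/                      # -I sends a HEAD request
        curl -sS -o /dev/null -D - http://example.com/   # GET, printing only the response headers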

    Read the article

  • Extracting ""((Adj|Noun)+|((Adj|Noun)(Noun-Prep)?)(Adj|Noun))Noun"" from Text (Justeson & Katz, 1995)

    - by ssuhan
    I would like to ask whether it is possible to extract the pattern ((Adj|Noun)+|((Adj|Noun)(Noun-Prep)?)(Adj|Noun))Noun proposed by Justeson and Katz (1995) using the R package openNLP. That is, I would like to use this linguistic filter to extract candidate noun phrases. I cannot quite understand its meaning; could you do me a favor and explain it, or translate such a representation into R? Many thanks. Maybe we can start from this sample code:

        library("openNLP")
        acq <- "This paper describes a novel optical thread plug gauge (OTPG) for internal thread inspection using machine vision. The OTPG is composed of a rigid industrial endoscope, a charge-coupled device camera, and a two degree-of-freedom motion control unit. A sequence of partial wall images of an internal thread are retrieved and reconstructed into a 2D unwrapped image. Then, a digital image processing and classification procedure is used to normalize, segment, and determine the quality of the internal thread."
        acqTag <- tagPOS(acq)
        acqTagSplit <- strsplit(acqTag, " ")

    Read the article

  • Problem with rake db:migrate

    - by Shreyas Satish
    When I try rake db:migrate, I get the following error:

        !!! The bundled mysql.rb driver has been removed from Rails 2.2.
        Please install the mysql gem and try again: gem install mysql.
        rake aborted!
        no such file to load -- mysql

    And when I try gem install mysql:

        Building native extensions.  This could take a while...
        ERROR:  Error installing mysql:
                ERROR: Failed to build gem native extension.

        /usr/bin/ruby extconf.rb
        Can't find header files for ruby.

        Gem files will remain installed in /usr/lib/ruby/gems/1.8/gems/mysql-2.8.1 for inspection.

    I checked the rubygems folder, and the mysql gem has been installed. Any ideas? Cheers

    Read the article

  • error when installing mysql ruby gem on OSX 10.6.3

    - by kapil.israni
    So I am getting the same issue as mentioned here: http://stackoverflow.com/questions/1366746/gem-install-mysql-failure-in-snow-leopard. But I haven't been able to get it fixed using the answers at that link. Here's a brief history: I had MAMP on my machine, but I have now downloaded the latest MySQL from mysql.com and installed version 5.1.46. This new version runs fine, the mysql client is able to connect, and I also have Xcode v3.2.1, since someone mentioned that it can cause issues. Here's the error:

        Building native extensions.  This could take a while...
        ERROR:  Error installing mysql:
                ERROR: Failed to build gem native extension.

        /System/Library/Frameworks/Ruby.framework/Versions/1.8/usr/bin/ruby extconf.rb --with-mysql-config=/usr/local/mysql/bin/mysql_config
        mkmf.rb can't find header files for ruby at /System/Library/Frameworks/Ruby.framework/Versions/1.8/usr/lib/ruby/ruby.h

        Gem files will remain installed in /Library/Ruby/Gems/1.8/gems/mysql-2.8.1 for inspection.
        Results logged to /Library/Ruby/Gems/1.8/gems/mysql-2.8.1/ext/mysql_api/gem_make.out
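
    The commonly cited Snow Leopard workaround, a sketch rather than a guaranteed fix: force the gem to compile for the 64-bit architecture that the mysql.com binaries use, pointing it at the same mysql_config as above:

        sudo env ARCHFLAGS="-arch x86_64" gem install mysql -- \
          --with-mysql-config=/usr/local/mysql/bin/mysql_config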

    Read the article

  • Problem installing MySQL gem on Fedora

    - by Shreyas Satish
    When I try rake db:migrate, I get the following error:

        !!! The bundled mysql.rb driver has been removed from Rails 2.2.
        Please install the mysql gem and try again: gem install mysql.
        rake aborted!
        no such file to load -- mysql

    And when I try gem install mysql:

        Building native extensions.  This could take a while...
        ERROR:  Error installing mysql:
                ERROR: Failed to build gem native extension.

        /usr/bin/ruby extconf.rb
        Can't find header files for ruby.

        Gem files will remain installed in /usr/lib/ruby/gems/1.8/gems/mysql-2.8.1 for inspection.

    I have also tried:

        sudo gem install mysql -- --with-mysql-config=/usr/local/mysql/bin/mysql_config

    but I get the same error. I'm on Fedora 10. Help will be much appreciated. Cheers!
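
    "Can't find header files for ruby" on Fedora usually means the Ruby development headers are missing; a likely first step, offered as a sketch rather than a confirmed fix:

        # ruby-devel provides ruby.h; mysql-devel provides the client headers
        # that the mysql gem's extconf.rb needs.
        sudo yum install ruby-devel mysql-devel
        sudo gem install mysql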

    Read the article

  • Extracting dates from html meta data in FAST-ESP

    - by Neil
    During document processing I want to extract all dates from HTML meta data and then identify the latest date, which will be used to populate a date field (dtgeneric1):

        <meta name="OriginalPublicationDate" content="2010/04/21 12:06:36" />
        <meta name="LastModificationDate" content="2010/04/22 14:10:16" />
        + other non-date meta data

    Inspection using spy stages shows that our pipeline already adds meta_* attributes, but the meta data names will be different across documents from different sources:

        #### ATTRIBUTE meta_originalpublicationdate
        <class 'docproc.DocumentAttributes.TextChunks'>: 2010/04/21 12:06:36
        #### ATTRIBUTE meta_lastmodificationdate
        <class 'docproc.DocumentAttributes.TextChunks'>: 2010/04/22 14:10:16
        + other non-date meta attributes

    Ideally we would like to pass all the meta_* attributes to a Python stage and use that to work out which are dates and which is the largest, but there seems to be no way of specifying "all meta attributes" as input. Has anyone done something similar who can offer advice on the best way to do this? Thanks, Neil

    Read the article
