Search Results

Search found 27426 results on 1098 pages for 'build tools'.


  • Firefox 4 show tools menu in button interface

    - by chris.nullptr
    I like the extra vertical space provided by the latest Firefox beta (4b9). However, the button interface doesn't give you access to the Tools menu. I know I can temporarily enable the menu bar by holding down the Alt key, but flipping a flag in about:config would be preferable. Does such a flag exist? Edit: I want to be able to access the Tools menu from the new button interface, like you can with Bookmarks, History, Options, and Help.

    Read the article

  • Microsoft Office 2010 Proofing Tools Kit

    - by Svish
    I have installed the Office 2010 release available on MSDN, but there is no Proofing Tools Kit available there yet. Still, I see various sources where I can download this kit when I search for it on Google. Is the Proofing Tools Kit available yet or not? Are these sources I see on Google legitimate, or should I stay away from them? Or is the kit also available directly from Microsoft somewhere I haven't looked yet?

    Read the article

  • apache log analysis tools for multiple virtual hosts?

    - by shreddd
    I am interested in getting a side-by-side usage comparison of all the virtual hosts served up by my Apache server. In the simplest case, I want to see a list (or bar chart) with each virtual host and the number of requests/traffic on that site. I've been playing around with Webalizer and AWStats, but I haven't been able to compare multiple virtual hosts in the same infographic. Does anyone have suggestions on tools for doing this (or on how I might use the above tools to do so)?
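
    A low-tech fallback, if neither tool cooperates, is to tally requests per virtual host yourself. The sketch below is a minimal example rather than a finished tool: it assumes Apache logs all vhosts to one combined file whose first field is the virtual host (for instance a LogFormat beginning with %v), that the log path is given as the first argument, and that Java 8 or later is available.

        import java.io.IOException;
        import java.nio.file.Files;
        import java.nio.file.Paths;
        import java.util.HashMap;
        import java.util.Map;

        public class VhostRequestCount {
            public static void main(String[] args) throws IOException {
                if (args.length != 1) {
                    System.err.println("usage: java VhostRequestCount <combined-access.log>");
                    return;
                }
                Map<String, Long> counts = new HashMap<>();
                // Each log line is expected to start with the virtual host name (%v).
                for (String line : Files.readAllLines(Paths.get(args[0]))) {
                    if (line.isEmpty()) continue;
                    String vhost = line.split("\\s+", 2)[0];
                    counts.merge(vhost, 1L, Long::sum);
                }
                // Print busiest vhosts first, one per line: count <TAB> vhost.
                counts.entrySet().stream()
                      .sorted(Map.Entry.<String, Long>comparingByValue().reversed())
                      .forEach(e -> System.out.println(e.getValue() + "\t" + e.getKey()));
            }
        }

    The per-vhost counts can then be pasted into a spreadsheet or charting tool to get the side-by-side bar chart.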

    Read the article

  • Things to install on a new machine – revisited

    - by RoyOsherove
    As I prepare to get a new dev machine at work, I'm writing down the things I am going to install on it before writing the first line of code on that machine: Control Freak Tools: Everything Search Engine – a free and amazingly fast search engine for files all over your machine (just file names, not inside files). This is so fast I use it almost as a replacement for my start menu, but it's also great for finding those files that get hidden and tucked away in dark places on my system. Ever had a situation where you needed to see exactly how many copies of X.dll were hiding on your machine and where? This tool is perfect for that. Google Chrome – it's just fast, very fast, and Firefox has become the IE of alternative browsers in terms of speed and memory. Don't even get me started on IE. TweetDeck – get a complete view of what's up on Twitter. Total Commander – still my favorite file manager, for over five years now. KatMouse – will scroll whatever window you're hovering over, even if it's not the active window, when you use the scroll wheel on it. PowerISO or Daemon Tools – for loading up ISO images of discs. LogMeIn Ignition – quick access to your LogMeIn computers. For online backup: JungleDisk or BackBlaze. KeePass – save important passwords. MS Security Essentials – free antivirus that's quiet and doesn't make a mess of your system. For home: uTorrent – a torrent client that can read RSS feeds (like the ones from ezrss.it). Camtasia Studio and SnagIt – for recording and capturing the screen, and then adding cool effects on top. Foxit PDF Reader – much faster than Adobe Reader. Toddler Keys (for home) – for when your baby wants to play with your keyboard. Live Writer – for writing blog posts. For Lenovo ThinkPads – Lenovo System Update – if you have a "custom" system instead of the one that came built in, this will keep all your Lenovo drivers up to date. FileZilla – for FTP stuff. All the utils from Sysinternals (or try the live links), especially AutoRuns for deciding what's really going to load at startup, and procmon to see what's really going on with processes in your system. Developer stuff: Reflector – pure magic and a time saver; see the source code of any compiled assembly. Resharper – great for productivity and navigation across your source code. FinalBuilder – a commercial build automation tool. Love it; much better than any XML-based time hog out there. TeamCity – a great visual and friendly server to manage continuous integration, with powerful features. Test Lint – a free add-in for VS 2010 I helped create, which checks your unit tests for possible problems and hints you about them. TestDriven.NET – a great test runner for VS 2008 and 2010 with some powerful features. VisualSVN – a commercial tool if you use Subversion; a very reliable add-in for VS 2008 and 2010. Beyond Compare – a powerful file and directory comparison tool. I love the fact that you can right-click any file in Windows Explorer and select "Select left side to compare", then right-click another file and select "Compare with left side". Great usability thought! PostSharp 2.0 – for adding system-wide concepts into your code (tracing, exception management). Goes great hand in hand with... SmartInspect – a powerful framework and viewer for tracing in your application, with lots of hidden features. Crypto Obfuscator – a relatively new obfuscation tool for .NET that seems to do the job very well. Crypto Licensing – from the same company – finally a licensing solution that seems to really fit what I needed. And it works. Fiddler 2 – great for debugging and tracing HTTP traffic to and from your app. Debugging Tools for Windows and DebugDiag – great for debugging scenarios. Still wanting more? I think this should keep you busy for a while. Regulator and Regulazy – for testing and generating regular expressions. Notepad2 – for quick editing and viewing with syntax highlighting.

    Read the article

  • Build-Essentials installation failing

    - by Brickman
    I am having trouble accessing several critical header files that are supposed to be part of the build process. The "Ubuntu Software Center" shows "Build Essentials" as installed. Next I ran the following two commands, which did not improve the problem: ~$ sudo apt-get install build-essential [sudo] password for: Reading package lists... Done Building dependency tree Reading state information... Done build-essential is already the newest version. 0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded. :~$ sudo apt-get install -f Reading package lists... Done Building dependency tree Reading state information... Done 0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded. :~$ A dump of the atomic headers present after the installation attempts shows, among others, /usr/include/boost/interprocess/detail/atomic.hpp, /usr/include/qt4/Qt/qatomic.h and /usr/include/qt4/QtCore/qatomic.h, plus an arch/<arch>/include/asm/atomic.h for every architecture, include/linux/atomic.h, and include/asm-generic/atomic.h under each installed kernel header tree (/usr/src/linux-headers-3.11.0-15 through -22, /usr/src/linux-headers-3.14.4-031404, and /usr/src/linux-lts-saucy-3.11.0). Yes, I know there are multiple headers of the same type here, but they are different versions. Version "linux-headers-3.14.4-031404" appears to be the latest. Ubuntu shows "Nothing needed to be installed." However, the following C/C++ header files show up as missing for Eclipse and Qt4: #include <linux/version.h> #include <linux/module.h> #include <linux/socket.h> #include <linux/miscdevice.h> #include <linux/list.h> #include <linux/vmalloc.h> #include <linux/slab.h> #include <linux/init.h> #include <asm/uaccess.h> #include <asm/atomic.h> #include <linux/delay.h> #include <linux/usb.h> This problem appears on my 32-bit version of Ubuntu and on both of my 64-bit versions. What am I doing wrong?
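
    For what it's worth, those are kernel-space headers, so they are never on the ordinary userland include path; they come from the header tree matching the running kernel (typically /usr/src/linux-headers-$(uname -r), also reachable via /lib/modules/$(uname -r)/build), and the Eclipse or Qt project has to be pointed at that tree explicitly. The sketch below is only a quick diagnostic under those assumptions, not a fix: it asks the system for the running kernel release and reports which of the headers in question exist under that tree's include/ and arch/x86/include/ directories.

        import java.io.BufferedReader;
        import java.io.IOException;
        import java.io.InputStreamReader;
        import java.nio.file.Files;
        import java.nio.file.Path;
        import java.nio.file.Paths;

        public class KernelHeaderCheck {
            // Headers the question reports as missing.
            private static final String[] HEADERS = {
                "linux/version.h", "linux/module.h", "linux/socket.h",
                "linux/miscdevice.h", "linux/list.h", "linux/vmalloc.h",
                "linux/slab.h", "linux/init.h", "asm/uaccess.h",
                "asm/atomic.h", "linux/delay.h", "linux/usb.h"
            };

            public static void main(String[] args) throws IOException {
                // Equivalent of `uname -r`: find out which kernel is running.
                Process p = new ProcessBuilder("uname", "-r").start();
                String release;
                try (BufferedReader r = new BufferedReader(new InputStreamReader(p.getInputStream()))) {
                    release = r.readLine().trim();
                }
                Path tree = Paths.get("/usr/src", "linux-headers-" + release);
                System.out.println("Checking header tree: " + tree);
                for (String header : HEADERS) {
                    // asm/ headers live under the architecture directory on x86 trees.
                    boolean found = Files.exists(tree.resolve("include").resolve(header))
                                 || Files.exists(tree.resolve("arch/x86/include").resolve(header));
                    System.out.println((found ? "found    " : "MISSING  ") + header);
                }
            }
        }

    If the tree for the running kernel is absent, installing the matching linux-headers package (on Ubuntu, linux-headers-$(uname -r)) should supply it; if it is present, the usual fix is to add that include path to the IDE project settings rather than to install anything more.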

    Read the article

  • Virtualbox on Ubuntu 12.04 and 3.5 kernel

    - by kas
    I have installed the 3.5 kernel under Ubuntu 12.04. When I install VirtualBox I receive the following error. Setting up virtualbox (4.1.12-dfsg-2ubuntu0.2) ... * Stopping VirtualBox kernel modules [ OK ] * Starting VirtualBox kernel modules * No suitable module for running kernel found [fail] invoke-rc.d: initscript virtualbox, action "restart" failed. Processing triggers for python-central ... Setting up virtualbox-dkms (4.1.12-dfsg-2ubuntu0.2) ... Loading new virtualbox-4.1.12 DKMS files... First Installation: checking all kernels... Building only for 3.5.0-18-generic Building initial module for 3.5.0-18-generic Error! Bad return status for module build on kernel: 3.5.0-18-generic (x86_64) Consult /var/lib/dkms/virtualbox/4.1.12/build/make.log for more information. * Stopping VirtualBox kernel modules [ OK ] * Starting VirtualBox kernel modules * No suitable module for running kernel found [fail] invoke-rc.d: initscript virtualbox, action "restart" failed. Setting up virtualbox-qt (4.1.12-dfsg-2ubuntu0.2) ... Does anyone know how I might be able to resolve this? Edit -- Here is the make.log: DKMS make.log for virtualbox-4.1.12 for kernel 3.5.0-18-generic (x86_64) Mon Nov 19 12:12:23 EST 2012 make: Entering directory `/usr/src/linux-headers-3.5.0-18-generic' LD /var/lib/dkms/virtualbox/4.1.12/build/built-in.o LD /var/lib/dkms/virtualbox/4.1.12/build/vboxdrv/built-in.o CC [M] /var/lib/dkms/virtualbox/4.1.12/build/vboxdrv/linux/SUPDrv-linux.o CC [M] /var/lib/dkms/virtualbox/4.1.12/build/vboxdrv/SUPDrv.o CC [M] /var/lib/dkms/virtualbox/4.1.12/build/vboxdrv/SUPDrvSem.o CC [M] /var/lib/dkms/virtualbox/4.1.12/build/vboxdrv/r0drv/alloc-r0drv.o CC [M] /var/lib/dkms/virtualbox/4.1.12/build/vboxdrv/r0drv/initterm-r0drv.o CC [M] /var/lib/dkms/virtualbox/4.1.12/build/vboxdrv/r0drv/memobj-r0drv.o CC [M] /var/lib/dkms/virtualbox/4.1.12/build/vboxdrv/r0drv/mpnotification-r0drv.o CC [M] /var/lib/dkms/virtualbox/4.1.12/build/vboxdrv/r0drv/powernotification-r0drv.o CC [M] /var/lib/dkms/virtualbox/4.1.12/build/vboxdrv/r0drv/linux/assert-r0drv-linux.o CC [M] /var/lib/dkms/virtualbox/4.1.12/build/vboxdrv/r0drv/linux/alloc-r0drv-linux.o CC [M] /var/lib/dkms/virtualbox/4.1.12/build/vboxdrv/r0drv/linux/initterm-r0drv-linux.o CC [M] /var/lib/dkms/virtualbox/4.1.12/build/vboxdrv/r0drv/linux/memobj-r0drv-linux.o /var/lib/dkms/virtualbox/4.1.12/build/vboxdrv/r0drv/linux/memobj-r0drv-linux.c: In function ‘rtR0MemObjLinuxDoMmap’: /var/lib/dkms/virtualbox/4.1.12/build/vboxdrv/r0drv/linux/memobj-r0drv-linux.c:1150:9: error: implicit declaration of function ‘do_mmap’ [-Werror=implicit-function-declaration] cc1: some warnings being treated as errors make[2]: *** [/var/lib/dkms/virtualbox/4.1.12/build/vboxdrv/r0drv/linux/memobj-r0drv-linux.o] Error 1 make[1]: *** [/var/lib/dkms/virtualbox/4.1.12/build/vboxdrv] Error 2 make: *** [_module_/var/lib/dkms/virtualbox/4.1.12/build] Error 2 make: Leaving directory `/usr/src/linux-headers-3.5.0-18-generic'

    Read the article

  • exception occured in java compiler

    - by user2892977
    I am a beginner in Java. I have JDK 1.7.0 installed on a Windows 7 OS. I just wrote a sample Java file that does not compile and throws the error below. Sam.java:5: ';' expected Sample p = New Sample(); An exception has occurred in the compiler (1.7.0-ea). Please file a bug at the Java Developer Connection (http://java.sun.com/webapps/bugreport) after checking the Bug Parade for duplicates. Include your program and the following diagnostic in your report. Thank you. java.lang.StringIndexOutOfBoundsException: String index out of range: 26 at java.lang.String.charAt(String.java:694) at com.sun.tools.javac.util.Log.printErrLine(Log.java:251) at com.sun.tools.javac.util.Log.writeDiagnostic(Log.java:343) at com.sun.tools.javac.util.Log.report(Log.java:315) at com.sun.tools.javac.util.AbstractLog.error(AbstractLog.java:96) at com.sun.tools.javac.parser.Parser.reportSyntaxError(Parser.java:295) at com.sun.tools.javac.parser.Parser.accept(Parser.java:326) at com.sun.tools.javac.parser.Parser.blockStatements(Parser.java:1599) at com.sun.tools.javac.parser.Parser.block(Parser.java:1500) at com.sun.tools.javac.parser.Parser.block(Parser.java:1514) at com.sun.tools.javac.parser.Parser.methodDeclaratorRest(Parser.java:2569) at com.sun.tools.javac.parser.Parser.classOrInterfaceBodyDeclaration(Parser.java:2518) at com.sun.tools.javac.parser.Parser.classOrInterfaceBody(Parser.java:2445) at com.sun.tools.javac.parser.Parser.classDeclaration(Parser.java:2290) at com.sun.tools.javac.parser.Parser.classOrInterfaceOrEnumDeclaration(Parser.java:2228) at com.sun.tools.javac.parser.Parser.typeDeclaration(Parser.java:2217) at com.sun.tools.javac.parser.Parser.compilationUnit(Parser.java:2163) at com.sun.tools.javac.main.JavaCompiler.parse(JavaCompiler.java:530) at com.sun.tools.javac.main.JavaCompiler.parse(JavaCompiler.java:571) at com.sun.tools.javac.main.JavaCompiler.parseFiles(JavaCompiler.java:822) at com.sun.tools.javac.main.JavaCompiler.compile(JavaCompiler.java:748) at com.sun.tools.javac.main.Main.compile(Main.java:386) at com.sun.tools.javac.main.Main.compile(Main.java:312) at com.sun.tools.javac.main.Main.compile(Main.java:303) at com.sun.tools.javac.Main.compile(Main.java:82) at com.sun.tools.javac.Main.main(Main.java:67) Below is the code for the Sam.java file: class sam { public static void main(String args[]) { Sample p = New Sample(); p.show(); p.display(); } } I researched the various compiler options on Google, but that did not help. I would like to understand the two errors below. 1 - Sam.java:5: ';' expected 2 - An exception has occurred in the compiler (1.7.0-ea)
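
    The compiler crash itself is a bug in that early-access javac build (worth upgrading past), but the error that triggers it is the capitalized New on line 5: new is a lowercase keyword in Java, and a class named Sample must exist. A corrected version of the sample might look like the sketch below; the Sample class here is an assumption, reconstructed only from the show() and display() calls in the question.

        // Hypothetical Sample class, inferred from the methods the question calls.
        class Sample {
            void show()    { System.out.println("show"); }
            void display() { System.out.println("display"); }
        }

        class Sam {
            public static void main(String[] args) {
                Sample p = new Sample();   // lowercase 'new'
                p.show();
                p.display();
            }
        }

    With both classes in Sam.java, a released JDK should compile it with a plain javac Sam.java and run it with java Sam, without ever reaching the StringIndexOutOfBoundsException inside the compiler.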

    Read the article

  • Silverlight TV 17: Build a Twitter Client for Windows Phone 7 with Silverlight

      At MIX10 this week it was announced that you can develop Windows Phone 7 apps using Silverlight! In this episode, Mike Harsh comes back to Silverlight TV to show John how easy it is to develop a real world application for Windows Phone 7 Series (WP7) using Silverlight. Within minutes, Mike has developed and started running a functional WP7 twitter application that makes cross domain calls. He demonstrates how to design the interface using the designer and tools in Visual Studio 2010 Express...

    Read the article

  • Much Ado About Nothing: Stub Objects

    - by user9154181
    The Solaris 11 link-editor (ld) contains support for a new type of object that we call a stub object. A stub object is a shared object, built entirely from mapfiles, that supplies the same linking interface as the real object, while containing no code or data. Stub objects cannot be executed — the runtime linker will kill any process that attempts to load one. However, you can link to a stub object as a dependency, allowing the stub to act as a proxy for the real version of the object. You may well wonder if there is a point to producing an object that contains nothing but linking interface. As it turns out, stub objects are very useful for building large bodies of code such as Solaris. In the last year, we've had considerable success in applying them to one of our oldest and thorniest build problems. In this discussion, I will describe how we came to invent these objects, and how we apply them to building Solaris. This posting explains where the idea for stub objects came from, and details our long and twisty journey from hallway idea to standard link-editor feature. I expect that these details are mainly of interest to those who work on Solaris and its makefiles, those who have done so in the past, and those who work with other similar bodies of code. A subsequent posting will omit the history and background details, and instead discuss how to build and use stub objects. If you are mainly interested in what stub objects are, and don't care about the underlying software war stories, I encourage you to skip ahead. The Long Road To Stubs This all started for me with an email discussion in May of 2008, regarding a change request that was filed in 2002, entitled: 4631488 lib/Makefile is too patient: .WAITs should be reduced This CR encapsulates a number of chronic issues with Solaris builds: We build Solaris with a parallel make (dmake) that tries to build as much of the code base in parallel as possible. There is a lot of code to build, and we've long made use of parallelized builds to get the job done quicker. This is even more important in today's world of massively multicore hardware. Solaris contains a large number of executables and shared objects. Executables depend on shared objects, and shared objects can depend on each other. Before you can build an object, you need to ensure that the objects it needs have been built. This implies a need for serialization, which is in direct opposition to the desire to build everything in parallel. To accurately build objects in the right order requires an accurate set of make rules defining the things that depend on each other. This sounds simple, but the reality is quite complex. In practice, having programmers explicitly specify these dependencies is a losing strategy: It's really hard to get right. It's really easy to get it wrong and never know it because things build anyway. Even if you get it right, it won't stay that way, because dependencies between objects can change over time, and make cannot help you detect such drifting. You won't know that you got it wrong until the builds break. That can be a long time after the change that triggered the breakage happened, making it hard to connect the cause and the effect. Usually this happens just before a release, when the pressure is on, it's hard to think calmly, and there is no time for deep fixes.
    As a poor compromise, the libraries in core Solaris were built using a set of grossly incomplete hand written rules, supplemented with a number of dmake .WAIT directives used to group the libraries into sets of non-interacting groups that can be built in parallel because we think they don't depend on each other. From time to time, someone will suggest that we could analyze the built objects themselves to determine their dependencies and then generate make rules based on those relationships. This is possible, but there are complications that limit the usefulness of that approach: To analyze an object, you have to build it first. This is a classic chicken and egg scenario. You could analyze the results of a previous build, but then you're not necessarily going to get accurate rules for the current code. It should be possible to build the code without having a built workspace available. The analysis will take time, and remember that we're constantly trying to make builds faster, not slower. By definition, such an approach will always be approximate, and therefore only incrementally more accurate than the hand written rules described above. The hand written rules are fast and cheap, while this idea is slow and complex, so we stayed with the hand written approach. Solaris was built that way, essentially forever, because these are genuinely difficult problems that had no easy answer. The makefiles were full of build races in which the right outcomes happened reliably for years until a new machine or a change in build server workload upset the accidental balance of things. After figuring out what had happened, you'd mutter "How did that ever work?", add another incomplete and soon to be inaccurate make dependency rule to the system, and move on. This was not a satisfying solution, as we tend to be perfectionists in the Solaris group, but we didn't have a better answer. It worked well enough, approximately. And so it went for years. We needed a different approach — a new idea to cut the Gordian Knot. In that discussion from May 2008, my fellow linker-alien Rod Evans had the initial spark that led us to a game-changing series of realizations: The link-editor is used to link objects together, but it only uses the ELF metadata in the object, consisting of symbol tables, ELF versioning sections, and similar data. Notably, it does not look at, or understand, the machine code that makes an object useful at runtime. If you had an object that only contained the ELF metadata for a dependency, but not the code or data, the link-editor would find it equally useful for linking, and would never know the difference. Call it a stub object. In the core Solaris OS, we require all objects to be built with a link-editor mapfile that describes all of its publicly available functions and data. Could we build a stub object using the mapfile for the real object? It ought to be very fast to build stub objects, as there are no input objects to process. Unlike the real object, stub objects would not actually require any dependencies, and so, all of the stubs for the entire system could be built in parallel. When building the real objects, one could link against the stub objects instead of the real dependencies. This means that all the real objects can be built in parallel too, without any serialization. We could replace a system that requires perfect makefile rules with a system that requires no ordering rules whatsoever. The results would be considerably more robust.
    We immediately realized that this idea had potential, but also that there were many details to sort out, lots of work to do, and that perhaps it wouldn't really pan out. As is often the case, it would be necessary to do the work and see how it turned out. Following that conversation, I set about trying to build a stub object. We determined that a faithful stub has to do the following: Present the same set of global symbols, with the same ELF versioning, as the real object. Functions are simple — it suffices to have a symbol of the right type, possibly, but not necessarily, referencing a null function in its text segment. Copy relocations make data more complicated to stub. The possibility of a copy relocation means that when you create a stub, the data symbols must have the actual size of the real data. Any error in this will go uncaught at link time, and will cause tragic failures at runtime that are very hard to diagnose. For reasons too obscure to go into here, involving tentative symbols, it is also important that the data reside in bss, or not, matching its placement in the real object. If the real object has more than one symbol pointing at the same data item, we call these aliased symbols. All data symbols in the stub object must exhibit the same aliasing as the real object. We imagined the stub library feature working as follows: A command line option to ld tells it to produce a stub rather than a real object. In this mode, only mapfiles are examined, and any objects or shared libraries on the command line are ignored. The extra information needed (function or data, size, and bss details) would be added to the mapfile. When building the real object instead of the stub, the extra information for building stubs would be validated against the resulting object to ensure that they match. In exploring these ideas, I immediately ran headfirst into the reality of the original mapfile syntax, a subject that I would later write about as The Problem(s) With Solaris SVR4 Link-Editor Mapfiles. The idea of extending that poor language was a non-starter. Until a better mapfile syntax became available, which seemed unlikely in 2008, the solution could not involve extensions to the mapfile syntax. Instead, we cooked up the idea (hack) of augmenting mapfiles with stylized comments that would carry the necessary information. A typical definition might look like: # DATA(i386) __iob 0x3c0 # DATA(amd64,sparcv9) __iob 0xa00 # DATA(sparc) __iob 0x140 iob; A further problem then became clear: If we can't extend the mapfile syntax, then there's no good way to extend ld with an option to produce stub objects, and to validate them against the real objects. The idea of having ld read comments in a mapfile and parse them for content is an unacceptable hack. The entire point of comments is that they are strictly for the human reader, and explicitly ignored by the tool. Taking all of these speed bumps into account, I made a new plan: A Perl script reads the mapfiles, generates some small C glue code to produce empty functions and data definitions, compiles and links the stub object from the generated glue code, and then deletes the generated glue code. Another Perl script, used after both objects have been built, compares the real and stub objects using data from elfdump, and validates that they present the same linking interface. By June 2008, I had written the above, and generated a stub object for libc. It was a useful prototype process to go through, and it allowed me to explore the ideas at a deep level.
    Ultimately though, the result was unsatisfactory as a basis for a real product. There were so many issues: The use of stylized comments was fine for a prototype, but not close to professional enough for a shipping product. The idea of having to document and support it was a large concern. The ideal solution for stub objects really does involve having the link-editor accept the same arguments used to build the real object, augmented with a single extra command line option. Any other solution, such as our prototype script, will require makefiles to be modified in deeper ways to support building stubs, and so, will raise barriers to converting existing code. A validation script that rederives what the linker knew when it built an object will always be at a disadvantage relative to the actual linker that did the work. A stub object should be identifiable as such. In the prototype, there was no tag or other metadata that would let you know that they weren't real objects. Being able to identify a stub object in this way means that the file command can tell you what it is, and that the runtime linker can refuse to try and run a program that loads one. At that point, we needed to apply this prototype to building Solaris. As you might imagine, the task of modifying all the makefiles in the core Solaris code base in order to do this is a massive task, and not something you'd enter into lightly. The quality of the prototype just wasn't good enough to justify that sort of time commitment, so I tabled the project, putting it on my list of long term things to think about, and moved on to other work. It would sit there for a couple of years. Semi-coincidentally, one of the projects I tackled after that was to create a new mapfile syntax for the Solaris link-editor. We had wanted to do something about the old mapfile syntax for many years. Others before me had done some paper designs, and a great deal of thought had already gone into the features it should, and should not, have, but for various reasons things had never moved beyond the idea stage. When I joined Sun in late 2005, I got involved in reviewing those things and thinking about the problem. Now in 2008, fresh from relearning for the Nth time why the old mapfile syntax was a huge impediment to linker progress, it seemed like the right time to tackle the mapfile issue. Paving the way for proper stub object support was not the driving force behind that effort, but I certainly had them in mind as I moved forward. The new mapfile syntax, which we call version 2, integrated into Nevada build snv_135 in February 2010: 6916788 ld version 2 mapfile syntax PSARC/2009/688 Human readable and extensible ld mapfile syntax In order to prove that the new mapfile syntax was adequate for general purpose use, I had also done an overhaul of the ON consolidation to convert all mapfiles to use the new syntax, and put checks in place that would ensure that no use of the old syntax would creep back in. That work went back into snv_144 in June 2010: 6916796 OSnet mapfiles should use version 2 link-editor syntax That was a big putback, modifying 517 files, adding 18 new files, and removing 110 old ones. I would have done this putback anyway, as the work was already done, and the benefits of human readable syntax are obvious. However, among the justifications listed in CR 6916796 was this: We anticipate adding additional features to the new mapfile language that will be applicable to ON, and which will require all sharable object mapfiles to use the new syntax.
    I never explained what those additional features were, and no one asked. It was premature to say so, but this was a reference to stub objects. By that point, I had already put together a working prototype link-editor with the necessary support for stub objects. I was pleased to find that building stubs was indeed very fast. On my desktop system (Ultra 24), an amd64 stub for libc can be built in a fraction of a second: % ptime ld -64 -z stub -o stubs/libc.so.1 -G -hlibc.so.1 \ -ztext -zdefs -Bdirect ... real 0.019708910 user 0.010101680 sys 0.008528431 In order to go from prototype to integrated link-editor feature, I knew that I would need to prove that stub objects were valuable. And to do that, I knew that I'd have to switch the Solaris ON consolidation to use stub objects and evaluate the outcome. And in order to do that experiment, ON would first need to be converted to version 2 mapfiles. Sub-mission accomplished. Normally when you design a new feature, you can devise reasonably small tests to show it works, and then deploy it incrementally, letting it prove its value as it goes. The entire point of stub objects, however, was to demonstrate that they could be successfully applied to an extremely large and complex code base, and specifically to solve the Solaris build issues detailed above. There was no way to finesse the matter — in order to move ahead, I would have to successfully use stub objects to build the entire ON consolidation and demonstrate their value. In software, the need to boil the ocean can often be a warning sign that things are trending in the wrong direction. Conversely, sometimes progress demands that you build something large and new all at once. A big win, or a big loss — sometimes all you can do is try it and see what happens. And so, I spent some time staring at ON makefiles trying to get a handle on how things work, and how they'd have to change. It's a big and messy world, full of complex interactions, unspecified dependencies, special cases, and knowledge of arcane makefile features... ...and so, I backed away, put it down for a few months and did other work... ...until the fall, when I felt like it was time to stop thinking and pondering (some would say stalling) and get on with it. Without stubs, the following gives a simplified high level view of how Solaris is built: An initially empty directory, known as the proto and referenced via the ROOT makefile macro, is established to receive the files that make up the Solaris distribution. A top level setup rule creates the proto area, and performs operations needed to initialize the workspace so that the main build operations can be launched, such as copying needed header files into the proto area. Parallel builds are launched to build the kernel (usr/src/uts), libraries (usr/src/lib), and commands. The install makefile target builds each item and delivers a copy to the proto area. All libraries and executables link against the objects previously installed in the proto, implying the need to synchronize the order in which things are built. Subsequent passes run lint, and do packaging. Given this structure, the additions to use stub objects are: A new second proto area is established, known as the stub proto and referenced via the STUBROOT makefile macro. The stub proto has the same structure as the real proto, but is used to hold stub objects. All files in the real proto are delivered as part of the Solaris product. In contrast, the stub proto is used to build the product, and then thrown away.
    A new target is added to library Makefiles called stub. This rule builds the stub objects. The ld command is designed so that you can build a stub object using the same ld command line you'd use to build the real object, with the addition of a single -z stub option. This means that the makefile rules for building the stub objects are very similar to those used to build the real objects, and many existing makefile definitions can be shared between them. A new target is added to the Makefiles called stubinstall, which delivers the stub objects built by the stub rule into the stub proto. These rules reuse much of the existing plumbing used by the install rule. The setup rule runs stubinstall over the entire lib subtree as part of its initialization. All libraries and executables link against the objects in the stub proto rather than the main proto, and can therefore be built in parallel without any synchronization. There was no small way to try this that would yield meaningful results. I would have to take a leap of faith and edit approximately 1850 makefiles and 300 mapfiles first, trusting that it would all work out. Once the editing was done, I'd type make and see what happened. This took about 6 weeks to do, and there were many dark days when I'd question the entire project, or struggle to understand some of the many twisted and complex situations I'd uncover in the makefiles. I even found a couple of new issues that required changes to the new stub object related code I'd added to ld. With a substantial amount of encouragement and help from some key people in the Solaris group, I eventually got the editing done and stub objects for the entire workspace built. I found that my desktop system could build all the stub objects in the workspace in roughly a minute. This was great news, as it meant that use of the feature is effectively free — no one was likely to notice or care about the cost of building them. After another week of typing make, fixing whatever failed, and doing it again, I succeeded in getting a complete build! The next step was to remove all of the make rules and .WAIT statements dedicated to controlling the order in which libraries under usr/src/lib are built. This came together pretty quickly, and after a few more speed bumps, I had a workspace that built cleanly and looked like something you might actually be able to integrate someday. This was a significant milestone, but there was still much left to do. I turned to doing full nightly builds. Every type of build (open, closed, OpenSolaris, export, domestic) had to be tried. Each type failed in a new and unique way, requiring some thinking and rework. As things came together, I became aware of things that could have been done better, simpler, or cleaner, and those things also required some rethinking, the seeking of wisdom from others, and some rework. After another couple of weeks, it was in close to final form. My focus turned towards the end game and integration. This was a huge workspace, and needed to go back soon, before changes in the gate would make merging increasingly difficult. At this point, I knew that the stub objects had greatly simplified the makefile logic and uncovered a number of race conditions, some of which had been there for years. I assumed that the builds were faster too, so I did some builds intended to quantify the speedup in build time that resulted from this approach. It had never occurred to me that there might not be one.
    And so, I was very surprised to find that the wall clock build times for a stock ON workspace were essentially identical to the times for my stub library enabled version! This is why it is important to always measure, and not just to assume. One can tell from first principles, based on all those removed dependency rules in the library makefiles, that the stub object version of ON gives dmake considerably more opportunities to overlap library construction. Some hypotheses were proposed, and shot down: Could we have disabled dmake's parallel feature? No, a quick check showed things being built in parallel. It was suggested that we might be I/O bound, and so, the threads would be mostly idle. That's a plausible explanation, but system stats didn't really support it. Plus, the timings between the stub and non-stub cases were just too suspiciously identical. Are our machines already handling as much parallelism as they are capable of, and unable to exploit these additional opportunities? Once again, we didn't see the evidence to back this up. Eventually, a more plausible and obvious reason emerged: We build the libraries and commands (usr/src/lib, usr/src/cmd) in parallel with the kernel (usr/src/uts). The kernel is the long leg in that race, and so, wall clock measurements of build time are essentially showing how long it takes to build uts. Although it would have been nice to post a huge speedup immediately, we can take solace in knowing that stub objects simplify the makefiles and reduce the possibility of race conditions. The next step in reducing build time should be to find ways to reduce or overlap the uts part of the builds. When that leg of the build becomes shorter, then the increased parallelism in the libs and commands will pay additional dividends. Until then, we'll just have to settle for simpler and more robust. And so, I integrated the link-editor support for creating stub objects into snv_153 (November 2010) with 6993877 ld should produce stub objects PSARC/2010/397 ELF Stub Objects followed by the work to convert the ON consolidation in snv_161 (February 2011) with 7009826 OSnet should use stub objects 4631488 lib/Makefile is too patient: .WAITs should be reduced This was a huge putback, with 2108 modified files, 8 new files, and 2 removed files. Due to the size, I was allowed a window after snv_160 closed in which to do the putback. It went pretty smoothly for something this big; a few more preexisting race conditions would be discovered and addressed over the next few weeks, and things have been quiet since then. Conclusions and Looking Forward Solaris has been built with stub objects since February. The fact that developers no longer specify the order in which libraries are built has been a big success, and we've eliminated an entire class of build error. That's not to say that there are no build races left in the ON makefiles, but we've taken a substantial bite out of the problem while generally simplifying and improving things. The introduction of a stub proto area has also opened some interesting new possibilities for other build improvements. As this article has become quite long, and as those uses do not involve stub objects, I will defer that discussion to a future article.

    Read the article

  • Best Practices in Setting up a Build and Deployment environment for the Java Platform

    - by Genadinik
    I have a project for which "quick and dirty" isn't the best solution. What is the most stable and currently accepted set of procedures/tools that I should look into when setting up my build/deploy dev (and later production) environment? What I mean is: Should I use ANT? Or has there been something better that has evolved? In what instances should I use Maven? What are some best practices to create a continuous integration/deployment environment? What are best practices for doing test-driven development? Anything else?
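    As a point of reference for the Maven part of the question, a project that follows Maven's standard directory layout needs very little build configuration. The POM below is purely an illustrative sketch with placeholder coordinates:

        <!-- Minimal illustrative pom.xml; groupId/artifactId are placeholders. -->
        <project xmlns="http://maven.apache.org/POM/4.0.0">
          <modelVersion>4.0.0</modelVersion>
          <groupId>com.example</groupId>
          <artifactId>myapp</artifactId>
          <version>1.0-SNAPSHOT</version>
          <packaging>jar</packaging>
        </project>

    With sources under src/main/java and tests under src/test/java, mvn package compiles, runs the tests, and builds the jar, and a CI server can then build the project without custom scripting.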

    Read the article

  • How to force VS 2010 to skip "builds" of projects which haven't changed?

    - by Ladislav Mrnka
    Our product's solution has more than 100 projects (500+ ksloc of production code). Most of them are C# projects, but we also have a few using C++/CLI to bridge communication with native code. Rebuilding the whole solution takes several minutes. That's fine; if I want to rebuild the solution, I expect it to take some time. What is not fine is the time needed to build the solution after a full rebuild. Imagine I did a full rebuild and now, without making any changes to the solution, I press Build (F6 or Ctrl+Shift+B). Why does it take 35s if there was no change? In the output I see that it started "building" each project - it doesn't perform a real build, but it does something which consumes a significant amount of time. That 35s delay is a pain in the ass. Yes, I can improve the time by not using Build Solution but only Build Project (Shift+F6). If I run a project build on the particular test project I'm currently working on, it takes "only" 8+s. It requires me to run the project build on the correct project (the test project, to ensure the dependent tested code is built as well). At least the ReSharper test runner correctly recognizes that only this single project must be built, and rerunning a test usually costs only that 8+s of compilation. My current coding kata is: don't touch Ctrl+Shift+B. The test project build takes 8s even if I don't make any changes. The reason it takes 8s is that it also "builds" dependencies = in my case it "builds" more than 20 projects, even though I made changes only to the unit test or a single dependency! I don't want it to touch other projects. Is there a way to simply tell VS to build only the projects where changes were made, plus the projects which depend on the changed ones (preferably this part as another build option)? I worry you will tell me that this is exactly what VS is doing, but in the MS way... I want to improve my TDD experience and reduce the time of compilation (in TDD, compilation can happen twice per minute). To make this even more frustrating, I'm working in a team where most developers used to work on Java projects prior to joining this one, so you can imagine how pissed off they are when they must use VS, in contrast to full incremental compilation in Java. I don't require incremental compilation of classes; I expect working incremental compilation of solutions. Especially in a product like VS 2010 Ultimate, which costs several thousand dollars. I really don't want to get answers like: make a separate solution, unload projects you don't need, etc. I can read those answers here. Those are not acceptable solutions. We're not paying for VS to make such compromises.
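    One partial workaround sometimes suggested for this kind of setup, offered here only as a sketch (the project name is a placeholder), is to compile just the test project from the MSBuild command line and tell it not to re-walk the project references that an earlier full build already produced:

        REM Build only the test project; assumes its references are already built.
        msbuild Product.Tests.csproj /t:Build /p:BuildProjectReferences=false /m

    Whether this fits a TDD loop better than Shift+F6 depends on the solution layout, so treat it as an experiment rather than a definitive answer.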

    Read the article

  • Java Developers: Is Ant still in the "main stream" for builds? Do we push new developers to learn it?

    - by Sam Goldberg
    We have been slowly replacing batch command files (Windows .bat), which simply jarred up the classes compiled in the developer's IDE, with more comprehensive Ant builds (i.e. get from CVS, clean compile, jar, archive, email, etc.). I've spent a lot of time learning (and debugging issues with) Ant, so I'm most comfortable using it for these tasks. But I wonder if Ant is still in as wide usage as it was when I first started learning it, or whether "the world has moved on" to something newer (and maybe slicker). (I've started to see more Maven build stuff distributed, which I've never used, for example.) The practical import of this question is whether I push new developers to learn Ant, or whether they should be learning something else for builds. I'm never too on top of the trends, so it would be great to hear from other Java developers what they think is the best build tool, and what they think new developers should be learning.
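    For readers who haven't used Ant, the kind of build being described here (clean, compile, jar) boils down to a few declarative targets. This is only an illustrative sketch with placeholder paths and names:

        <!-- Minimal illustrative build.xml -->
        <project name="myapp" default="jar" basedir=".">
          <property name="src" location="src"/>
          <property name="classes" location="build/classes"/>
          <property name="dist" location="dist"/>

          <target name="clean">
            <delete dir="${classes}"/>
            <delete dir="${dist}"/>
          </target>

          <target name="compile" depends="clean">
            <mkdir dir="${classes}"/>
            <javac srcdir="${src}" destdir="${classes}"/>
          </target>

          <target name="jar" depends="compile">
            <mkdir dir="${dist}"/>
            <jar destfile="${dist}/myapp.jar" basedir="${classes}"/>
          </target>
        </project>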

    Read the article

  • Build in better usability with UX Direct

    - by JuergenKress
    The Oracle Applications User Experience team has created a program called Oracle UX Direct to provide customers, partners, and consultants in the enterprise industry with design best practices and tools that they can leverage to make their enterprise implementations more successful. Read the Voice of User Experience (VoX) blog to learn more about why the program was created, and visit the UX Direct web site to find out how to introduce design thinking during the implementation stage and create a solution that best fits the needs of users from the beginning. Read more about UX Direct on VoX. SOA & BPM Partner Community: for regular information on Oracle SOA Suite, become a member of the SOA & BPM Partner Community. For registration, please visit www.oracle.com/goto/emea/soa (OPN account required). If you need support with your account, please contact the Oracle Partner Business Center.

    Read the article

  • how to set BUILD_MAC_SDK_EXPERIMENTAL=1 on Mac 10.7?

    - by Nguyen Minh Binh
    I am building the Android OS source on Mac 10.7, following the instructions at: http://source.android.com/source/building.html. Below is the error output I get when I try to run lunch full-eng:

        BinhNguyens-MacBook:WORKING_DIRECTORY CuongLy$ lunch full-eng
        2012-10-04 14:02:58.544 xcodebuild[645:80b] XcodeColors: load (v10.1)
        2012-10-04 14:02:58.560 xcodebuild[645:80b] XcodeColors: pluginDidLoad:
        build/core/combo/HOST_darwin-x86.mk:62: *****************************
        build/core/combo/HOST_darwin-x86.mk:63: * Can not find SDK 10.6 at /Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX10.6.sdk
        build/core/combo/HOST_darwin-x86.mk:65: * If you wish to build using higher version of SDK,
        build/core/combo/HOST_darwin-x86.mk:66: * try setting BUILD_MAC_SDK_EXPERIMENTAL=1 before
        build/core/combo/HOST_darwin-x86.mk:67: * rerunning this command
        build/core/combo/HOST_darwin-x86.mk:69: *****************************
        build/core/combo/HOST_darwin-x86.mk:70: * Stop.. Stop.

    Please tell me how to set BUILD_MAC_SDK_EXPERIMENTAL=1.
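    Assuming the standard AOSP build steps described on that page, the usual approach (shown here as a sketch, not an official recipe) is simply to export the variable in the shell before setting up the environment and rerunning lunch:

        # Run from the root of the Android source tree; SDK versions may differ.
        export BUILD_MAC_SDK_EXPERIMENTAL=1
        source build/envsetup.sh
        lunch full-eng
        make -j4

    If the combo makefile reads the variable from the environment (as its message suggests), exporting it once per shell session should be enough.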

    Read the article

  • How do you fix "Too many open files" problem in Hudson?

    - by Randyaa
    We use Hudson as a continuous integration system to execute automated builds (nightly and based on CVS polling) of a lot of our projects. Some projects poll CVS every 15 minutes, some poll every 5 minutes, and some poll every hour. Every few weeks we'll get a build that fails with the following output:

        FATAL: java.io.IOException: Too many open files
        java.io.IOException: java.io.IOException: Too many open files
            at java.lang.UNIXProcess.<init>(UNIXProcess.java:148)

    The next build always worked (with 0 changes), so we always chalked it up to 2 build jobs being run at the same time and happening to have too many files open during the process. This weekend we had a build fail Friday night (the automatic nightly build) with that message, and every other nightly build also failed. Somehow this triggered Hudson to continuously rebuild every project that failed until the issue was resolved. This resulted in a build of every project every 30 minutes or so, until sometime Saturday night when the issue magically disappeared.
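    The usual first suspect for this class of failure, described here as a general sketch rather than a Hudson-specific fix, is the per-process file descriptor limit of the user running Hudson:

        # Check the current limit for the Hudson user (often 1024 by default).
        ulimit -n

        # Raise it for the current shell before starting Hudson...
        ulimit -n 8192

        # ...or, on Linux, make it permanent in /etc/security/limits.conf:
        # hudson  soft  nofile  8192
        # hudson  hard  nofile  8192

    Leaked descriptors from build steps can still exhaust even a raised limit, so this treats the symptom; the numbers above are illustrative.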

    Read the article

  • ojspc always returns 0 on errors

    - by Matt McCormick
    In my Ant build.xml file, I am trying to compile JSPs using ojspc. The files are being compiled; however, the build still runs to completion when the JSP compilation has errors. This is part of my build.xml:

        <java fork="true" jar="${env.ORACLE_HOME}\j2ee\home\ojspc.jar" resultproperty="result">
          <jvmarg value="-Djava.compiler=NONE"/>
          <arg value="-extend"/>
          <arg value="com.orionserver.http.OrionHttpJspPage"/>
          <arg value="-batchMask"/>
          <arg value="*.jsp"/>
          <arg value="${target-directory}/build/target/ear/${module-dir-name}-jsp.war"/>
        </java>
        <echo level="info">Result Property: ${result}</echo>

    I have tried setting the attribute failonerror="true", but that does not change anything. I receive the following output:

        [java] Detected archive, now processing contents of ../build/target/ear/web-module-jsp.war...
        [java] Setting up temp area...
        [java] Expanding archive in temp area...
        [java] C:\DOCUME~1\MMCCOR~1\LOCALS~1\Temp\tmp12940\_web_2d_inf\_jsp\_password.java:60: cannot resolve symbol
        [java] symbol : variable reqvst
        [java] location: class _web_2d_inf._jsp._password
        [java] out.print(reqvst.getAttribute("test"));
        [java] ^
        [java] 1 error
        [java] Creating D:\eclipse-workspace\jdw\build\..\build\target\ear\web-module-jsp.war ...
        [java] Removing temp area...
        [echo] Result Property: 0
        ...(more commands)
        BUILD SUCCESSFUL

    In the password.jsp file, I intentionally introduced an error as a test. How can I get the build to fail on an error? On the Ant page for the java task, I am confused by: By default the return code of a <java> is ignored. Alternatively, you can set resultproperty to the name of a property and have it assigned to the result code (barring immutability, of course). When you set failonerror="true", the only possible value for resultproperty is 0. Any non-zero response is treated as an error and would mean the build exits.
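    Since ojspc apparently exits with 0 even when compilation fails, one hedged workaround is to capture its output and fail the build explicitly when that output mentions an error. The sketch below reuses the <java> call from above and assumes the error text always contains the word "error":

        <java fork="true" jar="${env.ORACLE_HOME}\j2ee\home\ojspc.jar"
              resultproperty="result" outputproperty="ojspc.out" errorproperty="ojspc.err">
          <jvmarg value="-Djava.compiler=NONE"/>
          <arg value="-extend"/>
          <arg value="com.orionserver.http.OrionHttpJspPage"/>
          <arg value="-batchMask"/>
          <arg value="*.jsp"/>
          <arg value="${target-directory}/build/target/ear/${module-dir-name}-jsp.war"/>
        </java>
        <!-- Fail the build if ojspc's captured output mentions an error. -->
        <fail message="JSP compilation reported errors: ${ojspc.err}">
          <condition>
            <contains string="${ojspc.out}${ojspc.err}" substring=" error"/>
          </condition>
        </fail>

    failonerror only reacts to the JVM exit code, so it cannot help while ojspc itself reports success.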

    Read the article

  • How can I copy files in the middle of a build in Team System?

    - by Dana
    I have two solutions that I want to include in a build. Solution two requires the DLLs from solution one in order to build successfully. Solution two has a Binaries folder into which the DLLs from solution one need to be copied before solution two is built. I've been trying an AfterBuild target, hoping that it would copy the items after the first SolutionToBuild, but it doesn't fire then. I'm guessing that it probably fires after both solutions have compiled, but that's not what I want.

        <SolutionToBuild Include="$(BuildProjectFolderPath)/../../Main/Framework.sln">
          <Targets>AfterCompileFramework</Targets>
          <Properties></Properties>
        </SolutionToBuild>
        <SolutionToBuild Include="$(BuildProjectFolderPath)/../../../Dashboard/Main/Dashboard.sln">
          <Targets></Targets>
          <Properties></Properties>
        </SolutionToBuild>
        <ItemGroup>
          <FrameworkBinaries Include="$(DropLocation)\$(BuildNumber)\Release\Framework.*.dll"/>
        </ItemGroup>
        <Message Text="FrameworkBinaries: @(FrameworkBinaries)" Importance="high"/>
        <Copy SourceFiles="@(FrameworkBinaries)" DestinationFolder="$(BuildProjectFolderPath)/../../../Dashboard/Main/Binaries"/>
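    If the build definition uses a TFSBuild.proj-style project (as the snippet suggests), one approach, sketched here with assumed paths and on the assumption that your Team Build version exposes the AfterCompileSolution extensibility target, is to do the copy there; it runs after each solution in SolutionToBuild compiles, so the Framework DLLs would be in place before Dashboard.sln builds:

        <Target Name="AfterCompileSolution">
          <ItemGroup>
            <FrameworkDlls Include="$(BinariesRoot)\Release\Framework.*.dll"/>
          </ItemGroup>
          <!-- This also runs again after Dashboard.sln compiles, which is harmless. -->
          <Copy SourceFiles="@(FrameworkDlls)"
                DestinationFolder="$(BuildProjectFolderPath)/../../../Dashboard/Main/Binaries"/>
        </Target>

    Note that the drop location is only populated when the build drops its outputs, after compilation, which is why the sketch points at the binaries directory instead of $(DropLocation)\$(BuildNumber).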

    Read the article

  • TFS2010 Hangs “Waiting for Build Agent”

    - by Qpirate
    I have asked this question over on SO (the link to the question is here), but I am hoping this is a better place to ask it. I have 3 VMs, each running the TFS Build Host Service: one has 1 controller and 1 agent, and the other two have 2 build agents each. Most of the time (7/10 builds) it comes back with the following error message:

        TF215097: An error occurred while initializing a build for build definition BUILD_DEFINITION: There was no endpoint listening at http://MACHINE1:9191/Build/v3.0/Services/Controller/14 that could accept the message. This is often caused by an incorrect address or SOAP action. See InnerException, if present, for more details.

    and there are no errors logged when I do get this message. The following is the config file that I have created:

        <configuration>
          <appSettings>
            <add key="traceWriter" value="true"/>
          </appSettings>
          <system.diagnostics>
            <switches>
              <add name="BuildServiceTraceLevel" value="4"/>
              <add name="API" value="4"/>
              <add name="Authentication" value="4"/>
              <add name="Authorization" value="4"/>
              <add name="Database" value="4"/>
              <add name="General" value="4"/>
              <add name="traceLevel" value="4"/>
            </switches>
            <trace autoflush="true" indentsize="4">
              <listeners>
                <add name="myListener" type="Microsoft.TeamFoundation.TeamFoundationTextWriterTraceListener,Microsoft.TeamFoundation.Common, Version=10.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a" initializeData="c:\logs\TFSBuildServiceHost.exe.log" />
                <remove name="Default" />
              </listeners>
            </trace>
          </system.diagnostics>
        </configuration>

    I do have my own custom activities in my build process, but this does not seem to be the problem, as sometimes the build actually does go through. I have tried refreshing the template, as some sites suggest. Has anyone come across a solution to this problem? Or can anyone tell me how to catch these errors when they happen?

    Read the article

  • Google indexing and ranking a custom domain served by Google App Engine

    - by Hugues
    I have a website served at the following URL: "http://www.plugimmo.com", which is a custom domain served by Google App Engine at the following URL: http://plugimmo.appspot.com. For a while I have been trying to optimise its Google indexing and ranking, with no success. The problem is that searching on Google for the keywords in the title of my home page does not return my website at all, not even in the first 1,000 results. When checking Google's cached version (cache:www.plugimmo.com), it says that the cached version is from 20-Aug-12 and is for "plugimmo.appspot.com". It looks like there are several issues:
    1 - The cached version is really old. I have made a lot of changes since 20-Aug-12, and I have seen Googlebot crawling my site several times.
    2 - The cached version is for "plugimmo.appspot.com".
    3 - Looking at Google Webmaster Tools, I see that the number of pages indexed for www.plugimmo.com is 0, but that can't be the case given the number of changes I have made since then.
    My questions are therefore the following:
    Why is the cached version so old, although I have seen Googlebot crawl the site many times since 20-Aug-12?
    Is there a problem with indexing a custom domain served by Google App Engine?
    Why does Google Webmaster Tools show 0 pages indexed, although new pages have been crawled and no errors have been reported for indexing?
    Also, the site has been developed with Google Web Toolkit, and I have followed the guidelines for crawling Ajax sites. The home page, when crawled by a robot, is redirected to http://www.plugimmo.com/HomeSnapshot.html. Thanks a lot for your help! Hugues

    Read the article

  • Announcing Sesame Data Browser

    - by Fabrice Marguerie
    On the occasion of MIX10, which is currently taking place in Las Vegas, I'd like to announce Sesame Data Browser. Sesame will be a suite of tools for dealing with data, and Sesame Data Browser will be the first tool from that suite. Today, during the second MIX10 keynote, Microsoft demonstrated how they are pushing hard to get OData adopted. If you don't know about OData, you can visit the just-revamped dedicated website: http://odata.org. There you'll find information about the OData protocol, which allows you to publish and consume data on the web, the OData SDK (with client libraries for .NET, Java, Javascript, PHP, iPhone, and more), a list of OData producers, and a list of OData consumers. This is where Sesame Data Browser comes into play. It's one of the tools you can use today to consume OData. I'll let you have a look, but be aware that this is just a preview and many additional features are coming soon. Sesame Data Browser is part of a bigger picture than just OData that will take shape over the coming months. Sesame is a project I've been working on for many months now, so what you see now is just a start :-) I hope you'll enjoy what you see. Let me know what you think.

    Read the article

  • Soft 404 error on redirected outbound links

    - by Techlands
    I have a redirect script on my site which sends visitors to an affiliate site. However, in the last month I've noticed that Google Webmaster Tools is reporting my outbound links as 404 errors. Here is the breakdown of how it's set up: My outbound links are coded like this: <a href="/f/c123" rel="nofollow" target="_blank">Link Title</a> My redirect script then performs a 302 redirect to the affiliate link. Originally I had the affiliate links (CJ) directly in the HTML; however, I noticed over time that this had some impact on my site's traffic. So I changed them to a redirect script and my traffic returned. This seemed to work with no issues for over a year, but now I'm getting soft 404 errors in Google Webmaster Tools. I did try adding a rule to my robots.txt to block any links starting with /f/, but I'm not sure if this will help or whether Google will still report soft 404 errors. As a possible option, I am considering changing the a tag to a button tag and using an onclick event to load the link.
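    If the goal is simply to keep crawlers away from the redirect URLs, the robots.txt rule mentioned above is short; the path is taken from the question:

        # Block all crawlers from the /f/ redirect paths.
        User-agent: *
        Disallow: /f/

    This only prevents crawling, so it is not guaranteed to clear the existing soft 404 reports immediately, but it should stop new ones from being generated for those URLs.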

    Read the article

  • hProduct-microformats not work in google

    - by silverfox
    I'm trying to work with hProduct and was testing it with the Google rich snippets testing tool for microformats (http://www.google.com/webmasters/tools/richsnippets), but it is not recognizing the data: it does not recognize the photo, the price, or the category; it only recognizes the rating. HTML:

        <div class="hproduct">
          <span class="brand">ACME</span>
          <span class="fn">Executive Anvil</span>
          <img class="photo" src="http://microformats.org/wiki/skins/Microformats/images/logo.gif" />
          <span class="review hreview-aggregate">
            Average rating: <span class="rating">4.4</span>, based on <span class="count">89 </span> reviews
          </span>
          Regular price: $179.99 Sale: $<span class="price">119.99</span> (Sale ends 5 November!)
          <span class="description">Sleeker than ACME's Classic Anvil, the Executive Anvil is perfect for the business traveler looking for something to drop from a height.</span>
          Category:
          <span class="category">
            <span class="value-title" title="Hardware > Tools > Anvils">Anvils</span>
          </span>
        </div>

    It still shows this warning: "In order to generate a preview with rich snippets, either price or review or availability needs to be present." I used Google's own example: http://support.google.com/webmasters/bin/answer.py?hl=en&answer=186036 and I also tested the markup from microformats.org: http://microformats.org/wiki/google-rich-snippets-examples

    Read the article

< Previous Page | 49 50 51 52 53 54 55 56 57 58 59 60  | Next Page >