Search Results

Search found 17069 results on 683 pages for 'build monkey'.

  • libtool failed with exit code 1 on Xcode 4.3

    - by Doz
    I have Xcode 4.3 and I am getting this frustrating xml-lib related error. I have a feeling it's because 4.3 no longer uses the /Developer folder but instead /Applications/Xcode.app/... The build transcript is below:

        Libtool /Users/dkatz/Library/Developer/Xcode/DerivedData/RWEngines-ewchevfhokeivnffrputdqapsyxu/Build/Products/Release-iphonesimulator/RWEngines.framework/Versions/A/RWEngines normal i386
            cd /Users/dkatz/Sites/xCode/RWA/RWEngines
            setenv MACOSX_DEPLOYMENT_TARGET 10.6
            setenv PATH "/Applications/Xcode.app/Contents/Developer/Platforms/iPhoneSimulator.platform/Developer/usr/bin:/Applications/Xcode.app/Contents/Developer/usr/bin:/usr/bin:/bin:/usr/sbin:/sbin"
            /Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin/libtool -static -arch_only i386 -syslibroot /Applications/Xcode.app/Contents/Developer/Platforms/iPhoneSimulator.platform/Developer/SDKs/iPhoneSimulator5.0.sdk -L/Users/dkatz/Library/Developer/Xcode/DerivedData/RWEngines-ewchevfhokeivnffrputdqapsyxu/Build/Products/Release-iphonesimulator -filelist /Users/dkatz/Library/Developer/Xcode/DerivedData/RWEngines-ewchevfhokeivnffrputdqapsyxu/Build/Intermediates/RWEngines.build/Release-iphonesimulator/RWEngines.build/Objects-normal/i386/RWEngines.LinkFileList -ObjC -Xlinker -no_implicit_dylibs -D__IPHONE_OS_VERSION_MIN_REQUIRED=50000 -framework UIKit /Users/dkatz/Library/Developer/Xcode/DerivedData/RWEngines-ewchevfhokeivnffrputdqapsyxu/Build/Products/Release-iphonesimulator/libCorePlot-CocoaTouch.a -framework SenTestingKit -framework QuartzCore -framework Foundation -framework RWCommon -o /Users/dkatz/Library/Developer/Xcode/DerivedData/RWEngines-ewchevfhokeivnffrputdqapsyxu/Build/Products/Release-iphonesimulator/RWEngines.framework/Versions/A/RWEngines

    And the actual error:

        /Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin/libtool failed with exit code 1

    Thanks guys!
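
    Since the suspicion is exactly the /Developer-to-/Applications/Xcode.app move, one quick sanity check (a sketch; the switch path assumes a standard App Store install of Xcode 4.3) is to confirm where the command-line tools are pointed and re-point them if they are stale:

        # Show which developer directory the tools currently resolve to.
        xcode-select -print-path

        # If it still reports /Developer, point it at the Xcode 4.3 bundle.
        sudo xcode-select -switch /Applications/Xcode.app/Contents/Developer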

  • What is the difference between these task definition syntaxes in gradle?

    - by bergyman
    A)

        task build << {
            description = "Build task."
            ant.echo('build')
        }

    B)

        task build {
            description = "Build task."
            ant.echo('build')
        }

    I notice that with type B, the code within the task seems to be executed when typing gradle -t: ant echoes out 'build' even when just listing all the various available tasks. The description is also actually displayed with type B. However, with type A no code is executed when listing out the available tasks, and the description is not displayed when executing gradle -t. The docs don't seem to go into the difference between these two syntaxes (that I've found), only that you can define a task either way.
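
    For background on why the two behave differently: Gradle separates a configuration phase (every task's configuration closure runs, even for plain task listing) from an execution phase (a task's actions run only when the task is requested). The << operator in type A was shorthand for doLast, i.e. it added an execution-phase action, which is why nothing is echoed by gradle -t. A sketch combining both correctly, assuming the same ant.echo action as above:

        task build {
            // Configuration phase: runs whenever Gradle evaluates the build,
            // including plain task listing (gradle -t).
            description = "Build task."

            doLast {
                // Execution phase: runs only when the build task actually executes.
                ant.echo('build')
            }
        }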

  • How can I determine if a given git hash exists on a given branch?

    - by pinko
    Background: I use an automated build system which takes a git hash as input, as well as the name of the branch on which that hash exists, and builds it. However, the build system uses the hash alone to check out the code and build it -- it simply stores the branch name, as given, in the build DB metadata. I'm worried about developers accidentally providing the wrong branch name when they kick off a build, causing confusion when people are looking through the build history. So how can I confirm, before passing along the hash and branch name to the build system, that the given hash does in fact come from the given branch?
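
    One way to check this up front (a sketch; the hash and branch are placeholders, and merge-base --is-ancestor needs a reasonably recent git; on older versions, git branch --contains <hash> gives the same information):

        #!/bin/sh
        # Usage: check_hash_on_branch.sh <hash> <branch>
        hash="$1"
        branch="$2"

        # Exit 0 only if <hash> is reachable from the tip of <branch>.
        if git merge-base --is-ancestor "$hash" "$branch"; then
            echo "OK: $hash is on $branch"
        else
            echo "ERROR: $hash is not on $branch" >&2
            exit 1
        fi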

  • Database continuous integration step by step

    - by David Atkinson
    This post describes how to set up basic database continuous integration using TeamCity to initiate the build process, SQL Source Control to put your database under source control, and the SQL Compare command line to keep a test database up to date. In my example I will be using Subversion as my source control repository. If you wish to follow my steps verbatim, please make sure you have TortoiseSVN, SQL Compare and SQL Source Control installed.

    Downloading and Installing TeamCity

    TeamCity (http://www.jetbrains.com/teamcity/index.html) is free for up to three agents, so it's a great no-risk tool you can use to experiment with.

    1. Download the latest version from the JetBrains website. For some reason the TeamCity executable didn't download properly for me, stalling frustratingly at 99%, so I tried again with the zip file download option, which worked flawlessly.
    2. Run the installer using the defaults. This results in a set-up with the server component and agent installed on the same machine, which is ideal for getting started with ease.
    3. Check that the build agent is pointing to the server correctly. This has caught me out a few times before. This setting is in C:\TeamCity\buildAgent\conf\buildAgent.properties and for my installation is serverUrl=http\://localhost\:80. If you need to change this value, for example because you've had to install the server console on a different port number, the TeamCity Build Agent Service will need to be restarted for the change to take effect.
    4. Open the TeamCity admin console on http://localhost, and specify your own designated username and password at first startup.

    Putting your database in source control using SQL Source Control

    5. Assuming you've got SQL Source Control installed, select a development database in the SQL Server Management Studio Object Explorer and select Link Database to Source Control.
    6. For the Link step you can either create your own empty folder in source control, or you can select Just Evaluating, which just creates a local Subversion repository for you behind the scenes.
    7. Once linked, note that your database turns green in the Object Explorer. Visit the Commit tab to do an initial commit of your database objects by typing in an appropriate comment and clicking Commit.
    8. There is a hidden feature in SQL Source Control that opens up TortoiseSVN (provided it is installed) pointing to the linked repository. Keep Shift depressed and right-click on the text to the right of 'Linked to' (in my example, the red Evaluation Repository text). Select Open TortoiseSVN Repo Browser. This screen should give you an idea of how SQL Source Control manages the object files behind the scenes.

    Back in the TeamCity admin console, we'll now create a new project to monitor the above repository location and to trigger a 'build' each time the repository changes.

    9. In TeamCity Administration, select Create Project and give it a name, such as "My first database CI", and click Create.
    10. Click on Create Build Configuration, and name it something like "Integration build".
    11. Click VCS settings and then Create And Attach new VCS root. This is where you will tell TeamCity about the repository it should monitor.
    12. In my case, since I'm using the Just Evaluating option in SQL Source Control, I should select Subversion.
    13. In the URL field paste your repository location. In my case this is file:///C:/Users/David.Atkinson/AppData/Local/Red Gate/SQL Source Control 3/EvaluationRepositories/WidgetDevelopment/WidgetDevelopment
    14. Click on Test Connection to ensure that you can communicate with your source control system. Click Save.
    15. Click Add Build Step, and select Runner Type: Command Line. Should you be familiar with the other runner types, such as NAnt, MSBuild or PowerShell, you can opt for these, but for the sake of keeping it simple I will pick the simplest option.
    16. If you have installed SQL Compare in the default location, set the Command Executable field to: C:\Program Files (x86)\Red Gate\SQL Compare 10\sqlcompare.exe
    17. Flip back to SSMS briefly and add a new database to your server. This will be the database used for continuous integration testing.
    18. Set the command parameters according to your server and the name of the database you have created. In my case I created database RedGateCI on server .\sql2008r2, so the parameters are: /scripts1:. /server2:.\sql2008r2 /db2:RedGateCI /sync /verbose Note that if you pick a server instance that isn't on your local machine, you'll need the TCP/IP protocol enabled in SQL Server Configuration Manager, otherwise the SQL Compare command line will not be able to connect.
    19. Save and select Build Triggering / Add New Trigger / VCS Trigger. This is where you tell TeamCity when it should initiate a build. Click Save.
    20. Now return to SQL Server Management Studio and make a schema change (e.g. add a new object) to your linked development database. A blue indicator will appear in the Object Explorer. Commit this change, typing in an appropriate check-in comment. All being good, within 60 seconds (a TeamCity default that can be changed) a build will be triggered.
    21. Click on Projects in TeamCity to get back to the overview screen. The build log will show you the console output, which is useful for troubleshooting any issues.

    That's it! You now have continuous integration on your database. In future posts I'll cover how you can generate and test the database creation script, the database upgrade script, and run database unit tests as part of your continuous integration script. If you have any trouble getting this up and running please let me know, either by commenting on this post or by emailing me directly.
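
    Putting steps 16 and 18 together, the build step TeamCity runs amounts to the following single command (shown here as you would type it in a console; the server and database names are the ones from this walkthrough, so substitute your own):

        "C:\Program Files (x86)\Red Gate\SQL Compare 10\sqlcompare.exe" /scripts1:. /server2:.\sql2008r2 /db2:RedGateCI /sync /verbose

    Here /scripts1:. treats the checked-out working folder as the source schema, and /sync makes the RedGateCI database match it.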

  • iOS: Using Jenkins for nightly internal builds (TestFlight), plus frequent client builds

    - by Amy Worrall
    I'm an iOS dev working for a small agency. I'm currently on a few smaller projects where I'm the only developer. We recently acquired a Jenkins server, but each project is left to fend for itself as to how to use it. I'd like to use it for making and distributing builds. My ideal would be:

    - Every commit is built as a single IPA that is placed in an HTTP-accessible location. (It only needs to keep the latest one, otherwise the disk would fill up; some of our apps are 500MB or more.)
    - Once a day it makes a build, signs it with our internal provisioning profile, tacks a build number onto the end of the version number, and sends it to our internal TestFlight team.
    - When manually prompted, it builds the latest commit, gives it a manually specified version number, signs it with the client's provisioning profile and sends it to TestFlight.

    I'm pretty new to Jenkins. The developer who set up the server is running it on one of our projects, so I know it has the right stuff installed to do Xcode builds. I believe he's only using it to run unit tests though, not to do any of the code signing, IPA creation or TestFlight stuff. So my questions:

    - I've listed three distinct kinds of build. How does Jenkins cope with that? I see there's a "build triggers" section in the config for a Jenkins project, but it doesn't seem to mention different types of build. Should I just set up multiple Jenkins projects, called "App X (continuous)", "App X (nightly)" and "App X (client)"?
    - How do I specify the provisioning profile through Jenkins? If there isn't a way, I guess I could make different configurations in the Xcode project…
    - Has anyone else used Jenkins to actually do the release (i.e. build and push to TestFlight) of beta builds of their apps? Is it a good idea? Or should I continue just doing it manually?
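
    For the TestFlight-bound build types, a free-style Jenkins job of that era typically boiled down to a shell build step like the sketch below. This is an illustration, not a confirmed setup: the scheme name, profile path and TestFlight tokens are placeholders, PackageApplication was the Xcode 4-era packaging tool, and testflightapp.com's upload API has since been retired.

        #!/bin/bash
        set -e

        # Build the app; WORKSPACE and BUILD_NUMBER are standard Jenkins variables.
        xcodebuild -scheme "AppX" -configuration Release -sdk iphoneos \
            CONFIGURATION_BUILD_DIR="$WORKSPACE/build"

        # Wrap the built .app into an IPA, embedding the chosen provisioning profile.
        xcrun -sdk iphoneos PackageApplication -v "$WORKSPACE/build/AppX.app" \
            -o "$WORKSPACE/build/AppX-$BUILD_NUMBER.ipa" \
            --embed "$HOME/profiles/internal.mobileprovision"

        # Upload the IPA to TestFlight (tokens supplied via Jenkins parameters).
        curl http://testflightapp.com/api/builds.json \
            -F file=@"$WORKSPACE/build/AppX-$BUILD_NUMBER.ipa" \
            -F api_token="$TF_API_TOKEN" \
            -F team_token="$TF_TEAM_TOKEN" \
            -F notes="Nightly build $BUILD_NUMBER"

    Splitting this into three jobs ("App X (continuous)", "App X (nightly)", "App X (client)") with different triggers and different profile/token parameters is one straightforward way to model the three build kinds.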

  • Building Python 3.2.3 on redhat 5: missing _posixsubprocess

    - by Oz123
    I am trying to build Python 3 on a RHEL 5.7 machine. I successfully managed to build Python 3.2.2 with:

        # Install required build dependencies
        yum install openssl-devel bzip2-devel expat-devel gdbm-devel readline-devel sqlite-devel

        # Fetch and extract the source. Please refer to http://www.python.org/download/releases
        # to ensure the latest source is used.
        wget http://www.python.org/ftp/python/3.2/Python-3.2.tar.bz2
        tar -xjf Python-3.2.tar.bz2
        cd Python-3.2

        # Configure the build with a prefix (install dir) of /opt/python3, compile, and install.
        ./configure --prefix=/opt/python3
        make

    But with Python 3.2.3 I am failing (?) with:

        Failed to build these modules: _posixsubprocess

    Is this a problem that should bother me? How do I build it? I found this patch, but it's not included in the Python 3.2.3 sources I obtained from the website... Applying the patch to my sources didn't solve the problem either. Here is the output from stderr:

        ~/tmp/Python-3.2.3 $ make > build.log
        ldd: warning: you do not have execution permission for `/usr/local/lib/libreadline.so'
        /usr/bin/ld: skipping incompatible /usr/local/lib/libreadline.so when searching for -lreadline
        /usr/bin/ld: skipping incompatible /usr/local/lib/libreadline.a when searching for -lreadline
        /home/oznahum/tmp/Python-3.2.3/Modules/_posixsubprocess.c: In function '_close_open_fd_range_safe':
        /home/oznahum/tmp/Python-3.2.3/Modules/_posixsubprocess.c:205: error: 'O_CLOEXEC' undeclared (first use in this function)
        /home/oznahum/tmp/Python-3.2.3/Modules/_posixsubprocess.c:205: error: (Each undeclared identifier is reported only once
        /home/oznahum/tmp/Python-3.2.3/Modules/_posixsubprocess.c:205: error: for each function it appears in.)
        /usr/bin/ld: skipping incompatible /usr/local/lib/libz.so when searching for -lz
        /usr/bin/ld: skipping incompatible /usr/local/lib/libz.so when searching for -lz

    and the posixsubprocess-related lines from the build log:

        ~/tmp/Python-3.2.3 $ grep posix build.log
        gcc -pthread -fno-strict-aliasing -DNDEBUG -g -fwrapv -O3 -Wall -Wstrict-prototypes -I. -IInclude -I./Include -DPy_BUILD_CORE -c ./Modules/posixmodule.c -o Modules/posixmodule.o
        ar rc libpython3.2m.a Modules/_threadmodule.o Modules/signalmodule.o Modules/posixmodule.o Modules/errnomodule.o Modules/pwdmodule.o Modules/_sre.o Modules/_codecsmodule.o Modules/_weakref.o Modules/_functoolsmodule.o Modules/operator.o Modules/_collectionsmodule.o Modules/itertoolsmodule.o Modules/_localemodule.o Modules/_iomodule.o Modules/iobase.o Modules/fileio.o Modules/bytesio.o Modules/bufferedio.o Modules/textio.o Modules/stringio.o Modules/zipimport.o Modules/symtablemodule.o Modules/xxsubtype.o
        building '_posixsubprocess' extension
        gcc -pthread -fPIC -fno-strict-aliasing -DNDEBUG -g -fwrapv -O3 -Wall -Wstrict-prototypes -IInclude -I/home/oznahum/localroot/include -I. -I./Include -I/usr/local/include -I/home/oznahum/tmp/Python-3.2.3 -c /home/oznahum/tmp/Python-3.2.3/Modules/_posixsubprocess.c -o build/temp.linux-x86_64-3.2/home/oznahum/tmp/Python-3.2.3/Modules/_posixsubprocess.o _posixsubprocess
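
    The compile error is the actual problem: RHEL 5's glibc/kernel headers predate O_CLOEXEC, so Modules/_posixsubprocess.c cannot compile there. One unofficial workaround (a sketch only; this is not an upstream patch, and defining the flag as 0 merely drops the close-on-exec optimization so the module still builds):

        /* Near the top of Modules/_posixsubprocess.c, after its #includes. */
        #include <fcntl.h>

        #ifndef O_CLOEXEC
        /* Old headers (e.g. RHEL 5) do not define O_CLOEXEC. Defining it as 0
         * makes the open() flag a no-op, so the code compiles and descriptors
         * are simply not marked close-on-exec at open time. */
        #define O_CLOEXEC 0
        #endif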

  • debian/rules error "No rule to make target"

    - by Hairo
    I'm having some problems creating a .deb file with debuild. After reading some tutorials I managed to make the file, but I always get this error:

        make: *** No rule to make target «build». Stop.
        dpkg-buildpackage: failure: debian/rules build gave error exit status 2
        debuild: fatal error at line 1329:
        dpkg-buildpackage -rfakeroot -D -us -uc -b failed

    Any help?? This is my debian/rules file:

        #!/usr/bin/make -f
        # -*- makefile -*-
        # Sample debian/rules that uses debhelper.
        # This file was originally written by Joey Hess and Craig Small.
        # As a special exception, when this file is copied by dh-make into a
        # dh-make output file, you may use that output file without restriction.
        # This special exception was added by Craig Small in version 0.37 of dh-make.

        # Uncomment this to turn on verbose mode.
        #export DH_VERBOSE=1

        build-stamp: configure-stamp
                dh_testdir
                touch build-stamp

        clean:
                dh_testdir
                dh_testroot
                rm -f build-stamp configure-stamp
                dh_clean

        install: build
                dh_testdir
                dh_testroot
                dh_clean -k
                dh_installdirs
                $(MAKE) install DESTDIR=$(CURDIR)/debian/pycounter
                mkdir -p $(CURDIR)/debian/pycounter
                # Copy .py files
                cp pycounter.py $(CURDIR)/debian/pycounter/opt/extras.ubuntu.com/pycounter/pycounter.py
                cp prefs.py $(CURDIR)/debian/pycounter/opt/extras.ubuntu.com/pycounter/prefs.py
                # desktop copyright and others (not complete, check)
                cp extras-pycounter.desktop $(CURDIR)/debian/pycounter/usr/share/applications/extras-pycounter.desktop
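
    The message is literal: dpkg-buildpackage runs debian/rules build, and this rules file only defines build-stamp, never a build target (configure-stamp has no rule either, so it would fail next). A minimal sketch of the missing targets, assuming the stamp-file pattern above is kept:

        # The entry point dpkg-buildpackage invokes; delegate to the stamp file.
        build: build-stamp

        # build-stamp depends on configure-stamp, so give it a rule too.
        configure-stamp:
                dh_testdir
                touch configure-stamp

        # Mark the non-file targets as phony so stale files can't mask them.
        .PHONY: build clean install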

  • Best method to organize/manage dependencies in the VCS within a large solution

    - by SnOrfus
    A simple scenario: two projects are in version control:

    - The application
    - The test(s)

    A significant number of checkins are made to the application daily. CI builds and runs all of the automation nightly. In order to write and/or run tests you need to have built the application (to reference/load instrumented assemblies). Now, consider the application to be massive, such that building it is prohibitive in time (an entire day to compile). The obvious side effect is that once you've performed a build locally, it is immediately inconsistent with latest. For instance: if I were to sync with latest and open up one of the test projects, it would not build locally until I built the application. The same applies when syncing to another branch/build/tag. So, in order to even start working, I need to wait a day to build the application locally so that the assemblies can be loaded; and even then those assemblies wouldn't be latest. How do you organize the repository or (ideally) your development environment such that you can continually develop tests against whatever the current build is, or a given specific build, while minimizing building the application as much as possible?

  • sshd running but no PID file

    - by dunxd
    I recently started using monit to monitor the status of sshd on my CentOS 5.4 server. This works fine, but every so often monit reports that sshd is no longer running. This isn't true: I am still able to log in to the server via ssh. However, I note the following:

    - There is no longer any PID file at /var/run/sshd.pid; after a reboot this file exists. Once it is gone, restarting sshd via service sshd restart does not create the PID file.
    - sudo service sshd status reports "openssh-daemon is stopped". Again, restarting sshd does not change this, but a reboot does.
    - sudo service sshd stop reports "failed", presumably because of the missing PID file.

    Any idea what is going on?

    Update: sudo netstat -lptun gives the following output relating to port 22:

        tcp        0      0 :::22        :::*        LISTEN        20735/sshd

    Killing the process with this PID as suggested by @Henry and then starting sshd via service results in service sshd status recognising the process by PID again. I would still like to understand this better. The RPM verify suggested by a couple of answerers shows this:

        sudo rpm -vV openssh openssh-server openssh-clients | grep 'S\.5'
        S.5....T c /etc/pam.d/sshd
        S.5....T c /etc/ssh/sshd_config

    /etc/pam.d/sshd has the following contents:

        #%PAM-1.0
        auth       include      system-auth
        account    required     pam_nologin.so
        account    include      system-auth
        password   include      system-auth
        session    optional     pam_keyinit.so force revoke
        session    include      system-auth
        #session   required     pam_loginuid.so

    Should that last line be commented out?

    Update: Here's the output of @YannickGirouard's script:

        $ sudo ./sshd_test
        Searching for the process listening on port 22...
        Found the following PID: 21330
        Command line for PID 21330: /usr/sbin/sshd
        Listing process(es) relating to PID 21330:
        UID        PID  PPID  C STIME TTY          TIME CMD
        root     21330     1  0 14:04 ?        00:00:00 /usr/sbin/sshd
        Listing RPM information about openssh packages:
        Name        : openssh                      Relocations: (not relocatable)
        Version     : 4.3p2                        Vendor: CentOS
        Release     : 72.el5_7.5                   Build Date: Tue 30 Aug 2011 12:34:14 AM BST
        Install Date: Sun 06 Nov 2011 12:50:57 AM GMT    Build Host: builder10.centos.org
        Group       : Applications/Internet        Source RPM: openssh-4.3p2-72.el5_7.5.src.rpm
        Size        : 745390                       License: BSD
        Signature   : DSA/SHA1, Fri 02 Sep 2011 01:13:01 AM BST, Key ID a8a447dce8562897
        URL         : http://www.openssh.com/portable.html
        Summary     : The OpenSSH implementation of SSH protocol versions 1 and 2
        ------------------------------------------------------
        Name        : openssh-clients              Relocations: (not relocatable)
        Version     : 4.3p2                        Vendor: CentOS
        Release     : 72.el5_7.5                   Build Date: Tue 30 Aug 2011 12:34:14 AM BST
        Install Date: Sun 06 Nov 2011 12:51:04 AM GMT    Build Host: builder10.centos.org
        Group       : Applications/Internet        Source RPM: openssh-4.3p2-72.el5_7.5.src.rpm
        Size        : 871132                       License: BSD
        Signature   : DSA/SHA1, Fri 02 Sep 2011 01:13:01 AM BST, Key ID a8a447dce8562897
        URL         : http://www.openssh.com/portable.html
        Summary     : The OpenSSH client applications
        ------------------------------------------------------
        Name        : openssh-server               Relocations: (not relocatable)
        Version     : 4.3p2                        Vendor: CentOS
        Release     : 72.el5_7.5                   Build Date: Tue 30 Aug 2011 12:34:14 AM BST
        Install Date: Sun 06 Nov 2011 12:51:04 AM GMT    Build Host: builder10.centos.org
        Group       : System Environment/Daemons   Source RPM: openssh-4.3p2-72.el5_7.5.src.rpm
        Size        : 492478                       License: BSD
        Signature   : DSA/SHA1, Fri 02 Sep 2011 01:13:01 AM BST, Key ID a8a447dce8562897
        URL         : http://www.openssh.com/portable.html
        Summary     : The OpenSSH server daemon
        ------------------------------------------------------

    However, I've since got things working by killing the process and starting afresh, as suggested by @Henry below, so perhaps I am no longer seeing the same thing. I will try again if I see the issue after the next reboot.

    Update (14 March): Monit alerted me that sshd had disappeared, and again I am able to ssh onto the server. So now I can run the script:

        $ sudo ./sshd_test
        Searching for the process listening on port 22...
        Found the following PID: 2208
        Command line for PID 2208: /usr/sbin/sshd
        Listing process(es) relating to PID 2208:
        UID        PID  PPID  C STIME TTY          TIME CMD
        root      2208     1  0 Mar13 ?        00:00:00 /usr/sbin/sshd
        root      1885  2208  0 21:50 ?        00:00:00 sshd: dunx [priv]
        Listing RPM information about openssh packages:
        [openssh, openssh-clients and openssh-server RPM information identical to the output above]

    Again, when I look for /var/run/sshd.pid I don't find it:

        $ cat /var/run/sshd.pid
        cat: /var/run/sshd.pid: No such file or directory
        $ sudo netstat -anp | grep sshd
        tcp        0      0 0.0.0.0:22    0.0.0.0:*    LISTEN    2208/sshd
        $ sudo kill 2208
        $ sudo service sshd start
        Starting sshd: [ OK ]
        $ cat /var/run/sshd.pid
        3794
        $ sudo service sshd status
        openssh-daemon (pid 3794) is running...

    Is it possible that sshd is restarting and not creating a pidfile for some reason?
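
    Since the init script's status and stop actions key off the pidfile, one way to narrow this down (a sketch; the paths are the CentOS 5 defaults, and the audit rule assumes auditd is running) is to check which pidfile sshd is configured to write and then watch for whatever deletes it:

        # Which pidfile is sshd configured to use? (Default: /var/run/sshd.pid)
        grep -i '^[[:space:]]*PidFile' /etc/ssh/sshd_config

        # Compare the listener on port 22 with the pidfile contents.
        sudo netstat -lptn | grep ':22 '
        cat /var/run/sshd.pid

        # Audit writes/attribute changes on the pidfile to catch what removes it.
        sudo auditctl -w /var/run/sshd.pid -p wa -k sshd-pidfile
        sudo ausearch -k sshd-pidfile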

  • ASMLib

    - by wcoekaer
    Oracle ASMLib on Linux has been a topic of discussion a number of times since it was released way back in 2004. There is a lot of confusion around it and certainly a lot of misinformation out there for no good reason. Let me try to give a bit of history around Oracle ASMLib.

    Oracle ASMLib was introduced at the time Oracle released Oracle Database 10g R1. 10gR1 introduced a very cool and important new feature called Oracle ASM (Automatic Storage Management). A very simplistic description would be that this is a very sophisticated volume manager for Oracle data. Give your devices directly to the ASM instance and we manage the storage for you: clustered, highly available, redundant, performant, etc... We recommend using Oracle ASM for all database deployments, single instance or clustered (RAC).

    The ASM instance manages the storage, and every Oracle server process opens and operates on the storage devices like it would open and operate on regular datafiles or raw devices. So by default, from 10gR1 up to today, we do not interact differently with ASM-managed block devices than we did before with a datafile mapped to a raw device. All of this is without ASMLib, so ignore that one for now. Standard Oracle on any platform that we support (Linux, Windows, Solaris, AIX, ...) does it the exact same way. You start an ASM instance, it handles storage management, and every database instance uses and opens that storage and reads/writes from/to it. There are no extra pieces of software needed, including on Linux. ASM is fully functional and self-contained without any other components.

    In order for the admin to provide a raw device to ASM or to the database, it has to have persistent device naming. If you booted up a server where a raw disk was named /dev/sdf and you gave it to ASM (or even just created a tablespace without ASM on that device, with datafile '/dev/sdf'), and next time you boot up that device is now /dev/sdg, you end up with an error. Just like you can't simply change datafile names, you can't change device filenames without telling the database, or ASM. Persistent device naming on Linux, especially back in those days, was, to say it bluntly, a nightmare. In fact there were a number of issues (dating back to 2004):

    - Linux async IO wasn't pretty
    - persistent device naming, including permissions (devices had to be owned by oracle and the dba group), was very, very difficult to manage
    - system resource usage in terms of open file descriptors

    So given the above, we tried to find a way to make this easier on the admins, in many ways similar to why we started working on OCFS a few years earlier: how can we make life easier for the admins on Linux?

    A feature of Oracle ASM is the ability for third parties to write an extension using what's called ASMLib. It is possible for any third-party OS or storage vendor to write a library using a specific Oracle-defined interface that gets used by the ASM instance and by the database instance when available. This interface offers two components:

    - an IO interface: allow any IO to the devices to go through ASMLib
    - device discovery: implement an external way of discovering and labeling devices to provide to ASM and the Oracle database instance

    This is similar to a library that a number of companies have implemented over many years called libODM (Oracle Disk Manager). ODM was specified many years before we introduced ASM and allowed third-party vendors to implement their own IO routines, so that the database would use this library if installed and make use of the library's open/read/write/close routines instead of the standard OS interfaces. PolyServe back in the day used this to optimize their storage solution, and Veritas used (and I believe still uses) this for their filesystem. It basically allowed, in particular, filesystem vendors to write libraries that could optimize access to their storage or filesystem. So ASMLib was not something new; it was basically based on the same model: you have libodm for just database access, and you have libasm for ASM/database access.

    Since this library interface existed, we decided to do a reference implementation on Linux. We wrote an ASMLib for Linux that could be used on any Linux platform, and other vendors could see how this worked and potentially implement their own solution. As I mentioned earlier, ASMLib and ODMLib are libraries for third-party extensions. ASMLib for Linux, since it was a reference implementation, implemented both interfaces: the storage discovery part and the IO part. There are two components:

    - Oracle ASMLib: the userspace library with config tools (a shared object and some scripts)
    - oracleasm.ko: a kernel module that implements the ASM device under /dev/oracleasm/*

    The userspace library is a binary-only module, since it links with and contains Oracle header files, but it is generic: we have only one ASM library for the various Linux platforms. This library is opened by Oracle ASM and by Oracle database processes, and it interacts with the OS through the ASM device (/dev/asm). It can install on Oracle Linux, on SuSE SLES, on Red Hat RHEL... The library itself doesn't actually care much about the OS version; the kernel module and device do.

    The support tools are simple scripts that allow the admin to label devices and scan for disks and devices. This way you can create an ASM disk label foo on what is currently /dev/sdf... So if /dev/sdf disappears and next time comes up as /dev/sdg, we just scan for the label foo, discover it as /dev/sdg, and life goes on without any worry. Also, when the database needs access to the device, we don't have to worry about file permissions or the like; it will all be taken care of. So it's a convenience thing.

    The kernel module oracleasm.ko is a Linux kernel module/device driver. It implements a device /dev/oracleasm/*, and any and all IO through ASMLib goes through /dev/oracleasm. This kernel module is obviously a very specific Oracle-related device driver, but it was released under the GPL v2 so anyone could easily build it for their Linux distribution kernels.

    Advantages of using ASMLib:

    - A good async IO interface for the database; the entire IO interface is based on an optimal ASYNC model for performance
    - A single file descriptor per Oracle process, not one per device or datafile per process, reducing open-filehandle overhead
    - Device scanning and labeling built in, so you do not have to worry about messing with udev or devlabel, permissions or the like, which can be very complex and error prone

    Just like with OCFS and OCFS2, each kernel version (major or minor) has to get a new version of the device drivers. We started out building the oracleasm kernel module rpms for many distributions: SLES (in fact, in the early days, even for this thing called United Linux) and RHEL. The driver didn't make sense to push into upstream Linux because it's unique and specific to the Oracle database. As it takes a huge effort in terms of build infrastructure, QA and release management to build kernel modules for every architecture, every Linux distribution and every major and minor version, we worked with the vendors to get them to add this tiny kernel module (a 60k source code file) to their infrastructure. The folks at SuSE understood this was good for them, their customers and us, and added it to SLES. So every build coming from SuSE for SLES contains the oracleasm.ko module. We weren't as successful with other vendors, so for quite some time we continued to build it for RHEL, and of course, as we introduced Oracle Linux at the end of 2006, also for Oracle Linux. With Oracle Linux it became easy for us, because we just added the code to our build system, and as we churned out Oracle Linux kernels, whether for a public release or for customers that needed a one-off fix and also used ASMLib, we didn't have to do any extra work; it was all nicely integrated.

    With the introduction of Oracle Linux's Unbreakable Enterprise Kernel and our interest in being able to exploit ASMLib more, we started working on a very exciting project called Data Integrity. Oracle (Martin Petersen in particular) worked for many years with the T10 standards committee and storage vendors and implemented Linux kernel support for DIF/DIX, data protection in the Linux kernel. (Note, for those who wonder: yes, it's all in mainline Linux and under the GPL.) This basically gave us all the features in the Linux kernel to checksum a data block and send it to the storage adapter, which can then validate that block and checksum in firmware before it sends it over the wire to the storage array, which can then do another checksum, and on to the actual disk, which does a final validation before writing the block to the physical media. What was missing was the ability for a userspace application (read: the Oracle RDBMS) to write a block which then has checksumming and validation all the way down to the disk: application to disk. Because we have ASMLib, we had an entry into the Linux kernel, and Martin added support in ASMLib (kernel driver + userspace) for this functionality. Now, this is all based on relatively current Linux kernels; the oracleasm kernel module depends on the main kernel having support for it. Thanks to UEK, and our ability to ship a more modern, current version of the Linux kernel, we were able to introduce this feature into ASMLib for Linux from Oracle. Combined with the fact that we build the ASM kernel module when we build every single UEK kernel, this allowed us to continue improving ASMLib and provide it to our customers.

    So today, we (Oracle) provide Oracle ASMLib for Oracle Linux, and in particular on the Unbreakable Enterprise Kernel. We did the build/testing/delivery of ASMLib for RHEL until RHEL5, but as of RHEL6 we decided that it was too much effort for us to also maintain all the build and test environments for RHEL; we did not have the ability to use the latest kernel features to introduce the Data Integrity features, and we didn't want to end up with multiple versions of ASMLib as maintained by us. SuSE SLES still builds and ships the oracleasm module and does all the work, and Red Hat is certainly welcome to do the same. They don't have to rebuild the userspace library; it's really about the kernel module.

    And finally, to re-iterate a few important things:

    - Oracle ASM does not in any way require ASMLib to function completely. ASMLib is a small set of extensions, in particular to make device management easier; there are no extra features exposed through Oracle ASM with ASMLib enabled or disabled.
    - Customers often confuse ASMLib with ASM. Again: ASM exists on every Oracle-supported OS and on every supported Linux OS (SLES, RHEL, OL) without ASMLib.
    - The Oracle ASMLib userspace library is available from OTN, and the kernel module is shipped with OL/UEK for every build, and by SuSE for SLES for every one of their builds.
    - The ASMLib kernel module was built by us for RHEL4 and RHEL5, but we do not build it for RHEL6, nor for the OL6 RHCK kernel; only for UEK.
    - ASMLib for Linux is/was a reference implementation for any third-party vendor to be able to offer, if they want to, their own version for their own OS or storage.
    - ASMLib as provided by Oracle for Linux continues to be enhanced and to evolve, and for the kernel module we use UEK as the base OS kernel.

    Hope this helps.
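
    To make the labeling convenience concrete, the admin-facing workflow looks roughly like this (a sketch using the stock oracleasm tools; the disk name FOO and the device names are examples):

        # One-time setup: set the owner/group and boot-time scan behaviour.
        sudo /usr/sbin/oracleasm configure -i

        # Label a device; ASM finds it by the label, not the /dev name.
        sudo /usr/sbin/oracleasm createdisk FOO /dev/sdf

        # After a reboot where /dev/sdf has become /dev/sdg:
        sudo /usr/sbin/oracleasm scandisks
        sudo /usr/sbin/oracleasm listdisks    # still reports FOO

        # The database opens the stable path regardless of the /dev name:
        ls -l /dev/oracleasm/disks/FOO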

  • Tree Surgeon 2.0 - The future on the T4 Express

    - by Malcolm Anderson
    If you've never been a fan of Tree Surgeon (http://treesurgeon.codeplex.com/) then skip this post. However, if you have been, there have been some interesting developments over the last couple of years. The biggest one is T4. Recently Bill Simser wrote a detailed post about the potential future of Tree Surgeon, called "Tree Surgeon - Alive and Kicking or Dead and Buried". He raised the question:

        Times have changed. Since that last release in 2008 so much has changed for .NET developers. The question is, today is the project still viable? Do we still need a tool to generate a project tree given that we have things like scaffolding systems, NuGet, and T4 templates. Or should we just give the project its rightful and respectful send off as it's had a good life and has outlived its usefulness.

    For myself, the answer is: keep it. I've spent the last couple of years doing agile engineering coaching and architecture, and from my experience I can tell you there are a lot of shops out there that would benefit from having Tree Surgeon as a viable product. Many would benefit simply from having the software engineering information embedded in the Tree Surgeon site floating around their conversations. Little things like:

    - Keep all of the software needed to run the build with the build, in the version control system.
    - Have your developers and the build system use the same build.
    - Have a one-touch build.
    - Separate your code from your interface.
    - Put unit tests in first, not last.

    I've seen companies with great developers suffer from the problems that naturally come from builds taking 3 and 4 hours to run. It takes work to get that build down to 10 minutes, but the benefits are always worth it. Tree Surgeon gives you a leg up by starting you off with a project that you can drop into your Continuous Integration system right out of the box. Well, it used to be right out of the box. Today you have to play with the project to make it work for you, but even with the issues (it hasn't been updated since 2008) it still gives you a framework, with logical separations, that you can build from. If you have used Tree Surgeon in the past, take a few minutes and drop a comment about what difference it made in your development style, and what you are doing differently today because of it.

  • Developing GLSL Shaders?

    - by skln
    I want to create shaders, but I need a tool to create them and see the visual result before I put them into my game, so I can tell whether a problem is in my game or in the shader I created. I've looked at some tools like RenderMonkey and OpenGL Shader Designer. From what I recall of RenderMonkey, it had a way to easily define your own attributes (now "in" variables for vertex shaders as of GLSL 330), though I can't remember to what extent. Shader Designer requires a plugin, which I didn't even bother looking at creating because it's an external process and plugin. Are there any tools out there that support a scripting language, where I could easily provide specific input such as float movement = sin(elapsedTime()); and then declare in float movement; in the vertex shader? It'd be cool if anyone could share how they develop shaders: whether they just code away and then plug the shader into their game, hoping to get the result they wanted.
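
    For reference, the wiring being described looks like this in GLSL 330; the main design choice is whether the app-computed value arrives per-frame as a uniform (the usual choice for something like sin(elapsedTime()) computed on the CPU) or per-vertex as an "in" attribute. A sketch, with names taken from the question:

        #version 330

        in vec3 position;        // per-vertex attribute supplied by the application
        uniform float movement;  // per-frame value, e.g. sin(elapsedTime()) computed CPU-side

        void main()
        {
            // Offset each vertex vertically by the app-supplied value.
            gl_Position = vec4(position.x, position.y + movement, position.z, 1.0);
        }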

  • Maven. How to include a specific folder or file when assembling a project, depending on whether it is a dev or production build?

    - by user563588
    Using the maven-assembly-plugin:

        <plugin>
          <artifactId>maven-assembly-plugin</artifactId>
          <version>2.1</version>
          <configuration>
            <descriptors>
              <descriptor>descriptor.xml</descriptor>
            </descriptors>
            <finalName>xxx-impl-${pom.version}</finalName>
            <outputDirectory>target/assembly</outputDirectory>
            <workDirectory>target/assembly/work</workDirectory>
          </configuration>
        </plugin>

    In the descriptor.xml file we can specify:

        <fileSets>
          <fileSet>
            <directory>src/install</directory>
            <outputDirectory>/</outputDirectory>
          </fileSet>
        </fileSets>

    Is it possible to include a specific file from this folder or a sub-folder depending on the profile? Or some other way... Like this:

        <profiles>
          <profile>
            <id>dev</id>
            <activation>
              <activeByDefault>false</activeByDefault>
            </activation>
            <build>
              <resources>
                <resource>
                  <directory>src/install/dev</directory>
                  <includes>
                    <include>**/*</include>
                  </includes>
                </resource>
              </resources>
            </build>
          </profile>
          <profile>
            <id>prod</id>
            <build>
              <resources>
                <resource>
                  <directory>src/install/prod</directory>
                  <includes>
                    <include>**/*</include>
                  </includes>
                </resource>
              </resources>
            </build>
          </profile>
        </profiles>

    But this puts the resources in the jar when packaging, whereas we need them in the zip produced by the assembly, as I already mentioned above :( Thanks!
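
    One common way to get profile-dependent content into the assembly itself rather than the jar (a sketch, not the only approach; the property name install.dir is made up) is to have each profile set a property and let the descriptor interpolate it, since assembly descriptors resolve ${...} expressions from the POM:

        <!-- pom.xml: each profile points the property at a different folder. -->
        <profiles>
          <profile>
            <id>dev</id>
            <properties>
              <install.dir>src/install/dev</install.dir>
            </properties>
          </profile>
          <profile>
            <id>prod</id>
            <properties>
              <install.dir>src/install/prod</install.dir>
            </properties>
          </profile>
        </profiles>

        <!-- descriptor.xml: the fileSet then resolves per profile. -->
        <fileSets>
          <fileSet>
            <directory>${install.dir}</directory>
            <outputDirectory>/</outputDirectory>
          </fileSet>
        </fileSets>

    Running mvn -Pprod package would then assemble the zip with the production files.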

  • Netbeans / persistence API error

    - by Danny
    I am following this tutorial, I'm using netbeans 6.5.1 http://www.netbeans.org/kb/docs/java/gui-db-custom.html When I get to the part where I create the "new entity class from database", which is in the "Customizing the Master/Detail View" section of the tutoiral. I can't ever compile because I get this error in the task list (and I get a runtime error when I run)... Atleast I think these two are related. Error Named queries can be defined only on an Entity or MappedSuperclass class. Countries.java 27 C:/Users/Danny/dev/NetBeansProjects/Test/src/test Error Named queries can be defined only on an Entity or MappedSuperclass class. Products.java 28 C:/Users/Danny/dev/NetBeansProjects/Test/src/test note that all I do is do newentity classes from database, and do exactly as the tutorial says. I stopped on the tutoral at the "Adding Dialog Boxes" heading, because the tutorial writer says the application should be "partially functional" which makes me think it should atleast RUN. I haven't edited the generated code outside of what I've been instructed to do. This is the output produced by the netbeans output console: run: Jun 23, 2009 3:03:23 PM org.jdesktop.application.Application$1 run SEVERE: Application class test.TestApp failed to launch javax.persistence.PersistenceException: No Persistence provider for EntityManager named MyBusinessRecordsPU: Provider named oracle.toplink.essentials.PersistenceProvider threw unexpected exception at create EntityManagerFactory: oracle.toplink.essentials.exceptions.PersistenceUnitLoadingException Local Exception Stack: Exception [TOPLINK-30005] (Oracle TopLink Essentials - 2.0.1 (Build b09d-fcs (12/06/2007))): oracle.toplink.essentials.exceptions.PersistenceUnitLoadingException Exception Description: An exception was thrown while searching for persistence archives with ClassLoader: sun.misc.Launcher$AppClassLoader@11b86e7 Internal Exception: javax.persistence.PersistenceException: Exception [TOPLINK-28018] (Oracle TopLink Essentials - 2.0.1 (Build b09d-fcs (12/06/2007))): oracle.toplink.essentials.exceptions.EntityManagerSetupException Exception Description: predeploy for PersistenceUnit [MyBusinessRecordsPU] failed. Internal Exception: Exception [TOPLINK-7244] (Oracle TopLink Essentials - 2.0.1 (Build b09d-fcs (12/06/2007))): oracle.toplink.essentials.exceptions.ValidationException Exception Description: An incompatible mapping has been encountered between [class test.Products] and [class test.Orders]. This usually occurs when the cardinality of a mapping does not correspond with the cardinality of its backpointer. 
at oracle.toplink.essentials.exceptions.PersistenceUnitLoadingException.exceptionSearchingForPersistenceResources(PersistenceUnitLoadingException.java:143) at oracle.toplink.essentials.ejb.cmp3.EntityManagerFactoryProvider.createEntityManagerFactory(EntityManagerFactoryProvider.java:169) at javax.persistence.Persistence.createEntityManagerFactory(Persistence.java:110) at javax.persistence.Persistence.createEntityManagerFactory(Persistence.java:83) at test.TestView.initComponents(TestView.java:337) at test.TestView.(TestView.java:39) at test.TestApp.startup(TestApp.java:19) at org.jdesktop.application.Application$1.run(Application.java:171) at java.awt.event.InvocationEvent.dispatch(InvocationEvent.java:209) at java.awt.EventQueue.dispatchEvent(EventQueue.java:597) at java.awt.EventDispatchThread.pumpOneEventForFilters(EventDispatchThread.java:269) at java.awt.EventDispatchThread.pumpEventsForFilter(EventDispatchThread.java:184) at java.awt.EventDispatchThread.pumpEventsForHierarchy(EventDispatchThread.java:174) at java.awt.EventDispatchThread.pumpEvents(EventDispatchThread.java:169) at java.awt.EventDispatchThread.pumpEvents(EventDispatchThread.java:161) at java.awt.EventDispatchThread.run(EventDispatchThread.java:122) Caused by: javax.persistence.PersistenceException: Exception [TOPLINK-28018] (Oracle TopLink Essentials - 2.0.1 (Build b09d-fcs (12/06/2007))): oracle.toplink.essentials.exceptions.EntityManagerSetupException Exception Description: predeploy for PersistenceUnit [MyBusinessRecordsPU] failed. Internal Exception: Exception [TOPLINK-7244] (Oracle TopLink Essentials - 2.0.1 (Build b09d-fcs (12/06/2007))): oracle.toplink.essentials.exceptions.ValidationException Exception Description: An incompatible mapping has been encountered between [class test.Products] and [class test.Orders]. This usually occurs when the cardinality of a mapping does not correspond with the cardinality of its backpointer. at oracle.toplink.essentials.internal.ejb.cmp3.EntityManagerSetupImpl.predeploy(EntityManagerSetupImpl.java:643) at oracle.toplink.essentials.internal.ejb.cmp3.JavaSECMPInitializer.callPredeploy(JavaSECMPInitializer.java:171) at oracle.toplink.essentials.internal.ejb.cmp3.JavaSECMPInitializer.initPersistenceUnits(JavaSECMPInitializer.java:239) at oracle.toplink.essentials.internal.ejb.cmp3.JavaSECMPInitializer.initialize(JavaSECMPInitializer.java:255) at oracle.toplink.essentials.ejb.cmp3.EntityManagerFactoryProvider.createEntityManagerFactory(EntityManagerFactoryProvider.java:155) ... 14 more Caused by: Exception [TOPLINK-28018] (Oracle TopLink Essentials - 2.0.1 (Build b09d-fcs (12/06/2007))): oracle.toplink.essentials.exceptions.EntityManagerSetupException Exception Description: predeploy for PersistenceUnit [MyBusinessRecordsPU] failed. Internal Exception: Exception [TOPLINK-7244] (Oracle TopLink Essentials - 2.0.1 (Build b09d-fcs (12/06/2007))): oracle.toplink.essentials.exceptions.ValidationException Exception Description: An incompatible mapping has been encountered between [class test.Products] and [class test.Orders]. This usually occurs when the cardinality of a mapping does not correspond with the cardinality of its backpointer. at oracle.toplink.essentials.exceptions.EntityManagerSetupException.predeployFailed(EntityManagerSetupException.java:228) ... 
19 more Caused by: Exception [TOPLINK-7244] (Oracle TopLink Essentials - 2.0.1 (Build b09d-fcs (12/06/2007))): oracle.toplink.essentials.exceptions.ValidationException Exception Description: An incompatible mapping has been encountered between [class test.Products] and [class test.Orders]. This usually occurs when the cardinality of a mapping does not correspond with the cardinality of its backpointer. at oracle.toplink.essentials.exceptions.ValidationException.invalidMapping(ValidationException.java:1069) at oracle.toplink.essentials.internal.ejb.cmp3.metadata.MetadataValidator.throwInvalidMappingEncountered(MetadataValidator.java:275) at oracle.toplink.essentials.internal.ejb.cmp3.metadata.accessors.OneToManyAccessor.process(OneToManyAccessor.java:161) at oracle.toplink.essentials.internal.ejb.cmp3.metadata.accessors.RelationshipAccessor.processRelationship(RelationshipAccessor.java:290) at oracle.toplink.essentials.internal.ejb.cmp3.metadata.MetadataProject.processRelationshipDescriptors(MetadataProject.java:579) at oracle.toplink.essentials.internal.ejb.cmp3.metadata.MetadataProject.process(MetadataProject.java:512) at oracle.toplink.essentials.internal.ejb.cmp3.metadata.MetadataProcessor.processAnnotations(MetadataProcessor.java:246) at oracle.toplink.essentials.ejb.cmp3.persistence.PersistenceUnitProcessor.processORMetadata(PersistenceUnitProcessor.java:370) at oracle.toplink.essentials.internal.ejb.cmp3.EntityManagerSetupImpl.predeploy(EntityManagerSetupImpl.java:607) ... 18 more The following providers: oracle.toplink.essentials.ejb.cmp3.EntityManagerFactoryProvider Returned null to createEntityManagerFactory. at javax.persistence.Persistence.createEntityManagerFactory(Persistence.java:154) at javax.persistence.Persistence.createEntityManagerFactory(Persistence.java:83) at test.TestView.initComponents(TestView.java:337) at test.TestView.(TestView.java:39) at test.TestApp.startup(TestApp.java:19) at org.jdesktop.application.Application$1.run(Application.java:171) at java.awt.event.InvocationEvent.dispatch(InvocationEvent.java:209) at java.awt.EventQueue.dispatchEvent(EventQueue.java:597) at java.awt.EventDispatchThread.pumpOneEventForFilters(EventDispatchThread.java:269) at java.awt.EventDispatchThread.pumpEventsForFilter(EventDispatchThread.java:184) at java.awt.EventDispatchThread.pumpEventsForHierarchy(EventDispatchThread.java:174) at java.awt.EventDispatchThread.pumpEvents(EventDispatchThread.java:169) at java.awt.EventDispatchThread.pumpEvents(EventDispatchThread.java:161) at java.awt.EventDispatchThread.run(EventDispatchThread.java:122) Exception in thread "AWT-EventQueue-0" java.lang.Error: Application class test.TestApp failed to launch at org.jdesktop.application.Application$1.run(Application.java:177) at java.awt.event.InvocationEvent.dispatch(InvocationEvent.java:209) at java.awt.EventQueue.dispatchEvent(EventQueue.java:597) at java.awt.EventDispatchThread.pumpOneEventForFilters(EventDispatchThread.java:269) at java.awt.EventDispatchThread.pumpEventsForFilter(EventDispatchThread.java:184) at java.awt.EventDispatchThread.pumpEventsForHierarchy(EventDispatchThread.java:174) at java.awt.EventDispatchThread.pumpEvents(EventDispatchThread.java:169) at java.awt.EventDispatchThread.pumpEvents(EventDispatchThread.java:161) at java.awt.EventDispatchThread.run(EventDispatchThread.java:122) Caused by: javax.persistence.PersistenceException: No Persistence provider for EntityManager named MyBusinessRecordsPU: Provider named oracle.toplink.essentials.PersistenceProvider threw 
unexpected exception at create EntityManagerFactory: oracle.toplink.essentials.exceptions.PersistenceUnitLoadingException Local Exception Stack: Exception [TOPLINK-30005] (Oracle TopLink Essentials - 2.0.1 (Build b09d-fcs (12/06/2007))): oracle.toplink.essentials.exceptions.PersistenceUnitLoadingException Exception Description: An exception was thrown while searching for persistence archives with ClassLoader: sun.misc.Launcher$AppClassLoader@11b86e7 Internal Exception: javax.persistence.PersistenceException: Exception [TOPLINK-28018] (Oracle TopLink Essentials - 2.0.1 (Build b09d-fcs (12/06/2007))): oracle.toplink.essentials.exceptions.EntityManagerSetupException Exception Description: predeploy for PersistenceUnit [MyBusinessRecordsPU] failed. Internal Exception: Exception [TOPLINK-7244] (Oracle TopLink Essentials - 2.0.1 (Build b09d-fcs (12/06/2007))): oracle.toplink.essentials.exceptions.ValidationException Exception Description: An incompatible mapping has been encountered between [class test.Products] and [class test.Orders]. This usually occurs when the cardinality of a mapping does not correspond with the cardinality of its backpointer. at oracle.toplink.essentials.exceptions.PersistenceUnitLoadingException.exceptionSearchingForPersistenceResources(PersistenceUnitLoadingException.java:143) at oracle.toplink.essentials.ejb.cmp3.EntityManagerFactoryProvider.createEntityManagerFactory(EntityManagerFactoryProvider.java:169) at javax.persistence.Persistence.createEntityManagerFactory(Persistence.java:110) at javax.persistence.Persistence.createEntityManagerFactory(Persistence.java:83) at test.TestView.initComponents(TestView.java:337) at test.TestView.(TestView.java:39) at test.TestApp.startup(TestApp.java:19) at org.jdesktop.application.Application$1.run(Application.java:171) at java.awt.event.InvocationEvent.dispatch(InvocationEvent.java:209) at java.awt.EventQueue.dispatchEvent(EventQueue.java:597) at java.awt.EventDispatchThread.pumpOneEventForFilters(EventDispatchThread.java:269) at java.awt.EventDispatchThread.pumpEventsForFilter(EventDispatchThread.java:184) at java.awt.EventDispatchThread.pumpEventsForHierarchy(EventDispatchThread.java:174) at java.awt.EventDispatchThread.pumpEvents(EventDispatchThread.java:169) at java.awt.EventDispatchThread.pumpEvents(EventDispatchThread.java:161) at java.awt.EventDispatchThread.run(EventDispatchThread.java:122) Caused by: javax.persistence.PersistenceException: Exception [TOPLINK-28018] (Oracle TopLink Essentials - 2.0.1 (Build b09d-fcs (12/06/2007))): oracle.toplink.essentials.exceptions.EntityManagerSetupException Exception Description: predeploy for PersistenceUnit [MyBusinessRecordsPU] failed. Internal Exception: Exception [TOPLINK-7244] (Oracle TopLink Essentials - 2.0.1 (Build b09d-fcs (12/06/2007))): oracle.toplink.essentials.exceptions.ValidationException Exception Description: An incompatible mapping has been encountered between [class test.Products] and [class test.Orders]. This usually occurs when the cardinality of a mapping does not correspond with the cardinality of its backpointer. 
at oracle.toplink.essentials.internal.ejb.cmp3.EntityManagerSetupImpl.predeploy(EntityManagerSetupImpl.java:643) at oracle.toplink.essentials.internal.ejb.cmp3.JavaSECMPInitializer.callPredeploy(JavaSECMPInitializer.java:171) at oracle.toplink.essentials.internal.ejb.cmp3.JavaSECMPInitializer.initPersistenceUnits(JavaSECMPInitializer.java:239) at oracle.toplink.essentials.internal.ejb.cmp3.JavaSECMPInitializer.initialize(JavaSECMPInitializer.java:255) at oracle.toplink.essentials.ejb.cmp3.EntityManagerFactoryProvider.createEntityManagerFactory(EntityManagerFactoryProvider.java:155) ... 14 more Caused by: Exception [TOPLINK-28018] (Oracle TopLink Essentials - 2.0.1 (Build b09d-fcs (12/06/2007))): oracle.toplink.essentials.exceptions.EntityManagerSetupException Exception Description: predeploy for PersistenceUnit [MyBusinessRecordsPU] failed. Internal Exception: Exception [TOPLINK-7244] (Oracle TopLink Essentials - 2.0.1 (Build b09d-fcs (12/06/2007))): oracle.toplink.essentials.exceptions.ValidationException Exception Description: An incompatible mapping has been encountered between [class test.Products] and [class test.Orders]. This usually occurs when the cardinality of a mapping does not correspond with the cardinality of its backpointer. at oracle.toplink.essentials.exceptions.EntityManagerSetupException.predeployFailed(EntityManagerSetupException.java:228) ... 19 more Caused by: Exception [TOPLINK-7244] (Oracle TopLink Essentials - 2.0.1 (Build b09d-fcs (12/06/2007))): oracle.toplink.essentials.exceptions.ValidationException Exception Description: An incompatible mapping has been encountered between [class test.Products] and [class test.Orders]. This usually occurs when the cardinality of a mapping does not correspond with the cardinality of its backpointer. at oracle.toplink.essentials.exceptions.ValidationException.invalidMapping(ValidationException.java:1069) at oracle.toplink.essentials.internal.ejb.cmp3.metadata.MetadataValidator.throwInvalidMappingEncountered(MetadataValidator.java:275) at oracle.toplink.essentials.internal.ejb.cmp3.metadata.accessors.OneToManyAccessor.process(OneToManyAccessor.java:161) at oracle.toplink.essentials.internal.ejb.cmp3.metadata.accessors.RelationshipAccessor.processRelationship(RelationshipAccessor.java:290) at oracle.toplink.essentials.internal.ejb.cmp3.metadata.MetadataProject.processRelationshipDescriptors(MetadataProject.java:579) at oracle.toplink.essentials.internal.ejb.cmp3.metadata.MetadataProject.process(MetadataProject.java:512) at oracle.toplink.essentials.internal.ejb.cmp3.metadata.MetadataProcessor.processAnnotations(MetadataProcessor.java:246) at oracle.toplink.essentials.ejb.cmp3.persistence.PersistenceUnitProcessor.processORMetadata(PersistenceUnitProcessor.java:370) at oracle.toplink.essentials.internal.ejb.cmp3.EntityManagerSetupImpl.predeploy(EntityManagerSetupImpl.java:607) ... 18 more The following providers: oracle.toplink.essentials.ejb.cmp3.EntityManagerFactoryProvider Returned null to createEntityManagerFactory. at javax.persistence.Persistence.createEntityManagerFactory(Persistence.java:154) at javax.persistence.Persistence.createEntityManagerFactory(Persistence.java:83) at test.TestView.initComponents(TestView.java:337) at test.TestView.(TestView.java:39) at test.TestApp.startup(TestApp.java:19) at org.jdesktop.application.Application$1.run(Application.java:171) ... 
8 more BUILD SUCCESSFUL (total time: 2 seconds) I'm thinking my NetBeans application must be misconfigured somehow, but I can't seem to find anyone else with the same problem.
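For reference, TOPLINK-7244 almost always means the two sides of a bidirectional relationship disagree: a @OneToMany whose mappedBy field is not a plain @ManyToOne backpointer (for example, it is a collection, or annotated @OneToOne, or missing). A minimal consistent pair would look like the sketch below — the field names are invented, since the original entity code is not shown:

import java.util.List;
import javax.persistence.Entity;
import javax.persistence.Id;
import javax.persistence.ManyToOne;
import javax.persistence.OneToMany;

@Entity
public class Products {
    @Id private int id;
    // One product can appear in many orders; mappedBy names the owning field below.
    @OneToMany(mappedBy = "product")
    private List<Orders> orders;
}

@Entity
public class Orders {
    @Id private int id;
    // The backpointer must have the inverse cardinality: many-to-one.
    @ManyToOne
    private Products product;
}

If the backpointer in Orders is declared with the wrong cardinality (or is absent), TopLink Essentials raises exactly this validation error at predeploy time.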

    Read the article

  • CodePlex Daily Summary for Thursday, January 13, 2011

    CodePlex Daily Summary for Thursday, January 13, 2011Popular ReleasesMVC Music Store: MVC Music Store v2.0: This is the 2.0 release of the MVC Music Store Tutorial. This tutorial is updated for ASP.NET MVC 3 and Entity Framework Code-First, and contains fixes and improvements based on feedback and common questions from previous releases. The main download, MvcMusicStore-v2.0.zip, contains everything you need to build the sample application, including A detailed tutorial document in PDF format Assets you will need to build the project, including images, a stylesheet, and a pre-populated databas...Free Silverlight & WPF Chart Control - Visifire: Visifire SL and WPF Charts v3.6.7 GA Released: Hi, Today we are releasing Visifire 3.6.7 GA with the following feature: * Inlines property has been implemented in Title. Also, this release contains fix for the following bugs: * In Column and Bar chart DataPoint’s label properties were not working as expected at real-time if marker enabled was set to true. * 3D Column and Bar chart were not rendered properly if AxisMinimum property was set in x-axis. You can download Visifire v3.6.7 here. Cheers, Team VisifireFluent Validation for .NET: 2.0: Changes since 2.0 RC Fix typo in the name of FallbackAwareResourceAccessorBuilder Fix issue #7062 - allow validator selectors to work against nullable properties with overriden names. Fix error in German localization. Better support for client-side validation messages in MVC integration. All changes since 1.3 Allow custom MVC ModelValidators to be added to the FVModelValidatorProvider Support resource provider for custom property validators through the new IResourceAccessorBuilder ...EnhSim: EnhSim 2.3.2 ALPHA: 2.3.1 ALPHAThis release supports WoW patch 4.03a at level 85 To use this release, you must have the Microsoft Visual C++ 2010 Redistributable Package installed. This can be downloaded from http://www.microsoft.com/downloads/en/details.aspx?FamilyID=A7B7A05E-6DE6-4D3A-A423-37BF0912DB84 To use the GUI you must have the .NET 4.0 Framework installed. This can be downloaded from http://www.microsoft.com/downloads/en/details.aspx?FamilyID=9cfb2d51-5ff4-4491-b0e5-b386f32c0992 - Quick update to ...ASP.NET MVC Project Awesome, jQuery Ajax helpers (controls): 1.6: A rich set of helpers (controls) that you can use to build highly responsive and interactive Ajax-enabled Web applications. These helpers include Autocomplete, AjaxDropdown, Lookup, Confirm Dialog, Popup Form, Popup and Pager new stuff: paging for the lookup lookup with multiselect changes: the css classes used by the framework where renamed to be more standard the lookup controller requries an item.ascx (no more ViewData["structure"]), and LookupList action renamed to Search all the...pwTools: pwTools: Changelog v1.0 base release??????????: All-In-One Code Framework ??? 2011-01-12: 2011???????All-In-One Code Framework(??) 2011?1??????!!http://i3.codeplex.com/Project/Download/FileDownload.aspx?ProjectName=1code&DownloadId=128165 ?????release?,???????ASP.NET, AJAX, WinForm, Windows Shell????13?Sample Code。???,??????????sample code。 ?????:http://blog.csdn.net/sjb5201/archive/2011/01/13/6135037.aspx ??,??????MSDN????????????。 http://social.msdn.microsoft.com/Forums/zh-CN/codezhchs/threads ?????????????????,??Email ????patterns & practices – Enterprise Library: Enterprise Library 5.0 - Extensibility Labs: This is a preview release of the Hands-on Labs to help you learn and practice different ways the Enterprise Library can be extended. 
Learning MapCustom exception handler (estimated time to complete: 1 hr 15 mins) Custom logging trace listener (1 hr) Custom configuration source (registry-based) (30 mins) System requirementsEnterprise Library 5.0 / Unity 2.0 installed SQL Express 2008 installed Visual Studio 2010 Pro (or better) installed AuthorsChris Tavares, Microsoft Corporation ...Orchard Project: Orchard 1.0: Orchard Release Notes Build: 1.0.20 Published: 1/12/2010 How to Install OrchardTo install the Orchard tech preview using Web PI, follow these instructions: http://www.orchardproject.net/docs/Installing-Orchard.ashx Web PI will detect your hardware environment and install the application. --OR-- Alternatively, to install the release manually, download the Orchard.Web.1.0.20.zip file. The zip contents are pre-built and ready-to-run. Simply extract the contents of the Orchard folder from ...Umbraco CMS: Umbraco 4.6.1: The Umbraco 4.6.1 (codename JUNO) release contains many new features focusing on an improved installation experience, a number of robust developer features, and contains nearly 200 bug fixes since the 4.5.2 release. Getting Started A great place to start is with our Getting Started Guide: Getting Started Guide: http://umbraco.codeplex.com/Project/Download/FileDownload.aspx?DownloadId=197051 Make sure to check the free foundation videos on how to get started building Umbraco sites. They're ...Google URL Shortener API for .NET: Google URL Shortener API v1: According follow specification: http://code.google.com/apis/urlshortener/v1/reference.htmlStyleCop for ReSharper: StyleCop for ReSharper 5.1.14986.000: A considerable amount of work has gone into this release: Features: Huge focus on performance around the violation scanning subsystem: - caching added to reduce IO operations around reading and merging of settings files - caching added to reduce creation of expensive objects Users should notice condsiderable perf boost and a decrease in memory usage. Bug Fixes: - StyleCop's new ObjectBasedEnvironment object does not resolve the StyleCop installation path, thus it does not return the ...SQL Monitor - tracking sql server activities: SQL Monitor 3.1 beta 1: 1. support alert message template 2. dynamic toolbar commands depending on functionality 3. fixed some bugs 4. refactored part of the code, now more stable and more clean upFacebook C# SDK: 4.2.1: - Authentication bug fixes - Updated Json.Net to version 4.0.0 - BREAKING CHANGE: Removed cookieSupport config setting, now automatic. This download is also availible on NuGet: Facebook FacebookWeb FacebookWebMvcHawkeye - The .Net Runtime Object Editor: Hawkeye 1.2.5: In the case you are running an x86 Windows and you installed Release 1.2.4, you should consider upgrading to this release (1.2.5) as it appears Hawkeye is broken on x86 OS. I apologize for the inconvenience, but it appears Hawkeye 1.2.4 (and probably previous versions) doesn't run on x86 Windows (See issue http://hawkeye.codeplex.com/workitem/7791). This maintenance release fixes this broken behavior. This release comes in two flavors: Hawkeye.125.N2 is the standard .NET 2 build, was compile...Phalanger - The PHP Language Compiler for the .NET Framework: 2.0 (January 2011): Another release build for daily use; it contains many new features, enhanced compatibility with latest PHP opensource applications and several issue fixes. To improve the performance of your application using MySQL, please use Managed MySQL Extension for Phalanger. 
Changes made within this release include following: New features available only in Phalanger. Full support of Multi-Script-Assemblies was implemented; you can build your application into several DLLs now. Deploy them separately t...AutoLoL: AutoLoL v1.5.3: A message will be displayed when there's an update available Shows a list of recent mastery files in the Editor Tab (requested by quite a few people) Updater: Update information is now scrollable Added a buton to launch AutoLoL after updating is finished Updated the UI to match that of AutoLoL Fix: Detects and resolves 'Read Only' state on Version.xmlTweetSharp: TweetSharp v2.0.0.0 - Preview 7: Documentation for this release may be found at http://tweetsharp.codeplex.com/wikipage?title=UserGuide&referringTitle=Documentation. Note: This code is currently preview quality. Preview 7 ChangesFixes the regression issue in OAuth from Preview 6 Preview 6 ChangesMaintenance release with user reported fixes Preview 5 ChangesMaintenance release with user reported fixes Third Party Library VersionsHammock v1.0.6: http://hammock.codeplex.com Json.NET 3.5 Release 8: http://json.codeplex.comExtended WPF Toolkit: Extended WPF Toolkit - 1.3.0: What's in the 1.3.0 Release?BusyIndicator ButtonSpinner ChildWindow ColorPicker - Updated (Breaking Changes) DateTimeUpDown - New Control Magnifier - New Control MaskedTextBox - New Control MessageBox NumericUpDown RichTextBox RichTextBoxFormatBar - Updated .NET 3.5 binaries and SourcePlease note: The Extended WPF Toolkit 3.5 is dependent on .NET Framework 3.5 and the WPFToolkit. You must install .NET Framework 3.5 and the WPFToolkit in order to use any features in the To...Ionics Isapi Rewrite Filter: 2.1 latest stable: V2.1 is stable, and is in maintenance mode. This is v2.1.1.25. It is a bug-fix release. There are no new features. 28629 29172 28722 27626 28074 29164 27659 27900 many documentation updates and fixes proper x64 build environment. This release includes x64 binaries in zip form, but no x64 MSI file. You'll have to manually install x64 servers, following the instructions in the documentation.New Projects4chan: Project for educational purposesAE.Net.Mail - POP/IMAP Client: C# POP/IMAP client libraryAlmathy: Application communautaire pour le partage de données basée sur le protocole XMPP. Discussion instantanée, mail, échange de données, travaux partagés. Développée en C#, utilisant les Windows Forms.AMK - Associação Metropolitana de Kendo: Projeto do site versão 2011 da Associação Metropolitana de Kendo. O site contará com: - Cadastro de atletas e eventos; - Controle de cadastro junto a CBK e histórico de graduações; - Calendário de treinos; Será desenvolvido usando .NET, MVC2, Entity Framework, SQL Server, JQueryAzke: New: Azke is a portal developed with ASP.NET MVC and MySQL. Old: Azke is a portal developed with ASP.net and MySQL.BuildScreen: A standalone Windows application to displays project build statuses from TeamCity. It can be used on a large screen for the whole development team to watch their build statuses as well as on a developer machine.CardOnline: CardOnline GameLobby GameCAudioEndpointVolume: The CAudioEndpointVolume implements the IAudioEndpointVolume Interface to control the master volume in Visual Basic 6.0 for Windows Vista and later operating systems.Cloudy - Online Storage Library: The goal of Cloudy is to be an online storage library to enable access for the most common storage services : DropBox, Skydrive, Google Docs, Azure Blob Storage and Amazon S3. 
It'll work in .NET 3.5 and up, Silverlight and Windows Phone 7 (WP7).ContentManager: Content Manger for SAP Kpro exports the data pool of a content-server by PHIOS-Lists (txt-files) into local binary files. Afterwards this files can be imported in a SAP content-repository . The whole List of the PHIOS Keys can also be downloaded an splitted in practical units.CSWFKrad: private FrameworkDiego: Diegodoomhjx_javalib: this is my lib for the java project.Eve Planetary Interaction: Eve Planetary InteractionEventWall: EventWall allows you to show related blogposts and tweets on the big screen. Just configure the hashtag and blog RSS feeds and you're done.Google URL Shortener API for .NET: Google URL Shortener API for .NET is a wrapper for the Google Project below: http://code.google.com/apis/urlshortener/v1/reference.html With Google URL Shortener API, you may shorten urls and get some analytics information about this. It's developer in C# 4.0 and VS2010. HtmlDeploy: A project to compile asp's Master page into pure html files. The idea is to use jQuery templating to do all the "if" and "for" when it comes to generating html output. Inventory Management System: Inventory Management System is a Silverlight-based system. It aims to manage and control the input raw material activites, output of finished goods of a small inventory.koplamp kapot: demo project voor AzureMSAccess SVN: Access SVN adds to Microsoft Access (MS Access) support for SVN Source controlMultiConvert: console based utility primary to convert solid edge files into data exchange formats like *.stp and *.dxf. It's using the API of the original software and can't be run alone. The goal of this project is creating routines with desired pre-instructions for batch conversionsNGN Image Getter: Nationalgeographic has really stunning photos! They allow to download them via website. This program automates that process.OOD CRM: Project CRM for Object Oriented Development final examsOpen NOS Client: Open NOS Rest ClientopenEHR.NET: openEHR.NET is a C# implementation of openEHR (http://openehr.org) Reference Model (RM) and Archetype Model (AM) specifications (Release 1.0.1). It allows you to build openEHR applications by composing RM objects, validate against AM objects and serialise to/from XML.OpenGL Flow Designer: OpenGL Flow Designer allows writing pseudo-C code to build OpenGL pipeline and preview results immediately.Orchard Translation Manager: An Orchard module that aims at facilitating the creation and management of translations of the Orchard CMS and its extensions.pwTools: A set of tools for viewing/editing some perfect world client/server filesServerStatusMobile: Server availability monitoring application for windows mobile 6.5.x running on WVGA (480x800) devices. Supports autorun, vibration & notification when server became unavailable, text log files for each server. .NET Compact Framework 3.5 required for running this applicationSimple Silverlight Multiple File Uploader: The project allows you to achieve multiple file uploading in Silverlight without complex, complicated, or third-party applications. In just two core classes, it's very simple to understand, use, and extend. Snos: Projet Snos linker Snes multi SupportT-4 Templates for ASP.NET Web Form Databound Control Friendly Logical Layers: This open source project includes a set of T-4 templates to enable you to build logical layers (i.e. DAL/BLL) with just few clicks! 
The logical layers implemented here are based on Entity Framework 4.0, ASP.NET Web Form Data Bound control friendly and fully unit testable.The Media Store: The Media StoreUM: source control for the um projecturBook - Your Address Book Manager: urBook is and Address Book Manager that can merge all your contacts on all web services you use with your phone contacts to have one consistent Address Book for all your contacts and to update back your web services contacts with the one consistent address bookWindows Phone Certificate Installer: Helps install Trusted Root Certificates on Windows Phone 7 to enable SSL requests. Intended to allow developers to use localhost web servers during development without requiring the purchase of an SSL certificate. Especially helpful for ws-trust secured web service development.

    Read the article

  • What's the use of Ant's extension-point if/unless attributes?

    - by Robert Menteer
    When you define an extension-point in an Ant build file you can make it conditional by using the if or unless attribute. On a target, if/unless prevents its tasks from being run. But an extension-point doesn't have any tasks to conditionally run, so what does the condition do? My thought (which proved to be incorrect in Ant 1.8.0) was that it would prevent any tasks that extend the extension-point from being run. Here is an example build script showing the problem: <project name = "ext-test" default = "main"> <property name = "do.it" value = "false" /> <extension-point name = "init"/> <extension-point name = "doit" depends = "init" if = "${do.it}" /> <target name = "extend-init" extensionOf = "init"> <echo message = "Doing extend-init." /> </target> <target name = "extend-doit" extensionOf = "doit"> <echo message = "Do It! (${do.it})" /> </target> <target name = "main" depends = "doit"> <echo message = "Doing main." /> </target> </project> Using the command: ant -v Results in: Apache Ant version 1.8.0 compiled on February 1 2010 Trying the default build file: build.xml Buildfile: /Users/bob/build.xml Detected Java version: 1.6 in: /System/Library/Frameworks/JavaVM.framework/Versions/1.6.0/Home Detected OS: Mac OS X parsing buildfile /Users/bob/build.xml with URI = file:/Users/bob/build.xml Project base dir set to: /Users/bob parsing buildfile jar:file:/Users/bob/Documents/Development/3P-Tools/apache-ant-1.8.0/lib/ant.jar!/org/apache/tools/ant/antlib.xml with URI = jar:file:/Users/bob/Documents/Development/3P-Tools/apache-ant-1.8.0/lib/ant.jar!/org/apache/tools/ant/antlib.xml from a zip file Build sequence for target(s) `main' is [extend-init, init, extend-doit, doit, main] Complete build sequence is [extend-init, init, extend-doit, doit, main, ] extend-init: [echo] Doing extend-init. init: extend-doit: [echo] Do It! (false) doit: Skipped because property 'false' not set. main: [echo] Doing main. BUILD SUCCESSFUL Total time: 0 seconds You will notice the target extend-doit is executed but the extension-point itself is skipped. Since an extension-point doesn't have any tasks, exactly what has been skipped? Any targets that depend on the extension-point still get executed, since a skipped target is a successful target. What is the value of the if/unless attributes on an extension-point?
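    A workaround sketch (my own, not part of the original question): leave the extension-point unconditional and put the condition on each extending target instead, using Ant's classic "run only if this property is set" semantics:

        <project name="ext-test" default="main">
            <!-- Uncomment to enable the optional work: -->
            <!-- <property name="do.it" value="true"/> -->
            <extension-point name="doit"/>
            <target name="extend-doit" extensionOf="doit" if="do.it">
                <echo message="Only runs when property do.it is set."/>
            </target>
            <target name="main" depends="doit"/>
        </project>

    With this arrangement the skipping happens where the tasks actually live, which is what the if attribute on the extension-point itself evidently does not do in Ant 1.8.0.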

    Read the article

  • Building 64bit Qt on 32bit Xp computer

    - by photo_tom
    I'm trying to build Qt in shared 64-bit mode on my 32-bit XP system. I can configure qmake and start the 64-bit build. The problem is that when the build starts, the first thing that happens is that the process builds the uic, moc and rcc utility compilers in 64-bit mode, then tries to run them on my 32-bit machine. Does anyone know how to configure the build so that it does not build those compilers first?

    Read the article

  • Difference in Java versions

    - by Polppan
    Are there any differences between these two Java versions? If there are, how can I get java version "1.4.2", because that is what I have on the server. 1) java version "1.4.2_06" Java(TM) 2 Runtime Environment, Standard Edition (build 1.4.2_06-b03) Java HotSpot(TM) Client VM (build 1.4.2_06-b03, mixed mode) 2) java version "1.4.2" Java(TM) 2 Runtime Environment, Standard Edition (build 1.4.2) Classic VM (build 1.4.2, J2RE 1.4.2 IBM AIX build ca142-20090307

    Read the article

  • ruby on rails- nested attributes for 'has_one' relationship

    - by Kum
    Hello, I need some help concerning nested attributes in models with a 'has_one' relationship. Model Survey has 1 question. Model Question has 1 answer. How do I build the 'answer' in the code below? def new @survey = Survey.new @survey.build_question # build one question @survey.question.answer.build #this part is not working end Please can anybody tell me how to build the answer, as the code "@survey.question.answer.build" is not correct. Many many thanks for your help
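    For a has_one association, Rails generates a build_<association> method on the owner rather than a .build method on the association itself, so (assuming Question declares has_one :answer and Survey declares has_one :question) the action would be:

        def new
          @survey = Survey.new
          question = @survey.build_question   # has_one :question on Survey
          question.build_answer               # has_one :answer on Question
        end

    With has_one, @survey.question.answer is nil until something builds it, and the collection-style .build only exists on has_many associations — which is why the original line fails.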

    Read the article

  • juju bootstrap fails with a local environment, why?

    - by Braiam
    Each time I try to bootstrap juju using a local enviroment it fails starting the juju-db-braiam-local script as follows: $ sudo juju --debug --verbose bootstrap 2013-10-20 02:28:53 INFO juju.provider.local environprovider.go:32 opening environment "local" 2013-10-20 02:28:53 DEBUG juju.provider.local environ.go:210 found "10.0.3.1" as address for "lxcbr0" 2013-10-20 02:28:53 DEBUG juju.provider.local environ.go:234 checking 10.0.3.1:8040 to see if machine agent running storage listener 2013-10-20 02:28:53 DEBUG juju.provider.local environ.go:237 nope, start some 2013-10-20 02:28:53 DEBUG juju.environs.tools storage.go:87 Uploading tools for [raring precise] 2013-10-20 02:28:53 DEBUG juju.environs.tools build.go:109 looking for: juju 2013-10-20 02:28:53 DEBUG juju.environs.tools build.go:150 checking: /usr/bin/jujud 2013-10-20 02:28:53 INFO juju.environs.tools build.go:156 found existing jujud 2013-10-20 02:28:53 INFO juju.environs.tools build.go:166 target: /tmp/juju-tools243949228/jujud 2013-10-20 02:28:53 DEBUG juju.environs.tools build.go:217 forcing version to 1.14.1.1 2013-10-20 02:28:53 DEBUG juju.environs.tools build.go:37 adding entry: &tar.Header{Name:"FORCE-VERSION", Mode:420, Uid:0, Gid:0, Size:8, ModTime:time.Time{sec:63517832933, nsec:278894120, loc:(*time.Location)(0x108fda0)}, Typeflag:0x30, Linkname:"", Uname:"ubuntu", Gname:"ubuntu", Devmajor:0, Devminor:0, AccessTime:time.Time{sec:63517832933, nsec:278894120, loc:(*time.Location)(0x108fda0)}, ChangeTime:time.Time{sec:63517832933, nsec:278894120, loc:(*time.Location)(0x108fda0)}} 2013-10-20 02:28:53 DEBUG juju.environs.tools build.go:37 adding entry: &tar.Header{Name:"jujud", Mode:493, Uid:0, Gid:0, Size:19179512, ModTime:time.Time{sec:63517832933, nsec:274894120, loc:(*time.Location)(0x108fda0)}, Typeflag:0x30, Linkname:"", Uname:"ubuntu", Gname:"ubuntu", Devmajor:0, Devminor:0, AccessTime:time.Time{sec:63517832933, nsec:274894120, loc:(*time.Location)(0x108fda0)}, ChangeTime:time.Time{sec:63517832933, nsec:274894120, loc:(*time.Location)(0x108fda0)}} 2013-10-20 02:28:55 INFO juju.environs.tools storage.go:106 built 1.14.1.1-raring-amd64 (4196kB) 2013-10-20 02:28:55 INFO juju.environs.tools storage.go:112 uploading 1.14.1.1-precise-amd64 2013-10-20 02:28:55 INFO juju.environs.tools storage.go:112 uploading 1.14.1.1-raring-amd64 2013-10-20 02:28:55 INFO juju.environs.tools tools.go:29 reading tools with major version 1 2013-10-20 02:28:55 INFO juju.environs.tools tools.go:34 filtering tools by version: 1.14.1.1 2013-10-20 02:28:55 INFO juju.environs.tools tools.go:37 filtering tools by series: precise 2013-10-20 02:28:55 DEBUG juju.environs.tools storage.go:41 reading v1.* tools 2013-10-20 02:28:55 DEBUG juju.environs.tools storage.go:61 found 1.14.1.1-precise-amd64 2013-10-20 02:28:55 DEBUG juju.environs.tools storage.go:61 found 1.14.1.1-raring-amd64 2013-10-20 02:28:55 INFO juju.environs.boostrap bootstrap.go:57 bootstrapping environment "local" 2013-10-20 02:28:55 INFO juju.environs.tools tools.go:29 reading tools with major version 1 2013-10-20 02:28:55 INFO juju.environs.tools tools.go:34 filtering tools by version: 1.14.1.1 2013-10-20 02:28:55 INFO juju.environs.tools tools.go:37 filtering tools by series: precise 2013-10-20 02:28:55 DEBUG juju.environs.tools storage.go:41 reading v1.* tools 2013-10-20 02:28:55 DEBUG juju.environs.tools storage.go:61 found 1.14.1.1-precise-amd64 2013-10-20 02:28:55 DEBUG juju.environs.tools storage.go:61 found 1.14.1.1-raring-amd64 2013-10-20 02:28:55 DEBUG juju.provider.local 
environ.go:395 create mongo journal dir: /home/braiam/.juju/local/db/journal 2013-10-20 02:28:55 DEBUG juju.provider.local environ.go:401 generate server cert 2013-10-20 02:28:55 INFO juju.provider.local environ.go:421 installing service juju-db-braiam-local to /etc/init 2013-10-20 02:28:56 ERROR juju.provider.local environ.go:423 could not install mongo service: exec ["start" "juju-db-braiam-local"]: exit status 1 (start: Job failed to start) 2013-10-20 02:28:56 ERROR juju supercommand.go:282 command failed: exec ["start" "juju-db-braiam-local"]: exit status 1 (start: Job failed to start) error: exec ["start" "juju-db-braiam-local"]: exit status 1 (start: Job failed to start) What is the reason for this error and how to solve it?
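    A first diagnostic step (my suggestion, not from the post): the failing juju-db-braiam-local job is an upstart wrapper around MongoDB, so its log usually says why it died — a missing MongoDB install is a common cause with the local provider. The package names below assume Ubuntu's juju 1.x archive and may differ on your release:

        cat /var/log/upstart/juju-db-braiam-local.log   # why the job failed to start
        sudo apt-get install mongodb-server juju-local  # local provider dependencies
        juju destroy-environment && juju bootstrap      # retry from a clean state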

    Read the article

  • Using Stub Objects

    - by user9154181
    Having told the long and winding tale of where stub objects came from and how we use them to build Solaris, I'd like to focus now on the nuts and bolts of building and using them. The following new features were added to the Solaris link-editor (ld) to support the production and use of stub objects: -z stub This new command line option informs ld that it is to build a stub object rather than a normal object. In this mode, it accepts the same command line arguments as usual, but will quietly ignore any objects and sharable object dependencies. STUB_OBJECT Mapfile Directive In order to build a stub version of an object, its mapfile must specify the STUB_OBJECT directive. When producing a non-stub object, the presence of STUB_OBJECT causes the link-editor to perform extra validation to ensure that the stub and non-stub objects will be compatible. ASSERT Mapfile Directive All data symbols exported from the object must have an ASSERT symbol directive in the mapfile that declares them as data and supplies the size, binding, bss attributes, and symbol aliasing details. When building the stub objects, the information in these ASSERT directives is used to create the data symbols. When building the real object, these ASSERT directives will ensure that the real object matches the linking interface presented by the stub. Although ASSERT was added to the link-editor in order to support stub objects, it is a general purpose feature that can be used independently of stub objects. For instance you might choose to use an ASSERT directive if you have a symbol that must have a specific address in order for the object to operate properly and you want to automatically ensure that this will always be the case. The material presented here is derived from a document I originally wrote during the development effort, which had the dual goals of providing supplemental materials for the stub object PSARC case, and as a set of edits that were eventually applied to the Oracle Solaris Linker and Libraries Manual (LLM). The Solaris 11 LLM contains this information in a more polished form. Stub Objects A stub object is a shared object, built entirely from mapfiles, that supplies the same linking interface as the real object, while containing no code or data. Stub objects cannot be used at runtime. However, an application can be built against a stub object, where the stub object provides the real object name to be used at runtime, and then use the real object at runtime. When building a stub object, the link-editor ignores any object or library files specified on the command line, and these files need not exist in order to build a stub. Since the compilation step can be omitted, and because the link-editor has relatively little work to do, stub objects can be built very quickly. Stub objects can be used to solve a variety of build problems: Speed Modern machines, using a version of make with the ability to parallelize operations, are capable of compiling and linking many objects simultaneously, and doing so offers significant speedups. However, it is typical that a given object will depend on other objects, and that there will be a core set of objects that nearly everything else depends on. It is necessary to impose an ordering that builds each object before any other object that requires it. This ordering creates bottlenecks that reduce the amount of parallelization that is possible and limit the overall speed at which the code can be built.
    Complexity/Correctness In a large body of code, there can be a large number of dependencies between the various objects. The makefiles or other build descriptions for these objects can become very complex and difficult to understand or maintain. The dependencies can change as the system evolves. This can cause a given set of makefiles to become slightly incorrect over time, leading to race conditions and mysterious rare build failures. Dependency Cycles It might be desirable to organize code as cooperating shared objects, each of which draws on the resources provided by the other. Such cycles cannot be supported in an environment where objects must be built before the objects that use them, even though the runtime linker is fully capable of loading and using such objects if they could be built. Stub shared objects offer an alternative method for building code that sidesteps the above issues. Stub objects can be quickly built for all the shared objects produced by the build. Then, all the real shared objects and executables can be built in parallel, in any order, using the stub objects to stand in for the real objects at link-time. Afterwards, the executables and real shared objects are kept, and the stub shared objects are discarded. Stub objects are built from a mapfile, which must satisfy the following requirements. The mapfile must specify the STUB_OBJECT directive. This directive informs the link-editor that the object can be built as a stub object, and as such causes the link-editor to perform validation and sanity checking intended to guarantee that an object and its stub will always provide identical linking interfaces. All function and data symbols that make up the external interface to the object must be explicitly listed in the mapfile. The mapfile must use symbol scope reduction ('*'), to remove any symbols not explicitly listed from the external interface. All global data exported from the object must have an ASSERT symbol attribute in the mapfile to specify the symbol type, size, and bss attributes. In the case where there are multiple symbols that reference the same data, the ASSERT for one of these symbols must specify the TYPE and SIZE attributes, while the others must use the ALIAS attribute to reference this primary symbol. Given such a mapfile, the stub and real versions of the shared object can be built using the same command line for each, adding the '-z stub' option to the link for the stub object, and omitting the option from the link for the real object. To demonstrate these ideas, the following code implements a shared object named idx5, which exports data from a 5-element array of integers, with each element initialized to contain its zero-based array index. This data is available as a global array, via an alternative alias data symbol with weak binding, and via a functional interface. % cat idx5.c int _idx5[5] = { 0, 1, 2, 3, 4 }; #pragma weak idx5 = _idx5 int idx5_func(int index) { if ((index < 0) || (index > 4)) return (-1); return (_idx5[index]); } A mapfile is required to describe the interface provided by this shared object. % cat mapfile $mapfile_version 2 STUB_OBJECT; SYMBOL_SCOPE { _idx5 { ASSERT { TYPE=data; SIZE=4[5] }; }; idx5 { ASSERT { BINDING=weak; ALIAS=_idx5 }; }; idx5_func; local: *; }; The following main program is used to print all the index values available from the idx5 shared object.
    % cat main.c #include <stdio.h> extern int _idx5[5], idx5[5], idx5_func(int); int main(int argc, char **argv) { int i; for (i = 0; i < 5; i++) (void) printf("[%d] %d %d %d\n", i, _idx5[i], idx5[i], idx5_func(i)); return (0); } The following commands create a stub version of this shared object in a subdirectory named stublib. elfdump is used to verify that the resulting object is a stub. The command used to build the stub differs from that of the real object only in the addition of the -z stub option, and the use of a different output file name. This demonstrates the ease with which stub generation can be added to an existing makefile. % cc -Kpic -G -M mapfile -h libidx5.so.1 idx5.c -o stublib/libidx5.so.1 -zstub % ln -s libidx5.so.1 stublib/libidx5.so % elfdump -d stublib/libidx5.so | grep STUB [11] FLAGS_1 0x4000000 [ STUB ] The main program can now be built, using the stub object to stand in for the real shared object, and setting a runpath that will find the real object at runtime. However, as we have not yet built the real object, this program cannot yet be run. Attempts to cause the system to load the stub object are rejected, as the runtime linker knows that stub objects lack the actual code and data found in the real object, and cannot execute. % cc main.c -L stublib -R '$ORIGIN/lib' -lidx5 -lc % ./a.out ld.so.1: a.out: fatal: libidx5.so.1: open failed: No such file or directory Killed % LD_PRELOAD=stublib/libidx5.so.1 ./a.out ld.so.1: a.out: fatal: stublib/libidx5.so.1: stub shared object cannot be used at runtime Killed We build the real object using the same command as we used to build the stub, omitting the -z stub option, and writing the results to a different file. % cc -Kpic -G -M mapfile -h libidx5.so.1 idx5.c -o lib/libidx5.so.1 Once the real object has been built in the lib subdirectory, the program can be run. % ./a.out [0] 0 0 0 [1] 1 1 1 [2] 2 2 2 [3] 3 3 3 [4] 4 4 4 Mapfile Changes The version 2 mapfile syntax was extended in a number of places to accommodate stub objects. Conditional Input The version 2 mapfile syntax has the ability to conditionalize mapfile input using the $if control directive. As you might imagine, these directives are used frequently with ASSERT directives for data, because a given data symbol will frequently have a different size in 32 or 64-bit code, or on differing hardware such as x86 versus sparc. The link-editor maintains an internal table of names that can be used in the logical expressions evaluated by $if and $elif. At startup, this table is initialized with items that describe the class of object (_ELF32 or _ELF64) and the type of the target machine (_sparc or _x86). We found that there were a small number of cases in the Solaris code base in which we needed to know what kind of object we were producing, so we added the following new predefined items in order to address that need:

        Name        Meaning
        ......      ......
        _ET_DYN     shared object
        _ET_EXEC    executable object
        _ET_REL     relocatable object
        ......      ......

    STUB_OBJECT Directive The new STUB_OBJECT directive informs the link-editor that the object described by the mapfile can be built as a stub object. STUB_OBJECT; A stub shared object is built entirely from the information in the mapfiles supplied on the command line. When the -z stub option is specified to build a stub object, the presence of the STUB_OBJECT directive in a mapfile is required, and the link-editor uses the information in symbol ASSERT attributes to create global symbols that match those of the real object.
When the real object is built, the presence of STUB_OBJECT causes the link-editor to verify that the mapfiles accurately describe the real object interface, and that a stub object built from them will provide the same linking interface as the real object it represents. All function and data symbols that make up the external interface to the object must be explicitly listed in the mapfile. The mapfile must use symbol scope reduction ('*'), to remove any symbols not explicitly listed from the external interface. All global data in the object is required to have an ASSERT attribute that specifies the symbol type and size. If the ASSERT BIND attribute is not present, the link-editor provides a default assertion that the symbol must be GLOBAL. If the ASSERT SH_ATTR attribute is not present, or does not specify that the section is one of BITS or NOBITS, the link-editor provides a default assertion that the associated section is BITS. All data symbols that describe the same address and size are required to have ASSERT ALIAS attributes specified in the mapfile. If aliased symbols are discovered that do not have an ASSERT ALIAS specified, the link fails and no object is produced. These rules ensure that the mapfiles contain a description of the real shared object's linking interface that is sufficient to produce a stub object with a completely compatible linking interface. SYMBOL_SCOPE/SYMBOL_VERSION ASSERT Attribute The SYMBOL_SCOPE and SYMBOL_VERSION mapfile directives were extended with a symbol attribute named ASSERT. The syntax for the ASSERT attribute is as follows: ASSERT { ALIAS = symbol_name; BINDING = symbol_binding; TYPE = symbol_type; SH_ATTR = section_attributes; SIZE = size_value; SIZE = size_value[count]; }; The ASSERT attribute is used to specify the expected characteristics of the symbol. The link-editor compares the symbol characteristics that result from the link to those given by ASSERT attributes. If the real and asserted attributes do not agree, a fatal error is issued and the output object is not created. In normal use, the link editor evaluates the ASSERT attribute when present, but does not require them, or provide default values for them. The presence of the STUB_OBJECT directive in a mapfile alters the interpretation of ASSERT to require them under some circumstances, and to supply default assertions if explicit ones are not present. See the definition of the STUB_OBJECT Directive for the details. When the -z stub command line option is specified to build a stub object, the information provided by ASSERT attributes is used to define the attributes of the global symbols provided by the object. ASSERT accepts the following: ALIAS Name of a previously defined symbol that this symbol is an alias for. An alias symbol has the same type, value, and size as the main symbol. The ALIAS attribute is mutually exclusive to the TYPE, SIZE, and SH_ATTR attributes, and cannot be used with them. When ALIAS is specified, the type, size, and section attributes are obtained from the alias symbol. BIND Specifies an ELF symbol binding, which can be any of the STB_ constants defined in <sys/elf.h>, with the STB_ prefix removed (e.g. GLOBAL, WEAK). TYPE Specifies an ELF symbol type, which can be any of the STT_ constants defined in <sys/elf.h>, with the STT_ prefix removed (e.g. OBJECT, COMMON, FUNC). In addition, for compatibility with other mapfile usage, FUNCTION and DATA can be specified, for STT_FUNC and STT_OBJECT, respectively. 
    TYPE is mutually exclusive to ALIAS, and cannot be used in conjunction with it. SH_ATTR Specifies attributes of the section associated with the symbol. The section_attributes that can be specified are given in the following table:

        Section Attribute    Meaning
        BITS                 Section is not of type SHT_NOBITS
        NOBITS               Section is of type SHT_NOBITS

    SH_ATTR is mutually exclusive to ALIAS, and cannot be used in conjunction with it. SIZE Specifies the expected symbol size. SIZE is mutually exclusive to ALIAS, and cannot be used in conjunction with it. The syntax for the size_value argument is as described in the discussion of the SIZE attribute below. SIZE The SIZE symbol attribute existed before support for stub objects was introduced. It is used to set the size attribute of a given symbol. This attribute results in the creation of a symbol definition. Prior to the introduction of the ASSERT SIZE attribute, the value of a SIZE attribute was always numeric. While attempting to apply ASSERT SIZE to the objects in the Solaris ON consolidation, I found that many data symbols have a size based on the natural machine wordsize for the class of object being produced. Variables declared as long, or as a pointer, will be 4 bytes in size in a 32-bit object, and 8 bytes in a 64-bit object. Initially, I employed the conditional $if directive to handle these cases as follows: $if _ELF32 foo { ASSERT { TYPE=data; SIZE=4 } }; bar { ASSERT { TYPE=data; SIZE=20 } }; $elif _ELF64 foo { ASSERT { TYPE=data; SIZE=8 } }; bar { ASSERT { TYPE=data; SIZE=40 } }; $else $error UNKNOWN ELFCLASS $endif I found that the situation occurs frequently enough that this is cumbersome. To simplify this case, I introduced the idea of the addrsize symbolic name, and of a repeat count, which together make it simple to specify machine word scalar or array symbols. Both the SIZE and ASSERT SIZE attributes support this syntax: The size_value argument can be a numeric value, or it can be the symbolic name addrsize. addrsize represents the size of a machine word capable of holding a memory address. The link-editor substitutes the value 4 for addrsize when building 32-bit objects, and the value 8 when building 64-bit objects. addrsize is useful for representing the size of pointer variables and C variables of type long, as it automatically adjusts for 32 and 64-bit objects without requiring the use of conditional input. The size_value argument can be optionally suffixed with a count value, enclosed in square brackets. If count is present, size_value and count are multiplied together to obtain the final size value. Using this feature, the example above can be written more naturally as: foo { ASSERT { TYPE=data; SIZE=addrsize } }; bar { ASSERT { TYPE=data; SIZE=addrsize[5] } }; Exported Global Data Is Still A Bad Idea As you can see, the additional plumbing added to the Solaris link-editor to support stub objects is minimal. Furthermore, about 90% of that plumbing is dedicated to handling global data. We have long advised against global data exported from shared objects. There are many ways in which global data does not fit well with dynamic linking. Stub objects simply provide one more reason to avoid this practice. It is always better to export all data via a functional interface. You should always hide your data, and make it available to your users via a function that they can call to acquire the address of the data item.
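    A minimal sketch of that recommendation, reusing the idx5 example (my own illustration, not from the original post): keep the array private and export only a function, so the mapfile lists a single function symbol and no data ASSERTs are needed at all.

        /* idx5 data hidden behind an accessor; only idx5_get is exported. */
        static int idx5_data[5] = { 0, 1, 2, 3, 4 };

        const int *
        idx5_get(void)
        {
                return (idx5_data);
        }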
    However, if you do have to support global data for a stub, perhaps because you are working with an already existing object, it is still easily done, as shown above. Oracle does not like us to discuss hypothetical new features that don't exist in shipping product, so I'll end this section with a speculation. It might be possible to do more in this area to ease the difficulty of dealing with objects that have global data that the users of the library don't need. Perhaps someday... Conclusions It is easy to create stub objects for most objects. If your library only exports function symbols, all you have to do to build a faithful stub object is to add STUB_OBJECT; and then to use the same link command you're currently using, with the addition of the -z stub option. Happy Stubbing!

    Read the article

  • "Shared Folders" Feature Is Not Working In VirtualBox

    - by Islam Hassan
    I have Ubuntu 11.10 as a host and another linux 2.6 distribution as a guest. When I try to set up guest additions, this error message appears Building the shared folder support module .. fail And because of that, when I run the following in terminal mount -t vboxsf shared /root/shared I get the following error message mount: unknown filesystem type 'vboxsf' Any suggestions, please? EDIT Sorry, the mentioned error message isn't complete, this is it: Building the shared folder support module ...fail! (Look at /var/log/vboxadd-install.log to find out what went wrong) This is the content of vboxadd-install.log Uninstalling modules from DKMS Attempting to install using DKMS Creating symlink /var/lib/dkms/vboxguest/4.1.2/source -> /usr/src/vboxguest-4.1.2 DKMS: add Completed. Kernel preparation unnecessary for this kernel. Skipping... Building module: cleaning build area.... make KERNELRELEASE=3.2.6 -C /lib/modules/3.2.6/build M=/var/lib/dkms/vboxguest/4.1.2/build..........................(bad exit status: 2) Error! Bad return status for module build on kernel: 3.2.6 (i686) Consult the make.log in the build directory /var/lib/dkms/vboxguest/4.1.2/build/ for more information. 0 0 ERROR: binary package for vboxguest: 4.1.2 not found Failed to install using DKMS, attempting to install without make KBUILD_VERBOSE=1 -C /lib/modules/3.2.6/build SUBDIRS=/tmp/vbox.0 SRCROOT=/tmp/vbox.0 modules test -e include/generated/autoconf.h -a -e include/config/auto.conf || ( \ echo; \ echo " ERROR: Kernel configuration is invalid."; \ echo " include/generated/autoconf.h or include/config/auto.conf are missing.";\ echo " Run 'make oldconfig && make prepare' on kernel src to fix it."; \ echo; \ /bin/false) mkdir -p /tmp/vbox.0/.tmp_versions ; rm -f /tmp/vbox.0/.tmp_versions/* WARNING: Symbol version dump /usr/src/linux-source-3.2.6/Module.symvers is missing; modules will have no dependencies and modversions. Actually the log file is very large and it exceeds the 30000 characters limit. How can I upload the entire log file here?
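    A likely starting point (my suggestion, inferred from the log rather than stated in it): the vboxsf module fails to compile because the guest's 3.2.6 kernel build tree is incomplete — the make.log warning about the missing Module.symvers says as much. Inside the guest, something along these lines (apt-get assumes a Debian-based guest):

        # For a stock distribution kernel, the headers package is usually enough:
        sudo apt-get install build-essential linux-headers-$(uname -r)
        # For the self-built 3.2.6 kernel the log points at, prepare its source tree:
        cd /usr/src/linux-source-3.2.6
        sudo make oldconfig && sudo make prepare
        # Then rebuild the Guest Additions kernel modules:
        sudo /etc/init.d/vboxadd setup

    The mount will only recognize the vboxsf filesystem type once that module builds and loads.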

    Read the article

  • A strong component keeps everything together

    - by Justin Paul-Oracle
    Most of the time, when you implement a WebCenter Content based system, you require some sort of customization. Sometimes these customizations need a Java class or two, or libraries (for example, the JavaMail API), or Database Objects (like new tables, views, indexes, etc). I have seen that libraries and Database Objects are usually put in place using manual steps. This means that the library jar files are copied to one of the common classes directories (set in the Content CLASSPATH variable) and/or the database scripts are executed manually. I have also seen people place the custom Java classes in the common classes directory. While this may seem like an easy solution, think about a scenario where you need to disable or uninstall the component, or where you have to upgrade or migrate the system. You have to keep these manual steps documented and execute them every time you encounter the above scenarios. It is very common that some of these manual steps are missed when you have multiple teams and people working on the system. Here are a few points to ponder:
    - Place all your custom Java classes within your component. Create a new directory, say ${COMPONENT_DIR}/classes, and place your code there. You can choose to bundle all your classes into a jar or you can place the entire class directory structure. Add a path entry to the Build Settings so that it is bundled with the component when you build it. You also need to update the Custom Class Path and the Custom Class Path Load Order under the Advanced Build Settings. This will ensure that the system CLASSPATH is updated to add this new directory.
    - Create a new component for any new library that you want to add. Add the appropriate path entries to the Build Settings so that it is bundled with the component when you build it. You also need to update the Custom Class Path, Custom Class Path Load Order and/or the Custom Library Path under the Advanced Build Settings. Enter a comma separated list of features that this component will provide. When you create other components that will use the features exposed by this component, make sure that you specify a dependency on this library component by specifying the comma separated list of features in the Advanced Build Settings.
    - The component wizard allows you to create custom install/uninstall Java code. The wizard will create an install filter class when you check the "Has Install" checkbox on the "Install/Uninstall Settings" tab. Consider using this filter class to create database objects when you install the component and drop the objects when you uninstall the component. A sketch of such a filter follows this list.
    - If you do a lot of custom component development, consider creating an install/uninstall Java class which can execute queries defined within the component.
    To sum up, whenever you write a new custom component, make sure that you bundle everything within the component.
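    As a rough sketch of the install filter idea (the class name and DDL are hypothetical, and the intradoc API details are from memory — verify them against your Content Server version), a filter that creates its database objects on install might look like:

        // Referenced from the component's "Has Install" filter setting.
        import intradoc.common.ExecutionContext;
        import intradoc.common.ServiceException;
        import intradoc.data.DataBinder;
        import intradoc.data.DataException;
        import intradoc.data.Workspace;
        import intradoc.shared.FilterImplementor;

        public class MyComponentInstallFilter implements FilterImplementor {
            public int doFilter(Workspace ws, DataBinder binder, ExecutionContext cxt)
                    throws DataException, ServiceException {
                // Create the component's database objects here instead of
                // running scripts by hand; drop them in the uninstall filter.
                ws.executeSQL("CREATE TABLE MyComponent_Config "
                        + "(name varchar(100), value varchar(2000))");
                return CONTINUE;
            }
        }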

    Read the article

  • IIRF reverse proxy problem

    - by Sergei
    Hi everyone, We have a java application ( Atlassian Bamboo) running on port 8085 on Windows 2003. It is accessile as http: //bamboo:8085. I am trying to setup reverse proxy for IIS6 using IIRF so content is accessible via http: //bamboo. It seems that I set it ip correctly, and I can retrieve Status page. This is how my IIRF.ini looks like: RewriteLog c:\temp\iirf RewriteLogLevel 2 StatusUrl /iirfStatus RewriteCond %{HTTP_HOST} ^bambooi$ [I] #This setup works #ProxyPass ^/(.*)$ http://othersite/$1 #This does not ProxyPass ^/(.*)$ http://bamboo:8085/$1 However when I type in http: //bamboo in IE, I get 'page cannot be displayed ' message. FF does not return anything at all. I made Wireshark network dump, selected 'follow TCPstream' and it seems like correct page is being retrieved.Why cannot I see it then? I also noticed that I can retrieve http: //bamboo/favicon.ico so I must be very close to the solution.. This is the Wireshark output: GET / HTTP/1.1 Accept: image/gif, image/jpeg, image/pjpeg, image/pjpeg, application/x-shockwave-flash, application/vnd.ms-excel, application/vnd.ms-powerpoint, application/msword, application/x-ms-application, application/x-ms-xbap, application/vnd.ms-xpsdocument, application/xaml+xml, */* Accept-Language: en-gb User-Agent: Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 5.1; Trident/4.0; .NET CLR 1.1.4322; .NET CLR 2.0.50727; .NET CLR 3.0.4506.2152; .NET CLR 3.5.30729) Accept-Encoding: gzip, deflate Host: bamboo Connection: Keep-Alive Cookie: JSESSIONID=wpsse0zyo4g5 HTTP/1.1 200 200 OK Date: Sat, 30 Jan 2010 09:19:46 GMT Server: Microsoft-IIS/6.0 Via: 1.1 DESTINATION_IP (IIRF 2.0) Content-Type: text/html; charset=utf-8 Transfer-Encoding: chunked <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd"> <html xmlns="http://www.w3.org/1999/xhtml" xml:lang="en" lang="en"> <head> <title>Dashboard</title> <meta http-equiv="content-type" content="text/html; charset=UTF-8" /> <meta name="robots" content="all" /> <meta name="MSSmartTagsPreventParsing" content="true" /> <meta http-equiv="Pragma" content="no-cache" /> <meta http-equiv="Expires" content="-1" /> <link type="text/css" rel="stylesheet" href="/s/1206/1/_/scripts/yui-2.6.0/build/grids/grids.css" /> <!--<link type="text/css" rel="stylesheet" href="/s/1206/1/_/scripts/yui/build/reset-fonts-grids/reset-fonts-grids.css" />--> <link rel="stylesheet" href="/s/1206/1/_/styles/main.css" type="text/css" /> <link rel="stylesheet" href="/s/1206/1/_/styles/main2.css" type="text/css" /> <link rel="stylesheet" href="/s/1206/1/_/styles/global-static.css" type="text/css" /> <link rel="stylesheet" href="/s/1206/1/_/styles/widePlanList.css" type="text/css" /> <link rel="stylesheet" href="/s/1206/1/_/styles/forms.css" type="text/css" /> <link rel="stylesheet" href="/s/1206/1/_/styles/yui-support/yui-custom.css" type="text/css" /> <link rel="shortcut icon" href="/s/1206/1/_/images/icons/favicon.ico" type="image/x-icon"/> <link rel="icon" href="/s/1206/1/_/images/icons/favicon.png" type="image/png" /> <link rel="stylesheet" href="/s/1206/1/_/styles/bamboo-tabs.css" type="text/css" /> <!-- Core YUI--> <link rel="stylesheet" type="text/css" href="/s/1206/1/_/scripts/yui-2.6.0/build/tabview/assets/tabview-core.css"> <link rel="stylesheet" type="text/css" href="/s/1206/1/_/scripts/yui-2.6.0/build/tabview/assets/skins/sam/tabview-skin.css"> <script type="text/javascript" src="/s/1206/1/_/scripts/yui-2.6.0/build/yahoo/yahoo-min.js"></script> <script 
type="text/javascript" src="/s/1206/1/_/scripts/yui-2.6.0/build/event/event-min.js" ></script> <script type="text/javascript" src="/s/1206/1/_/scripts/yui-2.6.0/build/dom/dom-min.js" ></script> <!--<script type="text/javascript" src="/s/1206/1/_/scripts/yui-2.6.0/build/animation/animation.js" ></script>--> <!-- Container --> <script type="text/javascript" src="/s/1206/1/_/scripts/yui-2.6.0/build/container/container-min.js"></script> <script type="text/javascript" src="/s/1206/1/_/scripts/yui-2.6.0/build/connection/connection-min.js"></script> <link type="text/css" rel="stylesheet" href="/s/1206/1/_/scripts/yui-2.6.0/build/container/assets/container.css" /> <!-- Menu --> <script type="text/javascript" src="/s/1206/1/_/scripts/yui-2.6.0/build/menu/menu-min.js"></script> <link type="text/css" rel="stylesheet" href="/s/1206/1/_/scripts/yui-2.6.0/build/menu/assets/menu.css" /> <!-- Tab view --> <!-- JavaScript Dependencies for Tabview: --> <script type="text/javascript" src="/s/1206/1/_/scripts/yui-2.6.0/build/yahoo-dom-event/yahoo-dom-event.js"></script> <script type="text/javascript" src="/s/1206/1/_/scripts/yui-2.6.0/build/element/element-beta-min.js"></script> <!-- Needed for old versions of the YUI --> <link rel="stylesheet" href="/s/1206/1/_/styles/yui-support/tabview.css" type="text/css" /> <link rel="stylesheet" href="/s/1206/1/_/styles/yui-support/round_tabs.css" type="text/css" /> <script type="text/javascript" src="/s/1206/1/_/scripts/yui-2.6.0/build/tabview/tabview-min.js"></script> <script type="text/javascript" src="/s/1206/1/_/scripts/yui-2.6.0/build/json/json-min.js"></script> <script type="text/javascript" src="/s/1206/1/_/scripts/yui-ext/yui-ext-nogrid.js"></script> <script type="text/javascript" src="/s/1206/1/_/scripts/bamboo.js"></script> <script type="text/javascript"> YAHOO.namespace('bamboo'); YAHOO.bamboo.tooltips = new Object(); YAHOO.bamboo.contextPath = ''; YAHOO.ext.UpdateManager.defaults.loadScripts = true; YAHOO.ext.UpdateManager.defaults.indicatorText = '<div class="loading-indicator">Currently loading...</div>'; YAHOO.ext.UpdateManager.defaults.timeout = 60; addUniversalOnload(addConfirmationToLinks); </script> <link rel="alternate" type="application/rss+xml" title="Bamboo RSS feed" href="/rss/createAllBuildsRssFeed.action?feedType=rssAll" /> </head> <body> <ul id="top"> <li id="skipNav"> <a href="#menu">Skip to navigation</a> </li> <li> <a href="#content">Skip to content</a> </li> </ul> <div id="nonFooter"> <div id="hd"> <div id="header"> <div id="logo"> <a href="/start.action"><img src="/images/bamboo_header_logo.gif" alt="Atlassian Bamboo" height="36" width="118" /></a> </div> <ul id="userOptions"> <li id="loginLink"> <a id="login" href="/userlogin!default.action?os_destination=%2Fstart.action">Log in</a> </li> <li id="signupLink"> <a id="signup" href="/signupUser!default.action">Signup</a> </li> <li id="helpLink"> <a id="help" href="http://confluence.atlassian.com/display/BAMBOO">Help</a> </li> </ul> </div> <!-- END #header --> <div id="menu"> <ul> <li><a id="home" href="/start.action" title="Atlassian Bamboo" accesskey="H"> <u>H</u>ome</a></li> <li><a id="authors" href="/authors/gotoAuthorReport.action" accesskey="U">A<u>u</u>thors</a></li> <li><a id="reports" href="/reports/viewReport.action" accesskey="R"> <u>R</u>eports</a></li> </ul> </div> <!-- END #menu --> </div> <!-- END #hd --> <div id="bd"> <div id="content"> <h1>Header here</h1> <div class="topMarginned"> <div id='buildSummaryTabs' class='dashboardTab'> </div> <script type="text/javascript"> 
function initUI(){ var jtabs = new YAHOO.ext.TabPanel('buildSummaryTabs'); YAHOO.bamboo.tabPanel = jtabs; // Use setUrl for Ajax loading var tab3 = jtabs.addTab('allTab', "All Plans"); tab3.setUrl('/ajax/displayAllBuildSummaries.action', null, true); var tab4 = jtabs.addTab("currentTab", "Current Activity"); tab4.setUrl('/ajax/displayCurrentActivity.action', null, true); var handleTabChange = function(e, activePanel) { saveCookie('atlassian.bamboo.dashboard.tab.selected', activePanel.id, 365); }; jtabs.on('tabchange', handleTabChange); var selectedCookie = getCookieValue('atlassian.bamboo.dashboard.tab.selected'); if (jtabs.getTab(selectedCookie)) { jtabs.activate(selectedCookie); } else { jtabs.activate('allTab'); } } YAHOO.util.Event.onContentReady('buildSummaryTabs', initUI); </script> </div> <script type="text/javascript"> setTimeout( "window.location.reload()", 1800*1000 ); </script> <div class="clearer" ></div> </div> <!-- END #content --> </div> <!-- END #bd --> </div> <!-- END #nonFooter --> <div id="ft"> <div id="footer"> <p> Powered by <a href="http://www.atlassian.com/software/bamboo/">Atlassian Bamboo</a> version 2.2.1 build 1206 - <span title="15:59:44 17 Mar 2009">17 Mar 09</span> </p> <ul> <li class="first"> <a href="https://support.atlassian.com/secure/CreateIssue.jspa?pid=10060&issuetype=1">Report a problem</a> </li> <li> <a href="http://jira.atlassian.com/secure/CreateIssue.jspa?pid=11011&issuetype=4">Request a feature</a> </li> <li> <a href="http://forums.atlassian.com/forum.jspa?forumID=103">Contact Atlassian</a> </li> <li> <a href="/viewAdministrators.action">Contact Administrators</a> </li> </ul> </div> <!-- END #footer --> </div> <!-- END #ft -->
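    One thing worth trying (my suggestion, assuming IIRF 2.x — the directive is documented there, but verify against your build): add a ProxyPassReverse so that Location headers, redirects and cookie paths coming back from Bamboo are rewritten to the proxy's name. Without it, the proxied HTML can arrive intact, as your capture shows, while follow-up requests still point at bamboo:8085.

        RewriteLog c:\temp\iirf
        RewriteLogLevel 2
        StatusUrl /iirfStatus
        ProxyPass ^/(.*)$ http://bamboo:8085/$1
        ProxyPassReverse / http://bamboo:8085/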

    Read the article
