Search Results

Search found 17876 results on 716 pages for 'ejb jar xml'.


  • Selenium RC htmlsuite error with IE

    - by Alina
    I am trying to use Selenium RC, but I keep getting this error whenever I start the server: "HTML suite exception seen: java.lang.RuntimeException: sessionId 7643 doesn't exist; perhaps this session was already stopped?" The command I use is:

        java -jar C:\selenium-remote-control-1.0.3\selenium-server-1.0.3\selenium-server.jar -multiwindow -htmlSuite "*iexplore" "https://user1.apps.com/" "C:\TEMP\Selenium Tests\TestSuite1.html" "C:\TEMP\Selenium Tests\results.html"

    However, with the same command, if I change *iexplore to *firefox then it works. I need to run the test with IE. Please help! Many thanks!
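    One hedged suggestion, not a confirmed fix: because the site under test is served over HTTPS, IE's launcher often trips over certificate trust where Firefox does not. Selenium RC ships a *iexploreproxy launcher and a -trustAllSSLCertificates flag for exactly this situation, so a variant of the command might look like:

        java -jar selenium-server.jar -multiwindow -trustAllSSLCertificates -htmlSuite "*iexploreproxy" "https://user1.apps.com/" "C:\TEMP\Selenium Tests\TestSuite1.html" "C:\TEMP\Selenium Tests\results.html"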

    Read the article

  • Compare-Object gives false differences

    - by Andy
    I have some problem with Compare-Object. My task is to get the difference between two directory snapshots made at different times. The first snapshot is taken like this:

        ls -recurse d:\dir | export-clixml dir-20100129.xml

    Then, later, I get the second snapshot and load both of them:

        $b = (import-clixml dir-20100130.xml)
        $a = (import-clixml dir-20100129.xml)

    Next, I try to compare with Compare-Object, like this:

        diff $a $b

    What I get is, in some places, files that were added to $b since $a, but in others, files that were in both snapshots; and some files that were added to $b are not given in Compare-Object's output. Puzzlingly, $b.count - $a.count is EXACTLY the same as (diff $a $b).count. Why is that? OK, Compare-Object has a -property param. I try to use that:

        diff -property fullname $a $b

    And I get a whole mess of differences: it shows me ALL the files. For example, say $a contains:

        A\1.txt
        A\2.txt
        A\3.txt

    And $b contains:

        X\2.mp3
        X\3.mp3
        X\4.mp3
        A\1.txt
        A\2.txt
        A\3.txt

    The diff output is something like this:

        X\2.mp3 =>
        A\1.txt <=
        X\3.mp3 =>
        A\2.txt <=
        X\4.mp3 =>
        A\3.txt <=
        A\1.txt =>
        A\2.txt =>
        A\3.txt =>

    Weird. I think I don't understand something crucial about Compare-Object usage, and manuals are scarce... Please help me to get the DIFFERENCE between two directory snapshots. Thanks in advance.

    UPDATE: I've saved the data as plain strings like this:

        > import-clixml dir-20100129.xml | % { $_.fullname } | out-file -enc utf8 a.txt

    And the results are the same. Here are excerpts of both snapshots (the top 100-something lines, a.txt and b.txt), the output of Compare-Object, and the output of UNIX diff (unified). All files are UTF-8: http://dl.dropbox.com/u/2873752/compare-object-problem.zip

    Read the article

  • apache chokes after 300 connections

    - by john titus
    We have an Apache web server in front of Tomcat hosted on EC2; the instance type is extra large with 34GB memory. Our application deals with a lot of external web services, and we have a very lousy external web service which takes almost 300 seconds to respond to requests during peak hours. During peak hours the server chokes at just about 300 httpd processes:

        ps -ef | grep httpd | wc -l   # returns 300

    I have googled and found numerous suggestions, but nothing seems to work. The following is some configuration I have done, taken directly from online resources. I have increased the limits of max connections and max clients in both Apache and Tomcat. Here are the configuration details:

    Apache:

        <IfModule prefork.c>
            StartServers 100
            MinSpareServers 10
            MaxSpareServers 10
            ServerLimit 50000
            MaxClients 50000
            MaxRequestsPerChild 2000
        </IfModule>

    Tomcat:

        <Connector port="8080" protocol="org.apache.coyote.http11.Http11NioProtocol"
                   connectionTimeout="600000" redirectPort="8443" enableLookups="false" maxThreads="1500"
                   compressableMimeType="text/html,text/xml,text/plain,text/css,application/x-javascript,text/vnd.wap.wml,text/vnd.wap.wmlscript,application/xhtml+xml,application/xml-dtd,application/xslt+xml"
                   compression="on"/>

    sysctl.conf:

        net.ipv4.tcp_tw_reuse=1
        net.ipv4.tcp_tw_recycle=1
        fs.file-max = 5049800
        vm.min_free_kbytes = 204800
        vm.page-cluster = 20
        vm.swappiness = 90
        net.ipv4.tcp_rfc1337=1
        net.ipv4.tcp_max_orphans = 65536
        net.ipv4.ip_local_port_range = 5000 65000
        net.core.somaxconn = 1024

    I have been trying numerous suggestions, but in vain. How do I fix this? I'm sure an m2.xlarge server should serve more than 300 requests; probably I am going wrong with my configuration. The server chokes only during peak hours, when there are 300 concurrent requests waiting for the [300-second delayed] web service to respond. Please help.

    Read the article

  • How to get the PID of a process started by /bin/su -c

    - by crash3k
    I'm writing an init.d script for a Java app, but the Java app should be run by another user. (The OS I'm using is Debian Squeeze.) I already have this:

        /bin/su - $USER -c "cd $PATH; echo $PASSWORD | $JAVA -Xmx256m -jar $PATH/app.jar -d > /dev/null" &
        PID=$!
        /bin/su - $USER -c "echo $PID > $PIDFILE"

    But this will of course only save the PID of the "/bin/su" process instead of the PID of the created Java process.
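    A minimal sketch of one common way around this (variable names are taken from the question): capture the PID inside the shell that su spawns, where $! refers to the Java process itself, escaping the $ so it is not expanded by the outer script. Note also that using $PATH as a variable clobbers the shell's search path, so a different name such as $APPDIR is safer:

        /bin/su - $USER -c "cd $APPDIR; echo $PASSWORD | $JAVA -Xmx256m -jar $APPDIR/app.jar -d > /dev/null & echo \$! > $PIDFILE"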

    Read the article

  • Is there a way to use VirtualBox without using its resource registry?

    - by Catskul
    Summary: VirtualBox seems to want everything to be "registered", which makes it much more annoying to work with on the command line. I'm attempting to create an automated script which will create, move, start, stop, and destroy virtual machines and virtual disks. Requiring registration will complicate the task for the following reasons:

    • It leaves state information around that can cause unpredicted edge cases, causing scripts to fail.
    • It creates potential namespace collisions for multiple processes creating VMs with the same name.
    • Moving/copying resources on the same machine is more complicated, because references in the registry need to be updated.
    • Copying resources (disk + VM combination) to another machine requires reconfiguration once they reach their target machine, and requires the transfer of extra metadata to do the reconfiguration.
    • If something unexpectedly fails, and an unregister thus fails to happen, leftover configuration information can cause problems in subsequent runs.

    Use case: my specific use case is a continuous integration server which creates and destroys VMs and disk images, potentially with the same name, and would require more logic to deal with the registry's statefulness.

    Imaginary example: it seems that I should just be able to do, for example (using some imaginary and/or incorrect commands):

        mkdir foobar
        customdiskimg_script ./foo/foo.vdi
        vboxmanage createvm --name "foo" --ostype Linux --basefolder ./foo/foo.xml
        vboxmanage storagectl ./foo/foo.xml --name foo --add ide
        vboxmanage storageattach --storagectl foo --medium ./foo/foo.vdi ./foo/foo.xml
        vboxmanage startvm ./foo/foo.xml

    TL;DR: is there a way to use VirtualBox without "registering" hard disks and VMs?

    Read the article

  • How can you connect OpenGrok to a SVN repository?

    - by Malcolm Frexner
    I was able to install and use OpenGrok on WinXP using this blog entry: http://theflashesofinsight.wordpress.com/2009/05/11/install-opengrok-on-windows/ I now want to index a Subversion repository. I checked out a repository to the source folder and can search the files. However, the links for history and annotate are not active. I have svn installed, and indexing the directory gives no warnings or errors. (There was an error when I didn't have the SVN client installed.) Is there some configuration needed? I saw this link http://blogs.sun.com/trond/entry/using_subversion_with_opengrok but it did not give me any clue. I used:

        java -Xmx1024m -jar opengrok.jar -W "C:\\OpenGrok\\data\\configuration.xml" -r on -P -S -v -s "C:\\OpenGrok\\source" -d "C:\\OpenGrok\\data"

    and after it:

        java -Xmx1024m -jar opengrok.jar -R "C:\\OpenGrok\\data\\configuration.xml" -H

    This is the resulting config:

        <?xml version="1.0" encoding="UTF-8"?>
        <java version="1.6.0_20" class="java.beans.XMLDecoder">
         <object class="org.opensolaris.opengrok.configuration.Configuration">
          <void property="dataRoot">
           <string>C:\OpenGrok\data</string>
          </void>
          <void property="projects">
           <void method="add">
            <object class="org.opensolaris.opengrok.configuration.Project">
             <void property="description">
              <string>Configuration</string>
             </void>
             <void property="path">
              <string>/Configuration</string>
             </void>
            </object>
           </void>
           <void method="add">
            <object class="org.opensolaris.opengrok.configuration.Project">
             <void property="description">
              <string>test</string>
             </void>
             <void property="path">
              <string>/test</string>
             </void>
            </object>
           </void>
          </void>
          <void property="remoteScmSupported">
           <boolean>true</boolean>
          </void>
          <void property="repositories">
           <void method="add">
            <object class="org.opensolaris.opengrok.history.RepositoryInfo">
             <void property="datePattern">
              <string>yyyy-MM-dd&apos;T&apos;HH:mm:ss.SSS&apos;Z&apos;</string>
             </void>
             <void property="directoryName">
              <string>C:\OpenGrok\source\Configuration</string>
             </void>
             <void property="remote">
              <boolean>true</boolean>
             </void>
             <void property="type">
              <string>Subversion</string>
             </void>
             <void property="working">
              <boolean>true</boolean>
             </void>
            </object>
           </void>
          </void>
          <void property="sourceRoot">
           <string>C:\OpenGrok\source</string>
          </void>
          <void property="verbose">
           <boolean>true</boolean>
          </void>
         </object>
        </java>

    Read the article

  • Problem with Java Mail : No provider for smtp

    - by user359198
    Hello all. I am using JavaMail for a simple application that sends an email when it finds some files in a directory. I managed to get it working from Eclipse: I ran the application and it sent the email with no errors. But when I created the jar and executed it, it fails in the email-sending part with this exception:

        javax.mail.NoSuchProviderException: No provider for smtp
            at javax.mail.Session.getProvider(Session.java:460)
            at javax.mail.Session.getTransport(Session.java:655)
            at javax.mail.Session.getTransport(Session.java:636)
            at main.java.util.MailManager.sendMail(MailManager.java:69)
            at main.java.DownloadsMail.composeAndSendMail(DownloadsMail.java:16)
            at main.java.DownloadsController.checkDownloads(DownloadsController.java:51)
            at main.java.MainDownloadsController.run(MainDownloadsController.java:26)
            at java.lang.Thread.run(Unknown Source)

    I am using the library in this method:

        public static boolean sendMail(String subject, String text) {
            noExceptionsThrown = true;
            try {
                loadProperties();
            } catch (IOException e1) {
                System.out.println("Problem encountered while loading properties");
                e1.printStackTrace();
                noExceptionsThrown = false;
            }
            Properties mailProps = new Properties();
            String host = "mail.smtp.host";
            mailProps.setProperty(host, connectionProps.getProperty(host));
            String tls = "mail.smtp.starttls.enable";
            mailProps.setProperty(tls, connectionProps.getProperty(tls));
            String port = "mail.smtp.port";
            mailProps.setProperty(port, connectionProps.getProperty(port));
            String user = "mail.smtp.user";
            mailProps.setProperty(user, connectionProps.getProperty(user));
            String auth = "mail.smtp.auth";
            mailProps.setProperty(auth, connectionProps.getProperty(auth));
            Session session = Session.getDefaultInstance(mailProps);
            //session.setDebug(true);
            MimeMessage message = new MimeMessage(session);
            try {
                message.setFrom(new InternetAddress(messageProps.getProperty("from")));
                message.addRecipient(Message.RecipientType.TO,
                        new InternetAddress(messageProps.getProperty("to")));
                message.setSubject(subject);
                message.setText(text);
                Transport t = session.getTransport("smtp");
                try {
                    t.connect(connectionProps.getProperty("user"),
                            passwordProps.getProperty("password"));
                    t.sendMessage(message, message.getAllRecipients());
                } catch (Exception e) {
                    System.out.println("Error encountered while sending the email");
                    e.printStackTrace();
                    noExceptionsThrown = false;
                } finally {
                    t.close();
                }
            } catch (Exception e) {
                System.out.println("Error encountered while creating the message");
                e.printStackTrace();
                noExceptionsThrown = false;
            }
            return noExceptionsThrown;
        }

    I am loading these values from properties files:

        mail.smtp.host=smtp.gmail.com
        mail.smtp.starttls.enable=true
        mail.smtp.port=587
        mail.smtp.auth=true

    I have tried changing the host to ssl://smtp.gmail.com and the port to 465 (just to try something different), but that doesn't work either. Anyway, since it works fine from Eclipse with the original parameters, I guess the values are correct and the problem lies in creating the jar. I don't know very much about what can change when creating a jar. Could the JavaMail libraries somehow break when the jar is created? Do you have any ideas? Thank you very much for your help.
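    For what it's worth, the usual cause of "No provider for smtp" in a repackaged jar is that JavaMail registers its transports through META-INF/javamail.providers and META-INF/javamail.default.providers inside mail.jar; a "fat jar" build that merges dependencies often drops or overwrites those files, so the SMTP provider is never registered. A hedged workaround sketch, assuming the com.sun.mail implementation classes are still on the classpath, is to bypass the provider registry and instantiate the SMTP transport directly:

        import javax.mail.Session;
        import javax.mail.Transport;
        import javax.mail.URLName;
        import com.sun.mail.smtp.SMTPTransport;

        // Hedged workaround: construct the SMTP transport directly instead of
        // session.getTransport("smtp"), so no provider registry lookup occurs.
        // Host and port here mirror the question's properties (smtp.gmail.com:587).
        Transport t = new SMTPTransport(session,
                new URLName("smtp", "smtp.gmail.com", 587, null, null, null));

    The cleaner fix is to keep mail.jar intact on the classpath (for example via the manifest Class-Path) rather than merging it into the application jar.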

    Read the article

  • Spring MVC project's .war file fails to deploy using a JRE in the Jetty server

    - by PDKumar
    IDE: STS (Eclipse). Server: jetty-distribution-8.1.15.v20140411. I created a Spring MVC project using the template available in the STS tool (New > Spring Project > Spring MVC Project). I generated a war file (SpringsMVC.war) and placed it in the /webapps folder of the Jetty server. Then I started Jetty using the JRE's java:

        D:\jetty-distribution-8.1.15.v20140411> "C:\Program Files (x86)\Java\jre7\bin\java" -jar start.jar

    Now when I try to access my application in a browser, it shows the following error:

        HTTP ERROR 500
        Problem accessing /SpringMVCProject1/. Reason: Server Error
        Caused by: org.apache.jasper.JasperException: PWC6345: There is an error in invoking javac. A full JDK (not just JRE) is required
            at org.apache.jasper.compiler.DefaultErrorHandler.jspError(DefaultErrorHandler.java:92)
            at org.apache.jasper.compiler.ErrorDispatcher.dispatch(ErrorDispatcher.java:378)
            at org.apache.jasper.compiler.ErrorDispatcher.jspError(ErrorDispatcher.java:119)
            at org.apache.jasper.compiler.Jsr199JavaCompiler.compile(Jsr199JavaCompiler.java:208)
            at org.apache.jasper.compiler.Compiler.generateClass(Compiler.java:384)
            at org.apache.jasper.compiler.Compiler.compile(Compiler.java:453)
            at org.apache.jasper.JspCompilationContext.compile(JspCompilationContext.java:625)
            at org.apache.jasper.servlet.JspServletWrapper.service(JspServletWrapper.java:374)
            at org.apache.jasper.servlet.JspServlet.serviceJspFile(JspServlet.java:492)
            at org.apache.jasper.servlet.JspServlet.service(JspServlet.java:378)
            at javax.servlet.http.HttpServlet.service(HttpServlet.java:848)
            at org.eclipse.jetty.servlet.ServletHolder.handle(ServletHolder.java:684)
            at org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:503)
            at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:137)
            at org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:575)
            at org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:231)
            at org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1086)
            at org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:429)
            at org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:193)
            at org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1020)
            at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:135)
            at org.eclipse.jetty.server.Dispatcher.forward(Dispatcher.java:276)
            at org.eclipse.jetty.server.Dispatcher.forward(Dispatcher.java:103)
            at org.springframework.web.servlet.view.InternalResourceView.renderMergedOutputModel(InternalResourceView.java:238)
            at org.springframework.web.servlet.view.AbstractView.render(AbstractView.java:262)
            at org.springframework.web.servlet.DispatcherServlet.render(DispatcherServlet.java:1180)
            at org.springframework.web.servlet.DispatcherServlet.doDispatch(DispatcherServlet.java:950)
            at org.springframework.web.servlet.DispatcherServlet.doService(DispatcherServlet.java:852)
            at org.springframework.web.servlet.FrameworkServlet.processRequest(FrameworkServlet.java:882)
            at org.springframework.web.servlet.FrameworkServlet.doGet(FrameworkServlet.java:778)
            at javax.servlet.http.HttpServlet.service(HttpServlet.java:735)
            at javax.servlet.http.HttpServlet.service(HttpServlet.java:848)
            at org.eclipse.jetty.servlet.ServletHolder.handle(ServletHolder.java:684)
            at org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:503)
            at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:137)
            at org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:533)
            at org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:231)
            at org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1086)
            at org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:429)
            at org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:193)
            at org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1020)
            at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:135)
            at org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:255)
            at org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:154)
            at org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:116)
            at org.eclipse.jetty.server.Server.handle(Server.java:370)
            at org.eclipse.jetty.server.AbstractHttpConnection.handleRequest(AbstractHttpConnection.java:494)
            at org.eclipse.jetty.server.AbstractHttpConnection.headerComplete(AbstractHttpConnection.java:971)
            at org.eclipse.jetty.server.AbstractHttpConnection$RequestHandler.headerComplete(AbstractHttpConnection.java:1033)
            at org.eclipse.jetty.http.HttpParser.parseNext(HttpParser.java:644)
            at org.eclipse.jetty.http.HttpParser.parseAvailable(HttpParser.java:235)
            at org.eclipse.jetty.server.AsyncHttpConnection.handle(AsyncHttpConnection.java:82)
            at org.eclipse.jetty.io.nio.SelectChannelEndPoint.handle(SelectChannelEndPoint.java:696)
            at org.eclipse.jetty.io.nio.SelectChannelEndPoint$1.run(SelectChannelEndPoint.java:53)
            at org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:608)
            at org.eclipse.jetty.util.thread.QueuedThreadPool$3.run(QueuedThreadPool.java:543)
            at java.lang.Thread.run(Unknown Source)

    But if I use the JDK's java, the war file gets deployed and the output is displayed perfectly:

        D:\jetty-distribution-8.1.15.v20140411> "C:\Program Files (x86)\Java\jdk1.7.0_55\bin\java" -jar start.jar

        Hello world!
        The time on the server is August 20, 2014 3:42:53 PM IST.

    Please tell me: is it not possible to use a JRE to run a Spring MVC project?

    Read the article

  • Maven not setting classpath for dependencies properly

    - by Matthew
    OS name: "linux" version: "2.6.32-27-generic" arch: "i386" Family: "unix" Apache Maven 2.2.1 (r801777; 2009-08-06 12:16:01-0700) Java version: 1.6.0_20 I am trying to use the mysql dependency in with maven in ubuntu. If I move the "mysql-connector-java-5.1.14.jar" file that maven downloaded into my $JAVA_HOME/jre/lib/ext/ folder, everything is fine when I run the jar. I think I should be able to just specify the dependency in the pom.xml file and maven should take care of setting the classpath for the dependency jars automatically. Is this incorrect? My pom.xml file looks like this: <project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd"> <modelVersion>4.0.0</modelVersion> <groupId>com.ion.common</groupId> <artifactId>TestPreparation</artifactId> <version>1.0-SNAPSHOT</version> <packaging>jar</packaging> <name>TestPrep</name> <url>http://maven.apache.org</url> <properties> <project.build.sourceEncoding>UTF-8</project.build.sourceEncoding> </properties> <build> <plugins> <plugin> <groupId>org.apache.maven.plugins</groupId> <artifactId>maven-jar-plugin</artifactId> <configuration> <archive> <manifest> <addClasspath>true</addClasspath> <mainClass>com.ion.common.App</mainClass> </manifest> </archive> </configuration> </plugin> </plugins> </build> <dependencies> <!-- JUnit testing dependency --> <dependency> <groupId>junit</groupId> <artifactId>junit</artifactId> <version>3.8.1</version> <scope>test</scope> </dependency> <!-- MySQL database driver --> <dependency> <groupId>mysql</groupId> <artifactId>mysql-connector-java</artifactId> <version>5.1.14</version> <scope>compile</scope> </dependency> </dependencies> </project> The command "mvn package" builds it without any problems, and I can run it, but when the application attempts to access the database, this error is presented: java.lang.ClassNotFoundException: com.mysql.jdbc.Driver at java.net.URLClassLoader$1.run(URLClassLoader.java:217) at java.security.AccessController.doPrivileged(Native Method) at java.net.URLClassLoader.findClass(URLClassLoader.java:205) at java.lang.ClassLoader.loadClass(ClassLoader.java:321) at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:294) at java.lang.ClassLoader.loadClass(ClassLoader.java:266) at java.lang.Class.forName0(Native Method) at java.lang.Class.forName(Class.java:186) at com.ion.common.Functions.databases(Functions.java:107) at com.ion.common.App.main(App.java:31) The line it is failing on is: Class.forName("com.mysql.jdbc.Driver"); Can anyone tell me what I'm doing wrong or how to fix it?

    Read the article

  • Can't run init script on boot after another init script

    - by Colin McQueen
    I have three init scripts. The Broker init script runs fine, but when I try to run the Consumer init script and then the Data Collector init script, the only process that is running is the Broker. I added the symbolic links to the run levels using update-rc.d for each script, and I also changed the number prefixes in the symbolic links to try to run the scripts in the proper order, but that did not work. I am able to run the scripts from the terminal and they work fine, but they need to all be started on boot. Any ideas as to why my other scripts are not running? Inside my Consumer and Data Collector scripts I am running:

        su user1 -c 'java -jar foo.jar'

    to start the services. The Consumer Java class sits and waits for a message from the queue, so the Java code does not stop until I specify the stop argument for the init script. The Broker has to start first, then the Consumer, then the Data Collector. Adding the symbolic links for the runlevels:

        sudo update-rc.d Broker defaults 10 90
        sudo update-rc.d Consumer defaults 15 85
        sudo update-rc.d DataCollector defaults 20 80
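    A hedged guess worth checking: if the Consumer script runs su user1 -c 'java -jar foo.jar' in the foreground, the sysv init sequence will block on it and never reach the next script, since that Java process deliberately does not exit. Backgrounding it and recording the PID, for example:

        su user1 -c 'java -jar foo.jar & echo $! > /var/run/consumer.pid'

    lets the script return so the Data Collector can start. (The pid-file path is illustrative.)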

    Read the article

  • Java-JDBC-MySQL Error

    - by LeonardPeris
    I'm trying to get my Java program to talk to a MySQL DB. So I did some reading and downloaded MySQL Connector/J. I've extracted it into my home directory ~. Here are the contents:

        user@hamster:~$ ls
        LoadDriver.class  LoadDriver.java  mysql-connector-java-5.1.18-bin.jar

    The contents of LoadDriver.java are:

        user@hamster:~$ cat LoadDriver.java
        import java.sql.Connection;
        import java.sql.DriverManager;
        import java.sql.SQLException;

        // Notice, do not import com.mysql.jdbc.*
        // or you will have problems!
        public class LoadDriver {
            public static void main(String[] args) {
                try {
                    // The newInstance() call is a work around for some
                    // broken Java implementations
                    Class.forName("com.mysql.jdbc.Driver").newInstance();
                } catch (Exception ex) {
                    System.out.println(ex);
                }
            }
        }

    The contents are the same as http://dev.mysql.com/doc/refman/5.1/en/connector-j-usagenotes-basic.html#connector-j-usagenotes-connect-drivermanager with the only change being that the exception is printed to the console in the catch block. I compile it as follows:

        leonard@hamster:~$ javac LoadDriver.java

    When I try to execute it, the following is the output:

        leonard@hamster:~$ java LoadDriver
        java.lang.ClassNotFoundException: com.mysql.jdbc.Driver

    This output is consistent with the command used, but when I try to run it with the prescribed CLASSPATH method I run into the following issue:

        leonard@hamster:~$ java -cp /home/leonard/mysql-connector-java-5.1.18-bin.jar LoadDriver
        Exception in thread "main" java.lang.NoClassDefFoundError: LoadDriver
        Caused by: java.lang.ClassNotFoundException: LoadDriver
            at java.net.URLClassLoader$1.run(URLClassLoader.java:217)
            at java.security.AccessController.doPrivileged(Native Method)
            at java.net.URLClassLoader.findClass(URLClassLoader.java:205)
            at java.lang.ClassLoader.loadClass(ClassLoader.java:321)
            at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:294)
            at java.lang.ClassLoader.loadClass(ClassLoader.java:266)
        Could not find the main class: LoadDriver. Program will exit.

    Am I missing something? How do I get MySQL's own code samples running?
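    The likely explanation, though hedged since I can only see the commands quoted above: passing -cp replaces the default classpath, which normally includes the current directory, so LoadDriver.class itself can no longer be found. Adding "." back in, with ":" as the separator on Linux, should fix both problems at once:

        leonard@hamster:~$ java -cp .:mysql-connector-java-5.1.18-bin.jar LoadDriver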

    Read the article

  • Why doesn't Gradle include transitive dependencies in compile / runtime classpath?

    - by Francis Toth
    I'm learning how Gradle works, and I can't understand how it resolves a project's transitive dependencies. For now, I have two projects:

    • projectA: has a couple of dependencies on external libraries
    • projectB: has only one dependency, on projectA

    No matter how I try, when I build projectB, Gradle doesn't include any of projectA's dependencies (X and Y) in projectB's compile or runtime classpath. I've only managed to make it work by including projectA's dependencies in projectB's build script, which, in my opinion, does not make any sense. These dependencies should be attached to projectB automatically. I'm pretty sure I'm missing something, but I can't figure out what. I've read about "lib dependencies", but it seems to apply only to local projects, as described here, not to external dependencies. Here is the build.gradle I use in the root project (the one that contains both projectA and projectB):

        buildscript {
            repositories {
                mavenCentral()
            }
            dependencies {
                classpath 'com.android.tools.build:gradle:0.3'
            }
        }

        subprojects {
            apply plugin: 'java'
            apply plugin: 'idea'
            group = 'com.company'
            repositories {
                mavenCentral()
                add(new org.apache.ivy.plugins.resolver.SshResolver()) {
                    name = 'customRepo'
                    addIvyPattern "ssh://.../repository/[organization]/[module]/[revision]/[module].xml"
                    addArtifactPattern "ssh://.../[organization]/[module]/[revision]/[module](-[classifier]).[ext]"
                }
            }
            sourceSets {
                main {
                    java {
                        srcDir 'src/'
                    }
                }
            }
            idea.module {
                downloadSources = true
            }
            // task that creates a sources jar
            task sourceJar(type: Jar) {
                from sourceSets.main.java
                classifier 'sources'
            }
            // Publishing configuration
            uploadArchives {
                repositories {
                    add project.repositories.customRepo
                }
            }
            artifacts {
                archives(sourceJar) {
                    name "$name-sources"
                    type 'source'
                    builtBy sourceJar
                }
            }
        }

    This one concerns projectA only:

        version = '1.0'
        dependencies {
            compile 'com.company:X:1.0'
            compile 'com.company:B:1.0'
        }

    And this is the one used by projectB:

        version = '1.0'
        dependencies {
            compile ('com.company:projectA:1.0') {
                transitive = true
            }
        }

    Thank you in advance for any help, and please excuse my bad English.
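    A hedged observation: since projectA and projectB live in the same multi-project build, the more idiomatic wiring is a project dependency, which carries transitive dependencies automatically:

        // in projectB's build.gradle
        dependencies {
            compile project(':projectA')
        }

    If projectB must instead consume projectA through the custom Ivy repository, the transitive information only exists if the published ivy.xml for projectA actually lists X and Y as dependencies; if the SshResolver pattern publishes the artifact without usable module metadata, Gradle has nothing to resolve transitively.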

    Read the article

  • NetBeans Sources as a Platform

    - by Geertjan
    By default, when you create a new NetBeans module, the 'development IDE' is the platform to which you'll be deploying your module. It's a good idea to create your own platform and check that into your own repo, so that everyone working on your project will be able to work with a standardized platform, rather than whatever happens to be beneath the development IDE you're using. Something else you can do is use the NetBeans sources as your platform, once you've checked them out. That's something I did the other day when trying to see whether adding 'setActivatedNodes' to NbSheet was sufficient for getting UndoRedo enabled in the Properties Window. So that's a good use case: you'd like to change the NetBeans Platform somehow, or you're fixing a bug; in other words, you need to change the NetBeans Platform sources in some way and would then like to try out the result of your changes as a client of those changes. In that scenario, here's how to set up and use a NetBeans Platform from the NetBeans sources:

    • Run 'ant build platform' on the root of the NetBeans sources. You'll end up with nbbuild/netbeans containing a subfolder 'platform' and a subfolder 'harness'. There's your NetBeans Platform.
    • Go to Tools | NetBeans Platform and browse to nbbuild/netbeans, registering it as your NetBeans Platform.
    • Create a new NetBeans module, using the new NetBeans Platform as the platform.

    Now the cool thing is you can open any of the NetBeans Platform modules from the NetBeans sources. When you change the source code of one of these modules and then build that module, the changed JAR will automatically be added to the right place in the nbbuild/netbeans folder. And when you do a 'clean' on a NetBeans Platform module, the related JAR will be removed from nbbuild/netbeans. In other words, in this way, by changing the NetBeans sources, you're directly changing the platform that your custom module will be running on when you deploy it. That's pretty cool and gives you a more connected relationship with your platform, since you're able to change it in the same way as the custom modules you create.

    Read the article

  • ADF Seeded Customizations in JDeveloper 11.1.2.1

    - by Dmitry Nefedkin
    For an ADF training session I needed a demo application that shows ADF seeded customizations functionality. I'm using the latest JDeveloper 11.1.2.1, so I decided to download the “Customizing and Personalizing an ADF Application” completed tutorial application available here. I downloaded and unzipped CustomizeApp.zip and opened CustomizeApp.jws in JDeveloper 11.1.2.1 using the Customization Role. The result was the following: MDS-00036 “Cannot instantiate the class oracle.model.mycompany.SiteCC”. I thought: “OK, that's because the SiteCC class is not accessible to the JDeveloper classloader; I should jar it and put it into <JDEVELOPER_HOME>\jdev\lib\patches like I did in JDeveloper 11.1.1.5 and earlier.” No way: in JDeveloper 11.1.2 we do not have this patches directory at all! It seems that this is because of the new architecture of the JDeveloper plugins, based on OSGi. I looked through the tutorial and did not find any step related to jarring the SiteCC class and moving it to a specific directory. So JDeveloper 11.1.2 is smart enough to find my customization class and add it to the classpath without any specific actions on my side. But why was I getting this “cannot instantiate the class” error? I checked the full path to my CustomizeApp.jws - c:\temp\ADF personalizations\CustomizeApp\CustomizeApp.jws - and noticed the space in the name of the directory. Was it the root cause of the issue? Yes! I renamed the ADF personalizations folder to pers, opened c:\temp\pers\CustomizeApp\CustomizeApp.jws, and got the expected behaviour. So, be aware of spaces in paths when working with JDeveloper…

    Read the article

  • Java IDE written in pure Java?

    - by Darestium
    Is there a Java IDE written in Java? I just got my year 9 DET laptop today at school, and there are all sorts of restrictions in place. Somewhat annoyingly, you cannot run any executable other than the ones already installed on the system (for some reason they haven't disabled the use of Command Prompt, PowerShell, or, strangely enough, regedit). They do allow you to run Java executables, so I thought that would be the only way to be able to program on my crappy laptop at school (when I have finished all my work, naturally) :D

    Edit: By written in Java, I also mean that the executable used to run the program has the file extension ".jar", thus running on the JVM.

    Edit 2: I tried the DrJava IDE, and it worked great, thanks (I can compile and execute programs)! Regarding running Eclipse through the command line using the command java -jar "C:/Users.../org.eclipse...": this results in an error, producing a log file whose main error is: MESSAGE An error occurred while automatically activating bundle org.eclipse.ui.workbench (182). How do I fix this error (I much prefer working with Eclipse to any other IDE)?

    Edit 3: Regarding my last edit, just disregard it :D. I fixed the problem by downloading the latest version of Eclipse.

    Read the article

  • Cannot install openjdk on Hardy Heron

    - by infaustus
    I know that Hardy Heron is very old, but don't ask why Hardy... I've tried:

        root@vz10931:/etc/apt# apt-get install openjdk-6-jre
        Reading package lists... Done
        Building dependency tree
        Reading state information... Done
        You might want to run `apt-get -f install' to correct these:
        The following packages have unmet dependencies:
          openjdk-6-jre: Depends: libasound2 (> 1.0.14) but it is not going to be installed
                         Depends: libgif4 (>= 4.1.6) but it is not going to be installed
                         Depends: libxtst6 but it is not going to be installed
                         Depends: openjdk-6-jre-headless (>= 6b18-1.8.3-0ubuntu1~8.04.2) but it is not going to be installed
          vim: Depends: vim-common (= 1:7.1-138+1ubuntu3.1) but 2:7.3.154+hg~74503f6ee649-2ubuntu3 is to be installed
        E: Unmet dependencies. Try 'apt-get -f install' with no packages (or specify a solution).

    My sources.list:

        deb http://pl.archive.ubuntu.com/ubuntu/ hardy main restricted universe multiverse
        deb-src http://pl.archive.ubuntu.com/ubuntu/ hardy main restricted universe multiverse
        deb http://pl.archive.ubuntu.com/ubuntu/ hardy-updates main restricted universe multiverse
        deb-src http://pl.archive.ubuntu.com/ubuntu/ hardy-updates main restricted universe multiverse
        deb http://security.ubuntu.com/ubuntu hardy-security main restricted universe multiverse
        deb-src http://security.ubuntu.com/ubuntu hardy-security main restricted universe multiverse

    And:

        root@vz10931:/etc/apt# ls -l sources.list.d/
        total 0

    Please help. The last time I tried apt-get -f install, I had to install a new system because everything crashed.

    Edit: I checked which Java packages I have installed:

        root@vz10931:/var/www/mailer# dpkg --list | grep java
        iU sun-java6-bin 6.24-1build0.8.04.1 Sun Java(TM) Runtime Environment (JRE) 6 (ar
        iU sun-java6-jdk 6.24-1build0.8.04.1 Sun Java(TM) Development Kit (JDK) 6
        iU sun-java6-jre 6.24-1build0.8.04.1 Sun Java(TM) Runtime Environment (JRE) 6 (ar

    but when I try to start a jar file with java -jar program.jar, an error appears:

        -bash: java: command not found

    Read the article

  • Minecraft in jdk 1.7.0_u2 x64

    - by Nela Drobná
    I have Ubuntu 11.10 64-bit and I installed JDK 1.7.0 update 2 x64 via the webupd8 page. But currently I have a problem with the Minecraft game. After downloading the launcher from Minecraft.net and launching the game with java -jar /home/zrebec/Downloads/minecraft.jar, the launcher starts normally; after login the game downloads the updates, but then I get just a black screen with this in the terminal:

        Setting user: zrebec, -356009615199623309
        Exception in thread "Minecraft main thread" java.lang.UnsatisfiedLinkError: /home/zrebec/.minecraft/bin/natives/liblwjgl.so: /home/zrebec/.minecraft/bin/natives/liblwjgl.so: wrong ELF class: ELFCLASS32 (Possible cause: architecture word width mismatch)
            at java.lang.ClassLoader$NativeLibrary.load(Native Method)
            at java.lang.ClassLoader.loadLibrary0(ClassLoader.java:1928)
            at java.lang.ClassLoader.loadLibrary(ClassLoader.java:1825)
            at java.lang.Runtime.load0(Runtime.java:792)
            at java.lang.System.load(System.java:1059)
            at org.lwjgl.Sys$1.run(Sys.java:69)
            at java.security.AccessController.doPrivileged(Native Method)
            at org.lwjgl.Sys.doLoadLibrary(Sys.java:65)
            at org.lwjgl.Sys.loadLibrary(Sys.java:81)
            at org.lwjgl.Sys.<clinit>(Sys.java:98)
            at org.lwjgl.opengl.Display.<clinit>(Display.java:132)
            at net.minecraft.client.Minecraft.a(SourceFile:180)
            at net.minecraft.client.Minecraft.run(SourceFile:648)
            at java.lang.Thread.run(Thread.java:722)

    Please, can anyone help me with this? I think the problem is in the architecture, because of:

        liblwjgl.so: /home/zrebec/.minecraft/bin/natives/liblwjgl.so: wrong ELF class: ELFCLASS32 (Possible cause: architecture word width mismatch)

    Any idea, please? I know, maybe this one is off topic because maybe it's not an Ubuntu problem, but everything else works perfectly in 64-bit, and I think an accepted answer could help many users and make playing games under Linux better. Really. Thank you very much for any idea.

    Read the article

  • Groovy Grapes in NetBeans IDE

    - by Geertjan
    The start of Groovy Grapes support in NetBeans IDE. Below you see a pure Groovy project, with the Groovy JAR and the Ivy JAR automatically on its classpath. There's also a Groovy script that makes use of a @Grab annotation. In the bottom left, in the Services window, you also see a Grape Repository browser, i.e., showing you the JARs that are currently in ".groovy/grapes". Click the images below to get a better look at them. Next, you see what happens when the project is run. The @Grab annotation automatically starts downloading the JARs that are needed and puts them into the ".groovy/grapes" folder. However, the "no suitable classloader found for grab" error message (which Google shows is a problem for lots of developers) prevents the application from running successfully. The final screenshot shows that I've put the JARs that I need onto the classpath of the project. I did that manually, hoping to learn from the NetBeans Maven project or the NetBeans Gradle project how to do that automatically. Also note that the @Grab annotation has been commented out. Now the error message about the classloader is avoided and the project runs. What needs to happen for Groovy Grapes support to be complete in NetBeans IDE (a minimal usage sketch follows this list):

    • Figure out how to add the downloaded JARs to the project classpath automatically.
    • Fix the refresh problem in the Grape Repository browser, i.e., right now the refresh doesn't happen automatically yet.
    • Hopefully find a way to get around the grab classloader problem, i.e., it's not ideal that one needs to comment out the annotation.
    • Let the user specify a different Grape repository, i.e., right now ".groovy/grapes" is assumed, but the user should be able to point the repository browser to something different. Maybe there should be support for multiple Grape repositories?

    Comments/feedback/help is welcome.
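    For readers unfamiliar with Grape, a minimal script using the annotation looks like this (the artifact coordinates here are just an illustration):

        @Grab(group='org.apache.commons', module='commons-lang3', version='3.1')
        import org.apache.commons.lang3.StringUtils

        // Grape downloads the jar into ~/.groovy/grapes on first run.
        println StringUtils.capitalize('grapes put this jar in ~/.groovy/grapes')

    One workaround sometimes suggested for the classloader error, hedged since it depends on how the script is launched, is to add @GrabConfig(systemClassLoader=true) so the grabbed jars are attached to the system classloader.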

    Read the article

  • I can't run NetBeans even though it installed successfully

    - by David
    I'm new to Ubuntu as well as NetBeans. I installed NetBeans, and I've made sure to install all the JDKs and JREs I could find. It installed without errors. I also saw this question and made sure I followed all the instructions there as well. I never got any error messages of any kind; so far as I know, it installed okay. However, when I try to run my project, I get this message at the bottom of the NetBeans IDE:

        ant -f /root/NetBeansProjects/samp1 -Djsp.includes=/root/NetBeansProjects/samp1/build/web/one.jsp -DforceRedeploy=false -Dclient.urlPart=/one.jsp -Ddirectory.deployment.supported=true -Djavac.jsp.includes=org/apache/jsp/one_jsp.java -Dnb.wait.for.caches=true run
        /root/NetBeansProjects/samp1/nbproject/build-impl.xml:774: The libs.CopyLibs.classpath property is not set up. This property must point to the org-netbeans-modules-java-j2seproject-copylibstask.jar file, which is part of the NetBeans IDE installation and is usually located in the <netbeans_installation>/java<version>/ant/extra folder. Either open the project in the IDE and make sure the CopyLibs library exists, or set up the property manually. For example: ant -Dlibs.CopyLibs.classpath=a/path/to/org-netbeans-modules-java-j2seproject-copylibstask.jar
        BUILD FAILED (total time: 0 seconds)
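    Following the hint in the error message itself, one hedged sketch of a manual fix (the installation path below is an assumption; substitute wherever NetBeans is actually installed on the machine) is:

        ant -Dlibs.CopyLibs.classpath=/usr/local/netbeans/java/ant/extra/org-netbeans-modules-java-j2seproject-copylibstask.jar -f /root/NetBeansProjects/samp1 run

    Opening the project in the IDE and letting it recreate the CopyLibs library, as the message suggests, is usually the simpler route.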

    Read the article

  • WiX major upgrade refuses to replace existing file!

    - by Joshua
    Hello! I have inherited this project with a WiX installer, and am required to make this version usefully upgrade the previous one! My problem comes in replacing the database files with new versions. No, the problem is not that they are locked; I can replace them manually, and in fact now ONE of them is replaced, while the other is not. Please, please tell me what I'm doing wrong here. I've tried several other solutions (including registry keys as KeyPath instead of CompanionFile) but nothing is quite working. Here is (most of) the code of the .wxs file:

        <Product Id='$(var.ProductCode)' UpgradeCode='$(var.UpgradeCode)' Name="Pathways"
                 Version='$(var.ProductVersion)' Manufacturer='$(var.Manufacturer)' Language='1033'>
          <Package Id="*" Description="Pathways Directory Software" InstallerVersion="301" Compressed="yes" />
          <WixVariable Id="WixUILicenseRtf" Value="License.rtf" />
          <Media Id="1" Cabinet="Pathways.cab" EmbedCab="yes" />
          <Upgrade Id="$(var.UpgradeCode)">
            <UpgradeVersion OnlyDetect="no" Maximum="$(var.ProductVersion)" IncludeMaximum="no" Language="1033" Property="OLDAPPFOUND" />
            <UpgradeVersion Minimum="$(var.ProductVersion)" IncludeMinimum="yes" OnlyDetect="no" Language="1033" Property="NEWAPPFOUND" />
          </Upgrade>
          <Property Id="ALLUSERS">2</Property>

          <!-- directories -->
          <Directory Id="TARGETDIR" Name="SourceDir">
            <!-- program files directory -->
            <Directory Id="ProgramFilesFolder">
              <Directory Id="INSTALLDIR" Name="Pathways"/>
            </Directory>
            <!-- application data directory -->
            <Directory Id="CommonAppDataFolder" Name="CommonAppData">
              <Directory Id="CommonAppDataPathways" Name="Pathways" />
            </Directory>
            <!-- start menu program directory -->
            <Directory Id="ProgramMenuFolder">
              <Directory Id="ProgramsMenuPathwaysFolder" Name="Pathways" />
            </Directory>
            <!-- desktop directory -->
            <Directory Id="DesktopFolder" />
          </Directory>

          <Icon Id="PathwaysIcon" SourceFile="\\Fileserver\Release\Pathways\Latest\Release\Pathways.exe" />

          <!-- components in the reference to the install directory -->
          <DirectoryRef Id="INSTALLDIR">
            <Component Id="Application" Guid="EEE4EB55-A515-4872-A4A5-06D6AB4A06A6">
              <File Id="pathwaysExe" Name="Pathways.exe" DiskId="1"
                    Source="\\Fileserver\Release\Pathways\Latest\Release\Pathways.exe" Vital="yes" KeyPath="yes"
                    Assembly=".net" AssemblyApplication="pathwaysExe" AssemblyManifest="pathwaysExe">
                <!--<netfx:NativeImage Id="ngen_Pathways.exe" Platform="32bit" Priority="2"/> -->
              </File>
              <File Id="pathwaysChm" Name="Pathways.chm" DiskId="1" Source="\\fileserver\Release\Pathways\Dependencies\Pathways.chm" />
              <File Id="publicKeyXml" ShortName="RSAPUBLI.XML" Name="RSAPublicKey.xml" DiskId="1" Source="\\fileserver\Release\Pathways\Dependencies\RSAPublicKey.xml" Vital="yes" />
              <File Id="staticListsXml" ShortName="STATICLI.XML" Name="StaticLists.xml" DiskId="1" Source="\\fileserver\Release\Pathways\Dependencies\StaticLists.xml" Vital="yes" />
              <File Id="axInteropMapPointDll" ShortName="AXMPOINT.DLL" Name="AxInterop.MapPoint.dll" DiskId="1" Source="\\fileserver\Release\Pathways\Dependencies\AxInterop.MapPoint.dll" Vital="yes" />
              <File Id="interopMapPointDll" ShortName="INMPOINT.DLL" Name="Interop.MapPoint.dll" DiskId="1" Source="\\fileserver\Release\Pathways\Dependencies\Interop.MapPoint.dll" Vital="yes" />
              <File Id="mapPointDll" ShortName="MAPPOINT.DLL" Name="MapPoint.dll" DiskId="1" Source="\\fileserver\Release\Pathways\Dependencies\Interop.MapPoint.dll" Vital="yes" />
              <File Id="devExpressData63Dll" ShortName="DAAT63.DLL" Name="DevExpress.Data.v6.3.dll" DiskId="1" Source="\\fileserver\Release\Pathways\Dependencies\DevExpress.Data.v6.3.dll" Vital="yes" />
              <File Id="devExpressUtils63Dll" ShortName="UTILS63.DLL" Name="DevExpress.Utils.v6.3.dll" DiskId="1" Source="\\fileserver\Release\Pathways\Dependencies\DevExpress.Utils.v6.3.dll" Vital="yes" />
              <File Id="devExpressXtraBars63Dll" ShortName="BARS63.DLL" Name="DevExpress.XtraBars.v6.3.dll" DiskId="1" Source="\\fileserver\Release\Pathways\Dependencies\DevExpress.XtraBars.v6.3.dll" Vital="yes" />
              <File Id="devExpressXtraNavBar63Dll" ShortName="NAVBAR63.DLL" Name="DevExpress.XtraNavBar.v6.3.dll" DiskId="1" Source="\\fileserver\Release\Pathways\Dependencies\DevExpress.XtraNavBar.v6.3.dll" Vital="yes" />
              <File Id="devExpressXtraCharts63Dll" ShortName="CHARTS63.DLL" Name="DevExpress.XtraCharts.v6.3.dll" DiskId="1" Source="\\fileserver\Release\Pathways\Dependencies\DevExpress.XtraCharts.v6.3.dll" Vital="yes" />
              <File Id="devExpressXtraEditors63Dll" ShortName="EDITOR63.DLL" Name="DevExpress.XtraEditors.v6.3.dll" DiskId="1" Source="\\fileserver\Release\Pathways\Dependencies\DevExpress.XtraEditors.v6.3.dll" Vital="yes" />
              <File Id="devExpressXtraPrinting63Dll" ShortName="PRINT63.DLL" Name="DevExpress.XtraPrinting.v6.3.dll" DiskId="1" Source="\\fileserver\Release\Pathways\Dependencies\DevExpress.XtraPrinting.v6.3.dll" Vital="yes" />
              <File Id="devExpressXtraReports63Dll" ShortName="REPORT63.DLL" Name="DevExpress.XtraReports.v6.3.dll" DiskId="1" Source="\\fileserver\Release\Pathways\Dependencies\DevExpress.XtraReports.v6.3.dll" Vital="yes" />
              <File Id="devExpressXtraRichTextEdit63Dll" ShortName="RICHTE63.DLL" Name="DevExpress.XtraRichTextEdit.v6.3.dll" DiskId="1" Source="\\fileserver\Release\Pathways\Dependencies\DevExpress.XtraRichTextEdit.v6.3.dll" Vital="yes" />
              <RegistryValue Id="PathwaysInstallDir" Root="HKLM" Key="Software\Tribal Data Resources\Pathways" Name="InstallDir" Action="write" Type="string" Value="[INSTALLDIR]" />
            </Component>
          </DirectoryRef>

          <!-- application data components -->
          <DirectoryRef Id="CommonAppDataPathways">
            <Component Id="CommonAppDataPathwaysFolderComponent" Guid="087C6F14-E87E-4B57-A7FA-C03FC8488E0D">
              <CreateFolder>
                <Permission User="Everyone" GenericAll="yes" />
              </CreateFolder>
              <RemoveFolder Id="CommonAppDataPathways" On="uninstall" />
              <!-- <RegistryValue Root="HKCU" Key="Software\TDR\Pathways" Name="installed" Type="integer" Value="1" KeyPath="yes" />-->
            </Component>
            <Component Id="Settings" Guid="A3513208-4F12-4496-B609-197812B4A953" NeverOverwrite="yes">
              <File Id="settingsXml" KeyPath="yes" ShortName="SETTINGS.XML" Name="Settings.xml" DiskId="1" Source="\\fileserver\Release\Pathways\Dependencies\Settings\settings.xml" Vital="yes" />
            </Component>
            <Component Id="Database" Guid="1D8756EF-FD6C-49BC-8400-299492E8C65D" >
              <!-- <RegistryValue Root="HKLM" Key="Software\TDR\Pathways\Database" Name="installed" Type="integer" Value="1" KeyPath="yes" /> -->
              <File Id="pathwaysMdf" Name="Pathways.mdf" DiskId="1" Source="\\fileserver\Shared\Databases\Pathways\SystemDBs\Pathways.mdf" CompanionFile="pathwaysExe" Vital="yes"/>
              <File Id="pathwaysLdf" Name="Pathways_log.ldf" DiskId="1" Source="\\fileserver\Shared\Databases\Pathways\SystemDBs\Pathways.ldf" CompanionFile="pathwaysExe" Vital="yes"/>
            </Component>
            <!-- <Component Id="MDF" Guid="FFB7CE02-B592-4c44-A315-99CF4828E3D9" >
                   <File Id="pathwaysMdf" KeyPath="yes" Name="Pathways.mdf" DiskId="1" Source="\\fileserver\Shared\Databases\Pathways\SystemDBs\Pathways.mdf" />
                 </Component>
                 <Component Id="LDF" Guid="9E4E3DCA-A067-47f4-9905-4AD5C35A8025" >
                   <File Id="pathwaysLdf" KeyPath="yes" Name="Pathways_log.ldf" DiskId="1" Source="\\fileserver\Shared\Databases\Pathways\SystemDBs\Pathways.ldf" />
                 </Component> -->
          </DirectoryRef>

          <!-- shortcut components -->
          <DirectoryRef Id="DesktopFolder">
            <Component Id="DesktopShortcutComponent" Guid="1BF412BA-9C6B-460D-80ED-8388AC66703F">
              <Shortcut Id="DesktopShortcut" Target="[INSTALLDIR]Pathways.exe" Name="Pathways" Description="Pathways Tribal Directory" Icon="PathwaysIcon" Show="normal" WorkingDirectory="INSTALLDIR" />
              <RegistryValue Root="HKCU" Key="Software\TDR\Pathways" Name="installed" Type="integer" Value="1" KeyPath="yes"/>
            </Component>
          </DirectoryRef>
          <DirectoryRef Id="ProgramsMenuPathwaysFolder">
            <Component Id="ProgramsMenuShortcutComponent" Guid="83A18245-4C22-4CDC-94E0-B480F80A407D">
              <Shortcut Id="ProgramsMenuShortcut" Target="[INSTALLDIR]Pathways.exe" Name="Pathways" Icon="PathwaysIcon" Show="normal" WorkingDirectory="INSTALLDIR" />
              <RemoveFolder Id="ProgramsMenuPathwaysFolder" On="uninstall"/>
              <RegistryValue Root="HKCU" Key="Software\TDR\Pathways" Name="installed" Type="integer" Value="1" KeyPath="yes"/>
            </Component>
          </DirectoryRef>

          <Feature Id="App" Title="Pathways Application" Level="1" Description="Pathways software" Display="expand"
                   ConfigurableDirectory="INSTALLDIR" Absent="disallow" AllowAdvertise="no" InstallDefault="local">
            <ComponentRef Id="Application" />
            <ComponentRef Id="CommonAppDataPathwaysFolderComponent" />
            <ComponentRef Id="Settings"/>
            <ComponentRef Id="ProgramsMenuShortcutComponent" />
            <Feature Id="Shortcuts" Title="Desktop Shortcut" Level="1" Absent="allow" AllowAdvertise="no" InstallDefault="local">
              <ComponentRef Id="DesktopShortcutComponent" />
            </Feature>
          </Feature>
          <Feature Id="Data" Title="Database" Level="1" Absent="allow" AllowAdvertise="no" InstallDefault="local">
            <ComponentRef Id="Database" />
          </Feature>

          <UIRef Id="WixUI_FeatureTree"/>
          <UIRef Id="WixUI_ErrorProgressText"/>
          <UI>
            <Error Id="2000">There is a later version of this program installed.</Error>
          </UI>
          <CustomAction Id="NewerVersionDetected" Error="2000" />
          <InstallExecuteSequence>
            <RemoveExistingProducts After="InstallFinalize"/>
          </InstallExecuteSequence>
        </Product>

    Running this installer, attempting the upgrade from the previous version, ALMOST works. The file that is giving me trouble is the one called "pathwaysMdf". Even though its Component code is EXACTLY the same as for the pathwaysLdf file, that file is replaced, while the MDF is NOT. You can see, commented out, some of the other things I've attempted, some from suggestions on Stack Overflow. The entire log file from running the upgrade is located at: http://pastebin.com/ppjhq6Wi THANK YOU! Joshua

    Read the article

  • How to customise the search core results web part, Part 1

    - by ybbest
    In this post, I'd like to show you how to customise the search core results web part. It is quite simple; most of the time, all you need to do is change the XSLT to make your changes. Here are the steps:

    1. Change the XSLT to the following, so that you can see the raw XML:

        <xsl:stylesheet version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform" >
          <xsl:output method="xml" version="1.0" encoding="UTF-8" indent="yes" />
          <xsl:template match="/">
            <xmp><xsl:copy-of select="*"/></xmp>
          </xsl:template>
        </xsl:stylesheet>

    a. To do so, go to edit page >> Edit search core results web part >> Display Properties, and untick "Use Location Visualization".
    b. Open the XSLT editor and copy the existing XSLT code to your preferred XSLT editor so that you can customise it.
    c. Now you can paste in the XSLT code above.

    2. Perform the search after you have completed step 1, and you will see the search results returned as raw XML:

        <All_Results>
          <Result>
            <id>1</id>
            <workid>678</workid>
            <rank>100000000</rank>
            <title>Ybbest</title>
            <author></author>
            <size>137531</size>
            <url>http://ybbest</url>
            <urlEncoded>http%3A%2F%2Fybbest</urlEncoded>
            <description>Ybbest test site</description>
            <write>3/17/2012</write>
            <sitename>http://ybbest</sitename>
            <collapsingstatus>0</collapsingstatus>
            <hithighlightedsummary>
              <c0>Ybbest</c0> test site <ddd /> Add a new image, change this welcome text or add new lists to this page by clicking the edit button above. You can click on Shared Documents to add files or on the <ddd />
            </hithighlightedsummary>
            <hithighlightedproperties>
              <HHTitle>
                <c0>Ybbest</c0>
              </HHTitle>
              <HHUrl>http://<c0>ybbest</c0></HHUrl>
            </hithighlightedproperties>
            <contentclass>STS_Site</contentclass>
            <isdocument>False</isdocument>
            <picturethumbnailurl></picturethumbnailurl>
            <popularsocialtags />
            <picturewidth>0</picturewidth>
            <pictureheight>0</pictureheight>
            <datepicturetaken></datepicturetaken>
            <serverredirectedurl></serverredirectedurl>
            <fileextension></fileextension>
            <ows_metadatafacetinfo></ows_metadatafacetinfo>
            <imageurl imageurldescription="SharePoint Site Collection">/_layouts/images/siteicon_16x16.png</imageurl>
          </Result>
          <TotalResults>69</TotalResults>
          <NumberOfResults>50</NumberOfResults>
        </All_Results>

    3. Then you can read what has been returned in the raw XML and start modifying the XSLT to customise your search results page.

    4. You can also link an external XSLT to the web part. It can be set in the Miscellaneous section of the Web Part settings. You can also set it programmatically using a feature receiver; you can download the source code to do so here.

    References:
    http://stackoverflow.com/questions/6548104/change-xslt-of-the-searchresultwebpart-during-the-featureactivated
    http://www.dotnetmafia.com/blogs/dotnettipoftheday/archive/2010/04/05/a-quick-guide-to-coreresultswebpart-configuration-changes-in-sharepoint-2010.aspx
    http://www.tonytestasworld.com/post/2011/01/30/HowTo-display-SharePoint-Search-results-as-raw-XML.aspx

    Read the article

  • SOA Suite Integration: Part 1: Building a Web Service

    - by Anthony Shorten
    Over the next few weeks I will be posting blog entries outlying the SOA Suite integration of the Oracle Utilities Application Framework. This will illustrate how easy it is to integrate by providing some samples. I will use a consistent set of features as examples. The examples will be simple and while will not illustrate ALL the possibilities it will illustrate the relative ease of integration. Think of them as a foundation. You can obviously build upon them. Now, to ease a few customers minds, this series will certainly feature the latest version of SOA Suite and the latest version of Oracle Utilities Application Framework but the principles will apply to past versions of both those products. So if you have Oracle SOA Suite 10g or are a customer of Oracle Utilities Application Framework V2.1 or above most of what I will show you will work with those versions. It is just easier in Oracle SOA Suite 11g and Oracle Utilities Application Framework V4.x. This first posting will not feature SOA Suite at all but concentrate on the capability for the Oracle Utilities Application Framework to create Web Services you can use for integration. The XML Application Integration (XAI) component of the Oracle Utilities Application Framework allows product objects to be exposed as XML based transactions or as Web Services (or both). XAI was written before Web Services became fashionable and has allowed customers of our products to provide a consistent interface into and out of our product line. XAI has been enhanced over the last few years to take advantages of the maturing landscape of Web Services in the market place to a point where it now easier to integrate to SOA infrastructure. There are a number of object types that can be exposed as Web Services: Maintenance Objects – These are the lowest level objects that can be exposed as Web Services. Customers of past versions of the product will be familiar with XAI services based upon Maintenance Objects as they used to be the only method of generating Web Services. These are still supported for background compatibility but are starting to become less popular as they were strict in their structure and were solely attribute based. To generate Maintenance Object based Web Services definition you need to use the XAI Schema Editor component. Business Objects – In Oracle Utilities Application Framework V2.1 we introduced the concept of Business Objects. These are site or industry specific objects that are based upon Maintenance Objects. These allow sites to respecify, in configuration, the structure and elements of a Maintenance Object and other Business Objects (they are true objects with support for inheritance, polymorphism, encapsulation etc.). These can be exposed as Web Services. Business Services – As with Business Objects, we introduced Business Services in Oracle Utilities Application Framework V2.1 which allowed applications services and query zones to be expressed as custom services. These can then be exposed as Web Services via the Business Service definition. Service Scripts - As with Business Objects and Business Services, we introduced Service Scripts in Oracle Utilities Application Framework V2.1. These allow services and/objects to be combined into complex objects or simply expose common routines as callable scripts. These can also be defined as Web Services. For the purpose of this series we will restrict ourselves to Business Objects. The techniques can apply to any of the objects discussed above. 
Now, lets get to the important bit of this blog post, the creation of a Web Service. To build a Business Object, you first logon to the product and navigate to the Administration Menu by selecting the Admin Menu from the Menu action on left top of the screen (next to Home). A popup menu will appear with the menu’s available. If you do not see the Admin menu then you do not have authority to use it. Here is an example: Navigate to the B menu and select the + symbol next to the Business Object menu item. This indicates that you want to ADD a new Business Object. This menu will appear if you are running Alphabetic mode in your installation (I almost forgot that point). You will be presented with the Business Object maintenance screen. You will fill out the following on the first tab (at a minimum): Business Object – The name of the Business Object. Typically you will make it descriptive and also prefix with CM to denote it as a customization (you can easily find it if you prefix it). As I running this on my personal copy of the product I will use my initials as the prefix and call the sample Web Service “AS-User”. Description – A short description of the object to tell others what it is used for. For my example, I will use “Anthony Shorten’s User Object”. Detailed Description – You can add a long description to help other developers understand your object. I am just going to specify “Anthony Shorten’s Test Object for SOA Suite Integration”. Maintenance Object – As this Business Service is going to be based upon a Maintenance Object I will specify the desired Maintenance Object. In this example, I have decided to use the Framework object USER. Now, I chose this for a number of reasons. It is meaningful, simple and is across all our product lines. I could choose ANY maintenance object I wished to expose (including any custom ones, if I had them). Parent Business Object – If I was not using a Maintenance Object but building a child Business Object against another Business Object, then I would specify the Parent Business Object here. I am not using Parent’s so I will leave this blank. You either use Parent Business Object or Maintenance Object not both. Application Service – Business Objects like other objects are subject to security. You can attach an Application Service to an object to specify which groups of users (remember services are attached to user groups not users) have appropriate access to the object. I will use a default service provided with the product, F1-DFLTS ,as this is just a demonstration so I do not have to be too sophisticated about security. Instance Control – This allows the object to create instances in its objects. You can specify a Business Object purely to hold rules. I am being simple here so I will set it to Allow New Instances to allow the Business Object to be used to create, read, update and delete user records. The rest of the tab I will leave empty as I want this to be a very simple object. Other options allow lots of flexibility. The contents should look like this: Before saving your work, you need to navigate to the Schema tab and specify the contents of your object. I will save some time. When you create an object the schema will only contain the basic root elements of the object (in fact only the schema tag is visible). When you go to the Schema Tab, on the dashboard you will see a BO Schema zone with a solitary button. This will allow you to Generate the Schema for you from our metadata. 
    Click on the Generate button to generate a basic schema from the metadata. You will now see a schema with the element tags and references to the metadata of the Maintenance Object (in the mapField attribute). I could spend a while outlining all the ways you can change the schema with defaults, formatting, tagging etc., but the online help has plenty of great examples to illustrate this. You can also use the Schema Tips zone on the dashboard for more details of the available customizations. Note: The tags are generated from the language pack you have installed. The sample is English, so the tags are in English (which is the base language of all installations). If you are using a language pack then the tags will be generated in the language of the user that generated the object. At this point you can save your Business Object by pressing the Save action. You now have a basic Business Object based on the USER maintenance object ready for use, but it is not defined as a Web Service yet. To do this you need to define the newly created Business Object as an XAI Inbound Service. The easiest and quickest way is to select the + next to XAI Inbound Service from the context menu on the Business Object maintenance screen. This will prepopulate the service definition with the following:

    Adapter – This will be set to Business Adaptor, which indicates that the service is either Business Object, Business Service or Service Script based.
    Schema Type – Whether the object is a Business Object, Business Service or Service Script. In this case it is a Business Object.
    Schema Name – The name of the object. In this case it is the Business Object AS-User.
    Active – Set to Yes. This means the service is available upon startup automatically. You can enable and disable services as needed.
    Transaction Type – A default transaction type, as this is a Business Object service (more about this in later postings). In our case we use the default, Read. This means that if we only specify data and not a transaction type, the product will assume you want to issue a read against the object.

    You then need to fill in the following:

    XAI Inbound Service – The name of the Web Service. Usually people use the same name as the underlying object, as in this example, but this can match your site's interfacing standards. By the way, you can define multiple XAI Inbound Services/Web Services against the same object if you want.
    Description and Detailed Description – Documentation for your Web Service. I just supplied some basic documentation for this demonstration.

    You can now save the service definition. Note: There are lots of other options on this screen that allow the behavior of your service to be specified; I will leave them blank for now. When you save the service you are issued with two new pieces of information. XAI Inbound Service Id is a randomly generated identifier used internally by the XAI Servlet. WSDL URL is the standard WSDL URL used for integration; we will take advantage of that in later posts. An example of the definition is shown below: Now you have defined the service, but it will only become available at the next server restart or when you flush the data cache. XAI Inbound Services are cached for performance, so the cache needs to be told of this new service. To refresh the cache you can use the Admin –> X –> XAI Command menu item. From the command dropdown select Refresh Registry and press Send Command. You will see the XML of the command sent to the server (the presence of the XML means it is finished).
    If you have an error around the authorization, check your default user and password settings on the XAI Options menu item. Be careful with flushing the cache, as the cache is shared (unless of course you are the only Web Service user on the system – in that case it only affects you). The Web Service is NOW available to be used. To perform a simple test of your new Web Service, navigate to the Admin –> X –> XAI Submission menu item. You will see an open XML request tab; you need to type the request XML you want to test into the Main tab. The first tag is the XAI Inbound Service name, and the elements are as per your schema (minus the schema tag itself, as that is only used internally). My example returns the details of user SYSUSER (remember to close your tags); a sketch of such a request is included at the end of this post. Hitting the Save button will issue the XML and return the response according to the Business Object schema. Now, before you panic because it did not ask for credentials: this function propagates your online credentials to the service call. You now have a Web Service you can use for integration, and we will reuse this information in subsequent posts. The process I just described can be used for ANY object in the system you want to expose, and at a minimum can take under a minute. Obviously I only showed the basics, but you can at least get an appreciation of the ease of defining a Web Service (just by using a browser). The next posts will build upon this. Hope you enjoyed the post.
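    As promised, here is a minimal sketch of the read request for the AS-User service. The element names are assumptions for illustration – the actual tags come from the schema generated on your Business Object, so check the Schema tab for the real names:

        <AS-User>
            <user>SYSUSER</user>
        </AS-User>

    Because the XAI Inbound Service was saved with a default transaction type of Read, sending just the identifying data like this is interpreted as a read of that user, and the response comes back populated according to the Business Object schema.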

    Read the article

  • Continuous Integration for SQL Server Part II – Integration Testing

    - by Ben Rees
    My previous post, on setting up Continuous Integration for SQL Server databases using GitHub, Bamboo and Red Gate's tools, covered the first two parts of a simple Database Continuous Delivery process: putting your database into a source control system, and running a continuous integration process each time changes are checked in. However there is, of course, a lot more to Continuous Delivery than that. Specifically, in addition to the above:

    Putting some actual integration tests into the CI process (otherwise, it doesn't really do much, does it!?),
    Deploying the database changes with a managed, automated approach,
    Monitoring what you've just put live, to make sure you haven't broken anything.

    This post will detail how to set up a very simple pipeline for implementing the first of these (continuous integration testing). NB: A lot of the setup in this post is built on top of the configuration from before, so it might be difficult to implement this post without running through part I first. There'll then be a third post on automated database deployment, followed by a final post dealing with the last item – monitoring changes on the live system. In the previous post, I used a mixture of Red Gate products and other 3rd party software – GitHub and Atlassian Bamboo specifically. This was partly because I believe most people work in a heterogeneous environment, using software from different vendors to suit their purposes, and I wanted to show how this could work for this process. For example, you could easily substitute Atlassian's BitBucket or Stash for GitHub, depending on your needs, or use an alternative CI server such as TeamCity, TFS or Jenkins. However, in this post, I'll be mostly using Red Gate products only (other than tSQLt). I do this firstly because I work for Red Gate. However, I also think that in the area of Database Delivery processes, nobody else has the offerings to implement this process fully – so I didn't have any choice!

    Background on Continuous Delivery

    For me, a great source of information on what makes a proper Continuous Delivery process is the Jez Humble and David Farley classic: Continuous Delivery – Reliable Software Releases through Build, Test, and Deployment Automation. This book is not, of course, primarily about databases, and the process I outline here and in the previous article is a gross simplification of what Jez and David describe (not least because it's that much harder for databases!). However, a lot of the principles that they describe can be equally applied to database development and, I would argue, should be. As I say, what I describe here is a very simple version of what would be required for a full production process. A couple of useful resources on handling some of these complexities can be found in the following two references:

    Refactoring Databases – Evolutionary Database Design, by Scott J Ambler and Pramod J. Sadalage
    Versioning Databases – Branching and Merging, by Scott Allen

    In particular, I don't deal at all with the issues of multiple branches and merging of those branches, an issue made particularly acute by the use of GitHub. The other point worth making is that, in the words of Martin Fowler: "Continuous Delivery is about keeping your application in a state where it is always able to deploy into production." I.e. we are not talking about continuously delivering updates to the production database every time someone checks in an amendment to a stored procedure.
    That is possible (and is what Martin calls Continuous Deployment). However, again, that's more than I describe in this article. And I doubt I need to remind DBAs or Developers to Proceed with Caution!

    Integration Testing

    Back to something practical. The next stage, building on our setup from the previous article, is to add some integration tests to the process. As I say, the CI process, though interesting, isn't enormously useful without some sort of test process running. For this we'll use the tSQLt framework, an open source framework designed specifically for running SQL Server tests. tSQLt is part of Red Gate's SQL Test, found at http://www.red-gate.com/products/sql-development/sql-test/, or can be downloaded separately from www.tsqlt.org – though I'll provide a step-by-step guide below for setting this up.

    Getting tSQLt set up via SQL Test

    Click on the link http://www.red-gate.com/products/sql-development/sql-test/ and click on the blue Download button to download the Red Gate SQL Test product, if not already installed. Follow the install process for SQL Test to install the SQL Server Management Studio (SSMS) plugin on to your machine, if not already installed. Open SSMS and you should now see SQL Test under the Tools menu; clicking this link will give you the basic SQL Test dialogue. Although we've installed the SQL Test product, we haven't yet installed the tSQLt test framework on any particular database. To do this, we need to add our RedGateApp database using this dialogue, by clicking on the + Add Database to SQL Test… link, selecting the RedGateApp database and clicking the Add Database link. In the next screen, SQL Test describes what will be installed on the database for the tSQLt framework. Also in this dialogue, uncheck the "Add SQL Cop tests" option (shown below). SQL Cop is a great set of pre-defined tests that work within the tSQLt framework to check the general health of your SQL Server database; however, we won't be using them in this particular simple example. Once you've clicked on the OK button, the changes described in the dialogue will be made to your database, some of which are shown on the left-hand side below. We've now installed the framework. However, we haven't actually created any tests, so this will be the next step. But before we proceed: we've made an update to our database, so we should again check this in to source control, adding comments as required. It's also worth a quick check that your build still runs with the new additions! (And a quick check of the RedGateAppCI database shows that the changes have been made.)

    Creating and Testing a Unit Test

    There are, of course, a lot of very interesting unit tests that you could and should set up for a database. The great thing about the tSQLt framework is that you can write these in SQL. The example I'm going to use here is pretty Mickey Mouse – our database table is going to include some email addresses as reference data, and I want to check whether these are all in a correct email format. Nothing clever, but it illustrates the process and hopefully shows the method by which more interesting tests could be set up.

    Adding Reference Data to our Database

    To start, I want to add some reference data to my database, and have this source controlled (as well as the schema).
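    For context, the kind of seed data involved looks something like the following sketch. The addresses here are hypothetical stand-ins (the real values only appear in a screenshot below, and I'm assuming the id column is an identity column):

        -- Hypothetical seed data for the reference table
        INSERT INTO dbo.Email (email)
        VALUES ('john.lennon@beatles.com'),
               ('paul.mccartney@beatles.com'),
               ('george.harrison@beatles.com'),
               ('ringo.starr@beatles.com');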
    First of all I need to add some data to my solitary table – this can be done a number of ways, but I'll do it in SSMS for simplicity. I then add some reference data to my table. Currently this reference data just exists in the database; for proper integration testing, it needs to form part of the source-controlled version of the database – and so needs to be added to the Git repository. This can be done via SQL Source Control, though first a Primary Key needs to be added to the table. Right-click the table, select Design, then right-click on the first "id" row and click on "Set Primary Key". NB: once this change is made, click Save to save the change to the table. Then, to source control this reference data, right-click on the table (dbo.Email) and select the following option: In the next screen, link the data in the Email table by selecting it from the list and clicking "save and close". We should at this point re-commit the changes (both the addition of the Primary Key, and the data) to the Git repo. NB: From here on, I won't show screenshots for the GitHub side of things – it's the same each time: whenever a change is made in SQL Source Control and committed to your local folder, you then need to sync this in the GitHub Windows client (as this is where the build server, Bamboo, is taking it from). An interesting point to note here: when these changes are committed in SQL Source Control (right-click the database and select "Commit Changes to Source Control.."), the display gives a warning about possibly needing a migration script for the "Add Primary Key" step of the changes. This isn't actually necessary in this case, but this mechanism would allow you to create override scripts to replace the default change scripts created by the SQL Compare engine (which runs underneath SQL Source Control). Ignoring this message (!), we add a comment and commit the changes to Git. I then sync these, run a build (or the build gets run automatically), and check that the data is being deployed over to the target RedGateAppCI database.

    Creating and Running the Test

    As I mentioned, the test I'm going to use here is a very simple one: are the email addresses in my reference table valid? This isn't, of course, a full test of email validation (I expect the email addresses I've chosen here aren't really those of the Fab Four) – just a very basic check of the format used. I've taken the relevant SQL from this Stack Overflow article. In SSMS, select "SQL Test" from the Tools menu, then click on + New Test. In the next screen, give your new test a name, and also enter a name in the Test Class box (test classes are schemas that help you keep things organised). Also check that the database in which the test is going to be created is correct – RedGateApp in this example. Click "Create Test". After closing a couple of subsequent dialogues, you'll see a dummy script for the test that needs filling in. We now need to define the SQL for our test. As mentioned before, tSQLt allows you to write your unit tests in T-SQL, and the code I'm going to use here is as below.
    This needs to be copied and pasted into the query window, to replace the default given by tSQLt:

        -- Basic email check test
        ALTER PROCEDURE [MyChecks].[test Check Email Addresses]
        AS
        BEGIN
            SET NOCOUNT ON

            DECLARE @Output VARCHAR(MAX)
            SET @Output = ''

            SELECT @Output = @Output + Email + CHAR(13) + CHAR(10)
            FROM dbo.Email
            WHERE email NOT LIKE '%_@__%.__%'

            IF @Output > ''
            BEGIN
                SET @Output = CHAR(13) + CHAR(10) + @Output
                EXEC tSQLt.Fail @Output
            END
        END;

    Once this script is entered, hit Execute to add the stored procedure to the database. Before committing the test to source control, it's worth just checking that it works! For a positive test, click on "SQL Test" from the Tools menu, then click Run Tests. You should see output like the following: a green tick to indicate success! But of course, what we also need to do is test that this is actually doing something, by showing a failed test. Edit one of the email addresses in your table to an incorrect format, then re-run the same SQL Test as before and you'll see the following: great – we now know that our test is really doing something! You'll also see a useful error message at the bottom of SSMS. (Leave the email address as invalid for now, for the next steps.) The next stage is to check this new test in to source control again, by right-clicking on the database and checking in the changes with a commit message (and not forgetting to sync in the GitHub client).

    Checking that the Tests are Running as Integration Tests

    After the changes above are made, and after a build has run on Bamboo (manual or automatic), looking at the stored procedures for the RedGateAppCI database shows that the SPROC for the new test has been moved over. However, this is not exactly what we were after. We didn't want to just copy objects from one database to another, but actually run the tests as part of the build/integration test process – i.e. we're continuously checking any changes we make (in this case, to the reference data emails), to ensure we're not breaking a test that we've set up. The behaviour we want to see is that, if we check in static data that is incorrect (as we did in step 9 above) and we have the tSQLt test set up, then our build in Bamboo should fail. However, re-running the build shows the following: sadly, a successful build! To make sure the tSQLt tests are run as part of the integration test, we need to amend a switch in the Red Gate CI config file. First, navigate to the file sqlCI.targets in your working folder. Edit this document, make the following change, save the document, then commit and sync this change in the GitHub client:

        <!-- tSQLt tests -->
        <!-- Optional -->
        <!-- To run tSQLt tests in source control for the database, enter true. -->
        <enableTsqlt>true</enableTsqlt>

    Now, if we re-run the build in Bamboo (NB: I've moved to a new server here, hence the different address and build number): superb, a broken build!! The error message isn't great here, so to get more detailed info, click on the full build log link on this page (below the fold). The interesting part of the log is towards the bottom. Pulling out this part:
    21-Jun-2013 11:35:19    Build FAILED.
    21-Jun-2013 11:35:19
    21-Jun-2013 11:35:19    "C:\Users\Administrator\bamboo-home\xml-data\build-dir\RGA-RGP-JOB1\sqlCI.proj" (default target) (1) ->
    21-Jun-2013 11:35:19    (sqlCI target) ->
    21-Jun-2013 11:35:19    EXEC : sqlCI error occurred: RedGate.Deploy.SqlServerDbPackage.Shared.Exceptions.InvalidSqlException: Test Case Summary: 1 test case(s) executed, 0 succeeded, 1 failed, 0 errored. [C:\Users\Administrator\bamboo-home\xml-data\build-dir\RGA-RGP-JOB1\sqlCI.proj]
    21-Jun-2013 11:35:19    EXEC : sqlCI error occurred: [MyChecks].[test Check Email Addresses] failed: [C:\Users\Administrator\bamboo-home\xml-data\build-dir\RGA-RGP-JOB1\sqlCI.proj]
    21-Jun-2013 11:35:19    EXEC : sqlCI error occurred: ringo.starr@beatles [C:\Users\Administrator\bamboo-home\xml-data\build-dir\RGA-RGP-JOB1\sqlCI.proj]
    21-Jun-2013 11:35:19    EXEC : sqlCI error occurred: [C:\Users\Administrator\bamboo-home\xml-data\build-dir\RGA-RGP-JOB1\sqlCI.proj]
    21-Jun-2013 11:35:19    EXEC : sqlCI error occurred: +----------------------+ [C:\Users\Administrator\bamboo-home\xml-data\build-dir\RGA-RGP-JOB1\sqlCI.proj]
    21-Jun-2013 11:35:19    EXEC : sqlCI error occurred: |Test Execution Summary| [C:\Users\Administrator\bamboo-home\xml-data\build-dir\RGA-RGP-JOB1\sqlCI.proj]

    As a final check, we should make sure that, if we now fix this error, the build succeeds. So in SSMS, I'm going to correct the invalid email address, then check this change in to SQL Source Control (with a comment), commit to GitHub, and re-run the build. This should have fixed the build – it worked!

    Summary

    This has been a very quick run through the implementation of CI for databases, including tSQLt tests to check whether your database updates are working. The next post in this series will focus on automated deployment – we've tested our database changes; how can we now deploy these to target sites?
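    As a footnote: the SQL Test window used above is just a front end over tSQLt's own stored procedures, so the same tests can be run from any query window or scripted build step. A minimal sketch, using the MyChecks test class created earlier:

        -- Run every tSQLt test registered in this database
        EXEC tSQLt.RunAll;

        -- Or run only the tests in one test class (schema)
        EXEC tSQLt.Run 'MyChecks';

    Either call raises an error when a test fails, which is exactly the behaviour a calling build step needs in order to break the build.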

    Read the article

  • Adding UCM as a search source in Windows Explorer

    - by kyle.hatlestad
    A customer recently pointed out to me that Windows 7 supports federated search within Windows Explorer. This means you can run searches against external sources such as Google, Flickr, YouTube, etc. right from within Explorer. While we do have the Desktop Integration Suite, which offers searching within Explorer, I thought it would be interesting to look into this method, which does not require any client software to implement. Basically, federated searching hooks up in Windows Explorer through the OpenSearch protocol. A Search Connector Descriptor file is run and it installs the search provider. The file is a .osdx file, which is an OpenSearch Description document. It describes the search provider you are hooking up to, along with the URL for the query. If those results can come back as an RSS or ATOM feed, then you're all set. So the first step is to install the RSS Feeds component from the UCM Samples page on OTN. If you're on 11g, I've found that RSS Feeds works just fine on that version too. Next, you want to perform a Quick Search with a particular search term and then copy the RSS link address for that search result. Here is what an example URL might look like:

        http://server:16200/cs/idcplg?IdcService=GET_SCS_FEED&feedName=search_results&QueryText=%28+%3cqsch%3eoracle%3c%2fqsch%3e+%29&SortField=dInDate&SortOrder=Desc&ResultCount=20&SearchQueryFormat=Universal&SearchProviders=server&

    Now you want to create a new text file and start out with this information (the default OpenSearch namespace shown here also appears in the completed file below):

        <?xml version="1.0" encoding="UTF-8"?>
        <OpenSearchDescription xmlns="http://a9.com/-/spec/opensearch/1.1/" xmlns:ms-ose="http://schemas.microsoft.com/opensearchext/2009/">
            <ShortName></ShortName>
            <Description></Description>
            <Url type="application/rss+xml" template=""/>
            <Url type="text/html" template=""/>
        </OpenSearchDescription>

    Enter a ShortName and Description. The ShortName will be the value used when displaying the search provider in Explorer. In the template attribute for the first Url element, enter the URL copied previously. You will then need to convert the ampersand symbols to '&amp;' to make them XML compliant. Finally, you'll want to switch out the search term with '{searchTerms}'. For the second Url element, you can do the same thing, except you want to copy the UCM search results URL from the page of results. That URL will look something like:

        http://server:16200/cs/idcplg?IdcService=GET_SEARCH_RESULTS&SortField=dInDate&SortOrder=Desc&ResultCount=20&QueryText=%3Cqsch%3Eoracle%3C%2Fqsch%3E&listTemplateId=&ftx=1&SearchQueryFormat=Universal&TargetedQuickSearchSelection=&MiniSearchText=oracle

    Again, convert the ampersand symbols and replace the search term with '{searchTerms}'. When complete, save the file with the .osdx extension.
    The completed file should look like:

        <?xml version="1.0" encoding="UTF-8"?>
        <OpenSearchDescription xmlns="http://a9.com/-/spec/opensearch/1.1/" xmlns:ms-ose="http://schemas.microsoft.com/opensearchext/2009/">
            <ShortName>Universal Content Management</ShortName>
            <Description>OpenSearch for UCM via Windows 7 Search Federation.</Description>
            <Url type="application/rss+xml" template="http://server:16200/cs/idcplg?IdcService=GET_SCS_FEED&amp;feedName=search_results&amp;QueryText=%28+%3Cqsch%3E{searchTerms}%3C%2fqsch%3E+%29&amp;SortField=dInDate&amp;SortOrder=Desc&amp;ResultCount=200&amp;SearchQueryFormat=Universal"/>
            <Url type="text/html" template="http://server:16200/cs/idcplg?IdcService=GET_SEARCH_RESULTS&amp;SortField=dInDate&amp;SortOrder=Desc&amp;ResultCount=20&amp;QueryText=%3Cqsch%3E{searchTerms}%3C%2Fqsch%3E&amp;listTemplateId=&amp;ftx=1&amp;SearchQueryFormat=Universal&amp;TargetedQuickSearchSelection=&amp;MiniSearchText={searchTerms}"/>
        </OpenSearchDescription>

    After you save the file, simply double-click it to create the provider. It will ask if you want to add the search connector to Windows. Click Add, and it will add the connector to the Searches folder in your user folder as well as to your Favorites. Now just click on the search icon and enter your term in the upper right search box. As you type, it begins executing searches, and the results come back directly in Explorer. When you double-click on an item, it will try to download the web viewable rendition for viewing. You also have the ability to save the search, just as you would in UCM, and there is a link to Search On Website which will launch your browser and go directly to the search results page there. And with some tweaks to the RSS component, you can make the results a bit more interesting. It supports the Media RSS standard, so you can pass along thumbnails of the documents in the results. To enable this, edit the rss_resources.htm file in the RSS Feeds component. In the std_rss_feed_begin resource include, add the namespace xmlns:media="http://search.yahoo.com/mrss/" to the rss definition:

        <rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:sy="http://purl.org/rss/1.0/modules/syndication/" xmlns:media="http://search.yahoo.com/mrss/">

    Next, in the rss_channel_item_with_thumb include, below the closing </images> element and before the <description> element, add this element:

        <media:thumbnail url="<$if strIndexOf(thumbnailUrl, "@t") > 0 or strIndexOf(thumbnailUrl, "@g") > 0 or strIndexOf(thumbnailUrl, "@p") > 0$><$rssHttpHost$><$thumbnailUrl$><$elseif dGif$><$HttpWebRoot$>images/docgifs/<$dGif$><$endif$>" />

    This and lots of other tweaks can be done to the RSS component to help extend it for optimum use in Explorer. Hopefully this can get you started. *Note: This post also applies to Universal Records Management (URM).
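    As an aside, it can be useful to see what Explorer actually sends. When you type a term – say, contract – into the search box, Windows substitutes it into the {searchTerms} placeholder of the RSS template above, so the request issued is roughly the following (line-broken here for readability, and assuming the same server and port as the example):

        http://server:16200/cs/idcplg?IdcService=GET_SCS_FEED
            &feedName=search_results
            &QueryText=%28+%3Cqsch%3Econtract%3C%2fqsch%3E+%29
            &SortField=dInDate&SortOrder=Desc&ResultCount=200
            &SearchQueryFormat=Universal

    Pasting that assembled URL into a browser is also a quick way to check that the feed returns valid RSS before troubleshooting on the Explorer side.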

    Read the article

  • Integrating Code Metrics in TFS 2010 Build

    - by Jakob Ehn
    The build process template and custom activity described in this post are available here: http://cid-ee034c9f620cd58d.office.live.com/self.aspx/BlogSamples/CodeMetricsSample.zip

    Running code metrics has been available since VS 2008, but only from inside the IDE. Yesterday Microsoft finally released the Visual Studio Code Metrics Power Tool 10.0, a command line tool that lets you run code metrics on your applications. This means that it is now possible to perform code metrics analysis on the build server, for example as part of your nightly/QA builds. In this post I will show how you can run the metrics command line tool, as well as a custom activity that reads the output, appends the results to the build log, and fails the build if the metric values exceed certain (configurable) threshold values. The code metrics tool analyzes all the methods in the assemblies, measuring cyclomatic complexity, class coupling, depth of inheritance and lines of code. It then calculates a Maintainability Index from these values, a measure of how maintainable each method is, between 0 (worst) and 100 (best). For information on how this value is calculated, see http://blogs.msdn.com/b/codeanalysis/archive/2007/11/20/maintainability-index-range-and-meaning.aspx. After this it aggregates the information and presents it at the class, namespace and module level as well.

    Running Metrics.exe in a build definition

    Running the actual tool is easy: just use an InvokeProcess activity last in the Compile the Project sequence, reference the metrics.exe file, pass the correct arguments, and you will end up with a result XML file in the drop directory. Here is how it is done in the attached build process template: In the above sequence I first assign the path to the code metrics result file ([BinariesDirectory]\result.xml) to a variable called MetricsResultFile, which is then sent to the InvokeProcess activity in the Arguments property. Here are the arguments for the InvokeProcess activity: Note that we tell metrics.exe to analyze all assemblies located in the Binaries folder. You might want to do some more intelligent filtering here – you probably don't want to analyze all 3rd party assemblies, for example. Note also the path to metrics.exe; this is the default location when you install the Code Metrics power tool. You must of course install the power tool on all build servers. Using the standard output logging (in the Handle Standard Output/Handle Error Output sections), we get the following output when running the build:

    Integrating Code Metrics into the build

    Having the results available next to the build result is nice, but we want the results integrated in the build result itself, and also to affect the outcome of the build. The point of having QA builds that measure, for example, code metrics is to make it very clear how the code being built measures up to the standards of the project/company. Just having an XML file available in the drop location will not cause developers to improve their code, but a (partially) failing build will! To do this, we need to write a custom activity that parses the metrics result file, logs it to the build log, and fails the build if the values from the metrics are below/above some predefined threshold values. The custom activity performs the following steps:

    Parses the XML. I'm using Linq 2 XSD for this; since the XML schema for the result file is available, it is very easy to generate code that lets you query the structure using standard Linq operators.
    Runs through the metric result hierarchy, logs the metrics for each level, and verifies the maintainability index and the cyclomatic complexity against the threshold values. The threshold values are defined in the build process template and are sent in as arguments to the custom activity.

    If the threshold values are exceeded, the activity either fails or partially fails the current build.

    For more information about the structure of the code metrics result file, read Cameron Skinner's post about it. It is very simple and easy to understand. I won't go through the code of the custom activity here, since there is nothing special about it and it is available for download so you can look at it and play with it yourself (though a rough sketch of the parsing approach is included at the end of this post). The threshold values for Maintainability Index and Cyclomatic Complexity are defined in the build process template, and can be modified per build definition: I have taken the default values for these settings from my colleague Terje Sandström's post on Code Metrics – suggestions for appropriate limits. You'll notice that this is quite an improvement compared to using code metrics inside the IDE, where the Red/Yellow/Green limits are fixed (and the default values are somewhat strange – see Terje's post for a discussion on this). This is the first version of the code metrics integration with TFS 2010 Build; I will probably enhance the functionality and the logging (the "tree view" structure in the log becomes quite hard to read) soon. I will also consider adding it to the Community TFS Build Extensions site when it becomes a bit more mature. Another obvious improvement is to extend the data warehouse of TFS, push the metric results back to the warehouse, and make them visible in the reports.
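    As a footnote, here is a rough sketch of the parsing approach, using plain LINQ to XML rather than the Linq 2 XSD typed classes used in the real activity. The element and attribute names (Method, Metric, Name, Value) are my assumptions about the power tool's result file layout and should be verified against an actual result.xml; the two threshold parameters correspond to the configurable build process template arguments described above:

        using System;
        using System.Xml.Linq;

        public static class MetricsChecker
        {
            // Walks every Method element in the metrics result file and reports
            // any metric that breaks the configured thresholds.
            // Returns true if at least one violation was found.
            public static bool ExceedsThresholds(string resultFile, int miThreshold, int ccThreshold)
            {
                XDocument doc = XDocument.Load(resultFile);
                bool failed = false;

                foreach (XElement method in doc.Descendants("Method"))
                {
                    foreach (XElement metric in method.Descendants("Metric"))
                    {
                        string name = (string)metric.Attribute("Name");
                        int value;
                        if (!int.TryParse((string)metric.Attribute("Value"), out value))
                            continue; // skip non-numeric metric values

                        if ((name == "MaintainabilityIndex" && value < miThreshold) ||
                            (name == "CyclomaticComplexity" && value > ccThreshold))
                        {
                            Console.WriteLine("{0} = {1} breaks the threshold in {2}",
                                name, value, (string)method.Attribute("Name"));
                            failed = true;
                        }
                    }
                }
                return failed;
            }
        }

    In the real activity, the equivalent check would raise a build warning or set the build status to Failed/PartiallySucceeded rather than writing to the console.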

    Read the article

< Previous Page | 296 297 298 299 300 301 302 303 304 305 306 307  | Next Page >