Search Results

Search found 44309 results on 1773 pages for 'hp command view'.


  • What causes a switch port to receive data not destined for it?

    - by user1693454
    We are having an intermittent fault which is affecting one of our control systems on one of our HP ProCurve switches. For some reason, this PLC (10 Mbit port, 192.168.6.56), which is attached directly to the HP switch, intermittently starts receiving data which is not destined for it. The data is being sent from a Thecus NAS with the latest firmware (192.168.6.218) to a physical IBM server running Win2003R2 and SAP (192.168.6.225). The flooded traffic has not always been addressed to this server - it has gone to other physical servers in the past too, but always from the Thecus NAS. I am using a monitor port to capture with Wireshark what is going in/out of the PLC - normally there would be about 1 MB in/out per 2 or 3 minutes, only a server asking the state of the coils. When the problem occurs, there is a flood of data being put onto the PLC line - in this captured instance, about 67 MB in less than a minute. Because of this, there is no way that the PLC can be queried, as the port is effectively DoSed, in turn killing part of our factory. I know that having production on the same VLAN as IT is not a good idea - I agree, however it cannot be changed at the moment (it will have to wait 3 months), and the problem has only started happening in the last 3 months. Here is a screen capture of one of the packets being sent from the Thecus NAS, captured from the PLC port on the HP switch (screenshot not included in this listing) - and there are over 700 of these in this one 1024 KB capture file. If anyone has any idea what could be going on, some help would be greatly appreciated. If you need to know anything more, let me know! Cheers!
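
    If it helps anyone reproduce this kind of capture, a filter that keeps only frames the PLC should never see makes the flood easy to spot. The sketch below is a generic tcpdump equivalent of the Wireshark-on-a-monitor-port setup described above; the interface name and the PLC's MAC address are placeholders, not values from this post.

        # Capture on the mirror/monitor port, keeping only unicast frames that are
        # NOT addressed to the PLC itself - i.e. the flooded traffic.
        # eth1 and aa:bb:cc:dd:ee:ff are assumptions; substitute your own values.
        sudo tcpdump -i eth1 -w flood.pcap \
          'not ether dst aa:bb:cc:dd:ee:ff and not ether broadcast and not ether multicast'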


  • Routing different subnets with just one layer 3 switch

    - by GustavoFSx
    Our current network looks like this:
    Location 1: 2 layer 2 switches | subnet 192.168.1.0/24 | firewall for our VPN
    Location 2: 1 layer 2 switch | subnet 192.168.3.0/24 | firewall for our VPN
    Location 3: 1 layer 2 switch | subnet 192.168.5.0/24 | firewall for our VPN
    We just got a direct fiber connection between locations 1 and 2, and we also got a new HP V1910 24G layer 3 switch. I tried to follow the instructions on this site, but I can't get it to work. I think our network should look like this:
    Location 1: HP switch, fiber to L2 | subnet 192.168.1.0/24 | firewall for our VPN
    Location 2: 1 layer 2 switch | subnet 192.168.3.0/24 | fiber to L1
    Location 3: 1 layer 2 switch | subnet 192.168.5.0/24 | firewall for our VPN
    So, how can I get routing working at location 2? Its old gateway was a firewall device at IP 192.168.3.1. I'm thinking of creating a VLAN interface with address 192.168.3.1 on the switch for location 2. But how do I handle that on the HP switch that has the direct fiber connection to that switch? Please help, I'm not very good with networking.
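
    For reference, the change being described maps onto a small amount of Comware-style configuration on the layer 3 switch. The sketch below is illustrative only - the VLAN ID, port number, and the 192.168.1.250 address are invented for the example, and on a V1910 the same settings would normally be made through the web interface.

        # Illustrative sketch (Comware-style syntax; IDs and ports are assumptions).
        # VLAN 1 faces location 1 (192.168.1.0/24); VLAN 30 carries the fiber
        # link and becomes location 2's gateway.
        vlan 30
        interface Vlan-interface 1
         ip address 192.168.1.250 255.255.255.0
        interface Vlan-interface 30
         ip address 192.168.3.1 255.255.255.0
        # Put the fiber port into VLAN 30 as an access port:
        interface GigabitEthernet 1/0/24
         port access vlan 30

    Hosts at location 2 could then keep 192.168.3.1 as their gateway, while the location 1 firewall would need a static route for 192.168.3.0/24 pointing at the switch's 192.168.1.250 address so return traffic flows back through the switch.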


  • Cheated!! Please help

    - by Rohit K
    I was experiencing a hard disk problem with my HP laptop. When the system diagnostics ran at boot-up, it reported "SMART drive attribute failed." I also repeatedly got a warning about imminent hard drive failure telling me to back up my data. So I gave my laptop to an authorized HP service center for repair. They formatted my hard disk and installed a pirated copy of Win 7 Ultimate (I previously had genuine Win 7 Home Premium running on my laptop); they told me they cover out-of-warranty issues and even charged me for it. What's worse is that the hard drive problem is still present, and all I am left with is an illegal copy of Windows, which I think also voids my warranty. What should I do? I did purchase a genuine copy of Windows with my laptop, so there must be some way to reinstall it even though I don't have a genuine copy on my machine now. Can't I get legitimate keys to reinstall Win 7 from Microsoft, given that I paid for the software when I purchased my machine? And if that's not possible, how can I claim warranty and get my hard disk replaced by HP?


  • Eclipse Juno cannot open, with error "An error has occurred. See the log file" (Ubuntu 12.04)

    - by ana
    I'm trying to launch Eclipse for the first time; I downloaded the package and installed it manually. Here is the log file: !SESSION 2012-10-10 16:06:11.460 ----------------------------------------------- eclipse.buildId=M20120914-1800 java.fullversion=GNU libgcj 4.6.3 BootLoader constants: OS=linux, ARCH=x86_64, WS=gtk, NL=en_US Command-line arguments: -os linux -ws gtk -arch x86_64 !ENTRY org.eclipse.osgi 4 0 2012-10-10 16:06:19.756 !MESSAGE Could not start bundle: org.eclipse.equinox.console !STACK 0 org.osgi.framework.BundleException: Could not start bundle: org.eclipse.equinox.console at org.eclipse.osgi.framework.internal.core.ConsoleManager.checkForConsoleBundle(ConsoleManager.java:217) at org.eclipse.core.runtime.adaptor.EclipseStarter.startup(EclipseStarter.java:297) at org.eclipse.core.runtime.adaptor.EclipseStarter.run(EclipseStarter.java:176) at java.lang.reflect.Method.invoke(libgcj.so.12) at org.eclipse.equinox.launcher.Main.invokeFramework(Main.java:629) at org.eclipse.equinox.launcher.Main.basicRun(Main.java:584) at org.eclipse.equinox.launcher.Main.run(Main.java:1438) at org.eclipse.equinox.launcher.Main.main(Main.java:1414) Caused by: org.osgi.framework.BundleException: Exception in org.eclipse.equinox.console.command.adapter.Activator.start() of bundle org.eclipse.equinox.console. at org.eclipse.osgi.framework.internal.core.BundleContextImpl.startActivator(BundleContextImpl.java:734) at org.eclipse.osgi.framework.internal.core.BundleContextImpl.start(BundleContextImpl.java:683) at org.eclipse.osgi.framework.internal.core.BundleHost.startWorker(BundleHost.java:381) at org.eclipse.osgi.framework.internal.core.AbstractBundle.start(AbstractBundle.java:300) at org.eclipse.osgi.framework.internal.core.ConsoleManager.checkForConsoleBundle(ConsoleManager.java:215) ...7 more Caused by: org.osgi.framework.BundleException: Exception in org.apache.felix.gogo.command.Activator.start() of bundle org.apache.felix.gogo.command. 
at org.eclipse.osgi.framework.internal.core.BundleContextImpl.startActivator(BundleContextImpl.java:734) at org.eclipse.osgi.framework.internal.core.BundleContextImpl.start(BundleContextImpl.java:683) at org.eclipse.osgi.framework.internal.core.BundleHost.startWorker(BundleHost.java:381) at org.eclipse.osgi.framework.internal.core.AbstractBundle.start(AbstractBundle.java:300) at org.eclipse.equinox.console.command.adapter.Activator.startBundle(Activator.java:248) at org.eclipse.equinox.console.command.adapter.Activator.start(Activator.java:239) at org.eclipse.osgi.framework.internal.core.BundleContextImpl$1.run(BundleContextImpl.java:711) at java.security.AccessController.doPrivileged(libgcj.so.12) at org.eclipse.osgi.framework.internal.core.BundleContextImpl.startActivator(BundleContextImpl.java:702) ...11 more Caused by: java.lang.NoClassDefFoundError: org.apache.felix.gogo.command.OBR at java.lang.Class.initializeClass(libgcj.so.12) at org.apache.felix.gogo.command.Activator.start(Activator.java:54) at org.eclipse.osgi.framework.internal.core.BundleContextImpl$1.run(BundleContextImpl.java:711) at java.security.AccessController.doPrivileged(libgcj.so.12) at org.eclipse.osgi.framework.internal.core.BundleContextImpl.startActivator(BundleContextImpl.java:702) ...19 more Caused by: java.lang.ClassNotFoundException: org.apache.felix.bundlerepository.Repository at org.eclipse.osgi.internal.loader.BundleLoader.findClassInternal(BundleLoader.java:501) at org.eclipse.osgi.internal.loader.BundleLoader.findClass(BundleLoader.java:421) at org.eclipse.osgi.internal.loader.BundleLoader.findClass(BundleLoader.java:412) at org.eclipse.osgi.internal.baseadaptor.DefaultClassLoader.loadClass(DefaultClassLoader.java:107) at java.lang.ClassLoader.loadClass(libgcj.so.12) at java.lang.Class.initializeClass(libgcj.so.12) ...23 more Root exception: java.lang.NoClassDefFoundError: org.apache.felix.gogo.command.OBR at java.lang.Class.initializeClass(libgcj.so.12) at !ENTRY org.eclipse.osgi 2 0 2012-10-10 16:06:30.433 !MESSAGE The following is a complete list of bundles which are not resolved, see the prior log entry for the root cause if it exists: !SUBENTRY 1 org.eclipse.osgi 2 0 2012-10-10 16:06:30.433 !MESSAGE Bundle com.sun.el_2.2.0.v201108011116 [4] was not resolved. !SUBENTRY 2 com.sun.el 2 0 2012-10-10 16:06:30.434 !MESSAGE Missing imported package javax.el_2.2.0. !SUBENTRY 2 com.sun.el 2 0 2012-10-10 16:06:30.434 !MESSAGE Missing imported package javax.servlet.http_2.5.0. !SUBENTRY 1 org.eclipse.osgi 2 0 2012-10-10 16:06:30.434 !MESSAGE Bundle javax.el_2.2.0.v201108011116 [6] was not resolved. !SUBENTRY 2 javax.el 2 0 2012-10-10 16:06:30.434 !MESSAGE Missing imported package javax.servlet_2.5.0. !SUBENTRY 2 javax.el 2 0 2012-10-10 16:06:30.434 !MESSAGE Missing imported package javax.servlet.http_2.5.0. !SUBENTRY 1 org.eclipse.osgi 2 0 2012-10-10 16:06:30.434 !MESSAGE Bundle javax.servlet_3.0.0.v201112011016 [8] was not resolved. !SUBENTRY 2 javax.servlet 2 0 2012-10-10 16:06:30.434 !MESSAGE Missing required capability Require-Capability: osgi.ee; filter="(&(osgi.ee=JavaSE)(version=1.6))". !SUBENTRY 1 org.eclipse.osgi 2 0 2012-10-10 16:06:30.434 !MESSAGE Bundle javax.servlet.jsp_2.2.0.v201112011158 [9] was not resolved. !SUBENTRY 2 javax.servlet.jsp 2 0 2012-10-10 16:06:30.434 !MESSAGE Missing imported package javax.el_2.2.0. !SUBENTRY 2 javax.servlet.jsp 2 0 2012-10-10 16:06:30.434 !MESSAGE Missing imported package javax.servlet_2.6.0. 
!SUBENTRY 2 javax.servlet.jsp 2 0 2012-10-10 16:06:30.434 !MESSAGE Missing imported package javax.servlet.http_2.6.0. !SUBENTRY 2 javax.servlet.jsp 2 0 2012-10-10 16:06:30.434 !MESSAGE Missing required capability Require-Capability: osgi.ee; filter="(&(osgi.ee=JavaSE)(version=1.6))". !SUBENTRY 1 org.eclipse.osgi 2 0 2012-10-10 16:06:30.434 !MESSAGE Bundle org.apache.jasper.glassfish_2.2.2.v201205150955 [21] was not resolved. !SUBENTRY 2 org.apache.jasper.glassfish 2 0 2012-10-10 16:06:30.434 !MESSAGE Missing imported package javax.el_2.2.0. !SUBENTRY 2 org.apache.jasper.glassfish 2 0 2012-10-10 16:06:30.434 !MESSAGE Missing imported package javax.servlet_2.6.0. !SUBENTRY 2 org.apache.jasper.glassfish 2 0 2012-10-10 16:06:30.434 !MESSAGE Missing imported package javax.servlet.descriptor_2.6.0. !SUBENTRY 2 org.apache.jasper.glassfish 2 0 2012-10-10 16:06:30.434 !MESSAGE Missing imported package javax.servlet.http_2.6.0. !SUBENTRY 2 org.apache.jasper.glassfish 2 0 2012-10-10 16:06:30.434 !MESSAGE Missing imported package javax.servlet.jsp_2.2.0. !SUBENTRY 2 org.apache.jasper.glassfish 2 0 2012-10-10 16:06:30.434 !MESSAGE Missing imported package javax.servlet.jsp.el_2.2.0. !SUBENTRY 2 org.apache.jasper.glassfish 2 0 2012-10-10 16:06:30.434 !MESSAGE Missing imported package javax.servlet.jsp.tagext_2.2.0. !SUBENTRY 2 org.apache.jasper.glassfish 2 0 2012-10-10 16:06:30.434 !MESSAGE Missing optionally imported package javax.tools_0.0.0. !SUBENTRY 2 org.apache.jasper.glassfish 2 0 2012-10-10 16:06:30.434 !MESSAGE Missing required capability Require-Capability: osgi.ee; filter="(&(osgi.ee=JavaSE)(version=1.6))". !SUBENTRY 1 org.eclipse.osgi 2 0 2012-10-10 16:06:30.434 !MESSAGE Bundle org.eclipse.equinox.http.jetty_3.0.0.v20120522-1841 [91] was not resolved. !SUBENTRY 2 org.eclipse.equinox.http.jetty 2 0 2012-10-10 16:06:30.434 !MESSAGE Missing imported package javax.servlet_[2.6.0,4.0.0). !SUBENTRY 2 org.eclipse.equinox.http.jetty 2 0 2012-10-10 16:06:30.435 !MESSAGE Missing imported package javax.servlet.http_[2.6.0,4.0.0). !SUBENTRY 2 org.eclipse.equinox.http.jetty 2 0 2012-10-10 16:06:30.435 !MESSAGE Missing imported package org.eclipse.equinox.http.servlet_1.0.0. !SUBENTRY 2 org.eclipse.equinox.http.jetty 2 0 2012-10-10 16:06:30.435 !MESSAGE Missing imported package org.eclipse.jetty.http_[8.0.0,9.0.0). !SUBENTRY 2 org.eclipse.equinox.http.jetty 2 0 2012-10-10 16:06:30.435 !MESSAGE Missing imported package org.eclipse.jetty.io.bio_[8.0.0,9.0.0). !SUBENTRY 2 org.eclipse.equinox.http.jetty 2 0 2012-10-10 16:06:30.435 !MESSAGE Missing imported package org.eclipse.jetty.io.nio_[8.0.0,9.0.0). !SUBENTRY 2 org.eclipse.equinox.http.jetty 2 0 2012-10-10 16:06:30.435 !MESSAGE Missing imported package org.eclipse.jetty.server_[8.0.0,9.0.0). !SUBENTRY 2 org.eclipse.equinox.http.jetty 2 0 2012-10-10 16:06:30.435 !MESSAGE Missing imported package org.eclipse.jetty.server.bio_[8.0.0,9.0.0). !SUBENTRY 2 org.eclipse.equinox.http.jetty 2 0 2012-10-10 16:06:30.435 !MESSAGE Missing imported package org.eclipse.jetty.server.handler_[8.0.0,9.0.0). !SUBENTRY 2 org.eclipse.equinox.http.jetty 2 0 2012-10-10 16:06:30.435 !MESSAGE Missing imported package org.eclipse.jetty.server.nio_[8.0.0,9.0.0). !SUBENTRY 2 org.eclipse.equinox.http.jetty 2 0 2012-10-10 16:06:30.435 !MESSAGE Missing imported package org.eclipse.jetty.server.session_[8.0.0,9.0.0). !SUBENTRY 2 org.eclipse.equinox.http.jetty 2 0 2012-10-10 16:06:30.435 !MESSAGE Missing imported package org.eclipse.jetty.server.ssl_[8.0.0,9.0.0). 
!SUBENTRY 2 org.eclipse.equinox.http.jetty 2 0 2012-10-10 16:06:30.435 !MESSAGE Missing imported package org.eclipse.jetty.servlet_[8.0.0,9.0.0). !SUBENTRY 2 org.eclipse.equinox.http.jetty 2 0 2012-10-10 16:06:30.435 !MESSAGE Missing imported package org.eclipse.jetty.util_[8.0.0,9.0.0). !SUBENTRY 2 org.eclipse.equinox.http.jetty 2 0 2012-10-10 16:06:30.435 !MESSAGE Missing imported package org.eclipse.jetty.util.component_[8.0.0,9.0.0). !SUBENTRY 2 org.eclipse.equinox.http.jetty 2 0 2012-10-10 16:06:30.435 !MESSAGE Missing imported package org.eclipse.jetty.util.log_[8.0.0,9.0.0). !SUBENTRY 1 org.eclipse.osgi 2 0 2012-10-10 16:06:30.435 !MESSAGE Bundle org.eclipse.equinox.http.registry_1.1.200.v20120522-2049 [92] was not resolved. !SUBENTRY 2 org.eclipse.equinox.http.registry 2 0 2012-10-10 16:06:30.435 !MESSAGE Missing imported package javax.servlet_2.3.0. !SUBENTRY 2 org.eclipse.equinox.http.registry 2 0 2012-10-10 16:06:30.435 !MESSAGE Missing imported package javax.servlet.http_2.3.0. !SUBENTRY 2 org.eclipse.equinox.http.registry 2 0 2012-10-10 16:06:30.435 !MESSAGE Missing required capability Require-Capability: osgi.ee; filter="(|(&(osgi.ee=CDC/Foundation)(version=1.0))(&(osgi.ee=JavaSE)(version=1.3)))". !SUBENTRY 1 org.eclipse.osgi 2 0 2012-10-10 16:06:30.435 !MESSAGE Bundle org.eclipse.equinox.http.servlet_1.1.300.v20120522-1841 [93] was not resolved. !SUBENTRY 2 org.eclipse.equinox.http.servlet 2 0 2012-10-10 16:06:30.435 !MESSAGE Missing imported package javax.servlet_[2.3.0,3.1.0). !SUBENTRY 2 org.eclipse.equinox.http.servlet 2 0 2012-10-10 16:06:30.435 !MESSAGE Missing optionally imported package javax.servlet.annotation_2.6.0. !SUBENTRY 2 org.eclipse.equinox.http.servlet 2 0 2012-10-10 16:06:30.435 !MESSAGE Missing optionally imported package javax.servlet.descriptor_2.6.0. !SUBENTRY 2 org.eclipse.equinox.http.servlet 2 0 2012-10-10 16:06:30.435 !MESSAGE Missing imported package javax.servlet.http_[2.3.0,3.1.0). !SUBENTRY 2 org.eclipse.equinox.http.servlet 2 0 2012-10-10 16:06:30.436 !MESSAGE Missing required capability Require-Capability: osgi.ee; filter="(|(&(osgi.ee=CDC/Foundation)(version=1.0))(&(osgi.ee=JavaSE)(version=1.3)))". !SUBENTRY 1 org.eclipse.osgi 2 0 2012-10-10 16:06:30.436 !MESSAGE Bundle org.eclipse.equinox.jsp.jasper_1.0.400.v20120522-2049 [94] was not resolved. !SUBENTRY 2 org.eclipse.equinox.jsp.jasper 2 0 2012-10-10 16:06:30.436 !MESSAGE Missing imported package javax.servlet_[2.4.0,3.1.0). !SUBENTRY 2 org.eclipse.equinox.jsp.jasper 2 0 2012-10-10 16:06:30.436 !MESSAGE Missing optionally imported package javax.servlet.annotation_2.6.0. !SUBENTRY 2 org.eclipse.equinox.jsp.jasper 2 0 2012-10-10 16:06:30.436 !MESSAGE Missing optionally imported package javax.servlet.descriptor_2.6.0. !SUBENTRY 2 org.eclipse.equinox.jsp.jasper 2 0 2012-10-10 16:06:30.436 !MESSAGE Missing imported package javax.servlet.http_[2.4.0,3.1.0). !SUBENTRY 2 org.eclipse.equinox.jsp.jasper 2 0 2012-10-10 16:06:30.436 !MESSAGE Missing imported package javax.servlet.jsp_[2.0.0,2.3.0). !SUBENTRY 2 org.eclipse.equinox.jsp.jasper 2 0 2012-10-10 16:06:30.436 !MESSAGE Missing imported package org.apache.jasper.servlet_[0.0.0,6.0.0). !SUBENTRY 2 org.eclipse.equinox.jsp.jasper 2 0 2012-10-10 16:06:30.436 !MESSAGE Missing required capability Require-Capability: osgi.ee; filter="(|(&(osgi.ee=CDC/Foundation)(version=1.0))(&(osgi.ee=JavaSE)(version=1.3)))". 
!SUBENTRY 1 org.eclipse.osgi 2 0 2012-10-10 16:06:30.436 !MESSAGE Bundle org.eclipse.equinox.jsp.jasper.registry_1.0.300.v20120522-2049 [95] was not resolved. !SUBENTRY 2 org.eclipse.equinox.jsp.jasper.registry 2 0 2012-10-10 16:06:30.436 !MESSAGE Missing imported package org.eclipse.equinox.jsp.jasper_0.0.0. !SUBENTRY 2 org.eclipse.equinox.jsp.jasper.registry 2 0 2012-10-10 16:06:30.436 !MESSAGE Missing imported package javax.servlet_2.4.0. !SUBENTRY 2 org.eclipse.equinox.jsp.jasper.registry 2 0 2012-10-10 16:06:30.436 !MESSAGE Missing imported package javax.servlet.http_2.4.0. !SUBENTRY 2 org.eclipse.equinox.jsp.jasper.registry 2 0 2012-10-10 16:06:30.436 !MESSAGE Missing required capability Require-Capability: osgi.ee; filter="(|(&(osgi.ee=CDC/Foundation)(version=1.0))(&(osgi.ee=JavaSE)(version=1.3)))". !SUBENTRY 1 org.eclipse.osgi 2 0 2012-10-10 16:06:30.436 !MESSAGE Bundle org.eclipse.help.webapp_3.6.101.v20120717-130216 [135] was not resolved. !SUBENTRY 2 org.eclipse.help.webapp 2 0 2012-10-10 16:06:30.436 !MESSAGE Missing required bundle org.eclipse.equinox.jsp.jasper.registry_1.0.100. !SUBENTRY 2 org.eclipse.help.webapp 2 0 2012-10-10 16:06:30.436 !MESSAGE Missing required bundle org.eclipse.equinox.http.registry_1.0.200. !SUBENTRY 2 org.eclipse.help.webapp 2 0 2012-10-10 16:06:30.436 !MESSAGE Missing imported package javax.servlet_2.4.0. !SUBENTRY 2 org.eclipse.help.webapp 2 0 2012-10-10 16:06:30.436 !MESSAGE Missing imported package javax.servlet.http_2.4.0. !SUBENTRY 1 org.eclipse.osgi 2 0 2012-10-10 16:06:30.437 !MESSAGE Bundle org.eclipse.jdt.apt.pluggable.core_1.0.400.v20120522-1651 [139] was not resolved. !SUBENTRY 2 org.eclipse.jdt.apt.pluggable.core 2 0 2012-10-10 16:06:30.437 !MESSAGE Missing imported package org.eclipse.jdt.internal.compiler.tool_0.0.0. !SUBENTRY 2 org.eclipse.jdt.apt.pluggable.core 2 0 2012-10-10 16:06:30.437 !MESSAGE Missing imported package org.eclipse.jdt.internal.compiler.apt.dispatch_0.0.0. !SUBENTRY 2 org.eclipse.jdt.apt.pluggable.core 2 0 2012-10-10 16:06:30.437 !MESSAGE Missing imported package org.eclipse.jdt.internal.compiler.apt.model_0.0.0. !SUBENTRY 2 org.eclipse.jdt.apt.pluggable.core 2 0 2012-10-10 16:06:30.437 !MESSAGE Missing imported package org.eclipse.jdt.internal.compiler.apt.util_0.0.0. !SUBENTRY 2 org.eclipse.jdt.apt.pluggable.core 2 0 2012-10-10 16:06:30.437 !MESSAGE Missing required capability Require-Capability: osgi.ee; filter="(&(osgi.ee=JavaSE)(version=1.6))". !SUBENTRY 1 org.eclipse.osgi 2 0 2012-10-10 16:06:30.437 !MESSAGE Bundle org.eclipse.jdt.compiler.apt_1.0.500.v20120522-1651 [141] was not resolved. !SUBENTRY 2 org.eclipse.jdt.compiler.apt 2 0 2012-10-10 16:06:30.437 !MESSAGE Missing optionally imported package org.eclipse.jdt.internal.compiler.tool_0.0.0. !SUBENTRY 2 org.eclipse.jdt.compiler.apt 2 0 2012-10-10 16:06:30.437 !MESSAGE Missing required capability Require-Capability: osgi.ee; filter="(&(osgi.ee=JavaSE)(version=1.6))". !SUBENTRY 1 org.eclipse.osgi 2 0 2012-10-10 16:06:30.437 !MESSAGE Bundle org.eclipse.jdt.compiler.tool_1.0.101.v20120522-1651 [142] was not resolved. !SUBENTRY 2 org.eclipse.jdt.compiler.tool 2 0 2012-10-10 16:06:30.437 !MESSAGE Missing required capability Require-Capability: osgi.ee; filter="(&(osgi.ee=JavaSE)(version=1.6))". !SUBENTRY 1 org.eclipse.osgi 2 0 2012-10-10 16:06:30.437 !MESSAGE Bundle org.eclipse.jetty.continuation_8.1.3.v20120522 [155] was not resolved. !SUBENTRY 2 org.eclipse.jetty.continuation 2 0 2012-10-10 16:06:30.437 !MESSAGE Missing imported package javax.servlet_2.6.0. 
!SUBENTRY 2 org.eclipse.jetty.continuation 2 0 2012-10-10 16:06:30.437 !MESSAGE Missing optionally imported package org.mortbay.log_[6.1.0,7.0.0). !SUBENTRY 2 org.eclipse.jetty.continuation 2 0 2012-10-10 16:06:30.437 !MESSAGE Missing optionally imported package org.mortbay.util.ajax_[6.1.0,7.0.0). !SUBENTRY 1 org.eclipse.osgi 2 0 2012-10-10 16:06:30.437 !MESSAGE Bundle org.eclipse.jetty.http_8.1.3.v20120522 [156] was not resolved. !SUBENTRY 2 org.eclipse.jetty.http 2 0 2012-10-10 16:06:30.437 !MESSAGE Missing imported package javax.servlet_2.6.0. !SUBENTRY 2 org.eclipse.jetty.http 2 0 2012-10-10 16:06:30.437 !MESSAGE Missing imported package javax.servlet.http_2.6.0. !SUBENTRY 2 org.eclipse.jetty.http 2 0 2012-10-10 16:06:30.437 !MESSAGE Missing imported package org.eclipse.jetty.io_[8.1.0,9.0.0). org.eclipse.jetty.util.resource_[8.1.0,9.0.0). !SUBENTRY 2 org.eclipse.jetty.http 2 0 2012-10-10 16:06:30.438 !MESSAGE Missing imported package org.eclipse.jetty.util.ssl_[8.1.0,9.0.0). !SUBENTRY 1 org.eclipse.osgi 2 0 2012-10-10 16:06:30.438 !MESSAGE Bundle org.eclipse.jetty.io_8.1.3.v20120522 [157] was not resolved. !SUBENTRY 2 org.eclipse.jetty.io 2 0 2012-10-10 16:06:30.438 !MESSAGE Missing imported package org.eclipse.jetty.util_[8.1.0,9.0.0). !SUBENTRY 2 org.eclipse.jetty.io 2 0 2012-10-10 16:06:30.438 !MESSAGE Missing imported package org.eclipse.jetty.util.component_[8.1.0,9.0.0). !SUBENTRY 2 org.eclipse.jetty.io 2 0 2012-10-10 16:06:30.438 !MESSAGE Missing imported package org.eclipse.jetty.util.log_[8.1.0,9.0.0). !SUBENTRY 2 org.eclipse.jetty.io 2 0 2012-10-10 16:06:30.438 !MESSAGE Missing imported package org.eclipse.jetty.util.thread_[8.1.0,9.0.0). !SUBENTRY 1 org.eclipse.osgi 2 0 2012-10-10 16:06:30.438 !MESSAGE Bundle org.eclipse.jetty.security_8.1.3.v20120522 [158] was not resolved. !SUBENTRY 2 org.eclipse.jetty.security 2 0 2012-10-10 16:06:30.438 !MESSAGE Missing imported package javax.servlet_2.6.0. !SUBENTRY 2 org.eclipse.jetty.security 2 0 2012-10-10 16:06:30.438 !MESSAGE Missing imported package javax.servlet.http_2.6.0. !SUBENTRY 2 org.eclipse.jetty.security 2 0 2012-10-10 16:06:30.438 !MESSAGE Missing imported package org.eclipse.jetty.http_[8.1.0,9.0.0). !SUBENTRY 2 org.eclipse.jetty.security 2 0 2012-10-10 16:06:30.438 !MESSAGE Missing imported package org.eclipse.jetty.server_[8.1.0,9.0.0). !SUBENTRY 2 org.eclipse.jetty.security 2 0 2012-10-10 16:06:30.438 org.eclipse.jetty.jmx_8.0.0. !SUBENTRY 2 org.eclipse.jetty.servlet 2 0 2012-10-10 16:06:30.439 !MESSAGE Missing imported package org.eclipse.jetty.security_[8.1.0,9.0.0). !SUBENTRY 2 org.eclipse.jetty.servlet 2 0 2012-10-10 16:06:30.440 !MESSAGE Missing imported package org.eclipse.jetty.server_[8.1.0,9.0.0). !SUBENTRY 2 org.eclipse.jetty.servlet 2 0 2012-10-10 16:06:30.440 !MESSAGE Missing imported package org.eclipse.jetty.server.handler_[8.1.0,9.0.0). !SUBENTRY 2 org.eclipse.jetty.servlet 2 0 2012-10-10 16:06:30.440 !MESSAGE Missing imported package org.eclipse.jetty.server.nio_[8.1.0,9.0.0). !SUBENTRY 2 org.eclipse.jetty.servlet 2 0 2012-10-10 16:06:30.440 !MESSAGE Missing imported package org.eclipse.jetty.server.session_[8.1.0,9.0.0). !SUBENTRY 2 org.eclipse.jetty.servlet 2 0 2012-10-10 16:06:30.440 !MESSAGE Missing imported package org.eclipse.jetty.server.ssl_[8.1.0,9.0.0). !SUBENTRY 2 org.eclipse.jetty.servlet 2 0 2012-10-10 16:06:30.440 !MESSAGE Missing optionally imported package org.eclipse.jetty.util_[8.1.0,9.0.0). 
!SUBENTRY 2 org.eclipse.jetty.servlet 2 0 2012-10-10 16:06:30.440 !MESSAGE Missing optionally imported package org.eclipse.jetty.util.component_[8.1.0,9.0.0). !SUBENTRY 2 org.eclipse.jetty.servlet 2 0 2012-10-10 16:06:30.440 !MESSAGE Missing optionally imported package org.eclipse.jetty.util.log_[8.1.0,9.0.0). !SUBENTRY 2 org.eclipse.jetty.servlet 2 0 2012-10-10 16:06:30.440 !MESSAGE Missing optionally imported package org.eclipse.jetty.util.resource_[8.1.0,9.0.0). !SUBENTRY 1 org.eclipse.osgi 2 0 2012-10-10 16:06:30.440 !MESSAGE Bundle org.eclipse.jetty.util_8.1.3.v20120522 [161] was not resolved. !SUBENTRY 2 org.eclipse.jetty.util 2 0 2012-10-10 16:06:30.440 !MESSAGE Missing imported package javax.servlet_2.6.0. !SUBENTRY 2 org.eclipse.jetty.util 2 0 2012-10-10 16:06:30.440 !MESSAGE Missing imported package javax.servlet.http_2.6.0. !SUBENTRY 2 org.eclipse.jetty.util 2 0 2012-10-10 16:06:30.440 !MESSAGE Missing optionally imported package org.slf4j_[1.5.0,2.0.0). !SUBENTRY 2 org.eclipse.jetty.util 2 0 2012-10-10 16:06:30.440 !MESSAGE Missing optionally imported package org.slf4j.helpers_[1.6.0,2.0.0). !SUBENTRY 2 org.eclipse.jetty.util 2 0 2012-10-10 16:06:30.440 !MESSAGE Missing optionally imported package org.slf4j.impl_[1.5.0,2.0.0). !SUBENTRY 2 org.eclipse.jetty.util 2 0 2012-10-10 16:06:30.440 !MESSAGE Missing optionally imported package org.slf4j.spi_[1.6.0,2.0.0). !ENTRY org.eclipse.osgi 4 0 2012-10-10 16:06:30.441 !MESSAGE Application error !STACK 1 java.lang.ArrayIndexOutOfBoundsException: 0 at org.eclipse.e4.core.internal.di.ConstructorRequestor.calcDependentObjects(ConstructorRequestor.java:79) at org.eclipse.e4.core.internal.di.Requestor.getDependentObjects(Requestor.java:143) at org.eclipse.e4.core.internal.di.InjectorImpl.resolveArgs(InjectorImpl.java:408) at org.eclipse.e4.core.internal.di.InjectorImpl.internalMake(InjectorImpl.java:312) at org.eclipse.e4.core.internal.di.InjectorImpl.make(InjectorImpl.java:240) at org.eclipse.e4.core.contexts.ContextInjectionFactory.make(ContextInjectionFactory.java:161) at org.eclipse.e4.ui.internal.workbench.swt.E4Application.createDefaultHeadlessContext(E4Application.java:420) at org.eclipse.e4.ui.internal.workbench.swt.E4Application.createDefaultContext(E4Application.java:434) at org.eclipse.e4.ui.internal.workbench.swt.E4Application.createE4Workbench(E4Application.java:182) at org.eclipse.ui.internal.Workbench$5.run(Workbench.java:557) at org.eclipse.core.databinding.observable.Realm.runWithDefault(Realm.java:332) at org.eclipse.ui.internal.Workbench.createAndRunWorkbench(Workbench.java:543) at org.eclipse.ui.PlatformUI.createAndRunWorkbench(PlatformUI.java:149) at org.eclipse.ui.internal.ide.application.IDEApplication.start(IDEApplication.java:124) at org.eclipse.equinox.internal.app.EclipseAppHandle.run(EclipseAppHandle.java:196) at org.eclipse.core.runtime.internal.adaptor.EclipseAppLauncher.runApplication(EclipseAppLauncher.java:110) at org.eclipse.core.runtime.internal.adaptor.EclipseAppLauncher.start(EclipseAppLauncher.java:79) at org.eclipse.core.runtime.adaptor.EclipseStarter.run(EclipseStarter.java:353) at org.eclipse.core.runtime.adaptor.EclipseStarter.run(EclipseStarter.java:180) at java.lang.reflect.Method.invoke(libgcj.so.12) at org.eclipse.equinox.launcher.Main.invokeFramework(Main.java:629) at org.eclipse.equinox.launcher.Main.basicRun(Main.java:584) at org.eclipse.equinox.launcher.Main.run(Main.java:1438) at org.eclipse.equinox.launcher.Main.main(Main.java:1414) would you please help me with this?
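
    One detail stands out in the log above: java.fullversion=GNU libgcj 4.6.3, i.e. Eclipse is being started with GCJ rather than a real JDK, which Juno does not support and which would plausibly explain the unresolved osgi.ee=JavaSE/1.6 requirements. A possible fix - a sketch only, assuming 64-bit Ubuntu 12.04 and a manually unpacked Eclipse - is to install OpenJDK and point eclipse.ini at it:

        # Install a proper JDK and make it the system default:
        sudo apt-get install openjdk-7-jdk
        sudo update-alternatives --config java

        # Then, in the eclipse.ini next to the eclipse binary, add a -vm entry
        # ABOVE the -vmargs line. The JVM path is an assumption - check what
        # actually exists under /usr/lib/jvm on your machine:
        #   -vm
        #   /usr/lib/jvm/java-7-openjdk-amd64/bin/java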


  • Podcast: Advanced MVVM with Josh Smith

    - by craigshoemaker
    Author, Microsoft MVP, and accomplished pianist Josh Smith, Sr. UX Developer at IdentityMine, joins the show to discuss some of Model View ViewModel's more advanced scenarios. Josh shares his experience using MVVM and gives some real-world advice on: using modal dialogs; the evils and virtues of code-behind in views; use of attached behaviors; undo/redo strategies; working with animations; building a task-based architecture for managing communication between View and ViewModel; and frameworks in the MVVM space. The Book: get first-hand experience implementing the solutions to the challenges discussed in the show by reading Josh's new book 'Advanced MVVM'. Resources mentioned in the show: Laurent Bugnion's MIX talk 'Understanding the Model-View-ViewModel Pattern', Josh Smith's MVVM Foundation, Laurent Bugnion's MVVM Light framework, and Rob Eisenberg's Caliburn.
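
    As a taste of the plumbing behind several of these topics, here is a minimal sketch of the RelayCommand pattern popularized by Josh Smith's MVVM Foundation - the usual way a ViewModel exposes an action to a View without resorting to code-behind. This is an illustrative reimplementation, not code from the podcast or the book.

        using System;
        using System.Windows.Input;

        // Minimal RelayCommand: wraps delegates so a ViewModel can expose an
        // action to XAML through ICommand, keeping logic out of the View.
        public class RelayCommand : ICommand
        {
            private readonly Action<object> _execute;
            private readonly Predicate<object> _canExecute;

            public RelayCommand(Action<object> execute, Predicate<object> canExecute)
            {
                if (execute == null) throw new ArgumentNullException("execute");
                _execute = execute;
                _canExecute = canExecute;
            }

            public bool CanExecute(object parameter)
            {
                return _canExecute == null || _canExecute(parameter);
            }

            public void Execute(object parameter)
            {
                _execute(parameter);
            }

            // Let WPF re-query CanExecute whenever input state changes.
            public event EventHandler CanExecuteChanged
            {
                add { CommandManager.RequerySuggested += value; }
                remove { CommandManager.RequerySuggested -= value; }
            }
        }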


  • ASP.NET MVC ModelCopier

    - by shiju
    In my earlier post, ViewModel pattern and AutoMapper in ASP.NET MVC application, we discussed the need for View Model objects and how to map values between View Model objects and Domain model objects using AutoMapper. The ASP.NET MVC futures assembly provides a static class, ModelCopier, that can also be used for copying values between View Model objects and Domain model objects. The ModelCopier class has two static methods - CopyCollection and CopyModel. CopyCollection copies values between two collection objects, and CopyModel copies values between two model objects.

        var expense = new Expense();
        ModelCopier.CopyModel(expenseViewModel, expense);

    The code above copies values from the expenseViewModel object to the expense object. For simple mapping between model objects you can use ModelCopier, but for complex scenarios I highly recommend using AutoMapper for mapping between model objects.
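
    For context, here is a minimal end-to-end sketch. The Expense and ExpenseViewModel classes are hypothetical stand-ins with matching property names - which is what ModelCopier keys on - and only ModelCopier.CopyModel itself comes from the MVC futures assembly.

        using Microsoft.Web.Mvc; // ASP.NET MVC futures assembly (ModelCopier)

        // Hypothetical domain model and view model with matching property names.
        public class Expense
        {
            public decimal Amount { get; set; }
            public string Description { get; set; }
        }

        public class ExpenseViewModel
        {
            public decimal Amount { get; set; }
            public string Description { get; set; }
        }

        public class ExpenseMapping
        {
            public static Expense ToDomain(ExpenseViewModel expenseViewModel)
            {
                // View model -> domain model, exactly as in the snippet above;
                // properties are matched by name and type.
                var expense = new Expense();
                ModelCopier.CopyModel(expenseViewModel, expense);
                return expense;
            }
        }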


  • An Introduction to Meteor

    - by Stephen.Walther
    The goal of this blog post is to give you a brief introduction to Meteor which is a framework for building Single Page Apps. In this blog entry, I provide a walkthrough of building a simple Movie database app. What is special about Meteor? Meteor has two jaw-dropping features: Live HTML – If you make any changes to the HTML, CSS, JavaScript, or data on the server then every client shows the changes automatically without a browser refresh. For example, if you change the background color of a page to yellow then every open browser will show the new yellow background color without a refresh. Or, if you add a new movie to a collection of movies, then every open browser will display the new movie automatically. With Live HTML, users no longer need a refresh button. Changes to an application happen everywhere automatically without any effort. The Meteor framework handles all of the messy details of keeping all of the clients in sync with the server for you. Latency Compensation – When you modify data on the client, these modifications appear as if they happened on the server without any delay. For example, if you create a new movie then the movie appears instantly. However, that is all an illusion. In the background, Meteor updates the database with the new movie. If, for whatever reason, the movie cannot be added to the database then Meteor removes the movie from the client automatically. Latency compensation is extremely important for creating a responsive web application. You want the user to be able to make instant modifications in the browser and the framework to handle the details of updating the database without slowing down the user. Installing Meteor Meteor is licensed under the open-source MIT license and you can start building production apps with the framework right now. Be warned that Meteor is still in the “early preview” stage. It has not reached a 1.0 release. According to the Meteor FAQ, Meteor will reach version 1.0 in “More than a month, less than a year.” Don’t be scared away by that. You should be aware that, unlike most open source projects, Meteor has financial backing. The Meteor project received an $11.2 million round of financing from Andreessen Horowitz. So, it would be a good bet that this project will reach the 1.0 mark. And, if it doesn’t, the framework as it exists right now is still very powerful. Meteor runs on top of Node.js. You write Meteor apps by writing JavaScript which runs both on the client and on the server. You can build Meteor apps on Windows, Mac, or Linux (Although the support for Windows is still officially unofficial). If you want to install Meteor on Windows then download the MSI from the following URL: http://win.meteor.com/ If you want to install Meteor on Mac/Linux then run the following CURL command from your terminal: curl https://install.meteor.com | /bin/sh Meteor will install all of its dependencies automatically including Node.js. However, I recommend that you install Node.js before installing Meteor by installing Node.js from the following address: http://nodejs.org/ If you let Meteor install Node.js then Meteor won’t install NPM which is the standard package manager for Node.js. If you install Node.js and then you install Meteor then you get NPM automatically. Creating a New Meteor App To get a sense of how Meteor works, I am going to walk through the steps required to create a simple Movie database app. Our app will display a list of movies and contain a form for creating a new movie. 
The first thing that we need to do is create our new Meteor app. Open a command prompt/terminal window and execute the following command: Meteor create MovieApp After you execute this command, you should see something like the following: Follow the instructions: execute cd MovieApp to change to your MovieApp directory, and run the meteor command. Executing the meteor command starts Meteor on port 3000. Open up your favorite web browser and navigate to http://localhost:3000 and you should see the default Meteor Hello World page: Open up your favorite development environment to see what the Meteor app looks like. Open the MovieApp folder which we just created. Here’s what the MovieApp looks like in Visual Studio 2012: Notice that our MovieApp contains three files named MovieApp.css, MovieApp.html, and MovieApp.js. In other words, it contains a Cascading Style Sheet file, an HTML file, and a JavaScript file. Just for fun, let’s see how the Live HTML feature works. Open up multiple browsers and point each browser at http://localhost:3000. Now, open the MovieApp.html page and modify the text “Hello World!” to “Hello Cruel World!” and save the change. The text in all of the browsers should update automatically without a browser refresh. Pretty amazing, right? Controlling Where JavaScript Executes You write a Meteor app using JavaScript. Some of the JavaScript executes on the client (the browser) and some of the JavaScript executes on the server and some of the JavaScript executes in both places. For a super simple app, you can use the Meteor.isServer and Meteor.isClient properties to control where your JavaScript code executes. For example, the following JavaScript contains a section of code which executes on the server and a section of code which executes in the browser: if (Meteor.isClient) { console.log("Hello Browser!"); } if (Meteor.isServer) { console.log("Hello Server!"); } console.log("Hello Browser and Server!"); When you run the app, the message “Hello Browser!” is written to the browser JavaScript console. The message “Hello Server!” is written to the command/terminal window where you ran Meteor. Finally, the message “Hello Browser and Server!” is execute on both the browser and server and the message appears in both places. For simple apps, using Meteor.isClient and Meteor.isServer to control where JavaScript executes is fine. For more complex apps, you should create separate folders for your server and client code. Here are the folders which you can use in a Meteor app: · client – This folder contains any JavaScript which executes only on the client. · server – This folder contains any JavaScript which executes only on the server. · common – This folder contains any JavaScript code which executes on both the client and server. · lib – This folder contains any JavaScript files which you want to execute before any other JavaScript files. · public – This folder contains static application assets such as images. For the Movie App, we need the client, server, and common folders. Delete the existing MovieApp.js, MovieApp.html, and MovieApp.css files. We will create new files in the right locations later in this walkthrough. Combining HTML, CSS, and JavaScript Files Meteor combines all of your JavaScript files, and all of your Cascading Style Sheet files, and all of your HTML files automatically. If you want to create one humongous JavaScript file which contains all of the code for your app then that is your business. 
However, if you want to build a more maintainable application, then you should break your JavaScript files into many separate JavaScript files and let Meteor combine them for you. Meteor also combines all of your HTML files into a single file. HTML files are allowed to have the following top-level elements: <head> — All <head> files are combined into a single <head> and served with the initial page load. <body> — All <body> files are combined into a single <body> and served with the initial page load. <template> — All <template> files are compiled into JavaScript templates. Because you are creating a single page app, a Meteor app typically will contain a single HTML file for the <head> and <body> content. However, a Meteor app typically will contain several template files. In other words, all of the interesting stuff happens within the <template> files. Displaying a List of Movies Let me start building the Movie App by displaying a list of movies. In order to display a list of movies, we need to create the following four files: · client\movies.html – Contains the HTML for the <head> and <body> of the page for the Movie app. · client\moviesTemplate.html – Contains the HTML template for displaying the list of movies. · client\movies.js – Contains the JavaScript for supplying data to the moviesTemplate. · server\movies.js – Contains the JavaScript for seeding the database with movies. After you create these files, your folder structure should looks like this: Here’s what the client\movies.html file looks like: <head> <title>My Movie App</title> </head> <body> <h1>Movies</h1> {{> moviesTemplate }} </body>   Notice that it contains <head> and <body> top-level elements. The <body> element includes the moviesTemplate with the syntax {{> moviesTemplate }}. The moviesTemplate is defined in the client/moviesTemplate.html file: <template name="moviesTemplate"> <ul> {{#each movies}} <li> {{title}} </li> {{/each}} </ul> </template> By default, Meteor uses the Handlebars templating library. In the moviesTemplate above, Handlebars is used to loop through each of the movies using {{#each}}…{{/each}} and display the title for each movie using {{title}}. The client\movies.js JavaScript file is used to bind the moviesTemplate to the Movies collection on the client. Here’s what this JavaScript file looks like: // Declare client Movies collection Movies = new Meteor.Collection("movies"); // Bind moviesTemplate to Movies collection Template.moviesTemplate.movies = function () { return Movies.find(); }; The Movies collection is a client-side proxy for the server-side Movies database collection. Whenever you want to interact with the collection of Movies stored in the database, you use the Movies collection instead of communicating back to the server. The moviesTemplate is bound to the Movies collection by assigning a function to the Template.moviesTemplate.movies property. The function simply returns all of the movies from the Movies collection. The final file which we need is the server-side server\movies.js file: // Declare server Movies collection Movies = new Meteor.Collection("movies"); // Seed the movie database with a few movies Meteor.startup(function () { if (Movies.find().count() == 0) { Movies.insert({ title: "Star Wars", director: "Lucas" }); Movies.insert({ title: "Memento", director: "Nolan" }); Movies.insert({ title: "King Kong", director: "Jackson" }); } }); The server\movies.js file does two things. First, it declares the server-side Meteor Movies collection. 
When you declare a server-side Meteor collection, a collection is created in the MongoDB database associated with your Meteor app automatically (Meteor uses MongoDB as its database automatically). Second, the server\movies.js file seeds the Movies collection (MongoDB collection) with three movies. Seeding the database gives us some movies to look at when we open the Movies app in a browser. Creating New Movies Let me modify the Movies Database App so that we can add new movies to the database of movies. First, I need to create a new template file – named client\movieForm.html – which contains an HTML form for creating a new movie: <template name="movieForm"> <fieldset> <legend>Add New Movie</legend> <form> <div> <label> Title: <input id="title" /> </label> </div> <div> <label> Director: <input id="director" /> </label> </div> <div> <input type="submit" value="Add Movie" /> </div> </form> </fieldset> </template> In order for the new form to show up, I need to modify the client\movies.html file to include the movieForm.html template. Notice that I added {{> movieForm }} to the client\movies.html file: <head> <title>My Movie App</title> </head> <body> <h1>Movies</h1> {{> moviesTemplate }} {{> movieForm }} </body> After I make these modifications, our Movie app will display the form: The next step is to handle the submit event for the movie form. Below, I’ve modified the client\movies.js file so that it contains a handler for the submit event raised when you submit the form contained in the movieForm.html template: // Declare client Movies collection Movies = new Meteor.Collection("movies"); // Bind moviesTemplate to Movies collection Template.moviesTemplate.movies = function () { return Movies.find(); }; // Handle movieForm events Template.movieForm.events = { 'submit': function (e, tmpl) { // Don't postback e.preventDefault(); // create the new movie var newMovie = { title: tmpl.find("#title").value, director: tmpl.find("#director").value }; // add the movie to the db Movies.insert(newMovie); } }; The Template.movieForm.events property contains an event map which maps event names to handlers. In this case, I am mapping the form submit event to an anonymous function which handles the event. In the event handler, I am first preventing a postback by calling e.preventDefault(). This is a single page app, no postbacks are allowed! Next, I am grabbing the new movie from the HTML form. I’m taking advantage of the template find() method to retrieve the form field values. Finally, I am calling Movies.insert() to insert the new movie into the Movies collection. Here, I am explicitly inserting the new movie into the client-side Movies collection. Meteor inserts the new movie into the server-side Movies collection behind the scenes. When Meteor inserts the movie into the server-side collection, the new movie is added to the MongoDB database associated with the Movies app automatically. If server-side insertion fails for whatever reasons – for example, your internet connection is lost – then Meteor will remove the movie from the client-side Movies collection automatically. In other words, Meteor takes care of keeping the client Movies collection and the server Movies collection in sync. If you open multiple browsers, and add movies, then you should notice that all of the movies appear on all of the open browser automatically. You don’t need to refresh individual browsers to update the client-side Movies collection. Meteor keeps everything synchronized between the browsers and server for you. 
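
    As a quick sanity check that the seed data really reached MongoDB, the meteor tool bundles a Mongo shell. This is a hypothetical check, not part of the original walkthrough; run it from the MovieApp directory while the app is running:

        $ meteor mongo
        > db.movies.find()   // should list Star Wars, Memento, and King Kong
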
Removing the Insecure Module To make it easier to develop and debug a new Meteor app, by default, you can modify the database directly from the client. For example, you can delete all of the data in the database by opening up your browser console window and executing multiple Movies.remove() commands. Obviously, enabling anyone to modify your database from the browser is not a good idea in a production application. Before you make a Meteor app public, you should first run the meteor remove insecure command from a command/terminal window: Running meteor remove insecure removes the insecure package from the Movie app. Unfortunately, it also breaks our Movie app. We’ll get an “Access denied” error in our browser console whenever we try to insert a new movie. No worries. I’ll fix this issue in the next section. Creating Meteor Methods By taking advantage of Meteor Methods, you can create methods which can be invoked on both the client and the server. By taking advantage of Meteor Methods you can: 1. Perform form validation on both the client and the server. For example, even if an evil hacker bypasses your client code, you can still prevent the hacker from submitting an invalid value for a form field by enforcing validation on the server. 2. Simulate database operations on the client but actually perform the operations on the server. Let me show you how we can modify our Movie app so it uses Meteor Methods to insert a new movie. First, we need to create a new file named common\methods.js which contains the definition of our Meteor Methods: Meteor.methods({ addMovie: function (newMovie) { // Perform form validation if (newMovie.title == "") { throw new Meteor.Error(413, "Missing title!"); } if (newMovie.director == "") { throw new Meteor.Error(413, "Missing director!"); } // Insert movie (simulate on client, do it on server) return Movies.insert(newMovie); } }); The addMovie() method is called from both the client and the server. This method does two things. First, it performs some basic validation. If you don’t enter a title or you don’t enter a director then an error is thrown. Second, the addMovie() method inserts the new movie into the Movies collection. When called on the client, inserting the new movie into the Movies collection just updates the collection. When called on the server, inserting the new movie into the Movies collection causes the database (MongoDB) to be updated with the new movie. You must add the common\methods.js file to the common folder so it will get executed on both the client and the server. Our folder structure now looks like this: We actually call the addMovie() method within our client code in the client\movies.js file. Here’s what the updated file looks like: // Declare client Movies collection Movies = new Meteor.Collection("movies"); // Bind moviesTemplate to Movies collection Template.moviesTemplate.movies = function () { return Movies.find(); }; // Handle movieForm events Template.movieForm.events = { 'submit': function (e, tmpl) { // Don't postback e.preventDefault(); // create the new movie var newMovie = { title: tmpl.find("#title").value, director: tmpl.find("#director").value }; // add the movie to the db Meteor.call( "addMovie", newMovie, function (err, result) { if (err) { alert("Could not add movie " + err.reason); } } ); } }; The addMovie() method is called – on both the client and the server – by calling the Meteor.call() method. This method accepts the following parameters: · The string name of the method to call. 
· The data to pass to the method (You can actually pass multiple params for the data if you like). · A callback function to invoke after the method completes. In the JavaScript code above, the addMovie() method is called with the new movie retrieved from the HTML form. The callback checks for an error. If there is an error then the error reason is displayed in an alert (please don’t use alerts for validation errors in a production app because they are ugly!). Summary The goal of this blog post was to provide you with a brief walk through of a simple Meteor app. I showed you how you can create a simple Movie Database app which enables you to display a list of movies and create new movies. I also explained why it is important to remove the Meteor insecure package from a production app. I showed you how to use Meteor Methods to insert data into the database instead of doing it directly from the client. I’m very impressed with the Meteor framework. The support for Live HTML and Latency Compensation are required features for many real world Single Page Apps but implementing these features by hand is not easy. Meteor makes it easy.
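
    One last hypothetical test, not from the original post: with the insecure package removed, you can exercise the server-side half of the addMovie validation straight from the browser console.

        // Title is empty, so the addMovie method defined in common\methods.js
        // should reject the call with the Meteor.Error raised on the server.
        Meteor.call("addMovie", { title: "", director: "Nolan" }, function (err, result) {
          console.log(err && err.reason); // expected: "Missing title!"
        });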


  • Book Review: Oracle ADF Real World Developer’s Guide

    - by Frank Nimphius
    Recently PACKT Publishing published "Oracle ADF Real World Developer’s Guide" by Jobinesh Purushothaman, a product manager in our team. Though already the sixth book dedicated to Oracle ADF, it has a lot of great information in it that none of the previous books covered, making it a safe buy even for those who own the other books published by Oracle Press (McGraw-Hill) and PACKT Publishing. More than half of the "Oracle ADF Real World Developer’s Guide" book is dedicated to Oracle ADF Business Components, in a depth and clarity that lets you feel the expertise Jobinesh has gained in this area. If you enjoy Jobinesh's blog (http://jobinesh.blogspot.co.uk/) about Oracle ADF then, no matter how much of an ADF expert you are, this book will make you happy, as it provides the detailed information you always wished you had. If you are new to Oracle ADF, this book alone doesn't get you flying, but if you have some Java background it accelerates your learning big time. Chapter 1 is an introduction to Oracle ADF that not only explains the layers but also how it compares to plain Java EE solutions (page 13). If you are new to Oracle JDeveloper and ADF, then at the end of this chapter you know how to start JDeveloper and begin your ADF development. Chapter 2 starts with what Jobinesh is really good at: ADF Business Components. In this chapter you learn about the architecture ingredients of ADF Business Components: View Objects, View Links, Associations, Entities, Row Sets, Query Collections and Application Modules. This chapter also provides an introduction to ADFBC SDO services, as well as sequence diagrams for what happens when you execute queries or commit updates. Chapter 3 is dedicated to entity objects and is one of many chapters in this book you will enjoy and never want to miss. Jobinesh explains the artifacts that make up an entity object, how to work with entities and resource bundles, and many advanced topics, including inheritance, change history tracking, custom properties, validation and cursor handling. Chapter 4 - you guessed it - is all about view objects. As with entities, you learn about the XML files and classes that make up a view object, as well as how to define and work with queries. List-of-values, inheritance, polymorphism, bind variables and data filtering are interesting - and important - topics that follow. Again the chapter provides helpful sequence diagrams to help you understand what happens internally within a view object. Chapter 5 focuses on advanced view object and entity object topics, like lifecycle callback methods and when you want to override them. This chapter is a good digest of Jobinesh's blog entries (which most ADF developers have in their bookmark list). Really worth reading! Chapter 6 then is about Application Modules. Besides explaining what application modules are, this chapter covers important topics like properties, passivation, activation, application module pooling, and how and where to write custom logic. In addition you learn about the AM lifecycle and request sequence. Chapter 7 is about the ADF binding layer. If you are new to Oracle ADF and got lost in the more advanced ADF Business Components chapters, this chapter is where you get back into the game. In very easy terms, Jobinesh explains what the ADF binding is, how it fits into the JSF request lifecycle and what metadata files are involved. Chapter 8 then goes into building data-bound web user interfaces. In this chapter you get the basics of JavaServer Faces (e.g.
    managed beans) and learn about the interaction between the JSF UI and the ADF binding layer. Later, this chapter provides advanced solutions for working with tree components and lists of values. Chapter 9 introduces bounded task flows and the ADF Controller. This is a chapter you want to read if you are new to ADF or have just started. Experts won't find anything new here, which doesn't mean it is not worth reading (I, for example, enjoyed the controller discussion very much). Chapter 10 provides advanced coverage of bounded task flows and talks about contextual events. Chapter 11 is another highlight and explains error handling, trains, transactions and more. I can only recommend you read this chapter. I am aware of many documents that cover exception handling in Oracle ADF (and my Oracle Magazine article for January/February 2013 does the same), but none that covers it in such great depth. Chapter 12 covers ADF best practices, a great round-up of all the tips provided in this book (without Jobinesh repeating himself). It's all cool stuff that helps you with your ADF projects. In summary, "Oracle ADF Real World Developer’s Guide" by Jobinesh Purushothaman is a great book and a great addition for all Oracle ADF developers and those who want to become one. Frank


  • Debugging Tips for Skinning

    - by Christian David Straub
    Another guest post by Jeanne Waldman. If you are developing a skin for your Fusion Application in JDeveloper, you should know these tips:
    1. Firebug is your friend
    2. Uncompress the CSS style classes
    3. CHECK_FILE_MODIFICATION so that you see your skinning changes right away
    4. View the generated CSS file
    1. Firebug is your friend. Install Firebug (http://getfirebug.com/layout) into Firefox and use it to view your rendered jspx page in the browser. You can select the HTML DOM nodes on your page and see the CSS styles applied to each DOM node.
    2. Uncompress the CSS style classes. By default, the style classes that are rendered are compressed. You may see style classes like class="x10" and class="x2e", but in your skin CSS file you have style classes like af|inputText::content or af|panelBox::header. It is easier for you to develop and debug a skin with Firebug if you see the uncompressed style classes. To do this: a. open web.xml; b. add <context-param> <param-name>org.apache.myfaces.trinidad.DISABLE_CONTENT_COMPRESSION</param-name> <param-value>true</param-value> </context-param>; c. save; d. restart the server and re-run your page.
    3. CHECK_FILE_MODIFICATION so that you see your skinning changes right away. For performance's sake, the ADF Faces framework does not check whether your skin's .css file has changed on every render. But this is exactly what you want to happen when you are developing or debugging a skin: you want your changes to get noticed right away, without restarting the server. To do this: a. open web.xml; b. add <context-param> <description>If this parameter is true, there will be an automatic check of the modification date of your JSPs, and saved state will be discarded when JSPs change. It will also automatically check if your skinning css files have changed without you having to restart the server. This makes development easier, but adds overhead. For this reason this parameter should be set to false when your application is deployed.</description> <param-name>org.apache.myfaces.trinidad.CHECK_FILE_MODIFICATION</param-name> <param-value>true</param-value> </context-param>; c. save; d. restart the server and re-run your page; e. from then on, you can change your skin's .css file, save it, refresh your page, and you should see the changes right away.
    4. View the generated CSS file. There are different ways to view the generated CSS file, which is your skin's CSS file merged with all the skins it extends, processed, generated to the filesystem, and linked to your generated HTML page. One way is to view it with Firebug. The problem with this approach is that you might see something a little different from the actual CSS file, because Firebug may do some extra processing. I like to view the generated CSS file by: Right click on your page in the browser
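
    To make tip 2 concrete, here is an illustrative skin entry. With content compression disabled, a selector such as af|inputText::content shows up uncompressed in the rendered page, so you can match your skin source directly against what Firebug displays. The property values are just examples:

        /* Illustrative skin rules - not from the original post. */
        af|inputText::content {
          background-color: #ffffcc;
        }
        af|panelBox::header {
          font-weight: bold;
        }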


  • Tomcat still running after uninstalling

    - by Rohit Jain
    I installed tomcat7 using the following command: sudo apt-get install tomcat7 Then, to uninstall it, I first listed the installed packages with: sudo dpkg -l | grep tomcat This listed all the packages related to Tomcat. Then I removed tomcat7: sudo dpkg -P tomcat7 After that, I saw that some related packages were still there, and surprisingly I was still able to access the Tomcat home page at http://localhost:8080. So I tried to remove it using the following commands: sudo apt-get remove tomcat7 sudo apt-get autoremove But I was still able to access the Tomcat home page. So I rebooted my PC, thinking the change would take effect after that. But again, I'm still able to access the home page, which means Tomcat is still running on my PC. What's going on here? Have I followed the correct steps to uninstall Tomcat? I want to uninstall it so I can re-install a private instance of Tomcat. I found that the directory /usr/share/tomcat7 is still there: /usr/share/tomcat7$ ls conf log webapps Does this have something to do with the uninstallation?
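
    For what it's worth, a sketch of a more thorough removal. The companion package names below (tomcat7-common, libtomcat7-java) are the usual Ubuntu suspects, but check them against the dpkg -l output above rather than taking them on faith:

        # Stop the service first so port 8080 is actually released:
        sudo service tomcat7 stop

        # Purge (not just remove) the tomcat packages, then clean up orphans:
        sudo apt-get purge tomcat7 tomcat7-common libtomcat7-java
        sudo apt-get autoremove --purge

        # Any leftover trees can then be deleted by hand if they still exist:
        sudo rm -rf /usr/share/tomcat7 /var/lib/tomcat7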

    Read the article

  • Internet Explorer 9 Cannot open file on download CTRL + J doesn’t work can’t open list of downloads

    - by simonsabin
    If any of the above symptoms are causing a problem, i.e. 1. You download a file and the download dialog disappears. 2. You select Open when you download a file and nothing happens. 3. The View Downloads window doesn't work. 4. CTRL + J (view downloads) doesn't work. The solution is to clear your download history. See IE9 - View downloads / Ctrl+J do not open. I cannot open any file. But SAVE function still work fine. 64 bit version. for details; the answer is provided by Steven S. on June 20th. I hope that...(read more)

    Read the article

  • Security Issues with Single Page Apps

    - by Stephen.Walther
    Last week, I was asked to do a code review of a Single Page App built using the ASP.NET Web API, Durandal, and Knockout (good stuff!). In particular, I was asked to investigate whether there are any special security issues associated with building a Single Page App which are not present in the case of a traditional server-side ASP.NET application. In this blog entry, I discuss two areas in which you need to exercise extra caution when building a Single Page App. I discuss how Single Page Apps are extra vulnerable to both Cross-Site Scripting (XSS) attacks and Cross-Site Request Forgery (CSRF) attacks. The goal of this blog post is NOT to persuade you to avoid writing Single Page Apps. I’m a big fan of Single Page Apps. Instead, the goal is to ensure that you are fully aware of some of the security issues related to Single Page Apps and ensure that you know how to guard against them. Cross-Site Scripting (XSS) Attacks According to WhiteHat Security, over 65% of public websites are open to XSS attacks. That’s bad. By taking advantage of XSS holes in a website, a hacker can steal your credit cards, passwords, or bank account information. Any website that redisplays untrusted information is open to XSS attacks. Let me give you a simple example. Imagine that you want to display the name of the current user on a page. To do this, you create the following server-side ASP.NET page located at http://MajorBank.com/SomePage.aspx: <%@Page Language="C#" %> <html> <head> <title>Some Page</title> </head> <body> Welcome <%= Request["username"] %> </body> </html> Nothing fancy here. Notice that the page displays the current username by using Request[“username”]. Using Request[“username”] displays the username regardless of whether the username is present in a cookie, a form field, or a query string variable. Unfortunately, by using Request[“username”] to redisplay untrusted information, you have now opened your website to XSS attacks. Here’s how. Imagine that an evil hacker creates the following link on another website (hackers.com): <a href="/SomePage.aspx?username=<script src=Evil.js></script>">Visit MajorBank</a> Notice that the link includes a query string variable named username and the value of the username variable is an HTML <SCRIPT> tag which points to a JavaScript file named Evil.js. When anyone clicks on the link, the <SCRIPT> tag will be injected into SomePage.aspx and the Evil.js script will be loaded and executed. What can a hacker do in the Evil.js script? Anything the hacker wants. For example, the hacker could display a popup dialog on the MajorBank.com site which asks the user to enter their password. The script could then post the password back to hackers.com and now the evil hacker has your secret password. ASP.NET Web Forms and ASP.NET MVC have two automatic safeguards against this type of attack: Request Validation and Automatic HTML Encoding. Protecting Coming In (Request Validation) In a server-side ASP.NET app, you are protected against the XSS attack described above by a feature named Request Validation. If you attempt to submit “potentially dangerous” content — such as a JavaScript <SCRIPT> tag — in a form field or query string variable then you get an exception. Unfortunately, Request Validation only applies to server-side apps. Request Validation does not help in the case of a Single Page App. In particular, the ASP.NET Web API does not pay attention to Request Validation. You can post any content you want – including <SCRIPT> tags – to an ASP.NET Web API action. 
For example, the following HTML page contains a form. When you submit the form, the form data is submitted to an ASP.NET Web API controller on the server using an Ajax request: <!DOCTYPE html> <html xmlns="http://www.w3.org/1999/xhtml"> <head> <title></title> </head> <body> <form data-bind="submit:submit"> <div> <label> User Name: <input data-bind="value:user.userName" /> </label> </div> <div> <label> Email: <input data-bind="value:user.email" /> </label> </div> <div> <input type="submit" value="Submit" /> </div> </form> <script src="Scripts/jquery-1.7.1.js"></script> <script src="Scripts/knockout-2.1.0.js"></script> <script> var viewModel = { user: { userName: ko.observable(), email: ko.observable() }, submit: function () { $.post("/api/users", ko.toJS(this.user)); } }; ko.applyBindings(viewModel); </script> </body> </html> The form above is using Knockout to bind the form fields to a view model. When you submit the form, the view model is submitted to an ASP.NET Web API action on the server. Here’s the server-side ASP.NET Web API controller and model class: public class UsersController : ApiController { public HttpResponseMessage Post(UserViewModel user) { var userName = user.UserName; return Request.CreateResponse(HttpStatusCode.OK); } } public class UserViewModel { public string UserName { get; set; } public string Email { get; set; } } If you submit the HTML form, you don’t get an error. The “potentially dangerous” content is passed to the server without any exception being thrown. In the screenshot below, you can see that I was able to post a username form field with the value “<script>alert(‘boo’)</script”. So what this means is that you do not get automatic Request Validation in the case of a Single Page App. You need to be extra careful in a Single Page App about ensuring that you do not display untrusted content because you don’t have the Request Validation safety net which you have in a traditional server-side ASP.NET app. Protecting Going Out (Automatic HTML Encoding) Server-side ASP.NET also protects you from XSS attacks when you render content. By default, all content rendered by the razor view engine is HTML encoded. For example, the following razor view displays the text “<b>Hello!</b>” instead of the text “Hello!” in bold: @{ var message = "<b>Hello!</b>"; } @message   If you don’t want to render content as HTML encoded in razor then you need to take the extra step of using the @Html.Raw() helper. In a Web Form page, if you use <%: %> instead of <%= %> then you get automatic HTML Encoding: <%@ Page Language="C#" %> <% var message = "<b>Hello!</b>"; %> <%: message %> This automatic HTML Encoding will prevent many types of XSS attacks. It prevents <script> tags from being rendered and only allows &lt;script&gt; tags to be rendered which are useless for executing JavaScript. (This automatic HTML encoding does not protect you from all forms of XSS attacks. For example, you can assign the value “javascript:alert(‘evil’)” to the Hyperlink control’s NavigateUrl property and execute the JavaScript). The situation with Knockout is more complicated. If you use the Knockout TEXT binding then you get HTML encoded content. 
On the other hand, if you use the HTML binding then you do not: <!-- This JavaScript DOES NOT execute --> <div data-bind="text:someProp"></div> <!-- This JavaScript DOES execute --> <div data-bind="html:someProp"></div> <script src="Scripts/jquery-1.7.1.js"></script> <script src="Scripts/knockout-2.1.0.js"></script> <script> var viewModel = { someProp : "<script>alert('Evil!')<" + "/script>" }; ko.applyBindings(viewModel); </script>   So, in the page above, the DIV element which uses the TEXT binding is safe from XSS attacks. According to the Knockout documentation: “Since this binding sets your text value using a text node, it’s safe to set any string value without risking HTML or script injection.” Just like server-side HTML encoding, Knockout does not protect you from all types of XSS attacks. For example, there is nothing in Knockout which prevents you from binding JavaScript to a hyperlink like this: <a data-bind="attr:{href:homePageUrl}">Go</a> <script src="Scripts/jquery-1.7.1.min.js"></script> <script src="Scripts/knockout-2.1.0.js"></script> <script> var viewModel = { homePageUrl: "javascript:alert('evil!')" }; ko.applyBindings(viewModel); </script> In the page above, the value “javascript:alert(‘evil’)” is bound to the HREF attribute using Knockout. When you click the link, the JavaScript executes. Cross-Site Request Forgery (CSRF) Attacks Cross-Site Request Forgery (CSRF) attacks rely on the fact that a session cookie does not expire until you close your browser. In particular, if you visit and log in to MajorBank.com and then you navigate to Hackers.com, you will still be authenticated against MajorBank.com even after you navigate to Hackers.com. Because MajorBank.com cannot tell whether a request is coming from MajorBank.com or Hackers.com, Hackers.com can submit requests to MajorBank.com pretending to be you. For example, Hackers.com can post an HTML form from Hackers.com to MajorBank.com, using your authentication cookie, and change your email address at MajorBank.com. After your email address has been changed, a hacker can use the password reset page at MajorBank.com to access your bank account. To prevent CSRF attacks, you need some mechanism for detecting whether a request is coming from a page loaded from your website or whether the request is coming from some other website. The recommended way of preventing Cross-Site Request Forgery attacks is to use the “Synchronizer Token Pattern” as described here: https://www.owasp.org/index.php/Cross-Site_Request_Forgery_%28CSRF%29_Prevention_Cheat_Sheet When using the Synchronizer Token Pattern, you include a hidden input field which contains a random token whenever you display an HTML form. When the user opens the form, you add a cookie to the user’s browser with the same random token. When the user posts the form, you verify that the hidden form token and the cookie token match. Preventing Cross-Site Request Forgery Attacks with ASP.NET MVC ASP.NET gives you a helper and an action filter which you can use to thwart Cross-Site Request Forgery attacks. 
For example, the following razor form for creating a product shows how you use the @Html.AntiForgeryToken() helper: @model MvcApplication2.Models.Product <h2>Create Product</h2> @using (Html.BeginForm()) { @Html.AntiForgeryToken(); <div> @Html.LabelFor( p => p.Name, "Product Name:") @Html.TextBoxFor( p => p.Name) </div> <div> @Html.LabelFor( p => p.Price, "Product Price:") @Html.TextBoxFor( p => p.Price) </div> <input type="submit" /> } The @Html.AntiForgeryToken() helper generates a random token and assigns a serialized version of the same random token to both a cookie and a hidden form field. (Actually, if you dive into the source code, the AntiForgeryToken() does something a little more complex because it takes advantage of a user’s identity when generating the token). Here’s what the hidden form field looks like: <input name=”__RequestVerificationToken” type=”hidden” value=”NqqZGAmlDHh6fPTNR_mti3nYGUDgpIkCiJHnEEL59S7FNToyyeSo7v4AfzF2i67Cv0qTB1TgmZcqiVtgdkW2NnXgEcBc-iBts0x6WAIShtM1″ /> And here’s what the cookie looks like using the Google Chrome developer toolbar: You use the [ValidateAntiForgeryToken] action filter on the controller action which is the recipient of the form post to validate that the token in the hidden form field matches the token in the cookie. If the tokens don’t match then validation fails and you can’t post the form: public ActionResult Create() { return View(); } [ValidateAntiForgeryToken] [HttpPost] public ActionResult Create(Product productToCreate) { if (ModelState.IsValid) { // save product to db return RedirectToAction("Index"); } return View(); } How does this all work? Let’s imagine that a hacker has copied the Create Product page from MajorBank.com to Hackers.com – the hacker grabs the HTML source and places it at Hackers.com. Now, imagine that the hacker tricks you into submitting the Create Product form from Hackers.com to MajorBank.com. You’ll get the following exception: The Cross-Site Request Forgery attack is blocked because the anti-forgery token included in the Create Product form at Hackers.com won’t match the anti-forgery token stored in the cookie in your browser. The tokens were generated at different times for different users so the attack fails. Preventing Cross-Site Request Forgery Attacks with a Single Page App In a Single Page App, you can’t prevent Cross-Site Request Forgery attacks using the same method as a server-side ASP.NET MVC app. In a Single Page App, HTML forms are not generated on the server. Instead, in a Single Page App, forms are loaded dynamically in the browser. Phil Haack has a blog post on this topic where he discusses passing the anti-forgery token in an Ajax header instead of a hidden form field. He also describes how you can create a custom anti-forgery token attribute to compare the token in the Ajax header and the token in the cookie. See: http://haacked.com/archive/2011/10/10/preventing-csrf-with-ajax.aspx Also, take a look at Johan’s update to Phil Haack’s original post: http://johan.driessen.se/posts/Updated-Anti-XSRF-Validation-for-ASP.NET-MVC-4-RC (Other server frameworks such as Rails and Django do something similar. For example, Rails uses an X-CSRF-Token to prevent CSRF attacks which you generate on the server – see http://excid3.com/blog/rails-tip-2-include-csrf-token-with-every-ajax-request/#.UTFtgDDkvL8 ). 
For example, if you are creating a Durandal app, then you can use the following razor view for your one and only server-side page: @{ Layout = null; } <!DOCTYPE html> <html> <head> <title>Index</title> </head> <body> @Html.AntiForgeryToken() <div id="applicationHost"> Loading app.... </div> @Scripts.Render("~/scripts/vendor") <script type="text/javascript" src="~/App/durandal/amd/require.js" data-main="/App/main"></script> </body> </html> Notice that this page includes a call to @Html.AntiForgeryToken() to generate the anti-forgery token. Then, whenever you make an Ajax request in the Durandal app, you can retrieve the anti-forgery token from the razor view and pass the token as a header: var csrfToken = $("input[name='__RequestVerificationToken']").val(); $.ajax({ headers: { __RequestVerificationToken: csrfToken }, type: "POST", dataType: "json", contentType: 'application/json; charset=utf-8', url: "/api/products", data: JSON.stringify({ name: "Milk", price: 2.33 }), statusCode: { 200: function () { alert("Success!"); } } }); Use the following code to create an action filter which you can use to match the header and cookie tokens: using System.Linq; using System.Net.Http; using System.Web.Helpers; using System.Web.Http.Controllers; namespace MvcApplication2.Infrastructure { public class ValidateAjaxAntiForgeryToken : System.Web.Http.AuthorizeAttribute { protected override bool IsAuthorized(HttpActionContext actionContext) { var headerToken = actionContext .Request .Headers .GetValues("__RequestVerificationToken") .FirstOrDefault(); ; var cookieToken = actionContext .Request .Headers .GetCookies() .Select(c => c[AntiForgeryConfig.CookieName]) .FirstOrDefault(); // check for missing cookie or header if (cookieToken == null || headerToken == null) { return false; } // ensure that the cookie matches the header try { AntiForgery.Validate(cookieToken.Value, headerToken); } catch { return false; } return base.IsAuthorized(actionContext); } } } Notice that the action filter derives from the base AuthorizeAttribute. The ValidateAjaxAntiForgeryToken only works when the user is authenticated and it will not work for anonymous requests. Add the action filter to your ASP.NET Web API controller actions like this: [ValidateAjaxAntiForgeryToken] public HttpResponseMessage PostProduct(Product productToCreate) { // add product to db return Request.CreateResponse(HttpStatusCode.OK); } After you complete these steps, it won’t be possible for a hacker to pretend to be you at Hackers.com and submit a form to MajorBank.com. The header token used in the Ajax request won’t travel to Hackers.com. This approach works, but I am not entirely happy with it. The one thing that I don’t like about this approach is that it creates a hard dependency on using razor. Your single page in your Single Page App must be generated from a server-side razor view. A better solution would be to generate the anti-forgery token in JavaScript. Unfortunately, until all browsers support a way to generate cryptographically strong random numbers – for example, by supporting the window.crypto.getRandomValues() method — there is no good way to generate anti-forgery tokens in JavaScript. So, at least right now, the best solution for generating the tokens is the server-side solution with the (regrettable) dependency on razor. Conclusion The goal of this blog entry was to explore some ways in which you need to handle security differently in the case of a Single Page App than in the case of a traditional server app. 
In particular, I focused on how to prevent Cross-Site Scripting and Cross-Site Request Forgery attacks in the case of a Single Page App. I want to emphasize that I am not suggesting that Single Page Apps are inherently less secure than server-side apps. Whatever type of web application you build – regardless of whether it is a Single Page App, an ASP.NET MVC app, an ASP.NET Web Forms app, or a Rails app – you must constantly guard against security vulnerabilities.

    Read the article

  • Html.RenderAction Failed when Validation Failed

    - by Shaun
    The RenderAction method was introduced in the MVC Futures assembly when ASP.NET MVC 1.0 was released, and then finally announced along with ASP.NET MVC 2.0. Similar to RenderPartial, RenderAction can display HTML markup defined in a partial view inside any parent view. But RenderAction gives us the ability to populate the data from an action which may differ from the action populating the main view. For example, in Home/Index.aspx we can invoke Html.RenderPartial(“MyPartialView”), but the data of MyPartialView must be populated by the Index action of the Home controller. If we need MyPartialView to be shown in Product/Create.aspx we have to copy (or invoke) the relevant code from the Index action in the Home controller to the Create action in the Product controller, which is painful. But if we are using Html.RenderAction we can tell ASP.NET MVC from which action/controller the data should be populated. That way, in the Home/Index.aspx and Product/Create.aspx views we just need to call Html.RenderAction(“CreateMyPartialView”, “MyPartialView”) and it will invoke the CreateMyPartialView action in the MyPartialView controller regardless of the main view. But in my current project we found a bug when I implemented a RenderAction method in the master page to show something that needs to connect to the backend data center, and the validation logic failed on some pages. I created a sample application below.   Demo application I created an ASP.NET MVC 2 application where I need to display the current date and time on the master page. I created an action in the Home controller named TimeSlot and stored the current date into ViewData. This method was marked as HttpGet as it just retrieves some data instead of changing anything. [HttpGet] public ActionResult TimeSlot() { ViewData["timeslot"] = DateTime.Now; return View("TimeSlot"); } Next, I created a partial view under the Shared folder to display the date and time string. <%@ Control Language="C#" Inherits="System.Web.Mvc.ViewUserControl<dynamic>" %> <span>Now: <%: ViewData["timeslot"].ToString() %></span> Then at the master page I used Html.RenderAction to display it in front of the logon link. <div id="logindisplay"> <% Html.RenderAction("TimeSlot", "Home"); %> <% Html.RenderPartial("LogOnUserControl"); %> </div> It’s fairly simple and works well when I navigate to any page. But when I moved to the logon page and clicked the LogOn button without entering anything in username and password, the validation failed and my website crashed with the beautiful yellow page. (I really like its color style and fonts…)   How ASP.NET MVC executes Html.RenderAction In this example all other pages were rendered successfully, which means ASP.NET MVC found the TimeSlot action under the Home controller, except in this situation. The only difference is that when I clicked the LogOn button the browser sent an HttpPost request to the server. Is that the reason for this bug? I created another action in the Home controller with the same action name but for HttpPost. [HttpPost] [ActionName("TimeSlot")] public ActionResult TimeSlot(object dummy) { return TimeSlot(); } Or, I can use the AcceptVerbsAttribute on the TimeSlot action to let it allow both HttpGet and HttpPost. [AcceptVerbs("GET", "POST")] public ActionResult TimeSlot() { ViewData["timeslot"] = DateTime.Now; return View("TimeSlot"); } And then I repeated what I did before, and this time it worked well. 
Why do we need the action for HttpPost here when it’s just retrieving data? That is because of how ASP.NET MVC executes the RenderAction method. In the source code of ASP.NET MVC we can see that when performing RenderAction, ASP.NET MVC creates a RequestContext instance from the current RequestContext and creates a ChildActionMvcHandler instance, which inherits from the MvcHandler class. Then ASP.NET MVC processes the handler through the HttpContext.Server.Execute method. That means it performs the action as a stand-alone request and flushes the result into the TextWriter which is being used to render the current page. Since the request was an HttpPost when I clicked LogOn, ASP.NET MVC looked for an action that allows the current request method, HttpPost, when processing the ChildActionMvcHandler. Our HttpGet TimeSlot method therefore was not matched.   Summary In this post I introduced a bug in my current project regarding the new Html.RenderAction method provided within ASP.NET MVC 2 when processing an HttpPost request. In the ASP.NET MVC world the underlying HTTP information is more important than in the ASP.NET WebForms world. We need to pay more attention to what kind of request is being created and how ASP.NET MVC processes it.   Hope this helps, Shaun   All documents and related graphics, codes are provided "AS IS" without warranty of any kind. Copyright © Shaun Ziyan Xu. This work is licensed under the Creative Commons License.

    Read the article

  • July, the 31 Days of SQL Server DMO’s – Day 30 (sys.dm_server_registry)

    - by Tamarick Hill
    The sys.dm_server_registry DMV is used to provide SQL Server configuration and installation information that is currently stored in your Windows Registry. It is a very simple DMV that returns only three columns. The first column returned is the registry_key. The second column returned is the value_name, which is the name of the actual registry key value. The third and final column returned is the value_data, which is the value of the registry key data. Let's have a look at the information this DMV returns as well as some key values from the Windows Registry. SELECT * FROM sys.dm_server_registry The same keys can be viewed with RegEdit in the registry: This DMV provides you with a quick and easy way to view SQL Server instance registry values. For more information about this DMV, please see the below Books Online link: http://msdn.microsoft.com/en-us/library/hh204561.aspx Follow me on Twitter @PrimeTimeDBA
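    As a hedged illustration of putting the DMV to work (the registry paths vary by instance name and version, so the filter strings below are assumptions), a WHERE clause narrows the output to the network entries such as the TCP port:

        SELECT registry_key, value_name, value_data
        FROM sys.dm_server_registry
        WHERE registry_key LIKE N'%SuperSocketNetLib%'  -- network settings usually live under this key
           OR value_name = N'TcpPort';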

    Read the article

  • Is Oracle Solaris 11 Really Better Than Oracle Solaris 10?

    - by rickramsey
    If you want to be well armed for that debate, study this comparison of the commands and capabilities of each OS before the spittle starts flying: How Solaris 11 Compares to Solaris 10 For instance, did you know that the command to configure your wireless network in Solaris 11 is not wificonfig, but dladm and ipadm for manual configuration, and netcfg for automatic configuration? Personally, I think the change was made to correct the grievous offense of spelling out "config" in the wificonfig command, instead of sticking to the widely accepted "cfg" convention, but loath as I am to admit it, there may have been additional reasons for the change. This doc was written by the Solaris Documentation Team, and it not only compares the major features and command sequences in Solaris 11 to those in Solaris 10, but it links you to the sections of the documentation that explain them in detail. - Rick Website Newsletter Facebook Twitter
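    For the curious, a rough sketch of that manual flow (the link name net0 and the ESSID are placeholders, and option details may vary by Solaris 11 update):

        dladm scan-wifi net0                  # list the access points visible to the link
        dladm connect-wifi -e myessid net0    # associate with an ESSID
        ipadm create-ip net0                  # create the IP interface
        ipadm create-addr -T dhcp net0/v4     # obtain an IPv4 address via DHCP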

    Read the article

  • Using Stub Objects

    - by user9154181
    Having told the long and winding tale of where stub objects came from and how we use them to build Solaris, I'd like to focus now on the nuts and bolts of building and using them. The following new features were added to the Solaris link-editor (ld) to support the production and use of stub objects: -z stub This new command line option informs ld that it is to build a stub object rather than a normal object. In this mode, it accepts the same command line arguments as usual, but will quietly ignore any objects and sharable object dependencies. STUB_OBJECT Mapfile Directive In order to build a stub version of an object, its mapfile must specify the STUB_OBJECT directive. When producing a non-stub object, the presence of STUB_OBJECT causes the link-editor to perform extra validation to ensure that the stub and non-stub objects will be compatible. ASSERT Mapfile Directive All data symbols exported from the object must have an ASSERT symbol directive in the mapfile that declares them as data and supplies the size, binding, bss attributes, and symbol aliasing details. When building the stub objects, the information in these ASSERT directives is used to create the data symbols. When building the real object, these ASSERT directives will ensure that the real object matches the linking interface presented by the stub. Although ASSERT was added to the link-editor in order to support stub objects, it is a general purpose feature that can be used independently of stub objects. For instance you might choose to use an ASSERT directive if you have a symbol that must have a specific address in order for the object to operate properly and you want to automatically ensure that this will always be the case. The material presented here is derived from a document I originally wrote during the development effort, which had the dual goals of providing supplemental materials for the stub object PSARC case, and as a set of edits that were eventually applied to the Oracle Solaris Linker and Libraries Manual (LLM). The Solaris 11 LLM contains this information in a more polished form. Stub Objects A stub object is a shared object, built entirely from mapfiles, that supplies the same linking interface as the real object, while containing no code or data. Stub objects cannot be used at runtime. However, an application can be built against a stub object, where the stub object provides the real object name to be used at runtime, and then use the real object at runtime. When building a stub object, the link-editor ignores any object or library files specified on the command line, and these files need not exist in order to build a stub. Since the compilation step can be omitted, and because the link-editor has relatively little work to do, stub objects can be built very quickly. Stub objects can be used to solve a variety of build problems: Speed Modern machines, using a version of make with the ability to parallelize operations, are capable of compiling and linking many objects simultaneously, and doing so offers significant speedups. However, it is typical that a given object will depend on other objects, and that there will be a core set of objects that nearly everything else depends on. It is necessary to impose an ordering that builds each object before any other object that requires it. This ordering creates bottlenecks that reduce the amount of parallelization that is possible and limits the overall speed at which the code can be built. 
Complexity/Correctness In a large body of code, there can be a large number of dependencies between the various objects. The makefiles or other build descriptions for these objects can become very complex and difficult to understand or maintain. The dependencies can change as the system evolves. This can cause a given set of makefiles to become slightly incorrect over time, leading to race conditions and mysterious rare build failures. Dependency Cycles It might be desirable to organize code as cooperating shared objects, each of which draw on the resources provided by the other. Such cycles cannot be supported in an environment where objects must be built before the objects that use them, even though the runtime linker is fully capable of loading and using such objects if they could be built. Stub shared objects offer an alternative method for building code that sidesteps the above issues. Stub objects can be quickly built for all the shared objects produced by the build. Then, all the real shared objects and executables can be built in parallel, in any order, using the stub objects to stand in for the real objects at link-time. Afterwards, the executables and real shared objects are kept, and the stub shared objects are discarded. Stub objects are built from a mapfile, which must satisfy the following requirements. The mapfile must specify the STUB_OBJECT directive. This directive informs the link-editor that the object can be built as a stub object, and as such causes the link-editor to perform validation and sanity checking intended to guarantee that an object and its stub will always provide identical linking interfaces. All function and data symbols that make up the external interface to the object must be explicitly listed in the mapfile. The mapfile must use symbol scope reduction ('*'), to remove any symbols not explicitly listed from the external interface. All global data exported from the object must have an ASSERT symbol attribute in the mapfile to specify the symbol type, size, and bss attributes. In the case where there are multiple symbols that reference the same data, the ASSERT for one of these symbols must specify the TYPE and SIZE attributes, while the others must use the ALIAS attribute to reference this primary symbol. Given such a mapfile, the stub and real versions of the shared object can be built using the same command line for each, adding the '-z stub' option to the link for the stub object, and omitting the option from the link for the real object. To demonstrate these ideas, the following code implements a shared object named idx5, which exports data from a 5 element array of integers, with each element initialized to contain its zero-based array index. This data is available as a global array, via an alternative alias data symbol with weak binding, and via a functional interface. % cat idx5.c int _idx5[5] = { 0, 1, 2, 3, 4 }; #pragma weak idx5 = _idx5 int idx5_func(int index) { if ((index < 0) || (index > 4)) return (-1); return (_idx5[index]); } A mapfile is required to describe the interface provided by this shared object. % cat mapfile $mapfile_version 2 STUB_OBJECT; SYMBOL_SCOPE { _idx5 { ASSERT { TYPE=data; SIZE=4[5] }; }; idx5 { ASSERT { BINDING=weak; ALIAS=_idx5 }; }; idx5_func; local: *; }; The following main program is used to print all the index values available from the idx5 shared object. 
% cat main.c #include <stdio.h> extern int _idx5[5], idx5[5], idx5_func(int); int main(int argc, char **argv) { int i; for (i = 0; i < 5; i++) (void) printf("[%d] %d %d %d\n", i, _idx5[i], idx5[i], idx5_func(i)); return (0); } The following commands create a stub version of this shared object in a subdirectory named stublib. elfdump is used to verify that the resulting object is a stub. The command used to build the stub differs from that of the real object only in the addition of the -z stub option, and the use of a different output file name. This demonstrates the ease with which stub generation can be added to an existing makefile. % cc -Kpic -G -M mapfile -h libidx5.so.1 idx5.c -o stublib/libidx5.so.1 -zstub % ln -s libidx5.so.1 stublib/libidx5.so % elfdump -d stublib/libidx5.so | grep STUB [11] FLAGS_1 0x4000000 [ STUB ] The main program can now be built, using the stub object to stand in for the real shared object, and setting a runpath that will find the real object at runtime. However, as we have not yet built the real object, this program cannot yet be run. Attempts to cause the system to load the stub object are rejected, as the runtime linker knows that stub objects lack the actual code and data found in the real object, and cannot execute. % cc main.c -L stublib -R '$ORIGIN/lib' -lidx5 -lc % ./a.out ld.so.1: a.out: fatal: libidx5.so.1: open failed: No such file or directory Killed % LD_PRELOAD=stublib/libidx5.so.1 ./a.out ld.so.1: a.out: fatal: stublib/libidx5.so.1: stub shared object cannot be used at runtime Killed We build the real object using the same command as we used to build the stub, omitting the -z stub option, and writing the results to a different file. % cc -Kpic -G -M mapfile -h libidx5.so.1 idx5.c -o lib/libidx5.so.1 Once the real object has been built in the lib subdirectory, the program can be run. % ./a.out [0] 0 0 0 [1] 1 1 1 [2] 2 2 2 [3] 3 3 3 [4] 4 4 4 Mapfile Changes The version 2 mapfile syntax was extended in a number of places to accommodate stub objects. Conditional Input The version 2 mapfile syntax has the ability to conditionalize mapfile input using the $if control directive. As you might imagine, these directives are used frequently with ASSERT directives for data, because a given data symbol will frequently have a different size in 32 or 64-bit code, or on differing hardware such as x86 versus sparc. The link-editor maintains an internal table of names that can be used in the logical expressions evaluated by $if and $elif. At startup, this table is initialized with items that describe the class of object (_ELF32 or _ELF64) and the type of the target machine (_sparc or _x86). We found that there were a small number of cases in the Solaris code base in which we needed to know what kind of object we were producing, so we added the following new predefined items in order to address that need: _ET_DYN (shared object), _ET_EXEC (executable object), and _ET_REL (relocatable object). STUB_OBJECT Directive The new STUB_OBJECT directive informs the link-editor that the object described by the mapfile can be built as a stub object. STUB_OBJECT; A stub shared object is built entirely from the information in the mapfiles supplied on the command line. When the -z stub option is specified to build a stub object, the presence of the STUB_OBJECT directive in a mapfile is required, and the link-editor uses the information in symbol ASSERT attributes to create global symbols that match those of the real object. 
When the real object is built, the presence of STUB_OBJECT causes the link-editor to verify that the mapfiles accurately describe the real object interface, and that a stub object built from them will provide the same linking interface as the real object it represents. All function and data symbols that make up the external interface to the object must be explicitly listed in the mapfile. The mapfile must use symbol scope reduction ('*'), to remove any symbols not explicitly listed from the external interface. All global data in the object is required to have an ASSERT attribute that specifies the symbol type and size. If the ASSERT BINDING attribute is not present, the link-editor provides a default assertion that the symbol must be GLOBAL. If the ASSERT SH_ATTR attribute is not present, or does not specify that the section is one of BITS or NOBITS, the link-editor provides a default assertion that the associated section is BITS. All data symbols that describe the same address and size are required to have ASSERT ALIAS attributes specified in the mapfile. If aliased symbols are discovered that do not have an ASSERT ALIAS specified, the link fails and no object is produced. These rules ensure that the mapfiles contain a description of the real shared object's linking interface that is sufficient to produce a stub object with a completely compatible linking interface. SYMBOL_SCOPE/SYMBOL_VERSION ASSERT Attribute The SYMBOL_SCOPE and SYMBOL_VERSION mapfile directives were extended with a symbol attribute named ASSERT. The syntax for the ASSERT attribute is as follows: ASSERT { ALIAS = symbol_name; BINDING = symbol_binding; TYPE = symbol_type; SH_ATTR = section_attributes; SIZE = size_value; SIZE = size_value[count]; }; The ASSERT attribute is used to specify the expected characteristics of the symbol. The link-editor compares the symbol characteristics that result from the link to those given by ASSERT attributes. If the real and asserted attributes do not agree, a fatal error is issued and the output object is not created. In normal use, the link-editor evaluates ASSERT attributes when present, but does not require them or provide default values for them. The presence of the STUB_OBJECT directive in a mapfile alters the interpretation of ASSERT to require them under some circumstances, and to supply default assertions if explicit ones are not present. See the definition of the STUB_OBJECT Directive for the details. When the -z stub command line option is specified to build a stub object, the information provided by ASSERT attributes is used to define the attributes of the global symbols provided by the object. ASSERT accepts the following: ALIAS Name of a previously defined symbol that this symbol is an alias for. An alias symbol has the same type, value, and size as the main symbol. The ALIAS attribute is mutually exclusive to the TYPE, SIZE, and SH_ATTR attributes, and cannot be used with them. When ALIAS is specified, the type, size, and section attributes are obtained from the alias symbol. BINDING Specifies an ELF symbol binding, which can be any of the STB_ constants defined in <sys/elf.h>, with the STB_ prefix removed (e.g. GLOBAL, WEAK). TYPE Specifies an ELF symbol type, which can be any of the STT_ constants defined in <sys/elf.h>, with the STT_ prefix removed (e.g. OBJECT, COMMON, FUNC). In addition, for compatibility with other mapfile usage, FUNCTION and DATA can be specified, for STT_FUNC and STT_OBJECT, respectively. 
TYPE is mutually exclusive to ALIAS, and cannot be used in conjunction with it. SH_ATTR Specifies attributes of the section associated with the symbol. The section_attributes that can be specified are: BITS (the section is not of type SHT_NOBITS) and NOBITS (the section is of type SHT_NOBITS). SH_ATTR is mutually exclusive to ALIAS, and cannot be used in conjunction with it. SIZE Specifies the expected symbol size. SIZE is mutually exclusive to ALIAS, and cannot be used in conjunction with it. The syntax for the size_value argument is as described in the discussion of the SIZE attribute below. SIZE The SIZE symbol attribute existed before support for stub objects was introduced. It is used to set the size attribute of a given symbol. This attribute results in the creation of a symbol definition. Prior to the introduction of the ASSERT SIZE attribute, the value of a SIZE attribute was always numeric. While attempting to apply ASSERT SIZE to the objects in the Solaris ON consolidation, I found that many data symbols have a size based on the natural machine wordsize for the class of object being produced. Variables declared as long, or as a pointer, will be 4 bytes in size in a 32-bit object, and 8 bytes in a 64-bit object. Initially, I employed the conditional $if directive to handle these cases as follows: $if _ELF32 foo { ASSERT { TYPE=data; SIZE=4 } }; bar { ASSERT { TYPE=data; SIZE=20 } }; $elif _ELF64 foo { ASSERT { TYPE=data; SIZE=8 } }; bar { ASSERT { TYPE=data; SIZE=40 } }; $else $error UNKNOWN ELFCLASS $endif I found that the situation occurs frequently enough that this is cumbersome. To simplify this case, I introduced the idea of the addrsize symbolic name, and of a repeat count, which together make it simple to specify machine word scalar or array symbols. Both the SIZE, and ASSERT SIZE attributes support this syntax: The size_value argument can be a numeric value, or it can be the symbolic name addrsize. addrsize represents the size of a machine word capable of holding a memory address. The link-editor substitutes the value 4 for addrsize when building 32-bit objects, and the value 8 when building 64-bit objects. addrsize is useful for representing the size of pointer variables and C variables of type long, as it automatically adjusts for 32 and 64-bit objects without requiring the use of conditional input. The size_value argument can be optionally suffixed with a count value, enclosed in square brackets. If count is present, size_value and count are multiplied together to obtain the final size value. Using this feature, the example above can be written more naturally as: foo { ASSERT { TYPE=data; SIZE=addrsize } }; bar { ASSERT { TYPE=data; SIZE=addrsize[5] } }; Exported Global Data Is Still A Bad Idea As you can see, the additional plumbing added to the Solaris link-editor to support stub objects is minimal. Furthermore, about 90% of that plumbing is dedicated to handling global data. We have long advised against global data exported from shared objects. There are many ways in which global data does not fit well with dynamic linking. Stub objects simply provide one more reason to avoid this practice. It is always better to export all data via a functional interface. You should always hide your data, and make it available to your users via a function that they can call to acquire the address of the data item. 
However, if you do have to support global data for a stub, perhaps because you are working with an already existing object, it is still easily done, as shown above. Oracle does not like us to discuss hypothetical new features that don't exist in shipping product, so I'll end this section with a speculation. It might be possible to do more in this area to ease the difficulty of dealing with objects that have global data that the users of the library don't need. Perhaps someday... Conclusions It is easy to create stub objects for most objects. If your library only exports function symbols, all you have to do to build a faithful stub object is to add STUB_OBJECT; and then to use the same link command you're currently using, with the addition of the -z stub option. Happy Stubbing!

    Read the article

  • Performing a clean database creation using msbuild

    - by Robert May
    So I’m taking a break from writing about other Agile stuff for a post. :)  I’m still going to get back to the other subjects, but this is fun too. Something I’ve done quite a bit of is MSBuild and CI work.  I’m experimenting with ways to improve what I’ve done in the past, particularly around database CI. Today, I developed a mechanism for starting from scratch with your database.  By scratch, I mean blowing away the existing database and creating it again from a single command line call.  I’m a firm believer that developers should be able to get to a known clean state at the database level with a single command and that they should be operating off of their own isolated database to improve productivity.  These scripts will help that. Here’s how I did it.  First, we have to disconnect users.  I did so using the help of a script from SQL Server Central.  Note that I’m using sqlcmd variable replacement. -- kills all the users in a particular database -- dlhatheway/3M, 11-Jun-2000 declare @arg_dbname sysname declare @a_spid smallint declare @msg varchar(255) declare @a_dbid int set @arg_dbname = '$(DatabaseName)' select @a_dbid = sdb.dbid from master..sysdatabases sdb where sdb.name = @arg_dbname declare db_users insensitive cursor for select sp.spid from master..sysprocesses sp where sp.dbid = @a_dbid open db_users fetch next from db_users into @a_spid while @@fetch_status = 0 begin select @msg = 'kill '+convert(char(5),@a_spid) print @msg execute (@msg) fetch next from db_users into @a_spid end close db_users deallocate db_users GO Once all users are booted from the database, we can commence with recreating the database.  I generated the script that is used to create a database from SQL Server Management Studio, so I’m only going to show the bits that weren’t generated that are important.  There are a bunch of Alter Database statements that aren’t shown. First, I had to find the default location of the database files in the install, since they can be in many different locations.  I used Method 1 from a TechNet blog and then modified it a bit to do what I needed to do.  I ended up using dynamic SQL because for the life of me, I couldn’t get the “Filename” property to not return an error when I used anything besides a string.  I’m dropping the database first, if it exists.  Here’s the code:   IF EXISTS(SELECT 1 FROM [master].[sys].[databases] WHERE [name] = N'$(DatabaseName)') BEGIN drop database $(DatabaseName) END; go IF EXISTS(SELECT 1 FROM [master].[sys].[databases] WHERE [name] = 'zzTempDBForDefaultPath') BEGIN DROP DATABASE zzTempDBForDefaultPath END; -- Create temp database. Because no options are given, the default data and --- log path locations are used CREATE DATABASE zzTempDBForDefaultPath; DECLARE @Default_Data_Path VARCHAR(512), @Default_Log_Path VARCHAR(512); --Get the default data path SELECT @Default_Data_Path = ( SELECT LEFT(physical_name,LEN(physical_name)-CHARINDEX('\',REVERSE(physical_name))+1) FROM sys.master_files mf INNER JOIN sys.[databases] d ON mf.[database_id] = d.[database_id] WHERE d.[name] = 'zzTempDBForDefaultPath' AND type = 0); --Get the default Log path SELECT @Default_Log_Path = ( SELECT LEFT(physical_name,LEN(physical_name)-CHARINDEX('\',REVERSE(physical_name))+1) FROM sys.master_files mf INNER JOIN sys.[databases] d ON mf.[database_id] = d.[database_id] WHERE d.[name] = 'zzTempDBForDefaultPath' AND type = 1); --Clean up. 
IF EXISTS(SELECT 1 FROM [master].[sys].[databases] WHERE [name] = 'zzTempDBForDefaultPath') BEGIN DROP DATABASE zzTempDBForDefaultPath END; DECLARE @SQL nvarchar(max) SET @SQL= 'CREATE DATABASE $(DatabaseName) ON PRIMARY ( NAME = N''$(DatabaseName)'', FILENAME = N''' + @Default_Data_Path + N'$(DatabaseName)' + '.mdf' + ''', SIZE = 2048KB , FILEGROWTH = 1024KB ) LOG ON ( NAME = N''$(DatabaseName)Log'', FILENAME = N''' + @Default_Log_Path + N'$(DatabaseName)' + '.ldf' + ''', SIZE = 1024KB , FILEGROWTH = 10%) ' exec (@SQL) GO And with that, your database is created.  You can run these scripts on any server and on any database name.  To do that, I created an MSBuild script that looks like this: <Project xmlns="http://schemas.microsoft.com/developer/msbuild/2003" ToolsVersion="4.0"> <PropertyGroup> <DatabaseName>MyDatabase</DatabaseName> <Server>localhost</Server> <SqlCmd>sqlcmd -v DatabaseName=$(DatabaseName) -S $(Server) -i </SqlCmd> <ScriptDirectory>.\Scripts</ScriptDirectory> </PropertyGroup> <Target Name ="Rebuild"> <ItemGroup> <ScriptFiles Include="$(ScriptDirectory)\*.sql"/> </ItemGroup> <Exec Command="$(SqlCmd) &quot;%(ScriptFiles.Identity)&quot;" ContinueOnError="false"/> </Target> </Project> Note that the Scripts directory is underneath the directory where I’m running the msbuild command and is relative to that directory.  Note also that the target is using batching to run each script in the scripts subdirectory, one after the other.  Each script is passed to the sqlcmd command line execution using the .Identity property on the itemgroup that is created.  This target file is saved in the file “Database.target”. To make this work, you’ll need msbuild in your path, and then run the following command: msbuild database.target /target:Rebuild Once you’ve got your virgin database setup, you’d then need to use a tool like dbdeploy.net to determine that it was a virgin database, build a change script based on the change scripts, and then you’d want another sqlcmd call to update the database with the appropriate scripts.  I’m doing that next, so I’ll post a blog update when I’ve got it working. Technorati Tags: MSBuild,Agile,CI,Database

    Read the article

  • GlassFish Clustering with DCOM on Windows

    - by ByronNevins
    DCOM - Distributed COM, a Microsoft protocol for communicating with Windows machines. Why use DCOM? In GlassFish 3.1 SSH is used as the standard way to run commands on remote nodes for clustering.  It is very difficult for users to get SSH configured properly on Windows.  SSH does not come with Windows so we have to depend on third party tools.  And then the user is forced to install and configure these tools -- which can be tricky. DCOM is available on all supported platforms.  It is built into Windows. The idea is to use DCOM to communicate with remote Windows nodes.  This has the huge advantage that the user has to do minimal, if any, configuration on the Windows nodes. Implementation Highlights Two open-source libraries have been added to GlassFish: Jcifs – a SAMBA implementation in Java J-interop – a Java implementation for making DCOM calls to remote Windows computers.   Note that any supported platform can use DCOM to work with Windows nodes -- not just Windows. E.g. you can have a Linux DAS work with Windows remote instances. All existing SSH commands now have a corresponding DCOM command – except for setup-ssh which isn’t needed for DCOM.  validate-dcom is an all new command. New DCOM Commands create-node-dcom delete-node-dcom install-node-dcom list-nodes-dcom ping-node-dcom uninstall-node-dcom update-node-dcom validate-dcom setup-local-dcom (This is only available via Update Center for GlassFish 3.1.2) These commands are in place in the trunk (4.0) and in the branch (3.1.2). Windows Configuration Challenges There are an infinite number of possible configurations of Windows if you look at it as a combination of main release, service pack, special drivers, software, configuration etc.  Later versions of Windows err on the side of tightening security by default.  This means that the Windows host may need to have configuration changes made. These configuration changes mostly need to be made by the user.  setup-local-dcom will assist you in making required changes to the Windows Registry.  See the reference blogs for details. The validate-dcom Command validate-dcom is a crucial command.  It should be run before any other commands.  If it does not run successfully then there is no point in running other commands. The validate-dcom command must be used from a DAS machine to test a different Windows machine.  If validate-dcom runs successfully you can be confident that all the DCOM commands will work.  Conversely, the opposite is also true:  if validate-dcom fails, then no DCOM commands will work. What validate-dcom does Verifies that the remote host is not the local machine. Resolves the remote host name. Checks that the remote DCOM port is being listened on (135, 139). Checks that the remote host’s File Sharing is enabled (port 445). Copies a file (a script) to the remote host to verify that SAMBA is working and authorization is correct. Runs a script that it copied on-the-fly to the remote host. Tips and Tricks The bread and butter commands that use DCOM are existing commands like create-instance, start-instance etc.   All of the commands that have dcom in their name are for dealing with the actual nodes. The way the software works is to call asadmin.bat on the remote machine and run a command.  This means that you can track these commands easily on the remote machine with the usual tools.  E.g. using AS_LOGFILE, looking at log files, etc.  It’s easy to attach a debugger to the remote asadmin process, “just in time”, if necessary. 
How to debug the remote commands: Edit the asadmin.bat file that is in the glassfish/bin folder (use glassfish/lib/nadmin.bat in GlassFish 4.0+). Add these options to the java call: -Xdebug -Xrunjdwp:transport=dt_socket,server=y,suspend=y,address=1234  Now if you run, say, start-instance on DAS, you can attach your debugger, at your leisure, to the remote machine's port 1234.  It will be running start-local-instance and patiently waiting for you to attach.
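As a complement, a hedged sketch of attaching the stock JDK command-line debugger to that suspended process (the host name remotehost is a placeholder):

    jdb -connect com.sun.jdi.SocketAttach:hostname=remotehost,port=1234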

    Read the article

  • Disable Add-Ons to Speed Up Browsing in Internet Explorer 9

    - by Lori Kaufman
    We’ve shown you how to enhance Internet Explorer with add-ons, similar to Firefox and Chrome. However, too many add-ons can slow down Internet Explorer and even cause it to crash. Fortunately, you can easily disable some or all add-ons. To begin, activate the Command bar, if it’s not already available. Right-click on an empty area of the tab bar and select Command bar from the popup menu. Click the Tools button on the Command bar and select Toolbars | Disable add-ons from the popup menu.

    Read the article

  • MYSQL – Detecting Current Version of MySQL Server Installation

    - by Pinal Dave
    Here is one of the most popular questions I receive, related to MySQL installation: how do I know which version of MySQL I have installed on my server? Here is a simple trick which works all the time. Connect to your MySQL engine with the help of the Command Prompt or MySQL Workbench. When you execute the following command it will give us all the necessary information related to the MySQL version. SHOW VARIABLES LIKE "%version%"; Here is the screenshot of the result which I received when I ran the above command on my test server. Reference : Pinal Dave (http://blog.SQLAuthority.com) Filed under: MySQL, PostADay, SQL, SQL Authority, SQL Query, SQL Tips and Tricks, T SQL
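    As a side note, two other common checks return the same information; the sample output below is illustrative only:

        SELECT VERSION();    -- returns the server version string, e.g. something like 5.6.12-log

    And from the shell of the server machine:

        mysql --version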

    Read the article

  • How do I compare the md5sum of a file with the md5 file (that was available to download with the file)?

    - by user91583
    Images are available for a distro on http://livedistro.org/gnulinux/israel-remix-team-mint-12. I want to use the 32-bit version. I have downloaded the ISO file for the 32-bit version (customdist.iso). I have downloaded the md5 file for the ISO file (customdist.iso.md5). I want to calculate the md5sum of the ISO file and compare it to the md5 file. I can use the md5sum command to display the calculated md5 of the ISO file in the terminal. I have searched the web and can't find a way to compare the calculated md5 of the ISO file with the downloaded md5 file. So far, the closest I have come is the command md5sum -c customdist.iso.md5 run from within the folder containing both files, but this command gives the result: md5sum: customdist.iso.md5: no properly formatted MD5 checksum lines found Any ideas?
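    One workaround, sketched under the assumption that the downloaded .md5 file contains only the bare hash (which is exactly what makes md5sum -c reject it, since it expects lines of the form '<hash>  <filename>' with two spaces):

        # compare the two values by eye:
        md5sum customdist.iso
        cat customdist.iso.md5

        # or build a properly formatted checksum line on the fly:
        echo "$(cat customdist.iso.md5)  customdist.iso" | md5sum -c -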

    Read the article

  • lubuntu - audio drivers not recognized

    - by TheAdnan
    still no sound after doing that.. I typed sudo dpkg -l | grep -e alsa -e pulseaudio again, and got this: ii alsa-base 1.0.25+dfsg-0ubuntu4 all ALSA driver configuration files ii alsa-utils 1.0.25-4ubuntu2 i386 Utilities for configuring and using ALSA ii gnome-alsamixer 0.9.7~cvs.20060916.ds.1-3ubuntu1 i386 ALSA sound mixer for GNOME ii gstreamer0.10-alsa:i386 0.10.36-1.1ubuntu1 i386 GStreamer plugin for ALSA ii gstreamer0.10-pulseaudio:i386 0.10.31-3+nmu1ubuntu2 i386 GStreamer plugin for PulseAudio ii pulseaudio 1:3.0-0ubuntu6 i386 PulseAudio sound server ii pulseaudio-module-x11 1:3.0-0ubuntu6 i386 X11 module for PulseAudio sound server ii pulseaudio-utils 1:3.0-0ubuntu6 i386 Command line tools for the PulseAudio sound server After the commands in 2., pulseaudio is working again, but there is still no sound.. I tried the command in 3., and here is what I got: ~$ sudo gedit /etc/default/speech-dispatcher sudo: gedit: command not found
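    For what it's worth, gedit is GNOME's editor and is not installed on Lubuntu by default, which would explain that last error; assuming the goal is only to edit the file, a terminal editor already present on the system works:

        sudo nano /etc/default/speech-dispatcher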

    Read the article

  • smartctl not returning on HBA that's secure-erasing a different drive

    - by Stu2000
    Whenever I run smartctl -i /dev/sd*, where * is a drive that is plugged into the same host bus adapter as another drive that is currently being erased with an hdparm secure-erase command, the smartctl command will just 'hang' and not return (blocked) until the erasure of the other drive is finished. To make matters worse, you can't Ctrl-C out of it. Has anyone else had this issue? Is there another way to retrieve SMART data from a drive which doesn't block? I noticed that I can still use the udevadm command to retrieve the serial and model of the drive, which is useful but doesn't include any SMART data. Any information relating to this matter is appreciated, especially if you can tell me another way to retrieve the S.M.A.R.T data that might work. Regards, Stuart
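    For reference, a sketch of the udevadm lookup mentioned above (the device name /dev/sdb is a placeholder):

        udevadm info --query=property --name=/dev/sdb | grep -i -e ID_SERIAL -e ID_MODEL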

    Read the article

  • Most commands won't do anything if I use the console at my Ubuntu server. Why?

    - by Tijs
    I bought a VPS server a few days ago, with Ubuntu on it, but I have a problem now. For nearly all the commands I put in I get this error: : command not found. I am logging in as root. I think this is the Ubuntu version I have: ubuntu-8.04-i386-minimal. (Maybe it has to do with 'minimal'? I really don't know.) To be more specific, the command I am trying to run now is this: cd ~/mclawl; screen -S MCForge -d -m -c /dev/null -- sh -c 'mono MCForge.exe; exec $SHELL' If I do so, I get this: -bash: screen: command not found
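    A plausible explanation, offered as an assumption: a minimal image simply does not ship tools such as screen (or mono), so bash reports 'command not found' for each missing binary. Installing them would look roughly like this (package names can differ on 8.04):

        sudo apt-get update
        sudo apt-get install screen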

    Read the article

  • Sub routing in a SPA site

    - by Anders
    I have a SPA site that I'm working on, and I have a requirement that you can have sub-routes for a page view model. I'm currently using this 'pattern' for the site: MyApp.FooViewModel = MyApp.define({ meta: { query: MyApp.Core.Contracts.Queries.FooQuery, title: "Foo" }, init: function (queryResult) { }, prototype: { } }); In the master view model I have a route table: this.navigation(new MyApp.RoutesViewModel({ Home: { model: MyApp.HomeViewModel, route: String.empty }, Foo: { model: MyApp.FooViewModel } })); The meta object defines which query should populate the top-level view model when it's invoked through sammyjs. This is all fine, but it does not support sub-routing. My plan is to change the meta object so that it can (optionally, of course) look like this: meta: { query: MyApp.Core.Contracts.Queries.FooQuery, title: "Foo", route: { barId: MyApp.BarViewModel } } When sammyjs detects a barId in the query string, the Bar model will be executed and populated through its own meta object. Is this a good design?
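    For comparison, a minimal sketch of how the same nesting is often expressed directly as Sammy.js routes; the route paths and the activate() helper are placeholders for whatever the framework above actually does:

        // parent route: activate the Foo page model
        this.get('#/foo', function () {
            activate(MyApp.FooViewModel);    // activate() is a hypothetical helper
        });

        // sub-route: activate Foo first, then the nested Bar model for the given id
        this.get('#/foo/:barId', function (context) {
            activate(MyApp.FooViewModel);
            activate(MyApp.BarViewModel, { barId: context.params.barId });
        });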

    Read the article

< Previous Page | 407 408 409 410 411 412 413 414 415 416 417 418  | Next Page >