Search Results

Search found 29925 results on 1197 pages for 'version independence'.


  • springTestContextBeforeTestMethod failed in Maven spring-test

    - by joejax
    I try to setup a project with spring-test using TestNg in Maven. The code is like: @ContextConfiguration(locations={"test-context.xml"}) public class AppTest extends AbstractTestNGSpringContextTests { @Test public void testApp() { assert true; } } A test-context.xml simply defined a bean: <bean id="app" class="org.sonatype.mavenbook.simple.App"/> I got error for Failed to load ApplicationContext when running mvn test from command line, seems it cannot find the test-context.xml file; however, I can get it run correctly inside Eclipse (with TestNg plugin). So, test-context.xml is under src/test/resources/, how do I indicate this in the pom.xml so that 'mvn test' command will work? Thanks, UPDATE: Thanks for the reply. Cannot load context file error was caused by I moved the file arround in different location since I though the classpath was the problem. Now I found the context file seems loaded from the Maven output, but the test is failed: Running TestSuite May 25, 2010 9:55:13 AM org.springframework.beans.factory.xml.XmlBeanDefinitionReader loadBeanDefinitions INFO: Loading XML bean definitions from class path resource [test-context.xml] May 25, 2010 9:55:13 AM org.springframework.context.support.AbstractApplicationContext prepareRefresh INFO: Refreshing org.springframework.context.support.GenericApplicationContext@171bbc9: display name [org.springframework.context.support.GenericApplicationContext@171bbc9]; startup date [Tue May 25 09:55:13 PDT 2010]; root of context hierarchy May 25, 2010 9:55:13 AM org.springframework.context.support.AbstractApplicationContext obtainFreshBeanFactory INFO: Bean factory for application context [org.springframework.context.support.GenericApplicationContext@171bbc9]: org.springframework.beans.factory.support.DefaultListableBeanFactory@1df8b99 May 25, 2010 9:55:13 AM org.springframework.beans.factory.support.DefaultListableBeanFactory preInstantiateSingletons INFO: Pre-instantiating singletons in org.springframework.beans.factory.support.DefaultListableBeanFactory@1df8b99: defining beans [app,org.springframework.context.annotation.internalCommonAnnotationProcessor,org.springframework.context.annotation.internalAutowiredAnnotationProcessor,org.springframework.context.annotation.internalRequiredAnnotationProcessor]; root of factory hierarchy Tests run: 3, Failures: 2, Errors: 0, Skipped: 1, Time elapsed: 0.63 sec <<< FAILURE! 
Results : Failed tests: springTestContextBeforeTestMethod(org.sonatype.mavenbook.simple.AppTest) springTestContextAfterTestMethod(org.sonatype.mavenbook.simple.AppTest) Tests run: 3, Failures: 2, Errors: 0, Skipped: 1 If I use spring-test version 3.0.2.RELEASE, the error becomes: org.springframework.test.context.testng.AbstractTestNGSpringContextTests.springTestContextPrepareTestInstance() is depending on nonexistent method null Here is the structure of the project: simple |-- pom.xml `-- src |-- main | `-- java `-- test |-- java `-- resources |-- test-context.xml `-- testng.xml testng.xml: <suite name="Suite" parallel="false"> <test name="Test"> <classes> <class name="org.sonatype.mavenbook.simple.AppTest"/> </classes> </test> </suite> test-context.xml: <beans xmlns="http://www.springframework.org/schema/beans" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans-2.0.xsd" default-lazy-init="true"> <bean id="app" class="org.sonatype.mavenbook.simple.App"/> </beans> In the pom.xml, I add testng, spring, and spring-test artifacts, and plugin: <dependency> <groupId>org.testng</groupId> <artifactId>testng</artifactId> <version>5.1</version> <classifier>jdk15</classifier> <scope>test</scope> </dependency> <dependency> <groupId>org.springframework</groupId> <artifactId>spring</artifactId> <version>2.5.6</version> </dependency> <dependency> <groupId>org.springframework</groupId> <artifactId>spring-test</artifactId> <version>2.5.6</version> <scope>test</scope> </dependency> <build> <finalName>simple</finalName> <plugins> <plugin> <artifactId>maven-compiler-plugin</artifactId> <configuration> <source>1.6</source> <target>1.6</target> </configuration> </plugin> <plugin> <groupId>org.apache.maven.plugins</groupId> <artifactId>maven-surefire-plugin</artifactId> <configuration> <suiteXmlFiles> <suiteXmlFile>src/test/resources/testng.xml</suiteXmlFile> </suiteXmlFiles> </configuration> </plugin> </plugins> Basically, I replaced 'A Simple Maven Project' Junit with TestNg, hope it works. UPDATE: I think I got the problem (still don't know why) - Whenever I extends AbstractTestNGSpringContextTests or AbstractTransactionalTestNGSpringContextTests, the test will failed with this error: Failed tests: springTestContextBeforeTestMethod(org.sonatype.mavenbook.simple.AppTest) springTestContextAfterTestMethod(org.sonatype.mavenbook.simple.AppTest) So, eventually the error went away when I override the two methods. I don't think this is the right way, didn't find much info from spring-test doc. If you know spring test framework, please shred some light on this.
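For comparison, here is a minimal sketch of how such a test class is usually written against spring-test 2.5.x with TestNG, without overriding any of the framework's springTestContext* methods. The explicit classpath: prefix and the bean lookup are illustrative assumptions based on the test-context.xml shown above; note also that the very old TestNG 5.1 jdk15 artifact in the POM is a likely suspect for the before/after-method failures, so trying a newer TestNG release is worth doing before resorting to overrides:

    import org.springframework.test.context.ContextConfiguration;
    import org.springframework.test.context.testng.AbstractTestNGSpringContextTests;
    import org.testng.Assert;
    import org.testng.annotations.Test;

    @ContextConfiguration(locations = { "classpath:test-context.xml" })
    public class AppTest extends AbstractTestNGSpringContextTests {

        // applicationContext is a protected field inherited from the base class
        @Test
        public void testApp() {
            Assert.assertNotNull(applicationContext.getBean("app"));
        }
    }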

    Read the article

  • Error when running a GWTTestCase using maven gwt plugin

    - by adancu
    Hi, I've created a test which extends GWTTestCase but I'm getting this error: mvn integration-test gwt:test Running com.myproject.test.ui. GwtTestMyFirstTestCase Translatable source found in... [WARN] No source path entries; expect subsequent failures [ERROR] Unable to find type 'java.lang.Object' [ERROR] Hint: Check that your module inherits 'com.google.gwt.core.Core' either directly or indirectly (most often by inheriting module 'com.google.gwt.user.User') Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 2.1 sec <<< FAILURE! GwtTestMyFirstTestCase.java is in /src/test/java, while the GWT module is located in src/main/java. I assume this shouldn't be a problem. I've done everything required according to http://mojo.codehaus.org/gwt-maven-plugin/user-guide/testing.html and of course that my gwt module already has com.google.gwt.core.Core indirectly imported. http://maven.apache.org/maven-v4_0_0.xsd" 4.0.0 com.myproject main jar 0.0.1-SNAPSHOT Main Module <properties> <gwt.module>com.myproject.MainModule</gwt.module> </properties> <parent> <groupId>com.myproject</groupId> <artifactId>app</artifactId> <version>0.1.0-SNAPSHOT</version> </parent> <dependencies> <dependency> <groupId>com.myproject</groupId> <artifactId>app-commons</artifactId> <version>0.0.1-SNAPSHOT</version> </dependency> <dependency> <groupId>com.google.gwt</groupId> <artifactId>gwt-dev</artifactId> <version>${gwt.version}</version> <scope>provided</scope> </dependency> </dependencies> <build> <plugins> <plugin> <groupId>org.apache.maven.plugins</groupId> <artifactId>maven-dependency-plugin</artifactId> <configuration> <outputFile>../app/src/main/webapp/WEB-INF/main.tree</outputFile> </configuration> </plugin> <plugin> <groupId>org.codehaus.mojo</groupId> <artifactId>gwt-maven-plugin</artifactId> <executions> <execution> <goals> <goal>test</goal> </goals> </execution> </executions> </plugin> <plugin> <groupId>org.apache.maven.plugins</groupId> <artifactId>maven-jar-plugin</artifactId> <configuration> <classesDirectory> ${project.build.directory}/${project.build.finalName}/${gwt.module} </classesDirectory> </configuration> </plugin> </plugins> </build> Here is the test case, located in /src/test/java/com/myproject/test/ui public class GwtTestMyFirstTestCase extends GWTTestCase { @Override public String getModuleName() { return "com.myproject.MainModule"; } public void testSomething() { } } Here is the gwt module I'm trying to test, located in src/main/java/com/myproject/MainModule.gwt.xml: <inherits name='com.myproject.Commons' /> <source path="site" /> <source path="com.myproject.test.ui" /> <set-property name="gwt.suppressNonStaticFinalFieldWarnings" value="true" /> <entry-point class='com.myproject.site.SiteModuleEntry' /> Can anyone give me a hint or two about what I'm doing wrong? Thanks in advance.
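One common cause of "Unable to find type 'java.lang.Object'" together with "No source path entries" is that the GWT JRE emulation source (shipped in gwt-user) is not on the classpath used by gwt:test. As a hedged sketch only (the POM above only declares gwt-dev, and ${gwt.version} is assumed to come from the parent), the corresponding dependency would look like this:

    <dependency>
        <groupId>com.google.gwt</groupId>
        <artifactId>gwt-user</artifactId>
        <version>${gwt.version}</version>
        <scope>provided</scope>
    </dependency>

The <source path="..."/> entries in the module are also worth double-checking: they are directory paths relative to the module's package (for example "client" or "test/ui"), not dot-separated package names.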

    Read the article

  • EC2 instance suddenly refusing SSH connections and won't respond to ping

    - by Chris
    My instance was running fine and this morning I was able to access a Ruby on Rails app hosted on it. An hour later I suddenly wasn't able to access my site, my SSH connection attempts were refused and the server wasn't even responding to ping. I didn't change anything on my system during that hour and reboots aren't fixing it. I've never had any problems connecting or pinging the system before. Can someone please help? This is on my production system! OS: CentOS 5 AMI ID: ami-10b55379 Type: m1.small [] ~% ssh -v *****@meeteor.com OpenSSH_5.2p1, OpenSSL 0.9.8l 5 Nov 2009 debug1: Reading configuration data /etc/ssh_config debug1: Connecting to meeteor.com [184.73.235.191] port 22. debug1: connect to address 184.73.235.191 port 22: Connection refused ssh: connect to host meeteor.com port 22: Connection refused [] ~% ping meeteor.com PING meeteor.com (184.73.235.191): 56 data bytes Request timeout for icmp_seq 0 Request timeout for icmp_seq 1 Request timeout for icmp_seq 2 ^C --- meeteor.com ping statistics --- 4 packets transmitted, 0 packets received, 100.0% packet loss [] ~% ========= System Log ========= Restarting system. Linux version 2.6.16-xenU ([email protected]) (gcc version 4.0.1 20050727 (Red Hat 4.0.1-5)) #1 SMP Mon May 28 03:41:49 SAST 2007 BIOS-provided physical RAM map: Xen: 0000000000000000 - 000000006a400000 (usable) 980MB HIGHMEM available. 727MB LOWMEM available. NX (Execute Disable) protection: active IRQ lockup detection disabled Built 1 zonelists Kernel command line: root=/dev/sda1 ro 4 Enabling fast FPU save and restore... done. Enabling unmasked SIMD FPU exception support... done. Initializing CPU#0 PID hash table entries: 4096 (order: 12, 65536 bytes) Xen reported: 2599.998 MHz processor. Dentry cache hash table entries: 131072 (order: 7, 524288 bytes) Inode-cache hash table entries: 65536 (order: 6, 262144 bytes) Software IO TLB disabled vmalloc area: ee000000-f53fe000, maxmem 2d7fe000 Memory: 1718700k/1748992k available (1958k kernel code, 20948k reserved, 620k data, 144k init, 1003528k highmem) Checking if this processor honours the WP bit even in supervisor mode... Ok. Calibrating delay using timer specific routine.. 5202.30 BogoMIPS (lpj=26011526) Mount-cache hash table entries: 512 CPU: L1 I Cache: 64K (64 bytes/line), D cache 64K (64 bytes/line) CPU: L2 Cache: 1024K (64 bytes/line) Checking 'hlt' instruction... OK. Brought up 1 CPUs migration_cost=0 Grant table initialized NET: Registered protocol family 16 Brought up 1 CPUs xen_mem: Initialising balloon driver. highmem bounce pool size: 64 pages VFS: Disk quotas dquot_6.5.1 Dquot-cache hash table entries: 1024 (order 0, 4096 bytes) Initializing Cryptographic API io scheduler noop registered io scheduler anticipatory registered (default) io scheduler deadline registered io scheduler cfq registered i8042.c: No controller found. RAMDISK driver initialized: 16 RAM disks of 4096K size 1024 blocksize Xen virtual console successfully installed as tty1 Event-channel device installed. netfront: Initialising virtual ethernet driver. 
mice: PS/2 mouse device common for all mice md: md driver 0.90.3 MAX_MD_DEVS=256, MD_SB_DISKS=27 md: bitmap version 4.39 NET: Registered protocol family 2 Registering block device major 8 IP route cache hash table entries: 65536 (order: 6, 262144 bytes) TCP established hash table entries: 262144 (order: 9, 2097152 bytes) TCP bind hash table entries: 65536 (order: 7, 524288 bytes) TCP: Hash tables configured (established 262144 bind 65536) TCP reno registered TCP bic registered NET: Registered protocol family 1 NET: Registered protocol family 17 NET: Registered protocol family 15 Using IPI No-Shortcut mode md: Autodetecting RAID arrays. md: autorun ... md: ... autorun DONE. kjournald starting. Commit interval 5 seconds EXT3-fs: mounted filesystem with ordered data mode. VFS: Mounted root (ext3 filesystem) readonly. Freeing unused kernel memory: 144k freed *************************************************************** *************************************************************** ** WARNING: Currently emulating unsupported memory accesses ** ** in /lib/tls glibc libraries. The emulation is ** ** slow. To ensure full performance you should ** ** install a 'xen-friendly' (nosegneg) version of ** ** the library, or disable tls support by executing ** ** the following as root: ** ** mv /lib/tls /lib/tls.disabled ** ** Offending process: init (pid=1) ** *************************************************************** *************************************************************** Pausing... 5Pausing... 4Pausing... 3Pausing... 2Pausing... 1Continuing... INIT: version 2.86 booting Welcome to CentOS release 5.4 (Final) Press 'I' to enter interactive startup. Setting clock : Fri Oct 1 14:35:26 EDT 2010 [ OK ] Starting udev: [ OK ] Setting hostname localhost.localdomain: [ OK ] No devices found Setting up Logical Volume Management: [ OK ] Checking filesystems Checking all file systems. [/sbin/fsck.ext3 (1) -- /] fsck.ext3 -a /dev/sda1 /dev/sda1: clean, 275424/1310720 files, 1161123/2621440 blocks [ OK ] Remounting root filesystem in read-write mode: [ OK ] Mounting local filesystems: [ OK ] Enabling local filesystem quotas: [ OK ] Enabling /etc/fstab swaps: [ OK ] INIT: Entering runlevel: 4 Entering non-interactive startup Starting background readahead: [ OK ] Applying ip6tables firewall rules: modprobe: FATAL: Module ip6_tables not found. ip6tables-restore v1.3.5: ip6tables-restore: unable to initializetable 'filter' Error occurred at line: 3 Try `ip6tables-restore -h' or 'ip6tables-restore --help' for more information. [FAILED] Applying iptables firewall rules: [ OK ] Loading additional iptables modules: ip_conntrack_netbios_ns [ OK ] Bringing up loopback interface: [ OK ] Bringing up interface eth0: Determining IP information for eth0... done. [ OK ] Starting auditd: [FAILED] Starting irqbalance: [ OK ] Starting portmap: [ OK ] FATAL: Module lockd not found. Starting NFS statd: [ OK ] Starting RPC idmapd: FATAL: Module sunrpc not found. FATAL: Error running install command for sunrpc Error: RPC MTAB does not exist. Starting system message bus: [ OK ] Starting Bluetooth services:[ OK ] [ OK ] Can't open RFCOMM control socket: Address family not supported by protocol Mounting other filesystems: [ OK ] Starting PC/SC smart card daemon (pcscd): [ OK ] Starting hidd: Can't open HIDP control socket: Address family not supported by protocol [FAILED] Starting autofs: Starting automount: automount: test mount forbidden or incorrect kernel protocol version, kernel protocol version 5.00 or above required. 
[FAILED] [FAILED] Starting sshd: [ OK ] Starting cups: [ OK ] Starting sendmail: [ OK ] Starting sm-client: [ OK ] Starting console mouse services: no console device found[FAILED] Starting crond: [ OK ] Starting xfs: [ OK ] Starting anacron: [ OK ] Starting atd: [ OK ] % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 100 390 100 390 0 0 58130 0 --:--:-- --:--:-- --:--:-- 58130 100 390 100 390 0 0 56984 0 --:--:-- --:--:-- --:--:-- 0 Starting yum-updatesd: [ OK ] Starting Avahi daemon... [ OK ] Starting HAL daemon: [ OK ] Starting OSSEC: [ OK ] Starting smartd: [ OK ] c CentOS release 5.4 (Final) Kernel 2.6.16-xenU on an i686 domU-12-31-39-00-C4-97 login: INIT: Id "2" respawning too fast: disabled for 5 minutes INIT: Id "3" respawning too fast: disabled for 5 minutes INIT: Id "4" respawning too fast: disabled for 5 minutes INIT: Id "5" respawning too fast: disabled for 5 minutes INIT: Id "6" respawning too fast: disabled for 5 minutes

    Read the article

  • ASP.NET exception gives irrelevant stack trace on YSOD, very challenging!

    - by pootow
    Here is the YSOD: Timeout expired. The timeout period elapsed prior to completion of the operation or the server is not responding. Description: An unhandled exception occurred during the execution of the current web request. Please review the stack trace for more information about the error and where it originated in the code. Exception Details: System.Data.SqlClient.SqlException: Timeout expired. The timeout period elapsed prior to completion of the operation or the server is not responding. Source Error: An unhandled exception was generated during the execution of the current web request. Information regarding the origin and location of the exception can be identified using the exception stack trace below. Stack Trace: [SqlException (0x80131904): Timeout expired. The timeout period elapsed prior to completion of the operation or the server is not responding.] System.Data.ProviderBase.DbConnectionPool.GetConnection(DbConnection owningObject) +428 System.Data.ProviderBase.DbConnectionFactory.GetConnection(DbConnection owningConnection) +65 System.Data.ProviderBase.DbConnectionClosed.OpenConnection(DbConnection outerConnection, DbConnectionFactory connectionFactory) +117 System.Data.SqlClient.SqlConnection.Open() +122 ECommerce.PMethod.Sql.SqlConns.Open() +78 ECommerce.PMethod.Sql.SqlConns..ctor() +120 ECommerce.login.DatasInfo.Proc.UserCenter.IsLogin(String UserGUID, Int32 UserID) +49 ECommerce.login.Rules.Users.UserLogin.isLogin() +44 Config.isUserLogined() +5 Shopping_Shopping.Page_Load(Object sender, EventArgs e) +10 System.Web.Util.CalliHelper.EventArgFunctionCaller(IntPtr fp, Object o, Object t, EventArgs e) +14 System.Web.Util.CalliEventHandlerDelegateProxy.Callback(Object sender, EventArgs e) +35 System.Web.UI.Control.OnLoad(EventArgs e) +99 System.Web.UI.Control.LoadRecursive() +50 System.Web.UI.Page.ProcessRequestMain(Boolean includeStagesBeforeAsyncPoint, Boolean includeStagesAfterAsyncPoint) +627 [TypeInitializationException: The type initializer for 'ECommerce.ERP.DAL.DBConn' threw an exception.] ECommerce.ERP.DAL.DBConn.get_ConnString() +0 [ObjectDefinitionStoreException: Factory method 'System.String get_ConnString()' threw an Exception.] Spring.Objects.Factory.Support.SimpleInstantiationStrategy.Instantiate(RootObjectDefinition definition, String name, IObjectFactory factory, MethodInfo factoryMethod, Object[] arguments) +257 Spring.Objects.Factory.Support.ConstructorResolver.InstantiateUsingFactoryMethod(String name, RootObjectDefinition definition, Object[] arguments) +624 Spring.Objects.Factory.Support.AbstractAutowireCapableObjectFactory.InstantiateUsingFactoryMethod(String name, RootObjectDefinition definition, Object[] arguments) +60 Spring.Objects.Factory.Support.AbstractAutowireCapableObjectFactory.CreateObjectInstance(String objectName, RootObjectDefinition objectDefinition, Object[] arguments) +56 Spring.Objects.Factory.Support.AbstractAutowireCapableObjectFactory.InstantiateObject(String name, RootObjectDefinition definition, Object[] arguments, Boolean allowEagerCaching, Boolean suppressConfigure) +436 [ObjectCreationException: Error thrown by a dependency of object 'styleService' defined in 'assembly [ECommerce.Services.Impl, Version=1.0.0.0, Culture=neutral, PublicKeyToken=null], resource [ECommerce.Services.Impl.AppContext.xml] line 56' : Initialization of object failed : Factory method 'System.String get_ConnString()' threw an Exception. 
while resolving 'constructor argument with name promotionservice' to 'promotionService' defined in 'assembly [ECommerce.Services.Impl, Version=1.0.0.0, Culture=neutral, PublicKeyToken=null], resource [ECommerce.Services.Impl.AppContext.xml] line 31' while resolving 'constructor argument with name domainservice' to 'promotionDomainService' defined in 'assembly [ECommerce.Domain, Version=1.0.0.0, Culture=neutral, PublicKeyToken=null], resource [ECommerce.Domain.AppContext.xml] line 20' while resolving 'constructor argument with name promotionrepos' to 'promotionRepos' defined in 'assembly [ECommerce.Data.AdoNet, Version=1.0.0.0, Culture=neutral, PublicKeyToken=null], resource [ECommerce.Data.AdoNet.AppContext.xml] line 34' while resolving 'constructor argument with name connstr' to 'ECommerce.ERP.DAL.DBConn#389F399' defined in 'assembly [ECommerce.Data.AdoNet, Version=1.0.0.0, Culture=neutral, PublicKeyToken=null], resource [ECommerce.Data.AdoNet.AppContext.xml] line 34'] Spring.Objects.Factory.Support.ObjectDefinitionValueResolver.ResolveReference(IObjectDefinition definition, String name, String argumentName, RuntimeObjectReference reference) +394 Spring.Objects.Factory.Support.ObjectDefinitionValueResolver.ResolvePropertyValue(String name, IObjectDefinition definition, String argumentName, Object argumentValue) +312 Spring.Objects.Factory.Support.ObjectDefinitionValueResolver.ResolveValueIfNecessary(String name, IObjectDefinition definition, String argumentName, Object argumentValue) +17 Spring.Objects.Factory.Support.ConstructorResolver.ResolveConstructorArguments(String objectName, RootObjectDefinition definition, ObjectWrapper wrapper, ConstructorArgumentValues cargs, ConstructorArgumentValues resolvedValues) +993 Spring.Objects.Factory.Support.ConstructorResolver.AutowireConstructor(String objectName, RootObjectDefinition rod, ConstructorInfo[] chosenCtors, Object[] explicitArgs) +171 Spring.Objects.Factory.Support.AbstractAutowireCapableObjectFactory.AutowireConstructor(String name, RootObjectDefinition definition, ConstructorInfo[] ctors, Object[] explicitArgs) +65 Spring.Objects.Factory.Support.AbstractAutowireCapableObjectFactory.CreateObjectInstance(String objectName, RootObjectDefinition objectDefinition, Object[] arguments) +161 Spring.Objects.Factory.Support.AbstractAutowireCapableObjectFactory.InstantiateObject(String name, RootObjectDefinition definition, Object[] arguments, Boolean allowEagerCaching, Boolean suppressConfigure) +636 Spring.Objects.Factory.Support.AbstractObjectFactory.CreateAndCacheSingletonInstance(String objectName, RootObjectDefinition objectDefinition, Object[] arguments) +174 Spring.Objects.Factory.Support.WebObjectFactory.CreateAndCacheSingletonInstance(String objectName, RootObjectDefinition objectDefinition, Object[] arguments) +150 Spring.Objects.Factory.Support.AbstractObjectFactory.GetObjectInternal(String name, Type requiredType, Object[] arguments, Boolean suppressConfigure) +990 Spring.Objects.Factory.Support.AbstractObjectFactory.GetObject(String name) +10 Spring.Context.Support.AbstractApplicationContext.GetObject(String name) +20 ECommerce.Common.ServiceLocator.GetService() +334 ECommerce.Mvc.Controllers.StylesController..ctor() +72 [TargetInvocationException: Exception has been thrown by the target of an invocation.] 
System.RuntimeTypeHandle.CreateInstance(RuntimeType type, Boolean publicOnly, Boolean noCheck, Boolean& canBeCached, RuntimeMethodHandle& ctor, Boolean& bNeedSecurityCheck) +0 System.RuntimeType.CreateInstanceSlow(Boolean publicOnly, Boolean fillCache) +86 System.RuntimeType.CreateInstanceImpl(Boolean publicOnly, Boolean skipVisibilityChecks, Boolean fillCache) +230 System.Activator.CreateInstance(Type type, Boolean nonPublic) +67 System.Web.Mvc.DefaultControllerFactory.GetControllerInstance(RequestContext requestContext, Type controllerType) +80 [InvalidOperationException: An error occurred when trying to create a controller of type 'ECommerce.Mvc.Controllers.StylesController'. Make sure that the controller has a parameterless public constructor.] System.Web.Mvc.DefaultControllerFactory.GetControllerInstance(RequestContext requestContext, Type controllerType) +190 System.Web.Mvc.DefaultControllerFactory.CreateController(RequestContext requestContext, String controllerName) +68 System.Web.Mvc.MvcHandler.ProcessRequestInit(HttpContextBase httpContext, IController& controller, IControllerFactory& factory) +118 System.Web.Mvc.MvcHandler.BeginProcessRequest(HttpContextBase httpContext, AsyncCallback callback, Object state) +46 System.Web.Mvc.MvcHandler.BeginProcessRequest(HttpContext httpContext, AsyncCallback callback, Object state) +63 System.Web.Mvc.MvcHandler.System.Web.IHttpAsyncHandler.BeginProcessRequest(HttpContext context, AsyncCallback cb, Object extraData) +13 System.Web.CallHandlerExecutionStep.System.Web.HttpApplication.IExecutionStep.Execute() +8677954 System.Web.HttpApplication.ExecuteStep(IExecutionStep step, Boolean& completedSynchronously) +155 Version Information: Microsoft .NET Framework Version:2.0.50727.3082; ASP.NET Version:2.0.50727.3082 Question is: the first stack trace is irrelevant to others, what happened? Any ideas? Let me make this more clear: a MVC page uses the spring part trying to load a lazy-init service which constructor wants a connection string through a static property like this: <object id="promotionRepos" type="ECommerce.Data.AdoNet.Promotions.PromotionRepos, ECommerce.Data.AdoNet" lazy-init="true"> <constructor-arg name="provider"> <null /> </constructor-arg> <constructor-arg name="connStr"> <object type="ECommerce.ERP.DAL.DBConn, ECommerce.ERP.DAL" factory-method="get_ConnString" /> </constructor-arg> <property name="RefreshInterval" value="00:00:10" /> </object> the timeout part is some what irrelevent to all others. see this in the first exception: Shopping_Shopping.Page_Load(Object sender, EventArgs e) +10 it's another page at all. And also, ECommerce.PMethod.Sql.SqlConns.Open() uses its own connection string, not the one loaded by spring, it's different module from diffrent team. And I am sure the connection string is correct. And, this ysod cames up randomly. Sometimes nothing is wrong, and sometimes, it appears. I thought there could be something wrong with my database or the network/firewall, I will check it later, but now I want understand this tricky stack trace.

    Read the article

  • ASP.NET MVC + MySql Membership Provider, user cannot login

    - by Jason Miesionczek
    Hello, I've been playing around with using MySql as the membership provider for asp.net mvc forms authentication. I've got things configured correctly as far as i can tell, and i can create users via both the register action and asp.net web config site. however, when i try to login with one of the users, it does not work. it returns an error as if i had entered a wrong password, or if the account doesn't exist. i have verified in the database that the account does exist. I've followed the instructions here for reference: http://blog.tchami.com/post/ASPNET-MVC-2-and-MySQL-Membership-Provider.aspx here is my web.config: <?xml version="1.0"?> <!-- For more information on how to configure your ASP.NET application, please visit http://go.microsoft.com/fwlink/?LinkId=152368 --> <configuration> <connectionStrings> <add name="MySQLConn" connectionString="Server=localhost;Database=intereditor;Uid=<user>;Pwd=<password>;"/> </connectionStrings> <system.web> <compilation debug="true" targetFramework="4.0"> <assemblies> <add assembly="System.Web.Abstractions, Version=4.0.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35" /> <add assembly="System.Web.Routing, Version=4.0.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35" /> <add assembly="System.Web.Mvc, Version=2.0.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35" /> </assemblies> </compilation> <authentication mode="Forms"> <forms loginUrl="~/Account/LogOn" timeout="2880" name=".ASPXFORM$" path="/" requireSSL="false" slidingExpiration="true" enableCrossAppRedirects="false" /> </authentication> <membership defaultProvider="MySqlMembershipProvider"> <providers> <clear/> <add name="MySqlMembershipProvider" type="MySql.Web.Security.MySQLMembershipProvider,MySql.Web,Version=6.3.4.0, Culture=neutral,PublicKeyToken=c5687fc88969c44d" autogenerateschema="true" connectionStringName="MySQLConn" enablePasswordRetrieval="false" enablePasswordReset="true" requiresQuestionAndAnswer="false" requiresUniqueEmail="false" passwordFormat="Hashed" maxInvalidPasswordAttempts="5" minRequiredPasswordLength="6" minRequiredNonalphanumericCharacters="0" passwordAttemptWindow="10" passwordStrengthRegularExpression="" applicationName="/" /> </providers> </membership> <profile defaultProvider="MySqlProfileProvider"> <providers> <clear/> <add name="MySqlProfileProvider" type="MySql.Web.Profile.MySQLProfileProvider,MySql.Web,Version=6.3.4.0,Culture=neutral,PublicKeyToken=c5687fc88969c44d" connectionStringName="MySQLConn" applicationName="/" /> </providers> </profile> <roleManager enabled="true" defaultProvider="MySqlRoleProvider"> <providers> <clear /> <add name="MySqlRoleProvider" type="MySql.Web.Security.MySQLRoleProvider,MySql.Web,Version=6.3.4.0,Culture=neutral,PublicKeyToken=c5687fc88969c44d" connectionStringName="MySQLConn" applicationName="/" /> </providers> </roleManager> <pages> <namespaces> <add namespace="System.Web.Mvc" /> <add namespace="System.Web.Mvc.Ajax" /> <add namespace="System.Web.Mvc.Html" /> <add namespace="System.Web.Routing" /> </namespaces> </pages> </system.web> <system.webServer> <validation validateIntegratedModeConfiguration="false"/> <modules runAllManagedModulesForAllRequests="true"/> </system.webServer> <runtime> <assemblyBinding xmlns="urn:schemas-microsoft-com:asm.v1"> <dependentAssembly> <assemblyIdentity name="System.Web.Mvc" publicKeyToken="31bf3856ad364e35" /> <bindingRedirect oldVersion="1.0.0.0" newVersion="2.0.0.0" /> </dependentAssembly> </assemblyBinding> </runtime> </configuration> Can anyone please help me identify what is 
wrong so that users can log in? UPDATE: After debugging the login process in the code of the membership provider itself, I discovered that there is a bug in the provider: there is a discrepancy between the password hash stored in the database and the hash generated from the entered password. As a workaround, I changed the password format to 'Encrypted' and added a machine key to my web.config. I am still interested in figuring out the issue with the hashed format in the provider; I will spend some more time debugging it, and if I can pin down the problem, I will put together a patch and submit it.
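For context, the machine-key part of the workaround above usually means pinning an explicit <machineKey> element inside <system.web> so that encrypted or hashed membership values stay stable across app-domain recycles and machines. A rough sketch of the element is below; the key values are placeholders (generate your own), not anything taken from the original post:

    <machineKey
        validationKey="PLACEHOLDER_VALIDATION_KEY"
        decryptionKey="PLACEHOLDER_DECRYPTION_KEY"
        validation="SHA1"
        decryption="AES" />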

    Read the article

  • How do I get my dependencies injected using @Configurable in conjunction with readResolve()?

    - by bmatthews68
    The framework I am developing for my application relies very heavily on dynamically generated domain objects. I recently started using Spring WebFlow and now need to be able to serialize my domain objects that will be kept in flow scope. I have done a bit of research and figured out that I can use writeReplace() and readResolve(). The only catch is that I need to look-up a factory in the Spring context. I tried to use @Configurable(preConstruction = true) in conjunction with the BeanFactoryAware marker interface. But beanFactory is always null when I try to use it in my createEntity() method. Neither the default constructor nor the setBeanFactory() injector are called. Has anybody tried this or something similar? I have included relevant class below. Thanks in advance, Brian /* * Copyright 2008 Brian Thomas Matthews Limited. * All rights reserved, worldwide. * * This software and all information contained herein is the property of * Brian Thomas Matthews Limited. Any dissemination, disclosure, use, or * reproduction of this material for any reason inconsistent with the * express purpose for which it has been disclosed is strictly forbidden. */ package com.btmatthews.dmf.domain.impl.cglib; import java.io.InvalidObjectException; import java.io.ObjectStreamException; import java.io.Serializable; import java.lang.reflect.InvocationTargetException; import java.util.HashMap; import java.util.Map; import org.apache.commons.beanutils.PropertyUtils; import org.slf4j.Logger; import org.slf4j.LoggerFactory; import org.springframework.beans.factory.BeanFactory; import org.springframework.beans.factory.BeanFactoryAware; import org.springframework.beans.factory.annotation.Configurable; import org.springframework.util.StringUtils; import com.btmatthews.dmf.domain.IEntity; import com.btmatthews.dmf.domain.IEntityFactory; import com.btmatthews.dmf.domain.IEntityID; import com.btmatthews.dmf.spring.IEntityDefinitionBean; /** * This class represents the serialized form of a domain object implemented * using CGLib. The readResolve() method recreates the actual domain object * after it has been deserialized into Serializable. You must define * &lt;spring-configured/&gt; in the application context. * * @param <S> * The interface that defines the properties of the base domain * object. * @param <T> * The interface that defines the properties of the derived domain * object. * @author <a href="mailto:[email protected]">Brian Matthews</a> * @version 1.0 */ @Configurable(preConstruction = true) public final class SerializedCGLibEntity<S extends IEntity<S>, T extends S> implements Serializable, BeanFactoryAware { /** * Used for logging. */ private static final Logger LOG = LoggerFactory .getLogger(SerializedCGLibEntity.class); /** * The serialization version number. */ private static final long serialVersionUID = 3830830321957878319L; /** * The application context. Note this is not serialized. */ private transient BeanFactory beanFactory; /** * The domain object name. */ private String entityName; /** * The domain object identifier. */ private IEntityID<S> entityId; /** * The domain object version number. */ private long entityVersion; /** * The attributes of the domain object. */ private HashMap<?, ?> entityAttributes; /** * The default constructor. */ public SerializedCGLibEntity() { SerializedCGLibEntity.LOG .debug("Initializing with default constructor"); } /** * Initialise with the attributes to be serialised. * * @param name * The entity name. * @param id * The domain object identifier. 
* @param version * The entity version. * @param attributes * The entity attributes. */ public SerializedCGLibEntity(final String name, final IEntityID<S> id, final long version, final HashMap<?, ?> attributes) { SerializedCGLibEntity.LOG .debug("Initializing with parameterized constructor"); this.entityName = name; this.entityId = id; this.entityVersion = version; this.entityAttributes = attributes; } /** * Inject the bean factory. * * @param factory * The bean factory. */ public void setBeanFactory(final BeanFactory factory) { SerializedCGLibEntity.LOG.debug("Injected bean factory"); this.beanFactory = factory; } /** * Called after deserialisation. The corresponding entity factory is * retrieved from the bean application context and BeanUtils methods are * used to initialise the object. * * @return The initialised domain object. * @throws ObjectStreamException * If there was a problem creating or initialising the domain * object. */ public Object readResolve() throws ObjectStreamException { SerializedCGLibEntity.LOG.debug("Transforming deserialized object"); final T entity = this.createEntity(); entity.setId(this.entityId); try { PropertyUtils.setSimpleProperty(entity, "version", this.entityVersion); for (Map.Entry<?, ?> entry : this.entityAttributes.entrySet()) { PropertyUtils.setSimpleProperty(entity, entry.getKey() .toString(), entry.getValue()); } } catch (IllegalAccessException e) { throw new InvalidObjectException(e.getMessage()); } catch (InvocationTargetException e) { throw new InvalidObjectException(e.getMessage()); } catch (NoSuchMethodException e) { throw new InvalidObjectException(e.getMessage()); } return entity; } /** * Lookup the entity factory in the application context and create an * instance of the entity. The entity factory is located by getting the * entity definition bean and using the factory registered with it or * getting the entity factory. The name used for the definition bean lookup * is ${entityName}Definition while ${entityName} is used for the factory * lookup. * * @return The domain object instance. * @throws ObjectStreamException * If the entity definition bean or entity factory were not * available. */ @SuppressWarnings("unchecked") private T createEntity() throws ObjectStreamException { SerializedCGLibEntity.LOG.debug("Getting domain object factory"); // Try to use the entity definition bean final IEntityDefinitionBean<S, T> entityDefinition = (IEntityDefinitionBean<S, T>)this.beanFactory .getBean(StringUtils.uncapitalize(this.entityName) + "Definition", IEntityDefinitionBean.class); if (entityDefinition != null) { final IEntityFactory<S, T> entityFactory = entityDefinition .getFactory(); if (entityFactory != null) { SerializedCGLibEntity.LOG .debug("Domain object factory obtained via enity definition bean"); return entityFactory.create(); } } // Try to use the entity factory final IEntityFactory<S, T> entityFactory = (IEntityFactory<S, T>)this.beanFactory .getBean(StringUtils.uncapitalize(this.entityName) + "Factory", IEntityFactory.class); if (entityFactory != null) { SerializedCGLibEntity.LOG .debug("Domain object factory obtained via direct look-up"); return entityFactory.create(); } // Neither worked! SerializedCGLibEntity.LOG.warn("Cannot find domain object factory"); throw new InvalidObjectException( "No entity definition or factory found for " + this.entityName); } }
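A note on an alternative that is sometimes used when the @Configurable/AspectJ route does not inject anything (for example because <context:spring-configured/> is missing, load-time weaving is not active, or because no constructor runs during deserialization): keep a static reference to the application context and consult it from readResolve(). This is only an illustrative sketch, not the author's code; the ContextHolder name and its registration as a singleton bean are assumptions.

    import org.springframework.beans.BeansException;
    import org.springframework.context.ApplicationContext;
    import org.springframework.context.ApplicationContextAware;

    /** Registered as an ordinary Spring bean; captures the context once at startup. */
    public class ContextHolder implements ApplicationContextAware {

        private static volatile ApplicationContext context;

        public void setApplicationContext(ApplicationContext applicationContext) throws BeansException {
            context = applicationContext;
        }

        /** Returns the application context, or null if the container has not been refreshed yet. */
        public static ApplicationContext get() {
            return context;
        }
    }

With something like that in place, createEntity() could fall back to ContextHolder.get().getBean(...) when the injected beanFactory is null; the obvious trade-off is the global static state.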

    Read the article

  • Embedding mercurial revision information in Visual Studio c# projects automatically

    - by Mark Booth
    Original Problem In building our projects, I want the mercurial id of each repository to be embedded within the product(s) of that repository (the library, application or test application). I find it makes it so much easier to debug an application ebing run by custiomers 8 timezones away if you know precisely what went into building the particular version of the application they are using. As such, every project (application or library) in our systems implement a way of getting at the associated revision information. I also find it very useful to be able to see if an application has been compiled with clean (un-modified) changesets from the repository. 'Hg id' usefully appends a + to the changeset id when there are uncommitted changes in a repository, so this allows is to easily see if people are running a clean or a modified version of the code. My current solution is detailed below, and fulfills the basic requirements, but there are a number of problems with it. Current Solution At the moment, to each and every Visual Studio solution, I add the following "Pre-build event command line" commands: cd $(ProjectDir) HgID I also add an HgID.bat file to the Project directory: @echo off type HgId.pre > HgId.cs For /F "delims=" %%a in ('hg id') Do <nul >>HgID.cs set /p = @"%%a" echo ; >> HgId.cs echo } >> HgId.cs echo } >> HgId.cs along with an HgId.pre file, which is defined as: namespace My.Namespace { /// <summary> Auto generated Mercurial ID class. </summary> internal class HgID { /// <summary> Mercurial version ID [+ is modified] [Named branch]</summary> public const string Version = When I build my application, the pre-build event is triggered on all libraries, creating a new HgId.cs file (which is not kept under revision control) and causing the library to be re-compiled with with the new 'hg id' string in 'Version'. Problems with the current solution The main problem is that since the HgId.cs is re-created at each pre-build, every time we need to compile anything, all projects in the current solution are re-compiled. Since we want to be able to easily debug into our libraries, we usually keep many libraries referenced in our main application solution. This can result in build times which are significantly longer than I would like. Ideally I would like the libraries to compile only if the contents of the HgId.cs file has actually changed, as opposed to having been re-created with exactly the same contents. The second problem with this method is it's dependence on specific behaviour of the windows shell. I've already had to modify the batch file several times, since the original worked under XP but not Vista, the next version worked under Vista but not XP and finally I managed to make it work with both. Whether it will work with Windows 7 however is anyones guess and as time goes on, I see it more likely that contractors will expect to be able to build our apps on their Windows 7 boxen. Finally, I have an aesthetic problem with this solution, batch files and bodged together template files feel like the wrong way to do this. My actual questions How would you solve/how are you solving the problem I'm trying to solve? What better options are out there than what I'm currently doing? Rejected Solutions to these problems Before I implemented the current solution, I looked at Mercurials Keyword extension, since it seemed like the obvious solution. However the more I looked at it and read peoples opinions, the more that I came to the conclusion that it wasn't the right thing to do. 
I also remember the problems that keyword substitution has caused me in projects at previous companies (just the thought of ever having to use Source Safe again fills me with a feeling of dread *8'). Also, I don't particularly want to have to enable Mercurial extensions to get the build to complete. I want the solution to be self-contained, so that it isn't easy for the application to be accidentally compiled without the embedded version information just because an extension isn't enabled or the right helper software hasn't been installed. I also thought of writing this in a better scripting language, one where I would only write the HgId.cs file if the content had actually changed, but all of the options I could think of would require my co-workers, contractors and possibly customers to install software they might not otherwise want (for example Cygwin). Any other options people can think of would be appreciated.

UPDATE: Partial solution. Having played around with it for a while, I've managed to get the HgId.bat file to only overwrite the HgId.cs file if it changes:

    @echo off
    type HgId.pre > HgId.cst
    For /F "delims=" %%a in ('hg id') Do <nul >>HgId.cst set /p = @"%%a"
    echo ; >> HgId.cst
    echo } >> HgId.cst
    echo } >> HgId.cst
    fc HgId.cs HgId.cst >NUL
    if %errorlevel%==0 goto :ok
    copy HgId.cst HgId.cs
    :ok
    del HgId.cst

Problems with this solution: Even though HgId.cs is no longer being re-created every time, Visual Studio still insists on compiling everything every time. I've tried looking for solutions and tried checking "Only build startup projects and dependencies on Run" in Tools|Options|Projects and Solutions|Build and Run, but it makes no difference. The second problem also remains, and now I have no way to test whether it will work with Vista, since that contractor is no longer with us. If anyone can test this batch file on a Windows 7 and/or Vista box, I would appreciate hearing how it went. Finally, my aesthetic problem with this solution is even stronger than it was before, since the batch file is more complex and there is now more to go wrong. If you can think of any better solution, I would love to hear about it.

    Read the article

  • Accessing local variable doesn't improve performance

    - by NicMagnier
    The short version Why is this code: var index = (Math.floor(y / scale) * img.width + Math.floor(x / scale)) * 4; More performant than this one? var index = Math.floor(ref_index) * 4; The long version This week, the author of Impact js published an article about some rendering issue: http://www.phoboslab.org/log/2012/09/drawing-pixels-is-hard In the article there was the source of a function to scale an image by accessing pixels in the canvas. I wanted to suggest some traditional ways to optimize this kind of code so that the scaling would be shorter at loading time. But after testing it my result was most of the time worst that the original function. Guessing this was the JavaScript engine that was doing some smart optimization I tried to understand a bit more what was going on so I did a bunch of test. But my results are quite confusing and I would need some help to understand what's going on. I have a test page here: http://www.mx981.com/stuff/resize_bench/test.html jsPerf: http://jsperf.com/local-variable-due-to-the-scope-lookup To start the test, click the picture and the results will appear in the console. There are three different versions: The original code: for( var y = 0; y < heightScaled; y++ ) { for( var x = 0; x < widthScaled; x++ ) { var index = (Math.floor(y / scale) * img.width + Math.floor(x / scale)) * 4; var indexScaled = (y * widthScaled + x) * 4; scaledPixels.data[ indexScaled ] = origPixels.data[ index ]; scaledPixels.data[ indexScaled+1 ] = origPixels.data[ index+1 ]; scaledPixels.data[ indexScaled+2 ] = origPixels.data[ index+2 ]; scaledPixels.data[ indexScaled+3 ] = origPixels.data[ index+3 ]; } } jsPerf: http://jsperf.com/so-accessing-local-variable-doesn-t-improve-performance One of my attempt to optimize it: var ref_index = 0; var ref_indexScaled = 0 var ref_step = 1 / scale; for( var y = 0; y < heightScaled; y++ ) { for( var x = 0; x < widthScaled; x++ ) { var index = Math.floor(ref_index) * 4; scaledPixels.data[ ref_indexScaled++ ] = origPixels.data[ index ]; scaledPixels.data[ ref_indexScaled++ ] = origPixels.data[ index+1 ]; scaledPixels.data[ ref_indexScaled++ ] = origPixels.data[ index+2 ]; scaledPixels.data[ ref_indexScaled++ ] = origPixels.data[ index+3 ]; ref_index+= ref_step; } } jsPerf: http://jsperf.com/so-accessing-local-variable-doesn-t-improve-performance The same optimized code but with recalculating the index variable each time (Hybrid) var ref_index = 0; var ref_indexScaled = 0 var ref_step = 1 / scale; for( var y = 0; y < heightScaled; y++ ) { for( var x = 0; x < widthScaled; x++ ) { var index = (Math.floor(y / scale) * img.width + Math.floor(x / scale)) * 4; scaledPixels.data[ ref_indexScaled++ ] = origPixels.data[ index ]; scaledPixels.data[ ref_indexScaled++ ] = origPixels.data[ index+1 ]; scaledPixels.data[ ref_indexScaled++ ] = origPixels.data[ index+2 ]; scaledPixels.data[ ref_indexScaled++ ] = origPixels.data[ index+3 ]; ref_index+= ref_step; } } jsPerf: http://jsperf.com/so-accessing-local-variable-doesn-t-improve-performance The only difference in the two last one is the calculation of the 'index' variable. And to my surprise the optimized version is slower in most browsers (except opera). 
Results of personal testing (not the jsPerf tests):

    Opera: Original 8668ms, Optimized 932ms, Hybrid 8696ms
    Chrome: Original 139ms, Optimized 145ms, Hybrid 136ms
    Safari: Original 433ms, Optimized 853ms, Hybrid 451ms
    Firefox: Original 343ms, Optimized 422ms, Hybrid 350ms

After digging around, it seems the usual good practice is to access mainly local variables because of scope lookup. Since the Optimized version only reads one local variable, it should be faster than the Hybrid code, which touches several variables and objects in addition to the extra arithmetic. So why is the "optimized" version slower? I thought it might be because some JavaScript engines don't optimize the Optimized version because it is not hot enough, but after using --trace-opt in Chrome it seems all versions are properly compiled by V8. At this point I am a bit clueless and wonder if somebody knows what is going on. I also made some more test cases on this page: http://www.mx981.com/stuff/resize_bench/index.html

    Read the article

  • Service Discovery in WCF 4.0 – Part 1

    - by Shaun
    When designing a service oriented architecture (SOA) system, there will be a lot of services with many service contracts, endpoints and behaviors. Besides the client calling the service, in a large distributed system a service may invoke other services. In this case, one service might need to know the endpoints it invokes. This might not be a problem in a small system. But when you have more than 10 services this might be a problem. For example in my current product, there are around 10 services, such as the user authentication service, UI integration service, location service, license service, device monitor service, event monitor service, schedule job service, accounting service, player management service, etc..   Benefit of Discovery Service Since almost all my services need to invoke at least one other service. This would be a difficult task to make sure all services endpoints are configured correctly in every service. And furthermore, it would be a nightmare when a service changed its endpoint at runtime. Hence, we need a discovery service to remove the dependency (configuration dependency). A discovery service plays as a service dictionary which stores the relationship between the contracts and the endpoints for every service. By using the discovery service, when service X wants to invoke service Y, it just need to ask the discovery service where is service Y, then the discovery service will return all proper endpoints of service Y, then service X can use the endpoint to send the request to service Y. And when some services changed their endpoint address, all need to do is to update its records in the discovery service then all others will know its new endpoint. In WCF 4.0 Discovery it supports both managed proxy discovery mode and ad-hoc discovery mode. In ad-hoc mode there is no standalone discovery service. When a client wanted to invoke a service, it will broadcast an message (normally in UDP protocol) to the entire network with the service match criteria. All services which enabled the discovery behavior will receive this message and only those matched services will send their endpoint back to the client. The managed proxy discovery service works as I described above. In this post I will only cover the managed proxy mode, where there’s a discovery service. For more information about the ad-hoc mode please refer to the MSDN.   Service Announcement and Probe The main functionality of discovery service should be return the proper endpoint addresses back to the service who is looking for. In most cases the consume service (as a client) will send the contract which it wanted to request to the discovery service. And then the discovery service will find the endpoint and respond. Sometimes the contract and endpoint are not enough. It also contains versioning, extensions attributes. This post I will only cover the case includes contract and endpoint. When a client (or sometimes a service who need to invoke another service) need to connect to a target service, it will firstly request the discovery service through the “Probe” method with the criteria. Basically the criteria contains the contract type name of the target service. Then the discovery service will search its endpoint repository by the criteria. The repository might be a database, a distributed cache or a flat XML file. If it matches, the discovery service will grab the endpoint information (it’s called discovery endpoint metadata in WCF) and send back. And this is called “Probe”. 
Finally, the client receives the discovery endpoint metadata and uses the endpoint to connect to the target service. Besides the probe, the discovery service should also be responsible for knowing when a new service becomes available (goes online) and when it stops (goes offline). This feature is named "Announcement": when a service starts or stops, it announces itself to the discovery service. So the basic functionality of a discovery service should include:

1, An endpoint which receives the service online message and adds the service endpoint information to the discovery repository.
2, An endpoint which receives the service offline message and removes the service endpoint information from the discovery repository.
3, An endpoint which receives the client probe message, finds the matching service endpoints, and returns the discovery endpoint metadata.

WCF 4.0 discovery covers all these features in its infrastructure classes.

Discovery Service in WCF 4.0

WCF 4.0 introduced a new assembly named System.ServiceModel.Discovery, which has all the classes and interfaces necessary to build a WS-Discovery compliant discovery service. It supports ad-hoc and managed proxy modes. For the case described in this post, what we need to build is a standalone discovery service, i.e. the managed proxy discovery mode. To build a managed discovery service in WCF 4.0, just create a new class that inherits from the abstract class System.ServiceModel.Discovery.DiscoveryProxy. This class implements and abstracts the procedures of service announcement and probe, and it exposes 8 abstract methods where we can implement our own endpoint register, unregister and find logic. These 8 methods are asynchronous, which means all invocations of the discovery service are asynchronous, for better service capability and performance.

1, OnBeginOnlineAnnouncement, OnEndOnlineAnnouncement: Invoked when a service sends the online announcement message. We need to add the endpoint information to the repository in this method.
2, OnBeginOfflineAnnouncement, OnEndOfflineAnnouncement: Invoked when a service sends the offline announcement message. We need to remove the endpoint information from the repository in this method.
3, OnBeginFind, OnEndFind: Invoked when a client sends the probe message to find the service endpoint information. We need to look up the proper endpoints in the repository by matching the client's criteria in this method.
4, OnBeginResolve, OnEndResolve: Invoked when a client sends the resolve message. Different from the find method, the resolve method returns exactly one service endpoint metadata entry to the client. In our example we will NOT implement this method.

Let's create our own discovery service, inheriting from the base System.ServiceModel.Discovery.DiscoveryProxy. We also need to specify the service behavior on this class. Since the built-in discovery service host class only supports singleton mode, we must set its instance context mode to Single.
1: using System; 2: using System.Collections.Generic; 3: using System.Linq; 4: using System.Text; 5: using System.ServiceModel.Discovery; 6: using System.ServiceModel; 7:  8: namespace Phare.Service 9: { 10: [ServiceBehavior(InstanceContextMode = InstanceContextMode.Single, ConcurrencyMode = ConcurrencyMode.Multiple)] 11: public class ManagedProxyDiscoveryService : DiscoveryProxy 12: { 13: protected override IAsyncResult OnBeginFind(FindRequestContext findRequestContext, AsyncCallback callback, object state) 14: { 15: throw new NotImplementedException(); 16: } 17:  18: protected override IAsyncResult OnBeginOfflineAnnouncement(DiscoveryMessageSequence messageSequence, EndpointDiscoveryMetadata endpointDiscoveryMetadata, AsyncCallback callback, object state) 19: { 20: throw new NotImplementedException(); 21: } 22:  23: protected override IAsyncResult OnBeginOnlineAnnouncement(DiscoveryMessageSequence messageSequence, EndpointDiscoveryMetadata endpointDiscoveryMetadata, AsyncCallback callback, object state) 24: { 25: throw new NotImplementedException(); 26: } 27:  28: protected override IAsyncResult OnBeginResolve(ResolveCriteria resolveCriteria, AsyncCallback callback, object state) 29: { 30: throw new NotImplementedException(); 31: } 32:  33: protected override void OnEndFind(IAsyncResult result) 34: { 35: throw new NotImplementedException(); 36: } 37:  38: protected override void OnEndOfflineAnnouncement(IAsyncResult result) 39: { 40: throw new NotImplementedException(); 41: } 42:  43: protected override void OnEndOnlineAnnouncement(IAsyncResult result) 44: { 45: throw new NotImplementedException(); 46: } 47:  48: protected override EndpointDiscoveryMetadata OnEndResolve(IAsyncResult result) 49: { 50: throw new NotImplementedException(); 51: } 52: } 53: } Then let’s implement the online, offline and find methods one by one. WCF discovery service gives us full flexibility to implement the endpoint add, remove and find logic. For the demo purpose we will use an internal dictionary to store the services’ endpoint metadata. In the next post we will see how to serialize and store these information in database. Define a concurrent dictionary inside the service class since our it will be used in the multiple threads scenario. 1: [ServiceBehavior(InstanceContextMode = InstanceContextMode.Single, ConcurrencyMode = ConcurrencyMode.Multiple)] 2: public class ManagedProxyDiscoveryService : DiscoveryProxy 3: { 4: private ConcurrentDictionary<EndpointAddress, EndpointDiscoveryMetadata> _services; 5:  6: public ManagedProxyDiscoveryService() 7: { 8: _services = new ConcurrentDictionary<EndpointAddress, EndpointDiscoveryMetadata>(); 9: } 10: } Then we can simply implement the logic of service online and offline. 
1: protected override IAsyncResult OnBeginOnlineAnnouncement(DiscoveryMessageSequence messageSequence, EndpointDiscoveryMetadata endpointDiscoveryMetadata, AsyncCallback callback, object state) 2: { 3: _services.AddOrUpdate(endpointDiscoveryMetadata.Address, endpointDiscoveryMetadata, (key, value) => endpointDiscoveryMetadata); 4: return new OnOnlineAnnouncementAsyncResult(callback, state); 5: } 6:  7: protected override void OnEndOnlineAnnouncement(IAsyncResult result) 8: { 9: OnOnlineAnnouncementAsyncResult.End(result); 10: } 11:  12: protected override IAsyncResult OnBeginOfflineAnnouncement(DiscoveryMessageSequence messageSequence, EndpointDiscoveryMetadata endpointDiscoveryMetadata, AsyncCallback callback, object state) 13: { 14: EndpointDiscoveryMetadata endpoint = null; 15: _services.TryRemove(endpointDiscoveryMetadata.Address, out endpoint); 16: return new OnOfflineAnnouncementAsyncResult(callback, state); 17: } 18:  19: protected override void OnEndOfflineAnnouncement(IAsyncResult result) 20: { 21: OnOfflineAnnouncementAsyncResult.End(result); 22: } Regards the find method, the parameter FindRequestContext.Criteria has a method named IsMatch, which can be use for us to evaluate which service metadata is satisfied with the criteria. So the implementation of find method would be like this. 1: protected override IAsyncResult OnBeginFind(FindRequestContext findRequestContext, AsyncCallback callback, object state) 2: { 3: _services.Where(s => findRequestContext.Criteria.IsMatch(s.Value)) 4: .Select(s => s.Value) 5: .All(meta => 6: { 7: findRequestContext.AddMatchingEndpoint(meta); 8: return true; 9: }); 10: return new OnFindAsyncResult(callback, state); 11: } 12:  13: protected override void OnEndFind(IAsyncResult result) 14: { 15: OnFindAsyncResult.End(result); 16: } As you can see, we checked all endpoints metadata in repository by invoking the IsMatch method. Then add all proper endpoints metadata into the parameter. Finally since all these methods are asynchronized we need some AsyncResult classes as well. Below are the base class and the inherited classes used in previous methods. 
1: using System; 2: using System.Collections.Generic; 3: using System.Linq; 4: using System.Text; 5: using System.Threading; 6:  7: namespace Phare.Service 8: { 9: abstract internal class AsyncResult : IAsyncResult 10: { 11: AsyncCallback callback; 12: bool completedSynchronously; 13: bool endCalled; 14: Exception exception; 15: bool isCompleted; 16: ManualResetEvent manualResetEvent; 17: object state; 18: object thisLock; 19:  20: protected AsyncResult(AsyncCallback callback, object state) 21: { 22: this.callback = callback; 23: this.state = state; 24: this.thisLock = new object(); 25: } 26:  27: public object AsyncState 28: { 29: get 30: { 31: return state; 32: } 33: } 34:  35: public WaitHandle AsyncWaitHandle 36: { 37: get 38: { 39: if (manualResetEvent != null) 40: { 41: return manualResetEvent; 42: } 43: lock (ThisLock) 44: { 45: if (manualResetEvent == null) 46: { 47: manualResetEvent = new ManualResetEvent(isCompleted); 48: } 49: } 50: return manualResetEvent; 51: } 52: } 53:  54: public bool CompletedSynchronously 55: { 56: get 57: { 58: return completedSynchronously; 59: } 60: } 61:  62: public bool IsCompleted 63: { 64: get 65: { 66: return isCompleted; 67: } 68: } 69:  70: object ThisLock 71: { 72: get 73: { 74: return this.thisLock; 75: } 76: } 77:  78: protected static TAsyncResult End<TAsyncResult>(IAsyncResult result) 79: where TAsyncResult : AsyncResult 80: { 81: if (result == null) 82: { 83: throw new ArgumentNullException("result"); 84: } 85:  86: TAsyncResult asyncResult = result as TAsyncResult; 87:  88: if (asyncResult == null) 89: { 90: throw new ArgumentException("Invalid async result.", "result"); 91: } 92:  93: if (asyncResult.endCalled) 94: { 95: throw new InvalidOperationException("Async object already ended."); 96: } 97:  98: asyncResult.endCalled = true; 99:  100: if (!asyncResult.isCompleted) 101: { 102: asyncResult.AsyncWaitHandle.WaitOne(); 103: } 104:  105: if (asyncResult.manualResetEvent != null) 106: { 107: asyncResult.manualResetEvent.Close(); 108: } 109:  110: if (asyncResult.exception != null) 111: { 112: throw asyncResult.exception; 113: } 114:  115: return asyncResult; 116: } 117:  118: protected void Complete(bool completedSynchronously) 119: { 120: if (isCompleted) 121: { 122: throw new InvalidOperationException("This async result is already completed."); 123: } 124:  125: this.completedSynchronously = completedSynchronously; 126:  127: if (completedSynchronously) 128: { 129: this.isCompleted = true; 130: } 131: else 132: { 133: lock (ThisLock) 134: { 135: this.isCompleted = true; 136: if (this.manualResetEvent != null) 137: { 138: this.manualResetEvent.Set(); 139: } 140: } 141: } 142:  143: if (callback != null) 144: { 145: callback(this); 146: } 147: } 148:  149: protected void Complete(bool completedSynchronously, Exception exception) 150: { 151: this.exception = exception; 152: Complete(completedSynchronously); 153: } 154: } 155: } 1: using System; 2: using System.Collections.Generic; 3: using System.Linq; 4: using System.Text; 5: using System.ServiceModel.Discovery; 6: using Phare.Service; 7:  8: namespace Phare.Service 9: { 10: internal sealed class OnOnlineAnnouncementAsyncResult : AsyncResult 11: { 12: public OnOnlineAnnouncementAsyncResult(AsyncCallback callback, object state) 13: : base(callback, state) 14: { 15: this.Complete(true); 16: } 17:  18: public static void End(IAsyncResult result) 19: { 20: AsyncResult.End<OnOnlineAnnouncementAsyncResult>(result); 21: } 22:  23: } 24:  25: sealed class OnOfflineAnnouncementAsyncResult : 
AsyncResult 26: { 27: public OnOfflineAnnouncementAsyncResult(AsyncCallback callback, object state) 28: : base(callback, state) 29: { 30: this.Complete(true); 31: } 32:  33: public static void End(IAsyncResult result) 34: { 35: AsyncResult.End<OnOfflineAnnouncementAsyncResult>(result); 36: } 37: } 38:  39: sealed class OnFindAsyncResult : AsyncResult 40: { 41: public OnFindAsyncResult(AsyncCallback callback, object state) 42: : base(callback, state) 43: { 44: this.Complete(true); 45: } 46:  47: public static void End(IAsyncResult result) 48: { 49: AsyncResult.End<OnFindAsyncResult>(result); 50: } 51: } 52:  53: sealed class OnResolveAsyncResult : AsyncResult 54: { 55: EndpointDiscoveryMetadata matchingEndpoint; 56:  57: public OnResolveAsyncResult(EndpointDiscoveryMetadata matchingEndpoint, AsyncCallback callback, object state) 58: : base(callback, state) 59: { 60: this.matchingEndpoint = matchingEndpoint; 61: this.Complete(true); 62: } 63:  64: public static EndpointDiscoveryMetadata End(IAsyncResult result) 65: { 66: OnResolveAsyncResult thisPtr = AsyncResult.End<OnResolveAsyncResult>(result); 67: return thisPtr.matchingEndpoint; 68: } 69: } 70: } Now we have finished the discovery service. The next step is to host it. The discovery service is a standard WCF service. So we can use ServiceHost on a console application, windows service, or in IIS as usual. The following code is how to host the discovery service we had just created in a console application. 1: static void Main(string[] args) 2: { 3: using (var host = new ServiceHost(new ManagedProxyDiscoveryService())) 4: { 5: host.Opened += (sender, e) => 6: { 7: host.Description.Endpoints.All((ep) => 8: { 9: Console.WriteLine(ep.ListenUri); 10: return true; 11: }); 12: }; 13:  14: try 15: { 16: // retrieve the announcement, probe endpoint and binding from configuration 17: var announcementEndpointAddress = new EndpointAddress(ConfigurationManager.AppSettings["announcementEndpointAddress"]); 18: var probeEndpointAddress = new EndpointAddress(ConfigurationManager.AppSettings["probeEndpointAddress"]); 19: var binding = Activator.CreateInstance(Type.GetType(ConfigurationManager.AppSettings["bindingType"], true, true)) as Binding; 20: var announcementEndpoint = new AnnouncementEndpoint(binding, announcementEndpointAddress); 21: var probeEndpoint = new DiscoveryEndpoint(binding, probeEndpointAddress); 22: probeEndpoint.IsSystemEndpoint = false; 23: // append the service endpoint for announcement and probe 24: host.AddServiceEndpoint(announcementEndpoint); 25: host.AddServiceEndpoint(probeEndpoint); 26:  27: host.Open(); 28:  29: Console.WriteLine("Press any key to exit."); 30: Console.ReadKey(); 31: } 32: catch (Exception ex) 33: { 34: Console.WriteLine(ex.ToString()); 35: } 36: } 37:  38: Console.WriteLine("Done."); 39: Console.ReadKey(); 40: } What we need to notice is that, the discovery service needs two endpoints for announcement and probe. In this example I just retrieve them from the configuration file. I also specified the binding of these two endpoints in configuration file as well. 
1: <?xml version="1.0"?> 2: <configuration> 3: <startup> 4: <supportedRuntime version="v4.0" sku=".NETFramework,Version=v4.0"/> 5: </startup> 6: <appSettings> 7: <add key="announcementEndpointAddress" value="net.tcp://localhost:10010/announcement"/> 8: <add key="probeEndpointAddress" value="net.tcp://localhost:10011/probe"/> 9: <add key="bindingType" value="System.ServiceModel.NetTcpBinding, System.ServiceModel, Version=4.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089"/> 10: </appSettings> 11: </configuration> And this is the console screen when I ran my discovery service. As you can see there are two endpoints listening for announcement message and probe message.   Discoverable Service and Client Next, let’s create a WCF service that is discoverable, which means it can be found by the discovery service. To do so, we need to let the service send the online announcement message to the discovery service, as well as offline message before it shutdown. Just create a simple service which can make the incoming string to upper. The service contract and implementation would be like this. 1: [ServiceContract] 2: public interface IStringService 3: { 4: [OperationContract] 5: string ToUpper(string content); 6: } 1: public class StringService : IStringService 2: { 3: public string ToUpper(string content) 4: { 5: return content.ToUpper(); 6: } 7: } Then host this service in the console application. In order to make the discovery service easy to be tested the service address will be changed each time it’s started. 1: static void Main(string[] args) 2: { 3: var baseAddress = new Uri(string.Format("net.tcp://localhost:11001/stringservice/{0}/", Guid.NewGuid().ToString())); 4:  5: using (var host = new ServiceHost(typeof(StringService), baseAddress)) 6: { 7: host.Opened += (sender, e) => 8: { 9: Console.WriteLine("Service opened at {0}", host.Description.Endpoints.First().ListenUri); 10: }; 11:  12: host.AddServiceEndpoint(typeof(IStringService), new NetTcpBinding(), string.Empty); 13:  14: host.Open(); 15:  16: Console.WriteLine("Press any key to exit."); 17: Console.ReadKey(); 18: } 19: } Currently this service is NOT discoverable. We need to add a special service behavior so that it could send the online and offline message to the discovery service announcement endpoint when the host is opened and closed. WCF 4.0 introduced a service behavior named ServiceDiscoveryBehavior. When we specified the announcement endpoint address and appended it to the service behaviors this service will be discoverable. 1: var announcementAddress = new EndpointAddress(ConfigurationManager.AppSettings["announcementEndpointAddress"]); 2: var announcementBinding = Activator.CreateInstance(Type.GetType(ConfigurationManager.AppSettings["bindingType"], true, true)) as Binding; 3: var announcementEndpoint = new AnnouncementEndpoint(announcementBinding, announcementAddress); 4: var discoveryBehavior = new ServiceDiscoveryBehavior(); 5: discoveryBehavior.AnnouncementEndpoints.Add(announcementEndpoint); 6: host.Description.Behaviors.Add(discoveryBehavior); The ServiceDiscoveryBehavior utilizes the service extension and channel dispatcher to implement the online and offline announcement logic. In short, it injected the channel open and close procedure and send the online and offline message to the announcement endpoint.   On client side, when we have the discovery service, a client can invoke a service without knowing its endpoint. 
WCF discovery assembly provides a class named DiscoveryClient, which can be used to find the proper service endpoint by passing the criteria. In the code below I initialized the DiscoveryClient, specified the discovery service probe endpoint address. Then I created the find criteria by specifying the service contract I wanted to use and invoke the Find method. This will send the probe message to the discovery service and it will find the endpoints back to me. The discovery service will return all endpoints that matches the find criteria, which means in the result of the find method there might be more than one endpoints. In this example I just returned the first matched one back. In the next post I will show how to extend our discovery service to make it work like a service load balancer. 1: static EndpointAddress FindServiceEndpoint() 2: { 3: var probeEndpointAddress = new EndpointAddress(ConfigurationManager.AppSettings["probeEndpointAddress"]); 4: var probeBinding = Activator.CreateInstance(Type.GetType(ConfigurationManager.AppSettings["bindingType"], true, true)) as Binding; 5: var discoveryEndpoint = new DiscoveryEndpoint(probeBinding, probeEndpointAddress); 6:  7: EndpointAddress address = null; 8: FindResponse result = null; 9: using (var discoveryClient = new DiscoveryClient(discoveryEndpoint)) 10: { 11: result = discoveryClient.Find(new FindCriteria(typeof(IStringService))); 12: } 13:  14: if (result != null && result.Endpoints.Any()) 15: { 16: var endpointMetadata = result.Endpoints.First(); 17: address = endpointMetadata.Address; 18: } 19: return address; 20: } Once we probed the discovery service we will receive the endpoint. So in the client code we can created the channel factory from the endpoint and binding, and invoke to the service. When creating the client side channel factory we need to make sure that the client side binding should be the same as the service side. WCF discovery service can be used to find the endpoint for a service contract, but the binding is NOT included. This is because the binding was not in the WS-Discovery specification. In the next post I will demonstrate how to add the binding information into the discovery service. At that moment the client don’t need to create the binding by itself. Instead it will use the binding received from the discovery service. 1: static void Main(string[] args) 2: { 3: Console.WriteLine("Say something..."); 4: var content = Console.ReadLine(); 5: while (!string.IsNullOrWhiteSpace(content)) 6: { 7: Console.WriteLine("Finding the service endpoint..."); 8: var address = FindServiceEndpoint(); 9: if (address == null) 10: { 11: Console.WriteLine("There is no endpoint matches the criteria."); 12: } 13: else 14: { 15: Console.WriteLine("Found the endpoint {0}", address.Uri); 16:  17: var factory = new ChannelFactory<IStringService>(new NetTcpBinding(), address); 18: factory.Opened += (sender, e) => 19: { 20: Console.WriteLine("Connecting to {0}.", factory.Endpoint.ListenUri); 21: }; 22: var proxy = factory.CreateChannel(); 23: using (proxy as IDisposable) 24: { 25: Console.WriteLine("ToUpper: {0} => {1}", content, proxy.ToUpper(content)); 26: } 27: } 28:  29: Console.WriteLine("Say something..."); 30: content = Console.ReadLine(); 31: } 32: } Similarly, the discovery service probe endpoint and binding were defined in the configuration file. 
1: <?xml version="1.0"?> 2: <configuration> 3: <startup> 4: <supportedRuntime version="v4.0" sku=".NETFramework,Version=v4.0"/> 5: </startup> 6: <appSettings> 7: <add key="announcementEndpointAddress" value="net.tcp://localhost:10010/announcement"/> 8: <add key="probeEndpointAddress" value="net.tcp://localhost:10011/probe"/> 9: <add key="bindingType" value="System.ServiceModel.NetTcpBinding, System.ServiceModel, Version=4.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089"/> 10: </appSettings> 11: </configuration>

OK, now let’s have a test. First start the discovery service, and then start our discoverable service. When it starts it announces itself to the discovery service and registers its endpoint in the repository, which is the local dictionary. Then start the client and type something. As you can see, the client asks the discovery service for the endpoint and then establishes the connection to the discoverable service. More interestingly, do NOT close the client console; instead, terminate the discoverable service by pressing the enter key. This makes the service send the offline message to the discovery service. Then start the discoverable service again. Since we made it use a different address each time it starts, it should now be hosted on another address. If we enter something in the client we can see that it asks the discovery service, retrieves the new endpoint, and connects to the service.

Summary

In this post I discussed the benefits of using a discovery service and the procedures of service announcement and probe. I also demonstrated how to leverage the WCF Discovery feature in WCF 4.0 to build a simple managed discovery service. For test purposes, in this example I used an in-memory dictionary as the discovery endpoint metadata repository, and when finding I simply returned the first matched endpoint. I also hard-coded the binding between the discoverable service and the client. In the next post I will show you how to solve the problems mentioned above, as well as some additional features for production usage. You can download the code here.

Hope this helps, Shaun

All documents and related graphics, codes are provided "AS IS" without warranty of any kind. Copyright © Shaun Ziyan Xu. This work is licensed under the Creative Commons License.

    Read the article

  • Building a better mouse-trap – Improving the creation of XML Message Requests using Reflection, XML & XSLT

    - by paulschapman
    Introduction

The way I previously created messages to send to the GovTalk service was to use the XMLDocument class to build up the request. While this worked it left a number of problems, not least that for every message a special function would need to be created. This is OK for the short term, but the biggest cost in any software project is maintenance and this would be a headache to maintain. So the following is a somewhat better way of achieving the same thing. For the purposes of this article I am going to be using the CompanyNumberSearch request of the GovTalk service – although this technique would work for any service that accepts XML. The C# functions which send and receive the messages remain the same. The magic sauce in this is the XSLT which defines the structure of the request, and the use of objects in conjunction with reflection to provide the content. It is a bit like Sweet Chilli Sauce added to Chicken on a bed of rice. So on to the Sweet Chilli Sauce.

The Sweet Chilli Sauce

The request to search for a company based on its number is as follows;

<GovTalkMessage xsi:schemaLocation="http://www.govtalk.gov.uk/CM/envelope http://xmlgw.companieshouse.gov.uk/v1-0/schema/Egov_ch-v2-0.xsd" xmlns="http://www.govtalk.gov.uk/CM/envelope" xmlns:dsig="http://www.w3.org/2000/09/xmldsig#" xmlns:gt="http://www.govtalk.gov.uk/schemas/govtalk/core" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" >
  <EnvelopeVersion>1.0</EnvelopeVersion>
  <Header>
    <MessageDetails>
      <Class>NumberSearch</Class>
      <Qualifier>request</Qualifier>
      <TransactionID>1</TransactionID>
    </MessageDetails>
    <SenderDetails>
      <IDAuthentication>
        <SenderID>????????????????????????????????</SenderID>
        <Authentication>
          <Method>CHMD5</Method>
          <Value>????????????????????????????????</Value>
        </Authentication>
      </IDAuthentication>
    </SenderDetails>
  </Header>
  <GovTalkDetails>
    <Keys/>
  </GovTalkDetails>
  <Body>
    <NumberSearchRequest xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:noNamespaceSchemaLocation="http://xmlgw.companieshouse.gov.uk/v1-0/schema/NumberSearch.xsd">
      <PartialCompanyNumber>99999999</PartialCompanyNumber>
      <DataSet>LIVE</DataSet>
      <SearchRows>1</SearchRows>
    </NumberSearchRequest>
  </Body>
</GovTalkMessage>

This is the XML that we send to the GovTalk Service, and we get back a list of companies that match the criteria passed. A message is structured in two parts: the envelope, which identifies the person sending the request along with the name of the request, and the body, which gives the details of the company we are looking for.

The Chilli

What makes it possible is the use of XSLT to define the message – and serialization to convert each request object into XML. To start we need to create an object which will represent the contents of the message we are sending. However, there are some common properties in all the messages that we send to Companies House. These properties are as follows: SenderId – the id of the person sending the message; SenderPassword – the password associated with the Id; TransactionId – a unique identifier for the message; AuthenticationValue – authenticates the request. Because these properties are unique to the Companies House message, and because they are shared by all messages, they are perfect candidates for a base class. 
The class is as follows; using System; using System.Collections.Generic; using System.Linq; using System.Web; using System.Security.Cryptography; using System.Text; using System.Text.RegularExpressions; using Microsoft.WindowsAzure.ServiceRuntime; namespace CompanyHub.Services { public class GovTalkRequest { public GovTalkRequest() { try { SenderID = RoleEnvironment.GetConfigurationSettingValue("SenderId"); SenderPassword = RoleEnvironment.GetConfigurationSettingValue("SenderPassword"); TransactionId = DateTime.Now.Ticks.ToString(); AuthenticationValue = EncodePassword(String.Format("{0}{1}{2}", SenderID, SenderPassword, TransactionId)); } catch (System.Exception ex) { throw ex; } } /// <summary> /// returns the Sender ID to be used when communicating with the GovTalk Service /// </summary> public String SenderID { get; set; } /// <summary> /// return the password to be used when communicating with the GovTalk Service /// </summary> public String SenderPassword { get; set; } // end SenderPassword /// <summary> /// Transaction Id - uses the Time and Date converted to Ticks /// </summary> public String TransactionId { get; set; } // end TransactionId /// <summary> /// calculate the authentication value that will be used when /// communicating with /// </summary> public String AuthenticationValue { get; set; } // end AuthenticationValue property /// <summary> /// encodes password(s) using MD5 /// </summary> /// <param name="clearPassword"></param> /// <returns></returns> public static String EncodePassword(String clearPassword) { MD5CryptoServiceProvider md5Hasher = new MD5CryptoServiceProvider(); byte[] hashedBytes; UTF32Encoding encoder = new UTF32Encoding(); hashedBytes = md5Hasher.ComputeHash(ASCIIEncoding.Default.GetBytes(clearPassword)); String result = Regex.Replace(BitConverter.ToString(hashedBytes), "-", "").ToLower(); return result; } } } There is nothing particularly clever here, except for the EncodePassword method which hashes the value made up of the SenderId, Password and Transaction id. Each message inherits from this object. So for the Company Number Search in addition to the properties above we need a partial number, which dataset to search – for the purposes of the project we only need to search the LIVE set so this can be set in the constructor and the SearchRows. Again all are set as properties. With the SearchRows and DataSet initialized in the constructor. public class CompanyNumberSearchRequest : GovTalkRequest, IDisposable { /// <summary> /// /// </summary> public CompanyNumberSearchRequest() : base() { DataSet = "LIVE"; SearchRows = 1; } /// <summary> /// Company Number to search against /// </summary> public String PartialCompanyNumber { get; set; } /// <summary> /// What DataSet should be searched for the company /// </summary> public String DataSet { get; set; } /// <summary> /// How many rows should be returned /// </summary> public int SearchRows { get; set; } public void Dispose() { DataSet = String.Empty; PartialCompanyNumber = String.Empty; DataSet = "LIVE"; SearchRows = 1; } } As well as inheriting from our base class, I have also inherited from IDisposable – not just because it is just plain good practice to dispose of objects when coding, but it gives also gives us more versatility when using the object. 
There are four stages in making a request and this is reflected in the four methods we execute in making a call to the Companies House service; Create a request Send a request Check the status If OK then get the results of the request I’ve implemented each of these stages within a static class called Toolbox – which also means I don’t need to create an instance of the class to use it. When making a request there are three stages; Get the template for the message Serialize the object representing the message Transform the serialized object using a predefined XSLT file. Each of my templates I have defined as an embedded resource. When retrieving a resource of this kind we have to include the full namespace to the resource. In making the code re-usable as much as possible I defined the full ‘path’ within the GetRequest method. requestFile = String.Format("CompanyHub.Services.Schemas.{0}", RequestFile); So we now have the full path of the file within the assembly. Now all we need do is retrieve the assembly and get the resource. asm = Assembly.GetExecutingAssembly(); sr = asm.GetManifestResourceStream(requestFile); Once retrieved  So this can be returned to the calling function and we now have a stream of XSLT to define the message. Time now to serialize the request to create the other side of this message. // Serialize object containing Request, Load into XML Document t = Obj.GetType(); ms = new MemoryStream(); serializer = new XmlSerializer(t); xmlTextWriter = new XmlTextWriter(ms, Encoding.ASCII); serializer.Serialize(xmlTextWriter, Obj); ms = (MemoryStream)xmlTextWriter.BaseStream; GovTalkRequest = Toolbox.ConvertByteArrayToString(ms.ToArray()); First off we need the type of the object so we make a call to the GetType method of the object containing the Message properties. Next we need a MemoryStream, XmlSerializer and an XMLTextWriter so these can be initialized. The object is serialized by making the call to the Serialize method of the serializer object. The result of that is then converted into a MemoryStream. That MemoryStream is then converted into a string. ConvertByteArrayToString This is a fairly simple function which uses an ASCIIEncoding object found within the System.Text namespace to convert an array of bytes into a string. public static String ConvertByteArrayToString(byte[] bytes) { System.Text.ASCIIEncoding enc = new System.Text.ASCIIEncoding(); return enc.GetString(bytes); } I only put it into a function because I will be using this in various places. The Sauce When adding support for other messages outside of creating a new object to store the properties of the message, the C# components do not need to change. It is in the XSLT file that the versatility of the technique lies. The XSLT file determines the format of the message. 
For the CompanyNumberSearch the XSLT file is as follows; <?xml version="1.0"?> <xsl:stylesheet version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform"> <xsl:template match="/"> <GovTalkMessage xsi:schemaLocation="http://www.govtalk.gov.uk/CM/envelope http://xmlgw.companieshouse.gov.uk/v1-0/schema/Egov_ch-v2-0.xsd" xmlns="http://www.govtalk.gov.uk/CM/envelope" xmlns:dsig="http://www.w3.org/2000/09/xmldsig#" xmlns:gt="http://www.govtalk.gov.uk/schemas/govtalk/core" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" > <EnvelopeVersion>1.0</EnvelopeVersion> <Header> <MessageDetails> <Class>NumberSearch</Class> <Qualifier>request</Qualifier> <TransactionID> <xsl:value-of select="CompanyNumberSearchRequest/TransactionId"/> </TransactionID> </MessageDetails> <SenderDetails> <IDAuthentication> <SenderID><xsl:value-of select="CompanyNumberSearchRequest/SenderID"/></SenderID> <Authentication> <Method>CHMD5</Method> <Value> <xsl:value-of select="CompanyNumberSearchRequest/AuthenticationValue"/> </Value> </Authentication> </IDAuthentication> </SenderDetails> </Header> <GovTalkDetails> <Keys/> </GovTalkDetails> <Body> <NumberSearchRequest xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:noNamespaceSchemaLocation="http://xmlgw.companieshouse.gov.uk/v1-0/schema/NumberSearch.xsd"> <PartialCompanyNumber> <xsl:value-of select="CompanyNumberSearchRequest/PartialCompanyNumber"/> </PartialCompanyNumber> <DataSet> <xsl:value-of select="CompanyNumberSearchRequest/DataSet"/> </DataSet> <SearchRows> <xsl:value-of select="CompanyNumberSearchRequest/SearchRows"/> </SearchRows> </NumberSearchRequest> </Body> </GovTalkMessage> </xsl:template> </xsl:stylesheet> The outer two tags define that this is a XSLT stylesheet and the root tag from which the nodes are searched for. The GovTalkMessage is the format of the message that will be sent to Companies House. We first set up the XslCompiledTransform object which will transform the XSLT template and the serialized object into the request to Companies House. xslt = new XslCompiledTransform(); resultStream = new MemoryStream(); writer = new XmlTextWriter(resultStream, Encoding.ASCII); doc = new XmlDocument(); The Serialize method require XmlTextWriter to write the XML (writer) and a stream to place the transferred object into (writer). The XML will be loaded into an XMLDocument object (doc) prior to the transformation. // create XSLT Template xslTemplate = Toolbox.GetRequest(Template); xslTemplate.Seek(0, SeekOrigin.Begin); templateReader = XmlReader.Create(xslTemplate); xslt.Load(templateReader); I have stored all the templates as a series of Embedded Resources and the GetRequestCall takes the name of the template and extracts the relevent XSLT file. /// <summary> /// Gets the framwork XML which makes the request /// </summary> /// <param name="RequestFile"></param> /// <returns></returns> public static Stream GetRequest(String RequestFile) { String requestFile = String.Empty; Stream sr = null; Assembly asm = null; try { requestFile = String.Format("CompanyHub.Services.Schemas.{0}", RequestFile); asm = Assembly.GetExecutingAssembly(); sr = asm.GetManifestResourceStream(requestFile); } catch (Exception) { throw; } finally { asm = null; } return sr; } // end private static stream GetRequest We first take the template name and expand it to include the full namespace to the Embedded Resource I like to keep all my schemas in the same directory and so the namespace reflects this. The rest is the default namespace for the project. 
Then we get the currently executing assembly (which will contain the resources with the call to GetExecutingAssembly() ) Finally we get a stream which contains the XSLT file. We use this stream and then load an XmlReader with the contents of the template, and that is in turn loaded into the XslCompiledTransform object. We convert the object containing the message properties into Xml by serializing it; calling the Serialize() method of the XmlSerializer object. To set up the object we do the following; t = Obj.GetType(); ms = new MemoryStream(); serializer = new XmlSerializer(t); xmlTextWriter = new XmlTextWriter(ms, Encoding.ASCII); We first determine the type of the object being transferred by calling GetType() We create an XmlSerializer object by passing the type of the object being serialized. The serializer writes to a memory stream and that is linked to an XmlTextWriter. Next job is to serialize the object and load it into an XmlDocument. serializer.Serialize(xmlTextWriter, Obj); ms = (MemoryStream)xmlTextWriter.BaseStream; xmlRequest = new XmlTextReader(ms); GovTalkRequest = Toolbox.ConvertByteArrayToString(ms.ToArray()); doc.LoadXml(GovTalkRequest); Time to transform the XML to construct the full request. xslt.Transform(doc, writer); resultStream.Seek(0, SeekOrigin.Begin); request = Toolbox.ConvertByteArrayToString(resultStream.ToArray()); So that creates the full request to be sent  to Companies House. Sending the request So far we have a string with a request for the Companies House service. Now we need to send the request to the Companies House Service. Configuration within an Azure project There are entire blog entries written about configuration within an Azure project – most of this is out of scope for this article but the following is a summary. Configuration is defined in two files within the parent project *.csdef which contains the definition of configuration setting. <?xml version="1.0" encoding="utf-8"?> <ServiceDefinition name="OnlineCompanyHub" xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceDefinition"> <WebRole name="CompanyHub.Host"> <InputEndpoints> <InputEndpoint name="HttpIn" protocol="http" port="80" /> </InputEndpoints> <ConfigurationSettings> <Setting name="DiagnosticsConnectionString" /> <Setting name="DataConnectionString" /> </ConfigurationSettings> </WebRole> <WebRole name="CompanyHub.Services"> <InputEndpoints> <InputEndpoint name="HttpIn" protocol="http" port="8080" /> </InputEndpoints> <ConfigurationSettings> <Setting name="DiagnosticsConnectionString" /> <Setting name="SenderId"/> <Setting name="SenderPassword" /> <Setting name="GovTalkUrl"/> </ConfigurationSettings> </WebRole> <WorkerRole name="CompanyHub.Worker"> <ConfigurationSettings> <Setting name="DiagnosticsConnectionString" /> </ConfigurationSettings> </WorkerRole> </ServiceDefinition>   Above is the configuration definition from the project. What we are interested in however is the ConfigurationSettings tag of the CompanyHub.Services WebRole. 
There are four configuration settings here, but at the moment we are interested in the second to forth settings; SenderId, SenderPassword and GovTalkUrl The value of these settings are defined in the ServiceDefinition.cscfg file; <?xml version="1.0"?> <ServiceConfiguration serviceName="OnlineCompanyHub" xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceConfiguration"> <Role name="CompanyHub.Host"> <Instances count="2" /> <ConfigurationSettings> <Setting name="DiagnosticsConnectionString" value="UseDevelopmentStorage=true" /> <Setting name="DataConnectionString" value="UseDevelopmentStorage=true" /> </ConfigurationSettings> </Role> <Role name="CompanyHub.Services"> <Instances count="2" /> <ConfigurationSettings> <Setting name="DiagnosticsConnectionString" value="UseDevelopmentStorage=true" /> <Setting name="SenderId" value="UserID"/> <Setting name="SenderPassword" value="Password"/> <Setting name="GovTalkUrl" value="http://xmlgw.companieshouse.gov.uk/v1-0/xmlgw/Gateway"/> </ConfigurationSettings> </Role> <Role name="CompanyHub.Worker"> <Instances count="2" /> <ConfigurationSettings> <Setting name="DiagnosticsConnectionString" value="UseDevelopmentStorage=true" /> </ConfigurationSettings> </Role> </ServiceConfiguration>   Look for the Role tag that contains our project name (CompanyHub.Services). Having configured the parameters we can now transmit the request. This is done by ‘POST’ing a stream of XML to the Companies House servers. govTalkUrl = RoleEnvironment.GetConfigurationSettingValue("GovTalkUrl"); request = WebRequest.Create(govTalkUrl); request.Method = "POST"; request.ContentType = "text/xml"; writer = new StreamWriter(request.GetRequestStream()); writer.WriteLine(RequestMessage); writer.Close(); We use the WebRequest object to send the object. Set the method of sending to ‘POST’ and the type of data as text/xml. Once set up all we do is write the request to the writer – this sends the request to Companies House. Did the Request Work Part I – Getting the response Having sent a request – we now need the result of that request. response = request.GetResponse(); reader = response.GetResponseStream(); result = Toolbox.ConvertByteArrayToString(Toolbox.ReadFully(reader));   The WebRequest object has a GetResponse() method which allows us to get the response sent back. Like many of these calls the results come in the form of a stream which we convert into a string. Did the Request Work Part II – Translating the Response Much like XSLT and XML were used to create the original request, so it can be used to extract the response and by deserializing the result we create an object that contains the response. Did it work? It would be really great if everything worked all the time. 
Of course if it did then I don’t suppose people would pay me and others the big bucks so that our programmes do not a) Collapse in a heap (this is an area of memory) b) Blow every fuse in the place in a shower of sparks (this will probably not happen this being real life and not a Hollywood movie, but it was possible to blow the sound system of a BBC Model B with a poorly coded setting) c) Go nuts and trap everyone outside the airlock (this was from a movie, and unless NASA get a manned moon/mars mission set up unlikely to happen) d) Go nuts and take over the world (this was also from a movie, but please note life has a habit of being of exceeding the wildest imaginations of Hollywood writers (note writers – Hollywood executives have no imagination and judging by recent output of that town have turned plagiarism into an art form). e) Freeze in total confusion because the cleaner pulled the plug to the internet router (this has happened) So anyway – we need to check to see if our request actually worked. Within the GovTalk response there is a section that details the status of the message and a description of what went wrong (if anything did). I have defined an XSLT template which will extract these into an XML document. <?xml version="1.0"?> <xsl:stylesheet version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform" xmlns:ev="http://www.govtalk.gov.uk/CM/envelope" xmlns:gt="http://www.govtalk.gov.uk/schemas/govtalk/core" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"> <xsl:template match="/"> <GovTalkStatus xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:xsd="http://www.w3.org/2001/XMLSchema"> <Status> <xsl:value-of select="ev:GovTalkMessage/ev:Header/ev:MessageDetails/ev:Qualifier"/> </Status> <Text> <xsl:value-of select="ev:GovTalkMessage/ev:GovTalkDetails/ev:GovTalkErrors/ev:Error/ev:Text"/> </Text> <Location> <xsl:value-of select="ev:GovTalkMessage/ev:GovTalkDetails/ev:GovTalkErrors/ev:Error/ev:Location"/> </Location> <Number> <xsl:value-of select="ev:GovTalkMessage/ev:GovTalkDetails/ev:GovTalkErrors/ev:Error/ev:Number"/> </Number> <Type> <xsl:value-of select="ev:GovTalkMessage/ev:GovTalkDetails/ev:GovTalkErrors/ev:Error/ev:Type"/> </Type> </GovTalkStatus> </xsl:template> </xsl:stylesheet>   Only thing different about previous XSL files is the references to two namespaces ev & gt. These are defined in the GovTalk response at the top of the response; xsi:schemaLocation="http://www.govtalk.gov.uk/CM/envelope http://xmlgw.companieshouse.gov.uk/v1-0/schema/Egov_ch-v2-0.xsd" xmlns="http://www.govtalk.gov.uk/CM/envelope" xmlns:dsig="http://www.w3.org/2000/09/xmldsig#" xmlns:gt="http://www.govtalk.gov.uk/schemas/govtalk/core" If we do not put these references into the XSLT template then  the XslCompiledTransform object will not be able to find the relevant tags. Deserialization is a fairly simple activity. encoder = new ASCIIEncoding(); ms = new MemoryStream(encoder.GetBytes(statusXML)); serializer = new XmlSerializer(typeof(GovTalkStatus)); xmlTextWriter = new XmlTextWriter(ms, Encoding.ASCII); messageStatus = (GovTalkStatus)serializer.Deserialize(ms);   We set up a serialization object using the object type containing the error state and pass to it the results of a transformation between the XSLT above and the GovTalk response. Now we have an object containing any error state, and the error message. All we need to do is check the status. If there is an error then we can flag an error. 
If not then  we extract the results and pass that as an object back to the calling function. We go this by guess what – defining an XSLT template for the result and using that to create an Xml Stream which can be deserialized into a .Net object. In this instance the XSLT to create the result of a Company Number Search is; <?xml version="1.0" encoding="us-ascii"?> <xsl:stylesheet version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform" xmlns:ev="http://www.govtalk.gov.uk/CM/envelope" xmlns:sch="http://xmlgw.companieshouse.gov.uk/v1-0/schema" exclude-result-prefixes="ev"> <xsl:template match="/"> <CompanySearchResult xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:xsd="http://www.w3.org/2001/XMLSchema"> <CompanyNumber> <xsl:value-of select="ev:GovTalkMessage/ev:Body/sch:NumberSearch/sch:CoSearchItem/sch:CompanyNumber"/> </CompanyNumber> <CompanyName> <xsl:value-of select="ev:GovTalkMessage/ev:Body/sch:NumberSearch/sch:CoSearchItem/sch:CompanyName"/> </CompanyName> </CompanySearchResult> </xsl:template> </xsl:stylesheet> and the object definition is; using System; using System.Collections.Generic; using System.Linq; using System.Web; namespace CompanyHub.Services { public class CompanySearchResult { public CompanySearchResult() { CompanyNumber = String.Empty; CompanyName = String.Empty; } public String CompanyNumber { get; set; } public String CompanyName { get; set; } } } Our entire code to make calls to send a request, and interpret the results are; String request = String.Empty; String response = String.Empty; GovTalkStatus status = null; fault = null; try { using (CompanyNumberSearchRequest requestObj = new CompanyNumberSearchRequest()) { requestObj.PartialCompanyNumber = CompanyNumber; request = Toolbox.CreateRequest(requestObj, "CompanyNumberSearch.xsl"); response = Toolbox.SendGovTalkRequest(request); status = Toolbox.GetMessageStatus(response); if (status.Status.ToLower() == "error") { fault = new HubFault() { Message = status.Text }; } else { Object obj = Toolbox.GetGovTalkResponse(response, "CompanyNumberSearchResult.xsl", typeof(CompanySearchResult)); } } } catch (FaultException<ArgumentException> ex) { fault = new HubFault() { FaultType = ex.Detail.GetType().FullName, Message = ex.Detail.Message }; } catch (System.Exception ex) { fault = new HubFault() { FaultType = ex.GetType().FullName, Message = ex.Message }; } finally { } Wrap up So there we have it – a reusable set of functions to send and interpret XML results from an internet based service. The code is reusable with a little change with any service which uses XML as a transport mechanism – and as for the Companies House GovTalk service all I need to do is create various objects for the result and message sent and the relevent XSLT files. I might need minor changes for other services but something like 70-90% will be exactly the same.
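As a side note on the listings above: two pieces referenced in the code are not shown in the post – the Toolbox.ReadFully helper used when reading the response stream, and the GovTalkStatus class that the status XSLT output is deserialized into. The sketch below is only my guess at minimal implementations, based on how they are used and on the element names produced by the status stylesheet; it is not code from the original project.

using System;
using System.IO;

namespace CompanyHub.Services
{
    // In the real project ReadFully would sit inside the existing static Toolbox class;
    // it is wrapped in its own helper class here only so this sketch stands alone.
    public static class StreamHelpers
    {
        // Drain a stream to the end and return its content as a byte array.
        public static byte[] ReadFully(Stream input)
        {
            using (MemoryStream buffer = new MemoryStream())
            {
                byte[] chunk = new byte[4096];
                int bytesRead;
                while ((bytesRead = input.Read(chunk, 0, chunk.Length)) > 0)
                {
                    buffer.Write(chunk, 0, bytesRead);
                }
                return buffer.ToArray();
            }
        }
    }

    // Deserialization target for the GovTalkStatus document produced by the status XSLT
    // (elements: Status, Text, Location, Number, Type). The property types are assumed.
    public class GovTalkStatus
    {
        public String Status { get; set; }
        public String Text { get; set; }
        public String Location { get; set; }
        public String Number { get; set; }
        public String Type { get; set; }
    }
}

The HubFault type used when reporting errors is similarly assumed to be a simple class with FaultType and Message string properties; its definition is not included in the post either.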

    Read the article

  • PhotoShop CC install: error: could not update photoshop

    - by John Sonderson
    I have renamed the folder in which my previous version of Photoshop CC was installed and then installed the latest version of Photoshop CC using the Creative Cloud application. However, once Creative Cloud finished downloading and installing Photoshop CC I got the following error (sorry, I have a localized version of Windows so I cannot properly translate the error text, but the error code I get is U46M1P6): Has anyone else had this error, and is it OK, or is something wrong that I need to fix? I am curious exactly which features I will be missing due to this error message. I tried restarting the download and install process, but I get error message U44M1P7. Can someone please explain to me why it is not working? Thanks.

    Read the article

  • OpenVPN Problems, connecting with Windows OpenVPN client to Linux OpenVPN

    - by Filip Ekberg
    After following this guide and connecting to the VPN server, I get the following error: Sat Mar 06 19:43:08 2010 us=127000 NOTE: failed to obtain options consistency info from peer -- this could occur if the remote peer is running a version of OpenVPN before 1.5-beta8 or if there is a network connectivity problem, and will not necessarily prevent OpenVPN from running (0 bytes received from peer, 0 bytes authenticated data channel traffic) -- you can disable the options consistency check with --disable-occ. I am using Arch Linux and installed OpenVPN with Pacman. I want to achieve the following: connect to the VPN server and be able to route certain made-up hosts through it. Is this possible? openvpn --version gives me the following openvpn --version OpenVPN 2.1.1 i686-pc-linux-gnu [SSL] [LZO2] [EPOLL] built on Jan 31 2010 Originally developed by James Yonan Copyright (C) 2002-2009 OpenVPN Technologies, Inc. <[email protected]> Suggestions?

    Read the article

  • Accessing Microsoft Office Web Apps (specifically OneNote) from iPhone or Android

    - by Howiecamp
    I'm trying to access some OneNote notebooks from the iPhone and Android browsers. On the iPhone I can't seem to open a notebook and see its text. When I click on the notebook it wants to download it, and it's a file type that the browser can't understand. When using Android the browser crashes every time I click on the notebook. Interestingly, this happens no matter which browser I use on Android - the stock browser, Skyfire and Dolphin all crash in the same way at this point. I've also tried clicking on the "desktop version" link, which forces the app to render the desktop version rather than the mobile version, but same result. Finally I tried changing Dolphin and Skyfire to emulate iPhone in Firefox but again got the same results. Has anyone got this working?

    Read the article

  • How to disable SSLCompression on Apache httpd 2.2.15?

    - by Stefan Lasiewski
    I read about the CRIME attack against TLS Compression (CRIME is a successor to the BEAST attack against ssl & tls), and I want to protect my webservers against this attack by disabling SSL Compression, which was added to Apache 2.2.22 (See Bug 53219). I am running Scientific Linux 6.1, which ships with httpd-2.2.15. Security fixes for upstream versions of httpd 2.2 should be backported to this version. # rpm -q httpd httpd-2.2.15-15.sl6.1.x86_64 # httpd -V Server version: Apache/2.2.15 (Unix) Server built: Feb 14 2012 09:47:14 Server's Module Magic Number: 20051115:24 Server loaded: APR 1.3.9, APR-Util 1.3.9 Compiled using: APR 1.3.9, APR-Util 1.3.9 I tried SSLCompression off in my configuration, but that results in the following error message: # /etc/init.d/httpd restart Stopping httpd: [ OK ] Starting httpd: Syntax error on line 147 of /etc/httpd/httpd.conf: Invalid command 'SSLCompression', perhaps misspelled or defined by a module not included in the server configuration [FAILED] Is it possible to disable SSLCompression with this version of Apache Webserver?

    Read the article

  • CUDA on GeForce 8600GT

    - by viswanathgs
    I have the CUDA driver, toolkit and SDK installed on Ubuntu 10.04. I'm using an NVIDIA GeForce 8600 GT card. The official website says my card is CUDA supported, but on running the deviceQuery sample that comes with the CUDA SDK, I'm getting the following output. ./deviceQuery Starting... CUDA Device Query (Runtime API) version (CUDART static linking) There is no device supporting CUDA deviceQuery, CUDA Driver = CUDART, CUDA Driver Version = 134566327, CUDA Runtime Version = 0.0, NumDevs = 0 PASSED Press <Enter> to Quit... So, is the GeForce 8600 GT actually not CUDA supported, or is the problem with something else? Thanks.

    Read the article

  • Pre-compiled Iperf 2.x binary for win32?

    - by Ryan Bolger
    I'd like to do some network testing on Windows using Iperf. The latest on sourceforge appears to be 2.0.4. However, it's only available as source to be compiled. I attempted to do some google searching for a pre-compiled version, but all I could find were some links to 1.x stuff. Admittedly, the 1.x version I found does seem to work and I could likely continue using it without issue. But I've got the itch that says I need the latest version and setting up a build VM and dealing with inevitable compile issues doesn't sound like a whole lot of fun. So I figured I'd ask here if anyone knows where to find pre-compiled Iperf 2.x binaries for Windows.

    Read the article

  • Microsoft Word 2007 opening all docs with field codes toggled off

    - by WilliamKF
    Recently, something changed with my Microsoft Word 2007 installation/preferences on Windows XP, such that whenever I open a Word document, all the field codes are displayed raw instead of as their expanded values. For example, my header reads: My Name { TITLE \* MERGEFORMAT } Version { REVNUM \* MERGEFORMAT } But if I copy and paste it here, it reads expanded: My Name My Doc Title Version 42 I expect to see the expanded version directly inside Word. I can work around this by right-clicking on each such field and choosing Toggle Field Codes; however, I never had to do that before, as previously the document opened with all such field codes expanded. I searched the Word options dialog, but could not find anything that appeared relevant. Please suggest how to restore the old behavior.

    Read the article

  • iis7 .net webservice 404 error

    - by user11457
    I have a web service /test/Service1.asmx in the same folder as a page /test/test.aspx. The page works fine, but I get the message below for the service in the same location. I know the file is there and the URL is correct, and I added the script module and managed handler as well. If anyone knows what I'm missing here I'd appreciate it. Server Error in '/' Application. The resource cannot be found. Description: HTTP 404. The resource you are looking for (or one of its dependencies) could have been removed, had its name changed, or is temporarily unavailable. Please review the following URL and make sure that it is spelled correctly. Requested URL: /test/Service1.asmx Version Information: Microsoft .NET Framework Version:2.0.50727.4200; ASP.NET Version:2.0.50727.4016

    Read the article

  • Can't restore backup from SQL Server 2008 R2 to SQL Server 2005 or 2008

    - by Erick
    Hi everyone, I'm trying to get a backup from SQL Server 2008 R2 restored to SQL Server 2008, but when we try to do the restore we get this: The database was backed up on a server running version 10.50.1092. That version is incompatible with this server, which is running version 10.00.2531. Either restore the database on a server that supports the backup, or use a backup that is compatible with this server. I can use the script wizard to generate a script, but that takes over an hour to run. I also tried just exporting the data from server to server, but it had issues with the primary keys/identity columns. I will be running into this issue with several other clients so any help you could offer about how to get around this would be great. Thanks for your help!

    Read the article

  • Re-using SPSS 18 Trial in OSX?

    - by Telos
    A friend of mine needs to use SPSS for a project she is working on, and would normally have access to it in her school's library. Unfortunately she's out of town for the next couple of weeks, and she's already gone through the trial version once. Is there some way she would be able to re-use the trial version? I've already had her try running a program meant to remove all files associated with an application, but that didn't work; it still says she's used up her trial time. Her idea is to install a trial for an earlier version (like SPSS 17 instead of 18), but we're not sure where she would find that either. Any thoughts on how to get her up and running?

    Read the article

  • How to upgrade ClamAV on Ubuntu Hardy Heron 8.04 LTS?

    - by Jordan Lev
    I'm running a server on Ubuntu Hardy Heron 8.04 LTS, and when I installed ClamAV via aptitude, it installed version 0.94. That version has now been EOL'ed, but when I run "aptitude upgrade", it doesn't update ClamAV to the more recent version (0.96). I then followed these instructions on Installing ClamAV from the PPA, but when I did that, I get a message saying "The following packages have been kept back: ... clamav clamav-base clamav-daemon clamav-freshclam ..." Does anyone know how to get Ubuntu 8.04 to do this update via aptitude or apt-get (I'm hoping to avoid having to compile from source, etc.)?

    Read the article

  • Install Glibc2 using Yum

    - by Nerrve
    I'm trying to install glibc2 version 2.11 that's needed for openoffice 3.4 https://issues.apache.org/ooo/show_bug.cgi?id=119393 but i can't seem to find the dependency with yum. I already have the following dependencies installed. glibc.i686 2.5-49.el5_5.7 installed glibc.x86_64 2.5-49.el5_5.7 installed glibc-common.x86_64 2.5-49.el5_5.7 installed glibc-devel.x86_64 2.5-49.el5_5.7 installed glibc-headers.x86_64 2.5-49.el5_5.7 installed libc-client.x86_64 2004g-2.2.1 installed and glibc.i686 2.5-81.el5_8.2 updates glibc.x86_64 2.5-81.el5_8.2 updates glibc-common.x86_64 2.5-81.el5_8.2 updates glibc-devel.i386 2.5-81.el5_8.2 updates glibc-devel.x86_64 2.5-81.el5_8.2 updates glibc-headers.x86_64 2.5-81.el5_8.2 updates glibc-utils.x86_64 2.5-81.el5_8.2 updates I ran the following to get the version but it shows something different [root@***** /]# ./lib64/libc.so.6 GNU C Library stable release version 2.5, by Roland McGrath et al. Can someone please help? Thanks! EDIT : I'm using CentOS 2.6.18-128.1.10.el5

    Read the article

  • Sign out of Windows Live Messenger remotely

    - by justinhj
    I just upgraded Windows Live Messenger at home. I'm logged into my machine at work so I have a Live session active there too. Now the fun part: this new version of Messenger is signing me out after about 2 minutes, saying "You were signed out from here because you signed in to a version of Messenger that doesn't let you sign in at more than one place". OK, so I went into the options on my home machine and selected "Sign me out at all other locations". Is there another way I can force my office machine to log out remotely? Either this option does not work, or the machine in my office just keeps reconnecting. Version 2009 14.0.8089.726 EDIT: Actually this problem went away after a few hours; I guess some kind of server-side timeout kicked in.

    Read the article

  • Photoshop CS6 beta - functionality

    - by Biker John
    Is the Photoshop CS6 beta full featured, or is it just a preview of SOME of the new functions? I want to make sure before I remove my CS5 version. There are two statements that seem to contradict each other, which is making me unsure (as written on the official CS6 beta download page): Explore Photoshop CS6 beta for a sneak preview of some of the incredible performance enhancements, imaging magic, and creativity tools we are working on... AND Photoshop CS6 beta includes all the features in Photoshop CS6 and Photoshop CS6 Extended. Take this opportunity to try out the 3D image editing and quantitative image analysis capabilities of Photoshop Extended*, but note that—while these features will be included in the shipping version of Photoshop CS6 Extended—they will not be included in the shipping version of Photoshop CS6...

    Read the article
