Search Results

Search found 10931 results on 438 pages for 'struts config'.

Page 87/438

  • ZFS Basics

    - by user12614620
    Stage 1 basics: creating a pool # zpool create $NAME $REDUNDANCY $DISK1_0..N [$REDUNDANCY $DISK2_0..N]... $NAME = name of the pool you're creating. This will also be the name of the first filesystem and, by default, be placed at the mountpoint "/$NAME" $REDUNDANCY = either mirror or raidzN, and N can be 1, 2, or 3. If you leave N off, then it defaults to 1. $DISK1_0..N = the disks assigned to the pool. Example 1: zpool create tank mirror c4t1d0 c4t2d0 name of pool: tank redundancy: mirroring disks being mirrored: c4t1d0 and c4t2d0 Capacity: size of a single disk Example 2: zpool create tank raidz c4t1d0 c4t2d0 c4t3d0 c4t4d0 c4t5d0 Here the redundancy is raidz, and there are five disks, in a 4+1 (4 data, 1 parity) config. This means that the capacity is 4 times the disk size. If the command used "raidz2" instead, then the config would be 3+2. Likewise, "raidz3" would be a 2+3 config. Example 3: zpool create tank mirror c4t1d0 c4t2d0 mirror c4t3d0 c4t4d0 This is the same as the first mirror example, except there are two mirrors now. ZFS will stripe data across both mirrors, which means that writing data will go a bit faster. Note: you cannot create a mirror of two raidzs. You can create a raidz of mirrors, but to do that requires trickery.
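
    A quick way to confirm the resulting layout and capacity after any of these commands, using the pool name from the examples above:

      # verify the vdev layout, redundancy and usable space
      zpool status tank
      zpool list tank
      zfs list tank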

    Read the article

  • NFS users getting a laggy GUI experience

    - by elzilrac
    I am setting up a system (Ubuntu 12.04) that uses LDAP, PAM, and autofs to load users and their home folders from a remote server. One of the options for login is sitting down at the machine and starting a GUI session. Programs such as Chromium (browser) that perform many read/write operations in the ~/.cache and ~/.config directories are slowing down the GUI experience as well as putting strain on the NFS server, which is causing other users to have problems. Ubuntu has the handy-dandy XDG_CONFIG_HOME and XDG_CACHE_HOME variables that can be set to change the default location of .cache and .config from the home folder to somewhere else. There are several places to set them, but most of them are not optimal. /etc/environment pros: will work across all shells cons: cannot use variables like $USER, so you can't give users different locations for .cache and .config; every user's new location would be the same directory. /etc/bash.bashrc pros: $USER works, so you can place them in different folders cons: only gets run for bash-compatible shells ~/.pam_environment pros: works regardless of shell cons: cannot use system variables (like $USER), has its own syntax, and has to be created for every user
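
    One possible middle ground, sketched under the assumption that a local, writable path such as /var/cache/local-home exists on each workstation: a script in /etc/profile.d is picked up for login sessions on Ubuntu and can still use $USER, so each user gets a private local cache without touching every home directory.

      # /etc/profile.d/xdg-local.sh  (hypothetical path; point it at a disk local to the machine)
      export XDG_CACHE_HOME="/var/cache/local-home/$USER/.cache"
      export XDG_CONFIG_HOME="/var/cache/local-home/$USER/.config"
      mkdir -p "$XDG_CACHE_HOME" "$XDG_CONFIG_HOME"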

    Read the article

  • Dark NetBeans

    - by Geertjan
    Let's make NetBeans IDE look like this. Not saying it's a nice color or anything, just that it's possible to do so: I changed the coloring in the Java editor by going to Tools | Options, then chose "Fonts & Colors", then selected the "Norway Today" profile and changed the background setting to Dark Gray. Next, I put this themes.xml file into the "config" folder of the NetBeans IDE user directory, which you can identify as such by going to Help | About in the IDE. Go to the exact location defined by "User directory" in Help | About, and then go to the "config" folder within that folder: The "config" folder of the user directory is the readable/writable root of the NetBeans IDE virtual filesystem. If a themes.xml file is found there, it is used, as described here. Then, in netbeans.conf file, which is not in the NetBeans user directory but in the NetBeans installation directory, within its "etc" folder, I added the following to "netbeans_default_options": -J-Dnetbeans.useTheme=true --laf Metal The first of these enables usage of the themes.xml file, i.e., it notifies NetBeans IDE at startup to load the themes.xml file and to apply the content to the relevant UI components, while the second is needed because most/all of the themes only work if you're using the Metal Look and Feel. Note: I must add that in most cases, whatever it is you're trying to achieve via a themes.xml file can probably be achieved in a different, and better, way. The themes.xml mechanism has been there forever, but is not actively supported or tested, though it may work for the specific thing you're trying to do anyway. For example, if you're trying to change the background color of a TopComponent, use the paintComponent method of the TopComponent instead of using a themes.xml file.
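
    For reference, the two pieces described above fit together roughly like this (a sketch; the user directory and installation directory vary per machine and are shown in Help | About):

      # <userdir>/config/themes.xml          <- the theme definitions loaded at startup
      # <install>/etc/netbeans.conf          <- append to the existing netbeans_default_options value:
      netbeans_default_options="<existing options> -J-Dnetbeans.useTheme=true --laf Metal"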

    Read the article

  • New Solaris 11.2 beta features: SMF stencils

    - by user13366125
    As much as there is often a lot of discussion about keeping configuration items inside the SMF repository (like the hostname), it brings an important advantage: it introduces the concept of dependencies to configuration changes. Which services have to be restarted when I change a configuration item? Do you remember all the services that depend on the hostname and need a restart after changing it? SMF solves this by putting the information about dependencies into its configuration; you define it in the manifests. However, no matter how much configuration you put into SMF, most applications still insist on getting their configuration from traditional configuration files, like resolv.conf for the resolver or puppet.conf for Puppet. So you need a way to take the information out of the SMF repository and generate a config file from it. In the past, the way to do so was some scripting inside the start method that generated the config file before the service started. Solaris 11.2 offers a new feature in this area: a generic method that enables you to create config files from SMF properties. It's called SMF stencils. (read more)
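
    For context, this is the kind of round trip that stencils are meant to automate: reading a property with svccfg, changing it, and refreshing the service so the dependent file is regenerated. A sketch using the DNS client service (standard Solaris 11 service and property names, not stencil syntax):

      # inspect and change an SMF-held configuration value, then refresh the service
      svccfg -s svc:/network/dns/client listprop config
      svccfg -s svc:/network/dns/client setprop config/nameserver = net_address: 192.168.1.1
      svcadm refresh svc:/network/dns/client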

    Read the article

  • Ubuntu 13.04 Sound Problem after following weird commands

    - by user206356
    After launching a few commands: echo autospawn = no >> ~/.config/pulse/client.conf #use ~/.pulse/client.conf on Ubuntu <= 12.10 killall pulseaudio $LANG=C pulseaudio -vvvv --log-time=1 > ~/pulseverbose.log 2>&1 my sound does not work (just with the speakers; with headphones it works, but I cannot change the volume). The sound icon in the top right corner shows a speaker with a single non-continuous line. I cannot change the volume; it is frozen. There may be an extremely low sound output (I hear something, but I am not sure...). It does not show a single output device as available, not even the "dummy" one. I have tried resetting PulseAudio and ALSA, removing, purging, and reinstalling them, without success. EDIT: I have tried launching pulseaudio via the terminal. It worked :D However, I am very surprised that it does not automatically start when the computer boots. Any ideas? Here is the console output: W: [pulseaudio] authkey.c: Failed to open cookie file '/home/simonm/.config/pulse/cookie': No such file or directory W: [pulseaudio] authkey.c: Failed to load authorization key '/home/simonm/.config/pulse/cookie': No such file or directory W: [pulseaudio] authkey.c: Failed to open cookie file '/home/simonm/.pulse-cookie': No such file or directory W: [pulseaudio] authkey.c: Failed to load authorization key '/home/simonm/.pulse-cookie': No such file or directory
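
    Since the autospawn line was the only persistent change those commands made, a reasonable first step (a sketch; the path is the one from the question) is to undo it and restart PulseAudio:

      # remove the autospawn override and bring PulseAudio back up
      sed -i '/autospawn = no/d' ~/.config/pulse/client.conf
      pulseaudio --kill 2>/dev/null
      pulseaudio --start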

    Read the article

  • Wcf Facility Metadata publishing for this service is currently disabled.

    - by cvista
    Hey I'm trying to connect to my Wcf service which is configured using castles wcf facility. When I go to the service in a browser i get: Metadata publishing for this service is currently disabled. Which lists a load of instructions which i cant do because the configuration isnt in the web.config. when I try to connect using VS/add service reference i get: The HTML document does not contain Web service discovery information. Metadata contains a reference that cannot be resolved: 'http://s.ibzstar.com/userservices.svc'. Content Type application/soap+xml; charset=utf-8 was not supported by service http://s.ibzstar.com/userservices.svc. The client and service bindings may be mismatched. The remote server returned an error: (415) Cannot process the message because the content type 'application/soap+xml; charset=utf-8' was not the expected type 'text/xml; charset=utf-8'.. If the service is defined in the current solution, try building the solution and adding the service reference again. Anyone know what I need to do to get this working? The end client is an iPhone app written using Monotouch if that matters - so no castle windsor on the client side. cheers w:// Here's the Windsor.config from the service: <?xml version="1.0" encoding="utf-8" ?> <configuration> <components> <component id="eventServices" service="IbzStar.Domain.IEventServices, IbzStar.Domain" type="IbzStar.Domain.EventServices, IbzStar.Domain" lifestyle="transient"> </component> <component id="userServices" service="IbzStar.Domain.IUserServices, IbzStar.Domain" type="IbzStar.Domain.UserServices, IbzStar.Domain" lifestyle="transient"> </component> The Web.config section: <system.serviceModel> <serviceHostingEnvironment aspNetCompatibilityEnabled="true"/> <services> </services> <behaviors> <serviceBehaviors> <behavior name="IbzStar.WebServices.Service1Behavior"> <!-- To avoid disclosing metadata information, set the value below to false and remove the metadata endpoint above before deployment --> <serviceMetadata httpGetEnabled="true"/> <!-- To receive exception details in faults for debugging purposes, set the value below to true. Set to false before deployment to avoid disclosing exception information --> <serviceDebug includeExceptionDetailInFaults="false"/> </behavior> </serviceBehaviors> </behaviors> My App_Start contains this: Container = new WindsorContainer(new XmlInterpreter(new ConfigResource())) .AddFacility<WcfFacility>() .Install(Configuration.FromXmlFile("Windsor.config")); As for the client config - I'm using the wizard to add the service.
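
    One thing worth checking (a sketch, not a confirmed fix): because the service is registered through the WCF Facility rather than under <services>, the named behavior above is never matched to it. On .NET 4.0 a behavior with no name attribute acts as the default for every service, which re-enables the metadata page:

      <serviceBehaviors>
        <!-- no name attribute: applies to all services, including facility-hosted ones (.NET 4.0+) -->
        <behavior>
          <serviceMetadata httpGetEnabled="true"/>
          <serviceDebug includeExceptionDetailInFaults="false"/>
        </behavior>
      </serviceBehaviors>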

    Read the article

  • Problem using ‘useLegacyV2RuntimeActivationPolicy’ & supportedRuntime in an application

    - by Notre
    Hello, I've modified a couple of different applications' .config file like this: <startup useLegacyV2RuntimeActivationPolicy="true"> <supportedRuntime version="v4.0"/> </startup> When I did this for devenv.exe.config (VS 2005 - don't ask :) ), things work great - most of the Visual Studio used .NET 2.0 but I was able to make use of an assembly targeting .NET 4.0 framework. I tried to do the same thing for a custom .exe, which happens to be based on MS CAB (slightly modified) and has a hybrid mix of WPF and WinForms content. As soon as I modified this application's app config file, I started getting this exception, sometime during application startup: The Undo operation encountered a context that is different from what was applied in the corresponding Set operation. The possible cause is that a context was Set on the thread and not reverted(undone). System.InvalidOperationException: The Undo operation encountered a context that is different from what was applied in the corresponding Set operation. The possible cause is that a context was Set on the thread and not reverted(undone). There's a big long stack trace that doesn't show anything in my application code directly (just a bunch of MS assemblies). If I modify the application's .config file to this: <startup useLegacyV2RuntimeActivationPolicy="true"> </startup> i.e.I remove the supportedRuntime element, then the application doesn't throw this exception. But when I go to the point in my code where I try to load my .NET 4 assembly, if fails with this: System.BadImageFormatException: Could not load file or assembly '' or one of its dependencies. This assembly is built by a runtime newer than the currently loaded runtime and cannot be loaded. I guess this is expected. I have two questions: 1) Any idea why I'm getting the System.InvalidOperationException exception when I modify this application's configuration file to include the supportedRuntime element, adding .NET 4 support and any suggestions on what I can do about it? 2) If the answer is "no idea why/don't know what you can do about it", then is possible for my .NET 3.5 SP1 code (C#) to provide more fine grain support for conditionally adding .NET 4 runtime support for a certain assembly(ies) without converting my entire application to target .NET 4, or without using the declarative config file approach? At some point I would convert the entire application to target .NET 4, but for the short term this is daunting task and I'm hope for some short term solution/hack. Thank you very much for any advice you can give!

    Read the article

  • Rails 3 server won't start in cucumber environment

    - by James
    Hi I'm trying to start my rails 3 app in the same environment that cucumber uses because this is necessary for a particular test. When I try to start the server via rails server -e cucumber I get this error: /home/james/rails-projs/fact/config/environments/cucumber.rb:7: undefined local variable or method `config' for main:Object (NameError) from /home/james/.bundle/ruby/1.8/bundler/gems/rails-16a5e918a06649ffac24fd5873b875daf66212ad-master/activesupport/lib/active_support/dependencies.rb:209:in `require' from /home/james/.bundle/ruby/1.8/bundler/gems/rails-16a5e918a06649ffac24fd5873b875daf66212ad-master/activesupport/lib/active_support/dependencies.rb:209:in `require' from /home/james/.bundle/ruby/1.8/bundler/gems/rails-16a5e918a06649ffac24fd5873b875daf66212ad-master/activesupport/lib/active_support/dependencies.rb:195:in `load_dependency' from /home/james/.bundle/ruby/1.8/bundler/gems/rails-16a5e918a06649ffac24fd5873b875daf66212ad-master/activesupport/lib/active_support/dependencies.rb:523:in `new_constants_in' from /home/james/.bundle/ruby/1.8/bundler/gems/rails-16a5e918a06649ffac24fd5873b875daf66212ad-master/activesupport/lib/active_support/dependencies.rb:195:in `load_dependency' from /home/james/.bundle/ruby/1.8/bundler/gems/rails-16a5e918a06649ffac24fd5873b875daf66212ad-master/activesupport/lib/active_support/dependencies.rb:209:in `require' from /home/james/.bundle/ruby/1.8/bundler/gems/rails-16a5e918a06649ffac24fd5873b875daf66212ad-master/railties/lib/rails/application/bootstrap.rb:10 from /home/james/.bundle/ruby/1.8/bundler/gems/rails-16a5e918a06649ffac24fd5873b875daf66212ad-master/railties/lib/rails/initializable.rb:25:in `instance_exec' from /home/james/.bundle/ruby/1.8/bundler/gems/rails-16a5e918a06649ffac24fd5873b875daf66212ad-master/railties/lib/rails/initializable.rb:25:in `run' from /home/james/.bundle/ruby/1.8/bundler/gems/rails-16a5e918a06649ffac24fd5873b875daf66212ad-master/railties/lib/rails/initializable.rb:55:in `run_initializers' from /home/james/.bundle/ruby/1.8/bundler/gems/rails-16a5e918a06649ffac24fd5873b875daf66212ad-master/railties/lib/rails/initializable.rb:54:in `each' from /home/james/.bundle/ruby/1.8/bundler/gems/rails-16a5e918a06649ffac24fd5873b875daf66212ad-master/railties/lib/rails/initializable.rb:54:in `run_initializers' from /home/james/.bundle/ruby/1.8/bundler/gems/rails-16a5e918a06649ffac24fd5873b875daf66212ad-master/railties/lib/rails/application.rb:109:in `initialize!' 
from /home/james/.bundle/ruby/1.8/bundler/gems/rails-16a5e918a06649ffac24fd5873b875daf66212ad-master/railties/lib/rails/application.rb:81:in `send' from /home/james/.bundle/ruby/1.8/bundler/gems/rails-16a5e918a06649ffac24fd5873b875daf66212ad-master/railties/lib/rails/application.rb:81:in `method_missing' from /home/james/rails-projs/beta/config/environment.rb:6 from /home/james/.bundle/ruby/1.8/bundler/gems/rails-16a5e918a06649ffac24fd5873b875daf66212ad-master/activesupport/lib/active_support/dependencies.rb:209:in `require' from /home/james/.bundle/ruby/1.8/bundler/gems/rails-16a5e918a06649ffac24fd5873b875daf66212ad-master/activesupport/lib/active_support/dependencies.rb:209:in `require' from /home/james/.bundle/ruby/1.8/bundler/gems/rails-16a5e918a06649ffac24fd5873b875daf66212ad-master/activesupport/lib/active_support/dependencies.rb:195:in `load_dependency' from /home/james/.bundle/ruby/1.8/bundler/gems/rails-16a5e918a06649ffac24fd5873b875daf66212ad-master/activesupport/lib/active_support/dependencies.rb:523:in `new_constants_in' from /home/james/.bundle/ruby/1.8/bundler/gems/rails-16a5e918a06649ffac24fd5873b875daf66212ad-master/activesupport/lib/active_support/dependencies.rb:195:in `load_dependency' from /home/james/.bundle/ruby/1.8/bundler/gems/rails-16a5e918a06649ffac24fd5873b875daf66212ad-master/activesupport/lib/active_support/dependencies.rb:209:in `require' from config.ru:3 from /usr/lib/ruby/gems/1.8/gems/rack-1.1.0/lib/rack/builder.rb:46:in `instance_eval' from /usr/lib/ruby/gems/1.8/gems/rack-1.1.0/lib/rack/builder.rb:46:in `initialize' from config.ru:1:in `new' from config.ru:1 I'd appreciate any help.
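
    The NameError on line 7 of cucumber.rb suggests the environment file still uses the Rails 2 style of bare config.* calls. In Rails 3 each file under config/environments has to wrap its settings in the application's configure block; a sketch (the Fact module name is guessed from the project path and must match the one in config/application.rb):

      # config/environments/cucumber.rb
      Fact::Application.configure do
        # move the existing config.* settings inside this block
        config.cache_classes = true
      end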

    Read the article

  • Mercurial client error 255 and HTTP error 404 when attempting to push large files to server

    - by coderunner
    Problem: When attempting to push a changeset that contains 6 large files (.exe, .dmg, etc) to my remote server my client (MacHG) is reporting the error: "Error During Push. Mercurial reported error number 255: abort: HTTP Error 404: Not Found" What does the error even mean?! The only thing unique (that I can tell) about this commit is the size, type, and filenames of the files. How can I determine which exact file within the changeset is failing? How can I delete the corrupt changeset from the repository? Someone reported using "mq" extensions, but it looks overly complicated for what I'm trying to achieve. Background: I can push and pull the following: source files, directories, .class files and a .jar file to and from the server, using both MacHG and toirtoise HG. I successfully committed to my local repository the addition for the first time the 6 large .exe, .dmg etc installer files (about 130Mb total). In the following commit to my local repository, I removed ("untracked" / forget) the 6 files causing the problem, however the previous (failing) changeset is still queued to be pushed to the server (i.e. my local host is trying to push the "add" and then the "remove" to the remote server - and keep aligned with the "keep everything in history" philosophy of the source control system). I can commit .txt .java files etc using TortoiseHG from Windows PCs. I haven't actually testing committing or pushing the same large files using TortoiseHG. Please help! Setup: Client applications = MacHG v0.9.7 (SCM 1.5.4), and TortoiseHG v1.0.4 (SCM 1.5.4) Server = HTTPS, IIS7.5, Mercurial 1.5.4, Python 2.6.5, setup using these instructions: http://www.jeremyskinner.co.uk/mercurial-on-iis7/ In IIS7.5 the CGI handler is configured to handle ALL verbs (not just GET, POST and HEAD). My hgweb.cgi file on the server is as follows: #!/usr/bin/env python # # An example hgweb CGI script, edit as necessary # Path to repo or hgweb config to serve (see 'hg help hgweb') #config = "/path/to/repo/or/config" # Uncomment and adjust if Mercurial is not installed system-wide: #import sys; sys.path.insert(0, "/path/to/python/lib") # Uncomment to send python tracebacks to the browser if an error occurs: #import cgitb; cgitb.enable() from mercurial import demandimport; demandimport.enable() from mercurial.hgweb import hgweb, wsgicgi application = hgweb('C:\inetpub\wwwroot\hg\hgweb.config') wsgicgi.launch(application) My hgweb.config file on the server is as follows: [collections] C:\Mercurial Repositories = C:\Mercurial Repositories [web] baseurl = /hg allow_push = usernamea allow_push = usernameb
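
    A 404 on a push that only differs from working pushes by payload size often points at IIS request limits rather than Mercurial itself. A sketch of the relevant fragment for the hg site's web.config (the 500 MB value is an example; the IIS default is roughly 30 MB):

      <system.webServer>
        <security>
          <requestFiltering>
            <requestLimits maxAllowedContentLength="524288000" />
          </requestFiltering>
        </security>
      </system.webServer>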

    Read the article

  • Including configuration files while compiling a Flex application with MXMLC

    - by Daniel
    Hello there, I'm using: - Flex SDK 3.5.0 - Parsley 2.2.2. - Flash Builder 4 Down in my src folder (which is configured as part of the source path in the Flash Builder), I have a logging.xml which I configure via Parsley: FlexLoggingXmlSupport.initialize(); XmlContextBuilder.build("com/company/product/util/log/logging.xml"); When I run my application through Flash Builder, the XmlContentBuilder seems to locate the logging.xml (the implementation is a regular URLLoader one). When I compile my application using MXMLC (whether in Ant or command-line), and then run the swf, I get the following error: Cause(0): Error loading com/company/product/util/log/logging.xml: Error in URLLoader - cause: Error #2032: Stream Error. URL: file:///C|/workspace/folder01/product/target/com/company/product/util/log/logging.xml - cause: Error #2032: Stream Error. URL: file:///C|/workspace/folder01/product/target/com/company/product/util/log/logging.xml Here is the MXMLC tag in Ant: <mxmlc file="${product.src.dir}/com/company/product/view/Main.mxml" output="${product.target.dir}/${product.release.filename}" keep-generated-actionscript="false"> <load-config filename="${FLEX_HOME}/frameworks/flex-config.xml" /> <!-- source paths --> <source-path path-element="${FLEX_HOME}/frameworks" /> <compiler.source-path path-element="${product.src.dir}" /> <compiler.source-path path-element="${product.locale.dir}/{locale}" /> <compiler.library-path dir="${product.basedir}" append="true"> <include name="libs" /> </compiler.library-path> <warnings>false</warnings> <debug>false</debug> </mxmlc> And here is the command line: \mxmlc.exe -output "C:\temp\Rap.swf" -load-config "C:\Program Files\Adobe\Adobe Flash Builder 4 Plug-in\sdks\3.5.0\frameworks\flex-config.xml" -source-path "C:\Program Files\Adobe\Adobe Flash Builder 4 Plug-in\sdks\3.5.0\frameworks" C:\workspace\folder01\product\src C:\workspace\folder01\product\locale\en_US -library-path+=C:\workspace\folder01\product\libs -file-specs C:\workspace\folder01\product\src\com\company\product\view\main.mxml Now perhaps I don't get this correctly, but as far as I understand the SWF should be compiled with all of the resources in the paths I give MXMLC as source-paths. For some reason it seems that the XML file is not compiled into the SWF, hence the relative path of the XmlContentBuilder isn't located successfully. I could not find any argument to provide the MXMLC with that might solve this. I tried using the -dump-config option with the Flash Builder's compiler, then giving that configuration to MXMLC, but it didn't work either. I tried providing the XmlContentBuilder with an absolute path. That worked fine when I compiled with MXMLC via Ant, but still didn't work when I used MXMLC in the command-line... I'd be happy to be enlightened here, regarding all subjects - using MXMLC, accessing resources with relative paths, configuring logging in Parsley, etc. Many thanks in advance, Daniel
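
    Note that XmlContextBuilder loads the file with URLLoader at runtime, so mxmlc never embeds it in the SWF; one way to keep the relative path working is to copy the XML next to the compiled output as part of the Ant build. A sketch using the property names from the build file above:

      <copy todir="${product.target.dir}/com/company/product/util/log">
        <fileset dir="${product.src.dir}/com/company/product/util/log" includes="logging.xml"/>
      </copy>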

    Read the article

  • JBoss 7.1 started hanging after 6 months of deployment

    - by PVR
    My application is been live from 6 months. The application is host on jboss 7.1 server. From last few days I am finding numerous problem of hanging of jboss server. Though I restart the jboss server again, it does not invoke. I need to restart the server machine itself. Can anyone please let me know what could be the cause of these problems and the workable resolutions or any suggestion ? Kindly dont degrade the question as I am facing a lot problems due to this hanging issue. Also for the information, the application is based on Java, GWT, Hibernate 3. Please find the standalone.xml file in case if it helps. <extensions> <extension module="org.jboss.as.clustering.infinispan"/> <extension module="org.jboss.as.configadmin"/> <extension module="org.jboss.as.connector"/> <extension module="org.jboss.as.deployment-scanner"/> <extension module="org.jboss.as.ee"/> <extension module="org.jboss.as.ejb3"/> <extension module="org.jboss.as.jaxrs"/> <extension module="org.jboss.as.jdr"/> <extension module="org.jboss.as.jmx"/> <extension module="org.jboss.as.jpa"/> <extension module="org.jboss.as.logging"/> <extension module="org.jboss.as.mail"/> <extension module="org.jboss.as.naming"/> <extension module="org.jboss.as.osgi"/> <extension module="org.jboss.as.pojo"/> <extension module="org.jboss.as.remoting"/> <extension module="org.jboss.as.sar"/> <extension module="org.jboss.as.security"/> <extension module="org.jboss.as.threads"/> <extension module="org.jboss.as.transactions"/> <extension module="org.jboss.as.web"/> <extension module="org.jboss.as.webservices"/> <extension module="org.jboss.as.weld"/> </extensions> <system-properties> <property name="org.apache.coyote.http11.Http11Protocol.COMPRESSION" value="on"/> <property name="org.apache.coyote.http11.Http11Protocol.COMPRESSION_MIME_TYPES" value="text/javascript,text/css,text/html,text/xml,text/json"/> </system-properties> <management> <security-realms> <security-realm name="ManagementRealm"> <authentication> <properties path="mgmt-users.properties" relative-to="jboss.server.config.dir"/> </authentication> </security-realm> <security-realm name="ApplicationRealm"> <authentication> <properties path="application-users.properties" relative-to="jboss.server.config.dir"/> </authentication> </security-realm> </security-realms> <management-interfaces> <native-interface security-realm="ManagementRealm"> <socket-binding native="management-native"/> </native-interface> <http-interface security-realm="ManagementRealm"> <socket-binding http="management-http"/> </http-interface> </management-interfaces> </management> <profile> <subsystem xmlns="urn:jboss:domain:logging:1.1"> <console-handler name="CONSOLE"> <level name="INFO"/> <formatter> <pattern-formatter pattern="%d{HH:mm:ss,SSS} %-5p [%c] (%t) %s%E%n"/> </formatter> </console-handler> <periodic-rotating-file-handler name="FILE"> <formatter> <pattern-formatter pattern="%d{HH:mm:ss,SSS} %-5p [%c] (%t) %s%E%n"/> </formatter> <file relative-to="jboss.server.log.dir" path="server.log"/> <suffix value=".yyyy-MM-dd"/> <append value="true"/> </periodic-rotating-file-handler> <logger category="com.arjuna"> <level name="WARN"/> </logger> <logger category="org.apache.tomcat.util.modeler"> <level name="WARN"/> </logger> <logger category="sun.rmi"> <level name="WARN"/> </logger> <logger category="jacorb"> <level name="WARN"/> </logger> <logger category="jacorb.config"> <level name="ERROR"/> </logger> <root-logger> <level name="INFO"/> <handlers> <handler name="CONSOLE"/> <handler name="FILE"/> </handlers> 
</root-logger> </subsystem> <subsystem xmlns="urn:jboss:domain:configadmin:1.0"/> <subsystem xmlns="urn:jboss:domain:datasources:1.0"> <datasources> <datasource jndi-name="java:jboss/datasources/ExampleDS" pool-name="ExampleDS" enabled="true" use-java-context="true"> <connection-url>jdbc:h2:mem:test;DB_CLOSE_DELAY=-1</connection-url> <driver>h2</driver> <security> <user-name>sa</user-name> <password>sa</password> </security> </datasource> <drivers> <driver name="h2" module="com.h2database.h2"> <xa-datasource-class>org.h2.jdbcx.JdbcDataSource</xa-datasource-class> </driver> </drivers> </datasources> </subsystem> <subsystem xmlns="urn:jboss:domain:deployment-scanner:1.1"> <deployment-scanner path="deployments" relative-to="jboss.server.base.dir" scan-interval="5000"/> </subsystem> <subsystem xmlns="urn:jboss:domain:ee:1.0"/> <subsystem xmlns="urn:jboss:domain:ejb3:1.2"> <session-bean> <stateless> <bean-instance-pool-ref pool-name="slsb-strict-max-pool"/> </stateless> <stateful default-access-timeout="5000" cache-ref="simple"/> <singleton default-access-timeout="5000"/> </session-bean> <pools> <bean-instance-pools> <strict-max-pool name="slsb-strict-max-pool" max-pool-size="20" instance-acquisition-timeout="5" instance-acquisition-timeout-unit="MINUTES"/> <strict-max-pool name="mdb-strict-max-pool" max-pool-size="20" instance-acquisition-timeout="5" instance-acquisition-timeout-unit="MINUTES"/> </bean-instance-pools> </pools> <caches> <cache name="simple" aliases="NoPassivationCache"/> <cache name="passivating" passivation-store-ref="file" aliases="SimpleStatefulCache"/> </caches> <passivation-stores> <file-passivation-store name="file"/> </passivation-stores> <async thread-pool-name="default"/> <timer-service thread-pool-name="default"> <data-store path="timer-service-data" relative-to="jboss.server.data.dir"/> </timer-service> <remote connector-ref="remoting-connector" thread-pool-name="default"/> <thread-pools> <thread-pool name="default"> <max-threads count="10"/> <keepalive-time time="100" unit="milliseconds"/> </thread-pool> </thread-pools> </subsystem> <subsystem xmlns="urn:jboss:domain:infinispan:1.2" default-cache-container="hibernate"> <cache-container name="hibernate" default-cache="local-query"> <local-cache name="entity"> <transaction mode="NON_XA"/> <eviction strategy="LRU" max-entries="10000"/> <expiration max-idle="100000"/> </local-cache> <local-cache name="local-query"> <transaction mode="NONE"/> <eviction strategy="LRU" max-entries="10000"/> <expiration max-idle="100000"/> </local-cache> <local-cache name="timestamps"> <transaction mode="NONE"/> <eviction strategy="NONE"/> </local-cache> </cache-container> </subsystem> <subsystem xmlns="urn:jboss:domain:jaxrs:1.0"/> <subsystem xmlns="urn:jboss:domain:jca:1.1"> <archive-validation enabled="true" fail-on-error="true" fail-on-warn="false"/> <bean-validation enabled="true"/> <default-workmanager> <short-running-threads> <core-threads count="50"/> <queue-length count="50"/> <max-threads count="50"/> <keepalive-time time="10" unit="seconds"/> </short-running-threads> <long-running-threads> <core-threads count="50"/> <queue-length count="50"/> <max-threads count="50"/> <keepalive-time time="100" unit="seconds"/> </long-running-threads> </default-workmanager> <cached-connection-manager/> </subsystem> <subsystem xmlns="urn:jboss:domain:jdr:1.0"/> <subsystem xmlns="urn:jboss:domain:jmx:1.1"> <show-model value="true"/> <remoting-connector/> </subsystem> <subsystem xmlns="urn:jboss:domain:jpa:1.0"> <jpa default-datasource=""/> 
</subsystem> <subsystem xmlns="urn:jboss:domain:mail:1.0"> <mail-session jndi-name="java:jboss/mail/Default"> <smtp-server outbound-socket-binding-ref="mail-smtp"/> </mail-session> </subsystem> <subsystem xmlns="urn:jboss:domain:naming:1.1"/> <subsystem xmlns="urn:jboss:domain:osgi:1.2" activation="lazy"> <properties> <property name="org.osgi.framework.startlevel.beginning"> 1 </property> </properties> <capabilities> <capability name="javax.servlet.api:v25"/> <capability name="javax.transaction.api"/> <capability name="org.apache.felix.log" startlevel="1"/> <capability name="org.jboss.osgi.logging" startlevel="1"/> <capability name="org.apache.felix.configadmin" startlevel="1"/> <capability name="org.jboss.as.osgi.configadmin" startlevel="1"/> </capabilities> </subsystem> <subsystem xmlns="urn:jboss:domain:pojo:1.0"/> <subsystem xmlns="urn:jboss:domain:remoting:1.1"> <connector name="remoting-connector" socket-binding="remoting" security-realm="ApplicationRealm"/> </subsystem> <subsystem xmlns="urn:jboss:domain:resource-adapters:1.0"/> <subsystem xmlns="urn:jboss:domain:sar:1.0"/> <subsystem xmlns="urn:jboss:domain:security:1.1"> <security-domains> <security-domain name="other" cache-type="default"> <authentication> <login-module code="Remoting" flag="optional"> <module-option name="password-stacking" value="useFirstPass"/> </login-module> <login-module code="RealmUsersRoles" flag="required"> <module-option name="usersProperties" value="${jboss.server.config.dir}/application-users.properties"/> <module-option name="rolesProperties" value="${jboss.server.config.dir}/application-roles.properties"/> <module-option name="realm" value="ApplicationRealm"/> <module-option name="password-stacking" value="useFirstPass"/> </login-module> </authentication> </security-domain> <security-domain name="jboss-web-policy" cache-type="default"> <authorization> <policy-module code="Delegating" flag="required"/> </authorization> </security-domain> <security-domain name="jboss-ejb-policy" cache-type="default"> <authorization> <policy-module code="Delegating" flag="required"/> </authorization> </security-domain> </security-domains> </subsystem> <subsystem xmlns="urn:jboss:domain:threads:1.1"/> <subsystem xmlns="urn:jboss:domain:transactions:1.1"> <core-environment> <process-id> <uuid/> </process-id> </core-environment> <recovery-environment socket-binding="txn-recovery-environment" status-socket-binding="txn-status-manager"/> <coordinator-environment default-timeout="300"/> </subsystem> <subsystem xmlns="urn:jboss:domain:web:1.1" default-virtual-server="default-host" native="false"> <connector name="http" protocol="HTTP/1.1" scheme="http" socket-binding="http"/> <virtual-server name="default-host" enable-welcome-root="false"> <alias name="localhost"/> <alias name="nextenders.com"/> </virtual-server> </subsystem> <subsystem xmlns="urn:jboss:domain:webservices:1.1"> <modify-wsdl-address>true</modify-wsdl-address> <wsdl-host>${jboss.bind.address:127.0.0.1}</wsdl-host> <endpoint-config name="Standard-Endpoint-Config"/> <endpoint-config name="Recording-Endpoint-Config"> <pre-handler-chain name="recording-handlers" protocol-bindings="##SOAP11_HTTP ##SOAP11_HTTP_MTOM ##SOAP12_HTTP ##SOAP12_HTTP_MTOM"> <handler name="RecordingHandler" class="org.jboss.ws.common.invocation.RecordingServerHandler"/> </pre-handler-chain> </endpoint-config> </subsystem> <subsystem xmlns="urn:jboss:domain:weld:1.0"/> </profile> <interfaces> <interface name="management"> <inet-address value="${jboss.bind.address.management:127.0.0.1}"/> 
</interface> <interface name="public"> <inet-address value="${jboss.bind.address:127.0.0.1}"/> </interface> <interface name="unsecure"> <inet-address value="${jboss.bind.address.unsecure:127.0.0.1}"/> </interface> </interfaces> <socket-binding-group name="standard-sockets" default-interface="public" port-offset="${jboss.socket.binding.port-offset:0}"> <socket-binding name="management-native" interface="management" port="${jboss.management.native.port:9999}"/> <socket-binding name="management-http" interface="management" port="${jboss.management.http.port:9990}"/> <socket-binding name="management-https" interface="management" port="${jboss.management.https.port:9443}"/> <socket-binding name="ajp" port="8009"/> <socket-binding name="http" port="80"/> <socket-binding name="https" port="443"/> <socket-binding name="osgi-http" interface="management" port="8090"/> <socket-binding name="remoting" port="4447"/> <socket-binding name="txn-recovery-environment" port="4712"/> <socket-binding name="txn-status-manager" port="4713"/> <outbound-socket-binding name="mail-smtp"> <remote-destination host="localhost" port="25"/> </outbound-socket-binding> </socket-binding-group>
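
    To narrow down a hang like this, capture the JVM's state the next time it happens, before restarting anything (a sketch; the pgrep pattern assumes the standard standalone.sh launcher and may need adjusting):

      # thread dump plus GC utilisation for the hung JBoss process
      PID=$(pgrep -f jboss-modules.jar)
      jstack -l "$PID" > /tmp/jboss-threads-$(date +%s).txt
      jstat -gcutil "$PID" 1000 10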

    Read the article

  • trying to build Boost MPI, but the lib files are not created. What's going on?

    - by unknownthreat
    I am trying to run a program with Boost MPI, but the thing is I don't have the .lib. So I try to create one by following the instruction at http://www.boost.org/doc/libs/1_43_0/doc/html/mpi/getting_started.html#mpi.config The instruction says "For many users using LAM/MPI, MPICH, or OpenMPI, configuration is almost automatic", I got myself OpenMPI in C:\, but I didn't do anything more with it. Do we need to do anything with it? Beside that, another statement from the instruction: "If you don't already have a file user-config.jam in your home directory, copy tools/build/v2/user-config.jam there." Well, I simply do what it says. I got myself "user-config.jam" in C:\boost_1_43_0 along with "using mpi ;" into the file. Next, this is what I've done: bjam --with-mpi C:\boost_1_43_0>bjam --with-mpi WARNING: No python installation configured and autoconfiguration failed. See http://www.boost.org/libs/python/doc/building.html for configuration instructions or pass --without-python to suppress this message and silently skip all Boost.Python targets Building the Boost C++ Libraries. warning: skipping optional Message Passing Interface (MPI) library. note: to enable MPI support, add "using mpi ;" to user-config.jam. note: to suppress this message, pass "--without-mpi" to bjam. note: otherwise, you can safely ignore this message. warning: Unable to construct ./stage-unversioned warning: Unable to construct ./stage-unversioned Component configuration: - date_time : not building - filesystem : not building - graph : not building - graph_parallel : not building - iostreams : not building - math : not building - mpi : building - program_options : not building - python : not building - random : not building - regex : not building - serialization : not building - signals : not building - system : not building - test : not building - thread : not building - wave : not building ...found 1 target... The Boost C++ Libraries were successfully built! The following directory should be added to compiler include paths: C:\boost_1_43_0 The following directory should be added to linker library paths: C:\boost_1_43_0\stage\lib C:\boost_1_43_0> I see that there are many libs in C:\boost_1_43_0\stage\lib, but I see no trace of libboost_mpi-vc100-mt-1_43.lib or libboost_mpi-vc100-mt-gd-1_43.lib at all. These are the libraries required for linking in mpi applications. What could possibly gone wrong when libraries are not being built?
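
    bjam only picks up user-config.jam from the home directory (or a location it is explicitly told about), so a copy sitting in C:\boost_1_43_0 is silently ignored, which matches the "add 'using mpi ;' to user-config.jam" warning in the output above. A sketch of the fix (assumes the jam file already contains the "using mpi ;" line):

      rem run from C:\boost_1_43_0: put the jam file where bjam looks for it, then rebuild
      copy user-config.jam "%HOMEDRIVE%%HOMEPATH%\user-config.jam"
      bjam --with-mpi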

    Read the article

  • Getting started with XSD validation with C#

    - by Rosarch
    Here is my first attempt at validating XML with XSD. The XML file to be validated: <?xml version="1.0" encoding="utf-8" ?> <config xmlns="Schemas" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:noNamespaceSchemaLocation="config.xsd"> <levelVariant> <filePath>SampleVariant</filePath> </levelVariant> <levelVariant> <filePath>LegendaryMode</filePath> </levelVariant> <levelVariant> <filePath>AmazingMode</filePath> </levelVariant> </config> The XSD, located in "Schemas/config.xsd" relative to the XML file to be validated: <?xml version="1.0" encoding="utf-8" ?> <xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema" elementFormDefault="qualified"> <xs:element name="config"> <xs:complexType> <xs:sequence> <xs:element name="levelVariant"> <xs:complexType> <xs:sequence> <xs:element name="filePath" type="xs:anyURI"> </xs:element> </xs:sequence> </xs:complexType> </xs:element> </xs:sequence> </xs:complexType> </xs:element> </xs:schema> Right now, I just want to validate the XML file precisely as it appears currently. Once I understand this better, I'll expand more. Do I really need so many lines for something as simple as the XML file as it currently exists? The validation code in C#: public void SetURI(string uri) { XElement toValidate = XElement.Load(Path.Combine(PATH_TO_DATA_DIR, uri) + ".xml"); // begin confusion string schemaURI = toValidate.Attributes("xmlns").First().ToString() + toValidate.Attributes("xsi:noNamespaceSchemaLocation").First().ToString(); XmlSchemaSet schemas = new XmlSchemaSet(); schemas.Add("", XmlReader.Create( SOMETHING )); XDocument toValidateDoc = new XDocument(toValidate); toValidateDoc.Validate(schemas, null); // end confusion root = toValidate; } Any illumination would be appreciated.
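
    For the SOMETHING placeholder, one workable sketch is to skip parsing the attributes and point the schema set straight at the XSD on disk, using the relative layout described above; note that the document's xmlns="Schemas" and the schema's missing targetNamespace will also have to agree before validation passes.

      // load the schema directly and validate the whole document
      string xmlPath = Path.Combine(PATH_TO_DATA_DIR, uri) + ".xml";
      string xsdPath = Path.Combine(Path.GetDirectoryName(xmlPath), @"Schemas\config.xsd");
      var schemas = new XmlSchemaSet();
      schemas.Add(null, xsdPath);
      XDocument toValidateDoc = XDocument.Load(xmlPath);
      toValidateDoc.Validate(schemas, (sender, e) => { throw new XmlSchemaValidationException(e.Message); });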

    Read the article

  • Spring, Jersey and Viewable JSP Integration

    - by Brian D.
    I'm trying to integrate Spring 3.0.5 with Jersey 1.4. I seem to have everything working, but whenever I try and return a Viewable that points to a JSP, I get a 404 error. When I wasn't using spring I could use this filter: <filter> <filter-name>Jersey Filter</filter-name> <filter-class>com.sun.jersey.spi.container.servlet.ServletContainer</filter-class> <init-param> <param-name>com.sun.jersey.config.feature.Redirect</param-name> <param-value>true</param-value> </init-param> <init-param> <param-name>com.sun.jersey.config.property.packages</param-name> <param-value>cheetah.frontend.controllers</param-value> </init-param> <init-param> <param-name>com.sun.jersey.config.feature.FilterForwardOn404</param-name> <param-value>true</param-value> </init-param> <init-param> <param-name>com.sun.jersey.config.property.WebPageContentRegex</param-name> <param-value>/(images|css|jsp)/.*</param-value> </init-param> </filter> And I could return a Viewable to any JSP's, image, css that were stored in the appropriate folder. However, now that I have to use the SpringServlet to get spring integration, I'm at a loss for how to access resources, since I can't use the filter above. I've tried using this servlet mapping with no luck: <servlet> <servlet-name>jerseyspring</servlet-name> <servlet-class>com.sun.jersey.spi.spring.container.servlet.SpringServlet</servlet-class> <load-on-startup>1</load-on-startup> <init-param> <param-name>com.sun.jersey.config.property.WebPageContentRegex</param-name> <param-value>/(images|css|jsp)/.*</param-value> </init-param> </servlet> <servlet-mapping> <servlet-name>jerseyspring</servlet-name> <url-pattern>/*</url-pattern> </servlet-mapping> Does anyone know the proper configurations to achieve this? Thanks for any help.
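
    One avenue worth trying (a sketch, not a confirmed fix): SpringServlet extends ServletContainer, which also implements Filter, so it can be registered as a filter just like the plain Jersey setup above and keep the WebPageContentRegex/FilterForwardOn404 behaviour for JSPs and static content:

      <filter>
        <filter-name>jerseyspring</filter-name>
        <filter-class>com.sun.jersey.spi.spring.container.servlet.SpringServlet</filter-class>
        <init-param>
          <param-name>com.sun.jersey.config.feature.FilterForwardOn404</param-name>
          <param-value>true</param-value>
        </init-param>
        <init-param>
          <param-name>com.sun.jersey.config.property.WebPageContentRegex</param-name>
          <param-value>/(images|css|jsp)/.*</param-value>
        </init-param>
      </filter>
      <filter-mapping>
        <filter-name>jerseyspring</filter-name>
        <url-pattern>/*</url-pattern>
      </filter-mapping>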

    Read the article

  • Getting started with XSD validation with .NET

    - by Rosarch
    Here is my first attempt at validating XML with XSD. The XML file to be validated: <?xml version="1.0" encoding="utf-8" ?> <config xmlns="Schemas" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:noNamespaceSchemaLocation="config.xsd"> <levelVariant> <filePath>SampleVariant</filePath> </levelVariant> <levelVariant> <filePath>LegendaryMode</filePath> </levelVariant> <levelVariant> <filePath>AmazingMode</filePath> </levelVariant> </config> The XSD, located in "Schemas/config.xsd" relative to the XML file to be validated: <?xml version="1.0" encoding="utf-8" ?> <xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema" elementFormDefault="qualified"> <xs:element name="config"> <xs:complexType> <xs:sequence> <xs:element name="levelVariant"> <xs:complexType> <xs:sequence> <xs:element name="filePath" type="xs:anyURI"> </xs:element> </xs:sequence> </xs:complexType> </xs:element> </xs:sequence> </xs:complexType> </xs:element> </xs:schema> Right now, I just want to validate the XML file precisely as it appears currently. Once I understand this better, I'll expand more. Do I really need so many lines for something as simple as the XML file as it currently exists? The validation code in C#: public void SetURI(string uri) { XElement toValidate = XElement.Load(Path.Combine(PATH_TO_DATA_DIR, uri) + ".xml"); // begin confusion // exception here string schemaURI = toValidate.Attributes("xmlns").First().ToString() + toValidate.Attributes("xsi:noNamespaceSchemaLocation").First().ToString(); XmlSchemaSet schemas = new XmlSchemaSet(); schemas.Add(null, schemaURI); XDocument toValidateDoc = new XDocument(toValidate); toValidateDoc.Validate(schemas, null); // end confusion root = toValidate; } Running the above code gives this exception: The ':' character, hexadecimal value 0x3A, cannot be included in a name. Any illumination would be appreciated.
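
    The exception comes from passing the prefixed name "xsi:noNamespaceSchemaLocation" where an XName is expected; XName has no notion of prefixes, so the namespace has to be supplied as an XNamespace. A sketch of that part of the method:

      // read the attribute with a proper XName instead of a "prefix:name" string
      XNamespace xsi = "http://www.w3.org/2001/XMLSchema-instance";
      string schemaFile = (string)toValidate.Attribute(xsi + "noNamespaceSchemaLocation"); // "config.xsd"
      string defaultNs  = toValidate.GetDefaultNamespace().NamespaceName;                  // "Schemas"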

    Read the article

  • usercontrol hosted in IE renders as a textbox

    - by coxymla
    On my ongoing saga to mirror the hosting of a legacy app on a clean box, I've hit my next snag. One page relies on a big .NET UserControl that on the new machine renders only as a big, greyed out textarea (greyed out vertical scrollbar on the right hand edge. Inspecting the source shows the expected object tag.) This is particularly tricky because nobody seems to know much about hosted UserControls and all the discussions data back to 2002-2004. The page is quite simple: <%@ Page language="c#" Codebehind="DataExport.aspx.cs" AutoEventWireup="false" Inherits="yyyyy.Web.DataExport" %> <!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.0 Transitional//EN" > <html> <head> <title>DataExport</title> <link rel="Configuration" href="/xxxxx/yyyyy/DataExport.config"> </head> <body style="margin:0px;padding:0px;overflow:hidden"> <OBJECT id="DataExport" style="WIDTH: 100%; HEIGHT: 100%; position:absolute; left: 0px; top:0px" classid="yyyyy.Common.dll#yyyyy.Controls.DataExport" VIEWASTEXT> </OBJECT> </body> </html> The config file referenced: <?xml version="1.0" encoding="utf-8" ?> <configuration> <configSections> <sectionGroup name="yyyyy"> <section name="dataExport" type="yyyyy.Controls.DataExportSectionHandler,yyyyy.Common" /> </sectionGroup> </configSections> <yyyyy> <dataExport> <layoutFile>http://vm2/xxxxx/yyyyy/layout.xml</layoutFile> <webServiceUrl>http://vm2/xxxxx/yyyyy/services/yyyyy.asmx</webServiceUrl> </dataExport> </yyyyy> </configuration> What I've checked: Security permissions should be OK, the site is trusted and adding a URL exception to grant FullTrust doesn't change anything. Config file is acessible over the web, layout.xml is accessible, ASMX shows the expected command list Machine.config grants GET permission for the usercontrol.config file. What perhaps looks fishy to me: The DataExport UserControl references Aspose.Excel to generate the spreadsheets it exports. When I navigate to the page and get a blank textbox, then run gacutil /ldl, nothing is in the local download cache. On the working machine, running the same command after viewing the page shows a laundry list of DLLs including the control DLL and the Aspose DLL.
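
    Given that nothing appears in the download cache on the new box, it may help to watch the assembly download and bind step itself rather than the page (a sketch; both tools ship with the .NET SDK):

      rem after browsing to the page on the new machine:
      rem   gacutil /ldl   - should now list yyyyy.Common.dll and the Aspose DLL if the download worked
      rem   fuslogvw       - enable "Log bind failures", reload the page, and inspect the logged entries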

    Read the article

  • log4net initialisation

    - by Ruben Bartelink
    I've looked hard for duplicates but have to ask the following, no matter how basic it may seem, to get it clear once and for all! In a fresh Console app using log4net version 1.2.10.0 on VS28KSP1 on 64 bit W7, I have the following code:- using log4net; using log4net.Config; namespace ConsoleApplication1 { class Program { static readonly ILog _log = LogManager.GetLogger(typeof(Program)); static void Main(string[] args) { _log.Info("Ran"); } } } In my app.config, I have: <?xml version="1.0" encoding="utf-8" ?> <configuration> <configSections> <section name="log4net" type="log4net.Config.Log4NetConfigurationSectionHandler, log4net" /> </configSections> <log4net> <appender name="RollingFileAppender" type="log4net.Appender.RollingFileAppender"> <file value="Program.log" /> <lockingModel type="log4net.Appender.FileAppender+MinimalLock" /> <appendToFile value="true" /> <rollingStyle value="Size" /> <maxSizeRollBackups value="10" /> <maximumFileSize value="1MB" /> <staticLogFileName value="true" /> <layout type="log4net.Layout.PatternLayout"> <conversionPattern value="[%username] %date [%thread] %-5level %logger [%property{NDC}] - %message%newline" /> </layout> </appender> <root> <level value="DEBUG" /> <appender-ref ref="RollingFileAppender" /> </root> </log4net> </configuration> This doesnt write anything, unless I either add an attribute: [ assembly:XmlConfigurator ] Or explicitly initialise it in Main(): _log.Info("This will not go to the log"); XmlConfigurator.Configure(); _log.Info("Ran"); This raises the following questions: I'm almost certain I've seen it working somewhere on some version of log4net without the addition of the assembly attribute or call in Main. Can someone assure me I'm not imagining that? Can someone please point me to where in the doc it explicitly states that both the config section and the initialisation hook are required - hopefully with an explanation of when this changed, if it did? I can easily imagine why this might be the policy -- having the initialisation step explicit to avoid surprises etc., it's just that I seem to recall this not always being the case... (And normally I have the config in a separate file, which generally takes configsections out of the picture)
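
    For completeness, the attribute form referred to above looks like this (it goes in the entry assembly's AssemblyInfo.cs; configuration still only happens once the first logger is requested):

      // Properties/AssemblyInfo.cs
      [assembly: log4net.Config.XmlConfigurator(Watch = true)]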

    Read the article

  • PhoneGap web view thinks the device screen is taller than it is - results in offscreen tabbar

    - by Stin
    I have a jQTouch application loaded via server, so all I need to do is display the webpage full screen in PhoneGap for a faux-Native app. Unfortunatley each solution I've tried in PhoneGap has an issue: it thinks the screen size is taller than it is. This resuls in the tabbar that is pinned to the bottom being permantly offscreen and there fore unusable. You should be able to recreate this with my code below and going to the iTabbar online demo. Any thoughts on how to correct this issue? For background, going to the app page in iOS safari works fine, as well as saving the page to the home screen. In both cases the webview stops at the bottom of the screen and the tabbar is therefore viewable. Also, I'm using build.phonegap.com to compile (I'm not compiling locally) I've tried two methods: load the childBrowser plugin and call up the page (with navbar hidden via options) set the following config.xml parameter to prevent phonegap from switching to Safari, and then just load the link (preferable as it's cleaner in my mind. I've pasted my index.html and config.xml below) Details on the config.xml paramater: Open all links in WebView stay-in-webview with values true or false example: <preference name="stay-in-webview" value="true" /> if set to true, all links (even with target set to blank) will open in the app's webview only use this preference if you want pages from your server to take over your entire app default is false (Source: https://build.phonegap.com/docs/config-xml) my index.html: <!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01//EN" "http://www.w3.org/TR/html4/strict.dtd"> <html> <head> <meta http-equiv="Content-type" content="text/html;charset=utf-8"> <title>MyApp</title> <script src="phonegap.js"></script> </head> <body> <p><a href="http://www.itabbar.com/itabbar/demo.html#home">Launch iTabbar</a></p> </body> </html> my config.xml: <?xml version="1.0" encoding="UTF-8" ?> <widget xmlns = "http://www.w3.org/ns/widgets" xmlns:gap = "http://phonegap.com/ns/1.0" id = "com.phonegap.myapp" versionCode="10" version = "1.0.0"> <!-- versionCode is optional and Android only --> <name>MyApp</name> <description> My app is... </description> <author href="https://myurl.com" email="[email protected]"> me </author> <preference name="stay-in-webview" value="true" /> </widget>
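
    One thing the index.html above is missing is a viewport meta tag; without it the web view lays the page out at its default width and height instead of the device's, which can push a bottom-pinned tabbar offscreen. A sketch, not a confirmed fix:

      <meta name="viewport" content="width=device-width, height=device-height, initial-scale=1.0, maximum-scale=1.0, user-scalable=no" />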

    Read the article

  • AngularJS service returning promise unit test gives error No more request expected

    - by softweave
    I want to test a service (Bar) that invokes another service (Foo) and returns a promise. The test is currently failing with this error: Error: Unexpected request: GET foo.json No more request expected Here are the service definitions: // Foo service returns new objects having get function returning a promise angular.module('foo', []). factory('Foo', ['$http', function ($http) { function FooFactory(config) { var Foo = function (config) { angular.extend(this, config); }; Foo.prototype = { get: function (url, params, successFn, errorFn) { successFn = successFn || function (response) {}; errorFn = errorFn || function (response) {}; return $http.get(url, {}).then(successFn, errorFn); } }; return new Foo(config); }; return FooFactory; }]); // Bar service uses Foo service angular.module('bar', ['foo']). factory('Bar', ['Foo', function (Foo) { var foo = Foo(); return { getCurrentTime: function () { return foo.get('foo.json', {}, function (response) { return Date.parse(response.data.now); }); } }; }]); Here is my current test: 'use strict'; describe('bar tests', function () { var currentTime, currentTimeInMs, $q, $rootScope, mockFoo, mockFooFactory, Foo, Bar, now; currentTime = "March 26, 2014 13:10 UTC"; currentTimeInMs = Date.parse(currentTime); beforeEach(function () { // stub out enough of Foo to satisfy Bar service: // create mock object with function get: function(url, params, successFn, errorFn) // that promises to return a response with this property // { data: { now: "March 26, 2014 13:10 UTC" }}) mockFoo = { get: function (url, params, successFn, errorFn) { successFn = successFn || function (response) {}; errorFn = errorFn || function (response) {}; // setup deferred promise var deferred = $q.defer(); deferred.resolve({data: { now: currentTime }}); return (deferred.promise).then(successFn, errorFn); } }; // create mock Foo service mockFooFactory = function(config) { return mockFoo; }; module(function ($provide) { $provide.value('Foo', mockFooFactory); }); module('bar'); inject(function (_$q_, _$rootScope_, _Foo_, _Bar_) { $q = _$q_; $rootScope = _$rootScope_; Foo = _Foo_; Bar = _Bar_; }); }); it('getCurrentTime should return currentTimeInMs', function () { Bar.getCurrentTime().then(function (serverCurrentTime) { now = serverCurrentTime; }); $rootScope.$apply(); // resolve Bar promise expect(now).toEqual(currentTimeInMs); }); }); The error is being thrown at $rootScope.$apply(). I also tried using $rootScope.$digest(), but it gives the same error. Thanks in advance for any insight you can give me.
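
    The "Unexpected request: GET foo.json" error means the real Foo (and therefore $http) is still being hit, i.e. the $provide override is not the instance Bar ends up using. An alternative sketch that drops the factory mock entirely and stubs the HTTP layer with ngMock's $httpBackend instead:

      // let the real Foo/$http run, but answer the GET from the test
      var $httpBackend;
      beforeEach(inject(function (_$httpBackend_) {
        $httpBackend = _$httpBackend_;
        $httpBackend.whenGET('foo.json').respond({ now: currentTime });
      }));

      it('getCurrentTime should return currentTimeInMs', function () {
        Bar.getCurrentTime().then(function (serverCurrentTime) { now = serverCurrentTime; });
        $httpBackend.flush();   // flush resolves the $http promise and runs the digest
        expect(now).toEqual(currentTimeInMs);
      });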

    Read the article

  • â?? in my HTML after purify

    - by mmcgrail
    I have a database the i am rebuilding the table structure was crap so I'm porting some of the data from one table to another. This data appears to have been copy past from MSO product so as I'm getting the data I clean it up with htmlpurifier and some alittle str_replace in php here the clean function function clean_html($html) { $config = HTMLPurifier_Config::createDefault(); $config->set('AutoFormat','RemoveEmpty',true); $config->set('HTML','AllowedAttributes','href,src'); $config->set('HTML','AllowedElements','p,em,strong,a,ul,li,ol,img'); $purifier = new HTMLPurifier($config); $html = $purifier->purify($html); $html = str_replace('&nbsp;',' ',$html); $html = str_replace("\r",'',$html); $html = str_replace("\n",'',$html); $html = str_replace("\t",'',$html); $html = str_replace(' ',' ',$html); $html = str_replace('<p> </p>','',$html); $html = str_replace(chr(160),' ',$html); return trim($html); } but when I put the results into my new table and out put them to the ckeditor I get those three characters. I then have a javascript function that is called to remove special characters from the content of the ckeditor too. it doesn't clean it either function remove_special(str) { var rExps=[ /[\xC0-\xC2]/g, /[\xE0-\xE2]/g, /[\xC8-\xCA]/g, /[\xE8-\xEB]/g, /[\xCC-\xCE]/g, /[\xEC-\xEE]/g, /[\xD2-\xD4]/g, /[\xF2-\xF4]/g, /[\xD9-\xDB]/g, /[\xF9-\xFB]/g, /\xD1/,/\xF1/g, "/[\u00a0|\u1680|[\u2000-\u2009]|u200a|\u200b|\u2028|\u2029|\u202f|\u205f|\u3000|\xa0]/g", /\u000b/g,'/[\u180e|\u000c]/g', /\u2013/g, /\u2014/g, /\xa9/g,/\xae/g,/\xb7/g,/\u2018/g,/\u2019/g,/\u201c/g,/\u201d/g,/\u2026/g]; var repChar=['A','a','E','e','I','i','O','o','U','u','N','n',' ','\t','','-','--','(c)','(r)','*',"'","'",'"','"','...']; for(var i=0; i<rExps.length; i++) { str=str.replace(rExps[i],repChar[i]); } for (var x = 0; x < str.length; x++) { charcode = str.charCodeAt(x); if ((charcode < 32 || charcode > 126) && charcode !=10 && charcode != 13) { str = str.replace(str.charAt(x), ""); } } return str; } Does anyone know off hand what I need to do to get rid of them. I think they may be some sort of quote
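
    Those stray characters are usually a UTF-8 non-breaking space (the byte pair 0xC2 0xA0) being rendered through a Latin-1 page or database connection. A sketch of two things to add (the header call assumes the page is emitted by this same PHP script):

      // strip the UTF-8 NBSP byte pair before the other replacements in clean_html()
      $html = str_replace("\xc2\xa0", ' ', $html);
      // and keep everything declared as UTF-8 end to end, e.g.
      header('Content-Type: text/html; charset=utf-8');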

    Read the article

  • Qt, can't display child widget

    - by Blin
    I have two widgets defined as follows: class mainWindow : public QWidget { Q_OBJECT public: mainWindow(); void readConfig(); private: SWindow *config; QVector <QString> filePath; QVector <QLabel*> alias,procStatus; QVector <int> delay; QGridLayout *mainLayout; QVector<QPushButton*> stopButton,restartButton; QVector<QProcess*> proc; QSignalMapper *stateSignalMapper, *stopSignalMapper, *restartSignalMapper; public slots: void openSettings(); void startRunning(); void statusChange(int); void stopProc(int); void restartProc(int); void renew(); }; class SWindow : public QWidget { Q_OBJECT public: SWindow(QWidget *parent=0); void readConfig(); void addLine(int); private: QVector<QPushButton*> selectButton; QVector<QLabel*> filePath; QVector<QLineEdit*> alias; QSignalMapper *selectSignalMapper; QVector<QSpinBox*> delay; QGridLayout *mainLayout; public slots: void selectFile(int); void saveFile(); void addLineSlot(); }; When I create and display an SWindow object from mainWindow like this void mainWindow::openSettings() { config = new SWindow(); config->show(); } everything is OK, but now I need to access the mainWindow from SWindow, and void mainWindow::openSettings() { config = new SWindow(this); config->show(); } doesn't display SWindow. How can I display SWindow? How do I call a function on widget close?
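
    A sketch of the usual cause: giving the widget a parent turns it into a child widget drawn inside the parent, so it needs the Qt::Window flag to stay a top-level window; for the second question, overriding closeEvent() in SWindow is the standard hook for running code on close.

      void mainWindow::openSettings()
      {
          config = new SWindow(this);           // parent gives SWindow access back to mainWindow
          config->setWindowFlags(Qt::Window);   // but keep it a separate top-level window
          config->setAttribute(Qt::WA_DeleteOnClose);
          config->show();
      }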

    Read the article

  • Capistrano asks for SSH password when deploying from local machine to server

    - by GhostRider
    I can ssh to this server without a password because my id_rsa.pub key is in its authorized_keys file. But when I deploy my code to the same server via Capistrano from my local project folder, the server prompts for a password. I don't understand why plain ssh works but the deploy to the same server does not.

    $ cap deploy:setup
    "no seed data"
      triggering start callbacks for `deploy:setup'
    * 13:42:18 == Currently executing `multistage:ensure'
    *** Defaulting to `development'
    * 13:42:18 == Currently executing `development'
    * 13:42:18 == Currently executing `deploy:setup'
      triggering before callbacks for `deploy:setup'
    * 13:42:18 == Currently executing `db:configure_mongoid'
    * executing "mkdir -p /home/deploy/apps/development/flyingbird/shared/config"
      servers: ["dev1.noob.com", "176.9.24.217"]
    Password:

    Cap script:

    # gem install capistrano capistrano-ext capistrano_colors
    begin; require 'capistrano_colors'; rescue LoadError; end
    require "bundler/capistrano"

    # RVM bootstrap
    # $:.unshift(File.expand_path('./lib', ENV['rvm_path']))
    require 'rvm/capistrano'
    set :rvm_ruby_string, 'ruby-1.9.2-p290'
    set :rvm_type, :user

    # Application setup
    default_run_options[:pty] = true   # allow pseudo-terminals
    ssh_options[:forward_agent] = true # forward SSH keys (this will use your SSH key to get the code from the git repository)
    ssh_options[:port] = 22

    set :ip, "dev1.noob.com"
    set :application, "flyingbird"
    set :repository, "repo-path"
    set :scm, :git
    set :branch, fetch(:branch, "master")
    set :deploy_via, :remote_cache
    set :rails_env, "production"
    set :use_sudo, false
    set :scm_username, "user"
    set :user, "user1"
    set(:database_username) { application }
    set(:production_database) { application + "_production" }
    set(:staging_database) { application + "_staging" }
    set(:development_database) { application + "_development" }

    role :web, ip                    # Your HTTP server, Apache/etc
    role :app, ip                    # This may be the same as your `Web` server
    role :db, ip, :primary => true   # This is where Rails migrations will run

    # Use multi-staging
    require "capistrano/ext/multistage"
    set :stages, ["development", "staging", "production"]
    set :default_stage, rails_env

    before "deploy:setup", "db:configure_mongoid"

    # Uncomment if you use any of these databases
    after "deploy:update_code", "db:symlink_mongoid"
    after "deploy:update_code", "uploads:configure_shared"
    after "uploads:configure_shared", "uploads:symlink"
    after 'deploy:update_code', 'bundler:symlink_bundled_gems'
    after 'deploy:update_code', 'bundler:install'
    after "deploy:update_code", "rvm:trust_rvmrc"

    # Use this to update crontab if you use 'whenever' gem
    # after "deploy:symlink", "deploy:update_crontab"

    if ARGV.include?("seed_data")
      after "deploy", "db:seed"
    else
      p "no seed data"
    end

    # Custom tasks to handle resque and redis restart
    before "deploy", "deploy:stop_workers"
    after "deploy", "deploy:restart_redis"
    after "deploy", "deploy:start_workers"
    after "deploy", "deploy:cleanup"

    # Create symlink for public uploads
    namespace :uploads do
      task :symlink do
        run <<-CMD
          rm -rf #{release_path}/public/uploads &&
          mkdir -p #{release_path}/public &&
          ln -nfs #{shared_path}/public/uploads #{release_path}/public/uploads
        CMD
      end

      task :configure_shared do
        run "mkdir -p #{shared_path}/public"
        run "mkdir -p #{shared_path}/public/uploads"
      end
    end

    namespace :rvm do
      desc 'Trust rvmrc file'
      task :trust_rvmrc do
        run "rvm rvmrc trust #{current_release}"
      end
    end

    namespace :db do
      desc "Create mongoid.yml in shared path"
      task :configure_mongoid do
        db_config = <<-EOF
    defaults: &defaults
      host: localhost

    production:
      <<: *defaults
      database: #{production_database}

    staging:
      <<: *defaults
      database: #{staging_database}
    EOF
        run "mkdir -p #{shared_path}/config"
        put db_config, "#{shared_path}/config/mongoid.yml"
      end

      desc "Make symlink for mongoid.yml"
      task :symlink_mongoid do
        run "ln -nfs #{shared_path}/config/mongoid.yml #{release_path}/config/mongoid.yml"
      end

      desc "Fill the database with seed data"
      task :seed do
        run "cd #{current_path}; RAILS_ENV=#{default_stage} bundle exec rake db:seed"
      end
    end

    namespace :bundler do
      desc "Symlink bundled gems on each release"
      task :symlink_bundled_gems, :roles => :app do
        run "mkdir -p #{shared_path}/bundled_gems"
        run "ln -nfs #{shared_path}/bundled_gems #{release_path}/vendor/bundle"
      end

      desc "Install bundled gems"
      task :install, :roles => :app do
        run "cd #{release_path} && bundle install --deployment"
      end
    end

    namespace :deploy do
      task :start, :roles => :app do
        run "touch #{current_path}/tmp/restart.txt"
      end

      desc "Restart the app"
      task :restart, :roles => :app do
        run "touch #{current_path}/tmp/restart.txt"
      end

      desc "Stop the workers"
      task :stop_workers do
        run "cd #{current_path}; RAILS_ENV=#{default_stage} bundle exec rake resque:stop_workers"
      end

      desc "Restart Redis server"
      task :restart_redis do
        "/etc/init.d/redis-server restart"
      end

      desc "Start the workers"
      task :start_workers do
        run "cd #{current_path}; RAILS_ENV=#{default_stage} bundle exec rake resque:start_workers"
      end
    end
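    A first thing to rule out here (an assumption on my part, not a detail from the question) is a mismatch between the account used for the manual ssh test and the :user Capistrano connects as, combined with an empty ssh agent: forward_agent only helps if ssh-add has loaded the key locally. Below is a minimal sketch of deploy.rb settings that pin Capistrano to one key and make key failures visible; the key path and the auth-method list are guesses, not taken from the question.

    # Hypothetical additions to config/deploy.rb; a sketch, not the poster's actual fix.
    ssh_options[:keys] = [File.expand_path("~/.ssh/id_rsa")]  # assumed key path; use the key that works for plain ssh
    ssh_options[:auth_methods] = %w(publickey)                # fail with an error instead of falling back to a password prompt
    ssh_options[:forward_agent] = true                        # already set above; requires `ssh-add` to have been run locally

    # The manual ssh test must be made as the same account Capistrano uses.
    # The script connects as "user1", yet the setup path is /home/deploy/...,
    # which suggests a "deploy" account; if id_rsa.pub is only in deploy's
    # authorized_keys, connecting as user1 will always prompt for a password.
    set :user, "user1"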

    Read the article

  • Vagrant (Virtualbox) host-only multiple node networking issue

    - by Lorin Hochstein
    I'm trying to use a multi-VM Vagrant environment as a testbed for deploying OpenStack, and I've run into a networking problem when trying to communicate from one VM to a VM-inside-of-a-VM. I have two Vagrant nodes, a cloud controller node and a compute node, and I'm using host-only networking. My Vagrantfile looks like this:

    Vagrant::Config.run do |config|
      config.vm.box = "precise64"

      config.vm.define :controller do |controller_config|
        controller_config.vm.network :hostonly, "192.168.206.130" # eth1
        controller_config.vm.network :hostonly, "192.168.100.130" # eth2
        controller_config.vm.host_name = "controller"
      end

      config.vm.define :compute1 do |compute1_config|
        compute1_config.vm.network :hostonly, "192.168.206.131" # eth1
        compute1_config.vm.network :hostonly, "192.168.100.131" # eth2
        compute1_config.vm.host_name = "compute1"
        compute1_config.vm.customize ["modifyvm", :id, "--memory", 1024]
      end
    end

    When I start up a (QEMU-based) VM, it boots successfully on compute1, and its virtual NIC (vnet0) is connected via a bridge, br100:

    root@compute1:~# brctl show 100
    bridge name     bridge id               STP enabled     interfaces
    br100           8000.08002798c6ef       no              eth2
                                                            vnet0

    When the QEMU VM makes a request to the DHCP server (dnsmasq) running on the controller, I can see that the request reaches the controller because of this output in the controller's syslog:

    Aug 6 02:34:56 precise64 dnsmasq-dhcp[12042]: DHCPDISCOVER(br100) fa:16:3e:07:98:11
    Aug 6 02:34:56 precise64 dnsmasq-dhcp[12042]: DHCPOFFER(br100) 192.168.100.2 fa:16:3e:07:98:11

    However, the DHCPOFFER never makes it back to the VM running on compute1. If I watch the traffic with tcpdump on the vboxnet3 interface of the host machine that runs Vagrant (Mac OS X), I can see both the requests and the replies:

    $ sudo tcpdump -i vboxnet3 -n port 67 or port 68
    tcpdump: WARNING: vboxnet3: That device doesn't support promiscuous mode (BIOCPROMISC: Operation not supported on socket)
    tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
    listening on vboxnet3, link-type EN10MB (Ethernet), capture size 65535 bytes
    22:51:20.694040 IP 0.0.0.0.68 > 255.255.255.255.67: BOOTP/DHCP, Request from fa:16:3e:07:98:11, length 280
    22:51:20.694057 IP 0.0.0.0.68 > 255.255.255.255.67: BOOTP/DHCP, Request from fa:16:3e:07:98:11, length 280
    22:51:20.696047 IP 192.168.100.1.67 > 192.168.100.2.68: BOOTP/DHCP, Reply, length 311
    22:51:23.700845 IP 0.0.0.0.68 > 255.255.255.255.67: BOOTP/DHCP, Request from fa:16:3e:07:98:11, length 280
    22:51:23.700876 IP 0.0.0.0.68 > 255.255.255.255.67: BOOTP/DHCP, Request from fa:16:3e:07:98:11, length 280
    22:51:23.701591 IP 192.168.100.1.67 > 192.168.100.2.68: BOOTP/DHCP, Reply, length 311
    22:51:26.705978 IP 0.0.0.0.68 > 255.255.255.255.67: BOOTP/DHCP, Request from fa:16:3e:07:98:11, length 280
    22:51:26.705995 IP 0.0.0.0.68 > 255.255.255.255.67: BOOTP/DHCP, Request from fa:16:3e:07:98:11, length 280
    22:51:26.706527 IP 192.168.100.1.67 > 192.168.100.2.68: BOOTP/DHCP, Reply, length 311

    But if I run tcpdump on eth2 on compute1, I only see the requests, not the replies:

    root@compute1:~# tcpdump -i eth2 -n port 67 or port 68
    tcpdump: WARNING: eth2: no IPv4 address assigned
    tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
    listening on eth2, link-type EN10MB (Ethernet), capture size 65535 bytes
    02:51:20.240672 IP 0.0.0.0.68 > 255.255.255.255.67: BOOTP/DHCP, Request from fa:16:3e:07:98:11, length 280
    02:51:23.249758 IP 0.0.0.0.68 > 255.255.255.255.67: BOOTP/DHCP, Request from fa:16:3e:07:98:11, length 280
    02:51:26.258281 IP 0.0.0.0.68 > 255.255.255.255.67: BOOTP/DHCP, Request from fa:16:3e:07:98:11, length 280

    At this point I'm stuck: I don't see why the DHCP replies aren't making it to the compute node. Perhaps it has something to do with the configuration of the VirtualBox virtual switch/router? Note that the eth2 interfaces on both nodes have been set to promiscuous mode.
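    One explanation consistent with these captures (an assumption, not something verified against this setup) is that VirtualBox's host-only network only delivers frames addressed to MAC addresses it knows belong to an attached adapter, so the offer addressed to the nested VM's MAC (fa:16:3e:...) is dropped before it ever reaches compute1's eth2; making eth2 promiscuous inside the guest does not change that. The promiscuous-mode policy has to be opened on the VirtualBox NIC itself, for example with the same customize call already used for memory. A sketch of the compute1 block with that change follows; the NIC index 3 assumes VirtualBox's usual numbering (nic1 is Vagrant's NAT adapter, so the second host-only network, eth2, is nic3).

    config.vm.define :compute1 do |compute1_config|
      compute1_config.vm.network :hostonly, "192.168.206.131" # eth1
      compute1_config.vm.network :hostonly, "192.168.100.131" # eth2
      compute1_config.vm.host_name = "compute1"
      compute1_config.vm.customize ["modifyvm", :id, "--memory", 1024]
      # Let VirtualBox pass frames for MACs it does not know (the nested VM) to this NIC:
      compute1_config.vm.customize ["modifyvm", :id, "--nicpromisc3", "allow-all"]
    end

    The controller's eth2 NIC may need the same setting if its bridge also carries traffic for MACs VirtualBox does not know about.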

    Read the article

  • phpmyadmin login redirect fails with custom ssl port

    - by baraboom
    The server is running Ubuntu 10.10, Apache 2.2.16, PHP 5.3.3-1ubuntu9.3, and phpMyAdmin 3.3.7deb5build0.10.10.1. Since this same server also runs Zimbra on port 443, I've configured Apache to serve SSL on port 81. So far I have one CMS script running on this virtual host successfully. However, when I access /phpmyadmin (set up with the default alias) on my custom SSL port and submit the login form, I am redirected to http://vhost.domain.com:81/index.php?TOKEN=foo (note the http:// instead of the https:// that the login URL was using). This produces an Error 400 Bad Request complaining about "speaking plain HTTP to an SSL-enabled server port." If I then manually change the http:// to https:// in the URL, I can use phpMyAdmin as expected. I was annoyed enough to spend an hour trying to fix it, and even more annoyed that I cannot figure it out. I've tried various things, including:

    - Adding $cfg['PmaAbsoluteUri'] = 'https://vhost.domain.com:81/phpmyadmin/'; to /usr/share/phpmyadmin/config.inc.php, but this did not correct the problem (even though /usr/share/phpmyadmin/libraries/auth/cookie.auth.lib.php looks like it should honor it and use it as the redirect).
    - Adding $cfg['ForceSSL'] = 1; to the same config.inc.php, but then Apache spirals into an infinite redirect.
    - Adding a rewrite rule to the vhost-ssl conf file in Apache, but I was unable to figure out the condition to use when http:// was present along with the correct SSL port of :81.
    - Lots of googling.

    Here are the relevant Apache configuration pieces:

    /etc/apache2/ports.conf

    <IfModule mod_ssl.c>
        NameVirtualHost *:81
        Listen 81
    </IfModule>

    /etc/apache2/sites-enabled/vhost-nonssl

    <VirtualHost *:80>
        ServerAdmin webmaster@localhost
        ServerName vhost.domain.com
        DocumentRoot /home/xxx/sites/vhost/html
        RewriteEngine On
        RewriteCond %{HTTPS} off
        RewriteRule (.*) https://%{HTTP_HOST}:81%{REQUEST_URI}
    </Virtualhost>

    /etc/apache2/sites-enabled/vhost-ssl

    <VirtualHost *:81>
        ServerAdmin webmaster@localhost
        ServerName vhost.domain.com
        DocumentRoot /home/xxx/sites/vhost/html
        <Directory />
            Options FollowSymLinks
            AllowOverride None
            AuthType Basic
            AuthName "Restricted Vhost"
            AuthUserFile /home/xxx/sites/vhost/.users
            Require valid-user
        </Directory>
        <Directory /home/xxx/sites/vhost/html/>
            Options -Indexes FollowSymLinks MultiViews
            AllowOverride None
            Order allow,deny
            allow from all
        </Directory>
    </VirtualHost>

    /etc/apache2/conf.d/phpmyadmin.conf

    Alias /phpmyadmin /usr/share/phpmyadmin
    (The rest of the default .conf truncated.)

    Everything in the Apache config seems to work: the rewrite from non-SSL to SSL, the HTTP authentication. The problem only happens when I submit the login form for phpMyAdmin from https://vhost.domain.com:81/index.php. Other configs: the phpMyAdmin config is completely default, and php.ini has only had some minor changes to memory and timeout limits. These seem to work fine; as mentioned, another PHP script runs with no problem, and phpMyAdmin works great once I manually enter the correct scheme after login. I'm looking for either a band-aid that saves me the trouble of manually re-entering the https:// after login, a real fix that makes phpMyAdmin behave as I think it should, or some greater understanding of why my desired config is not possible.
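    One possibility worth checking (an assumption about the Ubuntu packaging, not something confirmed above) is that the PmaAbsoluteUri line never took effect, because the packaged phpMyAdmin layers /etc/phpmyadmin/config.inc.php and its conf.d files on top of the copy under /usr/share, and a later include can override an edit made there. Below is a sketch of the override; the conf.d filename is hypothetical, and appending the same lines to the end of /etc/phpmyadmin/config.inc.php would be the fallback. The hostname and port are copied from the question, the rest is guesswork.

    <?php
    // Sketch only; file location and override order are assumptions.
    // e.g. /etc/phpmyadmin/conf.d/pma-custom-port.php (hypothetical filename).
    // Keep the custom port and scheme in every URL phpMyAdmin builds,
    // including the post-login redirect:
    $cfg['PmaAbsoluteUri'] = 'https://vhost.domain.com:81/phpmyadmin/';

    // If the redirect still comes back as http://, forcing the HTTPS flag is a
    // further (assumed) workaround, since phpMyAdmin falls back to
    // $_SERVER['HTTPS'] when deciding which scheme to emit:
    if (empty($_SERVER['HTTPS'])) {
        $_SERVER['HTTPS'] = 'on';
    }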

    Read the article

  • PHP.ini does not load

    - by Jonathan Park
    OK, this is probably just me not knowing enough about PHP, but here it goes. I'm on Ubuntu Hardy with a custom-compiled version of PHP, built with these parameters:

    ./configure --enable-soap --with-zlib --with-mysql --with-apxs2=[correct path] --with-config-file-path=[correct path] --with-mysqli --with-curlwrappers --with-curl --with-mcrypt

    I used pecl install pecl_http to install the http.so extension, and it is in the correct module directory for my php.ini. My php.ini was loading, and I could change things in the ini and affect PHP. I had included the extension=http.so line in my php.ini, and that worked fine until I added these compilation options in order to add IMAP support:

    --with-openssl --with-kerberos --with-imap --with-imap-ssl

    That build failed because I needed the c-client library, which I fixed with apt-get install libc-client-dev. After that, PHP compiles fine and I have working IMAP support. HOWEVER, now all my calls to HttpRequest, which is part of the pecl_http extension in http.so, result in "Fatal error: Class 'HttpRequest' not found" errors. I figure the http.so module is no longer loading for one reason or another, but I cannot find any errors showing the reason.

    You might ask, "Have you tried undoing the new IMAP setup?" Yes, I have: I directly undid all my config changes and uninstalled the c-client library, and I still can't get it to work. That seems strange, because I made no other changes that should have caused this issue. I have also discovered that not only is the http extension no longer loading, but none of my extensions loaded via php.ini are loading. Can someone at least give me some further debugging steps? So far I have tried enabling all errors, including startup errors, in my php.ini (which works for other errors), but I'm not seeing any startup errors on the command line or via Apache. And yet the php.ini appears to be parsed, given that phpinfo() shows settings that are in the php.ini.

    Edit: It appears that only some of the php.ini settings are being honored. Is there a way to test my php.ini?

    Edit 2: It appears I was mistaken again, and the php.ini is not being loaded at all any longer. However, phpinfo() reports that it's looking for my php.ini in the correct location.

    Edit 3: My config is at the config file path location below, but it says no config file is loaded. Permission issue? The file is currently 644, so everyone should be able to read it even if they can't write it. I tried making it 777 and that didn't work.

    Configuration File (php.ini) Path: /etc/php.ini
    Loaded Configuration File: (none)

    Edit 4: By loading the ini on the command line with the -c option I am able to run my files, and -m shows that my modules load. So nothing is wrong with the php.ini itself.
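    Given the phpinfo() output above, one hypothesis (only a guess, since the actual --with-config-file-path value is hidden behind "[correct path]") is that the flag was pointed at the file /etc/php.ini rather than at its directory: the option expects a directory, so PHP would then look for /etc/php.ini/php.ini, find nothing, and report "Loaded Configuration File (none)" while still showing /etc/php.ini as the path. A few shell commands that narrow this down either way:

    # Debugging sketch; paths are inferred from the phpinfo() output above, not verified.
    php --ini                                  # shows the directory PHP scans and the file it actually loaded
    php -i | grep -i 'configuration file'      # the same information, phpinfo()-style

    # Trace exactly which paths the binary probes while looking for php.ini:
    strace -e trace=open,stat php -v 2>&1 | grep -i php.ini

    # If the configured path turns out to be the file rather than its directory,
    # either recompile with --with-config-file-path=/etc (a directory), or move
    # php.ini into whatever scan directory `php --ini` reports.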

    Read the article

< Previous Page | 83 84 85 86 87 88 89 90 91 92 93 94  | Next Page >