Search Results

Search found 5157 results on 207 pages for 'node'.


  • Problem deploying portlets on JBoss Portal 2.7.2: Not a canonical value

    - by Dee
    I just downloaded JBoss Portal Server 2.7.2 (the JBoss Portal + JBoss AS 4.2.3 bundle, to be precise) and tried deploying portlets such as the SimplestHelloWorld provided in the samples. The portlet deploys fine, but when I put it on a page I get the exception below. I tried other portlets as well (such as the booking MVC portlet supplied with the Spring Web Flow distribution), and the same problem occurs. It happens whenever I create the instance myself: for example, when I create a new instance of the CMS portlet, I get the same error, but if I use an existing instance it works. Likewise, if I deploy a portlet that declares its instance in "portlet-instances.xml", it works fine, but creating additional instances through the Admin portlet and placing them on a page fails with the following error. What am I doing wrong? Can anyone please help?

        javax.servlet.ServletException: java.lang.IllegalArgumentException: Not a canonical value SimplestHelloWorldWindow
            org.jboss.portal.server.servlet.PortalServlet.service(PortalServlet.java:278)
            javax.servlet.http.HttpServlet.service(HttpServlet.java:803)
            org.jboss.web.tomcat.filters.ReplyHeaderFilter.doFilter(ReplyHeaderFilter.java:96)

        root cause

        java.lang.IllegalArgumentException: Not a canonical value SimplestHelloWorldWindow
            org.jboss.portal.core.model.portal.PortalObjectPath$CanonicalFormat.parse(PortalObjectPath.java:357)
            org.jboss.portal.core.model.portal.PortalObjectPath.<init>(PortalObjectPath.java:161)
            org.jboss.portal.core.model.portal.PortalObjectPath.parse(PortalObjectPath.java:314)
            org.jboss.portal.core.model.portal.PortalObjectId.parse(PortalObjectId.java:158)
            org.jboss.portal.core.model.portal.PortalObjectId.parse(PortalObjectId.java:143)
            org.jboss.portal.core.model.portal.navstate.PortalObjectNavigationalStateContext.createWindowKey(PortalObjectNavigationalStateContext.java:299)
            org.jboss.portal.core.model.portal.navstate.PortalObjectNavigationalStateContext.getWindowNavigationalState(PortalObjectNavigationalStateContext.java:194)
            org.jboss.portal.core.controller.portlet.ControllerPageNavigationalState.getPortletWindowNavigationalState(ControllerPageNavigationalState.java:230)
            org.jboss.portal.core.model.portal.command.render.RenderWindowCommand.getPortletNavigationalState(RenderWindowCommand.java:121)
            org.jboss.portal.core.impl.model.content.InternalContentProvider.renderWindow(InternalContentProvider.java:211)
            org.jboss.portal.core.model.portal.command.render.RenderWindowCommand.execute(RenderWindowCommand.java:100)
            org.jboss.portal.core.controller.ControllerCommand$1.invoke(ControllerCommand.java:68)
            org.jboss.portal.common.invocation.Invocation.invokeNext(Invocation.java:131)
            org.jboss.portal.core.aspects.controller.node.EventBroadcasterInterceptor.invoke(EventBroadcasterInterceptor.java:124)
            org.jboss.portal.core.controller.ControllerInterceptor.invoke(ControllerInterceptor.java:40)
            org.jboss.portal.common.invocation.Invocation.invokeNext(Invocation.java:115)
            org.jboss.portal.core.aspects.controller.PageCustomizerInterceptor.invoke(PageCustomizerInterceptor.java:134)
            org.jboss.portal.core.controller.ControllerInterceptor.invoke(ControllerInterceptor.java:40)
            org.jboss.portal.common.invocation.Invocation.invokeNext(Invocation.java:115)
            org.jboss.portal.core.aspects.controller.PolicyEnforcementInterceptor.invoke(PolicyEnforcementInterceptor.java:78)
            org.jboss.portal.core.controller.ControllerInterceptor.invoke(ControllerInterceptor.java:40)
            org.jboss.portal.common.invocation.Invocation.invokeNext(Invocation.java:115)
            org.jboss.portal.core.aspects.controller.node.PortalNodeInterceptor.invoke(PortalNodeInterceptor.java:81)
            org.jboss.portal.core.controller.ControllerInterceptor.invoke(ControllerInterceptor.java:40)
            org.jboss.portal.common.invocation.Invocation.invokeNext(Invocation.java:115)
            org.jboss.portal.core.aspects.controller.BackwardCompatibilityInterceptor.invoke(BackwardCompatibilityInterceptor.java:48)
            org.jboss.portal.core.controller.ControllerInterceptor.invoke(ControllerInterceptor.java:40)
            org.jboss.portal.common.invocation.Invocation.invokeNext(Invocation.java:115)
            org.jboss.portal.core.aspects.controller.ControlInterceptor.invoke(ControlInterceptor.java:56)
            org.jboss.portal.core.controller.ControllerInterceptor.invoke(ControllerInterceptor.java:40)
            org.jboss.portal.common.invocation.Invocation.invokeNext(Invocation.java:115)
            org.jboss.portal.core.aspects.controller.NavigationalStateInterceptor.invoke(NavigationalStateInterceptor.java:42)
            org.jboss.portal.core.controller.ControllerInterceptor.invoke(ControllerInterceptor.java:40)
            org.jboss.portal.common.invocation.Invocation.invokeNext(Invocation.java:115)
            org.jboss.portal.core.controller.ajax.AjaxInterceptor.invoke(AjaxInterceptor.java:55)
            org.jboss.portal.core.controller.ControllerInterceptor.invoke(ControllerInterceptor.java:40)
            org.jboss.portal.common.invocation.Invocation.invokeNext(Invocation.java:115)
            org.jboss.portal.core.aspects.controller.ResourceAcquisitionInterceptor.invoke(ResourceAcquisitionInterceptor.java:50)
            org.jboss.portal.core.controller.ControllerInterceptor.invoke(ControllerInterceptor.java:40)
            org.jboss.portal.common.invocation.Invocation.invokeNext(Invocation.java:115)
            org.jboss.portal.common.invocation.Invocation.invoke(Invocation.java:157)
            org.jboss.portal.core.controller.ControllerContext.execute(ControllerContext.java:134)
            org.jboss.portal.core.model.portal.command.render.RenderWindowCommand.render(RenderWindowCommand.java:80)
            org.jboss.portal.core.model.portal.command.render.RenderPageCommand.execute(RenderPageCommand.java:222)
            org.jboss.portal.core.controller.ControllerCommand$1.invoke(ControllerCommand.java:68)
            org.jboss.portal.common.invocation.Invocation.invokeNext(Invocation.java:131)
            org.jboss.portal.core.aspects.controller.node.EventBroadcasterInterceptor.invoke(EventBroadcasterInterceptor.java:124)
            org.jboss.portal.core.controller.ControllerInterceptor.invoke(ControllerInterceptor.java:40)
            org.jboss.portal.common.invocation.Invocation.invokeNext(Invocation.java:115)
            org.jboss.portal.core.aspects.controller.PageCustomizerInterceptor.invoke(PageCustomizerInterceptor.java:134)
            org.jboss.portal.core.controller.ControllerInterceptor.invoke(ControllerInterceptor.java:40)
            org.jboss.portal.common.invocation.Invocation.invokeNext(Invocation.java:115)
            org.jboss.portal.core.aspects.controller.PolicyEnforcementInterceptor.invoke(PolicyEnforcementInterceptor.java:78)
            org.jboss.portal.core.controller.ControllerInterceptor.invoke(ControllerInterceptor.java:40)
            org.jboss.portal.common.invocation.Invocation.invokeNext(Invocation.java:115)
            org.jboss.portal.core.aspects.controller.node.PortalNodeInterceptor.invoke(PortalNodeInterceptor.java:81)
            org.jboss.portal.core.controller.ControllerInterceptor.invoke(ControllerInterceptor.java:40)
            org.jboss.portal.common.invocation.Invocation.invokeNext(Invocation.java:115)
            org.jboss.portal.core.aspects.controller.BackwardCompatibilityInterceptor.invoke(BackwardCompatibilityInterceptor.java:48)
            org.jboss.portal.core.controller.ControllerInterceptor.invoke(ControllerInterceptor.java:40)
            org.jboss.portal.common.invocation.Invocation.invokeNext(Invocation.java:115)
            org.jboss.portal.core.aspects.controller.ControlInterceptor.invoke(ControlInterceptor.java:56)
            org.jboss.portal.core.controller.ControllerInterceptor.invoke(ControllerInterceptor.java:40)
            org.jboss.portal.common.invocation.Invocation.invokeNext(Invocation.java:115)
            org.jboss.portal.core.aspects.controller.NavigationalStateInterceptor.invoke(NavigationalStateInterceptor.java:42)
            org.jboss.portal.core.controller.ControllerInterceptor.invoke(ControllerInterceptor.java:40)
            org.jboss.portal.common.invocation.Invocation.invokeNext(Invocation.java:115)
            org.jboss.portal.core.controller.ajax.AjaxInterceptor.invoke(AjaxInterceptor.java:55)
            org.jboss.portal.core.controller.ControllerInterceptor.invoke(ControllerInterceptor.java:40)
            org.jboss.portal.common.invocation.Invocation.invokeNext(Invocation.java:115)
            org.jboss.portal.core.aspects.controller.ResourceAcquisitionInterceptor.invoke(ResourceAcquisitionInterceptor.java:50)
            org.jboss.portal.core.controller.ControllerInterceptor.invoke(ControllerInterceptor.java:40)
            org.jboss.portal.common.invocation.Invocation.invokeNext(Invocation.java:115)
            org.jboss.portal.common.invocation.Invocation.invoke(Invocation.java:157)
            org.jboss.portal.core.controller.ControllerContext.execute(ControllerContext.java:134)
            org.jboss.portal.core.model.portal.PortalObjectResponseHandler.processCommandResponse(PortalObjectResponseHandler.java:80)
            org.jboss.portal.core.controller.classic.ClassicResponseHandler.processHandlers(ClassicResponseHandler.java:78)
            org.jboss.portal.core.controller.classic.ClassicResponseHandler.processCommandResponse(ClassicResponseHandler.java:53)
            org.jboss.portal.core.controller.handler.ResponseHandlerSelector.processCommandResponse(ResponseHandlerSelector.java:70)
            org.jboss.portal.core.controller.Controller.processCommandResponse(Controller.java:315)
            org.jboss.portal.core.controller.Controller.processCommand(Controller.java:303)
            org.jboss.portal.core.controller.Controller.handle(Controller.java:261)
            org.jboss.portal.server.RequestControllerDispatcher.invoke(RequestControllerDispatcher.java:51)
            org.jboss.portal.common.invocation.Invocation.invokeNext(Invocation.java:131)
            org.jboss.portal.core.cms.aspect.IdentityBindingInterceptor.invoke(IdentityBindingInterceptor.java:47)
            org.jboss.portal.server.ServerInterceptor.invoke(ServerInterceptor.java:38)
            org.jboss.portal.common.invocation.Invocation.invokeNext(Invocation.java:115)
            org.jboss.portal.server.aspects.server.ContentTypeInterceptor.invoke(ContentTypeInterceptor.java:68)
            org.jboss.portal.server.ServerInterceptor.invoke(ServerInterceptor.java:38)
            org.jboss.portal.common.invocation.Invocation.invokeNext(Invocation.java:115)
            org.jboss.portal.core.aspects.server.PortalContextPathInterceptor.invoke(PortalContextPathInterceptor.java:45)
            org.jboss.portal.server.ServerInterceptor.invoke(ServerInterceptor.java:38)
            org.jboss.portal.common.invocation.Invocation.invokeNext(Invocation.java:115)
            org.jboss.portal.core.aspects.server.LocaleInterceptor.invoke(LocaleInterceptor.java:96)
            org.jboss.portal.server.ServerInterceptor.invoke(ServerInterceptor.java:38)
            org.jboss.portal.common.invocation.Invocation.invokeNext(Invocation.java:115)
            org.jboss.portal.core.aspects.server.UserInterceptor.invoke(UserInterceptor.java:196)
            org.jboss.portal.server.ServerInterceptor.invoke(ServerInterceptor.java:38)
            org.jboss.portal.common.invocation.Invocation.invokeNext(Invocation.java:115)
            org.jboss.portal.server.aspects.server.SignOutInterceptor.invoke(SignOutInterceptor.java:98)
            org.jboss.portal.server.ServerInterceptor.invoke(ServerInterceptor.java:38)
            org.jboss.portal.common.invocation.Invocation.invokeNext(Invocation.java:115)
            org.jboss.portal.core.impl.api.user.UserEventBridgeTriggerInterceptor.invoke(UserEventBridgeTriggerInterceptor.java:65)
            org.jboss.portal.server.ServerInterceptor.invoke(ServerInterceptor.java:38)
            org.jboss.portal.common.invocation.Invocation.invokeNext(Invocation.java:115)
            org.jboss.portal.core.aspects.server.IdentityCacheInterceptor.invoke(IdentityCacheInterceptor.java:68)
            org.jboss.portal.server.ServerInterceptor.invoke(ServerInterceptor.java:38)
            org.jboss.portal.common.invocation.Invocation.invokeNext(Invocation.java:115)
            org.jboss.portal.core.aspects.server.TransactionInterceptor.org$jboss$portal$core$aspects$server$TransactionInterceptor$invoke$aop(TransactionInterceptor.java:49)
            org.jboss.portal.core.aspects.server.TransactionInterceptor$invoke_N5143606530999904530.invokeNext(TransactionInterceptor$invoke_N5143606530999904530.java)
            org.jboss.aspects.tx.TxPolicy.invokeInOurTx(TxPolicy.java:79)
            org.jboss.aspects.tx.TxInterceptor$RequiresNew.invoke(TxInterceptor.java:253)
            org.jboss.portal.core.aspects.server.TransactionInterceptor$invoke_N5143606530999904530.invokeNext(TransactionInterceptor$invoke_N5143606530999904530.java)
            org.jboss.aspects.tx.TxPolicy.invokeInOurTx(TxPolicy.java:79)
            org.jboss.aspects.tx.TxInterceptor$RequiresNew.invoke(TxInterceptor.java:262)
            org.jboss.portal.core.aspects.server.TransactionInterceptor$invoke_N5143606530999904530.invokeNext(TransactionInterceptor$invoke_N5143606530999904530.java)
            org.jboss.portal.core.aspects.server.TransactionInterceptor.invoke(TransactionInterceptor.java)
            org.jboss.portal.server.ServerInterceptor.invoke(ServerInterceptor.java:38)
            org.jboss.portal.common.invocation.Invocation.invokeNext(Invocation.java:115)
            org.jboss.portal.server.aspects.LockInterceptor$InternalLock.invoke(LockInterceptor.java:69)
            org.jboss.portal.server.aspects.LockInterceptor.invoke(LockInterceptor.java:130)
            org.jboss.portal.common.invocation.Invocation.invokeNext(Invocation.java:115)
            org.jboss.portal.common.invocation.Invocation.invoke(Invocation.java:157)
            org.jboss.portal.server.servlet.PortalServlet.service(PortalServlet.java:252)
            javax.servlet.http.HttpServlet.service(HttpServlet.java:803)
            org.jboss.web.tomcat.filters.ReplyHeaderFilter.doFilter(ReplyHeaderFilter.java:96)
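
    For context, the descriptor the poster refers to is WEB-INF/portlet-instances.xml in the portlet WAR. A minimal sketch follows; the instance and portlet names are made up, and the element layout follows my recollection of the JBoss Portal 2.7 sample descriptors, so treat it as an assumption rather than the product's documented format:

        <?xml version="1.0" encoding="UTF-8"?>
        <!-- Hypothetical example: instance-id and portlet-ref values are
             illustrative, modeled on the JBoss Portal 2.7 samples. -->
        <deployments>
           <deployment>
              <instance>
                 <instance-id>SimplestHelloWorldInstance</instance-id>
                 <portlet-ref>SimplestHelloWorldPortlet</portlet-ref>
              </instance>
           </deployment>
        </deployments>

    Instances declared this way are created at deployment time, which matches the poster's observation that descriptor-created instances render fine while Admin-created ones fail.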

    Read the article

  • Neo4j Windows Plugin Installation desktop V2.M06

    - by user2904850
    I have downloaded and installed the latest Windows V2 Community M06 build of Neo4j on a Windows 7 64-bit machine (I have tried both the 32-bit and 64-bit installs). Installation proceeds without any problems and the system runs normally, but there is no /plugins directory (in either version):

        \Program Files (x86)\Neo4j Community\
        \Program Files (x86)\Neo4j Community\.install4j
        \Program Files (x86)\Neo4j Community\bin

    I am trying to install the Spatial plugin, so I tried creating the \plugins directory. I extracted the zip file there and left the zip file in the directory, but the plugins are not found:

        C:\Users\WFN44217>curl localhost:7474/db/data/
        {
          "extensions" : { },
          "node" : "http://localhost:7474/db/data/node",
          "reference_node" : "http://localhost:7474/db/data/node/0",
          "node_index" : "http://localhost:7474/db/data/index/node",
          "relationship_index" : "http://localhost:7474/db/data/index/relationship",
          "extensions_info" : "http://localhost:7474/db/data/ext",
          "relationship_types" : "http://localhost:7474/db/data/relationship/types",
          "batch" : "http://localhost:7474/db/data/batch",
          "cypher" : "http://localhost:7474/db/data/cypher",
          "transaction" : "http://localhost:7474/db/data/transaction",
          "neo4j_version" : "2.0.0-M06"
        }

    I have tried some other plugins, but these are also not found. Any idea what might be missing?

    Extract from the log files:

        2013-10-17 11:19:31.881+0000 INFO  [o.n.k.i.DiagnosticsManager]: VM Arguments: [-Dexe4j.semaphoreName=Local\c:_program_files_neo4j_community_bin_neo4j-community.exe, -Dexe4j.isInstall4j=true, -Dexe4j.moduleName=C:\Program Files\Neo4j Community\bin\neo4j-community.exe, -Dexe4j.processCommFile=C:\Users\WFN44217\AppData\Local\Temp\e4j_p6384.tmp, -Dexe4j.tempDir=, -Dexe4j.unextractedPosition=0, -Djava.library.path=C:\Windows\system32;C:\Windows;C:\Windows\System32\Wbem;C:\Windows\System32\WindowsPowerShell\v1.0\;C:\Program Files\Microsoft\Web Platform Installer\;C:\Program Files (x86)\Microsoft ASP.NET\ASP.NET Web Pages\v1.0\;C:\Program Files (x86)\Windows Kits\8.0\Windows Performance Toolkit\;C:\Program Files\Microsoft SQL Server\110\Tools\Binn\;C:\Program Files\Microsoft SQL Server\110\DTS\Binn\;C:\Program Files (x86)\Microsoft SQL Server\110\Tools\Binn\;C:\Program Files (x86)\Microsoft SQL Server\110\Tools\Binn\ManagementStudio\;C:\Program Files (x86)\Microsoft SQL Server\110\DTS\Binn\;C:\apache-maven-3.1.1-bin\apache-maven-3.1.1\bin\;C:\Program Files\Java\jdk1.7.0_40\bin\;C:\Program Files (x86)\Git\cmd;C:\Program Files (x86)\Git\bin;c:\ikvm-7.2.4630.5\bin;C:\Program Files\nodejs\;C:\Users\WFN44217\AppData\Roaming\npm;c:\program files\neo4j community\jre\bin, -Dexe4j.consoleCodepage=cp0, -Dinstall4j.launcherId=24, -Dinstall4j.swt=false]
        2013-10-17 11:19:31.881+0000 INFO  [o.n.k.i.DiagnosticsManager]: Java classpath:
        2013-10-17 11:19:31.883+0000 INFO  [o.n.k.i.DiagnosticsManager]: [loader.0] file:/C:/Program%20Files/Neo4j%20Community/bin/neo4j-desktop-2.0.0-M06.jar
        2013-10-17 11:19:31.884+0000 INFO  [o.n.k.i.DiagnosticsManager]: [bootstrap] C:\Program Files\Neo4j Community\jre\lib\charsets.jar
        2013-10-17 11:19:31.884+0000 INFO  [o.n.k.i.DiagnosticsManager]: [classpath] C:\Program Files\Neo4j Community\.install4j\i4jruntime.jar
        2013-10-17 11:19:31.884+0000 INFO  [o.n.k.i.DiagnosticsManager]: [loader.1] file:/C:/Program%20Files/Neo4j%20Community/jre/lib/ext/jaccess.jar
        2013-10-17 11:19:31.884+0000 INFO  [o.n.k.i.DiagnosticsManager]: [loader.1] file:/C:/Program%20Files/Neo4j%20Community/jre/lib/ext/zipfs.jar
        2013-10-17 11:19:31.884+0000 INFO  [o.n.k.i.DiagnosticsManager]: [bootstrap] C:\Program Files\Neo4j Community\jre\lib\resources.jar
        2013-10-17 11:19:31.884+0000 INFO  [o.n.k.i.DiagnosticsManager]: [bootstrap] C:\Program Files\Neo4j Community\jre\lib\jfr.jar
        2013-10-17 11:19:31.884+0000 INFO  [o.n.k.i.DiagnosticsManager]: [bootstrap] C:\Program Files\Neo4j Community\jre\lib\jsse.jar
        2013-10-17 11:19:31.884+0000 INFO  [o.n.k.i.DiagnosticsManager]: [classpath] C:\Program Files\Neo4j Community\bin\neo4j-desktop-2.0.0-M06.jar
        2013-10-17 11:19:31.884+0000 INFO  [o.n.k.i.DiagnosticsManager]: [loader.1] file:/C:/Program%20Files/Neo4j%20Community/jre/lib/ext/sunec.jar
        2013-10-17 11:19:31.884+0000 INFO  [o.n.k.i.DiagnosticsManager]: [bootstrap] C:\Program Files\Neo4j Community\jre\classes
        2013-10-17 11:19:31.884+0000 INFO  [o.n.k.i.DiagnosticsManager]: [bootstrap] C:\Program Files\Neo4j Community\jre\lib\rt.jar
        2013-10-17 11:19:31.884+0000 INFO  [o.n.k.i.DiagnosticsManager]: [loader.0] file:/C:/Program%20Files/Neo4j%20Community/bin/
        2013-10-17 11:19:31.884+0000 INFO  [o.n.k.i.DiagnosticsManager]: [loader.1] file:/C:/Program%20Files/Neo4j%20Community/jre/lib/ext/sunmscapi.jar
        2013-10-17 11:19:31.884+0000 INFO  [o.n.k.i.DiagnosticsManager]: [loader.1] file:/C:/Program%20Files/Neo4j%20Community/jre/lib/ext/dns_sd.jar
        2013-10-17 11:19:31.884+0000 INFO  [o.n.k.i.DiagnosticsManager]: [loader.1] file:/C:/Program%20Files/Neo4j%20Community/jre/lib/ext/dnsns.jar
        2013-10-17 11:19:31.884+0000 INFO  [o.n.k.i.DiagnosticsManager]: [loader.1] file:/C:/Program%20Files/Neo4j%20Community/jre/lib/ext/sunjce_provider.jar
        2013-10-17 11:19:31.884+0000 INFO  [o.n.k.i.DiagnosticsManager]: [loader.1] file:/C:/Program%20Files/Neo4j%20Community/jre/lib/ext/localedata.jar
        2013-10-17 11:19:31.884+0000 INFO  [o.n.k.i.DiagnosticsManager]: [loader.1] file:/C:/Program%20Files/Neo4j%20Community/jre/lib/ext/access-bridge-64.jar
        2013-10-17 11:19:31.884+0000 INFO  [o.n.k.i.DiagnosticsManager]: [loader.0] file:/C:/Program%20Files/Neo4j%20Community/.install4j/i4jruntime.jar
        2013-10-17 11:19:31.884+0000 INFO  [o.n.k.i.DiagnosticsManager]: [bootstrap] C:\Program Files\Neo4j Community\jre\lib\sunrsasign.jar
        2013-10-17 11:19:31.884+0000 INFO  [o.n.k.i.DiagnosticsManager]: [bootstrap] C:\Program Files\Neo4j Community\jre\lib\jce.jar
        2013-10-17 11:19:31.884+0000 INFO  [o.n.k.i.DiagnosticsManager]: Library path:
        2013-10-17 11:19:31.885+0000 INFO  [o.n.k.i.DiagnosticsManager]: C:\Windows\System32
        2013-10-17 11:19:31.885+0000 INFO  [o.n.k.i.DiagnosticsManager]: C:\Windows
        2013-10-17 11:19:31.885+0000 INFO  [o.n.k.i.DiagnosticsManager]: C:\Windows\System32\wbem
        2013-10-17 11:19:31.885+0000 INFO  [o.n.k.i.DiagnosticsManager]: C:\Windows\System32\WindowsPowerShell\v1.0
        2013-10-17 11:19:31.886+0000 INFO  [o.n.k.i.DiagnosticsManager]: C:\Program Files\Microsoft\Web Platform Installer
        2013-10-17 11:19:31.886+0000 INFO  [o.n.k.i.DiagnosticsManager]: C:\Program Files (x86)\Microsoft ASP.NET\ASP.NET Web Pages\v1.0
        2013-10-17 11:19:31.886+0000 INFO  [o.n.k.i.DiagnosticsManager]: C:\Program Files (x86)\Windows Kits\8.0\Windows Performance Toolkit
        2013-10-17 11:19:31.886+0000 INFO  [o.n.k.i.DiagnosticsManager]: C:\Program Files\Microsoft SQL Server\110\Tools\Binn
        2013-10-17 11:19:31.887+0000 INFO  [o.n.k.i.DiagnosticsManager]: C:\Program Files\Microsoft SQL Server\110\DTS\Binn
        2013-10-17 11:19:31.887+0000 INFO  [o.n.k.i.DiagnosticsManager]: C:\Program Files (x86)\Microsoft SQL Server\110\Tools\Binn
        2013-10-17 11:19:31.887+0000 INFO  [o.n.k.i.DiagnosticsManager]: C:\Program Files (x86)\Microsoft SQL Server\110\Tools\Binn\ManagementStudio
        2013-10-17 11:19:31.888+0000 INFO  [o.n.k.i.DiagnosticsManager]: C:\Program Files (x86)\Microsoft SQL Server\110\DTS\Binn
        2013-10-17 11:19:31.888+0000 INFO  [o.n.k.i.DiagnosticsManager]: C:\apache-maven-3.1.1-bin\apache-maven-3.1.1\bin
        2013-10-17 11:19:31.888+0000 INFO  [o.n.k.i.DiagnosticsManager]: C:\Program Files\Java\jdk1.7.0_40\bin
        2013-10-17 11:19:31.888+0000 INFO  [o.n.k.i.DiagnosticsManager]: C:\Program Files (x86)\Git\cmd
        2013-10-17 11:19:31.889+0000 INFO  [o.n.k.i.DiagnosticsManager]: C:\Program Files (x86)\Git\bin
        2013-10-17 11:19:31.889+0000 INFO  [o.n.k.i.DiagnosticsManager]: C:\ikvm-7.2.4630.5\bin
        2013-10-17 11:19:31.889+0000 INFO  [o.n.k.i.DiagnosticsManager]: C:\Program Files\nodejs
        2013-10-17 11:19:31.889+0000 INFO  [o.n.k.i.DiagnosticsManager]: C:\Users\WFN44217\AppData\Roaming\npm
        2013-10-17 11:19:31.890+0000 INFO  [o.n.k.i.DiagnosticsManager]: C:\Program Files\Neo4j Community\jre\bin
        2013-10-17 11:19:31.890+0000 INFO  [o.n.k.i.DiagnosticsManager]: System.properties:
        2013-10-17 11:19:31.890+0000 INFO  [o.n.k.i.DiagnosticsManager]: exe4j.moduleName = C:\Program Files\Neo4j Community\bin\neo4j-community.exe
        2013-10-17 11:19:31.890+0000 INFO  [o.n.k.i.DiagnosticsManager]: exe4j.processCommFile = C:\Users\WFN44217\AppData\Local\Temp\e4j_p6384.tmp
        2013-10-17 11:19:31.890+0000 INFO  [o.n.k.i.DiagnosticsManager]: exe4j.semaphoreName = Local\c:_program_files_neo4j_community_bin_neo4j-community.exe
        2013-10-17 11:19:31.890+0000 INFO  [o.n.k.i.DiagnosticsManager]: sun.boot.library.path = c:\program files\neo4j community\jre\bin
        2013-10-17 11:19:31.890+0000 INFO  [o.n.k.i.DiagnosticsManager]: exe4j.consoleCodepage = cp0
        2013-10-17 11:19:31.890+0000 INFO  [o.n.k.i.DiagnosticsManager]: path.separator = ;
        2013-10-17 11:19:31.890+0000 INFO  [o.n.k.i.DiagnosticsManager]: file.encoding.pkg = sun.io
        2013-10-17 11:19:31.890+0000 INFO  [o.n.k.i.DiagnosticsManager]: user.country = GB
        2013-10-17 11:19:31.890+0000 INFO  [o.n.k.i.DiagnosticsManager]: user.script =
        2013-10-17 11:19:31.890+0000 INFO  [o.n.k.i.DiagnosticsManager]: sun.os.patch.level = Service Pack 1
        2013-10-17 11:19:31.891+0000 INFO  [o.n.k.i.DiagnosticsManager]: install4j.exeDir = C:\Program Files\Neo4j Community\bin\
        2013-10-17 11:19:31.891+0000 INFO  [o.n.k.i.DiagnosticsManager]: user.dir = C:\Program Files\Neo4j Community\bin
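
    An editorial aside, not from the post: note that the log shows the desktop launcher (neo4j-desktop-2.0.0-M06.jar), and the desktop build may simply not scan a plugins directory at all, which would explain the empty "extensions" map. A sanity check for a server-style install is sketched below; the layout assumption (a plugins\ folder beside bin\), the need to unpack the JARs rather than leave the zip, and the "SpatialPlugin" entry name are all my assumptions, not verified against this build:

        rem Assumed server layout: a plugins\ directory next to bin\
        cd /d "C:\Program Files (x86)\Neo4j Community"
        mkdir plugins
        rem Copy the .jar files from inside the spatial plugin zip into plugins\
        rem (the zip itself is not scanned), then restart the Neo4j service and
        rem re-check the service root:
        curl localhost:7474/db/data/
        rem A loaded server plugin should then show up under "extensions", e.g.
        rem   "extensions" : { "SpatialPlugin" : { ... } }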

    Read the article

  • Managing the layout of a Java MainFrame of Canvas3d

    - by John N
    Hi, I'm trying to organise the layout of four Canvas3D objects in a single MainFrame. I've tried using some layout managers, but none are working (or I'm doing it wrong). Can anyone give me advice, or point me to a way to display the four canvases as a grid of four? Thanks, John

        public class Main {
            public static void Main() {
                Window win = new Window();
            }
        }

        import javax.media.j3d.BranchGroup;
        import javax.media.j3d.Canvas3D;
        import javax.media.j3d.Locale;
        import javax.media.j3d.PhysicalBody;
        import javax.media.j3d.PhysicalEnvironment;
        import javax.media.j3d.Transform3D;
        import javax.media.j3d.TransformGroup;
        import javax.media.j3d.View;
        import javax.media.j3d.ViewPlatform;
        import javax.media.j3d.VirtualUniverse;
        import javax.vecmath.Vector3f;
        import com.sun.j3d.utils.picking.PickCanvas;

        public class Universe {

            boolean camera = true;
            Canvas3D canvas1, canvas2, canvas3, canvas4;
            VirtualUniverse universe;
            Locale locale;
            TransformGroup vpTrans1, vpTransRight, vpTransFront, vpTransPers;
            TransformGroup mouseTransform = null;
            View view1, view2, view3, view4;
            BranchGroup scene;
            PickCanvas pickCanvas1 = null;
            PickCanvas pickCanvas2 = null;
            PickCanvas pickCanvas3 = null;
            PickCanvas pickCanvas4 = null;
            BranchGroup obj = new BranchGroup();
            // Create a BranchGroup node for the view platform
            BranchGroup vpRoot = new BranchGroup();
            // Temp vars for cam movement

            public Universe(Canvas3D c1, Canvas3D c2, Canvas3D c3, Canvas3D c4, BranchGroup scene) {
                this.canvas1 = c1;
                this.canvas2 = c2;
                this.canvas3 = c3;
                this.canvas4 = c4;
                this.scene = scene;

                // Establish a virtual universe that has a single hi-res Locale
                universe = new VirtualUniverse();
                locale = new Locale(universe);

                // Create a PhysicalBody and PhysicalEnvironment object
                PhysicalBody body = new PhysicalBody();
                PhysicalEnvironment environment = new PhysicalEnvironment();

                // Create a View and attach the Canvas3D and the physical
                // body and environment to the view.
                view1 = new View();
                view1.addCanvas3D(c1);
                view1.addCanvas3D(c2);
                view1.addCanvas3D(c3);
                view1.addCanvas3D(c4);
                view1.setPhysicalBody(body);
                view1.setPhysicalEnvironment(environment);

                // Create a BranchGroup node for the view platform
                BranchGroup vpRoot = new BranchGroup();

                // Create a ViewPlatform object, and its associated
                // TransformGroup object, and attach it to the root of the
                // subgraph. Attach the view to the view platform.
                Transform3D t = new Transform3D();
                t.set(new Vector3f(0.0f, 0.0f, 2.0f));
                ViewPlatform vp = new ViewPlatform();
                vpTrans1 = new TransformGroup(t);
                vpTrans1.addChild(vp);
                vpRoot.addChild(vpTrans1);
                vpRoot.addChild(scene);
                view1.attachViewPlatform(vp);

                // Attach the branch graph to the universe, via the
                // Locale. The scene graph is now live!
                locale.addBranchGraph(vpRoot);
            }
        }
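
    An editorial sketch, not from the original post: a 2x2 GridLayout on a JFrame will tile four Canvas3D instances evenly. The class and variable names here are illustrative; it assumes the Java 3D utility classes are on the classpath and that each canvas is later attached to a View as in the Universe class above:

        import java.awt.GraphicsConfiguration;
        import java.awt.GridLayout;
        import javax.media.j3d.Canvas3D;
        import javax.swing.JFrame;
        import com.sun.j3d.utils.universe.SimpleUniverse;

        public class FourCanvasFrame {
            public static void main(String[] args) {
                // Ask Java 3D for a GraphicsConfiguration suitable for a Canvas3D.
                GraphicsConfiguration config = SimpleUniverse.getPreferredConfiguration();

                JFrame frame = new JFrame("Four Canvas3D grid");
                frame.setLayout(new GridLayout(2, 2, 4, 4)); // 2 rows x 2 columns, small gaps

                // One canvas per grid cell; pass these to the Universe constructor.
                for (int i = 0; i < 4; i++) {
                    frame.add(new Canvas3D(config));
                }

                frame.setSize(800, 600);
                frame.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
                frame.setVisible(true);
            }
        }

    Two side notes on the posted code: Canvas3D is a heavyweight AWT component, which is one common reason Swing layouts appear "not to work" around it, and the posted entry point is "public static void Main()", which the JVM never calls; the launcher must be "public static void main(String[] args)".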

    Read the article

  • C++ scoping error

    - by Pat Murray
    I have the following code:

        #include "Student.h"
        #include "SortedList.h"

        using namespace std;

        int main() {
            // points to the sorted list object
            SortedList *list = new SortedList;   // This is line 17

            // array to hold 100 student objects
            Student create[100];
            int num = 100000; // holds different ID numbers

            // fills an array with 100 students of various ID numbers
            for (Student &x : create) {
                x = new Student(num);
                num += 100;
            }

            // insert all students into the sorted list
            for (Student &x : create)
                list->insert(&x);

            delete list;
            return 0;
        }

    And I keep getting this compile-time error:

        main.cpp: In function ‘int main()’:
        main.cpp:17: error: ‘SortedList’ was not declared in this scope
        main.cpp:17: error: ‘list’ was not declared in this scope
        main.cpp:17: error: expected type-specifier before ‘SortedList’
        main.cpp:17: error: expected `;' before ‘SortedList’
        main.cpp:20: error: ‘Student’ was not declared in this scope
        main.cpp:20: error: expected primary-expression before ‘]’ token
        main.cpp:20: error: expected `;' before ‘create’
        main.cpp:25: error: expected `;' before ‘x’
        main.cpp:31: error: expected primary-expression before ‘for’
        main.cpp:31: error: expected `;' before ‘for’
        main.cpp:31: error: expected primary-expression before ‘for’
        main.cpp:31: error: expected `)' before ‘for’
        main.cpp:31: error: expected `;' before ‘x’
        main.cpp:34: error: type ‘<type error>’ argument given to ‘delete’, expected pointer
        main.cpp:35: error: expected primary-expression before ‘return’
        main.cpp:35: error: expected `)' before ‘return’

    My Student.cpp and SortedList.cpp files compile just fine. They both also include the .h files. I just do not understand why I get an error on that line. It seems to be a small issue, though. Any insight would be appreciated.

    UPDATE1: I originally had the .h files included, but I changed it while trying to figure out the cause of the error. The error remains with the .h files included, though.

    UPDATE2:

    SortedList.h

        #ifndef SORTEDLIST_H
        #define SORTEDLIST_H

        #include "Student.h"

        /*
         * SortedList class
         *
         * A SortedList is an ordered collection of Students. The Students are ordered
         * from lowest numbered student ID to highest numbered student ID.
         */
        class SortedList {
        public:
            SortedList();
            // Constructs an empty list.

            SortedList(const SortedList & l);
            // Constructs a copy of the given sorted list object

            ~SortedList();
            // Destructs the sorted list object

            const SortedList & operator=(const SortedList & l);
            // Defines the assignment operator between two sorted list objects

            bool insert(Student *s);
            // If a student with the same ID is not already in the list, inserts
            // the given student into the list in the appropriate place and returns
            // true. If there is already a student in the list with the same ID
            // then the list is not changed and false is returned.

            Student *find(int studentID);
            // Searches the list for a student with the given student ID. If the
            // student is found, it is returned; if it is not found, NULL is returned.

            Student *remove(int studentID);
            // Searches the list for a student with the given student ID. If the
            // student is found, the student is removed from the list and returned;
            // if no student is found with the given ID, NULL is returned.
            // Note that the Student is NOT deleted - it is returned - however,
            // the removed list node should be deleted.

            void print() const;
            // Prints out the list of students to standard output. The students are
            // printed in order of student ID (from smallest to largest), one per line

        private:
            // Since Listnodes will only be used within the SortedList class,
            // we make it private.
            struct Listnode {
                Student *student;
                Listnode *next;
            };

            Listnode *head;  // pointer to first node in the list

            static void freeList(Listnode *L);
            // Traverses through the linked list and deallocates each node

            static Listnode *copyList(Listnode *L);
            // Returns a pointer to the first node within a particular list
        };

        #endif

    Student.h

        #ifndef STUDENT_H
        #define STUDENT_H

        /*
         * Student class
         *
         * A Student object contains a student ID, the number of credits, and an
         * overall GPA.
         */
        class Student {
        public:
            Student();
            // Constructs a default student with an ID of 0, 0 credits, and 0.0 GPA.

            Student(int ID);
            // Constructs a student with the given ID, 0 credits, and 0.0 GPA.

            Student(int ID, int cr, double grPtAv);
            // Constructs a student with the given ID, number of credits, and GPA.

            Student(const Student & s);
            // Constructs a copy of another student object

            ~Student();
            // Destructs a student object

            const Student & operator=(const Student & rhs);
            // Defines the assignment operator between two student objects

            // Accessors
            int getID() const;       // returns the student ID
            int getCredits() const;  // returns the number of credits
            double getGPA() const;   // returns the GPA

            // Other methods
            void update(char grade, int cr);
            // Updates the total credits and overall GPA to take into account the
            // addition of the given letter grade in a course with the given number
            // of credits. The update is done by first converting the letter grade
            // into a numeric value (A = 4.0, B = 3.0, etc.). The new GPA is
            // calculated using the formula:
            //
            //          (oldGPA * old_total_credits) + (numeric_grade * cr)
            // newGPA = ---------------------------------------------------
            //                     old_total_credits + cr
            //
            // Finally, the total credits is updated (to old_total_credits + cr)

            void print() const;
            // Prints out the student to standard output in the format:
            // ID,credits,GPA
            // Note: the end-of-line is NOT printed after the student information

        private:
            int studentID;
            int credits;
            double GPA;
        };

        #endif
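
    An editorial aside, not part of the question: that cascade of "was not declared in this scope" errors is what the compiler emits when it never sees the class definitions (consistent with UPDATE1's experiment of removing the includes), and there is a second, independent type error in the fill loop: `new Student(num)` yields a `Student*`, which cannot be assigned to a `Student&`. A sketch with both issues addressed, assuming the headers shown above:

        #include "Student.h"
        #include "SortedList.h"

        int main() {
            SortedList *list = new SortedList;

            Student create[100];
            int num = 100000;

            // Assign a value, not a pointer: construct each Student by value
            // and let Student's operator= copy it into the array slot.
            for (Student &x : create) {
                x = Student(num);
                num += 100;
            }

            for (Student &x : create)
                list->insert(&x);

            delete list;
            return 0;
        }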

    Read the article

  • Using R to Analyze G1GC Log Files

    - by user12620111
    Introduction

    Working in Oracle Platform Integration gives an engineer opportunities to work on a wide array of technologies. My team's goal is to make Oracle applications run best on the Solaris/SPARC platform. When looking for bottlenecks in a modern application, one needs to be aware of not only how the CPUs and operating system are executing, but also the network, storage, and in some cases, the Java Virtual Machine. I was recently presented with about 1.5 GB of Java Garbage First Garbage Collector log file data. If you're not familiar with the subject, you might want to review Garbage First Garbage Collector Tuning by Monica Beckwith. The customer had been running Java HotSpot 1.6.0_31 to host a web application server. I was told that the Solaris/SPARC server was running a Java process launched using a command line that included the following flags:

        -d64 -Xms9g -Xmx9g -XX:+UseG1GC -XX:MaxGCPauseMillis=200
        -XX:InitiatingHeapOccupancyPercent=80 -XX:PermSize=256m -XX:MaxPermSize=256m
        -XX:+PrintGC -XX:+PrintGCTimeStamps -XX:+PrintHeapAtGC -XX:+PrintGCDateStamps
        -XX:+PrintFlagsFinal -XX:+DisableExplicitGC -XX:+UnlockExperimentalVMOptions
        -XX:ParallelGCThreads=8

    Several sources on the internet indicate that if I were to print out the 1.5 GB of log files, it would require enough paper to fill the bed of a pickup truck. Of course, it would be fruitless to try to scan the log files by hand. Tools will be required to summarize the contents of the log files.

    Others have encountered large Java garbage collection log files, and there are existing tools to analyze them: IBM's GC toolkit, the chewiebug GCViewer, gchisto, and HPjmeter. Instead of using one of those tools, I decided to parse the log files with standard Unix tools and analyze the data with R.

    Data Cleansing

    The log files arrived in two different formats. I guess that the difference is that one set of log files was generated using a more verbose option, maybe -XX:+PrintHeapAtGC, and the other set was generated without that option.

    Format 1

    In the log files with the less verbose format, a single trace, i.e. the report of a single garbage collection event, looks like this:

        {Heap before GC invocations=12280 (full 61):
         garbage-first heap total 9437184K, used 7499918K [0xfffffffd00000000, 0xffffffff40000000, 0xffffffff40000000)
          region size 4096K, 1 young (4096K), 0 survivors (0K)
         compacting perm gen total 262144K, used 144077K [0xffffffff40000000, 0xffffffff50000000, 0xffffffff50000000)
           the space 262144K, 54% used [0xffffffff40000000, 0xffffffff48cb3758, 0xffffffff48cb3800, 0xffffffff50000000)
        No shared spaces configured.
        2014-05-14T07:24:00.988-0700: 60586.353: [GC pause (young) 7324M->7320M(9216M), 0.1567265 secs]
        Heap after GC invocations=12281 (full 61):
         garbage-first heap total 9437184K, used 7496533K [0xfffffffd00000000, 0xffffffff40000000, 0xffffffff40000000)
          region size 4096K, 0 young (0K), 0 survivors (0K)
         compacting perm gen total 262144K, used 144077K [0xffffffff40000000, 0xffffffff50000000, 0xffffffff50000000)
           the space 262144K, 54% used [0xffffffff40000000, 0xffffffff48cb3758, 0xffffffff48cb3800, 0xffffffff50000000)
        No shared spaces configured.
        }
    A simple grep can be used to extract a summary:

        $ grep "\[GC pause (young" g1gc.log
        2014-05-13T13:24:35.091-0700: 3.109: [GC pause (young) 20M->5029K(9216M), 0.0146328 secs]
        2014-05-13T13:24:35.440-0700: 3.459: [GC pause (young) 9125K->6077K(9216M), 0.0086723 secs]
        2014-05-13T13:24:37.581-0700: 5.599: [GC pause (young) 25M->8470K(9216M), 0.0203820 secs]
        2014-05-13T13:24:42.686-0700: 10.704: [GC pause (young) 44M->15M(9216M), 0.0288848 secs]
        2014-05-13T13:24:48.941-0700: 16.958: [GC pause (young) 51M->20M(9216M), 0.0491244 secs]
        2014-05-13T13:24:56.049-0700: 24.066: [GC pause (young) 92M->26M(9216M), 0.0525368 secs]
        2014-05-13T13:25:34.368-0700: 62.383: [GC pause (young) 602M->68M(9216M), 0.1721173 secs]

    But that format wasn't easily read into R, so I needed to be a bit more tricky. I used the following Unix commands to create a summary file that was easy for R to read:

        $ echo "SecondsSinceLaunch BeforeSize AfterSize TotalSize RealTime"
        $ grep "\[GC pause (young" g1gc.log | grep -v mark | sed -e 's/[A-SU-z\(\),]/ /g' -e 's/->/ /' -e 's/: / /g' | more
        SecondsSinceLaunch BeforeSize AfterSize TotalSize RealTime
        2014-05-13T13:24:35.091-0700 3.109 20 5029 9216 0.0146328
        2014-05-13T13:24:35.440-0700 3.459 9125 6077 9216 0.0086723
        2014-05-13T13:24:37.581-0700 5.599 25 8470 9216 0.0203820
        2014-05-13T13:24:42.686-0700 10.704 44 15 9216 0.0288848
        2014-05-13T13:24:48.941-0700 16.958 51 20 9216 0.0491244
        2014-05-13T13:24:56.049-0700 24.066 92 26 9216 0.0525368
        2014-05-13T13:25:34.368-0700 62.383 602 68 9216 0.1721173

    Format 2

    In the log files with the more verbose format, a single trace, i.e. the report of a single garbage collection event, was more complicated than Format 1. Here is a text file with an example of a single G1GC trace in the second format. As you can see, it is quite complicated. It is nice that there is so much information available, but the level of detail can be overwhelming. I wrote this awk script (download) to summarize each trace on a single line.
        #!/usr/bin/env awk -f
        BEGIN {
            printf("SecondsSinceLaunch IncrementalCount FullCount UserTime SysTime RealTime BeforeSize AfterSize TotalSize\n")
        }

        ######################
        # Save count data from lines that are at the start of each G1GC trace.
        # Each trace starts out like this:
        # {Heap before GC invocations=14 (full 0):
        #  garbage-first heap total 9437184K, used 325496K [0xfffffffd00000000, 0xffffffff40000000, 0xffffffff40000000)
        ######################
        /{Heap.*full/ {
            gsub("\\)", "");
            nf = split($0, a, "=");
            split(a[2], b, " ");
            getline;
            if (match($0, "first")) {
                G1GC = 1;
                IncrementalCount = b[1];
                FullCount = substr(b[3], 1, length(b[3]) - 1);
            } else {
                G1GC = 0;
            }
        }
        ######################
        # Pull out time stamps that are in lines with this format:
        # 2014-05-12T14:02:06.025-0700: 94.312: [GC pause (young), 0.08870154 secs]
        ######################
        /GC pause/ {
            DateTime = $1;
            SecondsSinceLaunch = substr($2, 1, length($2) - 1);
        }
        ######################
        # Heap sizes are in lines that look like this:
        # [ 4842M->4838M(9216M)]
        ######################
        /\[ .*]$/ {
            gsub("\\[", "");
            gsub("\\]", "");
            gsub("->", " ");
            gsub("\\(", " ");
            gsub("\\)", " ");
            split($0, a, " ");
            if (split(a[1], b, "M") > 1) { BeforeSize = b[1] * 1024; }
            if (split(a[1], b, "K") > 1) { BeforeSize = b[1]; }
            if (split(a[2], b, "M") > 1) { AfterSize = b[1] * 1024; }
            if (split(a[2], b, "K") > 1) { AfterSize = b[1]; }
            if (split(a[3], b, "M") > 1) { TotalSize = b[1] * 1024; }
            if (split(a[3], b, "K") > 1) { TotalSize = b[1]; }
        }
        ######################
        # Emit an output line when you find input that looks like this:
        # [Times: user=1.41 sys=0.08, real=0.24 secs]
        ######################
        /\[Times/ {
            if (G1GC == 1) {
                gsub(",", "");
                split($2, a, "="); UserTime = a[2];
                split($3, a, "="); SysTime = a[2];
                split($4, a, "="); RealTime = a[2];
                print DateTime, SecondsSinceLaunch, IncrementalCount, FullCount,
                      UserTime, SysTime, RealTime, BeforeSize, AfterSize, TotalSize;
                G1GC = 0;
            }
        }

    The resulting summary is about 25X smaller than the original file, but still difficult for a human to digest:

        SecondsSinceLaunch IncrementalCount FullCount UserTime SysTime RealTime BeforeSize AfterSize TotalSize
        ...
        2014-05-12T18:36:34.669-0700: 3985.744 561 0 0.57 0.06 0.16 1724416 1720320 9437184
        2014-05-12T18:36:34.839-0700: 3985.914 562 0 0.51 0.06 0.19 1724416 1720320 9437184
        2014-05-12T18:36:35.069-0700: 3986.144 563 0 0.60 0.04 0.27 1724416 1721344 9437184
        2014-05-12T18:36:35.354-0700: 3986.429 564 0 0.33 0.04 0.09 1725440 1722368 9437184
        2014-05-12T18:36:35.545-0700: 3986.620 565 0 0.58 0.04 0.17 1726464 1722368 9437184
        2014-05-12T18:36:35.726-0700: 3986.801 566 0 0.43 0.05 0.12 1726464 1722368 9437184
        2014-05-12T18:36:35.856-0700: 3986.930 567 0 0.30 0.04 0.07 1726464 1723392 9437184
        2014-05-12T18:36:35.947-0700: 3987.023 568 0 0.61 0.04 0.26 1727488 1723392 9437184
        2014-05-12T18:36:36.228-0700: 3987.302 569 0 0.46 0.04 0.16 1731584 1724416 9437184
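
    A usage note of mine, not from the article: assuming the script above is saved as g1gc-summary.awk next to a verbose-format log named g1gc-verbose.log (both names made up), the summary.txt consumed in the next step could be produced with:

        $ awk -f g1gc-summary.awk g1gc-verbose.log > summary.txt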
    Reading the Data into R

    Once the GC log data had been cleansed, either by processing the first format with the shell pipeline, or by processing the second format with the awk script, it was easy to read the data into R:

        g1gc.df = read.csv("summary.txt", row.names = NULL,
                           stringsAsFactors=FALSE, sep="")
        str(g1gc.df)
        ## 'data.frame': 8307 obs. of 10 variables:
        ## $ row.names         : chr "2014-05-12T14:00:32.868-0700:" "2014-05-12T14:00:33.179-0700:" "2014-05-12T14:00:33.677-0700:" "2014-05-12T14:00:35.538-0700:" ...
        ## $ SecondsSinceLaunch: num 1.16 1.47 1.97 3.83 6.1 ...
        ## $ IncrementalCount  : int 0 1 2 3 4 5 6 7 8 9 ...
        ## $ FullCount         : int 0 0 0 0 0 0 0 0 0 0 ...
        ## $ UserTime          : num 0.11 0.05 0.04 0.21 0.08 0.26 0.31 0.33 0.34 0.56 ...
        ## $ SysTime           : num 0.04 0.01 0.01 0.05 0.01 0.06 0.07 0.06 0.07 0.09 ...
        ## $ RealTime          : num 0.02 0.02 0.01 0.04 0.02 0.04 0.05 0.04 0.04 0.06 ...
        ## $ BeforeSize        : int 8192 5496 5768 22528 24576 43008 34816 53248 55296 93184 ...
        ## $ AfterSize         : int 1400 1672 2557 4907 7072 14336 16384 18432 19456 21504 ...
        ## $ TotalSize         : int 9437184 9437184 9437184 9437184 9437184 9437184 9437184 9437184 9437184 9437184 ...
        head(g1gc.df)
        ##                       row.names SecondsSinceLaunch IncrementalCount
        ## 1 2014-05-12T14:00:32.868-0700:              1.161                0
        ## 2 2014-05-12T14:00:33.179-0700:              1.472                1
        ## 3 2014-05-12T14:00:33.677-0700:              1.969                2
        ## 4 2014-05-12T14:00:35.538-0700:              3.830                3
        ## 5 2014-05-12T14:00:37.811-0700:              6.103                4
        ## 6 2014-05-12T14:00:41.428-0700:              9.720                5
        ##   FullCount UserTime SysTime RealTime BeforeSize AfterSize TotalSize
        ## 1         0     0.11    0.04     0.02       8192      1400   9437184
        ## 2         0     0.05    0.01     0.02       5496      1672   9437184
        ## 3         0     0.04    0.01     0.01       5768      2557   9437184
        ## 4         0     0.21    0.05     0.04      22528      4907   9437184
        ## 5         0     0.08    0.01     0.02      24576      7072   9437184
        ## 6         0     0.26    0.06     0.04      43008     14336   9437184

    Basic Statistics

    Once the data has been read into R, simple statistics are very easy to generate. All of the numbers from high school statistics are available via simple commands. For example, generate a summary of every column:

        summary(g1gc.df)
        ##   row.names         SecondsSinceLaunch IncrementalCount   FullCount
        ##  Length:8307        Min.   :    1      Min.   :   0     Min.   :  0.0
        ##  Class :character   1st Qu.: 9977      1st Qu.:2048     1st Qu.:  0.0
        ##  Mode  :character   Median :12855      Median :4136     Median : 12.0
        ##                     Mean   :12527      Mean   :4156     Mean   : 31.6
        ##                     3rd Qu.:15758      3rd Qu.:6262     3rd Qu.: 61.0
        ##                     Max.   :55484      Max.   :8391     Max.   :113.0
        ##     UserTime        SysTime          RealTime       BeforeSize
        ##  Min.   :0.040   Min.   :0.0000   Min.   :  0.0   Min.   :   5476
        ##  1st Qu.:0.470   1st Qu.:0.0300   1st Qu.:  0.1   1st Qu.:5137920
        ##  Median :0.620   Median :0.0300   Median :  0.1   Median :6574080
        ##  Mean   :0.751   Mean   :0.0355   Mean   :  0.3   Mean   :5841855
        ##  3rd Qu.:0.920   3rd Qu.:0.0400   3rd Qu.:  0.2   3rd Qu.:7084032
        ##  Max.   :3.370   Max.   :1.5600   Max.   :488.1   Max.   :8696832
        ##    AfterSize         TotalSize
        ##  Min.   :   1380   Min.   :9437184
        ##  1st Qu.:5002752   1st Qu.:9437184
        ##  Median :6559744   Median :9437184
        ##  Mean   :5785454   Mean   :9437184
        ##  3rd Qu.:7054336   3rd Qu.:9437184
        ##  Max.   :8482816   Max.   :9437184

    Q: What is the total amount of User CPU time spent in garbage collection?

        sum(g1gc.df$UserTime)
        ## [1] 6236

    As you can see, less than two hours of CPU time was spent in garbage collection. Is that too much? To find the percentage of time spent in garbage collection, divide the number above by total_elapsed_time*CPU_count. In this case, there are a lot of CPUs, and it turns out that the overall amount of CPU time spent in garbage collection isn't a problem when viewed in isolation.

    When calculating rates, i.e. events per unit time, you need to ask yourself if the rate is homogeneous across the time period in the log file. Does the log file include spikes of high activity that should be separately analyzed? Averaging in data from nights and weekends with data from business hours may alias problems. If you have a reason to suspect that the garbage collection rates include peaks and valleys that need independent analysis, see the "Time Series" section, below.
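
    A small worked example of that percentage calculation (my addition, not the author's; the CPU count is a placeholder, not a figure from the article):

        # Assumptions: the log covers the whole run, so elapsed time is
        # approximately max(SecondsSinceLaunch); the CPU count of 64 is made up.
        total.elapsed.seconds <- max(g1gc.df$SecondsSinceLaunch)  # ~55484 here
        cpu.count <- 64                                           # placeholder
        100 * sum(g1gc.df$UserTime) / (total.elapsed.seconds * cpu.count)
        ## roughly 0.18 percent under these assumptions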
    Q: How much garbage is collected on each pass?

    The amount of heap space that is recovered per GC pass is surprisingly low:

    - At least one collection didn't recover any data. (“Min.=0”)
    - 25% of the passes recovered 3MB or less. (“1st Qu.=3072”)
    - Half of the GC passes recovered 4MB or less. (“Median=4096”)
    - The average amount recovered was 56MB. (“Mean=56390”)
    - 75% of the passes recovered 36MB or less. (“3rd Qu.=36860”)
    - At least one pass recovered 2GB. (“Max.=2121000”)

        g1gc.df$Delta = g1gc.df$BeforeSize - g1gc.df$AfterSize
        summary(g1gc.df$Delta)
        ##    Min. 1st Qu.  Median    Mean 3rd Qu.    Max.
        ##       0    3070    4100   56400   36900 2120000

    Q: What is the maximum User CPU time for a single collection?

    The worst garbage collection (“Max.”) is many standard deviations away from the mean. The data appears to be right skewed.

        summary(g1gc.df$UserTime)
        ##  Min. 1st Qu. Median   Mean 3rd Qu.   Max.
        ## 0.040   0.470  0.620  0.751   0.920  3.370
        sd(g1gc.df$UserTime)
        ## [1] 0.3966

    Basic Graphics

    Once the data is in R, it is trivial to plot the data with formats including dot plots, line charts, bar charts (simple, stacked, grouped), pie charts, boxplots, scatter plots, histograms, and kernel density plots.

    Histogram of User CPU Time per Collection

    I don't think that this graph requires any explanation.

        hist(g1gc.df$UserTime,
             main="User CPU Time per Collection",
             xlab="Seconds", ylab="Frequency")

    Box plot to identify outliers

    When the initial data is viewed with a box plot, you can see the one crazy outlier in the real time per GC. Save this data point for future analysis and drop the outlier so that it's not throwing off our statistics. Now the box plot shows many outliers, which will be examined later, using time series analysis. Notice that the scale of the x-axis changes drastically once the crazy outlier is removed.

        par(mfrow=c(2,1))
        boxplot(g1gc.df$UserTime, g1gc.df$SysTime, g1gc.df$RealTime,
                main="Box Plot of Time per GC\n(dominated by a crazy outlier)",
                names=c("usr","sys","elapsed"),
                xlab="Seconds per GC", ylab="Time (Seconds)",
                horizontal = TRUE, outcol="red")
        crazy.outlier.df = g1gc.df[g1gc.df$RealTime > 400,]
        g1gc.df = g1gc.df[g1gc.df$RealTime < 400,]
        boxplot(g1gc.df$UserTime, g1gc.df$SysTime, g1gc.df$RealTime,
                main="Box Plot of Time per GC\n(crazy outlier excluded)",
                names=c("usr","sys","elapsed"),
                xlab="Seconds per GC", ylab="Time (Seconds)",
                horizontal = TRUE, outcol="red")
        box(which = "outer", lty = "solid")

    Here is the crazy outlier for future analysis:

        crazy.outlier.df
        ##                          row.names SecondsSinceLaunch IncrementalCount
        ## 8233 2014-05-12T23:15:43.903-0700:              20741             8316
        ##      FullCount UserTime SysTime RealTime BeforeSize AfterSize TotalSize
        ## 8233       112     0.55    0.42    488.1    8381440   8235008   9437184
        ##       Delta
        ## 8233 146432
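
    A quick numeric check of that right skew (my addition, not the author's): with base R's quantile(), a 99th percentile far above the median makes the long right tail visible without a plot:

        # Upper tail of per-GC user CPU time.
        quantile(g1gc.df$UserTime, c(0.50, 0.95, 0.99))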
    R Time Series Data

    To analyze the garbage collection as a time series, I'll use Z's Ordered Observations (zoo): "zoo is the creator for an S3 class of indexed totally ordered observations which includes irregular time series."

        require(zoo)
        ## Loading required package: zoo
        ##
        ## Attaching package: 'zoo'
        ##
        ## The following objects are masked from 'package:base':
        ##
        ##     as.Date, as.Date.numeric
        head(g1gc.df[,1])
        ## [1] "2014-05-12T14:00:32.868-0700:" "2014-05-12T14:00:33.179-0700:"
        ## [3] "2014-05-12T14:00:33.677-0700:" "2014-05-12T14:00:35.538-0700:"
        ## [5] "2014-05-12T14:00:37.811-0700:" "2014-05-12T14:00:41.428-0700:"
        options("digits.secs"=3)
        times = as.POSIXct(g1gc.df[,1], format="%Y-%m-%dT%H:%M:%OS%z:")
        g1gc.z = zoo(g1gc.df[,-c(1)], order.by=times)
        head(g1gc.z)
        ##                         SecondsSinceLaunch IncrementalCount FullCount
        ## 2014-05-12 17:00:32.868              1.161                0         0
        ## 2014-05-12 17:00:33.178              1.472                1         0
        ## 2014-05-12 17:00:33.677              1.969                2         0
        ## 2014-05-12 17:00:35.538              3.830                3         0
        ## 2014-05-12 17:00:37.811              6.103                4         0
        ## 2014-05-12 17:00:41.427              9.720                5         0
        ##                         UserTime SysTime RealTime BeforeSize AfterSize
        ## 2014-05-12 17:00:32.868     0.11    0.04     0.02       8192      1400
        ## 2014-05-12 17:00:33.178     0.05    0.01     0.02       5496      1672
        ## 2014-05-12 17:00:33.677     0.04    0.01     0.01       5768      2557
        ## 2014-05-12 17:00:35.538     0.21    0.05     0.04      22528      4907
        ## 2014-05-12 17:00:37.811     0.08    0.01     0.02      24576      7072
        ## 2014-05-12 17:00:41.427     0.26    0.06     0.04      43008     14336
        ##                         TotalSize Delta
        ## 2014-05-12 17:00:32.868   9437184  6792
        ## 2014-05-12 17:00:33.178   9437184  3824
        ## 2014-05-12 17:00:33.677   9437184  3211
        ## 2014-05-12 17:00:35.538   9437184 17621
        ## 2014-05-12 17:00:37.811   9437184 17504
        ## 2014-05-12 17:00:41.427   9437184 28672

    Example of Two Benchmark Runs in One Log File

    The data in the following graph is from a different log file, not the one of primary interest to this article. I'm including this image because it is an example of idle periods followed by busy periods. It would be uninteresting to average the rate of garbage collection over the entire log file period. More interesting would be the rate of garbage collection in the two busy periods. Are they the same or different? Your production data may be similar, for example, bursts when employees return from lunch and idle times on weekend evenings, etc. Once the data is in an R time series, you can analyze isolated time windows.

    Clipping the Time Series data

    Flashing back to our test case... Viewing the data as a time series is interesting. You can see that the work-intensive time period is between 9:00 PM and 3:00 AM. Let's clip the data to the interesting period:

        par(mfrow=c(2,1))
        plot(g1gc.z$UserTime, type="h",
             main="User Time per GC\nTime: Complete Log File",
             xlab="Time of Day", ylab="CPU Seconds per GC",
             col="#1b9e77")
        clipped.g1gc.z = window(g1gc.z,
                                start=as.POSIXct("2014-05-12 21:00:00"),
                                end=as.POSIXct("2014-05-13 03:00:00"))
        plot(clipped.g1gc.z$UserTime, type="h",
             main="User Time per GC\nTime: Limited to Benchmark Execution",
             xlab="Time of Day", ylab="CPU Seconds per GC",
             col="#1b9e77")
        box(which = "outer", lty = "solid")

    Cumulative Incremental and Full GC count

    Here is the cumulative incremental and full GC count. When the line is very steep, it indicates that the GCs are repeating very quickly. Notice that the scale on the Y axis is different for full vs. incremental.

        plot(clipped.g1gc.z[,c(2:3)],
             main="Cumulative Incremental and Full GC count",
             xlab="Time of Day", col="#1b9e77")
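
    Another small addition of mine: once a busy window has been isolated, the question raised earlier (are the two busy periods alike?) reduces to comparing rates, for example collections per minute within each window:

        # GC events per minute inside the clipped window; zoo keeps the time index,
        # so start() and end() give the window boundaries.
        window.minutes <- as.numeric(difftime(end(clipped.g1gc.z),
                                              start(clipped.g1gc.z),
                                              units = "mins"))
        nrow(clipped.g1gc.z) / window.minutes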
alive, during each garbage collection. This may indicate that the application has a memory leak, or it may indicate that the application has a very large memory footprint. Typically, an application's memory footprint plateaus in the early stage of execution. One would expect this graph to have a flat top. The steep decline in the heap space may indicate that the application crashed after 2:00. The second graph shows that the outliers in real execution time, discussed above, occur near 2:00, when the Java heap seems to be quite full. The third graph shows that Full GCs are infrequent during the first few hours of execution. The rate of Full GCs (the slope of the cumulative Full GC line) changes near midnight. plot(clipped.g1gc.z[,c("AfterSize","RealTime","FullCount")], xlab="Time of Day", col=c("#1b9e77","red","#1b9e77")) GC Analysis of heap recovered Each GC trace includes the amount of heap space in use before and after the individual GC event. During garbage collection, unreferenced objects are identified, the space holding them is freed, and thus the difference between before and after usage indicates how much space has been freed. The following box plot and bar chart both demonstrate the same point - the amount of heap space freed per garbage collection is surprisingly low. par(mfrow=c(2,1)) boxplot(as.vector(clipped.g1gc.z$Delta), main="Amount of Heap Recovered per GC Pass", xlab="Size in KB", horizontal = TRUE, col="red") hist(as.vector(clipped.g1gc.z$Delta), main="Amount of Heap Recovered per GC Pass", xlab="Size in KB", breaks=100, col="red") box(which = "outer", lty = "solid") This graph is the most interesting. The dark blue area shows how much heap is occupied by referenced Java objects. This represents memory that holds live data. The red fringe at the top shows how much data was recovered after each garbage collection. barplot(clipped.g1gc.z[,c("AfterSize","Delta")], col=c("#7570b3","#e7298a"), xlab="Time of Day", border=NA) legend("topleft", c("Live Objects","Heap Recovered on GC"), fill=c("#7570b3","#e7298a")) box(which = "outer", lty = "solid") When I discuss the data in the log files with the customer, I will ask for an explanation for the large amount of referenced data resident in the Java heap. There are two possibilities: There is a memory leak and the amount of space required to hold referenced objects will continue to grow, limited only by the maximum heap size. After the maximum heap size is reached, the JVM will throw an “Out of Memory” exception every time that the application tries to allocate a new object. If this is the case, the application needs to be debugged to identify why old objects are referenced when they are no longer needed. The application has a legitimate requirement to keep a large amount of data in memory. The customer may want to further increase the maximum heap size. Another possible solution would be to partition the application across multiple cluster nodes, where each node has responsibility for managing a unique subset of the data. Conclusion In conclusion, R is a very powerful tool for the analysis of Java garbage collection log files. The primary difficulty is data cleansing so that information can be read into an R data frame. Once the data has been read into R, a rich set of tools may be used for thorough evaluation.
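To make that cleansing step concrete, here is a minimal sketch, assuming the raw G1 log has already been reduced (for example with grep/sed) to one comma-separated record per collection using the column names above; the file name is illustrative:

g1gc.df <- read.csv("g1gc-clean.csv", header=TRUE, stringsAsFactors=FALSE)
g1gc.df$Delta <- g1gc.df$BeforeSize - g1gc.df$AfterSize  # heap freed per GC
summary(g1gc.df$Delta)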

    Read the article

  • Neo4j OutOfMemory problem

    - by Edward83
Hi! This is the source code of my Main.java. It was grabbed from the neo4j-apoc-1.0 examples. The goal of the modification is to store 1M records of 2 nodes and 1 relation: package javaapplication2; import org.neo4j.graphdb.GraphDatabaseService; import org.neo4j.graphdb.Node; import org.neo4j.graphdb.RelationshipType; import org.neo4j.graphdb.Transaction; import org.neo4j.kernel.EmbeddedGraphDatabase; public class Main { private static final String DB_PATH = "neo4j-store-1M"; private static final String NAME_KEY = "name"; private static enum ExampleRelationshipTypes implements RelationshipType { EXAMPLE } public static void main(String[] args) { GraphDatabaseService graphDb = null; try { System.out.println( "Init database..." ); graphDb = new EmbeddedGraphDatabase( DB_PATH ); registerShutdownHook( graphDb ); System.out.println( "Start of creating database..." ); int valIndex = 0; for(int i=0; i<1000; ++i) { for(int j=0; j<1000; ++j) { Transaction tx = graphDb.beginTx(); try { Node firstNode = graphDb.createNode(); firstNode.setProperty( NAME_KEY, "Hello" + valIndex ); Node secondNode = graphDb.createNode(); secondNode.setProperty( NAME_KEY, "World" + valIndex ); firstNode.createRelationshipTo( secondNode, ExampleRelationshipTypes.EXAMPLE ); tx.success(); ++valIndex; } finally { tx.finish(); } } } System.out.println("Ok, client processing finished!"); } finally { System.out.println( "Shutting down database ..." ); graphDb.shutdown(); } } private static void registerShutdownHook( final GraphDatabaseService graphDb ) { // Registers a shutdown hook for the Neo4j instance so that it // shuts down nicely when the VM exits (even if you "Ctrl-C" the // running example before it's completed) Runtime.getRuntime().addShutdownHook( new Thread() { @Override public void run() { graphDb.shutdown(); } } ); } } After a few iterations (around 150K) I got this error message: "java.lang.OutOfMemoryError: Java heap space at java.nio.HeapByteBuffer.(HeapByteBuffer.java:39) at java.nio.ByteBuffer.allocate(ByteBuffer.java:312) at org.neo4j.kernel.impl.nioneo.store.PlainPersistenceWindow.(PlainPersistenceWindow.java:30) at org.neo4j.kernel.impl.nioneo.store.PersistenceWindowPool.allocateNewWindow(PersistenceWindowPool.java:534) at org.neo4j.kernel.impl.nioneo.store.PersistenceWindowPool.refreshBricks(PersistenceWindowPool.java:430) at org.neo4j.kernel.impl.nioneo.store.PersistenceWindowPool.acquire(PersistenceWindowPool.java:122) at org.neo4j.kernel.impl.nioneo.store.CommonAbstractStore.acquireWindow(CommonAbstractStore.java:459) at org.neo4j.kernel.impl.nioneo.store.AbstractDynamicStore.updateRecord(AbstractDynamicStore.java:240) at org.neo4j.kernel.impl.nioneo.store.PropertyStore.updateRecord(PropertyStore.java:209) at org.neo4j.kernel.impl.nioneo.xa.Command$PropertyCommand.execute(Command.java:513) at org.neo4j.kernel.impl.nioneo.xa.NeoTransaction.doCommit(NeoTransaction.java:443) at org.neo4j.kernel.impl.transaction.xaframework.XaTransaction.commit(XaTransaction.java:316) at org.neo4j.kernel.impl.transaction.xaframework.XaResourceManager.commit(XaResourceManager.java:399) at org.neo4j.kernel.impl.transaction.xaframework.XaResourceHelpImpl.commit(XaResourceHelpImpl.java:64) at org.neo4j.kernel.impl.transaction.TransactionImpl.doCommit(TransactionImpl.java:514) at org.neo4j.kernel.impl.transaction.TxManager.commit(TxManager.java:571) at org.neo4j.kernel.impl.transaction.TxManager.commit(TxManager.java:543) at org.neo4j.kernel.impl.transaction.TransactionImpl.commit(TransactionImpl.java:102) at
org.neo4j.kernel.EmbeddedGraphDbImpl$TransactionImpl.finish(EmbeddedGraphDbImpl.java:329) at javaapplication2.Main.main(Main.java:62) 28.05.2010 9:52:14 org.neo4j.kernel.impl.nioneo.store.PersistenceWindowPool logWarn WARNING: [neo4j-store-1M\neostore.propertystore.db.strings] Unable to allocate direct buffer" What did I do wrong, and how can I fix it? Tested on Windows XP 32-bit SP3. Maybe the solution is a custom configuration? Thanks for any advice!
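One hedged direction that matches the "custom configuration" guess: the allocation is failing inside Neo4j's memory-mapped persistence windows, so on a 32-bit JVM it may help to cap the mapped_memory settings. The keys below mirror the store-file names from the warning; the sizes are illustrative, not tested values:

import java.util.HashMap;
import java.util.Map;
import org.neo4j.graphdb.GraphDatabaseService;
import org.neo4j.kernel.EmbeddedGraphDatabase;

public class TunedMain {
    public static void main(String[] args) {
        // Hypothetical sizing for a constrained 32-bit heap; tune to taste.
        Map<String, String> config = new HashMap<String, String>();
        config.put("neostore.nodestore.db.mapped_memory", "20M");
        config.put("neostore.relationshipstore.db.mapped_memory", "50M");
        config.put("neostore.propertystore.db.mapped_memory", "50M");
        config.put("neostore.propertystore.db.strings.mapped_memory", "30M");
        // Same store path as the question's DB_PATH.
        GraphDatabaseService graphDb = new EmbeddedGraphDatabase("neo4j-store-1M", config);
        try {
            // ... create nodes and relationships exactly as in Main.main() ...
        } finally {
            graphDb.shutdown();
        }
    }
}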

    Read the article

  • Preventing multiple repeat selection of synchronized Controls ?

    - by BillW
The working code sample here synchronizes (single) selection in a TreeView, ListView, and ComboBox via the use of lambda expressions in a dictionary, where the Key in the dictionary is a Control and the Value of each Key is an Action<int>. Where I am stuck is that the code that sets the selection in the various controls executes multiple times in an unexpected way: it's not recursing, and there's no StackOverflow error happening, but I would like to figure out why the current strategy for preventing repeated selection of the same controls is not working. Perhaps the real problem here is distinguishing between a selection update triggered by the end-user and a selection update triggered by the code that synchronizes the other controls? Note: I've been experimenting with using Delegates, and forms of Delegates like Action<T>, to insert executable code in Dictionaries: I "learn best" by posing programming "challenges" to myself, and implementing them, as well as studying, at the same time, the "golden words" of luminaries like Skeet, McDonald, Liberty, Troelsen, Sells, Richter. Note: Appended to this question/code, for "deep background," is a statement of how I used to do things in pre-C# 3.0 days, when it seemed like I did need to use explicit measures to prevent recursion when synchronizing selection. Code: Assume a WinForms standard TreeView, ListView, ComboBox, all with the same identical set of entries (i.e., the TreeView has only root nodes; the ListView, in Details View, has one Column). private Dictionary<Control, Action<int>> ControlToAction = new Dictionary<Control, Action<int>>(); private void Form1_Load(object sender, EventArgs e) { // add the Controls to be synchronized to the Dictionary // with appropriate Action<int> lambda expressions ControlToAction.Add(treeView1, (i => { treeView1.SelectedNode = treeView1.Nodes[i]; })); ControlToAction.Add(listView1, (i => { listView1.Items[i].Selected = true; })); ControlToAction.Add(comboBox1, (i => { comboBox1.SelectedIndex = i; })); } private void synchronizeSelection(int i, Control currentControl) { foreach(Control theControl in ControlToAction.Keys) { // skip the 'current control' if (theControl == currentControl) continue; // for debugging only Console.WriteLine(theControl.Name + " synchronized"); // execute the Action<int> associated with the Control ControlToAction[theControl](i); } } private void treeView1_AfterSelect(object sender, TreeViewEventArgs e) { synchronizeSelection(e.Node.Index, treeView1); } private void listView1_SelectedIndexChanged(object sender, EventArgs e) { // weed out ListView SelectedIndexChanged firing // with SelectedIndices having a Count of #0 if (listView1.SelectedIndices.Count > 0) { synchronizeSelection(listView1.SelectedIndices[0], listView1); } } private void comboBox1_SelectedValueChanged(object sender, EventArgs e) { if (comboBox1.SelectedIndex > -1) { synchronizeSelection(comboBox1.SelectedIndex, comboBox1); } } background: pre-C# 3.0 It seems like, back in pre-C# 3.0 days, I was always using a boolean flag to prevent recursion when multiple controls were updated.
For example, I'd typically have code like this for synchronizing a TreeView and ListView, assuming each Item in the ListView was synchronized with a root-level node of the TreeView via a common index: // assume ListView is in 'Details View,' has a single column, // MultiSelect = false // FullRowSelect = true // HideSelection = false; // assume TreeView // HideSelection = false // FullRowSelect = true // form scoped variable private bool dontRecurse = false; private void treeView1_AfterSelect(object sender, TreeViewEventArgs e) { if(dontRecurse) return; dontRecurse = true; listView1.Items[e.Node.Index].Selected = true; dontRecurse = false; } private void listView1_SelectedIndexChanged(object sender, EventArgs e) { if(dontRecurse) return; // weed out ListView SelectedIndexChanged firing // with SelectedIndices having a Count of #0 if (listView1.SelectedIndices.Count > 0) { dontRecurse = true; treeView1.SelectedNode = treeView1.Nodes[listView1.SelectedIndices[0]]; dontRecurse = false; } } Then it seems, somewhere around Framework 3~3.5, I could get rid of the code to suppress recursion, and there was no recursion (at least not when synchronizing a TreeView and a ListView). By that time it had become a "habit" to use a boolean flag to prevent recursion, and that may have had to do with using a certain third party control.
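A hedged sketch against the question's own names (not a tested fix): make each Action<int> a no-op when its control is already at the requested index. The cascade then dies after one hop, because a synchronized control whose selection is already correct never re-fires its changed event with a new value:

ControlToAction.Add(treeView1, i =>
{
    // only touch the TreeView when it is not already on node i
    if (treeView1.SelectedNode == null || treeView1.SelectedNode.Index != i)
        treeView1.SelectedNode = treeView1.Nodes[i];
});
ControlToAction.Add(listView1, i =>
{
    if (!listView1.Items[i].Selected)
        listView1.Items[i].Selected = true;
});
ControlToAction.Add(comboBox1, i =>
{
    if (comboBox1.SelectedIndex != i)
        comboBox1.SelectedIndex = i;
});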

    Read the article

  • puppet master REST API returns 403 when running under passenger works when master runs from command line

    - by Anadi Misra
I am using the standard auth.conf provided in the Puppet install for the puppet master, which is running through Passenger under Nginx. However, for most of the catalog, file, and certificate requests I get a 403 response. ### Authenticated paths - these apply only when the client ### has a valid certificate and is thus authenticated # allow nodes to retrieve their own catalog path ~ ^/catalog/([^/]+)$ method find allow $1 # allow nodes to retrieve their own node definition path ~ ^/node/([^/]+)$ method find allow $1 # allow all nodes to access the certificates services path ~ ^/certificate_revocation_list/ca method find allow * # allow all nodes to store their reports path /report method save allow * # unconditionally allow access to all file services # which means in practice that fileserver.conf will # still be used path /file allow * ### Unauthenticated ACL, for clients for which the current master doesn't ### have a valid certificate; we allow authenticated users, too, because ### there isn't a great harm in letting that request through. # allow access to the master CA path /certificate/ca auth any method find allow * path /certificate/ auth any method find allow * path /certificate_request auth any method find, save allow * path /facts auth any method find, search allow * # this one is not strictly necessary, but it has the merit # of showing the default policy, which is deny everything else path / auth any The Puppet master, however, does not seem to be following this, as I get this error on the client: [amisr1@blramisr195602 ~]$ sudo puppet agent --no-daemonize --verbose --server bangvmpllda02.XXXXX.com [sudo] password for amisr1: Starting Puppet client version 3.0.1 Warning: Unable to fetch my node definition, but the agent run will continue: Warning: Error 403 on SERVER: Forbidden request: XX.XXX.XX.XX(XX.XXX.XX.XX) access to /certificate_revocation_list/ca [find] at :110 Info: Retrieving plugin Error: /File[/var/lib/puppet/lib]: Failed to generate additional resources using 'eval_generate: Error 403 on SERVER: Forbidden request: XX.XXX.XX.XX(XX.XXX.XX.XX) access to /file_metadata/plugins [search] at :110 Error: /File[/var/lib/puppet/lib]: Could not evaluate: Error 403 on SERVER: Forbidden request: XX.XXX.XX.XX(XX.XXX.XX.XX) access to /file_metadata/plugins [find] at :110 Could not retrieve file metadata for puppet://devops.XXXXX.com/plugins: Error 403 on SERVER: Forbidden request: XX.XXX.XX.XX(XX.XXX.XX.XX) access to /file_metadata/plugins [find] at :110 Error: Could not retrieve catalog from remote server: Error 403 on SERVER: Forbidden request: XX.XXX.XX.XX(XX.XXX.XX.XX) access to /catalog/blramisr195602.XXXXX.com [find] at :110 Using cached catalog Error: Could not retrieve catalog; skipping run Error: Could not send report: Error 403 on SERVER: Forbidden request: XX.XXX.XX.XX(XX.XXX.XX.XX) access to /report/blramisr195602.XXXXX.com [save] at :110 and the server logs show XX.XXX.XX.XX - - [10/Dec/2012:14:46:52 +0530] "GET /production/certificate_revocation_list/ca? HTTP/1.1" 403 102 "-" "Ruby" XX.XXX.XX.XX - - [10/Dec/2012:14:46:52 +0530] "GET /production/file_metadatas/plugins?links=manage&recurse=true&&ignore=---+%0A++-+%22.svn%22%0A++-+CVS%0A++-+%22.git%22&checksum_type=md5 HTTP/1.1" 403 95 "-" "Ruby" XX.XXX.XX.XX - - [10/Dec/2012:14:46:52 +0530] "GET /production/file_metadata/plugins?
HTTP/1.1" 403 93 "-" "Ruby" XX.XXX.XX.XX - - [10/Dec/2012:14:46:53 +0530] "POST /production/catalog/blramisr195602.XXXXX.com HTTP/1.1" 403 106 "-" "Ruby" XX.XXX.XX.XX - - [10/Dec/2012:14:46:53 +0530] "PUT /production/report/blramisr195602.XXXXX.com HTTP/1.1" 403 105 "-" "Ruby" thefile server conf file is as follows (and goin by what they say on puppet site, It is better to regulate access in auth.conf for reaching file server and then allow file server to server all) [files] path /apps/puppet/files allow * [private] path /apps/puppet/private/%H allow * [modules] allow * I am using server and client version 3 Nginx has been compiled using the following options nginx version: nginx/1.3.9 built by gcc 4.4.6 20120305 (Red Hat 4.4.6-4) (GCC) TLS SNI support enabled configure arguments: --prefix=/apps/nginx --conf-path=/apps/nginx/nginx.conf --pid-path=/apps/nginx/run/nginx.pid --error-log-path=/apps/nginx/logs/error.log --http-log-path=/apps/nginx/logs/access.log --with-http_ssl_module --with-http_gzip_static_module --add-module=/usr/lib/ruby/gems/1.8/gems/passenger-3.0.18/ext/nginx --add-module=/apps/Downloads/nginx/nginx-auth-ldap-master/ and the standard nginx puppet master conf server { ssl on; listen 8140 ssl; server_name _; passenger_enabled on; passenger_set_cgi_param HTTP_X_CLIENT_DN $ssl_client_s_dn; passenger_set_cgi_param HTTP_X_CLIENT_VERIFY $ssl_client_verify; passenger_min_instances 5; access_log logs/puppet_access.log; error_log logs/puppet_error.log; root /apps/nginx/html/rack/public; ssl_certificate /var/lib/puppet/ssl/certs/bangvmpllda02.XXXXXX.com.pem; ssl_certificate_key /var/lib/puppet/ssl/private_keys/bangvmpllda02.XXXXXX.com.pem; ssl_crl /var/lib/puppet/ssl/ca/ca_crl.pem; ssl_client_certificate /var/lib/puppet/ssl/certs/ca.pem; ssl_ciphers SSLv2:-LOW:-EXPORT:RC4+RSA; ssl_prefer_server_ciphers on; ssl_verify_client optional; ssl_verify_depth 1; ssl_session_cache shared:SSL:128m; ssl_session_timeout 5m; } Puppet is picking up the correct settings from the files mentioned because config print command points to /etc/puppet [amisr1@bangvmpllDA02 puppet]$ sudo puppet config print | grep conf async_storeconfigs = false authconfig = /etc/puppet/namespaceauth.conf autosign = /etc/puppet/autosign.conf catalog_cache_terminus = store_configs confdir = /etc/puppet config = /etc/puppet/puppet.conf config_file_name = puppet.conf config_version = "" configprint = all configtimeout = 120 dblocation = /var/lib/puppet/state/clientconfigs.sqlite3 deviceconfig = /etc/puppet/device.conf fileserverconfig = /etc/puppet/fileserver.conf genconfig = false hiera_config = /etc/puppet/hiera.yaml localconfig = /var/lib/puppet/state/localconfig name = config rest_authconfig = /etc/puppet/auth.conf storeconfigs = true storeconfigs_backend = puppetdb tagmap = /etc/puppet/tagmail.conf thin_storeconfigs = false I checked the firewall rules on this VM; 80, 443, 8140, 3000 are allowed. Do I still have to tweak any specifics to auth.conf for getting this to work?

    Read the article

  • xslt apply-templates in second level

    - by m00sila
I cannot wrap <panel> tags around the second-level individual items as shown in the expected output below. With the XSLT I have written, all 1.x items end up in a single node. Please help me. Source xml <root> <step id="1"> <content> <text> 1.0 Sample first level step text </text> </content> <content/> <step> <content> <text> 1.1 Sample second level step text </text> </content> </step> <step> <content> <text> 1.2 Sample second level step text </text> </content> </step> <step> <content> <text> 1.3 Sample second level step text </text> </content> </step> </step> </root> Expected output <panel> <panel> 1.0 Sample first level step text </panel> <panel> 1.1 Sample second level step text </panel> <panel> 1.2 Sample second level step text </panel> <panel> 1.3 Sample second level step text </panel> </panel> My XSLT <xsl:template match="/"> <panel> <xsl:apply-templates/> </panel> </xsl:template> <xsl:template match="root/step" > <panel> <panel> <xsl:apply-templates select ="content/text/node()"></xsl:apply-templates> </panel> <panel> <xsl:apply-templates select ="step/content/text/node()"></xsl:apply-templates> </panel> </panel> </xsl:template>
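For comparison, a sketch of one way to produce the flat expected output: give step a single template that emits one panel for its own text and then recurses into its child steps (normalize-space is only there to trim the sample's whitespace):

<xsl:template match="/root">
  <panel>
    <xsl:apply-templates select="step"/>
  </panel>
</xsl:template>

<xsl:template match="step">
  <panel>
    <xsl:value-of select="normalize-space(content/text)"/>
  </panel>
  <xsl:apply-templates select="step"/>
</xsl:template>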

    Read the article

  • Boost::Spirit::Qi autorules -- avoiding repeated copying of AST data structures

    - by phooji
I've been using Qi and Karma to do some processing on several small languages. Most of the grammars are pretty small (20-40 rules). I've been able to use autorules almost exclusively, so my parse trees consist entirely of variants, structs, and std::vectors. This setup works great for the common case: 1) parse something (Qi), 2) make minor manipulations to the parse tree (visitor), and 3) output something (Karma). However, I'm concerned about what will happen if I want to make complex structural changes to a syntax tree, like moving big subtrees around. Consider the following toy example: A grammar for s-expr-style logical expressions that uses autorules... // Inside grammar class; rule names match struct names... pexpr %= pand | por | pnot | var | bconst; pand %= lit("(and ") >> (pexpr % lit(" ")) >> ")"; por %= lit("(or ") >> (pexpr % lit(" ")) >> ")"; pnot %= lit("(not ") >> pexpr >> ")"; ... which leads to a parse tree representation that looks like this... struct var { std::string name; }; struct bconst { bool val; }; struct pand; struct por; struct pnot; typedef boost::variant<bconst, var, boost::recursive_wrapper<pand>, boost::recursive_wrapper<por>, boost::recursive_wrapper<pnot> > pexpr; struct pand { std::vector<pexpr> operands; }; struct por { std::vector<pexpr> operands; }; struct pnot { pexpr victim; }; // Many Fusion Macros here Suppose I have a parse tree that looks something like this: pand / ... \ por por / \ / \ var var var var (The ellipsis means 'many more children of similar shape for pand.') Now, suppose that I want to negate each of the por nodes, so that the end result is: pand / ... \ pnot pnot | | por por / \ / \ var var var var The direct approach would be, for each por subtree: - create a pnot node (copies the por in construction); - re-assign the appropriate vector slot in the pand node (copies the pnot node and its por subtree). Alternatively, I could construct a separate vector, and then replace (swap) the pand vector wholesale, eliminating a second round of copying. All of this seems cumbersome compared to a pointer-based tree representation, which would allow the pnot nodes to be inserted without any copying of existing nodes. My question: Is there a way to avoid copy-heavy tree manipulations with autorule-compliant data structures? Should I bite the bullet and just use non-autorules to build a pointer-based AST (e.g., http://boost-spirit.com/home/2010/03/11/s-expressions-and-variants/)?
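For what it's worth, a copy-light sketch under the question's own type definitions (C++03-era Boost.Variant, where assignment copies): steal the por's operand vector with an O(1) swap, replace the slot with an empty pnot, and swap the operands back in, so no element is ever deep-copied:

#include <boost/variant.hpp>

// Assumes the pexpr/pand/por/pnot definitions from the question.
void negate_in_place(pexpr& slot)
{
    if (por* p = boost::get<por>(&slot))   // get<> sees through recursive_wrapper
    {
        por stolen;
        stolen.operands.swap(p->operands);  // O(1), no element copies
        slot = pnot();                      // cheap: the old por is now empty
        pnot* np = boost::get<pnot>(&slot);
        np->victim = por();                 // cheap: empty por
        boost::get<por>(np->victim).operands.swap(stolen.operands);  // O(1)
    }
}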

    Read the article

  • Generating cache file for Twitter rss feed

    - by Kerri
    I'm working on a site with a simple php-generated twitter box with user timeline tweets pulled from the user_timeline rss feed, and cached to a local file to cut down on loads, and as backup for when twitter goes down. I based the caching on this: http://snipplr.com/view/8156/twitter-cache/. It all seemed to be working well yesterday, but today I discovered the cache file was blank. Deleting it then loading again generated a fresh file. The code I'm using is below. I've edited it to try to get it to work with what I was already using to display the feed and probably messed something crucial up. The changes I made are the following (and I strongly believe that any of these could be the cause): - Revised the time difference code (the linked example seemed to use a custom function that wasn't included in the code) Removed the "serialize" function from the "fwrites". This is purely because I couldn't figure out how to unserialize once I loaded it in the display code. I truthfully don't understand the role that serialize plays or how it works, so I'm sure I should have kept it in. If that's the case I just need to understand where/how to deserialize in the second part of the code so that it can be parsed. Removed the $rss variable in favor of just loading up the cache file in my original tweet display code. So, here are the relevant parts of the code I used: <?php $feedURL = "http://twitter.com/statuses/user_timeline/#######.rss"; // START CACHING $cache_file = dirname(__FILE__).'/cache/twitter_cache.rss'; // Start with the cache if(file_exists($cache_file)){ $mtime = (strtotime("now") - filemtime($cache_file)); if($mtime > 600) { $cache_rss = file_get_contents('http://twitter.com/statuses/user_timeline/75168146.rss'); $cache_static = fopen($cache_file, 'wb'); fwrite($cache_static, $cache_rss); fclose($cache_static); } echo "<!-- twitter cache generated ".date('Y-m-d h:i:s', filemtime($cache_file))." -->"; } else { $cache_rss = file_get_contents('http://twitter.com/statuses/user_timeline/#######.rss'); $cache_static = fopen($cache_file, 'wb'); fwrite($cache_static, $cache_rss); fclose($cache_static); } //END CACHING //START DISPLAY $doc = new DOMDocument(); $doc->load($cache_file); $arrFeeds = array(); foreach ($doc->getElementsByTagName('item') as $node) { $itemRSS = array ( 'title' => $node->getElementsByTagName('title')->item(0)->nodeValue, 'date' => $node->getElementsByTagName('pubDate')->item(0)->nodeValue ); array_push($arrFeeds, $itemRSS); } // the rest of the formatting and display code.... } ?> ETA 6/17 Nobody can help…? I'm thinking it has something to do with writing a blank cache file over a good one when twitter is down, because otherwise I imagine that this should be happening every ten minutes when the cache file is overwritten again, but it doesn't happen that frequently. I made the following change to the part where it checks how old the file is to overwrite it: $cache_rss = file_get_contents('http://twitter.com/statuses/user_timeline/75168146.rss'); if($mtime > 600 && $cache_rss != ''){ $cache_static = fopen($cache_file, 'wb'); fwrite($cache_static, $cache_rss); fclose($cache_static); } …so now, it will only write the file if it's over ten minutes old and there's actual content retrieved from the rss page. Do you think this will work?
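For the record, a hedged sketch of that guard, tightened a little: only overwrite the cache when the fetch succeeded and actually parses as XML, and write through a temporary file so a crash mid-write cannot leave a blank cache behind:

$cache_rss = @file_get_contents($feedURL);
if ($cache_rss !== false && @simplexml_load_string($cache_rss) !== false) {
    $tmp = $cache_file . '.tmp';
    file_put_contents($tmp, $cache_rss);  // write the new copy aside first
    rename($tmp, $cache_file);            // atomic swap on the same filesystem
}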

    Read the article

  • How to implement dynamic binary search for search and insert operations of n element (C or C++)

    - by iecut
The idea is to use multiple arrays, each of length 2^k, to store n elements, according to the binary representation of n. Each array is sorted, and different arrays are not ordered in any way. In the above-mentioned data structure, SEARCH is carried out by a sequence of binary searches on each array. INSERT is carried out by a sequence of merges of arrays of the same length until an empty array is reached. More detail: Let's suppose we have a vertical array of length 2^k, and to each node of that array there is attached a horizontal array of length 2^k. That is, to the first node of the vertical array, a horizontal array of length 2^0=1 is connected; to the second node of the vertical array, a horizontal array of length 2^1=2 is connected; and so on. So the insert is first carried out in the first horizontal array; after the second insert the first array becomes empty and the second horizontal array is full with 2 elements; after the third insert the 1st and 2nd horizontal arrays are filled, and so on. I implemented the normal binary search for search and insert as follows: #include <stdio.h> int main() { int a[20]= {0}; int n, i, j, temp; int *beg, *end, *mid, target; printf(" enter the total integers you want to enter (make it less than 20):\n"); scanf("%d", &n); if (n >= 20) return 0; printf(" enter the integer array elements:\n" ); for(i = 0; i < n; i++) { scanf("%d", &a[i]); } // sort the loaded array, binary search! for(i = 0; i < n-1; i++) { for(j = 0; j < n-i-1; j++) { if (a[j+1] < a[j]) { temp = a[j]; a[j] = a[j+1]; a[j+1] = temp; } } } printf(" the sorted numbers are:"); for(i = 0; i < n; i++) { printf("%d ", a[i]); } // point to beginning and end of the array beg = &a[0]; end = &a[n]; // use n = one element past the loaded array! // mid should point somewhere in the middle of these addresses mid = beg += n/2; printf("\n enter the number to be searched:"); scanf("%d",&target); // binary search, there is an AND in the middle of while()!!! while((beg <= end) && (*mid != target)) { // is the target in lower or upper half? if (target < *mid) { end = mid - 1; // new end n = n/2; mid = beg += n/2; // new middle } else { beg = mid + 1; // new beginning n = n/2; mid = beg += n/2; // new middle } } // find the target? if (*mid == target) { printf("\n %d found!", target); } else { printf("\n %d not found!", target); } getchar(); // trap enter getchar(); // wait return 0; } Could anyone please suggest how to modify this program, or a new program, to implement dynamic binary search that works as explained above?
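To make the described structure concrete, here is a minimal self-contained sketch (my own illustrative code, not the poster's): level[k] is either NULL or a sorted array of 2^k ints, INSERT ripples merges upward like binary addition, and SEARCH binary-searches each occupied level:

#include <stdio.h>
#include <stdlib.h>

#define MAX_LEVELS 24  /* supports up to 2^24 - 1 elements */

static int *level[MAX_LEVELS];  /* each slot: NULL or a sorted array of 2^k ints */

/* merge two sorted arrays of size n into a newly allocated array of size 2n */
static int *merge(const int *a, const int *b, int n)
{
    int *out = malloc(2 * n * sizeof *out);
    int i = 0, j = 0, k = 0;
    while (i < n && j < n) out[k++] = (a[i] <= b[j]) ? a[i++] : b[j++];
    while (i < n) out[k++] = a[i++];
    while (j < n) out[k++] = b[j++];
    return out;
}

void insert(int x)
{
    int *carry = malloc(sizeof *carry);
    int size = 1, k = 0;
    carry[0] = x;
    while (level[k] != NULL) {  /* ripple-carry merge, like binary addition */
        int *merged = merge(carry, level[k], size);
        free(carry);
        free(level[k]);
        level[k] = NULL;
        carry = merged;
        size *= 2;
        k++;
    }
    level[k] = carry;
}

int contains(int x)
{
    int size = 1;
    for (int k = 0; k < MAX_LEVELS; k++, size *= 2) {
        if (level[k] != NULL) {  /* plain binary search within this level */
            int lo = 0, hi = size - 1;
            while (lo <= hi) {
                int mid = lo + (hi - lo) / 2;
                if (level[k][mid] == x) return 1;
                if (level[k][mid] < x) lo = mid + 1; else hi = mid - 1;
            }
        }
    }
    return 0;
}

int main(void)
{
    insert(5); insert(1); insert(9); insert(3);
    printf("%d %d\n", contains(3), contains(7));  /* prints: 1 0 */
    return 0;
}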

    Read the article

  • Is asp.net caching my sql results?

    - by Christian W
I have the following method in an App_Code/Globals.cs file: public static XmlDataSource getXmlSourceFromOrgid(int orgid) { XmlDataSource xds = new XmlDataSource(); var ctx = new SensusDataContext(); SqlConnection c = new SqlConnection(ctx.Connection.ConnectionString); c.Open(); SqlCommand cmd = new SqlCommand(String.Format("select orgid, tekst, dbo.GetOrgTreeXML({0}) as Subtree from tblOrg where OrgID = {0}", orgid), c); var rdr = cmd.ExecuteReader(); rdr.Read(); StringBuilder sb = new StringBuilder(); sb.AppendFormat("<node orgid=\"{0}\" tekst=\"{1}\">",rdr.GetInt32(0),rdr.GetString(1)); sb.Append(rdr.GetString(2)); sb.Append("</node>"); xds.Data = sb.ToString(); xds.ID = "treedata"; rdr.Close(); c.Close(); return xds; } This gives me an XML structure to use with the ASP.NET TreeView control (I also use the CssFriendly extender to get nicer code). My problem is that if I log on on my PC with a code that gives me access at a lower level in the tree hierarchy (it's an organization hierarchy), it somehow "remembers" what level I log on at. So when my coworker tests from her computer with another code, giving access to another place in the tree, she gets the same tree as me. (The tree is supposed to show your own level and down.) I have added an HTML comment to show what orgid it passes to the function, and the orgid passed is correct. So either the TreeView caches something server-side, or the SQL query caches its result somehow... Any ideas? SQL function: ALTER function [dbo].[GetOrgTreeXML](@orgid int) returns XML begin RETURN (select org.orgid as '@orgid', org.tekst as '@tekst', [dbo].GetOrgTreeXML(org.orgid) from tblOrg org where (@orgid is null and Eier is null) or Eier=@orgid for XML PATH('NODE'), TYPE) end Extra code as requested: int orgid = int.Parse(Session["org"].ToString()); string orgname = context.Orgs.Where(q => q.OrgID == orgid).First().Tekst; debuglit.Text = String.Format("<!-- Id: {0} \n name: {1} -->", orgid, orgname); var orgxml = Globals.getXmlSourceFromOrgid(orgid); tvNavtree.DataSource = orgxml; tvNavtree.DataBind(); Where "debuglit" is an asp:Literal in the aspx file. EDIT: I have narrowed it down. All functions return correct values. It just doesn't bind to them. I suspected the CssFriendly adapter had something to do with it, but I disabled the CssFriendly adapter and the problem persists... Stepping through it in debug, it's correct all the way, with the stepper standing on "tvNavtree.DataBind();" I can hover the pointer over tvNavtree.DataSource and see that it actually has the correct data. So something must be faulting in the binding process...
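One hedged thing to check first: System.Web.UI.WebControls.XmlDataSource has built-in caching that is enabled by default, and the fixed ID "treedata" means every user shares one cache entry, which would produce exactly this "remembers the first tree" behavior. A sketch of the change:

// In getXmlSourceFromOrgid, before assigning Data:
XmlDataSource xds = new XmlDataSource();
xds.EnableCaching = false;       // bypass the built-in cache entirely
xds.ID = "treedata_" + orgid;    // or: keep caching but vary the key per org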

    Read the article

  • How can I disable 'output escaping' in minidom

    - by William
I'm trying to build an xml document from scratch using xml.dom.minidom. Everything was going well until I tried to make a text node with a ® (Registered Trademark) symbol in it. My objective is that when I finally call print mydoc.toxml(), this particular node will actually contain a ® symbol. First I tried: import xml.dom.minidom as mdom data = '®' which gives the rather obvious error of: File "C:\src\python\HTMLGen\test2.py", line 3 SyntaxError: Non-ASCII character '\xae' in file C:\src\python\HTMLGen\test2.py on line 3, but no encoding declared; see http://www.python.org/peps/pep-0263.html for details I have of course also tried changing the encoding of my python script to 'utf-8' using the opening line comment method, but this didn't help. So I thought import xml.dom.minidom as mdom data = '&#174;' #Both accepted xml encodings for registered trademark data = '&reg;' text = mdom.Text() text.data = data print data print text.toxml() But because when I print text.toxml(), the ampersands are being escaped, I get this output: &reg; &amp;reg; My question is, does anybody know of a way that I can force the ampersands not to be escaped in the output, so that I can have my special character reference carry through to the XML document? Basically, for this node, I want print text.toxml() to produce output of &reg; or &#174; in a happy and cooperative way! EDIT 1: By the way, if minidom actually doesn't have this capacity, I am perfectly happy using another module that you can recommend which does. EDIT 2: As Hugh suggested, I tried using data = u'®' (while also using the # -*- coding: utf-8 -*- source encoding declaration). This almost helped, in the sense that it actually caused the ® symbol itself to be output to my xml. This is actually not the result I am looking for. As you may have guessed by now (and perhaps I should have specified earlier) this xml document happens to be an HTML page, which needs to work in a browser. So having ® in the document ends up causing rubbish in the browser (Â® to be precise!). I also tried: data = unichr(174) text.data = data.encode('ascii','xmlcharrefreplace') print text.toxml() But of course this led to the same original problem, where all that happens is the ampersand gets escaped by .toxml(). My ideal scenario would be some way of escaping the ampersand so that the XML printing function won't "escape" it on my behalf for the document (in other words, achieving my original goal of having &reg; or &#174; appear in the document). Seems like soon I'm going to have to resort to regular expressions! EDIT 2a: Or perhaps not. Seems like getting my html meta information correct <META http-equiv="Content-Type" Content="text/html; charset=UTF-8"> could help, but I'm not sure yet how this fits in with the xml structure...
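If it helps, a sketch of one known workaround (RawText is a hypothetical name, not part of minidom): subclass Text and override writexml so the node's data bypasses the escaping step entirely; you then stay responsible for putting well-formed markup in .data:

import xml.dom.minidom as mdom

class RawText(mdom.Text):
    # same signature minidom calls internally; emits data verbatim, unescaped
    def writexml(self, writer, indent="", addindent="", newl=""):
        writer.write("%s%s%s" % (indent, self.data, newl))

text = RawText()
text.data = "&reg;"
print text.toxml()  # -> &reg;  (the ampersand survives untouched)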

    Read the article

  • Use JQuery to target unwrapped text inside a div

    - by Chris
I'm trying to find a way to wrap just the inner text of an element; I don't want to target any other inner DOM elements. For example: <ul> <li class="this-one"> this is my item <ul> <li> this is a sub element </li> </ul> </li> </ul> I want to use jQuery to do this: <ul> <li class="this-one"> <div class="tree-item-text">this is my item</div> <ul> <li> <div class="tree-item-text">this is a sub element</div> </li> </ul> </li> </ul> A little background: I need to make an in-house tree structure UI element, so I'm using the UL structure to represent this. But I don't want developers to have to do any special formatting to use the widget. Update: I just wanted to add that the purpose of this is that I want to add a click listener to expand the elements under the li. However, since those elements are within the li, the click listener will activate even when clicking on the children, so I want to attach it to the text instead. To do this the text needs to be targetable, which is why I want to wrap it in a div of its own. So far I've come up with wrapping all the inner elements of the li in a div and then moving all inner DOM elements back to the original parent. But this code is pretty heavy for something that might be much simpler and not require so much DOM manipulation. EDIT: Want to share the first pseudo alternative I came up with, but I think it is very tasking for what I want to accomplish. var innerTextThing = $("ul.tree ul").parents("li").wrapInner("<div class='tree-node-text'>"); $(innerTextThing.find(".tree-node-text")).each(function(){ $(this).after($(this).children("ul")); }); Answered: I ended up doing the following; FYI I only have to worry about FF and IE compatibility, so it's untested in other browsers. //this will wrap all li textNodes in a div so we can target them. $(that).find("li").contents() .filter(function () { return this.nodeType == 3; }).each(function () { if ( //these are for IE and FF compatibility (this.textContent != undefined && this.textContent.trim() != "") || (this.innerText != undefined && this.innerText.trim() != "") ) { $(this).wrap("<div class='tree-node-text'>"); } });
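A hedged alternative that skips the wrapping altogether, in case it suits the widget better: keep the click handler on the li but ignore clicks that bubbled up from descendants (the selector and the jQuery 1.7+ .on() syntax are illustrative; .bind() works the same way on older versions):

$(".tree li").on("click", function (e) {
    if (e.target !== this) return;    // the click came from a child element
    $(this).children("ul").toggle();  // expand/collapse this tree node
});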

    Read the article

  • XForms and multiple inputs for same model tag

    - by iHeartGreek
Hi! I apologize ahead of time if I am not asking this properly; it is hard to put into words what I am asking. I have an XForms model such as: <file> <criteria> <criterion></criterion> </criteria> </file> I want to have multiple input text boxes that each create a new criterion tag, with a user interface such as: <xf:input ref="/file/criteria/criterion" model="select_data"> <xf:label>Select</xf:label> </xf:input> <xf:input ref="/file/criteria/criterion" model="select_data"> <xf:label>Select</xf:label> </xf:input> <xf:input ref="/file/criteria/criterion" model="select_data"> <xf:label>Select</xf:label> </xf:input> And I would like the XML output to look like this (once the user has entered info): <file> <criteria> <criterion>AAA</criterion> <criterion>BBB</criterion> <criterion>CCC</criterion> </criteria> </file> The way I have it doesn't work, as all 3 input fields are treated as referring to the same criterion tag. How do I differentiate? Thanks! I hope that made some sense! BEGIN FIRST EDIT Thanks for the responses for the basic text box! However, I now need to do this with a listbox, and for the life of me I can't figure out how. I read somewhere to use the xforms:select and xforms:deselect events, but I didn't know where to place them, and the places I tried gave me very weird behaviour. I am currently implementing the following: <xf:select ref="instance('criteria_data')/criteria/criterion" selection="" appearance="compact" > <xf:label>Choose criteria</xf:label> <xf:itemset nodeset="instance('criteria_choices')/choice"> <xf:label ref="@label"></xf:label> <xf:value ref="."></xf:value> </xf:itemset> </xf:select> However, when multiple choices are submitted, all selection values are inserted into the same node, separated by spaces. For example: if AAA, BBB, and FFF were selected from the listbox, it would result in the following XML: <criterion>AAA BBB FFF</criterion> How do I change my code to have each selection be in a separate node? I.e., I want it to look like this: <criterion>AAA</criterion> <criterion>BBB</criterion> <criterion>FFF</criterion> Thanks! END FIRST EDIT BEGIN SECOND EDIT: For the listboxes (i.e. xf:select appearance="compact") I ended up allowing the spaces to occur in the same node and then transformed that XML using XSL to generate a properly formatted new XML doc (with separate individual nodes). Unfortunately, I did not find a less cumbersome solution that inserts them into separate nodes in the first place. The selected answer works very well for text boxes, however, hence why I selected it as the answer. END SECOND EDIT
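As a postscript, a hedged sketch of the xf:repeat route for the text-box case (it assumes the standard ev: events namespace is declared): each press of the trigger inserts a fresh criterion node, and every repeat row binds to its own node rather than all inputs sharing one:

<xf:repeat nodeset="/file/criteria/criterion" id="criterion-repeat">
  <xf:input ref=".">
    <xf:label>Select</xf:label>
  </xf:input>
</xf:repeat>
<xf:trigger>
  <xf:label>Add criterion</xf:label>
  <xf:insert ev:event="DOMActivate"
             nodeset="/file/criteria/criterion"
             at="last()" position="after"/>
</xf:trigger>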

    Read the article

  • Configure TFS portal afterwards

Update #1 January 8th, 2010: There is an updated post on this topic for Beta 2: http://www.ewaldhofman.nl/post/2009/12/10/Configure-TFS-portal-afterwards-Beta-2.aspx Update #2 October 10th, 2010: In the new Team Foundation Server Power Tools September 2010, there is now a command to create a portal. tfpt addprojectportal   Add or move portal for an existing team project Usage: tfpt addprojectportal /collection:uri                              /teamproject:"project name"                              /processtemplate:"template name"                              [/webapplication:"webappname"]                              [/relativepath:"pathfromwebapp"]                              [/validate]                              [/verbose] /collection Required. URL of Team Project Collection. /teamproject Required. Specifies the name of the team project. /processtemplate Required. Specifies the name of the process template. /webapplication The name of the SharePoint Web Application. Must also specify relativepath. /relativepath The path for the site relative to the root URL for the SharePoint Web Application. Must also specify webapplication. /validate Specifies that the user inputs are to be validated. If specified, only validation will be done and no portal setting will be changed. /verbose Switches on the verbose mode. I created a new Team Project in TFS 2010 Beta 1 and chose not to configure SharePoint during the creation of the Team Project. Of course I found out fairly quickly that a portal for TFS is very useful, especially the Iteration and the Product backlog workbooks and the dashboard reports. This blog describes how you can configure the SharePoint portal afterwards. Update: September 9th, 2009 Adding the portal afterwards is much easier, as described below. Here are the steps: Step 1: Create a new temporary project (with a SharePoint site for it). Open the Team Explorer Right click in the Team Explorer the root node (i.e. the project collection) Select "New team project" from the menu Walk through the wizard and make sure you check the option to create the portal (which is by default checked) Step 2: Disable the site for the new project Open the Team Explorer Select the team project you created in step 1 In the menu click on Team -> Show Project Portal. In the menu click on Team -> Team Project Settings -> Portal Settings... The following dialog pops up Uncheck the option "Enable team project portal" Confirm the dialog with OK Step 3: Enable the site for the original one. Point it to the newly created site. Open the Team Explorer Select the team project you want to add the portal to In the menu open Team -> Team Project Settings -> Portal Settings... The same dialog as in step 2 pops up Check the option "Enable team project portal" Click on the "Configure URL" button The following dialog pops up In the dialog select in the combobox of the web application the TFS server Enter in the Relative site path the text "sites/[Project Collection Name]/[Team Project Name created in step 1]" Confirm the "Specify an existing SharePoint Site" dialog with OK Check the "Reports and dashboards refer to data for this team project" option Confirm the dialog "Project Portal Settings" with OK Step 4: Delete the temporary project you created. In Beta 1, I have found no way to delete a team project. Maybe it will be available in TFS 2010 Beta 2.
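Circling back to the Power Tools command from Update #2: a typical invocation would look like the following (the collection URL, project name, and template name are illustrative):

tfpt addprojectportal /collection:http://tfsserver:8080/tfs/DefaultCollection /teamproject:"My Project" /processtemplate:"MSF for Agile Software Development v5.0"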
Original post Step 1: Create new portal site Go to the SharePoint site of your project collection (http://<servername>/sites/<project_collection_name>/default.aspx) Click on the Site Actions at the left side of the screen and choose the option Site Settings In the site settings, choose the Sites and workspaces option Create a new site Enter the values for the Title, the description, the site address, and choose the TFS2010 Agile Dashboard as the template. Create the site by clicking on the Create button Step 2: Integrate portal site with team project Open Visual Studio Open the Team Explorer (View -> Team Explorer) Select in the Team Explorer tool window the Team Project for which you want to create a new portal Open the Project Portal Settings (Team -> Team Project Settings -> Portal Settings...) Check the Enable team project portal checkbox Click on Configure URL... You will get a new dialog as below Enter the url to the TFS server in the web application combobox And specify the relative site path: sites/<project collection>/<site name> Confirm with OK Check in the Project Portal Settings dialog the checkbox "Reports and dashboards refer to data for this team project" Confirm the settings with OK (this takes a while...) When you now browse to the portal, you will see that the dashboards are now showing up with the data for the current team project. Step 3: Download process template To get a copy of the documents that are default in a team project, we need to have a fresh set of files that are not attached to a team project yet. You can do that with the following steps. Start the Process Template Manager (Team -> Team Project Collection Settings -> Process Template Manager...) Choose the Agile process template and click on download Choose a folder to download it to Step 4: Add Product and Iteration backlog Go to the Team Explorer in Visual Studio Make sure the team project is in the list of team projects, and expand the team project Right click the Documents node, and choose New Document Library Enter "Shared Documents", and click on Add Right click the Shared Documents node and choose Upload Document Go to the file location where you stored the process template from step 3 and then navigate to the subdirectory "Agile Process Template 5.0\MSF for Agile Software Development v5.0\Windows SharePoint Services\Shared Documents\Project Management" Select in the Open Dialog the files "Iteration Backlog" and "Product Backlog", and click Open Step 5: Bind Iteration backlog workbook to the team project Right click on the "Iteration Backlog" file and select Edit, and confirm any warning messages Place your cursor in cell A1 of the Iteration backlog worksheet Switch to the Team ribbon and click New List. Select your Team Project and click Connect From the New List dialog, select the Iteration Backlog query in the Workbook Queries folder. The final step is to add a set of document properties that allow the workbook to communicate with the TFS reporting warehouse. Before we create the properties we need to collect some information about your project. The first piece of information comes from the table created in the previous step. As you collect these properties, copy them into Notepad so they can be used in later steps. For each property, here is how to retrieve the value:
[Table name]: switch to the Design ribbon and select the Table Name value in the Properties portion of the ribbon. [Project GUID]: in the Visual Studio Team Explorer, right click your Team Project and select Properties, then select the URL value and copy the GUID (the long value with lots of characters) at the end of the URL. [Team Project name]: in the Properties dialog, select the Name field and copy the value. [TFS server name]: in the Properties dialog, select the Server Name field and copy the value. [UPDATE] I have found that this is not correct: you need to specify the instance of your SQL Server. The value is used to create a connection to the TFS cube. Switch back to the Iteration Backlog workbook. Click the Office button and select Prepare – Properties. Click the Document Properties – Server drop down and select Advanced Properties. Switch to the Custom tab and add the following properties using the values you collected above: [Table name]_ASServerName = [TFS server name]; [Table name]_ASDatabase = tfs_warehouse; [Table name]_TeamProjectName = [Team Project name]; [Table name]_TeamProjectId = [Project GUID]. Click OK to close the properties dialog. It is possible that the Estimated Work (Hours) is showing the #REF! value. To resolve that, change the formula to: =SUMIFS([Table name][Original Estimate]; [Table name][Iteration Path];CurrentIteration&"*";[Table name][Area Path];AreaPath&"*";[Table name][Work Item Type]; "Task") For example =SUMIFS(VSTS_ab392b55_6647_439a_bae4_8c66e908bc0d[Original Estimate]; VSTS_ab392b55_6647_439a_bae4_8c66e908bc0d[Iteration Path];CurrentIteration&"*";VSTS_ab392b55_6647_439a_bae4_8c66e908bc0d[Area Path];AreaPath&"*";VSTS_ab392b55_6647_439a_bae4_8c66e908bc0d[Work Item Type]; "Task") Also the Total Remaining Work in the Individual Capacity table may contain #REF! values. To resolve that, change the formula to: =SUMIFS([Table name][Remaining Work]; [Table name][Iteration Path];CurrentIteration&"*";[Table name][Area Path];AreaPath&"*";[Table name][Assigned To];[Team Member];[Table name][Work Item Type]; "Task") For example =SUMIFS(VSTS_ab392b55_6647_439a_bae4_8c66e908bc0d[Remaining Work]; VSTS_ab392b55_6647_439a_bae4_8c66e908bc0d[Iteration Path];CurrentIteration&"*";VSTS_ab392b55_6647_439a_bae4_8c66e908bc0d[Area Path];AreaPath&"*";VSTS_ab392b55_6647_439a_bae4_8c66e908bc0d[Assigned To];[Team Member];VSTS_ab392b55_6647_439a_bae4_8c66e908bc0d[Work Item Type]; "Task") Save and close the workbook. Step 6: Bind Product backlog workbook to the team project Repeat the steps for binding the Iteration backlog for this workbook too. In the worksheet Capacity, the formula for the Story Points might be missing. You can resolve it with: =IF([Iteration]="";"";SUMIFS([Table name][Story Points];[Table name][Iteration Path];[Iteration]&"*")) Example =IF([Iteration]="";"";SUMIFS(VSTS_487f1e4c_db30_4302_b5e8_bd80195bc2ec[Story Points];VSTS_487f1e4c_db30_4302_b5e8_bd80195bc2ec[Iteration Path];[Iteration]&"*"))

    Read the article

  • Running a simple integration scenario using the Oracle Big Data Connectors on Hadoop/HDFS cluster

    - by hamsun
Between the elephant (the traditional image of the Hadoop framework) and the Oracle Iron Man (Big Data), an English setter could be seen as the link to the right data. Data, data, data: we are living in a world where data technology based on popular applications, search engines, web servers, rich SMS messages, email clients, weather forecasts and so on plays a predominant role in our lives. More and more technologies are used to analyze/track our behavior, try to detect patterns, and offer us "the best/right user experience", from the Google Ad services to telco companies or large consumer sites (like Amazon :) ). The more we use all these technologies, the more we generate data, and thus there is a need for huge data marts and specific hardware/software servers (such as the Exadata servers) in order to treat/analyze/understand the trends and offer new services to the users. Some of these "data feeds" are raw, unstructured data, and cannot be processed effectively by normal SQL queries. Large-scale distributed processing was an emerging infrastructure need, and the solution seemed to be the "collocation of compute nodes with the data", which in turn led to MapReduce parallel patterns and the development of the Hadoop framework, which is based on MapReduce and a distributed file system (HDFS) that runs on larger clusters of rather inexpensive servers. Several Oracle products use the distributed aggregation pattern for data calculation (Coherence, NoSQL, TimesTen), so once you are familiar with one of these technologies, let's say Coherence aggregators, you will find the whole Hadoop/MapReduce concept very similar. Oracle Big Data Appliance is based on the Cloudera Distribution (CDH), and the Oracle Big Data Connectors can be plugged into a Hadoop cluster running the CDH distribution or equivalent Hadoop clusters. In this paper, a "lab like" implementation of this concept is done on a single Linux x64 server, running an Oracle Database 11g Enterprise Edition Release 11.2.0.4.0 and a single-node Apache hadoop-1.2.1 HDFS cluster, using the SQL connector for HDFS. The whole setup is fairly simple: Install on a Linux x64 server (or VirtualBox appliance) an Oracle Database 11g Enterprise Edition Release 11.2.0.4.0 server Get the Apache Hadoop distribution from: http://mir2.ovh.net/ftp.apache.org/dist/hadoop/common/hadoop-1.2.1. Get the Oracle Big Data Connectors from: http://www.oracle.com/technetwork/bdc/big-data-connectors/downloads/index.html?ssSourceSiteId=ocomen. Check the java version of your Linux server with the command: java -version java version "1.7.0_40" Java(TM) SE Runtime Environment (build 1.7.0_40-b43) Java HotSpot(TM) 64-Bit Server VM (build 24.0-b56, mixed mode) Decompress the hadoop-1.2.1.tar.gz file to /u01/hadoop-1.2.1 Modify your .bash_profile export HADOOP_HOME=/u01/hadoop-1.2.1 export PATH=$PATH:$HADOOP_HOME/bin export HIVE_HOME=/u01/hive-0.11.0 export PATH=$PATH:$HADOOP_HOME/bin:$HIVE_HOME/bin (also see my sample .bash_profile) Set up ssh trust for the Hadoop process; this is a mandatory step, and in our case we have to establish a "local trust" as we are using a single-node configuration copy the new public keys to the list of authorized keys connect and test the ssh setup to your localhost: We will run a "pseudo-Hadoop cluster", in what is called "local standalone mode": all the Hadoop Java components run in one Java process, which is enough for our demo purposes.
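The next step fine-tunes the Hadoop configuration files; as a preview of what the (stripped) screenshots below contained, here is a hedged reconstruction of a minimal core-site.xml for this single-node setup. The port matches the hdfs://localhost:19000 URIs that appear in the location files further down; the data directory is the /u01/hadoop-1.2.1/data layout described below:

<?xml version="1.0"?>
<configuration>
  <property>
    <name>fs.default.name</name>
    <value>hdfs://localhost:19000</value>
  </property>
  <property>
    <name>hadoop.tmp.dir</name>
    <value>/u01/hadoop-1.2.1/data</value>
  </property>
</configuration>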
We need to "fine tune" some Hadoop configuration files, we have to go at our $HADOOP_HOME/conf, and modify the files: core-site.xml hdfs-site.xml mapred-site.xml check that the hadoop binaries are referenced correctly from the command line by executing: hadoop -version As Hadoop is managing our "clustered HDFS" file system we have to create "the mount point" and format it , the mount point will be declared to core-site.xml as: The layout under the /u01/hadoop-1.2.1/data will be created and used by other hadoop components (MapReduce = /mapred/...) HDFS is using the /dfs/... layout structure format the HDFS hadoop file system: Start the java components for the HDFS system As an additional check, you can use the GUI Hadoop browsers to check the content of your HDFS configurations: Once our HDFS Hadoop setup is done you can use the HDFS file system to store data ( big data : )), and plug them back and forth to Oracle Databases by the means of the Big Data Connectors ( which is the next configuration step). You can create / use a Hive db, but in our case we will make a simple integration of "raw data" , through the creation of an External Table to a local Oracle instance ( on the same Linux box, we run the Hadoop HDFS one node cluster and one Oracle DB). Download some public "big data", I use the site: http://france.meteofrance.com/france/observations, from where I can get *.csv files for my big data simulations :). Here is the data layout of my example file: Download the Big Data Connector from the OTN (oraosch-2.2.0.zip), unzip it to your local file system (see picture below) Modify your environment in order to access the connector libraries , and make the following test: [oracle@dg1 bin]$./hdfs_stream Usage: hdfs_stream locationFile [oracle@dg1 bin]$ Load the data to the Hadoop hdfs file system: hadoop fs -mkdir bgtest_data hadoop fs -put obsFrance.txt bgtest_data/obsFrance.txt hadoop fs -ls /user/oracle/bgtest_data/obsFrance.txt [oracle@dg1 bg-data-raw]$ hadoop fs -ls /user/oracle/bgtest_data/obsFrance.txt Found 1 items -rw-r--r-- 1 oracle supergroup 54103 2013-10-22 06:10 /user/oracle/bgtest_data/obsFrance.txt [oracle@dg1 bg-data-raw]$hadoop fs -ls hdfs:///user/oracle/bgtest_data/obsFrance.txt Found 1 items -rw-r--r-- 1 oracle supergroup 54103 2013-10-22 06:10 /user/oracle/bgtest_data/obsFrance.txt Check the content of the HDFS with the browser UI: Start the Oracle database, and run the following script in order to create the Oracle database user, the Oracle directories for the Oracle Big Data Connector (dg1 it’s my own db id replace accordingly yours): #!/bin/bash export ORAENV_ASK=NO export ORACLE_SID=dg1 . 
oraenv sqlplus /nolog <<EOF CONNECT / AS sysdba; CREATE OR REPLACE DIRECTORY osch_bin_path AS '/u01/orahdfs-2.2.0/bin'; CREATE USER BGUSER IDENTIFIED BY oracle; GRANT CREATE SESSION, CREATE TABLE TO BGUSER; GRANT EXECUTE ON sys.utl_file TO BGUSER; GRANT READ, EXECUTE ON DIRECTORY osch_bin_path TO BGUSER; CREATE OR REPLACE DIRECTORY BGT_LOG_DIR as '/u01/BG_TEST/logs'; GRANT READ, WRITE ON DIRECTORY BGT_LOG_DIR to BGUSER; CREATE OR REPLACE DIRECTORY BGT_DATA_DIR as '/u01/BG_TEST/data'; GRANT READ, WRITE ON DIRECTORY BGT_DATA_DIR to BGUSER; EOF Put the following in a file named t3.sh and make it executable: hadoop jar $OSCH_HOME/jlib/orahdfs.jar \ oracle.hadoop.exttab.ExternalTable \ -D oracle.hadoop.exttab.tableName=BGTEST_DP_XTAB \ -D oracle.hadoop.exttab.defaultDirectory=BGT_DATA_DIR \ -D oracle.hadoop.exttab.dataPaths="hdfs:///user/oracle/bgtest_data/obsFrance.txt" \ -D oracle.hadoop.exttab.columnCount=7 \ -D oracle.hadoop.connection.url=jdbc:oracle:thin:@//localhost:1521/dg1 \ -D oracle.hadoop.connection.user=BGUSER \ -D oracle.hadoop.exttab.printStackTrace=true \ -createTable --noexecute then test the creation of the external table with it: [oracle@dg1 samples]$ ./t3.sh ./t3.sh: line 2: /u01/orahdfs-2.2.0: Is a directory Oracle SQL Connector for HDFS Release 2.2.0 - Production Copyright (c) 2011, 2013, Oracle and/or its affiliates. All rights reserved. Enter Database Password: The create table command was not executed. The following table would be created. CREATE TABLE "BGUSER"."BGTEST_DP_XTAB" ( "C1" VARCHAR2(4000), "C2" VARCHAR2(4000), "C3" VARCHAR2(4000), "C4" VARCHAR2(4000), "C5" VARCHAR2(4000), "C6" VARCHAR2(4000), "C7" VARCHAR2(4000) ) ORGANIZATION EXTERNAL ( TYPE ORACLE_LOADER DEFAULT DIRECTORY "BGT_DATA_DIR" ACCESS PARAMETERS ( RECORDS DELIMITED BY 0X'0A' CHARACTERSET AL32UTF8 STRING SIZES ARE IN CHARACTERS PREPROCESSOR "OSCH_BIN_PATH":'hdfs_stream' FIELDS TERMINATED BY 0X'2C' MISSING FIELD VALUES ARE NULL ( "C1" CHAR(4000), "C2" CHAR(4000), "C3" CHAR(4000), "C4" CHAR(4000), "C5" CHAR(4000), "C6" CHAR(4000), "C7" CHAR(4000) ) ) LOCATION ( 'osch-20131022081035-74-1' ) ) PARALLEL REJECT LIMIT UNLIMITED; The following location files would be created. osch-20131022081035-74-1 contains 1 URI, 54103 bytes 54103 hdfs://localhost:19000/user/oracle/bgtest_data/obsFrance.txt Then remove the --noexecute flag and create the external Oracle table for the Hadoop data. Check the results: The create table command succeeded. CREATE TABLE "BGUSER"."BGTEST_DP_XTAB" ( "C1" VARCHAR2(4000), "C2" VARCHAR2(4000), "C3" VARCHAR2(4000), "C4" VARCHAR2(4000), "C5" VARCHAR2(4000), "C6" VARCHAR2(4000), "C7" VARCHAR2(4000) ) ORGANIZATION EXTERNAL ( TYPE ORACLE_LOADER DEFAULT DIRECTORY "BGT_DATA_DIR" ACCESS PARAMETERS ( RECORDS DELIMITED BY 0X'0A' CHARACTERSET AL32UTF8 STRING SIZES ARE IN CHARACTERS PREPROCESSOR "OSCH_BIN_PATH":'hdfs_stream' FIELDS TERMINATED BY 0X'2C' MISSING FIELD VALUES ARE NULL ( "C1" CHAR(4000), "C2" CHAR(4000), "C3" CHAR(4000), "C4" CHAR(4000), "C5" CHAR(4000), "C6" CHAR(4000), "C7" CHAR(4000) ) ) LOCATION ( 'osch-20131022081719-3239-1' ) ) PARALLEL REJECT LIMIT UNLIMITED; The following location files were created.
This is the view from SQL Developer. And finally, the number of lines in the Oracle table imported from our Hadoop HDFS cluster:

SQL> select count(*) from "BGUSER"."BGTEST_DP_XTAB";

  COUNT(*)
----------
      1151

In a future post we will integrate data from a Hive database and try some ODI integrations with the ODI Big Data connector. Our simplistic approach is just a first step to show you how this world of unstructured data can be integrated with the Oracle infrastructure. Hadoop, Big Data and NoSQL are great technologies; they are widely used, and Oracle offers a large integration infrastructure based on these services. Oracle University presents a complete curriculum on all the related Oracle technologies:

NoSQL:
Introduction to Oracle NoSQL Database
Using Oracle NoSQL Database

Big Data:
Introduction to Big Data
Oracle Big Data Essentials
Oracle Big Data Overview

Oracle Data Integrator:
Oracle Data Integrator 12c: New Features
Oracle Data Integrator 11g: Integration and Administration
Oracle Data Integrator: Administration and Development
Oracle Data Integrator 11g: Advanced Integration and Development

Oracle Coherence 12c:
Oracle Coherence 12c: New Features
Oracle Coherence 12c: Share and Manage Data in Clusters

Oracle GoldenGate 11g:
Oracle GoldenGate 11g: Fundamentals for Oracle
Oracle GoldenGate 11g: Fundamentals for SQL Server
Oracle GoldenGate 11g: Fundamentals for DB2
Oracle GoldenGate 11g: Fundamentals for Teradata
Oracle GoldenGate 11g: Fundamentals for HP NonStop
Oracle GoldenGate 11g Management Pack: Overview
Oracle GoldenGate 11g: Troubleshooting and Tuning
Oracle GoldenGate 11g: Advanced Configuration for Oracle

Other Resources:
Apache Hadoop: http://hadoop.apache.org/ is the home page for these technologies. "Hadoop: The Definitive Guide, 3rd Edition" by Tom White is a classic read for anyone who wants to know more about Hadoop, and some active googling will turn up plenty of further references.

About the author: Eugene Simos is based in France and joined Oracle through the BEA-WebLogic acquisition, where he worked in Professional Services, Support, and Education for major accounts across the EMEA region. He has worked in the banking sector and for AT&T and other telco companies, which gave him extensive experience with production environments. Eugene currently specializes in Oracle Fusion Middleware, teaching an array of courses (WebLogic/WebCenter, Content, BPM/SOA, Identity & Security, GoldenGate, Virtualization, Unified Communications Suite) throughout the EMEA region.

    Read the article

  • Windows Azure Mobile Services: New support for iOS apps, Facebook/Twitter/Google identity, Emails, SMS, Blobs, Service Bus and more

    - by ScottGu
A few weeks ago I blogged about Windows Azure Mobile Services - a new capability in Windows Azure that makes it incredibly easy to connect your client and mobile applications to a scalable cloud backend. Earlier today we delivered a number of great improvements to Windows Azure Mobile Services. New features include:

- iOS support – enabling you to connect iPhone and iPad apps to Mobile Services
- Facebook, Twitter, and Google authentication support with Mobile Services
- Blob, Table, Queue, and Service Bus support from within your Mobile Service
- Sending emails from your Mobile Service (in partnership with SendGrid)
- Sending SMS messages from your Mobile Service (in partnership with Twilio)
- Ability to deploy mobile services in the West US region

All of these improvements are now live in production and available to start using immediately. Below are more details on them.

iOS Support

This week we delivered initial support for connecting iOS based devices (including iPhones and iPads) to Windows Azure Mobile Services. Like the rest of our Windows Azure SDK, we are delivering the native iOS libraries to enable this under an open source (Apache 2.0) license on GitHub. We're excited to get your feedback on this new library through our forum and GitHub issues list, and we welcome contributions to the SDK.

To create a new iOS app or connect an existing iOS app to your Mobile Service, simply select the "iOS" tab within the Quick Start view of a Mobile Service within the Windows Azure Portal – and then follow either the "Create a new iOS app" or "Connect to an existing iOS app" link below it. Clicking either of these links will expand and display step-by-step instructions for how to build an iOS application that connects with your Mobile Service. Read this getting-started tutorial to walk through how you can build (in less than 5 minutes) a simple iOS "Todo List" app that stores data in Windows Azure. Then follow the tutorials below to explore how to use the iOS client libraries to store data and authenticate users:

Get Started with data in Mobile Services for iOS
Get Started with authentication in Mobile Services for iOS

Facebook, Twitter, and Google Authentication Support

Our initial preview of Mobile Services supported the ability to authenticate users of mobile apps using Microsoft Accounts (formerly called Windows Live ID accounts). This week we are adding the ability to also authenticate users using Facebook, Twitter, and Google credentials. These are now supported with both Windows 8 apps as well as iOS apps (and a single app can support multiple forms of identity simultaneously – so you can offer your users a choice of how to log in). The tutorials below walk through how to register your Mobile Service with an identity provider:

How to register your app with Microsoft Account
How to register your app with Facebook
How to register your app with Twitter
How to register your app with Google

The tutorials above walk through how to obtain a client ID and a secret key from the identity provider. You can then click on the "Identity" tab of your Mobile Service (within the Windows Azure Portal) and save these values to enable server-side authentication with your Mobile Service. You can then write code within your client or mobile app to authenticate your users to the Mobile Service.
For example, below is the code you would write to have them log in to the Mobile Service using their Facebook credentials.

Windows Store App (using C#):

var user = await App.MobileService
                    .LoginAsync(MobileServiceAuthenticationProvider.Facebook);

iOS app (using Objective-C):

UINavigationController *controller = [self.todoService.client
    loginViewControllerWithProvider:@"facebook"
    completion:^(MSUser *user, NSError *error) {
       //...
}];

Learn more about authenticating Mobile Services using Microsoft Account, Facebook, Twitter, and Google from these tutorials:

Get started with authentication in Mobile Services for Windows Store (C#)
Get started with authentication in Mobile Services for Windows Store (JavaScript)
Get started with authentication in Mobile Services for iOS

Using Windows Azure Blob, Tables and Service Bus with your Mobile Services

Mobile Services provide a simple but powerful way to add server logic using server scripts. These scripts are associated with the individual CRUD operations on your mobile service's tables. Server scripts are great for data validation, custom authorization logic (e.g. does this user participate in this game session), augmenting CRUD operations, sending push notifications, and other similar scenarios (a minimal validation sketch appears at the end of this section). Server scripts are written in JavaScript and are executed in a secure server-side scripting environment built using Node.js. You can edit these scripts and save them on the server directly within the Windows Azure Portal.

In this week's release we have added the ability to work with other Windows Azure services from your Mobile Service server scripts. This is supported using the existing "azure" module within the Windows Azure SDK for Node.js. For example, the code below could be used in a Mobile Service script to obtain a reference to a Windows Azure Table (after which you could query it or insert data into it):

    var azure = require('azure');
    var tableService = azure.createTableService("<< account name >>",
                                                "<< access key >>");

Follow the tutorials on the Windows Azure Node.js dev center to learn more about working with Blob, Tables, Queues and Service Bus using the azure module.
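As an illustration of the server-script model described above, here is a minimal sketch of an insert script that performs data validation before the row reaches the table. The 255-character limit and the error message are invented for the example; statusCodes, request.respond and request.execute are the standard objects available to Mobile Services server scripts:

    function insert(item, user, request) {
        // Reject over-long text before writing to the table.
        if (item.text.length > 255) {
            request.respond(statusCodes.BAD_REQUEST, 'Text is too long.');
        } else {
            request.execute(); // proceed with the normal insert
        }
    }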
Sending emails from your Mobile Service

In this week's release we have also added the ability to easily send emails from your Mobile Service, building on our partnership with SendGrid. Whether you want to send a welcome email upon successful user registration, or make your app alert you of certain usage activities, you can do this now by sending email from Mobile Services server scripts. To get started, sign up for a SendGrid account at http://sendgrid.com. Windows Azure customers receive a special offer of 25,000 free emails per month from SendGrid. To sign up for this offer, or get more information, please visit http://www.sendgrid.com/azure.html. Once you have signed up, you can add the following script to your Mobile Service server scripts to send email via the SendGrid service:

    // load the SendGrid helper; assumes the 'sendgrid' Node.js module
    // available to Mobile Services server scripts
    var SendGrid = require('sendgrid').SendGrid;
    var sendgrid = new SendGrid('<< account name >>', '<< password >>');

    sendgrid.send({
        to: '<< enter email address here >>',
        from: '<< enter from address here >>',
        subject: 'New to-do item',
        text: 'A new to-do was added: ' + item.text
    }, function (success, message) {
        if (!success) {
            console.error(message);
        }
    });

Follow the Send email from Mobile Services with SendGrid tutorial to learn more.

Sending SMS messages from your Mobile Service

SMS is a key communication medium for mobile apps - it comes in handy if you want your app to send users a confirmation code during registration, allow your users to invite their friends to install your app, or reach out to mobile users without a smartphone. Using Mobile Services server scripts and Twilio's REST API, you can now easily send SMS messages from your app. To get started, sign up for a Twilio account. Windows Azure customers receive 1000 free text messages when using Twilio and Windows Azure together. Once signed up, you can add the following to your Mobile Service server scripts to send SMS messages:

    var httpRequest = require('request');
    var account_sid = "<< account SID >>";
    var auth_token = "<< auth token >>";

    // Create the request body
    var body = "From=" + from + "&To=" + to + "&Body=" + message;

    // Make the HTTP request to Twilio
    httpRequest.post({
        url: "https://" + account_sid + ":" + auth_token +
             "@api.twilio.com/2010-04-01/Accounts/" + account_sid + "/SMS/Messages.json",
        headers: { 'content-type': 'application/x-www-form-urlencoded' },
        body: body
    }, function (err, resp, body) {
        console.log(body);
    });

I'm excited to be speaking at the TwilioCon conference this week, and will be showcasing some of the cool scenarios you can now enable with Twilio and Windows Azure Mobile Services.

Mobile Services availability in the West US region

Our initial preview of Windows Azure Mobile Services was only supported in the US East region of Windows Azure. As with every Windows Azure service, over time we will extend Mobile Services to all Windows Azure regions. With this week's preview update we've added support so that you can now create your Mobile Service in the West US region as well.

Summary

The above features are all now live in production and are available to use immediately. If you don't already have a Windows Azure account, you can sign up for a free trial and start using Mobile Services today. Visit the Windows Azure Mobile Developer Center to learn more about how to build apps with Mobile Services. We'll have even more new features and enhancements coming later this week – including .NET 4.5 support for Windows Azure Web Sites. Keep an eye on my blog for details as new features become available. Hope this helps, Scott P.S. In addition to blogging, I am also now using Twitter for quick updates and to share links. Follow me at: twitter.com/scottgu

    Read the article

  • C++ - Conway's Game of Life & Stepping Backwards

    - by Gabe
I was able to create a version of Conway's Game of Life that either stepped forward on each click, or just ran forward using a timer. (I'm doing this using Qt.) Now I need to be able to save all previous game grids, so that I can step backwards by clicking a button. I'm trying to use a stack, and it seems like I'm pushing the old grid cells onto the stack correctly, but when I run it in Qt, the grids don't change when I click BACK. I've tried different things for the last three hours, to no avail. Any ideas? gridwindow.cpp - my problem should be in here somewhere, probably in the handleBack() func. #include <iostream> #include "gridwindow.h" using namespace std; // Constructor for window. It constructs the three portions of the GUI and lays them out vertically. GridWindow::GridWindow(QWidget *parent,int rows,int cols) : QWidget(parent) { QHBoxLayout *header = setupHeader(); // Setup the title at the top. QGridLayout *grid = setupGrid(rows,cols); // Setup the grid of colored cells in the middle. QHBoxLayout *buttonRow = setupButtonRow(); // Setup the row of buttons across the bottom. QVBoxLayout *layout = new QVBoxLayout(); // Puts everything together. layout->addLayout(header); layout->addLayout(grid); layout->addLayout(buttonRow); setLayout(layout); } // Destructor. GridWindow::~GridWindow() { delete title; } // Builds header section of the GUI. QHBoxLayout* GridWindow::setupHeader() { QHBoxLayout *header = new QHBoxLayout(); // Creates horizontal box. header->setAlignment(Qt::AlignHCenter); this->title = new QLabel("CONWAY'S GAME OF LIFE",this); // Creates big, bold, centered label (title): "Conway's Game of Life." this->title->setAlignment(Qt::AlignHCenter); this->title->setFont(QFont("Arial", 32, QFont::Bold)); header->addWidget(this->title); // Adds widget to layout. return header; // Returns header to grid window. } // Builds the grid of cells. This method populates the grid's 2D array of GridCells with MxN cells. QGridLayout* GridWindow::setupGrid(int rows,int cols) { isRunning = false; QGridLayout *grid = new QGridLayout(); // Creates grid layout. grid->setHorizontalSpacing(0); // No empty spaces. Cells should be contiguous. grid->setVerticalSpacing(0); grid->setSpacing(0); grid->setAlignment(Qt::AlignHCenter); for(int i=0; i < rows; i++) //Each row is a vector of grid cells. { std::vector<GridCell*> row; // Creates new vector for current row. cells.push_back(row); for(int j=0; j < cols; j++) { GridCell *cell = new GridCell(); // Creates and adds new cell to row. cells.at(i).push_back(cell); grid->addWidget(cell,i,j); // Adds to cell to grid layout. Column expands vertically. grid->setColumnStretch(j,1); } grid->setRowStretch(i,1); // Sets row expansion horizontally. } return grid; // Returns grid. } // Builds footer section of the GUI. QHBoxLayout* GridWindow::setupButtonRow() { QHBoxLayout *buttonRow = new QHBoxLayout(); // Creates horizontal box for buttons. buttonRow->setAlignment(Qt::AlignHCenter); // Clear Button - Clears cell; sets them all to DEAD/white. QPushButton *clearButton = new QPushButton("CLEAR"); clearButton->setFixedSize(100,25); connect(clearButton, SIGNAL(clicked()), this, SLOT(handlePause())); // Pauses timer before clearing. connect(clearButton, SIGNAL(clicked()), this, SLOT(handleClear())); // Connects to clear function to make all cells DEAD/white. buttonRow->addWidget(clearButton); // Forward Button - Steps one step forward.
QPushButton *forwardButton = new QPushButton("FORWARD"); forwardButton->setFixedSize(100,25); connect(forwardButton, SIGNAL(clicked()), this, SLOT(handleForward())); // Signals to handleForward function.. buttonRow->addWidget(forwardButton); // Back Button - Steps one step backward. QPushButton *backButton = new QPushButton("BACK"); backButton->setFixedSize(100,25); connect(backButton, SIGNAL(clicked()), this, SLOT(handleBack())); // Signals to handleBack funciton. buttonRow->addWidget(backButton); // Start Button - Starts game when user clicks. Or, resumes game after being paused. QPushButton *startButton = new QPushButton("START/RESUME"); startButton->setFixedSize(100,25); connect(startButton, SIGNAL(clicked()), this, SLOT(handlePause())); // Deletes current timer if there is one. Then restarts everything. connect(startButton, SIGNAL(clicked()), this, SLOT(handleStart())); // Signals to handleStart function. buttonRow->addWidget(startButton); // Pause Button - Pauses simulation of game. QPushButton *pauseButton = new QPushButton("PAUSE"); pauseButton->setFixedSize(100,25); connect(pauseButton, SIGNAL(clicked()), this, SLOT(handlePause())); // Signals to pause function which pauses timer. buttonRow->addWidget(pauseButton); // Quit Button - Exits program. QPushButton *quitButton = new QPushButton("EXIT"); quitButton->setFixedSize(100,25); connect(quitButton, SIGNAL(clicked()), qApp, SLOT(quit())); // Signals the quit slot which ends the program. buttonRow->addWidget(quitButton); return buttonRow; // Returns bottom of layout. } /* SLOT method for handling clicks on the "clear" button. Receives "clicked" signals on the "Clear" button and sets all cells to DEAD. */ void GridWindow::handleClear() { for(unsigned int row=0; row < cells.size(); row++) // Loops through current rows' cells. { for(unsigned int col=0; col < cells[row].size(); col++) // Loops through the rows'columns' cells. { GridCell *cell = cells[row][col]; // Grab the current cell & set its value to dead. cell->setType(DEAD); } } } /* SLOT method for handling clicks on the "start" button. Receives "clicked" signals on the "start" button and begins game simulation. */ void GridWindow::handleStart() { isRunning = true; // It is running. Sets isRunning to true. this->timer = new QTimer(this); // Creates new timer. connect(this->timer, SIGNAL(timeout()), this, SLOT(timerFired())); // Connect "timerFired" method class to the "timeout" signal fired by the timer. this->timer->start(500); // Timer to fire every 500 milliseconds. } /* SLOT method for handling clicks on the "pause" button. Receives "clicked" signals on the "pause" button and stops the game simulation. */ void GridWindow::handlePause() { if(isRunning) // If it is running... this->timer->stop(); // Stops the timer. isRunning = false; // Set to false. } void GridWindow::handleForward() { if(isRunning); // If it's running, do nothing. else timerFired(); // It not running, step forward one step. } void GridWindow::handleBack() { std::vector<std::vector<GridCell*> > cells2; if(isRunning); // If it's running, do nothing. else if(backStack.empty()) cout << "EMPTYYY" << endl; else { cells2 = backStack.peek(); for (unsigned int f = 0; f < cells.size(); f++) // Loop through cells' rows. { for (unsigned int g = 0; g < cells.at(f).size(); g++) // Loop through cells columns. { cells[f][g]->setType(cells2[f][g]->getType()); // Set cells[f][g]'s type to cells2[f][g]'s type. 
} } cout << "PRE=POP" << endl; backStack.pop(); cout << "OYYYY" << endl; } } // Accessor method - Gets the 2D vector of grid cells. std::vector<std::vector<GridCell*> >& GridWindow::getCells() { return this->cells; } /* TimerFired function: 1) 2D-Vector cells2 is declared. 2) cells2 is initliazed with loops/push_backs so that all its cells are DEAD. 3) We loop through cells, and count the number of LIVE neighbors next to a given cell. --> Depending on how many cells are living, we choose if the cell should be LIVE or DEAD in the next simulation, according to the rules. -----> We save the cell type in cell2 at the same indice (the same row and column cell in cells2). 4) After check all the cells (and save the next round values in cells 2), we set cells's gridcells equal to cells2 gridcells. --> This causes the cells to be redrawn with cells2 types (white or black). */ void GridWindow::timerFired() { backStack.push(cells); std::vector<std::vector<GridCell*> > cells2; // Holds new values for 2D vector. These are the next simulation round of cell types. for(unsigned int i = 0; i < cells.size(); i++) // Loop through the rows of cells2. (Same size as cells' rows.) { vector<GridCell*> row; // Creates Gridcell* vector to push_back into cells2. cells2.push_back(row); // Pushes back row vectors into cells2. for(unsigned int j = 0; j < cells[i].size(); j++) // Loop through the columns (the cells in each row). { GridCell *cell = new GridCell(); // Creates new GridCell. cell->setType(DEAD); // Sets cell type to DEAD/white. cells2.at(i).push_back(cell); // Pushes back the DEAD cell into cells2. } // This makes a gridwindow the same size as cells with all DEAD cells. } for (unsigned int m = 0; m < cells.size(); m++) // Loop through cells' rows. { for (unsigned int n = 0; n < cells.at(m).size(); n++) // Loop through cells' columns. { unsigned int neighbors = 0; // Counter for number of LIVE neighbors for a given cell. // We know check all different variations of cells[i][j] to count the number of living neighbors for each cell. // We check m > 0 and/or n > 0 to make sure we don't access negative indexes (ex: cells[-1][0].) // We check m < size to make sure we don't try to access rows out of the vector (ex: row 5, if only 4 rows). // We check n < row size to make sure we don't access column item out of the vector (ex: 10th item in a column of only 9 items). // If we find that the Type = 1 (it is LIVE), then we add 1 to the neighbor. // Else - we add nothing to the neighbor counter. // Neighbor is the number of LIVE cells next to the current cell. if(m > 0 && n > 0) { if (cells[m-1][n-1]->getType() == 1) neighbors += 1; } if(m > 0) { if (cells[m-1][n]->getType() == 1) neighbors += 1; if(n < (cells.at(m).size() - 1)) { if (cells[m-1][n+1]->getType() == 1) neighbors += 1; } } if(n > 0) { if (cells[m][n-1]->getType() == 1) neighbors += 1; if(m < (cells.size() - 1)) { if (cells[m+1][n-1]->getType() == 1) neighbors += 1; } } if(n < (cells.at(m).size() - 1)) { if (cells[m][n+1]->getType() == 1) neighbors += 1; } if(m < (cells.size() - 1)) { if (cells[m+1][n]->getType() == 1) neighbors += 1; } if(m < (cells.size() - 1) && n < (cells.at(m).size() - 1)) { if (cells[m+1][n+1]->getType() == 1) neighbors += 1; } // Done checking number of neighbors for cells[m][n] // Now we change cells2 if it should switch in the next simulation step. // cells2 holds the values of what cells should be on the next iteration of the game. // We can't change cells right now, or it would through off our other cell values. 
// Apply game rules to cells: Create new, updated grid with the roundtwo vector. // Note - LIVE is 1; DEAD is 0. if (cells[m][n]->getType() == 1 && neighbors < 2) // If cell is LIVE and has less than 2 LIVE neighbors -> Set to DEAD. cells2[m][n]->setType(DEAD); else if (cells[m][n]->getType() == 1 && neighbors > 3) // If cell is LIVE and has more than 3 LIVE neighbors -> Set to DEAD. cells2[m][n]->setType(DEAD); else if (cells[m][n]->getType() == 1 && (neighbors == 2 || neighbors == 3)) // If cell is LIVE and has 2 or 3 LIVE neighbors -> Set to LIVE. cells2[m][n]->setType(LIVE); else if (cells[m][n]->getType() == 0 && neighbors == 3) // If cell is DEAD and has 3 LIVE neighbors -> Set to LIVE. cells2[m][n]->setType(LIVE); } } // Now we've gone through all of cells, and saved the new values in cells2. // Now we loop through cells and set all the cells' types to those of cells2. for (unsigned int f = 0; f < cells.size(); f++) // Loop through cells' rows. { for (unsigned int g = 0; g < cells.at(f).size(); g++) // Loop through cells columns. { cells[f][g]->setType(cells2[f][g]->getType()); // Set cells[f][g]'s type to cells2[f][g]'s type. } } } stack.h - Here's my stack. #ifndef STACK_H_ #define STACK_H_ #include <iostream> #include "node.h" template <typename T> class Stack { private: Node<T>* top; int listSize; public: Stack(); int size() const; bool empty() const; void push(const T& value); void pop(); T& peek() const; }; template <typename T> Stack<T>::Stack() : top(NULL) { listSize = 0; } template <typename T> int Stack<T>::size() const { return listSize; } template <typename T> bool Stack<T>::empty() const { if(listSize == 0) return true; else return false; } template <typename T> void Stack<T>::push(const T& value) { Node<T>* newOne = new Node<T>(value); newOne->next = top; top = newOne; listSize++; } template <typename T> void Stack<T>::pop() { Node<T>* oldT = top; top = top->next; delete oldT; listSize--; } template <typename T> T& Stack<T>::peek() const { return top->data; // Returns data in top item. } #endif gridcell.cpp - Gridcell implementation #include <iostream> #include "gridcell.h" using namespace std; // Constructor: Creates a grid cell. GridCell::GridCell(QWidget *parent) : QFrame(parent) { this->type = DEAD; // Default: Cell is DEAD (white). setFrameStyle(QFrame::Box); // Set the frame style. This is what gives each box its black border. this->button = new QPushButton(this); //Creates button that fills entirety of each grid cell. this->button->setSizePolicy(QSizePolicy::Expanding,QSizePolicy::Expanding); // Expands button to fill space. this->button->setMinimumSize(19,19); //width,height // Min height and width of button. QHBoxLayout *layout = new QHBoxLayout(); //Creates a simple layout to hold our button and add the button to it. layout->addWidget(this->button); setLayout(layout); layout->setStretchFactor(this->button,1); // Lets the buttons expand all the way to the edges of the current frame with no space leftover layout->setContentsMargins(0,0,0,0); layout->setSpacing(0); connect(this->button,SIGNAL(clicked()),this,SLOT(handleClick())); // Connects clicked signal with handleClick slot. redrawCell(); // Calls function to redraw (set new type for) the cell. } // Basic destructor. GridCell::~GridCell() { delete this->button; } // Accessor for the cell type. CellType GridCell::getType() const { return(this->type); } // Mutator for the cell type. Also has the side effect of causing the cell to be redrawn on the GUI. 
void GridCell::setType(CellType type) { this->type = type; redrawCell(); // Sets type and redraws cell. } // Handler slot for button clicks. This method is called whenever the user clicks on this cell in the grid. void GridCell::handleClick() { // When clicked on... if(this->type == DEAD) // If type is DEAD (white), change to LIVE (black). type = LIVE; else type = DEAD; // If type is LIVE (black), change to DEAD (white). setType(type); // Sets new type (color). setType Calls redrawCell() to recolor. } // Method to check cell type and return the color of that type. Qt::GlobalColor GridCell::getColorForCellType() { switch(this->type) { default: case DEAD: return Qt::white; case LIVE: return Qt::black; } } // Helper method. Forces current cell to be redrawn on the GUI. Called whenever the setType method is invoked. void GridCell::redrawCell() { Qt::GlobalColor gc = getColorForCellType(); //Find out what color this cell should be. this->button->setPalette(QPalette(gc,gc)); //Force the button in the cell to be the proper color. this->button->setAutoFillBackground(true); this->button->setFlat(true); //Force QT to NOT draw the borders on the button } Thanks a lot. Let me know if you need anything else.
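A hint for readers who hit the same symptom: in timerFired(), backStack.push(cells) copies only the GridCell pointers, so the saved "snapshot" still points at the very cells that the rest of timerFired() then mutates. By the time handleBack() calls peek(), the snapshot already describes the current grid, and copying it onto itself changes nothing on screen. One fix, sketched here under the assumption that backStack is re-declared as Stack<std::vector<std::vector<CellType> > >, is to push the cell types (values) instead of the pointers:

// In timerFired(), before any cell is modified:
std::vector<std::vector<CellType> > snapshot;
for (unsigned int i = 0; i < cells.size(); i++) {
    std::vector<CellType> row;
    for (unsigned int j = 0; j < cells[i].size(); j++)
        row.push_back(cells[i][j]->getType()); // copy the value, not the pointer
    snapshot.push_back(row);
}
backStack.push(snapshot); // a value copy the simulation can no longer mutate

handleBack() would then pop one snapshot and call setType() on each cell from it.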

    Read the article

  • Calculix Data Visualiser using QT

    - by Ann
I am doing a project on a CalculiX data visualizer using Qt. I have to draw the structure, and after a force is applied the displacement should be shown as a variation in color. I chose HSV coloring, but while executing I got the error message "QColor::fromHsv: HSV parameters out of range". The code is: DataViz1::DataViz1(QWidget *parent) : QWidget(parent), ui(new Ui::DataViz1) { DArea = new QGLScreen(this); DArea->setGeometry(QRect(10,10,700,600)); //TODO This values are feeded by user dfile="/home/41407/color.txt";//input file with displacement mfile="/home/41407/mesh21.txt";//input file nodeId="*NODE"; elId="*ELEMENT"; DataId="displ"; parseMfile(); parseDfile(); DArea->Nodes=Nodes; DArea->Elements=Elements; DArea->Data=Data; DArea->fillColorArray(); //printf("Colr is %d",DArea->pickColor(-11.02,0));fflush(stdout); ui->setupUi(this); } DataViz1::~DataViz1() { delete ui; } void DataViz1::parseMfile() { QFile file(mfile); if (!file.open(QIODevice::ReadOnly | QIODevice::Text)) return; int node_end=0; QTextStream in(&file); in.skipWhiteSpace(); while (!in.atEnd()) { QString line = in.readLine(); if(line.startsWith(nodeId))//Node block in Mfile { while(1) { line = in.readLine(); if(line.startsWith(elId)) { break; } Nodes<<line; } while(1) { line = in.readLine(); Elements<<line; //printf("Element is %s\n",line.toLocal8Bit().constData());fflush(stdout); if(in.atEnd()) break; } } } } void DataViz1::parseDfile() { QFile file(dfile); if (!file.open(QIODevice::ReadOnly | QIODevice::Text)) return; int node_end=0; QTextStream in(&file); in.skipWhiteSpace(); while (!in.atEnd()) { QString line = in.readLine(); if(line.startsWith(DataId)) { continue; } line = in.readLine(); Data<<line; } } /*......................................................................*/ #include "qglscreen.h" #include /* header name lost in the original formatting */ GLfloat LightAmbient[]= { 0.5f, 0.5f, 0.5f, 1.0f }; GLfloat LightDiffuse[]= { 1.0f, 1.0f, 1.0f, 1.0f }; GLfloat LightPosition[]= { 0.0f, 0.0f, 2.0f, 1.0f }; QGLScreen::QGLScreen(QWidget *parent):QGLWidget(QGLFormat(QGL::SampleBuffers), parent) { clearColor = Qt::black; xRot = 0; yRot = 0; zRot = 0; #ifdef QT_OPENGL_ES_2 program = 0; #endif //TODO user input ElType="HE8"; DType="SolidFrame"; axis="X"; } QGLScreen::~QGLScreen() { } QSize QGLScreen::minimumSizeHint() const { return QSize(50, 50); } QSize QGLScreen::sizeHint() const { return QSize(200, 200); } void QGLScreen::setClearColor(const QColor &color) { clearColor = color; updateGL(); } void QGLScreen::initializeGL() { xRot=0; yRot=0; zRot=0; scaling = 1.0; /* select clearing (background) color */ glClearColor (0.0, 0.0, 0.0, 0.0); glMatrixMode(GL_PROJECTION); glLoadIdentity(); // glViewport(0,0,10,10); glOrtho(-10.0, +10.0, -10.0, +10.0, -10.0,+10.0); glEnable (GL_LINE_SMOOTH); glHint (GL_LINE_SMOOTH_HINT, GL_DONT_CARE); } void QGLScreen::wheel1() { scaling1 += .0025; count2++; update(); } void QGLScreen::wheel2() { if(count2>-14) { scaling1 -= .0025; count2--; update(); } } void QGLScreen::drawModel(int x1,int y1,int x2,int y2) { makeCurrent(); QStringList Cnode,Celement; for (int i = 0; i < Elements.size(); ++i) { Celement=Elements.at(i).split(","); // printf("Element is %s",Celement.at(0).toLocal8Bit().constData());fflush(stdout); //printf("Node at el is %s\n",(findNode(Celement.at(1).toInt())).at(1).toLocal8Bit().constData()); fflush(stdout); if(ElType=="HE8") { //First four nodes float ENX1=(findNode(Celement.at(1).toInt())).at(1).toDouble(); float ENX2=(findNode(Celement.at(2).toInt())).at(1).toDouble(); float ENX3=(findNode(Celement.at(3).toInt())).at(1).toDouble(); float
ENX4=(findNode(Celement.at(4).toInt())).at(1).toDouble(); float ENY1=(findNode(Celement.at(1).toInt())).at(2).toDouble(); float ENY2=(findNode(Celement.at(2).toInt())).at(2).toDouble(); float ENY3=(findNode(Celement.at(3).toInt())).at(2).toDouble(); float ENY4=(findNode(Celement.at(4).toInt())).at(2).toDouble(); float ENZ1=(findNode(Celement.at(1).toInt())).at(3).toDouble(); float ENZ2=(findNode(Celement.at(2).toInt())).at(3).toDouble(); float ENZ3=(findNode(Celement.at(3).toInt())).at(3).toDouble(); float ENZ4=(findNode(Celement.at(4).toInt())).at(3).toDouble(); //Second four Nodes float ENX5=(findNode(Celement.at(5).toInt())).at(1).toDouble(); float ENX6=(findNode(Celement.at(6).toInt())).at(1).toDouble(); float ENX7=(findNode(Celement.at(7).toInt())).at(1).toDouble(); float ENX8=(findNode(Celement.at(8).toInt())).at(1).toDouble(); float ENY5=(findNode(Celement.at(5).toInt())).at(2).toDouble(); float ENY6=(findNode(Celement.at(6).toInt())).at(2).toDouble(); float ENY7=(findNode(Celement.at(7).toInt())).at(2).toDouble(); float ENY8=(findNode(Celement.at(8).toInt())).at(2).toDouble(); float ENZ5=(findNode(Celement.at(5).toInt())).at(3).toDouble(); float ENZ6=(findNode(Celement.at(6).toInt())).at(3).toDouble(); float ENZ7=(findNode(Celement.at(7).toInt())).at(3).toDouble(); float ENZ8=(findNode(Celement.at(8).toInt())).at(3).toDouble(); //Identify Colors GLfloat ENC[8][3]; for(int k=1;k<8;k++) { int hsv=pickColor(findData(Celement.at(k).toInt()).toDouble(),0); //printf("hsv is %d=",hsv);fflush(stdout); getRGB(hsv); //printf("%d*%d*%d\n",red,green,blue); //ENC[k]={red,green,blue}; ENC[k][0]=red; ENC[k][1]=green; ENC[k][2]=blue; } //Plot the first four direct loop if(DType=="WireFrame"){ glBegin(GL_LINE_LOOP); glColor3f(255,0,0); glVertex3f(ENX1,ENY1,ENZ1); glColor3f(255,0,0); glVertex3f(ENX2,ENY2,ENZ2); glColor3f(255,0,0); glVertex3f(ENX3,ENY3,ENZ3); glColor3f(255,0,0); glVertex3f(ENX4,ENY4,ENZ4); glEnd(); //Plot the second four direct loop glBegin(GL_LINE_LOOP); glColor3f(0,0,255); glVertex3f(ENX5,ENY5,ENZ5); glColor3f(0,0,255); glVertex3f(ENX6,ENY6,ENZ6); glColor3f(0,0,255); glVertex3f(ENX7,ENY7,ENZ7); glColor3f(0,0,255); glVertex3f(ENX8,ENY8,ENZ8); glEnd(); //Plot the interconnections glBegin(GL_LINE); glColor3f(150,150,150); glVertex3f(ENX1,ENY1,ENZ1); glVertex3f(ENX5,ENY5,ENZ5); glEnd(); glBegin(GL_LINE); glColor3f(150,150,150); glVertex3f(ENX2,ENY2,ENZ2); glVertex3f(ENX6,ENY6,ENZ6); glEnd(); glBegin(GL_LINE); glColor3f(150,150,150); glVertex3f(ENX3,ENY3,ENZ3); glVertex3f(ENX7,ENY7,ENZ7); glEnd(); glBegin(GL_LINE); glColor3f(150,150,150); glVertex3f(ENX4,ENY4,ENZ4); glVertex3f(ENX8,ENY8,ENZ8); glEnd(); } if(DType=="SolidFrame") { glBegin(GL_QUADS); glColor3fv(ENC[1]); glVertex3f(ENX1,ENY1,ENZ1); glColor3fv(ENC[2]); glVertex3f(ENX2,ENY2,ENZ2); glColor3fv(ENC[3]); glVertex3f(ENX3,ENY3,ENZ3); glColor3fv(ENC[4]); glVertex3f(ENX4,ENY4,ENZ4); glEnd(); //break; glBegin(GL_QUADS); glColor3fv(ENC[5]); glVertex3f(ENX5,ENY5,ENZ5); glColor3fv(ENC[6]); glVertex3f(ENX6,ENY6,ENZ6); glColor3fv(ENC[7]); glVertex3f(ENX7,ENY7,ENZ7); glColor3fv(ENC[8]); glVertex3f(ENX8,ENY8,ENZ8); glEnd(); glBegin(GL_QUAD_STRIP); glColor3fv(ENC[1]); glVertex3f(ENX1,ENY1,ENZ1); glColor3fv(ENC[5]); glVertex3f(ENX5,ENY5,ENZ5); glColor3fv(ENC[2]); glVertex3f(ENX2,ENY2,ENZ2); glColor3fv(ENC[6]); glVertex3f(ENX6,ENY6,ENZ6); glEnd(); glBegin(GL_QUAD_STRIP); glColor3fv(ENC[3]); glVertex3f(ENX3,ENY3,ENZ3); glColor3fv(ENC[7]); glVertex3f(ENX7,ENY7,ENZ7); glColor3fv(ENC[4]); glVertex3f(ENX4,ENY4,ENZ4); glColor3fv(ENC[8]); 
glVertex3f(ENX8,ENY8,ENZ8); glEnd(); glBegin(GL_QUAD_STRIP); glColor3fv(ENC[2]); glVertex3f(ENX2,ENY2,ENZ2); glColor3fv(ENC[6]); glVertex3f(ENX6,ENY6,ENZ6); glColor3fv(ENC[3]); glVertex3f(ENX3,ENY3,ENZ3); glColor3fv(ENC[7]); glVertex3f(ENX7,ENY7,ENZ7); glEnd(); glBegin(GL_QUAD_STRIP); glColor3fv(ENC[1]); glVertex3f(ENX1,ENY1,ENZ1); glColor3fv(ENC[5]); glVertex3f(ENX5,ENY5,ENZ5); glColor3fv(ENC[4]); glVertex3f(ENX4,ENY4,ENZ4); glColor3fv(ENC[8]); glVertex3f(ENX8,ENY8,ENZ8); glEnd(); } } } } QStringList QGLScreen::findNode(int element) { QStringList Temp; for (int i = 0; i < Nodes.size(); ++i) { Temp=Nodes.at(i).split(","); if(Temp.at(0).toInt()==element) { break; } } return Temp; } QString QGLScreen::findData(int Node) { QString Temp; QRegExp sep("\s+"); for (int i = 0; i < Data.size(); ++i) { if((Data.at(i).split("\t")).at(0).section(sep,1,1).toInt()==Node) { if(axis=="X") { Temp=Data.at(i).split("\t").at(0).section(sep,2,2); } if(axis=="Y") { Temp=Data.at(i).split("\t").at(0).section(sep,3,3); } if(axis=="Z") { Temp=Data.at(i).split("\t").at(0).section(sep,4,4); } break; } } return Temp; } void QGLScreen::fillColorArray() { QString Temp1,Temp2,Temp3; double d1s=0,d2s=0,d3s=0,d1l=0,d2l=0,d3l=0,diff=0; QRegExp sep("\\s+"); for (int i = 0; i < Data.size(); ++i) { Temp1=(Data.at(i).split("\t")).at(0).section(sep,2,2); if(d1s>Temp1.toDouble()) { d1s=Temp1.toDouble(); } if(d1l<Temp1.toDouble()) { d1l=Temp1.toDouble(); } Temp2=(Data.at(i).split("\t")).at(0).section(sep,3,3); if(d2s>Temp2.toDouble()) { d2s=Temp2.toDouble(); } if(d2l<Temp2.toDouble()) { d2l=Temp2.toDouble(); } Temp3=(Data.at(i).split("\t")).at(0).section(sep,4,4); if(d3s>Temp3.toDouble()) { d3s=Temp3.toDouble(); } if(d3l<Temp3.toDouble()) { d3l=Temp3.toDouble(); } // printf("data is %s",Temp.toLocal8Bit().constData());fflush(stdout); } color[0][0]=d1l; for(int i=1;i<360;i++) { //printf("Large is%f small is %f",d1l,d1s); diff=d1l-d1s; if(d1l==0&&d1s<0) color[0][i]=color[0][i-1]-diff/360; else if(d1l>0&&d1s==0) color[0][i]=color[0][i-1]+diff/360; else if(d1l>0&&d1s<0) color[0][i]=color[0][i-1]-diff/360; diff=d2l-d2s; if(d2l==0&&d2s<0) color[1][i]=color[1][i-1]-diff/360; else if(d2l>0&&d2s==0) color[1][i]=color[1][i-1]+diff/360; else if(d2l>0&&d2s<0) color[1][i]=color[1][i-1]-diff/360; diff=d3l-d3s; if(d3l==0&&d3s<0) color[2][i]=color[2][i-1]-diff/360; else if(d3l>0&&d3s==0) color[2][i]=color[2][i-1]+diff/360; else if(d3l>0&&d3s<0) color[2][i]=color[2][i-1]-diff/360; } //for(int i=0;i<360;i++) printf("%d %f %f %f\n",i,color[0][i],color[1][i],color[2][i]); } int QGLScreen::pickColor(double data,int Did) { int i,pos; if(axis=="X")Did=0; if(axis=="Y")Did=1; if(axis=="Z")Did=2; //printf("%f data is",data);fflush(stdout); for(int i=0;i<360;i++) { if(color[Did][i]<data && data>color[Did][i+1]) { //printf("Orginal dat is %f Data found is %f and pos %d\n",data,color[Did][i],i);fflush(stdout); pos=i; break; } } return pos; } void QGLScreen::getRGB(int hsv) { QColor c; c.setHsv(hsv,255,255,255); QColor r=QColor::fromHsv(hsv,255,255); red=r.red(); green=r.green(); blue=r.blue(); } void QGLScreen::paintGL() { glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT); glPushAttrib(GL_ALL_ATTRIB_BITS); glMatrixMode(GL_PROJECTION); glPushMatrix(); glLoadIdentity(); GLfloat x = 3.0 * GLfloat(width()) / height(); glOrtho(-x, +x, -3.0, +3.0, 4.0, 15.0); glMatrixMode(GL_MODELVIEW); glPushMatrix(); glLoadIdentity(); glTranslatef(0.0, 0.0, -10.0); glScalef(scaling, scaling, scaling); glRotatef(xRot, 1.0, 0.0, 0.0); glRotatef(yRot, 0.0, 1.0, 0.0); 
glRotatef(zRot, 0.0, 0.0, 1.0); drawModel(0,0,1,1); /* don't wait! * start processing buffered OpenGL routines */ glFlush (); } /*void QGLScreen::zoom1() { scaling+=.05; update(); }*/ void QGLScreen::resizeGL(int width, int height) { int side = qMin(width, height); glViewport((width - side) / 2, (height - side) / 2, side, side); #if !defined(QT_OPENGL_ES_2) glMatrixMode(GL_PROJECTION); glLoadIdentity(); #ifndef QT_OPENGL_ES glOrtho(-0.5, +0.5, +0.5, -0.5, 4.0, 15.0); #else glOrthof(-0.5, +0.5, +0.5, -0.5, 4.0, 15.0); #endif glMatrixMode(GL_MODELVIEW); #endif } void QGLScreen::mousePressEvent(QMouseEvent *event) { lastPos = event->pos(); } void QGLScreen::mouseMoveEvent(QMouseEvent *event) { GLfloat dx = GLfloat(event->x() - lastPos.x()) / width(); GLfloat dy = GLfloat(event->y() - lastPos.y()) / height(); if (event->buttons() & Qt::LeftButton) { xRot+= 180 * dy; yRot += 180 * dx; update(); } else if (event->buttons() & Qt::RightButton) { xRot += 180 * dy; yRot += 180 * dx; update(); } lastPos = event->pos(); } void QGLScreen::mouseReleaseEvent(QMouseEvent * /* event */) { emit clicked(); }
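A hint for anyone debugging the same warning: pickColor() declares pos without initializing it, so if the search loop never hits its break, the function returns an indeterminate value that then flows into QColor::fromHsv(), whose valid hue range is 0-359 (or -1). A guard worth trying while debugging (a sketch only; qBound comes from QtGlobal):

void QGLScreen::getRGB(int hsv)
{
    int h = qBound(0, hsv, 359); // clamp the hue into QColor's valid range
    QColor r = QColor::fromHsv(h, 255, 255);
    red = r.red(); green = r.green(); blue = r.blue();
}

Initializing pos (for example to 0) in pickColor() and revisiting its comparison logic would address the root cause.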

    Read the article

  • How can I change mouse keymapping

    - by zuberuber
I have a Razer DeathAdder (left-handed edition) and an A4Tech wireless mouse. My problem is that I don't know how to change the wireless mouse's key mapping (swapping left/right click). Can somebody guide me on how to do such a thing? List of my devices:

⎡ Virtual core pointer                          id=2   [master pointer  (3)]
⎜   ↳ Virtual core XTEST pointer                id=4   [slave  pointer  (2)]
⎜   ↳ Logitech Unifying Device. Wireless PID:4004  id=8   [slave  pointer  (2)]
⎜   ↳ Razer Razer DeathAdder                    id=11  [slave  pointer  (2)]
⎜   ↳ A4TECH USB Device                         id=12  [slave  pointer  (2)]
⎜   ↳ A4TECH USB Device                         id=13  [slave  pointer  (2)]
⎣ Virtual core keyboard                         id=3   [master keyboard (2)]
    ↳ Virtual core XTEST keyboard               id=5   [slave  keyboard (3)]
    ↳ Power Button                              id=6   [slave  keyboard (3)]
    ↳ Power Button                              id=7   [slave  keyboard (3)]
    ↳ Logitech USB Keyboard                     id=9   [slave  keyboard (3)]
    ↳ Logitech USB Keyboard                     id=10  [slave  keyboard (3)]

This is my Razer xinput:

Device 'Razer Razer DeathAdder':
Device Enabled (121): 1
Coordinate Transformation Matrix (123): 1.000000, 0.000000, 0.000000, 0.000000, 1.000000, 0.000000, 0.000000, 0.000000, 1.000000
Device Accel Profile (246): 0
Device Accel Constant Deceleration (247): 5.000000
Device Accel Adaptive Deceleration (248): 1.000000
Device Accel Velocity Scaling (249): 10.000000
Device Product ID (240): 5426, 22
Device Node (241): "/dev/input/event4"
Evdev Axis Inversion (250): 0, 0
Evdev Axes Swap (252): 0
Axis Labels (253): "Rel X" (131), "Rel Y" (132), "Rel Vert Wheel" (274)
Button Labels (254): "Button Left" (124), "Button Middle" (125), "Button Right" (126), "Button Wheel Up" (127), "Button Wheel Down" (128), "Button Horiz Wheel Left" (129), "Button Horiz Wheel Right" (130), "Button Side" (269), "Button Extra" (270), "Button Forward" (271), "Button Back" (272), "Button Task" (273), "Button Unknown" (243), "Button Unknown" (243), "Button Unknown" (243), "Button Unknown" (243)
Evdev Middle Button Emulation (255): 0
Evdev Middle Button Timeout (256): 50
Evdev Third Button Emulation (257): 0
Evdev Third Button Emulation Timeout (258): 1000
Evdev Third Button Emulation Button (259): 3
Evdev Third Button Emulation Threshold (260): 20
Evdev Wheel Emulation (261): 0
Evdev Wheel Emulation Axes (262): 0, 0, 4, 5
Evdev Wheel Emulation Inertia (263): 10
Evdev Wheel Emulation Timeout (264): 200
Evdev Wheel Emulation Button (265): 4
Evdev Drag Lock Buttons (266): 0

And this is my wireless mouse xinput:

Device 'A4TECH USB Device':
Device Enabled (121): 1
Coordinate Transformation Matrix (123): 1.000000, 0.000000, 0.000000, 0.000000, 1.000000, 0.000000, 0.000000, 0.000000, 1.000000
Device Accel Profile (246): 0
Device Accel Constant Deceleration (247): 1.000000
Device Accel Adaptive Deceleration (248): 1.000000
Device Accel Velocity Scaling (249): 10.000000
Device Product ID (240): 2522, 1359
Device Node (241): "/dev/input/event16"
Evdev Axis Inversion (250): 0, 0
Evdev Axes Swap (252): 0
Axis Labels (253): "Rel X" (131), "Rel Y" (132), "Rel Horiz Wheel" (245), "Rel Vert Wheel" (274)
Button Labels (254): "Button Left" (124), "Button Middle" (125), "Button Right" (126), "Button Wheel Up" (127), "Button Wheel Down" (128), "Button Horiz Wheel Left" (129), "Button Horiz Wheel Right" (130), "Button Side" (269), "Button Extra" (270), "Button Forward" (271), "Button Back" (272), "Button Task" (273), "Button Unknown" (243), "Button Unknown" (243), "Button Unknown" (243), "Button Unknown" (243), "Button Unknown" (243), "Button Unknown" (243), "Button Unknown" (243), "Button Unknown" (243), "Button Unknown" (243), "Button Unknown" (243), "Button Unknown" (243), "Button Unknown" (243)
Evdev Middle Button Emulation (255): 0
Evdev Middle Button Timeout (256): 50
Evdev Third Button Emulation (257): 0
Evdev Third Button Emulation Timeout (258): 1000
Evdev Third Button Emulation Button (259): 3
Evdev Third Button Emulation Threshold (260): 20
Evdev Wheel Emulation (261): 0
Evdev Wheel Emulation Axes (262): 0, 0, 4, 5
Evdev Wheel Emulation Inertia (263): 10
Evdev Wheel Emulation Timeout (264): 200
Evdev Wheel Emulation Button (265): 4
Evdev Drag Lock Buttons (266): 0

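For readers with the same question: the usual tool is xinput's button map. A sketch, using the device ids from the listing above (the A4Tech pointers are ids 12 and 13; ids can change across reboots, so match them against xinput list first):

xinput set-button-map 12 3 2 1
xinput set-button-map 13 3 2 1

This swaps buttons 1 and 3 (left and right click) for those devices. The mapping is not persistent; to keep it across sessions it would typically go into a startup script or an xorg.conf.d snippet.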
    Read the article

  • SQL SERVER – Weekend Project – Experimenting with ACID Transactions, SQL Compliant, Elastically Scalable Database

    - by pinaldave
Database technology is a huge world. I always like to explore beyond what I know and share what I learn. The weekend is the best time, when I sit around downloading random software onto my machine, which I like to call a lab machine (it is a pretty old laptop, hardly lab quality), and experiment with it. There are so many free betas available for download that it's hard to keep track, and even harder to find the time to play with very many of them. This blog is about one you shouldn't miss if you are interested in learning about various relational databases. NuoDB just released their Beta 7. I had already downloaded their Beta 6 and yesterday did the same for 7. My impression is that they are onto something very, very interesting. In fact, it might be something really promising in terms of database elasticity, scale and operational cost reduction. The folks at NuoDB say they are working on the world's first "emergent" database, which they tout as a brand new transactional database that is intended to dramatically change what's possible with OLTP. It is SQL compliant, guarantees ACID transactions, yet scales elastically on heterogeneous and decentralized cloud-based resources. An interesting note for sure, and one that made me explore more. Based on what I've seen so far, they are solving the architectural challenge that exists between elastic, cloud-based compute infrastructures designed to scale out in response to workload requirements and the traditional relational database management system's architecture of central control. Here's my experience with the NuoDB Beta 6 so far. First, they pretty much threw away all the features you'd associate with existing RDBMS architectures except SQL and ACID transactions, which they were smart to keep. It looks like they have incorporated a number of the big ideas from various algorithms, systems and techniques to achieve maximum DB scalability. From a user's perspective, the NuoDB Beta software behaves like any other traditional SQL database and seems to offer all the benefits users have come to expect from standards-based SQL solutions. One interesting feature is that one can run a transactional node and a storage node on a Windows laptop, as well as on other platforms – indeed interesting. It's quite amazing to see a database elastically scale across machine boundaries. So, one of the basic NuoDB concepts is that as you need to scale out, you can easily use more inexpensive hardware when/where you need it. This is unlike what we have traditionally done to scale a database for an application – replace the hardware with something more powerful (faster CPUs and disks). This is where I started to feel like NuoDB is on to something that has the potential to elastically scale on commodity hardware while reducing operational expense for a big OLTP database to a degree we've never seen before. NuoDB is able to fully leverage the cloud in an asynchronous and highly decentralized manner – while providing both SQL compliance and ACID transactions. Basically, what NuoDB is doing is so new that it is all hard to believe until you've experienced it in action. I will keep you up to date as I test the NuoDB Beta 7, but if you are developing a web-scale application or have an on-premise app you are thinking of moving to the cloud, testing this beta is worth your time. If you do try it, let me know what you think.
Before I say anything more, I am going to run more experiments and tests on this product and compare it with other existing, similar products. For me it was a weekend well spent learning something new. I encourage you to download the Beta 7 version and share your opinions here. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Documentation, SQL Download, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology

    Read the article

  • WSS 3.0/MOSS 2007 Active Directory Forms Based Authentication PeoplePicker no users found

    - by John Haigh
After finding these steps online at http://dattard.blogspot.com/2008/11/active-directory-forms-based.html, I was all set to configure Active Directory Forms Based Authentication, except for one problem: those steps are missing one vital detail needed for FBA to work with Active Directory - a supplement to step 3, required before granting access in step 5 through the people picker. You need to give the people picker the Active Directory provider name; otherwise you will not be able to specify users through the Policy for Web Application:

<PeoplePickerWildcards>
    <clear />
    <add key="ADMembershipProvider" value="%" />
</PeoplePickerWildcards>

Recently we needed to use Forms Based Authentication with Active Directory from an extranet. This is how we got it to work.

1. Extend the Web Application
Instead of tweaking the internal web app, extend the web application you want to expose to the extranet, giving it the required host headers etc.

2. Configure SharePoint Central Admin to use FBA for the "new" Web Applications
Log in to SharePoint Central Admin. Go to Application Management / Application Security / Authentication Providers and change the web application to the one which needs to be configured for Forms Based Authentication. Click zone / default, change the authentication type to forms, enter the name of the ActiveDirectoryMembershipProvider instance under membership provider name (for example, "ADMembershipProvider"), and save this change.

3. Update the web.config of the SharePoint Central Admin site
Under the configuration node:

<connectionStrings>
  <add name="ADConnectionString" connectionString="LDAP://DynamicsAX.local/CN=Users,DC=DynamicsAX,DC=local" />
</connectionStrings>

Under the system.web node:

<membership defaultProvider="ADMembershipProvider">
  <providers>
    <add name="ADMembershipProvider" type="System.Web.Security.ActiveDirectoryMembershipProvider, System.Web, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a" connectionStringName="ADConnectionString" connectionUsername="xxx" connectionPassword="yyy" enableSearchMethods="true" attributeMapUsername="sAMAccountName"/>
  </providers>
</membership>

4. Update the web.config of the SharePoint web application
Repeat step 3 for the web.config of the SharePoint web application to be configured for Forms Based Authentication, and change the authentication in web.config to:

<authentication mode="Forms">
  <forms loginUrl="/_layouts/login.aspx"></forms>
</authentication>

5. Grant Access on the extended Web Application
Your extranet web application is now configured to use FBA. However, until the users who will be accessing the site via FBA are given permissions for the site, it will be inaccessible to them. To get started, open your browser and navigate to your farm's Central Administration site. Click on Application Management and then click on Policy for Web Application. Make sure that you are working on the extranet web application, then do the following: click on Add Users. In the Zones drop-down, select the appropriate extranet zone. IMPORTANT: if you select the incorrect zone, you may not be able to resolve user names; the zone you select must match the zone of the web application that is configured to use FBA. Click the Next button. In the Users edit box, type the name of the FBA user to whom you wish to give full control for the site. Click the Resolve link next to the Users edit box.
If the web application's FBA information has been configured correctly, the name will resolve and become underlined. Check the Full Control checkbox. Click the Finish button.

    Read the article

  • MySQL Cluster 7.3: On-Demand Webinar and Q&A Available

    - by Mat Keep
The on-demand webinar for the MySQL Cluster 7.3 Development Release is now available. You can learn more about the design and implementation of, and getting started with, all of the new MySQL Cluster 7.3 features from the comfort and convenience of your own device, including:

- Foreign Key constraints in MySQL Cluster
- Node.js NoSQL API
- Auto-installation of higher-performance distributed clusters

We received some great questions over the course of the webinar, and I wanted to share those for the benefit of a broader audience.

Q. What Foreign Key actions are supported?
A. The core referential actions defined in the SQL:2003 standard are implemented: CASCADE, RESTRICT, NO ACTION and SET NULL (a minimal example appears at the end of this post).

Q. Where are Foreign Keys implemented, i.e. data nodes or SQL nodes?
A. They are implemented in the data nodes, and can therefore be enforced for both the SQL and NoSQL APIs.

Q. Are they compatible with the InnoDB Foreign Key implementation?
A. Yes, with the following exceptions:
- InnoDB doesn't support "No Action" constraints; MySQL Cluster does.
- You can choose to suspend FK constraint enforcement with InnoDB using the FOREIGN_KEY_CHECKS parameter; at the moment, MySQL Cluster ignores that parameter.
- You cannot set up FKs between 2 tables where one is stored using MySQL Cluster and the other InnoDB.
- You cannot change primary keys through the NDB API, which means that the MySQL Server actually has to simulate such operations by deleting and re-adding the row. If the PK in the parent table has a FK constraint on it, then this causes non-ideal behaviour. With Restrict or No Action constraints, the change will result in an error. With Cascaded constraints, you'd want the rows in the child table to be updated with the new FK value, but the implicit delete of the row from the parent table would remove the associated rows from the child table, and the subsequent implicit insert into the parent wouldn't reinstate the child rows. For this reason, an attempt to add an ON UPDATE CASCADE where the parent column is a primary key will be rejected.

Q. Does adding or dropping Foreign Keys cause downtime due to a schema change?
A. Nope, this is an online operation. MySQL Cluster supports a number of online schema changes, e.g. adding and dropping indexes, adding columns, etc.

Q. Where can I see an example of Node.js with MySQL Cluster?
A. Check out the tutorial and download the code from GitHub.

Q. Can I use the auto-installer to support remote deployments? How about setting up MySQL Cluster 7.2?
A. Yes to both!

Q. Can I get a demo?
A. Check out the tutorial. You can download the code from http://labs.mysql.com/ – go to the Select Build drop-down box.

Q. What is the minimum network speed required for a geo-distributed cluster with synchronous replication?
A. If you're splitting your cluster between sites, then we recommend a network latency of 20 ms or less. Alternatively, use MySQL asynchronous replication, where the latency of your WAN doesn't impact the latency of your reads/writes.

Q. Where can one learn more about the PayPal project with MySQL Cluster?
A. Take a look at the following - you'll find press coverage, a video and slides from their keynote presentation.

So, if you want to learn more, listen to the new MySQL Cluster 7.3 on-demand webinar. MySQL Cluster 7.3 is still in the development phase, so it would be great to get your feedback on these new features, and on things you want to see!
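For reference, below is a minimal sketch of the new Foreign Key support; the table and column names are invented for illustration, and MySQL Cluster tables take the ndbcluster storage engine:

CREATE TABLE parent (
  id INT NOT NULL PRIMARY KEY
) ENGINE=ndbcluster;

CREATE TABLE child (
  id INT NOT NULL PRIMARY KEY,
  parent_id INT,
  INDEX parent_idx (parent_id),
  FOREIGN KEY (parent_id) REFERENCES parent(id)
    ON DELETE CASCADE
    ON UPDATE RESTRICT  -- ON UPDATE CASCADE on a parent PK would be rejected (see above)
) ENGINE=ndbcluster;

-- deleting a parent row now cascades to its children
DELETE FROM parent WHERE id = 1;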

    Read the article

< Previous Page | 108 109 110 111 112 113 114 115 116 117 118 119  | Next Page >