Search Results


  • What steps can you take to ensure sane build environments when compiling software?

    - by Chris Adams
    Hi guys, I've been stuck with a compilation problem when building a standardised virtual machine on CentOS 5.4, and I'm in the dark here as to a) why this error is occurring, and b) how to fix it. In the hope that someone else has stumbled across this problem too, I'm hoping someone can help me find the solution. I'm getting a "configure: error: newly created file is older than distributed files!" error when trying to compile Ruby Enterprise, as shown below when I run the installer, and the solutions offered on the forums (checking the time, and touching the files to update the time associated with them) don't seem to be helping here. What steps can I take to work out the cause of this problem?

        [vagrant@vagrant-centos-5 ruby-enterprise-1.8.7-2009.10]$ sudo ./installer
        Welcome to the Ruby Enterprise Edition installer
        This installer will help you install Ruby Enterprise Edition 1.8.7-2009.10.
        Don't worry, none of your system files will be touched if you don't want them to, so there is no risk that things will screw up.
        You can expect this from the installation process:
        1. Ruby Enterprise Edition will be compiled and optimized for speed for this system.
        2. Ruby on Rails will be installed for Ruby Enterprise Edition.
        3. You will learn how to tell Phusion Passenger to use Ruby Enterprise Edition instead of regular Ruby.
        Press Enter to continue, or Ctrl-C to abort.
        Checking for required software...
        * C compiler... found at /usr/bin/gcc
        * C++ compiler... found at /usr/bin/g++
        * The 'make' tool... found at /usr/bin/make
        * Zlib development headers... found
        * OpenSSL development headers... found
        * GNU Readline development headers... found
        --------------------------------------------
        Target directory
        Where would you like to install Ruby Enterprise Edition to?
        (All Ruby Enterprise Edition files will be put inside that directory.)
        [/opt/ruby-enterprise] :
        --------------------------------------------
        Compiling and optimizing the memory allocator for Ruby Enterprise Edition
        In the mean time, feel free to grab a cup of coffee.
        ./configure --prefix=/opt/ruby-enterprise --disable-dependency-tracking
        checking build system type... i686-pc-linux-gnu
        checking host system type... i686-pc-linux-gnu
        checking for a BSD-compatible install... /usr/bin/install -c
        checking whether build environment is sane... configure: error: newly created file is older than distributed files!
        Check your system clock

    This is a virtual machine running on VirtualBox, and the time of the host and the virtual machine are identical and up to date. I've also tried running this after updating the time with an ntp client, to no avail. I tried this after reading a post here from someone having a similar problem.

        [vagrant@vagrant-centos-5 ruby-enterprise-1.8.7-2009.10]$ date
        Tue Apr 27 08:09:05 BST 2010

    The other approach I've tried is to touch the top-level files in the build folder as suggested here, but this hasn't worked either (and to be honest, I'm not sure why it would have worked).

        [vagrant@vagrant-centos-5 ruby-enterprise-1.8.7-2009.10]$ sudo touch ruby-enterprise-1.8.7-2009.10/*

    I'm not sure what I can do next here - the problem seems to be the bash configure script that returns this error ("error: newly created file is older than distributed files!") at line 2214:

        { echo "$as_me:$LINENO: checking whether build environment is sane" >&5
        echo $ECHO_N "checking whether build environment is sane... $ECHO_C" >&6; }
        # Just in case
        sleep 1
        echo timestamp > conftest.file
        # Do `set' in a subshell so we don't clobber the current shell's
        # arguments. Must try -L first in case configure is actually a
        # symlink; some systems play weird games with the mod time of symlinks
        # (eg FreeBSD returns the mod time of the symlink's containing
        # directory).
        if (
           set X `ls -Lt $srcdir/configure conftest.file 2> /dev/null`
           if test "$*" = "X"; then
              # -L didn't work.
              set X `ls -t $srcdir/configure conftest.file`
           fi
           rm -f conftest.file
           if test "$*" != "X $srcdir/configure conftest.file" \
              && test "$*" != "X conftest.file $srcdir/configure"; then
              # If neither matched, then we have a broken ls. This can happen
              # if, for instance, CONFIG_SHELL is bash and it inherits a
              # broken ls alias from the environment. This has actually
              # happened. Such a system could not be considered "sane".
              { { echo "$as_me:$LINENO: error: ls -t appears to fail. Make sure there is not a broken alias in your environment" >&5
              echo "$as_me: error: ls -t appears to fail. Make sure there is not a broken alias in your environment" >&2;}
              { (exit 1); exit 1; }; }
           fi
           ### PROBLEM LINE ####
           # this line is the problem line - sometimes this test returns true, sometimes it doesn't,
           # and I can't see a pattern that determines when this test will pass or not.
           test "$2" = conftest.file
           )
        then
           # Ok.
           :
        else
           { { echo "$as_me:$LINENO: error: newly created file is older than distributed files! Check your system clock" >&5
           echo "$as_me: error: newly created file is older than distributed files! Check your system clock" >&2;}
           { (exit 1); exit 1; }; }
        fi

    The thing that makes this really frustrating is that this script works sometimes: when the VM has been running for an hour or so it works, but not at boot. There's nothing I see in the crontab that suggests any hourly tasks are run that might change the state of the system enough to make a difference to this script working. I'm totally at a loss when it comes to debugging beyond here. What's the best approach to take here? Thanks

  • KVM/Libvirt bridged/routed networking not working on newer guest kernels

    - by SharkWipf
    I have a dedicated server running Debian 6, with Libvirt (0.9.11.3) and Qemu-KVM (qemu-kvm-1.0+dfsg-11, Debian). I am having a problem getting bridged/routed networking to work in KVM guests with newer kernels (2.6.38). NATted networking works fine though. Older kernels work perfectly fine as well. The host kernel is at version 3.2.0-2-amd64, the problem was also there on an older host kernel. The contents of the host's /etc/network/interfaces (ip removed): # Loopback device: auto lo iface lo inet loopback # bridge auto br0 iface br0 inet static address 176.9.xx.xx broadcast 176.9.xx.xx netmask 255.255.255.224 gateway 176.9.xx.xx pointopoint 176.9.xx.xx bridge_ports eth0 bridge_stp off bridge_maxwait 0 bridge_fd 0 up route add -host 176.9.xx.xx dev br0 # VM IP post-up mii-tool -F 100baseTx-FD br0 # default route to access subnet up route add -net 176.9.xx.xx netmask 255.255.255.224 gw 176.9.xx.xx br0 The output of ifconfig -a on the host: br0 Link encap:Ethernet HWaddr 54:04:a6:8a:66:13 inet addr:176.9.xx.xx Bcast:176.9.xx.xx Mask:255.255.255.224 inet6 addr: fe80::5604:a6ff:fe8a:6613/64 Scope:Link UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1 RX packets:20216729 errors:0 dropped:0 overruns:0 frame:0 TX packets:19962220 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:0 RX bytes:14144528601 (13.1 GiB) TX bytes:7990702656 (7.4 GiB) eth0 Link encap:Ethernet HWaddr 54:04:a6:8a:66:13 UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1 RX packets:26991788 errors:0 dropped:12066 overruns:0 frame:0 TX packets:19737261 errors:270082 dropped:0 overruns:0 carrier:270082 collisions:1686317 txqueuelen:1000 RX bytes:15459970915 (14.3 GiB) TX bytes:6661808415 (6.2 GiB) Interrupt:17 Memory:fe500000-fe520000 lo Link encap:Local Loopback inet addr:127.0.0.1 Mask:255.0.0.0 inet6 addr: ::1/128 Scope:Host UP LOOPBACK RUNNING MTU:16436 Metric:1 RX packets:6240133 errors:0 dropped:0 overruns:0 frame:0 TX packets:6240133 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:0 RX bytes:6081956230 (5.6 GiB) TX bytes:6081956230 (5.6 GiB) virbr0 Link encap:Ethernet HWaddr 52:54:00:79:e4:5a inet addr:192.168.100.1 Bcast:192.168.100.255 Mask:255.255.255.0 UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1 RX packets:225016 errors:0 dropped:0 overruns:0 frame:0 TX packets:412958 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:0 RX bytes:16284276 (15.5 MiB) TX bytes:687827984 (655.9 MiB) virbr0-nic Link encap:Ethernet HWaddr 52:54:00:79:e4:5a BROADCAST MULTICAST MTU:1500 Metric:1 RX packets:0 errors:0 dropped:0 overruns:0 frame:0 TX packets:0 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:500 RX bytes:0 (0.0 B) TX bytes:0 (0.0 B) vnet0 Link encap:Ethernet HWaddr fe:54:00:93:4e:68 inet6 addr: fe80::fc54:ff:fe93:4e68/64 Scope:Link UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1 RX packets:607670 errors:0 dropped:0 overruns:0 frame:0 TX packets:5932089 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:500 RX bytes:83574773 (79.7 MiB) TX bytes:1092482370 (1.0 GiB) vnet1 Link encap:Ethernet HWaddr fe:54:00:ed:6a:43 inet6 addr: fe80::fc54:ff:feed:6a43/64 Scope:Link UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1 RX packets:922132 errors:0 dropped:0 overruns:0 frame:0 TX packets:6342375 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:500 RX bytes:251091242 (239.4 MiB) TX bytes:1629079567 (1.5 GiB) vnet2 Link encap:Ethernet HWaddr fe:54:00:0d:cb:3d inet6 addr: fe80::fc54:ff:fe0d:cb3d/64 Scope:Link UP BROADCAST RUNNING MULTICAST MTU:1500 
Metric:1 RX packets:9461 errors:0 dropped:0 overruns:0 frame:0 TX packets:665189 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:500 RX bytes:4990275 (4.7 MiB) TX bytes:49229647 (46.9 MiB) vnet3 Link encap:Ethernet HWaddr fe:54:cd:83:eb:aa inet6 addr: fe80::fc54:cdff:fe83:ebaa/64 Scope:Link UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1 RX packets:1649 errors:0 dropped:0 overruns:0 frame:0 TX packets:12177 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:500 RX bytes:77233 (75.4 KiB) TX bytes:2127934 (2.0 MiB) The guest's /etc/network/interfaces, in this case running Ubuntu 12.04 (ip removed): # This file describes the network interfaces available on your system # and how to activate them. For more information, see interfaces(5). # The loopback network interface auto lo iface lo inet loopback auto eth0 iface eth0 inet static address 176.9.xx.xx netmask 255.255.255.248 gateway 176.9.xx.xx # Host IP pointopoint 176.9.xx.xx # Host IP dns-nameservers 8.8.8.8 8.8.4.4 The output of ifconfig -a on the guest: eth0 Link encap:Ethernet HWaddr 52:54:cd:83:eb:aa inet addr:176.9.xx.xx Bcast:0.0.0.0 Mask:255.255.255.255 inet6 addr: fe80::5054:cdff:fe83:ebaa/64 Scope:Link UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1 RX packets:14190 errors:0 dropped:0 overruns:0 frame:0 TX packets:1768 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:1000 RX bytes:2614642 (2.6 MB) TX bytes:82700 (82.7 KB) lo Link encap:Local Loopback inet addr:127.0.0.1 Mask:255.0.0.0 inet6 addr: ::1/128 Scope:Host UP LOOPBACK RUNNING MTU:16436 Metric:1 RX packets:954 errors:0 dropped:0 overruns:0 frame:0 TX packets:954 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:0 RX bytes:176679 (176.6 KB) TX bytes:176679 (176.6 KB) Output of ping -c4 on the guest: PING google.nl (173.194.35.151) 56(84) bytes of data. 64 bytes from muc03s01-in-f23.1e100.net (173.194.35.151): icmp_req=1 ttl=55 time=14.7 ms From static.174.82.xx.xx.clients.your-server.de (176.9.xx.xx): icmp_seq=2 Redirect Host(New nexthop: static.161.82.9.176.clients.your-server.de (176.9.82.161)) 64 bytes from muc03s01-in-f23.1e100.net (173.194.35.151): icmp_req=2 ttl=55 time=15.1 ms From static.198.170.9.176.clients.your-server.de (176.9.170.198) icmp_seq=3 Destination Host Unreachable From static.198.170.9.176.clients.your-server.de (176.9.170.198) icmp_seq=4 Destination Host Unreachable --- google.nl ping statistics --- 4 packets transmitted, 2 received, +2 errors, 50% packet loss, time 3002ms rtt min/avg/max/mdev = 14.797/14.983/15.170/0.223 ms, pipe 2 The static.174.82.xx.xx.clients.your-server.de (176.9.xx.xx) is the host's IP. I have encountered this problem with every guest OS I've tried, that being Fedora, Ubuntu (server/desktop) and Debian with an upgraded kernel. I've also tried compiling the guest kernel myself, to no avail. I have no problem with recompiling a kernel, though the host cannot afford any downtime. Any ideas on this problem are very welcome. EDIT: I can ping the host from inside the guest.

  • How to disable Mac OS X from using swap when there still is "Inactive" memory?

    - by Motin
    A common phenomenon in my day-to-day usage of OS X (and several others', judging by various posts throughout the internet) is that the system seems to become slow whenever there is no more "Free" memory available. Supposedly this is due to swapping, since heavy disk activity is apparent and vm_stat reports many pageouts. (Correct me if I'm wrong.) However, the amount of "Inactive" ram is typically around 12.5%-25% of all available memory (^1.) when swapping starts/occurs/ends.

    According to http://support.apple.com/kb/ht1342 :

        Inactive memory
        This information in memory is not actively being used, but was recently used. For example, if you've been using Mail and then quit it, the RAM that Mail was using is marked as Inactive memory. This Inactive memory is available for use by another application, just like Free memory. However, if you open Mail before its Inactive memory is used by a different application, Mail will open quicker because its Inactive memory is converted to Active memory, instead of loading Mail from the slower hard disk.

    And according to http://developer.apple.com/library/mac/#documentation/Performance/Conceptual/ManagingMemory/Articles/AboutMemory.html :

        The inactive list contains pages that are currently resident in physical memory but have not been accessed recently. These pages contain valid data but may be released from memory at any time.

    So, basically: when a program has quit, its memory becomes marked as Inactive and should be claimable at any time. Still, OS X will prefer to start swapping memory out to the swap file instead of just claiming this memory whenever the "Free" memory gets too low. Why? What is the advantage of this behavior over, say, instantly releasing Inactive memory and not even touching the swap file? Some sources (^2.) indicate that OS X would page out the "Inactive" memory to swap before releasing it, but that doesn't make sense if the memory may be released at any time, does it? Swapping is expensive, releasing is cheap, right? Can this behavior be changed using some preference or known hack? (Preferably one that doesn't involve disabling swap/dynamic_pager altogether and restarting...)

    I do appreciate the purge command, as well as the concept of Repairing disk permissions to force some Free memory, but those are ways to painfully force more Free memory rather than actually fixing the swap/release decision logic... Btw a similar question was asked here: http://forums.macnn.com/90/mac-os-x/434650/why-does-os-x-swap-when/ and here: http://hintsforums.macworld.com/showthread.php?t=87688 but even though the OPs re-asked the core question, none of the replies addresses an answer to it...

    ^1. UPDATE 17-mar-2012 Since I first posted this question, I have gone from 4gb to 8gb of installed ram, and the problem remains. The amount of "Inactive" ram was 0.5gb-1.0gb before and is now typically around 1.0-2.0GB when swapping starts/occurs/ends, ie it seems that around 12.5%-25% of the ram is preserved as Inactive by OS X kernel logic.

    ^2. For instance http://apple.stackexchange.com/questions/4288/what-does-it-mean-if-i-have-lots-of-inactive-memory-at-the-end-of-a-work-day : "Once all your memory is used (free memory is 0), the OS will write out inactive memory to the swapfile to make more room in active memory."

    UPDATE 17-mar-2012 Here is a round-up of the methods that have been suggested to help so far:

    The purge command: "Used to approximate initial boot conditions with a cold disk buffer cache for performance analysis. It does not affect anonymous memory that has been allocated through malloc, vm_allocate, etc". This is useful to prevent OS X from swapping out the disk cache (it is ridiculous that OS X actually does so in the first place), but with the downside that the disk cache is released, meaning that if the disk cache was not about to be swapped out, one would simply end up with a cold disk buffer cache, probably affecting performance negatively.

    The FreeMemory app and/or Repairing disk permissions to force some Free memory: doesn't help release any memory, it only moves some gigabytes of memory contents from ram to the hd. In the end, this causes lots of swap-ins when I attempt to use the applications that were open while freeing memory, as a lot of their vm is now on swap.

    Speeding up swap allocation using dynamicpagerwrapper: seems a good thing to do in order to speed up swap usage, but does not address the problem of OS X swapping in the first place while there is still inactive memory.

    Disabling swap by disabling dynamicpager and restarting: this will force OS X not to use swap, at the price of the system hanging when all memory is used. Not a viable alternative...

    Disabling swap using a hacked dynamicpager: similar to disabling dynamicpager above; some excerpts from the comments to the blog post indicate that this is not a viable solution: "The Inactive Memory is high as usual", "when your system is running out of memory, the whole os hangs...", "if you consume the whole amount of memory of the mac, the machine will likely hang".

    To sum up, I am still unaware of a way of disabling Mac OS X from using swap when there still is "Inactive" memory. If it isn't possible, maybe at least there is an explanation somewhere of why OS X prefers to swap out memory that may be released from memory at any time?

  • Neo4j OutOfMemory problem

    - by Edward83
    Hi! This is my source code of Main.java. It was grabbed from neo4j-apoc-1.0 examples. The goal of modification to store 1M records of 2 nodes and 1 relation: package javaapplication2; import org.neo4j.graphdb.GraphDatabaseService; import org.neo4j.graphdb.Node; import org.neo4j.graphdb.RelationshipType; import org.neo4j.graphdb.Transaction; import org.neo4j.kernel.EmbeddedGraphDatabase; public class Main { private static final String DB_PATH = "neo4j-store-1M"; private static final String NAME_KEY = "name"; private static enum ExampleRelationshipTypes implements RelationshipType { EXAMPLE } public static void main(String[] args) { GraphDatabaseService graphDb = null; try { System.out.println( "Init database..." ); graphDb = new EmbeddedGraphDatabase( DB_PATH ); registerShutdownHook( graphDb ); System.out.println( "Start of creating database..." ); int valIndex = 0; for(int i=0; i<1000; ++i) { for(int j=0; j<1000; ++j) { Transaction tx = graphDb.beginTx(); try { Node firstNode = graphDb.createNode(); firstNode.setProperty( NAME_KEY, "Hello" + valIndex ); Node secondNode = graphDb.createNode(); secondNode.setProperty( NAME_KEY, "World" + valIndex ); firstNode.createRelationshipTo( secondNode, ExampleRelationshipTypes.EXAMPLE ); tx.success(); ++valIndex; } finally { tx.finish(); } } } System.out.println("Ok, client processing finished!"); } finally { System.out.println( "Shutting down database ..." ); graphDb.shutdown(); } } private static void registerShutdownHook( final GraphDatabaseService graphDb ) { // Registers a shutdown hook for the Neo4j instance so that it // shuts down nicely when the VM exits (even if you "Ctrl-C" the // running example before it's completed) Runtime.getRuntime().addShutdownHook( new Thread() { @Override public void run() { graphDb.shutdown(); } } ); } } After a few iterations (around 150K) I got error message: "java.lang.OutOfMemoryError: Java heap space at java.nio.HeapByteBuffer.(HeapByteBuffer.java:39) at java.nio.ByteBuffer.allocate(ByteBuffer.java:312) at org.neo4j.kernel.impl.nioneo.store.PlainPersistenceWindow.(PlainPersistenceWindow.java:30) at org.neo4j.kernel.impl.nioneo.store.PersistenceWindowPool.allocateNewWindow(PersistenceWindowPool.java:534) at org.neo4j.kernel.impl.nioneo.store.PersistenceWindowPool.refreshBricks(PersistenceWindowPool.java:430) at org.neo4j.kernel.impl.nioneo.store.PersistenceWindowPool.acquire(PersistenceWindowPool.java:122) at org.neo4j.kernel.impl.nioneo.store.CommonAbstractStore.acquireWindow(CommonAbstractStore.java:459) at org.neo4j.kernel.impl.nioneo.store.AbstractDynamicStore.updateRecord(AbstractDynamicStore.java:240) at org.neo4j.kernel.impl.nioneo.store.PropertyStore.updateRecord(PropertyStore.java:209) at org.neo4j.kernel.impl.nioneo.xa.Command$PropertyCommand.execute(Command.java:513) at org.neo4j.kernel.impl.nioneo.xa.NeoTransaction.doCommit(NeoTransaction.java:443) at org.neo4j.kernel.impl.transaction.xaframework.XaTransaction.commit(XaTransaction.java:316) at org.neo4j.kernel.impl.transaction.xaframework.XaResourceManager.commit(XaResourceManager.java:399) at org.neo4j.kernel.impl.transaction.xaframework.XaResourceHelpImpl.commit(XaResourceHelpImpl.java:64) at org.neo4j.kernel.impl.transaction.TransactionImpl.doCommit(TransactionImpl.java:514) at org.neo4j.kernel.impl.transaction.TxManager.commit(TxManager.java:571) at org.neo4j.kernel.impl.transaction.TxManager.commit(TxManager.java:543) at org.neo4j.kernel.impl.transaction.TransactionImpl.commit(TransactionImpl.java:102) at 
org.neo4j.kernel.EmbeddedGraphDbImpl$TransactionImpl.finish(EmbeddedGraphDbImpl.java:329) at javaapplication2.Main.main(Main.java:62) 28.05.2010 9:52:14 org.neo4j.kernel.impl.nioneo.store.PersistenceWindowPool logWarn WARNING: [neo4j-store-1M\neostore.propertystore.db.strings] Unable to allocate direct buffer" Can anyone tell me what I did wrong and how I can fix it? Tested on Windows XP 32-bit SP3. Maybe the solution is to create a custom configuration? Thanks for any advice!
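
    The usual first steps for this kind of heap exhaustion during a bulk insert are a larger heap (for example running with java -Xmx512m) and, on 32-bit JVMs, lower memory-mapped buffer settings in the Neo4j configuration; committing the writes in larger batches rather than one transaction per node pair also keeps the commit overhead down. The sketch below reworks the insert loop from the post along those lines, using only the API calls already shown there; the batch size is an arbitrary illustration, not a tuned value.

        import org.neo4j.graphdb.GraphDatabaseService;
        import org.neo4j.graphdb.Node;
        import org.neo4j.graphdb.RelationshipType;
        import org.neo4j.graphdb.Transaction;
        import org.neo4j.kernel.EmbeddedGraphDatabase;

        public class BatchedMain {
            private static final String DB_PATH = "neo4j-store-1M";
            private static final String NAME_KEY = "name";
            private static enum ExampleRelationshipTypes implements RelationshipType { EXAMPLE }

            public static void main(String[] args) {
                GraphDatabaseService graphDb = new EmbeddedGraphDatabase(DB_PATH);
                final int BATCH_SIZE = 10000;      // illustrative value, tune for your heap
                Transaction tx = graphDb.beginTx();
                try {
                    for (int i = 0; i < 1000000; i++) {
                        Node first = graphDb.createNode();
                        first.setProperty(NAME_KEY, "Hello" + i);
                        Node second = graphDb.createNode();
                        second.setProperty(NAME_KEY, "World" + i);
                        first.createRelationshipTo(second, ExampleRelationshipTypes.EXAMPLE);
                        if ((i + 1) % BATCH_SIZE == 0) {
                            tx.success();
                            tx.finish();            // commit this batch and release its state
                            tx = graphDb.beginTx(); // start a fresh transaction for the next batch
                        }
                    }
                    tx.success();                   // commit whatever remains
                } finally {
                    tx.finish();
                    graphDb.shutdown();
                }
            }
        }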

  • Problems with shutting down JBoss in Eclipse if I change JNDI port

    - by Balint Pato
    1st phase I have a problem shutting down my running JBoss instance under Eclipse since I changed the JNDI port of JBoss. Of course I can shut it down from the console view but not with the stop button (it still searches JNDI port at the default 1099 port). I'm looking forward to any solutions. Thank you! Used environment: JBoss 4.0.2 (using default) Eclipse 3.4.0. (using JBoss Tools 2.1.1.GA) Default ports: 1098, 1099 Changed ports: 11098, 11099 I changed the following part in jbosspath/server/default/conf/jboss-service.xml: <!-- ==================================================================== --> <!-- JNDI --> <!-- ==================================================================== --> <mbean code="org.jboss.naming.NamingService" name="jboss:service=Naming" xmbean-dd="resource:xmdesc/NamingService-xmbean.xml"> <!-- The call by value mode. true if all lookups are unmarshalled using the caller's TCL, false if in VM lookups return the value by reference. --> <attribute name="CallByValue">false</attribute> <!-- The listening port for the bootstrap JNP service. Set this to -1 to run the NamingService without the JNP invoker listening port. --> <attribute name="Port">11099</attribute> <!-- The bootstrap JNP server bind address. This also sets the default RMI service bind address. Empty == all addresses --> <attribute name="BindAddress">${jboss.bind.address}</attribute> <!-- The port of the RMI naming service, 0 == anonymous --> <attribute name="RmiPort">11098</attribute> <!-- The RMI service bind address. Empty == all addresses --> <attribute name="RmiBindAddress">${jboss.bind.address}</attribute> <!-- The thread pool service used to control the bootstrap lookups --> <depends optional-attribute-name="LookupPool" proxy-type="attribute">jboss.system:service=ThreadPool</depends> </mbean> <mbean code="org.jboss.naming.JNDIView" name="jboss:service=JNDIView" xmbean-dd="resource:xmdesc/JNDIView-xmbean.xml"> </mbean> Eclipse setup: About my JBoss Tools preferences: I had a previous version, I got this problem, I read about some bugfix in JbossTools, so updated to 2.1.1.GA. Now the buttons changed, and I've got a new preferences view, but I cannot modify anything...seems to be abnormal as well: Error dialog: The stacktrace: javax.naming.CommunicationException: Could not obtain connection to any of these urls: localhost:1099 [Root exception is javax.naming.CommunicationException: Failed to connect to server localhost:1099 [Root exception is javax.naming.ServiceUnavailableException: Failed to connect to server localhost:1099 [Root exception is java.net.ConnectException: Connection refused: connect]]] at org.jnp.interfaces.NamingContext.checkRef(NamingContext.java:1385) at org.jnp.interfaces.NamingContext.lookup(NamingContext.java:579) at org.jnp.interfaces.NamingContext.lookup(NamingContext.java:572) at javax.naming.InitialContext.lookup(InitialContext.java:347) at org.jboss.Shutdown.main(Shutdown.java:202) Caused by: javax.naming.CommunicationException: Failed to connect to server localhost:1099 [Root exception is javax.naming.ServiceUnavailableException: Failed to connect to server localhost:1099 [Root exception is java.net.ConnectException: Connection refused: connect]] at org.jnp.interfaces.NamingContext.getServer(NamingContext.java:254) at org.jnp.interfaces.NamingContext.checkRef(NamingContext.java:1370) ... 
4 more Caused by: javax.naming.ServiceUnavailableException: Failed to connect to server localhost:1099 [Root exception is java.net.ConnectException: Connection refused: connect] at org.jnp.interfaces.NamingContext.getServer(NamingContext.java:228) ... 5 more Caused by: java.net.ConnectException: Connection refused: connect at java.net.PlainSocketImpl.socketConnect(Native Method) at java.net.PlainSocketImpl.doConnect(PlainSocketImpl.java:305) at java.net.PlainSocketImpl.connectToAddress(PlainSocketImpl.java:171) at java.net.PlainSocketImpl.connect(PlainSocketImpl.java:158) at java.net.Socket.connect(Socket.java:452) at java.net.Socket.connect(Socket.java:402) at java.net.Socket.<init>(Socket.java:309) at java.net.Socket.<init>(Socket.java:211) at org.jnp.interfaces.TimedSocketFactory.createSocket(TimedSocketFactory.java:69) at org.jnp.interfaces.TimedSocketFactory.createSocket(TimedSocketFactory.java:62) at org.jnp.interfaces.NamingContext.getServer(NamingContext.java:224) ... 5 more Exception in thread "main" 2nd phase: After creating a new Server in File/new/other/server, it did appear in the preferences tab. Now the stop button is working (the server receives the shutdown messages without any additional modification of the jndi port -- there is no opportunity for it now) but it still throws an error message, though different, it's without exception stack trace: "Server JBoss 4.0 Server failed to stop."
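
    For what it's worth, the shutdown client has to be told about the non-default port explicitly; the stack trace shows org.jboss.Shutdown still aiming at localhost:1099. The sketch below is only a connectivity probe written against the standard JNP JNDI properties, not a JBoss Tools fix: if this reaches the NamingService on 11099 while the Eclipse stop button still fails, the button is simply hard-wired to 1099 rather than reading the edited jboss-service.xml. (The bundled shutdown script also accepts a server URL, something like shutdown.sh -s jnp://localhost:11099; check its --help output for the exact flag.)

        import java.util.Properties;
        import javax.naming.Context;
        import javax.naming.InitialContext;
        import javax.naming.NamingException;

        // Minimal check that a JNDI client can reach the JBoss NamingService on the moved port.
        public class JndiPortProbe {
            public static void main(String[] args) throws NamingException {
                Properties env = new Properties();
                env.put(Context.INITIAL_CONTEXT_FACTORY, "org.jnp.interfaces.NamingContextFactory");
                env.put(Context.URL_PKG_PREFIXES, "org.jboss.naming:org.jnp.interfaces");
                env.put(Context.PROVIDER_URL, "jnp://localhost:11099"); // the changed JNDI port
                Context ctx = new InitialContext(env);
                System.out.println("Connected; root context has bindings: " + ctx.list("").hasMore());
                ctx.close();
            }
        }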

  • HTTP error code 405: tomcat Url mapping issue

    - by Andrew
    I am having trouble POSTing to my java HTTPServlet. I am getting "HTTP Status 405 - HTTP method GET is not supported by this URL" from my tomcat server". When I debug the servlet the login method is never called. I think it's a url mapping issue within tomcat... web.xml <servlet-mapping> <servlet-name>faxcom</servlet-name> <url-pattern>/faxcom/*</url-pattern> </servlet-mapping> FaxcomService.java @Path("/rest") public class FaxcomService extends HttpServlet{ private FAXCOM_x0020_ServiceLocator service; private FAXCOM_x0020_ServiceSoap port; @GET @Produces("application/json") public String testGet() { return "{ \"got here\":true }"; } @POST @Path("/login") @Consumes("application/json") // @Produces("application/json") public Response login(LoginBean login) { ArrayList<ResultMessageBean> rm = new ArrayList<ResultMessageBean>(10); try { service = new FAXCOM_x0020_ServiceLocator(); service.setFAXCOM_x0020_ServiceSoapEndpointAddress("http://cd-faxserver/faxcom_ws/faxcomservice.asmx"); service.setMaintainSession(true); // enable sessions support port = service.getFAXCOM_x0020_ServiceSoap(); rm.add(new ResultMessageBean(port.logOn( "\\\\CD-Faxserver\\FaxcomQ_API", /* path to the queue */ login.getUserName(), /* username */ login.getPassword(), /* password */ login.getUserType() /* 2 = user conf user */ ))); } catch (RemoteException e) { e.printStackTrace(); } catch (ServiceException e) { e.printStackTrace(); } // return rm; return Response.status(201).entity(rm).build(); } @POST @Path("/newFaxMessage") @Consumes(MediaType.APPLICATION_JSON) @Produces(MediaType.APPLICATION_JSON) public ArrayList<ResultMessageBean> newFaxMessage(FaxBean fax) { ArrayList<ResultMessageBean> rm = new ArrayList<ResultMessageBean>(); try { rm.add(new ResultMessageBean(port.newFaxMessage( fax.getPriority(), /* priority: 0 - low, 1 - normal, 2 - high, 3 - urgent */ fax.getSendTime(), /* send time */ /* "0.0" - immediate */ /* "1.0" - offpeak */ /* "9/14/2007 5:12:11 PM" - to set specific time */ fax.getResolution(), /* resolution: 0 - low res, 1 - high res */ fax.getSubject(), /* subject */ fax.getCoverpage(), /* cover page: "" – default, “(none)� – no cover page */ fax.getMemo(), /* memo */ fax.getSenderName(), /* sender's name */ fax.getSenderFaxNumber(), /* sender's fax */ fax.getRecipients().get(0).getName(), /* recipient's name */ fax.getRecipients().get(0).getCompany(), /* recipient's company */ fax.getRecipients().get(0).getFaxNumber(), /* destination fax number */ fax.getRecipients().get(0).getVoiceNumber(), /* recipient's phone number */ fax.getRecipients().get(0).getAccountNumber() /* recipient's account number */ ))); if (fax.getRecipients().size() > 1) { for (int i = 1; i < fax.getRecipients().size(); i++) rm.addAll(addRecipient(fax.getRecipients().get(i))); } } catch (RemoteException e) { // TODO Auto-generated catch block e.printStackTrace(); } return rm; } } Main.java private static void main(String[] args) { try { URL url = new URL("https://andrew-vm/faxcom/rest/login"); HttpURLConnection conn = (HttpURLConnection) url.openConnection(); conn.setRequestMethod("POST"); conn.setDoOutput(true); conn.setRequestProperty("Content-Type", "application/json"); FileInputStream jsonDemo = new FileInputStream("login.txt"); OutputStream os = (OutputStream) conn.getOutputStream(); os.write(IOUtils.toByteArray(jsonDemo)); os.flush(); if (conn.getResponseCode() != 200) { throw new RuntimeException("Failed : HTTP error code : " + conn.getResponseCode()); } BufferedReader br = new BufferedReader(new InputStreamReader( 
(conn.getInputStream()))); String output; System.out.println("Output from Server .... \n"); while ((output = br.readLine()) != null) { System.out.println(output); } // Don't want to disconnect - servletInstance will be destroyed // conn.disconnect(); } catch (MalformedURLException e) { e.printStackTrace(); } catch (IOException e) { e.printStackTrace(); } } I am working from this tutorial: http://www.mkyong.com/webservices/jax-rs/restfull-java-client-with-java-net-url/
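
    One observation on the posted code, offered as a guess rather than a confirmed diagnosis: FaxcomService both extends HttpServlet and carries JAX-RS annotations, and the web.xml fragment maps it directly as a servlet, so Tomcat hands requests to the plain HttpServlet, whose default doGet/doPost answer 405; the @Path/@POST annotations are only honoured when a JAX-RS runtime (for example Jersey's com.sun.jersey.spi.container.servlet.ServletContainer) is the servlet mapped to /faxcom/* and the resource is an ordinary class. A minimal sketch of the resource as a plain JAX-RS class follows; LoginBean is the bean from the original project, and the response body here is simplified.

        import java.util.ArrayList;
        import javax.ws.rs.Consumes;
        import javax.ws.rs.GET;
        import javax.ws.rs.POST;
        import javax.ws.rs.Path;
        import javax.ws.rs.Produces;
        import javax.ws.rs.core.MediaType;
        import javax.ws.rs.core.Response;

        // Plain resource class: no "extends HttpServlet". Requests only reach it when a
        // JAX-RS servlet container is registered in web.xml for /faxcom/*.
        @Path("/rest")
        public class FaxcomResource {

            @GET
            @Produces(MediaType.APPLICATION_JSON)
            public String testGet() {
                return "{ \"got here\": true }";
            }

            @POST
            @Path("/login")
            @Consumes(MediaType.APPLICATION_JSON)
            @Produces(MediaType.APPLICATION_JSON)
            public Response login(LoginBean login) {
                ArrayList<String> messages = new ArrayList<String>();
                messages.add("logged in as " + login.getUserName()); // placeholder for the FAXCOM call
                return Response.status(201).entity(messages).build();
            }
        }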

  • puppet master REST API returns 403 when running under passenger works when master runs from command line

    - by Anadi Misra
    I am using the standard auth.conf provided in puppet install for the puppet master which is running through passenger under Nginx. However for most of the catalog, files and certitifcate request I get a 403 response. ### Authenticated paths - these apply only when the client ### has a valid certificate and is thus authenticated # allow nodes to retrieve their own catalog path ~ ^/catalog/([^/]+)$ method find allow $1 # allow nodes to retrieve their own node definition path ~ ^/node/([^/]+)$ method find allow $1 # allow all nodes to access the certificates services path ~ ^/certificate_revocation_list/ca method find allow * # allow all nodes to store their reports path /report method save allow * # unconditionally allow access to all file services # which means in practice that fileserver.conf will # still be used path /file allow * ### Unauthenticated ACL, for clients for which the current master doesn't ### have a valid certificate; we allow authenticated users, too, because ### there isn't a great harm in letting that request through. # allow access to the master CA path /certificate/ca auth any method find allow * path /certificate/ auth any method find allow * path /certificate_request auth any method find, save allow * path /facts auth any method find, search allow * # this one is not stricly necessary, but it has the merit # of showing the default policy, which is deny everything else path / auth any Puppet master however does not seems to be following this as I get this error on client [amisr1@blramisr195602 ~]$ sudo puppet agent --no-daemonize --verbose --server bangvmpllda02.XXXXX.com [sudo] password for amisr1: Starting Puppet client version 3.0.1 Warning: Unable to fetch my node definition, but the agent run will continue: Warning: Error 403 on SERVER: Forbidden request: XX.XXX.XX.XX(XX.XXX.XX.XX) access to /certificate_revocation_list/ca [find] at :110 Info: Retrieving plugin Error: /File[/var/lib/puppet/lib]: Failed to generate additional resources using 'eval_generate: Error 403 on SERVER: Forbidden request: XX.XXX.XX.XX(XX.XXX.XX.XX) access to /file_metadata/plugins [search] at :110 Error: /File[/var/lib/puppet/lib]: Could not evaluate: Error 403 on SERVER: Forbidden request: XX.XXX.XX.XX(XX.XXX.XX.XX) access to /file_metadata/plugins [find] at :110 Could not retrieve file metadata for puppet://devops.XXXXX.com/plugins: Error 403 on SERVER: Forbidden request: XX.XXX.XX.XX(XX.XXX.XX.XX) access to /file_metadata/plugins [find] at :110 Error: Could not retrieve catalog from remote server: Error 403 on SERVER: Forbidden request: XX.XXX.XX.XX(XX.XXX.XX.XX) access to /catalog/blramisr195602.XXXXX.com [find] at :110 Using cached catalog Error: Could not retrieve catalog; skipping run Error: Could not send report: Error 403 on SERVER: Forbidden request: XX.XXX.XX.XX(XX.XXX.XX.XX) access to /report/blramisr195602.XXXXX.com [save] at :110 and the server logs show XX.XXX.XX.XX - - [10/Dec/2012:14:46:52 +0530] "GET /production/certificate_revocation_list/ca? HTTP/1.1" 403 102 "-" "Ruby" XX.XXX.XX.XX - - [10/Dec/2012:14:46:52 +0530] "GET /production/file_metadatas/plugins?links=manage&recurse=true&&ignore=---+%0A++-+%22.svn%22%0A++-+CVS%0A++-+%22.git%22&checksum_type=md5 HTTP/1.1" 403 95 "-" "Ruby" XX.XXX.XX.XX - - [10/Dec/2012:14:46:52 +0530] "GET /production/file_metadata/plugins? 
HTTP/1.1" 403 93 "-" "Ruby" XX.XXX.XX.XX - - [10/Dec/2012:14:46:53 +0530] "POST /production/catalog/blramisr195602.XXXXX.com HTTP/1.1" 403 106 "-" "Ruby" XX.XXX.XX.XX - - [10/Dec/2012:14:46:53 +0530] "PUT /production/report/blramisr195602.XXXXX.com HTTP/1.1" 403 105 "-" "Ruby" thefile server conf file is as follows (and goin by what they say on puppet site, It is better to regulate access in auth.conf for reaching file server and then allow file server to server all) [files] path /apps/puppet/files allow * [private] path /apps/puppet/private/%H allow * [modules] allow * I am using server and client version 3 Nginx has been compiled using the following options nginx version: nginx/1.3.9 built by gcc 4.4.6 20120305 (Red Hat 4.4.6-4) (GCC) TLS SNI support enabled configure arguments: --prefix=/apps/nginx --conf-path=/apps/nginx/nginx.conf --pid-path=/apps/nginx/run/nginx.pid --error-log-path=/apps/nginx/logs/error.log --http-log-path=/apps/nginx/logs/access.log --with-http_ssl_module --with-http_gzip_static_module --add-module=/usr/lib/ruby/gems/1.8/gems/passenger-3.0.18/ext/nginx --add-module=/apps/Downloads/nginx/nginx-auth-ldap-master/ and the standard nginx puppet master conf server { ssl on; listen 8140 ssl; server_name _; passenger_enabled on; passenger_set_cgi_param HTTP_X_CLIENT_DN $ssl_client_s_dn; passenger_set_cgi_param HTTP_X_CLIENT_VERIFY $ssl_client_verify; passenger_min_instances 5; access_log logs/puppet_access.log; error_log logs/puppet_error.log; root /apps/nginx/html/rack/public; ssl_certificate /var/lib/puppet/ssl/certs/bangvmpllda02.XXXXXX.com.pem; ssl_certificate_key /var/lib/puppet/ssl/private_keys/bangvmpllda02.XXXXXX.com.pem; ssl_crl /var/lib/puppet/ssl/ca/ca_crl.pem; ssl_client_certificate /var/lib/puppet/ssl/certs/ca.pem; ssl_ciphers SSLv2:-LOW:-EXPORT:RC4+RSA; ssl_prefer_server_ciphers on; ssl_verify_client optional; ssl_verify_depth 1; ssl_session_cache shared:SSL:128m; ssl_session_timeout 5m; } Puppet is picking up the correct settings from the files mentioned because config print command points to /etc/puppet [amisr1@bangvmpllDA02 puppet]$ sudo puppet config print | grep conf async_storeconfigs = false authconfig = /etc/puppet/namespaceauth.conf autosign = /etc/puppet/autosign.conf catalog_cache_terminus = store_configs confdir = /etc/puppet config = /etc/puppet/puppet.conf config_file_name = puppet.conf config_version = "" configprint = all configtimeout = 120 dblocation = /var/lib/puppet/state/clientconfigs.sqlite3 deviceconfig = /etc/puppet/device.conf fileserverconfig = /etc/puppet/fileserver.conf genconfig = false hiera_config = /etc/puppet/hiera.yaml localconfig = /var/lib/puppet/state/localconfig name = config rest_authconfig = /etc/puppet/auth.conf storeconfigs = true storeconfigs_backend = puppetdb tagmap = /etc/puppet/tagmail.conf thin_storeconfigs = false I checked the firewall rules on this VM; 80, 443, 8140, 3000 are allowed. Do I still have to tweak any specifics to auth.conf for getting this to work?

  • How to make sure that grub does use menu.lst?

    - by Glen S. Dalton
    On my Ubuntu 9.04 ("Karmic") laptop I suspect grub does not use the /boot/grub/menu.lst file. What happens on boot is that I see a blank screen and nothing happens. When I press ESC I see a boot list which is different from what I would expect from the menu.lst file. The menu lines are different and when I choose the first entry it does not use the kernel options that are in the first entry in menu.lst. Where do the entries that grub uses come from? How can I find out what happens, is there a log? I could not find anything in /var/log/syslog or /var/log/dmesg about grub using a menu.lst. How can I set it to work like expected? Some Files: $ sudo ls -la /boot/grub/*lst -rw-r--r-- 1 root root 1558 2009-12-12 15:25 /boot/grub/command.lst -rw-r--r-- 1 root root 121 2009-12-12 15:25 /boot/grub/fs.lst -rw-r--r-- 1 root root 272 2009-12-12 15:25 /boot/grub/handler.lst -rw-r--r-- 1 root root 4576 2010-03-19 11:26 /boot/grub/menu.lst -rw-r--r-- 1 root root 1657 2009-12-12 15:25 /boot/grub/moddep.lst -rw-r--r-- 1 root root 62 2009-12-12 15:25 /boot/grub/partmap.lst -rw-r--r-- 1 root root 22 2009-12-12 15:25 /boot/grub/parttool.lst $ sudo ls -la /vm* lrwxrwxrwx 1 root root 30 2009-12-12 16:15 /vmlinuz -> boot/vmlinuz-2.6.31-16-generic lrwxrwxrwx 1 root root 30 2009-12-12 14:07 /vmlinuz.old -> boot/vmlinuz-2.6.31-14-generic $ sudo ls -la /init* lrwxrwxrwx 1 root root 33 2009-12-12 16:15 /initrd.img -> boot/initrd.img-2.6.31-16-generic lrwxrwxrwx 1 root root 33 2009-12-12 14:07 /initrd.img.old -> boot/initrd.img-2.6.31-14-generic The only menu.lst that I found: $ sudo find / -name "menu.lst" /boot/grub/menu.lst $ sudo cat /boot/grub/menu.lst # menu.lst - See: grub(8), info grub, update-grub(8) # grub-install(8), grub-floppy(8), # grub-md5-crypt, /usr/share/doc/grub # and /usr/share/doc/grub-doc/. ## default num # Set the default entry to the entry number NUM. Numbering starts from 0, and # the entry number 0 is the default if the command is not used. # # You can specify 'saved' instead of a number. In this case, the default entry # is the entry saved with the command 'savedefault'. # WARNING: If you are using dmraid do not use 'savedefault' or your # array will desync and will not let you boot your system. default 0 ## timeout sec # Set a timeout, in SEC seconds, before automatically booting the default entry # (normally the first entry defined). timeout 3 ## hiddenmenu # Hides the menu by default (press ESC to see the menu) #hiddenmenu # Pretty colours color cyan/blue white/blue ## password ['--md5'] passwd # If used in the first section of a menu file, disable all interactive editing # control (menu entry editor and command-line) and entries protected by the # command 'lock' # e.g. password topsecret # password --md5 $1$gLhU0/$aW78kHK1QfV3P2b2znUoe/ # password topsecret # examples # # title Windows 95/98/NT/2000 # root (hd0,0) # makeactive # chainloader +1 # # title Linux # root (hd0,1) # kernel /vmlinuz root=/dev/hda2 ro # Put static boot stanzas before and/or after AUTOMAGIC KERNEL LIST ### BEGIN AUTOMAGIC KERNELS LIST ## lines between the AUTOMAGIC KERNELS LIST markers will be modified ## by the debian update-grub script except for the default options below ## DO NOT UNCOMMENT THEM, Just edit them to your needs ## ## Start Default Options ## ## default kernel options ## default kernel options for automagic boot options ## If you want special options for specific kernels use kopt_x_y_z ## where x.y.z is kernel version. Minor versions can be omitted. ## e.g. 
kopt=root=/dev/hda1 ro ## kopt_2_6_8=root=/dev/hdc1 ro ## kopt_2_6_8_2_686=root=/dev/hdc2 ro # kopt=root=UUID=9b454298-18e1-43f7-a5bc-f56e7ed5f9c6 ro noresume ## default grub root device ## e.g. groot=(hd0,0) # groot=70fcd2b0-0ee0-4fe6-9acb-322ef74c1cdf ## should update-grub create alternative automagic boot options ## e.g. alternative=true ## alternative=false # alternative=true ## should update-grub lock alternative automagic boot options ## e.g. lockalternative=true ## lockalternative=false # lockalternative=false ## additional options to use with the default boot option, but not with the ## alternatives ## e.g. defoptions=vga=791 resume=/dev/hda5 ## defoptions=quiet splash # defoptions=apm=on acpi=off ## should update-grub lock old automagic boot options ## e.g. lockold=false ## lockold=true # lockold=false ## Xen hypervisor options to use with the default Xen boot option # xenhopt= ## Xen Linux kernel options to use with the default Xen boot option # xenkopt=console=tty0 ## altoption boot targets option ## multiple altoptions lines are allowed ## e.g. altoptions=(extra menu suffix) extra boot options ## altoptions=(recovery) single # altoptions=(recovery mode) single ## controls how many kernels should be put into the menu.lst ## only counts the first occurence of a kernel, not the ## alternative kernel options ## e.g. howmany=all ## howmany=7 # howmany=all ## specify if running in Xen domU or have grub detect automatically ## update-grub will ignore non-xen kernels when running in domU and vice versa ## e.g. indomU=detect ## indomU=true ## indomU=false # indomU=detect ## should update-grub create memtest86 boot option ## e.g. memtest86=true ## memtest86=false # memtest86=true ## should update-grub adjust the value of the default booted system ## can be true or false # updatedefaultentry=false ## should update-grub add savedefault to the default options ## can be true or false # savedefault=false ## ## End Default Options ## title Ubuntu 9.10, kernel 2.6.31-14-generic noresume uuid 70fcd2b0-0ee0-4fe6-9acb-322ef74c1cdf kernel /vmlinuz-2.6.31-14-generic root=UUID=9b454298-18e1-43f7-a5bc-f56e7ed5f9c6 ro quiet splash apm=on acpi=off noresume initrd /initrd.img-2.6.31-14-generic title Ubuntu 9.10, kernel 2.6.31-14-generic (recovery mode) uuid 70fcd2b0-0ee0-4fe6-9acb-322ef74c1cdf kernel /vmlinuz-2.6.31-14-generic root=UUID=9b454298-18e1-43f7-a5bc-f56e7ed5f9c6 ro sing le initrd /initrd.img-2.6.31-14-generic title Ubuntu 9.10, memtest86+ uuid 70fcd2b0-0ee0-4fe6-9acb-322ef74c1cdf kernel /memtest86+.bin ### END DEBIAN AUTOMAGIC KERNELS LIST These are the choices that grub displays after i press ESC: Ubuntu, Linux 2-6-31-16-generic Ubuntu, Linux 2-6-31-16-generic (recovery mode) Ubuntu, Linux 2-6-31-14-generic Ubuntu, Linux 2-6-31-14-generic (recovery mode) Memory test (memtest86+) Memory test (memtest86+, serial console 115200)

  • BroadcastReceiver not reading stored value from SharedPreferences

    - by bobby123
    In my app I have a broadcast receiver that turns on GPS upon receiving a set string of text. In the onLocationChanged method, I want to pass the GPS data and a value from my shared preferences to a thread in a string. I have the thread writing to log and can see all the GPS values in the string but the last value from my shared preferences is just showing up as 'prefPhoneNum' which I initialised the string to at the beginning of the receiver class. I have the same code to read the prefPhoneNum from shared preferences in the main class and it works there, can anyone see what I might be doing wrong? public class SmsReceiver extends BroadcastReceiver implements LocationListener { LocationManager lm; LocationListener loc; public SharedPreferences sharedpreferences; public static final String US = "usersettings"; public String prefPhoneNum = "prefPhoneNum"; @Override public void onReceive(Context context, Intent intent) { sharedpreferences = context.getSharedPreferences(US, Context.MODE_PRIVATE); prefPhoneNum = sharedpreferences.getString("prefPhoneNum" , ""); lm = (LocationManager)context.getSystemService(Context.LOCATION_SERVICE); loc = new SmsReceiver(); //---get the SMS message passed in--- Bundle bundle = intent.getExtras(); SmsMessage[] msgs = null; String str = ""; if (bundle != null) { //---retrieve the SMS message received--- Object[] pdus = (Object[]) bundle.get("pdus"); msgs = new SmsMessage[pdus.length]; for (int i=0; i<msgs.length; i++) { msgs[i] = SmsMessage.createFromPdu((byte[])pdus[i]); str += msgs[i].getMessageBody().toString() + "\n"; } Toast.makeText(context, str, Toast.LENGTH_SHORT).show(); //Display SMS if ((msgs[0].getMessageBody().toString().equals("Enable")) || (msgs[0].getMessageBody().toString().equals("enable"))) { enableGPS(); } else { /* Do Nothing*/ } } } public void enableGPS() { //new CountDownTimer(10000, 1000) { //10 seconds new CountDownTimer(300000, 1000) { //300 secs = 5 mins public void onTick(long millisUntilFinished) { lm.requestLocationUpdates(LocationManager.GPS_PROVIDER, 0, 0, loc); } public void onFinish() { lm.removeUpdates(loc); } }.start(); } @Override public void onLocationChanged(Location location) { String s = ""; s += location.getLatitude() + "\n"; s += location.getLongitude() + "\n"; s += location.getAltitude() + "\n"; s += location.getAccuracy() + "\n" + prefPhoneNum; Thread cThread = new Thread(new SocketsClient(s)); cThread.start(); } @Override public void onProviderDisabled(String provider) { } @Override public void onProviderEnabled(String provider) { } @Override public void onStatusChanged(String provider, int status, Bundle extras) { } } Here is the logcat for when the application shuts - D/LocationManager( 3912): requestLocationUpdates: provider = gps, listener = accel.working.TrackGPS@4628bce0 D/GpsLocationProvider( 96): setMinTime 0 D/GpsLocationProvider( 96): startNavigating D/GpsLocationProvider( 96): TTFF: 3227 D/AndroidRuntime( 3912): Shutting down VM W/dalvikvm( 3912): threadid=1: thread exiting with uncaught exception (group=0x400259f8) E/AndroidRuntime( 3912): FATAL EXCEPTION: main E/AndroidRuntime( 3912): java.lang.NullPointerException E/AndroidRuntime( 3912): at android.content.ContextWrapper.getSharedPreferences(ContextWrapper.java:146) E/AndroidRuntime( 3912): at accel.working.TrackGPS.onLocationChanged(TrackGPS.java:63) E/AndroidRuntime( 3912): at android.location.LocationManager$ListenerTransport._handleMessage(LocationManager.java:191) E/AndroidRuntime( 3912): at 
android.location.LocationManager$ListenerTransport.access$000(LocationManager.java:124) E/AndroidRuntime( 3912): at android.location.LocationManager$ListenerTransport$1.handleMessage(LocationManager.java:140) E/AndroidRuntime( 3912): at android.os.Handler.dispatchMessage(Handler.java:99) E/AndroidRuntime( 3912): at android.os.Looper.loop(Looper.java:144) E/AndroidRuntime( 3912): at android.app.ActivityThread.main(ActivityThread.java:4937) E/AndroidRuntime( 3912): at java.lang.reflect.Method.invokeNative(Native Method) E/AndroidRuntime( 3912): at java.lang.reflect.Method.invoke(Method.java:521) E/AndroidRuntime( 3912): at com.android.internal.os.ZygoteInit$MethodAndArgsCaller.run(ZygoteInit.java:868) E/AndroidRuntime( 3912): at com.android.internal.os.ZygoteInit.main(ZygoteInit.java:626) E/AndroidRuntime( 3912): at dalvik.system.NativeStart.main(Native Method)
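
    One pattern visible in the posted receiver, offered as an observation rather than a verified fix (the crashing class in the log, TrackGPS, is not the class shown, but it appears to follow the same structure): onReceive saves the preference value into a field of the instance the framework created, yet registers a brand-new SmsReceiver (loc = new SmsReceiver()) as the LocationListener. That second instance never ran onReceive, so its prefPhoneNum still holds the initial literal, and it has no Context of its own, which also matches the NullPointerException inside getSharedPreferences. A sketch of a listener that carries its own context and number is below; SocketsClient is the class from the original app.

        import android.content.Context;
        import android.location.Location;
        import android.location.LocationListener;
        import android.os.Bundle;

        // Listener that is handed everything it needs, instead of relying on fields of a
        // receiver instance that the framework never initialised.
        public class GpsListener implements LocationListener {
            private final Context appContext;   // kept in case the upload thread needs a Context later
            private final String phoneNum;

            public GpsListener(Context context, String phoneNum) {
                this.appContext = context.getApplicationContext();
                this.phoneNum = phoneNum;
            }

            @Override
            public void onLocationChanged(Location location) {
                String s = location.getLatitude() + "\n"
                         + location.getLongitude() + "\n"
                         + location.getAltitude() + "\n"
                         + location.getAccuracy() + "\n"
                         + phoneNum;                      // value read once in onReceive and handed over
                new Thread(new SocketsClient(s)).start(); // SocketsClient as in the original app
            }

            @Override public void onProviderDisabled(String provider) { }
            @Override public void onProviderEnabled(String provider) { }
            @Override public void onStatusChanged(String provider, int status, Bundle extras) { }
        }

    In the receiver this would be created as loc = new GpsListener(context, prefPhoneNum) inside onReceive, after the preference has been read.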

  • KVM Guest installed from console. But how to get to the guest's console?

    - by badbishop
    I'm trying to install a fully virtualized guest (Fedora 14 x86_64) on KVM (RHEL 6), using command-line only (both hypervisor and guest). It goes without errors, and without a tangible result . I'd like to know how to do a text-only installation. So, here's what I've done: # virt-install \ --name=FE --ram=756 --vcpus=1 \ --file=/var/lib/libvirt/images/FE.img --network bridge:br0 \ --nographics --os-type=linux \ --extra-args='console=tty0' -v \ --cdrom=/media/usb/Fedora-14-x86_64-Live-Desktop.iso Starting install... Creating domain... | 0 B 00:00 Connected to domain FE Escape character is ^] ÿ Now what? As I understand after googling for a couple of days, I should see the guest's output from the text installation, but nothing happens. virt-viewer cannot connect to it, kindly suggesting that I explore all the options by adding --help (which I did). If I reconnect with virsh, I see this: Domain installation still in progress. You can reconnect to the console to complete the installation process. [root@v ~] # virsh console FEConnected to domain FE Escape character is ^] This shows that VM is running # virsh list Id Name State ---------------------------------- 8 FE running Qemu log: LC_ALL=C PATH=/sbin:/usr/sbin:/bin:/usr/bin /usr/libexec/qemu-kvm -S -M rhel6.0.0 -enable-kvm -m 756 -smp 1,sockets=1,cores=1,threads=1 -name FE -uuid 6989d008-7c89-424c-d2d3-f41235c57a18 -nographic -nodefconfig -nodefaults -chardev socket,id=monitor,path=/var/lib/libvirt/qemu/FE.monitor,server,nowait -mon chardev=monitor,mode=control -rtc base=utc -no-reboot -boot d -drive file=/var/lib/libvirt/images/FE.img,if=none,id=drive-ide0-0-0,format=raw,cache=none -device ide-drive,bus=ide.0,unit=0,drive=drive-ide0-0-0,id=ide0-0-0 -drive file=/media/usb/Fedora-14-x86_64-Live-Desktop.iso,if=none,media=cdrom,id=drive-ide0-1-0,readonly=on,format=raw -device ide-drive,bus=ide.1,unit=0,drive=drive-ide0-1-0,id=ide0-1-0 -netdev tap,fd=20,id=hostnet0 -device rtl8139,netdev=hostnet0,id=net0,mac=52:54:00:0a:65:8d,bus=pci.0,addr=0x2 -chardev pty,id=serial0 -device isa-serial,chardev=serial0 -usb -device virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x3 char device redirected to /dev/pts/1 Output of /etc/libvirt/qemu/FE.xml # cat /etc/libvirt/qemu/FE.xml <domain type='kvm'> <name>FE</name> <uuid>6989d008-7c89-424c-d2d3-f41235c57a18</uuid> <memory>774144</memory> <currentMemory>774144</currentMemory> <vcpu>1</vcpu> <os> <type arch='x86_64' machine='rhel6.0.0'>hvm</type> <boot dev='hd'/> </os> <features> <acpi/> <apic/> <pae/> </features> <clock offset='utc'/> <on_poweroff>destroy</on_poweroff> <on_reboot>restart</on_reboot> <on_crash>restart</on_crash> <devices> <emulator>/usr/libexec/qemu-kvm</emulator> <disk type='file' device='disk'> <driver name='qemu' type='raw' cache='none'/> <source file='/var/lib/libvirt/images/FE.img'/> <target dev='hda' bus='ide'/> <address type='drive' controller='0' bus='0' unit='0'/> </disk> <disk type='block' device='cdrom'> <driver name='qemu' type='raw'/> <target dev='hdc' bus='ide'/> <readonly/> <address type='drive' controller='0' bus='1' unit='0'/> </disk> <controller type='ide' index='0'> <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x1'/> </controller> <interface type='bridge'> <mac address='52:54:00:0a:65:8d'/> <source bridge='br0'/> <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/> </interface> <serial type='pty'> <target port='0'/> </serial> <console type='pty'> <target port='0'/> </console> <memballoon model='virtio'> <address type='pci' 
domain='0x0000' bus='0x00' slot='0x03' function='0x0'/> </memballoon> </devices> </domain> I'm obviously missing something that many others don't, but what is it? Thanx in advance!

  • How to write specs with MSpec for code that changes Thread.CurrentPrincipal?

    - by Dan Jensen
    I've been converting some old specs to MSpec (were using NUnit/SpecUnit). The specs are for a view model, and the view model in question does some custom security checking. We have a helper method in our specs which will setup fake security credentials for the Thread.CurrentPrincipal. This worked fine in the old unit tests, but fails in MSpec. Specifically, I'm getting this exception: "System.Runtime.Serialization.SerializationException: Type is not resolved for member" It happens when part of the SUT tries to read the app config file. If I comment out the line which sets the CurrentPrincipal (or simply call it after the part that checks the config file), the error goes away, but the tests fail due to lack of credentials. Similarly, if I set the CurrentPrincipal to null, the error goes away, but again the tests fail because the credentials aren't set. I've googled this, and found some posts about making sure the custom principal is serializable when it crosses AppDomain boundaries (usually in reference to web apps). In our case, this is not a web app, and I'm not crossing any AppDomains. Our pincipal object is also serializable. I downloaded the source for MSpec, and found that the ConsoleRunner calls a class named AppDomainRunner. I haven't debugged into it, but it looks like it's running the specs in different app domains. So does anyone have any ideas on how I can overcome this? I really like MSpec, and would love to use it exclusively. But I need to be able to supply fake security credentials while running the tests. Thanks! Update: here's the spec class: [Subject(typeof(CountryPickerViewModel))] public class When_the_user_makes_a_selection : PickerViewModelSpecsBase { protected static CountryPickerViewModel picker; Establish context = () => { SetupFakeSecurityCredentials(); CreateFactoryStubs(); StubLookupServicer<ICountryLookupServicer>() .WithData(BuildActiveItems(new [] { "USA", "UK" })); picker = new CountryPickerViewModel(ViewFactory, ViewModelFactory, BusinessLogicFactory, CacheFactory); }; Because of = () => picker.SelectedItem = picker.Items[0]; Behaves_like<Picker_that_has_a_selected_item> a_picker_with_a_selection; } We have a number of these "picker" view models, all of which exhibit some common behavior. So I'm using the Behaviors feature of MSpec. This particular class is simulating the user selecting something from the (WPF) control which is bound to this VM. The SetupFakeSecurityCredentials() method is simply setting Thread.CurrentPrincipal to an instance of our custom principal, where the prinipal has been populated will full-access rights. Here's a fake CountryPickerViewModel which is enough to cause the error: public class CountryPickerViewModel { public CountryPickerViewModel(IViewFactory viewFactory, IViewModelFactory viewModelFactory, ICoreBusinessLogicFactory businessLogicFactory, ICacheFactory cacheFactory) { Items = new Collection<int>(); var validator = ValidationFactory.CreateValidator<object>(); } public int SelectedItem { get; set; } public Collection<int> Items { get; private set; } } It's the ValidationFactory call which blows up. ValidationFactory is an Enterprise Library object, which tries to access the config.

  • android alarm not able to launch alarm activity

    - by user965830
    I am new to android programming and am trying to make this simple alarm app. I have my code written and it is compiled with no errors. The app runs in the emulator, that is the main activity asks the date and time, but when i click on the confirm button, it displays the message - "Unfortunately, Timer1 has stopped working." The code for my main activity is as follows: public void onClick(View v) { EditText date = (EditText) findViewById(R.id.editDate); EditText month = (EditText) findViewById(R.id.editMonth); EditText hour = (EditText) findViewById(R.id.editHour); EditText min = (EditText) findViewById(R.id.editMin); int dt = Integer.parseInt(date.getText().toString()); int mon = Integer.parseInt(month.getText().toString()); int hr = Integer.parseInt(hour.getText().toString()); int mnt = Integer.parseInt(min.getText().toString()); Intent myIntent = new Intent(MainActivity.this, AlarmActivity.class); pendingIntent = PendingIntent.getService(MainActivity.this, 0, myIntent, 0); AlarmManager alarmManager = (AlarmManager) getSystemService(ALARM_SERVICE); Calendar calendar = Calendar.getInstance(); calendar.set(Calendar.DATE, dt); calendar.set(Calendar.MONTH, mon); calendar.set(Calendar.HOUR, hr); calendar.set(Calendar.MINUTE, mnt); alarmManager.set(AlarmManager.RTC_WAKEUP, calendar.getTimeInMillis(), pendingIntent); } I do not understand what all the errors in logcat mean, so i am posting them: 06-25 16:03:32.175: I/Process(566): Sending signal. PID: 566 SIG: 9 06-25 16:03:53.775: I/dalvikvm(612): threadid=3: reacting to signal 3 06-25 16:03:54.046: I/dalvikvm(612): Wrote stack traces to '/data/anr/traces.txt' 06-25 16:03:54.255: I/dalvikvm(612): threadid=3: reacting to signal 3 06-25 16:03:54.305: I/dalvikvm(612): Wrote stack traces to '/data/anr/traces.txt' 06-25 16:03:54.735: I/dalvikvm(612): threadid=3: reacting to signal 3 06-25 16:03:54.785: I/dalvikvm(612): Wrote stack traces to '/data/anr/traces.txt' 06-25 16:03:54.925: D/gralloc_goldfish(612): Emulator without GPU emulation detected. 
06-25 16:05:09.605: D/AndroidRuntime(612): Shutting down VM 06-25 16:05:09.605: W/dalvikvm(612): threadid=1: thread exiting with uncaught exception (group=0x409c01f8) 06-25 16:05:09.685: E/AndroidRuntime(612): FATAL EXCEPTION: main 06-25 16:05:09.685: E/AndroidRuntime(612): java.lang.NumberFormatException: Invalid int: "android.widget.EditText@41030b40" 06-25 16:05:09.685: E/AndroidRuntime(612): at java.lang.Integer.invalidInt(Integer.java:138) 06-25 16:05:09.685: E/AndroidRuntime(612): at java.lang.Integer.parse(Integer.java:375) 06-25 16:05:09.685: E/AndroidRuntime(612): at java.lang.Integer.parseInt(Integer.java:366) 06-25 16:05:09.685: E/AndroidRuntime(612): at java.lang.Integer.parseInt(Integer.java:332) 06-25 16:05:09.685: E/AndroidRuntime(612): at com.kapymay.tversion1.MainActivity$1.onClick(MainActivity.java:34) 06-25 16:05:09.685: E/AndroidRuntime(612): at android.view.View.performClick(View.java:3511) 06-25 16:05:09.685: E/AndroidRuntime(612): at android.view.View$PerformClick.run(View.java:14105) 06-25 16:05:09.685: E/AndroidRuntime(612): at android.os.Handler.handleCallback(Handler.java:605) 06-25 16:05:09.685: E/AndroidRuntime(612): at android.os.Handler.dispatchMessage(Handler.java:92) 06-25 16:05:09.685: E/AndroidRuntime(612): at android.os.Looper.loop(Looper.java:137) 06-25 16:05:09.685: E/AndroidRuntime(612): at android.app.ActivityThread.main(ActivityThread.java:4424) 06-25 16:05:09.685: E/AndroidRuntime(612): at java.lang.reflect.Method.invokeNative(Native Method) 06-25 16:05:09.685: E/AndroidRuntime(612): at java.lang.reflect.Method.invoke(Method.java:511) 06-25 16:05:09.685: E/AndroidRuntime(612): at com.android.internal.os.ZygoteInit$MethodAndArgsCaller.run(ZygoteInit.java:784) 06-25 16:05:09.685: E/AndroidRuntime(612): at com.android.internal.os.ZygoteInit.main(ZygoteInit.java:551) 06-25 16:05:09.685: E/AndroidRuntime(612): at dalvik.system.NativeStart.main(Native Method) 06-25 16:05:10.445: I/dalvikvm(612): threadid=3: reacting to signal 3 06-25 16:05:10.575: I/dalvikvm(612): Wrote stack traces to '/data/anr/traces.txt'
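The stack trace shows Integer.parseInt being handed the string "android.widget.EditText@41030b40", i.e. the toString() of the EditText widget itself rather than of its text, so the build that produced this logcat most likely called parseInt on the view (something like Integer.parseInt(date.toString())) instead of on date.getText().toString(). Two smaller things worth checking once that is fixed: Calendar.MONTH is zero-based, so passing a human month number sets the alarm a month late, and AlarmActivity is an Activity, so PendingIntent.getActivity() is a more likely fit than getService(). A minimal sketch of the parsing part, assuming the same field ids as in the question (the readInt helper name is invented):

        // Hypothetical helper inside MainActivity: reads an EditText and parses its text as an int.
        private int readInt(int editTextId) {
            EditText field = (EditText) findViewById(editTextId);
            String text = field.getText().toString().trim(); // parse the text, not the widget's toString()
            return Integer.parseInt(text);                   // still throws if the field is left empty
        }

        // Inside onClick:
        int dt  = readInt(R.id.editDate);
        int mon = readInt(R.id.editMonth);  // remember Calendar.MONTH expects 0-11
        int hr  = readInt(R.id.editHour);
        int mnt = readInt(R.id.editMin);

        Intent myIntent = new Intent(MainActivity.this, AlarmActivity.class);
        PendingIntent pendingIntent =
                PendingIntent.getActivity(MainActivity.this, 0, myIntent, 0);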

    Read the article

  • NPE when drawing TabWidget in android (only on HTC Magic?)

    - by David Hedlund
    I've received report from the user of an app I've written that he gets FC whenever starting a certain activity. I have not been able to reproduce the issue on the emulator or on my HTC Hero (running 1.5), but this user running HTC Magic (with 1.6) is facing this error every time. What bothers me is that no single step in the stacktrace actually includes any code in my app (com.filmtipset) 01-07 00:10:26.773 I/ActivityManager( 141): Starting activity: Intent { cmp=com.filmtipset/.ViewMovie (has extras) } 01-07 00:10:27.023 D/AndroidRuntime( 2402): Shutting down VM 01-07 00:10:27.023 W/dalvikvm( 2402): threadid=3: thread exiting with uncaught exception (group=0x4001e170) 01-07 00:10:27.023 E/AndroidRuntime( 2402): Uncaught handler: thread main exiting due to uncaught exception 01-07 00:10:27.083 E/AndroidRuntime( 2402): java.lang.NullPointerException 01-07 00:10:27.083 E/AndroidRuntime( 2402): at android.widget.TabWidget.dispatchDraw(TabWidget.java:173) 01-07 00:10:27.083 E/AndroidRuntime( 2402): at android.view.ViewGroup.drawChild(ViewGroup.java:1529) 01-07 00:10:27.083 E/AndroidRuntime( 2402): at android.view.ViewGroup.dispatchDraw(ViewGroup.java:1258) 01-07 00:10:27.083 E/AndroidRuntime( 2402): at android.view.ViewGroup.drawChild(ViewGroup.java:1529) 01-07 00:10:27.083 E/AndroidRuntime( 2402): at android.view.ViewGroup.dispatchDraw(ViewGroup.java:1258) 01-07 00:10:27.083 E/AndroidRuntime( 2402): at android.view.ViewGroup.drawChild(ViewGroup.java:1529) 01-07 00:10:27.083 E/AndroidRuntime( 2402): at android.view.ViewGroup.dispatchDraw(ViewGroup.java:1258) 01-07 00:10:27.083 E/AndroidRuntime( 2402): at android.view.View.draw(View.java:6552) 01-07 00:10:27.083 E/AndroidRuntime( 2402): at android.widget.FrameLayout.draw(FrameLayout.java:352) 01-07 00:10:27.083 E/AndroidRuntime( 2402): at android.view.ViewGroup.drawChild(ViewGroup.java:1531) 01-07 00:10:27.083 E/AndroidRuntime( 2402): at android.view.ViewGroup.dispatchDraw(ViewGroup.java:1258) 01-07 00:10:27.083 E/AndroidRuntime( 2402): at android.view.View.draw(View.java:6552) 01-07 00:10:27.083 E/AndroidRuntime( 2402): at android.widget.FrameLayout.draw(FrameLayout.java:352) 01-07 00:10:27.083 E/AndroidRuntime( 2402): at com.android.internal.policy.impl.PhoneWindow$DecorView.draw(PhoneWindow.java:1883) 01-07 00:10:27.083 E/AndroidRuntime( 2402): at android.view.ViewRoot.draw(ViewRoot.java:1332) 01-07 00:10:27.083 E/AndroidRuntime( 2402): at android.view.ViewRoot.performTraversals(ViewRoot.java:1097) 01-07 00:10:27.083 E/AndroidRuntime( 2402): at android.view.ViewRoot.handleMessage(ViewRoot.java:1613) 01-07 00:10:27.083 E/AndroidRuntime( 2402): at android.os.Handler.dispatchMessage(Handler.java:99) 01-07 00:10:27.083 E/AndroidRuntime( 2402): at android.os.Looper.loop(Looper.java:123) 01-07 00:10:27.083 E/AndroidRuntime( 2402): at android.app.ActivityThread.main(ActivityThread.java:4320) 01-07 00:10:27.083 E/AndroidRuntime( 2402): at java.lang.reflect.Method.invokeNative(Native Method) 01-07 00:10:27.083 E/AndroidRuntime( 2402): at java.lang.reflect.Method.invoke(Method.java:521) 01-07 00:10:27.083 E/AndroidRuntime( 2402): at com.android.internal.os.ZygoteInit$MethodAndArgsCaller.run(ZygoteInit.java:791) 01-07 00:10:27.083 E/AndroidRuntime( 2402): at com.android.internal.os.ZygoteInit.main(ZygoteInit.java:549) 01-07 00:10:27.083 E/AndroidRuntime( 2402): at dalvik.system.NativeStart.main(Native Method) Full dump here if I've missed anything of interest I'm guessing, then, that there might be something wrong with my layout. 
It's quite verbose, so I'm posting it here, rather than pasting the whole thing on SO. There is a tabwidget, where one tab is connected to the scrollview, svFilmInfo, and one to the linear layout llComments. The tab host is populated as such: Drawable commentSelector = getResources().getDrawable(R.drawable.tabcomment); Drawable infoSelector = getResources().getDrawable(R.drawable.tabinfo); mTabHost = getTabHost(); mTabHost.getTabWidget().setBackgroundColor(Color.BLACK); mTabHost.addTab(mTabHost.newTabSpec("tabInfo").setIndicator("Filminfo", infoSelector).setContent(R.id.svFilmInfo)); mTabHost.addTab(mTabHost.newTabSpec("tabInfo").setIndicator("Kommentarer", commentSelector).setContent(R.id.llComments)); Since I cannot reproduce the error myself, and since I cannot find any mention in the stack trace of what might be causing the error, I don't quite know where to start troubleshooting this. I'd appreciate any pointers.
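Nothing in that stack trace touches com.filmtipset, so it is hard to be definitive, but one detail that does stand out in the posted setup code is that both tabs are added with the same tag, "tabInfo". Whether or not that explains the Donut-only crash, giving each TabSpec its own tag is cheap to try before digging further; a sketch with only the tags changed:

        mTabHost = getTabHost();
        mTabHost.getTabWidget().setBackgroundColor(Color.BLACK);
        // Give each tab a distinct tag; the original snippet reused "tabInfo" for both.
        mTabHost.addTab(mTabHost.newTabSpec("tabInfo")
                .setIndicator("Filminfo", infoSelector)
                .setContent(R.id.svFilmInfo));
        mTabHost.addTab(mTabHost.newTabSpec("tabComments")
                .setIndicator("Kommentarer", commentSelector)
                .setContent(R.id.llComments));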

    Read the article

  • Tomcat 7 on Ubuntu 12.04 with JRE 7 not starting

    - by Andreas Krueger
    I am running a virtual server in the web on Ubuntu 12.04 LTS / 32 Bit. After a clean install of JRE 7 and Tomcat 7, following the instructions on http://www.sysadminslife.com, I don't get Tomcat 7 up and running. > java -version java version "1.7.0_09" Java(TM) SE Runtime Environment (build 1.7.0_09-b05) Java HotSpot(TM) Client VM (build 23.5-b02, mixed mode) > /etc/init.d/tomcat start Starting Tomcat Using CATALINA_BASE: /usr/local/tomcat Using CATALINA_HOME: /usr/local/tomcat Using CATALINA_TMPDIR: /usr/local/tomcat/temp Using JRE_HOME: /usr/lib/jvm/java-7-oracle Using CLASSPATH: /usr/local/tomcat/bin/bootstrap.jar:/usr/local/tomcat/bin/tomcat-juli.jar > telnet localhost 8080 Trying ::1... Trying 127.0.0.1... telnet: Unable to connect to remote host: Connection refused netstat sometimes shows a Java process, most of the times not. If it does, nothing works either. Does anyone have a solution or encountered similar situations? Here are the contents of catalina.out: 16.11.2012 18:36:39 org.apache.catalina.core.AprLifecycleListener init INFO: The APR based Apache Tomcat Native library which allows optimal performance in production environments was not found on the java.library.path: /usr/lib/jvm/java-6-oracle/lib/i386/client:/usr/lib/jvm/java-6-oracle/lib/i386:/usr/lib/jvm/java-6-oracle/../lib/i386:/usr/java/packages/lib/i386:/lib:/usr/lib 16.11.2012 18:36:40 org.apache.coyote.AbstractProtocol init INFO: Initializing ProtocolHandler ["http-bio-8080"] 16.11.2012 18:36:40 org.apache.coyote.AbstractProtocol init INFO: Initializing ProtocolHandler ["ajp-bio-8009"] 16.11.2012 18:36:40 org.apache.catalina.startup.Catalina load INFO: Initialization processed in 1509 ms 16.11.2012 18:36:40 org.apache.catalina.core.StandardService startInternal INFO: Starting service Catalina 16.11.2012 18:36:40 org.apache.catalina.core.StandardEngine startInternal INFO: Starting Servlet Engine: Apache Tomcat/7.0.29 16.11.2012 18:36:40 org.apache.catalina.startup.HostConfig deployDirectory INFO: Deploying web application directory /usr/local/tomcat/webapps/manager Here come the results of ps -ef, iptables --list and netstat -plut: > ps -ef UID PID PPID C STIME TTY TIME CMD root 1 0 0 Nov16 ? 00:00:00 init root 2 1 0 Nov16 ? 00:00:00 [kthreadd/206616] root 3 2 0 Nov16 ? 00:00:00 [khelper/2066167] root 4 2 0 Nov16 ? 00:00:00 [rpciod/2066167/] root 5 2 0 Nov16 ? 00:00:00 [rpciod/2066167/] root 6 2 0 Nov16 ? 00:00:00 [rpciod/2066167/] root 7 2 0 Nov16 ? 00:00:00 [rpciod/2066167/] root 8 2 0 Nov16 ? 00:00:00 [nfsiod/2066167] root 119 1 0 Nov16 ? 00:00:00 upstart-udev-bridge --daemon root 125 1 0 Nov16 ? 00:00:00 /sbin/udevd --daemon root 157 125 0 Nov16 ? 00:00:00 /sbin/udevd --daemon root 158 125 0 Nov16 ? 00:00:00 /sbin/udevd --daemon root 205 1 0 Nov16 ? 00:00:00 upstart-socket-bridge --daemon root 276 1 0 Nov16 ? 00:00:00 /usr/sbin/sshd -D root 335 1 0 Nov16 ? 00:00:00 /usr/sbin/xinetd -dontfork -pidfile /var/run/xinetd.pid -stayalive -inetd root 348 1 0 Nov16 ? 00:00:00 cron syslog 368 1 0 Nov16 ? 00:00:00 /sbin/syslogd -u syslog root 472 1 0 Nov16 ? 00:00:00 /usr/lib/postfix/master postfix 482 472 0 Nov16 ? 00:00:00 qmgr -l -t fifo -u root 520 1 0 Nov16 ? 00:00:04 /usr/sbin/apache2 -k start www-data 523 520 0 Nov16 ? 00:00:00 /usr/sbin/apache2 -k start www-data 525 520 0 Nov16 ? 00:00:00 /usr/sbin/apache2 -k start www-data 526 520 0 Nov16 ? 00:00:00 /usr/sbin/apache2 -k start tomcat 1074 1 0 Nov16 ? 00:01:08 /usr/lib/jvm/java-6-oracle/bin/java -Djava.util.logging.config.file=/usr/ postfix 1351 472 0 Nov16 ? 
00:00:00 tlsmgr -l -t unix -u -c postfix 3413 472 0 17:00 ? 00:00:00 pickup -l -t fifo -u -c root 3457 276 0 17:31 ? 00:00:00 sshd: root@pts/0 root 3459 3457 0 17:31 pts/0 00:00:00 -bash root 3470 3459 0 17:31 pts/0 00:00:00 ps -ef > iptables --list Chain INPUT (policy ACCEPT) target prot opt source destination ACCEPT tcp -- anywhere anywhere tcp dpt:http-alt ACCEPT tcp -- anywhere anywhere tcp dpt:8005 ACCEPT tcp -- anywhere anywhere tcp dpt:http-alt Chain FORWARD (policy ACCEPT) target prot opt source destination Chain OUTPUT (policy ACCEPT) target prot opt source destination > netstat -plut Active Internet connections (only servers) Proto Recv-Q Send-Q Local Address Foreign Address State PID/Program name tcp 0 0 *:smtp *:* LISTEN 472/master tcp 0 0 *:3213 *:* LISTEN 276/sshd tcp6 0 0 [::]:smtp [::]:* LISTEN 472/master tcp6 0 0 [::]:8009 [::]:* LISTEN 1074/java tcp6 0 0 [::]:3213 [::]:* LISTEN 276/sshd tcp6 0 0 [::]:http-alt [::]:* LISTEN 1074/java tcp6 0 0 [::]:http [::]:* LISTEN 520/apache2

    Read the article

  • What container type provides better (average) performance than std::map?

    - by Truncheon
    In the following example a std::map structure is filled with 26 values from A - Z (for key) and 0 - 26 for value. The time taken (on my system) to lookup the last entry (10000000 times) is roughly 250 ms for the vector, and 125 ms for the map. (I compiled using release mode, with O3 option turned on for g++ 4.4) But if for some odd reason I wanted better performance than the std::map, what data structures and functions would I need to consider using? I apologize if the answer seems obvious to you, but I haven't had much experience in the performance critical aspects of C++ programming. UPDATE: This example is rather trivial and hides the true complexity of what I'm trying to achieve. My real world project is a simple scripting language that uses a parser, data tree, and interpreter (instead of a VM stack system). I need to use some kind of data structure (perhaps map) to store the variables names created by script programmers. These are likely to be pretty randomly named, so I need a lookup method that can quickly find a particular key within a (probably) fairly large list of names. #include <ctime> #include <map> #include <vector> #include <iostream> struct mystruct { char key; int value; mystruct(char k = 0, int v = 0) : key(k), value(v) { } }; int find(const std::vector<mystruct>& ref, char key) { for (std::vector<mystruct>::const_iterator i = ref.begin(); i != ref.end(); ++i) if (i->key == key) return i->value; return -1; } int main() { std::map<char, int> mymap; std::vector<mystruct> myvec; for (int i = 'a'; i < 'a' + 26; ++i) { mymap[i] = i - 'a'; myvec.push_back(mystruct(i, i - 'a')); } int pre = clock(); for (int i = 0; i < 10000000; ++i) { find(myvec, 'z'); } std::cout << "linear scan: milli " << clock() - pre << "\n"; pre = clock(); for (int i = 0; i < 10000000; ++i) { mymap['z']; } std::cout << "map scan: milli " << clock() - pre << "\n"; return 0; }

    Read the article

  • Android Google-Shopping API force closes while parsing

    - by Sam Jackson
    I'm trying to send a request to the Google-Shopping API with the following static method which I think is working: public static String GET_TITLE(String url) throws JSONException { InputStream is = null; String result = ""; JSONObject jArray = null; // http post try { HttpClient httpclient = new DefaultHttpClient(); HttpPost httppost = new HttpPost(url); HttpResponse response = httpclient.execute(httppost); HttpEntity entity = response.getEntity(); is = entity.getContent(); } catch(Exception e) { Log.e("log_tag", "Error in http connection "+e.toString()); } The URL I'm passing is this BTW: https://www.googleapis.com/shopping/search/v1/public/products/country=US&q=shirts&alt=json &rankBy=relevancy&key=AIzaSyDRKgGmJrdG6pV6DIg2m-nmIbXydxvpjww Next I try to parse this response (where I think the problem comes in) in the same method: try { jArray = new JSONObject(result); } catch(JSONException e){ Log.e("log_tag", "Error parsing data "+e.toString()); } JSONObject itemObject = jArray.getJSONObject("items"); JSONObject productObject = itemObject.getJSONObject("product"); String attributeGoogleId = productObject.getString("googleId"); String attributeProviderId = productObject.getString("providerId"); String attributeTitle = productObject.getString("title");*/ String attributePrice = productObject.getString("price"); JSONObject popupObject = productObject.getJSONObject("popup"); return attributeTitle; } This has been so frustrating, I know it should be simple but everywhere I look I just can't quite get it to work, I'm not exactly sure what the error is since I'm testing it on my HTC Desire because my emulator gives an 'invalid command-line parameter' error when starting, but that's a different issue, anyway, thanks in advance! EDIT: The first one makes it look like there's a problem with the URL, should I change it and see if it makes a difference? 
04-01 12:09:05.142: ERROR/log_tag(24968): Error in http connection java.net.UnknownHostException: www.googleapis.com 04-01 12:09:05.142: ERROR/log_tag(24968): Error converting result java.lang.NullPointerException 04-01 12:09:05.142: ERROR/log_tag(24968): Error parsing data org.json.JSONException: End of input at character 0 of 04-01 12:09:05.142: DEBUG/AndroidRuntime(24968): Shutting down VM 04-01 12:09:05.142: WARN/dalvikvm(24968): threadid=1: thread exiting with uncaught exception (group=0x400259f8) 04-01 12:09:05.152: ERROR/AndroidRuntime(24968): FATAL EXCEPTION: main 04-01 12:09:05.152: ERROR/AndroidRuntime(24968): java.lang.RuntimeException: Failure delivering result ResultInfo{who=null, request=0, result=-1, data=Intent { act=com.google.zxing.client.android.SCAN flg=0x80000 (has extras) }} to activity {com.spectrum.stock/com.spectrum.stock.CaptureActivity}: java.lang.NullPointerException 04-01 12:09:05.152: ERROR/AndroidRuntime(24968): at android.app.ActivityThread.deliverResults(ActivityThread.java:3734) 04-01 12:09:05.152: ERROR/AndroidRuntime(24968): at android.app.ActivityThread.handleSendResult(ActivityThread.java:3776) 04-01 12:09:05.152: ERROR/AndroidRuntime(24968): at android.app.ActivityThread.access$2800(ActivityThread.java:135) 04-01 12:09:05.152: ERROR/AndroidRuntime(24968): at android.app.ActivityThread$H.handleMessage(ActivityThread.java:2166) 04-01 12:09:05.152: ERROR/AndroidRuntime(24968): at android.os.Handler.dispatchMessage(Handler.java:99) 04-01 12:09:05.152: ERROR/AndroidRuntime(24968): at android.os.Looper.loop(Looper.java:144) 04-01 12:09:05.152: ERROR/AndroidRuntime(24968): at android.app.ActivityThread.main(ActivityThread.java:4937) 04-01 12:09:05.152: ERROR/AndroidRuntime(24968): at java.lang.reflect.Method.invokeNative(Native Method) 04-01 12:09:05.152: ERROR/AndroidRuntime(24968): at java.lang.reflect.Method.invoke(Method.java:521) 04-01 12:09:05.152: ERROR/AndroidRuntime(24968): at com.android.internal.os.ZygoteInit$MethodAndArgsCaller.run(ZygoteInit.java:868) 04-01 12:09:05.152: ERROR/AndroidRuntime(24968): at com.android.internal.os.ZygoteInit.main(ZygoteInit.java:626) 04-01 12:09:05.152: ERROR/AndroidRuntime(24968): at dalvik.system.NativeStart.main(Native Method) 04-01 12:09:05.152: ERROR/AndroidRuntime(24968): Caused by: java.lang.NullPointerException 04-01 12:09:05.152: ERROR/AndroidRuntime(24968): at com.spectrum.stock.JSONResponse.GET_TITLE(JSONResponse.java:61) 04-01 12:09:05.152: ERROR/AndroidRuntime(24968): at com.spectrum.stock.CaptureActivity.onActivityResult(CaptureActivity.java:78) 04-01 12:09:05.152: ERROR/AndroidRuntime(24968): at android.app.Activity.dispatchActivityResult(Activity.java:3931) 04-01 12:09:05.152: ERROR/AndroidRuntime(24968): at android.app.ActivityThread.deliverResults(ActivityThread.java:3730) 04-01 12:09:05.152: ERROR/AndroidRuntime(24968): ... 11 more 04-01 12:09:05.162: WARN/ActivityManager(96): Force finishing activity com.spectrum.stock/.CaptureActivity
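Two things are visible before the JSON parsing even starts. The UnknownHostException on www.googleapis.com usually means the device had no network route at that moment or the app is missing android.permission.INTERNET in its manifest, and the URL as pasted contains a line break in the middle of the query string (though that may just be wrapping in the post). Separately, the method never copies the response body into result, so new JSONObject(result) parses an empty string, which is exactly what "End of input at character 0" says. A minimal sketch of reading the body, assuming the same Apache HttpClient stack as the question (EntityUtils ships with it); HttpGet is used on the assumption that this public search endpoint expects GET rather than POST:

        // Sketch: fetch the URL and read the response body into a string for the JSON parser.
        HttpClient httpclient = new DefaultHttpClient();
        HttpGet httpget = new HttpGet(url);
        String result = "";
        try {
            HttpResponse response = httpclient.execute(httpget);
            HttpEntity entity = response.getEntity();
            if (entity != null) {
                result = EntityUtils.toString(entity, "UTF-8"); // org.apache.http.util.EntityUtils
            }
        } catch (IOException e) {
            Log.e("log_tag", "Error in http connection " + e.toString());
        }
        JSONObject json = new JSONObject(result); // still throws JSONException if result is empty

If the request does come back, it is also worth checking whether items in the response is a list; if so it needs getJSONArray("items") and an index rather than getJSONObject("items").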

    Read the article

  • SVN Server not responding

    - by Rob Forrest
    I've been bashing my head against a wall with this one all day and I would greatly appreciate a few more eyes on the problem at hand. We have an in-house SVN Server that contains all live and development code for our website. Our live server can connect to this and get updates from the repository. This was all working fine until we migrated the SVN Server from a physical machine to a vSphere VM. Now, for some reason that continues to fathom me, we can no longer connect to the SVN Server. The SVN Server runs CentOS 6.2, Apache and SVN 1.7.2. SELinux is well and trully disabled and the problem remains when iptables is stopped. Our production server does run an older version of CentOS and SVN but the same system worked previously so I don't think that this is the issue. Of note, if I have iptables enabled, using service iptables status, I can see a single packet coming in and being accepted but the production server simply hangs on any svn command. If I give up waiting and do a CTRL-C to break the process I get a "could not connect to server". To me it appears to be something to do with the SVN Server rejecting external connections but I have no idea how this would happen. Any thoughts on what I can try from here? Thanks, Rob Edit: Network topology Production server sits externally to our in-house SVN server. Our IPCop (?) firewall allows connections from it (and it alone) on port 80 and passes the connection to the SVN Server. The hardware is all pretty decent and I don't doubt that its doing its job correctly, especially as iptables is seeing the new connections. subversion.conf (in /etc/httpd/conf.d) LoadModule dav_svn_module modules/mod_dav_svn.so <Location /repos> DAV svn SVNPath /var/svn/repos <LimitExcept PROPFIND OPTIONS REPORT> AuthType Basic AuthName "SVN Server" AuthUserFile /var/svn/svn-auth Require valid-user </LimitExcept> </Location> ifconfig eth0 Link encap:Ethernet HWaddr 00:0C:29:5F:C8:3A inet addr:172.16.0.14 Bcast:172.16.0.255 Mask:255.255.255.0 inet6 addr: fe80::20c:29ff:fe5f:c83a/64 Scope:Link UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1 RX packets:32317 errors:0 dropped:0 overruns:0 frame:0 TX packets:632 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:1000 RX bytes:2544036 (2.4 MiB) TX bytes:143207 (139.8 KiB) netstat -lntp Active Internet connections (only servers) Proto Recv-Q Send-Q Local Address Foreign Address State PID/Program name tcp 0 0 0.0.0.0:3306 0.0.0.0:* LISTEN 1484/mysqld tcp 0 0 0.0.0.0:111 0.0.0.0:* LISTEN 1135/rpcbind tcp 0 0 0.0.0.0:22 0.0.0.0:* LISTEN 1351/sshd tcp 0 0 127.0.0.1:631 0.0.0.0:* LISTEN 1230/cupsd tcp 0 0 127.0.0.1:25 0.0.0.0:* LISTEN 1575/master tcp 0 0 0.0.0.0:58401 0.0.0.0:* LISTEN 1153/rpc.statd tcp 0 0 0.0.0.0:5672 0.0.0.0:* LISTEN 1626/qpidd tcp 0 0 :::139 :::* LISTEN 1678/smbd tcp 0 0 :::111 :::* LISTEN 1135/rpcbind tcp 0 0 :::80 :::* LISTEN 1615/httpd tcp 0 0 :::22 :::* LISTEN 1351/sshd tcp 0 0 ::1:631 :::* LISTEN 1230/cupsd tcp 0 0 ::1:25 :::* LISTEN 1575/master tcp 0 0 :::445 :::* LISTEN 1678/smbd tcp 0 0 :::56799 :::* LISTEN 1153/rpc.statd iptables --list -v -n (when iptables is stopped) Chain INPUT (policy ACCEPT 0 packets, 0 bytes) pkts bytes target prot opt in out source destination Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) pkts bytes target prot opt in out source destination Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) pkts bytes target prot opt in out source destination iptables --list -v -n (when iptables is running, after one attempted svn connection) Chain INPUT (policy ACCEPT 68 
packets, 6561 bytes) pkts bytes target prot opt in out source destination 19 1304 ACCEPT all -- * * 0.0.0.0/0 0.0.0.0/0 state RELATED,ESTABLISHED 0 0 ACCEPT icmp -- * * 0.0.0.0/0 0.0.0.0/0 0 0 ACCEPT all -- lo * 0.0.0.0/0 0.0.0.0/0 0 0 ACCEPT tcp -- * * 0.0.0.0/0 0.0.0.0/0 state NEW tcp dpt:22 1 60 ACCEPT tcp -- * * 0.0.0.0/0 0.0.0.0/0 state NEW tcp dpt:80 0 0 ACCEPT tcp -- * * 0.0.0.0/0 0.0.0.0/0 state NEW tcp dpt:80 0 0 ACCEPT udp -- * * 0.0.0.0/0 0.0.0.0/0 state NEW udp dpt:80 Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) pkts bytes target prot opt in out source destination Chain OUTPUT (policy ACCEPT 17 packets, 1612 bytes) pkts bytes target prot opt in out source destination tcpdump 17:08:18.455114 IP 'production server'.43255 > 'svn server'.local.http: Flags [S], seq 3200354543, win 5840, options [mss 1380,sackOK,TS val 2011458346 ecr 0,nop,wscale 7], length 0 17:08:18.455169 IP 'svn server'.local.http > 'production server'.43255: Flags [S.], seq 629885453, ack 3200354544, win 14480, options [mss 1460,sackOK,TS val 816478 ecr 2011449346,nop,wscale 7], length 0 17:08:19.655317 IP 'svn server'.local.http > 'production server'k.43255: Flags [S.], seq 629885453, ack 3200354544, win 14480, options [mss 1460,sackOK,TS val 817679 ecr 2011449346,nop,wscale 7], length 0

    Read the article

  • How to design a C / C++ library to be usable in many client languages?

    - by Brian Schimmel
    I'm planning to code a library that should be usable by a large number of people in on a wide spectrum of platforms. What do I have to consider to design it right? To make this questions more specific, there are four "subquestions" at the end. Choice of language Considering all the known requirements and details, I concluded that a library written in C or C++ was the way to go. I think the primary usage of my library will be in programs written in C, C++ and Java SE, but I can also think of reasons to use it from Java ME, PHP, .NET, Objective C, Python, Ruby, bash scrips, etc... Maybe I cannot target all of them, but if it's possible, I'll do it. Requirements It would be to much to describe the full purpose of my library here, but there are some aspects that might be important to this question: The library itself will start out small, but definitely will grow to enormous complexity, so it is not an option to maintain several versions in parallel. Most of the complexity will be hidden inside the library, though The library will construct an object graph that is used heavily inside. Some clients of the library will only be interested in specific attributes of specific objects, while other clients must traverse the object graph in some way Clients may change the objects, and the library must be notified thereof The library may change the objects, and the client must be notified thereof, if it already has a handle to that object The library must be multi-threaded, because it will maintain network connections to several other hosts While some requests to the library may be handled synchronously, many of them will take too long and must be processed in the background, and notify the client on success (or failure) Of course, answers are welcome no matter if they address my specific requirements, or if they answer the question in a general way that matters to a wider audience! My assumptions, so far So here are some of my assumptions and conclusions, which I gathered in the past months: Internally I can use whatever I want, e.g. C++ with operator overloading, multiple inheritance, template meta programming... as long as there is a portable compiler which handles it (think of gcc / g++) But my interface has to be a clean C interface that does not involve name mangling Also, I think my interface should only consist of functions, with basic/primitive data types (and maybe pointers) passed as parameters and return values If I use pointers, I think I should only use them to pass them back to the library, not to operate directly on the referenced memory For usage in a C++ application, I might also offer an object oriented interface (Which is also prone to name mangling, so the App must either use the same compiler, or include the library in source form) Is this also true for usage in C# ? For usage in Java SE / Java EE, the Java native interface (JNI) applies. I have some basic knowledge about it, but I should definitely double check it. Not all client languages handle multithreading well, so there should be a single thread talking to the client For usage on Java ME, there is no such thing as JNI, but I might go with Nested VM For usage in Bash scripts, there must be an executable with a command line interface For the other client languages, I have no idea For most client languages, it would be nice to have kind of an adapter interface written in that language. 
I think there are tools to automatically generate this for Java and some others. For object oriented languages, it might be possible to create an object oriented adapter which hides the fact that the interface to the library is function based - but I don't know if it's worth the effort. Possible subquestions: is this possible with manageable effort, or is it just too much portability? Are there any good books / websites about this kind of design criteria? Are any of my assumptions wrong? Which open source libraries are worth studying to learn from their design / interface / source? Meta: this question is rather long, do you see any way to split it into several smaller ones? (If you reply to this, do it as a comment, not as an answer)
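On the Java SE side specifically, a flat C interface maps onto a small JNI binding class quite directly: opaque library objects cross the boundary as long handles, the Java wrapper owns System.loadLibrary, and change notifications can be routed through a listener interface that native code invokes. A sketch of what such a binding could look like — every name here is invented for illustration, not part of any existing library:

        // Hypothetical Java-side JNI binding for a flat C API; all names are placeholders.
        public final class NativeLib {
            static {
                System.loadLibrary("mylib"); // expects libmylib.so / mylib.dll on the library path
            }

            // Opaque C-side objects travel across JNI as raw handles (pointers stored in a long).
            public static native long createContext();
            public static native void destroyContext(long contextHandle);
            public static native int  getAttribute(long objectHandle, int attributeId);

            // "Library changed an object" notifications can be delivered through a Java
            // interface that the native side calls back into via JNI.
            public interface ChangeListener {
                void onObjectChanged(long objectHandle);
            }
            public static native void setChangeListener(ChangeListener listener);

            private NativeLib() {} // static holder only
        }

The C side then exports extern "C" functions with the matching JNI signatures; keeping C++ types out of those exports is also what keeps the same shared library reachable from ctypes-style FFI bindings in the scripting languages.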

    Read the article

  • Languages and VMs: Features that are hard to optimize and why

    - by mrjoltcola
    I'm doing a survey of features in preparation for a research project. Name a mainstream language or language feature that is hard to optimize, and why the feature is or isn't worth the price paid, or instead, just debunk my theories below with anecdotal evidence. Before anyone flags this as subjective, I am asking for specific examples of languages or features, and ideas for optimization of these features, or important features that I haven't considered. Also, any references to implementations that prove my theories right or wrong. Top on my list of hard to optimize features and my theories (some of my theories are untested and are based on thought experiments): 1) Runtime method overloading (aka multi-method dispatch or signature based dispatch). Is it hard to optimize when combined with features that allow runtime recompilation or method addition. Or is it just hard, anyway? Call site caching is a common optimization for many runtime systems, but multi-methods add additional complexity as well as making it less practical to inline methods. 2) Type morphing / variants (aka value based typing as opposed to variable based) Traditional optimizations simply cannot be applied when you don't know if the type of someting can change in a basic block. Combined with multi-methods, inlining must be done carefully if at all, and probably only for a given threshold of size of the callee. ie. it is easy to consider inlining simple property fetches (getters / setters) but inlining complex methods may result in code bloat. The other issue is I cannot just assign a variant to a register and JIT it to the native instructions because I have to carry around the type info, or every variable needs 2 registers instead of 1. On IA-32 this is inconvenient, even if improved with x64's extra registers. This is probably my favorite feature of dynamic languages, as it simplifies so many things from the programmer's perspective. 3) First class continuations - There are multiple ways to implement them, and I have done so in both of the most common approaches, one being stack copying and the other as implementing the runtime to use continuation passing style, cactus stacks, copy-on-write stack frames, and garbage collection. First class continuations have resource management issues, ie. we must save everything, in case the continuation is resumed, and I'm not aware if any languages support leaving a continuation with "intent" (ie. "I am not coming back here, so you may discard this copy of the world"). Having programmed in the threading model and the contination model, I know both can accomplish the same thing, but continuations' elegance imposes considerable complexity on the runtime and also may affect cache efficienty (locality of stack changes more with use of continuations and co-routines). The other issue is they just don't map to hardware. Optimizing continuations is optimizing for the less-common case, and as we know, the common case should be fast, and the less-common cases should be correct. 4) Pointer arithmetic and ability to mask pointers (storing in integers, etc.) Had to throw this in, but I could actually live without this quite easily. My feelings are that many of the high-level features, particularly in dynamic languages just don't map to hardware. 
Microprocessor implementations have billions of dollars of research behind the optimizations on the chip, yet the choice of language feature(s) may marginalize many of these features (features like caching, aliasing top of stack to register, instruction parallelism, return address buffers, loop buffers and branch prediction). Macro-applications of micro-features don't necessarily pan out like some developers like to think, and implementing many languages in a VM ends up mapping native ops into function calls (ie. the more dynamic a language is the more we must lookup/cache at runtime, nothing can be assumed, so our instruction mix is made up of a higher percentage of non-local branching than traditional, statically compiled code) and the only thing we can really JIT well is expression evaluation of non-dynamic types and operations on constant or immediate types. It is my gut feeling that bytecode virtual machines and JIT cores are perhaps not always justified for certain languages because of this. I welcome your answers.
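As a small illustration of the call-site caching point in (1), written in Java only because that is the simplest place to see it: a JIT will usually inline through a virtual or interface call site while it has seen only one or two receiver types there; once the site goes megamorphic the call stays a genuine dispatch. Multi-methods make the same problem harder because the target depends on the runtime types of several arguments, not just the receiver. Sketch:

        interface Shape { double area(); }

        final class Circle implements Shape {
            private final double r;
            Circle(double r) { this.r = r; }
            public double area() { return Math.PI * r * r; }
        }

        final class Square implements Shape {
            private final double s;
            Square(double s) { this.s = s; }
            public double area() { return s * s; }
        }

        class Demo {
            // If only Circle ever flows through here, the call site is monomorphic and easy
            // to inline; mix in Square, Triangle, ... and it degrades to a real dispatch.
            static double total(Shape[] shapes) {
                double sum = 0;
                for (Shape s : shapes) sum += s.area();
                return sum;
            }
        }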

    Read the article

  • WPF items not visible when grouping is applied

    - by Tri Q
    Hi, I'm having this strange issue with my ItemsControl grouping. I have the following setup: <ItemsControl Margin="3" ItemsSource="{Binding Communications.View}" > <ItemsControl.GroupStyle> <GroupStyle> <GroupStyle.ContainerStyle> <Style TargetType="{x:Type GroupItem}"> <Setter Property="Template"> <Setter.Value> <ControlTemplate TargetType="{x:Type GroupItem}"> <Expander> <Expander.Header> <Grid> <Grid.ColumnDefinitions > <ColumnDefinition Width="*" /> <ColumnDefinition Width="Auto" /> </Grid.ColumnDefinitions> <TextBlock Text="{Binding ItemCount, StringFormat='{}[{0}] '}" FontWeight="Bold" /> <TextBlock Grid.Column="1" Text="{Binding Name, Converter={StaticResource GroupingFormatter}, StringFormat='{}Subject: {0}'}" FontWeight="Bold" /> </Grid> </Expander.Header> <ItemsPresenter /> </Expander> </ControlTemplate> </Setter.Value> </Setter> </Style> </GroupStyle.ContainerStyle> </GroupStyle> </ItemsControl.GroupStyle> <ItemsControl.ItemTemplate> <DataTemplate> <Grid> <Grid.ColumnDefinitions> <ColumnDefinition Width="Auto" /> <ColumnDefinition Width="*" /> <ColumnDefinition Width="Auto" /> </Grid.ColumnDefinitions> <Grid.RowDefinitions> <RowDefinition /> <RowDefinition /> </Grid.RowDefinitions> <TextBlock FontWeight="Bold" Text="{Binding Inspector, Converter={StaticResource NameFormatter}, StringFormat='{}From {0}:'}" Margin="3" /> <TextBlock Text="{Binding SentDate, StringFormat='{}{0:dd/MM/yy}'}" Grid.Row="1" Margin="3"/> <TextBlock Text="{Binding Message }" Grid.Column="1" Grid.RowSpan="2" Margin="3"/> <Button Command="vm:CommunicationViewModel.DeleteMessageCommand" CommandParameter="{Binding}" HorizontalAlignment="Right" Grid.Column="2">Delete</Button> </Grid> </DataTemplate> </ItemsControl.ItemTemplate> </ItemsControl> In my ViewModel, I expose a CollectionViewSource named 'Communications'. I proceed to adding a grouping patter like so: Communications.GroupDescriptions.Add(new PropertyGroupDescription("Subject")); Now, the problem i'm experience is the grouping work fine, but I can't see any items inside the groups. What am I doing wrong? Any pointers would be much appreciated.

    Read the article

  • Neo4j increasing latency as SKIP increases on Cypher query + REST API

    - by voldomazta
    My setup: Java(TM) SE Runtime Environment (build 1.7.0_45-b18) Java HotSpot(TM) 64-Bit Server VM (build 24.45-b08, mixed mode) Neo4j 2.0.0-M06 Enterprise First I made sure I warmed up the cache by executing the following: START n=node(*) RETURN COUNT(n); START r=relationship(*) RETURN count(r); The size of the table is 63,677 nodes and 7,169,995 relationships Now I have the following query: START u1=node:node_auto_index('uid:39') MATCH (u1:user)-[w:WANTS]->(c:card)<-[h:HAS]-(u2:user) WHERE u2.uid <> 39 WITH u2.uid AS uid, (CASE WHEN w.qty < h.qty THEN w.qty ELSE h.qty END) AS have RETURN uid, SUM(have) AS total ORDER BY total DESC SKIP 0 LIMIT 25 This UID has about 40k+ results that I want to be able to put a pagination to. The initial skip was around 773ms. I tried page 2 (skip 25) and the latency was around the same even up to page 500 it only rose up to 900ms so I didn't really bother. Now I tried some fast forward paging and jumped by thousands so I did 1000, then 2000, then 3000. I was hoping the ORDER BY arrangement will already have been cached by Neo4j and using SKIP will just move to that index in the result and wont have to iterate through each one again. But for each thousand skip I made the latency increased by alot. It's not just cache warming because for one I already warmed up the cache and two, I tried the same skip a couple of times for each skip and it yielded the same results: SKIP 0: 773ms SKIP 1000: 1369ms SKIP 2000: 2491ms SKIP 3000: 3899ms SKIP 4000: 5686ms SKIP 5000: 7424ms Now who the hell would want to view 5000 pages of results? 40k even?! :) Good point! I will probably put a cap on the maximum results a user can view but I was just curious about this phenomenon. Will somebody please explain why Neo4j seems to be re-iterating through stuff which appears to be already known to it? 
Here is my profiling for the 0 skip: ==> ColumnFilter(symKeys=["uid", " INTERNAL_AGGREGATE65c4d6a2-1930-4f32-8fd9-5e4399ce6f14"], returnItemNames=["uid", "total"], _rows=25, _db_hits=0) ==> Slice(skip="Literal(0)", _rows=25, _db_hits=0) ==> Top(orderBy=["SortItem(Cached( INTERNAL_AGGREGATE65c4d6a2-1930-4f32-8fd9-5e4399ce6f14 of type Any),false)"], limit="Add(Literal(0),Literal(25))", _rows=25, _db_hits=0) ==> EagerAggregation(keys=["uid"], aggregates=["( INTERNAL_AGGREGATE65c4d6a2-1930-4f32-8fd9-5e4399ce6f14,Sum(have))"], _rows=41659, _db_hits=0) ==> ColumnFilter(symKeys=["have", "u1", "uid", "c", "h", "w", "u2"], returnItemNames=["uid", "have"], _rows=146826, _db_hits=0) ==> Extract(symKeys=["u1", "c", "h", "w", "u2"], exprKeys=["uid", "have"], _rows=146826, _db_hits=587304) ==> Filter(pred="((NOT(Product(u2,uid(0),true) == Literal(39)) AND hasLabel(u1:user(0))) AND hasLabel(u2:user(0)))", _rows=146826, _db_hits=146826) ==> TraversalMatcher(trail="(u1)-[w:WANTS WHERE (hasLabel(NodeIdentifier():card(1)) AND hasLabel(NodeIdentifier():card(1))) AND true]->(c)<-[h:HAS WHERE (NOT(Product(NodeIdentifier(),uid(0),true) == Literal(39)) AND hasLabel(NodeIdentifier():user(0))) AND true]-(u2)", _rows=146826, _db_hits=293696) And for the 5000 skip: ==> ColumnFilter(symKeys=["uid", " INTERNAL_AGGREGATE99329ea5-03cd-4d53-a6bc-3ad554b47872"], returnItemNames=["uid", "total"], _rows=25, _db_hits=0) ==> Slice(skip="Literal(5000)", _rows=25, _db_hits=0) ==> Top(orderBy=["SortItem(Cached( INTERNAL_AGGREGATE99329ea5-03cd-4d53-a6bc-3ad554b47872 of type Any),false)"], limit="Add(Literal(5000),Literal(25))", _rows=5025, _db_hits=0) ==> EagerAggregation(keys=["uid"], aggregates=["( INTERNAL_AGGREGATE99329ea5-03cd-4d53-a6bc-3ad554b47872,Sum(have))"], _rows=41659, _db_hits=0) ==> ColumnFilter(symKeys=["have", "u1", "uid", "c", "h", "w", "u2"], returnItemNames=["uid", "have"], _rows=146826, _db_hits=0) ==> Extract(symKeys=["u1", "c", "h", "w", "u2"], exprKeys=["uid", "have"], _rows=146826, _db_hits=587304) ==> Filter(pred="((NOT(Product(u2,uid(0),true) == Literal(39)) AND hasLabel(u1:user(0))) AND hasLabel(u2:user(0)))", _rows=146826, _db_hits=146826) ==> TraversalMatcher(trail="(u1)-[w:WANTS WHERE (hasLabel(NodeIdentifier():card(1)) AND hasLabel(NodeIdentifier():card(1))) AND true]->(c)<-[h:HAS WHERE (NOT(Product(NodeIdentifier(),uid(0),true) == Literal(39)) AND hasLabel(NodeIdentifier():user(0))) AND true]-(u2)", _rows=146826, _db_hits=293696) The only difference is the LIMIT clause on the Top function. I hope we can make this work as intended, I really don't want to delve into doing an embedded Neo4j + my own Jetty REST API for the web app.

    Read the article

  • Database call crashes Android Application

    - by Darren Murtagh
    i am using a Android database and its set up but when i call in within an onClickListener and the app crashes the code i am using is mButton.setOnClickListener( new View.OnClickListener() { public void onClick(View view) { s = WorkoutChoice.this.weight.getText().toString(); s2 = WorkoutChoice.this.height.getText().toString(); int w = Integer.parseInt(s); double h = Double.parseDouble(s2); double BMI = (w/h)/h; t.setText(""+BMI); long id = db.insertTitle("001", ""+days, ""+BMI); Cursor c = db.getAllTitles(); if (c.moveToFirst()) { do { DisplayTitle(c); } while (c.moveToNext()); } } }); and the log cat for when i run it is: 04-01 18:21:54.704: E/global(6333): Deprecated Thread methods are not supported. 04-01 18:21:54.704: E/global(6333): java.lang.UnsupportedOperationException 04-01 18:21:54.704: E/global(6333): at java.lang.VMThread.stop(VMThread.java:85) 04-01 18:21:54.704: E/global(6333): at java.lang.Thread.stop(Thread.java:1391) 04-01 18:21:54.704: E/global(6333): at java.lang.Thread.stop(Thread.java:1356) 04-01 18:21:54.704: E/global(6333): at com.b00348312.workout.Splashscreen$1.run(Splashscreen.java:42) 04-01 18:22:09.444: D/dalvikvm(6333): GC_FOR_MALLOC freed 4221 objects / 252640 bytes in 31ms 04-01 18:22:09.474: I/dalvikvm(6333): Total arena pages for JIT: 11 04-01 18:22:09.574: D/dalvikvm(6333): GC_FOR_MALLOC freed 1304 objects / 302920 bytes in 29ms 04-01 18:22:09.744: D/dalvikvm(6333): GC_FOR_MALLOC freed 2480 objects / 290848 bytes in 33ms 04-01 18:22:10.034: D/dalvikvm(6333): GC_FOR_MALLOC freed 6334 objects / 374152 bytes in 36ms 04-01 18:22:14.344: D/AndroidRuntime(6333): Shutting down VM 04-01 18:22:14.344: W/dalvikvm(6333): threadid=1: thread exiting with uncaught exception (group=0x400259f8) 04-01 18:22:14.364: E/AndroidRuntime(6333): FATAL EXCEPTION: main 04-01 18:22:14.364: E/AndroidRuntime(6333): java.lang.IllegalStateException: database not open 04-01 18:22:14.364: E/AndroidRuntime(6333): at android.database.sqlite.SQLiteDatabase.insertWithOnConflict(SQLiteDatabase.java:1567) 04-01 18:22:14.364: E/AndroidRuntime(6333): at android.database.sqlite.SQLiteDatabase.insert(SQLiteDatabase.java:1484) 04-01 18:22:14.364: E/AndroidRuntime(6333): at com.b00348312.workout.DataBaseHelper.insertTitle(DataBaseHelper.java:84) 04-01 18:22:14.364: E/AndroidRuntime(6333): at com.b00348312.workout.WorkoutChoice$3.onClick(WorkoutChoice.java:84) 04-01 18:22:14.364: E/AndroidRuntime(6333): at android.view.View.performClick(View.java:2408) 04-01 18:22:14.364: E/AndroidRuntime(6333): at android.view.View$PerformClick.run(View.java:8817) 04-01 18:22:14.364: E/AndroidRuntime(6333): at android.os.Handler.handleCallback(Handler.java:587) 04-01 18:22:14.364: E/AndroidRuntime(6333): at android.os.Handler.dispatchMessage(Handler.java:92) 04-01 18:22:14.364: E/AndroidRuntime(6333): at android.os.Looper.loop(Looper.java:144) 04-01 18:22:14.364: E/AndroidRuntime(6333): at android.app.ActivityThread.main(ActivityThread.java:4937) 04-01 18:22:14.364: E/AndroidRuntime(6333): at java.lang.reflect.Method.invokeNative(Native Method) 04-01 18:22:14.364: E/AndroidRuntime(6333): at java.lang.reflect.Method.invoke(Method.java:521) 04-01 18:22:14.364: E/AndroidRuntime(6333): at com.android.internal.os.ZygoteInit$MethodAndArgsCaller.run(ZygoteInit.java:858) 04-01 18:22:14.364: E/AndroidRuntime(6333): at com.android.internal.os.ZygoteInit.main(ZygoteInit.java:616) 04-01 18:22:14.364: E/AndroidRuntime(6333): at dalvik.system.NativeStart.main(Native Method) i have notice errors when the application opens but i 
don't know what they are from. When I take out the statements that touch the database, there are no errors and everything runs smoothly.
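The fatal exception is explicit: java.lang.IllegalStateException: database not open, thrown from insertWithOnConflict. So by the time the click handler runs, db has either never been opened or has already been closed again. Without seeing DataBaseHelper this is only a guess, but the usual shape of the fix is to open the database before the insert and close it when finished — something like the sketch below, where open() is assumed to be a helper method wrapping getWritableDatabase():

        mButton.setOnClickListener(new View.OnClickListener() {
            public void onClick(View view) {
                // ... parse weight/height and compute BMI as in the question ...
                db.open(); // assumed helper wrapping getWritableDatabase()
                try {
                    long id = db.insertTitle("001", "" + days, "" + BMI);
                    Cursor c = db.getAllTitles();
                    if (c.moveToFirst()) {
                        do {
                            DisplayTitle(c);
                        } while (c.moveToNext());
                    }
                    c.close();
                } finally {
                    db.close(); // or keep it open for the activity's lifetime and close in onDestroy()
                }
            }
        });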

    Read the article

  • Scala: Recursively building all paths in a graph?

    - by DarqMoth
    Trying to build all existing paths for an udirected graph defined as a map of edges using the following algorithm: Start: with a given vertice A Find an edge (X.A, X.B) or (X.B, X.A), add this edge to path Find all edges Ys fpr which either (Y.C, Y.B) or (Y.B, Y.C) is true For each Ys: A=B, goto Start Providing edges are defined as the following map, where keys are tuples consisting of two vertices: val edges = Map( ("n1", "n2") -> "n1n2", ("n1", "n3") -> "n1n3", ("n3", "n4") -> "n3n4", ("n5", "n1") -> "n5n1", ("n5", "n4") -> "n5n4") As an output I need to get a list of ALL pathes where each path is a list of adjecent edges like this: val allPaths = List( List(("n1", "n2") -> "n1n2"), List(("n1", "n3") -> "n1n3", ("n3", "n4") -> "n3n4"), List(("n5", "n1") -> "n5n1"), List(("n5", "n4") -> "n5n4"), List(("n2", "n1") -> "n1n2", ("n1", "n3") -> "n1n3", ("n3", "n4") -> "n3n4", ("n5", "n4") -> "n5n4")) //... //... more pathes to go } Note: Edge XY = (x,y) - "xy" and YX = (y,x) - "yx" exist as one instance only, either as XY or YX So far I have managed to implement code that duplicates edges in the path, which is wrong and I can not find the error: object Graph2 { type Vertice = String type Edge = ((String, String), String) type Path = List[((String, String), String)] val edges = Map( //(("v1", "v2") , "v1v2"), (("v1", "v3") , "v1v3"), (("v3", "v4") , "v3v4") //(("v5", "v1") , "v5v1"), //(("v5", "v4") , "v5v4") ) def main(args: Array[String]): Unit = { val processedVerticies: Map[Vertice, Vertice] = Map() val processedEdges: Map[(Vertice, Vertice), (Vertice, Vertice)] = Map() val path: Path = List() println(buildPath(path, "v1", processedVerticies, processedEdges)) } /** * Builds path from connected by edges vertices starting from given vertice * Input: map of edges * Output: list of connected edges like: List(("n1", "n2") -> "n1n2"), List(("n1", "n3") -> "n1n3", ("n3", "n4") -> "n3n4"), List(("n5", "n1") -> "n5n1"), List(("n5", "n4") -> "n5n4"), List(("n2", "n1") -> "n1n2", ("n1", "n3") -> "n1n3", ("n3", "n4") -> "n3n4", ("n5", "n4") -> "n5n4")) */ def buildPath(path: Path, vertice: Vertice, processedVerticies: Map[Vertice, Vertice], processedEdges: Map[(Vertice, Vertice), (Vertice, Vertice)]): List[Path] = { println("V: " + vertice + " VM: " + processedVerticies + " EM: " + processedEdges) if (!processedVerticies.contains(vertice)) { val edges = children(vertice) println("Edges: " + edges) val x = edges.map(edge => { if (!processedEdges.contains(edge._1)) { addToPath(vertice, processedVerticies.++(Map(vertice -> vertice)), processedEdges, path, edge) } else { println("ALready have edge: "+edge+" Return path:"+path) path } }) val y = x.toList y } else { List(path) } } def addToPath( vertice: Vertice, processedVerticies: Map[Vertice, Vertice], processedEdges: Map[(Vertice, Vertice), (Vertice, Vertice)], path: Path, edge: Edge): Path = { val newPath: Path = path ::: List(edge) val key = edge._1 val nextVertice = neighbor(vertice, key) val x = buildPath (newPath, nextVertice, processedVerticies, processedEdges ++ (Map((vertice, nextVertice) -> (vertice, nextVertice))) ).flatten // need define buidPath type x } def children(vertice: Vertice) = { edges.filter(p => (p._1)._1 == vertice || (p._1)._2 == vertice) } def containsPair(x: (Vertice, Vertice), m: Map[(Vertice, Vertice), (Vertice, Vertice)]): Boolean = { m.contains((x._1, x._2)) || m.contains((x._2, x._1)) } def neighbor(vertice: String, key: (String, String)): String = key match { case (`vertice`, x) => x case (x, `vertice`) => x } } 
Running this results in: List(List(((v1,v3),v1v3), ((v1,v3),v1v3), ((v3,v4),v3v4))) Why is that?

    Read the article

  • Asus me302c create script crash

    - by wxfred
    State beforehand: So far, only Asus me302c would crash when it creates a script. This device can create renderscript context successfully 06-03 10:12:50.509: V/RenderScript_jni(3144): RS compat mode 06-03 10:12:50.509: V/RenderScript(3144): 0x610bbfc0 Launching thread(s), CPUs 4 06-03 10:12:50.549: D/libEGL(3144): loaded /vendor/lib/egl/libEGL_POWERVR_SGX544_115.so 06-03 10:12:50.559: D/libEGL(3144): loaded /vendor/lib/egl/libGLESv1_CM_POWERVR_SGX544_115.so 06-03 10:12:50.559: D/libEGL(3144): loaded /vendor/lib/egl/libGLESv2_POWERVR_SGX544_115.so 06-03 10:12:50.619: D/OpenGLRenderer(3144): Enabling debug mode 0 06-03 10:12:52.869: D/dalvikvm(3144): Rejecting registerization due to ushr-int/lit8 v4, v7, (#19) When create a script, it crashed. 06-03 09:55:09.859: D/basefilter(26682): ===createScript=== 06-03 09:55:09.869: E/RenderScript(26682): Unable to open shared library (/data/data/xxx.xxxxxxxxxxx.xxxxxxxxx.xxxxx.xxxx//lib/librs.basefilter.so): Cannot load library: soinfo_relocate(linker.cpp:975): cannot locate symbol "_Z3dotDv3_fS_" referenced by "librs.basefilter.so"... 06-03 09:55:09.869: E/RenderScript(26682): Unable to open system shared library (/system/lib/librs.basefilter.so): (null) 06-03 09:55:09.869: D/AndroidRuntime(26682): Shutting down VM 06-03 09:55:09.869: W/dalvikvm(26682): threadid=1: thread exiting with uncaught exception (group=0x418b9e10) 06-03 09:55:09.869: E/AndroidRuntime(26682): FATAL EXCEPTION: main 06-03 09:55:09.869: E/AndroidRuntime(26682): android.support.v8.renderscript.RSRuntimeException: Loading of ScriptC script failed. 06-03 09:55:09.869: E/AndroidRuntime(26682): at android.support.v8.renderscript.ScriptC.<init>(ScriptC.java:69) 06-03 09:55:09.869: E/AndroidRuntime(26682): at xxx.xxxxxxxxxxx.xxxxxxxxx.xxxxxxxxxx.xxxxxx.ScriptC_BaseFilter.<init>(ScriptC_BaseFilter.java:41) 06-03 09:55:09.869: E/AndroidRuntime(26682): at xxx.xxxxxxxxxxx.xxxxxxxxx.xxxxxxxxxx.xxxxxx.ScriptC_BaseFilter.<init>(ScriptC_BaseFilter.java:35) 06-03 09:55:09.869: E/AndroidRuntime(26682): at xxx.xxxxxxxxxxx.xxxxxxxxx.xxxxxxxxxx.xxxxxx.xxxxxx.TeethWhiteningRSFilter.onCreateScript(TeethWhiteningRSFilter.java:19) 06-03 09:55:09.869: E/AndroidRuntime(26682): at xxx.xxxxxxxxxxx.xxxxxxxxx.xxxxxxxxxx.xxxxxx.xxxxxx.TeethWhiteningRSFilter.onCreateScript(TeethWhiteningRSFilter.java:1) 06-03 09:55:09.869: E/AndroidRuntime(26682): at xxx.xxxxxxxxxxx.xxxxxxxxx.xxxxxxxxxx.BaseRSFilter.createScript(BaseRSFilter.java:39) 06-03 09:55:09.869: E/AndroidRuntime(26682): at xxx.xxxxxxxxxxx.xxxxxxxxx.xxxxxxxxxx.RSFilterEngine.addFilter(RSFilterEngine.java:76) 06-03 09:55:09.869: E/AndroidRuntime(26682): at xxx.xxxxxxxxxxx.xxxxxxxxx.xxxxx.xxxx.BeautyActivity.changeBeautyEffect(BeautyActivity.java:277) 06-03 09:55:09.869: E/AndroidRuntime(26682): at xxx.xxxxxxxxxxx.xxxxxxxxx.xxxxx.xxxx.BeautyActivity$2.onClick(BeautyActivity.java:100) 06-03 09:55:09.869: E/AndroidRuntime(26682): at com.android.internal.app.AlertController$ButtonHandler.handleMessage(AlertController.java:166) 06-03 09:55:09.869: E/AndroidRuntime(26682): at android.os.Handler.dispatchMessage(Handler.java:99) 06-03 09:55:09.869: E/AndroidRuntime(26682): at android.os.Looper.loop(Looper.java:152) 06-03 09:55:09.869: E/AndroidRuntime(26682): at android.app.ActivityThread.main(ActivityThread.java:5132) 06-03 09:55:09.869: E/AndroidRuntime(26682): at java.lang.reflect.Method.invokeNative(Native Method) 06-03 09:55:09.869: E/AndroidRuntime(26682): at java.lang.reflect.Method.invoke(Method.java:511) 06-03 09:55:09.869: E/AndroidRuntime(26682): 
at com.android.internal.os.ZygoteInit$MethodAndArgsCaller.run(ZygoteInit.java:793) 06-03 09:55:09.869: E/AndroidRuntime(26682): at com.android.internal.os.ZygoteInit.main(ZygoteInit.java:560) 06-03 09:55:09.869: E/AndroidRuntime(26682): at dalvik.system.NativeStart.main(Native Method) Now, I will post some related information. project.properties ... renderscript.target=9 renderscript.support.mode=true sdk.buildtools=19.0.3 device info Brand: Asus Model: ME302C OS: 4.2.2 CPU: Intel(R) Atom(TM) CPU Z2560 @1.60GHz GPU Renderer PowerVR SGX 544MP Finally, by the way, the same code runs fine on the Galaxy S2, S4, and Note 2.

    Read the article

  • Photo inside the image view should not go outside on dragging

    - by TGMCians
    I want photo inside the imageview should not go outside on dragging. In my code when i start to drag bitmap inside the imageview its goes out from imageview but i want when it cross the imageview its should come at starting point of imageview. How to achieve this. please help me for this. @Override protected void onDraw(Canvas canvas) { super.onDraw(canvas); canvas.save(); scaleCount=scaleCount+scale; angleCount = addAngle(angleCount, Math.toDegrees(angle)); Log.v("Positions", "X: "+x+" " + "Y: "+y); Log.d("ScaleCount", String.valueOf(scaleCount)); Log.d("Angle", String.valueOf(angleCount)); if (!isInitialized) { int w = getWidth(); int h = getHeight(); position.set(w / 2, h / 2); isInitialized = true; } Paint paint = new Paint(); Log.v("Height and Width", "Height: "+ getHeight() + "Width: "+ getWidth()); transform.reset(); transform.postTranslate(-width / 2.0f, -height / 2.0f); transform.postRotate((float) Math.toDegrees(angle)); transform.postScale(scale, scale); transform.postTranslate(position.getX(), position.getY()); canvas.drawBitmap(bitmap, transform, paint); canvas.restore(); BitmapWidth=BitmapWidth+bitmap.getScaledWidth(canvas); BitmapHeight=BitmapHeight+bitmap.getScaledHeight(canvas); try { /*paint.setColor(0xFF007F00); canvas.drawCircle(vca.getX(), vca.getY(), 30, paint); paint.setColor(0xFF7F0000); canvas.drawCircle(vcb.getX(), vcb.getY(), 30, paint);*/ /*paint.setColor(0xFFFF0000); canvas.drawLine(vpa.getX(), vpa.getY(), vpb.getX(), vpb.getY(), paint); paint.setColor(0xFF00FF00); canvas.drawLine(vca.getX(), vca.getY(), vcb.getX(), vcb.getY(), paint);*/ } catch(NullPointerException e) { // Just being lazy here... } } @Override public boolean onTouch(View v, MotionEvent event) { vca = null; vcb = null; vpa = null; vpb = null; x=event.getX(); y=event.getY(); try { touchManager.update(event); if (touchManager.getPressCount() == 1) { vca = touchManager.getPoint(0); vpa = touchManager.getPreviousPoint(0); position.add(touchManager.moveDelta(0)); } else { if (touchManager.getPressCount() == 2) { vca = touchManager.getPoint(0); vpa = touchManager.getPreviousPoint(0); vcb = touchManager.getPoint(1); vpb = touchManager.getPreviousPoint(1); VMVector2D current = touchManager.getVector(0, 1); VMVector2D previous = touchManager.getPreviousVector(0, 1); float currentDistance = current.getLength(); float previousDistance = previous.getLength(); if (currentDistance-previousDistance != 0) { scale *= currentDistance / previousDistance; } angle -= VMVector2D.getSignedAngleBetween(current, previous); /*angleCount=angleCount+angle;*/ } } invalidate(); } catch(Exception exception) { // Log.d("VM", exception.getMessage()); } return true; }
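Reading the question's intent: the bitmap may be dragged, scaled and rotated, but should not be able to leave the view; if a drag pushes it out, it should jump back inside. One low-effort way to get that with the posted code is to clamp position against the view bounds at the end of onTouch, just before invalidate(). A sketch reusing the names from the question (position, scale, bitmap); it assumes position.set(x, y) accepts a coordinate pair the same way the initialisation code uses it, and it ignores rotation, so a strongly rotated image can still poke slightly over the edge:

        // Minimal sketch: keep the drawn bitmap's centre inside the view bounds.
        private void clampPosition() {
            float halfW = (bitmap.getWidth()  * scale) / 2f;
            float halfH = (bitmap.getHeight() * scale) / 2f;

            float minX = halfW, maxX = getWidth()  - halfW;
            float minY = halfH, maxY = getHeight() - halfH;

            // If the scaled image is larger than the view, just keep it centred.
            if (minX > maxX) { minX = maxX = getWidth()  / 2f; }
            if (minY > maxY) { minY = maxY = getHeight() / 2f; }

            position.set(Math.max(minX, Math.min(maxX, position.getX())),
                         Math.max(minY, Math.min(maxY, position.getY())));
        }

Calling clampPosition() at the end of onTouch (before invalidate()) gives the "come back inside the view" behaviour the question describes; extending the half-width/half-height to the rotated bounding box would tighten it further.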

    Read the article

< Previous Page | 163 164 165 166 167 168 169 170 171  | Next Page >