Search Results

Search found 10719 results on 429 pages for 'temp tables'.


  • Tomcat running, catalina throwing exception

    - by Mark Steudel
    So I have to preface that I'm not familiar with tomcat/catalina, but trying to troubleshoot this anyway. Anyway I see in /var/log/tomcat5/catalina.out I'm seeing these errors: Using CATALINA_BASE: /usr/share/tomcat5 Using CATALINA_HOME: /usr/share/tomcat5 Using CATALINA_TMPDIR: /usr/share/tomcat5/temp Using JRE_HOME: java.lang.ClassNotFoundException: org.apache.catalina.startup.Catalina at java.net.URLClassLoader$1.run(URLClassLoader.java:202) at java.security.AccessController.doPrivileged(Native Method) at java.net.URLClassLoader.findClass(URLClassLoader.java:190) at java.lang.ClassLoader.loadClass(ClassLoader.java:307) at java.lang.ClassLoader.loadClass(ClassLoader.java:248) at org.apache.catalina.startup.Bootstrap.init(Bootstrap.java:223) at org.apache.catalina.startup.Bootstrap.main(Bootstrap.java:410) I'm not really sure what this means. This installation was working a week ago ... did something get corrupted? How would I figure if it did ... what other information would be valuable here? Tomcat seems to be running and starting up fine ... UPDATE: this might be related: Jun 19, 2011 11:00:25 PM org.apache.coyote.http11.Http11BaseProtocol pause INFO: Pausing Coyote HTTP/1.1 on http-9080 Jun 19, 2011 11:00:26 PM org.apache.catalina.core.StandardService stop INFO: Stopping service Catalina log4j:ERROR LogMananger.repositorySelector was null likely due to error in class reloading, using NOPLoggerRepository. Some more stuff in the logs: 2011-06-12 23:04:45,223 INFO [main] [com.atlassian.confluence.lifecycle] contextInitialized Starting Confluence 3.1.1 (build #1724) 2011-06-12 23:04:45,663 INFO [main] [beans.factory.xml.XmlBeanDefinitionReader] loadBeanDefinitions Loading XML bean definitions from c lass path resource [bootstrapContext.xml] 2011-06-12 23:04:46,134 INFO [main] [beans.factory.xml.XmlBeanDefinitionReader] loadBeanDefinitions Loading XML bean definitions from c lass path resource [setupContext.xml] 2011-06-12 23:04:46,236 INFO [main] [beans.factory.xml.XmlBeanDefinitionReader] loadBeanDefinitions Loading XML bean definitions from c lass path resource [bootstrapCacheContext.xml] 2011-06-12 23:04:47,571 INFO [main] [atlassian.plugin.manager.DefaultPluginManager] init Initialising the plugin system 2011-06-12 23:04:48,338 INFO [main] [atlassian.plugin.manager.DefaultPluginManager] init Plugin system started in 0:00:00.748 Jun 12, 2011 11:05:05 PM org.apache.catalina.startup.Catalina stopServer SEVERE: Catalina.stop: java.net.ConnectException: Connection refused at java.net.PlainSocketImpl.socketConnect(Native Method) at java.net.PlainSocketImpl.doConnect(PlainSocketImpl.java:333) at java.net.PlainSocketImpl.connectToAddress(PlainSocketImpl.java:195) at java.net.PlainSocketImpl.connect(PlainSocketImpl.java:182) at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:366) at java.net.Socket.connect(Socket.java:525) at java.net.Socket.connect(Socket.java:475) at java.net.Socket.<init>(Socket.java:372) at java.net.Socket.<init>(Socket.java:186) at org.apache.catalina.startup.Catalina.stopServer(Catalina.java:395) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25) at java.lang.reflect.Method.invoke(Method.java:597) at org.apache.catalina.startup.Bootstrap.stopServer(Bootstrap.java:344) at org.apache.catalina.startup.Bootstrap.main(Bootstrap.java:435) Jun 12, 2011 11:05:44 PM 
org.apache.catalina.core.AprLifecycleListener lifecycleEvent INFO: The Apache Tomcat Native library which allows optimal performance in production environments was not found on the java.library.pa th: /usr/java/jdk1.6.0_18/jre/lib/i386/client:/usr/java/jdk1.6.0_18/jre/lib/i386:/usr/java/jdk1.6.0_18/jre/../lib/i386:/usr/java/packag es/lib/i386:/lib:/usr/lib CLEAN LOG OUTPUT FROM STARTING TOMCAT: Using CATALINA_BASE: /usr/share/tomcat5 Using CATALINA_HOME: /usr/share/tomcat5 Using CATALINA_TMPDIR: /usr/share/tomcat5/temp Using JRE_HOME: java.lang.ClassNotFoundException: org.apache.catalina.startup.Catalina at java.net.URLClassLoader$1.run(URLClassLoader.java:202) at java.security.AccessController.doPrivileged(Native Method) at java.net.URLClassLoader.findClass(URLClassLoader.java:190) at java.lang.ClassLoader.loadClass(ClassLoader.java:307) at java.lang.ClassLoader.loadClass(ClassLoader.java:248) at org.apache.catalina.startup.Bootstrap.init(Bootstrap.java:223) at org.apache.catalina.startup.Bootstrap.main(Bootstrap.java:410) So I did a /etc/rc.d/init.d/tomcat status and I get this: [wqadm1n@ip-72-167-51-178 proc]$ sudo /etc/rc.d/init.d/tomcat5 status /etc/rc.d/init.d/tomcat5 is stopped [wqadm1n@ip-72-167-51-178 proc]$ sudo /etc/rc.d/init.d/tomcat5 start Starting tomcat5: [ OK ] [wqadm1n@ip-72-167-51-178 proc]$ sudo /etc/rc.d/init.d/tomcat5 status lock file found but no process running for pid 30774
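
    An empty JRE_HOME in the startup banner plus a ClassNotFoundException for org.apache.catalina.startup.Catalina usually means the bootstrap loader cannot see the server jars or the Java environment settings for the init script are broken, and the "lock file found but no process running" message points to a stale lock from a crashed start. A hedged set of checks, assuming the stock tomcat5 package layout (all paths below are assumptions; adjust to your install):

        # is the Catalina class actually on disk where bootstrap expects it?
        ls /usr/share/tomcat5/server/lib/catalina*.jar

        # is JAVA_HOME/JRE_HOME really set for the init script?
        grep -i 'java_home\|jre_home' /etc/tomcat5/tomcat5.conf /etc/sysconfig/tomcat5 2>/dev/null

        # clear the stale lock/pid left behind by the crashed start, then retry
        sudo rm -f /var/lock/subsys/tomcat5 /var/run/tomcat5.pid
        sudo /etc/rc.d/init.d/tomcat5 start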

    Read the article

  • How do I send traffic from my Mac's wifi to my VPN client?

    - by Heath Borders
    I need to connect my Android to a Juniper VPN. Unfortunately, Juniper doesn't support Android on our VPN version. We've already put in a feature request for it, but we have no idea how long it will take to be complete. Right now, I connect to the Juniper VPN with a Juniper Mac OSX VPN client that uses Java to install kernel extensions to start and stop the VPN. Thus, I can't use the Network panel in System Preferences to create a VPN device, which means it won't show up in the 'Sharing' panel's Internet Sharing Share your connection from: menu, as suggested here. I used newproc.d to see what /usr/libexec/InternetSharing did when it ran, and it runs the following processes: 2013 Nov 1 00:26:54 5565 <1> 64b /usr/libexec/launchdadd 2013 Nov 1 00:26:55 5566 <1> 64b /usr/libexec/InternetSharing 2013 Nov 1 00:26:56 5568 <5566> 64b natpmpd -d -y bridge100 en0 2013 Nov 1 00:26:56 5569 <1> 64b /usr/libexec/pfd -d 2013 Nov 1 00:26:56 5567 <5566> 64b bootpd -d -P My Juniper VPN client creates the following devices (output of ifconfig): jnc0: flags=841<UP,RUNNING,SIMPLEX> mtu 1400 inet 10.61.9.61 netmask 0xffffffff open (pid 920) jnc1: flags=841<UP,RUNNING,SIMPLEX> mtu 1450 closed So, it seems like I should just be able to do this and have everything work: sudo killall -9 natpmpd sudo /usr/libexec/natpmpd -y bridge100 jnc0 My android connected fine and could hit public internet sites, but it couldn't hit private VPN sites. I assume this is because I need to change the routes that /usr/libexec/InternetSharing sets up. This is the output from sudo pfctl -s all before starting Internet Sharing: No ALTQ support in kernel ALTQ related functions disabled TRANSLATION RULES: nat-anchor "com.apple/*" all rdr-anchor "com.apple/*" all FILTER RULES: scrub-anchor "com.apple/*" all fragment reassemble anchor "com.apple/*" all DUMMYNET RULES: dummynet-anchor "com.apple/*" all INFO: Status: Disabled for 0 days 00:11:02 Debug: Urgent State Table Total Rate current entries 0 searches 22875 34.6/s inserts 1558 2.4/s removals 1558 2.4/s Counters match 2005 3.0/s bad-offset 0 0.0/s fragment 0 0.0/s short 0 0.0/s normalize 0 0.0/s memory 0 0.0/s bad-timestamp 0 0.0/s congestion 0 0.0/s ip-option 12 0.0/s proto-cksum 0 0.0/s state-mismatch 1 0.0/s state-insert 0 0.0/s state-limit 0 0.0/s src-limit 0 0.0/s synproxy 0 0.0/s dummynet 0 0.0/s TIMEOUTS: tcp.first 120s tcp.opening 30s tcp.established 86400s tcp.closing 900s tcp.finwait 45s tcp.closed 90s tcp.tsdiff 60s udp.first 60s udp.single 30s udp.multiple 120s icmp.first 20s icmp.error 10s grev1.first 120s grev1.initiating 30s grev1.estblished 1800s esp.first 120s esp.estblished 900s other.first 60s other.single 30s other.multiple 120s frag 30s interval 10s adaptive.start 6000 states adaptive.end 12000 states src.track 0s LIMITS: states hard limit 10000 app-states hard limit 10000 src-nodes hard limit 10000 frags hard limit 5000 tables hard limit 1000 table-entries hard limit 200000 OS FINGERPRINTS: 696 fingerprints loaded This is the output from sudo pfctl -s all after starting Internet Sharing: No ALTQ support in kernel ALTQ related functions disabled TRANSLATION RULES: nat-anchor "com.apple/*" all nat-anchor "com.apple.internet-sharing" all rdr-anchor "com.apple/*" all rdr-anchor "com.apple.internet-sharing" all FILTER RULES: scrub-anchor "com.apple/*" all fragment reassemble scrub-anchor "com.apple.internet-sharing" all fragment reassemble anchor "com.apple/*" all anchor "com.apple.internet-sharing" all DUMMYNET RULES: dummynet-anchor "com.apple/*" all STATES: ALL 
tcp 10.0.1.32:50593 -> 74.125.225.113:443 SYN_SENT:CLOSED ALL udp 10.0.1.32:61534 -> 10.0.1.1:53 SINGLE:NO_TRAFFIC ALL udp 10.0.1.32:55433 -> 10.0.1.1:53 SINGLE:NO_TRAFFIC ALL udp 10.0.1.32:64041 -> 10.0.1.1:53 SINGLE:NO_TRAFFIC ALL tcp 10.0.1.32:50619 -> 74.125.225.131:443 SYN_SENT:CLOSED INFO: Status: Enabled for 0 days 00:00:01 Debug: Urgent State Table Total Rate current entries 5 searches 22886 22886.0/s inserts 1563 1563.0/s removals 1558 1558.0/s Counters match 2010 2010.0/s bad-offset 0 0.0/s fragment 0 0.0/s short 0 0.0/s normalize 0 0.0/s memory 0 0.0/s bad-timestamp 0 0.0/s congestion 0 0.0/s ip-option 12 12.0/s proto-cksum 0 0.0/s state-mismatch 1 1.0/s state-insert 0 0.0/s state-limit 0 0.0/s src-limit 0 0.0/s synproxy 0 0.0/s dummynet 0 0.0/s TIMEOUTS: tcp.first 120s tcp.opening 30s tcp.established 86400s tcp.closing 900s tcp.finwait 45s tcp.closed 90s tcp.tsdiff 60s udp.first 60s udp.single 30s udp.multiple 120s icmp.first 20s icmp.error 10s grev1.first 120s grev1.initiating 30s grev1.estblished 1800s esp.first 120s esp.estblished 900s other.first 60s other.single 30s other.multiple 120s frag 30s interval 10s adaptive.start 6000 states adaptive.end 12000 states src.track 0s LIMITS: states hard limit 10000 app-states hard limit 10000 src-nodes hard limit 10000 frags hard limit 5000 tables hard limit 1000 table-entries hard limit 200000 TABLES: OS FINGERPRINTS: 696 fingerprints loaded It looks like I need to change the pf settings that /usr/libexec/InternetSharing set up, but I have no idea how to do that.
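
    natpmpd only hands out port mappings; the actual translation is done by pf rules that InternetSharing loads into its com.apple.internet-sharing anchor, and those rules translate onto en0, so packets for private VPN subnets never leave via jnc0. Below is a minimal sketch of a NAT rule for the tunnel device, using the interface names above; the file path and anchor name are made up, and the anchor still has to be referenced from the active ruleset for pf to evaluate it, so treat this as a starting point rather than a drop-in fix:

        # /etc/pf.anchors/vpn-share (hypothetical file)
        nat on jnc0 from bridge100:network to any -> (jnc0)

        # load it into a custom anchor and make sure pf is enabled
        sudo pfctl -a vpn-share -f /etc/pf.anchors/vpn-share
        sudo pfctl -e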

    Read the article

  • Server Memory with Magento

    - by Mohamed Elgharabawy
    I have a cloud server with the following specifications: 2vCPUs 4G RAM 160GB Disk Space Network 400Mb/s System Image: Ubuntu 12.04 LTS I am only running Magento CE 1.7.0.2 on this server. Nothing else. Usually, the server has a loading time of 4-5 seconds. Recently, this has dropped to over 30 seconds and sometimes the server just goes away and I get HTTP error reports to my email stating that HTTP requests took more than 20000ms. Running top command and sorting them returns the following: top - 15:29:07 up 3:40, 1 user, load average: 28.59, 25.95, 22.91 Tasks: 112 total, 30 running, 82 sleeping, 0 stopped, 0 zombie Cpu(s): 90.2%us, 9.3%sy, 0.0%ni, 0.0%id, 0.0%wa, 0.0%hi, 0.3%si, 0.2%st PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND 31901 www-data 20 0 360m 71m 5840 R 7 1.8 1:39.51 apache2 32084 www-data 20 0 362m 72m 5548 R 7 1.8 1:31.56 apache2 32089 www-data 20 0 348m 59m 5660 R 7 1.5 1:41.74 apache2 32295 www-data 20 0 343m 54m 5532 R 7 1.4 2:00.78 apache2 32303 www-data 20 0 354m 65m 5260 R 7 1.6 1:38.76 apache2 32304 www-data 20 0 346m 56m 5544 R 7 1.4 1:41.26 apache2 32305 www-data 20 0 348m 59m 5640 R 7 1.5 1:50.11 apache2 32291 www-data 20 0 358m 69m 5256 R 6 1.7 1:44.26 apache2 32517 www-data 20 0 345m 56m 5532 R 6 1.4 1:45.56 apache2 30473 www-data 20 0 355m 66m 5680 R 6 1.7 2:00.05 apache2 32093 www-data 20 0 352m 63m 5848 R 6 1.6 1:53.23 apache2 32302 www-data 20 0 345m 56m 5512 R 6 1.4 1:55.87 apache2 32433 www-data 20 0 346m 57m 5500 S 6 1.4 1:31.58 apache2 32638 www-data 20 0 354m 65m 5508 R 6 1.6 1:36.59 apache2 32230 www-data 20 0 347m 57m 5524 R 6 1.4 1:33.96 apache2 32231 www-data 20 0 355m 66m 5512 R 6 1.7 1:37.47 apache2 32233 www-data 20 0 354m 64m 6032 R 6 1.6 1:59.74 apache2 32300 www-data 20 0 355m 66m 5672 R 6 1.7 1:43.76 apache2 32510 www-data 20 0 347m 58m 5512 R 6 1.5 1:42.54 apache2 32521 www-data 20 0 348m 59m 5508 R 6 1.5 1:47.99 apache2 32639 www-data 20 0 344m 55m 5512 R 6 1.4 1:34.25 apache2 32083 www-data 20 0 345m 56m 5696 R 5 1.4 1:59.42 apache2 32085 www-data 20 0 347m 58m 5692 R 5 1.5 1:42.29 apache2 32293 www-data 20 0 353m 64m 5676 R 5 1.6 1:52.73 apache2 32301 www-data 20 0 348m 59m 5564 R 5 1.5 1:49.63 apache2 32528 www-data 20 0 351m 62m 5520 R 5 1.6 1:36.11 apache2 31523 mysql 20 0 3460m 576m 8288 S 5 14.4 2:06.91 mysqld 32002 www-data 20 0 345m 55m 5512 R 5 1.4 2:01.88 apache2 32080 www-data 20 0 357m 68m 5512 S 5 1.7 1:31.30 apache2 32163 www-data 20 0 347m 58m 5512 S 5 1.5 1:58.68 apache2 32509 www-data 20 0 345m 56m 5504 R 5 1.4 1:49.54 apache2 32306 www-data 20 0 358m 68m 5504 S 4 1.7 1:53.29 apache2 32165 www-data 20 0 344m 55m 5524 S 4 1.4 1:40.71 apache2 32640 www-data 20 0 345m 56m 5528 R 4 1.4 1:36.49 apache2 31888 www-data 20 0 359m 70m 5664 R 4 1.8 1:57.07 apache2 32511 www-data 20 0 357m 67m 5512 S 3 1.7 1:47.00 apache2 32054 www-data 20 0 357m 68m 5660 S 2 1.7 1:53.10 apache2 1 root 20 0 24452 2276 1232 S 0 0.1 0:01.58 init Moreover, running free -m returns the following: total used free shared buffers cached Mem: 4003 3919 83 0 118 901 -/+ buffers/cache: 2899 1103 Swap: 0 0 0 To investigate this further, I have installed apache buddy, it recommeneded that I need to reduce the maxclient connections. Which I did. I also installed MysqlTuner and it suggests that I need to set my innodb_buffer_pool_size to = 3.0G. However, I cannot do that, since the whole memory is 4G. 
Here is the output from apache buddy: ### GENERAL REPORT ### Settings considered for this report: Your server's physical RAM: 4003MB Apache's MaxClients directive: 40 Apache MPM Model: prefork Largest Apache process (by memory): 73.77MB [ OK ] Your MaxClients setting is within an acceptable range. Max potential memory usage: 2950.8 MB Percentage of RAM allocated to Apache 73.72 % And this is the output of MySQLTuner: -------- Performance Metrics ------------------------------------------------- [--] Up for: 47m 22s (675K q [237.552 qps], 12K conn, TX: 1B, RX: 300M) [--] Reads / Writes: 45% / 55% [--] Total buffers: 2.1G global + 2.7M per thread (151 max threads) [OK] Maximum possible memory usage: 2.5G (64% of installed RAM) [OK] Slow queries: 0% (0/675K) [OK] Highest usage of available connections: 26% (40/151) [OK] Key buffer size / total MyISAM indexes: 36.0M/18.7M [OK] Key buffer hit rate: 100.0% (245K cached / 105 reads) [OK] Query cache efficiency: 92.5% (500K cached / 541K selects) [!!] Query cache prunes per day: 302886 [OK] Sorts requiring temporary tables: 0% (1 temp sorts / 15K sorts) [!!] Joins performed without indexes: 12135 [OK] Temporary tables created on disk: 25% (8K on disk / 32K total) [OK] Thread cache hit rate: 90% (1K created / 12K connections) [!!] Table cache hit rate: 17% (400 open / 2K opened) [OK] Open file limit used: 12% (123/1K) [OK] Table locks acquired immediately: 100% (196K immediate / 196K locks) [!!] InnoDB buffer pool / data size: 2.0G/3.5G [OK] InnoDB log waits: 0 -------- Recommendations ----------------------------------------------------- General recommendations: Run OPTIMIZE TABLE to defragment tables for better performance MySQL started within last 24 hours - recommendations may be inaccurate Enable the slow query log to troubleshoot bad queries Adjust your join queries to always utilize indexes Increase table_cache gradually to avoid file descriptor limits Read this before increasing table_cache over 64: http://bit.ly/1mi7c4C Variables to adjust: query_cache_size ( 64M) join_buffer_size ( 128.0K, or always use indexes with joins) table_cache ( 400) innodb_buffer_pool_size (= 3G) Last but not least, the server still has more than 60% of free disk space. Now, based on the above, I have few questions: Are these numbers normal? Do they make sense? Do I need to upgrade the server? If I don't need to upgrade and my configuration is not correct, how do I optimize it?
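
    A load average near 30 on 2 vCPUs, with 40 Apache children of ~74 MB each plus a ~2.5 G MySQL footprint, means the box is oversubscribed, so the first move is capping memory rather than raising innodb_buffer_pool_size to the 3 G MySQLTuner suggests. A hedged starting point (every value below is an assumption to tune, not a recommendation):

        # /etc/apache2/apache2.conf, prefork MPM
        <IfModule mpm_prefork_module>
            MaxClients            25
            MaxRequestsPerChild   500
        </IfModule>

        # /etc/mysql/my.cnf
        [mysqld]
        innodb_buffer_pool_size = 1536M   # leave headroom for Apache + OS on a 4 G box
        query_cache_size        = 64M
        table_cache             = 1024
        join_buffer_size        = 256K

    If the CPUs are still pinned after that, fixing the 12,135 joins-without-indexes that MySQLTuner flags (enable the slow query log with log-queries-not-using-indexes to find them) is the next lever before paying for a bigger server.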

    Read the article

  • Minecraft server Rkit ubuntu upstart [closed]

    - by user1637491
    I have an Intel server running Ubuntu Server 12.04.1 I am working on moving my CraftBukkit Minecraft Server to the new platform. I read the upstart ubuntu cookbook and wrote a .conf file I have a minecraft user (named minecraft) and its home Directory is /home/minecraft it contains prwxrwxrwx 1 minecraft minecraft 0 Sep 19 14:49 command-fifo drwx------ 8 minecraft minecraft 4096 Sep 19 14:50 HDsaves drwx------ 2 minecraft minecraft 4096 Aug 31 15:13 logrolls -rw-r--r-- 1 root root 5 Sep 19 14:49 minecraft.pid drwxrwxrwx 8 minecraft minecraft 180 Sep 19 14:49 ramdisk -rw------- 1 minecraft minecraft 119 Sep 19 10:34 save.sh drwxrwxrwx 9 minecraft minecraft 4096 Sep 19 14:50 server -rw-rw-r-- 1 minecraft minecraft 44 Aug 31 11:40 shutdown.sh the server directory contains drwxrwxrwx 6 minecraft minecraft 4096 Aug 30 13:32 Backups -rwxrwxrwx 1 minecraft minecraft 0 Sep 18 12:26 banned-ips.txt -rwxrwxrwx 1 minecraft minecraft 17 Sep 18 12:26 banned-players.txt drwxrwxrwx 4 minecraft minecraft 4096 Aug 30 12:26 buildcraft -rwxrwxrwx 1 minecraft minecraft 1447 Sep 18 12:26 bukkit.yml -rwxrwxrwx 1 minecraft minecraft 0 Aug 30 11:05 command-fifo drwxrwxrwx 2 minecraft minecraft 4096 Aug 30 12:26 config lrwxrwxrwx 1 minecraft minecraft 23 Sep 19 14:49 craftbukkit.jar -> ramdisk/craftbukkit.jar -rwxrwxrwx 1 minecraft minecraft 17419 Sep 18 12:26 ForgeModLoader-0.log -rwxrwxrwx 1 minecraft minecraft 17420 Sep 18 12:24 ForgeModLoader-1.log -rwxrwxrwx 1 minecraft minecraft 17420 Sep 18 11:53 ForgeModLoader-2.log -rwxrwxrwx 1 minecraft minecraft 2576 Aug 30 11:05 help.yml drwxrwxrwx 2 minecraft minecraft 4096 Aug 30 12:31 lib drwxrwxrwx 3 minecraft minecraft 4096 Sep 19 14:49 logrolls -rwxrwxrwx 1 minecraft minecraft 200035 Sep 4 17:58 Minecraft_RKit.jar lrwxrwxrwx 1 minecraft minecraft 12 Sep 19 14:49 mods -> ramdisk/mods -rwxrwxrwx 1 minecraft minecraft 5 Sep 18 12:26 ops.txt -rwxrwxrwx 1 minecraft minecraft 0 Aug 30 11:05 permissions.yml lrwxrwxrwx 1 minecraft minecraft 15 Sep 19 14:49 plugins -> ramdisk/plugins lrwxrwxrwx 1 minecraft minecraft 16 Sep 19 14:49 redpower -> ramdisk/redpower -rw-r--r-- 1 root root 255 Sep 19 15:10 server.log -rwxrwxrwx 1 minecraft minecraft 464 Sep 8 11:09 server.properties drwxrwxrwx 3 minecraft minecraft 4096 Sep 5 16:05 SpaceModule drwxrwxrwx 3 minecraft minecraft 4096 Aug 30 13:07 toolkit -rwxrwxrwx 1 minecraft minecraft 1433 Sep 14 21:04 wepif.yml -rwxrwxrwx 1 minecraft minecraft 0 Sep 18 12:26 white-list.txt lrwxrwxrwx 1 minecraft minecraft 13 Sep 19 14:49 world -> ramdisk/world lrwxrwxrwx 1 minecraft minecraft 20 Sep 19 14:49 world_nether -> ramdisk/world_nether lrwxrwxrwx 1 minecraft minecraft 21 Sep 19 14:49 world_the_end -> ramdisk/world_the_end the startup .conf file: # Starts the minecraft server after loading JRE from ramdisk # # for now im still working on it description "minecraft-server" start on filesystem or runlevel [2345] stop on runlevel [!2345] oom score -999 kill timeout 60 pre-start script sh /usr/lib/jvm/java.sh end script script cd /home/minecraft echo "$(date) Starting minecraft" sudo cp -r /home/minecraft/HDsaves/* ramdisk sudo chown -R minecraft:minecraft ramdisk sudo chmod -R 777 ramdisk sudo ln -sf ramdisk/* server sudo chown -R minecraft:minecraft server sudo chmod -R 777 server sudo mv server/server.log server/logrolls/ zip server/logrolls/temp.zip server/logrolls/server.log sudo mv server/logrolls/temp.zip server/logrolls/"$(date)".log.zip sudo rm server/logrolls/server.log sudo rm -f command-fifo sudo mkfifo command-fifo sudo chown 
minecraft:minecraft command-fifo sudo chmod 777 command-fifo echo "$(date) Root commands finished" echo "$(date) Starting Wrapper" cd server sudo -u minecraft java -Xmx30M -Xms30M -XX:MaxPermSize=40M -Djava.awt.headless=true -jar Minecraft_RKit.jar timv:*spoilers* <> /home/minecraft/command-fifo & sudo echo $! >| /home/minecraft/minecraft.pid echo "$(date) Minecraft Started" end script pre-stop script cd /home/minecraft PID=`cat minecraft.pid` if [ "$PID" != "" ]; then echo "Stopping MineCraft Server PID=$PID" sudo echo save-all >> command-fifo sudo echo .stopwrapper >> command-fifo wait $PID sudo rm minecraft.pid sudo rsync -rt --delete ramdisk/* HDsaves/ echo "$(date) ramdisk save complete" echo "MineCraft save-shutdown complete." else echo "MineCraft not running" fi end script so when I start it up the upstart gererated log says: Wed Sep 19 14:49:30 CDT 2012 Starting minecraft adding: server/logrolls/server.log (stored 0%) Wed Sep 19 14:49:56 CDT 2012 Root commands finished Wed Sep 19 14:49:56 CDT 2012 Starting Wrapper Wed Sep 19 14:49:56 CDT 2012 Minecraft Started
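
    Most of the sudo calls and the hand-rolled pid file work against upstart: the version shipped with 12.04 can drop privileges and supervise the process itself. A rough sketch of the job body under that assumption, reusing the same paths and wrapper as above (the root-only ramdisk copy would have to move elsewhere, since setuid applies to the whole job):

        # hypothetical fragment of /etc/init/minecraft.conf
        description "minecraft-server"
        start on runlevel [2345]
        stop on runlevel [!2345]
        setuid minecraft
        setgid minecraft
        chdir /home/minecraft/server
        exec java -Xmx30M -Xms30M -XX:MaxPermSize=40M -Djava.awt.headless=true \
             -jar Minecraft_RKit.jar timv:*spoilers* <> /home/minecraft/command-fifo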

    Read the article

  • Two network interfaces and two IP addresses on the same subnet in Linux

    - by Scott Duckworth
    I recently ran into a situation where I needed two IP addresses on the same subnet assigned to one Linux host so that we could run two SSL/TLS sites. My first approach was to use IP aliasing, e.g. using eth0:0, eth0:1, etc, but our network admins have some fairly strict settings in place for security that squashed this idea: They use DHCP snooping and normally don't allow static IP addresses. Static addressing is accomplished by using static DHCP entries, so the same MAC address always gets the same IP assignment. This feature can be disabled per switchport if you ask and you have a reason for it (thankfully I have a good relationship with the network guys and this isn't hard to do). With the DHCP snooping disabled on the switchport, they had to put in a rule on the switch that said MAC address X is allowed to have IP address Y. Unfortunately this had the side effect of also saying that MAC address X is ONLY allowed to have IP address Y. IP aliasing required that MAC address X was assigned two IP addresses, so this didn't work. There may have been a way around these issues on the switch configuration, but in an attempt to preserve good relations with the network admins I tried to find another way. Having two network interfaces seemed like the next logical step. Thankfully this Linux system is a virtual machine, so I was able to easily add a second network interface (without rebooting, I might add - pretty cool). A few keystrokes later I had two network interfaces up and running and both pulled IP addresses from DHCP. But then the problem came in: the network admins could see (on the switch) the ARP entry for both interfaces, but only the first network interface that I brought up would respond to pings or any sort of TCP or UDP traffic. After lots of digging and poking, here's what I came up with. It seems to work, but it also seems to be a lot of work for something that seems like it should be simple. Any alternate ideas out there? Step 1: Enable ARP filtering on all interfaces: # sysctl -w net.ipv4.conf.all.arp_filter=1 # echo "net.ipv4.conf.all.arp_filter = 1" >> /etc/sysctl.conf From the file networking/ip-sysctl.txt in the Linux kernel docs: arp_filter - BOOLEAN 1 - Allows you to have multiple network interfaces on the same subnet, and have the ARPs for each interface be answered based on whether or not the kernel would route a packet from the ARP'd IP out that interface (therefore you must use source based routing for this to work). In other words it allows control of which cards (usually 1) will respond to an arp request. 0 - (default) The kernel can respond to arp requests with addresses from other interfaces. This may seem wrong but it usually makes sense, because it increases the chance of successful communication. IP addresses are owned by the complete host on Linux, not by particular interfaces. Only for more complex setups like load- balancing, does this behaviour cause problems. arp_filter for the interface will be enabled if at least one of conf/{all,interface}/arp_filter is set to TRUE, it will be disabled otherwise Step 2: Implement source-based routing I basically just followed directions from http://lartc.org/howto/lartc.rpdb.multiple-links.html, although that page was written with a different goal in mind (dealing with two ISPs). Assume that the subnet is 10.0.0.0/24, the gateway is 10.0.0.1, the IP address for eth0 is 10.0.0.100, and the IP address for eth1 is 10.0.0.101. Define two new routing tables named eth0 and eth1 in /etc/iproute2/rt_tables: ... 
top of file omitted ... 1 eth0 2 eth1 Define the routes for these two tables: # ip route add default via 10.0.0.1 table eth0 # ip route add default via 10.0.0.1 table eth1 # ip route add 10.0.0.0/24 dev eth0 src 10.0.0.100 table eth0 # ip route add 10.0.0.0/24 dev eth1 src 10.0.0.101 table eth1 Define the rules for when to use the new routing tables: # ip rule add from 10.0.0.100 table eth0 # ip rule add from 10.0.0.101 table eth1 The main routing table was already taken care of by DHCP (and it's not even clear that its strictly necessary in this case), but it basically equates to this: # ip route add default via 10.0.0.1 dev eth0 # ip route add 130.127.48.0/23 dev eth0 src 10.0.0.100 # ip route add 130.127.48.0/23 dev eth1 src 10.0.0.101 And voila! Everything seems to work just fine. Sending pings to both IP addresses works fine. Sending pings from this system to other systems and forcing the ping to use a specific interface works fine (ping -I eth0 10.0.0.1, ping -I eth1 10.0.0.1). And most importantly, all TCP and UDP traffic to/from either IP address works as expected. So again, my question is: is there a better way to do this? This seems like a lot of work for a seemingly simple problem.
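
    This is essentially the standard source-based-routing recipe, so the remaining work is making it survive reboots and DHCP renewals. One hedged way, assuming a Debian-style /etc/network/interfaces (other distros use rule-ethX/route-ethX files or dispatcher scripts for the same job):

        # /etc/network/interfaces -- hypothetical fragment for the second interface
        iface eth1 inet dhcp
            post-up ip route add default via 10.0.0.1 table eth1
            post-up ip route add 10.0.0.0/24 dev eth1 src 10.0.0.101 table eth1
            post-up ip rule add from 10.0.0.101 table eth1
            pre-down ip rule del from 10.0.0.101 table eth1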

    Read the article

  • Error on installing SVN extension with pecl

    - by thedp
    Hello, I'm trying to install the following PHP extension: http://php.net/manual/en/book.svn.php But when I do pecl install svn-beta I receive an error message that it can't locate the svn_client.h file. I searched the net but couldn't find any useful reference to this error. Thank you for your help. Installation result: root@myUbuntu:/home/thedp# pecl install svn-beta downloading svn-0.5.1.tgz ... Starting to download svn-0.5.1.tgz (23,563 bytes) .....done: 23,563 bytes 4 source files, building running: phpize Configuring for: PHP Api Version: 20041225 Zend Module Api No: 20060613 Zend Extension Api No: 220060519 1. Please provide the prefix of Subversion installation : autodetect 1-1, 'all', 'abort', or Enter to continue: 1. Please provide the prefix of the APR installation used with Subversion : autodetect 1-1, 'all', 'abort', or Enter to continue: building in /var/tmp/pear-build-root/svn-0.5.1 running: /tmp/pear/temp/svn/configure --with-svn --with-svn-apr checking for grep that handles long lines and -e... /bin/grep checking for egrep... /bin/grep -E checking for a sed that does not truncate output... /bin/sed checking for gcc... gcc checking for C compiler default output file name... a.out checking whether the C compiler works... yes checking whether we are cross compiling... no checking for suffix of executables... checking for suffix of object files... o checking whether we are using the GNU C compiler... yes checking whether gcc accepts -g... yes checking for gcc option to accept ISO C89... none needed checking whether gcc and cc understand -c and -o together... yes checking for system library directory... lib checking if compiler supports -R... no checking if compiler supports -Wl,-rpath,... yes checking build system type... i686-pc-linux-gnu checking host system type... i686-pc-linux-gnu checking target system type... i686-pc-linux-gnu checking for PHP prefix... /usr checking for PHP includes... -I/usr/include/php5 -I/usr/include/php5/main -I/usr/include/php5/TSRM -I/usr/include/php5/Zend -I/usr/include/php5/ext -I/usr/include/php5/ext/date/lib -D_LARGEFILE_SOURCE -D_FILE_OFFSET_BITS=64 checking for PHP extension directory... /usr/lib/php5/20060613+lfs checking for PHP installed headers prefix... /usr/include/php5 checking for re2c... no configure: WARNING: You will need re2c 0.12.0 or later if you want to regenerate PHP parsers. checking for gawk... no checking for nawk... nawk checking if nawk is broken... no checking for svn support... yes, shared checking for specifying the location of apr for svn... yes, shared checking for svn includes... configure: error: failed to find svn_client.h ERROR: `/tmp/pear/temp/svn/configure --with-svn --with-svn-apr' failed
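
    configure fails because the Subversion and APR development headers are not installed; svn_client.h ships in the libsvn development package. On Ubuntu/Debian (the package names are the assumption here) the usual sequence is:

        sudo apt-get install libsvn-dev libapr1-dev libaprutil1-dev
        sudo pecl install svn-beta
        # when prompted for the Subversion/APR prefix, /usr is where the headers land:
        #   /usr/include/subversion-1/svn_client.h
        echo "extension=svn.so" | sudo tee /etc/php5/conf.d/svn.ini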

    Read the article

  • SubSonic Stored Procedure Issue - Data Generated at Stored Procedure is different from Data Received

    - by ShaShaIn
    Hi All, I am facing a unknown problem while using stored procedure with SubSonic. I have written a stored procedure & application code that takes first name & last name as input parameter and return last login id as ouput parameter. It creates login id as first character of first name & complete last name for no-existing login id otherwise it adds 1 in the last login id e.g. First Name - Mark, Last Name - Waugh, First Login Id - MWaugh, Second Login Id - MWaugh1, Third Login Id - MWaugh2 etc. Stored Procedure SET ANSI_NULLS OFF GO SET QUOTED_IDENTIFIER ON GO CREATE PROCEDURE [dbo].[Users_FetchLoginId] ( @FirstName nvarchar(64), @LastName nvarchar(64), @LoginId nvarchar(256) OUTPUT ) AS DECLARE @UserId nvarchar(256); SET @UserId = NULL; SET @LoginId = NULL; SELECT @UserId = LoweredUserName FROM aspnet_Users WHERE LoweredUserName LIKE (LOWER(SUBSTRING(@FirstName,1,1) + @LastName)) IF @@rowcount = 0 OR @UserId IS NULL BEGIN SET @LoginId = (SUBSTRING(@FirstName, 1, 1) + @LastName); print @LoginId RETURN 1; END ELSE BEGIN SELECT TOP 1 LoweredUserName FROM aspnet_Users WHERE LoweredUserName LIKE (LOWER(SUBSTRING(@FirstName,1,1) + @LastName + '%')) ORDER BY LoweredUserName DESC RETURN 2; END Application Code public string FetchLoginId(string firstName, string lastName) { SubSonic.StoredProcedure sp = SPs.UsersFetchLoginId( firstName, lastName, null ); sp.Command.AddReturnParameter(); sp.Execute(); if (sp.Command.Parameters.Find(delegate(QueryParameter queryParameter) { return queryParameter.Mode == ParameterDirection.ReturnValue; }).ParameterValue != System.DBNull.Value) { int returnCode = Convert.ToInt32(sp.Command.Parameters.Find(delegate(QueryParameter queryParameter) { return queryParameter.Mode == ParameterDirection.ReturnValue; }).ParameterValue, CultureInfo.InvariantCulture); if (returnCode == 1) { // UserName as First Character of First Name & Full Last Name return sp.Command.Parameters[2].ParameterValue.ToString(); } if (returnCode == 2) { DataSet ds = sp.GetDataSet(); if (null == ds || null == ds.Tables[0] || 0 == ds.Tables[0].Rows.Count) return ""; string maxLoginId = ds.Tables[0].Rows[0]["LoweredUserName"].ToString(); string initialLoginId = firstName.Substring(0, 1) + lastName; int maxLoginIdIndex = 0; int initialLoginIdLength = initialLoginId.Length; if (maxLoginId.Substring(initialLoginIdLength).Length == 0) { maxLoginIdIndex++; // UserName as Max Lowered User Name Found & Incrementer as Suffix (Here, First Incrementer i.e. 1) return (initialLoginId + maxLoginIdIndex); } if (int.TryParse(maxLoginId.Substring(initialLoginIdLength), out maxLoginIdIndex)) { if (maxLoginIdIndex > 0) { maxLoginIdIndex++; // UserName as Max Lowered User Name Found & Incrementer as Suffix return (initialLoginId + maxLoginIdIndex); } } } } Now the problem is for some input (see test data below), the login id created at sql server end correctly but at application subsonic dal side, it truncates some characters. First Name - Jenelia and Last Name - Kanupatikenalaalayampentyalavelugoplansubhramanayam [dbo].[Users_FetchLoginId] - Execute Stored Procedure Separately - Login Id Is Correct JKanupatikenalaalayampentyalavelugoplansubhramanayam public string FetchLoginId(string firstName, string lastName) - Application Code DAL Side - LginId Is Wrongly Received From Stored Procedure JKanupatikenalaalayampentyalavelugoplansubhramanay You can easily see that 2 charactes are removed. 
If the data is generated correctly by the stored procedure, why are characters removed when it is received in the procedure's output parameter? Is this a known (or unknown) bug in SubSonic? Any help is appreciated. Thanks in advance...
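
    The value is cut at exactly 50 characters, which is the classic symptom of an output parameter whose Size was never set, so ADO.NET falls back to a small default instead of the nvarchar(256) declared in the procedure. Below is a plain ADO.NET sketch with the size set explicitly; whether SubSonic's QueryParameter exposes an equivalent Size setting is an assumption to verify against your generated DAL:

        using System.Data;
        using System.Data.SqlClient;

        using (var conn = new SqlConnection(connectionString))
        using (var cmd = new SqlCommand("dbo.Users_FetchLoginId", conn))
        {
            cmd.CommandType = CommandType.StoredProcedure;
            cmd.Parameters.AddWithValue("@FirstName", firstName);
            cmd.Parameters.AddWithValue("@LastName", lastName);

            var loginId = cmd.Parameters.Add("@LoginId", SqlDbType.NVarChar, 256); // size must match the proc
            loginId.Direction = ParameterDirection.Output;

            var returnCode = cmd.Parameters.Add("@ReturnValue", SqlDbType.Int);
            returnCode.Direction = ParameterDirection.ReturnValue;

            conn.Open();
            cmd.ExecuteNonQuery();
            // loginId.Value now holds the full, untruncated login id when the return code is 1
        }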

    Read the article

  • Lots of mysql Sleep processes

    - by user259284
    Hello, I am still having trouble with my mysql server. It seems that since i optimize it, the tables were growing and now sometimes is very slow again. I have no idea of how to optimize more. mySQL server has 48GB of RAM and mysqld is using about 8, most of the tables are innoDB. Site has about 2000 users online. I also run explain on every query and every one of them is indexed. mySQL processes: http://www.pik.ba/mysqlStanje.php my.cnf: # The MySQL database server configuration file. # # You can copy this to one of: # - "/etc/mysql/my.cnf" to set global options, # - "~/.my.cnf" to set user-specific options. # # One can use all long options that the program supports. # Run program with --help to get a list of available options and with # --print-defaults to see which it would actually understand and use. # # For explanations see # http://dev.mysql.com/doc/mysql/en/server-system-variables.html # This will be passed to all mysql clients # It has been reported that passwords should be enclosed with ticks/quotes # escpecially if they contain "#" chars... # Remember to edit /etc/mysql/debian.cnf when changing the socket location. [client] port = 3306 socket = /var/run/mysqld/mysqld.sock # Here is entries for some specific programs # The following values assume you have at least 32M ram # This was formally known as [safe_mysqld]. Both versions are currently parsed. [mysqld_safe] socket = /var/run/mysqld/mysqld.sock nice = 0 [mysqld] # # * Basic Settings # user = mysql pid-file = /var/run/mysqld/mysqld.pid socket = /var/run/mysqld/mysqld.sock port = 3306 basedir = /usr datadir = /var/lib/mysql tmpdir = /tmp language = /usr/share/mysql/english skip-external-locking # # Instead of skip-networking the default is now to listen only on # localhost which is more compatible and is not less secure. bind-address = 10.100.27.30 # # * Fine Tuning # key_buffer = 64M key_buffer_size = 512M max_allowed_packet = 16M thread_stack = 128K thread_cache_size = 8 # This replaces the startup script and checks MyISAM tables if needed # the first time they are touched myisam-recover = BACKUP max_connections = 1000 table_cache = 1000 join_buffer_size = 2M tmp_table_size = 2G max_heap_table_size = 2G innodb_buffer_pool_size = 3G innodb_additional_mem_pool_size = 128M innodb_log_file_size = 100M log-slow-queries = /var/log/mysql/slow.log sort_buffer_size = 5M net_buffer_length = 5M read_buffer_size = 2M read_rnd_buffer_size = 12M thread_concurrency = 10 ft_min_word_len = 3 #thread_concurrency = 10 # # * Query Cache Configuration # query_cache_limit = 1M query_cache_size = 512M # # * Logging and Replication # # Both location gets rotated by the cronjob. # Be aware that this log type is a performance killer. #log = /var/log/mysql/mysql.log # # Error logging goes to syslog. This is a Debian improvement :) # # Here you can see queries with especially long duration #log_slow_queries = /var/log/mysql/mysql-slow.log #long_query_time = 2 #log-queries-not-using-indexes # # The following can be used as easy to replay backup logs or for replication. # note: if you are setting up a replication slave, see README.Debian about # other settings you may need to change. #server-id = 1 #log_bin = /var/log/mysql/mysql-bin.log expire_logs_days = 10 max_binlog_size = 100M #binlog_do_db = include_database_name #binlog_ignore_db = include_database_name # # * BerkeleyDB # # Using BerkeleyDB is now discouraged as its support will cease in 5.1.12. skip-bdb # # * InnoDB # # InnoDB is enabled by default with a 10MB datafile in /var/lib/mysql/. 
# Read the manual for more InnoDB related options. There are many! # You might want to disable InnoDB to shrink the mysqld process by circa 100MB. #skip-innodb # # * Security Features # # Read the manual, too, if you want chroot! # chroot = /var/lib/mysql/ # # For generating SSL certificates I recommend the OpenSSL GUI "tinyca". # # ssl-ca=/etc/mysql/cacert.pem # ssl-cert=/etc/mysql/server-cert.pem # ssl-key=/etc/mysql/server-key.pem [mysqldump] quick quote-names max_allowed_packet = 16M [mysql] #no-auto-rehash # faster start of mysql but no tab completition [isamchk] key_buffer = 16M # # * NDB Cluster # # See /usr/share/doc/mysql-server-*/README.Debian for more information. # # The following configuration is read by the NDB Data Nodes (ndbd processes) # not from the NDB Management Nodes (ndb_mgmd processes). # # [MYSQL_CLUSTER] # ndb-connectstring=127.0.0.1 # # * IMPORTANT: Additional settings that can override those from this file! # The files must end with '.cnf', otherwise they'll be ignored. # !includedir /etc/mysql/conf.d/
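
    Sleeping threads are connections the application opened and never closed (or pooled connections idling); MySQL only reaps them after wait_timeout, which defaults to 8 hours, so they pile up in the process list. Lowering it is a hedged stopgap while the application side is fixed (120 seconds is an arbitrary starting value, not a recommendation):

        -- takes effect for new connections only
        SET GLOBAL wait_timeout = 120;
        SET GLOBAL interactive_timeout = 120;

        # and persist it in my.cnf under [mysqld]
        wait_timeout        = 120
        interactive_timeout = 120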

    Read the article

  • SqlBulkCopy is slow, doesn't utilize full network speed

    - by Alex
    Hi, for that past couple of weeks I have been creating generic script that is able to copy databases. The goal is to be able to specify any database on some server and copy it to some other location, and it should only copy the specified content. The exact content to be copied over is specified in a configuration file. This script is going to be used on some 10 different databases and run weekly. And in the end we are copying only about 3%-20% of databases which are as large as 500GB. I have been using the SMO assemblies to achieve this. This is my first time working with SMO and it took a while to create generic way to copy the schema objects, filegroups ...etc. (Actually helped find some bad stored procs). Overall I have a working script which is lacking on performance (and at times times out) and was hoping you guys would be able to help. When executing the WriteToServer command to copy large amount of data ( 6GB) it reaches my timeout period of 1hr. Here is the core code for copying table data. The script is written in PowerShell. $query = ("SELECT * FROM $selectedTable " + $global:selectiveTables.Get_Item($selectedTable)).Trim() Write-LogOutput "Copying $selectedTable : '$query'" $cmd = New-Object Data.SqlClient.SqlCommand -argumentList $query, $source $cmd.CommandTimeout = 120; $bulkData = ([Data.SqlClient.SqlBulkCopy]$destination) $bulkData.DestinationTableName = $selectedTable; $bulkData.BulkCopyTimeout = $global:tableCopyDataTimeout # = 3600 $reader = $cmd.ExecuteReader(); $bulkData.WriteToServer($reader); # Takes forever here on large tables The source and target databases are located on different servers so I kept track of the network speed as well. The network utilization never went over 1% which was quite surprising to me. But when I just transfer some large files between the servers, the network utilization spikes up to 10%. I have tried setting the $bulkData.BatchSize to 5000 but nothing really changed. Increasing the BulkCopyTimeout to an even greater amount would only solve the timeout. I really would like to know why the network is not being used fully. Anyone else had this problem? Any suggestions on networking or bulk copy will be appreciated. And please let me know if you need more information. Thanks. UPDATE I have tweaked several options that increase the performance of SqlBulkCopy, such as setting the transaction logging to simple and providing a table lock to SqlBulkCopy instead of the default row lock. Also some tables are better optimized for certain batch sizes. Overall, the duration of the copy was decreased by some 15%. And what we will do is execute the copy of each database simultaneously on different servers. But I am still having a timeout issue when copying one of the databases. When copying one of the larger databases, there is a table for which I consistently get the following exception: System.Data.SqlClient.SqlException: Timeout expired. The timeout period elapsed prior to completion of the operation or the server is not responding. It is thrown about 16 after it starts copying the table which is no where near my BulkCopyTimeout. Even though I get the exception that table is fully copied in the end. Also, if I truncate that table and restart my process for that table only, the tables is copied over without any issues. But going through the process of copying that entire database fails always for that one table. I have tried executing the entire process and reseting the connection before copying that faulty table, but it still errored out. 
My SqlBulkCopy and reader are closed after each table. Any suggestions as to what else could be causing the script to fail at that point each time?
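
    A hedged variant of the copy loop that removes the remaining timeout and takes a table lock per bulk insert; the $destinationConnectionString variable name is made up (substitute whatever holds the target connection string), and EnableStreaming only exists on .NET 4.5+, so drop that line on older runtimes:

        $options  = [Data.SqlClient.SqlBulkCopyOptions]::TableLock
        $bulkData = New-Object Data.SqlClient.SqlBulkCopy($destinationConnectionString, $options)
        $bulkData.DestinationTableName = $selectedTable
        $bulkData.BulkCopyTimeout = 0          # 0 = wait indefinitely instead of aborting mid-table
        $bulkData.BatchSize = 10000            # tune per table; batches also bound transaction log growth
        $bulkData.EnableStreaming = $true      # stream rows from the reader instead of buffering them
        $bulkData.WriteToServer($reader)
        $bulkData.Close()

    It is also worth checking the 120-second CommandTimeout on the source SqlCommand: that limit covers network reads while the reader is still feeding rows to WriteToServer, so it can surface as a timeout during the copy even when BulkCopyTimeout is generous.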

    Read the article

  • Is SQL Server DRI (ON DELETE CASCADE) slow?

    - by Aaronaught
    I've been analyzing a recurring "bug report" (perf issue) in one of our systems related to a particularly slow delete operation. Long story short: It seems that the CASCADE DELETE keys were largely responsible, and I'd like to know (a) if this makes sense, and (b) why it's the case. We have a schema of, let's say, widgets, those being at the root of a large graph of related tables and related-to-related tables and so on. To be perfectly clear, deleting from this table is actively discouraged; it is the "nuclear option" and users are under no illusions to the contrary. Nevertheless, it sometimes just has to be done. The schema looks something like this: Widgets | +--- Anvils (1:1) | | | +--- AnvilTestData (1:N) | +--- WidgetHistory (1:N) | +--- WidgetHistoryDetails (1:N) Nothing too scary, really. A Widget can be different types, an Anvil is a special type, so that relationship is 1:1 (or more accurately 1:0..1). Then there's a large amount of data - perhaps thousands of rows of AnvilTestData per Anvil collected over time, dealing with hardness, corrosion, exact weight, hammer compatibility, usability issues, and impact tests with cartoon heads. Then every Widget has a long, boring history of various types of transactions - production, inventory moves, sales, defect investigations, RMAs, repairs, customer complaints, etc. There might be 10-20k details for a single widget, or none at all, depending on its age. So, unsurprisingly, there's a CASCADE DELETE relationship at every level here. If a Widget needs to be deleted, it means something's gone terribly wrong and we need to erase any records of that widget ever existing, including its history, test data, etc. Again, nuclear option. Relations are all indexed, statistics are up to date. Normal queries are fast. The system tends to hum along pretty smoothly for everything except deletes. Getting to the point here, finally, for various reasons we only allow deleting one widget at a time, so a delete statement would look like this: DELETE FROM Widgets WHERE WidgetID = @WidgetID Pretty simple, innocuous looking delete... that takes over 2 minutes to run, for a widget with no data! After slogging through execution plans I was finally able to pick out the AnvilTestData and WidgetHistoryDetails deletes as the sub-operations with the highest cost. So I experimented with turning off the CASCADE (but keeping the actual FK, just setting it to NO ACTION) and rewriting the script as something very much like the following: DECLARE @AnvilID int SELECT @AnvilID = AnvilID FROM Anvils WHERE WidgetID = @WidgetID DELETE FROM AnvilTestData WHERE AnvilID = @AnvilID DELETE FROM WidgetHistory WHERE HistoryID IN ( SELECT HistoryID FROM WidgetHistory WHERE WidgetID = @WidgetID) DELETE FROM Widgets WHERE WidgetID = @WidgetID Both of these "optimizations" resulted in significant speedups, each one shaving nearly a full minute off the execution time, so that the original 2-minute deletion now takes about 5-10 seconds - at least for new widgets, without much history or test data. Just to be absolutely clear, there is still a CASCADE from WidgetHistory to WidgetHistoryDetails, where the fanout is highest, I only removed the one originating from Widgets. Further "flattening" of the cascade relationships resulted in progressively less dramatic but still noticeable speedups, to the point where deleting a new widget was almost instantaneous once all of the cascade deletes to larger tables were removed and replaced with explicit deletes. 
I'm using DBCC DROPCLEANBUFFERS and DBCC FREEPROCCACHE before each test. I've disabled all triggers that might be causing further slowdowns (although those would show up in the execution plan anyway). And I'm testing against older widgets, too, and noticing a significant speedup there as well; deletes that used to take 5 minutes now take 20-40 seconds. Now I'm an ardent supporter of the "SELECT ain't broken" philosophy, but there just doesn't seem to be any logical explanation for this behaviour other than crushing, mind-boggling inefficiency of the CASCADE DELETE relationships. So, my questions are: Is this a known issue with DRI in SQL Server? (I couldn't seem to find any references to this sort of thing on Google or here in SO; I suspect the answer is no.) If not, is there another explanation for the behaviour I'm seeing? If it is a known issue, why is it an issue, and are there better workarounds I could be using?
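
    One mundane cause of exactly this pattern is a cascading child table whose foreign-key column is not the leading column of any index, so every parent delete scans the child; since the relations are believed to be indexed, a quick catalog check is cheap insurance before blaming DRI itself. A hedged query against the standard DMVs:

        -- foreign-key columns that are not the leading key of any index
        SELECT fk.name AS foreign_key,
               OBJECT_NAME(fkc.parent_object_id) AS child_table,
               c.name AS child_column
        FROM sys.foreign_keys fk
        JOIN sys.foreign_key_columns fkc ON fkc.constraint_object_id = fk.object_id
        JOIN sys.columns c ON c.object_id = fkc.parent_object_id
                          AND c.column_id = fkc.parent_column_id
        WHERE NOT EXISTS (
            SELECT 1
            FROM sys.index_columns ic
            WHERE ic.object_id = fkc.parent_object_id
              AND ic.column_id = fkc.parent_column_id
              AND ic.key_ordinal = 1);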

    Read the article

  • Mysql 100% CPU + Slow query

    - by felipeclopes
    I'm using the RDS database from amazon with a some very big tables, and yesterday I started to face 100% CPU utilisation on the server and a bunch of slow query logs that were not happening before. I tried to check the queries that were running and faced this result from the explain command +----+-------------+-------------------------------+--------+----------------------------------------------------------------------------------------------+---------------------------------------+---------+-----------------------------------------------------------------+------+----------------------------------------------+ | id | select_type | table | type | possible_keys | key | key_len | ref | rows | Extra | +----+-------------+-------------------------------+--------+----------------------------------------------------------------------------------------------+---------------------------------------+---------+-----------------------------------------------------------------+------+----------------------------------------------+ | 1 | SIMPLE | businesses | const | PRIMARY | PRIMARY | 4 | const | 1 | Using index; Using temporary; Using filesort | | 1 | SIMPLE | activities_businesses | ref | PRIMARY,index_activities_users_on_business_id,index_tweets_users_on_tweet_id_and_business_id | index_activities_users_on_business_id | 9 | const | 2252 | Using index condition; Using where | | 1 | SIMPLE | activities_b_taggings_975e9c4 | ref | taggings_idx | taggings_idx | 782 | const,myapp_production.activities_businesses.id,const | 1 | Using index condition; Using where | | 1 | SIMPLE | activities | eq_ref | PRIMARY,index_activities_on_created_at | PRIMARY | 8 | myapp_production.activities_businesses.activity_id | 1 | Using where | +----+-------------+-------------------------------+--------+----------------------------------------------------------------------------------------------+---------------------------------------+---------+-----------------------------------------------------------------+------+----------------------------------------------+ Also checkin in the process list, I got something like this: +----+-----------------+-------------------------------------+----------------------------+---------+------+--------------+------------------------------------------------------------------------------------------------------+ | Id | User | Host | db | Command | Time | State | Info | +----+-----------------+-------------------------------------+----------------------------+---------+------+--------------+------------------------------------------------------------------------------------------------------+ | 1 | my_app | my_ip:57152 | my_app_production | Sleep | 0 | | NULL | | 2 | my_app | my_ip:57153 | my_app_production | Sleep | 2 | | NULL | | 3 | rdsadmin | localhost:49441 | NULL | Sleep | 9 | | NULL | | 6 | my_app | my_other_ip:47802 | my_app_production | Sleep | 242 | | NULL | | 7 | my_app | my_other_ip:47807 | my_app_production | Query | 231 | Sending data | SELECT my_fields... | | 8 | my_app | my_other_ip:47809 | my_app_production | Query | 231 | Sending data | SELECT my_fields... | | 9 | my_app | my_other_ip:47810 | my_app_production | Query | 231 | Sending data | SELECT my_fields... | | 10 | my_app | my_other_ip:47811 | my_app_production | Query | 231 | Sending data | SELECT my_fields... | | 11 | my_app | my_other_ip:47813 | my_app_production | Query | 231 | Sending data | SELECT my_fields... | ... 
So based on the numbers, it looks like there is no reason to have a slow query, since the worst execution plan is the one that goes through 2k rows which is not much. Edit 1 Another information that might be useful is the slow query_log SET timestamp=1401457485; SELECT my_query... # User@Host: myapp[myapp] @ ip-10-195-55-233.ec2.internal [IP] Id: 435 # Query_time: 95.830497 Lock_time: 0.000178 Rows_sent: 0 Rows_examined: 1129387 Edit 2 After profiling, I got this result. The result have approximately 250 rows with two columns each. +----------------------+----------+ | state | duration | +----------------------+----------+ | Sending data | 272 | | removing tmp table | 0 | | optimizing | 0 | | Creating sort index | 0 | | init | 0 | | cleaning up | 0 | | executing | 0 | | checking permissions | 0 | | freeing items | 0 | | Creating tmp table | 0 | | query end | 0 | | statistics | 0 | | end | 0 | | System lock | 0 | | Opening tables | 0 | | logging slow query | 0 | | Sorting result | 0 | | starting | 0 | | closing tables | 0 | | preparing | 0 | +----------------------+----------+ Edit 3 Adding query as requested SELECT activities.share_count, activities.created_at FROM `activities_businesses` INNER JOIN `businesses` ON `businesses`.`id` = `activities_businesses`.`business_id` INNER JOIN `activities` ON `activities`.`id` = `activities_businesses`.`activity_id` JOIN taggings activities_b_taggings_975e9c4 ON activities_b_taggings_975e9c4.taggable_id = activities_businesses.id AND activities_b_taggings_975e9c4.taggable_type = 'ActivitiesBusiness' AND activities_b_taggings_975e9c4.tag_id = 104 AND activities_b_taggings_975e9c4.created_at >= '2014-04-30 13:36:44' WHERE ( businesses.id = 1 ) AND ( activities.created_at > '2014-04-30 13:36:44' ) AND ( activities.created_at < '2014-05-30 12:27:03' ) ORDER BY activities.created_at; Edit 4 There may be a chance that the indexes are not being applied due to difference in column type between the taggings and the activities_businesses, on the taggable_id column. mysql> SHOW COLUMNS FROM activities_businesses; +-------------+------------+------+-----+---------+----------------+ | Field | Type | Null | Key | Default | Extra | +-------------+------------+------+-----+---------+----------------+ | id | int(11) | NO | PRI | NULL | auto_increment | | activity_id | bigint(20) | YES | MUL | NULL | | | business_id | bigint(20) | YES | MUL | NULL | | +-------------+------------+------+-----+---------+----------------+ 3 rows in set (0.01 sec) mysql> SHOW COLUMNS FROM taggings; +---------------+--------------+------+-----+---------+----------------+ | Field | Type | Null | Key | Default | Extra | +---------------+--------------+------+-----+---------+----------------+ | id | int(11) | NO | PRI | NULL | auto_increment | | tag_id | int(11) | YES | MUL | NULL | | | taggable_id | bigint(20) | YES | | NULL | | | taggable_type | varchar(255) | YES | | NULL | | | tagger_id | int(11) | YES | | NULL | | | tagger_type | varchar(255) | YES | | NULL | | | context | varchar(128) | YES | | NULL | | | created_at | datetime | YES | | NULL | | +---------------+--------------+------+-----+---------+----------------+ So it is examining way more rows than it shows in the explain query, probably because some indexes are not being applied. Do you guys can help m with that?

    Read the article

  • Data adapter not filling my dataset

    - by Doug Ancil
    I have the following code: Imports System.Data.SqlClient Public Class Main Protected WithEvents DataGridView1 As DataGridView Dim instForm2 As New Exceptions Private Sub Button1_Click_1(ByVal sender As System.Object, ByVal e As System.EventArgs) Handles startpayrollButton.Click Dim ssql As String = "select MAX(payrolldate) AS [payrolldate], " & _ "dateadd(dd, ((datediff(dd, '17530107', MAX(payrolldate))/7)*7)+7, '17530107') AS [Sunday]" & _ "from dbo.payroll" & _ " where payrollran = 'no'" Dim oCmd As System.Data.SqlClient.SqlCommand Dim oDr As System.Data.SqlClient.SqlDataReader oCmd = New System.Data.SqlClient.SqlCommand Try With oCmd .Connection = New System.Data.SqlClient.SqlConnection("Initial Catalog=mdr;Data Source=xxxxx;uid=xxxxx;password=xxxxx") .Connection.Open() .CommandType = CommandType.Text .CommandText = ssql oDr = .ExecuteReader() End With If oDr.Read Then payperiodstartdate = oDr.GetDateTime(1) payperiodenddate = payperiodstartdate.AddSeconds(604799) Dim ButtonDialogResult As DialogResult ButtonDialogResult = MessageBox.Show(" The Next Payroll Start Date is: " & payperiodstartdate.ToString() & System.Environment.NewLine & " Through End Date: " & payperiodenddate.ToString()) If ButtonDialogResult = Windows.Forms.DialogResult.OK Then exceptionsButton.Enabled = True startpayrollButton.Enabled = False End If End If oDr.Close() oCmd.Connection.Close() Catch ex As Exception MessageBox.Show(ex.Message) oCmd.Connection.Close() End Try End Sub Private Sub Button2_Click(ByVal sender As System.Object, ByVal e As System.EventArgs) Handles exceptionsButton.Click Dim connection As System.Data.SqlClient.SqlConnection Dim adapter As System.Data.SqlClient.SqlDataAdapter = New System.Data.SqlClient.SqlDataAdapter Dim connectionString As String = "Initial Catalog=mdr;Data Source=xxxxx;uid=xxxxx;password=xxxxx" Dim ds As New DataSet Dim _sql As String = "SELECT [Exceptions].Employeenumber,[Exceptions].exceptiondate, [Exceptions].starttime, [exceptions].endtime, [Exceptions].code, datediff(minute, starttime, endtime) as duration INTO scratchpad3" & _ " FROM Employees INNER JOIN Exceptions ON [Exceptions].EmployeeNumber = [Exceptions].Employeenumber" & _ " where [Exceptions].exceptiondate between @payperiodstartdate and @payperiodenddate" & _ " GROUP BY [Exceptions].Employeenumber, [Exceptions].Exceptiondate, [Exceptions].starttime, [exceptions].endtime," & _ " [Exceptions].code, [Exceptions].exceptiondate" connection = New SqlConnection(connectionString) connection.Open() Dim _CMD As SqlCommand = New SqlCommand(_sql, connection) _CMD.Parameters.AddWithValue("@payperiodstartdate", payperiodstartdate) _CMD.Parameters.AddWithValue("@payperiodenddate", payperiodenddate) adapter.SelectCommand = _CMD Try adapter.Fill(ds) If ds Is Nothing OrElse ds.Tables.Count = 0 OrElse ds.Tables(0).Rows.Count = 0 Then 'it's empty MessageBox.Show("There was no data for this time period. 
Press Ok to continue", "No Data") connection.Close() Exceptions.saveButton.Enabled = False Exceptions.Hide() Else connection.Close() End If Catch ex As Exception MessageBox.Show(ex.ToString) connection.Close() End Try Exceptions.Show() End Sub Private Sub payrollButton_Click(ByVal sender As System.Object, ByVal e As System.EventArgs) Handles payrollButton.Click Payrollfinal.Show() End Sub End Class and when I run my program and press this button Private Sub Button2_Click(ByVal sender As System.Object, ByVal e As System.EventArgs) Handles exceptionsButton.Click I have my date range within a time that I know that my dataset should produce a result, but when I put a line break in my code here: adapter.Fill(ds) and look at it in debug, I show a table value of 0. If I run the same query that I have to produce these results in sql analyser, I see 1 result. Can someone see why my query on my form produces a different result than the sql analyser does? Also here is my schema for my two tables: Exceptions employeenumber varchar no 50 yes no no SQL_Latin1_General_CP1_CI_AS exceptiondate datetime no 8 yes (n/a) (n/a) NULL starttime datetime no 8 yes (n/a) (n/a) NULL endtime datetime no 8 yes (n/a) (n/a) NULL duration varchar no 50 yes no no SQL_Latin1_General_CP1_CI_AS code varchar no 50 yes no no SQL_Latin1_General_CP1_CI_AS approvedby varchar no 50 yes no no SQL_Latin1_General_CP1_CI_AS approved varchar no 50 yes no no SQL_Latin1_General_CP1_CI_AS time timestamp no 8 yes (n/a) (n/a) NULL employees employeenumber varchar no 50 no no no SQL_Latin1_General_CP1_CI_AS name varchar no 50 no no no SQL_Latin1_General_CP1_CI_AS initials varchar no 50 no no no SQL_Latin1_General_CP1_CI_AS loginname1 varchar no 50 yes no no SQL_Latin1_General_CP1_CI_AS
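
    Two things in the query explain the empty DataSet: SELECT ... INTO scratchpad3 writes its rows into a new table and returns no result set to the client (Query Analyzer still reports "1 row(s) affected", which can look like a result), and the join condition compares Exceptions.EmployeeNumber to itself instead of to Employees. A hedged rewrite of just the query string:

        Dim _sql As String = "SELECT e.Employeenumber, e.exceptiondate, e.starttime, e.endtime, e.code," & _
                             " DATEDIFF(minute, e.starttime, e.endtime) AS duration" & _
                             " FROM Exceptions e" & _
                             " INNER JOIN Employees emp ON emp.employeenumber = e.Employeenumber" & _
                             " WHERE e.exceptiondate BETWEEN @payperiodstartdate AND @payperiodenddate"

    If scratchpad3 really has to be populated, run the SELECT ... INTO as a separate ExecuteNonQuery and then fill the adapter from a plain SELECT * FROM scratchpad3.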

    Read the article

  • mysql_close doesn't kill locked sql requests

    - by Nikita
    I use mysqld Ver 5.1.37-2-log for debian-linux-gnu and perform MySQL calls from C++ code with the mysql_query function. The problem occurs when mysql_query executes a procedure that blocks on a locked table, so mysql_query hangs. If I send a kill signal to the application, the lock on the server remains until the table is unlocked. Create the following SQL table and procedure: CREATE TABLE IF NOT EXISTS `tabletolock` ( `id` INT NOT NULL AUTO_INCREMENT, PRIMARY KEY (`id`) ) ENGINE = InnoDB; DELIMITER $$ DROP PROCEDURE IF EXISTS `LOCK_PROCEDURE` $$ CREATE PROCEDURE `LOCK_PROCEDURE`() BEGIN SELECT id INTO @id FROM tabletolock; END $$ DELIMITER ; These SQL commands reproduce the problem: 1. in one terminal execute lock tables tabletolock write; 2. in another terminal execute call LOCK_PROCEDURE(); 3. in the first terminal execute show processlist and see | 2492 | root | localhost | syn_db | Query | 12 | Locked | SELECT id INTO @id FROM tabletolock | Then press Ctrl-C in the second terminal to interrupt the procedure and look at the processlist again. It has not changed: the locked SELECT is still there and can only be terminated by unlock tables or a kill command. The problem described occurs with the mysql command line client, and the same problem exists when we use the functions mysql_query and mysql_close. Example C code: #include <iostream> #include <mysql/mysql.h> #include <mysql/errmsg.h> #include <signal.h> // g++ -Wall -g -fPIC -lmysqlclient dbtest.cpp using namespace std; MYSQL * connection = NULL; void closeconnection() { if(connection != NULL) { cout << "close connection !\n"; mysql_close(connection); mysql_thread_end(); delete connection; mysql_library_end(); } } void sigkill(int s) { closeconnection(); signal(SIGINT, NULL); raise(s); } int main(int argc, char ** argv) { signal(SIGINT, sigkill); connection = new MYSQL; mysql_init(connection); mysql_options(connection, MYSQL_READ_DEFAULT_GROUP, "nnfc"); if (!mysql_real_connect(connection, "127.0.0.1", "user", "password", "db", 3306, NULL, CLIENT_MULTI_RESULTS)) { delete connection; cout << "cannot connect\n"; return -1; } cout << "before procedure call\n"; mysql_query(connection, "CALL LOCK_PROCEDURE();"); cout << "after procedure call\n"; closeconnection(); return 0; } Compile it, and perform the following actions: 1. in the first terminal execute lock tables tabletolock write; 2. run the program ./a.out 3. interrupt the program with Ctrl-C; on the screen we see that the closeconnection function is called, so the connection is closed. 4. in the first terminal execute show processlist and see that the procedure was not interrupted. My question is: how do I terminate such locked calls from C code? Thank you in advance!
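    Since mysql_close() only tears down the client side of the connection, the statement that is already blocked inside the server keeps waiting on the table lock. One practical option, offered here as a hedged sketch rather than a confirmed fix, is to kill the server-side thread from a second connection; from C this can be done by opening another MYSQL handle and issuing the same statements with mysql_query. The thread id 2492 below is the one shown in the processlist output above.

        -- From a separate (administrative) connection while the CALL is blocked:
        SHOW PROCESSLIST;    -- find the Id of the row whose State is 'Locked'
        KILL QUERY 2492;     -- terminate just the running statement, keep the connection
        -- or, to drop the whole connection:
        KILL 2492;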

    Read the article

  • @OneToMany association joining on the wrong field

    - by april26
    I have 2 tables, devices which contains a list of devices and dev_tags, which contains a list of asset tags for these devices. The tables join on dev_serial_num, which is the primary key of neither table. The devices are unique on their ip_address field and they have a primary key identified by dev_id. The devices "age out" after 2 weeks. Therefore, the same piece of hardware can show up more than once in devices. I mention that to explain why there is a OneToMany relationship between dev_tags and devices where it seems that this should be a OneToOne relationship. So I have my 2 entities @Entity @Table(name = "dev_tags") public class DevTags implements Serializable { private Integer tagId; private String devTagId; private String devSerialNum; private List<Devices> devices; @Id @GeneratedValue @Column(name = "tag_id") public Integer getTagId() { return tagId; } public void setTagId(Integer tagId) { this.tagId = tagId; } @Column(name="dev_tag_id") public String getDevTagId() { return devTagId; } public void setDevTagId(String devTagId) { this.devTagId = devTagId; } @Column(name="dev_serial_num") public String getDevSerialNum() { return devSerialNum; } public void setDevSerialNum(String devSerialNum) { this.devSerialNum = devSerialNum; } @OneToMany(mappedBy="devSerialNum") public List<Devices> getDevices() { return devices; } public void setDevices(List<Devices> devices) { this.devices = devices; } } and this one public class Devices implements java.io.Serializable { private Integer devId; private Integer officeId; private String devSerialNum; private String devPlatform; private String devName; private OfficeView officeView; private DevTags devTag; public Devices() { } @Id @GeneratedValue(strategy = IDENTITY) @Column(name = "dev_id", unique = true, nullable = false) public Integer getDevId() { return this.devId; } public void setDevId(Integer devId) { this.devId = devId; } @Column(name = "office_id", nullable = false, insertable=false, updatable=false) public Integer getOfficeId() { return this.officeId; } public void setOfficeId(Integer officeId) { this.officeId = officeId; } @Column(name = "dev_serial_num", nullable = false, length = 64, insertable=false, updatable=false) @NotNull @Length(max = 64) public String getDevSerialNum() { return this.devSerialNum; } public void setDevSerialNum(String devSerialNum) { this.devSerialNum = devSerialNum; } @Column(name = "dev_platform", nullable = false, length = 64) @NotNull @Length(max = 64) public String getDevPlatform() { return this.devPlatform; } public void setDevPlatform(String devPlatform) { this.devPlatform = devPlatform; } @Column(name = "dev_name") public String getDevName() { return devName; } public void setDevName(String devName) { this.devName = devName; } @ManyToOne(fetch = FetchType.LAZY) @JoinColumn(name = "office_id") public OfficeView getOfficeView() { return officeView; } public void setOfficeView(OfficeView officeView) { this.officeView = officeView; } @ManyToOne() @JoinColumn(name="dev_serial_num") public DevTags getDevTag() { return devTag; } public void setDevTag(DevTags devTag) { this.devTag = devTag; } } I messed around a lot with @JoinColumn(name=) and the mappedBy attribute of @OneToMany and I just cannot get this right. I finally got the darn thing to compile, but the query is still trying to join devices.dev_serial_num to dev_tags.tag_id, the @Id for this entity. 
Here is the transcript from the console: 13:12:16,970 INFO [STDOUT] Hibernate: select devices0_.office_id as office5_2_, devices0_.dev_id as dev1_2_, devices0_.dev_id as dev1_156_1_, devices0_.dev_name as dev2_156_1_, devices0_.dev_platform as dev3_156_1_, devices0_.dev_serial_num as dev4_156_1_, devices0_.office_id as office5_156_1_, devtags1_.tag_id as tag1_157_0_, devtags1_.comment as comment157_0_, devtags1_.dev_serial_num as dev3_157_0_, devtags1_.dev_tag_id as dev4_157_0_ from ond.devices devices0_ left outer join ond.dev_tags devtags1_ on devices0_.dev_serial_num=devtags1_.tag_id where devices0_.office_id=? 13:12:16,970 INFO [IntegerType] could not read column value from result set: dev4_156_1_; Invalid value for getInt() - 'FDO1129Y2U4' 13:12:16,970 WARN [JDBCExceptionReporter] SQL Error: 0, SQLState: S1009 13:12:16,970 ERROR [JDBCExceptionReporter] Invalid value for getInt() - 'FDO1129Y2U4' That value for getInt() 'FD01129Y2U4' is a serial number, definitely not an Int! What am I missing/misunderstanding here? Can I join 2 tables on any fields I want or does at least one have to be a primary key?

    Read the article

  • I want to use blur function instead mouseup function

    - by yossi
    I have a demo table where I can click on a cell (td tag) and change its value, which updates the PHP database directly. To do that, each cell contains two tags: 1 - a span, 2 - an input, like below. <td class='Name'> <span id="spanName1" class="text" style="display: inline;"> Somevalue </span> <input type="text" value="Somevalue" class="edittd" id="inputName1" style="display: none; "> </td> To control the data inside the cell I use jQuery's .mouseup function. mouseup works, but it also causes trouble. I need to replace it with the blur function, but when I swap mouseup for blur the code no longer works: when I click on the cell I can enter the input tag and change the value, but I can't successfully leave the field by clicking outside the table, which is what allows me to update the database. You can see the demo with blur here. What do you advise me to do? $(".edittd").mouseup(function() { return false; }); //************* $(document).mouseup(function() { $('#span' + COLUME + ROW).show(); $('#input'+ COLUME + ROW ).hide(); VAL = $("#input" + COLUME + ROW).val(); $("#span" + COLUME + ROW).html(VAL); if(STATUS != VAL){ //******ajax code //dataString = $.trim(this.value); $.ajax({ type: "POST", dataType: 'html', url: "./public/php/ajax.php", data: 'COLUME='+COLUME+'&ROW='+ROW+'&VAL='+VAL, //{"dataString": dataString} cache: false, success: function(data) { $("#statuS").html(data); } }); //******end ajax $('#statuS').removeClass('statuSnoChange') .addClass('statuSChange'); $('#statuS').html('THERE IS CHANGE'); $('#tables').load('TableEdit2.php'); } else { //alert(DATASTRING+'status not true'); } });//End mouseup function I changed it to: $(document).ready(function() { var COLUMES,COLUME,VALUE,VAL,ROWS,ROW,STATUS,DATASTRING; $('td').click(function() { COLUME = $(this).attr('class'); }); //**************** $('tr').click(function() { ROW = $(this).attr('id'); $('#display_Colume_Raw').html(COLUME+ROW); $('#span' + COLUME + ROW).hide(); $('#input'+ COLUME + ROW ).show(); STATUS = $("#input" + COLUME + ROW).val(); }); //******************** $(document).blur(function() { $('#span' + COLUME + ROW).show(); $('#input'+ COLUME + ROW ).hide(); VAL = $("#input" + COLUME + ROW).val(); $("#span" + COLUME + ROW).html(VAL); if(STATUS != VAL){ //******ajax code //dataString = $.trim(this.value); $.ajax({ type: "POST", dataType: 'html', url: "./public/php/ajax.php", data: 'COLUME='+COLUME+'&ROW='+ROW+'&VAL='+VAL, //{"dataString": dataString} cache: false, success: function(data) { $("#statuS").html(data); } }); //******end ajax $('#statuS').removeClass('statuSnoChange') .addClass('statuSChange'); $('#statuS').html('THERE IS CHANGE'); $('#tables').load('TableEdit2.php'); } else { //alert(DATASTRING+'status not true'); } });//End blur function $('#save').click (function(){ var input1,input2,input3,input4=""; input1 = $('#input1').attr('value'); input2 = $('#input2').attr('value'); input3 = $('#input3').attr('value'); input4 = $('#input4').attr('value'); $.ajax({ type: "POST", url: "./public/php/ajax.php", data: "input1="+ input1 +"&input2="+ input2 +"&input3="+ input3 +"&input4="+ input4, success: function(data){ $("#statuS").html(data); $('#tbl').hide(function(){$('div.success').fadeIn();}); $('#tables').load('TableEdit2.php'); } }); }); }); Thx

    Read the article

  • Muti-Schema Privileges for a Table Trigger in an Oracle Database

    - by sisslack
    I'm trying to write a table trigger which queries another table that is outside the schema where the trigger will reside. Is this possible? It seems like I have no problem querying tables in my own schema, but I get Error: ORA-00942: table or view does not exist when trying to query tables outside my schema. EDIT My apologies for not providing as much information as possible the first time around. I was under the impression this question was simpler than it is. I'm trying to create a trigger on a table that changes some fields on a newly inserted row, based on the existence of some data that may or may not be in a table that is in another schema. The user account that I'm using to create the trigger does have the permissions to run the queries independently. In fact, I've had my trigger print the query I'm trying to run and was able to run it on its own successfully. I should also note that I'm building the query dynamically using the EXECUTE IMMEDIATE statement. Here's an example: CREATE OR REPLACE TRIGGER MAIN_SCHEMA.EVENTS BEFORE INSERT ON MAIN_SCHEMA.EVENTS REFERENCING OLD AS OLD NEW AS NEW FOR EACH ROW DECLARE rtn_count NUMBER := 0; table_name VARCHAR2(17) := :NEW.SOME_FIELD; key_field VARCHAR2(20) := :NEW.ANOTHER_FIELD; BEGIN CASE WHEN (key_field = 'condition_a') THEN EXECUTE IMMEDIATE 'select count(*) from OTHER_SCHEMA_A.'||table_name||' where KEY_FIELD='''||key_field||'''' INTO rtn_count; WHEN (key_field = 'condition_b') THEN EXECUTE IMMEDIATE 'select count(*) from OTHER_SCHEMA_B.'||table_name||' where KEY_FIELD='''||key_field||'''' INTO rtn_count; WHEN (key_field = 'condition_c') THEN EXECUTE IMMEDIATE 'select count(*) from OTHER_SCHEMA_C.'||table_name||' where KEY_FIELD='''||key_field||'''' INTO rtn_count; END CASE; IF (rtn_count > 0) THEN -- change some fields that are to be inserted END IF; END; The trigger seems to fail on the EXECUTE IMMEDIATE with the previously mentioned error. EDIT I have done some more research and I can offer more clarification. The user account I'm using to create this trigger is not MAIN_SCHEMA or any one of the OTHER_SCHEMA_Xs. The account I'm using (ME) is given privileges to the involved tables by the schema owners themselves. For example (USER_TAB_PRIVS): GRANTOR GRANTEE TABLE_SCHEMA TABLE_NAME PRIVILEGE GRANTABLE HIERARCHY MAIN_SCHEMA ME MAIN_SCHEMA EVENTS DELETE NO NO MAIN_SCHEMA ME MAIN_SCHEMA EVENTS INSERT NO NO MAIN_SCHEMA ME MAIN_SCHEMA EVENTS SELECT NO NO MAIN_SCHEMA ME MAIN_SCHEMA EVENTS UPDATE NO NO OTHER_SCHEMA_X ME OTHER_SCHEMA_X TARGET_TBL SELECT NO NO And I have the following system privileges (USER_SYS_PRIVS): USERNAME PRIVILEGE ADMIN_OPTION ME ALTER ANY TRIGGER NO ME CREATE ANY TRIGGER NO ME UNLIMITED TABLESPACE NO And this is what I found in the Oracle documentation: To create a trigger in another user's schema, or to reference a table in another schema from a trigger in your schema, you must have the CREATE ANY TRIGGER system privilege. With this privilege, the trigger can be created in any schema and can be associated with any user's table. In addition, the user creating the trigger must also have EXECUTE privilege on the referenced procedures, functions, or packages. Here: Oracle Doc So it looks to me like this should work, but I'm not sure about the "EXECUTE privilege" it's referring to in the doc.
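    One detail that may explain the ORA-00942: the trigger in the example is created in, and therefore owned by, MAIN_SCHEMA, and a trigger body runs with the privileges of its owner rather than of the account that compiled it; in addition, privileges acquired through roles are not visible inside definer's-rights PL/SQL. So the grants that matter at runtime are direct grants to MAIN_SCHEMA, not the grants listed for ME in USER_TAB_PRIVS. A hedged sketch, using the schema and table names from the question:

        -- Run as OTHER_SCHEMA_A (or a DBA); repeat for OTHER_SCHEMA_B and OTHER_SCHEMA_C.
        -- The grant must go directly to the trigger owner, not through a role.
        GRANT SELECT ON OTHER_SCHEMA_A.TARGET_TBL TO MAIN_SCHEMA;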

    Read the article

  • PHP mysql multi-table search returning different table and field data row after row

    - by Kinyanjui Kamau
    Hi Guys, I am building a social networking site that is dedicated to nightclubs and night events. Among other tables, I have a users table, events table and establishments table. I am really intrigued with how Facebook in particular is able to query and return matches of not just users but also pages, ads etc row after row. Im sure most who are reading this have tried the facebook search My question is in my case, should I: Perform 3 separate LIKE %search% on each of the tables on search.php. Draw up 3 separate tables to show the results of what matches in the relevant queries which are collapsed when empty(on the same search.php) ie In search.php //query users table $query_user = "SELECT user_first_name, user_last_name, username, picture_thumb_url, avatar FROM users JOIN picture ON users.user_id = picture.user_id AND picture.avatar=1 ORDER BY users.user_id"; $result_users = mysql_query($query_user, $connections) or die(mysql_error()); $row_result_users = mysql_fetch_assoc($wid_updates); //query events table $query_event = "SELECT event_thumb_url, event_name, event_venue, event_date FROM event WHERE event_name LIKE '%".$search_value."%'"; $event = mysql_query($query_event, $connections) or die(mysql_error()); $row_event = mysql_fetch_assoc($event); $totalRows_event = mysql_num_rows($event); //query establishments table $query_establishment = "SELECT establishment_thumb_url, establishment_name, location_id, establishment_pricing FROM establishment WHERE establishment_name LIKE '%".$search_value."%'"; $establishment = mysql_query($query_establishment, $connections) or die(mysql_error()); $row_establishment = mysql_fetch_assoc($establishment); $totalRows_establishment = mysql_num_rows($establishment); My html: <table max-width="500" name="users" border="0"> <tr> <td width="50" height="50"></td> <td width="150"></td> <td width="150"></td> <td width="150"></td> </tr> </table> <table width="500" name="events" border="0"> <tr> <td width="50" height="50"><a href="#profile.php"><img src="Images/<?php echo $row_event['event_thumb_url']; ?>" border="0" height="50" width="50"/></a></td> <td width="150"><?php echo $row_event['event_name']; ?></td> <td width="150"><?php echo $row_event['event_venue']; ?></td> <td width="150"><?php echo $row_event['event_date']; ?></td> </tr> </table> <table width="500" name="establishments" border="0"> <tr> <td width="50" height="50"><a href="#profile.php"><img src="Establishment_Images/<?php echo $row_establishment['establishment_thumb_url']; ?>" border="0" height="50" width="50"/></a></td> <td width="150"><?php echo $row_establishment['establishment_name']; ?></td> <td width="150"><?php echo $row_establishment['location_id']; ?></td> <td width="150"><?php echo $row_establishment['establishment_pricing']; ?></td> </tr> </table> I haven't populated the PHP echo's for the user table. This is just to give you an idea of what I am trying to do. Any assistance?
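    If the goal is a single result set that mixes users, events and establishments row after row, one option worth weighing against three separate LIKE queries is a UNION ALL with a discriminator column. The sketch below is only an illustration: it assumes the column names from the queries above, pads each branch to the same column list, and uses '%search%' as a stand-in for the escaped $search_value (which should be run through mysql_real_escape_string or bound as a parameter).

        SELECT 'user' AS result_type, u.username AS title, p.picture_thumb_url AS thumb, '' AS detail
        FROM   users u
               JOIN picture p ON p.user_id = u.user_id AND p.avatar = 1
        WHERE  u.user_first_name LIKE '%search%' OR u.user_last_name LIKE '%search%'
        UNION ALL
        SELECT 'event', e.event_name, e.event_thumb_url, e.event_venue
        FROM   event e
        WHERE  e.event_name LIKE '%search%'
        UNION ALL
        SELECT 'establishment', s.establishment_name, s.establishment_thumb_url, s.location_id
        FROM   establishment s
        WHERE  s.establishment_name LIKE '%search%'

    PHP then needs only one loop over the combined result, branching on result_type to decide which table row template to emit.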

    Read the article

  • How to make MySQL utilize available system resources, or find "the real problem"?

    - by anonymous coward
    This is a MySQL 5.0.26 server, running on SuSE Enterprise 10. This may be a Serverfault question. The web user interface that uses these particular queries (below) is showing sometimes 30+, even up to 120+ seconds at the worst, to generate the pages involved. On development, when the queries are run alone, they take up to 20 seconds on the first run (with no query cache enabled) but anywhere from 2 to 7 seconds after that - I assume because the tables and indexes involved have been placed into ram. From what I can tell, the longest load times are caused by Read/Update Locking. These are MyISAM tables. So it looks like a long update comes in, followed by a couple 7 second queries, and they're just adding up. And I'm fine with that explanation. What I'm not fine with is that MySQL doesn't appear to be utilizing the hardware it's on, and while the bottleneck seems to be the database, I can't understand why. I would say "throw more hardware at it", but we did and it doesn't appear to have changed the situation. Viewing a 'top' during the slowest times never shows much cpu or memory utilization by mysqld, as if the server is having no trouble at all - but then, why are the queries taking so long? How can I make MySQL use the crap out of this hardware, or find out what I'm doing wrong? Extra Details: On the "Memory Health" tab in the MySQL Administrator (for Windows), the Key Buffer is less than 1/8th used - so all the indexes should be in RAM. I can provide a screen shot of any graphs that might help. So desperate to fix this issue. Suffice it to say, there is legacy code "generating" these queries, and they're pretty much stuck the way they are. I have tried every combination of Indexes on the tables involved, but any suggestions are welcome. Here's the current Create Table statement from development (the 'experimental' key I have added, seems to help a little, for the example query only): CREATE TABLE `registration_task` ( `id` varchar(36) NOT NULL default '', `date_entered` datetime NOT NULL default '0000-00-00 00:00:00', `date_modified` datetime NOT NULL default '0000-00-00 00:00:00', `assigned_user_id` varchar(36) default NULL, `modified_user_id` varchar(36) default NULL, `created_by` varchar(36) default NULL, `name` varchar(80) NOT NULL default '', `status` varchar(255) default NULL, `date_due` date default NULL, `time_due` time default NULL, `date_start` date default NULL, `time_start` time default NULL, `parent_id` varchar(36) NOT NULL default '', `priority` varchar(255) NOT NULL default '9', `description` text, `order_number` int(11) default '1', `task_number` int(11) default NULL, `depends_on_id` varchar(36) default NULL, `milestone_flag` varchar(255) default NULL, `estimated_effort` int(11) default NULL, `actual_effort` int(11) default NULL, `utilization` int(11) default '100', `percent_complete` int(11) default '0', `deleted` tinyint(1) NOT NULL default '0', `wf_task_id` varchar(36) default '0', `reg_field` varchar(8) default '', `date_offset` int(11) default '0', `date_source` varchar(10) default '', `date_completed` date default '0000-00-00', `completed_id` varchar(36) default NULL, `original_name` varchar(80) default NULL, PRIMARY KEY (`id`), KEY `idx_reg_task_p` (`deleted`,`parent_id`), KEY `By_Assignee` (`assigned_user_id`,`deleted`), KEY `status_assignee` (`status`,`deleted`), KEY `experimental` (`deleted`,`status`,`assigned_user_id`,`parent_id`,`date_due`) ) ENGINE=MyISAM DEFAULT CHARSET=latin1 And one of the ridiculous queries in question: SELECT users.user_name 
assigned_user_name, registration.FIELD001 parent_name, registration_task.status status, registration_task.date_modified date_modified, registration_task.date_due date_due, registration.FIELD240 assigned_wf, if(LENGTH(registration_task.description)>0,1,0) has_description, registration_task.* FROM registration_task LEFT JOIN users ON registration_task.assigned_user_id=users.id LEFT JOIN registration ON registration_task.parent_id=registration.id where (registration_task.status != 'Completed' AND registration.FIELD001 LIKE '%' AND registration_task.name LIKE '%' AND registration.FIELD060 LIKE 'GN001472%') AND registration_task.deleted=0 ORDER BY date_due asc LIMIT 0,20; my.cnf - '[mysqld]' section. [mysqld] port = 3306 socket = /var/lib/mysql/mysql.sock skip-locking key_buffer = 384M max_allowed_packet = 100M table_cache = 2048 sort_buffer_size = 2M net_buffer_length = 100M read_buffer_size = 2M read_rnd_buffer_size = 160M myisam_sort_buffer_size = 128M query_cache_size = 16M query_cache_limit = 1M EXPLAIN above query, without additional index: +----+-------------+-------------------+--------+--------------------------------+----------------+---------+------------------------------------------------+---------+-----------------------------+ | id | select_type | table | type | possible_keys | key | key_len | ref | rows | Extra | +----+-------------+-------------------+--------+--------------------------------+----------------+---------+------------------------------------------------+---------+-----------------------------+ | 1 | SIMPLE | registration_task | ref | idx_reg_task_p,status_assignee | idx_reg_task_p | 1 | const | 1067354 | Using where; Using filesort | | 1 | SIMPLE | registration | eq_ref | PRIMARY,gbl | PRIMARY | 8 | sugarcrm401.registration_task.parent_id | 1 | Using where | | 1 | SIMPLE | users | ref | PRIMARY | PRIMARY | 38 | sugarcrm401.registration_task.assigned_user_id | 1 | | +----+-------------+-------------------+--------+--------------------------------+----------------+---------+------------------------------------------------+---------+-----------------------------+ EXPLAIN above query, with 'experimental' index: +----+-------------+-------------------+--------+-----------------------------------------------------------+------------------+---------+------------------------------------------------+--------+-----------------------------+ | id | select_type | table | type | possible_keys | key | key_len | ref | rows | Extra | +----+-------------+-------------------+--------+-----------------------------------------------------------+------------------+---------+------------------------------------------------+--------+-----------------------------+ | 1 | SIMPLE | registration_task | range | idx_reg_task_p,status_assignee,NewIndex1,tcg_experimental | tcg_experimental | 259 | NULL | 103345 | Using where; Using filesort | | 1 | SIMPLE | registration | eq_ref | PRIMARY,gbl | PRIMARY | 8 | sugarcrm401.registration_task.parent_id | 1 | Using where | | 1 | SIMPLE | users | ref | PRIMARY | PRIMARY | 38 | sugarcrm401.registration_task.assigned_user_id | 1 | | +----+-------------+-------------------+--------+-----------------------------------------------------------+------------------+---------+------------------------------------------------+--------+-----------------------------+
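    Looking at the first EXPLAIN, the optimiser walks roughly a million rows through idx_reg_task_p and then filesorts them before LIMIT 0,20 can apply. One experiment that sometimes helps queries of this shape (an equality filter, an ORDER BY, and a small LIMIT) is an index whose leading column matches the equality predicate and whose trailing column matches the ORDER BY, so rows can be read already sorted by date_due and the scan can stop early. This is a sketch, not a guaranteed win; whether the optimiser picks the index depends on the selectivity of the remaining predicates.

        -- Column names taken from the CREATE TABLE above.
        ALTER TABLE registration_task
          ADD INDEX idx_deleted_datedue (deleted, date_due);

        -- Then re-run EXPLAIN on the original query and check whether
        -- "Using filesort" disappears from the Extra column.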

    Read the article

  • Which of CouchDB or MongoDB suits my needs?

    - by vonconrad
    Where I work, we use Ruby on Rails to create both backend and frontend applications. Usually, these applications interact with the same MySQL database. It works great for a majority of our data, but we have one situation which I would like to move to a NoSQL environment. We have clients, and our clients have what we call "inventories"--one or more of them. An inventory can have many thousands of items. This is currently done through two relational database tables, inventories and inventory_items. The problems start when two different inventories have different parameters: # Inventory item from inventory 1, televisions { inventory_id: 1 sku: 12345 name: Samsung LCD 40 inches model: 582903-4 brand: Samsung screen_size: 40 type: LCD price: 999.95 } # Inventory item from inventory 2, accomodation { inventory_id: 2 sku: 48cab23fa name: New York Hilton accomodation_type: hotel star_rating: 5 price_per_night: 395 } Since we obviously can't use brand or star_rating as the column name in inventory_items, our solution so far has been to use generic column names such as text_a, text_b, float_a, int_a, etc, and introduce a third table, inventory_schemas. The tables now look like this: # Inventory schema for inventory 1, televisions { inventory_id: 1 int_a: sku text_a: name text_b: model text_c: brand int_b: screen_size text_d: type float_a: price } # Inventory item from inventory 1, televisions { inventory_id: 1 int_a: 12345 text_a: Samsung LCD 40 inches text_b: 582903-4 text_c: Samsung int_a: 40 text_d: LCD float_a: 999.95 } This has worked well... up to a point. It's clunky, it's unintuitive and it lacks scalability. We have to devote resources to set up inventory schemas. Using separate tables is not an option. Enter NoSQL. With it, we could let each and every item have their own parameters and still store them together. From the research I've done, it certainly seems like a great alterative for this situation. Specifically, I've looked at CouchDB and MongoDB. Both look great. However, there are a few other bits and pieces we need to be able to do with our inventory: We need to be able to select items from only one (or several) inventories. We need to be able to filter items based on its parameters (eg. get all items from inventory 2 where type is 'hotel'). We need to be able to group items based on parameters (eg. get the lowest price from items in inventory 1 where brand is 'Samsung'). We need to (potentially) be able to retrieve thousands of items at a time. We need to be able to access the data from multiple applications; both backend (to process data) and frontend (to display data). Rapid bulk insertion is desired, though not required. Based on the structure, and the requirements, are either CouchDB or MongoDB suitable for us? If so, which one will be the best fit? Thanks for reading, and thanks in advance for answers. EDIT: One of the reasons I like CouchDB is that it would be possible for us in the frontend application to request data via JavaScript directly from the server after page load, and display the results without having to use any backend code whatsoever. This would lead to better page load and less server strain, as the fetching/processing of the data would be done client-side.

    Read the article

  • SQL SERVER – Shrinking NDF and MDF Files – Readers’ Opinion

    - by pinaldave
    Previously, I had written a blog post about SQL SERVER – Shrinking NDF and MDF Files – A Safe Operation. After that, I have written the following blog post that talks about the advantage and disadvantage of Shrinking and why one should not be Shrinking a file SQL SERVER – SHRINKFILE and TRUNCATE Log File in SQL Server 2008. On this subject, SQL Server Expert Imran Mohammed left an excellent comment. I just feel that his comment is worth a big article itself. For everybody to read his wonderful explanation, I am posting this blog post here. Thanks Imran! Shrinking Database always creates performance degradation and increases fragmentation in the database. I suggest that you keep that in mind before you start reading the following comment. If you are going to say Shrinking Database is bad and evil, here I am saying it first and loud. Now, the comment of Imran is written while keeping in mind only the process showing how the Shrinking Database Operation works. Imran has already explained his understanding and requests further explanation. I have removed the Best Practices section from Imran’s comments, as there are a few corrections. Comments from Imran - Before I explain to you the concept of Shrink Database, let us understand the concept of Database Files. When we create a new database inside the SQL Server, it is typical that SQl Server creates two physical files in the Operating System: one with .MDF Extension, and another with .LDF Extension. .MDF is called as Primary Data File. .LDF is called as Transactional Log file. If you add one or more data files to a database, the physical file that will be created in the Operating System will have an extension of .NDF, which is called as Secondary Data File; whereas, when you add one or more log files to a database, the physical file that will be created in the Operating System will have the same extension as .LDF. The questions now are, “Why does a new data file have a different extension (.NDF)?”, “Why is it called as a secondary data file?” and, “Why is .MDF file called as a primary data file?” Answers: Note: The following explanation is based on my limited knowledge of SQL Server, so experts please do comment. A data file with a .MDF extension is called a Primary Data File, and the reason behind it is that it contains Database Catalogs. Catalogs mean Meta Data. Meta Data is “Data about Data”. An example for Meta Data includes system objects that store information about other objects, except the data stored by the users. sysobjects stores information about all objects in that database. sysindexes stores information about all indexes and rows of every table in that database. syscolumns stores information about all columns that each table has in that database. sysusers stores how many users that database has. Although Meta Data stores information about other objects, it is not the transactional data that a user enters; rather, it’s a system data about the data. Because Primary Data File (.MDF) contains important information about the database, it is treated as a special file. It is given the name Primary Data file because it contains the Database Catalogs. This file is present in the Primary File Group. You can always create additional objects (Tables, indexes etc.) in the Primary data file (This file is present in the Primary File group), by mentioning that you want to create this object under the Primary File Group. 
Any additional data file that you add to the database will have only transactional data but no Meta Data, so that’s why it is called as the Secondary Data File. It is given the extension name .NDF so that the user can easily identify whether a specific data file is a Primary Data File or a Secondary Data File(s). There are many advantages of storing data in different files that are under different file groups. You can put your read only in the tables in one file (file group) and read-write tables in another file (file group) and take a backup of only the file group that has read the write data, so that you can avoid taking the backup of a read-only data that cannot be altered. Creating additional files in different physical hard disks also improves I/O performance. A real-time scenario where we use Files could be this one: Let’s say you have created a database called MYDB in the D-Drive which has a 50 GB space. You also have 1 Database File (.MDF) and 1 Log File on D-Drive and suppose that all of that 50 GB space has been used up and you do not have any free space left but you still want to add an additional space to the database. One easy option would be to add one more physical hard disk to the server, add new data file to MYDB database and create this new data file in a new hard disk then move some of the objects from one file to another, and put the file group under which you added new file as default File group, so that any new object that is created gets into the new files, unless specified. Now that we got a basic idea of what data files are, what type of data they store and why they are named the way they are, let’s move on to the next topic, Shrinking. First of all, I disagree with the Microsoft terminology for naming this feature as “Shrinking”. Shrinking, in regular terms, means to reduce the size of a file by means of compressing it. BUT in SQL Server, Shrinking DOES NOT mean compressing. Shrinking in SQL Server means to remove an empty space from database files and release the empty space either to the Operating System or to SQL Server. Let’s examine this through an example. Let’s say you have a database “MYDB” with a size of 50 GB that has a free space of about 20 GB, which means 30GB in the database is filled with data and the 20 GB of space is free in the database because it is not currently utilized by the SQL Server (Database); it is reserved and not yet in use. If you choose to shrink the database and to release an empty space to Operating System, and MIND YOU, you can only shrink the database size to 30 GB (in our example). You cannot shrink the database to a size less than what is filled with data. So, if you have a database that is full and has no empty space in the data file and log file (you don’t have an extra disk space to set Auto growth option ON), YOU CANNOT issue the SHRINK Database/File command, because of two reasons: There is no empty space to be released because the Shrink command does not compress the database; it only removes the empty space from the database files and there is no empty space. Remember, the Shrink command is a logged operation. When we perform the Shrink operation, this information is logged in the log file. If there is no empty space in the log file, SQL Server cannot write to the log file and you cannot shrink a database. Now answering your questions: (1) Q: What are the USEDPAGES & ESTIMATEDPAGES that appear on the Results Pane after using the DBCC SHRINKDATABASE (NorthWind, 10) ? 
    A: According to Books Online (for SQL Server 2000): UsedPages: the number of 8-KB pages currently used by the file. EstimatedPages: the number of 8-KB pages that SQL Server estimates the file could be shrunk down to. Important Note: Before asking any question, make sure you go through Books Online or do a quick search first. Doing so has two advantages: 1. If someone else has already had this question, the chances that it is already answered are better than 50%. 2. It reduces your waiting time for the answer. (2) Q: What is the difference between shrinking the database using a DBCC command like the one above and shrinking it from the Enterprise Manager Console by right-clicking the database, going to TASKS and then selecting the SHRINK option, in a SQL Server 2000 environment? A: As far as my knowledge goes, there is no difference; both work the same way. One advantage of running the command from Query Analyzer is that your console won't be frozen, so you can carry on with your regular activities in Enterprise Manager. (3) Q: What is this .NDF file that is discussed above? I have never heard of it. What is it used for? Is it used by end-users, DBAs or the SERVER/SYSTEM itself? A: An .NDF file is a secondary data file. You have never heard of it because when a database is created, SQL Server creates it by default with only 1 data file (.MDF) and 1 log file (.LDF), or however your model database has been set up, because the model database is a template used every time you create a new database with the CREATE DATABASE command. Unless you have added an extra data file, you will not see one. This file is used by SQL Server to store the data saved by users. Hope this information helps. I would like to ask the experts to please comment if what I understand is not what the Microsoft guys meant. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Readers Contribution, Readers Question, SQL, SQL Authority, SQL Query, SQL Scripts, SQL Server, SQL Tips and Tricks, T SQL, Technology
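    For reference, the two DBCC forms being discussed look like this. The SHRINKDATABASE call is the one quoted in question (1); the SHRINKFILE call uses a hypothetical logical file name (MYDB_Data) and the 30 GB of used space from the MYDB example, expressed as a target size in MB. Given the fragmentation caveat at the top of the post, any shrink should normally be followed by rebuilding the affected indexes.

        -- Shrink the whole database, leaving 10 percent free space (the call from question 1):
        DBCC SHRINKDATABASE (NorthWind, 10);

        -- Shrink a single data file to a target size in MB; this takes the *logical*
        -- file name (here the hypothetical MYDB_Data), not the physical .mdf path:
        USE MYDB;
        DBCC SHRINKFILE (MYDB_Data, 30000);   -- 30000 MB, roughly the 30 GB of used space in the example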

    Read the article

  • Week in Geek: LastPass Rescues Xmarks Edition

    - by Asian Angel
    This week we learned how to breathe new life into an aging Windows Mobile 6.x device, use filters in Photoshop, backup and move VirtualBox machines, use the BitDefender Rescue CD to clean an infected PC, and had fun setting up a pirates theme on our computers. Photo by _nash. Weekly Feature Do you love using the Faenza icon set on your Ubuntu system but feel that there are a few much needed icons missing (or you desire a different version of a particular icon)? Then you may want to take a look at the Faenza Variants icon pack. The icons are available in the following sizes: 16px, 22px, 32px, 48px and scalable sizes. Photo by Asian Angel. Faenza Variants Random Geek Links Another week with extra link goodness to help keep you on top of the news. Photo by Asian Angel. LastPass acquires Xmarks, premium service announced Xmarks announced that it has been acquired by LastPass, a cross-platform password management service. This also means that Xmarks is now in transition from a “free” to a “freemium” business model. WikiLeaks reappears on European Net domains WikiLeaks has re-emerged on a Swiss Internet domain followed by domains in Germany, Finland, and the Netherlands, sidestepping a move that had in effect taken the controversial site off the Internet. Iran: Yes, Stuxnet hurt our nuclear program The Stuxnet worm got some big play from Iranian President Mahmoud Ahmadinejad, who acknowledged that the malware dinged his nuclear program. More Windows Rogues than Just AV – Fake Defragmenter Check Disk Don’t think for a second that rogues are limited to scareware, because as so-called products such as “System Defragmenter”, “Scan Disk” “Check Disk” prove, they’re not. Internet Explorer’s Protected Mode can be bypassed Researchers from Verizon Business have now described a way of bypassing Protected Mode in IE 7 and 8 in order to gain access to user accounts. Can you really see who viewed your Facebook profile? Rogue application spreads virally Once again, a rogue application is spreading virally between Facebook users pretending to offer you a way of seeing who has viewed your profile. More holes in Palm’s WebOS Researchers Orlando Barrera and Daniel Herrera, who both work for security firm SecTheory, have discovered a gaping security hole in Palm’s WebOS smartphone operating system. Next-gen banking Trojans hit APAC With the proliferation of banking Trojans, Web and smartphone users of online banking services have to be on constant alert to avoid falling prey to fraud schemes, warned Etay Maor, project manager for RSA Fraud Action. AVG update cripples 64-bit computers A signature update automatically deployed by the AVG virus scanner Thursday has crippled numerous computers. Article includes link to forums to fix computers affected after a restart. Congress moves to outlaw ‘mystery charges’ for Web shoppers Legislation that makes it illegal for Web merchants and so-called post-transaction marketers to charge credit cards without the card owners’ say-so came closer to becoming law this week. Ballmer Set to “Look Into” Windows Home Server Drive Extender Fiasco Tuesday’s announcement from Microsoft regarding the removal of Drive Extender from Windows Home Server has sent shock waves across the web. Google tweaks search recipe to ding scam artists Google has changed its search algorithm to penalize sites deemed to provide an “extremely poor user experience” following a New York Times story on a merchant who justified abusive behavior towards customers as a search-engine optimization tactic. 
Geek Video of the Week Watch as our two friends debate back and forth about the early adoption of new technology through multiple time periods (Stone Age to the far future). Will our reluctant friend finally succumb to the temptation? Photo by CollegeHumor. Early Adopters Through History Random TinyHacker Links Fix Issues in Windows 7 Using Reliability Monitor Learn how to analyze Windows 7 errors and then fix them using the built-in reliability monitor. Learn About IE Tab Groups Tab groups is a useful feature in IE 8. Here’s a detailed guide to what it is all about. Google’s Book Helps You Learn About Browsers and Web A cool new online book by the Google Chrome team on browsers and the web. TrustPort Internet Security 2011 – Good Security from a Less Known Provider TrustPort is not exactly a well-known provider of security solutions. At least not in the consumer space. This review tests in detail their latest offering. How the World is Using Cell phones An infographic showing the shocking demographics of cell phone use. Super User Questions See the great answers to these questions from Super User. I am unable to access my C drive. It says it is unable to display current owner. List of Windows special directories/shortcuts like ‘%TEMP%’ Is using multiple passes for wiping a disk really necessary? How can I view two files side by side in Notepad++ Is there any tool that automatically puts screenshots to my Dropbox? How-To Geek Weekly Article Recap Look through our hottest articles from this past week at How-To Geek. How to Create a Software RAID Array in Windows 7 9 Alternatives for Windows Home Server’s Drive Extender Why Doesn’t Disk Cleanup Delete Everything from the Temp Folder? Ask the Readers: How Much Do You Customize Your Operating System? How to Upload Really Large Files to SkyDrive, Dropbox, or Email One Year Ago on How-To Geek Enjoy reading through these awesome articles from one year ago. How To Upgrade from Vista to Windows 7 Home Premium Edition How To Fix No Aero Transparency in Windows 7 Troubleshoot Startup Problems with Startup Repair Tool in Windows 7 & Vista Rename the Guest Account in Windows 7 for Enhanced Security Disable Error Reporting in XP, Vista, and Windows 7 The Geek Note That wraps things up here for this week. Regardless of the weather wherever you may be, we hope that you have an opportunity to get outside and have some fun! Remember to keep sending those great tips in to us at [email protected]. Photo by Tony the Misfit. Latest Features How-To Geek ETC The How-To Geek Guide to Learning Photoshop, Part 8: Filters Get the Complete Android Guide eBook for Only 99 Cents [Update: Expired] Improve Digital Photography by Calibrating Your Monitor The How-To Geek Guide to Learning Photoshop, Part 7: Design and Typography How to Choose What to Back Up on Your Linux Home Server How To Harmonize Your Dual-Boot Setup for Windows and Ubuntu Hang in There Scrat! – Ice Age Wallpaper How Do You Know When You’ve Passed Geek and Headed to Nerd? On The Tip – A Lamborghini Theme for Chrome and Iron What if Wile E. Coyote and the Road Runner were Human? [Video] Peaceful Winter Cabin Wallpaper Store Tabs for Later Viewing in Opera with Tab Vault

    Read the article

  • Random movement of wandering monsters in x & y axis in LibGDX

    - by Vishal Kumar
    I am making a simple top down RPG game in LibGDX. What I want is ... the enemies should wander here and there in x and y directions in certain interval so that it looks natural that they are guarding something. I spend several hours doing this but could not achieve what I want. After a long time of coding, I came with this code. But what I observed is when enemies come to an end of x or start of x or start of y or end of y of the map. It starts flickering for random intervals. Sometimes they remain nice, sometimes, they start flickering for long time. public class Enemy extends Sprite { public float MAX_VELOCITY = 0.05f; public static final int MOVING_LEFT = 0; public static final int MOVING_RIGHT = 1; public static final int MOVING_UP = 2; public static final int MOVING_DOWN = 3; public static final int HORIZONTAL_GUARD = 0; public static final int VERTICAL_GUARD = 1; public static final int RANDOM_GUARD = 2; private float startTime = System.nanoTime(); private static float SECONDS_TIME = 0; private boolean randomDecider; public int enemyType; public static final float width = 30 * World.WORLD_UNIT; public static final float height = 32 * World.WORLD_UNIT; public int state; public float stateTime; public boolean visible; public boolean dead; public Enemy(float x, float y, int enemyType) { super(x, y); state = MOVING_LEFT; this.enemyType = enemyType; stateTime = 0; visible = true; dead = false; boolean random = Math.random()>=0.5f ? true :false; if(enemyType == HORIZONTAL_GUARD){ if(random) velocity.x = -MAX_VELOCITY; else velocity.x = MAX_VELOCITY; } if(enemyType == VERTICAL_GUARD){ if(random) velocity.y = -MAX_VELOCITY; else velocity.y = MAX_VELOCITY; } if(enemyType == RANDOM_GUARD){ //if(random) //velocity.x = -MAX_VELOCITY; //else //velocity.y = MAX_VELOCITY; } } public void update(Enemy e, float deltaTime) { super.update(deltaTime); e.stateTime+= deltaTime; e.position.add(velocity); // This is for updating the Animation for Enemy Movement Direction. VERY IMPORTANT FOR REAL EFFECTS updateDirections(); //Here the various movement methods are called depending upon the type of the Enemy if(e.enemyType == HORIZONTAL_GUARD) guardHorizontally(); if(e.enemyType == VERTICAL_GUARD) guardVertically(); if(e.enemyType == RANDOM_GUARD) guardRandomly(); //quadrantMovement(e, deltaTime); } private void guardHorizontally(){ if(position.x <= 0){ velocity.x= MAX_VELOCITY; velocity.y= 0; } else if(position.x>= World.mapWidth-width){ velocity.x= -MAX_VELOCITY; velocity.y= 0; } } private void guardVertically(){ if(position.y<= 0){ velocity.y= MAX_VELOCITY; velocity.x= 0; } else if(position.y>= World.mapHeight- height){ velocity.y= -MAX_VELOCITY; velocity.x= 0; } } private void guardRandomly(){ if (System.nanoTime() - startTime >= 1000000000) { SECONDS_TIME++; if(SECONDS_TIME % 5==0) randomDecider = Math.random()>=0.5f ? 
true :false; if(SECONDS_TIME>=30) SECONDS_TIME =0; startTime = System.nanoTime(); } if(SECONDS_TIME <=30){ if(randomDecider && position.x >= 0) velocity.x= -MAX_VELOCITY; else{ if(position.x < World.mapWidth-width) velocity.x= MAX_VELOCITY; else velocity.x= -MAX_VELOCITY; } velocity.y =0; } else{ if(randomDecider && position.y >0) velocity.y= -MAX_VELOCITY; else velocity.y= MAX_VELOCITY; velocity.x =0; } /* //This is essential so as to keep the enemies inside the boundary of the Map if(position.x <= 0){ velocity.x= MAX_VELOCITY; //velocity.y= 0; } else if(position.x>= World.mapWidth-width){ velocity.x= -MAX_VELOCITY; //velocity.y= 0; } else if(position.y<= 0){ velocity.y= MAX_VELOCITY; //velocity.x= 0; } else if(position.y>= World.mapHeight- height){ velocity.y= -MAX_VELOCITY; //velocity.x= 0; } */ } private void updateDirections() { if(velocity.x > 0) state = MOVING_RIGHT; else if(velocity.x<0) state = MOVING_LEFT; else if(velocity.y>0) state = MOVING_UP; else if(velocity.y<0) state = MOVING_DOWN; } public Rectangle getBounds() { return new Rectangle(position.x, position.y, width, height); } private void quadrantMovement(Enemy e, float deltaTime) { int temp = e.getEnemyQuadrant(e.position.x, e.position.y); boolean random = Math.random()>=0.5f ? true :false; switch(temp){ case 1: velocity.x = MAX_VELOCITY; break; case 2: velocity.x = MAX_VELOCITY; break; case 3: velocity.x = -MAX_VELOCITY; break; case 4: velocity.x = -MAX_VELOCITY; break; default: if(random) velocity.x = MAX_VELOCITY; else velocity.y =-MAX_VELOCITY; } } public float getDistanceFromPoint(float p1,float p2){ Vector2 v1 = new Vector2(p1,p2); return position.dst(v1); } private int getEnemyQuadrant(float x, float y){ Rectangle enemyQuad = new Rectangle(x, y, 30, 32); if(ScreenQuadrants.getQuad1().contains(enemyQuad)) return 1; if(ScreenQuadrants.getQuad2().contains(enemyQuad)) return 2; if(ScreenQuadrants.getQuad3().contains(enemyQuad)) return 3; if(ScreenQuadrants.getQuad4().contains(enemyQuad)) return 4; return 0; } } Is there a better way of doing this. I am new to game development. I shall be very grateful to any help or reference.

    Read the article

  • SSIS - XML Source Script

    - by simonsabin
    The XML Source in SSIS is great if you have a 1 to 1 mapping between entity and table. You can do more complex mapping, but it becomes very messy and won't perform. What other options do you have? The challenge with XML processing is not to need a huge amount of memory. I remember using the early versions of BizTalk, which loaded the whole document into memory to map from one document type to another. This was fine for small documents but was an absolute killer for large documents. You therefore need a streaming approach. For flexibility, however, you want to be able to generate your rows easily, and if you've ever used the XmlReader you will know it's ugly code to write. That brings me on to LINQ. There is an implementation of LINQ over XML which is really nice. You can write nice LINQ queries instead of the XmlReader stuff. The downside is that by default LINQ to XML requires a whole XML document to work with. No streaming. Your code would look like this. We create an XDocument and then enumerate over a set of anonymous types we generate from our LINQ statement: XDocument x = XDocument.Load("C:\\TEMP\\CustomerOrders-Attribute.xml"); foreach (var xdata in (from customer in x.Elements("OrderInterface").Elements("Customer") from order in customer.Elements("Orders").Elements("Order") select new { Account = customer.Attribute("AccountNumber").Value , OrderDate = order.Attribute("OrderDate").Value } )) { Output0Buffer.AddRow(); Output0Buffer.AccountNumber = xdata.Account; Output0Buffer.OrderDate = Convert.ToDateTime(xdata.OrderDate); } As I said, the downside to this is that you are loading the whole document into memory. I did some googling and came across some helpful videos from a nice UK DPE, Mike Taulty (http://www.microsoft.com/uk/msdn/screencasts/screencast/289/LINQ-to-XML-Streaming-In-Large-Documents.aspx), which show how you can combine LINQ and the XmlReader to get a semi-streaming approach. I took what he did and implemented it in SSIS. What I found odd was that when I ran it I got different numbers between the streamed and non-streamed versions. I found the cause was a little bug in Mike's code that caused the pointer in the XmlReader to progress past the start of the element. The streaming version of the loop looks like this: foreach (var xdata in (from customer in StreamReader("C:\\TEMP\\CustomerOrders-Attribute.xml","Customer") from order in customer.Elements("Orders").Elements("Order") select new { Account = customer.Attribute("AccountNumber").Value , OrderDate = order.Attribute("OrderDate").Value } )) { Output0Buffer.AddRow(); Output0Buffer.AccountNumber = xdata.Account; Output0Buffer.OrderDate = Convert.ToDateTime(xdata.OrderDate); } The two loops look very similar; the key difference is the method we are calling, StreamReader. This method is what gives us streaming: it returns an enumerable list of elements, and because of the way that LINQ works this results in the data being streamed in.
static IEnumerable<XElement> StreamReader(String filename, string elementName) {     using (XmlReader xr = XmlReader.Create(filename))     {         xr.MoveToContent();         while (xr.Read()) //Reads the first element         {             while (xr.NodeType == XmlNodeType.Element && xr.Name == elementName)             {                 XElement node = (XElement)XElement.ReadFrom(xr);                   yield return node;             }         }         xr.Close();     } } This code is specifically designed to return a list of the elements with a specific name. The first Read reads the root element and then the inner while loop checks to see if the current element is the type we want. If not we do the xr.Read() again until we find the element type we want. We then use the neat function XElement.ReadFrom to read an element and all its sub elements into an XElement. This is what is returned and can be consumed by the LINQ statement. Essentially once one element has been read we need to check if we are still on the same element type and name (the inner loop) This was Mikes mistake, if we called .Read again we would advance the XmlReader beyond the start of the Element and so the ReadFrom method wouldn't work. So with the code above you can use what ever LINQ statement you like to flatten your XML into the rowsets you want. You could even have multiple outputs and generate your own surrogate keys.        

    Read the article

  • CodePlex Daily Summary for Friday, July 19, 2013

    CodePlex Daily Summary for Friday, July 19, 2013

    Popular Releases

    - uComponents: uComponents v5.5.0: Following on from our 101355 release, we bring you... v5.5.0! Resolved issues: 14634, 14848, DataType Grid 14840, 14842, 14845. Upgrading: the best way to upgrade uComponents is to re-install via Umbraco's back-office package installer (either from the package repository or a local package upload). Do not uninstall an existing uComponents package, as this may lead to data loss! Umbraco compatibility: this release is compatible with Umbraco 4...
    - 51Degrees.mobi - Mobile Device Detection and Redirection: 2.1.19.4: One Click Install from NuGet. This release introduces the 51Degrees.mobi IIS Vary Header Fix. When compression and caching are used in IIS, the Vary header is overwritten, making intelligent caching with dynamic content impossible. Find out more about installing the Vary Header fix. Changes to version 2.1.19.4: handlers now have a 'Count' property, an integer value that shows how many devices in the dataset use that handler. Provider.cs -> GetDeviceInfoByID to address a problem w...
    - SalarDbCodeGenerator: SalarDbCodeGenerator v2.1.2013.0719: Version 2.1.2013.0719, 2013/7/19. Pattern changes: the DapperContext pattern is added; all patterns are updated to work with one-to-one relations. Changes: one-to-one relations are supported; minor bug fixes.
    - Scryber PDF Generation: Scryber.Installer.v0.8.3: This one has tables! A significant update to the scryber library, with the inclusion of tables. There is no column span or row span, and nested tables are not fully supported, but they overflow nicely and are well styled, and they can be data bound. Other updates: the Link element has inner components directly as a child, rather than in a Content element. Existing files will continue to work, but are invalid with respect to the schema. Databinding is now supported at the document level, r...
    - Player Framework by Microsoft: Player Framework for Windows and WP (v1.3 beta 2): Includes all changes in v1.3 beta 1. Additional support for Windows 8.1 Preview. New APIs (JS): addTextTrack, msKeys, msPlayToPreferredSourceUri, msSetMediaKeys, onmsneedkey. New APIs (Xaml): SetMediaStreamSource method, Stretch property, StretchChanged event, AreTransportControlsEnabled property, IsFullWindow property, PlayToPreferredSourceUri proper...
    - Outlook 2013 Add-In: Multiple Calendars: As per popular request, this new version includes support for multiple calendars, which can be enabled in the configuration by choosing which ones to show/hide appointments from. In some cases (public folders) it may time out and crash, and so far it only supports "My Calendars", not shared ones yet. They are currently shown in the same font/color so there are no confusions with color categories, but please drop me a line on any suggestions you'd like to see implemented. Added fri...
    - GoAgent GUI: GoAgent GUI 1.2.0: The release notes were posted in Chinese and are garbled in this copy; see the Beta issue list at https://goagent.codeplex.com/workitem/list/basic and the documentation at https://goagent.codeplex.com/documentation, or contact uygw@outlook.com.
    - Circuit Diagram: Circuit Diagram 2.0 Beta 2: New in this release: show grid in editor, cut/copy/paste support, bug fixes.
    - Ascend 3D: Ascend (2013-07-17): Ascend 2.1: animation bugfixes and speed/robustness improvements; added compatibility for multiple top-level bones on armatures. Ascend Blender Exporter 2.1: removed the need to create a parent-child relationship between armature and skinning mesh; added support for exporting multiple top-level bones on armatures. AscendViewer 1.0.4: updated to use the latest version of the Ascend library.
    - Community TFS Build Extensions: July 2013: The July 2013 release contains VS2010 Activities (target .NET 4.0), VS2012 Activities (target .NET 4.5), VS2013 Activities (target .NET 4.5.1), and the Community TFS Build Manager VS2012. The Community TFS Build Manager can also be found in the Visual Studio Gallery, where updates will first become available. A version supporting VS2010 is also available in the Gallery.
    - smoketestgit1: new release: Wiki markup guide test content (bold, italics, underline, headings, bullet and number lists, http://www.example.com, do not apply formatting).
    - smoketesthg: new release: Wiki markup guide test content (bold, italics, underline, headings, bullet and number lists, http://www.example.com, do not apply formatting).
    - smoketesttfs: new release: Wiki markup guide test content (bold, italics, underline, headings, bullet and number lists, http://www.example.com, do not apply formatting).
    - datajs - JavaScript Library for data-centric web applications: datajs version 1.1.1: datajs is a cross-browser and UI-agnostic JavaScript library that enables data-centric web applications with the following features: an OData client that enables CRUD operations, including batching and metadata support, using both ATOM and JSON payloads; a single store abstraction that provides a common API on top of HTML5 local storage technologies; and a data cache component that allows reading data ranges from a collection and storing them locally to reduce the number of network requests. Changes ...
    - Orchard Project: Orchard 1.7 RC: Planning released.
    - Terminals: Version 3.1 - Release: Changes since version 3.0: 15992 unified usage of icons in the user interface; added context menu in the Organize favorites grid. Fixed: 34219, 34210, 34223, 33981, 34209. Install notes: no changes in the database (use the database from release 3.0); no upgrade of configuration, passwords, credentials or favorites. See also the upgrade notes for release 3.0.
    - Media Companion: Media Companion MC3.573b: XBMC Link - let MC update your XBMC library. Fixes are in place, so enjoy the XBMC Link function. Phil's been busy in the background and has come up with a great new feature for Media Companion, currently only implemented for movies. Once we're happy that it works with no issues, we'll extend the functionality to include TV shows. All the help for this is built into the application; go to General Preferences - XBMC Link for details. Help us make it better: currently only tested on local and ...
    - Wsus Package Publisher: Release v1.2.1307.15: Fix a bug where WPP crashes if 'ShowPendingUpdates' is started with wrong credentials. Fix a bug where WPP crashes if ArrivalDateAfter and ArrivalDateBefore are equal in the ComputerView. Add a filter in the ComputerView (thanks to NorbertFe for this feature request). Add an option, when right-clicking on a computer, to display the current logon user of the remote computer. Add an option in settings to choose whether WPP pings remote computers using IPv4, IPv6 or IPv6 and, if that fails, IP...
    - Lab Of Things: vBeta1: Initial release of LoT.
    - VidCoder: 1.4.23: New in 1.4.23: added French translation; fixed non-x264 video encoders not sticking in the video tab. New in 1.4: updated HandBrake core to 0.9.9; Blu-ray subtitle (PGS) support; additional framerates (30, 50, 59.94, 60); additional sample rates (8, 11.025, 12 and 16 kHz); additional higher bitrates for audio; Same as Source Constant Framerate; 24-bit FLAC encoding; added Windows Phone 8 and Apple TV 3 presets; introduced process isolation for encodes, so if HandBrake crashes, VidCoder will ...

    New Projects

    - [C#] A Complete MySQL Manager Project: An open-source MySQL database manager oriented to multi-threading environments, supporting creating an entire database from scratch and editing it.
    - bCal Javascript Calendar Generator: Using this simple JavaScript library, you can create simple navigable calendars in your website with just one line of user code.
    - Better Network: Tools to delete network profiles and network lists in Windows 8.
    - C# Intellisense for Notepad++: 'CS-Script Intellisense' is a real C# intellisense solution based on CS-Script and ICSharpCode.NRefactory/Mono.Cecil.
    - DARF: Dynamic Application Runtime Framework: A set of tools and libraries which help software developers create fully distributed and component-based applications.
    - FeiXinV5: Just a test.
    - Follower Catcher: Game made for the 2012 Windows 8 Megathon; second place in Madrid's local competition.
    - MeeOS .NET: The first GNU GPL operating system in Spanish.
    - Music Vault: Music Vault is a program to store your favorite music in one place; you can move your whole music library at any time by moving only one file.
    - MVC uTorrent: An ASP.NET MVC uTorrent front end that utilises the uTorrent API. Main features: single-page application; can be hosted in IIS.
    - MyTaskManager: A project for distributing tasks to logged-in users.
    - OO2RDF: OO2RDF is a framework for data triplification.
    - P(y)rotein Subcellular Location Image Database: A library that allows communication with an OMERO database, an open-source software for storage and manipulation of biological images.
    - PE Toolbox: The description was posted in a non-Latin script and is garbled in this copy; it includes PE IFE (Importing from Excel).
    - PeGscan: PeGscan.
    - ScriptZilla: ScriptZilla is a free editor for Windows supporting more than 23 languages.
    - SharePoint: Reconcile SSRS Reports and Datasets with Data Sources (PowerShell): PowerShell to link data sources to SSRS reports and datasets.
    - Sync-Moped: The idea behind "Sync-Moped" is to synchronize two directories with backup functionality.
    - System.Time: System.Time provides a type for .NET that allows managing a time in System.String, System.DateTime and System.TimeSpan form at the same time.
    - Text Analytics: Project summary will be added later.
    - Tibia Universal MC .NET: This tool was written as an attempt to port http://code.google.com/p/xenomc/ to the .NET Framework.
    - VRFramework: VR Framework is a system for creating virtual reality games using your PC, an Android smartphone, and Unity 3D.

    Read the article

  • Creating A SharePoint Parent/Child List Relationship – SharePoint 2010 Edition

    - by Mark Rackley
    Hey blog readers… It has been almost 2 years since I posted my most read blog on creating a Parent/Child list relationship in SharePoint 2007: Creating a SharePoint List Parent / Child Relationship - Out of the Box. And then a year ago I improved on my method and redid the blog post… still for SharePoint 2007: Creating a SharePoint List Parent/Child Relationship – VIDEO REMIX. Since then many of you have been asking me how to get this to work in SharePoint 2010, and frankly I have just not had time to look into it. I wish I could have jumped into this sooner, but I have just recently begun to look at it. Well… after all this time I have actually come up with two solutions that work. Neither of them is as clean as I'd like it to be, but I wanted to get something in your hands that you can start using today. Hopefully in the coming weeks and months I'll be able to improve upon this further and give you guys some better options.

For the most part, the process is identical to the 2007 process, but you have probably found out that the list view web parts in 2010 behave differently, and getting the Parent ID to your new child form can be a pain in the rear (at least that's what I've discovered). Anyway, like I said, I have found a couple of solutions that work. If you know of a better one, please let us know, as it bugs me that this is not as elegant as my 2007 implementation.

Getting on the same page

The first thing I'd recommend is recreating this blog post in SharePoint 2010: Creating a SharePoint List Parent/Child Relationship – VIDEO REMIX. There are some vague differences, but it's basically the same… Here's a quick video of me doing this in SP 2010: Creating Lists necessary for this blog post.

Now that you have the lists created, let's set up the New Time form to use a QueryString variable to populate the Parent ID field: Creating parameters in Child's new item form to set parent ID.

Did I talk fast enough through both of those videos? Hopefully by now that stuff is old hat to you, but I wanted to make sure everyone could get on the same page. Okay… let's get started.

Solution 1 – XSLTListView with JavaScript

This solution is the more elegant of the two; however, it does require the use of a little JavaScript. The other solution does not use JavaScript, but it also doesn't use the pretty new SP 2010 pop-ups. I'll let you decide which you like better. The basic steps of this solution are:

1. Insert a Related Item View.
2. Insert a Content Editor Web Part.
3. Insert script in the Content Editor Web Part that pulls the ID from the query string and calls the method to insert a new item on the child entry form.
4. Hide the toolbar from the data view to remove the "add new item" link.

Again, you don't HAVE to use a CEWP, you could just put the JavaScript directly in the page using SPD.
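Since step 4 hides the data view's own toolbar, the page needs its own link for users to click; it just has to call the NewTime() function defined in the CEWP script shown below. Here's a minimal sketch of such a link (the link text and placement are my own guess at what the video shows, not copied from it):

<!-- Calls NewTime(), which reads the ID from the query string and opens the child form -->
<a href="javascript:NewTime();">Add new Time entry...</a>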
Anyway, here is how I did it; here's the video: Using Related Item View / JavaScript.

And here's the JavaScript I used in my Content Editor Web Part:

<script type="text/javascript">
function NewTime() {
    // Get the Query String values and split them out into the vals array
    var vals = new Object();
    var qs = location.search.substring(1, location.search.length);
    var args = qs.split("&");
    for (var i = 0; i < args.length; i++) {
        var nameVal = args[i].split("=");
        var temp = unescape(nameVal[1]).split('+');
        nameVal[1] = temp.join(' ');
        vals[nameVal[0]] = nameVal[1];
    }
    var issueID = vals["ID"];

    // use this to bring up the pretty pop up
    NewItem2(event, "http://sp2010dev:1234/Lists/Time/NewForm.aspx?IssueID=" + issueID);

    // use this to open a new window
    //window.location = "http://sp2010dev:1234/Lists/Time/NewForm.aspx?IssueID=" + issueID;
}
</script>

Solution 2 – DataFormWebPart and the exact same 2007 process

This solution is a little more of a hack, but it is also MUCH closer to the process we used in SP 2007. So, if you don't mind losing the pretty pop-up and prefer the comforts of what you are used to, you can give this one a try. The basic steps are:

1. Insert a DataFormWebPart instead of the List Data View.
2. Create a parameter on the DataFormWebPart to store the "ID" query string variable.
3. Filter the DataFormWebPart using the parameter.
4. Insert a link at the bottom of the DataForm Web Part that points to the child's new item form and passes in the Parent ID using the parameter.

See… like I told you, the exact same process as in 2007 (except using the DataFormWebPart). The DataFormWebPart also requires a lot more work to make it look "pretty", but it's just table rows and cells, and can be configured pretty painlessly. Here is that video: Using DataForm Web Part.

One quick update… if you change the link in this solution from:

<tr>
  <td>
    <a href="http://sp2010dev:1234/Lists/Time/NewForm.aspx?IssueID={$IssueIDParam}">Click here to create new item...</a>
  </td>
</tr>

to:

<tr>
  <td>
    <a href="javascript:NewItem2(event,'http://sp2010dev:1234/Lists/Time/NewForm.aspx?IssueID={$IssueIDParam}');">Click here to create new item...</a>
  </td>
</tr>

it will open up in the pretty pop-up and act the same as Solution 1… So both solutions will now behave the same to the end user; it just depends on which one you want to implement.

That's all for now… Remember, in both solutions, once you have them working you can make the "IssueID" field invisible to users by using the "ms-hidden" class (see my previous blog post on the subject linked above). That's basically all there is to it! No pithy or witty closing this time… I am sorry it took me so long to dive into this and I hope your questions are answered. As I become more polished myself I will try to come up with a cleaner solution that will make everyone happy… As always, thanks for taking the time to stop by.
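P.S. If you'd rather have the child-form wiring from the "Creating parameters in Child's new item form" video in code form, here is a rough sketch of the kind of script that can sit in a Content Editor Web Part on the Time list's NewForm.aspx. It is only a sketch under a couple of assumptions – that the parent ID is stored in a single line of text column whose display name is "IssueID", and that the standard _spBodyOnLoadFunctionNames hook is available – so adjust it to your own lists rather than treating it as the exact code from the video:

<script type="text/javascript">
// Sketch only: assumes a single-line-of-text column whose display name is "IssueID"
// on the child (Time) list. SharePoint renders such fields as <input> elements
// whose title attribute matches the column's display name.
function FillIssueID() {
    // Grab the IssueID value the parent view appended to the query string
    var match = location.search.match(/[?&]IssueID=([^&]*)/);
    if (!match) return;
    var issueID = decodeURIComponent(match[1].replace(/\+/g, " "));

    // Find the IssueID input on the form and drop the value in
    var inputs = document.getElementsByTagName("input");
    for (var i = 0; i < inputs.length; i++) {
        if (inputs[i].title === "IssueID") {
            inputs[i].value = issueID;
            break;
        }
    }
}
// Run once SharePoint has finished building the form
_spBodyOnLoadFunctionNames.push("FillIssueID");
</script>

With something like that in place, the "ms-hidden" trick mentioned above keeps the IssueID field out of the user's way while still capturing the parent ID.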

    Read the article

< Previous Page | 177 178 179 180 181 182 183 184 185 186 187 188  | Next Page >