Search Results

Search found 5719 results on 229 pages for 'cpu spike'.


  • Why is the upgrade manager freezing when upgrading to 13.10

    - by Nil
    Earlier today I started to upgrade to 13.10 only to return much much later and notice that the update manager was still running. It seems to be frozen and I am reluctant to hit ctrl+C. I can't launch nautilus using the icon on the launcher. When I try to run it via the terminal, this is what happens: $ nautilus Could not register the application: GDBus.Error:org.freedesktop.DBus.Error.UnknownMethod: No such interface `org.gtk.Actions' on object at path /org/gnome/Nautilus My printers aren't showing up when I attempt to print. I don't know whether these a symptoms of the same problem. Should I let the update manage continue to run, or should I shut it down? Here are the processes running if it is any help: $ ps aux USER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND root 1 0.0 0.2 5920 4072 ? Ss Oct18 0:02 /sbin/init root 2 0.0 0.0 0 0 ? S Oct18 0:00 [kthreadd] root 3 0.0 0.0 0 0 ? S Oct18 0:31 [ksoftirqd/0] root 5 0.0 0.0 0 0 ? S< Oct18 0:00 [kworker/0:0H] root 6 0.0 0.0 0 0 ? S Oct18 0:00 [kworker/u:0] root 7 0.0 0.0 0 0 ? S< Oct18 0:00 [kworker/u:0H] root 8 0.0 0.0 0 0 ? S Oct18 0:00 [migration/0] root 9 0.0 0.0 0 0 ? S Oct18 0:00 [rcu_bh] root 10 0.0 0.0 0 0 ? S Oct18 0:54 [rcu_sched] root 11 0.0 0.0 0 0 ? S Oct18 0:00 [watchdog/0] root 12 0.0 0.0 0 0 ? S Oct18 0:00 [watchdog/1] root 13 0.0 0.0 0 0 ? S Oct18 0:43 [ksoftirqd/1] root 14 0.0 0.0 0 0 ? S Oct18 0:00 [migration/1] root 16 0.0 0.0 0 0 ? S< Oct18 0:00 [kworker/1:0H] root 17 0.0 0.0 0 0 ? S< Oct18 0:00 [cpuset] root 18 0.0 0.0 0 0 ? S< Oct18 0:00 [khelper] root 19 0.0 0.0 0 0 ? S Oct18 0:00 [kdevtmpfs] root 20 0.0 0.0 0 0 ? S< Oct18 0:00 [netns] root 21 0.0 0.0 0 0 ? S Oct18 0:00 [bdi-default] root 22 0.0 0.0 0 0 ? S< Oct18 0:00 [kintegrityd] root 23 0.0 0.0 0 0 ? S< Oct18 0:00 [kblockd] root 24 0.0 0.0 0 0 ? S< Oct18 0:00 [ata_sff] root 25 0.0 0.0 0 0 ? S Oct18 0:00 [khubd] root 26 0.0 0.0 0 0 ? S< Oct18 0:00 [md] root 27 0.0 0.0 0 0 ? S< Oct18 0:00 [devfreq_wq] root 29 0.0 0.0 0 0 ? S Oct18 0:00 [khungtaskd] root 30 0.0 0.0 0 0 ? S Oct18 0:09 [kswapd0] root 31 0.0 0.0 0 0 ? SN Oct18 0:00 [ksmd] root 32 0.0 0.0 0 0 ? SN Oct18 0:00 [khugepaged] root 33 0.0 0.0 0 0 ? S Oct18 0:00 [fsnotify_mark] root 34 0.0 0.0 0 0 ? S Oct18 0:00 [ecryptfs-kthre root 35 0.0 0.0 0 0 ? S< Oct18 0:00 [crypto] root 46 0.0 0.0 0 0 ? S< Oct18 0:00 [kthrotld] root 49 0.0 0.0 0 0 ? S< Oct18 0:00 [binder] root 69 0.0 0.0 0 0 ? S< Oct18 0:00 [deferwq] root 70 0.0 0.0 0 0 ? S< Oct18 0:00 [charger_manage root 166 0.0 0.0 0 0 ? S Oct18 0:00 [scsi_eh_0] root 167 0.0 0.0 0 0 ? S Oct18 0:00 [scsi_eh_1] root 188 0.0 0.0 0 0 ? S Oct18 0:00 [scsi_eh_2] root 244 0.0 0.0 0 0 ? S Oct18 0:00 [kworker/u:4] root 245 0.0 0.0 0 0 ? S< Oct18 0:00 [ttm_swap] root 260 0.0 0.0 0 0 ? S Oct18 0:00 [scsi_eh_3] root 266 0.0 0.0 0 0 ? S Oct18 0:00 [scsi_eh_4] root 267 0.0 0.0 0 0 ? S Oct18 1:08 [usb-storage] root 268 0.0 0.0 0 0 ? S Oct18 0:00 [scsi_eh_5] root 269 0.0 0.0 0 0 ? S Oct18 0:06 [usb-storage] root 302 0.0 0.0 2904 504 ? S 14:11 0:00 upstart-udev-br root 305 0.0 0.0 12080 1632 ? Ss 14:11 0:00 /lib/systemd/sy root 329 0.0 0.0 0 0 ? S Oct18 0:24 [jbd2/sda2-8] root 330 0.0 0.0 0 0 ? S< Oct18 0:00 [ext4-dio-unwri root 352 0.0 0.0 2944 4 ? S Oct18 0:00 /sbin/ureadahea root 440 0.0 0.0 0 0 ? S Oct18 0:05 [flush-8:0] root 734 0.0 0.0 0 0 ? S< Oct18 0:00 [cfg80211] root 761 0.0 0.0 0 0 ? S< Oct18 0:00 [kpsmoused] root 780 0.0 0.0 0 0 ? S Oct18 0:00 [pccardd] root 784 0.0 0.0 0 0 ? S< Oct18 0:00 [kvm-irqfd-clea root 902 0.0 0.0 0 0 ? 
S< Oct18 0:00 [hd-audio0] syslog 916 0.0 0.0 31120 680 ? Sl Oct18 0:13 rsyslogd -c5 102 1010 0.0 0.1 4344 1988 ? Ss Oct18 0:04 dbus-daemon --s root 1061 0.0 0.0 4844 924 ? Ss Oct18 0:00 /usr/sbin/bluet root 1077 0.0 0.0 2268 388 ? Ss Oct18 0:00 /bin/sh /etc/in root 1079 0.0 0.0 4664 484 tty4 Ss+ Oct18 0:00 /sbin/getty -8 root 1087 0.0 0.0 4664 484 tty5 Ss+ Oct18 0:00 /sbin/getty -8 root 1089 0.0 0.0 0 0 ? S< Oct18 0:00 [krfcommd] root 1098 0.0 0.0 4664 484 tty2 Ss+ Oct18 0:00 /sbin/getty -8 root 1099 0.0 0.1 4408 2076 tty3 Ss Oct18 0:00 /bin/login -- root 1101 0.0 0.0 4664 484 tty6 Ss+ Oct18 0:00 /sbin/getty -8 root 1168 0.0 0.0 2780 524 ? Ss Oct18 0:00 cron daemon 1169 0.0 0.0 2636 212 ? Ss Oct18 0:00 atd root 1183 0.0 0.0 34872 1448 ? SLsl Oct18 0:00 lightdm root 1249 0.0 0.0 3536 468 ? S Oct18 0:00 /bin/bash /etc/ root 1254 4.2 2.2 125832 40040 tty7 Rsl+ Oct18 81:27 /usr/bin/X :0 - root 1261 0.0 0.0 2268 344 ? S Oct18 0:00 /bin/sh /etc/ac root 1265 0.0 0.1 42004 2836 ? Ssl Oct18 0:01 NetworkManager root 1272 0.0 0.0 2268 376 ? S Oct18 0:00 /bin/sh /usr/sb root 1286 0.0 0.3 30616 5824 ? Sl Oct18 0:05 /usr/lib/policy root 1304 0.0 0.0 2268 372 ? D Oct18 0:00 /bin/sh /usr/li root 1360 0.0 0.0 5532 560 ? S Oct18 0:00 /sbin/dhclient nobody 1368 0.0 0.0 5476 784 ? S Oct18 0:00 /usr/sbin/dnsma root 1514 0.0 0.1 34036 1932 ? Sl Oct18 0:01 /usr/lib/accoun root 1530 0.0 0.0 0 0 ? S Oct18 0:00 [kauditd] root 1536 0.0 0.1 30480 2260 ? Sl Oct18 0:01 /usr/sbin/conso root 1653 0.0 0.1 28908 2104 ? Sl Oct18 0:00 /usr/lib/upower root 1698 0.0 0.0 17464 1388 ? Sl Oct18 0:00 lightdm --sessi rtkit 1750 0.0 0.0 21368 696 ? SNl Oct18 0:00 /usr/lib/rtkit/ 1000 1844 0.0 0.1 88116 2320 ? SLl Oct18 0:00 /usr/bin/gnome- 1000 1855 0.0 0.3 73076 5884 ? Ssl Oct18 0:00 gnome-session - 1000 1901 0.0 0.0 4128 24 ? Ss Oct18 0:00 /usr/bin/ssh-ag 1000 1904 0.0 0.0 3880 192 ? S Oct18 0:00 /usr/bin/dbus-l 1000 1905 0.0 0.1 5520 2500 ? Ss Oct18 0:23 //bin/dbus-daem 1000 1915 0.0 0.0 43348 1420 ? Sl Oct18 0:00 /usr/lib/at-spi 1000 1919 0.0 0.0 3412 1252 ? S Oct18 0:01 /bin/dbus-daemo 1000 1922 0.0 0.0 17176 1624 ? Sl Oct18 0:00 /usr/lib/at-spi 1000 1932 0.0 0.5 165916 9124 ? Sl Oct18 0:21 /usr/lib/gnome- 1000 1947 1.9 0.2 100716 4024 ? S<l Oct18 37:48 /usr/bin/pulsea 1000 1949 0.0 0.0 27568 1616 ? Sl Oct18 0:00 /usr/lib/gvfs/g 1000 1953 0.0 0.0 42628 1184 ? Sl Oct18 0:00 /usr/lib/gvfs// 1000 1962 0.0 0.0 14472 916 ? S Oct18 0:00 /usr/lib/pulsea 1000 1964 0.0 0.1 9548 2480 ? S Oct18 0:00 /usr/lib/i386-l 1000 1980 0.0 0.0 3764 364 ? S Oct18 0:43 syndaemon -i 1. 1000 1987 0.0 0.0 24476 1668 ? Sl Oct18 0:00 /usr/lib/dconf/ 1000 1990 0.0 0.4 122968 8844 ? Sl Oct18 0:00 /usr/lib/policy 1000 1991 0.0 0.2 80480 5392 ? Sl Oct18 0:00 /usr/lib/gnome- 1000 1992 0.0 1.2 167532 22776 ? Sl Oct18 0:07 nautilus -n 1000 1998 0.0 0.4 181444 7744 ? Sl Oct18 0:00 nm-applet 1000 2002 0.0 0.1 38020 2892 ? Sl Oct18 0:00 /usr/lib/gvfs/g root 2012 0.0 0.1 59908 2664 ? Sl Oct18 0:24 /usr/lib/udisks 1000 2024 0.0 0.0 26456 1540 ? Sl Oct18 0:00 /usr/lib/gvfs/g 1000 2028 0.0 0.0 27684 1536 ? Sl Oct18 0:00 /usr/lib/gvfs/g 1000 2036 0.0 0.0 38964 1452 ? Sl Oct18 0:00 /usr/lib/gvfs/g root 2049 0.0 0.0 3328 588 ? Ss Oct18 0:00 /sbin/mount.ntf 1000 2053 0.0 0.0 36792 1284 ? Sl Oct18 0:00 /usr/lib/gvfs/g 1000 2058 0.0 0.1 53664 2364 ? Sl Oct18 0:00 /usr/lib/gvfs/g 1000 2069 0.0 0.4 82816 8112 ? Sl Oct18 0:07 /usr/lib/i386-l 1000 2084 0.0 0.1 17984 2048 ? Sl Oct18 0:00 /usr/lib/gvfs/g 1000 2086 0.0 0.0 2268 392 ? 
Ss Oct18 0:00 /bin/sh -c /usr 1000 2087 0.0 0.7 68100 12856 ? Sl Oct18 0:13 /usr/bin/gtk-wi 1000 2089 0.0 0.9 98508 17756 ? Sl Oct18 0:13 /usr/lib/unity/ 1000 2091 0.0 0.3 65380 6692 ? Sl Oct18 0:01 /usr/lib/i386-l 1000 2117 0.0 0.2 98024 3888 ? Sl Oct18 0:00 /usr/lib/i386-l 1000 2125 0.0 0.1 86644 3408 ? Sl Oct18 0:00 /usr/lib/indica 1000 2126 0.0 0.3 84272 6664 ? Sl Oct18 0:00 /usr/lib/i386-l 1000 2127 0.0 0.1 94384 2752 ? Sl Oct18 0:00 /usr/lib/i386-l 1000 2128 0.0 0.1 83968 2828 ? Sl Oct18 0:00 /usr/lib/i386-l 1000 2129 0.0 0.2 150020 4684 ? Sl Oct18 0:01 /usr/lib/i386-l 1000 2130 0.0 0.2 86572 3884 ? Sl Oct18 0:00 /usr/lib/indica 1000 2131 0.0 0.1 69352 2524 ? Sl Oct18 0:00 /usr/lib/i386-l 1000 2144 0.0 0.1 74192 3152 ? Sl Oct18 0:00 /usr/lib/evolut 1000 2182 0.0 0.2 101120 4420 ? Sl Oct18 0:02 /usr/lib/gnome- 1000 2193 0.0 0.3 77752 6448 ? Sl Oct18 0:00 telepathy-indic 1000 2200 0.0 0.1 44032 2708 ? Sl Oct18 0:00 /usr/lib/telepa 1000 2209 0.0 0.2 77664 3860 ? Sl Oct18 0:02 zeitgeist-datah 1000 2216 0.0 0.2 44464 4180 ? Sl Oct18 0:01 /usr/bin/zeitge root 2234 0.0 0.0 0 0 ? S< Oct18 0:00 [kworker/1:1H] 1000 2246 0.0 1.1 93428 21256 ? Sl Oct18 0:02 /usr/bin/python 1000 2284 0.0 0.6 110040 11656 ? Sl Oct18 0:14 /usr/lib/i386-l 1000 2289 0.0 0.2 85632 3728 ? Sl Oct18 0:00 /usr/lib/i386-l 1000 2296 0.0 0.1 77900 3388 ? Sl Oct18 0:00 /usr/lib/i386-l 1000 2298 0.0 0.6 120356 11992 ? Sl Oct18 0:00 /usr/bin/python 1000 2300 0.0 0.1 87560 2408 ? Sl Oct18 0:00 /usr/lib/i386-l 1000 2301 0.0 0.2 91764 4404 ? Sl Oct18 0:00 /usr/lib/i386-l 1000 2303 0.0 0.2 78224 4592 ? Sl Oct18 0:00 /usr/lib/i386-l 1000 2370 0.0 0.2 74976 4908 ? Sl Oct18 0:00 /usr/lib/i386-l 1000 2372 0.0 0.4 106760 8972 ? Sl Oct18 0:00 /usr/bin/python 1000 2394 0.0 0.1 95624 2736 ? Sl Oct18 0:00 /usr/lib/i386-l 1000 2433 0.0 0.1 46640 2124 ? Sl Oct18 0:00 /usr/lib/i386-l 1000 2457 0.0 0.0 34496 1648 ? Sl Oct18 0:00 /usr/lib/libuni root 2513 0.0 0.0 0 0 ? S< Oct18 0:00 [kworker/0:1H] 1000 3361 0.0 0.0 2268 396 ? SN 07:54 0:20 /bin/sh -c /usr root 4919 1.8 2.1 201196 38760 ? SNl 13:29 11:25 /usr/bin/python root 4957 0.0 0.0 3880 400 ? SN 13:29 0:00 dbus-launch --a root 4958 0.0 0.0 3424 1196 ? SNs 13:29 0:05 //bin/dbus-daem root 5128 0.0 0.0 2268 416 ? SN 13:50 0:00 /bin/sh -c whil root 5141 0.0 0.0 2436 508 ? SN 13:50 0:00 gnome-pty-helpe root 5145 0.0 1.7 245280 30872 pts/1 SNs+ 13:50 0:05 /usr/bin/python root 5159 0.0 0.4 64200 7432 ? SNl 13:50 0:05 /usr/bin/gnome- root 5163 0.0 0.0 27440 1552 ? SNl 13:50 0:00 /usr/lib/gvfs/g root 5167 0.0 0.0 42628 1648 ? SNl 13:50 0:00 /usr/lib/gvfs// root 9236 0.0 0.1 19112 2680 ? Ss 14:33 0:00 /usr/sbin/winbi root 9243 0.0 0.0 19112 1448 ? S 14:33 0:00 /usr/sbin/winbi whoopsie 9409 0.0 0.2 53608 4264 ? Ssl 14:33 0:00 whoopsie root 20087 0.0 0.0 0 0 ? S< 14:34 0:00 [xfsalloc] root 20088 0.0 0.0 0 0 ? S< 14:34 0:00 [xfs_mru_cache] root 20089 0.0 0.0 0 0 ? S< 14:34 0:00 [xfslogd] root 20092 0.0 0.0 0 0 ? S 14:34 0:00 [jfsIO] root 20093 0.0 0.0 0 0 ? S 14:34 0:00 [jfsCommit] root 20094 0.0 0.0 0 0 ? S 14:34 0:00 [jfsCommit] root 20095 0.0 0.0 0 0 ? S 14:34 0:00 [jfsSync] root 20845 0.0 0.3 7980 6048 pts/2 SNs+ 14:29 0:04 /usr/bin/dpkg - root 23330 0.0 0.0 2896 568 ? S 14:09 0:00 upstart-file-br root 23332 0.0 0.0 2884 572 ? S 14:09 0:00 upstart-socket- root 24577 0.2 0.0 0 0 ? S 23:09 0:04 [kworker/1:2] root 24656 0.1 0.0 0 0 ? S 23:10 0:02 [kworker/0:0] 1000 24758 2.8 4.7 243692 85516 ? Sl 23:11 0:50 compiz root 25774 0.0 0.0 0 0 ? S< 14:39 0:00 [iprt] 1000 26128 5.5 10.3 641628 187420 ? 
Sl 23:27 0:46 /usr/lib/firefo root 26374 0.0 0.0 3964 720 ? Ss 14:39 0:02 /usr/sbin/irqba root 26534 0.0 0.0 0 0 ? S 23:34 0:00 [kworker/0:1] root 26564 0.0 0.0 0 0 ? S 23:35 0:00 [kworker/1:1] 1000 26664 0.0 0.1 6784 3068 tty3 S+ 23:36 0:00 -bash 1000 26936 15.2 1.3 67520 23672 ? Sl 23:39 0:21 gnome-system-mo root 26992 0.0 0.0 0 0 ? S 23:40 0:00 [kworker/1:0] root 27049 0.0 0.0 4248 288 ? SN 23:41 0:00 sleep 30 1000 27057 9.5 0.8 68624 16140 ? Rl 23:41 0:00 gnome-terminal 1000 27064 0.0 0.0 2440 704 ? S 23:41 0:00 gnome-pty-helpe 1000 27065 2.6 0.1 6344 2608 pts/3 Ss 23:41 0:00 bash 1000 27113 0.0 0.0 5240 1144 pts/3 R+ 23:41 0:00 ps aux root 28267 0.0 0.0 2216 632 ? Ss 14:39 0:00 acpid -c /etc/a root 28333 0.0 0.0 2272 552 pts/2 SN+ 14:39 0:00 /bin/sh /var/li root 29699 0.0 0.2 8384 4608 pts/2 SN+ 14:40 0:00 modprobe wl
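
    A practical way to decide (a sketch only; these are stock Ubuntu commands and log locations, and interrupting a half-finished release upgrade is risky) is to check from another terminal whether dpkg is still doing work before killing anything -- note the ps output above already shows a /usr/bin/dpkg process alive on pts/2:

        # Is a dpkg/apt process still alive and using CPU?
        ps aux | grep -E '[d]pkg|[a]pt'

        # Is the dpkg log still growing? New lines mean packages are still
        # being unpacked or configured, so the upgrade is not actually dead.
        tail -f /var/log/dpkg.log

        # The release-upgrade tool keeps its own log as well.
        tail -f /var/log/dist-upgrade/main.log

        # Only if the upgrade really is stuck and has to be interrupted,
        # finishing the half-configured packages is usually attempted with:
        #   sudo dpkg --configure -a
        #   sudo apt-get -f install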

    Read the article

  • How to Load Oracle Tables From Hadoop Tutorial (Part 5 - Leveraging Parallelism in OSCH)

    - by Bob Hanckel
    Using OSCH: Beyond Hello World
    In the previous post we discussed a “Hello World” example for OSCH focusing on the mechanics of getting a toy end-to-end example working. In this post we are going to talk about how to make it work for big data loads. We will explain how to optimize an OSCH external table for load, paying particular attention to Oracle’s DOP (degree of parallelism), the number of external table location files we use, and the number of HDFS files that make up the payload. We will provide some rules that serve as best practices when using OSCH. The assumption is that you have read the previous post and have some end-to-end OSCH external tables working and now you want to ramp up the size of the loads.
    Using OSCH External Tables for Access and Loading
    OSCH external tables are no different from any other Oracle external tables. They can be used to access HDFS content using Oracle SQL: SELECT * FROM my_hdfs_external_table; or use the same SQL access to load a table in Oracle. INSERT INTO my_oracle_table SELECT * FROM my_hdfs_external_table; To speed up the load time, you will want to control the degree of parallelism (i.e. DOP) and add two SQL hints. ALTER SESSION FORCE PARALLEL DML PARALLEL 8; ALTER SESSION FORCE PARALLEL QUERY PARALLEL 8; INSERT /*+ append pq_distribute(my_oracle_table, none) */ INTO my_oracle_table SELECT * FROM my_hdfs_external_table; There are various ways of hinting at what level of DOP you want to use. The ALTER SESSION statements above force the issue assuming you (the user of the session) are allowed to assert the DOP (more on that in the next section). Alternatively you could embed additional parallel hints directly into the INSERT and SELECT clauses respectively. /*+ parallel(my_oracle_table,8) */ /*+ parallel(my_hdfs_external_table,8) */ Note that the "append" hint lets you load a target table by reserving space above a given "high watermark" in storage and uses Direct Path load. In other words, it doesn't try to fill blocks that are already allocated and partially filled; it uses unallocated blocks. It is an optimized way of loading a table without incurring the typical resource overhead associated with run-of-the-mill inserts. The "pq_distribute" hint in this context unifies the INSERT and SELECT operators to make data flow during a load more efficient. Finally, your target Oracle table should be defined with "NOLOGGING" and "PARALLEL" attributes. The combination of "NOLOGGING" and use of the "append" hint disables REDO logging, and its overhead. The "PARALLEL" clause tells Oracle to try to use parallel execution when operating on the target table.
    Determine Your DOP
    It might feel natural to build your datasets in Hadoop, then afterwards figure out how to tune the OSCH external table definition, but you should start backwards. You should focus on the Oracle database, specifically the DOP you want to use when loading (or accessing) HDFS content using external tables. The DOP in Oracle controls how many PQ slaves are launched in parallel when executing an external table. Typically the DOP is something you want Oracle to control transparently, but for loading content from Hadoop with OSCH, it's something that you will want to control. Oracle computes the maximum DOP that can be used by an Oracle user.
The maximum value that can be assigned is an integer value typically equal to the number of CPUs on your Oracle instances, times the number of cores per CPU, times the number of Oracle instances. For example, suppose you have a RAC environment with 2 Oracle instances. And suppose that each system has 2 CPUs with 32 cores. The maximum DOP would be 128 (i.e. 2*2*32). In point of fact if you are running on a production system, the maximum DOP you are allowed to use will be restricted by the Oracle DBA. This is because using a system maximum DOP can subsume all system resources on Oracle and starve anything else that is executing. Obviously on a production system where resources need to be shared 24x7, this can’t be allowed to happen. The use cases for being able to run OSCH with a maximum DOP are when you have exclusive access to all the resources on an Oracle system. This can be in situations when your are first seeding tables in a new Oracle database, or there is a time where normal activity in the production database can be safely taken off-line for a few hours to free up resources for a big incremental load. Using OSCH on high end machines (specifically Oracle Exadata and Oracle BDA cabled with Infiniband), this mode of operation can load up to 15TB per hour. The bottom line is that you should first figure out what DOP you will be allowed to run with by talking to the DBAs who manage the production system. You then use that number to derive the number of location files, and (optionally) the number of HDFS data files that you want to generate, assuming that is flexible. Rule 1: Find out the maximum DOP you will be allowed to use with OSCH on the target Oracle system Determining the Number of Location Files Let’s assume that the DBA told you that your maximum DOP was 8. You want the number of location files in your external table to be big enough to utilize all 8 PQ slaves, and you want them to represent equally balanced workloads. Remember location files in OSCH are metadata lists of HDFS files and are created using OSCH’s External Table tool. They also represent the workload size given to an individual Oracle PQ slave (i.e. a PQ slave is given one location file to process at a time, and only it will process the contents of the location file.) Rule 2: The size of the workload of a single location file (and the PQ slave that processes it) is the sum of the content size of the HDFS files it lists For example, if a location file lists 5 HDFS files which are each 100GB in size, the workload size for that location file is 500GB. The number of location files that you generate is something you control by providing a number as input to OSCH’s External Table tool. Rule 3: The number of location files chosen should be a small multiple of the DOP Each location file represents one workload for one PQ slave. So the goal is to keep all slaves busy and try to give them equivalent workloads. Obviously if you run with a DOP of 8 but have 5 location files, only five PQ slaves will have something to do and the other three will have nothing to do and will quietly exit. If you run with 9 location files, then the PQ slaves will pick up the first 8 location files, and assuming they have equal work loads, will finish up about the same time. But the first PQ slave to finish its job will then be rescheduled to process the ninth location file, potentially doubling the end to end processing time. So for this DOP using 8, 16, or 32 location files would be a good idea. 
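
    Before moving on to HDFS file counts, the load session described above can be summarized in one sketch (the table names and the DOP of 8 are the illustrative values used in this post, not requirements):

        -- Target table defined once with the attributes discussed above
        -- (column list omitted; it must match the external table's shape).
        -- CREATE TABLE my_oracle_table (...) NOLOGGING PARALLEL;

        -- Load session: force the DOP the DBA has allotted (8 in this example).
        ALTER SESSION FORCE PARALLEL DML PARALLEL 8;
        ALTER SESSION FORCE PARALLEL QUERY PARALLEL 8;

        -- Direct-path, parallel insert from the OSCH external table.
        INSERT /*+ append pq_distribute(my_oracle_table, none) */ INTO my_oracle_table
          SELECT * FROM my_hdfs_external_table;

        -- Direct-path loaded rows are only visible after the transaction commits.
        COMMIT;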
Determining the Number of HDFS Files Let’s start with the next rule and then explain it: Rule 4: The number of HDFS files should try to be a multiple of the number of location files and try to be relatively the same size In our running example, the DOP is 8. This means that the number of location files should be a small multiple of 8. Remember that each location file represents a list of unique HDFS files to load, and that the sum of the files listed in each location file is a workload for one Oracle PQ slave. The OSCH External Table tool will look in an HDFS directory for a set of HDFS files to load.  It will generate N number of location files (where N is the value you gave to the tool). It will then try to divvy up the HDFS files and do its best to make sure the workload across location files is as balanced as possible. (The tool uses a greedy algorithm that grabs the biggest HDFS file and delegates it to a particular location file. It then looks for the next biggest file and puts in some other location file, and so on). The tools ability to balance is reduced if HDFS file sizes are grossly out of balance or are too few. For example suppose my DOP is 8 and the number of location files is 8. Suppose I have only 8 HDFS files, where one file is 900GB and the others are 100GB. When the tool tries to balance the load it will be forced to put the singleton 900GB into one location file, and put each of the 100GB files in the 7 remaining location files. The load balance skew is 9 to 1. One PQ slave will be working overtime, while the slacker PQ slaves are off enjoying happy hour. If however the total payload (1600 GB) were broken up into smaller HDFS files, the OSCH External Table tool would have an easier time generating a list where each workload for each location file is relatively the same.  Applying Rule 4 above to our DOP of 8, we could divide the workload into160 files that were approximately 10 GB in size.  For this scenario the OSCH External Table tool would populate each location file with 20 HDFS file references, and all location files would have similar workloads (approximately 200GB per location file.) As a rule, when the OSCH External Table tool has to deal with more and smaller files it will be able to create more balanced loads. How small should HDFS files get? Not so small that the HDFS open and close file overhead starts having a substantial impact. For our performance test system (Exadata/BDA with Infiniband), I compared three OSCH loads of 1 TiB. One load had 128 HDFS files living in 64 location files where each HDFS file was about 8GB. I then did the same load with 12800 files where each HDFS file was about 80MB size. The end to end load time was virtually the same. However when I got ridiculously small (i.e. 128000 files at about 8MB per file), it started to make an impact and slow down the load time. What happens if you break rules 3 or 4 above? Nothing draconian, everything will still function. You just won’t be taking full advantage of the generous DOP that was allocated to you by your friendly DBA. The key point of the rules articulated above is this: if you know that HDFS content is ultimately going to be loaded into Oracle using OSCH, it makes sense to chop them up into the right number of files roughly the same size, derived from the DOP that you expect to use for loading. Next Steps So far we have talked about OLH and OSCH as alternative models for loading. That’s not quite the whole story. 
    They can be used together in a way that provides for more efficient OSCH loads and allows one to be more flexible about scheduling on a Hadoop cluster and an Oracle Database to perform load operations. The next lesson will talk about Oracle Data Pump files generated by OLH, and loaded using OSCH. It will also outline the pros and cons of using various load methods. This will be followed up with a final tutorial lesson focusing on how to optimize OLH and OSCH for use on Oracle's engineered systems: specifically Exadata and the BDA.

    Read the article

  • MSBuild cannot find SGen when compiling a solution

    - by Jaxidian
    I've looked at several other SGen-related questions on here and either their answers don't apply or their answers don't fix this for me. I have installed several SDKs to fix this issue with no luck. Reference types should not be changed since this is the only place this is a problem. Once suggestion is to put SGen.exe into the C:\Windows\Microsoft.NET\Framework\v3.5 folder, but that's not been done on the box where this is not a problem. In this scenario, SGen.exe actually exists and is right where it's supposed to be, but MSBuild still is having issues with finding it for some reason! Background: We have a NAnt script that automates our builds. In this scenario, NAnt is calling MSBuild and MSBuild is generating the error claiming to be unable to find SGen. The project is .NET 3.5-based. I have my primary dev environment (64-bit Vista Ultimate) where the script works perfectly and I am attempting to duplicate it in a VM (64-bit Win 7 Ultimate). I THINK I have everything to the point where I should be good-to-go but this fails on the Win7 box (works perfectly on the Vista box). I have done some comparisons between the two boxes and they both look identical in this regard, but it still fails. For example, the HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\.NETFramework's sdkInstallRootv2.0 value is set to C:\Program Files\Microsoft.NET\SDK\v2.0 64bit\ on both machines. In both machines, SGen.exe is in that path's bin subdirectory. NAnt Script: <target name="report-installer" depends="fail-if-environment-not-set"> <exec program="MSBuild.exe" basedir="${framework35.directory}"> <arg value="${tools.directory.current}\ReportInstaller\ReportInstaller.sln" /> <arg value="/p:Configuration=${buildconfiguration.current}" /> </exec> </target> The error message I get is this: report-installer: [exec] Microsoft (R) Build Engine Version 3.5.30729.4926 [exec] [Microsoft .NET Framework, Version 2.0.50727.4927] [exec] Copyright (C) Microsoft Corporation 2007. All rights reserved. [exec] [exec] Build started 4/8/2010 11:28:23 AM. [exec] Project "C:\Projects\Production\Tools\ReportInstaller\ReportInstaller.sln" on node 0 (default targets). [exec] Building solution configuration "Release|Any CPU". [exec] Project "C:\Projects\Production\Tools\ReportInstaller\ReportInstaller.sln" (1) is building "C:\Projects\Production\Tools\ReportInstaller\ReportInstaller.csproj" (2) on node 0 (default targets). [exec] Could not locate the .NET Framework SDK. The task is looking for the path to the .NET Framework SDK at the location specified in the SDKInstallRootv2.0 value of the registry key HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\.NETFramework. You may be able to solve the problem by doing one of the following: 1.) Install the .NET Framework SDK. 2.) Manually set the above registry key to the correct location. [exec] CoreCompile: [exec] Skipping target "CoreCompile" because all output files are up-to-date with respect to the input files. [exec] C:\Windows\Microsoft.NET\Framework\v2.0.50727\Microsoft.Common.targets(1902,9): error MSB3091: Task failed because "sgen.exe" was not found, or the .NET Framework SDK v2.0 is not installed. The task is looking for "sgen.exe" in the "bin" subdirectory beneath the location specified in the SDKInstallRootv2.0 value of the registry key HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\.NETFramework. You may be able to solve the problem by doing one of the following: 1.) Install the .NET Framework SDK v2.0. 2.) Manually set the above registry key to the correct location. 3.) 
Pass the correct location into the "ToolPath" parameter of the task. [exec] Done Building Project "C:\Projects\Production\Tools\ReportInstaller\ReportInstaller.csproj" (default targets) -- FAILED. [exec] Done Building Project "C:\Projects\Production\Tools\ReportInstaller\ReportInstaller.sln" (default targets) -- FAILED. [exec] [exec] Build FAILED. [exec] [exec] "C:\Projects\Production\Tools\ReportInstaller\ReportInstaller.sln" (default target) (1) -> [exec] "C:\Projects\Production\Tools\ReportInstaller\ReportInstaller.csproj" (default target) (2) -> [exec] (GenerateSerializationAssemblies target) -> [exec] C:\Windows\Microsoft.NET\Framework\v2.0.50727\Microsoft.Common.targets(1902,9): error MSB3091: Task failed because "sgen.exe" was not found, or the .NET Framework SDK v2.0 is not installed. The task is looking for "sgen.exe" in the "bin" subdirectory beneath the location specified in the SDKInstallRootv2.0 value of the registry key HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\.NETFramework. You may be able to solve the problem by doing one of the following: 1.) Install the .NET Framework SDK v2.0. 2.) Manually set the above registry key to the correct location. 3.) Pass the correct location into the "ToolPath" parameter of the task. [exec] [exec] 0 Warning(s) [exec] 1 Error(s) [exec] [exec] Time Elapsed 00:00:00.24 [call] C:\Projects\Production\Source\reports.build(15,4): [call] External Program Failed: C:\Windows\Microsoft.NET\Framework\v3.5\MSBuild.exe (return code was 1) What am I doing wrong here that is causing MSBuild to STILL be unable to find SGen?
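
    One more thing worth checking on the 64-bit box, since the GenerateSerializationAssemblies task reads SDKInstallRootv2.0 from the registry: a 32-bit MSBuild/NAnt process on 64-bit Windows resolves HKLM\SOFTWARE to the Wow6432Node view, so the value can look correct in regedit yet be invisible to the build. A sketch of how to verify and, if needed, set it (the SDK path is the one from the question; adjust it to wherever your sgen.exe's bin directory actually lives):

        :: Which registry view actually carries the value?
        reg query "HKLM\SOFTWARE\Microsoft\.NETFramework" /v sdkInstallRootv2.0
        reg query "HKLM\SOFTWARE\Wow6432Node\Microsoft\.NETFramework" /v sdkInstallRootv2.0

        :: If the Wow6432Node copy is missing, add it so a 32-bit MSBuild can see it.
        reg add "HKLM\SOFTWARE\Wow6432Node\Microsoft\.NETFramework" ^
            /v sdkInstallRootv2.0 /t REG_SZ /d "C:\Program Files\Microsoft.NET\SDK\v2.0 64bit"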

    Read the article

  • WPF Blurry Images - Bitmap Class

    - by Luke
    I am using the following sample at http://blogs.msdn.com/dwayneneed/archive/2007/10/05/blurry-bitmaps.aspx within VB.NET. The code is shown below. I am having a problem when my application loads the CPU is pegging 50-70%. I have determined that the problem is with the Bitmap class. The OnLayoutUpdated() method is calling the InvalidateVisual() continously. This is because some points are not returning as equal but rather, Point(0.0,-0.5) Can anyone see any bugs within this code or know a better implmentation for pixel snapping a Bitmap image so it is not blurry? p.s. The sample code was in C#, however I believe that it was converted correctly. Imports System Imports System.Collections.Generic Imports System.Windows Imports System.Windows.Media Imports System.Windows.Media.Imaging Class Bitmap Inherits FrameworkElement ' Use FrameworkElement instead of UIElement so Data Binding works as expected Private _sourceDownloaded As EventHandler Private _sourceFailed As EventHandler(Of ExceptionEventArgs) Private _pixelOffset As Windows.Point Public Sub New() _sourceDownloaded = New EventHandler(AddressOf OnSourceDownloaded) _sourceFailed = New EventHandler(Of ExceptionEventArgs)(AddressOf OnSourceFailed) AddHandler LayoutUpdated, AddressOf OnLayoutUpdated End Sub Public Shared ReadOnly SourceProperty As DependencyProperty = DependencyProperty.Register("Source", GetType(BitmapSource), GetType(Bitmap), New FrameworkPropertyMetadata(Nothing, FrameworkPropertyMetadataOptions.AffectsRender Or FrameworkPropertyMetadataOptions.AffectsMeasure, New PropertyChangedCallback(AddressOf Bitmap.OnSourceChanged))) Public Property Source() As BitmapSource Get Return DirectCast(GetValue(SourceProperty), BitmapSource) End Get Set(ByVal value As BitmapSource) SetValue(SourceProperty, value) End Set End Property Public Shared Function FindParentWindow(ByVal child As DependencyObject) As Window Dim parent As DependencyObject = VisualTreeHelper.GetParent(child) 'Check if this is the end of the tree If parent Is Nothing Then Return Nothing End If Dim parentWindow As Window = TryCast(parent, Window) If parentWindow IsNot Nothing Then Return parentWindow Else ' Use recursion until it reaches a Window Return FindParentWindow(parent) End If End Function Public Event BitmapFailed As EventHandler(Of ExceptionEventArgs) ' Return our measure size to be the size needed to display the bitmap pixels. ' ' Use MeasureOverride instead of MeasureCore so Data Binding works as expected. 
' Protected Overloads Overrides Function MeasureCore(ByVal availableSize As Size) As Size Protected Overloads Overrides Function MeasureOverride(ByVal availableSize As Size) As Size Dim measureSize As New Size() Dim bitmapSource As BitmapSource = Source If bitmapSource IsNot Nothing Then Dim ps As PresentationSource = PresentationSource.FromVisual(Me) If Me.VisualParent IsNot Nothing Then Dim window As Window = window.GetWindow(Me.VisualParent) If window IsNot Nothing Then ps = PresentationSource.FromVisual(window.GetWindow(Me.VisualParent)) ElseIf FindParentWindow(Me) IsNot Nothing Then ps = PresentationSource.FromVisual(FindParentWindow(Me)) End If End If ' If ps IsNot Nothing Then Dim fromDevice As Matrix = ps.CompositionTarget.TransformFromDevice Dim pixelSize As New Vector(bitmapSource.PixelWidth, bitmapSource.PixelHeight) Dim measureSizeV As Vector = fromDevice.Transform(pixelSize) measureSize = New Size(measureSizeV.X, measureSizeV.Y) Else measureSize = New Size(bitmapSource.PixelWidth, bitmapSource.PixelHeight) End If End If Return measureSize End Function Protected Overloads Overrides Sub OnRender(ByVal dc As DrawingContext) Dim bitmapSource As BitmapSource = Me.Source If bitmapSource IsNot Nothing Then _pixelOffset = GetPixelOffset() ' Render the bitmap offset by the needed amount to align to pixels. dc.DrawImage(bitmapSource, New Rect(_pixelOffset, DesiredSize)) End If End Sub Private Shared Sub OnSourceChanged(ByVal d As DependencyObject, ByVal e As DependencyPropertyChangedEventArgs) Dim bitmap As Bitmap = DirectCast(d, Bitmap) Dim oldValue As BitmapSource = DirectCast(e.OldValue, BitmapSource) Dim newValue As BitmapSource = DirectCast(e.NewValue, BitmapSource) If ((oldValue IsNot Nothing) AndAlso (bitmap._sourceDownloaded IsNot Nothing)) AndAlso (Not oldValue.IsFrozen AndAlso (TypeOf oldValue Is BitmapSource)) Then RemoveHandler DirectCast(oldValue, BitmapSource).DownloadCompleted, bitmap._sourceDownloaded RemoveHandler DirectCast(oldValue, BitmapSource).DownloadFailed, bitmap._sourceFailed ' ((BitmapSource)newValue).DecodeFailed -= bitmap._sourceFailed; // 3.5 End If If ((newValue IsNot Nothing) AndAlso (TypeOf newValue Is BitmapSource)) AndAlso Not newValue.IsFrozen Then AddHandler DirectCast(newValue, BitmapSource).DownloadCompleted, bitmap._sourceDownloaded AddHandler DirectCast(newValue, BitmapSource).DownloadFailed, bitmap._sourceFailed ' ((BitmapSource)newValue).DecodeFailed += bitmap._sourceFailed; // 3.5 End If End Sub Private Sub OnSourceDownloaded(ByVal sender As Object, ByVal e As EventArgs) InvalidateMeasure() InvalidateVisual() End Sub Private Sub OnSourceFailed(ByVal sender As Object, ByVal e As ExceptionEventArgs) Source = Nothing ' setting a local value seems scetchy... RaiseEvent BitmapFailed(Me, e) End Sub Private Sub OnLayoutUpdated(ByVal sender As Object, ByVal e As EventArgs) ' This event just means that layout happened somewhere. However, this is ' what we need since layout anywhere could affect our pixel positioning. Dim pixelOffset As Windows.Point = GetPixelOffset() If Not AreClose(pixelOffset, _pixelOffset) Then InvalidateVisual() End If End Sub ' Gets the matrix that will convert a Windows.Point from "above" the ' coordinate space of a visual into the the coordinate space ' "below" the visual. 
Private Function GetVisualTransform(ByVal v As Visual) As Matrix If v IsNot Nothing Then Dim m As Matrix = Matrix.Identity Dim transform As Transform = VisualTreeHelper.GetTransform(v) If transform IsNot Nothing Then Dim cm As Matrix = transform.Value m = Matrix.Multiply(m, cm) End If Dim offset As Vector = VisualTreeHelper.GetOffset(v) m.Translate(offset.X, offset.Y) Return m End If Return Matrix.Identity End Function Private Function TryApplyVisualTransform(ByVal Point As Windows.Point, ByVal v As Visual, ByVal inverse As Boolean, ByVal throwOnError As Boolean, ByRef success As Boolean) As Windows.Point success = True If v IsNot Nothing Then Dim visualTransform As Matrix = GetVisualTransform(v) If inverse Then If Not throwOnError AndAlso Not visualTransform.HasInverse Then success = False Return New Windows.Point(0, 0) End If visualTransform.Invert() End If Point = visualTransform.Transform(Point) End If Return Point End Function Private Function ApplyVisualTransform(ByVal Point As Windows.Point, ByVal v As Visual, ByVal inverse As Boolean) As Windows.Point Dim success As Boolean = True Return TryApplyVisualTransform(Point, v, inverse, True, success) End Function Private Function GetPixelOffset() As Windows.Point Dim pixelOffset As New Windows.Point() Dim ps As PresentationSource = PresentationSource.FromVisual(Me) If ps IsNot Nothing Then Dim rootVisual As Visual = ps.RootVisual ' Transform (0,0) from this element up to pixels. pixelOffset = Me.TransformToAncestor(rootVisual).Transform(pixelOffset) pixelOffset = ApplyVisualTransform(pixelOffset, rootVisual, False) pixelOffset = ps.CompositionTarget.TransformToDevice.Transform(pixelOffset) ' Round the origin to the nearest whole pixel. pixelOffset.X = Math.Round(pixelOffset.X) pixelOffset.Y = Math.Round(pixelOffset.Y) ' Transform the whole-pixel back to this element. pixelOffset = ps.CompositionTarget.TransformFromDevice.Transform(pixelOffset) pixelOffset = ApplyVisualTransform(pixelOffset, rootVisual, True) pixelOffset = rootVisual.TransformToDescendant(Me).Transform(pixelOffset) End If Return pixelOffset End Function Private Function AreClose(ByVal Point1 As Windows.Point, ByVal Point2 As Windows.Point) As Boolean Return AreClose(Point1.X, Point2.X) AndAlso AreClose(Point1.Y, Point2.Y) End Function Private Function AreClose(ByVal value1 As Double, ByVal value2 As Double) As Boolean If value1 = value2 Then Return True End If Dim delta As Double = value1 - value2 Return ((delta < 0.00000153) AndAlso (delta > -0.00000153)) End Function End Class
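
    Given that GetPixelOffset() keeps coming back as Point(0.0,-0.5) while _pixelOffset still holds the old value, every LayoutUpdated pass re-enters InvalidateVisual() because _pixelOffset is only refreshed inside OnRender. A minimal sketch of one way to break that cycle (this is a guess at the cause, not a confirmed fix) is to record the offset at the point where the invalidation is requested:

        Private Sub OnLayoutUpdated(ByVal sender As Object, ByVal e As EventArgs)
            ' Layout happened somewhere; recompute where our pixels would land.
            Dim pixelOffset As Windows.Point = GetPixelOffset()
            If Not AreClose(pixelOffset, _pixelOffset) Then
                ' Remember the offset we just measured so identical follow-up
                ' LayoutUpdated events stop scheduling new renders.
                ' (Assumption: OnRender will still overwrite this with its own value.)
                _pixelOffset = pixelOffset
                InvalidateVisual()
            End If
        End Sub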

    Read the article

  • Problem calling Linux C code from FIQ handler

    - by fastmonkeywheels
    I'm working on an armv6 core and have an FIQ hander that works great when I do all of my work in it. However I need to branch to some additional code that's too large for the FIQ memory area. The FIQ handler gets copied from fiq_start to fiq_end to 0xFFFF001C when registered static void test_fiq_handler(void) { asm volatile("\ .global fiq_start\n\ fiq_start:"); // clear gpio irq asm("ldr r10, GPIO_BASE_ISR"); asm("ldr r9, [r10]"); asm("orr r9, #0x04"); asm("str r9, [r10]"); // clear force register asm("ldr r10, AVIC_BASE_INTFRCH"); asm("ldr r9, [r10]"); asm("mov r9, #0"); asm("str r9, [r10]"); // prepare branch register asm(" ldr r11, fiq_handler"); // save all registers, build sp and branch to C asm(" adr r9, regpool"); asm(" stmia r9, {r0 - r8, r14}"); asm(" adr sp, fiq_sp"); asm(" ldr sp, [sp]"); asm(" add lr, pc,#4"); asm(" mov pc, r11"); #if 0 asm("ldr r10, IOMUX_ADDR12"); asm("ldr r9, [r10]"); asm("orr r9, #0x08 @ top/vertex LED"); asm("str r9,[r10] @turn on LED"); asm("bic r9, #0x08 @ top/vertex LED"); asm("str r9,[r10] @turn on LED"); #endif asm(" adr r9, regpool"); asm(" ldmia r9, {r0 - r8, r14}"); // return asm("subs pc, r14, #4"); asm("IOMUX_ADDR12: .word 0xFC2A4000"); asm("AVIC_BASE_INTCNTL: .word 0xFC400000"); asm("AVIC_BASE_INTENNUM: .word 0xFC400008"); asm("AVIC_BASE_INTDISNUM: .word 0xFC40000C"); asm("AVIC_BASE_FIVECSR: .word 0xFC400044"); asm("AVIC_BASE_INTFRCH: .word 0xFC400050"); asm("GPIO_BASE_ISR: .word 0xFC2CC018"); asm(".globl fiq_handler"); asm("fiq_sp: .long fiq_stack+120"); asm("fiq_handler: .long 0"); asm("regpool: .space 40"); asm(".pool"); asm(".align 5"); asm("fiq_stack: .space 124"); asm(".global fiq_end"); asm("fiq_end:"); } fiq_hander gets set to the following function: static void fiq_flip_pins(void) { asm("ldr r10, IOMUX_ADDR12_k"); asm("ldr r9, [r10]"); asm("orr r9, #0x08 @ top/vertex LED"); asm("str r9,[r10] @turn on LED"); asm("bic r9, #0x08 @ top/vertex LED"); asm("str r9,[r10] @turn on LED"); asm("IOMUX_ADDR12_k: .word 0xFC2A4000"); } EXPORT_SYMBOL(fiq_flip_pins); I know that since the FIQ handler operates outside of any normal kernel API's and that it is a rather high priority interrupt I must ensure that whatever I call is already swapped into memory. I do this by having the fiq_flip_pins function defined in the monolithic kernel and not as a module which gets vmalloc. If I don't branch to the fiq_flip_pins function, and instead do the work in the test_fiq_handler function everything works as expected. It's the branching that's causing me problems at the moment. Right after branching I get a kernel panic about a paging request. I don't understand why I'm getting the paging request. 
fiq_flip_pins is in the kernel at: c00307ec t fiq_flip_pins Unable to handle kernel paging request at virtual address 736e6f63 pgd = c3dd0000 [736e6f63] *pgd=00000000 Internal error: Oops: 5 [#1] PREEMPT Modules linked in: hello_1 CPU: 0 Not tainted (2.6.31-207-g7286c01-svn4 #122) PC is at strnlen+0x10/0x28 LR is at string+0x38/0xcc pc : [<c016b004>] lr : [<c016c754>] psr: a00001d3 sp : c3817ea0 ip : 736e6f63 fp : 00000400 r10: c03cab5c r9 : c0339ae0 r8 : 736e6f63 r7 : c03caf5c r6 : c03cab6b r5 : ffffffff r4 : 00000000 r3 : 00000004 r2 : 00000000 r1 : ffffffff r0 : 736e6f63 Flags: NzCv IRQs off FIQs off Mode SVC_32 ISA ARM Segment user Control: 00c5387d Table: 83dd0008 DAC: 00000015 Process sh (pid: 1663, stack limit = 0xc3816268) Stack: (0xc3817ea0 to 0xc3818000) Since there are no API calls in my code I have to assume that something is going wrong in the C call and back. Any help solving this is appreciated.
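
    For comparison, the conventional way to call a C function from hand-written ARM assembly is to let the usual call sequence set the return address and to preserve the caller-saved registers (r0-r3, ip, lr) around the call, rather than computing lr manually with add lr, pc, #4. A sketch of what that call site could look like, assuming fiq_sp points at mapped memory and fiq_handler holds the address of the kernel-resident fiq_flip_pins:

        @ give the C code a usable stack (must be mapped, FIQ-safe memory)
        adr     sp, fiq_sp
        ldr     sp, [sp]

        @ save the registers the C callee is allowed to clobber (AAPCS)
        stmfd   sp!, {r0-r3, ip, lr}

        ldr     r11, fiq_handler      @ address of fiq_flip_pins
        mov     lr, pc                @ return address = instruction after the jump
        mov     pc, r11               @ (or simply: blx r11 on ARMv6)

        @ restore and continue with the FIQ epilogue
        ldmfd   sp!, {r0-r3, ip, lr}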

    Read the article

  • Help with force close occurrences in my app

    - by Ken
    This is the last issue with this app. Periodic force close situations. I think something should be on another thread but I'm not sure what. Anyway, I can always count on a freeze on first install. If I wait, eventually (maybe 10 seconds) the app comes around, maybe more. here is an excerpt from logcat--the three lines occur after full layout is displayed and I attempt to touch a [game] 'peg' which should spawn a sprite, but the freeze occurs there. Can anybody tell what the issue might be?: I/System.out( 279): TouchDown (17.0,106.0) I/System.out( 279): checking (17,106 I/System.out( 279): hit for bounds Rect(3, 98 - 32, 130) [FREEZE BEGINS] W/webcore ( 279): Can't get the viewWidth after the first layout W/WindowManager( 60): Key dispatching timed out sending to com.live.brainbuilderfree/com.live.brainbuilderfree.BrainBuilderFree W/WindowManager( 60): Previous dispatch state: null W/WindowManager( 60): Current dispatch state: {{null to Window{43fd87a0 com.live.brainbuilderfree/com.live.brainbuilderfree.BrainBuilderFree paused=false} @ 1295232880017 lw=Window{43fd87a0 com.live.brainbuilderfree/com.live.brainbuilderfree.BrainBuilderFree paused=false} lb=android.os.BinderProxy@440523b8 fin=false gfw=true ed=true tts=0 wf=false fp=false mcf=Window{43fd87a0 com.live.brainbuilderfree/com.live.brainbuilderfree.BrainBuilderFree paused=false}}} I/Process ( 60): Sending signal. PID: 279 SIG: 3 I/dalvikvm( 279): threadid=3: reacting to signal 3 D/dalvikvm( 124): GC_EXPLICIT freed 1754 objects / 106104 bytes in 7365ms I/Process ( 60): Sending signal. PID: 60 SIG: 3 I/dalvikvm( 60): threadid=3: reacting to signal 3 I/dalvikvm( 60): Wrote stack traces to '/data/anr/traces.txt' I/Process ( 60): Sending signal. PID: 263 SIG: 3 I/dalvikvm( 263): threadid=3: reacting to signal 3 I/dalvikvm( 279): Wrote stack traces to '/data/anr/traces.txt' I/Process ( 60): Sending signal. PID: 117 SIG: 3 I/dalvikvm( 117): threadid=3: reacting to signal 3 I/dalvikvm( 117): Wrote stack traces to '/data/anr/traces.txt' I/Process ( 60): Sending signal. PID: 254 SIG: 3 I/Process ( 60): Sending signal. PID: 121 SIG: 3 I/dalvikvm( 121): threadid=3: reacting to signal 3 D/AudioSink( 34): bufferCount (4) is too small and increased to 12 I/System.out( 279): making white sprite I/Process ( 60): Sending signal. PID: 186 SIG: 3 I/Process ( 60): Sending signal. PID: 232 SIG: 3 D/MillennialMediaAdSDK( 279): size: 1 D/MillennialMediaAdSDK( 279): num: 1 D/AdWhirl SDK( 279): Millennial success D/AdWhirl SDK( 279): Will call rotateAd() in 120 seconds I/dalvikvm( 232): threadid=3: reacting to signal 3 I/dalvikvm( 121): Wrote stack traces to '/data/anr/traces.txt' I/Process ( 60): Sending signal. PID: 222 SIG: 3 I/MillennialMediaAdSDK( 279): Millennial ad return success D/MillennialMediaAdSDK( 279): View height: 0 D/MillennialMediaAdSDK( 279): nextUrl: [deleted] I/Process ( 60): Sending signal. PID: 239 SIG: 3 I/Process ( 60): Sending signal. PID: 213 SIG: 3 D/AdWhirl SDK( 279): Added subview D/AdWhirl SDK( 279): Pinging URL: [deleted] I/Process ( 60): Sending signal. PID: 197 SIG: 3 I/dalvikvm( 197): threadid=3: reacting to signal 3 I/Process ( 60): Sending signal. PID: 164 SIG: 3 I/dalvikvm( 164): threadid=3: reacting to signal 3 D/dalvikvm( 279): GC_FOR_MALLOC freed 7735 objects / 639688 bytes in 217ms I/Process ( 60): Sending signal. PID: 124 SIG: 3 I/dalvikvm( 124): threadid=3: reacting to signal 3 I/Process ( 60): Sending signal. PID: 158 SIG: 3 I/dalvikvm( 158): threadid=3: reacting to signal 3 I/Process ( 60): Sending signal. 
PID: 127 SIG: 3 E/ActivityManager( 60): ANR in com.live.brainbuilderfree (com.live.brainbuilderfree/.BrainBuilderFree) E/ActivityManager( 60): Reason: keyDispatchingTimedOut E/ActivityManager( 60): Load: 3.46 / 1.69 / 0.65 E/ActivityManager( 60): CPU usage from 28095ms to 140ms ago: E/ActivityManager( 60): system_server: 30% = 25% user + 4% kernel / faults: 3119 minor 66 major E/ActivityManager( 60): mediaserver: 11% = 7% user + 4% kernel / faults: 746 minor 17 major E/ActivityManager( 60): com.svox.pico: 1% = 0% user + 1% kernel / faults: 2833 minor 8 major E/ActivityManager( 60): d.process.acore: 1% = 0% user + 0% kernel / faults: 1146 minor 36 major E/ActivityManager( 60): ndroid.launcher: 1% = 0% user + 0% kernel / faults: 852 minor 6 major E/ActivityManager( 60): m.android.phone: 0% = 0% user + 0% kernel / faults: 621 minor 7 major E/ActivityManager( 60): kswapd0: 0% = 0% user + 0% kernel E/ActivityManager( 60): ronsoft.openwnn: 0% = 0% user + 0% kernel / faults: 337 minor 2 major E/ActivityManager( 60): adbd: 0% = 0% user + 0% kernel / faults: 3 minor E/ActivityManager( 60): zygote: 0% = 0% user + 0% kernel / faults: 169 minor E/ActivityManager( 60): events/0: 0% = 0% user + 0% kernel E/ActivityManager( 60): rild: 0% = 0% user + 0% kernel / faults: 103 minor 3 major E/ActivityManager( 60): pdflush: 0% = 0% user + 0% kernel E/ActivityManager( 60): .quicksearchbox: 0% = 0% user + 0% kernel / faults: 61 minor E/ActivityManager( 60): id.defcontainer: 0% = 0% user + 0% kernel / faults: 12 minor E/ActivityManager( 60): +rainbuilderfree: 0% = 0% user + 0% kernel E/ActivityManager( 60): +sh: 0% = 0% user + 0% kernel E/ActivityManager( 60): +app_process: 0% = 0% user + 0% kernel E/ActivityManager( 60): TOTAL: 100% = 76% user + 21% kernel + 2% iowait + 0% irq + 0% softirq I/dalvikvm( 127): threadid=3: reacting to signal 3 I/dalvikvm( 186): threadid=3: reacting to signal 3 D/dalvikvm( 60): GC_FOR_MALLOC freed 3747 objects / 228920 bytes in 609ms I/dalvikvm-heap( 60): Grow heap (frag case) to 4.759MB for 36896-byte allocation I/dalvikvm( 239): threadid=3: reacting to signal 3 D/dalvikvm( 60): GC_FOR_MALLOC freed 226 objects / 9952 bytes in 546ms I/dalvikvm( 213): threadid=3: reacting to signal 3 D/dalvikvm( 60): GC_FOR_MALLOC freed 105 objects / 5816 bytes in 492ms I/dalvikvm-heap( 60): Grow heap (frag case) to 4.815MB for 49188-byte allocation I/dalvikvm( 222): threadid=3: reacting to signal 3 D/dalvikvm( 60): GC_FOR_MALLOC freed 77 objects / 5232 bytes in 546ms I/dalvikvm( 254): threadid=3: reacting to signal 3 D/dalvikvm( 60): GC_FOR_MALLOC freed 105 objects / 55856 bytes in 521ms I/dalvikvm-heap( 60): Grow heap (frag case) to 4.876MB for 98360-byte allocation D/dalvikvm( 60): GC_FOR_MALLOC freed 58 objects / 3632 bytes in 340ms D/dalvikvm( 60): GC_FOR_MALLOC freed 1093 objects / 185256 bytes in 572ms W/WindowManager( 60): Continuing to wait for key to be dispatched I/System.out( 279): TouchMove (117.0,124.0) I/System.out( 279): TouchUP (117.0,124.0) D/dalvikvm( 60): GC_FOR_MALLOC freed 141 objects / 108328 bytes in 564ms I/ARMAssembler( 60): generated scanline__00000077:03515104_00000000_00000000 [ 33 ipp] (47 ins) at [0x313d78:0x313e34] in 11621593 ns W/InputManagerService( 60): Window already focused, ignoring focus gain of: com.android.internal.view.IInputMethodClient$Stub$Proxy@43f66a10 I/dalvikvm( 239): Wrote stack traces to '/data/anr/traces.txt' I/dalvikvm( 263): Wrote stack traces to '/data/anr/traces.txt' etc...
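
    The log itself names the problem: keyDispatchingTimedOut means the main (UI) thread was blocked for several seconds while the touch was being handled, which matches the hunch that something should run on another thread. A sketch of the usual pattern for that era of Android (class, field and method names below are hypothetical placeholders, not from this app) is to build the sprite off the UI thread and only attach it to the view afterwards:

        import android.os.AsyncTask;
        import android.view.MotionEvent;

        // Hypothetical sketch: keep the touch handler cheap so input dispatch never times out.
        private class SpawnSpriteTask extends AsyncTask<MotionEvent, Void, Sprite> {

            @Override
            protected Sprite doInBackground(MotionEvent... events) {
                // Expensive part runs off the UI thread: bitmap decode, sprite setup, etc.
                // (createWhiteSprite() stands in for whatever "making white sprite" does.)
                return createWhiteSprite(events[0].getX(), events[0].getY());
            }

            @Override
            protected void onPostExecute(Sprite sprite) {
                // Back on the UI thread: safe to touch views here.
                gameView.addSprite(sprite);
                gameView.invalidate();
            }
        }

        // In the touch handler, instead of doing the work inline:
        // new SpawnSpriteTask().execute(event);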

    Read the article

  • Why does ffmpeg stop randomly in the middle of a process?

    - by acidzombie24
    ffmpeg feels like its taking a long time. I then look at my output file and i see it stops between 6 and 8mbs. A fully encoded file is about 14mb. Why does ffmpeg stop? My code locks up on StandardOutput.ReadToEnd();. I had to kill the process (after seeing it not move for more then 10 seconds when i see it update every second previously) then i get the results of stdout and err. stdout is "" stderr is below. The output msg shows the filesize ended. I also see a drop in my CPU usage when it stops. I copyed the argument from visual studios. CD to the same working directory and ran the cmd (bin/ffmpeg) and pasted the argument. It was able to complete. int soundProcess(string infn, string outfn) { string aa, aa2; aa = aa2 = "DEAD"; var app = new Process(); app.StartInfo.UseShellExecute = false; app.StartInfo.RedirectStandardOutput = true; app.StartInfo.RedirectStandardError = true; //*/ app.StartInfo.FileName = @"bin\ffmpeg.exe"; app.StartInfo.Arguments = string.Format(@"-i ""{0}"" -ab 192k -y {2} ""{1}""", infn, outfn, param); app.Start(); try { app.PriorityClass = ProcessPriorityClass.BelowNormal; } catch (Exception ex) { if (!Regex.IsMatch(ex.Message, @"Cannot process request because the process .*has exited")) throw ex; } aa = app.StandardOutput.ReadToEnd(); aa2 = app.StandardError.ReadToEnd(); app.WaitForExit(); if (aa2.IndexOf("could not find codec parameters") != -1) return 1; else if (aa == "DEAD" || aa2 == "DEAD") return -1; else if (aa2.Length != 0) return -2; else return 0; } The output of stderr. stdout is empty. FFmpeg version SVN-r15815, Copyright (c) 2000-2008 Fabrice Bellard, et al. configuration: --enable-memalign-hack --enable-postproc --enable-swscale --enable-gpl --enable-libfaac --enable-libfaad --enable-libgsm --enable-libmp3lame --enable-libvorbis --enable-libtheora --enable-libx264 --enable-libxvid --disable-ffserver --disable-vhook --enable-avisynth --enable-pthreads libavutil 49.12. 0 / 49.12. 0 libavcodec 52. 3. 0 / 52. 3. 0 libavformat 52.23. 1 / 52.23. 1 libavdevice 52. 1. 0 / 52. 1. 0 libswscale 0. 6. 1 / 0. 6. 1 libpostproc 51. 2. 0 / 51. 2. 
0 built on Nov 13 2008 10:28:29, gcc: 4.2.4 (TDM-1 for MinGW) Input #0, mov,mp4,m4a,3gp,3g2,mj2, from 'C:\dev\src\trunk\prjname\prjname\App_Data/temp/m/o/6304266424778814852': Duration: 00:12:53.36, start: 0.000000, bitrate: 154 kb/s Stream #0.0(und): Audio: aac, 44100 Hz, stereo, s16 Output #0, ipod, to 'C:\dev\src\trunk\prjname\prjname\App_Data\temp\m\o\2.m4a': Stream #0.0(und): Audio: libfaac, 44100 Hz, stereo, s16, 192 kb/s Stream mapping: Stream #0.0 -> #0.0 Press [q] to stop encoding size= 87kB time=4.74 bitrate= 150.7kbits/s size= 168kB time=9.06 bitrate= 151.9kbits/s size= 265kB time=14.28 bitrate= 151.8kbits/s size= 377kB time=20.29 bitrate= 152.1kbits/s size= 487kB time=26.22 bitrate= 152.1kbits/s size= 594kB time=32.02 bitrate= 152.1kbits/s size= 699kB time=37.64 bitrate= 152.1kbits/s size= 808kB time=43.54 bitrate= 152.0kbits/s size= 930kB time=50.09 bitrate= 152.2kbits/s size= 1058kB time=57.05 bitrate= 152.0kbits/s size= 1193kB time=64.23 bitrate= 152.1kbits/s size= 1329kB time=71.63 bitrate= 152.0kbits/s size= 1450kB time=78.16 bitrate= 152.0kbits/s size= 1578kB time=85.05 bitrate= 152.0kbits/s size= 1706kB time=92.00 bitrate= 152.0kbits/s size= 1836kB time=98.94 bitrate= 152.0kbits/s size= 1971kB time=106.25 bitrate= 151.9kbits/s size= 2107kB time=113.57 bitrate= 152.0kbits/s size= 2214kB time=119.33 bitrate= 152.0kbits/s size= 2345kB time=126.39 bitrate= 152.0kbits/s size= 2479kB time=133.56 bitrate= 152.0kbits/s size= 2611kB time=140.76 bitrate= 152.0kbits/s size= 2745kB time=147.91 bitrate= 152.1kbits/s size= 2880kB time=155.20 bitrate= 152.0kbits/s size= 3013kB time=162.40 bitrate= 152.0kbits/s size= 3146kB time=169.58 bitrate= 152.0kbits/s size= 3277kB time=176.61 bitrate= 152.0kbits/s size= 3412kB time=183.90 bitrate= 152.0kbits/s size= 3540kB time=190.80 bitrate= 152.0kbits/s size= 3670kB time=197.81 bitrate= 152.0kbits/s size= 3805kB time=205.08 bitrate= 152.0kbits/s size= 3932kB time=211.93 bitrate= 152.0kbits/s size= 4052kB time=218.38 bitrate= 152.0kbits/s size= 4171kB time=224.82 bitrate= 152.0kbits/s size= 4277kB time=230.55 bitrate= 152.0kbits/s size= 4378kB time=235.96 bitrate= 152.0kbits/s size= 4486kB time=241.79 bitrate= 152.0kbits/s size= 4592kB time=247.50 bitrate= 152.0kbits/s size= 4698kB time=253.21 bitrate= 152.0kbits/s size= 4804kB time=258.95 bitrate= 152.0kbits/s size= 4906kB time=264.41 bitrate= 152.0kbits/s size= 5012kB time=270.09 bitrate= 152.0kbits/s size= 5118kB time=275.85 bitrate= 152.0kbits/s size= 5234kB time=282.10 bitrate= 152.0kbits/s size= 5331kB time=287.39 bitrate= 151.9kbits/s size= 5445kB time=293.55 bitrate= 152.0kbits/s size= 5555kB time=299.40 bitrate= 152.0kbits/s size= 5665kB time=305.37 bitrate= 152.0kbits/s size= 5766kB time=310.80 bitrate= 152.0kbits/s size= 5876kB time=316.70 bitrate= 152.0kbits/s size= 5984kB time=322.50 bitrate= 152.0kbits/s size= 6094kB time=328.49 bitrate= 152.0kbits/s size= 6212kB time=334.76 bitrate= 152.0kbits/s size= 6327kB time=340.99 bitrate= 152.0kbits/s
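
    The stall is consistent with a classic redirected-stream deadlock rather than anything ffmpeg-specific: ffmpeg writes its progress lines to stderr, and because the code blocks on StandardOutput.ReadToEnd() first, the stderr pipe buffer eventually fills and ffmpeg stops until someone drains it, which never happens. A sketch of the usual fix, keeping the shape of soundProcess above (only one stream is read synchronously; the other is drained via the Process events):

        // Drain stderr in the background so ffmpeg can never block on a full pipe.
        var stderr = new System.Text.StringBuilder();
        app.ErrorDataReceived += (s, e) => { if (e.Data != null) stderr.AppendLine(e.Data); };

        app.Start();
        app.BeginErrorReadLine();                    // asynchronous stderr reader
        aa = app.StandardOutput.ReadToEnd();         // safe: only stdout is read synchronously
        app.WaitForExit();
        aa2 = stderr.ToString();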

    Read the article

  • DirectX 10 Primitive is not displayed

    - by pypmannetjies
    I am trying to write my first DirectX 10 program that displays a triangle. Everything compiles fine, and the render function is called, since the background changes to black. However, the triangle I'm trying to draw with a triangle strip primitive is not displayed at all. The Initialization function: bool InitDirect3D(HWND hWnd, int width, int height) { //****** D3DDevice and SwapChain *****// DXGI_SWAP_CHAIN_DESC swapChainDesc; ZeroMemory(&swapChainDesc, sizeof(swapChainDesc)); swapChainDesc.BufferCount = 1; swapChainDesc.BufferDesc.Width = width; swapChainDesc.BufferDesc.Height = height; swapChainDesc.BufferDesc.Format = DXGI_FORMAT_R8G8B8A8_UNORM; swapChainDesc.BufferDesc.RefreshRate.Numerator = 60; swapChainDesc.BufferDesc.RefreshRate.Denominator = 1; swapChainDesc.BufferUsage = DXGI_USAGE_RENDER_TARGET_OUTPUT; swapChainDesc.OutputWindow = hWnd; swapChainDesc.SampleDesc.Count = 1; swapChainDesc.SampleDesc.Quality = 0; swapChainDesc.Windowed = TRUE; if (FAILED(D3D10CreateDeviceAndSwapChain( NULL, D3D10_DRIVER_TYPE_HARDWARE, NULL, 0, D3D10_SDK_VERSION, &swapChainDesc, &pSwapChain, &pD3DDevice))) return fatalError(TEXT("Hardware does not support DirectX 10!")); //***** Shader *****// if (FAILED(D3DX10CreateEffectFromFile( TEXT("basicEffect.fx"), NULL, NULL, "fx_4_0", D3D10_SHADER_ENABLE_STRICTNESS, 0, pD3DDevice, NULL, NULL, &pBasicEffect, NULL, NULL))) return fatalError(TEXT("Could not load effect file!")); pBasicTechnique = pBasicEffect->GetTechniqueByName("Render"); pViewMatrixEffectVariable = pBasicEffect->GetVariableByName( "View" )->AsMatrix(); pProjectionMatrixEffectVariable = pBasicEffect->GetVariableByName( "Projection" )->AsMatrix(); pWorldMatrixEffectVariable = pBasicEffect->GetVariableByName( "World" )->AsMatrix(); //***** Input Assembly Stage *****// D3D10_INPUT_ELEMENT_DESC layout[] = { {"POSITION", 0, DXGI_FORMAT_R32G32B32_FLOAT, 0, 0, D3D10_INPUT_PER_VERTEX_DATA, 0}, {"COLOR", 0, DXGI_FORMAT_R32G32B32A32_FLOAT, 0, 12, D3D10_INPUT_PER_VERTEX_DATA, 0} }; UINT numElements = 2; D3D10_PASS_DESC PassDesc; pBasicTechnique->GetPassByIndex(0)->GetDesc(&PassDesc); if (FAILED( pD3DDevice->CreateInputLayout( layout, numElements, PassDesc.pIAInputSignature, PassDesc.IAInputSignatureSize, &pVertexLayout))) return fatalError(TEXT("Could not create Input Layout.")); pD3DDevice->IASetInputLayout( pVertexLayout ); //***** Vertex buffer *****// UINT numVertices = 100; D3D10_BUFFER_DESC bd; bd.Usage = D3D10_USAGE_DYNAMIC; bd.ByteWidth = sizeof(vertex) * numVertices; bd.BindFlags = D3D10_BIND_VERTEX_BUFFER; bd.CPUAccessFlags = D3D10_CPU_ACCESS_WRITE; bd.MiscFlags = 0; if (FAILED(pD3DDevice->CreateBuffer(&bd, NULL, &pVertexBuffer))) return fatalError(TEXT("Could not create vertex buffer!"));; UINT stride = sizeof(vertex); UINT offset = 0; pD3DDevice->IASetVertexBuffers( 0, 1, &pVertexBuffer, &stride, &offset ); //***** Rasterizer *****// // Set the viewport viewPort.Width = width; viewPort.Height = height; viewPort.MinDepth = 0.0f; viewPort.MaxDepth = 1.0f; viewPort.TopLeftX = 0; viewPort.TopLeftY = 0; pD3DDevice->RSSetViewports(1, &viewPort); D3D10_RASTERIZER_DESC rasterizerState; rasterizerState.CullMode = D3D10_CULL_NONE; rasterizerState.FillMode = D3D10_FILL_SOLID; rasterizerState.FrontCounterClockwise = true; rasterizerState.DepthBias = false; rasterizerState.DepthBiasClamp = 0; rasterizerState.SlopeScaledDepthBias = 0; rasterizerState.DepthClipEnable = true; rasterizerState.ScissorEnable = false; rasterizerState.MultisampleEnable = false; rasterizerState.AntialiasedLineEnable = true; 
ID3D10RasterizerState* pRS; pD3DDevice->CreateRasterizerState(&rasterizerState, &pRS); pD3DDevice->RSSetState(pRS); //***** Output Merger *****// // Get the back buffer from the swapchain ID3D10Texture2D *pBackBuffer; if (FAILED(pSwapChain->GetBuffer(0, __uuidof(ID3D10Texture2D), (LPVOID*)&pBackBuffer))) return fatalError(TEXT("Could not get back buffer.")); // create the render target view if (FAILED(pD3DDevice->CreateRenderTargetView(pBackBuffer, NULL, &pRenderTargetView))) return fatalError(TEXT("Could not create the render target view.")); // release the back buffer pBackBuffer->Release(); // set the render target pD3DDevice->OMSetRenderTargets(1, &pRenderTargetView, NULL); return true; } The render function: void Render() { if (pD3DDevice != NULL) { pD3DDevice->ClearRenderTargetView(pRenderTargetView, D3DXCOLOR(0.0f, 0.0f, 0.0f, 0.0f)); //create world matrix static float r; D3DXMATRIX w; D3DXMatrixIdentity(&w); D3DXMatrixRotationY(&w, r); r += 0.001f; //set effect matrices pWorldMatrixEffectVariable->SetMatrix(w); pViewMatrixEffectVariable->SetMatrix(viewMatrix); pProjectionMatrixEffectVariable->SetMatrix(projectionMatrix); //fill vertex buffer with vertices UINT numVertices = 3; vertex* v = NULL; //lock vertex buffer for CPU use pVertexBuffer->Map(D3D10_MAP_WRITE_DISCARD, 0, (void**) &v ); v[0] = vertex( D3DXVECTOR3(-1,-1,0), D3DXVECTOR4(1,0,0,1) ); v[1] = vertex( D3DXVECTOR3(0,1,0), D3DXVECTOR4(0,1,0,1) ); v[2] = vertex( D3DXVECTOR3(1,-1,0), D3DXVECTOR4(0,0,1,1) ); pVertexBuffer->Unmap(); // Set primitive topology pD3DDevice->IASetPrimitiveTopology( D3D10_PRIMITIVE_TOPOLOGY_TRIANGLESTRIP ); //get technique desc D3D10_TECHNIQUE_DESC techDesc; pBasicTechnique->GetDesc(&techDesc); for(UINT p = 0; p < techDesc.Passes; ++p) { //apply technique pBasicTechnique->GetPassByIndex(p)->Apply(0); //draw pD3DDevice->Draw(numVertices, 0); } pSwapChain->Present(0,0); } }
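
    One thing the listings never show is where viewMatrix and projectionMatrix get filled in; if they are left as zeroed D3DXMATRIX values, every vertex collapses and nothing is rasterized. A minimal sketch of a camera setup that would make the -1..1 triangle visible (the values are illustrative, width/height are the same ones passed to InitDirect3D, and this assumes the effect file really multiplies by World, View and Projection):

        // Sketch: initialize the view / projection matrices once, e.g. right after InitDirect3D.
        D3DXVECTOR3 eye(0.0f, 0.0f, -5.0f);   // camera five units in front of the triangle
        D3DXVECTOR3 at (0.0f, 0.0f,  0.0f);   // looking at the origin
        D3DXVECTOR3 up (0.0f, 1.0f,  0.0f);

        D3DXMatrixLookAtLH(&viewMatrix, &eye, &at, &up);

        D3DXMatrixPerspectiveFovLH(&projectionMatrix,
                                   (float)D3DX_PI * 0.25f,        // 45 degree field of view
                                   (float)width / (float)height,  // viewport aspect ratio
                                   0.1f, 100.0f);                 // near / far planes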

    Read the article

  • Node.js vs PHP processing speed

    - by Cody Craven
    I've been looking into node.js recently and wanted to see a true comparison of processing speed for PHP vs Node.js. In most of the comparisons I had seen, Node trounced Apache/PHP set ups handily. However all of the tests were small 'hello worlds' that would not accurately reflect any webpage's markup. So I decided to create a basic HTML page with 10,000 hello world paragraph elements. In these tests Node with Cluster was beaten to a pulp by PHP on Nginx utilizing PHP-FPM. So I'm curious if I am misusing Node somehow or if Node is really just this bad at processing power. Note that my results were equivalent outputting "Hello world\n" with text/plain as the HTML, but I only included the HTML as it's closer to the use case I was investigating. My testing box: Core i7-2600 Intel CPU (has 8 threads with 4 cores) 8GB DDR3 RAM Fedora 16 64bit Node.js v0.6.13 Nginx v1.0.13 PHP v5.3.10 (with PHP-FPM) My test scripts: Node.js script var cluster = require('cluster'); var http = require('http'); var numCPUs = require('os').cpus().length; if (cluster.isMaster) { // Fork workers. for (var i = 0; i < numCPUs; i++) { cluster.fork(); } cluster.on('death', function (worker) { console.log('worker ' + worker.pid + ' died'); }); } else { // Worker processes have an HTTP server. http.Server(function (req, res) { res.writeHead(200, {'Content-Type': 'text/html'}); res.write('<html>\n<head>\n<title>Speed test</title>\n</head>\n<body>\n'); for (var i = 0; i < 10000; i++) { res.write('<p>Hello world</p>\n'); } res.end('</body>\n</html>'); }).listen(80); } This script is adapted from Node.js' documentation at http://nodejs.org/docs/latest/api/cluster.html PHP script <?php echo "<html>\n<head>\n<title>Speed test</title>\n</head>\n<body>\n"; for ($i = 0; $i < 10000; $i++) { echo "<p>Hello world</p>\n"; } echo "</body>\n</html>"; My results Node.js $ ab -n 500 -c 20 http://speedtest.dev/ This is ApacheBench, Version 2.3 <$Revision: 655654 $> Copyright 1996 Adam Twiss, Zeus Technology Ltd, http://www.zeustech.net/ Licensed to The Apache Software Foundation, http://www.apache.org/ Benchmarking speedtest.dev (be patient) Completed 100 requests Completed 200 requests Completed 300 requests Completed 400 requests Completed 500 requests Finished 500 requests Server Software: Server Hostname: speedtest.dev Server Port: 80 Document Path: / Document Length: 190070 bytes Concurrency Level: 20 Time taken for tests: 14.603 seconds Complete requests: 500 Failed requests: 0 Write errors: 0 Total transferred: 95066500 bytes HTML transferred: 95035000 bytes Requests per second: 34.24 [#/sec] (mean) Time per request: 584.123 [ms] (mean) Time per request: 29.206 [ms] (mean, across all concurrent requests) Transfer rate: 6357.45 [Kbytes/sec] received Connection Times (ms) min mean[+/-sd] median max Connect: 0 0 0.2 0 2 Processing: 94 547 405.4 424 2516 Waiting: 0 331 399.3 216 2284 Total: 95 547 405.4 424 2516 Percentage of the requests served within a certain time (ms) 50% 424 66% 607 75% 733 80% 813 90% 1084 95% 1325 98% 1843 99% 2062 100% 2516 (longest request) PHP/Nginx $ ab -n 500 -c 20 http://speedtest.dev/test.php This is ApacheBench, Version 2.3 <$Revision: 655654 $> Copyright 1996 Adam Twiss, Zeus Technology Ltd, http://www.zeustech.net/ Licensed to The Apache Software Foundation, http://www.apache.org/ Benchmarking speedtest.dev (be patient) Completed 100 requests Completed 200 requests Completed 300 requests Completed 400 requests Completed 500 requests Finished 500 requests Server Software: nginx/1.0.13 Server Hostname: 
speedtest.dev Server Port: 80 Document Path: /test.php Document Length: 190070 bytes Concurrency Level: 20 Time taken for tests: 0.130 seconds Complete requests: 500 Failed requests: 0 Write errors: 0 Total transferred: 95109000 bytes HTML transferred: 95035000 bytes Requests per second: 3849.11 [#/sec] (mean) Time per request: 5.196 [ms] (mean) Time per request: 0.260 [ms] (mean, across all concurrent requests) Transfer rate: 715010.65 [Kbytes/sec] received Connection Times (ms) min mean[+/-sd] median max Connect: 0 0 0.2 0 1 Processing: 3 5 0.7 5 7 Waiting: 1 4 0.7 4 7 Total: 3 5 0.7 5 7 Percentage of the requests served within a certain time (ms) 50% 5 66% 5 75% 5 80% 6 90% 6 95% 6 98% 6 99% 6 100% 7 (longest request) Additional details Again what I'm looking for is to find out if I'm doing something wrong with Node.js or if it is really just that slow compared to PHP on Nginx with FPM. I certainly think Node has a real niche that it could fit well, however with these test results (which I really hope I made a mistake with - as I like the idea of Node) lead me to believe that it is a horrible choice for even a modest processing load when compared to PHP (let alone JVM or various other fast solutions). As a final note, I also tried running an Apache Bench test against node with $ ab -n 20 -c 20 http://speedtest.dev/ and consistently received a total test time of greater than 0.900 seconds.

    Read the article

  • [ebp + 6] instead of +8 in a JIT compiler

    - by David Titarenco
    I'm implementing a simplistic JIT compiler in a VM I'm writing for fun (mostly to learn more about language design) and I'm getting some weird behavior, maybe someone can tell me why. First I define a JIT "prototype" both for C and C++: #ifdef __cplusplus typedef void* (*_JIT_METHOD) (...); #else typedef (*_JIT_METHOD) (); #endif I have a compile() function that will compile stuff into ASM and stick it somewhere in memory: void* compile (void* something) { // grab some memory unsigned char* buffer = (unsigned char*) malloc (1024); // xor eax, eax // inc eax // inc eax // inc eax // ret -> eax should be 3 /* WORKS! buffer[0] = 0x67; buffer[1] = 0x31; buffer[2] = 0xC0; buffer[3] = 0x67; buffer[4] = 0x40; buffer[5] = 0x67; buffer[6] = 0x40; buffer[7] = 0x67; buffer[8] = 0x40; buffer[9] = 0xC3; */ // xor eax, eax // mov eax, 9 // ret 4 -> eax should be 9 /* WORKS! buffer[0] = 0x67; buffer[1] = 0x31; buffer[2] = 0xC0; buffer[3] = 0x67; buffer[4] = 0xB8; buffer[5] = 0x09; buffer[6] = 0x00; buffer[7] = 0x00; buffer[8] = 0x00; buffer[9] = 0xC3; */ // push ebp // mov ebp, esp // mov eax, [ebp + 6] ; wtf? shouldn't this be [ebp + 8]!? // mov esp, ebp // pop ebp // ret -> eax should be the first value sent to the function /* WORKS! */ buffer[0] = 0x66; buffer[1] = 0x55; buffer[2] = 0x66; buffer[3] = 0x89; buffer[4] = 0xE5; buffer[5] = 0x66; buffer[6] = 0x66; buffer[7] = 0x8B; buffer[8] = 0x45; buffer[9] = 0x06; buffer[10] = 0x66; buffer[11] = 0x89; buffer[12] = 0xEC; buffer[13] = 0x66; buffer[14] = 0x5D; buffer[15] = 0xC3; // mov eax, 5 // add eax, ecx // ret -> eax should be 50 /* WORKS! buffer[0] = 0x67; buffer[1] = 0xB8; buffer[2] = 0x05; buffer[3] = 0x00; buffer[4] = 0x00; buffer[5] = 0x00; buffer[6] = 0x66; buffer[7] = 0x01; buffer[8] = 0xC8; buffer[9] = 0xC3; */ return buffer; } And finally I have the main chunk of the program: void main (int argc, char **args) { DWORD oldProtect = (DWORD) NULL; int i = 667, j = 1, k = 5, l = 0; // generate some arbitrary function _JIT_METHOD someFunc = (_JIT_METHOD) compile(NULL); // windows only #if defined _WIN64 || defined _WIN32 // set memory permissions and flush CPU code cache VirtualProtect(someFunc,1024,PAGE_EXECUTE_READWRITE, &oldProtect); FlushInstructionCache(GetCurrentProcess(), someFunc, 1024); #endif // this asm just for some debugging/testing purposes __asm mov ecx, i // run compiled function (from wherever *someFunc is pointing to) l = (int)someFunc(i, k); // did it work? printf("result: %d", l); free (someFunc); _getch(); } As you can see, the compile() function has a couple of tests I ran to make sure I get expected results, and pretty much everything works but I have a question... On most tutorials or documentation resources, to get the first value of a function passed (in the case of ints) you do [ebp+8], the second [ebp+12] and so forth. For some reason, I have to do [ebp+6] then [ebp+10] and so forth. Could anyone tell me why?
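    A plausible explanation, assuming this runs as an ordinary 32-bit process: 0x66 is the operand-size override prefix, so the encoded 0x66 0x55 pushes the 16-bit BP register (2 bytes) rather than the full 32-bit EBP (4 bytes). The saved frame pointer then takes up 2 bytes instead of 4 above the 4-byte return address, which is exactly why the first argument turns up at [ebp + 6] instead of the textbook [ebp + 8]. A hedged sketch of the same stub with plain 32-bit encodings and no prefixes:

    // push ebp / mov ebp, esp / mov eax, [ebp + 8] / mov esp, ebp / pop ebp / ret
    buffer[0] = 0x55;                                     // push ebp (pushes 4 bytes)
    buffer[1] = 0x89; buffer[2] = 0xE5;                   // mov ebp, esp
    buffer[3] = 0x8B; buffer[4] = 0x45; buffer[5] = 0x08; // mov eax, [ebp + 8] -> first argument
    buffer[6] = 0x89; buffer[7] = 0xEC;                   // mov esp, ebp
    buffer[8] = 0x5D;                                     // pop ebp
    buffer[9] = 0xC3;                                     // ret

    With the prefixes gone the saved EBP is 4 bytes wide again, so the usual [ebp + 8], [ebp + 12], ... offsets for the arguments should apply.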

    Read the article

  • Javascript stockticker: not showing data on PHP page

    - by developer
    iam not getting any javascript errors , code is getting rendered properly only, but still server not displaying data on the page. please check the code below . <style type="text/css"> #marqueeborder { color: #cccccc; background-color: #EEF3E2; font-family:"Lucida Console", Monaco, monospace; position:relative; height:20px; overflow:hidden; font-size: 0.7em; } #marqueecontent { position:absolute; left:0px; line-height:20px; white-space:nowrap; } .stockbox { margin:0 10px; } .stockbox a { color: #cccccc; text-decoration : underline; } </style> </head> <body> <div id="marqueeborder" onmouseover="pxptick=0" onmouseout="pxptick=scrollspeed"> <div id="marqueecontent"> <?php // Original script by Walter Heitman Jr, first published on http://techblog.shanock.com // List your stocks here, separated by commas, no spaces, in the order you want them displayed: $stocks = "idt,iye,mill,pwer,spy,f,msft,x,sbux,sne,ge,dow,t"; // Function to copy a stock quote CSV from Yahoo to the local cache. CSV contains symbol, price, and change function upsfile($stock) { copy("http://finance.yahoo.com/d/quotes.csv?s=$stock&f=sl1c1&e=.csv","stockcache/".$stock.".csv"); } foreach ( explode(",", $stocks) as $stock ) { // Where the stock quote info file should be... $local_file = "stockcache/".$stock.".csv"; // ...if it exists. If not, download it. if (!file_exists($local_file)) { upsfile($stock); } // Else,If it's out-of-date by 15 mins (900 seconds) or more, update it. elseif (filemtime($local_file) <= (time() - 900)) { upsfile($stock); } // Open the file, load our values into an array... $local_file = fopen ("stockcache/".$stock.".csv","r"); $stock_info = fgetcsv ($local_file, 1000, ","); // ...format, and output them. I made the symbols into links to Yahoo's stock pages. echo "<span class=\"stockbox\"><a href=\"http://finance.yahoo.com/q?s=".$stock_info[0]."\">".$stock_info[0]."</a> ".sprintf("%.2f",$stock_info[1])." <span style=\""; // Green prices for up, red for down if ($stock_info[2]>=0) { echo "color: #009900;\">&uarr;"; } elseif ($stock_info[2]<0) { echo "color: #ff0000;\">&darr;"; } echo sprintf("%.2f",abs($stock_info[2]))."</span></span>\n"; // Done! fclose($local_file); } ?> <span class="stockbox" style="font-size:0.6em">Quotes from <a href="http://finance.yahoo.com/">Yahoo Finance</a></span> </div> </div> </body> <script type="text/javascript"> // Original script by Walter Heitman Jr, first published on http://techblog.shanock.com // Set an initial scroll speed. This equates to the number of pixels shifted per tick var scrollspeed=2; var pxptick=scrollspeed; var marqueediv=''; var contentwidth=""; var marqueewidth = ""; function startmarquee(){ alert("hi"); // Make a shortcut referencing our div with the content we want to scroll marqueediv=document.getElementById("marqueecontent"); //alert("marqueediv"+marqueediv); alert("hi"+marqueediv.innerHTML); // Get the total width of our available scroll area marqueewidth=document.getElementById("marqueeborder").offsetWidth; alert("marqueewidth"+marqueewidth); // Get the width of the content we want to scroll contentwidth=marqueediv.offsetWidth; alert("contentwidth"+contentwidth); // Start the ticker at 50 milliseconds per tick, adjust this to suit your preferences // Be warned, setting this lower has heavy impact on client-side CPU usage. Be gentle. var lefttime=setInterval("scrollmarquee()",50); alert("lefttime"+lefttime); } function scrollmarquee(){ // Check position of the div, then shift it left by the set amount of pixels. 
if (parseInt(marqueediv.style.left)>(contentwidth*(-1))) marqueediv.style.left=parseInt(marqueediv.style.left)-pxptick+"px"; //alert("hikkk"+marqueediv.innerHTML);} // If it's at the end, move it back to the right. else{ alert("marqueewidth"+marqueewidth); marqueediv.style.left=parseInt(marqueewidth)+"px"; } } window.onload=startmarquee; </script> </html> Below is the page as the server displays it. Following your suggestion I have updated the question with a screenshot, and I also made a change in the HTML to check what is shown by the child div.

    Read the article

  • Windows 7 Seems to break SWT Control.print(GC)

    - by GreenKiwi
    A bug has been filed and fixed (super quickly) in SWT: https://bugs.eclipse.org/bugs/show_bug.cgi?id=305294 Just to preface this, my goal here is to print the two images into a canvas so that I can animate the canvas sliding across the screen (think iPhone), sliding the controls themselves was too CPU intensive, so this was a good alternative until I tested it on Win7. I'm open to anything that will help me solve my original problem, it doesn't have to be fixing the problem below. Does anyone know how to get "Control.print(GC)" to work with Windows 7 Aero? I have code that works just fine in Windows XP and in Windows 7, when Aero is disabled, but the command: control.print(GC) causes a non-top control to be effectively erased from the screen. GC gc = new GC(image); try { // As soon as this code is called, calling "layout" on the controls // causes them to disappear. control.print(gc); } finally { gc.dispose(); } I have stacked controls and would like to print the images from the current and next controls such that I can "slide" them off the screen. However, upon printing the non-top control, it is never redrawn again. Here is some example code. (Interesting code bits are at the top and it will require pointing at SWT in order to work.) Thanks for any and all help. As a work around, I'm thinking about swapping controls between prints to see if that helps, but I'd rather not. import org.eclipse.swt.SWT; import org.eclipse.swt.custom.StackLayout; import org.eclipse.swt.events.SelectionAdapter; import org.eclipse.swt.events.SelectionEvent; import org.eclipse.swt.graphics.GC; import org.eclipse.swt.graphics.Image; import org.eclipse.swt.graphics.Point; import org.eclipse.swt.layout.GridData; import org.eclipse.swt.layout.GridLayout; import org.eclipse.swt.widgets.Button; import org.eclipse.swt.widgets.Composite; import org.eclipse.swt.widgets.Control; import org.eclipse.swt.widgets.Display; import org.eclipse.swt.widgets.Label; import org.eclipse.swt.widgets.Shell; public class SWTImagePrintTest { private Composite stack; private StackLayout layout; private Label lblFlip; private Label lblFlop; private boolean flip = true; private Button buttonFlop; private Button buttonPrint; /** * Prints the control into an image * * @param control */ protected void print(Control control) { Image image = new Image(control.getDisplay(), control.getBounds()); GC gc = new GC(image); try { // As soon as this code is called, calling "layout" on the controls // causes them to disappear. 
control.print(gc); } finally { gc.dispose(); } } /** * Swaps the controls in the stack */ private void flipFlop() { if (flip) { flip = false; layout.topControl = lblFlop; buttonFlop.setText("flop"); stack.layout(); } else { flip = true; layout.topControl = lblFlip; buttonFlop.setText("flip"); stack.layout(); } } private void createContents(Shell shell) { shell.setLayout(new GridLayout(2, true)); stack = new Composite(shell, SWT.NONE); GridData gdStack = new GridData(GridData.FILL_BOTH); gdStack.horizontalSpan = 2; stack.setLayoutData(gdStack); layout = new StackLayout(); stack.setLayout(layout); lblFlip = new Label(stack, SWT.BOLD); lblFlip.setBackground(Display.getCurrent().getSystemColor( SWT.COLOR_CYAN)); lblFlip.setText("FlIp"); lblFlop = new Label(stack, SWT.NONE); lblFlop.setBackground(Display.getCurrent().getSystemColor( SWT.COLOR_BLUE)); lblFlop.setText("fLoP"); layout.topControl = lblFlip; stack.layout(); buttonFlop = new Button(shell, SWT.FLAT); buttonFlop.setText("Flip"); GridData gdFlip = new GridData(); gdFlip.horizontalAlignment = SWT.RIGHT; buttonFlop.setLayoutData(gdFlip); buttonFlop.addSelectionListener(new SelectionAdapter() { @Override public void widgetSelected(SelectionEvent e) { flipFlop(); } }); buttonPrint = new Button(shell, SWT.FLAT); buttonPrint.setText("Print"); GridData gdPrint = new GridData(); gdPrint.horizontalAlignment = SWT.LEFT; buttonPrint.setLayoutData(gdPrint); buttonPrint.addSelectionListener(new SelectionAdapter() { @Override public void widgetSelected(SelectionEvent e) { print(lblFlip); print(lblFlop); } }); } /** * @param args */ public static void main(String[] args) { Shell shell = new Shell(); shell.setText("Slider Test"); shell.setSize(new Point(800, 600)); shell.setLayout(new GridLayout()); SWTImagePrintTest tt = new SWTImagePrintTest(); tt.createContents(shell); shell.open(); Display display = Display.getDefault(); while (shell.isDisposed() == false) { if (display.readAndDispatch() == false) { display.sleep(); } } display.dispose(); } }

    Read the article

  • When building a web application project, TFS 2008 builds two separate projects in _PublishedFolder.

    - by Steve Johnson
    I am trying to perform build automation on one of my web application projects built using VS 2008. The _PublishedWebSites contains two folders: Web and Deploy. I want TFS 2008 to generate only the deploy folder and not the web folder. Here is my TFSBuild.proj file: <Project ToolsVersion="3.5" DefaultTargets="Compile" xmlns="http://schemas.microsoft.com/developer/msbuild/2003"> <Import Project="$(MSBuildExtensionsPath)\Microsoft\VisualStudio\TeamBuild\Microsoft.TeamFoundation.Build.targets" /> <Import Project="$(MSBuildExtensionsPath)\Microsoft\WebDeployment\v9.0\Microsoft.WebDeployment.targets" /> <ItemGroup> <SolutionToBuild Include="$(BuildProjectFolderPath)/../../Development/Main/MySoftware.sln"> <Targets></Targets> <Properties></Properties> </SolutionToBuild> </ItemGroup> <ItemGroup> <ConfigurationToBuild Include="Release|AnyCPU"> <FlavorToBuild>Release</FlavorToBuild> <PlatformToBuild>Any CPU</PlatformToBuild> </ConfigurationToBuild> </ItemGroup> <!--<ItemGroup> <SolutionToBuild Include="$(BuildProjectFolderPath)/../../Development/Main/MySoftware.sln"> <Targets></Targets> <Properties></Properties> </SolutionToBuild> </ItemGroup> <ItemGroup> <ConfigurationToBuild Include="Release|x64"> <FlavorToBuild>Release</FlavorToBuild> <PlatformToBuild>x64</PlatformToBuild> </ConfigurationToBuild> </ItemGroup>--> <ItemGroup> <AdditionalReferencePath Include="C:\3PR" /> </ItemGroup> <Target Name="GetCopyToOutputDirectoryItems" Outputs="@(AllItemsFullPathWithTargetPath)" DependsOnTargets="AssignTargetPaths;_SplitProjectReferencesByFileExistence"> <!-- Get items from child projects first. --> <MSBuild Projects="@(_MSBuildProjectReferenceExistent)" Targets="GetCopyToOutputDirectoryItems" Properties="%(_MSBuildProjectReferenceExistent.SetConfiguration); %(_MSBuildProjectReferenceExistent.SetPlatform)" Condition="'@(_MSBuildProjectReferenceExistent)'!=''"> <Output TaskParameter="TargetOutputs" ItemName="_AllChildProjectItemsWithTargetPathNotFiltered"/> </MSBuild> <!-- Remove duplicates. --> <RemoveDuplicates Inputs="@(_AllChildProjectItemsWithTargetPathNotFiltered)"> <Output TaskParameter="Filtered" ItemName="_AllChildProjectItemsWithTargetPath"/> </RemoveDuplicates> <!-- Target outputs must be full paths because they will be consumed by a different project. --> <CreateItem Include="@(_AllChildProjectItemsWithTargetPath->'%(FullPath)')" Exclude= "$(BuildProjectFolderPath)/../../Development/Main/Web/Bin*.pdb; *.refresh; *.vshost.exe; *.manifest; *.compiled; $(BuildProjectFolderPath)/../../Development/Main/Web/Auth/MySoftware.dll; $(BuildProjectFolderPath)/../../Development/Main/Web/BinApp_Web_*.dll;" Condition="'%(_AllChildProjectItemsWithTargetPath.CopyToOutputDirectory)'=='Always' or '%(_AllChildProjectItemsWithTargetPath.CopyToOutputDirectory)'=='PreserveNewest'" > <Output TaskParameter="Include" ItemName="AllItemsFullPathWithTargetPath"/> <Output TaskParameter="Include" ItemName="_SourceItemsToCopyToOutputDirectoryAlways" Condition="'%(_AllChildProjectItemsWithTargetPath.CopyToOutputDirectory)'=='Always'"/> <Output TaskParameter="Include" ItemName="_SourceItemsToCopyToOutputDirectory" Condition="'%(_AllChildProjectItemsWithTargetPath.CopyToOutputDirectory)'=='PreserveNewest'"/> </CreateItem> </Target> <!-- To modify your build process, add your task inside one of the targets below and uncomment it. Other similar extension points exist, see Microsoft.WebDeployment.targets. 
<Target Name="BeforeBuild"> </Target> <Target Name="BeforeMerge"> </Target> <Target Name="AfterMerge"> </Target> <Target Name="AfterBuild"> </Target> --> </Project> I want to build everything that the builtin Deploy project is doing for me. But I don't want the generated web project as it contains App_Web_xxxx.dll assemblies instead of a single compiled assembly. How can I do this?

    Read the article

  • C++ and C# speed compared

    - by Mack
    I was worried about C#'s speed when it deals with heavy calculations, when you need to use raw CPU power. I always thought that C++ is much faster than C# when it comes to calculations. So I did some quick tests. The first test computes prime numbers < an integer n, the second test computes some pandigital numbers. The idea for second test comes from here: Pandigital Numbers C# prime computation: using System; using System.Diagnostics; class Program { static int primes(int n) { uint i, j; int countprimes = 0; for (i = 1; i <= n; i++) { bool isprime = true; for (j = 2; j <= Math.Sqrt(i); j++) if ((i % j) == 0) { isprime = false; break; } if (isprime) countprimes++; } return countprimes; } static void Main(string[] args) { int n = int.Parse(Console.ReadLine()); Stopwatch sw = new Stopwatch(); sw.Start(); int res = primes(n); sw.Stop(); Console.WriteLine("I found {0} prime numbers between 0 and {1} in {2} msecs.", res, n, sw.ElapsedMilliseconds); Console.ReadKey(); } } C++ variant: #include <iostream> #include <ctime> int primes(unsigned long n) { unsigned long i, j; int countprimes = 0; for(i = 1; i <= n; i++) { int isprime = 1; for(j = 2; j < (i^(1/2)); j++) if(!(i%j)) { isprime = 0; break; } countprimes+= isprime; } return countprimes; } int main() { int n, res; cin>>n; unsigned int start = clock(); res = primes(n); int tprime = clock() - start; cout<<"\nI found "<<res<<" prime numbers between 1 and "<<n<<" in "<<tprime<<" msecs."; return 0; } When I ran the test trying to find primes < than 100,000, C# variant finished in 0.409 seconds and C++ variant in 5.553 seconds. When I ran them for 1,000,000 C# finished in 6.039 seconds and C++ in about 337 seconds. Pandigital test in C#: using System; using System.Diagnostics; class Program { static bool IsPandigital(int n) { int digits = 0; int count = 0; int tmp; for (; n > 0; n /= 10, ++count) { if ((tmp = digits) == (digits |= 1 << (n - ((n / 10) * 10) - 1))) return false; } return digits == (1 << count) - 1; } static void Main() { int pans = 0; Stopwatch sw = new Stopwatch(); sw.Start(); for (int i = 1; i <= 123456789; i++) { if (IsPandigital(i)) { pans++; } } sw.Stop(); Console.WriteLine("{0}pcs, {1}ms", pans, sw.ElapsedMilliseconds); Console.ReadKey(); } } Pandigital test in C++: #include <iostream> #include <ctime> using namespace std; int IsPandigital(int n) { int digits = 0; int count = 0; int tmp; for (; n > 0; n /= 10, ++count) { if ((tmp = digits) == (digits |= 1 << (n - ((n / 10) * 10) - 1))) return 0; } return digits == (1 << count) - 1; } int main() { int pans = 0; unsigned int start = clock(); for (int i = 1; i <= 123456789; i++) { if (IsPandigital(i)) { pans++; } } int ptime = clock() - start; cout<<"\nPans:"<<pans<<" time:"<<ptime; return 0; } C# variant runs in 29.906 seconds and C++ in about 36.298 seconds. I didn't touch any compiler switches and bot C# and C++ programs were compiled with debug options. Before I attempted to run the test I was worried that C# will lag well behind C++, but now it seems that there is a pretty big speed difference in C# favor. Can anybody explain this? C# is jitted and C++ is compiled native so it's normal that a C++ will be faster than a C# variant. Thanks for the answers!
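    One thing worth checking before drawing conclusions from these numbers: in the C++ primes() function, (i^(1/2)) is not a square root. 1/2 is integer division and evaluates to 0, and ^ is bitwise XOR, so the inner loop effectively tests divisors almost all the way up to i, while the C# version stops at Math.Sqrt(i); the unoptimized debug build only adds to that. A hedged sketch of the C++ loop with a real square-root bound, mirroring the C# logic:

    #include <cmath>

    int primes(unsigned long n) {
        int countprimes = 0;
        for (unsigned long i = 1; i <= n; i++) {
            bool isprime = true;
            unsigned long limit = (unsigned long)std::sqrt((double)i); // real sqrt, not i^(1/2)
            for (unsigned long j = 2; j <= limit; j++)
                if (i % j == 0) { isprime = false; break; }
            if (isprime) countprimes++;
        }
        return countprimes;
    }

    Built with optimizations enabled (for example -O2 or /O2), this version should land in the same range as the C# timings rather than minutes apart.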

    Read the article

  • Improving HTML scrapper efficiency with pcntl_fork()

    - by Michael Pasqualone
    With the help from two previous questions, I now have a working HTML scrapper that feeds product information into a database. What I am now trying to do is improve efficiently by wrapping my brain around with getting my scrapper working with pcntl_fork. If I split my php5-cli script into 10 separate chunks, I improve total runtime by a large factor so I know I am not i/o or cpu bound but just limited by the linear nature of my scraping functions. Using code I've cobbled together from multiple sources, I have this working test: <?php libxml_use_internal_errors(true); ini_set('max_execution_time', 0); ini_set('max_input_time', 0); set_time_limit(0); $hrefArray = array("http://slashdot.org", "http://slashdot.org", "http://slashdot.org", "http://slashdot.org"); function doDomStuff($singleHref,$childPid) { $html = new DOMDocument(); $html->loadHtmlFile($singleHref); $xPath = new DOMXPath($html); $domQuery = '//div[@id="slogan"]/h2'; $domReturn = $xPath->query($domQuery); foreach($domReturn as $return) { $slogan = $return->nodeValue; echo "Child PID #" . $childPid . " says: " . $slogan . "\n"; } } $pids = array(); foreach ($hrefArray as $singleHref) { $pid = pcntl_fork(); if ($pid == -1) { die("Couldn't fork, error!"); } elseif ($pid > 0) { // We are the parent $pids[] = $pid; } else { // We are the child $childPid = posix_getpid(); doDomStuff($singleHref,$childPid); exit(0); } } foreach ($pids as $pid) { pcntl_waitpid($pid, $status); } // Clear the libxml buffer so it doesn't fill up libxml_clear_errors(); Which raises the following questions: 1) Given my hrefArray contains 4 urls - if the array was to contain say 1,000 product urls this code would spawn 1,000 child processes? If so, what is the best way to limit the amount of processes to say 10, and again 1,000 urls as an example split the child work load to 100 products per child (10 x 100). 2) I've learn that pcntl_fork creates a copy of the process and all variables, classes, etc. What I would like to do is replace my hrefArray variable with a DOMDocument query that builds the list of products to scrape, and then feeds them off to child processes to do the processing - so spreading the load across 10 child workers. My brain is telling I need to do something like the following (obviously this doesn't work, so don't run it): <?php libxml_use_internal_errors(true); ini_set('max_execution_time', 0); ini_set('max_input_time', 0); set_time_limit(0); $maxChildWorkers = 10; $html = new DOMDocument(); $html->loadHtmlFile('http://xxxx'); $xPath = new DOMXPath($html); $domQuery = '//div[@id=productDetail]/a'; $domReturn = $xPath->query($domQuery); $hrefsArray[] = $domReturn->getAttribute('href'); function doDomStuff($singleHref) { // Do stuff here with each product } // To figure out: Split href array into $maxChilderWorks # of workArray1, workArray2 ... workArray10. $pids = array(); foreach ($workArray(1,2,3 ... 10) as $singleHref) { $pid = pcntl_fork(); if ($pid == -1) { die("Couldn't fork, error!"); } elseif ($pid > 0) { // We are the parent $pids[] = $pid; } else { // We are the child $childPid = posix_getpid(); doDomStuff($singleHref); exit(0); } } foreach ($pids as $pid) { pcntl_waitpid($pid, $status); } // Clear the libxml buffer so it doesn't fill up libxml_clear_errors(); But what I can't figure out is how to build my hrefsArray[] in the master/parent process only and feed it off to the child process. Currently everything I've tried causes loops in the child processes. I.e. 
my hrefsArray gets built in the master and again in each subsequent child process. I am sure I am going about this all totally wrong, so I would greatly appreciate a general nudge in the right direction.
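    The usual shape for this, independent of language: build the full URL list once in the parent, slice it into one chunk per worker, and only fork after that, so each child inherits exactly the slice it needs (pcntl_fork copies the parent's memory at the moment of the fork, so anything built beforehand is visible to the child and anything built afterwards is not shared). A minimal sketch of that structure, written here in C++ with fork()/waitpid() purely to show the shape — in PHP the same roles would be played by array_chunk, pcntl_fork and pcntl_waitpid, and scrapeOne() is a hypothetical stand-in for doDomStuff():

    #include <algorithm>
    #include <string>
    #include <vector>
    #include <sys/types.h>
    #include <sys/wait.h>
    #include <unistd.h>

    // Hypothetical per-URL worker; stands in for doDomStuff() above.
    void scrapeOne(const std::string& href) { /* fetch and parse one product page */ }

    int main() {
        std::vector<std::string> hrefs;        // built ONCE, in the parent, before any fork
        // ... fill hrefs from the product-listing page here ...
        const std::size_t workers  = 10;
        const std::size_t perChild = (hrefs.size() + workers - 1) / workers; // ceiling division

        std::vector<pid_t> pids;
        for (std::size_t w = 0; w < workers && w * perChild < hrefs.size(); ++w) {
            pid_t pid = fork();
            if (pid == 0) {                    // child: work through its own slice only
                std::size_t begin = w * perChild;
                std::size_t end   = std::min(begin + perChild, hrefs.size());
                for (std::size_t i = begin; i < end; ++i)
                    scrapeOne(hrefs[i]);
                _exit(0);                      // never fall through into the parent's loop
            }
            pids.push_back(pid);               // parent: remember the child and keep forking
        }
        for (pid_t pid : pids)                 // parent waits for every worker to finish
            waitpid(pid, nullptr, 0);
        return 0;
    }

    The two details that answer the questions above are the cap on the number of forks (one per worker, not one per URL) and the fact that the slicing happens before fork(), so no child ever rebuilds the list.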

    Read the article

  • Image/"most resembling pixel" search optimization?

    - by SigTerm
    The situation: Let's say I have an image A, say, 512x512 pixels, and image B, 5x5 or 7x7 pixels. Both images are 24bit rgb, and B have 1bit alpha mask (so each pixel is either completely transparent or completely solid). I need to find within image A a pixel which (with its' neighbors) most closely resembles image B, OR the pixel that probably most closely resembles image B. Resemblance is calculated as "distance" which is sum of "distances" between non-transparent B's pixels and A's pixels divided by number of non-transparent B's pixels. Here is a sample SDL code for explanation: struct Pixel{ unsigned char b, g, r, a; }; void fillPixel(int x, int y, SDL_Surface* dst, SDL_Surface* src, int dstMaskX, int dstMaskY){ Pixel& dstPix = *((Pixel*)((char*)(dst->pixels) + sizeof(Pixel)*x + dst->pitch*y)); int xMin = x + texWidth - searchWidth; int xMax = xMin + searchWidth*2; int yMin = y + texHeight - searchHeight; int yMax = yMin + searchHeight*2; int numFilled = 0; for (int curY = yMin; curY < yMax; curY++) for (int curX = xMin; curX < xMax; curX++){ Pixel& cur = *((Pixel*)((char*)(dst->pixels) + sizeof(Pixel)*(curX & texMaskX) + dst->pitch*(curY & texMaskY))); if (cur.a != 0) numFilled++; } if (numFilled == 0){ int srcX = rand() % src->w; int srcY = rand() % src->h; dstPix = *((Pixel*)((char*)(src->pixels) + sizeof(Pixel)*srcX + src->pitch*srcY)); dstPix.a = 0xFF; return; } int storedSrcX = rand() % src->w; int storedSrcY = rand() % src->h; float lastDifference = 3.40282347e+37F; //unsigned char mask = for (int srcY = searchHeight; srcY < (src->h - searchHeight); srcY++) for (int srcX = searchWidth; srcX < (src->w - searchWidth); srcX++){ float curDifference = 0; int numPixels = 0; for (int tmpY = -searchHeight; tmpY < searchHeight; tmpY++) for(int tmpX = -searchWidth; tmpX < searchWidth; tmpX++){ Pixel& tmpSrc = *((Pixel*)((char*)(src->pixels) + sizeof(Pixel)*(srcX+tmpX) + src->pitch*(srcY+tmpY))); Pixel& tmpDst = *((Pixel*)((char*)(dst->pixels) + sizeof(Pixel)*((x + dst->w + tmpX) & dstMaskX) + dst->pitch*((y + dst->h + tmpY) & dstMaskY))); if (tmpDst.a){ numPixels++; int dr = tmpSrc.r - tmpDst.r; int dg = tmpSrc.g - tmpDst.g; int db = tmpSrc.g - tmpDst.g; curDifference += dr*dr + dg*dg + db*db; } } if (numPixels) curDifference /= (float)numPixels; if (curDifference < lastDifference){ lastDifference = curDifference; storedSrcX = srcX; storedSrcY = srcY; } } dstPix = *((Pixel*)((char*)(src->pixels) + sizeof(Pixel)*storedSrcX + src->pitch*storedSrcY)); dstPix.a = 0xFF; } This thing is supposed to be used for texture generation. Now, the question: The easiest way to do this is brute force search (which is used in example routine). But it is slow - even using GPU acceleration and dual core cpu won't make it much faster. It looks like I can't use modified binary search because of B's mask. So, how can I find desired pixel faster? Additional Info: It is allowed to use 2 cores, GPU acceleration, CUDA, and 1.5..2 gigabytes of RAM for the task. I would prefer to avoid some kind of lengthy preprocessing phase that will take 30 minutes to finish. Ideas?
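    A cheap first optimization, before CUDA or extra cores: early abandoning. numPixels only counts destination pixels and does not depend on srcX/srcY, so it is identical for every candidate and the division by it cannot change which candidate wins; the raw sum of squared differences can therefore be compared directly, and a candidate can be dropped the moment its running sum exceeds the best sum seen so far. A hedged sketch of just the candidate loop rewritten that way, reusing the surrounding variables and the Pixel struct from the routine above:

    float bestDiff = 3.40282347e+37F;   // raw (un-normalised) SSD of the best candidate so far
    for (int srcY = searchHeight; srcY < (src->h - searchHeight); srcY++)
        for (int srcX = searchWidth; srcX < (src->w - searchWidth); srcX++) {
            float curDifference = 0;
            bool alive = true;
            for (int tmpY = -searchHeight; alive && tmpY < searchHeight; tmpY++)
                for (int tmpX = -searchWidth; tmpX < searchWidth; tmpX++) {
                    Pixel& tmpSrc = *((Pixel*)((char*)(src->pixels) + sizeof(Pixel)*(srcX+tmpX) + src->pitch*(srcY+tmpY)));
                    Pixel& tmpDst = *((Pixel*)((char*)(dst->pixels) + sizeof(Pixel)*((x + dst->w + tmpX) & dstMaskX) + dst->pitch*((y + dst->h + tmpY) & dstMaskY)));
                    if (tmpDst.a) {
                        int dr = tmpSrc.r - tmpDst.r;
                        int dg = tmpSrc.g - tmpDst.g;
                        int db = tmpSrc.b - tmpDst.b;   // note: the original computed db from .g as well
                        curDifference += dr*dr + dg*dg + db*db;
                    }
                    if (curDifference >= bestDiff) { alive = false; break; } // cannot beat the best match any more
                }
            if (alive) {                // finished under the bound, so it is the new best
                bestDiff = curDifference;
                storedSrcX = srcX;
                storedSrcY = srcY;
            }
        }

    The tighter the bound becomes, the sooner later candidates are abandoned, so visiting promising locations first (for example via a coarse pass over a downsampled copy of A and B) compounds the gain.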

    Read the article

  • mysql: Bind on unix socket: Permission denied

    - by Alex
    Can't start mysql with: sudo /usr/bin/mysqld_safe --datadir=/srv/mysql/myDB --log-error=/srv/mysql/logs/mysqld-myDB.log --pid-file=/srv/mysql/pids/mysqld-myDB.pid --user=mysql --socket=/srv/mysql/sockets/mysql-myDB.sock --port=3700 120222 13:40:48 mysqld_safe Starting mysqld daemon with databases from /srv/mysql/myDB 120222 13:40:54 mysqld_safe mysqld from pid file /srv/mysql/pids/mysqld-myDB.pid ended /srv/mysql/logs/mysqld-myDB.log: 120222 13:43:53 mysqld_safe Starting mysqld daemon with databases from /srv/mysql/myDB 120222 13:43:53 [Note] Plugin 'FEDERATED' is disabled. /usr/sbin/mysqld: Table 'plugin' is read only 120222 13:43:53 [ERROR] Can't open the mysql.plugin table. Please run mysql_upgrade to create it. 120222 13:43:53 InnoDB: Completed initialization of buffer pool 120222 13:43:53 InnoDB: Started; log sequence number 32 4232720908 120222 13:43:53 [ERROR] Can't start server : Bind on unix socket: Permission denied 120222 13:43:53 [ERROR] Do you already have another mysqld server running on socket: /srv/mysql/sockets/mysql-myDB.sock ? 120222 13:43:53 [ERROR] Aborting 120222 13:43:53 InnoDB: Starting shutdown... One instance mysqld is running: $ ps aux | grep mysql mysql 1093 0.0 0.2 169972 18700 ? Ssl 11:50 0:02 /usr/sbin/mysqld $ Port 3700 is available: $ netstat -a | grep 3700 $ Directory with sockets is empty: $ ls /srv/mysql/sockets/ $ There are all permissions: $ ls -l /srv/mysql/ total 20 drwxrwxrwx 2 mysql mysql 4096 2012-02-22 13:28 logs drwxrwxrwx 13 mysql mysql 4096 2012-02-22 13:44 myDB drwxrwxrwx 2 mysql mysql 4096 2012-02-22 12:55 pids drwxrwxrwx 2 mysql mysql 4096 2012-02-22 12:55 sockets drwxrwxrwx 2 mysql mysql 4096 2012-02-22 13:25 version Apparmor config: $cat /etc/apparmor.d/usr.sbin.mysqld # vim:syntax=apparmor # Last Modified: Tue Jun 19 17:37:30 2007 #include <tunables/global> /usr/sbin/mysqld flags=(complain) { #include <abstractions/base> #include <abstractions/nameservice> #include <abstractions/user-tmp> #include <abstractions/mysql> #include <abstractions/winbind> capability dac_override, capability sys_resource, capability setgid, capability setuid, network tcp, /etc/hosts.allow r, /etc/hosts.deny r, /etc/mysql/*.pem r, /etc/mysql/conf.d/ r, /etc/mysql/conf.d/* r, /etc/mysql/*.cnf r, /usr/lib/mysql/plugin/ r, /usr/lib/mysql/plugin/*.so* mr, /usr/sbin/mysqld mr, /usr/share/mysql/** r, /var/log/mysql.log rw, /var/log/mysql.err rw, /var/lib/mysql/ r, /var/lib/mysql/** rwk, /var/log/mysql/ r, /var/log/mysql/* rw, /{,var/}run/mysqld/mysqld.pid w, /{,var/}run/mysqld/mysqld.sock w, /srv/mysql/ r, /srv/mysql/** rwk, /sys/devices/system/cpu/ r, # Site-specific additions and overrides. See local/README for details. #include <local/usr.sbin.mysqld> } Any suggestions? UPD1: $ touch /srv/mysql/sockets/mysql-myDB.sock $ sudo chown mysql:mysql /srv/mysql/sockets/mysql-myDB.sock $ ls -l /srv/mysql/sockets/mysql-myDB.sock -rw-rw-r-- 1 mysql mysql 0 2012-02-22 14:29 /srv/mysql/sockets/mysql-myDB.sock $ sudo /usr/bin/mysqld_safe --datadir=/srv/mysql/myDB --log-error=/srv/mysql/logs/mysqld-myDB.log --pid-file=/srv/mysql/pids/mysqld-myDB.pid --user=mysql --socket=/srv/mysql/sockets/mysql-myDB.sock --port=3700 120222 14:30:18 mysqld_safe Can't log to error log and syslog at the same time. Remove all --log-error configuration options for --syslog to take effect. 120222 14:30:18 mysqld_safe Logging to '/srv/mysql/logs/mysqld-myDB.log'. 
120222 14:30:18 mysqld_safe Starting mysqld daemon with databases from /srv/mysqlmyDB 120222 14:30:24 mysqld_safe mysqld from pid file /srv/mysql/pids/mysqld-myDB.pid ended $ ls -l /srv/mysql/sockets/mysql-myDB.sock ls: cannot access /srv/mysql/sockets/mysql-myDB.sock: No such file or directory $ UPD2: $ sudo netstat -lnp | grep mysql tcp 0 0 0.0.0.0:3306 0.0.0.0:* LISTEN 1093/mysqld unix 2 [ ACC ] STREAM LISTENING 5912 1093/mysqld /var/run/mysqld/mysqld.sock $ sudo lsof | grep /srv/mysql/sockets/mysql-myDB.sock lsof: WARNING: can't stat() fuse.gvfs-fuse-daemon file system /home/sears/.gvfs Output information may be incomplete. UPD3: $ cat /etc/mysql/my.cnf # # The MySQL database server configuration file. # # You can copy this to one of: # - "/etc/mysql/my.cnf" to set global options, # - "~/.my.cnf" to set user-specific options. # # One can use all long options that the program supports. # Run program with --help to get a list of available options and with # --print-defaults to see which it would actually understand and use. # # For explanations see # http://dev.mysql.com/doc/mysql/en/server-system-variables.html # This will be passed to all mysql clients # It has been reported that passwords should be enclosed with ticks/quotes # escpecially if they contain "#" chars... # Remember to edit /etc/mysql/debian.cnf when changing the socket location. [client] port = 3306 socket = /var/run/mysqld/mysqld.sock # Here is entries for some specific programs # The following values assume you have at least 32M ram # This was formally known as [safe_mysqld]. Both versions are currently parsed. [mysqld_safe] socket = /var/run/mysqld/mysqld.sock nice = 0 [mysqld] # # * Basic Settings # # # * IMPORTANT # If you make changes to these settings and your system uses apparmor, you may # also need to also adjust /etc/apparmor.d/usr.sbin.mysqld. # user = mysql socket = /var/run/mysqld/mysqld.sock port = 3306 basedir = /usr datadir = /var/lib/mysql tmpdir = /tmp skip-external-locking # # Instead of skip-networking the default is now to listen only on # localhost which is more compatible and is not less secure. #bind-address = 127.0.0.1 # # * Fine Tuning # key_buffer = 16M max_allowed_packet = 16M thread_stack = 192K thread_cache_size = 8 # This replaces the startup script and checks MyISAM tables if needed # the first time they are touched myisam-recover = BACKUP #max_connections = 100 #table_cache = 64 #thread_concurrency = 10 # # * Query Cache Configuration # query_cache_limit = 1M query_cache_size = 16M # # * Logging and Replication # # Both location gets rotated by the cronjob. # Be aware that this log type is a performance killer. # As of 5.1 you can enable the log at runtime! #general_log_file = /var/log/mysql/mysql.log #general_log = 1 log_error = /var/log/mysql/error.log # Here you can see queries with especially long duration #log_slow_queries = /var/log/mysql/mysql-slow.log #long_query_time = 2 #log-queries-not-using-indexes # # The following can be used as easy to replay backup logs or for replication. # note: if you are setting up a replication slave, see README.Debian about # other settings you may need to change. #server-id = 1 #log_bin = /var/log/mysql/mysql-bin.log expire_logs_days = 10 max_binlog_size = 100M #binlog_do_db = include_database_name #binlog_ignore_db = include_database_name # # * InnoDB # # InnoDB is enabled by default with a 10MB datafile in /var/lib/mysql/. # Read the manual for more InnoDB related options. There are many! 
# # * Security Features # # Read the manual, too, if you want chroot! # chroot = /var/lib/mysql/ # # For generating SSL certificates I recommend the OpenSSL GUI "tinyca". # # ssl-ca=/etc/mysql/cacert.pem # ssl-cert=/etc/mysql/server-cert.pem # ssl-key=/etc/mysql/server-key.pem [mysqldump] quick quote-names max_allowed_packet = 16M [mysql] #no-auto-rehash # faster start of mysql but no tab completition [isamchk] key_buffer = 16M # # * IMPORTANT: Additional settings that can override those from this file! # The files must end with '.cnf', otherwise they'll be ignored. # !includedir /etc/mysql/conf.d/

    Read the article

  • Why doesn't pppd over ssh work here? Why can't I kill pppd?

    - by Peter V. Mørch
    I'm trying to setup a simple ppp tunnel over ssh. It works on several machines just fine. But on one machine, pppd gets "stuck": > pgrep pppd | xargs ps up USER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND root 4178 0.0 0.1 3020 1088 pts/1 Ds+ 05:28 0:00 /usr/sbin/pppd Any attempt to kill it (even sudo kill -9 4178) has no effect that I can see. strace -p 4178 also hangs similarly. After it has been started for a while, I start getting messages in dmesg like shown below. It is started like so from another machine: ssh -t root@server /usr/sbin/pppd passive noauth When I do this to one of the machines that work, the remote end's pppd spits out garbage/binary data to the console (as expected). When I do it to the one that fails, I get no output from pppd, but the ssh session eventually times out. If I instead ssh to the machine, and then run /usr/sbin/pppd passive noauth in a separate step I also get the expected binary output. I now have a couple of questions: What could be up with the one machine where pppd fails? I don't even know where to start looking... What could be the difference between ssh -t root@server /usr/sbin/pppd passive noauth in a single step and ssh root@server and /usr/sbin/pppd passive noauth in two steps? How can it be that I can't kill the process even with sudo kill -9? The only way I know is to reboot. (I've tried searching for something similar but didn't get anywhere so I'm sorry I don't have any more leads) Any ideas? The problem machine runs in debian on VMware "hardware" (as do the ones that work) and it exhibits the problem when cloned and on both debian lenny (original) and squeeze (after upgrade) dmesg entries: [ 1198.727248] INFO: task pppd:4178 blocked for more than 120 seconds. [ 1198.727507] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message. [ 1198.727904] pppd D ece2dc9c 0 4178 4174 0x00000004 [ 1198.727908] 00000098 00000082 f2503520 ece2dc9c 0000b1e7 00000000 c148d1c0 c148d1c0 [ 1198.727913] f2a06100 f6e071c0 00000000 ece2dc18 f5cd07e0 00000000 ece2d400 ece2dc9d [ 1198.727918] 00c52300 ece2dcbc f67bfef8 ec98e480 f291cec0 00000000 c10cf5b0 c10dfd21 [ 1198.727923] Call Trace: [ 1198.727926] [<c10cf5b0>] ? nameidata_to_filp+0x37/0x41 [ 1198.727929] [<c10dfd21>] ? dput+0x21/0xb7 [ 1198.727932] [<c11cfecc>] ? tty_ldisc_ref_wait+0x5f/0x76 [ 1198.727935] [<c104de7a>] ? wake_up_bit+0x5c/0x5c [ 1198.727938] [<c11cb91b>] ? tty_ioctl+0x85f/0x8ba [ 1198.727941] [<c10fec18>] ? do_lock_file_wait+0x3d/0xd9 [ 1198.727944] [<c1162c97>] ? _copy_from_user+0x2b/0x102 [ 1198.727946] [<c11cb0bc>] ? tty_check_change+0xb9/0xb9 [ 1198.727949] [<c10dbeb7>] ? do_vfs_ioctl+0x485/0x4c7 [ 1198.727952] [<c10db59a>] ? do_fcntl+0x24f/0x3a2 [ 1198.727954] [<c10dbf3a>] ? sys_ioctl+0x41/0x58 [ 1198.727957] [<c12c6a1f>] ? sysenter_do_call+0x12/0x28 [ 1318.457225] INFO: task sshd:4174 blocked for more than 120 seconds. [ 1318.457500] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message. [ 1318.457896] sshd D f25024cc 0 4174 2393 0x00000000 [ 1318.457901] 00000098 00000086 f2a06940 f25024cc 0000b246 00000000 c148d1c0 c148d1c0 [ 1318.457906] f2503520 f6e071c0 00000000 3f056585 0000000f ece2d4bc 3f056585 f2503520 [ 1318.457911] ec98bb38 ec98bbdc 00000000 00000000 00000000 c12c09b5 f2503520 c10327cb [ 1318.457916] Call Trace: [ 1318.457926] [<c12c09b5>] ? schedule_hrtimeout_range_clock+0x3c/0xd9 [ 1318.457931] [<c10327cb>] ? try_to_wake_up+0x13f/0x13f [ 1318.457935] [<c11cfecc>] ? tty_ldisc_ref_wait+0x5f/0x76 [ 1318.457940] [<c104de7a>] ? 
wake_up_bit+0x5c/0x5c [ 1318.457943] [<c11c9ad3>] ? tty_poll+0x32/0x5e [ 1318.457947] [<c10dd4d5>] ? do_select+0x2a1/0x42e [ 1318.457950] [<c10dcb83>] ? poll_freewait+0x69/0x69 [ 1318.457953] [<c10dcc25>] ? __pollwait+0xa2/0xa2 [ 1318.457955] [<c10dcc25>] ? __pollwait+0xa2/0xa2 [ 1318.457958] [<c10dcc25>] ? __pollwait+0xa2/0xa2 [ 1318.457960] [<c10dcc25>] ? __pollwait+0xa2/0xa2 [ 1318.457963] [<c10dcc25>] ? __pollwait+0xa2/0xa2 [ 1318.457965] [<c10dcc25>] ? __pollwait+0xa2/0xa2 [ 1318.457968] [<c10dcc25>] ? __pollwait+0xa2/0xa2 [ 1318.457971] [<c10429c2>] ? lock_timer_base+0x19/0x35 [ 1318.457974] [<c1042eb5>] ? __mod_timer+0x10c/0x116 [ 1318.457977] [<c1042f89>] ? mod_timer+0x69/0x6e [ 1318.457981] [<c121325d>] ? sk_reset_timer+0xc/0x16 [ 1318.457984] [<c1252f57>] ? tcp_event_new_data_sent+0x66/0x6b [ 1318.457987] [<c1255b85>] ? tcp_write_xmit+0x7a7/0x86a [ 1318.457990] [<c121760d>] ? __alloc_skb+0x50/0xfd [ 1318.457994] [<c12c12bc>] ? _raw_spin_lock_bh+0x8/0x1e [ 1318.457996] [<c1212e98>] ? release_sock+0x10/0xc4 [ 1318.457999] [<c124b543>] ? tcp_sendmsg+0x6dd/0x7b7 [ 1318.458003] [<c1162c97>] ? _copy_from_user+0x2b/0x102 [ 1318.458006] [<c10dd7a0>] ? core_sys_select+0x13e/0x1c3 [ 1318.458009] [<c12102a3>] ? sock_aio_write+0xc0/0xd4 [ 1318.458012] [<c10d0655>] ? do_sync_write+0xa0/0xe4 [ 1318.458016] [<c10b141c>] ? handle_mm_fault+0x222/0x238 [ 1318.458019] [<c10f6096>] ? fsnotify+0x1de/0x1f9 [ 1318.458022] [<c10dd9e8>] ? sys_select+0x6e/0x8f [ 1318.458024] [<c10d105e>] ? sys_write+0x3c/0x63 [ 1318.458028] [<c12c6a1f>] ? sysenter_do_call+0x12/0x28

    Read the article

  • Configuring OpenLDAP and SSL

    - by Stormshadow
    I am having trouble trying to connect to a secure OpenLDAP server which I have set up. On running my LDAP client code java -Djavax.net.debug=ssl LDAPConnector I get the following exception trace (java version 1.6.0_17) trigger seeding of SecureRandom done seeding SecureRandom %% No cached client session *** ClientHello, TLSv1 RandomCookie: GMT: 1256110124 bytes = { 224, 19, 193, 148, 45, 205, 108, 37, 101, 247, 112, 24, 157, 39, 111, 177, 43, 53, 206, 224, 68, 165, 55, 185, 54, 203, 43, 91 } Session ID: {} Cipher Suites: [SSL_RSA_WITH_RC4_128_MD5, SSL_RSA_WITH_RC4_128_SHA, TLS_RSA_WITH_AES_128_CBC_SHA, TLS_DHE_RSA_WITH_AES_128_CBC_SHA, TLS_DHE_DSS_WITH_AES_128_CBC_SHA, SSL_RSA_W ITH_3DES_EDE_CBC_SHA, SSL_DHE_RSA_WITH_3DES_EDE_CBC_SHA, SSL_DHE_DSS_WITH_3DES_EDE_CBC_SHA, SSL_RSA_WITH_DES_CBC_SHA, SSL_DHE_RSA_WITH_DES_CBC_SHA, SSL_DHE_DSS_WITH_DES_CBC_SH A, SSL_RSA_EXPORT_WITH_RC4_40_MD5, SSL_RSA_EXPORT_WITH_DES40_CBC_SHA, SSL_DHE_RSA_EXPORT_WITH_DES40_CBC_SHA, SSL_DHE_DSS_EXPORT_WITH_DES40_CBC_SHA] Compression Methods: { 0 } *** Thread-0, WRITE: TLSv1 Handshake, length = 73 Thread-0, WRITE: SSLv2 client hello message, length = 98 Thread-0, received EOFException: error Thread-0, handling exception: javax.net.ssl.SSLHandshakeException: Remote host closed connection during handshake Thread-0, SEND TLSv1 ALERT: fatal, description = handshake_failure Thread-0, WRITE: TLSv1 Alert, length = 2 Thread-0, called closeSocket() main, handling exception: javax.net.ssl.SSLHandshakeException: Remote host closed connection during handshake javax.naming.CommunicationException: simple bind failed: ldap.natraj.com:636 [Root exception is javax.net.ssl.SSLHandshakeException: Remote host closed connection during hands hake] at com.sun.jndi.ldap.LdapClient.authenticate(Unknown Source) at com.sun.jndi.ldap.LdapCtx.connect(Unknown Source) at com.sun.jndi.ldap.LdapCtx.<init>(Unknown Source) at com.sun.jndi.ldap.LdapCtxFactory.getUsingURL(Unknown Source) at com.sun.jndi.ldap.LdapCtxFactory.getUsingURLs(Unknown Source) at com.sun.jndi.ldap.LdapCtxFactory.getLdapCtxInstance(Unknown Source) at com.sun.jndi.ldap.LdapCtxFactory.getInitialContext(Unknown Source) at javax.naming.spi.NamingManager.getInitialContext(Unknown Source) at javax.naming.InitialContext.getDefaultInitCtx(Unknown Source) at javax.naming.InitialContext.init(Unknown Source) at javax.naming.InitialContext.<init>(Unknown Source) at javax.naming.directory.InitialDirContext.<init>(Unknown Source) at LDAPConnector.CallSecureLDAPServer(LDAPConnector.java:43) at LDAPConnector.main(LDAPConnector.java:237) Caused by: javax.net.ssl.SSLHandshakeException: Remote host closed connection during handshake at com.sun.net.ssl.internal.ssl.SSLSocketImpl.readRecord(Unknown Source) at com.sun.net.ssl.internal.ssl.SSLSocketImpl.performInitialHandshake(Unknown Source) at com.sun.net.ssl.internal.ssl.SSLSocketImpl.readDataRecord(Unknown Source) at com.sun.net.ssl.internal.ssl.AppInputStream.read(Unknown Source) at java.io.BufferedInputStream.fill(Unknown Source) at java.io.BufferedInputStream.read1(Unknown Source) at java.io.BufferedInputStream.read(Unknown Source) at com.sun.jndi.ldap.Connection.run(Unknown Source) at java.lang.Thread.run(Unknown Source) Caused by: java.io.EOFException: SSL peer shut down incorrectly at com.sun.net.ssl.internal.ssl.InputRecord.read(Unknown Source) ... 
9 more I am able to connect to the same secure LDAP server however if I use another version of java (1.6.0_14) I have created and installed the server certificates in the cacerts of both the JRE's as mentioned in this guide -- OpenLDAP with SSL When I run ldapsearch -x on the server I get # extended LDIF # # LDAPv3 # base <dc=localdomain> (default) with scope subtree # filter: (objectclass=*) # requesting: ALL # # localdomain dn: dc=localdomain objectClass: top objectClass: dcObject objectClass: organization o: localdomain dc: localdomain # admin, localdomain dn: cn=admin,dc=localdomain objectClass: simpleSecurityObject objectClass: organizationalRole cn: admin description: LDAP administrator # search result search: 2 result: 0 Success # numResponses: 3 # numEntries: 2 On running openssl s_client -connect ldap.natraj.com:636 -showcerts , I obtain the self signed certificate. My slapd.conf file is as follows ####################################################################### # Global Directives: # Features to permit #allow bind_v2 # Schema and objectClass definitions include /etc/ldap/schema/core.schema include /etc/ldap/schema/cosine.schema include /etc/ldap/schema/nis.schema include /etc/ldap/schema/inetorgperson.schema # Where the pid file is put. The init.d script # will not stop the server if you change this. pidfile /var/run/slapd/slapd.pid # List of arguments that were passed to the server argsfile /var/run/slapd/slapd.args # Read slapd.conf(5) for possible values loglevel none # Where the dynamically loaded modules are stored modulepath /usr/lib/ldap moduleload back_hdb # The maximum number of entries that is returned for a search operation sizelimit 500 # The tool-threads parameter sets the actual amount of cpu's that is used # for indexing. tool-threads 1 ####################################################################### # Specific Backend Directives for hdb: # Backend specific directives apply to this backend until another # 'backend' directive occurs backend hdb ####################################################################### # Specific Backend Directives for 'other': # Backend specific directives apply to this backend until another # 'backend' directive occurs #backend <other> ####################################################################### # Specific Directives for database #1, of type hdb: # Database specific directives apply to this databasse until another # 'database' directive occurs database hdb # The base of your directory in database #1 suffix "dc=localdomain" # rootdn directive for specifying a superuser on the database. This is needed # for syncrepl. rootdn "cn=admin,dc=localdomain" # Where the database file are physically stored for database #1 directory "/var/lib/ldap" # The dbconfig settings are used to generate a DB_CONFIG file the first # time slapd starts. They do NOT override existing an existing DB_CONFIG # file. You should therefore change these settings in DB_CONFIG directly # or remove DB_CONFIG and restart slapd for changes to take effect. # For the Debian package we use 2MB as default but be sure to update this # value if you have plenty of RAM dbconfig set_cachesize 0 2097152 0 # Sven Hartge reported that he had to set this value incredibly high # to get slapd running at all. See http://bugs.debian.org/303057 for more # information. # Number of objects that can be locked at the same time. 
dbconfig set_lk_max_objects 1500 # Number of locks (both requested and granted) dbconfig set_lk_max_locks 1500 # Number of lockers dbconfig set_lk_max_lockers 1500 # Indexing options for database #1 index objectClass eq # Save the time that the entry gets modified, for database #1 lastmod on # Checkpoint the BerkeleyDB database periodically in case of system # failure and to speed slapd shutdown. checkpoint 512 30 # Where to store the replica logs for database #1 # replogfile /var/lib/ldap/replog # The userPassword by default can be changed # by the entry owning it if they are authenticated. # Others should not be able to see it, except the # admin entry below # These access lines apply to database #1 only access to attrs=userPassword,shadowLastChange by dn="cn=admin,dc=localdomain" write by anonymous auth by self write by * none # Ensure read access to the base for things like # supportedSASLMechanisms. Without this you may # have problems with SASL not knowing what # mechanisms are available and the like. # Note that this is covered by the 'access to *' # ACL below too but if you change that as people # are wont to do you'll still need this if you # want SASL (and possible other things) to work # happily. access to dn.base="" by * read # The admin dn has full write access, everyone else # can read everything. access to * by dn="cn=admin,dc=localdomain" write by * read # For Netscape Roaming support, each user gets a roaming # profile for which they have write access to #access to dn=".*,ou=Roaming,o=morsnet" # by dn="cn=admin,dc=localdomain" write # by dnattr=owner write ####################################################################### # Specific Directives for database #2, of type 'other' (can be hdb too): # Database specific directives apply to this databasse until another # 'database' directive occurs #database <other> # The base of your directory for database #2 #suffix "dc=debian,dc=org" ####################################################################### # SSL: # Uncomment the following lines to enable SSL and use the default # snakeoil certificates. #TLSCertificateFile /etc/ssl/certs/ssl-cert-snakeoil.pem #TLSCertificateKeyFile /etc/ssl/private/ssl-cert-snakeoil.key TLSCipherSuite TLS_RSA_AES_256_CBC_SHA TLSCACertificateFile /etc/ldap/ssl/server.pem TLSCertificateFile /etc/ldap/ssl/server.pem TLSCertificateKeyFile /etc/ldap/ssl/server.pem My ldap.conf file is # # LDAP Defaults # # See ldap.conf(5) for details # This file should be world readable but not world writable. HOST ldap.natraj.com PORT 636 BASE dc=localdomain URI ldaps://ldap.natraj.com TLS_CACERT /etc/ldap/ssl/server.pem TLS_REQCERT allow #SIZELIMIT 12 #TIMELIMIT 15 #DEREF never Why is it that I can connect to the same server using one version of JRE while I cannot with another ?

    Read the article

  • IRQ problem with 2.6.32/2.6.39 kernel on Debian Squeeze x86_64

    - by MasterM
    I recently assembled a new computer so that all hardware is pretty new. Since then I've been experiencing some problem with IRQs when running Debian 6.0. On random occasions, usually after an hour or so of running I hear a beep and this shows up in dmesg: [ 3537.762795] irq 16: nobody cared (try booting with the "irqpoll" option) [ 3537.762797] Pid: 0, comm: swapper Tainted: P W O 2.6.39-2-amd64 #1 [ 3537.762798] Call Trace: [ 3537.762799] <IRQ> [<ffffffff810924d4>] ? __report_bad_irq+0x3a/0xa2 [ 3537.762803] [<ffffffff810926a4>] ? note_interrupt+0x168/0x1da [ 3537.762805] [<ffffffff81090dd4>] ? handle_irq_event_percpu+0x171/0x18f [ 3537.762807] [<ffffffff8100e0e2>] ? read_tsc+0x5/0x16 [ 3537.762809] [<ffffffff8106b8a2>] ? update_ts_time_stats+0x32/0x6b [ 3537.762810] [<ffffffff81090e26>] ? handle_irq_event+0x34/0x52 [ 3537.762812] [<ffffffff81063fb7>] ? sched_clock_idle_wakeup_event+0x12/0x1c [ 3537.762813] [<ffffffff81092df2>] ? handle_fasteoi_irq+0x82/0xa4 [ 3537.762815] [<ffffffff8100aadb>] ? handle_irq+0x1a/0x23 [ 3537.762816] [<ffffffff8100a384>] ? do_IRQ+0x45/0xaa [ 3537.762818] [<ffffffff81332c93>] ? common_interrupt+0x13/0x13 [ 3537.762818] <EOI> [<ffffffff81332c8e>] ? common_interrupt+0xe/0x13 [ 3537.762821] [<ffffffff81026800>] ? native_safe_halt+0x2/0x3 [ 3537.762829] [<ffffffffa016ed58>] ? acpi_idle_do_entry+0x39/0x62 [processor] [ 3537.762831] [<ffffffffa016edde>] ? acpi_idle_enter_c1+0x5d/0xad [processor] [ 3537.762834] [<ffffffff81261033>] ? cpuidle_idle_call+0x11f/0x1cc [ 3537.762835] [<ffffffff81008dd2>] ? cpu_idle+0xab/0xe1 [ 3537.762837] [<ffffffff8169fc60>] ? start_kernel+0x3e0/0x3eb [ 3537.762838] [<ffffffff8169f3c8>] ? x86_64_start_kernel+0x102/0x10f [ 3537.762839] handlers: [ 3537.762840] [<ffffffffa0358d5a>] (rtl8169_interrupt+0x0/0x2d7 [r8169]) [ 3537.762842] [<ffffffffa08ff2ca>] (nv_kern_isr+0x0/0x54 [nvidia]) [ 3537.762902] Disabling IRQ #16 After that Xorg either hogs on CPU or is unstable (up to hanging the system completely). When I restart Xorg everything is fine again and the problem doesn't occur until next reboot. I tried to upgrade the kernel from stock 2.6.32 to 2.6.39 from unstable repository but that didn't help. Booting with irqpoll option only seems to prolong the initial time period after which the problem occurs. I'm using latest NVIDIA drivers and Realtek firmware from firmware-realtek package. I have two GTX 560Ti that run in SLI. Disabling SLI or taking out one card completely doesn't solve the problem either. 
Output of uname -a is: Linux whitestar 2.6.39-2-amd64 #1 SMP Wed Jun 8 11:01:04 UTC 2011 x86_64 GNU/Linux Output of lspci is: 00:00.0 Host bridge: Intel Corporation Sandy Bridge DRAM Controller (rev 09) 00:01.0 PCI bridge: Intel Corporation Sandy Bridge PCI Express Root Port (rev 09) 00:01.1 PCI bridge: Intel Corporation Sandy Bridge PCI Express Root Port (rev 09) 00:16.0 Communication controller: Intel Corporation Cougar Point HECI Controller #1 (rev 04) 00:19.0 Ethernet controller: Intel Corporation 82579V Gigabit Network Connection (rev 05) 00:1a.0 USB Controller: Intel Corporation Cougar Point USB Enhanced Host Controller #2 (rev 05) 00:1b.0 Audio device: Intel Corporation Cougar Point High Definition Audio Controller (rev 05) 00:1c.0 PCI bridge: Intel Corporation Cougar Point PCI Express Root Port 1 (rev b5) 00:1c.1 PCI bridge: Intel Corporation Cougar Point PCI Express Root Port 2 (rev b5) 00:1c.2 PCI bridge: Intel Corporation Cougar Point PCI Express Root Port 3 (rev b5) 00:1c.4 PCI bridge: Intel Corporation Cougar Point PCI Express Root Port 5 (rev b5) 00:1c.6 PCI bridge: Intel Corporation 82801 PCI Bridge (rev b5) 00:1d.0 USB Controller: Intel Corporation Cougar Point USB Enhanced Host Controller #1 (rev 05) 00:1f.0 ISA bridge: Intel Corporation Cougar Point LPC Controller (rev 05) 00:1f.2 SATA controller: Intel Corporation Cougar Point 6 port SATA AHCI Controller (rev 05) 00:1f.3 SMBus: Intel Corporation Cougar Point SMBus Controller (rev 05) 01:00.0 VGA compatible controller: nVidia Corporation Device 1200 (rev a1) 01:00.1 Audio device: nVidia Corporation Device 0e0c (rev a1) 02:00.0 VGA compatible controller: nVidia Corporation Device 1200 (rev a1) 02:00.1 Audio device: nVidia Corporation Device 0e0c (rev a1) 04:00.0 USB Controller: NEC Corporation uPD720200 USB 3.0 Host Controller (rev 04) 06:00.0 USB Controller: NEC Corporation uPD720200 USB 3.0 Host Controller (rev 04) 07:00.0 PCI bridge: Device 1b21:1080 (rev 01) 08:02.0 Ethernet controller: Realtek Semiconductor Co., Ltd. RTL-8110SC/8169SC Gigabit Ethernet (rev 10) 08:03.0 FireWire (IEEE 1394): VIA Technologies, Inc. 
VT6306/7/8 [Fire II(M)] IEEE 1394 OHCI Controller (rev c0) Contents of /proc/interrupts: CPU0 CPU1 CPU2 CPU3 CPU4 CPU5 CPU6 CPU7 0: 77 0 0 0 0 0 0 0 IO-APIC-edge timer 1: 2 0 0 0 0 0 0 0 IO-APIC-edge i8042 8: 1 0 0 0 0 0 0 0 IO-APIC-edge rtc0 9: 0 0 0 0 0 0 0 0 IO-APIC-fasteoi acpi 12: 4 0 0 0 0 0 0 0 IO-APIC-edge i8042 16: 699083 0 0 0 0 0 0 0 IO-APIC-fasteoi nvidia, eth0 17: 87810 0 0 0 0 0 0 0 IO-APIC-fasteoi firewire_ohci, hda_intel, nvidia 18: 242 0 0 0 0 0 0 0 IO-APIC-fasteoi hda_intel 23: 85925 0 0 0 0 0 0 0 IO-APIC-fasteoi ehci_hcd:usb5, ehci_hcd:usb6 40: 0 0 0 0 0 0 0 0 PCI-MSI-edge PCIe PME 41: 0 0 0 0 0 0 0 0 PCI-MSI-edge PCIe PME 42: 0 0 0 0 0 0 0 0 PCI-MSI-edge PCIe PME 43: 0 0 0 0 0 0 0 0 PCI-MSI-edge PCIe PME 44: 0 0 0 0 0 0 0 0 PCI-MSI-edge PCIe PME 45: 0 0 0 0 0 0 0 0 PCI-MSI-edge PCIe PME 46: 79853 0 0 0 0 0 0 0 PCI-MSI-edge ahci 48: 1 0 0 0 0 0 0 0 PCI-MSI-edge xhci_hcd 49: 0 0 0 0 0 0 0 0 PCI-MSI-edge xhci_hcd 50: 0 0 0 0 0 0 0 0 PCI-MSI-edge xhci_hcd 51: 0 0 0 0 0 0 0 0 PCI-MSI-edge xhci_hcd 52: 0 0 0 0 0 0 0 0 PCI-MSI-edge xhci_hcd 53: 0 0 0 0 0 0 0 0 PCI-MSI-edge xhci_hcd 54: 0 0 0 0 0 0 0 0 PCI-MSI-edge xhci_hcd 55: 0 0 0 0 0 0 0 0 PCI-MSI-edge xhci_hcd 56: 1 0 0 0 0 0 0 0 PCI-MSI-edge xhci_hcd 57: 0 0 0 0 0 0 0 0 PCI-MSI-edge xhci_hcd 58: 0 0 0 0 0 0 0 0 PCI-MSI-edge xhci_hcd 59: 0 0 0 0 0 0 0 0 PCI-MSI-edge xhci_hcd 60: 0 0 0 0 0 0 0 0 PCI-MSI-edge xhci_hcd 61: 0 0 0 0 0 0 0 0 PCI-MSI-edge xhci_hcd 62: 0 0 0 0 0 0 0 0 PCI-MSI-edge xhci_hcd 63: 0 0 0 0 0 0 0 0 PCI-MSI-edge xhci_hcd 64: 173506 0 0 0 0 0 0 0 PCI-MSI-edge hda_intel NMI: 482 89 25 13 277 24 11 10 Non-maskable interrupts LOC: 783857 194752 114133 70577 372438 179065 117179 162016 Local timer interrupts SPU: 0 0 0 0 0 0 0 0 Spurious interrupts PMI: 482 89 25 13 277 24 11 10 Performance monitoring interrupts IWI: 0 0 0 0 0 0 0 0 IRQ work interrupts RES: 131917 46750 7432 3291 150003 9576 3435 3067 Rescheduling interrupts CAL: 2759 6563 7150 6997 5387 7140 7269 6678 Function call interrupts TLB: 4396 2038 1336 492 5434 1896 1121 606 TLB shootdowns TRM: 0 0 0 0 0 0 0 0 Thermal event interrupts THR: 0 0 0 0 0 0 0 0 Threshold APIC interrupts MCE: 0 0 0 0 0 0 0 0 Machine check exceptions MCP: 37 37 37 37 37 37 37 37 Machine check polls ERR: 0 MIS: 0 Last but not least, right after boot-up those lines are usually present in dmesg: [ 18.367094] hda-intel: IRQ timing workaround is activated for card #1. Suggest a bigger bdl_pos_adj. [ 18.458859] hda-intel: IRQ timing workaround is activated for card #2. Suggest a bigger bdl_pos_adj. I'm not sure if it's related or a symptom of a bigger problem so I'm posting it just in case. I don't really know what other information might be of relevance here. Don't hesitate to ask for more in the comments.
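
    The dmesg excerpt and /proc/interrupts both show IRQ 16 shared between the nvidia driver and the r8169 NIC, so one useful step is to confirm exactly which PCI devices sit on that line and to watch whether its counter stops moving after the "Disabling IRQ #16" message. A rough diagnostic sketch (output formats vary a little between kernels, so treat it as illustrative):

        # Which handlers are currently attached to IRQ 16?
        grep ' 16:' /proc/interrupts

        # Which PCI devices are routed to IRQ 16?
        lspci -vv | grep -B 10 'IRQ 16'

        # Watch the counter; if it freezes while the NIC or GPU is busy,
        # the kernel really has disabled the interrupt line
        watch -n 1 "grep ' 16:' /proc/interrupts"

    If the GPU and the network card really do share the line, moving the NIC to a different slot, or checking in lspci -vv whether MSI is enabled for it, are common next steps before blaming the drivers themselves.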

    Read the article

  • Failed to convert a wmv file to mp4 with ffmpeg

    - by Olaf Erlandsen
    i need a help with this command FFMPEG COMMAND: ffmpeg -y -i /input.wmv -vcodec libx264 -acodec libfaac -ac 2 -bufsize 20M -sameq -f mp4 /output.mp4 Output: ffmpeg version 1.0 Copyright (c) 2000-2012 the FFmpeg developers built on Oct 9 2012 07:04:08 with gcc 4.4.6 (GCC) 20120305 (Red Hat 4.4.6-4) [wmv3 @ 0x16a4800] Extra data: 8 bits left, value: 0 Guessed Channel Layout for Input Stream #0.0 : stereo Input #0, asf, from '/input.wmv': Metadata: WMFSDKVersion : 11.0.5721.5275 WMFSDKNeeded : 0.0.0.0000 IsVBR : 0 Duration: 00:01:35.10, start: 0.000000, bitrate: 496 kb/s Stream #0:0(spa): Audio: wmav2 (a[1][0][0] / 0x0161), 44100 Hz, stereo, s16, 64 kb/s Stream #0:1(spa): Video: wmv3 (Main) (WMV3 / 0x33564D57), yuv420p, 320x240, 425 kb/s, SAR 1:1 DAR 4:3, 29.97 tbr, 1k tbn, 1k tbc [libx264 @ 0x16c3000] VBV bufsize set but maxrate unspecified, ignored [libx264 @ 0x16c3000] using SAR=1/1 [libx264 @ 0x16c3000] using cpu capabilities: MMX2 SSE2Fast SSSE3 FastShuffle SSE4.2 [libx264 @ 0x16c3000] profile High, level 1.3 [libx264 @ 0x16c3000] 264 - core 128 - H.264/MPEG-4 AVC codec - Copyleft 2003-2012 - http://www.videolan.org/x264.html - options: cabac=1 ref=3 deblock=1:0:0 analyse=0x3:0x113 me=hex subme=7 psy=1 psy_rd=1.00:0.00 mixed_ref=1 me_range=16 chroma_me=1 trellis=1 8x8dct=1 cqm=0 deadzone=21,11 fast_pskip=1 chroma_qp_offset=-2 threads=6 lookahead_threads=1 sliced_threads=0 nr=0 decimate=1 interlaced=0 bluray_compat=0 constrained_intra=0 bframes=3 b_pyramid=2 b_adapt=1 b_bias=0 direct=1 weightb=1 open_gop=0 weightp=2 keyint=250 keyint_min=25 scenecut=40 intra_refresh=0 rc_lookahead=40 rc=crf mbtree=1 crf=23.0 qcomp=0.60 qpmin=0 qpmax=69 qpstep=4 ip_ratio=1.40 aq=1:1.00 [wmv3 @ 0x16a4800] Extra data: 8 bits left, value: 0 Output #0, mp4, to '/output.mp4': Metadata: WMFSDKVersion : 11.0.5721.5275 WMFSDKNeeded : 0.0.0.0000 IsVBR : 0 encoder : Lavf54.29.104 Stream #0:0(spa): Video: h264 ([33][0][0][0] / 0x0021), yuv420p, 320x240 [SAR 1:1 DAR 4:3], q=-1--1, 30k tbn, 29.97 tbc Stream #0:1(spa): Audio: aac ([64][0][0][0] / 0x0040), 44100 Hz, stereo, s16, 128 kb/s Stream mapping: Stream #0:1 -> #0:0 (wmv3 -> libx264) Stream #0:0 -> #0:1 (wmav2 -> libfaac) Press [q] to stop, [?] 
for help [libfaac @ 0x16b3600] Que input is backward in time [mp4 @ 0x16bb3a0] st:0 PTS: 6174 DTS: 6174 < 7169 invalid, clipping frame= 144 fps=0.0 q=29.0 size= 207kB time=00:00:03.38 bitrate= 500.3kbits/s frame= 259 fps=257 q=29.0 size= 447kB time=00:00:07.30 bitrate= 501.3kbits/s frame= 375 fps=248 q=29.0 size= 668kB time=00:00:11.01 bitrate= 496.5kbits/s frame= 487 fps=241 q=29.0 size= 836kB time=00:00:14.85 bitrate= 460.7kbits/s frame= 605 fps=240 q=29.0 size= 1080kB time=00:00:18.92 bitrate= 467.4kbits/s frame= 719 fps=238 q=29.0 size= 1306kB time=00:00:22.80 bitrate= 469.2kbits/s frame= 834 fps=237 q=29.0 size= 1546kB time=00:00:26.52 bitrate= 477.3kbits/s frame= 953 fps=237 q=29.0 size= 1763kB time=00:00:30.27 bitrate= 477.0kbits/s frame= 1071 fps=237 q=29.0 size= 1986kB time=00:00:34.36 bitrate= 473.4kbits/s frame= 1161 fps=231 q=29.0 size= 2160kB time=00:00:37.21 bitrate= 475.4kbits/s frame= 1221 fps=220 q=29.0 size= 2282kB time=00:00:39.53 bitrate= 472.9kbits/s frame= 1280 fps=212 q=29.0 size= 2392kB time=00:00:41.16 bitrate= 476.1kbits/s frame= 1331 fps=203 q=29.0 size= 2502kB time=00:00:43.23 bitrate= 474.1kbits/s frame= 1379 fps=195 q=29.0 size= 2618kB time=00:00:44.72 bitrate= 479.6kbits/s frame= 1430 fps=189 q=29.0 size= 2733kB time=00:00:46.34 bitrate= 483.0kbits/s frame= 1487 fps=184 q=29.0 size= 2851kB time=00:00:48.40 bitrate= 482.6kbits/s frame= 1546 fps=180 q=26.0 size= 2973kB time=00:00:50.43 bitrate= 482.9kbits/s frame= 1610 fps=177 q=29.0 size= 3112kB time=00:00:52.40 bitrate= 486.5kbits/s frame= 1672 fps=174 q=29.0 size= 3231kB time=00:00:54.35 bitrate= 487.0kbits/s frame= 1733 fps=171 q=29.0 size= 3348kB time=00:00:56.51 bitrate= 485.3kbits/s frame= 1792 fps=169 q=29.0 size= 3459kB time=00:00:58.28 bitrate= 486.2kbits/s frame= 1851 fps=166 q=29.0 size= 3588kB time=00:01:00.32 bitrate= 487.2kbits/s frame= 1910 fps=164 q=29.0 size= 3716kB time=00:01:02.36 bitrate= 488.1kbits/s frame= 1972 fps=162 q=29.0 size= 3833kB time=00:01:04.45 bitrate= 487.1kbits/s frame= 2032 fps=161 q=29.0 size= 3946kB time=00:01:06.40 bitrate= 486.8kbits/s frame= 2091 fps=159 q=29.0 size= 4080kB time=00:01:08.35 bitrate= 488.9kbits/s frame= 2150 fps=158 q=29.0 size= 4201kB time=00:01:10.54 bitrate= 487.9kbits/s frame= 2206 fps=156 q=29.0 size= 4315kB time=00:01:12.39 bitrate= 488.3kbits/s frame= 2263 fps=154 q=29.0 size= 4438kB time=00:01:14.21 bitrate= 489.9kbits/s frame= 2327 fps=154 q=29.0 size= 4567kB time=00:01:16.16 bitrate= 491.2kbits/s frame= 2388 fps=152 q=29.0 size= 4666kB time=00:01:18.48 bitrate= 487.0kbits/s frame= 2450 fps=152 q=29.0 size= 4776kB time=00:01:20.24 bitrate= 487.6kbits/s frame= 2511 fps=151 q=29.0 size= 4890kB time=00:01:22.15 bitrate= 487.6kbits/s frame= 2575 fps=150 q=29.0 size= 5015kB time=00:01:24.42 bitrate= 486.6kbits/s frame= 2635 fps=149 q=29.0 size= 5130kB time=00:01:26.62 bitrate= 485.2kbits/s frame= 2695 fps=148 q=29.0 size= 5258kB time=00:01:28.65 bitrate= 485.9kbits/s frame= 2758 fps=147 q=29.0 size= 5382kB time=00:01:30.64 bitrate= 486.4kbits/s frame= 2816 fps=147 q=29.0 size= 5521kB time=00:01:32.69 bitrate= 487.9kbits/s get_buffer() failed Error while decoding stream #0:0: Invalid argument frame= 2848 fps=143 q=-1.0 Lsize= 5787kB time=00:01:35.10 bitrate= 498.4kbits/s video:5099kB audio:581kB subtitle:0 global headers:0kB muxing overhead 1.884230% [libx264 @ 0x16c3000] frame I:12 Avg QP:22.64 size: 12092 [libx264 @ 0x16c3000] frame P:1508 Avg QP:25.39 size: 2933 [libx264 @ 0x16c3000] frame B:1328 Avg QP:30.62 size: 491 [libx264 @ 0x16c3000] 
consecutive B-frames: 10.0% 80.8% 8.1% 1.1% [libx264 @ 0x16c3000] mb I I16..4: 1.8% 72.1% 26.0% [libx264 @ 0x16c3000] mb P I16..4: 0.4% 2.4% 0.3% P16..4: 48.3% 19.6% 19.3% 0.0% 0.0% skip: 9.5% [libx264 @ 0x16c3000] mb B I16..4: 0.1% 0.2% 0.0% B16..8: 52.6% 6.6% 2.3% direct: 1.4% skip:36.8% L0:48.8% L1:42.5% BI: 8.7% [libx264 @ 0x16c3000] 8x8 transform intra:75.3% inter:55.4% [libx264 @ 0x16c3000] coded y,uvDC,uvAC intra: 77.9% 81.7% 33.1% inter: 24.2% 11.6% 1.1% [libx264 @ 0x16c3000] i16 v,h,dc,p: 25% 16% 44% 14% [libx264 @ 0x16c3000] i8 v,h,dc,ddl,ddr,vr,hd,vl,hu: 19% 15% 29% 6% 5% 6% 6% 7% 7% [libx264 @ 0x16c3000] i4 v,h,dc,ddl,ddr,vr,hd,vl,hu: 20% 15% 17% 7% 9% 8% 9% 7% 7% [libx264 @ 0x16c3000] i8c dc,h,v,p: 50% 19% 24% 7% [libx264 @ 0x16c3000] Weighted P-Frames: Y:3.8% UV:1.1% [libx264 @ 0x16c3000] ref P L0: 75.6% 19.1% 4.2% 1.0% 0.1% [libx264 @ 0x16c3000] ref B L0: 98.1% 1.9% 0.0% [libx264 @ 0x16c3000] ref B L1: 98.9% 1.1% [libx264 @ 0x16c3000] kb/s:439.47 FFMPEG Configuration: --enable-version3 --enable-libopencore-amrnb --enable-libopencore-amrwb --enable-libvpx --enable-libfaac --enable-libmp3lame --enable-libtheora --enable-libvorbis --enable-libx264 --enable-libxvid --enable-gpl --enable-postproc --enable-nonfree libavutil 51. 73.101 / 51. 73.101 libavcodec 54. 59.100 / 54. 59.100 libavformat 54. 29.104 / 54. 29.104 libavdevice 54. 2.101 / 54. 2.101 libavfilter 3. 17.100 / 3. 17.100 libswscale 2. 1.101 / 2. 1.101 libswresample 0. 15.100 / 0. 15.100 libpostproc 52. 0.100 / 52. 0.100 PROBLEM #1: [libfaac @ 0x16b3600] Que input is backward in time [mp4 @ 0x16bb3a0] st:0 PTS: 6174 DTS: 6174 < 7169 invalid, clipping PROBLEM #2: get_buffer() failed Error while decoding stream #0:0: Invalid argument
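
    Both reported problems are timestamp-related: libfaac complains about non-monotonic audio input, and the muxer rejects a packet whose DTS jumps backwards. A variant worth trying drops the deprecated -sameq flag, asks ffmpeg to regenerate presentation timestamps for the WMV input, and lets the audio be stretched to match. This is a hedged sketch; option availability depends on the exact ffmpeg 1.0 build:

        ffmpeg -y -fflags +genpts -i /input.wmv \
               -vcodec libx264 -crf 23 \
               -acodec libfaac -ac 2 -async 1 \
               -f mp4 /output.mp4

    If the get_buffer() error at the very end still appears, checking whether the source WMV itself is truncated is a reasonable next step, since the failure happens on the last packets of the input.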

    Read the article

  • Use DivX settings to encode to mp4 with ffmpeg

    - by sjngm
    I'm used to use VirtualDub to encode a video to AVI container with DivX-codec (and MP3 for audio). Now I'm planning to use ffmpeg to encode videos to MP4 container with h264-codec. What I've figured out is that I need to use libx264 and one of those presets to make anything work. However, I'm amazed about the video bitrate ffmpeg uses for encoding. What I currently have is this little batch file: @ECHO OFF SETLOCAL SET IN=source.avs SET FFMPEG_PATH=C:\Program Files (x86)\ffmpeg SET PRESET=-fpre "%FFMPEG_PATH%\presets\libx264-lossless_slow.ffpreset" SET AUDIO=-acodec libmp3lame -ab 128000 SET VIDEO=-vcodec libx264 -vb 1978000 "%FFMPEG_PATH%\ffmpeg.exe" -i %IN% %AUDIO% %VIDEO% %PRESET% test.mp4 ENDLOCAL With this I tell ffmpeg to use 1978k as the bitrate, but ffmpeg uses 15000k+! I tried other presets, but they don't use my specified bitrate. Here are the presets I have: libx264-baseline.ffpreset libx264-ipod320.ffpreset libx264-ipod640.ffpreset libx264-lossless_fast.ffpreset libx264-lossless_max.ffpreset libx264-lossless_medium.ffpreset libx264-lossless_slow.ffpreset libx264-lossless_slower.ffpreset libx264-lossless_ultrafast.ffpreset ffmpeg version: FFmpeg git-N-29181-ga304071 libavutil 50. 40. 1 / 50. 40. 1 libavcodec 52.120. 0 / 52.120. 0 libavformat 52.108. 0 / 52.108. 0 libavdevice 52. 4. 0 / 52. 4. 0 libavfilter 1. 79. 0 / 1. 79. 0 libswscale 0. 13. 0 / 0. 13. 0 Note that I don't use the latest version as it has problems with spaces in filenames. Here's what seems to be the full parameter list DivX 6.9.2 uses: -bvnn 1978000 -vbv 218691200,100663296,100663296 -dir "C:\Users\sjngm\AppData\Roaming\DivX\DivX Codec" -w -b 1 -use_presets=1 -preset=10 -windowed_fullsearch=2 -thread_delay=1 What command line parameters would that be for ffmpeg? EDIT: Going with slhck's suggestion I tried a new 32-bit version. I have no idea if that is 0.9 or newer, I can't find that info. ffmpeg version N-36890-g67f5650 libavutil 51. 34.100 / 51. 34.100 libavcodec 53. 56.105 / 53. 56.105 libavformat 53. 30.100 / 53. 30.100 libavdevice 53. 4.100 / 53. 4.100 libavfilter 2. 59.100 / 2. 59.100 libswscale 2. 1.100 / 2. 1.100 libswresample 0. 6.100 / 0. 6.100 libpostproc 51. 2.100 / 51. 2.100 I reworked my batch file to look like this (interestingly enough I can't find parameter -vprofile in the documentation): @ECHO OFF SETLOCAL SET IN=VTS_01_1.avs SET FFMPEG_PATH=C:\Program Files (x86)\ffmpeg SET PRESET=-vprofile high -preset veryslow SET AUDIO=-acodec libmp3lame -ab 128000 SET VIDEO=-vcodec libx264 -vb 1978000 "%FFMPEG_PATH%\ffmpeg.exe" -i %IN% %AUDIO% %PRESET% %VIDEO% test.mp4 ENDLOCAL I see that it now uses the bitrate properly (thanks to LongNeckbeard for pointing out that the lossless-stuff ignores the bitrate!). Just in case you wonder how I came up with the 1978000, I'm using this formula which I found valid for DivX-files (I'm guessing the bitrate won't change that much for h264): width * height * 25 * 0.22 / 1000 I'm not sure if the 0.22 correlates with the CRF somehow. Overall I forgot to say the I will use a two-pass scenario, which is why I don't use the CRF here. I will try to read more about this. Currently I'm just trying to get something running that shows me that I'm doing something right (ffmpeg isn't the easiest tool to understand ;)). 
C:\Program Files (x86)\ffmpeg\ffmpeg.exe" -i VTS_01_1.avs -acodec libmp3lame -ab 128000 -vcodec libx264 -vb 1978000 -vprofile high -preset veryslow test.mp4 The output is now: ffmpeg version N-36890-g67f5650 Copyright (c) 2000-2012 the FFmpeg developers built on Jan 16 2012 21:57:13 with gcc 4.6.2 configuration: --enable-gpl --enable-version3 --disable-w32threads --enable-runtime-cpudetect --enable-avisynth --enable-bzlib --enable-frei0r --enable-libopencore-amrnb --enable-libopencore-amrwb --enable-libfreetype --enable-libgsm --enable-libmp3lame --enable-libopenjpeg --enable-librtmp --enable-libschroedinger --enable-libspeex --enable-libtheora --enable-libvo-aacenc --enable-libvo-amrwbenc --enable-libvorbis --enable-libvpx --enable-libx264 --enable-libxavs --enable-libxvid --enable-zlib libavutil 51. 34.100 / 51. 34.100 libavcodec 53. 56.105 / 53. 56.105 libavformat 53. 30.100 / 53. 30.100 libavdevice 53. 4.100 / 53. 4.100 libavfilter 2. 59.100 / 2. 59.100 libswscale 2. 1.100 / 2. 1.100 libswresample 0. 6.100 / 0. 6.100 libpostproc 51. 2.100 / 51. 2.100 Input #0, avs, from 'VTS_01_1.avs': Duration: 00:58:46.12, start: 0.000000, bitrate: 0 kb/s Stream #0:0: Video: rawvideo (YV12 / 0x32315659), yuv420p, 576x448, 77414 kb/s, 25 tbr, 25 tbn, 25 tbc Stream #0:1: Audio: pcm_s16le ([1][0][0][0] / 0x0001), 48000 Hz, 2 channels, s16, 1536 kb/s File 'test.mp4' already exists. Overwrite ? [y/N] y w:576 h:448 pixfmt:yuv420p tb:1/1000000 sar:0/1 sws_param: [libx264 @ 05A2C400] using cpu capabilities: MMX2 SSE2Fast FastShuffle SSEMisalign LZCNT [libx264 @ 05A2C400] profile High, level 3.1 [libx264 @ 05A2C400] 264 - core 120 r2120 0c7dab9 - H.264/MPEG-4 AVC codec - Copyleft 2003-2011 - http://www.videolan.org/x264.html - options: cabac=1 ref=16 deblock=1:0:0 analyse=0x3:0x133 me=umh subme=10 psy=1 psy_rd=1.00:0.00 mixed_ref=1 me_range=24 chroma_me=1 trellis=2 8x8dct=1 cqm=0 deadzone=21,11 fast_pskip=1 chroma_qp_offset=-2 threads=3 sliced_threads=0 nr=0 decimate=1 interlaced=0 bluray_compat=0 constrained_intra=0 bframes=8 b_pyramid=2 b_adapt=2 b_bias=0 direct=3 weightb=1 open_gop=0 weightp=2 keyint=250 keyint_min=25 scenecut=40 intra_refresh=0 rc_lookahead=60 rc=abr mbtree=1 bitrate=1978 ratetol=1.0 qcomp=0.60 qpmin=0 qpmax=69 qpstep=4 ip_ratio=1.40 aq=1:1.00 Output #0, mp4, to 'test.mp4': Metadata: encoder : Lavf53.30.100 Stream #0:0: Video: h264 (![0][0][0] / 0x0021), yuv420p, 576x448, q=-1--1, 1978 kb/s, 25 tbn, 25 tbc Stream #0:1: Audio: mp3 (i[0][0][0] / 0x0069), 48000 Hz, 2 channels, s16, 128 kb/s Stream mapping: Stream #0:0 -> #0:0 (rawvideo -> libx264) Stream #0:1 -> #0:1 (pcm_s16le -> libmp3lame) Press [q] to stop, [?] 
for help frame= 0 fps= 0 q=0.0 size= 0kB time=00:00:00.00 bitrate= 0.0kbits/s frame= 0 fps= 0 q=0.0 size= 0kB time=00:00:00.00 bitrate= 0.0kbits/s frame= 0 fps= 0 q=0.0 size= 0kB time=00:00:00.00 bitrate= 0.0kbits/s frame= 3 fps= 1 q=22.0 size= 39kB time=00:00:00.04 bitrate=8063.8kbits/ frame= 8 fps= 2 q=22.0 size= 82kB time=00:00:00.24 bitrate=2801.3kbits/ frame= 13 fps= 3 q=23.0 size= 120kB time=00:00:00.44 bitrate=2229.5kbits/ frame= 16 fps= 4 q=23.0 size= 147kB time=00:00:00.56 bitrate=2156.7kbits/ frame= 20 fps= 4 q=22.0 size= 175kB time=00:00:00.72 bitrate=1987.4kbits/ : video:4387kB audio:273kB global headers:0kB muxing overhead 0.260038% [libx264 @ 05A2C400] frame I:2 Avg QP:19.53 size: 29850 [libx264 @ 05A2C400] frame P:76 Avg QP:22.24 size: 19541 [libx264 @ 05A2C400] frame B:359 Avg QP:25.93 size: 8210 [libx264 @ 05A2C400] consecutive B-frames: 0.5% 0.5% 0.0% 8.2% 17.2% 52.2% 16.0% 5.5% 0.0% [libx264 @ 05A2C400] mb I I16..4: 5.4% 75.3% 19.3% [libx264 @ 05A2C400] mb P I16..4: 1.3% 16.5% 2.2% P16..4: 36.3% 28.6% 12.7% 1.8% 0.2% skip: 0.4% [libx264 @ 05A2C400] mb B I16..4: 0.4% 3.8% 0.3% B16..8: 40.0% 18.4% 4.7% direct:18.5% skip:13.9% L0:45.4% L1:38.1% BI:16.5% [libx264 @ 05A2C400] final ratefactor: 20.35 [libx264 @ 05A2C400] 8x8 transform intra:83.1% inter:68.5% [libx264 @ 05A2C400] direct mvs spatial:99.2% temporal:0.8% [libx264 @ 05A2C400] coded y,uvDC,uvAC intra: 64.9% 83.4% 49.2% inter: 49.0% 50.4% 4.4% [libx264 @ 05A2C400] i16 v,h,dc,p: 25% 22% 27% 26% [libx264 @ 05A2C400] i8 v,h,dc,ddl,ddr,vr,hd,vl,hu: 10% 7% 23% 9% 10% 10% 10%10% 13% [libx264 @ 05A2C400] i4 v,h,dc,ddl,ddr,vr,hd,vl,hu: 12% 11% 13% 9% 12% 11% 10% 9% 12% [libx264 @ 05A2C400] i8c dc,h,v,p: 42% 28% 16% 14% [libx264 @ 05A2C400] Weighted P-Frames: Y:18.4% UV:7.9% [libx264 @ 05A2C400] ref P L0: 29.1% 11.3% 15.7% 7.3% 6.9% 4.9% 5.1% 3.4%3.9% 2.7% 2.8% 1.8% 1.7% 1.2% 1.4% 0.9% [libx264 @ 05A2C400] ref B L0: 68.8% 11.4% 5.5% 2.9% 2.3% 1.9% 1.5% 1.1%1.1% 1.0% 0.9% 0.7% 0.5% 0.3% 0.1% [libx264 @ 05A2C400] ref B L1: 91.9% 8.1% [libx264 @ 05A2C400] kb/s:2055.88 As far as I'm concerned it doesn't look that bad to me.
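
    Since the lossless presets ignore the requested bitrate, and the goal here is a DivX-style fixed target of 1978 kbit/s, the closest x264 equivalent is a two-pass ABR encode rather than single-pass CRF. A rough sketch of the two passes, reusing the paths and values from the question (option spellings shifted between ffmpeg releases, so this assumes a build from roughly the same era; on Windows replace /dev/null with NUL):

        ffmpeg -y -i source.avs -vcodec libx264 -vb 1978000 -preset veryslow \
               -pass 1 -an -f mp4 /dev/null
        ffmpeg -i source.avs -vcodec libx264 -vb 1978000 -preset veryslow \
               -pass 2 -acodec libmp3lame -ab 128000 test.mp4

    Two-pass ABR is what most closely mirrors the old DivX "target bitrate" workflow; with single-pass CRF the width × height × fps × 0.22 formula no longer applies, because the encoder chooses the rate itself.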

    Read the article

  • Server randomly freezes

    - by PsySkeletor
    Im facing a very strange issue, my debian squeeze freezes up always at night (Berlin, time). Here is what i get from a time and after doing this a few times, it becomes frozen and must be hard-reset. From /var/log/messages Dec 11 01:36:11 srv156 kernel: [125983.204251] CPU 1: Dec 11 01:36:11 srv156 kernel: [125983.204251] Modules linked in: xt_multiport nf_conntrack_ipv4 nf_defrag_ipv4 xt_recent xt_state nf_conntrack xt_tcpudp iptable_filter ip_tables x_tables hwmon_vid snd_hda_codec_atihdmi snd_hda_intel snd_hda_codec snd_hwdep snd_pcm radeon snd_timer ttm drm_kms_helper snd k10temp i2c_piix4 soundcore snd_page_alloc edac_core parport_pc drm i2c_algo_bit i2c_core shpchp pci_hotplug pcspkr edac_mce_amd parport wmi evdev processor button ext3 jbd mbcache raid1 md_mod sd_mod crc_t10dif ata_generic ahci ohci_hcd pata_atiixp e100 mii libata xhci floppy ehci_hcd thermal thermal_sys usbcore scsi_mod nls_base [last unloaded: i2c_dev] Dec 11 01:36:11 srv156 kernel: [125983.204251] Pid: 758, comm: flush-9:0 Tainted: G B 2.6.32-5-amd64 #1 GA-78LMT-USB3 Dec 11 01:36:11 srv156 kernel: [125983.204251] RIP: 0010:[<ffffffff810b3506>] [<ffffffff810b3506>] find_get_pages_tag+0x66/0xdd Dec 11 01:36:11 srv156 kernel: [125983.204251] RSP: 0018:ffff8804235e7b30 EFLAGS: 00000286 Dec 11 01:36:11 srv156 kernel: [125983.204251] RAX: ffffffffffffffff RBX: ffff8804235e7c00 RCX: 0000000000000000 Dec 11 01:36:11 srv156 kernel: [125983.204251] RDX: 0000000000040000 RSI: ffffea000496b2a8 RDI: ffffea000496b2a0 Dec 11 01:36:11 srv156 kernel: [125983.204251] RBP: ffffffff8101166e R08: ffff8804235e7af0 R09: 0000000000000000 Dec 11 01:36:11 srv156 kernel: [125983.204251] R10: 0000000000000000 R11: 0000000000040000 R12: ffff8804235e7c08 Dec 11 01:36:11 srv156 kernel: [125983.204251] R13: 0000000d22678a20 R14: ffff8804235e7af0 R15: 00000000091b9060 Dec 11 01:36:11 srv156 kernel: [125983.204251] FS: 0000000000000000(0000) GS:ffff880010440000(0000) knlGS:000000007ebf7b70 Dec 11 01:36:11 srv156 kernel: [125983.204522] CS: 0010 DS: 0018 ES: 0018 CR0: 000000008005003b Dec 11 01:36:11 srv156 kernel: [125983.204522] CR2: 00000000dec86000 CR3: 0000000001001000 CR4: 00000000000006e0 Dec 11 01:36:11 srv156 kernel: [125983.204522] DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000 Dec 11 01:36:11 srv156 kernel: [125983.204522] DR3: 0000000000000000 DR6: 00000000ffff0ff0 DR7: 0000000000000400 Dec 11 01:36:11 srv156 kernel: [125983.204522] Call Trace: Dec 11 01:36:11 srv156 kernel: [125983.204522] [<ffffffff810bb792>] ? pagevec_lookup_tag+0x1a/0x21 Dec 11 01:36:11 srv156 kernel: [125983.204522] [<ffffffff810ba330>] ? write_cache_pages+0x162/0x327 Dec 11 01:36:11 srv156 kernel: [125983.204522] [<ffffffff810b9d48>] ? __writepage+0x0/0x25 Dec 11 01:36:11 srv156 kernel: [125983.204522] [<ffffffff8110758a>] ? writeback_single_inode+0xe7/0x2da Dec 11 01:36:11 srv156 kernel: [125983.204522] [<ffffffff81108290>] ? writeback_inodes_wb+0x424/0x4ff Dec 11 01:36:11 srv156 kernel: [125983.204522] [<ffffffff81108497>] ? wb_writeback+0x12c/0x1ab Dec 11 01:36:11 srv156 kernel: [125983.204522] [<ffffffff8110870d>] ? wb_do_writeback+0x14f/0x165 Dec 11 01:36:11 srv156 kernel: [125983.204522] [<ffffffff81108754>] ? bdi_writeback_task+0x31/0xaa Dec 11 01:36:11 srv156 kernel: [125983.204522] [<ffffffff810c8664>] ? bdi_start_fn+0x0/0xd0 Dec 11 01:36:11 srv156 kernel: [125983.204522] [<ffffffff810c86d4>] ? bdi_start_fn+0x70/0xd0 Dec 11 01:36:11 srv156 kernel: [125983.204522] [<ffffffff810c8664>] ? 
bdi_start_fn+0x0/0xd0 Dec 11 01:36:11 srv156 kernel: [125983.204522] [<ffffffff81064ac1>] ? kthread+0x79/0x81 Dec 11 01:36:11 srv156 kernel: [125983.204522] [<ffffffff81011baa>] ? child_rip+0xa/0x20 Dec 11 01:36:11 srv156 kernel: [125983.204522] [<ffffffff81064a48>] ? kthread+0x0/0x81 Dec 11 01:36:11 srv156 kernel: [125983.204522] [<ffffffff81011ba0>] ? child_rip+0x0/0x20 From /var/log/syslog Dec 10 21:20:29 srv156 kernel: [110625.162930] BUG: Bad page map in process java pte:14fa4f067 pmd:424b54067 Dec 10 21:20:29 srv156 kernel: [110625.162937] page:ffffea000496c148 flags:0200000000000878 count:2 mapcount:-1 mapping:ffff88014f8d7de8 index:2f4 Dec 10 21:20:29 srv156 kernel: [110625.162946] addr:0000000009096000 vm_flags:00100077 anon_vma:ffff880422410d40 mapping:(null) index:9096 Dec 10 21:20:29 srv156 kernel: [110625.162955] Pid: 21356, comm: java Tainted: G B 2.6.32-5-amd64 #1 Dec 10 21:20:29 srv156 kernel: [110625.162961] Call Trace: Dec 10 21:20:29 srv156 kernel: [110625.162966] [<ffffffff810ca4bf>] ? print_bad_pte+0x232/0x24a Dec 10 21:20:29 srv156 kernel: [110625.162973] [<ffffffff810cb56f>] ? unmap_vmas+0x62d/0x931 Dec 10 21:20:29 srv156 kernel: [110625.162980] [<ffffffff810cfc74>] ? exit_mmap+0xc4/0x148 Dec 10 21:20:29 srv156 kernel: [110625.162986] [<ffffffff8104bbc1>] ? mmput+0x3c/0xdf Dec 10 21:20:29 srv156 kernel: [110625.162992] [<ffffffff8104f81e>] ? exit_mm+0x102/0x10d Dec 10 21:20:29 srv156 kernel: [110625.162998] [<ffffffff81051243>] ? do_exit+0x1f8/0x6c9 Dec 10 21:20:29 srv156 kernel: [110625.163004] [<ffffffff81071abb>] ? futex_wake+0xd6/0xe7 Dec 10 21:20:29 srv156 kernel: [110625.163010] [<ffffffff8105178a>] ? do_group_exit+0x76/0x9d Dec 10 21:20:29 srv156 kernel: [110625.163016] [<ffffffff8105df9f>] ? get_signal_to_deliver+0x310/0x339 Dec 10 21:20:29 srv156 kernel: [110625.163023] [<ffffffff81010037>] ? do_notify_resume+0x87/0x73f Dec 10 21:20:29 srv156 kernel: [110625.163029] [<ffffffff810cc664>] ? handle_mm_fault+0x7aa/0x80f Dec 10 21:20:29 srv156 kernel: [110625.163036] [<ffffffff81073f14>] ? compat_sys_futex+0x10d/0x12b Dec 10 21:20:29 srv156 kernel: [110625.163043] [<ffffffff812fb546>] ? do_page_fault+0x2e0/0x2fc Dec 10 21:20:29 srv156 kernel: [110625.163049] [<ffffffff81010e0e>] ? int_signal+0x12/0x17 Dec 10 21:20:29 srv156 kernel: [110625.163114] BUG: Bad page state in process java pfn:14fa0c Dec 10 21:20:29 srv156 kernel: [110625.163120] page:ffffea000496b2a0 flags:020000000002001c count:0 mapcount:-1 mapping:ffff88039dc0db30 index:11e3 Dec 10 21:20:29 srv156 kernel: [110625.164563] Pid: 21356, comm: java Tainted: G B 2.6.32-5-amd64 #1 Dec 10 21:20:29 srv156 kernel: [110625.164570] Call Trace: Dec 10 21:20:29 srv156 kernel: [110625.164578] [<ffffffff810b71a9>] ? bad_page+0x116/0x129 Dec 10 21:20:29 srv156 kernel: [110625.164586] [<ffffffff810b7692>] ? free_pages_check+0x38/0x57 Dec 10 21:20:29 srv156 kernel: [110625.164595] [<ffffffff810b89cf>] ? free_hot_cold_page+0x46/0x190 Dec 10 21:20:29 srv156 kernel: [110625.164603] [<ffffffff810b8b82>] ? __pagevec_free+0x69/0x7f Dec 10 21:20:29 srv156 kernel: [110625.164611] [<ffffffff810bba3f>] ? release_pages+0x137/0x18d Dec 10 21:20:29 srv156 kernel: [110625.164620] [<ffffffff810d8559>] ? free_pages_and_swap_cache+0x57/0x73 Dec 10 21:20:29 srv156 kernel: [110625.164629] [<ffffffff810cb5ed>] ? unmap_vmas+0x6ab/0x931 Dec 10 21:20:29 srv156 kernel: [110625.164637] [<ffffffff810cfc74>] ? exit_mmap+0xc4/0x148 Dec 10 21:20:29 srv156 kernel: [110625.164644] [<ffffffff8104bbc1>] ? 
mmput+0x3c/0xdf Dec 10 21:20:29 srv156 kernel: [110625.164652] [<ffffffff8104f81e>] ? exit_mm+0x102/0x10d Dec 10 21:20:29 srv156 kernel: [110625.164660] [<ffffffff81051243>] ? do_exit+0x1f8/0x6c9 Dec 10 21:20:29 srv156 kernel: [110625.164667] [<ffffffff81071abb>] ? futex_wake+0xd6/0xe7 Dec 10 21:20:29 srv156 kernel: [110625.164675] [<ffffffff8105178a>] ? do_group_exit+0x76/0x9d Dec 10 21:20:29 srv156 kernel: [110625.164683] [<ffffffff8105df9f>] ? get_signal_to_deliver+0x310/0x339 Dec 10 21:20:29 srv156 kernel: [110625.164692] [<ffffffff81010037>] ? do_notify_resume+0x87/0x73f Dec 10 21:20:29 srv156 kernel: [110625.164700] [<ffffffff810cc664>] ? handle_mm_fault+0x7aa/0x80f I only added this last piece of log recently, because I've just found it. It looks as if the Java process does something and then begins to slowly eat all the resources of the server, but I don't know whether that is the root cause. I'm using Debian Squeeze. The output of uname -a is: Linux srv156 2.6.32-5-amd64 #1 SMP Sun Sep 23 11:00:33 UTC 2012 x86_64 GNU/Linux I would really appreciate your help; I don't know what more to do.
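
    Kernel messages like "Bad page map" and "Bad page state" in an ordinary process usually point below the application, at memory, the disk subsystem, or the kernel itself, rather than at Java. Before digging into the JVM, a first diagnostic pass could look roughly like this (package names assume Debian; log paths as on a default Squeeze install):

        # Memory test: adds a memtest entry to the GRUB menu, then reboot into it overnight
        apt-get install memtest86+

        # Quick disk health check on the system drive
        apt-get install smartmontools && smartctl -H /dev/sda

        # Collect every bad-page or machine-check line seen so far, with timestamps
        grep -iE 'bad page|machine check|mce' /var/log/kern.log /var/log/messages

    If memtest86+ runs clean overnight, the fact that the freezes happen at night also makes it worth correlating cron jobs (log rotation, backups) against the timestamp of the first bad-page message.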

    Read the article

  • ProFTPd server on Ubuntu getting access denied message when successfully authenticated?

    - by exxoid
    I have a Ubuntu box with a ProFTPD 1.3.4a Server, when I try to log in via my FTP Client I cannot do anything as it does not allow me to list directories; I have tried logging in as root and as a regular user and tried accessing different paths within the FTP Server. The error I get in my FTP Client is: Status: Retrieving directory listing... Command: CDUP Response: 250 CDUP command successful Command: PWD Response: 257 "/var" is the current directory Command: PASV Response: 227 Entering Passive Mode (172,16,4,22,237,205). Command: MLSD Response: 550 Access is denied. Error: Failed to retrieve directory listing Any idea? Here is the config of my proftpd: # # /etc/proftpd/proftpd.conf -- This is a basic ProFTPD configuration file. # To really apply changes, reload proftpd after modifications, if # it runs in daemon mode. It is not required in inetd/xinetd mode. # # Includes DSO modules Include /etc/proftpd/modules.conf # Set off to disable IPv6 support which is annoying on IPv4 only boxes. UseIPv6 off # If set on you can experience a longer connection delay in many cases. IdentLookups off ServerName "Drupal Intranet" ServerType standalone ServerIdent on "FTP Server ready" DeferWelcome on # Set the user and group that the server runs as User nobody Group nogroup MultilineRFC2228 on DefaultServer on ShowSymlinks on TimeoutNoTransfer 600 TimeoutStalled 600 TimeoutIdle 1200 DisplayLogin welcome.msg DisplayChdir .message true ListOptions "-l" DenyFilter \*.*/ # Use this to jail all users in their homes # DefaultRoot ~ # Users require a valid shell listed in /etc/shells to login. # Use this directive to release that constrain. # RequireValidShell off # Port 21 is the standard FTP port. Port 21 # In some cases you have to specify passive ports range to by-pass # firewall limitations. Ephemeral ports can be used for that, but # feel free to use a more narrow range. # PassivePorts 49152 65534 # If your host was NATted, this option is useful in order to # allow passive tranfers to work. You have to use your public # address and opening the passive ports used on your firewall as well. # MasqueradeAddress 1.2.3.4 # This is useful for masquerading address with dynamic IPs: # refresh any configured MasqueradeAddress directives every 8 hours <IfModule mod_dynmasq.c> # DynMasqRefresh 28800 </IfModule> # To prevent DoS attacks, set the maximum number of child processes # to 30. If you need to allow more than 30 concurrent connections # at once, simply increase this value. Note that this ONLY works # in standalone mode, in inetd mode you should use an inetd server # that allows you to limit maximum number of processes per service # (such as xinetd) MaxInstances 30 # Set the user and group that the server normally runs at. # Umask 022 is a good standard umask to prevent new files and dirs # (second parm) from being group and world writable. Umask 022 022 # Normally, we want files to be overwriteable. AllowOverwrite on # Uncomment this if you are using NIS or LDAP via NSS to retrieve passwords: # PersistentPasswd off # This is required to use both PAM-based authentication and local passwords AuthPAMConfig proftpd AuthOrder mod_auth_pam.c* mod_auth_unix.c # Be warned: use of this directive impacts CPU average load! # Uncomment this if you like to see progress and transfer rate with ftpwho # in downloads. That is not needed for uploads rates. 
# UseSendFile off TransferLog /var/log/proftpd/xferlog SystemLog /var/log/proftpd/proftpd.log # Logging onto /var/log/lastlog is enabled but set to off by default #UseLastlog on # In order to keep log file dates consistent after chroot, use timezone info # from /etc/localtime. If this is not set, and proftpd is configured to # chroot (e.g. DefaultRoot or <Anonymous>), it will use the non-daylight # savings timezone regardless of whether DST is in effect. #SetEnv TZ :/etc/localtime <IfModule mod_quotatab.c> QuotaEngine off </IfModule> <IfModule mod_ratio.c> Ratios off </IfModule> # Delay engine reduces impact of the so-called Timing Attack described in # http://www.securityfocus.com/bid/11430/discuss # It is on by default. <IfModule mod_delay.c> DelayEngine on </IfModule> <IfModule mod_ctrls.c> ControlsEngine off ControlsMaxClients 2 ControlsLog /var/log/proftpd/controls.log ControlsInterval 5 ControlsSocket /var/run/proftpd/proftpd.sock </IfModule> <IfModule mod_ctrls_admin.c> AdminControlsEngine off </IfModule> # # Alternative authentication frameworks # #Include /etc/proftpd/ldap.conf #Include /etc/proftpd/sql.conf # # This is used for FTPS connections # #Include /etc/proftpd/tls.conf # # Useful to keep VirtualHost/VirtualRoot directives separated # #Include /etc/proftpd/virtuals.con # A basic anonymous configuration, no upload directories. # <Anonymous ~ftp> # User ftp # Group nogroup # # We want clients to be able to login with "anonymous" as well as "ftp" # UserAlias anonymous ftp # # Cosmetic changes, all files belongs to ftp user # DirFakeUser on ftp # DirFakeGroup on ftp # # RequireValidShell off # # # Limit the maximum number of anonymous logins # MaxClients 10 # # # We want 'welcome.msg' displayed at login, and '.message' displayed # # in each newly chdired directory. # DisplayLogin welcome.msg # DisplayChdir .message # # # Limit WRITE everywhere in the anonymous chroot # <Directory *> # <Limit WRITE> # DenyAll # </Limit> # </Directory> # # # Uncomment this if you're brave. # # <Directory incoming> # # # Umask 022 is a good standard umask to prevent new files and dirs # # # (second parm) from being group and world writable. # # Umask 022 022 # # <Limit READ WRITE> # # DenyAll # # </Limit> # # <Limit STOR> # # AllowAll # # </Limit> # # </Directory> # # </Anonymous> # Include other custom configuration files Include /etc/proftpd/conf.d/ UseReverseDNS off <Global> RootLogin on UseFtpUsers on ServerIdent on DefaultChdir /var/www DeleteAbortedStores on LoginPasswordPrompt on AccessGrantMsg "You have been authenticated successfully." </Global> Any idea what could be wrong? Thanks for your help!
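
    The client log shows the login and the CDUP/PWD commands succeeding and only MLSD failing with 550, so the problem is in what the server lets that session read, not in authentication. With the configuration above, a few server-side checks narrow it down quickly; this is only a sketch, with paths taken from the posted config and log wording that varies by ProFTPD version:

        # Syntax-check the configuration that is actually being loaded
        proftpd -t -c /etc/proftpd/proftpd.conf

        # Watch the server's view of the failing MLSD while the client retries
        tail -f /var/log/proftpd/proftpd.log /var/log/proftpd/xferlog

        # Make sure the login user can traverse and read DefaultChdir
        ls -ld / /var /var/www

    If the log shows MLSD being refused by a filter or <Limit> rule rather than by filesystem permissions, the DenyFilter line and the access limits in the configuration above are the next place to look.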

    Read the article

< Previous Page | 223 224 225 226 227 228 229  | Next Page >