Search Results

Search found 8552 results on 343 pages for '14 04'.

Page 21 of 343

  • WebLogic Server study session @ Tokyo: December session announcement

    - by ???Y
    The 31st WebLogic Server study session @ Tokyo will be held on Friday, December 14, 2012, the last session of 2012. The program is in two parts: WebLogic Server presentations and Lightning Talks. Over the past year the sessions have covered WebLogic Server topics ranging from operational tooling such as WLST, through JMS and Flight Recorder, to JSF 2.0, EJB 3.1 and JPA 2.0. Date and time: Friday, December 14, 2012 (Fri), 18:30-20:40 (doors open 18:00). Venue: 2-5-8 (postal code 107-0061), Tokyo.

    Read the article

  • Out of memory error

    - by Rahul Varma
    Hi, I am trying to retrieve a list of images and text from a web service. I have first coded to get the images into a list using a SimpleAdapter. The images are getting displayed, but the app is showing an error, and the following errors occur in the LogCat...

    04-26 10:55:39.483: ERROR/dalvikvm-heap(1047): 8850-byte external allocation too large for this process.
    04-26 10:55:39.493: ERROR/(1047): VM won't let us allocate 8850 bytes
    04-26 10:55:39.563: ERROR/AndroidRuntime(1047): Uncaught handler: thread Thread-96 exiting due to uncaught exception
    04-26 10:55:39.573: ERROR/AndroidRuntime(1047): java.lang.OutOfMemoryError: bitmap size exceeds VM budget
    04-26 10:55:39.573: ERROR/AndroidRuntime(1047): at android.graphics.BitmapFactory.nativeDecodeStream(Native Method)
    04-26 10:55:39.573: ERROR/AndroidRuntime(1047): at android.graphics.BitmapFactory.decodeStream(BitmapFactory.java:451)
    04-26 10:55:39.573: ERROR/AndroidRuntime(1047): at com.stellent.gorinka.AsyncImageLoaderv.loadImageFromUrl(AsyncImageLoaderv.java:57)
    04-26 10:55:39.573: ERROR/AndroidRuntime(1047): at com.stellent.gorinka.AsyncImageLoaderv$2.run(AsyncImageLoaderv.java:41)
    04-26 10:55:40.393: ERROR/dalvikvm-heap(1047): 14600-byte external allocation too large for this process.
    04-26 10:55:40.403: ERROR/(1047): VM won't let us allocate 14600 bytes
    04-26 10:55:40.493: ERROR/AndroidRuntime(1047): Uncaught handler: thread Thread-93 exiting due to uncaught exception
    04-26 10:55:40.493: ERROR/AndroidRuntime(1047): java.lang.OutOfMemoryError: bitmap size exceeds VM budget
    04-26 10:55:40.493: ERROR/AndroidRuntime(1047): at android.graphics.BitmapFactory.nativeDecodeStream(Native Method)
    04-26 10:55:40.493: ERROR/AndroidRuntime(1047): at android.graphics.BitmapFactory.decodeStream(BitmapFactory.java:451)
    04-26 10:55:40.493: ERROR/AndroidRuntime(1047): at com.stellent.gorinka.AsyncImageLoaderv.loadImageFromUrl(AsyncImageLoaderv.java:57)
    04-26 10:55:40.493: ERROR/AndroidRuntime(1047): at com.stellent.gorinka.AsyncImageLoaderv$2.run(AsyncImageLoaderv.java:41)
    04-26 10:55:40.594: INFO/Process(584): Sending signal. PID: 1047 SIG: 3

    Here's the code in the adapter...

    final ImageView imageView = (ImageView) rowView.findViewById(R.id.image);
    AsyncImageLoaderv asyncImageLoader = new AsyncImageLoaderv();
    Bitmap cachedImage = asyncImageLoader.loadDrawable(imgPath, new AsyncImageLoaderv.ImageCallback() {
        public void imageLoaded(Bitmap imageDrawable, String imageUrl) {
            imageView.setImageBitmap(imageDrawable);
        }
    });
    imageView.setImageBitmap(cachedImage);

    ..........

    //To load the image...
    public static Bitmap loadImageFromUrl(String url) {
        InputStream inputStream;
        Bitmap b;
        try {
            inputStream = (InputStream) new URL(url).getContent();
            BitmapFactory.Options bpo = new BitmapFactory.Options();
            bpo.inSampleSize = 2;
            b = BitmapFactory.decodeStream(inputStream, null, bpo);
            return b;
        } catch (IOException e) {
            throw new RuntimeException(e);
        }
        // return null;
    }

    Please tell me how to fix the error....
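    A common way out of this OutOfMemoryError is to stop hard-coding inSampleSize = 2 and instead derive it from the image's real dimensions versus the size the ImageView actually needs. Below is a minimal sketch of a replacement loader (the method name loadScaledBitmap and the reqWidth/reqHeight parameters are illustrative, not from the question); it buffers the download once so the data can be decoded twice, first for bounds only, then for pixels:

    import java.io.ByteArrayOutputStream;
    import java.io.InputStream;
    import java.net.URL;
    import android.graphics.Bitmap;
    import android.graphics.BitmapFactory;

    public static Bitmap loadScaledBitmap(String url, int reqWidth, int reqHeight) {
        try {
            // Buffer the whole download, since a URL stream cannot be rewound.
            InputStream in = (InputStream) new URL(url).getContent();
            ByteArrayOutputStream buf = new ByteArrayOutputStream();
            byte[] chunk = new byte[4096];
            int n;
            while ((n = in.read(chunk)) != -1) buf.write(chunk, 0, n);
            in.close();
            byte[] data = buf.toByteArray();

            // First pass: read only the dimensions, no pixel allocation.
            BitmapFactory.Options opts = new BitmapFactory.Options();
            opts.inJustDecodeBounds = true;
            BitmapFactory.decodeByteArray(data, 0, data.length, opts);

            // Largest power-of-two subsampling that still covers the target size.
            int sample = 1;
            while (opts.outWidth / (sample * 2) >= reqWidth
                    && opts.outHeight / (sample * 2) >= reqHeight) {
                sample *= 2;
            }

            // Second pass: decode for real at the reduced size.
            opts.inJustDecodeBounds = false;
            opts.inSampleSize = sample;
            return BitmapFactory.decodeByteArray(data, 0, data.length, opts);
        } catch (Exception e) {
            return null; // let the caller decide how to handle a failed load
        }
    }

    Passing the ImageView's width and height as reqWidth/reqHeight keeps each decoded bitmap no larger than roughly what the screen can show, which is usually enough to stay inside the old Dalvik external-allocation budget.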

    Read the article

  • Galaxy Tab 3 producing continuous LightSensor error in LogCat

    - by Richard Tingle
    I am using a Galaxy Tab 3 as a test device for writing an Android app. As such I'm interested in the output of the LogCat, which is being filled with these error-level messages. The device itself appears to work correctly: apps which rely on the light sensor respond to it correctly, and the number in the error itself goes down if the light sensor is obscured. If I wasn't using it to develop apps I wouldn't even be aware of the issue, but I believe it is an issue with the device itself, not my app: simply plugging the Tab 3 into the computer and using Eclipse - ADT to look at the LogCat without any app running leads to these errors being shown. I know I could filter the LogCat to ignore these errors but, inconvenience aside, they concern me. A sample of the LogCat is below (it generates errors continuously). This is on verbose, so it includes some debug-level (D/) messages as well as the error-level messages (E/). How can I correct the device so that it no longer generates these errors?

    06-11 10:08:45.789: E/LightSensor(377): LightSensor::readEvents mPendingEvent.light = 14
    06-11 10:08:45.992: E/LightSensor(377): LightSensor::readEvents mPendingEvent.light = 14
    06-11 10:08:46.195: E/LightSensor(377): LightSensor::readEvents mPendingEvent.light = 14
    06-11 10:08:46.398: E/LightSensor(377): LightSensor::readEvents mPendingEvent.light = 14
    06-11 10:08:46.601: E/LightSensor(377): LightSensor::readEvents mPendingEvent.light = 14
    06-11 10:08:46.804: E/LightSensor(377): LightSensor::readEvents mPendingEvent.light = 14
    06-11 10:08:47.007: E/LightSensor(377): LightSensor::readEvents mPendingEvent.light = 14
    06-11 10:08:47.210: E/LightSensor(377): LightSensor::readEvents mPendingEvent.light = 14
    06-11 10:08:47.414: E/LightSensor(377): LightSensor::readEvents mPendingEvent.light = 14
    06-11 10:08:47.617: E/LightSensor(377): LightSensor::readEvents mPendingEvent.light = 14
    06-11 10:08:47.820: E/LightSensor(377): LightSensor::readEvents mPendingEvent.light = 14
    06-11 10:08:48.023: E/LightSensor(377): LightSensor::readEvents mPendingEvent.light = 14
    06-11 10:08:48.039: D/dalvikvm(15201): GC_CONCURRENT freed 1947K, 17% free 16973K/20359K, paused 13ms+13ms, total 50ms
    06-11 10:08:48.226: E/LightSensor(377): LightSensor::readEvents mPendingEvent.light = 14
    06-11 10:08:48.429: E/LightSensor(377): LightSensor::readEvents mPendingEvent.light = 13
    06-11 10:08:48.632: E/LightSensor(377): LightSensor::readEvents mPendingEvent.light = 13
    06-11 10:08:48.632: D/STATUSBAR-NetworkController(472): refreshSignalCluster: data=0 bt=false
    06-11 10:08:48.835: E/LightSensor(377): LightSensor::readEvents mPendingEvent.light = 14
    06-11 10:08:49.039: E/LightSensor(377): LightSensor::readEvents mPendingEvent.light = 14
    06-11 10:08:49.242: E/LightSensor(377): LightSensor::readEvents mPendingEvent.light = 14
    06-11 10:08:49.445: E/LightSensor(377): LightSensor::readEvents mPendingEvent.light = 13
    06-11 10:08:49.632: D/STATUSBAR-NetworkController(472): refreshSignalCluster: data=0 bt=false
    06-11 10:08:49.648: E/LightSensor(377): LightSensor::readEvents mPendingEvent.light = 14
    06-11 10:08:49.851: E/LightSensor(377): LightSensor::readEvents mPendingEvent.light = 13
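    As a stopgap for development (plain adb filterspec usage, not a fix for the device): the tag can be silenced on the reading side with adb logcat "*:V" LightSensor:S, which keeps everything else verbose but mutes the LightSensor tag. That only filters what you see, though; the device keeps emitting the messages, and since they come from the system sensor daemon rather than any app, a firmware update from the vendor is probably the only thing that would actually stop them.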

    Read the article

  • UFW blocks SSL connections Varnish/Apache2 on Ubuntu 12.04

    - by user1383815
    I have installed Virtualmin on an Ubuntu 12.04 server and I'm using a LAMP stack with Varnish (:80) in front of Apache (:8000). However, I cannot access https when UFW is enabled. When I disable UFW, all works fine. Here is what UFW logging shows when I attempt to access a website via https:

    Dec 14 05:42:29 localhost kernel: [64491.327263] [UFW BLOCK] IN=eth0 OUT= MAC=e4:11:5b:e5:ef:8c:00:d0:02:8f:f0:00:08:00 SRC=MY_IP_ADDRESS DST=SERVER_IP_ADDRESS LEN=52 TOS=0x00 PREC=0x00 TTL=115 ID=2524 DF PROTO=TCP SPT=56430 DPT=20000 WINDOW=8192 RES=0x00 SYN URGP=0

    Here is my UFW ruleset:

    $ ufw status
    Status: active

    To                   Action    From
    --                   ------    ----
    2221                 ALLOW     Anywhere
    10000                ALLOW     Anywhere
    80                   ALLOW     Anywhere
    21                   ALLOW     Anywhere
    8000                 ALLOW     Anywhere
    Apache Secure        ALLOW     Anywhere
    2221                 ALLOW     Anywhere (v6)
    10000                ALLOW     Anywhere (v6)
    80                   ALLOW     Anywhere (v6)
    21                   ALLOW     Anywhere (v6)
    8000                 ALLOW     Anywhere (v6)
    Apache Secure (v6)   ALLOW     Anywhere (v6)

    Does anyone have any pointers on how to fix this problem? Thank you for your time.
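    One detail in the log worth acting on (an observation from the quoted line, not a tested fix): the blocked SYN has DPT=20000, the Usermin port that Virtualmin installs alongside Webmin's 10000, and 20000 is the one port in this story absent from the ruleset. Adding it with sudo ufw allow 20000 should stop those particular blocks; port 443 itself already appears covered by the "Apache Secure" profile.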

    Read the article

  • Error 2013: Lost connection to MySQL server during query when executing CHECK TABLE FOR UPGRADE

    - by Dean Richardson
    I just upgraded Ubuntu from 11.10 to 12.04. My rails app now returns the (passenger) error "Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (111) (Mysql2::Error)". I get a similar error when I try to access mysql at the command line on my Ubuntu server using mysql -u root -p. I have mysql-server 5.5 installed. I've checked and mysql is not running. When I try to restart it, it fails. Here are some key lines from the tail of /var/log/syslog after an attempted restart:

    dean@dgwjasonfried:/etc/mysql$ tail -f /var/log/syslog
    Mar 7 08:55:27 dgwjasonfried /etc/mysql/debian-start[5107]: Looking for 'mysqlcheck' as: /usr/bin/mysqlcheck
    Mar 7 08:55:27 dgwjasonfried /etc/mysql/debian-start[5107]: Running 'mysqlcheck' with connection arguments: '--port=3306' '--socket=/var/run/mysqld/mysqld.sock' '--host=localhost' '--socket=/var/run/mysqld/mysqld.sock' '--host=localhost' '--socket=/var/run/mysqld/mysqld.sock'
    Mar 7 08:55:27 dgwjasonfried /etc/mysql/debian-start[5107]: Running 'mysqlcheck' with connection arguments: '--port=3306' '--socket=/var/run/mysqld/mysqld.sock' '--host=localhost' '--socket=/var/run/mysqld/mysqld.sock' '--host=localhost' '--socket=/var/run/mysqld/mysqld.sock'
    Mar 7 08:55:27 dgwjasonfried /etc/mysql/debian-start[5107]: /usr/bin/mysqlcheck: Got error: 2013: Lost connection to MySQL server during query when executing 'CHECK TABLE ... FOR UPGRADE'
    Mar 7 08:55:27 dgwjasonfried /etc/mysql/debian-start[5107]: FATAL ERROR: Upgrade failed
    Mar 7 08:55:27 dgwjasonfried /etc/mysql/debian-start[5107]: molex_app_development.assets OK
    Mar 7 08:55:27 dgwjasonfried /etc/mysql/debian-start[5107]: molex_app_development.ecd_types OK
    Mar 7 08:55:27 dgwjasonfried /etc/mysql/debian-start[5124]: Checking for insecure root accounts.
    Mar 7 08:55:27 dgwjasonfried kernel: [ 7551.769657] init: mysql main process (5064) terminated with status 1
    Mar 7 08:55:27 dgwjasonfried kernel: [ 7551.769697] init: mysql respawning too fast, stopped

    Here is most of /etc/mysql/my.cnf:

    # Remember to edit /etc/mysql/debian.cnf when changing the socket location.
    [client]
    port = 3306
    socket = /var/run/mysqld/mysqld.sock
    # Here is entries for some specific programs
    # The following values assume you have at least 32M ram
    # This was formally known as [safe_mysqld]. Both versions are currently parsed.
    [mysqld_safe]
    socket = /var/run/mysqld/mysqld.sock
    nice = 0
    [mysqld]
    # Basic Settings
    user = mysql
    pid-file = /var/run/mysqld/mysqld.pid
    socket = /var/run/mysqld/mysqld.sock
    port = 3306
    basedir = /usr
    datadir = /var/lib/mysql
    tmpdir = /tmp
    lc-messages-dir = /usr/share/mysql
    skip-external-locking
    # Instead of skip-networking the default is now to listen only on
    # localhost which is more compatible and is not less secure.
    bind-address = 127.0.0.1

    And here are permissions for /var/run/mysqld/mysqld.sock:

    srwxrwxrwx 1 mysql mysql 0 Mar 7 09:18 mysqld.sock

    I'd be grateful for any suggestions the community might have. I reviewed the related questions here and attempted some of the fixes offered, but to no avail. Thanks! Dean Richardson

    Update: Thanks to quanta's suggestion, I looked at the /var/log/mysql/error.log file. I found error messages relating to pointers, fatal signals, and more stuff that I really couldn't make much sense of. I also found mysql man page references, however. One suggested that I try starting mysqld with the --innodb_force_recovery=# option, then attempt to dump (or drop) the offending/corrupted database or table.
    I worked through the escalating option levels one by one (innodb_force_recovery=1, innodb_force_recovery=2, etc.). This allowed me to successfully run mysql -u root -p from the command line and execute several commands. I was able to run queries on my production database, but any attempt to query, dump, or even drop my development database raised an error and led to me losing the connection to mysql. So I've made progress, but until I'm somehow able to drop or repair my development db I'm still unable to get my app to load. Any further advice or suggestions? Thanks! Dean

    Update: Right after retrying sudo mysqld --innodb_force_recover=1, the error.log file shows this:

    130308 4:55:39 [Note] Plugin 'FEDERATED' is disabled.
    130308 4:55:39 InnoDB: The InnoDB memory heap is disabled
    130308 4:55:39 InnoDB: Mutexes and rw_locks use GCC atomic builtins
    130308 4:55:39 InnoDB: Compressed tables use zlib 1.2.3.4
    130308 4:55:39 InnoDB: Initializing buffer pool, size = 128.0M
    130308 4:55:39 InnoDB: Completed initialization of buffer pool
    130308 4:55:39 InnoDB: highest supported file format is Barracuda.
    InnoDB: The log sequence number in ibdata files does not match
    InnoDB: the log sequence number in the ib_logfiles!
    130308 4:55:39 InnoDB: Database was not shut down normally!
    InnoDB: Starting crash recovery.
    InnoDB: Reading tablespace information from the .ibd files...
    InnoDB: Restoring possible half-written data pages from the doublewrite
    InnoDB: buffer...
    130308 4:55:40 InnoDB: Waiting for the background threads to start
    130308 4:55:41 InnoDB: 1.1.8 started; log sequence number 10259220
    130308 4:55:41 InnoDB: !!! innodb_force_recovery is set to 1 !!!
    130308 4:55:41 [Note] Server hostname (bind-address): '127.0.0.1'; port: 3306
    130308 4:55:41 [Note] - '127.0.0.1' resolves to '127.0.0.1';
    130308 4:55:41 [Note] Server socket created on IP: '127.0.0.1'.
    130308 4:55:41 [Note] Event Scheduler: Loaded 0 events
    130308 4:55:41 [Note] mysqld: ready for connections.
    Version: '5.5.29-0ubuntu0.12.04.2' socket: '/var/run/mysqld/mysqld.sock' port: 3306 (Ubuntu)

    Then after mysql -u root -p and:

    mysql> drop database molex_app_development;
    ERROR 2013 (HY000): Lost connection to MySQL server during query
    mysql>

    the error.log contains:

    dean@dgwjasonfried:/var/log/mysql$ tail -f error.log
    /lib/x86_64-linux-gnu/libc.so.6(clone+0x6d)[0x7f6a3ff9ecbd]
    Trying to get some variables.
    Some pointers may be invalid and cause the dump to abort.
    Query (7f6a1c004bd8): is an invalid pointer
    Connection ID (thread ID): 1
    Status: NOT_KILLED
    The manual page at http://dev.mysql.com/doc/mysql/en/crashing.html contains information that should help you find out what is causing the crash.
    130308 4:55:39 [Note] Plugin 'FEDERATED' is disabled.
    130308 4:55:39 InnoDB: The InnoDB memory heap is disabled
    130308 4:55:39 InnoDB: Mutexes and rw_locks use GCC atomic builtins
    130308 4:55:39 InnoDB: Compressed tables use zlib 1.2.3.4
    130308 4:55:39 InnoDB: Initializing buffer pool, size = 128.0M
    130308 4:55:39 InnoDB: Completed initialization of buffer pool
    130308 4:55:39 InnoDB: highest supported file format is Barracuda.
    InnoDB: The log sequence number in ibdata files does not match
    InnoDB: the log sequence number in the ib_logfiles!
    130308 4:55:39 InnoDB: Database was not shut down normally!
    InnoDB: Starting crash recovery.
    InnoDB: Reading tablespace information from the .ibd files...
    InnoDB: Restoring possible half-written data pages from the doublewrite
    InnoDB: buffer...
    130308 4:55:40 InnoDB: Waiting for the background threads to start
    130308 4:55:41 InnoDB: 1.1.8 started; log sequence number 10259220
    130308 4:55:41 InnoDB: !!! innodb_force_recovery is set to 1 !!!
    130308 4:55:41 [Note] Server hostname (bind-address): '127.0.0.1'; port: 3306
    130308 4:55:41 [Note] - '127.0.0.1' resolves to '127.0.0.1';
    130308 4:55:41 [Note] Server socket created on IP: '127.0.0.1'.
    130308 4:55:41 [Note] Event Scheduler: Loaded 0 events
    130308 4:55:41 [Note] mysqld: ready for connections.
    Version: '5.5.29-0ubuntu0.12.04.2' socket: '/var/run/mysqld/mysqld.sock' port: 3306 (Ubuntu)
    130308 4:58:23 [ERROR] Incorrect definition of table mysql.proc: expected column 'comment' at position 15 to have type text, found type char(64).
    130308 4:58:23 InnoDB: Assertion failure in thread 140168992810752 in file fsp0fsp.c line 3639
    InnoDB: We intentionally generate a memory trap.
    InnoDB: Submit a detailed bug report to http://bugs.mysql.com.
    InnoDB: If you get repeated assertion failures or crashes, even
    InnoDB: immediately after the mysqld startup, there may be
    InnoDB: corruption in the InnoDB tablespace. Please refer to
    InnoDB: http://dev.mysql.com/doc/refman/5.5/en/forcing-innodb-recovery.html
    InnoDB: about forcing recovery.
    10:58:23 UTC - mysqld got signal 6 ;
    This could be because you hit a bug. It is also possible that this binary or one of the libraries it was linked against is corrupt, improperly built, or misconfigured. This error can also be caused by malfunctioning hardware. We will try our best to scrape up some info that will hopefully help diagnose the problem, but since we have already crashed, something is definitely wrong and this may fail.

    key_buffer_size=16777216
    read_buffer_size=131072
    max_used_connections=1
    max_threads=151
    thread_count=1
    connection_count=1
    It is possible that mysqld could use up to key_buffer_size + (read_buffer_size + sort_buffer_size)*max_threads = 346681 K bytes of memory
    Hope that's ok; if not, decrease some variables in the equation.

    Thread pointer: 0x7f7ba4f6c2f0
    Attempting backtrace. You can use the following information to find out where mysqld died. If you see no messages after this, something went terribly wrong...
    stack_bottom = 7f7ba3065e60 thread_stack 0x30000
    mysqld(my_print_stacktrace+0x29)[0x7f7ba3609039]
    mysqld(handle_fatal_signal+0x483)[0x7f7ba34cf9c3]
    /lib/x86_64-linux-gnu/libpthread.so.0(+0xfcb0)[0x7f7ba2220cb0]
    /lib/x86_64-linux-gnu/libc.so.6(gsignal+0x35)[0x7f7ba188c425]
    /lib/x86_64-linux-gnu/libc.so.6(abort+0x17b)[0x7f7ba188fb8b]
    mysqld(+0x65e0fc)[0x7f7ba37160fc]
    mysqld(+0x602be6)[0x7f7ba36babe6]
    mysqld(+0x635006)[0x7f7ba36ed006]
    mysqld(+0x5d7072)[0x7f7ba368f072]
    mysqld(+0x5d7b9c)[0x7f7ba368fb9c]
    mysqld(+0x6a3348)[0x7f7ba375b348]
    mysqld(+0x6a3887)[0x7f7ba375b887]
    mysqld(+0x5c6a86)[0x7f7ba367ea86]
    mysqld(+0x5ae3a7)[0x7f7ba36663a7]
    mysqld(_Z15ha_delete_tableP3THDP10handlertonPKcS4_S4_b+0x16d)[0x7f7ba34d3ffd]
    mysqld(_Z23mysql_rm_table_no_locksP3THDP10TABLE_LISTbbbb+0x568)[0x7f7ba3417f78]
    mysqld(_Z11mysql_rm_dbP3THDPcbb+0x8aa)[0x7f7ba339780a]
    mysqld(_Z21mysql_execute_commandP3THD+0x394c)[0x7f7ba33b886c]
    mysqld(_Z11mysql_parseP3THDPcjP12Parser_state+0x10f)[0x7f7ba33bb28f]
    mysqld(_Z16dispatch_command19enum_server_commandP3THDPcj+0x1380)[0x7f7ba33bc6e0]
    mysqld(_Z24do_handle_one_connectionP3THD+0x1bd)[0x7f7ba346119d]
    mysqld(handle_one_connection+0x50)[0x7f7ba3461200]
    /lib/x86_64-linux-gnu/libpthread.so.0(+0x7e9a)[0x7f7ba2218e9a]
    /lib/x86_64-linux-gnu/libc.so.6(clone+0x6d)[0x7f7ba1949cbd]

    Trying to get some variables.
    Some pointers may be invalid and cause the dump to abort.
    Query (7f7b7c004b60): is an invalid pointer
    Connection ID (thread ID): 1
    Status: NOT_KILLED

    The manual page at http://dev.mysql.com/doc/mysql/en/crashing.html contains information that should help you find out what is causing the crash.

    --Dean
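    Two hedged observations on the log above (suggestions, not a confirmed diagnosis): the "Incorrect definition of table mysql.proc" line is the classic signature of system tables that were never migrated to the 5.5 layout, so running mysql_upgrade -u root -p while the server is up under innodb_force_recovery=1 may clear that part. For the development database itself, the assertion in fsp0fsp.c during the DROP points at tablespace corruption; the usual escalation is to salvage whatever is readable with mysqldump --force molex_app_development > dev.sql at the lowest force_recovery level that stays up, then drop and recreate the database once the server is healthy again.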

    Read the article

  • Xen hipervisor 4.1 Kernel Panic on Ubuntu 12.04

    - by rkmax
    I have a fresh Ubuntu 12.04.1 amd64 server install following this guide. I used the LVM option across the whole disk and made 2 LVs:

    /dev/mapper/vg-root  /     (80GB)
    vg-swap              swap  (4GB)

    I then installed Xen with apt-get install xen-hypervisor-4.1-amd64, configured /etc/default/grub as in the guide, and added GRUB_CMDLINE_XEN_DEFAULT="dom0_mem=768M". After all this I ran update-grub and rebooted. But when I try to boot with Xen 4.1-amd64, I always get a kernel panic with the message:

    Domain-0 allocation is too small for kernel image

    My questions are: what is this error about, and where can I grow this allocation to avoid it?

    grub.cfg:

    menuentry 'Ubuntu GNU/Linux, with Xen 4.1-amd64 and Linux 3.2.0-29-generic' --class ubuntu --class gnu-linux --class gnu --class os --class xen {
        insmod part_gpt
        insmod ext2
        set root='(hd0,gpt2)'
        search --no-floppy --fs-uuid --set=root 3541e241-7f39-4ebe-8d99-c5306294c266
        echo 'Loading Xen 4.1-amd64 ...'
        multiboot /xen-4.1-amd64.gz placeholder dom0_mem=768M
        echo 'Loading Linux 3.2.0-29-generic ...'
        module /vmlinuz-3.2.0-29-generic placeholder root=/dev/mapper/backup--xen-root ro rootdelay=180
        echo 'Loading initial ramdisk ...'
        module /initrd.img-3.2.0-29-generic
    }

    Note: I've followed this guide too
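    On the error itself (general Xen behaviour, hedged since this exact box can't be tested here): "Domain-0 allocation is too small for kernel image" means the memory reserved for dom0 via dom0_mem cannot hold the unpacked kernel plus initramfs, so the hypervisor gives up before dom0 even boots. The allocation is grown in the same place it was set, /etc/default/grub, for example GRUB_CMDLINE_XEN_DEFAULT="dom0_mem=1024M" (1024M is an illustrative value, not a known-good one for this machine), followed by update-grub and a reboot; the change can be verified afterwards on the multiboot line of grub.cfg.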

    Read the article

  • r1soft agent is failing with the error: "write error while sending code: Broken pipe"

    - by curiousguy
    I have an Ubuntu 10.04.4 LTS server with the r1soft agent installed on it. Recently, the backups have been failing with the following error:

    write error while sending code: Broken pipe

    I have reinstalled the buagent, but to no avail. On checking the server logs, I could see the following errors listed:

    # tail -f /var/log/messages |grep -i buagent
    Nov 17 03:35:06 microscope buagent: Need to back up 126 sectors
    Nov 17 03:35:06 microscope buagent: (Righteous Backup Linux Agent) 1.79.0 build 12433
    Nov 17 03:35:06 microscope buagent: allowing control from backup server (10.128.136.195) with valid RSA key
    Nov 17 03:35:06 microscope buagent: allowing control from backup server (10.128.136.201) with valid RSA key
    Nov 17 03:35:06 microscope buagent: sending auth challenge for allowed host at (10.128.136.201) port (47890)
    Nov 17 03:35:06 microscope buagent: host (10.128.136.201) port (47890) authentication successful
    Nov 17 03:35:06 microscope buagent: Backup request accepted. Starting backup.
    Nov 17 03:35:06 microscope buagent: Snapshot completed in 0.010 seconds.
    Nov 17 03:45:03 microscope buagent: Error reading blocks from snapshot.
    Nov 17 03:45:03 microscope buagent: Reading blocks failed
    Nov 17 03:45:03 microscope buagent: error backup aborted
    Nov 17 03:45:03 microscope buagent: backup failed on agent closing connection
    Nov 17 03:45:03 microscope buagent: Backup failed.
    Nov 17 03:45:03 microscope buagent: write error while sending code: Broken pipe (32)
    Nov 17 03:45:03 microscope buagent: tell child write failed

    I tried changing the 'Timeout' and 'DiskAsPartition' values in the /etc/buagent/agent_config file, but no luck. I also verified that a proper route is added to the backup server. The agent is also running fine. Am I missing anything? Any help would be much appreciated. Note: CDP 2.0 is installed on the backup server.

    Read the article

  • saslauthd authentication error

    - by James
    My server has developed an unexpected problem where I am unable to connect from a mail client. I've looked at the server logs, and the only thing that looks to identify a problem are events like the following:

    Nov 23 18:32:43 hig3 dovecot: imap-login: Login: user=, method=PLAIN, rip=xxxxxxxx, lip=xxxxxxx, TLS
    Nov 23 18:32:55 hig3 postfix/smtpd[11653]: connect from xxxxxxx.co.uk[xxxxxxx]
    Nov 23 18:32:55 hig3 postfix/smtpd[11653]: warning: SASL authentication failure: cannot connect to saslauthd server: No such file or directory
    Nov 23 18:32:55 hig3 postfix/smtpd[11653]: warning: xxxxxxx.co.uk[xxxxxxxx]: SASL LOGIN authentication failed: generic failure
    Nov 23 18:32:56 hig3 postfix/smtpd[11653]: lost connection after AUTH from xxxxxxx.co.uk[xxxxxxx]
    Nov 23 18:32:56 hig3 postfix/smtpd[11653]: disconnect from xxxxxxx.co.uk[xxxxxxx]

    The problem is unusual, because just half an hour previously at my office, I was not being prompted for a correct username and password in my mail client. I haven't made any changes to the server, so I can't understand what would have happened to make this error occur. Searches for the error messages yield various results, with 'fixes' that I'm uncertain of (I obviously don't want to make it worse or fix something that isn't broken). When I run testsaslauthd -u xxxxx -p xxxxxx I also get the following result:

    connect() : No such file or directory

    But when I run testsaslauthd -u xxxxx -p xxxxxx -f /var/spool/postfix/var/run/saslauthd/mux -s smtp I get:

    0: OK "Success."

    I found those commands on another forum and am not entirely sure what they mean, but I'm hoping they might give an indication of where the problem might lie. If it makes any difference, I'm running Ubuntu 10.04.1, Postfix 2.7.0 and Webmin/Virtualmin.
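    Two things that commonly explain exactly this combination on Ubuntu (hedged, since the configs aren't shown): Postfix's smtpd runs chrooted under /var/spool/postfix, so saslauthd's socket must live at /var/spool/postfix/var/run/saslauthd, which the second, successful testsaslauthd invocation suggests it does; and the postfix user must be allowed to open that socket, which is granted by membership of the sasl group. Checking that /etc/default/saslauthd has OPTIONS="-c -m /var/spool/postfix/var/run/saslauthd", running sudo adduser postfix sasl, and then restarting saslauthd and postfix is the usual repair sequence.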

    Read the article

  • moving plone or recovering data

    - by Atilla Filiz
    I had a simple plone site running on my Ubuntu 8.10 box, set up by following the instructions at https://help.ubuntu.com/community/forum/server/Plone. Now the computer has a from-scratch Ubuntu 9.04 install, and I want to get the site up and running. As far as I understand, several python libraries from 9.04 are not compatible with plone. I tried installing plone3-site from the ubuntu repos, and I got this:

    $bin/instance start
    Traceback (most recent call last):
      File "bin/instance", line 103, in ?
        import plone.recipe.zope2instance.ctl
      File "/media/Robocup2009/robocup2009/eggs/plone.recipe.zope2instance-2.7-py2.4.egg/plone/recipe/zope2instance/__init__.py", line 16, in ?
        import zc.buildout
      File "/media/Robocup2009/robocup2009/eggs/zc.recipe.egg-1.1.0-py2.4.egg/zc/__init__.py", line 1, in ?
        __import__('pkg_resources').declare_namespace(__name__)
    ImportError: No module named pkg_resources

    I have python-pkg-resources installed. I actually don't care about plone, I just found it easy to use. My site is just a photo and video archive for several users, with low traffic but several gigabytes of data. What would you suggest? Is it easier to get plone working on my box and start my instance (how?), or to recover the content from the instance folder (how?) and set up another content management system (which?)
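    Two hedged pointers, since both halves of the question come up often: the traceback shows the buildout running Python 2.4 eggs, while python-pkg-resources installs into the system Python, so the missing setuptools has to be installed for the same python2.4 the instance uses (then re-run bootstrap.py with that interpreter and bin/buildout). And for recovery, a Plone/Zope instance keeps all of its content in a single ZODB file, var/filestorage/Data.fs inside the instance folder; copying that file into a freshly built Plone of the same version is the usual way to get the data back without fighting the old environment.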

    Read the article

  • Unable to get prosody running on Ubuntu 10.04 (lua issues)

    - by user90374
    All this is performed on an Ubuntu 10.04.4 LTS server. I installed Lua 5.1.4 following this procedure: http://ubuntuforums.org/showthread.php?t=1874860. I installed prosody with this command (after downloading the package): sudo dpkg -i prosody_0.8.2-1_i386.deb. After installation, I get the errors below. I have tried using luarocks and sudo apt-get install as suggested to fix them, but it still keeps showing me these errors:

    Selecting previously deselected package prosody.
    (Reading database ... 59416 files and directories currently installed.)
    Unpacking prosody (from prosody_0.8.2-1_i386.deb) ...
    Setting up prosody (0.8.2-1) ...
     * Starting Prosody XMPP Server prosody
    **************
    Prosody was unable to find luaexpat
    This package can be obtained in the following ways:
    Source: www[dot]keplerproject[dot]org/luaexpat/
    Debian/Ubuntu: sudo apt-get install liblua5.1-expat0
    luarocks: luarocks install luaexpat
    luaexpat is required for Prosody to run, so we will now exit.
    More help can be found on our website, at prosody[dot]im/doc/depends
    ************
    Prosody was unable to find luasocket
    This package can be obtained in the following ways:
    Source: www[dot]tecgraf[dot]puc-rio[dot]br/~diego/professional/luasocket/
    Debian/Ubuntu: sudo apt-get install liblua5.1-socket2
    luarocks: luarocks install luasocket
    luasocket is required for Prosody to run, so we will now exit.
    More help can be found on our website, at prosody[dot]im/doc/depends
    ************
    Prosody was unable to find LuaSec
    This package can be obtained in the following ways:
    Source: www[dot]inf[dot]puc-rio[dot]br/~brunoos/luasec/
    Debian/Ubuntu: prosody[dot]im/download/start#debian_and_ubuntu
    luarocks: luarocks install luasec
    SSL/TLS support will not be available
    More help can be found on our website, at prosody[dot]im/doc/depends
    [fail]
    invoke-rc.d: initscript prosody, action "start" failed.
    dpkg: error processing prosody (--install): subprocess installed post-installation script returned error exit status 1
    Processing triggers for man-db ...
    Processing triggers for ureadahead ...
    Errors were encountered while processing: prosody

    Thanks a lot for your patience and answers.
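    Given that the failure is the init script aborting for missing Lua libraries, and the error text itself names the Ubuntu packages, a plausible repair sequence (taken straight from the package names in the output, not tested on this box) is sudo apt-get install liblua5.1-expat0 liblua5.1-socket2 followed by sudo dpkg --configure -a to re-run prosody's interrupted post-installation step. If those packages are already present and the errors persist, a likely culprit is the source-built Lua 5.1.4 shadowing the distro's lua5.1 and its module search path, so checking which lua5.1 comes first on PATH is worth a look.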

    Read the article

  • Where is my CPU usage going?

    - by Josh
    My Ubuntu 10.04 Lucid virtual machine is saying it's at 100% CPU usage... but all I'm running is Thunderbird. According to top, CPU usage should be ~25.9%... How do I interpret this conflicting output from top?

    top - 13:55:26 up 3:35, 4 users, load average: 3.03, 2.59, 2.48
    Tasks: 178 total, 1 running, 177 sleeping, 0 stopped, 0 zombie
    Cpu(s): 16.0%us, 79.7%sy, 0.0%ni, 0.0%id, 0.0%wa, 1.3%hi, 3.0%si, 0.0%st
    Mem: 509364k total, 479108k used, 30256k free, 3092k buffers
    Swap: 2096440k total, 58380k used, 2038060k free, 225116k cached

      PID USER  PR  NI  VIRT  RES  SHR S %CPU %MEM   TIME+  COMMAND
     7708 jnet  20   0  480m 109m  17m S 18.4 22.1 21:59.14 thunderbird-bin
     4615 jnet  20   0  5488 1268 1040 S  2.3  0.2  5:00.03 nx-rootless-ses
     7124 jnet  20   0 56688  27m 4812 S  2.0  5.5  6:35.09 nxagent
     6724 nx    20   0  9628 1400  636 S  1.6  0.3  3:26.59 sshd
    30106 root  20   0  2544 1236  908 R  0.7  0.2  0:00.33 top
       19 root  20   0     0    0    0 S  0.3  0.0  0:22.45 ata/0
       38 root  20   0     0    0    0 S  0.3  0.0  0:05.53 scsi_eh_1
      345 root  20   0     0    0    0 S  0.3  0.0  0:04.72 kjournald
     1719 root  20   0  3260 1192  944 S  0.3  0.2  0:17.36 vmware-guestd
        1 root  20   0  2804 1356  940 S  0.0  0.3  0:01.99 init
        2 root  20   0     0    0    0 S  0.0  0.0  0:00.01 kthreadd
        3 root  RT   0     0    0    0 S  0.0  0.0  0:00.00 migration/0
        4 root  20   0     0    0    0 S  0.0  0.0  0:00.15 ksoftirqd/0
        5 root  RT   0     0    0    0 S  0.0  0.0  0:00.00 watchdog/0
    ...

    Specifically I'm referring to the fact that the CPU usage totals show 0% idle time:

    Cpu(s): 16.0%us, 79.7%sy, 0.0%ni, 0.0%id, 0.0%wa, 1.3%hi, 3.0%si, 0.0%st

    Yet when adding up the percentages in the %CPU column I get 25.9%, not 100%!
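    One way to read this (an interpretation of standard top semantics, not a diagnosis of this particular VM): the Cpu(s) summary line and the %CPU column measure different things. The summary includes kernel time (79.7%sy here) and interrupt time (hi/si), which is not charged to the user-space processes in the table, while the %CPU column only shows time attributed to each listed process. A huge sy share with no matching process usually means the work is happening inside the kernel, or, on a virtual machine, that the hypervisor is injecting overhead; tools like vmstat 1 or dmesg are the usual next step to see what the kernel is busy with.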

    Read the article

  • Why Virtual Box won't give me option to create 64 bits guests?

    - by Eduardo Born
    My host is 64-bit Windows 8.1. I downloaded the latest VirtualBox (4.3) and I'm trying to create a VM with a 64-bit Ubuntu OS (ubuntu-12.04.3-desktop-amd64). When I go to the New VM wizard, it doesn't give me the option to select "Ubuntu (x64)" as I have seen in other people's screenshots, only "Ubuntu". As a result, the ISO can't boot. I tried on another PC, and there VirtualBox offers the x64 variants for most listed OSes... Control Panel shows an x64 OS and x64 processor. My host laptop is a Sony Vaio VPCZ22UGX/N with an Intel® Core™ i7-2640M processor. CPU-Z shows VT-x is available on my processor, of course. Here is what I have tried so far:

    I enabled IO APIC as required in the docs.
    I have virtualization enabled in the BIOS. It works fine in VMware.
    Checked that Hyper-V is not running or even installed on my Windows. Same for VMware.
    I also tried running the command VBoxManage modifyvm [vmname] --longmode on for that VM, but no change.

    I think the issue is really that I can't select the x64 variant of the Ubuntu OS for that VM. Other people seem to indicate that's a requirement, but I don't get that option for some reason. I've spent a lot of time and can't find what's wrong... Anyone know what could be missing here? Thank you very much!! Eduardo
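    Two long-shot checks, offered only because everything obvious is already ruled out above: VT-x is claimed exclusively by whichever hypervisor grabs it first, so make sure no VMware VM or other VT-x user is active when VirtualBox starts; and on some laptop firmwares, toggling the virtualization setting only takes effect after a full power-off (cold boot) rather than a warm reboot, so the setting can appear enabled while remaining unusable until the machine has been fully powered down once.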

    Read the article

  • Speakers silent, headphones work in Ubuntu 9.04

    - by CarlF
    I'm running Ubuntu 9.04. Worked fine for months, then I rebooted yesterday after weeks of continuous operation. Now audio won't play through the speakers. The USB headset works fine, but the Conexant audio (CX20549) does not. Weirdly, it thinks it's playing. pavumeter shows appropriate levels, volume looks OK in alsamixer, but no sound. I did find this page: http://www.eugeneteplitsky.com/fixing-silent-pulseaudio-in-ubuntu-9-04/ Unfortunately the advice there doesn't help me. For one thing, the syntax for the alsa-base.conf file is apparently not actually documented anywhere. For another, my chipset isn't listed in the kernel.org docs he links to! EDIT: would upgrading to 9.10 help? Is there a major change in the audio subsystem between 9.04 and 9.10? Any suggestions? EDIT 2: This is stranger than I thought. Sound works normally in Xine, but is silent in Audacity, VLC and mplayer. What the?

    Read the article

  • Ubuntu 64bit Xen DomU Issues after upgrade from Karmic to Lucid

    - by Shoaibi
    I was upgrading my servers today and it all went fine except the last machine, which has the following issues. [Resolved using http://www.ndchost.com/wiki/server-administration/upgrade-ubuntu-pre-10.04#post-1004-upgradefinal-steps]

    No login prompt on console:

    Done.
    Begin: Mounting root file system... ...
    Begin: Running /scripts/local-top ... Done.
    [ 0.545705] blkfront: xvda: barriers enabled
    [ 0.546949] xvda: xvda1
    [ 0.549961] blkfront: xvde: barriers enabled
    [ 0.550619] xvde: xvde1 xvde2
    Begin: Running /scripts/local-premount ... Done.
    [ 0.870385] kjournald starting. Commit interval 5 seconds
    [ 0.870449] EXT3-fs: mounted filesystem with ordered data mode.
    Begin: Running /scripts/local-bottom ... Done.
    Done.
    Begin: Running /scripts/init-bottom ... Done.

    Also tried pressing ENTER and CTRL+C many times; no use.

    Resolved [/tmp was mounted as noexec, changing that fixed it]: I got errors when I tried to re-install udev in single user mode:

    Unpacking replacement udev ...
    Processing triggers for ureadahead ...
    ureadahead will be reprofiled on next reboot
    Processing triggers for man-db ...
    Setting up udev (151-12.1) ...
    udev start/running, process 1003
    Removing `local diversion of /sbin/udevadm to /sbin/udevadm.upgrade'
    update-initramfs: deferring update (trigger activated)
    Processing triggers for initramfs-tools ...
    update-initramfs: Generating /boot/initrd.img-2.6.32-25-server
    /usr/sbin/mkinitramfs: 329: /tmp/mkinitramfs_yuuTSc/scripts/local-premount/fixrtc: Permission denied
    /usr/sbin/mkinitramfs: 329: /tmp/mkinitramfs_yuuTSc/scripts/local-premount/ntfs_3g: Permission denied
    /usr/sbin/mkinitramfs: 329: /tmp/mkinitramfs_yuuTSc/scripts/local-premount/resume: Permission denied
    /usr/sbin/mkinitramfs: 329: /tmp/mkinitramfs_yuuTSc/scripts/nfs-top/udev: Permission denied
    /usr/sbin/mkinitramfs: 329: /tmp/mkinitramfs_yuuTSc/scripts/panic/console_setup: Permission denied
    /usr/sbin/mkinitramfs: 329: /tmp/mkinitramfs_yuuTSc/scripts/init-top/all_generic_ide: Permission denied
    /usr/sbin/mkinitramfs: 329: /tmp/mkinitramfs_yuuTSc/scripts/init-top/blacklist: Permission denied
    /usr/sbin/mkinitramfs: 329: /tmp/mkinitramfs_yuuTSc/scripts/init-top/udev: Permission denied
    /usr/sbin/mkinitramfs: 329: /tmp/mkinitramfs_yuuTSc/scripts/init-bottom/udev: Permission denied
    /usr/sbin/mkinitramfs: 329: /tmp/mkinitramfs_yuuTSc/scripts/local-bottom/ntfs_3g: Permission denied
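    For anyone who lands here with the same mkinitramfs "Permission denied" wall: as the resolution note above says, the cause was /tmp mounted noexec, so mkinitramfs could not execute its unpacked hook scripts. A quick way to confirm and work around it during an upgrade (standard mount usage, nothing specific to this post) is mount | grep /tmp to check the flags, then sudo mount -o remount,exec /tmp before re-running the failed step; noexec can be restored afterwards if local policy requires it.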

    Read the article

  • Is ffmpeg incorrectly interpreting .aif files?

    - by marue
    On an Ubuntu 10.04 server I installed the ffmpeg packages with apt. ffmpeg works afterwards and does what it should. Almost. For testing purposes I uploaded a few audio files. One of them, an .aif file, is not being correctly interpreted. While on my workhorse (Mac Snow Leopard) ffmpeg reports the format as:

    Stream #0.0: Audio: pcm_s24be, 44100 Hz, 2 channels, s32, 2116 kb/s

    my Ubuntu server says it is:

    Stream #0.0: Audio: pcm_s24be, 44100 Hz, stereo, s16, 2116 kb/s

    which is the wrong bit depth. Ubuntu then fails to convert the file with the error message:

    [pcm_s24be @ 0xcd4b580]invalid PCM packet
    Error while decoding stream #0.0

    which is certainly not true. The file is perfectly valid. Are there any known issues with ffmpeg interpreting the aif format? How can I find out which version of the aif codec ffmpeg is using? Any ideas how to approach this issue?

    ffprobe output:

    FFprobe version SVN-r20090707, Copyright (c) 2007-2009 Stefano Sabatini
      libavutil 49.15. 0 / 49.15. 0
      libavcodec 52.20. 0 / 52.20. 1
      libavformat 52.31. 0 / 52.31. 0
      built on Jan 20 2010 00:13:01, gcc: 4.4.3 20100116 (prerelease)
    Input #0, aiff, from 'testfile.aif':
      Duration: 00:00:04.00, start: 0.000000, bitrate: 2117 kb/s
      Stream #0.0: Audio: pcm_s24be, 44100 Hz, stereo, s16, 2116 kb/s

    update 2: Forcing the conversion with -sample_fmt s32 doesn't change anything. Strange thing is: even without using -sample_fmt s32, I just realized that the conversion is working and creates valid audio files. There is just the error message from above.

    Read the article

  • Why java -version returning a different version than the one defined in JAVA_HOME?

    - by Shekhar
    I am trying to set JAVA_HOME on Ubuntu. I have copied JDK 1.7 into /usr/lib/jvm and set JAVA_HOME in the /etc/profile file. The contents of the /usr/lib/jvm folder are as follows:

    shekhar@ubuntu:~$ ls /usr/lib/jvm/
    default-java        java-1.6.0-openjdk       java-6-openjdk         java-6-openjdk-i386  jdk1.7.0_01
    java-1.5.0-gcj-4.6  java-1.6.0-openjdk-i386  java-6-openjdk-common  java-7-openjdk-i386

    and the last few lines of the /etc/profile file are as follows:

    export JAVA_HOME=/usr/lib/jvm/jdk1.7.0_01
    export PATH=$PATH:$JAVA_HOME/bin

    After finishing all this, when I run the java -version command I get the following output:

    shekhar@ubuntu:~$ java -version
    java version "1.6.0_24"
    OpenJDK Runtime Environment (IcedTea6 1.11.4) (6b24-1.11.4-1ubuntu0.12.04.1)
    OpenJDK Server VM (build 20.0-b12, mixed mode)

    and when I run the ls -lah command I get the following output:

    shekhar@ubuntu:~$ ls -lah /usr/bin/java
    lrwxrwxrwx 1 root root 22 Sep 29 09:58 /usr/bin/java -> /etc/alternatives/java
    shekhar@ubuntu:~$ ls -lah /etc/alternatives/java
    lrwxrwxrwx 1 root root 45 Sep 29 09:58 /etc/alternatives/java -> /usr/lib/jvm/java-6-openjdk-i386/jre/bin/java

    Can anyone please tell me what I am missing? Why is Ubuntu still pointing to OpenJDK and not to my JDK 7?

    PS: I have seen this similar question and its answers, but that question is about Windows, not Ubuntu, so I am reposting this similar question for Ubuntu.
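    What the two listings show (an explanation, plus a hedged fix using standard Ubuntu tooling): java is resolved through /usr/bin/java -> /etc/alternatives/java, and /usr/bin comes earlier in PATH than the $JAVA_HOME/bin entry appended at the end, so the alternatives symlink wins no matter what JAVA_HOME says. Either register the new JDK with the alternatives system and select it:

    sudo update-alternatives --install /usr/bin/java java /usr/lib/jvm/jdk1.7.0_01/bin/java 2000
    sudo update-alternatives --config java

    or prepend instead of append in /etc/profile, so the JDK shadows /usr/bin/java:

    export PATH=$JAVA_HOME/bin:$PATH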

    Read the article

  • Ubuntu server crashes; need help figuring how to figure out why

    - by neezer
    I have a 768 Slice at slicehost.com running Ubuntu Server 8.04.2 LTS (hardy) with a LAMP stack on it that periodically crashes, though why I am not sure. From what I can tell, there is a process that basically goes rogue and consumes all the memory on the slice, suffocating all the other programs running until the whole thing comes to a grinding halt, and I have to do a hard reboot of the slice to get it back up and running again. I can't detect any pattern for this (it seems to happen about once a month, more or less). Here's a screenshot of my console during the last crash: I would assume that a possible cause might be a PHP script or an apache configuration rule that triggers the crash? How would I be able to find out which one is the offending one? I've checked and rechecked all my PHP scripts, and running them doesn't seem to trigger the crash. I've also been able to log on to my system during a crash and see what's running (with top), but I can't tell how the offending process was started, so I can't trace the root of the problem! I know my description is overly generic, but unfortunately my expertise in tracking down the source of these glitches is very limited. If you need any additional information about my system in order to help me figure this out, please let me know in the comments, and I will append it to the question. My only other lead as to the culprit here is Wordpress, which we have installed on this server. Here are the details: Wordpress 3.0.3 with the following plugins installed and activated: Addmarx - Bookmark/Share/Email Dropdown, Akismet, All in One SEO Pack, Animated Banners, Automatically publish highlights of any website, directly to your Blog, Broken Link Checker, CMS Dashboard, Collapsing Categories, Status Updater, SubHeading, Ultimate Google Analytics, VastSubCat, WP-CMS Post Control, and WP Super Cache
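    Two generic ways to catch the culprit next time (suggestions, not a diagnosis): after the next crash, search the logs for the kernel's OOM killer, e.g. grep -i "out of memory" /var/log/syslog, since a process that eats all memory on a small slice usually ends with oom-killer records naming it; and until it recurs, a cron job that snapshots the top memory consumers every minute, e.g. ps -eo pid,ppid,%mem,cmd --sort=-%mem | head >> /var/log/memwatch.log, leaves a trail showing what was growing in the minutes before the hang.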

    Read the article

  • Cursor while loop returning every value but the last

    - by LordSnoutimus
    Hello, I am using a while loop to iterate through a cursor and then outputting the longitude and latitude values of every point within the database. For some reason it is not returning the last (or first, depending on whether I use Cursor.moveToLast) set of longitude and latitude values in the cursor. Here is my code:

    public void loadTrack() {
        SQLiteDatabase db1 = waypoints.getWritableDatabase();
        Cursor trackCursor = db1.query(TABLE_NAME, FROM, "trackidfk=1", null, null, null, ORDER_BY);
        trackCursor.moveToFirst();
        while (trackCursor.moveToNext()) {
            Double lat = trackCursor.getDouble(2);
            Double lon = trackCursor.getDouble(1);
            //overlay.addGeoPoint( new GeoPoint( (int)(lat*1E6), (int)(lon*1E6)));
            System.out.println(lon);
            System.out.println(lat);
        }
    }

    From this I am getting:

    04-02 15:39:07.416: INFO/System.out(10551): 3.0
    04-02 15:39:07.416: INFO/System.out(10551): 5.0
    04-02 15:39:07.416: INFO/System.out(10551): 4.0
    04-02 15:39:07.416: INFO/System.out(10551): 5.0
    04-02 15:39:07.416: INFO/System.out(10551): 5.0
    04-02 15:39:07.416: INFO/System.out(10551): 5.0
    04-02 15:39:07.416: INFO/System.out(10551): 4.0
    04-02 15:39:07.416: INFO/System.out(10551): 4.0
    04-02 15:39:07.416: INFO/System.out(10551): 3.0
    04-02 15:39:07.416: INFO/System.out(10551): 3.0
    04-02 15:39:07.416: INFO/System.out(10551): 2.0
    04-02 15:39:07.416: INFO/System.out(10551): 2.0
    04-02 15:39:07.493: INFO/System.out(10551): 1.0
    04-02 15:39:07.493: INFO/System.out(10551): 1.0

    7 sets of values, where I should be getting 8 sets. Thanks.
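    What's happening here is a standard Cursor pitfall rather than anything database-specific: moveToFirst() parks the cursor on row 1, and the first moveToNext() then steps past it before the loop body ever runs, so the first row is silently dropped. A freshly returned Cursor is already positioned before the first row, which makes the moveToFirst() call unnecessary; a minimal corrected sketch:

    // A new Cursor starts *before* row 1, so the first moveToNext()
    // lands on row 1 -- no moveToFirst() call needed.
    Cursor trackCursor = db1.query(TABLE_NAME, FROM, "trackidfk=1", null, null, null, ORDER_BY);
    while (trackCursor.moveToNext()) {
        double lat = trackCursor.getDouble(2);
        double lon = trackCursor.getDouble(1);
        System.out.println(lon);
        System.out.println(lat);
    }
    trackCursor.close(); // release the cursor when done

    (Keeping moveToFirst() would instead call for a do { ... } while (trackCursor.moveToNext()); loop, which visits the current row before advancing.)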

    Read the article

  • HTTP crawler in Erlang

    - by ctp
    I'm coding a simple HTTP crawler, but I have an issue running the code at the bottom. I'm requesting 50 URLs and get the content of 20+ back. I've generated a few files, each 150 kB in size, to test the crawler. So I think the 20+ responses are limited by the bandwidth? BUT: how do I tell the Erlang snippet not to quit until the last file is fetched? The test data server is online, so please try the code out; any hints are welcome :)

    -module(crawler).
    -define(BASE_URL, "http://46.4.117.69/").
    -export([start/0, send_reqs/0, do_send_req/1]).

    start() ->
        ibrowse:start(),
        proc_lib:spawn(?MODULE, send_reqs, []).

    to_url(Id) ->
        ?BASE_URL ++ integer_to_list(Id).

    fetch_ids() ->
        lists:seq(1, 50).

    send_reqs() ->
        spawn_workers(fetch_ids()).

    spawn_workers(Ids) ->
        lists:foreach(fun do_spawn/1, Ids).

    do_spawn(Id) ->
        proc_lib:spawn_link(?MODULE, do_send_req, [Id]).

    do_send_req(Id) ->
        io:format("Requesting ID ~p ... ~n", [Id]),
        Result = (catch ibrowse:send_req(to_url(Id), [], get, [], [], 10000)),
        case Result of
            {ok, Status, _H, B} ->
                io:format("OK -- ID: ~2..0w -- Status: ~p -- Content length: ~p~n", [Id, Status, length(B)]);
            Err ->
                io:format("ERROR -- ID: ~p -- Error: ~p~n", [Id, Err])
        end.

    That's the output:

    Requesting ID 1 ...
    Requesting ID 2 ...
    Requesting ID 3 ...
    Requesting ID 4 ...
    Requesting ID 5 ...
    Requesting ID 6 ...
    Requesting ID 7 ...
    Requesting ID 8 ...
    Requesting ID 9 ...
    Requesting ID 10 ...
    Requesting ID 11 ...
    Requesting ID 12 ...
    Requesting ID 13 ...
    Requesting ID 14 ...
    Requesting ID 15 ...
    Requesting ID 16 ...
    Requesting ID 17 ...
    Requesting ID 18 ...
    Requesting ID 19 ...
    Requesting ID 20 ...
    Requesting ID 21 ...
    Requesting ID 22 ...
    Requesting ID 23 ...
    Requesting ID 24 ...
    Requesting ID 25 ...
    Requesting ID 26 ...
    Requesting ID 27 ...
    Requesting ID 28 ...
    Requesting ID 29 ...
    Requesting ID 30 ...
    Requesting ID 31 ...
    Requesting ID 32 ...
    Requesting ID 33 ...
    Requesting ID 34 ...
    Requesting ID 35 ...
    Requesting ID 36 ...
    Requesting ID 37 ...
    Requesting ID 38 ...
    Requesting ID 39 ...
    Requesting ID 40 ...
    Requesting ID 41 ...
    Requesting ID 42 ...
    Requesting ID 43 ...
    Requesting ID 44 ...
    Requesting ID 45 ...
    Requesting ID 46 ...
    Requesting ID 47 ...
    Requesting ID 48 ...
    Requesting ID 49 ...
    Requesting ID 50 ...
    OK -- ID: 49 -- Status: "200" -- Content length: 150000
    OK -- ID: 47 -- Status: "200" -- Content length: 150000
    OK -- ID: 50 -- Status: "200" -- Content length: 150000
    OK -- ID: 17 -- Status: "200" -- Content length: 150000
    OK -- ID: 48 -- Status: "200" -- Content length: 150000
    OK -- ID: 45 -- Status: "200" -- Content length: 150000
    OK -- ID: 46 -- Status: "200" -- Content length: 150000
    OK -- ID: 10 -- Status: "200" -- Content length: 150000
    OK -- ID: 09 -- Status: "200" -- Content length: 150000
    OK -- ID: 19 -- Status: "200" -- Content length: 150000
    OK -- ID: 13 -- Status: "200" -- Content length: 150000
    OK -- ID: 21 -- Status: "200" -- Content length: 150000
    OK -- ID: 16 -- Status: "200" -- Content length: 150000
    OK -- ID: 27 -- Status: "200" -- Content length: 150000
    OK -- ID: 03 -- Status: "200" -- Content length: 150000
    OK -- ID: 23 -- Status: "200" -- Content length: 150000
    OK -- ID: 29 -- Status: "200" -- Content length: 150000
    OK -- ID: 14 -- Status: "200" -- Content length: 150000
    OK -- ID: 18 -- Status: "200" -- Content length: 150000
    OK -- ID: 01 -- Status: "200" -- Content length: 150000
    OK -- ID: 30 -- Status: "200" -- Content length: 150000
    OK -- ID: 40 -- Status: "200" -- Content length: 150000
    OK -- ID: 05 -- Status: "200" -- Content length: 150000

    Update: thanks stemm for the hint with the wait_workers. I've combined your code and mine, but same behaviour :(

    -module(crawler).
    -define(BASE_URL, "http://46.4.117.69/").
    -export([start/0, send_reqs/0, do_send_req/2]).

    start() ->
        ibrowse:start(),
        proc_lib:spawn(?MODULE, send_reqs, []).

    to_url(Id) ->
        ?BASE_URL ++ integer_to_list(Id).

    fetch_ids() ->
        lists:seq(1, 50).

    send_reqs() ->
        spawn_workers(fetch_ids()).

    spawn_workers(Ids) ->
        %% collect reference to each worker
        Refs = [ do_spawn(Id) || Id <- Ids ],
        %% wait for response from each worker
        wait_workers(Refs).

    wait_workers(Refs) ->
        lists:foreach(fun receive_by_ref/1, Refs).

    receive_by_ref(Ref) ->
        %% receive message only from worker with specific reference
        receive
            {Ref, done} ->
                done
        end.

    do_spawn(Id) ->
        Ref = make_ref(),
        proc_lib:spawn_link(?MODULE, do_send_req, [Id, {self(), Ref}]),
        Ref.

    do_send_req(Id, {Pid, Ref}) ->
        io:format("Requesting ID ~p ... ~n", [Id]),
        Result = (catch ibrowse:send_req(to_url(Id), [], get, [], [], 10000)),
        case Result of
            {ok, Status, _H, B} ->
                io:format("OK -- ID: ~2..0w -- Status: ~p -- Content length: ~p~n", [Id, Status, length(B)]),
                %% send message that work is done
                Pid ! {Ref, done};
            Err ->
                io:format("ERROR -- ID: ~p -- Error: ~p~n", [Id, Err]),
                %% repeat request if there was error while fetching a page,
                do_send_req(Id, {Pid, Ref})
                %% or - if you don't want to repeat request, put there:
                %% Pid ! {Ref, done}
        end.
    Running the crawler works fine for a handful of files, but then it doesn't even fetch entire files (each 150000 bytes in size): the crawler fetches some files only partially, as the following web server log shows :(

    82.114.62.14 - - [13/Sep/2012:15:17:00 +0200] "GET /10 HTTP/1.1" 200 150000 "-" "-"
    82.114.62.14 - - [13/Sep/2012:15:17:00 +0200] "GET /1 HTTP/1.1" 200 150000 "-" "-"
    82.114.62.14 - - [13/Sep/2012:15:17:00 +0200] "GET /3 HTTP/1.1" 200 150000 "-" "-"
    82.114.62.14 - - [13/Sep/2012:15:17:00 +0200] "GET /8 HTTP/1.1" 200 150000 "-" "-"
    82.114.62.14 - - [13/Sep/2012:15:17:00 +0200] "GET /39 HTTP/1.1" 200 150000 "-" "-"
    82.114.62.14 - - [13/Sep/2012:15:17:00 +0200] "GET /7 HTTP/1.1" 200 150000 "-" "-"
    82.114.62.14 - - [13/Sep/2012:15:17:00 +0200] "GET /6 HTTP/1.1" 200 150000 "-" "-"
    82.114.62.14 - - [13/Sep/2012:15:17:00 +0200] "GET /2 HTTP/1.1" 200 150000 "-" "-"
    82.114.62.14 - - [13/Sep/2012:15:17:00 +0200] "GET /5 HTTP/1.1" 200 150000 "-" "-"
    82.114.62.14 - - [13/Sep/2012:15:17:00 +0200] "GET /50 HTTP/1.1" 200 150000 "-" "-"
    82.114.62.14 - - [13/Sep/2012:15:17:00 +0200] "GET /9 HTTP/1.1" 200 150000 "-" "-"
    82.114.62.14 - - [13/Sep/2012:15:17:00 +0200] "GET /44 HTTP/1.1" 200 150000 "-" "-"
    82.114.62.14 - - [13/Sep/2012:15:17:00 +0200] "GET /38 HTTP/1.1" 200 150000 "-" "-"
    82.114.62.14 - - [13/Sep/2012:15:17:00 +0200] "GET /47 HTTP/1.1" 200 150000 "-" "-"
    82.114.62.14 - - [13/Sep/2012:15:17:00 +0200] "GET /49 HTTP/1.1" 200 150000 "-" "-"
    82.114.62.14 - - [13/Sep/2012:15:17:00 +0200] "GET /43 HTTP/1.1" 200 150000 "-" "-"
    82.114.62.14 - - [13/Sep/2012:15:17:00 +0200] "GET /37 HTTP/1.1" 200 150000 "-" "-"
    82.114.62.14 - - [13/Sep/2012:15:17:00 +0200] "GET /46 HTTP/1.1" 200 150000 "-" "-"
    82.114.62.14 - - [13/Sep/2012:15:17:00 +0200] "GET /48 HTTP/1.1" 200 150000 "-" "-"
    82.114.62.14 - - [13/Sep/2012:15:17:00 +0200] "GET /36 HTTP/1.1" 200 150000 "-" "-"
    82.114.62.14 - - [13/Sep/2012:15:17:01 +0200] "GET /42 HTTP/1.1" 200 150000 "-" "-"
    82.114.62.14 - - [13/Sep/2012:15:17:01 +0200] "GET /41 HTTP/1.1" 200 150000 "-" "-"
    82.114.62.14 - - [13/Sep/2012:15:17:01 +0200] "GET /45 HTTP/1.1" 200 150000 "-" "-"
    82.114.62.14 - - [13/Sep/2012:15:17:01 +0200] "GET /17 HTTP/1.1" 200 150000 "-" "-"
    82.114.62.14 - - [13/Sep/2012:15:17:01 +0200] "GET /35 HTTP/1.1" 200 150000 "-" "-"
    82.114.62.14 - - [13/Sep/2012:15:17:01 +0200] "GET /16 HTTP/1.1" 200 150000 "-" "-"
    82.114.62.14 - - [13/Sep/2012:15:17:01 +0200] "GET /15 HTTP/1.1" 200 17020 "-" "-"
    82.114.62.14 - - [13/Sep/2012:15:17:01 +0200] "GET /21 HTTP/1.1" 200 120360 "-" "-"
    82.114.62.14 - - [13/Sep/2012:15:17:01 +0200] "GET /40 HTTP/1.1" 200 117600 "-" "-"
    82.114.62.14 - - [13/Sep/2012:15:17:01 +0200] "GET /34 HTTP/1.1" 200 60660 "-" "-"

    Any hints are welcome. I have no clue what's going wrong there :(
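    A hypothesis that fits both logs (from reading the code, not from reproducing the run): every request carries a 10000 ms timeout as the last argument of ibrowse:send_req/6, and with 50 parallel 150 kB downloads sharing the same line, the slower transfers can hit that limit mid-body. ibrowse then aborts the socket, which on the server side shows up as a 200 with fewer bytes actually delivered, exactly the truncated entries above. Raising the timeout, or capping concurrency (ibrowse has a per-host max_sessions setting), should let every file complete; note that the retry branch in do_send_req/2 will also silently re-request any ID that timed out.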

    Read the article

  • C++14: the final draft has been published, discover its contents and what the new specification brings

    C++14: the final draft has been published. Discover its contents. The final draft of the new C++14 version has been published. The second piece of good news is that the most widely used compilers already support this new version. Here is what it brings us: N3323 - Tweaks to certain contextual conversions in C++: improvements to the conversion behaviour when a class has a single conversion operator, where converting a value of the class to the type specified by the context is possible...

    Read the article

  • HAProxy error: Some configuration options require full privileges, so global.uid cannot be changed

    - by Athena Wisdom
    After adding the following line to /etc/haproxy/haproxy.cfg as part of creating a transparent proxy:

    source 0.0.0.0 usesrc clientip

    restarting haproxy starts giving an error:

    ~# service haproxy reload
     * Reloading haproxy haproxy
    [ALERT] 230/153724 (1140) : [/usr/sbin/haproxy.main()] Some configuration options require full privileges, so global.uid cannot be changed.

    I'm already running service haproxy reload as root. What else do we have to do? Thank you!
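    A hedged reading of the alert (it matches how usesrc is documented, though untested against this exact config): usesrc requires HAProxy to retain full root privileges at runtime, while the stock Debian/Ubuntu haproxy.cfg drops them through user/group (or uid/gid) lines in the global section, and the alert says precisely that global.uid can no longer be honoured. Commenting out the privilege-dropping lines so the daemon keeps running as root is the usual companion to usesrc; note that transparent client-IP binding also needs TPROXY support in the kernel plus matching iptables/routing rules to work end to end.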

    Read the article

  • ERR_INCOMPLETE_CHUNKED_ENCODING apache 2.4

    - by Bujanca Mihai
    I upgraded my Ubuntu server to 14.04 and Apache 2.4.7. Now my images don't load, and the console yields net::ERR_INCOMPLETE_CHUNKED_ENCODING. Also, I can sometimes see some of the images load for a little while (1 sec max) and then they disappear.

    .htaccess:

    RewriteEngine On

    # Serve the favicon file from img folder
    RewriteCond %{REQUEST_URI} ^/favicon.ico$
    RewriteRule ^(.*)$ /img/$1 [NC,L]

    # Redirect HTTP traffic to WWW subdomain
    RewriteCond %{HTTPS} off [NC]
    RewriteCond %{HTTP_HOST} !^www\. [NC]
    RewriteRule ^(.*)$ http://www.%{HTTP_HOST}/$1 [R=301,L]

    # Redirect HTTPS traffic to WWW subdomain
    RewriteCond %{HTTPS} on [NC]
    RewriteCond %{HTTP_HOST} !^www\. [NC]
    RewriteRule ^(.*)$ https://www.%{HTTP_HOST}/$1 [R=301,L]

    # Auto Versioning rules
    RewriteCond %{REQUEST_FILENAME} !-s
    RewriteRule ^(.*)\.[\d]+\.(css|js)$ $1.$2 [L]

    # Default Zend rewrite rules
    RewriteCond %{REQUEST_FILENAME} -s [OR]
    RewriteCond %{REQUEST_FILENAME} -l [OR]
    RewriteCond %{REQUEST_FILENAME} -d
    RewriteRule ^.*$ - [NC,L]
    RewriteRule ^.*$ index.php [NC,L]

    VHost:

    <VirtualHost *:80>
        ServerAdmin admin@localhost
        ServerName localhost
        DocumentRoot /home/mihai/ARTD/www/public/website
        # Omit this in production environment
        SetEnv APPLICATION_ENV local
        <Directory /home/mihai/ARTD/www/public/website >
            Options Indexes FollowSymLinks MultiViews
            AllowOverride All
            #Order deny,allow
            #Allow from all
            Require all granted
        </Directory>
        <IfModule mod_php5.c>
            php_value memory_limit 128M
            php_value upload_max_filesize 20M
            php_value post_max_size 20M
        </IfModule>
        ErrorLog /var/log/apache2/ARTD-error.log
        # Possible values include: debug, info, notice, warn, error, crit,
        # alert, emerg.
        LogLevel warn
        CustomLog /var/log/apache2/ARTD-access.log combined
    </VirtualHost>

    <IfModule mod_ssl.c>
        <VirtualHost *:443>
            ServerAdmin admin@localhost
            ServerName localhost
            DocumentRoot /home/mihai/ARTD/www/public/website
            # Omit this in production environment
            SetEnv APPLICATION_ENV local
            <Directory /home/mihai/ARTD/www/public/website >
                Options Indexes FollowSymLinks MultiViews
                AllowOverride All
                #Order deny,allow
                #Allow from all
                Require all granted
            </Directory>
            <IfModule mod_php5.c>
                php_value memory_limit 128M
                php_value upload_max_filesize 20M
                php_value post_max_size 20M
            </IfModule>
            ErrorLog /var/log/apache2/ARTD-ssl-error.log
            # Possible values include: debug, info, notice, warn, error, crit,
            # alert, emerg.
            LogLevel warn
            CustomLog /var/log/apache2/ARTD.log combined
            # SSL Engine Switch:
            # Enable/Disable SSL for this virtual host.
            SSLEngine on
            # A self-signed (snakeoil) certificate can be created by installing
            # the ssl-cert package. See
            # /usr/share/doc/apache2.2-common/README.Debian.gz for more info.
            # If both key and certificate are stored in the same file, only the
            # SSLCertificateFile directive is needed.
            SSLCertificateFile /etc/ssl/certs/ssl-cert-snakeoil.pem
            SSLCertificateKeyFile /etc/ssl/private/ssl-cert-snakeoil.key
            # Server Certificate Chain:
            # Point SSLCertificateChainFile at a file containing the
            # concatenation of PEM encoded CA certificates which form the
            # certificate chain for the server certificate. Alternatively
            # the referenced file can be the same as SSLCertificateFile
            # when the CA certificates are directly appended to the server
            # certificate for convinience.
            #SSLCertificateChainFile /etc/apache2/ssl.crt/server-ca.crt
            # Certificate Authority (CA):
            # Set the CA certificate verification path where to find CA
            # certificates for client authentication or alternatively one
            # huge file containing all of them (file must be PEM encoded)
            # Note: Inside SSLCACertificatePath you need hash symlinks
            # to point to the certificate files. Use the provided
            # Makefile to update the hash symlinks after changes.
            #SSLCACertificatePath /etc/ssl/certs/
            #SSLCACertificateFile /etc/apache2/ssl.crt/ca-bundle.crt
            # Certificate Revocation Lists (CRL):
            # Set the CA revocation path where to find CA CRLs for client
            # authentication or alternatively one huge file containing all
            # of them (file must be PEM encoded)
            # Note: Inside SSLCARevocationPath you need hash symlinks
            # to point to the certificate files. Use the provided
            # Makefile to update the hash symlinks after changes.
            #SSLCARevocationPath /etc/apache2/ssl.crl/
            #SSLCARevocationFile /etc/apache2/ssl.crl/ca-bundle.crl
            # Client Authentication (Type):
            # Client certificate verification type and depth. Types are
            # none, optional, require and optional_no_ca. Depth is a
            # number which specifies how deeply to verify the certificate
            # issuer chain before deciding the certificate is not valid.
            #SSLVerifyClient require
            #SSLVerifyDepth 10
            # Access Control:
            # With SSLRequire you can do per-directory access control based
            # on arbitrary complex boolean expressions containing server
            # variable checks and other lookup directives. The syntax is a
            # mixture between C and Perl. See the mod_ssl documentation
            # for more details.
            #<Location />
            #SSLRequire ( %{SSL_CIPHER} !~ m/^(EXP|NULL)/ \
            # and %{SSL_CLIENT_S_DN_O} eq "Snake Oil, Ltd." \
            # and %{SSL_CLIENT_S_DN_OU} in {"Staff", "CA", "Dev"} \
            # and %{TIME_WDAY} >= 1 and %{TIME_WDAY} <= 5 \
            # and %{TIME_HOUR} >= 8 and %{TIME_HOUR} <= 20 ) \
            # or %{REMOTE_ADDR} =~ m/^192\.76\.162\.[0-9]+$/
            #</Location>
            # SSL Engine Options:
            # Set various options for the SSL engine.
            # o FakeBasicAuth:
            # Translate the client X.509 into a Basic Authorisation. This means that
            # the standard Auth/DBMAuth methods can be used for access control. The
            # user name is the `one line' version of the client's X.509 certificate.
            # Note that no password is obtained from the user. Every entry in the user
            # file needs this password: `xxj31ZMTZzkVA'.
            # o ExportCertData:
            # This exports two additional environment variables: SSL_CLIENT_CERT and
            # SSL_SERVER_CERT. These contain the PEM-encoded certificates of the
            # server (always existing) and the client (only existing when client
            # authentication is used). This can be used to import the certificates
            # into CGI scripts.
            # o StdEnvVars:
            # This exports the standard SSL/TLS related `SSL_*' environment variables.
            # Per default this exportation is switched off for performance reasons,
            # because the extraction step is an expensive operation and is usually
            # useless for serving static content. So one usually enables the
            # exportation for CGI and SSI requests only.
            # o StrictRequire:
            # This denies access when "SSLRequireSSL" or "SSLRequire" applied even
            # under a "Satisfy any" situation, i.e. when it applies access is denied
            # and no other module can change it.
            # o OptRenegotiate:
            # This enables optimized SSL connection renegotiation handling when SSL
            # directives are used in per-directory context.
            #SSLOptions +FakeBasicAuth +ExportCertData +StrictRequire
            #<FilesMatch "\.(cgi|shtml|phtml|php)$">
            # SSLOptions +StdEnvVars
            #</FilesMatch>
            # SSL Protocol Adjustments:
            # The safe and default but still SSL/TLS standard compliant shutdown
            # approach is that mod_ssl sends the close notify alert but doesn't wait for
            # the close notify alert from client. When you need a different shutdown
            # approach you can use one of the following variables:
            # o ssl-unclean-shutdown:
            # This forces an unclean shutdown when the connection is closed, i.e. no
            # SSL close notify alert is send or allowed to received. This violates
            # the SSL/TLS standard but is needed for some brain-dead browsers. Use
            # this when you receive I/O errors because of the standard approach where
            # mod_ssl sends the close notify alert.
            # o ssl-accurate-shutdown:
            # This forces an accurate shutdown when the connection is closed, i.e. a
            # SSL close notify alert is send and mod_ssl waits for the close notify
            # alert of the client. This is 100% SSL/TLS standard compliant, but in
            # practice often causes hanging connections with brain-dead browsers. Use
            # this only for browsers where you know that their SSL implementation
            # works correctly.
            # Notice: Most problems of broken clients are also related to the HTTP
            # keep-alive facility, so you usually additionally want to disable
            # keep-alive for those clients, too. Use variable "nokeepalive" for this.
            # Similarly, one has to force some clients to use HTTP/1.0 to workaround
            # their broken HTTP/1.1 implementation. Use variables "downgrade-1.0" and
            # "force-response-1.0" for this.
            #BrowserMatch ".*MSIE.*" \
            # nokeepalive ssl-unclean-shutdown \
            # downgrade-1.0 force-response-1.0
        </VirtualHost>
    </IfModule>

    logs:

    Apache/2.4.7 (Ubuntu) PHP/5.5.9-1ubuntu4.3 OpenSSL/1.0.1f (internal dummy connection)
    127.0.0.1 - - [25/Aug/2014:13:09:53 +0300] "GET /img/header/top-nav-separator.png HTTP/1.1" 200 462 "https://localhost/art" "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/34.0.1847.132 Safari/537.36"
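    A hedged place to start digging (general diagnostics, not a known fix for this setup): ERR_INCOMPLETE_CHUNKED_ENCODING means the connection closed before the final zero-length chunk arrived, i.e. Apache stopped sending mid-response. Watching the error log while reloading a broken image (tail -f /var/log/apache2/ARTD-error.log, or the -ssl variant over https) usually shows what died, and checking free disk space with df -h is worthwhile too, since a full partition is a classic cause of truncated responses right after an upgrade.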

    Read the article

  • MySQL permission errors

    - by dotancohen
    It seems that on a Ubuntu 14.04 machine the user mysql cannot access anything. It is not writing logs nor reading files. Witness: - bruno():mysql$ cat /etc/passwd | grep mysql mysql:x:116:127:MySQL Server,,,:/nonexistent:/bin/false - bruno():mysql$ sudo mysql_install_db Installing MySQL system tables... 140818 18:16:50 [ERROR] Can't read from messagefile '/usr/share/mysql/english/errmsg.sys' 140818 18:16:50 [ERROR] Aborting 140818 18:16:50 [Note] Installation of system tables failed! Examine the logs in /var/lib/mysql for more information. ...boilerplate trimmed... - bruno():mysql$ ls -la /usr/share/mysql/english/errmsg.sys -rw-r--r-- 1 root root 59535 Jul 29 13:40 /usr/share/mysql/english/errmsg.sys - bruno():mysql$ wc -l /usr/share/mysql/english/errmsg.sys 16 /usr/share/mysql/english/errmsg.sys Here we have seen that mysql cannot read /usr/share/mysql/english/errmsg.sys even though the permissions are open to read it, and in fact the regular login user can read the file (with wc). Additionally, MySQL is not writing any logs: - bruno():mysql$ ls -la /var/log/mysql total 8 drwxr-s--- 2 mysql adm 4096 Aug 18 16:10 . drwxrwxr-x 18 root syslog 4096 Aug 18 16:10 .. What might cause this user to not be able to access anything? What can I do about it?

    Read the article

  • Postfix : error: unsupported dictionary type: mysql

    - by flavio.troja
    I have a problem with Postfix:

    # tail -f /var/log/mail.err
    Aug 20 17:57:50 myserver postfix/smtpd[8243]: error: unsupported dictionary type: mysql
    Aug 20 17:57:50 myserver postfix/smtpd[8243]: error: unsupported dictionary type: mysql
    Aug 20 17:58:05 myserver postfix/smtpd[8244]: error: unsupported dictionary type: mysql
    Aug 20 17:58:05 myserver postfix/smtpd[8244]: error: unsupported dictionary type: mysql
    Aug 20 18:00:38 myserver postfix/smtpd[8277]: error: unsupported dictionary type: mysql
    Aug 20 18:00:38 myserver postfix/smtpd[8277]: error: unsupported dictionary type: mysql
    Aug 20 18:03:32 myserver postfix/smtpd[8320]: error: unsupported dictionary type: mysql
    Aug 20 18:03:32 myserver postfix/smtpd[8320]: error: unsupported dictionary type: mysql
    Aug 20 18:03:33 myserver postfix/trivial-rewrite[8322]: error: unsupported dictionary type: mysql
    Aug 20 18:03:33 myserver postfix/trivial-rewrite[8322]: error: unsupported dictionary type: mysql

    Any idea?
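    For what it's worth (the standard Debian/Ubuntu explanation of exactly this message): "unsupported dictionary type: mysql" means the running Postfix has no MySQL map support available, and on Ubuntu that support ships in a separate package. Installing it and restarting is usually the whole fix: sudo apt-get install postfix-mysql followed by sudo service postfix restart.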

    Read the article

  • Access ReverseProxy from iframe only

    - by soundsofpolaris
    I have OwnCloud set up, and it has a feature called External Site. It loads a site in an iframe, so you can stay in the website's shell. I have a page running on a reverse proxy. Is there any way to allow only the OwnCloud page to access the reverse proxy while blocking any other connections to it? So in this situation, the OwnCloud site is publicly accessible, but I don't want the reverse proxy to be. On Apache 2, Ubuntu 14.04, OwnCloud 7.
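    One structural point to keep in mind before reaching for firewall rules (reasoning about how iframes work, not OwnCloud-specific advice): the External Site iframe is fetched by each visitor's browser, so requests arrive at the reverse proxy from client IP addresses, never from the OwnCloud server itself, and an IP allow-list therefore cannot distinguish "via OwnCloud" from "direct". The practical compromises are authentication shared by both sites, or an Apache mod_rewrite check on the Referer header requiring the OwnCloud page as referrer, with the caveat that Referer is trivially spoofed and some browsers strip it.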

    Read the article
