Search Results

Search found 2369 results on 95 pages for 'bugs bunny'.

  • Xinerama creates a panning viewport

    - by iblue
    EDIT: I've created a bug report: https://bugs.freedesktop.org/show_bug.cgi?id=48458

    My setup: I have 4 monitors, 1920x1080, which are in portrait mode (rotated left). They are connected to two Radeon graphics cards. As usual, a picture says more than a thousand words.

    The problem: Everything works fine when Xinerama is disabled. But when I enable Xinerama, things get weird. When I move the mouse off the screen and return, the screen contents begin to move with the mouse, only on that monitor. It seems like the virtual display size does not match the real screen size, which activates a panning viewport. Any idea how to stop this?

    The video: I created a video to demonstrate the issue: http://www.youtube.com/watch?v=zq_XHji1P24

    This is my xorg.conf:

        Section "ServerLayout"
            ##################[ Evilness begins here ]#############
            Option "Xinerama" "on"    # <--- Makes it go b0rked!
            ##################[ End of all evil ]#############
            Identifier "BOFH Console of Doom"
            Screen 0 "Screen-0" 0 0
            Screen 1 "Screen-1" RightOf "Screen-0"
            Screen 2 "Screen-2" RightOf "Screen-1"
            Screen 3 "Screen-3" RightOf "Screen-2"
        EndSection

        Section "ServerFlags"
            Option "RandR" "false"
        EndSection

        Section "Module"
            Load "dbe"
            Load "dri"
            Load "extmod"
            Load "dri2"
            Load "record"
            Load "glx"
        EndSection

        Section "Monitor"
            Identifier "Monitor-0"
            Option "Rotate" "left"
        EndSection

        Section "Monitor"
            Identifier "Monitor-1"
            Option "Rotate" "left"
        EndSection

        Section "Monitor"
            Identifier "Monitor-2"
            Option "Rotate" "left"
        EndSection

        Section "Monitor"
            Identifier "Monitor-3"
            Option "Rotate" "left"
        EndSection

        Section "Device"
            Identifier "Radeon-0-0"
            Driver "radeon"
            BusID "PCI:9:0:0"
            Option "ZaphodHeads" "DVI-0"
            Screen 0
        EndSection

        Section "Device"
            Identifier "Radeon-0-1"
            Driver "radeon"
            BusID "PCI:9:0:0"
            Option "ZaphodHeads" "DVI-1"
            Screen 1
        EndSection

        Section "Device"
            Identifier "Radeon-1-0"
            Driver "radeon"
            BusID "PCI:4:0:0"
            Option "ZaphodHeads" "DVI-2"
            Screen 0
        EndSection

        Section "Device"
            Identifier "Radeon-1-1"
            Driver "radeon"
            BusID "PCI:4:0:0"
            Option "ZaphodHeads" "DVI-3"
            Screen 1
        EndSection

        Section "Screen"
            Identifier "Screen-0"
            Device "Radeon-0-0"
            Monitor "Monitor-0"
            DefaultDepth 24
            SubSection "Display"
                Viewport 0 0
                Depth 24
            EndSubSection
        EndSection

        Section "Screen"
            Identifier "Screen-1"
            Device "Radeon-0-1"
            Monitor "Monitor-1"
            DefaultDepth 24
            SubSection "Display"
                Viewport 0 0
                Depth 24
            EndSubSection
        EndSection

        Section "Screen"
            Identifier "Screen-2"
            Device "Radeon-1-0"
            Monitor "Monitor-2"
            DefaultDepth 24
            SubSection "Display"
                Viewport 0 0
                Depth 24
            EndSubSection
        EndSection

        Section "Screen"
            Identifier "Screen-3"
            Device "Radeon-1-1"
            Monitor "Monitor-3"
            DefaultDepth 24
            SubSection "Display"
                Viewport 0 0
                Depth 24
            EndSubSection
        EndSection
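
    For anyone else hitting this: panning normally activates when a screen's virtual size is larger than the mode of its output. A hedged thing to try (untested with this Zaphod setup, and the value assumes the rotated geometry of a 1920x1080 panel is 1080x1920) is to pin each screen's virtual size explicitly in its Display SubSection:

        SubSection "Display"
            Viewport 0 0
            Virtual 1080 1920    # assumed rotated size; leaves no area to pan over
            Depth 24
        EndSubSection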

  • What's needed in a complete ASP.NET environment?

    - by Christian W
    We have an ASP 3.0 application with a few ASP.NET (2.0) ditties mixed in. (Our long-term goal is to migrate everything to ASP.NET, but that's not important for this issue.) Our current test/deploy workflow is like this:

    1. Use Notepad++ or VS2008 to fix a bug/feature (depending on what I have open)
    2. Open my virtual test server
    3. Copy the fixed file over, either with Explorer or, if I can be bothered to open it, WinMerge
    4. Test that the fix works
    5. Close the virtual test server
    6. Connect to our host with VPN
    7. Use WinMerge to update the files necessary
    8. Pray to higher powers that the production environment is not so different that something bombs

    To make things worse, only I have access to my "test server", so I'm the only one testing it. I really want to make this a bit more robust; I even have a Subversion setup running, but I always forget to commit changes... and I don't even work in my checked-out folder, but in a copy of what is currently in production... Can someone recommend some good reading on deploying, testing, staging and stuff like that? I currently use VS2008 and want to use Subversion or Git (or any other free VCS). Since I'm the only developer, Team System is not really an option (cost-related). I have found myself developing an "improved" feature, only to find a bug in the same feature in the production system. And since my "improved" feature incorporated deleting some old functionality, I have to fix bugs directly in production... That's not a fun feeling... (I have inherited this system recently, so it's not directly my fault that it is like this ;) )
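
    As a hedged starting point on the version-control side (the repository URL and paths below are made up): the core habit is to edit only inside a checked-out working copy, commit every change, and deploy by exporting what was committed rather than hand-copying files.

        # one-time: make the working copy the place you edit, not a loose copy
        svn checkout https://svn.example.com/myapp/trunk C:/work/myapp

        # daily loop: edit in the working copy, then
        svn status                      # see what changed
        svn diff                        # review before committing
        svn commit -m "Fix order form bug"

        # on the test server: deploy a clean copy of exactly what was committed
        svn export --force https://svn.example.com/myapp/trunk D:/webroot/myapp-test

    With that in place, the production push in step 7 becomes the same export against the production webroot instead of a WinMerge session.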

  • Making libmagic/file detect .docx files

    - by Jonatan Littke
    As seen elsewhere, docx, xlsx and pptx files are ZIPs. When uploading them to my web application, file (via libmagic and python-magic) detects them as ZIP. I store the contents of the file as a blob in the database, but naturally I don't want to trust the user about what kind of file this is. So I would like to trust file to identify the type and automatically generate a filename during download. I know one can modify /etc/magic, but the format (magic(5)) is way too complicated for me. I found a bug report on the issue at Debian bugs, but since it's from 2008 it doesn't seem likely to be fixed any time soon. I guess my only other alternative is to indeed trust the user (but still store the contents as a blob) and only check the file extension based on the file name. This way I can disallow some extensions and allow others, and when the user re-downloads his file, he can have it in whatever way he uploaded it. But this solution is insecure if the file is shared with others, since you can simply rename the file to get it past the upload check. Any ideas? Lastly, I found a list of magic numbers for docx etc., but I'm unable to convert these into the magic(5) format.
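
    For what it's worth, a rough shell sketch of one workaround that avoids magic(5) entirely: since the OOXML formats are ZIPs, the member list inside the archive gives the real type away. Filenames here are hypothetical.

        #!/bin/sh
        # Crude OOXML sniffing: check the ZIP's member list for the telltale part.
        f="$1"
        if unzip -l "$f" 2>/dev/null | grep -q 'word/document.xml'; then
            echo docx
        elif unzip -l "$f" 2>/dev/null | grep -q 'xl/workbook.xml'; then
            echo xlsx
        elif unzip -l "$f" 2>/dev/null | grep -q 'ppt/presentation.xml'; then
            echo pptx
        else
            file -b --mime-type "$f"    # fall back to libmagic for everything else
        fi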

  • ssd firmware, linux: updating large batch of drives

    - by wryfi
    I was recently hit with a fatal firmware bug that affected dozens of Crucial SSDs deployed in my datacenter. Many of the affected machines use LSI or other proprietary SAS controllers, which Crucial's bootable ISO does not recognize. None of the affected machines has a Windows license. The story is roughly similar for other SSD mfrs, including Samsung and Intel. To resolve this issue, I was forced to stop each machine, remove the affected SSD, remove the SSD from its hotswap caddy, install it temporarily into my ThinkPad, flash the firmware, reverse, rinse, repeat. It took the better part of a day to get through all the affected devices. I am looking for hardware, software, and/or purchasing strategies to ease this pain, as SSD firmware bugs seem inevitable, and our SSD footprint is growing. My first thought is to get a laptop with eSATA and one of these cables (http://www.newegg.com/Product/Product.aspx?Item=N82E16812311004). That should at least make it so I don't have to remove the drives from their caddies. Surely others have run into this. Any novel solutions?

  • mysqld crashes on any statement

    - by ??iu
    I restarted my slave to change configuration settings: to skip reverse hostname lookups on connect and to enable the slow query log. I edited /etc/my.cnf, making only these changes, then restarted mysqld with /etc/init.d/mysql restart. All appeared to be well, but when I connect to mysqld remotely or locally, it connects okay; the slight problem is that mysqld crashes whenever you try to issue any kind of statement. The client session looks like:

        Reading table information for completion of table and column names
        You can turn off this feature to get a quicker startup with -A

        Welcome to the MySQL monitor. Commands end with ; or \g.
        Your MySQL connection id is 3
        Server version: 5.1.31-1ubuntu2-log

        Type 'help;' or '\h' for help. Type '\c' to clear the buffer.

        mysql> show tables;
        ERROR 2006 (HY000): MySQL server has gone away
        No connection. Trying to reconnect...
        Connection id: 1
        Current database: mydb

        ERROR 2006 (HY000): MySQL server has gone away
        No connection. Trying to reconnect...
        ERROR 2003 (HY000): Can't connect to MySQL server on 'xx.xx.xx.xx' (61)
        ERROR: Can't connect to the server

        ERROR 2006 (HY000): MySQL server has gone away
        No connection. Trying to reconnect...
        ERROR 2003 (HY000): Can't connect to MySQL server on 'xx.xx.xx.xx' (61)
        ERROR: Can't connect to the server

        ERROR 2006 (HY000): MySQL server has gone away
        Bus error

    The mysqld error log looks like:

        101210 16:35:51 InnoDB: Error: (1500) Couldn't read the MAX(job_id) autoinc value from the index (PRIMARY).
        101210 16:35:51 InnoDB: Assertion failure in thread 140245598570832 in file handler/ha_innodb.cc line 2595
        InnoDB: Failing assertion: error == DB_SUCCESS
        InnoDB: We intentionally generate a memory trap.
        InnoDB: Submit a detailed bug report to http://bugs.mysql.com.
        InnoDB: If you get repeated assertion failures or crashes, even
        InnoDB: immediately after the mysqld startup, there may be
        InnoDB: corruption in the InnoDB tablespace. Please refer to
        InnoDB: http://dev.mysql.com/doc/refman/5.1/en/forcing-recovery.html
        InnoDB: about forcing recovery.
        101210 16:35:51 - mysqld got signal 6 ;
        This could be because you hit a bug. It is also possible that this binary
        or one of the libraries it was linked against is corrupt, improperly built,
        or misconfigured. This error can also be caused by malfunctioning hardware.
        We will try our best to scrape up some info that will hopefully help diagnose
        the problem, but since we have already crashed, something is definitely wrong
        and this may fail.

        key_buffer_size=16777216
        read_buffer_size=131072
        max_used_connections=3
        max_threads=600
        threads_connected=3
        It is possible that mysqld could use up to
        key_buffer_size + (read_buffer_size + sort_buffer_size)*max_threads = 1328077 K
        bytes of memory
        Hope that's ok; if not, decrease some variables in the equation.

        thd: 0x18209220
        Attempting backtrace. You can use the following information to find out
        where mysqld died. If you see no messages after this, something went
        terribly wrong...
        stack_bottom = 0x7f8d791580d0 thread_stack 0x20000
        /usr/sbin/mysqld(my_print_stacktrace+0x29) [0x8b4f89]
        /usr/sbin/mysqld(handle_segfault+0x383) [0x5f8f03]
        /lib/libpthread.so.0 [0x7f902a76a080]
        /lib/libc.so.6(gsignal+0x35) [0x7f90291f8fb5]
        /lib/libc.so.6(abort+0x183) [0x7f90291fabc3]
        /usr/sbin/mysqld(ha_innobase::open(char const*, int, unsigned int)+0x41b) [0x781f4b]
        /usr/sbin/mysqld(handler::ha_open(st_table*, char const*, int, int)+0x3f) [0x6db00f]
        /usr/sbin/mysqld(open_table_from_share(THD*, st_table_share*, char const*, unsigned int, unsigned int, unsigned int, st_table*, bool)+0x57a) [0x64760a]
        /usr/sbin/mysqld [0x63f281]
        /usr/sbin/mysqld(open_table(THD*, TABLE_LIST*, st_mem_root*, bool*, unsigned int)+0x626) [0x641e16]
        /usr/sbin/mysqld(open_tables(THD*, TABLE_LIST**, unsigned int*, unsigned int)+0x5db) [0x6429cb]
        /usr/sbin/mysqld(open_normal_and_derived_tables(THD*, TABLE_LIST*, unsigned int)+0x1e) [0x642b0e]
        /usr/sbin/mysqld(mysqld_list_fields(THD*, TABLE_LIST*, char const*)+0x22) [0x70b292]
        /usr/sbin/mysqld(dispatch_command(enum_server_command, THD*, char*, unsigned int)+0x146d) [0x60dc1d]
        /usr/sbin/mysqld(do_command(THD*)+0xe8) [0x60dda8]
        /usr/sbin/mysqld(handle_one_connection+0x226) [0x601426]
        /lib/libpthread.so.0 [0x7f902a7623ba]
        /lib/libc.so.6(clone+0x6d) [0x7f90292abfcd]
        Trying to get some variables.
        Some pointers may be invalid and cause the dump to abort...
        thd->query at 0x18213c70 =
        thd->thread_id=3
        thd->killed=NOT_KILLED
        The manual page at http://dev.mysql.com/doc/mysql/en/crashing.html contains
        information that should help you find out what is causing the crash.
        101210 16:35:51 mysqld_safe Number of processes running now: 0
        101210 16:35:51 mysqld_safe mysqld restarted
        InnoDB: The log sequence number in ibdata files does not match
        InnoDB: the log sequence number in the ib_logfiles!
        101210 16:35:54 InnoDB: Database was not shut down normally!
        InnoDB: Starting crash recovery.
        InnoDB: Reading tablespace information from the .ibd files...
        InnoDB: Restoring possible half-written data pages from the doublewrite
        InnoDB: buffer...
        101210 16:35:56 InnoDB: Started; log sequence number 456 143528628
        101210 16:35:56 [Warning] 'user' entry 'root@PSDB102' ignored in --skip-name-resolve mode.
        101210 16:35:56 [Warning] Neither --relay-log nor --relay-log-index were used; so
        replication may break when this MySQL server acts as a slave and has his hostname
        changed!! Please use '--relay-log=mysqld-relay-bin' to avoid this problem.
        101210 16:35:56 [Note] Event Scheduler: Loaded 0 events
        101210 16:35:56 [Note] /usr/sbin/mysqld: ready for connections.
        Version: '5.1.31-1ubuntu2-log' socket: '/var/run/mysqld/mysqld.sock' port: 3306 (Ubuntu)
        101210 16:36:11 InnoDB: Error: (1500) Couldn't read the MAX(job_id) autoinc value from the index (PRIMARY).
        101210 16:36:11 InnoDB: Assertion failure in thread 139955151501648 in file handler/ha_innodb.cc line 2595
        InnoDB: Failing assertion: error == DB_SUCCESS
        InnoDB: We intentionally generate a memory trap.
        InnoDB: Submit a detailed bug report to http://bugs.mysql.com.
        InnoDB: If you get repeated assertion failures or crashes, even
        InnoDB: immediately after the mysqld startup, there may be
        InnoDB: corruption in the InnoDB tablespace. Please refer to
        InnoDB: http://dev.mysql.com/doc/refman/5.1/en/forcing-recovery.html
        InnoDB: about forcing recovery.
        101210 16:36:11 - mysqld got signal 6 ;
        This could be because you hit a bug. It is also possible that this binary
        or one of the libraries it was linked against is corrupt, improperly built,
        or misconfigured. This error can also be caused by malfunctioning hardware.
        We will try our best to scrape up some info that will hopefully help diagnose
        the problem, but since we have already crashed, something is definitely wrong
        and this may fail.

        key_buffer_size=16777216
        read_buffer_size=131072
        max_used_connections=1
        max_threads=600
        threads_connected=1
        It is possible that mysqld could use up to
        key_buffer_size + (read_buffer_size + sort_buffer_size)*max_threads = 1328077 K
        bytes of memory
        Hope that's ok; if not, decrease some variables in the equation.

        thd: 0x18588720
        Attempting backtrace. You can use the following information to find out
        where mysqld died. If you see no messages after this, something went
        terribly wrong...
        stack_bottom = 0x7f49d916f0d0 thread_stack 0x20000
        /usr/sbin/mysqld(my_print_stacktrace+0x29) [0x8b4f89]
        /usr/sbin/mysqld(handle_segfault+0x383) [0x5f8f03]
        /lib/libpthread.so.0 [0x7f4c8a73f080]
        /lib/libc.so.6(gsignal+0x35) [0x7f4c891cdfb5]
        /lib/libc.so.6(abort+0x183) [0x7f4c891cfbc3]
        /usr/sbin/mysqld(ha_innobase::open(char const*, int, unsigned int)+0x41b) [0x781f4b]
        /usr/sbin/mysqld(handler::ha_open(st_table*, char const*, int, int)+0x3f) [0x6db00f]
        /usr/sbin/mysqld(open_table_from_share(THD*, st_table_share*, char const*, unsigned int, unsigned int, unsigned int, st_table*, bool)+0x57a) [0x64760a]
        /usr/sbin/mysqld [0x63f281]
        /usr/sbin/mysqld(open_table(THD*, TABLE_LIST*, st_mem_root*, bool*, unsigned int)+0x626) [0x641e16]
        /usr/sbin/mysqld(open_tables(THD*, TABLE_LIST**, unsigned int*, unsigned int)+0x5db) [0x6429cb]
        /usr/sbin/mysqld(open_normal_and_derived_tables(THD*, TABLE_LIST*, unsigned int)+0x1e) [0x642b0e]
        /usr/sbin/mysqld(mysqld_list_fields(THD*, TABLE_LIST*, char const*)+0x22) [0x70b292]
        /usr/sbin/mysqld(dispatch_command(enum_server_command, THD*, char*, unsigned int)+0x146d) [0x60dc1d]
        /usr/sbin/mysqld(do_command(THD*)+0xe8) [0x60dda8]
        /usr/sbin/mysqld(handle_one_connection+0x226) [0x601426]
        /lib/libpthread.so.0 [0x7f4c8a7373ba]
        /lib/libc.so.6(clone+0x6d) [0x7f4c89280fcd]
        Trying to get some variables.
        Some pointers may be invalid and cause the dump to abort...
        thd->query at 0x18599950 =
        thd->thread_id=1
        thd->killed=NOT_KILLED
        The manual page at http://dev.mysql.com/doc/mysql/en/crashing.html contains
        information that should help you find out what is causing the crash.
        101210 16:36:11 mysqld_safe Number of processes running now: 0
        101210 16:36:11 mysqld_safe mysqld restarted

    The config is:

        [mysqld_safe]
        socket = /var/run/mysqld/mysqld.sock
        nice = 0

        [mysqld]
        innodb_file_per_table
        innodb_buffer_pool_size=10G
        innodb_log_buffer_size=4M
        innodb_flush_log_at_trx_commit=2
        innodb_thread_concurrency=8
        skip-slave-start
        server-id=3
        #
        # * IMPORTANT
        # If you make changes to these settings and your system uses apparmor, you may
        # also need to also adjust /etc/apparmor.d/usr.sbin.mysqld.
        #
        user = mysql
        pid-file = /var/run/mysqld/mysqld.pid
        socket = /var/run/mysqld/mysqld.sock
        port = 3306
        basedir = /usr
        datadir = /DB2/mysql
        tmpdir = /tmp
        skip-external-locking
        #
        # Instead of skip-networking the default is now to listen only on
        # localhost which is more compatible and is not less secure.
        #bind-address = 127.0.0.1
        #
        # * Fine Tuning
        #
        key_buffer = 16M
        max_allowed_packet = 16M
        thread_stack = 128K
        thread_cache_size = 8
        # This replaces the startup script and checks MyISAM tables if needed
        # the first time they are touched
        myisam-recover = BACKUP
        max_connections = 600
        #table_cache = 64
        #thread_concurrency = 10
        #
        # * Query Cache Configuration
        #
        query_cache_limit = 1M
        query_cache_size = 32M
        #
        skip-federated
        slow-query-log
        skip-name-resolve

    Update: I followed the instructions as per http://dev.mysql.com/doc/refman/5.1/en/forcing-innodb-recovery.html and set innodb_force_recovery = 4. The logs show a different error, but the behavior is still the same:

        101210 19:14:15 mysqld_safe mysqld restarted
        101210 19:14:19 InnoDB: Started; log sequence number 456 143528628
        InnoDB: !!! innodb_force_recovery is set to 4 !!!
        101210 19:14:19 [Warning] 'user' entry 'root@PSDB102' ignored in --skip-name-resolve mode.
        101210 19:14:19 [Warning] Neither --relay-log nor --relay-log-index were used; so
        replication may break when this MySQL server acts as a slave and has his hostname
        changed!! Please use '--relay-log=mysqld-relay-bin' to avoid this problem.
        101210 19:14:19 [Note] Event Scheduler: Loaded 0 events
        101210 19:14:19 [Note] /usr/sbin/mysqld: ready for connections.
        Version: '5.1.31-1ubuntu2-log' socket: '/var/run/mysqld/mysqld.sock' port: 3306 (Ubuntu)
        101210 19:14:32 InnoDB: error: space object of table mydb/__twitter_friend,
        InnoDB: space id 1602 did not exist in memory. Retrying an open.
        101210 19:14:32 InnoDB: error: space object of table mydb/access_request,
        InnoDB: space id 1318 did not exist in memory. Retrying an open.
        101210 19:14:32 InnoDB: error: space object of table mydb/activity,
        InnoDB: space id 1595 did not exist in memory. Retrying an open.
        101210 19:14:32 - mysqld got signal 11 ;
        This could be because you hit a bug. It is also possible that this binary
        or one of the libraries it was linked against is corrupt, improperly built,
        or misconfigured. This error can also be caused by malfunctioning hardware.
        We will try our best to scrape up some info that will hopefully help diagnose
        the problem, but since we have already crashed, something is definitely wrong
        and this may fail.

        key_buffer_size=16777216
        read_buffer_size=131072
        max_used_connections=1
        max_threads=600
        threads_connected=1
        It is possible that mysqld could use up to
        key_buffer_size + (read_buffer_size + sort_buffer_size)*max_threads = 1328077 K
        bytes of memory
        Hope that's ok; if not, decrease some variables in the equation.

        thd: 0x1753c070
        Attempting backtrace. You can use the following information to find out
        where mysqld died. If you see no messages after this, something went
        terribly wrong...
        stack_bottom = 0x7f7a0b5800d0 thread_stack 0x20000
        /usr/sbin/mysqld(my_print_stacktrace+0x29) [0x8b4f89]
        /usr/sbin/mysqld(handle_segfault+0x383) [0x5f8f03]
        /lib/libpthread.so.0 [0x7f7cbc350080]
        /usr/sbin/mysqld(ha_innobase::innobase_get_index(unsigned int)+0x46) [0x77c516]
        /usr/sbin/mysqld(ha_innobase::innobase_initialize_autoinc()+0x40) [0x77c640]
        /usr/sbin/mysqld(ha_innobase::open(char const*, int, unsigned int)+0x3f3) [0x781f23]
        /usr/sbin/mysqld(handler::ha_open(st_table*, char const*, int, int)+0x3f) [0x6db00f]
        /usr/sbin/mysqld(open_table_from_share(THD*, st_table_share*, char const*, unsigned int, unsigned int, unsigned int, st_table*, bool)+0x57a) [0x64760a]
        /usr/sbin/mysqld [0x63f281]
        /usr/sbin/mysqld(open_table(THD*, TABLE_LIST*, st_mem_root*, bool*, unsigned int)+0x626) [0x641e16]
        /usr/sbin/mysqld(open_tables(THD*, TABLE_LIST**, unsigned int*, unsigned int)+0x5db) [0x6429cb]
        /usr/sbin/mysqld(open_normal_and_derived_tables(THD*, TABLE_LIST*, unsigned int)+0x1e) [0x642b0e]
        /usr/sbin/mysqld(mysqld_list_fields(THD*, TABLE_LIST*, char const*)+0x22) [0x70b292]
        /usr/sbin/mysqld(dispatch_command(enum_server_command, THD*, char*, unsigned int)+0x146d) [0x60dc1d]
        /usr/sbin/mysqld(do_command(THD*)+0xe8) [0x60dda8]
        /usr/sbin/mysqld(handle_one_connection+0x226) [0x601426]
        /lib/libpthread.so.0 [0x7f7cbc3483ba]
        /lib/libc.so.6(clone+0x6d) [0x7f7cbae91fcd]
        Trying to get some variables.
        Some pointers may be invalid and cause the dump to abort...
        thd->query at 0x1754d690 =
        thd->thread_id=1
        thd->killed=NOT_KILLED
        The manual page at http://dev.mysql.com/doc/mysql/en/crashing.html contains
        information that should help you find out what is causing the crash.
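
    For anyone landing here with the same assertion: the route the linked manual page describes is to bring the server up in forced-recovery mode, dump everything, and rebuild the InnoDB tablespace from the dump. A hedged sketch only (paths match the config above; force-recovery levels above 4 risk further damage, and with innodb_file_per_table the per-table .ibd files get recreated by the reload as well):

        # 1) temporarily add to the [mysqld] section of /etc/my.cnf:
        #      innodb_force_recovery = 4
        /etc/init.d/mysql restart

        # 2) dump everything while the server is up in crippled, read-only mode
        mysqldump --all-databases > /root/all-databases.sql

        # 3) stop mysqld, move the broken tablespace aside,
        #    and remove the force-recovery line again
        /etc/init.d/mysql stop
        mkdir /root/innodb-broken
        mv /DB2/mysql/ibdata1 /DB2/mysql/ib_logfile* /root/innodb-broken/

        # 4) restart (a fresh tablespace is created) and reload the dump
        /etc/init.d/mysql start
        mysql < /root/all-databases.sql

    If the dump itself still crashes the server at level 4, raising innodb_force_recovery to 5 or 6 may let it run, at the cost of losing more of the unflushed data.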

  • Green flickering pixels that move with black images

    - by user568458
    Strange question... Occasionally, on my LCD screen, pixels that should be black flicker rapidly and constantly between black and green, about 4 flickers a second. The crazy part is, unlike dead/stuck pixels, they are relative to content on the screen and move with it. For example, I might be looking at a web page with a picture that has lots of black, and there might be a couple of green flashing pixels in that black that shouldn't be there. I scroll the page, and the green flickering pixels move with the image. It seems that every physical pixel is fine, but somehow something interprets part of the image in a way that causes flickering green... It's not just in a web browser. My first thought was to blame a trolling blogger cunningly uploading an animated gif that simulates a failing pixel... but it happens in a wide range of applications. It seems to occur randomly, other than that it seems to only occur in areas of pure black, and it's always pure 100% green. It happens rarely enough that it's not a big deal, but it's such a strange problem that it bugs me. I can't find any info on anything like this. I'm not even sure if it's hardware or software. Any ideas? (Windows 7 laptop connected to the LCD by a DVI-to-HDMI cable)

  • How to change the spell checking language in Chromium?

    - by mote
    When writing I need spell checking in Danish or English (one at a time, not both at the same time), but changing from one language to the other is not working well for me. Example: I'm writing an email in English, and everything is underlined as misspelled because the spelling language is set to Danish. I then right-click the text field, choose "Spell-checker Options" and set the language to English. But it does not change the language; only after I try maybe 4-5 times does it take. Selecting the text, clicking it, clicking the background, clicking the text again: I have tried it all, but I cannot figure out how it works. Sometimes it changes on the first try, sometimes I have to do it 6-7 times. Searching brings up lots of known bugs stating that Chromium does not re-check the text after changing the language. But that is not what drives me crazy; for starters, being able to change the language on the first try would be nice. It is not a fault in my installation; I have the same problem on 3 computers. Does anyone know something that I don't?

  • How do you gracefully upgrade mission critical systems to wildly disparate systems?

    - by Ernie
    In the span of my 12+ year career, I have yet to overcome this hurdle, and I suspect the answer simply isn't easy or even possible, so I ask everyone here for their experience. Say that you're running into egregious problems that can only be fixed by moving from one platform to another: either because the platform chosen years ago turned out to be a mistake, or because you've simply grown beyond what the system was originally designed for. You know for certain that the cruft that has built up over time will invariably mean it is nearly impossible to test for all the things that will certainly lead to tech support hell, which we all know leads to the loss of customers. Not that customers aren't already complaining about the egregious problems that already exist! The best possible way that I've discovered so far is to devise a plan for the changeover, test it on a few clients, test it on a dozen clients, test it on a hundred clients, then finally finish the changeover for everyone and pray that you've worked out all the bugs with those first hundred and twenty, and that the animal by-products will not hit the ventilation system in the most spectacular fashion possible. However, that doesn't mean that it won't anyway. So say that you're moving from Exchange to Exim (or even just Sendmail to Exim). How do you handle it?

  • Thunderbird 11.0.1 and Lightning 1.3: How do I propose a different time for a meeting?

    - by seaao
    This all happens on my x64 Linux workstation, btw. tl;dr: My colleague invited some people and me to a meeting. The meeting was scheduled a week too late. I -wrongfully- accepted. How do I propose a new time? To explain in a bit more detail: I received a meeting request in my mailbox. Thunderbird is nice enough to let me accept or reject it, and after I click the button to accept, it is directly added to my calendar. But when I double-click on the meeting to edit it, I get a function-wise scaled-down version of the meeting: the only settings I can alter are whether I want reminders, and whether I go at all. Trying to drag it to another day doesn't work either: my calendar behaves like it's read-only (which it isn't, btw). There are several questions (without answers...) to be found on Stack Overflow and on the Thunderbird knowledge base about using Lightning, but I get the idea that I'm one of the few who won't comply with the team even before the meeting has started. My googling revealed no bugs or feature requests in the direction I'm thinking. A link to an explanation of how to achieve this, or another perspective on how to reach the desired goal (a meeting with my colleagues and me), would be most welcome!

  • Error compiling PHP 5.5.9 on CentOS 6.5 during make command

    - by Chris Mancini
    Here is the error message:

        cc: internal compiler error: Killed (program cc1)
        Please submit a full bug report,
        with preprocessed source if appropriate.
        See <file:///usr/share/doc/gcc-4.6/README.Bugs> for instructions.
        make: *** [ext/fileinfo/libmagic/apprentice.lo] Error 1

    The very last thing make was processing is apprentice.lo, which appears to be part of the image manipulation libraries (maybe?). I am using Ansible to provision my instance. It is a Digital Ocean single-core 512MB VM. I have been using Vagrant / Ansible with the same config locally for dev and it has compiled fine; this is the first cloud VM I am attempting to provision. The only difference is that the base image for my DO server comes from DO, while for my local dev I built my own Vagrant box via VirtualBox from a stock CentOS basic server install and pull it down from my Dropbox. The problem has been experienced by others and reported as a PHP bug report. My PHP Ansible role, up to the error:

        ---
        - name: Download php source
          get_url: url={{ php_source_url }} dest=/tmp
          register: get_url_result

        - name: untar the source package
          command: tar -xvf php-{{ php_version }}.tar.gz chdir=/tmp
          when: get_url_result.changed or php_reinstall

        - name: configure php 5.5
          command: >
            ./configure
            --prefix={{ php_prefix }}
            --with-config-file-path={{ php_config_file_path }}
            --enable-fpm
            --enable-ftp
            --enable-mbstring
            --enable-pdo
            --enable-soap
            --enable-sockets=shared
            --enable-zip
            --with-curl
            --with-fpm-group={{ nginx_group }}
            --with-fpm-user={{ nginx_user }}
            --with-freetype-dir=/usr/lib64/
            --with-gd
            --with-jpeg-dir=/usr/lib64/
            --with-libdir=lib64
            --with-mcrypt
            --with-openssl
            --with-pdo-mysql
            --with-pear
            --with-readline
            --with-tidy
            --with-xsl
            --with-zlib
            --without-pdo-sqlite
            --without-sqlite3
            chdir=/tmp/php-{{ php_version }}
          when: get_url_result.changed or php_reinstall

        - name: make clean when reinstalling
          command: make clean chdir=/tmp/php-{{ php_version }}
          when: php_reinstall

        - name: make php
          command: make chdir=/tmp/php-{{ php_version }}
          when: get_url_result.changed or php_reinstall

    Thanks in advance for any help. :)
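
    A note for anyone hitting the same wall: "internal compiler error: Killed (program cc1)" on a 512MB droplet is usually the kernel's OOM killer rather than a GCC or PHP bug, and the common workaround is to add swap before compiling. A hedged sketch:

        # confirm the OOM killer struck during the build
        dmesg | grep -i 'killed process'

        # add a one-off 1GB swap file (add to /etc/fstab to make it permanent)
        dd if=/dev/zero of=/swapfile bs=1M count=1024
        chmod 600 /swapfile
        mkswap /swapfile
        swapon /swapfile

    With swap in place the make task should get past apprentice.lo; the same steps could equally be expressed as Ansible command tasks run ahead of the build.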

  • How do you set max execution time of PHP's CLI component?

    - by cwd
    How do you set the max execution time of PHP's CLI component? I have a CLI script that has gone into an infinite loop and I'm not sure how to kill it without restarting. I used Quicksilver to launch it, so I can't press Control+C at the command line. I tried running ps -A (show all processes), but php is not showing up in that list, so perhaps it has timed out on its own. But how do you manually set the time limit? I tried to find information about where I should set the max_execution_time setting; I'm used to setting this for the version of PHP that runs with Apache, but I have no idea where to set it for the version of PHP that lives in /usr/bin. I did see the following quote, which does seem to be accurate (see screenshot below), but having an unlimited execution time doesn't seem like a good idea:

        Keep in mind that for CLI SAPI max_execution_time is hardcoded to 0. So it
        seems to be changed by ini_set or set_time_limit but it isn't, actually. The
        only references I've found to this strange decision are deep in bugtracker
        (http://bugs.php.net/37306) and in php.ini (comments for 'max_execution_time'
        directive).

    (via http://php.net/manual/en/function.set-time-limit.php) ini_set('max_execution_time') has no effect. I also tried set_time_limit(7) and got the same result.
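
    Two hedged practical notes (the script path below is hypothetical). First, finding and killing the runaway process; second, imposing a wall-clock cap from outside, since the CLI SAPI ignores max_execution_time:

        # the CLI binary may not show under its script name; grep the full command line
        ps aux | grep '[p]hp'

        # kill it by name (or `kill <pid>` using the pid from the list above)
        killall php

        # going forward: wrap CLI runs in timeout(1) from GNU coreutils
        # (installable on OS X via MacPorts/Homebrew) to enforce the limit externally
        timeout 300 php /path/to/script.php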

  • Platform to allow users to suggest and vote for ideas

    - by Simon
    I head up a support team for a software product. I am in the process of setting up a blog (Wordpress.org), a forum (PHPBB or maybe Vanilla), and a means to view bugs (probably expose Jira), but would also like to allow our customers to suggest enhancement requests. Currently we have this as a 'contact form' via the blog, which feeds into a dedicated inbox, which the product manager reviews and may include some of these ideas in the product. However, I would like to give the clients more power/visibility over this. Have a look at ideas.arcgis.com; there are plenty of other similar sites as well:

    • Users can suggest new ideas
    • Other users can vote these ideas up or down (similar to Stack Exchange sites)
    • Ideas that are voted highly will be given a higher priority over lower ones
    • We can report back on the # of ideas we implement, and potentially reject ideas with reasoning

    Has anyone seen any platforms (ideally free) which would replicate something similar to this? I was half thinking of embedding Vanilla within a Wordpress page, but I need to look into it more.

  • Win 7 firewall won't turn on, nor the McAfee firewall. Hit by "Win 7 Anti-virus 2012" trojan. Removed, but a downed firewall is a lasting legacy

    - by PhxTitan
    I caught the Trojan right away, I think, but both my McAfee and Win 7 (x64) firewalls cannot be engaged/turned on now. I get MS Error Code 0x80070424 when attempting to turn on the Win 7 firewall. No viruses: I swept the machine with McAfee AV, Malwarebytes Anti-Malware, and Microsoft's malware removal tools. I followed the three alternative courses of action Microsoft posted for getting the Win 7 firewall back up and on. Nothing; same error code. The post just said to see MS support if those fixes failed. So I removed McAfee altogether. The Win 7 (Professional) firewall still won't come on, and the machine is clean of detectable bugs. I'm also fully up to date with MS Windows 7 updates, though updating is no longer automatic; that too is a legacy of the trojan bug, I think. Any thoughts on how to get the Win 7 firewall operational, and auto-updating re-engaged?

  • Understanding ulimit -u

    - by tripleee
    I'd like to understand what's going on here.

        linvx$ ( ulimit -u 123; /bin/echo nst )
        nst
        linvx$ ( ulimit -u 122; /bin/echo nst )
        -bash: fork: Resource temporarily unavailable
        Terminated
        linvx$ ( ulimit -u 123; /bin/echo one; /bin/echo two; /bin/echo three )
        one
        two
        three
        linvx$ ( ulimit -u 123; /bin/echo one & /bin/echo two & /bin/echo three )
        -bash: fork: Resource temporarily unavailable
        Terminated
        one

    I speculate that the first 122 processes are consumed by Bash itself, and that the remaining ulimit governs how many concurrent processes I am allowed to have. The documentation is not very clear on this. Am I missing something? More importantly, for a real-world deployment, how can I know what sort of ulimit is realistic? It's a long-running daemon which spawns worker threads on demand, and reaps them when the load decreases. I've had it spin the server to its death a few times. The most important limit is probably memory, which I have now limited to 200M per process, but I'd like to figure out how I can enforce a limit on the number of children (the program does allow me to configure a maximum, but how do I know there are no bugs in that part of the code?)
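
    One hedged note that may explain the arithmetic: on Linux, the limit behind ulimit -u (RLIMIT_NPROC) is checked at fork() against all processes owned by the user, not just children of the current shell, so what matters is the headroom below the limit. A quick way to see it:

        # how many processes does this user already own right now?
        ps -u "$USER" --no-headers | wc -l

        # the current cap; fork fails once the user's count would exceed it
        ulimit -u

    For the daemon, that suggests sizing the limit as the user's baseline process count, plus the configured worker maximum, plus some slack; the hard limit then acts as a backstop in case the worker-cap code has bugs.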

  • Does Google sometimes ignore "special" characters, possibly depending on your location or font type settings? [closed]

    - by RLH
    TL;DR: Google tends to ignore special characters in my search strings. Is there anything I can do about it, and is it possibly happening because Google makes certain assumptions based on my default text-encoding settings and my location?

    I just posted this question over at Stack Overflow. I had found a C preprocessor operator that I'd never seen before. As I should have done, I Googled it and tried to find out further information. I attempted various search terms, which were all variations of "C Operator ##" (sometimes with and sometimes without the double quotes). Google didn't bring back anything of use, so I posted my question on SO. As you can see from the comments, someone mentioned a search string (ironically, one which I did try to search) and stated that I could have even hit the "I'm Feeling Lucky" button and gotten my answer. The problem is I did search that, and the results that I received were far more basic; even after following the top results and searching the resulting pages, I could find nothing referencing the string "##". I'm not posting this question to complain, but it does provide an empirical example of something I've seen before that really bugs me: Google often ignores special characters in my search strings, and the results are often useless. As a developer I often need to search for string values containing non-alphanumeric characters. Some characters (like the underscore or hyphen) can be used without trouble. However, other characters (such as the ampersand, caret, tilde and pound sign) are often ignored in my query strings. Is there a way to prevent this from happening so that I can get meaningful results from Google?

    NOTE: I stay logged into Google and I live in the US. I wonder if Google detects some form of text-encoding setting or derives my results from certain localized, text-based assumptions. Regardless, I would like for Google to search for what I give it. Is there anything I can do to improve my results?

  • How to set up multi users on dev server with git and github

    - by Derek Organ
    I'm working on a LAMP application. We have 2 servers (Debian), Live and Dev. I constantly work on the Dev machine to add new features and fix bugs. When I'm happy that all works well, I scp the relevant code to the Live system. The database (MySQL) is local to each machine. Now, this is a pretty basic setup really, and I want to improve the workflow a bit. I use git and GitHub for version control; admittedly I've only really used one branch. Three different developers can work on the code at different times. We all use the same Linux username to connect to the dev server and edit the code directly when needed. I usually then commit and push the code to GitHub at the end of the day. One thing to bear in mind is that it isn't easy to run this code on a local machine, as there are many Apache and subdomain configurations that wouldn't work locally, so it is important to work on the dev server, not locally. I need to create a new process because we need to have a main trunk now and a branch with a big code rewrite. What is the best way to do this? Should I create different Unix logins for each developer and set up different working areas on the dev server for their changes? e.g.

        /var/www/mysite_derek
        /var/www/mysite_paul
        /var/www/mysite_mike

    My thinking is they can do a pull from the main branch, then create their own branch and merge it back in. I'm not sure how this will work, though, with git locally and with GitHub. Will I need to create different GitHub user accounts as well? I'd like to do this the 'right' way and future-proof it for having lots of potential developers, but I also don't want to over-complicate it. A simple and elegant solution is preferred. Any recommendations or suggestions?
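
    A hedged sketch of one common shape for this (logins, paths and branch names below are illustrative): give each developer their own Unix login and working clone, keep one shared repo as origin (GitHub itself works fine for this), and merge topic branches back into the main branch.

        # each developer, under their own login, clones into their own area
        git clone git@github.com:example/mysite.git /var/www/mysite_derek
        cd /var/www/mysite_derek
        git checkout -b big-rewrite        # or a short-lived bugfix branch

        # ...edit and test against the dev Apache config...
        git commit -am "Rework login flow"
        git push origin big-rewrite        # the branch lives on GitHub too

        # when the rewrite is ready, merge it back into the main branch
        git checkout master
        git pull origin master
        git merge big-rewrite
        git push origin master

    On the accounts question: separate GitHub accounts per developer (added as collaborators) keep commit attribution clean, though a single shared account also works if you don't mind losing that. The Apache/subdomain configs would need a vhost per developer directory for each area to be testable.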

  • GIT and Django Projects

    - by Garfonzo
    I have two servers, a Dev server and a Production server. The Production server runs a live Django site, while the Dev server has a copy of the Django project. I use the Dev server to work on the Django site, make improvements, fix bugs, etc. Once I am satisfied with how the Dev version is working, I move the whole Django directory from the Dev server and replace the same directory on the Production server. The two servers are not on the same LAN, so the process is not straightforward. There are a few issues with this so far:

    • Moving the whole directory is laborious and time-consuming
    • If I only change a few files, it is even more tedious to replace a few files than the whole directory, since the project is getting fairly large and I worry that I'll miss something
    • I often run into permission issues after I've moved things
    • It's super inefficient, and, due to lack of time, I haven't bothered figuring out a new method

    Now it's just getting out of hand and I need to address the situation. I am thinking I need to move to a Git repository for this process. But my question is: how would I set this all up? Do I host the repository on the Production server, pull from the Dev server, do work, then commit? Then I would pull from the Production server (the same server the repo is hosted on) to run the current working version? Do I host the repo on the Dev server, pulling from the same server to do work on the repo, then pull a working version onto the Production server? Should I be hosting the repo on a different server than the Production server and the Dev server (a third server)? Are there any special considerations with Django and repos that I need to worry about? Thanks for the help :)
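
    A hedged sketch of one common setup (hostnames and paths are hypothetical): keep the authoritative repo somewhere both machines can reach (a third host like GitHub is convenient, but a bare repo on the Dev server works too), make Production a plain clone, and deploy with a pull instead of copying directories. That also ends the permission problems, since files are always written by the same user on Production.

        # one-time, on the dev server: publish the project as a bare repo
        git clone --bare /srv/django/mysite /srv/git/mysite.git

        # one-time, on the production server
        git clone ssh://dev.example.com/srv/git/mysite.git /srv/django/mysite

        # daily cycle on dev: work, commit, push
        git commit -am "Fix signup validation"
        git push origin master

        # deploy, on production: fetch exactly what was committed, then reload
        cd /srv/django/mysite
        git pull origin master
        touch mysite/wsgi.py    # or however the site's app server gets reloaded

    Django-specific considerations are mostly about what to keep out of the repo: settings that differ per machine (database credentials, DEBUG) usually go in an untracked local settings file, and user-uploaded media directories shouldn't be tracked at all.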

  • Understanding Netbook Partitions & UNR Installation

    - by Wesley
    Hi all, I have a Samsung N120 netbook (with upgraded 2GB RAM). I'm just looking at the Disk Management right now (in Windows XP) and I'm trying to understand what partition holds what. There is "Local Disk (C:)" which is 40GB, "RECOVERY" (no drive letter) which is 6GB and then "TEMP_PART01 (D:)" which is 103.05GB. XP is installed on Local Disk (C:) and I've only used this hard drive for all my files, etc. Recovery is recovery... probably not removable anyways. Now, what bugs me is the TEMP_PART01 (D:) partition, which contains quite a bit of random junk, such as EULA text documents, an "external installer", UI Wrapper Resource DLLs, a "VC_RED" Windows Installer Package and a few more files. I have no clue what any of it means, but I'm assuming that this was probably stuff that could have been on the Local Disk (C:), along with the WINDOWS, Program Files, and Docs and Settings folder. So, how should I go about this? Should I have kept all my data on D: and left all OS related files/folders on C:? Now, I want to install Ubuntu Netbook Remix. Question is, will this install within Windows, if I want to dual boot it? If not, would I partition D: into two small chunks, one on which I would install UNR? There are basically two questions in here, but it'd be great to get answers for both! Thanks in advance.

  • Windows XP VM on VMWare ESXi 4.1 "pausing" / blocked occasionally

    - by FelixD
    We have an issue with Windows XP SP3 VMs on VMware ESXi 4.1.0 (the free version): they sometimes seem to "pause" for several minutes. This happens rarely (maybe once a month per VM, at least only noticed that often), but it is still an issue for us. It happens for three different but similar VMs on three pretty different hosts (different hardware). I have the feeling that the "pausing" is not actually the CPU blocking, but probably the hard disks, though I'm not 100% sure. The servers have one IDE disk (C:) and one SCSI (D:), and it might be either of the two. I have seen scheduled tasks simply not starting for up to 9 minutes and then running normally again at normal speed; they were totally blocked. This is not a load issue: the VMware hosts have average load, and the VMs in question already have reserved CPU resources plus high priorities for CPU and disk. The Windows boxes run mainly MySQL, Tomcat, FileZilla Server, Cygwin stuff, Java + R applications, the VMware client, Elusiva Terminal Server Pro, and the Nagios client. Not sure if this might be related to any of that software (e.g. Elusiva). Trying to debug this, there was nothing visible in the Windows Event Log, other logs in C:\Windows, VMware events etc. Unfortunately the vmware.log file ends with "Log throttled". We found that we ran into 2 VMware bugs there: the VMware client writes lots of bogus messages to the vmware.log, which we have now disabled (log level error setting), plus the bug that VMware does not unthrottle the log (at least not so far, despite VM reboots). I know there is not much guidance here, and that may also be why I so far haven't found anything related on the web or on Server Fault, but maybe some of this rings a bell with someone? Or please direct me to what more info to post. I hope that the vmware.logs get unthrottled eventually (we can't easily restart the hosts at the moment). Thanks for any input!

  • Routing connections through VPN based on hostname (not IP range)

    - by Michal M
    This bugs me immensely. I need to connect to a client's network through VPN, but I definitely do not want to send all my traffic through the client's network, so that option is out of the question. What I basically need is for the OS to know that all of the client's network subdomains (*.example.com) need to go through the VPN connection. I tried a couple of options:

    • Changing the order of services and setting the VPN on top, but this works the same as "Send all traffic over VPN connection".
    • Using the "VPN on Demand" option from the network advanced options, but this feature is quite rubbish to be honest. It seems to work only in Safari (?!), and it doesn't route the connection; it basically just triggers the OS to connect to the selected VPN.

    The reason I need it to work based on hostnames rather than an IP range is simple: my client has a lot of servers inside his network and it's impossible for me to remember all the IPs. They are all within a range, but this doesn't help me remember. Another option would be to put the VPN connection at the bottom of the network services, untick "Send all traffic...", and then put all known hostnames in the hosts file, but considering there could be hundreds of servers (and therefore hostnames and IPs), it's a ridiculous job. And if a new server appears on the network, I'd need to edit the hosts file again. Sisyphean labours. However, this works on Windows very simply: if a hostname is not available through the default network interface, then it seems to try the VPN connection, and this works brilliantly. So, how can I achieve that on a Mac, then? I know the client's internal DNS addresses if that is of any help (like directing certain domains through a different DNS)?

    PS: Using the latest version, 10.6.6.
    PS2: I am using the VPN to access the intranet, version control servers (svn://), Samba shares and SSH access to servers.
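
    Not hostname-based routing, but for completeness, the usual OS X compromise when the client's servers all sit in one range (the addresses below are examples): leave "Send all traffic" off and, after the VPN comes up, add a static route for the client's subnet; domain-scoped DNS can be pointed at the client's server with a resolver file.

        # route only the client's range over the VPN interface
        sudo route -n add -net 10.20.0.0/16 -interface ppp0

        # answer *.example.com lookups from the client's internal DNS only
        sudo mkdir -p /etc/resolver
        printf 'nameserver 10.20.0.1\n' | sudo tee /etc/resolver/example.com

    The route command has to be re-run per connection (an /etc/ppp/ip-up script can automate that for PPP-based VPNs); the resolver file persists.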

  • Computers on network crashing

    - by Phil Cross
    We have recently upgraded our network to Windows 7 clients with Windows Server 2008 servers. The upgrade was completed by the end of September and until now has been fine (apart from minor bugs). Recently (within the last 2 weeks) we've noticed all computers on the network (around 1000) start to slow down to the point that they're unusable. It starts at about 08:45 and finishes at 09:15, and it happens every day between these times. Because of this, we think something may be broadcasting across the network. I can't use my computer at all at the peak of the slowdown, and looking at Task Manager's performance graphs, physical memory is hovering around 35% and CPU usage is at 0-10% (idle), yet the machine still grinds to a halt. I've looked in the DHCP server's log and can't see anything that stands out. The only change we made prior to the slowdown was installing Adobe CS6 on some computers; however, the slowdown affects computers without CS6. We have 2 physical machines, each with around 5-7 virtual machines running on them with ample memory. Does anyone have any suggestions as to what we can do to narrow down what's causing the crashes? Any help, suggestions or advice would be appreciated.
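
    One hedged way to test the broadcast theory: capture on a client NIC (or a mirrored switch port) during the 08:45-09:15 window and see who is doing the talking. The interface name below is an example, and on the Windows 7 clients themselves the same capture is easier with Wireshark.

        # grab only broadcast/multicast frames during the slowdown window
        tcpdump -i eth0 -w slowdown.pcap 'broadcast or multicast'

        # afterwards: tally the top senders
        tcpdump -nn -r slowdown.pcap | awk '{print $3}' | sort | uniq -c | sort -rn | head

    The fixed 08:45 start also smells like a scheduled job (GPO processing, AV definition pulls, WSUS scans), so cross-checking scheduled-task and GPO timings against that window may be quicker than the capture.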

  • Does VLC Player work well on Windows 7 64-bit?

    - by ????
    I tried VLC Player on the Windows 7 64-bit version and playback was fine, but when the video is maximized the image is very pixelated. I tried this on a Radeon HD 2600 XT graphics card as well as a computer with an Intel graphics chipset, and both have the same result. If I use VLC Player on Windows 7 32-bit instead of 64-bit, there is no pixelation. Is this a known problem, or is there any method to fix it on Windows 7 64-bit?

    Update: there isn't a 32-bit vs 64-bit version of VLC Player, is there? (unlike 7-Zip) I also tried GOM Player and it doesn't have the problem on Windows 7 64-bit.

    Update (Nov 4, 2009): VLC displays an update notice:

        VLC 1.0.3 is a minor release fixing many bugs, especially for Windows Vista
        and 7, but it also introduces 2 new modes for deinterlacing, and a new udev
        module. Major fixes are about WMA Pro support, Dolby tracks in 4.0, v4l/v4l2
        and atsc and a crash in mjpeg demuxer. Update of translations are also part
        of this release.

  • Cannot open /dev/rfcomm1 : Host is down

    - by srj0408
    I am working on a Raspberry Pi and on Bluetooth. I am using an old Raspberry Pi kernel, as the new one has some unresolved bugs with respect to the bluez daemon. At present my kernel version is 3.6.11. I am using a USB Bluetooth dongle, and my sole purpose is to auto-connect the Bluetooth dongle whenever it is in range. For that, I think I have to run a script in the background on the RPi that will keep checking for the existence of the USB Bluetooth dongle. I started from the very scratch: I installed the Bluetooth stack using apt-get install bluetooth bluez-utils blueman, and then I ran hciconfig, which shows that my Bluetooth USB dongle is working fine. But when I did hcitool scan, it gave me no devices in range even though my serial Bluetooth device was on; I wasn't able to find any device in the vicinity. When I unplugged and plugged the USB dongle back in, I was able to scan the serial device, but when I repeat the process, I am back in the earlier state of finding no devices. I found another useful link, but that needs the address of the Bluetooth device to be connected. I want to automate this using hcitool scan, storing the output to a file and then comparing it with already paired devices and their names. For that I need to figure out why hcitool scan is sometimes working and sometimes not. Can someone help me figure out why this is happening? Is there a problem on the hardware side (i.e. is the Bluetooth dongle buggy), or is it some problem in the bluez utils?

    Edit 1: As of now, hcitool scan does give me my remote device's address, but I am still getting the same "Host is down" issue with '/dev/rfcomm1'. I really have no idea what to try next.
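
    A hedged sketch of the background script described above (the device address and RFCOMM channel are placeholders; the channel can be found with sdptool browse <addr>):

        #!/bin/sh
        # Poll for a known device and bring up /dev/rfcomm1 when it appears.
        TARGET="00:11:22:33:44:55"    # placeholder: the serial device's address
        CHANNEL=1                     # placeholder: its SPP channel

        while true; do
            if hcitool scan --flush | grep -qi "$TARGET"; then
                rfcomm release 1 2>/dev/null    # drop any stale binding
                # connect blocks while the link is up, so the loop
                # naturally resumes scanning after a disconnect
                rfcomm connect /dev/rfcomm1 "$TARGET" "$CHANNEL"
            fi
            sleep 10
        done

    The --flush flag makes hcitool discard its inquiry cache before scanning, which is one common reason a device shows up on one scan and vanishes on the next.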

  • Why are my SharePoint downloads not completing for outside users?

    - by CT
    I am using WSS 3.0 with Microsoft Server 2003 and I am running into the following problem: on a pretty frequent basis, outside users are having trouble downloading documents. Some downloads finish while the file is still incomplete. For instance, take a 17MB PDF. If I download it from within the office, all 17MB is downloaded and it opens. If I download it from an outside connection, it may download anywhere from 5-10MB of the file and then say it is complete. When one of these partial downloads is opened, the user gets the error that the file is corrupt and cannot be repaired. On some occasions I have solved this problem by simply deleting the document and uploading a new copy, but this does not always work. Are there known bugs? Are there Internet settings that need to be modified on the outside user's machine? Does anyone else run into this?

  • how to debug "deep" crashes in Android?

    - by eerok512
    Hi All,

    I've been trying to debug an Android crash that is occurring without a Java stack trace... Java stack trace bugs are very easy for me to fix... but this bug seems to be crashing inside the "NDK" or whatever the deep internals of Android are called... I've made no modifications to the NDK btw... I just dunno what else to call that layer hehe.

    Anyway, I'm mainly looking for advice on deep-debug methods, rather than help with this specific problem... because I doubt I can post all the source code involved... so really I just need to know how to set breakpoints at the deep layers, or whatever other methods there are to trace deep crashes to their source... so I will briefly describe the bug and then post a LogCat.

    I have an app with 7 Activities:

    • Activity_INTRO
    • Activity_EULA
    • Activity_MAIN
    • Activity_Contact
    • Activity_News
    • Activity_Library
    • Activity_More

    INTRO is the initiating one... it fades in some company logos... after displaying them for a set time it jumps to the EULA activity... after the user accepts the EULA, it jumps to MAIN... MAIN then creates a TabHost and populates it with the 4 remaining activities.

    Now here's the thing... when I click on, say, the More tab of the TabHost, the app pauses for a few seconds and then hard-crashes... no Java stack trace, but an actual ASM-level trace with the registers and IP and stack... the same thing occurs no matter which tab I select: Contact, News, Library, More... all of them crash with the same hard crash.

    If however I set the manifest to start the app at Activity_MAIN, bypassing the INTRO and EULA, then these crashes do not occur... so something is lingering from those opening activities that is somehow hosing the TabHost'ed activities... and I'm wondering what the hell that could be... because I'm using finish() on those activities when they need to jump... in fact here is how I'm doing it, let me know if you see any bugs. When jumping from INTRO to EULA I do:

        //Display the EULA
        Intent newIntent = new Intent (avi, Activity_EULA.class);
        startActivity (newIntent);
        finish();

    and EULA to MAIN:

        Intent newIntent = new Intent (this, Activity_Main.class);
        startActivity (newIntent);
        finish();

    Anyway, here is the hard-crash log... please let me know if there is some way I can reverse engineer either /system/lib/libcutils.so or /system/lib/libandroid_runtime.so, because I think the crash is happening in one of them... I think it's happening in libandroid_runtime, in fact. Anyway, on to the log:

        12-25 00:56:07.322: INFO/DEBUG(551): *** *** *** *** *** *** *** *** *** *** *** *** *** *** *** ***
        12-25 00:56:07.332: INFO/DEBUG(551): Build fingerprint: 'generic/sdk/generic/:1.5/CUPCAKE/150240:eng/test-keys'
        12-25 00:56:07.362: INFO/DEBUG(551): pid: 722, tid: 723 >>> com.killerapps.chokes <<<
        12-25 00:56:07.362: INFO/DEBUG(551): signal 11 (SIGSEGV), fault addr 00000004
        12-25 00:56:07.362: INFO/DEBUG(551): r0 00000004 r1 40021800 r2 00000004 r3 ad3296c5
        12-25 00:56:07.372: INFO/DEBUG(551): r4 00000000 r5 00000000 r6 ad342da5 r7 41039fb8
        12-25 00:56:07.372: INFO/DEBUG(551): r8 100ffcb0 r9 41039fb0 10 41e014a0 fp 00001071
        12-25 00:56:07.382: INFO/DEBUG(551): ip ad35b874 sp 100ffc98 lr ad3296cf pc afb045a8 cpsr 00000010
        12-25 00:56:07.552: INFO/DEBUG(551): #00 pc 000045a8 /system/lib/libcutils.so
        12-25 00:56:07.572: INFO/DEBUG(551): #01 lr ad3296cf /system/lib/libandroid_runtime.so
        12-25 00:56:07.582: INFO/DEBUG(551): stack:
        12-25 00:56:07.582: INFO/DEBUG(551): 100ffc58 00000000
        12-25 00:56:07.592: INFO/DEBUG(551): 100ffc5c 001c5278 [heap]
        12-25 00:56:07.602: INFO/DEBUG(551): 100ffc60 000000da
        12-25 00:56:07.602: INFO/DEBUG(551): 100ffc64 0016c778 [heap]
        12-25 00:56:07.602: INFO/DEBUG(551): 100ffc68 100ffcc8
        12-25 00:56:07.602: INFO/DEBUG(551): 100ffc6c 001c5278 [heap]
        12-25 00:56:07.612: INFO/DEBUG(551): 100ffc70 427d1ac0
        12-25 00:56:07.612: INFO/DEBUG(551): 100ffc74 000000c1
        12-25 00:56:07.612: INFO/DEBUG(551): 100ffc78 40021800
        12-25 00:56:07.612: INFO/DEBUG(551): 100ffc7c 000000c2
        12-25 00:56:07.612: INFO/DEBUG(551): 100ffc80 00000000
        12-25 00:56:07.612: INFO/DEBUG(551): 100ffc84 00000000
        12-25 00:56:07.622: INFO/DEBUG(551): 100ffc88 00000000
        12-25 00:56:07.622: INFO/DEBUG(551): 100ffc8c 00000000
        12-25 00:56:07.622: INFO/DEBUG(551): 100ffc90 df002777
        12-25 00:56:07.632: INFO/DEBUG(551): 100ffc94 e3a070ad
        12-25 00:56:07.632: INFO/DEBUG(551): #00 100ffc98 00000000
        12-25 00:56:07.632: INFO/DEBUG(551): 100ffc9c ad3296cf /system/lib/libandroid_runtime.so
        12-25 00:56:07.632: INFO/DEBUG(551): 100ffca0 100ffcd0
        12-25 00:56:07.642: INFO/DEBUG(551): 100ffca4 ad342db5 /system/lib/libandroid_runtime.so
        12-25 00:56:07.642: INFO/DEBUG(551): 100ffca8 410a79d0
        12-25 00:56:07.642: INFO/DEBUG(551): 100ffcac ad00e3b8 /system/lib/libdvm.so
        12-25 00:56:07.652: INFO/DEBUG(551): 100ffcb0 410a79d0
        12-25 00:56:07.652: INFO/DEBUG(551): 100ffcb4 0016bac0 [heap]
        12-25 00:56:07.662: INFO/DEBUG(551): 100ffcb8 ad342da5 /system/lib/libandroid_runtime.so
        12-25 00:56:07.662: INFO/DEBUG(551): 100ffcbc 40021800
        12-25 00:56:07.662: INFO/DEBUG(551): 100ffcc0 410a79d0
        12-25 00:56:07.662: INFO/DEBUG(551): 100ffcc4 afe39dd0
        12-25 00:56:07.662: INFO/DEBUG(551): 100ffcc8 100ffcd0
        12-25 00:56:07.662: INFO/DEBUG(551): 100ffccc ad040a8d /system/lib/libdvm.so
        12-25 00:56:07.672: INFO/DEBUG(551): 100ffcd0 41039fb0
        12-25 00:56:07.672: INFO/DEBUG(551): 100ffcd4 420000f8
        12-25 00:56:07.672: INFO/DEBUG(551): 100ffcd8 ad342da5 /system/lib/libandroid_runtime.so
        12-25 00:56:07.672: INFO/DEBUG(551): 100ffcdc 100ffd48
        12-25 00:56:07.852: DEBUG/dalvikvm(722): GC freed 367 objects / 15144 bytes in 210ms
        12-25 00:56:08.081: DEBUG/InetAddress(722): www.akillerapp.com: 74.86.47.202 (family 2, proto 6)
        12-25 00:56:08.242: DEBUG/dalvikvm(722): GC freed 62 objects / 2328 bytes in 122ms
        12-25 00:56:08.771: DEBUG/dalvikvm(722): GC freed 245 objects / 11744 bytes in 179ms
        12-25 00:56:09.131: INFO/ActivityManager(577): Process com.killerapps.chokes (pid 722) has died.
        12-25 00:56:09.171: INFO/WindowManager(577): WIN DEATH: Window{43719320 com.killerapps.chokes/com.killerapps.chokes.Activity_Main paused=false}
        12-25 00:56:09.251: INFO/DEBUG(551): debuggerd committing suicide to free the zombie!
        12-25 00:56:09.291: DEBUG/Zygote(553): Process 722 terminated by signal (11)
        12-25 00:56:09.311: INFO/DEBUG(781): debuggerd: Jun 30 2009 17:00:51
        12-25 00:56:09.331: WARN/InputManagerService(577): Got RemoteException sending setActive(false) notification to pid 722 uid 10020
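
    On the general question of methods, a hedged sketch of the usual native-crash workflow (tool names assume an ARM toolchain from the Android source or NDK is on PATH; exact binary names vary by release): translate the tombstone's program-counter offsets into symbols with addr2line against unstripped copies of the libraries.

        # pull the library that frame #00 points into
        adb pull /system/lib/libcutils.so .

        # frame #00 already gives a library-relative offset (pc 000045a8),
        # so it can be fed straight to addr2line
        arm-eabi-addr2line -C -f -e libcutils.so 0x45a8

        # the lr frame (ad3296cf in libandroid_runtime.so) is an absolute
        # address: subtract the library's load base, taken from
        # /proc/<pid>/maps on the device, before looking it up

    One caveat: the .so files on a device /system image are stripped, so this really wants the symbols output of a matching emulator/AOSP build (e.g. the 1.5 SDK image the fingerprint names); otherwise addr2line will mostly come back with ??.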
