Search Results

Search found 53174 results on 2127 pages for 'transport error'.

Page 77/2127 | < Previous Page | 73 74 75 76 77 78 79 80 81 82 83 84  | Next Page >

  • Problem installing virtualbox on ubuntu 9.04

    - by debanjan
    I'm using Ubuntu 9.04. For the last few days I've been trying to install VirtualBox, but I can't. I went to this link for VirtualBox: https://www.virtualbox.org/wiki/Download_Old_Builds_3_0 and found the build for my version. After downloading it in XP I restarted my PC and opened Ubuntu. When I install it with the package installer there is a problem: every time I try to install, the status shows the message "Error: Dependency is not satisfiable: libcurl3 (=7.16.2-1)". There is an icon named "install package" but it's hidden. I don't know what the problem is. I'm new to Ubuntu, please try to help me.
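
    One way to dig into a dependency error like this from the command line is to let apt report what it can actually provide and then have it try to resolve the missing pieces (a sketch only; the .deb filename is a placeholder for the file that was downloaded):

        sudo apt-get update
        apt-cache policy libcurl3                  # which libcurl3 version 9.04 actually ships
        sudo dpkg -i virtualbox-3.0_*_i386.deb     # install the downloaded package
        sudo apt-get -f install                    # let apt try to satisfy any remaining dependencies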

    Read the article

  • dpkg snippet isn't working for me when fixing / installing dockbar

    - by user3924
    I'm trying to use this code from webupd8 to fix the problem I have. It seems it didn't work. sudo dpkg -i --force-overwrite /var/cache/apt/archives/smplayer_0.6.9+svn3595-1ppa1~maverick1_i386.deb Here is the output from my attempt to install dockbar : $ sudo dpkg -i --force-all var/apt/cache/archives/dockmanager_0.1.0~bzr80-0ubuntu1~10.10~dockers1_i386.deb dpkg: error processing var/apt/cache/archives/dockmanager_0.1.0~bzr80-0ubuntu1~10.10~dockers1_i386.deb (--install): cannot access archive: No such file or directory Errors were encountered while processing: var/apt/cache/archives/dockmanager_0.1.0~bzr80-0ubuntu1~10.10~dockers1_i386.deb Is there any way around this?
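
    For what it's worth, dpkg's "cannot access archive: No such file or directory" simply means the path given does not exist; apt's download cache normally lives under /var/cache/apt/archives (not var/apt/cache/archives), and dpkg wants the full path. A sketch of the command with that path, assuming the package really was downloaded into the cache:

        sudo dpkg -i --force-overwrite /var/cache/apt/archives/dockmanager_0.1.0~bzr80-0ubuntu1~10.10~dockers1_i386.deb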

    Read the article

  • How do I install latest NVIDIA drivers?

    - by Shahe Tajiryan
    This is what I am trying to do. I downloaded the latest driver for my VGA from http://www.nvidia.com. The installation needs X11 to be shut down, so I log out of my account, press CTRL+ALT+F1, log in with my username and password, then run the command sh NVIDIA-Linux-x86_64-285.05.09.run in every possible way. I have even tried chmod-ing the package with 777 permissions, but I still get the sh: can't open NVIDIA-Linux-x86_64-285.05.09.run error. Any help would be greatly appreciated.
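
    sh reports "can't open" when it cannot find the file in the current directory, so the usual sequence is to change into the directory the .run file was saved to before invoking it. A sketch, assuming the file landed in ~/Downloads:

        cd ~/Downloads
        chmod +x NVIDIA-Linux-x86_64-285.05.09.run
        sudo service lightdm stop    # or gdm, depending on the display manager
        sudo sh ./NVIDIA-Linux-x86_64-285.05.09.run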

    Read the article

  • Returning a status code where one of many errors could have occurred

    - by yttriuszzerbus
    I'm developing a PHP login component which includes functions to manipulate the User object, such as $User->changePassword(string $old, string $new). What I need advice on is how to return a status, as the function can either succeed (no further information needs to be given) or fail (and the calling code needs to know why, for example an incorrect password, a database problem, etc.). I've come up with the following ideas: Unix-style: return 0 on success, another code on failure. This doesn't seem particularly common in PHP, and the language's type coercion messes with it (a function returning FALSE on success?), yet it seems to be the best I can think of. Throw an exception on error. PHP's exception support is limited, and this will make life harder for anybody trying to use the functions. Return an array containing a boolean for "success" or not, and a string or an int for "failure status" if applicable. None of these seems particularly appealing; does anyone have any advice or better ideas?
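
    For comparison, a minimal PHP sketch of the exception-based option; the exception class and the checkPassword() helper are illustrative, not part of the original component:

        class AuthException extends Exception {}

        class User {
            private $hash = '...';                          // stored password hash (placeholder)

            public function changePassword($old, $new) {
                if (!$this->checkPassword($old)) {
                    throw new AuthException('Incorrect current password');
                }
                // ... write the new hash to the database ...
            }

            private function checkPassword($password) {    // illustrative stub
                return sha1($password) === $this->hash;
            }
        }

        $User = new User();
        try {
            $User->changePassword('old-secret', 'new-secret');
            echo 'Password changed';
        } catch (AuthException $e) {
            echo 'Failed: ' . $e->getMessage();             // caller decides how to report it
        }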

    Read the article

  • Application Errors vs User Errors in PHP

    - by CogitoErgoSum
    So after much debate back and forth, I've come up with what I think may be a valid plan to handle application/system errors vs. user errors (i.e. validation issues, permission issues, etc.). Application/system errors will be handled using a custom error handler (via set_error_handler()). Depending on the severity of the error, the user may be redirected to a generic error page (i.e. a fatal error), or the error may simply be silently logged (i.e. E_WARNING). These errors are most likely caused by issues outside the user's control (a missing file, bad logic, etc.). The second set of errors would be user-generated ones. These are the ones that may not automatically trigger an error but would be considered one. In these cases I've decided to use the trigger_error() function and typically raise a warning or notice, which would be logged silently by the error handler. After that it would be up to the developer to redirect the user to another page or display some more meaningful message. This way an error of any type is always logged, but user errors still leave the developer free to handle them in their own way, e.g. redirect users back to their form, fully repopulated, with a message about what went wrong. Does anyone see anything wrong with this, or have a more intuitive approach? My view on error handling is that everyone has their own way, but some consistent approach has to be instituted.
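
    A minimal PHP sketch of the split described above; the handler body and the validation check are illustrative only (note that true fatal errors such as E_ERROR never reach a handler registered with set_error_handler()):

        function app_error_handler($errno, $errstr, $errfile, $errline) {
            error_log("[$errno] $errstr in $errfile on line $errline");   // always log
            if ($errno === E_USER_ERROR) {
                header('Location: /error.php');    // generic error page for fatal problems
                exit;
            }
            return true;    // notices/warnings: logged, then left for the caller to handle
        }
        set_error_handler('app_error_handler');

        // a "user" error raised by validation code (the condition is a placeholder)
        $password = isset($_POST['password']) ? $_POST['password'] : '';
        if (strlen($password) < 8) {
            trigger_error('Password must be at least 8 characters', E_USER_NOTICE);
        }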

    Read the article

  • Why did I get this error: java.lang.Exception: XMLEncoder: discarding statement Vector.add()?

    - by Frank
    My Java program look like this : public class Biz_Manager { static Contact_Info_Setting Customer_Contact_Info_Panel; static XMLEncoder XML_Encoder; ...... void Get_Customer_Agent_Shipping_Company_And_Shipping_Agent_Net_Worth_Info() { try { XML_Encoder=new XMLEncoder(new BufferedOutputStream(new FileOutputStream(Customer_Contact_Info_Panel.Contact_Info_File_Path))); XML_Encoder.writeObject(Customer_Contact_Info_Panel.Contacts_Vector); } catch (Exception e) { e.printStackTrace(); } finally { if (XML_Encoder!=null) { XML_Encoder.close(); // <== Error here , line : 9459 XML_Encoder=null; } } } } // ======================================================================= public class Contact_Info_Setting extends JPanel implements ActionListener,KeyListener,ItemListener { public static final long serialVersionUID=26362862L; ...... Vector<Contact_Info_Entry> Contacts_Vector=new Vector<Contact_Info_Entry>(); ...... } // ======================================================================= package Utility; import java.io.*; import java.util.*; import javax.jdo.annotations.IdGeneratorStrategy; import javax.jdo.annotations.IdentityType; import javax.jdo.annotations.PersistenceCapable; import javax.jdo.annotations.Persistent; import javax.jdo.annotations.PrimaryKey; @PersistenceCapable(identityType=IdentityType.APPLICATION) public class Contact_Info_Entry implements Serializable { @PrimaryKey @Persistent(valueStrategy=IdGeneratorStrategy.IDENTITY) public Long Id; public static final long serialVersionUID=26362862L; public String Contact_Id="",First_Name="",Last_Name="",Company_Name="",Branch_Name="",Address_1="",Address_2="",City="",State="",Zip="",Country=""; ...... public boolean B_1; public Vector<String> A_Vector=new Vector<String>(); public Contact_Info_Entry() { } public Contact_Info_Entry(String Other_Id) { this.Other_Id=Other_Id; } ...... public void setId(Long value) { Id=value; } public Long getId() { return Id; } public void setContact_Id(String value) { Contact_Id=value; } public String getContact_Id() { return Contact_Id; } public void setFirst_Name(String value) { First_Name=value; } public String getFirst_Name() { return First_Name; } public void setLast_Name(String value) { Last_Name=value; } public String getLast_Name() { return Last_Name; } public void setCompany_Name(String value) { Company_Name=value; } public String getCompany_Name() { return Company_Name; } ...... } I got this error message : java.lang.Exception: XMLEncoder: discarding statement Vector.add(Contact_Info_Entry); Continuing ... java.lang.Exception: XMLEncoder: discarding statement Vector.add(Contact_Info_Entry); Continuing ... java.lang.Exception: XMLEncoder: discarding statement Vector.add(Contact_Info_Entry); Continuing ... java.lang.Exception: XMLEncoder: discarding statement Vector.add(Contact_Info_Entry); Continuing ... Exception in thread "Thread-8" java.lang.NullPointerException at java.beans.XMLEncoder.outputStatement(XMLEncoder.java:611) at java.beans.XMLEncoder.outputValue(XMLEncoder.java:552) at java.beans.XMLEncoder.outputStatement(XMLEncoder.java:682) at java.beans.XMLEncoder.outputStatement(XMLEncoder.java:687) at java.beans.XMLEncoder.outputValue(XMLEncoder.java:552) at java.beans.XMLEncoder.flush(XMLEncoder.java:398) at java.beans.XMLEncoder.close(XMLEncoder.java:429) at Biz_Manager.Get_Customer_Agent_Shipping_Company_And_Shipping_Agent_Net_Worth_Info(Biz_Manager.java:9459) Seems it can't deal with vector, why ? Anything wrong ? How to fix it ? Frank
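
    XMLEncoder only persists JavaBean-style state (a public class with a public no-argument constructor and matching getter/setter pairs); any element it cannot encode is reported through its ExceptionListener as a "discarding statement Vector.add(...)" message, which is typically what the output above reflects. A minimal sketch of a class it can encode, for comparison (an illustrative bean, not the original Contact_Info_Entry):

        import java.beans.XMLEncoder;
        import java.io.BufferedOutputStream;
        import java.io.FileOutputStream;
        import java.util.Vector;

        public class Contact {
            private String firstName = "";
            public Contact() {}                                    // public no-arg constructor
            public String getFirstName() { return firstName; }     // getter/setter pair
            public void setFirstName(String v) { firstName = v; }

            public static void main(String[] args) throws Exception {
                Vector<Contact> contacts = new Vector<Contact>();
                Contact c = new Contact();
                c.setFirstName("Frank");
                contacts.add(c);
                XMLEncoder enc = new XMLEncoder(
                        new BufferedOutputStream(new FileOutputStream("contacts.xml")));
                try {
                    enc.writeObject(contacts);    // every element must encode as a bean
                } finally {
                    enc.close();
                }
            }
        }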

    Read the article

  • Isn't it better to use a single try catch instead of tons of TryParsing and other error handling sometimes?

    - by Ryan Peschel
    I know people say it's bad to use exceptions for flow control and to only use exceptions for exceptional situations, but sometimes isn't it just cleaner and more elegant to wrap the entire block in a try-catch? For example, let's say I have a dialog window with a TextBox where the user can type input in to be parsed in a key-value sort of manner. This situation is not as contrived as you might think because I've inherited code that has to handle this exact situation (albeit not with farm animals). Consider this wall of code: class Animals { public int catA, catB; public float dogA, dogB; public int mouseA, mouseB, mouseC; public double cow; } class Program { static void Main(string[] args) { string input = "Sets all the farm animals CAT 3 5 DOG 21.3 5.23 MOUSE 1 0 1 COW 12.25"; string[] splitInput = input.Split(' '); string[] animals = { "CAT", "DOG", "MOUSE", "COW", "CHICKEN", "GOOSE", "HEN", "BUNNY" }; Animals animal = new Animals(); for (int i = 0; i < splitInput.Length; i++) { string token = splitInput[i]; if (animals.Contains(token)) { switch (token) { case "CAT": animal.catA = int.Parse(splitInput[i + 1]); animal.catB = int.Parse(splitInput[i + 2]); break; case "DOG": animal.dogA = float.Parse(splitInput[i + 1]); animal.dogB = float.Parse(splitInput[i + 2]); break; case "MOUSE": animal.mouseA = int.Parse(splitInput[i + 1]); animal.mouseB = int.Parse(splitInput[i + 2]); animal.mouseC = int.Parse(splitInput[i + 3]); break; case "COW": animal.cow = double.Parse(splitInput[i + 1]); break; } } } } } In actuality there are a lot more farm animals and more handling than that. A lot of things can go wrong though. The user could enter in the wrong number of parameters. The user can enter the input in an incorrect format. The user could specify numbers too large or too small for the data type to handle. All these different errors could be handled without exceptions through the use of TryParse, checking how many parameters the user tried to use for a specific animal, checking if the parameter is too large or too small for the data type (because TryParse just returns 0), but every one should result in the same thing: A MessageBox appearing telling the user that the inputted data is invalid and to fix it. My boss doesn't want different message boxes for different errors. So instead of doing all that, why not just wrap the block in a try-catch and in the catch statement just display that error message box and let the user try again? Maybe this isn't the best example but think of any other scenario where there would otherwise be tons of error handling that could be substituted for a single try-catch. Is that not the better solution?
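
    For comparison, a compact sketch of the single try-catch approach described above; ParseAnimals and ShowInvalidInput are hypothetical wrappers around the parsing switch and the message box from the question:

        try
        {
            ParseAnimals(splitInput, animal);    // the existing int.Parse/float.Parse switch
        }
        catch (FormatException) { ShowInvalidInput(); }           // malformed number
        catch (OverflowException) { ShowInvalidInput(); }         // number too large/small for the type
        catch (IndexOutOfRangeException) { ShowInvalidInput(); }  // too few parameters after a keyword

        // where ShowInvalidInput() would do something like:
        // MessageBox.Show("The inputted data is invalid. Please correct it and try again.");

    Catching only the exceptions that Parse and array indexing actually throw keeps genuinely unexpected bugs from being swallowed by the same message box.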

    Read the article

  • javax.naming.NameNotFoundException: Name [comp/env] is not bound in this Context. Unable to find [comp] error with java scheduler

    - by Morgan Azhari
    What I'm trying to do is to update my database after a period of time. So I'm using java scheduler and connection pooling. I don't know why but my code only working once. It will print: init success success javax.naming.NameNotFoundException: Name [comp/env] is not bound in this Context. Unable to find [comp]. at org.apache.naming.NamingContext.lookup(NamingContext.java:820) at org.apache.naming.NamingContext.lookup(NamingContext.java:168) at org.apache.naming.SelectorContext.lookup(SelectorContext.java:158) at javax.naming.InitialContext.lookup(InitialContext.java:411) at test.Pool.main(Pool.java:25) ---> line 25 is Context envContext = (Context)initialContext.lookup("java:/comp/env"); I don't know why it only works once. I already test it if I didn't running it without java scheduler and it works fine. No error whatsoerver. Don't know why i get this error if I running it using scheduler. Hope someone can help me. My connection pooling code: public class Pool { public DataSource main() { try { InitialContext initialContext = new InitialContext(); Context envContext = (Context)initialContext.lookup("java:/comp/env"); DataSource datasource = new DataSource(); datasource = (DataSource)envContext.lookup("jdbc/test"); return datasource; } catch (Exception ex) { ex.printStackTrace(); } return null; } } my web.xml: <web-app version="3.0" xmlns="http://java.sun.com/xml/ns/javaee" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://java.sun.com/xml/ns/javaee http://java.sun.com/xml/ns/javaee/web-app_3_0.xsd"> <listener> <listener-class> package.test.Pool</listener-class> </listener> <resource-ref> <description>DB Connection Pooling</description> <res-ref-name>jdbc/test</res-ref-name> <res-type>javax.sql.DataSource</res-type> <res-auth>Container</res-auth> </resource-ref> Context.xml: <?xml version="1.0" encoding="UTF-8"?> <Context path="/project" reloadable="true"> <Resource auth="Container" defaultReadOnly="false" driverClassName="com.mysql.jdbc.Driver" factory="org.apache.tomcat.jdbc.pool.DataSourceFactory" initialSize="0" jdbcInterceptors="org.apache.tomcat.jdbc.pool.interceptor.ConnectionState;org.apache.tomcat.jdbc.pool.interceptor.StatementFinalizer" jmxEnabled="true" logAbandoned="true" maxActive="300" maxIdle="50" maxWait="10000" minEvictableIdleTimeMillis="300000" minIdle="30" name="jdbc/test" password="test" removeAbandoned="true" removeAbandonedTimeout="60" testOnBorrow="true" testOnReturn="false" testWhileIdle="true" timeBetweenEvictionRunsMillis="30000" type="javax.sql.DataSource" url="jdbc:mysql://localhost:3306/database?noAccessToProcedureBodies=true" username="root" validationInterval="30000" validationQuery="SELECT 1"/> </Context> my java scheduler public class Scheduler extends HttpServlet{ public void init() throws ServletException { System.out.println("init success"); try{ Scheduling_test test = new Scheduling_test(); ScheduledExecutorService executor = Executors.newScheduledThreadPool(100); ScheduledFuture future = executor.scheduleWithFixedDelay(test, 1, 60 ,TimeUnit.SECONDS); }catch(Exception e){ e.printStackTrace(); } } } Schedule_test public class Scheduling_test extends Thread implements Runnable{ public void run(){ Updating updating = new Updating(); updating.run(); } } updating public class Updating{ public void run(){ ResultSet rs = null; PreparedStatement p = null; StringBuilder sb = new StringBuilder(); Pool pool = new Pool(); Connection con = null; DataSource datasource = null; try{ datasource = pool.main(); 
con=datasource.getConnection(); sb.append("SELECT * FROM database"); p = con.prepareStatement(sb.toString()); rs = p.executeQuery(); rs.close(); con.close(); p.close(); datasource.close(); System.out.println("success"); }catch (Exception e){ e.printStackTrace(); } }
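
    A common explanation for this pattern (the lookup works during servlet init or a request, then fails inside the executor) is that the java:comp/env JNDI context is bound only to container-managed threads, so a thread created by Executors cannot see it. One hedged workaround is to perform the lookup once on the container thread in init() and hand the DataSource to the task; the constructor shown here is hypothetical:

        // inside Scheduler.init(), which runs on a container-managed thread
        DataSource ds = (DataSource) new InitialContext().lookup("java:/comp/env/jdbc/test");
        ScheduledExecutorService executor = Executors.newScheduledThreadPool(1);
        executor.scheduleWithFixedDelay(new Scheduling_test(ds), 1, 60, TimeUnit.SECONDS);

        // Scheduling_test would then keep the DataSource in a field and use it in run(),
        // instead of calling new Pool().main() from the background thread.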

    Read the article

  • PHP/APC fatal error, apc_mmap: mmap failed

    - by Sudowned
    I'm seeing some intermittent CPU usage spikes to 100%, loosely correlated with these log entries: [27-Feb-2012 13:29:29] PHP Fatal error: PHP Startup: apc_mmap: mmap failed: in Unknown on line 0 [27-Feb-2012 13:29:30] PHP Fatal error: PHP Startup: apc_mmap: mmap failed: in Unknown on line 0 [27-Feb-2012 13:29:31] PHP Fatal error: PHP Startup: apc_mmap: mmap failed: in Unknown on line 0 [27-Feb-2012 13:29:31] PHP Fatal error: PHP Startup: apc_mmap: mmap failed: in Unknown on line 0 phpinfo() indicates that APC is set up, and as far as I can tell this error doesn't cause visible 500 errors on the live site, which is a WordPress install that gets about 600k views monthly. Google has been unhelpful so far, and I was hoping that someone here had some insight into what's causing this and how to fix it. Curiously, this error only shows up in /usr/local/apache2/logs/error_log and not in the error_log for the cPanel-configured site.
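
    apc_mmap failures generally mean APC could not allocate its shared-memory segment when a PHP process started, most often because apc.shm_size asks for more memory than the system will hand out at that moment. A hedged sketch of the ini settings worth reviewing (the values are placeholders, not recommendations for this site; very old APC releases take shm_size as a plain number of megabytes without the M suffix):

        ; apc.ini (location varies by build, e.g. the main php.ini or a conf.d file)
        extension = apc.so
        apc.enabled = 1
        apc.shm_segments = 1
        apc.shm_size = 64M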

    Read the article

  • Error 2013: Lost connection to MySQL server during query when executing CHECK TABLE FOR UPGRADE

    - by Dean Richardson
    I just upgraded Ubuntu from 11.10 to 12.04. My rails app now returns the (passenger) error "Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (111) (Mysql2::Error)". I get a similar error when I try to access mysql at the command line on my Ubuntu server using mysql -u root -p. I have mysql-server 5.5 installed. I've checked and mysql is not running. When I try to restart it, it fails. Here are some key lines from the tail of /var/log/syslog after an attempted restart: dean@dgwjasonfried:/etc/mysql$ tail -f /var/log/syslog Mar 7 08:55:27 dgwjasonfried /etc/mysql/debian-start[5107]: Looking for 'mysqlcheck' as: /usr/bin/mysqlcheck Mar 7 08:55:27 dgwjasonfried /etc/mysql/debian-start[5107]: Running 'mysqlcheck' with connection arguments: '--port=3306' '--socket=/var/run/mysqld/mysqld.sock' '--host=localhost' '--socket=/var/run/mysqld/mysqld.sock' '--host=localhost' '--socket=/var/run/mysqld/mysqld.sock' Mar 7 08:55:27 dgwjasonfried /etc/mysql/debian-start[5107]: Running 'mysqlcheck' with connection arguments: '--port=3306' '--socket=/var/run/mysqld/mysqld.sock' '--host=localhost' '--socket=/var/run/mysqld/mysqld.sock' '--host=localhost' '--socket=/var/run/mysqld/mysqld.sock' Mar 7 08:55:27 dgwjasonfried /etc/mysql/debian-start[5107]: /usr/bin/mysqlcheck: Got error: 2013: Lost connection to MySQL server during query when executing 'CHECK TABLE ... FOR UPGRADE' Mar 7 08:55:27 dgwjasonfried /etc/mysql/debian-start[5107]: FATAL ERROR: Upgrade failed Mar 7 08:55:27 dgwjasonfried /etc/mysql/debian-start[5107]: molex_app_development.assets OK Mar 7 08:55:27 dgwjasonfried /etc/mysql/debian-start[5107]: molex_app_development.ecd_types OK Mar 7 08:55:27 dgwjasonfried /etc/mysql/debian-start[5124]: Checking for insecure root accounts. Mar 7 08:55:27 dgwjasonfried kernel: [ 7551.769657] init: mysql main process (5064) terminated with status 1 Mar 7 08:55:27 dgwjasonfried kernel: [ 7551.769697] init: mysql respawning too fast, stopped Here is most of /etc/mysql/my.cnf: Remember to edit /etc/mysql/debian.cnf when changing the socket location. [client] port = 3306 socket = /var/run/mysqld/mysqld.sock Here is entries for some specific programs The following values assume you have at least 32M ram This was formally known as [safe_mysqld]. Both versions are currently parsed. [mysqld_safe] socket = /var/run/mysqld/mysqld.sock nice = 0 [mysqld] Basic Settings user = mysql pid-file = /var/run/mysqld/mysqld.pid socket = /var/run/mysqld/mysqld.sock port = 3306 basedir = /usr datadir = /var/lib/mysql tmpdir = /tmp lc-messages-dir = /usr/share/mysql skip-external-locking Instead of skip-networking the default is now to listen only on localhost which is more compatible and is not less secure. bind-address = 127.0.0.1 And here are permissions for var/run/mysqld/mysqld.sock: srwxrwxrwx 1 mysql mysql 0 Mar 7 09:18 mysqld.sock I'd be grateful for any suggestions the community might have. I reviewed the related questions here and attempted some of the fixes offered but to no avail. Thanks! Dean Richardson Update: Thanks to quanta's suggestion, I looked at the /var/log/mysql/error.log file. I found error messages relating to pointers, fatal signals, and more stuff that I really couldn't make much sense of. I also found mysql man page references, however. One suggested that I try starting mysqld with the --innodb_force_recovery=# option, then attempt to dump (or drop) the offending/corrupted database or table. 
I worked through the escalating option levels one-by-one (innodb_force_recovery=1, innodb_force_recovery=2, etc.) This allowed me to successfully run mysql -u root -p from the command line and execute several commands. I was able to run queries on my production database, but any attempt to query, dump, or even drop my development database raised an error and led to me losing the connection to mysql. So I've made progress, but until I'm somehow able to drop or repair my development db I'm still unable to get my app to load. Any further advice or suggestions? Thanks! Dean Update: Right after running sudo mysqld --innodb_force_recover=1 from the command line, the error.log contains this: Right after retrying sudo mysqld --innodb_force_recover=1, The error.log file shows this: 130308 4:55:39 [Note] Plugin 'FEDERATED' is disabled. 130308 4:55:39 InnoDB: The InnoDB memory heap is disabled 130308 4:55:39 InnoDB: Mutexes and rw_locks use GCC atomic builtins 130308 4:55:39 InnoDB: Compressed tables use zlib 1.2.3.4 130308 4:55:39 InnoDB: Initializing buffer pool, size = 128.0M 130308 4:55:39 InnoDB: Completed initialization of buffer pool 130308 4:55:39 InnoDB: highest supported file format is Barracuda. InnoDB: The log sequence number in ibdata files does not match InnoDB: the log sequence number in the ib_logfiles! 130308 4:55:39 InnoDB: Database was not shut down normally! InnoDB: Starting crash recovery. InnoDB: Reading tablespace information from the .ibd files... InnoDB: Restoring possible half-written data pages from the doublewrite InnoDB: buffer... 130308 4:55:40 InnoDB: Waiting for the background threads to start 130308 4:55:41 InnoDB: 1.1.8 started; log sequence number 10259220 130308 4:55:41 InnoDB: !!! innodb_force_recovery is set to 1 !!! 130308 4:55:41 [Note] Server hostname (bind-address): '127.0.0.1'; port: 3306 130308 4:55:41 [Note] - '127.0.0.1' resolves to '127.0.0.1'; 130308 4:55:41 [Note] Server socket created on IP: '127.0.0.1'. 130308 4:55:41 [Note] Event Scheduler: Loaded 0 events 130308 4:55:41 [Note] mysqld: ready for connections. Version: '5.5.29-0ubuntu0.12.04.2' socket: '/var/run/mysqld/mysqld.sock' port: 3306 (Ubuntu) Then after mysql -u root -p and mysql> drop database molex_app_development; ERROR 2013 (HY000): Lost connection to MySQL server during query mysql> the error.log contains: dean@dgwjasonfried:/var/log/mysql$ tail -f error.log /lib/x86_64-linux-gnu/libc.so.6(clone+0x6d)[0x7f6a3ff9ecbd] Trying to get some variables. Some pointers may be invalid and cause the dump to abort. Query (7f6a1c004bd8): is an invalid pointer Connection ID (thread ID): 1 Status: NOT_KILLED The manual page at http://dev.mysql.com/doc/mysql/en/crashing.html contains information that should help you find out what is causing the crash. 130308 4:55:39 [Note] Plugin 'FEDERATED' is disabled. 130308 4:55:39 InnoDB: The InnoDB memory heap is disabled 130308 4:55:39 InnoDB: Mutexes and rw_locks use GCC atomic builtins 130308 4:55:39 InnoDB: Compressed tables use zlib 1.2.3.4 130308 4:55:39 InnoDB: Initializing buffer pool, size = 128.0M 130308 4:55:39 InnoDB: Completed initialization of buffer pool 130308 4:55:39 InnoDB: highest supported file format is Barracuda. InnoDB: The log sequence number in ibdata files does not match InnoDB: the log sequence number in the ib_logfiles! 130308 4:55:39 InnoDB: Database was not shut down normally! InnoDB: Starting crash recovery. InnoDB: Reading tablespace information from the .ibd files... 
InnoDB: Restoring possible half-written data pages from the doublewrite InnoDB: buffer... 130308 4:55:40 InnoDB: Waiting for the background threads to start 130308 4:55:41 InnoDB: 1.1.8 started; log sequence number 10259220 130308 4:55:41 InnoDB: !!! innodb_force_recovery is set to 1 !!! 130308 4:55:41 [Note] Server hostname (bind-address): '127.0.0.1'; port: 3306 130308 4:55:41 [Note] - '127.0.0.1' resolves to '127.0.0.1'; 130308 4:55:41 [Note] Server socket created on IP: '127.0.0.1'. 130308 4:55:41 [Note] Event Scheduler: Loaded 0 events 130308 4:55:41 [Note] mysqld: ready for connections. Version: '5.5.29-0ubuntu0.12.04.2' socket: '/var/run/mysqld/mysqld.sock' port: 3306 (Ubuntu) 130308 4:58:23 [ERROR] Incorrect definition of table mysql.proc: expected column 'comment' at position 15 to have type text, found type char(64). 130308 4:58:23 InnoDB: Assertion failure in thread 140168992810752 in file fsp0fsp.c line 3639 InnoDB: We intentionally generate a memory trap. InnoDB: Submit a detailed bug report to http://bugs.mysql.com. InnoDB: If you get repeated assertion failures or crashes, even InnoDB: immediately after the mysqld startup, there may be InnoDB: corruption in the InnoDB tablespace. Please refer to InnoDB: http://dev.mysql.com/doc/refman/5.5/en/forcing-innodb-recovery.html InnoDB: about forcing recovery. 10:58:23 UTC - mysqld got signal 6 ; This could be because you hit a bug. It is also possible that this binary or one of the libraries it was linked against is corrupt, improperly built, or misconfigured. This error can also be caused by malfunctioning hardware. We will try our best to scrape up some info that will hopefully help diagnose the problem, but since we have already crashed, something is definitely wrong and this may fail. key_buffer_size=16777216 read_buffer_size=131072 max_used_connections=1 max_threads=151 thread_count=1 connection_count=1 It is possible that mysqld could use up to key_buffer_size + (read_buffer_size + sort_buffer_size)*max_threads = 346681 K bytes of memory Hope that's ok; if not, decrease some variables in the equation. Thread pointer: 0x7f7ba4f6c2f0 Attempting backtrace. You can use the following information to find out where mysqld died. If you see no messages after this, something went terribly wrong... 
stack_bottom = 7f7ba3065e60 thread_stack 0x30000 mysqld(my_print_stacktrace+0x29)[0x7f7ba3609039] mysqld(handle_fatal_signal+0x483)[0x7f7ba34cf9c3] /lib/x86_64-linux-gnu/libpthread.so.0(+0xfcb0)[0x7f7ba2220cb0] /lib/x86_64-linux-gnu/libc.so.6(gsignal+0x35)[0x7f7ba188c425] /lib/x86_64-linux-gnu/libc.so.6(abort+0x17b)[0x7f7ba188fb8b] mysqld(+0x65e0fc)[0x7f7ba37160fc] mysqld(+0x602be6)[0x7f7ba36babe6] mysqld(+0x635006)[0x7f7ba36ed006] mysqld(+0x5d7072)[0x7f7ba368f072] mysqld(+0x5d7b9c)[0x7f7ba368fb9c] mysqld(+0x6a3348)[0x7f7ba375b348] mysqld(+0x6a3887)[0x7f7ba375b887] mysqld(+0x5c6a86)[0x7f7ba367ea86] mysqld(+0x5ae3a7)[0x7f7ba36663a7] mysqld(_Z15ha_delete_tableP3THDP10handlertonPKcS4_S4_b+0x16d)[0x7f7ba34d3ffd] mysqld(_Z23mysql_rm_table_no_locksP3THDP10TABLE_LISTbbbb+0x568)[0x7f7ba3417f78] mysqld(_Z11mysql_rm_dbP3THDPcbb+0x8aa)[0x7f7ba339780a] mysqld(_Z21mysql_execute_commandP3THD+0x394c)[0x7f7ba33b886c] mysqld(_Z11mysql_parseP3THDPcjP12Parser_state+0x10f)[0x7f7ba33bb28f] mysqld(_Z16dispatch_command19enum_server_commandP3THDPcj+0x1380)[0x7f7ba33bc6e0] mysqld(_Z24do_handle_one_connectionP3THD+0x1bd)[0x7f7ba346119d] mysqld(handle_one_connection+0x50)[0x7f7ba3461200] /lib/x86_64-linux-gnu/libpthread.so.0(+0x7e9a)[0x7f7ba2218e9a] /lib/x86_64-linux-gnu/libc.so.6(clone+0x6d)[0x7f7ba1949cbd] Trying to get some variables. Some pointers may be invalid and cause the dump to abort. Query (7f7b7c004b60): is an invalid pointer Connection ID (thread ID): 1 Status: NOT_KILLED The manual page at http://dev.mysql.com/doc/mysql/en/crashing.html contains information that should help you find out what is causing the crash. --Dean
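
    One commonly suggested sequence for this situation is to keep innodb_force_recovery at the lowest level that lets mysqld stay up, dump whatever is still readable, and rebuild from the dump; a sketch only, with the database name taken from the question:

        # /etc/mysql/my.cnf, [mysqld] section -- raise one level at a time, only as far as needed
        innodb_force_recovery = 1

        # while mysqld is running in recovery mode, save what can still be read
        mysqldump -u root -p molex_app_development > molex_app_development.sql

        # then stop mysqld, move the damaged InnoDB files aside, remove the
        # innodb_force_recovery line, restart, recreate the schema and reload the dump

    The error.log above also complains that mysql.proc has a pre-upgrade definition, so running mysql_upgrade once the server stays up is usually part of the same advice.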

    Read the article

  • Hardware error messages from syslogd

    - by Farhat
    I have a 64-core AMD server running CEntOS on which I was running a long job. In the midst of the output, I see these lines. It appears to be a memory error. How severe is this and what exactly does it indicate? Message from syslogd@heracles at Nov 7 21:00:02 ... kernel:[Hardware Error]: MC4_STATUS[Over|CE|MiscV|-|AddrV|-|-|CECC]: 0xdc10410040080a13 Message from syslogd@heracles at Nov 7 21:00:02 ... kernel:[Hardware Error]: Northbridge Error (node 4): DRAM ECC error detected on the NB. Message from syslogd@heracles at Nov 7 21:00:02 ... kernel:[Hardware Error]: cache level: L3/GEN, mem/io: MEM, mem-tx: RD, part-proc: RES (no timeout)
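
    A hedged sketch of the commands often used to see whether these corrected ECC errors are accumulating on a particular node or DIMM (both tools ship separately, e.g. in the mcelog and edac-utils packages):

        mcelog                                                   # decode pending machine-check events (or check /var/log/mcelog)
        edac-util -v                                             # per-memory-controller corrected/uncorrected error counters
        grep . /sys/devices/system/edac/mc/mc*/csrow*/ce_count   # raw corrected-error counters from sysfs

    A slowly growing corrected-error count on one module usually points at that DIMM; uncorrected errors are the ones that force immediate action.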

    Read the article

  • SQL 2000: Intermittent Error 7399 with OLE DB Provider for Microsoft Jet

    - by Tim Lara
    I am using SQL Server 2000 on Windows Server 2003 SP2 and have set up a linked server to point at an Access 97 database using the OLE DB Provider 4.0 for Microsoft Jet. The problem I am having sounds almost exactly like the one described in this Microsoft KB article, except that the error I am getting is intermittent: http://support.microsoft.com/kb/814398 The SQL Server is running under the Local System account (which I don't have authority to change), and the Access 97 .mdb file that the linked server points to is on a Win XP Pro machine on the same LAN as the SQL Server machine, inside of a shared folder with permissions set to "Everyone" and "Full Control". Now, if the linked server connection never worked, it would make more sense that the problem is merely a permissions issue with the Local System account as the KB article above suggests, but the maddening thing is that sometimes the connection works just fine. When it fails, the error message is always the same: Error 7399: OLE DB provider 'Microsoft.Jet.OLEDB.4.0' reported an error. [OLE/DB provider returned message: Unspecified error] OLE DB error trace [OLE/DB Provider 'Microsoft.Jet.OLEDB.4.0' IDBInitialize::Initialize returned 0x80004005: ]. Also, not only does the linked server setup occasionally work just fine on this one particular SQL Server, what is supposed to be exactly the same setup on 25 other servers works just fine EVERY TIME! Obviously, something in the non-working setup must not be exactly the same, but I'm having trouble figuring out where to look for the differences since the error message SQL Server returns is so vague. I know our sysadmins have had numerous issues with Active Directory replication across our domain, so my best guess is that there is some sort of odd group policy corruption going on, but I thought I'd ask here to see if I might be overlooking something more straightforward. Any ideas on how to further isolate the error would be greatly appreciated! For the record, here is a list of things I've already tried: Rebooting the SQL Server machine. Fixes the issue temporarily, then the error returns within a minute or two of startup. (This is why I suspect a rogue group policy that is slow to apply fouling things up.) Importing all database objects from the Access 97 mdb into a new, clean mdb file. Makes no difference. Moving the Access 97 mdb file to a local directory on the SQL Server machine instead of accessing it via a share on the Win XP Pro LAN machine. This works, but does not solve the problem because the mdb needs to be on the client machine for performance reasons and the ability to work "stand alone". Plus, the same shared folder access works fine on all other servers / clients on my network. Compared all the SQL Server, Windows Server, etc versions to a known working setup and everything appears to be the same.

    Read the article

  • Nginx + php-fpm "504 Gateway Time-out" error with almost zero load (on a test-server)

    - by rahul286
    After debugging for 6-hours - I am giving this up :| We have a nginx+php-fpm+mysql in LAN with almost 100 wordpress (created and used by different designers/developers all working on test wordpres setup) We are using nginx without any issues from long. Today, all of a sudden - nginx started returning "504 Gateway Time-out" out of the blue... I checked nginx error log for a virtual host... 2010/09/06 21:24:24 [error] 12909#0: *349 upstream timed out (110: Connection timed out) while reading response header from upstream, client: 192.168.0.1, server: rahul286.rtcamp.info, request: "GET /favicon.ico HTTP/1.1", upstream: "fastcgi://127.0.0.1:9000", host: "rahul286.rtcamp.info" 2010/09/06 21:25:11 [error] 12909#0: *349 recv() failed (104: Connection reset by peer) while reading response header from upstream, client: 192.168.0.1, server: rahul286.rtcamp.info, request: "GET /favicon.ico HTTP/1.1", upstream: "fastcgi://127.0.0.1:9000", host: "rahul286.rtcamp.info" 2010/09/06 21:25:11 [error] 12909#0: *443 recv() failed (104: Connection reset by peer) while reading response header from upstream, client: 192.168.0.1, server: rahul286.rtcamp.info, request: "GET /info.php HTTP/1.1", upstream: "fastcgi://127.0.0.1:9000", host: "rahul286.rtcamp.info" 2010/09/06 21:25:12 [error] 12909#0: *443 connect() failed (111: Connection refused) while connecting to upstream, client: 192.168.0.1, server: rahul286.rtcamp.info, request: "GET /favicon.ico HTTP/1.1", upstream: "fastcgi://127.0.0.1:9000", host: "rahul286.rtcamp.info" 2010/09/06 22:08:32 [error] 12909#0: *1025 upstream timed out (110: Connection timed out) while reading response header from upstream, client: 192.168.0.1, server: rahul286.rtcamp.info, request: "GET / HTTP/1.1", upstream: "fastcgi://127.0.0.1:9000", host: "rahul286.rtcamp.info" 2010/09/06 22:09:33 [error] 12909#0: *1025 upstream timed out (110: Connection timed out) while reading response header from upstream, client: 192.168.0.1, server: rahul286.rtcamp.info, request: "GET /favicon.ico HTTP/1.1", upstream: "fastcgi://127.0.0.1:9000", host: "rahul286.rtcamp.info" 2010/09/06 22:09:40 [error] 12909#0: *1064 recv() failed (104: Connection reset by peer) while reading response header from upstream, client: 192.168.0.1, server: rahul286.rtcamp.info, request: "GET /info.php HTTP/1.1", upstream: "fastcgi://127.0.0.1:9000", host: "rahul286.rtcamp.info" 2010/09/06 22:09:40 [error] 12909#0: *1064 connect() failed (111: Connection refused) while connecting to upstream, client: 192.168.0.1, server: rahul286.rtcamp.info, request: "GET /favicon.ico HTTP/1.1", upstream: "fastcgi://127.0.0.1:9000", host: "rahul286.rtcamp.info" 2010/09/06 22:24:44 [error] 12909#0: *1313 upstream timed out (110: Connection timed out) while reading response header from upstream, client: 192.168.0.1, server: rahul286.rtcamp.info, request: "GET / HTTP/1.1", upstream: "fastcgi://127.0.0.1:9000", host: "rahul286.rtcamp.info" 2010/09/06 22:24:53 [error] 12909#0: *1313 recv() failed (104: Connection reset by peer) while reading response header from upstream, client: 192.168.0.1, server: rahul286.rtcamp.info, request: "GET /favicon.ico HTTP/1.1", upstream: "fastcgi://127.0.0.1:9000", host: "rahul286.rtcamp.info" As I run php-fpm on port 9000 via TCP mode, I ran "netstat | grep 9000" and noticed something unusual... 
(Pasting partial output here for ease of read) tcp 9 0 localhost:9000 localhost:36094 CLOSE_WAIT 14269/php5-fpm tcp 0 0 localhost:46664 localhost:9000 FIN_WAIT2 - tcp 1257 0 localhost:9000 localhost:36135 CLOSE_WAIT - tcp 1257 0 localhost:9000 localhost:36125 CLOSE_WAIT - tcp 9 0 localhost:9000 localhost:36102 CLOSE_WAIT 14268/php5-fpm tcp 0 0 localhost:46662 localhost:9000 FIN_WAIT2 - tcp 745 0 localhost:9000 localhost:46644 CLOSE_WAIT - tcp 0 0 localhost:46658 localhost:9000 FIN_WAIT2 - tcp 1265 0 localhost:9000 localhost:46607 CLOSE_WAIT - tcp 0 0 localhost:46672 localhost:9000 ESTABLISHED 12909/nginx: worker tcp 1257 0 localhost:9000 localhost:36119 CLOSE_WAIT - tcp 1265 0 localhost:9000 localhost:46613 CLOSE_WAIT - tcp 0 0 localhost:46646 localhost:9000 FIN_WAIT2 - tcp 1257 0 localhost:9000 localhost:36137 CLOSE_WAIT - tcp 0 0 localhost:46670 localhost:9000 ESTABLISHED 12909/nginx: worker tcp 1265 0 localhost:9000 localhost:46619 CLOSE_WAIT - tcp 1336 0 localhost:9000 localhost:46668 ESTABLISHED - tcp 0 0 localhost:46648 localhost:9000 FIN_WAIT2 - tcp 1336 0 localhost:9000 localhost:46670 ESTABLISHED - tcp 9 0 localhost:9000 localhost:36108 CLOSE_WAIT 14274/php5-fpm tcp 1336 0 localhost:9000 localhost:46684 ESTABLISHED - tcp 0 0 localhost:46674 localhost:9000 ESTABLISHED 12909/nginx: worker tcp 1336 0 localhost:9000 localhost:46666 ESTABLISHED - tcp 1257 0 localhost:9000 localhost:46648 CLOSE_WAIT - tcp 1336 0 localhost:9000 localhost:46678 ESTABLISHED - tcp 0 0 localhost:46668 localhost:9000 ESTABLISHED 12909/nginx: wo There are plenty of "CLOSE_WAIT" & "FIN_WAIT2" pairs as highlighted below (in above output): tcp 1337 0 localhost:9000 localhost:46680 CLOSE_WAIT - tcp 0 0 localhost:46680 localhost:9000 FIN_WAIT2 - Please note port 46680 in above. I enabled mysql slow queries error log, but it didn't work. As of now restarting php5-fpm every minute via a cronjob (see command below) keeping everything running "smoothly" but I hate patchwork and want to solve this... 1 * * * * service php5-fpm restart > /dev/null I searched extensively on Google - got no help. As mentioned, this a test-server in LAN, CPU load is never crossed 0.10 and memory usage is also below 25% (System has 2GB RAM and ubuntu-server installed) So if you find its time-confusing to help me out, please atleast drop a hint. Thanks in advance for help. -Rahul (note - this is reposting of - http://forum.nginx.org/read.php?11,127694) Update: I found answer, which is posted below.
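
    The knobs usually checked for this symptom (upstream timeouts plus a pile of CLOSE_WAIT sockets on :9000) are the php-fpm pool limits and the nginx FastCGI timeouts; a sketch of the directives involved, with illustrative values:

        ; php-fpm pool configuration
        pm = dynamic
        pm.max_children = 20               ; too low a ceiling makes requests queue until they time out
        request_terminate_timeout = 60s    ; reap workers stuck on a single slow request

        # nginx server/location block
        fastcgi_connect_timeout 10;
        fastcgi_read_timeout 120;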

    Read the article

  • HT Link Sync Error after Ubuntu 10.04 LTS Installation

    - by marklab
    Update 1 I just assembled an exact replica of this server, and successfully installed Ubuntu 10.04 LTS in a RAID10 configuration. The success was confirmed by a login to the initial account. There must be a hardware component that is faulty. Since the error mentions HT, which I believe to be Hyper Threading, I will start with the CPUs. Please indicate if this error is more strongly associated with any other piece of hardware. Or make a recommendation of another approach that would be good for this issue. Issue I was attempting to install Ubuntu 10.04 LTS on this system with the board RAID10 configured. However, the installation failed at the partitioning stage by rebooting the system. Upon reboot, there is an error report after POST listing the following: Node0: NB WatchDog Timer Error Node1: HT Link Sync Error Node2: HT Link Sync Error ... Node7: HT Link Sync Error Press F1 to continue/resume. After pressing F1 the system will boot from the Ubuntu 10.04 LTS installation disc. However, it will fail at the same stage, and go through the same process from there. Hardware CPU: AMD OPTERON X12 6172 G34 2.1G 18MB Motherboard: Supermicro H8QG6-F HDD: WD Caviar Green 2TB 5.4K RPM Troubleshooting I disabled RAID10 on the system, and installed the Ubuntu on a single drive. It installed successfully. I then went back to a RAID10 setup and attempted to install on the system again, and was able to make it through the partitioning stage. However, upon reboot, the system reported: Error: file not found, and then booted me into the Grub Rescue console. I feel I have aggravated the problem at this point because when I attempted to install from the boot disc again, the system reboots upon hitting enter to even start the installation process. It does the same thing when trying to boot from an Ubuntu 11 disc. I have not been able to find any information on this HT Link Sync Error, which I feel may have started the problems I am experiencing now with the installation of the OS. I am also aware that Ubuntu is said not to be supported by the motherboard according to Supermicro's site. However, since I was able to install it successfully on a single drive, I do not believe it is incompatible. I would like to know a reason for why it's failing to install on/off.

    Read the article

  • SMTP authentication error using PHPMailer

    - by Javier
    I am using PHPMailer to send a basic form to an email address but I get the following error: SMTP Error: Could not authenticate. Message could not be sent. Mailer Error: SMTP Error: Could not authenticate. SMTP server error: VXNlcm5hbWU6 The weird thing is that if I try to send it again, IT WORKS! Every time I submit the form after that first error it works. But if I leave it for a few minutes and then try again I get the same error again. The username and password have to be right as sometimes it works fine. I even created the following (very basic) script just to test it and I got the same result <?php require("phpmailer/class.phpmailer.php"); $mail = new PHPMailer(); $mail->IsSMTP(); $mail->Host = "smtp.host.com"; $mail->SMTPAuth = true; $mail->Username = "[email protected]"; $mail->Password = "password"; $mail->From = "[email protected]"; $mail->FromName = "From Name"; $mail->AddAddress("[email protected]"); $mail->AddReplyTo("[email protected]"); $mail->IsHTML(true); $mail->Subject = "Here is the subject"; $mail->Body = "This is the HTML message body <b>in bold!</b>"; $mail->AltBody = "This is the body in plain text for non-HTML mail clients"; if(!$mail->Send()) { echo "Message could not be sent. <p>"; echo "Mailer Error: " . $mail->ErrorInfo; exit; } echo "Message has been sent"; ?> I don't think this is relevant, but I just changed my hosting to a Linux shared server. Any idea why this is happening? Thanks! ***UPDATED 02/06/2012 I've been doing some tests. The results: I tested the script in an IIS server and it worked fine. The error seems to happen only in the Linux server. Also, if I use the gmail mail server it works fine in both, IIS and Linux. Could it be a problem with the configuration of my Linux server??
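
    When PHPMailer authenticates only some of the time, the usual first step is to capture the full SMTP conversation and pin the encryption and port explicitly instead of relying on server defaults; a sketch, with placeholder values that would need to match whatever the mail host actually requires:

        $mail->SMTPDebug  = 2;        // echo the complete client/server SMTP dialogue
        $mail->Host       = "smtp.host.com";
        $mail->Port       = 587;      // or 465 when using "ssl"
        $mail->SMTPSecure = "tls";
        $mail->SMTPAuth   = true;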

    Read the article

  • Need help troubleshooting HTTPS webserver error - SSL handshake failed

    - by DerNalia
    I followed this guide: http://hints.macworld.com/article.php?story=20041129143420344 Here is my virtual host definition: <VirtualHost *:443> SSLEngine on SSLProxyEngine On RequestHeader set Front-End-Https "On" CacheDisable * SSLCipherSuite ALL:!ADH:!EXPORT56:RC4+RSA:+HIGH:+MEDIUM:+LOW:+SSLv2:+EXP:+eNULL DocumentRoot "/Users/me/projects/myproject/public" ServerName ssl.mydomain.com ServerAlias *.ssl.mydomain.com SSLCertificateKeyFile "/private/etc/apache2/certs/webserver.nopass.key" SSLCertificateFile "/private/etc/apache2/certs/newcert.pem" SSLCACertificateFile "/private/etc/apache2/certs/demoCA/cacert.pem" SSLCARevocationPath "/private/etc/apache2/certs/demoCA/crl" ErrorLog "/Users/me/Desktop/ssl.log" ProxyPass / https://localhost:3002/ ProxyPassReverse / https://localhost:3002 ProxyPreserveHost on </VirtualHost> When I try connecting to the server via the web browser, I get this error: [Thu Feb 02 16:50:40 2012] [error] (502)Unknown error: 502: proxy: pass request body failed to 127.0.0.1:3002 (localhost) [Thu Feb 02 16:50:40 2012] [error] [client 96.11.81.39] proxy: Error during SSL Handshake with remote server returned by /session/new [Thu Feb 02 16:50:40 2012] [error] proxy: pass request body failed to 127.0.0.1:3002 (localhost) from 96.11.81.39 () How do I debug or fix this?
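
    If the backend listening on port 3002 presents a self-signed certificate (or one whose name does not match localhost), the handshake part of this error is often silenced by relaxing mod_ssl's peer checks for proxied connections; whether that is appropriate here depends on the backend and on the Apache version in use, so this is only a sketch:

        SSLProxyEngine on
        SSLProxyVerify none
        SSLProxyCheckPeerCN off
        SSLProxyCheckPeerExpire off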

    Read the article

  • Lync Edge and Exchange Server: how to access my Exchange mailbox and OWA from an external network

    - by Garcia Julien
    I have a problem with the configuration of Exchange 2010. My topology is like this: Server1 = Domain Controller, Server2 = Exchange Server, Server3 = Lync Server, Server4 = Lync Edge. Our public address (the one accessible from the outside world) is directed to Server4. I would like to be able to access my Exchange mailbox from an external network, and also OWA. Could you help me with the configuration of those servers? Thanks in advance, Julien

    Read the article

  • Redirect email based on Source IP / Hostname

    - by PostMan
    So, we have a situation where we would like to be able to redirect (ignoring the original destination) emails coming through our Exchange server (locally sent) to a different email address. The situation is as follows: we are testing a new server that hosts a lot of our custom applications and scheduled tasks (as does the production version of this server). Instead of changing config files or parameters for these applications we'd rather leave them as they are, since we want to stay as close to production as possible. However, we don't want the emails to go to the staff they would normally go to; we'd rather they were sent to a test email address where we can confirm the results. The emails look exactly like the production emails (same from/to/cc/bcc) except for their source, hence why we'd like to know whether the ability to redirect based on hostname exists. Cheers.

    Read the article

  • Pgpool-regclass gives an error when installing

    - by user119720
    I have a problem when installing pgpool-regclass. When I run 'make', it shows me this kind of error: p,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector --param=ssp-buffer-size=4 -m64 -mtune=generic -I/usr/include/et -Wall -Wmissing-prototypes -Wpointer-arith -Wdeclaration-after-statement -Wendif-labels -Wmissing-format-attribute -Wformat-security -fno-strict-aliasing -fwrapv -fpic -I. -I. -I/usr/pgsql-9.2/include/server -I/usr/pgsql-9.2/include/internal -I/usr/include/et -D_GNU_SOURCE -I/usr/include/libxml2 -I/usr/include -c -o pgpool-regclass.o pgpool-regclass.c pgpool-regclass.c:99:37: error: macro "RangeVarGetRelid" requires 3 arguments, but only 2 given pgpool-regclass.c: In function 'pgpool_regclass': pgpool-regclass.c:99: error: 'RangeVarGetRelid' undeclared (first use in this function) pgpool-regclass.c:99: error: (Each undeclared identifier is reported only once pgpool-regclass.c:99: error: for each function it appears in.) make: *** [pgpool-regclass.o] Error 1 Can anyone help me sort this out? I'd really appreciate it. Thanks.
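
    The compile error itself points at an API change: in PostgreSQL 9.2 the RangeVarGetRelid() interface takes three arguments, so a pgpool-regclass release written against 9.1 or earlier will not build unchanged against the 9.2 headers. A sketch of the kind of call-site difference involved (illustrative only; in practice the fix is usually to grab a pgpool-regclass version that supports 9.2):

        /* 9.1 and earlier */
        relid = RangeVarGetRelid(relvar, true);

        /* 9.2: a lock-mode argument was added */
        relid = RangeVarGetRelid(relvar, NoLock, true);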

    Read the article

  • Server error 500 when adding .htaccess file in the root folder

    - by welou
    I have read a lot of server error 500 questions related to the famous .htaccess file but I have still not found an answer to this error. I have a folder which I use to test the .htaccess file: http://localhost/xampp/example/ I copied the .htaccess into the example folder and I get server error 500. My .htaccess file contains this: <IfModule mod_expires.c> ExpiresActive on ExpiresByType image/png "access plus 1 days" </IfModule> I have changed my httpd.conf file: every AllowOverride None to AllowOverride All. I have also uncommented this line: LoadModule expires_module modules/mod_expires.so error.log says: ExpiresActive not allowed here I still get the error 500. What is actually happening? How can I resolve this error? PS: I am running on localhost with XAMPP
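
    "ExpiresActive not allowed here" means Apache is still applying an AllowOverride set that excludes the directives mod_expires uses (they fall under the Indexes override) for the directory actually serving the request. A sketch of the httpd.conf block to double-check, using the usual XAMPP document root as a placeholder path:

        <Directory "C:/xampp/htdocs">
            Options Indexes FollowSymLinks
            AllowOverride All
            Order allow,deny
            Allow from all
        </Directory>

    After editing httpd.conf, Apache has to be restarted for the new AllowOverride value to take effect.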

    Read the article

  • Unable to locate Windows Server Error log files

    - by Sam007
    I am getting an error in my application on 500 Internal Server Error. The firebug gives me, NetworkError: 500 Internal Server Error - http://webgis.arizona.edu/ArcGIS/rest/services/webGIS/Shock_Models/GPServer/Income_Log/jobs/jc09c501156564f71abc5d98393581267/results/final_shp?dpi=96&transparent=true&format=png8&imageSR=102100&f=image&bbox=%7B%22xmin%22%3A-14519891.438356264%2C%22ymin%22%3A637618.0139790997%2C%22xmax%22%3A-6692739.741956295%2C%22ymax%22%3A6507981.786279075%2C%22spatialReference%22%3A%7B%22wkid%22%3A102100%7D%7D&bboxSR=102100&size=800%2C600 And when I goto that particular link, I get this error, Server Error - Object reference not set to an instance of an object. Any idea how I can correct it? UPDATE I was told to use fiddler and see the error details that is occurring over the network and this is the output that I got, SESSION STATE: Done. Response Entity Size: 849 bytes. == FLAGS ================== BitFlags: [ClientPipeReused, ServerPipeReused] 0x18 X-CLIENTPORT: 2010 X-RESPONSEBODYTRANSFERLENGTH: 849 X-EGRESSPORT: 2023 X-HOSTIP: 128.196.53.161 X-PROCESSINFO: firefox:2248 X-CLIENTIP: 127.0.0.1 X-SERVERSOCKET: REUSE ServerPipe#2 == TIMING INFO ============ ClientConnected: 15:53:51.383 ClientBeginRequest: 15:53:51.494 GotRequestHeaders: 15:53:51.494 ClientDoneRequest: 15:53:51.494 Determine Gateway: 0ms DNS Lookup: 0ms TCP/IP Connect: 0ms HTTPS Handshake: 0ms ServerConnected: 15:52:45.077 FiddlerBeginRequest: 15:53:51.495 ServerGotRequest: 15:53:51.495 ServerBeginResponse: 15:53:51.679 GotResponseHeaders: 15:53:51.679 ServerDoneResponse: 15:53:51.679 ClientBeginResponse: 15:53:51.679 ClientDoneResponse: 15:53:51.679 Overall Elapsed: 00:00:00.1850106 The response was buffered before delivery to the client. == WININET CACHE INFO ============ This URL is not present in the WinINET cache. [Code: 2] * Note: Data above shows WinINET's current cache state, not the state at the time of the request. * Note: Data above shows WinINET's Medium Integrity (non-Protected Mode) cache only. But I am still confused as to what the error is? This is the application. I am not sure if the error is due to the ArcGIS-Server or the Windows Server 2008. I am new on working with the Windows Server and wanted to know where can I look for the error lof files? This is the link which gives the details and the log info of the job executed. This is the output.

    Read the article

  • show php error message on IIS 7

    - by Max
    I am using IIS as a web server on my development machine for PHP web development. Or at least, I am trying to. When there is a syntax error in a PHP script and I open that file in my web browser, I just get a 503 "internal server error" and the default IIS error page for this error. Some browsers don't open that file at all, possibly because of the 503 HTTP response header. In that case I would like IIS to act just like the Apache web server: display the page anyway, so that the error message gets printed out. How can this be done? EDIT: PHP settings: display_errors is on and error_reporting is set to E_ALL
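
    Two settings usually control this on IIS with PHP under FastCGI: PHP has to be allowed to print the error itself, and IIS has to be told not to replace non-200 responses with its own error page. A hedged sketch of both pieces (exact file locations depend on the IIS/PHP setup):

        ; php.ini
        display_errors = On
        log_errors = On
        fastcgi.logging = 0      ; keep FastCGI from turning PHP's stderr output into 500-class errors

        <!-- web.config for the site -->
        <configuration>
          <system.webServer>
            <httpErrors existingResponse="PassThrough" />
          </system.webServer>
        </configuration>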

    Read the article

  • Passenger 'premature end of script headers' error

    - by fatnic
    Hi. I really need help debugging an error I'm getting with Passenger on Apache. I've just made a fresh install of Ubuntu 10.04 and have Apache, Ruby and Passenger installed. I'm trying to run a simple Rack app but keep getting this error in my Apache error.log: [Tue Sep 28 05:54:41 2010] [error] [client 86.171.2.82] Premature end of script headers: The error then continues with The backend application (process 25574) did not send a valid HTTP response; instead, it sent nothing at all. It is possible that it has crashed; please check whether there are crashing bugs in this application. *** Exception NoMethodError in PhusionPassenger::Rack::ApplicationSpawner (undefined method `call' for nil:NilClass) (process 25574): I've also tried older versions of Passenger but get the same error. Ubuntu 10.04 Apache 2.2.14 Ruby 1.9.2-p0 Passenger 2.2.15
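
    The NoMethodError (undefined method `call' for nil) usually means Passenger evaluated config.ru but nothing was passed to run, so the Rack application object ends up nil. A minimal config.ru for comparison (illustrative, not the original app):

        # config.ru
        app = proc do |env|
          [200, { "Content-Type" => "text/plain" }, ["Hello from Rack"]]
        end
        run app    # without this line Passenger receives nil and raises exactly this error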

    Read the article

  • kernel warning disk error for command write - solaris svm

    - by help_me
    Recently this warning came up in my message logs: scsi: [ID 107833 kern.warning] WARNING: /pci@1c,600000/scsi@2/sd@0,0 (sd0): Oct 27 00:14:44 Error for Command: write(10) Error Level:Retryable Oct 27 00:14:44 scsi: [ID 107833 kern.notice] Requested Block: 101515828 Error Block: 101515828 Oct 27 00:14:44 scsi: [ID 107833 kern.notice] Vendor: SEAGATE Serial Number: 0441B9B5H Oct 27 00:14:44 scsi: [ID 107833 kern.notice] Sense Key: Hardware Error Oct 27 00:14:44 scsi: [ID 107833 kern.notice] ASC: 0x19 (defect list error), ASCQ: 0x0, FRU: 0x2 In my opinion this is showing signs of a failing disk. I have not seen the messages recurring. This is on a Solaris 9 SPARC V240 system. The disks are managed by SVM, and "metadb" is showing the flags as "a". Are there any tests or indications to check whether the disk is actually failing, or was that error message triggered by something else? Thank you!
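
    A hedged sketch of the checks usually run on Solaris to see whether errors like this are accumulating (the commands exist on Solaris 9; interpreting the counters is the real work):

        iostat -En       # per-device soft/hard/transport error counters plus vendor data
        iostat -E sd0    # just the disk named in the warning
        metastat         # confirm the SVM metadevices still report Okay
        metadb -i        # replica status; uppercase flags (W, M, D, F) indicate replica problems

    A steadily climbing hard-error count for sd0 in the iostat -En output would back up the failing-disk suspicion more strongly than a single retryable write.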

    Read the article

< Previous Page | 73 74 75 76 77 78 79 80 81 82 83 84  | Next Page >