Search Results

Search found 17749 results on 710 pages for 'connection pool'.


  • weird postgresql log entries

    - by hyperboreean
    I am trying to figure out why I get some weird entries in my PostgreSQL log after I do a restart:
      2010-05-14 11:30:25 EEST LOG: database system was shut down at 2010-05-14 11:30:22 EEST
      2010-05-14 11:30:25 EEST LOG: autovacuum launcher started
      2010-05-14 11:30:25 EEST LOG: database system is ready to accept connections
      2010-05-14 11:30:25 EEST LOG: incomplete startup packet
      2010-05-14 11:30:40 EEST WARNING: there is already a transaction in progress
      2010-05-14 11:30:40 EEST LOG: could not receive data from client: Connection reset by peer
      2010-05-14 11:30:40 EEST LOG: unexpected EOF on client connection
    First, there is the "LOG: incomplete startup packet" entry, which bugs me. Does anyone have any idea why this happens? And this one is also very strange: "WARNING: there is already a transaction in progress" ...
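
    The "incomplete startup packet" line is commonly logged when something opens a TCP connection to the PostgreSQL port and closes it without ever sending the protocol's startup message (monitoring probes, load-balancer health checks and port scanners all do this). A minimal way to reproduce it, assuming the server listens on localhost:5432, is a bare connect-and-close:
      # Open a TCP connection to PostgreSQL and close it without sending a
      # startup packet; the server should log "incomplete startup packet".
      nc -z localhost 5432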

    Read the article

  • Postfix flow/hook reference, or high-level overview?

    - by threecheeseopera
    The Postfix MTA consists of several components/services that work together to perform the different stages of delivery and receipt of mail; these include the smtp daemon, the pickup and cleanup processes, the queue manager, the smtp service, pipe/spawn/virtual/rewrite ... and others (including the possibility of custom components). Postfix also provides several types of hooks that allow it to integrate with external software, such as policy servers, filters, bounce handlers, loggers, and authentication mechanisms; these hooks can be connected to different components/stages of the delivery process, and can communicate via (at least) IPC, network, database, several types of flat files, or a predefined protocol (e.g. milter). An old and very limited example of this is shown at this page. My question: Does anyone have access to a resource that describes these hooks, the components/delivery stages that the hook can interact with, and the supported communication methods? Or, more likely, documentation of the various Postfix components and the hooks/methods that they support? For example: Given the requirement "if the recipient primary MX server matches 'shadysmtpd', check the recipient address against a list; if there is a match, terminate the SMTP connection without notice". My software would need to 1) integrate into the proper part of the SMTP process, 2) use some method to perform the address check (TCP map server? regular expressions? mysql?), and 3) implement the required action (connection termination). Additionally, there will probably be several methods to accomplish this, and another requirement would be to find that which best fits (ex: a network server might be faster than a flat-file lookup; or, if a large volume of mail might be affected by this check, it should be performed as early in the mail process as possible). Real-world example: The apolicy policy server (performs checks on addresses according to user-defined rules) is designed as a standalone TCP server that hooks into Postfix inside the smtpd component via the directive 'check_policy_service inet:127.0.0.1:10001' in the 'smtpd_client_restrictions' configuration option. This means that, when Postfix first receives an item of mail to be delivered, it will create a TCP connection to the policy server address:port for the purpose of determining if the client is allowed to send mail from this server (in addition to whatever other restrictions / restriction lookup methods are defined in that option); the proper action will be taken based on the server's response. Notes: 1)The Postfix architecture page describes some of this information in ascii art; what I am hoping for is distilled, condensed, reference material. 2) Please correct me if I am wrong on any level; there is a mountain of material, and I am just one man ;) Thanks!
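
    For reference, the apolicy example above corresponds to only a couple of lines of Postfix configuration; a minimal sketch (only the check_policy_service entry is taken from the question, the surrounding restriction list is illustrative):
      # /etc/postfix/main.cf -- illustrative sketch, not a complete configuration
      smtpd_client_restrictions =
          permit_mynetworks,
          check_policy_service inet:127.0.0.1:10001
    After editing main.cf, "postfix reload" makes smtpd pick up the change.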

    Read the article

  • UFW: force traffic thru OpenVPN tunnel / do not leak any traffic

    - by hotzen
    I have VPN access using OpenVPN and am trying to build a safe machine that does not leak traffic over non-VPN interfaces. Using the firewall UFW, I try to achieve the following: allow access from the LAN to the machine's web interface; otherwise only allow traffic on tun0 (the OpenVPN tunnel interface, once established); reject (or forward?) any traffic over other interfaces. Currently I am using the following rules (sudo ufw status):
      To                        Action     From
      --                        ------     ----
      192.168.42.11 9999/tcp    ALLOW      Anywhere            # allow web-interface
      Anywhere on tun0          ALLOW      Anywhere            # out only thru tun0
      Anywhere                  ALLOW OUT  Anywhere on tun0    # in only thru tun0
    My problem is that the machine is initially unable to establish the OpenVPN connection, since only tun0 is allowed and tun0 does not exist until the tunnel is up (a chicken-and-egg problem). How do I allow creating the OpenVPN connection, and from that point onward force every single packet to go through the VPN tunnel?
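
    One common way out of the chicken-and-egg problem is a single explicit outbound rule for the OpenVPN endpoint on the physical uplink, so that only the tunnel-setup traffic bypasses the tun0-only policy. A minimal sketch, assuming the VPN server is 203.0.113.10 on UDP 1194 and the uplink interface is eth0 (all three values are assumptions; name resolution of the VPN server must also be possible, e.g. by using its IP in the OpenVPN config):
      # Allow only the OpenVPN handshake out over the physical uplink
      sudo ufw allow out on eth0 to 203.0.113.10 port 1194 proto udp
      # Everything else may only leave via the tunnel
      sudo ufw default deny outgoing
      sudo ufw allow out on tun0 to any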

    Read the article

  • Can't get NAT to work

    - by user31738
    Hello, I bought a D-Link DIR-300 wireless router and I can't get NAT to work; I have SSH and HTTP services that I need to make reachable from the internet. My connection is as follows: I have an ADSL connection through an ADSL Ethernet modem that is connected and working, but it does not let me put it in bridge mode. My router is connected to the ADSL modem over Ethernet and gets its IP through DHCP (and it is always the same). Behind the router I have a desktop computer running Linux with Apache and OpenSSH configured and working, on a fixed IP. On the modem I configured NAT to forward port 22 from the router's IP out to the internet, and on the router I set up NAT to forward port 22 from the desktop's fixed IP. This setup already worked with a Fonera I had before. Can anyone help me with this, or tell me what kind of tests I need to do? How can I test whether the router is forwarding ports correctly, before the modem?
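
    One way to test the router's forwarding independently of the modem is to connect a second machine to the modem (the same segment the router's WAN port sits on) and probe the router's WAN address directly; the address below is an assumption, to be replaced with the router's actual WAN IP:
      # From a machine on the modem side of the router, probe the forwarded port
      nc -vz 192.168.1.2 22
      # Or check several forwarded ports at once
      nmap -p 22,80 192.168.1.2
    If these connect, the router's forwarding works and the remaining problem is on the modem.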

    Read the article

  • How to achieve reliable Gigabit Ethernet Link with my Acer Aspire Revo R3610?

    - by The Operator
    I want to stream HD movies over my wired Gigabit LAN from my PC to my Acer Aspire Revo R3610. It's connected with a 3ft Cat5e patch cable to my Netgear GS605v2 Switch. The PC acting as File Server is connected at 1Gbps to the Switch. Network driver options are set to defaults, including automatic speed/duplex negotiation on both machines. The Revo will not connect to my Network Switch at 1Gbps - the OS reports that it reverts to 100Mbps either shortly after connection or immediately upon connection. Through a process of elimination (trying different drivers, patch cables, ports on the switch, and other 1Gbps-capable devices connected to the Network switch which successfully achieve 1Gbps links and performance) I have drawn the conclusion there is either a Hardware or Software (Driver) issue with the Revo itself. I have performed tests using Windows 7 and Ubuntu 9.10. Can anyone offer insight on Gigabit Ethernet with the Revo?
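
    On the Ubuntu side, ethtool shows what the NIC actually negotiated and can restrict the advertised modes for testing; a small sketch, assuming the Revo's wired interface is eth0 (an assumption), and note the advertise change does not survive a reboot:
      # Show negotiated speed, duplex and the modes the NIC advertises
      sudo ethtool eth0
      # Advertise only 1000BASE-T full duplex and re-negotiate, to see whether
      # the gigabit link holds or drops back to 100 Mb/s
      sudo ethtool -s eth0 advertise 0x020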

    Read the article

  • Apt-get saying "Unable to correct problems, you have held broken packages."

    - by YatharthROCK
    TL;DR: sudo apt-get install ... is saying "Unable to correct problems, you have held broken packages." The problem: I was trying to get the WebApps feature for PP and QQ following this blog post. I ran the sudo add-apt-repository ppa:webapps/preview command to add the repository, but I got a connection error. Since I know my current ISP gives a shaky connection, I tried again and sure enough, it worked. Then I ran sudo apt-get install unity-webapps-preview, but I realized I had to update the package lists first, so I hit Ctrl+C to stop it. Then I ran sudo apt-get update, which worked without a fuss, but when I ran sudo apt-get install unity-webapps-preview again later, it showed an error message. Here's the dump:
      Reading package lists... Done
      Building dependency tree
      Reading state information... Done
      Some packages could not be installed. This may mean that you have requested
      an impossible situation or if you are using the unstable distribution that
      some required packages have not yet been created or been moved out of Incoming.
      The following information may help to resolve the situation:
      The following packages have unmet dependencies:
       unity-webapps-preview : Depends: xul-ext-unity but it is not going to be installed
                               Depends: xul-ext-websites-integration but it is not going to be installed
                               Depends: xul-ext-webaccounts but it is not going to be installed
      E: Unable to correct problems, you have held broken packages.
    I think this might be because I interrupted the earlier command, although it hadn't had a chance to output anything; I stopped it pretty fast. What I tried: I have run a number of commands: sudo apt-get install --fix-broken, sudo apt-get autoclean, sudo apt-get autoremove, sudo apt-get -f install, and sudo apt-get install ppa-purge followed by sudo ppa-purge ppa:webapps/preview. Even after running sudo apt-get upgrade after every try, none of them worked. Research: I tried searching Google, looking at a couple of forums and searching on AU, but to no avail. Help would be appreciated.
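
    One way to see why apt refuses is to ask for the unmet dependencies explicitly; the package names below are taken from the error output, and the error apt prints for them usually names the real conflict (aptitude's interactive resolver, if installed, is another option). This is a diagnostic sketch, not a guaranteed fix:
      # Ask for the missing dependencies directly to surface the underlying conflict
      sudo apt-get install xul-ext-unity xul-ext-websites-integration xul-ext-webaccounts
      # aptitude (if installed) will propose alternative resolutions interactively
      sudo aptitude install unity-webapps-preview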

    Read the article

  • Set users as chrooted for SFTP, but allow the user to log in via SSH

    - by Eghes
    I have set up an SSH server on Debian 7 for SFTP connections. I chrooted some users with this config:
      Match Group sftpusers
          ChrootDirectory /sftp/%u
          ForceCommand internal-sftp
    But if I log in with one of these chrooted users over an SSH console, the login succeeds and the connection is then closed immediately. In the logs I see:
      Oct 17 13:39:32 xxxxxx sshd[31100]: Accepted password for yyyyyy from zzz.zzz.zzz.zzz port 7855 ssh2
      Oct 17 13:39:32 xxxxxx[31100]: pam_unix(sshd:session): session opened for user yyyyyyyyyyyy by (uid=0)
      Oct 17 13:39:32 d00hyr-ea1 sshd[31100]: pam_unix(sshd:session): session closed for user yyyyyyyyyyyy
    How can I chroot a user only for SFTP, and use it as a normal user for SSH?
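
    The immediate close comes from the Match block itself: ForceCommand internal-sftp (and a chroot with no shell inside it) applies to every member of the group, including interactive logins. A common workaround is a dedicated sftp-only group, so accounts that also need a shell are simply kept out of it; a sketch, with the group and path names being illustrative:
      # /etc/ssh/sshd_config -- only members of "sftponly" are chrooted and forced
      # into the SFTP subsystem; users who need a shell stay out of that group
      Subsystem sftp internal-sftp
      Match Group sftponly
          ChrootDirectory /sftp/%u
          ForceCommand internal-sftp
          AllowTcpForwarding no
    Giving the same account both a chrooted SFTP view and a normal shell is harder; it would need a populated chroot (shell, libraries, devices) and no ForceCommand.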

    Read the article

  • Internet keeps getting Disconnected & Reconnected

    - by Paul
    I have an internet connection through a phone company that uses a DSL modem. For the past few weeks my connection keeps dropping for a few seconds and then reconnects automatically. I asked the service provider; they checked and even replaced some wires, but it did not help. The funny thing is that sometimes there seems to be a rhythm to it: it disconnects for an average of 13 seconds or so, plus or minus a few, for hours at a time. I don't know how to attach an image to this post to show the pattern of connects and disconnects; I have captured one with SnagIt that shows the timing. If someone can explain how, I can attach it here. Thanks

    Read the article

  • A Generic RIDC Test Program

    - by Kevin Smith
    Many times I have found it useful to use a Java program that communicates with WebCenter Content (WCC) using RIDC for testing. I might not have access to the web GUI, or I might need to test a service running as a specific user. In the past I had created a number of "one off" programs that submitted specific services, e.g. GET_SEARCH_RESULTS, DOCINFO, etc. Recently I decided to create a generic RIDC test program that could submit any service with the desired parameters based on a configuration file. The program gets the following information from the configuration file: WCC connection information (host, port), the user to run the service as, the service to run, and any parameters for the service. The program will make a connection to the WCC server, send the service request, and print the results of the service call using the getResponseAsString() method. Here is a sample configuration file:
      ridc.host=localhost
      ridc.port=4444
      ridc.user=sysadmin
      ridc.idcservice=GET_SEARCH_RESULTS
      idcservice.QueryText=dDocType <matches> `Document`
      idcservice.SortField=dDocName
      idcservice.SortDesc=ASC
    There is a readme file included in the zip with instructions for how to configure and run the program. The program takes one command line argument, the configuration file name; it is optional and defaults to config.properties. If you have any suggestions for improvements, let me know. Right now it only submits a single service call each time you run it. One enhancement I have already thought about would be to allow you to specify multiple services to run in the configuration file. You can do that with the current program by having multiple configuration files and running the program multiple times, each with a different configuration file. You can download the program here.

    Read the article

  • Can I connect an external antenna to a range extender?

    - by ercan
    I live in a kind of dormitory and the nearest access point is 60 meters away from my room. So I bought a range extender (TP-Link WA730RE) and installed it in my room at the same height as the access point, but my problem is still not solved: the reception is slightly better, but my connection still drops every two minutes. The antenna that comes with this range extender is 4dB. My question is, can I buy an 8dB external antenna (like this one: http://www.amazon.com/TP-Link-TL-ANT2408C-Desktop-Omni-Directional-Antenna/dp/B0034CQSKW/ref=sr_1_14?ie=UTF8&qid=1332074993&sr=8-14) and swap it in for the antenna that comes with the range extender? Or is this external antenna only suitable for the "receiver" end of the connection, i.e. the computer?

    Read the article

  • Cannot connect to a VPN server - authentication failed with error code 691

    - by stacker
    When trying to connect to a VPN server, I get the 691 error code on the client, which says: "Error Description: 691: The remote connection was denied because the user name and password combination you provided is not recognized, or the selected authentication protocol is not permitted on the remote access server." I validated that the username and password are correct, and tried to log in both with the domain name and without it. I also installed a certificate to use with the IKEv2 security type, and validated that the VPN server supports the security method. But I still cannot log in. In the server log I get this entry: "Network Policy Server denied access to a user. The user DomainName\UserName connected from IP address but failed an authentication attempt due to the following reason: The remote connection was denied because the user name and password combination you provided is not recognized, or the selected authentication protocol is not permitted on the remote access server." Any idea of what I can do? Thanks in advance!

    Read the article

  • Antenna Aligner Part 3: Kaspersky

    - by Chris George
    Quick one today. Since starting this project, I've been encountering times when Nomad fails to build my app; it would then take repeated attempts before a build went through successfully. Rob, who works on Nomad at Red Gate, investigated this, and it showed that certain parts of the message required to trigger the 'cloud build' were not getting through to the Nomad app, causing the HTTP connection to stall until timeout. After much head-scratching, it turned out that the Kaspersky Internet Security system I have installed on my laptop at home was being very aggressive and was causing the problem. Perhaps it's trying to protect me from myself? Anyway, we came up with an interim solution while the Nomad guys investigate with Kaspersky: set Visual Studio as a trusted application in the Kaspersky settings and set it to not scan network traffic. Hey presto! This worked and I have not had a single build problem since (other than losing internet connection, or that embarrassing moment when you blame everyone else then realise you've accidentally switched off your wireless on the laptop).

    Read the article

  • My Windows XP wireless hotspot Wi-Fi isn't working

    - by Dominic Grenier
    I added the hotspot the regular way, yet it doesn't show up as available when I try to connect to it from my other laptop running Ubuntu, and nothing can connect to it. Yesterday, I successfully made it work for 5 minutes and then it stopped without me changing any configuration. I've already tried resetting the DNS. Edit: I've updated my Broadcom 802.11b/g driver to a generic but more recent version. I've also repaired WMI; now the Advanced tab of my primary connection is available and the hand icon meaning the connection is shared has appeared. But the computers still connect the wrong way around (Windows to Ubuntu instead of Ubuntu to Windows). Reinstalled SP3...

    Read the article

  • IP Address on internet access shared over ad-hoc wifi network

    - by Jacxel
    The situation: when I'm staying at my girlfriend's, we both like to have internet access on our laptops, but her accommodation doesn't allow wireless routers. My question: if I set up an ad-hoc network to share the internet connection as shown here on How-To Geek, will my laptop be acting as a wireless router, or will the connection all go through my laptop as one IP address, so that it appears to be my computer accessing the web pages that my girlfriend is actually visiting? I would be interested in any additional information that could help solve this problem, e.g. whether Connectify would do what I want.

    Read the article

  • squid configuration change to accept HTTP requests on LAN

    - by Ratan Kumar
    I installed Squid + DansGuardian to block adult content on my Linux (Ubuntu 12.10) machine. Everything worked fine and it blocks as expected. The problem now is that I am also running an Apache server for my LAN (a kind of website), but when accessing it via 192.168.0.1, Squid says it has blocked the connection. This is the exact error:
      The following error was encountered while trying to retrieve the URL: http://192.168.0.16/
      Connection to 192.168.0.16 failed.
      The system returned: (113) No route to host
      The remote host or network may be down. Please try the request again.
      Your cache administrator is webmaster.
    Before configuring Squid it was working fine. What changes do I have to make in squid.conf? I tried: acl Safe_ports 80, allow_all Safe_ports. I want to know how I can configure it again to accept HTTP requests from the LAN.
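
    For reference, the stock way to let Squid serve clients on a local subnet is a localnet ACL allowed before the final deny; a minimal sketch, assuming the LAN is 192.168.0.0/24 and the Ubuntu squid3 package layout (both assumptions). The "(113) No route to host" message also suggests checking plain routing/firewalling between the Squid box and 192.168.0.16, since that error comes from the network, not from an ACL:
      # /etc/squid3/squid.conf -- sketch, adjust the subnet to your LAN
      acl localnet src 192.168.0.0/24
      http_access allow localnet
      http_access deny all
    Then reload Squid (e.g. sudo service squid3 reload) to apply the change.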

    Read the article

  • Can I make ssh tell me which control file it would use for multiplexing?

    - by Ryan Thompson
    I am using the following options in my ~/.ssh/config in order to enable connection multiplexing: ControlMaster auto ControlPath ~/.ssh/control/master-%r@%h:%p However, this has the annoying problem that the first shell to connect to a particular server must be the last to disconnect, because it is the master connection that all the other connections are using. So if you log out of the master, it appears to just hang. To solve this, I would like to wrap ssh with a script that checks if the control master file exists, and if not, starts a master ssh process in the background. Then it would start a slave ssh session. In order to accomplish this, my script would have to determine the path to the control file that ssh would use. This would entail parsing the ssh command line options and config files and implementing the logic for determining the ControlPath. Is there any way to just ask ssh what path it would use, so I can check it?
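
    On newer OpenSSH releases (6.8 and later, so this may not apply to the version in use here), ssh can print its fully resolved configuration for a host, including the ControlPath it would use, and -O can query an existing master. A small sketch, with myserver as a placeholder host name; depending on the version, the %r@%h:%p tokens may appear expanded or literal in the -G output:
      # Print the ControlPath ssh would use for this host (OpenSSH >= 6.8)
      ssh -G myserver | awk '/^controlpath /{print $2}'
      # Ask whether a master connection is already running for this host
      ssh -O check myserver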

    Read the article

  • telnet to 3389 connects, RDP remote desktop app bails?

    - by scott_lotus
    I can telnet 192.168.10.10 3389 and get a connection, but an RDP client to 192.168.10.10 bails immediately (i.e. in less than a second): the "Connect" button greys out briefly and the RDP app remains on screen. I have tried this from many nodes on the subnet to 192.168.10.10 with the same result. On 192.168.10.10, Allow Remote Desktop is enabled and the Windows firewall is off. I'm connecting from the same subnet, i.e. no firewall hardware or routers in the way. AV software is installed, but other nodes on the same subnet allow RDP connections using the exact same AV settings (network group profile). I checked 192.168.10.10 for any additional AV software or local firewall products; I'm sure none exist. I checked regedit to ensure 3389 is the port set for listening. It seems to be an XP (SP3) problem (2 nodes on my LAN have this issue), while many others work fine. Thanks for any help. Scott
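
    Two quick checks on 192.168.10.10 itself can confirm the Remote Desktop listener is actually healthy rather than just the port being open; these are standard Windows tools, offered as a diagnostic sketch rather than a diagnosis:
      REM Confirm the Remote Desktop (Terminal Services) service is running
      sc query TermService
      REM Confirm something is listening on 3389 and see which process owns it
      netstat -ano | findstr 3389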

    Read the article

  • android app unable to connect to the hsqldb server

    - by Chinta
    I am trying to connect my Android app to the HSQLDB server. The server runs on computer-1. I can connect to the DB server from the local machine through Java as well as DbVisualizer, and I can connect to it from another computer (computer-2) using DbVisualizer with computer-1's IP address. Now, trying to connect from my app on a Nexus 7 the same way I was connecting from computer-2, I am getting a "No suitable driver" error. Below is the log:
      11-02 12:01:41.235: W/System.err(9803): connection string <jdbc:hsqldb:hsql://192.168.2.6:9001/qBank>
      11-02 12:01:41.235: W/System.err(9803): user id string <SA>
      11-02 12:01:41.235: W/System.err(9803): password string <>
      11-02 12:01:41.235: W/System.err(9803): ERROR: failed to get connection.
      11-02 12:01:41.235: W/System.err(9803): java.sql.SQLException: No suitable driver
      11-02 12:01:41.235: W/System.err(9803): at java.sql.DriverManager.getConnection(DriverManager.java:186)
      11-02 12:01:41.235: W/System.err(9803): at java.sql.DriverManager.getConnection(DriverManager.java:213)
      11-02 12:01:41.235: W/System.err(9803): at com.scan.util.GatherData.getConnection(GatherData.java:135)

    Read the article

  • Hired developer insists on doing things the wrong way

    - by Tariq- iPHONE Programmer
    Hello, I am working on a social networking iPhone app which requires a remote data connection, so I hired a PHP developer to provide me with RESTful services. But when I started working with him, he argued that he will not create stored procedures and web services; instead he suggested that I pass the query as a parameter. For example, if I have to call a search service, he told me to send a POST request with 3 parameters: Query="select * from users", username=abd and password = 123. I don't think any such architecture exists for accessing remote data. He then said it is possible through socket programming, and I am 100% sure this is not an appropriate way to access remote data; it is simply illogical. Thousands of iPhone applications use REST/SOAP services for their remote data connections, yet he has declined to provide RESTful services. It is my heartfelt request to all developers to post their own views here, so that I can show this developer the views of developers worldwide.

    Read the article

  • mysqld crashes on any statement

    - by ??iu
    I restarted my slave to change configuration settings to skip reverse hostname lookup on connecting and to enable the slow query log. I edited /etc/my.cnf making only these changes, then restarted mysqld with /etc/init.d/mysql restart All appeared to be well but when I connect to msyqld remotely or locally though it connects okay a slight problem is that mysqld crashes whenever you try to issue any kind of statement. The client looks like: Reading table information for completion of table and column names You can turn off this feature to get a quicker startup with -A Welcome to the MySQL monitor. Commands end with ; or \g. Your MySQL connection id is 3 Server version: 5.1.31-1ubuntu2-log Type 'help;' or '\h' for help. Type '\c' to clear the buffer. mysql> show tables; ERROR 2006 (HY000): MySQL server has gone away No connection. Trying to reconnect... Connection id: 1 Current database: mydb ERROR 2006 (HY000): MySQL server has gone away No connection. Trying to reconnect... ERROR 2003 (HY000): Can't connect to MySQL server on 'xx.xx.xx.xx' (61) ERROR: Can't connect to the server ERROR 2006 (HY000): MySQL server has gone away No connection. Trying to reconnect... ERROR 2003 (HY000): Can't connect to MySQL server on 'xx.xx.xx.xx' (61) ERROR: Can't connect to the server ERROR 2006 (HY000): MySQL server has gone away Bus error The mysqld error log looks like: 101210 16:35:51 InnoDB: Error: (1500) Couldn't read the MAX(job_id) autoinc value from the index (PRIMARY). 101210 16:35:51 InnoDB: Assertion failure in thread 140245598570832 in file handler/ha_innodb.cc line 2595 InnoDB: Failing assertion: error == DB_SUCCESS InnoDB: We intentionally generate a memory trap. InnoDB: Submit a detailed bug report to http://bugs.mysql.com. InnoDB: If you get repeated assertion failures or crashes, even InnoDB: immediately after the mysqld startup, there may be InnoDB: corruption in the InnoDB tablespace. Please refer to InnoDB: http://dev.mysql.com/doc/refman/5.1/en/forcing-recovery.html InnoDB: about forcing recovery. 101210 16:35:51 - mysqld got signal 6 ; This could be because you hit a bug. It is also possible that this binary or one of the libraries it was linked against is corrupt, improperly built, or misconfigured. This error can also be caused by malfunctioning hardware. We will try our best to scrape up some info that will hopefully help diagnose the problem, but since we have already crashed, something is definitely wrong and this may fail. key_buffer_size=16777216 read_buffer_size=131072 max_used_connections=3 max_threads=600 threads_connected=3 It is possible that mysqld could use up to key_buffer_size + (read_buffer_size + sort_buffer_size)*max_threads = 1328077 K bytes of memory Hope that's ok; if not, decrease some variables in the equation. thd: 0x18209220 Attempting backtrace. You can use the following information to find out where mysqld died. If you see no messages after this, something went terribly wrong... 
stack_bottom = 0x7f8d791580d0 thread_stack 0x20000 /usr/sbin/mysqld(my_print_stacktrace+0x29) [0x8b4f89] /usr/sbin/mysqld(handle_segfault+0x383) [0x5f8f03] /lib/libpthread.so.0 [0x7f902a76a080] /lib/libc.so.6(gsignal+0x35) [0x7f90291f8fb5] /lib/libc.so.6(abort+0x183) [0x7f90291fabc3] /usr/sbin/mysqld(ha_innobase::open(char const*, int, unsigned int)+0x41b) [0x781f4b] /usr/sbin/mysqld(handler::ha_open(st_table*, char const*, int, int)+0x3f) [0x6db00f] /usr/sbin/mysqld(open_table_from_share(THD*, st_table_share*, char const*, unsigned int, unsigned int, unsigned int, st_table*, bool)+0x57a) [0x64760a] /usr/sbin/mysqld [0x63f281] /usr/sbin/mysqld(open_table(THD*, TABLE_LIST*, st_mem_root*, bool*, unsigned int)+0x626) [0x641e16] /usr/sbin/mysqld(open_tables(THD*, TABLE_LIST**, unsigned int*, unsigned int)+0x5db) [0x6429cb] /usr/sbin/mysqld(open_normal_and_derived_tables(THD*, TABLE_LIST*, unsigned int)+0x1e) [0x642b0e] /usr/sbin/mysqld(mysqld_list_fields(THD*, TABLE_LIST*, char const*)+0x22) [0x70b292] /usr/sbin/mysqld(dispatch_command(enum_server_command, THD*, char*, unsigned int)+0x146d) [0x60dc1d] /usr/sbin/mysqld(do_command(THD*)+0xe8) [0x60dda8] /usr/sbin/mysqld(handle_one_connection+0x226) [0x601426] /lib/libpthread.so.0 [0x7f902a7623ba] /lib/libc.so.6(clone+0x6d) [0x7f90292abfcd] Trying to get some variables. Some pointers may be invalid and cause the dump to abort... thd->query at 0x18213c70 = thd->thread_id=3 thd->killed=NOT_KILLED The manual page at http://dev.mysql.com/doc/mysql/en/crashing.html contains information that should help you find out what is causing the crash. 101210 16:35:51 mysqld_safe Number of processes running now: 0 101210 16:35:51 mysqld_safe mysqld restarted InnoDB: The log sequence number in ibdata files does not match InnoDB: the log sequence number in the ib_logfiles! 101210 16:35:54 InnoDB: Database was not shut down normally! InnoDB: Starting crash recovery. InnoDB: Reading tablespace information from the .ibd files... InnoDB: Restoring possible half-written data pages from the doublewrite InnoDB: buffer... 101210 16:35:56 InnoDB: Started; log sequence number 456 143528628 101210 16:35:56 [Warning] 'user' entry 'root@PSDB102' ignored in --skip-name-resolve mode. 101210 16:35:56 [Warning] Neither --relay-log nor --relay-log-index were used; so replication may break when this MySQL server acts as a slave and has his hostname changed!! Please use '--relay-log=mysqld-relay-bin' to avoid this problem. 101210 16:35:56 [Note] Event Scheduler: Loaded 0 events 101210 16:35:56 [Note] /usr/sbin/mysqld: ready for connections. Version: '5.1.31-1ubuntu2-log' socket: '/var/run/mysqld/mysqld.sock' port: 3306 (Ubuntu) 101210 16:36:11 InnoDB: Error: (1500) Couldn't read the MAX(job_id) autoinc value from the index (PRIMARY). 101210 16:36:11 InnoDB: Assertion failure in thread 139955151501648 in file handler/ha_innodb.cc line 2595 InnoDB: Failing assertion: error == DB_SUCCESS InnoDB: We intentionally generate a memory trap. InnoDB: Submit a detailed bug report to http://bugs.mysql.com. InnoDB: If you get repeated assertion failures or crashes, even InnoDB: immediately after the mysqld startup, there may be InnoDB: corruption in the InnoDB tablespace. Please refer to InnoDB: http://dev.mysql.com/doc/refman/5.1/en/forcing-recovery.html InnoDB: about forcing recovery. 101210 16:36:11 - mysqld got signal 6 ; This could be because you hit a bug. It is also possible that this binary or one of the libraries it was linked against is corrupt, improperly built, or misconfigured. 
This error can also be caused by malfunctioning hardware. We will try our best to scrape up some info that will hopefully help diagnose the problem, but since we have already crashed, something is definitely wrong and this may fail. key_buffer_size=16777216 read_buffer_size=131072 max_used_connections=1 max_threads=600 threads_connected=1 It is possible that mysqld could use up to key_buffer_size + (read_buffer_size + sort_buffer_size)*max_threads = 1328077 K bytes of memory Hope that's ok; if not, decrease some variables in the equation. thd: 0x18588720 Attempting backtrace. You can use the following information to find out where mysqld died. If you see no messages after this, something went terribly wrong... stack_bottom = 0x7f49d916f0d0 thread_stack 0x20000 /usr/sbin/mysqld(my_print_stacktrace+0x29) [0x8b4f89] /usr/sbin/mysqld(handle_segfault+0x383) [0x5f8f03] /lib/libpthread.so.0 [0x7f4c8a73f080] /lib/libc.so.6(gsignal+0x35) [0x7f4c891cdfb5] /lib/libc.so.6(abort+0x183) [0x7f4c891cfbc3] /usr/sbin/mysqld(ha_innobase::open(char const*, int, unsigned int)+0x41b) [0x781f4b] /usr/sbin/mysqld(handler::ha_open(st_table*, char const*, int, int)+0x3f) [0x6db00f] /usr/sbin/mysqld(open_table_from_share(THD*, st_table_share*, char const*, unsigned int, unsigned int, unsigned int, st_table*, bool)+0x57a) [0x64760a] /usr/sbin/mysqld [0x63f281] /usr/sbin/mysqld(open_table(THD*, TABLE_LIST*, st_mem_root*, bool*, unsigned int)+0x626) [0x641e16] /usr/sbin/mysqld(open_tables(THD*, TABLE_LIST**, unsigned int*, unsigned int)+0x5db) [0x6429cb] /usr/sbin/mysqld(open_normal_and_derived_tables(THD*, TABLE_LIST*, unsigned int)+0x1e) [0x642b0e] /usr/sbin/mysqld(mysqld_list_fields(THD*, TABLE_LIST*, char const*)+0x22) [0x70b292] /usr/sbin/mysqld(dispatch_command(enum_server_command, THD*, char*, unsigned int)+0x146d) [0x60dc1d] /usr/sbin/mysqld(do_command(THD*)+0xe8) [0x60dda8] /usr/sbin/mysqld(handle_one_connection+0x226) [0x601426] /lib/libpthread.so.0 [0x7f4c8a7373ba] /lib/libc.so.6(clone+0x6d) [0x7f4c89280fcd] Trying to get some variables. Some pointers may be invalid and cause the dump to abort... thd->query at 0x18599950 = thd->thread_id=1 thd->killed=NOT_KILLED The manual page at http://dev.mysql.com/doc/mysql/en/crashing.html contains information that should help you find out what is causing the crash. 101210 16:36:11 mysqld_safe Number of processes running now: 0 101210 16:36:11 mysqld_safe mysqld restarted The config is [mysqld_safe] socket = /var/run/mysqld/mysqld.sock nice = 0 [mysqld] innodb_file_per_table innodb_buffer_pool_size=10G innodb_log_buffer_size=4M innodb_flush_log_at_trx_commit=2 innodb_thread_concurrency=8 skip-slave-start server-id=3 # # * IMPORTANT # If you make changes to these settings and your system uses apparmor, you may # also need to also adjust /etc/apparmor.d/usr.sbin.mysqld. # user = mysql pid-file = /var/run/mysqld/mysqld.pid socket = /var/run/mysqld/mysqld.sock port = 3306 basedir = /usr datadir = /DB2/mysql tmpdir = /tmp skip-external-locking # # Instead of skip-networking the default is now to listen only on # localhost which is more compatible and is not less secure. 
#bind-address = 127.0.0.1 # # * Fine Tuning # key_buffer = 16M max_allowed_packet = 16M thread_stack = 128K thread_cache_size = 8 # This replaces the startup script and checks MyISAM tables if needed # the first time they are touched myisam-recover = BACKUP max_connections = 600 #table_cache = 64 #thread_concurrency = 10 # # * Query Cache Configuration # query_cache_limit = 1M query_cache_size = 32M # skip-federated slow-query-log skip-name-resolve Update: I followed the instructions as per http://dev.mysql.com/doc/refman/5.1/en/forcing-innodb-recovery.html and set innodb_force_recovery = 4 and the logs are showing a different error but the behavior is still the same: 101210 19:14:15 mysqld_safe mysqld restarted 101210 19:14:19 InnoDB: Started; log sequence number 456 143528628 InnoDB: !!! innodb_force_recovery is set to 4 !!! 101210 19:14:19 [Warning] 'user' entry 'root@PSDB102' ignored in --skip-name-resolve mode. 101210 19:14:19 [Warning] Neither --relay-log nor --relay-log-index were used; so replication may break when this MySQL server acts as a slave and has his hostname changed!! Please use '--relay-log=mysqld-relay-bin' to avoid this problem. 101210 19:14:19 [Note] Event Scheduler: Loaded 0 events 101210 19:14:19 [Note] /usr/sbin/mysqld: ready for connections. Version: '5.1.31-1ubuntu2-log' socket: '/var/run/mysqld/mysqld.sock' port: 3306 (Ubuntu) 101210 19:14:32 InnoDB: error: space object of table mydb/__twitter_friend, InnoDB: space id 1602 did not exist in memory. Retrying an open. 101210 19:14:32 InnoDB: error: space object of table mydb/access_request, InnoDB: space id 1318 did not exist in memory. Retrying an open. 101210 19:14:32 InnoDB: error: space object of table mydb/activity, InnoDB: space id 1595 did not exist in memory. Retrying an open. 101210 19:14:32 - mysqld got signal 11 ; This could be because you hit a bug. It is also possible that this binary or one of the libraries it was linked against is corrupt, improperly built, or misconfigured. This error can also be caused by malfunctioning hardware. We will try our best to scrape up some info that will hopefully help diagnose the problem, but since we have already crashed, something is definitely wrong and this may fail. key_buffer_size=16777216 read_buffer_size=131072 max_used_connections=1 max_threads=600 threads_connected=1 It is possible that mysqld could use up to key_buffer_size + (read_buffer_size + sort_buffer_size)*max_threads = 1328077 K bytes of memory Hope that's ok; if not, decrease some variables in the equation. thd: 0x1753c070 Attempting backtrace. You can use the following information to find out where mysqld died. If you see no messages after this, something went terribly wrong... 
stack_bottom = 0x7f7a0b5800d0 thread_stack 0x20000 /usr/sbin/mysqld(my_print_stacktrace+0x29) [0x8b4f89] /usr/sbin/mysqld(handle_segfault+0x383) [0x5f8f03] /lib/libpthread.so.0 [0x7f7cbc350080] /usr/sbin/mysqld(ha_innobase::innobase_get_index(unsigned int)+0x46) [0x77c516] /usr/sbin/mysqld(ha_innobase::innobase_initialize_autoinc()+0x40) [0x77c640] /usr/sbin/mysqld(ha_innobase::open(char const*, int, unsigned int)+0x3f3) [0x781f23] /usr/sbin/mysqld(handler::ha_open(st_table*, char const*, int, int)+0x3f) [0x6db00f] /usr/sbin/mysqld(open_table_from_share(THD*, st_table_share*, char const*, unsigned int, unsigned int, unsigned int, st_table*, bool)+0x57a) [0x64760a] /usr/sbin/mysqld [0x63f281] /usr/sbin/mysqld(open_table(THD*, TABLE_LIST*, st_mem_root*, bool*, unsigned int)+0x626) [0x641e16] /usr/sbin/mysqld(open_tables(THD*, TABLE_LIST**, unsigned int*, unsigned int)+0x5db) [0x6429cb] /usr/sbin/mysqld(open_normal_and_derived_tables(THD*, TABLE_LIST*, unsigned int)+0x1e) [0x642b0e] /usr/sbin/mysqld(mysqld_list_fields(THD*, TABLE_LIST*, char const*)+0x22) [0x70b292] /usr/sbin/mysqld(dispatch_command(enum_server_command, THD*, char*, unsigned int)+0x146d) [0x60dc1d] /usr/sbin/mysqld(do_command(THD*)+0xe8) [0x60dda8] /usr/sbin/mysqld(handle_one_connection+0x226) [0x601426] /lib/libpthread.so.0 [0x7f7cbc3483ba] /lib/libc.so.6(clone+0x6d) [0x7f7cbae91fcd] Trying to get some variables. Some pointers may be invalid and cause the dump to abort... thd->query at 0x1754d690 = thd->thread_id=1 thd->killed=NOT_KILLED The manual page at http://dev.mysql.com/doc/mysql/en/crashing.html contains information that should help you find out what is causing the crash.
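
    For the record, the generic escape hatch when InnoDB asserts on startup like this (a sketch of the procedure from the forcing-recovery documentation, not specific advice for this particular corruption) is to raise innodb_force_recovery step by step until the server stays up long enough to dump everything, then rebuild the data directory and reload:
      # /etc/my.cnf already carries innodb_force_recovery; raise it one step at a
      # time (values above 4 are increasingly destructive) until a dump succeeds
      mysqldump --all-databases > all_databases.sql
      # After reinitialising a clean data directory, reload the dump
      mysql < all_databases.sql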

    Read the article

  • SSH broken, segfault error 4

    - by trampi
    I cannot connect to my server via SSH; it throws a "server unexpectedly closed connection" after I enter the password. In /var/log/messages the only notable messages are:
      Apr 14 17:41:23 s15410270 kernel: sshd[3602]: segfault at c0 ip 7f0801acbdb0 sp 7fff0adff860 error 4 in libc-2.8.so[7f0801a87000+14f000]
      Apr 14 17:41:29 s15410270 kernel: sshd[3606]: segfault at c0 ip 7f75f9463db0 sp 7fff027971f0 error 4 in libc-2.8.so[7f75f941f000+14f000]
    This message appears after every login attempt via SSH or SFTP. It's a SuSE Linux server. I'm looking for advice on where to start searching for the cause; I can still act as root via a serial console. Edit: "server unexpectedly closed connection" only appears if I enter the correct password!
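
    Since the segfault is inside libc during the post-authentication phase, two cheap first checks (a sketch; package names assume a standard SuSE install) are verifying that the packages involved are intact on disk and looking at what sshd pulls in via PAM, since a broken PAM module can crash the daemon right after password verification:
      # Verify installed files of the packages involved in the crash
      rpm -V glibc pam openssh
      # See which PAM modules sshd loads for the session phase
      cat /etc/pam.d/sshd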

    Read the article

  • Wired and wireless connections: force Windows to connect to laptop through Ethernet?

    - by danielkza
    I have a desktop connected to the internet and to my home network through Wi-Fi, and a laptop connected to that desktop through an Ethernet cable. But Windows seems to only reach the laptop through Wi-Fi, and I want to transfer files through the wired connection instead. Setting up Internet Connection Sharing and disconnecting the laptop from Wi-Fi altogether doesn't seem like the most elegant solution to me. I also thought about going to the hosts file and setting up the IP address manually, but that would make the laptop completely unavailable whenever it isn't wired, which unfortunately happens quite often. Is there any way for me to tell Windows to use the wired connection for a particular host when possible, and fall back to any other route it finds otherwise?
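
    One low-tech sketch is to give the wired link its own small subnet with static addresses on both machines and then address the laptop by that IP, so transfers cannot take the Wi-Fi path; the adapter name and addresses below are assumptions (run the matching command on the laptop with .2):
      REM Put the desktop's Ethernet adapter on a dedicated subnet
      netsh interface ipv4 set address name="Local Area Connection" static 192.168.50.1 255.255.255.0
      REM Optionally prefer the wired adapter in general by lowering its metric
      netsh interface ipv4 set interface "Local Area Connection" metric=5
    Reaching the laptop as \\192.168.50.2 then keeps file transfers on the cable, while its usual name still resolves over Wi-Fi when the cable is unplugged.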

    Read the article

  • Netgear WNR1000 WiFi speed

    - by Kamil Klimek
    I have a Netgear WNR1000 150N, a MacBook Pro 13" with a Broadcom BCM43xx 1.0, and a 60 Mbps internet connection. When I connect through the cable I easily get around 60 Mbps; over Wi-Fi it manages only about 32 Mbps at most. Any idea why that is? Is it a limitation of my router, or maybe of my Wi-Fi card? If it is the router's fault, which router would you suggest? Ideally one with a USB port for an external hard drive. Forgot to add a screenshot with the connection details: Szybkosc transmisji == transmission speed

    Read the article

  • How can I restrict SSH access when the source IP is dynamic

    - by Supratik
    Hi, I want to restrict SSH access to our live web server to our office's static IP and block all other IPs. However, some employees connect to this live server from their dynamic IPs, so it is not always possible for me to change the iptables rules on the live server whenever an employee's dynamic IP changes. I tried putting them on the office VPN and allowing SSH access only from the office IP, but the office connection is slow compared to our employees' private internet connections, and it adds extra overhead to our office network. Is there any way I can solve this problem?
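
    A common alternative to whitelisting source IPs is to stop depending on them: require key-based logins and disable password authentication, so a connection from any address still needs a private key. A minimal sshd_config sketch (all directives are standard OpenSSH options; the group name is illustrative):
      # /etc/ssh/sshd_config -- rely on keys rather than source addresses
      PasswordAuthentication no
      ChallengeResponseAuthentication no
      PermitRootLogin no
      AllowGroups sshusers    # only accounts in this group may log in
    Fail2ban or an iptables "recent" rate limit is a common complement against brute-force noise.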

    Read the article

  • Squid "system returned (13) Permission denied"

    - by AndyM
    I can get to a site from my Squid server directly using lynx http://my-URL, i.e. not using Squid as the proxy, just to prove the connectivity exists. Lynx connects fine to the site, which is a WebLogic portal. When I try the same site from a client with the Squid machine as the proxy, I get a Squid error indicating that the destination site refused the connection from Squid. The Squid server is a RHEL 5.5 server. The error is something like: "The following error was encountered: Connection Failed. The system returned: (13) Permission denied." Any ideas? The Squid access.log just shows a TCP_MISS. It's as if the destination site knows it has been accessed via Squid and is not allowing it?
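
    On RHEL, a "(13) Permission denied" from Squid when it tries to connect outbound is frequently SELinux stopping squid from reaching a non-standard port (WebLogic portals often listen on ports like 7001); this is an assumption to verify in the audit log, not a diagnosis. A quick sketch:
      # Check whether SELinux is enforcing and whether it denied squid's connect
      getenforce
      grep squid /var/log/audit/audit.log | tail
      # One common fix: allow squid to connect to any port
      setsebool -P squid_connect_any on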

    Read the article
