Search Results

Search found 8277 results on 332 pages for 'global scope'.


  • haproxy and tomcat intermittent hangs

    - by user7347
    I am trying to run haproxy in front of tomcat on a Solaris x86 box, but I am getting intermittent failures. At seemingly random intervals, the request just hangs until haproxy times out the connection. I thought maybe it was my app, but I've been able to reproduce it with the tomcat manager app, and hitting tomcat directly there are no problems at all. Hitting it repeatedly with curl will cause the error within 10-15 tries:

        curl -ikL http://admin:admin@<my server>:81/manager/status

    haproxy is running on port 81, tomcat on port 7000. haproxy returns a 504 gateway timeout to the client and puts this into the log file:

        Sep 7 21:39:53 localhost haproxy[16887]: xxx.xxx.xxx.xxx:65168 [07/Sep/2009:21:39:23.005] http_proxy http_proxy/tomcat7000 5/0/0/-1/30014 504 194 - - sHNN 0/0/0/0/0 0/0 "GET /manager/status HTTP/1.1"

    Tomcat shows nothing: no error in the logs and no indication that the request ever makes it to the tomcat server. The request count is not incremented, and the manager app shows activity on only one thread, the one serving up the manager app itself. Here are my haproxy and tomcat connector settings. I've been playing with both a good deal trying to chase down the issue, so they may not be ideal, but they definitely don't seem like they should cause this error.

    server.xml:

        <Connector port="7000" protocol="HTTP/1.1" enableLookups="false"
                   maxKeepAliveRequests="1" connectionLinger="10" />

    haproxy config:

        global
            log loghost local0
            chroot /var/haproxy

        listen http_proxy :81
            mode http
            log global
            option httplog
            option httpclose
            clitimeout 150000
            srvtimeout 30000
            contimeout 3000
            balance roundrobin
            cookie SERVERID insert
            server tomcat7000 127.0.0.1:7000 cookie server00 check inter 2000
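    A possible avenue, not a confirmed fix: with maxKeepAliveRequests="1" Tomcat closes every connection after a single response, and a proxy can race that close and lose a request. A minimal sketch of settings that rule the race out (option forceclose assumes HAProxy 1.3+; all values here are assumptions, not the poster's tested config):

        listen http_proxy :81
            mode http
            option forceclose    # actively close both directions after each response
            retries 3
            option redispatch    # if a connection dies, resend the request elsewhere
            server tomcat7000 127.0.0.1:7000 check inter 2000

    and on the Tomcat side, let connections stay alive instead of closing each one:

        <Connector port="7000" protocol="HTTP/1.1" enableLookups="false"
                   maxKeepAliveRequests="100" connectionTimeout="20000" />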

    Read the article

  • Sticky connection and HTTPS support for HAProxy

    - by Saif
    We have 2 HTTP load balancers with HAProxy and heartbeat. There are 4 apache nodes in this cluster, doing round robin load balancing. The HTTP cluster is working fine. We are having a problem with our portal because it uses SSO: we need sticky connection support in our HAProxy. We also need load balancing for HTTPS traffic. Here's our HAProxy conf file:

        global
            # to have these messages end up in /var/log/haproxy.log you will
            # need to:
            #
            # 1) configure syslog to accept network log events. This is done
            #    by adding the '-r' option to the SYSLOGD_OPTIONS in
            #    /etc/sysconfig/syslog
            #
            # 2) configure local2 events to go to the /var/log/haproxy.log
            #    file. A line like the following can be added to
            #    /etc/sysconfig/syslog
            #
            #    local2.*    /var/log/haproxy.log
            #
            log 127.0.0.1 local0
            log 127.0.0.1 local1 notice
            chroot /var/lib/haproxy
            pidfile /var/run/haproxy.pid
            maxconn 4000
            user haproxy
            group haproxy
            daemon
            # turn on stats unix socket
            stats socket /var/lib/haproxy/stats

        #---------------------------------------------------------------------
        # common defaults that all the 'listen' and 'backend' sections will
        # use if not designated in their block
        #---------------------------------------------------------------------
        defaults
            mode http
            log global
            option httplog
            option dontlognull
            option http-server-close
            option forwardfor except 127.0.0.0/8
            option redispatch
            retries 3
            timeout http-request 10s
            timeout queue 1m
            timeout connect 10s
            timeout client 1m
            timeout server 1m
            timeout http-keep-alive 10s
            timeout check 10s
            maxconn 3000

        #---------------------------------------------------------------------
        # main frontend which proxys to the backends
        #---------------------------------------------------------------------
        frontend main *:5000
            acl url_static path_beg -i /static /images /javascript /stylesheets
            acl url_static path_end -i .jpg .gif .png .css .js
            use_backend static if url_static
            default_backend app

        #---------------------------------------------------------------------
        # static backend for serving up images, stylesheets and such
        #---------------------------------------------------------------------
        backend static
            balance roundrobin
            server static 127.0.0.1:4331 check

        #---------------------------------------------------------------------
        # round robin balancing between the various backends
        #---------------------------------------------------------------------
        backend app

        listen ha-http 10.190.1.28:80
            mode http
            stats enable
            stats auth admin:xxxxxx
            balance roundrobin
            cookie JSESSIONID prefix
            option httpclose
            option forwardfor
            option httpchk HEAD /haproxy.txt HTTP/1.0
            server apache1 portal-04:80 cookie A check
            server apache2 im-01:80 cookie B check
            server apache3 im-02:80 cookie B check
            server apache4 im-03:80 cookie B check

    Please advise. Thanks for your help in advance.
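    A sketch of one way to add the missing HTTPS piece, assuming HAProxy older than 1.5 (which cannot terminate SSL itself): pass port 443 through in TCP mode and pin clients by source address, while the existing JSESSIONID cookie handles stickiness on the HTTP listener. The server names mirror the config above; everything else is an assumption.

        listen ha-https 10.190.1.28:443
            mode tcp                  # SSL passes through untouched
            balance source            # stickiness without cookies (HTTP is invisible in TCP mode)
            option ssl-hello-chk      # health check via an SSL client hello
            server apache1 portal-04:443 check
            server apache2 im-01:443 check
            server apache3 im-02:443 check
            server apache4 im-03:443 check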

    Read the article

  • HA Proxy won't load balance my web requests. What have I done wrong?

    - by Josh Smeaton
    I've finally got HA Proxy set up and running in a way I think I want. However, it is not load balancing the web requests it receives: all requests are currently being forwarded to the first server in the cluster. I'm going to paste my configuration below - if anyone can see where I may have gone wrong, I'd appreciate it. This is my first stab at configuring web servers in a *nix environment.

    First up, I have HA Proxy running on the same host as the first server in the apache cluster. We are moving these servers to virtual later on, and they will have different virtual hosts, but I wanted to get this running now. Both web servers are receiving their health checks and are reporting back correctly. The HA Proxy stats page correctly reports servers that are up and down; I've tested this by altering the name of the file that is checked. I haven't put any load onto these servers yet. I've just opened up the URLs in several tabs (private browsing), and had several co-workers hit the URL too. All of the traffic goes to WEB1. Am I balancing incorrectly?

        global
            maxconn 10000
            nbproc 8
            pidfile /var/run/haproxy.pid
            log 127.0.0.1 local0 debug
            daemon

        defaults
            log global
            mode http
            retries 3
            option redispatch
            maxconn 5000
            contimeout 5000
            clitimeout 50000
            srvtimeout 50000

        listen WEBHAEXT :80,:8443
            mode http
            cookie sessionbalance insert indirect nocache
            balance roundrobin
            option httpclose
            option forwardfor except 127.0.0.1
            option httpchk HEAD health_check.txt
            stats enable
            stats auth rah:rah
            server WEB1 10.90.2.131:81 cookie WEB_1 check
            server WEB2 10.90.2.130:80 cookie WEB_2 check
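    Two properties of this config can make traffic look unbalanced even when round robin works, offered as hypotheses: "cookie sessionbalance insert indirect" pins each browser to whichever server answered its first request, and "nbproc 8" gives every process its own independent rotation starting at WEB1, so a handful of manual tests can all land on WEB1. A sketch for testing (cookie-free requests, single process; the hostname is hypothetical):

        # in the global section, while testing:
        #     nbproc 1
        # then from a shell -- each curl is a fresh client with no cookie jar:
        for i in 1 2 3 4; do
            curl -s -o /dev/null -D - http://lb.example.com/ | grep -i '^Set-Cookie'
        done

    Alternating WEB_1 / WEB_2 cookie values here would mean balancing is fine and the stickiness was just doing its job.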

    Read the article

  • Generic RPM package for Python 2.x

    - by RaphDG
    I have a python application; it can run on Python >= 2.6 and it's architecture independent. I need the rpm package of this application to be installed on Fedora 14 (python 2.7) and Centos 6.2 (python 2.6). I currently use mock to build one rpm package for each "flavour" and it works well. I apparently can't install the Centos-compiled rpm on Fedora. It gives me this error message:

        error: Failed dependencies:
            python(abi) = 2.6 is needed by myapp-0.9.el6.noarch

    Here is the relevant part of my .spec file:

        %{!?python_sitelib: %global python_sitelib %(%{__python} -c "from distutils.sysconfig import get_python_lib; print(get_python_lib())")}
        %{!?python_sitearch: %global python_sitearch %(%{__python} -c "from distutils.sysconfig import get_python_lib; print(get_python_lib(1))")}

        Name:           myapp
        Version:        #VERSION#
        Release:        #RELEASE#%{dist}
        Summary:        myapp
        Group:          Development/Languages
        License:        Apache v2
        Source0:        %{name}-%{version}-#RELEASE#.tar.gz
        BuildRoot:      %{_tmppath}/%{name}-%{version}-%{release}-root-%(%{__id_u} -n)
        BuildArch:      noarch
        BuildRequires:  python-devel
        BuildRequires:  python-setuptools

        %description
        myapp

        %prep
        %setup -c

        %build
        %{__python} setup.py build

        %install
        %{__rm} -rf %{buildroot}
        %{__python} setup.py install -O1 --skip-build --root %{buildroot}

    Do I really have to use mock and build 2 rpms, or is there another way to create a single generic 2.x rpm package?
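    One approach worth testing, not a guaranteed fix: the python(abi) dependency is generated automatically at build time, so a single noarch rpm can opt out of it and require a plain interpreter range instead. A minimal sketch for the spec file (assumes the code really is source-compatible with both 2.6 and 2.7, and note that %python_sitelib will still expand against the build host's python):

        # stop rpmbuild from auto-generating requires such as python(abi) = 2.6
        AutoReq:  no
        Requires: python >= 2.6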

    Read the article

  • Debugging Samba/CUPS printer sharing with Windows

    - by mrdrbob
    I've got an HP Deskjet hooked up to a Slackware 12.2 box. I've got CUPS set up and can print a test page from the box just fine. I've also got Samba set up and have a couple of file shares that work fine. I'm trying to share that HP Deskjet out via Samba, but I can't get it to show up in any Windows system. I see the server and its file shares in Windows networking, but when I open Printers, no printer shows up. Running net view \\servername from the command line lists the file shares, but no printers. Here's the pertinent part of my smb.conf, if that helps:

        [global]
        workgroup = HOMENET
        security = share
        hosts allow = 192.168.1. 192.168.2. 127.
        load printers = yes
        printcap name = cups
        printing = cups
        log file = /var/log/samba.%m
        max log size = 50

        [printers]
        comment = All Printers
        path = /var/spool/samba
        browseable = no
        public = yes
        writable = no
        printable = yes
        guest only = yes

    Can anyone give me some pointers as to where to start looking for potential causes?

    Update: Running testparm shows no errors. Here's the output (minus the file shares):

        [global]
        workgroup = HOMENET
        security = SHARE
        log file = /var/log/samba.%m
        max log size = 50
        printcap name = cups
        hosts allow = 192.168.1., 192.168.2., 127.

        [printers]
        comment = All Printers
        path = /var/spool/samba
        guest only = Yes
        guest ok = Yes
        printable = Yes
        browseable = No
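    A couple of hedged checks from the Samba box itself, to see whether Samba is advertising the printer at all before digging into the Windows side:

        smbclient -L localhost -N                    # the Deskjet should appear as a "Printer" type share
        rpcclient -U% localhost -c 'enumprinters'    # what Samba announces to Windows over MS-RPC

    If the printer is missing even here, the usual suspects are the [printers] spool path (/var/spool/samba) not existing or CUPS not exposing the queue to Samba; if it shows here but not in Windows, the "security = share" guest path is worth a look.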

    Read the article

  • HaProxy - Http and SSL pass through config

    - by Bill
    I've currently got an HaProxy LB solution in place and everything is working fine. However, we are having an issue with a very few clients who cannot get to our site via HTTPS (SSL): they can browse our site over HTTP, but as soon as they click on an absolute HTTPS link they are taken to our home page instead. Wondering if anyone can look at our config below and see if there's something awry. I believe we are on HaProxy 1.2.17.

        global
            log 127.0.0.1 local0
            log 127.0.0.1 local1 notice
            #log loghost local0 info
            maxconn 6144
            #debug
            #quiet
            user haproxy
            group haproxy

        defaults
            log global
            mode http
            option httplog
            option dontlognull
            retries 3
            redispatch
            maxconn 2000
            contimeout 5000
            clitimeout 50000
            srvtimeout 50000
            stats auth # admin password
            stats uri /monitor

        listen webfarm
            # bind :80,:443
            bind :443
            mode tcp
            balance source
            #cookie SERVERID insert indirect
            #option httpclose
            #option forwardfor
            #option httpchk HEAD /check.cfm HTTP/1.0
            server webA 111.10.10.1
            #server webB 111.10.10.2
            server webB 111.10.10.3
            server webC 111.10.10.4

        listen webfarmhttp :80
            mode http
            balance source
            # option httpclose
            option forwardfor
            # option httpchk HEAD /check.cfm HTTP/1.0
            option httpchk /check.cfm
            server webA 111.10.10.1
            #server webB 111.10.10.2
            server webB 111.10.10.3
            server webC 111.10.10.4

        listen monitor :8443
            mode http
            balance roundrobin
            #cookie SERVERID insert indirect
            option httpclose
            option forwardfor
            #option httpchk HEAD /check.txt HTTP/1.0
            #option httpchk HEAD /check.cfm HTTP/1.0
            server webA 111.10.10.1
            server webB 111.10.10.2
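    Not a confirmed diagnosis, but one property of this config worth knowing: "balance source" hashes the client IP independently per listener, and clients whose requests arrive from a pool of proxy addresses (large ISPs, corporate gateways) can land on different servers for HTTP and HTTPS, or hop mid-session -- which would look exactly like "a very few clients" losing their HTTPS session. A sketch that at least removes dead nodes from the hash (whether option ssl-hello-chk is available on 1.2.17 is an assumption; verify before relying on it):

        listen webfarm
            bind :443
            mode tcp
            balance source
            option ssl-hello-chk            # drop unhealthy nodes from the source hash
            server webA 111.10.10.1:443 check
            server webB 111.10.10.3:443 check
            server webC 111.10.10.4:443 check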

    Read the article

  • Setting up vncserver on OpenSolaris zone

    - by k.park
    I am running OpenSolaris 5.10 and set up a sparse zone (it inherits most of the bin directories from the global zone). I ended up copying many etc and var files from the global zone, and eventually got most of the stuff (firefox, gvim, etc.) working through ssh via X11. However, I am having problems setting up vncserver on the zone. This is what I get if I try to start the vncserver:

        vncext: VNC extension running!
        vncext: Listening for VNC connections on port 5911
        vncext: created VNC server for screen 0

        Fatal server error:
        could not open default font 'fixed'

        _X11TransNAMEDOpenClient: Cannot open /tmp/.X11-pipe/X11 for NAMED connection
        _X11TransOpen: transport open failed for local/%zone%:11
        xsetroot: unable to open display '%zone%:11'
        _X11TransNAMEDOpenClient: Cannot open /tmp/.X11-pipe/X11 for NAMED connection
        _X11TransOpen: transport open failed for local/%zone%:11
        _X11TransNAMEDOpenClient: Cannot open /tmp/.X11-pipe/X11 for NAMED connection
        _X11TransOpen: transport open failed for local/%zone%:11
        _X11TransNAMEDOpenClient: Cannot open /tmp/.X11-pipe/X11 for NAMED connection
        _X11TransOpen: transport open failed for local/%zone%:11
        vncconfig: unable to open display "%zone%:11"
        twm: unable to open display "%zone%:11"
        xterm Xt error: Can't open display: %zone%:11

    I already chmodded /tmp/.X11-pipe to 777, and there is no pipe in the /tmp/.X11-pipe or /tmp/.X11-unix directory. Here is my cat /etc/release:

        OpenSolaris 2009.06 snv_111b X86
        Copyright 2009 Sun Microsystems, Inc.  All Rights Reserved.
        Use is subject to license terms.
        Assembled 07 May 2009
        BRAND: ipkg
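    The "Fatal server error: could not open default font 'fixed'" line is what kills the X server; everything after it is fallout from the dead display. A hedged sketch, assuming the sparse zone didn't inherit the X font directories (paths are typical Solaris locations -- confirm they exist inside the zone first):

        ls /usr/openwin/lib/X11/fonts/misc/       # 'fixed' normally lives here
        vncserver :11 -fp /usr/openwin/lib/X11/fonts/misc/,/usr/openwin/lib/X11/fonts/75dpi/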

    Read the article

  • NetApp NDMP backup with BE 2010 R2 works, restore fails

    - by uuwe
    Hi, I'm having some issues with a new Backup Exec 2010 R2 installation. I configured a NetApp FAS2020 as an NDMP device and want to back up files from the NAS to a tape drive connected to my backup server. I set up ndmpd according to this document (http://www.symantec.com/business/support/index?page=content&id=TECH48957) and created a separate backup user (http://filers.blogspot.com/2006/09/setting-veritas-netbackup-with-non.html). Backup works perfectly, but restoring any file gives me an authentication failed error.

    The NDMP device has a "global" ndmp user configured in the device tab (tried this with the newly created ndmpd backup user and the netapp root), and I can also configure separate resource credentials in the BE restore job. I have tried setting the same accounts for the "global" ndmp device and the restore credentials, and have also tried setting different accounts for them. NDMP debug level is at 5 and this is what shows up in /etc/messages; the session is closed immediately after it has been granted:

        16:12:07 PST [Java_Thread:info]: ndmpdserver: ndmpd.access allowed for version = 4, sessionId = 51, from src ip = 192.168.11.17, dst ip = FAS2020-1/192.168.11.75, src port = 50857, dst port = 10000
        16:12:07 PST [Java_Thread:info]: Ndmpd51: ndmpd session closed successfully for version = 4, sessionId = 51, from src ip = 192.168.11.17, dst ip = FAS2020-1/192.168.11.75, src port = 50857, dst port = 10000

    Running wireshark on the backup server doesn't produce much: it shows a SYN - SYN/ACK - NDMP CONNECT_CLOSE Request from the backup server. The resource credentials for the restore job behave very oddly: if I enter NDMP credentials and do "Test All", it fails; if I use my regular domain backup account, it is successful. There are no failed or succeeded logons in the NetApp ndmp log, and tracing this check shows that it doesn't even connect to the NAS. This makes me think that this is more likely flaky BE behaviour rather than misconfiguration of the NAS. Here is the options ndmp output:

        FAS2020-1> options ndmp
        ndmpd.access                 all
        ndmpd.authtype               challenge
        ndmpd.connectlog.enabled     on
        ndmpd.enable                 on
        ndmpd.ignore_ctime.enabled   off
        ndmpd.offset_map.enable      on
        ndmpd.password_length        16
        ndmpd.preferred_interface    disable
        ndmpd.tcpnodelay.enable      off
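    One thing worth ruling out, stated as an assumption rather than a known fix: with ndmpd.authtype set to challenge, ONTAP expects the NDMP-specific password generated on the filer, not the account's normal login password, and restore credentials saved with the login password would fail in exactly this way. The filer-side sketch (7-mode syntax; the user name is hypothetical):

        FAS2020-1> useradmin user list ndmpuser     # confirm the backup user exists
        FAS2020-1> ndmpd password ndmpuser          # prints the challenge-mode NDMP password
        password: ****************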

    Read the article

  • How can I have puppet deploy ssh keys for virtual users?

    - by Pheezy
    I am trying to get puppet to assign authorized ssh keys for virtual users, but I keep getting the following error:

        err: Could not retrieve catalog: Could not parse for environment production:
        Syntax error at 'user'; expected '}' at /etc/puppet/modules/users/manifests/ssh_authorized_keys.pp:9

    I believe my configuration is correct (listed below), but is there a syntax error or scoping issue I am missing? I would simply like to assign users to nodes and have those users automagically get their ssh keys installed. Is there maybe a better way to do this and I'm just overthinking it?

        # /etc/puppet/modules/users/virtual.pp
        class user::virtual {
            @user { "user":
                home       => "/home/user",
                ensure     => "present",
                groups     => ["root","wheel"],
                uid        => "8001",
                password   => "SCRAMBLED",
                comment    => "User",
                shell      => "/bin/bash",
                managehome => "true",
            }

        # /etc/puppet/modules/users/manifests/ssh_authorized_keys.pp
        ssh_authorized_key { "user":
            ensure => "present",
            type   => "ssh-dss",
            key    => "AAAAB....",
            user   => "user",
        }

        # /etc/puppet/modules/users/init.pp
        import "users.pp"
        import "ssh_authorized_keys.pp"

        class user::ops inherits user::virtual {
            realize(
                User["user"],
            )
        }

        # /etc/puppet/manifests/modules.pp
        import "sudo"
        import "users"

        # /etc/puppet/manifests/nodes.pp
        node basenode {
            include sudo
        }
        node 'testbox' inherits basenode {
            include user::ops
        }

        # /etc/puppet/manifests/site.pp
        import "modules"
        import "nodes"

        # The filebucket option allows for file backups to the server
        filebucket { main: server => 'puppet' }

        # Set global defaults - including backing up all files to the main
        # filebucket and adding a global path
        File { backup => main }
        Exec { path => "/usr/bin:/usr/sbin/:/bin:/sbin" }
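    Reading the snippets as pasted, the @user block opens class user::virtual but no closing brace for the class appears before the ssh_authorized_key resource -- which matches a parse error of the form "expected '}'". A sketch of a consolidated, brace-balanced version (the file layout and the shortened key are assumptions):

        # /etc/puppet/modules/users/manifests/virtual.pp -- hypothetical rewrite
        class user::virtual {
            @user { "user":
                ensure     => present,
                home       => "/home/user",
                groups     => ["root", "wheel"],
                uid        => "8001",
                shell      => "/bin/bash",
                managehome => true,
            }
            @ssh_authorized_key { "user":
                ensure => present,
                type   => "ssh-dss",
                key    => "AAAAB....",
                user   => "user",
            }
        }   # <- the closing brace the parser is complaining about

        class user::ops inherits user::virtual {
            realize(User["user"], Ssh_authorized_key["user"])
        }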

    Read the article

  • Exchange 2003: Unrestrict send mail size for specific users / groups?

    - by Kip
    Good (insert appropriate time of day here) SF folks, I have the following situation: we have a message size limit for sending set at 20MB in Global Settings | Message Delivery, and a limit of 50MB set at an external 3rd-party spam vendor. I need to enable some users to send messages that are upwards of around 40MB in size. However, when I set the maximum sending message size to 50MB within the delivery restrictions of a user's Exchange properties, that setting does not appear to win; it seems that the lowest value wins in this situation.

    I need to allow certain users to send messages larger than the 20MB limit, while everyone else keeps the 20MB limit in place. How can I do this? The only way I could see was to raise the limit set in Global Settings | Message Delivery to 50MB and then set the delivery-restriction maximum size down for everyone else (bar the people who need the increased limit). But I cannot see an easy way to do that last bit, hence my post here looking for advice. There are valid reasons we need to send mail this size, and whilst we are putting together other mechanisms for delivering this data, we still need to get this put in place. Thanks in advance, Kip
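    For the "last bit" -- lowering everyone else's per-user limit in bulk -- the per-user sending limit is stored on the AD user object, so it can be scripted instead of clicked through. A hedged VBScript sketch (the attribute name submissionContLength is the Exchange 2003 schema attribute behind "Sending message size", in KB, and the DN is hypothetical -- verify both in ADSI Edit before running anything):

        ' set one user's outgoing size limit to 20 MB (value is in KB)
        Set user = GetObject("LDAP://CN=Some User,OU=Staff,DC=example,DC=com")
        user.Put "submissionContLength", 20480
        user.SetInfo

    Loop that over the OU, exempt the large-attachment senders, then raise the global limit to 50MB.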

    Read the article

  • Users database empty after Samba3 to Samba4 migration on different servers

    - by ouzmoutous
    I have to migrate a Samba 3 server to a new Samba 4 server. My problem is that the database on the Samba 3 server seems a bit empty: the secrets.tdb file is only 20K, whereas "pdbedit -L | wc -l" gives me 16970 lines. On my Samba 3 server, /var/lib/samba is 1.5M.

    After I migrated the database (following the instructions on http://dev.tranquil.it/index.php/SAMBA_-_Migration_Samba3_Samba4), the "pdbedit -L" command on the new server gives me only: SAMBA4$, Administrator, dns-samba4, krbtgt and nobody.

    So I tried to create a VM with a Samba 3. I added some users, did the same things I did for the migration, and now I can see the users created on the VM. It's like the users on the Samba 3 server are in a sort of cache. I have already migrated the /etc/{passwd,shadow,group} files and I can see the users with the "getent passwd" command. Any ideas why my users are present when I use pdbedit but the database is so empty?

    The global part of my smb.conf on the Samba 3 server:

        [global]
        workgroup = INTERNET
        netbios name = PDC-SMB3
        server string = %h server
        interfaces = eth0
        obey pam restrictions = Yes
        passdb backend = smbpasswd
        passwd program = /usr/bin/passwd %u
        passwd chat = *new* %n\n *Re* %n\n *pa*
        username map = /etc/samba/smbusers
        unix password sync = Yes
        syslog = 0
        log file = /var/log/samba/log.%U
        max log size = 1000
        socket options = TCP_NODELAY SO_RCVBUF=8192 SO_SNDBUF=8192
        add user script = /usr/sbin/useradd -s /bin/false -m '%u' -g users
        delete user script = /usr/sbin/userdel -r '%u'
        add group script = /usr/sbin/groupadd '%g'
        delete group script = /usr/sbin/groupdel '%g'
        add user to group script = /usr/sbin/usermod -G '%g' '%u'
        add machine script = /usr/sbin/useradd -s /bin/false -d /dev/null '%u' -g machines
        logon script = logon.cmd
        logon home = \\$L\%U
        domain logons = Yes
        os level = 255
        preferred master = Yes
        local master = Yes
        domain master = Yes
        dns proxy = No
        ldap ssl = no
        panic action = /usr/share/samba/panic-action %d
        invalid users = root admin
        admin users = admin, root, administrateur
        log level = 2
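    One hedged observation from the config above: passdb backend = smbpasswd means the 16970 accounts live in the flat /etc/samba/smbpasswd text file, not in any .tdb -- which would explain why the tdb files look nearly empty while pdbedit (which reads through the configured backend) sees everyone. A sketch of converting the backend before migrating (paths are typical Debian locations, an assumption):

        # export the flat smbpasswd file into a tdbsam the Samba4 upgrade can read
        pdbedit -i smbpasswd:/etc/samba/smbpasswd -e tdbsam:/var/lib/samba/passdb.tdb

    After that, pointing the classicupgrade at this smb.conf with the backend switched to tdbsam should carry the accounts across.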

    Read the article

  • Samba PDC share slow with LDAP backend

    - by hmart
    The scenario: I have a SUSE SLES 11.1 SP1 machine as Samba master PDC with an LDAP backend. One share holds the database files for a client-server application. I log XP and Windows 7 machines into the local domain (example.local); the login is a little slow but works. The client computers run an executable which opens, reads and writes the database files from the server share.

    The problem: when running Samba with the LDAP password backend, the client application runs VERY slowly, with a maximum transfer rate of about 2500 kbit per second. If I disable LDAP, the client app's speed increases 20x, to a transfer rate of 50 Mbit/sec, running smoothly. I'm doing tests with just two users and two machines, so concurrency or LDAP size shouldn't be the problem here.

    The suspect: LDAP and the smb.conf [global] section configuration.

    The question: what can I do? I've googled a lot, but still have no answer.

    The slow smb.conf WITH LDAP:

        [global]
        workgroup = zmartsoft.local
        passdb backend = ldapsam:ldap://127.0.0.1
        printing = cups
        printcap name = cups
        printcap cache time = 750
        cups options = raw
        map to guest = Bad User
        logon path = \\%L\profiles\.msprofile
        logon home = \\%L\%U\.9xprofile
        logon drive = P:
        usershare allow guests = Yes
        add machine script = /usr/sbin/useradd -c Machine -d /var/lib/nobody -s /bin/false %m$
        domain logons = Yes
        domain master = Yes
        local master = Yes
        netbios name = server
        os level = 65
        preferred master = Yes
        security = user
        wins support = Yes
        idmap backend = ldap:ldap://127.0.0.1
        ldap admin dn = cn=Administrator,dc=zmartsoft,dc=local
        ldap group suffix = ou=Groups
        ldap idmap suffix = ou=Idmap
        ldap machine suffix = ou=Machines
        ldap passwd sync = Yes
        ldap ssl = Off
        ldap suffix = dc=zmartsoft,dc=local
        ldap user suffix = ou=Users
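    A hedged first experiment, not a certain cure: every file open on a share can trigger several LDAP lookups, and Samba has a switch to trust the LDAP tree layout and skip the redundant ones. A sketch for the [global] section (verify behaviour with log level 3 before and after, and consider enabling nscd so passwd/group hits get cached):

        passdb backend = ldapsam:ldap://127.0.0.1
        ldapsam:trusted = yes    # trust the directory layout, skip per-user fallback lookups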

    Read the article

  • HAProxy "503 Service Unavailable" for webserver running on a KVM virtual machine

    - by Menda
    I'm setting up a server with KVM (IP 192.168.0.100) and I have created inside of it one virtual machine, using network bridging, at 192.168.0.194. This virtual machine has an nginx instance running, which I can access from the server or from any computer in the internal network just by typing http://192.168.0.194 in the browser. However, I've configured HAProxy on the same server that hosts KVM, and looking at the HAProxy status page it always shows the virtual machine as "DOWN". If I browse to http://localhost on the server, it should be the same as going to http://192.168.0.194. My goal is to build a reverse proxy, but I tried this little example and it won't work. What am I doing wrong? This is my config file on the server:

        # /etc/haproxy/haproxy.cfg
        global
            maxconn 4096
            user haproxy
            group haproxy
            daemon

        defaults
            log global
            mode http
            option httplog
            option dontlognull
            retries 3
            option redispatch
            maxconn 2000
            contimeout 5000
            clitimeout 50000
            srvtimeout 50000

        listen ServerStatus *:8081
            mode http
            stats enable
            stats auth haproxy:haproxy

        listen Server *:80
            mode http
            balance roundrobin
            cookie JSESSIONID prefix
            option httpclose
            option forwardfor
            option httpchk HEAD /check.txt HTTP/1.0
            server mv1 192.168.0.194:80 cookie A check

    Thanks.
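    The health check in this config asks the VM for /check.txt; if nginx has no such file it returns 404 and HAProxy marks the server DOWN even though the site itself works. A quick hedged check and two possible fixes (the nginx docroot path is an assumption -- adjust to the VM's config):

        curl -I http://192.168.0.194/check.txt         # a 404 here explains the DOWN status
        echo ok > /usr/share/nginx/html/check.txt      # fix 1: create the file
        # fix 2: loosen the check in haproxy.cfg instead:
        #     option httpchk HEAD / HTTP/1.0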

    Read the article

  • Issue with SSL using HAProxy and Nginx

    - by Ben Chiappetta
    I'm building a highly available site using multiple HAProxy load balancers, Nginx web servers, and MySQL servers. The site needs to be able to survive load balancer or web server nodes going offline without any interruption of service to visitors.

    Currently, I have two boxes running HAProxy sharing a virtual IP using keepalived, which forward to two web servers running Nginx, which then tie into two MySQL boxes using MySQL replication and sharing a virtual IP using heartbeat. Everything is working correctly except for SSL traffic over HAProxy. I'm running version 1.5-dev12 with openssl support compiled in. When I try to navigate to the virtual IP for haproxy over https, I get the message:

        The plain HTTP request was sent to HTTPS port.

    Here's my haproxy.cfg so far, which was mainly assembled from other posts:

        global
            log 127.0.0.1 local0
            log 127.0.0.1 local1 notice
            # log 127.0.0.1 local0
            user haproxy
            group haproxy
            daemon
            maxconn 20000

        defaults
            log global
            option dontlognull
            balance leastconn
            clitimeout 60000
            srvtimeout 60000
            contimeout 5000
            retries 3
            option redispatch

        listen front
            bind :80
            bind :443 ssl crt /etc/pki/tls/certs/cert.pem
            mode http
            option http-server-close
            option forwardfor
            reqadd X-Forwarded-Proto:\ https if { is_ssl }
            reqadd X-Proto:\ SSL if { is_ssl }
            server web01 192.168.25.34 check inter 1s
            server web02 192.168.25.32 check inter 1s
            stats enable
            stats uri /stats
            stats realm HAProxy\ Statistics
            stats auth admin:*********

    Any idea why SSL traffic isn't being passed correctly? Also, any other changes you would recommend? I still need to configure logging, so don't worry about that section. Thanks in advance for your help.
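    A likely culprit, offered as a strong hypothesis rather than a certainty: the server lines carry no port, and a portless server in HAProxy inherits the port the client connected to -- so traffic decrypted from the :443 bind gets forwarded in the clear to the backends' port 443, and nginx answers with exactly "The plain HTTP request was sent to HTTPS port". The sketch is a one-line fix per server:

        server web01 192.168.25.34:80 check inter 1s    # always hand decrypted traffic to the plain-HTTP port
        server web02 192.168.25.32:80 check inter 1s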

    Read the article

  • Oracle Database Recovery Problem

    - by Palani
    I am very new to Oracle, and trying to restore an Oracle 8i database on a Windows 2000 server. I have a one-week-old database backup (taken with the exp command), and I want to restore it now. I am currently unable to log in through sqlplus (I get a "shutdown in progress" error). I have a backup and I want to restore it, but Oracle is not starting at all, and the 'imp' command is failing. I started sqlplus / as sysdba and the following is the log of what I am trying to do. Can someone guide me further?

        SQL> shutdown immediate;
        ORA-01109: database not open
        Database dismounted.
        ORACLE instance shut down.

        SQL> startup;
        ORACLE instance started.
        Total System Global Area  143423516 bytes
        Fixed Size                    75804 bytes
        Variable Size              58105856 bytes
        Database Buffers           85164032 bytes
        Redo Buffers                  77824 bytes
        Database mounted.
        ORA-01589: must use RESETLOGS or NORESETLOGS option for database open

        SQL> shutdown immediate;
        ORA-01109: database not open
        Database dismounted.
        ORACLE instance shut down.

        SQL> startup mount;
        ORACLE instance started.
        Total System Global Area  143423516 bytes
        Fixed Size                    75804 bytes
        Variable Size              58105856 bytes
        Database Buffers           85164032 bytes
        Redo Buffers                  77824 bytes
        Database mounted.

        SQL> alter database open;
        alter database open
        *
        ERROR at line 1:
        ORA-01589: must use RESETLOGS or NORESETLOGS option for database open

        SQL> alter database open resetlogs;
        alter database open resetlogs
        *
        ERROR at line 1:
        ORA-01245: offline file 1 will be lost if RESETLOGS is done
        ORA-01110: data file 1: 'C:\ORACLE\ORADATA\ABCD\SYSTEM01.DBF'
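    ORA-01245 says datafile 1 (the SYSTEM tablespace) is marked offline and would be discarded by a RESETLOGS. A hedged sketch of the usual sequence -- bring the file online, recover, then open -- which is only safe if the file is intact on disk; since a full exp dump exists, importing it with imp into a freshly created database remains the fallback:

        SQL> alter database datafile 'C:\ORACLE\ORADATA\ABCD\SYSTEM01.DBF' online;
        SQL> recover database until cancel;
        SQL> alter database open resetlogs;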

    Read the article

  • Appears to be "randomly" switching between the acl matched backend and the default backend

    - by Xoor
    I have HAProxy acting as a proxy in front of:

    - An NGinx instance
    - An in-house load balancer in front of multiple dynamic services exposed with socket.io (websockets)

    My problem is that from time to time my connections are proxied correctly to my socket.io cluster, and then randomly HAProxy falls back to routing to NGinx, which obviously is annoying and meaningless since NGinx isn't meant to handle the request. This happens when requesting URLs of the format http://mydomain.com/backends/*. There's an ACL in the HAProxy config to match the '/backends/*' path. Here's a simplified version of my HAProxy config (unrelated entries removed and names changed):

        global
            daemon
            maxconn 4096
            user haproxy
            group haproxy
            nbproc 4

        defaults
            mode http
            timeout server 86400000
            timeout connect 5000
            log global

        # this frontend interface receives the incoming http requests
        frontend http-in
            mode http
            # process all requests made on port 80
            bind *:80
            # set a large timeout for websockets
            timeout client 86400000
            # Default Backend
            default_backend www_backend
            # Loadfire (socket cluster)
            acl is_loadfire_backends path_beg /backends
            use_backend loadfire_backend if is_loadfire_backends

        # NGinx backend
        backend www_backend
            server www_nginx localhost:12346 maxconn 1024

        # Loadfire backend
        backend loadfire_backend
            option forwardfor    # This sets X-Forwarded-For
            option httpclose
            server loadfire localhost:7101 maxconn 2048

    It's really quite confusing for me why the behaviour appears to be "random"; being hard to reproduce, it's hard to debug. I appreciate any insight on this.
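    One mechanism that produces exactly this "random" behaviour, offered as a hypothesis: without httpclose or http-server-close in the frontend, HAProxy of this era evaluates ACLs only on the first request of each keep-alive connection and tunnels everything after it to the same backend -- so whichever backend won the first request keeps the browser's whole connection, /backends/* or not. A sketch of the frontend with per-request routing (http-server-close, where available, still lets WebSocket upgrades switch to tunnel mode):

        frontend http-in
            mode http
            bind *:80
            option httpclose    # re-evaluate ACLs on every request, not just
                                # the first one of each keep-alive connection
            timeout client 86400000
            acl is_loadfire_backends path_beg /backends
            use_backend loadfire_backend if is_loadfire_backends
            default_backend www_backend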

    Read the article

  • FTP Server on Centos 5.8 - Transfer fails randomly

    - by Diego
    I have ProFTPD running on a brand new CentOS 5.8 server with Plesk, and its behaviour is inconsistent at best. I tried to transfer a directory from my PC, and every time I get a failed transfer on a random file. It's never the same one that fails; it just fails. Sometimes it's a .gif, sometimes it's a .css, sometimes it's a JPG. Of several hundred files, a dozen always fail for no apparent reason. The error that I get is the following:

        COMMAND:> [27/11/2012 11:43:52] STOR main_border.gif
                  [27/11/2012 11:43:53] 500 Invalid command: try being more creative
        ERROR:>   [27/11/2012 11:43:53] Syntax error: command unrecognized.

    The above is just an example; the "command unrecognized" occurs with LIST and other commands as well. Here's the ProFTPD configuration, just in case:

        ServerName      "ProFTPD"
        #ServerType     standalone
        ServerType      inetd
        DefaultServer   on

        <Global>
            DefaultRoot ~ psacln
            AllowOverwrite on
        </Global>

        DefaultTransferMode binary
        UseFtpUsers on
        TimesGMT off
        SetEnv TZ :/etc/localtime
        Port 21
        Umask 022
        MaxInstances 30
        ScoreboardFile /var/run/proftpd/scoreboard
        TransferLog /usr/local/psa/var/log/xferlog

        # Change default group for new files and directories in vhosts dir to psacln
        <Directory /var/www/vhosts>
            GroupOwner psacln
        </Directory>

        # Enable PAM authentication
        AuthPAM on
        AuthPAMConfig proftpd

        IdentLookups off
        UseReverseDNS off
        AuthGroupFile /etc/group
        Include /etc/proftpd.include

    Note: the file /etc/proftpd.include is blank. The above is the default configuration set by Plesk 11. I don't know much about why it is that way; my knowledge of Linux system administration is very basic and my knowledge of ProFTPD is a complete zero. Thanks in advance for the help.

    Update: the issue occurs with both CuteFTP and FileZilla.

    Update: I replaced ProFTPD with PureFTPd and the issue persists. Sometimes I get "command unrecognized", sometimes "failed to establish data connection". I'm starting to think that it could be a network issue, but I have completely zero knowledge of networking.
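    "500 Invalid command: try being more creative" is ProFTPD's reply to bytes it could not parse on the control channel; when it appears sporadically on commands as ordinary as STOR and LIST -- and survives a switch to PureFTPd -- a device between client and server mangling the session (NAT helper/ALG, inspection firewall) is a plausible cause rather than the daemon itself. A hedged mitigation sketch: pin the passive data ports and open them explicitly so no helper has to rewrite anything (the port range is an arbitrary choice):

        # /etc/proftpd.conf
        PassivePorts 49152 49500

        # and on the server firewall (iptables example):
        #     iptables -A INPUT -p tcp --dport 49152:49500 -j ACCEPT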

    Read the article

  • smbclient timing out

    - by Sam Lee
    I am trying to set up a Samba share on a CentOS machine, and to connect to this server using smbclient on OS X. Here is what happens:

        > smbclient -L X.X.X.X
        timeout connecting to X.X.X.X:445
        timeout connecting to X.X.X.X:139
        Error connecting to X.X.X.X (Operation already in progress)
        Connection to X.X.X.X failed

    What could be going wrong? Here is my iptables dump on the CentOS machine (the server):

        > iptables -L -n
        Chain INPUT (policy ACCEPT)
        target     prot opt source       destination
        ACCEPT     all  --  0.0.0.0/0    0.0.0.0/0
        REJECT     all  --  0.0.0.0/0    127.0.0.0/8  reject-with icmp-port-unreachable
        ACCEPT     all  --  0.0.0.0/0    0.0.0.0/0    state RELATED,ESTABLISHED
        ACCEPT     tcp  --  0.0.0.0/0    0.0.0.0/0    tcp dpt:445
        ACCEPT     tcp  --  0.0.0.0/0    0.0.0.0/0    tcp dpt:3000
        ACCEPT     tcp  --  0.0.0.0/0    0.0.0.0/0    tcp dpt:80
        ACCEPT     tcp  --  0.0.0.0/0    0.0.0.0/0    tcp dpt:443
        ACCEPT     tcp  --  0.0.0.0/0    0.0.0.0/0    state NEW tcp dpt:22
        ACCEPT     icmp --  0.0.0.0/0    0.0.0.0/0    icmp type 8
        REJECT     all  --  0.0.0.0/0    0.0.0.0/0    reject-with icmp-port-unreachable
        ACCEPT     tcp  --  0.0.0.0/0    0.0.0.0/0    tcp dpt:3000

    And finally, my smb.conf:

        [global]
        workgroup = workgroup
        security = SHARE
        load printers = No
        default service = global
        path = /home
        available = No
        encrypt passwords = yes

        [share]
        writeable = yes
        admin users = myusername
        path = /home/myhome/
        force user = root
        valid users = myusername
        public = yes
        available = yes
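    Since both 445 and 139 time out rather than being refused, the SYNs are likely never reaching smbd. A few hedged checks to locate where they die -- on many hosting networks port 445 is filtered upstream, which no local iptables rule can fix:

        # on the OS X client: can anything reach the port? (-G is the BSD nc connect timeout)
        nc -z -G 5 X.X.X.X 445
        # on the CentOS server: is smbd listening at all?
        netstat -tlnp | grep -E ':(445|139)'
        # and do the client's SYNs even arrive?
        tcpdump -ni eth0 'tcp port 445 or tcp port 139'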

    Read the article

  • HAProxy is caching the forwarding?

    - by shadow_of__soul
    I'm trying to set up a server structure for an application I'm building in Node.js with socket.io. My setup is: an HAProxy frontend which forwards to apache2 as the default backend (or nginx -- it's apache in this local test), and to the node.js app if the URL has socket.io in the request AND a domain name. I have something like:

        global
            log 127.0.0.1 local0
            log 127.0.0.1 local1 notice
            maxconn 4096
            user haproxy
            group haproxy
            daemon

        defaults
            log global
            mode http
            maxconn 2000
            contimeout 5000
            clitimeout 50000
            srvtimeout 50000

        frontend all 0.0.0.0:80
            timeout client 5000
            default_backend www_backend
            acl is_soio url_dom(host) -i socket.io    # if the request contains socket.io
            acl is_chat hdr_dom(host) -i chaturl      # if the request comes from chaturl.com
            use_backend chat_backend if is_chat is_soio

        backend www_backend
            balance roundrobin
            option forwardfor    # This sets X-Forwarded-For
            timeout server 5000
            timeout connect 4000
            server server1 localhost:6060 weight 1 maxconn 1024 check    # forwards to apache2

        backend chat_backend
            balance roundrobin
            option forwardfor    # This sets X-Forwarded-For
            timeout queue 50000
            timeout server 50000
            timeout connect 50000
            server server1 localhost:5558 weight 1 maxconn 1024 check    # forwards to the node.js app

    The problem comes when I make a request to something like www.chaturl.com/index.html: it loads perfectly, but it fails to load the socket.io files (www.chaturl.com/socket.io/socket.io.js) -- it redirects to apache (and should redirect to the node.js app that serves the files). The weird thing is that if I access the socket.io file directly, after refreshing a few times it loads, so I suppose HAProxy is "caching" the forwarding for the client when it makes the first request and reaches the apache server. Any suggestion of how this can be solved? Or what I can try or look at about this?
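    Two hedged observations on the config above. First, url_dom(host) is not a standard fetch for matching a path, so is_soio may never match the /socket.io/... requests; path_beg is the usual tool. Second, without httpclose in the frontend, ACLs are evaluated only on the first request of each keep-alive connection, which would explain why refreshing eventually "fixes" it. A sketch of the frontend with both changes (the full domain name is an assumption):

        frontend all 0.0.0.0:80
            option httpclose                     # route every request, not just the first per connection
            acl is_soio path_beg -i /socket.io   # match the request path
            acl is_chat hdr_dom(host) -i chaturl.com
            use_backend chat_backend if is_chat is_soio
            default_backend www_backend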

    Read the article

  • IIS and ASP.NET

    - by sam
    I'm trying to add the ASP.NET feature on Windows 7. I tried to turn it on using "Turn Windows features on or off", but it fails every time, so I downloaded the Web Platform Installer and tried it that way, and it fails also. Next I uninstalled .NET Framework 4, restarted, reinstalled it, and tried the previous steps again, but it fails the same way. I need this installed so I can view my site in IIS 7. Does anyone know what I can do to get this working? I've searched and searched and everything fails. I get this error from the Web Platform Installer:

        Failed with 0x80070643 - Fatal Error during installation

    Please help, I can't do my work without it working. :(

    Update: I did a few things and now get this error:

        Server Error in '/pulse' Application.
        Parser Error
        Description: An error occurred during the parsing of a resource required to service
        this request. Please review the following specific parse error details and modify
        your source file appropriately.
        Parser Error Message: Could not load type 'pulsesite.MvcApplication'.
        Source Error:
        Line 1: <%@ Application Codebehind="Global.asax.vb" Inherits="pulsesite.MvcApplication" Language="VB" %>
        Source File: /pulse/global.asax    Line: 1
        Version Information: Microsoft .NET Framework Version:4.0.30319; ASP.NET Version:4.0.30319.1

    I know it's just about changing the code, but I'm not good with C#. Does anyone know how to fix this?
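    "Could not load type 'pulsesite.MvcApplication'" usually means IIS can't find the compiled assembly that Global.asax points at -- typically because /pulse isn't marked as an application, the bin folder wasn't deployed, or the app pool runs the wrong .NET version. A hedged command-line sketch (site name and physical path are assumptions; adjust to your layout):

        rem register ASP.NET 4.0 with IIS (also repairs a failed feature install)
        %windir%\Microsoft.NET\Framework\v4.0.30319\aspnet_regiis.exe -i
        rem make /pulse an application and run its pool on .NET 4.0
        %windir%\system32\inetsrv\appcmd add app /site.name:"Default Web Site" /path:/pulse /physicalPath:"C:\inetpub\wwwroot\pulse"
        %windir%\system32\inetsrv\appcmd set apppool "DefaultAppPool" /managedRuntimeVersion:v4.0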

    Read the article

  • Samba joined to AD cannot see users in the security tab on client

    - by Jonathan
    I've got Samba joined via Kerberos and winbindd to our AD network, and user authentication and everything else is working great. However, when I try to add users/groups to file permissions, it tells me they are not found. All the users and groups show up fine with getent, so I'm not sure why they are not showing up. Here is my smb.conf; I would much appreciate any help with this.

        # GLOBAL PARAMETERS
        [global]
        socket options = TCP_NODELAY IPTOS_LOWDELAY SO_KEEPALIVE SO_RCVBUF=11264 SO_SNDBUF=11264
        workgroup = [hidden]
        realm = [hidden]
        preferred master = no
        server string = xerxes web/file server
        security = ADS
        encrypt passwords = yes
        log level = 3
        log file = /var/log/samba/%m
        max log size = 50
        printcap name = cups
        printing = cups
        winbind enum users = Yes
        winbind enum groups = Yes
        winbind use default domain = Yes
        winbind nested groups = Yes
        winbind separator = +
        winbind refresh tickets = yes
        idmap uid = 1600-20000
        idmap gid = 1600-20000
        template primary group = "Domain Users"
        template shell = /bin/bash
        kerberos method = system keytab
        nt acl support = yes

        [homes]
        comment = Home Directories
        valid users = %S
        read only = No
        browseable = No
        create mask = 0770
        directory mask = 0770
        force create mode = 0660
        force directory mode = 2770
        inherit owner = no

        [test]
        comment = Test
        path = /mnt/test
        writeable = yes
        valid users = %s
        create mask = 0770
        directory mask = 0770
        force create mode = 0660
        force directory mode = 2770
        inherit owner = no

        [printers]
        comment = All Printers
        path = /var/spool/cups
        browseable = no
        printable = yes
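    Worth knowing while debugging: the Windows security tab resolves names through MS-RPC lookups against the server, not through getent, so the two can disagree. A few hedged checks on the Samba box to see whether name-to-SID mapping works where Windows needs it (the group name is an example):

        wbinfo -u                          # users via winbind
        wbinfo -n 'DOMAIN+Domain Users'    # name -> SID (note the '+' separator from smb.conf)
        net ads testjoin                   # is the machine account still healthy?

    If wbinfo -n fails while getent succeeds, the winbind/RPC side is where to dig.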

    Read the article

  • Windows Network File Transfer to Samba server: “Are you sure you want to copy this file without its properties?”

    - by jimp
    I am transferring a lot of files to a new NAS based on OpenMediaVault, with the Samba 3.5.6 service running. I am transferring from Windows 7 64-bit to the NAS, and on some media files Windows prompts about losing some property data across the transfer. I have never seen this before when transferring to Samba boxes I have built myself (vs. this turnkey solution), so I'm guessing there must be a Samba setting I can change to preserve the file properties in question, instead of permanently losing whatever they contain (Date Taken? Exposure? Flash Fired? etc.). Or maybe I've just never encountered this before; I'm really not sure.

    I tried adding "ea support = yes" and "store dos attributes = yes" to the [global] section, but the problem remains. The Linux file system is ext4 mounted with user_xattr (full options: defaults,acl,user_xattr,noexec,usrjquota=aquota.user,grpjquota=aquota.group,jqfmt=vfsv0), as Samba requires. Any ideas would be greatly appreciated. Thank you!

    Samba config:

        [global]
        workgroup = WORKGROUP
        server string = %h server
        include = /etc/samba/dhcp.conf
        dns proxy = no
        log level = 2
        syslog = 2
        log file = /var/log/samba/log.%m
        max log size = 1000
        syslog only = yes
        panic action = /usr/share/samba/panic-action %d
        encrypt passwords = true
        passdb backend = tdbsam
        obey pam restrictions = yes
        unix password sync = no
        passwd program = /usr/bin/passwd %u
        passwd chat = *Enter\snew\s*\spassword:* %n\n *Retype\snew\s*\spassword:* %n\n *password\supdated\ssuccessfully* .
        pam password change = yes
        socket options = TCP_NODELAY IPTOS_LOWDELAY
        guest account = nobody
        load printers = no
        disable spoolss = yes
        printing = bsd
        printcap name = /dev/null
        unix extensions = yes
        wide links = no
        create mask = 0777
        directory mask = 0777
        use sendfile = no
        null passwords = no
        local master = yes
        time server = yes
        wins support = yes
        ea support = yes
        store dos attributes = yes

    Note: I found this related question, but it explains the loss as due to the user transferring from NTFS to FAT32.
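    Plausibly relevant, though not certain for these exact properties: much of what Windows calls "properties" travels in NTFS alternate data streams, which ea support alone does not carry; Samba needs a VFS module to map streams onto extended attributes. A sketch for the [global] section (assumes the streams fit within ext4's xattr size cap, which is a real limitation for large streams):

        ea support = yes
        store dos attributes = yes
        vfs objects = streams_xattr    # persist NTFS alternate data streams as user xattrs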

    Read the article

  • Generating alerts from ossec ( server- agent ) model

    - by batman
    I'm very new to OSSEC. I use a server-agent model. I wish to generate alerts for the following action (on the agent side):

    1) a sample alert for the deletion of logs

    I added the rules for this in the agent's ossec.conf using <localfile> tags, like this:

        <localfile>
            <log_format>syslog</log_format>
            <location>/var/log/syslog</location>
        </localfile>

    In my server's ossec.conf I added the following:

        <global>
            <email_notification>yes</email_notification>
            <email_to>xxxx@xxxxxx</email_to>
            <smtp_server>smtp.gmail.com</smtp_server>
            <email_from>xxxx@xxx</email_from>
        </global>

    And I restarted my server. Then I tried to delete the agent's syslog file using rm syslog, but no alert has been triggered. Where am I making the mistake?
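    Two gaps worth checking, stated as assumptions about the intent: a <localfile> entry only tails the log, so deleting the file produces no log line to alert on -- watching files themselves is syscheck's job -- and the server also needs alert levels configured so email actually fires. Sketches for both sides:

        <!-- agent side: watch the log directory itself -->
        <syscheck>
            <frequency>300</frequency>
            <directories check_all="yes" realtime="yes">/var/log</directories>
        </syscheck>

        <!-- server side: thresholds that make alerts and emails fire -->
        <alerts>
            <log_alert_level>1</log_alert_level>
            <email_alert_level>7</email_alert_level>
        </alerts>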

    Read the article
