Search Results

Search found 904 results on 37 pages for '4096'.

  • Files on ext4 on Drobo with corrupt, zero-ed out blocks

    - by Patrick
    I have a 2TB ext4 file system (Ubuntu running Linux kernel 2.6.31-22-server x86_64). This file system is the second drive on a Drobo box plugged in via USB. We've had no problems on the first drive (Drobo limits drive size to 2TB due to some OS limitations, so if you have more space than that it appears as two separate drives). I am sharing these files with Samba (smbd 3.4.0) with a mix of Windows and Linux workstations.

    Recently we've been experiencing data corruption in multiple files. In many cases I have an uncorrupted original file stored on one of the workstations. These are binary files of various formats (e.g. SQLite, but others as well). I used "split" to break a corrupt and an uncorrupted file into 4096-byte chunks (this is the block size of the ext4 file system). I then ran md5sum on pairs of chunks and discovered that the chunks matched in many cases, and that in every case where they did not match, the corrupt chunk was a solid block of zeroes (620f0b67a91f7f74151bc5be745b7110, for what it's worth).

    I'm trying to track down a culprit but am a bit at a loss. I don't believe Samba is at fault since I'm using it without issue on the first drive exported by the Drobo. What can I do to narrow this down and find out what's going on?
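
    (A shell sketch of the same block-level comparison without splitting the files to disk first; good.bin and corrupt.bin stand in for a matched pair of equal-size copies. cmp -l prints 1-based byte offsets of differing bytes, which awk folds into 0-based 4096-byte block numbers:)

        cmp -l good.bin corrupt.bin | awk '{ b = int(($1 - 1) / 4096); if (b != last) { print "block " b; last = b } }'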

  • uWSGI cannot find "application" using Flask and Virtualenv

    - by skyler
    Using uWSGI to serve a simple WSGI app (a simple "Hello, World") my configuration works, but when I try to run a Flask app, I get this in uWSGI's error logs:

        current working directory: /opt/python-env/coefficient/lib/python2.6/site-packages
        writing pidfile to /var/run/uwsgi.pid
        detected binary path: /opt/uwsgi/uwsgi
        setuid() to 497
        your memory page size is 4096 bytes
        detected max file descriptor number: 1024
        lock engine: pthread robust mutexes
        uwsgi socket 0 bound to TCP address 127.0.0.1:3031 fd 3
        Python version: 2.6.6 (r266:84292, Jun 18 2012, 14:18:47) [GCC 4.4.6 20110731 (Red Hat 4.4.6-3)]
        Set PythonHome to /opt/python-env/coefficient/
        *** Python threads support is disabled. You can enable it with --enable-threads ***
        Python main interpreter initialized at 0xbed3b0
        your server socket listen backlog is limited to 100 connections
        *** Operational MODE: single process ***
        added /opt/python-env/coefficient/lib/python2.6/site-packages/ to pythonpath.
        unable to find "application" callable in file /var/www/coefficient/flask.py
        unable to load app 0 (mountpoint='') (callable not found or import error)
        *** no app loaded. going in full dynamic mode ***
        *** uWSGI is running in multiple interpreter mode ***

    Note in particular this part of the log:

        unable to find "application" callable in file /var/www/coefficient/flask.py
        unable to load app 0 (mountpoint='') (callable not found or import error)
        *** no app loaded. going in full dynamic mode ***

    This is my Flask app:

        from flask import Flask
        app = Flask(__name__)

        @app.route("/")
        def hello():
            return "Hello, World, from Flask!"

    Before I added my virtualenv's pythonpath to my configuration file, I was getting an ImportError for Flask. I believe I solved that (I'm not receiving errors about it anymore), and here is my complete configuration file:

        uwsgi:
          #socket: /tmp/uwsgi.sock
          socket: 127.0.0.1:3031
          daemonize: /var/log/uwsgi.log
          pidfile: /var/run/uwsgi.pid
          master: true
          vacuum: true
          #wsgi-file: /var/www/coefficient/coefficient.py
          wsgi-file: /var/www/coefficient/flask.py
          processes: 1
          virtualenv: /opt/python-env/coefficient/
          pythonpath: /opt/python-env/coefficient/lib/python2.6/site-packages

    This is how I start uWSGI, from an rc script:

        /opt/uwsgi/uwsgi --yaml /etc/uwsgi/conf.yaml --uid uwsgi

    And if I try to view the Flask program in a browser, I get this:

        uWSGI Error
        Python application not found

    Any help is appreciated.
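
    (A hedged note: uWSGI looks for a module-level callable named "application" by default, while the Flask object above is named "app". One possible fix, assuming the YAML layout from the post, is to name the callable explicitly; separately, a file called flask.py can shadow the flask package itself if its directory lands on the import path, so renaming it is worth trying too:)

        uwsgi:
          wsgi-file: /var/www/coefficient/flask.py
          callable: app    # the object in flask.py is "app", not "application"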

  • Deployment and Ownership issues

    - by kylemac
    As an extreme newbie, I am having difficulty managing ownership and permissions on my first box. What I can't figure out is how to deploy using one user, call him deploy, and operate my PHP application as the www-data user. Currently, I know my server runs as www-data through this function:

        <?php echo(exec("whoami")); ?>

    but I am having to chown between deploy and www-data every time I deploy. There has got to be an easier way to deploy with one user and still run as www-data.

    EDIT: Here is the output from ls -l on the folder in question. You will see user deploy and group www-pub; the group is from an attempt to add the two different users to a new group and chown one of them in the hopes that they both would have the permissions (newb alert):

        drwxrwxr-x 4 deploy www-pub 4096 Mar 7 01:41 example.com

    I am using Capistrano for deployment under the user deploy, then once it's done I chown to www-data, otherwise I can't use PHP to manipulate files. I am also unsure how to even change which user Apache runs as.
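
    (One common pattern, sketched under the assumption that both users can share a group and that the paths below stand in for the real deploy path: make the shared group own the tree, grant it write access, and set the setgid bit on directories so new files keep the group — which avoids re-chowning on every deploy:)

        sudo usermod -a -G www-pub www-data                            # put the web user in the shared group
        sudo chown -R deploy:www-pub /var/www/example.com
        sudo chmod -R g+rwX /var/www/example.com                       # group read/write, execute on dirs only
        sudo find /var/www/example.com -type d -exec chmod g+s {} +    # new files inherit the www-pub group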

  • How to make new file permissions inherit from the parent directory?

    - by Wai Yip Tung
    I have a directory called data. I am running a script under the user id 'robot'. robot writes to the data directory and updates files inside. The idea is that data is open for both me and robot to update, so I set up the permissions and owner group like this:

        drwxrwxr-x 2 me robot-grp 4096 Jun 11 20:50 data

    where both me and robot belong to 'robot-grp'. I changed the permissions and the owner group recursively, like the parent directory. I regularly upload new files into the data directory using rsync. Unfortunately, newly uploaded files do not inherit the parent directory's permissions as I hoped. Instead they look like this:

        -rw-r--r-- 1 me users 6 Jun 11 20:50 new-file.txt

    When robot tries to update new-file.txt, it fails due to lack of file permission. I'm not sure if setting umask helps; in any case the new files don't really follow it:

        $ umask -S
        u=rwx,g=rx,o=rx

    I'm often confounded by Unix file permissions. Do I even have the right plan? I'm using Debian lenny.
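
    (A hedged sketch of the usual combination: the setgid bit on the directory makes new files inherit the robot-grp group — group inheritance is the only part Unix grants automatically — while rsync's --chmod option can force group-write bits at upload time; ./src/ and host:data/ are placeholders for the real rsync endpoints:)

        chmod g+s data                                       # new files inherit the robot-grp group
        rsync -av --chmod=Dg+rwxs,Fg+rw ./src/ host:data/    # force group rw on uploaded files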

  • HAProxy "503 Service Unavailable" for webserver running on a KVM virtual machine

    - by Menda
    I'm setting up a server with KVM (IP 192.168.0.100) and I have created inside of it one virtual machine using network bridging at 192.168.0.194. This virtual machine has an nginx instance running, which I can access from the server or from any computer in the internal network just by typing http://192.168.0.194 in the browser. However, when I configure HAProxy on the same server that hosts KVM, the HAProxy status page always shows the virtual machine as "DOWN". If I go to http://localhost on the server, it should behave the same as going to http://192.168.0.194. My goal is to build a reverse proxy, but I tried this little example and it won't work. What am I doing wrong? This is my config file on the server:

        # /etc/haproxy/haproxy.cfg
        global
            maxconn 4096
            user haproxy
            group haproxy
            daemon

        defaults
            log global
            mode http
            option httplog
            option dontlognull
            retries 3
            option redispatch
            maxconn 2000
            contimeout 5000
            clitimeout 50000
            srvtimeout 50000

        listen ServerStatus *:8081
            mode http
            stats enable
            stats auth haproxy:haproxy

        listen Server *:80
            mode http
            balance roundrobin
            cookie JSESSIONID prefix
            option httpclose
            option forwardfor
            option httpchk HEAD /check.txt HTTP/1.0
            server mv1 192.168.0.194:80 cookie A check

    Thanks.
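
    (A hedged first thing to check: the health check is "option httpchk HEAD /check.txt HTTP/1.0", so if /check.txt does not exist on the nginx backend, every check fails and HAProxy marks the server DOWN even though the site itself works:)

        curl -I http://192.168.0.194/check.txt
        # a 404 here means the health check itself is failing; create the file,
        # or point httpchk at a path that exists, e.g. "option httpchk HEAD / HTTP/1.0"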

  • Squid SSL transparent proxy - SSL_connect:error in SSLv2/v3 read server hello A

    - by larryzhao
    I am trying to set up an SSL proxy with Squid so that one of my internal servers can visit https://www.googleapis.com — the goal is for my Rails application on that server to reach googleapis.com via the proxy. I am new to this, so my approach is a transparent SSL proxy. I built Squid 3.3 on Ubuntu 12.04, generated an SSL key and certificate pair, and configured Squid like this:

        http_port 443 transparent cert=/home/larry/ssl/server.csr key=/home/larry/ssl/server.key

    leaving almost all other configuration at the defaults. The permissions of the dir that holds the key/crt are:

        drwxrwxr-x 2 proxy proxy 4096 Oct 17 15:45 ssl

    Back on my dev laptop, I put

        <proxy-server-ip> www.googleapis.com

    in my /etc/hosts to make the calls go to my proxy server. But when I try it from my Rails application, I get:

        SSL_connect returned=1 errno=0 state=SSLv2/v3 read server hello A: unknown protocol

    And I also tried openssl on the command line:

        openssl s_client -state -nbio -connect www.googleapis.com:443 2>&1 | grep "^SSL"
        SSL_connect:before/connect initialization
        SSL_connect:SSLv2/v3 write client hello A
        SSL_connect:error in SSLv2/v3 read server hello A
        SSL_connect:error in SSLv2/v3 read server hello A

    Where did I go wrong?
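
    ("unknown protocol" during the handshake usually means the port answered in plain text rather than TLS, which fits an http_port answering on 443 — in Squid it is the https_port directive that terminates TLS. A quick hedged check, with <proxy-server-ip> as in the post:)

        printf 'GET / HTTP/1.0\r\n\r\n' | nc <proxy-server-ip> 443
        # an HTTP response here confirms the port is speaking plain HTTP, not TLS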

  • Xen domU passwd file overwritten with console log output

    - by malfy
    I was setting up a Debian Xen domU and after booting it fine, I added basic configuration to /etc/network/interfaces and ran /etc/init.d/networking restart. This failed, so I decided to reboot. After the reboot I also ran xm shutdown box. When dropped to a shell prompt it wouldn't let me log in. Upon further inspection, I now have garbage in some critical files in /etc:

        root@box:/# tail +1 mnt/etc/{passwd-,shadow}
        tail: cannot open `+1' for reading: No such file or directory

        ==> mnt/etc/passwd- <==
        0000000000100000 (reserved)
        Nov 23 02:02:39 box kernel: [    0.000000] Xen: 0000000000100000 - 0000000004000000 (usable)
        Nov 23 02:02:39 box kernel: [    0.000000] DMI not present or invalid.
        Nov 23 02:02:39 box kernel: [    0.000000] last_pfn = 0x4000 max_arch_pfn = 0x1000000
        Nov 23 02:02:39 box kernel: [    0.000000] initial memory mapped : 0 - 033ff000
        Nov 23 02:02:39 box kernel: [    0.000000] init_memory_mapping: 0000000000000000-0000000004000000
        Nov 23 02:02:39 box kernel: [    0.000000] NX (Execute Disable) protection: active
        Nov 23 02:02:39 box kernel: [    0.000000] 0000000000 - 0004000000 page 4k
        Nov 23 02:02:39 box kernel: [    0.000000] kernel direct mapping tables up to 4000000 @ 7000-2c000
        Nov 23 02:02:3

        ==> mnt/etc/shadow <==
        32 nr_cpumask_bits:32 nr_cpu_ids:1 nr_node_ids:1
        Nov 23 02:02:39 box kernel: [    0.000000] PERCPU: Embedded 15 pages/cpu @c15b0000 s37688 r0 d23752 u65536
        Nov 23 02:02:39 box kernel: [    0.000000] pcpu-alloc: s37688 r0 d23752 u65536 alloc=16*4096
        Nov 23 02:02:39 box kernel: [    0.000000] pcpu-alloc: [0] 0
        Nov 23 02:02:39 box kernel: [    0.000000] Xen: using vcpu_info placement
        Nov 23 02:02:39 box kernel: [    0.000000] Built 1 zonelists in Zone order, mobility grouping on.  Total pages: 16160
        Nov 23 02:02:39 box kernel: [    0.000000] Kernel command line: root=/dev/mapper/xen-guest_root ro quiet root=/dev/xvda1 ro
        Nov 23 02:02:39 box kernel: [    0.000000] PID hash table entries:

    The garbage is also present in the passwd file and the group file (although I didn't paste that above, since I have since run debootstrap on the filesystem again). Does anyone have any insight into what happened and why?

  • SFTP chroot results in broken pipe

    - by Patrick Pruneau
    I have a website to which I want to add some restricted access in a sub-folder. For this, I've decided to use chroot with SFTP (I mostly followed this link: http://shapeshed.com/chroot_sftp_users_on_ubuntu_intrepid/). For now, I've created a user (sio2104) and a group (magento). After following the guide, my folder listing looks like this:

        -rw-r--r--  1 root root   27 2012-02-01 14:23 index.html
        -rw-r--r--  1 root root   21 2012-02-01 14:24 info.php
        drwx------ 15 root root 4096 2012-02-25 00:31 magento

    As you can see, I've chowned the magento folder root:root, since I wanted to jail in the user and everything else, by the way. Also, in the magento folder I chowned everything sio2104:magento so they can access what they want. Finally, I added this to the sshd_config file:

        #Subsystem sftp /usr/lib/openssh/sftp-server
        Subsystem sftp internal-sftp
        Match Group magento
          ChrootDirectory /usr/share/nginx/www/magento
          ForceCommand internal-sftp
          AllowTCPForwarding no
          X11Forwarding no
          PasswordAuthentication yes
        #UsePAM yes

    And the result is... well, I can enter my login and password, and it all ends with a "broken pipe" error:

        $ sftp [email protected]
        [....some debug....]
        [email protected]'s password:
        debug1: Authentication succeeded (password).
        Authenticated to 10.20.0.50 ([10.20.0.50]:22).
        debug1: channel 0: new [client-session]
        debug1: Requesting [email protected]
        debug1: Entering interactive session.
        Write failed: Broken pipe
        Connection closed

    Verbose mode gives nothing to help. Does anyone have an idea of what I've done wrong? If I log in with ssh or sftp as my personal user, everything works fine.
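
    (A hedged checklist: sshd logs the real reason for the disconnect on the server side, and "Write failed: Broken pipe" right after authentication is the classic symptom of sshd's "bad ownership or modes for chroot directory" check — every path component up to the chroot must be root-owned and not group/world writable, and the user still needs read access inside the jail. The "files" subdirectory below is a hypothetical example of a writable area inside the jail:)

        grep -i chroot /var/log/auth.log          # look for "bad ownership or modes"
        namei -l /usr/share/nginx/www/magento     # inspect every path component's owner and mode
        sudo chmod 755 /usr/share/nginx/www/magento             # let the jailed user read the jail root
        sudo install -d -o sio2104 -g magento -m 775 /usr/share/nginx/www/magento/files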

  • How to make a working TFTP server on CentOS 6.2

    - by Dima
    I'm trying to set up a TFTP server on CentOS 6.2. The /etc/xinetd.d/tftp configuration file is the following:

        service tftp
        {
            disable     = no
            socket_type = dgram
            protocol    = udp
            wait        = yes
            user        = root
            server      = /usr/sbin/in.tftpd
            server_args = -s /tftpboot -vvv
            per_source  = 11
            cps         = 100 2
            flags       = IPv4
        }

    SELinux and the firewall are disabled. The /etc/hosts.allow and /etc/hosts.deny files are empty. When I try to get a file from the TFTP server, the file transfer always fails and I see the following errors in /var/log/messages:

        Jul 11 03:16:53 localhost xinetd[4155]: xinetd Version 2.3.14 started with libwrap loadavg labeled-networking options compiled in.
        Jul 11 03:16:53 localhost xinetd[4155]: Started working: 1 available service
        Jul 11 03:17:00 localhost xinetd[4155]: START: tftp pid=4157 from=192.168.10.3
        Jul 11 03:17:00 localhost in.tftpd[4158]: RRQ from 192.168.10.3 filename 1
        Jul 11 03:17:00 localhost in.tftpd[4158]: sending NAK (0, Permission denied) to 192.168.10.3
        Jul 11 03:17:01 localhost in.tftpd[4159]: RRQ from 192.168.10.3 filename 1
        Jul 11 03:17:01 localhost in.tftpd[4159]: sending NAK (0, Permission denied) to 192.168.10.3
        Jul 11 03:17:03 localhost in.tftpd[4160]: RRQ from 192.168.10.3 filename 1

    The tftpboot directory permissions are (output of the ls -l command):

        drw-rw-rw-. 3 root root 4096 Jul 11 03:32 tftpboot

    I also see that the tftpboot directory is shown (by ls -l) with a green background, unlike other files and directories. (Why? As far as I know the green background is for the sticky bit only.) What did I do wrong? How can I make the TFTP server work?
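
    (The listing itself points at a likely cause: drw-rw-rw- has no execute bits, and a directory without execute (search) permission cannot be entered, so in.tftpd answers "Permission denied". The green background is the standard ls colour for an other-writable directory, not the sticky bit. A hedged fix:)

        chmod 755 /tftpboot
        chmod 644 /tftpboot/*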

  • Appears to be "randomly" switching between the acl matched backend and the default backend

    - by Xoor
    I have HAProxy acting as a proxy in front of:

      - An NGinx instance
      - An in-house load balancer in front of multiple dynamic services exposed with socket.io (websockets)

    My problem is that from time to time my connections are proxied correctly to my socket.io cluster, and then randomly HAProxy falls back to routing to NGinx, which obviously is annoying and meaningless since NGinx isn't meant to handle the request. This happens when requesting URLs of the format http://mydomain.com/backends/*, and there's an ACL in the HAProxy config to match the '/backends/*' path. Here's a simplified version of my HAProxy config (with extra unrelated entries removed and names changed):

        global
            daemon
            maxconn 4096
            user haproxy
            group haproxy
            nbproc 4

        defaults
            mode http
            timeout server 86400000
            timeout connect 5000
            log global

        # this frontend interface receives the incoming http requests
        frontend http-in
            mode http
            # process all requests made on port 80
            bind *:80
            # set a large timeout for websockets
            timeout client 86400000
            # Default Backend
            default_backend www_backend
            # Loadfire (socket cluster)
            acl is_loadfire_backends path_beg /backends
            use_backend loadfire_backend if is_loadfire_backends

        # NGinx backend
        backend www_backend
            server www_nginx localhost:12346 maxconn 1024

        # Loadfire backend
        backend loadfire_backend
            option forwardfor   # This sets X-Forwarded-For
            option httpclose
            server loadfire localhost:7101 maxconn 2048

    It's really quite confusing to me why the behaviour appears to be "random", and since it's hard to reproduce, it's hard to debug. I'd appreciate any insight on this.
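
    (A hedged guess at the "randomness": in keep-alive operation, HAProxy releases of this era evaluate content ACLs only on the first request of a connection and send every later request on that connection to the same backend — so whichever backend a browser's connection hits first appears to stick. Forcing per-request processing in the defaults section is the usual cure:)

        defaults
            mode http
            option httpclose        # or "option http-server-close" on 1.4+, which keeps
                                    # client keep-alive but still routes each request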

  • Xen Windows guest doesn't spawn a VNC display

    - by Henrik P. Hessel
    I'm using this HVM file to create a new guest:

        kernel = "/usr/lib/xen-3.2-1/boot/hvmloader"
        builder='hvm'
        memory = 4096
        # Should be at least 2KB per MB of domain memory, plus a few MB per vcpu.
        shadow_memory = 64
        name = "hessel-windows2008"
        vif = [ 'ip=188.40.xx.xx,mac=00:16:3E:C1:8F:CE' ]
        acpi = 1
        apic = 1
        disk = [ 'file:/home/xen/disks/hessel/win2008/win2008.img,hda,w',
                 'file:/home/xen/isopool/win2008_32.iso,hdc:cdrom,r' ]
        device_model = '/usr/lib/xen/bin/qemu-dm'
        #-----------------------------------------------------------------------------
        # boot on floppy (a), hard disk (c) or CD-ROM (d)
        # default: hard disk, cd-rom, floppy
        boot="dc"
        sdl=0
        vnc=1
        vncdisplay=1
        vnclisten="0.0.0.0"
        vncconsole=1
        vncpasswd='howtoforge'
        stdvga=0
        serial='pty'
        usbdevice='tablet'

    The guest is created without an error, but no VNC display is created. Any ideas how to fix that?

        prometheus:~# netstat -ant
        Active Internet connections (servers and established)
        Proto Recv-Q Send-Q Local Address       Foreign Address      State
        tcp        0      0 127.0.0.1:615       0.0.0.0:*            LISTEN
        tcp        0      0 0.0.0.0:111         0.0.0.0:*            LISTEN
        tcp        0      0 0.0.0.0:8080        0.0.0.0:*            LISTEN
        tcp        0      0 0.0.0.0:53          0.0.0.0:*            LISTEN
        tcp        0      0 0.0.0.0:22          0.0.0.0:*            LISTEN
        tcp        0      0 127.0.0.1:6010      0.0.0.0:*            LISTEN
        tcp        0    232 188.40.xx.xx:8080   195.36.75.26:54032   ESTABLISHED
        tcp        0      0 188.40.xx.xx:8080   195.36.75.26:53085   ESTABLISHED
        tcp6       0      0 :::8080             :::*                 LISTEN
        tcp6       0      0 :::53               :::*                 LISTEN
        tcp6       0      0 :::22               :::*                 LISTEN
        tcp6       0      0 ::1:6010            :::*                 LISTEN
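
    (A hedged place to look next: the netstat output shows nothing listening on 59xx, so qemu-dm probably never started or exited early, and its log usually says why. With vncdisplay=1 a working setup would listen on port 5901:)

        ps aux | grep qemu-dm                               # is the device model running at all?
        cat /var/log/xen/qemu-dm-hessel-windows2008.log     # the usual Xen 3.x log location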

  • How to find the real IP to which IPVS is routing a virtual IP

    - by Wayne Conrad
    I'm trying to find a problem server hiding behind a virtual IP (using LVS/ipvs). I've got a test program that sends requests to the virtual IP until it gets the bad response, but how can I tell to which real IP a request to the virtual IP got routed? On the box doing the virtual IP magic, here's the virtual IP configuration (for the service I care about):

        IP Virtual Server version 1.2.1 (size=4096)
        Prot LocalAddress:Port Scheduler Flags
          -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
        ...
        TCP  10.1.0.254:5025 nq
          -> 10.1.0.5:5025                Route   1      0          1
          -> 10.1.0.6:5025                Route   1      0          5
          -> 10.1.0.7:5025                Route   1      0          2
          -> 10.1.0.9:5025                Local   1      0          3
          -> 10.1.0.11:5025               Route   1      0          3
        ...

    My client program is sending TCP requests to 10.1.0.254:5025, usually getting a good response but sometimes a bad one. With this few servers, I could send my request to each server in turn until I discover the culprit, but I wonder if that technique will scale as we add servers. What means exist for me to find out where requests got routed?

    Kernel: Linux 2.6.32. OS: Debian testing (whatever that's called these days). ipvsadm is version 1.25, compiled with ipvs v1.2.1.
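
    (One answer that scales, sketched: the director keeps a per-connection table mapping each client connection to the real server it was scheduled to, so the test client's address and source port can be looked up right after a bad response:)

        ipvsadm -L -n -c | grep 10.1.0.254:5025
        # each line shows the client address:port and the destination (real) server chosen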

  • HAProxy with MySQL Master-Master Replication incredibly slow

    - by Yayap
    I have two MySQL servers in multi-master mode, with an HAProxy machine for simple load balancing/redundancy. When I am connected to one of the servers directly and try to update about 100,000 entries, it completes, including replication, in about half a minute. When connecting through the proxy it usually takes over three whole minutes. Is it normal to have that kind of latency? Is something amiss with my proxy configuration (included below)? This is getting really frustrating, as I assumed the proxy would do some sort of load balancing, or at least have little to no overhead.

        #---------------------------------------------------------------------
        # Example configuration for a possible web application.  See the
        # full configuration options online.
        #
        #   http://haproxy.1wt.eu/download/1.4/doc/configuration.txt
        #
        #---------------------------------------------------------------------

        #---------------------------------------------------------------------
        # Global settings
        #---------------------------------------------------------------------
        global
            # to have these messages end up in /var/log/haproxy.log you will
            # need to:
            #
            # 1) configure syslog to accept network log events.  This is done
            #    by adding the '-r' option to the SYSLOGD_OPTIONS in
            #    /etc/sysconfig/syslog
            #
            # 2) configure local2 events to go to the /var/log/haproxy.log
            #    file. A line like the following can be added to
            #    /etc/sysconfig/syslog
            #
            #    local2.*    /var/log/haproxy.log
            #
            log 127.0.0.1 local2
            # chroot /var/lib/haproxy
            # pidfile /var/run/haproxy.pid
            maxconn 4096
            user haproxy
            group haproxy
            daemon
            #debug
            #quiet
            # turn on stats unix socket
            stats socket /var/lib/haproxy/stats

        #---------------------------------------------------------------------
        # common defaults that all the 'listen' and 'backend' sections will
        # use if not designated in their block
        #---------------------------------------------------------------------
        defaults
            mode tcp
            log global
            #option tcplog
            option dontlognull
            option tcp-smart-accept
            option tcp-smart-connect
            #option http-server-close
            #option forwardfor except 127.0.0.0/8
            #option redispatch
            retries 3
            #timeout http-request 10s
            #timeout queue 1m
            timeout connect 400
            timeout client 500
            timeout server 300
            #timeout http-keep-alive 10s
            #timeout check 10s
            maxconn 2000

        listen mysql-cluster 0.0.0.0:3306
            mode tcp
            balance roundrobin
            option tcpka
            option httpchk
            server db01 192.168.15.118:3306 weight 1 inter 1s rise 1 fall 1
            server db02 192.168.15.119:3306 weight 1 inter 1s rise 1 fall 1
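
    (Two hedged observations about this config: bare timeout numbers are milliseconds, so "timeout connect 400" / "client 500" / "server 300" cut connections after a fraction of a second — forcing constant reconnects, which would explain the slowdown — and "option httpchk" sends an HTTP request to MySQL, which can never answer it. A sketch of more plausible settings for a TCP listener; the check account name is an assumption:)

        defaults
            mode tcp
            timeout connect 5s
            timeout client  30m
            timeout server  30m

        listen mysql-cluster 0.0.0.0:3306
            mode tcp
            balance roundrobin
            option mysql-check user haproxy_check   # needs a matching MySQL account for the check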

  • VSFTPD - FTP over TLS - Upload stops after exactly 82k?

    - by Redsandro
    I installed a vsftpd daemon on a CentOS server, using an RSA certificate for logging in over explicit TLS. Now, I cannot upload more than 82k. With files under that limit there is no problem; the FTP works like a charm. But as soon as a file reaches 82k with FileZilla (81,952 bytes to be exact), the transfer stops and the FTP client hangs until the timeout is reached.

    FTP client console:

        15:10:21 Command: STOR jquery-1.7.2.min.js
        15:10:21 Response: 150 Ok to send data.
        15:11:21 Error: Connection timed out
        15:11:21 Error: File transfer failed after transferring 82 KB in 60 seconds

    /var/log/vsftpd.log:

        FTP command: Client "x.x.x.x", "STOR jquery-1.7.2.min.js"
        FTP response: Client "x.x.x.x", "150 Ok to send data."
        OK UPLOAD: Client "x.x.x.x", "jquery-1.7.2.min.js", 81952 bytes, 1.32Kbyte/sec
        FTP response: Client "x.x.x.x", "226 File receive OK."   // NOT okay, file is bigger
        // No mention of error here

    I cannot find relevant info about this problem, apart from a possible problem with trans_chunk_size (not mentioned in the default config), but I tried different sizes and it had no impact on the problem:

        trans_chunk_size=4096
        trans_chunk_size=8192
        trans_chunk_size=9999

    Of course, after every configuration change I restarted the server:

        /etc/init.d/vsftpd restart

    What else can cause this? It's not the latest version, but it's the latest update within the repositories that has been deemed fit for enterprise usage. Package info:

        $ yum info vsftpd
        Loaded plugins: fastestmirror
        Installed Packages
        Name       : vsftpd
        Arch       : x86_64
        Version    : 2.0.5
        Release    : 24.el5_8.1
        Size       : 286 k
        Repo       : installed
        Summary    : vsftpd - Very Secure Ftp Daemon
        URL        : http://vsftpd.beasts.org/
        License    : GPL
        Description: vsftpd is a Very Secure FTP daemon. It was written completely from scratch.
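
    (A hedged lead: with explicit TLS the firewall's FTP helper can no longer read the control channel, so passive data connections stop being tracked and a transfer can stall once the socket buffers fill. Pinning the passive port range in vsftpd.conf and opening it explicitly is the usual workaround; the range below is an arbitrary example:)

        # in /etc/vsftpd/vsftpd.conf:
        #   pasv_min_port=64000
        #   pasv_max_port=64100
        iptables -I INPUT -p tcp --dport 64000:64100 -j ACCEPT
        service vsftpd restart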

  • Why is my DSDT table different from what I found online?

    - by Hao Shen
    I have found a field in the DSDT table that I want to modify, described here: http://www.ztex.de/misc/c2ctl.e.html. Specifically, I want to modify the _PSS field for the processor so that I have more frequency levels available in the cpufreq driver interface. I tried these commands to disassemble the DSDT table on my desktop (Linux 2.6.29, Intel Core 2):

        cat /proc/acpi/dsdt > dsdt.aml
        iasl -d dsdt.aml

    This gives me a file dsdt.dsl (very long, so I just show the beginning of the file):

        /*
         * Intel ACPI Component Architecture
         * AML Disassembler version 20090123
         *
         * Disassembly of dsdt.aml, Mon May  6 20:41:40 2013
         *
         *
         * Original Table Header:
         *     Signature        "DSDT"
         *     Length           0x00003794 (14228)
         *     Revision         0x01 **** ACPI 1.0, no 64-bit math support
         *     Checksum         0x46
         *     OEM ID           "DELL"
         *     OEM Table ID     "dt_ex"
         *     OEM Revision     0x00001000 (4096)
         *     Compiler ID      "INTL"
         *     Compiler Version 0x20050624 (537200164)
         */
        DefinitionBlock ("dsdt.aml", "DSDT", 1, "DELL", "dt_ex", 0x00001000)
        {
            Method (DBIN, 0, NotSerialized)
            {
                Noop
            }

            Scope (\)
            {
                Device (_SB.VBTN)
        ...................

    But I cannot find the _PSS field shown on the website above, and I do not know why. I am sure the current cpufreq driver shows 4 available frequency levels, so there should at least be something in the table reflecting this, right? Has anybody here played with the DSDT table before? Thanks.
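
    (A hedged pointer: on many Intel machines the _PSS package lives in an SSDT rather than in the DSDT, which would explain why the disassembly has no such field even though cpufreq sees four P-states. Worth checking before editing anything:)

        grep -c _PSS dsdt.dsl                           # confirm it really is absent from the DSDT
        ls /sys/firmware/acpi/tables/ | grep -i ssdt    # then disassemble the SSDTs too with iasl -d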

  • HAProxy is caching the forwarding?

    - by shadow_of__soul
    I'm trying to set up a server structure for an application I'm building in Node.js with socket.io. My setup is: an HAProxy frontend that forwards to apache2 as the default backend (or nginx; it's apache in this local test), and to the node.js app if the URL has socket.io in the request AND a certain domain name. I have something like:

        global
            log 127.0.0.1 local0
            log 127.0.0.1 local1 notice
            maxconn 4096
            user haproxy
            group haproxy
            daemon

        defaults
            log global
            mode http
            maxconn 2000
            contimeout 5000
            clitimeout 50000
            srvtimeout 50000

        frontend all 0.0.0.0:80
            timeout client 5000
            default_backend www_backend
            acl is_soio url_dom(host) -i socket.io   # if the request contains socket.io
            acl is_chat hdr_dom(host) -i chaturl     # if the request comes from chaturl.com
            use_backend chat_backend if is_chat is_soio

        backend www_backend
            balance roundrobin
            option forwardfor   # This sets X-Forwarded-For
            timeout server 5000
            timeout connect 4000
            server server1 localhost:6060 weight 1 maxconn 1024 check   # forwards to apache2

        backend chat_backend
            balance roundrobin
            option forwardfor   # This sets X-Forwarded-For
            timeout queue 50000
            timeout server 50000
            timeout connect 50000
            server server1 localhost:5558 weight 1 maxconn 1024 check   # forwards to node.js app

    The problem comes when I make a request to something like www.chaturl.com/index.html: the page loads perfectly but fails to load the socket.io files (www.chaturl.com/socket.io/socket.io.js). Why does it go to apache when it should go to the node.js app that serves those files? The weird thing is that if I access the socket.io file directly, after refreshing a few times, it loads; so I suppose HAProxy is "caching" the forwarding for the client when it makes the first request and reaches the apache server. Any suggestion of how this can be solved, or what I can try or look at?
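
    (Two hedged observations: "url_dom(host) -i socket.io" is matching a domain fetch against a path-like string — "path_beg /socket.io" is the usual way to match the request path — and without httpclose/http-server-close the first request of a keep-alive connection pins all following requests to the same backend, which looks exactly like "caching". A sketch:)

        defaults
            option httpclose                       # evaluate ACLs per request, not per connection

        frontend all 0.0.0.0:80
            acl is_soio path_beg /socket.io
            acl is_chat hdr_dom(host) -i chaturl
            use_backend chat_backend if is_chat is_soio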

  • Kickstart CentOS 6 prompting for TCP/IP with network set to DHCP

    - by Andy Shinn
    I am trying to stop my kickstart CentOS install prompting me for TCP/IP information. After I click through this prompt (keeping IPv4 and IPv6 at their defaults) the installation continues and completes just fine. Below is my kickstart file:

        # Andy's super awesome VM kickstart file
        install
        url --url=http://mirrors.kernel.org/centos/6/os/x86_64
        lang en_US.UTF-8
        keyboard us
        text
        %include /tmp/network.ks
        rootpw --iscrypted $6$RA8DyrNTsVJkGIgY$ohZ62HHiOjNnn1yDMZlIu3lQ63D3plGPcbVZtPKE8Oq6Z.IGUgN.kNLkxs/ZymZuluRDWsW2eey5zLOl2G3mp.
        firewall --service=ssh
        authconfig --enableshadow --passalgo=sha512
        selinux --disabled
        timezone America/Los_Angeles
        bootloader --location=mbr --driveorder=vda --append="crashkernel=auto rhgb quiet"

        # The following is the partition information you requested
        # Note that any partitions you deleted are not expressed
        # here so unless you clear all partitions first, this is
        # not guaranteed to work
        zerombr
        clearpart --all --drives=vda --initlabel
        part /boot --fstype=ext4 --size=500
        part pv.253002 --grow --size=1
        volgroup vg1 --pesize=4096 pv.253002
        logvol / --fstype=ext4 --name=lv_root --vgname=vg1 --grow --size=1024 --maxsize=51200
        logvol swap --name=lv_swap --vgname=vg1 --grow --size=4032 --maxsize=4032

        repo --name="CentOS" --baseurl=http://mirrors.kernel.org/centos/6/os/x86_64 --cost=100
        repo --name="Puppet Labs Products" --baseurl=http://yum.puppetlabs.com/el/6/products/x86_64
        repo --name="Puppet Labs Dependencies" --baseurl=http://yum.puppetlabs.com/el/6/dependencies/x86_64
        repo --name="EyeFi" --baseurl=http://flexo.eye.fi/6/eye-fi-api

        %packages
        @core
        @server-policy
        puppet
        facter
        %end

        %pre --erroronfail
        #!/bin/bash
        for x in `cat /proc/cmdline`; do
          case $x in
            SERVERNAME*)
              eval $x
              echo "network --onboot yes --device eth0 --bootproto dhcp --hostname ${SERVERNAME}.eye.fi" > /tmp/network.ks
              ;;
          esac;
        done
        %end

        %post
        puppet agent --waitforcert 10 --onetime --no-daemon --pluginsync --server puppet.eye.fi
        %end

        reboot

    My kernel arguments are in the following virt-install command that I use to start the install:

        virt-install -n zabbix -r 2048 --vcpus=2 \
          -l http://mirrors.kernel.org/centos/6/os/x86_64 \
          --disk /dev/vg_inf1/zabbix --network bridge=br85 \
          --initrd-inject=/home/ashinn/vm_kickstart \
          --extra-args "ks=file:/vm_kickstart SERVERNAME=zabbix" \
          --autostart

    During the install, I can pull up a console on the second terminal and verify that the contents of /tmp/network.ks are:

        network --onboot=yes --bootproto=dhcp --ipv6=auto --hostname=jenkins2.mydomain.com

    Why might Anaconda be prompting for the TCP/IP settings when they are already set to DHCP?
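
    (A hedged guess at the cause: the loader brings the network up to fetch the install tree before %pre has run, so a network line generated in %pre arrives too late to answer the early TCP/IP dialog. Supplying the device and protocol on the kernel command line usually suppresses the prompt, e.g. in the virt-install --extra-args:)

        --extra-args "ks=file:/vm_kickstart ksdevice=eth0 ip=dhcp SERVERNAME=zabbix"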

  • Not able to find scripts present in /etc/profile.d directory [on hold]

    - by priya
    I am using Red Hat Linux 6.0 on a DaVinci board. I have to change the system clock resolution, so I am changing the HZ environment variable. For this I have written a script that sets HZ=1000, put it in /etc/profile.d, and added a loop to /etc/profile so that, as usual, /etc/profile loads the scripts present in /etc/profile.d. But when I log into the system at root level, I get this error:

        -bash: ./etc/profile.d/resolution.sh: No such file or directory

    (resolution.sh is my script's name). Also, why is it showing ./etc here and not /etc? Is something related to that? I also tried adding the script to /etc/init.d, but still no change in the value of HZ takes place. Please tell me where to make the change so that this environment variable gets changed.

    The script (resolution.sh) contains:

        #!/bin/bash
        export HZ=1000

    The content I added to /etc/profile is:

        if [ -d /etc/profile.d ]; then
          for i in /etc/profile.d/*.sh; do
            if [ -r $i ]; then
              .$i
            fi
          done
          unset i
        fi

    And the output of the grep command is:

        -rw-r--r-- 1 root root  535 Feb  4  2004 profile
        -rwxr-xr-x 2 root root 4096 Feb  2  2004 profile.d
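
    (The error message itself points at the likely bug: in the loop above, ". $i" was written as ".$i", so the dot and the path fuse into the relative path ./etc/profile.d/resolution.sh — exactly what bash printed. The standard loop separates the source operator from its argument:)

        if [ -d /etc/profile.d ]; then
          for i in /etc/profile.d/*.sh; do
            if [ -r "$i" ]; then
              . "$i"    # note the space after the dot
            fi
          done
          unset i
        fi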

  • Degraded RAID-5 array with lvm2 lost superblock and partition table

    - by Fred Phillips
    I have a RAID-5 array of 4x1TB hard disks with one lvm2 partition on Ubuntu Linux 10.04 LTS. One of the disks has failed. I have re-assembled the array without this failed disk, but now mdadm --examine claims the array has no superblock and fdisk says it has no partition table. What can I do to recover the data?

        # mdadm -D /dev/md0
        /dev/md0:
                Version : 1.2
          Creation Time : Sat Mar  5 14:43:49 2011
             Raid Level : raid5
             Array Size : 2930276352 (2794.53 GiB 3000.60 GB)
          Used Dev Size : 976758784 (931.51 GiB 1000.20 GB)
           Raid Devices : 4
          Total Devices : 4
            Persistence : Superblock is persistent

            Update Time : Sat Mar  5 15:06:49 2011
                  State : clean, degraded
         Active Devices : 3
        Working Devices : 3
         Failed Devices : 1
          Spare Devices : 0

                 Layout : left-symmetric
             Chunk Size : 512K

                   Name : boba:1  (local to host boba)
                   UUID : 52eb4bc9:c3d8aab5:e0699505:e0e1aa05
                 Events : 18

            Number   Major   Minor   RaidDevice State
               0       8        1        0      active sync   /dev/sda1
               1       8       65        1      active sync   /dev/sde1
               2       8       49        2      active sync   /dev/sdd1
               3       0        0        3      removed

               4       8       17        -      faulty spare   /dev/sdb1

        # mdadm --examine /dev/md0
        mdadm: No md superblock detected on /dev/md0.

        # fdisk -l /dev/md0
        Disk /dev/md0: 3000.6 GB, 3000602984448 bytes
        2 heads, 4 sectors/track, 732569088 cylinders
        Units = cylinders of 8 * 512 = 4096 bytes
        Sector size (logical/physical): 512 bytes / 512 bytes
        I/O size (minimum/optimal): 524288 bytes / 1572864 bytes
        Disk identifier: 0x00000000

        Disk /dev/md0 doesn't contain a valid partition table

        # cat /proc/mdstat
        Personalities : [raid6] [raid5] [raid4] [linear] [multipath] [raid0] [raid1] [raid10]
        md0 : active raid5 sdb1[4](F) sda1[0] sdd1[2] sde1[1]
              2930276352 blocks super 1.2 level 5, 512k chunk, algorithm 2 [4/3] [UUU_]

        unused devices: <none>
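
    (Two hedged observations: mdadm --examine reads component superblocks, so finding none on the assembled array itself is expected — --examine is for the members, --detail for /dev/md0 — and an LVM physical volume carries no partition table, so fdisk's complaint is normal too. The LVM metadata can be probed directly:)

        mdadm --examine /dev/sda1     # component superblocks live on the members
        pvscan                        # look for /dev/md0 as a physical volume
        vgscan && vgchange -ay        # activate the volume group if it is found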

  • How can I recover my system after running 'mkfs' on the system partition?

    - by Filip Podgórny
    I am not a Linux user, and was doing some homework when I blindly typed sudo mkfs ext3 dev/sda2 (I had Ubuntu installed inside my Windows installation). I did a few more things, then turned Ubuntu off to switch back to Windows. "No operating system installed" is the message I'm getting now. I plugged my HDD into another computer and all my files are still there. What should I do to get my Windows installation back?

    df -l (before mkfs):

        /dev/loop0      29G 2,0G  27G  8% /
        udev           3,0G 4,0K 3,0G  1% /dev
        tmpfs          1,2G 900K 1,2G  1% /run
        none           5,0M    0 5,0M  0% /run/lock
        none           3,0G 1,3M 3,0G  1% /run/shm
        /dev/sda3      455G 123G 333G 27% /host
        /dev/sdb1      1,9G 820M 1,1G 43% /media/PHONE CARD

    mkfs output (originally in Polish, translated here):

        mke2fs 1.41.14 (22-Dec-2010)
        Filesystem label=
        OS type: Linux
        Block size=1024 (log=0)
        Fragment size=1024 (log=0)
        Stride=0 blocks, Stripe width=0 blocks
        25688 inodes, 102400 blocks
        5120 blocks (5.00%) reserved for the super user
        First data block=1
        Maximum filesystem blocks=67371008
        13 block groups
        8192 blocks per group, 8192 fragments per group
        1976 inodes per group
        Superblock backups stored on blocks:
                8193, 24577, 40961, 57345, 73729

        Writing inode tables: done
        Creating journal (4096 blocks): done
        Writing superblocks and filesystem accounting information: done

        This filesystem will be automatically checked every 30 mounts or
        180 days, whichever comes first. Use tune2fs -c or -i to override.
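
    (A hedged recovery sketch: since the files are still readable from another machine, image the disk before experimenting further, then let testdisk hunt for remnants of the previous filesystem on the overwritten partition; /mnt/backup is a placeholder for wherever the image can fit:)

        dd if=/dev/sda of=/mnt/backup/sda.img bs=4M conv=noerror,sync
        testdisk /mnt/backup/sda.img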

  • bind9 named.conf zones size limit

    - by mox601
    I am trying to set up a test environment on my local machine, and I am trying to start a DNS daemon that loads its configuration from a named.conf.custom file. As long as that file contains only 3-4 zones, the bind9 daemon starts fine, but when I use the config file I actually need (about 10,000 lines long), bind can't start up, and in the syslog I find this message:

        starting BIND 9.7.0-P1 -u bind
        Jun 14 17:06:06 cibionte-pc named[9785]: built with '--prefix=/usr' '--mandir=/usr/share/man' '--infodir=/usr/share/info' '--sysconfdir=/etc/bind' '--localstatedir=/var' '--enable-threads' '--enable-largefile' '--with-libtool' '--enable-shared' '--enable-static' '--with-openssl=/usr' '--with-gssapi=/usr' '--with-gnu-ld' '--with-dlz-postgres=no' '--with-dlz-mysql=no' '--with-dlz-bdb=yes' '--with-dlz-filesystem=yes' '--with-dlz-ldap=yes' '--with-dlz-stub=yes' '--with-geoip=/usr' '--enable-ipv6' 'CFLAGS=-fno-strict-aliasing -DDIG_SIGCHASE -O2' 'LDFLAGS=-Wl,-Bsymbolic-functions' 'CPPFLAGS='
        Jun 14 17:06:06 cibionte-pc named[9785]: adjusted limit on open files from 1024 to 1048576
        Jun 14 17:06:06 cibionte-pc named[9785]: found 1 CPU, using 1 worker thread
        Jun 14 17:06:06 cibionte-pc named[9785]: using up to 4096 sockets
        Jun 14 17:06:06 cibionte-pc named[9785]: loading configuration from '/etc/bind/named.conf'
        Jun 14 17:06:06 cibionte-pc named[9785]: /etc/bind/named.conf.saferinternet:1: unknown option 'zone'
        Jun 14 17:06:06 cibionte-pc named[9785]: loading configuration: failure
        Jun 14 17:06:06 cibionte-pc named[9785]: exiting (due to fatal error)

    Are there any limits on the size of the configuration files bind9 is allowed to load?
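
    (The log suggests a parse error rather than a size limit: named stops at the first bad statement in the included file, and an "unknown option 'zone'" at line 1 often means a missing semicolon or brace just before the include, or a zone statement placed inside another statement's block. named-checkconf pinpoints it without restarting the daemon:)

        named-checkconf /etc/bind/named.conf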

  • How to create a partition when growing RAID5 with mdadm

    - by hometoast
    I have 4 drives: 2x640GB and 2x1TB. My array is made up of four 640GB partitions at the beginning of each drive. I want to replace both 640GB drives with 1TB drives. I understand I need to: 1) fail a disk, 2) replace it with the new one, 3) partition it, 4) add the disk to the array. My question is: when I create the new partition on the new 1TB drive, do I create a 1TB "Raid Auto Detect" partition? Or do I create another 640GB partition and grow it later? Or perhaps the same question could be worded: after I replace the drives, how do I grow the 640GB raid partitions to fill the rest of the 1TB drive?

    fdisk info:

        Disk /dev/sdb: 1000.2 GB, 1000204886016 bytes
        255 heads, 63 sectors/track, 121601 cylinders
        Units = cylinders of 16065 * 512 = 8225280 bytes
        Disk identifier: 0xe3d0900f

           Device Boot      Start         End      Blocks   Id  System
        /dev/sdb1               1       77825   625129281   fd  Linux raid autodetect
        /dev/sdb2           77826      121601   351630720   83  Linux

        Disk /dev/sdc: 1000.2 GB, 1000204886016 bytes
        255 heads, 63 sectors/track, 121601 cylinders
        Units = cylinders of 16065 * 512 = 8225280 bytes
        Disk identifier: 0xc0b23adf

           Device Boot      Start         End      Blocks   Id  System
        /dev/sdc1               1       77825   625129281   fd  Linux raid autodetect
        /dev/sdc2           77826      121601   351630720   83  Linux

        Disk /dev/sdd: 640.1 GB, 640135028736 bytes
        255 heads, 63 sectors/track, 77825 cylinders
        Units = cylinders of 16065 * 512 = 8225280 bytes
        Disk identifier: 0x582c8b94

           Device Boot      Start         End      Blocks   Id  System
        /dev/sdd1               1       77825   625129281   fd  Linux raid autodetect

        Disk /dev/sde: 640.1 GB, 640135028736 bytes
        255 heads, 63 sectors/track, 77825 cylinders
        Units = cylinders of 16065 * 512 = 8225280 bytes
        Disk identifier: 0xbc33313a

           Device Boot      Start         End      Blocks   Id  System
        /dev/sde1               1       77825   625129281   fd  Linux raid autodetect

        Disk /dev/md0: 1920.4 GB, 1920396951552 bytes
        2 heads, 4 sectors/track, 468846912 cylinders
        Units = cylinders of 8 * 512 = 4096 bytes
        Disk identifier: 0x00000000
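
    (A hedged sketch of the usual sequence: create the replacement partitions at full size up front — md simply uses the smallest member until every member is large enough — then grow the array once, and finally grow whatever sits on top of it:)

        mdadm /dev/md0 --fail /dev/sdd1 --remove /dev/sdd1
        # partition the new 1TB disk with one full-size type-fd partition, then:
        mdadm /dev/md0 --add /dev/sdd1
        # after both 640GB disks are replaced and resynced:
        mdadm --grow /dev/md0 --size=max
        # then grow the layer on top of /dev/md0 (e.g. pvresize or resize2fs)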

  • ps aux as non-root doesn't show all processes

    - by JMW
    Hi, I'm using an Ubuntu 10.04 server. When I run ps aux as root, I see all processes; when I run ps aux as non-root, I see JUST the processes of the current user. After a bit of research I found the following:

        root@m85:~# ls -al /proc/
        total 4
        dr-xr-xr-x 122 root   root      0 2010-12-23 14:08 .
        drwxr-xr-x  22 root   root   4096 2010-12-23 13:30 ..
        dr-x------   6 root   root      0 2010-12-23 14:08 1
        dr-x------   6 root   root      0 2010-12-23 14:08 10
        dr-x------   6 root   root      0 2010-12-23 14:08 1212
        dr-x------   6 root   root      0 2010-12-23 14:08 1227
        dr-x------   6 root   root      0 2010-12-23 14:08 1242
        dr-x------   6 zabbix zabbix    0 2010-12-24 23:52 12747
        [...]

    My first idea was that /proc got mounted in a weird way, but /etc/fstab is OK and it doesn't seem to be mounted strangely. My second idea was a rootkit, but rkhunter tells me that no rootkit is installed. I don't know if it has been like this since the machine was installed or whether it came with an update. I've just installed zabbix-agent on the machine and realized that it didn't work properly. What could have caused such strange permissions (500), and how can I set them back to a normal level (555)? Crazy, I've never seen something like that... Thanks in advance for any help, and merry christmas :) See you.
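
    (A hedged check: the modes on /proc/PID directories are generated by the kernel, not stored on disk, so chmod cannot put them "back" — restrictions like these usually come from how procfs is mounted or from a patched kernel (grsecurity-style proc restrictions, or the hidepid mount option on later kernels). Worth inspecting:)

        mount | grep proc
        grep proc /proc/mounts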

  • How to debug a kernel created using ubuntu-vm-builder?

    - by user265592
    Aim: trying to perform a code walkthrough of which functions get called for sending and receiving packets over the network. I am building a kernel and using gdb for debugging/tracing purposes. I built a VM using the following command:

        time sudo ubuntu-vm-builder qemu precise --arch 'amd64' --mem '1024' \
          --rootsize '4096' --swapsize '1024' --kernel-flavour 'generic' \
          --hostname 'ubuntu' --components 'main' --name 'Bob' --user 'ubuntu' \
          --pass 'ubuntu' --bridge 'br0' --libvirt 'qemu:///system'

    And I can run the VM successfully in QEMU using the following command:

        qemu-system-x86_64 -smp 1 -drive file=tmpGgEOzK.qcow2 "$@" -net nic -net user \
          -serial stdio -redir tcp:2222::22

    Now, I want to debug the kernel using gdb. For this I need an executable with debug symbols (vmlinux), which apparently I don't have, as vm-builder never asked for any such options and simply created a .qcow2 file.

    Question 1: Am I taking the correct approach to solve the problem, and is there an easier way to do it?
    Question 2: Is there a way to debug this kernel using GDB?

    P.S.: I don't have hardware support for KVM. Please correct me if I am wrong. Thanks.
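
    (A hedged sketch of the usual QEMU/GDB loop, which also works without KVM: boot the image with a kernel built with CONFIG_DEBUG_INFO — the packaged kernels don't ship a vmlinux with symbols, so either build one or try the dbgsym packages from Ubuntu's ddebs archive, an assumption worth verifying for precise — let QEMU open a gdb server, and attach with the unstripped vmlinux from that build tree:)

        qemu-system-x86_64 -drive file=tmpGgEOzK.qcow2 -s -S   # -s: gdbserver on :1234, -S: wait at reset
        gdb ./vmlinux
        (gdb) target remote :1234
        (gdb) b tcp_sendmsg        # e.g. break on the TCP send path
        (gdb) c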
