Search Results

  • Cannot connect to server via SSH

    - by Rayne
    I'm running RHEL 6.0, and I accidentally moved the /bin, /boot, /cgroup, console.txt, /data, /dev, /etc to another folder. I think I managed to move these folders back, but now I'm having trouble connecting to the server using SSH, but am able to access the server via VNC. When I tried to connect to the server using a terminal from another server, I get the error ssh_exchange_identification: Connection closed by remote host I'm currently still connected via SSH to the server (haven't closed the window yet), and am still able to access it normally. But if I try to open a new SSH terminal from my current session, I see /bin/bash: Permission denied If I try to open a new SSH File Transfer window from my current session, I get the error File transfer server could not be started or it exited unexpectedly. Exit value 0 was returned. Most likely the sftp-server is not in the path of the user on the server-side I checked and I have Subsystem sftp /usr/libexec/openssh/sftp-server which is the same path as the output of locate sftp-server Also, when I tried to restart sshd, I get the error Couldn't open /dev/null: Permission denied But my /dev/null has the permissions crw-rw-rw- for root,root. How can I resolve this? ETA: Thanks for all your help! I was able to start ssh by running the application directly /usr/sbin/sshd Even though the status of the openssh-daemon is still "stopped".
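
    A quick check worth running here (a sketch, not a confirmed diagnosis): on RHEL 6 with SELinux enforcing, moving directories like /etc and /dev and then copying them back typically leaves them with the wrong security contexts, which produces exactly this kind of "Permission denied" even when the Unix permissions look correct.

        # Verify /dev/null is a character device with mode 666 and check its SELinux label
        ls -lZ /dev/null
        getenforce
        # If the contexts look wrong, relabel the affected trees (paths are assumptions)
        restorecon -Rv /dev /etc /bin /boot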

    Read the article

  • Please, help writing a MIB

    - by facha
    I have a problem with an snmpwalk query returning SNMP variables in a non-uniform way: .1.3.6.1.2.1.10.127.1.3.3.1.2.215 -> Hex-STRING: 24 37 4C 0C 65 0E .1.3.6.1.2.1.10.127.1.3.3.1.2.216 -> Hex-STRING: 24 37 4C 0B A2 DA .1.3.6.1.2.1.10.127.1.3.3.1.2.217 -> STRING: "$7L f:" .1.3.6.1.2.1.10.127.1.3.3.1.2.218 -> STRING: "$7L k2" As you can see, some variables are of a STRING type, others are Hex-STRING. So, I'm trying to write a simple MIB to force them all to come out as Hex-STRING. This is where I've gotten so far: TEST-MIB DEFINITIONS ::= BEGIN PhysAddress ::= TEXTUAL-CONVENTION DISPLAY-HINT "1x:" STATUS current SYNTAX OCTET STRING test OBJECT-TYPE SYNTAX PhysAddresss MAX-ACCESS read-only STATUS current ::= { 1 3 6 1 2 1 10 127 1 3 3 1 2 } END However, snmpwalk doesn't seem to notice my textual convention (even though the "test" variable is being recognized). I still get a mixture of STRINGs and Hex-STRINGs. Could anybody point out where my mistake is? snmpwalk -v2c -cpublic 192.168.1.2 TEST-MIB::test ... TEST-MIB::test.216 = Hex-STRING: 24 37 4C 0B A2 DA TEST-MIB::test.217 = STRING: "$7L f:"
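
    For reference, a quick way to check whether net-snmp is parsing the MIB at all before walking (the -M path is an assumption; point it at wherever TEST-MIB actually lives):

        # Dump the parsed definition of TEST-MIB::test, including its SYNTAX
        snmptranslate -M +/usr/share/snmp/mibs -m +TEST-MIB -Td TEST-MIB::test
        # Then rerun the walk with the same MIB flags so snmpwalk loads the definition too
        snmpwalk -M +/usr/share/snmp/mibs -m +TEST-MIB -v2c -c public 192.168.1.2 TEST-MIB::test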

    Read the article

  • Performance of Virtual machines on very low end machines

    - by TheLQ
    I am managing a few cheap servers as my user base isn't large enough to get much more powerful servers. I also don't have the money lying around to invest in a server to prepare for the larger user base. So I'm stuck with the old hardware I have. I am toying with the idea of virtualizing all the current OSes, most likely with Xen rather than VMware vSphere Hypervisor (AKA ESXi), since ESXi has too strict of an HCL and my hardware is too old. Big reasons for doing so: Ability to upgrade and scale hardware rapidly - This is most likely what I'll be doing as I distribute services, get a bigger server, centralize (electricity bills are horrible), distribute, get a bigger server, etc... Manually doing this by reinstalling the entire OS would be a big pain. Safety from me - I've made many rookie mistakes, like doing lots of risky work on a vital production server. With a VM I can just back up the state, work on my machine, test, and revert if necessary. No worries, and no OS reinstallation. Safety from other factors - As I scale, servers might go down, and a backup VM can instantly be started. Various other reasons. However the limiting factor here is hardware. And I mean very depressing hardware. The current servers run off of a Pentium 3 and a Pentium 4, and have 512 MB and 768 MB of RAM respectively (RAM can be upgraded soon however). Is the virtualization layer small enough to run itself and a Linux OS effectively? Will performance be acceptable (50% CPU overhead for every operation isn't acceptable)? Does it leave enough RAM for the Linux OS? Is this even feasible?

    Read the article

  • Sync desktop Mac environment to laptop

    - by Andrew Vit
    I spend the majority of my time working at my desktop Mac, which I have configured for my web development environment. My spouse has a MacBook for casual use, and I occasionally steal it back when I need to work off-site, or when travelling. The question is how to best synchronize the two so I can switch between them more readily. I've solved a few obvious things by using online services: Email is hosted on IMAP. Working files are in Dropbox. Source code is managed in git. However, the following are things I always miss when jumping on the laptop: Installed Applications (current versions) Installed libraries & utilities (/usr/local) Apache VirtualHosts & other configurations (/etc) Disk image files for VMs My current method is to connect the MacBook via Firewire target mode and rsync the /Users/me home directory, and then cherry-pick the other items I need from Applications, /etc and /usr/local. The problem with this method is that it can be very time consuming due to things like my virtual machine image files, cached emails, etc. How can I make this faster & easier? Can you recommend a solution for configuration management (so I can repeatably install & configure the same software on both), or synchronization (so I can bring the MacBook up to date nightly, over our home network)?
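
    For reference, a minimal sketch of the rsync step described above with the slow items excluded (the volume name and exclude paths are assumptions to adjust):

        # Desktop home directory onto the MacBook mounted in FireWire target disk mode
        rsync -avE --delete \
          --exclude 'Documents/Virtual Machines.localized/' \
          --exclude 'Library/Mail/' \
          --exclude 'Library/Caches/' \
          /Users/me/ "/Volumes/MacBook HD/Users/me/"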

    Read the article

  • Monit unable to start sidekiq on Opsworks server

    - by webdevtom
    I have used AWS Opsworks to create some servers. I have Sidekiq running as part of my Rails application. When I deploy, Sidekiq restarts nicely. I am configuring Monit to watch the pid and start and stop Sidekiq if there are any issues. However, when Monit tries to start Sidekiq, it looks like the wrong Ruby is being used. Oct 17 13:52:43 daitengu sidekiq: /usr/local/lib/ruby/gems/1.9.1/gems/bundler-1.3.4/lib/bundler/definition.rb:361:in `validate_ruby!': Your Ruby version is 1.8.7, but your Gemfile specified 1.9.3 (Bundler::RubyVersionMismatch) Oct 17 13:52:43 daitengu sidekiq: from /usr/local/lib/ruby/gems/1.9.1/gems/bundler-1.3.4/lib/bundler.rb:116:in `setup' Oct 17 13:52:43 daitengu sidekiq: from /usr/local/lib/ruby/gems/1.9.1/gems/bundler-1.3.4/lib/bundler/setup.rb:17 When I run the command from the CLI, Sidekiq launches correctly. $> cd /srv/www/myapp/current && RAILS_ENV=production nohup /usr/local/bin/bundle exec sidekiq -C config/sidekiq.yml >> /srv/www/myapp/shared/log/sidekiq.log 2>&1 & $> ps -aef |grep sidekiq root 1236 1235 8 20:54 pts/0 00:00:50 sidekiq 2.11.0 myapp [0 of 25 busy] My sidekiq.monitrc file: check process unicorn with pidfile /srv/www/myapp/shared/pids/unicorn.pid start program = "/bin/bash -c 'cd /srv/www/myapp/current && /usr/local/bin/bundle exec unicorn_rails --env production --daemonize -c /srv/www/myapp/shared/config/unicorn.conf'" stop program = "/bin/bash -c 'kill -QUIT `cat /srv/www/myapp/shared/pids/unicorn.pid`'"

    Read the article

  • PXE bootable image for terminal server?

    - by HeavenCore
    We have 300 Windows XP machines on cruddy old hardware across the company. With extended support for XP ending April next year we're looking into our options. A couple of options: Replace the 300 PC's with full Windows 7 PC's (£100k +?) - no use of terminal server (our current model) Replace the 300 PC's with off-the-shelf thin clients & make use of our terminal server - Cheaper clients but Terminal Server CALs required? Keep the 300 PC's, replace Windows XP with a Linux thin client capable of connecting to our terminal server - no hardware costs, just Terminal Server CALs required? Keep the 300 PC's - remove hard drives and make use of a PXE bootable "thin client" to connect to our terminal server If we were to choose option 4, what are the options out there? Are there any official PXE bootable thin clients for terminal server out there? If so, what are the licence requirements? Are there options we haven’t considered? There must be lots of companies out there in this situation - curious what the current trend is for this problem? Edit: Option 5 - Create a bootable Windows PE image with RDP auto start and use that as a "thin client" for our terminal server - is Windows PE licence free in such a model?

    Read the article

  • FTP on Linux "Failed to retrieve directory listing" not firewall issue

    - by Jaka Prasnikar
    I've got a VPS in Germany running Debian x64. I have a very strange issue. I have ISPConfig CP installed using proftpd and I cannot connect to FTP by any means. A few hours ago I had DirectAdmin installed on CentOS on the same VPS, with the same issue. Simply, when I connect to the FTP server I get this: Status: Resolving address of web02.defikon.com Status: Connecting to 130.255.190.71:21... Status: Connection established, waiting for welcome message... Response: 220---------- Welcome to Pure-FTPd [privsep] [TLS] ---------- Response: 220-You are user number 1 of 50 allowed. Response: 220-Local time is now 12:15. Server port: 21. Response: 220-This is a private system - No anonymous login Response: 220-IPv6 connections are also welcome on this server. Response: 220 You will be disconnected after 15 minutes of inactivity. Command: USER default1 Response: 331 Password required for default1 Command: PASS ****** Response: 230-User default1 has group access to: client0 sshusers Response: 230 OK. Current restricted directory is / Command: OPTS UTF8 ON Response: 200 OK, UTF-8 enabled Status: Connected Status: Retrieving directory listing... Command: PWD Response: 257 "/" is your current location Command: TYPE I Response: 200 TYPE is now 8-bit binary Command: PASV Error: Connection timed out Error: Failed to retrieve directory listing I even tried telnet localhost 21 and the same happens. Once I issue the command "LIST" I get a timeout. I've tried everything and I can't get this to work =( Please help! P.S.: iptables is turned off.

    Read the article

  • Apache2 MPM-prefork MPM-worker multiple instances on same Ubuntu host machine possible?

    - by user60985
    I have a live Apache2/MPM-Worker instance running Django. I want to also run an Apache2/MPM-prefork instance to run some Drupal6 applications on the same host machine and utilize a vast selection of PHP modules that run on the prefork model. I plan to use my MPM-worker instance to reverse proxy to the Apache2-prefork instance for URLs starting with myhost.com/drupal6/. It seems theoretically doable/configurable by having the second Apache2-prefork instance configured to listen on an internal port, say 127.0.0.1:8080, and having my current Apache2-worker configured to proxy pass and reverse pass to it for the 'drupal6' URLs. However, how do I compile or install the apache2-prefork version so it has a different executable name than /usr/sbin/apache2, for example /usr/sbin/apache2p, and so apache2ctl has a different name, say apache2pctl, and that apache2pctl invokes the /usr/sbin/apache2p instead of /usr/sbin/apache2... and so on down the line (e.g. /etc/apache2p) so I can start and restart my two instances independently? As I understand it, no single 'apache2' executable can be compiled with both the MPM-prefork and MPM-worker modules, so it seems I need two separate versions of the apache2 MPM flavors. But then I need to invoke and control them by separate names, I assume. I looked at the configuration options for apache2 and I am a bit queasy about compiling a second apache2 version with prefork because I am not sure I can set all the options so that none of my current apache2 files is overwritten. Is there a way? Is there a standard solution to separately installing and controlling prefork and worker apache2 executables on the same machine without them stepping on each other during installation or operation?
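
    One hedged sketch of the usual approach (paths and flags are assumptions, shown for an Apache 2.2-style source build): give the prefork build its own prefix, so it installs its own httpd, apachectl and conf tree and never touches the packaged /usr/sbin/apache2 or /etc/apache2.

        # Build a prefork-only httpd into a private prefix
        ./configure --prefix=/opt/apache2-prefork --with-mpm=prefork --enable-so
        make && sudo make install
        # Have it listen only on the internal port, e.g. "Listen 127.0.0.1:8080" in
        # /opt/apache2-prefork/conf/httpd.conf, then control it independently:
        sudo /opt/apache2-prefork/bin/apachectl -k start
        sudo /opt/apache2-prefork/bin/apachectl -k graceful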

    Read the article

  • Skype Connect as SIP/Trunk for Asterisk

    - by Kaurin
    First off: I'm not sure if this should be on superuser or here. I have recently built a few Asterisk boxes with OpenVOX FXO/FXS ports with little or no trouble. My current project is building an Asterisk box with SIP trunks. My current employer insisted on getting Skype Business/Skype Connect for that purpose. After reviewing the Skype Connect plan, I agreed, because I thought it was going to be straightforward: purchase G729 licences and set up SIP trunk/trunks. Boy was I wrong :) Here is the setup: The setup is for calling US numbers only via Skype (we got Skype US minute bundles in Skype Connect) AsteriskNOW - Asterisk 1.4 + asterisk-gui Trunks: SIP Trunk configured with Skype Connect - shows as registered Users: 2 test extensions. Both work fine when calling each other, voicemail etc. works fine too The Asterisk box is behind a Mikrotik router which I configured to forward all relevant ports: 5060-5090 UDP, 10000-20000 UDP. When trying out an extension outside of my LAN, it worked. I could make calls to the other extension. Outgoing rule: _NXXXXXXXXX Strip:0 Prepend:+1 Use skype trunk Inbound rule: Trunk: Skype Pattern: s Destination: Extension1 (6210) Here is the output of the Asterisk CLI (-rvvvvv) with outgoing calls: http://pastebin.com/eWVpL72e where you can see the circuit-busy response when using trunk1 (Skype). When calling my Skype Connect number from the outside, I get nothing in the logs. Can anyone with Skype Connect / Asterisk experience help out? :)

    Read the article

  • Mac Mini server (10.6) behind router with FQDN hostname

    - by thechriskelley
    I have a Mac Mini running Mac OS 10.6.6 Server that will be part of a local network, and a static IP from my ISP. I'd like to set up DNS for the Mini with a FQDN as the hostname (example.com) properly. The Mini is behind a router (Apple Airport Extreme) and is given a private, static IP address. I can't assign it the public static IP directly because it's behind a router with DHCP/NAT for other machines on the local net. My end goal here is for services to resolve to the server properly from outside and inside the local network to users via example.com (and subdomains like mail.example.com, www.example.com), which will point to the public static IP assigned to the router. Will DNS work/resolve properly (for mail services and other subdomains) if it has a private ip address, but the necessary services are forwarded properly through NAT? I'm open to any (hopefully better) suggestions, as my current setup doesn't seem like it's the best way. Currently, more hardware or another public static IP is not possible. With the current setup, it seems as though one static IP is not necessary anyway. Thanks in advance for any insight.

    Read the article

  • How to set up IP forwarding on Nexenta (Solaris)?

    - by Gleb
    I am trying to set up IP forwarding on my Nexenta box: root@hdd:~# uname -a SunOS hdd 5.11 NexentaOS_134f i86pc i386 i86pc Solaris The box has 2 network interfaces: root@hdd:~# ifconfig -a lo0: flags=2001000849<UP,LOOPBACK,RUNNING,MULTICAST,IPv4,VIRTUAL> mtu 8232 index 1 inet 127.0.0.1 netmask ff000000 e1000g1: flags=1001100843<UP,BROADCAST,RUNNING,MULTICAST,ROUTER,IPv4,FIXEDMTU> mtu 1500 index 2 inet 192.168.12.2 netmask ffffff00 broadcast 192.168.12.255 ether 68:5:ca:9:51:b8 myri10ge0: flags=1100843<UP,BROADCAST,RUNNING,MULTICAST,ROUTER,IPv4> mtu 9000 index 3 inet 10.10.10.10 netmask ffffff00 broadcast 10.10.10.255 ether 0:60:dd:47:87:2 lo0: flags=2002000849<UP,LOOPBACK,RUNNING,MULTICAST,IPv6,VIRTUAL> mtu 8252 index 1 inet6 ::1/128 192.168.12.0 is my normal LAN with 192.168.12.1 being the firewall/gateway. 10.10.10.0 is a separate LAN for iSCSI (with no internet access). I want to set up IP forwarding so that a computer on 10.10.10.0 will be able to access the internet by using 10.10.10.10 as a gateway (I don't need any port forwarding). I have turned on IP forwarding: root@hdd:~# routeadm Configuration Current Current Option Configuration System State --------------------------------------------------------------- IPv4 routing disabled disabled IPv6 routing disabled disabled IPv4 forwarding enabled enabled IPv6 forwarding disabled disabled Routing services "route:default ripng:default" Routing daemons: STATE FMRI disabled svc:/network/routing/rdisc:default disabled svc:/network/routing/route:default disabled svc:/network/routing/legacy-routing:ipv4 disabled svc:/network/routing/legacy-routing:ipv6 disabled svc:/network/routing/ripng:default online svc:/network/routing/ndp:default But when I try to start ipnat, I get an error: root@hdd:~# ipnat -CF -f /etc/ipf/ipnat.conf ioctl(SIOCGNATS): I/O error Here is the config: root@hdd:~# cat /etc/ipf/ipnat.conf #!/sbin/ipnat -f - # map e1000g1 10.10.10.10/24 -> 192.168.12.2/32 So the question is how to fix this. Thanks in advance!
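
    A guess worth ruling out (an assumption, not a confirmed fix): on Solaris-derived systems the ipnat/ipf ioctls tend to fail with I/O errors while the ipfilter kernel module is not loaded, so the ipfilter SMF service may need enabling before the NAT rules can be loaded.

        svcadm enable network/ipfilter
        svcs network/ipfilter            # should report "online"
        ipnat -CF -f /etc/ipf/ipnat.conf
        ipnat -l                         # list the NAT rules now active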

    Read the article

  • Apache 2.4 Prefork vs. PHP-FPM Event shows significant decrease in requests per second

    - by Mark
    On my Apache 2.4.2 server with a standard mod_php Prefork setup these are my server-status results Current Time: Wednesday, 24-Oct-2012 19:36:24 CDT Restart Time: Wednesday, 24-Oct-2012 01:27:30 CDT Parent Server Config. Generation: 1 Parent Server MPM Generation: 0 Server uptime: 18 hours 8 minutes 54 seconds Total accesses: 14304233 - Total Traffic: 342.3 GB CPU Usage: u12584.6 s721.93 cu.66 cs3.43 - 20.4% CPU load 219 requests/sec - 5.4 MB/second - 25.1 kB/request 507 requests currently being processed, 355 idle workers ______KKKKR_K______W_KKC___CKK_K_K_W__CC_KKK_KK._K_K_KK._KKKK_K_ K_____KK_KKKK_K_KK__K___KK_K___K_____CKKK_WK_K_____KCKK__K___K_K K_CK_K_K_____K__KKKK_K__K___K_KK_K_K_KKKCK____________KK_CK__KKK __C_KKKKKKK___CK___C_KKK_K__C__K_CK____KKK__K__K__K_K__KK_CK_K__ _KKKKK_K_W__KK______K___K__W___C_K__K____KKKKKKKK.KKKKKKKCK_K___ _C_KK_K_WK__K_KK__K__RK_KK___K____K_KK_K_K___RKC_KKKK___KKKC_K_W _C_KK_KK__W____KC__KKK__KKK___K___KKK_KK_K_KKW__K_KR_KK_KK__KKK_ R__KKK__KKKKKK__K_KKKKK_K__K_K___KKW_________KK_K___KKK___KK.K_C KKKKKKW_____K__K_KKC_KCKK_K_KK_K__KK__K___K__KK_KK__________KK__ __K___KK_K__K_C_KK_K___KK__KK__K__KCK_K__KK_________K_K_KK__.K__ K_CKK.CCRW__KKKKKKKKKKKC__W____K___KWK_KK_KKC______.K_K_KK_KKKC_ __KKK_W_KCKKK_K_K____CCCK__KC_KKKK_K____K_CK_K____K__K____KKK_KK KK___K_K_K__KW__KCKKKK____WKWK__K_KKRKK__C_K_KK_KK_K__KKCC_K__C_ KK_K___K_KK______K_____CKK_K_______KK_CKCK__KKKKK____K__K..K____ __KKWK_KW__KKK__K_KKK___K_KK_KKK__KK___KK___KK_KK___KK____KKWKKC KK_KKKK_................................` When I switch to a PHP-FPM setup with the Event MPM with no other variables changes, my requests/sec plummet and overall apache response is garbage. Current Time: Wednesday, 24-Oct-2012 19:51:21 CDT Restart Time: Wednesday, 24-Oct-2012 19:48:03 CDT Parent Server Config. Generation: 1 Parent Server MPM Generation: 0 Server uptime: 3 minutes 18 seconds Total accesses: 18720 - Total Traffic: 307.1 MB CPU Usage: u16.57 s4.74 cu0 cs0 - 10.8% CPU load 94.5 requests/sec - 1.6 MB/second - 16.8 kB/request 15 requests currently being processed, 49 idle workers PID Connections Threads Async connections total accepting busy idle writing keep-alive closing 11701 114 no 10 22 0 66 38 11702 134 no 5 27 0 81 48 Sum 248 15 49 0 147 86 __R_R__W___RRW________RR__R___W_W_______W_____W_____________R_R_ Is there any obvious reason anyone could think of why this would be the case. I can provide any other additional stats or server setup info to help out. Ive tried tweaking everything up and down and nothing really helps get the PHP-FPM setup anywhere near a baseic prefork/mod-php setup. Thanks!

    Read the article

  • How could this happen to my FTP server?

    - by srisar
    Hi, it's me again. I just don't get why I'm getting this message: when I try to connect to my FTP server through my WAN address (112.135.26.115) it says 530 user access denied, but when I give the same data to http://ftptest.net the result is as follows... Status: Resolving address of 112.135.26.115 Status: Connecting to 112.135.26.115 Status: Connected, waiting for welcome message Reply: 220-FileZilla Server version 0.9.34 beta Reply: 220-written by Tim Kosse ([email protected]) Reply: 220 Please visit http://sourceforge.net/projects/filezilla/ Status: CLNT http://ftptest.net on behalf of 112.135.26.115 Reply: 200 Don't care Status: USER saravana Reply: 331 Password required for saravana Status: PASS ********* Reply: 230 Logged on Status: SYST Reply: 215 UNIX emulated by FileZilla Status: FEAT Reply: 211-Features: Reply: MDTM Reply: REST STREAM Reply: SIZE Reply: MLST type*;size*;modify*; Reply: MLSD Reply: AUTH SSL Reply: AUTH TLS Reply: UTF8 Reply: CLNT Reply: MFMT Reply: 211 End Status: PWD Reply: 257 "/" is current directory. Status: Current path is / Status: TYPE I Reply: 200 Type set to I Status: PASV Reply: 227 Entering Passive Mode (112,135,26,115,43,9) Status: MLSD Reply: 150 Connection accepted Listing: type=dir;modify=20100322113235; it_is_working!_Andrejs_Cainikovs_from_serverfault.com Listing: type=file;modify=20100322110559;size=5; New Text Document.txt Reply: 226 Transfer OK Status: Success Can anyone say why this happens to me? Please, I've been trying the whole day!!

    Read the article

  • How can I use the Homebrew Python with Homebrew MacVim on Mountain Lion?

    - by Stephen Jennings
    I originally asked and answered this question: How can I use the Homebrew Python version with Homebrew MacVim? These instructions worked on Snow Leopard using Xcode 4.0.1 and associated developer tools. However, they no longer seem to work on Mountain Lion with Xcode 4.4.1. My goal is to leave the system's version of Python completely untouched, and to only install PyPI packages into Homebrew's site-packages directory. I want to use the vim_bridge package in MacVim, so I need to compile MacVim against the Homebrew version of Python. I've edited the MacVim formula to add these to the arguments: --enable-pythoninterp=dynamic --with-python-config-dir=/usr/local/Cellar/python/2.7.3/Frameworks/Python.framework/Versions/Current/lib/python2.7/config Then I install with the command: brew install macvim --override-system-vim --custom-icons --with-cscope --with-lua However, it still seems to be somehow using Python 2.7.2 from the system. This seems strange to me because it also seems to be using the correct executable. :python print(sys.version) 2.7.2 (default, Jun 20 2012, 16:23:33) [GCC 4.2.1 Compatible Apple Clang 4.0 (tags/Apple/clang-418.0.60)] :python print(sys.executable) /usr/local/bin/python $ /usr/local/bin/python --version Python 2.7.3 $ /usr/local/bin/python -c "import sys; print(sys.version)" 2.7.3 (default, Aug 12 2012, 21:17:22) [GCC 4.2.1 Compatible Apple Clang 4.0 ((tags/Apple/clang-421.0.60))] $ readlink /usr/local/lib/python2.7/config /usr/local/Cellar/python/2.7.3/Frameworks/Python.framework/Versions/Current/lib/python2.7/config I've removed everything in /usr/local and reinstalled Homebrew by running these commands: $ ruby <(curl -fsSkL raw.github.com/mxcl/homebrew/go) $ brew install git mercurial python ruby $ brew install macvim (nope, still broken) $ brew remove macvim $ ln -s /usr/local/Cellar/python/..../python2.7/config /usr/local/lib/python2.7/config $ brew install macvim
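
    For reference, a couple of hedged checks on which Python the MacVim binary actually loads (the Cellar paths are assumptions based on the layout quoted above):

        # With --enable-pythoninterp=dynamic the framework may not appear in the
        # link table, but it is still worth a look:
        otool -L /usr/local/Cellar/macvim/*/MacVim.app/Contents/MacOS/Vim | grep -i python
        # The build summary also records which interpreter flags were compiled in:
        /usr/local/Cellar/macvim/*/MacVim.app/Contents/MacOS/Vim --version | grep -i python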

    Read the article

  • Lighttpd - byte range request doesn't work. can't stream mp4

    - by w-01
    I am attempting to use the latest Flowplayer. (if it could work it would be pretty awesome btw) http://flowplayer.org One of the cool things about it is it uses the new HTML5 video element and supports random seeking/playback. In order to do this, you need a byte range request capable server on the backend. Luckily I'm using Lighttpd 1.5.0 on the backend. Unfortunately the current behavior is that when I do a random seek, the video simply restarts itself from the beginning. The docs say: "For HTML5 video you don't have to do any client side configuration. If your server supports byte range requests then seeking should work on the fly. Most servers including Apache, Nginx and Lighttpd support this." On my page, using Chrome web developer tools, I can see that when the video is requested, the server response headers indicate it is able to accept byte ranges: Accept-Ranges: bytes. When I do a random seek in the player, I can see that byte ranges are requested appropriately in the request header: Range: bytes=5668-10785 I can also verify the moov atom is at the front of the video file. My question here is whether there is something else on the lighttpd side I'm missing in order to enable byte-range requests? The reason I ask is because the current behavior suggests that lighttpd simply doesn't understand the byte range request and is just re-serving the video from the beginning. Update: it's clearer to put this here. As per RJS' suggestion I ran a curl command. In the response it looks like lighttpd is working as expected: Content-Range: bytes 1602355-18844965/18844966 Content-Length: 17242611
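
    For reference, a range-request check of that sort looks roughly like this (the URL is a placeholder); a server honoring ranges answers "206 Partial Content" with a matching Content-Range header, while a plain "200 OK" means the range was ignored:

        curl -s -D - -o /dev/null -H "Range: bytes=5668-10785" http://example.com/video.mp4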

    Read the article

  • Using NginX and Apache alongside each other for both static and dynamic files

    - by faridv
    Background: I've searched a lot and found these useful threads about using Apache or NginX for static or dynamic files. But they are old (mostly about 1 or 2 years ago) and I think both web servers, specifically Nginx, have had important changes in performance and usage. So I think taking another look at this issue cannot be that bad. Nginx (for static files) and Apache (for dynamic content)? nginx better than apache for dynamic content? [closed] Apache or NGINX for PHP? Nginx as reverse proxy to Apache with only dynamic content? My question: I have a PHP web application with lots of dynamic files and lots of static content (videos, images, etc.), and it has been running on a CentOS 6 server with Apache 2.2 for the past 2 months. In the past few days, the number of our site visitors has grown very fast, and I think that if this number continues to increase at the current rate, we will need to change many things (web server, application, etc.) to prevent failures. Because of hardware limitations that we are facing, I thought that it's best for us to start with the web server. Should I start with something else? Should I try to increase the performance of my PHP application and forget about the web server for now (even if it's going to take a long time!)? Because of huge usage of .htaccess files (for redirection, rewrites, etc.), I think it's going to be painful to migrate to NginX as the default web server, or even just for dynamic files. Does this mean that I can't even use Nginx as a reverse proxy? I'm not sure the latest stable versions of NginX and PHP-FPM perform better than my current Apache, and my limitations (too many things) won't let me give it a try. Which one is doing better currently? What will I lose by migrating to Nginx? To make it short, what should I do?

    Read the article

  • btrfs: can I create a btrfs file system with data as JBOD and metadata mirrored?

    - by Yogi
    I am trying to build a home server that will be my NAS/media server as well as the XBMC front end. I am planning on using Ubuntu with btrfs for the NAS part of it. The current setup consists of a 1TB HDD for the OS etc. and two 2TB HDDs for data. I plan to have the 2TB HDDs used as a JBOD btrfs system to which I can add HDDs as needed later, basically growing the filesystem online. The way I had set up the file system for testing was to have just one of the HDDs connected while installing the OS and have btrfs on it mounted as /data. Later on I added another HDD to this file system. When the second disk was added, btrfs made the data RAID 0, with the metadata being RAID 1. However, this presents a problem: even if one of the disks fails I lose all my data (mostly media). Also, most of the time the server will be running without doing any disk access, i.e. the HDDs can be spun down; when an access request comes in, with the current RAID 0 setup both disks will spin up. With a JBOD, only the disk that has the file needs to be spun up. This should hopefully reduce the wear on each disk. So, is there a way in which I can have btrfs set up such that metadata is mirrored but data stays in a JBOD formation? Another question I have is this: I understand that a full drive failure in JBOD will lose the data on that drive, but with metadata mirrored across all drives, will this help the filesystem correct errors that might creep in (e.g. bit rot), and is btrfs capable of doing this?
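
    The profile combination described does exist as an explicit option (worth verifying against the btrfs tools and kernel in use, since balance filters are relatively recent); a sketch:

        # New filesystem: data unstriped/JBOD-style ("single"), metadata mirrored
        mkfs.btrfs -d single -m raid1 /dev/sdb /dev/sdc
        # Converting an existing filesystem in place, on kernels with balance filters:
        btrfs balance start -dconvert=single -mconvert=raid1 /mnt/data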

    Read the article

  • Apache, suexec, PHP, suPHP

    - by Chris_K
    While I'm quite comfortable as a Linux user, my Linux Admin-fu is a bit weak. Thus, I'm here looking for guidance with a CentOS server I'm about to build. I need to setup an Apache2 web server for a few of our clients. I want each client's web content to be under their home directory (USERDIR in apache.conf, right?) for the static HTML sites. I want Apache to run as the client (suexec?). Some of their stuff will be PHP apps and I'm under the impression I'll want to look at suphp as well then. So basically I want to look like a small version of a shared web hosting company. Considering how common those are I thought I'd easily find a nice current How-To guide on setting this all up but so far I've had very little luck. I suspect my search words are off. So the questions (feel free to answer any or all): Anyone have some solid links to current/modern guides that would help me set this all up? No, the apache documentation site is not a guide ;-) Since I have a mix of static sites and PHP apps do I want/need both suexec and suphp installed? If so, does that introduce any challenges I should be aware of? Should I be looking at other options instead of suexec and suphp? I plan to give the end users SSH, SFTP or SCP access to their stuff (if that affects anything). Thanks in advance for your help.

    Read the article

  • See former key sequences in vim

    - by Vasiliy Sharapov
    Sometimes I share screenshots and clips of vim usage with others. It would be nice to expand on the part of the status bar highlighted in this picture: I would like some way to make previous key sequences visible as well, such as: y2w jj f[ p 2d - You can see the key sequences leading up to the current one. I'll elaborate on my wish list at the bottom. Is something like this available as a plugin or vim script? The sheer number of scripts available on vim online makes this hard to find by keyword. Some features I would hope for (but seem improbable): Delimit key sequences with a non-keyboard character instead of space, and a different one for the current command, so y2w jj f[ p 2d might become y2w¦jj¦f[¦p » 2d Replace keys that have a letter alternative with the alternative, such as the right arrow key - ^[[C with the equivalent l. Edit: To clarify, the right arrow key is a valid key in vim, but has no character to represent it; the l key performs the same function and could/should substitute it. Have previous keystrokes run all the way to the beginning of the line (instead of just one or two), and just have vim's command prompt overwrite it when necessary. Replace some keystrokes with a more elegant alternative, for example hhhhh with 5h or more impressively d2f) with d% (in the appropriate situation).

    Read the article

  • Purge print driver cache on windows 7 with powershell script

    - by Doltknuckle
    [Background] We have been having trouble with our network clients suddenly being unable to print. They get an odd error with a hex code. We determined that something in the driver was messed up and we could resolve the issue by clearing the driver cache and reinstalling the driver. This happens to random computers every so often. We're assuming this is a bug with the latest Dell 2330dn driver since that is the only model that has this problem. [Problem] What we are looking to do is write a Powershell script that would clear the driver cache and redownload the driver. I see a ton of scripts out there to manage queues, servers, and ports, but nothing for local driver cache management. [Current Workaround] Since we have to do this manually, I'll write out the steps so you know what we want this script to replicate. Disable print spooler Restart machine Delete contents of: C:\windows\system32\spool\drivers\w32x86 Enable print spooler and start service. Delete the network printer object and re-add network printer off of server. [Request] I'm good enough with powershell to translate the above workaround into a pair of scripts. I'd like to find a more elegant solution then my current workaround. Any suggestions?

    Read the article

  • Load Testing a Security/Gateway Appliance

    - by Joel Coel
    In a couple weeks I will load testing a security/gateway appliance. We're a small residential college, and that "residential" means the traffic moving through the appliance is a bit like the Wild West. We have everything from Facebook to World of Warcraft, BitTorrent to Netflix, or Halo to YouTube... basically anything you might find in the home of a high-school or college aged person. Somewhere in there some real academic work gets done as well. We rely on our current appliance for traffic shaping, antivirus, malware filtering, intrusion detection on our servers, logging and abuse reporting, and even some content filtering. All this puts a decent load when we have students around, and I'm concerned about the ability of the new candidate to keep up. On paper it should handle things, but I'm worried. Prior experience is that vendors greatly over-report what an appliance can handle. The product also includes a licensed session limit, and I'm also worried that just a few misbehaving students could unwittingly bring us to that limit and cause service disruptions. I need to know this will work for our campus in order to commit to it. Going a performance level higher in that product takes the pricing way out of line with what we expect and have done in the past. What I need is a good way to load test this guy. My problem is that our current level of summer traffic is less than one percent of what it will be when students come back just six weeks from now. Any ideas on how to really stress this thing and see what it can do, in a way that will give me some clear ideas o. How that will scale for our campus? For the curious, I'm looking at a Watchguard 515, but it could be anything. If I were evaluating a competitor, I'd ask the same question.

    Read the article

  • Windows Server 2003 SBS domain in multiple sites

    - by E3 Group
    We have about 25 employees in our current office and are looking to open up another office in another capital city housing about 15 employees. In our current office, we are running a domain hosted by a 2003 SBS server and I've been tasked by the boss to expand our infrastructure to the new office in the cheapest way possible (cheapest way in the short run, that is, because my boss doesn't think more than 6 months ahead). So I'm looking to get a second-hand server and have it run Server 2003 Std with Exchange Server 2003. These are the things that it needs to do: Replicate shared folders that are hosted in the parent LAN. Deliver emails hosted in the parent Exchange Server. Somehow link up with the parent domain controller and push the AD to the remote site. I'm pretty sure 3 is impossible but the DC would be available if a VPN connection is present, right? On that note, would I be looking at hardware VPN connections? I'm not sure how to deploy the new site as this is my first time doing it and I'm making it especially difficult for myself, seeing as the AD and DC are on an SBS server. Would I first start by establishing a VPN connection and then joining the new server to the domain? Will things 'just work' if I install Exchange onto the new server and point Outlook clients at it? And how would I be able to replicate shared folders?

    Read the article

  • Switch Windows 8 from a hybrid MBR/GPT => GPT only on Macbook Pro Retina

    - by Sid
    I used DiskUtility+Bootcamp Wizard to setup my hard drive for Windows 8 (final MSDN). Somewhere in that process, the Apple tools turned my GPT disk into a hybrid MBR/GPT. All my 4 primary MBR partitions are used up, so when I try turning on Bitlocker in Windows 8, it complains about not finding a System drive. I know on Windows 8 the Bitlocker setup tries to create the 200(?)MB system partition if it's missing. However with all 4 partitions filled I suspect it can't create system drive = it can't find it = throws back an error like "BitLocker Setup could not find a target system drive. You may need to manually prepare your drive for BitLocker". I've already tried disabling hibernation, swap file etc. Now I'm thinking that if I were to get rid of the MBR scheme altogether, perhaps I can be alright within the GPT world without MBR's 4 primary partitions limit. So, how can I get rid of the MBR tables on the hybrid scheme in a manner that still leaves Mac OS and Windows 8 in working conditions? Details: Hardware is the MacbookPro Retina. Primary MBR partitions are consumed as follows: EFI partition HFS+ partition (=encrypted, therefore ="Apple_CoreStorage") HFS+ partition (Recovery partition, contains unencrypted Mac bootloader) NTFS partition (Windows8 all-in-one partition) diskutil list output sid-mbpr:~ sid$ diskutil list /dev/disk0 #: TYPE NAME SIZE IDENTIFIER 0: GUID_partition_scheme *251.0 GB disk0 1: EFI 209.7 MB disk0s1 2: Apple_CoreStorage 160.0 GB disk0s2 3: Apple_Boot Recovery HD 650.0 MB disk0s3 4: Microsoft Basic Data Win8 90.1 GB disk0s4 GPT vs MBR addresses sid-mbpr:~ sid$ sudo gptsync /dev/rdisk0 Password: Current GPT partition table: # Start LBA End LBA Type 1 40 409639 EFI System (FAT) 2 409640 312909639 Unknown 3 312909640 314179175 Mac OS X Boot 4 314179584 490233855 Basic Data Current MBR partition table: # A Start LBA End LBA Type 1 1 409639 ee EFI Protective 2 409640 312909639 ac Apple RAID 3 312909640 314179175 ab Mac OS X Boot 4 * 314179584 490233855 07 NTFS/HPFS Status: GPT partition of type 'Unknown' found, will not touch this disk.** **: Ignore this message, the gptsync tool is old and doesn't understand the UUID for "Apple_CoreStorage" / FileVault2 partitions. Since LBA addresses are alright, safe to ignore this message.
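
    One commonly cited route, sketched here with heavy caveats (not a tested recipe for this machine; gdisk is not part of OS X and must be installed separately): gdisk can replace the hybrid MBR with a fresh protective MBR, leaving the GPT alone. Note the important assumption that Windows is booting via EFI; if it was installed in BIOS/CSM mode it boots through those hybrid MBR entries, and removing them without first switching Windows to EFI boot will likely leave it unbootable.

        sudo gdisk /dev/disk0
        #   p - print the GPT and confirm this is the right disk
        #   x - enter the experts menu
        #   n - create a new protective MBR (discards the hybrid entries)
        #   w - write the table and quit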

    Read the article

  • Automatically Kill/Restart Process(es) When Memory is Critically Low

    - by nemesisfixx
    I have a Debian Wheezy VPS box where I am running a couple of Django apps in production. Ideally, I would have tried to address my current memory footprint issues by optimizing the apps, adding more RAM, or augmenting with swap. But the problem is that I doubt there's much memory optimization I'd milk from optimizing the Django apps (the stack being open-source and robust), and adding RAM is a cost constraint for me (this is a remote VPS); also, the host doesn't offer options to use swap! So, in the meantime (as I wait to secure more resources to afford more RAM), I wish to mitigate the scenarios where the server runs out of memory to the point that I have to request a VPS restart (as in, at that point, I can't even SSH into the box!). So, what I would love in a solution is the ability to detect when a process (or generally, total system memory usage) exceeds a certain critical amount (for example, the free RAM falls to say 10%) - which I've noticed occurs after the VPS has been up for long, and when traffic to some of the heavy apps suddenly spikes (most are just staging apps anyway). So, I wish to be able to kill/restart the offending process(es) - most likely Apache. Doing this manually in these situations has restored sane memory usage levels - a hint that possibly one or more of the Django apps has a memory leak. In brief: Monitor overall system RAM usage. When free RAM falls below a given critical threshold (say below 10%), kill/restart the offending process(es) - or simpler, if we assume from my current log analysis (using linux-dash) that Apache is often the offender, then kill/restart it. Rinse and repeat...
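
    A bare-bones sketch of the watchdog described above (the 10% threshold, the use of the "-/+ buffers/cache" figure, and restarting Apache specifically are all assumptions to adjust); run it from cron every minute or so:

        #!/bin/bash
        THRESHOLD=10
        # Free memory as a percentage, ignoring buffers/cache
        read used free < <(free -m | awk '/buffers\/cache/ {print $3, $4}')
        pct=$(( free * 100 / (used + free) ))
        if [ "$pct" -lt "$THRESHOLD" ]; then
            logger "low-memory watchdog: only ${pct}% free, restarting apache2"
            /etc/init.d/apache2 restart
        fi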

    Read the article

  • OpenVPN server throws an "access denied" error

    - by HackToHell
    OpenVPN refuses to start up and exits with this error ever since I upgraded Ubuntu from 11.04 to 11.10: Dec 14 19:12:38 oogle ovpn-server[32150]: OpenVPN 2.2.0 i686-linux-gnu [SSL] [LZO2] [EPOLL] [PKCS11] [eurephia] [MH] [PF_INET6] [IPv6 payload 20110424-2 (2.2RC2)] built on Jul 4 2011 Dec 14 19:12:38 oogle ovpn-server[32150]: NOTE: the current --script-security setting may allow this configuration to call user-defined scripts Dec 14 19:12:38 oogle ovpn-server[32150]: Note: cannot open openvpn-status.log for WRITE Dec 14 19:12:38 oogle ovpn-server[32150]: Note: cannot open ipp.txt for READ/WRITE Dec 14 19:12:38 oogle ovpn-server[32150]: Diffie-Hellman initialized with 1024 bit key Dec 14 19:12:38 oogle ovpn-server[32150]: Cannot load private key file server.key: error:0200100D:system library:fopen:Permission denied: error:20074002:BIO routines:FILE_CTRL:system lib: error:140B0002:SSL routines:SSL_CTX_use_PrivateKey_file:system lib Dec 14 19:12:38 oogle ovpn-server[32150]: Error: private key password verification failed Dec 14 19:12:38 oogle ovpn-server[32150]: Exiting Dec 14 19:12:46 oogle ovpn-server[32201]: OpenVPN 2.2.0 i686-linux-gnu [SSL] [LZO2] [EPOLL] [PKCS11] [eurephia] [MH] [PF_INET6] [IPv6 payload 20110424-2 (2.2RC2)] built on Jul 4 2011 Dec 14 19:12:46 oogle ovpn-server[32201]: NOTE: the current --script-security setting may allow this configuration to call user-defined scripts Dec 14 19:12:46 oogle ovpn-server[32201]: Note: cannot open openvpn-status.log for WRITE Dec 14 19:12:46 oogle ovpn-server[32201]: Note: cannot open ipp.txt for READ/WRITE Dec 14 19:12:46 oogle ovpn-server[32201]: Diffie-Hellman initialized with 1024 bit key Dec 14 19:12:46 oogle ovpn-server[32201]: Cannot load private key file server.key: error:0200100D:system library:fopen:Permission denied: error:20074002:BIO routines:FILE_CTRL:system lib: error:140B0002:SSL routines:SSL_CTX_use_PrivateKey_file:system lib Dec 14 19:12:46 oogle ovpn-server[32201]: Error: private key password verification failed Dec 14 19:12:46 oogle ovpn-server[32201]: Exiting
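
    A couple of quick checks, offered as assumptions rather than a diagnosis: a bare "server.key" in the config is resolved relative to OpenVPN's working directory (or a --cd directive, if one is set), and the file must remain readable by whatever user/group the config drops privileges to after the upgrade.

        ls -l /etc/openvpn/server.key /etc/openvpn/server.conf
        grep -E '^(user|group|cd|key|cert|ca|dh)' /etc/openvpn/server.conf
        # Running it in the foreground from the config directory gives a fuller trace
        cd /etc/openvpn && sudo openvpn --config server.conf --verb 4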

    Read the article
