Search Results

Search found 16948 results on 678 pages for 'static analysis'.


  • How to deal with ssh's "WARNING: REMOTE HOST IDENTIFICATION HAS CHANGED!"?

    - by Vi.
    I often need to log in to multiple remote stations that take turns occupying the same static IPs. SSH complains about changed keys in this case:

        $ ssh vi@172.1.2.3
        @@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
        @ WARNING: REMOTE HOST IDENTIFICATION HAS CHANGED!        @
        @@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
        ...
        Offending RSA key in /home/vi/.ssh/known_hosts:70
        ...

    I usually just run vim /home/vi/.ssh/known_hosts +70, delete the line with dd, save with :wq, and re-run the SSH command. How can I do this more simply? Requirements: the warning should still be displayed, not replaced by the usual "The authenticity of host '172.1.2.3 (172.1.2.3)' can't be established." prompt, and it should be easy to accept the key change. I expect something like this:

        $ ssh vi@172.1.2.3
        @@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
        @ WARNING: REMOTE HOST IDENTIFICATION HAS CHANGED!        @
        @@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
        ...
        The fingerprint for the RSA key sent by the remote host is
        82:cd:be:7a:ae:1b:91:2c:23:c1:74:4d:8a:38:10:32.
        Change the host key in /home/vi/.ssh/known_hosts (yes/no)? yes
        Warning: Changed host key for '172.1.2.3' (RSA) in the list of known hosts.
        vi@172.1.2.3's password:

    Simple, and distinct from the usual "The authenticity of host can't be established." message.
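    A quick way to drop the stale entry without opening vim is ssh-keygen's -R flag, which removes all keys for a host from known_hosts. A minimal sketch (the wrapper function and its name are just an illustration, not a standard tool):

        # remove the old key, then reconnect; ssh will show the normal
        # first-contact prompt rather than the CHANGED warning
        ssh-keygen -R 172.1.2.3
        ssh vi@172.1.2.3

        # or wrap both steps in a small shell function
        ressh() { ssh-keygen -R "$1" && ssh "$1"; }

    Note this trades the loud CHANGED banner for the ordinary authenticity prompt, so it does not by itself meet the "warning should be displayed" requirement.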


  • How do I setup routing for two companies with different Internet connections on the same LAN?

    - by Clint Miller
    Here's the setup: two companies (A & B) share office space and a LAN. A second ISP is brought in; company A wants its own Internet connection (ISP A) and company B wants its own (ISP B). VLANs are deployed internally to separate the two companies' networks (company A: VLAN 1, company B: VLAN 2, shared VoIP: VLAN 3).

    With separate VLANs it's simple enough to use separate DHCP servers (or separate scopes on the same server) to hand each company the default gateway for its own Internet connection. Static routes can be created on each gateway to point traffic destined for the other company's VLAN or the voice VLAN, so that all nodes are reachable as expected. However, I think this is a form of asymmetrical routing, right? (The path from node A1 to node B1 is not the same as the path back from node B1 to node A1.)

    Can I set up policy-based routing to correct this? In that case, can I assign the same default gateway to every device on all VLANs and create a routing policy on an L3 switch that looks at the source address and forwards traffic to the appropriate next hop? I want the routing logic to go like this:

    1. If the destination address is known (traffic destined for a different VLAN), forward the traffic.
    2. If the destination address is unknown, forward the traffic to ISP A's gateway if the source address is on VLAN 1, or to ISP B's gateway if the source address is on VLAN 2.

    Am I thinking about this problem in the correct way? Is there another way to solve it that I am overlooking?
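    This is the general shape of policy-based routing. As a concrete illustration only (the table names and addresses are hypothetical, and a Cisco or other L3 switch would use its own PBR syntax), source-based routing on a Linux router would look something like:

        # register two extra routing tables
        echo "100 ispa" >> /etc/iproute2/rt_tables
        echo "200 ispb" >> /etc/iproute2/rt_tables

        # each table's only job is a different default route
        ip route add default via 10.0.1.1 table ispa   # ISP A's gateway
        ip route add default via 10.0.2.1 table ispb   # ISP B's gateway

        # known (inter-VLAN) destinations consult the main table first,
        # then unknown destinations are routed by source VLAN
        ip rule add to 10.0.0.0/8 lookup main priority 90
        ip rule add from 10.1.0.0/24 lookup ispa priority 100   # VLAN 1 sources
        ip rule add from 10.2.0.0/24 lookup ispb priority 110   # VLAN 2 sources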


  • Unexpected behaviour in a Lotus Notes programmable table

    - by Mark B
    I'm designing a workflow database in Lotus Notes 6.0.3 (soon upgrading to 8.5), and my OS is Windows XP. I have recently tried converting a tabbed table into a programmable one, so that I could control which tab is displayed to the user when the document is opened, presenting the most appropriate one for that document's progress through the workflow. That part of it works!

    One of the tabs features a radio button that controls the visibility of the next tab, and a pair of cascading dialogue boxes. One contains the static list "Person":"Team", and the other has a formula based on the first:

        view := @If(PeerReview = "Team"; "GroupNames"; "GroupMembers");
        @Unique(@DbColumn(""; ""; view; 1))

    The dialogue boxes have the property "Refresh fields on keyword change" selected. The behaviour that I wasn't expecting is this: if the radio button is set to "Yes" and a value is selected in one of the dialogue boxes, the table opens the next tab. If the radio button is set to "No" and a value is selected in one of the dialogue boxes, the entire table is hidden. I can duplicate the latter by switching off the "Refresh fields on keyword change" property on the dialogue boxes and instead pressing F9 after selecting a value. I have no idea why the former occurs, though.

    The table is called "RFCInfo", and I have a field on the form called "$RFCInfo" which is editable, hidden from all users who aren't me, and initially set by a Postopen script, which I can post if necessary - it's essentially a Select Case statement that looks at a particular item value and returns the name of the table row relating to that value. Can anyone offer any pointers?


  • Cron job checking for changes in Git repository

    - by HNygard
    We have just moved our server configs to a Git repository, so there should not be any changes in any of the repository folders. I was thinking about how I could set up a cron job to check for uncommitted changes. How could a cron job be set up to check for changes in a Git repository? Grepping the output of the git status command might just do it, but grep and cron jobs are not my strong side.

    Here is some sample output from git status, standing in the folder containing the git repository (e.g. /path/gitrepo/) with changed files:

        $ git status
        # On branch master
        # Changes not staged for commit:
        #   (use "git add <file>..." to update what will be committed)
        #   (use "git checkout -- <file>..." to discard changes in working directory)
        #
        #       modified:   apache2/sites-enabled/000-default
        #
        # Untracked files:
        #   (use "git add <file>..." to include in what will be committed)
        #
        #       apache2/conf.d/test
        no changes added to commit (use "git add" and/or "git commit -a")

    Standing in the folder when there are no changes:

        $ git status
        # On branch master
        nothing to commit (working directory clean)

    Update: Being synced up with origin is not important; there should simply be no local changes. Local files that must be in place go into the .gitignore file. In addition to the server configs there are also git repos for content (static web sites, web apps, wordpress, etc.). None of the repositories should have local changes. We might use Puppet in the long run, since it is being used for development of one of the web apps.
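    A minimal sketch of such a check, assuming git 1.7+ for the --porcelain flag (which prints nothing when the tree is clean, so no grep is needed); the script path and mail recipient are placeholders:

        #!/bin/sh
        # /usr/local/bin/check-git-clean.sh - mail a warning if the repo is dirty
        cd /path/gitrepo || exit 1
        if [ -n "$(git status --porcelain)" ]; then
            git status --porcelain | mail -s "Uncommitted changes in $(pwd)" root
        fi

    Run it from cron, e.g. hourly:

        0 * * * * /usr/local/bin/check-git-clean.sh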


  • nginx reverse proxy slows down my throughput by half

    - by Isaac A Mosquera
    I'm currently using nginx to proxy back to gunicorn with 8 workers, on an Amazon extra-large instance with 4 virtual cores. When I connect to gunicorn directly I get about 10K requests/sec. When I serve a static file from nginx I get about 25 requests/sec. But when I place gunicorn behind nginx on the same physical server I get about 5K requests/sec. I understand there will be some latency from nginx, but I think there might be a problem, since it's a 50% drop. Anybody heard of something similar? Any help would be great!

    Here is the relevant nginx conf:

        worker_processes 4;
        worker_rlimit_nofile 30000;
        events {
            worker_connections 5120;
        }
        http {
            sendfile on;
            tcp_nopush on;
            tcp_nodelay on;
            keepalive_timeout 65;
            types_hash_max_size 2048;
        }

    sites-enabled/default:

        upstream backend {
            server 127.0.0.1:8000;
        }
        server {
            server_name api.domain.com;
            location / {
                proxy_pass http://backend;
                proxy_buffering off;
            }
        }
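    One thing worth ruling out (an assumption, not a confirmed diagnosis): without upstream keepalive, nginx opens a fresh TCP connection to gunicorn for every request, which can cost a lot of throughput on a loopback benchmark. The directives below are standard nginx (upstream keepalive needs nginx 1.1.4 or newer), though the connection count is an arbitrary starting point:

        upstream backend {
            server 127.0.0.1:8000;
            keepalive 64;                  # pool of idle connections to gunicorn
        }
        server {
            server_name api.domain.com;
            location / {
                proxy_pass http://backend;
                proxy_http_version 1.1;    # upstream keepalive requires HTTP/1.1
                proxy_set_header Connection "";
            }
        }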


  • visually documenting web server configuration and infrastructure

    - by Alex Ciarlillo
    I have just finished a large re-organization and update of our institution's web servers. This server hosts 3 virtual hosts, 3-4 blogs, 2 wikis, some legacy static HTML pages, and many hosted documents (PDF, .jpg, .xls). I have organized the site into a structure something like:

        /var/www/sites/vhost1, vhost2, vhost3
        .../wordpress/blogX
        .../mediawiki/wikiX

    Data is in a separate directory structure so I can run a cron task over it to make sure it is all writeable and such. I then symlink to these data directories for each application:

        /var/www/data/vhost1, vhost2, vhost3
        .../wordpress/blogX/uploads
        .../mediawiki/wikiX/images

    All Apache configs are in /etc/httpd/conf.d/vhosts.d/vhost1,2,3.conf. On top of this there is also a testing server which mirrors this setup; once changes are fully tested, they are rsynced down to the live server. All the wordpress and mediawiki installs are straight from SVN, and updates are done by switching branches or "svn up".

    So my question is: how can I best document this to share with a) co-workers, b) a possible future replacement, c) myself 6 months from now? Obviously I can make a wiki page or excel document and fill it with text, but I am looking for a more visual representation that I can use to explain the architecture to less-technical people. Ideally it would be awesome if this visual representation could then be expanded to get more technical details.
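    One lightweight option is to keep the diagram as text under version control and render it with Graphviz, so the picture stays close to the configs it describes. A tiny hypothetical sketch (the node names are placeholders):

        # render with Graphviz: dot -Tpng infra.dot -o infra.png
        cat > infra.dot <<'EOF'
        digraph infra {
            rankdir=LR;
            "testing server" -> "live server" [label="rsync"];
            "live server" -> "vhost1"; "live server" -> "vhost2";
            "vhost1" -> "/var/www/data/vhost1" [label="symlink"];
            "SVN" -> "wordpress/blogX" [label="svn up"];
        }
        EOF
        dot -Tpng infra.dot -o infra.png

    The same .dot file can grow extra detail (ports, config paths) later without redrawing anything by hand.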


  • How can I generate filesystem images that are usable on many different virtualization systems?

    - by Mark Longair
    I have written a script that generates a root filesystem image (based on Debian lenny) suitable for User-Mode Linux. (Essentially this script creates a filesystem image, mounts it with a loop device, uses debootstrap to create a lenny install, sets up a static IP for TUN/TAP networking, adds public keys for login by SSH and installs a web application.) These filesystem images work pretty well with UML, but it would be nice to be able to generate similar images that people can use on alternative virtualization software, and I'm not familiar with these options at all. In particular, since the idea is to use this image as a standalone server for testing the web application, it's important that the networking works.

    I wonder if anyone can suggest what would be involved in customizing such root filesystem images so that they could be used with other virtualization software, such as VMware, Xen or as an Amazon EC2 instance? Two particular concerns are:

    1. If such systems don't use a raw filesystem image (e.g. they need headers with metadata or are compressed in some particular way), do there exist tools to convert between the different formats?
    2. I assume that in the filesystem, at least /etc/network/interfaces will have to be customized, but are more involved changes likely to be necessary?

    Many thanks for any suggestions...
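    On the format-conversion question, qemu-img handles most of these translations from a raw image (a sketch; the filenames are placeholders, and Xen's paravirtual mode can typically boot a raw image directly):

        # raw -> VMware VMDK
        qemu-img convert -f raw -O vmdk rootfs.img rootfs.vmdk

        # raw -> qcow2 (KVM/QEMU)
        qemu-img convert -f raw -O qcow2 rootfs.img rootfs.qcow2

    EC2 is the odd one out: classic S3-backed AMIs are bundled and uploaded with the ec2-bundle-image / ec2-upload-bundle tools rather than converted with qemu-img.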


  • using nginx with proxy_pass on a subdomain

    - by marcus3006
    I have a rails app that should listen on the subdomain redmine.example.com (using proxy_pass). All other requests for *.example.com should just get a normal index.html. Here is my configuration:

        server {
            server_name www.example.com example.com;
            root /home/deploy/static/example;
        }

        upstream redmine {
            server unix:/tmp/redmine.socket fail_timeout=0;
        }

        server {
            # you could put a list of other domain names this application answers
            server_name redmine.example.com;
            root /home/deploy/rails/redmine/public;
            access_log /var/log/nginx/redmine_access.log;
            rewrite_log on;

            location * {
                proxy_pass http://redmine;
            }

            location ~ ^/(assets)/ {
                root /home/deploy/rails/redmine/public;
                gzip_static on; # to serve pre-gzipped version
                expires max;
                add_header Cache-Control public;
            }
        }

    Anyone know what's going wrong here? Requests to example.com and www.example.com are handled correctly, but when I try to access redmine.example.com I get "couldn't resolve host".
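    "couldn't resolve host" is reported by the client before the request ever reaches nginx, which points at DNS rather than this configuration: there is probably no A (or wildcard) record for redmine.example.com. A quick check, assuming dig is installed:

        dig +short www.example.com      # resolves
        dig +short redmine.example.com  # an empty answer would confirm missing DNS

    Separately, once DNS resolves, "location *" matches nothing useful as a prefix (request URIs start with "/"); "location /" is the usual catch-all for the proxied app.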


  • Suggestions for transitioning to new GW/private network

    - by Quinten
    I am replacing a private T1 link with a new firewall device using an IPsec tunnel for a branch office, and I'm trying to figure out the right way to transition folks at the site over to the new connection so that they default to the much faster tunnel.

    Existing network: 192.168.254.0/24, gw 192.168.254.253 (Cisco router plugged in to the private T1). Test network I have been using with the IPsec tunnel: 192.168.1.0/24, gw 192.168.1.1 (pfSense firewall plugged in to the public Internet), also plugged in to the same switch as the old network. There are roughly 20-30 network devices in the existing subnet, about 5 with static IPs. The remote endpoint is already the firewall - I can't set up redundant links to the existing subnet. In other words, as soon as I change the tunnel configuration to point to 192.168.254.0/24, all devices in the existing subnet will stop working because they point to the wrong gateway.

    I'd like some ability to do this slowly, so that I can move over a few clients and verify the stability of the new link before moving critical services or less tolerant users over. What's the right way to do this? Change the netmask on all of the devices to /16 and update the gateway to point to the new device? Could this cause any problems?

    Also, how should I handle DNS? The pfSense box is not aware of my Active Directory environment, but if I change DNS to use the local servers, it will result in a huge slowdown as DNS queries will still be routed over the private T1. I need some help coming up with a plan that's not too disruptive but will really let me thoroughly test the stability of the IPsec tunnel before I make the final switch. The AD version is 2008R2, as are the servers; workstations are mostly Windows XP SP3. I have not configured 192.168.1.0/24 as a site in AD Sites and Services.


  • How do I restrict access to certain web files/folders on an IIS 7.5 based web server?

    - by cpuguru
    We're moving a website that was previously hosted on Win2k3 & IIS 6 to a Win2k8 R2 & IIS 7.5 platform. The website is public, but we want to restrict anonymous access to certain files and folders such that the user would be prompted for a password to access them. If this were Apache, a simple .htaccess file would serve the purpose. However, since it's IIS 7.5 and we're serving up mainly static HTML files and a few classic ASP pages, I'm in a bit of a quandary as to how to restrict access to individual files and folders for various committees, such that attempts to access committee_1's files and/or folders would prompt the user for a password and, if entered correctly, would serve up their files. Same thing for committee_2 and so on.

    Under IIS 6, we would take away the read privileges for IIS_IUSRS and create a user called "committee_1" with a password known by the group, and give that user read privileges to the files/folders. There's got to be a better (and more secure) way. Reminder: these are not *.aspx pages that are being served up. Any suggestions on how to password-protect key files and/or folders under IIS 7.5 are much appreciated.
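    IIS 7.x's URL Authorization module is the closest .htaccess analogue: enable Basic Authentication for the folder, then drop a web.config into it. A hedged sketch (it assumes the URL Authorization and Basic Authentication features are installed and that a committee_1 Windows account exists):

        <?xml version="1.0" encoding="UTF-8"?>
        <configuration>
          <system.webServer>
            <security>
              <authorization>
                <!-- deny everyone, then allow only the committee account -->
                <remove users="*" roles="" verbs="" />
                <add accessType="Allow" users="committee_1" />
              </authorization>
            </security>
          </system.webServer>
        </configuration>

    Because this works at the request level rather than through NTFS ACLs, it applies to static HTML and classic ASP alike.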


  • Constructor and Destructor of a singleton object called twice

    - by Bikram990
    I'm facing a problem with a singleton object in C++. Here is the explanation.

    Problem info: I have 4 shared libraries (say libA.so, libB.so, libC.so, libD.so) and 2 executable binaries, each also using another shared library (say libE.so) which deals with files. The purpose of libE.so is to write data into a file; if the executable restarts or the size of the file exceeds a certain limit, the file is zipped and a new file is created with a timestamp in the name. It uses a singleton object and exports a handler class for getting and using the singleton. Compressing only happens in the two cases above. The user/loader executable can specify the starting name of the file only; no other control is provided by the handler class. libA.so, libB.so, libC.so and libD.so have almost the same behaviour: each declares an object of a handler which gets the instance of the singleton in libE.so and uses it for further purposes. All these libraries are linked to the two executables. If only one of the two executables runs, everything is fine; but if both executables run one after the other, the file of the first-started executable gets compressed.

    Debug info:

    - The constructor and destructor of the singleton object are called twice (for each executable).
    - The singleton is a static object and is never deleted.
    - The executable is not able to exit/return, and gives: glibc detected *** (exe1 or exe2): double free or corruption (!prev): some_addr ***
    - Running the binaries under valgrind shows that the above error is due to the destructor of the singleton object.

    Thanks
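    A doubled constructor/destructor usually means two copies of the singleton's definition ended up in the process image - for example, if the singleton is defined in a header, or libE's code was statically linked into more than one of the .so files. One way to check for duplicate definitions, assuming the singleton class is named something like FileLogger (a placeholder):

        # list which libraries define (symbol type T/B/D) rather than merely
        # reference (U) the singleton's symbols
        for lib in libA.so libB.so libC.so libD.so libE.so; do
            echo "== $lib"; nm -C -D "$lib" | grep FileLogger
        done

    If the same defined symbol shows up in more than one library, each copy gets its own static instance and its own destructor call at unload time, which would also explain the double free.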


  • How do I setup an Alias on Apache with XAMPP on Linux ? (Permission problem)

    - by knarf
    XAMPP works fine, but I want http://localhost/f to point to /home/knarf/prog/php/fwyxz. I've run chmod -R 777 /home/knarf/prog/php/fwyxz and added this at the end of httpd.conf:

        Alias /f /home/knarf/prog/php/fwyxz

    When I try to access it, I get a 403. From the Apache error_log:

        [error] [client 127.0.0.1] (13)Permission denied: access to /f denied.

    I've already tried several solutions (userdir and symlinks) but they both failed with the same error. I've also tried to add this after the Alias:

        <Directory "/home/knarf/prog/php/fwyxz">
            Order allow,deny
            Allow from all
        </Directory>

    But again, permission denied. Now if I change the User/Group under which Apache runs from nobody to knarf, it seems to work (static files are OK) but PHP can't use/initialize sessions:

        [error] [client 127.0.0.1] PHP Warning: session_start() [function.session-start]: open(/tmp/sess_r5nrmu4ugqguqqe83rs53lq6k0, O_RDWR) failed: Permission denied (13) in /home/knarf/prog/php/fwyxz/index.php on line 3
        [error] [client 127.0.0.1] PHP Warning: Unknown: open(/tmp/sess_r5nrmu4ugqguqqe83rs53lq6k0, O_RDWR) failed: Permission denied (13) in Unknown on line 0
        [error] [client 127.0.0.1] PHP Warning: Unknown: Failed to write session data (files). Please verify that the current setting of session.save_path is correct () in Unknown on line 0

    This is really frustrating.
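    The (13)Permission denied with a world-readable target is classically a missing execute (search) bit on one of the parent directories - chmod -R on fwyxz doesn't touch /home/knarf itself. A quick way to verify, assuming util-linux's namei is available:

        namei -m /home/knarf/prog/php/fwyxz   # shows the mode of every path component
        chmod o+x /home/knarf /home/knarf/prog /home/knarf/prog/php

    The session errors after switching Apache to run as knarf suggest stale /tmp session files created earlier by nobody; removing the old sess_* files, or pointing session.save_path at a directory knarf owns, is a hedged guess at a fix rather than a certainty.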


  • Apache 2 settings for high traffic website

    - by Harry
    I'm having problems with the load on my website. It's an Amazon EC2 server with 15GB RAM and 4 CPUs behind an LB. apachetop says I'm getting around 80 reqs per second, which seems really low for this kind of server; the load (given by top) is usually around 15 but does increase to about 150 in 24 hrs. I'm seeing about 100 active Apache processes at any time. Apache is in prefork mode. MySQL is used very little on the server and there are almost no static files. Here are my Apache settings:

        Timeout 20
        KeepAlive Off
        MaxKeepAliveRequests 0
        KeepAliveTimeout 3
        <IfModule mpm_prefork_module>
            StartServers 40
            MinSpareServers 25
            MaxSpareServers 40
            ServerLimit 400
            MaxClients 400
            MaxRequestsPerChild 4
        </IfModule>

    Can anyone advise on how to tweak the settings? Thanx!

    Edit: The config was arrived at by trial and error. Any change to these lines - and I mean any, by any number - makes the load skyrocket in like 5 minutes. It literally jumps to 200-300 in a matter of minutes, especially with MaxRequestsPerChild. I've tried it with 10, 15, 100 and 1000, and the load just skyrockets. About PHP: there are actually only a few php files, which aren't really that expensive at all - they just spit some simple stuff out. If I turn on KeepAlive, load also goes to space.
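    For reference, MaxRequestsPerChild 4 makes Apache fork a replacement child every four requests, which by itself generates constant fork/exec churn; the usual baseline (a sketch of stock-like values, not a tuned recommendation for this workload) looks more like:

        <IfModule mpm_prefork_module>
            StartServers          10
            MinSpareServers       10
            MaxSpareServers       25
            ServerLimit          400
            MaxClients           400
            MaxRequestsPerChild 10000   # recycle children rarely, mainly to cap memory leaks
        </IfModule>

    That the load climbs whenever children live longer hints the per-request work itself (e.g. the PHP handler) is where the time goes, so profiling a single request may pay off more than MPM tuning.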


  • NginxHttpAuthBasicModule with Sinatra & Passenger

    - by scainey
    Hi, I'm serving static pages from a Sinatra application using Nginx. I've implemented Basic Authentication for one page on the site using NginxHttpAuthBasicModule; the authentication succeeds, but Nginx doesn't resolve the link. The error log gives:

        2010/03/22 12:15:19 [error] 7143#0: *2902 open() "/home/me/live/mysite_home/public /mypage" failed (2: No such file or directory), client: 82.71.18.122, server: mysite.com, request: "GET /mypage HTTP/1.1", host: "mysite.com"

    The actual file is found at /home/me/live/mysite_home/live/mypage.erb. The configuration file is:

        server {
            listen 80;
            server_name mysite.com;
            root /home/me/live/mysite_home/public;
            passenger_enabled on;

            location /mypage {
                auth_basic "Restricted";
                auth_basic_user_file htpasswd;
            }
        }

        server {
            listen 443;
            server_name mysite.com;
            root /home/me/live/mysite_home/public;
            passenger_enabled on;
            ssl on;
            ssl_certificate /etc/nginx/conf/certs/server.crt;
            ssl_certificate_key /etc/nginx/conf/certs/server.key;
            keepalive_timeout 70;

            location /mypage {
                auth_basic "Restricted";
                auth_basic_user_file htpasswd;
            }
        }

    Not sure if this is a Sinatra, Passenger or Nginx thing, or if I'm just missing something.
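    The open() error shows nginx treating /mypage as a static file under root instead of handing it to the Sinatra app once the location block matches. A hedged guess at a fix (passenger_enabled is a valid directive in location context, though whether your Passenger version needs it repeated there is an assumption):

        location /mypage {
            auth_basic "Restricted";
            auth_basic_user_file htpasswd;
            passenger_enabled on;   # re-enable Passenger inside the protected location
        }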


  • ubuntu 12.04 kvm virtual server network setup, can't get the machine to be connectable

    - by xyious
    I have worked on my Ubuntu Server host for weeks now and I just cannot manage to get the virtual machines into the network. Here's what I need to do: I need to be able to create virtual machines that have IP addresses reachable from the outside (the 192.168 network). I need to be able to connect to the virtual machines through ssh, ftp, http and preferably https; anything else doesn't matter that much.

    So far everything seems simple enough, and I have a lot of leeway in terms of IP address range and server/client configuration. I have the option of taking part of a /24 net, as most IPs aren't used, and if absolutely necessary I can create a new /24 subnet. I also have the option of reformatting and reinstalling the OS on the host and recreating the virtual machines, as nothing has been done other than trying to get the virtual machines to work. I would prefer the virtual machines to just be part of the normal network, which is 192.168.5.0/24. The host machine has 2 network cards, so the host doesn't necessarily need to be reachable in the same /24 network.

    I have tried (I think) just about everything from about 5 different tutorials on bridging: giving br0 the same IP that eth0 used to have (host can connect to VM and vice versa, but the VM has no outside network access), having eth0 set up as it always was and giving br0 a different IP (same result as above), NAT with port forwarding (which I would prefer not to use, but will if it works), turning off one of the host's network cards and just using one of them, different subnets... etc. I do know my way around iptables fairly well. The host is 64-bit Ubuntu Server 12.04, using libvirt/kvm.

    Edits: The local network is 192.168.5.0/24; the host has static IP 192.168.5.254, GW .5.1 which is also the nameserver. We have a second local network at 192.168.10.0/24 with GW .10.1, but both hosts and VMs were supposed to go into the .5 subnet. The .10 subnet isn't required, but it wouldn't be horrible if the host were only accessible in the .10 subnet.
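    For reference, a working bridge for this layout usually ends up looking like the sketch below in /etc/network/interfaces (assumes the bridge-utils package; the guest then uses an interface of type 'bridge' with source bridge 'br0' in its libvirt XML):

        auto lo
        iface lo inet loopback

        auto br0
        iface br0 inet static
            address 192.168.5.254
            netmask 255.255.255.0
            gateway 192.168.5.1
            dns-nameservers 192.168.5.1
            bridge_ports eth0
            bridge_stp off
            bridge_fd 0

    With this in place eth0 itself carries no address, and guests bridged onto br0 take ordinary 192.168.5.x addresses from the LAN.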


  • ffmpeg: What's the difference between -to and -t options?

    - by Vantuz
    The command

        ffmpeg -ss 5:09 -i foo.mkv -to 5:10 -c copy bar.mkv

    works exactly like

        ffmpeg -ss 5:09 -i foo.mkv -t 5:10 -c copy bar.mkv

    Is it a bug? Using Zeranoe git-bd75651 for Windows 64-bit:

        >ffmpeg -version
        ffmpeg version N-57906-gbd75651
        built on Nov 4 2013 18:09:19 with gcc 4.8.2 (GCC)
        configuration: --disable-static --enable-shared --enable-gpl --enable-version3 --disable-w32threads --enable-avisynth --enable-bzlib --enable-fontconfig --enable-frei0r --enable-gnutls --enable-iconv --enable-libass --enable-libbluray --enable-libcaca --enable-libfreetype --enable-libgsm --enable-libilbc --enable-libmodplug --enable-libmp3lame --enable-libopencore-amrnb --enable-libopencore-amrwb --enable-libopenjpeg --enable-libopus --enable-librtmp --enable-libschroedinger --enable-libsoxr --enable-libspeex --enable-libtheora --enable-libtwolame --enable-libvidstab --enable-libvo-aacenc --enable-libvo-amrwbenc --enable-libvorbis --enable-libvpx --enable-libwavpack --enable-libx264 --enable-libxavs --enable-libxvid --enable-zlib
        libavutil      52. 51.100 / 52. 51.100
        libavcodec     55. 41.100 / 55. 41.100
        libavformat    55. 21.100 / 55. 21.100
        libavdevice    55.  5.100 / 55.  5.100
        libavfilter     3. 90.101 /  3. 90.101
        libswscale      2.  5.101 /  2.  5.101
        libswresample   0. 17.104 /  0. 17.104
        libpostproc    52.  3.100 / 52.  3.100
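    The documented distinction is that -t sets a duration while -to sets an end timestamp. A hedged explanation for the observed behaviour: when -ss is given before -i, ffmpeg seeks first and resets input timestamps to zero, so an end point of 5:10 and a duration of 5:10 then describe the same stretch of output. The difference shows up when the options are applied on the output side:

        # duration form: exactly 1 second of output
        ffmpeg -ss 5:09 -i foo.mkv -t 1 -c copy bar.mkv

        # end-timestamp form: read from the start, keep 5:09..5:10
        ffmpeg -i foo.mkv -ss 5:09 -to 5:10 -c copy bar.mkv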


  • How much Ram should I need on my VPS package? Am I being ripped off?

    - by Tamerax
    Hello! So, I'm currently on a VPSVille Cpanel3 account that has 768 MB guaranteed RAM and 2048 MB burst RAM (full details here: http://www.vpsville.ca/cpanel-vps). It's running CentOS, cPanel, Apache and FastCGI. On the server itself I have a Joomla community site with a forum system that generally has about 20 people on it max at any point, and even then, during the evening, no one. It's a pretty small site but has a number of modules running on it; it gets about 6000 visits a month. Also on the server is a wordpress site that gets about 80-150 visits a day, 2 other wordpress sites that aren't developed yet so they don't get any traffic at all, and 2 static html websites that also only get about 500 hits a month. All in all, no huge sites.

    The issue is that I get "out of memory" errors fairly frequently, which kill my server, and I need to reboot it in order to get all my sites up and running again. It seems to me that I shouldn't have these issues with that much RAM allotted to my account, and every time I send in a support ticket, they just tell me to upgrade the RAM. Now, I'm still pretty new to all this, so I'm not a good judge of how much RAM my sites really need to run. I don't know if my sites really do need this much, or if VPSVille has oversold their servers, doesn't actually have those resources available, and I'm getting ripped off. So, how much RAM should I be using with my current setup? Thanks!
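    Before buying more RAM it's worth measuring what is actually using it. A quick look from a shell (hedged: the figures behave differently under OpenVZ-style virtualization, where the burst limits live in /proc/user_beancounters, which only exists on such containers):

        free -m                                   # overall usage vs. the 768 MB guarantee
        ps aux --sort=-rss | head -15             # biggest resident processes (often Apache children)
        cat /proc/user_beancounters 2>/dev/null   # a non-zero failcnt column means the VPS limit was hit

    A handful of oversized Apache/FastCGI children is the usual culprit at this scale, and capping their count costs nothing.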


  • Xen HVM Windows 2008 network bridge

    - by JavierMartinz
    I have a problem with a Windows Server 2008 guest (HVM): I can't get a network interface running for it. I also have a Debian guest, which works OK, but I can't do the same with the Win2k8 guest. When I start the VM, the machine freezes and I can't connect by ssh to the host.

    /etc/network/interfaces:

        # The loopback network interface
        auto lo
        iface lo inet loopback

        auto eth0
        iface eth0 inet static
            address 188.165.B.C
            netmask 255.255.255.0
            network 188.165.B.0
            broadcast 188.165.255.255
            gateway 188.165.B.254

    brctl show:

        bridge name     bridge id               STP enabled     interfaces
        eth0            8000.e840f20acc28       no              peth0

    /etc/xen/xend-config.sxp:

        ...
        (vif-script vif-bridge)
        (network-script 'network-bridge')
        ...

    /etc/xen/win2k8.cfg:

        # Networking #
        vif = [ 'ip=5.39.F.G,mac=yy:yy:yy:yy:yy:yy,type=ioemu,bridge=eth0' ]

    /etc/xen/debian.cfg:

        # Networking #
        vif = [ 'ip=178.33.D.E,mac=xx:xx:xx:xx:xx:xx' ]

    As you can see, in the Debian guest I only have to specify an IP address and a MAC, but if I put that in the Win2k8 guest, the machine does not start. I am using Xen 4.0.
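    A first diagnostic pass (standard Xen 4.0 xm commands; this assumes the host stays reachable long enough to run them) is to confirm whether the HVM guest's tap device ever joins the bridge:

        xm create /etc/xen/win2k8.cfg
        xm list                    # is the domain actually running?
        xm network-list win2k8     # the vif as Xen sees it
        brctl show                 # a tapX.0 / vifX.0 entry should appear under eth0

    If the host locks up the moment the vif attaches, the qemu-dm ioemu interface clashing with the bridge setup is a plausible suspect; testing with a plain 'bridge=eth0' vif without the ip= option, or with paravirtual network drivers inside Windows, would help narrow it down.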


  • setting up vpn server

    - by Lock
    I need help visualising how to set up our VPN box when we move to our new network with Telstra. We have a Safe@Office 500P, which has a public IP and a private IP of 192.168.19.2. It is physically connected to our router, which has 4 different interfaces, one being 192.168.19.1. On the VPN box we have a static route to forward everything to 192.168.19.1, which is the router, and from there it works out where to go.

    Now we are moving to a Telstra WAN and things are set up a little differently. Our head office router has only 3 interfaces: one is the link to the switch with the fibre connection (our route to the Internet and the other branches), one is for our 10.10.20.x network, and one is for the local branch network. I really have no idea how to set this up, as with the new arrangement we will not have a spare router port to plug the VPN box into.

    Could I just plug it into the 10.10.20.x network? Would I have to give it a public IP, or can we just forward through the ports that it would use? Another suggestion was to VLAN our switch into two networks - one for the 10.10.20.x network and one for the network the VPN currently sits on (192.168.19.x) - and set up the router to trunk between the port and the switch. I'm not sure how to do this. Sorry, VPNs are definitely not my strong suit. Any advice appreciated!
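    For the trunking suggestion, the classic shape is "router on a stick": the switch port to the router carries both VLANs tagged, and the router terminates each VLAN on a subinterface. A purely hypothetical sketch in Cisco-style syntax (your Telstra-managed router may not expose this at all, and the VLAN IDs are placeholders):

        interface GigabitEthernet0/1
         no ip address
        !
        interface GigabitEthernet0/1.20
         encapsulation dot1Q 20
         ip address 10.10.20.1 255.255.255.0
        !
        interface GigabitEthernet0/1.19
         encapsulation dot1Q 19
         ip address 192.168.19.1 255.255.255.0

    The matching switch port is then configured as an 802.1Q trunk carrying VLANs 19 and 20, and the VPN box keeps its existing 192.168.19.x addressing.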


  • NGINX + PHP-FPM - Strange issue when trying to display images via php-gd / readfile - Connection wont terminate

    - by anonymous-one
    Ok, to get the details out of the way: the php script can be anything as simple as:

        <?php
        header('Content-Type: image/jpeg');
        readfile('/local/image.jpg');
        ?>

    When I try to execute this via nginx + php-fpm, the image shows up in the browser, but here is what happens:

    - IE: the page stays blank for a long period of time, and eventually the image is shown.
    - Chrome: the image shows, but the loading spinner spins and spins for a long period of time. Eventually the debugger will show the image in red as in error, but the image shows up fine.

    Everything else on the server works great. It's pushing out about 100mbit steady serving static content, so this is definitely a php-fpm related issue. I THINK this may have something to do with the chunked encoding being sent back wrong? Also, I threw in a pause before the image was read, grabbed the pid of the fpm process, and it looks as though it's terminating correctly (from strace):

        shutdown(3, 1 /* send */)               = 0
        recvfrom(3, "\1\5\0\1\0\0\0\0", 8, 0, NULL, NULL) = 8
        recvfrom(3, "", 8, 0, NULL, NULL)       = 0
        close(3)                                = 0

    The above was dumped long before IE/Chrome decided to give up loading the image (even though the image was shown). Displaying HTML/text content is fine; big bodies etc. all load nice and fast and terminate right away (as they should). Doing something like:

        THIS IS THE IMAGE
        ---BINARY DUMP OF IMAGE---

    works fine too. Any ideas?
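    One hedged thing to try: when a response carries neither a Content-Length nor chunked encoding, a keep-alive client has no way to know the body has ended and waits for a timeout, which matches the "image shows but spinner keeps spinning" symptom. Declaring the length explicitly rules this out:

        <?php
        $path = '/local/image.jpg';
        header('Content-Type: image/jpeg');
        header('Content-Length: ' . filesize($path)); // lets keep-alive clients detect end-of-body
        readfile($path);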


  • Monitoring Between EC2 Regions

    - by ABrown
    I'm working on a small EC2 project that involves a handful of servers in two different regions (US East and EU West). My first task is to implement a Nagios monitoring solution. Monitoring within a region is simple - I just use the private domain names/IPs - but I'm a little unsure of the best way to handle monitoring the second region without setting up a second Nagios install. The environment is fairly static, so I'm not going to be scripting the configuration with the EC2 tools just yet. As I see it, I have two options:

    1. Two Nagios installations (which is over-kill for the small number of servers I'm dealing with). Pros: I don't have to alter the group permissions, nor do I have to pay for the traffic; redundancy in the monitoring solution - I could monitor the Nagios servers. Cons: two installations to deal with, and I'd need to run another server instance.

    2. Have the single installation monitor both regions. Pros: one installation to deal with. Cons: slightly reduced security - the security group will have to have NRPE (5666) opened for one source IP - and also paying for a small amount of bandwidth at the Internet rate for data transfer between the regions.

    I guess my question is: how have others handled this problem, and what are your recommendations? Thanks!
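    A middle path, sketched under the assumption that key-based SSH from the monitoring host is acceptable, is to keep one Nagios install but tunnel NRPE over SSH so port 5666 never has to be opened across regions (hostnames, the local port and the plugin path below are placeholders):

        # persistent tunnel from the US East Nagios host to an EU West node
        autossh -f -N -L 5671:localhost:5666 nagios@eu-node.example.com

        # point the existing check at the local end of the tunnel
        /usr/lib/nagios/plugins/check_nrpe -H localhost -p 5671 -c check_load

    The trade-off is tunnel upkeep (one per monitored node, or one hop host) versus the security-group change and cleartext NRPE.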


  • Strange 3-second tcp connection latencies (Linux, HTTP)

    - by user25417
    Our webservers with static content are experiencing strange 3-second latencies occasionally. Typically, an ApacheBench run (10000 requests, concurrency 1 or 40 - no difference - keepalive off) looks like this:

        Connection Times (ms)
                      min  mean[+/-sd] median   max
        Connect:        2   10  152.8      3   3015
        Processing:     2    8   34.7      3    663
        Waiting:        2    8   34.7      3    663
        Total:          4   19  157.2      6   3222

        Percentage of the requests served within a certain time (ms)
          50%      6
          66%      7
          75%      7
          80%      7
          90%      9
          95%     11
          98%    223
          99%    225
         100%   3222 (longest request)

    I have tried many things:

    - Apache2 2.2.9 with worker or prefork MPM, no difference (with KeepAliveTimeout 10-15)
    - Nginx 0.6.32
    - various tcp parameters (net.core.somaxconn=3000, net.ipv4.tcp_sack=0, net.ipv4.tcp_dsack=0)
    - putting the files/DocumentRoot on tmpfs
    - shorewall on or off (i.e. empty iptables or not)
    - AllowOverride None is on for /, so no .htaccess checks (verified with strace)
    - the problem persists whether the webservers are accessed directly or through a Foundry load balancer

    The kernel is 2.6.32 (Debian Lenny backports), but it occurred with 2.6.26 also. IPv6 is enabled but not used. Does the issue look familiar to anyone? Help/suggestions are much appreciated. It sounds a bit like a SYN,ACK packet getting lost or ignored.
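    The 3-second ceiling on Connect strongly resembles the kernel's initial SYN retransmission timer (3 s on 2.6-era kernels): a lost or ignored SYN or SYN-ACK is silently retried after exactly that interval. A capture limited to connection setup can confirm it, assuming tcpdump is available on the server:

        # record only SYN/SYN-ACK packets on port 80, then look for
        # duplicate SYNs ~3 s apart from the same client port
        tcpdump -ni eth0 -w syn.pcap 'port 80 and tcp[tcpflags] & tcp-syn != 0'

    If duplicates show up, the hunt moves to whatever sits in between (switch, Foundry LB, conntrack, listen backlog overflow) rather than Apache/nginx themselves.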


  • How do I tunnel an HTTPS proxy through a virtual machine (VMWare)

    - by Kyle
    I have a personal setup at home using VMware Workstation. I also have a set of virtual private machines that run Squid, and therefore provide me HTTPS proxy tunnels. Using Proxifier, I can tunnel all traffic for given applications through these tunnels. However, I also have a few virtual machines for dev/staging/experimentation/etc. I generally just use NAT to provide Internet access to the machines, and if I need to use the proxies, I can set up Proxifier (or a Linux equivalent) to pipe the traffic through them. No problem.

    But I got to thinking: wouldn't it be great if I could assign these proxy tunnels to a virtual machine, so that when I start up the VM, it has instant-on access through the tunnel and not my local connection? (EDIT: Of course, it would USE my local connection, but it would tunnel traffic through the proxy.)

    To be more clear: I want a solution that binds the proxy to a VM, so that when I start the VM, I don't have to use a proxy client to connect to the tunnel - all traffic from that VM is already piped through that proxy. I did a bit of searching, and the closest thing I could find was this: How to route public static IP to a virtual machine on a vmware ESXi host? Which wasn't all that applicable. The proxies are protected by user/pass but do not filter by IP. Again, they are HTTPS proxies set up through Squid. Any ideas on how to make this happen? Thanks a ton.
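    One way to get the "instant-on" behaviour is a small Linux gateway VM that transparently redirects the dev VM's TCP traffic into the proxy, e.g. with redsocks (a sketch; the interface name, port and credentials are placeholders, and eth1 is assumed to face the dev VMs):

        # /etc/redsocks.conf (excerpt)
        redsocks {
            local_ip = 127.0.0.1;
            local_port = 12345;
            ip = proxy.example.com;
            port = 3128;
            type = http-connect;     # CONNECT through the Squid tunnel
            login = "user";
            password = "pass";
        }

        # on the gateway VM: push TCP arriving from the dev VMs into redsocks
        iptables -t nat -A PREROUTING -i eth1 -p tcp -j REDIRECT --to-ports 12345

    The dev VMs then simply use the gateway VM as their default gateway; nothing proxy-aware has to run inside them.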


  • using a virtual machine as a MySQL server

    - by ffmm
    I'm developing a Java program and I need a database. Right now I'm using MAMP, and it's pretty easy, but I would like to have a virtual machine (Ubuntu Server) and connect my Java program to it using VirtualBox. The situation:

    - I installed VirtualBox on my Mac and installed an ubuntu-server machine.
    - I set "bridged adapter" in the network settings of VirtualBox.
    - I installed MySQL on ubuntu-server and created a simple database (all works well from Ubuntu).
    - Running ifconfig on Ubuntu I get the IP 192.168.1.217.

    In the Java program I made this function:

        public static Connection connect(String host, int port, String dbName, String user, String passwd) {
            Connection dbConnection = null;
            try {
                String dbString = null;
                Class.forName("com.mysql.jdbc.Driver").newInstance();
                dbString = "jdbc:mysql://" + host + ":" + port + "/" + dbName;
                dbConnection = DriverManager.getConnection(dbString, user, passwd);
            } catch (Exception e) {
                System.err.println("Failed to connect with the DB");
                e.printStackTrace();
            }
            return dbConnection;
        }

    and in main() I use:

        Connection con = connect("192.168.1.217", 3306, "Ciao", "root", "cocacola");

    3306 was a default value. I don't know if it is correct - it works on MAMP - but how can I find the correct port that I have to use with VirtualBox? When I run the program I hit the catch exception... what's wrong?

    PS: do I have to install Apache or something else?
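    With a bridged adapter there is no port mapping involved - the guest is an ordinary host on the LAN, so 3306 is the right port. The usual blocker (an assumption worth checking, since it is Ubuntu's default) is that MySQL listens only on 127.0.0.1 and the user has no remote grant:

        # on the Ubuntu guest
        sudo sed -i 's/^bind-address.*/bind-address = 0.0.0.0/' /etc/mysql/my.cnf
        sudo service mysql restart

        # allow the Mac to connect to the Ciao database
        mysql -u root -p -e "GRANT ALL ON Ciao.* TO 'root'@'%' IDENTIFIED BY 'cocacola'; FLUSH PRIVILEGES;"

    No Apache is needed; JDBC talks to MySQL directly over TCP.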


  • Load balance to proxies

    - by LoveRight
    I have installed several proxy programs whose listening addresses are, for example, 127.0.0.1:8580 (HTTP) and 127.0.0.1:9050 (SOCKS5). You may regard them as Tor and its alternatives. Certain proxy programs are faster than others at times, while at other times they are slower. The Firefox add-ons AutoProxy and FoxyProxy Standard can define a list of rules, such as "any URL matching the pattern *.google.com should be proxied to 127.0.0.1:8580 using the socks5 protocol". But such a rule is "static". I want *.google.com to be proxied through the fastest proxy, whichever one that is at the moment. I think that is a kind of load balancing.

    I thought I could set a rule that directs requests for *.google.com to the address a load balancer listens on, and have the load balancer forward each request to the fastest real proxy. I notice that Tor uses the SOCKS5 protocol and some other applications use HTTP, and I am confused about which protocol the load balancer should speak. I am also starting to wonder about the feasibility of this solution. Any suggestions? My operating system is Windows 7 x64.

