Search Results

Search found 11703 results on 469 pages for 'dev to production'.


  • Virtual host config issues in osx 10.7 server app

    - by Benno
    I have two Mac mini Lion servers set up to run as production and staging machines. My sysadmin decided on these machines over the CentOS box we had previously because they have an "interface" for management, rather than just the terminal. To be honest, I prefer the terminal.

    My problem is that the Mac OS X 10.7 Server.app seems to have issues with the creation of virtual hosts in the 'Web' section. It seems VERY touchy. For example, I cannot create an http virtual host first. I have to create an https host first with a unique DNS name (e.g. vuly6), then create the http host with a different DNS name from the first (e.g. www), or it appears to override the first one, even though one is SSL and one is non-SSL. Further, it seems to override perfectly good configurations at random. For example, the default sites directory is usually /Users/default/Sites/Customsites or something, but sometimes when I load the Server.app it changes to /var/empty. Also, if I change or add extra virtual hosts after the first one or two, it starts to mess up and the first two virtual hosts start having issues.

    Has anyone had any experience with setting up virtual hosts via this app? Am I able to manually create these virtual hosts, without using the app, and without the app overriding my settings when I restart Apache?
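    For reference, a hand-rolled virtual host is just a small Apache config block; a minimal sketch (the paths and names here are assumptions, and the directory Server.app includes vhost files from can differ by version, so check the Include lines in httpd.conf):

        <VirtualHost *:80>
            ServerName www.example.com
            DocumentRoot "/Users/default/Sites/Customsites/www"
            ErrorLog "/var/log/apache2/www.error.log"
        </VirtualHost>

    Saved as its own .conf file in the included directory and activated with sudo apachectl graceful. Whether Server.app leaves such hand-edited files alone across restarts is exactly the open question here.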


  • Ruby 1.9.3 - Bundler - Graylog2

    - by Arenstar
    I'm having a strange problem with bundler. Using Ruby 1.8 the following works fine, but with 1.9 it always results in:

        Could not find rake-0.9.2.2 in any of the sources
        Run `bundle install` to install missing gems.

    I don't understand why, but it functions correctly under rvm. I cannot, however, use rvm; it is not a solution to my problem.

    Install Ruby:

        cd /usr/local/src
        wget http://ftp.ruby-lang.org/pub/ruby/1.9/ruby-1.9.3-p194.tar.gz
        tar xzf ruby-1.9.3-p194.tar.gz && cd ruby-1.9.3-p194
        ./configure --prefix=/opt/lp/ruby-1.9.3-test
        make all && make install

    Install Graylog:

        cd /usr/local/src
        wget https://github.com/downloads/Graylog2/graylog2-web-interface/graylog2-web-interface-0.9.6p1.tar.gz
        tar xzf graylog2-web-interface-0.9.6p1.tar.gz
        cd graylog2-web-interface-0.9.6p1

    Setup Graylog:

        cd /usr/local/src/graylog2-web-interface-0.9.6p1
        sed -i "3 i gem 'thin', '~> 1.3.1'" Gemfile
        /opt/lp/ruby-1.9.3-test/bin/gem install bundle
        /opt/lp/ruby-1.9.3-test/bin/bundle install --path vendor/bundle --binstubs

    Begin the test:

        cd /usr/local/src/graylog2-web-interface-0.9.6p1
        /opt/lp/ruby-1.9.3/bin/bundle exec bin/rake
        #Could not find rake-0.9.2.2 in any of the sources
        #Run `bundle install` to install missing gems.

        cd /usr/local/src/graylog2-web-interface-0.9.6p1
        /opt/lp/ruby-1.9.3/bin/bundle exec bin/thin -e production -S test.sock -c . -R config.ru start
        #Could not find rake-0.9.2.2 in any of the sources
        #Run `bundle install` to install missing gems.

    Where am I going wrong?
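    It may just be a transcription slip, but note that the install prefix above is /opt/lp/ruby-1.9.3-test while the failing commands call /opt/lp/ruby-1.9.3/bin/bundle. A quick hedged sanity check that the bundle being run belongs to the same Ruby that ran bundle install (paths reused from the steps above):

        /opt/lp/ruby-1.9.3-test/bin/ruby -v
        /opt/lp/ruby-1.9.3-test/bin/bundle show rake

    If bundle show rake resolves under vendor/bundle for one interpreter path but not the other, the mismatch is the mixed-interpreter problem rather than a bundler bug.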


  • DPM server 2010 attach agent error: administrator privileges missing?

    - by Michael
    Hello, I'm hoping you can help me out with this little problem I'm having. I installed DPM 2010 in our test environment to test backups on Exchange 2010 servers. The environment includes:

        1x DC
        2x Exchange Server 2010
        1x DPM 2010 server

    All of these are running on Windows Server 2008 R2 virtual machines; the host machines are using Hyper-V. The problem goes like this:

        1. I tried to install the agents from the DPM server GUI, which failed saying I didn't have the correct permissions.
        2. I then tried the manual installation using the commands from the Microsoft site: http://technet.microsoft.com/en-us/library/bb870935.aspx
        3. The agent installation worked, but when I get to attaching the agents to the DPM server it still gives me an error saying that the specified account does not have administrator rights.
        4. I tried the domain admin, users who are domain admin + local admin, and single local admins.
        5. I have turned off the Windows firewall and made sure all the services are running.

    So now I'm out of ideas and really need help; the agent attach to the DPM server is the last thing holding me back from deploying everything to the production site. Any help would be really appreciated.
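    For readers following along, the manual attach sequence from that TechNet article is roughly the following; this is a sketch from memory with placeholder names, so verify every flag against the linked page:

        # On the protected server (elevated prompt, from the agent's bin directory):
        SetDpmServer.exe -dpmServerName DPMSERVER01

        # On the DPM server, in the DPM Management Shell (prompts for the password):
        .\Attach-ProductionServer.ps1 -DPMServerName DPMSERVER01 -PSName EXCH01 -UserName administrator -Domain TESTDOMAIN

    The account used in the second step is the one the "administrator rights" error is complaining about, which is why testing it with an explicitly local admin on the protected server is the usual first isolation step.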


  • Automatically Kill/Restart Process(es) When Memory is Critically Low

    - by nemesisfixx
    I have a Debian Wheezy VPS box where I'm running a couple of Django apps in production. Ideally, I would have tried to address my current memory footprint issues by optimizing the apps, adding more RAM, or augmenting with swap. But I doubt there's much memory I'd milk from optimizing the Django apps (the stack being open-source and robust), adding RAM is a cost constraint for me (this is a remote VPS), and the host doesn't offer options to use swap!

    So, in the meantime (as I wait to secure more resources to afford more RAM), I wish to mitigate the scenarios where the server runs out of memory and I just have to request a VPS restart (at that point, I can't even SSH into the box!). What I would love in a solution is the ability to detect when a process (or, generally, total system memory usage) exceeds a certain critical threshold (say, when free RAM falls to 10%), which I've noticed occurs after the VPS has been up for long and when traffic suddenly spikes to some of the heavy apps (most are just staging apps anyway). Then I wish to kill/restart the offending process(es), most likely Apache. Doing this manually in these situations has restored sane memory usage levels, a hint that possibly one or more of the Django apps has a memory leak. In brief (see the sketch after this list):

        Monitor overall system RAM usage.
        When free RAM falls below a given critical threshold (say below 10%), kill/restart the offending process(es); or, simpler, if we assume from my current log analysis (using linux-dash) that Apache is often the offender, just kill/restart it.
        Rinse and repeat...
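    A minimal watchdog along those lines, assuming Apache is the process to bounce (the threshold, interval, and service name are all assumptions to adjust):

        #!/bin/bash
        # Hypothetical sketch: restart Apache when free RAM (counting buffers
        # and cache as reclaimable) drops below THRESHOLD percent.
        THRESHOLD=10
        while true; do
            # On Wheezy, `free` prints: total used free shared buffers cached
            free_pct=$(free | awk '/^Mem:/ {printf "%d", ($4+$6+$7)*100/$2}')
            if [ "$free_pct" -lt "$THRESHOLD" ]; then
                logger "memwatch: free RAM at ${free_pct}%, restarting apache2"
                service apache2 restart
            fi
            sleep 60
        done

    Run from an init script (or a simpler variant without the loop from cron). It treats the symptom, not the leak, but that matches the stated goal of buying time until more RAM is affordable.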


  • How can I have puppet deploy ssh keys for virtual users?

    - by Pheezy
    I am trying to get puppet to assign authorized ssh keys for virtual users, but I keep getting the following error:

        err: Could not retrieve catalog: Could not parse for environment production: Syntax error at 'user'; expected '}' at /etc/puppet/modules/users/manifests/ssh_authorized_keys.pp:9

    I believe my configurations are correct (listed below), but is there a syntax error or scoping issue I am missing? I would simply like to assign users to nodes and have those users automagically have their ssh keys installed. Is there maybe a better way to do this and I'm just overthinking it?

        # /etc/puppet/modules/users/virtual.pp
        class user::virtual {
          @user { "user":
            home       => "/home/user",
            ensure     => "present",
            groups     => ["root","wheel"],
            uid        => "8001",
            password   => "SCRAMBLED",
            comment    => "User",
            shell      => "/bin/bash",
            managehome => "true",
          }

        # /etc/puppet/modules/users/manifests/ssh_authorized_keys.pp
        ssh_authorized_key { "user":
          ensure => "present",
          type   => "ssh-dss",
          key    => "AAAAB....",
          user   => "user",
        }

        # /etc/puppet/modules/users/init.pp
        import "users.pp"
        import "ssh_authorized_keys.pp"

        class user::ops inherits user::virtual {
          realize(
            User["user"],
          )
        }

        # /etc/puppet/manifests/modules.pp
        import "sudo"
        import "users"

        # /etc/puppet/manifests/nodes.pp
        node basenode {
          include sudo
        }
        node 'testbox' inherits basenode {
          include user::ops
        }

        # /etc/puppet/manifests/site.pp
        import "modules"
        import "nodes"

        # The filebucket option allows for file backups to the server
        filebucket { main: server => 'puppet' }

        # Set global defaults - including backing up all files to the main filebucket and adds a global path
        File { backup => main }
        Exec { path => "/usr/bin:/usr/sbin/:/bin:/sbin" }
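    As a hedged aside (not a confirmed fix): the error message itself points at a brace problem, and as pasted, class user::virtual is never closed before ssh_authorized_keys.pp begins; that may be a paste artifact rather than the real manifest. On the "better way" question, one common pattern is to make the key virtual alongside the user and realize both (names reused from the question, sketch only):

        # Sketch: virtualize the key with the user, then realize both together.
        @ssh_authorized_key { "user":
          ensure => "present",
          type   => "ssh-dss",
          key    => "AAAAB....",
          user   => "user",
        }

        class user::ops inherits user::virtual {
          realize(User["user"], Ssh_authorized_key["user"])
        }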


  • Reporting Services 2008: Virtual directories not visible in IIS7

    - by Ryan Barrett
    I'm having some problems with Reporting Services on Windows Server 2008 Standard. I've installed Server 2008 as a standalone web server (with the roles/features of a web application server). On top of that, I've installed SQL Server 2008 Standard with Reporting Services (and the rest of the BI tools). The problem is, I want to modify the rights on the virtual directories, but the virtual directories aren't appearing in the IIS 7 management tool. I can connect to Reporting Services, albeit only with the local Windows admin account. I can download Report Builder fine from a session on the server (but not from any clients). I've tried removing the default website from IIS, and that stops the Reporting Services website from working.

    The machine (a VM) isn't for production use; it's used on a closed network internally for testing and development purposes. I need to be able to let my fellow developers log in without a password, and they must be able to install Report Builder 2.0. It must not be linked to a domain or Active Directory in any form. Google isn't much help; the results suggest I modify the virtual directory. Does anyone have any suggestions?


  • JBoss 5 on AIX 5.3

    - by jess
    I am very much a newbie with AIX and system monitoring. Our application currently runs in production on JBoss 5.1 on AIX 5.3. Please see the configuration and system settings below.

        AIX system configuration:
        OS level: 5.3.9.0 (oslevel -g)
        Physical memory: 24GB (svmon -G)
        Page space: 4GB (lsps -s)
        Processors: 3 cores, PowerPC_POWER6, 4704 MHz (prtconf | grep Processor)

        Java version:
        JRE 1.6.0 IBM AIX build pap6460sr10fp1-20120321_01 (SR10 FP1) (java -fullversion)

        JBoss configuration:
        JBoss 5.1 / JBoss ESB 4.11
        HornetQ messaging with consumer flow control
        Java opts: -d64 -Xms2g -Xmx4g -XX:MaxPermSize=1024m

    Sometimes we observe very strange behavior: JBoss freezes without any error logs, and the server log stops without any further trace. We are also not able to get a thread dump (kill -3); nothing is generated at that point (kill -3 xxxxx works in normal circumstances). The only option available to us was to restart the JBoss server, and it seems all the messages that were in the queues during the freeze get processed after restarting. We tried tweaking some settings in JBoss HornetQ, as we thought the issue was there ("Hornetq Stuck By Default"), but we haven't had any luck and have been unable to isolate the issue at any point. We are looking at a tool like nmon for monitoring, but have no clue whether it is good enough for this. Please provide some pointers for investigating this issue. Thanks
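    When the freeze happens, some AIX-side data can be captured even if the JVM refuses to dump; a hedged sketch using stock AIX commands (the PID is a placeholder):

        # Record system stats every 60s to an nmon spool file for later analysis
        nmon -f -s 60 -c 1440
        # Memory breakdown of the JBoss process
        svmon -P <jboss_pid>
        # Per-thread state of the JVM; useful to spot threads stuck in kernel calls
        ps -mo THREAD -p <jboss_pid>

    Comparing the thread states during a freeze against a healthy baseline often narrows the problem to I/O, paging, or a lock inside the JVM before any JBoss-level tuning is attempted.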


  • Windows clients unable to access Samba share on AD joined Linux box every 7 days

    - by Hassle2
    The problem: every 7 days, two Windows servers are unable to access an SMB/CIFS share. It starts working again after a handful of hours.

    The environment:

        OpenFiler Linux box joined to a 2003 AD domain
        A foreground app on a Win2003 server accesses the SMB/CIFS share with Windows credentials
        Another process on Win2008 accesses the share via SQL Server with Windows credentials
        The Samba version on the Linux box is 3.4.5; security is set to ADS
        wbinfo and getent return the expected users and groups
        It does not look like a double-hop issue, as it's always the same 2 accounts, regardless of the calling user
        There is a DNS entry in both the forward and reverse lookup zones for the Linux box
        The Linux box's computer object in Active Directory shows that it was modified around/at the same time that the two clients started failing to access the share
        Accessing the share via IP works when access by name does not
        Rebooting the Windows server takes care of it (it's production, so we've only restarted it once)
        Restarting smbd, winbind, and nmbd had no effect

    Error in the Samba log for the client in question:

        smbd/sesssetup.c:342(reply_spnego_kerberos) Failed to verify incoming ticket with error NT_STATUS_LOGON_FAILURE!

    The question: does this look like the machine account password is changing (hence the AD object showing the updated modified date), or are the two Windows clients unable to request a new ticket that works against this Linux box?
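    Worth noting as a hedged pointer: Samba rotates its machine account password on a timer, and the default interval is 604800 seconds, exactly the 7-day cycle described. The smb.conf parameter is real; whether disabling rotation is the right call here is a judgment for the admin:

        [global]
            # default is 604800 seconds (7 days); 0 disables automatic changes
            machine password timeout = 0

    If the failures stop once rotation is disabled, the problem is clients holding Kerberos tickets keyed to the old machine password rather than anything share-level.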


  • Using ZFS or XFS on a Xen guest running Linux

    - by zoot
    Background: I'm investigating the viability of using a filesystem other than ext3/4, with the ability to run snapshots for backup and rollback purposes. The servers under consideration are mailbox server nodes running on Linode's Xen-based VPS platform. I'm particularly drawn to the various published benefits ZFS offers in terms of data integrity, and to this year's stable release of native ZFS support in Linux (http://zfsonlinux.org). ZFS appears to be the more thorough option in terms of benefits and simplicity (instead of LVM+XFS). Please note that I have little experience with ZFS (which I use on a local FreeNAS installation) and none with XFS, hence the post. To date, my servers use ext3 filesystems, not managed under LVM.

    Question in detail: I have two questions.

        (1) Which of the two filesystems would be the better choice, running on a Xen Linux guest, for the best of all three of the following aspects?

            Snapshots
            Data integrity
            Performance

        (2) If ZFS is a viable option, is it practical to use ZRAID across Xen disk images to further enhance the solution for data integrity?

    Note: I'm reluctant to consider btrfs, given the many warnings I've read about using it on production systems.
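    For context on the snapshot workflow in question, the ZFS side amounts to a couple of commands (the pool, dataset, and snapshot names below are assumptions):

        # Take, list, and roll back to a point-in-time snapshot
        zfs snapshot tank/mail@pre-upgrade
        zfs list -t snapshot
        zfs rollback tank/mail@pre-upgrade

    Snapshots are copy-on-write, so taking one is near-instant and costs space only as the live data diverges, which is a large part of the appeal over ext3-era backup schemes.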


  • fedora apache/nginx pylons

    - by microchasm
    I'm trying to wrap my head around Pylons and how it works. So far... it's been confusing. I'm using EC2 with Fedora 8. Everything is working so far (i.e. I have Pylons/Python et al. installed, and after creating a test app and running paster serve I can access the default page via my domain name). As the Pylons docs explain, and as I understand it, the built-in paster serve server is not suited to a production environment. What I am not clear on is what to do next. It seems like nginx is a good option, but I am more familiar with Apache (like .0002%). I plan on having virtual hosts (which nginx says it can accommodate). However, I am totally unclear on how the big picture is supposed to work:

        In order to serve an app, does paster serve need to be running?
        Does nginx/Apache then basically just act as a proxy to shuttle connections to the paster server?
        How do I start it so it doesn't terminate after closing the SSH connection?
        If running multiple apps, what do I set as the host/port in development.ini to differentiate the apps? Or, if this is not the right way, how do I differentiate between apps?
        I am more familiar with MySQL, but willing to negotiate PostgreSQL if it's a better fit. Is it?
        Is virtualenv a prerequisite to running multiple apps on the same machine?

    Thanks in advance for any tips.
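    On the proxying question, the usual shape is that each app runs under its own paster serve (or another WSGI server) on a local port, and nginx routes by Host header; a minimal sketch (the port and server name are assumptions, matched to the host/port in that app's .ini):

        server {
            listen 80;
            server_name app1.example.com;
            location / {
                # paster serve for app1, bound to 127.0.0.1:5000 in its config
                proxy_pass http://127.0.0.1:5000;
                proxy_set_header Host $host;
                proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            }
        }

    One such server block per virtual host, each pointing at a different local port, is what differentiates the apps.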


  • Relax Linux - it's just me! (filesystem permissions)

    - by Xeoncross
    One of my favorite things about Linux is also the most annoying: file system permissions. On production machines and web servers I love how everything is so secure and locked down, but on development machines it really slows me down. I'll give one example out of the many that I discover weekly.

    Like most people, I dual-boot Ubuntu and Windows so I can continue using the Adobe CS4 suite. I often design web themes and other things while I'm still using Windows. Later I'll boot into Ubuntu to take the themes and write the backend PHP for them. After mounting the Windows C: drive partition I can copy the template files over so I can begin editing them. However, thanks to Linux's desire to protect me, I find that after copying the files I end up with a totally locked set of files where even I don't have read-write permissions. So after careful consideration of the tremendous risks that the HTML files pose to me, I chmod them so that I and Apache can begin using them.

    Now granted, the chmod process isn't that hard, but after you chmod enough files per day you get sick of doing it. I'm constantly creating, fetching, editing, and removing files from my user, git repos, PHP, or other random processes. This is a personal development machine, after all; everything changes on a day-by-day basis. So my question is: how can I get Linux to relax about what I'm doing with my HTML/JS/PHP/TXT/SQL/etc. files so that I can work faster without constantly stopping to chmod things? I pinky-promise I won't hack into my account with an HTML file. ;)
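    A hedged sketch of where this particular pain usually comes from: NTFS has no Unix ownership, so owner and mode are assigned at mount time, and the partition can be mounted so copied files arrive already owned by the desktop user with sane modes (the device path, mount point, and uid/gid below are assumptions):

        # ntfs-3g honors uid/gid/umask options at mount time
        sudo mount -t ntfs-3g -o uid=1000,gid=1000,umask=022 /dev/sda1 /mnt/windows

    Put the same options in /etc/fstab and the chmod step after every copy disappears for this workflow, though it does nothing for the git/PHP cases.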


  • Has anyone used tools like (Chef, Salt, Puppet, CfEngine) to configure a 2008 Win Server with Sql?

    - by Development 4.0
    I have been looking into tools to automate the creation of servers, for two different reasons:

        Production
        Development machines

    I love the idea of the immutable server. I have seen the tools demoed and used successfully on *nix boxes running Rails or LAMP etc. Has anyone found a good way to do this in the Microsoft stack? I would like to get in on the fun and create scripts that will install Windows, patch it according to specification, deploy SQL Server, create scripts to build out a database, and, just for fun, deploy SharePoint, configure it, and then deploy a SharePoint solution to it.

    I can get part of the way: install Windows manually, install SQL Server manually, use PowerShell to do all the configuration and setup, install SharePoint and configure part of it, then PowerShell for the rest of the configuration and deploying a solution. I would love the ability to run one script, though, or at least one unified process. I can, and have, mostly used VM template images and then instantiated them, but the creation of the template is usually a manual step.
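    For the SQL Server piece specifically, setup does support unattended installs, which is what these tools typically end up scripting; a hedged fragment (flags recalled from memory for SQL Server 2008, so verify against the setup documentation before relying on them):

        setup.exe /Q /ACTION=Install /FEATURES=SQLEngine /INSTANCENAME=MSSQLSERVER /SQLSYSADMINACCOUNTS="DOMAIN\SqlAdmins"

    Chaining that behind an unattended Windows install (answer file) and ahead of the PowerShell configuration steps already described is the usual way a single unified process gets assembled on this stack.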


  • SQL Server Installation: Is it 32 or 64 bit?

    - by CapBBeard
    Recently I was performing an OS upgrade on one of our DB servers, moving from Server 2003 to Server 2008. The DBMS is SQL Server 2005. While reinstalling SQL on the new Windows installation, I went to another of our DB servers to verify a couple of settings. Now, I always thought this second server was Server 2003 x64 + SQL 2005 x64 (from what I'd been told), but I now have my doubts. I suspect that it is in fact only 32-bit SQL, and I'd like to verify this. Here are some details:

        The OS is definitely 64-bit.
        xp_msver shows Platform as NT INTEL X86.
        SELECT @@VERSION shows Microsoft SQL Server 2005 - 9.00.4035.00 (Intel X86)...
        However, sqlservr.exe is not shown with '*32' in Task Manager. Does anyone know why this is the case, if it is in fact 32-bit as claimed? Despite this, it does seem to be running out of the x86 Program Files folder.
        If I do the same checks on a confirmed 64-bit installation, they give back the expected 64-bit readings, which can only mean that the server in question is running only 32-bit SQL.

    That being the case, the question arises of how much memory this '32-bit' install can use. Task Manager reports about 3.5GB memory usage for sqlservr.exe (the server has 16GB physical). I suspect that AWE has not been configured at all, and therefore the server will be significantly under-utilised (remembering that the OS is 64-bit) if SQL is simply using a 32-bit address space. Is this assumption correct?

    I feel the server should have SQL reinstalled as 64-bit in order to fully utilise the hardware platform, but it is currently heavily in production; this will be no easy task. I suspect we may just have to configure AWE correctly and let it be for the time being (unless this is a bad idea?). I apologise that this question is a little vague/lost; I'm no SQL expert, just trying to get a handle on what's going on here.
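    For completeness, a couple of checks that report edition and bitness directly; these server properties exist in SQL Server 2005:

        -- Edition includes "(64-bit)" on a 64-bit install,
        -- e.g. 'Standard Edition (64-bit)' vs plain 'Standard Edition'
        SELECT SERVERPROPERTY('Edition') AS Edition,
               SERVERPROPERTY('ProductVersion') AS Version;
        SELECT @@VERSION;

    That cross-checks the xp_msver and @@VERSION readings above without relying on Task Manager's '*32' marker.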


  • Rails app complaining can't connect to memcached but I'm pretty sure it's running

    - by centipedefarmer
    All was well, then I rebooted the server. Right now:

        $ ps aux | grep memcache
        1000  27168  0.0  0.0 121972  1056 pts/0  Sl  15:18  0:00 memcached -m 64 -p 11211 -u nobody -l 127.0.0.1
        1000  27816  0.0  0.0   7628   956 pts/0  S+  15:36  0:00 grep memcache

    Meanwhile the Rails app's log is filling with endless repetitions of this:

        MemCacheError (No connection to server (localhost:11211 DEAD (Timeout::Error: execution expired), will retry at Tue Feb 15 15:35:55 -0600 2011)): No connection to server (localhost:11211 DEAD (Timeout::Error: execution expired), will retry at Tue Feb 15 15:35:55 -0600 2011)

    Being that I'm more of a developer than a server guy, and being that we don't really have a "server guy," and this being in production... where do I start with this?
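    A first hedged check is whether anything actually answers on 11211, independent of Rails; both probes below are standard tooling, and the stats command works against any memcached:

        # Is something listening on 11211, and does it answer stats?
        netstat -tlnp | grep 11211
        printf 'stats\r\nquit\r\n' | nc 127.0.0.1 11211 | head

    If the probe answers but Rails still times out, the suspicion shifts from memcached itself to the app's client config (e.g. "localhost" resolving to ::1 while memcached is bound only to 127.0.0.1).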


  • Current wisdom on SQL Server and Hyperthreading?

    - by BradC
    Lots of articles out there (see Slava Oks's original SQL 2000 article and Kevin Kline's SQL 2005 update) recommend disabling hyperthreading on SQL servers, or at least testing your specific workload before enabling it. This issue is gradually becoming less relevant as true multi-core processors replace hyperthreaded ones, but what's the current wisdom? Does this advice change at all with SQL 2005 64-bit, SQL 2008, or Windows Server 2008? Ideally, this should be tested in advance in a staging environment, but what about servers that have already made it into production with HT enabled? How can I tell whether performance issues we're experiencing might be related to HT? Is there some specific combination of perfmon counters that might point me in that direction, as opposed to all the other things I normally pursue when working on improving SQL performance?

    Edit: This is especially attractive because of the potential for an across-the-board improvement on some of my high-CPU servers, but the client is going to want to see something concrete that helps me identify which servers could really benefit from disabling hyperthreading. Of course, conventional performance troubleshooting is ongoing, but sometimes every little bit helps.


  • Make a snapshot of a live mySQL database with myISAM & innoDB tables without locking

    - by Artem
    We have a live database in production, and we are running out of space on the server. So I would like to transfer to a new server without any downtime (or with as little downtime as possible). In general, I would also like to have a hot failover copy of the database available. I would like to use replication to get all of the data copied to the new machine, and then at some point flip a switch and have that new machine become the master (a normal failover scenario). My problem is that I am not sure how to initialize replication without locking the db to make the initial snapshot. Is there any way to do this? I know I could do it using --single-transaction if I were using InnoDB, but very unfortunately we have some MyISAM tables in there (in fact the largest, 150GB, table is MyISAM, and I want to switch it to InnoDB, but I can't do it until I have more space and a hot copy to switch to). Any ideas? Is there some way to make such a snapshot? Or, alternatively, is there a way to get replication to "catch up" without a snapshot for initialization?
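    For reference, the standard snapshot-for-replication recipe being alluded to looks like the following; with MyISAM tables in the mix, it is the implicit locking that makes it blocking:

        # Consistent dump that records the binlog position for the new slave.
        # --single-transaction gives a lock-free snapshot only for InnoDB tables;
        # MyISAM tables still need the server quiet (or a global read lock).
        mysqldump --all-databases --master-data=2 --single-transaction > snapshot.sql

    The --master-data option embeds the CHANGE MASTER coordinates in the dump, so the new server can start replicating from exactly the snapshot point once the dump is loaded.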


  • How to create RPM for 32-bit arch from a 64-bit arch server?

    - by Gnanam
    Our production server is running CentOS 5, 64-bit arch. Because no RPMs are currently available for the latest SQLite version (v3.7.3), I created an RPM using rpmbuild for the very first time by following the instructions given here. I was able to successfully create an RPM for the 64-bit (x86_64) architecture, but I am not able to create an RPM for the 32-bit (i386) architecture. It fails with the following errors:

        ...
        + ./configure --build=x86_64-redhat-linux-gnu --host=x86_64-redhat-linux-gnu --target=i386-redhat-linux-gnu --program-prefix= --prefix=/usr --exec-prefix=/usr --bindir=/usr/bin --sbindir=/usr/sbin --sysconfdir=/etc --datadir=/usr/share --includedir=/usr/include --libdir=/usr/lib64 --libexecdir=/usr/libexec --localstatedir=/var --sharedstatedir=/usr/com --mandir=/usr/share/man --infodir=/usr/share/info --enable-threadsafe
        checking for a BSD-compatible install... /usr/bin/install -c
        checking whether build environment is sane... yes
        checking for gawk... gawk
        checking whether make sets $(MAKE)... yes
        checking for style of include used by make... GNU
        checking for x86_64-redhat-linux-gnu-gcc... no
        checking for gcc... gcc
        checking for C compiler default output file name...
        configure: error: C compiler cannot create executables
        See `config.log' for more details.
        error: Bad exit status from /var/tmp/rpm-tmp.73141 (%build)

        RPM build errors:
            Bad exit status from /var/tmp/rpm-tmp.73141 (%build)

    This is the command I ran:

        rpmbuild --target i386 -ba sqlite.spec

    My question is, how do I create an RPM for a 32-bit arch from a 64-bit arch server?
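    A hedged pointer rather than a confirmed fix: note that configure above still reports --build and --host as x86_64, so the build never actually runs in a 32-bit personality, and the missing 32-bit toolchain support is what usually produces "C compiler cannot create executables". Two things typically needed (package names recalled from memory for CentOS 5):

        # 32-bit libc/compiler support alongside the 64-bit toolchain
        yum install glibc-devel.i386 libgcc.i386
        # Run the build under a 32-bit personality so uname/configure see i386
        setarch i386 rpmbuild --target i386 -ba sqlite.spec

    Checking config.log after a failed run confirms which of the two (personality vs missing 32-bit libs) was the blocker.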


  • MS SQL Server Firewall Ports

    - by mmacaulay
    Hi, I've recently found myself in the position of quickly deploying a production app on SQL Server 2008 (Express), and I've been having some issues configuring firewall rules between our web server running the ASP.NET app and our database server. Everything I can find on the internet claims that I should only need TCP ports 1433/1434 and UDP port 1434 accessible on the database server. However, we were unable to get connectivity going between the web app and the database with just those ports. With the help of one of the guys in our datacentre, we discovered that traffic was also going to TCP port 2242 on the database server. After opening this port, everything worked, but we're not sure why. Later on, I had to reinstall SQL Server due to some disk space issues, and found that the problem had resurfaced; after another session with the packet sniffer, we discovered that this time traffic was going to TCP port 4541 on the database server. My question is: is there some configuration option I'm missing in SQL Server that's making it choose random ports? I'd like to have our firewall rules locked down as much as possible, and of course we'd like to avoid any future mysterious connectivity issues, especially once the app is live. Both servers are running Windows 2003 R2 x64.
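    A hedged first check on any given start of the instance, using standard Windows tooling and nothing SQL-specific: find the PID of sqlservr.exe in Task Manager, then see which port it actually bound:

        netstat -ano | findstr /i listening | findstr <sqlservr_pid>

    If that port changes across restarts, the instance is picking its port dynamically rather than using a fixed one, which would explain the 2242-then-4541 behavior observed with the packet sniffer.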


  • Problem posting multipart form data using Apache with mod_proxy to a mongrel instance

    - by Ryan E
    I am attempting to simulate my site's production environment as closely as I can on my local machine. This is a Rails site that uses Apache with mod_proxy to forward requests to a mongrel cluster. On my Mac OS X Leopard machine, I have the default install of Apache running and have configured a vhost to use mod_proxy to forward requests to a local running mongrel instance on port 3000:

        <Proxy balancer://mongrel_cluster-development>
          BalancerMember http://127.0.0.1:3000
        </Proxy>

    For the most part, this is working fine. I can browse my development site using the ServerName of the vhost I configured and can confirm that requests are being properly forwarded to the mongrel instance. However, there is a page on the site with a multipart form that is used to upload an image to the server. When I post this form, there is a delay of about 5 minutes and the browser ultimately returns a Bad Request:

        Your browser sent a request that this server could not understand.

    In the error log for my vhost:

        [Tue Sep 22 09:47:57 2009] [error] (70007)The timeout specified has expired: proxy: prefetch request body failed to 127.0.0.1:3000 (127.0.0.1) from ::1 ()

    This same form works fine if I browse directly to the mongrel instance (http://127.0.0.1:3000). Anybody have any idea what the problem might be and how to fix it? If there is any important information that I neglected to include, post a comment and I can add to this question.

    Note: Upon further investigation, this appears to be a problem specific to Safari. The form works fine in Firefox.


  • Configure Cisco Pix 515 with DMZ and no NAT

    - by Rickard
    I hope that someone can shed some light on my situation, as I am fairly new to PIX configurations. I will be getting a new net for my department, which I am going to configure. At hand, I have a Cisco PIX 515 (not E) and a Cisco 2948 switch (and, if needed, I can bring up a 2621XM router, but this is private and not owned by my dept.). The network I will be getting is the following:

        10.12.33.0/26
        The link net between the ISP routers and my network will be 10.12.32.0/29, where the GW is .1 and the HSRP routers are .2 and .3

    The ISP has asked me not to NAT the addresses on my side, as they will set it up to give 10.12.33.2 a one-to-one NAT to a public IP. The rest of the IPs will be a many-to-one NAT to another public IP. 10.12.33.2 is supposed to be my server placed on the DMZ; the rest of the IPs will be used for my clients and the AD server (which is currently also acting as a DHCP server in the old network config with another ISP).

    Now, the question is, how would I best configure this? Or am I thinking wrong here? I am expected to put the PIX first, from the ISP outlet, then the switch, which will connect my clients. But with the ISP routers being on a different network, how will the firewall forward the packets to the other network? It's a firewall, not a router. I have actually never configured a PIX before, and fortunately this is more like a lab network, not a production network, so if something goes wrong it's not the end of the world, although annoying. I am not asking for a full configuration from anyone, just some directions, or possibly some links which will give me some hints. Thank you very much!


  • How to manually start and re-start Apache with mod_wsgi powering a password protected Python WSGI app?

    - by Mahmoud Abdelkader
    I'm working on a project where I have to meet regulatory requirements that demand at least 3 out of 5 authorized users, with pre-assigned passwords, to start a backend web service that handles very sensitive information. Right now, the prototype has been approved and is running on Python's wsgiref.simple_server(), which I have programmed to manually prompt for the passwords. Now that the prototype has been approved, I have to migrate the web application to a production environment, where I will need to run it behind Apache and mod_wsgi. I have two questions:

        Right now, I use a thin Python wrapper around expect to programmatically allow for remote password entry. How do I get Apache to prompt me for a password before starting? Will this have to be in the app.wsgi script that's executed by mod_wsgi? How would that work, since Apache daemonizes and thus has no stdin?
        Will I have to worry about some type of code reload? Apache probably has some maximum number of requests before it kills and restarts a worker process; would this require a password prompt as well?
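    One hedged pattern that sidesteps stdin entirely: have the WSGI app boot in a locked state and collect the 3-of-5 passwords over an unlock endpoint, only enabling the service once enough have arrived. A bare-bones sketch, with illustrative names and no claim to being a vetted design (it also ignores TLS, authentication of the unlock calls, and share verification, all of which a regulated deployment would need):

        import threading

        REQUIRED = 3
        _shares = set()
        _unlocked = threading.Event()

        def application(environ, start_response):
            if environ['PATH_INFO'] == '/unlock' and environ['REQUEST_METHOD'] == 'POST':
                size = int(environ.get('CONTENT_LENGTH') or 0)
                _shares.add(environ['wsgi.input'].read(size))
                if len(_shares) >= REQUIRED:
                    _unlocked.set()  # derive/decrypt the real secrets here
                start_response('204 No Content', [])
                return [b'']
            if not _unlocked.is_set():
                start_response('503 Service Unavailable', [('Content-Type', 'text/plain')])
                return [b'locked']
            start_response('200 OK', [('Content-Type', 'text/plain')])
            return [b'sensitive service running']

    Note the state is per-process, so worker recycling (the second question above) does indeed re-lock the app; running a single long-lived mod_wsgi daemon process is one way people contain that.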


  • Thin client - cloud machine - to run via iPad, iPhone, most Androids etc

    - by Carl Lindberg
    I'm tired of having a MacBook laptop that breaks down, and of having files that I need to sync via Dropbox etc. across machines with different OS installations. It sucks. I want a thin client where I can log in from any machine, my iPhone, PC desktop, iPad etc., to one running machine. I would like to replace a modern, powerful desktop iMac with a thin client running via my iPad. I will connect the iPad to a keyboard/mouse too, so you get the idea. But I want to be able to use some of the Android phones as well (I guess most Android phones today have good enough performance/resolution to run a thin client). Of course, it has to handle sound input/output. Printing can be solved by PDF/emailing etc., so no direct communication to printer ports or USB is necessary. Is there such a service today? It should cost somewhere under $40/month. I will run stuff like the CPU-heavy Ableton for music production, Xcode for making iOS apps, some games etc. And on the thin client I would also run virtual machines: VMs of Ubuntu and Windows.


  • Will adding an SSD cache device to my ZFS storage improve performance?

    - by Sysadminicus
    The server has 4GB of RAM and my zpool is made up of 15.5k SAS drives arranged like this:

        NAME         STATE     READ WRITE CKSUM
        tank         ONLINE       0     0     0
          raidz1-0   ONLINE       0     0     0
            c0t2d0   ONLINE       0     0     0
            c0t3d0   ONLINE       0     0     0
            c0t4d0   ONLINE       0     0     0
            c0t5d0   ONLINE       0     0     0
            c0t6d0   ONLINE       0     0     0
            c0t7d0   ONLINE       0     0     0
            c0t8d0   ONLINE       0     0     0
          raidz1-1   ONLINE       0     0     0
            c0t10d0  ONLINE       0     0     0
            c0t11d0  ONLINE       0     0     0
            c0t12d0  ONLINE       0     0     0
            c0t13d0  ONLINE       0     0     0
            c0t14d0  ONLINE       0     0     0
        spares
          c0t9d0     AVAIL
          c0t1d0     AVAIL

    The primary use is as an NFS store for a couple of VMware ESXi servers. I can't do any "true" benchmarks because this is a production system (no budget for test systems), but using dd and bonnie++ I can't get more than ~40-50MB/s writes and ~70-90MB/s reads. It seems I should be able to do much better, but I'm not sure where to optimize. Based on what I've read, I think dropping in an OCZ Vertex 2 Pro SSD as my L2ARC is going to be the best bang for the buck to improve throughput. Is there something else I should be looking into to help performance? If not:

        How do I know how big a cache device I need?
        Am I safe with only a single SSD as my cache device?
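    For reference, adding an L2ARC device is a single, reversible operation (the device name below is a placeholder), and unlike a log device, losing a cache device is non-fatal, which bears on the single-SSD question:

        # Attach an SSD as a read cache; undo later with `zpool remove tank c0t15d0`
        zpool add tank cache c0t15d0

    One caveat worth flagging: L2ARC headers consume ARC (RAM), so with only 4GB of RAM an oversized cache device can eat into the primary cache it is meant to supplement.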


  • Is it possible to use rsync over sftp (without an ssh shell)?

    - by Tom Feiner
    Rsync over ssh works great every time. However, trying to rsync to a host which allows only sftp logins, but not ssh logins, produces the following error:

        rsync -av /source ssh user@remotehost:/target/
        protocol version mismatch -- is your shell clean?
        (see the rsync man page for an explanation)
        rsync error: protocol incompatibility (code 2) at compat.c(171) [sender=3.0.6]

    Here's the relevant section from the rsync man page:

        This message is usually caused by your startup scripts or remote shell facility producing unwanted garbage on the stream that rsync is using for its transport. The way to diagnose this problem is to run your remote shell like this:

            ssh remotehost /bin/true > out.dat

        then look at out.dat. If everything is working correctly then out.dat should be a zero length file. If you are getting the above error from rsync then you will probably find that out.dat contains some text or data. Look at the contents and try to work out what is producing it. The most common cause is incorrectly configured shell startup scripts (such as .cshrc or .profile) that contain output statements for non-interactive logins.

    Trying this on my system produced the following in out.dat:

        ssh-dummy-shell: Command not allowed.

    As I thought, the host is not allowing ssh logins. The following link shows that it is possible to accomplish this task using FUSE with sshfs; however, it is extremely slow and not fit for production use. Is there any chance of getting rsync over sftp to work?


  • Haproxy not properly passing on X-Forwarded-For header

    - by JesseP
    I have backend web servers that receive requests by way of haproxy -> nginx -> fastcgi. The web app used to see multiple IPs coming through in the X-Forwarded-For header, chained together with commas (most original IP on the left). At some point in the recent past (just noticed, so not sure what caused it) something changed, and now I'm only seeing a single IP passed in the header to my web application. I've tried haproxy 1.4.21 and 1.4.22 (recent upgrade) with the same behavior.

    HAProxy has the forwardfor option set:

        option forwardfor

    The nginx fastcgi_params config defines this header to be passed to the app:

        fastcgi_param HTTP_X_FORWARDED_FOR $http_x_forwarded_for;

    Anyone have any ideas on what might be going wrong here?

    EDIT: I just started logging the $http_x_forwarded_for variable in the nginx logs, and nginx is only ever seeing a single IP, which shouldn't ever be the case, as we should always see our haproxy IP added in there, right? So the issue must either be in nginx's handling of the variable coming in, or in haproxy not building it properly. I'll keep digging...

    EDIT #2: I enabled request and response header logging in HAProxy, and it is not spitting anything out for X-Forwarded-For, which seems very odd:

        Oct 10 10:49:01 newark-lb1 haproxy[19989]: 66.87.95.74:47497 [10/Oct/2012:10:49:01.467] http service/newark2 0/0/0/16/40 301 574 - - ---- 4/4/3/0/0 0/0 {} {} "GET /2zi HTTP/1.1" O

    Here are the options I set for this in my frontend:

        mode http
        option httplog
        capture request header X-Forwarded-For len 25
        capture response header X-Forwarded-For len 25
        option httpclose
        option forwardfor

    EDIT #3: It really seems like haproxy is munging the header and just passing a single one on to the backend. This is fairly impacting to our production service, so if anyone has any ideas it would be greatly appreciated. I'm stumped... :(
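    One hedged way to settle whether haproxy or nginx is dropping the chain: capture the header on the wire between the two, bypassing both programs' logging (the interface and backend port are assumptions):

        # Watch the actual request headers haproxy sends to the nginx backend
        tcpdump -A -s 0 -i lo 'tcp port 8080' | grep -i x-forwarded-for

    If the full comma-chained list appears on the wire but not in $http_x_forwarded_for, the problem is on the nginx side; if only one IP ever crosses the wire, haproxy (or something upstream of it) is rewriting the header.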

