Search Results

Search found 18845 results on 754 pages for 'wayback machine'.

Page 267/754

  • Adding Multiple Interfaces to EC2 Ubuntu 12.04

    - by nocode
    I have an m1.medium Ubuntu 12.04 instance with two ENIs. I have a VPC set up with a private and a public subnet (Private: 10.50.1.0/24, Public: 10.50.101.0/24). I launched the instance on the private subnet. I configured a NAT instance and routed internet access for all servers in the private subnet through it: the route table on the private subnet points towards the NAT instance and the route table on the public subnet points to the internet gateway. I am trying to add a public interface to the machine so that I can put it behind an ELB. After I added the second ENI, configured a static IP in /etc/network/interfaces and restarted the networking service, I could no longer reach the instance from the public subnet.

    Works: private to private, private to public. Does not work: public to private.

    For the public-to-private case I ran a tcpdump on the private machine and can see the request coming in. My guess is that the reply is routed out over the new public interface instead of the private one. Here's my routing table:

        default      10.50.1.1  0.0.0.0        UG  100  0  0  eth0
        10.50.1.0    *          255.255.255.0  U   0    0  0  eth0
        10.50.101.0  *          255.255.255.0  U   0    0  0  eth1

    My networking knowledge is limited; I believe I have to add some routes but I'm unsure of the command/syntax to use.
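    The asymmetry described above (requests arrive on eth1 but replies leave via the eth0 default route) is usually handled with source-based policy routing. A minimal sketch, assuming eth1's address is 10.50.101.20 and the public subnet's gateway is 10.50.101.1 (both addresses are assumptions, not values from the question):

        # create a second routing table for traffic sourced from the public ENI
        echo "200 public" | sudo tee -a /etc/iproute2/rt_tables

        # replies from the eth1 address leave via the public subnet's gateway
        sudo ip route add default via 10.50.101.1 dev eth1 table public
        sudo ip rule add from 10.50.101.20/32 table public
        sudo ip route flush cache

    To survive reboots, the same ip commands can be added as post-up lines under the eth1 stanza in /etc/network/interfaces.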

    Read the article

  • Some Apps don't start on Windows 8 Release Preview

    - by Exa
    I recently installed the Release Preview of Windows 8 in a virtual machine. Some apps do not work: when I open them (by clicking their tile on the Start screen) I see a splash screen and nothing else happens. Sometimes the app crashes after 30 seconds, sometimes it just keeps on loading. Good examples are the "Map" app from Windows 8 and the app "Cookbook" by Bewise. I installed Cookbook and when I had a look at the Task Manager I saw that the 32-bit version was running, but I have an x64 Windows 8... Could this be a problem? Shouldn't the Windows Store download the correct version? This is the setup of my virtual machine:

        Windows 8 Release Preview x64
        Oracle VirtualBox
        4 of 8 cores from the host system
        8 of 16 GB RAM from the host system
        256 MB graphics memory
        guest additions installed
        resolution 1920 x 1080

    Do you need further information? Unfortunately there is no error message... I just see the start screen of the app with its logo and it keeps loading, but nothing happens. Other apps (like Mail, Video, Social, etc.) work fine.

    Read the article

  • SSH initial prompt hangs for 10 minutes but console login and initial prompt are very responsive - why?

    - by rfreytag
    I have been running an ESXi 4.0 server for months with a couple of Windows Server 2003 and several Ubuntu Server 10.04 VMs. The performance has been impressive on 6GB i7 Asus P6T hardware. Suddenly, a week ago, SSH logins to the Ubuntu VMs started taking 10 minutes when connecting over the LAN (over a WAN the connection is broken long before that). When logging in to these VMs the password prompt arrives immediately, and failed passwords are rejected immediately. But the moment I log in successfully, the shell prompt appears and everything hangs for many minutes. Sometimes the connection hangs before the shell prompt appears, and sometimes I can type a command, but the moment I hit return the machine hangs. Ten full minutes later control returns and the VM is responsive. NOTE: there are several Ubuntu VMs on the same host machine that are identical in every way I can tell, yet only one of them displays this behavior. That is why I mention the ESXi host only in passing - I don't think it has anything to do with the problem. This behavior is never seen when I connect to the troubled VM's console (through vSphere Client); from the console the Ubuntu VMs all respond beautifully. I have seen http://kb.vmware.com/selfservice/microsites/search.do?cmd=displayKC&docType=kc&externalId=1003496&sliceId=1&docTypeID=DT_KB_1_1&dialogID=229586372&stateId=1%200%20229588522 but since that relates to delays in seeing the password prompt, it does not appear to be the solution here. Any other suggestions are very welcome - thank you.
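    Since the console login is fast and the stall happens after authentication, a hedged first pass is to see what the session is waiting on; the commands below are standard OpenSSH/Ubuntu diagnostics, not a confirmed fix:

        # client side: -vvv shows exactly which stage the session stalls at
        ssh -vvv user@troubled-vm

        # on the VM (via the vSphere console): check settings that commonly add
        # login delays, then adjust /etc/ssh/sshd_config if needed
        sudo sshd -T | egrep -i 'usedns|gssapi'
        # e.g. set "UseDNS no" and "GSSAPIAuthentication no", then: sudo service ssh restart

        # a full disk or a hung remote filesystem can also stall the login shell
        df -h
        mount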

    Read the article

  • Faster, secure protocol/client/server required for long-distance transfer

    - by Chopper3
    I've run into a problem and I'm looking for a new secure protocol/client/server that's faster over a 1Gb/s fibre link - let me tell you the story... I have a pair of redundant, diversely-routed, 1Gb/s links over a distance of around 250 miles or so (not dark fibre but a dedicated point-to-point link, not a mesh). At the 'client' end I have an HP DL380 G5 (2 x dual-core 2.66GHz Xeons, 4GB, Windows 2003 EE 32-bit); at the 'server' end I have an HP BL460c G6 (2 x quad-core 2.53GHz Xeons, 48GB, Oracle Linux 5.3 64-bit). I need to transfer around 500 x 2GB files per week from the client to the server machine - but the transfer NEEDS to be secure. Using either iperf or regular FTP I can get ~80MB/s of transfer pretty consistently, which is great. Using WinSCP or Windows SFTP I can't seem to get more than ~3-4MB/s; at that point the server's CPU is 3% busy while CPU0 of the client goes to ~30% utilised. We've tried editing various TCP window sizes with little success. Both ends are connected to quite low-usage Cisco Cat6509s with Sup720s. I can replace the client machine with a newer machine and/or move it to Linux - but this will take time. Clearly these single-threaded secure Windows clients are introducing too much latency doing their encryption. So, a few questions/thoughts:

        Are there any higher-performing secure protocols or client software for Windows that I could try? I'm pretty protocol-agnostic so long as it'll work between Windows and Linux.
        Should I be using hardware to do the encryption, either in the client or in the network? If so, what would you recommend?
        I'm not convinced that just swapping the server would be that much faster - the CPU was only at 30%, but then again that's higher than I'd have expected given the load. Moving to Linux at the client end may be a better idea, but would be quite disruptive. Am I missing a trick?

    Thanks in advance.
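    A hedged sketch of things to benchmark before changing hardware - these assume an OpenSSH-style command-line client is available on the Windows box (e.g. via Cygwin) and that a weaker-but-faster cipher is acceptable; neither is stated in the question:

        # single stream with a cheaper cipher than the SFTP defaults
        scp -c arcfour big-file user@server:/data/
        scp -c aes128-ctr big-file user@server:/data/

        # rsync over the same transport is restartable and easy to parallelise
        rsync -av --partial --progress -e "ssh -c arcfour" /staging/ user@server:/data/

    If the single-core encryption rate is still the bottleneck, running several transfers in parallel (different files per stream) is often the simplest way to use more of the client's cores.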

    Read the article

  • pcfg_openfile: unable to check htaccess file, ensure it is readable

    - by rxt
    After moving a website folder on my local development machine to another drive, then moving it back, I got a 403 error. Most of this problem probably had to do with rights that got messed up. After deleting the code and restoring it from SVN, the rights seemed all right, but the error stayed. The setup is a bit complex, as follows:

        I have Ubuntu 10.04 as my development machine, trying to mimic the server as much as possible.
        We use Eclipse + SVN and I create all projects in a local folder under my user account.
        In /var/www-vhosts I create a folder for each vhost, like this one: test.localhost
        test.local/index.php includes the index file of the project
        test.local/.htaccess is a symbolic link to the htaccess file in a project subfolder

    I get the following error in the Apache error log:

        [Thu Jul 08 15:55:56 2010] [crit] [client 127.0.0.1] (13)Permission denied: /var/www-vhosts/test.localhost/.htaccess pcfg_openfile: unable to check htaccess file, ensure it is readable

    The problem seems to be the .htaccess file, or the link to it. When I empty the htaccess file, nothing changes. When I remove the link, the index include produces some output (in the Apache error log). When I remove the link and replace it with the actual file, I get another error:

        [Thu Jul 08 16:47:54 2010] [error] [client 127.0.0.1] Symbolic link not allowed or link target not accessible: /var/www-vhosts/test.localhost/test

    I'm lost here and don't know what to do next. Do you have any ideas what I can try? This setup has worked before, but I don't know what is different now.
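    A hedged way to narrow this down - the paths below come from the error messages, and www-data is assumed to be the Apache user on this Ubuntu machine:

        # show owner/permissions for every path component Apache has to traverse
        namei -l /var/www-vhosts/test.localhost/.htaccess

        # check whether the Apache user can actually read the file and the link target
        sudo -u www-data cat /var/www-vhosts/test.localhost/.htaccess

    For the second error, the vhost's <Directory> block also needs Options FollowSymLinks (or SymLinksIfOwnerMatch, in which case the link and its target must have the same owner) for Apache to follow the link at all.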

    Read the article

  • SQL Server Unattended Install through SSH

    - by Samuel
    I'm trying to install SQL Server from the command line through Cygwin OpenSSH. The install works when I log on to the server as Administrator and execute the script through a Cygwin shell, but it doesn't work when I SSH into the machine using Administrator's credentials and run the exact same command. I've already verified that the SSHD process is running as Administrator, and I've verified that the install script is indeed starting under Administrator. Is there something different about the terminal in SSH vs. the Cygwin terminal on the machine that would cause this problem? Specifically, what fails is that the SQL Server install runs for a while and then stops with MSI error 1622: "Error opening installation log file. Verify that the specified log file location exists and is writable." If I run both installs, I've noticed that they have different authentication IDs in ProcMon, but they have exactly the same command-line parameters. There has to be something in SSH that is causing permission issues... Any ideas?
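    MSI error 1622 is specifically about the installer's log location, and non-interactive SSH sessions often come up with a different (or unwritable) TEMP/TMP than an interactive logon. A hedged check from inside the SSH session - C:\Temp and the script name are stand-ins, and the setup arguments stay whatever the existing script already passes:

        # what does the Windows side think TEMP/TMP are in this session?
        cmd /c "echo %TEMP% %TMP%"

        # point them at a directory that is known to exist and be writable,
        # then re-run the same unattended install command
        export TEMP='C:\Temp'
        export TMP='C:\Temp'
        ./install-sqlserver.sh    # the existing script, unchanged

    If that doesn't change anything, comparing the two ProcMon traces filtered on ACCESS DENIED results around the log-file path usually shows exactly which object the SSH-launched installer can't open.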

    Read the article

  • Memory overcommitment on VMware ESXi 5.0

    - by Tibor
    I would like to better understand the memory overcommitment possibilities of VMware ESXi. I've read this paper from VMware, so I am familiar with the general concepts, such as hypervisor swapping, memory ballooning and page sharing. It seems that a combination of these techniques allows for quite a large degree of overcommitment, but I am not sure. I am deploying a virtual test lab comprising 4 identical sets of virtual servers and workstations and a couple of virtual router instances. Overall, I expect to be running around 20 virtual machines with Windows XP, Windows 7 and Ubuntu for the workstation hosts, as well as CentOS and Windows Server 2008 instances for the servers. The problem, however, is that the host machine only has 12GB of RAM and I don't have the option of adding more. I would like to know the best way to configure the guests in order to achieve reasonable performance within these constraints. I have these two options:

        Allocate as little RAM as possible to each virtual machine.
        Allocate an extraordinary amount (such as 4 GB per instance) and let the balloon driver do the rest.
        Something else?

    Which would work better? The machines will mostly be idle, so I don't have any major performance expectations, but they should run reasonably smoothly nevertheless.

    Read the article

  • I need to preserve a tape using Symantec Backup Exec; I'm having trouble doing so

    - by MrVimes
    Please forgive me if this is the wrong Stack Exchange site, and please suggest which one I should post this to if it is. There's an automatic tape machine running in a remote location, with software (Symantec Backup Exec 11d). Recently one of the servers being backed up had problems with its RAID controller, so one of the drives has become invisible. I need to preserve the last good backup of that drive, so I am trying to replace the tape holding the most recent backup of that drive with one of the scratch (blank) tapes present in the machine. I've tried the following:

        Associate the blank media with the media set in question (Wednesday).
        For the existing media (the tape with the data I want to keep), click 'move to vault' and move it to the offline vault.
        Associate it with something other than 'Wednesday' (a media set called 'keep data infinitely...').
        Run an inventory on that slot.

    The above steps, I'm led to believe, are supposed to put the fresh tape in the slot that held the tape I want to keep. But after the inventory (and after refreshing the device tree) the slot keeps showing up as containing the tape I want to keep. I am a complete newbie with this software. Can you tell me what I'm doing wrong, and/or how to achieve my goal?

    Edit: I just want to point out that I did try to get help directly from Symantec with this, but having jumped through countless hoops to create an account and a support ticket, my progress was halted at the final step by a requirement for something called a 'technical contact ID', with no explanation of what it is or how to get one.

    Read the article

  • Processes slow down after some time of actively running

    - by Yervand Aghababyan
    I have several cron jobs running on an Ubuntu machine, each doing some pretty heavy-load work. The cron jobs parse files, and the bigger the file the longer it takes to parse. The strange thing is that if I make the files too big (like 30MB) the script kind of hangs. It starts processing them really enthusiastically, but after some time (something like 5-10 minutes) the CPU usage of the process drops a lot and it gets into some "stuck" state: if prior to this the process in htop was using 70-80% of the CPU, after this drop occurs it slows down to something like 5-10%. The load average drops as well. The status of the processes sometimes changes to D in htop (uninterruptible sleep, typically waiting on I/O - not the zombie state I first assumed). Today I noticed the same behavior in MySQL processes when executing heavy queries (a query took something like 4 hours to execute). The cron jobs are mostly PHP, and during their processing most of the CPU is eaten by the PHP process and not MySQL, so I think the issue is not with a specific language/program but with the way the processes are "managed". The only other place I've seen similar behavior was on my Amazon EC2 micro instance, where after some aggressive CPU use the CPU quota would take effect and everything slowed down dramatically. This is a dedicated machine running Ubuntu. What might be the cause?
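    Processes stuck in D state are blocked in the kernel, almost always waiting on I/O, so a hedged first step is to look at disk wait rather than CPU while one of the jobs is in that state (iostat comes from the sysstat package):

        # high %wa / await while the job is "slow" points at the disk subsystem
        vmstat 1 5
        iostat -x 1 5

        # what the stuck processes are actually blocked on
        ps -eo pid,stat,wchan:30,comm | awk '$2 ~ /D/'

    If the D-state processes are all waiting in filesystem or block-device functions, the 30MB parses (plus MySQL temporary tables) are probably saturating the disk, and the drop in CPU usage is simply the process waiting for reads and writes to complete.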

    Read the article

  • Reverse proxy for mailserver (SMTP + HTTP for web client)

    - by ba
    I'm looking at doing some reverse proxy work for a mail server with a corresponding web client. Both servers run on the same machine; this is not a server with a high load. :) The solution I've discussed with friends is to keep the mail server/web client on our internal network, and then put a reverse proxy in the DMZ to service both SMTP and web-client HTTP traffic to the mail server on the internal network. From what I understand this is the recommended secure solution? So far, for the SMTP proxy part I've thought of using Postfix, which will receive mail, apply Spamhaus lookups and similar anti-spam measures, and, if it all checks out, pass the mail to the mail server on the inside. The mail server on the inside will send all outgoing mail to the proxy, which will then send it out to the Internet. For the web client I'm not sure exactly which software I should run on the proxy machine. I've been thinking about Squid - but that's basically based on the fact that I know Squid is an HTTP proxy. The web client data will be sent out over SSL. Reading around here on Server Fault I've seen other people using Apache with mod_proxy + mod_security for similar situations. Am I thinking correctly for this solution? What software would you use, and with which modules? Thanks in advance for the help! :)
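    For the web-client leg, a minimal Apache reverse-proxy sketch on the DMZ box (Debian/Ubuntu-style paths; the hostname mail.example.com, the internal address 10.0.0.10 and the certificate paths are all assumptions):

        sudo a2enmod proxy proxy_http ssl
        sudo tee /etc/apache2/sites-available/webmail-proxy.conf >/dev/null <<'EOF'
        <VirtualHost *:443>
            ServerName mail.example.com
            SSLEngine on
            SSLCertificateFile    /etc/ssl/certs/mail.example.com.pem
            SSLCertificateKeyFile /etc/ssl/private/mail.example.com.key
            ProxyPreserveHost On
            ProxyPass        / http://10.0.0.10/
            ProxyPassReverse / http://10.0.0.10/
        </VirtualHost>
        EOF
        sudo a2ensite webmail-proxy
        sudo service apache2 reload

    Squid can also run as a reverse proxy (accelerator mode), but Apache with mod_proxy (plus mod_security if wanted) or nginx is the more common choice for a single backend like this; the Postfix relay on the same DMZ host handles the SMTP leg separately.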

    Read the article

  • Are there any tools to migrate your files, applications, and settings to a new Windows computer?

    - by calbar
    I've decided to upgrade my laptop on a regular basis and one of my main concerns is recreating my entire Windows 7 environment every time I do this. I'm talking toolbar positions, login settings, start menu items, applications and all their customizations... everything but my drivers. It literally takes weeks to fully recreate my working environment, not to mention the risk of user error or just simply forgetting "how I liked it." I'm assuming I won't find something as painless as Apple's Migration Assistant for Windows, but maybe there's something out there that can at least package up your apps and their settings? Bonus points if you can point it to your personal files, too - whatever's the quickest way to get from one machine to the next. I intend to install Windows fresh to remove bloatware on every machine that I buy, then selectively install the drivers I need. Something that accommodates loading my old apps into this newly prepared environment would be ideal. One random point of concern is in regard to application settings that refer to old hardware. I'm not sure if there's anything that can be done about this. If you have any thoughts, feel free to share. Thanks for your help!

    Read the article

  • Open an X application going through many hoops (SSH, vpn etc)

    - by ??O?????
    The players:

        HOME - my home computer, running Linux with an X server running.
        SITE - a remote site, to which I can connect over the internet using a VPN.
        MIDDLE - a Linux computer at the remote site, to which I can connect with ssh -X and nicely have X clients displaying on my local server.
        ONYX - a very old Irix machine (an Onyx) at the remote site, which has no SSH server (therefore I can't ssh -X to it), only an ssh client.

    Purpose: I need to run an X11 application on the ONYX machine and see the GUI on HOME. I think I'm stumbling over xauth issues.

    So far, the current situation is:

        HOME connects to SITE.
        A vncserver starts on MIDDLE:7.
        vncviewer on HOME connects to the vncserver on MIDDLE.
        ONYX starts a forwarding ssh session to MIDDLE: ssh -TfN -L 6007:127.0.0.1:6007 MIDDLE
        DISPLAY=localhost:7 xclient on ONYX fails with: Xlib: connection to "127.0.0.1:7.0" refused by server

    I do know that the forwarding (6007:127.0.0.1:6007) succeeds. A previous attempt was:

        HOME connects to SITE.
        HOME connects to MIDDLE: ssh -X MIDDLE (xclock displays on HOME, DISPLAY is 127.0.0.1:10)
        ONYX starts an SSH tunnel to MIDDLE: ssh -TfN -L 6010:127.0.0.1:6010 MIDDLE
        DISPLAY=127.0.0.1:10 xclient fails with: X connection to 127.0.0.1:10.0 broken (explicit kill or server shutdown), while an error pops up in the MIDDLE session: X11 connection rejected because of wrong authentication.

    Despair: how can I achieve my purpose?
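    The "wrong authentication" error in the second attempt suggests the tunnel itself is fine and only the X magic cookie is missing on ONYX. A hedged sketch of copying it by hand (display numbers taken from the question; the cookie value is a placeholder):

        # on MIDDLE, inside the "ssh -X MIDDLE" session where DISPLAY=127.0.0.1:10
        xauth list            # note the MIT-MAGIC-COOKIE-1 value for display :10

        # on ONYX, register the same cookie under the display name it will use
        xauth add 127.0.0.1:10 MIT-MAGIC-COOKIE-1 <cookie-value-from-MIDDLE>

        # then, with the 6010 tunnel from ONYX to MIDDLE already up
        DISPLAY=127.0.0.1:10 xclock

    The ssh -X session on MIDDLE has to stay open, since its X forwarding is what ultimately carries the connection back to HOME.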

    Read the article

  • Simple Distributed Disconnected way to sync a directory

    - by Rory
    I want to start regularly backing up the home directory on my Ubuntu laptop, machine X. Suppose I have access to 2 different remote (Linux) servers that I can back up to, machines A and B. Machine X will be the master, and should be synced to A and B. I could just regularly run rsync from X to A and then from X to B; that's all I need. However, I'm curious whether there's a more bandwidth-efficient, and hence faster, way to do it. X is going to be on residential-style broadband, and since I don't want to soak up the bandwidth, I would limit the transfer rate from X. A and B will be on all the time, but X will not be, so I'd also like to reduce the amount of time X spends transferring, potentially letting A and B spend more time transferring between themselves. Also, X won't be connected all the time. What's the best way to do this? rsync from X to A, then from A to B? Timing that right could be troublesome. I don't want to keep old files around, so if I were to rsync I'd use the --del option. Could that mean something gets transferred from A to B, then deleted from B, then transferred from A to B again? That's suboptimal. I know there are fancy distributed filesystems like Gluster, but I think that's overkill in this case, and it might not fit with the disconnected nature.
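    A hedged baseline for the simple approach - paths, user names and the 100 KB/s cap are assumptions, and the second hop requires that A can ssh to B:

        # from X: capped, incremental push to each server
        rsync -az --delete --bwlimit=100 ~/ rory@A:/backups/x-home/
        rsync -az --delete --bwlimit=100 ~/ rory@B:/backups/x-home/

        # alternative: X uploads once, then A forwards to B on its own schedule
        rsync -az --delete --bwlimit=100 ~/ rory@A:/backups/x-home/
        ssh rory@A 'rsync -az --delete /backups/x-home/ rory@B:/backups/x-home/'

    With the forwarding variant the delete-then-retransfer worry mostly goes away, because B is only ever synced from A's current copy, not from two sources that may disagree.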

    Read the article

  • 'The RPC server is unavailable' or 'Access is denied' error when using Remote Desktop Services Manager on Windows 7 (but mstsc.exe works!)

    - by tbone
    I am trying to connect to a Windows XP workstation from a Windows 7 Ultimate workstation using Remote Desktop Services Manager. I am able to open a Remote Desktop (mstsc.exe) session from the Win7 machine to the WinXP machine with no problem at all. When running the Remote Desktops admin tool (tsmmc.msc) on a Windows XP box, I can also connect with no problem. However, when I use the new Remote Desktop Services Manager on Windows 7 and try to connect, I get the error: "The RPC server is unavailable". What could cause this? Has there been some fundamental change in Remote Desktop Services Manager - does it connect in a different way somehow?

    Update #1: I turned off the firewall on the Windows XP box and the "The RPC server is unavailable" error went away, so RDSM seems to be using an entirely different port/connection/service than mstsc.exe or the old Remote Desktops admin tool. Now, after disabling the firewall, I get a new error: "Access is denied". After some googling I found articles discussing this; basically, the error is very misleading - the actual problem is that if either side of the connection has dual monitors and they are not both Win7 Ultimate, you cannot connect using Remote Desktop Services Manager. The reason is that by default it uses the /multimon switch, and this switch requires a certain level of Windows license - and there seems to be no way of changing this default (if anyone knows of a way, please post an answer or comment!). Nice going, Microsoft.

    http://social.technet.microsoft.com/Forums/en-US/windowsserver2008r2rds/thread/4d06278f-e0f4-4f8e-a8e1-3697ee967ef4
    http://www.experts-exchange.com/OS/Microsoft_Operating_Systems/Windows/Windows_7/Q_26225743.html

    Read the article

  • SQL - an error occurred during the pre-login handshake

    - by Rivka
    Until yesterday evening, I was able to connect to my server from my local machine. Now I get the following error:

        A connection was successfully established with the server, but then an error occurred during the pre-login handshake. (provider: SSL Provider, error: 0 - The wait operation timed out.) (.Net SqlClient Data Provider)

    Note that I can log on to the actual server with no problem. Yesterday I installed IIS on my machine and set up a site using my IP address - I don't know if this has anything to do with it. I did come across this article and followed the steps, but it didn't seem to help: http://www.escapekeys.com/blog/index.cfm/2011/1/26/Microsoft-SQL-Server-Error-64-A-connection-was-successfully-established-with-the-server I also went through the following article, changed TCP/IP settings and restarted, but nothing: http://blog.sqlauthority.com/2009/05/21/sql-server-fix-error-provider-named-pipes-provider-error-40-could-not-open-a-connection-to-sql-server-microsoft-sql-server-error/ I started trying suggestions from the comments too, but stopped when I realized I might be messing things up more. So, why is this happening, and how can I fix it?

    Read the article

  • Synchronize Dreamweaver over an SSH tunnel using an SFTP connection

    - by Aeo
    Maybe... just maybe... I'm asking too much here. Maybe I'm even barking up the wrong tree. I'm looking to have Dreamweaver establish an SSH tunnel to one machine, and then use that connection to synchronize a site that is on another machine entirely. Now for some details: we've got two connections here at work - our office connection for day-to-day business, and a fancy connection hosting our web servers upstairs. For the most part they've been mutually exclusive until recently. We had been establishing an SFTP connection to synchronize our websites by going out over the office connection to the web and coming back in over the fancy connection to our servers upstairs. Recently-ish, we established a LAN connection to one of our servers, which makes a pleasant change in VNC connection quality. Thanks to Vinagre, this makes it really easy to connect to any of our servers over this LAN connection via an SSH tunnel for VNC. However, in spite of that new LAN connection, we still synchronize over the Internet - out the office connection and in on the fancy one upstairs. I'm looking to change this: I'd like to get Dreamweaver to first tunnel over our LAN connection to the servers, and then go from there to whatever connection it needs to. Am I asking too much? The current setup: Dreamweaver is installed on Windows XP, which is running within VirtualBox on top of Ubuntu 10.10. The network connection for VirtualBox is currently made in NAT mode, but could easily be switched to a bridged connection should the need arise. The LAN connection is to 1 of 5 servers running CentOS 5.
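    A hedged sketch of the plumbing, independent of Dreamweaver itself - hostnames and the local port 2222 are assumptions:

        # from the Windows XP VM (e.g. with Cygwin/OpenSSH or plink), forward a local
        # port through the LAN-reachable server to the far web server's SSH/SFTP port
        ssh -N -L 2222:far-web-server:22 user@lan-reachable-server

    Dreamweaver's site definition would then point its SFTP connection at 127.0.0.1 on port 2222 (assuming the Dreamweaver version in use lets you change the SFTP port); the tunnel carries the traffic over the LAN link instead of out over the office connection.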

    Read the article

  • Why is my rsync so slow compared to pure cp or even scp?

    - by nfm
    I'm transferring files from Linux to Windows 7 via a mounted share (the share is mounted from Windows on the Linux side). I'm copying lots of data (nearly a TB) from the old machine to the new one within my LAN, and I'm unfortunate enough to only have 100MBit. Naturally I blindly used rsync, but after a day I started wondering why it felt so slow. Enabling the progress meter showed me a transfer rate of about 2MBit/s. So I took a reasonably big file (800MB) and timed the transfer:

        cp      : 05:33
        scp (*) : 06:33
        rsync   : 21:51

        (*) scp via localhost to the same Linux machine directly onto the share; completely useless, but it provided a progress meter.

    The tests were as simple as (cp|scp|rsync) <source> <destination>, with no special arguments except host/port for scp. I even tried the -W switch for rsync but cancelled after ten minutes. rsync is 3.0.3 running on Lenny. Being able to interrupt the copy process at any time and resume it is what led me to rsync in the first place, but now I think I seriously need to reconsider this requirement. How is such a big difference possible?
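    A hedged set of things to test - these are standard rsync options, not a confirmed explanation for the 4x gap: when both paths look local (a CIFS mount counts as local), rsync still creates a temporary file and renames it on the destination, and its smaller read/write pattern can interact badly with some network filesystems.

        # write directly into the destination file instead of a temp file + rename
        rsync -av --inplace --whole-file --progress /source/ /mnt/win-share/dest/

        # or bypass the CIFS mount entirely and talk to an rsync daemon on the
        # Windows box (cwRsync or DeltaCopy can provide one); "backup" is a
        # placeholder module name
        rsync -av --partial --progress /source/ rsync://windows-box/backup/

    The daemon route keeps rsync's resumability without paying the mounted-share penalty on every write.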

    Read the article

  • Figuring out which PC part is faulty

    - by Davy8
    I have an odd scenario and I'm having trouble figuring out which component is faulty. First of all, the video doesn't work; the monitor says it's not getting a signal. The monitor isn't faulty (it works on another computer), so the first suspect was the video card. However, two things make me think it's not the video card (I don't have another machine with PCIe around to test definitively). First, the GPU fan is spinning, so the card is getting power. Second, I tried putting in an older PCI video card that is known to be working (pulled out of another working machine) and there's still no video. Normally if it's not the video card I'd suspect the motherboard, but everything on the board appears to be getting power, so I'm not sure. The case apparently doesn't have a system speaker, so I can't hear any of the diagnostic beeps either. I'm also not sure whether a faulty CPU would cause no image at all. The parts are brand new, so something's going to get RMA'd, but I'm not sure which component is to blame. (Only slightly related, but I also accidentally put too much thermal paste on the CPU. The fan/heatsink instructions said to use the whole tube, which seemed like a lot compared to previous experience; as I started squeezing I knew it was definitely too much and stopped at about 1/3, but against my better judgement I didn't wipe any off. I'm not sure whether that would cause problems other than not cooling as effectively as it should.)

    Read the article

  • Change A Password

    - by Thomas
    I have a non-domain machine that I use regularly with our company's domain resources over VPN. I switched to Windows 8 (fresh install), and the "Change a password" option went away from the Ctrl-Alt-Del screen. I can't seem to google anything about this subject, or find a way to access that password-change dialog. I tried running the .reg file from http://www.sevenforums.com/tutorials/63014-ctrl-alt-del-screen-add-remove-change-password.html with no luck. I also tried to disable "Remove Change Password" via gpedit.msc. I could change the password from my domain laptop, but I like to do it on this machine because it updates all my saved copies of those credentials. My local account is tied to my Hotmail account, if that matters.

    Updates: this is an administrator account. I apologize for stating this was an upgrade; it was a fresh install to a different drive, 64-bit Pro.

    Bounty's almost up: if someone can just confirm that "Change a password..." should or should not be present on a non-domain, Live-tied Win8 install, I'll be satisfied that I can or cannot expect to fix it.

    Read the article

  • Triple-Boot + 4 partition Limit

    - by dsimcha
    I just bought a new hard drive so that I could convert my XP-only machine into an XP/Ubuntu/Windows 7 triple-boot machine. Since the drive is absurdly huge (1 TB) I wouldn't mind throwing ReactOS into the mix, too. I just found out that master boot records are limited to 4 entries, meaning 4 primary partitions. I had Windows XP set up on my old drive as a boot partition, a program-files partition and a media partition. Since I really didn't want to install XP from scratch, I cloned this setup onto the new drive. This leaves me one MBR partition entry for installing Windows 7, Ubuntu and ReactOS. I'd like to avoid having to install XP from scratch like the plague, partly because it's supposed to be a safety net in case things go wrong with my other OSes, and partly because I've invested a lot of time getting it set up exactly the way I like it. Here are the options I've considered and why I don't like them:

        Install Windows 7 on my media partition. This would work, but I prefer to keep the media partition completely separate from any OS, so that I can reformat an OS partition without affecting it.
        Use Wubi or something similar to install Ubuntu in the same partition as something else. Again, this is brittle.
        Move all my media to a logical drive on an extended partition and create another logical drive on that extended partition for Ubuntu. The problem here is that extended partitions are rather brittle - if you nuke one, it renders the rest useless.
        Put the old drive back in my computer and run XP off it, using the new drive for the other OSes. The problem here is that the old drive is slower, uses extra power, generates extra heat, etc.

    Can anyone suggest any other possibilities that I may have overlooked?

    Read the article

  • Xen domU mem-set issue

    - by Casper Langemeijer
    I'm running into a problem on my Xen 4.0.1 server (Debian Squeeze). My host has 32G of memory and Domain-0 has 2048M assigned to it (scaled down with xm mem-set Domain-0 2048); top in Domain-0 confirms this. I created a virtual machine config file (using xen-tools) with the following options:

        memory = '512'
        maxmem = '2048'

    Both host and guest machines are running the standard 2.6.32-5-xen-amd64 Debian kernel. 'xm create' creates a virtual machine with 512MB of memory, as expected. Then 'xm mem-set domU 1024' will not expand the memory to 1024MB. Running 'xm mem-set domU 400' does set the memory to about 400MB, and 'xm mem-set domU 1024' afterwards only expands the memory back to 512MB. Based on this you would say that xm ignores maxmem and silently sets it to 512, but in the output of xm top the MAXMEM column reads 2G while the MEM column will not go over 512M. The output of xm list tells another story: it shows 1024 when I run 'xm mem-set domU 1024'. I've googled my way all around the internet for this issue and found that most people don't scale back Domain-0. I know I've seen a bug report about the issue I'm experiencing, but I can't find it anymore. Does anyone see what I'm doing wrong here? Hmm... I just upgraded my kernel to the one provided by debian-backports, and the issue has gone.
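    For anyone hitting the same symptom on a kernel where an upgrade isn't an option, a hedged thing to check is the running domain's ceiling as the toolstack sees it (the domain name is a placeholder):

        # raise the live maximum first, then balloon up to the new target
        xm mem-max domU 2048
        xm mem-set domU 1024

        # compare what the toolstack reports against what the guest kernel sees
        xm list domU
        # inside the guest (xm console domU or ssh): free -m

    If xm list reports the new value but free inside the guest never moves, the balloon driver in the guest kernel is the limiting factor, which matches the fact that a kernel upgrade made the problem disappear here.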

    Read the article

  • Rails 3 + Nginx + Passenger -- Routing index

    - by Bijan
    I have no index.html file in my public folder; my Rails routes file handles the root route, and it works fine when I run 'rails server' on my local machine. I'm trying to deploy the app and have Passenger and nginx running, but on the production server nginx just tries to serve a static file when I access the site. Here's my nginx conf:

        worker_processes  1;
        #pid  logs/nginx.pid;

        events {
            worker_connections  1024;
        }

        http {
            passenger_root  /usr/lib/ruby/gems/1.9.1/gems/passenger-3.0.2;
            passenger_ruby  /usr/bin/ruby;

            include       mime.types;
            default_type  application/octet-stream;
            sendfile      on;
            keepalive_timeout  65;

            server {
                listen       80;
                server_name  mmjconsult.com;
                root         /www/mmjs/public;
                access_log   logs/host.access.log;
                passenger_enabled  on;
            }
        }

    Thank you for any help. I really appreciate it.

    Read the article

  • SSH connection times out unless I tunnel in from a different server - why?

    - by rm-vanda
    OK, so this just started last week. Whenever we try to connect to our server via SSH (we use SFTP as well), the connection times out. However, if you SSH to any other server first and then SSH into the machine from there, it works flawlessly. The mind-blowing thing is that sometimes a direct SSH connection will succeed: moments ago I tried it from another machine, and then from my own, and it worked - only to time out on the next attempt. Last week simply restarting the SSH daemon helped, but this week no such luck. I even went in and changed /etc/hosts.allow to "ALL : ALL", and /etc/hosts.deny is blank. The firewall config hasn't changed, but I even disabled the firewall to see if that would work - it did, for a moment, before cutting off again (ufw is set to "ALLOW", not "LIMIT"). When I try SSH'ing in from my phone it works fine, so the problem seems to be with our ISP/router/gateway. However, I see no log entries in the router/gateway saying it is blocking our connections, and that wouldn't explain why we can SSH from our network into any other server except this one. I truly appreciate any insight that anyone may have on this matter.

    Read the article

  • When to upgrade a server to more cores, versus more processors, versus an additional server?

    - by gkdsp
    The server hosting market is divided into single-, dual-, quad-processor machines and so on, where each processor has several cores, or CPUs. My company will offer a Linux-based web application that relies on an Apache web server and a middle tier for business logic. The middle tier is used to crunch math and return results to a client, and many clients may access the application simultaneously. The company will start with one processor having 4 cores. I'm trying to understand how the app uses the cores and then how to scale the application as business grows, in terms of servers/processors/cores. For example, I'd assume initially one core would be used for Apache and the other 3 used to process clients' math-crunching requests.

    Question 1: Does that mean that with the 3 cores available I can handle 3 separate client requests simultaneously (one per core)? I mean, except for the shared RAM, is this effectively like having 3 individual machines from the point of view of processing client requests simultaneously? Or can only one client's request be processed at any one time, with that request divided across up to 3 cores depending on whether the math-crunching process can take advantage of multithreading (so the number of cores affects how fast any one client request completes)? I'm confused about what the cores mean to the application here.

    Question 2: As the business grows and more client requests need to be processed, should the server be upgraded to (A) a new machine with more cores, (B) a new machine with two processors of 4 cores each, or (C) should I keep the original server and add another server with a single processor? Which route provides the most efficient way to scale the application, in terms of client requests processed per time interval? Is the choice limited by, for example, RAM (when you need more RAM than one box can handle, it's time to add another server), or by something else?

    Question 3: Is the total number of client requests processed simultaneously equal to the number of cores times the number of servers (minus the one core for Apache)?

    Read the article

  • Uninstall GlassFish and Metro completely

    - by user775829
    I wanted to upgrade my GlassFish server from 2.1 to 3.1.1 on a Linux machine, so I downloaded the .ZIP package. However, while uninstalling GlassFish v2.1 I did not find an uninstall.sh file in the "bin" directory. These are the steps I took:

        I removed the GlassFish folder (rm -rf ...). At the end of the removal it told me it could not remove 2 files used by Metro; I can't recollect the file names, but I deleted that folder manually.
        I made the mistake of not uninstalling Metro first. I uninstalled Metro completely after that, but it seemed pointless (it did uninstall successfully :P ).
        I transferred the GlassFish 3.1.1 ZIP file, unzipped it and configured it.

    Here are the problems I am now facing:

        I cannot deploy any of my WAR files. I get errors like "Error creating bean, Instantiation of bean failed", etc. (The same WAR file deploys successfully on another Linux machine.)
        When I try installing Metro v2.1 separately, either the admin console does not appear or it times out while starting the domain. The log file of the domain says the domain started successfully and the process is created, but the asadmin command takes what feels like forever and times out without showing "Domain Started Successfully".
        There is no uninstall.sh in the GlassFish v3.1.1 bin directory.

    How do I completely uninstall GlassFish v3.1.1 and Metro 2.1? Which files will I have to remove manually?

    Read the article
