Search Results

Search found 42297 results on 1692 pages for 'run'.

  • Debugging IO limitation

    - by Martin F
    I have a Fedora box with some severe IO limitations which I have no idea how to debug. The server has an Areca Technology Corp. ARC-1130 12-Port PCI-X to SATA RAID Controller with 12 7200 RPM 1.5 TB disks and a Marvell Technology Group Ltd. 88E8050 PCI-E ASF Gigabit Ethernet Controller. uname -a output: 2.6.32.11-99.fc12.x86_64 #1 SMP Mon Apr 5 19:59:38 UTC 2010 x86_64 x86_64 x86_64 GNU/Linux. The server is a file server running Nginx with the stub status module enabled, so I can see the current number of connections.

    The problem presents itself when I have a high number of simultaneous connections in a writing state. Usually around 350; at this very moment it's at 590 and the server is almost unusable and stuck at 230mbit/s. If I run top and hit 1 to see per-core CPU usage, all 4 cores are at around 99% IO wait; if I run iotop, the nginx workers are the only processes producing any read load, currently at around 25MB/s. I have each of the workers bound to their own core.

    Initially I figured it was just the disks being bugged, but I've run fsck and smartmontools checks and found no errors. I also ran an iozone test, which you can see the result of here: http://www.pastie.org/951667.txt?key=fimcvljulnuqy2dcdxa

    Additionally, when the number of connections is low I have no problem getting good speed; if I wget over the local network it easily hits 60MB/sec. Right now I tried putting a file in /dev/shm, symlinked a file from the public dir to it, and used wget over the local network - and only got 50KB/s. Also, if I try to cp /dev/shm/test /root/test it quickly copies around 740MB and then slows down heavily, again with iotop reporting 99% iowait.

    I'm not really sure how to go about figuring out what the problems are. It could be a natural disk limitation, but then the file from /dev/shm ought to transfer quickly, so it seems there's a network limit - yet that's fine when there aren't many connections. Perhaps it's a TCP stack problem, but I really have no idea how to check that. Any suggestions on how to proceed with debugging would be very welcome. If additional information is required then let me know and I'll try to get it. Thanks.
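
    A few commands that may help localize the bottleneck before guessing further (the device name is an assumption; substitute the Areca array's device):

        # per-device throughput, queue size and await times, refreshed every second
        iostat -xk 1

        # processes stuck in uninterruptible (IO) sleep - state D
        ps -eo state,pid,cmd | awk '$1 == "D"'

        # readahead on the array device; many-concurrent-streams workloads
        # on RAID often behave very differently depending on this
        blockdev --getra /dev/sda

    High await with low throughput in iostat would point at seek-bound disks (hundreds of concurrent readers turn sequential sends into random IO), while clean iostat numbers under load would point back at the network path.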

  • Ubuntu 9.10 x86_32 with PAE causes graphics issues, mainly slowness

    - by widgisoft
    I've tried the 64-bit version and found I was constantly hitting a brick wall trying to get 32-bit stuff to run; I'd previously used PAE on 9.04 without any issues, so I figured I'd give it a shot. However, on 9.10 it seems PAE, or the process of enabling PAE, breaks the nvidia drivers/module somewhat, as performance is terrible; I can't even enable desktop effects and there's lots of artifacting on random controls. Disabling the PAE image and rebooting fixes the issue, however I'm then stuck with "only" 2.7GB of RAM and unable to use the full 8GB that's installed. Is there something special I need to do when using PAE and nVidia drivers, or should I be using 64-bit and just figure out how to force-run 32-bit packages? :-p

  • DB2 LUW tools for diagnosing issues when the stuff hits the fan

    - by Ichorus
    I am no DBA and very much a novice when it comes to DB2, so even 'obvious' answers are welcome to this question: I love db2top, but sometimes I cannot get it to run if the load average is high on a DB2 LUW. This morning I was looking at an issue where load average shot up suddenly; I could not get db2top to come up and I needed to find out what was happening. What can I do to find out who is doing what in this situation? I suspected a horribly bad query was being run by someone... is there a good way to find information on poorly performing SQL on the fly in that type of situation? Are there any good ways to collect good, actionable stats on who/where bad SQL is coming from in the event that load average is so high? I know about db2pd, but I am not sure how to use it effectively, and slogging through tens of thousands of lines of raw data is probably not the most efficient way to get at the heart of a problem. Any tips or resources?
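
    A couple of lightweight starting points, hedged since I can't verify them against every DB2 version (the database name SAMPLE is a placeholder): db2pd reads memory sets directly rather than connecting, so it tends to still answer when the box is loaded, and its output can be scoped instead of dumped wholesale.

        # who is connected and what they are doing
        db2pd -db SAMPLE -applications

        # the dynamic SQL cache: statement text with execution counts/times,
        # useful for spotting the horribly bad query after the fact
        db2pd -db SAMPLE -dynamic

        # the plain CLP alternative when db2pd output is overwhelming
        db2 list applications show detail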

  • Can 'screen' grab an existing process and tie itself to it?

    - by warren
    Scenario:

    - Started a process that's going to take "a while" to complete, outside of screen
    - Need to leave the terminal / network hiccups
    - Process lost

    Would be nice if:

    - Started a process outside of screen
    - Realize error
    - Run screen <magic-goes-here> and it grabs the active process to itself

    From the man pages and --help info, I don't see a way this can be done. Is this possible directly with screen? If not, is it possible to change the owning shell of a process, so that the bash (or other shell of your choosing) instance inside screen can have a command run which will change the parent shell of the initial process to itself from the originator?
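
    screen itself can't adopt a running process, but the reparenting half of the question is roughly what the reptyr tool does (a separate utility, not part of screen). A sketch of the usual sequence, assuming reptyr is installed and ptrace of the target is permitted:

        # in the original shell: suspend the process (press Ctrl-Z),
        # background it, and remove it from the shell's job table
        # so logout won't HUP it
        bg
        disown %1

        # then, inside a new screen session, steal the process's terminal
        screen
        reptyr 12345    # 12345 = PID of the long-running process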

  • Flash player, HD videos and games are choppy

    - by Aimad Majdou
    I have a problem with Flash Player. HD videos from Youtube or Vimeo and flash games do not play smoothly. I'm using Flash Player 11, Windows 7 SP1, and my graphics card is an Intel GMA 4500. Device Manager shows me that all drivers are installed on my computer, so I don't have any problems with drivers. When I run Google Chrome, Resource Monitor shows me 15% ~ 40% CPU usage and 40% used physical memory, but when I watch a video on Youtube or play a flash game, Resource Monitor shows 70% - 90% CPU usage. Also, when I play an HD video (frame width: 1920, frame height: 1080) on my computer, CPU usage goes to 80% ~ 100%. Before I reinstalled Windows 7, HD videos and flash games played smoothly; I didn't have any problems with them! I hope all this information is enough to answer my question.

  • Is there a way to prevent output from backgrounded tasks from covering the command line in a shell?

    - by Chris Pick
    I would like to be able to run task(s) in the background of a shell and not have their output to stdout or stderr cover the command line at the bottom. Frequently I need to run other commands to interact with the background processes, and would like to do so from the same shell, without having to open up another terminal or use a multiplexer like screen to split the terminal. Ideally there would be some setting that I just don't know about (I commonly use bash or ksh), but a new or different shell or a script would be fine by me. I'm open to any suggestions and appreciate any help, thanks.
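
    Two things worth knowing here, offered as a sketch rather than a definitive answer. First, stty tostop makes the terminal suspend any background job the moment it tries to write to it (the job gets SIGTTOU), which keeps the command line clean at the cost of the job pausing. Second, the usual convention: capture the job's output in a file up front and watch it only when you want to.

        # option 1: background jobs that write to the tty get stopped, not printed
        stty tostop

        # option 2: capture stdout+stderr up front, inspect when convenient
        long_running_task > /tmp/task.log 2>&1 &
        tail -f /tmp/task.log    # Ctrl-C stops the watching, not the task

    Neither helps for a job already started without redirection; there the options get invasive, which is probably worse than a second terminal.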

  • How can I get past a BSOD 0x7B when booting from a VHD?

    - by Dan
    I thought I'd be clever and use disk2vhd at the end of my contract to back up my machine, so I could easily restore it when I started my new contract, no matter what the buffoons had done with it in the meantime. I didn't count on them losing it. I'm trying to boot this new machine from the VHD. I get the Windows logo and then a 0x7B bluescreen error, something to do with the disk, and it won't boot even in safe mode. The cure apparently is to run sysprep, but I can't run that unless I can boot into the VHD, so... I can mount the disk; is there a way to modify it?
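
    STOP 0x7B usually means the boot-critical storage driver for the new machine's disk controller is disabled inside the image, and that can be flipped without booting it. A hedged sketch, assuming the VHD is attached with its Windows volume at V: and the controller is a standard IDE one (which service to enable - intelide, pciide, msahci, etc. - depends on the new hardware):

        rem attach the VHD first (diskpart: select vdisk file=... / attach vdisk)
        reg load HKLM\VHDSYS V:\Windows\System32\config\SYSTEM
        reg add "HKLM\VHDSYS\ControlSet001\Services\intelide" /v Start /t REG_DWORD /d 0 /f
        reg unload HKLM\VHDSYS

    Start=0 marks the driver as boot-start; after unloading the hive, detach the VHD and retry the boot.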

  • Problems getting Squirrelmail and passenger working on apache

    - by Kenneth
    I'm trying to get a setup where I run Squirrelmail and Passenger on the same Apache server, with one URL pointing to Squirrelmail and everything else handled by Passenger. I've gotten as far as having both Squirrelmail and Passenger run fine by themselves, but when Passenger is running it handles all URLs. So far I've tried using Alias and Redirect to point a webmail/ URL to Squirrelmail's directory, but that does not work. Here is my httpd.conf file:

        <VirtualHost *:80>
            ServerName not.my.real.server.name
            DocumentRoot /var/www/sinatra/public

            # Does not work:
            #Redirect webmail/ /usr/share/squirrelmail/
            #<Directory /usr/share/squirrelmail>
            #    Require all granted
            #</Directory>

            <Directory /var/www/sinatra/public>
                Order allow,deny
                Allow from all
            </Directory>
        </VirtualHost>
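
    A minimal sketch of one approach that may work, assuming mod_passenger's PassengerEnabled directive is available: alias the URL to Squirrelmail's directory and switch Passenger off for just that subtree, so Passenger stops swallowing the request.

        Alias /webmail /usr/share/squirrelmail
        <Directory /usr/share/squirrelmail>
            # keep Passenger from routing /webmail to the app
            PassengerEnabled off
            Order allow,deny
            Allow from all
        </Directory>

    Note that Redirect maps URL paths to URLs, not to filesystem paths, which is likely why the Redirect attempt failed.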

  • How do I choose which Ethernet Adapter to bridge in VMPlayer

    - by Catherine MacInnes
    I am running vmplayer 3.1.0 on Ubuntu. The host machine has four ethernet adapters that are configured to run on four different subnets. I need to run four VMs, each with a single ethernet adapter bridged onto a specific one of the physical ethernet adapters. Does anyone know how to do this? Am I simply exceeding the capabilities of vmplayer and have to go to one of the other VMware products? If so, which one? Note that I have no need to create additional VMs; these are VMs that are being given to me by companies that want us to develop software for their products.
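
    A sketch of one possible route, hedged because Player's own UI doesn't expose it: if the Virtual Network Editor (vmware-netcfg, shipped with Workstation but reported to work against Player installs) is available, each custom vmnet (say vmnet2 through vmnet5) can be bridged to one specific physical NIC there, and each VM's .vmx then pointed at its vmnet:

        # in each VM's .vmx file (the vmnet number per VM is an assumption)
        ethernet0.present = "TRUE"
        ethernet0.connectionType = "custom"
        ethernet0.vnet = "/dev/vmnet2"

    The default "bridged" connection type always uses vmnet0, which is why the per-VM adapter choice isn't offered out of the box.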

  • How many cron jobs are too many?

    - by guitar-
    I have a couple of cron jobs for basic maintenance which aren't very resource-intensive. I also have custom task scheduling, which is just calling a .php file and passing information via GET (i.e. cronjob.php?param1=param ...). These can add up pretty quickly. They just call system commands and run external programs (Nmap is one of them), and they usually don't take long either. Anyway, can anyone tell me roughly at what point is it too many? I know it's hard to say, since it depends on what job is being run and how often, but at what point does the crontab program start "struggling"? Anyone have any idea? Thanks.

  • Please help, I am losing my data?

    - by 5YrsLaterDBA
    I was trying to remove MSDN Library - January 2002 by using Revo Uninstaller. I allowed it to delete those leftover registry entries (my fault, I should have been more careful and not done that). Now my machine is almost inaccessible: I cannot open Revo Uninstaller to recover, cannot open Windows Explorer, cannot run cmd from Run, and none of my executable shortcuts work anymore. I guess I have to reinstall everything, but how can I get my data off the machine? It is a Dell laptop with Windows XP. I am afraid that if I restart my machine, it will not come back anymore. Please help.

  • Can't deploy rails 4 app on Bluehost with Passenger 4 and nginx

    - by user2205763
    I am at Bluehost (dedicated server) trying to run a rails 4 app. I asked to have my server re-imaged, specifying that I do not want rails, ruby, or passenger installed automatically, as I wanted to install the latest versions myself using a version manager (Bluehost by default offers rails 2.3, ruby 1.8, and passenger 3, which won't work with my app). I installed ruby 1.9.3p327, rails 4.0.0, and passenger 4.0.5. I can verify this by typing "ruby -v", "rails -v", and "passenger -v" (also "gem -v"). I made sure to install these not as root, so that I don't get a 403 Forbidden error when trying to deploy the app. I installed passenger by typing "gem install passenger", and then installed the nginx passenger module (into "/nginx") with "passenger-install-nginx-module".

    I am trying to run my rails app on a subdomain, http://development.thegraduate.hk (I am using the subdomain to show my client progress on the website). In Bluehost I created that subdomain and had it point to "public_html/thegraduate". I then created a symlink from "rails_apps/thegraduate/public" to "public_html/thegraduate" and verified that the symlink exists.

    The problem is: when I go to http://development.thegraduate.hk, I get a directory listing. There is nothing resembling a rails app. I have not added a .htaccess file to /rails_apps/thegraduate/public, as that was never specified in the installation of passenger; it was meant to be 'install and go'. When I type "passenger-memory-status", I get 3 things:

    - Apache processes (7)
    - Nginx processes (0)
    - Passenger processes (0)

    So it appears that nginx and passenger are not running, and I can't figure out how to get them to run (I'm not looking to have it run as a standalone server). Here is my nginx.conf file (/nginx/conf/nginx.conf):

        #user  nobody;
        worker_processes  1;

        #error_log  logs/error.log;
        #error_log  logs/error.log  notice;
        #error_log  logs/error.log  info;

        #pid        logs/nginx.pid;

        events {
            worker_connections  1024;
        }

        http {
            passenger_root /home/thegrad4/.rbenv/versions/1.9.3-p327/lib/ruby/gems/1.9.1/gems/passenger-4.0.5;
            passenger_ruby /home/thegrad4/.rbenv/versions/1.9.3-p327/bin/ruby;

            include       mime.types;
            default_type  application/octet-stream;

            #log_format  main  '$remote_addr - $remote_user [$time_local] "$request" '
            #                  '$status $body_bytes_sent "$http_referer" '
            #                  '"$http_user_agent" "$http_x_forwarded_for"';

            #access_log  logs/access.log  main;

            sendfile        on;
            #tcp_nopush     on;

            #keepalive_timeout  0;
            keepalive_timeout  65;

            #gzip  on;

            server {
                listen       80;
                server_name  development.thegraduate.hk;
                root         ~/rails_apps/thegraduate/public;
                passenger_enabled on;

                #charset koi8-r;
                #access_log  logs/host.access.log  main;

                location / {
                    root   html;
                    index  index.html index.htm;
                }

                #error_page  404              /404.html;

                # redirect server error pages to the static page /50x.html
                #
                error_page   500 502 503 504  /50x.html;
                location = /50x.html {
                    root   html;
                }

                # proxy the PHP scripts to Apache listening on 127.0.0.1:80
                #
                #location ~ \.php$ {
                #    proxy_pass   http://127.0.0.1;
                #}

                # pass the PHP scripts to FastCGI server listening on 127.0.0.1:9000
                #
                #location ~ \.php$ {
                #    root           html;
                #    fastcgi_pass   127.0.0.1:9000;
                #    fastcgi_index  index.php;
                #    fastcgi_param  SCRIPT_FILENAME  /scripts$fastcgi_script_name;
                #    include        fastcgi_params;
                #}

                # deny access to .htaccess files, if Apache's document root
                # concurs with nginx's one
                #
                #location ~ /\.ht {
                #    deny  all;
                #}
            }

            # another virtual host using mix of IP-, name-, and port-based configuration
            #
            #server {
            #    listen       8000;
            #    listen       somename:8080;
            #    server_name  somename  alias  another.alias;
            #    location / {
            #        root   html;
            #        index  index.html index.htm;
            #    }
            #}

            # HTTPS server
            #
            #server {
            #    listen       443;
            #    server_name  localhost;
            #    ssl on;
            #    ssl_certificate      cert.pem;
            #    ssl_certificate_key  cert.key;
            #    ssl_session_timeout  5m;
            #    ssl_protocols  SSLv2 SSLv3 TLSv1;
            #    ssl_ciphers  HIGH:!aNULL:!MD5;
            #    ssl_prefer_server_ciphers  on;
            #    location / {
            #        root   html;
            #        index  index.html index.htm;
            #    }
            #}
        }

    I don't get any errors, just the directory listing. I've tried to be as detailed as possible. Any help on this issue would be greatly appreciated, as I've been stumped for the past 3 days. Scouring the web has not helped, as my issue seems to be specific to me. Thanks so much. If there are any potential details I forgot to specify, just ask.

    ADDITIONAL INFORMATION: Going to development.thegraduate.hk/public/ will correctly display the index.html page in /rails_apps/thegraduate/public. However, changing root in the routes.rb file to "root = 'home#index'" does nothing.
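
    Two things in that config look suspect, so here is a hedged sketch of a corrected server block (the absolute home path is an assumption inferred from the passenger_root path above): nginx does not expand ~ in the root directive, and the location / { root html; ... } block overrides Passenger for every request.

        server {
            listen       80;
            server_name  development.thegraduate.hk;
            # absolute path - nginx does not understand ~
            root         /home/thegrad4/rails_apps/thegraduate/public;
            passenger_enabled on;
            # no "location /" block here: it would shadow Passenger
        }

    Also, since passenger-memory-status shows 7 Apache processes and 0 Nginx processes, the directory listing is most likely being served by Bluehost's stock Apache on port 80; the nginx built by passenger-install-nginx-module still has to be started (presumably /nginx/sbin/nginx, given the install prefix) after Apache is stopped or one of the two is moved to another port.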

  • I want to install an MSI twice

    - by don.vince
    I have a peculiar wish to install an MSI twice on a machine. The purpose of the double install is to first install under the pre-production folder and run the deployment in a safe environment, prior to deploying in the production folder. We typically use separate machines to represent these different environments; however, in this case I need to use the same box. The two scenarios I get are as follows:

    - I've installed pre-production, I'm happy, I want to install production. I run the msi, and it asks whether I want to repair or remove the installation.
    - I have production installed and want to install the new version of the msi. It tells me I already have a version of the product installed and I must first un-install the current version.

    The first scenario isn't too bad, as we can at that point sensibly un-install and re-install under the production folder, but the second scenario is a pain, as we don't want to un-install the live production deployment. Is there a setting I can give to msiexec that will allow this? Is there a more suitable, different approach I could use?
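
    Windows Installer has a feature aimed at exactly this, hedged here because it only works if the package is authored with embedded instance transforms: multiple-instance installs. With an instance transform in the MSI (instance1.mst is a conventional embedded name; yours may differ), a second copy can be installed side by side:

        rem first, a normal install into pre-production
        msiexec /i app.msi INSTALLDIR="C:\PreProd\App"

        rem second copy as a new instance, into production
        msiexec /i app.msi MSINEWINSTANCE=1 TRANSFORMS=:instance1.mst INSTALLDIR="C:\Prod\App"

    If you control the MSI's build, adding instance transforms is a one-time authoring change; if not, the repair/remove prompt you're seeing is Windows Installer matching on ProductCode, and there's no msiexec switch that bypasses it for an unmodified package.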

  • Linux or Windows for a server?

    - by Matt
    I'm a Linux guy when it comes to (web) servers, for the following reasons:

    - Legally free
    - Fast software updates (unless you're running CentOS :)
    - Powerful CLI management of services
    - Easy to secure (in terms of users and groups)
    - Web server software is, well, built for Linux... Apache, PHP, Python, etc., are Linux programs that get ported to Windows - I'm 90% sure of this

    Unless the web server needed to run ASP, I wouldn't use Windows. My boss' IT friend is a Windows guy, though. He recently got a server set up in the office to run Microsoft Exchange and some other shit. What I'm asking is, if he wanted to start running websites on this thing, what would be good reasoning to convince him otherwise? He's not very bright in terms of IT and the IT friend is all Windows. So it's two against one here... What would you say to running a Windows web server?

  • Windows screen shots via command-line SSH session

    - by Geoff Fritz
    I've browsed the handful of "screen capture" queries here, but I was unable to find anything which addressed my specific need. I'm looking for a command-line tool that I can run via a remote SSH connection (by way of the cygwin sshd daemon). There are several to choose from, but the few I've tried (ImageMagick, nircmd, and MiniCap) all result in a blank screen. I assume that this is due to the remotely logged-in user not having a proper graphical console session running. The goal here is to automate screen capture and retrieval of the main system console (what one would see if they were looking at the physical monitor) via an ssh script from a Unix host:

        ssh user@windowshost "screencap --output /tmp/console.jpg"
        scp user@windowshost:/tmp/console.jpg /some/destdir

    Note that these must be done on demand, so polling a remote directory that has snapshots dumped periodically will not work. Bonus points for programs that are open source and have a portable install (so I don't need to RDP/VNC into the machine to run a graphical installer).
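
    A hedged sketch of one workaround, assuming Sysinternals PsExec and NirCmd are on the box and the SSH account has admin rights: the blank captures happen because the SSH session runs on an invisible desktop, so the capture has to be launched into the console session instead. The session id is 0 on XP/2003 and typically 1 on Vista and later.

        ssh user@windowshost 'psexec -accepteula -i 1 -s nircmd.exe savescreenshot c:\\temp\\console.png'
        scp user@windowshost:/cygdrive/c/temp/console.png /some/destdir

    PsExec isn't open source, so this only covers the on-demand part of the requirement.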

  • What are "Missing thread record" errors when running fsck -fy?

    - by Horace Ho
    There are some errors reported when I run Disk Utility and verify the root volume on my OS X MacBook. So I boot with Cmd-S into single-user mode and run /sbin/fsck -fy. The errors look like:

        ** Checking catalog file.
           Missing thread record (id = ...)
           Incorrect number of thread records
        ** Checking catalog hierarchy.
           Invalid volume file count (It should be ... instead of ...)
        ** Repairing Volume
           Missing directory record (id = ...)

    I'd like to know the cause of the above errors, so that hopefully I can be more careful in the future and prevent them from happening again. P.S. I am using an SSD, so I assume mechanical hard disk error is less likely. Thanks!

  • webserver running as nobody cannot resolve domain names

    - by jalal
    If I try to run the following through the web server:

        <?php echo file_get_contents("http://www.yahoo.com/index.html"); ?>

    I get a "php_network_getaddresses: getaddrinfo" error. If I run the same file from the shell with:

        php test.php

    then I get the expected file output. This indicates to me that the 'nobody' user, which the web server runs as, is not able to resolve the domain name, but the shell user can. Any ideas on how to fix this? Further info: CentOS 6, cPanel install, Apache, PHP running as DSO. BTW, I've tried disabling the firewall, to no effect.
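
    A short diagnostic sketch, assuming sudo is available (the test.php path is whatever was used above): reproduce the lookup as nobody and check that nobody can actually read the resolver configuration, since a common culprit is /etc/resolv.conf permissions.

        # run the same script as the web server's user
        sudo -u nobody php /path/to/test.php

        # nobody must be able to read the resolver config
        ls -l /etc/resolv.conf
        sudo -u nobody cat /etc/resolv.conf

    If the lookup works under sudo but not under Apache, the restriction is more likely inside Apache/PHP (e.g. a chroot or SELinux policy) than in the resolver itself.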

  • Any reason not to disable the Windows pagefile given enough physical RAM?

    - by Evgeny
    The question of disabling the Windows pagefile has already been discussed quite a bit, for example here and here and here. People continue to upvote answers that say "you should not disable your pagefile even if you have plenty of RAM", but I have yet to see any concrete, verifiable reasons given for this advice. As far as I can see, if you never need to read from the pagefile (because you have enough RAM), then performance could only be worse with it enabled, due to Windows pre-emptively writing to it. At best, performance would be the same. I can't see how it could possibly be improved by writing data you never need to read.

    So my question is: assuming that I have enough physical RAM for everything I do, is there any reason I should not disable the pagefile? Let's say the version of Windows is Windows XP x64 SP2 or Windows Server 2003 x64 SP2 (same thing). If it's different for Windows Server 2008 x64, I'd be interested to hear an answer for that as well. I'm looking for specific, objective reasons from good sources, not just opinions. Something like "here are the benchmarks done with and without a pagefile and the results were better with a pagefile, even with enough RAM" or "according to this MS KB article, problem X occurs if you disable the pagefile". So far the only reasons I've seen mentioned are:

    - Even if you think you have enough RAM, you might run out. OK, but for the purposes of this question, let's just take it as a given that I have enough. Maybe I only ever read my email and I have 16GB RAM. Or 128GB. Or 1TB. Or whatever - but it's enough for 100% of what I do, 100% of the time. Another way to think of it is: if I have x MB physical RAM and y MB pagefile and I never run out of RAM in that configuration, would I not be better off, performance-wise, with x+y MB physical RAM and no pagefile?
    - Windows is "used to" having a paging file and it might not function as reliably (from Understanding the Impact of RAM on Overall System Performance). That's rather vague, and I find it hard to believe, given that MS has provided the option to disable the pagefile.
    - Windows knows what it's doing better than you. No - it doesn't know that I won't run more programs or load more data, but I do.

  • One bigger Virtual Machine distributed across many OpenStack nodes [duplicate]

    - by flyer
    This question already has an answer here: Can a virtualized machine have the CPU and RAM resources of multiple underlying physical machines?

    I just set up virtual machines on one piece of hardware with Vagrant. I want to use Puppet to configure them, and next try to set up OpenStack. I am not sure if I am understanding how this should look at the end. Is it possible to have the architecture below with OpenStack, where I will run one virtual machine with Linux?

        -------------------------------
        |         VM with OS          |
        -------------------------------
        |   NOVA   |   NOVA   | NOVA  |
        -------------------------------
        |          OpenStack          |
        -------------------------------
        |   Node   |   Node   | Node  |
        -------------------------------

    More details: in my environment the Nodes are just virtual machines, but my question concerns separate hardware nodes. If we imagine these Nodes (Novas) are placed on separate machines (e.g. each has 4 cores), can I run one virtual machine across many OpenStack nodes? Is it possible to aggregate the computation power of OpenStack in one virtual distributed operating system?

  • Does Ubuntu 12.04.1 come with everything I need for using virtual servers and are the tools efficient?

    - by orokusaki
    I noticed that Ubuntu 12.04.1 comes with Xen, OpenStack, KVM and other virtualization-related tools. I have used VMware in the past. If I were to use Xen for virtualization, would I see considerable performance loss, since Xen is run on the host OS? Is it even run on the host OS, or is it like VMware, where it's installed below any Linux OS on the machine (embedded, I guess, is the word)? Do you have any recommendations on what sort of setup to use with these built-in tools? I have 2 physical servers, side by side. Each will need a VM used for Postgres and a VM used as an app server. One will be a failover for the other.

  • xen-create-image does not create initrd or initramfs image and domU does not start with system image

    - by user219372
    I have Fedora 19 as Dom0. To create an image I run:

        # xen-create-image --hostname=debian-wheezy --memory=512Mb --dhcp --size=20Gb --swap=512Mb --dir=/xen --arch=amd64 --dist=wheezy

    After generation finished I start the VM and see:

        # xl create /etc/xen/debian-wheezy.cfg
        Parsing config from /etc/xen/debian-wheezy.cfg
        libxl: error: libxl_dom.c:409:libxl__build_pv: xc_dom_ramdisk_file failed: No such file or directory
        libxl: error: libxl_create.c:919:domcreate_rebuild_done: cannot (re-)build domain: -3

    In /etc/xen/debian-wheezy.cfg I have:

        #
        # Kernel + memory size
        #
        kernel = '/boot/vmlinuz-3.11.2-201.fc19.x86_64'
        ramdisk = '/boot/initrd.img-3.11.2-201.fc19.x86_64'

    and "ls -1 /boot/*201*" shows:

        /boot/config-3.11.2-201.fc19.x86_64
        /boot/initramfs-3.11.2-201.fc19.x86_64.img
        /boot/System.map-3.11.2-201.fc19.x86_64
        /boot/vmlinuz-3.11.2-201.fc19.x86_64

    If I then fix the ramdisk directive in the .cfg file to /boot/initramfs-3.11.2-201.fc19.x86_64.img, the VM will start but the OS inside will not boot. In the tail of "xl console" I get:

        [ OK ] Reached target Basic System.
        dracut-initqueue[130]: Warning: Could not boot.
        dracut-initqueue[130]: Warning: /dev/disk/by-uuid/085883ad-73ca-45cc-8bc5-e6249f869b26 does not exist
        dracut-initqueue[130]: Warning: /dev/fedora/root does not exist
        dracut-initqueue[130]: Warning: /dev/fedora/swap does not exist
        dracut-initqueue[130]: Warning: /dev/mapper/fedora-root does not exist
        dracut-initqueue[130]: Warning: /dev/mapper/fedora-swap does not exist
        dracut-initqueue[130]: Warning: /dev/xvda2 does not exist
        Starting Dracut Emergency Shell...
        Warning: /dev/disk/by-uuid/085883ad-73ca-45cc-8bc5-e6249f869b26 does not exist
        Warning: /dev/fedora/root does not exist
        Warning: /dev/fedora/swap does not exist
        Warning: /dev/mapper/fedora-root does not exist
        Warning: /dev/mapper/fedora-swap does not exist
        Warning: /dev/xvda2 does not exist
        Generating "/run/initramfs/sosreport.txt"
        Entering emergency mode. Exit the shell to continue.
        Type "journalctl" to view system logs. You might want to save "/run/initramfs/sosreport.txt" to a USB stick or /boot after mounting them and attach it to a bug report.
        dracut:/#

    The .img files in /xen/domains/debian-wheezy exist and are listed in the disk section of debian-wheezy.cfg. So what should I do?

    Update: I've found that xl does not mount the images. In debian-wheezy.cfg I have:

        root = '/dev/xvda2 ro'
        disk = [
            'file:/xen/domains/debian-wheezy/disk.img,xvda2,w',
            'file:/xen/domains/debian-wheeze/swap.img,xvda1,w',
        ]

    And there are no /dev/xvda*, /dev/sda* or /dev/hda* files in the VM.
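
    One plausible reading of those dracut warnings: the Fedora Dom0 initramfs was built host-only, so it lacks the Xen PV disk driver and looks for the Dom0's own fedora/* volumes; /dev/xvda2 never appears because xen-blkfront isn't in the image. A hedged sketch of a rebuild (the kernel version is copied from above; the -xen suffix is just a convention to avoid clobbering the boot initramfs):

        dracut --force --no-hostonly --add-drivers "xen-blkfront xen-netfront" \
            /boot/initramfs-xen-3.11.2-201.fc19.x86_64.img 3.11.2-201.fc19.x86_64

    Then point the ramdisk directive in debian-wheezy.cfg at the new image. The cleaner long-term fix is usually to boot the DomU with its own Debian kernel (e.g. via pygrub) rather than the Dom0's Fedora kernel.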

  • How can I configure Firefox to assume I have less memory?

    - by WoLpH
    Firefox has a few different settings that automatically get tuned based on the system RAM. This is all great if you're running nothing besides Firefox, but when you're running half a dozen apps at the same time and they all assume they can take a decent chunk of memory, it just kills the box. Example settings:

        http://kb.mozillazine.org/Browser.sessionhistory.max_total_viewers
        http://kb.mozillazine.org/Browser.cache.memory.capacity

    How can I make Firefox automatically configure all these settings with the assumption that I only have 512MB of memory instead of 4GB (or whatever number, but you get the idea)? I am running Ubuntu 12.04 with Firefox 14. Current workarounds:

    - Running a Windows XP virtual machine with 512MB of RAM. It actually runs smoothly and takes less memory (including Windows) to run than having Firefox (or Chrome, for that matter) run standalone.
    - Installing the 32-bit version of Firefox (apt-get install firefox:i386); the base memory usage is only about 50% of what it is with the 64-bit version.
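
    There doesn't appear to be a single "pretend I have less RAM" switch, but the two prefs linked above can be pinned to fixed values in user.js (or about:config) so Firefox stops auto-sizing them. The numbers below are assumptions, roughly what Firefox would pick on a low-memory machine:

        // user.js in the profile directory - illustrative values, tune to taste
        user_pref("browser.cache.memory.capacity", 16384);        // memory cache in KB; -1 means auto-size from RAM
        user_pref("browser.sessionhistory.max_total_viewers", 2); // pages cached for back/forward; -1 means auto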

  • Nodes inside Cisco VPN. Incoming SSH requests allowed. But can't initiate an outbound SSH.

    - by Douglas Peter
    I've a gateway-to-gateway VPN setup between my Linksys RV042 router and a Cisco VPN. I am able to SSH into any of the machines inside the VPN from my network, but none of the machines inside the VPN can initiate an SSH into my network. It seems they've blocked even ping requests to my network gateway. This is the requirement: I have scripts that SSH into the machines inside the VPN and run a long mysql query. The query generates output to a file. The time that these queries take is variable, so I have a loop on my machine that periodically SSHes into the VPN machine, checks whether the query has finished, and pulls the generated file using SCP. I need to simplify it thus: the script will run on the machine inside the VPN, and when the query completes, it will SSH into my machine and push the generated file. Thanks for any ideas.
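
    If the one-way firewall can't be changed, a reverse SSH tunnel is a possible workaround, since connections your side initiates are allowed: carry a port for the return trip inside the SSH session you already open. A sketch (the port number and paths are assumptions):

        # from your network: open the session that starts the query,
        # and expose your own sshd to the remote end as localhost:2222
        ssh -R 2222:localhost:22 user@vpn-machine 'run_long_query.sh'

        # on the VPN machine, at the end of run_long_query.sh:
        scp -P 2222 /path/to/output.file you@localhost:/destination/

    The remote machine never initiates a new connection into your network; it only uses the tunnel you brought with you.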

  • Failure to create keytab file using msktutil on CentOS against W2K8

    - by user49321
    I'm trying to set up a CentOS 5.5 Squid server to authenticate against a Windows 2008 DC. I have followed this tutorial: http://serverfault.com/questions/66556/getting-squid-to-authenticate-with-kerberos-and-windows-2008-2003-7-xp However, I have run into an issue. When I run the following command (obviously changed for my environment):

        # msktutil -c -b "CN=COMPUTERS" -s HTTP/centos.dom.local -h centos.dom.local -k /etc/HTTP.keytab --computer-name centos-http --upn HTTP/centos.dom.local --server server.dom.local --verbose --enctypes 28

    I get the following error (the whole message is too long to post here):

        Error: Unable to set machine password for centos: (3) Authentication error
        Error: set_password failed

    kinit works fine, and the computer is added to the DC under COMPUTERS with the SRV records created, but no keytab is created.
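
    One avenue worth trying, assuming an account with rights to reset the computer object's password is available (the principal name below is a placeholder): obtain a ticket for that account first and tell msktutil to use those credentials instead of attempting the machine-account password change itself, via its --user-creds-only flag.

        kinit admin-user@DOM.LOCAL
        msktutil -c -b "CN=COMPUTERS" -s HTTP/centos.dom.local -h centos.dom.local \
            -k /etc/HTTP.keytab --computer-name centos-http --upn HTTP/centos.dom.local \
            --server server.dom.local --user-creds-only --enctypes 28

    set_password failures of this kind are also sometimes caused by clock skew or a pre-created computer object, so checking the time against the DC (e.g. with ntpdate -q server.dom.local) is cheap insurance.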
