Search Results

Search found 17847 results on 714 pages for 'virtual disk'.


  • I get a 502 bad gateway ONLY with a specific combination of domain/root folders - NGINX

    - by Patrick De Amorim
    I have a VPS running NGINX with virtual hosts, configured as follows:

        Domains pointing to it: lolpics.no, smscloud.no, idmag.no
        Root folders: /home/vds/www/lolpics, /home/vds/www/smscloud, /home/vds/www/idmag

    smscloud.no is the site that keeps returning 502 errors, but if I point that domain at any of the other folders, the site works, and if I point any other domain name at /home/vds/www/smscloud, that works too. Only smscloud.no combined with /home/vds/www/smscloud breaks. I tried putting this in the http{} block of my nginx.conf, with no result:

        proxy_buffer_size 128k;
        proxy_buffers 4 256k;
        proxy_busy_buffers_size 256k;

    EDIT: Well, that was slightly silly. If anyone from Google stumbles on this, here's how I fixed it - I just added this to the http{} block:

        fastcgi_buffer_size 16k;
        fastcgi_buffers 16 16k;

    So the start of my http block is now:

        http {
            include /etc/nginx/mime.types;
            proxy_buffer_size 128k;
            proxy_buffers 4 256k;
            proxy_busy_buffers_size 256k;
            fastcgi_buffer_size 16k;
            fastcgi_buffers 16 16k;
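    A hedged way to confirm that oversized FastCGI response headers (rather than bodies) are what triggers such 502s is to check the nginx error log before changing buffer sizes; the log path below is the common Debian/Ubuntu default and may differ per install:

        grep "upstream sent too big header" /var/log/nginx/error.log

    When that message appears for the failing vhost, raising fastcgi_buffer_size/fastcgi_buffers as shown above is the usual remedy.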

    Read the article

  • Web server horribly slow, sometimes incredibly fast

    - by dhanke
    I am running a small community (6000+ members) on a non-virtual 64-bit Ubuntu 11.04 system. I am not a Linux pro, not even advanced; I just tried to set up a web server, which does nothing special: its task is delivering some dynamic PHP and RoR websites. So my configuration files may look horribly bad. I might also use the wrong vocabulary, so when in doubt, please ask.

    With a current all-time record of 520 registered users (board accounts, no system users) online at the same time, the average server load is about 2.0 - 5.0. At mean traffic (~250 users) the average load value is about 0.4 - 0.8, sometimes a bit higher on expensive searches. Everything fine. From time to time, however, the load increases up to 120 (120.0, not 12.0 ;) ). During this time it is hard to even connect via SSH, but when I reach the server and use top/htop/iotop to see what is happening, I cannot identify any process causing high CPU load. iotop reports a current read/write speed of approx. 70 kB/s, which is about equal to powered-off, I think. Memory usage peaks at ~12 GB of 16 GB, so swap remains empty.

    Now the odd part (at least for me): after waiting some minutes (since I always panic a bit when this happens it feels like 5 minutes, but I suppose it is more like 20-30 minutes) the server is back to normal and everything continues as usual.

    Another odd fact: when I run hdparm -tT /dev/sda, I get an answer like:

        /dev/sda:
        Timing cached reads: 7180 MB in 2.00 seconds = 3591.13 MB/sec
        Timing buffered disk reads: 348 MB in 3.02 seconds = 115.41 MB/sec

    When I run the same command while the server is "frozen", the answer looks like:

        /dev/sda:                                                           <- takes about 5 minutes until this line appears
        Timing cached reads: 7180 MB in 2.00 seconds = 3591.13 MB/sec       <- 5 more minutes
        Timing buffered disk reads: 348 MB in 3.02 seconds = 115.41 MB/sec  <- another 5 minutes

    So the values are the same, but the quoted time is completely wrong. Using the time command as a prefix also tells me that ~15 minutes were used. I searched dmesg and /var/log/[messages|syslog] - nothing found. /var/log/errors, however, tells me, multiple times:

        Jul 4 20:28:30 localhost kernel: [19080.671415] INFO: task php5-fpm:27728 blocked for more than 120 seconds.
        Jul 4 20:28:30 localhost kernel: [19080.671419] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.

    Now that message tells me that a php5-fpm task was blocked (or did the blocking?) - but not whether that is the cause or just one of the results of the "freeze" (see the diagnostic sketch at the end of this post). Anyone?

    To cut a long story short: I don't even know where to start analyzing. So if you can give me any advice by looking at the following specs and configs, or ask me to provide more information, I'd be glad.

    Specs:

        6-core AMD Phenom(tm) II X6 1055T processor
        16 GB RAM
        2 x 1.5 TB Seagate ST1500DL003-9VT16L via SATA 3, in software RAID (I suppose)

    Services (per service --status-all, those with [ + ]):

        nginx web server 1.0.14
        MySQL server 5.1.63
        Ruby on Rails 2.3.11 (passenger-nginx-module)
        php5-fpm 5.3.6-13ubuntu3.7
        SSH
        ido2db

    Further services: default crontab + nightly backup, syslog-ng.

    The website consists of two subdomains, forum. and www., where forum is a phpBB 3.x PHP board and www a Ruby on Rails 2.3.11 application (portal). Mini-note: sometimes I notice that the forum is pretty slow, in contrast to the always-fast (except for this "freeze") portal. Both share the same database, but the portal uses it read-only.

    The web server is nginx, using the Phusion Passenger module to communicate with the Ruby application. For the forum it communicates with php5-fpm via socket. Relevant nginx configuration parts (my comments/questions start with ;):

        ; in case the freeze is due to too high filesystem activity, maybe add a limit?
        #worker_rlimit_nofile 50000;
        user www-data;
        ; 6 cores, so i read 6 fits. maybe already wrong?
        worker_processes 6;
        pid /var/run/nginx.pid;
        events {
            worker_connections 1024;
        }
        http {
            passenger_root /var/lib/gems/1.8/gems/passenger-3.0.11;
            passenger_ruby /usr/bin/ruby1.8;
            ; the forum once featured a chat, which was working w/o websockets,
            ; so it was a hell of pull requests (deactivated now, freeze still happening)
            keepalive_timeout 65;
            keepalive_requests 50;
            gzip on;
            server {
                listen 80;
                server_name www.domain.tld;
                root /var/www/domain/rails/public;
                passenger_enabled on;
            }
            server {
                listen 80;
                server_name forum.domain.tld;
                location / {
                    root /var/www/domain/forum;
                    index index.php;
                }
                ; static stuff to be handled by nginx
                location ~* ^/style/.+.(jpg|jpeg|gif|css|png|js|ico|xml)$ {
                    access_log off;
                    expires 30d;
                    root /var/www/domain/forum/;
                }
                ; now the php magic, note the "backend" fastcgi_pass
                location ~ .php$ {
                    fastcgi_split_path_info ^(.+\.php)(.*)$;
                    fastcgi_pass backend;
                    fastcgi_index index.php;
                    fastcgi_param SCRIPT_FILENAME /var/www/domain/forum$fastcgi_script_name;
                    include fastcgi_params;
                    fastcgi_param QUERY_STRING $query_string;
                    fastcgi_param REQUEST_METHOD $request_method;
                    fastcgi_param CONTENT_TYPE $content_type;
                    fastcgi_param CONTENT_LENGTH $content_length;
                    fastcgi_intercept_errors on;
                    fastcgi_ignore_client_abort off;
                    fastcgi_connect_timeout 60;
                    fastcgi_send_timeout 180;
                    fastcgi_read_timeout 180;
                    fastcgi_buffer_size 128k;
                    fastcgi_buffers 256 16k;
                    fastcgi_busy_buffers_size 256k;
                    fastcgi_temp_file_write_size 256k;
                    fastcgi_max_temp_file_size 0;
                }
                location ~ /\.ht {
                    deny all;
                }
            }
            ; the php5-fpm socket. i read that /dev/shm/ would be the fastest place for this. bad idea in general?
            upstream backend {
                server unix:/dev/shm/phpfpm;
            }
            ...
        }

    php5-fpm settings (I raised these values step by step in response to php5-fpm error log messages; the freeze problem was there before as well):

        listen = /dev/shm/phpfpm
        user = www-data
        group = www-data
        pm = dynamic
        ; holy, 4000! well, shrinking this value to earth level gave me
        ; 100s of 502 bad gateway responses. these values were quite stable.
        ; since there are only max 520 users online i dont get why i would need
        ; as many children as configured here. due to keep-alive maybe?
        ; asking questions is easier for me since restarting the server will make
        ; my community members angry ;)
        pm.max_children = 4000
        pm.start_servers = 100
        pm.min_spare_servers = 50
        pm.max_spare_servers = 150
        pm.max_requests = 10
        pm.status_path = /status
        ping.path = /ping
        ping.response = pong
        slowlog = log/$pool.log.slow
        ; should i use rlimit?
        ;rlimit_files = 1024
        chdir = /

    mysql/my.cnf:

        [client]
        port = 3306
        socket = /var/run/mysqld/mysqld.sock
        [mysqld_safe]
        socket = /var/run/mysqld/mysqld.sock
        nice = 0
        [mysqld]
        user = mysql
        socket = /var/run/mysqld/mysqld.sock
        port = 3306
        basedir = /usr
        datadir = /var/lib/mysql
        tmpdir = /tmp
        skip-external-locking
        bind-address = 127.0.0.1
        key_buffer = 16M
        max_allowed_packet = 16M
        thread_stack = 192K
        thread_cache_size = 8
        myisam-recover = BACKUP
        ; high number, but less gives some phpBB errors.
        max_connections = 450
        table_cache = 512
        ; i read twice the cpu cores, bad?
        thread_concurrency = 12
        join_buffer_size = 2084K
        concurrent_insert = 3
        query_cache_limit = 64M
        query_cache_size = 512M
        query_cache_type = 1
        log_error = /var/log/mysql/error.log
        log_slow_queries = /var/log/mysql/mysql-slow.log
        long_query_time = 2
        expire_logs_days = 10
        max_binlog_size = 100M
        low_priority_updates = 1
        [mysqldump]
        quick
        quote-names
        max_allowed_packet = 16M
        [isamchk]
        key_buffer = 16M
        !includedir /etc/mysql/conf.d/

    I used smartctl already; the HDDs seem to be fine. /proc/mdstat says:

        Personalities : [raid1] [linear] [multipath] [raid0] [raid6] [raid5] [raid4] [raid10]
        md3 : active raid1 sda3[1]
              1459264192 blocks [2/1] [_U]
        md1 : active raid1 sda1[0]
              3911680 blocks [2/1] [U_]
        unused devices: <none>

    ulimit -a:

        core file size          (blocks, -c) 0
        data seg size           (kbytes, -d) unlimited
        scheduling priority             (-e) 0
        file size               (blocks, -f) unlimited
        pending signals                 (-i) 127727
        max locked memory       (kbytes, -l) 64
        max memory size         (kbytes, -m) unlimited
        open files                      (-n) 1024
        pipe size            (512 bytes, -p) 8
        POSIX message queues     (bytes, -q) 819200
        real-time priority              (-r) 0
        stack size              (kbytes, -s) 8192
        cpu time               (seconds, -t) unlimited
        max user processes              (-u) 127727
        virtual memory          (kbytes, -v) unlimited
        file locks                      (-x) unlimited

    I quote some questions in my configuration files; these are not (intentionally) directly problem-related, but it would be nice for me to know whether they are indeed questionable or done right.

    One additional fact: my MySQL database is 12 GB in size. I don't know whether that matters, but mytop sometimes shows me insert queries that run 4-5 seconds, some 20-30 seconds. It is just a feeling that I am unable to prove (because I don't know how), but when I disable the database, the freeze seems not to happen. Example: I created a dummy Rails application to watch the development log. The app made some SQL queries, reads and inserts. The log quite often looked like:

        DbTest Load (0.3ms)   SELECT * FROM `db_test` WHERE (`db_test`.`id` = 31722) LIMIT 1
        SQL (0.1ms)   BEGIN
        DbTest Update (0.3ms)   UPDATE `db_test` SET `updated_at` = '2012-07-04 23:32:34' WHERE `id` = 31722
        <- now the log stands still for 5-60 seconds
        SQL (49.1ms)   COMMIT   <- the SQL update time in the log does not include the freeze time
        Rendering test/index
        Completed in 96ms (View: 16, DB: 59) | 200 OK [http://localhost:9000/test]

    The bad part: this mini-freeze here also only happens from time to time. Note: meanwhile I cannot even upload files via scp. I currently feel like running from bad to worse and back while googling for my server problem, due to an immense lack of knowledge regarding server configuration. It still makes me wonder why these problems even appear, since 250 users at a time is not such a high amount, right? So my questions:

        - What is wrong, and how do I fix it? ;) Or: what information can I provide to make the situation clearer?
        - Can you point at some critically bad configuration line that I should read up on in the documentation?
        - Are there any tools I can run to find possible bottlenecks?
        - Any further advice? (Next to "pay someone who knows what he does" - it is a private project, and the server costs enough already. :))

    Thanks for your time and help. Best regards, Daniel

    P.S.: I renamed the config files to domain.tld since I don't want to add any more load to the server until it is fixed; might be an exaggerated thought. P.P.S.: If I asked a complete duplicate question, sorry - my search results seemed to be quite specific in their own way.
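    A diagnostic sketch for the next freeze - assuming a kernel with SysRq enabled (the Ubuntu default) and the sysstat package installed; neither command appears in the original post:

        # dump stack traces of all blocked (uninterruptible) tasks into the kernel log
        echo w > /proc/sysrq-trigger
        dmesg | tail -n 100

        # watch per-device I/O utilization and wait times in 5-second intervals
        iostat -xk 5

    If %util sits near 100% while throughput is near zero during the freeze, the stall is below the filesystem (disk, controller, or the md layer) rather than in php5-fpm or MySQL itself.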

    Read the article

  • Building a new computer: it turns on, but no POST

    - by addybojangles
    Pardon my ignorance here; I finally decided to put together a computer, and egads. I purchased a new motherboard, power supply, processor, video card and memory:

        - ASUS M4A79XTD EVO AM3 AMD 790X ATX motherboard
        - OCZ Fatal1ty OCZ550FTY 550W ATX12V v2.2 / EPS12V SLI-ready 80 PLUS certified modular active-PFC power supply
        - AMD Phenom II X4 965 Black Edition Deneb 3.4GHz (4 x 512KB L2 cache, 6MB L3 cache) Socket AM3 125W quad-core processor
        - XFX HD-577A-ZNFC Radeon HD 5770 (Juniper XT) 1GB 128-bit GDDR5 PCI Express 2.0 x16 video card (HDCP-ready, CrossFireX support)
        - G.SKILL 4GB (2 x 2GB) 240-pin DDR3 SDRAM DDR3 1600 (PC3 12800) dual-channel kit, model F3-12800CL9D-4GBNQ

    (I originally had links for you guys, but I lack the rep, sorry!) I've got it all in the tower: I put in the power supply, installed the processor on the motherboard, installed the heatsink, put in the RAM, and I am using an older IDE hard disk. When I start the computer, the monitor tells me "check signal cable." As far as I can tell, the heatsink fan on the processor is spinning, the power supply is on (obviously), and the green LED on the motherboard is on. I originally only had the bigger power connector plugged into the motherboard (what I saw in a YouTube vid as well as in the mobo instructions), but after doing some research I read that the other ATX power connector should be plugged in too. Which I did - and trying to power the computer now results in nothing. No beeps on startup, no POST. Does anyone have any ideas? Your ideas and help are greatly appreciated.

    Read the article

  • Openfiler iSCSI performance

    - by Justin
    Hoping someone can point me in the right direction with some iSCSI performance issues I'm having. I'm running Openfiler 2.99 on an older ProLiant DL360 G5: dual Xeon processors, 6GB ECC RAM, an Intel gigabit server NIC, and a SAS controller with 3 x 10K SAS drives in a RAID 5. When I run a simple write test on the box directly, performance is very good:

        [root@localhost ~]# dd if=/dev/zero of=tmpfile bs=1M count=1000
        1000+0 records in
        1000+0 records out
        1048576000 bytes (1.0 GB) copied, 4.64468 s, 226 MB/s

    So I created a LUN, attached it to another box I have running ESXi 5.1 (Core i7 2600K, 16GB RAM, Intel gigabit server NIC) and created a new datastore. Once I created the datastore I was able to create and start a VM running CentOS with 2GB of RAM and 16GB of disk space. The OS installed fine and I'm able to use it, but when I ran the same test inside the VM I got dramatically different results:

        [root@localhost ~]# dd if=/dev/zero of=tmpfile bs=1M count=1000
        1000+0 records in
        1000+0 records out
        1048576000 bytes (1.0 GB) copied, 26.8786 s, 39.0 MB/s

    Both servers have brand-new Intel server NICs, and I have jumbo frames enabled on the switch, on the Openfiler box, and on the VMkernel adapter of the ESXi box. I can confirm this is set up properly by using the vmkping command from the ESXi host:

        ~ # vmkping 10.0.0.1 -s 9000
        PING 10.0.0.1 (10.0.0.1): 9000 data bytes
        9008 bytes from 10.0.0.1: icmp_seq=0 ttl=64 time=0.533 ms
        9008 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.736 ms
        9008 bytes from 10.0.0.1: icmp_seq=2 ttl=64 time=0.570 ms

    The only thing I haven't tried on the networking side is bonding two interfaces together. I'm open to trying that down the road, but for now I am trying to keep things simple. I know this is a pretty modest setup and I'm not expecting top-notch performance, but I would like to see 90-100MB/s. Any ideas?
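    One way to isolate the network path from the storage stack is a raw TCP throughput test between the two hosts. iperf is not part of the original post, so treat this as a hedged suggestion (run it on the Openfiler box and inside the CentOS VM, or from any client on the same segment):

        # on the Openfiler box
        iperf -s

        # from the CentOS VM (10.0.0.1 being the Openfiler address used above)
        iperf -c 10.0.0.1 -t 30

    A healthy gigabit link shows roughly 940 Mbit/s; much less points at the network or jumbo-frame settings, while a full-speed result pushes suspicion back onto the iSCSI target and VMFS layers.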

    Read the article

  • Xen hypervisor 4.1 kernel panic on Ubuntu 12.04

    - by rkmax
    I have a fresh Ubuntu 12.04.1 amd64 server install, following this guide. I used the LVM option across the whole disk and made two LVs:

        /dev/mapper/vg-root   /      (80GB)
        vg-swap               swap   (4GB)

    Then I installed Xen with apt-get install xen-hypervisor-4.1-amd64, configured /etc/default/grub as in the guide, and added:

        GRUB_CMDLINE_XEN_DEFAULT="dom0_mem=768M"

    After all this I ran update-grub and rebooted. But when I try to boot with Xen 4.1-amd64, I always get a kernel panic with the message:

        Domain-0 allocation is too small for kernel image

    My questions are: what is this error about, and where can I grow this allocation to avoid it?

    grub.cfg:

        menuentry 'Ubuntu GNU/Linux, with Xen 4.1-amd64 and Linux 3.2.0-29-generic' --class ubuntu --class gnu-linux --class gnu --class os --class xen {
            insmod part_gpt
            insmod ext2
            set root='(hd0,gpt2)'
            search --no-floppy --fs-uuid --set=root 3541e241-7f39-4ebe-8d99-c5306294c266
            echo 'Loading Xen 4.1-amd64 ...'
            multiboot /xen-4.1-amd64.gz placeholder dom0_mem=768M
            echo 'Loading Linux 3.2.0-29-generic ...'
            module /vmlinuz-3.2.0-29-generic placeholder root=/dev/mapper/backup--xen-root ro rootdelay=180
            echo 'Loading initial ramdisk ...'
            module /initrd.img-3.2.0-29-generic
        }

    Note: I've followed this guide too.
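    For reference, the dom0 allocation is controlled by the same dom0_mem option already shown above. A common variant - hedged, try it on a test boot first - pins both the initial and the maximum allocation, after which update-grub must be re-run:

        # /etc/default/grub
        GRUB_CMDLINE_XEN_DEFAULT="dom0_mem=768M,max:768M"

        sudo update-grub
        sudo reboot

    If the panic persists, a larger value (e.g. dom0_mem=1024M,max:1024M) is the obvious knob, since the message complains that the allocation is too small for the kernel image.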

    Read the article

  • Problem recreating BCD on Windows 7 64bit - The requested system device cannot be found

    - by Domchi
    An NVIDIA driver upgrade crashed my Windows 7 installation, so I'm working to undo the damage. What I can do: I can boot the Windows installer from a USB drive, and I can boot Hiren's Boot CD. Although automated Windows repair fails, I can get to a command prompt when I boot the Windows installer from the USB drive, and I can see my drive and all my data. What I cannot do: boot into Windows - I get this message:

        Windows failed to start. A recent hardware or software change might be the cause.
        To fix the problem: 1. Insert Windows CD and run the "repair your computer" option.
        File: \Boot\BCD
        Status: 0xc000000f
        Info: an error occurred while attempting to read the boot configuration data.

    It seems that something is wrong with my \Boot\BCD, so I'm trying to recreate it from scratch. I've tried all the methods detailed here (including Windows repair, which fails), and I'm left with the last one (near the bottom of that page). When I type the following command as in the tutorial:

        bcdedit.exe /import c:\boot\bcd.temp

    it fails with the following error:

        The store import operation has failed.
        The requested system device cannot be found.

    Many Google results say that I must use diskpart to set my partition active; however, it's already set as active. Also, when I try:

        bcdedit /enum

    it fails with a similar message:

        The boot configuration data store could not be opened.
        The requested system device cannot be found.

    Does anyone know what that error message means, and what the requested system device is? I'd like to avoid having to reinstall Windows, since all the files on the disk seem to be fine.
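    For completeness, a frequently suggested rebuild sequence from the installer's recovery command prompt - hedged, since drive letters under the Windows recovery environment often differ from the installed system's (check with diskpart first):

        bootrec /fixmbr
        bootrec /fixboot
        bootrec /rebuildbcd

        rem if /rebuildbcd also reports "The requested system device cannot be found",
        rem regenerating the store directly from the installed Windows sometimes works:
        bcdboot C:\Windows /s C: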

    Read the article

  • Updated to Lion: cannot boot into Boot Camp partitions, but can use them in Parallels

    - by Jon Jester
    Under Snow Leopard I had Boot Camp partitions for both XP and Windows 7, each on a separately partitioned hard drive. Both were accessible either through Parallels 7 or by booting directly via Boot Camp. After upgrading to Lion, both are still accessible through Parallels, but I have not been able to boot directly into either. Unfortunately, it is important to me to be able to boot directly into at least the Windows 7 partition. I have tried virtually everything I can find online; I've seen similar issues, but nothing where the partitions were usable virtually yet not directly. Nothing works: rEFIt, correcting the master boot records in Windows from the command line, wiping the Windows 7 partition clean and reinstalling Windows 7 several times (first with Boot Camp 4 drivers, then with Boot Camp 3 drivers), and resizing the Boot Camp partitions. When booting directly into a Boot Camp partition, it goes all the way to showing the desktop before it fails with a Windows error screen. I can see all the disks and their appropriate partitions both in the OS X Disk Utility and in the Windows installer utility.

    Read the article

  • IIS serving static content gives 503 at random

    - by Steffen
    We're having a few issues with our image server, a Windows 2008 box running IIS 7.5 that serves only static content: images. It ran without issues for quite a while, until we recently disabled output caching, because we noticed that having it enabled meant no-cache headers were sent to the clients (forcing them to fetch the images from the server every time). We've read quite a bit about it, and it seems IIS just works that way: either you use output caching or you get to use cache headers. Anyway, having disabled the output cache, we now experience random five-minute intervals in which all requests get a 503 Service Unavailable. During such a period the "Files Cached" performance counter stalls (it neither increases nor decreases), and after the period all caches are flushed. You might find it weird that I talk about caching, since we disabled output caching. The thing is, we changed the ObjectTTL parameter in the registry so that files are cached for 3 minutes (which has worked very well; our disk I/O dropped significantly). So even with output caching disabled we're still caching plenty of files - if we could just get rid of the random 503s, it'd be perfect :-D We don't get any messages in the Windows event log during these 503 intervals, so we're pretty stumped as to what to do. Any ideas are very welcome :-)
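    As a hedged aside: the registry knob the post calls ObjectTTL is usually documented as ObjectCacheTTL (in seconds) under the InetInfo parameters key; treat the exact value name and path below as an assumption to verify against the relevant KB article before applying:

        rem assumed value name (ObjectCacheTTL) and path - verify before use; 180 = 3 minutes
        reg add "HKLM\SYSTEM\CurrentControlSet\Services\InetInfo\Parameters" /v ObjectCacheTTL /t REG_DWORD /d 180 /f
        iisreset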

    Read the article

  • Enable PasswordAuthentication on OpenSuse 10

    - by Riduidel
    Hi, I have a virtual instance of SUSE 10 running in VMware Player, and I'm fighting it to allow SSH password authentication. How can I make this work, given that I have already tuned the /etc/ssh/ssh_config file like this:

        # $OpenBSD: ssh_config,v 1.20 2005/01/28 09:45:53 dtucker Exp $
        Host *
        #   ForwardAgent no
            ForwardX11 yes
            ForwardX11Trusted yes
            PubkeyAuthentication no
            RhostsRSAAuthentication no
            RSAAuthentication no
            PasswordAuthentication yes
            HostbasedAuthentication no
            Protocol 2
            SendEnv LANG LC_CTYPE LC_NUMERIC LC_TIME LC_COLLATE LC_MONETARY LC_MESSAGES
            SendEnv LC_PAPER LC_NAME LC_ADDRESS LC_TELEPHONE LC_MEASUREMENT
            SendEnv LC_IDENTIFICATION LC_ALL

    The SSH connection attempt logs the following:

        Incoming packet #0x5, type 51 / 0x33 (SSH2_MSG_USERAUTH_FAILURE)
        00000000  00 00 00 1e 70 75 62 6c 69 63 6b 65 79 2c 6b 65  ....publickey,ke
        00000010  79 62 6f 61 72 64 2d 69 6e 74 65 72 61 63 74 69  yboard-interacti
        00000020  76 65 00                                         ve.
        Outgoing packet #0x6, type 50 / 0x32 (SSH2_MSG_USERAUTH_REQUEST)
        00000000  00 00 00 04 72 6f 6f 74 00 00 00 0e 73 73 68 2d  ....root....ssh-
        00000010  63 6f 6e 6e 65 63 74 69 6f 6e 00 00 00 14 6b 65  connection....ke
        00000020  79 62 6f 61 72 64 2d 69 6e 74 65 72 61 63 74 69  yboard-interacti
        00000030  76 65 00 00 00 00 00 00 00 00                    ve........

    This tells me the server expects publickey and keyboard-interactive authentication, which I don't want to use.
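    Worth noting as a hedged aside: /etc/ssh/ssh_config configures the outgoing client, while the methods a server offers come from /etc/ssh/sshd_config on that server. A minimal sketch of the server-side change on the SUSE guest (file abridged):

        # /etc/ssh/sshd_config
        PasswordAuthentication yes
        UsePAM yes

        # then restart the daemon; rcsshd is SUSE's init shortcut
        rcsshd restart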

    Read the article

  • Experience with AMCC 3ware 9650se raid cards? Ours seems dead

    - by antiduh
    We have an 8-port 3ware 9650SE RAID card for our main disk array. We had to bring the server down for a pending power outage, and when we turned the machine back on, the RAID card never started. The card has been in service for a couple of years without problems and was working right up until the shutdown. Now, when we turn the machine on, the BIOS option ROM that normally kicks in before the bootloader doesn't show up, none of the drives start, and when the OS tries to access the device it just times out. The firmware has been upgraded in the past, so it's possible we've hit some sort of firmware bug. We're using the card in a Silicon Mechanics R272 machine with Gentoo as the OS. The OS eventually boots, but alas, without the card. We've ordered a new one, but I'm worried that if we replace the card it won't recognize the existing array. Has anybody performed a card swap before? Any help would be greatly appreciated. Edit: these are the kernel errors we see:

        3ware 9000 Storage Controller device driver for Linux v2.26.02.012.
        3w-9xxx 0000:09:00.0: PCI INT A -> GSI 18 (level, low) -> IRQ 18
        3w-9xxx 0000:09:00.0: setting latency timer to 64
        3w-9xxx: scsi0: ERROR: (0x06:0x000D): PCI Abort: clearing.
        3w-9xxx: scsi0: ERROR: (0x06:0x001F): Microcontroller not ready during reset sequence.
        3w-9xxx: scsi0: ERROR: (0x06:0x0036): Response queue (large) empty failed during reset sequence.
        3w-9xxx 0000:09:00.0: PCI INT A disabled

    Read the article

  • How to access programs in one PC using another PC

    - by darkstar13
    Hi, I was recently given an old PC for my remote access at work. It came with Windows XP installed, 400+ MB of RAM, and all USB devices disabled; I access my work applications through VPN/Citrix. Basically, it's sooooo slow, plus it's bulky and will just occupy space, so I am now hoping to find a way to integrate this work PC into my home PC. I tried putting its hard drive into my home PC's case, set as a slave drive. However, when I boot my PC from this drive, I get stuck at the screen where Windows prompts me to select how to boot (e.g. Safe Mode, Safe Mode with Command Prompt, Last Known Good Configuration, etc.); whatever option I select, I am back at this same prompt after the reboot. I am thinking that maybe I can clone the drive, mount the cloned drive, and access the system as a virtual machine, but I don't know whether that will work. I would like to know if there's something I can do so I can work at home on my home PC, where I can access my work programs to connect to VPN/Citrix. My home PC's OS is Windows 7 Ultimate x64.
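    If the clone-and-virtualize idea appeals, one concrete (hedged) route is Sysinternals Disk2vhd, run on the work PC while it still boots; the output path below is just an example:

        rem run on the XP work PC; captures all volumes into one VHD file
        disk2vhd * d:\workpc.vhd

    The resulting VHD can then be attached to a virtual machine on the Windows 7 box; Windows 7 Ultimate can also mount a VHD directly via Disk Management, which at least recovers the files even if the VM route fails.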

    Read the article

  • Cacti not working for SNMP data sources

    - by lorenzo-s
    I installed the cacti and snmpd packages on a Debian server. I'm able to display the common graphs in Cacti (memory usage, load average, logged-in users, etc.) using the data templates listed as Unix. Now I want to replace these graphs with new ones using SNMP data sources, both because the SNMP templates also include CPU usage and because it's not out of the question that I'll have to manage multiple hosts in the future. So I installed snmpd on the machine and left snmpd.conf as it is. In Cacti, I created three new data sources from SNMP templates for the host 127.0.0.1:

        ucd/net - CPU Usage - Nice
        ucd/net - CPU Usage - System
        ucd/net - CPU Usage - User

    Then I created a new graph from the ucd/net - CPU Usage template and selected the three data sources in the Graph Item Fields section. The graph is now enabled and running, but empty: no data has been collected. Under Console - Devices my SNMP host is listed as up and running:

        System: Linux ip-xx-xx-xxx-xxx 3.2.0-23-virtual #36-Ubuntu SMP Tue Apr 10 22:29:03 UTC 2012 x86_64
        Uptime: 929267 (0 days, 2 hours, 34 minutes)
        Hostname: ip-xx-xx-xxx-xxx
        Location: Sitting on the Dock of the Bay
        Contact: Me [email protected]

    In SNMP Options I left everything as it is:

        SNMP Version: Version 1
        SNMP Community: public
        SNMP Timeout: 500 ms
        Maximum OID's Per Get Request: 10

    In Console - Utilities - Cacti Log I get multiple warnings (two for each data source) every 5 minutes:

        10/29/2012 01:45:01 PM - CMDPHP: Poller[0] Host[2] DS[18] WARNING: Result from SNMP not valid. Partial Result: U
        10/29/2012 01:45:01 PM - CMDPHP: Poller[0] WARNING: SNMP Get Timeout for Host:'127.0.0.1', and OID:'.1.3.6.1.4.1.2021.4.15.0'
        10/29/2012 01:45:01 PM - CMDPHP: Poller[0] Host[1] DS[9] WARNING: Result from SNMP not valid. Partial Result: U
        10/29/2012 01:45:01 PM - CMDPHP: Poller[0] WARNING: SNMP Get Timeout for Host:'127.0.0.1', and OID:'.1.3.6.1.4.1.2021.11.52.0'
        10/29/2012 01:40:01 PM - CMDPHP: Poller[0] Host[2] DS[19] WARNING: Result from SNMP not valid. Partial Result: U
        10/29/2012 01:40:01 PM - CMDPHP: Poller[0] WARNING: SNMP Get Timeout for Host:'127.0.0.1', and OID:'.1.3.6.1.4.1.2021.4.6.0'
        [...]

    I have the feeling I'm missing something, but I cannot get it...
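    A hedged way to check whether snmpd itself answers for those UCD OIDs, independent of Cacti (the CLI tools come from Debian's snmp package):

        # query one of the OIDs Cacti is timing out on
        snmpget -v1 -c public 127.0.0.1 .1.3.6.1.4.1.2021.4.15.0

    Debian's stock snmpd.conf restricts the default community to a minimal system view; if the query above times out or returns an error, widening the view is usually the missing step. A minimal sketch:

        # /etc/snmp/snmpd.conf - read-only access from localhost to the full tree
        rocommunity public 127.0.0.1

        # then: service snmpd restart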

    Read the article

  • Leopard Network Shares and browsing are unreliable

    - by EvilChookie
    I have two Macs running Leopard 10.5.8: a 13" MBP connected via WiFi and a 24" 2008 iMac connected via Ethernet. There are at least another 6-10 machines (Windows and Mac) awake on the network (with shares) at any given time, yet there are plenty of times when I cannot see any devices/shares in the "Shared" section in Finder, nor any computers under "Network" in Finder. Restarting doesn't help, and I've restarted all the networking gear in the house to no avail. Our network is a series of gigabit switches connected to a D-Link gaming router. I believe we use OpenDNS, and our provider is Cox. I hate having to use "Go - Connect to Server" to browse to commonly used file shares (by IP). I'd like to know why my shares do not always and consistently appear in Leopard. Edit: I ran OnyX this morning and performed the cleaning and maintenance operations (including disk permissions) on both my Macs, and at least one of them has started showing network devices again (the other is still going). No idea how long this will last. Any ideas as to what is causing this issue, and how to prevent it? Edit 2: Aaaand there go the shares again. So running OnyX is not a permanent or reliable fix for this issue. Edit 3: After a clean reinstall and update, network shares are still unreliable. The smbclient command mentioned in the comments shows the information it's supposed to show, but the shares do not appear in the Shared section. They also vanish at random and reappear at random throughout the day.

    Read the article

  • TicTac Photo and Windows 7

    - by Ben
    Hello, my wife has been creating a TicTacPhoto album. I'd had enough of Vista, so I upgraded to Windows 7: I backed up the TicTacPhoto file and the photos to an external hard disk and performed a fresh install of Win7. Now here is the problem: TicTacPhoto says it can't find the photos in the album. The locations were as follows:

        Vista: C:\Users\Kelly\Pictures
        Win 7: C:\Users\Kelly\My Pictures

    When I try to create a folder named Pictures under Kelly, a message about merging the two folders pops up and Windows simply moves the pictures into the My Pictures folder. Does anyone know a way to make a folder called Pictures so I can eliminate the file path problem, and then try again with TicTacPhoto support to get them to fix my file? My wife is going to kill me: it's our wedding album, she has spent upwards of 30 hours designing it, and me upgrading to Win 7 means it's all my fault. She does not understand file paths etc. I'm going to try to open the album file in a text editor and see if I can spot anything, but I thought I would ask here as well. Any help appreciated.
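    If the album file stores the literal Vista path, one workaround to sketch - with hypothetical paths, and only applicable if no real folder named Pictures exists at that point - is an NTFS junction that makes the old path resolve to the new location, created from an elevated command prompt:

        rem junction so C:\Users\Kelly\Pictures resolves to the current photo folder
        mklink /J "C:\Users\Kelly\Pictures" "C:\Users\Kelly\My Pictures"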

    Read the article

  • Short POST data in HTTP

    - by Matt
    We're hosting a customer's Debian Linux web server running a PHP-based web application. The server sits behind our firewall with its own virtual interface, and port 80 is forwarded internally to a machine sitting in the DMZ. The issue we're having is that data POSTed to the server seems to be getting cut short for some users. It's reproducible for some users on the same box, yet the same user sending the same data from another PC on the same LAN works. The data gets cut to around 1140 bytes, I'm told. Any idea why this might be happening? The customer is blaming our firewall, but then surely we'd have issues with other services. I suspect it's a problem with the website itself. Suggestions on how to isolate the problem would be of help. Our firewall is Astaro. EDIT: The customer temporarily set the Ethernet frame size on the server to 500 bytes. This made it work for now! I know some of the customers are using an internet provider that runs PPPoE.
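    That combination - a fixed truncation size, PPPoE clients, and a fix via smaller frames - is the classic signature of a path-MTU problem. A hedged mitigation sketch on the Linux box or forwarding gateway is TCP MSS clamping, plus a quick probe for the working MTU; neither command comes from the original post, and the hostname is a placeholder:

        # probe the largest unfragmented payload toward a problem client
        # (1472 works on plain 1500-byte Ethernet; PPPoE paths typically max out lower)
        ping -M do -s 1452 client.example.org

        # clamp TCP MSS to the discovered path MTU on the forwarding gateway
        iptables -t mangle -A FORWARD -p tcp --tcp-flags SYN,RST SYN -j TCPMSS --clamp-mss-to-pmtu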

    Read the article

  • Has anyone used the sharedband connection bonding product?

    - by John Rennie
    See http://www.sharedband.com/ for details on the product. Obviously Sharedband aren't too keen on giving away their technical secrets, but I would guess that it bonds the connections at the IP layer, i.e. their routers send the IP packets to the Sharedband routers over all available lines, and the Sharedband routers handle all the virtual circuitry and provide the NAT to whatever IP address(es) they've assigned you. It looks like a clever idea and a good way to provide some resilience over ADSL links; you can even use ADSL links from different ISPs and Sharedband will still bond them for you. But I find myself wondering how well it really works, and whether it's worth it. Draytek routers can already load-balance (though not bond) up to four ADSL lines, so the Sharedband product really only offers an advantage if you're hosting servers, i.e. you can have one IP address that accepts incoming connections through all your (working) ADSL lines. But should you really try to host servers on ADSL lines, especially since ADSL upload performance isn't stellar? Wouldn't it be better to use a hosted server, or maybe pay up for a leased line with an SLA? So I'm asking: is anyone using Sharedband, and if so, what do you think of it? JR

    Read the article

  • Microsoft Remote Desktop Services - Android

    - by Matt Rogers
    We have recently started testing Remote Desktop Services. We deployed the environment on the latest server OS, Windows Server 2012 R2, with the Web Access role, RD Gateway, Connection Broker, Virtualization Host and Session Host, and we are running both virtual-machine-based and session-based deployments. All of these work as expected, internally and externally, when using a Windows workstation as the RDS client. The Android client, however, is unable to launch applications. Once you install the app from Google Play, you are given a screen to add Remote Resources. After entering the appropriate URL, username and password, we see the applications that have been published. Unfortunately, when we attempt to launch an app, we get the following error:

        Connection Error
        Host not found. Please provide the fully-qualified name or the IP address of the host.

    We have already entered this information; otherwise, I don't believe we would be able to see the published applications. I think the error is related to the certificate and how it is being used to connect to the applications. Since this is our lab environment, we have not configured a valid external certificate on the servers, and the trusted certificate installed on the Android tablet points to our internal server/domain name. What I would like to know: has anyone configured RDS Web Access on Server 2012 R2 and externally connected an Android or iOS device using the Microsoft-supported Remote Desktop client? Are others experiencing the same problem we are? Were you able to resolve the issue? Was it related to the external certificate/host name?

    Read the article

  • Reloading NAT configuration on a running VMWare Server 2.0.2

    - by Jonathan Clarke
    I have a server running VMware Server 2.0.2 on a Debian Lenny host, with 15-20 virtual machines running, all attached to a single NAT network (named vmnet8). I have configured VMware's NAT (the vmnet-natd daemon) to forward some incoming ports to one of the VMs, since it hosts some publicly accessible services. I did this via the file /etc/vmware/vmnet8/nat/nat.conf, adding lines like the following:

        80 = 192.168.100.100:80

    This works great: I can reach the web server on the VM at 192.168.100.100 by connecting to the host's IP address. Sometimes I need to add port redirections to this NAT configuration, so I add a line to the configuration file. Now for the question: how do I make the natd process take this new configuration into account? Clearly, restarting the host machine does take it into account, and the newly added port is forwarded - but restarting the whole host is not an option on this server, so how should one do this without a full restart? Thanks for any ideas!
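    One approach worth sketching - an assumption, not verified against this exact VMware Server release: restart only the vmnet-natd process, relaunching it with whatever arguments the running instance shows, so the guests and the other vmnet services stay up:

        # find the running NAT daemon and note its full command line
        ps aux | grep '[v]mnet-natd'

        # stop it, then relaunch it with the exact arguments ps reported, e.g.:
        kill <pid-from-ps>
        /usr/bin/vmnet-natd -d /var/run/vmnet-natd-8.pid \
            -m /var/run/vmnet-natd-8.mac -c /etc/vmware/vmnet8/nat/nat.conf

    The paths above are examples from a typical install; copy the real ones from the ps output.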

    Read the article

  • Poor NFS Performance: OpenFiler

    - by Safin09
    Good day everyone. I have an issue with Openfiler, a Linux-based operating system that converts a computer into a SAN/NAS appliance. Here is the problem. In my environment we have two NetApp StoreVault 500 appliances to which I normally perform backups over an NFS share. Two backup cron jobs use ghettoVCB to back up two groups of VMs: one job covers a pool of 3 VMs and takes 13 minutes to complete; the second backs up a pool of 5 VMs to the second StoreVault appliance and takes 2 hours. We then installed Openfiler on an old server with two Xeon cores, using software RAID 5. When performing the same backups to an NFS share on Openfiler, the first backup job, which normally takes 13 minutes, takes around 4 hours, and the second, which normally takes 2 hours, takes almost 10 hours to complete. This is unacceptable, especially considering the strain placed on the host ESX server. I assumed that the CPU overhead of software RAID 5 explained the long backup times, so I installed Openfiler on a second server, an IBM x306 with a P4 Intel processor - this time with no RAID at all, just a single 750 GB hard drive containing the OS and the space used to back up VMs over an NFS share. I ran the first backup job (the pool of 3 VMs), and this time it took 1.5 hours instead of 13 minutes! Is Openfiler simply poor as an NFS server? Has anyone else had these issues with Openfiler?
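    One variable worth ruling out, sketched here as an assumption about the Openfiler share's settings: NFS exports that commit every write synchronously are dramatically slower for large sequential backup streams than async exports. On a stock Linux NFS server the difference lives in /etc/exports (paths and subnet are placeholders):

        # synchronous (safe, slow): every write is committed to disk before the reply
        /mnt/backup 10.0.0.0/24(rw,sync,no_subtree_check)

        # asynchronous (fast, but risks data loss if the server crashes mid-backup)
        /mnt/backup 10.0.0.0/24(rw,async,no_subtree_check)

        # re-export after editing
        exportfs -ra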

    Read the article

  • Linksys/Cisco Small Business SRW-Series (ie SRW248G4) - Overcoming the Limitations

    - by Warren P
    We just purchased a Cisco/Linksys SRW248G4 switch to try it out. We have always had unmanaged switches before, and this is our first "somewhat managed" switch. So far the major limitations are:

        - Only Internet Explorer 6 (the manual says IE 5.5!) works with the web interface.
        - SSH exists but is not practically usable, because the only key length supported is no longer accepted by most modern SSH installs (I get the error "RSA modulus too small" in OpenSSH 4.x/5.x).

    This is with the latest firmware revision, I believe, although Cisco's website does not actually tell you which version you're downloading. All in all, I think they must be trying to tell me that if I want a good-quality switch I shouldn't buy these SRWs and should buy a Dell or an HP ProCurve, or save up my pennies and buy a Catalyst. The question here, then, at long last: has anyone gotten the web interface to work via some IE 7 or IE 8 compatibility-mode settings, or used another browser (Opera? KDE/Safari/WebKit?) spoofing IE6? Is there any way to get the SSH key length upgraded? I'm guessing a 0% chance of a yes on that last one. I found an XP machine and used telnet (via PuTTYtel.exe) and IE6 to set this up, and I doubt we'll have to touch it again - which is fine with us. But it would be nice if I could administer this thing from either (a) a Linux box or (b) my primary desktop, which runs Windows 7. It looks like XP Mode with IE6 on the virtual XP machine may be my only way to administer this type of switch via the web.

    Read the article

  • Always failing to connect to Outlook Anywhere through TMG 2010 with a certificate

    - by Albert Widjaja
    Hi, I have successfully published Exchange ActiveSync and OWA (internally only) using TMG 2010, but somehow publishing Outlook Anywhere fails (as can be seen from https://www.testexchangeconnectivity.com).

    IIS 7 settings: I have unchecked Require SSL and set client certificates to "Ignore".

    Exchange CAS settings:

        ServerName                 : ExCAS02-VM
        SSLOffloading              : True
        ExternalHostname           : activesync.domain.com
        ClientAuthenticationMethod : Basic
        IISAuthenticationMethods   : {Basic}
        MetabasePath               : IIS://ExCAS02-VM.domainad.com/W3SVC/1/ROOT/Rpc
        Path                       : C:\Windows\System32\RpcProxy
        Server                     : ExCAS02-VM
        AdminDisplayName           :
        ExchangeVersion            : 0.1 (8.0.535.0)
        Name                       : Rpc (Default Web Site)
        DistinguishedName          : CN=Rpc (Default Web Site),CN=HTTP,CN=Protocols,CN=ExCAS02-VM,CN=Servers,CN=Exchange Administrative.......
        Identity                   : ExCAS02-VM\Rpc (Default Web Site)
        Guid                       : 59873fe5-3e09-456e-9540-f67abc893f5e
        ObjectCategory             : domainad.com/Configuration/Schema/ms-Exch-Rpc-Http-Virtual-Directory
        ObjectClass                : {top, msExchVirtualDirectory, msExchRpcHttpVirtualDirectory}
        WhenChanged                : 18/02/2011 4:31:54 PM
        WhenCreated                : 18/02/2011 4:30:27 PM
        OriginatingServer          : ADDC01.domainad.com
        IsValid                    : True

    Test-OutlookWebServices results:

        1013 Error: When contacting https://activesync.domain.com/Rpc received the error: The remote server returned an error: (500) Internal Server Error.
        1017 Error: [EXPR]-Error when contacting the RPC/HTTP service at https://activesync.domain.com/Rpc. The elapsed time was 0 milliseconds.

    https://www.testexchangeconnectivity.com testing result:

        Checking the IIS configuration for client certificate authentication.
        Client certificate authentication was detected.
        Additional Details: Accept/Require client certificates were found. Set the IIS configuration to Ignore Client Certificates if you aren't using this type of authentication.

    Environment: Windows Server 2008 (HT-CAS), Exchange Server 2007 SP1, TMG 2010 Standard, Outlook 2007 SP2 clients. Any kind of help would be greatly appreciated. Thanks.
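    Since the remote test still detects client-certificate authentication, a hedged way to inspect and force the setting at the applicationHost level on the CAS is appcmd; the Rpc virtual directory name follows the output above, and sslFlags:None (no SSL requirement, ignore client certificates) matches SSL offloading at TMG, so verify against your topology before applying:

        %windir%\system32\inetsrv\appcmd list config "Default Web Site/Rpc" /section:system.webServer/security/access

        rem SslNegotiateCert or SslRequireCert in the output would explain the test result
        %windir%\system32\inetsrv\appcmd set config "Default Web Site/Rpc" /section:system.webServer/security/access /sslFlags:None /commit:apphost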

    Read the article

  • Can't install NPM after installing Node on EC2 Linux instance?

    - by frequent
    I'm making my first attempt at getting a Node server set up on an Amazon EC2 Linux instance, and I think I've made it quite far. The first problem I ran into was that the connection kept timing out while building Node, so I needed three attempts until I got this:

        LINK(target) /home/ec2-user/node/out/Release/node: Finished
        touch /home/ec2-user/node/out/Release/obj.target/node_dtrace_header.stamp
        touch /home/ec2-user/node/out/Release/obj.target/node_dtrace_provider.stamp
        touch /home/ec2-user/node/out/Release/obj.target/node_dtrace_ustack.stamp
        touch /home/ec2-user/node/out/Release/obj.target/node_etw.stamp
        make[1]: Leaving directory `/home/ec2-user/node/out'
        ln -fs out/Release/node node

    which tells me Node is done, although I'm not sure it is also working as it should. Following this, this and this tutorial, I'm now stuck at installing npm. I think I first cloned into the wrong folder, which always gave me error 127, but even when doing this:

        cd ~
        git clone git://github.com/isaacs/npm.git
        cd npm
        sudo -s
        PATH=/usr/local/bin:$PATH make install

    I'm still getting this:

        # after cloning #
        make[1]: Entering directory `/root/npm'
        node cli.js install
        bash: node: command not found
        make[1]: *** [node_modules/.bin/ronn] Error 127
        make[1]: Leaving directory `/root/npm'
        make: *** [man/man3/start.3] Error 2

    Question: since I'm pretty much a newbie at everything I'm trying here, can someone please tell me what I'm doing wrong and how to get npm to install? Also, in case I cloned into the wrong folder, is there a way to remove the "false clone", or is nothing written to disk until I call make install, so I don't need to worry? Thanks for helping out!
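    Reading the two logs together: npm's Makefile runs node cli.js install, and "bash: node: command not found" says the freshly built binary (left at /home/ec2-user/node/node by the ln step above) is not on root's PATH. A hedged sketch of the usual fix - install Node system-wide first, then retry npm:

        # install the node binary under /usr/local so root's PATH can find it
        cd ~/node
        sudo make install

        # verify before retrying npm
        which node && node -v

        cd ~/npm
        sudo -s
        PATH=/usr/local/bin:$PATH make install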

    Read the article

  • Can't install py-subversion on FreeBSD 8.2

    - by max taldykin
    I'm trying to install the Python bindings for Subversion:

        # cd /usr/ports/devel/py-subversion
        # make
        ===>  Patching for py26-subversion-1.6.15
        ===>  Applying extra patch /usr/ports/devel/py-subversion/../../devel/subversion/files/bindings-patch-subversion--bindings--swig--perl--native--Makefile.PL.in
        cannot open /usr/ports/devel/py-subversion/../../devel/subversion/files/bindings-patch-subversion--bindings--swig--perl--native--Makefile.PL.in: No such file or directory
        *** Error code 2

    Indeed, there is no such file in subversion/files, but there is a file patch-subversion::bindings::swig::perl::natives::Makefle.PL.in (with colons instead of hyphens). After renaming it and rerunning make, I got the same error again:

        ===>  Patching for py26-subversion-1.6.15
        ===>  Applying extra patch /usr/ports/devel/py-subversion/../../devel/subversion/files/bindings-patch-subversion--bindings--swig--perl--native--Makefile.PL.in
        cannot open /usr/ports/devel/py-subversion/../../devel/subversion/files/bindings-patch-subversion--bindings--swig--perl--native--Makefile.PL.in: No such file or directory
        *** Error code 2

    But now there is nothing like bindings-* in subversion/files at all. So the question is: why is this happening, and how can I install py-subversion? PS: FreeBSD is running on a virtual private server, so I think it has been patched somehow.

        # uname -a
        FreeBSD mskhug.ru 8.2-PRERELEASE FreeBSD 8.2-PRERELEASE #0 r50: Thu Feb 24 10:15:34 IRKT 2011 [email protected]:/root/src/sys/amd64/compile/DEBUG amd64
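    A hedged first step when a port's files/ directory and its patch list disagree (for example on a provider-modified VPS image): replace the possibly stale or locally modified ports tree with a pristine one before rebuilding. portsnap ships with FreeBSD 8.x:

        # fetch and install a pristine ports tree (first run uses 'extract'; later runs use 'update')
        portsnap fetch extract

        cd /usr/ports/devel/py-subversion
        make clean
        make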

    Read the article

  • Can I change the user id of a user on one Linux server to match another server in /etc/passwd?

    - by user76177
    I have a Rails application on a virtual machine (RHEL 6) whose database is on dedicated hardware (also RHEL 6). The app server has an NFS directory from the DB server mounted and accessible. It needs to write images uploaded via the app to that server, and background processes on the DB server need to read and write the same directory, as they perform resizing operations on the uploaded files. Right now none of this works, because the user IDs differ between the two systems. I only need this to work for this one application, so it is way too much overhead to put an LDAP system in place. Can I simply change the user ID of this one user on one of the systems, or will that cause mass chaos? UPDATE: The fix worked, at least on local devices. Unfortunately, the device I have mounted from the main DB server still thinks my user ID is 502 instead of 506. Do I need to remount that device, or is there an NFS daemon I can stop and restart to refresh it?
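    For reference, a sketch of the UID change under discussion - the username is a placeholder, and files owned by the old UID must be re-chowned afterwards, since ownership is stored numerically on disk:

        # change the account's UID from 502 to 506
        usermod -u 506 railsapp

        # re-own anything still carrying the old numeric UID (local filesystems only)
        find / -xdev -user 502 -exec chown -h 506 {} \;

    NFS carries numeric UIDs on every request, so processes or login sessions started before the change will still present UID 502 until they are restarted; a fresh login is usually enough, no remount required.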

    Read the article

  • How do I effectively use WinSCP on my GoDaddy Dedicated Hosting

    - by Scott
    After being told that a Virtual Private Server would not fit the scope of my project, I have timidly entered the world of dedicated hosting. Unfortunately, this is forcing me to learn the basics of being a Linux server admin. GoDaddy provides a master account for the server; when you use SSH, they want you to use su to switch to the root user. So far, I have been able to do everything I needed via the command line as this root user. However, now I need to upload files to my server. I'm used to using WinSCP to upload files. I can use my general server account to view the files, but when I try to drag or create files it says that I cannot, because I do not have permission to do so. I have researched the WinSCP documentation, and it seems that this su function is beyond the scope of the program. How am I to grant myself access to upload these files using SSH? Should I create a user with the proper permissions? I'm happy to do this, but so far I have not been able to make sense of what I have found online. I'm going to try to move forward, but any help and/or insight is appreciated.
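    A common pattern matching the "user with the proper permissions" idea, sketched with placeholder names (scott, /var/www/html) since the post doesn't give real ones: put the upload account into a group that has write access to the web root.

        # as root (after su)
        groupadd webedit
        usermod -aG webedit scott
        chgrp -R webedit /var/www/html
        chmod -R g+w /var/www/html
        # make new subdirectories inherit the group
        find /var/www/html -type d -exec chmod g+s {} \;

    After logging in again with WinSCP as that user, dragging files into /var/www/html should succeed without needing su.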

    Read the article
