Search Results

Search found 13859 results on 555 pages for 'non functional'.

  • My Linux server "Number of processes created" and "Context switches" are growing incredibly fast

    - by Jorge Fuentes González
    I have some strange behaviour on my server. It is an OpenVZ VPS (I think it is OpenVZ, because /proc/user_beancounters exists and df -h returns a /dev/simfs drive; ifconfig also returns venet0). When I do cat /proc/stat, I can see that about 50-100 processes are created each second and about 800k-1200k context switches happen! All of that with the server completely idle, no traffic and no programs running. Top shows 0 load average and 100% idle CPU. I've closed all non-needed services (httpd, mysqld, sendmail, nagios, named...) and the problem still happens.
    I also run ps -ALf each second and I don't see any changes; only a new ps process is created each time, and its PID is just the previous one + 1, so new processes are not really being created. I therefore thought the process growth in /proc/stat must be threads (yes, it seems the processes counter in /proc/stat includes thread creation too, as this states: http://webcache.googleusercontent.com/search?q=cache:8NLgzKEzHQQJ:www.linuxhowtos.org/System/procstat.htm&hl=es&tbo=d&gl=es&strip=1). I've changed to the /proc dir and done cat [PID]/status for every PID listed by ls (including the kernel ones), and in no process are voluntary_ctxt_switches or nonvoluntary_ctxt_switches growing at anywhere near the speed /proc/stat shows (just a few tens per second); the Threads count stays the same as well. I've run strace -p PID against every process too, so I can see whether any process is creating threads or something, but the only process with a bit of movement is ssh, and that movement is read/write operations caused by the data being sent to my terminal.
    After that, I ran vmstat -s and saw that forks grow at the same speed as processes in /proc/stat. As http://linux.die.net/man/2/fork says, each fork() creates a new PID, but my server's PIDs are not growing! The last thing I can think of is that the process data shown by /proc/stat and vmstat -s is shared with all the other VPSs hosted on the same machine, but I don't know if that is correct... If someone can throw some light on this I would be really grateful.
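
    A minimal sketch of how the fork rate can be sampled directly (the field names come from /proc/stat; the one-second interval is an arbitrary choice):

        # sample the cumulative fork counter twice, one second apart
        f1=$(awk '/^processes/ {print $2}' /proc/stat)
        sleep 1
        f2=$(awk '/^processes/ {print $2}' /proc/stat)
        echo "forks in the last second: $((f2 - f1))"
        # the system-wide context-switch counter can be watched the same way
        awk '/^ctxt/ {print $2}' /proc/stat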

  • Server 2012, Jumbo Frames - should I expect problems?

    - by TomTom
    OK, this might sound stupid - but is there any downside in practice to just enabling jumbo frames? From what I understand: any switch or Ethernet adapter that sees a jumbo frame it cannot handle will just drop it. TCP is not a problem, as the maximum segment size is negotiated during connection setup. UDP is a theoretical problem, as a server may just send a LARGE UDP packet that gets dropped on the way. Practically though, as UDP is packet based, I do not really think any software WOULD send a UDP packet larger than 1500 bytes net without app-level configuration changes - at least this is how I do my programming, as it is quite hard to get a decent MTU size for that without testing yourself, so in programming you fall back to max 1500-byte packets.
    The network in question is a standard small business network - we upgraded from an unmanaged 24-port switch to a 52-port switch with 4 10G ports (Netgear - quite cheap) and will move a file server to 10G, which will also serve iSCSI. All my equipment at the Ethernet level can handle a minimum of 9000 bytes, and due to local firewalls I really want larger packets (less firewall processing), but the network is also NAT'ed to the Internet. On top of that, different machines move around (download) large files (multi-gigabyte range) quite often for processing.
    The question is - can I expect problems if I just enable jumbo frames? Again, this is not total ignorance - I just don't see programs sending UDP packets larger than 1500 bytes (if that is a practical problem please tell me), and for TCP the MTU is negotiated anyway. If there is a problem I can move to a dedicated VLAN, but this has its own share of problems, as basically most workstations would then have to be on both VLANs.
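
    A minimal way to sanity-check a jumbo-frame path before enabling it everywhere is a don't-fragment ping at the target size (addresses are placeholders; 8972 = 9000 minus 28 bytes of IP/ICMP headers):

        # from a Linux host:
        ping -M do -s 8972 -c 3 192.168.1.10
        # from a Windows host:
        ping -f -l 8972 192.168.1.10
        # if these fail while a 1472-byte payload works, something in the path is not passing 9000-byte frames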

  • How private is the Opera Turbo feature of Opera?

    - by Marcus V
    If I'm using Opera with the Opera Turbo feature turned on (always, not set to "automatic"), can anyone see what sites I'm visiting (except Opera, of course...)? Opera Turbo uses a proxy server, so it should work that way, but as a not-very-technical person I'm not sure. Why do I want this? Well: nowadays, at least in my country, more and more (legal) open Wi-Fi connections are available. In those environments I would like to have more privacy protection. I don't mind if they can see my IP address; I just want to hide as much as I can of what I am doing. BTW: I don't care that they can see the data transferred; it doesn't have to be that secret. I only want to hide which Internet sites I request. BTW: I know that Opera Turbo only works with non-secure websites (HTTP), but that's fine for me; I only want it to work with those sites. BTW: I don't need this for illegal purposes; I only want it for privacy reasons.

  • Hyper-V on 2012 R2: starting a gen1 VM causes the host to freeze up

    - by sputnik
    I've searched a lot to resolve the following issue, but nothing has helped. My problem is that starting up a first-generation VM locks up the whole host; only a hard reset helps. Second-generation VMs start and run perfectly. The freeze happens with 3 different VMs (FreeBSD, Ubuntu, Windows Server 2008 R2), while Windows 8.1 in a second-gen configuration works perfectly. I'm using this PC mainly as a workstation. No event log errors or dumps are generated.
    My system: Windows Server 2012 R2, FX-8350 (not overclocked), ASRock 870 Extreme R2 (crappy board IMHO), 32GB DDR3 1866 running at 1600 (my motherboard, despite its claimed "support" for 1866 RAM, won't run it at full speed), 120GB SSD, 4.5TB Storage Spaces device. I don't think it's due to my system, because VMware Workstation was running without problems. Did I forget to configure something? Any help is appreciated.
    P.S.: Even deactivating C1E, C6 and C&Q didn't work.
    P.P.S.: With no virtual network adapter set, the system still locks up. Creating a first-gen VM without any HDDs or network and launching it works; attaching a boot DVD causes the host to freeze. The host freezes as the gen1 VM begins to boot, no matter whether from DVD or HDD.

  • Keyboard's media keys are blocked by a program

    - by Mike Hanson
    I've got a Microsoft Natural Ergonomic Keyboard 4000. In addition to the regular keys, it's also got keys for Web/Home, Search, Mail, Favorites (5), Calculator, and Media functions (Mute, Volume Up/Down, and Play/Pause). Everything works most of the time, and the exception is rather odd. I use a programming system called Clarion. When that has focus, the Media keys don't work (all the others still do). I've also discovered that programs I create using Clarion also block the media keys (only when they have focus). This indicates that it's probably something in Clarion's Run-Time Library (RTL) that's causing the trouble. The keys will work if I click on a non-Clarion window before hitting the media key, but that's an undesirable hassle. The odd thing is that I have many colleagues with the same keyboard, and they have no problem.
    When I recently upgraded from Vista Professional to Win7 Ultimate, I noticed that various things "appear" differently. For example, on my old system, when I changed or muted the volume, the volume-bar visualization always appeared at the bottom right of the screen. Now it doesn't appear in certain programs, even when the keys work. This suggests an order of precedence for visual elements. I'm fairly certain a similar order of precedence exists for keyboard hooks. Depending on how the hooks are defined, and the order in which they're applied, it would seem that sometimes the IntelliType drivers don't see the media keystrokes. The Media keys probably behave differently from the rest of the "special" keys, because they are more of a standard across all keyboards, so perhaps they are handled by a different driver hooking mechanism.
    Does anyone have any suggestions for how I might fix this problem? Is there some way to change the order of hooks? Delay the loading of the IntelliType driver? Thanks in advance!

  • Simulating audio playback on a headless Linux server

    - by afro
    Hi people, we have a headless Linux server (Debian 5) we use for running integration tests of our web-page code. Among these tests are some implemented using Selenium, which practically simulates a user browsing our pages and clicking on things. One of these tests is failing now, because it involves starting a Flash-based audio player and checking whether the progress bar gets displayed properly. The reason this test fails is that there is no way to play the audio and no sound card in the machine, which has simple web-server hardware. So my question is: is there a simple way of giving a program the impression that its audio output is being processed and playback is taking place? I don't have to record the playback, or redirect it or anything like that - just a dummy soundcard, like the dummy X server we are using, which does not actually need to display anything. I have tried using JACK, but it's too complicated, and the documentation does not even answer this very simple question. I also installed ALSA on the server; it 'pretends' to run, but when a program tries to play audio, it just spews error and debug information to do with the non-existence of a soundcard. It would be really awesome if one of you has a simple answer to this question. Cheers, Ulas
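
    One common approach is ALSA's dummy soundcard driver, which gives programs a playback device that silently discards everything; a rough sketch (module, package and sample-file paths assume a stock Debian kernel with alsa-utils installed):

        modprobe snd-dummy                                # load the dummy ALSA card
        aplay -l                                          # should now list a "Dummy" device
        aplay /usr/share/sounds/alsa/Front_Center.wav     # test "playback" (output is discarded)
        echo snd-dummy >> /etc/modules                    # load it on every boot (Debian)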

  • OS X borked; need to back up outside of Time Machine

    - by rlbgator
    Quick background: iMac G5 (the white one; 4 years old?) running Leopard 10.5.something. Time Machine started failing on me, and every time I touch the Finder, things beachball like crazy. Booting from the install disk and then using Disk Utility to "Repair Disk" also fails. I'm left with the conclusion that I have a corrupt file somewhere important that is (i) keeping TM from working and (ii) messing with basic functionality. I am not (yet) savvy enough in OS X to know which logs to look in, or how to decipher them, but 'corrupt file' seems to be the likely case, based on my reading of apple.com forum threads. So I think I need to back up outside of Time Machine, then install OS X fresh on a new drive (or maybe SpinRite the current drive?). I'm able to attach a (non-Time Machine) external USB drive, so I dragged all 3 Users' folders to that... am I done backing up? Am I going to have a massive permissions problem trying to put things back together after a reinstall? Thanks for reading.
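
    For the backup itself, a Finder drag can lose ownership and skip protected files; a hedged alternative using the rsync that ships with Leopard (the destination path is an example):

        # -a preserves permissions/ownership/timestamps, -E keeps extended attributes and resource forks
        sudo rsync -aE /Users/ /Volumes/BackupUSB/Users-backup/
        # system-wide settings may be worth grabbing too (example path):
        sudo rsync -aE /Library/Preferences /Volumes/BackupUSB/Library-Preferences/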

  • DNS issue on Fedora 12? wget wordpress.org fails where wget www.google.com works

    - by Tom Auger
    I'm administering a Fedora 12 box, but am quite new to networking specifics. Recently one of our WordPress apps hosted on our server has stopped being able to perform its auto-update or auto-download of plugins. Investigating further, I have tried the following:
        $ wget wordpress.org
        --2010-12-17 11:26:50-- http://wordpress.org/
        Resolving wordpress.org... failed: Temporary failure in name resolution.
        wget: unable to resolve host address 'wordpress.org'
    Whereas:
        $ wget www.google.com
        --2010-12-17 11:27:26-- http://www.google.com/
        Resolving www.google.com... 74.125.226.82, 74.125.226.84, 74.125.226.80, ...
        Connecting to www.google.com|74.125.226.82|:80... connected.
        HTTP request sent, awaiting response... 302 Found
        Location: http://www.google.ca/ [following]
        --2010-12-17 11:27:26-- http://www.google.ca/
        Resolving www.google.ca... 173.194.32.104
        Connecting to www.google.ca|173.194.32.104|:80... connected.
        HTTP request sent, awaiting response... 200 OK
        Length: unspecified [text/html]
        Saving to: 'index.html.4'
        [ <=> ] 9,079 --.-K/s in 0.02s
        2010-12-17 11:27:26 (462 KB/s) - 'index.html.4'
    Interestingly:
        $ ping wordpress.org
        PING wordpress.org (72.233.56.138) 56(84) bytes of data.
        64 bytes from wordpress.org (72.233.56.138): icmp_seq=1 ttl=50 time=81.5 ms
        64 bytes from wordpress.org (72.233.56.138): icmp_seq=2 ttl=50 time=67.3 ms
        ^C
        --- wordpress.org ping statistics ---
        2 packets transmitted, 2 received, 0% packet loss, time 1783ms
        rtt min/avg/max/mdev = 67.361/74.448/81.536/7.092 ms
    and
        $ nslookup wordpress.org
        Server: 192.168.2.1
        Address: 192.168.2.1#53
        Non-authoritative answer:
        Name: wordpress.org
        Address: 72.233.56.138
        Name: wordpress.org
        Address: 72.233.56.139
    nscd has been stopped and flushed. iptables appear to be clean. At this point I have exhausted my limited abilities to diagnose the issue. Can anyone suggest a resolution path?
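
    wget resolves names through glibc (nsswitch.conf plus resolv.conf), while nslookup queries a DNS server directly; a short sketch of checks that exercise the same resolver path wget uses (dig comes from bind-utils):

        cat /etc/resolv.conf                 # nameservers glibc will use
        grep ^hosts /etc/nsswitch.conf       # lookup order (files, dns, mdns, ...)
        getent hosts wordpress.org           # same code path as wget's resolver
        dig wordpress.org @192.168.2.1       # query the configured server directly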

  • Why can't I copy install.wim from a Windows 7 ISO to USB (in a Linux environment)?

    - by fastreload
    I need to make a bootable USB disk from a Windows 7 ISO. My USB stick is formatted to NTFS and the ISO is not corrupt. I can copy install.wim elsewhere, but I cannot copy it to the USB stick. I even tried rsync.
    rsync error:
        rsync: writefd_unbuffered failed to write 4 bytes to socket [sender]: Broken pipe (32)
        rsync: write failed on "/media/52E866F5450158A4/sources/install.wim": Input/output error (5)
        rsync error: error in file IO (code 11) at receiver.c(322) [receiver=3.0.8]
    Stat for install.wim:
        File: `X15-65732 (2)/sources/install.wim'
        Size: 2188587580 Blocks: 4274600 IO Block: 4096 regular file
        Device: 801h/2049d Inode: 671984 Links: 1
        Access: (0664/-rw-rw-r--) Uid: ( 1000/ umur) Gid: ( 1000/ umur)
        Access: 2011-10-17 22:59:54.754619736 +0300
        Modify: 2009-07-14 12:26:40.000000000 +0300
        Change: 2011-10-17 22:55:47.327358410 +0300
    fdisk -l:
        Disk /dev/sdd: 8103 MB, 8103395328 bytes
        196 heads, 32 sectors/track, 2523 cylinders, total 15826944 sectors
        Units = sectors of 1 * 512 = 512 bytes
        Sector size (logical/physical): 512 bytes / 512 bytes
        I/O size (minimum/optimal): 512 bytes / 512 bytes
        Disk identifier: 0xc3072e18
        Device Boot Start End Blocks Id System
        /dev/sdd1 * 32 15826943 7913456 7 HPFS/NTFS/exFAT
    hdparm -I /dev/sdd:
        SG_IO: bad/missing sense data, sb[]: 70 00 05 00 00 00 00 0a 00 00 00 00 20 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
        ATA device, with non-removable media
        Model Number: UF?F?A????U]r???U u??tF?f?`~
        Serial Number: ?@??~|
        Firmware Revision: ????V?
        Media Serial Num: $I?vnladip raititnot baelErrrol aoidgn
        Media Manufacturer: o eparitgns syetmiM
        Standards: Used: unknown (minor revision code 0x0c75) Supported: 12 8 6 Likely used: 12
        Configuration: Logical max current cylinders 17218 0 heads 0 0 sectors/track 128 0 -- Logical/Physical Sector size: 512 bytes
        device size with M = 1024*1024: 0 MBytes
        device size with M = 1000*1000: 0 MBytes
        cache/buffer size = unknown
        Capabilities: IORDY(may be)(cannot be disabled) Queue depth: 11 Standby timer values: spec'd by Vendor
        R/W multiple sector transfer: Max = 0 Current = ?
        Recommended acoustic management value: 254, current value: 62
        DMA: not supported PIO: unknown
        * reserved 69[0] * reserved 69[1] * reserved 69[3] * reserved 69[4] * reserved 69[7]
        Security: Master password revision code = 60253 not supported not enabled not locked not frozen not expired: security count not supported: enhanced erase
        71112min for SECURITY ERASE UNIT. 172min for ENHANCED SECURITY ERASE UNIT.
        Integrity word not set (found 0xaa55, expected 0x80a5)
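
    Given the garbage hdparm is reporting, it may be worth testing the stick itself before blaming the copy; a rough sketch (the device name is taken from the fdisk output above, and the -w test destroys all data on it):

        umount /dev/sdd1
        badblocks -s -v /dev/sdd1            # non-destructive read test
        badblocks -w -s -v /dev/sdd1         # destructive read/write test, wipes the stick
        dmesg | tail -n 50                   # look for USB / I/O errors logged during the copy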

  • HD video editing system with Truecrypt

    - by Rob
    I'm looking to do hi-def video editing and transcoding on an unencrypted standard partition, with TrueCrypt on the system partition for sensitive data. I'm aiming to keep certain data private but still have performance where needed. Goals: maximum, unimpacted performance for hi-def video editing (encryption of the video is not required); encrypt the system partition, using TrueCrypt, for web/email privacy etc. in the event of loss. In other words, I want to selectively encrypt the hard drive - i.e. make the system partition encrypted but not impact the maximum performance that would otherwise be available to me for hi-def/HD video editing. The thinking is to use an unencrypted partition for the video and set the video applications to point at that. Assuming they would use that partition only as their workspace and not the encrypted system partition, I should expect no performance hit. Would I be correct? I guess it might depend on the application - whether the app is hard-wired to always use the system partition for temporary storage during editing and transcoding, or whether it always has to be installed on the C: system partition. So some real data on how various apps behave in that respect would be useful, e.g. Adobe, CyberLink, Nero, etc. I have an Intel i7 quad-core (8 threads) 1.6GHz (up to 2.8GHz with Turbo Boost), 4GB RAM, 7200rpm SATA, NVIDIA HP laptop. I've read the excellent posting about the general performance impact of TrueCrypt, but the benchmarks weren't specific enough for my needs, where I'm dealing with HD video and using a non-encrypted partition to maintain maximum performance.

  • IPSec tunnelling with ISA Server 2000...

    - by Izhido
    Believe it or not, our corporate network still uses ISA Server 2000 (on a Windows Server 2003 machine) to enable and control Internet access to and from it. I was recently asked to configure that ISA Server to create a site-to-site VPN for a new branch in an office about 25 km away. The idea is basically to enable not only computers but also Palm devices (WiFi-enabled, of course) to see other computers at both sites. I was also told that a simple VPN-enabled wireless AP/router (in this case, a Cisco WRV210 unit) should be enough to establish communications with the main office. To be fair, the router looks easy to configure; it was confusing at first, but further understanding of how site-to-site VPNs work cleared all doubts about it.
    Now I need to make modifications to our ISA Server so that it recognizes the newly installed and configured "remote" VPN site. Thing is, either my Googling skills are pathetically horrible, or there doesn't seem to be much (or any) information about how to configure an ISA Server 2000 for this purpose. Lots of stuff on 2004, of course; also, I think I saw something for 2006. But nothing I could find about 2000. Reading about 2004, it seems that the only way I can do site-to-site with a Cisco router (read: a non-ISA-Server machine) is through something they call an "IPSec tunnel". Fair enough. However, I can't figure out for the life of me how I could even start to find, let alone configure, such a thing. Do you happen to know how to do IPSec tunnelling on an ISA Server 2000, so I can connect to a Cisco WRV210 VPN-enabled router and build a site-to-site VPN for both networks? Or is this not possible at all? (Meaning I would have to change something in this configuration to make it work...)

  • replacing the default console emulator under Windows XP

    - by Gilles
    How can I replace the default program providing console windows under Windows XP? I know of alternative programs, and I have a shortcut to start cmd.exe in Console2. But now I want console applications to start in Console2 rather than the default console program, even when I have no control over the program that starts the console application. (I.e. a non-console program starts consoleapp.exe, and I can't change it to start Console2 instead, but I still want the application to be started inside a new instance of Console2.) (Note that I want to replace the console itself, that is, the window in which console (i.e. text mode) applications run. And I must be able to run arbitrary, unmodified console applications: a substitute for a specific console program such as Cmd won't do me any good.) EDIT: So what I'm after is a CSRSS replacement, which leads to OT: I want to know when Microsoft is going to make a decent CSRSS replacement. Not being able to adjust the width of a "terminal" by resizing the window is a complete joke. Go download the ISE already. (It's included in Win7/2008R2.) But as far as I understand this ISE is an environment for Powershell, not a general console emulator.

  • Running docker in VPC and accessing container from another VPC machine

    - by Bogdan Gaza
    I'm having issues running Docker in an AWS VPC. Here is my setup: I've got two machines running in the VPC, 10.0.100.150 and 10.0.100.151, both with elastic IPs assigned and both running in the same Internet-enabled subnet. Let's say I'm running a web server that serves static files in a container on the 10.0.100.150 machine; the container's IP is 172.17.0.2 and its port 8111 is forwarded to port 8111 on the machine. If I try to access the static files from my local machine (or another non-VPC machine; I also tried an EC2 instance not running in the VPC) it works flawlessly. If I try to access the files from the other machine (10.0.100.151) it hangs. I'm using wget to pull the files. I tried to debug it with tcpdump and ngrep, and what I have seen is that the request reaches the container. If I ngrep on the host machine I see the requests going in but no response coming back; if I ngrep in the container I see the requests going in and the response going back. I've tried multiple iptables setups (with postrouting enabled, with manually forwarded ports, etc.) but with no success. Help in any way - even debugging directions - would be much appreciated. Thanks!
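
    A hedged checklist of things to look at on the Docker host (addresses and the port come from the question; the security-group and source/dest-check items are AWS-side assumptions):

        sysctl net.ipv4.ip_forward              # must be 1 for NAT'd container traffic
        iptables -t nat -L -n | grep 8111       # the DNAT rule created by publishing the port (-p 8111:8111)
        iptables -L FORWARD -n --line-numbers   # any DROP/REJECT rules ahead of the DOCKER chain?
        tcpdump -ni eth0 port 8111              # compare what arrives from 10.0.100.151 vs. from outside
        # on the AWS side: the security groups between the two instances, and each
        # instance's source/dest check, can also silently drop this traffic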

  • How to diagnose remote assistance problem

    - by cantabilesoftware
    I have a long-standing issue with Remote Assistance between a home and a work PC. My wife and I both use MSN Messenger, and I used to be able to control her PC at home via MSN Remote Assistance. Some time ago, however, this stopped working and I don't know why. We're both running the latest version of MSN Live Messenger and I've checked that the appropriate firewall ports are open, but it still doesn't work and MSN just says something useless like "The person isn't responding". Any suggestions for how I can diagnose this?
    More info: I just tried direct Remote Desktop between the work PC and the home PC and it works fine - so I presume all the appropriate ports are open. Just Remote Assistance doesn't work. I'd like to get RA working so I can demonstrate how to do things remotely: with Remote Desktop the person at the other end gets booted off and can't see, while with Remote Assistance they can follow along step by step. Some comments below suggest using other solutions, which is fine and they do work, but there must be a way to diagnose RA and get it working.
    Experimenting with this some more, the notebook that refused to connect from work today works fine for Remote Assistance when I bring it home, so I guess this must be a problem with our network configuration at work. I've checked that 3389 is open in the firewall on the office router, and Remote Desktop works both ways... just not Remote Assistance. I've read that Remote Assistance won't work if client and server are both behind non-UPnP/NAT routers; if one has UPnP it's supposed to work. The office router doesn't have UPnP enabled, but my home one does. I've also scoured the event logs at both ends - nothing noteworthy (unless I'm looking in the wrong spot).
    Note (copied from comment): I've just tried ShowMyPC, which is based on VNC, and it works, but I'd still like to figure out what's wrong with RA - it's just bugging me. The question is only about Remote Assistance; no need to propose solutions based on other programs. [/edit by Gnoupi]

  • Maintaining "Portability" Between Linux and Windows 7

    - by lokheart
    I use the following approach on my office Windows 7 machine to maintain my "portability" when disaster strikes and I need to switch computers without the luxury of time to reinstall all my programs on the new PC:
    - The majority of the programs I use are portable, mostly from portableapps.com (Notepad++, GIMP, even R). I extract them and store them in a folder in My Documents, in a structure similar to the default PortableApps layout when installed to a thumb drive.
    - Only a few programs have no portable version, and those I install as usual.
    - All of my working files are stored in a folder in My Documents.
    - I regularly back up everything using SyncBack, because it can keep versions of my backups, and the backup is stored on a portable drive.
    When I need to switch computers, the operation is relatively simple for me: I move the two folders mentioned above into the My Documents folder of the new PC, install those few "non-portable" programs, and I'm almost done; minor hiccups can be solved by reinstalling the portable app into the folder. Overall it is a smooth process.
    I would like to maintain the same degree of "portability" on my home Linux desktop (Ubuntu or Mint, I'm still deciding); that is, if my Linux install crashes and I need to reinstall it, all I should need to do is move the two folders back into the new Linux system, and most of my work will be almost ready to be worked on again. But I don't know how to find a Linux alternative to PortableApps. Being new to Linux, can anyone tell me whether this is possible in Linux?
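
    On the Linux side, the SyncBack role can be filled by rsync with a dated backup directory; a minimal sketch (all paths are examples):

        rsync -a --delete \
              --backup --backup-dir=/media/backupdrive/versions/$(date +%F) \
              ~/portable-apps ~/work-files \
              /media/backupdrive/current/
        # files that were changed or deleted since the last run are moved into
        # the dated versions/ directory instead of being lost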

  • What steps to take when CPAN installation fails?

    - by pythonic metaphor
    I have used CPAN to install Perl modules on quite a few occasions, but I've been lucky enough to just have it work. Unfortunately, I was trying to install Thread::Pool today and one of the required dependencies, Thread::Conveyor::Monitored, failed its tests:
        Test Summary Report
        -------------------
        t/Conveyor-Monitored02.t (Wstat: 65280 Tests: 89 Failed: 0)
        Non-zero exit status: 255
        Parse errors: Tests out of sequence. Found (2) but expected (4)
        Tests out of sequence. Found (4) but expected (5)
        Tests out of sequence. Found (5) but expected (6)
        Tests out of sequence. Found (3) but expected (7)
        Tests out of sequence. Found (6) but expected (8)
        Displayed the first 5 of 86 TAP syntax errors.
        Re-run prove with the -p option to see them all.
        Files=3, Tests=258, 6 wallclock secs ( 0.07 usr 0.03 sys + 4.04 cusr 1.25 csys = 5.39 CPU)
        Result: FAIL
        Failed 1/3 test programs. 0/258 subtests failed.
        make: *** [test_dynamic] Error 255
        ELIZABETH/Thread-Conveyor-Monitored-0.12.tar.gz
        /usr/bin/make test -- NOT OK
        //hint// to see the cpan-testers results for installing this module, try:
        reports ELIZABETH/Thread-Conveyor-Monitored-0.12.tar.gz
        Running make install
        make test had returned bad status, won't install without force
        Failed during this command:
        ELIZABETH/Thread-Conveyor-Monitored-0.12.tar.gz: make_test NO
    What steps do you take to start seeing why an installation failed? I'm not even sure how to begin tracking down what's wrong.
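
    A rough sketch of how the failing distribution can be re-run by hand for more detail (the distribution name is taken from the output above):

        cpan> look ELIZABETH/Thread-Conveyor-Monitored-0.12.tar.gz   # opens a shell in the unpacked build dir
        perl Makefile.PL && make
        make test TEST_VERBOSE=1              # full per-test output instead of the summary
        prove -bv t/Conveyor-Monitored02.t    # run just the failing test file
        # if the failure turns out not to affect your use case, a force install is possible:
        cpan> force install Thread::Pool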

  • Cannot remove storage account because of lease, but I already deleted the server [closed]

    - by djechelon
    I recently created a temporary virtual server on Azure, then deleted it. I wanted to delete the storage account associated with it because I didn't need it any more. The problem is that the VHD file is still associated with a non-existent virtual machine! If I try to delete the VHD from Virtual Machines\Disks, the Delete button is greyed out and the table tells me it's still associated with the old VM. If I go to storage administration and try to delete the blob from the vhds/ directory, I'm told there is an active lease. I've read on the Azure forums that in these cases one should try to force-release the lease on the blob. I followed their instructions and downloaded their script, but running it failed: the script detected that the disk is associated with a virtual machine and can't be deleted. The problem is that I'm 1000000% sure that I already deleted the VM. In fact, I currently have only a single VM, which has its own HD and is up and running fine! What can I do to delete that storage account, which is probably sucking money from my pocket?

  • Apache-style multiviews with Nginx

    - by Kenn
    I'm interested in switching from Apache/mod_php to Nginx for some non-CMS sites I'm running. The sites in question are either completely static HTML files or simple PHP, but the one thing they have in common is that I'm currently using Apache's mod_negotiation to serve them up without file extensions. I'm not concerned with actual content negotiation; I'm using this just so I don't have to use file extensions in my URLs. For example, the file at /info/contact.php is accessed via a URL of just /info/contact The actual file is a .php file in that location, but I don't use the extension in the URLs. This gives me slightly shorter, cleaner URLs and also doesn't expose what's essentially a meaningless implementation detail to the user. In Apache, all this takes is enabling mod_negotiation and adding +MultiViews to the Options for the site. In Nginx I gather I'll be rewriting somehow but being new to Nginx, I'm not exactly sure how to do it. These sites are currently working fine proxied from Nginx to Apache, but I'd like to try running them solely with Nginx/fastcgi. They work fine this way as long as I'm using the extensions, so the fastcgi aspect is working great. My concern now is just with removing those extensions. It's important to keep in mind that the filename is not always in the URL, in the case of subdirectories. That is, /foo/bar should look for /foo/bar.php or /foo/bar/index.php /foo/ should look for /foo/index.php Is there a simple way to achieve this with Nginx or should I stick with proxying to Apache?
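
    In Nginx the usual substitute for MultiViews is try_files plus a fallback that appends .php; a minimal sketch for the server block (the fastcgi_pass address assumes php-fpm listening on 127.0.0.1:9000):

        location / {
            index index.php;
            try_files $uri $uri/ @extensionless;
        }
        location @extensionless {
            rewrite ^(.*)$ $1.php last;
        }
        location ~ \.php$ {
            try_files $uri =404;
            include fastcgi_params;
            fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
            fastcgi_pass 127.0.0.1:9000;
        }

    With this layout /foo/bar falls through to /foo/bar.php, and /foo/ is served by /foo/index.php via the index directive.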

  • Unable to record using JMeter: [help me very urgent]

    - by krish
    Hi, I am trying to record an HTTP web page using JMeter 2.3.3. I have set up the JMeter proxy and tried, but it didn't work. I followed these steps:
    1. Launch JMeter 2.3.3 and add a Thread Group to the test plan.
    2. Under WorkBench > Add > Non-Test Elements, add an HTTP Proxy Server. The proxy server settings are port 9090, target "use recording controller", grouping "do not group samplers", type "HTTP request", with all the boxes under the HTTP sampler settings checked.
    3. Save the settings.
    4. In the browser (IE 7.0 or Firefox 3.0.16), under connection settings, set a manual proxy of localhost with port 9090 (no auto-detect settings, only a manual proxy) and save the settings.
    5. In JMeter, start the HTTP proxy server.
    6. Open a browser and hit the web page that needs to be tested.
    The page does not open. In fact, because of the changes made in the browsers, no pages open at all. Whenever I try hitting a page, the request is recorded in JMeter, but without the page opening, how can I test? I'm looking for an answer urgently, as my work is blocked; any help would be appreciated.
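
    Before blaming the browser settings, it can help to confirm the JMeter proxy is actually listening and forwarding; a rough sketch (run from the same machine, assuming curl is available there):

        netstat -an | grep 9090                        # (use: netstat -ano | findstr 9090 on Windows)
        curl -v -x http://localhost:9090 http://example.com/
        # if curl gets a response through the proxy, the problem is in the browser's proxy
        # settings (e.g. "bypass proxy for local addresses") rather than in JMeter itself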

  • Anti Virus Service does not run - Windows XP SP3 32bit Home

    - by Stefan Fassel
    I have a somewhat strange problem here. I am trying to run antivirus software on my Windows XP Home 32-bit system. After a serious crash I had to fall back to an outdated copy of my initial installation and have Windows install 5 years of updates. So far so good. After installing new antivirus software (Bitdefender 2012) everything seemed to be fine; the initial scan went fine and configuration was working. But after restarting the system, the virus scanner was unable to start up again. Even the configuration console of the AV software did not start. I tried scanning the system for malware, but nothing was found. Then I tried different AV software (MS Security Essentials), but in the end it failed to start too. I have tried to start the service manually, but I seem to be missing the privilege to do so. I am logged in as a non-"Administrator" user with admin privileges (not much choice there on an XP Home system). I cannot switch to the Administrator account outside of protected mode, and when running Windows in protected mode I am unable to start the AV software because it does not run in protected mode. I am a bit at a loss now...

  • Why isn't 'Low Fragmentation Heap' LFH enabled by default on Windows Server 2003?

    - by James Wiseman
    I've been investigating an issue with a production Classic ASP website running on IIS6 which seems indicative of memory fragmentation. One of the suggestions for how to ameliorate this came from Stack Overflow: How can I find why some classic asp pages randomly take a real long time to execute?. It suggested flipping a setting in the site's global.asa file to 'turn on' the Low Fragmentation Heap (LFH). The following code (with a registered version of the accompanying DLL) did the trick:
        Set LFHObj = CreateObject("TURNONLFH.ObjTurnOnLFH")
        LFHObj.TurnOnLFH()
        application("TurnOnLFHResult") = CStr(LFHObj.TurnOnLFHResult)
    (Really the code isn't that important to the question.) An author of a linked post reported a seemingly magic resolution to this issue, and, reading around a little more, I discovered that this setting is enabled by default on Windows Server 2008. So, naturally, this left me a little concerned: why is this setting not enabled by default on 2003, or, if it works in 2008, why has Microsoft not issued a patch to enable it by default on 2003? I suspect the answer to both is the same (if there is one). Obviously, we're testing it in a non-production environment, and doing an array of metrics and comparisons to judge whether it helps us. But aside from this I'm really just trying to understand if there's any technical reason why we should do this, or if there are any gotchas we need to be aware of.

  • How to set up the .ssh directory inside an encrypted volume on Mac OS X and still have public key logins?

    - by Vitaly Kushner
    I have my .ssh directory inside an encrypted sparse image, i.e. ~/.ssh is a symlink to /Volumes/VolumeName/.ssh. The problem is that when I try to ssh into that machine using a public key, I see the following error message in /var/log/secure.log:
        Authentication refused: bad ownership or modes for directory /Volumes
    Is there any way to solve this cleanly?
    Update: The permissions on ~/.ssh and authorized_keys are right:
        > ls -ld ~
        drwxr-xr-x+ 77 vitaly staff 2618 Mar 16 08:22 /Users/vitaly/
        > ls -l ~/.ssh
        lrwxr-xr-x 1 vitaly staff 22 Mar 15 23:48 /Users/vitaly/.ssh@ -> /Volumes/Astrails/.ssh
        > ls -ld /Volumes/Astrails/.ssh
        drwx------ 3 vitaly staff 646 Mar 15 23:46 /Volumes/Astrails/.ssh/
        > ls -ld /Volumes/Astrails/
        drwx--x--x@ 18 vitaly staff 1360 Jan 12 22:05 /Volumes/Astrails//
        > ls -ld /Volumes/
        drwxrwxrwt@ 5 root admin 170 Mar 15 20:38 /Volumes//
    The error message says the problem is with /Volumes, but I don't see the problem. Yes, it is o+w, but it is also +t, which should be OK but apparently isn't. The problem is I can't change the /Volumes permissions (or rather shouldn't), but I do want public-key login to work. First I thought of mounting the image somewhere other than /Volumes, but it is automounted on login by the standard OS X mounting. I asked about it here: How to change disk image's default mount directory on osx. The only answer I got is "you can't" ;) I could hack my way around it by writing a shell script that manually mounts the volume at a non-standard location, but that would be a gross hack; I'm still looking for a cleaner way to do what I need.
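
    Two common workarounds, sketched here with the caveat that both involve editing sshd_config (under /etc or /etc/ssh depending on the OS X version) and restarting Remote Login:

        # option 1: keep the authorized_keys file (public keys only) outside the encrypted volume
        #   in sshd_config:   AuthorizedKeysFile /etc/ssh/authorized_keys/%u
        sudo mkdir -p /etc/ssh/authorized_keys
        sudo cp /Volumes/Astrails/.ssh/authorized_keys /etc/ssh/authorized_keys/vitaly
        sudo chown root:wheel /etc/ssh/authorized_keys/vitaly
        sudo chmod 644 /etc/ssh/authorized_keys/vitaly
        # option 2: relax sshd's ownership/permission check (weakens a deliberate safeguard)
        #   in sshd_config:   StrictModes no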

  • Solaris 10: How to image a machine?

    - by nonot1
    I've got a Solaris 10 workstation that I'd like to create a full image backup of. The machine has 2 drives: one UFS for the system root, and one ZFS for data storage. I intend to add a third HD to keep the backup images of both primary drives (including any ZFS snapshots). The purpose is not disaster recovery, but rather to allow me to easily blow away a series of application installation/configuration changes I intend to try. What's the best way to do this? I'm not too familiar with Solaris, but have some basic Linux knowledge. I looked at Clonezilla, but it does not support Solaris. I'm OK with just a dd | gzip > image style solution, but I'd need some way to first zero out the unused blocks on the primary drives to aid gzip. They are much larger than my third drive, but hardly have any real data.
    Update to clarify: I specifically want to avoid using any file-system snapshot functionality, because part of the app configuration changes involve/depend slightly on existing and new snapshots. Ideally the full collection of snapshots should be part of the backup. Virtualization is not an option, because the goal is to do performance evaluation on a very specific hardware configuration. For the same reason, spurious "backup" snapshots could skew the performance data. Thank you.
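
    A minimal sketch of the dd | gzip approach, including the zero-fill step asked about (device names are examples - check yours with format or zpool status - and ideally the imaging runs while booted from other media so the filesystems are quiescent):

        # fill free space with zeros so unused blocks compress away, then delete the filler
        dd if=/dev/zero of=/zerofill bs=1024k ; rm /zerofill
        dd if=/dev/zero of=/data/zerofill bs=1024k ; rm /data/zerofill     # on the ZFS pool's mountpoint
        # image each disk (s2 is the whole-disk slice on the UFS disk) to the third drive
        dd if=/dev/rdsk/c0t0d0s2 bs=1024k | gzip -1 > /backup/rootdisk.img.gz
        dd if=/dev/rdsk/c0t1d0s2 bs=1024k | gzip -1 > /backup/datadisk.img.gz
        # restore later with:
        # gzip -dc /backup/rootdisk.img.gz | dd of=/dev/rdsk/c0t0d0s2 bs=1024k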

  • DNS Replication on Server 2008 R2

    - by Aaron
    Hi there, I have been trying out public-facing-only DNS servers with Server 2008 R2 Web - I want to set up at least 2 of them in master/slave replication. Using Microsoft DNS I am able to add the domains as primary zones on the master DNS server (ns1), add the records OK, and have them visible publicly. On ns2 I can then add the same domain as a secondary zone and get it to replicate / zone transfer fine. Is there a way inside Windows to have the slave(s) automatically synchronise all the changes from the master? It's fine that I had to add the domains manually on each of the NSes initially, but if I add a new zone on the master I currently have to add it on the slave as well before it replicates. I installed Simple DNS, which has a 'Super Master/Slave' feature that takes care of exactly this: if you add a new domain to the primary, it is automatically created and kept in sync on NS2 - but I would have to buy a licence. All this is non-Active Directory, if that helps. Can anyone advise whether it is possible to do this using Microsoft DNS? Many thanks in advance!
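
    Adding the secondary zones can at least be scripted so ns2 stays in step with ns1; a hedged sketch using dnscmd (the zone name, server names and the master's IP are placeholders):

        rem run on ns2 for each zone that exists on the master (10.0.0.1 = ns1 here)
        dnscmd ns2.example.net /ZoneAdd example.com /Secondary 10.0.0.1
        rem compare the zone lists on the two servers
        dnscmd ns1.example.net /EnumZones
        dnscmd ns2.example.net /EnumZones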

  • Can enabling a RAID controller's writeback cache harm overall performance?

    - by Nathan O'Sullivan
    I have an 8-drive RAID 10 setup connected to an Adaptec 5805Z, running CentOS 5.5 and the deadline scheduler. A basic dd read test shows 400MB/sec, and a basic dd write test shows about the same. When I run the two simultaneously, I see the read speed drop to ~5MB/sec while the write speed stays at more or less the same 400MB/sec. The output of iostat -x, as you would expect, shows that very few read transactions are being executed while the disk is bombarded with writes.
    If I turn the controller's writeback cache off, I don't see a 50:50 split, but I do see a marked improvement, somewhere around 100MB/s reads and 300MB/s writes. I've also found that if I lower the nr_requests setting on the drive's queue (somewhere around 8 seems optimal), I can end up with 150MB/sec reads and 150MB/sec writes; i.e. a reduction in total throughput, but certainly more suitable for my workload.
    Is this a real phenomenon? Or is my synthetic test too simplistic? The reason this could happen seems clear enough: when the scheduler switches from reads to writes, it can run heaps of write requests because they all just land in the controller's cache, but they must be carried out at some point. I would guess the actual disk writes occur when the scheduler starts trying to perform reads again, resulting in very few read requests being executed. This seems a reasonable explanation, but it also seems like a massive drawback to using a writeback cache on a system with non-trivial write loads. I've been searching for discussions about this all afternoon and found nothing. What am I missing?
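
    For reference, a sketch of the tunables mentioned above and a crude concurrent read/write probe (sdX and the test paths are placeholders; the direct-I/O flags keep the streams from being absorbed entirely by the page cache):

        cat /sys/block/sdX/queue/scheduler          # the active scheduler is shown in [brackets]
        echo 8 > /sys/block/sdX/queue/nr_requests
        # run a large read and a large write at the same time; watch iostat -x 1 in another terminal
        dd if=/dev/sdX of=/dev/null bs=1M count=4096 iflag=direct &
        dd if=/dev/zero of=/data/ddtest bs=1M count=4096 oflag=direct conv=fsync &
        wait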
