Search Results

Search found 17847 results on 714 pages for 'virtual disk'.


  • How can I password protect & still let cgi-bin work?

    - by jaaaaaaax
    This is taken from the sites-available directory. It's a virtual host configuration for Apache. Accessing myiphere/cgi-bin/ throws a 403. The directory permissions for /var/www2/ are drwxrwxrwx 8 www-data www-data.

        NameVirtualHost myiphere
        <VirtualHost myiphere>
            ServerAdmin webmaster@localhost
            DocumentRoot /var/www2/
            <Directory />
                Options FollowSymLinks
                AllowOverride None
            </Directory>
            <Directory /var/www2/>
                Options Indexes FollowSymLinks MultiViews
                AllowOverride None
                Order allow,deny
                allow from all
            </Directory>
            ScriptAlias /cgi-bin/ /usr/lib/cgi-bin/
            <Directory "/usr/lib/cgi-bin">
                AllowOverride None
                Options +ExecCGI -MultiViews +SymLinksIfOwnerMatch
                Order allow,deny
                Allow from all
            </Directory>
            ErrorLog /var/log/apache2/error.log
            # Possible values include: debug, info, notice, warn, error, crit,
            # alert, emerg.
            LogLevel warn
            CustomLog /var/log/apache2/access.log combined
            ServerSignature On
            Alias /doc/ "/usr/share/doc/"
            <Directory "/usr/share/doc/">
                Options Indexes MultiViews FollowSymLinks
                AllowOverride None
                Order deny,allow
                Deny from all
                Allow from 127.0.0.0/255.0.0.0 ::1/128
            </Directory>
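
    One way to get basic-auth password protection without breaking the ScriptAlias is to add an auth block and create the password file with htpasswd. A minimal sketch, assuming a hypothetical user name "editor" and password file path /etc/apache2/.htpasswd (adjust to taste, and check that the CGI scripts themselves are executable by www-data, since a 403 on /cgi-bin/ is often just a permissions or missing +ExecCGI issue):

        # create the credentials file once (shell)
        htpasswd -c /etc/apache2/.htpasswd editor

        # then inside the <VirtualHost> block
        <Directory "/usr/lib/cgi-bin">
            AllowOverride None
            Options +ExecCGI -MultiViews +SymLinksIfOwnerMatch
            Order allow,deny
            Allow from all
            AuthType Basic
            AuthName "Restricted"
            AuthUserFile /etc/apache2/.htpasswd
            Require valid-user
        </Directory>

    After a configtest (apache2ctl configtest) and a reload, the same auth directives can be added to the <Directory /var/www2/> block if the whole site should be protected rather than only the CGI directory.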

    Read the article

  • Software for RAID Failure Alerts?

    - by QF_Developer
    I have two 256 GB Samsung 840 Pro SSDs in a RAID 1 array. I would like to receive a notification if one of the disks in the array fails. Can anybody recommend an application I can install on the server to send an email if such an event occurs? Here are some additional specs:

    Supermicro X9SCM-IIF motherboard utilising the onboard hardware RAID controller.
    OS = Windows 2012 Standard

    Also, is it possible to simulate a disk failure by pulling a drive out of the bay? SSDs appear to fail close together when in a mirrored config, so I'd like to know ASAP if one goes down so I can swap them out with minimum delay.

    UPDATE 26 June 2013: None of the software that ships with the Supermicro X9SCM-* motherboards offers support for RAID monitoring. As has been pointed out here, these boards are built on an Intel chipset for RAID, so I installed Intel Rapid Storage Technology, which supports automated email notifications on RAID failure: http://www.intel.com/support/chipsets/imsm/sb/cs-020784.htm One small issue: the software only allows you to send email notifications without SMTP authentication. There's a bunch of different workarounds here: http://communities.intel.com/thread/30771

    Read the article

  • After adding more RAM, the BIOS/Debian shows nothing [closed]

    - by derty
    My private server has 2x1 GB of RAM, running 64-bit Debian on an Intel Q6600. It hosts two virtual machines, each of which receives 512 MB of RAM, which as you can imagine is a bit tight for the whole system. Now I got 2x2 GB of RAM from a friend. I'm not sure whether they are clocked at the same speed, but I'm sure my power supply is not at its limit and can handle them. There are two RAM sockets left on the mainboard, so I shut down the system, installed the extra 4 GB, and watched what happened. After pressing the power button, everything spins up and gets noisy as usual, BUT the screen shows nothing, not even the BIOS screens it usually does. Why isn't the system booting? I can't imagine a direct link between booting Debian and the BIOS not showing anything. Or is it GRUB? I mounted the system disk on another machine and I can see it, switch folders and write files with "vim"; there doesn't seem to be a problem with it.

    Read the article

  • Hibernate & Sleep broken after IE 9 RTM installation in Windows 7 x64.

    - by AKa
    I have a question about hibernation. I installed Internet Explorer 9 RTM x64 on my Windows 7 x64 SP1 desktop machine. Since then, the computer doesn't enter hibernation or (hybrid) sleep properly. After I hibernate the computer, the monitor goes blank and the keyboard and mouse become inactive, but the machine is still running and the only way to switch it off is the power button. On the next start this is then treated as an improper shutdown: the event log shows "The previous system shutdown at xx:xx:xx on xx.xx.xx was unexpected" and I get the safe mode menu. I'm not sure whether it has anything to do with the Internet Explorer installation, but I definitely never had any problems with hibernation or (hybrid) sleep before it. There is nothing suspicious in the Windows logs. I have switched hibernation off and on again, installed new drivers for the mainboard, graphics and network card, and checked the hard disk; nothing helped. This is really annoying, because I don't like to switch the computer completely off, as it takes longer to boot. Any suggestions?

    Read the article

  • Samba access works with IP address only

    - by Sebastian Rittau
    I added a Debian etch host (hostname: webserver, IP address: 192.168.101.2) running Samba to a Windows network with a Windows 2003 PDC (IP address 192.168.101.3). The Samba server exports a public guest share called "Intranet". The server shows up fine in the network, but trying to click on it produces an error dialog stating I don't have the necessary permissions. Entering \\webserver manually fails the same way, and \\webserver\intranet reports that the path does not exist. Interestingly, accessing the share by IP address (\\192.168.101.2 or \\192.168.101.2\intranet) works fine. DNS is configured correctly, and "smbclient //webserver/intranet" on another Linux client works fine. One complicating issue is that webserver is only a VMware virtual machine running on the PDC server. Here is our smb.conf:

        [global]
        workgroup = Foobar
        server string = Webserver
        wins support = yes          ; commenting out these
        wins server = 192.168.101.3 ; two lines has no effect
        dns proxy = no
        guest account = nobody
        [... snipped some unrelated bits, like logging ...]
        security = share
        [... snipped some password-related things ...]
        domain master = no

        [intranet]
        comment = Intranet
        path = /srv/webserver/contents
        browseable = yes
        guest ok = yes
        guest only = yes
        read only = yes
        create mask = 0775
        directory mask = 0775
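
    Since access by IP works but access by name fails, the problem is most likely NetBIOS/SMB name resolution rather than the share itself. A few quick checks from the Samba box (a sketch; the netbios name value shown is an assumption, Samba normally derives it from the hostname):

        # confirm the config parses and see what name/WINS settings Samba is using
        testparm -s | grep -i -E 'netbios|wins'

        # ask the server to identify itself over NetBIOS
        nmblookup -A 192.168.101.2

        # if the advertised name differs, pin it explicitly in [global]
        # (hypothetical value; reload Samba afterwards)
        #   netbios name = WEBSERVER

    Also worth checking: the Samba documentation warns against setting "wins support = yes" and "wins server = ..." at the same time; one of the two should normally be commented out.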

    Read the article

  • Security Token for Mac/Linux/Windows, self-managed, pref. open source?

    - by DevelopersDevelopersDevelopers
    I'm looking to buy an evaluation security token (combined smart card/USB reader) for my business that works on:

    Windows 7 x64
    OS X 10.6.x x64
    Ubuntu Linux (64 or 32 bit, 10.04 or 10.10; I can bend based on possible tokens)

    Functionality I need:

    Login authentication
    Authentication for whole-disk encryption (in Linux/Windows; Mac is flexible here)
    Signing/encryption using PGP and X.509 certificates
    RSA-2048-capable keys (1024 is not good enough)
    I can manage the certificates myself
    Open source middleware/drivers (not necessarily FOSS, just source available. I can flex on this, I just want to be able to audit the code. OpenSC-compatible on Linux would be great.)

    Is there any token that can do all of this? Or would I need multiple ones to accomplish this? Or do I need to look at separate smart cards and readers to get this? I have been researching this for a while and have had a heck of a time even getting accurate information about products. Also, I am in the USA, and it appears that EU export laws prevent me from buying from there, so those vendors are out. I was looking at Feitian tokens from Gooze, but since they are in France I can't buy.

    Read the article

  • can't install anything anymore with apt-get

    - by Aymane Shuichi
    This is the log I get when trying to install anything (php5-fpm, after removing it):

        apt-get install php5-fpm
        Reading package lists... Done
        Building dependency tree
        Reading state information... Done
        php5-fpm is already the newest version.
        0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded.
        1 not fully installed or removed.
        After this operation, 0 B of additional disk space will be used.
        Do you want to continue [Y/n]? y
        Setting up php5-fpm (5.4.4-14+deb7u10) ...
        insserv: warning: script 'S55IptabLes' missing LSB tags and overrides
        insserv: warning: script 'S55IptabLex' missing LSB tags and overrides
        insserv: There is a loop between service IptabLes and mountnfs if started
        insserv: loop involving service mountnfs at depth 8
        insserv: loop involving service networking at depth 7
        insserv: loop involving service mountnfs-bootclean at depth 10
        insserv: There is a loop between service rc.local and mountall if started
        insserv: loop involving service mountall at depth 6
        insserv: loop involving service checkfs at depth 5
        insserv: loop involving service kbd at depth 11
        insserv: There is a loop between service rc.local and mountall-bootclean if started
        insserv: loop involving service mountall-bootclean at depth 7
        insserv: loop involving service urandom at depth 9
        insserv: There is a loop between service IptabLes and mountdevsubfs if started
        insserv: loop involving service mountdevsubfs at depth 2
        insserv: loop involving service udev at depth 1
        insserv: There is a loop at service rc.local if started
        insserv: There is a loop at service IptabLes if started
        insserv: Starting IptabLes depends on rc.local and therefore on system facility `$all' which can not be true!
        (x99 times repeated)
        insserv: Max recursions depth 99 reached
        insserv: loop involving service postfix at depth 2
        insserv: There is a loop between service IptabLes and udev if started
        insserv: loop involving service mountkernfs at depth 1
        insserv: loop involving service IptabLes at depth 1

    Now here is the error I get:

        insserv: exiting now without changing boot order!
        update-rc.d: error: insserv rejected the script header
        dpkg: error processing php5-fpm (--configure):
         subprocess installed post-installation script returned error exit status 1
        Errors were encountered while processing:
         php5-fpm
        E: Sub-process /usr/bin/dpkg returned an error code (1)

    The biggest operation I did before this was updating nginx from 1.2 to 1.6, thanks to this site: How to upgrade nginx from 1.2 to 1.6 on debian 7. Please help!
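
    All of the insserv loops are anchored on the two scripts it flags as missing LSB headers, S55IptabLes and S55IptabLex, which are not standard Debian services (names like these are often associated with a known Linux DDoS bot, so it is worth treating the box as possibly compromised). A sketch of how to get dpkg unstuck, assuming those scripts really are unwanted:

        # see what the offending init scripts actually are before removing anything
        ls -l /etc/init.d/IptabLes /etc/init.d/IptabLex /etc/rc*.d/S55IptabLe*

        # drop them out of the boot sequence and remove the scripts
        update-rc.d -f IptabLes remove
        update-rc.d -f IptabLex remove
        rm -f /etc/init.d/IptabLes /etc/init.d/IptabLex

        # then let dpkg finish the interrupted configuration
        dpkg --configure -a
        apt-get -f install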

    Read the article

  • File corruption (bad checksums) in large files copied to VMware guest

    - by AllanA
    In setting up a development lab, I've got a desktop system running ESXi 4.1.0 (free license) on SATA RAID 0 (already purchased and configured when I started this job; I'm open to hardware input as it pertains to my problem). Its guests so far include two Windows 2008 Server R2 64-bit VMs and one Ubuntu 10.04 64-bit VM. I'm installing onto the Windows servers. We've been copying over some fairly large files (over a gigabyte) for an installation, hoping to install more quickly from a (virtual) hard drive than from the network or from BD-ROM. The problem is that they keep coming up with different checksums from the originals. The file sizes are the same, but md5sum reports different numbers (and so does the installer, as it refuses to continue when the checksums don't match). I've tried copying directly from the BD-ROM (attaching the OS drive to the host system's physical drive). I've tried copying the large files onto a co-worker's Windows machine from his Blu-ray drive; when I do that, the checksums match. But when I copy from his machine to the VM guest over a network share, the checksums no longer match. Thinking this meant a corrupt destination drive, I deleted it in vSphere and added another freshly created drive. The problem persists. I'm not sure what to try next.
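
    When chasing this kind of corruption it helps to checksum the file at every hop (source medium, host datastore, guest) so you can see exactly where the bytes change. A small sketch of the comparison from any Linux box involved (paths are hypothetical; the ESXi shell usually has a busybox md5sum as well):

        src=$(md5sum /mnt/bdrom/install.iso | awk '{print $1}')
        dst=$(md5sum /vmfs/volumes/datastore1/iso/install.iso | awk '{print $1}')
        [ "$src" = "$dst" ] && echo "this hop is clean" || echo "corruption happened on this hop"

    Inside the Windows guests, the built-in "certutil -hashfile <file> MD5" does the same job, so the whole chain can be checked without extra tools.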

    Read the article

  • Replicated filesystem and EC2 MySQL

    - by El Yobo
    I'm currently investigating migrating our infrastructure over to run on Amazon's EC2 and am trying to figure out the best way to set up a MySQL service. I'm leaning towards running our own MySQL instances, rather than going with Amazon's RDS, but am still considering the best approach for performance and cost on the instance itself. In order to have persistent data, the MySQL data needs to be on an EBS volume (with some form of striped RAID, e.g. RAID0 or RAID10) to improve persistence. However, EBS I/O is limited by the network interface (gigabit, so a theoretical maximum of 128 MB/s), while the ephemeral volumes have no such problem. I did see a suggestion for running two MySQL servers on an instance, with a master running on the ephemeral disk (which we would also RAID) and a slave storing changes to an EBS volume, but this has some additional overhead and complexity (two servers). What I was imagining is using some form of replicated file system such that I could have:

    a filesystem on top of a RAID0 of ephemeral volumes to maximise performance
    all changes from the above immediately replicated to another RAID1 volume backed by multiple EBS volumes to ensure no data loss

    The advantages of this would be:

    best possible I/O performance for the DB server; no network delay in I/O
    decreased I/O on EBS volumes (as all read I/O will be done on the ephemeral volumes), so decreased cost
    good data security, as it's backed onto redundant EBS volumes

    However, I haven't seen an appropriate system to replicate all changes from one volume to the other; is there a filesystem, or any other approach, which will do this? The distributed file systems, e.g. GlusterFS, DRBD etc., seem to focus on replicating disks between servers; can they be set up to do what I'm interested in here? I also haven't seen anything about others taking this approach. Do I have a solution in need of a problem here (i.e. is performance good enough, so this whole idea is redundant)? Is there some flaw in the plan?
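
    One single-host option worth noting: Linux software RAID1 can pair a fast member with a slow one and mark the slow member write-mostly (optionally with write-behind), so reads are served from the fast side while every write still lands on both. A sketch, assuming the ephemeral disks are /dev/xvdb and /dev/xvdc and the EBS volumes are /dev/xvdf and /dev/xvdg (device names are hypothetical, and write-behind weakens the durability guarantee):

        # stripe the ephemeral disks for speed
        mdadm --create /dev/md0 --level=0 --raid-devices=2 /dev/xvdb /dev/xvdc

        # stripe the EBS volumes for the durable side
        mdadm --create /dev/md1 --level=0 --raid-devices=2 /dev/xvdf /dev/xvdg

        # mirror the two, preferring the ephemeral side for reads
        mdadm --create /dev/md2 --level=1 --raid-devices=2 \
              --bitmap=internal --write-behind=256 \
              /dev/md0 --write-mostly /dev/md1

    The catch is the recovery path: after an instance stop/start the ephemeral side comes back empty, so the mirror has to be assembled degraded from the EBS side and the ephemeral member re-added, and that procedure needs testing before trusting it with production data.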

    Read the article

  • Reread partition table without rebooting?

    - by Teddy
    Sometimes, when resizing or otherwise mucking about with partitions on a disk, cfdisk will say:

        Wrote partition table, but re-read table failed. Reboot to update table.

    (This also happens with other partitioning tools, so I'm thinking this is a Linux issue rather than a cfdisk issue.) Why is this, why does it only happen sometimes, and what can I do to avoid it? Note: please assume that none of the partitions I am actually editing are opened, mounted or otherwise in use.

    Update: cfdisk uses ioctl(fd, BLKRRPART, NULL) to tell Linux to reread the partition table. Two of the other tools recommended so far (hdparm -z DEVICE, sfdisk -R DEVICE) do exactly the same thing. The partprobe DEVICE command, on the other hand, seems to use a newer ioctl called BLKPG, which might be better; I don't know. (It also falls back on BLKRRPART if BLKPG fails.) BLKPG seems to be a "this partition has changed; here is the new size" operation, and it looks like partprobe calls it individually on all the partitions on the device passed, so it should work if the individual partitions are unused. However, I have not had the opportunity to try it.
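
    For reference, these are the usual ways to poke the kernel into rereading the table without a reboot (a sketch; /dev/sdb is a placeholder, and all of them can still fail with "device busy" if anything on the disk is mounted, in use as swap, or part of an md/LVM device):

        # whole-table reread (the same BLKRRPART ioctl cfdisk uses)
        blockdev --rereadpt /dev/sdb
        hdparm -z /dev/sdb

        # per-partition updates via BLKPG, which can succeed even when
        # some *other* partition on the same disk is busy
        partprobe /dev/sdb
        partx -u /dev/sdb

        # check what the kernel currently believes
        cat /proc/partitions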

    Read the article

  • Google MAIL not arriving - relay not allowed

    - by renevdkooi
    I have a server with sendmail hosting my domain mind-zone.nl, and I changed the MX records to point to the server. When I use Hotmail or any other client, the email arrives and everything is fine. Only mail from Gmail is bounced; Gmail returns "relay denied". I have set all the virtual host settings etc., and from the command line I can send mail as well; Hotmail works, etc. Just not Gmail. The strange thing is what Gmail returns (below). Look at the lower part: the "Received: by" lines show an IP address which is not mine and has absolutely nothing to do with my domain, while when I do an nslookup (even after switching to Google's DNS servers) it states that the IP address for my domain correctly points at my server.

        Technical details of permanent failure:
        Google tried to deliver your message, but it was rejected by the recipient domain.
        We recommend contacting the other email provider for further information about the
        cause of this error. The error that the other server returned was:
        554 554 5.7.1: Relay access denied (state 14).

        ----- Original message -----
        MIME-Version: 1.0
        Received: by 10.14.37.138 with SMTP id y10mr3421504eea.43.1297665573901;
                Sun, 13 Feb 2011 22:39:33 -0800 (PST)
        Received: by 10.14.29.75 with HTTP; Sun, 13 Feb 2011 22:39:33 -0800 (PST)
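
    A 554 "Relay access denied" on incoming mail usually means the MTA that answered does not list the domain as one it accepts mail for, so it treats the delivery as relaying. The wording is also worth noting: "Relay access denied" is the text Postfix emits, while sendmail normally says "Relaying denied", which hints that Gmail may be delivering to a different machine entirely (for example a stale or secondary MX record). Some hedged checks, from any shell with dig and from the sendmail host itself:

        # see every MX Gmail could be using, straight from DNS
        dig +short MX mind-zone.nl
        dig +short MX mind-zone.nl @8.8.8.8

        # talk to each listed MX yourself and watch the banner / RCPT response
        telnet mail.mind-zone.nl 25

        # on the sendmail host, make sure the domain is treated as local
        grep -i mind-zone.nl /etc/mail/local-host-names
        echo "mind-zone.nl" >> /etc/mail/local-host-names && service sendmail restart

    The mail.mind-zone.nl hostname above is a placeholder; substitute whatever the MX lookup actually returns.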

    Read the article

  • RewriteRule causes POST data to get dumped before I can access it

    - by MatthewMcGovern
    I'm currently setting up my own 'webserver' (an Ubuntu Server on some old hardware) so I can have a mess around with PHP and get some experience managing a server. I'm using my own little MVC framework and I've hit a snag. In order for all requests to make it through the dispatcher, I am using:

        <Directory /var/www/>
            RewriteEngine On
            RewriteCond %{REQUEST_URI} !\.(png|jpg|jpeg|bmp|gif|css|js)$ [NC]
            RewriteRule . HomeProjects/index.php [L]
        </Directory>

    Which works great. I read on Stack Overflow to change the [L] to [P] to preserve POST data. However, this causes every page to return:

        Not Found
        The requested URL <url> was not found on this server.

    So after some more searching, I found, "Note that you need to enable the proxy module, and the proxy_http_module in the config files for this to work." The problem is, I have no idea how to do this, and everything I google has people using examples with virtual hosts, and I don't know how to 'translate' that into something useful for my setup. I'm accessing my webserver via my public IP and forwarding traffic on port 80 to the web server (like I'm pretending I have a domain/server). How can I get this enabled and get POST data working again?

    Edit: When I use the following, the server never responds and the page loads indefinitely:

        LoadModule proxy_module /usr/lib/apache2/modules/mod_proxy.so
        LoadModule proxy_http_module /usr/lib/apache2/modules/mod_proxy_http.so

        <Directory /var/www/>
            RewriteEngine On
            RewriteCond %{HTTP_REFERER} !^http://(.+\.)?82\.6\.150\.51/ [NC]
            RewriteRule .*\.(jpe?g|gif|bmp|png|jpg)$ /no-hotlink.png [L]
            RewriteCond %{REQUEST_URI} !\.(png|jpg|jpeg|bmp|gif|css|js)$ [NC]
            RewriteRule . HomeProjects/index.php [P]
        </Directory>
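
    Two things worth knowing here. First, an internal rewrite with [L] does not drop POST data by itself; POST bodies are normally only lost when the request gets externally redirected (an [R] flag, or mod_dir adding a trailing slash), so the [P] proxy trick is usually unnecessary for a front-controller rule like this. Second, if the proxy route really is wanted, the Debian/Ubuntu way to enable the modules is a2enmod rather than hand-written LoadModule lines. A sketch:

        # enable mod_rewrite (if not already) plus the proxy modules, then restart
        sudo a2enmod rewrite proxy proxy_http
        sudo service apache2 restart

        # confirm what is actually loaded
        apache2ctl -M | grep -E 'rewrite|proxy'

    With [P] the rewrite target generally also has to be an absolute http:// URL (e.g. http://127.0.0.1/HomeProjects/index.php), which may well be why the relative target above returns 404.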

    Read the article

  • How to copy VirtualBox VDI contents to a partition and dual boot the OS from it?

    - by Calmarius
    I'm a Linux user, but I keep a compressed Windows XP ISO with me on a pen drive for the cases where I absolutely need Windows to do something. This works in VirtualBox most of the time. But now I want to play some games, so I would like to run the Windows image natively. My computer doesn't have a CD drive, so I cannot just burn the ISO and do a normal install. What I'm trying to do is move the installed Windows image to a physical NTFS partition on my HDD and set up GRUB to let me dual boot it. I found many tutorials that deal with writing a VDI to a physical drive, but they assume I want to overwrite my entire drive. Moving the raw disk image with dd to the partition resulted in a corrupt partition. I also tried the VMDK trick to use that empty partition and install Windows onto it. Although the text-mode phase of the installation finishes without problems, the VM won't work: it either crashes and keeps rebooting, or freezes immediately (depending on how I created the VMDK, with -rawdisk /dev/sda3 or -rawdisk /dev/sda -partition 3).
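
    For what it's worth, dd-ing the converted image straight onto /dev/sdaN corrupts the target because the raw form of the VDI begins with its own MBR and partition table; the NTFS filesystem actually starts some sectors in. A sketch of copying only the inner partition (file names and device names are placeholders, and the target partition must be at least as large as the source one):

        # convert the VDI to a plain raw image
        VBoxManage clonehd WinXP.vdi WinXP.img --format RAW

        # expose the partitions inside the image as mapped devices
        sudo kpartx -av WinXP.img        # creates e.g. /dev/mapper/loop0p1

        # copy only the Windows partition into the waiting NTFS partition
        sudo dd if=/dev/mapper/loop0p1 of=/dev/sda3 bs=4M conv=fsync

        # clean up the mappings
        sudo kpartx -dv WinXP.img

    Even with a clean copy, XP will typically still blue-screen (STOP 0x7B) on real hardware until its disk driver matches the new controller, and GRUB needs a chainloader entry for the partition, so a repair install or a MergeIDE-style registry fix may also be needed.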

    Read the article

  • Open ports broken from internal network

    - by ksvi
    Quick summary: a forwarded port works from the outside world, but from the internal network using the external IP the connection is refused.

    This is a simplified situation to make the explanation easier: I have a computer that is running a service on port 12345. This computer has the internal IP 192.168.1.100 and is connected directly to a modem/router which has internal IP 192.168.1.1 and external (public, static) IP 1.2.3.4. (The router is a TP-Link TD-W8960N.) I have set up port forwarding (virtual server) at port 12345 to go to port 12345 at 192.168.1.100.

    If I run telnet 192.168.1.100 12345 from the same computer, everything works. But running telnet 1.2.3.4 12345 says connection refused. If I do this on another computer (on the same internal network, connected to the router), the same thing happens. This would seem like the port forwarding is not working. However... if I run an online port-checking service against my external IP and the service port, it says the port is open, and I can see the remote server connecting and immediately closing the connection. And using another computer that is connected to the internet over a mobile connection, I can also use telnet 1.2.3.4 12345 and I get a working connection. So the port forwarding seems to be working, but using the external IP from the internal network doesn't.

    I have no idea what can be causing this, since another setup very much like this (with a different router) works for me: I can access a service running on a server from inside the network both through the internal and the external IP.
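
    This pattern (works from outside, refused from inside when using the public IP) is the classic symptom of a router that does not do NAT loopback / hairpin NAT. If the router has no setting for it, a common workaround is to make internal clients resolve the service name to the internal address instead of going out and back in. A sketch, assuming the service is reached via a hostname such as myservice.example.com (the name is hypothetical; this obviously only helps clients that connect by name rather than by raw IP):

        # on each internal Linux client (or, better, in the LAN's DNS/dnsmasq config)
        echo "192.168.1.100  myservice.example.com" | sudo tee -a /etc/hosts

        # verify which address the client now uses
        getent hosts myservice.example.com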

    Read the article

  • Our client's site is redirecting to a scammy pill site [closed]

    - by Alex Demchak
    Possible Duplicate: My server's been hacked EMERGENCY

    We usually host our clients' sites, but we aren't hosting this one. The website itself (weddle-funeral.com) works just fine. But if you load Google, search for "weddle funeral stayton oregon" and click that result, the site redirects to a scammy pill site. I went through the site, and there were some PHP files in the WordPress plugins that got quarantined by my antivirus. I removed ALL non-essential files and uploaded fresh versions of all the plugins, but it's STILL redirecting from Google. I tried logging in to the cPanel (on a virtual private server), and cPanel flashed a red warning screen:

        The site's security certificate is not trusted!
        You attempted to reach XXXXX.com, but the server presented a certificate issued by
        an entity that is not trusted by your computer's operating system. This may mean
        that the server has generated its own security credentials, which Google Chrome
        cannot rely on for identity information, or an attacker may be trying to intercept
        your communications. You should not proceed, especially if you have never seen this
        warning before for this site.

    (Keep in mind, that's for the hosting account's cPanel.) Is there something on the SERVER that's probably causing the redirect?

    EDIT: .htaccess file contents:

        # BEGIN WordPress
        <IfModule mod_rewrite.c>
        RewriteEngine On
        RewriteBase /
        RewriteRule ^index\.php$ - [L]
        RewriteCond %{REQUEST_FILENAME} !-f
        RewriteCond %{REQUEST_FILENAME} !-d
        RewriteRule . /index.php [L]
        </IfModule>
        # END WordPress
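
    Referrer-conditional redirects like this (only when arriving from Google) usually live in injected PHP rather than in .htaccess, and the block shown above is the stock WordPress one. A sketch of where to look from a shell, assuming SSH access to the docroot (the path is a placeholder):

        cd /home/site/public_html

        # the usual obfuscation fingerprints in themes, plugins and wp-includes
        grep -RnE 'base64_decode|eval\(|gzinflate|str_rot13' \
            wp-content wp-includes wp-config.php | less

        # PHP files that have no business being under uploads/
        find wp-content/uploads -name '*.php'

        # PHP files changed recently compared with the rest of the install
        find . -name '*.php' -mtime -14 -printf '%T@ %p\n' | sort -n | tail

    Reinstalling the WordPress core files and checking the database (siteurl in wp_options, and any rogue admin users) is also worth doing, since cleaning plugins alone often leaves a backdoor behind.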

    Read the article

  • Arduino IDE "launch 4j" error

    - by John
    I have a computer running Windows XP. I am trying to run the Arduino IDE 0022. I double-click on arduino.exe, it waits about 30 seconds on the loading title screen, and then it gives me this error:

        Launch 4j: an error occurred while starting the application

    My only choice is to click "OK"; the error goes away, and the Arduino IDE closes. If I try to delete the Arduino files (to try overwriting them with some different files), I get an error that doesn't allow me to do so:

        Cannot delete awt.dll: Access denied
        Make sure the disk is not full or write protected and that the file is not currently in use.

    The only way to delete the file is by restarting the computer. So something must still be trying to run after that first error. I have noticed in Task Manager that some Java programs are still running: javaw.exe (3 processes). I think this is a problem with Java, but I checked and updated all of my Java software and it is all up to date. I have looked on other forums for this issue and none of them seemed to help. From the forums I have tried:

    Different Arduino IDE versions
    Updating Java
    Opening arduino.exe as Administrator

    Nothing has worked. Anyone have any suggestions?

    Read the article

  • Mysterious swap usage on EC2

    - by rusty
    We're in the middle of a project to move our infrastructure from a co-lo situation into Amazon EC2, and we've noticed some weird memory characteristics of the processes in our setup. Without going into too much detail about the specifics of our processes, we've noticed that on our EC2 instances "top" will show processes using a lot of swap space -- in fact, much greater than the amount of available swap or (if you add it all up) more than the available disk. Here's a sample top output:

        Mem:   7136868k total,  5272300k used,  1864568k free,   256876k buffers
        Swap:  1048572k total,        0k used,  1048572k free,  2526504k cached

          PID USER   PR  NI  VIRT  RES  SHR S %CPU %MEM   TIME+  SWAP COMMAND
         4121 jboss  20   0 5913m 603m  14m S  0.7  8.7 3:59.90  5.2g java
        22730 root   20   0 2394m 4012 1976 S  2.0  0.1 4:20.57  2.3g PassengerHelper
        20564 rails  20   0 2539m 220m 9828 S  0.3  3.2 0:23.58  2.3g java
         1423 nscd   20   0  877m 1464  972 S  0.0  0.0 0:03.89  876m nscd

    You can see, for instance, that jboss is reportedly using 5.2 GB of swap space, which is definitely impossible since there's only 1 GB allocated and none is being used (probably because there's still 1.8 GB of RAM free). And here are the results of uname -a:

        Linux xxx.yyy.zzz 2.6.35.14-106.53.amzn1.x86_64 #1 SMP Fri Jan 6 16:20:10 UTC 2012 x86_64 x86_64 x86_64 GNU/Linux

    We're running an AMI based off of the default Amazon Linux AMI (Amazon Linux AMI release 2011.09, so some RHEL 5 and RHEL 6), with not too many customizations and definitely no kernel-level customizations. Something here tells me that on this particular kernel/distribution, the reporting of swap or maybe even total memory usage isn't what it appears to be... Any help would be appreciated!
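
    The numbers are an artifact of how that generation of top fills in the SWAP column: it reports VIRT - RES, i.e. all virtual address space that is not currently resident (mapped files, reserved-but-untouched heap, and so on), not pages actually written to swap. Real per-process swap usage can be read from /proc instead; a sketch (the VmSwap field needs a reasonably recent kernel, which 2.6.35 should be):

        # actual swapped-out size for one process
        grep VmSwap /proc/4121/status

        # or a quick per-process summary across the box
        for f in /proc/[0-9]*/status; do
            awk '/^Name/{n=$2} /^VmSwap/{print $2, $3, n}' "$f"
        done | sort -nr | head

    With "Swap: 0k used" in the header, every VmSwap line should read 0 kB, which is consistent with the SWAP column being bookkeeping rather than real swap traffic.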

    Read the article

  • RAIDs with a lot of spindles - how to safely put the "wasted" space to use

    - by kubanczyk
    I have a fairly large number of RAID arrays (server controllers as well as midrange SAN storage) that all suffer from the same problem: barely enough spindles to keep up with peak I/O, and tons of unused disk space. I guess it's a universal issue, since the smallest drives vendors offer are 300 GB, but random I/O performance hasn't really grown much since the time when the smallest drives were 36 GB. One example is a database that occupies 300 GB and needs random performance of 3200 IOPS, so it gets 16 disks (4800 GB minus 300 GB, and we have 4.5 TB of wasted space). Another common example is redo logs for an OLTP database that is sensitive in terms of response time. The redo logs get their own 300 GB mirror, but take 30 GB: 270 GB wasted.

    What I would like to see is a systematic approach for both Linux and Windows environments. How to set up the space so the sysadmin team would be reminded about the risk of hindering the performance of the main db/app? Or, even better, be protected from that risk? The typical situation that comes to my mind is "oh, I have this very large zip file, where do I uncompress it? Umm, let's see the df -h and we'll figure something out in no time..." I don't put emphasis on strictness of the security (sysadmins are trusted to act in good faith), but on overall simplicity of the approach. For Linux, it would be great to have a filesystem customized to cap the I/O rate at a very low level - is this possible?
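
    On Linux the closest thing to "a filesystem capped at a very low I/O rate" is to put whatever touches the scratch space into a blkio cgroup with throttle limits on the underlying device. A sketch using the cgroup-v1 interface (the 8:16 major:minor pair and the 5 MB/s figure are arbitrary examples):

        # find the block device's major:minor numbers
        ls -l /dev/sdb            # e.g. 8:16

        # create a throttled group and cap reads and writes on that device
        mkdir -p /sys/fs/cgroup/blkio/scratch
        echo "8:16 5242880" > /sys/fs/cgroup/blkio/scratch/blkio.throttle.read_bps_device
        echo "8:16 5242880" > /sys/fs/cgroup/blkio/scratch/blkio.throttle.write_bps_device

        # run the space-hungry job inside the group
        echo $$ > /sys/fs/cgroup/blkio/scratch/cgroup.procs
        unzip /scratch/very-large.zip -d /scratch/work/

    A lighter-weight alternative is "ionice -c3" (idle class) on the offending job, though that only has an effect when the disks use the CFQ scheduler.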

    Read the article

  • Run as another user, but also as administrator

    - by Tewr
    I am trying to debug a virtual machine (VM) running on a remote computer from my workstation (A). Both VM and A are running Windows 7 Enterprise. Apparently, I need to start the Remote Debugger Service (RDS) on VM as an administrator. Apparently, I also need to run RDS as the user Tewr logged in on A (domain: DOM). VM runs the services I need to debug, as well as the remote desktop interface, with an account VMUSER in a domain called VMDOMAIN. I manage to start RDS as administrator, but then the RDS process is owned by VMUSER and that's not good enough. I also manage to run RDS as DOM\Tewr, but then not as administrator. I have added DOM\Tewr as an administrator on VM, but that's not good enough because the process is still not run as administrator. How can I run the RDS process as DOM\Tewr and "As Administrator", while logged on in Windows as VMDOMAIN\VM? (Note: I have tried creating an account with the same credentials/password as VMUSER, as hinted in the MS article above, but with no luck...)

    Read the article

  • Working with different PHP versions at the same time, php_value extension_dir not working?

    - by Gremo
    I need both PHP 5.4.7 and 5.3.17 running on Windows 7 x64 with Apache 2.2.23. This is my virtual host configuration:

        <VirtualHost *:80>
            DocumentRoot "C:/WAMP/Apache/htdocs/php54"
            ServerName php54.local

            PHPIniDir "C:/WAMP/PHP54"
            LoadModule php5_module "C:/WAMP/PHP54/php5apache2_2.dll"
            php_value extension_dir "C:/WAMP/PHP54/ext"

            <Directory "C:/WAMP/Apache/htdocs/php54">
                Order allow,deny
                Allow from all
            </Directory>
        </VirtualHost>

    The PHPIniDir and LoadModule directives work fine, and using phpinfo() inside my script prints the right PHP version. But I need to load extensions, and this is where it fails. php_value extension_dir should be C:/WAMP/PHP54/ext, but it's the default, C:/php. What am I missing here?

    EDIT: Of course I can set this value directly in C:/WAMP/PHP54/php.ini, but I prefer passing it using the vhost configuration:

        ; Directory in which the loadable extensions (modules) reside.
        ; http://php.net/extension-dir
        ; extension_dir = "./"
        ; On windows:
        extension_dir = "C:/WAMP/PHP54/ext"
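
    Two limitations are probably at work here. extension_dir is a PHP_INI_SYSTEM setting, so it can only be set in the php.ini that PHPIniDir points at, never via php_value (which only covers PHP_INI_ALL/PHP_INI_PERDIR settings). And LoadModule is global to the whole Apache server, so only one mod_php build can be loaded regardless of which vhost the line sits in; the usual pattern for running 5.3 and 5.4 side by side is one version as mod_php and the other via CGI/FastCGI. A sketch of the CGI side (paths follow the layout above, but the handler wiring is an assumption and needs mod_actions/mod_cgi enabled):

        # php54.local keeps mod_php, with extension_dir set inside C:/WAMP/PHP54/php.ini

        # php53.local runs the other interpreter as plain CGI
        <VirtualHost *:80>
            DocumentRoot "C:/WAMP/Apache/htdocs/php53"
            ServerName php53.local

            ScriptAlias /php53/ "C:/WAMP/PHP53/"
            Action application/x-httpd-php53 "/php53/php-cgi.exe"
            AddHandler application/x-httpd-php53 .php

            <Directory "C:/WAMP/PHP53">
                Order allow,deny
                Allow from all
            </Directory>
            <Directory "C:/WAMP/Apache/htdocs/php53">
                Order allow,deny
                Allow from all
            </Directory>
        </VirtualHost>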

    Read the article

  • Connections to IIS sometimes get stuck in CLOSE_WAIT state

    - by randomhuman
    Our application includes an ASP.Net web service that only needs to deal with a handful of clients. As such, the 10 incoming connection limit of Windows XP Pro is generally not a problem. However, on one particular server, connections are occasionally becoming stuck in the CLOSE_WAIT state. These connections build up over time and eventually new client connections are refused because the maximum number of connections are used up. From my googling it sounds like a failure of the webservice to properly close the connection can cause this problem, but as it works just fine on hundreds of other Windows XP pro machines I can't see it being a bug in our code. It also ran fine on the affected machine until some shenanigans on the part of the end user (I think they set about deleting duplicate files in order to reduce their disk usage, but they did not exactly come clean about it). What could the user have changed to introduce this problem? Is there any way I can force connections that are in CLOSE_WAIT to time out rather than letting them hang around? I have seen suggestions to reduce TcpTimedWaitDelay, but that only relates to the TIME_WAIT state, and changing it did not have any effect.

    Read the article

  • Force Installing a Radeon HD 2100 on Windows 8

    - by Click Ok
    I'm trying to force-install a Radeon HD 2100 on Windows 8. I've found this link from AMD with the drivers for Windows 7: http://support.amd.com/us/gpudownload/windows/legacy/Pages/legacy-radeonaiw-vista64.aspx I also know that AMD will stop supporting the Radeon HD 4000 series and older: http://www.techspot.com/news/48321-amd-drops-windows-8-support-for-radeon-hd-4000-and-older.html

    Now, let's go to the problem. If I try to install the 12.6 driver, Windows will stick with the "basic display adapter", and this is bad for 3D games like Minecraft, which now run really slowly compared with the previous Windows 7 installation. Force-installing the Catalyst driver can help fix that, so I follow these steps:

    Extract the Catalyst driver (C:\AMD\Support\12-6-legacy_vista_win7_64_dd_ccc_whql)
    Right-click the "basic display adapter" in Device Manager, and choose "Update driver"
    Browse my computer for the driver and pick "Have Disk"
    Point it at "C:\AMD\Support\12-6-legacy_vista_win7_64_dd_ccc_whql\Packages\Drivers\Display\W76A_INF"
    There is a big list of drivers, and the nearest one to the HD 2100 is "Radeon HD 2350 Series"

    My questions: Why isn't "Radeon HD 2100 Series" listed? (Or where is it listed?) In theory it must be listed; the first link above says "This article applies to the following configuration(s): (...) AMD Radeon HD 2000 Series". Am I doing something wrong?

    Read the article

  • Why am I getting programs stuck in log_wait_commit under Linux?

    - by staticsan
    There is something subtly wrong with my Linux install that I just can't locate. It is Ubuntu Lucid Lynx (10.04) 64-bit. Hardware is a Dell Optiplex 960: Intel Core 2 Quad CPU, 8 GB of RAM, 2x 300 GB HDDs. /home is ext2 on one disk and everything else is on the other (/ is also ext3). I have VirtualBox running a 64-bit Vista image for Outlook calendaring, but the heavyweight apps are IntelliJ, NetBeans, MySQL and Opera. Opera also loads my mail (IMAP), of which there are over 10,000 messages.

    The problem is that Opera stalls for a few seconds from time to time. Watching the process list shows it's in log_wait_commit, which means (as far as I have figured out) the filesystem is holding things up. Sometimes I can make this happen by doing a subversion update, but usually it happens for no reason I can see. It usually happens to Opera, but I've seen NetBeans go under, too. It doesn't make the app crash - it's just completely unresponsive for a few seconds.

    Googling has not helped. The closest I got was a suggestion to remove the sync attribute on the file system. This achieved nothing. On the advice of a Linux guru friend, I lowered /proc/sys/vm/dirty_writeback_centisecs to 300, but that didn't do anything, either. And it was all he could think of. What is going on and can I fix it? (And how?)
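
    log_wait_commit is the ext3 journal (jbd) thread waiting for a transaction commit to reach the disk, so applications that call fsync() (Opera's mail and sqlite files, Subversion) block behind whatever else is being flushed at that moment. A few hedged knobs to experiment with (mount points are examples, and data=writeback trades some crash-consistency for latency):

        # see how the filesystems are currently mounted
        grep -E 'ext[23]' /proc/mounts

        # cut needless metadata writes and change the journal commit interval
        sudo mount -o remount,noatime,commit=30 /

        # for a bigger hammer, switch the journaling mode via /etc/fstab, e.g.
        #   UUID=...  /  ext3  noatime,data=writeback,commit=30  0 1
        # (the data= mode generally cannot be changed with a simple remount of /)

    If the stalls line up with VirtualBox or Subversion activity, watching "iostat -x 2" during a stall will show whether one of the two disks is saturating.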

    Read the article

  • Nginx: Rewriting directory path to file

    - by Doug
    I'm a little new to Nginx here, so bear with me. I want to rewrite a URL like foo.bar.com/newfoo?limit=30 to foo.bar.com/newfoo.php?limit=30. It seems pretty simple to do with something like this:

        rewrite ^([a-z]+)(.*)$ $1.php$2 last;

    The part that I am confused about is where to put it - I've tried my hand at some location directives, but I'm doing it wrong. Here's my existing virtual host config; where should I implement my rewrite?

        server {
            listen 80;
            listen [::]:80;

            server_name foo.bar.com;
            root /home/foo;
            index index.php index.html index.htm;

            location / {
                try_files $uri $uri/ /index.html;
            }

            location /doc/ {
                alias /usr/share/doc/;
                autoindex on;
                allow 127.0.0.1;
                deny all;
            }

            location ~ \.php$ {
                try_files $uri =404;
                fastcgi_split_path_info ^(.+\.php)(/.+)$;
                include fastcgi_params;
                fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
                fastcgi_pass unix:/tmp/php5-fpm.sock;
                fastcgi_index index.php;
            }
        }

    Thanks!
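
    One approach that plays nicely with the existing php-fpm location is to let location / fall through to a named location that appends .php and re-runs location matching, rather than putting a bare regex rewrite at server level. A sketch (the @extensionless-php name is arbitrary; query strings like ?limit=30 are carried over automatically):

        location / {
            try_files $uri $uri/ @extensionless-php;
        }

        location @extensionless-php {
            rewrite ^(.*)$ $1.php last;   # re-enters location matching, hits the \.php$ block
        }

    Because the existing location ~ \.php$ already does try_files $uri =404, a request for /newfoo that has no matching newfoo.php ends up as a clean 404 instead of looping.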

    Read the article

  • Expanding raidz vdev

    - by Blubber
    I'm currently planning on installing FreeBSD 9 on my home server. The machine has 4x 1.5 TB disks, and at some point, when HDD prices drop, I'll be upgrading to something bigger, perhaps 3 TB. The disks are connected to an IBM ServeRAID M1015 in IT mode; this card has room for up to eight disks. Now here is the problem: currently the 4x 1.5 TB will be connected to the M1015. Then, when prices drop, I'll be adding something like 4x 3 TB, also connected to the M1015. No problem yet; I can just run two raidz2 vdevs and put them in the same pool. But at some point the 1.5 TB disks will start to break, or I will have to upgrade them when the pool runs out of space. So I started researching whether it's possible to expand a raidz vdev, and I found several pages explaining the same procedure, like this one on SF: How to upgrade a ZFS RAID-Z array to larger disks on OpenSolaris?. So I went ahead and tried that in VMware: I installed FreeBSD 9 and created six virtual disks, three of 1 GB each and three of 10 GB each. After building a raidz vdev out of the 1 GB disks, I replaced them one by one with the 10 GB ones, but the pool did not increase in size. Is this a limitation of the ZFS implementation in FreeBSD? Or am I just doing something wrong?
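
    Most likely nothing is wrong with the replacement procedure itself: by default a pool does not grow into the extra space until it is told to. The autoexpand pool property (off by default) or a per-device "zpool online -e" takes care of that. A sketch, with hypothetical pool and disk names:

        # either set this once, before (or after) swapping the disks...
        zpool set autoexpand=on tank

        # ...or nudge each already-replaced member by hand
        zpool online -e tank da0
        zpool online -e tank da1
        zpool online -e tank da2

        # the new capacity should now show up
        zpool list tank

    Note that the vdev only grows once every member has the larger size, so in the 1 GB/10 GB test the pool stays small until all three disks have been replaced and resilvered.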

    Read the article
