Search Results

Search found 17973 results on 719 pages for 'x frame options'.

  • Windows 7 32bit resolution is limited for HDTV monitor

    - by Nick
    I have a small Magnavox HDTV that I am using to test a Frankenstein PC build. The goal is to eventually connect to my old rear-projection HDTV, which supports 1080i via component input. The goal is also not to buy any more stuff; otherwise I will just buy a smart TV and be done. I have an ATI Radeon HD 3450 with a YPbPr component-out adapter. The monitor supports 1080p, but over analog component out it should only go up to 1080i. I have had this working with another setup. On this particular setup I have Windows 7 32-bit with the latest 12.8 Catalyst drivers installed. The Windows splash screen starts in 480p, then switches to 480i when the login prompt is shown. When I try to change the resolution, 720x480 is the maximum value of the slider. I have also tried "List All Modes", and that also maxes out at 720x480. There are two options for this monitor in the devices section, Generic PnP Monitor and Generic Non-PnP Monitor; neither setting fixes this. Any ideas on how to get 1080i?

  • Relative path incorrect in the view layer when hosting a rails3 app in a subdirectory using passenger and apache

    - by Saifis
    I want to host multiple Rails apps on a single server using sub-directories, and I have run into some relative-path problems. I made a symbolic link to the app's public directory and placed it in /var/www/html:

        /var/www/html/test_app  (symbolic link to the public folder of test_app)

    and set up Apache like so:

        LoadModule passenger_module /usr/local/lib/ruby/gems/1.9.1/gems/passenger-3.0.12/ext/apache2/mod_passenger.so
        PassengerRoot /usr/local/lib/ruby/gems/1.9.1/gems/passenger-3.0.12
        PassengerRuby /usr/local/bin/ruby

        <VirtualHost *:80>
            ServerName test.com
            DocumentRoot /var/www/html
            Options Indexes FollowSymLinks -MultiViews
            RailsBaseURI /test_app
            </Location>
        </VirtualHost>

    The links in the app itself work just fine; they all acknowledge the test_app/ directory. However, when it comes to showing images from the public directory in a view, the relative path goes wrong. Say I have /system/files/1/aaa.png; Apache goes looking for it in /var/www/html/system/files/1/aaa.png rather than /var/www/html/test_app/system/files/1/aaa.png. As far as I understand, this is an Apache setting problem rather than something to be done in Rails; if possible, I would prefer to have it contained in Apache's conf file rather than having to alter the code.
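
    A hedged aside on the symptom above: two things stand out. The stray </Location> in the posted config has no matching opening tag, which would normally make Apache refuse to start, so the config shown may not be the one actually loaded. And since the bare /system/... URLs never pass through the /test_app prefix, one Apache-side workaround is to alias that prefix back into the app's public directory. This sketch is built only from the paths in the question, not a tested fix:

        # hypothetical addition inside the <VirtualHost> block
        Alias /system /var/www/html/test_app/system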

  • Cannot change power button or lid close action

    - by Mark Henderson
    I have a Samsung 900x laptop and I want to change it so that when I close the lid, nothing happens (I often close the lid to carry it somewhere 10 seconds away, and by putting it into suspend that cancels any active downloads, etc.). Easy, right? Go to Power Options and change it there, just like on every other laptop in the world. Not so fast: Say what?! That message only shows up for the nodes for Lid Close Action, Power Button and Sleep Button. I can change every other setting except for those three. I'm definitely an Administrator on the computer, and I've googled the error and found dozens of hits on other crappy forums, but of course nothing on those worked (otherwise, I wouldn't be here). And as usual, the "Why can't..." hyperlink gives no useful information whatsoever (just a generic Help document). So, how can I change what closing the lid does? I will modify the registry directly if I have to.
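
    Since the asker is willing to skip the GUI, here is a hedged sketch using the built-in powercfg tool, which writes the same power-scheme values the Power Options dialog does. SCHEME_CURRENT, SUB_BUTTONS and LIDACTION are standard powercfg aliases; whether this bypasses whatever is greying out the GUI on this particular machine is an assumption:

        :: 0 = do nothing, 1 = sleep, 2 = hibernate, 3 = shut down
        powercfg -setacvalueindex SCHEME_CURRENT SUB_BUTTONS LIDACTION 0
        powercfg -setdcvalueindex SCHEME_CURRENT SUB_BUTTONS LIDACTION 0
        powercfg -setactive SCHEME_CURRENT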

  • Debian: Unable to mount a second drive as a subdirectory inside of another partition.

    - by jkndrkn
    Hello. I have the following /etc/fstab:

        # /etc/fstab: static file system information.
        #
        # <file system> <mount point>      <type>       <options>                  <dump> <pass>
        proc            /proc              proc         defaults                   0      0
        /dev/md1        /                  ext3         defaults,errors=remount-ro 0      1
        /dev/md0        /boot              ext3         defaults                   0      2
        /dev/md5        /home              ext3         defaults                   0      2
        /dev/md3        /opt               ext3         defaults                   0      2
        /dev/md6        /tmp               ext3         defaults                   0      2
        /dev/md2        /usr               ext3         defaults                   0      2
        /dev/md4        /var               ext3         defaults                   0      2
        /dev/md7        none               swap         sw                         0      0
        /dev/sdc        /home/httpd        ext3         defaults                   0      2
        /dev/hda        /media/cdrom0      udf,iso9660  user,noauto                0      0
        /dev/sdc1       /mnt/usb/backup-1  auto         defaults                   0      0

    I am unable to get /dev/sdc to mount at /home/httpd on reboot. The /home/httpd directory exists. Mounting via mount -t ext3 /dev/sdc /home/httpd works just fine. Mounting via mount -a generates the following error message:

        mount: you must specify the filesystem type

    This is, incidentally, the same message that I see while booting. The error message goes away if I comment out the line in fstab starting with /dev/sdc.
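
    Worth noting: the fstab above lists both the whole disk /dev/sdc and a partition /dev/sdc1, and "you must specify the filesystem type" is what mount reports when its probe finds no recognizable filesystem signature on the named device. A quick hedged check with blkid (part of util-linux on stock Debian) would show which of the two nodes actually carries the ext3 superblock:

        blkid /dev/sdc /dev/sdc1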

  • PXELinux and compressed kernels/images

    - by Yvan JANSSENS
    Is it possible to boot compressed kernels with a compressed initrd with PXELINUX? First, a little background: we created a custom Linux distro for diskless OpenCL computing nodes, and we want those nodes to fetch their OS from the network. Our distro is composed of a kernel (duh) and a large initrd which is loaded into RAM, with everything executed from there. We chose to run everything off the initrd for two reasons:

    - NFS was not an option for serving the filesystem's extra contents.
    - Fast file access from RAM.

    No persistent storage is needed; data and config are pulled dynamically through a SOAP service. Now, our initrd is about 450M in size. At our network speeds, it takes about two to three minutes to load a single client. Will compression speed up the downloading, and if yes, which one should be used? Is LZMA supported by PXELINUX, or do we need to stick to bzip2 or gzip? Because of the 2-3 minute loading time, booting 15 nodes over the same network link takes quite a lot of time. We decided not to use hard drives or CD/DVD drives for financial reasons (the cheapest HDD at €30, times 15, is a lot of money saved ;-) ). So, our question is: what compression options are available for this setup? And how do we do this? Thank you for your time! Yvan Janssens
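
    One relevant detail: PXELINUX treats the initrd as an opaque blob; it only copies it into memory, and it is the kernel that decompresses it. So LZMA/XZ support is a question of the kernel's CONFIG_RD_LZMA/CONFIG_RD_XZ options rather than of PXELINUX itself. A hedged repacking sketch, assuming the current initrd is a gzip-compressed cpio archive:

        # repack a gzip initramfs as xz; the kernel's unpacker wants crc32 checksums
        zcat initrd.img | xz -9 --check=crc32 > initrd-xz.img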

  • How do I speed up and cache mmap file access over NFS on Linux?

    - by Zan Lynx
    The server and client are both 64-bit Ubuntu 10.04 LTS. The application in question is a custom app that uses mmap() for fast random file access; its ideal state is when the entire file is cached in RAM. The network connections are really fast 10Gb Ethernet, in a virtual server blade setup. It isn't the network connections slowing things down, because everything performs superbly when using a virtual disk (iSCSI to the SAN). But when we run the application on an NFS home directory mount, performance goes to the dogs. It appears that the Linux kernel isn't caching anything, so it is reading every single disk block needed by mmap() accesses over and over and over again. The NFS mount is done through autofs, which has only default settings. /proc/mounts shows the NFS mount is done with the following options:

        rw,relatime,vers=3,rsize=131072,wsize=131072,namlen=255,hard,proto=tcp,timeo=600,retrans=2,sec=sys,mountaddr=192.168.11.52,mountvers=3,mountproto=tcp,addr=192.168.11.52

    How can I make Ubuntu 10.04 cache the file instead of reloading it all the time?
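
    One option the question doesn't mention is FS-Cache, which gives NFS a persistent local cache via the fsc mount option and the cachefilesd daemon (packaged for Ubuntu 10.04). A hedged fstab-style sketch with made-up export paths; the autofs map would carry the same options, and whether FS-Cache helps this particular mmap() access pattern is untested:

        # route NFS pages through the local cache; requires cachefilesd running
        192.168.11.52:/export/home  /home  nfs  rw,hard,intr,fsc  0  0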

  • Viewing local websites on my iOS device over Wi-fi

    - by John
    Trying to view some local html/css/js files in a mobile browser on my iOS device. Thought maybe file sharing would be an option, and it is, but I'm not completely satisfied with it; any time I try to do the following, an error occurs. Web sharing is on and available at http://192.168.1.101/~user, but I have to manually copy the files in. If I try to symlink a folder in, so that it could be viewed at ~user/some_dir, by issuing

        $ ln -s /Users/user/dev/some_dir ~/Sites/

    then I get a 403 Forbidden error. I've tried to remedy this by modifying a user.conf file in /private/etc/apache2/ using the following syntax:

        <Directory "/Users/user/Sites/">
            Options Indexes MultiViews SymLinks
            AllowOverride None
            Order allow,deny
            Allow from all
        </Directory>

    but nope, still doesn't work; I get a 403 error. If I try to symlink each individual file in instead of using a directory as a sub-directory, same error. Any help would be greatly appreciated! I'd just like to symlink directories into ~/Sites and browse them on my iOS device over Wi-Fi. I'm on OS X 10.7 Lion trying to connect with iOS 5.
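
    One detail worth flagging in the block above: SymLinks is not a valid argument to the Options directive; the Apache keyword is FollowSymLinks, and an unknown option normally makes httpd reject the config at startup, so the block may never have taken effect. A corrected sketch (note the symlink target's parent directories must also be readable and traversable by the _www user):

        <Directory "/Users/user/Sites/">
            Options Indexes MultiViews FollowSymLinks
            AllowOverride None
            Order allow,deny
            Allow from all
        </Directory>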

  • Windows 7 explorer crashing trying to read external hard disk

    - by Mario De Schaepmeester
    I have a 1TB Western Digital hard drive which is almost full, and the last time I plugged it into my laptop I got a Windows dialog saying "this hard drive needs to be formatted". I did not panic, because I have experienced things like this before and I know it's often solved by simply re-inserting the drive. Now, however, whenever I plug it in and try to browse it in Explorer by going to "Computer", the Explorer process crashes after a while. I simply close Explorer, since it takes ages trying to read the disk and nothing happens. After searching on the internet, the best thing to do seemed to be a chkdsk. I tried it via Properties in Explorer (which also took a good 5 minutes to open up); that locks up as well, and after waiting a couple of minutes it says there's no access to the disk, so a chkdsk is not possible... I want to make clear that I always use safe removal before pulling out the USB cable. Last time, however, safe removal just would not work, and when trying to shut down Windows, the logoff screen just would not disappear (I waited at least 10 minutes or so), so I powered off the PC by force. This may be the cause of the problems, but the disk was still recognised immediately after that. I really don't want to format this thing, because it contains C: clones of 3 computers and a lot of other stuff that I don't want to re-copy. What would be the best course of action?

    Update: I got chkdsk working via the command line, using the /F and /R options. I already got a bunch of lines saying "file record segment X is unreadable", or whatever it is in English; my OS is Dutch. It looks bad... Will chkdsk repair these errors?

  • Configure New Server for .htaccess

    - by Phil T
    I have a new LAMP CentOS 5 server I am setting up, trying to copy the configuration from another web server I have. I am stuck with what I think is a mod_rewrite problem. If I go to http://old-server.com/any_page_name.php, it correctly routes through some handling code in index.php and shows me a graceful "Page Cannot Be Displayed" message. But if I go to http://new-server.com/any_page_name.php, I get an ugly Apache 404 Not Found error message. I looked in both httpd.conf files, and they both have only one reference to mod_rewrite:

        LoadModule rewrite_module modules/mod_rewrite.so

    So it seems like that should be fine. At the bottom of httpd.conf I have the code:

        <VirtualHost *:80>
            ServerAdmin [email protected]
            DocumentRoot /var/www/html
            ServerName new-server.com
            ErrorLog logs/new-server.com-error_log
            CustomLog logs/new-server.com-access_log common
        </VirtualHost>

    Then in the root of /var/www/html I have the exact same .htaccess file, which looks like this:

        RewriteEngine on
        Options +FollowSymlinks
        RewriteCond %{REQUEST_FILENAME} !-f
        RewriteCond %{REQUEST_FILENAME} !-d
        RewriteRule . index.php [L]
        ErrorDocument 404 /page-unavailable/
        <files ~ "\.tpl$">
            order deny,allow
            allow from none
            deny from all
        </files>

    So I don't see why the page load at old-server.com works fine while new-server.com doesn't route through index.php like I want it to. Thanks.
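
    One thing the config shown never covers is AllowOverride, and a stock CentOS 5 httpd.conf ships with AllowOverride None for /var/www/html, which makes Apache ignore the .htaccess entirely; that would produce exactly this raw 404. A hedged sketch of the likely change, assuming the default <Directory> block is still in place:

        <Directory "/var/www/html">
            AllowOverride All
        </Directory>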

  • PHP potential issues with compiling 5.3.8 extensions against RHEL 6 / CentOS 6 PHP 5.3.3 package

    - by user101203
    I'm working on getting a Red Hat 6 LAMP server going, and while the PHP that comes with it has many extensions we use, it doesn't have all of them. To solve this, I was thinking about one of the following:

    1. Compiling the PHP extensions that come in the ext folder of the downloadable source code of PHP 5.3.3 from php.net.
    2. Same as #1, but using the extensions from the latest PHP version (currently 5.3.8).
    3. Do #1, but manually decide which updates to backport from the latest version of the PHP extensions into the older version, and then compile the backported result.

    A drawback to #1 is that security and bug fixes come out which we wouldn't be able to take advantage of. A drawback to #3 is that it might be a lot of work. Does anyone know what the drawbacks to #2 are? I don't want to go down that route if it might result in some unexpected negative outcomes. Also, are there any other drawbacks to the other options, or a better way to go altogether? I want to use the PHP 5.3.3 which comes with the Linux distro because I don't want us to get to a place again where we are forced to upgrade to a new version of PHP to stay on top of security updates, like from PHP 5.2.x to 5.3.x, and there be backwards-incompatible changes (this is the situation we're in now, with PHP 5.2.x no longer being supported).
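
    For options 1 or 2, the usual mechanics are the phpize workflow, which builds a single extension from a PHP source tree against the installed PHP's headers (the php-devel package must be present). A sketch with a placeholder extension name:

        cd php-5.3.x/ext/someext   # someext is hypothetical
        phpize                     # generate ./configure from config.m4
        ./configure && make
        sudo make install          # installs someext.so into the extension dir

    The risk with option 2 sits exactly here: phpize compiles 5.3.8 extension code against the 5.3.3 API/ABI, which usually works within a patch series but is not guaranteed.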

  • Kindle (client) for Mac--text search or highlighting/notes?

    - by doug
    Just so we're clear, I'm talking about the client/software version here (i.e., the one you install on your Mac or PC), not the device. The Kindle client was recently released for the Mac. I downloaded it and bought a couple of Kindle-edition books to view with it. Astonishingly, two features I consider more or less essential to any ebook reader are missing from the Kindle client, or else I can't find them: (i) text searching, and (ii) highlighting text. First, does anyone know how to access the search feature? I'm aware of the "Go To" button at the top middle of the reader window; the options in that menu when you click the button are "Cover", "Table of Contents", "Beginning" and "Location". "Location" requires that you type in an integer (but it doesn't correspond to a page number; e.g., typing "167" brought me to the table of contents), not a search term. Second, there's a button in the upper right-hand corner of the window, "Show Notes and Marks", yet I can't find any way to highlight text. The only kind of "note" or "mark" I have been able to record is to "bookmark" a page by clicking the "bookmark" button, also at the top of the window.

  • Command line switching

    - by Larry
    I have read through some suggestions, but I am just not technical enough to get this, I think. I am a CAD designer, and each file has 5 files associated with it. I have 3 sets of 5 files, and each set needs to go into its own zip file, placed on a separate server. For example:

        "C:\Program Files\7-zip\7z.exe" a file1.zip "O:\server2\map files\BC\BC.d*"-0
        "C:\Program Files\7-zip\7z.exe" a file2.zip "O:\server2\map files\BC\ON.d*"-0
        "C:\Program Files\7-zip\7z.exe" a file3.zip "O:\server2\map files\BC\AB.d*"-0

    and I am in directory "S:\server\map files\provinces" (for example). These lines run within an existing batch file, and by the time it reaches the three lines above it's in the S: directory given in the example. So it's looking on my PC for the 7-Zip program and creating 3 zip file names, which it does, but it should place those zip files on a separate server, which it doesn't. Also, the first zip file includes all the other 10 files, the second zip file the same plus the first zip file, and the third the same with the other two zip files; this makes me think the code isn't recognizing the part after file1.zip where I am trying to tell it which files to include and where to place the zip files. Ultimately, I want to either have the system create a new zip file if the old one was deleted, or copy the new files into the existing zip and overwrite any older files, and have these zip files placed in a separate location, which is where we share our files with other personnel from within our company. The S: drive is for all originals, and O: is for sharing. Is there a list of all switching options with many different samples?
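
    For reference, 7-Zip's grammar is 7z <command> <archive> <files...>: the first non-switch argument after the command is the archive to create or update, and the rest are the files to add. A hedged guess at what the three lines were meant to do, with the archives written to the O: share and the update command (u), which creates the archive if missing and refreshes older entries if not; only the paths are taken from the question:

        "C:\Program Files\7-zip\7z.exe" u -tzip "O:\server2\map files\BC\file1.zip" "BC.d*"
        "C:\Program Files\7-zip\7z.exe" u -tzip "O:\server2\map files\BC\file2.zip" "ON.d*"
        "C:\Program Files\7-zip\7z.exe" u -tzip "O:\server2\map files\BC\file3.zip" "AB.d*"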

  • Connect USB hard drive to wireless router on RJ45 port? Possible?

    - by lawphotog
    Just a quick story behind this: I was trying to set up a wireless networked hard drive at home. My wireless router doesn't take USB, so I am considering a few options. First I was considering getting something like WD My Cloud. My router is an old one provided by the service provider, and it only has 10/100 Ethernet, while WD My Cloud has a Gigabit interface; so unless I change to a new router, data transfer will be slow, and upgrading the router is a must if I want fast transfer speeds. Plus, I already own an external hard drive with a USB 3.0 interface. So if I get a router like the Netgear D6300, I can get a decent-speed wireless shared drive at home, and I can use my existing HDD instead of WD My Cloud. But the router isn't cheap, so I am saving up for that. In the meantime I found out about the existence of USB-to-RJ45 adaptors. I read the reviews, and some say it works for them while for some it doesn't; they didn't really say what they were trying to do, so I'm confused. So if I bought an adaptor like this, could I connect my existing HDD (USB) to my existing router (RJ45) and use it as a shared drive for data transfer? I know it will be slow, as the adaptor will only have USB 2.0 and 10/100 Ethernet, but that's fine as a temporary measure until I get my new router.

  • Apache, Permissions, and Convenience

    - by Mike
    I'm on Mac OS X and I have apache2 installed via MacPorts, running as the _www user. I have some files I want to serve in the /Users/Me/Documents/abc folder. Right now, though, the permissions of /Users/Me/Documents are 700, so _www can't get in, even if abc is chmod 777. I recognize the following options:

    1. Allow _www access to my Documents folder.
    2. Put the files I want to share outside of my Documents folder.
    3. Hard-link the files outside of my Documents folder, and point Apache at the hard links.

    None of these solutions is acceptable to me, however. I don't feel safe allowing _www access to my entire Documents folder. I really want to keep the files in my Documents folder for other reasons. The files are changing all the time, so hard-linking would not always reflect the right file structure, and, as I understand it, you can't hard-link a directory (though, if you could, that would solve it). Any ideas for a solution? Is there a way to run a few httpd processes as my user account so they can get in there? Or is there some way to hard-link a directory, or some way to get httpd to follow a symlink past a directory that is 700 and not owned by _www? Thanks!
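
    There is a middle ground the list above skips: on Unix, execute permission on a directory grants traversal without read, so another user can pass through Documents to a known path without being able to list or open anything else in it. A sketch, assuming only abc should be exposed:

        chmod o+x /Users/Me/Documents          # traverse only: no listing, no reads
        chmod -R o+rX /Users/Me/Documents/abc  # readable; X sets execute on dirs only

    After this, _www can serve /Users/Me/Documents/abc while the rest of Documents stays opaque to it.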

  • Diff bios - corrupt video driver

    - by sfonck
    Hi, I'm using a Dell Precision M90 laptop which has an NVIDIA Quadro FX 2500M graphics card and is running Windows XP. The laptop had been running fine, but a few weeks ago the screen went 'white'. After restarting the computer, the BIOS and startup screens show weird green dots and stripes, and normal startup only shows a black screen; only VGA mode manages to display anything. I tried removing and reinstalling the correct drivers downloaded from Dell's website, with no solution. I gave up and reinstalled XP, and everything was working perfectly again. Two weeks later, again the white screen. I tried everything again (flashing a new BIOS too; nothing worked) and reinstalled XP; everything was working again, so I made a DriveSnapShot image of the partition. Today, again the 'white screen'. OK, no problem... I was thinking all I needed to do was restore the DriveSnapShot backup. A few minutes later the backup was restored... but guess what: the video driver does not work correctly. As DriveSnapShot restored the complete partition, exactly as it was at the time everything was working perfectly, this would mean my driver problems are due to 'settings' in the BIOS or on the graphics card itself, and that these 'settings' can get overridden by doing a new XP install. I'm out of options; can somebody help me find a solution for this problem?

    - Is there some way to back up and restore a BIOS after seeing such problems?
    - Is there some way to know what is causing this problem, like a BIOS diff utility?

    Thanks!

  • Wildcard subdomain setup: changing the host IP throws off client A records... what to do?

    - by Joe
    Here is the current setup (in a nutshell). The site is set up with a wildcard subdomain, so *.website.com is accessible. Clients can then domain-map their own domains with an A record to the server IP address, and it will translate to the appropriate *.website.com with redirections and environment variables in .htaccess. Everything is working perfectly... but now comes the problem. The site has grown larger than a single DQC Xeon server can handle at peak times. Looking at cloud options seems tempting, but clients are pointing their domains to a single IP address with the A record (our server). Now, this was probably bad planning from the start, but the question is: if this were to be done today, how would we set it up so that clients use a CNAME, perhaps, to point their domains to our server rather than an A record? And, if that is not possible for the root domain, how can we then use multiple IP addresses on our side to handle the incoming HTTP requests? Complex enough? Hope I've explained it well!
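
    For background on why the apex is the sticking point: DNS does not allow a CNAME at the zone root, since it cannot coexist with the SOA and NS records that must live there; subdomains are free to alias. One hedged pattern, with placeholder names and a documentation-range IP, is to have clients point www at a hostname you control, so only one A record moves when the infrastructure changes:

        ; client's zone
        www.clientdomain.com.  IN CNAME  apps.website.com.
        ; your zone: repoint this single record when servers change
        apps.website.com.      IN A      203.0.113.10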

  • Install Debian stable linux ISO from USB to dual boot Windows

    - by tgkprog
    I want Debian as dual boot with my Windows Vista. I freed up 50GB on my D: drive and plan to use 40GB for the Debian install and 6GB for swap space. I have a 16GB USB drive. I downloaded http://unetbootin.sourceforge.net/ and the DVD files of stable Debian (debian-7.0.0-amd64-DVD-1.iso, debian-7.0.0-amd64-DVD-2.iso and -3). After I choose HD install, UNetbootin says to place the ISO in the same place, but I have 3. Do I need to merge them? If so, is there any freeware to do that? Can I do it with 7-Zip? When I extract with 7-Zip there are clashes between the 3 ISO files. Just overwrite? Any options to merge (format etc. for 7-Zip)? I tried to keep the 3 files with the other UNetbootin files but get an error message. Files I have on my USB:

        06/30/2013 11:44 PM      2,835,648  ubnkern
        06/05/2013 12:14 AM  3,998,007,296  debian-7.0.0-amd64-DVD-1.iso
        06/04/2013 03:30 PM  4,696,872,960  debian-7.0.0-amd64-DVD-2.iso
        06/05/2013 01:25 AM  4,698,955,776  debian-7.0.0-amd64-DVD-3.iso
        06/30/2013 11:45 PM      6,530,278  ubninit
        06/30/2013 11:46 PM            155  syslinux.cfg
        06/30/2013 11:46 PM         60,928  menu.c32

    Also, I can only copy the above files if I format my USB as NTFS; on FAT32 it says the .iso is too large to copy. How do I get around that? My internet needs a login, so I cannot do a net install.

  • Ubuntu problem - monitor out of range

    - by Kelp
    Hello, I am using an external monitor for my laptop to run Ubuntu with. I just updated Ubuntu today, but when it is about to reach the Ubuntu login screen, the monitor says "out of range". Ubuntu boots up into the GUI if I unplug the monitor and use my laptop screen, but I prefer to use the external display. I have tried all of the suggestions from my search results in Google. I tried pressing Ctrl + Alt + +, but nothing happens. I tried pressing Ctrl + Alt + -, but nothing happens. I used Ctrl + Alt + F2 to get into a terminal to run the command sudo dpkg-reconfigure xserver-xorg, but nothing happens; I believe there are supposed to be options to change the settings, but it does not even give me any. I tried to edit /etc/usplash.conf with nano, but it does not exist. I did sudo apt-get update and sudo apt-get upgrade, hoping that they would install drivers or something to help my situation, but they did not help. My monitor is a Westinghouse 22" LCD with resolution 1680x1050. It has been working for the past few months, until I updated today.
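
    While booted on the laptop panel, one more avenue is forcing a mode on the external output with xrandr. A hedged sketch: the output name VGA1 and the choice of the monitor's native 1680x1050 are both assumptions, so list the real names first:

        xrandr                                 # show outputs and available modes
        xrandr --output VGA1 --mode 1680x1050  # force the native mode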

  • How to Access User Directory shared by Apache on OS X Mountain Lion?

    - by schluchc
    When trying to access the local user web page on localhost/~username, I get a "403 Forbidden". The system web page in /Library/WebServer/Documents is accessible on localhost/, though, so I assume Apache is working fine. I know that this problem has been discussed several times, also on Super User. I implemented and checked all I could find, but I still couldn't solve the problem, and would be glad if someone had a suggestion for this particular case. sudo apachectl -t returns Syntax OK. I have a username.conf file in /etc/apache2/users/:

        <Directory "/Users/username/Sites/">
            Options Indexes MultiViews FollowSymLinks
            AllowOverride AuthConfig Limit
            Order allow,deny
            Allow from all
        </Directory>

    as proposed here [Super User] and in several other tutorials. The permissions of the username.conf file are -rw-r--r-- root wheel, as they should be. The httpd.conf is unchanged and therefore contains the line Include /private/etc/apache2/extra/httpd-userdir.conf. That file in turn contains:

        UserDir Sites
        Include /private/etc/apache2/users/*.conf
        <IfModule bonjour_module>
            RegisterUserSite customized-users
        </IfModule>

    So the httpd*.conf files should be OK. The permissions of /Users/username/Sites are drwxr-xr-x 10 username staff, and -rw-r--r--@ 1 username staff for the index.html. In the error log I simply get:

        [Sun Nov 25 22:14:32 2012] [error] [client 127.0.0.1] (13)Permission denied: access to /~username/ denied

    And yes, after each change I did sudo apachectl restart. Any help on how to solve the problem, or how to further analyze it, would be highly appreciated!

  • Can't ssh from CentOS 6.5 to SUSE LINUX 10.1

    - by Pavel Tankov
    We have a quite old installation of SUSE LINUX 10.1 (i586) in the office. The problem, shortly: I can successfully ssh to it from machines in the same LAN (192.168.1.0) and not from others (that are in 10.23.0.0). The SuSE has SSH server openssh-4.2p1-18.12. I have ruled out the firewall and the hosts.allow and hosts.deny files. When my ssh login attempt fails, here is what the logs say. On the client:

        $ ssh -vvv 192.168.1.5
        OpenSSH_5.3p1, OpenSSL 1.0.1e-fips 11 Feb 2013
        debug1: Reading configuration data /etc/ssh/ssh_config
        debug1: Applying options for *
        debug2: ssh_connect: needpriv 0
        debug1: Connecting to 192.168.1.5 [192.168.1.5] port 22.
        debug1: Connection established.
        debug1: identity file /home/nbuild/.ssh/identity type -1
        debug1: identity file /home/nbuild/.ssh/identity-cert type -1
        debug1: identity file /home/nbuild/.ssh/id_rsa type -1
        debug1: identity file /home/nbuild/.ssh/id_rsa-cert type -1
        debug1: identity file /home/nbuild/.ssh/id_dsa type -1
        debug1: identity file /home/nbuild/.ssh/id_dsa-cert type -1

    On the server:

        Aug 21 16:34:25 serverhost sshd[20736]: debug3: fd 4 is not O_NONBLOCK
        Aug 21 16:34:25 serverhost sshd[20736]: debug1: Forked child 20739.
        Aug 21 16:34:25 serverhost sshd[20736]: debug3: send_rexec_state: entering fd = 7 config len 403
        Aug 21 16:34:25 serverhost sshd[20736]: debug3: ssh_msg_send: type 0
        Aug 21 16:34:25 serverhost sshd[20736]: debug3: send_rexec_state: done
        Aug 21 16:34:25 serverhost sshd[20739]: debug1: rexec start in 4 out 4 newsock 4 pipe 6 sock 7
        Aug 21 16:34:25 serverhost sshd[20739]: debug1: inetd sockets after dupping: 3, 3
        Aug 21 16:34:25 serverhost sshd[20739]: debug3: Normalising mapped IPv4 in IPv6 address
        Aug 21 16:34:25 serverhost sshd[20739]: Connection from 10.23.1.11 port 44340

    The above log on the server is with DEBUG3 log level enabled. However, with the default log level (INFO), the only thing the server logs is this:

        Aug 21 16:38:32 serverhost sshd[20749]: Did not receive identification string from 10.23.1.11

    Any hints? I feel I've tried everything already.

  • Virtual Host Configuration and mod_rewrite - Removing PHP Extension and Adding Forward Slash

    - by nicorellius
    On my production server, things are fine: PHP extension removal and trailing-slash rules are in place in my .htaccess file. But locally, this isn't working (well, partially, anyway). I'm running Apache2 with a virtual host for the site in question. I decided not to use the .htaccess file in this case and just add the rules to the httpd-vhosts.conf file instead, which, I've heard, if possible on your server, is a better way to go. The virtual host is working, and the URL I use for my site is like this: devserver:9090. Here is my httpd-vhosts.conf file:

        NameVirtualHost *:9090

        # for stuff other than this site
        <VirtualHost *:9090>
            ServerAdmin admin@localhost
            DocumentRoot "/opt/lampstack/apache2/htdocs"
            ServerName localhost
        </VirtualHost>

        # for site in question
        <VirtualHost *:9090>
            ServerAdmin admin@localhost
            DocumentRoot "/opt/lampstack/apache2/htdocs/devserver"
            ServerName devserver
            <Directory "/opt/lampstack/apache2/htdocs/devserver">
                Options Indexes FollowSymLinks Includes
                AllowOverride None
                Order allow,deny
                Allow from all
            </Directory>
            <IfModule rewrite_module>
                RewriteEngine ON
                # remove PHP extension and add trailing slash
                # note - this doesn't work for directories, and throws 404
                # TODO - fix so directories use index.php
                RewriteCond %{REQUEST_FILENAME} !-d
                RewriteCond %{THE_REQUEST} ^GET\ /[^?\s]+\.php
                RewriteRule (.*)\.php$ /$1/ [R=302,L]
                RewriteCond %{REQUEST_FILENAME} !-d
                RewriteRule (.*)/$ /$1.php [L]
                RewriteCond %{REQUEST_FILENAME}.php -f
                RewriteCond %{REQUEST_FILENAME} !-d
                RewriteRule .*[^/]$ /$0/ [R=302,L]
            </IfModule>
            # error docs
            ErrorDocument 404 /errors/404.php
        </VirtualHost>

    The problem I'm facing is that when I go to directories on the site, I get a 404 error. So, for example, devserver:9090/page.php goes to devserver:9090/page/, but going to a directory (that has an index.php), like devserver:9090/dir/, throws the 404 error page. If I type in devserver:9090/dir/index.php I get devserver:9090/dir/index/ and the contents I want appear... Can anyone help me with my rewrite rules?
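
    Picking up the TODO in the config above, one hedged fix is to short-circuit real directories before the trailing-slash rule rewrites them to .php, letting mod_dir's DirectoryIndex serve dir/index.php. An untested sketch, placed right after RewriteEngine ON:

        # if the request maps to an actual directory, stop rewriting (a guess
        # at the missing case; mod_dir then applies DirectoryIndex)
        RewriteCond %{REQUEST_FILENAME} -d
        RewriteRule ^ - [L]

    One wrinkle: in per-vhost context mod_rewrite runs before the URL is mapped to the filesystem, so %{REQUEST_FILENAME} may not resolve there; testing %{DOCUMENT_ROOT}%{REQUEST_URI} -d is the usual substitute.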

  • Performance-optimizing Oracle 10g on a server that is also a Tomcat JSP app server?

    - by PKHunter
    I have inherited a simple RedHat 5, 64-bit platform. It has SCSI disks in RAID1, 16GB of RAM and a dual-core CPU, running Oracle 10g Release 2. This would be a decent platform for running the DB only, perhaps, but the same server, in a very simple "A-A mode" clustering, also runs Tomcat, and there are several Java servlets running on this. Sadly there is no caching platform; we only use an external CDN for some HTML caching. I am personally more familiar with web environments on the LAMPP platform (Apache, PHP, MySQL, PostgreSQL).

    PROBLEM: Because the server runs both Tomcat JSP/Java and Oracle 10g, with no caching, I have issues with the server going down. Often, sadly.

    QUESTION: What are my options in terms of improving the performance of all these different apps?

    - Connection pooling? For example, in the PostgreSQL world we have PgBouncer, which really helps things. Does Oracle have something similar? Or is there a well-known Java-based external pooler that people use in production environments? (I'm not familiar with Java.)
    - Any "SQL cache", as in the MySQL and PostgreSQL world?
    - Any other kind of application cache, like "APC" or "eAccelerator" in the PHP world? The "OSCache" stuff from the Java world (a JSP thing I found on Google: http://onjava.com/pub/a/onjava/2005/01/05/jspcache.html?page=2)? What else?

    Sorry if this is a noob question. I have googled and googled, but the problem is I don't know what to google for, other than the broad general concepts above. So if not full answers, I would even appreciate basic pointers, and I am happy to JFGI myself. Thanks!
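
    On the pooling question specifically: Tomcat can pool JDBC connections itself through a JNDI DataSource, so servlets reuse connections instead of opening one per request. A hedged sketch for conf/context.xml; the resource name, credentials, SID and pool sizes are all placeholders:

        <!-- hypothetical Resource entry in Tomcat's context.xml -->
        <Resource name="jdbc/appdb" auth="Container" type="javax.sql.DataSource"
                  driverClassName="oracle.jdbc.OracleDriver"
                  url="jdbc:oracle:thin:@localhost:1521:ORCL"
                  username="appuser" password="secret"
                  maxActive="20" maxIdle="5" maxWait="10000"/>

    Oracle's driver-side counterpart is the implicit connection cache (later UCP), but a Tomcat-side pool like the above is the more common production setup.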

  • PassEnv does not find ENV variables

    - by quodlibetor
    I've got this /etc/profile.d/myfile.sh:

        export MYVAR=myval

    I also have a PassEnv MYVAR line in a <VirtualHost> section of an Apache conf dir. That lets me do things like:

        $ echo $MYVAR
        myval
        $ python
        >>> import os; os.getenv('MYVAR')
        'myval'
        $ sudo echo $MYVAR
        myval
        $ sudo -i
        root# echo $MYVAR
        myval

    But then, despite that being the case, I get:

        root# /sbin/service httpd restart
        Stopping httpd:                                            [  OK  ]
        Starting httpd: [Mon Oct 22 14:44:02 2012] [warn] PassEnv variable MYVAR was undefined
                                                           [  OK  ]

    And all of my attempts to access MYVAR from within my wsgi scripts just don't work. Thoughts? Am I doing something obviously wrong?

    EDIT for more detail: I've got a swarm of computers/VMs and a swarm of developers working on a swarm of projects. I need a simple central place to keep environment information; the most common is the "environment" (dev/stage/prod). The scheme that we've got (modifying *.wsgi programmatically) is turning out to be more fragile than we'd like. The main options that I see are:

    - put things in the shell environment
    - put things in other config files

    Getting things into the shell environment is the best, because we won't need to write yet more duplicated "what is my environment" code.
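
    A detail that would explain the warning: on Red Hat-style systems the httpd init script starts Apache with a scrubbed environment and never sources /etc/profile.d/, so variables visible in login shells (including sudo -i) never reach the daemon. The init script does source /etc/sysconfig/httpd, so a hedged central place for the variable would be:

        # /etc/sysconfig/httpd
        export MYVAR=myval

    after which PassEnv MYVAR should find it; SetEnv MYVAR myval in the Apache config sidesteps the shell entirely, at the cost of duplicating the value.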

  • Outlook 2010 search not working after upgrade to windows 8

    - by Klaaz
    After upgrading my computer to Windows 8, Outlook 2010 has stopped displaying search results. Normally you can enter (part of) a word in the search box on top of the inbox list and it will show you results immediately. Even mails already visible on the screen are not found. Is anybody familiar with this issue?

    Update: Maybe relevant: I use a Google Apps Pro account. All mail is synced and locally available in Outlook 2010. I did not change this in any way while upgrading; it was working perfectly before. I can scroll through all the e-mails, and new mails are coming in as expected. This morning I received two mails from a person by the name of Rosanne. When searching on her name, Outlook gives me one (1) result, the last mail from today.

    Update 2: Rebuilding the index seemed to work, but after another day it stopped working again: no results whatsoever in Outlook search. Rebuilding indexes every day is not an option, as it takes several hours. I suspect it has something to do with the fact that I use Google Apps Pro; it acts like an Exchange server to Outlook. In Indexing Options (configuration) I added the directories containing the PSTs from this service (mail is also synced locally).

  • Boot stuck at blinking cursor before GRUB - only works via BIOS boot menu

    - by delta1
    I have a new box running Debian Squeeze. GRUB is installed on /dev/sda, but when booting up I just get a blinking cursor, before the GRUB menu. I can only boot to GRUB successfully when I choose boot options (during POST) and select that specific drive! I have made sure the correct drive is set to boot first in the BIOS. So GRUB works, but the system won't boot to that drive automatically? Any ideas on what could cause this? Drives sda/b/c are all 2TB (sda runs the system, with b/c as RAID device md0) with the following partitions:

        $ cat /proc/partitions
        major minor     #blocks  name
           8     0  1953514584  sda
           8     1         977  sda1
           8     2     9765625  sda2
           8     3     6445313  sda3
           8     4  1937302627  sda4
           8    32  1953514584  sdc
           8    16  1953514584  sdb
           9     0  1953513424  md0

    but # fdisk -l /dev/sda gives:

        WARNING: GPT (GUID Partition Table) detected on '/dev/sda'! The util fdisk doesn't support GPT. Use GNU Parted.

        Disk /dev/sda: 2000.4 GB, 2000398934016 bytes
        255 heads, 63 sectors/track, 243201 cylinders
        Units = cylinders of 16065 * 512 = 8225280 bytes
        Sector size (logical/physical): 512 bytes / 512 bytes
        I/O size (minimum/optimal): 512 bytes / 512 bytes
        Disk identifier: 0x00000000

           Device Boot      Start         End      Blocks   Id  System
        /dev/sda1               1      243202  1953514583+  ee  GPT

    Any insight into this strange behaviour would be greatly appreciated.
