Search Results

Search found 44026 results on 1762 pages for 'raid question'.

  • Why does the wireless network icon have a red X over it when everything seems to work?

    - by Kristo
    I booted my almost brand new laptop running Windows 7 this morning and noticed a red X through the wireless networking icon in the system tray. At first I thought something was wrong, but clicking on it shows a good connection to my wireless network. I had no problem getting here to post this question. I'm very new to Windows 7 so I have no idea how to troubleshoot this myself. Is there an actual problem here? Can I fix the icon so it doesn't falsely display an error (I assume that's what the red X means)? Here's what I know:
    - I can get here to post this question.
    - There's at least one unsecured network available that I'm not connected to.
    - I can see a bunch of wireless networks, presumably from my neighbors' houses.
    - There are no other computers turned on in my house right now.
    - The device manager shows no problems with any devices.
    - I can ping my default gateway, DNS, and yahoo.com with no problem.

    Read the article

  • Write stderr to a file using PowerShell

    - by Zian Choy
    How do I capture error messages from a PowerShell-launched command in a text file? I searched the Internet for a while and found that, supposedly, I should be able to do something like

        cmd /c "big blob of text >C:\output.txt 2>c:\errors.txt"

    to direct the output to output.txt and the errors to errors.txt, but when I try to run the command, I get the following error:

        cmd.exe : The filename, directory name, or volume label syntax is incorrect.
        At C:\Users\Zian\Desktop\Untitled1.ps1:27 char:4
        + cmd <<<< /c $command
            + CategoryInfo          : NotSpecified: (The filename, d...x is incorrect.:String) [], RemoteException
            + FullyQualifiedErrorId : NativeCommandError

    Furthermore, if I try to run the command without everything starting at the "2", then the command executes correctly and output.txt catches the right output. I looked at "Redirect stderr to variable in powershell" but it wasn't helpful, because the answer to that question suggests capturing the entire output and filtering it in memory. In my case, I am backing up every database on a computer, and since the databases won't fit in my laptop's RAM, I cannot use that question's solution. I also found tantalizing suggestions about using $err = @(command goes here), but with no information on what to do other than simply inserting that line of text. I tried to utilize the search function on Serverfault with the string "@()", but it did not return any results. What can I do to get the error messages into errors.txt?
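
    The split the asker is after, regular output streamed to one file and errors to another without buffering either in memory, is language-agnostic. Below is a minimal Python sketch of that idea only; the paths and the backup command are placeholders, not the asker's actual setup.

        import subprocess

        # Stream stdout and stderr of a child process to separate files,
        # mirroring the ">output.txt 2>errors.txt" split from the question.
        # "backup_tool" and both paths are hypothetical placeholders.
        with open(r"C:\output.txt", "w") as out, open(r"C:\errors.txt", "w") as err:
            subprocess.run(
                ["backup_tool", "--all-databases"],  # hypothetical command
                stdout=out,   # regular output goes to output.txt
                stderr=err,   # error messages go to errors.txt
                check=False,  # don't raise if the tool exits non-zero
            )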

    Read the article

  • Firefox URL / link to a group of saved bookmarks?

    - by This_Is_Fun
    In Firefox you can easily save a group of tabs together. When (re-)accessing this group, the 'cascading' bookmark menu shows each individual bookmark and, under a line, it says "Open All in Tabs". I'm looking for a way to launch those tabs without going up through the bookmark menu. Possible options:
    A) Record a simple macro with any number of "superuser" utilities ('A' is not the preferred option, since many little macros are hard to keep track of)
    B) Use AutoHotkey (similar to option 'A' and more flexible once you learn the basics)
    C) How does Firefox load all those tabs? The info must be stored somewhere (as a type of URL?)
    Quick summary: the moment I click on "Open All in Tabs", I am clicking on something very similar to a hyperlink. How do I find the content (exact code) of that 'hyperlink', and/or how do I easily launch the tabs?
    EDIT #1: I'm looking for a way to launch those tabs without going up through the bookmark menu, or cluttering the bookmarks toolbar, which I hide anyway :o)
    EDIT #2: I tried to keep the question simple and not mention AutoHotkey programming. The objective is to launch all the tabs using a button on an AHK GUI. When grawity said, "It's just an ordinary folder containing ordinary bookmarks," he (she) reminded me that I can easily find the folder. Now how do I launch the URLs inside that folder? FYI, basic-level AHK works like this:

        ; Open one folder
        ButtonWinMerge_Files:
        Run, C:\Program Files\WinMerge\
        Return

        ; Use default web browser for one link
        ButtonGoogle:
        Run, http://google.com
        Return

    Question still open: the moment I click on "Open All in Tabs", I am clicking on something very similar to a hyperlink. How do I replicate the way Firefox launches the tabs with one click?
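
    Since the tab group is, as grawity noted, just a folder of ordinary bookmarks, one way to sketch the "one click opens everything" idea outside Firefox is to read the URLs straight out of places.sqlite and hand them to the default browser. This is only an illustration: the profile path and folder name are placeholders, and it assumes the standard moz_bookmarks/moz_places schema (copy the database first, since Firefox keeps the live file locked while it runs).

        import shutil
        import sqlite3
        import webbrowser

        PROFILE_DB = r"C:\path\to\profile\places.sqlite"  # placeholder path
        FOLDER_NAME = "My Tab Group"                      # placeholder folder name

        # Work on a copy; Firefox holds an exclusive lock on the live file.
        shutil.copy(PROFILE_DB, "places_copy.sqlite")
        conn = sqlite3.connect("places_copy.sqlite")

        # type 2 = folder, type 1 = bookmark; fk links a bookmark to its URL row.
        rows = conn.execute(
            """
            SELECT p.url
            FROM moz_bookmarks b
            JOIN moz_places p ON p.id = b.fk
            WHERE b.type = 1
              AND b.parent = (SELECT id FROM moz_bookmarks
                              WHERE title = ? AND type = 2)
            """,
            (FOLDER_NAME,),
        ).fetchall()
        conn.close()

        # Open each URL in a new tab of the default browser.
        for (url,) in rows:
            webbrowser.open_new_tab(url)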

    Read the article

  • 64-bit Windows 7 gets stuck on logo screen on bootup

    - by Richard B
    I've had a PC running Windows 7 in my office which I'm not using at the moment (because I'm working elsewhere as a consultant), and I only access it using TeamViewer (http://www.teamviewer.com/), which means the PC has been running for quite some time now. I've restarted it maybe twice a week, though. A few days ago I couldn't access it using TeamViewer, and when I got to the office the screen was black with only the mouse pointer showing. The PC has four hard disks; three of them (all 1 TB) are in a RAID 5 array. This is what I've done so far:
    - I reboot and everything seems to load correctly.
    - I get to a screen that gives me two choices: boot Windows normally or perform a startup repair.
    - Choosing to boot Windows only gets me to the Windows 7 logo screen, which just animates over and over again.
    - Choosing to repair gets me to the repair screen that "checks for problems" and then gets stuck on the "Attempting repairs..." screen (I let it run for about 24 hours before giving up).
    What is the next step to take? I don't have any backups and no system restore points saved. I can access files and folders through a terminal window using a Windows 7 DVD, so I guess nothing is lost yet... Please help me, thanks!

    Read the article

  • Allow members of a group to be unlocked by a specific account on AD

    - by JohnLBevan
    Background: I'm creating a service to allow support staff to enable their firecall accounts out of hours (i.e. if there's an issue in the night and we can't get hold of someone with admin rights, another member of the support team can enable their personal firecall account in AD, which has previously been set up with admin rights). This service also logs a reason for the change, alerts key people, and does a bunch of other bits to ensure that this change of access is audited and that these temporary admin rights are used in the proper way. To do this I need the service account which my service runs under to have permission to enable users in Active Directory. Ideally I'd like to lock this down so that the service account can only enable/disable users in a particular AD security group.
    Question: How do you grant an account access to enable/disable users who are members of a particular security group in AD?
    Backup question: If it's not possible to do this by security group, is there a suitable alternative? I.e. could it be done by OU, or would it be best to write a script to loop through all members of the security group and update the permissions on the objects (the firecall accounts) themselves?
    Thanks in advance.
    Additional tags (I don't yet have access to create new tags here, so listing them below to help with keyword searches until the question can be tagged and this bit edited/removed): DSACLS, DSACLS.EXE, FIRECALL, ACCOUNT, SECURITY-GROUP

    Read the article

  • Windows won't boot after moving house. How do I solve this?

    - by James
    I've just moved house and tried to set up my desktop after packing it away, and now when I power it on, the BIOS boots up and no errors are found, but when my computer tries to boot into Windows 7 a continuous fast beeping sound is made and a black screen is displayed. What I've done so far:
    - Reset to UEFI defaults.
    - Played about with the RAM. I had 4 x 4 GB sticks; I took all of them out to test for a motherboard error (which I have), and now I'm only using 1 stick of 4 GB.
    - Changed my GPU. I took my GTX 580 out and now I'm using the onboard Intel 3000 graphics; the BIOS and UEFI are displaying correctly, so I no longer think it's a GPU-based error.
    - Checked all of the connections, and nothing seems to be loose.
    My HDD setup is:
    - 2 x 128 GB SSDs in RAID 0 as my main C drive (possibly the cause of the error?)
    - 1 x 1 TB games drive
    - 1 x 2 TB data drive
    I've also got a Blu-ray drive connected. After searching the internet I'm pretty much out of suggestions, but I'm currently downloading a live CD to see if it will boot and if I can access some files on my HDD.

    Read the article

  • Can SQL Server (2008) transaction logs handle the database being dropped and re-created?

    - by Ben
    We're trying to restore a database (created programmatically by running a hand-crafted SQL script we have). Our backup routine is to create a full backup of every database on the SQL Server 2008 instance on a Saturday, then automatic transaction logs (I assume these are created automatically anyway; we appear to have lots of log files, possibly one per transaction after the full backup was taken?). On Tuesday this week the database in question was dropped, and another one with the exact same name and schema was created. SQL Server has continued to create transaction log files, but it hasn't had a chance to create a new full backup (that won't happen until next Saturday). Now, as it turns out, we need to restore the database to how it was on Thursday. This is after the "drop and re-create". My question is, is this possible? If it isn't, what exactly does SQL Server think it's writing to those transaction logs created since the drop and re-create? (I understood they were kind of files containing a binary delta, which makes me think maybe we can restore from them?) I'm no DBA, but then neither is our IT department, so I'm doing the best I can to resolve this. Any advice much appreciated!

    Read the article

  • PowerShell vs GPO for installation, configuration, maintenance

    - by user52874
    My question is about using PowerShell scripts to install, configure, update and maintain Windows 7 Pro/Ent workstations in a 2008 R2 domain, versus using GPO/ADMX/MSI. Here's the situation: because of a comedy of cumulative corporate bumpfuggery, we suddenly found ourselves having to design, configure and deploy a full Windows Server 2008 R2 and Windows 7 Pro/Enterprise environment on very short notice and with a very short delivery schedule. Of course, I'm not a Windows expert by any means, and we're so understaffed that our buzzword bingo includes 'automate' and 'one-button' and 'it needs to Just Work'. (FWIW, I started with DEC, then moved on to Solaris and Cisco, then Linux of various flavors with a smattering of BSD nowadays. I use Windows for email and to fill out forms.) So we decided to bring in a contractor to do this for us, and they met the deadline. The system is up and mostly usable, and this is good; we would not have been able to do this. But it's the 'mostly' part that is proving to be the PIMA now, and I'm having to learn the Microsoft stuff anyway until/if we can get a new contract with these guys for ongoing operations. Here's my question: the contractor used PowerShell almost exclusively for deployment, configuration and updating. My intensive reading over the last week leads me to think that the generally accepted practices for deploying, configuring and updating Microsoft stuff use elements of GPOs and ADMX templates, along with maybe some third-party stuff like PolicyPak. Are there solid reasons, that I've not found yet, that PowerShell scripts would be preferred over the GPO methods? I'm going to discuss this with the contractor lead when he gets back from his vacation, and he'll be straight with me (nor do I think they set us up). But I can also see this might be a religious issue, so I would still like some background on it. Thoughts? Or weblinks? Thanks!

    Read the article

  • Remote Desktop Services Licensing - Does the server have to have an RDS role?

    - by transistor1
    I recently set up a "micro" size Windows 2008 Datacenter server on Amazon AWS. My small group needs several concurrent RDS users to be able to access the machine. Without installing the "Remote Desktop Server" role, it allows 2 concurrent connections. I read on MS's website that in order to set up multiple users, we needed to install the RDS role. I did so, but now the application we are trying to share is running much slower than it was before. Prior to the role installation, it was taking about 5 seconds to open; now it is taking a few minutes to open, without any other users logged on except me. My assumption is that the RDS role may be too much for this micro instance to handle, and currently changing to another size instance is not an option (it may be possible later if we were to receive enough funding). This leads me to the following questions:
    1) Is it a sensible assessment to assume that it is the RDS role that is slowing things down, or are there other things that I could look at to speed it up? We are talking about a machine with ~600 MB of memory.
    2) If I revert back to the pre-RDS-role state, is there any legitimate way (in terms of purchasing RDS licenses) to get more than 2 concurrent desktops? I did read this, and am not questioning that the answerer is knowledgeable, but someone else may have some other experience. I am also making it clear that we want to do this in a legitimate way.
    Thanks in advance for any assistance that can be provided!
    EDIT: If it is helpful in answering the question, the application in question is a Lotus Approach database. Also, I am asking this from a technical perspective, not a legal one. I want to know if it is possible to install valid licenses without the RDS role.

    Read the article

  • SBS 2008 SharePoint database: what is a good memory limit?

    - by ldelgado
    I manage a small network running on Small Business Server 2008. Lately, the SharePoint embedded database is getting out of control with its memory usage. I've got a total of 16 GB of RAM on this server, and the SharePoint database sometimes uses almost 8 GB of RAM. This never happened before, and it started happening after I installed Backup Exec 2010. It happens after a backup is performed, so I suspect there is a memory leak involved. I am working on that issue, but this question isn't about that. I would like to limit the amount of memory the embedded database uses, and I know how to do it. My question is, what would be the ideal amount of memory to allocate to SharePoint? There are only 4 users on my network. One of the users uses two computers, but not at the same time. They use SharePoint for a company calendar, and sometimes they share files that way also. Let me know if you need to know anything else. Thanks.

    Read the article

  • Hard drive causing BSOD

    - by JoshIrving
    I've come across a problem after building my new PC and installing a clean Windows 7. I originally planned on a RAID 1 or 0, but after further research I decided against it. So I was left with two 1 TB Western Digital Black SATA 6 Gb/s hard drives. My plan now was to use my second hard drive as a backup (using Windows Backup or 3rd party software). I set both hard drives to AHCI in the BIOS and installed Windows 7. I went through the lengthy process of downloading and installing each driver manually (latest versions), using the motherboard disk for a list of what I need. After a few restarts and before installing any software, I took an image backup onto DVD and the second hard drive. I first witnessed the problem during the first scheduled Windows backup. The progress bar froze at about 70% (doc backup done, image backup in progress). It stayed still for 2 hours until it blue screened. The next time the backup froze, I tried shutting down. It logged me out and got stuck at the last step ("Shutting down" and the blue spinner) for an hour, until I did a hard shutdown. I later realised this hasn't got anything to do with the backup; I ended up blue screening on almost every shutdown (same place). Turns out, it's because of the second hard drive spinning down or turning off. The computer will now shut down properly, as long as I remember to read or write to the second drive before executing shutdown. I've now set "Turn off hard disk after: Never" and have had no problems so far. Do I have dodgy hard drive(s), or should I investigate the POWER_STATE_DRIVER_FAILURE BSOD further? Can it be a driver issue? AHCI?

    Read the article

  • Cisco VoIP stuck as Unregistered?

    - by Shifty
    Question: Why is one VoIP phone stuck as Unregistered?
    Background: We have a Cisco UC540 small business switch/router/VoIP combo. This phone was working until I powered everything down to install a larger UPS unit. The phone originally had a status of "Deceased". I removed the registration and tried to add it again; now it just sits as "Unregistered". I even tried giving it another extension. I am stuck using the Cisco Communication Assistant since this is small business hardware, and there is very limited CLI access. Also, from what I've heard, if you access the CLI without Cisco's permission, you will void any warranty. The phone in question is a Cisco SPA501G. It is connected to an SG300-28P. There are 5 other phones on this switch working just fine. I have tried other ports with no luck. Both the link and PoE lights are lit up. Any ideas?

    Read the article

  • Terminal Server CPU usage at 100%

    - by Light1c3
    I'm running a terminal server with around 50-60 users, and every so often the server will go from 40% usage to 100%. I took a closer look and it seems that every time this happens, a different user or two seem to be caught in a loop and end up using < 30% each, where the rest of the users only use a maximum of 5%. The company behind the software we use claims it's due to the server's inadequate hardware (it's a VM system running on a dual quad-core setup), which to me sounds like BS! I'm fairly new to this level of IT, so if I misspoke I apologize. I have no way to prove it, but I believe adding more raw hardware power won't do me any good, as this to me seems like a bug in their software, and it will suck up as much (or as little) CPU as it's given. The VM in question has 4 vCPU cores and 12 GB RAM available, and is running Windows Server 2008, 64-bit. Thanks in advance for your help!
    Note: I have the same question posted on SO, but was pointed in this direction, so just in case, here is a link to the post: http://stackoverflow.com/questions/17276602/termserver-cpu-at-100

    Read the article

  • Outbound ports to allow through firewall - core requirements

    - by dunxd
    This question was asked before, but in a rather general way. I'm asking more specifically based on my current requirements. We have a number of remote offices made up of a bunch of PCs and an ASA 5505, which is used as the firewall and VPN termination point. In the offices we share the internet connection with one or more other organisations over whom we have very little control, aside from the config on the ASAs. For a bunch of reasons I'd like to lock down these ASA 5505s to only allow outbound traffic to ports used by applications we know we need. I'm putting together a standard config to roll out to all the ASAs, and if we need to open up ports for the other orgs we can do it on request. But I want to leave open the most commonly required ports so we can get up and running without waiting on other folks' technical staff to get back to us. I plan to allow the following TCP ports to support email and web access, which I know everyone will need:
    - POP3 (110 and 995)
    - HTTP (80 and 443)
    - IMAP4 (143 and 993)
    - SMTP (25 and 465)
    The question really is, what other ports do I need to leave open to allow for "normal" working? I've seen UDP port 53 for DNS as one. Are there any others that would be worth opening up? Just to note: I'll also be setting up monitoring systems to keep an eye on the ports we do allow. Any of the above could be misused, of course. We'll also back all this up with signed agreements. But I'm aiming for a technical solution where I don't have to start out with the full requirements of everyone we share connections with. See also: outbound ports that are always open.
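
    One throwaway way to keep an eye on the allowed ports, sketched below in Python, is to attempt an outbound TCP connection to each one from behind the ASA and report what gets through. The target host is a placeholder, not part of the asker's setup.

        import socket

        # Outbound TCP ports the standard config is expected to allow.
        PORTS = {
            "POP3": [110, 995],
            "HTTP/HTTPS": [80, 443],
            "IMAP4": [143, 993],
            "SMTP": [25, 465],
        }

        TEST_HOST = "mail.example.com"  # placeholder host reachable on these ports

        for service, ports in PORTS.items():
            for port in ports:
                try:
                    # A short timeout keeps the sweep quick when a port is blocked.
                    with socket.create_connection((TEST_HOST, port), timeout=3):
                        print(f"{service} {port}: outbound connection OK")
                except OSError as exc:
                    print(f"{service} {port}: blocked or unreachable ({exc})")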

    Read the article

  • Windows 7 using LLT for IPv6

    - by Seoman
    The question asked below is based on the specific implementations of each OS, not the RFC. Looking for a way to assign a fixed IP address to a host before it boots, I found that CentOS 6 works fine with no modifications and Windows 7 does not work at all. As defined in the DHCPv6 specification, there are three valid ways to generate a DUID:
    1. Link-layer address plus time (LLT)
    2. Vendor-assigned unique ID based on Enterprise Number
    3. Link-layer address
    Looking at the CentOS box, which works fine, I can see the following autogenerated DUID:

        option dhcp6.client-id 0:1:0:1:19:60:25:f1:52:54:0:6b:b9:9e;

    and the MAC address for this host is:

        ifconfig eth1 | grep HWaddr
        eth1      Link encap:Ethernet  HWaddr 52:54:00:6B:B9:9E

    As you can see, the DUID contains the MAC address. I can assign a fixed IP address to this host by including an entry on my DHCP server similar to:

        host vm {
            hardware ethernet 52:54:00:6B:B9:9E;
            fixed-address6 2001:db8:0:1::200;
            if packet(0,1) = 1 {
                log(debug, "VM Request match!");
            }
        }

    And the CentOS 6 host gets its IP. On the Windows side, I faced a common problem explained in another post. In summary, Windows 7 uses option 2 of the DUID generation methods, or a variation of it. That post explains how to move it to an LLT (link-layer + time) DUID, but it is not working fine. If I modify the DUID to one that looks like the one generated on CentOS (but with the right MAC), it works as expected.
    Question 1: How can I change the DUID generation for Windows 7 to be based on the MAC address, as CentOS 6 does? Thanks.
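
    To make the "the DUID contains the MAC address" observation concrete, here is a small Python sketch that unpacks a DUID-LLT of the kind CentOS generated above (2-byte type, 2-byte hardware type, 4-byte timestamp, then the link-layer address, per the DHCPv6 RFC). It only illustrates the layout; it is not a tool from either OS.

        import struct

        def parse_duid_llt(duid_hex: str):
            """Split a colon-separated DUID-LLT into its fields."""
            raw = bytes(int(part, 16) for part in duid_hex.split(":"))
            duid_type, hw_type, timestamp = struct.unpack("!HHI", raw[:8])
            mac = ":".join(f"{b:02x}" for b in raw[8:])
            return duid_type, hw_type, timestamp, mac

        # The DUID the CentOS 6 host above autogenerated.
        duid = "0:1:0:1:19:60:25:f1:52:54:0:6b:b9:9e"
        duid_type, hw_type, timestamp, mac = parse_duid_llt(duid)

        print("DUID type:", duid_type)   # 1 = link-layer address plus time (LLT)
        print("hardware :", hw_type)     # 1 = Ethernet
        print("time     :", timestamp)   # seconds since 2000-01-01, per the RFC
        print("MAC      :", mac)         # 52:54:00:6b:b9:9e -> matches eth1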

    Read the article

  • Updating files with a Perforce trigger before submit

    - by phantom-99w
    I understand that this question has, in essence, already been asked, but that question did not have an unequivocal answer, so please bear with me.
    Background: In my company, we use Perforce submission numbers as part of our versioning. Regardless of whether this is a correct method or not, that is how things are. Currently, many developers do separate submissions for code and documentation: first the code and then the documentation, to update the client-facing docs with what the new version numbers should be. I would like to streamline this process.
    My thoughts are as follows: create a Perforce trigger (which runs on the server side) which scans the submitted documentation files (such as .txt) for a unique term (such as #####PERFORCE##CHANGELIST##NUMBER###ROFL###LOL###WHATEVER#####) and then replaces it with the value of what the changelist will be when submitted. I already know how to determine this value. What I cannot figure out is how or where to update the files. I have already determined that using the change-content trigger (whether possible or not), which "fire[s] after changelist creation and file transfer, but prior to committing the submit to the database", is the way to go. At this point the files need to exist somewhere on the server. How do I determine the (temporary?) location of these files from within, say, a Python script, so that I can update them or use sed to replace the placeholder value with the intended value? The online documentation for Perforce which I have found so far has not been very explicit on whether this is possible or how the mechanics of a submission at this stage would work.
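
    One way to at least read the in-flight file content from a change-content trigger, sketched in Python below, is to ask the server for it with p4 print and the @=changelist revision specifier rather than hunting for temporary files on disk. This is only a sketch of the read side: the depot path filter is an assumption, the trigger line is assumed to pass %changelist% as the argument, and writing the substituted text back into the submit is a separate problem not shown here.

        import subprocess
        import sys

        PLACEHOLDER = "#####PERFORCE##CHANGELIST##NUMBER###ROFL###LOL###WHATEVER#####"

        def docs_in_change(change: str):
            """List .txt files that are part of the in-flight changelist."""
            out = subprocess.run(
                ["p4", "files", f"//depot/docs/...@={change}"],  # assumed depot path
                capture_output=True, text=True, check=True,
            ).stdout
            return [line.split("#")[0] for line in out.splitlines() if line]

        def file_content(depot_path: str, change: str) -> str:
            """Fetch the transferred-but-uncommitted content of one file."""
            return subprocess.run(
                ["p4", "print", "-q", f"{depot_path}@={change}"],
                capture_output=True, text=True, check=True,
            ).stdout

        if __name__ == "__main__":
            change = sys.argv[1]  # the trigger would pass %changelist% here
            for path in docs_in_change(change):
                text = file_content(path, change)
                if PLACEHOLDER in text:
                    # Writing the substituted text back into the submit is the
                    # open question; this sketch stops at locating the content.
                    print(f"{path}: placeholder found, would substitute {change}")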

    Read the article

  • File descriptor linked to socket or pipe in proc

    - by primero
    I have a question regarding file descriptors and their linkage in the proc file system. I've observed that if I list the file descriptors of a certain process from proc:

        ls -la /proc/1234/fd

    I get the following output:

        lr-x------ 1 root root 64 Sep 13 07:12 0 -> /dev/null
        l-wx------ 1 root root 64 Sep 13 07:12 1 -> /dev/null
        l-wx------ 1 root root 64 Sep 13 07:12 2 -> /dev/null
        lr-x------ 1 root root 64 Sep 13 07:12 3 -> pipe:[2744159739]
        l-wx------ 1 root root 64 Sep 13 07:12 4 -> pipe:[2744159739]
        lrwx------ 1 root root 64 Sep 13 07:12 5 -> socket:[2744160313]
        lrwx------ 1 root root 64 Sep 13 07:12 6 -> /var/lib/log/some.log

    I get the meaning of a file descriptor, and I understand from my example the file descriptors 0, 1, 2 and 6: they are tied to physical resources on my computer, and I guess 5 is connected to some resource on the network (because of the socket). But what I don't understand is the meaning of the numbers in the brackets. Do they point to some property of the resource? Also, why are some of the links broken? And lastly, as long as I've asked a question already :), what is a pipe?
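
    As for the bracketed numbers, a quick Python sketch on any Linux box shows they line up with the inode number the kernel reports for the pipe or socket; the entries live on internal pipefs/sockfs filesystems rather than on a path on disk, which is also part of why the pipe and socket entries can look like broken links.

        import os
        import socket

        # Create a pipe and a socket in this process, then look at our own
        # /proc/<pid>/fd entries for them.
        r, w = os.pipe()
        s = socket.socket()

        for fd in (r, w, s.fileno()):
            target = os.readlink(f"/proc/self/fd/{fd}")
            inode = os.fstat(fd).st_ino
            # e.g. "fd 3 -> pipe:[123456]  (st_ino=123456)"
            print(f"fd {fd} -> {target}  (st_ino={inode})")

        s.close()
        os.close(r)
        os.close(w)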

    Read the article

  • Viability of Mac OS X 10.9 Time Machine Server in office environment

    - by user197609
    Currently we have about 20 Mac OS 10.9 MacBook Pros (almost all with SSDs) backing up to individual USB drives. I'd like to consolidate these to one Drobo Thunderbolt drive array attached to a Mac Mini server (running 10.9 Server) using Time Machine Server. My question is, will this scale to 20 users? Examples I have seen seem to be 5 or 6 users tops, and this isn't easy for me to test (I'd rather not ask everyone to back up to the array and then switch back to USB drives if it brings our network to its knees). My primary concern is saturating our gigabit network, as Time Machine backs up every hour for every machine, so there would usually be a couple of people backing up at any given time. We also have some people occasionally on our 802.11ac network and not on ethernet (usually connected via 802.11n until people upgrade to newer machines), but most of the time people are connected to our Thunderbolt displays, which have a gigabit ethernet connection on them. Our network topology is one 32-port gigabit switch with 5 smaller gigabit switches at each desk cluster. The Mac Mini server is connected directly to the top-level switch.
    Update: Failing information from someone who has done this in practice, I suppose my question is really about how switches work. If three or four people are backing up simultaneously, and then two other (different) users transfer a file between each other, will they be able to transfer the file at gigabit speeds?

    Read the article

  • Is there an Installer Analyser tool that can list what Registry Keys will be created?

    - by EvoGamer
    I can think of 3 ways to achieve my goal:
    1. Create a clean VPC, install a given piece of software, and compare the before and after states.
    2. Somehow reverse-engineer the installer.
    3. Somehow redirect the output of the installer in question so that all registry calls and copy/move file commands are recorded, but not executed.
    The first option can be done manually, or potentially automated, but I feel it's rather OTT for my needs. The second could cause all sorts of licencing issues, not to mention it may not always return a correct result. Also, without delving into hex editing, I can't think of a way that it would be possible to do manually (some installers, e.g. anti-virus software, may react unfavourably to automated attempts to investigate the installer). The third option shows the most promise, although if the first could be stripped down into a lightweight throwaway environment, it would work pretty much the same way. However, I'm not sure how to do it. So my question is: what tools are available (if any), and/or how could I find out this information manually? I'm not looking to reverse-engineer anything (if I can help it); I just want to know exactly what changes are being made to my PC by a given piece of software.
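
    For the first option (before/after comparison), the registry side can be snapshotted without extra tooling; the Python sketch below walks a hive with the standard winreg module and diffs two snapshots taken around an install. It is only an illustration of the compare-states idea: the hive and depth are arbitrary choices, and it ignores values, permissions and the file system.

        import winreg

        def snapshot(root, path="", depth=4):
            """Collect subkey paths under one hive, down to a limited depth."""
            keys = set()
            try:
                with winreg.OpenKey(root, path) as key:
                    i = 0
                    while True:
                        try:
                            name = winreg.EnumKey(key, i)
                        except OSError:
                            break  # no more subkeys
                        child = f"{path}\\{name}" if path else name
                        keys.add(child)
                        if depth > 1:
                            keys |= snapshot(root, child, depth - 1)
                        i += 1
            except OSError:
                pass  # access denied or key vanished; skip it
            return keys

        # Take one snapshot before running the installer...
        before = snapshot(winreg.HKEY_LOCAL_MACHINE, r"SOFTWARE")
        input("Run the installer, then press Enter...")
        # ...and one after, then report what appeared.
        after = snapshot(winreg.HKEY_LOCAL_MACHINE, r"SOFTWARE")
        for new_key in sorted(after - before):
            print("created:", new_key)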

    Read the article

  • File corruption (bad checksums) in large files copied to VMware guest

    - by AllanA
    In setting up a development lab, I've got a desktop system running ESXi 4.1.0 (free license) on SATA RAID 0 (already purchased and configured when I started this job; I'm open to hardware input as it pertains to my problem). Its guests so far include two Win2008 Server R2 64-bit VMs and one Ubuntu 10.04 64-bit VM. I'm installing onto the Windows servers. We've been copying off some fairly large files (over a gigabyte) for an installation, hoping to install more quickly from a (virtual) hard drive than from the network or from BD-ROM. The problem is that they keep coming up with different checksums from the originals. The file sizes are the same, but md5sum reports different numbers (and so does the installer, as it refuses to continue when the checksums don't match). I've tried copying directly from the BD-ROM (attaching the OS drive to the host system's physical drive). I've tried copying the large files onto a co-worker's Windows machine from his Blu-ray drive; when I do that, the checksums match. But when I copy from his machine to the VM guest over a network share, the checksums no longer match. Thinking this meant a corrupt destination drive, I deleted it in vSphere and added another freshly created drive. The problem persists. I'm not sure what to try next.
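
    Since the troubleshooting here is a round of checksum comparisons, a small chunked hash like the Python sketch below is one way to verify the copies on the Windows guests too (where md5sum may not be installed) without reading a multi-gigabyte file into memory at once; the paths are placeholders.

        import hashlib

        def md5_of(path, chunk_size=1024 * 1024):
            """Hash a file in 1 MiB chunks so large images never sit fully in RAM."""
            digest = hashlib.md5()
            with open(path, "rb") as fh:
                for chunk in iter(lambda: fh.read(chunk_size), b""):
                    digest.update(chunk)
            return digest.hexdigest()

        # Placeholder paths: the original on the share vs. the copy inside the guest.
        original = md5_of(r"\\coworker-pc\share\install.iso")
        copied = md5_of(r"D:\staging\install.iso")
        print("original:", original)
        print("copied  :", copied)
        print("match" if original == copied else "MISMATCH")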

    Read the article

  • Virtual Host Configuration and mod_rewrite - Removing PHP Extension and Adding Forward Slash

    - by nicorellius
    On my production server, things are fine: PHP extension removal and trailing slash rules are in place in my .htaccess file. But locally, this isn't working (well, partially, anyway). I'm running Apache2 with a virtual host for the site in question. I decided to not use the .htaccess file in this case and just add the rules to the httpd-vhosts.conf file instead, which, I've heard, if possible on your server, is a better way to go. The virtual host is working and the URL I use for my site is like this: devserver:9090
    Here is my httpd-vhosts.conf file:

        NameVirtualHost *:9090

        # for stuff other than this site
        <VirtualHost *:9090>
            ServerAdmin admin@localhost
            DocumentRoot "/opt/lampstack/apache2/htdocs"
            ServerName localhost
        </VirtualHost>

        # for site in question
        <VirtualHost *:9090>
            ServerAdmin admin@localhost
            DocumentRoot "/opt/lampstack/apache2/htdocs/devserver"
            ServerName devserver
            <Directory "/opt/lampstack/apache2/htdocs/devserver">
                Options Indexes FollowSymLinks Includes
                AllowOverride None
                Order allow,deny
                Allow from all
            </Directory>
            <IfModule rewrite_module>
                RewriteEngine ON
                # remove PHP extension and add trailing slash
                # note - this doesn't work for directories, and throws 404
                # TODO - fix so directories use index.php
                RewriteCond %{REQUEST_FILENAME} !-d
                RewriteCond %{THE_REQUEST} ^GET\ /[^?\s]+\.php
                RewriteRule (.*)\.php$ /$1/ [R=302,L]
                RewriteCond %{REQUEST_FILENAME} !-d
                RewriteRule (.*)/$ /$1.php [L]
                RewriteCond %{REQUEST_FILENAME}.php -f
                RewriteCond %{REQUEST_FILENAME} !-d
                RewriteRule .*[^/]$ /$0/ [R=302,L]
            </IfModule>
            # error docs
            ErrorDocument 404 /errors/404.php
        </VirtualHost>

    The problem I'm facing is that when I go to directories on the site, I get a 404 error. So for example, this: devserver:9090/page.php goes to devserver:9090/page/ but going to a directory (that has an index.php): devserver:9090/dir/ throws 404 error page. If I type in devserver:9090/dir/index.php I get devserver:9090/dir/index/ and the contents I want appear... Can anyone help me with my rewrite rules?

    Read the article

  • Performance-optimizing Oracle 10g on a server that is also a Tomcat JSP app server?

    - by PKHunter
    I have inherited a simple RedHat 5 64-bit platform. It has SCSI disks in RAID 1, 16 GB of RAM, and a dual-core CPU, running Oracle 10g Release 2. This would be a decent platform for running the DB only, perhaps, but the same server, in an "A-A mode" clustering setup (very simple), also runs Tomcat, and there are several Java servlets running on it. Sadly there is no caching platform; we only use an external CDN for some HTML caching. I am personally more familiar with web environments on the LAMPP platform (Apache, PHP, MySQL, PostgreSQL).
    Problem: Because the server has both Tomcat JSP/Java and Oracle 10g running on the same box, with no caching, I have issues with the server going down. Often, sadly.
    Question: What are my options in terms of improving the performance of all these different apps?
    - Connection pooling? For example, in the PostgreSQL world we have PgBouncer, which really helps things. Does Oracle have something similar? Or is there a well-known Java-based external pooler that people use in production environments? (I'm not familiar with Java.)
    - Any "SQL cache" as in the MySQL and PostgreSQL world?
    - Any other kind of application cache, like "APC" or "eAccelerator" in the PHP world? The "OSCache" stuff from the Java world (a JSP caching thingie I found on Google: http://onjava.com/pub/a/onjava/2005/01/05/jspcache.html?page=2)...
    - What else?
    Sorry if this is a noob question. I have googled and googled, but the problem is I don't know what to google for, other than the broad general concepts above. So if not full answers, I would even appreciate basic pointers, and I am happy to JFGI myself. Thanks!

    Read the article

  • HP DL380 G3 2U For Basic Web Server in 2012

    - by ryandlf
    I have an opportunity to pick up a used HP DL380 G3 2U for $100. I'm looking for a basic entry-level web server that I can host a small to medium size website on and more or less learn the ins and outs of running my own web server, before I bite the bullet and spend a couple grand on a server. The specs are:
    - Dual (2) Intel Xeon 2.4 GHz, 400 MHz, 512 KB cache
    - 4 GB PC2100 ECC Registered memory
    - 6 x 72 GB 10K U320 SCSI hard drives
    - Smart Array 5i RAID controller
    - Redundant power supplies
    - DVD/floppy, dual Intel GB NICs, USB
    Or would I be better off spending a couple hundred bucks on something like this new HP? It seems like the only major difference is SATA and a bit of storage, but I will likely be implementing a separate storage system of some sort anyway. I guess it also wouldn't hurt to mention that I plan on running a Linux server distro, so would the hardware be likely to support Linux on a system that is 4 generations old? I don't mind spending a couple hundred extra dollars if it's a better solution, but as mentioned previously I am simply looking for a server to learn on and probably use for a year or so while I put together a small to medium size website.

    Read the article

  • Strange Behaviour with Unicode Characters in Windows

    - by open_sourse
    OK, I do not know if this is a programming question, but it certainly is a technical one, so I am asking it here. I was working on some internationalization stuff in my PHP code, and in order to ensure that my generated HTML displays Unicode correctly based on the encoding and so on, I decided to add some Chinese text to my PHP page, which then echoes it into the browser to complete my test case. So I went into Google and typed "Chinese", and copied the first Chinese text that the search returned (which was ??/??). I then copied it into Notepad++, which is my editor, and to my surprise it showed up as boxes, similar to [][]/[][]. So I thought the encoding in Notepad++ was messed up and I changed the encoding to UTF-8 and UCS; neither worked. I did it fresh in a newly encoded file, and still I got the boxes. The same content, when I paste it into Google and StackOverflow (like I did in this posting), shows up as correct Chinese! I even opened up the Windows Clipboard Viewer, and the content is represented in the Clipboard as boxes! I tried pasting it into the Windows Explorer address bar and using it to rename a file, but I still get boxes. But it shows up correctly when pasted into my Chrome browser address bar! Is this a Windows issue? Since I am able to paste it correctly in SO, the data in memory should be encoded correctly, right? But if that is the case, why does it show up as boxes in the Clipboard Viewer? I am confused here... By the way, I am using Windows XP with SP3. (I am asking this question here, even if it is not programmatic, because it is preventing me from running my programming test cases.)

    Read the article
