Search Results

Search found 22641 results on 906 pages for 'case'.

Page 649/906

  • Firefox: Unload a tab manually

    - by unor
    Firefox has a setting "Don't load tabs until selected" (see How do I make Firefox 13 Load All My Tabs on Startup or when Resuming Reload). I like that behaviour. I am searching for a way to "deload"/deactivate a tab manually for a session (until I reload it). It should stop all running JavaScript functions and plugins (like Flash). The whole webpage content may disappear until I reload/re-activate the tab, but that is not a requirement. The title has to stay displayed as the tab label (as is the case with the startup setting, too). The workaround would be to restart Firefox and not switch to the tab I want deactivated. This is pretty annoying, of course. EDIT: Here is what I found so far (thanks, @bytebuster!):
    - BarTab: no longer being maintained (see why)
    - BarTab Lite: seems to be missing this functionality from BarTab
    - Dormancy: experimental; comes with a warning that it "may eat your session"
    - Tab Mix Plus: feature request "Unload Tab feature"
    - Tab Utilities: seems to offer this functionality only for automatic unloading; feature request "Add 'unload tab' to tab context menu"
    - UnloadTab: removed from addons.mozilla.org (who knows why)

    Read the article

  • Can Spotlight or Media Browser index metadata contained in iPhoto or Aperture in Mac OS X?

    - by jaydles
    It seems silly to go to all the trouble to assign "Face" data to thousands of photos, but not make it possible to use that data to locate them outside of that application. Is there any way to get Spotlight or Media Browser in OS X (Snow Leopard) to index and recognize metadata (Faces, Places, etc.) contained in iPhoto or Aperture? I know that metadata is stored in the "library" database for Aperture/iPhoto, rather than on the actual files (which is too bad). And I can even potentially see why it might create challenges for Spotlight to use it, since Spotlight is presumably a file index system, not a media organizer, but surely the Media Browser used across the other OS X apps is intended to use it? The Media Browser's whole purpose seems to be to let you easily locate and reference the items you organize in one of the iLife apps (iPhoto or Aperture, in this case) from the others (say, iMovie, or Mail). It's particularly vexing since the photo app on the iPhone sorts by Faces by default. Additionally, the Mac-based Media Browser does access smart albums and folders, so you could establish a workaround by creating a smart album for each face, place, or tag, and access them that way, but it seems like there must be an easier way. Am I missing something?
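    One way to check what Spotlight can actually see for a given image is to dump its indexed attributes from the command line. This is only a hedged diagnostic sketch (the file path and keyword are placeholders); if the Faces/Places data doesn't show up here, it lives only in the iPhoto/Aperture library database rather than in anything Spotlight indexes on the file itself:

      # List every Spotlight attribute recorded for one photo (path is hypothetical)
      mdls ~/Pictures/sample-photo.jpg

      # Query the index for a keyword, in case iPhoto/Aperture did write it to the file
      mdfind "kMDItemKeywords == '*Smith*'cd" -onlyin ~/Pictures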

    Read the article

  • Video card not detected in POST on initial boot.

    - by Jeff M
    I have a minor problem with my desktop computer after cleaning it out for dust. When I first boot up the computer, the video card does not get detected, so I can't see anything. In POST, I'm getting the "can't detect video card" beeps. The boot sequence continues normally, just without video. However, if I restart it (using the restart button) anytime after POST, it boots up normally. I have no reason to think that the motherboard, video card or PSU got damaged in the process: it was working fine before, it works fine after resetting, and I took all the necessary precautions while cleaning. On the initial boot, I can hear the video card's fan power up but immediately power down and try again one more time, only to fail. After the beep, resetting gets everything running and sounding normal. I've reseated the card a couple of times and reset the BIOS, but that doesn't seem to help. I'm hoping I won't have to take it out and remove and reinstall everything again. Does anyone recognize these symptoms well enough to know exactly what the problem is? My guess is that the video card isn't getting enough juice initially to run stably enough to be detected. I just don't know what I did (or didn't do) to get it into this state. It's not a high-priority thing for me at the moment; it just means I have to reset it after initially turning it on, but I will eventually remove everything and reinstall if it comes to that. I don't think the specs are relevant here, but just in case, here's the relevant stuff:
    Motherboard: Gigabyte P35-DS3P
    Video: EVGA GeForce 8600 GTS
    PSU: Antec True Power Trio 650W
    Built ~2 years ago, still running well

    Read the article

  • How to set umask globally?

    - by DevSolar
    I am using a private user group setup, i.e. a user foo's home directory is owned by foo:foo, not foo:users. For this to work, I need to set the umask to 002 globally. After a quick grep -RIi umask /etc/*, it seemed for a moment that modifying the UMASK entry in /etc/login.defs should do the trick. It does, too -- but only for console logins. If I log in to my desktop and open a terminal there, I still get the default umask 022. The same goes for files created from apps started through the menu. Apparently, the display manager (or whatever X11 component is responsible) sources some different setting than a console login does, and damned if I can tell which one it is. (I tried changing the setting in /etc/init.d/rc, and no, it did not help.) How / where do I set the umask globally (and for all users), so that the X11 desktop environment gets the memo as well? (The system is Linux Mint / Ubuntu, in case that changes anything...)
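    For reference, one commonly suggested route for covering graphical logins as well is to let PAM set the umask for every session type via pam_umask. This is only a sketch under the assumption of a Debian/Ubuntu-style PAM layout (which Mint inherits), not something confirmed on the asker's machine:

      # Console logins read UMASK from /etc/login.defs (as already tried above)
      sudo sed -i 's/^UMASK.*/UMASK 002/' /etc/login.defs

      # Graphical logins go through PAM, so pam_umask can apply the same value there
      echo 'session optional pam_umask.so umask=002' | sudo tee -a /etc/pam.d/common-session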

    Read the article

  • Forcing programs to be installed to another drive

    - by zyboxenterprises
    I have an SSD as my main Windows drive, with a 640GB 2.5" HDD partitioned to store programs and user settings, and also to act as backup (it's the only thing I had lying around at the time of building my PC). The goal was to make the PC as fast as possible, while having increased storage capacity available for normal user data and to assist in my small data recovery business. The problem is that whenever I install a program, it installs to C:\Program Files (or Program Files (x86) for 32-bit programs), although I have changed the environment variables. This wouldn't normally be an issue; however, every installation program points its shortcuts to my 640GB HDD. The root layout of both drives: To clarify: program files get installed to C:\, while program shortcuts always point to Z:\, my 640GB HDD. Modifying the relevant environment variables doesn't do anything. I looked at this, but it only talks about modifying the registry and environment variables, which I have already done. I install to the Z:\ drive if the installation program lets me change the installation path, but installers sometimes don't let me change it. Is there a way that I can force every program to install to the relevant location on Z:\? Perhaps I'm missing something here? Edit: Found this program; would it be appropriate to use in my case? I would be able to move the entire Program Files (and its x86 version) to Z:\ without impacting performance.

    Read the article

  • “NT AUTHORITY\ANONYMOUS LOGON” error in Windows 7 (ASP.NET & Web Service)

    - by Tony_Henrich
    I have an ASP.NET web app which works fine on a Windows XP machine in a domain. I am porting it to a standalone Windows 7 machine. The app uses a web service which makes a call to SQL Server. The web server (IIS 7.5) and SQL Server are on the same standalone machine. I enabled Windows authentication for the website and the web service. The web service uses a trusted-connection connection string, and its credentials use System.Net.CredentialCache.DefaultCredentials. I noticed the username, password and domain name are blank after the call! The web service and web site use the 'Classic .NET AppPool' with the NetworkService identity. I am getting an "NT AUTHORITY\ANONYMOUS LOGON" exception in the database call in the web service. I am assuming it's related to the blank credentials. I am expecting the ASPNET user to be the security token passed to the database. Why is this not happening? Did I miss a setting? (Usually this happens when SQL Server and the web server are on two different machines in a domain -- delegation and double hopping -- but in my case everything is on one dev box.)

    Read the article

  • How to include worksheets 3 and 4 in the provided cell formula?

    - by user21255
    I have kindly been given this formula with an explanation of how it works. Insert this formula into cell B4 of the sheet "Cases":
    =IF(NOT(ISBLANK('1st'!B25)),'1st'!B25,IF(NOT(ISBLANK(INDIRECT("'2nd'!R" & (ROW($B4)-(COUNTA('1st'!$B:$B)-COUNTA('1st'!$B$1:$B$24))-4+25) & "C" & COLUMN(B4),FALSE))),INDIRECT("'2nd'!R" & (ROW($B4)-(COUNTA('1st'!$B:$B)-COUNTA('1st'!$B$1:$B$24))-4+25) & "C" & COLUMN(B4),FALSE),""))
    Copy the formula to the other cells in the worksheet; the relative addresses will adjust automatically. The formula works like this:
    - Check if there is content in 1st. If yes, copy it.
    - If no, find out how many entries there are in 1st in total. (This is done by using the COUNTA function on the whole B column in 1st and subtracting the number of non-empty cells above the actual case data.)
    - Use this information together with the current cell's number to find out the location of the cell that has to be copied from 2nd.
    - Create the address of that cell and use the ISBLANK function on the INDIRECT function with that address to check whether the cell is empty. If it is not, use the INDIRECT function again to display it. If it is empty, just display an empty string.
    Now this works fine when I have only 2 sheets. But let's say I want to include a third and fourth sheet (named 3rd and 4th respectively): what should I add, and where should it go in the formula above? There are actually 31 sheets, but if I know how to add the 3rd and 4th sheets to the formula, then I can figure out how to do the rest. Thanks

    Read the article

  • VirtualBox management interface unreliability

    - by Arlen Cuss
    I'm using VirtualBox 3.2.8_OSE with 20 VMs running, and everything's going fine. I find that if I hammer the VBoxManage interface, all sorts of interesting things happen, usually necessitating either a restart of the VM in question, or of all VMs. For instance, if I use VBoxManage guestcontrol execute to run processes, after a few hours of using it maybe once or twice a minute on any given VM, it'll mysteriously start reporting VERR_NOT_IMPLEMENTED and refusing to do anything -- sometimes trying to restart /usr/sbin/VBoxService on the VM itself will get it back in working order, but often it won't, and in the meantime, no data can be collected using VBoxManage. Such data includes the VM's IP, so if I haven't recorded it earlier, I'm usually in trouble and have no option but to portscan the network for it, or kill the VM's process on the host manually and restart it. This one I haven't narrowed down yet, but it seems that even using VBoxManage guestproperty get (to retrieve a machine's IP) frequently and rapidly is enough to cause all VMs' management interfaces to die. The processes are still running fine, but VBoxManage reports them all as "powered off". In the meantime, another process somewhere in the system seems to have decided that their being powered off means they need to be powered on again, and suddenly I have twice as many VBoxHeadless processes running as I used to. Has anyone else seen behaviour like this? Is there any workaround? This is a serious impediment to my work, as I've had to resort to a lot of (hacky) caching of data and rate-limiting how often I call VBoxManage, just in case I accidentally bring 20 VMs to their knees.
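    As an illustration of the caching/rate-limiting workaround mentioned above, here is a rough shell sketch (the VM name, property path, cache location, and 60-second threshold are placeholders, and GNU date/stat are assumed) that only asks VBoxManage for a guest IP when the cached copy has gone stale:

      #!/bin/sh
      # Hypothetical wrapper: cache "guestproperty get" results to avoid hammering VBoxManage.
      VM="$1"
      CACHE="/tmp/vbox-ip-$VM"
      # Reuse the cached IP if it was fetched less than 60 seconds ago
      if [ -f "$CACHE" ] && [ $(( $(date +%s) - $(stat -c %Y "$CACHE") )) -lt 60 ]; then
          cat "$CACHE"
          exit 0
      fi
      # Otherwise query the guest additions property and refresh the cache
      VBoxManage guestproperty get "$VM" "/VirtualBox/GuestInfo/Net/0/V4/IP" \
          | awk '{print $2}' > "$CACHE"
      cat "$CACHE"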

    Read the article

  • Wireless router setup for 1-1 NAT

    - by Carlos
    What I have:
    - A Linksys WAG160N router with firmware version 2
    - A "pool" of 5 external static IPs provided by my ISP (213.xx.xxx.n)
    - All the required configuration values for the static IPs, such as subnet mask, gateway and static DNS 1, 2, 3
    Current WAN configuration: Encapsulation: RFC 2364 PPPoA; Multiplexing: VC; QoS type: UBR; DSL modulation: MultiMode
    What's connected to the network:
    - 1 x server (which I want to make available to the outside)
    - 5 x desktops with static internal IPs, such as 192.168.0.xx
    - 2 x network printers, also with internal static IPs
    - 2 x laptops
    - 1 x NAS (network attached storage), also on a static IP
    What I want to do: I would like to make the server available from outside the network, for example from your house. The problem is that I'm not really sure how to do this. I have tried following the steps in the Linksys instruction manual, but they do not seem to work; once I set it up as shown below, I lose internet and all hell breaks loose. Going into further detail, I would prefer that the network is changed as little as possible; by this I mean that all the computers stay networked with each other and only the server is accessible from outside the network. What I need HELP with: I have read that it is possible to set up 1-1 NAT (I know where it is in the menu but have no clue what it does...) so that I can NAT a single public IP directly to a single private IP (in our case, the server). But how do I do that? Or maybe there is an alternative?

    Read the article

  • Connecting multiple access points

    - by mohsen farahanipoor
    I'm working on a big project. We want to create a wireless network throughout a building with 15 floors. My idea is that we should set up at least one unified wireless access point on each floor; where the signal attenuates, we would use access point extenders/repeaters. I selected the DWL-6600AP from among D-Link's industrial access points. I want to implement a single wireless LAN throughout the building. Is it possible to combine multiple DWL-6600 access points to achieve just a single WLAN? Can a wireless switch controller do this task? Can these access points interfere with each other? What is the solution? I read the learning materials on D-Link's website, but I am still confused. My other question is about connecting these APs to the wireless switch controller: is it possible to use power line networking to connect the DWL-6600s to the wireless switch controller device? My main goal is that clients with portable devices such as laptops should be able to connect to the network easily, to share and communicate without any further manual configuration, since they are already connected to a single network.

    Read the article

  • Generic/Text Printer on Windows 7 not prompting for file name

    - by Trevor Tippins
    Hope someone can shed some light on this. I am downloading reports from an AIX-based system by directing them to a TT printer which the terminal emulator (MultiView 2000) intercepts and directs to the default printer on the local system. This local printer is configured as a vanilla Generic/Text printer attached to a FILE port. When I print from AIX, the output is spooled down and the local printer prompts for a file name into which to save the file...but not under Windows 7. This has worked fine for many years, on both Win2K and WinXP. However, on Windows 7 the output gets spooled as a file into spool\PRINTERS (and looks as expected) but the print job then hangs with a status of "Error - Printing" and never prompts for a file name. I have to cancel the job. The Generic/Text printer works as expected with other applications. I have tried setting the printer to print directly rather than spooling but this only serves to hang the terminal session too. I've also tried to run the emulator in Windows 2000 Compatibility Mode and as Administrator in case it was something like that but with no luck. As you might expect, it does work fine in XP Mode (as long as I print to a printer defined therein and not the host's printer) but operationally this isn't going to be an option. Obviously this emulation software is a decade old (at least) and I could just cross/upgrade all the users (at a cost) but, before I do so, has anyone seen this sort of behaviour before and found some sort of fix? Remote OS: AIX 5 Client OS: Windows 7 Pro (32-bit) Printer: Generic/Text on a FILE port TE Software: MultiView 2000 (32-bit) Thanks in advance.

    Read the article

  • Fedora installed in Legacy mode, how to make it work in UEFI?

    - by TryntaLearn
    I am trying to install a Linux distribution on my new laptop. It's an MSI GE40, which comes preinstalled with Windows 8. It's a UEFI machine. I have tried installing Ubuntu and Fedora with limited success. I've tried: running it in UEFI, UEFI with CSM mode, with Secure Boot enabled, ... with Secure Boot disabled, ... with Secure Boot enabled but in user mode. I have had no success with any of these methods. With Ubuntu, the GRUB loader shows up, but when I pick 'Try Ubuntu' or 'Install Ubuntu', it's just a blank screen (I've been using live USBs, by the way). With Fedora, it'll show me the next screen, on which it says 'binary authorised by vendor certificate' or 'Secure boot not enabled', and then stop doing anything. The closest thing to success I reached was switching to legacy mode to install Ubuntu, in which case I was able to get to the Ubuntu installer, but it wouldn't recognize Windows 8 on my computer; so instead of continuing on, I rebooted and removed the USB pen drive, only to find my computer couldn't find Windows 8. After a little fiddling about I got it to find Windows 8 again. Any ideas on how I should go about trying to install a distro on my computer? UPDATE: So I ended up installing Fedora using legacy mode. To use both it and Windows at boot, I manually enter automatic repair so I can get to my UEFI settings and switch the boot mode to UEFI to boot Windows 8. I guess my question needs to be modified to: how do I get all of this to work in UEFI mode, so I can dual boot via selection through a bootloader, and not by repeatedly switching boot modes?

    Read the article

  • Windows-to-linux: Putty with SSH and private/public key pair

    - by Johnny Kauffman
    I spent about 3 hours trying to figure out how to connect to a Linux box from my Windows machine using PuTTY without having to send the password. This is connecting to an Ubuntu server that is using OpenSSH. The private key is SSH-2 RSA, 1024 bits. I am connecting using SSH-2. I have run into the more common problems already: PuTTY generated the public key in the "wrong format". I have corrected this (as seen on this blog post). However, since I am not yet connected, I cannot absolutely confirm that this file is in the correct format. The key is all on a single line now, and I have tried adding/removing line breaks at the end of the file. I've also tried the public file doctoring process a few times to ensure that I haven't flubbed up the manual conversion. Even so, I have no way to verify accuracy here. The permissions were at one point wrong as well, specifically meaning that the file was too permissive. I had to solve this too, and I know it got past this because I no longer see a related error in /var/log/auth.log. I've tried both authorized_keys and authorized_keys2 in case the server has an old version of OpenSSH, but this changed nothing. I do have access as a user: after the key authentication fails, I can enter my password instead. The only remaining nibble of information I have is that it claims I have the alleged password wrong: sshd[22288]: Failed password for zzzzzzz from zz.zz.zz.zz port 53620 ssh2. Even so, as far as I can tell, this is just a lazy try/catch somewhere, since I don't think there's a password involved at all. I see nothing else in any of the /var/log files of use. What else could be wrong?
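    For what it's worth, OpenSSH ships a converter for PuTTY-style (RFC 4716) public keys, which sidesteps the manual reformatting described above. A hedged sketch of the server-side steps (file names are placeholders):

      # Convert a PuTTYgen public key (RFC 4716 format) into OpenSSH's one-line format
      ssh-keygen -i -f putty_key.pub >> ~/.ssh/authorized_keys

      # Tighten permissions; sshd ignores keys if these are too open
      chmod 700 ~/.ssh
      chmod 600 ~/.ssh/authorized_keys

      # Watch the server log while retrying the login to see which step actually fails
      tail -f /var/log/auth.log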

    Read the article

  • Hard Disk Not Counting Reallocated Sectors

    - by MetaNova
    I have a drive that is reporting a current pending sector count of 45. I have used badblocks to identify the sectors and I have been trying to write zeros to them with dd. From what I understand, when I attempt writing data directly to the bad sectors, it should trigger a reallocation, reducing the current pending sector count by one and increasing the reallocated sector count. However, on this disk both the Reallocated_Sector_Ct and Reallocated_Event_Count raw values are 0, and dd fails with I/O errors when I attempt to write zeros to the bad sectors. dd works fine, however, when I write to a good sector.
    # dd if=/dev/zero of=/dev/sdb bs=512 count=1 seek=217152
    dd: error writing ‘/dev/sdb’: Input/output error
    Does this mean that my drive, in some way, has no spare sectors to be used for reallocation? Is my drive just in general a terrible person? (The drive isn't actually mine, I'm helping a friend out. They might have just gotten a cheap drive or something.) In case it is relevant, here is the output of smartctl -i:
    Model Family:     Western Digital Caviar Green (AF)
    Device Model:     WDC WD15EARS-00Z5B1
    Serial Number:    WD-WMAVU3027748
    LU WWN Device Id: 5 0014ee 25998d213
    Firmware Version: 80.00A80
    User Capacity:    1,500,301,910,016 bytes [1.50 TB]
    Sector Size:      512 bytes logical/physical
    Device is:        In smartctl database [for details use: -P show]
    ATA Version is:   ATA8-ACS (minor revision not indicated)
    SATA Version is:  SATA 2.6, 3.0 Gb/s
    Local Time is:    Fri Oct 18 17:47:29 2013 CDT
    SMART support is: Available - device has SMART capability.
    SMART support is: Enabled
    UPDATE: I have run shred on the disk, which has caused Current_Pending_Sector to go to zero. However, Reallocated_Sector_Ct and Reallocated_Event_Count are still zero, and dd is now able to write data to the sectors it was previously unable to. This leaves me with several other questions: Why aren't the reallocations being recorded by the disk? I'm assuming the reallocation took place, as I can now write data directly to the sector and couldn't before. Why did shred cause reallocation and not dd? Does the fact that shred writes random data instead of just zeros make a difference?
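    As a side note, a hedged way to poke at a single suspect LBA without dd is hdparm's sector-level commands, re-reading the SMART counters after each attempt. This is only a sketch (the LBA is the one from the question, and --write-sector destroys that sector's contents):

      # Re-check the relevant SMART attributes after each attempt
      smartctl -A /dev/sdb | grep -E 'Reallocated|Pending'

      # Read the suspect sector directly (bypasses the block cache)
      hdparm --read-sector 217152 /dev/sdb

      # Overwrite that one sector in place; the drive may remap it if it is truly bad
      hdparm --yes-i-know-what-i-am-doing --write-sector 217152 /dev/sdb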

    Read the article

  • Accessing a ClearCase view drive from a virtual machine is slow

    - by PermanentGuest
    I have a Windows XP virtual machine running under a Windows XP host. On the host: ClearCase 7.1.1.2 is installed, and I have a dynamic view mapped to a drive letter. The view has a certain VOB/directory structure where my application DLLs from the nightly build and config files are stored. I run my application on the host machine, where it uses the DLLs and config files from the VOB, and everything runs smoothly. Now I want to move this setup to a virtual machine. On the guest: I'm running the guest with VMware Player. I don't want to install ClearCase on it, as I don't want to expose this machine to the network; the network setting in the guest is 'host-only'. I have mapped the host's ClearCase view drive as a shared folder and I'm able to access this drive from the virtual machine, and the application runs. However, the problem is that access to the ClearCase drive from the virtual machine is very slow; I can see this in Windows Explorer. Because of this, starting my application takes several seconds in the virtual machine, while on the host it comes up pretty fast. My question is: is there any way to speed up the performance? I have managed to copy some of the DLLs which don't change frequently to the virtual machine to improve performance, but there are still a lot of DLLs which have to be taken from the ClearCase drive because they change frequently. VMware Player version: 3.0.1 build-227600. Both guest and host: Windows XP Service Pack 3. Host ClearCase: 7.1.1.2.

    Read the article

  • ssh many users to one home

    - by filippo
    Hiya, I want to allow some trusted users to scp files into my server (to a specific user), but I do not want to give these users a home directory or an SSH login. I'm having trouble understanding the correct user/group settings I have to create to allow this to happen. Here is an example. Given:
    - MyUser@MyServer; MyUser belongs to the group MyGroup; MyUser's home is, let's say, /home/MyUser
    - SFTPGuy1@OtherBox1
    - SFTPGuy2@OtherBox2
    They give me their id_dsa.pub files and I add them to my authorized_keys. I reckon, then, that on my server I'd do something like useradd -d /home/MyUser -s /bin/false SFTPGuy1 (and the same for the other), and lastly useradd -G MyGroup SFTPGuy1 (then again, for the other guy). I'd expect the SFTP guys to then be able to do sftp -o IdentityFile=id_dsa MyServer and be taken to MyUser's home... Well, this is not the case: SFTP just keeps asking me for a password. Could someone point out what I am missing? Thanks a mil, f. [EDIT: Messa on StackOverflow asked me whether the authorized_keys file was readable by the other users (members of MyGroup). It's an interesting point; this was my answer: Well, it wasn't (it was 700), but then I changed the permissions of the .ssh dir and the auth file to 750, though still no effect. I guess it's worth mentioning that my home dir (/home/MyUser) is also readable by the group; most dirs are 750 and the specific folder where they'd drop files is 770. Nevertheless, regarding the auth file, I reckon the authentication would be performed by the local user on MyServer, wouldn't it? If so, I don't understand the need for other users to read it... well, just wondering.]
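    One widely used pattern for "drop-off only" accounts is OpenSSH's internal-sftp subsystem combined with a Match block, which avoids giving the users a working shell at all (internal-sftp runs inside sshd, so a /bin/false login shell no longer breaks SFTP). This is only a hedged sketch of that approach; the group name is a placeholder, and you would probably want a dedicated group for the drop-off users rather than reusing MyGroup:

      # Sketch of a block appended to /etc/ssh/sshd_config (restart sshd afterwards)
      Match Group MyGroup
          ForceCommand internal-sftp
          AllowTcpForwarding no
          X11Forwarding no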

    Read the article

  • Time Machine is getting stuck at "Preparing to Back Up" and my Trash isn't emptying

    - by zarose
    I have encountered two separate problems, but I am putting them in the same question in case they are related. First, my Trash would not empty. It seems to be getting stuck on certain files: if I reset my MacBook, some of the files will be deleted, and then if I remove a file or two at random, more can be deleted. Some of these files had strange characters in their names. I tried changing the names to single characters, but this did not help. Next, I attempted to back up my MacBook using Time Machine. I plugged in the HDD I've been using for this, but every time I try to start the backup, Time Machine gets stuck at "Preparing to Back Up". I definitely need to know how to fix the Time Machine problem, but I am curious how to solve the Trash problem as well, and whether or not these problems are related. EDIT: Console.app logged the following this morning before I left on a trip. I did not bring the HDD with me.
    6/5/12 7:41:28.312 AM com.apple.backupd: Starting standard backup
    6/5/12 7:41:46.877 AM com.apple.backupd: Error -35 while resolving alias to backup target
    6/5/12 7:41:58.368 AM com.apple.backupd: Backup failed with error: 19
    6/5/12 7:59:08.999 AM com.apple.backupd: Starting standard backup
    6/5/12 7:59:10.187 AM com.apple.backupd: Backing up to: /Volumes/Seagate 3TB Mac/Backups.backupdb
    6/5/12 7:59:13.308 AM com.apple.backupd: Event store UUIDs don't match for volume: Macintosh HD
    6/5/12 7:59:13.331 AM com.apple.backupd: Event store UUIDs don't match for volume: Blank
    6/5/12 7:59:13.683 AM com.apple.backupd: Deep event scan at path:/ reason:must scan subdirs|new event db|
    6/5/12 8:23:31.807 AM com.apple.backupd: Backup canceled.
    6/5/12 8:23:33.373 AM com.apple.backupd: Stopping backup to allow backup destination disk to be unmounted or ejected.
    6/5/12 9:51:21.572 PM com.apple.backupd: Starting standard backup
    6/5/12 9:51:22.515 PM com.apple.backupd: Error -35 while resolving alias to backup target
    6/5/12 9:51:32.741 PM com.apple.backupd: Backup failed with error: 19
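    On the Trash side, one thing worth checking from Terminal is whether the stubborn files carry BSD "locked" (immutable) flags, which commonly prevents emptying. This is a hedged sketch that assumes the files live in the user's own Trash; it only clears file flags and does not delete anything:

      # Show file flags (uchg = user immutable) for everything left in the Trash
      ls -lO ~/.Trash

      # Clear the locked flag recursively, then try emptying the Trash again from Finder
      chflags -R nouchg ~/.Trash/*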

    Read the article

  • Extract Distinct restful MVC routes from IIS logs

    - by Grummle
    This is a cross-post from Stack Overflow that, after some consideration, I believe can be asked here (I'm not getting anything on SO). My shop is using MVC3/FUBU on IIS 7. I recently put something into production and I wanted to gather metrics from the IIS logs using Log Parser. I've done this many times before with file endpoints, but because the MVC3 routes are of the form /api/person/{personid}/address/{addressid}, the log saves /api/person/123/address/456 in the uri-stem column. Does anyone have any ideas on how to get data about specific routes from IIS logs? As an example, a log like this:
    cs-uri-stem
    /api/person/123/address/456
    /api/person/121/address/33
    /api/person/3555
    /api/person/1555/address/5555
    I want information about all requests where the route used was /api/person/{personid}, so the count would be 1 in this case. Ideally, what I'd like to figure out is how to have IIS log the route template that was chosen for a particular URL -- so the IIS logs would have /api/person/{personid}/address/{addressid} in a column in addition to the cs-uri-stem /api/person/1555/address/5555.
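    For illustration, route templates can be approximated in Log Parser with LIKE/NOT LIKE patterns on cs-uri-stem. This is only a hedged sketch (it assumes Microsoft Log Parser 2.2, the default IIS W3C log naming, and is untested against the actual logs); it counts hits on /api/person/{personid} while excluding the longer address route:

      LogParser.exe -i:IISW3C -o:CSV "SELECT COUNT(*) AS Hits FROM u_ex*.log WHERE cs-uri-stem LIKE '/api/person/%' AND cs-uri-stem NOT LIKE '%/address/%'"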

    Read the article

  • Looking for some IIS redirect help/ideas

    - by CoreyT
    Right now we have a site with a LOT of static ASP pages, such as www.site.com/123.asp. This is due to how our current site's CMS builds its pages by default. I don't have an exact count, but we have roughly 6000 ASP files in the site right now. We are in the middle of a redesign and restructuring of the site, and are looking to migrate to SEO-friendly URLs. The problem we're having right now is: what do we do to redirect the old pages to the new friendly URLs? I know how to do redirects; that is not the issue here. The problems I am coming up with right now are listed below.
    1 - Is there a limit to the number of redirects in IIS?
    2 - Would having even a few thousand redirects affect IIS performance?
    3 - My understanding is that we would not be passing along page rank to the new URLs; is that true? (Not a major question, I can ask on more SEO-focused forums if nobody here is sure.)
    4 - Would using something like the IIS URL Rewrite 2 module for IIS 7 help us out? Or would I still need to define several thousand unique redirects in it?
    Our server right now is running Server 2003; however, in the redesign I would be open to migrating to Server 2008 R2 if there is a good case for it (i.e. the URL Rewrite module). Thanks for any guidance or help. I have been looking for a good way to do this for a while now and keep coming up with things that sound problematic and bad (such as having 6000 redirects).

    Read the article

  • VirtualBox: using physical partition as virtual drive

    - by Hamman Samuel
    Background: I am using VirtualBox installed on Windows 7. From within VirtualBox I am using Xubuntu as a virtual OS. The reason I chose this approach is so that I don't have to keep shutting down Windows and rebooting into Xubuntu every time I need to switch OSes. And VirtualBox's seamless mode is pretty amazing, letting me see Xubuntu and Windows 7 all on one screen. Issue: Now I am thinking of a way to have Xubuntu more integrated into my system. By this I mean I want to have a physical partition for Xubuntu, but I still want the feeling of seamless mode. Question: So finally, my question is: is it possible to load a physical partition in VirtualBox as a virtual OS? Case examples: The ideal scenario would be: I physically boot up and log in to Windows 7. Now I want to access Xubuntu, so I load VirtualBox and access my Xubuntu partition without rebooting. And the other way around too, i.e. I boot up the system, log in to Xubuntu, and can access the actual Windows 7 partition through VirtualBox. Other info: Please note that I am not talking about getting access to files, as I have a completely separate partition for my files, and am very familiar with VirtualBox's Shared Folders option.
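    VirtualBox does have a raw-disk mode that points a VMDK descriptor at a physical partition, which is the usual starting point for this kind of setup. A hedged sketch for a Windows 7 host, run from an elevated prompt (the disk and partition numbers are placeholders that must match the actual Xubuntu partition, and booting an OS installed on bare metal inside a VM can still fail for driver/bootloader reasons):

      rem List physical disks and their partition numbers first
      VBoxManage internalcommands listpartitions -rawdisk \\.\PhysicalDrive0

      rem Create a VMDK that maps only the Xubuntu partition, then attach it to a VM
      VBoxManage internalcommands createrawvmdk -filename C:\VMs\xubuntu-raw.vmdk -rawdisk \\.\PhysicalDrive0 -partitions 3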

    Read the article

  • Best way to build / implement a corporate developer Linux distro with multiple kernels?

    - by Garen
    At work we have Linux users who understandably prefer using Ubuntu. The problem is, we also have developer tools that only work with 'officially' supported Linux distributions that use much older 2.6.18-based kernels. (And even if they worked with newer ones, the vendors could always say they won't "support" the software unless it's on one of their 'officially' supported platforms.) We could of course just tell them to use CentOS or something else 2.6.18-based, and I'm sure their response would be something like: "you can take Ubuntu from our cold, dead hands." :) Which brings me to some questions: is there any good/easy/recommended way to run something like Ubuntu as the host OS and CentOS 5.x as a guest VM (and with which system -- Xen, KVM, VMware, ...?), and then roll that into our own custom internal distribution that could be easily installed? KVM looks like a good high-performance option that was just recently included in RHEL 5.4, but if hardware support for virtualization like Intel VT or AMD-V is necessary, then I'd guess only those folks with fairly new PCs will be able to do it. I would be very interested to hear how anyone else has addressed this kind of issue. EDIT: The target audience / users of this kind of system would be developers; each one needs to run locally licensed commercial software, so building out some separate beefy central machines isn't an option, unfortunately, due to license restrictions. Even if that weren't the case, a couple of developers could quickly eat up the resources with parallel builds. :) Ideally, I was hoping there is some step-by-step guide out there for building your own pre-built distribution that has, e.g., CentOS 5.x as a guest on an Ubuntu Desktop host.

    Read the article

  • Virtualize SBS 2003 - P2V vs migrating to new VM

    - by jlehtinen
    I need to virtualize an SBS 2003 server in my work environment, and could use some tips on what people think is the best way to proceed.
    Background: The SBS 2003 server is the primary DC for the domain and also hosts FTP, RRAS (VPN), DNS, and file shares. Exchange is NOT used, and neither is SQL Server. DHCP is done via a firewall appliance. I have added a Server 2003 VM to the domain and promoted it to the DC role; AD/DNS is replicating there correctly. This was mainly done to provide fault tolerance to the domain; I was not intending to make this VM the primary DC. I've already asked about buying upgraded licensing for Server 2008/2012 but was refused due to cost.
    Options: I see (at least) two routes I could take to complete this. From what I've read, option 2 is the "preferred" method, but there are a few steps where I'm not clear on what to expect.
    Option 1.) P2V the primary DC
    - Power off the primary DC
    - Power off the secondary DC (to prevent USN rollback in case the P2V has issues)
    - P2V (cold clone) the primary DC
    - Boot the new PDC VM
    - Allow the new hardware to be detected
    - Remove the old NIC hardware from Device Manager
    - Assign the old IPs to the new virtual NICs
    - Reboot the PDC VM, confirm connectivity and no major issues
    - Power on the secondary DC, confirm replication
    Option 2.) Create a new VM, transfer roles, remove the original DC from the domain
    - Create a new VM, install SBS 2003 (Do I need the original SBS install discs for this? The MS migration doc mentions this.)
    - Add the VM to the domain, promote it to the DC role (Does this start the 7-day timer during which two SBS servers can coexist in the same domain?)
    - Set up RRAS on the new VM
    - Set up IIS/FTP on the new VM
    - Move file shares to the new VM
    - Transfer FSMO roles to the new VM DC
    - dcpromo the original primary DC out of the domain

    Read the article

  • ProCurve ACL to prevent a subnet from leaving the switch

    - by kce
    I have a single HP ProCurve 2610 in a remote location that is connected to the rest of the network via SHDSL. There are two layer-3 networks on this segment. ACLs are set up to deny one subnet (192.0.2.0/24) from ever being able to leave the switch, by virtue of being applied to the port attached to the upstream connection. The other subnet should be permitted to leave the switch freely. Both subnets are on the same VLAN. Unfortunately, sFlow very clearly shows broadcast traffic from 192.0.2.0/24 on the upstream connection. ProCurve ACLs are not my strong suit, but I feel like I'm missing something very simple here.
    ip access-list extended "Filter for Camera Network"
       deny ip 192.0.2.0 0.0.0.255 0.0.0.0 255.255.255.255 log
       permit ip 0.0.0.0 255.255.255.255 0.0.0.0 255.255.255.255
       exit
    interface 24
       name "DSL - UPLINK"
       access-group "Filter for Camera Network" in
       exit
    Unless I am mistaken, traffic from 192.0.2.0/24 should be dropped as it crosses the uplink port (int 24), whereas all other traffic will be permitted by the following default allow rule. What exactly am I missing here? EDIT: "Firstly, why do you have two subnets contained in the same VLAN?" Because that's how it was configured by a previous administrator, and while it makes conceptual sense that a single subnet is "mapped" to a single VLAN, there's no technical constraint that I am aware of that makes this have to be the case. "Instead of filtering inbound traffic on your uplink, you should be filtering outbound traffic. The HP 2600 series can only filter inbound traffic on interfaces." Should I change my filter to deny any to 192.0.2.0/24?

    Read the article

  • Intel Ethernet Bottlenecking Internet?

    - by Donald Darma
    I'm having trouble with my internet speeds. I just recently built a PC and everything is fine. I installed the Intel drivers and connected to the internet. It connects, but I'm only getting half the speed I should be. My normal speed is 20 Mbps, but speedtest.net is only showing 10. It can't be my ISP (which is TWC, if anyone is asking) because my other devices, like my laptop and my smartphone, are showing 20 down. Here's my system:
    CPU: i5 4430
    HSF: stock cooler
    Mobo: Gigabyte Z87MX-D3H
    GPU: 2x MSI R7950-3GD5/OC BE
    RAM: Crucial Ballistix Tactical Tracer 8GB dual channel
    PSU: Silencer High Performance Power Supply 750 Watt 80+ (it's a subdivision of OCZ)
    HDD: Seagate Barracuda 7200RPM 3TB
    SSD: Samsung 840 Evo 120GB
    Case: Corsair Obsidian 350D
    Edit: I am using the stock onboard adapter on the motherboard. I know for a fact that the cable is good because I used it on my laptop and it ran fine; it's a Cat 5e cable. I also ran iperf and it gives me the same result, 10 Mbps.

    Read the article

  • "one-off" use of http_proxy in a Chef remote_file resource

    - by user169200
    I have a use case where most of my remote_file resources and yum resources download files directly from an internal server. However, there is a need to download one or two files with remote_file that are outside our firewall and which must go through an HTTP proxy. If I set the http_proxy setting in /etc/chef/client.rb, it adversely affects the recipe's ability to download yum and other files from internal resources. Is there a way to have a remote_file resource download a remote URL through a proxy without setting the http_proxy value in /etc/chef/client.rb? In my sample code below, I'm downloading a Redmine bundle from rubyforge.org, which requires my servers to go through a corporate proxy. I came up with a ruby_block before and after the remote_file resource that sets the http_proxy and "unsets" it. I'm looking for a cleaner way to do this.
    ruby_block "setenv-http_proxy" do
      block do
        Chef::Config.http_proxy = node['redmine']['http_proxy']
        ENV['http_proxy'] = node['redmine']['http_proxy']
        ENV['HTTP_PROXY'] = node['redmine']['http_proxy']
      end
      action node['redmine']['rubyforge_use_proxy'] ? :create : :nothing
      notifies :create_if_missing, "remote_file[redmine-bundle.zip]", :immediately
    end

    remote_file "redmine-bundle.zip" do
      path "#{Dir.tmpdir}/redmine-#{attrs['version']}-bundle.zip"
      source attrs['download_url']
      mode "0644"
      action :create_if_missing
      notifies :decompress, "zipp[redmine-bundle.zip]", :immediately
      notifies :create, "ruby_block[unsetenv-http_proxy]", :immediately
    end

    ruby_block "unsetenv-http_proxy" do
      block do
        Chef::Config.http_proxy = nil
        ENV['http_proxy'] = nil
        ENV['HTTP_PROXY'] = nil
      end
      action node['redmine']['rubyforge_use_proxy'] ? :create : :nothing
    end

    Read the article
