Search Results

Search found 16331 results on 654 pages for 'option'.


  • RDS, RDWeb, and RemoteApp: How to use public certificate for launching apps on session host?

    - by Bret Fisher
    Question: How do I tell RDWeb to launch apps from remote.domain.com rather than host.internaldomain.local?

    Environment: an existing org with an AD forest, plus a new single Server 2012 box running all Remote Desktop Services roles for the session host. I used the new 2012 wizard to set up "QuickSessionCollection" with these roles:

        RD Session Host
        RD Connection Broker
        RD Gateway
        RD Web Access
        RD Licensing

    Everything works with the self-signed cert, but we want to prevent those warnings. The users are potentially on non-domain machines, so pushing a private root cert onto their machines isn't an option; every part of the solution needs to use the public cert. I added the public remote.domain.com cert to all roles using the Server Manager GUI:

        RD Connection Broker - Enable Single Sign On
        RD Connection Broker - Publishing
        RD Web Access
        RD Gateway

    So now everything works beautifully except the last step:

        1. The user logs into https://remote.domain.com.
        2. The user clicks an app icon, which in the background downloads a .rdp file signed by remote.domain.com.
        3. The .rdp file is set to use the RD Gateway, which is remote.domain.com.
        4. The .rdp file says the app is hosted on internal host.internaldomain.local, which doesn't match the RDP-Tcp TLS cert of remote.domain.com, and pops a warning.

    It's this last step that I'd like to fix. Is there a config option in PowerShell, WMI, or a .config file to tell RDWeb/RemoteApp to use remote.domain.com for all published apps, so the TLS cert for RDP matches what the Session Host is using?

    NOTE: This question talks about this issue, and this answer mentions how you might fix it in 2008, but that GUI doesn't exist in 2012 for RemoteApp, and I can't find a PowerShell setting for it.

    NOTE: Here's a screenshot of the setting in 2008 R2 that I need to change. It tells RemoteApp what to use for the Session Host server name. How can I set that in 2012?
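    One hedged avenue (an assumption based on community "change published FQDN" scripts, not a documented GUI setting): the 2012 deployment keeps a redirector name in WMI under the RDMS namespace, and rewriting it changes the server name stamped into published .rdp files. The class, namespace, and property names below should be verified on the broker before relying on this; run it on the RD Connection Broker.

        # Sketch: point published RemoteApps at the public name.
        # Win32_RDMSDeploymentSettings / DeploymentRedirectorServer are
        # assumptions taken from community scripts - verify they exist.
        $newFqdn  = "remote.domain.com"
        $settings = Get-WmiObject -Namespace "root\CIMV2\RDMS" `
                                  -Class "Win32_RDMSDeploymentSettings"
        $settings.SetStringProperty("DeploymentRedirectorServer", $newFqdn)

        # Some variants also force clients to honor the redirector name:
        Set-RDSessionCollectionConfiguration -CollectionName "QuickSessionCollection" `
            -CustomRdpProperty "use redirection server name:i:1"

    After changing it, re-download a .rdp file from RDWeb and inspect its full address field before testing against the gateway.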

    Read the article

  • mdadm+zfs vs mdadm+lvm

    - by Alex
    This may be a naive question, since I'm new to this and I cannot find any results about mdadm+zfs, but after some testing it seems it might work. The use case is a server with RAID6 for some data that is backed up somewhat infrequently. I think I'm well served by either ZFS or RAID6. The platform is Linux. Performance is secondary. So the two setups I am considering are:

        1. A RAID6 array plus regular LVM and ext4
        2. A RAID6 array plus ZFS (without redundancy)

    It's this second option that I don't see discussed at all. Why ZFS+RAID6? Mainly because of the inability of ZFS to grow a raidz2 with new disks. You can replace disks with larger ones, I know, but not add another disk. You can accomplish 2-disk redundancy and ZFS disk growth using mdadm as the redundancy layer. Besides that main point (otherwise I could go directly to raidz2 without RAID under it), these are the pros and cons I see for each option:

        - ZFS has snapshots without preallocated space. LVM requires preallocation (though this might no longer be true).
        - ZFS has checksumming (I'm very interested in this) and compression (a nice bonus).
        - LVM has online filesystem growth (ZFS can do it offline with export / mdadm --grow / import).
        - LVM has encryption (ZFS-on-Linux does not). This is the only major con of this combo that I see. I guess I could go RAID6+LVM+ZFS... but that seems too heavy, or does it?

    So, to close with a proper question:

        1. Is there anything that inherently discourages or precludes RAID6+ZFS? Does anyone have experience with a setup like this?
        2. Are there possibilities for checksumming and compression that would make ZFS unnecessary (while maintaining the possibility of filesystem growth)? Because the RAID6+LVM combo seems to be the sanctioned, tested way.
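    For concreteness, a minimal sketch of option 2 (device names are placeholders; a real build should reference disks via /dev/disk/by-id and check chunk/stripe alignment):

        # RAID6 across six disks, managed by mdadm.
        mdadm --create /dev/md0 --level=6 --raid-devices=6 /dev/sd[b-g]

        # Single-device (non-redundant) ZFS pool on top of the md array.
        zpool create -o ashift=12 tank /dev/md0
        zfs set compression=on tank

        # Growing later: add a disk, reshape, then let the pool expand.
        mdadm --add /dev/md0 /dev/sdh
        mdadm --grow /dev/md0 --raid-devices=7
        zpool online -e tank /dev/md0   # expand into the new space

    The usual caveat applies: with redundancy handled below it, ZFS can detect corruption via checksums but cannot self-heal it, since it holds no second copy to repair from (except metadata, or datasets with copies=2).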

    Read the article

  • How do I add a VMware ESXi Host to Microsoft Virtual Machine Manager?

    - by user63250
    I am trying to manage virtual machines running on a VMware ESXi host using Microsoft System Center Virtual Machine Manager. I was able to add the ESXi machine using the "Add VMware VirtualCenter server" option, but I can't access any of the VMs on the datastore associated with this ESXi server. The datastore of the ESXi box shows up with the correct name, but it won't let me see any of the VMs that have already been created; I get "There are no virtual machines on this host."

    Because I couldn't get any of the existing virtual machines to show up, I tried creating some new ones. When using VMM to connect to ESXi and create new VMs, I get the following error messages in the "rating explanation" section:

        The virtualization software on the selected host does not support virtual hard disks on an IDE bus.

        The virtualization software on the host XXXXXX does not support the creation of dynamic virtual hard disk.

    Any ideas on why I can't manage existing machines and why I can't create new ones? The existing machines were created in vSphere. I should note that the ESXi server and the server running SCVMM are both on the same domain. I should also note that although the ESXi box has been added as a VirtualCenter server, when I try to add it through the "Add Host" option, I get an error message saying "Virtual Machine Manager cannot complete the VirtualCenter action on server ESXi because of the following error: The operation is not supported on the object."

    Read the article

  • Problem while Installing mysql cluster

    - by Champion
    I downloaded the latest 64-bit MySQL Cluster package (mysql-cluster-gpl-7.1.9-linux-x86_64-glibc23) from the MySQL site and ran the following command to install the MySQL server:

        /home/mysql_cluster/mysqlc/scripts/mysql_install_db --no-defaults --basedir=/home/mysql_cluster/mysqlc --datadir=/home/mysql_cluster/my_cluster/mysqld_data

    It gives the following errors:

        /home/mysql_cluster/mysqlc/scripts/mysql_install_db: line 245: /home/mysql_cluster/mysqlc/bin/my_print_defaults: No such file or directory
        Neither host 'db' nor 'localhost' could be looked up with
        /home/mysql_cluster/mysqlc/bin/resolveip
        Please configure the 'hostname' command to return a correct
        hostname. If you want to solve this at a later stage, restart this
        script with the --force option

    My host is pingable and the hostname command gives the correct hostname. I also tried the --force option, but it still fails, with these errors:

        /home/mysql_cluster/mysqlc/scripts/mysql_install_db: line 245: /home/mysql_cluster/mysqlc/bin/my_print_defaults: No such file or directory
        Installing MySQL system tables...
        /home/mysql_cluster/mysqlc/scripts/mysql_install_db: line 393: /home/mysql_cluster/mysqlc/bin/mysqld: No such file or directory
        Installation of system tables failed! Examine the logs in
        /home/mysql_cluster/my_cluster/mysqld_data for more information.

    It says the above files don't exist, but the files are present (checked with ls). We are using a Xen VM with 64-bit Debian Lenny. How can this issue be resolved?
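    A hedged diagnostic sketch for this symptom (the commands only inspect; nothing is changed): when the shell reports "No such file or directory" for a binary that ls can plainly see, the missing file is usually not the binary itself but its ELF interpreter or a shared library - for instance a glibc2.3-targeted build whose expected loader path doesn't exist on the system.

        # What kind of binary is it, and is it really 64-bit?
        file /home/mysql_cluster/mysqlc/bin/mysqld

        # Which loader does it expect, and do all libraries resolve?
        # Any "not found" line here is the culprit.
        ldd /home/mysql_cluster/mysqlc/bin/my_print_defaults
        ldd /home/mysql_cluster/mysqlc/bin/mysqld

        # Does the standard 64-bit loader exist on this Debian system?
        ls -l /lib64/ld-linux-x86-64.so.2

    If the loader or a library is missing, installing the matching libc package (or switching to the MySQL build that matches the system's glibc) is the likely fix.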

    Read the article

  • How to access programs in one PC using another PC

    - by darkstar13
    Hi, I was recently given an old PC for my remote access at work. The tower that came with it has Windows XP installed, 400+ MB of RAM, and all USB devices disabled. I access my work applications using VPN / Citrix. Basically, it's very slow, and it's bulky and will just occupy space, so I am now hoping to find a way to integrate this work PC with my home PC.

    I tried putting its hard drive in my home PC's tower, set as the slave drive. However, when I boot my PC from this hard drive, I get stuck at the screen where Windows prompts me to select how to boot (Safe Mode, Safe Mode with Command Prompt, Last Known Good Configuration, etc.); whatever option I select, I am back at this same screen after the reboot.

    I am thinking maybe I can clone the drive, mount the cloned drive, and access the system as a virtual machine, but I don't know if that will work. I would like to know if there's something I can do so I can work at home using my home PC, where I can access my work programs to connect to VPN / Citrix. My home PC's OS is Windows 7 Ultimate x64.
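    One concrete way to test the virtual machine idea (a sketch; Disk2vhd is a free Sysinternals imaging tool, and the drive letter and output path below are placeholders): image the work PC while it still boots on its own hardware, then attach the resulting VHD to a VM on the home PC (Windows Virtual PC or VirtualBox can both open VHDs). Imaging the running system sidesteps the boot loop, which is typically XP objecting to foreign hardware rather than a damaged disk.

        rem Run on the work PC (booted normally), with an external or network
        rem drive mounted as D:. Creates a VHD image of the system volume.
        disk2vhd.exe C: D:\workpc.vhd

    Licensing and your employer's security policy are worth checking first, since the XP install is presumably domain-joined and its activation is tied to the old hardware.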

    Read the article

  • haproxy and tomcat intermittent hangs

    - by user7347
    I am trying to run haproxy in front of Tomcat on a Solaris x86 box, but I am getting intermittent failures. At seemingly random intervals, a request just hangs until haproxy times out the connection. I thought maybe it was my app, but I've been able to reproduce it with the Tomcat manager app, and when hitting Tomcat directly there are no problems at all. Hitting it repeatedly with curl will cause the error within 10-15 tries:

        curl -ikL http://admin:admin@<my server>:81/manager/status

    haproxy is running on port 81, Tomcat on port 7000. haproxy returns a 504 gateway timeout to the client, and puts this into the log file:

        Sep 7 21:39:53 localhost haproxy[16887]: xxx.xxx.xxx.xxx:65168 [07/Sep/2009:21:39:23.005] http_proxy http_proxy/tomcat7000 5/0/0/-1/30014 504 194 - - sHNN 0/0/0/0/0 0/0 "GET /manager/status HTTP/1.1"

    Tomcat shows nothing: no error in the logs, and no indication that the request ever makes it to the Tomcat server. The request count is not incremented, and the manager app only shows activity on one thread, serving up the manager app itself.

    Here are my haproxy and Tomcat connector settings. I've been playing with both a good deal trying to chase down the issue, so they may not be ideal, but they definitely don't seem like they should cause this error.

    server.xml:

        <Connector port="7000" protocol="HTTP/1.1" enableLookups="false"
                   maxKeepAliveRequests="1" connectionLinger="10" />

    haproxy config:

        global
            log loghost local0
            chroot /var/haproxy

        listen http_proxy :81
            mode http
            log global
            option httplog
            option httpclose
            clitimeout 150000
            srvtimeout 30000
            contimeout 3000
            balance roundrobin
            cookie SERVERID insert
            server tomcat7000 127.0.0.1:7000 cookie server00 check inter 2000
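    A hedged place to start (these directives exist in 1.3/1.4-era haproxy; that they cure this specific hang is an assumption): with maxKeepAliveRequests="1", Tomcat closes the connection after every request, and a close racing against haproxy's forwarded request can show up as exactly this kind of silent hang. Letting haproxy retry failed connection attempts, and having it actively close both sides rather than passively watch for the close, often helps:

        listen http_proxy :81
            mode http
            log global
            option httplog
            option forceclose     # actively close both directions per response
            retries 3             # retry failed connection attempts
            option redispatch     # permit retries even with a cookie-pinned server
            clitimeout 150000
            srvtimeout 30000
            contimeout 3000
            balance roundrobin
            cookie SERVERID insert
            server tomcat7000 127.0.0.1:7000 cookie server00 check inter 2000

    The mirror-image experiment is also telling: remove maxKeepAliveRequests="1" and connectionLinger from the Connector so Tomcat keeps connections open, and see whether the hangs disappear.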

    Read the article

  • Unzip individual files from multiple zip files and extract those individual files to home directory

    - by James P.
    I would like to unzip individual files. These files have a .txt extension and live within multiple zipped files. Here is the command I'm trying to use:

        unzip -jn /path/to/zipped/files/zipArchiveFile2011\*.zip /path/to/specific/individual/files/myfiles2011*.txt -d /path/to/home/directory/for/extract/

    From my understanding, the -j option excludes directories and will extract only the .txt files, and the -n option will not overwrite a file if it has already been extracted. I've also learned that the backslash in /path/to/zipped/files/zipArchiveFile2011\*.zip is necessary to escape the wildcard (*) character so the shell doesn't expand it. Here is a sample of the error messages I'm coming across:

        Archive: /path/to/zipped/files/zipArchiveFile20110808.zip
        caution: filename not matched: /path/to/specific/individual/files/myfiles20110807.txt
        caution: filename not matched: /path/to/specific/individual/files/myfiles20110808.txt
        Archive: /path/to/zipped/files/zipArchiveFile20110809.zip
        caution: filename not matched: /path/to/specific/individual/files/myfiles20110810.txt
        caution: filename not matched: /path/to/specific/individual/files/myfiles20110809.txt

    I feel that I'm missing something very simple. I've tried using single quotes (') and double quotes (") around directory paths, but no luck.
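    A hedged rework of the command (the key assumption: unzip matches member patterns against paths as stored inside the archive, so an absolute path on the command line only matches if the zip actually stores one - a leading wildcard sidesteps that, and quoting keeps the shell from expanding either pattern first):

        unzip -jn '/path/to/zipped/files/zipArchiveFile2011*.zip' \
            '*myfiles2011*.txt' \
            -d /path/to/home/directory/for/extract/

    To see what paths an archive really stores (and therefore what the member pattern must match), list it first with: unzip -l /path/to/zipped/files/zipArchiveFile20110808.zip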

    Read the article

  • I cannot format my PC

    - by Jesus Buelna
    I have a Toshiba Satellite (1), model L505, with 6 GB of RAM and a 6.00 GB hard disk. Initially I had a problem with another Satellite (2) (a motherboard problem); taking that laptop to a technician would have cost a lot of money (almost as much as buying a new one). So, since I had hard disk problems with the first one (1), I decided to use the hard disk from the other one (2). I formatted that disk and erased the partitions it had, down to 1 partition (or no partition).

    The problem is that when I try to install from the OS CD, at the screen where I have to decide which partition to install the OS on, the only option I have says "unallocated partition", and I receive this message:

        Windows cannot install the OS in this partition; run files do not exist or may be corrupted

    When I erased the disk with Parted Magic, did I erase files needed by the install disc? I don't know. Is it possible to fix or reinstate the disk so the OS can be installed? By the way, I checked the disk's physical health with Parted Magic, and it is OK. One more thing: when I erased the disc to zero, I used the safety option offered by Parted Magic. Need help, please.
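    If the installer's partition screen shows only unallocated space and refuses to proceed, one common remedy is to re-initialize the disk from inside Windows setup itself (a sketch; it assumes a Vista/7-era setup disc, where Shift+F10 opens a command prompt, and it wipes the disk again):

        diskpart
        list disk
        rem Pick the internal disk from the list, then:
        select disk 0
        rem Wipe the partition table (destroys everything on the disk).
        clean
        create partition primary
        format fs=ntfs quick
        active
        exit

    After exiting, click Refresh on the "Where do you want to install Windows?" screen; the new partition should now be selectable. If the disc is an OEM recovery CD rather than a plain installer, it may instead require the factory disk layout, which Parted Magic would have removed.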

    Read the article

  • Ghost Solution Suite: Booting over PXE to WinPE for re-imaging, then booting to installed OS

    - by uberdanzik
    I have 40 networked computers that need to be re-imaged each night over the network via an automatic, unattended process, both to reset the computers to a default state and to update them to the latest software loads. I'm using Symantec Ghost Solution Suite 2.5. My process so far is the following:

        1. The client begins in a powered-down, WakeOnLan-accepting state.
        2. A Ghost Console task uses WakeOnLan and PXE to boot the client into the WinPE environment.
        3. The client connects to the Ghost Console and re-images itself successfully.
        4. The client closes WinPE and restarts.
        5. PROBLEM: the client boots into the WinPE environment again, instead of the newly installed OS (Win7).

    I need it to boot into Win7 once so that I can run a few configuration batch files, then shut down into the WakeOnLan state again. The Ghost Console even reports an error on the process: that the client never rebooted into the OS. Right now there seems to be no option to stop it from booting into the PXE server's WinPE image after re-imaging. Even if I set up a PXE boot menu with other boot options, it's pointless, because it will always boot the default option. I would expect the Ghost Console task to be able to influence the PXE boot choice somehow. What do they expect us to do, turn the PXE server on and off manually? How can I get the client to boot to the OS after re-imaging? Thank you.
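    If the PXE service can be fronted by (or swapped for) a pxelinux-style boot menu, the standard pattern is to make the default entry chain to the local disk, so a client that network-boots outside an imaging window falls through to the installed OS, and the imaging task selects the WinPE entry only when it actually runs. A minimal sketch, assuming pxelinux rather than Ghost's built-in PXE server:

        # pxelinux.cfg/default
        DEFAULT local
        PROMPT 0
        TIMEOUT 50

        LABEL local
            LOCALBOOT 0    # hand off to the local hard disk

    The WinPE image then gets its own label (the exact chaining method for a Windows network boot program varies), and per-client control becomes possible by writing or deleting a MAC-specific file under pxelinux.cfg/ (e.g. 01-aa-bb-cc-dd-ee-ff) from the task's pre/post scripts.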

    Read the article

  • iTunes Home Sharing only works one way between 2 WinXP PC's on the same LAN

    - by scunliffe
    Both PCs have the latest iTunes installed. PC (A) can "see" that there is a shared library "B library", but attempts to connect to it return this error message:

        The shared library "{Username}'s Library" is not responding (-3259)
        Check that any firewall software running on either the shared computer or this computer has been set to allow communication on port 3689.

    However, the reverse works fine: PC (B) can "see" shared library "A library" and can access all content. Notes:

        - Both PCs have Home Sharing enabled (turned off/on several times to verify).
        - Both PCs have Windows Firewall turned on, but in the Exceptions tab, iTunes is allowed, and port 3689 is also added as a firewall exception (just in case).
        - Both iTunes accounts have been "authorized" on both PCs.
        - Both PCs connect via LAN through a D-Link DIR-615 router. In the advanced application rules, iTunes has also been added to allow traffic on port 3689 unhindered.

    Is there any other magical setting/configuration option that I should be aware of and set in order to get this to work? I couldn't care less about sharing apps etc.; I just want the music sharing to work.

    Update: Solved! It turns out that on PC (B) there were multiple accounts set up, and one of those accounts had the "No exceptions" checkbox checked under the Windows Firewall "On" option. So even though iTunes was on the exception list in the main user account, this other account was blocking access.

    Read the article

  • Routing Apache TracEnv

    - by fampinheiro
    Hello, I have a situation with many Trac instances. They all have the same structure in the filesystem:

        PATH/trac1
        PATH/trac2
        PATH/trac3

    and I have this configuration:

        <Location /trac/trac1>
            SetHandler mod_python
            PythonInterpreter main_interpreter
            PythonHandler trac.web.modpython_frontend
            PythonOption TracEnv PATH/trac1
            PythonOption TracUriRoot /trac/trac1
            PythonOption PYTHON_EGG_CACHE PATH/eggs/
        </Location>

        <Location /trac/trac2>
            SetHandler mod_python
            PythonInterpreter main_interpreter
            PythonHandler trac.web.modpython_frontend
            PythonOption TracEnv PATH/trac2
            PythonOption TracUriRoot /trac/trac2
            PythonOption PYTHON_EGG_CACHE PATH/eggs/
        </Location>

        <Location /trac/trac3>
            SetHandler mod_python
            PythonInterpreter main_interpreter
            PythonHandler trac.web.modpython_frontend
            PythonOption TracEnv PATH/trac3
            PythonOption TracUriRoot /trac/trac3
            PythonOption PYTHON_EGG_CACHE PATH/eggs/
        </Location>

    I wonder if it's possible to do something like this (TracEnvParentDir is not an option):

        <Location /trac/{ENV}>
            SetHandler mod_python
            PythonInterpreter main_interpreter
            PythonHandler trac.web.modpython_frontend
            PythonOption TracEnv PATH/{ENV}
            PythonOption TracUriRoot /trac/{ENV}
            PythonOption PYTHON_EGG_CACHE PATH/eggs/
        </Location>

    Thank you for your time.

    EDIT: TracEnvParentDir is not an option because my structure is the following:

        +---projs
            +---trac1
            │   +---public [instance]
            │   +---t1
            │   │   +---common [instance]
            │   │   +---g1 [instance]
            │   │   +---g2 [instance]
            │   │   +---g3 [instance]
            │   │   +---g4 [instance]
            │   │   +---g5 [instance]
            │   +---t2
            │   │   +---common [instance]
            │   │   +---g1 [instance]
            │   │   +---g2 [instance]
            │   │   +---g3 [instance]
            │   │   +---g4 [instance]
            │   │   +---g5 [instance]
            │   +---t3
            │       +---common [instance]
            │       +---g1 [instance]
            │       +---g2 [instance]
            │       +---g3 [instance]
            │       +---g4 [instance]
            │       +---g5 [instance]
            │
            +---trac2
                +---public [instance]
                +---t1
                │   +---common [instance]
                │   +---g1 [instance]
                │   +---g2 [instance]
                │   +---g3 [instance]
                │   +---g4 [instance]
                │   +---g5 [instance]
                +---t2
                │   +---common [instance]
                │   +---g1 [instance]
                │   +---g2 [instance]
                │   +---g3 [instance]
                │   +---g4 [instance]
                │   +---g5 [instance]
                +---t3
                    +---common [instance]
                    +---g1 [instance]
                    +---g2 [instance]
                    +---g3 [instance]
                    +---g4 [instance]
                    +---g5 [instance]

    I use TracEnvParentDir on t1, t2 and t3, and TracEnv on trac1/public and trac2/public. I wonder if it's possible to define part of the URL as a variable.
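    One workaround sketch (an assumption, not a built-in Trac feature): since TracEnvParentDir wants a flat directory of environments, you can synthesize one out of symlinks while the real nested layout stays where it is, at the cost of flatter URLs (/trac/trac1-t1-g1 instead of /trac/trac1/t1/g1):

        # Build a flat directory of symlinks for TracEnvParentDir to scan.
        # PATH below is the same placeholder used above.
        mkdir -p /srv/trac-flat
        for proj in trac1 trac2; do
            ln -s "PATH/projs/$proj/public" "/srv/trac-flat/$proj-public"
            for team in t1 t2 t3; do
                for env in common g1 g2 g3 g4 g5; do
                    ln -s "PATH/projs/$proj/$team/$env" \
                          "/srv/trac-flat/$proj-$team-$env"
                done
            done
        done

    A single <Location /trac> block with PythonOption TracEnvParentDir /srv/trac-flat then serves every instance; whether Trac follows symlinked environments on your version is worth a quick test before committing to this.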

    Read the article

  • How to make DD-WRT router's (configured like a repeater) devices be accessible on LAN? (i.e. integrate DHCP for both routers)

    - by Annonomus Penguin
    I have a D-Link DIR-600-A1 router running DD-WRT (using the 601's firmware: except for the model number, they are near identical). It has an Atheros chip, so there is no "repeater" option. You can work around this by setting the main radio as a client of the main router and adding a virtual radio configured as an AP; you then set up the credentials for connecting to the main router and for allowing devices to connect to the repeater/router.

    I have a few devices on my network:

        - Ethernet computers
        - A server running Samba
        - WiFi devices connected to the main router

    I then wanted to add a repeater, which has a couple of other things on it:

        - A WiFi computer
        - Other WiFi devices

    Anyway, I wanted to connect my WiFi computer to the share on my server via Samba. However, for some reason, my repeater treats the main router as WAN, not as another device. I've tried disabling the SPI firewall, but that doesn't work. I've tried pinging my WiFi computer from my server without success, yet I can ping my server from my WiFi computer. AFAIK they are on the same network, just in different IP ranges: the main router uses 192.168.0.x and the repeater uses 192.168.1.x (starting at 100 for some reason). It seems I need to configure my routers to work together for DHCP. I noticed there is a "DHCP forwarder" option, but I have no idea what it does.

    A quick note: for some reason (that's beyond me) my ISP disabled the capability to bridge a WiFi-to-Ethernet connection with the router they provide (something about PPPoE or similar...). The service rep I talked to when I was having issues after I changed ISPs said that, but they couldn't explain exactly what they were "blocking".

    How can I get DD-WRT to stop treating the client connection as WAN, and get the router to recognize the devices connected to the repeater?
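    A hedged configuration sketch for putting everything on one subnet (menu names are from typical DD-WRT builds and can differ by version; on Atheros hardware this is the "client bridge plus virtual AP" pattern rather than a true repeater, and it is exactly what removes the routed/WAN behavior):

        Wireless -> Basic Settings:
            Wireless Mode: Client Bridge        (main radio joins the primary router)
            Virtual Interface: add one, AP mode, with the repeater's SSID
        Setup -> Basic Setup:
            WAN Connection Type: Disabled
            Local IP Address: 192.168.0.2       (an unused address on the primary subnet)
            Gateway / Local DNS: 192.168.0.1    (the primary router)
            DHCP Server: Disable                (the primary router hands out all leases)
        Security -> Firewall:
            SPI Firewall: Disable

    With both segments on 192.168.0.x and a single DHCP server, the Samba box and the WiFi computer see each other as ordinary LAN neighbors, and the "DHCP forwarder" option becomes unnecessary.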

    Read the article

  • Blocking requests from specific IPs using IIS Rewrite module

    - by Thomas Levesque
    I'm trying to block a range of IPs that is sending tons of spam to my blog. I can't use the solution described here because it's shared hosting and I can't change anything in the server configuration; I only have access to a few options in remote IIS. I saw that the URL Rewrite module has an option to block requests, so I tried to use it. My rule is as follows in web.config:

        <rule name="BlockSpam" enabled="true" stopProcessing="true">
          <match url=".*" />
          <conditions logicalGrouping="MatchAll" trackAllCaptures="false">
            <add input="{REMOTE_ADDR}" pattern="10\.0\.146\.23[0-9]" ignoreCase="false" />
          </conditions>
          <action type="CustomResponse" statusCode="403" />
        </rule>

    Unfortunately, if I put it at the end of the rewrite rules, it doesn't seem to block anything, and if I put it at the start of the list, it blocks everything! It looks like the condition isn't taken into account. In the UI, the stopProcessing option is not visible and is true by default; changing it to false in web.config doesn't seem to have any effect. I'm not sure what to do now... any ideas?
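    One thing worth ruling out (a sketch, under the assumption that the shared host fronts sites with a reverse proxy or load balancer): in that topology, REMOTE_ADDR holds the proxy's address for every request - which would explain "blocks everything or nothing" - while the client's real IP rides in the X-Forwarded-For header. Matching either value looks like this:

        <rule name="BlockSpam" enabled="true" stopProcessing="true">
          <match url=".*" />
          <conditions logicalGrouping="MatchAny" trackAllCaptures="false">
            <add input="{HTTP_X_FORWARDED_FOR}" pattern="10\.0\.146\.23[0-9]" />
            <add input="{REMOTE_ADDR}" pattern="10\.0\.146\.23[0-9]" />
          </conditions>
          <action type="CustomResponse" statusCode="403" />
        </rule>

    A quick way to confirm the theory is a test page that echoes REMOTE_ADDR and HTTP_X_FORWARDED_FOR for a request from a known IP.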

    Read the article

  • Prestashop is not saving Memcached settings

    - by ianenri
    I have an issue with the admin site of PrestaShop: I'm trying to activate the caching system, but it doesn't save the setting. When I add a server (an Amazon ElastiCache endpoint) it saves it, but when I select the enable option and click Save, it redirects me to "Back Office > Preferences > Performance" as a blank page with only the admin tabs visible. Going back to the settings, I see that caching is still disabled. There is also a warning:

        To use Memcached, you must install the Memcache PECL extension on your server.
        http://www.php.net/manual/en/memcache.installation.php

    This appears even though I already installed memcached via yum. I also tried editing settings.inc.php, changing

        define('_PS_CACHE_ENABLED_', '0');

    to

        define('_PS_CACHE_ENABLED_', '1');

    but then I get a 500 Internal Server Error on every page, so I reverted it. Any ideas? I'm using PrestaShop 1.4.6.2 with Nginx 1.0.11 and PHP-FPM 5.3.8 on a CentOS 5.7 system.
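    The warning itself points at the likely cause (a sketch; package and path names vary by distro): yum install memcached installs the memcached daemon, not the PHP client extension that PrestaShop checks for. On a CentOS 5 box running PHP-FPM 5.3, adding the PECL extension looks roughly like this:

        # Install the PECL memcache extension (the PHP client, not the daemon).
        pecl install memcache

        # Enable it for PHP (the conf.d path is distro-dependent).
        echo "extension=memcache.so" > /etc/php.d/memcache.ini

        # Restart PHP-FPM so the extension loads, then verify.
        /etc/init.d/php-fpm restart
        php -m | grep -i memcache

    Once php -m lists memcache, the admin page's warning should disappear and the enable option should persist. The 500 errors from forcing _PS_CACHE_ENABLED_ by hand are consistent with the cache class being enabled while the extension was still missing.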

    Read the article

  • Migrate installation from one HDD to another?

    - by dougoftheabaci
    I have a server with a 64 GB SSD inside. It runs just fine, but occasionally there's a hiccup that causes it to nearly fill up, and when that happens my server starts to lock up and generally misbehave. I'm looking to buy a bigger SSD (either 128 GB or 256 GB), but I'm a bit unsure of how best to make the transition.

    For a start, I don't have an external monitor; if I need one, I'll have to borrow it from work. Most of the time I just SSH into the server from my iMac. The only solution I can think of would be to buy two FireWire 800 2.5" cases, boot from the 64 GB SSD, and clone it to the 128 GB SSD. That seems a bit excessive, but it might be my best option. I do have more than one SATA port on my server, but they're all currently in use by storage drives. Those don't mount by default, so I could unplug them, have just the two SSDs attached, and do the whole thing via SSH. This is another option I'm considering.

    My main concern with either approach is how best to make sure everything goes across; I want a carbon copy of the first drive on the second. This is especially important because I have a ZFS volume (my storage), and I'm a bit unfamiliar with how to move everything across. I could just start fresh and reinstall everything on the new SSD, but that seems like extra trouble I don't need. Any advice on how best to achieve my goals would be appreciated. Thanks! The server is running Ubuntu Server 12.04; the iMac has 10.8.1.
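    A hedged sketch of the SSH-only route (assumptions: a storage drive is temporarily unplugged to free a SATA port, the old SSD is /dev/sda and the new one /dev/sdb - verify with lsblk first, because dd to the wrong target is destructive). A raw clone copies the partition table and bootloader too, and it doesn't touch the ZFS pool at all, since that lives on the other disks:

        # Identify disks before copying; names below are placeholders.
        lsblk

        # Byte-for-byte clone of the old SSD onto the new one.
        sudo dd if=/dev/sda of=/dev/sdb bs=1M conv=noerror,sync

        # After swapping in the new disk and booting, grow the root
        # partition and filesystem into the extra space (e.g. parted,
        # then resize2fs for ext4).

    Exporting the pool before the swap (sudo zpool export <pool>) and importing it afterwards (sudo zpool import <pool>) is a cautious extra step, but the pool's membership is recorded on its own disks, not on the SSD.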

    Read the article

  • Boot iMac into Centos from external hard drive

    - by user1704978
    I have CentOS 6.3 installed on an external Western Digital drive with FireWire and USB interfaces, and I want to be able to boot an iMac (2008, 3.06 GHz Core 2 Duo) from this disk. The iMac has Mac OS X 10.5.8 and also a Windows XP installation.

    I have tried holding 'T' on boot for Target Disk Mode, but the external disk is ignored (presumably because it's not a Mac OS X image). I created an rEFIt boot DVD which, when booted in CD mode (holding 'C' on startup), displays three options: Mac OS (on the internal drive), Linux, and Windows. Selecting the Linux option unfortunately boots the Mac into XP. The three options are only displayed when the external disk is plugged into the FireWire port; if the external disk is plugged into a USB port, the Linux option is not displayed and I can only boot into Mac OS X or Windows. This external disk will happily boot a Lenovo T410 laptop into CentOS.

    My questions are:

        1. Is it actually possible to boot an iMac into CentOS from an external hard drive? If so, how do I achieve this?
        2. Why is rEFIt apparently booting from the wrong partition?

    Read the article

  • Need clear steps on how to convert a Windows 2000 Server to a XenServer VM

    - by Jay
    The source system is not local. The target host running XenServer is not local. The source system is running Windows 2000 Server SP4 and has 1 disk split into 6 partitions, all NTFS:

        C: 6 GB (boot)
        D: 15 GB
        E: 6 GB
        F: 6 GB
        G: 5 GB
        H: 26 GB

    Most of the partitions are mostly full (over 60% used). What is the most straightforward way to do a P2V migration of the server? I can do minor database and data syncs after the P2V is successful and running as a VM within XenServer; it's just getting to that point which is not clear.

    Installing Windows 2000 Server from scratch is not an option: I need to convert the existing physical server as-is into a VM hosted within a XenServer environment. I've looked at XenConvert, but it maxes out at converting 4 partitions in one shot, and I'm not certain how to account for the 2 extra partitions. I'm not familiar with XenServer, but it's my only option right now to go P2V.
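    When no converter handles the full layout, one generic fallback is a raw whole-disk copy over the network into an equally sized (or larger) virtual disk - a sketch with real caveats: both ends must be booted from a live CD (so the source needs a maintenance window), the listener has to start before the sender, the address and port below are placeholders, and Windows 2000 will still need the usual post-P2V driver adjustments before it boots in the VM:

        # On the target VM, booted from a live CD, empty virtual disk at /dev/xvda:
        nc -l -p 9000 | dd of=/dev/xvda bs=1M

        # On the source server, booted from a live CD, physical disk at /dev/sda:
        dd if=/dev/sda bs=1M | nc <target-vm-ip> 9000

    Copying the disk as one unit sidesteps the 4-partition limit entirely, since all six partitions ride along inside the single disk image.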

    Read the article

  • PXE bootable image for terminal server?

    - by HeavenCore
    We have 300 Windows XP machines on cruddy old hardware across the company. With extended support for XP ending April next year, we're looking into our options:

        1. Replace the 300 PCs with full Windows 7 PCs (£100k+?) - no use of terminal server (our current model).
        2. Replace the 300 PCs with off-the-shelf thin clients and make use of our terminal server - cheaper clients, but Terminal Server CALs required?
        3. Keep the 300 PCs and replace Windows XP with a Linux thin client capable of connecting to our terminal server - no hardware costs, just Terminal Server CALs required?
        4. Keep the 300 PCs, remove the hard drives, and make use of a PXE-bootable "thin client" to connect to our terminal server.

    If we were to choose option 4, what are the options out there? Are there any official PXE-bootable thin clients for terminal server? If so, what are the licence requirements? Are there options we haven't considered? There must be lots of companies out there in this situation - I'm curious what the current trend is for this problem.

    Edit: Option 5 - create a bootable Windows PE image with RDP auto-start and use that as a "thin client" for our terminal server - is Windows PE licence-free in such a model?
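    For option 5, the usual shape of a WinPE "thin client" is a boot image whose startup script launches the RDP client in a loop, so closing or losing the session simply reconnects. A minimal sketch - with a loud caveat: stock WinPE does not ship mstsc.exe, so the client and its dependencies have to be added to the image (and the licensing of doing that is exactly the question to put to Microsoft):

        rem startnet.cmd inside the WinPE image
        rem Assumes mstsc.exe and its DLLs were copied into the image;
        rem the server name is a placeholder.
        wpeinit
        :loop
        mstsc /v:termserver.example.local /f
        goto loop

    Note also that WinPE is designed to reboot itself after 72 hours of uptime, which matters for an all-day client; and RDS CALs apply to options 2-5 alike, since they're counted per device or user hitting the terminal server, not per client OS.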

    Read the article

  • Getting Server 2008 R2 to ignore all traffic from Internet-facing NIC, leaving it to a VM

    - by Wolvenmoon
    I got Server 2008 R2 via DreamSpark and would like to start learning on it. I don't have much option but to put it on a system sitting between the Internet and my home LAN, due to electricity bills and the fact that 3 computers in an 11x11 space in 102-degree weather is pretty stygian. Currently I use a ClearOS gateway to manage everything.

    What I'd like to do is take my Server 2008 R2 box, which has two NICs, and drop it at the head of my network. I want Server 2008 R2 to ignore all traffic on the external-facing NIC and pass it to a virtualized ClearOS gateway, and to put all of its own Internet traffic through the other NIC - which will face the rest of my network and be the default gateway for it. The theory is to keep the potentially vulnerable Server 2008 R2 install tucked as far behind a Linux box as possible, without sacrificing too much performance.

    This is a home network that occasionally hosts dedicated game servers and voice chat servers, so most malicious activity is in the form of drive-by, non-targeted attacks. However, I don't trust Windows Server because I don't know the OS well enough yet. So, three questions:

        1. How do I do this?
        2. Am I going to be reasonably more secure doing this than if I just let the Server 2008 R2 box handle all the network traffic and DHCP (not an option)?
        3. Should I virtualize the Server 2008 R2 box instead, and if so, in what? (Core 2 Duo E6600 with 5 GB of usable RAM)

    Read the article

  • Video card not detected on Lenovo T410 in Linux

    - by wich
    I have a T410 with an nVidia NVS 3100M. This is not a hybrid system; there is no Optimus (no option in the BIOS for Optimus, and lspci in Linux as well as the Windows device manager only show the nVidia). Using lspci I see the GPU as a present device; however, I cannot, for the life of me, get any video driver to work that will let me start an X session. Every time, X craps out with the error:

        (EE) No devices detected.

    I have tried the nVidia binary blob (with nvidia-xconfig, and made sure there is no nvidia support in the kernel), I have tried nouveau, I have tried nv, and I have even tried generic vesa; nothing will work. When I compare the dmesg output from loading the nvidia kernel module against another system that also has an nVidia card, I see that some lines are missing; specifically, the line naming the GPU (3100M) is not there.

    I have checked every option in the BIOS; there is nothing to control except for the BIOS video output port, which is set to the LCD panel. I have no idea anymore what the problem may be, or even how I can diagnose this problem further. Any help will be appreciated.
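    When every driver reports "No devices detected" while lspci clearly lists the card, one hedged next step is to take autodetection out of the equation and pin the device in xorg.conf by its PCI bus ID (a diagnostic sketch; the bus ID below is an example - use whatever lspci prints, converted to decimal):

        # Find the GPU's bus ID, e.g. "01:00.0".
        lspci | grep -i nvidia

    Then, in /etc/X11/xorg.conf:

        Section "Device"
            Identifier "nvidia"
            Driver     "nvidia"
            BusID      "PCI:1:0:0"    # decimal bus:device:function from lspci
        EndSection

    If X still sees nothing with an explicit BusID, that points away from configuration and toward the module failing to bind to the device (which the missing GPU-name line in dmesg is consistent with); in that case, Xorg.0.log and the output of dmesg right after modprobe nvidia are the places to dig.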

    Read the article

  • How can I explain to dspam that the user "brandon" is the same as "brandon@mydomain"

    - by Brandon Craig Rhodes
    I am using dspam for spam filtering, by running the "dspamd" daemon under Ubuntu 9.10 and then setting up a Postfix rule that says:

        smtpd_recipient_restrictions = ...
            check_client_access pcre:/etc/postfix/dspam_everything
            ...

    where that PCRE map looks like this:

        /./   FILTER lmtp:[127.0.0.1]:11124

    This works well, and means that all users on my system get all of their email, whether dspam thinks it is innocent or not, and have the option of filtering on its decisions or ignoring them. The problem comes when I want to train dspam using my email archives. After reading about the dspam command, I tried this on the files in my Inbox and spam boxes (which date from when I was using another filtering solution):

        for file in Mail/Inbox/*; do cat $file | dspam --class=innocent --source=corpus; done
        for file in Mail/spam/*; do cat $file | dspam --class=spam --source=corpus; done

    The symptom I noticed after doing all of this was that dspam was horrible at classifying spam - it couldn't find any! The problem, when I tracked it down, was that I was training the user "brandon" with the above commands, but the incoming email was instead compared against the username "brandon@mydomain", so it was running against a completely empty training database!

    So, what can I do to make the above commands actually train my fully-qualified email address rather than my bare username? I would like to avoid having to run dspam as root with a --user option. I would have expected the dspam configuration files to have an "append_domain" attribute or something with which to decorate local usernames with an appropriate email domain, but I can't find any such thing.

    When I used to use the Berkeley DB backend to dspam, I solved this problem by creating a symlink from one of the databases to the other. :-) But that solution eventually died because the BDB backend is not thread-safe, so now I have moved to the PostgreSQL backend and need a way to solve the problem there. And, no, the table where it keeps usernames has a UNIQUE constraint that prevents me from listing both usernames as mapping to the same ID. :-)
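    A hedged middle ground (directive name per standard dspam.conf - verify on your install): dspam honors --user for callers listed under Trust in dspam.conf, so adding your shell account there lets you train the fully-qualified user without becoming root:

        # In dspam.conf:
        #   Trust brandon

        for file in Mail/Inbox/*; do
            dspam --user brandon@mydomain --class=innocent --source=corpus < "$file"
        done
        for file in Mail/spam/*; do
            dspam --user brandon@mydomain --class=spam --source=corpus < "$file"
        done

    This trains the same database that the LMTP delivery path consults, since both now reference the fully-qualified name.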

    Read the article

  • Dual Monitor support rdp 7 to win 7 on esxi

    - by rphilli5
    I am trying to RDP from a Windows 7 Professional dual-monitor physical machine to a Windows 7 Professional VM hosted on ESXi 4.0. I can get the spanning option to work across both monitors, but I have tried 3 different methods of connecting and have not been able to use true multiple monitors. At different times I tried checking the "use all monitors" option, running mstsc /multimon on the command line, and adding the line "use multimon:i:1" to the .rdp file. None of these worked. Any ideas?

    The physical machine can connect to other Windows 7 physical machines with true multi-monitor access. I also have the same issue when going from a 32-bit RC1 machine to Windows 7 Professional x64, but not when going in the reverse direction. Here's the .rdp:

        screen mode id:i:2
        use multimon:i:1
        desktopwidth:i:1440
        desktopheight:i:900
        session bpp:i:16
        winposstr:s:0,1,341,118,1139,568
        compression:i:1
        keyboardhook:i:2
        audiocapturemode:i:0
        videoplaybackmode:i:1
        connection type:i:1
        displayconnectionbar:i:1
        disable wallpaper:i:1
        allow font smoothing:i:0
        allow desktop composition:i:0
        disable full window drag:i:1
        disable menu anims:i:1
        disable themes:i:1
        disable cursor setting:i:0
        bitmapcachepersistenable:i:1
        full address:s:192.168.1.5
        audiomode:i:0
        redirectprinters:i:1
        redirectcomports:i:0
        redirectsmartcards:i:1
        redirectclipboard:i:1
        redirectposdevices:i:0
        redirectdirectx:i:1
        autoreconnection enabled:i:1
        authentication level:i:2
        prompt for credentials:i:0
        negotiate security layer:i:1
        remoteapplicationmode:i:0
        alternate shell:s:
        shell working directory:s:
        gatewayhostname:s:
        gatewayusagemethod:i:4
        gatewaycredentialssource:i:4
        gatewayprofileusagemethod:i:0
        promptcredentialonce:i:1
        use redirection server name:i:0
        drivestoredirect:s:

    Read the article

  • Making Apache 2.2 on SuSE Linux Case-Insensitive. Which is a better approach?

    - by pingu
    Problem: all of the following

        http://<server>/home/APPLE.html
        http://<server>/hoME/APPLE.html
        http://<server>/HOME/aPPLE.html
        http://<server>/hoME/aPPLE.html

    should pick up this file:

        http://<server>/home/apple.html

    I implemented 2 solutions and both are working fine; I'm not sure which one is better (performance-wise). Please suggest. Also, the CheckCaseOnly on directive never worked for me.

    Option 1:

    a) Enable mod_speling. In /etc/sysconfig/apache2:

        APACHE_MODULES="rewrite speling apparmor......"

    b) Add the directive CheckSpelling on (either in .htaccess or in httpd.conf). In httpd.conf:

        <Directory srv/www/htdcos/home>
            Order allow,deny
            CheckSpelling on
            Allow from all
        </Directory>

    or in .htaccess inside /srv/www/htdcos/home (the content folder):

        CheckSpelling on

    Option 2:

    a) Enable mod_rewrite.

    b) Write the rule in the vhost (you cannot use RewriteMap in a directory context; check the Apache docs):

        <VirtualHost _default_:80>
            <IfModule mod_rewrite.c>
                Options +FollowSymLinks
                RewriteEngine on
                RewriteMap lc int:tolower
                RewriteCond %{REQUEST_URI} [A-Z]
                RewriteRule (.*) ${lc:$1} [R=301,L]
            </IfModule>
        </VirtualHost>

    This changes the entire request URI into lowercase. I want this to happen for a specific folder only, but RewriteMap doesn't work in .htaccess. I am a novice with regex and rewriting, and I need a RewriteCond that checks only /css/. Can anybody help?
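    On the last point, a hedged sketch of limiting the lowercasing to requests under /css/ (a plain prefix match, made case-insensitive so /CSS/ is caught too; the rule itself must stay in the vhost, since RewriteMap is only valid in server or virtual-host context):

        <VirtualHost _default_:80>
            <IfModule mod_rewrite.c>
                Options +FollowSymLinks
                RewriteEngine on
                RewriteMap lc int:tolower
                # Only touch URIs under /css/ that contain an uppercase letter.
                RewriteCond %{REQUEST_URI} ^/css/ [NC]
                RewriteCond %{REQUEST_URI} [A-Z]
                RewriteRule (.*) ${lc:$1} [R=301,L]
            </IfModule>
        </VirtualHost>

    On the performance question: mod_speling works by scanning the directory for near-miss names on each miss, while the rewrite answers with an extra 301 round trip; which costs more depends on directory sizes and hit patterns, so benchmarking both on real traffic is the honest answer.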

    Read the article

  • How does one change the UUID of a Volume on Mac OS X 10.6?

    - by Emmel
    Does anyone know how to change the UUID of a volume? The background for this question is that I have a duplicate UUID issue:

        - /Volumes/OldMacHD has a UUID of XYZ.
        - /Volumes/Mirror1 has a UUID of XYZ (the same UUID! I bet that's because OldMacHD USED to be part of this mirror).

    I got these UUIDs via 'diskutil info /dev/thatdisknumber | grep UUID'. I'd like to change the UUID of 'Mirror1'. I discovered by chance the 'hfs.util' utility, since these are HFS volumes after all. The man page for hfs.util says that issuing the -s flag changes the UUID; however, if you type hfs.util all by itself, it doesn't show the -s option at all, just every option besides it! Grr. I tried it anyway:

        sudo /System/Library/Filesystems/hfs.fs/hfs.util -s /dev/disk4

    (the RAID volume). Nothing happens: no error message, no success message, and the UUID stays exactly the same. I also tried it while the volume was unmounted. Any ideas?

    Read the article

  • Users using Perl script to bypass Squid Proxy

    - by mk22
    The users on our network have been using a Perl script to bypass our Squid proxy restrictions. Is there any way we can block this script from working?

        #!/usr/bin/perl
        ########################################################################
        # (c) 2008 Indika Bandara Udagedara
        # [email protected]
        # http://indikabandara19.blogspot.com
        #
        # ----------
        # LICENCE
        # ----------
        # This work is protected under GNU GPL
        # It simply says
        # " you are hereby granted to do whatever you want with this
        #   except claiming you wrote this."
        #
        #
        # ----------
        # README
        # ----------
        # A simple tool to download via http proxies which enforce a download
        # size limit. Requires curl.
        # This is NOT a hack. This uses the absolutely legal HTTP/1.1 spec
        # Tested only for squid-2.6. Only squids will work with this(i think)
        # Please read the verbose README provided kindly by Rahadian Pratama
        # if u r on cygwin and think this documentation is not enough :)
        #
        # The newest version of pget is available at
        #   http://indikabandara.no-ip.com/~indika/pget
        #
        # ----------
        # USAGE
        # ----------
        # + Edit below configurations(mainly proxy)
        # + First run with -i <file> giving a sample file of same type that
        #   you are going to download. Doing this once is enough.
        #   eg. to download '.tar' files first run with
        #       pget -i my.tar ('my.tar' should be a real file)
        # + Run with
        #   pget -g <URL>
        #
        ########################################################################

        ########################################################################
        # CONFIGURATIONS - CHANGE THESE FREELY
        ########################################################################
        # *magic* file
        # pls set absolute path if in cygwin
        my $_extFile = "./pget.ext";

        # download in chunks of below size
        my $_chunkSize = 1024*1024;    # in Bytes

        # the proxy that troubles you
        my $_proxy = "192.168.0.2:3128";    # proxy URL:port
        my $_proxy_auth = "user:pass";      # proxy user:pass

        # whereis curl
        # pls set absolute path if in cygwin
        my $_curl = "/usr/bin/curl";

        ########################################################################
        # EDIT BELOW ONLY IF YOU KNOW WHAT YOU ARE DOING
        ########################################################################
        use warnings;

        my $_version = "0.1.0";

        PrintBanner();

        if (@ARGV == 0) {
            PrintHelp();
            exit;
        }

        PrimaryValidations();

        my $val;
        while (scalar(@ARGV)) {
            my $arg = shift(@ARGV);
            if ($arg eq '-h') {
                PrintHelp();
            }
            elsif ($arg eq '-i') {
                $val = shift(@ARGV);
                if (!defined($val)) {
                    printf("-i option requires a filename\n");
                    exit;
                }
                Init($val);
            }
            elsif ($arg eq '-g') {
                $val = shift(@ARGV);
                if (!defined($val)) {
                    printf("-g option requires a URL\n");
                    exit;
                }
                GetURL($val);
            }
            elsif ($arg eq '-c') {
                $val = shift(@ARGV);
                if (!defined($val)) {
                    printf("-c option requires a URL\n");
                    exit;
                }
                ContinueURL($val);
            }
            else {
                printf("Unknown option %s\n", $arg);
                PrintHelp();
            }
        }

        sub GetURL {
            my ($URL) = @_;
            chomp($URL);
            my $fileName = GetFileName($URL);
            my %mapExt;
            my $first;
            my $readLen;
            my $ext = GetExt($fileName);
            ReadMap($_extFile, \%mapExt);
            if (exists($mapExt{$ext})) {
                $first = $mapExt{$ext};
                GetFile($URL, $first, $fileName, 0);
            } else {
                die "Unknown ext in $fileName. Rerun with -i <fileName>";
            }
        }

        sub ContinueURL {
            my ($URL) = @_;
            chomp($URL);
            my $fileName = GetFileName($URL);
            my $fileSize = 0;
            $fileSize = -s $fileName;
            printf("Size = %d\n", $fileSize);
            my $first = -1;
            if ($fileSize > 0) {
                $fileSize -= 1;
                GetFile($URL, $first, $fileName, $fileSize);
            } else {
                GetURL($URL);
            }
        }

        sub Init {
            my ($fileName) = @_;
            my ($key, $value);
            my %mapExt;
            my $ext = GetExt($fileName);
            if ($ext eq "") {
                die "Cannot get ext of \'$fileName\'";
            }
            ReadMap($_extFile, \%mapExt);
            my $b = GetFirst($fileName);
            $mapExt{$ext} = $b;
            WriteMap($_extFile, \%mapExt);
            print "I handle\n";
            while (($key, $value) = each(%mapExt)) {
                print "\t$key -> $value\n";
            }
        }

        sub GetExt {
            my ($name) = @_;
            my @x = split(/\./, $name);
            my $ext = "";
            if (@x != 1) {
                $ext = pop @x;
            }
            return $ext;
        }

        sub ReadMap {
            my ($fileName, $mapRef) = @_;
            my $f;
            my @arr;
            open($f, '<', $fileName) or die "Couldn't open $fileName";
            my %map = %{$mapRef};
            while (<$f>) {
                my $line = $_;
                chomp($line);
                @arr = split(/[ \t]+/, $line, 2);
                $mapRef->{$arr[0]} = $arr[1];
            }
            printf("known ext\n");
            while (($key, $value) = each(%$mapRef)) {
                print("$key, $value\n");
            }
            close($f);
        }

        sub WriteMap {
            my ($fileName, $mapRef) = @_;
            my $f;
            my @arr;
            open($f, '>', $fileName) or die "Couldn't open $fileName";
            my ($k, $v);
            while (($k, $v) = each(%{$mapRef})) {
                print $f "$k" . "\t$v\n";
            }
            close($f);
        }

        sub PrintHelp {
            print "usage:
            -h              Print this help
            -i <filename>   Initialize for this filetype
            -g <URL>        Get this URL
            -c <URL>        Continue this URL\n"
        }

        sub GetFirst {
            my ($fileName) = @_;
            my $f;
            open($f, "<$fileName") or die "Couldn't open $fileName";
            my $buffer = "";
            my $first = -1;
            binmode($f);
            sysread($f, $buffer, 1, 0);
            close($f);
            $first = ord($buffer);
            return $first;
        }

        sub GetFirstFromMap {
        }

        sub GetFileName {
            my ($URL) = @_;
            my @x = split(/\//, $URL);
            my $fileName = pop @x;
            return $fileName;
        }

        sub GetChunk {
            my ($URL, $file, $offset, $readLen) = @_;
            my $end = $offset + $_chunkSize - 1;
            my $curlCmd = "$_curl -x $_proxy -u $_proxy_auth -r $offset-$end -# \"$URL\"";
            print "$curlCmd\n";
            my $buff = `$curlCmd`;
            ${$readLen} = syswrite($file, $buff, length($buff));
        }

        sub GetFile {
            my ($URL, $first, $outFile, $fileSize) = @_;
            my $readLen = 0;
            my $start = $fileSize + 1;
            my $file;
            open($file, "+>>$outFile") or die "Couldn't open $outFile to write";
            if ($fileSize <= 0) {
                my $uc = pack("C", $first);
                syswrite($file, $uc, 1);
            }
            do {
                GetChunk($URL, $file, $start, \$readLen);
                $start = $start + $_chunkSize;
                $fileSize += $readLen;
            } while ($readLen == $_chunkSize);
            printf("Downloaded %s(%d bytes).\n", $outFile, $fileSize);
            close($file);
        }

        sub PrintBanner {
            printf("pget version %s\n", $_version);
            printf("There is absolutely NO WARRANTY for pget.\n");
            printf("Use at your own risk. You have been warned.\n\n");
        }

        sub PrimaryValidations {
            unless (-e "$_curl") {
                printf("ERROR:curl is not at %s. Pls install or provide correct path.\n", $_curl);
                exit;
            }
            unless (-e "$_extFile") {
                printf("extFile is not at %s. Creating one\n", $_extFile);
                `touch $_extFile`;
            }
            if ($_chunkSize <= 0) {
                printf("Invalid chunk size. Using 1Mb as default.\n");
                $_chunkSize = 1024*1024;
            }
        }
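    As for blocking it: the script does nothing exotic - it asks curl to fetch the file in 1 MB slices using HTTP/1.1 Range requests, so each request stays under the size cap. A hedged squid-side countermeasure (the req_header ACL and header-stripping directives are standard squid features, but the directive spelling differs between squid 2.x's header_access and squid 3.x's request_header_access, and this assumes your cap is enforced with reply_body_max_size):

        # squid.conf sketch - refuse ranged requests outright...
        acl with_range req_header Range -i .
        http_access deny with_range

        # ...or strip the Range header instead, so a "partial" fetch
        # becomes a whole-object fetch and hits reply_body_max_size.
        request_header_access Range deny all

    Either approach also breaks legitimate download resumption and some media players and updaters that rely on ranges, so consider scoping the ACL (by destination, user, or file type) rather than applying it globally.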

    Read the article
