Search Results

Search found 14745 results on 590 pages for 'setting'.


  • Adding Static IPs to the NIC

    - by Brett Powell
    We are currently working on migrating a lot of new machines to our network, and my job this morning was to set up all of the IP addresses. I worked on this all morning, and when I got back tonight I was informed that they had all been set up incorrectly and had to be removed and re-added. I am quite confused, as I have been setting up IPs on machines for a long time, and I am curious as to what the issue is. Just taking into account this example... 72.26.196.160/29 255.255.255.248 A /29 block is 5 usable IPs. With the script I wrote and used, the IP addresses .162-.166 were added to the NIC. I can't remember now what the name for .161 was, but isn't it the broadcast address or something which isn't assigned to the NIC when adding additional IP blocks? I am curious as to where my logic is failing me. Not to mention, even if .161 were to be added, there is no reason why all of the IPs would have to be removed, as .161 could just be added in addition to these.
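
    For reference, the split of that block can be checked mechanically; the snippet below is only an illustration (plain Python 3, using the example block from the question):

      import ipaddress
      net = ipaddress.ip_network('72.26.196.160/29')
      print(net.network_address)             # 72.26.196.160 - network address, never assigned
      print(net.broadcast_address)           # 72.26.196.167 - broadcast address, never assigned
      print([str(h) for h in net.hosts()])   # .161 through .166 - the six assignable addresses

    In a routed block like this the provider commonly reserves the first assignable address (.161) as the gateway, leaving .162-.166 for the NIC, which would match the script's output; only the provider can confirm which convention they use.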

    Read the article

  • Using Virtualbox Bridge Networking fails connection from Guest OS to Oracle XE running on Host

    - by Licheng
    I am trying to make a JDBC connection from a VirtualBox Ubuntu Guest OS to an Oracle XE database running on the Host. However, the connection is refused. Here are the details of my environment: VirtualBox: 4.1.4 Host OS: Windows 7 Guest OS: Ubuntu Server 11.04 Networking mode: Bridged network Oracle XE database running on Host Issue: WebLogic Server runs on the Ubuntu VirtualBox guest. It attempts to connect to an Oracle XE database running on the Host OS (Windows 7) with listening port 1521. On the Guest OS (Ubuntu), I am able to ping the Host computer from the Guest OS. However, when I configured a JDBC data source on the WebLogic server on the Guest OS to connect to the Oracle XE, the connection took a long time, and eventually I received an "IO Exception: The Network Adapter could not establish the connection". When I tried "telnet host-ip 1521", no connection was established. With bridged networking, I can make bi-directional connections between the host and the guest OS (e.g. connections through ssh and ftp). Is there anything I missed in the setup of bridged networking and the guest/host OS? Note that I was able to make the same connection within a normal networking environment (i.e. not using VirtualBox). I am not sure whether bridged networking is a good option for the work described above. Should I use host-only networking mode? If so, any specific configurations I need to perform? I read through the VirtualBox documentation on setting up the host-only network; however, it lacks detail. I followed the procedures described in the manual, and couldn't even connect to the host. Could some experts here enlighten me on this issue? Much appreciated. Licheng
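
    Bridged mode itself should be fine here; when ping works but TCP 1521 is refused, the usual suspects are the Windows firewall and the XE listener not listening on the host's LAN address. A rough checklist, with paths and names as examples only:

      :: On the Windows 7 host, as Administrator: allow the listener port through the firewall
      netsh advfirewall firewall add rule name="Oracle XE listener" dir=in action=allow protocol=TCP localport=1521

      :: Check what the listener is actually bound to
      lsnrctl status

      :: If it only shows localhost/127.0.0.1, edit listener.ora (path varies, e.g.
      :: C:\oraclexe\app\oracle\product\...\server\network\ADMIN\listener.ora) so that
      ::   (ADDRESS = (PROTOCOL = TCP)(HOST = <host name or LAN IP>)(PORT = 1521))
      :: then restart the listener:
      lsnrctl stop
      lsnrctl start

    Once "telnet host-ip 1521" connects from the guest, the WebLogic data source should connect as well.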

    Read the article

  • What's the difference between local and remote addresses in the 2008 firewall address scope?

    - by Ian
    In the firewall advanced security manager/Inbound rules/rule property/scope tab you have two sections to specify local IP addresses and remote IP addresses. What makes an address qualify as a local or remote address, and what difference does it make? This question is pretty obvious with a normal setup, but now that I'm setting up a remote virtualized server I'm not quite sure. What I've got is a physical host with two interfaces. The physical host uses interface 1 with a public IP. The virtualized machine is connected to interface 2 with a public IP. I have a virtual subnet between the two - 192.168.123.0. When editing the firewall rule, if I place 192.168.123.0/24 in the local IP address area or the remote IP address area, what does Windows do differently? Does it do anything differently? The reason I ask this is that I'm having problems getting the domain communication working between the two with the firewall active. I have plenty of experience with firewalls so I know what I want to do, but the logic of what is going on here escapes me, and these rules are tedious to have to edit one by one. Ian
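
    In these rules, "local address" means an address belonging to the machine the rule is evaluated on (for an inbound rule, the destination of the packet), while "remote address" is the other end of the conversation (the source of an inbound packet). So to allow domain traffic arriving from the virtual subnet, 192.168.123.0/24 belongs in the remote scope. A hedged example of expressing that with netsh instead of clicking through each rule (rule name and port are placeholders):

      netsh advfirewall firewall add rule name="Domain traffic from VM subnet" ^
        dir=in action=allow protocol=TCP localport=445 remoteip=192.168.123.0/24

    Putting the subnet in the local scope instead would only restrict which of the host's own addresses the rule applies to, which is why a rule scoped that way can appear to do nothing.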

    Read the article

  • Accessing network shares on Windows 7 via SonicWall VPN client

    - by Jack Lloyd
    I'm running Windows 7 x64 (fully patched) and the SonicWall 4.2.6.0305 client (64-bit, claims to support Windows 7). I can log in to the VPN and access network resources (e.g. SSH to a machine that lives behind the VPN). However, I cannot access shared filesystems. Windows is refusing to do discovery on the VPN network. I suspect part of the problem is Windows persistently considers the VPN connection to be a 'public network'. Normally, you can open the Network and Sharing Center and modify this setting; however, it does not give me a choice for the VPN. So I did the expedient thing and turned on file sharing for public networks. I also disabled the Windows firewall for good measure. Still no luck. I can access the server directly by putting \\192.168.1.240 in the taskbar, which brings up the list of shares on the server. However, trying to open any of the shares simply tells me "Windows cannot access \\192.168.1.240\share You do not have permission to access ..."; it never asks for a domain password. I also tried Windows 7's native VPN functionality - it couldn't successfully connect to the VPN at all. I suspect this is because SonicWall is using some obnoxious special/undocumented authentication system; I had similar problems trying to connect on Linux with the normal IPsec tools there. What magical invocation or control panel option am I missing that will let this work? Are there any reasonable debugging strategies? I'm feeling quite frustrated at Windows' tendency to not give me much useful information that might let me understand what it is trying to do and what is going wrong.
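
    Since \\192.168.1.240 enumerates the shares but opening one is denied without any credential prompt, it is worth forcing an explicit domain logon to the server before browsing; a sketch (drive letter, share and account names are placeholders):

      :: From a command prompt on the Windows 7 client, with the VPN up
      net use Z: \\192.168.1.240\share /user:DOMAIN\username *
      :: the * makes net use prompt for the password; remove the mapping afterwards with:
      net use Z: /delete

    If that works, the problem is credential negotiation (the client silently offering its local account) rather than the SonicWall tunnel itself.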

    Read the article

  • Postfix character encoding?

    - by Anonymous12345
    I use Postfix as a mailserver. I have Ubuntu OS. Then I use PHP to send emails. The problem is that none of my emails are encoded properly by the mail software which my VPS provider uses. According to them, the problem lies with me. It is only the name field which isn't encoded properly. For example "Björn" becomes "BjÃ¶rn" in my emails. However, when I echo the $name, it outputs "Björn" which is correct. Also, Gmail and Hotmail do show it correctly. The strange part is that the "text" (the message itself) is encoded properly. I use the following for sending mail: $headers="MIME-Version: 1.0"."\n"; $headers.="Content-type: text/plain; charset=UTF-8"."\n"; $headers.="From: $name <$email>"."\n"; $name= iconv(mb_detect_encoding($name), "UTF-8//IGNORE//TRANSLIT", $name); //// I HAVE TRIED WITH AND WITHOUT THE LINE ABOVE, NO DIFFERENCE mail($to, '=?UTF-8?B?'.base64_encode($subject).'?=', $text, $headers, '[email protected]'); I have tried with and without the iconv line also, no luck. The last thing I can think of is Postfix: could there be a setting for character encoding there? Does anybody know?
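
    The message body is fine because the Content-Type header declares UTF-8, but a raw UTF-8 display name in the From: header is outside what the header syntax allows, so intermediate software may mangle it; the name needs the same RFC 2047 treatment already applied to the subject. A minimal sketch using the variables from the question:

      // RFC 2047-encode the display name, exactly like the subject line
      $encodedName = '=?UTF-8?B?' . base64_encode($name) . '?=';
      $headers  = "MIME-Version: 1.0\r\n";
      $headers .= "Content-Type: text/plain; charset=UTF-8\r\n";
      $headers .= "From: $encodedName <$email>\r\n";
      mail($to, '=?UTF-8?B?' . base64_encode($subject) . '?=', $text, $headers);

    If that fixes it, no Postfix change is needed; Postfix passes message headers through without re-encoding them.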

    Read the article

  • How to multiseat with HW 3D accel on CentOS 6.3 Final?

    - by user35070
    I would like to set up a multiseat configuration on CentOS 6.3 (two video cards, two keyboards, two mice, two monitors) and have hardware-accelerated 3D on both monitors. 3D HW acceleration rules out Xephyr. I saw somewhere that recent versions of GDM (3.3 and newer?) don't support multiseat, so do I have to install KDM to make this work? If I just create a duplicate section with new device identifiers in my xorg.conf file, will this 'just work'? Using different ports on the same video card and separate keyboards, mice, and displays, the result was a desktop which spanned both monitors, with both keyboards and mice acting as the same input in the GUI. I will power down, put in the new video card, and report on the results soon. Both video cards are NVIDIA. UPDATE: after putting in another NVIDIA video card, the default behavior (before changing xorg.conf) is that one screen works normally, and both mice and keyboards are connected to it. After changing xorg.conf and the display manager to KDM, and following the directions at https://help.ubuntu.com/community/MultiseatX#Ubuntu_10.04_.28Lucid.29 , I have 2 mirrored screens connected to separate video cards, DRI enabled, and 2 mice both connected to the same pointer. The keyboards don't do anything; however, I probably just need to fix a setting in xorg.conf. I would still like to get multiseat functionality, e.g. separate screens with separate input devices. I have verified that the separate X processes (:0 and :1) are running (see the page above) using 'ps aux | grep X'.
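
    For the classic two-X-server multiseat setup (the approach on the linked Ubuntu page), each seat needs its own ServerLayout tied to one card and its own input devices, and the display manager starts one X server per layout. A very rough fragment, with BusIDs and identifiers as placeholders that must match lspci output:

      Section "Device"
          Identifier "card0"
          Driver     "nvidia"
          BusID      "PCI:1:0:0"
      EndSection

      Section "Screen"
          Identifier "Screen0"
          Device     "card0"
      EndSection

      Section "ServerLayout"
          Identifier "seat0"
          Screen     "Screen0"
          Option     "AutoAddDevices" "false"
      EndSection

      # ...repeat Device/Screen/ServerLayout for the second card as "seat1"...

    Each KDM/X server line then selects its layout and shares the virtual terminals with options along the lines of "-layout seat0 -sharevts -novtswitch". Mirrored screens plus a shared pointer usually means both servers are still using the same layout, and dead keyboards usually mean the input devices still need to be pinned to a seat (CoreKeyboard/CorePointer entries or udev rules).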

    Read the article

  • Logic behind SCCM 2012 required PXE deployments

    - by Omnomnomnom
    I'm in the process of setting up Windows 7 deployment through PXE boot, with Microsoft SCCM 2012. The imaging itself works very well, but I have a question about the logic behind PXE deployments. My setup is the following: My Windows 7 deployment task sequence is deployed to the unknown computers group (not required; press F12 to start installing). The OSDComputerName variable is also set on the unknown computers group, so unknown computers that are being imaged will prompt for a PC name. The computer then becomes known in SCCM and is added to the correct collection(s). But if I want to reinstall Windows on a known computer, things are different: I can do a required deployment of the imaging task sequence to the collection of computers. Then Windows installs through PXE, without any human interaction, keeping the original computer name. But because the initial deployment was not required, the "required PXE deployment" flag is not set. So as soon as I add a new computer to a collection with a required PXE deployment, it will start to reinstall Windows again. I can also deploy the imaging task sequence to the new unknown computers as required, so the flag gets set initially. But then it does not prompt for a computer name (and it generates a name like MININT-xxx). Which is also sort of what I want, because when I want to re-install a machine, I want it to install without interaction. How can I solve this?

    Read the article

  • PHP running too slow, always showing "504 Gateway Time-out"

    - by komase
    PHP is running too slow, always showing "504 Gateway Time-out". My server spec: dual-core Atom 330 CPU, 2 GB RAM, nginx with PHP over FastCGI, eAccelerator; CPU 74.3% idle, RAM used: 350 MB of 2 GB. I have lots of sites on my server, with cron jobs running every minute, all the time; in some minutes double or triple cron jobs run at the same time. All my sites' cron jobs are heavy; a cron run usually takes more than one minute. My nginx.conf had become so big that nginx refused to start because of the number of sites in it; that has been solved by increasing server_names_hash_max_size. I'm planning to add more sites to my server. Now, opening my website always shows a 504 Gateway Time-out. I have tested many eAccelerator and PHP settings, but this 504 Gateway Time-out still happens. The 504 Gateway Time-out disappears when cron is disabled. I have no idea: is this because of not enough processor power? And what should I do? Upgrade my processor? Added: this is top output for my CPU just now: Cpu(s): 17.5%us, 3.8%sy, 0.1%ni, 71.6%id, 6.9%wa, 0.1%hi, 0.1%si, 0.0%st
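
    A 504 from nginx in front of FastCGI PHP means the PHP backend did not answer within fastcgi_read_timeout (60 s by default), which fits the symptom that the errors vanish when cron is off: the heavy cron jobs are monopolizing the PHP processes. Two hedged things to check are the FastCGI timeouts and the number of PHP children; the values below are illustrative:

      # in the server/location block that passes PHP to FastCGI
      location ~ \.php$ {
          include fastcgi_params;
          fastcgi_pass 127.0.0.1:9000;    # adjust to the real PHP FastCGI socket/port
          fastcgi_connect_timeout 60s;
          fastcgi_send_timeout    180s;
          fastcgi_read_timeout    180s;   # default 60s; raise only if requests legitimately take that long
      }

    Longer term, running the cron jobs through the PHP CLI (so they do not occupy FastCGI workers at all) or adding workers would address the cause rather than the timeout.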

    Read the article

  • How to remove request blocking on apache reverse proxy after failure of backend before asking backend again

    - by matnagel
    I am working on an apache2 reverse proxy vhost. When the server behind Apache is down, the first request to Apache shows the error page, of course. But on subsequent requests it seems Apache waits for some time before asking the backend server again. During all this time (which is short, but in development I don't want a delay at all) only the Apache error page is shown to the browser, although the backend server is already up. Where is this setting in Apache, what is this behaviour, and how can I set the delay time to zero? Edit: I am not trying to change the timeout for a single request. I want to change the blocking time. It is my experience that Apache blocks further requests for a certain time before again asking a backend server that has failed once. Edit2: This is what Apache delivers: Service Temporarily Unavailable The server is temporarily unable to service your request due to maintenance downtime or capacity problems. Please try again later. Apache/2.2.8 (Ubuntu) PHP/5.2.4-2ubuntu5.7 with Suhosin-Patch proxy_html/3.0.0 Server at localhost Port 80 After hitting Ctrl-R in Firefox for 60 seconds, the page finally appears.
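
    This matches mod_proxy's worker error state: after a failed request the backend worker is marked dead and is not retried for "retry" seconds (default 60, which lines up with the roughly 60 seconds observed), and in the meantime Apache serves the 503 itself. Setting retry=0 on the ProxyPass line disables the hold-off; a sketch with placeholder paths:

      # mod_proxy: retry=0 = do not hold the worker in error state,
      # ask the backend again on the very next request
      ProxyPass        /app http://127.0.0.1:8080/app retry=0
      ProxyPassReverse /app http://127.0.0.1:8080/app

    This is mainly appropriate for development; in production the hold-off exists so a dead backend is not hammered on every request.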

    Read the article

  • SquidGuard and Active Directory: how to deal with multiple groups?

    - by Massimo
    I'm setting up SquidGuard (1.4) to validate users against an Active Directory domain and apply ACLs based on group membership; this is an example of my squidGuard.conf: src AD_Group_A { ldapusersearch ldap://my.dc.name/dc=domain,dc=com?sAMAccountName?sub?(&(sAMAccountName=%s)(memberOf=cn=Group_A%2cdc=domain%2cdc=com)) } src AD_Group_B { ldapusersearch ldap://my.dc.name/dc=domain,dc=com?sAMAccountName?sub?(&(sAMAccountName=%s)(memberOf=cn=Group_B%2cdc=domain%2cdc=com)) } dest dest_a { domainlist dest_a/domains urllist dest_b/urls log dest_a.log } dest dest_b { domainlist dest_b/domains urllist dest_b/urls log dest_b.log } acl { AD_Group_A { pass dest_a !dest_b all redirect http://some.url } AD_Group_B { pass !dest_a dest_b all redirect http://some.url } default { pass !dest_a !dest_b all redirect http://some.url } } All works fine if a user is a member of Group_A OR Group_B. But if a user is a member of BOTH groups, only the first source rule is evaluated, thus applying only the first ACL. I understand this is due to how source rule matching works in SquidGuard (if one rule matches, evaluation stops there and then the related ACL is applied); so I tried this, too: src AD_Group_A_B { ldapusersearch ldap://my.dc.name/dc=domain,dc=com?sAMAccountName?sub?(&(sAMAccountName=%s)(memberOf=cn=Group_A%2cdc=domain%2cdc=com)) ldapusersearch ldap://my.dc.name/dc=domain,dc=com?sAMAccountName?sub?(&(sAMAccountName=%s)(memberOf=cn=Group_B%2cdc=domain%2cdc=com)) } acl { AD_Group_A_B { pass dest_a dest_b all redirect http://some.url } [...] } But this doesn't work either: if a user is a member of either one of those groups, the whole source rule is matched anyway, so he can reach both destinations (which is of course not what I want). The only solution I found so far is creating a THIRD group in AD, and assigning a source rule and an ACL to it; but this setup grows exponentially with more than two or three destination sets. Is there any way to handle this better?
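
    Since source blocks are evaluated first-match-only and multiple ldapusersearch lines inside one block act as alternatives rather than as an AND, one workaround that avoids creating a third AD group is a source block whose single LDAP filter requires membership of both groups, placed before the two single-group blocks; a hedged sketch in the same syntax as the question:

      # must be listed BEFORE AD_Group_A and AD_Group_B so it wins for dual members
      src AD_Group_A_and_B {
          ldapusersearch ldap://my.dc.name/dc=domain,dc=com?sAMAccountName?sub?(&(sAMAccountName=%s)(memberOf=cn=Group_A%2cdc=domain%2cdc=com)(memberOf=cn=Group_B%2cdc=domain%2cdc=com))
      }

      acl {
          AD_Group_A_and_B {
              pass dest_a dest_b all
              redirect http://some.url
          }
          # ...AD_Group_A, AD_Group_B and default blocks unchanged...
      }

    The combinations still have to be spelled out, but only in squidGuard.conf rather than in Active Directory.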

    Read the article

  • Ubuntu 9.10 Only Sees 244 MB RAM, while BIOS and Windows See 1.5 GB

    - by nicorellius
    I have 1.5 GB of RAM installed on an older Dell, Pentium 4. I just installed Ubuntu 9.10 and the system is only seeing 244 MB of RAM, even though there is 1.5 GB on the system. The BIOS sees all of it. I ran a Knoppix disc and it only saw 25 MB upon booting. I made no particular changes to the installation that would affect this. I looked through the BIOS and the only setting I could see was the AGP aperture. Not even sure what this is. Anyone know where I went wrong? I also tried moving the memory modules around on the board. Booted with the 1 GB stick; still saw 244 MB. NOTE - This same system, except for the hard drive, had Windows XP running on it. The user who ran it said that the RAM was good and always showed 1.5 GB. Here is sudo cat /proc/meminfo: MemTotal: 250064 kB MemFree: 3832 kB Buffers: 13356 kB Cached: 52216 kB SwapCached: 19676 kB Active: 91504 kB Inactive: 113884 kB Active(anon): 60572 kB Inactive(anon): 82156 kB Active(file): 30932 kB Inactive(file): 31728 kB Unevictable: 0 kB Mlocked: 0 kB HighTotal: 0 kB HighFree: 0 kB LowTotal: 250064 kB LowFree: 3832 kB SwapTotal: 4883720 kB SwapFree: 4781204 kB Dirty: 496 kB Writeback: 720 kB AnonPages: 123796 kB Mapped: 23368 kB Slab: 17248 kB SReclaimable: 7932 kB SUnreclaim: 9316 kB PageTables: 5304 kB NFS_Unstable: 0 kB Bounce: 0 kB WritebackTmp: 0 kB CommitLimit: 5008752 kB Committed_AS: 740372 kB VmallocTotal: 770600 kB VmallocUsed: 26008 kB VmallocChunk: 662544 kB HugePages_Total: 0 HugePages_Free: 0 HugePages_Rsvd: 0 HugePages_Surp: 0 Hugepagesize: 4096 kB DirectMap4k: 114128 kB DirectMap4M: 147456 kB
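
    244 MB on a machine whose BIOS reports 1.5 GB usually means the kernel was handed a truncated memory map (or the BIOS is reserving memory for on-board video); the kernel log shows what the BIOS actually reported, and the size can be forced on the kernel command line as a test. A hedged sketch:

      # what did the BIOS report to the kernel at boot?
      dmesg | grep -i -e e820 -e "Memory:"

      # temporary test: at the GRUB menu press 'e', append to the kernel line, and boot
      #   mem=1536M
      # (if the machine then hangs or misbehaves, the firmware genuinely is not exposing the RAM)

    Since Knoppix shows the same low figure, this looks like firmware/hardware behaviour rather than anything in the Ubuntu install, so a BIOS update and testing the modules with memtest86+ are also worth trying.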

    Read the article

  • Can I have a single solid state drive and a RAID array on the same machine?

    - by jaminto
    Hi- To summarize, I'm looking to use a single solid-state drive as my primary drive, and two conventional SATA drives in a RAID 1 configuration for data. I am trying to install 64-bit Windows 7 onto this configuration. Is this possible? Here are the details: I built a desktop that has been running 64-bit Vista on two 500 GB drives in a RAID 1 array for a few years. I just purchased an Intel X25-M 80 GB SATA solid-state drive, and was planning on using this as my primary drive, and keeping the RAID 1 array as my data drive. I added the SSD drive and in the RAID setup, configured it as a RAID 0 array of only one disk. Then, I tried to do a clean install of Windows 7 64-bit, but got stuck in the "Missing driver for CD/DVD drive" black hole of selecting driver files and Windows telling me that I don't have the appropriate driver for my hardware. The missing hardware is NOT a CD/DVD drive, since I'm installing off of my only CD/DVD drive. Plus, at one point I was able to point it at a driver for my RAID controller, and then my hard drives magically showed up as browsable sources for finding drivers for some other unnamed device that setup couldn't recognize. After a few hours of trying drivers (this was a very slow process) I decided to reboot and look at the BIOS settings. I'm using an ASUS M2A-VM motherboard which has an ATI SB600 RAID controller on board. I switched the "On board SATA Type" setting from "SATA" to "AHCI" thinking that since AHCI is an Intel thing, this would help. Unfortunately, this abandoned my RAID configuration, and my previously mirrored drives are showing up as separate drives when I boot into my current Windows installation. Am I trying to do the impossible here? Should I just buy a separate SATA/RAID PCI card and plug the SSD into that? Any help would be greatly appreciated.

    Read the article

  • How can I set up a 404 error page when people access http://ftp.mydomain.com?

    - by Tim B.
    I am a freelance videographer/developer, and part of my job involves transferring large files over FTP to production houses/television stations. While the majority of people in my industry understand the difference between FTP and HTTP, I've experienced several interactions in the past couple months of people who still open Internet Explorer and try to access http://ftp.mydomain.com, receive an error page served by HostGator, and tell me that they cannot access my FTP server. Instead of spending time delivering instructions via e-mail, I'd much prefer to serve up a custom error page in this instance that instructs them how to download and use an FTP client. I tried setting up a sub-domain in Cpanel hoping I could simply drop in an .htaccess file with the error page, but I got this error: ftp.mydomain.com domainadmin-domainexistsglobal I also tried creating a custom error page in PHP which reads the site URL and serves up the custom content only when http://ftp.mydomain.com is accessed. Unfortunately, the error page works for every subdomain except that one. I'm not entirely sure this is even technically possible, which is why I bring it to the good people of StackOverflow to help. Thanks!
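
    If cPanel refuses the subdomain because "ftp" is reserved, an alternative (assuming ftp.mydomain.com resolves to the same web server and the main site's vhost answers for it) is to catch the Host header in the main domain's .htaccess and serve an instructions page; file and host names below are placeholders:

      # .htaccess for mydomain.com
      RewriteEngine On
      RewriteCond %{HTTP_HOST} ^ftp\.mydomain\.com$ [NC]
      RewriteRule ^ /ftp-instructions.html [L]

    The PHP approach failing only for that one subdomain suggests those requests never reach the account's document root at all, in which case only the host (HostGator) can repoint where http://ftp.mydomain.com lands.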

    Read the article

  • How to rescue data from an SD (SDHC) card that I can't reformat (possible hardware failure)

    - by sbwoodside
    I have a transcend 16GB SDHC card and a lot of photos on it that I'd like to recover. When I plug it into the SD card reader, it takes a while for the Mac to even recognize that there's a disk present, and it shows up as 1.07GB with geometry 520/64/63 (according to fdisk). First I tried file recovery: PhotoRec: no files are found (the images are in CR2 format and I'm using testdisk-6.14-WIP which claims to recognize that format under TIF) dd / ddrescue: they create a 1.07GB image, same problem as above TestDisk: doesn't find any partitions to recover I found a source saying that the correct geometry for this type of SD Card is Heads 255, Sectors/Track 63, Cylinders 1953, so I tried manually setting that geometry in PhotoRec/TestDisk. No improvement. Next I tried formatting the disk with fdisk. After writing and quitting, I ran fdisk again and it reported that the new format hadn't been saved on the disk. I also tried resetting the format/partitions with TestDisk and that failed also. The fdisk log is below. I don't really care about the card, I've already ordered a new SanDisk card. But I'd like to get the data off. Maybe, is there any way to force dd or some other tool to create an image of the disk based on the original geometry and not on what the card "thinks" its geometry is? Or am I missing something?

    Read the article

  • BizTalk 2009 log shipping with SQL 2008

    - by Manjot
    Hi, I am setting up BizTalk log shipping for the BizTalk 2009 databases. Following the http://msdn.microsoft.com/en-us/library/aa560961.aspx article, I am doing the following to set up BizTalk log shipping on the destination server: Enable ad-hoc queries by: sp_configure 'show advanced options',1 go reconfigure go sp_configure 'Ad Hoc Distributed Queries',1 go reconfigure go sp_configure 'show advanced options',0 go reconfigure go Execute LogShipping_Destination_Schema & LogShipping_Destination_Logic in master on the destination server. Run: exec bts_ConfigureBizTalkLogShipping @nvcDescription = '', @nvcMgmtDatabaseName = '', @nvcMgmtServerName = '', @SourceServerName = null, -- null indicates that this destination server restores all databases @fLinkServers = 1 -- 1 automatically links the server to the management database When I run this I receive the following error: Login failed for user 'NT AUTHORITY\ANONYMOUS LOGON'. After some research I found some info: usually this error means that the SQL Server Service Principal Name (SPN) was not configured, and NTLM was not being used as an authentication mechanism. The SQL services are running under different domain accounts. So, I asked the domain admin to create SPNs for the servers and SQL service accounts, for both source and destination, using the name and FQDN, and to enable the computer names and service accounts for delegation. When I run the following: select * from sys.dm_exec_connections I get the same error: Login failed for user 'NT AUTHORITY\ANONYMOUS LOGON'. Any help please?
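
    The ANONYMOUS LOGON failure on a linked-server query is the classic Kerberos double-hop problem; besides enabling delegation, the SPNs must exist on the exact service accounts, for both the NetBIOS and FQDN names (and the port, if not default). A hedged sketch of verifying and registering them, with server and account names as placeholders:

      :: list what is already registered on a service account
      setspn -L DOMAIN\sqlserviceaccount

      :: register the SQL SPNs (-S checks for duplicates before adding)
      setspn -S MSSQLSvc/sqlsource.domain.com:1433 DOMAIN\sqlserviceaccount
      setspn -S MSSQLSvc/sqlsource:1433            DOMAIN\sqlserviceaccount
      setspn -S MSSQLSvc/sqldest.domain.com:1433   DOMAIN\sqldestserviceaccount
      setspn -S MSSQLSvc/sqldest:1433              DOMAIN\sqldestserviceaccount

    After registering the SPNs and restarting the SQL services, "SELECT auth_scheme FROM sys.dm_exec_connections WHERE session_id = @@SPID" should report KERBEROS instead of NTLM; as a stop-gap, the linked server can also be given an explicit SQL login mapping instead of "current security context".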

    Read the article

  • Convert HTACCESS mod_rewrite directives to nginx format?

    - by Chris
    I'm brand new to nginx and I am trying to convert the app I wrote over from Apache as I need the ability to serve a lot of clients at once without a lot of overhead! I'm getting the hang of setting up nginx and FastCGI PHP but I can't wrap my head around nginx's rewrite format just yet. I know you have to write some simple script that goes in the server {} block in the nginx config but I'm not yet familiar with the syntax. Could anyone with experience with both Apache and nginx help me convert this to nginx format? Thanks! # ------------------------------------------------------ # # Rewrite from canonical domain (remove www.) # # ------------------------------------------------------ # RewriteCond %{HTTP_HOST} ^www.domain.com RewriteRule (.*) http://domain.com/$1 [R=301,L] # ------------------------------------------------------ # # This redirects index.php to / # # ------------------------------------------------------ # RewriteCond %{THE_REQUEST} ^[A-Z]+\ /(index|index\.php)\ HTTP/ RewriteRule ^(index|index\.php)$ http://domain.com/ [R=301,L] # ------------------------------------------------------ # # This rewrites 'directories' to their PHP files, # # fixes trailing-slash issues, and redirects .php # # to 'directory' to avoid duplicate content. # # ------------------------------------------------------ # RewriteCond %{DOCUMENT_ROOT}/$1.php -f RewriteRule ^(.*)$ $1.php [L] RewriteCond %{DOCUMENT_ROOT}/$1.php -f RewriteRule ^(.*)/$ http://domain.com/$1 [R=301,L] RewriteCond %{THE_REQUEST} ^[A-Z]+\ /[^.]+\.php\ HTTP/ RewriteCond %{DOCUMENT_ROOT}/$1.php -f RewriteRule ^([^.]+)\.php$ http://domain.com/$1 [R=301,L] # ------------------------------------------------------ # # If it wasn't redirected previously and is not # # a file on the server, rewrite to image generation # # ------------------------------------------------------ # RewriteCond %{REQUEST_FILENAME} !-f RewriteCond %{REQUEST_FILENAME} !-d RewriteRule ^([a-z0-9_\-@#\ "'\+]+)/?([a-z0-9_\-]+)?(\.png|/)?$ generation/image.php?user=${escapemap:$1}&template=${escapemap:$2} [NC,L]
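
    A rough starting point in nginx terms is below; it is untested, and the image-generation rule in particular needs checking against real URLs (nginx also has no direct equivalent of the ${escapemap:...} RewriteMap, so that escaping is dropped here). The www-stripping becomes its own server block, and the "-f" file tests become try_files:

      # canonical host: strip www.
      server {
          listen 80;
          server_name www.domain.com;
          return 301 http://domain.com$request_uri;
      }

      server {
          listen 80;
          server_name domain.com;
          root /var/www/domain.com;            # adjust

          # /index or /index.php -> /
          rewrite ^/index(\.php)?$ / permanent;

          # externally requested /foo.php -> /foo (avoid duplicate content)
          rewrite ^/(.+)\.php$ /$1 permanent;

          location / {
              try_files $uri $uri/ @extensionless;
          }

          location @extensionless {
              # serve /foo from foo.php if it exists, otherwise hand off to image generation
              if (-f $document_root$uri.php) {
                  rewrite ^(.*)$ $1.php last;
              }
              rewrite "^/([a-z0-9_\-@# \"'\+]+)/?([a-z0-9_\-]+)?(\.png|/)?$"
                      /generation/image.php?user=$1&template=$2 last;
          }

          location ~ \.php$ {
              include fastcgi_params;
              fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
              fastcgi_pass 127.0.0.1:9000;     # adjust to the PHP FastCGI socket
          }
      }

    The try_files/named-location pattern is generally preferred in nginx over chains of if blocks; the single if above just mirrors the original "-f" condition.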

    Read the article

  • Domain Environment + Certificate Authority + Server 2008 R2

    - by user1110302
    I have recently been delegated the task of setting up a CA in our domain environment and have a question on why Microsoft does some things the way they do. I have been trying to read up on what the best practices are for going about this task, and have decided that in an ideal CA environment you should have one "offline" root CA, and then two subordinate CAs for redundancy/issuing the certs. That is all good; I understand how this works and why, but in messing with a sandbox I have set up, the way you go about adding certificate authorities to a domain environment seems extremely trivial and against all of their best practices… Does anyone know what the purpose is of an Enterprise Root CA that is integrated into Active Directory? From what I have read, once you set up an Enterprise Root CA that is integrated into Active Directory, it stays with Active Directory for the long haul and must not be turned off/renamed/touched under any circumstances. If this is true, that seems to go against the practice of setting up a standalone root CA, adding the subordinates, and then taking the root offline. Thanks for any feedback you may have to offer!

    Read the article

  • Cannot run logwatch due to Date::Manip issue

    - by Quintin Par
    I tried to run logwatch as follows: [root@machine cron.daily]# ./0logwatch ERROR: Date::Manip unable to determine TimeZone. Execute the following command in a shell prompt: perldoc Date::Manip The section titled TIMEZONES describes valid TimeZones and where they can be defined. My date is as follows: [root@machine cron.daily]# date Thu Aug 23 06:25:21 GMT 2012 Now, based on details in various forums, I tried to fix this by setting /etc/timezone to "+0800" but it didn't work. My /etc/localtime points to /usr/share/zoneinfo/GMT and is managed by Puppet. How do I go about fixing this? I still want all my machines to be in the GMT timezone. EDIT: Sadly, both changes are not working: [root@machine cron.daily]# cat /etc/TIMEZONE UTC Quanta's suggestion: [root@machine cron.daily]# cat ~/.bash_profile # .bash_profile # Get the aliases and functions if [ -f ~/.bashrc ]; then . ~/.bashrc fi # User specific environment and startup programs PATH=$PATH:$HOME/bin export TZ=GMT export PATH [root@machine cron.daily]# source ~/.bash_profile [root@machine cron.daily]# ./0logwatch ERROR: Date::Manip unable to determine TimeZone. Execute the following command in a shell prompt: perldoc Date::Manip The section titled TIMEZONES describes valid TimeZones and where they can be defined.
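
    Date::Manip takes its zone from the TZ environment variable (among other sources), and ~/.bash_profile is only read by interactive login shells, not by cron, which is why that export changes nothing for the 0logwatch run. Setting TZ where cron can actually see it is usually enough; a hedged sketch (paths are the usual defaults):

      # option 1: give cron itself the variable (top of /etc/crontab or a file in /etc/cron.d/)
      TZ=Etc/GMT

      # option 2: export it at the top of /etc/cron.daily/0logwatch itself,
      # right after the shebang line, before logwatch is invoked:
      export TZ=Etc/GMT

    Either way the machines stay on GMT; the variable only makes the zone explicit for programs, like Date::Manip, that refuse to guess it.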

    Read the article

  • Cisco ASA and static IPv6 tunnel endpoint?

    - by Martijn Heemels
    I recently installed a Cisco ASA 5505 firewall on the edge of our LAN. The setup is simple: Internet <-- ASA <-- LAN. I would like to provide the hosts in the LAN with IPv6 connectivity by setting up a 6in4 tunnel to SixXS. It would be nice to have the ASA as the tunnel endpoint so it can firewall both IPv4 and IPv6 traffic. Unfortunately the ASA apparently can't create a tunnel itself, and can't port-forward protocol 41 traffic, so I believe I would have to do one of the following instead: Set up a host with its own IP outside the firewall, and have that function as the tunnel endpoint. The ASA can then firewall and route the v6 subnet to the LAN. Set up a host inside the firewall that functions as the endpoint, separated via VLAN or whatever, and loop the traffic back into the ASA where it can be firewalled and routed. This seems contrived, but would allow me to use a VM instead of a physical machine as the endpoint. Any other way? What would you suggest is the optimal way to set this up? P.S. I do have a spare public IP address available if needed, and can spin up another VM in our VMware infrastructure.
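
    If the endpoint ends up on a small Linux host or VM (either of the two options above), the 6in4 side is just a static sit tunnel, and the ASA then only ever handles native IPv6 on its interfaces. A sketch with placeholder addresses (the real ones come from the SixXS tunnel details page); a dynamic/heartbeat SixXS tunnel would use the aiccu client instead:

      # on the Linux tunnel endpoint
      ip tunnel add sixxs mode sit remote 192.0.2.1 local 203.0.113.5 ttl 64
      ip link set sixxs up
      ip -6 addr add 2001:db8:1234::2/64 dev sixxs        # your side of the tunnel
      ip -6 route add ::/0 via 2001:db8:1234::1 dev sixxs # default route over the tunnel
      sysctl -w net.ipv6.conf.all.forwarding=1            # route the delegated subnet onward

    The spare public IP fits the first option nicely: the endpoint sits outside the ASA, terminates protocol 41, and hands the routed IPv6 subnet to the ASA for firewalling towards the LAN.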

    Read the article

  • Missing Home Folder XP Clients 2008R2 Domain

    - by minamhere
    We just completed a migration from Server 2003 to Server 2008R2. Everything seems to have gone well except that many of our desktops have stopped mapping the Home Folder as set in Active Directory. Other mappings that are defined on individual clients are mapping just fine, these mappings are all on the same file server as the failing Home Folders. Half of the users are on 1 file server and half are on another. Users from both servers are having this problem. I have enabled the Group Policy setting to "Wait for network before logging in". I enabled the policy to "Run Logon Scripts synchronously". There are no errors on the Domain Controller or either File Server. When I enabled Group Policy Preferences as an attempted workaround, I get this error: The user 'V:' preference item in the '<Policy Name>' Group Policy object did not apply because it failed with error code '0x800708ca This network connection does not exist.' This error was suppressed. This seems to indicate that the network connection is not ready by the time Group Policy is processed. But isn't this the point of the "Wait before logging in" and "Run Logon scripts synchronously" settings? Some other background facts: The new Server 2008R2 installation is a Virtual Machine. It is on a new Subnet in a different building from the old server. DNS and DHCP were also migrated from the old DC to this new DC. These Home Folders were all working properly before the migration. Are there new security restrictions/policies in Server 2008R2 that might be causing this? Is there a way to check whether I have an underlying network connectivity issue? Maybe moving the server to the new building is causing a delay/timeout? Any thoughts or ideas on what could be causing this or how I can resolve this? Thanks.

    Read the article

  • Choosing the right TV tuner - USB or PCI TV tuners, hardware/software, DVB? Hybrid/combo/analog?

    - by Nucleon
    Greetings, I'll start with some background information so you know what I'm trying to accomplish and then get to my question. I work at a television station in the US and we are working on setting up an online DVR/podcast system for all of our newscasts. So basically we would be recording every newscast in HD, encoding it to flv/h.264 for viewing in a browser on Flash-compatible and iPhone/iPad devices, eventually migrating to WebM when it's browser compliant. This task is theoretically pretty simple, as all it involves is a TV tuner device and a program like VLC, MythTV or whatever to schedule and dump it to a file, encode it with VLC/FFmpeg and push it to the streaming server. Now to the hardware: in order to accomplish that task, should I use an internal PCI tuner or a USB 2.0 tuner? Is there a difference? The bus speeds of both are not too far apart, and is the bus speed really relevant in this case? Does it matter if the device has a hardware encoder or a software encoder? On many sites the USB was recommended for ease of setup and use, but would it overly task a processor, or is that not a concern as long as it's a decent PC (at least dual core, 6 GB RAM)? What's the difference between the stick USB and the box USBs? To my understanding analog is basically gone in the US, so we would want a hybrid or combo tuner, correct? How do those differ from DVB? Are there any other features or concepts which I am missing which may influence the recommended product? It would be ideal if the device could work in both Linux and Windows environments; to my knowledge most Hauppauge devices do. Example 1: PCI Hauppauge http://www.newegg.com/Product/Product.aspx?Item=N82E16815116033 Example 2: USB 2.0 Box http://www.newegg.com/Product/Product.aspx?Item=N82E16815116029 Example 3: USB 2.0 Stick http://www.newegg.com/Product/Product.aspx?Item=N82E16815116031 Any guidance from the Superusers would be much appreciated!

    Read the article

  • Windows Server 2003 DHCP not handing out IPs

    - by SnOrfus
    I'm trying to set up a home server (to tinker with) as a domain controller. I've set up the domain and I've installed DHCP and set up a scope without any exclusions (with the default range of 192.168.0.1-254). My client machine is a Windows 7 (RC) machine and it has a connection but can't get an IP address. Even if I try setting the IP to a static 192.168.0.2, there is still no connectivity. I can ping it from the server, but pinging the server from the client just times out. The only thing between the server and the client is a 24-port switch (D-Link DES-1024D). Edit: OK, it turned out that the interfaces were set up backwards in the NAT settings (the internal NIC connection was set to public and the external NIC connection was set to private). I changed this and all was OK.... sort of. The problem is now: if I set a static IP on the client (where I am typing this from) all is fine. BUT when I set it to get it from DHCP, I get a correct IP from the server (192.168.0.2) but there is no internet on the client; I can still ping the server fine from the client (which makes sense because I was able to get an IP from it). Edit: I ended up just removing the Routing and DHCP server roles and just going with ICS for the time being until I get my hands on some better learning tools.

    Read the article

  • VMware virtual machine network devices malfunctioning

    - by sheepz
    I'm running Ubuntu 10.04 LTS and VMware Workstation 7.0.1 build-227600. The virtual machine I'm running in VMware is a custom distribution built on Debian Linux version 3.1. I'm still pretty much a beginner with UNIX administration. After having messed around with the VM (I changed only the name of the folder in which the .vmx was situated, renamed the .vmx and the other .v* files accordingly, and updated the configuration in the .vmx file to match), the network devices on the virtual machine do not work anymore. The virtual machine is used for securely sending messages. As far as I know, this Perl file called proxy-gen-ifalias is responsible for properly setting up the two virtual network devices eth0 and eth1. The virtual machine comes with a GUI interface in which I have set up two Ethernet network devices, one internal, the other external. Now, after having messed around with this, the UI gives me this error message: perl proxy-gen-ifalias eth0 /etc/modprobe.d/alias-eth0 /sbin/update-modules perl proxy-gen-ifalias eth1 /etc/modprobe.d/alias-eth1 /sbin/update-modules ifdown eth0 ifdown: interface eth0 not configured ifdown eth1 ifdown: interface eth1 not configured perl proxy-gen-netcfg /etc/network/interfaces ifup eth0 SIOCSIFADDR: No such device eth0: ERROR while getting interface flags: No such device SIOCSIFNETMASK: No such device eth0: ERROR while getting interface flags: No such device Failed to bring up eth0. ifconfig eth0 eth0: error fetching interface information: Device not found make: *** [/etc/network/interfaces] Error 1 ~ Here are the contents of the two Perl files referred to in the message: paste.pocoo.org/show/2AMzAYhoCRZqlGY7wUFk/ proxy-gen-netcfg
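
    Renaming or moving a VM often gives the guest new MAC addresses, and Debian-based guests that use udev persistent naming then keep eth0/eth1 reserved for the old MACs and create eth2/eth3 for the new ones, so the configured eth0/eth1 really are "no such device". A hedged check and fix (the rules file name varies with the udev version; a very old Debian 3.1 base may instead record the MACs in the appliance's own proxy-gen configuration):

      # inside the guest: do the NICs exist under different names now?
      ip link                     # or: cat /proc/net/dev

      # if so, clear the cached MAC-to-name mapping and reboot so eth0/eth1 come back
      rm /etc/udev/rules.d/*persistent-net.rules
      reboot

    It is also worth diffing the renamed .vmx against a backup to confirm the ethernet0.*/ethernet1.* lines survived the edit; if VMware regenerated them, the MAC addresses will have changed even though the devices look the same in the VM settings.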

    Read the article

  • DNS setup problems with Windows Azure VPS

    - by jbigelow
    What is the proper way to set up the A record (or CNAME) for a Windows Azure VPS? I can't connect to my website after setting up IIS, and I believe I don't have the correct DNS setup. I created a small VPS instance with the default Windows Server 2012 configuration. I RDP'd in and added the Web Server role. In my DNSMadeEasy control panel I added an A record with my public virtual IP address. In IIS I went to the default website and added bindings for the hostname of my website, so I should be able to type mywebsite.com and see the IIS 8 splash screen, but instead my browser cannot connect. I attempted to navigate to the site by typing my virtual IP address into the browser and still cannot connect. I RDP'd back into the machine and turned off the Windows Firewall. No change; still cannot navigate to my website. From within IIS I double-checked my binding. If I click "browse *:80" I can bring up my website in IE with the http:// localhost address. If I click "browse mywebsite on *:80" IE says "This page cannot be displayed." From within the RDP session I can view the site if I navigate to http:// 127.0.0.1, but not if I navigate to my virtual IP, nor can I view the page if I try navigating to http:// mywebservername.cloudapp.net. I'm thinking I must be fundamentally misunderstanding how to do DNS setup with an Azure VPS, but my initial Google searches aren't turning up any helpful information. (Spaces added after the http:// so serverfault doesn't try to render them as valid URLs.)
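
    One Azure-specific detail to rule out: with these (service-management era) VMs, the public virtual IP only forwards ports that have an endpoint defined on the cloud service, so even with IIS bound correctly and the firewall off, port 80 stays closed from outside until an HTTP endpoint is added in the portal or via PowerShell. A hedged sketch with placeholder names (classic Azure PowerShell module):

      Get-AzureVM -ServiceName "mycloudservice" -Name "myvm" |
          Add-AzureEndpoint -Name "HTTP" -Protocol tcp -LocalPort 80 -PublicPort 80 |
          Update-AzureVM

    Once http://mywebservername.cloudapp.net loads, the DNS side is just an A record to the virtual IP (or, since the VIP can change when the deployment is stopped, a CNAME to mywebservername.cloudapp.net).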

    Read the article

  • Favorite tricks with linux kernel boot parameters?

    - by ~drpaulbrewer
    Most Linux bootloaders let you edit the kernel boot command line before booting. There are often lots of parameters available (Knoppix, for instance, has a list on its Knoppix Cheat Codes page), but most are applicable only to compatibility and special situations. A few are hidden gems. Common usages of these codes are to boot to single-user mode, alter screen modes or drivers, or to specify an alternative root directory. Other more exotic uses are possible. Some Linux distributions let you copy the boot CD into RAM. Others (e.g., Ubuntu) let you use preseed files to clone installs when setting up multiple systems -- useful when installing a lab full of computers without having to babysit each install. What other tricks have you found useful in system installs, repairs, backups, restores, establishing temporary servers, or other tasks? To add your favorite trick to the list: as much of the code for these options runs either in initrd or in a service handler that detects the kernel parameters, please list (1) the kernel boot line parameter, (2) what it does, and (3) the Linux distribution and any required packages to activate the feature. Thanks.
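
    To seed the list with two of the common ones already mentioned above (illustrative; exact names vary by distribution):

      # Knoppix: copy the live CD into RAM and run from there (needs enough free RAM)
      knoppix toram

      # Most distributions: boot to single-user/rescue mode by appending "single" (or "1")
      # to the kernel/linux line at the GRUB menu (press 'e' to edit, then boot)

    (1) toram / single, (2) run entirely from RAM / boot to a root maintenance shell, (3) Knoppix live CD / any distribution with sysvinit-style runlevels.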

    Read the article
