Search Results

Search found 5784 results on 232 pages for 'points'.

Page 167/232 | < Previous Page | 163 164 165 166 167 168 169 170 171 172 173 174  | Next Page >

  • Can I use iptables on my Varnish server to forward HTTPS traffic to a specific server?

    - by Dylan Beattie
    We use Varnish as our front-end web cache and load balancer, so we have a Linux server in our development environment, running Varnish with some basic caching and load-balancing rules across a pair of Windows 2008 IIS web servers. We have a wildcard DNS rule that points *.development at this Varnish box, so we can browse http://www.mysite.com.development, http://www.othersite.com.development, etc. The problem is that since Varnish can't handle HTTPS traffic, we can't access https://www.mysite.com.development/. For dev/testing, we don't need any acceleration or load-balancing - all I need is to tell this box to act as a dumb proxy and forward any incoming requests on port 443 to a specific IIS server. I suspect iptables may offer a solution but it's been a long while since I wrote an iptables rule. Some initial hacking has got me as far as:
        iptables -F
        iptables -A INPUT -p tcp -m tcp --sport 443 -j ACCEPT
        iptables -A OUTPUT -p tcp -m tcp --dport 443 -j ACCEPT
        iptables -t nat -A PREROUTING -p tcp --dport 443 -j DNAT --to 10.0.0.241:443
        iptables -t nat -A POSTROUTING -p tcp -d 10.0.0.241 --dport 443 -j MASQUERADE
        iptables -A INPUT -j LOG --log-level 4 --log-prefix 'PreRouting '
        iptables -A OUTPUT -j LOG --log-level 4 --log-prefix 'PostRouting '
        iptables-save > /etc/iptables.rules
    (where 10.0.0.241 is the IIS box hosting the HTTPS website), but this doesn't appear to be working. To clarify - I realize there are security implications around HTTPS proxying/caching - all I'm looking for is completely transparent IP traffic forwarding. I don't need to decrypt, cache or inspect any of the packets; I just want anything on port 443 to flow through the Linux box to the IIS box behind it as though the Linux box wasn't even there. Any help gratefully received... EDIT: Included full iptables config script.
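
    For comparison, a minimal sketch of the usual shape of a transparent port-443 forward: kernel forwarding enabled, a PREROUTING DNAT, FORWARD-chain accepts (the INPUT/OUTPUT rules above don't apply to forwarded packets), and a MASQUERADE so replies return through this box. This is not the poster's final config - the eth0 interface name is an assumption and 10.0.0.241 is simply reused from the question:
        # enable forwarding in the kernel
        echo 1 > /proc/sys/net/ipv4/ip_forward

        # rewrite the destination of incoming HTTPS connections to the IIS server
        iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 443 \
            -j DNAT --to-destination 10.0.0.241:443

        # make forwarded connections appear to come from the Varnish box, so replies
        # come back through it even if the IIS server's default gateway is elsewhere
        iptables -t nat -A POSTROUTING -p tcp -d 10.0.0.241 --dport 443 -j MASQUERADE

        # allow the forwarded traffic through the FORWARD chain
        iptables -A FORWARD -p tcp -d 10.0.0.241 --dport 443 -j ACCEPT
        iptables -A FORWARD -p tcp -s 10.0.0.241 --sport 443 -j ACCEPT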

    Read the article

  • Certificate revocation check fails for non-domain guest in spite of accessible CRL

    - by 0xFE
    When we try to use certificates on computers that are not part of the domain, Windows complains that "The revocation function was unable to check revocation because the revocation server was offline." However, if I manually open the certificate and check the CRL Distribution Point property, I see an ldap:/// URL and an http:// URL that points to an externally-accessible IIS site that hosts the CRLs. Of course, the non-domain-joined client cannot access the ldap:/// URL, but it can download the CRL from the http:// link (at least in a browser). I enabled CAPI logging and I see the event that corresponds to this failed revocation check. The RevocationInfo section is:
        RevocationInfo
          [ freshnessTime]  PT11H27M4S
          RevocationResult  The revocation function was unable to check revocation because the revocation server was offline.
          [ value]          80092013
        CertificateRevocationList
          [ location]       UrlCache
          [ url]            http://<the correct URL>
          [ fileRef]        6E463C2583E17C63EF9EAC4EFBF2AEAFA04794EB.crl
          [ issuerName]     <the name of the CA>
    Furthermore, I can see the HTTP request to the correct URL and the server's response (HTTP 304 Not Modified) with Microsoft Network Monitor. I ran certutil -verify -urlfetch, and it seems to show the same thing: the computer recognizes both URLs, tries both, and even though the http:// link succeeds, returns the same error. Is there a way to have non-domain-joined clients skip the ldap:/// link and only check the http:// one? Edit: The ldap:/// URL is ldap:///CN=<name of CA>,CN=<name of server that is running the CA>,CN=CDP,CN=Public Key Services,CN=Services,CN=Configuration,DC=<domain name>?certificateRevocationList?base?objectClass=cRLDistributionPoint The non-domain-joined clients may be on the domain network or on an external network. The http:// CDP is accessible from the public internet.

    Read the article

  • pptp server 2003 hands out gateway from nic not dhcp server

    - by Pete
    I have created a PPTP RRAS server for a handful of clients to connect to. I would like them to use the server's default gateway (.1) for internet access. They are able to successfully connect (& see the LAN) but it then cuts them off the internet. I understand that all internet traffic would be routed through the PPTP server, but that's ok since I have enough pipe. The problem seems to be that the clients' gateway shows as their assigned RAS IP. The clients' assigned DNS settings seem to be what is set on the server's NIC, not what I have specified in DHCP (which is the same server). The DHCP relay agent properties point to the NIC DHCP is running on (192.168.100.163). .1 is the gateway in the NIC hardware properties and in DHCP. I have different secondary and tertiary DNS entries in my NIC properties than what DHCP is configured for. The problem is that I have a 10.10.1.x network that people cannot see if they uncheck the gateway option, but they are then unable to see our other hosted sites on the internet.

    Read the article

  • How can I get Pinch to Zoom back in Desktop mode?

    - by Ben Brocka
    Windows 7 had an old implementation of Pinch to Zoom where bringing your fingers apart/together would act similarly to Ctrl + +/-, the standard zoom. It's not as nice as granular zoom (like iOS/Android use) but it worked. Most notably it doesn't work in Chrome (did before) but I haven't noticed it working in any other apps. In Windows 8 desktop mode, pinch to zoom doesn't seem to work at all. It doesn't even work in OneNote 2010, which, if I recall correctly, had granular zoom in Windows 7. I have an (older) 2 touch point multi-touch monitor, and I can see the visual feedback that the two touch points are coming closer/farther apart, but it doesn't zoom. Note I'm using the touchscreen, not a touchpad or the Arch mouse or other peripherals. Can I enable this somehow or is it gone from Desktop mode? It works fine in Metro apps. Additionally I get weird visual feedback when placing my second finger on the screen; a shrinking transparent square appears somewhere between the two fingers, visually similar to the right-click visual cue when long-pressing. It's not a right click though; I can't tell what, if anything, it's doing.

    Read the article

  • On Windows machines, what is the typical toolchain for remote maintenance?

    - by Hanno Fietz
    I need to deploy PHP and Python code and the appropriate environment (web server, db server) to remote Windows systems, and I don't know what toolchain would be the equivalent to ssh, scp, bash and the like. So, basically, what I need to be able to do is the following:
        - access remote Windows with the appropriate privileges in a secure manner, like I routinely do with ssh (I don't even know whether that would be a text or graphic interface on Windows)
        - remotely install software: Apache or IIS, MySQL or Postgres, Python or PHP
        - copy files from remote (the application we're deploying)
        - remotely configure the machine to run regular tasks (e.g. checking for updates to the application)
        - automate tasks like downloading files from a designated place
    The main question is probably how I get onto the machine securely in the first place, and then the rest is general Windows admin knowledge, which probably is too broad a scope to fit into one question. I have years of experience with maintaining Linux boxes and I have used tools of varying sophistication on those, ranging from plain scp'ing of PHP files to deployment of Java application containers and even full VMs with Vagrant. On Windows, I'm a complete noob, and I don't even know where to start. I have installed Apache, MySQL, PHP on a desktop machine maybe twice in my life, that's about it. Bonus points for things that work from a Linux machine at my end, but I could run a VM and do everything from there.

    Read the article

  • Why is Mac OSX Lion losing login/network credentials?

    - by Larry Kyrala
    (moved from stackoverflow...)
    Symptoms
    So at work we have OSX 10.7.3 installed and every once in a while I will see the following behaviors:
    1) if the screen is locked, then multiple tries of the same user/pass are not accepted.
    2) if the screen is unlocked, then opening a new bash term may yield prompts such as:
        I have no name$
    or:
        lkyrala$ ssh lkyrala@ah-lkyrala2u
        You don't exist, go away!
    Even when our macs are working normally, everyone here has to login twice. The first time after boot always fails, but the second time (with the same password, not changing anything, just pressing enter again) succeeds. Weird?
    Workarounds
    There are some workarounds that resolve the immediate problem, but don't prevent it from happening again:
    a) wait (maybe an hour or two) and the problems sometimes go away by themselves.
    b) kill 'opendirectoryd' and let it restart (from https://discussions.apple.com/thread/3663559).
    c) hold the power button to reset the computer.
    Discussion
    Now, the evidence above points me to something screwy with opendirectory and login credentials. Some other people report having these login problems, but it's hard to determine where the actual problem is (Mac, or network environment?). I should add that most of the network are Windows machines, but we have quite a few Macs and Linux machines as well, but I'm not sure of the details of how the network auth is mapped from various domains to others... all I know is that our network credentials work in Windows domains as well as mac and linux logins -- so something is connecting separate systems, or using the same global auth system.
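
    As a side note on workaround (b), a minimal sketch of doing it from Terminal - assuming launchd is supervising opendirectoryd (which it normally is on 10.7), it respawns on its own after the kill:
        sudo killall opendirectoryd        # launchd should restart it within a few seconds
        ps aux | grep [o]pendirectoryd     # confirm the new process is up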

    Read the article

  • Please help to find a solution for two way, real-time synchronization on Centos 5.5 64Bit

    - by Vipul Limbachiya
    I am in need of real-time, two-way synchronization software for CentOS 5.5 / 64-bit. Here's a little explanation. It needs to be able to perform:
        - Two-way synchronization.
        - It must be realtime. By realtime I mean it can be almost realtime, i.e. a delay of 1 second for example is fine.
        - And the folders are on the same server.
    I am currently using GlusterFS across two webservers. However, it has extremely poor small-file read performance and it's slowing down my website. There's nothing more that can be done to improve this; I have already tested many configurations. As a solution, I was going to mount a RAM drive (tmpfs) that mirrors the GlusterFS web files but get the webserver to use the RAM drive. The issue is that I need two-way realtime mirroring or replication between GlusterFS and the RAM drive. I need this as Apache writes files as well. As I said, realtime two-way synchronization across two folders, which are in fact 2 different mount points: the RAM (tmpfs) mount point and the GlusterFS mount point. I already know about:
        - Rsync - which is one way
        - Unison - which is not realtime
    Please suggest any solution, free or paid. Thanks in advance.
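
    One direction of this kind of near-realtime mirroring can be sketched with inotify-tools plus rsync; running a second copy of the loop with the paths swapped approximates two-way sync, though it has no conflict resolution. This assumes the inotify-tools package is installed, and /mnt/ram and /mnt/gluster are made-up mount points for the example:
        #!/bin/bash
        # Watch the RAM disk and push changes to the GlusterFS mount as they happen.
        SRC=/mnt/ram/
        DST=/mnt/gluster/

        inotifywait -m -r -e modify,create,delete,move --format '%w%f' "$SRC" |
        while read -r changed; do
            # "$changed" is the path that triggered the event; a full rsync of the
            # tree is the simple, safe option for a sketch like this.
            rsync -a --delete "$SRC" "$DST"
        done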

    Read the article

  • Macros in Excel 2010 hangs

    - by Ahmad
    I have a spreadsheet with several macros. Generally, when previously using Excel 2007, a user clicks a button and everything works as expected (calculations, some email sending & file I/O). Typically, the expected run-time is about 90 seconds. The spreadsheet is an xlsm file created with Excel 2007. With Excel 2010 however, the same user process results in a non-responsive Excel and forces us to kill Excel from the task manager. Some notes that I have gathered so far in trying to debug this issue:
        1. When monitoring CPU usage, it seems that Excel does start the macro. CPU usage increases as expected to about 47% for a few seconds. Excel.exe then drops to 0% usage and I now have a non-responsive Excel (even after 1 hour).
        2. If I set debug break points across modules and different functions and step through the code (after clicking the button), the process works as expected, albeit much slower. To add, there were no exceptions.
    I am at a complete loss as to what the issue may be. I initially thought it may be the add-in that is being used, but that was debunked by point 2. This seems to be a very odd situation. I can provide more information if required, but I'm at wits' end about what the root cause could be. I need help in diagnosing and resolving this issue.

    Read the article

  • Strange mysql problem moving website from Ubuntu server to Mac server

    - by evan
    I'm moving a website (php/mysql) from an Ubuntu server to a OSX 10.6 server. I've set up apache to run php scripts and setup the newest version of mysql on the mac. I just copied all of the php files and dumped/imported all of the mysql databases (including the mysql users database). When I visit the page being served on the Mac the page is able to connect to the database, but not query. Specifically this function mysql_error() is returning this error message NO SUCH FILE OR DIRECTORY The reason it's strange is that I'm able to change the php connection strings for mysql on the Ubuntu server so that it points to the Mac server and the page works correctly (so it seems mysql is correctly set up on the mac and definitely contains all of the users and tables it should). Thinking it was something to do with file permissions on the mac, I've changed all of the files 755 and it hasn't helped. Any ideas? Thanks!! UPDATE: I've found this error which I'm relatively certain is related to this problem in /var/log/apache2/error_log PHP Warning: mysql_query(): A link to the server could not be established
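
    For what it's worth, that particular mysql_error() message on OS X is often a socket-path mismatch rather than a permissions problem: the MySQL server creates its socket at /tmp/mysql.sock, while PHP builds on OS X have historically defaulted mysql.default_socket to /var/mysql/mysql.sock. A hedged check and workaround, assuming those default paths:
        # see where the running server actually put its socket
        mysql_config --socket

        # if PHP expects /var/mysql/mysql.sock but the server uses /tmp/mysql.sock,
        # either fix mysql.default_socket in php.ini or add a symlink:
        sudo mkdir -p /var/mysql
        sudo ln -s /tmp/mysql.sock /var/mysql/mysql.sock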

    Read the article

  • Symbolic link modification for HP unix

    - by kalpesh
    Hi David Zaslavsky, recently I was working on modifying the symbolic links for some particular files. While searching on the internet I saw your post, and I am trying to use this script which you had posted:
        find /home/user/public_html/qa/ -type l \
          -lname '/home/user/public_html/dev/*' -printf \
          'ln -nsf $(readlink %p|sed s/dev/qa/) $(echo %p|sed s/dev/qa/)\n' \
          script.sh
    So I tried to modify your script for my problem in an HP-UX environment, but it seems that the -lname option does not work on HP-UX. Do you know something equivalent that I can use? Just to give you an idea of my problem: I want to change all the symbolic links inside a particular folder.
        New symbolic link -- /base/testusr/scripts
        Old symbolic link -- /base/produsr/scripts
    Folder "A" contains more than 100 different files having soft links which point to this path -- /base/produsr/scripts. But what I want is for the files inside folder A to point to this soft link -- /base/testusr/scripts. I am trying to achieve this on HP-UX and would really appreciate your help on this...
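
    Since HP-UX find lacks -lname (and readlink may not be available either), one portable workaround is to list the links with find -type l, read the target out of ls -ld, and rewrite it with sed. A rough Bourne-shell sketch, reusing the paths from the post and a placeholder folder name; rm + ln -s replaces the non-portable ln -nsf:
        #!/bin/sh
        # Re-point every symlink under folder A from /base/produsr/scripts
        # to /base/testusr/scripts. No GNU extensions assumed.
        find /path/to/folderA -type l | while read link; do
            target=`ls -ld "$link" | sed 's/.* -> //'`
            case "$target" in
                /base/produsr/scripts*)
                    newtarget=`echo "$target" | sed 's|/base/produsr/scripts|/base/testusr/scripts|'`
                    rm -f "$link" && ln -s "$newtarget" "$link"
                    ;;
            esac
        done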

    Read the article

  • mod_rewrite not working for subdomain in Apache2

    - by Matt
    Hi, I'm having some trouble with mod_rewrite. So I'm implementing it through .htaccess, and I can get it working on my main vhost, domain.com - what I want it to do is rewrite http:// domain.com to force it to https:// domain.com, which it does well. I want to have name-based vhosts for the one IP with the following redirects (I'm breaking up domain names with a space because otherwise serverfault recognises them as links):
        http:// domain.com -- https:// domain.com
        http:// staging.domain.com -- https:// staging.domain.com
        http:// test.domain.com -- https:// test.domain.com
        http:// beta.domain.com -- https:// beta.domain.com
    domain.com redirects to https:// domain.com, but staging.domain.com doesn't, although I can access https:// staging.domain.com. The .htaccess is identical for both, just with the domain name different. It doesn't seem to do any rewriting at all for staging.domain.com; I've tested this by trying to get it to rewrite to www.google.com. I have a wildcard DNS record, *.domain.com, which points to the domain IP. Is there a particular way I should have the virtualhosts configured to allow this? I keep reading in the Apache documentation that it doesn't support multiple SSL name-based vhosts. But I can access both https:// domain.com and https:// staging.domain.com just fine. Any thoughts? Thanks to everyone for your help with this.
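
    Two quick diagnostics that often narrow this kind of thing down (generic Apache/OpenSSL commands, nothing specific to this setup): apachectl -S shows which vhost Apache maps each hostname to - if http:// staging.domain.com lands in the first/default vhost, it is that vhost's DocumentRoot and .htaccess being consulted rather than staging's - and openssl s_client shows which certificate (and therefore which SSL vhost) actually answers for the subdomain:
        # which virtual host does Apache map each name to?
        apachectl -S

        # which certificate answers the HTTPS request for the subdomain?
        openssl s_client -connect staging.domain.com:443 -servername staging.domain.com </dev/null 2>/dev/null \
            | openssl x509 -noout -subject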

    Read the article

  • How to verify the system is using the right GPU after a system reset [duplicate]

    - by Antoros
    This question already has an answer here: Is my mobile AMD card being used? (2 answers)
    OS: Windows 8
    CPU: Intel® Core™ i7 Processor 3635QM
    GPU 1: Intel HD Graphics 4000
    GPU 2: AMD Radeon™ HD 8870M
    Other info: System Specs
    Problem: I'm unsure that CCC is using the AMD card instead of Intel's. I have encountered several issues since updating to 8.1 and I don't know what to do.
    What happened:
        - Installed the 8.1 patch the first day.
        - After 1 minute of use, BSOD, and Windows never loaded again.
        - System restore wouldn't recognize 8.0 restore points.
        - I did a system reset to Windows 8 since the laptop was only 3 weeks old.
        - The system broke; it did restore to factory BUT kept the registry almost intact. I had to install almost everything again, since the factory drivers were working with the updated ones' registry, and there were several problems. CCC broke too.
    What I've already done:
        - Installing new drivers on top of the old ones didn't work, so I used the AMD uninstaller first.
        - Uninstalled and re-installed Intel's HD Graphics driver.
        - Tried to install the mobile center, but AMD told me that it wasn't compatible (even if that's the only driver they provide via their page, as seen Here).
        - Tried to use Auto-Detect; couldn't install the driver because the card was disabled because it didn't have the drivers... (see what they did here?)
        - Had to use a workaround with Samsung Update; the driver didn't appear as a download so I had to use search and downloaded the driver manually.
    Now the graphics card appears in Device Manager and Catalyst, but as "8800 series" (not the exact model), and I can't check the card with dxdiag/GPU-Z/HWMonitor. When right-clicking on CCC only the Intel card appears. Launching a game and using "high performance" would speed it up a little, but I can't be sure. How do I verify it's working properly? HWMonitor won't show the AMD card even when set to high performance. The latest GPU-Z won't work because of a problem with Intel's, and legacy ones won't either. What can I do now? I don't even know if I fixed my problem or not, and I also want to use Adobe Premiere with it, and it's locked (the option to run it with the AMD card, not Intel's). Edit: now it seems to work, but I can't change the setting for Adobe Premiere and other programs that I need to.

    Read the article

  • ubuntu preseed installation keeps missing mirror files

    - by JackWu
    I am installing Ubuntu 12.04.2 with a preseed file, but there is one buggy problem with the preseed mirror setting. The symptom here is that the installation process gets stuck. So I tracked down the log file and found the real problem: the installation is looking for a file that's not there. This is just one of them; another pops up if I fake this file. This all happened during preseed, so I believe preseed has something to do with it. I googled "ubuntu preseed mirror" and found this post saying:
        # If you select ftp, the mirror/country string does not need to be set.
        #d-i mirror/protocol string ftp
        d-i mirror/country string manual
        d-i mirror/http/hostname string archive.ubuntu.com
        d-i mirror/http/directory string /ubuntu
        d-i mirror/http/proxy string
        # Alternatively: by default, the installer uses CC.archive.ubuntu.com where
        # CC is the ISO-3166-2 code for the selected country. You can preseed this
        # so that it does so without asking.
        #d-i mirror/http/mirror select CC.archive.ubuntu.com
        # Suite to install.
        #d-i mirror/suite string lucid
        # Suite to use for loading installer components (optional).
        #d-i mirror/udeb/suite string lucid
        # Components to use for loading installer components (optional).
        #d-i mirror/udeb/components multiselect main, restricted
    I wonder about the difference between d-i mirror/http/hostname and d-i mirror/http/mirror; I mean, they both specify a mirror, right? In my preseed file, there is no d-i mirror/http/mirror, and d-i mirror/http/hostname points to my own repo, as you might notice in the previous image. Here is my question: does preseed fetch files/resources from the internet if I use a local repo? Why is it looking for a file that's not even there? This has bothered me for quite some time; many thanks in advance to anyone who might give any help.
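
    One quick sanity check before digging into d-i internals is to confirm, from another machine, that the local repo really serves the paths the installer asks for (the missing-file entries in the install log give the exact URLs). A hedged sketch - the hostname and suite below are obviously placeholders to replace with the values from the preseed file:
        MIRROR=http://repo.example.local/ubuntu
        SUITE=precise

        # The installer fetches Release and the package/udeb indexes early on;
        # a 404 on any of these will stall the install the way described above.
        for path in \
            dists/$SUITE/Release \
            dists/$SUITE/main/debian-installer/binary-amd64/Packages.gz \
            dists/$SUITE/main/binary-amd64/Packages.gz; do
            echo "== $MIRROR/$path"
            wget -q --spider "$MIRROR/$path" && echo "OK" || echo "MISSING"
        done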

    Read the article

  • Methods to transfer files from Windows server to linux server

    - by Raze2dust
    Hi, I need to periodically transfer webserver-log-like files from Windows production servers in the US to Linux servers here in India. The files are ~4 MB in size each and I get about 1 file per minute. I can take about 5 mins lag between the files getting written in Windows and them being available on the Linux machines. I am a bit confused between the various options here as I am quite inexperienced in such design:
        1. I am thinking of writing a service in C#.NET which will periodically archive, compress and send them over to the Linux machines. These files are pretty compressible. WinRAR can convert 32 MB of these files into a 1.2 MB archive. So that should solve the network transfer speed issue. But then how exactly do I transfer files to Linux? I could mount a Linux drive on the Windows server using Samba, or I could create an FTP server, or send the file serialized as a POST request. Which one would be good? Also, I have to minimize the load on the Windows server.
        2. Mount the Windows drive on Linux instead. I could use the mount command or I could use Samba here (what are the pros and cons of these two?). I can then write the compressing and copying part in Linux itself.
    I don't trust the internet connection to be very stable, so there should be a good retry mechanism and failure protection too. What are the potential gotchas in these situations, and other points that I must be worried about? Thanks, Hari
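
    On the pull-from-Linux side, a minimal sketch of option 2: mount the Windows share with CIFS and copy with rsync, which already gives partial-transfer resume, and the loop is the retry mechanism. Share name, credentials and paths are placeholders. One caveat: with a CIFS mount the copy itself crosses the WAN uncompressed, so if bandwidth is the real constraint, the compress-then-send approach in option 1 is still the better fit.
        #!/bin/bash
        # One-time mount of the Windows log share (put in /etc/fstab for real use).
        mount -t cifs //winserver/logs /mnt/winlogs -o username=loguser,password=secret,ro

        # Pull new/changed files once a minute; --partial resumes interrupted copies.
        while true; do
            rsync -a --partial --timeout=60 /mnt/winlogs/ /data/logs/ \
                || echo "$(date) rsync failed, retrying next cycle" >> /var/log/logsync.log
            sleep 60
        done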

    Read the article

  • IP to IP forwarding with iptables [centos]

    - by FunkyChicken
    I have 2 servers: server 1 with IP 1.1.1.1 and server 2 with IP 2.2.2.2. My domain example.com points to 1.1.1.1 at the moment, but very soon I'm going to switch to IP 2.2.2.2. I have already set up a low TTL for domain example.com, but some people will still hit the old IP after I change the IP address of the domain. Both machines run CentOS 5.8 with iptables and nginx as a webserver. I want to forward all traffic that still hits server 1.1.1.1 to 2.2.2.2 so there won't be any downtime. Now I found this tutorial: http://www.debuntu.org/how-to-redirecting-network-traffic-a-new-ip-using-iptables but I cannot seem to get it working. I have enabled IP forwarding:
        echo "1" > /proc/sys/net/ipv4/ip_forward
    After that I ran these 2 commands:
        /sbin/iptables -t nat -A PREROUTING -s 1.1.1.1 -p tcp --dport 80 -j DNAT --to-destination 2.2.2.2:80
        /sbin/iptables -t nat -A POSTROUTING -j MASQUERADE
    But when I load http://1.1.1.1 in my browser, I still get the pages hosted on 1.1.1.1 and not the content from 2.2.2.2. What am I doing wrong?
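
    For what it's worth, one thing that commonly trips this up is the -s 1.1.1.1 match: packets arriving at server 1 carry 1.1.1.1 as their destination, not their source, so that PREROUTING rule never matches. A sketch of the usual form (assumes a default-ACCEPT FORWARD policy; otherwise add an accept rule there too):
        echo 1 > /proc/sys/net/ipv4/ip_forward

        # match on the destination (or the incoming interface), not the source
        /sbin/iptables -t nat -A PREROUTING -d 1.1.1.1 -p tcp --dport 80 \
            -j DNAT --to-destination 2.2.2.2:80

        # make the connection look like it comes from server 1, so 2.2.2.2 replies
        # back through it instead of straight to the visitor
        /sbin/iptables -t nat -A POSTROUTING -d 2.2.2.2 -p tcp --dport 80 -j MASQUERADE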

    Read the article

  • How can I recover files when the folder shows empty but the files are not deleted?

    - by Borror0
    Yesterday, my laptop caught a virus which caused massive damage. Since then, I have been trying to recover important files before reformatting my computer, a task the virus has not made easy. Restore points predating the attack have been deleted. Most of my folders show empty. My Start menu is essentially empty, with the exception of Trillian and Mirror's Edge. The same goes for my Desktop, which only has programs which were installed after the attack. Searching for files through my computer is pretty much useless, as it only rarely brings up anything. I suspect most of my files have not been deleted. While my folders show empty, uTorrent still does display them and I can open them from there. Unfortunately, when I select Open Containing Folder, the folder still shows as completely empty even if I'm currently watching a video from that very folder. Further adding evidence to the not-deleted, just-missing theory, the data recovery software I'm using (Restoration) can only find a handful of the missing files. If they were deleted, I could do a forensic recovery to get them back, but since they're probably still somewhere on my computer, just out of my reach, I can't find them. Under those circumstances, is there a way I can recover those files?

    Read the article

  • Where did my backup files go? Can they be recovered?

    - by Ken
    I just purchased a Western Digital Essential SE 1TB external hard drive from Best Buy at their recommendation. I then exchanged it for a Toshiba Canvio (I think that was the name). I have a Toshiba Qosmio X505-Q898. The Canvio locked up my computer and rewrote some kind of OS file, and erased all the restore points as well as the system image backup (according to Best Buy) just by plugging it in for the first time. Never even got to the install part or anything -- plugged it in and fried my computer. They spent about an hour and a half on my computer and got it back to a somewhat working condition and gave me access to my files. So now they say I have to back it up using my recovery disk and rewriting my OS. Enter the Essential. Brought it home last night, plugged it in and installed everything. Works perfect, no problems. Backed up everything on it. I unplugged and re-plugged it twice to make sure that everything was on it. The Essential told me it had both the HDD and SSD backed up. So I reinstalled my OS. Plugged the Essential in and everything loads right up. Went to retrieve my files and the Western Digital has nothing on it. It shows all my music, pics, etc. as still being on my computer and needing to be backed up, but since there are no files on my computer now. Where is this information coming from and where did my files go? It's about 810GB worth of files I've amassed over several years. Is there any way to recover data from this? I plan to contact Western Digital and Best Buy, just thought I would check here too. Any advice will be appreciated as a lot of these files are invaluable to me.

    Read the article

  • Boot drive not found issue after cloning using Apricorn EZgig

    - by TomWilsonFL
    A couple days ago I cloned a drive for someone using the EZgig software. Usually this goes without a hitch, but this particular drive I was cloning is quite old. When I restarted with the new drive I received the typical bootable disk not found message, so I turned it off, messed with the BIOS, restarted and it came up fine. That night I was working remotely on the computer and had to restart it. It didn't come back up; not a good sign. When the user came to the computer in the morning it was giving the same message. I have found that to make the computer boot, all I have to do is go into the BIOS and "Load Defaults", then restart. It will boot and runs great. Any thoughts on what is causing this situation? Is it MBR corruption? Are some settings being saved in the CMOS? A couple points of mention: I have already attempted looking for a BIOS update for the computer, but the newest is already installed (from 2003). When the computer reboots it either shows "None" for Primary Master, or sometimes it will just not show anything. Thanks, Tom

    Read the article

  • I want to replace a number with successive numbers, starting at 1 and going up.

    - by Pär-Magnus Green
    I'm aware that this is something people can easily program, but I'm not familiar with those things, so I was hoping there was an easier solution. I've edited a large database of questions, including sorting, and ended up getting all the question IDs mixed up. I'm trying to do this in Notepad++, but any other suggestions are welcome. I'm wondering if there's a way to search for a string (e.g. "Question[any number]") and replace the numbers with first 1, then 2, etc. For clarification, this is how one question looks:
        TriviaBot_Questions[3]['Question'][63] = "Acronyms: What does WTT stand for?";
        TriviaBot_Questions[3]['Answers'][63] = {"Want To Trade"};
        TriviaBot_Questions[3]['Category'][63] = 7;
        TriviaBot_Questions[3]['Points'][63] = 1;
        TriviaBot_Questions[3]['Hints'][63] = {};
    The next question, as it is now, is not numbered 64. It might be 245 or 1029, and regardless of which number it is, I want it replaced with "previous number + 1", basically.
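
    If Notepad++ alone can't do the incrementing, a small script is the usual fallback. A sketch in awk, assuming every question is a block of five lines that all carry the same trailing [n] index, exactly as in the sample above (file names are placeholders):
        awk '
            /\[.Question.\]/ { n++ }                  # a new question block starts here
            { sub(/\]\[[0-9]+\] =/, "][" n "] =") }   # rewrite the trailing index on every line
            { print }
        ' questions.txt > renumbered.txt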

    Read the article

  • VMWare use of Gratuitous ARP REPLY

    - by trs80
    I have an ESXi cluster that hosts several Windows Server VMs and around 30 Windows workstation VMs. Packet captures show a high number of ARP replies of the form:
        - sender_ip: VM IP
        - sender_mac: VM virtual MAC
        - target_ip: 0.0.0.0
        - target_mac: switch interface MAC
    The specific addresses aren't really a concern -- they're all legitimate and we're not having any problems with communications (most of the questions surrounding GARP and VMWare have to do with ping issues, a problem we don't have). I'm looking for an explanation of the traffic pattern in an environment that functions as expected. So the question is: why would I see a high number of unsolicited ARP replies? Is this a mechanism VMWare uses for some purpose? What is it? Is there an alternative?
    EDIT: Quick diagram:
        [esxi]--[switch vlan]--[inline IDS]--[fw]--(rest of network)
    The IDS is complaining about these unsolicited ARPs. Several IDS vendors trigger on ARP replies without a prior request, or on ARP replies that have a target IP of 0.0.0.0. The target MAC in these replies is the VLAN interface on the switch. Capture points:
        - The IDS grabs the offending packets.
        - The FW can see the same ones.
        - A VM on the ESXi host does not see these, although there is an ARP request for a specific IP on the ESXi host that has source_ip=0.0.0.0 and source_mac=[switch vlan interface].
    I can't share the captures, unfortunately. Really I'm interested in finding out if this is normal for an ESXi deployment.

    Read the article

  • restrict access to IIS virtual directory from root website

    - by senthilkumar-c
    Hi, I have two domains (domain1.com and domain2.com). Both of them use the same Windows hosting service with IIS7. One of the domains is being called the "primary domain" by my hosting provider and it always points to the root folder that I was given. For the other domain, I have created a virtual directory in IIS and pointed it there. The folder structure is like this:
        root
        --Default.aspx
        --domain2folder
        ----Default.aspx
    So, if I type domain1.com, I see the regular Default.aspx. But if I type domain2.com, I am shown the contents of domain2folder as if it were a separate web application - I think that is what an IIS virtual directory is meant for. Well and good. But the problem is, when I type http://domain1.com/domain2folder/, I see domain2's website! I don't want that to be shown when I use the path like that from domain1; users should only be able to see those contents if they use domain2.com. How can I do that? Hope I am making sense. Thanks.

    Read the article

  • XP - ping changes routing table?

    - by Corelgott
    Hey folks, I have got a really strange behaviour with one of my XP SP3 machines. Setup:
        - A server in the LAN (192.168.5.0) provides access to all roadwarriors in 10.8.0.0.
        - The DHCP server has a static route for all clients announcing 192.168.5.235 as the gateway for 10.8.0.0.
        - All clients can ping & access the VPN machines; everything works like a charm.
    But one XP SP3 machine is not willing to connect to them. It gets all the same routes as any other system in the LAN, and I triple-checked - there are no static routes on this machine. When I ping any 10.8.0.0 device from this machine, the first two packets work like a charm, but the next two (and any packet after them) fail and get lost. When I look back into the routing table, there is a new route; a special one just for the device I pinged, which points to the right gateway - but which wasn't there earlier... As long as this route exists the machine can't ping anything on 10.8.0.0. But if I remove the route by hand, the next two ping packets work fine... Has anybody got an idea about that? Anybody ever seen such behaviour? Any hint / help / tip is greatly appreciated! thx in advance Corelgott. PS: I attach an image of the cmd to clarify things - it's in German, but reading a routing table shouldn't be that hard...

    Read the article

  • Postfix auto create Maildir

    - by Eugene
    I've been beating my head against a wall for a while now on this one. Basically, here is the rundown: Our MX record points to a frontend SMTP server, which contains aliases for actually routing the mail. No alias, no access to the backend storage server, which is what our clients connect to. I'm upgrading the backend email server. Currently, a user is created for every email user on the server, which creates the mailbox. On the new server, everything authenticates through PAM to an LDAP server (all of which is working properly). My goal is to get Postfix to create the Maildir directory for the user automatically. This works fine when I have the /home directory with 777 permissions, but for obvious reasons, this should be avoided. I would like to do this with 775 permissions on /home with a group owner of whatever user Postfix is running as, but I can't seem to figure out what user to use. With the 777 permissions, the /home/$user/Maildir directory is created on message delivery. Does anybody know how I can do this without 777 permissions? The system I am working on is a 64-bit Debian Lenny 5.07 install. Any advice would be appreciated.

    Read the article

  • Remote network traffic not passing through VPN

    - by John Virgolino
    We have the following topology:
        LAN A                LAN B                                            LAN C
        10.14.0.0/16 <-VPN-> 10.18.0.0/16 --- SONICWALL <-VPN-> M0N0WALL --- 10.32.0.0/16
    Traffic between LAN A and LAN B works perfectly. Traffic between LAN C and LAN B works perfectly. Traffic between LAN A and LAN C, not so much. LAN A's gateway has a route to LAN C that points to the Sonicwall. The Sonicwall has a route to LAN A pointing to the VPN gateway connecting LAN B to LAN A. Tracing packets on the Sonicwall shows the LAN C destined traffic arriving on the Sonicwall, but it does not forward the traffic; it dies there. Traffic from LAN B gets forwarded. Tracing packets on the Sonicwall while sending traffic from LAN C destined for LAN A shows nothing. This tells me that the M0N0WALL is not forwarding traffic for the 10.14.0.0 network and the Sonicwall is not forwarding from 10.14.0.0. The SA on the Sonicwall terminates on the WAN ZONE and is defined to use an address group that incorporates both the 10.14.0.0 and 10.18.0.0 networks. The M0N0WALL is configured for the 10.18.0.0 network and I have tried with both a static route to 10.14.0.0 and without one on the M0N0WALL. I tried manually adding the 10.14.0.0 network to the SA on the M0N0WALL, but that really aggravated it and the SA never came up, so I reverted. I have checked all the firewall rules to make sure nothing is blocked. All of the Sonicwall auto-added rules look right.
    Specs:
        Sonicwall TZ200, Enhanced OS
        M0N0WALL v1.32
    I'm at a loss at this point. Any help would be appreciated.

    Read the article

  • ASP.Net Session Timing Out Rapidly

    - by Zac
    We have an ASP.Net 3.5 website running on Windows Server 2008 with IIS7. The session timeout period for this site is configured to be 20 minutes - however, it is currently lasting for between 40 and 50 seconds. After researching the problem we investigated several configuration values which could be involved in the timeout period, but none of them are set to less than 20 minutes. The areas we looked at are as follows:
        - web.config system.web/sessionState element (20 minutes).
        - web.config system.web/authentication/forms element (not present, defaults to 30 minutes).
        - Sites/{website}/ASP/Session Properties/Time-out (20 minutes).
        - Application Pools/{appPool}/Advanced Settings/Process Model/Idle Time-out (20 minutes).
    We've also noted that the CPU is staying around 0% and that RAM usage is flat-lining around 1.07 GB (of 8 GB available) - so there is no performance-based reason for IIS to be recycling the Application Pool as far as we can tell. Are there any settings we've overlooked which could cause the session timeouts to be expiring so quickly? EDIT: A couple of additional points:
        - This is not occurring in development, only on the server.
        - The session is not sliding (i.e. if we refresh the page a few times it still times out approximately 40 - 50 seconds after the session was created).

    Read the article
