Search Results

Search found 22308 results on 893 pages for 'floating point'.


  • Can't access SATA card config screen on boot, nor access the disks

    - by Ronald
    We've just upgraded our file server using an ASUS P6T WS Pro board, running FreeBSD 8.2-RELEASE and using ZFS to manage 12 WD20EARS disks. Since our 3ware card has been giving us trouble we started using the six on-board SATA connectors and got a SuperMicro USAS2-L8i to provide eight more ports. Mechanically, the card is an awkward fit but electrically it all seems OK.

    Upon boot, the LSI controller shows up and states that pressing Ctrl-C will bring up the LSI Config Utility. When doing that, the message changes to state that the utility will be started after initialization; however, that never happens. There does seem to be an error message that's displayed too briefly to read and seems to be about PCI and "not enough space". (That message is pushed off by a hardware summary and I've found no way to scroll back at this point.) The disks do not show up in any recognizable way after booting, either.

    I found a hint in another discussion to check the address mapping on either the card or the motherboard BIOS, but have found no way to do that. So what I tried on a hunch is to disable everything that's on-board, including network adapters, FireWire controller and SATA. In fact, after doing that, I can successfully launch the LSI Config Utility. As far as I can tell, all looks well in there, and when booting in that configuration it also displays a list of the disks connected to it, which looks just fine as well.

    The only problem now is that I can't boot that way, because I need the on-board SATA controller and network adapters. As soon as I re-enable any of them I'm back to square one. That discussion I mentioned about mapping addresses said to try D000, then D7FF, then DFFF, in order. The LSI Config Utility shows the card address as D000 but offers no way of changing it. Any tips or insights would be appreciated.

    Read the article

  • Weird .#filename files on remote ssh-connected systems after mcedit

    - by etranger
    I'm using MacFusion sshfs in combination with Midnight Commander, and when I edit remote text files with mcedit, weird symlinks are created on the remote system:

        $ ls -l .*
        lrwxr-xr-x 1 user group 34 Jun 27 01:54 .#filename.txt -> [email protected]

    where etranger is my local login name, and mbp is the hostname of my notebook running macOS. The symlinks can be removed by running rm on the remote side, but cannot be deleted on the MacFUSE-mounted volume and thus pollute the filesystem. I cannot figure out what part of the software is responsible for this, or how I could fix it; any help is appreciated.

    EDIT: This appears to be mcedit behaviour as documented here: https://dev.openwrt.org/ticket/8245 Apparently, sshfs fails to remove the symlink to the lock file for some reason (the ".#" in the filename, perhaps), and it pollutes the filesystem. A quick workaround is possible, using another bug of Midnight Commander: editing (F4) the broken symlink effectively converts it to the missing lock file it was supposed to point to, and removes the symlink itself. The newly created file may then be deleted normally.

    EDIT 2: Unchecking "Follow symlink" in MacFusion apparently allows sshfs to remove dead symlinks, so the problem disappears completely.
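
    As a hedged sketch of the cleanup step mentioned above (removing the dead lock-file symlinks on the remote side over plain ssh, bypassing the sshfs mount), something like the following should work; the host name and directory are placeholders:

        # remove dangling ".#*" lock-file symlinks left behind by mcedit
        # (host and directory are placeholders; run against the real remote path)
        ssh user@remotehost 'find ~/docs -maxdepth 1 -name ".#*" -type l -delete'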

    Read the article

  • Simple P2V help from Linux to Windows

    - by Ke
    Hi, I have two OSes installed on different drives in my PC: one Linux (CentOS 5.4) and one Windows 7. It's getting tiresome to constantly have to stop and restart the PC when I want to use the other OS. I would very much like to use Windows 7 as my host OS and access my Linux OS from within Windows. However, I'm having trouble deciphering exactly how to do this (many of the articles seem confusing and a bit overkill).

    From what I have seen, it's possible to use VMware Converter to convert the physical Linux installation to a virtual image so that I can use it in Windows. As I'm having problems understanding how this is done, I would really appreciate a step-by-step guide (for a newbie), or any simple tutorials that you can point me at.

    Some questions beforehand:

    1) My Linux image is around 80 GB; do I need to take this into consideration? The Linux drive is around 180 GB in total. All my other drives are NTFS and not writable in Linux (as I use them in Windows and NTFS support is dodgy in Linux), so it's probably not possible to move the image over to my NTFS drives.

    2) Can I just zip the Linux files up somehow and transfer them to Windows to create the P2V image?

    3) Is it possible to do the P2V conversion while I am logged into Windows? I can see the actual Linux drive in Disk Management, but Windows doesn't read Linux file systems, so I'm confused as to how to access the Linux drive if this is possible.

    4) Or will I need to do the whole P2V conversion inside Linux?

    Cheers, any help is much appreciated. Ke (a confused P2V newbie)
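
    Not the VMware Converter route the question asks about, but as one hedged alternative sketch: boot the machine from a Linux live CD, copy the raw CentOS disk to an image file on another drive, and convert that image to a virtual disk on the Windows side (VirtualBox shown here as an example). The device name and paths are assumptions and must be verified first:

        # identify the CentOS disk first (e.g. with: fdisk -l); /dev/sdb is an assumption
        dd if=/dev/sdb of=/mnt/usbdisk/centos.raw bs=4M conv=sync,noerror

        # on the Windows side, convert the raw image to a VirtualBox disk
        VBoxManage convertfromraw centos.raw centos.vdi --format VDI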

    Read the article

  • I keep losing wireless connection

    - by posfan12
    I have a WRT54GL v1.1 wireless router and a WUSB54G v4 wireless adapter, both made by Linksys. The router is in the living room by the TV and my computer is in the bedroom. My ISP is Brighthouse.

    Operating System: Microsoft Windows 7 Home Premium 64-bit SP1
    CPU: Intel Core 2 Duo E6600 @ 2.40GHz, 36 °C (Conroe, 65nm)
    RAM: 3.00GB Single-Channel DDR2 @ 333MHz (5-4-4-14)
    Motherboard: eMachines EMCP73VT-PM (CPU 1), 26 °C
    Graphics: ASUS VS247 (1920x1080@60Hz), 767MB GeForce GTX 460 (nVidia), 43 °C
    Hard Drives: 466GB Seagate ST350041 8AS SCSI Disk Device (SATA), 35 °C
    Optical Drives: HL-DT-ST DVDRAM GH41N SCSI CdRom Device
    Audio: High Definition Audio Device

    The problem is that my Internet connection will work fine for 15 minutes or so. Then the data will just stop flowing. Windows says I am still connected, and the systray icon still shows five bars. But Comodo Firewall stops showing up and down traffic, and another of my systray applications complains about a lack of connection. What I usually do is either disconnect from the network manually, or unplug and re-plug the USB adapter, at which point the connection works properly for another 15 minutes.

    I've tried unplugging my router for 30 seconds and letting it reboot. I've also tried looking for a newer driver for my adapter, but I seem to have the latest version, 3.1.3.0. This is a recent problem starting about a week ago; for the previous several months things were working just fine. I haven't made any changes to my system that I am aware of. The only thing I did was open my case to blow the dust out of it, then put everything back together. How do I fix this issue?
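
    As a small hedged convenience for the unplug/re-plug workaround described above: the adapter can usually be bounced from an elevated command prompt instead of being physically reseated. The interface name below is an assumption; the real one can be listed with netsh interface show interface:

        rem list interface names first:  netsh interface show interface
        netsh interface set interface "Wireless Network Connection" admin=disabled
        netsh interface set interface "Wireless Network Connection" admin=enabled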

    Read the article

  • Can I edit the snapshots of websites on most visited sites page in Chrome?

    - by arik
    I saw the previous chain of discussion but did not find a way to continue the thread. I have found the 'Preferences' file (I have Windows 7), but in it I did not find clearly what to modify. I did find a section called 'URLs pinned' or something like this, but it did NOT fully match the ones I have. I have activated profile sync for Chrome; I don't know if that has any effect.

    Can you manually edit the icons of the most visited sites on the new tab page in Chrome? When I open a new tab in Chrome, I get the new tab page where I selected 'most popular', so I have 8 icons with the 8 most popular sites I visited. I can also pin any one of those I see on screen, so that they remain permanently there. We are missing one which would point to, say, Hotmail, so I was looking for a way to add Hotmail as one of the 8. (And, by the way, we clicked the 'X' on one of those 8, and now it remains grey and shows nothing in it.)

    So, my double question: How can I add a URL of my choice into one of those 8 spaces? And how can I restore use of the one we removed?

    Read the article

  • Where is my free space?

    - by Andrey
    A week ago I got a low disk space warning on my Vista x64 Ultimate box: 60 MB free on drive C. I cleaned up some downloaded MSDN images and freed up about 20 GB. Three days ago I got another notification; it looked suspicious, but I didn't have time to deal with it and just moved some heavy stuff to another drive to free up about 17 GB. This morning: 53 MB left on drive C, again!

    Now it looks really suspicious, so I downloaded TreeSize to see what's taking up the space, only to see it reporting just 121 GB out of 200 GB used; in other words I'm supposed to have about 79 GB free. Then I went to Folder Options, enabled viewing of system and hidden files, and re-ran the tool to see another 5 GB added (which is expected). Then I opened drive C in Windows Explorer, selected all, right-clicked Properties, and it reports the same amount for the files: 126 GB. But when I look at the drive C properties, it reports that 200 GB of 200 GB are taken.

    I just scanned the drive with two different antiviruses, Symantec and AVG, and found no viruses. I'm a little confused at this point; any ideas about where my free space went would be highly appreciated! Thank you! Andrey

    Read the article

  • Opening and Testing Ports on Modem > Router Connection

    - by JakeTheSnake
    Working off of my last question, I can access my server's FTP over the LAN but not over the Internet. I'm using FileZilla on port 666. My router/modem configuration is as follows (similar to the other post):

    1) Modem connects to WAN
    2) WAN port on modem connects to LAN port on router
    3) Modem internal IP address is 192.168.0.254
    4) Router internal IP address is 192.168.0.1
    5) Modem has DHCP turned OFF
    6) Router has DHCP turned ON
    7) Router is running Tomato firmware and is set as 'Router' (not 'Gateway')
    8) The internet is working (just had to say that)

    I've set up port forwarding both on the modem and the router: both route port 666 (TCP) to 192.168.0.3, which is the IP address of the server running FileZilla. I don't know if that's hindering anything, but I've also tried it with just the modem and just the router; same result. I've also tried setting the server as the DMZ host (both on router and modem). Neither the router nor the modem has anything in its logs about denying inbound traffic on port 666, so my ability to troubleshoot stops there.

    I've tried contacting my ISP (Telus, running on a mobility plan; it's a "Smart" Hub), but they weren't much help. They said they only block ports 25 and 80 and maybe a few others, but not most ports. I test whether or not the port is open by going to canyouseeme.org. I don't know whether that would produce a 'connection refused' result just because the FTP server requires a login; I'm not well versed in this matter. FWIW, sometimes I get a 'connection refused' error on canyouseeme.org, but mostly it's 'connection timed out'. I don't know what else to do at this point.
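
    As a hedged troubleshooting sketch for the setup above: first confirm the FTP service is actually listening on the server, then test the forwarded port from a host outside the network. The TCP connection itself does not need a valid FTP login to succeed, so 'refused' or 'timed out' points at forwarding or filtering rather than FileZilla authentication. The external address below is a placeholder:

        rem on the server (192.168.0.3): confirm FileZilla is listening on port 666
        netstat -an | findstr :666

        # from a host outside the LAN (Linux or macOS shown; external IP is a placeholder)
        nc -vz 203.0.113.10 666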

    Read the article

  • Ubuntu 11.10 VirtualBox Unity 3D not working

    - by naveen
    After struggling for four hours, I still cannot get Unity 3D (on top of GNOME 3) to work in my VirtualBox VM. I have been poring through Internet and forum posts, but to no avail. Here's what I've done so far:

    VirtualBox 4.1.4r74921 on Windows 7
    Installed Ubuntu Desktop 11.10 (32-bit)
    Enabled 3D acceleration
    Allocated 1.5GB of RAM
    Allocated 50MB video memory (hope this is not the culprit)
    Installed Guest Additions 4.1.4
    Did apt-get update and apt-get upgrade
    Booted back into Ubuntu; it falls back to Unity 2D
    Shared folders and mouse integration all work, so the Guest Additions are properly installed

    I tried the command below, and this is the output:

        /usr/lib/nux/unity_support_test -p
        OpenGL vendor string: Mesa Project
        OpenGL renderer string: Software Rasterizer
        OpenGL version string: 2.1 Mesa 7.11
        Not software rendered: no
        Not blacklisted: yes
        GLX fbconfig: yes
        GLX texture from pixmap: no
        GL npot or rect textures: yes
        GL vertex program: yes
        GL fragment program: yes
        GL vertex buffer object: yes
        GL framebuffer object: yes
        GL version is 1.4+: yes
        Unity 3D supported: no

    I am trying to find out what the "no" means but cannot find any good answers. The host has an Intel Core i5 processor, 4GB of RAM, and an NVIDIA GeForce 8400GS display adapter. Is anyone else facing the same problem? If so, can you point me to a solution or any reference where I can find one?

    Read the article

  • Issues with duplicating TFS 2005 to a virtual server

    - by Hitchhiker
    We have a problem with our current TFS installation. For some reason, which I won't get into, the SharePoint databases (sts_content, sts_config) were upgraded to WSS 3 (MOSS). So now none of our team project sites work, we have no access to our documents, and we can't create new team projects. We can still work with version control, though.

    We wanted to "play" with the server and try to fix it without affecting the users, so we used P2V for VMware to duplicate the server to a virtual one. We then changed all of the relevant configuration to point to the new server, as explained in the MSDN article. The step of rebuilding the warehouse (with setupwarehouse) failed. Also, we can't access the VersionControl web service (ourserver:8080/VersionControl/v1.0/repository.asmx). We are seeing these errors in the event log:

        TF53002: Unable to obtain registration data for application Build.
        TF53005: Unable to retrieve the Team Foundation Server installed UI culture.
        TF53002: Unable to obtain registration data for application VersionControl.
        TF30040: The database is not correctly configured. Contact your Team Foundation Server administrator.

    The solutions suggested in this blog post did not help, so now we're kind of stuck. Any assistance will be appreciated.

    Read the article

  • Way to speed up load-balanced SSL using nginx?

    - by paulnsorensen
    So the setup for our website is four nodes running Rails 3 and nginx 1 that all use the same GoDaddy certificate. Because we are a paid site, we have to maintain PCI-DSS compliance and thus have to use the more expensive SSL ciphers; we also force SSL using Rack. I've recently switched over to Linode's NodeBalancer (which I've read is an HA cluster), and we're not getting the performance we'd ideally like.

    From what I've read, it looks like terminating the SSL on the nodes using the expensive ciphers is what is causing the poor performance, but I'd like to be thorough. Is there anything I can do? I've read about other ways to terminate the SSL before the NodeBalancer (like using stud), but I don't know enough about these solutions. We certainly don't want to do anything experimental or anything that has a single point of failure.

    If there really isn't anything I can do to speed up the SSL handshake, my alternative would be to serve certain pages over a secure subdomain and others over an insecure one. I've found a few guides that walk through that, but my resulting question is: in this situation, would it be better to have nginx handle forcing SSL on the secure subdomain instead of Rails? Thanks!
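
    A hedged sketch of the nginx-side option mentioned at the end (forcing SSL for the secure subdomain in nginx rather than in Rack), plus an SSL session cache, which reuses handshakes and can take some pressure off the expensive ciphers. The server name and certificate paths are placeholders:

        # redirect plain HTTP for the secure subdomain to HTTPS
        server {
            listen 80;
            server_name secure.example.com;
            return 301 https://secure.example.com$request_uri;
        }
        server {
            listen 443 ssl;
            server_name secure.example.com;
            ssl_certificate     /etc/nginx/ssl/secure.example.com.crt;  # placeholder path
            ssl_certificate_key /etc/nginx/ssl/secure.example.com.key;  # placeholder path
            ssl_session_cache   shared:SSL:10m;   # reuse sessions to cut full handshakes
            ssl_session_timeout 10m;
            # proxy_pass to the Rails upstream goes here, as in the existing config
        }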

    Read the article

  • Logging hurts MySQL performance - but why?

    - by jimbo
    I'm quite surprised that I can't see an answer to this anywhere on the site already, nor in the MySQL documentation (section 5.2 seems to have logging otherwise well covered!) If I enable binlogs, I see a small performance hit (subjectively), which is to be expected with a little extra IO -- but when I enable a general query log, I see an enormous performance hit (double the time to run queries, or worse), way in excess of what I see with binlogs. Of course I'm now logging every SELECT as well as every UPDATE/INSERT, but, other daemons record their every request (Apache, Exim) without grinding to a halt. Am I just seeing the effects of being close to a performance "tipping point" when it comes to IO, or is there something fundamentally difficult about logging queries that causes this to happen? I'd love to be able to log all queries to make development easier, but I can't justify the kind of hardware it feels like we'd need to get performance back up with general query logging on. I do, of course, log slow queries, and there's negligible improvement in general usage if I disable this. (All of this is on Ubuntu 10.04 LTS, MySQLd 5.1.49, but research suggests this is a fairly universal issue)
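
    For reference, a hedged sketch of the my.cnf settings being compared above on MySQL 5.1; the file paths are placeholders:

        # my.cnf (MySQL 5.1) - logging options discussed above; paths are placeholders
        [mysqld]
        log-bin             = /var/log/mysql/mysql-bin.log   # binary log (small overhead)
        slow_query_log      = 1
        slow_query_log_file = /var/log/mysql/slow.log
        general_log         = 0                              # heavy; enable only while debugging
        general_log_file    = /var/log/mysql/general.log

    In 5.1 the general log can also be toggled at runtime with SET GLOBAL general_log = 'ON' / 'OFF', which avoids paying the cost permanently while still allowing short logging sessions during development.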

    Read the article

  • AWS Application - Subscription Payments Scheduled but Not Initializing

    - by nicorellius
    I briefly browsed the AWS forums, but they are not nearly as easy to use and efficient as Stack Exchange-derived ones, so here I am. The company I work for has an app on the AWS cloud. In general the app works well. However, since it is new, we haven't had many customers, and the ones who signed up are now coming to the point where their subscriptions to our service should be renewed. Here is my question. When I query the database for the payment status of certain subscriptions using the console, I get what I would expect:

        Instrument XYZ is on a Monthly subscription (subscription=xxxyyyzz).
        The subscription is active with an expiration date of 19 Apr 2010 10:43 Z
        The payment token will expire on 1 Dec 2012 12:00 Z
        There are xy runs remaining.
        The subscription fee of $XX.xx will be charged on 17 Apr 2010 10:43 Z

    OK, this is great. According to this, I would expect the next payment to be initiated on 17 Apr. The reason I am asking is that for a different user last month I got this same output, but the payment never went through, i.e. it was not initiated through Amazon Payments. The user didn't see the payment go through and neither did we. It should be noted that the initial payments were received: the sign-up process works, and if the user goes to "pay" from within our app, they are directed to https://payments.amazon.com to make their payment. Any ideas?

    Read the article

  • Changing IP address in IIS for SharePoint site results in Directory listing error

    - by Dan
    I have a server here that has two roles: one is Exchange 2007 and the other is MOSS 2007. In IIS I have a site, go.domain.com, which hosts our OWA. The other is internal.domain.com, which is the MOSS site. I have given the NIC local IPs and each site is using host headers. The GO site has an SSL cert from NetSol, and the MOSS site has a self-signed one. Right now going to either one shows the site with the NetSol cert, which browsers complain about when going to internal.domain.com; obviously, since they are on the same IP in IIS.

    Both sites have always run off the original IP of 10.0.0.3 in IIS. When I added the second IP (10.0.0.6) to the NIC and changed the SharePoint site in IIS to use it for HTTP and HTTPS access, I started getting this message in a browser when trying to connect:

        Directory Listing Denied
        This Virtual Directory does not allow contents to be listed.

    Changing the IP back to 10.0.0.3 brings the internal site back up. What am I missing here? Do I need to fool around with Alternate Access Mappings in Central Admin? Am I completely missing the point with multiple SSL certs and host headers?

    Read the article

  • What configuration changes can I make to speed up extremely slow Windows VMs in ESXi 4.0?

    - by Shawn Anderson
    I've recently moved from VMware Server to ESXi 4.0, running on a Dell T310. My VMs have been restored, but they are running dog slow compared to VMware Server. I loaded ESXi 4.0 using only default values. What are some areas where I can tweak the performance? Even logging onto the VMs can be extremely sluggish, and trying to install software on any of them is a new experience in pain.

    Dell PowerEdge T310
    Xeon X3460 2.80 GHz
    32 GB RAM
    1 HD (2 TB)

    I have 16 VMs on this server, but only six or so will be running during my testing. I keep an eye on the Resource Allocation and Performance tabs for the host and I never see CPU or RAM getting anywhere close to pegged. The Events tab does show some notices for video RAM issues and some hints about Windows activation issues, but nothing that would point to the sort of sluggishness that I'm experiencing.

    1 Windows Server 2008 R2 (64-bit) - 4 GB RAM
    1 Windows 7 (32-bit) - 2 GB RAM
    1 Vista (32-bit) - 1 GB RAM
    3 XP (32-bit) - 1 GB RAM

    Over to you! Thanks - Shawn

    Read the article

  • Windows Server 2008 network speed slow, Xen 3.4.3 HVM ISO

    - by Elliot.Bradshaw
    I've set up a VM running Windows Server 2008 on a host node running Xen 3.4.3-5 and the following kernel:

        2.6.18-308.1.1.el5xen #1 SMP Wed Mar 7 05:38:01 EST 2012 i686 i686 i386 GNU/Linux

    The network speed on the VM is very slow; using the online speed tests I can only get it up to 8-9 Mbps. The line is 100 Mbps burstable and the host node has no problem achieving those speeds. If I set up a VM running CentOS, it too has no problem achieving those speeds. I've done some pretty exhaustive troubleshooting, but nothing has helped:

    New VM installations of Win2k8 have the same network problem.
    Upgrading to the most recent kernel-xen did not help (2.6.18-308.1.1.el5xen).
    Upgrading from Xen 3.4.0 to Xen 3.4.3-5 did not help.
    Disabling the Windows firewall, etc. did not help.
    Changing the network card config from auto-negotiation to manual 100 Mbps full duplex did not help.
    Changing the network receive buffer packet size did not help (tried all combinations from 64k to 8k).

    At this point I'm pretty much out of ideas; any help would be appreciated!

    Read the article

  • Configuring DNS & MX records for exchange 2010

    - by Mahmoud Saleh
    I am trying to configure Exchange Server 2010 on Windows Server 2008 R2 to receive email from the Internet, following the danscourses tutorials. I followed this video for the DNS and MX records: http://www.youtube.com/watch?v=jdf_3DRssks

    I don't have any Windows administration skills, and I am stuck with the DNS configuration. The following is the domain configuration I got from the hosting provider, and these are the steps I made:

    1- Added a new name server, ns1.centors.com, with the Exchange server's public IP: 41.233.26.131
    2- Changed the A record to point to the public IP address 41.233.26.131
    3- Added a new CNAME record for www that resolves to centors.com
    4- Added a new MX record for mail.centors.com
    5- Added a new A record for mail.centors.com: name mail, IP 41.233.26.131
    6- Added a new A record for ns1: IP 41.233.26.131
    7- Set up port forwarding on the router for SMTP and POP3 to the Exchange server's local IP address.

    ISSUE: I have a user account in Active Directory, and the user is a member of the domain. The user is [email protected], and when trying to log in with this account in Outlook 2010 on another machine using the following settings:

    account type: POP3
    incoming mail server: mail.centors.com
    outgoing mail server: mail.centors.com

    I always get the error: "Authorization failed, check your server settings." Please advise what's wrong with the configuration; thanks in advance.
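
    To make the intended record set above concrete, here is a hedged sketch of roughly what the zone for centors.com would contain, written in standard zone-file notation (the hosting panel's syntax and TTLs will differ; the MX priority of 10 is an assumption):

        ; sketch of the records described above, zone centors.com
        centors.com.        IN  A      41.233.26.131
        www.centors.com.    IN  CNAME  centors.com.
        mail.centors.com.   IN  A      41.233.26.131
        ns1.centors.com.    IN  A      41.233.26.131
        centors.com.        IN  MX 10  mail.centors.com.

    Note that the MX record belongs to the domain itself (centors.com) and points at the mail host's A record, rather than being an MX "for" mail.centors.com.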

    Read the article

  • Port forwarding to put my web server on the Internet

    - by Chadworthington
    I went to http://canyouseeme.org/ to check my external IP address. Regardless of what port I enter, it tells me that the port is blocked. I have a Linksys router that basically has the default settings, with the exception that I have WEP encryption set up and I have forwarded a few ports, including 80 and 69. I forwarded them to the 192.x.x.103 IP address of the PC which is running IIS. That PC runs Symantec Endpoint Protection, which I right-clicked in the tray to disable.

    These steps used to make my PC visible so I could host my own web site in IIS on port 80, or some other port, like 69. Yet the open-port tool cannot see my IP when it checks either port, and when I navigate to http://<my external IP>/ I get "page can't be displayed". At first I was thinking that maybe Comcast is blocking port 80, but 69 doesn't work either. I do not see any other blocking set up in my router and, as I mentioned, I went with the defaults except where discussed.

    This is a corporate PC and Symantec Endpoint Protection is new to it (this previously worked on the same PC with Symantec Protection Agent), but I thought that disabling it from the tray would effectively neutralize it. I do not have the rights to kill the program itself. Any suggestions on what else to try to make my PC externally visible?
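
    A hedged local check that may help narrow things down before blaming the ISP: confirm IIS is listening on port 80 and is reachable from another machine on the same LAN; if that already fails, the endpoint protection or IIS binding is the problem rather than the router or Comcast:

        rem on the IIS machine: is anything listening on port 80?
        netstat -an | findstr :80

        rem from another PC on the same LAN (substitute the real .103 address)
        telnet 192.x.x.103 80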

    Read the article

  • Sophos UTM in Hyper-V

    - by TheD
    So, I had a previous thread about this: Virtualizing Firewalls/UTM. Essentially, I have configured what I think should work, but networking isn't my strong point!

    Two virtual adapters, with IP addresses 192.168.0.2 (External) and 192.168.0.3 (Internal) respectively. The External adapter looks at 192.168.0.1 (my ZyXEL) for its default gateway. The Internal adapter, 192.168.0.3, which is what the Sophos UTM listens on, has its default gateway set to 192.168.0.2, the IP of the External LAN interface. So:

        PC (192.168.0.11, DHCP) --> (LAN) --> Switch --> 192.168.0.3 (Internal LAN interface IP) --> Sophos UTM --> 192.168.0.2 (External LAN interface IP) --> 192.168.0.1 --> Internet

    Would this be the correct setup, or am I completely out of the game here? Cheers!

    Read the article

  • Poor gaming performance with Hyper-V installed in Windows 8

    - by SnowCrash
    I am getting very poor gaming performance on my Windows 8 host OS with Hyper-V installed but no guest machines running. For example, World of Tanks reports 60-70 FPS without Hyper-V installed and 4-14 FPS with it installed. A similarly dramatic hit is observed in several other games, so the issue is not WoT-specific. To make the point clear, I am not trying to run games in a virtual machine. I don't even have a VM running while observing this effect; I simply have the Hyper-V feature installed.

    My system specs:

    AMD Phenom II 965 (3.4 GHz)
    AMD Radeon 6950 2GB (XFX Double D HD-695X-CDFC)
    16GB DDR3 1333
    AMD 790GX chipset mainboard (Gigabyte GA-MA790GPT-UD3H)

    I have tried every AMD driver from 12.8 to the current 12.11 beta 8, virtualization is enabled in the BIOS settings, the onboard 3300HD video device is disabled in the BIOS, and I have read the MSDN blog entry here regarding a similar issue in Server 2008 that was resolved in 2008 R2 (and hopefully has not regressed in Windows 8).

    I'd like to be able to use Hyper-V for development and testing at home (I am a sysadmin/software developer professionally). If, however, I can't also use my home system for entertainment, I'll have to scrap those plans.

    Read the article

  • Sharing disk volumes across OpenVZ guests to reduce Package Management Overhead

    - by andyortlieb
    Is it feasible to create a single "master" OpenVZ guest that would only be used for package management, and use something like mount --bind on several other OpenVZ guests to sort of trick them into using the environment installed by the master guest?

    The point of this would be so that users can maintain their own containers and yet stay in sync with the master development environment, so they'll always have the latest and greatest requirements without worrying too much about system administration. If they need to install their own packages, they could put them in /opt or /usr/local (or set a path to their home directory).

    To rephrase: I would like several (developers', for example) OpenVZ guests whose /bin, /usr (and so on) actually refer to the same disk location as that of a master OpenVZ guest, which can be started up to install and update common packages for the environment shared by this group of OpenVZ guests. For what it's worth, we're running Debian 6.

    Edit: I have tried mounting (bind, and read-only) /bin, /lib, /sbin and /usr in this fashion, and it refuses to start the containers, stating that files are already mounted or otherwise in use:

        Starting container ...
        vzquota : (error) Quota on syscall for id 1102: Device or resource busy
        vzquota : (error) Possible reasons:
        vzquota : (error)  - Container's root is already mounted
        vzquota : (error)  - there are opened files inside Container's private area
        vzquota : (error)  - your current working directory is inside Container's private area
        vzquota : (error) Currently used file(s):
        /var/lib/vz/private/1102/sbin
        /var/lib/vz/private/1102/usr
        /var/lib/vz/private/1102/lib
        /var/lib/vz/private/1102/bin
        vzquota on failed [3]

    If I unmount these four volumes, start the guest, and then mount them after the guest has started, the guest never sees them mounted.
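
    As a hedged sketch of one commonly described approach (not verified against this exact setup): OpenVZ runs a per-container action script, /etc/vz/conf/<CTID>.mount, after the container's root is mounted but before it starts, and bind mounts made there land inside the container's root rather than its private area, which sidesteps the vzquota complaint above. The container IDs and the master guest's path below are assumptions:

        #!/bin/bash
        # /etc/vz/conf/1102.mount  -- hypothetical per-container mount action script
        source /etc/vz/vz.conf        # global defaults
        source ${VE_CONFFILE}         # provides VE_ROOT for this container
        # bind the master guest's directories into this container (master path is an assumption)
        MASTER=/var/lib/vz/root/1000
        for d in bin sbin lib usr; do
            mount -n --bind ${MASTER}/${d} ${VE_ROOT}/${d}
        done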

    Read the article

  • IIS running but not serving content

    - by Kyle
    I have an internal dev server running Windows Server 2008 R2 with the Web and FTP Server roles set up, which won't serve any content at all. Trying to connect from another host via telnet yields 'connect failed':

        c:\>telnet devserver 80
        Connecting To devserver...Could not open connection to the host, on port 80: Connect failed

    Using netstat -an | find "80" on the dev server returns no connections on port 80 (a few on 1801, etc.); TCPView confirms this, listing no open connections on port 80. The following services related to the Web role are running:

    World Wide Web Publishing Service
    Application Host Helper Service
    Microsoft FTP Service (FTP connections to port 21 are granted)
    Windows Process Activation Service

    The default website bindings are:

        Type             Host Name   Port   IP Address   Binding Information
        http                         80     *
        net.tcp                                           808:*
        net.pipe                                          *
        net.msmq                                          localhost
        msmq.formatname                                   localhost

    When setting up a new application under the default site, the test function passes both connection and authorisation only if the 'connect as' user is a local admin; otherwise the test errors with 'invalid application path'. At no point is the W3SVC service PID bound to port 80 (it is running and bound to 21 for FTP). There is no W3SVC log directory at c:\inetpub\logs\LogFiles\ (only FTPSVC2), and no HTTPERR directory at c:\windows\system32\ or c:\windows\system32\logfiles\. There do not appear to be any related errors in the event logs. I'd really appreciate any thoughts on a good place to dig into what's (not) going on here!
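
    A hedged set of checks that would show whether HTTP.sys ever registers port 80 on this box; these are standard Server 2008 R2 tools, and the site name below is the default one:

        rem which addresses HTTP.sys is configured to listen on (an empty list means all)
        netsh http show iplisten

        rem what HTTP.sys has actually registered (URL reservations per site/app pool)
        netsh http show servicestate

        rem site state and bindings as IIS sees them
        %windir%\system32\inetsrv\appcmd list site "Default Web Site"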

    Read the article

  • Redundant Microsoft server solution for small company

    - by MadBoy
    I'm planning to replace one Microsoft SBS 2003 server running SharePoint, Exchange and a SQL database with something that will provide some redundancy and won't be a single point of failure. I was thinking of buying two identical physical servers and putting two virtualized servers on Hyper-V or VMware on each. I would then put SharePoint, Exchange and SQL on the first physical server (split across its two VMs). I would like the second physical server to be an exact duplicate of the first one, so that when the first server goes down (for a reboot or a hardware failure), the second takes care of everything and users don't even notice a change (all their email and SharePoint content stays available).

    My questions are:

    Will I have to pay for licenses for both servers even though only one instance of SharePoint, Exchange and SQL will be used at the same time?
    What are the proposed solutions to do this? What additional hardware would I need, and what complicated software configuration should I expect, to achieve such redundancy so that when one physical server goes down the second one takes over?
    What problems should I expect?

    This solution is for 60 people. Later on it may or may not expand.

    Read the article

  • Alternate way to connect a VPN through a MiFi

    - by questor
    This has become a major problem at our company, and depending on who I ask, the problem either does not really exist (according to the manufacturer and vendor) or is insoluble (according to most users, including techs who know how to prove their point). The problem involves getting a normal Windows 7 system to connect to a normal Server 2008 R2 server over a cellular router (usually called a MiFi). A very few brands/models appear to work, but the majority cannot make the connection.

    Since it is a cellular device, there are many variables that come into play, and I wondered if anyone had ever found a consistent way to either make one work or else prove to the providers that their equipment is at fault. They all specifically state "VPN use" in the sales brochures, but few if any work, and those that do are not reliable.

    From a standpoint of pure knowledge, I just wondered if anyone knows the real reason why they fail. PPTP, L2TP or IPsec, it doesn't matter. I have not tried Shrew or OpenVPN and am using strictly Microsoft Windows protocols. Plenty of Google searches back up my complaints, but none seem to be any closer to knowing why they fail, just that they do. This is a quest-for-knowledge question; I don't expect a solution, just a reason for the problem if anyone has any ideas.

    Read the article

  • Googlebot repeatedly looks for files that aren't on my server

    - by John at CashCommons
    I'm hosting a site for a volunteer organization. I've moved the site to WordPress, but it wasn't always that way; I suspect at one point it was hacked badly. My Apache error log file has grown to 122 kB in just the past 18 hours. The large majority of the errors logged are of this form, repeated hundreds of times today alone:

        [Mon Nov 12 18:29:27 2012] [error] [client xx.xxx.xx.xxx] File does not exist: /home/*******/public_html/*******.org/calendar.php
        [Mon Nov 12 18:29:27 2012] [error] [client xx.xxx.xx.xxx] File does not exist: /home/*******/public_html/*******.org/404.shtml

    (I verified that xx.xxx.xx.xxx was a Google server.) I suspect there was a security hole somewhere before, likely in calendar.php, that was exploited. The files don't exist anymore, but there may be many backlinks that still reference them, hence Googlebot's interest in crawling them. How do I fix this gracefully? I still would like Google to index the site; I just want to tell it somehow not to look for these files anymore.
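
    One hedged way to say "this URL is gone for good" so Googlebot eventually drops it: return 410 Gone for the old path, either in the vhost or in the site's .htaccess (mod_alias syntax shown; the path matches the log entries above):

        # .htaccess or vhost config: tell crawlers calendar.php is permanently gone
        Redirect gone /calendar.php

    A plain 404 works too, but 410 is a stronger signal and Google tends to stop retrying such URLs sooner, while the rest of the site stays indexable.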

    Read the article

  • How to configure TFTPD32 to ignore non PXE DHCP requests?

    - by Ingmar Hupp
    I want to give our Windows guy a way of easily PXE-booting machines for deployment by plugging his laptop into one of our site networks. I've set up a TFTPD32 configuration which does just that, and our normal DHCP server ignores the PXE DHCP requests due to them having some magic flag, so this part works as desired.

    However, I'm not sure how to configure TFTPD32 to only respond to PXE DHCP requests (the ones with the magic flag) and ignore all normal DHCP requests, so that the production machines don't get a non-routed address from the PXE server. How do I configure TFTPD32 to ignore these non-PXE DHCP requests? Or, if it can't, is there another equally easy-to-use piece of software that he can run on his Windows laptop? Since the TFTPD part is working fine, a DHCP server with the ability to serve PXE clients only would do.

    Worst case, I'll have to set up a virtual machine with all this, but I'd much prefer a small, simple solution. I'm not interested in solutions that involve using the existing DHCP servers or separating machines on the network for deployment; the whole point is to be simple and stand-alone.

    Read the article
