Search Results

Search found 1861 results on 75 pages for 'loss'.

Page 47/75

  • What could be causing SVCHost to leak handles?

    - by Goz
    I have a problem that has been causing me all sorts of grief recently. SVCHost appears to be leaking resources all over the shop. This is the SVCHost run with the arguments "-k netsvcs". At the moment it is sitting at around 5,700 handles in use; before I rebooted the machine it was sitting at around 33,000 handles! This higher number has been causing me large problems, as my software then fails to obtain the handles it needs (the software tries to create around 2,000 handles). I'm totally at a loss as to what is going wrong. If anyone could help me stop this happening it would be much appreciated. I'm running on XP with SP3. Edit: I tracked this problem down to the WMI system. I'm not sure why or how the problem was occurring. Basically I used "sc config" to move WMI into its own process and suddenly everything seems to be fine. I'm not entirely sure what is going on...
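
    For reference, the full incantation to give WMI its own svchost process on XP (the WMI service is named winmgmt; treat this as a sketch, not a tested recipe) would be something like:

        rem move WMI out of the shared netsvcs svchost into its own process
        sc config winmgmt type= own
        rem restart the service so the change takes effect
        net stop winmgmt
        net start winmgmt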

    Read the article

  • Resources for Smartphone Security

    - by Shial
    My organization is currently working on improving our data and network security due to increasing HIPAA requirements and a general need to get a better grasp on controlling our health-related information. We are a non-profit working with people with developmental disabilities, so we handle a lot of medical-related information. One area that has been identified as a risk is our use of smartphones, specifically at this time Windows Mobile 6.1 devices from T-Mobile. We do not utilize the VPNs on the phones, so there isn't any way they can access our databases or file servers (the VPN username/password is not the domain logon). What would be exposed, however, is the particular user's email account, since you could extract the username/password and access the email either on the device or through our web email (Exchange 2003). That email could contain HIPAA-protected confidential information about clients and services, and this would be an incident that would have to be reported. What resources or ideas would help us secure these devices? I'm not worried about data interception (we use SSL) but more about physical theft or loss of the device. Are there websites that I just have not found with guidelines and suggestions, or particular products that would help protect us? I also don't want to limit the discussion to Windows Mobile; I myself am looking at an Android 2.0 device, and there is always the eventual possibility we could get pushed to enable the VPNs. I know this is a subject that likely won't have any particular correct answer, and it is something we should all be aware of, since these devices sit outside of our immediate control most of the time.

    Read the article

  • Files on ext4 on Drobo with corrupt, zero-ed out blocks

    - by Patrick
    I have a 2TB ext4 file system (Ubuntu running Linux kernel 2.6.31-22-server x86_64). This file system is the second drive on a Drobo box plugged in via USB. We've not had problems on the first drive (Drobo limits drive size to 2TB due to some OS limitations, so if you have more space than that it appears as two separate drives). I am sharing these files with Samba (smbd 3.4.0) with a mix of Windows and Linux workstations. Recently we've been experiencing data corruption in multiple files. In many cases I have an uncorrupted original file stored on one of the workstations. These are binary files of various formats (e.g. SQLite, but others as well). I used "split" to break a corrupt and an uncorrupted file into 4096-byte chunks (4096 is the block size of the ext4 file system). I then ran md5sum on pairs of chunks and discovered that the chunks matched in many cases, and in every case where they did not match, the corrupt chunk was a solid chunk of zeroes (620f0b67a91f7f74151bc5be745b7110, for what it's worth). I'm trying to track down a culprit but am a bit at a loss. I don't believe Samba is at fault, since I'm using it without issue on the first drive exported by the Drobo. What can I do to narrow this down and find out what's going on?
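
    A minimal sketch of that chunk comparison, for anyone who wants to reproduce it (good.db and bad.db stand in for the intact and corrupt copies):

        # carve both copies into 4096-byte chunks (the ext4 block size here)
        mkdir good bad
        split -b 4096 -a 6 good.db good/chunk.
        split -b 4096 -a 6 bad.db bad/chunk.
        # report which blocks differ between the two copies
        for f in good/chunk.*; do
            cmp -s "$f" "bad/${f#good/}" || echo "differs: ${f#good/}"
        done
        # md5 of a solid-zero 4096-byte block, for comparison with the hash above
        head -c 4096 /dev/zero | md5sum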

    Read the article

  • CentOS networking BNX2

    - by james moore
    Having some trouble with my NICs. The server starts fine and I can wget/ping etc. However, when I /etc/init.d/networking restart I then receive the following error:

        Bringing up interface eth0: bnx2: fw sync timeout, reset code = 1030009
        SIOCSIFFLAGS: Device or resource busy

    Consequently, the task fails. I have searched around on Google; users suggest disabling PnP in the BIOS, but I see no such option. Here is some system information:

        $ ethtool -i eth0
        driver: bnx2
        version: 2.0.8-rh
        firmware-version: bc 2.9.1

        $ uname -a
        Linux host 2.6.18-238.9.1.el5 #1 SMP Tue Apr 12 18:10:13 EDT 2011 x86_64 x86_64 x86_64 GNU/Linux

        $ /sbin/lspci | grep Broadcom
        04:00.0 PCI bridge: Broadcom EPB PCI-Express to PCI-X Bridge (rev c3)
        05:00.0 Ethernet controller: Broadcom Corporation NetXtreme II BCM5700 Gigabit Ethernet (rev 12)
        08:00.0 PCI bridge: Broadcom EPB PCI-Express to PCI-X Bridge (rev c3)
        09:00.0 Ethernet controller: Broadcom Corporation NetXtreme II BCM5700 Gigabit Ethernet (rev 12)

        $ lsmod | grep bnx2
        bnx2i 81704 0
        cnic 109512 1 bnx2i
        libiscsi2 77765 6 be2iscsi,ib_iser,iscsi_tcp,bnx2i,cxgb3i,libiscsi_tcp
        scsi_transport_iscsi2 73945 8 be2iscsi,ib_iser,iscsi_tcp,bnx2i,cxgb3i,libiscsi2
        bnx2 224780 0
        scsi_mod 199001 15 mpt2sas,scsi_transport_sas,mptctl,be2iscsi,ib_iser,iscsi_tcp,bnx2i,cxgb3i,libiscsi2,scsi_transport_iscsi2,scsi_dh,sg,libata,megaraid_sas,sd_mod

        $ rmmod bnx2; modprobe bnx2
        PCI: Enabling device 0000:05:00.0 (0158 -> 015a)
        PCI: Enabling device 0000:09:00.0 (0158 -> 015a)
        bnx2: fw sync timeout, reset code = 10300003

    Any help would be appreciated, as I am at a loss.
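
    One thing that might be worth ruling out, given that the iSCSI offload modules (bnx2i and cnic) in the lsmod output above sit on top of the same hardware: unload them along with bnx2 before reloading it. This is a guess based on the module list, not a confirmed fix:

        # unload the iSCSI offload stack along with bnx2, then bring bnx2 back alone
        modprobe -r bnx2i cnic bnx2
        modprobe bnx2
        # check what the firmware says this time
        dmesg | tail -n 20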

    Read the article

  • Sharepoint/WSS Reporting Services Integration woes

    - by mhollers
    After a number of failed attempts I seem to have successfully installed the Reporting Services add-in to my WSS farm. However, I seem to be missing most of the enhanced functionality, e.g. no report library template and no Report Center site template; the only additional functionality available is the Report Viewer web part. Background: a 2-server WSS 3.0 farm, with CA (Central Admin), the WFE (web front end) and the Reporting Services add-in installed on one, and SQL05 SP2 with Reporting Services (RS) and all databases installed on the other. I have a VM environment set up and have rolled this back and repeated a number of times. I have configured RS within CA and activated the 'Report Server Integration Feature'. Within 'Site Settings' I have a 'Reporting Services' heading with a 'Manage shared schedules' item/link; I'm not sure if there should be other options. I was of the understanding that to view reports within SharePoint I could either create a new site using the 'Report Center' template or add a report library to an existing site, neither of which seems available. I am at a loss as to what to do, as all the online information seems to deal with installation issues/errors, which I seem to have eventually got past.

    Read the article

  • What's the state of the art in image upscaling?

    - by monov
    I like to collect cool pics and use them as wallpapers or for other things. Often, artists publish only low-res versions, probably for fear of theft. Example: Gabriel Pulecio's BIRDS. Now, if I want to use that as a wallpaper, I'd have to upscale it, and obviously that would make it look blurry because of the bicubic interpolation. I realize there's no real way to get a high-res version from a low-res pic, because the information simply isn't there. That said, I'm wondering if heuristics have been developed for upscaling with less apparent loss of quality. Those would probably be optimized for specific image types: for photorealistic pictures, for cartoons with large flat areas, for pixel art... One algorithm I'm aware of is Seam Carving. It works for some kinds of pics, especially ones with a plain, undetailed or uninteresting background and a subject that strongly stands out, but it's far from general-purpose. Applying it to the above pic produces this. It looks quite sharp, but the proportions are horribly distorted because the algorithm is not designed for this kind of pic. Another family is pixel-art scaling algorithms. Those are completely unfit for anything other than actual pixel art that's pixelized to begin with. For example, I tried the scale2x Windows binary on my pic, but its output was nearly indistinguishable from nearest-neighbour scaling because the algorithm didn't detect any isolated pixely fragments to work from. Something else I tried: I enlarged the image in Photoshop with bicubic interpolation, then applied an unsharp mask. The result looks pretty bad: the red blotch is actually resized reasonably well, but the dove is far from it. What I'm looking for is some app that makes a best-effort attempt at upscaling any input image while minimizing blurriness. If you know of any, I'll be thankful. Note that the subjective prettiness and sharpness of the result is what matters; the result doesn't need to be completely faithful to the original small image.
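
    A command-line approximation of that last experiment, for what it's worth, is ImageMagick with a sharper resampling filter than the default bicubic (filenames are placeholders):

        # Lanczos resampling keeps edges noticeably crisper than plain bicubic;
        # -unsharp 0x1 is a mild sharpening pass on top
        convert birds_small.jpg -filter Lanczos -resize 400% -unsharp 0x1 birds_big.jpg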

    Read the article

  • Massive SQL issue shutting down our site.

    - by Pselus
    Our website has started timing out like crazy today. All of our clients are finding it unusable. The only error we can seem to trace down as a potential problem is this one, from the Microsoft OLE DB Provider for ODBC Drivers:

        SQLAllocHandle on SQL_HANDLE_DBC failed

    I have no idea what it means or how to go about fixing it. Has anyone encountered this error before? Currently, you can log in to our site, but once you go to do anything else, you find yourself logged out or nothing happens. We have a lot of Ajax going on, so the "nothing happens" probably has to do with the Ajax pages not loading properly due to logouts, so nothing displays to the user. Like I said, I'm at a loss. Anyone have any advice on this error? EDIT: I realize that this isn't necessarily a programming question, but we are a small startup company that just yesterday started talking about how we need to get a backup server running. Apparently we talked about it too late. We don't have a DBA, just two mid-level programmers trying their hardest to keep our clients happy. So please, if you have any assistance, give it, but please don't close my question right now. EDIT 2: Turns out we had something running on our server called "ServerMask" that makes our IIS server look like Apache to the outside world. Shutting it down fixed our issue. Still no idea why it was messing up, but it was the problem apparently. Thanks to everyone who tried to help.

    Read the article

  • Win 7: apps crash, then explorer crashes, then services fail, then boom

    - by snorfys
    Periodically, every 2-3 days, one of my systems will go haywire: every app will crash, search will fail via the Start menu, and then Explorer will fail. Restarting Explorer via Task Manager will cause it to fail again, then it'll BSOD and restart. The event log for when this happens goes something like this every time:

        ERROR: Session "ReadyBoot" stopped due to the following error: 0xC0000188 (supposedly not a problem)
        WARNING: The maximum file size for session "ReadyBoot" has been reached... (forget where I found out, but also 'not a problem')
        ERROR: Session "Circular Kernel Context Logger" stopped due to the following error: 0xC0000188 (again, supposedly not a problem)
        WARNING: The maximum file size for session "Circular Kernel Context Logger" has been reached...
        ERROR: Faulting application name: Explorer.EXE, version: 6.1.7600.16450, time stamp:...
        ERROR: Faulting application name: explorer.exe, version: 6.1.7600.16450, time stamp:...
        ERROR: Faulting application name: svchost.exe_iphlpsvc, version: 6.1.7600.16385, time stamp:...
        ERROR: The Service Name service terminated unexpectedly. It has done this 1 time(s)

    That last one happens a number of times, but with a different service name each time. Then finally we have:

        ERROR: The Service Control Manager tried to take a corrective action (Restart the service) after the unexpected termination of the Server service, but this action failed with the following error: An instance of the service is already running.

    After that, I have my BSOD and logs complaining that Windows started up without shutting down. It's a new machine:

        Intel i3 530
        4 GB RAM (ran memtest for 4 hrs, no problems)
        320 GB WD / 250 GB Seagate HDDs (happened on fresh installs on 2 separate HDDs)
        Win7 Pro/Ultimate x64 (wife's copy of Pro, my copy of Ult, no change)
        Fresh install + driver and Windows updates (happened without updates as well)

    I'm at a bit of a loss as to what I can look at next, especially since it'll work like a charm for 2-3 days and then it's hooped for a night (I'm on it now in fact - no problems).

    Read the article

  • SQL 2K5 - Multiple databases vs. Multiple files

    - by Bob Palmer
    Hey all, quick question. Our current legacy system was built using multiple distinct databases (about ten of them). These are all part of the same discrete system, and a large number of SPs and much functionality span multiple databases. There are also key relationships that span databases (for example, a header table may be in database A with history, etc. in database B). When deploying multiple copies of our app to the same server, therefore, we have to use multiple instances (because the database names are coded into so many sprocs). We're evaluating the idea of taking these ten databases (about 30 GB total, with individual sizes ranging from 100 MB to 10 GB) and merging them into a single database. Currently, we have our databases spread across multiple spindles for better IO. The question I have is whether there is any performance loss or benefit to having 10 different databases vs. 10 different database files? I.e. rather than having three databases (A, B, and C):

        Disk D: A.mdf (1 GB)
        Disk E: B.mdf (4 GB)
        Disk F: C.mdf (10 GB)
        Disk G: A_Log.ldf, B_Log.ldf, C_Log.ldf

    have one database (X):

        Disk D: X1.mdf (5 GB)
        Disk E: X2.mdf (5 GB)
        Disk F: X3.mdf (5 GB)
        Disk G: X1_log.ldf, X2_log.ldf, X3_log.ldf

    Thanks! -Bob

    Read the article

  • Setting my NIC to full duplex

    - by David
    I am trying to optimize the network speed of my Solaris x86 server, and have discovered that the Cisco 3548 it is connected to has issues with the NIC in my server. The NIC appears not to have been configured fully, and is coming up 100 half-duplex; the 3548 ports are all set to 100 full. Ideally I'd like to have the server set for 100 full, and have been attempting to configure it using ndd commands. However, I have had no results. The following command shows the current state:

        -bash-3.00# dladm show-dev rtls0
        link: unknown  speed: 100 Mbps  duplex: unknown

    The NIC shows up as:

        pci bus 0x0001 cardnum 0x06 function 0x00: vendor 0x10ec device 0x8139
        Realtek Semiconductor Co., Ltd. RTL-8139/8139C/8139C+

    which should be configurable. I have modified the configuration file from auto config (5) to 100 fdx (4) to no avail. If there is no other choice, I could alter the Cisco 3548 to be 100 half-duplex. However, this solution causes a huge performance loss. Currently throughput is about 500 Kbps, when it should be around 40 Mbps.
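
    For completeness, the canonical ndd form for forcing duplex on Solaris looks like the sketch below. The parameter names are driver-dependent, and the rtls driver may not expose them at all (which would square with dladm reporting "unknown"), so treat this as an assumption to test rather than a known-good recipe:

        # advertise only 100 full-duplex, with autonegotiation off
        ndd -set /dev/rtls0 adv_autoneg_cap 0
        ndd -set /dev/rtls0 adv_100fdx_cap 1
        ndd -set /dev/rtls0 adv_100hdx_cap 0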

    Read the article

  • Excel 2010 data validation warning (compatibility mode)

    - by Madmanguruman
    We have some legacy worksheets that were created in Excel 2003, which are used by LabVIEW-based test automation software. The current LabVIEW software can only handle the legacy .xls format, so we're forced to keep these worksheets as-is for the time being. We've migrated to Office 2010, and when working with these worksheets I see this warning:

        "The following features in this workbook are not supported by earlier versions of Excel. These features may be lost or degraded when you save this workbook in the currently selected file format. Click Continue to save the workbook anyway. To keep all of your features, click Cancel and then save the file in one of the new file formats."

        "Significant loss of functionality"

        "One or more cells in this workbook contain data validation rules which refer to values on other worksheets. These data validation rules will not be saved."

    When I click 'Find', some cells that do indeed have validation rules are highlighted, but those rules are all on the same worksheet! We're using simple list-based validation, with some cells off to the side containing the valid values (for example, cell B4 has a List with Source "=$D$4:$E$4"). This makes no sense to me whatsoever. One, the workbook was created in Excel 2003, so obviously we couldn't implement a feature that doesn't exist. Secondly, the modifications we're making don't involve changing the validation rules at all. Thirdly, the complaint Excel is making is incorrect: all of the rules are on the same worksheet as their targets. As if the story wasn't bizarre enough: I went ahead and saved the worksheet with Excel 2010. I then went to an old computer back in the lab and opened the document with Excel 2003. Guess what - the validations were untouched! My questions are: is this a legitimate bug in Excel 2010, or is this some exotic error in the legacy .xls worksheet that is confusing the heck out of Excel 2010? Has anyone else observed this issue working in compatibility mode?

    Read the article

  • SQL Server 2008 Web VS SQL Server 2008 Enterprise

    - by Jeremy
    I wrote an application a few months ago, and was hosting it out of our offices on a workstation with an Intel Core 2 Quad Q8200 @ 2.33 GHz, 8 GB RAM, Windows Server 2008 Enterprise and SQL Server 2008 Enterprise. Both the webserver and database server ran on the same machine. We had a huge influx of traffic, moved to ClubUptime.com, and got two of their top-tier Windows VMs. The database server runs Windows 2008 R2 Standard and SQL Server 2008 R2 Web with 8 GB RAM and an Intel Xeon E5620 @ 2.40 GHz. Ever since switching, the database, which used to run at around 400 MB in RAM, now runs at around 4-7 GB, and there haven't been any changes to it (other than a couple columns here and there). Our traffic has quadrupled, and our DB is 6 GB on disk, so why would SQL Server take up 7 GB if the DB is only 6? And why would it be storing the ENTIRE database in memory? Another thing: after growing 4 times in size, why did the database's memory footprint grow 12 times? Last question: why does the CPU peg at 100% now when it didn't before? The design is simple, VERY few joins, NO subqueries. I am just at a loss, unless it is the SQL Server edition, or the fact that I moved from real hardware to a VM.
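
    One note in passing: SQL Server grows its buffer pool to cache as much of the database as it is allowed to, by design and on every edition, so a 6 GB database fully cached in 7 GB of memory is expected behaviour rather than a leak. If the footprint itself is the concern, it can be capped; a sketch (the 4096 MB figure is an arbitrary example):

        sqlcmd -S localhost -Q "EXEC sp_configure 'show advanced options', 1; RECONFIGURE; EXEC sp_configure 'max server memory (MB)', 4096; RECONFIGURE;"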

    Read the article

  • How to recover a Linksys WRT54GL router that has a blinking green power LED and no response from the web interface

    - by Peter Mounce
    I was flashing the router with the Tomato firmware, but something went wrong; I'm not sure what. Now the router responds to ping at 192.168.1.1 (my Mac's on a static IP, 192.168.1.21), but the web interface doesn't come up. I have read in a couple of places that this situation is recoverable, but I haven't been having much success, so I wondered whether anyone could help. From my Mac (OS X 10.5) I have tried to tftp a new vanilla Linksys firmware to the router and reboot; according to the trace, this sends it, but the router behaves no differently after a reboot. I've read that if boot_wait is turned on I'll have an easier time, but I haven't been able to find any instructions that tell me how I can tell whether I turned it on or not (I don't think I have, but I might have when I tinkered the first time months ago - the router has worked since then, though). I have found a couple of references to something called JTAG, which seems to be some kind of homebrew diagnostic cable, but that's a little beyond my ken. Happy to try it, with muppet-level instructions, though (I do software, not hardware!). So I'm at a bit of a loss, really, and wondered whether anyone could provide me with the route (ha. ha.) out of this mess? Hm, I can't post all the links I wanted to until I have some more reputation.
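
    For what it's worth, the tftp sequence usually quoted for these routers goes like this (the firmware filename is a placeholder; the trick is to start the put within the first couple of seconds after plugging the power back in, while the bootloader is still listening):

        tftp 192.168.1.1
        tftp> binary
        tftp> rexmt 1
        tftp> timeout 60
        tftp> put linksys-firmware.bin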

    Read the article

  • How to fix massive lag on ZyXEL HomePlug AV powerline adapters?

    - by Tim Abell
    I have 3 ZyXEL HomePlug AV powerline adapters as per the one in the review below. I have two plugged in currently: one into my Be / Thompson wireless router, and one into my desktop PC (box1). Every now and then the link indicator on the adapters (the mains link, not the ethernet link) goes nutty, and performance falls off a cliff (see below).

        http://www.gadgetspeak.com/gadget/article.rhtm/753/479266/ZyXEL_PowerLine_HomePlug_AV_PLA401.html

        64 bytes from box1 (192.168.1.101): icmp_seq=1064 ttl=64 time=996 ms
        64 bytes from box1 (192.168.1.101): icmp_seq=1065 ttl=64 time=549 ms
        64 bytes from box1 (192.168.1.101): icmp_seq=1066 ttl=64 time=6.15 ms
        64 bytes from box1 (192.168.1.101): icmp_seq=1067 ttl=64 time=1400 ms
        64 bytes from box1 (192.168.1.101): icmp_seq=1068 ttl=64 time=812 ms
        64 bytes from box1 (192.168.1.101): icmp_seq=1069 ttl=64 time=11.1 ms
        64 bytes from box1 (192.168.1.101): icmp_seq=1070 ttl=64 time=1185 ms
        64 bytes from box1 (192.168.1.101): icmp_seq=1071 ttl=64 time=501 ms
        64 bytes from box1 (192.168.1.101): icmp_seq=1072 ttl=64 time=1975 ms
        64 bytes from box1 (192.168.1.101): icmp_seq=1073 ttl=64 time=970 ms
        ^C
        --- box1 ping statistics ---
        1074 packets transmitted, 394 received, +487 errors, 63% packet loss, time 1082497ms
        rtt min/avg/max/mdev = 5.945/598.452/3526.454/639.768 ms, pipe 4

    Any idea how to diagnose/fix? I'm on Linux, so installing the windoze software that came with them is not something I'm terribly keen to do.

    Read the article

  • NTDS Replication Warning (Event ID 2089)

    - by Chris_K
    I have a simple little network with 3 AD servers in 2 sites. Site A has Win2k3 SP2 and Win2k SP4 servers; site B has a single Win2k3 SP2 server. All have been in place for at least 3 years now. Just last week I started getting Event 2089 "not backed up" warnings (example below) on both of the Win2k3 servers. I understand what the message means; no need to send me links to the TechNet article explaining it. I'll improve my backups. What I'm more curious about is why did I just start getting this message now? Why haven't I been getting it for the past 3 years?!? Perhaps this is related: I recently decommissioned a few other sites and AD controllers (there used to be 3 more sites, each with their own controller). Don't worry, I did proper dcpromo exercises and made sure we didn't lose anything. But would shutting those down possibly be related to why I get this error now? This won't keep me awake at night, but I am curious as to what changed...

        Event Type: Warning
        Event Source: NTDS Replication
        Event Category: Backup
        Event ID: 2089
        Date: 3/28/2010
        Time: 9:25:27 AM
        User: NT AUTHORITY\ANONYMOUS LOGON
        Computer: RedactedName
        Description: This directory partition has not been backed up since at least the following number of days.
        Directory partition: DC=MyDomain,DC=com
        'Backup latency interval' (days): 30
        It is recommended that you take a backup as often as possible to recover from accidental loss of data. However if you haven't taken a backup since at least the 'backup latency interval' number of days, this message will be logged every day until a backup is taken. You can take a backup of any replica that holds this partition. By default the 'Backup latency interval' is set to half the 'Tombstone Lifetime Interval'. If you want to change the default 'Backup latency interval', you could do so by adding the following registry key.
        'Backup latency interval' (days) registry key: System\CurrentControlSet\Services\NTDS\Parameters\Backup Latency Threshold (days)
        For more information, see Help and Support Center at http://go.microsoft.com/fwlink/events.asp.
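
    For reference, the registry tweak named in the event text, in runnable form (90 days is an arbitrary example value):

        reg add "HKLM\System\CurrentControlSet\Services\NTDS\Parameters" /v "Backup Latency Threshold (days)" /t REG_DWORD /d 90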

    Read the article

  • Apache2 - Hosting two sites on the same domain with different ports

    - by user1026361
    I am hosting a staging site (test.mydomain.com) which currently works well on port 80 for two sites (test.mydomain.com and test.FRmydomain.com). I am working on a new backend and I would like to deploy a third site on this server for testing. My hope is that it will live at test.mydomain.com:4204. I've got some experience with Apache and quickly added the statements:

        Listen 4204
        NameVirtualHost *:4204

    and created a new config for my site. Here are what I imagine are the relevant parts of my config:

        <VirtualHost *:4204>
            ServerAdmin [email protected]
            ServerName test.mydomain.com:4204

    However, the site is not publicly available, by name or IP. If I curl localhost:4204 from the server, I get the expected page content. At this point, I'm at a bit of a loss on how to go forward. It seems like my config is correct but not available to be served. Am I better off defining a proxy definition so that, for instance, test.mydomain.com/4204 proxies to my localhost server, or is there a way to make the site available via the internet? EDIT: I have added an iptables rule after further Googling, with the command:

        iptables -I INPUT -p tcp --dport 4204 -j ACCEPT

    I can see Apache listening on 4204 and the rule is definitely in place, but I still can't reach the site.
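
    A quick sanity check worth running (not a fix, just narrowing things down): confirm which address Apache actually bound the port to, since a Listen directive tied to 127.0.0.1 would produce exactly these symptoms - curl works locally, nothing from outside. If it is bound to all interfaces, the block is likely upstream (a provider or perimeter firewall):

        # is Apache bound to all interfaces, or only loopback?
        netstat -tlnp | grep :4204
        # want:  0.0.0.0:4204   ... LISTEN <pid>/apache2
        # bad:   127.0.0.1:4204 ... LISTEN <pid>/apache2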

    Read the article

  • Why can't I route to some sites from my MacBook Pro that I can see from my iPad? [closed]

    - by Robert Atkins
    I am on M1 Cable (residential) broadband in Singapore. I have an intermittent problem routing to some sites from my MacBook Pro - often Google-related sites (arduino.googlecode.com and ajax.googleapis.com right now, but sometimes even gmail.com). This prevents StackExchange chat from working, for instance. Funny thing is, my iPad can route to those sites and they're on the same wireless network! I can ping the sites, but not traceroute to them, which I find odd. That I can get through via the iPad implies the problem is with the MBP. In any case, calling M1 support is... not helpful. I get the same behaviour when I bypass the Airport Express entirely and plug the MBP directly into the cable modem. Can anybody explain a) how this is even possible and b) how to fix it?

        mella:~ ratkins$ ping ajax.googleapis.com
        PING googleapis.l.google.com (209.85.132.95): 56 data bytes
        64 bytes from 209.85.132.95: icmp_seq=0 ttl=50 time=11.488 ms
        64 bytes from 209.85.132.95: icmp_seq=1 ttl=53 time=13.012 ms
        64 bytes from 209.85.132.95: icmp_seq=2 ttl=53 time=13.048 ms
        ^C
        --- googleapis.l.google.com ping statistics ---
        3 packets transmitted, 3 packets received, 0.0% packet loss
        round-trip min/avg/max/stddev = 11.488/12.516/13.048/0.727 ms

        mella:~ ratkins$ traceroute ajax.googleapis.com
        traceroute to googleapis.l.google.com (209.85.132.95), 64 hops max, 52 byte packets
        traceroute: sendto: No route to host
        1 traceroute: wrote googleapis.l.google.com 52 chars, ret=-1
        *traceroute: sendto: No route to host
        traceroute: wrote googleapis.l.google.com 52 chars, ret=-1
        ^C
        mella:~ ratkins$

    The traceroute from the iPad goes (and I'm copying this by hand):

        10.0.1.1
        119.56.34.1
        172.20.8.222
        172.31.253.11
        202.65.245.1
        202.65.245.142
        209.85.243.156
        72.14.233.145
        209.85.132.82

    From the MBP, I can't traceroute to any of the IPs from 172.20.8.222 onwards. [For extra flavour, not being able to access the above appears to stop me logging in to Server Fault via OpenID and formatting the above traceroutes correctly. Anyone with sufficient rep here to do so, I'd be much obliged.]
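
    Two things worth checking on the MBP, since ping (ICMP) gets through while traceroute's UDP probes die with "sendto: No route to host": what the routing table says for that destination, and whether the local ipfw firewall (the packet filter on OS X 10.5) has rules that would eat UDP. A sketch:

        # what route does the kernel pick for the failing destination?
        route get ajax.googleapis.com
        # any local firewall rules that would block traceroute's UDP probes?
        sudo ipfw list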

    Read the article

  • Problem with mydomain.com no prefix

    - by user10711
    The short question: I have a domain name, mydomain.com, and a company website hosted on an IIS / Server 2003 configuration. Going to the address bar and typing www.mydomain.com will show my website properly. Typing mydomain.com into the same address bar will return an under-construction website that seems to be hosted on my webserver. My domain name is hosted by Network Solutions, and I think I have it configured correctly using their advanced DNS services: in their settings I have www.mydomain.com, * and @ all pointed to the IP address of my webserver. On the webserver itself, using the IIS manager, under the web site's identification settings, I have both www.mydomain.com and mydomain.com configured to work on the IP address of the webserver. I am hosting 4 different websites on my IIS server; all the other sites use prefixes other than www (an example is mail.mydomain.com, and a couple of others), and none of them shows an under-construction page as its default homepage. I am really at a loss as to why it would show an under-construction page, especially since it seems to be pointing to the correct server. The reason this is such a big deal is that when you search for my company on Google, the link there is for mydomain.com, and clicking on the link shows "under construction", which is really quite embarrassing. Thanks in advance for any help, and if there are further questions let me know.
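
    A first step that might split the problem in half: confirm that both names really resolve to the same address, then compare what IIS serves for each host header:

        nslookup mydomain.com
        nslookup www.mydomain.com

    If the IPs match, the under-construction page is coming from host-header matching on the server itself (typically whichever site answers for host headers that aren't listed); if they differ, the DNS records are the culprit.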

    Read the article

  • How to forceably unmount stuck network share in Mac OS X?

    - by Kyle Lowry
    Not long ago my Xserve failed (power loss) while an iMac was working with files on a particular network share (called "Work Share"). This volume, "Work Share", is now stuck. It can't be seen in the GUI; you can only detect it using the Terminal. Even after power cycling over the course of several days, ls -a still shows that it's there, but I can't unmount it using any command - not even as root in single-user mode. Every time I attempt to unmount that volume, I get the message that the resource is busy (which it can't possibly be, since nothing is using it), and error code 4915. The issue is that when I mount the real "Work Share", it is internally renamed to "Work Share-1", which breaks all my links and several files in the share. If I can't unmount the false "Work Share", then that computer would be unusable without a reformat, I would imagine - and I don't want it to come to that. I've tried everything I can think of - it looks like sudo can't save me now. Any ideas on how to unmount this stuck volume?
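
    For the record, the usual escalation path on a Mac looks like the sketch below (the share path under /Volumes is an assumption), including a check for whatever is allegedly holding the volume busy:

        # what (if anything) is actually holding the volume open?
        lsof +D "/Volumes/Work Share"
        # force-unmount, two flavours
        sudo umount -f "/Volumes/Work Share"
        sudo diskutil unmount force "/Volumes/Work Share"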

    Read the article

  • Cable management techniques

    - by cornjuliox
    How do you manage the giant jungle of cables behind your PC? When you have 2 or more PCs next to each other, you wind up with this giant mess of cables that's a pain in the neck to clean, especially when both computers are running 24/7 and any fidgeting with the cables is likely to cause data loss and/or angry users. So far I've tried masking tape, cable ties and plain old string, but none have been very effective. The masking tape kept the cables in place, but over time it left this awful sticky residue on the sides of the cables that just won't come off, gets all over your fingers, and is horrible horrible horrible. I have nightmares about that stuff. We used cable ties and 'folded' up some of the longer cables so that they weren't any longer than they needed to be, but this meant that the position of some of our devices, like the keyboard and the mouse, was essentially 'fixed' until we removed the ties. The string didn't work much differently and required that we tie it properly or risk it coming loose. I would switch to a wireless keyboard and mouse, but I don't want to deal with the added expense of batteries, even the rechargeable ones. Plus I don't want them to die on me at a crucial moment (happened to me once while playing Firearms _<). How do large offices and data centers manage their masses of cable?

    Read the article

  • Cannot connect with Cisco VPN but can connect with ShrewSoft VPN

    - by rodey
    EDIT: We connected an air card to the computer to use a different Internet connection and, using the Cisco software, we were able to successfully connect to our VPN server. I just don't understand why the ShrewSoft VPN client would connect but the Cisco connection won't. I'm not our network admin, so sorry if I butcher some of the terminology. I have a computer at a remote site that connects to our network through Cisco VPN, using the Cisco VPN client software. The problem is that the computer at this site cannot connect to our VPN because it is getting the error "Reason 412: The remote peer is no longer responding." To see if perhaps something on their network was blocking the connection, I installed the ShrewSoft VPN client on the computer, imported our .pcf file, and connected with no problem. I have tried two different versions of the Cisco VPN software (4.8.0.* and 5.0.03.*) and have the same problem. I installed Wireshark on the computer and have confirmed (while trying to connect through Cisco) that the computer is trying to contact the VPN server but is not receiving a response. We are not having any other problems with users being unable to connect. I'm at a loss as to what else to check. I'll be monitoring this and have access to the computer at any time.

    Read the article

  • Cisco ASA Site-to-Site VPN Dropping

    - by ScottAdair
    I have three sites: Toronto (1.1.1.1), Mississauga (2.2.2.2) and San Francisco (3.3.3.3). All three sites have ASA 5520s, and the sites are connected together with site-to-site VPN links between each pair of locations. My issue is that the tunnel between Toronto and San Francisco is very unstable, dropping every 40 to 60 minutes. The tunnel between Toronto and Mississauga (which is configured in the same manner) is fine, with no drops. I also noticed that my pings drop, but the ASA thinks the tunnel is still up and running. Here is the configuration of the tunnel.

    Toronto (1.1.1.1):

        crypto map Outside_map 1 match address Outside_cryptomap
        crypto map Outside_map 1 set peer 3.3.3.3
        crypto map Outside_map 1 set ikev1 transform-set ESP-AES-256-MD5 ESP-AES-256-SHA
        crypto map Outside_map 1 set ikev2 ipsec-proposal AES256
        group-policy GroupPolicy_3.3.3.3 internal
        group-policy GroupPolicy_3.3.3.3 attributes
         vpn-idle-timeout none
         vpn-tunnel-protocol ikev1 ikev2
        tunnel-group 3.3.3.3 type ipsec-l2l
        tunnel-group 3.3.3.3 general-attributes
         default-group-policy GroupPolicy_3.3.3.3
        tunnel-group 3.3.3.3 ipsec-attributes
         ikev1 pre-shared-key *****
         isakmp keepalive disable
         ikev2 remote-authentication pre-shared-key *****
         ikev2 local-authentication pre-shared-key *****

    San Francisco (3.3.3.3):

        crypto map Outside_map0 2 match address Outside_cryptomap_1
        crypto map Outside_map0 2 set peer 1.1.1.1
        crypto map Outside_map0 2 set ikev1 transform-set ESP-AES-256-MD5 ESP-AES-256-SHA
        crypto map Outside_map0 2 set ikev2 ipsec-proposal AES256
        group-policy GroupPolicy_1.1.1.1 internal
        group-policy GroupPolicy_1.1.1.1 attributes
         vpn-idle-timeout none
         vpn-tunnel-protocol ikev1 ikev2
        tunnel-group 1.1.1.1 type ipsec-l2l
        tunnel-group 1.1.1.1 general-attributes
         default-group-policy GroupPolicy_1.1.1.1
        tunnel-group 1.1.1.1 ipsec-attributes
         ikev1 pre-shared-key *****
         isakmp keepalive disable
         ikev2 remote-authentication pre-shared-key *****
         ikev2 local-authentication pre-shared-key *****

    I'm at a loss. Any ideas?
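
    One detail that may matter, given that the ASA keeps believing a dead tunnel is up: both ends have "isakmp keepalive disable", so there is no dead peer detection on this tunnel. A hedged suggestion (threshold/retry values are examples, not recommendations) would be to enable keepalives under the tunnel-group on both sides, e.g. on Toronto:

        tunnel-group 3.3.3.3 ipsec-attributes
         isakmp keepalive threshold 10 retry 2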

    Read the article

  • Parsing text files

    - by d03boy
    I encountered a situation tonight where I wanted to parse a text file. I had a very, very long word list containing English words delimited by lines. I wanted to get rid of every word (or line) longer than 7 characters. This would be simple in Linux, but I can't seem to find a simple solution in Windows XP. I tried using Notepad++'s regular expression search, but that was a huge failure: I tried the expression .{6,} without finding any matches. I'm really at a loss, because I thought this sort of thing would be extremely easy and there would be tons of tools to accomplish a task like this. It seems like Notepad++ supports every other feature in the world except the very basic ones that seem the most obvious. Another one of my goals was to put some code before and after the word on each line, so that

        aardvark
        apple
        azolio

    would turn into

        INSERT INTO Words (word) VALUES ('aardvark');
        INSERT INTO Words (word) VALUES ('apple');
        INSERT INTO Words (word) VALUES ('azolio');

    What suggestions/tools/tips do you have to accomplish tasks similar to this in Windows XP?
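
    (As an aside, the zero matches are plausibly explained by the regex engine in Notepad++ builds of that era not supporting the {n,} interval syntax at all.) A sketch of doing both steps from a plain XP command prompt, with no extra tools - filenames are placeholders, and the % signs must be doubled if this goes in a .bat file:

        rem keep only lines shorter than 8 characters (8 dots = "at least 8 chars"; /V inverts the match)
        findstr /V /R "^........" words.txt > short.txt
        rem wrap each surviving word in an INSERT statement
        for /F %w in (short.txt) do @echo INSERT INTO Words (word) VALUES ('%w');>> inserts.sql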

    Read the article

  • Postfix spool on ext3 optimiziations in >=linux-2.6.34 days

    - by Luke404
    Given the very specific nature of the subject (we're not talking about mailboxes, just the spool; we're not talking about other filesystems, just ext3; and so on) and the maturity of the software involved (the Linux kernel, ext3fs, Postfix), I'd think there should be a more or less agreed-on set of best practices for filesystem-related tuning. I'm trying to get a roundup of them:

    - data=journal became the default in recent kernels (somewhere around 2.6.30 IIRC), so we should be ok with that
    - Wietse Venema says atime must be on, but the Postfix documentation recommends noatime while talking about the Incoming Queue. Does that mean Postfix needs atime on just for some queue directories and will benefit from noatime on the others? Can we use noatime if we just don't use ETRN?
    - the filesystem can be mounted nodev,noexec,nosuid - the no* options won't prevent you from setting attributes (Postfix uses the exec attr), they just won't have any effect (we don't run anything from the spool)
    - the fsync() issue cited by Wietse and/or the chattr -S are probably linked to the sync/async options of ext3fs, but I do not understand them well enough. Is mounting the filesystem with the async option equivalent to chattr -R -S on the whole fs? It seems like it would increase performance, but will that pose a risk of "loss of mail after a system crash", or is it really "safe on /var/spool/postfix"?
    - would you tune anything else on postfix-2.6.x to work better on ext3, or do you leave defaults everywhere?
    - is there a "best" Linux I/O scheduler for this kind of workload (namely CFQ or deadline?), or is that something that will vary too much based on hardware configuration?
    - would you tune anything else in the filesystem or in the kernel?
    - anything else?

    References: Postfix Performance here on SF; the Postfix documentation about the Incoming Queue; Wietse Venema in "Best file system" on [email protected]; "Postfix and ext3" on [email protected].
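
    To make the mount-option side of the list above concrete, the fstab line under discussion would look something like this (device, mount point, and the noatime choice - the contested point above - are all assumptions for illustration):

        # /etc/fstab sketch for a dedicated Postfix spool partition
        /dev/sdb1  /var/spool/postfix  ext3  defaults,nodev,noexec,nosuid,noatime  0  2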

    Read the article
