Search Results

Search found 14080 results on 564 pages for 'known types'.


  • Recovering a damaged microSDHC

    - by djechelon
    I just bought a Kingston 32GB microSDHC from eBay that was advertised as defective. The seller said there could be problems with formatting or with transferring large files. Unfortunately, when I got it, it was a total mess:

    My Nikon camera doesn't read it at all (OK, maybe it doesn't support 32GB).
    My Linux laptop doesn't mount it: can't read superblock.
    The same laptop refuses to mkfs.msdos because it fails whilst writing the reserved sector.
    The same laptop, under Windows, can neither read nor format the card.
    An HTC HD2 mounts the MMC and lets me write to it via USB, but is unable to open the files it has just written.

    OK, folks, now you would say I should go through a PayPal complaint... that's not so easy. I consciously bought a half-price card that was known to show some defects, and PayPal complaints take time. Obviously, I can't accept that somebody sold me a completely useless computer decoration, so I'll keep that as a last option.

    My question is: do you know a way, under either Linux or Windows, to thoroughly scan, test and possibly repair memory cards, even if I have to lose some percentage of space because of bad sectors? If I can keep at least half of the card intact it would certainly be fine. I used to do bad-sector marking with hard disks in the past. I almost forgot:

      MONSTR:/home/djechelon # fsck /dev/mmcblk0p1
      fsck from util-linux-ng 2.17.2
      dosfsck 3.0.9, 31 Jan 2010, FAT32, LFN
      Read 512 bytes at 0:Input/output error
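
    A minimal sketch of the kind of bad-sector marking described above, assuming the card still appears as /dev/mmcblk0p1 and that its contents may be destroyed (badblocks -w is a destructive write test); the device name and FAT parameters are illustrative only, and in practice the badblocks block size (-b) should match the filesystem's:

      # destructive read-write surface scan; record failing blocks in a file
      badblocks -w -s -o /tmp/mmc-badblocks /dev/mmcblk0p1
      # recreate the FAT32 filesystem, marking the listed blocks as bad
      mkfs.vfat -F 32 -l /tmp/mmc-badblocks /dev/mmcblk0p1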

    Read the article

  • Setting "Run WWW service in IIS 5.0 isolation mode" does not persist in IIS 6

    - by Saul Dolgin
    Our IIS server was recently patched with the latest Microsoft security updates, and since then I am unable to enable the "Run WWW service in IIS 5.0 isolation mode" setting. This setting was enabled prior to patching and somehow changed during the updates. I have tried both the IIS Manager console and the adsutil.vbs approach to change it. Either way, after resetting IIS for the change to take effect, when I go to verify that the isolation mode setting is enabled (true), I find that it reverts back to being disabled (false). Now... the patches have already been rolled back, however the setting still does not persist when I enable it. While I research the patches that were applied to see if there is a known issue (or perhaps a change in this setting's behavior), I was hoping someone else might have come across the same problem. Any help towards a workaround would be greatly appreciated!

      >cscript adsutil.vbs set W3SVC/IIs5IsolationModeEnabled TRUE
      IIs5IsolationModeEnabled : (BOOLEAN) True

      >iisreset
      Attempting stop...
      Internet services successfully stopped
      Attempting start...
      Internet services successfully restarted

      >cscript adsutil.vbs get W3SVC/IIs5IsolationModeEnabled
      IIs5IsolationModeEnabled : (BOOLEAN) False

    Read the article

  • "Server Unavailable" and removed permissions on .NET sites after Windows Update [closed]

    - by andrewcameron
    Our company has five almost identical Windows 2003 servers with the same host, and all but one performed an automatic Windows Update last night without issue. The one that had problems, of course, was the one which hosts the majority of our sites. What the update appeared to do was cause the NETWORK user to stop having access to the .NET Framework 2.0 files, as the event log was complaining about not being able to open System.Web. This resulted in every .NET site on the server returning "Server Unavailable" as the app domains failed to initialise. I ran aspnet_regiis, which didn't appear to fix the problem, so I ran FileMon, which revealed that nobody but the Administrators group had access to any files in any of the website folders! After resetting the permissions, things appear to be fine.

    I was wondering if anyone had an idea of what could have caused this to go wrong? As I say, the four other servers updated without a problem. Are there any known issues involved with any of the following updates? My major suspect at the moment is the 3.5 update, as all of the sites on the server are running on 3.5.

    Windows Server 2003 Update Rollup for ActiveX Killbits for Windows Server 2003 (KB960715)
    Windows Server 2003 Security Update for Internet Explorer 7 for Windows Server 2003 (KB960714)
    Windows Server 2003 Microsoft .NET Framework 3.5 Family Update (KB959209) x86
    Windows Server 2003 Security Update for Windows Server 2003 (KB958687)

    Thanks for any light you can shed on this.
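
    For reference, a minimal sketch of re-granting the ASP.NET file permissions to the worker-process account after something like this, assuming the default .NET 2.0 framework path and the NETWORK SERVICE account (substitute whichever account the application pools actually run as):

      %windir%\Microsoft.NET\Framework\v2.0.50727\aspnet_regiis.exe -ga "NT AUTHORITY\NETWORK SERVICE"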

    Read the article

  • Scanning website for vulnerabilities

    - by Kristen
    I have found that the local school's website installed a Perl Calendar - this was years ago, it has not been used for ages, but Google has it indexed (which is how I found it) and it is full of Viagra links and the like... The program was by Matt Kruse; here are details of the exploit: http://www.securiteam.com/exploits/5IP040A1QI.html

    I've got the school to remove that, but I think they also have MySQL installed, and I'm aware that out of the box there have been some exploits of Admin Tools / Login in old versions. For all I know they also have phpBB and the like installed... The school is just using some cheap, shared hosting; the HTTP response header I get is:

      Apache/1.3.29 (Unix) (Red-Hat/Linux) Chili!Soft-ASP/3.6.2 mod_ssl/2.8.14 OpenSSL/0.9.6b PHP/4.4.9 FrontPage/5.0.2.2510

    I'm looking for some means of checking if they have other junk installed (quite possibly from way back, and now unused) that might put the site at risk. I'm more interested in something that can scan for things like the MySQL Admin exploit rather than open ports etc. My guess is that they have little control over the hosting space that they have - but I'm a Windows dev, so this *nix stuff is all Greek to me.

    I found http://www.beyondsecurity.com/ which looks like it might do what I want (within their evaluation :) ) but I worry about how to find out whether they are well known / honest - otherwise I will be tipping them a wink with a domain name that may be at risk! Many thanks.
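
    Purely as an illustration (this tool is a suggestion of mine, not something mentioned in the post): an open-source web scanner such as Nikto is the kind of thing that checks a host for known vulnerable scripts, admin consoles and leftover installs rather than just open ports; the hostname and output file are placeholders:

      nikto -h www.example-school.org -o nikto-report.txt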

    Read the article

  • Fresh Install of SQL Server 2008 doesn't install Management Studio. Help!

    - by Jordan S
    OK, I am running Windows 7, 64-bit. I cleaned SQL Server 2005 completely off my system, leaving only SQL Compact Edition. I went here http://www.microsoft.com/downloads/details.aspx?FamilyID=01af61e6-2f63-4291-bcad-fd500f6027ff&displaylang=en and installed SQL Server 2008 Express Edition Service Pack 1. After the install, under my Start menu all I have for SQL configuration tools are the Configuration Manager, Error and Usage Reporting, and the Install Center. I don't have SQL Server Management Studio.

    So I went here http://www.microsoft.com/downloads/details.aspx?FamilyID=08e52ac2-1d62-45f6-9a4a-4b76a8564a2b&displaylang=en and downloaded SQL Server 2008 Management Studio Express, but when I try to install it I get a warning that says "This program has known compatibility issues" and that I need to install SQL Server 2008 Service Pack 1. I thought that is what I installed. So I tried to continue running the install, but I then get an error message that says "Invoke or BeginInvoke can not be called on a Form before it is opened"... How can I check whether Service Pack 1 is installed or not? What should I do?
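
    A minimal sketch of one way to check the service-pack level, assuming the sqlcmd client tools are present and the default Express instance name .\SQLEXPRESS (adjust the instance name to match); ProductLevel reports values such as RTM or SP1:

      sqlcmd -S .\SQLEXPRESS -E -Q "SELECT SERVERPROPERTY('ProductLevel'), SERVERPROPERTY('ProductVersion')"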

    Read the article

  • Microsoft Windows DHCP: Steering IPv4 clients into specific scopes based on MAC

    - by Easter Sunshine
    We have visitors on our campus who bring their own laptops and devices and use our wireless and wired networks. When we receive a copyright infringement notice (typically for BitTorrenting), we are required to quarantine that MAC address so that it no longer has Internet access. No matter what website it tries to visit, it is sent to a web page explaining to the user that the device has been quarantined.

    We have thus far implemented this in ISC DHCP on Linux. We have multiple VLANs with one or more public-IP subnets and one RFC 1918 quarantine subnet each. All clients are leased IPs in the public-IP subnet(s) unless you're in a list of known bad MACs; then you are sent to the quarantine subnet so that your traffic is unroutable on the Internet (you are isolated by subnet only, not by VLAN).

    We would like to move to Windows DHCP in light of the IPAM role, but I cannot figure out how to replicate this in Windows DHCP 2012 (Assign DHCP IPs for specific MAC prefixes on Windows Server 2008 R2 suggests it was not possible in 2008 R2), even while using policies. So here's what I'd like:

    The administrator/help desk provides and maintains a list of MAC addresses that are to be quarantined.
    The DHCP server places those MACs into the quarantine subnet on the respective VLAN, no matter which VLAN the client is in.

    I don't think reservations would work: we currently have about 300 registered bad MACs and about 12 VLANs. I don't want to make 300 x 12 reservations, nor have to add 12 reservations per new MAC address - not to mention all of the quarantine subnets are /24s. We do not have NPS/NAC; you do not have to register your MAC address to get network access. We use Cisco routers/switches. Thanks.
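
    For context, a minimal sketch of the ISC dhcpd.conf pattern the current Linux setup describes, with illustrative addresses and a made-up MAC (the leading 1: is the Ethernet hardware type); the real configuration will differ:

      class "quarantined" {
        match hardware;
      }
      # one subclass line per known bad MAC
      subclass "quarantined" 1:00:11:22:33:44:55;

      shared-network VLAN-EXAMPLE {
        subnet 198.51.100.0 netmask 255.255.255.0 {
          pool {
            deny members of "quarantined";
            range 198.51.100.50 198.51.100.250;
          }
        }
        subnet 10.99.10.0 netmask 255.255.255.0 {
          pool {
            allow members of "quarantined";
            range 10.99.10.50 10.99.10.250;
          }
        }
      }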

    Read the article

  • Cacti is ignoring hash marks in interface aliases

    - by Matt Simmons
    I'm attempting to set up Cacti to monitor a router's interfaces, and I'm having trouble getting the graph templates to show the information that I'd like. Our interface configuration looks like this:

      interface GigabitEthernet3/6
       description WalljackNumber # Server info
       no ip address
       no shutdown
       switchport
       switchport access vlan 116
       switchport mode access
       ip dhcp snooping trust
       spanning-tree portfast

    The "Server info" string is really just the machine name and a short relevant description, such as "PolarSprings vmnic2". The important part appears to be that it follows the hash mark. When I run snmpwalk, I get the proper output:

      IF-MIB::ifAlias.230 = STRING: WalljackNumber # Server info

    But in Cacti, when I go into the graph templates and set the title to this:

      |host_description| - Traffic - |query_ifName| (|query_ifAlias|)

    all that shows up in the graph is:

      switchname - Traffic - Gi3/6 (WalljackNumber #)

    which strikes me as a little weird. What I suppose MAY be happening is that somewhere in the Cacti stream it's interpreting # as a comment and stripping everything after it, but I'm not sure. I was hoping someone could tell me that this is a known, documented behavior, or that I can change it in a setting I wasn't aware of. The alternative answer is to change the delimiter from # to something else, but I've got over a thousand lit switchports on an old college infrastructure, and I'm not sure what else might be relying on them.

    Read the article

  • Using Confluence with virtual hosts and mod_proxy

    - by Marcus
    Hi all, on a test server I have installed the latest version of Confluence and configured Apache in front of it with AJP. But I have a problem: when I log in to Confluence, I get the following error message:

      Not Found
      The requested URL / /homepage.action was not found on this server.

    The problem seems to be known; I found the following link: http://confluence.atlassian.com/display/DOC/Using+Apache+with+virtual+hosts+and+mod_proxy Unfortunately the forwards described there have not helped, and I still get the error messages. Does anyone have any idea how I could solve the problem? This is the Apache configuration I have set up:

      LoadModule proxy_module /usr/lib/apache2/modules/mod_proxy.so
      LoadModule proxy_http_module /usr/lib/apache2/modules/mod_proxy_http.so
      LoadModule proxy_ajp_module /usr/lib/apache2/modules/mod_proxy_ajp.so

      <IfModule proxy_http_module>
          ProxyRequests Off
          ProxyPreserveHost On
          <Proxy *>
              Order deny,allow
              Allow from all
          </Proxy>
          <Location />
              Order allow,deny
              Allow from all
          </Location>
          ProxyPass / http://localhost:8080/
          ProxyPassReverse / http://localhost:8080/
      </IfModule>
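
    Since the post mentions AJP but the ProxyPass lines shown use HTTP on port 8080, here is a rough sketch of the AJP equivalent for comparison, assuming Tomcat's AJP connector is enabled on its default port 8009 (an assumption, not the poster's actual setup):

      <IfModule proxy_ajp_module>
          ProxyRequests Off
          ProxyPreserveHost On
          ProxyPass / ajp://localhost:8009/
          ProxyPassReverse / ajp://localhost:8009/
      </IfModule>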

    Read the article

  • 13" MacBook Pro with Win 7 and External VGA gets 640x480

    - by Jim McKeeth
    I have a brand new 13" MacBook Pro - 2.26 GHz with the NVIDIA 9400M video card. I installed Windows 7 (final) in Boot Camp and booted into Windows 7, installed all the drivers from the Apple disk, and it was working great. Then I attached the external VGA adapter (from Apple) to connect to a projector, and the display dropped down to 640x480 resolution. No matter what I did, it wouldn't let me change to a higher resolution while the external VGA was connected. Once it is disconnected, it goes back to the normal resolution. If I am booted into Snow Leopard, it works fine. I tried updating the NVIDIA drivers and it behaved exactly the same. Ultimately I want to get 1024x768 or better resolution when connected to an external display. If it isn't fixable, then I am curious if anyone else has seen this, if it is a known issue, and who to contact for support (Apple, Microsoft or NVIDIA?).

    Update: Just attaching the Mini-DVI to VGA adapter kicks it into 640x480; no projector is required. I tried forcing the display driver from Generic PnP Monitor to one that supported 1024x768 and that didn't work either.

    Read the article

  • Epson Artisan 800 on Ubuntu/Linux

    - by Tim Lytle
    Update for Ubuntu 10.04: printing should work out of the box; scanning still needs the newer SANE backend.

    I'm looking for a known-good way to set up an Epson Artisan 800 on Ubuntu specifically, or on any Linux box in general. It is a printer/scanner with Ethernet/Wi-Fi/USB. I'd like to use it as a network printer/scanner, being able to do both from my Windows and Ubuntu machines; however, if it needs to be physically connected to a computer (preferably the Ubuntu machine) that is doable (again, then sharing print/scan functions to the network). Basically, I'm looking for someone who has used this printer/scanner (or similar) in a multi-platform environment to share how they set it up and how well it worked.

    Updated: A little more information. Like most printers (I expect), the documentation for this printer basically says "don't use plug-and-play, run our setup CD from your Windows/Mac system" to do anything (even to set it up for network use). I guess that's to make it easy for anyone else to set up, but when you're looking to use it with an OS that Epson's documentation doesn't support, you're just stuck on your own. What I was hoping for was someone who could say, "Forget the bundled software, do [this] to set it up on wifi manually, install [this] to connect to the scanner from [os], printing works with [this] driver - at least that's how I set it up." I will use the information here (and have so far), and post my own setup when I'm done, if there's no one else out there with that experience.

    Read the article

  • Why is OpenSSH not using the user specified in ssh_config?

    - by Jordan Evens
    I'm using OpenSSH from a Windows machine to connect to a Linux Mint 9 box. My Windows user name doesn't match the SSH target's user name, so I'm trying to specify the user to use for login in ssh_config. I know OpenSSH can see the ssh_config file, since I'm specifying the identity file in it. The section specific to the host in ssh_config is:

      Host hostname
          HostName hostname
          IdentityFile ~/.ssh/id_dsa
          User username
          Compression yes

    If I do ssh username@hostname it works. Trying to rely on ssh_config only gives:

      F:\>ssh -v hostname
      OpenSSH_5.6p1, OpenSSL 0.9.8o 01 Jun 2010
      debug1: Connecting to hostname [XX.XX.XX.XX] port 22.
      debug1: Connection established.
      debug1: permanently_set_uid: 0/0
      debug1: identity file /cygdrive/f/progs/OpenSSH/home/.ssh/id_rsa type -1
      debug1: identity file /cygdrive/f/progs/OpenSSH/home/.ssh/id_rsa-cert type -1
      debug1: identity file /cygdrive/f/progs/OpenSSH/home/.ssh/id_dsa type 2
      debug1: identity file /cygdrive/f/progs/OpenSSH/home/.ssh/id_dsa-cert type -1
      debug1: Remote protocol version 2.0, remote software version OpenSSH_5.3p1 Debian-3ubuntu5
      debug1: match: OpenSSH_5.3p1 Debian-3ubuntu5 pat OpenSSH*
      debug1: Enabling compatibility mode for protocol 2.0
      debug1: Local version string SSH-2.0-OpenSSH_5.6
      debug1: SSH2_MSG_KEXINIT sent
      debug1: SSH2_MSG_KEXINIT received
      debug1: kex: server->client aes128-ctr hmac-md5 none
      debug1: kex: client->server aes128-ctr hmac-md5 none
      debug1: SSH2_MSG_KEX_DH_GEX_REQUEST(1024<1024<8192) sent
      debug1: expecting SSH2_MSG_KEX_DH_GEX_GROUP
      debug1: SSH2_MSG_KEX_DH_GEX_INIT sent
      debug1: expecting SSH2_MSG_KEX_DH_GEX_REPLY
      debug1: Host 'hostname' is known and matches the RSA host key.
      debug1: Found key in /cygdrive/f/progs/OpenSSH/home/.ssh/known_hosts:1
      debug1: ssh_rsa_verify: signature correct
      debug1: SSH2_MSG_NEWKEYS sent
      debug1: expecting SSH2_MSG_NEWKEYS
      debug1: SSH2_MSG_NEWKEYS received
      debug1: Roaming not allowed by server
      debug1: SSH2_MSG_SERVICE_REQUEST sent
      debug1: SSH2_MSG_SERVICE_ACCEPT received
      debug1: Authentications that can continue: publickey
      debug1: Next authentication method: publickey
      debug1: Trying private key: /cygdrive/f/progs/OpenSSH/home/.ssh/id_rsa
      debug1: Offering DSA public key: /cygdrive/f/progs/OpenSSH/home/.ssh/id_dsa
      debug1: Authentications that can continue: publickey
      debug1: No more authentication methods to try.
      Permission denied (publickey).

    I was under the impression that (as outlined in this question: How to make ssh log in as the right user?) specifying User username in ssh_config should work. Why isn't OpenSSH using the username specified in ssh_config?
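
    As an aside, a couple of standard OpenSSH switches (not from the post) that can help isolate where the setting is being lost, by forcing the config file and the user explicitly; the config path is a placeholder:

      ssh -F /path/to/ssh_config -v hostname
      ssh -o User=username -v hostname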

    Read the article

  • How would I set up iMail to forward a user's mail to another service w/o leaving a copy locally?

    - by Scott Mayfield
    I have an iMail 2006 server installation with a particular user that has several aliases all pointing to a single account (mine, for the record). I've been copying all of my mail to GMail and reading it there, but it annoys me that I have to go back weekly, log into my mail account on iMail, and delete between 6 and 10 thousand copies of messages I've already received, in order to keep my mailbox from filling up (yes, I have it set with no quota, but I consider it bad form to just let the box grow indefinitely).

    I've got the copying set up via an inbound user rule, but I'm wondering how to accomplish a "copy and delete" rule. The manual isn't clear on what happens with multiple matching rules (will they be processed in order, or is it a first-match situation?) and there isn't a means to combine multiple actions into a single rule. If I use the "forward" action, I THINK it's going to screw up all the sender information once the mail reaches my GMail account and show it as coming from me instead of the original senders (can anyone confirm that this is accurate?).

    An easy answer would be to delete my user account entirely and replace it with an alias that maps to my GMail account, but then I would lose my ability to log into the system for admin duties. So that leads me to creating a second, lesser-known account for admin use, but since it's a real account, sooner or later I'm going to get mail sent to it and I'll be back to the same situation of having a user account that doesn't get emptied periodically. I imagine I can set the quota to 0 MB to cause all incoming mail to my admin account to bounce, or set up an inbound rule to bounce everything, but this is starting to sound kludgy to me. Does anyone know of a more direct workaround for copying a user's incoming mail to an outside server and then deleting the local copy without removing their account entirely? Or is this just wishful thinking? Thanks in advance. Scott

    Read the article

  • flowchart for debugging a slow/unresponsive server

    - by davidosomething
    So the server is slow:

    Roll back to the previous known working build.
      - Success? Code problem.
      - Fail? Go on.
    Ping the IP address.
      - Success? Maybe a DNS problem; go on.
      - Fail? Server or connection problem; go on.
    Ping and tracert your domain.com from inside your network.
      - Previous step succeeded, this fails: DNS problem.
      - Success? Go on.
      - Previous step failed and:
        - Fail? Go on; could be you or the network.
        - Success? Go on.
    Try it from outside your network (http://centralops.net/co/).
      - Fail? The server's network connection sucks.
      - Success? If it failed from inside your network, your network sucks.
    Check the server load: CPU/RAM usage. Is it overloaded? (A few example commands are sketched below.)
      - Yes. Who's the culprit? Kill some processes/reboot.
      - No? Go on.

    What other steps should I add?
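
    A small illustration of the "check the server load" step, using generic Linux commands (not part of the original flowchart):

      uptime                   # load averages for the last 1/5/15 minutes
      free -m                  # RAM and swap usage in MB
      top -b -n 1 | head -20   # top CPU/memory consumers right now
      iostat -x 1 3            # per-device disk utilisation (sysstat package)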

    Read the article

  • (Zywall USG 300) NAT bypassed when accessing in-house server from LAN via domain name

    - by mschr
    My situation is like this: I host a number of websites from within our joint network solution. The network has basically three categories:

    the known public clients, registered via MAC and given a static DHCP lease
    the anonymous LAN connections, given a lease from a specific DHCP range
    switches, Unix hosts and the firewall

    Now, consider the following hosts, which are of interest:

    111.111.111.111 (ZyWALL USG 300 WAN)
    192.168.1.1 (ZyWALL USG 300 LAN) - load balances, monitors bandwidth and handles NAT
    192.168.1.2 (Linux www) - serves mydomain1.tld and mydomain2.tld
    192.168.123.123 (random LAN client) - accesses mydomain1.tld from the LAN
    23.234.12.253 (random external client) - accesses mydomain1.tld via the WAN

    DNS A records are set up so that both mydomain1.tld and mydomain2.tld point to 111.111.111.111, and the Linux www host serves the HTTP parts with VirtualHost configurations, setting up the document roots per ServerName (this is not so interesting, though). A NAT rule translates 111.111.111.111:80 to 192.168.1.2:80 (1:1 NAT).

    Our problem follows: when accessing http://mydomain1.tld from outside the joint network (the 23.234.12.253 example host), everything is fine - the ZyWALL receives requests on port 80 and maps them to the Linux host's httpd. However, when trying to go through the NAT from the LAN side (in-house, the 192.168.123.123 example host), one gets filtered by the ZyWALL's port 80 firewall. I know this only because port 443 is open for the administration interface, and https://mydomain1.tld prompts for the ZyWALL login. So my conclusion is that LAN clients accessing 111.111.111.111 are in fact routed to 192.168.1.1 while bypassing the NAT table. I need to know how to set up the NAT / policy routes so that LAN -> WAN -> LAN works with proper network translations, instead of doing this 'quick nameserver lookup' or whatever it might be.

    Read the article

  • How to move my data from my old MacBook Pro to my new one?

    - by Tim Büthe
    I just purchased a new MacBook Pro and already have a 2008 model. I wonder how I should move all my data over to the new one. My first idea was to use my Time Machine backup and restore from it, which seems to be a good idea and should work just fine according to this link: http://blog.duncandavidson.com/2008/01/restoring-from-time-machine.html. But since my current MacBook has older software on it, like iLife '08 instead of iLife '09, I would have to upgrade that afterwards. Is this correct, or does Time Machine do some magic to exclude well-known software? And is it possible to reinstall or upgrade iLife with the included installation DVDs?

    My second idea is to just swap the hard drives instead of using the Time Machine backup. If it is not too complicated to remove the drives, this should be the fastest way. This also has the benefit that the 2008 MacBook then contains a brand new installation, and I don't have to remove all my stuff or reinstall Mac OS before I give it away. My question on that second idea would be: does Snow Leopard handle this correctly - I reboot with the new hardware and everything just works fine?

    So, in a nutshell: what would you do, restore from backup or swap drives? And what about the new software?

    Read the article

  • How to use the correct SSH private key?

    - by Dail
    I have a private key at /home/myuser/.ssh/privateKey. I have a problem connecting to the SSH server, because I always get:

      Permission denied (publickey).

    I tried to debug the problem and found that ssh is reading the wrong file. Take a look at the output:

      [damiano@Damiano-PC .ssh]$ ssh -v root@vps1
      OpenSSH_5.8p2, OpenSSL 1.0.0g-fips 18 Jan 2012
      debug1: Reading configuration data /etc/ssh/ssh_config
      debug1: Applying options for vps1
      debug1: Applying options for *
      debug1: Connecting to 111.111.111.111 [111.111.111.111] port 2000.
      debug1: Connection established.
      debug1: identity file /home/damiano/.ssh/id_rsa type -1
      debug1: identity file /home/damiano/.ssh/id_rsa-cert type -1
      debug1: identity file /home/damiano/.ssh/id_dsa type -1
      debug1: identity file /home/damiano/.ssh/id_dsa-cert type -1
      debug1: Remote protocol version 2.0, remote software version OpenSSH_5.8p1 Debian-7ubuntu1
      debug1: match: OpenSSH_5.8p1 Debian-7ubuntu1 pat OpenSSH*
      debug1: Enabling compatibility mode for protocol 2.0
      debug1: Local version string SSH-2.0-OpenSSH_5.8
      debug1: SSH2_MSG_KEXINIT sent
      debug1: SSH2_MSG_KEXINIT received
      debug1: kex: server->client aes128-ctr hmac-md5 none
      debug1: kex: client->server aes128-ctr hmac-md5 none
      debug1: SSH2_MSG_KEX_DH_GEX_REQUEST(1024<1024<8192) sent
      debug1: expecting SSH2_MSG_KEX_DH_GEX_GROUP
      debug1: SSH2_MSG_KEX_DH_GEX_INIT sent
      debug1: expecting SSH2_MSG_KEX_DH_GEX_REPLY
      debug1: Server host key: RSA 74:8f:87:fe:b8:25:85:02:d4:b6:5e:03:08:d0:9f:4e
      debug1: Host '[111.111.111.111]:2000' is known and matches the RSA host key.
      debug1: Found key in /home/damiano/.ssh/known_hosts:1
      debug1: ssh_rsa_verify: signature correct
      debug1: SSH2_MSG_NEWKEYS sent
      debug1: expecting SSH2_MSG_NEWKEYS
      debug1: SSH2_MSG_NEWKEYS received
      debug1: Roaming not allowed by server
      debug1: SSH2_MSG_SERVICE_REQUEST sent
      debug1: SSH2_MSG_SERVICE_ACCEPT received
      debug1: Authentications that can continue: publickey
      debug1: Next authentication method: publickey
      debug1: Trying private key: /home/damiano/.ssh/id_rsa
      debug1: Trying private key: /home/damiano/.ssh/id_dsa
      debug1: No more authentication methods to try.

    As you can see, ssh is trying to read /home/damiano/.ssh/id_rsa, but I don't have this file: I named it differently. How can I tell SSH to use the correct private key file? Thanks!
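
    A minimal sketch of the two standard ways to point OpenSSH at a differently named key (standard OpenSSH usage, shown with the path and port from the post):

      # one-off, on the command line
      ssh -i /home/myuser/.ssh/privateKey -p 2000 root@vps1

      # or permanently, in ~/.ssh/config
      Host vps1
          HostName 111.111.111.111
          Port 2000
          User root
          IdentityFile /home/myuser/.ssh/privateKey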

    Read the article

  • How do I set up routing for 2 companies with different Internet connections on the same LAN?

    - by Clint Miller
    Here's the setup: two companies (A & B) share office space and a LAN. A second ISP is brought in; company A wants its own Internet connection (ISP A) and company B wants its own Internet connection (ISP B). VLANs are deployed internally to separate the two companies' networks (company A: VLAN 1, company B: VLAN 2, shared VoIP: VLAN 3).

    With separate VLANs it's simple enough to use separate DHCP servers (or separate scopes on the same server) to hand each company the default gateway for its own Internet connection. Static routes can be created on each gateway to point traffic destined for the other company's VLAN or the voice VLAN, so that all nodes are reachable as expected. However, I think this is a form of asymmetrical routing, right? (The path from node A1 to node B1 is not the same as the path back from node B1 to node A1.)

    Can I set up policy-based routing to correct this? In that case, can I assign the same default gateway to every device on all VLANs and create a routing policy on an L3 switch to look at the source address and forward traffic to the appropriate next hop? In that case, I want the routing logic to go like this:

    If the destination address is known, forward the traffic (traffic destined for a different VLAN).
    If the destination address is unknown, forward the traffic to ISP A's gateway if the source address is on VLAN A, or forward the traffic to ISP B's gateway if the source address is on VLAN B.

    Am I thinking about this problem in the correct way? Is there another way to solve this problem that I am overlooking?
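
    A rough sketch of the Cisco-style policy-based routing that logic maps to, with hypothetical addressing (company A on 10.1.0.0/16, ISP A's gateway at 203.0.113.1; repeat similarly for company B). "set ip default next-hop" only takes effect when the router has no explicit route for the destination, which matches the known/unknown-destination split described above:

      access-list 10 permit 10.1.0.0 0.0.255.255
      !
      route-map COMPANY-A-PBR permit 10
       match ip address 10
       set ip default next-hop 203.0.113.1
      !
      interface Vlan1
       ip policy route-map COMPANY-A-PBR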

    Read the article

  • How to fix Quicktime video color problems on nVidia chipset and Windows XP?

    - by Matthew Glidden
    My laptop frequently plays video as if in very low-color mode. Though the sound remains clear, the picture looks terrible, showing only a few shades of red, blue, or yellow (it's even worse than 8-bit color). The problem doesn't happen consistently, so I'm looking for troubleshooting advice or known solutions.

    I use a Dell Latitude D620 laptop with Windows XP, an on-board nVidia video chipset, QuickTime, and multiple monitors (laptop screen + VGA-connected LCD). Color problems happen in every application I tried: iTunes, a browser, and the standalone QuickTime player. It doesn't happen right after a reboot, so it could be from a sleep-wake cycle, or at least from being on for an extended period. Google results suggest reinstalling the nVidia drivers, which I've done several times with no change.

    I have found two workarounds:

    Reboot, sacrificing significant time and disrupting work.
    In the nVidia control panel, change the color depth to 16-bit and then back to 32-bit.

    This happens with all video playback, so it's definitely not one corrupt file. I use workaround #2 consistently, but would love a longer-term solution.

    Read the article

  • wget is working only when used with sudo

    - by Yusuf
    I've been seeing quite strange behavior from wget since yesterday. I can download files by using sudo wget, but when I try the same file with plain wget, I get this error:

      yusufh@ubuntu-yuh:~$ wget http://www.kegel.com/wine/winetricks
      --2010-12-17 09:34:11-- http://www.kegel.com/wine/winetricks
      Resolving www.kegel.com... failed: Name or service not known.
      wget: unable to resolve host address `www.kegel.com'

    and with sudo wget:

      yusufh@ubuntu-yuh:~$ sudo wget http://www.kegel.com/wine/winetricks
      --2010-12-17 09:35:37-- http://www.kegel.com/wine/winetricks
      Connecting to 127.0.0.1:5865... connected.
      Proxy request sent, awaiting response... 200 OK
      Length: 190672 (186K) [text/plain]
      Saving to: `winetricks'
      100%[==================================================================================================>] 190,672 --.-K/s in 0.03s
      2010-12-17 09:35:37 (6.92 MB/s) - `winetricks' saved [190672/190672]

    After the comments below, here is an update: I can use Google Chrome or Firefox perfectly without running them as root. I use ntlmaps to connect to the office proxy, so I need to use 127.0.0.1:5865 as the proxy for clients. The result of env | grep -i proxy:

      NO_PROXY=localhost,127.0.0.0/8,*.local,
      http_proxy=127.0.0.1:5865
      ftp_proxy=127.0.0.1:5865
      all_proxy=socks://127.0.0.1:5865/
      ALL_PROXY=socks://127.0.0.1:5865/
      https_proxy=127.0.0.1:5865
      no_proxy=localhost,127.0.0.0/8,*.local

    while sudo env | grep -i proxy is empty! HELP!
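
    As an illustration only (my suggestion, not something from the post): wget can also be given the proxy in its own configuration rather than relying on the environment, e.g. in ~/.wgetrc or /etc/wgetrc, and it expects the proxy value to be a full URL:

      use_proxy = yes
      http_proxy = http://127.0.0.1:5865/
      https_proxy = http://127.0.0.1:5865/
      ftp_proxy = http://127.0.0.1:5865/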

    Read the article

  • CentOS 5.6: How to resolve php53 RPM dependency conflict with php-mcrypt and php-common?

    - by Stefan Lasiewski
    We are running a CentOS 5.6 system and want to install php53 with php-mcrypt. However, this introduces a dependency conflict between php-common and php53-common. Does anyone have a good workaround for this problem?

      host # yum install php-mcrypt
      Loaded plugins: fastestmirror
      Loading mirror speeds from cached hostfile
       * epel: linux.mirrors.es.net
      Setting up Install Process
      Resolving Dependencies
      --> Running transaction check
      ---> Package php-mcrypt.x86_64 0:5.1.6-15.el5.centos.1 set to be updated
      --> Processing Dependency: php-api = 20041225 for package: php-mcrypt
      --> Processing Dependency: php >= 5.1.6 for package: php-mcrypt
      --> Running transaction check
      ---> Package php.x86_64 0:5.1.6-27.el5_5.3 set to be updated
      --> Processing Dependency: php-cli = 5.1.6-27.el5_5.3 for package: php
      ---> Package php-common.x86_64 0:5.1.6-27.el5_5.3 set to be updated
      --> Running transaction check
      ---> Package php-cli.x86_64 0:5.1.6-27.el5_5.3 set to be updated
      --> Processing Conflict: php53-common conflicts php-common
      --> Finished Dependency Resolution
      php53-common-5.3.3-1.el5_6.1.x86_64 from installed has depsolving problems
        --> php53-common conflicts with php-common
      Error: php53-common conflicts with php-common
       You could try using --skip-broken to work around the problem
       You could try running: package-cleanup --problems
                              package-cleanup --dupes
                              rpm -Va --nofiles --nodigest

    This is apparently a known problem (see php-devel, Bug 700179 and Bug 695708) and this post at the CentOS forums, but there is no official fix yet.

    Read the article

  • How can Django/WSGI and PHP share / on Apache?

    - by Mark Snidovich
    I have a server running an established PHP site, as well as some Django apps. Currently, a VirtualHost set up for PHP listens on port 80, and requests to certain directories are proxied to a VirtualHost set up for Django with WSGI. I'd like to change it so that Django handles anything that doesn't exist as a PHP script or static file. For example:

    /                  - parsed by PHP as index.php
    /page.php          - parsed as PHP normally
    /images/border.jpg - served as a static file
    /johnfreep         - handled by Django (interpreted by urls.py)
    /pages/john        - handled by Django
    /(anything else)   - handled by Django

    I have a few ideas. It seems the options are 'PHP first' or 'WSGI first':

    Set up Django on port 80, and set Apache to skip all the known PHP, CSS or image files, maybe using SetHandler. Anything else goes to Django to be parsed by urls.py.
    Set up a script referring everything to Django as a 404 handler on PHP. So, if a file is not found for a name, the request path is sent to a VirtualHost running Django to be parsed.
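
    A minimal sketch of the 'PHP first' idea, reusing the proxying arrangement the post already describes: mod_rewrite passes anything that is not an existing file or directory through to the Django VirtualHost. The backend address and port are placeholders, and mod_rewrite plus mod_proxy must be loaded:

      # inside the port-80 PHP VirtualHost
      RewriteEngine On
      RewriteCond %{REQUEST_FILENAME} !-f
      RewriteCond %{REQUEST_FILENAME} !-d
      RewriteRule ^/(.*)$ http://localhost:8001/$1 [P,QSA,L]
      ProxyPassReverse / http://localhost:8001/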

    Read the article

  • Have an unprivileged non-account user ssh into another box?

    - by Daniel Quinn
    I know how to get a user to ssh into another box with a key:

      ssh -l targetuser -i path/to/key targethost

    But what about non-account users like apache? As this user doesn't have a home directory to which it can write a .ssh directory, the whole thing keeps failing with:

      $ sudo -u apache ssh -o StrictHostKeyChecking=no -l targetuser -i path/to/key targethost
      Could not create directory '/var/www/.ssh'.
      Warning: Permanently added '<hostname>' (RSA) to the list of known hosts.
      Permission denied (publickey).

    I've tried variations using -o UserKnownHostsFile=/dev/null and setting $HOME to /dev/null, and none of these have done the trick. I understand that sudo could probably fix this for me, but I'm trying to avoid having to require manual server configuration, since this code will be deployed in a number of different environments. Any ideas? Here are a few examples of what I've tried that don't work:

      $ sudo -u apache export HOME=path/to/apache/writable/dir/ ssh -o StrictHostKeyChecking=no -o UserKnownHostsFile=path/to/apache/writable/dir/.ssh/known_hosts -l deploy -i path/to/key targethost

      $ sudo -u apache ssh -o StrictHostKeyChecking=no -o UserKnownHostsFile=path/to/apache/writable/dir/.ssh/known_hosts -l deploy -i path/to/key targethost

      $ sudo -u apache ssh -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null -l deploy -i path/to/key targethost

    Eventually, I'll be using this solution to run rsync as the apache user.
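
    One variation not in the list above (an assumption on my part, using the same placeholder paths): sudo cannot take a shell "export" inline, but env(1) can override HOME for just the one ssh invocation, so OpenSSH looks for .ssh under a directory apache can write to:

      $ sudo -u apache env HOME=path/to/apache/writable/dir/ \
          ssh -o StrictHostKeyChecking=no \
              -o UserKnownHostsFile=path/to/apache/writable/dir/.ssh/known_hosts \
              -l deploy -i path/to/key targethost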

    Read the article

  • Setting up Live @ EDU

    - by user73721
    [PROBLEM] Hello everyone. I have a small issue here. We are trying to get our Exchange accounts (for students only) ported over from an Exchange Server 2003 to the Microsoft cloud service known as Live@EDU. The problem we are having is that in order to do this we need to install two pieces of software:

    1: OLSync
    2: Microsoft Identity Lifecycle Manager

    "Download the Galsync.msi here" - the "here" link takes you to a page that needs a login with an admin account for Live@EDU. That part works. However, once logged in, it redirects to a page that states:

      https://connect.microsoft.com/site185/Downloads/DownloadDetails.aspx?DownloadID=26407
      Page Not Found
      The content that you requested cannot be found or you do not have permission to view it.
      If you believe you have reached this page in error, click the Help link at the top of the page to report the issue and include this ID in your e-mail: afa16bf4-3df0-437c-893a-8005f978c96c

    [WHAT I NEED] I need to download that file. Does anyone know of an alternative location for that installation file? I also need to obtain Identity Lifecycle Manager (ILM) Server 2007, Feature Pack 1 (FP1). If anyone has any helpful information, that would be fantastic! Also, if anyone has completed a migration of accounts from an on-site Exchange 2003 server to the Microsoft Live@EDU servers, any general guidance would be helpful! Thanks in advance.

    Read the article

  • Adobe e-book reader ("Digital Editions") not downloading on Mac OS X

    - by doug
    I recently bought a couple of books from ebooks.com, which I thought would be ordinary PDF files. After paying for them and downloading them, you learn that while they are PDF files, they come with a lot of DRM baggage. The most conspicuous is that you can only view these files using an Adobe ebook reader called Adobe Digital Editions. (Note: this is not the ordinary Adobe Acrobat Reader, or anything close - it's a dedicated app for reading DRM-laden files.) Fine - I'll know better next time. Still, I paid for these books and there's only one way I can actually read them, which happens to be an app that I seem to be unable to download. Here's the error message I get:

      "Couldn't write the application to the hard disk. Please verify the hard disk is available and try again"

    I've tried several different browsers. My rig is a MBP, OS X 10.6.2. I've also checked the Adobe boards and this doesn't appear to be a known issue, nor could I find anything on their discussion forums. And just to be sure, I've checked my hard disk - no problems, plenty of space, and I have no problem, nor have I ever had a problem, downloading other apps.

    Read the article
