Search Results

Search found 19788 results on 792 pages for 'remote host'.

  • LYNC / OCS... problems getting edge server working.

    - by TomTom
    New setup of Lync 2010 (i.e. OCS 2010). I have serious problems getting my edge system going. Internally things work fine; externally I am stuck. I have used the tester at https://www.testocsconnectivity.com/ and it also fails. NOTE: I use the domains xample.com / xample.local here just as examples. Here is the setup. I have two internal hosts (lync.xample.local, edge.xample.local). edge.xample.com is also correctly in DNS and points to the externally assigned IP address of edge.xample.local (its external interface). Externally, I have the following DNS entries:
    edge.xample.com
    _sip._tcp - edge.xample.com 443
    _sipfederationtls._tcp - edge.xample.com 5061
    _sipinternaltls._tcp - lync.xample.local 5061
    _sip._tls - edge.xample.com 443
    My problem is that the OCS connection test always ends up trying to contact lync.xample.local (i.e. the internal address) when connecting to [email protected]. The error is: "Attempting to Resolve the host name lync.xample.local in DNS." This shows it clearly manages to connect to SOMETHING, but it either falls through to the _sipinternaltls._tcp entry, or it wrongly receives that internal entry from the edge system. Am I missing some entries, or are some of them wrong?
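    For what it's worth, as I understand the Lync DNS layout, _sipinternaltls._tcp is an internal-only record: published in external DNS and pointing at lync.xample.local, it hands remote clients (and the connectivity tester) an unresolvable internal name, which matches the symptom above. A quick, tool-agnostic way to see what external clients actually get (run from outside the network; names follow the xample.com example):

        nslookup -type=SRV _sip._tls.xample.com
        nslookup -type=SRV _sipfederationtls._tcp.xample.com
        nslookup -type=SRV _sipinternaltls._tcp.xample.com
        nslookup edge.xample.com

    If the third query returns anything at all from the public zone, removing that record (it belongs only in internal DNS) would be my first experiment.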

    Read the article

  • Docs for OpenSSH CA-based certificate based authentication

    - by Zoredache
    OpenSSH 5.4 added a new method for certificate authentication (changes): "Add support for certificate authentication of users and hosts using a new, minimal OpenSSH certificate format (not X.509). Certificates contain a public key, identity information and some validity constraints and are signed with a standard SSH public key using ssh-keygen(1). CA keys may be marked as trusted in authorized_keys or via a TrustedUserCAKeys option in sshd_config(5) (for user authentication), or in known_hosts (for host authentication). Documentation for certificate support may be found in ssh-keygen(1), sshd(8) and ssh(1) and a description of the protocol extensions in PROTOCOL.certkeys." Are there any guides or documentation beyond what is mentioned in the ssh-keygen man page? The man page covers how to generate certificates and use them, but it doesn't really provide much information about the certificate authority setup. For example, can I sign the keys with an intermediate CA and have the server trust the parent CA? This comment about the new feature seems to mean that I could set up my servers to trust the CA, then set up a method to sign keys, and then users would not have to publish their individual keys on the server. This also seems to support key expiration, which is great, since getting rid of old/invalid keys is more difficult than it should be. But I am hoping to find more documentation describing the complete configuration (CA, SSH server, and SSH client settings) needed to make this work.
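    In case it helps, here is a minimal sketch of the flow as I read the man pages (key paths, the identity, and the 52-week validity are placeholders; it needs a reasonably recent ssh-keygen). As far as I can tell there is no chaining of CAs: the server must trust the exact key that signed the user certificate, so an intermediate-signs/parent-trusts setup is not supported.

        # on the CA machine: create a CA keypair, then sign a user's existing public key
        ssh-keygen -f /etc/ssh/ca_user_key
        ssh-keygen -s /etc/ssh/ca_user_key -I "alice@example" -n alice -V +52w /tmp/alice_id_rsa.pub
        # produces /tmp/alice_id_rsa-cert.pub, which the user keeps beside her private key

        # on each server: trust the CA once, instead of individual keys, in sshd_config:
        #     TrustedUserCAKeys /etc/ssh/ca_user_key.pub
        # then reload sshd

    The -n principals list controls which account names the certificate is valid for, and -V handles the expiration mentioned above.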

    Read the article

  • VNC authentication failure

    - by cf16
    I am trying to connect to my vncserver running on CentOS from my home computer, which is behind a firewall. I have both Win7 and Ubuntu installed on this machine. I get an error: "VNC connection failed: vncserver too many security failures". Even when logging in with the right credentials (I reset the password on CentOS) I get an authentication failure. I find that I have to wait a whole day before I can log in again at all. Could it be related to the fact that I am trying as root? It may also be important that I have to log in to the remote CentOS machine through port 6050; no other port works for me. Do I have to do something with other ports? I see that vncserver is listening on 5901 (and 5902 if another is added), and I believe the connection is established, because from time to time (after a long time) the password prompt appears... right? I have created an additional user1, with a password for CentOS and for VNC, and also a user2. I run service vncserver start and two servers start, one on :1 and a second on :2. When I try to connect to vncserverIP:1 I get what is described above, but when I try to connect to vncserverIP:2 it says the attempt was unsuccessful. Please help: what should I do? Additionally, how can I disable this lockout for testing purposes?
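    On the lockout question only (not the underlying authentication failure): the "too many security failures" message comes from Xvnc's built-in blacklisting of hosts after repeated failed attempts, and the TigerVNC/RealVNC 4.x builds shipped with CentOS expose BlacklistThreshold/BlacklistTimeout parameters to relax it. A rough sketch for testing only, assuming the stock /etc/sysconfig/vncservers layout (double-check the parameter names against your Xvnc man page):

        # /etc/sysconfig/vncservers -- effectively disables the lockout while you test
        VNCSERVERS="1:user1 2:user2"
        VNCSERVERARGS[1]="-geometry 1024x768 -BlacklistThreshold 1000000"
        VNCSERVERARGS[2]="-geometry 1024x768 -BlacklistThreshold 1000000"

        # then restart the service and re-test from the client
        service vncserver restart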

    Read the article

  • Dedicated server automatic backup solution

    - by Luigi
    I have a dedicated Ubuntu web server in a cloud environment, and I am looking for a nice way to do automated backups. I would like to back up some directories containing web apps, plus all my MySQL databases. As for destinations: snapshots every two hours locally, and every six hours to a remote FTP server. Also delete backup archives older than seven days (locally and on the FTP server), and notify me of any problems by email. To achieve some of this functionality I currently use cron + a shell script, and http://www.mysqldumper.net/, but that doesn't really answer my needs. Mysqldumper doesn't automatically know about new databases, and the shell script does not notify me of problems. It's something I have to check on from time to time, and I don't fully trust it. I googled for a while, and it seems most people solve this with shell scripts. Is this a method you can trust? Are there any web-GUI tools I'm missing? Maybe there is a smarter strategy for doing this? I'm a little confused.
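    For comparison, here is a minimal cron-driven sketch of the shell-script route (paths, FTP credentials and the mail address are placeholders; it assumes mysqldump, curl and a working local mail command are available). Using --all-databases means new databases are picked up automatically, which was one of the complaints about mysqldumper:

        #!/bin/bash
        set -euo pipefail
        STAMP=$(date +%Y%m%d-%H%M)
        DEST=/var/backups/site
        mkdir -p "$DEST"

        # dump every database in one archive, plus the web directories
        mysqldump --all-databases --single-transaction | gzip > "$DEST/mysql-$STAMP.sql.gz"
        tar czf "$DEST/webapps-$STAMP.tar.gz" /var/www

        # off-site copy (call the script with "push" only on the six-hourly runs)
        if [ "${1:-}" = "push" ]; then
            curl -sS -T "$DEST/mysql-$STAMP.sql.gz"   "ftp://user:pass@backup.example.com/daily/"
            curl -sS -T "$DEST/webapps-$STAMP.tar.gz" "ftp://user:pass@backup.example.com/daily/"
        fi

        # prune local archives older than seven days
        find "$DEST" -name '*.gz' -mtime +7 -delete

    Run it from cron every two hours, e.g. 0 */2 * * * /usr/local/bin/backup.sh || echo "backup failed" | mail -s "backup FAILED on $(hostname)" you@example.com, so any non-zero exit sends mail. Pruning old archives on the FTP side needs a second step (lftp, or a job on that server), which I've left out of the sketch.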

    Read the article

  • Integration of SharePoint 2010 with TFS2010

    - by Kabir Rao
    We have performed the following steps so far: installed TFS 2010 10.0.30319.1 (RTM) on Windows Server 2008 R2 Enterprise (app tier), and SQL 2008 SP1 with Cumulative Update 2 on Windows Server 2008 R2 Enterprise (data tier). Reporting Services is installed on the app tier. After this installation worked fine, we installed SharePoint 2010 on the app tier. We then followed http://blogs.msdn.com/b/team_foundation/archive/2010/03/06/configuring-sharepoint-server-2010-beta-for-dashboard-compatibility-with-tfs-2010-beta2-rc.aspx for configuration. We are not able to perform the last step described in the link, as the following error occurs: "TF249063: The following Web service is not available: http://apptier:31254/_vti_bin/TeamFoundationIntegrationService.asmx. This Web service is used for the Team Foundation Server Extensions for SharePoint Products. The underlying error is: The remote server returned an error: (404) Not Found. Verify that the following URL points to a valid SharePoint Web application and that the application is available: http://apptier:31254. If the URL is correct and the Web application is operating normally, verify that a firewall is not blocking access to the Web application." We have also noticed that the Documents folder in the team project has a red X. Please help. Thanks in advance.

    Read the article

  • Cannot resolve Hostname to IP, but IP to hostname works

    - by dotnetdev
    I have deployed a bunch of Windows Server VMs on a cloud hosting service. These machines are all joined to a domain controller on the same service, which also hosts DNS. All of the domain-joined machines have dynamic IPs (as does the DC). If I try to resolve any of the hostnames remotely, it fails. For example, in SQL Server Reporting Services I need to connect to a remote server: if I provide the hostname of the target server it fails, but if I provide the IP it works. How can I get the hostname to resolve to an IP? Is there anything I need to look for on the DNS server? It has records for the hostnames (in the forward lookup zone, I think), but the reverse zone is empty. Isn't it the case that forward lookup resolves IP to hostname and reverse resolves hostname to IP? Also, I don't know the subnet mask, because this is not in my control, so the machines may not be in the same subnet; could that be a cause of the problem? Where is the problem? Thanks
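    A couple of quick checks that may narrow it down (run from the SSRS box; the domain suffix below is a placeholder). For the record, forward lookup is the name-to-IP direction, so the empty reverse zone is probably not what breaks name-based connections:

        :: do the failing names resolve at all, and only when fully qualified?
        nslookup targetserver
        nslookup targetserver.yourdomain.local

        :: check "Connection-specific DNS Suffix" and that the DNS server entries point at the DC
        ipconfig /all

        :: re-register this machine's own A record with the DC's DNS
        ipconfig /flushdns
        ipconfig /registerdns

    If the fully qualified name resolves but the short name doesn't, adding the domain to the DNS suffix search list (or just using FQDNs in the SSRS data sources) should get you going.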

    Read the article

  • Apache2 - Hosting two sites on the same domain with different ports

    - by user1026361
    I am hosting a staging site (test.mydomain.com) which currently works well on port 80 for two sites (test.mydomain.com and test.FRmydomain.com). I am working on a new backend and I would like to deploy a third site on this server for testing. My hope is that it will live at test.mydomain.com:4204. I've got some experience with Apache, so I added the statements Listen 4204 and NameVirtualHost *:4204 and created a new config for my site. What I imagine are the relevant parts of my config:
    <VirtualHost *:4204>
        ServerAdmin [email protected]
        ServerName test.mydomain.com:4204
    However, the site is not publicly available, by name or by IP. If I curl localhost:4204 from the server, I get the expected page content. At this point I'm at a bit of a loss on how to go forward. It seems like my config is correct, but the vhost is not reachable from outside. Am I better off defining a proxy so that, for instance, test.mydomain.com/4204 proxies to the local server, or is there a way to make the site available via the internet? EDIT: After further Googling I added an iptables rule with the command iptables -I INPUT -p tcp --dport 4204 -j ACCEPT. I can see Apache listening on 4204 and the rule is definitely in place, but I still can't reach the site.
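    A few hedged checks, since the symptoms (works on localhost, dead from outside) usually point at something in front of the box rather than at Apache itself; many cloud providers have a security-group/edge firewall that filters non-standard ports independently of iptables:

        # is Apache bound on 0.0.0.0:4204 rather than only 127.0.0.1?
        sudo netstat -tlnp | grep ':4204'

        # does Apache itself list the :4204 vhost?
        apachectl -S

        # test from the server against its public IP, then from a machine outside the network
        curl -v http://YOUR_PUBLIC_IP:4204/
        curl -v http://test.mydomain.com:4204/

    If the external curl times out while the public-IP curl from the server works, the provider's firewall is the likely culprit, and either opening 4204 there or falling back to a ProxyPass under port 80 would do it.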

    Read the article

  • Loop through several servers, find specific DLLs, get the DLL version, internal filename and path?

    - by Graham
    I am a newbie to PowerShell, using PS v2. I can see the massive potential it has, but I just can't get the following code to work fully. I am trying to end up with a CSV file that contains the wildcard-matched DLLs in GAC_MSIL (or a sub-directory), with each DLL's version, internal filename and path, and the server IP address. The code is below, in single-line format because it appears easier to remote onto one of the servers in the server farm and run the single line from that console, due to security logins etc. The code has produced a set of results, but only for the last server; it probably processes the first server and then overwrites the results, but I am not sure about that. I have done a lot of reading about using arrays and custom objects, and had a go at that, but my scripting skills in PS are not yet up to it. Code:
    $out = "Ouput_dll_ver_results.csv"; foreach ($server in '11.222.33.123', '11.222.33.124') {$VersionInfo = (Get-ChildItem \\$server\C$\windows\assembly\GAC_MSIL -recurse -Include abc*.dll,def*.dll,ghi*.dll,jkl*.dll | Where-Object { $_.FullName -notmatch "\\windows\\assembly\\temp\\" })}; $VersionInfo | %{Get-Command $_.FullName} | select -expand File* | Export-Csv $out
    Can you please advise if/how the above code can be corrected, and if not, what alternatives I have to get the information I need? Many thanks in advance. Graham

    Read the article

  • Map FTP folder to folder on different FTP server

    - by jolt
    In my team we work a lot with FTP. We upload and download files from several different servers daily. Currently every member of the team manages access credentials to each FTP server locally on their own machine. I am looking for a way to set up a central FTP server that we can connect to, and from there navigate to folders that each represent one of the other FTP servers that we connect to daily. Something like this:
    In-house central FTP server:
    |- FolderA --> server A root folder
    |- FolderB --> server B root folder
    |- FolderC --> server C root folder
    A setup like this would mean that we can manage access credentials on the central FTP server, and team members would only need the access credentials to the central FTP server; from there they could navigate to the other servers through these "virtual" folders. We could potentially develop our own custom FTP server that just forwards requests to the remote FTP servers, but I feel like something like this (or something similar) must already have been done. So I'm looking for pointers to software for Windows that could help us simplify our current setup. Thank you! Similar (unanswered) question here: FTP management server

    Read the article

  • CPU / Affinity mask problem in SQL 2005

    - by Robert Moir
    Hi folks, I'm having a problem with a SQL Server that was virtualised. The CPU affinity mask was set on the physical host for some reason, and now advanced options are not available. So I need to reconfigure the CPU affinity mask settings, but those are advanced options, so the change is blocked by the very affinity mask issue I'm trying to fix. I've tried doing this from the SQL Server in single-user command-line mode, and I've googled and found lots of people with similar problems but no real solution. I'm stumped. Any ideas? Sample commands and output from Query Analyzer below.
    sp_configure 'show advanced options', 1
    GO
    RECONFIGURE WITH OVERRIDE
    GO
    sp_configure 'affinity mask', 0x00000000
    GO
    RECONFIGURE
    GO
    -----------------------------------------
    Configuration option 'show advanced options' changed from 0 to 1. Run the RECONFIGURE statement to install.
    Msg 5832, Level 16, State 1, Line 1
    The affinity mask specified does not match the CPU mask on this system.
    Msg 15123, Level 16, State 1, Procedure sp_configure, Line 51
    The configuration option 'affinity mask' does not exist, or it may be an advanced option.
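    One workaround that gets suggested for a stored affinity mask that no longer matches the hardware (treat this as a sketch to verify against Books Online before using it on a production box): start the instance with -f, which brings SQL Server up in minimal configuration, ignores the stored affinity setting and allows a single admin connection, fix the value, then restart normally. Roughly:

        net stop MSSQLSERVER
        :: run from the instance's Binn directory; -f = minimal configuration (implies single-user)
        sqlservr.exe -f
        :: from a second prompt:
        sqlcmd -E -S . -Q "EXEC sp_configure 'show advanced options', 1; RECONFIGURE WITH OVERRIDE; EXEC sp_configure 'affinity mask', 0; RECONFIGURE WITH OVERRIDE;"
        :: then Ctrl+C the sqlservr console and start the service normally
        net start MSSQLSERVER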

    Read the article

  • How can Standard User change file associations in Windows 2000?

    - by Gary M. Mugford
    One of my clients is still running a Win2K server with a host of Win2K workstations. And no net admin, due to the downturn of the economy over the years. I'm sort of helping out. Out of my depth, but I am a loyal foot soldier. A problem I encounter rather too often is a user double-clicking on a file in Explorer and then either getting no action, or the wrong program running. It's a case of a missing or out-of-date file association. The current cure is to temporarily upgrade the user from Standard to Power User, do the file-association switch and then change back. As Winnie would whine, 'Oh, bother!' At any rate, I thought I'd ask here. Is there a method/program to run, without the rigamarole, FROM the Standard User's account on the workstation to edit/add a file association? I assume the program route would involve RunAs. I 'believe' most of the workstations run the RunAs service, but I could be wrong. I understand that's required if there is to be a solution. Any help accepted with thanks. GM NOTE: It seems wassociate from http://www.xs4all.nl/~wstudios/Associate/index.html can resolve the issue.
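    For a built-in alternative: if the RunAs service is indeed running, the assoc and ftype commands can be run under an admin token from the user's session; they write to HKEY_CLASSES_ROOT machine-wide, which is exactly why a Standard User can't change them directly. A sketch (the extension, file type and program are made-up examples):

        runas /user:WORKSTATION\Administrator cmd
        :: then, in the prompt that opens, map the extension to a type and the type to a program
        assoc .log=txtfile
        ftype txtfile=%SystemRoot%\system32\NOTEPAD.EXE %1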

    Read the article

  • Why won't dhclient use the static IP I'm telling it to request?

    - by mike
    Here's my /etc/dhcp3/dhclient.conf:
    request subnet-mask, broadcast-address, time-offset, routers, domain-name, domain-name-servers, domain-search, host-name, netbios-name-servers, netbios-scope, interface-mtu;
    timeout 60;
    reject 192.168.1.27;
    alias {
      interface "eth0";
      fixed-address 192.168.1.222;
    }
    lease {
      interface "eth0";
      fixed-address 192.168.1.222;
      option subnet-mask 255.255.255.0;
      option broadcast-address 255.255.255.255;
      option routers 192.168.1.254;
      option domain-name-servers 192.168.1.254;
    }
    When I run "dhclient eth0", I get this:
    There is already a pid file /var/run/dhclient.pid with pid 6511
    killed old client process, removed PID file
    Internet Systems Consortium DHCP Client V3.1.1
    Copyright 2004-2008 Internet Systems Consortium.
    All rights reserved.
    For info, please visit http://www.isc.org/sw/dhcp/
    wmaster0: unknown hardware address type 801
    wmaster0: unknown hardware address type 801
    Listening on LPF/eth0/00:1c:25:97:82:20
    Sending on LPF/eth0/00:1c:25:97:82:20
    Sending on Socket/fallback
    DHCPREQUEST of 192.168.1.27 on eth0 to 255.255.255.255 port 67
    DHCPACK of 192.168.1.27 from 192.168.1.254
    bound to 192.168.1.27 -- renewal in 1468 seconds.
    I used strace to make sure that dhclient really is reading that conf file. Why isn't it paying attention to my "reject 192.168.1.27" and "fixed-address 192.168.1.222" lines?
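    A possibly useful reading of the dhclient.conf man page: neither an alias nor a lease block makes the client ask the server for that address (a static lease entry is only a fallback used when no server answers), and reject filters offers by the server's identifier, not by the IP being offered, which would explain why the 192.168.1.27 offer still gets accepted. If the goal is to request a specific address, the relevant option is dhcp-requested-address; a sketch, which the DHCP server is still free to ignore:

        # /etc/dhcp3/dhclient.conf (sketch)
        send dhcp-requested-address 192.168.1.222;
        request subnet-mask, broadcast-address, time-offset, routers,
                domain-name, domain-name-servers, host-name;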

    Read the article

  • AWS Linux EC2: yum won't run with plugins

    - by Patrick
    Short Version: yum commands on my Amazon Linux EC2 AMI only work with --noplugins. Long Version: A couple of days ago, I ran yum update at the behest of the SSH Login MoTD telling me I had updates to install. About midway through the update (specifically while updating the kernel), the update abruptly ended (79 of 138 items completed). The website I host on EC2 got weird for a few minutes, but eventually seemed to stabilize back out (maybe EC2 restarted itself?), and I didn't have further issues (other than MySQL started running out of memory, but I think that's probably unrelated to this). Today, I went to install gcc-c++ (with yum install gcc-c++). When I did, I got the following message: Loaded plugins: priorities, security, update-motd, upgrade-helper Config error: Command "updateinfo" already defined and I get that for any command I can think to run using yum. However, If I throw in the --noplugins flag, then magically it seems to work. To be clear, when I installed a different package a week ago, it worked totally correctly, so the yum update is the only thing I can think of that changed. I could find nothing on Google with regard to "updateinfo" already defined (with and without quotes). I tried running yum update --noplugins which spit out a message telling me that I should have run yum-complete-transaction instead, but proceeded to try to update something on its own. When that completed, I tried yum-complete-transaction but that gave me a message about the transactions not lining up correctly, so it removed the old transaction (Probably since I should have completed the first transaction before trying to update again, if I had known). Based on the SF question "Linux EC2 Broken Yum", I've also tried yum clean all --noplugins (fails the same with plugins) which just gives me Cleaning repos: amzn-main amzn-updates rpmforge Cleaning up everything I also tried package-cleanup --problems Loaded plugins: priorities, update-motd, upgrade-helper No Problems Found and package-cleanup --dupes Gives a lot of dupes, so I pasted them here: http://pastebin.com/VVFQEkTT instead of inline. At this point, I'm not sure what else there even is to check.
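    Not a definitive answer, but the Config error: Command "updateinfo" already defined plus the long package-cleanup --dupes list both point back at the interrupted kernel update leaving duplicate package entries (apparently including two providers of the updateinfo plugin command). The usual cleanup sequence, hedged and best done after taking an AMI/snapshot backup first, is:

        # remove the older halves of the duplicate packages (yum-utils)
        package-cleanup --cleandupes

        # clear any leftover transaction journal, then retry with plugins enabled
        yum-complete-transaction --cleanup-only
        yum update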

    Read the article

  • sudoer scheme for another web developer that retains my future control of a virtual server?

    - by Tchalvak
    Background: Virtual Private Server I have a virtual private server that I'm looking to host multiple websites on, and provide access to another web developer. I don't care about putting too many constraints on him, though I wouldn't mind isolating the site that he'll be developing from other sites on the server that I will develop. The problem: retain control Mainly what I want is to make sure that I retain control over the server in the future. I want to reserve the ability to create/promote/demote and other administrative functions that don't deal with web software. If I make him an admin, he can sudo su - and become root and remove root control from me, for example. I need him not to be able to: take away other admin permissions change the root password have control over other security/administrative functions I would like him to still be able to: install software (through apt-get) restart apache access mysql configure mysql/apache reboot edit web development configuration type files in /etc/ Other Standard Setups would be happily considered I've never really set up a good sudoers file, so simple example setups would be very useful, even if they're only somewhat similar to the settings that I'm hoping for above. Edit: I have not yet finalized permissions, standard, useful sudo setups are certainly an option, the lists above are more what I'm hoping I can do, I don't know that that setup can be done.
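    Since concrete examples were asked for, here is a rough sudoers sketch (edited with visudo) that covers the allow-list above; paths assume Debian/Ubuntu and "devuser" is a placeholder. One honest caveat: anything that can install arbitrary packages or edit files as root (apt-get, editing /etc) can usually be leveraged into full root eventually, so this limits accidents more than it stops a determined admin-level user.

        # /etc/sudoers fragment (visudo)
        Cmnd_Alias WEBDEV = /usr/bin/apt-get, \
                            /usr/sbin/service apache2 *, \
                            /usr/sbin/apache2ctl *, \
                            /sbin/reboot, \
                            sudoedit /etc/apache2/*, \
                            sudoedit /etc/mysql/*

        devuser ALL = (root) WEBDEV

    He would then run things like sudo service apache2 restart or sudoedit /etc/apache2/apache2.conf, while passwd, visudo and user management stay out of reach. MySQL access itself doesn't need sudo at all if you simply create him a MySQL account.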

    Read the article

  • IIS doesn't serve certain file extensions

    - by Alekc
    Hi, I have this weird issue on a Win 2k3 server with IIS. IIS has several sites, and in one of them I need to create a subdirectory and set it up as a web application. I've noticed that if I create a new directory and put some .js/.txt files into it, they will not be served by IIS (IE gives the error "Internet Explorer cannot display the webpage"). If I put the same files in one of the older site's subdirectories, they show correctly. By sniffing traffic I've seen that IIS replies with status 200 and then completely drops the connection:
    http://domain.com/test2/prova.txt
    GET /test2/prova.txt HTTP/1.1
    Host: domain.com
    User-Agent: Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.9.1.7) Gecko/20091221 Firefox/3.5.7 (.NET CLR 3.5.30729)
    Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
    Accept-Language: en-us,en;q=0.5
    Accept-Encoding: gzip,deflate
    Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.7
    Keep-Alive: 300
    Connection: keep-alive
    HTTP/1.x 200 OK
    If I rename prova.txt to prova.asp, for example, it shows without problems, so it shouldn't be a permissions issue. After some research I found that this can be caused by missing MIME types; I've checked, and .txt and .js are present and mapped to aspnet_isapi.dll. And here is another weird thing: if I remove the MIME mapping from the directory's properties, .txt is served correctly, but the same trick doesn't work with .js. I'm really running out of ideas; does anyone have a hint? Thanks in advance.

    Read the article

  • Apache2: 400 Bad Request with Rewrite Rules, nothing in error log?

    - by neezer
    This is driving me nuts. Background: I'm using the built-in Apache2 & PHP that comes with Mac OS X 10.6. I have a vhost set up as follows:
    NameVirtualHost *:81
    <Directory "/Users/neezer/Sites/">
        Options Indexes MultiViews
        AllowOverride None
        Order allow,deny
        Allow from all
    </Directory>
    <VirtualHost *:81>
        ServerName lobster.dev
        ServerAlias *.lobster.dev
        DocumentRoot /Users/neezer/Sites/lobster/www
        RewriteEngine On
        RewriteCond $1 !^(index\.php|resources|robots\.txt)
        RewriteCond %{REQUEST_FILENAME} !-f
        RewriteCond %{REQUEST_FILENAME} !-d
        RewriteRule ^(.*)$ index.php/$1 [L,QSA]
        LogLevel debug
        ErrorLog /private/var/log/apache2/lobster_error
    </VirtualHost>
    This is in /private/etc/apache2/users/neezer.conf. My code in the lobster project is PHP with the CodeIgniter framework. Trying to load http://lobster.dev:81/ gives me: 400 Bad Request. Normally I'd go check my logs to see what caused it, yet my logs are empty! I looked in both /private/var/log/apache2/error_log and /private/var/log/apache2/lobster_error, and neither records ANY message relating to the 400. I have LogLevel set to debug in /private/etc/apache2/http.conf. Removing the rewrite rules gets rid of the error, but these same rules work on my MAMP host. I've double-checked and rewrite_module is loaded in my default Apache installation. My http.conf can be found here: https://gist.github.com/1057091 What gives? Let me know if you need any additional info. NOTE: I do NOT want to add the rewrite rules to .htaccess in the project directory (it's checked into a git repo and I don't want to touch it).
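    One hedged debugging suggestion, since the error log stays silent: the Apache shipped with 10.6 is 2.2, and mod_rewrite there has its own log that can be switched on per vhost to show exactly which rule fires before the 400 comes back. Something like this inside the <VirtualHost *:81> block (remove it once done, it's verbose and slow):

        RewriteLog /private/var/log/apache2/lobster_rewrite.log
        RewriteLogLevel 9

    Seeing the substituted URL in that log usually makes it obvious whether the index.php/$1 path-info form is what the server is rejecting.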

    Read the article

  • Cannot access domain from windows 2003 client

    - by Peuge
    Hey all, first off, I am a novice at AD and DNS, so please bear with me. This is my current situation: I have one server which is a DC and DNS server (Win2k3), Machine1. I have another machine which is trying to join this domain, Machine2; it is also a Win2k3 server. This is what I have done so far: I have set up DNS on the DC, and its TCP/IP DNS setting points to itself. On Machine2 I have set the DNS to point to the DC. DNS has been set up with a forward lookup zone with the same name as the domain (accdirect.com). I can ping Machine1 from Machine2 by its FQDN and IP. I have set up forwarders on the DC for our ISP's DNS and can browse the internet on both machines. In the DNS MMC on the DC I can see a host (A) record has been created for Machine2. The problem is I still cannot join the domain. When I try to join the domain via My Computer > Properties, it brings up the username/password box, and after I click OK it says it cannot find the domain accdirect.com. If I run dcdiag /s:accdirect.com /u:accdirect.com\admin /p: from Machine2, I get the following:
    Performing initial setup:
    ** Warning: could not confirm the identity of this server in the directory versus the names returned by DNS servers. If there are problems accessing this directory server then you may need to check that this server is correctly registered with DNS
    [accdirect.com] Directory Binding Error 1722: Win32 Error 1722
    This may limit some of the tests that can be performed.
    Done gathering initial info.
    On the DC all dcdiag and netdiag tests pass. If anyone could help me I would really appreciate it! Sorry if any of my terminology is a bit off, I have only been doing this for two days. Thanks, Peuge
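    Error 1722 is an RPC failure, but before chasing RPC it's worth confirming that Machine2 can actually locate the DC through DNS: a domain join finds a DC via the SRV records under _msdcs, not via the plain A record you can already ping. A couple of hedged checks (names follow the accdirect.com example):

        :: from Machine2 -- both should come back with the DC's host name and IP
        nslookup -type=SRV _ldap._tcp.dc._msdcs.accdirect.com
        nslookup -type=SRV _kerberos._tcp.dc._msdcs.accdirect.com

        :: if they don't resolve, re-register the DC's records on Machine1, then re-check
        net stop netlogon
        net start netlogon

    If the SRV records are still missing after Netlogon re-registers, that usually means the zone was created after the DC was promoted or dynamic updates are disabled on it, which would also fit the dcdiag warning about the server not being correctly registered in DNS.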

    Read the article

  • How to get a new-pssession in PowerShell to talk to my ICS-connected laptop for Remoting

    - by Scott Bilas
    If I have my laptop on the LAN, then Powershell remoting works fine from my workstation to the laptop. However, the LAN is wireless, and so sometimes I will connect on a wire to my workstation. It has two ethernet ports so I have the secondary wired up to share to the laptop using Win7's Internet Connection Sharing. (Btw I know that avoiding ICS would solve the problem, but that's not an option right now.) So my question is: what magic registry bits or command line options do I need to flip to get remoting to work to my laptop through ICS? Here's what happens when I try it: new-pssession -computername 192.168.137.161 [192.168.137.161] Connecting to remote server failed with the following error message : The WinRM client cannot process the request. Default authentication may be used with an IP address under the following conditions: the transport is HTTPS or the destination is in the TrustedHosts list, and explicit credentials are provided. Use winrm.cmd to configure TrustedHosts. Note that computers in the TrustedHosts list might not be authenticated. For more information on how to set TrustedHosts run the following command: winrm help config. For more information, see the about_Remote_Troubleshooting Help topic. + CategoryInfo : OpenError: (System.Manageme....RemoteRunspace:RemoteRunspace) [], PSRemotingTransportException + FullyQualifiedErrorId : PSSessionOpenFailed I'm having a hard time understanding the documentation for PowerShell and WinRM. I've tried messing with allowing ports in the firewall and setting TrustedHosts to * on my workstation (don't think this is a good idea on the laptop). I have no idea where to go from here, would appreciate any help.
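    One hedged observation on that error text: it is really two requirements joined by an "and" -- when you target a bare IP, the address must be in the client's TrustedHosts list and the call must pass explicit credentials, because Kerberos can't authenticate against an IP. So on the workstation (elevated prompt), something like:

        winrm set winrm/config/client @{TrustedHosts="192.168.137.161"}

    and then connect with new-pssession -computername 192.168.137.161 -credential laptop\yourusername (names are placeholders). The other ICS-specific gotcha is the laptop's firewall profile: behind ICS the connection may be classed as a new/public network, which blocks WinRM's default TCP 5985, so that port may need allowing for the 192.168.137.x subnet as well.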

    Read the article

  • Ubuntu not detecting CD drives

    - by Mirage
    I have installed Ubuntu 10.04 on my computer, which has 6 CD drives. Initially I had Windows Server 2008, and after I installed the Marvell RAID SATA controller driver, Windows detected all 6 drives. Ubuntu is detecting only 3 drives, and I have not found Marvell drivers for Ubuntu, though I have the drivers for Windows 2008. My question: if I have a virtual machine inside Ubuntu using VMware Workstation and I install that driver in the VM, can the VM then detect these 6 drives, or does the host have to detect the drives first before the VMs can use them? Ubuntu shows this in the terminal:
    *-cdrom:0
      description: DVD-RAM writer
      product: DVDRAM GSA-H10N
      vendor: HL-DT-ST
      physical id: 0.0.0
      bus info: scsi@0:0.0.0
      logical name: /dev/cdrom2
      logical name: /dev/cdrw2
      logical name: /dev/dvd2
      logical name: /dev/dvdrw2
      logical name: /dev/scd0
      logical name: /dev/sr0
      version: JL10
      capabilities: removable audio cd-r cd-rw dvd dvd-r dvd-ram
      configuration: ansiversion=5 status=nodisc
    *-cdrom:1
      description: DVD writer
      product: DVDRRW GWA-4164B
      vendor: HL-DT-ST
      physical id: 0.1.0
      bus info: scsi@0:0.1.0
      logical name: /dev/cdrom
      logical name: /dev/cdrw
      logical name: /dev/dvd
      logical name: /dev/dvdrw
      logical name: /dev/scd1
      logical name: /dev/sr1
      version: 1.01
      serial: [HL-DT-STDVDRRW GWA-4164B1.0105/05/12
      capabilities: removable audio cd-r cd-rw dvd dvd-r
      configuration: ansiversion=5 status=nodisc
    Is it detecting all the drives, or are those logical names just aliases for the same devices?

    Read the article

  • .NET Framework 1.1 on IIS 7

    - by Zack Peterson
    I have inherited a .NET Framework 1.1 web site that I must host with IIS 7 on Windows Server 2008. I'm having some trouble. 1. Installation I installed .NET Framework 1.1 following these instructions. The installation automatically created a new Application Pool "ASP.NET 1.1". I use that. 2. Trouble When I launch the web site I see web.config runtime errors: The tag contains an invalid value for the 'culture' attribute. I fix that one and then see: Child nodes are not allowed. I don't want to keep playing this whack-a-mole game. Something must be wrong. 3. Am I sure this is .NET 1.1? I examine the automatically created application pool. I see that it's 1.1. Advanced Settings... Basic Settings... This doesn't seem right. While 1.1 is set, it's not an option in the Advanced drop down selectors. And why in the Basic box is it just "v1.1" and not ".NET Framework v1.1.4322"? That would be more consistent. 4. I cannot create other .NET 1.1 App Pools I cannot select .NET Framework 1.1 for other application pools. It's not an option in the drop down selectors. What's up with that? What now? Why isn't v1.1 an option for all AppPools? How can I verify my application is in fact using .NET Framework 1.1? Why might I get these runtime errors?

    Read the article

  • Shared firewall or multiple client specific firewalls?

    - by Tauren
    I'm trying to determine if I can use a single firewall for my entire network, including customer servers, or if each customer should have their own firewall. I've found that many hosting companies require each client with a cluster of servers to have their own firewall. If you need a web node and a database node, you also have to get a firewall, and pay another monthly fee for it. I have colo space with several KVM virtualization servers hosting VPS services to many different customers. Each KVM host is running a software iptables firewall that only allows specific ports to be accessed on each VPS. I can control which ports any given VPS has open, allowing a web VPS to be accessed from anywhere on ports 80 and 443, but blocking a database VPS completely to the outside and only allowing a certain other VPS to access it. The configuration works well for my current needs. Note that there is not a hardware firewall protecting the virtualization hosts in place at this time. However, the KVM hosts only have port 22 open, are running nothing except KVM and SSH, and even port 22 cannot be accessed except for inside the netblock. I'm looking at possibly rethinking my network now that I have a client who needs to transition from a single VPS onto two dedicated servers (one web and one DB). A different customer already has a single dedicated server that is not behind any firewall except iptables running on the system. Should I require that each dedicated server customer have their own dedicated firewall? Or can I utilize a single network-wide firewall for multiple customer clusters? I'm familiar with iptables, and am currently thinking I'll use it for any firewalls/routers that I need. But I don't necessarily want to use up 1U of space in my rack for each firewall, nor the power consumption each firewall server will take. So I'm considering a hardware firewall. Any suggestions on what is a good approach?

    Read the article

  • Unable to ping domain.local, but can ping server.domain.local

    - by Force Flow
    I have a single Windows 2008 server running Active Directory, Group Policy, and DNS. DHCP is running from the firewall (there are multiple branch locations and each location has its own firewall supplying DHCP; but for this problem, the server and workstation are at the same location). On an XP workstation, if I try to visit \\domain.local or ping domain.local, the workstation can't find it. A ping returns "Ping request could not find host domain.local". If I try to visit \\server or \\server.domain.local, or ping server or server.domain.local, I'm able to connect normally. If I ping or visit domain.local on the server, I'm able to connect normally. A records are in place in the DNS service for server, domain.local, and server.domain.local. A reverse lookup zone is also enabled and PTR records are in place. If I wait 20-30 minutes, I am eventually able to ping and visit domain.local, but when attempting to ping, it takes 30 seconds to return an IP address. I am also unable to join a new workstation to the domain during this wait period; if I try, the error message returned is "network path not found". Is there something I'm missing?
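    A hedged pointer: the name domain.local itself resolves from the A record at the root of the zone (it shows up as "(same as parent folder)" in the DNS console), and that record is maintained by the DC's Netlogon service, so if it is being scavenged or re-registered slowly you get exactly this intermittent delay. Some checks/fixes from the DC (the IP is a placeholder):

        :: does the zone root have an A record right now?
        nslookup domain.local 127.0.0.1

        :: ask Netlogon to re-register the DC's DNS records
        net stop netlogon
        net start netlogon

        :: or add the zone-root record by hand ("@" is the zone root)
        dnscmd . /recordadd domain.local @ A 192.168.1.10

    It is also worth checking that the firewall's DHCP hands out only the DC as DNS server and domain.local as the connection-specific suffix, since an external resolver in the workstation's list will happily fail .local lookups.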

    Read the article

  • Using git through Cygwin on Windows 8

    - by 9point6
    I've got a Windows 8 Developer Preview machine (not sure if it's relevant, but I never had this hassle on Win7) and I'm trying to clone a git repo from GitHub. The problem is that my ~/.ssh/id_rsa has 440 permissions and it needs to be 400. I've tried chmodding it, but any change to the user permission bits gets mirrored in the group bits (i.e. chmod 600 results in 660, etc.). This appears to be consistent for every file in the whole filesystem. I've tried messing with the ACLs, but to no avail (full control for my user and deny for everyone resulted in 000). Here are a few outputs to help:
    $ git clone [removed]
    Cloning into [removed]...
    @@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
    @ WARNING: UNPROTECTED PRIVATE KEY FILE! @
    @@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
    Permissions 0660 for '/home/john/.ssh/id_rsa' are too open.
    It is required that your private key files are NOT accessible by others.
    This private key will be ignored.
    bad permissions: ignore key: /home/john/.ssh/id_rsa
    Permission denied (publickey).
    fatal: The remote end hung up unexpectedly
    $ ll ~/.ssh
    total 6
    -r--r----- 1 john None 1675 Nov 30 19:15 id_rsa
    -rw-rw---- 1 john None 411 Nov 30 19:15 id_rsa.pub
    -rw-rw-r-- 1 john None 407 Nov 30 18:43 known_hosts
    $ chmod -v 400 ~/.ssh/id_rsa
    mode of `/home/john/.ssh/id_rsa' changed from 0440 (r--r-----) to 0400 (r--------)
    $ ll ~/.ssh
    total 6
    -r--r----- 1 john None 1675 Nov 30 19:15 id_rsa
    -rw-rw---- 1 john None 411 Nov 30 19:15 id_rsa.pub
    -rw-rw-r-- 1 john None 407 Nov 30 18:43 known_hosts
    $ set | grep CYGWIN
    CYGWIN='sbmntsec ntsec server ntea'
    I realize I could use msysgit or something, but I'd prefer to be able to do everything from a single terminal. Edit: msysgit doesn't work either, for the same reasons.

    Read the article

  • BackupExec 12 + RALUS - VERY slow backups

    - by LVDave
    We use Backup Exec 12 and the Remote Agent for Linux/Unix Servers (RALUS) to back up a large RHEL5 system. For various reasons we need to do a daily working-set job. These working-set jobs run abysmally slowly. The link between the target machine and the BE server is gigabit, and any other type of job runs at 1-3 GB/min. The working-set jobs start out at perhaps 40 MB/min, and over the course of the backup the rate slowly drops so low that the job-rate display in "current jobs" goes blank. Since we usually only back up one day's changed files, the job is usually small and finishes overnight, and we don't worry about the slowness; but we had some issues with the backup server and missed about 6 days of fairly heavy work on the Linux box, so this working-set job will be a doozy. We have support with Symantec, and I've pestered them a lot about this; they've had me run RALUS in debug mode, I sent them that log and a VXgather from the BE host, and they had no fix/workaround. To give an idea, the working-set job I mentioned has been running for the last 3 1/2 hours and has backed up just under 10 megabytes. I'm posting this here to see if anybody in the "real world" has seen this and/or has any ideas about what might be causing these abysmally slow jobs, since Symantec seems to be clueless.

    Read the article
