Search Results

Search found 9715 results on 389 pages for 'servers'.

  • Exchange 2010: Replication Service Still Trying to Replicate Deleted Mailbox Store

    - by ThaKidd
    In advance, thank you for your opinions! I just migrated from Server/Exchange 2003 to Server 2008 R2 running Exchange 2010. I had an extra mailbox store that appeared with some system mailboxes in it. I used the EMS to move those mailboxes over and then deleted the store out of the EMC. Since then, every so often I get an error in Event Viewer:

        Source: MSExchangeRepl
        ID: 4098
        Error: The Microsoft Exchange Replication service couldn't find a valid configuration for database '5f012f40-3bad-4003-a373-dbc0ffb6736f' on server 'EXCHSERVER'. Error: (nothing after this)

    I can confirm that the above GUID is the mailbox store that I deleted. No other Exchange errors occur. How can I tell the Exchange Replication service to ignore this store? The setup: one Exchange 2003 server transitioned over to 2010, no other Exchange servers. Is there a way to fix this? Do I need to change a setting to stop replication? I plan to add a second Exchange server in the next few days, so stopping replication would be a bad thing. Thanks again in advance. Jason
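
    To confirm the GUID no longer maps to a live database before hunting down the leftover object, a quick check from the Exchange Management Shell may help (a sketch, using the server name from the event):

        Get-MailboxDatabase -Server EXCHSERVER | Format-List Name,Guid

    If the GUID from the event is absent from that list, the replication service is complaining about a stale configuration entry left behind by the deleted store rather than a real database.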

  • Citrix Performance monitoring

    - by Dr I
    Hi people, I had a strange thing appear on my Citrix farm today. My users are equipped with Axel Model 80F thin clients, and today one of them ran into a problem. He opened a Citrix published desktop session (hosted by a farm of Windows 2003 R2 SP2 servers), loaded Lotus Notes, and opened a mail that contained a PDF attachment. Once he opened the PDF file, his session froze. We rebooted the thin client and logged back in to the session (which hadn't been closed during the process). Once we had logged in again, we tried to read the PDF, and once again, after half a page, the session froze (I could see the mouse moving on the screen but couldn't do anything). Then I closed the session, rebooted the thin client properly, and, ta-da, with the same steps everything worked and we didn't face any freeze. Now my question: in your opinion, does this bug come from the thin client or from the server? I've checked my farm and I don't have any alerts in the Citrix monitoring console logs. My guess is that it's the thin client, BUT I don't have enough monitoring tooling to be sure of that. So, do you have any good monitoring tools or methods? My config: Windows 2003 R2 SP2, Citrix XenApp 5.0.

  • DrayTek 2820 configuration using public IP addresses

    - by Kev
    I have a /29 range of public IP addresses assigned to me by my ISP. I'm trying to configure a SIP VoIP handset to register with my VoIP provider, who recommend using public IP addresses rather than NAT. I have a DrayTek 2820 router flashed with the latest firmware, and I have configured the router as per DrayTek's FAQ "How do I use a public subnet on the LAN (non-NAT operation)?". My IP range is:

        xx.xx.94.16 -> xx.xx.94.23

    which gives a usable range of:

        xx.xx.94.17 -> xx.xx.94.22

    My router's public IP address is xx.xx.94.17, and the SIP VoIP handset is allocated xx.xx.94.18. I have a second internet connection, and via that I can ping the handset. However, for some reason I can't get it to register with the provider. I tried adding a new firewall filter to pass traffic through from WAN to LAN:

        Source: ANY, Destination: xx.xx.94.18, UDP - Ports 1024 -> 65535

    Out of interest I also tried opening port 80 to see if I could browse to the phone's admin web interface, but no joy. I know that my ISP isn't blocking inbound service ports, because I NAT port-forwarded port 80 to one of my internal web servers and it rendered a test page I had set up. All the NAT settings are reset to factory defaults, i.e. there are no Port Redirection, DMZ Host, Open Ports or Address Mappings configured. The handset I'm using is a GrandStream GXP-2000. Is there anything else I should be doing?
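
    To rule out the SIP port itself, it may be worth probing the handset from the second connection (a sketch; this assumes the handset registers on the default SIP port 5060, which is worth confirming in the GrandStream's settings):

        # UDP probe of the handset's SIP port from an outside host
        nmap -sU -p 5060 xx.xx.94.18

    A UDP result of "open|filtered" is inconclusive, but a hard "closed" would point at the router's filter rules rather than the provider. Note also that registration is initiated outbound by the phone, so the WAN-to-LAN filter mainly matters for return traffic and incoming calls.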

  • OpenVPN and TomatoVPN

    - by Bill Johnson
    Wondering if someone can help me with the following. I have updated my Linksys router with TomatoVPN and used the following config:

        Interface Type: TAP
        Protocol: UDP
        Port: 1195
        Firewall: Custom
        Authorization Mode: Static Key

    I then inserted the static key generated in OpenVPN, saved, and started the service. My connect.ovpn:

        # Use the following to have your client computer send all traffic through your router
        # (remote gateway)
        remote (entered my DNS/DHCP server's external IP address here)
        port 1195
        dev tap
        secret static.key.txt
        proto udp
        comp-lzo
        route-gateway 192.168.1.1
        redirect-gateway
        float

    I've then placed my static key in a file (static.key.txt) in the same directory as connect.ovpn. Now, OpenVPN is installed on a laptop that I use at home. I plugged the laptop in to my home connection and started connect.ovpn. The local area connection is connected as 'Home Network 3'; when I start OpenVPN it connects as 'Local Area Connection 2', which shows as 'Unidentified Network', and it appears there is no network access. The adapter's name is TAP-Win32 Adapter V9, and its IP and DNS properties are set to automatic. If I open the OpenVPN GUI, an error message says "Connecting to connect has failed". One line of the error message behind this pop-up says:

        TCP/UDP: Socket bind failed on local address [undef]:1195 Address already in use (WSAEADDRINUSE)

    Could anyone possibly help me further with this, please?
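
    The WSAEADDRINUSE line suggests something on the laptop is already bound to UDP 1195, often a second OpenVPN instance (GUI plus service). A quick way to check on Windows (a sketch; 1234 is a placeholder for whatever PID netstat reports):

        netstat -ano | findstr ":1195"
        tasklist /FI "PID eq 1234"

    If the owning process is another OpenVPN instance, stop it (or the OpenVPN Windows service) before launching the connection from the GUI.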

  • I need advice about iSCSI + ZFS (or NTFS) + Windows 2008 clustering

    - by Fatih
    I want to set up a storage farm with iSCSI. I have two cluster node machines and one iSCSI target machine with 8 TB installed as RAID 10. The capacity is 8 TB now, but I'll upgrade it in the future. Say I install the cluster as a file server, connect these servers to the iSCSI target, and then share the 8 TB capacity with the Windows users as a single folder. Users now see only one folder whose capacity is 8 TB. But if I add another 8 TB to expand the main capacity, the users must not see a second folder for the new 8 TB: they must still see only one folder, as before, but with its capacity expanded to 16 TB. And so on: if I add another 8 TB, the users must still deal with only one folder. For this purpose, I've learnt that ZFS can expand its size without a problem. But if I use ZFS as the file system on the iSCSI LUNs, how can the cluster machines see the ZFS? The cluster machines run Windows 2008. Is there another way to expand the size of the shared folder without a problem? Does NTFS support it?
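
    On the NTFS half of the question: Windows Server 2008 can extend an NTFS volume online after the underlying LUN has been grown on the target, so users keep seeing one folder that simply gets larger. A minimal sketch with diskpart (the volume number is hypothetical; for a clustered disk the usual maintenance-mode precautions apply):

        diskpart
        DISKPART> list volume
        DISKPART> select volume 3
        DISKPART> extend

    As for ZFS: Windows can't mount ZFS natively, so with Windows 2008 cluster nodes the pool would have to live on the target side, exposed to Windows as block-level iSCSI LUNs that are then formatted NTFS.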

  • ssh_exchange_identification: Connection closed by remote host

    - by Charlie Epps
    First:

        $ ssh-keygen -t dsa -P '' -f ~/.ssh/id_dsa
        $ cat ~/.ssh/id_dsa.pub >> ~/.ssh/authorized_keys

    Connecting to the SSH server gives this message:

        $ ssh -vvv localhost
        OpenSSH_5.3p1, OpenSSL 0.9.8m 25 Feb 2010
        debug1: Reading configuration data /etc/ssh/ssh_config
        debug1: Applying options for *
        debug2: ssh_connect: needpriv 0
        debug1: Connecting to localhost [127.0.0.1] port 22.
        debug1: Connection established.
        debug1: identity file /home/charlie/.ssh/identity type -1
        debug1: identity file /home/charlie/.ssh/id_rsa type -1
        debug3: Not a RSA1 key file /home/charlie/.ssh/id_dsa.
        debug2: key_type_from_name: unknown key type '-----BEGIN'
        debug3: key_read: missing keytype
        debug3: key_read: missing whitespace
        debug3: key_read: missing whitespace
        debug3: key_read: missing whitespace
        debug3: key_read: missing whitespace
        debug3: key_read: missing whitespace
        debug3: key_read: missing whitespace
        debug3: key_read: missing whitespace
        debug3: key_read: missing whitespace
        debug3: key_read: missing whitespace
        debug3: key_read: missing whitespace
        debug2: key_type_from_name: unknown key type '-----END'
        debug3: key_read: missing keytype
        debug1: identity file /home/charlie/.ssh/id_dsa type 2
        ssh_exchange_identification: Connection closed by remote host

    My /etc/hosts.allow is as follows:

        sshd: ALLOW

    /etc/hosts.deny is as follows:

        ALL: ALL: DENY

    I have changed my /etc/ssh/sshd_config as follows:

        ListenAddress 0.0.0.0
        Protocol 2
        # HostKey for protocol version 1
        #HostKey /etc/ssh/ssh_host_key
        # HostKeys for protocol version 2
        HostKey /etc/ssh/ssh_host_rsa_key
        HostKey /etc/ssh/ssh_host_dsa_key
        RSAAuthentication yes
        PubkeyAuthentication yes
        #AuthorizedKeysFile .ssh/authorized_keys
        PasswordAuthentication no
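
    One plausible cause sits in the TCP wrappers files: in hosts.allow the second field is a client list, so "sshd: ALLOW" permits only a host literally named ALLOW, and the catch-all in hosts.deny then rejects everything else; a wrapper denial produces exactly "ssh_exchange_identification: Connection closed by remote host". A minimal sketch of the usual form:

        # /etc/hosts.allow -- permit sshd from anywhere (tighten the client list as needed)
        sshd: ALL

        # /etc/hosts.deny -- refuse anything not matched above
        ALL: ALL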

  • Windows or Linux for VPN-VPN Bridge

    - by James
    I have the following network layout:

        Network1 ----VPN1----- Network2 ----VPN2---- Network3

    I can administer everything in Network1 only, and my goal is to get to a box on Network3. I've been told by the admins of Network2 that it's not possible for them to route traffic from Network1 to Network3. I've finally been authorised to host a box in Network2, and I'm hoping that with this I can set something up to resolve the issue. My question is whether I should set this box up as Windows or Linux. My initial thought was to use iptables to reroute requests, but with my lack of experience with Windows Server (used for something or other in Network2) I'm not sure if this will work. My head's full of questions like:

    - can I get an IP without logging in to a Windows domain?
    - if I do get an IP, do Windows servers manage routing through the VPN?
    - can I make a Linux box authenticate with Windows Server to log on to the domain?
    - would it just be easier to set up a Windows box?
    - is it possible to configure a Windows box to route from Network1 to Network3?

    Has anyone done anything like this before? Had experience managing Windows Server? Authenticated (or not, as the case may be) to a Windows domain? I'd really appreciate your advice. It might be worth mentioning that the overall objective is to establish a telnet connection from a box on Network1 to a box on Network3.
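
    On the Linux option: since the end goal is a single telnet connection, a small iptables port-forward on the Network2 box may be enough, with no routing changes needed on either VPN. A sketch, with hypothetical addresses (10.3.0.5 standing in for the target box on Network3):

        # allow the box to forward packets
        sysctl -w net.ipv4.ip_forward=1
        # rewrite inbound telnet to the Network3 target
        iptables -t nat -A PREROUTING -p tcp --dport 23 -j DNAT --to-destination 10.3.0.5:23
        # masquerade so replies come back through this box
        iptables -t nat -A POSTROUTING -p tcp -d 10.3.0.5 --dport 23 -j MASQUERADE

    Network1 then telnets to the Network2 box's own address, and the traffic is relayed on to Network3.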

  • SCVMM 2008 R2 problems migrating VM from VS2005 to Hyper-V host

    - by Scott Ivey
    I have System Center Virtual Machine Manager 2008 R2 installed, with one Hyper-V R2 host and one Virtual Server 2005 host. I'm trying to migrate my machines from the VS2005 host to the Hyper-V host, and I keep getting the following error:

        VMM is unable to complete the requested file transfer. The connection to the HTTP server myserver.mydomain.local could not be established. (Unknown error (0x80072efd))
        Recommended Action: Ensure that the HTTP service and/or the agent on the machine myserver.mydomain.local are installed and running and that a firewall is not blocking HTTPS traffic.

    (Note: migrations between Hyper-V hosts managed by the VMM server work fine; my problem is just going from VS2005 to Hyper-V hosts.) I have no firewalls turned on on either of the servers, and no firewalls in the middle. I've looked all over for answers to this problem and am getting nowhere. All the articles I find when searching talk about either V2V or P2V, and I'm just trying to do a straight VM migration. I've tried rebooting the boxes, changing the BITS SSL port number, restarting services, triple-checking firewalls, etc. Does anyone have any good suggestions as to how I can resolve this problem?

  • mdadm cron job sends email that cron has run

    - by Andrew
    I've got an Ubuntu 8.04 server using mdadm to create several RAID1 arrays. I created /etc/cron.hourly/mdadm as follows:

        #! /bin/sh
        set -e
        mdadm --monitor /dev/md0 /dev/md3 /dev/md4 --oneshot

    (Yes, the array numbers are not sequential, and I'm not using --scan because I have a degraded array that may or may not have been used as swap and that I can't delete, but I think that's a separate issue. If it's the underlying cause of this, I need to fix it.) mdadm sends me email (configured in /etc/mdadm/mdadm.conf) on DegradedArray etc. events. This is the desired behaviour. What is not desired, and what I can't work out, is why cron is sending me (relatively pointless) emails via an alias in /etc/aliases:

        From: root@<hostname> (Cron Daemon)
        To: root@<hostname>
        Subject: Cron <root@<hostname>> cd / && run-parts --report /etc/cron.hourly
        Content-Type: text/plain; charset=ANSI_X3.4-1968
        X-Cron-Env: <SHELL=/bin/sh>
        X-Cron-Env: <PATH=/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin>
        X-Cron-Env: <HOME=/root>
        X-Cron-Env: <LOGNAME=root>
        Message-Id: <id@hostname>
        Date: Fri, 7 May 2010 13:17:01 +0930 (CST)

        /etc/cron.hourly/mdadm:
        mdadm: Monitor using email address "<root_alias@domain>" from config file

    I've got a dozen other servers behaving correctly (mdadm sends email, cron doesn't) with identical /etc/crontab files:

        # /etc/crontab: system-wide crontab
        # <snip comments>
        SHELL=/bin/sh
        PATH=/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin
        # m h dom mon dow user command
        17 * * * * root cd / && run-parts --report /etc/cron.hourly
        <snip anacron jobs>

    Should I simply remove the --report, or is there something else in my cron config somewhere causing this?
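
    The trigger is visible in the mail body: run-parts --report emails whenever a job produces output, and on this box mdadm --monitor prints its "Monitor using email address ..." informational line to stdout. If that line is the only noise, silencing stdout in the cron script keeps the DegradedArray mails (sent separately by mdadm itself) while stopping the cron mails. A sketch:

        #! /bin/sh
        set -e
        # discard mdadm's informational stdout; real events still go out via mdadm's own mail
        mdadm --monitor /dev/md0 /dev/md3 /dev/md4 --oneshot > /dev/null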

  • iTunes' clandestine proxy settings

    - by pilcrow
    Problem: one user's iTunes consults a defunct HTTP proxy, but only for iTunes Store HTTP requests; other iTunes web requests are unproxied. How do I dismiss this spurious proxy setting?

    Background: it's not as easy as Internet Options. Years ago my network had a mandatory HTTP proxy at 172.31.1.1:8080. When we switched to the 192.168.1/24 space and eliminated the proxy, this user's iTunes (the only iTunes user at the time) could no longer contact the iTunes Store, an operation which fails with "unknown error -9808". This has been the case through several iTunes.exe upgrades over the years, and it prevents, among other things, activation of a new or newly upgraded iPhone. wireshark and TCPView confirm that this user's iTunes.exe is attempting to contact the long-defunct HTTP proxy when reaching for the iTunes Store, but is otherwise unproxied. Curious details:

    - No other iTunes.exe HTTP traffic for this user is affected; iTunes can successfully make HTTP chatter at Apple's servers.
    - No other web traffic at all is proxied, whether for this user or others, iTunes or browser, etc.
    - I cannot find the spurious proxy setting anywhere in the registry or on disk, though perhaps I haven't thought of every place to look or every format to look for.
    - Other users who have experienced the same error code all seem to have unrelated web configuration problems (certificate validation, for example).

    UPDATE: in response to Phoshi's excellent suggestion, reinstallation hasn't done the trick.
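
    For the registry hunt: iTunes on Windows generally goes through the per-user WinINET settings (an assumption worth verifying for this version), so it may be worth dumping that key for the affected profile and searching for the old proxy. A sketch:

        reg query "HKCU\Software\Microsoft\Windows\CurrentVersion\Internet Settings" /s | findstr /i "172.31.1.1 proxy"

    The per-connection settings also live in binary values such as "Connections\SavedLegacySettings" under the same key, which the Internet Options dialog doesn't always rewrite; that's one place a stale 172.31.1.1:8080 could hide.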

  • JBossMQ - Clustered Queues/NameNotFoundException: QueueConnectionFactory error

    - by mfarver
    I am trying to get an application working on a JBoss cluster. It uses queues internally, and the developer claims that it should work correctly in a clustered environment. I have JBossMQ set up as an HA singleton on the cluster. The application works correctly on whichever node is currently running the queue, but fails on the other nodes with:

        javax.naming.NameNotFoundException: QueueConnectionFactory not bound

    I can look at JNDIView from the jmx-console and see that indeed the QueueConnectionFactory class only appears on the primary node in the Global context. Is there a way to see the cluster's JNDI listing instead of each server's? The steps I took from a default JBoss 4.2.3.GA installation were to use the "all" configuration, then remove /server/all/deploy/hsqldb-ds.xml and /deploy-hasingleton/jms/hsqldb-jdbc2-service.xml, copying the example/jms/mysql-jdbc2-service.xml file into its place (editing that file to use DefaultDS instead of MySqlDS). Finally I created a mysql-ds.xml file in the deploy directory pointing "DefaultDS" at an empty database, and a -service.xml file in the deploy directory with the queue definition, like the one below:

        <server>
          <mbean code="org.jboss.mq.server.jmx.Queue"
                 name="jboss.mq.destination:service=Queue,name=myfirstqueue">
            <depends optional-attribute-name="DestinationManager">
              jboss.mq:service=DestinationManager
            </depends>
          </mbean>
        </server>

    All of the other cluster features are working: the servers list each other in the view, and sessions replicate back and forth. The JBoss documentation is somewhat light in this area; is there another setting I might have missed? Or is this likely to be a code issue (is there different code to do a JNDI lookup in a clustered environment?) Thanks
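
    On the lookup question: in a JBoss 4.x cluster the replicated naming tree is HA-JNDI, which listens on port 1100 by default, while plain lookups against 1099 only see the local server's tree; that would match JNDIView showing the factory on the primary node only. A client-side sketch (hostnames hypothetical, and assuming the app reads a standard jndi.properties):

        java.naming.factory.initial=org.jnp.interfaces.NamingContextFactory
        java.naming.provider.url=node1.example.com:1100,node2.example.com:1100
        java.naming.factory.url.pkgs=org.jboss.naming:org.jnp.interfaces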

  • Recommended motherboard with hardware raid for Linux

    - by luison
    Hi. We want to set up an internal office server for testing jobs (LAMP), email and Samba, for only about 5-10 users. We are also considering starting to virtualize, initially with a base Ubuntu Server running Xen or the free VMware Server. Our current system runs on Linux software RAID, which has worked great, but it's always been complicated to recover the boot sector when one of the drives fails, so I would now prefer a hardware RAID instead, ideally with some kind of software monitoring. For this reason, and considering we don't want to spend a fortune, I would appreciate any comments on the following options:

    - A motherboard with RAID with Linux support: which could you recommend?
    - A motherboard plus hardware RAID card: Adaptec does not seem to have great Linux support; 3ware seems to have a soft controller which we've used at a hosting company, but it is hard to find here in Spain.
    - An HP ProLiant type basic server: which?
    - Dell small servers: any good for Linux?

    Thanks in advance for any feedback.

  • How to get html/css/jpg pages served by both apache & tomcat with mod_jk

    - by user53864
    I've got apache2 and tomcat6 both running on port 80 with mod_jk set up, on Ubuntu servers. I had to set up a 503 error document (ErrorDocument 503 /maintenance.html) in the Apache configuration, and somehow I managed to get it to work so the error page is served by Apache when Tomcat is stopped. Developers created a good-looking error page (an HTML page which pulls in CSS and JPG files), and I'm asked to get this page served by Apache when Tomcat is down. When I tried JkUnMount /*.css in the virtual host, the actual Tomcat JSP pages didn't work properly (they lost their formatting), as the Tomcat applications use JSP, CSS, JS, JPG and so on. I'm trying to see whether .css and .jpg can be served by both Apache and Tomcat, so that when Tomcat is down the CSS and JPG are served by Apache and the proper error document appears. Does anyone have a technique? Here is my apache2 configuration (/etc/apache2/apache2.conf):

        Alias / /var/www/
        ErrorDocument 503 /maintenance.html
        ErrorDocument 404 /maintenance.html
        JkMount / myworker
        JkMount /* myworker
        JkMount /*.jsp myworker
        JkUnMount /*.html myworker

        <VirtualHost *:80>
            ServerName station1.mydomain.com
            DocumentRoot /usr/share/tomcat/webapps/myapps1
            JkMount /* myworker
            JkUnMount /*.html myworker
        </VirtualHost>

        <VirtualHost *:80>
            ServerName station2.mydomain.com
            DocumentRoot /usr/share/tomcat/webapps/myapps2
            JkMount /* myworker
            JkMount /*.html myworker
        </VirtualHost>
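
    One thing worth checking (an observation on the config above, not a confirmed fix): each DocumentRoot already points at the Tomcat webapp directory, so Apache can read the same .css and .jpg files directly from disk; but the global "Alias / /var/www/" maps every URL to /var/www first, so anything unmounted from the worker is looked up there instead of in the webapp. A sketch of the per-vhost shape that lets Apache fall back to the webapp's own static files:

        <VirtualHost *:80>
            ServerName station1.mydomain.com
            DocumentRoot /usr/share/tomcat/webapps/myapps1
            JkMount /* myworker
            # let Apache serve these directly from DocumentRoot
            JkUnMount /*.html myworker
            JkUnMount /*.css myworker
            JkUnMount /*.js myworker
            JkUnMount /*.jpg myworker
        </VirtualHost>

    with the global Alias narrowed (e.g. to /maintenance.html and its assets) so it no longer shadows the vhosts.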

  • Configure APE-Server on an Ubuntu 10.10 web server

    - by sadmicrowave
    I'm having problems configuring my APE server. First, I reside behind a corporate firewall where our own DNS servers are maintained. I requested a domain name for my server and was provided uslonsweb003.us.mycompany.com by my IT group. My website works and can be accessed (intranet only) at http://uslonsweb003.us.mycompany.com/test.php. I followed the instructions at ape-project.org and ran the Check Tool at the end, only to get an error stating:

        Running test : Contacting APE Server (adding frequency)
        Can't contact APE Server. Please check the folowing url is pointing to your APE server : http://0.uslonsweb003.us.mycompany.com:6969

    My /etc/apache2/apache2.conf virtual host looks as follows:

        <VirtualHost *:80>
            ServerName uslonsweb003.us.mycompany.com
            ServerAlias ape.uslonsweb003.us.mycompany.com
            ServerAlias *.ape.uslonsweb003.us.mycompany.com
            DocumentRoot "/var/www/"
        </VirtualHost>

    My /var/www/ape-jsf/Demos/config.js config section looks as follows:

        APE.Config.baseUrl = 'http://uslonsweb003.us.mycompany.com/ape-jsf';
        APE.Config.domain = 'uslonsweb003.us.mycompany.com';
        APE.Config.server = 'uslonsweb003.us.mycompany.com:6969';

    The instructions at ape-project.org tell me that APE.Config.server should be 'ape.mydomain.com:6969', but that does not work (I'm assuming because my corporate DNS does not understand the 'ape' before the domain name, since 'ape' was not registered with the IT DNS). So I changed it to what you see above. Please help! Thanks in advance.

    UPDATE 1: Per the installation instructions at http://www.ape-project.org/wiki/index.php/Advanced_APE_configuration, under 'Configure your Server/Computer' (I'm running it on a server, obviously), I need to add some lines to my DNS config file. It sounds like (since I'm within a corporate network) I should ask my IT group to add the following lines to the DNS configuration file on their end:

        ape IN A x.x.x.x ; IP address of my APE server
        *.ape IN CNAME ape

    I just want to make sure this is all I have to have them add (and that it is correct) before I ask them.
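
    Before involving IT, it may be worth confirming from a client machine what actually resolves (a sketch):

        dig +short uslonsweb003.us.mycompany.com
        dig +short 0.uslonsweb003.us.mycompany.com
        dig +short ape.uslonsweb003.us.mycompany.com
        dig +short 0.ape.uslonsweb003.us.mycompany.com

    The check tool prefixes "0." because APE spreads requests across numbered subdomains, so the wildcard entry has to resolve. If the last queries return nothing, the two records quoted above (ape IN A, *.ape IN CNAME) are exactly what to ask IT for, after which config.js can go back to the documented ape.uslonsweb003.us.mycompany.com form.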

  • Windows NT from VMware to KVM

    - by Luca Rossi
    I'm trying to convert a couple of old Windows NT virtual servers from VMware to KVM. I tried almost all the guidelines and how-tos I found around the web, but with no luck. I have the VMware virtual disk Dlc1.vmdk, a partitioned image. I converted the vmdk into a qcow2 image with the qemu utility and tried to use it with KVM:

        kvm -hda test.qemu -vnc :1 -m 750

    but I receive "error loading operating system". I also tried with raw partitions that I can mount through losetup and kpartx, but nothing changed. I also tried to create a brand-new image file with:

        qemu-img create -f qcow2 test.qcow2 2G

    I partitioned the new image file and copied the original partition 1 to the new partition 1 with dd:

        dd if=/dev/mapper/loop1p1 of=/dev/mapper/loop0p1 bs=128M

    No luck again. I also tried with a single unpartitioned file:

        qemu-img create -f qcow2 test.qcow2 2G

    and copied partition 1 to the new image file:

        dd if=/dev/mapper/loop0p1 of=test.img bs=128M

    but when booting, I receive a black screen and the virtual machine hangs. The bootloader is loaded successfully, because I also tried with a GRUB live ISO and I receive the same screens and errors. Note that GRUB sees the Windows setup and gives me the boot choice. I suspect the problem is that the VMware machine is probably a SCSI guest, and in CentOS 6 (my system) SCSI emulation is no longer supported. But in that case, what do I change in Windows? I'm not so skilled with MS systems. Thank you for the help. Luca Rossi
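
    For reference, the conversion itself can be done in one step; a sketch of the form I'd expect to work (paths and names assumed):

        qemu-img convert -f vmdk -O qcow2 Dlc1.vmdk nt.qcow2
        kvm -hda nt.qcow2 -vnc :1 -m 750

    "Error loading operating system" straight after the BIOS usually means the boot code and disk geometry didn't line up rather than a qcow2 problem, so converting the whole vmdk (instead of copying single partitions) keeps the original MBR and partition table intact.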

  • Windows Authentication behaves oddly when VPN'd

    - by Dan F
    Hi all. We've got a few apps that rely on Windows authentication: a couple of web apps with AD auth turned on, and we usually connect to our SQL servers with Windows auth. This normally runs without a hitch, but it doesn't work so well when we're VPN'd to a client site.

    SSMS: Opening SSMS normally from the Start menu, then picking a server that normally accepts Windows auth, results in a message saying:

        Login failed. The login is from an untrusted domain and cannot be used with Windows authentication. (.Net SqlClient Data Provider)

    If I drop to a command prompt and use runas /user:domain\user to launch SSMS, I can successfully use Windows auth against our SQL Server instances with that ssms process. If I look in Task Manager, both copies of ssms.exe (Start menu vs runas) have the same user, and I can see no discernible differences between the processes in procexp.

    AD auth websites: If I open IE and browse to any of our websites that require an authenticated Windows user, I get the "who are you" prompt, and that dialog thinks I'm whoever the VPN user is. I can click "Use another account" and authenticate that way, though.

    Outlook: Even Outlook prompts for a username when we are VPN'd!

    It's affecting our Win7 and Vista machines. It's been a while since we had an XP box, but I don't recall having this issue on XP, for what it's worth. The VPN connections are just the built-in Windows VPN connections; they're not fancy Cisco VPNs or anything of that nature. Does anyone know how to tell Windows that I'd like to be my normal old primary domain user, rather than the VPN user, when authenticating to resources in our domain? Heck, I'd be happy with a solution that prompted me with the "who are you" dialog when I try to access resources requiring Windows auth on the client's VPN. Thanks! Apologies if this is more a superuser question; I wasn't sure which site it best suited. It's about networking and infrastructure and plagues all of our developers here, so I hope it's a serverfault question.
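
    One workaround that may help in the meantime (this assumes the VPN credentials are what Windows is presenting for network authentication): runas has a /netonly switch that keeps the local token but presents the given credentials to remote resources only. A sketch, where the path and account are placeholders:

        runas /netonly /user:MYDOMAIN\myuser "C:\path\to\Ssms.exe"

    The same pattern works for launching Outlook or a browser, so each app can talk to the home domain as the normal domain user while the VPN stays up.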

  • rr.com appended to URL and I don't know why

    - by Steph
    I've been having some pretty bad and intermittent internet connectivity issues: I keep getting timeouts on my browsers, or what appear to be timeouts. In Chrome it's generally error 21; FF just times out; and so on. While this is happening, I can go into the command line and ping or traceroute the same domain and it works fine. I can use my cellphone on the same network and it's fine connecting to the same domain. And when it fails, all domains are down in all browsers: Chrome, FF, etc. I also noticed that when I try to connect to 192.168.1.1, Chrome says "did you mean www.192.168.1.1rr.com", but when I enter https://192.168.1.1 it's fine. It only seems to do this in Chrome. This is making me think I have a virus. I did some research about something called Road Runner, but I can't find any related traces. I also ran a full virus scan using NOD32 (ESET): nothing. Any suggestions or help would be greatly appreciated. The intermittent loss of total internet access is really annoying, and I'm worried about why Chrome is trying to append the rr.com domain. I suspect I'm dealing with two different issues, but you never know. Also, for DNS I'm using Google's DNS servers, the famous 8.8.8.8 and 8.8.4.4.
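
    The rr.com suffix is suggestive: it's Road Runner's domain, and Windows appends DHCP-supplied DNS suffixes to unqualified names, which is roughly how a typed "192.168.1.1" can end up suggested as "www.192.168.1.1rr.com" in Chrome's omnibox. A quick check (a sketch, Windows):

        ipconfig /all | findstr /i "suffix"

    If rr.com shows up as a connection-specific DNS suffix handed out by the ISP's modem, that behaviour comes from the network rather than malware, and it can be overridden per adapter under Advanced TCP/IP Settings, DNS tab ("Append these DNS suffixes").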

  • Why does RSA SSH authentication only work after a console log-in?

    - by smorhaim
    I set up RSA authentication on one of my Ubuntu servers; however, after every restart I can't log in via SSH with RSA. In order to log in with SSH, I need to first log in via the console; then the RSA starts working. Why??? Below are my sshd config file as well as output from ssh -vv before console log-in and after.

    Before console log-in:

        debug1: SSH2_MSG_SERVICE_ACCEPT received
        debug2: key: /Users/smorhaim/.ssh/smorhaim (0x7ff8d8c242c0)
        debug2: key: /Users/smorhaim/.ssh/id_rsaadmin (0x7ff8d8c24cf0)
        debug1: Authentications that can continue: publickey
        debug1: Next authentication method: publickey
        debug1: Offering RSA public key: /Users/smorhaim/.ssh/smorhaim
        debug2: we sent a publickey packet, wait for reply
        debug1: Authentications that can continue: publickey
        debug1: Offering RSA public key: /Users/smorhaim/.ssh/id_rsaadmin
        debug2: we sent a publickey packet, wait for reply
        debug1: Authentications that can continue: publickey
        debug2: we did not send a packet, disable method
        debug1: No more authentication methods to try.
        Permission denied (publickey).

    After console log-in:

        debug1: SSH2_MSG_SERVICE_ACCEPT received
        debug2: key: /Users/smorhaim/.ssh/smorhaim (0x7f91c14242c0)
        debug2: key: /Users/smorhaim/.ssh/id_rsaadmin (0x7f91c1424ae0)
        debug1: Authentications that can continue: publickey
        debug1: Next authentication method: publickey
        debug1: Offering RSA public key: /Users/smorhaim/.ssh/smorhaim
        debug2: we sent a publickey packet, wait for reply
        debug1: Server accepts key: pkalg ssh-rsa blen 279
        debug2: input_userauth_pk_ok: fp b1:d5:90:43:be:43:52:a9:7f:05:c7:04:86:57:b3:ff
        debug1: Authentication succeeded (publickey).
        Authenticated to 10.10.30.151 ([10.10.30.151]:22).

    sshd config:

        Port 22
        Protocol 2
        ListenAddress 10.10.30.151
        UsePrivilegeSeparation yes
        SyslogFacility AUTHPRIV
        PermitRootLogin no
        PasswordAuthentication no
        ChallengeResponseAuthentication no
        UsePAM yes
        X11Forwarding yes
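
    This "works only after a console login" pattern is the classic symptom of an encrypted home directory (ecryptfs on Ubuntu): until the user logs in locally, ~/.ssh/authorized_keys isn't readable by sshd. That cause is an assumption; running "mount | grep ecryptfs" after a console login would confirm it. A common workaround is to keep the keys outside the encrypted home, a sketch:

        # run while logged in at the console, so the home is decrypted
        sudo mkdir -p /etc/ssh/keys/smorhaim
        sudo cp ~smorhaim/.ssh/authorized_keys /etc/ssh/keys/smorhaim/
        sudo chown -R smorhaim /etc/ssh/keys/smorhaim

    then point sshd at the new location in /etc/ssh/sshd_config and restart sshd:

        AuthorizedKeysFile /etc/ssh/keys/%u/authorized_keys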

  • Web server with static IP from cable provider

    - by Dmitri
    I have a subscription to five static IP addresses, and I want to run a web server from behind a router. My network config is as follows: the server's local address is 10.1.10.2, and it has IIS running on it on ports 80 and 443 (IIS is not my fault; it had to be done). The server's IP address is static, the subnet mask is 255.255.255.0, and the gateway is 10.1.10.1, which is the local address of the cable modem/router/gateway thingy. All looks to be in textbook order as far as the LAN goes: I can get to anything on my LAN from any computer on my LAN, whether they have static IPs or get them through DHCP from the cable modem/router thingy. However, I have no internet access from any of my LAN computers. I called Comcast tech support, and they say they can connect to my modem fine and can actually use it to ping any computer on the internet or any computer on my LAN from the router/modem (I checked myself; this is in fact the case). However, nothing on my LAN has internet connectivity. I tried pinging the DNS servers: nothing. I tried directly typing in web sites' IP addresses: nothing, so it doesn't seem to be a DNS issue. Any ideas? What malfunction of a router could be causing such weird behavior? Any ideas or educated guesses are very much appreciated.

  • How do you guys handle custom yum repositories?

    - by luckytaxi
    I have a bunch of tools (nagios, munin, puppet, etc.) that get installed on all my servers, and I'm in the process of building a local yum repository. I know most folks just dump all the RPMs into a single folder (broken down into the correct paths) and then run createrepo inside the directory. However, what happens when you have to update the RPMs? I ask because I was going to give each piece of software its own folder.

    Example one, all packages inside one folder (custom_software):

        /admin/software/custom_software/5.4/i386
        /admin/software/custom_software/5.4/x86_64
        /admin/software/custom_software/4.6/i386
        /admin/software/custom_software/4.6/x86_64

    What I'm thinking of:

        /admin/software/custom_software/nagios/5.4/i386
        /admin/software/custom_software/nagios/5.4/x86_64
        /admin/software/custom_software/nagios/4.6/i386
        /admin/software/custom_software/nagios/4.6/x86_64
        /admin/software/custom_software/puppet/5.4/i386
        /admin/software/custom_software/puppet/5.4/x86_64
        /admin/software/custom_software/puppet/4.6/i386
        /admin/software/custom_software/puppet/4.6/x86_64

    This way, if I had to update to the latest version of puppet, I could manage the files accordingly. I wouldn't know which RPMs belong to which software if I threw them all into one big folder. Makes sense?
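
    For what it's worth, yum itself doesn't need the split: the repository metadata tracks every package and version in the tree, so updated RPMs can sit next to old ones and clients simply pull the newest. The update workflow is the same either way; a sketch using the paths from the first example above (the RPM name is hypothetical):

        # drop the new RPM in place, then refresh only the changed metadata
        cp nagios-3.2.1-1.el5.x86_64.rpm /admin/software/custom_software/5.4/x86_64/
        createrepo --update /admin/software/custom_software/5.4/x86_64

    and a minimal client-side repo file (names hypothetical):

        cat > /etc/yum.repos.d/custom.repo <<'EOF'
        [custom]
        name=Custom tools
        baseurl=http://yumhost.example.com/custom_software/5.4/$basearch/
        enabled=1
        gpgcheck=0
        EOF

    The per-tool layout still works, but each leaf directory then needs its own createrepo run and its own baseurl entry on the clients.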

  • Changing subnet-mask of class-c network host to 255.255.0.0

    - by Prashant Mandhare
    We have an existing class C network with IP address range 11.22.33.44/24 (just for example). My domain controller has been configured within this subnet, so all servers within this subnet have their subnet mask set to 255.255.255.0. Now we have got a new subnet, 11.22.88.99/24 (note that only the last two octets have changed), and I want all new hosts in this new subnet to join my existing DC. We configured the firewall to allow this (so there is no issue with the firewall). But initially I was not able to join hosts in the new subnet to the existing domain. Later I suspected the subnet mask used on the domain controller (255.255.255.0), and for testing purposes I changed it to 255.255.0.0. It worked like a charm: I was able to join subnet-2 hosts to the subnet-1 domain. Now I am wondering whether it is good practice to change the subnet mask of a class C network to 255.255.0.0. Can any issues arise from this? Experts, please provide your opinion.
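
    As an alternative to widening the mask (which makes every host in 11.22.0.0/16 look local and silently bypasses the gateway), the two /24s can stay as-is with a route between them. A sketch of the routed approach on a Windows host, with the gateway address hypothetical:

        route -p add 11.22.88.0 mask 255.255.255.0 11.22.33.1

    In practice the router between the subnets usually carries this route so hosts keep a single default gateway; the point is that joining a domain across subnets needs reachability (routing, plus subnet definitions in AD Sites and Services), not a larger mask.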

  • SharePoint db issue after DB move to SQL 08

    - by JohnyV
    Recently we moved our SharePoint 2007 database from a SQL 2000 server to a 2008 x64 SQL Server. All seems well; however, there is a problem where the SQL Server stops running and the service has to be restarted. The errors mention insufficient internal memory etc. I have tried starting the database with -g384, which was the default in SQL 2000 (256 is the default for 2008, I believe). This has not rectified the issue. I was advised that the issue might be rectified by upgrading to WSS 3.0 SP2; however, when I tried to install this I got another error after the SP2 update and had to revert to a VM snapshot. The error after the service pack was:

        Server error: http://go.microsoft.com/fwlink?LinkID=96177

    So I guess I have a few questions: how can I fix the first issue, and the second issue? I have checked out many forums and posts and have tried a few things, and still get no joy. Any assistance would be great.

    UPDATE: I have fixed the second error (http://go.microsoft.com/fwlink?LinkID=96177). I needed to run the WSS SP2 as well as the Office Servers SP2, then the configuration wizard; then the MOSS configuration worked. The errors I am getting in SQL are:

        SQL Server was unable to run a new system task, either because there is insufficient memory or the number of configured sessions exceeds the maximum allowed in the server. Verify that the server has adequate memory. Use sp_configure with option 'user connections' to check the maximum number of user connections allowed. Use sys.dm_exec_sessions to check the current number of sessions, including user processes.

        A read operation on a large object failed while sending data to the client. A common cause for this is if the application is running in READ UNCOMMITTED isolation level. The connection will be terminated.

        There is insufficient system memory in resource pool 'internal' to run this query.

    These errors are generated by a user that was created as a service account for SharePoint.

  • Firebird 2.5 Database Corrupt

    - by BrendanH
    We have an issue where a database hangs the server when:

    - a backup is performed (it hangs on a specific table)
    - selecting * or count(1) from a specific table
    - viewing data that is related to the table (FKs, etc.)

    We can browse the table to a certain point (using IBExpert), but after about 2900 records the machine just spikes and hangs. Performing a gfix -m does not work, and validation reports back "Record level errors = 4" no matter how many times we run gfix -m, -v, etc. The firebird.log file reports these types of messages:

        Relation has 91631 orphan backversions (9214273 in use) in table BINS (137)   [apparently just a warning]
        Unable to complete network request to host "MHPLZA1". Error reading data from the connection. INET/inet_error: read errno = 10054
        SERVER/process_packet: broken port, server exiting
        Shutting down the server with 1 active connection(s) to 1 database(s), 0 active service(s)   [logged if we leave the backup running while it hangs]

    The setup: the table in question has about 7000 records; Firebird 2.5 Classic Server x64; the OS is Windows Server 2008. This is a virtual machine (VMware) running on a massive server. (Does anyone have issues with VMs and Firebird?) We have the same setup running fine on other servers (however, they are not virtual machines). Is there any way to pinpoint the issue and/or the cause?
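
    When gfix alone can't clear record-level errors, the usual Firebird recovery path is a mend-then-rebuild cycle: mend the database, back it up with garbage collection disabled, and restore into a fresh file, which also clears orphan back-versions. A sketch (file names and credentials assumed; always work on a copy):

        gfix -mend -full -ignore broken.fdb -user SYSDBA -password masterkey
        gbak -b -g -ig broken.fdb repair.fbk -user SYSDBA -password masterkey
        gbak -c repair.fbk repaired.fdb -user SYSDBA -password masterkey

    Here -g skips garbage collection during the backup and -ig ignores checksum errors; if the restore completes, repaired.fdb replaces the original.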

  • Unable to get defined path in 'source' type on AIX node

    - by haris
    Hi all, I am trying to create a set of users on my AIX node and to fetch their authorized_keys, which are already hosted on my server with names like myuser_id_dsa.pub. Currently I am managing two nodes (1. SLES, 2. AIX). I defined the 'source' file paths in two separate mounts in fileserver.conf:

        [AIX]
        path myfiles/users/ssh/
        allow *.another.mydomain.com

        [SLES]
        path myfiles/users/keys/ssh/
        allow *.mydomain.com

    But when I run puppet, it completes successfully on my SLES node and fails on the AIX node with the following error:

        Could not describe /AIX/myuser_id_rsa.pub: Fileserver module 'AIX' not mounted

    In my code I have defined the 'source' with a $fileserver variable:

        case $operatingsystem {
            "AIX":   { $fileserver = "AIX" }
            default: { $fileserver = "SLES" }
        }

        file { "${home}/${username}/.ssh/authorized_keys":
            source => "puppet:///$fileserver/${username}_is_dsa.pub",
            ...
        }

    Why is AIX not able to get the source path from my fileserver.conf while SLES is running absolutely fine? And how can I fix it? I have to run a similar configuration across different servers, so I can only deal with it via the case statement. Looking forward to your help. Thanks

  • Exchange Server 2010: move mailboxes from recovered and mounted edb to user’s mailbox

    - by user36090
    One of our Exchange servers crashed, and I am trying to recover the mailboxes. We had one Exchange 2003 server named "apex" and one Exchange 2010 server named "2008Enterprise". The Exchange 2010 server "2008Enterprise" crashed, so I created a new Exchange 2010 server named "Providence". On Providence I ran:

        New-MailboxDatabase -Recovery -Name JBCMail -Server Providence -EdbFilePath "c:\data\Exchange\Mailbox\Mailbox Database 0579285147\Mailbox Database 0579285147.edb" -LogFolderPath "c:\data\Exchange\Mailbox\Mailbox Database 0579285147"

    This command executed and finished without error. I then ran the following from the directory c:\data\Exchange\Mailbox\Mailbox Database 0579285147:

        eseutil /p E00

    I then mounted JBCMail with the mount command (note: I do not have the full command I typed). Inside my Exchange Management Console (EMC) I can view the new mailbox database named JBCMail, shown as mounted on the Exchange server named Providence. I can also see the crashed Exchange server named 2008Exchange; in the EMC the crashed server's copy status under Server Configuration > Mailbox is ServiceDown. From here I need to recover three mailboxes. The mailboxes are on the apex server. How do I move the mailboxes from apex to Providence? And how do I restore the mailboxes from the mounted JBCMail database to the users' mailboxes? I do not fully understand how to use the Restore-Mailbox command, because when I use it, it tries to restore the mailbox to the dead apex server.
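
    For the recovery-database half, the Exchange 2010 form I'd expect is the following (a sketch; the target mailbox must already exist on a live database, and the names are taken from the question):

        Restore-Mailbox -Identity "Jason Young" -RecoveryDatabase JBCMail -RecoveryMailbox "Jason Young" -TargetFolder Recovered

    This pulls the user's content out of the mounted recovery database into a "Recovered" folder of his current mailbox, so it never touches the dead server. Moving the remaining live mailboxes off apex is a separate operation (New-MoveRequest in Exchange 2010).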
