Search Results

Search found 9715 results on 389 pages for 'servers'.

Page 315/389 | < Previous Page | 311 312 313 314 315 316 317 318 319 320 321 322  | Next Page >

  • Strange IIS hits originating from Trend Micro

    - by TesterTurnedDeveloper
    I'm trying to trace through an error on an extranet site I maintain. I've had a look through the logs, and I'm seeing hits originating from these IP addresses: 216.104.15.130, 216.104.15.138, 216.104.15.142, 216.104.15.13, 150.70.84.49, 150.70.84.44. Network-tools.com gives 'TREND MICRO INCORPORATED' as the owner of all these IPs. The hits fail as they aren't sending any cookies (and therefore aren't considered logged in). The hits are to pages containing URLs that only a logged-in user would see, e.g. ImageEdit.aspx?ImageId=467424 - i.e. the server isn't guessing these URLs; someone would have to log into the site to know these URLs exist. Theory: the Trend antivirus client grabs URLs and sends them to the server for 'extra processing'? Googling around gives me this: http://www.forumpostersunion.com/showthread.php?p=51272 - where people are reporting comment spam from these addresses. The article says their servers were hacked (a few months ago, presumably fixed now?). A hacked server wouldn't explain how the URLs were plucked off the users' PCs. Has anyone seen this before? Anything nefarious going on here?

    Read the article

  • Apache is spawning more and more processes!!

    - by erotsppa
    We have a LAMP setup that has been working pretty well for half a year. All of a sudden today the Apache server (the MySQL servers are not on this box) started to die. It seems to have started to spawn more and more processes over time. Eventually it consumes all the memory and the server just dies. We are using prefork. In the meantime, all we have done is add more RAM and increase the MaxClients and ServerLimit parameters to 512. We're just prolonging the crash; the number still goes up slowly, and maybe in a day it will reach that limit. What is going on? We only have around 15-20 requests per second. We have 1 GB of memory and it's not half used; there's no swapping going on. Why is Apache creating more and more processes? It's almost like there's a leak somewhere! The database boxes are fine, they are not causing a delay to requests. We tested some queries and everything is quick!
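
    Not from the original question - just a minimal sketch of how prefork growth is usually bounded, assuming Apache 2.2 with the prefork MPM on a RHEL-style layout (the paths and numbers are illustrative and need to be sized against the box's real per-child memory use):

        # Average resident memory per Apache child (httpd on RHEL, apache2 on Debian)
        ps -o rss= -C httpd | awk '{s+=$1; n++} END {if (n) printf "%.1f MB avg per child\n", s/n/1024}'

    and in httpd.conf, recycle children so a slow per-process leak cannot accumulate, and cap MaxClients at roughly (RAM available to Apache) / (average child size):

        <IfModule prefork.c>
            StartServers          8
            MinSpareServers       5
            MaxSpareServers      20
            ServerLimit         256
            MaxClients          256
            MaxRequestsPerChild 2000
        </IfModule>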

    Read the article

  • I need scanning software to use with my scanner.

    - by D Connors
    So, I got this (HP F4280) printer/scanner about a year ago, and I'm really happy with it. The only thing is that I never really liked the HP scanning software that came with it. A few months ago I reformatted and reinstalled Windows 7. Then, once I plugged in the printer, I noticed that Windows recognized it automatically and offered to install all the drivers by itself. So instead of manually installing the driver that came on the CD, I simply let Windows install it automatically from its servers, and so far it's great. Instead of HP's scanning software (which really wasn't pleasing me), I got a very simplified interface that is more than enough for my occasional scanning habits. Until today. Today I had to scan a bunch of old pictures for my father, and that simple interface felt like it was lacking quite a few features that would make this repetitive task a little easier. That's why I'm now looking for good scanning software. By "good" I mean anything well thought out, and especially anything that will make my life easier when scanning repetitively. It doesn't need to have professional tweaking options, but having them is not a problem either. You guys got anything?

    Read the article

  • Debian VM refusing all traffic apart from HTTP

    - by james lewis
    I've got a VM with a fresh install of Debian (wheezy) and I've installed node and mongo on it. The VM is using a bridged network connection, so I was expecting to be able to point my host machine's browser at the IP address of the Debian VM (port 1337 for my node example or port 28017 for the mongo status page) and see one of the two services (node or mongo). My requests are refused, though. As far as I can tell, Debian allows all traffic by default and you have to manually configure iptables to drop traffic. I've checked iptables and it says it's set up to allow anything through. It looks like this:

        root@devbox:/home/jlewis# iptables -L
        Chain INPUT (policy ACCEPT)
        target     prot opt source               destination

        Chain FORWARD (policy ACCEPT)
        target     prot opt source               destination

        Chain OUTPUT (policy ACCEPT)
        target     prot opt source               destination

    As a test I set up nginx and I was able to get to the nginx landing page from my host with no problems, so obviously HTTP traffic is allowed. I then set nginx up to forward all traffic upstream to mongo - no problems there, I was able to see the status page. I then did the same for my example node server and again, no problems. So HTTP traffic is fine, but all other traffic is blocked. Anyone know why Debian might be refusing all other traffic, other than iptables being set up to drop it? EDIT - output from netstat -nltp:

        Active Internet connections (only servers)
        Proto Recv-Q Send-Q Local Address           Foreign Address         State       PID/Program name
        tcp        0      0 127.0.0.1:28017         0.0.0.0:*               LISTEN      1762/mongod
        tcp        0      0 0.0.0.0:51028           0.0.0.0:*               LISTEN      1541/rpc.statd
        tcp        0      0 0.0.0.0:22              0.0.0.0:*               LISTEN      2462/sshd
        tcp        0      0 127.0.0.1:1337          0.0.0.0:*               LISTEN      2794/node
        tcp        0      0 127.0.0.1:25            0.0.0.0:*               LISTEN      2274/exim4
        tcp        0      0 127.0.0.1:27017         0.0.0.0:*               LISTEN      1762/mongod
        tcp        0      0 0.0.0.0:111             0.0.0.0:*               LISTEN      1510/rpcbind
        tcp        0      0 0.0.0.0:80              0.0.0.0:*               LISTEN      2189/nginx
        tcp6       0      0 :::22                   :::*                    LISTEN      2462/sshd
        tcp6       0      0 :::45335                :::*                    LISTEN      1541/rpc.statd
        tcp6       0      0 ::1:25                  :::*                    LISTEN      2274/exim4
        tcp6       0      0 :::111                  :::*                    LISTEN      1510/rpcbind
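
    (Not part of the original post.) The netstat output above shows node (127.0.0.1:1337) and mongod (127.0.0.1:27017/28017) bound to the loopback address only, which would make them unreachable from the bridged network regardless of iptables. A sketch of checking and widening the bind addresses - the config path, service name and example listen() line are assumptions for a stock Debian wheezy setup:

        # Anything bound to 127.0.0.1 is invisible to the host machine's browser
        netstat -nltp | awk 'NR>2 {print $4, $7}'

        # node: listen on all interfaces instead of loopback only
        #   server.listen(1337, '0.0.0.0');    # in the example app (hypothetical)

        # mongod (Debian package): widen bind_ip and restart
        sed -i 's/^bind_ip *=.*/bind_ip = 0.0.0.0/' /etc/mongodb.conf
        service mongodb restart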

    Read the article

  • Programmer configuring a new network

    - by David Lively
    I'm in the process of expanding my home network from a couple of laptops on a wireless Verizon FiOS router to include: a Linksys 24-port switch, a Cisco PIX 515, a Cisco 3640 router, one new development desktop, and three new machines to act as a DB server, a web server and a backup system. My company is moving offices and we've decommissioned some older hardware, which I was able to pick up for the cost of the labor to move it home from the office. The benefits of working with dedicated web and DB servers are very valuable to me. I know very little about network topology, other than that everything plugs into the switch, which then plugs into the cheap Verizon router. (Verizon provides a coax connection that the router must translate into Ethernet before I can use it with any of this equipment.) Questions: What is the recommended topology for this equipment? Verizon router - PIX - 3640 - switch? Is the 3640 even necessary or desirable? The Verizon router has one WAN port and 4 client ports, all 10/100. Is there any performance benefit at all to wiring multiple connections from the Verizon router to the switch, assuming I don't use the PIX? Should I use the PIX? Software firewalls are a pain, and seem silly if I have a device like this lying around. Anything else I should know? Am I wasting my time with this? I also obtained a 7-foot rack, shelves, patch panels, a UPS, etc., which are going into a conveniently air-conditioned closet. All constructive advice appreciated.

    Read the article

  • Win2k8R2 / IIS 7.5 - users getting 503 response, no 503 error reported in logs

    - by merk
    I've got 2 web servers with mirrored content and a load balancer sitting in front of them. Starting yesterday we've been getting people complaining about 503 errors. I can't find any 503 errors in the IIS log file. However, the server host is saying these errors are due to .NET errors in our website which are causing the app pool to recycle. They pointed out several errors in the Windows application event log which look like this:

        Log Name:      Application
        Source:        ASP.NET 4.0.30319.0
        Date:          3/31/2012 8:35:37 PM
        Event ID:      1309
        Task Category: Web Event
        Level:         Warning
        Keywords:      Classic
        User:          N/A
        Computer:      6251.local
        Description:
        Event code: 3005
        Event message: An unhandled exception has occurred.
        Event time: 3/31/2012 8:35:37 PM
        Event time (UTC): 4/1/2012 1:35:37 AM
        Event ID: e7a580c7b38545cca3416a8595408f24
        Event sequence: 97
        Event occurrence: 1
        Event detail code: 0
        Application information:
        Application domain: /LM/W3SVC/2/ROOT-1-129777167518960645
        Trust level: Full
        Application Virtual Path: /
        Application Path: C:\inetpub\wwwroot\mywebsite\
        Machine name: 6252
        Process information:
        Process ID: 20000
        Process name: w3wp.exe
        Account name: IIS APPPOOL\MyAppPool

    In particular, they are saying that the account name under "Process information" indicates that the app pool is recycling. They said that if the app pool were not recycling, the account name would instead be the folder where the website files are located. I checked the app pool settings - it's set to recycle every 29 hours, and the rapid-fail protection is set to the default of 5 failures in 5 minutes. But I have not seen 5 failures in the event log within that short a time span. Can anyone help me confirm whether the 503 responses are indeed being generated by the app pools recycling? Or are these errors coming from somewhere else? My guess at the time was that their load balancer was the one actually returning the 503 error. But that was just a guess.

    Read the article

  • Google responds differently to two identical nginx setups and 200 codes; any ideas?

    - by Yuji Tomita
    I'm rather confused... I have a linode.com VPS which has been cloned recently, so the settings are the same between the nginx servers. One lives on a dev subdomain, one on www. I'm trying to run a Google experiment on my live server, which claims: "Web server rejects utm_expid. Your server doesn't support added query arguments in URLs." My logs on the dev server, where it works, show:

        74.125.186.32 - - [13/Sep/2012:13:33:45 -0700] "GET /product/iphone-case/?utm_expid=25706866-0 HTTP/1.1" 200 12521 "-" "Google_Analytics_Content_Experiments
        74.125.186.32 - - [13/Sep/2012:13:33:45 -0700] "GET /product/iphone-case/?ab_reviews=True&utm_expid=25706866-0 HTTP/1.1" 200 14679 "-" "Google_Analytics_Content_Experiments

    My production server shows Google making a second request:

        74.125.186.41 - - [13/Sep/2012:13:34:49 -0700] "GET /product/iphone-case/?ab_reviews=on&utm_expid=25706866-1 HTTP/1.1" 200 12104 "-" "Google_Analytics_Content_Experiments
        74.125.186.41 - - [13/Sep/2012:13:34:49 -0700] "GET /product/iphone-case/?utm_expid=25706866-1 HTTP/1.1" 200 12122 "-" "Google_Analytics_Content_Experiments
        74.125.186.41 - - [13/Sep/2012:13:34:49 -0700] "GET /product/iphone-case/ HTTP/1.1" 200 12522 "-" "Google_Analytics_Content_Experiments   <--- a second request, without the query string, for some reason

    I'm not sure how Google determines that it needs to send a second request without the query string. The original request clearly received a 200 OK response. Does anybody have any suggestions where to look next? The HTML on the two pages (compared by diff) is exactly the same.
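
    A diagnostic sketch, not from the original question: fetch the experiment URL from both servers and diff what actually comes back, since a redirect, caching or Vary header present on only one side is the kind of difference the Google checker could be reacting to. The hostnames below are placeholders for the real dev and www vhosts:

        for host in dev.example.com www.example.com; do
          curl -sD "/tmp/$host.headers" -o "/tmp/$host.body" \
            "http://$host/product/iphone-case/?utm_expid=25706866-0"
        done
        diff /tmp/dev.example.com.headers /tmp/www.example.com.headers
        diff /tmp/dev.example.com.body    /tmp/www.example.com.body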

    Read the article

  • Looking for equivalent of ProxyPassReverseMatch in Apache to fix missing trailing forward slash issue

    - by Alex Man
    I have two web servers, www.example.com and www.userdir.com. I'm trying to make www.example.com the front-end proxy server for requests in the format http://www.example.com/~username, such as http://www.example.com/~john/, so that it sends an internal request for http://www.userdir.com/~john/ to www.userdir.com. I can achieve this in Apache with:

        ProxyPass /~john http://www.userdir.com/~john
        ProxyPassReverse /~john http://www.userdir.com/~john

    The ProxyPassReverse is necessary, as without it a request like http://www.example.com/~john without the trailing forward slash will be redirected to http://www.userdir.com/~john/, and I want my users to stay in the example.com space. Now, my problem is that I have a lot of users and I cannot list all those user names in httpd.conf. So I use:

        ProxyPassMatch ^(/~.*)$ http://www.userdir.com$1

    but there is no such thing as ProxyPassReverseMatch in Apache. Without it, whenever the trailing forward slash is missing from the URL, one is directed to www.userdir.com, and that's not what I want. I also tried the following to add the trailing forward slash:

        RewriteCond %{REQUEST_URI} ^/~[^./]*$
        RewriteRule ^/(.*)$ http://www.userdir.com/$1/ [P]

    but then it renders a page with broken images and CSS, because they are linked to http://www.example.com/images/image.gif while they should be http://www.example.com/~john/images/image.gif. I have been googling for a long time and still can't figure out a good solution for this. Would really appreciate it if anyone can shed some light on this issue. Thank you!
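
    A sketch of one approach, not from the original question (assumes mod_rewrite and mod_proxy are loaded; the redirect status is illustrative): send the client an external redirect when the trailing slash is missing, so the browser re-requests /~john/ and its relative image/CSS URLs resolve under example.com, and only then proxy the request. Note also that ProxyPassReverse is just a Location-header rewrite, so a single blanket rule can cover every user without needing a "Match" variant:

        RewriteEngine On
        # Add the missing slash on the client side, keeping users on example.com
        RewriteRule ^/(~[^/]+)$ http://www.example.com/$1/ [R=301,L]
        # Proxy everything under /~user/ to the backend
        ProxyPassMatch ^(/~.*)$ http://www.userdir.com$1
        # Rewrite any Location headers the backend still emits
        ProxyPassReverse / http://www.userdir.com/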

    Read the article

  • Docs for OpenSSH CA-based certificate authentication

    - by Zoredache
    OpenSSH 5.4 added a new method for certificate authentication (changes):

        * Add support for certificate authentication of users and hosts using a new,
          minimal OpenSSH certificate format (not X.509). Certificates contain a
          public key, identity information and some validity constraints and are
          signed with a standard SSH public key using ssh-keygen(1). CA keys may be
          marked as trusted in authorized_keys or via a TrustedUserCAKeys option in
          sshd_config(5) (for user authentication), or in known_hosts (for host
          authentication). Documentation for certificate support may be found in
          ssh-keygen(1), sshd(8) and ssh(1) and a description of the protocol
          extensions in PROTOCOL.certkeys.

    Are there any guides or documentation beyond what is mentioned in the ssh-keygen man page? The man page covers how to generate certificates and use them, but it doesn't really provide much information about the certificate authority setup. For example, can I sign the keys with an intermediate CA and have the server trust the parent CA? This comment about the new feature seems to mean that I could set up my servers to trust the CA, then set up a method to sign keys, and then users would not have to publish their individual keys on the server. It also seems to support key expiration, which is great, since getting rid of old/invalid keys is more difficult than it should be. But I am hoping to find more documentation describing the complete configuration - CA, SSH server, and SSH client settings - needed to make this work.
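
    Not from the original question - a minimal user-certificate flow as the ssh-keygen and sshd man pages describe it (names, paths and the 52-week validity are illustrative). One point relevant to the intermediate-CA question: plain OpenSSH certificates are a single level deep - sshd only trusts keys listed directly in TrustedUserCAKeys, there is no chain to a parent CA.

        # 1. Generate the CA keypair (keep user_ca offline/secure)
        ssh-keygen -f user_ca -C "user CA"

        # 2. Sign a user's existing public key: identity "alice@example.com",
        #    principal "alice", valid for 52 weeks
        ssh-keygen -s user_ca -I alice@example.com -n alice -V +52w ~/.ssh/id_rsa.pub
        #    -> writes ~/.ssh/id_rsa-cert.pub, which ssh offers automatically

        # 3. On each server, trust the CA public key and reload sshd
        cp user_ca.pub /etc/ssh/user_ca.pub
        echo 'TrustedUserCAKeys /etc/ssh/user_ca.pub' >> /etc/ssh/sshd_config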

    Read the article

  • Architecture for highly available MySQL with automatic failover in physically diverse locations

    - by Warner
    I have been researching high availability (HA) solutions for MySQL between data centers. For servers located in the same physical environment, I have preferred dual master with heartbeat (floating VIP) using an active/passive approach. The heartbeat runs over both a serial connection and an Ethernet connection. Ultimately, my goal is to maintain this same level of availability, but between data centers. I want to dynamically fail over between both data centers without manual intervention and still maintain data integrity. There would be BGP on top, with web clusters in both locations which would have the potential to route to the databases on both sides. If the Internet connection went down at site 1, clients would route through site 2 to the web cluster, and then to the database in site 1 if the link between both sites were still up. With this scenario, due to the lack of a physical (serial) link, there is a higher chance of split brain. If the WAN went down between both sites, the VIP would end up at both sites, where a variety of unpleasant scenarios could introduce desync. Another potential issue I see is the difficulty of scaling this infrastructure to a third data center in the future. The network layer is not a focus; the architecture is flexible at this stage. Again, my focus is a solution for maintaining data integrity as well as automatic failover with the MySQL databases. I would likely design the rest around this. Can you recommend a proven solution for MySQL HA between two physically diverse sites? Thank you for taking the time to read this. I look forward to reading your recommendations.

    Read the article

  • VNC authentication failure

    - by cf16
    I am trying to connect to my vncserver running on CentOS from my home computer, which is behind a firewall. I have both Win7 and Ubuntu installed on this machine. I get the error "VNC connection failed: vncserver too many security failures", and even when logging in with the right credentials (I reset the password on CentOS) I get an authentication failure. I find that I have to wait a whole day before I can log in again at all. Could it be because I am trying as root? I think it is also important that I have to connect to the remote CentOS box through port 6050 - no other port works for me. Do I have to do something with other ports? I see that vncserver is listening on 5901 (and 5902 if another is added), and I assume the connection is being established, because from time to time (after a long wait) the password prompt appears... right? I have created an additional user1 with passwords for CentOS and for VNC, and also a user2. I run "service vncserver start" and two servers start, one on :1 and a second on :2. When I try to connect to vncserverIP:1 I get what is described above, but when I try to connect to vncserverIP:2 it says the attempt was unsuccessful. Please help - what should I do? Additionally, how can I disable this lockout for testing purposes?
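
    Not from the original question: the "too many security failures" message comes from the blacklisting built into the RealVNC/TigerVNC-derived Xvnc that CentOS ships, and its thresholds can usually be relaxed through the per-display arguments in /etc/sysconfig/vncservers. A sketch, assuming the installed Xvnc supports the BlacklistTimeout/BlacklistThreshold parameters (check "Xvnc -help" first):

        # /etc/sysconfig/vncservers - relax the lockout while testing
        # (BlacklistTimeout in seconds, BlacklistThreshold = tolerated failed attempts)
        VNCSERVERS="1:user1 2:user2"
        VNCSERVERARGS[1]="-geometry 1024x768 -BlacklistTimeout 10 -BlacklistThreshold 20"
        VNCSERVERARGS[2]="-geometry 1024x768 -BlacklistTimeout 10 -BlacklistThreshold 20"

        # restart so the new arguments take effect
        service vncserver restart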

    Read the article

  • Squid SSL transparent proxy - SSL_connect:error in SSLv2/v3 read server hello A

    - by larryzhao
    I am trying to set up an SSL proxy for one of my internal servers to visit https://www.googleapis.com using Squid, so that the Rails application on that server reaches googleapis.com via the proxy. I am new to this, so my approach is to set up an SSL transparent proxy with Squid. I built Squid 3.3 on Ubuntu 12.04, generated an SSL key and certificate pair, and configured Squid like this, leaving almost all other settings at their defaults:

        http_port 443 transparent cert=/home/larry/ssl/server.csr key=/home/larry/ssl/server.key

    The permissions on the directory that holds the key/crt are:

        drwxrwxr-x 2 proxy proxy 4096 Oct 17 15:45 ssl

    Back on my dev laptop, I put "<proxy-server-ip> www.googleapis.com" in my /etc/hosts to make the calls go to my proxy server. But when I try it in my Rails application, I get:

        SSL_connect returned=1 errno=0 state=SSLv2/v3 read server hello A: unknown protocol

    I also tried openssl on the command line:

        openssl s_client -state -nbio -connect www.googleapis.com:443 2>&1 | grep "^SSL"
        SSL_connect:before/connect initialization
        SSL_connect:SSLv2/v3 write client hello A
        SSL_connect:error in SSLv2/v3 read server hello A
        SSL_connect:error in SSLv2/v3 read server hello A

    Where did I go wrong?
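
    Two observations and a sketch, none of it from the original question: the cert= option above points at server.csr (a signing request, not a certificate), and an http_port marked "transparent" still speaks plain HTTP on 443, which matches the "unknown protocol" the client reports. Squid 3.3's intercepting TLS support instead uses https_port with ssl-bump, roughly like this (paths and the blanket bump rule are illustrative, and clients must trust the signing certificate for this to work):

        # squid.conf - terminate TLS on 443 and re-encrypt to the origin
        https_port 443 intercept ssl-bump cert=/home/larry/ssl/server.crt key=/home/larry/ssl/server.key
        ssl_bump server-first all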

    Read the article

  • 550 Requested action not taken: mailbox unavailable

    - by Porch
    I set up a small box with Server 2003 64-bit to be used as a web server and email server for a small school. Real simple stuff for a few users: a simple website and a handful of emails. rDNS and SPF records are set up and pass every test I found, including the tests at dnsstuff.com. Email sending to almost every email address (Google, Hotmail, AOL, whatever) works. However, with one domain I get a bounce back with the error:

        550 Requested action not taken: mailbox unavailable

    It's another school running Exchange, judging from some packet sniffing with Wireshark. Every email address on this domain I have tried sending to gives this error. The addresses are valid, as I can send to them from my personal and Gmail accounts without a problem. Does anyone know of anti-spam software that gives a 550 error like the above? What else could this be? Thanks for any suggestions. A packet capture of the two servers communicating looks like this:

        220 <server snip> Microsoft ESMTP MAIL Service, Version: 6.0.3790.3959 ready at Sat, 2 Oct 2010 12:48:17 -0700
        EHLO <email snip>
        250-<server snip> Hello [<ip snip>]
        250-TURN
        250-SIZE
        250-ETRN
        250-XXXXXXXXXX
        250-DSN
        250-ENHANCEDSTATUSCODES
        250-8bitmime
        250-BINARYMIME
        250-XXXXXXXX
        250-VRFY
        250-X-EXPS GSSAPI NTLM LOGIN
        250-X-EXPS=LOGIN
        250-AUTH GSSAPI NTLM LOGIN
        250-AUTH=LOGIN
        250-X-LINK2STATE
        250-XXXXXXX
        250 OK
        MAIL FROM: <email snip>
        250 2.1.0 <email snip>....Sender OK
        RCPT TO:<email snip>
        250 2.1.5 <email snip>
        DATA
        354 Start mail input; end with <CRLF>.<CRLF>
        <email body here>
        .
        550 Requested action not taken: mailbox unavailable
        QUIT
        221 Goodbye

    Read the article

  • Multi-IP address Zimbra server DNS PTR records and spam

    - by David Fraser
    We have a mail server running Zimbra (ZCS 6.0.8). The server has 5 active public IP addresses in the same subnet. (.226-.230). I currently have A records for each of these (host0.domain.com..host4.domain.com), with the main host.domain.com of the machine pointing to .226. Our host has ended up being listed on the SORBS DUHL list (even though it's in a server farm). According to them you can get removed quickly by checking that your host has an MX record, an A record, and a PTR record that points back to the hostname given in the MX record. I tried setting the PTR records so that each of these addresses resolved back to their A record (i.e. .228 had a PTR to host2.domain.com). However, I then got mail being rejected from other servers because when Postfix (under Zimbra control) sends out mail, it uses the main hostname for the HELO - there doesn't seem to be any way to override it. So the PTR records currently say host.domain.com for all 5 IP addresses. What's the correct way to handle this? Should I have an A record for the domain that points to all the IP addresses (for round-robin handling)? I'm nervous of changes that could cause problems, so I'm wondering what the standard way to handle a multiple-IP-address mail server is.
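
    Not from the original question - Postfix itself can pin both the outgoing source address and the HELO name, although under Zimbra a direct postconf change may be rewritten by Zimbra's configuration daemon on restart, so treat this as a sketch to be redone through Zimbra's own config layer (the IP below is illustrative):

        # Send outbound mail from the address whose PTR record matches the HELO name
        postconf -e 'smtp_bind_address = 203.0.113.226' 'smtp_helo_name = host.domain.com'
        postfix reload

    With HELO, A and PTR all agreeing on one address, the remaining IPs can keep PTR records pointing at their own hostN.domain.com names.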

    Read the article

  • What are best practices on virtual lab/test bed architecture?

    - by WooYek
    I am currently preparing a new small virtual environment for development and testing with Windows Server + SQL Server + AD + SharePoint + Exchange + IIS (ASP.NET) + BizTalk + ?, for a small (up to 5 person) dev team. What are the pros and cons of different approaches, e.g. splitting roles across different machines versus packing everything onto one machine per person? In your experience, what are the best practices I should follow in terms of architecture and the placement of the various systems/servers? What should be shared and what should be split per person? I would like to achieve some flexibility for the dev and testing process (so team members would not be stepping on each other's toes) and limit the administrative effort needed to propagate settings, integrate work items and revert changes when something breaks. It's not supposed to be an everyday development working environment - more a tier 2 developer testing environment - and not yet an integration or QA testing environment with a formal change process. IMO the two borderline solutions are: creating one all-inclusive machine for each dev team member, giving them freedom to manage it; or creating a shared environment managed by one person, with a somewhat formalized change request process. What golden mean would you recommend, and why?

    Read the article

  • How can I track down the cause of ext3 filesystem corruption?

    - by Jon Buys
    We have a VMware vSphere 5 environment running CentOS 5.8 virtual machines. In the past two weeks we have had five incidents of virtual machines having a filesystem become corrupt, requiring an fsck to repair. Here is what we see in the logs:

        Nov 14 14:39:28 hostname kernel: EXT3-fs error (device dm-2): htree_dirblock_to_tree: bad entry in directory #2392098: rec_len is smaller than minimal - offset=0, inode=0, rec_len=0, name_len=0
        Nov 14 14:39:28 hostname kernel: Aborting journal on device dm-2.
        Nov 14 14:39:28 hostname kernel: __journal_remove_journal_head: freeing b_committed_data
        Nov 14 14:39:28 hostname last message repeated 4 times
        Nov 14 14:39:28 hostname kernel: ext3_abort called.
        Nov 14 14:39:28 hostname kernel: EXT3-fs error (device dm-2): ext3_journal_start_sb: Detected aborted journal
        Nov 14 14:39:28 hostname kernel: Remounting filesystem read-only
        Nov 14 14:39:28 hostname kernel: EXT3-fs error (device dm-2): htree_dirblock_to_tree: bad entry in directory #2392099: rec_len is smaller than minimal - offset=0, inode=0, rec_len=0, name_len=0
        Nov 14 14:31:17 hostname ntpd[3041]: synchronized to 194.238.48.2, stratum 2
        Nov 14 15:00:40 hostname kernel: EXT3-fs error (device dm-2): htree_dirblock_to_tree: bad entry in directory #2162743: rec_len is smaller than minimal - offset=0, inode=0, rec_len=0, name_len=0
        Nov 14 15:13:17 hostname kernel: __journal_remove_journal_head: freeing b_committed_data

    The problem seems to happen while we are rsyncing application data from another server. So far we have been unable to reproduce the problem or identify a root cause. After a few servers had this problem, we assumed there was an issue with the template, so we scrapped all VMs cloned off of the template, destroyed the template, and built a new template from scratch, installed from a newly downloaded CentOS ISO. We use HP EVA SANs for datastores, and moved from a 4400 to a 6300 after the first problem. Since the move and the rebuilding of new virtual machines, we have seen the issue twice. On one VM we shut down the server, removed two virtual CPUs, and booted it back up again; the problem presented itself almost immediately. On the other VM, we rebooted it, and the problem happened half an hour later. Any tips or pointers in the right direction would be appreciated.

    Read the article

  • Dynamic ARP Entries turning into Static ARP entries

    - by Zach
    I recently acquired a client that has a strange ARP caching issue on one of their servers. The server will eventually start turning its dynamic ARP entries into static ARP entries. This causes problems, because when a machine that has a static ARP entry on this server receives a new IP via DHCP, the server is no longer able to communicate with it. Clearing the ARP cache resolves the issue, and the server is fine for about a week, and then it slowly starts turning ARP entries into static ARP entries again. I haven't narrowed down when it starts or how many entries are affected, but slowly you start seeing 1 static ARP entry, then 5, then 10. The server in question is Windows Server 2003 SP2. It is a DC, DHCP, and DNS server. I've checked the DHCP scope options and there's nothing in there that would indicate anything to do with static ARP entries. The only thing different between this DNS server and our other DNS server is that 'Dynamically update DNS A and PTR records for DHCP clients that do not request updates' is checked on the problematic server. I've done a bit of research about this, and it seems it may happen if any PXE-type services are running; from what I can tell, nothing is running a PXE server. I'm a bit lost, as I have never seen dynamic ARP entries turn into static ARP entries. Right now my workaround is a scheduled task that runs every 24 hours to clear the ARP cache (arp -d *). I would like to not rely on this scheduled task. Has anybody seen this before or have any suggestions on how to troubleshoot it?

    Read the article

  • Securing bash scripts

    - by minnur
    Hi there, does anybody know the best way to secure bash scripts? I have a script which creates database and source code backups and FTPs them to another server, and the login/password for the destination FTP are in plain text. I need to somehow encrypt them or hide them in case the website is hacked. Or should I write a C program that creates the bash file, runs it, and then deletes it? Thanks. Thanks for the answers, and I am sorry I wasn't clear enough; I would like to clarify my question with the following points. We are storing the data in Rackspace Cloud Files. We can't pull, as Cloud Files doesn't allow you to run a script. We can write the script to run on server A and pull FTP and MySQL data from servers B, C, D, etc. And we want to protect the passwords on A for the situation where A is hacked. Can we compile our script file to hide them? Thanks.
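
    A common low-tech sketch, not from the original question: keep the credentials out of the script in a root-owned file with mode 600 and source it at run time; compiling the script only obscures the secret, since anyone who can run the binary can still recover it, and anyone who fully compromises A can read whatever A can read. File names and hosts below are illustrative:

        #!/bin/bash
        # backup.sh - assumes /root/.backup_credentials exists, mode 600, containing:
        #   FTP_USER='backup'
        #   FTP_PASS='s3cret'
        set -euo pipefail
        source /root/.backup_credentials

        tar czf /tmp/site-backup.tar.gz /var/www
        # mysqldump credentials belong in /root/.my.cnf (also mode 600),
        # not on the command line where `ps` would expose them
        mysqldump --all-databases | gzip > /tmp/db-backup.sql.gz

        # feed the password to lftp via stdin (printf is a bash builtin),
        # so it never shows up in another process's argument list
        printf '%s\n' \
          "open -u $FTP_USER,$FTP_PASS ftp://backup.example.com" \
          "put /tmp/site-backup.tar.gz" \
          "put /tmp/db-backup.sql.gz" \
          "bye" | lftp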

    Read the article

  • BackupExec 12 + RALUS - VERY slow backups

    - by LVDave
    We use Backup Exec 12 and the Remote Agent for Linux/Unix Servers (RALUS) to back up a large RHEL5 system. For various reasons we need to do a daily working-set job. These working-set jobs run abysmally slowly. The link between the target machine and the BE server is gigabit, and any other type of job runs at 1-3 GB/min. These working-set jobs start out at perhaps 40 MB/min and, over the course of the backup job, slowly drop so low that the BE job rate display in "current jobs" goes blank. Since we usually are only doing changed files for one day, the job is usually small and finishes overnight, so we don't worry about the slowness, but we had some issues with the backup server and missed about 6 days of fairly heavy work on the Linux box, so this working-set job will be a doozy. We have support with Symantec, and I've pestered them a lot about this; they've had me run RALUS in debug mode and send them that log plus a VXgather from the BE host, and they had no fix/workaround. To give an idea, the working-set job I mentioned has been running for the last 3 1/2 hours and has backed up just under 10 megabytes. I'm posting this here to see if anybody in the "real world" has seen this and/or has any ideas what might be causing these abysmally slow jobs, since Symantec seems to be clueless...

    Read the article

  • Cannot access domain from Windows 2003 client

    - by Peuge
    Hey all, first off I am a novice at AD and DNS, so please bear with me. This is my current situation: I have one server which is a DC and DNS server (Win2k3) - Machine1. I have another machine which is trying to join this domain - Machine2. This machine is also a Win2k3 server. This is what I have done so far: I have set up DNS on the DC, and its TCP/IP DNS is pointing to itself. On Machine2 I have set its DNS to point to the DC. The DNS has been set up with a forward lookup zone with the same name as the domain (accdirect.com). I can ping Machine1 from Machine2 by its FQDN and IP. I have set up forwarders on the DC for our ISP's DNS and can browse the internet on both machines. In the DNS MMC on the DC I can see a host (A) record has been created for Machine2. The problem is I still cannot join the domain. When I try to join the domain via My Computer > Properties, it brings up the username/password box, and after I click OK it says it cannot find the domain accdirect.com. If I run this from Machine2:

        dcdiag /s:accdirect.com /u:accdirect.com\admin /p:

    then I get the following:

        Performing initial setup:
        ** Warning: could not confirm the identity of this server in the directory
        versus the names returned by DNS servers. If there are problems accessing
        this directory server then you may need to check that this server is
        correctly registered with DNS
        [accdirect.com] Directory Binding Error 1722:
        Win32 Error 1722
        This may limit some of the tests that can be performed.
        Done gathering initial info.

    On the DC all dcdiag and netdiag results pass. If anyone could help me I would really appreciate it! Sorry if any of my terminology is a bit off, I have only been doing this for two days. Thanks, Peuge

    Read the article

  • Problem: IIS 7.0 locking files during upload

    - by viscious
    I am running Server 2008 with IIS 7 and the FTP add-on for IIS 7.0. I have the FTP site configured and mostly working, except that about 70% of the time when transferring a file the upload will hang forever. If I disconnect the FTP client, reconnect, and try to upload the same file, I get an error on the client saying the file is locked, and I have to restart the FTP service to clear the lock. I fired up Process Explorer and did a search on the file in question, and sure enough the FTP service has a lock on the file, and it takes around 20 minutes (sometimes longer) to release the lock on its own. This lock stays around even after I disconnect the client. Like I said, this only happens about 70% of the time; the other 30% of the time the upload goes through just fine. Things I have verified: it is not a firewall issue - the server is using passive port range 8000-9000, which is allowed on the firewall; it is not a NAT issue - the server has a globally routable IP address; and all recommended/required updates are installed. I have 5 other servers in a very similar configuration, and this is the only one I have problems with.

    Read the article

  • SQL Server 2008 second instance times out when logging in -- but only the first time?

    - by Kromey
    This is a strange one that has plagued me for a while now. When logging in to the second instance of SQL Server 2008 on one of our database servers, we get a timeout error: Cannot connect to servername\mssqlserver2. Additional information: Timeout expired. The timeout period elapsed prior to completion of the operation or the server is not responding. (Microsoft SQL Server)") (This is the error message when trying to connect with Microsoft SQL Server Management Studio; other tools experience the same error, but of course say it in different ways.) Immediately re-attempting to log in works just fine, so whatever the cause it's ephemeral! This happens regardless of user or authentication type (both Windows and SQL Server authentication methods are supported on this instance). What's even weirder, though, is that the first instance on this server has never once demonstrated this problem. Server is a Windows Server 2008 R2 virtual server, hosted in Microsoft Hyper-V (host is likewise Server 2008 R2). The server has 2GB of RAM, and seems to regularly be using 90% of that -- could low memory be the cause of this issue? I could see this second instance -- which is not used very often yet -- being swapped out to disk, and then taking too long to load back into memory to respond in time to the connection request, but I'd rather have more than just my own hunch before I go scheduling a downtime for this server (the first instance is used regularly) and then just throwing extra resources at it in the blind hope that the problem goes away.

    Read the article

  • server and user directly connected no pinging...

    - by jtzero
    I have a server (Fedora 12) with two NICs on it, directly connected to, say, 192.168.1.0 and 192.168.2.0. The route table looks like this:

        Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
        192.168.1.0     192.168.1.1     255.255.255.0   U     0      0        0 eth0
        192.168.2.0     *               255.255.255.0   U     0      0        0 eth1

        eth0 = 192.168.1.15
        eth1 = 192.168.2.1

    There is a directly connected user (Mythdora) on the 192.168.2.0 network with IP 192.168.2.2 and a route table like so:

        Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
        192.168.2.0     *               255.255.255.0   U     0      0        0 eth0

    The cable is a crossover and it works; all three NICs work - if I connect my laptop to either end and assign it a valid 192.168.2.0 IP, the pings work. In fact, if I disconnect the server side, plug the Ethernet cable into the laptop, have the box ping the laptop continually, then remove the cable and plug it back into the server, both sides ping... unfortunately the box, realizing it's connected to a different PC, wipes its route table after, say, ten minutes or so. If I do a traceroute from a box on the 1.0 network to the server's 192.168.2.1 interface, I never get a reply from it. As a note, at one point I could ping the server from the 192.168.2.2 box, but the server couldn't ping the 192.168.2.2 box.
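
    A diagnostic sketch, not from the original post, assuming the addressing shown above: for the 1.0 network to reach 192.168.2.1 the server has to forward between its NICs, the Mythdora box needs a route back to 192.168.1.0, and reverse-path filtering can silently drop replies that arrive on the "wrong" interface.

        # On the Fedora server: allow forwarding between eth0 and eth1
        sysctl -w net.ipv4.ip_forward=1   # persist with net.ipv4.ip_forward = 1 in /etc/sysctl.conf

        # Loosen reverse-path filtering while testing
        sysctl -w net.ipv4.conf.all.rp_filter=0

        # On the Mythdora box: give it a way back to the 192.168.1.0 network
        route add -net 192.168.1.0 netmask 255.255.255.0 gw 192.168.2.1

        # Watch whether pings actually arrive on each interface
        tcpdump -ni eth1 icmp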

    Read the article

  • Hyper-V Virtual Machine won't respond over network

    - by Brad Gignac
    Recently, one of our Hyper-V virtual machines has periodically stopped responding over the network. It has been happening every few days, and occasionally up to several times a day. I am by no means a sysadmin, so any direction you could provide would be very welcome. I've included everything I know to include below; if you need any additional information, I'll be glad to add it. I can connect through the Hyper-V console, but I can't connect to network shares or IIS web apps, or connect using RDP or ping. Memory usage seems to be normal (3 of 4 GB) and processor usage seems low. We don't know the exact time the server goes down, but the following error appears consistently around that time:

        Error 5719, NETLOGON
        This computer was not able to set up a secure session with a domain
        controller in domain *** due to the following: There are currently no
        logon servers available to service the logon request. This may lead to
        authentication problems. Make sure that this computer is connected to
        the network. If this problem persists, please contact your domain
        administrator.

    Read the article

  • VirtualBox problems writing to shared folders (Guest Additions installed)

    - by vincent
    I am trying to set up a shared folder from the host (Ubuntu 10.10) to mount on a virtualized CentOS 5.5 guest with Guest Additions (4.0.0) installed (the Guest Additions features are working, e.g. seamless mode). I am able to successfully mount the share with:

        mount -t vboxsf -o rw,exec,uid=48,gid=48 sf_html /var/www/html/

    (the uid and gid belong to the apache user/group). The only problem is that once it is mounted, when I try to write or create directories and files I get the following:

        mkdir: cannot create directory `/var/www/html/test': Protocol error

    I am using the proprietary version of VirtualBox, version 4.0.0 r69151. Has anyone had the same problem and been able to fix it, or does anyone have an idea how to potentially fix this? Another question - the reason for setting this up is this: our production servers are on CentOS 5.5, but I am a great fan of Ubuntu and would like to develop on Ubuntu rather than CentOS. However, in order to stay as close as possible to the production environment, I would like to run a virtualized CentOS as the web server and use the shared folder as the web root. Does anyone know whether this is a bad idea, or has anyone successfully been able to set this up? Thanks guys, your help is always much appreciated, and if you need any more information please let me know.

    Read the article
