Search Results

Search found 19256 results on 771 pages for 'boost log'.

  • Run command remotely on Windows computer from C#

    - by Bilal Aslam
    I have a Windows Server 2008 instance on Amazon EC2 (Amazon's cloud compute platform, which provides VMs in the cloud). It has an external IP, and I have an admin account on the box. I would like to 'bootstrap' this instance remotely, i.e. I want to run commands to download, install and configure apps on it, all without having to log on even once. Also, I cannot use psexec on the source computer. I have figured out how to do this to a remote, domain-joined computer using WMI. However, I have NOT been able to do it for a remote computer on EC2. Here are some specific restrictions:
    1) The remote computer is not part of my domain, hence no Kerberos.
    2) The remote computer does not have a cert I trust, or vice versa.
    I am sure I am running into some auth/trust restriction. Is there any way I can run a single command on the remote machine, given that I have admin privileges? I'm not tied down to using WMI, but I do need to run a command somehow. Feels like this should be a solved problem.
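
    One possibility, sketched here as an assumption rather than a confirmed fix for this setup: WinRM ships with Server 2008 and can authenticate with NTLM to non-domain hosts over plain HTTP, so neither Kerberos nor a trusted cert is required. Host name and credentials below are placeholders; the listening port depends on the WinRM version (80 for the older WinRM 1.1, 5985 for 2.0).

      rem on the EC2 instance (once, e.g. injected via EC2 user-data):
      winrm quickconfig -q
      rem on the source computer, trust the non-domain host for NTLM auth:
      winrm set winrm/config/client @{TrustedHosts="ec2-host.example.com"}
      rem then run a command remotely:
      winrs -r:http://ec2-host.example.com:5985 -u:Administrator -p:secret cmd /c "dir C:\"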

  • Loop through several servers, find specific dlls, get the dll version, internal filename and path?

    - by Graham
    I am a newbie to PowerShell, using PS v2. I can see the massive potential it has, but I just can't get the following code to work fully. I am trying to end up with a csv file that contains the wildcarded required dlls in the GAC_MSIL or a sub-directory, the dll version, internal filename and path, and the server IP address. The code is below, in single-line format because it appears easier to remote onto one of the servers in the server farm and run the single line from that console, due to security log-ins etc. The code has produced a set of results, but only for the last server; it probably does the first server, then overwrites it, but I am not sure about that. I have done a lot of reading about using arrays and custom objects, and had a go at doing that, but my scripting skills in PS are not yet up to it. Code:
      $out = "Ouput_dll_ver_results.csv"; foreach ($server in '11.222.33.123', '11.222.33.124') {$VersionInfo = (Get-ChildItem \\$server\C$\windows\assembly\GAC_MSIL -recurse -Include abc*.dll,def*.dll,ghi*.dll,jkl*.dll | Where-Object { $_.FullName -notmatch "\\windows\\assembly\\temp\\" })}; $VersionInfo | %{Get-Command $_.FullName} | select -expand File* | Export-Csv $out
    Can you please advise if/how the above code can be corrected, and if not, what alternatives do I have to get the information I need. Many thanks in advance. Graham
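
    The diagnosis in the question is right: $VersionInfo is reassigned on every loop pass, so only the last server's results survive. A minimal PS v2 sketch that accumulates instead (same server list and dll patterns as above; untested against a real farm):

      $results = foreach ($server in '11.222.33.123', '11.222.33.124') {
          Get-ChildItem "\\$server\C$\windows\assembly\GAC_MSIL" -Recurse `
              -Include abc*.dll,def*.dll,ghi*.dll,jkl*.dll |
          Where-Object { $_.FullName -notmatch "\\windows\\assembly\\temp\\" } |
          Select-Object @{n='Server';e={$server}},
                        @{n='FileVersion';e={$_.VersionInfo.FileVersion}},
                        @{n='InternalName';e={$_.VersionInfo.InternalName}},
                        @{n='Path';e={$_.FullName}}
      }
      $results | Export-Csv "Ouput_dll_ver_results.csv" -NoTypeInformation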

  • Transparent proxying in MacOS X 10.6 Snow Leopard (and maybe FreeBSD)

    - by apenwarr
    I'm trying to create a transparent proxy on my MacOS machine in order to port the sshuttle ssh-based transproxy VPN from Linux. I think I almost have it working, but sadly, almost is not 100%. Short version is this. In one window, start something that listens on port 12300:
      $ while :; do nc -l 12300; done
    Now enable proxying:
      # sysctl -w net.inet.ip.forwarding=1
      # sysctl -w net.inet.ip.fw.enable=1
      # ipfw add 1000 fwd 127.0.0.1,12300 log tcp from any to any
    And now test it out:
      $ telnet localhost 9999   # any port number will do
        # this works; type stuff and you'll see it in the nc window
      $ telnet google.com 80    # any host/port will do
        # this *doesn't* work!
    After the latter experiment, I see lines like this in netstat:
      $ netstat -tn | grep ^tcp4
      tcp4  0  0  66.249.91.104.80     192.168.1.130.61072  SYN_RCVD
      tcp4  0  0  192.168.1.130.61072  66.249.91.104.80     SYN_SENT
    The second socket belongs to my telnet program; the first is more suspicious. SYN_RCVD implies that my SYN packet was correctly captured by the firewall and taken in by the kernel, but apparently the SYNACK was never sent back to telnet, because it's still in SYN_SENT. On the other hand, if I kill the nc server, I get this:
      $ telnet google.com 80
      Trying 66.249.81.104...
      telnet: connect to address 66.249.81.104: Connection refused
      telnet: Unable to connect to remote host
    ...which is as expected: my proxy server isn't running, so ipfw redirects my connection to port 12300, which has nobody listening on it, i.e. connection refused. My uname says this:
      $ uname -a
      Darwin mean.local 10.2.0 Darwin Kernel Version 10.2.0: Tue Nov 3 10:37:10 PST 2009; root:xnu-1486.2.11~1/RELEASE_I386 i386
    Does anybody see any different results? (I'm especially interested in Snow Leopard vs Leopard results, as there seem to be some internet rumours that transproxy is broken in the Snow Leopard version.) Any advice for how to fix?
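
    A small diagnostic sketch (hedged; assumes the stock OS X ipfw): watching whether the fwd rule is actually matching separates a rule-order problem from a socket-handoff problem.

      # per-rule packet counters; the fwd rule is number 1000 above
      sudo ipfw -a list
      # the 'log' keyword only emits once verbose logging is enabled
      sudo sysctl -w net.inet.ip.fw.verbose=1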

  • Samba PDC share slow with LDAP backend

    - by hmart
    The scenario: I have a SUSE SLES 11.1 SP1 machine as Samba master PDC with an LDAP backend. One share holds database files for a client-server application. I log XP and Windows 7 machines into the local domain (example.local); the login is a little slow but works. The client computers run an executable which opens, reads and writes the database files from the server share. The problem: when running Samba with the LDAP password backend, the client application runs VERY SLOW, with a maximum transfer rate of 2.5 Mbit per second. If I disable LDAP, the client app's speed increases 20x, to a transfer rate of 50 Mbit/sec, and it runs smoothly. I'm doing tests with just two users and two machines, so concurrency or LDAP size shouldn't be the problem here. The suspects: LDAP and the smb.conf [global] section configuration. The question: what can I do? I've googled a lot, but still have no answer. Slow smb.conf WITH LDAP:
      [global]
      workgroup = zmartsoft.local
      passdb backend = ldapsam:ldap://127.0.0.1
      printing = cups
      printcap name = cups
      printcap cache time = 750
      cups options = raw
      map to guest = Bad User
      logon path = \\%L\profiles\.msprofile
      logon home = \\%L\%U\.9xprofile
      logon drive = P:
      usershare allow guests = Yes
      add machine script = /usr/sbin/useradd -c Machine -d /var/lib/nobody -s /bin/false %m$
      domain logons = Yes
      domain master = Yes
      local master = Yes
      netbios name = server
      os level = 65
      preferred master = Yes
      security = user
      wins support = Yes
      idmap backend = ldap:ldap://127.0.0.1
      ldap admin dn = cn=Administrator,dc=zmartsoft,dc=local
      ldap group suffix = ou=Groups
      ldap idmap suffix = ou=Idmap
      ldap machine suffix = ou=Machines
      ldap passwd sync = Yes
      ldap ssl = Off
      ldap suffix = dc=zmartsoft,dc=local
      ldap user suffix = ou=Users
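
    One hedged thing to try (an assumption, not a confirmed fix): smbd can issue a burst of LDAP lookups per file operation unless it is told to trust the directory layout. With a standard Samba LDAP schema, the ldapsam:trusted switch cuts most of those round-trips:

      passdb backend = ldapsam:ldap://127.0.0.1
      ldapsam:trusted = yes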

  • Fix for php 5.3.9 libxsl security "bug" fix

    - by Question Mark
    Just this morning I updated my Debian server to PHP 5.3.9; the changelog (last item in the list) has a fix for this bug, and now when running any hosted site using XSL transforms I get:
      Warning: XSLTProcessor::transformToXml(): Can't set libxslt security properties, not doing transformation for security reasons
    I'm not using any <sax:output> tags in my XSLT at all. Does anybody have any information on this? Current chatter about it is thin, so I'm a little lost. Using the suggestion about switching ini settings on and off on either side of ->transformToXml():
      ini_set("xsl.security_prefs", XSL_SECPREFS_NONE)
    or
      $xsl->setSecurityPreferences(XSL_SECPREFS_NONE)
    brings me back to the same error. Many thanks.
    Progress:
    - Upgrading libxml and recompiling libxslt against the new version was a good suggestion, though it has not fixed the issue.
    - Compiling the latest php5.3 snapshot does not fix the issue.
    Solution: I'm unsure what actually solved this; very sorry for anyone else having the same problem. First I upgraded libxml, then applied a few patches, then went into the PHP source for the XSL parser and added some debugging and a few tweaks. After a few compiles, getting the configure args right, the error went away and wasn't reproducible. I would definitely recommend upgrading libxml as Petr suggested below, and then grabbing the latest snapshot from php.net.

  • Optimal file system type and mount options for an rsnapshot dedicated drive

    - by Nimmy Lebby
    We have an external USB 2 drive that we are using as a backup drive for our configuration. We use rsnapshot for the backups. It uses a few standard commands for managing snapshots:
      rm -rf: deletes expired snapshots
      mv: moves older snapshots down a slot
      cp -al: duplicates last snapshot to new slot
      rsync -a --delete --numeric-ids --relative: synchronizes new snapshot
    As you can see from the log below, the majority of the time is spent on the rm -rf and cp -al steps:
      [25/Dec/2010:14:00:02] rsnapshot hourly: started
      [25/Dec/2010:14:00:02] echo 21012 > /var/run/rsnapshot.pid
      [25/Dec/2010:14:00:02] rm -rf /mnt/extdrive/snapshots/hourly.5/
      [25/Dec/2010:14:15:48] mv /mnt/extdrive/snapshots/hourly.4/ /mnt/extdrive/snapshots/hourly.5/
      [25/Dec/2010:14:15:48] mv /mnt/extdrive/snapshots/hourly.3/ /mnt/extdrive/snapshots/hourly.4/
      [25/Dec/2010:14:15:48] mv /mnt/extdrive/snapshots/hourly.2/ /mnt/extdrive/snapshots/hourly.3/
      [25/Dec/2010:14:15:48] mv /mnt/extdrive/snapshots/hourly.1/ /mnt/extdrive/snapshots/hourly.2/
      [25/Dec/2010:14:15:48] cp -al /mnt/extdrive/snapshots/hourly.0 /mnt/extdrive/snapshots/hourly.1
      [25/Dec/2010:14:23:32] rsync -a --delete --numeric-ids --relative /etc /mnt/extdrive/snapshots/hourly.0/sm4/
      [25/Dec/2010:14:23:52] touch /mnt/extdrive/snapshots/hourly.0/
      [25/Dec/2010:14:23:52] rm -f /var/run/rsnapshot.pid
      [25/Dec/2010:14:23:52] rsnapshot hourly: completed successfully
    My questions: I'm currently using ext4 for the filesystem. Maybe this is not the best choice of those available in Red Hat; does anyone have any recommendations that would speed up the process? The partition's mount options are sync,dirsync 1 2. Is there a way to optimize this since it's solely used for rsnapshot? Of course, reasoning would be greatly appreciated.
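
    A hedged observation on those mount options: rm -rf and cp -al are metadata-heavy (thousands of unlinks and hard links), and sync,dirsync force each of those operations to hit the USB disk synchronously. Relaxing them in /etc/fstab is likely the single biggest win, at the cost of some crash safety (device name is a placeholder):

      # before: /dev/sdb1  /mnt/extdrive  ext4  sync,dirsync  1 2
      /dev/sdb1  /mnt/extdrive  ext4  defaults,noatime  1 2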

  • Need to boot into chkdsk from USB on Windows netbook

    - by Gaz Davidson
    While attempting to install Ubuntu on a 32-bit Windows XP netbook, the partition resize operation failed due to inconsistencies in the NTFS filesystem (lesson learned: run chkdsk /f in Windows before trying to resize a partition in Linux). Now the installer only gives the option to replace Windows with Ubuntu, the partition can't be resized in gparted, which displays a red exclamation mark and an error log when you click it. To make matters worse, we're also unable to reboot into Windows to get at chkdsk. We get a BSoD when choosing any of the options (including the DOS recovery console thing). The netbook has no CD-ROM drive, contains no recovery image and our only connection to the Internet is via the hotspot on my mobile device. We don't have Windows recovery CDs, but we do have a USB flash drive. We have a 64-bit laptop running Ubuntu 12.04 and Windows 7 (both 64-bit). So, on to the question: Is anyone aware of a way to get into a DOS recovery console and run chkdsk from a USB disk drive, without having to pirate Windows XP or download hundreds and hundreds of megabytes of crap? If it was my device I'd just flatten it, but it isn't. Please help!
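
    Not a full answer, but a hedged stopgap from the Ubuntu side: the live/installer session ships ntfs-3g's ntfsfix, which clears the NTFS dirty flag and repairs some of the common inconsistencies that block gparted resizes. It is not a chkdsk replacement, but it costs nothing to try (device name is a placeholder):

      sudo ntfsfix /dev/sda1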

  • vmdk recovery after a migration from ESX 3.5 to 4 and a fallback attempt

    - by olgirard
    Hi, I've tried to migrate some VMs from my ESX 3.5i environment to a brand new vSphere 4.0 U1. The two platforms are running simultaneously, sharing the same SAN. I migrated each VM by stopping it, unregistering it in vCenter (ESX ver. 3.5, which I call esx3), registering it in vSphere (ESX ver. 4, which I call esx4), and upgrading the virtual hardware before powering it up (first mistake). vMotion was enabled on esx4, which seems to have been a second mistake. After a day or so, I encountered problems reaching the esx4 server and decided to unregister my server from esx4 and fall back to esx3. The VM refused to boot on esx3; I supposed this was due to the virtual hardware being version 7, so I created a new VM pointing to the vmdk of the old VM. Everything seemed fine until I logged into the server and discovered that it was running on the original disk, with every snapshot ignored, even those created on esx3. I tried to reboot the VM on esx4, but it doesn't power up because "The parent virtual disk has been modified since the child was created". As a backup I've got a copy of a later state of the drive, but it was generated between two snapshots (an ovf generated with Converter Standalone). Do I have a chance to recover at least some files on the virtual drive, or (as I think) is it all over? I've made enough mistakes for this time. Thanks for your help.
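
    A hedged pointer rather than a recovery guarantee: "The parent virtual disk has been modified since the child was created" is usually a CID mismatch in the snapshot chain. Each vmdk descriptor is plain text with a CID and a parentCID line; a hardware upgrade or write to the base disk changes its CID. Comparing them (paths are placeholders) shows whether re-linking is even possible:

      grep -i 'CID' /vmfs/volumes/datastore1/myvm/myvm-000001.vmdk   # child: note parentCID
      grep -i 'CID' /vmfs/volumes/datastore1/myvm/myvm.vmdk          # parent: note CID
      # if parentCID differs from the parent's CID, editing the child's
      # parentCID to match can re-link the chain; work only on copies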

  • Postfix + procmail - delivery fails because "can't create user output file" - on CentOS 6.2

    - by jshin47
    I verified that my postfix installation / relaying setup worked. Now I am having trouble with procmail. I have it wired to postfix with the following command:
      mailbox_command = /usr/bin/procmail -f -a "$USER"
    I have nothing in my procmail config but the following:
      LOGFILE=/var/procmailrc/log
    And I send an email to a recipient that previously worked (before I attached procmail). Now it fails with this error:
      Apr 6 14:07:05 localhost postfix/qmgr[15194]: D0C3DFF6E1: from=<[email protected]>, size=938, nrcpt=1 (queue active)
      Apr 6 14:07:05 localhost postfix/local[1953]: D0C3DFF6E1: to=<[email protected]>, orig_to=<postmaster>, relay=local, delay=0.05, delays=0.02/0.01/0/0.02, dsn=5.2.0, status=bounced (can't create user output file. Command output: procmail: Couldn't create "/var/spool/mail/nobody" procmail: Couldn't read "//root" )
      Apr 6 14:07:05 localhost postfix/bounce[1955]: warning: D0C3DFF6E1: undeliverable postmaster notification discarded
      Apr 6 14:07:05 localhost postfix/qmgr[15194]: D0C3DFF6E1: removed
    It seems like there is some sort of permissions issue, but I do not know what the problem is, nor do I understand how I would go about diagnosing it further. The logfile that I specified is empty, by the way. How can I make procmail+postfix work?
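
    A hedged reading of that log: the mail is addressed to postmaster, which typically aliases to root, and postfix will not run mailbox_command as root; it falls back to the default_privs user (nobody), which can neither write /var/spool/mail/nobody nor read root's home. Pointing root at a real local user is the conventional fix (username is a placeholder):

      echo 'root: someuser' >> /etc/aliases
      newaliases
      postfix reload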

  • BackupExec 12 + RALUS - VERY slow backups

    - by LVDave
    We use Backup Exec 12 and the Remote Agent for Linux/Unix Servers (RALUS) to back up a large RHEL5 system. For various reasons we need to do a daily working-set job. These working-set jobs run abysmally slowly. The link between the target machine and the BE server is gigabit, and any other type of job runs at 1-3 GB/min. These working-set jobs start out at perhaps 40 MB/min and over the course of the backup job slowly drop so low that the BE job rate display in "current jobs" goes blank. Since we usually are only doing changed files for one day, the job is usually small and finishes overnight, and we don't worry about the slowness; but we had some issues with the backup server and missed about 6 days of fairly heavy work on the Linux box, so this working-set job will be a doozy. We have support with Symantec, and I've pestered them a lot about this; they've had me run RALUS in debug mode, and I sent them that log and a VXgather from the BE host, but they had no fix/workaround. To give an idea: the working-set job mentioned has been running for the last 3 1/2 hours and has backed up just under 10 MEGAbytes. I'm posting this here to see if anybody in the "real world" has seen this and/or has any ideas what might be causing these abysmally slow jobs, since Symantec seems to be clueless.

  • (13) Permission denied on Apache CGI attempt

    - by ncv
    I have recently upgraded my Apache2 server, and am now unable to run a CGI app. My logs are showing:
      (13)Permission denied: unable to connect to cgi daemon after multiple tries
    I understand that the error message means Apache is being denied some permission to some file, and I'm stumped as to how to track down and solve the problem. Is the file mentioned in the error message truly the blocked file? Or might the problem be caused by some other needed file? The .cgi file is right where it has always been, under /usr/share. The file ownership (root) and permissions (world readable/executable) are the same as they have always been for the file and its ancestors. The SELinux file labels are unchanged. The SELinux audit log shows no denial associated with Apache or the CGI program. In case of a dontaudit condition, I enabled auditing, but still saw nothing. I switched SELinux into permissive mode briefly, to no avail. I even tried restarting Apache while in permissive mode. This did not solve the problem. Any suggestions on how to solve this problem? I'm tempted to just revert to the older Apache.
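
    A hedged lead: that message comes from mod_cgid, which runs CGI through a separate daemon and a Unix socket (the ScriptSock path), so the denied permission is often on the socket directory rather than on the .cgi file itself. Worth checking (paths vary by distro):

      grep -ri 'ScriptSock' /etc/httpd /etc/apache2 2>/dev/null
      ls -lZ /var/run/httpd/ 2>/dev/null   # perms and SELinux label on the socket dir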

  • performance wise htaccess

    - by purpler
    Here's my htaccess template; I wonder if anything could be added to increase website performance.
      # Defaults
      AddDefaultCharset UTF-8
      DefaultLanguage en-US
      ServerSignature Off
      FileETag None
      Header unset ETag
      Options -MultiViews
      #Options All -Indexes
      # Force the latest IE version or ChromeFrame
      <IfModule mod_setenvif.c>
        <IfModule mod_headers.c>
          BrowserMatch MSIE ie
          Header set X-UA-Compatible "IE=Edge,chrome=1" env=ie
        </IfModule>
      </IfModule>
      # Proxy X-UA Setup
      <IfModule mod_headers.c>
        Header append Vary User-Agent
      </IfModule>
      # Rewrites
      Options +FollowSymlinks
      RewriteEngine On
      RewriteBase /
      # Redirect to non-WWW
      RewriteCond %{HTTPS} !=on
      RewriteCond %{HTTP_HOST} ^www\.(.+)$ [NC]
      RewriteRule ^(.*)$ http://%1/$1 [R=301,L]
      # Redirect to WWW
      RewriteCond %{HTTP_HOST} ^domain.com
      RewriteRule (.*) http://www.domain.com/$1 [R=301,L]
      # Redirect index to root
      RewriteRule ^(.*)index\.(php|html)$ /$1 [R=301,L]
      # Caching
      ExpiresActive On
      ExpiresDefault A0
      Header set Cache-Control "public"
      # 1 Year Long Cache
      <FilesMatch "\.(flv|fla|ico|pdf|avi|mov|ppt|doc|mp3|wmv|wav|png|jpg|jpeg|gif|swf|js|css|ttf|eot|woff|svg|svgz)$">
        ExpiresDefault A31622400
      </FilesMatch>
      # Proxy Caching
      <FilesMatch "\.(css|js|png)$">
        ExpiresDefault A31622400
        Header set Cache-Control "private"
      </FilesMatch>
      # Protect against DOS attacks by limiting file upload size
      LimitRequestBody 10240000
      # Proper SVG serving
      AddType image/svg+xml svg svgz
      AddEncoding gzip svgz
      # GZip Compression
      <IfModule mod_deflate.c>
        <FilesMatch "\.(php|html|css|js|xml|txt|ttf|otf|eot|svg)$">
          SetOutputFilter DEFLATE
        </FilesMatch>
      </IfModule>
      # Error page
      ErrorDocument 404 /404.html
      # Deny access to sensitive files
      <FilesMatch "\.(htaccess|ini|log|psd)$">
        Order Allow,Deny
        Deny from all
      </FilesMatch>
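
    One hedged suggestion that is not in the template itself: every request pays a stat() walk for .htaccess files up the docroot. If you control the server config, moving these rules into the vhost and disabling overrides is usually the largest single win (path is a placeholder):

      <Directory /var/www/site>
          AllowOverride None
      </Directory>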

  • network policy + WPA enterprise (tkip) Windows 2008 R2

    - by Aceth
    Hi, I've attempted the following guide and am in a bit of a pickle: http://techblog.mirabito.net.au/?p=87
    My main goal is to have username/password based wireless authentication with Active Directory integration. I keep getting this error:
      Network Policy Server denied access to a user.
      Contact the Network Policy Server administrator for more information.
      User:
        Security ID: domain\rhysbeta
        Account Name: rhysbeta
        Account Domain: domain
        Fully Qualified Account Name: domain\rhysbeta
      Client Machine:
        Security ID: NULL SID
        Account Name: -
        Fully Qualified Account Name: -
        OS-Version: -
        Called Station Identifier: 00-12-BF-00-71-3C:wirelessname
        Calling Station Identifier: 00-23-76-5D-1E-31
      NAS:
        NAS IPv4 Address: 0.0.0.0
        NAS IPv6 Address: -
        NAS Identifier: -
        NAS Port-Type: Wireless - IEEE 802.11
        NAS Port: 2
      RADIUS Client:
        Client Friendly Name: Belkin54g
        Client IP Address: x.x.x.10
      Authentication Details:
        Connection Request Policy Name: Secure Wireless Connections
        Network Policy Name: Secure Wireless Connections
        Authentication Provider: Windows
        Authentication Server: srvr.example.com
        Authentication Type: EAP
        EAP Type: -
        Account Session Identifier: -
      Logging Results: Accounting information was written to the local log file.
      Reason Code: 22
      Reason: The client could not be authenticated because the Extensible Authentication Protocol (EAP) Type cannot be processed by the server.
    I would love to have it so that non-domain devices
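
    A hedged starting point: reason code 22 with a blank "EAP Type" often means NPS could not load the EAP method at all; for PEAP that is typically a missing or invalid server-authentication certificate on the NPS box. A quick inventory of the machine's personal certificate store, from an elevated prompt on the server:

      certutil -store My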

  • IPTables configuration for Localhost

    - by Gabe Mc
    I have a problem connecting a JIRA instance running on a cloud server to an instance of MySQL running on the same box. I have configured it previously using quite a few iptables rules, but it seems overly broad/terribly imprecise. I want access to several of localhost's ports from the local machine, but to deny it to everyone else. Currently, my /etc/iptables.rules file looks like:
      *filter
      :INPUT DROP [223:17779]
      :FORWARD ACCEPT [0:0]
      :OUTPUT ACCEPT [10161:1120819]
      # SSH Access
      -A INPUT -p tcp -m tcp --dport 22 -j ACCEPT
      -A INPUT -i eth0 -m state --state RELATED,ESTABLISHED -j ACCEPT
      # Apache2 Access for connecting to Tomcat on port 8080
      -A INPUT -p tcp -m tcp --dport 443 -j ACCEPT
      # MySQL
      -I INPUT -i lo -p tcp -m tcp --dport mysql -j ACCEPT
      COMMIT
    However, this doesn't allow me to log in; it just hangs on:
      #> mysql -u root -p -h 127.0.0.1
    The Tomcat servlet container starts throwing all kinds of exceptions as well. This is a more general problem, as I need to enable things like accessing the shutdown port for the Tomcat container, but I need to at least get the MySQL part ironed out first, without the ugliness I was originally trying. Thanks.
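
    A hedged diagnosis: the MySQL server's reply packets travel back over lo to the client's ephemeral port, and nothing above accepts them; only --dport mysql is open on lo, and the RELATED,ESTABLISHED rule is pinned to eth0. The standard shape is to accept loopback wholesale and drop the interface restriction on the state rule:

      -A INPUT -i lo -j ACCEPT
      -A INPUT -m state --state RELATED,ESTABLISHED -j ACCEPT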

  • Automounting Active Directory home drives on a Linux server on login

    - by Ethan
    I've got a CentOS 5.7 box authenticating against Active Directory through PBIS Open (the new LikeWise Open), which works well. Now, I'm trying to get the server to automount the user's AD home directory, located at //ad.server.dom/shares/home directories (yeah, there's a space in the path; I didn't set this up). Each user has a directory in there with the same name as the user. I've tried to get pam_mount working, but it has a series of issues on RedHat and friends, and I can't seem to get it working. The directory does need to be automounted for the server to perform its role. My reading on automount seems to suggest that there's no way to get it to do its thing with authentication, though I'm happy to be proved wrong. I've looked at this resource, but it requires RedHat (thus CentOS) version 6 or higher, and newer packages than I have. I can manually (as root) mount the AD directory using the command:
      mount.cifs "//ad.server.dom/Shares/home directories/testuser" /home/local/AD/testuser/nfs_mount/ -o username=testuser
    and when I log in as testuser, I can see all of the sample files in the nfs_share directory. Any tips towards the right direction would be highly appreciated. This is going to be on a server at a college, so it needs to be fairly stable, and would lead towards more Linux adoption there.
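
    Not a per-user-credentials answer, but a hedged stopgap that works on CentOS 5: mount the whole share once at boot with a service account and a credentials file, relying on server-side ACLs for separation. Spaces in fstab paths are escaped as \040 (the account, credentials path and mountpoint are placeholders):

      # /etc/adhome.cred (chmod 600) contains username=, password=, domain= lines
      //ad.server.dom/Shares/home\040directories  /home/local/AD  cifs  credentials=/etc/adhome.cred,_netdev  0 0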

  • postfix smtpd rejecting mail from outside network match_list_match: no match

    - by Loopo
    My postfix (v2.5.5-1.1), running on Ubuntu Server 9.04, started to reject mail arriving from outside about two weeks ago. A "manual" session via telnet shows that the connection is always closed after the MAIL FROM: [email protected] line is input, with the message "Connection closed by foreign host." Doing the same from another client inside the LAN works fine. In the log files I get the line:
      lost connection after MAIL from xxxxx.tld[xxx.xxx.xxx.xxx]
    This is after some lines like:
      match_hostaddr: XXX.XXX.XXX.XXX ~? [::1]/128
      match_hostname: XXXX.tld ~? 192.168.1.0/24
      ...
      match_list_match: xxx.xxx.xxx.xxx: no match
    which seem to suggest some kind of filter that checks for allowed addresses. I have been unable to locate where this filter lives, or how to turn it off. I'm not even sure that's what's causing my problem. Connections from inside the LAN don't get disconnected, even though they also show a "match_list_match: ... no match" line. I didn't change any configuration files recently; below is my main.cf as it currently stands. I don't really know what all the parameters do and how they interact; I just set it up initially and it worked fine (up to recently).
      smtpd_banner = $myhostname ESMTP $mail_name (GNU)
      biff = no
      readme_directory = no
      # TLS parameters
      smtpd_tls_cert_file=/etc/ssl/certs/server.crt
      smtpd_tls_key_file=/etc/ssl/private/server.key
      #smtpd_use_tls=yes
      smtpd_tls_session_cache_database = btree:${data_directory}/smtpd_scache
      smtp_tls_session_cache_database = btree:${data_directory}/smtp_scache
      smtp_sasl_auth_enable = no
      smtp_use_tls=no
      smtp_sasl_password_maps = hash:/etc/postfix/smtp_auth
      myhostname = XXXXXXX.com
      alias_maps = hash:/etc/aliases
      alias_database = hash:/etc/aliases
      myorigin = /etc/mailname
      mydestination = XXXX.XXXX.com, XXXX.com, localhost.XXXXX.com, localhost
      relayhost = XXX.XXX.XXX.XXX
      mynetworks = 127.0.0.0/8 [::ffff:127.0.0.0]/104 [::1]/128 192.168.1.0/24
      mailbox_command = procmail -a "$EXTENSION"
      mailbox_size_limit = 0
      recipient_delimiter = +
      inet_interfaces = all
      smtpd_sasl_local_domain =
      #smtpd_sasl_auth_enable = yes
      smtpd_sasl_security_options = noanonymous
      smtpd_sasl_authenticated_header = yes
      broken_sasl_auth_clients = yes
      smtpd_recipient_restrictions = permit_mynetworks,permit_sasl_authenticated,reject_unauth_
    When checking the process list, postfix/smtpd runs as:
      smtpd -n smtp -t inet -u -c -o stress -v -v
    Any clues?
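
    Two hedged observations: the match_* lines are just debug tracing produced by the -v -v flags visible on the smtpd process (they usually come from master.cf), not a filter of their own; and "lost connection after MAIL" means the client went away or smtpd died mid-session rather than a policy reject. A quieter baseline makes the real failure easier to see:

      # remove the '-v -v' from the smtpd line in /etc/postfix/master.cf, then:
      postfix reload
      postconf -n        # the settings actually in effect, minus defaults
      postfix check      # permissions/ownership sanity check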

  • Sluggish Windows SBS 2003

    - by TomWilsonFL
    One of my customers has a Windows 2003 Small Business Server which at this point is basically the DC, DNS, Fileserver and Symantec Protection Manager. I have disabled Exchange because I moved their mail to Google Apps. The server is extremely sluggish when doing anything. It is most noticeable when a dialog box is open (say the System properties), and you try to change tabs. This is usually instant, but on this machine can take 3-5 seconds. What additional services / packages can I uninstall from this machine knowing that it is only performing the above roles? Will removing the "Small Business Server" package in Add / Remove Programs get rid of a few unnecessary things? Any other thoughts? P.S. I know Symantec Endpoint and the Protection Manager are hogs, but I have nothing to replace the solution with at the moment. Thanks, Tom UPDATE: I looked over the different performance metrics, but nothing stood out as a problem. One of my friends mentioned Symantec's log and temp files can get quite huge and slow things down, so I ran CCleaner on the machine and found close to 3 GB of Symantec "stuff." Removed that and now the machine is MUCH better. I am still unsure why the data just sitting there would cause such a slowdown. The drive is not even near full. The only thing I can imagine is that Symantec must have to run through this stuff now and then.

  • Unable to understand why Alfresco doesn't start on Tomcat

    - by Infernalsirius
    Hi all, I have a problem that I've been inspecting for a while now, googling and everything, but cannot begin to understand. I'm really not used to Java, even less Tomcat. So here it is. First, the setup: CentOS 5.3 on a virtualized server; Bitnami native Alfresco stack (Tomcat 5.5, MySQL 5, Java, Java JDK, JDBC). Content of catalina.log, since it's the shortest and where I found my first clue to what is going wrong:
      SEVERE: Error listenerStart
      Aug 27, 2009 5:32:58 PM org.apache.coyote.http11.Http11BaseProtocol init
      INFO: Initializing Coyote HTTP/1.1 on http-8080
      Aug 27, 2009 5:32:58 PM org.apache.catalina.startup.Catalina load
      INFO: Initialization processed in 229 ms
      Aug 27, 2009 5:32:58 PM org.apache.catalina.core.StandardService start
      INFO: Starting service Catalina
      Aug 27, 2009 5:32:58 PM org.apache.catalina.core.StandardEngine start
      INFO: Starting Servlet Engine: Apache Tomcat/5.5.25
      Aug 27, 2009 5:32:58 PM org.apache.catalina.core.StandardHost start
      INFO: XML validation disabled
      Aug 27, 2009 5:34:47 PM org.apache.catalina.core.StandardContext start
      SEVERE: Error listenerStart
      Aug 27, 2009 5:34:47 PM org.apache.catalina.core.StandardContext start
      SEVERE: Context [/alfresco] startup failed due to previous errors
      Aug 27, 2009 5:34:48 PM org.apache.coyote.http11.Http11BaseProtocol start
      INFO: Starting Coyote HTTP/1.1 on http-8080
      Aug 27, 2009 5:34:48 PM org.apache.jk.common.ChannelSocket init
      INFO: JK: ajp13 listening on /0.0.0.0:8009
      Aug 27, 2009 5:34:48 PM org.apache.jk.server.JkMain start
      INFO: Jk running ID=0 time=0/11 config=null
      Aug 27, 2009 5:34:48 PM org.apache.catalina.storeconfig.StoreLoader load
      INFO: Find registry server-registry.xml at classpath resource
      Aug 27, 2009 5:34:48 PM org.apache.catalina.startup.Catalina start
      INFO: Server startup in 110327 ms
      Aug 27, 2009 5:38:27 PM org.apache.coyote.http11.Http11BaseProtocol pause
      INFO: Pausing Coyote HTTP/1.1 on http-8080
      Aug 27, 2009 5:38:28 PM org.apache.catalina.core.StandardService stop
      INFO: Stopping service Catalina
      Aug 27, 2009 5:38:29 PM org.apache.coyote.http11.Http11BaseProtocol destroy
      INFO: Stopping Coyote HTTP/1.1 on http-8080
    There's the content of catalina.out; it seems to be a stack trace or application trace of the error, is that right? Catalina.out gist on github
    There is a 404 error telling me this: The requested resource (/alfresco/) is not available. That's it, I think.
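
    A hedged way to surface the real exception: on Tomcat 5.5, "SEVERE: Error listenerStart" swallows the underlying stack trace by default. The standard Tomcat FAQ recipe is a context-local logging.properties dropped into the webapp's WEB-INF/classes, which makes the failing listener print its error:

      # alfresco/WEB-INF/classes/logging.properties
      handlers = org.apache.juli.FileHandler, java.util.logging.ConsoleHandler
      org.apache.juli.FileHandler.level = FINE
      org.apache.juli.FileHandler.directory = ${catalina.base}/logs
      org.apache.juli.FileHandler.prefix = error-debug.
      java.util.logging.ConsoleHandler.level = FINE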

  • Linux Tuning for High Traffic JBoss Server with LDAP Binds

    - by Levi Stanley
    I'm configuring a high-traffic Linux server (RedHat) and running into a limit I haven't been able to track down. I need to be able to handle a sustained 300 requests per second throughput using Nginx and JBoss. The point of this server is to run checks on a user's account when that user signs in. Each request goes through Nginx to JBoss (specifically TorqueBox with JBoss AS7 with a Sinatra app) and then makes an LDAP request to bind that user and retrieve several attributes. It is during the bind that these errors occur. I'm able to reproduce this going directly to JBoss, so that rules out Nginx at least. I get a variety of error messages, though oddly JBoss stopped writing to the log file recently. It used to report errors about creating native threads. Now I just see "java.net.SocketException: Connection reset" and "org.apache.http.conn.HttpHostConnectException: Connection to http://my.awesome.server:8080 refused" as responses in jmeter. To the best of my knowledge, I have plenty of available file handles, processes, sockets, and ports, yet the issue persists. Unfortunately, I have very little experience tuning servers. I've found a couple of useful documents (Ipsysctl tutorial 1.0.4 and Linux Tuning), but those documents are a bit over my head, and just entering the configuration described in Linux Tuning doesn't fix my issue. Here are the configuration changes I've tried (webproxy is the user that runs Nginx and JBoss):
      /etc/security/limits.conf:
        webproxy soft nofile 65536
        webproxy hard nofile 65536
        webproxy soft nproc 65536
        webproxy hard nproc 65536
        root soft nofile 65536
        root hard nofile 65536
        root soft nproc 65536
        root hard nofile 65536
      First attempt, /etc/sysctl.conf:
        sysctl net.core.somaxconn = 8192
        sysctl net.ipv4.ip_local_port_range = 32768 65535
        sysctl net.ipv4.tcp_fin_timeout = 15
        sysctl net.ipv4.tcp_keepalive_time = 1800
        sysctl net.ipv4.tcp_keepalive_intvl = 35
        sysctl net.ipv4.tcp_tw_recycle = 1
        sysctl net.ipv4.tcp_tw_reuse = 1
      Second attempt, /etc/sysctl.conf:
        net.core.rmem_max = 16777216
        net.core.wmem_max = 16777216
        net.ipv4.tcp_rmem = 4096 87380 16777216
        net.ipv4.tcp_wmem = 4096 65536 16777216
        net.core.netdev_max_backlog = 30000
        net.ipv4.tcp_congestion_control=htcp
        net.ipv4.tcp_mtu_probing=1
    Any ideas what might be happening here? Or better yet, are there some good documentation resources designed for beginners?
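
    A hedged first check: limits.conf entries only take effect through PAM logins, so a JBoss started from an init script may still be running with the default 1024 file handles regardless of what limits.conf says, and "errors about creating native threads" also points at the nproc/thread ceiling. Reading the live process's limits settles it (on kernels that expose /proc/<pid>/limits):

      cat /proc/$(pgrep -u webproxy -f jboss | head -1)/limits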

  • Windows Task Scheduler fails at sending e-mail

    - by Marki
    The error is 2147746321. I can see in the mail server log that it tries, but the connection gets closed:
      Wed 2012-10-10 15:55:25: Session 990590; child 1
      Wed 2012-10-10 15:55:25: Accepting SMTP connection from [x:49161] to [y:25]
      Wed 2012-10-10 15:55:25: --> 220 Mdaemon; Wed, 10 Oct 2012 15:55:25 +0200
      Wed 2012-10-10 15:55:25: <-- EHLO x
      Wed 2012-10-10 15:55:25: --> 250-Hello x, pleased to meet you
      Wed 2012-10-10 15:55:25: --> 250-VRFY
      Wed 2012-10-10 15:55:25: --> 250-EXPN
      Wed 2012-10-10 15:55:25: --> 250-ETRN
      Wed 2012-10-10 15:55:25: --> 250-AUTH LOGIN
      Wed 2012-10-10 15:55:25: --> 250-8BITMIME
      Wed 2012-10-10 15:55:25: --> 250 SIZE 20971000
      Wed 2012-10-10 15:55:25: <-- AUTH LOGIN
      Wed 2012-10-10 15:55:25: --> 334 VX......
      Wed 2012-10-10 15:55:25: Connection closed
      Wed 2012-10-10 15:55:25: SMTP session terminated (Bytes in/out: 26/212)
    Googling does not reveal much except that it indeed "doesn't work", and Exchange pops up all over the place. This is not an Exchange server; I just want a plain and straight SMTP connection to work. How? (I have tried running the task as a normal user and as the SYSTEM account; no difference.)
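
    A hedged reading of that trace: the client starts AUTH LOGIN, the server answers 334 (the username prompt), and the client then drops the connection, which matches the widely reported limitation that the Task Scheduler "Send an e-mail" action has no working SMTP AUTH support. The usual workaround is a "Start a program" action with a scripted mail client, e.g. PowerShell's built-in cmdlet (addresses and host are placeholders):

      powershell.exe -Command "Send-MailMessage -From 'task@example.org' -To 'ops@example.org' -Subject 'job finished' -SmtpServer 'mail.example.org'"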

  • SQL Server 2008 second instance times out when logging in -- but only the first time?

    - by Kromey
    This is a strange one that has plagued me for a while now. When logging in to the second instance of SQL Server 2008 on one of our database servers, we get a timeout error:
      Cannot connect to servername\mssqlserver2.
      Additional information: Timeout expired. The timeout period elapsed prior to completion of the operation or the server is not responding. (Microsoft SQL Server)
    (This is the error message when trying to connect with Microsoft SQL Server Management Studio; other tools experience the same error, but of course say it in different ways.) Immediately re-attempting to log in works just fine, so whatever the cause, it's ephemeral! This happens regardless of user or authentication type (both Windows and SQL Server authentication methods are supported on this instance). What's even weirder is that the first instance on this server has never once demonstrated this problem. The server is a Windows Server 2008 R2 virtual server, hosted in Microsoft Hyper-V (the host is likewise Server 2008 R2). The server has 2 GB of RAM, and seems to regularly be using 90% of that; could low memory be the cause of this issue? I could see this second instance (which is not used very often yet) being swapped out to disk, and then taking too long to load back into memory to respond in time to the connection request, but I'd rather have more than just my own hunch before I go scheduling a downtime for this server (the first instance is used regularly) and then just throwing extra resources at it in the blind hope that the problem goes away.
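
    A hedged experiment that avoids downtime: named instances are located through the SQL Browser service over UDP 1434, and that lookup (plus a cold second instance) is a common source of first-try-only timeouts. Connecting straight to the instance's TCP port skips the browser; if this form never times out, the lookup rather than the engine is the slow hop (the port number is a placeholder; the real one is in SQL Server Configuration Manager):

      sqlcmd -S tcp:servername,50002 -E -Q "SELECT @@SERVERNAME"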

  • Apache2 Segmentation fault with wsgi_module

    - by a coder
    Apache 2.2.3 is running as an existing web server under RHEL 5. I am attempting to set up Trac using wsgi_module. RHEL 5 ships with Python 2.4, so in order to use the current version of Trac (1.0) I needed to install it with easy_install-2.6. Trac works with the default mod_python, however users strongly discourage using this module as it is officially dead. Using RHEL's package manager, I downloaded/installed python26-mod_wsgi.so. I backed up httpd.conf, then made the following additions:
      LoadModule wsgi_module modules/python26-mod_wsgi.so
      #...#
      WSGIScriptAlias /trac /www/virtualhosts/trac/deploy/cgi-bin/trac.wsgi
      <Directory /www/virtualhosts/trac/deploy/cgi-bin>
        WSGIApplicationGroup %{GLOBAL}
        Order deny,allow
        Allow from all
      </Directory>
    Next I moved trac.conf to trac.conf.bak (it contains the mod_python calls). I tested the configuration using:
      apachectl configtest
    Syntax is OK. So I reloaded the server config using:
      service httpd reload
    At this point, all virtual-hosted sites stopped responding. I restored my backup copy of httpd.conf, reloaded the server config, and the virtual-hosted sites are being served again. A quick look at the httpd error_log shows:
      [Mon Oct 08 10:20:04 2012] [info] mod_wsgi (pid=28282): Initializing Python.
      [Mon Oct 08 10:20:04 2012] [info] mod_wsgi (pid=28280): Attach interpreter ''.
      [Mon Oct 08 10:20:04 2012] [debug] proxy_util.c(1817): proxy: grabbed scoreboard slot 0 in child 28283 for worker proxy:reverse
      [Mon Oct 08 10:20:04 2012] [debug] proxy_util.c(1836): proxy: worker proxy:reverse already initialized
      [Mon Oct 08 10:20:04 2012] [debug] proxy_util.c(1930): proxy: initialized single connection worker 0 in child 28283 for (*)
      [Mon Oct 08 10:20:04 2012] [info] mod_wsgi (pid=28283): Initializing Python.
      [Mon Oct 08 10:20:04 2012] [notice] child pid 28249 exit signal Segmentation fault (11)
      [Mon Oct 08 10:20:04 2012] [notice] child pid 28250 exit signal Segmentation fault (11)
      [Mon Oct 08 10:20:04 2012] [notice] child pid 28251 exit signal Segmentation fault (11)
    There are many similar lines; this is just a snip of the log file. Suggestions on what could be going on to cause the segmentation faults?
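
    A hedged but classic suspect: loading mod_python (linked against the system Python 2.4) and python26-mod_wsgi (linked against Python 2.6) into the same httpd puts two incompatible libpython builds in one process, which segfaults in exactly this pattern. Moving trac.conf aside removes the Trac handlers but not necessarily the LoadModule for mod_python:

      grep -r 'mod_python' /etc/httpd/conf /etc/httpd/conf.d
      httpd -M 2>&1 | grep -Ei 'python|wsgi'   # what is actually loaded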

  • HAProxy "503 Service Unavailable" for webserver running on a KVM virtual machine

    - by Menda
    I'm setting up a server with KVM (IP 192.168.0.100) and have created inside of it one virtual machine using network bridging at 192.168.0.194. This virtual machine has an nginx instance running, which I can access from the server or from any computer in the internal network just by typing http://192.168.0.194 in the browser. However, when I configure HAProxy on the same server that hosts KVM, its status page always shows the virtual machine as "DOWN". If I try http://localhost from the server, it should be the same as going to http://192.168.0.194. My goal is to build a reverse proxy, but I tried this little example and it won't work. What am I doing wrong? This is my config file on the server:
      # /etc/haproxy/haproxy.cfg
      global
          maxconn 4096
          user haproxy
          group haproxy
          daemon
      defaults
          log global
          mode http
          option httplog
          option dontlognull
          retries 3
          option redispatch
          maxconn 2000
          contimeout 5000
          clitimeout 50000
          srvtimeout 50000
      listen ServerStatus *:8081
          mode http
          stats enable
          stats auth haproxy:haproxy
      listen Server *:80
          mode http
          balance roundrobin
          cookie JSESSIONID prefix
          option httpclose
          option forwardfor
          option httpchk HEAD /check.txt HTTP/1.0
          server mv1 192.168.0.194:80 cookie A check
    Thanks.
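
    A hedged first check: with "option httpchk HEAD /check.txt HTTP/1.0", HAProxy marks mv1 DOWN unless nginx answers that exact probe with a 2xx/3xx status. If /check.txt does not exist on the nginx side, the backend stays DOWN even though the site itself loads fine. The probe is easy to replay by hand:

      curl -I http://192.168.0.194/check.txt   # must not return 404 for the check to pass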

  • Is there a way to keep the Wacom Control Panel from failing to show in certain cases?

    - by S.gfx
    This is the problem: suddenly, you double-click the Wacom tablet settings icon on the desktop, and it won't show the dialog. It appears to be loaded, as it's shown down in the Windows taskbar. I suspect it's caused by a change of resolution or some setting; maybe it suddenly sets the origin of the panel dialog at some 3000 pixels to the right or something. I have dug through that wacom_tablet.dat file to see if I could fix it by changing some value there, but it looks more like a log, a history, than an ini for settings... And anyway, that does not solve it. My solution is to always keep a very complete settings file backed up to restore (with the Wacom utility for this, which in previous driver versions did not exist) when this happens, but that is counterproductive: you change the settings per project (and per software). I have seen it happen with a Cintiq 12", an Intuos4 A6, Graphires, an Intuos 1. Is it just me, doing something weird every time? I doubt it; this is normal use. I am amazed that apparently no one else has had this problem (or nobody asked), since it happens often in typical use. Maybe it's because I am setting a shortcut on the desktop? Weird, as it works perfectly until some random moment... (I have dug through Wacom's forums and FAQs, then here, but found nothing related to it... Neither in "related questions".) This happens on Windows XP, 7, etc. It has done so for years in my experience, and several times at work, which is worse.

  • Why are external domains appearing in my apache logs?

    - by Johan
    I've got several log entries that refer to an external domain, mainly a Russian search engine (http://www.yandex.ru/). How are these appearing in my logs?
      82.146.58.53 - - [10/Jun/2010:00:49:11 +0000] "GET http://www.yandex.ru/ HTTP/1.0" 200 8859 "http://www.yandex.ru/" "Opera/9.80 (Windows NT 5.1; U; ru) Presto/2.5.22 Version/10.50"
      82.146.59.209 - - [10/Jun/2010:01:54:10 +0000] "GET http://www.yandex.ru/ HTTP/1.0" 200 8859 "http://www.yandex.ru/" "Mozilla/5.0 (Windows; U; Windows NT 5.1; ru; rv:1.9.2.2) Gecko/20100316 Firefox/3.6.2"
      82.146.41.7 - - [10/Jun/2010:02:55:34 +0000] "GET http://www.yandex.ru/ HTTP/1.0" 200 8859 "http://www.yandex.ru/" "Mozilla/5.0 (Windows; U; Windows NT 6.1; en-US) AppleWebKit/532.5 (KHTML, like Gecko) Chrome/4.1.249.1045 Safari/532.5"
      125.45.109.166 - - [09/Jun/2010:11:04:17 +0000] "GET http://proxyjudge1.proxyfire.net/fastenv HTTP/1.1" 404 1010 "-" "Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1)"
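
    A hedged explanation: a request line containing a full URL ("GET http://www.yandex.ru/ HTTP/1.0") is a client treating your server as a forward proxy; these entries and the proxyjudge probe are bots scanning for open proxies, not visits from Yandex itself. Worth confirming that forward proxying is off and seeing what such a request actually gets back (your server's IP is a placeholder):

      grep -ri 'ProxyRequests' /etc/apache2 /etc/httpd 2>/dev/null   # must be Off (the default)
      curl -x http://your.server.ip:80 http://example.com/ -I        # replays the probe through your server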
