Search Results

Search found 17621 results on 705 pages for 'just my correct opinion'.


  • SQL Server 2008 Cluster Installation - First network name always fails

    - by boflynn
    I'm testing failover clustering in Windows Server 2008 to host a SQL Server 2008 installation, using this installation guide. My base cluster is installed and working properly, as is the clustered DTC service. However, when it comes time to install SQL Server, my first attempt always fails with the same message and seems to "taint" the network name. For example, with my previous cluster attempt, I was installing SQL Server as VSQL. After approximately 15 installation attempts and trying to resolve the errors (e.g. changing domain accounts for SQL, setting SPNs, etc.), I typoed the network name as VQSL and the installation worked. Similarly, on my current cluster I tried installing with the SQL service named PROD-C1-DB and got the same errors as last time until I changed the name to anything else (e.g. PROD-C1-DB1, SQL, TEST), at which point the install worked. It will even install to VSQL now. While testing, my install routine was:

    1. Run setup.exe from patched media, selecting appropriate options
    2. After the install fails, choose "Remove node from a SQL Server failover cluster" and remove the single, failed node
    3. Attempt to diagnose the problem, inspect event logs, etc.
    4. Delete the computer account that was created for the SQL service from Active Directory
    5. Delete the MSSQL10.MSSQLSERVER folder from the shared data drive

    The error message I receive from the SQL Server installer is:

        The following error has occurred: The cluster resource 'SQL Server' could not be brought online. Error: The group or resource is not in the correct state to perform the requested operation. (Exception from HRESULT: 0x8007139F)

    Along with hundreds of the following errors in the Application event log:

        [sqsrvres] checkODBCConnectError: sqlstate = 28000; native error = 4818; message = [Microsoft][SQL Server Native Client 10.0][SQL Server]Login failed for user 'NT AUTHORITY\ANONYMOUS LOGON'.

    System configuration notes:

    - Windows Server 2008 Enterprise Edition x64
    - SQL Server 2008 Enterprise Edition x64 using slipstreamed SP1+CU1 media
    - Dell PowerEdge servers
    - Fibre attached storage
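
    The 'NT AUTHORITY\ANONYMOUS LOGON' failures are the classic signature of Kerberos not finding a usable SPN for the clustered network name. A hedged sketch of checking and registering one by hand, assuming a hypothetical service account COMPANY\sqlsvc and the VSQL network name (substitute your own domain, account, and port):

        REM List the SPNs currently registered to the SQL service account
        setspn -L COMPANY\sqlsvc

        REM Register the MSSQLSvc SPN for the clustered network name
        setspn -A MSSQLSvc/VSQL.company.com:1433 COMPANY\sqlsvc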

    Read the article

  • configure a Cisco ASA to use MS-CHAP v2 for RADIUS authentication

    - by DrStalker
    Cisco ASA 5505 8.2(2), Windows 2003 AD server. We want to configure our ASA (10.1.1.1) to authenticate remote VPN users through RADIUS on the Windows AD controller (10.1.1.200). We have the following entry on the ASA:

        aaa-server SYSCON-RADIUS protocol radius
        aaa-server SYSCON-RADIUS (inside) host 10.1.1.200
         key *****
         radius-common-pw *****

    When I test a login using the account COMPANY\username I see the user's credentials are correct in the security log, but I get the following in the Windows system logs:

        User COMPANY\myusername was denied access.
        Fully-Qualified-User-Name = company.com/CorpUsers/AU/My Name
        NAS-IP-Address = 10.1.1.1
        NAS-Identifier = <not present>
        Called-Station-Identifier = <not present>
        Calling-Station-Identifier = <not present>
        Client-Friendly-Name = ASA5510
        Client-IP-Address = 10.1.1.1
        NAS-Port-Type = Virtual
        NAS-Port = 7
        Proxy-Policy-Name = Use Windows authentication for all users
        Authentication-Provider = Windows
        Authentication-Server = <undetermined>
        Policy-Name = VPN Authentication
        Authentication-Type = PAP
        EAP-Type = <undetermined>
        Reason-Code = 66
        Reason = The user attempted to use an authentication method that is not enabled on the matching remote access policy.

    My assumption is that the ASA is using PAP authentication instead of MS-CHAP v2: the credentials are confirmed and the proper Remote Access Policy is being used, but this policy is set to only allow MS-CHAP v2. What do we need to do on the ASA to make it use MS-CHAP v2? In the ASDM GUI the "Microsoft CHAP v2 compatible" tickbox is enabled, but I don't know what this corresponds to in the config.
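
    A hedged sketch of the piece that is often missing on the ASA side: with RADIUS, the ASA only negotiates MS-CHAP v2 when password management is enabled on the tunnel group; otherwise it falls back to PAP, which matches the log above. The tunnel-group name below is a placeholder:

        tunnel-group VPNUSERS general-attributes
         authentication-server-group SYSCON-RADIUS
         password-management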

    Read the article

  • How do I leverage Bitlocker cmdlets?

    - by user1418882
    This article hints at being able to unlock a BitLocker drive using:

        Unlock-BitLocker -MountPoint -Password

    However, I know diddly squat about PowerShell and how to use the PowerShell cmdlets to do what I want to do. So, how do I use the above to do something like the following?

        Unlock-BitLocker -MountPoint D:\ -Password "password"

    Currently about as much as I know how to do is start PowerShell, and that's it. I don't want to learn masses of PowerShell to get to the point where I can do this. All I need to know is enough to execute the commands that are pointed out in the first link. So far, if I paste the following into the PowerShell prompt:

        Unlock-BitLocker -MountPoint D:\ -Password "password"

    I get the following error:

        The term 'Unlock-BitLocker' is not recognized as the name of a cmdlet, function, script file, or operable program. Check the spelling of the name, or if a path was included, verify that the path is correct and try again.
        At line:1 char:17
        + Unlock-BitLocker <<<< -MountPoint D:\ -Password "password"
            + CategoryInfo          : ObjectNotFound: (Unlock-BitLocker:String) [], CommandNotFoundException
            + FullyQualifiedErrorId : CommandNotFoundException

    This is most likely because I don't have any clue about how the commands on the initially linked page work in a PowerShell context. This is so that I can answer my own question here: Bitlocker and scheduled task (powershell) script to unlock non-system drive
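
    A minimal sketch, with two assumptions worth stating: Unlock-BitLocker ships with the BitLocker PowerShell module on Windows 8 / Server 2012 and later (it does not exist on older systems, which would explain the CommandNotFoundException), and its -Password parameter takes a SecureString rather than a plain string:

        # Load the BitLocker module, then unlock D: with a password
        Import-Module BitLocker
        $pw = ConvertTo-SecureString "password" -AsPlainText -Force
        Unlock-BitLocker -MountPoint "D:" -Password $pw

    On systems without the module, the manage-bde utility performs the same operation and prompts for the password:

        manage-bde -unlock D: -Password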

    Read the article

  • How to set up the CNAME in DNS zone record to work with Unbounce

    - by Lirik
    I'm trying to run split testing on some landing pages I "designed" with Unbounce, but it requires that I set the CNAME record for my domain/sub-domain, and I'm having trouble figuring out the right way to do it. My host is Arvixe (www.arvixe.com) and their customer support has failed to help me for the past 5 days (I spoke to them multiple times). I followed the directions for setting the CNAME record and was able to set it, but I'm consistently unable to verify that it is set up correctly. I followed the instructions on Unbounce to verify the CNAME record for my sub-domain (beta.devboost.com) and here are the results:

        No records found
        reverse lookup  smtp diag  port scan  blacklist
        Reported by ns1.SNARE.arvixe.com on Thursday, November 10, 2011 at 5:49:57 PM (GMT-6)

    Here is my DNS zone record from the control panel of my host (last record, CNAME unbouncepages.com): [screenshot omitted] Is there something wrong with my DNS zone record? What's the right way to do this? Update: I also have a CNAME record for beta in my root domain (devboost.com): [screenshot omitted] I've updated my sub-domain record now: I've removed most of the other DNS records and I've removed the beta label for the CNAME record: [screenshot omitted] Is that correct? Is there anything else I need to do?
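
    For reference, a minimal sketch of what the relevant entry usually looks like in BIND-style zone syntax, assuming unbouncepages.com is the target Unbounce specifies (check their dashboard for the exact hostname):

        ; beta.devboost.com points at Unbounce; note the trailing dot
        beta    IN    CNAME    unbouncepages.com.

    A common pitfall is omitting the trailing dot, which makes the record resolve to unbouncepages.com.devboost.com instead.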

    Read the article

  • The file STDOLE2.TLB cannot be found or contains a Visual Basic for Applications library that is not valid

    - by Jim Birchall
    In Microsoft Project 2007 Professional, if I select Tools > Macros > Security, I get the message:

        The file "STDOLE.TLB" cannot be found or contains a Visual Basic for Applications library that is not valid. Verify that the file name is correct, and try again. If the Visual Basic for Applications library is invalid, reinstall Project.

    I am a software developer and I have developed an Add-In for MS Project using Visual Studio 2005, with all the problems that entails. As such, my machine is configured with both Project 2003 Professional and Project 2007 Professional. I only noticed this error when trying to debug my Add-In. The Add-In loads and draws the menu, but when I click on the menu option I receive this error (which is also generated by the Tools > Macros > Security option mentioned above). I have tried repairing the Office installations and uninstalling everything and re-installing it all over again, but after several hours I still get the same problem. Does anyone have any idea how to resolve this? Some method of finding out what type libraries are registered with STDOLE2.TLB may help if I can identify what is causing the problem. A way of manually unregistering the nasty library would also be helpful. My machine is configured as follows:

    - Windows 7 Ultimate x64
    - Project 2003 Professional
    - Office 2007 Ultimate
    - Project 2007 Professional
    - Visio 2007 Professional
    - Visual Studio 2005 Team Edition for Developers
    - Visual Studio 2005 Tools for Office Second Edition
    - Visual Studio 2008 Professional

    I also installed the MS Project 2010 Beta to test my Add-In against, but have since uninstalled it. I have a suspicion that this may have caused the problem, but I cannot be sure.
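
    One hedged way to inspect type library registrations is the registry, since they live under HKCR\TypeLib; stdole2.tlb normally owns the OLE Automation entry {00020430-0000-0000-C000-000000000046} (GUID stated from memory, so verify it on your machine):

        REM Search the type library registrations for stdole entries
        reg query HKCR\TypeLib /s /f stdole

        REM Inspect the OLE Automation (stdole2.tlb) registration directly
        reg query "HKCR\TypeLib\{00020430-0000-0000-C000-000000000046}" /s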

    Read the article

  • imapsync - Authentication failed

    - by Touff
    I've deployed many Google Apps accounts and have used imapsync a number of times to migrate accounts to Google Apps. This time, however, no matter what I try, imapsync refuses to work, claiming my credentials are incorrect - I've checked them time and time again and they are 100% correct. On Ubuntu 12, built from source, my command is:

        imapsync --host1 myserver.com --user1 [email protected] --password1 mypassword1 -ssl1 --host2 imap.gmail.com --user2 [email protected] --password2 mypassword2 -ssl2 -authmech2 PLAIN

    Full output from the command:

        get options: [1] PID is 21316 $RCSfile: imapsync,v $ $Revision: 1.592 $ $Date:
        With perl 5.14.2 Mail::IMAPClient 3.35
        Command line used: /usr/bin/imapsync --debug --host1 myserver.com --user1 [email protected] --password1 mypassword1 -ssl1 --host2 imap.gmail.com --user2 [email protected] --password2 mypassword2 -ssl2 -authmech2 PLAIN
        Temp directory is /tmp
        PID file is /tmp/imapsync.pid
        Modules version list: Mail::IMAPClient 3.35 IO::Socket 1.32 IO::Socket::IP ? IO::Socket::INET 1.31 IO::Socket::SSL 1.53 Net::SSLeay 1.42 Digest::MD5 2.51 Digest::HMAC_MD5 1.01 Digest::HMAC_SHA1 1.03 Term::ReadKey 2.30 Authen::NTLM 1.09 File::Spec 3.33 Time::HiRes 1.972101 URI::Escape 3.31 Data::Uniqid 0.12 IMAPClient 3.35
        Info: turned ON syncinternaldates, will set the internal dates (arrival dates) on host2 same as host1.
        Info: will try to use LOGIN authentication on host1
        Info: will try to use PLAIN authentication on host2
        Info: imap connexions timeout is 120 seconds
        Host1: IMAP server [SERVER1] port [993] user [USER1]
        Host2: IMAP server [imap.gmail.com] port [993] user [USER2]
        Host1: * OK [CAPABILITY IMAP4rev1 LITERAL+ SASL-IR LOGIN-REFERRALS ID ENABLE AUTH=PLAIN AUTH=LOGIN] Dovecot ready.
        Host1: SERVER1 says it has CAPABILITY for AUTHENTICATE LOGIN
        Host1: success login on [SERVER1] with user [USER1] auth [LOGIN]
        Host2: * OK Gimap ready for requests from MY-VPS
        Host2: imap.gmail.com says it has CAPABILITY for AUTHENTICATE PLAIN
        Failure: error login on [imap.gmail.com] with user [USER2] auth [PLAIN]: 2 NO [AUTHENTICATIONFAILED] Invalid credentials (Failure)

    I have tried -authmech2 LOGIN as well, which returns:

        Host2: imap.gmail.com says it has NO CAPABILITY for AUTHENTICATE LOGIN
        Failure: error login on [imap.gmail.com] with user [[email protected]] auth [LOGIN]: 2 NO [AUTHENTICATIONFAILED] Invalid credentials (Failure)

    If anyone can shed some light on this I would greatly appreciate it.
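
    A hedged way to take imapsync out of the picture is to try the same credentials against Gmail's IMAP endpoint by hand; Google also returns [AUTHENTICATIONFAILED] for otherwise-valid passwords when IMAP is disabled for the account or a security check is blocking the sign-in:

        # Open a TLS session to Gmail's IMAP server and attempt a manual login
        openssl s_client -connect imap.gmail.com:993 -crlf -quiet
        # Then type, substituting the real credentials:
        a1 LOGIN [email protected] mypassword2
        a2 LOGOUT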

    Read the article

  • ADODB DB2 DSN using IBMDADB2 provider

    - by Eli Sand
    I have a very bizarre issue trying to establish a working connection to an IBM DB2 server from Classic ASP using ADODB. On my development server I am running IIS and have a local instance of DB2 running. When I create a system DSN on this server and try to connect to it with ADODB, I have to specify Provider=IBMDADB2; in my connection string along with the DSN name - without the provider, my connection won't work.

    On my production servers, I have one running IIS and a second running an instance of DB2. When I create a system DSN on the production IIS server and try to connect to it with ADODB, I cannot specify the provider; otherwise it throws an uncatchable error in an external module (I assume it's referring to the DB2 module) if I try to do anything past getting a connection (oddly, opening the connection itself doesn't throw an error - but running a query does). If I remove Provider=IBMDADB2; from the connection string (so I just have DSN=some_name), it works fine.

    On both systems I can verify through the ODBC connection manager that the DSNs work and can connect to the databases, and on both systems I have made sure to set the correct (only) instance of DB2 as the default. Can anyone tell me why I have to have different connection strings for the development and production servers? I would like to use the same connection string for both environments if at all possible. If that means specifying a provider for both or for neither, I don't care which - I would just like to know what's going on and how to fix it.
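
    For comparison, a minimal Classic ASP sketch of the two connection-string variants in question (DSN name and credentials are placeholders). One hedged explanation for the asymmetry: Provider=IBMDADB2 goes through IBM's OLE DB provider, while a bare DSN= string goes through the MSDASQL OLE DB-to-ODBC bridge, so the two paths load different client libraries on each box:

        <%
        Dim conn
        Set conn = Server.CreateObject("ADODB.Connection")

        ' Variant 1: explicit IBM OLE DB provider (works on the dev box)
        conn.Open "Provider=IBMDADB2;DSN=MYDB2DSN;Uid=db2user;Pwd=secret;"

        ' Variant 2: plain ODBC DSN (works on the production box)
        ' conn.Open "DSN=MYDB2DSN;Uid=db2user;Pwd=secret;"
        %>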

    Read the article

  • How-To Configure Weblogic, Agile PLM and an F5 LTM

    - by Brian Dunbar
    Agile, WebLogic, and an F5 walk into a bar... I've got Agile PLM v9.3 running on WebLogic with two managed servers, and an F5 BIG-IP LTM. We're upgrading from Agile v9.2.1.4 running on OAS. The problem is that while the Windows client works fine, the Java client does not. My setup is identical to the one outlined in F5's doc: http://www.f5.com/pdf/deployment-guides/bea-bigip45-dg.pdf When I launch the Java client it returns this error: "Server is not valid or is unavailable." Oracle claims Agile PLM is set up correctly, but won't comment on the specifics of the load balancer. F5 reports the configuration is correct, but can't comment on the specifics of the application. I am merely the guy in a vortex of finger-pointing who wants my application to work. It's that or give up on WLS and move back to OAS, which has its own problems, but at least we know how it works. Any ideas?

    Read the article

  • How to disable Windows Server 2008 timestamp response

    - by Cal
    Posted this question on Stack Overflow but was instructed to post it here: I was using Rapid7's Nexpose to scan one of our web servers (Windows Server 2008), and got a vulnerability for timestamp response. According to Rapid7, timestamp response should be disabled: http://www.rapid7.com/db/vulnerabilities/generic-tcp-timestamp So far I have tried several things:

    1. Edit the registry: add a "Tcp1323Opts" value to HKLM\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters and set it to 0. http://technet.microsoft.com/en-us/library/cc938205.aspx
    2. Use this command: netsh int tcp set global timestamps=disabled
    3. Tried this PowerShell command: Set-NetTCPSetting -SettingName InternetCustom -Timestamps disabled (got error: Set-netTCPsetting : The term 'Set-netTCPsetting' is not recognized as the name of a cmdlet, function, script file, or operable program. Check the spelling of the name, or if a path was included, verify that the path is correct and try again.)

    None of the above attempts was successful; after a re-scan we still got the same alert. Rapid7 suggested using a firewall capable of blocking it, but we want to know if there is a setting on Windows to achieve it. Is it through a specific port? If yes, what is the port number? If not, could you suggest a 3rd-party firewall that is capable of blocking it? Thank you very much.

    Read the article

  • Timeline chart in Excel

    - by Axarydax
    Hi, I'd like to see a timeline of events from a database in a "timeline chart" that should look like this: http://i46.tinypic.com/sw3dj5.png - I've made a small C# program that paints this onto a Bitmap, but that isn't the way to go. I have input data with 3 fields:

        StartX   EndX    Y
        2596     15008   1
        5438     6783    2
        5450     5453    4
        5456     5459    4
        5462     5466    4
        5470     5474    4
        5477     5657    5
        5662     5665    4
        5668     5671    4

    As the picture shows, for each row I'd like to have a line from StartX to EndX at a height of Y. A stacked bar chart almost solves my problem, but I don't want a new line on the chart for every row: I have thousands of rows, and I'd like to have the X axis as the time axis and view which events happened simultaneously (Y is the type of the event). The image (http://i46.tinypic.com/sw3dj5.png) I generated with a simple C# program shows that the event SYSTEM was active all the time, and the events TECH and BREAK were almost exclusive, but had some overlaps. I'd like to at least know the correct direction to take; I'm lost in the multitude of Excel chart types. Thanks.
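
    One hedged direction: an XY scatter with horizontal error bars can draw arbitrary horizontal segments without one chart row per event. Plot X = StartX and Y = Y, add a helper column Duration = EndX - StartX, e.g.:

        StartX   EndX    Y   Duration (=EndX-StartX)
        2596     15008   1   12412
        5438     6783    2   1345

    Then select the series, add error bars, set the horizontal error bar to Plus / Custom, and point the positive value at the Duration column; a thick error-bar line style makes the segments read as timeline bars.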

    Read the article

  • Django running on Apache+WSGI and Apache SSL proxying

    - by Lessfoe
    Hi all, I'm trying to rewrite all requests for my Django server running on Apache+WSGI (inside my local network), configured as in the WSGI wiki how-to except that I set up a virtual host for it. The server from which I want to rewrite requests is another Apache server listening on port 80. I can get it to work well if I don't require SSL on the connection. But I need all requests to the Django server encrypted with SSL, so I generally use this directive to achieve that (on my public web server):

        Alias /dirname "/var/www/dirname"
        SSLVerifyClient none
        SSLOptions +FakeBasicAuth
        SSLRequireSSL
        AuthName "stuff name"
        AuthType Basic
        AuthUserFile /etc/httpd/djangoserver.passwd
        require valid-user
        # redirect all requests to django.test:80
        RewriteEngine On
        RewriteRule (.*)$ http://django.test/$1 [P]

    This configuration works if I try to load a specific page through the external server from my browser. It does not work when clicking my Django application's URLs (even though the URL looks correct when I hover over it). The URL my public server tries to serve uses http (instead of https), and the directory "dirname" I specified in my Apache configuration disappears, so it says the page was not found. I think it depends on Django and its WSGI handler. Has anybody run into the same problem? PS: I have already tried to modify the WSGI script. I'm using Django 1.0.3, Apache 2.2 on a Fedora 10 (inside), and Apache 2.2 on the public server. Thanks in advance for your help. Fab
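
    A hedged sketch of the usual fix for both symptoms: keep the /dirname prefix when proxying, let mod_proxy rewrite redirect headers on the way back, and tell the backend the original request was HTTPS (the directives are standard Apache; whether your WSGI handler honours X-Forwarded-Proto is an assumption about your Django setup):

        RewriteEngine On
        RewriteRule ^/dirname/(.*)$ http://django.test/dirname/$1 [P]
        ProxyPassReverse /dirname/ http://django.test/dirname/
        # Requires mod_headers; lets the backend build https:// URLs
        RequestHeader set X-Forwarded-Proto "https"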

    Read the article

  • Server 2008 RAID5 resynching

    - by benpage
    I built a W2K8 R2 server last weekend and created a RAID 5 array using Disk Management, 5x 1TB drives. For the next 60 hours the status said 'Resynching (X%)' as it built the array. During this time I did read from and write to the drive (slowly), and once the rebuild was complete speeds were quite fast. I was copying over some data from a USB hard drive overnight, and the machine crashed (looking at the error, I believe it was the driver for the USB drive), so the RAID 5 went back to resynching, since the machine was not shut down properly. The issue is, it's been about 48 hours since that started, and I'm happy to say it will take roughly 60 hours again to resynch; however, all this time in Disk Management the status has only said 'Resynching', not 'Resynching (X%)'. The hard drive light is going nuts, and reads/writes are slow again, so I'm assuming it is actually resynching. The question is: is that a correct assumption? And why is Disk Management not telling me what percentage is done? Is this normal behaviour for a not-properly-shut-down RAID 5 array?

    Read the article

  • Connectivity with SQL Server Express 2008 R2 and SQL Server 2000 on the same machine

    - by Jim R
    At first glance this may seem a duplicate of "Installing both SQL Server 2000 and SQL Server 2008 on the same machine", but it is not. I have SQL Server 2000 and SQL Server 2008 R2 installed on the same machine and working fine. My problem lies with connecting to the 2008 R2 server from a remote machine. My connectivity needs to be TCP. The legacy installation of SQL 2000 uses the default port of 1433. The named instance is by default configured to use 'Shared Memory' and is working fine. When I configured the 2008 R2 server to use 1433 (I did not think that through), the service refused to start because 1433 was already in use by the legacy SQL 2000 default instance. Doh! What I want to do is have both servers available simultaneously via TCP. Both servers need not be on the same port, but if I cannot run them on the same port, then how do I configure the clients? Is there not some kind of proxy available that can monitor the 1433 port and pass the request through to the correct SQL instance by name? Is this capability built into SQL Server already? Thanks, Jim
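
    A hedged sketch of the usual arrangement: give the 2008 R2 instance a fixed, unused port in SQL Server Configuration Manager, then either name the port explicitly in the client or let the SQL Server Browser service resolve the instance name (it listens on UDP 1434 and plays exactly the "proxy by name" role described above). The server and instance names below are placeholders:

        REM Explicit port: no Browser service needed
        sqlcmd -S tcp:myserver,2433

        REM Instance name: resolved via SQL Server Browser on UDP 1434
        sqlcmd -S myserver\R2INSTANCE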

    Read the article

  • Configure Postfix to send/relay emails to Gmail (smtp.gmail.com) via port 587

    - by tom smith
    Hi. Using CentOS 5.4, with Postfix. I can do:

        mail [email protected]
        subject: blah
        test
        .
        Cc:

    and the msg gets sent to Gmail, but it resides in the spam folder, which is to be expected. My goal is to be able to generate email msgs and have them appear in the regular Inbox! As I understand Postfix/Gmail, it's possible to configure Postfix to send/relay mail via an authenticated/valid user using port 587, which would no longer have the mail be seen as spam. I've tried a number of parameters based on different sites/articles from the 'net, with no luck. Some of the articles actually seem to conflict with other articles! I've also looked over the Serverfault postings on this, but I'm still missing something... Also talked to a few people on IRC (CentOS/Postfix) and still have questions. So, I'm turning to Serverfault, once again! If there's someone who's managed to accomplish this, would you mind posting your main.cf, sasl_passwd, and any other conf files that you use to get this working? If I can review your config files, I can hopefully see where I've screwed up and figure out how to correct the issue. Thanks for reading this, and any help/pointers you provide! -tom
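
    Not a known-good config for this exact box, but a hedged sketch of the arrangement that usually does the trick, assuming plain password auth against smtp.gmail.com and a CentOS-style CA bundle path (the Gmail account must allow SMTP auth):

        # /etc/postfix/main.cf (additions)
        relayhost = [smtp.gmail.com]:587
        smtp_sasl_auth_enable = yes
        smtp_sasl_password_maps = hash:/etc/postfix/sasl_passwd
        smtp_sasl_security_options = noanonymous
        smtp_use_tls = yes
        smtp_tls_CAfile = /etc/pki/tls/certs/ca-bundle.crt

        # /etc/postfix/sasl_passwd
        [smtp.gmail.com]:587 [email protected]:yourpassword

    Followed by 'postmap /etc/postfix/sasl_passwd' and a Postfix reload. Note that relaying through Gmail typically rewrites the sender to the authenticated account.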

    Read the article

  • Apache2 with SSL and mod_jk on SUSE Linux Enterprise | Apache always starts SSL disabled

    - by Shaakunthala
    I have installed Apache2 (with mod_ssl enabled) on SUSE Linux Enterprise Server 11 (x86_64) (patchlevel 1), using YaST. Once installed, I tested whether everything worked so far. SSL also worked fine. Just 'apache2ctl start' was enough to get everything working. Then I installed mod_jk and applied the following configuration changes to make it work.

    /etc/sysconfig/apache2 (added JK module):

        APACHE_MODULES="... ... ... ... ...jk"

    /etc/apache2/httpd.conf (included mod_jk.conf):

        Include /etc/apache2/mod_jk.conf

    /etc/apache2/mod_jk.conf (new file):

        JkLogFile /var/log/apache2/mod_jk.log
        JkWorkersFile /etc/apache2/mod_jk/workers.properties
        JkShmFile /etc/apache2/mod_jk/mod_jk.shm
        # Set the jk log level [debug/error/info]
        JkLogLevel info
        # Select the timestamp log format
        JkLogStampFormat "[%a %b %d %H:%M:%S %Y] "

    The mod_jk.log and mod_jk.shm files were also created.

    /etc/apache2/mod_jk/workers.properties (new file):

        worker.list=jira
        worker.jira.type=ajp13
        worker.jira.host=127.0.0.1
        worker.jira.port=8009

    Once everything was done, I restarted Apache using the following command:

        apache2ctl restart

    Then I observed that SSL was not working. When checked with telnet, port 443 is not open. In listen.conf, if I specify port 443 bypassing the 'IfDefine' and 'IfModule' conditions, then SSL works properly. This suggests the 'SSL' flag is not being passed to Apache. I did not make this a persistent change, as I thought it might not be correct practice. I checked /etc/sysconfig/apache2 to see if this had been altered, but the flag is there. Although this flag is enabled, Apache won't start with SSL support:

        APACHE_SERVER_FLAGS="SSL"

    Finally, I had to start Apache using the following command:

        apache2ctl -D SSL -k start

    And my question is: why did Apache (or apache2ctl) fail to start with SSL when I have installed and correctly configured mod_jk, and no other configuration changes were applied? Have I missed anything? Thanks in advance. -- Shaakunthala
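
    A hedged observation specific to SUSE: the settings in /etc/sysconfig/apache2 (including APACHE_SERVER_FLAGS) are applied by the distribution's init script, not by apache2ctl itself, so the SSL flag can silently go missing when Apache is started directly. Assuming stock SLES 11 tooling:

        # Start via the init script so sysconfig flags like SSL are applied
        rcapache2 restart

        # Or persist the flag with SUSE's helper
        a2enflag SSL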

    Read the article

  • Account Lockout with pam_tally2 in RHEL6

    - by Aaron Copley
    I am using pam_tally2 to lock out accounts after 3 failed logins per policy; however, the connecting user does not receive the error indicating pam_tally2's action (via SSH). I expect to see, on the 4th attempt:

        Account locked due to 3 failed logins

    No combination of required or requisite, or the order in the file, seems to help. This is under Red Hat 6, and I am using /etc/pam.d/password-auth. The lockout does work as expected, but the user does not receive the error described above. This causes a lot of confusion and frustration, as they have no way of knowing why authentication fails when they are sure they are using the correct password. The implementation follows the NSA's Guide to the Secure Configuration of Red Hat Enterprise Linux 5 (pg. 45). It's my understanding that the only thing changed in PAM is that /etc/pam.d/sshd now includes /etc/pam.d/password-auth instead of system-auth.

        If locking out accounts after a number of incorrect login attempts is required by your security policy, implement use of pam_tally2.so. To enforce password lockout, add the following to /etc/pam.d/system-auth. First, add to the top of the auth lines:

            auth required pam_tally2.so deny=5 onerr=fail unlock_time=900

        Second, add to the top of the account lines:

            account required pam_tally2.so

    EDIT: I get the error message by resetting pam_tally2 during one of the login attempts.

        user@localhost's password: (bad password)
        Permission denied, please try again.
        user@localhost's password: (bad password)
        Permission denied, please try again.
        (reset pam_tally2 from another shell)
        user@localhost's password: (good password)
        Account locked due to ...
        Account locked due to ...
        Last login: ...
        [user@localhost ~]$
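
    A hedged guess worth checking: over SSH, conversation text from PAM auth modules usually only reaches the client when keyboard-interactive authentication is in play; with plain password authentication, sshd often swallows it. Assuming stock RHEL 6 sshd:

        # /etc/ssh/sshd_config
        UsePAM yes
        ChallengeResponseAuthentication yes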

    Read the article

  • Unable to install ffmpeg-php

    - by matt74tm
    I followed the instructions on http://www.mysql-apache-php.com/ffmpeg-install.htm but ffmpeg-php does not show up in my phpinfo(). The commands I ran (in order):

        # yum install ffmpeg ffmpeg-devel
        ... Public key for faac-1.26-1.el5.rf.x86_64.rpm is not installed
        # rpm -Uhv http://apt.sw.be/redhat/el5/en/i386/rpmforge/RPMS/rpmforge-release-0.3.6-1.el5.rf.i386.rpm
        ... 1:rpmforge-release ########################################### [100%]
        # yum install ffmpeg
        ... Complete!
        # wget http://space.dl.sourceforge.net/project/ffmpeg-php/ffmpeg-php/0.6.0/ffmpeg-php-0.6.0.tbz2
        ...
        # tar -xjf ffmpeg-php-0.6.0.tbz2
        # cd ffmpeg-php-0.6.0
        # phpize
        ... configure: error: ffmpeg headers not found. Make sure ffmpeg is compiled as shared libraries using the --enable-shared option
        # yum install ffmpeg-devel
        ... Complete!
        # ./configure
        ... config.status: creating config.h
        # make
        ... Build complete. Don't forget to run 'make test'.
        # make install
        Installing shared extensions: /usr/local/lib/php/extensions/no-debug-non-zts-20090626/
        # ls -al /usr/local/lib/php/extensions/no-debug-non-zts-20090626/
        ... -rwxr-xr-x 1 root root 185285 Sep 20 03:36 ffmpeg.so*
        # nano /usr/local/lib/php.ini

    In php.ini I put these two lines at the end of the file:

        [ffmpeg]
        extension=ffmpeg.so

    Then:

        # service httpd restart

    But phpinfo() still does not show any 'ffmpeg' section. This is the correct php.ini because:

        # php -i | grep php\.ini
        Configuration File (php.ini) Path => /usr/local/lib
        Loaded Configuration File => /usr/local/lib/php.ini
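
    Two hedged checks that often explain this: confirm the CLI loads the extension at all, and confirm the extension_dir PHP scans matches where 'make install' dropped ffmpeg.so. Note also that 'php -i' reports the CLI's php.ini, which is not always the same file Apache's mod_php reads (check the phpinfo() page itself for "Loaded Configuration File"):

        # Does the CLI load it at all?
        php -r 'var_dump(extension_loaded("ffmpeg"));'

        # Which extension directory is PHP actually scanning?
        php -i | grep extension_dir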

    Read the article

  • Cisco adaptive security appliance is dropping packets where SYN flag is not set

    - by Brett Ryan
    We have an Apache instance sitting inside our DMZ which is configured to proxy requests to an internal NATed Tomcat instance inside our network. It works fine, but then all of a sudden requests from Apache to the Tomcat instance stop getting through, with the following in the Apache logs:

        [error] (70007)The timeout specified has expired: ajp_ilink_receive() can't receive header

    Investigating in the Cisco log viewer reveals the following:

        Error Message: %ASA-6-106015: Deny TCP (no connection) from IP_address/port to IP_address/port flags tcp_flags on interface interface_name.
        Explanation: The adaptive security appliance discarded a TCP packet that has no associated connection in the adaptive security appliance connection table. The adaptive security appliance looks for a SYN flag in the packet, which indicates a request to establish a new connection. If the SYN flag is not set, and there is not an existing connection, the adaptive security appliance discards the packet.
        Recommended Action: None required unless the adaptive security appliance receives a large volume of these invalid TCP packets. If this is the case, trace the packets to the source and determine the reason these packets were sent.

    All our machines are virtualised using VMware, and by default the machines have been using the Intel E1000 emulated NIC. Our network administrator has changed this to the VMXNET3 driver in an attempt to correct the problem; we just have to wait and see if the problem persists, as it's intermittent. Is there something else that could be causing this problem? This isn't the first service where we have had similar issues. Our Apache host is running Ubuntu 11.10 with kernel version 3.0.0-17-server. We have also had this issue on RHEL 5 (5.8) running kernel 2.6.18-308.16.1.el5; this machine also has the E1000 NIC. NOTE: I am not a network administrator; I am a software architect and analyst programmer responsible for these systems.
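
    If the drops turn out to be long-idle AJP connections aging out of the ASA's connection table, one hedged mitigation (available from ASA 8.2) is TCP state bypass for just this traffic path; the hosts, port, and names below are placeholders:

        access-list tcp_bypass extended permit tcp host 10.0.0.5 host 10.0.1.10 eq 8009
        class-map tcp_bypass_class
         match access-list tcp_bypass
        policy-map tcp_bypass_policy
         class tcp_bypass_class
          set connection advanced-options tcp-state-bypass
        service-policy tcp_bypass_policy interface inside

    Raising the connection idle timeout for that traffic class is a gentler variant of the same idea.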

    Read the article

  • How to diff a BIOS or back up/restore all BIOS settings

    - by sfonck
    Hi, I'm using a Dell M90 Precision laptop which has an NVidia Quadro FX 2500M graphics card and is running Windows XP. The laptop had been running fine, but a few weeks ago the screen went 'white' - after restarting the computer, the BIOS and startup screens show weird green dots and stripes, and normal startup only shows a black screen... only VGA mode works to display something. I've been trying to remove and reinstall the correct drivers downloaded from Dell's website - no solution. I gave up and reinstalled XP - everything was working perfectly again. Two weeks later - again the white screen - I tried everything again (flashing a new BIOS also - nothing worked). Reinstalled XP - everything was working again, so I made a DriveSnapShot image of the partition. Today - again the 'white screen'. OK, no problem... I was thinking all I needed to do was restore the DriveSnapShot backup... A few minutes later the backup was restored... but guess what: the video driver does not work correctly. As the DriveSnapShot restored the complete partition, exactly as it was at the time everything was working perfectly, this would mean my driver problems are due to 'settings' in the BIOS or on the graphics card itself, and these 'settings' can get overridden by doing a new XP install... I'm out of options. Can somebody help me find a solution for this problem:

    - Is there some way to back up and restore a BIOS after seeing some problems?
    - Is there some way to know what is causing this problem, like a BIOS diff utility?

    Thanks!

    Read the article

  • Could not continue scan with NOLOCK due to data movement during installation

    - by dbdev1
    I am running Windows Server 2008 Standard Edition R2 x64 and I installed SQL Server 2008 Developer Edition. All of the preliminary checks run fine (apart from a warning about Windows Firewall and opening ports, which is unrelated to this and shouldn't be an issue - I can open those ports). Half way through the actual installation, I get a popup with this error:

        Could not continue scan with NOLOCK due to data movement.

    The installation still runs to completion when I press OK. However, at the end, it states that the following services "failed":

    - database engine services
    - sql server replication
    - full-text search
    - reporting services

    How do I know if this actually means that anything from my installation (which is on a clean Windows Server setup - nothing else on there, no previous SQL Servers, no upgrades, etc.) is missing? I know from my programming experience that locks are for concurrency control, and the Microsoft help on this issue points to changing my query's locks/transactions in a certain way to fix it. But I am not touching any queries. Also, now that I have installed the app, when I log in, I keep getting this message:

        TITLE: Connect to Server
        ------------------------------
        Cannot connect to MSSQLSERVER.
        ------------------------------
        ADDITIONAL INFORMATION:
        A network-related or instance-specific error occurred while establishing a connection to SQL Server. The server was not found or was not accessible. Verify that the instance name is correct and that SQL Server is configured to allow remote connections. (provider: Named Pipes Provider, error: 40 - Could not open a connection to SQL Server) (Microsoft SQL Server, Error: 67)
        For help, click: http://go.microsoft.com/fwlink?ProdName=Microsoft+SQL+Server&EvtSrc=MSSQLServer&EvtID=67&LinkId=20476
        ------------------------------
        BUTTONS: OK
        ------------------------------

    I went into the Configuration Manager and enabled named pipes and restarted the service (this is something I have done before, as this message is common and not serious). I have disabled Windows Firewall temporarily. I have checked the instance name against the error logs. Please advise on both of these errors. I think these two errors are related. Thanks
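
    A hedged aside on the second error: MSSQLSERVER is the internal service name of the default instance, not a server name to type into the Connect dialog; a default instance is addressed by machine name, localhost, or '.'. A quick local check, assuming Windows authentication:

        REM Connect to the local default instance and prove the engine answers
        sqlcmd -S localhost -E -Q "SELECT @@SERVERNAME, @@VERSION"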

    Read the article

  • Postfix: LDAP not working (warning: dict_ldap_lookup: Search base not found: 32: No such object)

    - by Heinzi
    I set up LDAP access with Postfix.

        ldapsearch -D "cn=postfix,ou=users,ou=system,[domain]" -w postfix -b "ou=users,ou=people,[domain]" -s sub "(&(objectclass=inetOrgPerson)(mail=[mailaddr]))"

    delivers the correct entry. The LDAP config file looks like:

        root@server2:/etc/postfix/ldap# cat mailbox_maps.cf
        server_host = localhost
        search_base = ou=users,ou=people,[domain]
        scope = sub
        bind = yes
        bind_dn = cn=postfix,ou=users,ou=system,[domain]
        bind_pw = postfix
        query_filter = (&(objectclass=inetOrgPerson)(mail=%s))
        result_attribute = uid
        debug_level = 2

    The bind_dn and bind_pw should be the same as I used above with ldapsearch. Nevertheless, calling postmap doesn't work:

        root@server2:/etc/postfix/ldap# postmap -q [mailaddr] ldap:/etc/postfix/ldap/mailbox_maps.cf
        postmap: warning: dict_ldap_lookup: /etc/postfix/ldap/mailbox_maps.cf: Search base 'ou=users,ou=people,[domain]' not found: 32: No such object

    If I change the LDAP configuration so that anonymous users have complete access to LDAP:

        olcAccess: {-1}to * by * read

    then it works:

        root@server2:/etc/postfix/ldap# postmap -q [mailaddr] ldap:/etc/postfix/ldap/mailbox_maps.cf
        [user-id]

    But when I restrict this access to the postfix user:

        olcAccess: {-1}to * by dn="cn=postfix,ou=users,ou=system,[domain]" read by * break

    it doesn't work, and produces the error printed above (although ldapsearch works; only postmap doesn't). Why doesn't it work when binding with the postfix DN? I think I set up the LDAP ACL for the postfix user correctly, as the ldapsearch command should prove. What can be the reason for this behaviour?
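
    One hedged thing to rule out: Postfix's LDAP client defaults to LDAP protocol version 2, whereas ldapsearch speaks version 3, and some slapd ACL/bind combinations behave differently under v2. The map file accepts a version parameter:

        # /etc/postfix/ldap/mailbox_maps.cf (addition)
        version = 3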

    Read the article

  • Uninstall Google Chrome in Fedora

    - by tbleckert
    Yesterday I installed Fedora 15 Beta with GNOME 3 - it works well. One problem though: I installed the 32-bit Chrome (which was wrong, it should have been the 64-bit version) and now I can't uninstall it. I can't find it in Add/Remove Software, and I also can't install the correct version of Chrome because it complains about my other copy of Chrome. Any ideas how I can remove the existing copy and get the 64-bit version installed? Here's the message I get when trying to install:

        Test Transaction Errors:
        file /etc/cron.daily/google-chrome from install of google-chrome-stable-11.0.696.65-84435.x86_64 conflicts with file from package google-chrome-stable-11.0.696.65-84435.i386
        file /opt/google/chrome/chrome from install of google-chrome-stable-11.0.696.65-84435.x86_64 conflicts with file from package google-chrome-stable-11.0.696.65-84435.i386
        file /opt/google/chrome/chrome-sandbox from install of google-chrome-stable-11.0.696.65-84435.x86_64 conflicts with file from package google-chrome-stable-11.0.696.65-84435.i386
        file /opt/google/chrome/libffmpegsumo.so from install of google-chrome-stable-11.0.696.65-84435.x86_64 conflicts with file from package google-chrome-stable-11.0.696.65-84435.i386
        file /opt/google/chrome/libpdf.so from install of google-chrome-stable-11.0.696.65-84435.x86_64 conflicts with file from package google-chrome-stable-11.0.696.65-84435.i386
        file /opt/google/chrome/libppGoogleNaClPluginChrome.so from install of google-chrome-stable-11.0.696.65-84435.x8...
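
    A hedged sketch of removing the 32-bit copy from the command line, with the package name taken from the conflict messages above:

        # Confirm exactly what is installed
        rpm -qa | grep google-chrome

        # Remove the 32-bit build, then install the x86_64 RPM
        su -c 'rpm -e google-chrome-stable-11.0.696.65-84435.i386'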

    Read the article

  • Ubuntu 9.10 RSA authentication: ssh fails, filezilla runs fine

    - by MariusPontmercy
    This is quite a mystery for me. I usually use passwordless RSA authentication to log into my remote *nix servers with ssh and sftp, and I've never had any problem until now. I cannot connect to an Ubuntu 9.10 machine:

        user@myclient$ ssh -i .ssh/Ganymede_key [email protected]
        [...]
        debug1: Host 'ganymede.server.com' is known and matches the RSA host key.
        debug1: Found key in /home/user/.ssh/known_hosts:14
        debug2: bits set: 494/1024
        debug1: ssh_rsa_verify: signature correct
        debug2: kex_derive_keys
        debug2: set_newkeys: mode 1
        debug1: SSH2_MSG_NEWKEYS sent
        debug1: expecting SSH2_MSG_NEWKEYS
        debug2: set_newkeys: mode 0
        debug1: SSH2_MSG_NEWKEYS received
        debug1: SSH2_MSG_SERVICE_REQUEST sent
        debug2: service_accept: ssh-userauth
        debug1: SSH2_MSG_SERVICE_ACCEPT received
        debug2: key: .ssh/Ganymede_key (0xb96a0ef8)
        debug2: key: .ssh/Ganymede_key ((nil))
        debug1: Authentications that can continue: publickey,password,keyboard-interactive
        debug1: Next authentication method: publickey
        debug1: Offering public key: .ssh/Ganymede_key
        debug2: we sent a publickey packet, wait for reply
        debug1: Authentications that can continue: publickey,password,keyboard-interactive
        debug1: Trying private key: .ssh/Ganymede_key
        debug1: read PEM private key done: type RSA
        debug2: we sent a publickey packet, wait for reply
        debug1: Authentications that can continue: publickey,password,keyboard-interactive
        debug2: we did not send a packet, disable method
        debug1: Next authentication method: keyboard-interactive
        debug2: userauth_kbdint
        debug2: we sent a keyboard-interactive packet, wait for reply
        debug2: input_userauth_info_req
        debug2: input_userauth_info_req: num_prompts 1

    Then it falls back to password authentication. If I disable password authentication on the remote machine, my connection attempt just fails with "Permission denied (publickey)." Same thing for sftp from the command line. The "funny" thing is that the exact same RSA key works like a charm with a FileZilla sftp session instead:

        12:08:00 Trace: Offered public key from "/home/user/.filezilla/keys/Ganymede_key"
        12:08:00 Trace: Offer of public key accepted, trying to authenticate using it.
        12:08:01 Trace: Access granted
        12:08:01 Trace: Opened channel for session
        12:08:01 Trace: Started a shell/command
        12:08:01 Status: Connected to ganymede.server.com
        12:08:02 Trace: CSftpControlSocket::ConnectParseResponse()
        12:08:02 Trace: CSftpControlSocket::ResetOperation(0)
        12:08:02 Trace: CControlSocket::ResetOperation(0)
        12:08:02 Status: Retrieving directory listing...
        12:08:02 Trace: CSftpControlSocket::SendNextCommand()
        12:08:02 Trace: CSftpControlSocket::ChangeDirSend()
        12:08:02 Command: pwd
        12:08:02 Response: Current directory is: "/root"
        12:08:02 Trace: CSftpControlSocket::ResetOperation(0)
        12:08:02 Trace: CControlSocket::ResetOperation(0)
        12:08:02 Trace: CSftpControlSocket::ParseSubcommandResult(0)
        12:08:02 Trace: CSftpControlSocket::ListSubcommandResult()
        12:08:02 Trace: CSftpControlSocket::ResetOperation(0)
        12:08:02 Trace: CControlSocket::ResetOperation(0)
        12:08:02 Status: Directory listing successful

    Any thoughts? M
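
    Since the server accepts the key from one client but not the other, a hedged next step is to watch the server's auth log during both attempts and re-check the usual permission traps (sshd silently ignores keys when the target account's files are too permissive):

        # On the server, while reproducing both logins
        tail -f /var/log/auth.log

        # Permissions sshd insists on for the target account
        chmod 700 ~/.ssh
        chmod 600 ~/.ssh/authorized_keys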

    Read the article

  • Help configuring Mercury mail or similar with XAMPP to send e-mail outside of localhost

    - by user291040
    I'm building a PHP/MySQL-driven website for my department at work (installed via XAMPP). I need to be able to send mail to outside e-mail addresses (e.g., Yahoo, Hotmail, etc.) using the PHP mail() function. As I see it, I have two solutions:

    1. Configure the SMTP directive in php.ini to point at the server running at my work.
    2. Configure/run a mail server that can send e-mails outside of localhost (I'm trying Mercury because it comes installed with XAMPP).

    Here are the problems I've come up against: I took a guess at our SMTP server name, and when calling PHP mail(), I get the error:

        SMTP server response: 530 5.7.1 Client was not authenticated

    I can't be sure, however, that the SMTP name is correct (I can't get help from our IT guys because of politics). I have tried to use Mercury mail. Mercury seems to be picking up the request, but it doesn't want to forward the e-mail to the outside. I keep getting:

        Temporary error 240 (temporary MX resolution error)

    I've searched high and low but still can't find a definitive answer on how to send e-mails outside of localhost. Any help is greatly appreciated.
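
    For option 1, a hedged sketch of the relevant php.ini directives on Windows (the hostname is a placeholder). One caveat: PHP's built-in mail() speaks unauthenticated SMTP only, which is exactly what a "530 Client was not authenticated" relay rejects, so an authenticating mail library (e.g. PHPMailer) or a relay that trusts your host's IP may be required:

        ; php.ini - Windows-only SMTP settings used by mail()
        [mail function]
        SMTP = smtp.yourcompany.example
        smtp_port = 25
        sendmail_from = [email protected]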

    Read the article

  • Megacli is killing me, any help appreciated

    - by Stefan
    I run a server with 2 drives in RAID 0 configured through the BIOS. I just added 2 more drives using hotplug (the server is a Dell R610 with RHEL 5.4 64-bit) and I would like to configure a separate RAID 0 partition on these drives. I am getting the following error:

        /opt/MegaRAID/MegaCli/MegaCli64 -CfgLdAdd r0[32:2, 32:3] -a0
        The specified physical disk does not have the appropriate attributes to complete the requested command.
        Exit Code: 0x26

    All the parameters are correct and there is just no reason why this command should not work; see this (Fujitsu is the current RAID, Seagate is the new one I want to create):

        /opt/MegaRAID/MegaCli/MegaCli64 -PDList -aALL | egrep 'Adapter|Enclosure|Slot|Inquiry'
        Adapter #0
        Enclosure Device ID: 32
        Slot Number: 0
        Enclosure position: 0
        Inquiry Data: FUJITSU MBD2147RC D807D0A4PA101174
        Enclosure Device ID: 32
        Slot Number: 1
        Enclosure position: 0
        Inquiry Data: FUJITSU MBD2147RC D807D0A4PA10115T
        Enclosure Device ID: 32
        Slot Number: 2
        Enclosure position: 0
        Inquiry Data: SEAGATE ST9300603SS FS033SE0TF5K
        Enclosure Device ID: 32
        Slot Number: 3
        Enclosure position: 0
        Inquiry Data: SEAGATE ST9300603SS FS023SE070FK

    I also tried to set up the drive as a hot spare, and got another strange error:

        /opt/MegaRAID/MegaCli/MegaCli64 -PDHSP -Set -physdrv[32:3] -a0
        Adapter: 0: Set Physical Drive at EnclId-32 SlotId-3 as Hot Spare Failed.
        FW error description: The specified device is in a state that doesn't support the requested command.
        Exit Code: 0x32

    As you can see, the disk is in the Unconfigured, Good state:

        Enclosure Device ID: 32
        Slot Number: 3
        Enclosure position: 0
        Device Id: 3
        Sequence Number: 1
        Media Error Count: 0
        Other Error Count: 0
        Predictive Failure Count: 0
        Last Predictive Failure Event Seq Number: 0
        PD Type: SAS
        Raw Size: 279.396 GB [0x22ecb25c Sectors]
        Non Coerced Size: 278.896 GB [0x22dcb25c Sectors]
        Coerced Size: 278.875 GB [0x22dc0000 Sectors]
        Firmware state: Unconfigured(good), Spun Up
        SAS Address(0): 0x5000c50005cd20b1
        SAS Address(1): 0x0
        Connected Port Number: 3(path0)
        Inquiry Data: SEAGATE ST9300603SS FS023SE070FK
        FDE Capable: Not Capable
        FDE Enable: Disable
        Secured: Unsecured
        Locked: Unlocked
        Needs EKM Attention: No
        Foreign State: Foreign
        Foreign Secure: Drive is not secured by a foreign lock key
        Device Speed: Unknown
        Link Speed: Unknown
        Media Type: Hard Disk Device
        Drive Temperature: 30C (86.00 F)
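
    One hedged observation from the last listing: the new drives report "Foreign State: Foreign", and the controller typically refuses to build arrays or hot spares from foreign disks until that metadata is cleared. A sketch of inspecting and clearing it (clearing discards any RAID metadata left on those disks by a previous controller, so be sure nothing on them is needed):

        # Show any foreign configurations the controller sees
        /opt/MegaRAID/MegaCli/MegaCli64 -CfgForeign -Scan -a0

        # Clear the foreign metadata so the disks become usable
        /opt/MegaRAID/MegaCli/MegaCli64 -CfgForeign -Clear -a0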

    Read the article
