Search Results

Search found 24177 results on 968 pages for 'true'.

  • Is memcache impacting my performance negatively?

    - by iTech
    I am using Pressflow 6, and New Relic seems to suggest that memcache is in fact hurting performance. My settings.php file:

        # Varnish reverse proxy on localhost
        $conf['reverse_proxy'] = TRUE;
        $conf['reverse_proxy_addresses'] = array('127.0.0.1');

        # Memcached configuration
        $conf['cache_inc'] = './sites/all/modules/memcache/memcache.inc';
        $conf['memcache_servers'] = array(
          '127.0.0.1:11211' => 'default',
        );

        ### END Mercury settings written on 2011-11-01T07:12:49-04:00
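
    A quick sanity check (a minimal sketch; it assumes memcached is listening on 127.0.0.1:11211 as configured above) is to ask the daemon for its counters -- many misses relative to hits would support New Relic's reading that the cache is mostly overhead:

        # Dump memcached's get/hit/miss counters over the wire.
        echo stats | nc -w1 127.0.0.1 11211 | grep -E 'cmd_get|get_hits|get_misses'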

  • Cobbler 2.2.2 problems

    - by Peter
    I have set up a dedicated LAN for Cobbler tests. My setup is:

    Cobbler server: openSUSE 12.3, cobbler 2.2.2 (from the openSUSE repos)
    Imported distros: CentOS 6.5, Red Hat 6.5, Red Hat 7.0, openSUSE 13.1
    Target machines: VMs in VirtualBox on Windows 7

    System provisioning works OK, but I have some problems. The first is that Cobbler does not honor the "pxe_just_once: 1" setting: when the setup of the target OS is finished, the target system continues to PXE boot after the reboot! The second problem is that the target server is not correctly configured. See my setup:

        cobbler system report --name=test
        Name                           : test
        TFTP Boot Files                : {}
        Comment                        :
        Fetchable Files                : {}
        Gateway                        : 192.168.0.1
        Hostname                       : testcob1.example.com
        Image                          :
        IPv6 Autoconfiguration         : False
        IPv6 Default Device            :
        Kernel Options                 : {}
        Kernel Options (Post Install)  : {}
        Kickstart                      : <<inherit>>
        Kickstart Metadata             : {}
        LDAP Enabled                   : False
        LDAP Management Type           : authconfig
        Management Classes             : []
        Management Parameters          : <<inherit>>
        Monit Enabled                  : False
        Name Servers                   : ['192.168.0.1', '8.8.8.8']
        Name Servers Search Path       : []
        Netboot Enabled                : False
        Owners                         : ['admin']
        Power Management Address       :
        Power ID                       :
        Power Password                 :
        Power Management Type          : ipmitool
        Power Username                 :
        Profile                        : RHEL-6.5-x86_64
        Proxy                          : <<inherit>>
        Red Hat Management Key         : <<inherit>>
        Red Hat Management Server      : <<inherit>>
        Repos Enabled                  : False
        Server Override                : <<inherit>>
        Status                         : testing
        Template Files                 : {}
        Virt Auto Boot                 : <<inherit>>
        Virt CPUs                      : <<inherit>>
        Virt Disk Driver Type          : <<inherit>>
        Virt File Size(GB)             : <<inherit>>
        Virt Path                      : <<inherit>>
        Virt RAM (MB)                  : <<inherit>>
        Virt Type                      : <<inherit>>
        Interface =====                : eth0
        Bonding Opts                   :
        Bridge Opts                    :
        DHCP Tag                       :
        DNS Name                       :
        Master Interface               :
        Interface Type                 :
        IP Address                     : 192.168.0.200
        IPv6 Address                   :
        IPv6 Default Gateway           :
        IPv6 MTU                       :
        IPv6 Secondaries               : []
        IPv6 Static Routes             : []
        MAC Address                    :
        Management Interface           : True
        MTU                            :
        Subnet Mask                    : 255.255.255.0
        Static                         : True
        Static Routes                  : []
        Virt Bridge                    :

    So, although I have set up the hostname and the network interface of the target system, after the installation the hostname is set to localhost.localdomain and eth0 is configured for DHCP, not static! How can I find the problem and fix it? Note that I have synced and restarted Cobbler a couple of times, but the problems persist.
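
    Two things worth checking, sketched below (paths assume the stock layout; the system name "test" comes from the report above): whether pxe_just_once is actually enabled in the settings the daemon is reading, and whether netboot gets switched off once the install finishes.

        # Is pxe_just_once really on in the settings cobblerd is using?
        grep pxe_just_once /etc/cobbler/settings

        # Turn netboot off for the finished system by hand, then re-sync;
        # if this stops the PXE loop, the installer's "done" trigger never
        # reached cobblerd.
        cobbler system edit --name=test --netboot-enabled=0
        cobbler sync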

  • Problems setting up Single Sign-On using Kerberos authentication

    - by user1124133
    I need to set up authentication for a Ruby on Rails application against Active Directory, using Kerberos. Some technical information: I am using Apache with mod_auth_kerb installed. In httpd.conf I added:

        LoadModule auth_kerb_module modules/mod_auth_kerb.so

    In /etc/krb5.conf I added the following configuration:

        [logging]
         default = FILE:/var/log/krb5libs.log
         kdc = FILE:/var/log/krb5kdc.log
         admin_server = FILE:/var/log/kadmind.log

        [libdefaults]
         default_realm = EU.ORG.COM
         dns_lookup_realm = false
         dns_lookup_kdc = false
         ticket_lifetime = 24h
         forwardable = yes

        [realms]
         EU.ORG.COM = {
          kdc = eudc05.eu.org.com:88
          admin_server = eudc05.eu.org.com:749
          default_domain = eu.org.com
         }

        [domain_realm]
         .eu.org.com = EU.ORG.COM
         eu.org.com = EU.ORG.COM

        [appdefaults]
         pam = {
          debug = true
          ticket_lifetime = 36000
          renew_lifetime = 36000
          forwardable = true
          krb4_convert = false
         }

    When I test with "kinit validuser" and enter the password, authentication is successful. klist returns:

        Ticket cache: FILE:/tmp/krb5cc_600
        Default principal: validuser@EU.ORG.COM

        Valid starting     Expires            Service principal
        02/08/13 13:46:40  02/08/13 23:46:47  krbtgt/EU.ORG.COM@EU.ORG.COM
                renew until 02/09/13 13:46:40

        Kerberos 4 ticket cache: /tmp/tkt600
        klist: You have no tickets cached

    In the application's Apache configuration I added:

        <IfModule mod_auth_kerb.c>
          <Location /winlogin>
            AuthType Kerberos
            AuthName "Kerberos Loginsss"
            KrbMethodNegotiate off
            KrbAuthoritative on
            KrbVerifyKDC off
            KrbAuthRealms EU.ORG.COM
            Krb5Keytab /home/crmdata/httpd/apache.keytab
            KrbSaveCredentials off
            Require valid-user
          </Location>
        </IfModule>

    and restarted Apache. Now some tests:

    When I try to access the application from Win7, I get a pop-up message box with the text: "Warning: This server is requesting that your username and password be sent in an insecure manner (basic authentication without a secure connection)". When I enter valid credentials, my application opens successfully and all works fine. Questions: Is it OK that such a window pops up for the user? With NTLM authentication there is no such pop-up. I checked IE's Internet Options, and 'Enable Integrated Windows Authentication' is checked. Why does IE send a username and password to Apache at all? If I understand correctly, Windows itself should perform the authentication against Active Directory using the Kerberos protocol.

    When I access the application from Win7 and enter incorrect credentials in the pop-up box, the application says "Authentication failed" (this is OK). In the Apache error log I see:

        [error] [client 192.168.56.1] krb5_get_init_creds_password() failed: Client not found in Kerberos database

    But then I get no further chance to enter valid credentials; only after restarting IE do I get the pop-up box again. What could be incorrect or missing in my Kerberos setup? I read in a blog post that something probably needs to be done on the Active Directory side as well. What, exactly?
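
    For what it's worth, the pop-up is expected with this configuration: KrbMethodNegotiate off disables the SPNEGO handshake, so mod_auth_kerb can only fall back to Basic password prompts. A sketch of the usual true-SSO setup follows; the service account name (svc-apache) and host name (app.eu.org.com) are placeholders, and the ktpass step runs on a domain controller:

        # Apache side: let the browser negotiate a Kerberos ticket instead
        # of prompting for a password.
        KrbMethodNegotiate on
        KrbMethodK5Passwd  off

        # AD side (on a domain controller): map an SPN for the web server
        # onto a service account and export the keytab that Krb5Keytab
        # points at.
        ktpass -princ HTTP/app.eu.org.com@EU.ORG.COM -mapuser svc-apache
               -crypto RC4-HMAC-NT -ptype KRB5_NT_PRINCIPAL -pass * -out apache.keytab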

  • Is it dangerous for a processor core to be *always* loaded at 100%?

    - by javapowered
    In my HFT software I plan to dedicate one core to stock index calculation. That would be simply a while(true) loop without any delays, which will recalculate (sum and multiply) the components as often as possible -- millions of times per second -- and I plan to run it 8 hours per day, every day. I have never before loaded my computer at 100%, full-time, every day. Could that be dangerous? Does a processor have some kind of "resource" (a very large one, of course) that gets used up, after which it stops working?
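
    Should you run it that way, a sketch like this (Linux; the binary name ./index_calc is a placeholder) pins the loop to one core and lets you watch the thermals, which are the practical limit rather than any wear-out counter:

        # Pin the hot loop to core 7 so the scheduler doesn't bounce it
        # across cores.
        taskset -c 7 ./index_calc &

        # Watch package/core temperatures while it runs (lm-sensors).
        watch -n 5 sensors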

  • Custom URL rewrite stops working after 20 seconds?

    - by 101224863727594634919
    Hi all, I have a simple question about the custom URL-rewriting module from http://weblogs.asp.net/scottgu/archive/2007/02/26/tip-trick-url-rewriting-with-asp-net.aspx. After a period of time the redirects stop working. When I trace the non-working requests I find:

        URL_CACHE_ACCESS_END
        PhysicalPath
        URLInfoFromCache      true
        URLInfoAddedToCache   false
        ErrorCode             0
        ErrorCode             The operation completed successfully. (0x0)

    Is there any option to disable URL cache access for a website? Thanks in advance.
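
    The trace looks like IIS7 failed-request tracing, so one thing to try, as a sketch (plain IIS7 configuration; it goes in the site's web.config), is switching off the site's URL caches so every request re-runs the rewriting module:

        <configuration>
          <system.webServer>
            <!-- Disable both the user-mode and kernel-mode caches. -->
            <caching enabled="false" enableKernelCache="false" />
          </system.webServer>
        </configuration>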

  • Ubuntu karmic doesn't have the version file?

    - by Blankman
    I was following this tutorial: http://articles.slicehost.com/2010/4/23/ubuntu-karmic-setup-part-2

    On my Ubuntu Karmic instance (on EC2, an AMI from elastic) I don't see this file:

        cat /etc/lsb-release

    It just isn't there. How can I see the version of the OS, and shouldn't that file be there? Also, some people have told me Ubuntu isn't really used as a server -- is that true, or is the trend making it more viable?
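
    A few fallbacks that usually work when /etc/lsb-release is missing (a sketch; lsb-release is the stock Ubuntu package providing the query tool):

        # Other places the release string normally lives:
        cat /etc/issue
        cat /etc/debian_version

        # Or install the LSB tooling and ask it directly:
        sudo apt-get install lsb-release
        lsb_release -a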

  • How to override puppet class arguments in child node?

    - by Jon Skarpeteig
    I'm attempting to accomplish something like this:

        node 'basenode' {
          class { 'puppet':
            disable => false,
          }
        }

        node 'child' inherits 'basenode' {
          class { 'puppet':
            disable => true,
          }
        }

    This gives me:

        err: Could not retrieve catalog from remote server: Error 400 on SERVER: Duplicate definition: Class[Puppet] is already defined

    How can I override this setting for this single node, and still have a parameterised class?
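
    One workaround, sketched under the constraint that a parameterised class may be declared only once per catalog: give up the node inheritance instead of the parameters, and declare the class directly in each node with the value it needs.

        node 'basenode' {
          class { 'puppet': disable => false }
        }

        # No 'inherits': the child declares the class itself, so there is
        # no duplicate declaration to collide with.
        node 'child' {
          class { 'puppet': disable => true }
        }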

  • puppet variables

    - by Joey Bagodonuts
    I am trying to use variables in my module's manifest.pp, with little luck:

        class mysoftware($version = "dev-2011.02.04b") {
          File { links => follow }

          file { "/opt/mysoftware":
            ensure => directory,
          }

          file { "/opt/mysoftware/share":
            source  => "puppet://puppet/mysoftware/air/$version",
            recurse => true,
          }
        }

    This does not seem to work when I assign the class to a node via the nodes.pp file. I am running puppetmaster 2.6.4; the puppetd clients are 0.25.
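
    One likely culprit, offered as a guess: the class mysoftware($version = ...) syntax is a parameterised class, which only arrived in Puppet 2.6, so the 0.25 agents are a suspect here. With 2.6 on both ends, the nodes.pp declaration would look like this sketch (the node name client1 is a placeholder):

        node 'client1' {
          class { 'mysoftware':
            version => 'dev-2011.02.04b',
          }
        }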

  • Do I need a Gigabit router for a 24 Mbps down / 7 Mbps up cable modem?

    - by djangofan
    Do I need a wireless router with Gigabit-capable ports for a cable modem rated at 24 Mbps down and 7 Mbps up? Does anyone know how to calculate this? FYI, I won't be using the wireless connection from my main computer. My computer connects via a wire to the (wireless) router, which in turn is connected to the cable modem. My research suggests that a 100 Mbps port can easily handle it -- 24 Mbps is roughly a quarter of what Fast Ethernet carries. Is that true?

  • Is Oracle Smartscan really effective?

    - by Vimvq1987
    Oracle is marketing Exadata to our company, and they're focusing on Smart Scan, which is advertised as 10x more effective than traditional solutions. I don't really understand (and somewhat don't really believe) the claim, because it assumes the data sits on separate storage and no indexes are used, so that "1 TB of data was scanned and only 2 MB returned". So I want to ask: is Smart Scan really effective? Is it a genuine solution that greatly improves performance, or just a buzzword? Thank you so much for sharing your experience.

  • Is DNS propagation still slow?

    - by spiffytech
    I've been told to assume it takes as long as 48 hours for a DNS change to propagate throughout the entire Internet, because some DNS servers cache their entries for longer than my TTL. However, for years and across ISPs and domains, every time I've made a DNS change I see the effects within a couple of hours. Is it still true that I need to assume a full two days for everyone to see my changes?
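
    You can watch the caching happen (a sketch; example.com stands in for your domain) by querying your resolver twice -- the number in the answer's TTL column counts down for a cached entry, and a resolver that honors your TTL can never hold a record longer than that:

        # On the second query the answer usually comes from the resolver's
        # cache; the first column after the name is the seconds remaining.
        dig +noall +answer example.com
        sleep 10
        dig +noall +answer example.com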

  • How do I filter certain JavaScript from running without blocking it all on Drudge Report?

    - by jay
    How do I block this particular auto-refresh script on Drudge Report from running in my Firefox browser? I have the NoScript and Adblock Plus extensions installed, but neither of them explains how to filter out one particular script, keeping it from running while leaving the rest alone. I don't want to stop all JavaScript from running -- just the script listed below. Any help would be appreciated.

        var timer = setInterval("autoRefresh()", 1000 * 60 * 3);
        function autoRefresh() { self.location.reload(true); }
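
    One approach is a Greasemonkey user script, sketched here under the assumption that the page really keeps the interval ID in a global named timer, as the snippet above suggests:

        // ==UserScript==
        // @name     Stop Drudge auto-refresh
        // @include  http://www.drudgereport.com/*
        // ==/UserScript==

        // Greasemonkey sandboxes scripts, so reach the page's own globals
        // through unsafeWindow and cancel the 3-minute reload timer.
        unsafeWindow.clearInterval(unsafeWindow.timer);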

  • seaudit report detail

    - by user1014130
    I've just started using SELinux in the last 6 months and am getting to grips with it. However, using sealert on a new CentOS 6 server, I'm not getting the level of detail I did with CentOS 5. To illustrate, I run:

        sealert -a /var/log/audit/audit.log

    On CentOS 5 I get:

        Summary:

        SELinux is preventing postdrop (postfix_postdrop_t) "getattr" to
        /var/log/httpd/error_log (httpd_log_t).

        Detailed Description:

        SELinux denied access requested by postdrop. It is not expected that
        this access is required by postdrop and this access may signal an
        intrusion attempt. It is also possible that the specific version or
        configuration of the application is causing it to require additional
        access.

        Allowing Access:

        Sometimes labeling problems can cause SELinux denials. You could try to
        restore the default system file context for /var/log/httpd/error_log,
        restorecon -v '/var/log/httpd/error_log' If this does not work, there
        is currently no automatic way to allow this access. Instead, you can
        generate a local policy module to allow this access - see FAQ
        (http://fedora.redhat.com/docs/selinux-faq-fc5/#id2961385) Or you can
        disable SELinux protection altogether. Disabling SELinux protection is
        not recommended. Please file a bug report
        (http://bugzilla.redhat.com/bugzilla/enter_bug.cgi) against this
        package.

        Additional Information:

        Source Context       root:system_r:postfix_postdrop_t
        Target Context       system_u:object_r:httpd_log_t
        Target Objects       /var/log/httpd/error_log [ file ]
        Source               postdrop
        Source Path          /usr/sbin/postdrop
        Port
        Host
        Source RPM Packages  postfix-2.3.3-2.1.el5_2
        Target RPM Packages
        Policy RPM           selinux-policy-2.4.6-279.el5_5.1
        Selinux Enabled      True
        Policy Type          targeted
        MLS Enabled          True
        Enforcing Mode       Enforcing
        Plugin Name          catchall_file
        Host Name            server109-228-26-144.live-servers.net
        Platform             Linux server109-228-26-144.live-servers.net
                             2.6.18-194.8.1.el5 #1 SMP Thu Jul 1 19:04:48
                             EDT 2010 x86_64 x86_64
        Alert Count          1
        First Seen           Wed Jun 13 11:43:55 2012
        Last Seen            Wed Jun 13 11:43:55 2012

    But on CentOS 6 I get only the same Summary, Detailed Description, and Allowing Access sections, with no "Additional Information" block at all. I'm running exactly the same command. Does anyone have any idea why I'm not getting the "Additional Information" that I do with CentOS 5?

    Thanks in advance
    Dylan
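
    Two things worth trying, offered as guesses rather than a known fix: on CentOS 6 the analysis plugins ship in separate packages, and the raw fields are always recoverable from the audit log itself.

        # Make sure the CentOS 6 box has the full setroubleshoot stack,
        # then re-run the analysis.
        yum install setroubleshoot-server setroubleshoot-plugins
        sealert -a /var/log/audit/audit.log

        # The contexts, paths, and timestamps also sit in the raw AVC
        # records, whatever sealert prints.
        ausearch -m avc -ts today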

  • Google Analytics and multiple independent subdomains

    - by MTilsted
    I need some help setting up Google Analytics correctly. Here is my setup: we host sites for multiple customers, and each customer has their own subdomain on our site. So we have customerA.oursite.com and customerB.oursite.com, and as we add more customers we get more subdomains. We want to track all data for each customer independently, but I don't want to create a new Google tracking code for each new customer. So my plan is to track all visits with "oursite.com" and then create a filter in Google Analytics to get the data for each specific customer (all visits for a specific subdomain).

    Is this (one tracking code plus a subdomain filter) the right way to do it? To create a subdomain filter I add a new profile for each customer, then add a custom filter saying include "Request URI" and fill in "CustomerDomain.oursite.com". Is that correct?

    And a general question about filters: is it really impossible to apply a new filter to the data already in an existing profile? I would really like to collect all the data in one "main" profile and then create subdomain filters as we need them, but it seems that Google only applies filters to new incoming data, not existing data. Is this really true?

    The following is my tracking code. Is '_setDomainName', 'none' the right thing to do?

        <script type="text/javascript">
          /* Tracking code for qrtown.com */
          var _gaq = _gaq || [];
          _gaq.push(['_setAccount', 'UA-11584298-10']);
          _gaq.push(['_setDomainName', 'none']);
          _gaq.push(['_trackPageview']);

          (function() {
            var ga = document.createElement('script');
            ga.type = 'text/javascript';
            ga.async = true;
            ga.src = ('https:' == document.location.protocol ?
                      'https://ssl' : 'http://www') + '.google-analytics.com/ga.js';
            var s = document.getElementsByTagName('script')[0];
            s.parentNode.insertBefore(ga, s);
          })();
        </script>
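
    For reference, a sketch of the setting usually recommended for one classic ga.js property spanning subdomains (everything else in the snippet above stays the same): scope the cookie to the top-level domain instead of 'none', so visits to every customer subdomain land in one property with the hostname intact for profile filters.

        _gaq.push(['_setAccount', 'UA-11584298-10']);
        // A cookie on .oursite.com covers customerA.oursite.com,
        // customerB.oursite.com, and any future subdomain.
        _gaq.push(['_setDomainName', '.oursite.com']);
        _gaq.push(['_trackPageview']);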

  • Can 32-bit print drivers work on 64-bit Windows?

    - by Matt
    I've been reading around, and it seems that 32-bit drivers do not work under 64-bit Windows. Is this true? Since 32-bit applications can run under 64-bit Windows, it seems ridiculous that 32-bit printer drivers cannot. Do printer drivers run at the kernel level? Sounds like we're in for driver hell in our RDP environments.

  • UDISKS instead of HAL

    - by MeJ
    Does anybody have some experience with udisks? Since HAL will no longer be supported on most Linux distributions, I am thinking of switching to udisks. Here is what I use today:

        for UDI in $(hal-find-by-property --key storage.bus --string usb)
        do
          HAL_TMP=$(hal-get-property --udi $UDI --key storage.removable.media_available)
          if [ "$HAL_TMP" = "true" ]; then
            HAL_DEV=$(hal-get-property --udi $UDI --key block.device)
            HAL_SIZE=$(hal-get-property --udi $UDI --key storage.removable.media_size)
            HAL_TYPE=$(hal-get-property --udi $UDI --key storage.drive_type)
          fi
        done

    How do I adapt the above commands to use udisks instead of HAL? Thanks!
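
    A rough udisks (v1) analogue of the loop above -- an untested sketch; the field names are taken from typical "udisks --show-info" output and may differ between versions:

        #!/bin/sh
        for DEV in $(udisks --enumerate-device-files); do
          INFO=$(udisks --show-info "$DEV")
          # Keep only USB-attached devices that currently have media.
          if echo "$INFO" | grep -q 'interface: *usb' &&
             echo "$INFO" | grep -q 'has media: *1'; then
            SIZE=$(echo "$INFO" | grep -m1 ' size:' | awk '{print $2}')
            echo "$DEV size=$SIZE"
          fi
        done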

  • Disable ActiveMQ demo component

    - by Rich
    Is it possible to disable the demo component of the ActiveMQ console? I have tried removing the following lines from jetty.xml, but the /demo link still works:

        <bean class="org.eclipse.jetty.webapp.WebAppContext">
          <property name="contextPath" value="/demo" />
          <property name="resourceBase" value="${activemq.home}/webapps/demo" />
          <property name="logUrlOnStart" value="true" />
        </bean>

    I am using ActiveMQ 5.5.1.
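
    A guess at making it stick, assuming the stock 5.5.1 layout under $ACTIVEMQ_HOME: remove the webapp itself as well as the bean, then restart, since a running broker won't re-read jetty.xml.

        # Park the demo webapp where Jetty can't deploy it, then restart.
        mv $ACTIVEMQ_HOME/webapps/demo $ACTIVEMQ_HOME/demo.disabled
        $ACTIVEMQ_HOME/bin/activemq restart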

  • Automatic backup software?

    - by Mehrdad
    Every piece of backup software I've seen (even the ones that claim "continuous" protection) only backs up files periodically -- say, every 5 minutes. What I'm looking for is true continuous backup software, i.e. software that can transparently back up files immediately before they are written to, so that I can be certain I have every version of a file that ever existed. Is there any software that does this?

  • conditional mod_deflate based on headers

    - by Ben K.
    mod_deflate seems pretty sweet. I'd love to turn it on across the board for text/html -- but for certain pages I don't want to gzip, since upstream proxies need to be able to inspect the content. I know there's an AddOutputFilterByType directive -- is there any way to combine that with a header inspection, so that if I see "X-NO-COMPRESS: true" I skip mod_deflate?
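
    A sketch of the standard trick, assuming X-NO-COMPRESS arrives as a request header: mod_setenvif can set the no-gzip environment variable, which mod_deflate honors, so the two directives compose cleanly.

        # Requests carrying "X-NO-COMPRESS: true" bypass compression;
        # everything else of type text/html is gzipped.
        SetEnvIf X-NO-COMPRESS true no-gzip
        AddOutputFilterByType DEFLATE text/html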
