Search Results

Search found 10622 results on 425 pages for 'shared hosting'.

Page 108/425

  • Gluster strange issue: shared mount point behaves like separate mounts

    - by Satish
    I have two nodes. As an experiment I installed GlusterFS, created a volume, and successfully mounted it on each node, but if I create a file on node1 it does not show up on node2; the two mounts behave as if they were separate. node1: 10.101.140.10:/nova-gluster-vol 2.0G 820M 1.2G 41% /mnt node2: 10.101.140.10:/nova-gluster-vol 2.0G 33M 2.0G 2% /mnt Split-brain info for the volume: $ sudo gluster volume heal nova-gluster-vol info split-brain Gathering Heal info on volume nova-gluster-vol has been successful Brick 10.101.140.10:/brick1/sdb Number of entries: 0 Brick 10.101.140.20:/brick1/sdb Number of entries: 0 Test on node1: $ echo "TEST" > /mnt/node1 $ ls -l /mnt/node1 -rw-r--r-- 1 root root 5 Oct 27 17:47 /mnt/node1 On node2 the file isn't there, even though the mount is supposed to be shared: $ ls -l /mnt/node1 ls: cannot access /mnt/node1: No such file or directory What am I missing?

    Read the article

  • How to deploy an ASP.NET MVC application on shared hosting without losing the pretty URLs?

    - by ZX12R
    I am trying to deploy an ASP.NET MVC application on a shared server. I don't have access to IIS. What I have done, after reading various sources, is change the route registration in the Global.asax file as follows: routes.IgnoreRoute("{resource}.axd/{*pathInfo}"); routes.MapRoute( "Default", "{controller}.aspx/{action}/{id}", new { action = "Index", id = "" } ); routes.MapRoute( "Root", "", new { controller = "Home", action = "Index", id = "" } ); It is working fine, but the problem is that I have lost the pretty URLs. Is there a way to remove the ".aspx" after the controller name? Thanks in advance. :)

    Read the article

  • How to set up an Android source repo while hosting the git trees as private repositories on github?

    - by gby
    Hello there, I am trying to set up a private repository of Android source code while hosting the git trees on GitHub as private repos. I have no problem changing the manifest.xml file to point to public git trees hosted on GitHub in the same way that CyanogenMod does, but when trying to point to private repos I get the following error when running "repo sync": Initializing project username/android_external_webkit ... fatal: The remote end hung up unexpectedly error: Cannot fetch username/android_external_webkit Where username/android_external_webkit is of course a private GitHub repo of the same name. I understand the error occurs because I did not specify my user name and credentials to GitHub, but I fail to see how to do that in the manifest.xml with repo. Any ideas? Thanks! Gilad

    Read the article

  • How to crawl a file hosting website with Scrapy in Python?

    - by Veryel Hua
    Can anyone help me figure out how to crawl a file hosting website like filefactory.com? I don't want to download all the hosted files, just to index all the available files with Scrapy. I have read the tutorial and the docs on Scrapy's spider classes. If I only give the website's main page as the beginning URL, it won't crawl the whole site, because the crawling depends on links and the main page doesn't seem to point to any file pages. That's the problem I'm stuck on, and any help would be appreciated!
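
    A minimal CrawlSpider sketch of the idea, assuming a recent Scrapy (1.x-style imports; older releases used the scrapy.contrib paths). The start URL and the allow patterns are hypothetical and would need to match the real site's layout, e.g. by seeding start_urls with browse/listing pages if the front page links to none:

        from scrapy.spiders import CrawlSpider, Rule
        from scrapy.linkextractors import LinkExtractor

        class FileIndexSpider(CrawlSpider):
            name = "file_index"
            allowed_domains = ["filefactory.com"]
            # seed with listing pages, since the front page may not link to file pages
            start_urls = ["http://www.filefactory.com/"]

            rules = (
                # follow pages that look like listings or pagination (hypothetical pattern)
                Rule(LinkExtractor(allow=(r"/browse", r"/folder/"))),
                # index pages that look like individual file pages (hypothetical pattern)
                Rule(LinkExtractor(allow=(r"/file/",)), callback="parse_file"),
            )

            def parse_file(self, response):
                # record the file page without downloading the file itself
                yield {
                    "url": response.url,
                    "title": response.css("h1::text").extract_first(),
                }

    Run it with scrapy runspider, or drop it into a project and use scrapy crawl file_index; the crawl stays inside allowed_domains and only follows links the rules allow.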

    Read the article

  • How to create an Appointment in a Shared Calendar (SharePoint) with VBA (Macro)?

    - by Diogo K.
    I am trying to create an appointment from an Excel spreadsheet. I have all the information for the appointment (subject, body, start and end dates), and I can create an appointment, but only in my personal calendar in Outlook. How do I copy/move/create an appointment in a shared calendar on a SharePoint server? I've tried: Dim apOL As Object Dim objFolder As Folder Dim cro As String Set apOL = CreateObject("Outlook.Application") Set oItem = apOL.CreateItem(olApItem) Set MAPISession = apOL.Session ... cro = "stssync://sts/?ver=1.0&type=calendar&cmd=add-folder&base-url=(MY SP SERVER)&list-url=%2FLists%2FCronograma%20%20%2Fcalendar%2Easpx&guid=%7B02717CEF%2D404F%2D482F%2DA131%2D5C3C245CD268%7D&site-name=Testes&list-name=Cronograma%20-" ... Set objFolder = MAPISession.OpenSharedFolder(cro, Null,Null,Null) It gives me the error "Type Mismatch". My idea was to get objFolder as the SharePoint folder, then create a local appointment and move it there with Item.Move objFolder. Is that the correct approach?

    Read the article

  • ASP.NET web site running in IIS and hosting a WCF service fails to get connections on the TCP server

    - by Salil
    I am using a Silverlight client application together with an ASP.NET web site running in IIS and hosting a WCF service. The WCF service uses a library that starts a TCP server and initiates requests to the connected TCP clients whenever the Silverlight client makes WCF async requests. When I use this library in a local WPF application, the TCP server receives client connection requests and I can get info from these clients. But when I use the same library from the WCF service implementation inside the ASP.NET web site project (plus the Silverlight client), the server strangely does not receive any connection requests, i.e. when I create the TcpListener object and call Start, nothing happens (and no exception is thrown). My setup uses Ethernet for the Internet and Wi-Fi for the TCP clients. Is the WCF service getting confused because of this? Are there any special WCF settings I should add for TcpListener.Start to work?

    Read the article

  • Moving from dedicated to shared cPanel - any scripts to do all or some of the install tasks?

    - by mbbcat
    Hi, I have a few hundred phpLD sites to move. Each has its own cPanel (and the target may have shared cPanel), and I can do a full cPanel backup on the original server, but I don't have WHM on the current host. The backups are fairly easy to organize, but so far the installs mean picking through files and setting up databases, mail, etc. by hand. I am thinking there ought to be an easier, i.e. scripted, way to do the installs, or at least some parts; can anyone please suggest something? I would like to migrate the stats at the same time. Thanks, M

    Read the article

  • Could/Should I use static classes in ASP.NET/C# for shared data?

    - by death.au
    Here's the situation I have: I'm building an online system to be used by school groups. Only one school can log into the system at any one time, and from that school you'll get about 13 users. They then proceed into an educational application in which they have to co-operate to complete tasks and, from a code point of view, share variables all over the place. I was thinking that if I set up a static class with static properties that hold the variables that need to be shared, this could save me having to store/access the variables in/from a database, as long as the static variables are all properly initialized when the application starts and cleaned up at the end. Of course I would also have to put locks on the get and set methods to make the variables thread-safe. Something in the back of my mind is telling me this might be a terrible way of going about things, but I'm not sure exactly why, so if people could give me their thoughts for or against using a static class in this situation, I would be quite appreciative.

    Read the article

  • How do I set up a shared session between two users on my Ruby on Rails powered site?

    - by ben
    Hey guys, The website that I'm building includes a section where two users can interact. I think I know how to do most of it, except the actual session-sharing part. I'm using Ruby on Rails & JavaScript (jQuery), and I've got user login and session management all working okay. Would the best way to create a shared session be to have a SharedSession model, with an accompanying database table, with participant1ID, participant2ID etc.? Is there a better way? Thanks so much for reading!

    Read the article

  • Is it OK to have an array with 22k strings in PHP code on a shared web host?

    - by kuchikoo
    I'm new to writing code, so kindly bear with me if this is a very noobish question. A couple of days back I asked a question about some PHP code that matches the query entered by users on my website against an array stored within the PHP code and displays the matched queries. Here is the code I'm talking about. Now I've ended up with a rather large list (over 22k) of strings that have to be stored in the array. Is it OK to run it like this? I'm hosting the site on a shared HostGator package; will this cause my site to crash? I don't know too much about DBs, but can I somehow store this in MySQL instead of having it in the code?

    Read the article

  • NHibernate will insert but not update after moving to a shared hosting server running MySQL

    - by Andy LifeBrixx
    Hi, I have a site running MVC and NHibernate (not Fluent) using a standard session-per-request pattern in an HTTP module. It runs fine locally (also with MySQL), but after a move to a hosting provider no UPDATE statements are being issued. I can insert but not update, and no exceptions are raised. I have the 'show_sql' option switched on, which locally shows the UPDATE statements being issued, but on the server no UPDATE statements are logged. I don't think NHProf is an option for me, as I can only run ASP.NET apps on my shared server. Are there any other methods of diagnosing NHibernate issues like this? Has anyone had a similar issue? Cheers, A

    Read the article

  • Globe SSL with NGINX SSL certificate problem, please help

    - by PartySoft
    I have a big problem installing a certificate for nginx (the same happens with Apache, though). I have 3 files: __domain_com.crt, __domain_com.ca-bundle and ssl.key. I tried to concatenate them with cat __domain_com.crt __leechpack_com.ca-bundle > bundle.crt, but if I do it like this I get an error: [emerg]: SSL_CTX_use_certificate_chain_file("/etc/nginx/__leechpack_com.crt") failed (SSL: error:0906D066:PEM routines:PEM_read_bio:bad end line error:140DC009:SSL routines:SSL_CTX_use_certificate_chain_file:PEM lib) And that's because the delimiters of the certificates aren't separated: ZqTjb+WBJQ== -----END CERTIFICATE----------BEGIN CERTIFICATE----- MIIE6DCCA9CgAwIBAgIQdIYhlpUQySkmKUvMi/gpLDANBgkqhkiG9w0BAQUFADBv If I separate them with a newline between the certificates it will at least start, but I still get the same warning from Firefox: This Connection is Untrusted. You have asked Firefox to connect securely to domain.com, but we can't confirm that your connection is secure. The concatenation solution is the one given by Globe SSL and the NGINX site, but it doesn't work; I think the bundle is being ignored. http://customer.globessl.com/knowledgebase/55/Certificate-Installation--Nginx.html http://nginx.org/en/docs/http/configuring_https_servers.html#chains%20http://wiki.nginx.org/NginxHttpSslModule If I run openssl s_client -connect down.leechpack.com:443 I get: CONNECTED(00000003) depth=0 /OU=Domain Control Validated/OU=Provided by Globe Hosting, Inc./OU=Globe Standard Wildcard SSL/CN=*.domain.com verify error:num=20:unable to get local issuer certificate verify return:1 depth=0 /OU=Domain Control Validated/OU=Provided by Globe Hosting, Inc./OU=Globe Standard Wildcard SSL/CN=*.domain.com verify error:num=27:certificate not trusted verify return:1 depth=0 /OU=Domain Control Validated/OU=Provided by Globe Hosting, Inc./OU=Globe Standard Wildcard SSL/CN=*.domain.com verify error:num=21:unable to verify the first certificate verify return:1 --- Certificate chain 0 s:/OU=Domain Control Validated/OU=Provided by Globe Hosting, Inc./OU=Globe Standard Wildcard SSL/CN=*.domain.com i:/C=RO/O=GLOBE HOSTING CERTIFICATION AUTHORITY/CN=GLOBE SSL Domain Validated CA 1 s:/C=US/O=Globe Hosting, Inc./OU=GlobeSSL DV Certification Authority/CN=GlobeSSL CA i:/C=SE/O=AddTrust AB/OU=AddTrust External TTP Network/CN=AddTrust External CA Root --- Server certificate -----BEGIN CERTIFICATE----- MIIFQzCCBCugAwIBAgIQRnpCmtwX7z7GTla0QktE6DANBgkqhkiG9w0BAQUFADBl MQswCQYDVQQGEwJSTzEuMCwGA1UEChMlR0xPQkUgSE9TVElORyBDRVJUSUZJQ0FU SU9OIEFVVEhPUklUWTEmMCQGA1UEAxMdR0xPQkUgU1NMIERvbWFpbiBWYWxpZGF0 ZWQgQ0EwHhcNMTAwMjExMDAwMDAwWhcNMTEwMjExMjM1OTU5WjCBjTEhMB8GA1UE CxMYRG9tYWluIENvbnRyb2wgVmFsaWRhdGVkMSgwJgYDVQQLEx9Qcm92aWRlZCBi eSBHbG9iZSBIb3N0aW5nLCBJbmMuMSQwIgYDVQQLExtHbG9iZSBTdGFuZGFyZCBX aWxkY2FyZCBTU0wxGDAWBgNVBAMUDyoubGVlY2hwYWNrLmNvbTCCASIwDQYJKoZI hvcNAQEBBQADggEPADCCAQoCggEBAKX7jECMlYEtcvqVWQVUpXNxO/VaHELghqy/ Ml8dOfOXG29ZMZsKUMqS0jXEwd+Bdpm31lBxOALkj8o79hX0tspLMjgtCnreaker 49y62BcjfguXRFAaiseXTNbMer5lDWiHlf1E7uCoTTiczGqBNfl6qSJlpe4rYBtq XxBAiygaNba6Owghuh19+Uj8EICb2pxbJNFfNzU1D9InFdZSVqKHYBem4Cdrtxua W4+YONsfLnnfkRQ6LOLeYExHziTQhSavSv9XaCl9Zqzm5/eWbQqLGRpSJoEPY/0T GqnmeMIq5M35SWZgOVV10j3pOCS8o0zpp7hMJd2R/HwVaPCLjukCAwEAAaOCAcQw ggHAMB8GA1UdIwQYMBaAFB9UlnKtPUDnlln3STFTCWb5DWtyMB0GA1UdDgQWBBT0 8rPIMr7JDa2Xs5he5VXAvMWArjAOBgNVHQ8BAf8EBAMCBaAwDAYDVR0TAQH/BAIw ADAdBgNVHSUEFjAUBggrBgEFBQcDAQYIKwYBBQUHAwIwVQYDVR0gBE4wTDBKBgsr BgEEAbIxAQICGzA7MDkGCCsGAQUFBwIBFi1odHRwOi8vd3d3Lmdsb2Jlc3NsLmNv bS9kb2NzL0dsb2JlU1NMX0NQUy5wZGYwRgYDVR0fBD8wPTA7oDmgN4Y1aHR0cDov
L2NybC5nbG9iZXNzbC5jb20vR0xPQkVTU0xEb21haW5WYWxpZGF0ZWRDQS5jcmww dwYIKwYBBQUHAQEEazBpMEEGCCsGAQUFBzAChjVodHRwOi8vY3J0Lmdsb2Jlc3Ns LmNvbS9HTE9CRVNTTERvbWFpblZhbGlkYXRlZENBLmNydDAkBggrBgEFBQcwAYYY aHR0cDovL29jc3AuZ2xvYmVzc2wuY29tMCkGA1UdEQQiMCCCDyoubGVlY2hwYWNr LmNvbYINbGVlY2hwYWNrLmNvbTANBgkqhkiG9w0BAQUFAAOCAQEAB2Y7vQsq065K s+/n6nJ8ZjOKbRSPEiSuFO+P7ovlfq9OLaWRHUtJX0sLntnWY1T9hVPvS5xz/Ffl w9B8g/EVvvfMyOw/5vIyvHq722fAAC1lWU1rV3ww0ng5bgvD20AgOlIaYBvRq8EI 5Dxo2og2T1UjDN44GOSWsw5jetvVQ+SPeNPQLWZJS9pNCzFQ/3QDWNPOvHqEeRcz WkOTCqbOSZYvoSPvZ3APh+1W6nqiyoku/FCv9otSCtXPKtyVa23hBQ+iuxqIM4/R gncnUKASi6KQrWMQiAI5UDCtq1c09uzjw+JaEzAznxEgqftTOmXAJSQGqZGd6HpD ZqTjb+WBJQ== -----END CERTIFICATE----- subject=/OU=Domain Control Validated/OU=Provided by Globe Hosting, Inc./OU=Globe Standard Wildcard SSL/CN=*.domain.com issuer=/C=RO/O=GLOBE HOSTING CERTIFICATION AUTHORITY/CN=GLOBE SSL Domain Validated CA --- No client certificate CA names sent --- SSL handshake has read 3313 bytes and written 343 bytes --- New, TLSv1/SSLv3, Cipher is DHE-RSA-AES256-SHA Server public key is 2048 bit Secure Renegotiation IS supported Compression: NONE Expansion: NONE SSL-Session: Protocol : TLSv1 Cipher : DHE-RSA-AES256-SHA Session-ID: 5F9C8DC277A372E28A4684BAE5B311533AD30E251369D144A13DECA3078E067F Session-ID-ctx: Master-Key: 9B531A75347E6E7D19D95365C1208F2ED37E4004AA8F71FC614A18937BEE2ED9F82D58925E0B3931492AD3D2AA6EFD3B Key-Arg : None Start Time: 1288618211 Timeout : 300 (sec) Verify return code: 21 (unable to verify the first certificate) ---
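
    A minimal sketch of the kind of concatenation that avoids the fused END/BEGIN markers, in Python for illustration (the file names are the ones from the question; everything else is an assumption):

        # join the site certificate and the CA bundle, guaranteeing a newline
        # between PEM blocks so "-----END CERTIFICATE-----" and
        # "-----BEGIN CERTIFICATE-----" never land on the same line
        parts = ["__domain_com.crt", "__domain_com.ca-bundle"]

        with open("bundle.crt", "w") as bundle:
            for name in parts:
                with open(name) as pem:
                    bundle.write(pem.read().strip() + "\n")

    The resulting bundle.crt is what ssl_certificate would point at, with ssl.key as the ssl_certificate_key.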

    Read the article

  • Is it safe to use the email service provided by web hosting for business use?

    - by Kronass
    I work in a company that uses its web hosting as its email provider. They use it for normal sending and receiving and basic contacts management, in customer support, sales and marketing. I would prefer to use dedicated or professional email hosting for this type of work instead. So, for business use, is it safe to use the email hosting that is included with the hosting package, or should we go with a professional email provider?

    Read the article

  • eAccelerator settings for PHP/CentOS/Apache

    - by bobbyh
    I have eAccelerator installed on a server running WordPress with PHP/Apache on CentOS. I am occasionally getting persistent "white pages", which presumably are PHP fatal errors (although these errors don't appear in my error_log). These "white pages" are sprinkled here and there throughout the site. They persist until I go to my eAccelerator control.php page and clear/clean/purge my caches, which suggests to me that I've configured eAccelerator improperly. Here are my current /etc/php.ini settings: memory_limit = 128M; eaccelerator.shm_size="64", where shm_size is "the amount of shared memory eAccelerator should allocate to cache PHP scripts" (see http://eaccelerator.net/wiki/Settings) eaccelerator.shm_max="0", where shm_max is "the maximum size a user can put in shared memory with functions like eaccelerator_put ... The default value is "0" which disables the limit" eaccelerator.shm_ttl="0" - "When eAccelerator doesn't have enough free shared memory to cache a new script it will remove all scripts from shared memory cache that haven't been accessed in at least shm_ttl seconds. By default this value is set to "0" which means that eAccelerator won't try to remove any old scripts from shared memory." eaccelerator.shm_prune_period="0" - "When eAccelerator doesn't have enough free shared memory to cache a script it tries to remove old scripts if the previous try was made more then "shm_prune_period" seconds ago. Default value is "0" which means that eAccelerator won't try to remove any old script from shared memory." eaccelerator.keys = "shm_only" - "These settings control the places eAccelerator may cache user content. ... 'shm_only' cache[s] data in shared memory" On my phpinfo page, it says: memory_limit 128M Version 0.9.5.3 and Caching Enabled true On my eAccelerator control.php page, it says 64 MB of total RAM available Memory usage 77.70% (49.73MB/ 64.00MB) 27.6 MB is used by cached scripts in the PHP opcode cache (I added up the file sizes myself) 22.1 MB is used by the cache keys, which is populated by the WordPress object cache. My questions are: Is it true that there is only 36.4 MB of room in the eAccelerator cache for total "cache keys" (64 MB of total RAM minus whatever is taken by cached scripts, which is 27.6 MB at the moment)? What happens if my app tries to write more than 22.1 MB of cache keys to the eAccelerator memory cache? Does this cause eAccelerator to go crazy, like I've seen? If I change eaccelerator.shm_max to be equal to (say) 32 MB, would that avoid this problem? Do I also need to change shm_ttl and shm_prune_period to make eAccelerator respect the MB limit set by shm_max? Thanks! :-)
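
    A back-of-the-envelope restatement of the figures above (values taken straight from the question), just to make the accounting behind the first question concrete; whether shm_max actually caps the user keys is exactly what is being asked, so this only restates the arithmetic:

        # figures quoted from the control.php page, in MB
        shm_total = 64.0    # eaccelerator.shm_size
        scripts   = 27.6    # opcode-cached PHP scripts
        keys      = 22.1    # user cache keys (the WordPress object cache)

        room_for_keys = shm_total - scripts
        print(room_for_keys)          # 36.4 MB left for keys while scripts use 27.6 MB
        print(room_for_keys - keys)   # 14.3 MB of headroom at the moment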

    Read the article

  • Command history across multiple PuTTY sessions in SunOS 5.10

    - by foampile
    I have multiple PuTTY sessions open to my SunOS 5.10 server, and I am using ksh. SOMETIMES the command history is shared among the different sessions and SOMETIMES it is not, and I cannot figure out what determines whether it is shared or not. By shared I mean that a command run in one session shows up as a previous command in another session. I would prefer it not to be shared; is there a config setting for that? Thanks

    Read the article

  • Syntax error on running a batch file to replace files

    - by Ralph
    I have a batch file intended to replace all instances of tracking.js within a folder and its subfolders. FOR /R "D:\Virtual Servers (Testing)\CourseWare Master\Shared\Jenison\Version1.2\" %%I IN (tracking.js*) DO COPY /Y "D:\Virtual Servers (Testing)\CourseWare Master\Shared\Jenison\tracking.js" %%~fI When this is run I get the following syntax error: C:COPY /Y "D:\Virtual Servers (Testing)\CourseWare Master\Shared\Jenison\track ing.js" D:\Virtual Servers (Testing)\CourseWare Master\Shared\Jenison\Version1.2 \SHAPERS_COMBINED\Smarter Communications\WhatisInfluencing\script\Tracking.js The syntax of the command is incorrect. Ideas please?

    Read the article

  • Best cPanel Reseller Company

    - by noorahmad
    I have a hosting company. I am using resellerpanel.com to buy domains and hosting for my clients, but I want to switch to cPanel. ResellerPanel offers plans from $17.50/month and hostgator.com offers $19.96/month. I don't care about the money; all I care about is service, live support and uptime. Note: some of my clients request email only, not hosting. Can I limit access to the email manager or to hosting only with cPanel? Thanks.

    Read the article

  • ack misses results (vs. grep)

    - by techpeace
    I'm sure I'm misunderstanding something about ack's file/directory ignore defaults, but perhaps somebody could shed some light on this for me: mbuck$ grep logout -R app/views/ Binary file app/views/shared/._header.html.erb.bak.swp matches Binary file app/views/shared/._header.html.erb.swp matches app/views/shared/_header.html.erb.bak: <%= link_to logout_text, logout_path, { :title => logout_text, :class => 'login-menuitem' } %> mbuck$ ack logout app/views/ mbuck$ Whereas... mbuck$ ack -u logout app/views/ Binary file app/views/shared/._header.html.erb.bak.swp matches Binary file app/views/shared/._header.html.erb.swp matches app/views/shared/_header.html.erb.bak 98:<%= link_to logout_text, logout_path, { :title => logout_text, :class => 'login-menuitem' } %> Simply calling ack without options can't find the result within a .bak file, but calling with the --unrestricted option can find the result. As far as I can tell, though, ack does not ignore .bak files by default.

    Read the article

  • Installing Django on a Shared Server: No module named MySQLdb?

    - by Mark
    I'm getting this error Traceback (most recent call last): File "/home/<username>/flup/server/fcgi_base.py", line 558, in run File "/home/<username>/flup/server/fcgi_base.py", line 1116, in handler File "/home/<username>/python/django/django/core/handlers/wsgi.py", line 241, in __call__ response = self.get_response(request) File "/home/<username>/python/django/django/core/handlers/base.py", line 73, in get_response response = middleware_method(request) File "/home/<username>/python/django/django/contrib/sessions/middleware.py", line 10, in process_request engine = import_module(settings.SESSION_ENGINE) File "/home/<username>/python/django/django/utils/importlib.py", line 35, in import_module __import__(name) File "/home/<username>/python/django/django/contrib/sessions/backends/db.py", line 2, in ? from django.contrib.sessions.models import Session File "/home/<username>/python/django/django/contrib/sessions/models.py", line 4, in ? from django.db import models File "/home/<username>/python/django/django/db/__init__.py", line 41, in ? backend = load_backend(settings.DATABASE_ENGINE) File "/home/<username>/python/django/django/db/__init__.py", line 17, in load_backend return import_module('.base', 'django.db.backends.%s' % backend_name) File "/home/<username>/python/django/django/utils/importlib.py", line 35, in import_module __import__(name) File "/home/<username>/python/django/django/db/backends/mysql/base.py", line 13, in ? raise ImproperlyConfigured("Error loading MySQLdb module: %s" % e) ImproperlyConfigured: Error loading MySQLdb module: No module named MySQLdb when I try to run this script on my shared server #!/usr/bin/python import sys, os sys.path.insert(0, "/home/<username>/python/django") sys.path.insert(0, "/home/<username>/python/django/www") # projects directory os.chdir("/home/<username>/python/django/www/<project>") os.environ['DJANGO_SETTINGS_MODULE'] = "<project>.settings" from django.core.servers.fastcgi import runfastcgi runfastcgi(method="threaded", daemonize="false") But, my web host just installed MySQLdb for me a few hours ago. When I run python from the shell I can import MySQLdb just fine. Why would this script report that it can't find it?
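
    A rough sketch of one way to chase this down from inside the FastCGI dispatch script itself, assuming MySQLdb was installed into some user-local site-packages directory that the FastCGI process does not have on its path (the extra path below is hypothetical; compare sys.path between the shell and the script to find the real one):

        #!/usr/bin/python
        import sys, os

        # hypothetical location of the user-local MySQLdb install
        sys.path.insert(0, "/home/<username>/python/lib/python2.4/site-packages")
        sys.path.insert(0, "/home/<username>/python/django")
        sys.path.insert(0, "/home/<username>/python/django/www")  # projects directory

        try:
            import MySQLdb  # fail fast with a readable message instead of a deep traceback
        except ImportError:
            raise SystemExit("MySQLdb still not importable; sys.path=%r" % (sys.path,))

        # ... then continue with the os.chdir / DJANGO_SETTINGS_MODULE / runfastcgi
        # lines from the script above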

    Read the article

  • How to avoid visual artifacts when hosting WPF user controls within a WinForms MDI app?

    - by jpierson
    When hosting WPF user controls within a WinForms MDI app there is a drawing issue when you have multiple forms that overlap each other, which causes very distinct visual artifacts. These artifacts are mostly visible after dragging one child form over another one that also hosts WPF content, or by allowing the edges of the child form to be clipped by the main MDI parent when dragging it around. After the drag and drop of the child form is completed the artifacts generally stay around, but I've found that setting focus to a different application's window and then refocusing back onto my application window causes it to be redrawn, and all is good again until the child forms are moved once more. Please see the image below, which demonstrates the problem. Since many at Microsoft insist that WinForms MDI is already a complete solution for MDI even when dealing with WPF, I find it hard to believe any of them have ever actually tried creating a mostly-WPF app that uses WinForms MDI; otherwise it would be hard to recommend while keeping a straight face. I'm hoping to either come up with proof that this solution truly is not acceptable, or possibly find a way to overcome this and a few other specific issues.

    Read the article

  • What is the correct way to synchronize a shared, static object in Java?

    - by johnrock
    This is a question concerning the proper way to synchronize a shared object in Java. One caveat is that the object I want to share must be accessed from static methods. My question is: if I synchronize on a static field, does that lock the class the field belongs to, similar to the way a synchronized static method would? Or will this only lock the field itself? In my specific example I am asking: will calling PayloadService.getPayload() or PayloadService.setPayload() lock PayloadService.payload? Or will it lock the entire PayloadService class? public class PayloadService extends Service { private static PayloadDTO payload = new PayloadDTO(); public static void setPayload(PayloadDTO payload){ synchronized(PayloadService.payload){ PayloadService.payload = payload; } } public static PayloadDTO getPayload() { synchronized(PayloadService.payload){ return PayloadService.payload ; } } ... Is this a correct/acceptable approach? In my example the PayloadService is a separate thread, updating the payload object at regular intervals; other threads need to call PayloadService.getPayload() at random intervals to get the latest data, and I need to make sure that they don't block the PayloadService from carrying out its timer task.

    Read the article

  • Why does the slim reader/writer exclusive lock outperform the shared one?

    - by Jichao
    I have tested the performance of the slim reader/writer lock under Windows 7 using the code from Windows via C/C++. The result surprised me: the exclusive lock outperforms the shared one. Here are the code and the result. unsigned int __stdcall slim_reader_writer_exclusive(void *arg) { //SRWLOCK srwLock; //InitializeSRWLock(&srwLock); for (int i = 0; i < 1000000; ++i) { AcquireSRWLockExclusive(&srwLock); g_value = 0; ReleaseSRWLockExclusive(&srwLock); } _endthreadex(0); return 0; } unsigned int __stdcall slim_reader_writer_shared(void *arg) { int b; for (int i = 0; i < 1000000; ++i) { AcquireSRWLockShared(&srwLock); //b = g_value; g_value = 0; ReleaseSRWLockShared(&srwLock); } _endthreadex(0); return 0; } g_value is a global volatile int variable. Could you kindly explain why this could happen?

    Read the article

  • Shared Git repo syncing to SVN causes git svn rebase to pollute the repo with a lot of no-op merge problems

    - by John K
    This wasn't so bad at the beginning, but now I have hundreds of no-op merge problems (solved by git rebase --skip). I have set up a shared Git repo for my group because it is easier to deal with, but the company uses SVN, so I have to keep SVN in sync with Git. It worked like a dream at first, but after weeks of doing this Git is giving me a lot of the following errors. Applying: * making all config actions work Using index info to reconstruct a base tree... Falling back to patching base and 3-way merge... Auto-merging app/controllers/vulnerabilities_controller.rb CONFLICT (content): Merge conflict in app/controllers/vulnerabilities_controller.rb Auto-merging public/javascripts/network_analysis_vulnerability_config.js CONFLICT (content): Merge conflict in public/javascripts/network_analysis_vulnerability_config.js Failed to merge in the changes. Patch failed at 0046 * making all config actions work My workflow: git co master git pull origin git svn rebase ... deal with no-op merge problems ... git svn dcommit git pull origin git push origin The problem is that what is in SVN is correct, so I use git rebase --skip, but I have to do that hundreds of times before I can dcommit. How do I clear these merge problems permanently?

    Read the article

  • Why is a function's length information from another shared lib stored in the ELF?

    - by minastaros
    Our project (C++, Linux, gcc, PowerPC) consists of several shared libraries. When releasing a new version of the package, only those libs should change whose source code was actually affected. By "change" I mean absolute binary identity (the checksum over the file is compared; different checksum means different version, according to the policy). (I should mention that the whole project is always built at once, no matter whether any code has changed per library or not.) Usually this can be achieved by hiding private parts of the included header files and not changing the public ones. However, there was a case where just a delete was added to the destructor of a class TableManager (in the TableManager.cpp file!) of library libTableManager.so, and yet the binary/checksum of library libB.so (which uses class TableManager) changed. TableManager.h: class TableManager { public: TableManager(); ~TableManager(); private: int* myPtr; }; TableManager.cpp: TableManager::~TableManager() { doSomeCleanup(); delete myPtr; // this delete has been added } By inspecting libB.so with readelf --all libB.so and looking at the .dynsym section, it turned out that the lengths of all functions, even the dynamically used ones from other libraries, are stored in libB! It looks like this (the length is the 668 in the 3rd column): 527: 00000000 668 FUNC GLOBAL DEFAULT UND _ZN12TableManagerD1Ev So my questions are: Why is the length of a function actually stored in the client lib? Wouldn't a start address be sufficient? Can this be suppressed somehow when compiling/linking libB.so (a kind of "stripping")? We would really like to reduce this degree of dependency...
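
    For auditing this kind of thing, a rough sketch (Python, not part of the question) that pulls the (size, name) pairs of undefined function symbols out of a library via readelf; the column layout is an assumption and may differ between binutils versions:

        import subprocess

        def undefined_func_sizes(lib_path):
            # same information as `readelf --all`, narrowed to the dynamic symbol table
            out = subprocess.check_output(["readelf", "-W", "--dyn-syms", lib_path]).decode()
            rows = []
            for line in out.splitlines():
                cols = line.split()
                # expected layout: Num: Value Size Type Bind Vis Ndx Name
                if len(cols) >= 8 and cols[3] == "FUNC" and cols[6] == "UND":
                    rows.append((int(cols[2], 0), cols[7]))
            return rows

        for size, name in sorted(undefined_func_sizes("libB.so"), reverse=True):
            print(size, name)

    Comparing that list between two builds of libB.so shows exactly which imported symbol sizes moved.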

    Read the article
