Search Results


  • tomcat 'document base does not exist' error (but it does)

    - by SpliFF
    Gentoo / Tomcat 6.

        INFO: Starting Servlet Engine: Apache Tomcat/6.0.20
        Sep 8, 2009 10:34:51 AM org.apache.catalina.core.StandardContext resourcesStart
        SEVERE: Error starting static Resources
        java.lang.IllegalArgumentException: Document base /www/rivervalley/site does not exist or is not a readable directory
            at org.apache.naming.resources.FileDirContext.setDocBase(Unknown Source)
            at org.apache.catalina.core.StandardContext.resourcesStart(Unknown Source)
            at org.apache.catalina.core.StandardContext.start(Unknown Source)
            at org.apache.catalina.core.ContainerBase.start(Unknown Source)
            at org.apache.catalina.core.StandardHost.start(Unknown Source)
            at org.apache.catalina.core.ContainerBase.start(Unknown Source)

    Oh really? Then how come:

        ls -la /www/rivervalley/site/
        drwxr-xr-x 12 tomcat tomcat 4096 Sep  8 09:56 .
        drwxr-xr-x 16 tomcat tomcat 4096 Jun 29 16:22 ..
        -rwxr--r--  1 tomcat tomcat  520 Jul  3 02:15 Application.cfm
        drwxr-xr-x  2 tomcat tomcat 4096 Sep  8 09:56 WEB-INF

    and ...

        tomcat 18916 1.0 5.5 1159188 167892 ? Ssl 10:37 0:11 /opt/sun-jdk-1.5.0.18/bin/java -Djava.util.loggin

    Hell, ANY account can read that directory, so the claim is utter nonsense. What else can cause this? Here's my relevant server.xml section:

        <Host name="rivervalley" appBase="webapps" unpackWARs="false" autoDeploy="false"
              xmlValidation="false" xmlNamespaceAware="false">
          <Context path="" docBase="/www/rivervalley/site" />
        </Host>
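
    One thing worth ruling out before blaming Tomcat: that ls -la only proves the leaf directory is readable. The tomcat user also needs the execute (search) bit on every ancestor directory (/www, /www/rivervalley), which the listing above doesn't show. A minimal check, assuming the stock util-linux namei tool and sudo are available:

        # walk the whole path and show each component's permissions
        namei -l /www/rivervalley/site
        # reproduce exactly what the tomcat user can see (runs ls directly, no shell needed)
        sudo -u tomcat ls /www/rivervalley/site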

  • How to find out what is causing a slow down of the application on this server?

    - by Jan P.
    This is not the typical serverfault question, but I'm out of ideas and don't know where else to go. If there are better places to ask this, just point me there in the comments. Thanks.

    Situation: We have this web application that uses Zend Framework, so it runs in PHP on an Apache web server. We use MySQL for data storage and memcached for object caching. The application has a very unique usage and load pattern. It is a mobile web application where every full hour a cronjob looks through the database for users that have some information waiting or an action to do, and sends this information to an (external) notification server that pushes these notifications to them. After the users get these notifications, they go to the app and use it, mostly for a very short time. An hour later, the same thing happens.

    Problem: In the last few weeks usage of the application really started to grow. In the last few days we encountered very high load and a doubling of application response times during and after the sending of these notifications (so basically every hour). The server doesn't crash or stop responding to requests, it just gets slower and slower and often takes 20 minutes to recover - until the same thing starts again at the full hour. We have extensive monitoring in place (New Relic, collectd) but I can't figure out what's wrong; I can't find the bottleneck. That's where you come in: can you help me figure out what's wrong and maybe how to fix it?

    Additional information: The server is a 16 core Intel Xeon (8 cores with hyperthreading, I think) with 12GB RAM, running Ubuntu 10.04 (Linux 3.2.4-20120307 x86_64). Apache is 2.2.x and PHP is version 5.3.2-1ubuntu4.11. If any configuration information would help analyze the problem, just comment and I will add it.

    Graphs: phpinfo(), APC status, memcache status, collectd (processes, CPU, Apache, load, MySQL, vmem, disk), New Relic (application performance, server overview, processes, network, disks). (Sorry the graphs are GIFs and not the same time period, but I think the most important info is in there.)
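
    Since the spike is predictable (top of the hour), one low-effort trick is to sample system and MySQL state right when it happens and diff against a quiet period. A sketch, assuming the sysstat package is installed and a root credentials file exists at /root/.my.cnf (both assumptions):

        # crontab: sample the first minutes of every hour
        0 * * * * pidstat -u 60 5 >> /var/log/hourly-spike.log 2>&1
        0-5 * * * * mysqladmin --defaults-extra-file=/root/.my.cnf processlist >> /var/log/hourly-spike.log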

  • ExpressCache not working after Windows 8 reinstall on Samsung Series 7 Gamer

    - by Morven
    I have a Samsung Series 7 Gamer laptop which came with Windows 8. After doing a reinstall of Windows, the ExpressCache software is no longer caching. Running "eccmd -info" shows me that the software is present and it has the mSATA drive partition configured. However, it's not actually caching anything. These are the results after having the system booted for days:

        C:\windows\system32> eccmd -info
        ExpressCache Command Version 1.0.94.0
        Copyright (c) 2010-2012 Condusiv Technologies.
        Date Time: 11/3/2013 12:26:20:263 (JAMETHIEL #36)

        EC Cache Info
        ==================================================
        Mounted                : Yes
        Partition Size         : 7.46 GB
        Reserved Size          : 3.00 MB
        Volume Size            : 7.46 GB
        Total Used Size        : 86.50 MB
        Total Free Space       : 7.38 GB
        Used Data Size         : 16.63 MB
        Used Data Size on Disk : 84.38 MB

        Tiered Cache Stats
        ==================================================
        Memory in use          : 32.00 MB
        Blocks in use          : 136
        Read Percent           : 0.02%

        Cache Stats
        ==================================================
        Cache Volume Drive Number : 1
        Total Read Count          : 97242
        Total Read Size           : 4.13 GB
        Total Cache Read Count    : 0
        Total Cache Read Size     : 595.50 KB
        Total Write Count         : 161546
        Total Write Size          : 5.89 GB
        Total Cache Write Count   : 0
        Total Cache Write Size    : 0 Bytes
        Cache Read Percent        : 0.01%
        Cache Write Percent       : 0.00%

    As you can see on the last two lines, the cache read and write percentages are nigh on zero. Anyone know where to look next? The only guides I can find deal with ExpressCache not being present or not having a configured drive.

  • heavy load on mysql

    - by payal
    I have a dedicated server with a very good configuration (16 GB RAM, etc.), but I am facing heavy load from MySQL. I am running a music website, however only one database is running and only 5-10 pages. When I click on WHM's "Show Processlist" it shows only 2-3 processes. WHM load is always less than one, but when I click on WHM's load page it shows 20% CPU usage by MySQL, and after some time it starts saying "can not connect to mysql. mysql server has gone away":

        1691 (Trace) (Kill) mysql 0 19.2 2.7 /usr/sbin/mysqld --basedir=/ --datadir=/var/lib/mysql --user=mysql --log-error=/var/lib/mysql/server.xyz.com.err --pid-file=/var/lib/mysql/server.xyz.com.pid

    I have tested static pages - they come blazing fast - but all dynamic pages which use MySQL come damn slow; they take years to open. My my.cnf file is:

        [mysqld]
        key_buffer = 1536M
        max_allowed_packet = 1M
        max_connections = 250
        max_user_connections = 15
        wait_timeout = 40
        connect_timeout = 10
        table_cache = 512
        sort_buffer_size = 2M
        read_buffer_size = 2M
        read_rnd_buffer_size = 8M
        myisam_sort_buffer_size = 64M
        thread_cache_size = 8
        query_cache_size = 32M
        server-id = 14
        old-passwords = 1

        [mysqldump]
        quick
        max_allowed_packet = 16M

        [mysql]
        no-auto-rehash

        [myisamchk]
        key_buffer = 256M
        sort_buffer_size = 256M
        read_buffer = 2M
        write_buffer = 2M

        [mysqlhotcopy]
        interactive-timeout

    I have checked the error log file; it says nothing. I have also increased max connections to 1000, but the same problem is still there. If I disconnect that one database, just by changing the name of the database, I can see that within half an hour the load of the server and MySQL goes down to negligible. I have tested everything. If there are some queries which can cause heavy load on a server, can you please list which types of queries those are? Then again, for 5-10 pages it should never cause that much heavy load. I have seen a server with 500 websites that was working just fine.
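
    With only a handful of pages in play, the slow query log is usually the fastest way to catch the culprit queries. A sketch for a MySQL 5.0/5.1-era server, which the my.cnf style above suggests (an assumption; the directive was renamed to slow_query_log in later versions):

        [mysqld]
        log-slow-queries = /var/lib/mysql/slow.log
        long_query_time = 2
        log-queries-not-using-indexes

    then summarize the log with:

        mysqldumpslow -s t /var/lib/mysql/slow.log | head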


  • Why does my Ldirectord check my real servers multiple times every interval?

    - by garconcn
    I have a Ldirectord server and two real servers. My ldirectord used to check the request page on each real server once every interval, but now I found that it checks four times. I have monitored the log on both real servers; they have the same problem. Here is my ldirectord configuration:

        checktimeout=10
        checkinterval=5
        autoreload=yes
        logfile="/var/log/ldirectord.log"
        quiescent=no
        virtual=192.168.1.100:80
            fallback=127.0.0.1:80
            real=192.168.1.10:80 gate
            real=192.168.1.20:80 gate
            service=http
            request="lb.html"
            receive="still alive"
            scheduler=sh
            persistent=60
            protocol=tcp
            checktype=negotiate

    Ldirectord should connect to each real server once every 5 seconds (checkinterval) and request 192.168.1.10:80/lb.html (real/request). The access log on a real server:

        192.168.1.100 - - [13/Jun/2012:10:36:44 -0700] "GET /lb.html HTTP/1.1" 200 12 "-" "libwww-perl/5.805"
        192.168.1.100 - - [13/Jun/2012:10:36:44 -0700] "GET /lb.html HTTP/1.1" 200 12 "-" "libwww-perl/5.805"
        192.168.1.100 - - [13/Jun/2012:10:36:44 -0700] "GET /lb.html HTTP/1.1" 200 12 "-" "libwww-perl/5.805"
        192.168.1.100 - - [13/Jun/2012:10:36:44 -0700] "GET /lb.html HTTP/1.1" 200 12 "-" "libwww-perl/5.805"
        192.168.1.100 - - [13/Jun/2012:10:36:49 -0700] "GET /lb.html HTTP/1.1" 200 12 "-" "libwww-perl/5.805"
        192.168.1.100 - - [13/Jun/2012:10:36:49 -0700] "GET /lb.html HTTP/1.1" 200 12 "-" "libwww-perl/5.805"
        192.168.1.100 - - [13/Jun/2012:10:36:49 -0700] "GET /lb.html HTTP/1.1" 200 12 "-" "libwww-perl/5.805"
        192.168.1.100 - - [13/Jun/2012:10:36:49 -0700] "GET /lb.html HTTP/1.1" 200 12 "-" "libwww-perl/5.805"
        192.168.1.100 - - [13/Jun/2012:10:36:54 -0700] "GET /lb.html HTTP/1.1" 200 12 "-" "libwww-perl/5.805"
        192.168.1.100 - - [13/Jun/2012:10:36:54 -0700] "GET /lb.html HTTP/1.1" 200 12 "-" "libwww-perl/5.805"
        192.168.1.100 - - [13/Jun/2012:10:36:54 -0700] "GET /lb.html HTTP/1.1" 200 12 "-" "libwww-perl/5.805"
        192.168.1.100 - - [13/Jun/2012:10:36:54 -0700] "GET /lb.html HTTP/1.1" 200 12 "-" "libwww-perl/5.805"
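
    For what it's worth, the duplication is easy to quantify straight from the access log. A one-liner sketch, assuming the combined log format shown above (field 4 is the timestamp):

        # count health-check hits per one-second timestamp bucket
        awk '{print $4}' access_log | uniq -c | tail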

  • Why does Wireshark not recognize this HTTP response?

    - by Alois Mahdal
    I have a trivial CGI script that outputs simple text content. It's written in Perl, uses the CGI module, and specifies only the most basic headers:

        print $q->header(
            -type           => 'text/plain',
            -Content_length => $length,
        );
        print $stuff;

    There's no apparent issue with functionality, but I'm confused about the fact that Wireshark does not recognize the HTTP response as HTTP - it's marked as TCP. Here are the request and response:

        GET /cgi-bin/memfile/memfile.pl?mbytes=1 HTTP/1.1
        Host: 10.6.130.38
        User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:11.0) Gecko/20100101 Firefox/11.0
        Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
        Accept-Language: cs,en-us;q=0.7,en;q=0.3
        Accept-Encoding: gzip, deflate
        Connection: keep-alive

        HTTP/1.1 200 OK
        Date: Thu, 05 Apr 2012 18:52:23 GMT
        Server: Apache/2.2.15 (Win32) mod_ssl/2.2.15 OpenSSL/0.9.8m
        Content-length: 1048616
        Keep-Alive: timeout=5, max=100
        Connection: Keep-Alive
        Content-Type: text/plain; charset=ISO-8859-1

        XXXXXXXX...

    And here is the packet overview (the full packet is here on pastebin):

        No. Time     Source      srcp Destination dstp  Protocol Info                                tcp.stream abstime
        5   0.112749 10.6.130.38 80   10.6.130.53 48072 TCP      [TCP segment of a reassembled PDU]  0          20:52:23.228063

        Frame 5: 1514 bytes on wire (12112 bits), 1514 bytes captured (12112 bits)
        Ethernet II, Src: Dell_97:29:ac (00:1e:4f:97:29:ac), Dst: Dell_3b:fe:70 (00:24:e8:3b:fe:70)
        Internet Protocol Version 4, Src: 10.6.130.38 (10.6.130.38), Dst: 10.6.130.53 (10.6.130.53)
        Transmission Control Protocol, Src Port: http (80), Dst Port: 48072 (48072), Seq: 1, Ack: 330, Len: 1460

    Now, what I see in Wireshark: there's the usual TCP handshake, then the GET request shown as HTTP with a preview, then the next packet contains the response, but it is not marked as an HTTP response - just a generic "[TCP segment of a reassembled PDU]" - and is not caught by the "http.response" filter. Can somebody explain why Wireshark does not recognize it? Is there something wrong with the response?
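
    One plausible reading (hedged, since it depends on the capture): with a ~1 MB Content-Length the response spans hundreds of TCP segments, and Wireshark only dissects the frame that completes the reassembled message as HTTP; every earlier segment shows as plain TCP. A quick way to test that with tshark (flag spellings assume a reasonably recent build, where -Y replaced the older -R):

        # reassemble TCP streams, then look for the completed HTTP response frame
        tshark -r capture.pcap -o tcp.desegment_tcp_streams:TRUE -Y "http.response"

    In the GUI, the equivalent preference is Edit -> Preferences -> Protocols -> TCP -> "Allow subdissector to reassemble TCP streams".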

  • OS X superuser folders automatically created; per-user launchd process appears to kill 501

    - by Ric Pen
    New Apple laptop, OS X 10.8.2. I have used OS X, but many years previously, and I am not familiar with the subtleties of or changes in com.apple.launchd.peruser.x. I have previously (and in retrospect, foolishly) made changes to these rapidly spawned new per-user accounts (my initial reaction was that if ipfw was disabled, then I might well be under hacker attack, which I have dealt with years ago), but I believe I was wrong, and the results of my efforts at preserving the system's integrity have in fact been destructive, overreactive, and have resulted in much work to restore.

    My understanding from other posts is that superuser protocols have changed quite dramatically since I bought the first developer version of OS X many years ago. I haven't developed on Apple much since then, with the exception of WebObjects (IMO much underrated at that time, and more user friendly than ASP prior to .NET, I vaguely recall). The creation of these apparently nasty per-user folders appears to confound the 501 process, which logs an inability to find the firewall (ipfw).

    Can someone help me with this? I am concerned that either the system is improperly configured or an application was improperly installed (although there is little here beyond Apple's SDK, which I find quite accommodating and intuitive). Still, I am a novice, only sporadically develop at this time, and would really just like to see this system running happily. Please offer assistance, in the form of potential info sources, or if you have had a similar experience, then perhaps scripts to suss out this issue. I do not wish to damage the system, but Apple's Developer Connection and discussion threads do not appear to have dealt with this particular issue recently... although I may well have missed something you have not - please apprise. Any assistance on this issue is very much appreciated - by an old guy who wants to do some things which were fun about 20 years ago.

  • OS X can connect to Windows machine, but can't access shared folders

    - by Bonnie
    I can create new folders on my Windows XP machine and set them to "shared". On my Mac, I pick Finder → Go → Connect to Server → smb://192.168.1.4 → Connect → Name / Password. It even shows me all the names of the newly created shared folders on my PC, but when I try to actually connect to any of them I get:

        connection failed, there was an error connecting

    Any idea on what would cause that? The fact that it successfully gets so far - to actually showing me my PC share-names - must mean I have 99% of this working correctly, i.e. the physical connection, the IP address, the user name, the password, etc. Still, I can't seem to access the folders themselves. I've tried this with my Windows XP firewall on/off, and Norton AntiVirus on/off. Same problem. Everything did work fine 4 months ago. Were there any odd OS X or Windows updates released recently? I always apply them all.

    smbclient on the Mac does correctly find the XP machine and my XP user name, and accepts my XP password. I get the following from that smbclient command:

        Doing spnego session setup (blob length=16)
        server didn't supply a full spnego negprot
        Got challenge flags: ...
        Got NTLMSSP flags: ...
        Got NTLMSP flags: ...
        Domain=[XPMACHINE] OS=[Windows 5.1] Server=[Windows 2000 LAN Manager]
        tree connect failed: NT_STATUS_INSUFF_SERVER_RESOURCES

    I'm not sure why a standard XP box can't "supply a full spnego negprot", whatever that means. Using XP's RegEdit to change my IRPStackSize from 11... to 13, 15, 20, 22... still gives that "NT_STATUS_INSUFF_SERVER_RESOURCES" error on the Mac.

  • apt-mirror does not mirror the i18n directory

    - by Fred
    I need to set up a local Ubuntu mirror so the whole network doesn't need to hit remote servers in order to update and install new packages. Following a brief tutorial found here, I managed to get a server up and running that correctly mirrors packages from the main and restricted categories. However, when I call apt-get update on a client, I get a couple of errors such as:

        Ign http://192.168.1.18 karmic/main Translation-fr
        Ign http://192.168.1.18 karmic/restricted Translation-fr

    Checking back on the server, I see that apt-mirror only took the binary-amd64 directory of the mirror, and didn't take i18n, which would provide Translation-fr. The manpage for apt-mirror doesn't say anything about i18n, and Google is of no help either. How do I properly mirror i18n? My current mirror.list file is as follows:

        ############# config ##################
        #
        # set base_path /var/spool/apt-mirror
        #
        # if you change the base path you must create the directories below with write privileges
        #
        # set mirror_path $base_path/mirror
        # set skel_path $base_path/skel
        # set var_path $base_path/var
        # set cleanscript $var_path/clean.sh
        # set defaultarch <running host architecture>
        # set postmirror_script $var_path/postmirror.sh
        set run_postmirror 0
        set nthreads 20
        set _tilde 0
        #
        ############# end config ##############

        deb http://mirror.cc.columbia.edu/pub/linux/ubuntu/archive karmic main restricted
        deb http://mirror.cc.columbia.edu/pub/linux/ubuntu/archive karmic-updates main restricted

        clean http://mirror.cc.columbia.edu/pub/linux/ubuntu/archive
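
    apt-mirror of that vintage only follows the architecture directories named in the deb lines, so a common workaround is to fetch i18n yourself from the postmirror hook. A sketch (assumptions: wget is installed, run_postmirror is flipped to 1 in the config above, and the paths mirror the default layout):

        # var/postmirror.sh - fetch the Translation indexes that apt-mirror skips
        base=http://mirror.cc.columbia.edu/pub/linux/ubuntu/archive
        dest=/var/spool/apt-mirror/mirror/mirror.cc.columbia.edu/pub/linux/ubuntu/archive
        for dist in karmic karmic-updates; do
          for comp in main restricted; do
            wget -r -N -nH --no-parent --cut-dirs=4 -P "$dest" "$base/dists/$dist/$comp/i18n/"
          done
        done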

  • Very Slow DSL (ethernet) speed

    - by Abhijit
    I 'was' on openSUSE 12.2 when my DSL speed was normal. Yesterday I switched from openSUSE to Ubuntu 12.04 and the speed decreased, to the range of 7-10-13-20-25 kbps. Then I switched to Linux Mint, and then to Fedora. Still slow speed. When I was on Ubuntu I disabled IPv6, but still no luck. Now I am on Fedora, but this time with a DIFFERENT ISP, and still I am getting very slow speed. So my guess is this has nothing to do with the OS. What can be wrong? Is this a problem with the NIC? Does NIC speed decrease over time? Does a NIC's life end over time, as with a keyboard or mouse? Help please.

    All the OSes I used are 64-bit, and my laptop is a Compaq Presario A965TU, Intel Centrino Dual Core. An interesting thing to notice is that I get normal speed while downloading torrents inside torrent client software. This slow speed issue applies to downloads from any web browser or to installing software using the terminal.
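
    Given that torrents run at full speed, the link itself may be fine, but the NIC hypothesis is cheap to test by checking what link rate it actually negotiated. A sketch assuming the wired interface is eth0 (Fedora may name it differently, e.g. em1 or p2p1):

        # shows negotiated speed/duplex; 10Mb/Half would point at cable or NIC trouble
        sudo ethtool eth0 | egrep 'Speed|Duplex|Link detected'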

  • Is there any way for ME to improve routing to an overseas server? [migrated]

    - by Simon Hartcher
    I am trying to make a connection to a gaming server in Asia from Australia, but my ISP routes my connection through the US.

        Tracing route to worldoftanks-sea.com [116.51.25.54] over a maximum of 30 hops:
          1    <1 ms   <1 ms   <1 ms  192.168.1.1
          2    34 ms   42 ms   45 ms  10.20.21.123
          3    40 ms   40 ms   43 ms  202.7.173.145
          4    51 ms   42 ms   36 ms  syd-sot-ken-crt1-ge-6-0-0.tpgi.com.au [202.7.171.121]
          5   175 ms  200 ms  195 ms  ge5-0-5d0.cir1.seattle7-wa.us.xo.net [216.156.100.37]
          6   212 ms  228 ms  229 ms  vb2002.rar3.sanjose-ca.us.xo.net [207.88.13.150]
          7   205 ms  204 ms  206 ms  207.88.14.226.ptr.us.xo.net [207.88.14.226]
          8   207 ms  215 ms  220 ms  xe-0.equinix.snjsca04.us.bb.gin.ntt.net [206.223.116.12]
          9   198 ms  201 ms  199 ms  ae-7.r20.snjsca04.us.bb.gin.ntt.net [129.250.5.52]
         10   396 ms  391 ms  395 ms  as-6.r20.sngpsi02.sg.bb.gin.ntt.net [129.250.3.89]
         11   383 ms  384 ms  383 ms  ae-3.r02.sngpsi02.sg.bb.gin.ntt.net [129.250.4.178]
         12   364 ms  381 ms  359 ms  wotsg1-slave-54.worldoftanks.sg [116.51.25.54]
        Trace complete.

    Since I think it is unlikely that my ISP will do anything, are there any ways to improve my routing to the server without them having to intervene? NB: the game runs predominantly over UDP, so I believe most low-ping services are out of the question, as they rely on TCP traffic.
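
    Since the game traffic is UDP, it may be worth confirming that UDP takes the same US detour as the trace above (which looks like Windows tracert, so ICMP). A sketch using mtr from a Linux or macOS box (an assumption; mtr isn't stock on Windows):

        # probe with UDP instead of ICMP and print a summary report
        mtr --udp --report --report-cycles 20 worldoftanks-sea.com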

  • issues with Nginx + Passenger Production setup - Loading time/request time delay

    - by Dani Cela
    Having a bit of an issue relating to request time. I have nginx as a proxy server for a Ruby on Rails app running Passenger. I also have a PostgreSQL database server which runs on its own VM, separate from my nginx/application server. My issue is that when I try to access my products page, which does a lot of database queries, the request takes maybe 3-4 seconds. The second I flood the web server with requests, I choke out the web server and requests take almost 20-30 seconds to process. The Rails server and database server do not crash, and the usage is not that high. Each server has more than enough memory; even CPU usage on the Rails server isn't more than 85% - albeit that's high, but it's not maxing it out. Is my problem related to my nginx proxy server? I don't really know how to fully explain this, so if you have a question please ask it and I can clarify what I mean.

    EDIT: to see exactly what I mean relating to the database query, see http://207.245.4.215/products
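
    This pattern smells like request queueing: if each Passenger worker spends 3-4 seconds waiting on the database, a small worker pool backs up fast and queued requests see 20-30 second totals even though neither box is saturated. A quick check, assuming Passenger's bundled CLI tools are on the app server's PATH:

        # shows the request queue depth and how many application processes are busy
        passenger-status
        # per-process memory, useful when sizing passenger_max_pool_size
        passenger-memory-stats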

  • Is it possible to download extremely large files intelligently or in parts via SSH from Linux to Windows?

    - by Andrew
    I have a ~35 GB file on a remote Linux Ubuntu server. Locally, I am running Windows XP, so I am connecting to the remote Linux server using SSH (specifically, I am using a Windows program called SSH Secure Shell Client, version 3.3.2).

    Although my broadband internet connection is quite good, my download of the large file often fails with a "Connection Lost" error message. I am not sure, but I think that it fails because perhaps my internet connection goes out for a second or two every several hours. Since the file is so large, downloading it may take 4.5 to 5 hours, and perhaps the internet connection goes out for a second or two during that long time. I think this because I have successfully downloaded files of this size using the same internet connection and the same SSH software on the same computer. In other words, sometimes I get lucky and the download finishes before the internet connection drops for a second.

    Is there any way that I can download the file in an intelligent way - whereby the operating system or software "knows" where it left off and can resume from the last point if a break in the internet connection occurs? Perhaps it is possible to download the file in sections? Although I do not know if I can conveniently split my file into multiple files - I think this would be very difficult, since the file is binary and is not human-readable.

    As it is now, if the entire ~35 GB file download doesn't finish before the break in the connection, then I have to start the download over and overwrite the ~5-20 GB chunk that was downloaded locally so far. Do you have any advice? Thanks.
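
    For what it's worth, resumable transfer over SSH is exactly what rsync's --partial mode does. A sketch, assuming an rsync client can be installed on the XP side (e.g. via Cygwin or cwRsync - both assumptions):

        # resume-friendly copy: keeps partial data and picks up where it left off
        rsync --partial --append-verify --progress -e ssh user@server:/path/to/bigfile .
        # rerun the same command after every disconnect until it completes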

  • Network connection keeps dropping - bad hardware?

    - by Bill Sambrone
    Hello all, I've run into a bit of a wall with a client of mine. In an office of 20 people, he is the only one who experiences broken connections to his mapped network drives. I have everyone set up with about 6 mapped drives, all pointing to the same server (no DFS), and everyone else can access them lightning fast.

    The environment consists of a mix of Windows 7 and XP machines, all 32-bit. The server holding the data everyone is mapping to is running Server 2008 R2, and is a domain controller. We recently swapped out their old 10/100 switch for a shiny new Dell PowerConnect gigabit switch. We have also replaced an old dying SonicWall with a shiny new one. Everything is running on an ESX host except for the DC, where everyone is getting data from.

    In my client's office, we have done the following:

        Swapped out his computer (Win7 and XP box)
        Swapped out the desktop switch in his office
        Removed the desktop switch in his office
        Changed out the network cable going to the wall
        Ran 'net config server /autodisconnect:-1' on the server
        Disabled remote differential compression on his current Win7 box

    When we swapped out his network cable, everything seemed fine for about 4 days. Normally I would get a phone call a couple of times per day letting me know that Outlook has crashed (there is a 9GB PST living on the server he is always connected to), or that the software he runs from his L drive has crashed. I almost thought I had this solved, but after we rebooted the DC the other night he all of a sudden couldn't stay connected to his mapped network drives for more than 10 minutes. When I ran 'net use' from the command prompt, it listed all the network drives, which were randomly in a state of 'OK', 'Disconnected', or 'Reconnecting'.

    What else should I try? Maybe there is bad wiring in the wall or patch panel, or a bad port in the new switch I have in the server room?

  • Scaling databases with cheap SSD hard drives

    - by Dennis Kashkin
    Hey guys! I hope that many of you are working with high-traffic database-driven websites, and chances are that your main scalability issues are in the database. I noticed a couple of things lately:

    Most large databases require a team of DBAs in order to scale. They constantly struggle with limitations of hard drives and end up with very expensive solutions (SANs or large RAIDs, frequent maintenance windows for defragging and repartitioning, etc.). The actual annual cost of maintaining such databases is in the $100K-$1M range, which is too steep for me :)

    Finally, we got several companies like Intel, Samsung, FusionIO, etc. that just started selling extremely fast yet affordable SSD hard drives based on SLC Flash technology. These drives are 100 times faster in random reads/writes than the best spinning hard drives on the market (up to 50,000 random writes per second). Their seek time is pretty much zero, so the cost of random I/O is the same as sequential I/O, which is awesome for databases. These SSD drives cost around $10-$20 per gigabyte, and they are relatively small (64GB).

    So, there seems to be an opportunity to avoid the HUGE costs of scaling databases the traditional way by simply building a big enough RAID 5 array of SSD drives (which would cost only a few thousand dollars). Then we don't care if the database file is fragmented, and we can afford 100 times more disk writes per second without having to spread the database across 100 spindles. Is anybody else interested in this? I've been testing a few SSD drives and can share my results. If anybody on this site has already solved their I/O bottleneck with SSDs, I would love to hear your war stories!

    PS. I know that there are plenty of expensive solutions out there that help with scalability, for example the time-proven RAM-based SANs. I want to be clear that even $50K is too expensive for my project. I have to find a solution that costs no more than $10K and does not take much time to implement.
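
    Since random-write throughput is the whole argument here, a repeatable benchmark beats vendor numbers. A sketch with fio (an assumption that the fio package is available on the test box; the sizes are illustrative):

        # 4k random writes with direct I/O across 4 workers; reports aggregate IOPS
        fio --name=randwrite --rw=randwrite --bs=4k --size=4g --direct=1 --numjobs=4 --group_reporting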

  • Updated my WAMP Server and MySQL is eating up 580 MB of memory

    - by Jon
    I updated my dev box's WampServer, and along with updating PHP and Apache, MySQL updated to 5.6.12. After doing that, I copied the data folder from my old (5.1.36) install to the new one, and now MySQL takes up 580 MB, which is way too much, since I'm the only person using it (locally) and there are only 20 or so databases on it, none of which have MEMORY tables. How can I get this down to a decent amount? My my.ini:

        # For advice on how to change settings please see
        # http://dev.mysql.com/doc/refman/5.6/en/server-configuration-defaults.html
        # *** DO NOT EDIT THIS FILE. It's a template which will be copied to the
        # *** default location during install, and will be replaced if you
        # *** upgrade to a newer version of MySQL.

        [mysqld]

        # Remove leading # and set to the amount of RAM for the most important data
        # cache in MySQL. Start at 70% of total RAM for dedicated server, else 10%.
        # innodb_buffer_pool_size = 128M

        # Remove leading # to turn on a very important data integrity option: logging
        # changes to the binary log between backups.
        # log_bin

        # These are commonly set, remove the # and set as required.
        # basedir = .....
        # datadir = .....
        # port = .....
        # server_id = .....

        # Remove leading # to set options mainly useful for reporting servers.
        # The server defaults are faster for transactions and fast SELECTs.
        # Adjust sizes as needed, experiment to find the optimal values.
        # join_buffer_size = 128M
        # sort_buffer_size = 2M
        # read_rnd_buffer_size = 2M

        sql_mode=NO_ENGINE_SUBSTITUTION,STRICT_TRANS_TABLES

    Database info:

        Storage Engine   Data Size   Index Size   Total Size
        InnoDB           48.00 KB    0.00 B       48.00 KB
        MEMORY           0.00 B      0.00 B       0.00 B
        MyISAM           163.64 MB   122.49 MB    286.13 MB
        Total            163.69 MB   122.49 MB    286.18 MB
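
    One plausible culprit (an assumption, but a common one when jumping from 5.1 to 5.6): MySQL 5.6 enables the Performance Schema by default, which pre-allocates a few hundred MB that 5.1 never did, and its cache defaults are sized for servers rather than dev boxes. A sketch of my.ini additions to try under [mysqld]:

        [mysqld]
        performance_schema = off        # 5.6 turns this on by default; 5.1 had no such overhead
        table_definition_cache = 400    # 5.6 autosizes this much larger than 5.1 did
        table_open_cache = 64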

  • How to log kernel panics without KVM

    - by Spacedust
    My server is crashing and I can't find an answer why. It all started after my datacenter upgraded the RAM from 16 GB to 32 GB. I also found such logs in dmesg - they started to show themselves just before the first kernel panic:

        EXT4-fs error (device md2): ext4_ext_find_extent: bad header/extent in inode #97911179: invalid magic - magic 5f69, entries 28769, max 26988(0), depth 24939(0)
        EXT4-fs error (device md2): ext4_ext_remove_space: bad header/extent in inode #97911179: invalid magic - magic 5f69, entries 28769, max 26988(0), depth 24939(0)
        EXT4-fs error (device md2): ext4_mb_generate_buddy: EXT4-fs: group 20974: 8589 blocks in bitmap, 54896 in gd
        JBD: Spotted dirty metadata buffer (dev = md2, blocknr = 0). There's a risk of filesystem corruption in case of system crash.
        EXT4-fs error (device md2): ext4_ext_split: inode #97911179: (comm pdflush) eh_entries 28769 != eh_max 26988!
        EXT4-fs (md2): delayed block allocation failed for inode 97911179 at logical offset 1039 with max blocks 1 with error -5
        This should not happen!! Data will be lost
        EXT4-fs error (device md2): ext4_mb_generate_buddy: EXT4-fs: group 21731: 5 blocks in bitmap, 60762 in gd
        JBD: Spotted dirty metadata buffer (dev = md2, blocknr = 0). There's a risk of filesystem corruption in case of system crash.

    My system is CentOS 5.8 64-bit with the latest kernel, 2.6.18-308.20.1.el5. How can I check what the reason for the kernel panic is without having access to a KVM? I have told my datacenter admins to check the memory in the server.
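
    Without a KVM, the usual options are netconsole (stream kernel messages to another box over UDP, so the panic text survives the crash) or kdump. A netconsole sketch for a CentOS 5-era kernel; the IPs, interface, and MAC are placeholders to substitute:

        # on the crashing server: send kernel messages to 10.0.0.2 port 6666 via eth0
        modprobe netconsole netconsole=@/eth0,6666@10.0.0.2/00:11:22:33:44:55
        # on the receiving box: capture them (use "nc -u -l -p 6666" with traditional netcat)
        nc -u -l 6666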

  • How to convert a PDF/DjVu file to PNGs under Linux? [closed]

    - by user66732
    ImageMagick doesn't work (Fedora 14) on one PDF file:

        $ convert -density 300 INPUT.PDF out.png
        Error: /ioerror in --showpage--
        Operand stack:
           1 true
        Execution stack:
           %interp_exit .runexec2 --nostringval-- --nostringval-- --nostringval-- 2 %stopped_push
           --nostringval-- --nostringval-- --nostringval-- false 1 %stopped_push 1878 1 3 %oparray_pop
           1877 1 3 %oparray_pop 1861 1 3 %oparray_pop --nostringval-- --nostringval-- 141 1 319
           --nostringval-- %for_pos_int_continue --nostringval-- --nostringval-- 1761 0 9 %oparray_pop
           --nostringval-- --nostringval--
        Dictionary stack:
           --dict:1157/1684(ro)(G)-- --dict:1/20(G)-- --dict:75/200(L)-- --dict:75/200(L)--
           --dict:108/127(ro)(G)-- --dict:288/300(ro)(G)-- --dict:22/25(L)-- --dict:6/8(L)-- --dict:22/40(L)--
        Current allocation mode is local
        Last OS error: 27
        GPL Ghostscript 8.71: Unrecoverable error, exit code 1
        convert: Postscript delegate failed `INPUT.PDF': @ error/pdf.c/ReadPDFImage/645.
        convert: missing an image filename `out.png' @ error/convert.c/ConvertImageCommand/2953.

    And it doesn't work on a DjVu file:

        $ convert -density 300 INPUT.DJVU out.png
        convert: no decode delegate for this image format `INPUT.DJVU' @ error/constitute.c/ReadImage/532.
        convert: missing an image filename `out.png' @ error/convert.c/ConvertImageCommand/2953.

    An extra question about the output filenames. I get:

        out-0.png out-1.png ... out-9.png out-10.png out-11.png ... out-123.png out-124.png

    Is there a way to get them like this instead?

        out-000.png out-001.png ... out-009.png out-010.png out-011.png ... out-123.png out-124.png

    Because otherwise they sort in the wrong order:

        out-0.png out-1.png out-10.png out-11.png out-123.png out-124.png out-9.png

    Thank you :\
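
    On the naming side, convert accepts a printf-style pattern in the output filename, and DjVu isn't a format ImageMagick decodes at all, so the usual route is djvulibre's ddjvu first. A sketch (assumption: the djvulibre package is installed, and the -eachpage flag behaves as its manpage describes):

        # zero-padded page numbers straight from convert
        convert -density 300 INPUT.PDF out-%03d.png
        # DjVu: render one TIFF per page with ddjvu, then convert to PNG if needed
        ddjvu -format=tiff -eachpage INPUT.DJVU page-%03d.tiff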

  • converting to MXF using ffmpeg

    - by Prakash
    I have been trying to use the FFmpeg utility to convert an AVI file to MXF using DNxHD. I am using FFmpeg with params as follows:

        ffmpeg -i ccvt_box.avi -vcodec dnxhd -video_size 1920x1080 -r 24 -b:v 115m ex.mxf

    The error it is giving:

        ffmpeg version N-43737-g76c3fff Copyright (c) 2000-2012 the FFmpeg developers
          built on Aug 20 2012 18:50:42 with llvm-gcc 4.2.1 (LLVM build 2336.11.00)
          configuration:
          libavutil      51. 70.100 / 51. 70.100
          libavcodec     54. 53.100 / 54. 53.100
          libavformat    54. 25.104 / 54. 25.104
          libavdevice    54.  2.100 / 54.  2.100
          libavfilter     3. 11.101 /  3. 11.101
          libswscale      2.  1.101 /  2.  1.101
          libswresample   0. 15.100 /  0. 15.100
        Input #0, avi, from 'ccvt_box.avi':
          Duration: 00:00:10.00, start: 0.000000, bitrate: 691 kb/s
            Stream #0:0: Video: indeo5 (IV50 / 0x30355649), yuv410p, 340x344, 10 tbr, 10 tbn, 10 tbc
            Metadata:
              title : bob.avi
        [dnxhd @ 0x7fcd60818e00] video parameters incompatible with DNxHD
        Output #0, mxf, to 'ex.mxf':
            Stream #0:0: Video: dnxhd, yuv422p, 340x344, q=2-1024, 90k tbn, 24 tbc
            Metadata:
              title : bob.avi
        Stream mapping:
          Stream #0:0 -> #0:0 (indeo5 -> dnxhd)
        Error while opening encoder for output stream #0:0 - maybe incorrect parameters such as bit_rate, rate, width or height
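
    A likely reading of that log: the output stream is still 340x344 because -video_size is an input option, not a rescale, and DNxHD only accepts fixed size/rate/bitrate combinations (1920x1080p24 at 115 Mb/s in yuv422p being one of them). A sketch that actually scales the video before encoding; whether 115M matches your build's DNxHD profile table is an assumption to verify:

        # scale to a legal DNxHD frame size; note 340x344 -> 1920x1080 will stretch
        # (add a pad filter instead if aspect ratio matters)
        ffmpeg -i ccvt_box.avi -vf scale=1920:1080 -r 24 -pix_fmt yuv422p -c:v dnxhd -b:v 115M ex.mxf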

  • How to run a Fujitsu P27T-7 LED monitor in a non-native resolution and have perfect font rendering

    - by Ilia Rostovtsev
    My problem is completely opposite to anything I could find, as I need to run my monitor in a NOT native resolution and have perfect font rendering. I recently got myself an Ultra HD 2560x1440 27 inch monitor (Fujitsu P27T-7 LED) and I have an issue with it. I would call it personal, but I'm afraid it's not, as a few people have already agreed with me.

    I do programming, and the text at UHD is way too small for comfortable usage. I changed the resolution to regular Full HD (1920x1080); the size became just right, but the text now looks slightly blurry, in comparison both to its native UHD resolution and to my old 23 inch NEC. I am pretty frustrated and not sure what to do, or how to make the fonts look just as sleek as they should. I can't work at the UHD resolution (my vision is 100% perfect): simply calculated, the picture size at Ultra HD (2560x1440) on 27 inches is around 30% smaller than Full HD (1920x1080) on 23 inches. In order to have the same font size as a Full HD 23 inch screen, a 27 inch Ultra HD monitor would have to be around 32 inches in size. If I set my new monitor to regular Full HD 1920x1080, the font size is just perfect, but the quality is not, as it's blurry. Could anyone please help me out with advice on how to solve this problem?

    Spec: nVidia 560 Ti with DVI-D port, on Fedora 20.

    EDIT 1: Changing fonts doesn't really help, as everything else doesn't look the way it should.

    EDIT 2: The monitor buzzes really badly at 2560x1440 when there are lots of lines on the screen, like a file listing. If I type ls /usr/bin it makes a nasty, irritating sound. At 1920x1080 it's a bit better. Any idea why?
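
    For completeness, the usual way around this is to keep the panel at its native 2560x1440 (an LCD can only show non-native modes by interpolation, which is where the blur comes from) and scale the text up instead. A sketch assuming the default GNOME desktop on Fedora 20:

        # render at native resolution but draw fonts ~30% larger
        gsettings set org.gnome.desktop.interface text-scaling-factor 1.3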

  • configuring lighttpd for large downloads

    - by ahmedre
    I run a web site that hosts pages that are just general scripts (PHP, etc.) and MP3 downloads (some of which are fairly large - up to 200 MB). I am running lighttpd on the servers, on Linux (Ubuntu 64). Everything is fine, but under high load the server is not accessible (or very slow - even SSHing in takes a while), and I am guessing this is due to a huge number of MP3 downloads at that time. Consequently, DNS sees the server as down and redirects all the traffic to the other servers, and after a while it comes back up and things work again.

    So what's the best way to fix this? Ideally, I want the server to keep running (and the web pages - PHP etc. - to always work; downloads don't always have to work). Should I just have 2 web servers running (one for the downloads and one for the PHP pages), or is it perhaps something I can fix in my lighttpd configuration? Here are the snippets from my configuration:

        server.max-worker = 4
        server.max-fds = 2048
        server.max-keep-alive-requests = 4
        server.max-keep-alive-idle = 4
        server.stat-cache-engine = "fam"

        fastcgi.server = ( ".php" =>
          ((
            "bin-path" => "/usr/bin/php-cgi",
            "socket" => "/tmp/php.socket",
            "max-procs" => 1,
            "idle-timeout" => 20,
            "bin-environment" => (
              "PHP_FCGI_CHILDREN" => "64",
              "PHP_FCGI_MAX_REQUESTS" => "1000"
            ),
            "bin-copy-environment" => ( "PATH", "SHELL", "USER" ),
            "broken-scriptfilename" => "enable"
          ))
        )

        # normal php site
        $HTTP["host"] =~ "bar.com" {
          server.document-root = "/usr/local/www/sites/bar.com/"
          accesslog.filename = "|/usr/sbin/cronolog /var/log/lighttpd/%m/%d/%H/bar.log"
        }

        # download site
        $HTTP["host"] =~ "(download|stream).foo.com" {
          server.document-root = "/home/audio/"
          dir-listing.activate = "enable"
          dir-listing.hide-dotfiles = "enable"
          evasive.max-conns-per-ip = 1
          evasive.silent = "enable"
          # connection.kbytes-per-second = 256
          accesslog.filename = "|/usr/sbin/cronolog /var/log/lighttpd/%m/%d/%H/download.log"
        }
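
    One config-only lever worth trying before splitting servers: rate-limit the download vhost so MP3 traffic can't starve everything else. A sketch using lighttpd's built-in traffic shaping (the numbers are assumptions to tune, and whether your build honors server.kbytes-per-second inside a conditional is worth verifying):

        # inside the download vhost
        connection.kbytes-per-second = 256   # per-download cap (already present above, commented out)
        server.kbytes-per-second = 8192      # overall ceiling for this vhost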

  • Someone from China wants to kill my inbound bandwidth??

    - by yes123
    Hi guys. Someone from China with two different IPs is downloading the same big file from my server. Their IPs are:

        122.89.45.210
        60.210.7.62

    They are requesting this file and downloading it more than 20 times per minute. What can I do to prevent this? (I am on Gentoo with root access.) And WHY would they do this to a site that has nothing to do with China?

    ADD1: Other IPs:

        221.8.60.131
        124.67.47.56
        119.249.179.139
        60.9.0.176

    ADD2: The stupid thing is they are requesting only 1 single file, lol. Either they want that file removed (though I don't see why), or they are pretty stupid.

    ADD3: The situation is getting worse. The IPs are spreading to other countries too (USA and Korea, if www.geobytes.com/iplocator.htm is right). And now they are requesting another file.

    ADD4: It seems after they realized I removed that file they stopped attacking me. I will monitor the situation. They started again after a sleep of 3-4 minutes with the same file (lucky me). Hard to say why this is happening.
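
    Given root on Gentoo, the blunt instrument is to drop the offending sources at the firewall; a sketch using the addresses listed above, plus a rate-limit variant so new offenders are throttled automatically (the 20-hits-per-minute threshold mirrors the observed behavior):

        iptables -A INPUT -s 122.89.45.210 -j DROP
        iptables -A INPUT -s 60.210.7.62   -j DROP
        # or rate-limit new HTTP connections instead of hard-blocking known IPs:
        iptables -A INPUT -p tcp --dport 80 -m state --state NEW -m recent --set
        iptables -A INPUT -p tcp --dport 80 -m state --state NEW -m recent --update --seconds 60 --hitcount 20 -j DROP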

  • Apache Mod SVN Access Forbidden

    - by Cerin
    How do you resolve the error "svn: access to '/repos/!svn/vcc/default' forbidden"? I recently upgraded a Fedora 13 server to 16, and now I'm trying to debug an access error with a Subversion server running on Apache with mod_dav_svn. Running:

        svn ls http://myserver/repos/myproject/trunk

    lists the correct files. But when I go to commit, I get the error:

        svn: access to '/repos/!svn/vcc/default' forbidden

    My Apache virtualhost for svn is:

        <VirtualHost *:80>
            ServerName svn.mydomain.com
            ServerAlias svn
            DocumentRoot "/var/www/html"
            <Directory />
                Options FollowSymLinks
                AllowOverride None
            </Directory>
            <Directory "/var/www/html">
                Options Indexes FollowSymLinks
                AllowOverride None
                Order allow,deny
                Allow from all
            </Directory>
            <Location /repos>
                Order allow,deny
                Allow from all
                DAV svn
                SVNPath /var/svn/repos
                SVNAutoversioning On
                # Authenticate with Kerberos
                AuthType Kerberos
                AuthName "Subversion Repository"
                KrbAuthRealms mydomain.com
                Krb5KeyTab /etc/httpd/conf/krb5.HTTP.keytab
                # Get people from LDAP
                AuthLDAPUrl ldap://ldap.mydomain.com/ou=people,dc=mydomain,dc=corp?uid
                # For any operations other than these, require an authenticated user.
                <LimitExcept GET PROPFIND OPTIONS REPORT>
                    Require valid-user
                </LimitExcept>
            </Location>
        </VirtualHost>

    What's causing this error?

    EDIT: In my /var/log/httpd/error_log I'm seeing a lot of these:

        [Fri Jun 22 13:22:51 2012] [error] [client 10.157.10.144] ModSecurity: Warning. Operator LT matched 20 at TX:inbound_anomaly_score. [file "/etc/httpd/modsecurity.d/base_rules/modsecurity_crs_60_correlation.conf"] [line "31"] [msg "Inbound Anomaly Score (Total Inbound Score: 15, SQLi=, XSS=): Method is not allowed by policy"] [hostname "svn.mydomain.com"] [uri "/repos/!svn/act/0510a2b7-9bbe-4f8c-b928-406f6ac38ff2"] [unique_id "T@Sp638DCAEBBCyGfioAAABK"]

    I'm not entirely sure how to read this, but I'm interpreting "Method is not allowed by policy" as meaning that there's some security Apache module that might be blocking access. How do I change this?
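
    If that reading is right, the log points at mod_security's core rule set rejecting the WebDAV methods (MKACTIVITY, CHECKOUT, MERGE, etc.) that an svn commit uses, which GET-only browsing never triggers. One targeted sketch, assuming mod_security2's directives are permitted in this context, is to switch the engine off just for the repository path (add to the existing <Location /repos> block):

        <Location /repos>
            # svn commits use WebDAV methods that trip the CRS "method not allowed" rule
            SecRuleEngine Off
        </Location>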

  • Vlan on DD-WRT v24 filesharing without internet

    - by user148888
    I'm planning to set up a VLAN with shared files but without internet. My VLAN works correctly up to the point where I could block the internet, but I cannot share files: I can only ping from 172.168.1.x to 172.168.2.x, not the other way around. Can you help me please? This is my config:

        WRT54G, DD-WRT v24
        vlan0: eth 1,2,3 - 172.168.1.1/24
        vlan1: WAN
        vlan2: eth 4 - 172.168.2.1/24 (don't want internet here, just LAN connection)
        PC (my pc): 172.168.1.10/24, gateway 172.168.1.1 (with internet)
        Ubiquiti NanoStation loco M2 (AP): 172.168.2.20/24, connected to eth 4 / vlan2 (don't want internet here)
        Ubiquiti NanoStation loco M2 (client): 172.168.2.21/24 (don't want internet here)
        Friend's PC: 172.168.2.115/24, connected through the client (don't want internet here)

    Any help please. (Screenshots referenced: LAN map, VLAN config pages, and the command used to block internet - hosted on ImageShack.)
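
    Since DD-WRT's inter-VLAN routing is plain iptables underneath, one-way reachability like this usually comes down to FORWARD rules. A sketch for the Administration -> Commands firewall script; the interface names (br0 for the main LAN, vlan2 for the second LAN, vlan1 for the WAN) are assumptions that vary by build:

        # allow the two LANs to reach each other in both directions
        iptables -I FORWARD -i br0 -o vlan2 -j ACCEPT
        iptables -I FORWARD -i vlan2 -o br0 -j ACCEPT
        # keep vlan2 off the internet
        iptables -I FORWARD -i vlan2 -o vlan1 -j DROP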
