Search Results

Search found 8125 results on 325 pages for 'metadata cache'.


  • Windows Server 2008 R2 grinds to a screeching halt during file copy operations

    - by skolima
    When my Windows Server 2008 R2 machine is performing any large disk operation (copying 10 GB files from one drive to another, copying a similar file over the network, merging Hyper-V snapshots, compressing large files), performance of the whole machine slows down terribly and everything becomes unresponsive. This is noticeable in any situation where the disk access is large enough not to fit in the cache. Are there any settings available for tuning this behaviour? I can accept slower file transfers if that gives me more responsiveness.

    System details: Dell OptiPlex 960, Core 2 Quad Q9650, 8 GB RAM, 2 SATA drives - 320 GB (ST3320418AS) and 1 TB (ST31000528AS), NCQ active on both, Intel 82564LM-3 Gigabit Ethernet, ATI HD 3450 graphics, Intel ICH10 bridge. We have multiple machines like this, and every one exhibits the same behaviour. I thought this was overkill for a workstation; apparently I was mistaken.

    Update: I guess I shouldn't have mentioned Hyper-V at all. The above configuration is a standard workstation setup at the company I work for; this is not a server of any kind. I have at most 3 virtual machines working, and usually I'm the only person accessing them. Nevertheless, the slowdown occurs even when no VMs are running. On a Linux machine I'd simply ionice the copy process and forget about it - is there any way to manage IO priorities on Windows?
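
    For reference, the Linux-side workaround mentioned above is a one-liner; Windows ships no direct command-line equivalent. A minimal sketch, with made-up paths and PID:

        # run the copy in the idle I/O scheduling class so it only uses spare bandwidth
        ionice -c3 cp /data/bigfile.iso /backup/
        # or demote an already-running copy by PID
        ionice -c3 -p 4242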

  • How to get IE to open JavaScript as text

    - by Pete
    I am running IE 9. Until sometime last week, if I put the URL of a JavaScript file in the address bar, it would show the JavaScript as text in the browser window. Now when I do that, it wants to download the JavaScript file. How can I revert to the previous handling? This is annoying because I'm developing a web application, and if I can get IE to display the .js files as text in the browser, I can refresh to force the cache to update.

    Update: I've tested on several co-workers' machines. For some, browsing to .js files renders them in the browser (IE 9 in all cases); for others, it asks for a download. File associations don't seem to have any effect. With one co-worker we tested both IE and Chrome: IE wanted to download the file, but Chrome rendered it as text. This makes me think it's an IE issue and not an OS issue.

  • Using nginx with proxy_pass on a subdomain

    - by marcus3006
    I have a Rails app that should listen on the subdomain redmine.example.com (using proxy_pass). All other requests for *.example.com should just redirect to a normal index.html. Here is my configuration:

        server {
            server_name www.example.com example.com;
            root /home/deploy/static/example;
        }

        upstream redmine {
            server unix:/tmp/redmine.socket fail_timeout=0;
        }

        server {
            # you could put a list of other domain names this application answers
            server_name redmine.example.com;
            root /home/deploy/rails/redmine/public;
            access_log /var/log/nginx/redmine_access.log;
            rewrite_log on;

            location * {
                proxy_pass http://redmine;
            }

            location ~ ^/(assets)/ {
                root /home/deploy/rails/redmine/public;
                gzip_static on; # to serve pre-gzipped version
                expires max;
                add_header Cache-Control public;
            }
        }

    Does anyone know what's going wrong here? Requests to example.com and www.example.com are handled correctly, but when I try to access redmine.example.com I get "couldn't resolve host".
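
    "couldn't resolve host" is a client-side DNS failure, so the first thing to rule out is name resolution rather than the nginx config; a quick check, with a documentation-range address standing in for the real one:

        dig +short redmine.example.com      # empty output means no DNS record exists
        # temporary local override for testing (hypothetical address):
        echo "203.0.113.10 redmine.example.com" | sudo tee -a /etc/hosts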

  • Conflicts with file from package mysql-5.0.77

    - by Whiteyq
    I'm trying to install APC (Alternative PHP Cache) on a CentOS dedicated server. I have everything done apart from configuring phpize. Running yum -y install php-devel gives me the following error:

        file /usr/share/mysql/charsets/Index.xml from install of mysql-libs-5.1.57-1.el5.art.x86_64 conflicts with file from package mysql-5.0.77-4.el5_5.3.i386

    (and so on for other language files). So I think the MySQL version I have is too old, and I more than likely need to upgrade MySQL to version 5.1. I'm reluctant to do this because a) it's a live server (although only 3/4 domains) and b) I've read I'll need to recompile PHP if I upgrade. On top of that I have Plesk installed for managing domains, which might also need reinstalling/reconfiguring. Sorry for the long intro, but it's my first post and best to give as much info as possible. My question is basically: is there any way I can run yum -y install php-devel to get phpize working and complete the installation of APC with the version of MySQL I currently have installed, i.e. 5.0.77?
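
    A minimal sketch of one way to probe this, assuming the conflict comes only from mysql-libs being pulled in from a third-party (.art) repository; whether php-devel still resolves without it depends on the repositories enabled:

        rpm -q mysql mysql-libs                        # confirm which versions are installed
        yum -y install php-devel --exclude='mysql-libs*'   # try installing without the libs upgrade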

  • How to make lighttpd respect X-Forwarded-Proto when constructing redirects for directories?

    - by Tim Landscheidt
    We have an nginx proxy at tools.wmflabs.org that receives requests by http and https and passes them by http on to lighttpds on a grid (one lighttpd per top-level path). Requests that reach the proxy by https are received by the lighttpds like this:

        HEAD /lighttpd-test/test HTTP/1.1
        Connection: close
        Host: tools.wmflabs.org
        X-Forwarded-Proto: https
        X-Original-URI: /lighttpd-test/test
        User-Agent: curl/7.29.0
        Accept: */*

    This works great except in the case where the URL references a physical directory and misses the trailing slash ("/"), as lighttpd then generates a redirect to the http URL:

        HTTP/1.1 301 Moved Permanently
        Location: http://tools.wmflabs.org/lighttpd-test/test/
        Connection: close
        Date: Fri, 06 Jun 2014 14:50:29 GMT
        Server: lighttpd/1.4.28

    The relevant parts of our lighttpd configurations are:

        server.modules = ( "mod_setenv", "mod_access", "mod_accesslog", "mod_alias",
                           "mod_compress", "mod_redirect", "mod_rewrite",
                           "mod_fastcgi", "mod_cgi", )
        server.port = $port
        [...]
        server.document-root = "$home/public_html"
        [...]
        server.follow-symlink = "enable"
        [...]
        server.stat-cache-engine = "fam"
        ssl.engine = "disable"
        alias.url = ( "/$tool" => "$home/public_html/" )
        index-file.names = ( "index.php", "index.html", "index.htm" )
        dir-listing.encoding = "utf-8"
        server.dir-listing = "disable"
        url.access-deny = ( "~", ".inc" )
        [...]

    How can I make lighttpd respect X-Forwarded-Proto and use it when constructing redirects for directories? I'm aware that I could try to tackle this in nginx, but I'd prefer to fix it in lighttpd.
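
    For completeness, the nginx-side fallback the asker would rather avoid can be sketched with proxy_redirect, which rewrites absolute Location headers coming back from the backend (upstream name is hypothetical):

        location / {
            proxy_pass http://lighttpd_grid;
            proxy_set_header X-Forwarded-Proto $scheme;
            # map the backend's http:// redirects back onto the client's original scheme
            proxy_redirect http://tools.wmflabs.org/ $scheme://tools.wmflabs.org/;
        }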

  • Linux kernel buffer memory is zero

    - by user64772
    Hi all. There is one question I can't find an answer to on Google. I have many Linux boxes, mostly with SLES or openSUSE, on different versions and kernels. On some of them I am faced with a slow-Oracle-transactions problem. It happens from time to time, and when I log in to the box at that moment I see that Oracle is blocked in the kernel function sync_page:

        # while :; do ps axo stat,pid,cmd,wchan | egrep '^D|^R'; echo --; sleep 5; done
        D  3483  hald-addon-storage: polling  ide_do_drive_cmd
        Ds 4635  ora_dbw0_orcl                sync_page
        Ds 4637  ora_lgwr_orcl                sync_page
        Ds 4639  ora_ckpt_orcl                sync_page
        D 11210  oracleorcl (LOCAL=NO)        sync_page
        D 12457  [smtpd]                      sync_page
        R+ 12458 ps axo stat,pid,cmd,wchan -
        --

    (the same ora_dbw0/ora_lgwr/ora_ckpt processes and the oracle session remain blocked in sync_page in each subsequent 5-second sample). So I think the box has run out of memory for disk buffers, but memory looks fine:

                     total       used       free     shared    buffers     cached
        Mem:       4149084    3994552     154532          0          0    2424328
        -/+ buffers/cache:    1570224    2578860
        Swap:      3148700     750696    2398004

    I think this is the problem: buffers is zero, so we must write directly to disk. But why is buffers zero? I tried to google it and found nothing. Can anyone help?
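
    One way to see whether writeback is what the Oracle processes are stuck behind during an episode - a small sketch using standard /proc counters:

        # sample dirty/writeback pages once a second during a stall
        watch -n1 'grep -E "^(Dirty|Writeback|MemFree|Buffers)" /proc/meminfo'
        # writeback thresholds worth checking on affected kernels
        sysctl vm.dirty_ratio vm.dirty_background_ratio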

  • Why do the host and VMware guests fail to get each other's MACs?

    - by Georgiy Nemtsov
    I have a Windows 7 64-bit host running VMware Workstation 8 with two CentOS 6.3 guests. All guest adapters are bridged, with statically assigned IPs. Connectivity between host and guests was fine for many days running this setup. Then today, while I was working, host and guests suddenly became unreachable for each other, although both host and guests could still connect to the internet and to other machines on my networks.

    On the guests, arp -a showed for the host IP address:

        ? (192.168.1.3) <incomplete> on eth0

    On the host, arp -a showed for the guest IPs:

        192.168.1.19    00-00-00-00-00-00
        192.168.1.20    00-00-00-00-00-00

    All other ARP records were OK. Deleting the ARP caches didn't help, neither on the guests nor on the host. After that I disabled and re-enabled the network adapter on my Windows 7 host, and the problem was gone; arp -a now shows correct MACs on all instances. So I suppose the issue was about expiry of the ARP cache: for some reason host and guests couldn't get each other's MACs. Does anybody know what this is all about? I am preparing the guests for production and don't want to face such problems in the future!

    I was also surprised, while investigating this issue from another machine that could connect to both the guests and the host, that arp -a on that machine showed the same MAC for the host and the two guests.
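
    If it happens again, it may be worth confirming on the wire whether ARP requests are being answered at all; a sketch with standard tools on one of the CentOS guests:

        ip neigh flush dev eth0      # drop the kernel's neighbour cache
        tcpdump -n -i eth0 arp &     # watch ARP traffic in the background
        ping -c3 192.168.1.3         # does a who-has go out? does a reply come back?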

  • Exchange 2010 UR3 - customizing OWA logon page

    - by STGdb
    I have an Exchange 2010 UR3 deployment for which I need to customize the OWA logon page. I've created a new LGNTOPL.GIF file to replace the existing one in the folder:

        C:\Program Files\Microsoft\Exchange Server\V14\ClientAccess\Owa\14.3.158.1\themes\resources

    When I bring up OWA, I still get the original "Outlook Web App" logo. I searched and found a couple of other instances of LGNTOPL.GIF in the directories:

        C:\Program Files\Microsoft\Exchange Server\V14\ClientAccess\Owa\14.3.123.3\themes\resources
        C:\Program Files\Microsoft\Exchange Server\V14\ClientAccess\Owa\14.3.146.0\themes\resources
        C:\Program Files\Microsoft\Exchange Server\V14\ClientAccess\Owa\Current\themes\resources

    I replaced the LGNTOPL.GIF file in each of the above directories but got the same results. I tried clearing my browser cache and even using multiple browsers from multiple PCs: same results. I even tried making my GIF file the same pixel size as the original LGNTOPL.GIF logo, but still the same results. I tried restarting IIS on the CAS server and restarting the server itself - same results. Has something changed with Exchange 2010 UR3 when it comes to customizing OWA? I don't see anything documented about any change to OWA customization. Thanks

  • Amazon EC2 - Free memory

    - by Damo
    We have an Amazon EC2 small instance running, and over the past few days we noticed that the free memory is going down and down. On the small instance we are running Apache and Tomcat 6. Tomcat is started with the following JVM parameters:

        -Xms32m -Xmx128m -XX:PermSize=128m -XX:MaxPermSize=256m

    We use Nagios to monitor things like pending updates, free disk space and memory. Everything else is behaving as expected, but our memory keeps going down. Our app receives approximately half a million hits a day. When I shut down Apache and Tomcat and ran free -m, we had only 594 MB of memory free out of the 1.7 GB of memory. Not much else is running on the small instance, and when running the top command I cannot see where the memory is going.

    The app we run on Tomcat is a Grails webapp. Could there be a memory leak within our application? I read online that a small Amazon instance is perfect for running Apache and Tomcat. I found a few posts that showed how to set up Apache and Tomcat to limit memory usage, and I have already performed those steps; the memory is not being used up as quickly, but it is still decreasing over time. We have other Amazon EC2 small instances running Grails apps, and the memory is fairly stable on those nodes, but they would not be receiving as much traffic. Just to add: when I run the top command on the problem server, I cannot see where all the memory is being used. Any help with this is greatly appreciated.

    The output of free -m when run on my server is as follows:

                     total       used       free     shared    buffers     cached
        Mem:          1657       1380        277          0        158        773
        -/+ buffers/cache:        447       1209
        Swap:          895          0        895

    In your opinion, does this look OK? At what stage would the OS give back memory - would it wait until free memory reaches 0%, or is this OS dependent?
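
    Note that in this output only 447 MB is actually held by processes; the rest is reclaimable page cache, which Linux hands back on demand, so the "-/+ buffers/cache" row is the one to watch. A quick sketch for seeing where resident memory actually sits:

        # processes ranked by resident memory (RSS)
        ps aux --sort=-rss | head -n 15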

  • Should I completely turn off swap for linux webserver?

    - by Poma
    Recently a friend told me that it is a good idea to turn off swap on Linux web servers that have enough memory. My server has 12 GB and currently uses 4 GB (not counting cache and buffers) under peak load. His argument was that in a normal situation the server will never use all of its RAM, so the only way it can hit an out-of-memory situation is due to some bug/DDoS/etc. If swap is turned off, the system will run out of memory, which will eventually crash the program hogging memory (most likely the web server process) and probably some other processes. If swap is turned on, the system will eat both RAM and swap and eventually end in the same crash, but before that it will offload crucial processes like sshd to swap and start doing a lot of swap operations, resulting in a major slowdown. That way, under a DDoS, the system may go into a completely unusable condition due to huge lags, and I probably will not be able to log in and kill the web server process or deny all incoming traffic (all but SSH). Is this right? Am I missing something (like the fact that a swap partition is very useful in some way even if I have enough RAM)? Should I turn it off?
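
    A middle ground between "swap on" and "swap off" is to keep a small swap partition but tell the kernel to avoid it; a sketch, assuming a stock sysctl setup:

        sysctl vm.swappiness                            # default is usually 60
        sysctl -w vm.swappiness=10                      # strongly prefer dropping cache over swapping
        echo "vm.swappiness = 10" >> /etc/sysctl.conf   # persist across reboots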

  • Matched or unmatched drives for RAID arrays?

    - by Will
    Looking around, there is conflicting information on this, with some strongly suggesting one or the other. From my understanding, the issue with matched drives is that the wear on both drives is more or less the same, so the potential for the second drive failing with, or very soon after, the first is pretty high. People also claim matched drives give substantially higher performance. However, assuming the unmatched drives are more or less the same (e.g. two 1 TB SATA II 7200 rpm drives with 32 MB cache), would the minor differences between, say, a Seagate and a Western Digital one (say one has a 128 MB/s read rate and the other a 150 MB/s read rate, plus various other minor differences) actually cause any notable performance loss - i.e. potentially worse than two matched 128 MB/s drives - or does RAID not really care and give you an essentially optimal solution (e.g. up to 278 MB/s total read speed for RAID 0 and 1)? And is it similar for other RAID levels with more "unmatched" drives (5 and 1+0 come to mind as possibilities)? Also, I couldn't find much info on how this differs across RAID setups, e.g. RAID 0 vs RAID 1, software vs hardware RAID, etc. I'm assuming such things have an effect, and that it's not all the same for RAID in general?

  • How to detect fake RAM

    - by Michael
    I just bought a virtual server which should have 2 GB of RAM. What I got is a server showing 4 GB, which looks very strange to me; I think it is just virtual RAM. dmidecode only outputs:

        /dev/mem: Operation not permitted

    How can I check whether it's real RAM or just virtual? free -m outputs:

                     total       used       free     shared    buffers     cached
        Mem:          4093        364       3728          0          0        346
        -/+ buffers/cache:         18       4074
        Swap:            0          0          0

    Output from cat /proc/user_beancounters:

        Version: 2.5
        uid   resource         held    maxheld     barrier       limit  failcnt
        137:  kmemsize      8922287   10194944  2145910784  2145910784        0
              lockedpages         0          0      523904      523904        0
              privvmpages     13387      59112  9223372036854775807  9223372036854775807  0
              shmpages          769        785  9223372036854775807  9223372036854775807  0
              dummy               0          0  9223372036854775807  9223372036854775807  0
              numproc            22         54  9223372036854775807  9223372036854775807  0
              physpages       93377     106010           0     1047808        0
              vmguarpages         0          0  9223372036854775807  9223372036854775807  0
              oomguarpages     2471       2473  9223372036854775807  9223372036854775807  0
              numtcpsock          5         21  9223372036854775807  9223372036854775807  0
              numflock            4         13  9223372036854775807  9223372036854775807  0
              numpty              1          1  9223372036854775807  9223372036854775807  0
              numsiginfo          0         39  9223372036854775807  9223372036854775807  0
              tcpsndbuf      102592     381632  9223372036854775807  9223372036854775807  0
              tcprcvbuf       81920    4820184  9223372036854775807  9223372036854775807  0
              othersockbuf     4624      61632  9223372036854775807  9223372036854775807  0
              dgramrcvbuf         0       9248  9223372036854775807  9223372036854775807  0
              numothersock       39         56  9223372036854775807  9223372036854775807  0
              dcachesize    4178917    4232732  1072955392  1072955392        0
              numfile           378        535  9223372036854775807  9223372036854775807  0
              dummy               0          0  9223372036854775807  9223372036854775807  0
              dummy               0          0  9223372036854775807  9223372036854775807  0
              dummy               0          0  9223372036854775807  9223372036854775807  0
              numiptent          24         24  9223372036854775807  9223372036854775807  0
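
    The presence of /proc/user_beancounters means this is an OpenVZ container, so the "RAM" is a container limit rather than physical DIMMs. The physpages values are in 4 KiB pages and decode exactly to what free -m reports - a quick check:

        echo $(( 1047808 * 4 / 1024 ))   # physpages limit  -> 4093 MB, the free -m total
        echo $((  523904 * 4 / 1024 ))   # lockedpages barrier -> 2046 MB, roughly the 2 GB purchased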

  • Mac OS X - Time Machine backup fails verification - What can I do to save the history?

    - by usermac75
    Hi, how do I make Time Machine create a new complete backup without losing older versions of backed-up files?

    Verbose: I am using Time Machine on OS X (Snow Leopard) to back up the whole computer to an external drive. I especially like the "history", i.e. the feature that allows you to restore an older version of a file.

    Problem: I had some data corruption on my external backup drive. I repaired it with Disk Utility, which found some faults. After that, the external drive was OK and I could use Time Machine again, so I let it make one more backup. Then I verified the backup along the lines of http://superuser.com/questions/47628/verifying-time-machine-backups, namely:

        sudo diff -qr . $HOME/Desktop 2>&1 | tee $HOME/timemachine-diff.log

    After running the command above, several differences and missing files were reported, approximately 200 files in all. While some of the missing files were caches or excluded directories, the differences do bother me, especially as some of my important documents are among the files listed as differing.

    How can I make sure that the data on the external drive is synced correctly? Is it possible to have Time Machine do a complete new backup without losing the history? Or have Time Machine compare all files for differences and re-write every file that differs? Or can I set some flag on the files that do not match to have them copied again (like the archive flag in Windows/DOS)? I'd rather not touch the files themselves, because I would like to keep their modification and creation dates. Thank you for your thoughts!
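
    For the verification step, a checksum-based dry run with rsync can be more informative than diff, since it itemizes what would change without touching anything; a sketch with hypothetical paths into the backup set:

        # -r recurse, -c compare by checksum, -n dry-run only, -i itemize differences
        rsync -rcni "$HOME/Documents/" \
            "/Volumes/TimeMachine/Backups.backupdb/MyMac/Latest/Macintosh HD/Users/me/Documents/"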

  • Nginx and 1000 WordPress Installs - Optimization

    - by GTE
    Hey, I'm trying to create a rather unusual (IMO) configuration where I have:

        nginx
        php-fastcgi
        mysql
        1000 separate WordPress installs (with WP Super Cache)

    Each WP install corresponds to a separate subdomain. Furthermore, I have 1000 cron jobs being called every hour that in turn call a WP plugin (using wget) which retrieves data from an API and posts it to the respective blog. This is all being run on a virtual server with 1024 MB of RAM, 4 shared processors, etc. The server is not doing well, especially during the times the cron jobs are executing: nginx constantly throws 504 errors and the site lags significantly.

    1) Am I crazy for having 1000 individual WP installs? Should I be using WP-MU, and would this help significantly? (I have certain plugin restrictions that make me prefer separate installs, but I could switch if need be.)

    2) Instead of having 1000 unique cron jobs, should I be calling, say, one bash script that then processes the 1000 HTTP requests I need? Could this be done one request after another instead of all at once? (See the sketch below.)

    3) Do you have any other suggestions for optimization? Should I be proxying to Apache instead of just using nginx, etc.? Any kind of advice would be appreciated. Thanks in advance
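
    A minimal sketch of the single-script idea from point 2 - one cron entry instead of 1000, hitting each blog strictly in sequence (the subdomain list file and plugin endpoint are hypothetical):

        #!/bin/bash
        # runs hourly from a single crontab entry; serializes the 1000 plugin calls
        while read -r sub; do
            wget -q -O /dev/null "http://${sub}.example.com/wp-content/plugins/fetch/run.php"
            sleep 1   # spread the load; ~1000 short requests should still fit within the hour
        done < /etc/blog-subdomains.txt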

  • Can't reach certain websites from my WiFi (with MacBook and iPhone)

    - by trampj
    I can't access certain websites from either my MacBook or my iPhone when connected to my WiFi. The same websites can be opened from a Windows computer connected to the same WiFi. This is what happens when I try to ping one of them:

        PING ilpost.it (151.1.175.113): 56 data bytes
        Request timeout for icmp_seq 0
        Request timeout for icmp_seq 1
        Request timeout for icmp_seq 2
        ...

    And when I try to traceroute it:

        host-001:~ j$ traceroute www.ilpost.it
        traceroute to ilpost.it (151.1.175.113), 64 hops max, 52 byte packets
         1  vodafonedslrouter (192.168.1.1)  2.965 ms  0.743 ms  0.745 ms
         2  * 2.96.54.77.rev.vodafone.pt (77.54.96.2)  12.076 ms  10.871 ms
         3  77.41.30.213.rev.vodafone.pt (213.30.41.77)  14.145 ms  10.693 ms  11.960 ms
         4  85.205.11.49 (85.205.11.49)  9.658 ms  8.946 ms  9.085 ms
         5  85.205.13.105 (85.205.13.105)  57.497 ms  57.621 ms  48.080 ms
         6  188.111.129.17 (188.111.129.17)  49.483 ms  51.338 ms  48.852 ms
         7  85.205.25.174 (85.205.25.174)  47.891 ms  49.219 ms  47.821 ms
         8  * * *
         9  * * *
        10  * * *
        11  * * *

    I've flushed my DNS cache, but nothing changed. This is quite dramatic, as it seems to depend on the 85.205.25.174 hop and I don't know how to avoid it. Any suggestions? I should add that 3 days ago everything worked fine; then it stopped.

  • IE8 page reload hangs

    - by Rod
    When a 7 MB HTM file is first opened (by clicking on the file icon or using Open With), both IE8 and Firefox display the file quickly. After the file is closed, Firefox will reopen it quickly, but IE8 appears to hang during the reopen. Clearing the IE cache does not help. However, IE will reopen the file quickly only if the File/Open/Browse feature of the menu bar is used (clicking on the file icon works only once between computer reboots). Testing suggests that the problem relates to the number of HTML hyperlinks pointing to another part of the file. There are many hyperlinks, but they are not a problem during the first load of the document (between computer reboots). What needs to be fixed to avoid the use of the workaround? Using Windows XP SP3.

    Update 6/23/12: Controlled testing shows that the number of hyperlinks is not the problem. The way this large file is opened makes the difference: 1) from the IE menu bar, File/Open/Browse is consistent and fast (but not as fast as FF); 2) clicking on the file name in the folder (even when IE is the default program for this file type) causes a much-delayed load of the file. Creating a smaller file demonstrates the delayed load but verifies that the load eventually occurs.

  • System requirements for running Windows 8 (basic Office use) in VirtualBox (Ubuntu as host OS)

    - by Tor Thommesen
    I want to run Windows 8 as a guest OS with VirtualBox on some ThinkPad (haven't bought one yet) running Ubuntu 12.04. Apart from virtualizing Windows 8 (mostly just for the Office suite), my needs are very modest: I don't need much more than Emacs and a browser. What I'd like to know is what kind of specs will be necessary to run Windows 8 well as a VM, using the Office apps. It would be a shame to waste money on overpowered hardware. Are there any official guidelines from Oracle or Microsoft on this? Would this Lenovo X220, for example, be sufficiently strong? The specs below were taken from this review.

        Intel Core i5-2520M dual-core processor (2.5 GHz, 3 MB cache, 3.2 GHz Turbo frequency)
        Windows 7 Professional (64-bit)
        12.5-inch Premium HD (1366 x 768) LED backlit display (IPS)
        Intel integrated HD graphics
        4 GB DDR3 (1333 MHz)
        320 GB Hitachi Travelstar hard drive (Z7K320)
        Intel Centrino Advanced-N 6205 (Taylor Peak) 2x2 AGN wireless card
        Intel 82579LM Gigabit Ethernet
        720p High Definition webcam
        Fingerprint reader
        6-cell battery (63 Wh) and optional slice battery (65 Wh)
        Dimensions: 12 (L) x 8.2 (W) x 0.5-1.5 (H) inches with 6-cell battery
        Weight: 3.5 pounds with 6-cell battery; 4.875 pounds with 6-cell battery and optional external battery slice
        Price as configured: $1,299.00 (starting at $979.00)
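
    As a rough sizing note: 64-bit Windows 8 officially asks for 2 GB of RAM, so on the 4 GB machine above a guest allocation like the sketch below would leave about 2 GB for Ubuntu and the host apps (VM name is hypothetical):

        VBoxManage modifyvm "win8-office" --memory 2048 --cpus 2 --vram 128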

  • Building a small server farm

    - by RayQuang
    Hi, I am planning to set up a tech startup company that will provide web application solutions. Eventually we hope to diversify into different areas, such as possibly social media or other services. For now we plan on running a high-demand (from 1,000 to 10,000 users in the first year) website running the application. This includes a MySQL database backend, email, and development servers. My question, then, is what type of server arrangement will work best: that is to say, should I have a small cluster of ultra-high-power machines (e.g. top-of-the-range Xeons with 12 GB RAM), or would it be better to have more, less powerful servers load-balanced? Should I go for 1-2U rack-mounted servers, or would tower servers be better for maintainability? Finally, I would also like to know what kind of internet connection and router I would need. I currently have 10 Mbit down and barely 1 Mbit up, but soon our area will have a fiber optic connection with speeds of up to 25 Mbit/s. Thanks in advance, RayQuang

    UPDATE: Sorry, I forgot to mention it - the platform I will be using is PHP with the APC code cache, probably running on Debian.

  • Trouble starting MySQL Community Server on Windows 7

    - by CodeAngel
    I have installed NetBeans 7 on my Windows 7 machine. In addition, MySQL Community Server 5.6.12 is installed with the MSI installer on the same Windows 7 PC, and the MySQL server is integrated with the NetBeans IDE. However, it is not possible to start or stop the MySQL server from the command prompt or from the NetBeans IDE; I am only able to start or stop the server from the Windows 7 services tool. It is also difficult to run SQL queries from the NetBeans IDE, even though it shows a connection to the MySQL server. I have added the my.ini file to the installation directory of the MySQL server, that is, C:\Program Files\MySQL\MySQL Server 5.6. Below is the my.ini file:

        # For advice on how to change settings please see
        # http://dev.mysql.com/doc/refman/5.6/en/server-configuration-defaults.html
        # *** DO NOT EDIT THIS FILE. It's a template which will be copied to the
        # *** default location during install, and will be replaced if you
        # *** upgrade to a newer version of MySQL.

        [mysqld]

        # Remove leading # and set to the amount of RAM for the most important data
        # cache in MySQL. Start at 70% of total RAM for dedicated server, else 10%.
        # innodb_buffer_pool_size = 128M

        # Remove leading # to turn on a very important data integrity option: logging
        # changes to the binary log between backups.
        # log_bin

        # These are commonly set, remove the # and set as required.
        # basedir = .....
        # datadir = .....
        port = 3306
        # server_id = .....

        # Remove leading # to set options mainly useful for reporting servers.
        # The server defaults are faster for transactions and fast SELECTs.
        # Adjust sizes as needed, experiment to find the optimal values.
        # join_buffer_size = 128M
        # sort_buffer_size = 2M
        # read_rnd_buffer_size = 2M

        sql_mode=NO_ENGINE_SUBSTITUTION,STRICT_TRANS_TABLES

    Any suggestions are welcome.
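
    For starting and stopping from a command prompt, the usual route is the Windows service control commands; a sketch, assuming the 5.6 MSI registered the service under its default name (verify with sc query if not):

        rem confirm the service name the installer registered
        sc query MySQL56
        rem start and stop the server (run from an elevated prompt)
        net start MySQL56
        net stop MySQL56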

  • MEX issues using WCF Test Client over net.pipe

    - by Beaud.
    I'm trying to use WCF Test Client along with named pipes, but I get the following error:

        Error: Cannot obtain Metadata from net.pipe://localhost/MyService

    Here's my web.config:

        <system.serviceModel>
          <services>
            <service name="MyNamespace.MyService" behaviorConfiguration="MEX">
              <endpoint address="net.pipe://localhost/MyService"
                        binding="netNamedPipeBinding"
                        contract="MyNamespace.MyService.IMyService" />
              <endpoint address="net.pipe://localhost/MEX"
                        binding="mexNamedPipeBinding"
                        contract="IMetadataExchange" />
            </service>
          </services>
          <client>
            <endpoint address="net.pipe://localhost/MyService"
                      binding="netNamedPipeBinding"
                      contract="MyNamespace.MyService.IMyService" />
          </client>
          <behaviors>
            <serviceBehaviors>
              <behavior name="MEX">
                <serviceMetadata />
              </behavior>
            </serviceBehaviors>
          </behaviors>
        </system.serviceModel>

    What's wrong? I have tried both net.pipe://localhost/MyService and net.pipe://localhost/MEX. Any help would be appreciated, thanks!
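
    Since this is a web.config, the service is presumably hosted in IIS/WAS, where net.pipe is disabled by default; a sketch of enabling it (the site and application names are assumptions):

        rem enable the net.pipe protocol binding on the site and application
        %windir%\system32\inetsrv\appcmd.exe set site "Default Web Site" -+bindings.[protocol='net.pipe',bindingInformation='*']
        %windir%\system32\inetsrv\appcmd.exe set app "Default Web Site/MyService" /enabledProtocols:http,net.pipe
        rem make sure the Net.Pipe Listener Adapter service is running
        net start NetPipeActivator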

  • Java JasperReports NullPointerException

    - by William Welch
    I am new to Java and I am running into an issue that I can't figure out. I inherited this project, and I have the following code in one of my scriptlets:

        DefaultLogger.logMessage("DEBUG path: " + reportFile.getPath());
        DefaultLogger.logMessage("DEBUG parameters: " + parameters);
        DefaultLogger.logMessage("DEBUG jr: " + jr);
        bytes = JasperRunManager.runReportToPdf(reportFile.getPath(), parameters, jr);

    And I am getting the following results (the fourth stack frame corresponds to line 287 in FootwearReportsServlet.doGet, i.e. the runReportToPdf call above):

        DEBUG path: C:\Development\JavaWorkspaces\Workspace1\.metadata\.plugins\org.eclipse.wst.server.core\tmp0\wtpwebapps\RSLDevelopmentStandard\reports\templates\footwear\RslFootwearReport.jasper
        DEBUG parameters: {class_list_subreport=net.sf.jasperreports.engine.JasperReport@b07af1, signature_path=images/logo_reports.jpg, submission_id=20070213154168780, test_results_subreport=net.sf.jasperreports.engine.JasperReport@5795ce, logo_path=images/logos_3.gif, report_connection_secondary=com.mysql.jdbc.JDBC4Connection@2c39d2, testing_packages_subreport=net.sf.jasperreports.engine.JasperReport@1883d5f, signature_path2=images/logo_reports.jpg, tpid_subreport=net.sf.jasperreports.engine.JasperReport@17531fd, first_page_subreport=net.sf.jasperreports.engine.JasperReport@12504e0}
        DEBUG jr: net.sf.jasperreports.engine.data.JRMapCollectionDataSource@1630eb6
        Apr 29, 2010 4:15:13 PM org.apache.catalina.core.StandardWrapperValve invoke
        SEVERE: Servlet.service() for servlet FootwearReportsServlet threw exception
        java.lang.NullPointerException
            at net.sf.jasperreports.engine.fill.JRFiller.fillReport(JRFiller.java:89)
            at net.sf.jasperreports.engine.JasperFillManager.fillReport(JasperFillManager.java:601)
            at net.sf.jasperreports.engine.JasperFillManager.fillReport(JasperFillManager.java:517)
            at net.sf.jasperreports.engine.JasperRunManager.runReportToPdf(JasperRunManager.java:385)
            at com.rsl.reports.FootwearReportsServlet.doGet(FootwearReportsServlet.java:287)
            at javax.servlet.http.HttpServlet.service(HttpServlet.java:617)
            at javax.servlet.http.HttpServlet.service(HttpServlet.java:717)
            at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:290)
            at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:206)
            at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:233)
            at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:191)
            at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:127)
            at org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:102)
            at org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:109)
            at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:298)
            at org.apache.coyote.http11.Http11Processor.process(Http11Processor.java:852)
            at org.apache.coyote.http11.Http11Protocol$Http11ConnectionHandler.process(Http11Protocol.java:588)
            at org.apache.tomcat.util.net.JIoEndpoint$Worker.run(JIoEndpoint.java:489)
            at java.lang.Thread.run(Unknown Source)

    What I can't figure out is where the null reference is. From the debug lines I can see that each parameter has a value. Could it be referring to a bad path to one of the images? Any ideas? For some reason my server won't start in debug mode in Eclipse, so I am having trouble figuring this out.

  • Unable to "Run on Server" a webapp from Eclipse

    - by Ram
    When running my WebApp project from Eclipse, most of the time it runs correctly. But sometimes, when stopping the server, I mistakenly kill it in the "Console" view instead of using "Stop" in the "Servers" view. After that, when running a clean on the project I get this:

        java.lang.NullPointerException
            at org.eclipse.wst.common.componentcore.internal.util.VirtualReferenceUtilities.getDefaultProjectArchiveName(VirtualReferenceUtilities.java:81)
            at org.eclipse.jst.j2ee.componentcore.J2EEModuleVirtualComponent.getJavaClasspathReferences(J2EEModuleVirtualComponent.java:332)
            at org.eclipse.jst.j2ee.componentcore.J2EEModuleVirtualComponent.getNonManifestRefs(J2EEModuleVirtualComponent.java:236)
            at org.eclipse.jst.j2ee.componentcore.J2EEModuleVirtualComponent.getReferences(J2EEModuleVirtualComponent.java:160)
            at org.eclipse.jst.j2ee.componentcore.J2EEModuleVirtualComponent.getReferences(J2EEModuleVirtualComponent.java:208)
            at org.eclipse.jst.j2ee.componentcore.J2EEModuleVirtualComponent.getReferences(J2EEModuleVirtualComponent.java:201)
            at org.eclipse.jst.common.internal.modulecore.SingleRootUtil.hasConsumableReferences(SingleRootUtil.java:217)
            at org.eclipse.jst.common.internal.modulecore.SingleRootUtil.validateSingleRoot(SingleRootUtil.java:165)
            at org.eclipse.jst.common.internal.modulecore.SingleRootUtil.isSingleRoot(SingleRootUtil.java:93)
            at org.eclipse.jst.common.internal.modulecore.SingleRootExportParticipant.canOptimize(SingleRootExportParticipant.java:84)
            at org.eclipse.wst.common.componentcore.internal.flat.FlatVirtualComponent.canOptimize(FlatVirtualComponent.java:136)
            at org.eclipse.wst.common.componentcore.internal.flat.FlatVirtualComponent.cacheResources(FlatVirtualComponent.java:118)
            at org.eclipse.wst.common.componentcore.internal.flat.FlatVirtualComponent.fetchResources(FlatVirtualComponent.java:101)
            at org.eclipse.wst.web.internal.deployables.FlatComponentDeployable.members(FlatComponentDeployable.java:147)
            at org.eclipse.wst.server.core.internal.ModulePublishInfo.hasDelta(ModulePublishInfo.java:418)
            at org.eclipse.wst.server.core.internal.ServerPublishInfo.hasDelta(ServerPublishInfo.java:443)
            at org.eclipse.wst.server.core.internal.Server.hasPublishedResourceDelta(Server.java:1539)
            at org.eclipse.wst.server.core.internal.Server$ResourceChangeJob$1.visit(Server.java:214)
            at org.eclipse.wst.server.core.internal.Server.visitModule(Server.java:2929)
            at org.eclipse.wst.server.core.internal.Server.visit(Server.java:2913)
            at org.eclipse.wst.server.core.internal.Server$ResourceChangeJob.run(Server.java:225)
            at org.eclipse.core.internal.jobs.Worker.run(Worker.java:54)

    And while launching I get this:

        SEVERE: Error starting static Resources
        java.lang.IllegalArgumentException: Document base D:\workspace\.metadata\.plugins\org.eclipse.wst.server.core\tmp0\wtpwebapps\Webapp does not exist or is not a readable directory
            at org.apache.naming.resources.FileDirContext.setDocBase(FileDirContext.java:142)
            at org.apache.catalina.core.StandardContext.resourcesStart(StandardContext.java:4319)
            at org.apache.catalina.core.StandardContext.start(StandardContext.java:4488)
            at org.apache.catalina.core.ContainerBase.start(ContainerBase.java:1053)
            at org.apache.catalina.core.StandardHost.start(StandardHost.java:840)
            at org.apache.catalina.core.ContainerBase.start(ContainerBase.java:1053)
            at org.apache.catalina.core.StandardEngine.start(StandardEngine.java:463)

    Please help me recover from this.

  • Visual Studio: Add Item / Add as link rather than just Add

    - by Pete d'Oronzio
    I'm new to Visual Studio, coming from Delphi. I have a directory tree full of .cs files (root is \Common). I also have a directory tree full of applications (root is \Applications). Finally, I've got a tree full of assemblies (root is \Assemblies). I'd like to keep my .cs files in the Common tree and all the environment voodoo (solutions, projects, settings, metadata, debug data, bin, etc.) in the Assemblies tree.

    So, for a simple example, I've got an assembly called PdMagic.Common.Math.dll. The solution and project are located in \Assemblies\Common\Math. All of its source (.cs) files are in \Common\Math (matrix.cs, trig.cs, mathtypes.cs, mathfuncs.cs, stats.cs, etc.). When I use Add Existing Item to add matrix.cs to my project, a copy of it is added to the \Assemblies\Common\Math folder. I just want to reference it; I don't want multiple copies lying around. I've tried Add Existing Item with the dropdown set to "Add As Link" rather than just "Add", and that seems to do what I want.

    Question: What is the "best practice" for this sort of thing? Do most people just put those .cs files in the same folder as the project? Why isn't "Add As Link" the default? Thanks!
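
    For reference, "Add As Link" stores only a relative reference in the .csproj rather than copying the file; a sketch of what the entry looks like for the matrix.cs example above (the exact relative path depends on where the project file sits):

        <ItemGroup>
          <Compile Include="..\..\..\Common\Math\matrix.cs">
            <Link>matrix.cs</Link>
          </Compile>
        </ItemGroup>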

  • Web service CopyIntoItems is not working to upload a file to SharePoint

    - by Joeri
    The following piece of C# always fails, printing a result length of 1, an error code of Unknown, and the error message "Object reference not set to an instance of an object". Does anybody have an idea what I am missing?

        try
        {
            // Copy web service settings
            String strUserName = "abc";
            String strPassword = "abc";
            String strDomain = "SVR03";
            String FileName = "Filename.xls";

            WebReference.Copy copyService = new WebReference.Copy();
            copyService.Url = "http://192.168.11.253/_vti_bin/copy.asmx";
            copyService.Credentials = new NetworkCredential(strUserName, strPassword, strDomain);

            // Filestream of attachment
            FileStream MyFile = new FileStream(@"C:\temp\28200.xls", FileMode.Open, FileAccess.Read);

            // Read the attachment into a variable
            byte[] Contents = new byte[MyFile.Length];
            MyFile.Read(Contents, 0, (int)MyFile.Length);
            MyFile.Close();

            // Destination; if the file does not exist, a new one is created
            String[] destinationUrl = { "http://192.168.11.253/Shared Documents/28200.xls" };

            // Set up some SharePoint metadata fields
            WebReference.FieldInformation fieldInfo = new WebReference.FieldInformation();
            WebReference.FieldInformation[] ListFields = { fieldInfo };

            // Copy the document from local disk to SharePoint
            WebReference.CopyResult[] result;
            uint NewListId = copyService.CopyIntoItems(FileName, destinationUrl, ListFields, Contents, out result);

            if (result.Length < 1)
                Console.WriteLine("Unable to create a document library item");
            else
            {
                Console.WriteLine(result.Length);
                Console.WriteLine(result[0].ErrorCode);
                Console.WriteLine(result[0].ErrorMessage);
                Console.WriteLine(result[0].DestinationUrl);
            }
        }
        catch (Exception ex)
        {
            Console.WriteLine("Exception: {0}", ex.Message);
        }
