Search Results

Search found 9311 results on 373 pages for 'cache dependency'.


  • Dropbox context menu missing in OS X

    - by slhck
    Problem: My Dropbox context menu is missing in OS X Snow Leopard (10.6.8). While the Dropbox service runs normally, Finder doesn't show the sync icons and doesn't give me the ability to browse files on the website or copy the public link.

    What I've tried:
    - Removed ~/.dropbox and ~/Dropbox/.dropbox.cache
    - Reinstalled Dropbox.app (both 1.4.7 stable and 1.5.0 experimental) and went through the setup again
    - Restarted Finder
    - Logged out and back in

    All of these I've done over and over again, in random permutations. I've made sure that Dropbox appears in the Login Items under my account (and I've never touched that). I don't know whether ~/Library/Contextual Menu Items should contain a Dropbox plugin or not. In any case, I can't get the icons or the menu to appear.
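
    A quick check from Terminal narrows down whether a Finder plugin is present at all (a sketch; these are the usual plugin locations on 10.6, and the exact bundle name varies by Dropbox version):

      # look for a Dropbox contextual-menu/Finder plugin in the usual places
      ls -la ~/Library/Contextual\ Menu\ Items/ /Library/Contextual\ Menu\ Items/ 2>/dev/null
      # force Finder to restart and re-scan its plugins
      killall Finder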

  • Mocking my custom dependencies with Spring

    - by Brabster
    Is it possible to declare mocks for my own classes declaratively with Spring, using some mocking framework? I know there are some standard mocks available in Spring, but I'd like to be able to mock out my own classes declaratively too. Just to check I'm not going about this the wrong way: the idea is to have a pair of a JUnit test and a Spring config for each integration test I want to do, mocking everything except the specific integration aspect I'm testing (say I had a dependency on two different data services: test one at a time), and minimising the amount of repeated Java code specifying the mocks.
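
    One common declarative trick (a sketch, assuming Mockito is on the test classpath; the DataService type and bean name are hypothetical) is to declare the mock as an ordinary bean built by Mockito's static factory method:

      <!-- test-context.xml: the mock takes the real bean's id, so wiring is unchanged -->
      <bean id="dataService" class="org.mockito.Mockito" factory-method="mock">
          <constructor-arg value="com.example.DataService"/>
      </bean>

    Each JUnit test then loads this context plus a second config that declares the one real bean whose integration is actually under test.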

  • e2fsck extremely slow, although enough memory exists

    - by kaefert
    I've got this external USB disk:

      kaefert@blechmobil:~$ lsusb -s 2:3
      Bus 002 Device 003: ID 0bc2:3320 Seagate RSS LLC

    As can be seen in this dmesg output, there is a problem that prevents the disk from being mounted:

      kaefert@blechmobil:~$ dmesg | grep sdb
      [  114.474342] sd 5:0:0:0: [sdb] 732566645 4096-byte logical blocks: (3.00 TB/2.72 TiB)
      [  114.475089] sd 5:0:0:0: [sdb] Write Protect is off
      [  114.475092] sd 5:0:0:0: [sdb] Mode Sense: 43 00 00 00
      [  114.475959] sd 5:0:0:0: [sdb] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
      [  114.477093] sd 5:0:0:0: [sdb] 732566645 4096-byte logical blocks: (3.00 TB/2.72 TiB)
      [  114.501649] sdb: sdb1
      [  114.502717] sd 5:0:0:0: [sdb] 732566645 4096-byte logical blocks: (3.00 TB/2.72 TiB)
      [  114.504354] sd 5:0:0:0: [sdb] Attached SCSI disk
      [  116.804408] EXT4-fs (sdb1): ext4_check_descriptors: Checksum for group 3976 failed (47397!=61519)
      [  116.804413] EXT4-fs (sdb1): group descriptors corrupted!

    So I fired up my favorite partition manager, gparted, and told it to verify and repair the partition sdb1. This made gparted call e2fsck (version 1.42.4 (12-Jun-2012)):

      e2fsck -f -y -v /dev/sdb1

    Although gparted called e2fsck with the "-v" option, sadly it doesn't show me the output of my e2fsck process (bug report: https://bugzilla.gnome.org/show_bug.cgi?id=467925). I started this whole thing on Sunday evening (2012-11-04 22:00), so about 48 hours ago; this is what htop says about it now (2012-11-06 19:00):

      PID  USER PRI NI VIRT  RES   SHR S CPU% MEM% TIME+    Command
      3704 root  39 19 1560M 1166M 768 R 98.0 19.5 42h56:43 e2fsck -f -y -v /dev/sdb1

    I found a few posts on the internet that discuss e2fsck running slowly, for example http://gparted-forum.surf4.info/viewtopic.php?id=13613, where they write that it's a good idea to check whether the disk is just that slow, because it might be damaged. I think these outputs tell me that this is not the case here:

      kaefert@blechmobil:~$ sudo hdparm -tT /dev/sdb
      /dev/sdb:
       Timing cached reads:   3562 MB in 2.00 seconds = 1783.29 MB/sec
       Timing buffered disk reads:  82 MB in 3.01 seconds = 27.26 MB/sec

      kaefert@blechmobil:~$ sudo hdparm /dev/sdb
      /dev/sdb:
       multcount = 0 (off)
       readonly  = 0 (off)
       readahead = 256 (on)
       geometry  = 364801/255/63, sectors = 5860533160, start = 0

    However, although I can read quickly from that disk, this disk speed doesn't seem to be used by e2fsck, considering tools like gkrellm and iotop, or this:

      kaefert@blechmobil:~$ iostat -x
      Linux 3.2.0-2-amd64 (blechmobil)  2012-11-06  _x86_64_  (2 CPU)
      avg-cpu: %user %nice %system %iowait %steal %idle
               14,24 47,81   14,63    0,95   0,00 22,37
      Device: rrqm/s wrqm/s  r/s  w/s  rkB/s  wkB/s avgrq-sz avgqu-sz await r_await w_await svctm %util
      sda       0,59   8,29 2,42 5,14  43,17 160,17    53,75     0,30 39,80    8,72   54,42  3,95  2,99
      sdb     137,54   5,48 9,23 0,20 587,07  22,73   129,35     0,07  7,70    7,51   16,18  2,17  2,04

    I researched a little on how to find out what e2fsck is doing with all that processor time, and I found the tool strace, which gives me this:

      kaefert@blechmobil:~$ sudo strace -p3704
      lseek(4, 41026998272, SEEK_SET) = 41026998272
      write(4, "\212\354K[_\361\3nl\212\245\352\255jR\303\354\312Yv\334p\253r\217\265\3567\325\257\3766"..., 4096) = 4096
      lseek(4, 48404766720, SEEK_SET) = 48404766720
      read(4, "\7t\260\366\346\337\304\210\33\267j\35\377'\31f\372\252\ffU\317.y\211\360\36\240c\30`\34"..., 4096) = 4096
      lseek(4, 41027002368, SEEK_SET) = 41027002368
      write(4, "\232]7Ws\321\352\t\1@[+5\263\334\276{\343zZx\352\21\316`1\271[\202\350R`"..., 4096) = 4096
      lseek(4, 48404770816, SEEK_SET) = 48404770816
      read(4, "\17\362r\230\327\25\346//\210H\v\311\3237\323K\304\306\361a\223\311\324\272?\213\tq \370\24"..., 4096) = 4096
      lseek(4, 41027006464, SEEK_SET) = 41027006464
      write(4, "\367yy>x\216?=\324Z\305\351\376&\25\244\210\271\22\306}\276\237\370(\214\205G\262\360\257#"..., 4096) = 4096
      lseek(4, 48404774912, SEEK_SET) = 48404774912
      read(4, "\365\25\0\21|T\0\21}3t_\272\373\222k\r\177\303\1\201\261\221$\261B\232\3142\21U\316"..., 4096) = 4096
      ^CProcess 3704 detached

    That is around 16 of these lines every second, so 4 read and 4 write operations every second, which I don't consider to be a lot. And finally, my question: will this process ever finish? If those numbers from lseek (48404774912) represent byte offsets, that would be something like 45 gigabytes into this 3-terabyte disk, which would give me 134 days to go if the speed stays constant and e2fsck scans the disk completely, linearly, and only once. Do you have some advice for me? I have most of the data on that disk elsewhere, but I've put a lot of hours into sorting and merging it onto this disk, so I would prefer to get this disk up and running again without reformatting it. I don't think the hardware is damaged, since the disk is only a few months old and I can't see any I/O errors in the dmesg output.

    UPDATE: I just looked at the strace output again (2012-11-06 23:00); now it looks like this:

      lseek(4, 1419860611072, SEEK_SET) = 1419860611072
      read(4, "3#\f\2447\335\0\22A\355\374\276j\204'\207|\217V|\23\245[\7VP\251\242\276\207\317:"..., 4096) = 4096
      lseek(4, 43018145792, SEEK_SET) = 43018145792
      write(4, "]\206\231\342Y\204-2I\362\242\344\6R\205\361\324\177\265\317C\334V\324\260\334\275t=\10F."..., 4096) = 4096
      lseek(4, 1419860615168, SEEK_SET) = 1419860615168
      read(4, "\262\305\314Y\367\37x\326\245\226\226\320N\333$s\34\204\311\222\7\315\236\336\300TK\337\264\236\211n"..., 4096) = 4096
      lseek(4, 43018149888, SEEK_SET) = 43018149888
      write(4, "\271\224m\311\224\25!I\376\16;\377\0\223H\25Yd\201Y\342\r\203\271\24eG<\202{\373V"..., 4096) = 4096
      lseek(4, 1419860619264, SEEK_SET) = 1419860619264
      read(4, ";d\360\177\n\346\253\210\222|\250\352T\335M\33\260\320\261\7g\222P\344H?t\240\20\2548\310"..., 4096) = 4096
      lseek(4, 43018153984, SEEK_SET) = 43018153984
      write(4, "\360\252j\317\310\251G\227\335{\214`\341\267\31Y\202\360\v\374\307oq\3063\217Z\223\313\36D\211"..., 4096) = 4096

    So the offsets in the lseeks before the reads, like 1419860611072, are already a lot bigger, standing for 1.29 terabytes if the numbers are bytes. Progress doesn't seem to be linear on a big scale; maybe only some areas need work, with big gaps in between. (Times are in CET.)
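
    One e2fsck knob is directly relevant when checking a huge filesystem (a sketch; the scratch directory path is an example): e2fsck.conf can tell fsck to keep its working tables in scratch files instead of RAM, and -C0 adds a progress bar so a future run isn't a black box.

      # /etc/e2fsck.conf
      [scratch_files]
      directory = /var/cache/e2fsck   # must exist; spills fsck's tables to disk

      # re-run on the unmounted device with a progress indicator
      e2fsck -f -y -v -C0 /dev/sdb1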

  • Visual Studio: Link executable

    - by smerlin
    Let's say I have:
    - a static library project called "LIB"
    - an application project called "LIBAPP"
    - an application project called "APP"
    - an application project called "APPTEST"

    When I add "LIB" to LIBAPP's Project Dependencies, Visual Studio automatically links LIBAPP against LIB. But when I add APP to APPTEST's Project Dependencies, it doesn't. Since I am unit-testing APP's classes in APPTEST, I have to link against APP, so I am currently manually linking against all the *.obj files of APP (hundreds of them). Since I have to change APPTEST's link targets every time I add or remove a *.cpp file from APP, this isn't a nice solution. Is there a way to force Visual Studio to do this for me automatically, like it does when adding a static library project dependency?
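
    A common way around this, sketched below, is structural rather than a linker switch: move everything testable out of APP into a static library (say "APPCORE", a hypothetical name) and leave APP as a thin entry point, so both APP and APPTEST can take an ordinary library dependency on APPCORE.

      // APP/main.cpp -- the executable keeps only the entry point
      #include "app_core.h"   // all real code lives in the APPCORE static lib

      int main(int argc, char* argv[]) {
          return app_core::Run(argc, argv);  // hypothetical entry function
      }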

  • Tuning MySQL to consume less memory

    - by Alex
    I have a VM which has 2GB RAM (full specs), and I am setting up a site which has one table in particular with over a million records. There's little or no usage of this particular database (perhaps once or twice a day), but simply running MySQL grinds the whole server to a halt. I've looked through the top results but nothing is really denting the CPU; the memory seems to be the issue. The site isn't even live or taking requests yet. The memory situation looks like this:

      # free -m
                   total  used  free  shared  buffers  cached
      Mem:          2006  1880   126       0        3      53
      -/+ buffers/cache:  1823   183
      Swap:         2047   345  1702

    Are there any good pointers to tune MySQL to stop hogging the system memory? Thanks very much. EDIT (requested by 8bit): http://tny.cz/b41a0b12
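
    A minimal my.cnf sketch for a mostly idle 2 GB box (the values are illustrative starting points, not tuned numbers; which ones matter depends on whether the big table is MyISAM or InnoDB):

      [mysqld]
      # shrink the big global buffers first
      innodb_buffer_pool_size = 128M
      key_buffer_size         = 16M
      # per-connection buffers multiply by max_connections
      max_connections         = 25
      sort_buffer_size        = 256K
      read_buffer_size        = 128K
      tmp_table_size          = 16M
      max_heap_table_size     = 16M
      # if every table is MyISAM, InnoDB can be disabled entirely
      # skip-innodb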

  • PHP Runs Very Slow on IIS7. Need Help optimizing our config

    - by Kendor
    Am running a PHP-based web app on our Windows 2008 cloud-based server. The app runs fine outside of our environment (e.g. on a different IIS server), but it is VERY slow in our environment. Based on googling, this is a relatively common situation. I installed PHP and MySQL via the IIS web deployment method. Here's our setup:

    - Windows 2008 Server Enterprise SP2 (32-bit)
    - Microsoft-IIS/7.0
    - MySQL client version: mysqlnd 5.0.8-dev - 20102224 $Revision: 321634 $
    - PHP extension: mysqli
    - Update for IIS 7.0 FastCGI
    - Windows Cache Extension 1.1 for PHP 5.3

    I had read elsewhere that IPv6 might be an issue, so I turned it off on the network adapter. The app is using localhost as its connection. Be easy on me, as I'm a bit green about some of these components. Also, rewriting or modifying the PHP app is NOT an option. I'm reasonably SURE that our config is the issue.
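
    One frequent culprit with exactly this stack is "localhost" resolving to the IPv6 loopback ::1 first, so every MySQL connection waits out a timeout before falling back to IPv4. Since the app can't be changed, a hedged workaround is to pin the name in the hosts file (standard Windows path shown):

      # C:\Windows\System32\drivers\etc\hosts
      127.0.0.1    localhost
      # and make sure no "::1 localhost" line is active above it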

  • Unidentified network: How to configure TCP/IPv4 for Win7?

    - by Zolomon
    When I try to connect to the internet I keep getting the error "Unidentified network". I've tried numerous attempts at restoring access without success: IP release, flushing the DNS cache, reinstalling the NIC, reactivating the NIC, resetting the router, and so on. I've read several times that it's my default gateway that's wrong. Until now, automatic IP/DNS configuration worked without any problems, and then it stopped working for some reason. Does anyone know how I should specify the IP? My subnet mask is 255.255.255.0 and the default gateway is 192.168.0.1, but I have no idea how to determine what IP I should set. I use a D-Link DIR-655, and other computers on the network have IPs like 192.168.0.194 and 192.168.0.197. (I'm completely lost and am trying to cool down after two weekends of debugging filled with despair.)
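
    For reference, a static IPv4 configuration consistent with those numbers can be set from an elevated prompt (a sketch: the address is an example that should sit outside the router's DHCP pool, and the interface name must match what ipconfig shows):

      netsh interface ipv4 set address name="Local Area Connection" static 192.168.0.200 255.255.255.0 192.168.0.1
      netsh interface ipv4 set dnsservers name="Local Area Connection" static 192.168.0.1 primary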

  • Unable to run Microsoft Office 2010 install file

    - by Len
    This problem began when I noticed that the icons in the Windows 7 task bar for MS Word and Outlook were generic. I rebuilt the icon cache. Still not the right icons, but not the generic "document" icons either, and both are identical to each other. The two programs seem to be working OK. So then I tried to repair MS Office. I ran the setup file. It extracts the files, I get the splash screen, and then the message, "Setup has stopped working. A problem caused the program to stop working correctly. Windows will close the program and notify you if a solution is available." with a "Close program" button. Microsoft does not notify me about a solution.

    What I have tried:
    1. running two other copies of the setup program;
    2. doing an in-place re-install of Windows 7.

  • How do I configure namecheap for "arbitrarily-nested" wildcard subdomains?

    - by rabidsnail
    I'm trying to set up something like nyud.net, where any arbitrary chain of subdomains resolves to the same CNAME record (which in my case points to an Amazon Elastic Load Balancer). For example, www.google.com.nyud.net:8080 points to one of their cache servers, which looks at the Host header and returns www.google.com. I'm using Namecheap as my DNS host. Adding a CNAME record for *.mydomain.com doesn't seem to do anything (nslookup gives NXDOMAIN for all subdomains). What do I have to do to set this up? Do I have to use something fancier than Namecheap (like Route 53)?
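
    For reference, the record being described looks like this in BIND zone-file terms (names are placeholders; a DNS wildcard matches multiple labels, so a.b.mydomain.com is covered as long as no more specific name exists):

      *.mydomain.com.   300   IN   CNAME   my-lb-123456.us-east-1.elb.amazonaws.com.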

  • Makefile option/rule to handle missing/removed source files

    - by b3nj1
    http://stackoverflow.com/questions/239004/need-a-makefile-dependency-rule-that-can-handle-missing-files gives some pointers on how to handle removed source files when generating .o files. I'm using gcc/g++, so adding the -MP option when generating dependencies works great for me, until I get to the link stage with my .a file. What about updating archives/libraries when input sources go away? The following works OK for me, but is there a cleaner way (i.e., something as straightforward as the g++ -MP option)?

      # BUILD_DIR is my target directory (includes Debug/Release and target arch)
      # SRC_OUTS are my .o files
      LIBATLS_HAS = $(shell nm ${BUILD_DIR}/libatls.a | grep ${BUILD_DIR} | sed -e 's/.*(//' -e 's/).*://')
      LIBATLS_REMOVE = $(filter-out $(notdir ${SRC_OUTS}), ${LIBATLS_HAS})

      ${BUILD_DIR}/libatls.a: ${BUILD_DIR}/libatls.a(${SRC_OUTS})
      ifneq ($(strip ${LIBATLS_REMOVE}),)
              $(AR) -d $@ ${LIBATLS_REMOVE}
      endif

  • How to manage maintenance/bug-fix branches in Subversion when third-party installers are involved?

    - by Mike Spross
    We have a suite of related products written in VB6, with some C# and VB.NET projects, and all the source is kept in a single Subversion repository. We haven't been using branches in Subversion (although we do tag releases now), and simply do all development in trunk, creating new releases when the trunk is stable enough. This causes no end of grief when we release a new version, issues are found with it, and we have already begun working on new features or major changes to the trunk. In the past, we would address this in one of two ways, depending on the severity of the issues and how stable we thought the trunk was:

    1. Hurry to stabilize the trunk, fix the issues, and then release a maintenance update based on the HEAD revision. This had the side effect of releases that fixed the bugs but introduced new issues, because of half-finished features or bugfixes that were in trunk.
    2. Make customers wait until the next official release, which is usually a few months away.

    We want to change our policies to better deal with this situation. I was considering creating a "maintenance branch" in Subversion whenever I tag an official release. New development would continue in trunk, and I could periodically merge specific fixes from trunk into the maintenance branch, creating a maintenance release when enough fixes have accumulated, while we continue to work on the next major update in parallel. I know we could instead keep a more stable trunk and create a branch for new updates, but keeping current development in trunk seems simpler to me.

    The major problem is that while we can easily branch the source code from a release tag and recompile it to get the binaries for that release, I'm not sure how to handle the setup and installer projects. We use QSetup to create all of our setup programs, and right now when we need to modify a setup project, we just edit the project file in place (all the setup projects, and any dependencies that we don't compile ourselves, are stored on a separate server, and we make sure to always compile the setup projects on that machine only). However, since we may add or remove files from the setup as our code changes, there is no guarantee that today's setup projects will work with yesterday's source code.

    I was going to put all the QSetup projects in Subversion to deal with this, but I see some problems with that approach. I want the creation of setup programs to be as automated as possible; at the very least, I want a separate build machine where I can build the release that I want (grabbing the code from Subversion first), grab the setup project for that release from Subversion, recompile the setup, and then copy the setup to another place on the network for QA testing and eventual release to customers. However, when someone needs to change a setup project (to add a new dependency that trunk now requires, or to make other changes), there is a problem: if they treat it like a source file and check it out on their own machine to edit it, they won't be able to add files to the project unless they first copy the files they need to add to the build machine (so they are available to other developers), then copy all the other dependencies from the build machine to their own machine, making sure to match the folder structure exactly. The issue here is that QSetup uses absolute paths for any files added to a setup project. This would mean installing a bunch of setup dependencies onto development machines, which seems messy (and which could destabilize the development environment if someone accidentally runs the setup project on their machine).

    Also, how do we manage third-party dependencies? For example, if the current maintenance branch used MSXML 3.0 and the trunk now requires MSXML 4.0, we can't go back and create a maintenance release if we have already replaced the MSXML library on the build machine with the latest version (assuming both versions have the same filename). The only solutions I can think of are to either put all the third-party dependencies in Subversion along with the source code, or to make sure we put different library versions in separate folders (i.e. C:\Setup\Dependencies\MSXML\v3.0 and C:\Setup\Dependencies\MSXML\v4.0). Is one way "better" or more common than the other? Are there any best practices for dealing with this situation? Basically, if we release v2.0 of our software, we want to be able to release v2.0.1, v2.0.2, and v2.0.3 while we work on v2.1, but the whole setup/installation project and setup dependency issue is making this more complicated than the typical "just create a branch in Subversion and recompile as needed" answer.
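
    For the branching half of this, the Subversion mechanics are simple (a sketch; names are placeholders, and the caret URL syntax needs Subversion 1.6+):

      # tag the release, then branch the maintenance line from the tag
      svn copy ^/trunk ^/tags/v2.0 -m "Tag v2.0"
      svn copy ^/tags/v2.0 ^/branches/v2.0-maint -m "Maintenance branch for 2.0.x"

      # later, from a v2.0-maint working copy: cherry-pick one trunk fix
      svn merge -c 12345 ^/trunk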

  • Trouble logging into a Mac share from a Windows PC on the network

    - by villares
    I have this mixed network and usually log into the Macs from the Windows XP Home machines and vice versa. I have no real networking knowledge; things just seem to work, more or less, with the default settings. Now I've got a new Snow Leopard Mac with a shared folder (I added the user names of the Windows users in the sharing preferences), and the trouble is that some machines can open the share and others can't. I can't see the difference. It feels like some Windows machines have a "cache" and won't ask for the share password, just deny access. I can also see old shares proposed in the Windows "Add Network Place" wizard.
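
    That "cache" suspicion is plausible: Windows XP keeps one SMB session per server and silently reuses its credentials. A hedged check from one of the affected machines (the server name is a placeholder):

      :: list current SMB sessions and their state
      net use
      :: drop the cached session so the next attempt prompts for credentials again
      net use \\macname /delete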

  • Recurring Apache 2.0.52 error on CentOS 4 - 'could not create `rewrite_log_lock`'

    - by warren
    I have been seeing a recurring issue on my web server:

      [Sun May 16 03:10:19 2010] [crit] (28)No space left on device: mod_rewrite: could not create rewrite_log_lock Configuration Failed
      [Sun May 16 04:10:05 2010] [crit] (28)No space left on device: mod_rewrite: could not create rewrite_log_lock Configuration Failed
      [Sun May 16 05:10:04 2010] [crit] (28)No space left on device: mod_rewrite: could not create rewrite_log_lock Configuration Failed
      [Sun May 16 05:17:13 2010] [crit] (28)No space left on device: mod_rewrite: could not create rewrite_log_lock Configuration Failed

    So far, the only fix I have found when this happens is to reboot my server. This is non-ideal :-\ Restarting httpd does not clear the error. df indicates I have 20+ gigs free, and top and free both report 800+ megs (or 1.2 gigs) free:

      > df -h
      Filesystem  Size  Used  Avail  Use%  Mounted on
      /dev/simfs   40G   18G    23G   44%  /

      > free
                   total    used     free  shared  buffers  cached
      Mem:       1474560  300832  1173728       0        0       0
      -/+ buffers/cache:  300832  1173728

    Any ideas on why this would recur, and how to prevent/fix it?
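
    Worth noting: for mod_rewrite's lock, ENOSPC ("No space left on device") usually means the kernel has run out of SysV semaphores, not disk space, which is why df looks fine and only a reboot clears it. A hedged cleanup and prevention sketch (check the owner column before removing anything):

      # list SysV semaphore arrays and their owners
      ipcs -s
      # remove arrays leaked by crashed httpd children
      ipcs -s | awk '/apache/ {print $2}' | xargs -r -n1 ipcrm -s
      # raising the semaphore limits also helps, e.g.:
      #   echo "kernel.sem = 250 32000 32 128" >> /etc/sysctl.conf && sysctl -p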

  • Unable to install Ruby gems

    - by gemseeker
    I am trying for the first time to install some Ruby gems on Mac OS X Leopard. Please see the command and the output below. My question is: how do I install a gem with dependencies? I tried installing the individual dependency gems first from locally downloaded files, but I soon found out that there is no end to the rabbit hole :-) I also found out that there are circular dependencies that break even this tedious method. There must be a better way! I would really appreciate your help.

      sudo gem install oauth
      Updating metadata for 1 gems from http://gems.rubyforge.org
      .
      complete
      ERROR:  Error installing oauth:
              oauth requires actionpack (>= 2.2.0, < 2.3.0)
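
    RubyGems normally resolves dependencies by itself; this particular error means no actionpack in the required 2.2.x range is visible to it. A hedged way out (the 2.2.2 version number is an example within that range):

      sudo gem update --system             # old RubyGems versions mishandle some deps
      sudo gem install actionpack -v 2.2.2
      sudo gem install oauth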

  • Eliminating Windows 7 user tracking registry writes

    - by caffiend
    Windows 7 continues the practice of saving user actions in the registry. I'd like to disable this practice, both to avoid registry-file fragmentation and SSD wear, and because I'm uncomfortable with programs being able to quickly analyze my usage habits. Even with the "Turn off user tracking" policy enabled, there are at least two areas that still contain user tracking:

    HKCU\Software\Classes\Local Settings\MuiCache
    This key stores a cache of most-recently accessed strings, including descriptions of most-recently run executables.

    HKCU\Software\Classes\Local Settings\Software\Microsoft\Windows\Shell\BagMRU
    This key stores the most recently viewed folders along with timestamps.

    Are there additional policy settings/registry entries to disable these writes? If not, is it possible to make these entries volatile? Would it be practical to create a temporary hive (e.g. on a ramdisk) and map it over this location?
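
    For inspection or periodic clearing, something like the following works from a normal prompt (hedged: deleting these keys is generally harmless, but the shell repopulates them as you work, so this hides history rather than stopping the writes):

      reg query "HKCU\Software\Classes\Local Settings\MuiCache"
      reg delete "HKCU\Software\Classes\Local Settings\Software\Microsoft\Windows\Shell\BagMRU" /f
      reg delete "HKCU\Software\Classes\Local Settings\Software\Microsoft\Windows\Shell\Bags" /f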

  • Cannot resolve a single A Record from client machine

    - by Alex
    I set up a simple BIND server on my VPS and it is working properly. The problem occurs with my local Windows machines, which are connected to the internet through a home router. I created an A record named 'dev', and for some reason it is invisible from my local network, though people at other locations can resolve dev.mydomain.com. Ironically, I am the only one who cannot resolve dev.mydomain.com. If I add another A record, say 'gamma', it becomes visible from my local Windows machines instantly. So this is specific to that particular 'dev' name. The only difference is that dev.mydomain.com used to point at another IP, but that was a month ago; all nameservers have been changed since then. I tried rebooting my router and flushing the DNS cache on the Windows machines: no result. Thank you in advance.
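
    A useful way to localize the stale answer is to query each resolver in the chain directly (a sketch; ns1.mydomain.com stands in for the real authoritative server, and the router IP is an example). If the authoritative server answers correctly but the router does not, the router is serving its own stale or negative cache for 'dev':

      dig dev.mydomain.com A @ns1.mydomain.com   # authoritative answer
      dig dev.mydomain.com A @192.168.0.1        # the home router's resolver
      dig dev.mydomain.com A @8.8.8.8            # a neutral public resolver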

  • How to benchmark kernel (-Os vs -O2)

    - by NightwishFan
    It seems logical to me that compiling a 64-bit kernel to optimize for size might help overall. (My distro of choice uses -O2.) It has the benefits of more registers and memory, and perhaps less cache contention than normally optimized code. I have a kernel compiled like this and it seems excellent. However, my question is: how can I prove this? I like using Phoronix for "real world" benchmarks, so I would prefer to test cases like that. What should I pick to test? Does anyone else have any alternatives? Thank you very much in advance.
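
    A hedged sketch of a controlled comparison: boot each kernel in turn and run the same workload under perf, watching the instruction-cache events that -Os is supposed to improve (event names vary by CPU; the compile job is just an example workload, and any Phoronix profile can stand in for it):

      perf stat -e instructions,cycles,L1-icache-load-misses,iTLB-load-misses \
          -r 5 -- make -j"$(nproc)" -C ~/src/some-project

      # or drive a repeatable suite and compare the result files per kernel
      phoronix-test-suite batch-benchmark pts/apache pts/compress-gzip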

  • How to get the spec of a machine on Linux?

    - by machinePurchaser
    I am interested in getting the spec of a machine, because I am thinking of getting a similar server. What I am mostly interested in knowing is the number of cores/CPUs, the amount of memory, the speed of the CPUs, the CPU cache size, and any other detail that is important for performance. My question is two-fold:

    1. Which parameters should I be interested in, other than the ones I specified above?
    2. Is there an easy way to read them off the machine in Linux? cat /proc/cpuinfo reveals a lot about the CPUs, for example. What about memory (I would rather not rely on top), etc.?
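
    A hedged starter kit (all standard tools on common distros; dmidecode needs root and is the one that shows DIMM layout and speeds, which /proc does not):

      lscpu                      # sockets, cores, threads, clock, cache sizes
      cat /proc/cpuinfo          # per-CPU detail and feature flags
      free -m                    # total and used memory
      sudo dmidecode -t memory   # DIMM sizes, types, and speeds
      sudo dmidecode -t system   # vendor and model of the box itself
      lspci                      # disk controllers, NICs, GPUs
      df -h                      # disk sizes and usage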

  • Tomcat servlet-api.jar problem

    - by CitadelCSCadet
    I am running a web application using Tomcat and Java Servlets, JSPs, etc. I am aware that using Servlets depends on the servlet-api.jar file. Initially I placed this jar file in the WEB-INF/lib/ directory, and this worked fine for me for months during the development phase. When we put the application onto the server space we are using, we started seeing weird problems showing up in the catalina.out file, telling us that there were dependency problems with the servlet-api.jar file. I am aware that Tomcat ships this jar file in its container and that I should remove it from the WEB-INF/lib/ directory. I have tried this and it does not work. What do I have to do, when I remove this jar file from my local files, to let the application depend on Tomcat's servlet-api.jar instead?
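
    A hedged sequence that usually clears this up (paths assume a standard Tomcat layout; "myapp" is a placeholder, and the work directory matters because compiled JSPs there can still reference the old jar):

      rm $CATALINA_HOME/webapps/myapp/WEB-INF/lib/servlet-api.jar
      rm -rf $CATALINA_HOME/work/Catalina/localhost/myapp
      $CATALINA_HOME/bin/shutdown.sh && $CATALINA_HOME/bin/startup.sh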

  • Session cache and /tmp folder error on cPanel CentOS

    - by Danialzo
    One of my clients has come across multiple breakdowns in their websites, with the following PHP errors:

      Warning: session_start() [function.session-start]: open(/tmp/sess_1d6616afe1b8a0d91a8d9ec29254b453, O_RDWR) failed: No space left on device (28) in /home/***/public_html/system/library/session.php on line 11
      Warning: session_start() [function.session-start]: Cannot send session cookie - headers already sent by (output started at /home/***/public_html/index.php:104) in /home/***/public_html/system/library/session.php on line 11
      Warning: session_start() [function.session-start]: Cannot send session cache limiter - headers already sent (output started at /home/***/public_html/index.php:104) in /home/***/public_html/system/library/session.php on line 11
      Warning: Cannot modify header information - headers already sent by (output started at /home/***/public_html/index.php:104) in /home/***/public_html/index.php on line 177
      Warning: Cannot modify header information - headers already sent by (output started at /home/***/public_html/index.php:104) in /home/***/public_html/system/library/currency.php on line 45
      Notice: Error: Can't create/write to file '/tmp/#sql_4c5_0.MYI' (Errcode: 28) Error No: 1

    They are experiencing this problem while:
    - nothing has been recently changed on the server;
    - /tmp and the other folders are no more than 10% full;
    - the error comes and goes.

    I would really appreciate it if anyone could guide me through this. Thanks in advance.
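
    Errcode 28 is ENOSPC on the filesystem holding /tmp; on cPanel boxes /tmp is often a small loop-mounted partition that can run out of inodes while df -h still shows free space, which would also explain the error coming and going. A hedged check and cleanup:

      df -h /tmp     # bytes free
      df -i /tmp     # inodes free -- an exhausted inode table raises the same error
      ls /tmp | wc -l
      # clear stale PHP session files older than a day (adjust path and age to taste)
      find /tmp -name 'sess_*' -mmin +1440 -delete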

  • Set Thunderbird "from" address by incoming "to" address

    - by user293698
    I have configured my email server to catch all addresses at my domain and deliver them to one mailbox, so [email protected] and [email protected] both reach me. Every forum, registration, and person gets their own address for sending me email, so I can route an address to /dev/null if anyone starts spamming it. That's the working setup. Now the problem: if I reply to a message, Thunderbird always sets my default identity as the sender. I know I can add additional identities, but I don't want to add every address by hand. How can I configure Thunderbird so that when an email was sent to [email protected], I answer from [email protected]?
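
    Recent Thunderbird versions have a per-identity catch-all option that does exactly this ("Reply from this identity when delivery headers match"); under the hood it maps to prefs like the following, shown as a hedged sketch via the Config Editor (N is the identity's internal number, which varies per profile, and the hint pattern is an example):

      mail.identity.idN.catchAll       = true
      mail.identity.idN.catchAllHint   = *@mydomain.com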

  • Cached css/javascript files on Sun Java System Web Server

    - by Derp
    I'm doing front-end web development in a Solaris 10 / Sun Java System Web Server 7.0U2 environment. I have noticed that changes to static css or javascript files often do not take effect immediately, whereas changes to static html files always do. My best guess is that a default setting in the web server causes it to cache certain file types in order to provide reasonable performance out of the box. I don't have the admin server running--I'll need to edit the config files by hand. What change(s) can I make so that all of my css and javascript edits take effect immediately? Thanks!
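
    If it is the server's static file cache, the knob lives in config/server.xml in this release; a hedged sketch (element names should be checked against the 7.0 documentation before editing, and a config deployment step may be needed for changes to stick):

      <!-- config/server.xml: disable, or shorten, the static file cache -->
      <file-cache>
        <enabled>false</enabled>
        <!-- or keep it enabled but revalidate quickly: <max-age>2</max-age> -->
      </file-cache>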

  • High load average threshold in Linux

    - by user2481010
    One of my friends said that his server's load average sometimes goes above 500-1000. To me that is a strange value, because I have never seen a load average of more than 10. I asked him to give me some snapshots of top and memory usage, and he sent the following details:

      top - 06:06:03 up 117 days, 23:02, 2 users, load average: 147.37, 44.57, 15.95
      Tasks: 116 total, 2 running, 113 sleeping, 0 stopped, 1 zombie
      Cpu(s): 16.6%us, 6.9%sy, 0.0%ni, 9.2%id, 66.5%wa, 0.0%hi, 0.8%si, 0.0%st
      Mem:  8161648k total, 7779528k used, 382120k free, 3296k buffers
      Swap: 5242872k total, 1293072k used, 3949800k free, 168660k cached

      $ free -gt
                   total  used  free  shared  buffers  cached
      Mem:             7     6     1       0        0       4
      -/+ buffers/cache:     1     5
      Swap:            4     0     4
      Total:          12     6     6

      $ nproc
      8

    My question: is it possible to have a load average of more than 100 on a server with 8 cores and 12 GB of memory? I have read many tutorials and articles on load average that give the rule of thumb "number of cores = max load". According to that rule the maximum load average here would be 16, so how is his server running with a load of 147.37? He said that this is a low value for it, and that it sometimes goes above 500.
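
    Yes, it is possible: Linux load average counts not only runnable tasks but also tasks in uninterruptible sleep (state D, typically stuck waiting on disk or NFS I/O), and the 66.5%wa in that top snapshot points the same way. A quick hedged check on such a box lists who is runnable or blocked:

      # tasks currently runnable (R) or in uninterruptible I/O wait (D)
      ps -eo state,pid,user,wchan:30,cmd | awk '$1 ~ /^[RD]/'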

  • How to debug without Visual Studio?

    - by aF
    Hello,

    Python -> C++ dll -> C# dll

    I have a COM interop C# dll that is loaded in a wrapper C++ dll through the .tlb file generated from the C#, to be used in a Python project. When I run it on my computer it works fine, but when I run it on a computer that was just formatted it gives:

      WindowsError: exception code 0xe0434f4d

    I have the C++ redistributable installed, as well as the .NET Compact Framework 3.5, on the formatted computer. How can I see what the actual exception is on a computer that does not have Visual Studio installed? How can I debug all of this? I can't debug the dlls themselves, can I? Note: on my computer everything works well, so maybe some dll or file is missing. I already used Dependency Walker to see if some dll is missing, and no!
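
    Exception code 0xe0434f4d is the native signature of a managed (.NET) exception escaping into unmanaged code. One hedged way to see the real exception without Visual Studio is to log it inside the C# dll before it crosses the COM boundary (names here are illustrative):

      // wrap each COM-visible entry point so the managed exception gets logged
      public int DoWork()
      {
          try
          {
              return DoWorkImpl();   // the real implementation
          }
          catch (Exception ex)
          {
              System.IO.File.AppendAllText(@"C:\temp\managed_error.log",
                                           ex.ToString() + Environment.NewLine);
              throw;   // the caller still sees 0xe0434f4d, but the log has details
          }
      }

    A missing assembly on the fresh machine would then show up as a FileNotFoundException naming the assembly, which Dependency Walker cannot detect because managed references are not in the PE import table.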

  • Maven project dependency against JDK version

    - by Andrea Polci
    I have projects that need to be built with a specific version of the JDK. The problem isn't the source and target parameters, but the jars of the runtime used during compilation. In some cases I get a compilation error if I try to compile with the wrong JDK, but sometimes the build succeeds and I get runtime errors when using the jars. For example, in Eclipse I can set the execution environment for the project in the .classpath file. Is there a way to handle this situation in Maven? What I would like is the ability to handle the JRE dependency like the other dependencies of the project, in the POM file.
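
    Two standard Maven answers, sketched below (the plugin configuration is illustrative): the enforcer plugin fails the build outright when run under the wrong JDK, while the toolchains mechanism goes further and selects which installed JDK compiles the code regardless of which one launched Maven.

      <plugin>
        <groupId>org.apache.maven.plugins</groupId>
        <artifactId>maven-enforcer-plugin</artifactId>
        <executions>
          <execution>
            <id>enforce-jdk</id>
            <goals><goal>enforce</goal></goals>
            <configuration>
              <rules>
                <requireJavaVersion>
                  <version>[1.6,1.7)</version>  <!-- accept JDK 6 only -->
                </requireJavaVersion>
              </rules>
            </configuration>
          </execution>
        </executions>
      </plugin>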
