Search Results

Search found 17324 results on 693 pages for 'memory warning'.

  • IIS8 Memory Improvements

    - by The Official Microsoft IIS Site
    There is a lot of buzz in the Internet Information Services (IIS) community about IIS 8, the version of IIS that is included with Windows Server 2012. While there are plenty of new features in IIS 8, for this writing I am going to focus on the memory improvements that you will see for the application pools. Memory is a key resource on an IIS server as it is often the first limiting factor if you planned your CPU and disk requirements appropriately. I was fortunate to be able to attend TechEd North...(read more)

    Read the article

  • How to create a recovery partition in memory

    - by Luis Alvarado
    How can I create a recovery partition in memory, offered as an option when booting the PC, so that I can check all partitions, including the system one that typically loads Ubuntu? This way I could, for example, fsck the partition that normally runs Ubuntu without having it running at that moment. The recovery partition would have access to some tools to check the disk, memory, etc. Is this doable?

    Read the article

  • Warning about SSL certificate, am I under attack?

    - by Bunny Rabbit
    Lately I've been getting a lot of warnings about SSL certificates on my PC. Empathy keeps telling me that Facebook's certificate is self-signed and can't be trusted, and there are also occasional security warnings in Google Chrome. I remember the last one saying that the page is secured but some of the resources the page uses do not come from a secure connection, something like that. Is my PC hacked or under attack? How can I check, and if so, how can I safeguard myself? PS: One thing that comes to mind is that I might be under an ARP poisoning/spoofing attack.
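
    If ARP spoofing is the concern, one cheap check on Linux is the kernel's ARP cache, exposed at /proc/net/arp: a single MAC address answering for several IP addresses (especially the gateway's) is the classic symptom. A minimal sketch of that check in Python, assuming the standard six-column /proc/net/arp layout:

        from collections import defaultdict

        # Map each MAC address to the set of IPs currently resolving to it.
        mac_to_ips = defaultdict(set)

        with open("/proc/net/arp") as f:
            next(f)  # skip the header line
            for line in f:
                fields = line.split()
                ip, mac = fields[0], fields[3]
                if mac != "00:00:00:00:00:00":  # ignore incomplete entries
                    mac_to_ips[mac].add(ip)

        for mac, ips in mac_to_ips.items():
            if len(ips) > 1:
                print(f"suspicious: {mac} answers for {sorted(ips)}")

    A duplicate here is not proof of an attack (some routers and VPNs legitimately share a MAC across addresses), but it is cheap evidence worth collecting before assuming a compromise.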

    Read the article

  • Lowering the use of the memory controller in OpenCL-based applications

    - by user827992
    In my first experiments I noticed that OpenCL is a good technology, but it is often hampered by the x86 architecture: finding a mid-range VGA driven by a low-end chipset is not that unusual in real-world scenarios, and sometimes it happens with high-end VGAs too. Are there caching techniques, or anything else that can work around this bottleneck in some way? The amount of dedicated memory on today's VGAs is usually large, so it should be possible to use that memory as some kind of buffer for instructions and data.
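
    The usual mitigation is exactly that: copy data into the card's dedicated memory once and keep it resident across many kernel launches, so repeated passes never touch the host memory controller or the bus. A minimal sketch of the pattern, assuming the pyopencl bindings and a trivial hypothetical scale kernel:

        import numpy as np
        import pyopencl as cl

        ctx = cl.create_some_context()
        queue = cl.CommandQueue(ctx)
        mf = cl.mem_flags

        host = np.arange(1_000_000, dtype=np.float32)
        # One upload: the buffer then stays resident in device memory.
        dev = cl.Buffer(ctx, mf.READ_WRITE | mf.COPY_HOST_PTR, hostbuf=host)

        prg = cl.Program(ctx, """
        __kernel void scale(__global float *a) {
            int i = get_global_id(0);
            a[i] *= 2.0f;
        }
        """).build()

        for _ in range(100):  # repeated launches, no host<->device traffic
            prg.scale(queue, host.shape, None, dev)

        cl.enqueue_copy(queue, host, dev)  # read back once at the end

    The same idea applies to constant tables and kernel parameters: allocate once, reuse across launches, and batch the final read-back.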

    Read the article

  • Warning: Why your Internet might fail on May 5

    IT News: "On May 5, the world's top domain authorities (led by ICANN, the US Government and Verisign) will complete the first phase of the roll-out of DNSSEC (Domain Name System Security Extensions) across the 13 root servers that direct user requests to the relevant websites on the internet."

    Read the article

  • Cannot update 12.04 due to "Failed to fetch" warning

    - by harsha
    I can't update my Ubuntu 12.04. I tried from the terminal, from Update Manager, and also from the Synaptic package manager. The error is:

        W: Failed to fetch http://in.archive.ubuntu.com/ubuntu/dists/precise-backports/universe/i18n/Translation-en_IN  Unable to connect to 172.20.0.100:8080:
        W: Failed to fetch http://in.archive.ubuntu.com/ubuntu/dists/precise-backports/universe/i18n/Translation-en  Unable to connect to 172.20.0.100:8080:
        E: Some index files failed to download. They have been ignored, or old ones used instead.

    Read the article

  • Warning about unavailable repositories

    - by richzilla
    An icon with an exclamation mark has recently appeared on my panel. The message I get when hovering over it is that the update information is out of date, and that this may be caused by an unavailable repository or a network issue. My network connection appears to be OK. The message advises that I manually check for updates. When I do, I get the following:

        W: Failed to fetch http://ppa.launchpad.net///ubuntu/dists/maverick/main/source/Sources.gz  404 Not Found
        W: Failed to fetch http://ppa.launchpad.net///ubuntu/dists/maverick/main/binary-amd64/Packages.gz  404 Not Found
        E: Some index files failed to download, they have been ignored, or old ones used instead.

    Any idea what's going on?

    Read the article

  • SSD becomes hot, disk failure warning

    - by Aegluin
    I have a two-week-old SSD (Kingston SSDNow 64GB). Yesterday the computer shut down twice, and after rebooting I was bombarded with disk failure warnings. I usually take such warnings seriously (and backed up), but I am skeptical here. After cooling down, the laptop boots again, and the only red SMART value was the temperature (Ubuntu did not show the temperature at the moment of failure, only the then-current 29°). After refreshing the SMART status and running a self-test, everything is green. Before contacting Kingston support, I would like to know whether this could be a software issue: is it possible that it is a false alarm, and how can I check? I installed Ubuntu 12.04 32-bit and took care of alignment. I assumed Ubuntu sets itself up with optimal settings for SSDs; how can I check that nothing went wrong there? The current temperature is around 40-56°. Is such a temperature abnormal for SSDs? Output of sudo smartctl --all /dev/sda: http://pastebin.ubuntu.com/1175940/
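
    To keep an eye on the temperature over time instead of re-reading the full SMART dump, the value can be pulled out of the attribute table programmatically. A small sketch, assuming smartmontools is installed, root privileges, and a drive that reports a Temperature_Celsius-style attribute (the exact attribute name varies by vendor):

        import subprocess

        def ssd_temperature(device="/dev/sda"):
            """Return the drive temperature in Celsius, or None if not reported."""
            out = subprocess.run(
                ["smartctl", "-A", device],  # attribute table only
                capture_output=True, text=True, check=True,
            ).stdout
            for line in out.splitlines():
                fields = line.split()
                # Attribute rows: ID# ATTRIBUTE_NAME ... RAW_VALUE (10th column)
                if len(fields) >= 10 and "Temperature" in fields[1]:
                    return int(fields[9])
            return None

        if __name__ == "__main__":
            print(ssd_temperature())

    Logging this once a minute during heavy writes would show whether the drive really approaches the temperatures that triggered the warnings.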

    Read the article

  • Graphics hardware warning when updating to 14.04

    - by pacomet
    As I use Ubuntu at work, I only update to LTS versions, but now I'm not sure whether I can or should. My computer is ten years old; I would replace it if it were mine, but as it is owned by my employer I have to work with it. It's not a bad machine, and it runs fine (which was not true when it still had Windows on it ;-). When updating to 14.04, the installer warns about possibly bad or slow performance with Unity 3D, so I stopped the update, since I am at work and it is not my own computer. As I understand from http://askubuntu.com/a/438958/25305, the Nvidia GeForce FX 5500 graphics card is still supported in 14.04. Right now, in 12.04, I have driver version 173 and Unity 2D runs fine for me. Output of /usr/lib/nux/unity_support_test -p:

        OpenGL vendor string:   NVIDIA Corporation
        OpenGL renderer string: GeForce FX 5500/AGP/SSE2
        OpenGL version string:  2.1.2 NVIDIA 173.14.39

        Not software rendered:    yes
        Not blacklisted:          no
        GLX fbconfig:             yes
        GLX texture from pixmap:  yes
        GL npot or rect textures: yes
        GL vertex program:        yes
        GL fragment program:      yes
        GL vertex buffer object:  yes
        GL framebuffer object:    yes
        GL version is 1.4+:       yes
        Unity 3D supported:       no

    Should I update? Is it better to stay with 12.04? Thanks.

    Read the article

  • Facebook Like button warning prevents sharing

    - by Steve
    I added a Facebook Like button to my WordPress template, and when I click Like, I receive an error, which pops up and says:

        There was an error liking the page. If you are the page owner, please try running your page through the linter on the Facebook devsite (https://developers.facebook.com/tools/lint/) and fixing any errors.

    The lint checker now gives no errors or warnings, after I removed a duplicate og:description tag. Why is it still not working?

    Read the article

  • Windows Media Player Vulnerability, PCAnywhere Warning

    Windows Media Player Vulnerability Targeted by Drive-by-download Attack Security firm Trend Micro recently released details on malware that has been targeting the MIDI Remote Code Execution Vulnerability found in Microsoft's Windows Media Player. A post on Trend Micro's Malware Blog offered further insight into the malware that has been exploiting the CVE-2012-0003 vulnerability. The malware's authors have been successful in exploiting the vulnerability by tricking unsuspecting victims into opening a specially engineered MIDI file in Windows Media Player. This Web-based drive-by-download ...

    Read the article

  • "BAD idea" warning when trying to recover Grub, after Windows removed it

    - by Shazzner
    I tried sudo grub-install on sda1, but it complained about being a BAD IDEA. I had to install Windows for a work-related issue, so I used a separate disk (I had previously used it for Ubuntu on this computer, but bought a bigger disk, installed Ubuntu on that, and left the old one in, in case I needed an old file). Windows installed fine but overwrote GRUB, so if I choose the Ubuntu disk to boot first in the BIOS, I get a blank screen. I googled and followed this advice: https://help.ubuntu.com/community/RecoveringUbuntuAfterInstallingWindows However, when I get down to this section:

        sudo grub-install --root-directory=/media/0d104aff-ec8c-44c8-b811-92b993823444 /dev/sda1

    I get this:

        Attempting to install GRUB to a partition instead of the MBR. This is a BAD idea…

    --recheck does nothing. Any ideas?

    Read the article

  • Find out how much memory my server would ideally need

    - by Daniel
    I have a pretty busy GNU/Linux server that I think needs more RAM. I know that the free command doesn't show the amount of RAM that is actually in use, so I stumbled upon Committed_AS in /proc/meminfo. It currently shows 57972 kB, which isn't much. Is this the amount of RAM that the processes are using right now, or is it an estimate of how much additional RAM it would take to never run out of memory under this load?
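
    For what it's worth, Committed_AS is the total address space the kernel has promised to all processes, a worst-case figure rather than current usage; comparing it against CommitLimit gives a rough sense of headroom. A small sketch that reads both fields, assuming the standard /proc/meminfo layout:

        def meminfo():
            """Parse /proc/meminfo into a dict of {field: kB}."""
            info = {}
            with open("/proc/meminfo") as f:
                for line in f:
                    key, value = line.split(":", 1)
                    info[key] = int(value.split()[0])  # values reported in kB
            return info

        m = meminfo()
        committed, limit = m["Committed_AS"], m["CommitLimit"]
        print(f"Committed_AS: {committed} kB")
        print(f"CommitLimit:  {limit} kB")
        print(f"headroom:     {limit - committed} kB "
              f"({100 * committed / limit:.1f}% committed)")

    A Committed_AS that stays well below CommitLimit suggests the load fits; sustained growth toward the limit is the signal that more RAM (or swap) is needed.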

    Read the article

  • Auto restart server if virtual memory is too low

    - by Sukhjinder Singh
    There are quite a number of services running on my server: httpd, varnish, mysql, memcache, java... Each of them uses a part of the virtual memory, and varnish is configured to be allocated 3GB. Due to a high traffic load, around 100K, our server ran out of memory, the oom-killer was invoked, and we had to reboot the server. We have 8GB of virtual memory and for certain reasons we cannot add more. My question is: is there any automated script that will monitor how much virtual memory is left and, based on some criterion (say, 500MB left), restart the server automatically? I know this is not the proper solution, but we have to do it; otherwise we don't know when the server will hit OOM, and by the time we notice and restart it, we have lost our visiting users.
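
    A watchdog along these lines could run from cron every minute. A minimal sketch, assuming a Linux host and a hypothetical 500 MB floor; it estimates reclaimable memory as MemFree + Buffers + Cached (kernels of that era predate the MemAvailable field), and restarting the offending service would be gentler than rebooting the whole machine:

        import subprocess

        THRESHOLD_KB = 500 * 1024  # hypothetical 500 MB floor

        def available_kb():
            """Rough estimate of reclaimable memory from /proc/meminfo."""
            info = {}
            with open("/proc/meminfo") as f:
                for line in f:
                    key, value = line.split(":", 1)
                    info[key] = int(value.split()[0])
            return info["MemFree"] + info["Buffers"] + info["Cached"]

        if __name__ == "__main__":
            if available_kb() < THRESHOLD_KB:
                # Last resort: reboot. Restarting varnish/httpd here instead
                # would drop far fewer visitors.
                subprocess.run(["/sbin/shutdown", "-r", "now"], check=False)

    Scheduled as a root cron job (* * * * * /usr/local/bin/memwatch.py, a hypothetical path), this at least bounds the outage window until the memory budget itself can be fixed.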

    Read the article

  • non-mapped virtual memory & total number of connections

    - by tszming
    We have two MongoDB data nodes (a replica set), a primary and a secondary. I noticed that the non-mapped virtual memory is relatively high and wonder whether it is hurting our MongoDB performance (the server usually peaks at around 6-7K queries per second). In MMS, it is stated: "The most common case of usage of a high amount of memory for non-mapped is that there are very many connections to the database." So we checked the memory usage with db.serverStatus().mem on our secondary:

        {
            "bits" : 64,
            "resident" : 6846,
            "virtual" : 416797,
            "supported" : true,
            "mapped" : 205549,
            "mappedWithJournal" : 411098,
            "note" : "virtual minus mapped is large. could indicate a memory leak"
        }

    Note: we are using 2.0.4, where the default stack size should be 1MB per connection. The current number of connections is around 1.1K, but the non-mapped virtual memory (virtual minus mappedWithJournal) is around 5699 MB. The trend is quite stable, so I can't say there is a leak here, but where has the memory gone? Any idea?
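
    To track the figure over time instead of eyeballing the shell, the same numbers can be pulled from serverStatus programmatically. A small sketch, assuming the pymongo driver and a locally reachable mongod (host and port are placeholders):

        from pymongo import MongoClient

        client = MongoClient("localhost", 27017)
        status = client.admin.command("serverStatus")

        mem, conns = status["mem"], status["connections"]
        non_mapped_mb = mem["virtual"] - mem["mappedWithJournal"]

        print(f"non-mapped virtual:  {non_mapped_mb} MB")
        print(f"current connections: {conns['current']}")
        # At roughly 1 MB of thread stack per connection, connections alone
        # would account for about conns['current'] MB of the non-mapped total.
        print(f"non-mapped per connection: {non_mapped_mb / conns['current']:.1f} MB")

    Charting non_mapped_mb against the connection count makes it obvious whether connections explain the 5699 MB or something else (heap, journal buffers) is growing independently.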

    Read the article

  • PHP Warning: strtotime() error

    - by Kavithanbabu
    I have moved my Joomla and WordPress files from my old server to a new server. The front end and the admin side work without any errors, but the database section (phpMyAdmin) shows warning messages like this:

        Warning: strtotime() [function.strtotime]: It is not safe to rely on the system's timezone settings. You are *required* to use the date.timezone setting or the date_default_timezone_set() function. In case you used any of those methods and you are still getting this warning, you most likely misspelled the timezone identifier. We selected 'Asia/Calcutta' for 'IST/5.0/no DST' instead in /usr/share/phpmyadmin/libraries/db_info.inc.php on line 88

        Warning: strftime() [function.strftime]: It is not safe to rely on the system's timezone settings. You are *required* to use the date.timezone setting or the date_default_timezone_set() function. In case you used any of those methods and you are still getting this warning, you most likely misspelled the timezone identifier. We selected 'Asia/Calcutta' for 'IST/5.0/no DST' instead in /usr/share/phpmyadmin/libraries/common.lib.php on line 1483

    Can you please suggest how to get rid of these warning messages? Thanks in advance.

    Read the article

  • Autoreleased object not conforming to protocol gives no warning

    - by Sahil Wasan
    I have a class ABC with a class method that returns a non-autoreleased object of that class:

        @interface ABC : NSObject
        + (ABC *)aClassMethodReturnsObjectWhichNotAutoreleased;
        @end

        @implementation ABC
        + (ABC *)aClassMethodReturnsObjectWhichNotAutoreleased {
            ABC *a = [[ABC alloc] init];
            return a;
        }
        @end

    And I have a protocol Foo:

        @protocol Foo
        @required
        - (void)abc;
        @end

    My ABC class does not conform to the Foo protocol. First call:

        id<Foo> obj = [ABC aClassMethodReturnsObjectWhichNotAutoreleased]; // shows warning

    This shows the warning "Non Compatible pointers..", which is good: ABC does not conform to the protocol Foo. BUT, second call:

        id<Foo> obj = [NSArray arrayWithObjects:@"abc", @"def", nil]; // shows "no" warning; it returns an autoreleased object, and NSArray doesn't conform to protocol Foo either

    In the first call the compiler gives a warning, and in the second call it gives none. I think that is because I am not returning an autoreleased object in the first case. Why is the compiler not giving a warning in the second call, since NSArray also does not conform to Foo? Thanks in advance.

    Read the article

  • Java Runtime.getRuntime().exec() alternatives

    - by twilbrand
    I have a collection of webapps that are running under Tomcat. Tomcat is configured to have as much as 2 GB of memory via the -Xmx argument. Many of the webapps need to perform a task that ends up making use of the following code:

        Runtime runtime = Runtime.getRuntime();
        Process process = runtime.exec(command);
        process.waitFor();
        ...

    The issue we are having is related to the way this "child process" is created on Linux (Red Hat 4.4 and CentOS 5.4). My understanding is that an amount of physical (non-swap) memory equal to the amount Tomcat is using needs to be free initially for this child process to be created. When we don't have enough free physical memory, we get this:

        java.io.IOException: error=12, Cannot allocate memory
            at java.lang.UNIXProcess.<init>(UNIXProcess.java:148)
            at java.lang.ProcessImpl.start(ProcessImpl.java:65)
            at java.lang.ProcessBuilder.start(ProcessBuilder.java:452)
            ... 28 more

    My questions are:

    1) Is it possible to remove the requirement that an amount of memory equal to the parent process be free in physical memory? I'm looking for an answer that lets me specify how much memory the child process gets, or that allows Java on Linux to fall back to swap.

    2) What are the alternatives to Runtime.getRuntime().exec() if no solution to #1 exists? I could only think of two, neither of which is very desirable: JNI (very undesirable), or rewriting the program we are calling in Java and making it its own process that the webapp communicates with somehow. There have to be others.

    3) Is there another side to this problem that I'm not seeing that could potentially fix it? Lowering the amount of memory used by Tomcat is not an option. Increasing the memory on the server is always an option, but seems like more of a band-aid. The servers are running Java 6.

    Read the article

  • NRPE Warning threshold must be a positive integer

    - by Frida
    OS: Ubuntu 12.10 Server, 64-bit. I've installed Icinga with ido2db, pnp4nagios and icinga-web (latest release, following the instructions given in the documentation: installation with apt, etc.). I am using icinga-web to monitor my hosts. For the moment I have just my localhost, and everything is perfect. I am trying to add a host and monitor it with NRPE (version 2.12):

        root@server:/etc/icinga# /usr/lib/nagios/plugins/check_nrpe -H client
        NRPE v2.12

    The configuration looks good. On the server I've created the file /etc/icinga/objects/client.cfg as below:

        root@server:/etc/icinga/objects# cat client.cfg
        define host{
            use        generic-host    ; Name of host template to use
            host_name  client
            alias      client.toto
            address    xx.xx.xx.xx
        }

        # Service Definitions
        define service{
            use                 generic-service
            host_name           client
            service_description CPU Load
            check_command       check_nrpe_1arg!check_load
        }

        define service{
            use                 generic-service
            host_name           client
            service_description Number of Users
            check_command       check_nrpe_1arg!check_users
        }

    And added to my /etc/icinga/commands.cfg:

        # this command runs a program $ARG1$ with the arguments $ARG2$
        define command {
            command_name check_nrpe
            command_line /usr/lib/nagios/plugins/check_nrpe -H $HOSTADDRESS$ -c $ARG1$ -a $ARG2$
        }

        # this command runs a program $ARG1$ with no arguments
        define command {
            command_name check_nrpe_1arg
            command_line /usr/lib/nagios/plugins/check_nrpe -H $HOSTADDRESS$ -c $ARG1$
        }

    But it does not work. These are the logs from the client:

        Dec 3 19:45:12 client nrpe[604]: Connection from xx.xx.xx.xx port 32641
        Dec 3 19:45:12 client nrpe[604]: Host address is in allowed_hosts
        Dec 3 19:45:12 client nrpe[604]: Handling the connection...
        Dec 3 19:45:12 client nrpe[604]: Host is asking for command 'check_users' to be run...
        Dec 3 19:45:12 client nrpe[604]: Running command: /usr/lib/nagios/plugins/check_users -w -c
        Dec 3 19:45:12 client nrpe[604]: Command completed with return code 3 and output: check_users: Warning threshold must be a positive integer#012Usage:check_users -w -c
        Dec 3 19:45:12 client nrpe[604]: Return Code: 3, Output: check_users: Warning threshold must be a positive integer#012Usage:check_users -w -c
        Dec 3 19:44:49 client nrpe[32582]: Connection from xx.xx.xx.xx port 32129
        Dec 3 19:44:49 client nrpe[32582]: Host address is in allowed_hosts
        Dec 3 19:44:49 client nrpe[32582]: Handling the connection...
        Dec 3 19:44:49 client nrpe[32582]: Host is asking for command 'check_load' to be run...
        Dec 3 19:44:49 client nrpe[32582]: Running command: /usr/lib/nagios/plugins/check_load -w -c
        Dec 3 19:44:49 client nrpe[32582]: Command completed with return code 3 and output: Warning threshold must be float or float triplet!#012#012Usage:check_load [-r] -w WLOAD1,WLOAD5,WLOAD15 -c CLOAD1,CLOAD5,CLOAD15
        Dec 3 19:44:49 client nrpe[32582]: Return Code: 3, Output: Warning threshold must be float or float triplet!#012#012Usage:check_load [-r] -w WLOAD1,WLOAD5,WLOAD15 -c CLOAD1,CLOAD5,CLOAD15
        Dec 3 19:44:49 client nrpe[32582]: Connection from xx.xx.xx.xx closed.

    Do you have any ideas?

    Read the article
