Search Results

Search found 7374 results on 295 pages for 'strange behaviour'.

Page 62/295 | < Previous Page | 58 59 60 61 62 63 64 65 66 67 68 69  | Next Page >

  • ctrl+click or shift+click not always firing the onclick event

    - by Erik
    Hi, I recently discovered that different browsers handle the onclick event differently when the Ctrl or Shift key is pressed. The same goes for following links with the middle mouse button.

      <a href="http://www.example.com/" onclick="alert('onclick');">go to example.com</a>

    Onclick browser support table:

      Mouse   Keyboard  Chrome  Firefox  Safari  Opera  IE5.5  IE6  IE7  IE8  IE9
      Left    None      yes     yes      yes     yes    yes    yes  yes  yes  yes
      Left    Ctrl      yes     yes      yes     yes    ?      yes  no   no   ?
      Left    Shift     yes     yes      yes     yes    ?      yes  yes  yes  ?
      Middle  None      yes     no       yes     no     ?      N/A  no   no   ?

    Can someone please fill in the question marks for me? Also, I'm wondering if the behaviour differs between versions of Chrome, Firefox, Safari and Opera. Finding a logical pattern in this behaviour would be even nicer, but I don't think there is one :). Thanks a lot.
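
    A minimal sketch (not part of the original question) of how a click handler can inspect the modifier keys and mouse button, assuming a reasonably standards-compliant event object:

      <a href="http://www.example.com/" id="link">go to example.com</a>
      <script>
        document.getElementById('link').onclick = function (event) {
          event = event || window.event;  // older IE exposes the event globally instead of passing it
          // log which modifiers accompanied the click; returning false here would suppress navigation
          console.log('ctrl:', event.ctrlKey, 'shift:', event.shiftKey, 'button:', event.button);
        };
      </script>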

    Read the article

  • Can redirection of screen output to file change the result of a C++ program?

    - by Biga
    I am having very weird behaviour with some C++ code: it gives me different results when running with and without redirecting the screen output to a file (reproducible in Cygwin and Linux). I mean, if I take the same executable and run it like ./run or like ./run >out.log, I get different results! I use std::cout to output to screen, all lines ending with endl; I use ifstream for the input file; I use ofstream for output, all lines ending with endl. I am using g++ 4. Any idea what is going on?

    UPDATE: I have hard-coded the input data, so ifstream is not used, and the problem persists.

    UPDATE 2: That's getting interesting. I have probed three variables that are computed initially, and this is what I get with and without redirecting the output to a file:

      redirected to file: 0 -0.02 0
      direct to screen:   0 -0.02 1.04083e-17

    So there's a round-off difference in the code's variables with and without redirecting the output! Now, why would redirecting interfere with an internal computation of the code?

    UPDATE 3: If I redirect to /dev/null, I get the same behaviour as outputting directly to screen, instead of the redirect-to-file behaviour.
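
    A small, self-contained sketch (not from the original program) for dumping the probed values at full precision to both destinations, so that tiny round-off differences like the 1.04083e-17 above show up clearly in a comparison:

      #include <fstream>
      #include <iomanip>
      #include <iostream>

      int main() {
          double probe = 1.04083e-17;        // stand-in for one of the probed variables
          std::cout << std::setprecision(17) << probe << std::endl;   // goes to screen or to the redirect
          std::ofstream log("probe.log");    // and the same value written directly to a file
          log << std::setprecision(17) << probe << std::endl;
      }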

    Read the article

  • Releasing Autopool crashes on iOS 4.0 (and only on 4.0)

    - by samsam
    Hi there. I'm wondering what could cause this. I have several methods in my code that I call using performSelectorInBackground. Within each of these methods I have an autorelease pool that is alloced/initialized at the beginning and released at the end of the method. This works perfectly on iOS 3.1.3 / 3.2 / 4.2 / 4.2.1, but it fatally crashes on iOS 4.0 with an EXC_BAD_ACCESS exception that happens after calling [myPool release]. After I noticed this strange behaviour I was thinking about rewriting portions of my code to make my app "less parallel" in case the client OS is 4.0. After I did that, the next point where the app crashed was within the ReachabilityCallback method from Apple's Reachability "framework". Well, now I'm not quite sure what to do. The things I do within my threaded methods are pretty simple XML parsing (no Cocoa calls or stuff that would affect the UI). After each method finishes it posts a notification which the coordinating thread listens to, and once all the parallelized methods have finished, the coordinating thread calls view controllers etc... I have absolutely no clue what could cause this weird behaviour, especially because Apple's code fails as well. Any help is greatly appreciated! Thanks, sam
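
    A minimal Objective-C sketch of the pattern described above (the method name and notification name are invented for illustration):

      // Invoked elsewhere with:
      //   [self performSelectorInBackground:@selector(parseFeed) withObject:nil];
      - (void)parseFeed {
          NSAutoreleasePool *myPool = [[NSAutoreleasePool alloc] init];  // pool created at the top of the backgrounded method
          // ... simple XML parsing, no UI work ...
          [[NSNotificationCenter defaultCenter] postNotificationName:@"ParsingFinished" object:nil];
          [myPool release];  // the release that reportedly raises EXC_BAD_ACCESS on iOS 4.0
      }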

    Read the article

  • Windows 7 taskbar icon grouping with multiple similar windows

    - by user1266104
    When I have a number of similar windows open, for example multiple Explorer windows, they are all grouped into the same icon on the taskbar. When I hover over this I get a thumbnail of each window and a piece of truncated text which is supposed to help me work out what that window is. However, I also like to have the full path shown in Explorer windows, so the truncated text is usually C:\CommonPathToEveryWind... I have noticed that if I have over 14 Explorer windows open, Windows gives up trying to display these useless thumbnails and instead gives me a nicely formatted list of paths. My question is: how can I customise this behaviour, to either disable thumbnails altogether for a subset of applications where a thumbnail is inappropriate (Explorer, 'Everything'); or lower the maximum number of thumbnails per grouped taskbar icon to 2; or just disable thumbnails altogether (without losing the entire Windows theme)?

    Edit: Just to make it clear what I currently get and what I actually want. I do still want to keep the grouping behaviour, so that multiple instances of the same program, Explorer for example, only take one slot on the taskbar. What I want is to alter what is displayed when I hover over the grouped icon. What I actually see - useless thumbnails: http://i.stack.imgur.com/8bTxX.png The style I want for any number of instances: http://i.stack.imgur.com/zv6Si.png

    Read the article

  • wx_ref and custom wx_object's

    - by Iogann
    Hi! I am developing an MDI application with the help of wxErlang. I have a parent frame, implemented as a wx_object:

      -module(main_frame).
      -export([new/0, init/1, handle_call/3, handle_event/2, terminate/2]).
      -behaviour(wx_object).
      ....

    And I have a child frame, implemented as a wx_object too:

      -module(child_frame).
      -export([new/2, init/1, handle_call/3, handle_event/2, terminate/2]).
      -export([save/1]).
      -behaviour(wx_object).

      % some public API method
      save(Frame) ->
          wx_object:call(Frame, save).
      ....

    I want to call save/1 for the active child frame from the parent frame. Here is my code:

      ActiveChild = wxMDIParentFrame:getActiveChild(Frame),
      case wx:is_null(ActiveChild) of
          false -> child_frame:save(ActiveChild);
          _ -> ignore
      end

    This code fails because ActiveChild is a #wx_ref{} with state=[], but wx_object:call/2 needs a #wx_ref{} whose state is set to the pid of the process being called. What is the right way to do this? The only thing I have thought of is to store a list of all created child frames with their pids in the parent frame and search for the pid in this list, but that is ugly.

    Read the article

  • crash when using stl vector at instead of operator[]

    - by Jamie Cook
    I have a method as follows (from a class that implements the TBB task interface - not currently multithreading though). My problem is that two ways of accessing a vector are causing quite different behaviour - one works and the other causes the entire program to bomb out quite spectacularly (this is a plugin and normally a crash will be caught by the host - but this one takes out the host program as well! As I said, quite spectacular).

      void PtBranchAndBoundIterationOriginRunner::runOrigin(int origin, int time) const // NOTE: const method
      {
          BOOST_FOREACH(int accessMode, m_props->GetAccessModes())
          {
              // get a const reference to appropriate vector from member variable
              // map<int, vector<double>> m_rowTotalsByAccessMode;
              const vector<double>& rowTotalsForAccessMode = m_rowTotalsByAccessMode.find(accessMode)->second;

              if (origin != 129) continue; // Additional debug constraint: I know that the vector only has one non-zero element, at index 129
              m_job->Write("size: " + ToString(rowTotalsForAccessMode.size()));
              try
              {
                  // check for early return... i.e. nothing to do for this origin
                  if (!rowTotalsForAccessMode[origin]) continue;    // <- this works
                  if (!rowTotalsForAccessMode.at(origin)) continue; // <- this crashes
              }
              catch (...)
              {
                  m_job->Write("Caught an exception"); // but it's not an exception
              }
              // do some other stuff
          }
      }

    I hate not putting in well-defined questions, but at the moment my best phrasing is: "WTF?" I'm compiling this with Intel C++ 11.0.074 [IA-32] using Microsoft (R) Visual Studio Version 9.0.21022.8, and my implementation of vector has

      const_reference operator[](size_type _Pos) const
      {   // subscript nonmutable sequence
      #if _HAS_ITERATOR_DEBUGGING
          if (size() <= _Pos)
          {
              _DEBUG_ERROR("vector subscript out of range");
              _SCL_SECURE_OUT_OF_RANGE;
          }
      #endif /* _HAS_ITERATOR_DEBUGGING */
          _SCL_SECURE_VALIDATE_RANGE(_Pos < size());
          return (*(_Myfirst + _Pos));
      }

    (Iterator debugging is off - I'm pretty sure) and

      const_reference at(size_type _Pos) const
      {   // subscript nonmutable sequence with checking
          if (size() <= _Pos)
              _Xran();
          return (*(begin() + _Pos));
      }

    So the only difference I can see is that at calls begin instead of simply using _Myfirst - but how could that possibly be causing such a huge difference in behaviour?
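
    A self-contained sketch (separate from the code above) of the documented difference between the two accessors: at() performs a bounds check and throws std::out_of_range, while operator[] on a bad index is undefined behaviour:

      #include <iostream>
      #include <stdexcept>
      #include <vector>

      int main() {
          std::vector<double> rowTotals(10, 0.0);   // deliberately smaller than index 129

          try {
              std::cout << rowTotals.at(129) << std::endl;   // checked access: throws
          } catch (const std::out_of_range& e) {
              std::cout << "at(): " << e.what() << std::endl;
          }

          // rowTotals[129] would compile, but reading past the end is undefined behaviour:
          // it may appear to work, return garbage, or take the whole process down.
      }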

    Read the article

  • jQuery .die isn't killing an attached event?

    - by adam
    Hi, I've just started experimenting with .live and .die and am having some great results, but one thing isn't working. I've been tinkering with Firebug's console, trying out my code live to see if I can figure out why .die isn't killing off an attached event. First, if I do this:

      //attach ajax submission
      $('a[href$=edit]').live("click", function(event) {
          $.get($(this).attr("href"), null, null);
          return false;
      });

    then, as expected, when I click on a link the ajax fires off and my server-side code injects a form for inline editing. But sometimes I want to disable this behaviour and also make the link unclickable, so I do the following:

      //unbind ajax form creation when we click on a link, then disable its semantic behaviour
      $('a[href$=edit]').die("click").click( function(){ return false; } );

    which works, but if I then try to remove this and restore that ajax goodness with the code below, it doesn't work. Instead the link remains unclickable. I can't figure out why. Can anyone help?

      //remove any previous events from the links
      $('a[href$=edit]').die();
      //attach ajax submission
      $('a[href$=edit]').live("click", function(event) {
          $.get($(this).attr("href"), null, null);
          return false;
      });
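
    A sketch of one plausible fix, assuming the leftover handler is the direct .click() binding: .die() only removes handlers attached with .live(), so the return-false handler has to be removed with .unbind() before re-attaching the live handler:

      // remove the directly-bound return-false handler (not touched by .die())
      $('a[href$=edit]').unbind("click");

      // then re-attach the ajax submission via .live()
      $('a[href$=edit]').live("click", function(event) {
          $.get($(this).attr("href"), null, null);
          return false;
      });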

    Read the article

  • Can the jQuery.Validate plugin prevent submission of an Ajax form?

    - by berko
    I am using the jQuery form plugin (http://malsup.com/jquery/form/) to handle the ajax submission of a form. I also have jQuery.Validate (http://docs.jquery.com/Plugins/Validation) plugged in for my client-side validation. What I am seeing is that the validation fails when I expect it to; however, it does not stop the form from submitting. When I was using a traditional form (i.e. non-ajax), failing validation prevented the form from submitting at all, which is my desired behaviour. I know that the validation is hooked up correctly, as the validation messages still appear after the ajax submit has happened. So what am I missing that is preventing my desired behaviour? Sample code below:

      <form id="searchForm" method="post" action="/User/GetDetails">
          <input id="username" name="username" type="text" value="user.name" />
          <input id="submit" name="submit" type="submit" value="Search" />
      </form>
      <div id="detailsView">
      </div>
      <script type="text/javascript">
          var options = { target: '#detailsView' };
          $('#searchForm').ajaxForm(options);
          $('#searchForm').validate({
              rules: { username: {required:true}},
              messages: { username: {required:"Username is a required field."}}
          });
      </script>
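
    A sketch of one commonly suggested wiring (assuming the form plugin's beforeSubmit callback and the validation plugin's valid() method, both part of the linked plugins): returning false from beforeSubmit cancels the ajax submission when validation fails:

      var options = {
          target: '#detailsView',
          beforeSubmit: function () {
              // cancel the ajax submit unless the form passes validation
              return $('#searchForm').valid();
          }
      };
      $('#searchForm').validate({
          rules: { username: { required: true } },
          messages: { username: { required: "Username is a required field." } }
      });
      $('#searchForm').ajaxForm(options);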

    Read the article

  • How to use .htaccess to redirect to a URL that includes a query parameter

    - by wbervoets
    Hi guys, I've been struggling with a redirect where the final URL includes a query parameter that is itself a URL. It seems .htaccess is escaping some characters. Here is my .htaccess:

      RewriteRule ^mypath https://www.otherserver.com/cookie?param1=123&redirectto=http://otherserver2.com/&param2=1 [L,R=302]

    First, if I put

      https://www.otherserver.com/cookie?param1=123&redirectto=http://otherserver2.com/&param2=1

    in my browser address bar, www.otherserver.com will do its thing and then redirect to otherserver2 (including the &param2=1, which is a parameter of that URL and not of the otherserver.com URL). That's the behaviour I need :-) Now when I try to use the .htaccess redirect from my site, http://mysite/mypath, the behaviour is not the same as putting that URL in the browser address bar; it now tries to redirect to http://otherserver2.com/ (no param2=1 anymore). (PS: otherserver1 and otherserver2 are not under my control.) I've tried escaping the redirectto parameter in my .htaccess, like below, but it didn't work either:

      https://www.otherserver.com/cookie?param1=123&redirectto=http%3a%2f%otherserver2.com%2f%3fparam2%3d1

    because then my browser tries to go to httpotherserver.com (all special characters are gone). In the end I would like http://mysite/mypath to show the contents of

      https://www.otherserver.com/cookie?param1=123&redirectto=http://otherserver2.com/&param2=1

    (preferred solution) or do a redirect to that URL. I hope my message is not too confusing, and I hope someone can help me out, as I've already spent hours on this :-)
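
    A sketch of one thing commonly tried for this kind of problem (not verified here): mod_rewrite escapes special characters in the substitution by default, and the NE (noescape) flag turns that off, which is often the cause of nested-URL parameters getting mangled:

      # NE = do not re-escape the substitution; L = last rule; R=302 = temporary redirect
      RewriteRule ^mypath https://www.otherserver.com/cookie?param1=123&redirectto=http://otherserver2.com/&param2=1 [NE,L,R=302]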

    Read the article

  • Sequence Point and Evaluation Order( Preincrement)

    - by Josh
    There was a debate today among some of my colleagues and I wanted to clarify it. It is about evaluation order and sequence points in an expression. It is clearly stated in the standard that C/C++ does not have left-to-right evaluation within an expression, unlike languages such as Java, which guarantees a sequential left-to-right order. So, in the expression below, the evaluation of the leftmost operand (B) of the binary operation is not necessarily sequenced before the evaluation of the rightmost operand (C):

      A = B B_OP C

    The following expression, according to cppreference (under the subsection "Sequenced-before rules", Undefined behaviour) and Bjarne's TC++PL 3rd ed., is UB:

      x = x++ + 1;

    It could be interpreted however the compiler likes, BUT the expression below is said to be clearly well-defined behaviour in C++11:

      x = ++x + 1;

    So, if the above expression is well defined, what is the "fate" of this?

      array[x] = ++x;

    It seems the evaluation of post-increment and post-decrement is not defined, but pre-increment and pre-decrement are defined. NOTE: This is not used in real-life code. Clang 3.4 and GCC 4.8 clearly warn about the sequence point for both the pre- and post-increment forms.
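
    A short sketch (not from the question) of how the ambiguous forms can be rewritten so that the ordering question disappears entirely:

      // array[x] = ++x; leaves the write to x and the use of x as an index without a
      // reliable ordering before C++17, so make the intended order explicit instead:
      ++x;
      array[x] = x;

      // similarly, x = x++ + 1; can simply be written as:
      x = x + 1;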

    Read the article

  • jQuery .ready() automatically assigning variables for each element with ID in DOM

    - by Greg
    I have noticed some unexpected behaviour when using the jQuery .ready() function, whereby afterwards you can reference an element in the DOM simply by using its ID, without prior declaration or assignment:

      <html>
      <script src="jquery.js"></script>
      <script>
        $(document).ready(function() {
          myowndiv.innerHTML = 'wow!'
        });
      </script>
      <body>
        <div id="myowndiv"></div>
      </body>
      </html>

    I would have expected to have to declare and assign myowndiv with document.getElementById("myowndiv"); or $("#myowndiv"); before I could call innerHTML or anything else on it. Is this behaviour by design? Can anyone explain why? My fear is that if I refactor and end up not using .ready(), or not using jQuery at all, then my code will fail to execute. Cheers!
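
    A sketch of the explicit form the poster expected to need, which does not rely on the browser exposing element IDs as global variables:

      $(document).ready(function () {
          // look the element up explicitly instead of relying on a global "myowndiv"
          var myowndiv = document.getElementById("myowndiv");
          myowndiv.innerHTML = 'wow!';
          // or, staying within jQuery:
          // $("#myowndiv").html('wow!');
      });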

    Read the article

  • Why do local variable names take precedence over function names in JavaScript?

    - by fredrik
    In JavaScript you can define a function in a bunch of different ways:

      function BatmanController () { }

      var BatmanController = function () { }

      // If you want to be EVIL
      eval("function BatmanController () {}");

      // If you are fancy
      (function () {
        function BatmanController () { }
      }());

    By accident I ran across an unexpected behaviour today. When declaring a local variable (in the fancy way) with the same name as a function, the local variable takes precedence inside the local scope. For example:

      (function () {
        "use strict";

        function BatmanController () { }
        console.log(typeof BatmanController); // outputs "function"

        var RobinController = function () { }
        console.log(typeof RobinController); // outputs "function"

        var JokerController = 1;
        function JokerController () { }
        console.log(typeof JokerController); // outputs "number", Ehm what?
      }());

    Anyone know why var JokerController isn't overwritten by function JokerController? I tested this in Chrome, Safari, Canary and Firefox. I would guess it's due to some "look ahead" JavaScript optimisation done in the V8 and JägerMonkey engines. But is there a technical explanation for this behaviour?
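
    A sketch of how the JokerController case is effectively evaluated once declarations are hoisted, which would account for the output without any engine-specific optimisation:

      // Both the function declaration and the var declaration are hoisted to the top
      // of the scope, but the "= 1" assignment stays where it was written and runs last.
      var JokerController = function () { };   // result of hoisting the function declaration
      JokerController = 1;                     // the original "var JokerController = 1;" assignment
      console.log(typeof JokerController);     // "number"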

    Read the article

  • Force calling the derived class implementation within a generic function in C#?

    - by Adam Hardy
    Ok, so I'm currently working with a set of classes that I don't have control over, in some pretty generic functions that use these objects. Instead of writing literally tens of functions that essentially do the same thing for each class, I decided to use a generic function instead. Now the classes I'm dealing with are a little weird, in that the derived classes share many of the same properties but the base class they are derived from doesn't. One such property is .Parent, which exists on a huge number of derived classes but not on the base class, and it is this property that I need to use. For ease of understanding I've created a small example as follows:

      class StandardBaseClass {} // These are simulating the SMO objects

      class StandardDerivedClass : StandardBaseClass
      {
          public object Parent { get; set; }
      }

      static class Extensions
      {
          public static object GetParent(this StandardDerivedClass sdc)
          {
              return sdc.Parent;
          }

          public static object GetParent(this StandardBaseClass sbc)
          {
              throw new NotImplementedException("StandardBaseClass does not contain a property Parent");
          }

          // This is the generic function I'm trying to write, and it needs the Parent property.
          public static void DoSomething<T>(T foo) where T : StandardBaseClass
          {
              object Parent = ((T)foo).GetParent();
          }
      }

    In the above example, calling DoSomething() will throw the NotImplementedException from the base class's implementation of GetParent(), even though I'm forcing the cast to T, which is a StandardDerivedClass. This is contrary to other casting behaviour, whereby downcasting will force the use of the base class's implementation. I see this behaviour as a bug. Has anyone else out there encountered this?
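
    A sketch (invented here, not part of the original classes) of one way to route around the issue: extension-method overloads are bound at compile time against the static type of T, so a runtime type test can be used to reach the derived overload instead:

      public static void DoSomething<T>(T foo) where T : StandardBaseClass
      {
          // hypothetical workaround: pick the overload based on the runtime type
          object parentValue;
          if (foo is StandardDerivedClass)
          {
              parentValue = ((StandardDerivedClass)(object)foo).GetParent();  // derived overload
          }
          else
          {
              parentValue = foo.GetParent();  // falls back to the StandardBaseClass overload
          }
      }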

    Read the article

  • What goes into the "Controller" in "MVC"?

    - by P72endragon
    I think I understand the basic concepts of MVC - the Model contains the data and behaviour of the application, the View is responsible for displaying it to the user, and the Controller deals with user input. What I'm uncertain about is exactly what goes in the Controller. Let's say, for example, I have a fairly simple application (I'm specifically thinking Java, but I suppose the same principles apply elsewhere). I organise my code into 3 packages called app.model, app.view and app.controller. Within the app.model package, I have a few classes that reflect the actual behaviour of the application. These extend Observable and use setChanged() and notifyObservers() to trigger the views to update when appropriate. The app.view package has a class (or several classes for different types of display) that uses javax.swing components to handle the display. Some of these components need to feed back into the Model. If I understand correctly, the View shouldn't have anything to do with the feedback - that should be dealt with by the Controller. So what do I actually put in the Controller? Do I put the public void actionPerformed(ActionEvent e) in the View, with just a call to a method in the Controller? If so, should any validation etc. be done in the Controller? If so, how do I feed error messages back to the View - should that go through the Model again, or should the Controller just send them straight back to the View? If the validation is done in the View, what do I put in the Controller? Sorry for the long question, I just wanted to document my understanding of the process and hopefully someone can clarify this issue for me!
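
    A minimal Java sketch (class and method names invented) of one common wiring for the setup described above: the View owns the Swing listener but immediately delegates to the Controller, which validates, updates the Model, and reports errors straight back to the View:

      import java.awt.event.ActionEvent;
      import java.awt.event.ActionListener;
      import java.util.Observable;
      import javax.swing.JButton;
      import javax.swing.JOptionPane;
      import javax.swing.JTextField;

      class AppModel extends Observable {
          private String value;
          void setValue(String value) {
              this.value = value;
              setChanged();
              notifyObservers();   // registered views repaint themselves
          }
      }

      class AppController {
          private final AppModel model;
          private AppView view;
          AppController(AppModel model) { this.model = model; }
          void setView(AppView view) { this.view = view; }

          // the view forwards raw input; validation lives here
          void saveRequested(String text) {
              if (text.trim().isEmpty()) {
                  view.showError("Value required");   // error goes straight back to the view
              } else {
                  model.setValue(text);               // model change notifies observers
              }
          }
      }

      class AppView {
          private final JTextField input = new JTextField();
          private final JButton save = new JButton("Save");

          AppView(final AppController controller) {
              save.addActionListener(new ActionListener() {
                  public void actionPerformed(ActionEvent e) {
                      controller.saveRequested(input.getText());   // delegate to the controller
                  }
              });
          }

          void showError(String message) {
              JOptionPane.showMessageDialog(null, message);
          }
      }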

    Read the article

  • VSS Post Backup failures for Virtual Server 2005 R2 SP1 virtual machines

    - by califguy4christ
    We've been seeing strange errors with Volume Shadow Copy services on our Virtual Server 2005 R2 SP1 host. It appears to be failing on a strange mount point in the C:\WINDOWS\Temp\ folders, which I believe is used by VSS to mount a writeable image file. To summarize:

      - The Microsoft Virtual Server 2005 Writer continually goes into a failed retryable state
      - The Virtual Server log reports errors during the Post Backup phase
      - VSS reports errors backing up a mount point of unknown origins
      - The mount point causes NTFS and ftdisk errors

    The host is x86 Windows Server 2003 Standard, SP2. The virtual machine is the same. Both use basic disks. Here is the writer state:

      Writer name: 'Microsoft Virtual Server 2005 Writer'
      Writer Id: {76afb926-87ad-4a20-a50f-cdc69412ddfc}
      Writer Instance Id: {78df98e2-bf19-4804-890b-15865efef3bd}
      State: [11] Failed
      Last error: Retryable error

    From the Virtual Server log:

      Virtual Server - Vss Writer - Event ID: 1035: The VSS writer for Virtual Server failed during the PostBackup phase. The guest shadow copies did not get exposed on the host machine, after mounting all the virtual hard disks of the virtual machine VMACHINE.

    From the Application log:

      VSS - None - Event ID: 12290: Volume Shadow Copy Service warning: GetVolumeInformationW( \\?\Volume{fb84bae7-87f5-11dd-9832-001cc4961ca6}\,NULL,0, NULL,NULL,[0x00000000], , 260) == 0x0000045d. hr = 0x00000000.

    From the System log:

      Ntfs - Disk - Event ID: 55: The file system structure on the disk is corrupt and unusable. Please run the chkdsk utility on the volume C:\WINDOWS\Temp\ {fb84bae7-87f5-11dd-9832-001cc49....

    My current theory is that VSS creates a mount point for an image file of the VHD, then the software panics for some reason, leaving everything in an inconsistent state. Removing the mount point doesn't resolve the problem. All of the other disks check out fine with CHKDSK. There's no exclusion option for VHDs or for turning off online backups. Has anyone seen this kind of thing before, or can anyone point me in the right direction for getting more information about the mount point and its origins? I haven't been able to trace which application is creating that mount point.

    Read the article

  • "Unable to open MRTG log file" error with nagios and mrtg

    - by Simone Magnaschi
    We have a strange issue with our setup of Icinga / Nagios and MRTG. Icinga is working great and has no problems; it can monitor basically everything without issues. We set up MRTG to gather bandwidth data from our routers and switches. MRTG is working fine: it stores the log data in the /var/www/mrtg/ directory and displays the graph data via the web. So we assume MRTG is doing great. We tried to set up bandwidth checks in Nagios:

      define service{
          use                   generic-service  ; Inherit values from a template
          host_name             zywall-agora
          service_description   ZYWALL AGORA TRAFFICO
          check_command         check_local_mrtgtraf!/var/www/mrtg/x.x.x.x_2.log!AVG!1000000,2000000!5000000,5000000!1000
          check_interval        1                ; Check the service every 1 minute under normal conditions
          retry_interval        1                ; Re-check every minute until its final/hard state is determined
      }

    where /var/www/mrtg/x.x.x.x_2.log is the correct log file path. We keep getting an "Unable to open MRTG log file" error in the test result in the Icinga web interface. We tried everything: giving user nagios or icinga ownership of the log file, chmod 777 on the file, copying the file to another directory and giving it full permissions. Same error. The strange thing is that if we run the command that Nagios generates in a bash session, the command works like a charm:

      /usr/lib64/nagios/plugins/check_mrtgtraf -F /var/www/mrtg/x.x.x.x_2.log -a AVG -w 10,20 -c 5000000,5000000 -e 10

    Result:

      Traffic WARNING - Avg. In = 17.9 KB/s, Avg. Out = 5.0 KB/s|in=17.877930KB/s;10.000000;5000000.000000;0.000000 out=5.000000KB/s;20.000000;5000000.000000;0.000000

    We ran that command line as root, as user nagios and as user icinga, and all three worked OK. We thought that maybe the command Nagios runs has something wrong in it, so we debugged Nagios, but we found that the generated command is the same as above. Searching Google for this kind of problem returns only systems where MRTG is not installed or cases with a wrong path to the log file, but neither seems to be our case. We are stuck, can somebody help?

    Read the article

  • Low CPU performance with low usage and clock - Windows 8.1

    - by Daniele
    I recently deleted everything from my PC and reinstalled Windows 8.1 from scratch. When I first booted into Windows everything was extremely slow, though the CPU usage was very low (about 1%). After installing some drivers the problem seemed to be solved, and I was able to use my PC normally. Today I installed a game and noticed a strange behavior: the game was playable, but the performance worsened more and more over time.

    This is the situation BEFORE opening the game (normal): [screenshot]
    This is AFTER some minutes inside the game (low CPU usage and clock): [screenshot]

    Some information about my system:

      PC: Sony Vaio S13 (SVS13A1C5E)
      OS: Windows 8.1
      CPU: Intel Core i7-3520M 2.90GHz
      GPU(1): Intel HD Graphics 4000
      GPU(2): NVIDIA GeForce GT 640M LE

    I tried searching for new drivers and other solutions, but nothing worked and I don't know what the cause is. I did not check the temperatures, but the fans are not running fast and the PC does not look overheated.

    Update: Max CPU Temp: 66°C, Max GPU Temp: 61°C. The strange thing is that the GPU load is 99% (GPU-Z) and the fan is almost silent.

    Update 2: I had trouble with the Sony Vaio software; I can't get the FN keys and the STAMINA/SPEED switch to work (it is a physical switch to enable/disable the Nvidia card and change the power profile). I'm saying this because I remember that before reinstalling Windows there was an option in the Vaio Control Center (it is not there anymore) that allowed me to choose something like "priority to performance (ventilation)" or "priority to silence". The current behavior looks like "priority to silence", but I can't get the STAMINA/SPEED switch to work and so I don't see similar options in the Vaio Control Center. I don't know if the problem is related to this.

    Read the article

  • WHS - Windows Update Failure

    - by Kyle B.
    Clicking "Update Now..." inside my EX470 control panel for Windows Update produces the following error message: "Windows Home Server updates installation can not complete. Please try again later. If the problem persists, please restart the server." I have rebooted the server numerous times, and I have also used Remote Desktop to connect to the machine to perform the update that way, however the browser is unable to pull up http://windowsupdate.microsoft.com. This is very strange behavior because I am able to access all other sites (gmail.com, serverfault.com, etc). Would it be possible for someone to explain to me how I can check what is blocking the connection of this device, which apparently has a valid internet connection, to the Microsoft Windows Update site?

    Note #1: Using the shortcut %SystemRoot%\system32\wupdmgr.exe does not work either. It says "Connecting to 65.55.200.155..." but nothing ever happens. This is strange because all other sites seem fine. Also, I can connect to windowsupdate.microsoft.com on my local desktop, so I know the site itself is running as well.

    Read the article

  • Mplayer no sound when playing some movies

    - by Ivan Peevski
    Ok, that's a bit of a strange problem that somehow crept into my system. It used to work fine. Here is the problem as far as I can identify it: when I try to play certain video files with mplayer, there is no sound. As far as I can tell, it is only an issue with AC3 and DTS sound tracks (using the ffmpeg decoder). Mplayer says:

      ==========================================================================
      Opening audio decoder: [ffmpeg] FFmpeg/libavcodec audio decoders
      AUDIO: 48000 Hz, 6 ch, s16le, 1536.0 kbit/33.33% (ratio: 192000->576000)
      Selected audio codec: [ffdca] afm: ffmpeg (FFmpeg DTS)
      ==========================================================================
      [AO_ALSA] Playback open error: Device or resource busy
      Failed to initialize audio driver 'alsa'
      Could not open/initialize audio device -> no sound.
      Audio: no sound

    (similar with AC3 sound, but using the ffac3 audio codec). Trying a different audio output (-ao oss/pcm/sdl) doesn't fix the problem. The strange thing is that if I play these files directly with ffplay, they work fine. Mplayer sound with mp3/ogg is fine. My ALSA configuration is standard (no /etc/asound.conf or ~/.asound*).

      OS: Linux Gentoo
      Mplayer: 1.0_rc4_p20100213 (SVN-r30554-4.3.4)
      FFMpeg: 0.5_p20601-r1 (SVN-r20601)

    Any other information I can provide?

    Read the article

  • Toshiba Qosmio: Battery Stuck at 60%, does not Charge, PC can't power up, can't remain on without

    - by Fellknight
    Just like the title says; now let me try to give some more detail about the symptoms. The battery is stuck at 60 percent (68% at the moment of this writing). When hovering over the battery icon in Windows 7 Home Premium x64 it reads "68% available (plugged in, charging)"; there's no X or any sign that the OS is displaying an error. No matter how long it is left connected to the AC adapter, the battery doesn't charge; it seems, however, that it continues to discharge at its normal rate when disconnected from the laptop (about 1% every 2 weeks). Now this last symptom is the one I find most strange: it "seems" the laptop somehow isn't recognizing the battery, because even with the remaining charge of 60%(ish) the laptop won't power up or remain on if disconnected from its AC adapter (if it's on and is unplugged it will immediately turn off). Meaning that even with the battery attached correctly in its right place, it is as if the laptop were running with no battery at all. Toshiba's utilities haven't detected anything strange (or anything for that matter) with the battery or the hardware. The laptop, when in use, is connected 90% of the time to a Belkin surge protector (like my 1TB EHD). The protector is working correctly (green light on) and the 1TB HD too, so a power surge having damaged it is very unlikely.

    Read the article

  • "Run As Administrator" on program right click failing and not launching program

    - by GONeale
    This problem lies within a relatively fresh x64 Windows 7 install (~4 weeks), but it is also a problem I have seen on Windows Vista machines (x86 versions). Since the other day, any program launched by right-clicking a shortcut (.lnk) and choosing "Run As Administrator" from its context menu, for instance in the Quick Launch/Jump List in Windows 7, has failed: the screen does not dim and there is no UAC popup. In fact the program does not even load. There is no way around this unless I use the shortcut version from "All Programs", which appears to work. Very strange. I have performed no major software installs, nothing out of the ordinary. Has anybody encountered this or know what would be causing it? Here's an example of somebody else experiencing this problem in Vista with no solution: http://www.vistax64.com/vista-general/131918-strange-run-administrator-problem.html and I believe this problem is related (I also cannot right-click - "Manage" on My Computer): http://windows7forums.com/windows-7-support/5501-run-administrator-broken.html I am running the latest version of Avira AntiVir Virus Scanner and am pretty conscious of what I download, so I don't think it is a virus, nor do I believe it is due to the RC version of Windows 7, because I have seen the problem across multiple operating system versions.

    Read the article

  • Why do I get this message from chrome when navigating to https://www.amazon.com?

    - by Denis
    "This is probably not the site you are looking for! You attempted to reach www.amazon.com, but instead you actually reached a server identifying itself as *.voxcdn.com. This may be caused by a misconfiguration on the server or by something more serious. An attacker on your network could be trying to get you to visit a fake (and potentially harmful) version of www.amazon.com."

    Intermittently, I get a blank page when going to http://www.amazon.com. So I stuck an 's' in the URL, making it https://www.amazon.com, and got the message above (with the nice red screen) from Chrome, indicating there might be some monkey business going on. After hammering on the URL a bunch of times and pulling it up in Chrome's developer tools to look at the network traffic, the URL (without the s) started behaving. The URL with the s just hangs, but the red screen no longer comes up. Some specs: I've got a MacBook Pro, Snow Leopard, Time Warner cable. I've had enough strange stuff happening over the past couple of months (google.com, youtube.com, amazon.com not coming up, or loading strange error messages with random reference numbers) that I finally decided to switch to OpenDNS. Still having problems, though.

    Read the article

  • unable to ping hostname, but \\hostname\c$ works!?

    - by ciscokid
    I'm having a strange issue in my initial lab setup. Situation: a host running Server 2008 R2 64-bit, and on this host a virtual machine in Hyper-V running Server 2008 SP1 32-bit. The virtual machine has a fixed IP and refers to itself for the preferred DNS server (the DNS server role has been installed). The host has TCP/IP set to automatic (so automatic IP from the router, and DNS/gateway = router). Both are able to ping each other by IP address (same IP range). Both are NOT able to ping each other by hostname (which sounds logical, because the virtual machine's DNS server does not yet have a DNS record for the host machine). But here's the strange thing: I am able to set up a working network mapping on the virtual machine to the host: \\hostname\c$. The first thing I thought was that 'something' is blocking the ping request, so I completely disabled Windows Firewall on both the virtual machine and the host. Pinging by hostname in both directions still didn't work, yet I am able to access the network mapping by hostname. There is no extra software installed on either system (clean Windows Server 2008). Can someone tell me what is causing this? I always thought: ping by IP address works = network mapping by IP address works; pinging by hostname doesn't work = network mapping by hostname doesn't work either. Where am I wrong? Looking forward to your advice!
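
    A couple of diagnostic commands (a sketch, assuming a standard Windows name-resolution setup) that can show whether the UNC path is being resolved through NetBIOS rather than DNS, which would explain ping failing while \\hostname\c$ still works:

      rem DNS resolution (the lookup that ping is failing on):
      nslookup hostname

      rem NetBIOS name resolution, which SMB/UNC access can fall back on even without a DNS record:
      nbtstat -a hostname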

    Read the article

  • Network share not always available on Windows 2003

    - by JP Hellemons
    Hello everybody, we have a Windows 2003 server with a shared directory/folder. I've seen this thread but it wasn't any help: http://superuser.com/questions/58890/the-specified-network-name-is-no-longer-available I have a ping -t running from 3 PCs (Vista and two Windows 7); they all work. The problem occurs when two users enter the network share: the 'network share is no longer available' message appears and the Explorer windows turn white. After F5 or refresh, the shared directory is back. This is really strange. There is no antivirus or Kaspersky running on either end, and this is all in the same LAN. The internet connection is really stable, which makes it stranger still, because a stable internet connection should imply that the local network connection is also stable and that this is a Windows issue. Can it be a router issue? I have checked the event log on the server for disk-failure-related messages, but there are none. EDIT: can this be related to mapping a shared directory to a drive letter, and the fact that there is a router between me and the mapped network drive? Or is it just Windows not working well with two users on the same shared folder? Should I install Samba or something?

    Read the article

  • (Mac Intel) HP PS driver prints in B&W from Adobe Reader after installing Cannon PS driver

    - by JohnB
    I have a unique problem that leaves me at a loss as to where to start troubleshooting. We have three Macs we use for graphics, two of which are PowerPC and one of which is Intel. They are set up to print to an HP 5500dn, but occasionally this printer gets tied up with a massive print job, so I installed the PS driver (iR-PSv1.81MacOSX) for the Cannon C3200 printer/copier on each of the machines. Both of the PowerPC Macs installed without issue, but the Intel Mac exhibits strange behavior. I've confirmed that while the Cannon driver is installed (whether or not the Cannon is set up for printing in the print settings), the HP 5500dn will print in color from Safari but only prints in black and white from Adobe Reader. The Cannon printer itself has not exhibited any strange behavior. As soon as the Cannon driver is uninstalled, the HP 5500dn prints in color from Adobe Reader again. We run a network of Windows PCs, and the 'Mac room' mostly takes care of itself, so we don't have any experienced Mac administrators onsite. The Cannon is capable of AppleTalk, but the PS driver seemed easier to work with (and AppleTalk is currently disabled on the Cannon). I'm not against using the AppleTalk-compatible drivers, but I would rather use the PS driver if at all possible - I don't want to open up the proverbial can of worms. If someone has any clues or suggestions that would help troubleshoot this problem, I would be grateful. I've already done some googling, but due to the obscure nature of this problem, I haven't been very successful. I don't like to create multiple threads on multiple sites, but I'm posting here due to Chopper3's suggestion on my post on ServerFault (http://serverfault.com/questions/135349/mac-intel-hp-ps-driver-prints-in-bw-from-adobe-reader-after-installing-cannon)

    Read the article
