Search Results

Search found 26810 results on 1073 pages for 'fixed point'.


  • ArchBeat Link-o-Rama for 11/11/2011

    - by Bob Rhubart
    - 3 SOA business cases, explained in a 2-minute elevator speech | Joe McKendrick: Impress your CEO — maybe even the CFO — with some quick examples of SOA making a difference to the business.
    - ADF Faces - a logic bomb in the order of bean instantiations | Chris Muir: Oracle ACE Director Chris Muir shares the details on "an interesting ADF logic bomb" discovered by one of his colleagues.
    - 5 key trends in cloud computing's future | David Linthicum: "'Cloud computing' will become just 'computing' at some point," says Linthicum, "but it will still be around as an approach to computing."
    - What's New with XBRL? | John O'Rourke: John O'Rourke shares highlights and key take-aways from the XBRL US Conference in Nashville and the XBRL International Conference in Montreal.
    - Siri-ous Business: Enterprise Apps and Global UX Considerations | Ultan O'Broin: Ultan O'Broin ponders "the enterprise applications user experience (UX) implications of Siri" and "the global UX aspects to the Siri potential."
    - These are 11 of my favorite things! | Mike Gerdts: Gerdts introduces his 11 favorite things about zones in Solaris 11.
    - The Power of Social Recommendations | Peter Reiser: "Do you really want to invest to drive YOUR audience through public social networks," asks Reiser, "or do you want to have YOUR audience on your own social network which is seamlessly integrated with your web properties and business applications?"
    - Fourth Key Attribute of Cloud Computing - Provisioning | Tom Laszewski: "Self-service provisioning of computing infrastructure in a cloud infrastructure is also very desirable as it can cut down the time it takes to deploy new infrastructure for a new application or scale up/down infrastructure for an existing application," says Tom Laszewski.
    - Oracle Utilities Application Framework Whitepaper List as of November 2011 | Anthony Shorten: Anthony Shorten shares an updated and nicely detailed list of Oracle Utilities Application Framework white papers.
    - Down from the Tower; Information Integration Conversation; By the Time the Architects get to Phoenix: This week on the Oracle Technology Network Architect Home Page.

    Read the article

  • Troubles setting up my new T1

    - by timmaah
    I'm more of a web developer kind of guy with limited knowledge of networks, so if anyone can point me in the right direction, I would be grateful. I am replacing my satellite connection with a T1 I got for a good deal through the phone company. I also managed to get my hands on a Netvanta 3200 router. My problem is I can't quite figure out how to set up the router and can't find any kind of guide that would explain what I need to set where. I'm not sure what to do next on my troubleshooting journey.

    Read the article

  • DPM Coordinator Service is not responding

    - by Anatoly Vilchinsky
    Hi dear Gurus! I'm stuck at this point, so please at least try to help me. I've installed DPM 2010 on a cluster server (W2K8 R2), and everything was fine. Then I tried to install the Protection Agent on both servers of my cluster, but I'm getting this error: Error 312: The agent operation failed because the DPM Agent Coordinator service is not responding. Error details: The service cannot be started, either because it is disabled or because it has no enabled devices associated with it (0x80070422) Recommended action: Restart the DPM Agent Coordinator service on . The thing is that I can't see a DPM Agent Coordinator service at all, neither on the localhost nor on the second server. All the suggestions that I've found on the internet say to restart this service, but how can I do that if the service is absent? I'll be glad for any help.

    Read the article

  • IIS cache control header settings

    - by a_m0d
    I'm currently working on a website that is accessed over https. We have recently come across a problem where we are unable to view .pdf files or any other type of file that is sent as an attachment (Content-Disposition: attachment). According to the Microsoft Knowledge Base, this is due to the fact that Cache-Control is set to no-cache. However, we have a requirement that all pages be fully reloaded every time they are visited, so we have disabled caching on all pages (through our ASP code, not through IIS settings). However, I have made a special case of this one page that shows the attachment, and it now returns a header with Cache-Control: private and the expiry set to 1 minute in the future. This works fine when I test it on my local machine, using https. However, when I deploy it to our test server and try it, the response headers still return Cache-Control: no-cache. There is no firewall or anything between me and the server, so IIS itself must be adding these headers and replacing mine. I have no idea why it would do this, and it doesn't really make any sense, but it seems to be the only explanation at the moment (I haven't yet found any other place in the code that will change the cache headers). Can anyone point me to a possible place where IIS might be setting these header values?
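
    To make the intended response concrete: the fix being described is to serve the attachment with Cache-Control: private and a short expiry instead of no-cache (the Knowledge Base behaviour the question refers to). The original code is classic ASP and isn't shown, so the snippet below is only a hypothetical Java-servlet rendering of that header combination; the class name and file name are invented for illustration.

        // Hypothetical sketch (not the site's ASP code): the header combination that
        // allows an attachment download over https while keeping caching short-lived.
        import java.io.IOException;
        import javax.servlet.ServletException;
        import javax.servlet.http.HttpServlet;
        import javax.servlet.http.HttpServletRequest;
        import javax.servlet.http.HttpServletResponse;

        public class PdfAttachmentServlet extends HttpServlet {
            @Override
            protected void doGet(HttpServletRequest req, HttpServletResponse resp)
                    throws ServletException, IOException {
                resp.setContentType("application/pdf");
                resp.setHeader("Content-Disposition", "attachment; filename=\"report.pdf\"");
                // "private" with a short lifetime, rather than "no-cache"/"no-store",
                // which is what breaks attachment downloads over SSL per the KB article.
                resp.setHeader("Cache-Control", "private, max-age=60");
                // ... stream the PDF bytes to resp.getOutputStream() here ...
            }
        }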

    Read the article

  • Doesn't boot after installation

    - by jchysk
    Downloaded Ubuntu 12.04.1-alternate-amd64 and installed it to a USB stick. The integrity check fails on ./install/netboot/ubuntu-installer/amd64/pxelinux.cfg/default, but that seems to be a known bug where the file isn't included in the alternate 64-bit ISO and shouldn't affect installation, so I ignore it and proceed. For partitioning on 2 SSD drives:
    - Partition 300MB and 63GB on both
    - RAID1 the 300MB partitions and the 63GB partitions
    - Set the 300MB to EXT4 on /boot
    - Encrypt the rest as MD1 and set it for LVM
    - Create two volumes from MD1: 4GB swap and 59GB for /
    I go through the installation and get to the point where it says everything is ready and to take the media out so as to boot from the drives. On startup I receive the error "Error: No video mode activated." I've read that this can be solved by running "cp /usr/share/grub/*.pf2 /boot/grub" and then updating grub, but I can't get to a place where I can actually run this command. In rescue mode I can get to a shell from the installer with /boot mounted to /target. So from there I can run "cp /cdrom/boot/grub/font.pf2 /target/grub/", but I can't figure out how to get it to update grub after that, or what to change if I edit grub.cfg manually. If I try other devices to mount the root filesystem I get the error "An error occurred while mounting the device you entered for your root file system". It just sits on the video mode error and doesn't progress further. Googling around, it seems like people see the error briefly before it continues booting, not getting stuck on it the way I am, which leads me to believe that error may be unrelated to Ubuntu not booting. So any ideas as to what I should try next, or what needs to be done to install Ubuntu and get it to boot, would be helpful.

    Read the article

  • How to stop Windows 7 from automatically connecting to an unsecured wifi network

    - by Remi Despres-Smyth
    One of my neighbors has an unsecured wifi network called WLAN. At one point in the past, I accidentally connected to it, and disconnected immediately when I noticed. Now, when I open my laptop at home, it sometimes connects to the WLAN network first, before trying my (secured) home wifi network. The information I've found regarding this issue suggests this network should have a profile on the "Manage wireless networks" screen - but it does not. How do I tell Windows 7 to never connect to networks with the SSID WLAN? Or to never connect to unsecured networks without confirming with me first?

    Read the article

  • Where could Distributed Version Control Systems currently be in Gartner's hype cycle?

    - by dukeofgaming
    Edit: Given the recent downvoting (+8/-6 at this point) it was made clear to me that Gartner's hype cycle is a biased metric from a programmer's perspective. This is something that is part of a paper I'm going to present to management, and management types are part of Gartner's audience. Given DVCS exposure & enthusiasm (which "could" be deemed hype, or at least attacked as such), think about the following question when reading this one: "how could I use Gartner's hype cycle to convince management that DVCSs are ready (or ready enough) for us, and that it is not just hype?" Just asking whether DVCSs are hype wouldn't be constructive; Gartner's hype cycle is a more objective instrument than just asking that (even if this instrument is regarded as biased). If you know any other instrument, please, by all means, mention it. Edit #2: I agree that Gartner's hype cycle is not for every technology, but I consider that it may have generated enough buzz to be considered hype by some, so it maybe deserves to be at least evaluated/pondered as such by using this instrument, in order to prove/disprove it to whatever degree. I'm an advocate of DVCS, BTW. I'm doing research for a whitepaper I'm writing in favor of DVCS adoption at my company, and I stumbled upon the concept of social proof. I want to prove that the social proof of DVCS adoption is not necessarily cargo cult, and doing further research I stumbled upon Gartner's hype cycle, which describes technology maturity in 5 phases. My question is: what could be an indicator of the current location of Distributed Version Control Systems (I mean git, mercurial, bazaar, etc. in general) at a particular phase in the hype cycle? In other (less convoluted) words, would you say that current expectations of DVCSs are a) starting, b) inflated, c) decreasing (disillusionment), d) increasing (enlightenment) or e) stabilizing (mature), and (more importantly) why? I know it is a hard question and there is subjectivity involved, but I'll grant the answer (and the traditional cookie) to the clearest argument/evidence for a particular phase.

    Read the article

  • Can't change Hyper Terminal Hardware Settings

    - by Tim
    Anyone here any good with HyperTerminal? I'm having a nightmare trying to use it at work to update our telephone extensions. The company who supplies our PABX box has told me that XP does strange things to HyperTerminal and that I should use Win2k, which I did, with the same result. I have narrowed the problem down to the Hardware Settings for the connection in HyperTerminal. No matter what I set (and I need 7E1) it defaults back to 8N1. Has anyone seen this behaviour before and know of a simple way around it? (Apart from buying a more expensive commercial version of HyperTerminal, as suggested by our support people.) Edit: I should point out I have to connect via a phone line, so a direct serial connection is not an option. Cheers, Tim

    Read the article

  • Unable to install SQL Server 2008 Express SP1

    - by dahacker89
    I am facing difficulties installing MS SQL Server Express 2008 Service Pack 1. I already have MS SQL Server Express 2008 installed, and all I want to do is install SP1; however, even though all features are selected, I get the following error message telling me to select one or more features: Also, just for information, when I open the SQL Server Configuration Manager to manage my SQL Server services, the following error message is displayed: If anyone has faced this and has a solution, please let me know, as my aim is to install Management Studio, but for that I must have at least SP1 installed, and I'm stuck at that point. Thanks.

    Read the article

  • If You Include the Groovy Editor...

    - by Geertjan
    ...in a NetBeans RCP application, what additional JARs will you need to include for the Groovy Editor to work? Leaving aside the debate on the current state & quality of the NetBeans Groovy Editor, assuming you need the Groovy support that the NetBeans Groovy Editor provides, you would check the Groovy Editor checkbox in the Project Properties dialog of your application: As you can see, however, the Groovy Editor depends on other modules, some of which, in turn, depend on yet other modules, and so on. So, I clicked the "Resolve" button above and then created a ZIP distribution, to see which additional JARs had been included. Until that point, I had only been using the "platform" cluster, which means that absolutely everything found in the ZIP's "ide" cluster and "java" cluster has only been included so that the Groovy Editor could be included, i.e., all thanks to clicking the "Resolve" button above. Let's first look at what that means for the "java" cluster: That's not so bad, and kind of a side effect of Groovy being Java, i.e., a lot of Java functionality is needed. Now let's look at the "ide" cluster: So, in answer to the original question, if all you want in your NetBeans Platform application, in terms of editor functionality, is the Groovy Editor, then you have a pretty high price to pay. At the very least, I would have assumed that the project support JARs and the debugger support JARs would not be so tightly coupled with the Groovy Editor. That would be a cool thing to separate out from the editor support.

    Read the article

  • How do I choose the scaling factor of a 3D game world?

    - by concept3d
    I am making a 3D tank game prototype with some physics simulation, using C++. One of the decisions I need to make is the scale of the game world in relation to reality. For example, I could consider 1 in-game unit of measurement to correspond to 1 meter in reality. This feels intuitive, but I feel like I might be missing something. I can think of the following as potential problems:
    - 3D modelling program compatibility. (?)
    - Numerical accuracy. (Does this matter? See the sketch after this list.) Especially at large scales: games like Battlefield have huge maps, so how do they avoid losing numerical accuracy if they use a 1:1 mapping with real-world scale, since floating point representation tends to lose more precision with larger numbers (e.g. with ray casting, physics simulation)?
    - Gameplay. I don't want the movement of units to feel slow or fast while using almost real-world values like -9.8 m/s^2 for gravity. (This might be subjective.) Is it ok to scale imported assets up/down, or is it best to fit them to a world at their original scale?
    - Rendering performance. Are large meshes with the same vertex count slower to render?
    I'm wondering if I should split this into multiple questions...
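
    On the numerical accuracy bullet, here is a small self-contained sketch (written in Java purely for illustration, not the C++ of the prototype) of why a strict 1:1 scale can hurt: the spacing between representable float values grows with magnitude, so a millimetre-sized physics step can vanish entirely far from the origin. This is the reason large-world engines typically keep positions in doubles or periodically shift the origin toward the camera.

        // Demonstrates float precision loss at large world coordinates, assuming 1 unit = 1 m.
        public class FloatPrecisionDemo {
            public static void main(String[] args) {
                float nearOrigin = 1.0f;
                float farAway = 100_000.0f;                    // a tank 100 km from the origin

                // Math.ulp() is the gap between adjacent representable floats at that magnitude.
                System.out.println(Math.ulp(nearOrigin));      // ~0.00000012 m (sub-micron)
                System.out.println(Math.ulp(farAway));         // ~0.0078 m (almost 8 mm)

                // A 1 mm physics step is smaller than half that gap, so it is simply lost:
                float oneMillimetre = 0.001f;
                System.out.println(farAway + oneMillimetre == farAway);   // prints: true
            }
        }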

    Read the article

  • How can I force Parallels' networking to obtain an IP through a wireless router?

    - by RLH
    Here is my setup. I have a MacBook, a Thunderbolt display and an Ethernet connection plugged into the Thunderbolt display. During the day, most of my network use can (and should) operate across the ethernet associated with my display. However, I also need to be able to connect up to a wireless router. This hasn't been a problem on the Mac OS X side, but the program that I need to run on the router has to obtain an IP address from the wireless access point. Considering my current setup, how can I leave it so that I can access the internet in OS X, yet have my Windows 7 instance, running in Parallels, get its assigned IP address from a wireless router that my Mac is also connected to? I've fiddled around with the Parallels network settings for an hour, and I can't get Parallels to see the router, even though my Mac is certainly connected to it.

    Read the article

  • How do I Series: Connecting an Expression Blend Project to Team Foundation Server

    - by Enrique Lima
    I have heard of people wanting and needing to add projects created in Expression Blend to Team Foundation Server. Here is the recipe:
    1) Create your project in Expression Blend … click OK.
    2) Select the option to open your recently created project in Visual Studio. Once that option is selected, your solution will open up in Visual Studio; close Expression Blend at this point.
    Now, I want to add this project to Source Control … Next, I connect to my TFS environment, and pick the location to save my project. Once the project is added, I will get a status window of pending changes for my project; all that we are left to do is to check in those changes. Since we have checked in our project, we can now close Visual Studio, and we will proceed to open Expression Blend again. And select our project we will! We notice some differences from before, just by opening it. What differences, you say?!? Notice the lock to the right of the item name … And we also get this when we right-click … And there we have it: it is a combination of tools to achieve this, but it is well worth it.

    Read the article

  • The Stub Proto: Not Just For Stub Objects Anymore

    - by user9154181
    One of the great pleasures of programming is to invent something for a narrow purpose, and then to realize that it is a general solution to a broader problem. In hindsight, these things seem perfectly natural and obvious. The stub proto area used to build the core Solaris consolidation has turned out to be one of those things.

    As discussed in an earlier article, the stub proto area was invented as part of the effort to use stub objects to build the core ON consolidation. Its purpose was merely as a place to hold stub objects. However, we keep finding other uses for it. It turns out that the stub proto should be more properly thought of as an auxiliary place to put things that we would like to put into the proto to help us build the product, but which we do not wish to package or deliver to the end user. Stub objects are one example, but private lint libraries, header files, archives, and relocatable objects are all examples of things that might profitably go into the stub proto. Without a stub proto, these items were handled in a variety of ad hoc ways:

    - If one part of the workspace needed private header files, libraries, or other such items, it might modify its Makefile to reach up and over to the place in the workspace where those things live and use them from there. There are several problems with this:
      - Each component invents its own approach, meaning that programmers maintaining the system have to invest extra effort to understand what things mean. In the past, this has created makefile ghettos in which only the person who wrote the makefiles feels confident to modify them, while everyone else ignores them. This causes many difficulties and benefits no one.
      - These interdependencies are not obvious to the make utility, and can lead to races.
      - They are not obvious to the human reader, who may therefore not realize that they exist, and break them.
    - Our policy in ON is not to deliver files into the proto unless those files are intended to be packaged and delivered to the end user. However, sometimes non-shipping files were copied into the proto anyway, causing a different set of problems:
      - It requires a long list of exceptions to silence our normal unused proto item error checking.
      - In the past, we have accidentally shipped files that we did not intend to deliver to the end user.
      - Mixing cruft with valuable items makes it hard to discern which is which.

    The stub proto area offers a convenient and robust solution. Files needed to build the workspace that are not delivered to the end user can instead be installed into the stub proto. No special exceptions or custom make rules are needed, and the intent is always clear. We are already accessing some private lint libraries and compilation symlinks in this manner. Ultimately, I'd like to see all of the files in the proto that have a packaging exception delivered to the stub proto instead, and for the elimination of all existing special case makefile rules. This would include shared objects, header files, and lint libraries. I don't expect this to happen overnight — it will be a long term case by case project, but the overall trend is clear.

    The Stub Proto, -z assert_deflib, And The End Of Accidental System Object Linking

    We recently used the stub proto to solve an annoying build issue that goes back to the earliest days of Solaris: how to ensure that we're linking to the OS bits we're building instead of to those from the running system.

    The Solaris product is made up of objects and files from a number of different consolidations, each of which is built separately from the others from an independent code base called a gate. The core Solaris OS consolidation is ON, which stands for "Operating System and Networking". You will frequently also see ON called the OSnet. There are consolidations for X11 graphics, the desktop environment, open source utilities, compilers and development tools, and many others. The collection of consolidations that make up Solaris is known as the "Wad Of Stuff", usually referred to simply as the WOS. None of these consolidations is self contained. Even the core ON consolidation has some dependencies on libraries that come from other consolidations.

    The build server used to build the OSnet must be running a relatively recent version of Solaris, which means that its objects will be very similar to the new ones being built. However, it is necessarily true that the build system objects will always be a little behind, and that incompatible differences may exist.

    The objects built by the OSnet link to other objects. Some of these dependencies come from the OSnet, while others come from other consolidations. The objects from other consolidations are provided by the standard library directories on the build system (/lib, /usr/lib). The objects from the OSnet itself are supposed to come from the proto areas in the workspace, and not from the build server. In order to achieve this, we make use of the -L command line option to the link-editor. The link-editor finds dependencies by looking in the directories specified by the caller using the -L command line option. If the desired dependency is not found in one of these locations, ld will then fall back to looking at the default locations (/lib, /usr/lib). In order to use OSnet objects from the workspace instead of the system, while still accessing non-OSnet objects from the system, our Makefiles set -L link-editor options that point at the workspace proto areas. In general, this works well and dependencies are found in the right places. However, there have always been failures:

    - Building objects in the wrong order might mean that an OSnet dependency hasn't been built before an object that needs it. If so, the dependency will not be seen in the proto, and the link-editor will silently fall back to the one on the build server.
    - Errors in the makefiles can wipe out the -L options that our top level makefiles establish to cause ld to look at the workspace proto first. In this case, all objects will be found on the build server.

    These failures were rarely if ever caught. As I mentioned earlier, the objects on the build server are generally quite close to the objects built in the workspace. If they offer compatible linking interfaces, then the objects that link to them will behave properly, and no issue will ever be seen. However, if they do not offer compatible linking interfaces, the failure modes can be puzzling and hard to pin down. Either way, there won't be a compile-time warning or error. The advent of the stub proto eliminated the first type of failure. With stub objects, there is no dependency ordering, and the necessary stub object dependency will always be in place for any OSnet object that needs it. However, makefile errors do still occur, and so, the second form of error was still possible.

    While working on the stub object project, we realized that the stub proto was also the key to solving the second form of failure caused by makefile errors:

    - Due to the way we set the -L options to point at our workspace proto areas, any valid object from the OSnet should be found via a path specified by -L, and not from the default locations (/lib, /usr/lib).
    - Any OSnet object found via the default locations means that we've linked to the build server, which is an error we'd like to catch.
    - Non-OSnet objects don't exist in the proto areas, and so are found via the default paths. However, if we were to create a symlink in the stub proto pointing at each non-OSnet dependency that we require, then the non-OSnet objects would also be found via the paths specified by -L, and not from the link-editor defaults.
    - Given the above, we should not find any dependency objects from the link-editor defaults. Any dependency found via the link-editor defaults means that we have a Makefile error, and that we are linking to the build server inappropriately. All we need to make use of this fact is a linker option to produce a warning when it happens.

    Although warnings are nice, we in the OSnet have a zero tolerance policy for build noise. The -z fatal-warnings option that was recently introduced with -z guidance can be used to turn the warnings into fatal build errors, forcing the programmer to fix them. This was too easy to resist. I integrated

        7021198 ld option to warn when link accesses a library via default path
        PSARC/2011/068 ld -z assert-deflib option

    into snv_161 (February 2011), shortly after the stub proto was introduced into ON. This putback introduced the -z assert-deflib option to the link-editor:

        -z assert-deflib=[libname]

        Enables warning messages for libraries specified with the -l command line
        option that are found by examining the default search paths provided by the
        link-editor. If a libname value is provided, the default library warning
        feature is enabled, and the specified library is added to a list of libraries
        for which no warnings will be issued. Multiple -z assert-deflib options can be
        specified in order to specify multiple libraries for which warnings should not
        be issued. The libname value should be the name of the library file, as found
        by the link-editor, without any path components. For example, the following
        enables default library warnings, and excludes the standard C library.

            ld ... -z assert-deflib=libc.so ...

        -z assert-deflib is a specialized option, primarily of interest in build
        environments where multiple objects with the same name exist and tight control
        over the library used is required. It is not intended for general use.

    Note that the definition of -z assert-deflib allows for exceptions to be specified as arguments to the option. In general, the idea of using a symlink from the stub proto is superior because it does not clutter up the link command with a long list of objects. When building the OSnet, we usually use the plain form of -z assert-deflib, and make symlinks for the non-OSnet dependencies. The exceptions to this are dependencies supplied by the compiler itself, which are usually found at whatever arbitrary location the compiler happens to be installed at. To handle these special cases, the command line version works better.

    Following the integration of the link-editor change, I made use of -z assert-deflib in OSnet builds with

        7021896 Prevent OSnet from accidentally linking to build system

    which integrated into snv_162 (March 2011). Turning on -z assert-deflib exposed between 10 and 20 existing errors in our Makefiles, which were all fixed in the same putback. The errors we found in our Makefiles underscore how difficult they can be to prevent without an automatic system in place to catch them.

    Conclusions

    The stub proto is proving to be a generally useful construct for ON builds that goes beyond serving as a place to hold stub objects. Although invented to hold stub objects, it has already allowed us to simplify a number of previously difficult situations in our makefiles and builds. I expect that we'll find uses for it beyond those described here as we go forward.

    Read the article

  • Snow Leopard: Optimization

    - by Shyam
    Hi, I have a bunch of questions: I have a Mac network, which has five Macs. Right now, they are individually getting software updates. Is there a way to download the patches/security updates to a single place (a repository) and point all machines to that location? Personally, I have tools like Monolingual and Onyx, but are there tools you could recommend that positively affect the performance of the operating system? Tweaks would be nice. Links and pointers would be really appreciated. I've read about Time Machine; is there a way to back up all machines to a network drive using this tool? Thanks!

    Read the article

  • SSH tunnel RDP through gateway server outside the network?

    - by Mike
    I need to access a PC via RDP that is behind a firewall. There's no way to connect to it directly that I know of. What I'd like to do is SSH from that remote PC to my home Ubuntu server, then connect to the remote PC using my home PC with the Ubuntu server as a gateway. I've tried SSH from remote PC to Ubuntu server, tunneling remote port 3389 to 127.0.0.1:3389, then SSH from home PC to Ubuntu server, tunneling local port 13389 to remote port 3389. At that point I try to RDP into: 127.0.0.1:13389, 127.0.0.2:13389, :3389 - no dice. I suppose I could simply set up an SSH server on my home PC and SSH from remote PC into home PC and then establish the tunnel that way, but I'd rather not go through the hassle of installing and configuring an ssh server on my home PC. I know LogMeIn would work here, but I don't want to go that route for various reasons. Any ideas? Thanks!
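
    For what it's worth, here is a sketch of the two-hop forwarding being attempted, written with the JSch SSH library purely to make the host/port pairs explicit; the host name, user and password are placeholders, not values from the post. The equivalent OpenSSH commands would be a remote forward (-R) run from the remote PC and a local forward (-L) run from the home PC.

        import com.jcraft.jsch.JSch;
        import com.jcraft.jsch.Session;

        public class RdpTunnelSketch {
            // Run on the remote PC behind the firewall: expose its RDP port (3389)
            // on the Ubuntu server's loopback, also as port 3389.
            static void remoteSide() throws Exception {
                Session s = new JSch().getSession("user", "home-ubuntu-server.example", 22);
                s.setPassword("secret");
                s.setConfig("StrictHostKeyChecking", "no");
                s.connect();
                s.setPortForwardingR(3389, "127.0.0.1", 3389);
            }

            // Run on the home PC: forward local port 13389 to the server's loopback
            // port 3389, i.e. to the near end of the reverse tunnel above. The RDP
            // client on the home PC then connects to 127.0.0.1:13389.
            static void homeSide() throws Exception {
                Session s = new JSch().getSession("user", "home-ubuntu-server.example", 22);
                s.setPassword("secret");
                s.setConfig("StrictHostKeyChecking", "no");
                s.connect();
                s.setPortForwardingL(13389, "127.0.0.1", 3389);
            }

            public static void main(String[] args) throws Exception {
                if (args.length > 0 && args[0].equals("remote")) remoteSide(); else homeSide();
                Thread.sleep(Long.MAX_VALUE);   // keep the tunnel open
            }
        }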

    Read the article

  • How to create iso dvd image for use with Mac OS X's "Disk Utility"?

    - by adolf garlic
    This seemed way easier on my PC. I would just pop a blank DVD in the drive, it asked what I wanted to do with it, to which I would respond "burn DVD with Nero" (paraphrasing), then I would pick "new" and just drag and drop the folders in there. The Mac appears to have "Disk Utility", which just requires that I 'choose an image' but then doesn't bother to detail: how to do this, or what the options mean, e.g. the format "Mac OS Extended (Journaled)" - is that ever going to be readable on a non-Mac machine? I want to create an ISO standard DVD as per the 'default' you'd get in Nero. All the stuff on the web points to doing things with 'Terminal' (the whole point of buying a Mac was to get away from command line jiggery pokery - I'm trying to burn some photos, not land a friggin lunar module here!) Please, if you can just provide some simple instructions on what I need to achieve this, I'd be extremely grateful. Ta muchley in advance.

    Read the article

  • Is there any evidence that drugs can actually help programmers produce "better" code? [closed]

    - by sytycs
    I just read this quote from Steve Jobs: "Doing LSD was one of the two or three most important things I have done in my life." Also a quote from that article: He was hardly alone among computer scientists in his appreciation of hallucinogenics and their capacity to liberate human thought from the prison of the mind. Now I'm wondering if there's any evidence to support the theory that drugs can help make a "better" programmer. Has there ever been a study where programmers have been given drugs to see if they could produce "better" code? Is there a well-known programming concept or piece of code which originated from people who were on drugs? EDIT: So I did a little more research and it turns out Dennis R. Wier actually documented how he took LSD to wrap his head around a coding project: "At one point in the project I could not get an overall viewpoint for the operation of the entire system. It really was too much for my brain to keep all the subtle aspects and processing nuances clear so I could get a processing and design overview. After struggling with this problem for a few weeks, I decided to use a little acid to see if it would enable a breakthrough, because otherwise, I would not be able to complete the project and be certain of a consistent overall design"[1] There is also an interesting article on Wired about Kevin Herbert, who used LSD to solve tough technical problems, and chemist Kary Mullis even said "...that LSD had helped him develop the polymerase chain reaction that helps amplify specific DNA sequences." [2]

    Read the article

  • Keyboard lights sometimes stay on after PC is powered off. Why?

    - by Vilx-
    Sometimes when I power off my PC some lights on the keyboard stay on. It's pretty random which ones, though I suspect it has something to do with which lights were on just before the shutdown (if I carefully make sure that they are all off while the PC is shutting down, then they stay off). Not that this would change anything, but it is annoying. Why does this happen and how can I avoid it? Added: I know that I can turn off the power completely to my computer and it will go off. That's not the point. Actually, I even want my keyboard to have power, because I use it to turn on my computer (the power button on the case is difficult to reach). I just want the lights to go off. Or at least understand why this happens. Oh, and it's a PS/2 keyboard.

    Read the article

  • STOP: c000021a {Fatal System Error} The initial session process or system process terminated unexpectedly

    - by christof
    I'm encountering the following error after expanding the disk space on a virtual machine using Hyper-V: STOP: c000021a {Fatal System Error} The initial session process or system process terminated unexpectedly with a status of (0x00000000) (0xc000012d 0x001003f0). The virtual server runs Windows Server 2008 R2 Enterprise Edition and is also a Domain Controller. I've tried to repair Windows, but there is no restore point. Using the command line, I've tried the sfc /SCANNOW /OFFBOOTDIR /OFFWINDIR command, but I got the error "Windows Resource Protection could not perform the requested operation."

    Read the article

  • Getting MTP to work with a Galaxy tab 2 7.0?

    - by Wouter
    I'm trying to get MTP with the Galaxy Tab 2 7.0 working on my Ubuntu installation, such that I can access the files. I tried to do what is described here: http://www.omgubuntu.co.uk/2011/12/how-to-connect-your-android-ice-cream-sandwich-phone-to-ubuntu-for-file-access I however fail at executing one of the following commands:
        mtp-detect | grep idVendor
        mtp-detect | grep idProduct
    This fails:
        [20:42|0] $ mtp-detect | grep idVender
        Device 0 (VID=04e8 and PID=6860) is a Samsung GT-P7310/P7510/N7000/I9100/Galaxy Tab 7.7/10.1/S2/Nexus/Note.
        PTP_ERROR_IO: failed to open session, trying again after resetting USB interface
        LIBMTP libusb: Attempt to reset device
        LIBMTP PANIC: failed to open session on second attempt
        Unable to open raw device 0
        [20:44|0] $ mtp-detect | grep idProduct
        Device 0 (VID=04e8 and PID=6860) is a Samsung GT-P7310/P7510/N7000/I9100/Galaxy Tab 7.7/10.1/S2/Nexus/Note.
        PTP_ERROR_IO: failed to open session, trying again after resetting USB interface
        LIBMTP libusb: Attempt to reset device
        LIBMTP PANIC: failed to open session on second attempt
        Unable to open raw device 0
    Now my guess was that the idVendor is the same as the VID (04e8) and the idProduct is the same as the PID (6860). I continued to work with those values and completed the tutorial. When finished I tried android-connect, which returned:
        fuse: bad mount point `/media/GalaxyTab': Transport endpoint is not connected
    Does anybody have a clue what to do? I also want to note that when I connect my Galaxy Tab 2 7.0, I still get a pop-up from Ubuntu that a device was connected. I can also still see the folder structure; the problem however is that all the folders have 0 bytes and do not have any subfolders. I can only see the folders in the root. P.S. I also checked a similar question and tried what is described in this answer: http://askubuntu.com/a/88630/27480

    Read the article

  • High Availability Configuration using Heartbeat and Pacemaker

    - by pradeepchhetri
    I have the following setup: I have configured high availability between two load balancers (HAProxy) so that if HAProxy1 goes down, the floating IP gets transferred to the other load balancer, HAProxy2; hence all the clients will get the response from HAProxy2, which at the back-end is doing LB among the same two webservers. This is for removing the single point of failure in the case of only one HAProxy. Whenever I stop the heartbeat on HAProxy1, the floating IP goes to HAProxy2. But I want to configure it such that whenever the haproxy process goes down, the floating IP should get assigned to HAProxy2. Can someone tell me how to implement this?

    Read the article

  • Sharing an external hard drive in Ubuntu using Samba

    - by cambraca
    /media/MYDISK is where my hard drive is mounted automatically. I created a symlink using:
        ln -s /media/MYDISK /home/camilo/MYDISK
        chmod 777 /home/camilo/MYDISK
    I'm setting up smb.conf like this:
        [myshare1]
        comment = external disk
        browsable = yes
        path = /home/camilo/MYDISK
        guest ok = yes
        read only = no
        create mask = 0775
    Also, in the [global] section I tried adding the following lines:
        follow symlinks = yes
        wide links = yes
        unix extensions = no
    The problem is that when browsing the shared folder in Windows 7, I get a "\\etc\myshare1 is not accessible" error. When pointing the path to a regular folder it works fine. Also, when I point it directly to /media/MYDISK, it shows the same error. EDIT: to make it more interesting, I have no graphical interface, so I need to touch the config files directly.

    Read the article

  • Asset and Work Management in Utilities: An Integrated Enterprise Success Story

    - by stephen.slade(at)oracle.com
    Jan 11 '11 Webcast: Utilities are turning to Oracle to deliver an integrated EAM platform that manages all of their assets, from fleet to facilities and distribution to generation. Hear from solutions experts and from Sunflower Electric Power Corporation about how an integrated enterprise asset and work management system helped them deliver bottom-line results. Do you have different work management systems for generation, distribution, and transmission? Fleet maintenance? Facilities? Are you on the latest release of these products? Have you considered your options when the product is no longer supported? Do you struggle with integration and keeping the various systems "in balance"? Do you have trouble retrieving data from these disparate systems and getting an enterprise view of asset and work management operations? Utilities are challenged to better manage information on generation, transmission and distribution assets. Point solutions for Enterprise Asset Management (EAM) and Computerized Maintenance Management Systems (CMMS) are often effective as departmental solutions but have limited ability to deliver an enterprise solution with accessible business intelligence.
    Date: January 11, 2011 @ 10am PT / 1pm ET
    EVITE: http://www.oracle.com/us/dm/h2fy11/63025-wwmk10040611mpp054c003-se-197386.html
    Register: HERE

    Read the article

  • Is mocking for unit testing appropriate in this scenario?

    - by Vinoth Kumar
    I have written around 20 methods in Java and all of them call some web services. None of these web services are available yet. To carry on with the server-side coding, I hard-coded the results that the web service is expected to give. Can we unit test these methods? As far as I know, unit testing is mocking the input values and seeing how the program responds. Is mocking both input and output values meaningful? Edit: The answers here suggest I should be writing unit test cases. Now, how can I write them without modifying the existing code? Consider the following sample (hypothetical) code:
        public int getAge() {
            Service s = locate("ageservice"); // line 1
            int age = s.execute(empId);       // line 2
            return age;                       // line 3
        }
    Now how do we mock the output? Right now, I am commenting out line 1 and replacing line 2 with int age = 50. Is this right? Can anyone point me to the right way of doing it?
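
    One common answer to the question above, sketched below with made-up class names (this is not the asker's real code): instead of commenting out line 1, put the service lookup behind a method the test can override, and have the stub return the canned value. A mocking framework such as Mockito automates the same substitution, but a hand-rolled stub shows the idea without extra dependencies.

        import static org.junit.Assert.assertEquals;
        import org.junit.Test;

        public class EmployeeAgeTest {

            // Stand-ins for the real web-service types, which are not available yet.
            interface Service {
                int execute(int empId);
            }

            static class EmployeeInfo {
                private final int empId;
                EmployeeInfo(int empId) { this.empId = empId; }

                public int getAge() {
                    Service s = locate("ageservice");   // line 1, unchanged in spirit
                    return s.execute(empId);            // lines 2-3
                }

                // The only change to the production code: the lookup lives in a
                // method that a test can override.
                protected Service locate(String name) {
                    throw new IllegalStateException("real service lookup not wired up yet");
                }
            }

            @Test
            public void getAgeReturnsWhateverTheServiceReports() {
                EmployeeInfo info = new EmployeeInfo(7) {
                    @Override
                    protected Service locate(String name) {
                        return id -> 50;                // stubbed web service: always reports 50
                    }
                };
                assertEquals(50, info.getAge());
            }
        }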

    Read the article
