Search Results

Search found 19958 results on 799 pages for 'bit fiddling'.


  • Configuring Apache for multiple clients

    - by Chris_K
    Last week I had a question here about suexec / suphp but I tried to accomplish too much. I'm going to narrow the scope a bit and try again. I'd like to configure a LAMP server to host multiple clients. I'd like it to seem (from the client's viewpoint) just like any other shared hosting environment. Web sites in their home directory, no need to muck around with file ownerships to get pages served, etc. It would seem that a configuration that involves suexec and suphp is the way to go(?) I'm specifically looking for a current/modern guide on how to accomplish this (I'll be using CentOS if it matters) and I'm afraid I need more than a link to Apache docs. Are there any good How-To's out there? The few I've found have been pretty out of date, but it is quite possible my search was weak.
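
    For illustration, the suexec/suPHP approach typically ends up with a per-client vhost like the sketch below (a minimal sketch, not from the original question: directive names come from mod_suexec/mod_suphp, user names and paths are placeholders, and suPHP_UserGroup is only available in certain setid modes suPHP can be compiled with):

      # httpd.conf -- global PHP-through-suPHP wiring
      LoadModule suphp_module modules/mod_suphp.so
      suPHP_Engine on
      AddHandler x-httpd-php .php
      suPHP_AddHandler x-httpd-php

      <VirtualHost *:80>
          ServerName www.client1.example
          DocumentRoot /home/client1/public_html
          SuexecUserGroup client1 client1   # CGI runs as the client user (mod_suexec)
          suPHP_UserGroup client1 client1   # PHP runs as the client user (mod_suphp)
      </VirtualHost>

    With something like that in place, files under /home/client1/public_html can simply be owned by client1 and are served/executed as that user, which is the "no mucking with ownerships" behaviour described above.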

    Read the article

  • Upgrade Talks at OpenWorld Beijing: December 13-16, 2010

    - by [email protected]
    Mike may be done traveling for a while, but I have more than a bit of travel coming up. Next week I will be delivering four talks at OpenWorld Beijing 2010. I'm looking forward to returning to Beijing. Last time Mike and I saw the usual tourist sites and plenty of interesting food. One place to which I will definitely try to return this time is Da Dong Duck, a wonderful restaurant for (what else?) Peking Duck. Oh yes, my talks, I almost forgot :-). Here are the details:

    Session Title: The Most Common Upgrade Mistakes (and How to Avoid Them)
    Session ID: 1716 | Schedule: 12/15/10, 10:45 - 11:30 | Location: Room 506 AB

    Session Title: Get the Best out of Oracle Data Pump Functionality
    Session ID: 1376 | Schedule: 12/16/10, 16:30 - 17:15 | Location: Room 311 A

    Session Title: What Do I Really Need to Know When Upgrading?
    Session ID: 1412 | Schedule: 12/16/10, 14:30 - 15:15 | Location: Room 308

    Session Title: Patching, Upgrades, and Certifications: A Guide for DBAs
    Session ID: 1723 | Schedule: 12/16/10, 11:45 - 12:30 | Location: Room 506 AB

    We will also have a demo booth to talk about upgrading to Oracle Database 11g Release 2. So, if you'll be attending OpenWorld Beijing 2010, please stop by one of my talks or the demo booth!

    Read the article

  • Adding arbitrary search URLs to Firefox search bar

    - by Matthew
    New-ish versions of Firefox (I'm currently on 3.6) have the nifty "search bookmark" feature, which allows you to create searches in the location bar with custom URLs, e.g. en.wikipedia.org/wiki/%s. This is really great, but when trying to manage the engines in the search bar, I was dismayed at the lack of customisability there. It looks like the two search methods are entirely distinct. Is there a way to put custom URLs in my search bar, or do I have to just hope that whatever I want is on the long but finite list of plugins at mycroft? Thanks.

    UPDATE: I've done a bit more research and posted my own answer.
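
    For reference, the search-bar engines are just XML files in the profile's searchplugins directory, so a custom URL can be added by hand with a small file like this sketch (MozSearch plugin format as used by Firefox 3.x; the file name and example URL here are mine, not from the original question):

      <SearchPlugin xmlns="http://www.mozilla.org/2006/browser/search/">
        <ShortName>Wikipedia (custom)</ShortName>
        <Description>Search English Wikipedia</Description>
        <InputEncoding>UTF-8</InputEncoding>
        <Url type="text/html" method="GET" template="http://en.wikipedia.org/wiki/Special:Search">
          <Param name="search" value="{searchTerms}"/>
        </Url>
      </SearchPlugin>

    Save it as, say, wikipedia-custom.xml in the profile's searchplugins folder and restart Firefox for it to show up in the engine list.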

    Read the article

  • Upgraded from fc10 to fc12 now I have eth0_rename, how do I get back to plain old eth0?

    - by shank
    I upgraded from Fedora 10 to Fedora 12. Unfortunately, my ethernet interface eth0 is now named eth0_rename. I'd like to get back to having it named plain old eth0. I googled a bit but the solution of removing the eth0 entry from /etc/udev/rules.d/70-persistent-net.rules seems to have no effect (I restarted the network service but didn't reboot). The interface works just fine although I could see a script or two having a problem with the format. So, it's more of an inconvenience thing than anything else. Any ideas? Thanks.
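
    For reference, entries in 70-persistent-net.rules look something like the sketch below (the MAC address is a placeholder). Note that these rules are only evaluated when the kernel registers the device, so edits typically need a reboot (or a driver reload) to take effect; restarting the network service alone is not enough, which may be why the change appeared to do nothing:

      # example line from /etc/udev/rules.d/70-persistent-net.rules
      SUBSYSTEM=="net", ACTION=="add", DRIVERS=="?*", ATTR{address}=="00:11:22:33:44:55", ATTR{type}=="1", KERNEL=="eth*", NAME="eth0"

    Changing the NAME value on the eth0_rename line back to "eth0" (and removing any conflicting eth0 line), then rebooting, is the usual fix.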

    Read the article

  • How to balance a non-symmetric "extension" based game?

    - by Klaim
    Most strategy games have fixed units and possible behaviours. However, think of a game like Magic: The Gathering: each card is a set of rules. Regularly, new sets of card types are created. I remember that the first editions of the game were said to be prohibited in official tournaments because the cards were often too powerful. Later extensions of the game provided more subtle effects/rules in cards, and they managed to balance the game apparently effectively, even though there are thousands of different cards possible. I'm working on a strategy game that is in a similar position: all units are provided by extensions, and the game is intended to be extended for some years, at least. The variety of unit effects is very large, even with some basic design limitations set to make sure it stays manageable. Each player chooses a set of units to play with (defining their global strategy) before playing (like choosing a themed deck of Magic cards). As it's a strategy game (you can think of Magic as a strategy game too, from some points of view), it's essentially skirmish-based, so the game has to be fair even if the players don't choose the same units before starting to play. So, how do you proceed to balance this type of non-symmetric (strategy) game when you know it will always be extended? For the moment I'm trying to apply these rules, but I'm not sure they're right because I don't have enough design experience to know: each unit should provide one unique effect; each unit should have an opposite unit whose effect cancels it out; some limitations based on the gameplay; get a lot of beta testing before each extension release. Does it look like I'm in the most complex case?

    Read the article

  • HP Envy 14, Ubuntu 10.10 and trouble with the graphics cards

    - by Carsten Gehling
    A few days ago I bought an HP Envy 14, containing two graphics cards: an integrated Intel card and an ATI HD 5650. I've installed Ubuntu 10.10 32-bit on the machine. Most things work fine out of the box, but the graphics cards are giving me trouble. When booting, I get the message "failed to get i915 symbols, graphics turbo disabled". Then the screen blanks out for the remainder of the boot. I am able to get the display working by changing to one of the consoles, then closing and opening the laptop's lid. It seems that Ubuntu gets confused about which card to use. I've read here: http://www.andreas-demmer.de/en/2010/07/18/testbericht-linux-auf-dem-hp-envy-14 that I should be able to turn off one of the cards by echoing keywords into /sys/kernel/debug/vgaswitcheroo/switch, but that path is not available on my system. The BIOS does not have any option to switch off the ATI card. Help, anyone? /Carsten
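
    A likely reason the vgaswitcheroo path is missing: it lives in debugfs, which is not always mounted. A sketch of the usual procedure (assuming the kernel and the open-source graphics drivers were built with vga_switcheroo support):

      sudo mount -t debugfs none /sys/kernel/debug
      cat /sys/kernel/debug/vgaswitcheroo/switch        # lists the cards; '+' marks the active one
      echo OFF | sudo tee /sys/kernel/debug/vgaswitcheroo/switch   # powers down the inactive card

    If the vgaswitcheroo directory still does not appear after mounting debugfs, the running kernel/driver combination likely lacks switcheroo support, which would explain the missing path.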

    Read the article

  • Problems installing Lync on a non-domain controller

    - by Trikks
    I have two servers in this setup: AD and EX; the domain is called mydomain.net.

    AD is a Windows Server 2008 (32-bit) machine with Active Directory installed. AD's only DNS server entry is its own IP, and ad.mydomain.net resolves correctly in DNS. EX is a Windows Server 2008 R2 machine joined to the mydomain.net domain; its only DNS server is the IP of ad.mydomain.net. There are no firewalls running between the two servers.

    When trying to install Lync 2010 on the EX server I get the following error: "Not available: Failure occurred attempting to check the schema state. Please ensure Active Directory is reachable." I can reach AD from EX, log in to it, and run successful checks like netdom query /domain:mydomain.net fsmo, which resolves correctly. I suspect there is something fundamentally wrong with my setup - maybe Lync needs a 2008 R2 AD?
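
    One quick way to test whether the schema partition is actually reachable from EX is to try exporting it (a diagnostic sketch, not from the original post; run in an elevated prompt on EX, adjusting the DN to the real forest):

      ldifde -f schema.ldf -d "CN=Schema,CN=Configuration,DC=mydomain,DC=net"

    If that fails, the Lync installer's schema check will presumably fail for the same underlying reason (DNS, permissions, or connectivity to the schema master).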

    Read the article

  • Openfiler RAID 10 option not found

    - by chrisling106
    Hi, I'm building a NAS using Openfiler 2.3 (from the 32-bit ISO). First I want to experiment on a VM before going out and buying the hard drives needed. I created 5 virtual drives in VMware: sda is 2GB and the rest are 1GB each (sdb to sde). I left sda blank and want to set up a RAID 10 array using sdb, sdc, sdd and sde. The 4 RAID partitions are set up successfully, but when I try to create a RAID device, the only options for RAID level are 1, 0, 5 and 6. RAID 10 is not there! Can someone let me know what I have missed, please? TIA.
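
    If the Openfiler UI never offers level 10, one workaround (a sketch, assuming shell access to the Openfiler box and that its mdadm supports RAID 10) is to build the array by hand, either natively or as nested RAID 1+0, and let Openfiler use the resulting device:

      # native mdadm RAID 10 across the four partitions
      mdadm --create /dev/md0 --level=10 --raid-devices=4 /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1

      # or nested: two RAID 1 mirrors striped together with RAID 0
      mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sdb1 /dev/sdc1
      mdadm --create /dev/md2 --level=1 --raid-devices=2 /dev/sdd1 /dev/sde1
      mdadm --create /dev/md0 --level=0 --raid-devices=2 /dev/md1 /dev/md2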

    Read the article

  • Ranking hit after WP site migration

    - by Ben
    I migrated my site from its old domain over a month ago. I followed WMT completely, including 301 redirects from every existing URL to the new domain, and then submitted a change of address. Traffic continued as normal, but a few days after submitting the change of address, traffic plummeted to about 20-30% of what it was previously. Most of my traffic comes from organic search, and I can see that the keywords I had targeted before, and performed well with, are now ranking much, much lower. In some cases, for low-competition keywords, I've only lost a few places; for higher-competition terms I have really suffered. This has started to pick up a bit (for one of my keywords I have risen from 195 to 100 in the last week), but it seems to be a very slow process. How seamless is this process normally? I was under the impression that this would not affect my rankings too severely, but it has now been a month since the move and recovery seems to be very slow, if happening at all. Is it likely that I've missed something? The only change is that I have moved what was the home page to be more of a sub-page, and now in its place is a magazine-style home page. I understand that links to the old site will now be pointing to the latter, which means that rankings for some keywords attributed to the old home page will take a hit, but even on other pages that seem to fit into exactly the same page structure as the previous site, I have seen a drop in rankings. Any help would be greatly appreciated. Thanks!
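
    For context, the kind of whole-domain 301 described above is typically done with a mod_rewrite rule on the old domain, along these lines (a sketch; the domain names are placeholders, not the poster's sites):

      # .htaccess on the old domain
      RewriteEngine On
      RewriteCond %{HTTP_HOST} ^(www\.)?old-domain\.com$ [NC]
      RewriteRule ^(.*)$ http://www.new-domain.com/$1 [R=301,L]

    The path-preserving $1 matters: redirecting every old URL to the new home page instead of to its equivalent page is a common cause of exactly this kind of ranking drop.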

    Read the article

  • 12.10 visual performance using nvidia driver

    - by user100485
    My fresh Ubuntu 12.10 install is slow - nothing extreme, but dragging windows, switching workspaces and things like that are just slow and look horrible. It feels like the FPS is dropping in a game. Doing some Photoshop work in Windows was even a relief! The effect gets worse if I connect my external monitor. My system is an Intel Pentium dual-core T4500 with 4GB memory and a GeForce 8200M G/integrated/SSE2 graphics chip (MSI CR500 laptop). Nothing fancy, but it should be able to run OK. My "experience" in Ubuntu is set to standard. I've installed the NVIDIA drivers, tried both current and experimental, and the experimental drivers seem to perform a bit better, but it's still bad overall. I set the mode to adaptive in the nvidia-settings tool and it goes to the maximum setting directly and doesn't come back down. Using htop I found that compiz and the X server always use a few percent of my CPU, more than I think they should; the time consumed is 5:18 for compiz, 4:33 for /usr/bin/X and 2:41 for Google Chrome (about 30 tabs open, so not too strange, I think). What can I do to increase the visual performance? This makes me not want to use Ubuntu in public!

    Read the article

  • Emulator PCSX Reloaded - Fullscreen not working on Unity

    - by Leonardo Montenegro
    I have an older PS1 console with a couple of games I bought some years ago. On my PC I'm using PCSX Reloaded - the best PS1 emulator for Linux I've found so far. But I'm having a little issue on Ubuntu 12.04 Precise. I'm using Unity 3D and trying to run some of my original PS1 games on PCSX Reloaded. Everything works nicely, except for fullscreen. I toggle fullscreen and specify the maximum resolution for my monitor, but in fullscreen mode, neither the left nor the top Unity bar gets hidden. I tried changing between other graphics modes like GNOME Classic and GNOME Classic without effects. In both, PCSX shows the bars in fullscreen mode, so it isn't a Unity-specific issue but an emulator problem. It's a bit annoying to play games this way, so basically I'm running games in windowed mode for now. I'm using the default OpenGL graphics plugin in this emulator. I tried changing to the X11 graphics plugin and fullscreen worked, but graphics with the X11 plugin aren't as good as with the OpenGL one. Does anyone know a way to get fullscreen working in PCSX using the OpenGL plugin? Or maybe another graphics plugin with OpenGL support?

    Read the article

  • Pointer position way off in Java application menus when using gnome-shell

    - by Hailwood
    When using any Java application in gnome-shell, if the window is maximised the pointer position is way off - but only in the menus; in the editor or the side panel the pointer is fine. This only presents itself when the window is maximized, and it seems that the further away from 0x0 the window is when you maximise it, the bigger the pointer offset. From what I have gathered, it has to do with the window not updating its size when it gets maximised. The other issue is that when a gnome-shell notification appears, on clicking it I lose the ability to type in the editor; I can select text etc., but can't give it focus to type. I must bring up some other text input (e.g. right-click on a file on the left and select rename, which brings up a rename dialog); after that I can type in the editor again. So, how can I fix this? Below is as much information as I can think to provide:

    $ gnome-shell --version
    GNOME Shell 3.6.1

    $ java -version
    java version "1.7.0_09"
    Java(TM) SE Runtime Environment (build 1.7.0_09-b05)
    Java HotSpot(TM) 64-Bit Server VM (build 23.5-b02, mixed mode)

    $ file /etc/alternatives/java /etc/alternatives/javac
    /etc/alternatives/java: symbolic link to '/usr/lib/jvm/java-7-oracle/jre/bin/java'
    /etc/alternatives/javac: symbolic link to '/usr/lib/jvm/java-7-oracle/bin/javac'

    Read the article

  • Adding a CMS to an existing Magento shop

    - by user6341
    I am working on a project for 3 niche stores built on Magento (using Magento's multi-store function) that each get roughly 50k unique visitors a day. The sites don't currently have a blog, forum or any social-networking aspects. I would like to add a CMS to each site that can be centrally run, and would like it to take over the front-end content from Magento. I'd also like to maintain an online blog/publication of sorts with videos, articles and the like, with editing privileges given to a dozen or so people at different levels. I want to add a fairly robust forum to each site and possibly add some social-networking aspects down the road, so extensibility and the available plugins/mods in each CMS are important. Other than shared login between the forums, blog/publication and store, I would like to be able to integrate some content from the forums and blog/publication into the store as well. After researching this a bit, I am inclined towards Drupal, but I haven't found any modules to integrate it with Magento. Also, since the blog content will be written by about a dozen non-technical people, I want something that is very easy to work with. Lastly, since the site gets a good amount of traffic, speed and security are very important. What CMS would you recommend in this context? I'm deciding between Drupal, Wordpress and Plone. Thanks.

    Read the article

  • Server room kit?

    - by Bill Weiss
    I feel like this is a question I've seen on here before, but some searching didn't do me any good. This looks similar, but I'm looking for stuff I leave there, not what's in my go-bag. What would you say is indispensable equipment in your server room? I've inherited one that's a bit light on stuff (except for servers, those are in there). We're in the single digits of racks, if that matters. I'm thinking of things like: Cable labeler Ethernet tester (copper at least, fibre if you need) ... ? Community wiki, because, really. [Edit] I suppose it's important to say that it's a colo facility, kind of far from the office. No food, water, etc. :(

    Read the article

  • Data Aggregation of CSV files java

    - by royB
    I have k CSV files (5 CSV files, for example); each file has m fields which produce a key and n values. I need to produce a single CSV file with the aggregated data. I'm looking for the most efficient solution to this problem, speed mainly. I don't think, by the way, that we will have memory issues. I would also like to know whether hashing is really a good solution, because we would have to use a 64-bit hash to reduce the chance of a collision to less than 1% (we have around 30,000,000 rows per aggregation). For example:

    file 1:
      f1,f2,f3,v1,v2,v3,v4
      a1,b1,c1,50,60,70,80
      a3,b2,c4,60,60,80,90

    file 2:
      f1,f2,f3,v1,v2,v3,v4
      a1,b1,c1,30,50,90,40
      a3,b2,c4,30,70,50,90

    result:
      f1,f2,f3,v1,v2,v3,v4
      a1,b1,c1,80,110,160,120
      a3,b2,c4,90,130,130,180

    Algorithms we have considered so far: hashing (using a ConcurrentHashMap); merge-sorting the files; a DB (MySQL, Hadoop or Redis). The solution needs to handle a huge amount of data (each file more than two million rows). A better example:

    file 1:
      country,city,peopleNum
      england,london,1000000
      england,coventry,500000

    file 2:
      country,city,peopleNum
      england,london,500000
      england,coventry,500000
      england,manchester,500000

    merged file:
      country,city,peopleNum
      england,london,1500000
      england,coventry,1000000
      england,manchester,500000

    The key is country,city. This is just an example; my real key is of size 6 and the data columns are of size 8 - 14 columns in total. We would like the solution to be the fastest possible in terms of data processing.
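
    Since a map of running sums for ~30M rows fits comfortably in memory, a plain map keyed on the exact concatenated key sidesteps the 64-bit-hash collision question entirely: the map compares full keys, so collisions are resolved exactly. A minimal sketch in Java (file names come from the command line; the comma delimiter, absence of quoting, and the key-column count are assumptions based on the examples above):

      import java.io.BufferedReader;
      import java.io.FileReader;
      import java.io.IOException;
      import java.util.Arrays;
      import java.util.LinkedHashMap;
      import java.util.Map;

      public class CsvAggregator {
          static final int KEY_COLS = 2; // country,city in the example; 6 for the real data

          public static void main(String[] args) throws IOException {
              // full composite key -> running sums; LinkedHashMap keeps first-seen row order
              Map<String, long[]> totals = new LinkedHashMap<>();
              String header = null;

              for (String file : args) {
                  try (BufferedReader in = new BufferedReader(new FileReader(file))) {
                      String line = in.readLine(); // header row
                      if (header == null) header = line;
                      while ((line = in.readLine()) != null) {
                          String[] cols = line.split(",");
                          // Rebuild the composite key from the first KEY_COLS columns.
                          String key = String.join(",", Arrays.copyOfRange(cols, 0, KEY_COLS));
                          long[] sums = totals.computeIfAbsent(key,
                                  k -> new long[cols.length - KEY_COLS]);
                          for (int i = KEY_COLS; i < cols.length; i++) {
                              sums[i - KEY_COLS] += Long.parseLong(cols[i]);
                          }
                      }
                  }
              }

              // Emit the merged CSV.
              System.out.println(header);
              for (Map.Entry<String, long[]> e : totals.entrySet()) {
                  StringBuilder row = new StringBuilder(e.getKey());
                  for (long v : e.getValue()) row.append(',').append(v);
                  System.out.println(row);
              }
          }
      }

    Run as e.g. java CsvAggregator file1.csv file2.csv > merged.csv. Merge-sorting the files only starts to win once the distinct-key set no longer fits in RAM.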

    Read the article

  • GoogleTest C++ Framework Incompatible With XCode 4.5.2 and OSX 10.8.2

    - by eb80
    I am trying to follow the instructions mentioned here for setting up Google's C++ framework in Xcode Version 4.5.2 (4G2008a). First, I got "The run destination My Mac 64-bit is not valid for Running the scheme 'gtest-framework'". The answers here (Xcode 4 - The selected run destination is not valid for this action) are not working for me. I was able to change the build SDK following the instructions here (Unable to build using Xcode 4 - The selected run destination is not valid for this action), except this resulted in many build failures such as "Unsupported compiler '4.0' selected for architecture 'x86_64'" and "Unsupported compiler '4.0' selected for architecture 'i386'". I've changed nothing out of the box, so it is very frustrating that I cannot seem to get this to build.

    Machine details: 64-bit Mac OS X 10.8.2, Build 12C3006
    Xcode details: Version 4.5.2 (4G2008a)

    Read the article

  • Agile Testing Days 2012 – Day 2 – Learn through disagreement

    - by Chris George
    I think I was in the right place! During Day 1 I kept on reading tweets about Lean Coffee that had happened earlier that morning. It intrigued me, and I figured in for a penny, in for a pound, and set my alarm for 6:45am. Following the award night the night before, it was _really_ hard getting up when it went off, but I did, and after a very early breakfast set off for the 10 min walk to the Dorint. With Lean Coffee due to start at 07:30, I arrived at the hotel and made my way to one of the hotel bars. I soon realised I was in the right place as, although the bar was empty, there was a table with post-its and pens! This MUST be the place!

    The premise of Lean Coffee is to have several small timeboxed discussions. Everyone writes down what they would like to discuss on post-its, which are then briefly explained and submitted to the pile. Once everyone is done, the group dot-votes on the topics. The topics are then sorted by the dot-vote counts and the discussions begin. Each discussion had 8 mins to start with, which prevented the discussions from getting too far off topic. After the time elapsed, the group voted on whether to extend the discussion by a further 4 mins or move on. Several discussions were had around training, soft skills etc. The conversations were really interesting and there were quite a few good ideas. Overall it was a very enjoyable experience, certainly worth the early start!

    Make Melly Happy

    Following Lean Coffee was real coffee, and much needed that was! The first keynote of the day was "Let's help Melly (Changing Work into Life)" by Jurgen Appelo.

    [Image: Draw lines to track happiness]

    This was a very interesting presentation, and set the day nicely. The theme of the keynote was that projects are about the people, more so than the actual tasks. He started by showing a photo of an employee, 'Melly', who looked happy enough. He then stated that she looked happy but actually hated her job; in fact 50% of Americans hate their jobs, and he went on to say that, the world over, 50% of people hate their jobs. Jurgen talked about many ways to reduce the feedback cycle, not only of the project, but of the people management: ideas such as happiness doors, happiness tracking (drawing lines on a wall indicating your happiness for that day), and kudo boxes (to compliment a colleague for good work). All of these (and more) ideas stimulate conversation amongst the team and lead to early detection of issues and investigation of solutions. I've massively simplified Jurgen's keynote and have certainly not done it justice, so I will post a link to the video once it's available.

    Following more coffee, the next talk was "How releasing faster changes testing" by Alexander Schwartz. This is a topic very close to our hearts at the moment, so I was eager to find out any juicy morsels that could help us achieve more frequent releases, and Alex did not disappoint. He started off by confirming something that I have been a firm believer in for a number of years now: adding more people can do more harm than good when trying to release. This is for a number of reasons, but just adding new people to a team at such a critical time can be more of a drain on resources than a gain. The alternative is to have the whole team share responsibility for faster delivery, so the whole team is responsible for quality and testing. Obviously you will have the test engineers on the project who have the specialist skills, but there is no reason that the entire team cannot do exploratory testing on the product. This links nicely with the Developer Exploratory testing presented by Sigge on Day 1, and is certainly something that my team are really striving towards.

    Focus on cycle time: what can be done to reduce the time between dev cycles and release cycles? What stops a release? What delays a release? All good, solid questions that can be answered. Alex suggested that perhaps the product doesn't need to be fully tested; doing less testing will reduce the cycle time and therefore get the release out faster. He suggested a risk-based approach to planning what testing needs to happen. Reducing testing could have an impact on revenue if it causes harm to customers, so test the 'right stuff'! Determine a set of tests that are 'face saving' or 'smoke' tests: tests that cover the core functionality of the product and aim to prevent major embarrassment if those areas were to fail! Amongst many other very good points, Alex suggested that a good approach would be to release after every new feature is added. So do a bit of work -> release, do some more work -> release. By releasing small increments of work, the impact on the customer of bugs being introduced is reduced.

    Red Pill, Blue Pill

    The second keynote of the day was "Adaptation and improvisation – but your weakness is not your technique" by Markus Gartner, and it proved to be another very good presentation. It started off quoting lines from The Matrix which relate to adapting, improvising, realisation and mastery. It had a lot of nerds in the room smiling! Markus went on to explain how through deliberate practice (and a lot of it!) you can achieve mastery, but then you never stop learning. Through methods such as code retreats, testing dojos and workshops you can continually improve and learn. The code retreat idea was one that interested me: it involves pairing to write an automated test for, say, 45 mins, then deleting all the code, finding a different partner and writing the same test again! This is another keynote where the video will speak louder than anything I can write here! Markus did elaborate on something that Lisa and Janet had touched on yesterday whilst busting the myth that "Testers Must Code". Whilst it is true that to be a tester you don't need to code, it is becoming more common that there is a crossover happening where more testers are coding and more programmers are testing. Markus made a special distinction between programmers and developers, as testers develop test code, so this helped to make that clear.

    "Extending Continuous Integration and TDD with Continuous Testing" by Jason Ayers was my next talk after lunch. We already do CI and a bit of TDD on my project team, so I was interested to see what this continuous testing thing was all about and whether it would actually work for us. At the start of the presentation I was of the opinion that it just would not work for us, because our tests are too slow, and that would be the case for many people. Jason started off by setting the scene and saying that those doing TDD spend between 10-15% of their time waiting for tests to run. This can be reduced by testing less often, but that then increases the risk of introduced bugs not being spotted quickly. Therefore, in comes Continuous Testing (CT). CT systems run your unit tests whenever you save some code, and run them in the background so you can continue working. This is a really nice idea, but to do this your tests must be fast, independent and reliable. The latter two should be the case anyway, and the first is ideal, but hard! Jason made several suggestions for making tests fast: firstly, keep the scope of the test small; secondly, spin off any expensive tests into a suite which is run, perhaps, overnight or outside of the CT system at any rate. So this started to change my mind: perhaps we could re-engineer our tests and continuously run the quick ones to give an element of coverage. This talk was very interesting and I've already tried a couple of the tools mentioned on our product (Mighty Moose and NCrunch). Sadly, due to the way our solution is built, it currently doesn't work, but we will look at whether we can make it work, because this has the potential to be a mini-game-changer for us.

    Using the wrong data

    [Image: Gojko's Hierarchy of Quality]

    The final keynote of the day was "Reinventing software quality" by Gojko Adzic. He opened the talk with the statement "We've got quality wrong because we are using the wrong data"! Gojko then went on to explain that we should judge a bug by whether the customer cares about it, not by whether we think it's important. Why spend time fixing issues that the customer just wouldn't care about, and release months later because of this? Surely it's better to release now and get customer feedback? This was another reference to the idea that it's better to build the right thing wrong than the wrong thing right. Get feedback early to make sure you're making the right thing. Gojko then showed something very analogous to Maslow's hierarchy of needs:

    Successful - does it contribute to the business?
    Useful - does it do what the user wants?
    Usable - does it do what it's supposed to without breaking?
    Performant/Secure - is it secure / is the performance acceptable?
    Deployable/Functionally OK - can it be deployed without breaking?

    He then explained that user stories should focus on change. In other words, they should focus on the user's needs, not the user's process. Describe what the change will be, how that change will happen, then measure it!

    Networking and Beer

    Following the day's closing keynote, there were drinks and nibbles for the 'Networking' evening. This was a great opportunity to talk to people. I find approaching strangers very uncomfortable, but once again, when in Rome! Pete Walen and I had a long conversation about only fixing issues that the customer cares about versus fixing issues that make you proud of your software! Without saying much, and asking the right questions, Pete made me re-evaluate my thoughts on the matter. Clever, very clever! Oh, and he 'bought' me a beer!

    My Takeaway Triple from Day 2:

    Release small and release often, to minimize issues creeping in and get faster feedback from 'the real world'.
    Focus on issues that the customers care about, not what we think is important.
    It's okay to disagree with someone, even if they are well-respected agile testing gurus; that's how discussion and learning happens!

    Read the article

  • What electronic user-story-mapping tools can you recommend?

    - by azheglov
    Agile software development relies heavily on a work item type called user stories. For example, you have a backlog full of user stories and you can select a few of them to work on during the next sprint. But where and how do you find user stories to put into the backlog? There is a popular technique for doing that called story mapping. Jeff Patton invented it and here is the definitive guide on how to do it. The question is, what electronic tools are out there that support Patton's story-mapping technique? I've done a bit of research, found Pivotal and Rally plug-ins (but I'm not a customer of either) and I'm currently experimenting with SilverStories. What other tools are out there? What have you used? What do you (not) recommend? Why? UPDATE: Some people who wrote comments seem to lean towards an answer that applying this technique is simply impossible with an electronic tool and we should just accept that. Can't someone write it up as an answer?

    Read the article

  • Focus follows mouse stops working when opening window from launcher and no click to focus

    - by user97600
    This is 12.04 with the default desktop (Unity). I set it to focus-follows-mouse and changed the menus to be on the window. This worked for a while; then some unknown event, maybe an upgrade, maybe some other settings change, caused it to stop working. There are many ways for this behavior to start, but one reliable one is to bring a window to the foreground/focus with the launcher. Now the focus is stuck on that window, and not just the window but the regions within the window, so the close, maximize and minimize buttons and the menus do not work. I have to use mouse middle and then mouse right, and then focus-follows-mouse is restored for a bit. The exact details of the mouse actions aren't clear; sometimes it seems like just mouse middle helps, sometimes just right, sometimes a desperate sequence of clicks :-( I have tried switching to the GNOME desktop, and it seems to occur less there, but it is not eliminated. I have tried switching mice to an old wired USB mouse. I have tried creating a new account, and that has not worked. I have observed "split focus", where the scroll button scrolls one window but the input goes to another. I got trapped recently where my keyboard input went to LibreOffice Calc while I was selecting the search term in the Chrome address bar: the selection "grayed", but the keyboard input for the search went to LibreOffice. Regions in windows have very confused focus. I have to work hard to get focus on, for example, the close glyph (X) or the minimize glyph (_).

    Read the article

  • Why does installing NVidia 9600GT graphics card, take 1GB of RAM away from Windows?

    - by Nick G
    Hi, I've changed graphics cards in my PC and now Windows 7 (32-bit) is reporting that I have a whole gigabyte less physical RAM. Why is this? Firstly, the machine has 4GB of physical RAM. The old card was an ATI 2600XT with 256MB and the new card is an NVidia 9600GT with 512MB. With the ATI card Windows sees 3326MB. With the NVidia card, Windows sees 2558MB. I realise that due to address-space restrictions I will not see all 4GB with 32-bit Windows, but why is there such a massive loss of RAM when simply changing cards (bearing in mind BOTH cards have their own RAM and borrow no main memory, unlike some onboard chipsets)? Would using 64-bit Windows solve this? Thanks, Nick.
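
    A quick check of the numbers (arithmetic added for clarity; the interpretation - PCI/MMIO address space reserved below 4GB - is the usual explanation on 32-bit Windows, not something stated in the original post):

      4096 MB - 3326 MB =  770 MB reserved with the 256 MB card
      4096 MB - 2558 MB = 1538 MB reserved with the 512 MB card

    The 768 MB difference suggests the new card's memory aperture claims more of the 32-bit address space than its 512 MB of VRAM alone, which is consistent with the symptom and with 64-bit Windows not exhibiting the same loss.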

    Read the article

  • How to check Early Z efficiency on AMD GPU with Windows 7

    - by Suma
    I have a game using DirectX 9 and a development station running Win 7 x64. I can still get access to another station with Vista x64, dual-booted with WinXP x86. I wanted to check early-Z efficiency in the game, and to my sadness all the tools I have tried seem unable to perform this task:

    AMD GPUPerfStudio 2 does not support DirectX 9 at all.

    AMD GPUPerfStudio 1.2 does not install correctly on Windows 7. When I tweaked the MSI package (a simple OS version-check adjustment was needed), it complained that the drivers I have do not provide the needed instrumentation. Drivers old enough to support GPUPerfStudio would most likely not work with my Radeon 5750 card (though I am not 100% sure of this; I did not attempt any older drivers, not knowing which I should look for).

    PIX does not seem to contain any counters like this. It offers some ATI-specific counters, but when I try to activate them, PIX reports "PIX encountered a problem while attaching to the target program."

    I do not want to upgrade to DX 10/11 just to be able to profile the game, but it seems that without that step I am locked into a toolset which is no longer supported. I see only one obvious option which would probably work: using the WinXP (or, with a little bit of luck, the Vista) station, perhaps with an older AMD card, to make sure GPUPerfStudio 1.2 works. Other than that, can anyone recommend other options for checking GPU HW counters (HiZ/EarlyZ in particular, but others as well would be a nice bonus) for a DirectX 9 game on Windows 7, preferably on an AMD GPU? (If that is not possible, I would definitely prefer switching GPU to switching OS, but before I do so I would like to know whether I will just hit the same problem with nVidia again.)

    Read the article

  • Mac claims to have connected to wireless network, but hasn't

    - by Mick
    I am attempting to connect a new Mac OS X 10.6.5 laptop to a wireless network (I am a Windows expert but a Mac novice). It used to connect without problems when I had the security set to "64-bit WEP". Now I have changed the security on my Belkin router to "WPA-PSK (no server)". I have two PCs and an old Mac connecting via the new security setting without problems. The problem on the new Mac is that the wireless icon indicates a good connection (5 dark bars) and the network name has a tick next to it in the wireless drop-down menu, but I cannot view any websites. I cannot even connect to the router by typing 192.168.2.1 into a browser address bar. Any ideas where I went wrong?

    Read the article

  • Ridiculously easy AJAX with ASP.NET MVC and jQuery

    - by eddraper
    After deciding I wanted to dive full-on into the world of ASP.NET MVC 2, I began doing some research into what would be the best way to support some of my required AJAX functionality on this platform. The result of these efforts was a barrage of options, many of which required completely different JScript infrastructure than what I planned to go forward with. As I've been delighted with jQuery so far, I began tossing out all approaches that didn't natively leverage it... Thus, I planned to resist the temptation to take any more <script> dependencies whatsoever, unless I thoroughly proved that jQuery could NOT do what I planned to do. Here's some code I wish I would've found early in my research. This would've saved me quite a bit of time and search engine bandwidth. ;-)

    <script type="text/javascript">
        $(document).ready(function () {
            // Load the action's response into the div as soon as the page is ready.
            $('#div_name_here').load('<%=Url.Action("ACTION_NAME_HERE","CONTROLLER_NAME_HERE")%>');

            // Clicking the link loads another action's response into a target div.
            $('#id_of_link_I_want_to_trigger_the_ajax_call').bind('click', function (event) {
                event.preventDefault(); // keep the link from navigating away
                $('#div_name_where_I_want_to_have_the_ajax_response_loaded_here').load('<%=Url.Action("ACTION_HERE","CONTROLLER_HERE")%>');
            });
        });
    </script>

    Read the article

  • 2 year degree plus experience vs 4 year degree

    - by CenterOrbit
    Alright, I have searched around a bit on this site and found two somewhat similar questions: "Computer Science Programming Certificate vs. Computer Science Degree?" and "Is it possible/likely to be paid fairly without a college degree?" But these do not provide an answer specifically to what I am seeking. I have my 2-year A.A.S. degree in computer programming, along with a networking certificate from a technical college. I have also been working at a small educational game development company for 3 years now in various positions, steadily moving up, and am now lead programmer on a few projects. Some of the senior programmers I work with claim that no matter how much experience I build up, it still will not mean as much as having a 4-year degree. Their argument is that most employers will pass over my resume because of the common '4-year minimum' requirement. I have also heard people state (not as many, though) that experience is everything, and that an employer would rather have someone who has worked in the field than a rookie fresh out of college. I have heard both sides of this argument, but am looking for a general consensus, or more arguments from both sides, from the people who have been there or are there.

    Read the article
