Search Results

Search found 1163 results on 47 pages for 'spangles forever'.

Page 30/47 | < Previous Page | 26 27 28 29 30 31 32 33 34 35 36 37  | Next Page >

  • Digital Agenda in the EU means open standards after all

    - by trond-arne.undheim
    European Commission Vice President Neelie Kroes' speech, "Openness at the heart of the EU Digital Agenda", at the Open Forum Europe 2010 Summit in Brussels refocuses the EU Digital Agenda on open standards. I say the speech scores 90/100: smooth, smart, a little vicious at the fringes, maybe? Anyway, it shows the strategy might age and implement well. This is Dutch pragmatism at its best. The EU Digital Agenda itself (I give it an 85/100 score), while laudable, stops short of using the term. The next step for the European Commission is defining the term "open standards". If they do that, and do it right, Vice President Kroes will go down in history as having made a significant contribution to global progress in e-government by possibly eradicating lock-in forever. Moreover, she will put Europe's SMEs in a better position to succeed in a global IT market filled with barriers to entry from players not fully understanding, using, or unpacking standards. Kroes' interesting suggestion that she will now explore a "legal proposal" on interoperability that would have an impact on all IT companies operating in the European market is more up for debate. An interoperability directive? One run by DG COMP, or one run by DG INFSO, telecom style? Would something like that work? Would the industry like it? Would it help European governments? Possibly, if done right. The good thing is that Kroes pointed out she will look for input from the industry. Kroes' track record is one of not being scared of taking on the Titans. She also wants to enact real, positive, lasting change. "I will not go anywhere", she said. All of that is good. And she does understand the importance of open standards. Let's now start discussing the details. Implementing the Digital Agenda is not simple. It requires collaboration across the various Directorates of the European Commission. A new interoperability directive has also never been attempted before. Getting it right is important. Even possibly finding out it cannot be done right, and choosing a more lightweight approach that is equally effective, would be bold. Go Kroes!

    Read the article

  • How to implement an email unsubscribe system for a site with many kinds of emails?

    - by Mike Liu
    I'm working on a website that sends many different types of email. Users have accounts, and when logged in they have access to a settings page they can use to customize which types of email they receive. However, I'd also like to give users an easy way to unsubscribe directly from the emails they receive. I've looked into List-Unsubscribe headers, as well as creating some type of one-click link that would unsubscribe a user from that type of email without requiring login or further action. The latter would probably require me to break convention and make changes to the database in response to a GET on the link. However, am I incorrect in thinking that either of these would require me to generate and permanently store a unique identifier in my database for every email I ever send, really complicating email delivery? Without that, I'm not sure how I would uniquely identify a user and a type of email in order to change their email preferences, and this identifier would need to be stored forever, since a user could have an email sitting in their inbox for a long time before they decide to act on it. Alternatively, I was considering a no-login page for managing email preferences. In contrast to the above, where I would need one identifier per email, this would only need one identifier per user, with no generation or other action required when sending an email. All of these raise security issues, since they could potentially be used by people to tamper with others' email preferences. This could be mitigated somewhat by ensuring that the identifier is really difficult to guess. For the once-per-user identifier approach, I was considering generating the identifier by passing the user's ID through some type of encryption algorithm; is this a sound approach? For the per-email identifiers, perhaps I could use the user's ID appended to the time. However, even this would not eliminate the problem entirely, as it is really just security through obscurity: anyone with the URL could tamper, and in the end the main defense would have to be that most people aren't so bored as to tamper with other people's email preferences. Are there any other alternatives I've missed, or issues or solutions with these that anyone can provide insight on? What are best practices in this area?
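
    One way to avoid storing an identifier per email is to make the token self-verifying: sign the user ID and email type with a server-side secret (an HMAC) and check the signature when the link is clicked. Below is a minimal sketch of that idea in Java; the class name, the payload format, and the hard-coded secret are illustrative assumptions, and a real version would keep the secret in configuration and probably add an expiry field or a revocation list.

    import javax.crypto.Mac;
    import javax.crypto.spec.SecretKeySpec;
    import java.nio.charset.StandardCharsets;
    import java.util.Base64;

    // Sketch: self-verifying unsubscribe tokens, so nothing per-email is stored.
    // The secret would live in server config, not in source code.
    public class UnsubscribeToken {
        private static final byte[] SECRET = "change-me-server-side-secret".getBytes(StandardCharsets.UTF_8);

        // Token payload is "userId:emailType"; the MAC proves this server issued it.
        public static String create(long userId, String emailType) throws Exception {
            String payload = userId + ":" + emailType;
            return Base64.getUrlEncoder().encodeToString(payload.getBytes(StandardCharsets.UTF_8))
                    + "." + Base64.getUrlEncoder().encodeToString(hmac(payload));
        }

        // Returns the payload if the signature checks out, null otherwise.
        public static String verify(String token) throws Exception {
            String[] parts = token.split("\\.");
            if (parts.length != 2) return null;
            String payload = new String(Base64.getUrlDecoder().decode(parts[0]), StandardCharsets.UTF_8);
            byte[] expected = hmac(payload);
            byte[] actual = Base64.getUrlDecoder().decode(parts[1]);
            // MessageDigest.isEqual compares in constant time, resisting timing attacks.
            return java.security.MessageDigest.isEqual(expected, actual) ? payload : null;
        }

        private static byte[] hmac(String payload) throws Exception {
            Mac mac = Mac.getInstance("HmacSHA256");
            mac.init(new SecretKeySpec(SECRET, "HmacSHA256"));
            return mac.doFinal(payload.getBytes(StandardCharsets.UTF_8));
        }
    }

    Because the token is derived rather than stored, an email can sit in an inbox indefinitely and the link still verifies. The trade-off is that individual tokens can't be revoked without rotating the secret or adding a denylist.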

    Read the article

  • Schedule compliance while keeping up technical support and resolving issues

    - by imays
    I am an entrepreneur running a small software company. I developed the flagship product myself, and the company has grown to 14 people. One point of pride is that we have never taken a loan or outside investment. The core development team is 5 people: 3 seniors and 2 juniors. Since the first release we have received many issues from our customers. Most of them are bug reports, customization requests, usage questions, and upgrade requests. Issues come in from customers many times every day, and each takes anywhere from a little to a lot of our developers' time. Because our product is a software development kit (SDK), most questions can only be answered by our developers, and resolving bug issues obviously requires them as well. Estimating the time to resolve a bug is hard; I fully understand that. However, our developers insist they cannot set a due date for any project because they are busy every day doing technical support and bug fixes driven by customer issues. (To be clear, they never work overtime.) I suggested an idea: divide the team into two parts, one focused on development against milestones, the other doing technical support and bug fixes without fixed deadlines. Then we could announce a release plan officially. After each release, the two parts would exchange roles for the next milestone. However, they say "NO, because it is impossible to share knowledge and design documents fully." They still say they cannot set a release date, and they ask me to keep the due dates flexible; they will not commit to a due date for any milestone. Fortunately, our company has no loans or investors, so we are not being choked. But I think it is a bad idea to keep this situation. I know the story of the ant and the grasshopper. Our customers are tired of waiting forever for our release dates. Companies run on limited time and money. If an endlessly flexible due date were acceptable, would a flexible payday be acceptable too? What is the root cause of our problem? All I want is to set, and precisely hit, the due date of each milestone without losing frequent technical support. I think there must be a solution for this situation. Please answer me. Thanks in advance. PS. Our project-management tools and practices are Trello, a Mantis-like issue tracker, shared calendar software, and Scrum (with cards collected into a series of small, highly complete projects).

    Read the article

  • Will polishing my current project be a better learning experience than starting a new one?

    - by Alejandro Cámara
    I started programming many years ago. Now I'm trying to make games. I have read many recommendations to start by cloning some well-known games like Galaga, Tetris, Arkanoid, etc. I have also read that I should go for the whole game (including menus, sound, score, etc.). Yesterday I finished the first complete version of my Arkanoid clone. But it is far from over. I can still work on it for months (I program as a hobby in my free time) implementing a screen-resolution switcher, remapping of the control keys, power-ups falling from broken bricks, and a huge etc. But I do not want to be forever learning how to clone ONE game. I have the urge to get to the next clone in order to apply some design ideas I have come upon while developing this Arkanoid clone (at the same time I am reading the GoF book and much source code from the Ludum Dare 21 game contest). So the question is: should I keep improving the Arkanoid clone until it has all the features the original game had, or should I move to the next clone (there are almost infinite games to clone) and start mending the things I did wrong with the previous clone? This can be a very subjective question, so please restrict the answers to the most effective way to learn how to make my own games (not cloning someone else's ideas). Thank you!

    CLARIFICATION In order to clarify what I have implemented, here is a list. Features implemented: bouncing capabilities (the ball bounces on walls, on bricks, and on the bar); sounds when bouncing on bricks and the bar, and when the player wins or loses; a basic title menu (new game and exit only), plus an in-game menu and win/lose menus; only three levels, but the map system is so easy I do not think it will teach me much (am I wrong?). Features not implemented: power-ups when breaking the bricks; complex bricks (with more than one "hit point", or invincible); better graphics (I am not really good at it); programming polish (using the design patterns more intensively). Here's a link to its (minimal) webpage: http://blog.acamara.es/piperine/ I kind of feel ashamed to show it, so please do not hit me too hard :-) My question was related to the not-implemented features. I wondered what the fastest (optimal) path to learn was: 1) implement the not-implemented features in this project, which is getting big, or 2) make a new game, which will probably teach me those lessons and new ones.

    ANSWER I chose @ashes999's answer because, in my case, I think I should polish more and try to "ship" the game. I think all the other answers are also important to bear in mind, so if you came here with the same question, read the whole discussion before rushing to a decision. Thank you all!

    Read the article

  • Redgate ANTS Performance Profiler

    - by Jon Canning
    Seemingly forever I've been working on a business idea: a REST API delivering content to mobiles, and I've never really had much idea about its performance. Yes, I have a suite of unit tests and integration tests, but these only tell me that it works, not how well it works. I was also about to embark on a major refactor, swapping the database from MongoDB to RavenDB, and was curious to see whether that impacted performance at all, so I needed a profiler that supported IIS Express and that I could run my integration tests against, and Google gave me: http://www.red-gate.com/supportcenter/content/ANTS_Performance_Profiler/help/7.4/app_iise Excellent. Following the above guide, an instance of IIS Express is launched, as is Internet Explorer. The latter eventually becomes annoying (I would like to decide whether I want a browser opened), but thankfully the guide is wrong in that it can be closed and profiling will continue. So I ran my tests, stopped profiling, and was presented with a call tree listing the endpoints called and allowing me to drill down to the source code beneath. Although useful and fascinating, this wasn't what I was expecting to see; I was after the method timings from the entire test suite. Switching Show to Methods Grid presented me with a list of my methods, with the slowest lit up in red at the top. Marvellous. I did find that if you switch to Methods Grid before the Call tree has loaded, you do not get the red warnings. StructureMap was very busy, and next on the list was a request filter that I didn't expect to be so overworked. Highlighting it, the source code was presented to me in the bottom window with timings and a nice red indicator to show me where to look. Oh horror, that reflection hack I put in months ago; I'd forgotten all about it. It was calling Validate<T>(), which in turn was resolving a validator from StructureMap. Note to self: use //TODO: when leaving smelly code lying around. Before refactoring, remember to Save Profile Results from the File menu. Annoyingly, you are not prompted to save your results when exiting, and using Save Project will only leave you thankful that you have version control and can go back in time to run your tests again. Having implemented StructureMap's ForGenericType, I ran my tests again and: win. Thank you, ANTS. (What does ANTS stand for, BTW?) There's definitely room in my toolbox for a profiler; what started out as idle curiosity actually solved a potential problem. When presented with a new codebase I can see enormous benefit in getting an overview of the pipeline from the call tree before drilling into the code, and as a sanity check before release it gives a little more reassurance that you've done your best, and shows you exactly where to look if you haven't. Next I'm going to profile a load test.

    Read the article

  • New computer hangs on shutdown/reboot, how to troubleshoot?

    - by torbengb
    Summary: My machine hangs on shutdown/restart: all windows and the menu bar disappear but the desktop wallpaper remains, and it stays like that, without disk activity, forever (hours). It doesn't even show the shutdown screen (the one with the animated dots) where I could hit ESC and watch the shutdown text. How can I troubleshoot this? Details: I've just received a new nettop computer (Acer Aspire Revo 3700: CPU: Atom D525, GPU: Nvidia ION2) and made a clean install of Ubuntu 10.10 using the standard USB pendrive method. The machine boots and works OK, including WLAN and audio, but the graphics are not OK. Ubuntu offered to install and activate the current recommended Nvidia driver, but the machine hangs on shutdown/restart, which prevents the installation of the proper Nvidia driver. I have to cycle the power to reboot. I ran the Update Manager in the hope that the updates would fix the hang-up. At the end of the update installation it asked to reboot, and got stuck just like before. I see no obvious cause of the freeze, and I don't know if it's caused by graphics problems or anything else. The only USB attachment is a mouse/keyboard; I don't have any external storage attached; and I don't have any programs running (the machine freezes even when restarting right after login). How can I determine what is causing the freeze? How can I fix this? I'm frankly rather disappointed, because I bought this new machine in the hope of getting the graphics to work, which failed miserably on my old machine, even though Ubuntu is supposed to be good with Nvidia. Being a fresh convert from Windows, I was hoping for a happier experience this time, so I'm very much looking forward to your suggestions! ... After posting this question, I see related questions in the right sidebar: this, this, and this. I don't know why these didn't show up while I composed my question. Those questions suggest some ACPI settings, but I am not experienced enough to find/change those settings. I'll try the sudo shutdown -h now command when I get home and see if that works, then update this question. I did check the system BIOS but didn't see anything out of the ordinary.

    Read the article

  • How to move a website and domain name without experiencing downtime for emails or site?

    - by user4842
    Okay, I have a pretty complex problem, so I'll get right to it. I'm a designer who built a new website for my client. Their old site is hosted at GoDaddy, as is their email. The problem is, the guy who built the original site decided to put the original domain name and hosting under HIS personal GoDaddy account. Well, that turned out to be a bad move, for several reasons. Here's how it's all tied together. The original domain name, www.domainoriginal.com, was actually purchased at Network Solutions. The original web designer pointed the nameservers from Network Solutions to his GoDaddy account, where the email and hosting are set up. The new domain name, www.domainnew.com, was purchased under a new and separate GoDaddy account belonging to the company, and the new website was built on a third-party platform (Big Commerce). So www.domainnew.com is already pointed to the new website using A records at the new GoDaddy account. All is fine there. However, they still need www.domainoriginal.com to point to the NEW website as well. (The old one can simply be deleted; it is NOT important.) AND they want to keep their old email addresses intact and working as well, but under the NEW GoDaddy account. Obviously, I have no DNS control at Network Solutions, and I have no idea what kind of control I have at GoDaddy under the old account, because the web designer will not let me see inside his account. But he and GoDaddy both tell me nothing can be done other than to repoint the nameservers to Network Solutions, then repoint the A record to my new website, www.domainnew.com, and point the MX records to GoDaddy. I'm told the downtime would be 24-48 hours if I do this. Ideally, we'd like to do a domain name transfer and get www.domainoriginal.com into the new GoDaddy account created by the company, but I'm told this could take up to 7 days. Does this mean the site and email will be down for 7 days? And would any emails sent during this time be lost forever? If I do this, how long should I expect the site and email to be down? I've gotten different answers from everybody at GoDaddy, so I kind of don't trust them anymore... Any help would be greatly appreciated. Thanks, Tyson

    Read the article

  • Portland Silverlight User Group: WP7 & XNA – I survived.

    - by George Clingerman
    Last night I gave a talk to the Portland Silverlight User Group: http://www.portlandsilverlight.net/Meetings/Details/15 And I survived (which you probably already figured out, since you're reading this post AND that's what I titled it...). Really though, it was a fantastic time and I had a lot of fun! I was a little nervous getting ready for it, but I'm always a little nervous getting ready for things. I had the game all written, and I knew the general flow of what the talk was going to be. I read over Scott Hanselman's 11 Top Tips for a Successful Technical Presentation (which has become something I do EVERY time I'm preparing for a talk). I gave myself a brief list of points I wanted to make sure I covered or worked into the talk. But then I was ready, and I waited. I hate the waiting. It makes me nervous. Once I was up in front of the room with my laptop open and some XNA code in front of me, though, my nerves went away. Then I was ready. I know XNA, I love talking about XNA, and I love sharing what I know and hearing questions that make me think. And hopefully that came through while I was talking. I had a freaking blast. The swag went quickly (and I was even able to hand out some XNA 2.0 books that have been around forever!) and the pizza was gobbled down. I started the talk at about 6:10 and managed to cover a very brief intro to programming against the game loop (it's a different concept, and one that seems to trip up a lot of people getting started with game development) and then rolled into making a basic 2D game for Windows Phone 7 using XNA. And I finished the whole thing before 8:30. Wahoo! I'm planning on posting the source code and assets on my site, so those should be coming soon. And to make things even better, they recorded the whole thing on video, so everyone will get to see me pretend I can speak! Just wait till you hear the new song I wrote for this talk...

    Read the article

  • C# 5: At last, async without the pain

    - by Alex.Davies
    For me, the best feature in Visual Studio 11 is the async and await keywords that come with C# 5. I am a big fan of asynchronous programming: it frees up resources, in particular the thread that a piece of code needs to run in. That lets that thread run something else while waiting for your long-running operation to complete. That's really important if that thread is the UI thread, or if it's holding a lock because it accesses some data structure. Before C# 5, I think I was about the only person in the world who really cared about asynchronous programming. The trouble was that you had to go to extreme lengths to make code asynchronous. I would forever be writing methods that, instead of returning a value, accepted an extra argument that is a "continuation". Then, when calling the method, I'd have to pass a lambda in to it, which contained all the stuff that needed to happen after the method finished. Here is a real snippet of code that is in .NET Demon:

    m_BuildControl.FilterEnabledForBuilding(
        projects,
        enabledProjects => m_OutOfDateProjectFinder.FilterNeedsBuilding(
            enabledProjects,
            newDirtyProjects =>
            {
                // Mark any currently broken projects as dirty
                newDirtyProjects.UnionWith(m_BrokenProjects);
                // Copy what we found into the set of dirty things
                m_DirtyProjects = newDirtyProjects;
                RunSomeBuilds();
            }));

    It's just obtuse. Who puts a lambda inside a lambda like that? Well, me, obviously. But surely enabledProjects should just be the return value of FilterEnabledForBuilding? And newDirtyProjects should just be the return value of FilterNeedsBuilding? C# 5 async/await lets you write asynchronous code without it looking so stupid. Here's what I plan to change that code to, once we upgrade to VS 11:

    var enabledProjects = await m_BuildControl.FilterEnabledForBuilding(projects);
    var newDirtyProjects = await m_OutOfDateProjectFinder.FilterNeedsBuilding(enabledProjects);
    // Mark any currently broken projects as dirty
    newDirtyProjects.UnionWith(m_BrokenProjects);
    // Copy what we found into the set of dirty things
    m_DirtyProjects = newDirtyProjects;
    RunSomeBuilds();

    Much easier to read! But how is this the same code? If we were on the UI thread, doesn't the UI thread have to block while FilterEnabledForBuilding runs? No, it doesn't, and that's the magic of the await keyword! It cuts your method up into its constituent pieces, much like I did manually with lambdas before. When you run it, only the piece up to the first await actually runs. The rest is passed to FilterEnabledForBuilding as a continuation, which will get called back whenever that method is finished. In the meantime, our thread returns, and can go back to making the UI responsive, or whatever else threads do in their spare time. This is actually a massive simplification, and if you're interested in all the gory details, and the speed hacks that the await keyword actually does for you, I recommend Jon Skeet's blog posts about it.

    Read the article

  • Getting a virus is *very* annoying

    - by bconlon
    I spent most of yesterday removing an annoying virus from my PC. I feel slightly foolish for getting one in the first place, but after so many years I guess I was always going to succumb eventually. I was also a little surprised at the failure of various tools to remove it. The virus would redirect the browser to websites including 'licosearch', 'hugosearch' and 'facebook', and the disk would be thrashing away, infecting dlls in some way. I had the full, up-to-date version of McAfee installed. This identified that there was an issue in some dlls on the system and was able to 'fix' them. But they kept getting re-infected. So I installed Microsoft Security Essentials, and this too was able to identify and 'fix' the infected dlls. The system scans take forever, and I really expected better results. I also tried Malwarebytes, Hitman Pro, AVG and Sophos, to no avail. Eventually I thought I'd investigate myself. It turned out that on reboot, the virus would start 3 instances of Firefox.exe, which I'm guessing would do bad things, including infecting as many dlls on the system as possible. I removed Firefox, and the virus cleverly then launched 3 instances of Chrome! So I uninstalled Chrome and, yes, it then started to launch 3 instances of iexplore.exe. If I'm honest, by this stage I was just seeing if it would be able to use any of the browsers! As it was starting these on reboot, I looked in my user Startup folder, and there was a <randomly named>.exe and several log files. I deleted these and rebooted. When I looked, they had been recreated. So I then looked in the registry Run and RunOnce entries: HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows\CurrentVersion\Run. Sure enough, there were entries to run a file in C:\Program Files\<random name folder>\<random name file>.exe. I deleted this and rebooted, and it was fixed. I also looked in the event log and found a warning that Winlogon had failed to start the file C:\Program Files\<random name folder>\<random name file>.exe, so I also checked HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Winlogon, and this entry had also been changed. Finally I ran a full system scan to clean up any infected dlls. I hope it's gone for good!

    Read the article

  • Ubuntu doesn't "see" external USB Hard Disk

    - by Mina Michael
    It's NTFS. It's USB2. I'm using Ubuntu 13.04. It works perfectly fine on Windows (which rules out cable and hardware problems). I have two Ubuntu computers and it's not detected on either. It's about 500 GB. Edits: Following the first link, I ran sudo lsusb in a terminal before and after connecting the HDD. The difference was:

    Bus 001 Device 012: ID 14cd:6116 Super Top M6116 SATA Bridge

    There it is! ("SATA bridge" used to appear in a Windows notification when I plugged the HDD in!) This means that Ubuntu detects it, but is it not mounting it? I tried this:

    sudo mount /dev/sdb1 /mnt

    but it gives this:

    mount: special device /dev/sdb1 does not exist

    I also tried sudo mount /dev/sdc1 /mnt, but it stays with no output forever; I left it in the background for about 30 minutes. sudo fdisk -l gives out this:

    Disk /dev/sda: 160.0 GB, 160041885696 bytes
    255 heads, 63 sectors/track, 19457 cylinders, total 312581808 sectors
    Units = sectors of 1 * 512 = 512 bytes
    Sector size (logical/physical): 512 bytes / 512 bytes
    I/O size (minimum/optimal): 512 bytes / 512 bytes
    Disk identifier: 0xa42d04a3

       Device Boot      Start         End      Blocks   Id  System
    /dev/sda1              63       80324       40131   de  Dell Utility
    /dev/sda2   *       80325   102481919    51200797+   7  HPFS/NTFS/exFAT
    /dev/sda3       263874558   312580095    24352769    5  Extended
    /dev/sda4       102481920   263872511    80695296    7  HPFS/NTFS/exFAT
    /dev/sda5       263874560   310505471    23315456   83  Linux
    /dev/sda6       310507520   312580095     1036288   82  Linux swap / Solaris

    Partition table entries are not in disk order

    Disk /dev/sdc: 500.1 GB, 500107862016 bytes
    255 heads, 63 sectors/track, 60801 cylinders, total 976773168 sectors
    Units = sectors of 1 * 512 = 512 bytes
    Sector size (logical/physical): 512 bytes / 512 bytes
    I/O size (minimum/optimal): 512 bytes / 512 bytes
    Disk identifier: 0x5822aaea

       Device Boot      Start         End      Blocks   Id  System
    /dev/sdc1            2048   976769023   488383488    7  HPFS/NTFS/exFAT

    The part below "Partition table entries are not in disk order" takes about 5 minutes to appear. The outputs of ls /dev/ | grep sd before and after connecting the HDD: before: sda sda1 sda2 sda3 sda4 sda5 sda6; after: sda sda1 sda2 sda3 sda4 sda5 sda6 sdd sdd1. The second output has the lines sdd and sdd1 that the first one doesn't. IT SHOWED THE FILES!! The command sudo mount /dev/sdd1 /mnt worked after I typed in sudo fdisk -l!!! Thanks a million!! :) :)

    Read the article

  • How to make sprint planning fun

    - by Jacob Spire
    Not only are our sprint planning meetings not fun, they're downright dreadful. The meetings are tedious and boring, and take forever (a day, but it feels like a lot longer). The developers complain about them and dread upcoming plannings. Our routine is pretty standard (user story inserted into sprint backlog by priority; story is broken down into tasks; tasks are estimated in hours; repeat), and I can't figure out what we're doing wrong. How can we make the meetings more enjoyable? ... Some more details, in response to requests for more information: Why are the backlog items not inserted and prioritized before sprint kickoff? User stories are indeed prioritized; we have no idea how long they'll take until we break them down into tasks! From the (excellent) answers here, I see that maybe we shouldn't estimate tasks at all, only the user stories. The reason we estimate tasks (and not stories) is that we've been getting story estimates terribly wrong, but I guess that's the subject for an altogether different question. Why are developers complaining? Meetings are long. Meetings are monotonous. Story after story, task after task, struggling (yes, struggling) to estimate how long it will take and what it involves. Estimating tasks makes user-story estimation seem pointless. The longer the meeting, the less focus in the room. The less focused colleagues are, the longer the meeting takes. A recursive hate-spiral develops. We've considered splitting the meeting into two days in order to keep people focused, but the developers wouldn't hear of it. One day of planning is bad enough; now we'll have two?! Part of our problem is that we go into very small detail (in order to get more accurate estimations). But when we estimate roughly, we go way off the mark! To sum up the question: What are we doing wrong? What additional ways are there to make the meeting generally more enjoyable?

    Read the article

  • Java 2D Tile Collision

    - by opiop65
    I have been working on a way to do collision detection forever, and just can't figure it out. Here's my simple 2D array:

    for (int x = 0; x < 16; x++) {
        for (int y = 0; y < 16; y++) {
            map[x][y] = AIR;
            if (map[x][y] == AIR) {
                air.draw(x * tilesize, y * tilesize);
            }
        }
    }
    for (int x = 0; x < 16; x++) {
        for (int y = 6; y < 16; y++) {
            map[x][y] = GRASS;
            if (map[x][y] == GRASS) {
                grass.draw(x * tilesize, y * tilesize);
            }
        }
    }
    for (int x = 0; x < 16; x++) {
        for (int y = 8; y < 16; y++) {
            map[x][y] = STONE;
            if (map[x][y] == STONE) {
                stone.draw(x * tilesize, y * tilesize);
            }
        }
    }

    I want to do it with rectangles, using the intersects() method, but how would I go about adding rectangles to all the tiles? Edit: My player moves like this:

    if (input.isKeyDown(Input.KEY_W)) {
        shiftY -= delta * speed;
        idY = (int) shiftY;
        if (shift == true) {
            shiftY -= delta * runspeed;
        }
        if (isColliding == true) {
            shiftY += delta * speed;
        }
    }
    if (input.isKeyDown(Input.KEY_S)) {
        shiftY += delta * speed;
        idY = (int) shiftY;
        if (shift == true) {
            shiftY += delta * runspeed;
        }
        if (isColliding == true) {
            shiftY -= delta * speed;
        }
    }
    if (input.isKeyDown(Input.KEY_A)) {
        steve = left;
        shiftX -= delta * speed;
        idX = (int) shiftX;
        if (shift == true) {
            shiftX -= delta * runspeed;
        }
        if (isColliding == true) {
            shiftX += delta * speed;
        }
    }
    if (input.isKeyDown(Input.KEY_D)) {
        steve = right;
        shiftX += delta * speed;
        idX = (int) shiftX;
        if (shift == true) {
            shiftX += delta * runspeed;
        }
        if (isColliding == true) {
            shiftX -= delta * speed;
        }
    }

    (I have tried my own collision code, but it's horrible. It doesn't work in the slightest.)
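
    Since the tiles live on a fixed grid, one answer (a sketch, not from the thread) is to build a java.awt.Rectangle for every solid tile after the map is filled, and test the player's proposed bounds against them before committing a move; Slick2D's org.newdawn.slick.geom.Rectangle offers a similar intersects method if you'd rather stay in that library. Names like map, tilesize and the AIR tile value are taken from the question; everything else is illustrative:

    import java.awt.Rectangle;
    import java.util.ArrayList;
    import java.util.List;

    public class TileCollision {

        // Build one bounding box per solid (non-air) tile, once the map is filled.
        public static List<Rectangle> buildTileRects(int[][] map, int tilesize, int airTile) {
            List<Rectangle> rects = new ArrayList<>();
            for (int x = 0; x < map.length; x++) {
                for (int y = 0; y < map[x].length; y++) {
                    if (map[x][y] != airTile) {
                        rects.add(new Rectangle(x * tilesize, y * tilesize, tilesize, tilesize));
                    }
                }
            }
            return rects;
        }

        // The move is allowed only if the player's proposed bounds hit no solid tile.
        public static boolean canMoveTo(float newX, float newY, int width, int height,
                                        List<Rectangle> solids) {
            Rectangle player = new Rectangle(Math.round(newX), Math.round(newY), width, height);
            for (Rectangle r : solids) {
                if (player.intersects(r)) {
                    return false;
                }
            }
            return true;
        }
    }

    With something like this, the movement code can compute the new shiftX/shiftY first and only commit them when canMoveTo(...) returns true, instead of moving and then undoing the move when isColliding is set. For a 16x16 map a straight scan of the list is fine; on larger maps, index the player's coordinates into the array and test only the handful of neighbouring tiles.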

    Read the article

  • MVVM and Animations in Silverlight

    - by Aligned
    I wanted to spin an icon to show progress to my user while some content was downloading. I'm using MVVM (aren't you?) and made a satisfactory Storyboard to spin the icon. However, it took longer than expected to trigger that animation from my ViewModel's property. I used a combination of the GoToState action and the DataTrigger from the Microsoft.Expression.Interactions dll, as described here. Then I had problems getting it to start, until I found this approach that saved the day. The DataTrigger didn't bind right away because "it doesn't change visual state on load ... because the StateTarget property of the GotoStateAction is null at the time the DataTrigger fires". Here's my XAML; hopefully you can fill in the rest.

    <Image x:Name="StatusIcon"
           AutomationProperties.AutomationId="StatusIcon"
           Width="16" Height="16" Stretch="Fill"
           Source="inProgress.png"
           ToolTipService.ToolTip="{Binding StatusTooltip}">
        <i:Interaction.Triggers>
            <utilitiesBehaviors:DataTriggerWhichFiresOnLoad Value="True"
                    Binding="{Binding IsDownloading, Mode=OneWay, TargetNullValue=True}">
                <ei:GoToStateAction StateName="Downloading" />
            </utilitiesBehaviors:DataTriggerWhichFiresOnLoad>
            <utilitiesBehaviors:DataTriggerWhichFiresOnLoad Value="False"
                    Binding="{Binding IsDownloading, Mode=OneWay, TargetNullValue=True}">
                <ei:GoToStateAction StateName="Complete"/>
            </utilitiesBehaviors:DataTriggerWhichFiresOnLoad>
        </i:Interaction.Triggers>
        <Image.Projection>
            <PlaneProjection/>
        </Image.Projection>
        <VisualStateManager.VisualStateGroups>
            <VisualStateGroup x:Name="VisualStateGroup">
                <VisualStateGroup.Transitions>
                    <VisualTransition GeneratedDuration="0" To="Downloading">
                        <VisualTransition.GeneratedEasingFunction>
                            <QuadraticEase EasingMode="EaseInOut"/>
                        </VisualTransition.GeneratedEasingFunction>
                        <Storyboard RepeatBehavior="Forever">
                            <DoubleAnimationUsingKeyFrames Storyboard.TargetProperty="(UIElement.Projection).(PlaneProjection.RotationZ)" Storyboard.TargetName="StatusIcon">
                                <EasingDoubleKeyFrame KeyTime="0:0:1.5" Value="-360"/>
                                <EasingDoubleKeyFrame KeyTime="0:0:2" Value="-360"/>
                            </DoubleAnimationUsingKeyFrames>
                        </Storyboard>
                    </VisualTransition>
                    <VisualTransition From="Downloading" GeneratedDuration="0"/>
                </VisualStateGroup.Transitions>
                <VisualState x:Name="Downloading"/>
                <VisualState x:Name="Complete"/>
            </VisualStateGroup>
        </VisualStateManager.VisualStateGroups>
    </Image>

    MVVMAnimations.zip

    Read the article

  • Should we persist with an employee still writing bad code after many years?

    - by user94986
    I've been assigned the task of managing developers for a well-established company. They have a single developer who specialises in all their C++ coding (since forever), but the quality of the work is abysmal. Code reviews and testing have revealed many problems, one of the worst being memory leaks. The developer has never tested his code for leaks, and I discovered that the applications could leak many MBs with only a minute of use. Users were reporting huge slowdowns, and his take was, "it's nothing to do with me - if they quit and restart, it's all good again." I've given him tools to detect and trace the leaks, and sat down with him for many hours to demonstrate how the tools are used, where the problems occur, and what to do to fix them. We're 6 months down the track, and I assigned him to write a new module. I reviewed it before it was integrated into our larger code base, and was dismayed to discover the same bad coding as before. The part that I find incomprehensible is that some of the coding is worse than amateurish. For example, he wanted a class (Foo) that could populate an object of another class (Bar). He decided that Foo would hold a reference to Bar, e.g.:

    class Foo {
    public:
        Foo(Bar& bar) : m_bar(bar) {}
    private:
        Bar& m_bar;
    };

    But (for other reasons) he also needed a default constructor for Foo and, rather than question his initial design, he wrote this gem:

    Foo::Foo() : m_bar(*(new Bar)) {}

    So every time the default constructor is called, a Bar is leaked. To make matters worse, Foo allocates memory from the heap for 2 other objects, but he didn't write a destructor or copy constructor. So every allocation of Foo actually leaks 3 different objects, and you can imagine what happened when a Foo was copied. And - it only gets better - he repeated the same pattern in three other classes, so it isn't a one-off slip. The whole concept is wrong on so many levels. I would be more understanding if this came from a total novice, but this guy has been doing this for many years and has had very focussed training and advice over the past few months. I realise he has been working without mentoring or peer reviews most of that time, but I'm beginning to feel he can't change. So my question is, would you persist with someone who is writing such obviously bad code?

    Read the article

  • How to recover PL/SQL code overwritten by create or replace

    - by Liu Maclean
    A reader on T.Askmaclean.com asked roughly this: on 10gR2, a procedure was accidentally overwritten with create or replace, and no backup of the old definition exists; does Oracle offer any way to get the previous version of the procedure back? Maclean demonstrates two ways to recover overwritten PL/SQL source on 10gR2.

    Method 1: Use Flashback Query. There is no need to flash back the whole database: a create or replace rewrites that object's rows in the dictionary table source$ with ordinary DML, so as long as the relevant undo data is still available, the old definition can be read back with a flashback query:

    SQL> select * from V$version;

    BANNER
    ----------------------------------------------------------------
    Oracle Database 10g Enterprise Edition Release 10.2.0.5.0 - 64bi
    PL/SQL Release 10.2.0.5.0 - Production
    CORE 10.2.0.5.0 Production
    TNS for Linux: Version 10.2.0.5.0 - Production
    NLSRTL Version 10.2.0.5.0 - Production

    SQL> select * from global_name;

    GLOBAL_NAME
    --------------------------------------------------------------------------------
    www.oracledatabase12g.com

    SQL> create or replace procedure maclean_proc as
      2  begin
      3  execute immediate 'select 1 from dual';
      4  end;
      5  /

    Procedure created.

    SQL> select * from dba_source where name='MACLEAN_PROC';

    OWNER      NAME                           TYPE               LINE TEXT
    ---------- ------------------------------ ------------ ---------- --------------------------------------------------
    SYS        MACLEAN_PROC                   PROCEDURE             1 procedure maclean_proc as
    SYS        MACLEAN_PROC                   PROCEDURE             2 begin
    SYS        MACLEAN_PROC                   PROCEDURE             3 execute immediate 'select 1 from dual';
    SYS        MACLEAN_PROC                   PROCEDURE             4 end;

    SQL> select current_scn from v$database;

    CURRENT_SCN
    -----------
        2660057

    create or replace procedure maclean_proc as
    begin
    -- I am new procedure
    execute immediate 'select 2 from dual';
    end;
    /

    Procedure created.

    SQL> select current_scn from v$database;

    CURRENT_SCN
    -----------
        2660113

    SQL> select * from dba_source where name='MACLEAN_PROC';

    OWNER      NAME                           TYPE               LINE TEXT
    ---------- ------------------------------ ------------ ---------- --------------------------------------------------
    SYS        MACLEAN_PROC                   PROCEDURE             1 procedure maclean_proc as
    SYS        MACLEAN_PROC                   PROCEDURE             2 begin
    SYS        MACLEAN_PROC                   PROCEDURE             3 -- I am new procedure
    SYS        MACLEAN_PROC                   PROCEDURE             4 execute immediate 'select 2 from dual';
    SYS        MACLEAN_PROC                   PROCEDURE             5 end;

    SQL> create table old_source as select * from dba_source as of scn 2660057 where name='MACLEAN_PROC';

    Table created.

    SQL> select * from old_source where name='MACLEAN_PROC';

    OWNER      NAME                           TYPE               LINE TEXT
    ---------- ------------------------------ ------------ ---------- --------------------------------------------------
    SYS        MACLEAN_PROC                   PROCEDURE             1 procedure maclean_proc as
    SYS        MACLEAN_PROC                   PROCEDURE             2 begin
    SYS        MACLEAN_PROC                   PROCEDURE             3 execute immediate 'select 1 from dual';
    SYS        MACLEAN_PROC                   PROCEDURE             4 end;

    The example above drives the flashback query by SCN; an as of timestamp query works just as well. Note that this method depends entirely on the undo data for the dictionary rows still being available: once that undo has been overwritten, it can no longer bring back the replaced or dropped PL/SQL source.

    Method 2: Use LogMiner. When PL/SQL is replaced or dropped, the old rows in source$ are removed with DELETE statements, so the UNDO SQL that LogMiner reconstructs contains the old PL/SQL source. This requires the relevant archived (or online) redo logs to still be around, and supplemental logging, at least minimal supplemental logging, should be enabled to avoid Unsupported SQLREDO in the LogMiner output. A create or replace of a procedure is implemented as a series of recursive dictionary statements; tracing the operation shows the old source$ rows being deleted and the new ones inserted:

    SQL> ALTER DATABASE ADD SUPPLEMENTAL LOG DATA;

    Database altered.

    SQL> create or replace procedure maclean_proc as
      2  begin
      3  execute immediate 'select 1 from dual';
      4  end;
      5  /

    Procedure created.

    SQL> oradebug setmypid;
    Statement processed.
    SQL> oradebug event 10046 trace name context forever,level 12;
    Statement processed.
    SQL> create or replace procedure maclean_proc as
      2  begin
      3  execute immediate 'select 2 from dual';
      4  end;
      5  /

    Procedure created.

    SQL> oradebug tracefile_name
    /s01/admin/G10R25/udump/g10r25_ora_4305.trc

    [oracle@vrh8 ~]$ egrep "update|insert|delete|merge" /s01/admin/G10R25/udump/g10r25_ora_4305.trc
    delete from procedureinfo$ where obj#=:1
    delete from argument$ where obj#=:1
    delete from procedurec$ where obj#=:1
    delete from procedureplsql$ where obj#=:1
    delete from procedurejava$ where obj#=:1
    delete from vtable$ where obj#=:1
    insert into procedureinfo$(obj#,procedure#,overload#,procedurename,properties,itypeobj#) values (:1,:2,:3,:4,:5,:6)
    insert into argument$( obj#,procedure$,procedure#,overload#,position#,sequence#,level#,argument,type#,default#,in_out,length,precision#,scale,radix,charsetid,charsetform,properties,type_owner,type_name,type_subname,type_linkname,pls_type) values (:1,:2,:3,:4,:5,:6,:7,:8,:9,:10,:11,:12,:13,:14,:15,:16,:17,:18,:19,:20,:21,:22,:23)
    insert into procedureplsql$(obj#,procedure#,entrypoint#) values (:1,:2,:3)
    update procedure$ set audit$=:2,options=:3 where obj#=:1
    delete from source$ where obj#=:1
    insert into source$(obj#,line,source) values (:1,:2,:3)
    delete from idl_ub1$ where obj#=:1 and part=:2 and version<>:3
    delete from idl_char$ where obj#=:1 and part=:2 and version<>:3
    delete from idl_ub2$ where obj#=:1 and part=:2 and version<>:3
    delete from idl_sb4$ where obj#=:1 and part=:2 and version<>:3
    delete from ncomp_dll$ where obj#=:1 returning dllname into :2
    update idl_sb4$ set piece#=:1 ,length=:2 , piece=:3 where obj#=:4 and part=:5 and piece#=:6 and version=:7
    update idl_ub1$ set piece#=:1 ,length=:2 , piece=:3 where obj#=:4 and part=:5 and piece#=:6 and version=:7
    update idl_char$ set piece#=:1 ,length=:2 , piece=:3 where obj#=:4 and part=:5 and piece#=:6 and version=:7
    update idl_ub2$ set piece#=:1 ,length=:2 , piece=:3 where obj#=:4 and part=:5 and piece#=:6 and version=:7
    delete from idl_ub1$ where obj#=:1 and part=:2 and version<>:3
    delete from idl_char$ where obj#=:1 and part=:2 and version<>:3
    delete from idl_ub2$ where obj#=:1 and part=:2 and version<>:3
    delete from idl_sb4$ where obj#=:1 and part=:2 and version<>:3
    delete from ncomp_dll$ where obj#=:1 returning dllname into :2
    delete from idl_ub1$ where obj#=:1 and part=:2 and (piece#<:3 or piece#>:4) and version=:5
    delete from idl_char$ where obj#=:1 and part=:2 and (piece#<:3 or piece#>:4) and version=:5
    delete from idl_ub2$ where obj#=:1 and part=:2 and (piece#<:3 or piece#>:4) and version=:5
    delete from idl_sb4$ where obj#=:1 and part=:2 and (piece#<:3 or piece#>:4) and version=:5
    delete from idl_ub1$ where obj#=:1 and part=:2 and version<>:3
    delete from idl_char$ where obj#=:1 and part=:2 and version<>:3
    delete from idl_ub2$ where obj#=:1 and part=:2 and version<>:3
    delete from idl_sb4$ where obj#=:1 and part=:2 and version<>:3
    delete from ncomp_dll$ where obj#=:1 returning dllname into :2
    update idl_sb4$ set piece#=:1 ,length=:2 , piece=:3 where obj#=:4 and part=:5 and piece#=:6 and version=:7
    update idl_ub1$ set piece#=:1 ,length=:2 , piece=:3 where obj#=:4 and part=:5 and piece#=:6 and version=:7
    delete from idl_char$ where obj#=:1 and part=:2 and (piece#<:3 or piece#>:4) and version=:5
    delete from idl_ub2$ where obj#=:1 and part=:2 and (piece#<:3 or piece#>:4) and version=:5
    delete from error$ where obj#=:1
    delete from settings$ where obj# = :1
    insert into settings$(obj#, param, value) values (:1, :2, :3)
    delete from warning_settings$ where obj# = :1
    insert into warning_settings$(obj#, warning_num, global_mod, property) values (:1, :2, :3, :4)
    delete from dependency$ where d_obj#=:1
    delete from access$ where d_obj#=:1
    insert into dependency$(d_obj#,d_timestamp,order#,p_obj#,p_timestamp, property, d_attrs)values (:1,:2,:3,:4,:5,:6, :7)
    insert into access$(d_obj#,order#,columns,types) values (:1,:2,:3,:4)
    update obj$ set obj#=:6,type#=:7,ctime=:8,mtime=:9,stime=:10,status=:11,dataobj#=:13,flags=:14,oid$=:15,spare1=:16, spare2=:17 where owner#=:1 and name=:2 and namespace=:3 and(remoteowner=:4 or remoteowner is null and :4 is null)and(linkname=:5 or linkname is null and :5 is null)and(subname=:12 or subname is null and :12 is null)

    A drop procedure likewise deletes the PL/SQL source rows from source$:

    SQL> oradebug setmypid;
    Statement processed.
    SQL> oradebug event 10046 trace name context forever,level 12;
    Statement processed.
    SQL> drop procedure maclean_proc;

    Procedure dropped.

    SQL> oradebug tracefile_name
    /s01/admin/G10R25/udump/g10r25_ora_4331.trc

    delete from context$ where obj#=:1
    delete from dir$ where obj#=:1
    delete from type_misc$ where obj#=:1
    delete from library$ where obj#=:1
    delete from procedure$ where obj#=:1
    delete from javaobj$ where obj#=:1
    delete from operator$ where obj#=:1
    delete from opbinding$ where obj#=:1
    delete from opancillary$ where obj#=:1
    delete from oparg$ where obj# = :1
    delete from com$ where obj#=:1
    delete from source$ where obj#=:1
    delete from idl_ub1$ where obj#=:1 and part=:2
    delete from idl_char$ where obj#=:1 and part=:2
    delete from idl_ub2$ where obj#=:1 and part=:2
    delete from idl_sb4$ where obj#=:1 and part=:2
    delete from ncomp_dll$ where obj#=:1 returning dllname into :2
    delete from idl_ub1$ where obj#=:1 and part=:2
    delete from idl_char$ where obj#=:1 and part=:2
    delete from idl_ub2$ where obj#=:1 and part=:2
    delete from idl_sb4$ where obj#=:1 and part=:2
    delete from ncomp_dll$ where obj#=:1 returning dllname into :2
    delete from idl_ub1$ where obj#=:1 and part=:2
    delete from idl_char$ where obj#=:1 and part=:2
    delete from idl_ub2$ where obj#=:1 and part=:2
    delete from idl_sb4$ where obj#=:1 and part=:2
    delete from ncomp_dll$ where obj#=:1 returning dllname into :2
    delete from error$ where obj#=:1
    delete from settings$ where obj# = :1
    delete from procedureinfo$ where obj#=:1
    delete from argument$ where obj#=:1
    delete from procedurec$ where obj#=:1
    delete from procedureplsql$ where obj#=:1
    delete from procedurejava$ where obj#=:1
    delete from vtable$ where obj#=:1
    delete from dependency$ where d_obj#=:1
    delete from access$ where d_obj#=:1
    delete from objauth$ where obj#=:1
    update obj$ set obj#=:6,type#=:7,ctime=:8,mtime=:9,stime=:10,status=:11,dataobj#=:13,flags=:14,oid$=:15,spare1=:16, spare2=:17 where owner#=:1 and name=:2 and namespace=:3 and(remoteowner=:4 or remoteowner is null and :4 is null)and(linkname=:5 or linkname is null and :5 is null)and(subname=:12 or subname is null and :12 is null)

    Now mine the redo that contains the changes to source$:

    SQL> alter system switch logfile;

    System altered.

    SQL> select sequence#,name from v$archived_log where sequence#=(select max(sequence#) from v$archived_log);

     SEQUENCE#
    ----------
    NAME
    --------------------------------------------------------------------------------
           242
    /s01/flash_recovery_area/G10R25/archivelog/2012_05_21/o1_mf_1_242_7vnm13k6_.arc

    SQL> exec dbms_logmnr.add_logfile ('/s01/flash_recovery_area/G10R25/archivelog/2012_05_21/o1_mf_1_242_7vnm13k6_.arc',options => dbms_logmnr.new);

    PL/SQL procedure successfully completed.

    SQL> exec dbms_logmnr.start_logmnr(options => dbms_logmnr.dict_from_online_catalog);

    PL/SQL procedure successfully completed.

    SQL> select sql_redo,sql_undo from v$logmnr_contents where seg_name = 'SOURCE$' and operation='DELETE';

    delete from "SYS"."SOURCE$" where "OBJ#" = '56059' and "LINE" = '1' and "SOURCE" = 'procedure maclean_proc as ' and ROWID = 'AAAABIAABAAALpyAAN';
    insert into "SYS"."SOURCE$"("OBJ#","LINE","SOURCE") values ('56059','1','procedure maclean_proc as ');
    delete from "SYS"."SOURCE$" where "OBJ#" = '56059' and "LINE" = '2' and "SOURCE" = 'begin ' and ROWID = 'AAAABIAABAAALpyAAO';
    insert into "SYS"."SOURCE$"("OBJ#","LINE","SOURCE") values ('56059','2','begin ');
    delete from "SYS"."SOURCE$" where "OBJ#" = '56059' and "LINE" = '3' and "SOURCE" = 'execute immediate ''select 1 from dual''; ' and ROWID = 'AAAABIAABAAALpyAAP';
    insert into "SYS"."SOURCE$"("OBJ#","LINE","SOURCE") values ('56059','3','execute immediate ''select 1 from dual''; ');
    delete from "SYS"."SOURCE$" where "OBJ#" = '56059' and "LINE" = '4' and "SOURCE" = 'end;' and ROWID = 'AAAABIAABAAALpyAAQ';
    insert into "SYS"."SOURCE$"("OBJ#","LINE","SOURCE") values ('56059','4','end;');
    delete from "SYS"."SOURCE$" where "OBJ#" = '56059' and "LINE" = '1' and "SOURCE" = 'procedure maclean_proc as ' and ROWID = 'AAAABIAABAAALpyAAJ';
    insert into "SYS"."SOURCE$"("OBJ#","LINE","SOURCE") values ('56059','1','procedure maclean_proc as ');
    delete from "SYS"."SOURCE$" where "OBJ#" = '56059' and "LINE" = '2' and "SOURCE" = 'begin ' and ROWID = 'AAAABIAABAAALpyAAK';
    insert into "SYS"."SOURCE$"("OBJ#","LINE","SOURCE") values ('56059','2','begin ');
    delete from "SYS"."SOURCE$" where "OBJ#" = '56059' and "LINE" = '3' and "SOURCE" = 'execute immediate ''select 2 from dual''; ' and ROWID = 'AAAABIAABAAALpyAAL';
    insert into "SYS"."SOURCE$"("OBJ#","LINE","SOURCE") values ('56059','3','execute immediate ''select 2 from dual''; ');
    delete from "SYS"."SOURCE$" where "OBJ#" = '56059' and "LINE" = '4' and "SOURCE" = 'end;' and ROWID = 'AAAABIAABAAALpyAAM';
    insert into "SYS"."SOURCE$"("OBJ#","LINE","SOURCE") values ('56059','4','end;');

    As the output shows, the UNDO SQL that LogMiner reconstructs for the DELETEs against source$ (the insert statements above) carries the old SOURCE text, so the PL/SQL definitions as they stood before the DDL can be recovered.

    Read the article

  • Why does chkdsk produce "Correcting error in index $I30 for file 33267" and then freeze?

    - by wcpro
    I am running chkdsk like this: chkdsk k: /x I get this error: Correcting error in index $I30 for file 33267. What does it mean? Here's the full problem:

    Microsoft Windows [Version 6.1.7600]
    Copyright (c) 2009 Microsoft Corporation. All rights reserved.

    C:\Windows\system32>chkdsk e: /r
    The type of the file system is NTFS.
    Volume label is RodDrobo.

    CHKDSK is verifying files (stage 1 of 5)...
      162048 file records processed.
    File verification completed.
      0 large file records processed.
      0 bad file records processed.
      0 EA records processed.
      0 reparse records processed.
    CHKDSK is verifying indexes (stage 2 of 5)...
    10 percent complete. (167130 of 208124 index entries processed)
    Correcting error in index $I30 for file 33267.
    Correcting error in index $I30 for file 33267.  <--- it always reaches this spot, then hangs forever

    Read the article

  • procdump on w3wp.exe: Only part of a ReadProcessMemory or WriteProcessMemory request was completed

    - by JakeS
    I'm having a problem with an IIS application that occasionally spikes up in CPU usage, and am trying to use procdump to get a memory dump for examination. I'm running "procdump.exe -64 -mA 9999", where 9999 is the pid of the process. But every time I do it, I get an error: Only part of a ReadProcessMemory or WriteProcessMemory request was completed. Doing this also recycles the app pool, relieving the CPU spike, so I can't keep trying until I get it right. Does anyone know what is going wrong? EDIT WITH MORE INFO: So far I've failed to generate a debug dump no matter what tool I try; all of them seem to produce the same sort of error. This is 2008 R2 Datacenter running IIS7 with a 64-bit ASP.NET web site. My best guess is that something is getting blocked, causing some requests to remain open in IIS and gradually use up resources. If I monitor the worker process using the IIS Manager and view all requests, throughout the day I'll start to see some requests that "stick" and run forever. Some of these are for static files; some are for aspx pages. I cannot see any common reason for them. Every once in a while the app pool starts taking up 100% CPU, and the only remedy is to kill it.

    Read the article

  • Terminal Server 2008: Installing 16-bit Application (FoxPro 2.6)

    - by JohnyD
    I have one physical Windows 2008 R2 server running Hyper-V. Under Hyper-V I have a virtual Windows Server 2008 R2 server running Remote Desktop Services (Terminal Services). I'm preparing my applications using the "Install Application on Remote Desktop..." control panel app. So far so good. However, I am now trying to install FoxPro 2.6, which is a 16-bit Windows application. When I try to install it I receive the message: "The version of this file is not compatible with the version of Windows you're running. Check your computer's system information to see whether you need an x86 (32-bit) or x64 (64-bit) version of the program, and then contact the software publisher." Is there any way around this? I'm in the middle of a large migration to thin clients, and FoxPro 2.6, while it won't be around forever, is an integral application for our data-entry personnel. How can I get this to work? Thanks in advance!

    Read the article

  • Which Apache/MySQL/PHP package is best for Windows?

    - by crosenblum
    I have tried AppServNetwork, which was the best so far, but I haven't seen an update from them in ages, and EasyPHP is always slow to load. WAMP and XAMPP both state in their descriptions that they are not for production. I don't plan to host this site, or the sites I am working on, publicly; I just want a fast-loading Apache/MySQL/PHP server for development purposes. I used to really like WLMP (Lighttpd for Windows), but that project seems unmaintained or abandoned. I refuse to use IIS, but I have no desire to get into any wars over it. I run Windows XP SP3 on my home PC. I will need a web server set up for professional work, as well as for some fun websites I am working on. I just want it fast enough that it doesn't take forever to load in the browser via localhost. Thank you... I plan to do mostly PHP programming, and perhaps ColdFusion, with this.

    Read the article

  • Put one monitor of a dual-monitor Windows system into standby

    - by Psycogeek
    Standby, not Disabled! When running 2 monitors on Windows 7 or Windows XP, I would like to be able to put one of the monitors at a time into standby. The method can be manual. When running 2 monitors, the second monitor is not always needed. Shutting off the monitor's own power switch does work, but the problems with that are the delay with the monitor logo at turn-on, the power switch not being very accessible, and the switch possibly not living forever if it is turned on and off so many times. Using disable methods like devcon, Win+P, or the Display settings causes all the windows to (properly) move to the other monitor. While that is what a person would usually want, so they can get hold of the windows, it is not what I want to happen, and some things on the other monitor have to be rearranged after a re-enable. By putting the monitor into standby mode, nothing changes other than the monitor going into standby. Disconnecting the DVI cable can still cause the system to (properly) shift all the windows over to the one monitor, just like any of the disable methods do. That makes a mess of the windows, and is so unacceptable that I would prefer to leave the monitor on, wasting power and the hardware, when it could easily go into standby for some time. For both monitors together, I am using a "MonitorOff" program that puts both monitors into standby, but I cannot find a utility that will put only ONE monitor into standby for the Windows system. If someone comes along and suggests UltraMon, you must know for a fact that it will put either one of the monitors into actual standby. And it does not really suit me to use UltraMon; I tested it (it was nice) and I did not feel that it was a program I wanted. The 2 monitors are running off an ATI 4890 card, they are both hooked up via DVI-I, and the OS is both Windows 7 (primary) and Windows XP. In addition, it would also be interesting to have separate standby activity timers and follow-the-mouse kinds of standby changes, but any manual method (shortcut, batch, tray, or gadget kind of operation) would be a good start.

    Read the article

  • MySQL startup, shutdown, and logging on OS X

    - by Joelio
    Hi, I am trying to troubleshoot some MySQL problems (I have a table I can't seem to delete or drop; it hangs forever). I have OS X 10.5.8. I don't remember how (or if) I installed MySQL; here is what I know: it automatically starts on boot, and the process looks like this:

    /usr/local/mysql/libexec/mysqld --basedir=/usr/local/mysql --datadir=/usr/local/mysql/var --pid-file=/usr/local/mysql/var/Joels-New-Pro.local.pid
    _mysql 96 0.0 0.0 75884 684 ?? Ss Sat06PM 0:00.02 /bin/sh /usr/local/mysql/bin/mysqld_safe

    When I run /usr/local/mysql/libexec/mysqld --verbose --help it says:

    /usr/local/mysql/libexec/mysqld Ver 5.0.45 for apple-darwin9.1.0 on i686 (Source distribution)

    It seems to use my.cnf from /etc/my.cnf. Now here are my questions. I don't see anything in the startup items that remotely looks like MySQL:

    ls /Library/StartupItems/
    BRESINKx86Monitoring ChmodBPF HP IO HP Trap Monitor Parallels ParallelsTransporter

    1.) So how does it start up automatically? 2.) How do I start and stop this type of installation? Also, looking at the config, the logs have no values:

    /usr/local/mysql/libexec/mysqld --verbose --help | grep '^log'
    log (No default value)
    log-bin (No default value)
    log-bin-index (No default value)
    log-bin-trust-function-creators FALSE
    log-bin-trust-routine-creators FALSE
    log-error
    log-isam myisam.log
    log-queries-not-using-indexes FALSE
    log-short-format FALSE
    log-slave-updates FALSE
    log-slow-admin-statements FALSE
    log-slow-queries (No default value)
    log-tc tc.log
    log-tc-size 24576
    log-update (No default value)
    log-warnings 1

    3.) Does that mean there is no logging enabled in my setup? Thanks in advance! Joel

    Read the article

  • Setting a time limit for a transaction in MySQL/InnoDB

    - by Trevor Burnham
    This sprang from this related question, where I wanted to know how to force two transactions to occur sequentially in a trivial case (where both are operating on only a single row). I got an answer (use SELECT ... FOR UPDATE as the first line of both transactions), but this leads to a problem: if the first transaction is never committed or rolled back, then the second transaction will be blocked indefinitely. The innodb_lock_wait_timeout variable sets the number of seconds after which the client trying to make the second transaction would be told "Sorry, try again"... but as far as I can tell, they'd be trying again until the next server reboot. So: Surely there must be a way to force a ROLLBACK if a transaction is taking forever? Must I resort to using a daemon to kill such transactions, and if so, what would such a daemon look like? If a connection is killed by wait_timeout or interactive_timeout mid-transaction, is the transaction rolled back? Is there a way to test this from the console? Clarification: innodb_lock_wait_timeout sets the number of seconds that a transaction will wait for a lock to be released before giving up; what I want is a way of forcing a lock to be released. Update: Here's a simple example that demonstrates why innodb_lock_wait_timeout is not sufficient to ensure that the second transaction is not blocked by the first:

    START TRANSACTION;
    SELECT SLEEP(55);
    COMMIT;

    With the default setting of innodb_lock_wait_timeout = 50, this transaction completes without errors after 55 seconds. And if you add an UPDATE before the SLEEP line, then initiate a second transaction from another client that tries to SELECT ... FOR UPDATE the same row, it's the second transaction that times out, not the one that fell asleep. What I'm looking for is a way to force an end to this transaction's restful slumber.
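
    On the daemon question: one approach is a small watchdog that polls information_schema.innodb_trx (available from MySQL 5.1's InnoDB Plugin onward) for transactions older than some threshold and kills the owning connection, which rolls the transaction back and releases its locks. Below is a minimal sketch in Java/JDBC, assuming a MySQL account with the PROCESS and SUPER privileges; the threshold, credentials, and connection URL are placeholders:

    import java.sql.*;

    // Watchdog sketch: kill any InnoDB transaction running longer than MAX_SECONDS.
    // Assumes information_schema.innodb_trx exists (MySQL 5.1 with the InnoDB
    // Plugin, or 5.5+); the account needs PROCESS to see the rows and SUPER to KILL.
    public class TrxReaper {
        private static final int MAX_SECONDS = 60;  // placeholder threshold
        private static final String URL = "jdbc:mysql://localhost/information_schema";

        public static void main(String[] args) throws Exception {
            try (Connection conn = DriverManager.getConnection(URL, "reaper", "secret")) {
                String query =
                    "SELECT trx_mysql_thread_id " +
                    "FROM information_schema.innodb_trx " +
                    "WHERE trx_started < NOW() - INTERVAL " + MAX_SECONDS + " SECOND";
                while (true) {
                    try (Statement st = conn.createStatement();
                         ResultSet rs = st.executeQuery(query)) {
                        while (rs.next()) {
                            long threadId = rs.getLong(1);
                            // KILL rolls the victim's transaction back and frees its locks.
                            try (Statement kill = conn.createStatement()) {
                                kill.execute("KILL " + threadId);
                            }
                        }
                    }
                    Thread.sleep(5_000);  // poll every 5 seconds
                }
            }
        }
    }

    Note that KILL terminates the whole connection and rolls back its open transaction, so the sleeper in the example above would be ended rather than merely unblocked; a gentler variant could first inspect trx_query and log what it is about to kill.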

    Read the article

  • How to find the cause of the main file system going to read-only mode

    - by user606521
    Ubuntu 12.04. The file system goes to read-only mode frequently. First of all, I have already read the question "file system is going into read only mode frequently", but I need to know whether this is caused by something other than a dying hard drive. This is a server provided by my client; I am just running some Node.js workers plus one Node.js server there, and I am using MongoDB. From time to time (every 20-50 h) the system suddenly remounts the file system read-only, the mongodb process fails (due to the read-only fs), and my Node workers/server (which are started by forever) are just killed. Here is the log from dmesg; I can see some errors there, and messages that the FS is going read-only, and there is also some JOURNAL error, but I would like to find the cause of those errors: http://speedy.sh/Ux2VV/dmesg.log.txt Edit:

    smartctl -t long /dev/sda
    smartctl 5.41 2011-06-09 r3365 [x86_64-linux-3.5.0-23-generic] (local build)
    Copyright (C) 2002-11 by Bruce Allen, http://smartmontools.sourceforge.net

    SMART support is: Unavailable - device lacks SMART capability.
    A mandatory SMART command failed: exiting. To continue, add one or more '-T permissive' options.

    What am I doing wrong? The same happens for sda2. Moreover, now when I type any command that does not exist in the shell I get this:

    Sorry, command-not-found has crashed! Please file a bug report at:
    https://bugs.launchpad.net/command-not-found/+filebug
    Please include the following information with the report:

    Read the article

  • o2cb thinks ocfs2 cluster is still online, and refuses to shut down

    - by Kendall
    I have a handful of OpenSuSE 11.2 servers that use OCFS2 volumes. I've noticed that o2cb can't figure out when the OCFS2 cluster is actually mounted. For example, when I try to shut down o2cb after stopping OCFS2, o2cb refuses to shut down because it thinks OCFS2 is still up! After stopping OCFS2 I try to stop o2cb...

    hamguy:/dev/disk/by-label # /etc/init.d/o2cb stop
    Stopping O2CB cluster ocfs2: Failed
    Unable to stop cluster as heartbeat region still active

    So I check the status...

    hamguy:/dev/disk/by-label # /etc/init.d/o2cb status
    Driver for "configfs": Loaded
    Filesystem "configfs": Mounted
    Stack glue driver: Loaded
    Stack plugin "o2cb": Loaded
    Driver for "ocfs2_dlmfs": Loaded
    Filesystem "ocfs2_dlmfs": Mounted
    Checking O2CB cluster ocfs2: Online
    Heartbeat dead threshold = 31
    Network idle timeout: 30000
    Network keepalive delay: 2000
    Network reconnect delay: 2000
    Checking O2CB heartbeat: Active

    And double-check OCFS2...

    hamguy:/dev/disk/by-label # /etc/init.d/ocfs2 status
    Configured OCFS2 mountpoints: /u/conf /u/logs /u/backup /u/client /u/data /u/mdata

    OCFS2 is clearly down, while o2cb clearly thinks otherwise. The versions of OCFS2 and o2cb are...

    kendall@hamguy:~> rpm -qa | grep ocfs2
    ocfs2console-1.4.1-25.6.x86_64
    ocfs2-tools-o2cb-1.4.1-25.6.x86_64
    ocfs2-tools-1.4.1-25.6.x86_64

    kendall@hamguy:~> rpm -qa | grep o2cb
    ocfs2-tools-o2cb-1.4.1-25.6.x86_64

    What causes this, and is there a way around it? If I try to reboot the machine, it will just sit there forever until you physically power-cycle it. That is obviously a bit of a problem. Any insight is appreciated; thank you. Kendall

    Read the article
