Search Results

Search found 556 results on 23 pages for 'newton falls'.

Page 9/23 | < Previous Page | 5 6 7 8 9 10 11 12 13 14 15 16  | Next Page >

  • How to reset display settings in XFCE / Ubuntu 12.04 and also fglrx drivers

    - by Agent24
    I recently upgraded to Ubuntu 12.04, and since I hate Unity I installed the Xubuntu package and am using XFCE instead. Since I have a Radeon HD5770 I also installed the fglrx drivers. This all went fine (aside from the fact that the post-release update fglrx drivers have an error on installation, so Ubuntu thinks they're not installed when they actually are). I configured my display settings (dual monitors, a 17" CRT on VGA and a 17" LCD on DVI) in the amdcccle program and everything was perfect.

    THEN, 2 days ago, I accidentally clicked on the "Display" settings in the XFCE settings manager. After that, everything got screwed. I normally run the CRT at 1152x864 and the LCD at 1280x1024, with the CRT as my primary monitor (with panel) and the LCD without panels, just to display other windows when I want to drag them over there. The problem is that if I now set my CRT to 1152x864, it stays at 1280x1024 virtually and half the stuff falls off the screen. It also puts the LCD at 1280x1024, BUT then overlays the CRT's display on top with different wallpaper, in an L shape down the right-hand and bottom edges. In short, nothing makes sense and everything is FUBAR.

    I tried uninstalling fglrx through Synaptic, and renaming xorg.conf and also the XFCE XML file that holds the monitor settings, but it still won't make sense. Unity, on the other hand, can currently set everything normally, so the problem appears to be only with XFCE. In any case, I can't even get the fglrx drivers back: when I re-installed them, I couldn't run amdcccle anymore as it says the driver isn't installed!

    Can someone help me reset my XFCE settings so the monitors aren't screwed with some incorrect virtual desktop size, and also so I can get the fglrx drivers back and working? I really don't want to have to format and reinstall and go through all the hassle, but it looks like I may have to :(
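
    In case it helps later readers, a minimal sketch of one way to reset things (assuming the default Xubuntu config paths and the Ubuntu 12.04 fglrx packages; back up anything before deleting it):

        # reset the XFCE per-monitor settings (xfconf keeps them in displays.xml)
        rm ~/.config/xfce4/xfconf/xfconf-perchannel-xml/displays.xml
        # move aside any stale X config, then reinstall the driver packages
        sudo mv /etc/X11/xorg.conf /etc/X11/xorg.conf.bak
        sudo apt-get purge fglrx fglrx-amdcccle
        sudo apt-get install fglrx fglrx-amdcccle
        sudo aticonfig --initial -f    # regenerates xorg.conf for fglrx
        # log out and back in (or reboot), then redo the layout in amdcccle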

    Read the article

  • Using HTML5 Today part 2 – Fixing Semantic tags with a Shiv

    - by Steve Albers
    Semantic elements and the Shiv! This is the second entry in the series of demos from the “Using HTML5 Today” talk. For the definitive discussion on unknown elements and the HTML5 Shiv, check out Mark Pilgrim’s Dive Into HTML5 online book at http://diveintohtml5.info/semantics.html#unknown-elements

    Semantic tags increase the meaning and maintainability of your markup, help make your page more computer-readable, and can even provide opportunities for libraries that are written to automagically enhance content using standard tags like <nav>, <header>, or <footer>.

    Legacy IE issues

    However, new HTML5 tags get mangled in IE browsers prior to version 9. To see this in action, consider a bit of HTML code which includes the new <article> and <header> elements. Viewing this page using the IE9 developer tools (F12), we see that the browser correctly models the hierarchy of tags listed above. But if we switch to IE8 Browser Mode in developer tools, things go bad. Did you know that a closing tag could close itself?? The browser loses the hierarchy and closes all of the new tags. The new tags become unusable and the page structure falls apart. Additionally, block-level elements lose their block status, appearing as inline.

    The Fix (good)

    The block-level issue can be resolved by using CSS styling. Below we set the article, header, and footer tags as block tags:

        article, header, footer { display: block; }

    You can avoid the unknown element issue by creating a version of the element in JavaScript before the actual HTML5 tag appears on the page:

        <script>
          document.createElement("article");
          document.createElement("header");
          document.createElement("footer");
        </script>

    The Fix (better)

    Rather than adding your own JS you can take advantage of a standard JS library such as Remy Sharp’s HTML5 Shiv at http://code.google.com/p/html5shiv/. By default the Modernizr library includes HTML5 Shiv, so you don’t need to include the shiv code separately if you are using Modernizr.
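
    For reference, the usual way to pull the shiv in only for old IE is a conditional comment in the <head>; a sketch (check the html5shiv project page for the current include path):

        <!--[if lt IE 9]>
          <script src="http://html5shiv.googlecode.com/svn/trunk/html5.js"></script>
        <![endif]-->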

    Read the article

  • Issue in understanding how to compare performance of classifier using ROC

    - by user1214586
    I am trying to demystify pattern recognition techniques and have understood a few of them. I am trying to design a classifier M. A gesture is classified based on the Hamming distance between the sample time series y and the training time series x. The results of the classifier are probabilistic values. There are 3 classes/categories with labels A, B, C for classifying hand gestures, with 100 samples per class to be classified (single feature, data length = 100). The data are different time series (x coordinate vs time). The training set is used to assign probabilities indicating which gesture has occurred how many times. So, out of 10 training samples, if gesture A appeared 6 times then the probability that a gesture falls under category A is P(A)=0.6, and similarly P(B)=0.3 and P(C)=0.1.

    Now I am trying to compare the performance of this classifier with a Bayes classifier, k-NN, Principal Component Analysis (PCA) and a Neural Network. On what basis, parameters and method should I do this if I use ROC or cross-validation? Since the features for my classifier are the probabilistic values used for the ROC plot, what should the features be for k-NN, Bayes classification and PCA? Is there code for this that would be useful? What should the value of k be if there are 3 classes of gestures? Please help. I am in a fix.
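
    As a rough illustration of the kind of comparison being asked about, here is a minimal Python/scikit-learn sketch. X, y and the choice of classifiers are placeholders (X being a 300x100 feature matrix and y the A/B/C labels encoded as 0/1/2); it is not specific to the Hamming-distance classifier described above:

        import numpy as np
        from sklearn.model_selection import cross_val_predict
        from sklearn.neighbors import KNeighborsClassifier
        from sklearn.naive_bayes import GaussianNB
        from sklearn.preprocessing import label_binarize
        from sklearn.metrics import roc_curve, auc

        # X: (n_samples, n_features) time-series features, y: labels 0/1/2 for A/B/C
        classifiers = {"kNN (k=5)": KNeighborsClassifier(n_neighbors=5),
                       "Naive Bayes": GaussianNB()}
        y_bin = label_binarize(y, classes=[0, 1, 2])
        for name, clf in classifiers.items():
            # out-of-fold class probabilities via 5-fold cross-validation
            probs = cross_val_predict(clf, X, y, cv=5, method="predict_proba")
            for c in range(3):  # one ROC curve per class (one-vs-rest)
                fpr, tpr, _ = roc_curve(y_bin[:, c], probs[:, c])
                print(name, "class", c, "AUC =", auc(fpr, tpr))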

    Read the article

  • What options can I use in totem, xine or vlc to play on machines with limited resources?

    - by Nina
    Mplayer has some options that allow me to play videos on a machine with limited resources. I can use mplayer -vo x11 -zoom -framedrop to play videos when otherwise I'd run out of memory. At least I think I'm running out of memory; I am running out of some resource for sure.

    -vo sets the video output driver
    -zoom actually I don't know what this does, but it works for me ;-)
    -framedrop allows mplayer to not render frames if it falls behind

    Are there similar command line controls (or a defaults file) for totem, vlc or xine?
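
    As a partial illustration (it only covers the mplayer half of the question), the same flags can live in a defaults file, roughly like this in ~/.mplayer/config:

        # ~/.mplayer/config - same effect as -vo x11 -zoom -framedrop on every run
        vo=x11
        zoom=yes
        framedrop=yes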

    Read the article

  • Our winners - and some BBQ for everyone

    - by Steve Tunstall
    Congrats to our two winners for the first two comments on my last entry. Steve from Australia and John Lemon. Steve won since he was the first person over the International Date Line to see the post I made so late after a workday on Friday. So not only does he get to live in a country with the 2nd most beautiful women in the world, but now he gets some cool Oracle Swag, too. (Yes, I live on the beach in southern California, so you can guess where 1st place is for that other contest…Now if Steve happens to live in Manly, we may actually have a tie going…) OK, ok, for everyone else, you can be winners, too. How you ask? I will make you the envy of every guy and gal in your neighborhood or campsite. What follows is the way to smoke the best ribs you or anyone you know have ever tasted. Follow my instructions and give it a try. People at your party/cookout/campsite will tell you that they’re the best ribs they’ve ever had, and I will let you take all the credit. Yes, I fully realize this post is going to be longer than any post I’ve done yet. But let’s get serious here. Smoking meat is much more important, agreed? J In all honesty, this is a repeat of another blog I did, so I’m just copying and pasting. Step 1. Get some ribs. I actually really like Costco’s pack. They have both St. Louis and Baby Back. (They are the same ribs, but cut in half down the sides. St. Louis style is the ‘front’ of the ribs closest to the stomach, and ‘Baby back’ is the part of the ribs where is connects to the backbone). I like them both, so here you see I got one pack of each. About 4 racks to a pack. So these two packs for $25 each will feed about 16-20 of my guests. So around 3 bucks a person is a pretty good deal for the best ribs you’ll ever have. Step 2. Prep the ribs the night before you’re going to smoke. You need to trim them to fit your smoker racks, and also take off the membrane and add your rub. Then cover and set in fridge overnight. Here’s how to take off the membrane, which will not break down with heat and smoke like the rest of the meat, so must be removed. Use a butter knife to work in a ways between the membrane and the white bone. Just enough to make room for your finger. Try really hard not to poke through the membrane, you want to keep it whole. See how my gloved fingers can now start to lift up and pull off the membrane? This is what you are trying to do. It’s awesome when the whole thing can come off at once. This one is going great, maybe the best one I’ve ever done. Sometime, it falls apart and doesn't come off in one nice piece. I hate when that happens. Now, add your rub and pat it down once into the meat with your other hand. My rub is not secret. I got it from my mentor, a BBQ competitive chef who is currently ranked #1 in California and #3 in the nation on the BBQ circuit. He does full-day classes in southern California if anyone is interested in taking his class. Go to www.slapyodaddybbq.com to check him out. I tweaked his run recipe a tad and made my own. It’s one part Lawry’s, one part sugar, one part Montreal Steak Seasoning, one part garlic powder, one-half part red chili powder, one-half part paprika, and then 1/20th part cayenne. You can adjust that last ingredient, or leave it out. Real cheap stuff you can get at Costco. This lets you make enough rub to last about a year or two. Don’t make it all at once, make a shaker’s worth and use it up before you make more. Place it all in a bowl, mix well, and then add to a shaker like you see here. 
You can get a shaker with medium sized holes on it at any restaurant supply store or Smart & Final. The kind you see at pizza places for their red pepper flakes works best. Now cover and place in fridge overnight. Step 3. The next day. Ok, I’m ready to go. Get your stuff together. You will need your smoker, some good foil, a can of peach nectar, a bottle of Agave syrup, and a package of brown sugar. You will need this stuff later. I also use a clean spray bottle, and apple juice. Step 4. Make your fire, or turn on your electric smoker. In this example I’m using my portable charcoal smoker. I got this for only $40. I then modified it to be useful. Once modified, these guys actually work very well. Trust me, your food DOES NOT KNOW how expensive your smoker is. Someone who tells you that you need to spend a bunch of money on a smoker is an idiot. I also have an electric smoker that stays in my backyard. It’s cleaner and larger so I can smoke more food. But this little $40 one works great for going camping. Here is what my fire-bowl looks like. I leave a space in the middle open, and place cold charcoal and wood chucks in a circle going outwards. This makes it so when I dump the hot coals down the middle, they will slowly burn outwards, hitting different wood chucks at different times, allowing me to go 4-5 hours without having to even touch my fire. For ribs, I use apple and pecan wood. Pecan works for anything. Apple or any fruit wood is excellent for pork. So now I make my hot charcoal with a chimney only about half-full. I found a great use for that side-burner on my grill that I never use. It makes a fantastic chimney starter. You never use fluids of any kind, nor ever use that stupid charcoal that has lighter fluid built into it. Never, ever, ever. Step 5. Smoke. Add your ribs in the racks and stack them up in your smoker. I have a digital thermometer on a probe that I use to keep track of the temp in the smoker. I just lay the probe on the top rack and shut the lid. This cheap guy is a little harder to maintain the right temperature of around 225 F, so I do have to keep my eye on it more than my electric one or a more expensive charcoal one with the cool gadgets that regulate your temp for you. Every hour, spray apple juice all over your ribs using that spray bottle. After about 3 hours, you should have a very good crust (called the Bark) on your ribs. Once you have the Bark where you want it, carefully remove your ribs and place them in a tray. We are now ready for a very important part to make the flavor. Get a large piece of foil and place one rib section on it. Splash some of the peach nectar on it, and then a drizzle of the Agave syrup. Then, use your gloved hand to pack on some brown sugar. Do this on BOTH sides, and then completely wrap it up TIGHT in the foil. Do this for each rib section, and then place all the wrapped sections back into the smoker for another 4 to 6 hours. This is where the meat will get tender and flavorful. The first three hours is only to make the smoke bark. You don’t need smoke anymore, since the ribs are wrapped, you only need to keep the heat around 225 for the next 4-6 hours. Obviously you don’t spray anymore. Just time and slow heat. Be patient. It’s actually really hard to overdo it. You can let them go longer, and all that will happen is they will get even MORE tender!!! If you take them out too soon, they will be tough. How do you know? Take out one package (use long tongs) and open it up. 
If you grab a bone with your tongs and it just falls apart and breaks away from the rest of the meat, you are done!!! Enjoy!!! Step 6. Eat. It pulls apart like this when it’s done. By the way, smoking tri-tip is way easier. Just rub it with the same rub, and put in your smoker for about 2.5 hours at 250 F. That’s it. Low-maintenance. It comes out like this, with a fantastic smoke ring and amazing flavor. Thanks, and I will put up another good tip, about the ZFSSA, around the end of November. Steve 

    Read the article

  • How to go from mainstream to indie development?

    - by Salano Software
    I'm currently working as a game programmer for a AAA-level developer and publisher - which falls into the 'nice problem to have' category, I know, except that I'm growing more and more disenchanted with the direction of both the company and the AAA portion of the industry as a whole. I don't see any games on the studio's calendar for the next several years that I'm actually interested in working on; it looks like a continuing parade of sequels, license extensions and largely-derivative work. Which isn't to say that there won't be interesting things to do on those projects; but more and more I find myself wanting to do something fundamentally different. It seems like the market's never been better for smaller-scale projects, and I'd love to jump into that (and I've done small demos for Android and have started digging into iOS), but I obviously can't put anything out while I'm working for the company, and I'm concerned that I shouldn't even do substantial development in my spare time on anything I'd eventually like to release on my own. At the same time, I'm leery of leaving the job I've got for hopefully-obvious reasons, especially without a specific plan in place. Has anyone out there got experience with 'going indie' out of a mainstream job, and does anyone have specific suggestions as to what the best approach is and what I should specifically be thinking about or be careful of?

    Read the article

  • Effective versus efficient code

    - by Todd Williamson
    TL;DR: Quick and dirty code, or "correct" (insert your definition of this term) code? There is often a tension between "efficient" and "effective" in software development. "Efficient" often means code that is "correct" from the point of view of adhering to standards, using widely-accepted patterns/approaches for structures, regardless of project size, budget, etc. "Effective" is not about being "right", but about getting things done. This often results in code that falls outside the bounds of commonly accepted "correct" standards, usage, etc. Usually the people paying for the development effort have dictated ahead of time what it is that they value more. An organization that lives in a technical space will tend towards the efficient end, others will tend towards the effective. Developers often refuse to compromise their favored approach for the other. In my own experience I have found that people with formal education in software development tend towards the Efficient camp. Those that picked up software development more or less as a tool to get things done tend towards the Effective camp. These camps don't get along very well. When managing a team of developers who are not all in one camp it is challenging. In your own experience, which camp do you land in, and do you find yourself having to justify your approach to others? To management? To other developers?

    Read the article

  • Trouble with touch events on iPhone

    - by MrDatabase
    I'm making a simple 2D game for iPhone. Think of the game as a ball on the screen that goes up while the user is touching the screen and falls down when the user stops touching the screen. The ball starts moving up in touchesBegan:withEvent and starts moving down in touchesEnded:withEvent. This works fine almost all the time. However on occasion the ball will keep moving up after the user stops touching... or the ball will keep moving down while the user is touching. Why is this happening? Just fyi the ball is drawn on a UIWindow. The taps are handled by a UIImageview subclass that's clearColor and takes up the entire screen. This "touchLayer" is also moved to the front of the window in the game loop. Any idea why this control scheme occasionally fails? Perhaps the touch events just aren't firing? Or they're fired out of order? Cheers!
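
    One thing worth checking (a guess, not a confirmed diagnosis): UIKit can cancel a touch (incoming call, system gesture) without ever calling touchesEnded:withEvent:, which would leave the ball stuck moving up. A sketch of the extra handler inside the existing touch-layer subclass, treating cancellation like a lift-off:

        - (void)touchesCancelled:(NSSet *)touches withEvent:(UIEvent *)event {
            // treat a cancelled touch the same as the user lifting the finger,
            // otherwise the ball keeps rising after the system swallows the touch
            [self touchesEnded:touches withEvent:event];
        }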

    Read the article

  • How to transfer Google Earth route to Google Maps?

    - by macias
    I have a GPX route and I imported it into Google Earth. Everything is fine, so I saved it as a KMZ file. Then, just as a check, I imported the KMZ back into Google Earth. No problem. The thing is, I would like to work with Google Maps, not Google Earth, and I am not able to transfer this route into Google Maps. Each time I select "show in Google Maps", the view is switched from Earth to Maps, but my route is missing. If I use a standalone web browser and try to import either of the files directly into Google Maps, it either falls into some infinite loop (I waited about an hour and still saw the progress bar) or Google Maps shows an error. Thus the question: how do I transfer a route from Google Earth to Google Maps? The size of the GPX file is 3MB; the size of the KMZ is 1MB.

    Read the article

  • Fixed timestep with interpolation in AS3

    - by Jim Sreven
    I'm trying to implement Glenn Fiedler's popular fixed timestep system as documented here: http://gafferongames.com/game-physics/fix-your-timestep/ In Flash. I'm fairly sure that I've got it set up correctly, along with state interpolation. The result is that if my character is supposed to move at 6 pixels per frame, 35 frames per second = 210 pixels a second, it does exactly that, even if the framerate climbs or falls. The problem is it looks awful. The movement is very stuttery and just doesn't look good. I find that the amount of time in between ENTER_FRAME events, which I'm adding on to my accumulator, averages out to 28.5ms (1000/35) just as it should, but individual frame times vary wildly, sometimes an ENTER_FRAME event will come 16ms after the last, sometimes 42ms. This means that at each graphical redraw the character graphic moves by a different amount, because a different amount of time has passed since the last draw. In theory it should look smooth, but it doesn't at all. In contrast, if I just use the ultra simple system of moving the character 6px every frame, it looks completely smooth, even with these large variances in frame times. How can this be possible? I'm using getTimer() to measure these time differences, are they even reliable?
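
    For comparison, the render-interpolation step from the Fiedler article usually ends up looking something like this in AS3 (a sketch with hypothetical names; previousX/currentX are the last two fixed-step physics states and accumulator is the leftover time after draining fixed steps):

        // after the update loop has consumed the accumulator in FIXED_STEP chunks:
        var alpha:Number = accumulator / FIXED_STEP;         // 0..1, how far into the next step we are
        sprite.x = previousX + (currentX - previousX) * alpha;
        sprite.y = previousY + (currentY - previousY) * alpha;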

    Read the article

  • Ubuntu 12.04 installation on GPT + RAID going into grub rescue

    - by Proy
    I have two 2TB disks. I am installing Ubuntu 12.04 using the alternate version of the server CD. On the partitioning page I have done my partitioning as follows:

    /dev/sda1 - 32 MB - bios_grub
    /dev/sda2 - 50 GB - raid device
    /dev/sda3 - 8 GB - raid device
    /dev/sda4 - remaining space - raid device
    /dev/sdb1 - 32 MB - bios_grub
    /dev/sdb2 - 50 GB - raid device
    /dev/sdb3 - 8 GB - raid device
    /dev/sdb4 - remaining space - raid device

    After this I have set up the RAID devices:

    /dev/md0 for / (/dev/sda2 + /dev/sdb2), ext4
    /dev/md1 for swap (/dev/sda3 + /dev/sdb3)
    /dev/md2 for /home (/dev/sda4 + /dev/sdb4), ext4

    The installation finishes and it shows that it is installing grub to /dev/sda and /dev/sdb. But once the system reboots it falls into grub rescue mode. On doing ls I can not see the md devices, only hd ones. I also tried booting into rescue mode with the install CD and doing grub-install /dev/sda and /dev/sdb. What am I doing wrong? Why is grub2 not detecting the RAID devices?

    UPDATE: I just did the same steps with Ubuntu 10.04 and it worked perfectly fine. I wiped out the RAID and partitions and everything and did it from scratch. I think the issue is with Ubuntu 12.04 and the way it partitions 2 TB disks.
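
    For anyone landing here, a minimal sketch of reinstalling GRUB from the install CD's rescue shell (assuming /dev/md0 really is the root array; adjust device names to your layout):

        sudo mdadm --assemble --scan            # bring up the arrays
        sudo mount /dev/md0 /mnt
        for d in /dev /proc /sys; do sudo mount --bind $d /mnt$d; done
        sudo chroot /mnt
        grub-install /dev/sda
        grub-install /dev/sdb
        update-grub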

    Read the article

  • Stakeholder Management in OUM

    - by user719921
    Where is Stakeholder Management in OUM? Stakeholder Management typically falls into the purview of the Project Manager, which means much of the associated guidance is found in the OUM Manage Focus Area (a.k.a. Manage). There is no process in Manage named Stakeholder Management, but this “touch point” can be found in a variety of other processes, including Bid Transition (BT), Communication Management (CMM) and Organizational Change Management (OCHM).

    • Stakeholder management starts in the Bid Transition process with Stakeholder Analysis.
    • This Stakeholder Analysis is used to build the Project Team Communication Plan in the Communication Management process.
    • Stakeholder management should be executed during the Execution and Control phase. For example, as issues are resolved, the project manager should take the action item to follow up with the affected stakeholders to ensure they are aware that the issue has been resolved.
    • The broader topic of Stakeholder management is also addressed very thoroughly in the Organizational Change Management process in the Implement Focus Area, which is a touch point to the Organizational Change Management process in Manage.

    Check it out and let me know your thoughts!

    Read the article

  • New Dell Vostro 3550 will not boot into Live CD

    - by rich97
    I've been trying to install Ubuntu and/or its derivatives on my new Dell Vostro 3550 but I'm finding it impossible to boot into the live CD environment. Here are the things I've already tried:

    Booting from multiple versions of multiple distributions (Ubuntu 10.10, Ubuntu 11.04, Mint 11, Mint 10).
    Booting from a live USB created with unetbootin, the Linux Mint startup disk creator, the universal USB creator, and dd if=linuxmint.***.iso of=/dev/sdx.
    Burning a CD and booting from the internal CD drive.

    In the case of the USB keys with the latest versions of Ubuntu, it gets to the page where I can select a boot option, but after selecting an option the screen goes black; even the backlight turns off. With the older distributions the screen stays on, but it starts loading and then just hangs. The CDs don't even try to boot: the drive starts spinning but then it falls back to the default Windows install.

    The only way I've got it to work so far is with Wubi, but that's hardly ideal. I'd like to have two separate physical partitions with a /home and /. Any help is appreciated. Thank you.

    Read the article

  • XNA for life!

    - by George Clingerman
    Or until my arm falls off at least…. I’ve been meaning to get a new tattoo for quite a while. And I wanted it to be geeky and to represent some memorable milestone in my life. So I was thinking, mulling over some various ideas, sketching some things out. And then it hit me. XNA. Specifically the XNA logo. It’s super geeky (so geeky I’ll probably have to explain to 90% of the people that say it what it even means) and I’m not sure that anything has been so memorable and made such a change in my life as XNA. Cheesy but true. When the XNA framework came out, thing started happening quickly. I stumbled into a geek community and started going to code camps, MSDN events, code-a-thons. I got an XNA MVP award. Met huge geek idols in my life (like Rory Blyth and Scott Hanselman) and just really started getting integrated into this fantastic development community. Then to add to all of that I became part of this fantastic XNA community. It’s really just been one of the best things to ever happen to me and I’m having the time of my life right now. So sure, it’s permanent. Sure Microsoft could cancel XNA, rebrand it to Flimmer Flammer or some other name. Sure my arms are going to be wrinkly and flabby when I’m older and for sure I’m going to have to explain to people over and over again just what in the world “xna” means. But you know what, every time I look at that tattoo and every time I’m telling somebody what xna means, I’m going to remember all these awesome things that have happened to me. All the tremendous things I’m seeing people do with the XNA framework and more importantly all the stories and friendships I’ve formed in the XNA community. And I think that deserves a little permanent recognition.

    Read the article

  • Requesting quality analysis test cases up front of implementation/change

    - by arin
    Recently I have been assigned to work on a major requirement that falls between a change request and an improvement. The previous implementation was done (badly) by a senior developer that left the company and did so without leaving a trace of documentation. Here were my initial steps to approach this problem: Considering that the release date was fast approaching and there was no time for slip-ups, I initially asked if the requirement was a "must have". Since the requirement helped the product significantly in terms of usability, the answer was "If possible, yes". Knowing the wide-spread use and affects of this requirement, had it come to a point where the requirement could not be finished prior to release, I asked if it would be a viable option to thrash the current state and revert back to the state prior to the ex-senior implementation. The answer was "Most likely: no". Understanding that the requirement was coming from the higher management, and due to the complexity of it, I asked all usability test cases to be written prior to the implementation (by QA) and given to me, to aid me in the comprehension of this task. This was a big no-no for the folks at the management as they failed to understand this approach. Knowing that I had to insist on my request and the responsibility of this requirement, I insisted and have fallen out of favor with some of the folks, leaving me in a state of "baffledness". Basically, I was trying a test-driven approach to a high-risk, high-complexity and must-have requirement and trying to be safe rather than sorry. Is this approach wrong or have I approached it incorrectly? P.S.: The change request/improvement was cancelled and the implementation was reverted back to the prior state due to the complexity of the problem and lack of time. This only happened after a 2 hour long meeting with other seniors in order to convince the aforementioned folks.

    Read the article

  • How to configure sudoers with path wildcards?

    - by C. Lee
    I need sudo for a command for any path under a particular area. Example: sudo mycommand /opt/apps/myapp/... What is the sudoers syntax to allow this command to run in any path that falls under /opt/apps/myapp? This is Solaris 10 sudo. Thank you for your reply, but I don't need wildcards for the path to the commands, but wildcards for the arguments for the commands. For example, we want to do something like... sudo mycmd /opt/userarea/area1 sudo mycmd /opt/userarea/area1/area2 sudo mycmd /opt/userarea/area1/area2/area3 So far, using wildcards for the arguments in sudoers look like this: /opt/userarea/* /opt/userarea/*/* And it seems like if we want to have N levels of directories, then we need N lines in sudoers! Is there a better way to include all N levels in one line in sudoers? Thanks.
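
    For what it's worth, a sketch of what a single-line rule could look like (the command path and group here are hypothetical, and the wildcard behaviour is worth testing against the Solaris 10 sudo build with sudo -l before relying on it):

        # /etc/sudoers fragment (edit with visudo); in many sudo versions a '*' in the
        # argument portion also matches '/', so one line can cover any depth under the tree
        Cmnd_Alias MYCMD = /usr/local/bin/mycmd /opt/userarea/*
        %admins ALL = MYCMD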

    Read the article

  • Apache 410 Gone instructions not working with mod_alias nor mod_rewrite

    - by Peter Boughton
    Apache 2.2 seems to be ignoring instructions to return a 410 status. This happens for both mod_alias's Redirect (using 410 or gone) and mod_rewrite's RewriteRule (using [G]), being used inside a .htaccess file. This works: Redirect 302 /somewhere /gone But this doesn't: Redirect 410 /somewhere That line is ignored (as if it had been commented) and the request falls through to other rules (which direct it to an unrelated generic error handling script). Similarly, trying to use a RewriteRule with a [G] flag doesn't work, but the same rule rewriting to a script that generates a 410 does - so the rules aren't the problem and it seems instead to be something about 410/gone that isn't behaving. I can workaround it by having a script sending the 410, but that's annoying and I don't get why it's not working. Any ideas?
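
    For reference, the two forms being described are roughly these (a sketch; in .htaccess the 410 rule usually needs to come before any catch-all rewrite rules so the request never reaches the generic error-handling script):

        # mod_alias form
        Redirect gone /somewhere

        # mod_rewrite form (the leading slash is stripped in .htaccess context)
        RewriteEngine On
        RewriteRule ^somewhere$ - [G,L]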

    Read the article

  • Would it be possible to create an open source software library, entirely developed and moderated by an open community?

    - by Steven Jeuris
    Call it democratic software development, or open source on steroids if you will. I'm not just talking about the possibility of providing a patch which can be approved by the library owner. Think more along the lines of how Stack Exchange works. Anyone can post code, and through community moderation it is cleaned up and eventually valid code ends up in the final library. For complex libraries an elaborate system should probably be created, but for a simple library it is my belief this is already possible even within the Stack Exchange platform. Take a library of extension methods for .NET for example. Everybody goes their own way and implements their own subset of what they feel is important, open-source library or not. People want to share their code, but there is no suitable platform for it. extensionmethod.net is the result of answering this call for extension methods, but the framework hopelessly falls short; there is no order, or structure at all. You don't know whether an idea is any good until you try it, so I decided to create an Extension Methods proposal on Area51. I belief with proper moderation, it could be possible for the site to be more than a Q&A site, and that an actual library (or subsets of it) could be extracted from it. Has anything like this been attempted before? Are there platforms better suited for this?

    Read the article

  • LWJGL Java 2D collision when lagging

    - by user1990950
    I'm using a tile-based collision, but when the game is lagging (the lag itself isn't the problem) the collision fails and the player falls through tiles. This is the movement/collision detection code of my Player class:

        gravity.y = gspeed;
        speed.y += gravity.y;
        position.set(position.x + direction.x * speed.x * deltaSeconds,
                     position.y + direction.y * speed.y * deltaSeconds);
        for (int i = (int) Math.round(position.x / 32) - 2 * t; i < (int) Math.round(position.x / 32) + 3 * t; i++) {
            for (int j = (int) Math.round(position.y / 32); j < (int) Math.round((position.y + height + 64) / 32); j++) {
                checkCollision(i, j, deltaSeconds);
            }
        }

        public void checkCollision(int i, int j, float deltaSeconds) {
            bbox.setBounds((int) position.x, (int) position.y, (int) width, (int) height);
            Tile t = null;
            t = Map.getTile(i, j);
            if (t != null) {
                if (t.isSolid()) {
                    if (t.getTop().intersects(bbox)) {
                        if (position.y + height < t.y * 32 + 32) {
                            if (speed.y >= 0) {
                                position.y = t.y * 32 - height;
                                speed.y = 0;
                                gravity.y = 0;
                                jumpState = 0;
                            }
                        }
                    }
                    if (t.getBottom().intersects(bbox)) {
                        if (position.y < t.y * 32 + 32) {
                            position.y = t.y * 32 + 32;
                            speed.y = 0;
                        }
                    } else {
                        if (t.getLeft().intersects(bbox)) {
                            if (position.x + width > t.x * 32) {
                                position.x = t.x * 32 - width;
                                speed.x = 0;
                            }
                        }
                        if (t.getRight().intersects(bbox)) {
                            if (position.x < t.x * 32 + 32) {
                                position.x = t.x * 32 + 32;
                                speed.x = 0;
                            }
                        }
                    }
                }
            }
        }

    Is it possible to fix my code, and if so, how? Or is it possible to tell if the game is lagging?
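
    A common cause of this kind of tunnelling is a single large deltaSeconds after a laggy frame moving the player more than one tile before the next collision pass. One possible mitigation (a sketch with hypothetical names, not a drop-in fix) is to cap the step and update in slices:

        // split one laggy frame into several bounded physics steps
        final float MAX_STEP = 1f / 60f;            // hypothetical cap on a single step
        float remaining = deltaSeconds;
        while (remaining > 0f) {
            float dt = Math.min(remaining, MAX_STEP);
            moveAndCollide(dt);                     // the existing movement + collision pass above
            remaining -= dt;
        }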

    Read the article

  • Tool for purging unneeded backups

    - by Dana the Sane
    I'm in the common situation where one of the Linux servers I use for storing backups is filling up. I'm wondering what tools are available for purging old backups. Ideally, what I would like is something that keeps nightlies for the previous month, weeklies for the 2nd to 5th preceding months, and retains monthlies (well, every 3rd week) for an indefinite period. Everything that falls outside of that would be deleted after the backups are run. I could write a script to do this, but I feel like there must be a standard tool for this task.
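
    If no dedicated tool turns up, a very rough cron-able sketch of the idea (the path and the date-stamped naming scheme backup-YYYY-MM-DD.tar.gz are assumptions; it keeps 1st-of-month files forever and drops everything else older than 31 days, so it skips the weekly tier):

        # delete anything older than 31 days unless it is a 1st-of-the-month backup
        find /srv/backups -name 'backup-*.tar.gz' -mtime +31 ! -name 'backup-*-01.tar.gz' -delete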

    Read the article

  • Are you ready to take a walk in the clouds?

    - by Steve Loethen
    Cloud computing is here, whether we want it or not. When I say "a walk in the clouds" I am not talking about a pleasant romantic comedy, but a real alternative to hosting applications on-premise. For years we have had the power to host our web sites on remote systems. Sure, challenges existed, and it was mostly just web sites. I could, with a few clicks, create an account at any of a myriad of web host sites, put my site in the hands of a remote hosting company, and boom, I was a site on the internet. But choices, power, and management were limited.

    Now we have a set of services that give us the power and control we love, but with the scalability of the data center. My personal web site is hosted on a laptop running Hyper-V in my basement. I have to manage the machine, patch it, make sure it is powered up. This is fine for the "hello, this is my dog Skippy" site that I maintain. If the football pool I run has an issue, one of the 10 users I have calls or emails me and I go check it out. All is well. But this falls well below the needs of even the simplest of enterprises. A business needs a stronger datacenter and a better pipe to the world. Do I really want to base my business on dynamic DNS and a DSL line from the local phone company? Cloud computing gives us most of what I value (control, a DB of my own, updating my site from Visual Studio).

    Come learn how this technology can transform your business. If you are a Microsoft shop, or are interested in Microsoft in the cloud, a 2-day free Azure training class is being conducted in Kansas City on April 8 and 9: http://www.azurebootcamp.com/city/kansascity Hope to see you there. If you come, make sure you look me up.

    Read the article

  • Applying to a company while personally working on a comparable project

    - by Developer Art
    That's going to be an unusual question, but here goes. I'm entertaining the thought of sending my docs to a place which develops a large web project of a social type. Social meaning people, communities, interaction and all that usual stuff.

    The issue is that I myself am working on something that falls into the social category in my private time. Now the question: is it wise to apply there under these circumstances? I think there may be issues of intellectual ownership if I develop something similar to what exists or will exist in that company's work. On the other hand, the web is full of social places (even this site is one of them), many of them utilize the same ideas and move in the same direction, and it seems to work for everyone. It's hard to come up with something which hasn't been tried yet by somebody else, so it's all basically reuse of commonly available ideas and experience.

    What I'm working on is not a functional equivalent; it is actually fairly far off. There may be some intersections, but on a large scale it is not an equivalent. And whatever features might coincide, they already exist everywhere on the web anyway. Also, the technology stacks are entirely different, so the issue of directly copying out parts of the code probably doesn't apply. I plan to say up front that I'm engaged in a personal project of mine and let them see if it represents a problem for them. What do you think? Am I making things up or is there really an issue?

    Read the article

  • Investigating Strategies For Functional Decomposition

    - by Liam McLennan
    Introducing Functional Decomposition

    Before I begin I must apologise. I think I am using the term ‘functional decomposition’ loosely, and probably incorrectly. For the purpose of this article I use functional decomposition to mean the recursive splitting of a large problem into increasingly smaller ones, so that the one large problem may be solved by solving a set of smaller problems. The justification for functional decomposition is that the decomposed problem is more easily solved. As software developers we recognise that the smaller pieces are more easily tested, since they do less and are more cohesive.

    Functional decomposition is important to all scientific pursuits. Once we understand natural selection we can start to look for humanity's ancestral species; once we understand the big bang we can trace our expanding universe back to its origin. Isaac Newton acknowledged the compositional nature of his scientific achievements: "If I have seen further than others, it is by standing upon the shoulders of giants."

    The Two Strategies For Functional Decomposition of Computer Programs

    Private Methods

    When I was working on my undergraduate degree I was taught to functionally decompose problems by using private methods. Consider the problem of painting a house. The obvious solution is to solve the problem as a single unit:

        public void PaintAHouse()
        {
            // all the things required to paint a house ...
        }

    We decompose the problem by breaking it into parts:

        public void PaintAHouse()
        {
            PaintUndercoat();
            PaintTopcoat();
        }

        private void PaintUndercoat()
        {
            // everything required to paint the undercoat
        }

        private void PaintTopcoat()
        {
            // everything required to paint the topcoat
        }

    The problem can be recursively decomposed until a sufficiently granular level of detail is reached:

        public void PaintAHouse()
        {
            PaintUndercoat();
            PaintTopcoat();
        }

        private void PaintUndercoat()
        {
            prepareSurface();
            fetchUndercoat();
            paintUndercoat();
        }

        private void PaintTopcoat()
        {
            fetchPaint();
            paintTopcoat();
        }

    According to Wikipedia, at least one computer programmer has referred to this process as “the art of subroutining”. The practical issues that I have encountered when using private methods for decomposition are: to preserve the top-level API, all of the steps must be private, which means that they can’t easily be tested; and the private methods often have little cohesion except that they form part of the same solution.

    Decomposing to Classes

    The alternative is to decompose large problems into multiple classes, effectively using a class instead of each private method. The API delegates to related classes, so the API is not polluted by the sub-steps of the problem, and the steps can be easily tested because they are each in their own highly cohesive class. Additionally, I think that this technique facilitates better adherence to the Single Responsibility Principle, since each class can be decomposed until it has precisely one responsibility. Revisiting my previous example using class composition:

        public class HousePainter
        {
            private UndercoatPainter undercoatPainter = new UndercoatPainter();
            private TopcoatPainter topcoatPainter = new TopcoatPainter();

            public void PaintAHouse()
            {
                undercoatPainter.Paint();
                topcoatPainter.Paint();
            }
        }

    Summary

    When decomposing a problem there is more than one way to represent the sub-problems. Using private methods keeps the logic in one place and prevents a proliferation of classes (thereby following the four rules of simple design), but the class decomposition is more easily testable and more compatible with the Single Responsibility Principle.

    Read the article

  • Bejeweled-like game, managing different gem/powerup behaviors?

    - by Wissam
    I thought I'd ask a question and look forward to some insight from this very compelling community. In a Bejeweled-like (Match 3) game, the standard behavior once a valid swap of two adjacent tiles is made is that the resulting matching tiles are destroyed, any tiles now sitting over empty spaces fall to the position above the next present-tile, and any void created above is filled with new tiles. In richer Match-3 games like Bejeweled, 4 in a row (as opposed to just 3) modifies this behavior such that the tile that was swapped is retained, turned into a "flaming" gem, it falls, and then the empty space above is filled. The next time that "flaming gem" is played it explodes and destroys the 8 perimeter tiles, triggers a different animation sequence (neighbors of those 8 tiles being destroyed look like they've been hit by a shockwave then they fall to their respective positions). Scoring is different, the triggered sounds are different, etc. There are even more elaborate behaviors for Match5, Match-cross-pattern, and many powerups that can be purchased, each which produces a more elaborate sequence of events, sounds, animations, scoring, etc... What is the best approach to developing all these different behaviors that respond to players' "move" and her current "performance" and that deviate from the standard sequence of events, scoring, animation, sounds etc, in such a way that we can always flexibly introduce a new "powerup" ? What we are doing now is hard-coding the events of each one, but the task is long and arduous and seems like the wrong approach especially since the game-designers and testers often offer (later) valuable insight on what works better in-game, which means that the code itself may have to be re-written even for minor changes in behavior (say, destroy only 7 neighboring tiles, instead of all 8 in an explosion). ANY pointers for good practices here would be highly appreciated.
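
    One common answer to this question (a sketch, not the only approach, with hypothetical names throughout): move each power-up's behaviour behind a small interface so the match-resolution loop stays generic, and designers can tweak or add effects without touching it.

        // minimal board surface the effects need; the real game would have much more
        interface Board {
            void destroyNeighbours(int row, int col);
            void queueAnimation(String name, int row, int col);
            void addScore(int points);
        }

        // each special gem carries a pluggable effect instead of hard-coded logic
        interface GemEffect {
            void onTriggered(Board board, int row, int col);
        }

        class ExplosionEffect implements GemEffect {
            @Override
            public void onTriggered(Board board, int row, int col) {
                board.destroyNeighbours(row, col);           // the 8 perimeter tiles
                board.queueAnimation("shockwave", row, col); // per-effect animation/sound
                board.addScore(250);                         // per-effect scoring
            }
        }

    The resolution loop then only needs one rule: when a destroyed gem has an effect attached, trigger it. Changing "destroy 8 neighbours" to "destroy 7" becomes an edit to one effect class (or one data file) rather than to the match code.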

    Read the article

  • Organizing files relationally in Windows 7?

    - by Cayetano Gonçalves
    I just took a new job as a policy analyst, and after even one week, keeping track of hundreds of files - lawsuits, legislation, letters, etc. - in Windows 7 is proving difficult. In my last job I was a database architect and I helped build Linux-based servers to track files across an entire department; however, there is no way for me to do that at this time in this job. Is there any way to track files/indices/locations/tags/themes and store them in some kind of RDBMS system, instead of storing the files in folders that only allow for flat and fixed storage?

    For example, if I have a file that deals with ELID, organization, Appeals court, and John Smith, it really is inconvenient to have to decide which one of these tags to turn into a folder and place the file into it, when it falls under all of the categories. Even if I could place tags on files the way you can in Stack Exchange, it would solve a lot of heartache.

    Read the article
