Search Results

Search found 16353 results on 655 pages for 'long ngo'.


  • Faster 2D Collision detection

    - by eShredder
    Recently I've been working on a fast-paced 2D shooter and I came across a mighty problem: collision detection. Sure, it is working, but it is very slow. My goal is to have lots of enemies on screen and have them not touch each other. All of the enemies chase the player entity, and most of them move at the same speed, so sooner or later they all end up occupying the same space while chasing the player. This really drops the fun factor since, for the player, it looks like you are being chased by only one enemy. To keep them from occupying the same space I added collision detection (a very basic 2D check, the only method I know of), which works like this, in the Enemy class update method:
    Loop through all enemies (continue; if the loop points at this object)
    If the enemy object intersects with this object, push the enemy object away from this object
    This works fine, as long as I only have fewer than 200 enemy entities. When I get closer to 300-350 enemy entities my frame rate begins to drop heavily. First I thought it was bad rendering, so I removed their draw call. This did not help at all, so of course I realised it was the update method. The only heavy part of their update method is this each-enemy-loops-through-every-enemy part. With around 300 enemies the game performs a 90,000-step (300x300) iteration. My my~ I'm sure there must be another way to approach this collision detection, though I have no idea how. The pages I find are about how to actually do the collision between two objects, or how to check collision between an object and a tile. I already know those two things. tl;dr? How do I approach collision detection between LOTS of entities? Quick edit: If it is of any help, I'm using C# XNA.
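
    The usual way out of the quadratic loop is spatial partitioning, so that each enemy is only tested against the handful of enemies near it. Below is a minimal sketch of a uniform-grid spatial hash in C#; the Enemy fields, the tuple-keyed dictionary, and the cell size are assumptions made for illustration, not the asker's actual XNA classes.

      using System;
      using System.Collections.Generic;

      // Hypothetical enemy type used only for this sketch.
      class Enemy
      {
          public float X, Y;     // world position
          public float Radius;   // collision radius, used by the existing intersect test
      }

      class SpatialHash
      {
          private readonly float cellSize;
          private readonly Dictionary<(int, int), List<Enemy>> cells =
              new Dictionary<(int, int), List<Enemy>>();

          public SpatialHash(float cellSize) { this.cellSize = cellSize; }

          private (int, int) CellOf(float x, float y) =>
              ((int)Math.Floor(x / cellSize), (int)Math.Floor(y / cellSize));

          // Rebuild the grid each frame before resolving collisions.
          public void Insert(Enemy e)
          {
              var key = CellOf(e.X, e.Y);
              if (!cells.TryGetValue(key, out var list))
                  cells[key] = list = new List<Enemy>();
              list.Add(e);
          }

          // Only enemies in the same or adjacent cells can possibly touch,
          // so each enemy is tested against a few neighbours instead of all of them.
          public IEnumerable<Enemy> Nearby(Enemy e)
          {
              var (cx, cy) = CellOf(e.X, e.Y);
              for (int dx = -1; dx <= 1; dx++)
                  for (int dy = -1; dy <= 1; dy++)
                      if (cells.TryGetValue((cx + dx, cy + dy), out var list))
                          foreach (var other in list)
                              if (!ReferenceEquals(other, e))
                                  yield return other;
          }

          public void Clear() => cells.Clear();
      }

    Each frame: Clear(), Insert() every enemy, then run the existing intersect-and-push logic only against Nearby(enemy). With a cell size of roughly twice the enemy diameter, 300 enemies spread across a level typically produce a few hundred pair tests instead of 90,000.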

    Read the article

  • critical swap nagios

    - by Toby Joiner
    I installed Nagios a very long time ago, and have only now started trying to use it. I am getting this error:
    Current Status: CRITICAL (for 231d 16h 52m 49s)
    Status Information: SWAP CRITICAL - 100% free (0 MB out of 0 MB)
    Performance Data: swap=0MB;0;0;0;0
    Current Attempt: 4/4 (HARD state)
    Last Check Time: 01-09-2011 13:26:34
    Check Type: ACTIVE
    Check Latency / Duration: 0.125 / 0.004 seconds
    Next Scheduled Check: 01-09-2011 13:31:34
    Last State Change: 05-22-2010 21:36:47
    Last Notification: 01-09-2011 13:01:42 (notification 5521)
    Is This Service Flapping? NO (0.00% state change)
    In Scheduled Downtime? NO
    Last Update: 01-09-2011 13:29:32 (0d 0h 0m 4s ago)
    Is this normal? Should I be concerned? If more info is needed please let me know.

    Read the article

  • mod_rewrite and SEO friendliness

    - by John Doe
    My website has an atypical structure and I'm not sure if this could create problems in the long run, especially for SEO positioning purposes. I have a single, large PHP script, and I use the Apache module mod_rewrite in the .htaccess file to create friendly URLs, for example:
    RewriteRule ^$ /index.php?section=Main
    RewriteRule ^createArticle$ /index.php?section=Main&view=CreateArticle
    RewriteRule ^configuration$ /index.php?section=Configuration
    RewriteRule ^article/([0-9]{1,10})$ /index.php?section=Article&view=Default&id=$1
    RewriteRule ^deleteArticle/([0-9]{1,10})$ /index.php?section=Article&view=Delete&id=$1
    RewriteRule ^reportArticle/([0-9]{1,10})$ /index.php?section=Article&view=Report&id=$1
    RewriteRule ^logIn$ /index.php?section=Authentication
    ...
    So, www.example.com/index.php?section=Article&view=Default&id=105 would become www.example.com/article/105. The only real physical file is index.php, in which the parameters of the queried URL are processed and the corresponding result is output. My question is: do crawling robots (e.g. Googlebot) recognize these links? Do they index the resulting HTML output by index.php with the specified parameters as if it were an actual HTML file? Also, would this become a problem when creating a Sitemap?

    Read the article

  • Has EC2 made self-hosting possible for 'amateur' sysadmins?

    - by Blankman
    I'm a developer, and it seems EC2 has made it possible for an amateur sysadmin like me to set up and maintain a fairly large set of servers. Now, I don't mean to undermine real sysadmins, as I know their value, but what I am trying to get at is that someone like me can set up and maintain a cluster of servers (front-end web servers, with some DB servers) using tools like EC2 and Capistrano with the help of Google. Now, this isn't something I would do as a long-term thing, but as a startup, one-man operation, I think I can pull this off until business takes off and I can hire this important role out. With EC2 I get my firewall, so I basically open up port 80 on my public-facing server, which will run HAProxy and route requests to my cluster of servers. Of course I am simplifying the setup, but I just want a feel for what you guys think about my perception. My application is a web application that will be running Ruby on Rails (Passenger) and talking to MySQL or PostgreSQL.

    Read the article

  • TechEd 2012: MVVM In XAML

    - by Tim Murphy
    Paul Sheriff was a real character at the start of his MVVM in XAML session. There was a lot of sarcasm and self-deprecation going on before things got started, and that is never a bad way to get things rolling right after lunch. Then things got semi-serious.
    The presentation itself had a number of surprises, but not all of them had to do with XAML. When he flipped over to his company's code generation tool it took me off guard. I am used to generators that create code for a whole project, but his tools were able to create different types of constructs on demand. It also made it easier to follow what he was doing than some of the other demos I have seen this week where people were using code snippets.
    Getting to the heart of the topic, I found myself thinking that I may have found my utopia for application development in MVVM. Yes, I know there is no such thing, but this comes closer than any other pattern I have learned about. This pattern allows the application to have better separation of concerns than I have seen before. This is especially true since you can leverage data binding. I'm not sure why it has taken me so long to find time for this subject.
    As Paul demonstrated, using this pattern with XAML gives you multi-platform reusable code when you leverage common utility classes and ViewModel classes. The one drawback I see is that you have to go to the lowest common denominator between the platforms you want to support, but you always have to weigh the trade-offs.
    And finally, the Visual Studio nuggets just keep coming. Even though it has been available for several generations of Visual Studio, I had never seen someone use linked files within a solution. It just goes to show that I should spend more time exploring the deeper features of each dialog.
    del.icio.us Tags: TechEd, TechEd 2012, MVVM, Paul Sheriff, Patterns, Visual Studio 2012
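
    For readers new to the pattern, here is a minimal, generic sketch of the ViewModel half of MVVM in C#. It is not Paul Sheriff's code or his generator's output; the class and property names are invented for illustration.

      using System.ComponentModel;
      using System.Runtime.CompilerServices;

      // Base class that raises change notifications so XAML bindings update automatically.
      public abstract class ViewModelBase : INotifyPropertyChanged
      {
          public event PropertyChangedEventHandler PropertyChanged;

          protected void OnPropertyChanged([CallerMemberName] string name = null)
          {
              var handler = PropertyChanged;
              if (handler != null)
                  handler(this, new PropertyChangedEventArgs(name));
          }
      }

      // Hypothetical ViewModel: the view binds to CustomerName and needs no code-behind logic.
      public class CustomerViewModel : ViewModelBase
      {
          private string customerName;

          public string CustomerName
          {
              get { return customerName; }
              set { customerName = value; OnPropertyChanged(); }
          }
      }

    In the view, a binding such as Text="{Binding CustomerName, Mode=TwoWay}" on a TextBox, with the page's DataContext set to a CustomerViewModel instance, is what gives the separation of concerns the session emphasized: the ViewModel never references the view, so the same class can back different XAML platforms.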

    Read the article

  • Navigate to a virtual member from the member that overrides it

    - by axrwkr
    Using Visual Studio, in the editor window, I am able to navigate from the usage of a member to the line and file where it is declared by pressing F12 while the cursor is over that member, or by right-clicking on the member and selecting "Go To Definition". I would like to find a way to navigate from an override member to the base class member that it overrides. For example, if I have the following class with one method:
      public class SomeClass
      {
          public virtual void TheMethod()
          {
              // do something
          }
      }
    And I override that method somewhere else in the project or solution, similar to the following:
      public class OtherClass : SomeClass
      {
          public override void TheMethod()
          {
              // do something else
          }
      }
    I want to navigate from the declaration of TheMethod in OtherClass to the declaration of TheMethod in SomeClass. Is there a way to do this? I've found that I can find the definition of the member in the base class by pressing Shift + F12 (Find All References) and then looking through the list of occurrences. This works fine most of the time, since the list isn't usually that long, but it would be much better to have a way to go there directly.

    Read the article

  • Dealing with Fine-Grained Cache Entries in Coherence

    - by jpurdy
    On occasion we have seen significant memory overhead when using very small cache entries. Consider the case where there is a small key (say a synthetic key stored in a long) and a small value (perhaps a number or short string). With most backing maps, each cache entry will require an instance of Map.Entry, and in the case of a LocalCache backing map (used for expiry and eviction), there is additional metadata stored (such as last access time). Given the size of this data (usually a few dozen bytes) and the granularity of Java memory allocation (often a minimum of 32 bytes per object, depending on the specific JVM implementation), it is easily possible to end up with the case where the cache entry appears to be a couple dozen bytes but ends up occupying several hundred bytes of actual heap, resulting in anywhere from a 5x to 10x increase in stated memory requirements. In most cases, this increase applies to only a few small NamedCaches, and is inconsequential -- but in some cases it might apply to one or more very large NamedCaches, in which case it may dominate memory sizing calculations.
    Ultimately, the requirement is to avoid the per-entry overhead, which can be done either at the application level by grouping multiple logical entries into single cache entries, or at the backing map level, again by combining multiple entries into a smaller number of larger heap objects.
    At the application level, it may be possible to combine objects based on parent-child or sibling relationships (basically the same requirements that would apply to using partition affinity). If there is no natural relationship, it may still be possible to combine objects, effectively using a Coherence NamedCache as a "map of maps". This forces the application to first find a collection of objects (by performing a partial hash) and then to look within that collection for the desired object. This is most naturally implemented as a collection of entry processors to avoid pulling unnecessary data back to the client (and also to encapsulate that logic within a service layer).
    At the backing map level, the NIO storage option keeps keys on heap, and so has limited benefit for this situation. The Elastic Data features of Coherence naturally combine entries into larger heap objects, with the caveat that only data -- and not indexes -- can be stored in Elastic Data.
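
    To make the grouping idea concrete, here is a minimal sketch of the "map of maps" approach. It is written in C# purely for illustration -- Coherence itself is Java, and nothing below is Coherence API -- and the group count, key types, and method names are invented. The point is only that many small logical entries share one physical entry keyed by a partial hash, so the per-entry overhead is paid once per group rather than once per logical entry.

      using System.Collections.Generic;

      class GroupedCache
      {
          private const int GroupCount = 1024;

          // Outer map: group key (partial hash) -> inner map of logical key -> value.
          private readonly Dictionary<int, Dictionary<long, string>> groups =
              new Dictionary<int, Dictionary<long, string>>();

          private static int GroupOf(long logicalKey) =>
              (int)((ulong)logicalKey % GroupCount);

          public void Put(long logicalKey, string value)
          {
              int g = GroupOf(logicalKey);
              if (!groups.TryGetValue(g, out var bucket))
                  groups[g] = bucket = new Dictionary<long, string>();
              // In Coherence this read-modify-write would run inside an entry processor
              // so the whole group entry is not shipped back to the client.
              bucket[logicalKey] = value;
          }

          public string Get(long logicalKey)
          {
              return groups.TryGetValue(GroupOf(logicalKey), out var bucket)
                  && bucket.TryGetValue(logicalKey, out var value) ? value : null;
          }
      }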

    Read the article

  • How to address an EC2 instance from both inside and outside datacenter?

    - by Alexandr Kurilin
    I'm trying to find a good way of being able to address my EC2 database instance from both inside and outside of the datacenter. Other EC2 instances need to be able to call into it, and other clients like pgAdmin might need to connect to it from the outside world as well. It's my understanding that using the internal and external DNS names is not sustainable long term, as each reboot leads to a change. I'm thinking of associating an Elastic IP with the instance and giving it an A record (say db1.mydomain.com), which I will then use both within and outside the datacenter. Further instances in the same role will get the same treatment and a DNS record of db2.mydomain.com, etc. Now, is there a cleaner and more stable way of achieving this result? Am I going about this the wrong way? Suggestions?

    Read the article

  • Windows Boot disc to save files?

    - by acidzombie24
    Somehow, after updating my legit Windows 7 OS with no pirated or modded software on my PC, I was greeted by a black screen. I popped in my Ghost disc and copied the files I needed to an external HD. IIRC the Windows 7 disc can do that too. The problem with the way I did it with Ghost was that it expected me to select one file (an HD disc image), so I couldn't select multiple folders to move. Also, when I did move files, I had no idea if it had finished or how long it would take. My Linux live CD couldn't access the HD. Anyway, is there a disc I can use to easily copy files from my laptop to my external HD? I think Ghost, Windows 7 and Windows Server all allow me to, but is there one that is better suited to copying files?

    Read the article

  • Server goes offline. What to look for?

    - by Jonathan Sampson
    I'm using a new virtual server through GoDaddy, and this morning I received a call from the powers that be informing me our website was offline. After confirming this, I requested a power cycle through our GoDaddy control panel, and within a minute or two the server was back online. I made the call, and reported the news that we're back up. Of course, a couple minutes later we're down again. I tried connecting through PuTTy, and it takes forever to prompt me for a username, and each successive prompt takes a long time to come up. I'm using CentOS. So my questions are: How can I determine the cause? What types of things can I do to prevent this in the future? One interesting, and perhaps relevant, observation is that yesterday our bandwidth consumption was about 20% greater than our top figures from the past month.

    Read the article

  • Recovering an old website

    - by noah
    I have a client with an old website that somebody set up for him long ago. The guy who set it up is unreachable, so how do we go about trying to take it over? A WHOIS lookup got us some contact information, but I don't have great hopes for that (it hasn't been updated in quite some time). The nameservers are ns1.theplanet.com and ns2.theplanet.com, and we will try calling them, but I don't expect we'll be able to get much from them. What are our options? Is there a way I can discover the registrar so we can try contacting them as well? EDIT: It would be sufficient if we could get control of the domain name or put in some sort of redirect to the new site. Either hosting was prepaid for quite some time, or someone else is still paying for it, so we don't care about that.

    Read the article

  • Excel cutting off cell contents over 1024 characters

    - by Zeno
    I am using Excel 2003 to save a large file as a CSV, but when saving cells that contain over 1024 characters, it cuts off the characters beyond 1024. Per a previous question, I am using this official macro to save: http://support.microsoft.com/default.aspx?scid=kb;en-us;291296&Product=xlw The macro in question is probably causing it, since I'm not using the normal Save As (in order to put quotes around every field). It may not be exactly 1024 characters, but long cells are getting cut off. What in this macro is causing that?

    Read the article

  • Can I format a USB key in GUID mode in windows 7 to make it Mac OSX boot friendly?

    - by digiguru
    I have a MacBook Pro that won't boot properly. I've tried resetting the PRAM (holding down Option/Alt - P - R), but it doesn't work; it gets halfway through the boot process and says "You need to restart your computer" in several languages. Recently I downloaded a USB-key-compatible Linux OS. This USB key works as a boot loader on Windows machines, but on OS X it can only find the hard drive partitions when I go into the boot loader menu (holding down Option on startup). I am assuming this is because it is formatted as FAT32 and not with a GUID Partition Table. I believe my CD drive is also bust; it hasn't worked in a long time. I don't have another Mac, so is there a way I can format the USB key with a GUID Partition Table from a Windows 7 machine?

    Read the article

  • I can't delete a file - even when using unlocker

    - by chipperyman573
    So I have a 64-bit Windows 7 Ultimate computer. On my desktop, I have a .iso file that should NOT be in use by anything. However, when I use LockHunter (Unlocker doesn't work for me), it says it can't unlock or delete it. When I check what it's being used by, it says the system. I have screenshots of what happens when I click Delete, when I click Unlock It, and when I click Other > Close Locking Process. I've had this file on my computer for a LONG time now and haven't been able to delete it. I've also tried Eraser's erase-on-startup, but that doesn't work, either. How can I remove this file?

    Read the article

  • Domain user periodically can't login, but only temporarily?

    - by Josh
    OK, this is a strange one that I'm having trouble replicating, let alone solving. I have a user who uses two computers (both XP SP3) on the domain with a roaming profile. She has no problems on her personal computer but occasionally needs to use a shared computer. On this shared computer she is sometimes (~once a week) unable to log in, with the error message "Username or password incorrect. Check username password and domain and try again." I've checked when this happens and her username and password are indeed entered correctly. Now the strange part: if someone else logs in to the computer (which so far always works) and then logs out, she is able to log in after that. This problem began after a recent and long-overdue password change. I've tried to replicate this problem after a reboot, or after another user logs out, to no avail. Any suggestions on troubleshooting or replicating this one? Anyone experienced something similar?

    Read the article

  • Windows 7 - Wireless connection before login possible?

    - by EJ
    Is there a way for Windows 7 to connect to a wireless network before a user has logged in? I have found no good answers to this question elsewhere. Some say it should already be happening if I am using Windows' connection management (WLAN AutoConfig, formerly Wireless Zero), but I am using that, and it is not. I can sit at the login screen for as long as you please and it will not connect (watching the router from a separate PC); moments after I log in, it connects. Others have said that you need to use the manufacturer's connection management (not Windows'), which can sometimes have an option for pre-login/pre-logon connections, but I am using generic drivers. The device is a Netgear/Cisco WMP300N with a Broadcom chipset. Netgear/Cisco and Broadcom all claim to not have drivers for Win7, but Win7 apparently comes with a functional driver.

    Read the article

  • How do you track existing requirements over time?

    - by CaptainAwesomePants
    I'm a software engineer working on a complex, ongoing website. It has a lot of moving parts and a small team of UI designers and business folks adding new features and tweaking old ones. Over the last year or so, we've added hundreds of interesting little edge cases. Planning, implementing, and testing them is not a problem. The problem comes later, when we want to refactor or add another new feature. Nobody remembers half of the old features and edge cases from a year ago. When we want to add a new change, we notice that code does all sorts of things in there, and we're not entirely sure which things are intentional requirements and which are meaningless side effects. Did someone last year request that the login token was supposed to only be valid for 30 minutes, or did some programmers just pick a sensible default? Can we change it? Back when the product was first envisioned, we created some documentation describing how the site worked. Since then we created a few additional documents describing new features, but nobody ever goes back and updates those documents when new features are requested, so the only authoritative documentation is the code itself. But the code provides no justification, no reason for its actions: only the how, never the why. What do other long-running teams do to keep track of what the requirements were and why?

    Read the article

  • Cannot Delete "Account Unknown" account in Windows XP SP3

    - by naspinski
    If I right-click on My Computer, select Properties, then click on Advanced > User Profiles > Settings, I get a list of user accounts on my machine, and I see a few "Account Unknown" entries in there with large profiles, and I want my space back. I am assuming this is because these users are long gone and AD no longer recognizes their SIDs. The problem is that the Delete button is grayed out, but only for those accounts; the ones that are recognized I can delete just fine. These accounts do not show up in Computer Management at all, and I am an administrator on my machine. Any ideas on how to delete them?

    Read the article

  • CPU and HD degradation on source-based Linux distributions

    - by danilo2
    I have wondered for a long time whether source-based Linux distributions, like Gentoo or Funtoo, are "destroying" your system faster than binary ones (like Fedora or Debian). I'm talking about CPU and hard drive degradation. Of course, when you're updating your system, it has to compile everything from source, so it takes longer and your CPU is worked harder (it runs warmer and under more load). Such systems compile hundreds of packages weekly, so does it really matter? Does such a system degrade faster than binary-based ones?

    Read the article

  • Assuming "clean code/architecture" is there a difference in "effort" between PHP or Java/J2EE web application development?

    - by PhD
    A client asked us to estimate the effort of selecting PHP as the implementation language for his next web-based application. We spent about a week exploring PHP: prototyping, testing, etc. We are quite new to this language - we may have hacked around in it in the past, but let's go with PHP noobs who are application development experts (for lack of a better, less flattering word :)
    It seems that if we write clean, maintainable code and follow separation of concerns and enterprise architecture patterns (DAOs etc.), the 'effort' in creating an object-oriented PHP-based web application is about the same as for a Java-based one. Here's our equation for estimating the effort (development/delivery time):
    ConstructionEffort = f(analysis, design, coding, testing, review, deployment)
    We were specifically comparing effort estimates for creating an enterprise application with the following:
    PHP + CakePHP/CodeIgniter (should we have considered others?)
    Java + Spring + Restlet
    It's an end-to-end application:
    Client: JavaScript/jQuery + HTML/CSS
    Middle tier/business logic: (still evaluating PHP/Java)
    Database: MySQL
    The effort estimates for the 1st and 3rd tiers are constant and relatively independent of the middle tier's technology. At a high level, with an initial breakdown of the requested features into user stories, as well as a high-level SWAG on the sheer number of classes/SLOC that would be required, PHP doesn't seem to differ by much from what is required of the same in Java. Is this correct?
    We are basing our initial estimates on the initial prototyping/coding we've done with PHP. We are currently disregarding fluency with the language as a factor, since that will be an initial hurdle and not a long-term impediment IMHO (we also have sufficient time to become quite fluent with PHP). I'm interested in knowing the programmers' perspective with respect to effort when creating similar applications with either of the languages, to justify choosing one over the other. Are we missing something here? It seems we are going against the popular belief that PHP is quicker to market (or, being very fluent with Java, we have our vision clouded). From what we've played around with, PHP doesn't seem to offer any coding/programming effort savings.

    Read the article

  • Multiple EyeFinity Display groups

    - by Shinrai
    Is it possible with an EyeFinity enabled card to make multiple display groups at once? I was playing with a FirePro 2460 and while a 4x1 or 2x2 display group works quite nicely, if I make a 2x1 display group and then select one of the other displays to try to make a second 2x1 display group, it disables the first one. Is there any way to circumvent this behavior and set up two separate spans on the same card? Additionally, can you set up distinct display groups if they're on different cards? I will have the opportunity to test several of these cards in one machine very shortly, but I'm curious if anyone has any experience. EDIT: I can confirm that you can make multiple spans on multiple cards (as long as they don't cross cards, obviously) (If the answers are different for FirePro/FireMV cards and Radeon cards, that is helpful and relevant knowledge - I doubt it, though.)

    Read the article

  • Start Menu freezes when highlighting 'All Programs'

    - by gergesi
    Hey all, a friend's machine lags for about 30 seconds when he highlights 'All Programs'. A gray box pops up where the program menu would be, but it doesn't populate for nearly 30 seconds. This seems to happen if he hasn't opened the Start Menu for a while, but if he gets it working once, doing it again immediately afterwards usually doesn't have the same lag. Assuming computer hardware isn't an issue, any ideas what the problem may be? Side note: he mentioned that making desktop shortcuts is taking a long time as well - does that give any clues?

    Read the article

  • Using ASP.NET, Membership, and jQuery to Determine Username Availability

    Chances are, at some point you've tried creating a new user account on a website and were told that the username you selected was already taken. This is especially common on very large websites with millions of members, but can happen on smaller websites with common usernames, such as people's names or popular words or phrases in the lexicon of the online community that frequents the website. If the user registration process is short and sweet, most users won't balk when they are told their desired username has already been taken - they'll just try a new one. But if the user registration process is long, involving several questions and scrolling, it can be frustrating to complete the registration process only to be told you need to return to the top of the page to try a different username. Many websites use Ajax techniques to check whether a visitor's desired username is available as soon as they enter it (rather than waiting for them to submit the form). This article shows how to implement such a feature in an ASP.NET website using Membership and jQuery. This article includes a demo available for download that implements this behavior in an ASP.NET WebForms application that uses the CreateUserWizard control to register new users. However, the concepts in this article can be applied to ad-hoc user registration pages and ASP.NET MVC. Read on to learn more! Read More >
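
    As a rough illustration of the server side of such a check (this is not the article's demo code, and the class and method names are invented), an ASP.NET page method can ask the Membership provider whether a name is already taken:

      using System.Web.Security;
      using System.Web.Services;
      using System.Web.UI;

      // Hypothetical code-behind for a registration page; jQuery on the client
      // posts the entered name to Register.aspx/IsUsernameAvailable.
      public partial class Register : Page
      {
          [WebMethod]
          public static bool IsUsernameAvailable(string username)
          {
              if (string.IsNullOrEmpty(username))
                  return false;

              // Membership.GetUser returns null when no account with that name exists.
              return Membership.GetUser(username) == null;
          }
      }

    On the client, a jQuery handler on the username field's blur event would post the value to this method and toggle an "available"/"taken" message based on the boolean result, which is the Ajax behavior the article describes.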

    Read the article

  • Hosting and domain registrations for multiple clients

    - by letseatfood
    I am finally getting regular work designing, developing, and deploying websites for small businesses and individuals. So far the websites use single-user content management systems, so they create, as far as I know, minimal load on the shared servers. I have always required that each of my clients purchase annual shared hosting at Dreamhost. For domain registration, I ask that they register with Dreamhost, but some already have a domain registered elsewhere and this is fine with me. I do this so the billing issues are the client's responsibility, not mine. My question is: since I can register unlimited domains and connect them to my one shared hosting account at Dreamhost, should I not be requiring clients to individually pay for shared hosting and a domain? Should I actually be paying for one hosting account and then hosting all of my clients' websites on that account? As I said before, I currently have each client buy their own hosting because I feel that, for example, if there is high traffic to their site, there would be less of a chance of the site going down than if their site was hosted with many others on one account. I am famous for being long-winded, so please let me know if I can clarify at all. Thanks!

    Read the article

  • Google is re-indexing pages after redirecting URLs from HTTP to HTTPS incorrectly

    - by SLIM
    I upgraded my site so that all pages have gone from HTTP to HTTPS. I didn't consider that Google treats HTTPS pages differently than HTTP. I recreated my sitemap so that all links now reflect the new HTTPS URLs and let it be for a few days. (Whoops!) Google is now re-indexing all the HTTPS pages. I have about 19k pages on the site, and Google has already indexed about 8k of the new HTTPS pages. The problem is that Google sees all of these as brand new pages, when many of them have a long HTTP history. Of course most of you will recognize the problem: I didn't set up a 301 from the old HTTP URLs to the new HTTPS URLs. Is it too late to do this? Should I switch my sitemap back to HTTP URLs and then 301 redirect to the new HTTPS URLs? Or should I leave the sitemap as is and set up 301 redirects anyway? I'm not even sure if Google is trying to reach the HTTP site anymore. Currently the site is doing 303 redirects (from HTTP to HTTPS), although I haven't figured out why yet.

    Read the article
