Search Results

Search found 7722 results on 309 pages for 'pitfalls to avoid'.

  • How to Avoid a Busy Loop Inside a Function That Returns the Object That's Being Waited For

    - by Carl Smith
    I have a function which has the same interface as Python's input builtin, but it works in a client-server environment. When it's called, the function, which runs on the server, sends a message to the client, asking it to get some input from the user. The user enters some stuff, or dismisses the prompt, and the result is passed back to the server, which passes it to the function. The function then returns the result. The function must work like Python's input [that's the spec], so it must block until it has the result. This is all working, but it uses a busy loop which, in practice, could easily be spinning for many minutes. Currently, the function tells the client to get the input, passing an id. The client returns the result with the id. The server puts the result in a dictionary, with the id as the key. The function basically waits for that key to exist.

        def input():
            '''simplified example'''
            key = unique_key()
            tell_client_to_get_input(key)
            while key not in dictionary:
                pass
            return dictionary.pop(key)

    Using a callback would be the normal way to go, but the input function must block until the result is available, so I can't see how that could work. The spec can't change, as Python will be using the new input function for stuff like help and pdb, which provide their own little REPLs. I have a lot of flexibility in terms of how everything works overall, but just can't budge on the function acting exactly like Python's. Is there any way to return the result as soon as it's available, without the busy loop?
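    A standard way to block without spinning is a per-request event. Below is a minimal sketch, assuming the client's result arrives on a different thread than the one blocked in input; unique_key and tell_client_to_get_input are the question's own names, and on_client_result is a hypothetical hook in the server's receive path:

        import threading

        events = {}    # key -> threading.Event
        results = {}   # key -> result delivered by the client

        def input():
            '''blocks on a per-request Event instead of polling a dict'''
            key = unique_key()
            events[key] = threading.Event()
            tell_client_to_get_input(key)
            events[key].wait()            # sleeps until set() is called
            del events[key]
            return results.pop(key)

        def on_client_result(key, value):
            '''called by the server when the client's answer comes in'''
            results[key] = value
            events[key].set()             # wakes the blocked input() call

    If the server is single-threaded and asynchronous (an event loop), the equivalent primitive is a future/promise awaited at that point rather than an Event.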

  • How to avoid or minimise use of check/conditional statements?

    - by Muneeb Nasir
    I have a scenario where I receive a stream, and I need to check for certain values. If I receive a new value, I have to store it in a data structure. It seems very easy: I can place a conditional statement (if-else), or use the contains method of a set/map, to check whether the received value is new or not. But the problem is that the checking will affect my application's performance: the stream will deliver hundreds of values per second, and if I start checking each and every value I receive, that will surely hurt performance. Can anybody suggest a mechanism or algorithm that solves my issue, either by bypassing the checks or at least minimizing them?
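    For scale: a hash-based set does the membership test in O(1) average time, so hundreds of checks per second is a trivial load. A minimal sketch (store is a hypothetical stand-in for whatever data structure receives new values):

        seen = set()

        def on_value(value):
            # one O(1) average-time hash lookup per incoming value
            if value not in seen:
                seen.add(value)
                store(value)   # hypothetical sink for new values

    If memory for the set ever became the real constraint, a Bloom filter would trade a small false-positive rate for a much smaller footprint.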

  • How to avoid oscillation in async event-based systems?

    - by inf3rno
    Imagine a system where there are data sources which need to be kept in sync. A simple example is model-view data binding in MVC. I intend to describe these kinds of systems in terms of data sources and hubs. Data sources publish and subscribe to events, and hubs relay events to data sources. By handling an event, a data source changes its state as described in the event. By publishing an event, the data source puts its current state into the event, so other data sources can use that information to change their state accordingly. The only problem with this system is that events can be reflected from the hub or from the other data sources, and that can put the system into an infinite oscillation (async) or an infinite loop (sync). For example:

        A -- data source
        B -- data source
        H -- hub

        A -> H -> A             -- reflection from the hub
        A -> H -> B -> H -> A   -- reflection from another data source

    In the sync case it is relatively easy to solve this issue: you can compare the current state with the event, and if they are equal, you don't change the state and don't raise the same event again. In the async case I could not find a solution yet. The state comparison does not work with async event handling, because there is only eventual consistency, and new events can be published in an inconsistent state, causing the same oscillation. For example:

        A(*->x) -> H -> B(y->x)   -- can go parallel with
        B(*->y) -> H -> A(x->y)

        -- so first A changes to state x while B changes to state y
        -- then B changes to state x while A changes to state y
        -- and so on for eternity...

    Do you think there is an algorithm to solve this problem? If there is a solution, is it possible to extend it to prevent oscillation caused by multiple hubs, multiple different events, and so on? Update: I don't think I can make this work without a lot of effort. This problem is just the same one we have when syncing multiple databases in a distributed system. So I think what I really need is constraints if I want to prevent this problem in an automatic way. What constraints do you suggest?
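    One common constraint (a sketch, not the only answer: this is essentially last-writer-wins with a logical clock, as used when reconciling distributed replicas) is to stamp every event with an origin and a monotonically increasing version, and have each data source drop events that are not strictly newer than its own state. A reflected event carries an old version, so it dies at the first hop instead of re-triggering a publish. Names below are illustrative:

        class DataSource:
            def __init__(self, hub, name):
                self.hub = hub
                self.name = name
                self.version = 0
                self.state = None

            def set_local(self, value):
                # a change originating at this source bumps the version
                self.version += 1
                self.state = value
                self.hub.publish({"origin": self.name,
                                  "version": self.version,
                                  "state": value})

            def handle(self, event):
                if event["origin"] == self.name:
                    return                       # our own echo: drop it
                if event["version"] <= self.version:
                    return                       # stale/reflected event: drop it
                self.version = event["version"]  # adopt the newer state
                self.state = event["state"]      # without re-publishing

    Ties between concurrent writers still need a deterministic tie-breaker (for example, ordering on the pair (version, origin)), which is exactly the kind of constraint the update above is asking for.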

  • How do sites avoid SEO issues / legalities with subdomain unique ids?

    - by JM4
    I was looking through a few websites recently and noticed a trend I'm not sure I understand. Sites are creating unique referral URLs for customers in the form http://customname.site.com (if somebody were to use http://www.site.com/customname it would function the same way). Using Google Chrome, I can see the sites are using 302 redirects at some point, then doing some sort of htaccess redirect, taking the subdomain name (customname) and applying it as a referral parameter, then keeping it in session during the entire process. However, there must be thousands of these custom URLs that people are typing in. How is each one of these "subdomains" not treated as a separate URL which in turn redirects to the same page (in short, generating tons of links all pointing to the same page, which Google would normally frown upon)? Additionally, the links also appear on the sites themselves as clickable links, so I'm not sure how these are not tracked. Similarly, the "unique" URL is not indexed or cached in any Google search results. How is this capability handled? It does NOT highlight the referral aspect, but a true example of this is visiting http://sfgiants.com, which does a 302 redirect to the much longer proper San Francisco Giants MLB homepage. I am wondering how sfgiants.com is not indexed (assuming that direct shortened link appears on several MLB pages)?

    1 - I know these are 302 redirects; I can see this in the site's network flow.
    2 - These links do in fact appear on the page itself: some areas (for example, the bottom of the page) may say "Send this page to a friend! http://name.site.com/", which in turn would again redirect to something like http://www.site.com?id=name, so the id value can be stored in session.

  • How can my team avoid frequent errors after refactoring?

    - by SDD64
    To give you a little background: I work for a company with roughly twelve Ruby on Rails developers (+/- interns). Remote work is common. Our product is made of two parts: a rather fat core, and thin-to-big customer projects built upon it. Customer projects usually expand the core. Overwriting of key features does not happen. I might add that the core has some rather bad parts that are in urgent need of refactoring. There are specs, but mostly for the customer projects. The worst parts of the core are untested (as it should be...). The developers are split into two teams, working with one or two POs for each sprint. Usually, one customer project is strictly associated with one of the teams and POs. Now our problem: rather frequently, we break each other's stuff. Someone from team A expands or refactors core feature Y, causing unexpected errors for one of team B's customer projects. Mostly, the changes are not announced across the teams, so the bugs almost always hit unexpectedly. Team B, including the PO, believed feature Y to be stable and did not test it before releasing, unaware of the changes. How can we get rid of these problems? What kind of "announcement technique" can you recommend?

  • Should I switch my graphics mode in the BIOS to avoid using Bumblebee?

    - by Fawkes5
    I have just purchased an Acer 3830TG, from the Timeline X series. To my surprise, I found out that there is no first-party support for NVIDIA Optimus on Linux. Bumblebee works great, but the battery life with the graphics card always running is not so great. I don't use Linux for games, so I don't really need the graphics card on; I have Windows for that. In my BIOS, I have the ability to change my graphics mode from switchable to integrated. If I do this and reinstall Ubuntu, what will happen? Will my NVIDIA card just turn off? Will everything work properly, as if I'm not running an Optimus laptop? Is this recommended as opposed to dealing with Bumblebee? What is the best thing I could do?

  • How does delicious.com avoid being sued for copyright infringement?

    - by Stanish
    With the recent redesign of delicious.com, they've added a much more graphical home page. The site continues to be a service for people to bookmark and share websites they come across on the web. The Delicious home page is now made up of images taken from those linked sites. See for yourself at http://delicious.com. I would like to know what in the law allows them to do this, considering the images represent the main content of the page and they clearly do not own the copyright to those images. I know there is some leeway given to search engines, where it is considered fair use to use a small portion of the content if the aim is to lead people to the originating site. Does that apply here?

  • I have domain.com and domain.org pointing to the same site; should I use redirects to avoid duplicate content?

    - by bunzip
    I have both the .com and the .org for a domain name, and using Apache I point them to the same site content. I think this might be causing problems with search engines because of duplicate content. I want the .org to be the canonical website. How do others handle this situation? Should I be using 301 redirects to point all the .com requests to the .org? Or should I just use link rel="canonical" on each page to point to the .org?
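    For reference, a minimal sketch of the 301 approach in Apache (mod_alias; domain.com and domain.org stand in for the real names): a dedicated vhost for the .com that permanently redirects everything, path included, to the .org:

        <VirtualHost *:80>
            ServerName domain.com
            ServerAlias www.domain.com
            # 301 every path to the canonical .org host
            Redirect permanent / http://domain.org/
        </VirtualHost>

    A 301 consolidates the two hostnames outright, while rel="canonical" leaves both reachable and merely hints at the preferred one.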

  • When decomposing a large function, how can I avoid the complexity from the extra subfunctions?

    - by missingno
    Say I have a large function like the following:

        function do_lots_of_stuff(){
            { //subpart 1
                ...
            }
            ...
            { //subpart N
                ...
            }
        }

    A common pattern is to decompose it into subfunctions:

        function do_lots_of_stuff(){
            subpart_1(...)
            subpart_2(...)
            ...
            subpart_N(...)
        }

    I usually find that decomposition has two main advantages: (1) the decomposed function becomes much smaller, which can help people read it without getting lost in the details; and (2) parameters have to be explicitly passed to the underlying subfunctions, instead of being implicitly available by just being in scope, which can help readability and modularity in some situations. However, I also find that decomposition has some disadvantages: there are no guarantees that the subfunctions "belong" to do_lots_of_stuff, so there is nothing stopping someone from accidentally calling them from the wrong place; and a module's complexity grows quadratically with the number of functions we add to it (there are more possible ways for things to call each other). Therefore: are there useful conventions or coding styles that help me balance the pros and cons of function decomposition, or should I just use an editor with code folding and call it a day? EDIT: This problem also applies to functional code (although in a less pressing manner). For example, in a functional setting the subparts would return values that are combined at the end, and the decomposition problem of having lots of subfunctions able to use each other is still present. We can't always assume that the problem domain can be modeled with just a few small, simple types and a few highly orthogonal functions. There will always be complicated algorithms or long lists of business rules that we still want to be able to deal with correctly.

        function do_lots_of_stuff(){
            p1 = subpart_1()
            p2 = subpart_2()
            pN = subpart_N()
            return assembleStuff(p1, p2, ..., pN)
        }
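    One convention that directly addresses the "called from the wrong place" worry: nest the subparts inside the parent, so the language itself scopes them. A minimal sketch in Python (names are the question's own, snake_cased), where the inner functions are invisible to the rest of the module and the module's function count - the quadratic-growth worry - stays flat:

        def do_lots_of_stuff():
            def subpart_1():
                ...  # sees only what is passed in or closed over

            def subpart_2():
                ...

            p1 = subpart_1()
            p2 = subpart_2()
            return assemble_stuff(p1, p2)

    Languages without nested functions can get the same effect with a private helper class, or with file-local (static) functions.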

  • How do professional games avoid showing pixel seams in adjacent mesh boundaries due to decimal imprecision?

    - by ufomorace
    Graphics cards are mathematically imprecise. So when some meshes are joined at their borders, the graphics card often makes rounding mistakes and decides that some pixels at the seam belong to neither object, and unwanted gap pixels appear. This is natural behaviour on all graphics cards. How are such worries avoided in professional games? Batching? Shaders? Different tangent vectors? Merging? Overlapping seams? Dark backgrounds? Extra vertices at borders? Z precision? Camera distance tweaks? (A screencap of a fix that ended up not working was attached.)

  • How do I make cars on a one-dimensional track avoid collisions?

    - by user990827
    Using three.js, I use a simple spline to represent a road. Cars can only move forward on the spline. A car should be able to slow down behind a slow-moving car. I know how to calculate the distance between two cars, but how do I calculate the proper speed in each game update? At the moment I simply do something like this:

        this.speed += (this.maxSpeed - this.speed) * 0.02; // linear interpolation to maxSpeed

        // the position on the spline (0.0 - 1.0)
        this.position += this.speed / this.road.spline.getLength();

    This works. But how do I implement the slow-down part?

        // transform from floats (0.0 - 1.0) into actual units
        var carInFrontPosition = carInFront.position * this.road.spline.getLength();
        var myPosition = this.position * this.road.spline.getLength();
        var distance = carInFrontPosition - myPosition;

        // WHAT TO DO HERE WITH THE DISTANCE?
        // HOW TO CALCULATE MY NEW SPEED?

    Obviously I have to somehow take the current speed of the cars into account in the calculation. Besides different maxSpeeds, I want each car to also have a different mass (causing it to accelerate slower or faster). But this mass then also has to be taken into account for braking (slowing down) so they don't crash into each other.
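    One simple approach (a sketch under assumed names: safeDistance and mass are new; speed, maxSpeed, carInFront are the question's own) is to derive a target speed from the gap and then ease toward it with the same interpolation already used above, scaled by mass:

        var safeDistance = 10; // desired gap, in the spline's world units

        var targetSpeed = this.maxSpeed;
        if (carInFront) {
            var gap = carInFrontPosition - myPosition;
            // below safeDistance this yields less than the leader's speed,
            // so the follower eases off instead of colliding; at larger gaps
            // it is clamped back to maxSpeed
            targetSpeed = Math.min(this.maxSpeed,
                                   carInFront.speed * (gap / safeDistance));
        }

        // heavier cars change speed more slowly, for accelerating and braking alike
        this.speed += (targetSpeed - this.speed) * (0.05 / this.mass);
        this.speed = Math.max(0, this.speed);
        this.position += this.speed / this.road.spline.getLength();

    For more realistic traffic, the same structure extends to the Intelligent Driver Model, which computes acceleration from the gap, the speed, and the closing rate.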

  • Should I avoid or embrace asking questions of other developers on the job?

    - by T.K.
    As a CS undergraduate, the people around me are either learning or are paid to teach me, but as a software developer, the people around me have tasks of their own. They aren't paid to teach me, and conversely, I am paid to contribute. When I first started working as a software developer co-op, I was introduced to a huge code base written in a language I had never used before. I had plenty of questions, but didn't want to bother my co-workers with all of them - it wasted their time and hurt my pride. Instead, I spent a lot of time bouncing between IDE and browser, trying to make sense of what had already been written and to differentiate between expected behavior and symptoms of bugs. I'd ask my co-workers when I felt that the root of my lack of understanding was an in-house concept that I wouldn't find on the internet, but aside from that, I tried to confine my questions to lunch hours. Naturally, there were occasions where I wasted time searching the internet for something that had, at its heart, an in-house concept, but overall, I felt I was productive enough during my first semester, contributing about as much as one could expect and gaining a pretty decent understanding of large parts of the product. I was wondering what senior developers think about that mindset. Should new developers ask more questions to get up to speed faster, or should they do their own research? I see benefits to both mindsets, and anticipate a large variety of responses, but I figure new developers might appreciate your answers without thinking to ask this question.

  • How to avoid movement speed stacking when multiple keys are pressed?

    - by eren_tetik
    I've started a new game which requires no mouse, leaving movement up to the keyboard. I have tried to incorporate 8 directions: up, left, right, up-right and so on. However, when I press more than one arrow key, the movement speed stacks (http://gfycat.com/CircularBewitchedBarebirdbat). How can I counteract this? Here is the relevant part of my code:

        var speed : int = 5;

        function Update () {
            if(Input.GetKey(KeyCode.UpArrow)){
                transform.Translate(Vector3.forward * speed * Time.deltaTime);
            } else if(Input.GetKey(KeyCode.UpArrow) && Input.GetKey(KeyCode.RightArrow)){
                transform.Translate(Vector3.forward * speed * Time.deltaTime);
            } else if(Input.GetKey(KeyCode.UpArrow) && Input.GetKey(KeyCode.LeftArrow)){
                transform.rotation = Quaternion.AngleAxis(315, Vector3.up);
            }
            if(Input.GetKey(KeyCode.DownArrow)){
                transform.Translate(Vector3.forward * speed * Time.deltaTime);
            }
        }
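    The usual fix is to accumulate one direction vector from all pressed keys, normalize it, and translate once per frame, so diagonals move at the same speed as straight lines. A sketch in the same UnityScript style (speed as declared above):

        function Update () {
            var dir : Vector3 = Vector3.zero;
            if (Input.GetKey(KeyCode.UpArrow))    dir += Vector3.forward;
            if (Input.GetKey(KeyCode.DownArrow))  dir += Vector3.back;
            if (Input.GetKey(KeyCode.LeftArrow))  dir += Vector3.left;
            if (Input.GetKey(KeyCode.RightArrow)) dir += Vector3.right;
            if (dir != Vector3.zero) {
                // normalized keeps length 1 whether one key or two are down
                transform.Translate(dir.normalized * speed * Time.deltaTime);
            }
        }

    This also removes the unreachable else-if branches: a combined check like up && right can never run once a plain up check has already matched.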

  • How to avoid tons of `instanceof` in collision detection?

    - by Prog
    Consider a simple game with 4 kinds of entities: Robots, Dogs, Missiles, Walls. Here's a simple collision-detection mechanism in pseudocode (I know, O(n^2) - irrelevant for this question):

        for(Entity entityA in entities){
            for(Entity entityB in entities){
                if(collision(entityA, entityB)){
                    if(entityA instanceof Robot && entityB instanceof Dog)
                        entityB.die();
                    if(entityA instanceof Robot && entityB instanceof Missile){
                        entityA.die();
                        entityB.die();
                    }
                    if(entityA instanceof Missile && entityB instanceof Wall)
                        entityB.die();
                    // .. and so on
                }
            }
        }

    Obviously this is very ugly, and will get bigger and harder to maintain the more entities and conditions there are. One option to make this better is to have separate lists for each kind of entity - for example a Robots list, a Dogs list, etc. - and then check for collisions of all Robots with Dogs, all Dogs with Walls, and so on. This is better, but I still don't think it's good. So my question is: the collision detection system spotted a collision - now what? What is the common way to react to the collision? Should the system notify the entity itself that it collided with something, and have it decide for itself how to react, e.g. entityA.reactToCollision(entityB)? Or is there some other solution?
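    One common alternative to both the instanceof ladder and per-type lists is a handler table keyed by the pair of types, so each reaction lives in exactly one registry entry. A minimal sketch in Java (Entity, Robot, etc. as in the question; die() is assumed to be declared on Entity):

        import java.util.HashMap;
        import java.util.Map;
        import java.util.function.BiConsumer;

        class CollisionTable {
            private final Map<String, BiConsumer<Entity, Entity>> handlers = new HashMap<>();

            CollisionTable() {
                handlers.put("Robot:Dog",     (a, b) -> b.die());
                handlers.put("Robot:Missile", (a, b) -> { a.die(); b.die(); });
                handlers.put("Missile:Wall",  (a, b) -> b.die());
            }

            void resolve(Entity a, Entity b) {
                String key = a.getClass().getSimpleName() + ":"
                           + b.getClass().getSimpleName();
                BiConsumer<Entity, Entity> h = handlers.get(key);
                if (h != null) h.accept(a, b);   // unregistered pairs simply do nothing
            }
        }

    The same idea under the name "double dispatch" (or the visitor pattern) pushes the pair resolution into the type system instead of a string key; the table version is easier to extend at runtime.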

  • How can I avoid a few seconds of blank video when using -vcodec copy?

    - by arlomedia
    I'm processing user-uploaded videos on a CentOS web server with ffmpeg. I need to convert each video to a standard size and format, then extract a 30-second sample clip from each video. I want to use the "-vcodec copy" flag in the extraction command to avoid encoding a second time. This command works for my initial conversion:

        ffmpeg -i uploaded.mov -f mp4 -vcodec libx264 -vpre medium -acodec libfaac -r 15 -b 360k -ab 48k -ar 22050 -s 480x320 formatted.mp4

    And this sometimes works for the extraction:

        ffmpeg -i formatted.mp4 -vcodec copy -acodec copy -ss 0 -t 30 formatted_sample.mp4

    However, when I run the extraction command on some videos, the extracted sample clip starts with several seconds of blank video: the audio starts right away, but the video doesn't start for 3-6 seconds. To demonstrate the problem, I've uploaded two video clips and run the above commands on them. I created the first clip in Final Cut Express and encoded it with Handbrake before uploading to the web server: 1a) uploaded clip; 1b) converted with first command; 1c) extracted with second command, missing first six seconds. By comparison, this second clip comes from Apple's website and does not show the problem: 2a) uploaded clip; 2b) converted with first command; 2c) extracted with second command, no problem. Can anyone see what's different about the two source clips? And if so, is there anything I can do in my conversion command so that when the extraction command runs, the clip is set up to avoid the missing video? By the way, I initially had the problem with ffmpeg 0.6.1 installed from yum, but I upgraded to the latest git version and the problem remains.
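    A plausible culprit (an assumption, not a confirmed diagnosis): with -vcodec copy, ffmpeg can only start the video stream on a keyframe, so if the first keyframe near the cut point falls several seconds in, players show blank video until it arrives while the audio plays. Forcing frequent keyframes during the initial encode bounds that gap; for example, setting -g (GOP size) equal to the frame rate gives one keyframe per second:

        # -r 15 and -g 15 together mean one keyframe per second of video
        ffmpeg -i uploaded.mov -f mp4 -vcodec libx264 -vpre medium \
               -acodec libfaac -r 15 -g 15 -b 360k -ab 48k -ar 22050 \
               -s 480x320 formatted.mp4

    Comparing the keyframe spacing of the two converted clips (for example, with ffprobe) would confirm or rule this out.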

  • How do you keep server documentation from getting out of sync with the actual setup?

    - by Frerich Raabe
    I'm a hobbyist maintaining a small FreeBSD server serving mail via IMAP - it's an exercise in server administration. The setup does have reasonably good documentation (in AsciiDoc format), which recently allowed another person to recreate the entire setup from scratch in less than 30 minutes. However, I noticed that after the initial setup, it easily happens that small changes done to the system (say: inetd gets disabled, my IMAP server listens on an additional port for ManageSieve connections, a new router is added to the exim configuration) don't end up in the documentation immediately (if at all). My idea was to avoid this problem by (partially?) generating the documentation out of the configuration files and the comments therein - one way to implement this may be to put /etc and /usr/local/etc into some source code management system (say, git) and then run a script which regenerates the documentation on every commit. However, I'm not sure whether that would be overkill and/or too difficult to get right (after all, I don't want complete copies of the source files in my documentation, but rather just the diffs). How do other people keep their server documentation from getting outdated - is there a good way to keep the two in sync automatically, or do you just have the discipline to update the documentation at the same time you modify the system?
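    The configs-in-git half of that idea already exists as a tool: etckeeper keeps /etc under version control and commits automatically. For the generation half, here is a minimal sketch of the commit-hook script (paths and the output file name are illustrative) that lifts each config file's leading comment block into one generated AsciiDoc page:

        #!/usr/bin/env python
        import os

        def comment_header(path):
            """Collect the leading run of '#' comment lines from a config file."""
            lines = []
            try:
                with open(path) as f:
                    for line in f:
                        if line.startswith("#"):
                            lines.append(line.lstrip("# ").rstrip())
                        else:
                            break
            except (IOError, UnicodeDecodeError):
                pass
            return lines

        with open("config-notes.adoc", "w") as doc:
            doc.write("= Generated configuration notes\n\n")
            for root, _, files in os.walk("/usr/local/etc"):
                for name in sorted(files):
                    path = os.path.join(root, name)
                    doc.write("== %s\n\n%s\n\n" % (path, "\n".join(comment_header(path))))

    Run from a git post-commit hook, this keeps the generated page exactly as fresh as the last config change, while the hand-written AsciiDoc stays for the narrative parts.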

  • Does /NOCANDY avoid any adware-related activities with OpenCandy?

    - by Andrew Grimm
    OpenCandy claims that using the /NOCANDY switch with an OpenCandy-affiliated installer allows you to avoid OpenCandy. Should I take their word for it? If not, can anyone independent of OpenCandy and their affiliates verify that /NOCANDY works? Background: I'm about to install WinSCP onto a fresh Windows installation, and found out that new versions have OpenCandy associated with their installer. For the sake of balance, here's a link to WinSCP's FAQ on OpenCandy. The claim about /NOCANDY working appears on WinSCP's web site, but the same boilerplate appears on other OpenCandy-affiliated web sites. If the OpenCandy people are offended by the tag "spyware": sorry, but it's the main tag here, rather than "adware".

  • How can I avoid repeating DocumentRoot in this Apache virtual host?

    - by David Faux
    I have an Apache virtual host configured for a website powered by WordPress:

        <VirtualHost *:80>
            ServerName 67.178.132.253
            DocumentRoot /home/david/wordpressWebsite

            # BEGIN WordPress
            <IfModule mod_rewrite.c>
                RewriteEngine On
                RewriteRule ^index\.php$ - [L]
                RewriteCond /home/david/wordpressWebsite%{REQUEST_FILENAME} !-f
                RewriteCond %{REQUEST_FILENAME} !-d
                RewriteRule . /index.php [L]
            </IfModule>
            # END WordPress
        </VirtualHost>

    How can I avoid hard-coding /home/david/wordpressWebsite twice? I don't want to use REQUEST_URI, since that involves an extra request.
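    One option (assuming Apache 2.4+, where the Define directive is available) is to bind the path to a variable once and reference it below; docroot is an arbitrary variable name:

        Define docroot /home/david/wordpressWebsite

        <VirtualHost *:80>
            ServerName 67.178.132.253
            DocumentRoot ${docroot}

            <IfModule mod_rewrite.c>
                RewriteEngine On
                RewriteRule ^index\.php$ - [L]
                RewriteCond ${docroot}%{REQUEST_FILENAME} !-f
                RewriteCond %{REQUEST_FILENAME} !-d
                RewriteRule . /index.php [L]
            </IfModule>
        </VirtualHost>

    On older Apache versions, SetEnv does not interpolate into directives, so the usual fallbacks are mod_macro or generating the vhost from a template.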

  • Should I limit end-user gigabit ports to avoid saturating uplink/trunk connections?

    - by Joel Coel
    We have a campus with 16 buildings and older 850 nm 1 Gbps fiber links between the buildings, which all come to a core switch for our servers that also uses 1 Gbps ports. We're finally starting to replace our aging 10/100 end-user switches, and much of what we're looking at are 1 Gbps units. My question is: since the trunk/uplink lines are still 1 Gbps, if I were to install 1 Gbps switches for end users, should I limit the ports to 100 Mbps until I can also upgrade the trunks, to avoid allowing a badly behaving host to saturate a trunk line (since we're a college, we have plenty of misbehaving hosts) and thereby create a DoS situation for a building? Or will TCP congestion control typically take care of that for me? What if we have a lot of UDP traffic (games, video chats, even a small amount of BitTorrent)?

  • How to secure Apache for a shared hosting environment? (chrooting, avoiding symlinking...)

    - by Alessio Periloso
    I'm having problems dealing with Apache configuration. The problem is that I want to limit each user to his own docroot (so a chroot() would be what I'm looking for), but: mod_chroot works only globally and not per virtual host. I have the users in a path like /home/vhosts/xxxxx/domains/domain.tld/public_html (xxxxx is the user), and can't solve the problem by chrooting /home/vhosts, because the users would still be allowed to see each other. Using apache-mpm-itk would slow down the websites too much, and I'm not sure it would solve anything. Without using either of the previous two, I think the only thing left is preventing symlinking, i.e. not allowing users to link to something that doesn't belong to them. So I think I'm going to follow the third option, but... how can I efficiently prevent symlinking while still keeping mod_rewrite working? PHP has already been chrooted with php-fpm, so my only concern is Apache itself.
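    For the symlink option, Apache's own per-directory options may be enough (a sketch; /home/vhosts as in the question): SymLinksIfOwnerMatch follows a link only when the link and its target have the same owner, which blocks cross-user links while still satisfying mod_rewrite's requirement that symlink following be enabled in per-directory context:

        <Directory /home/vhosts>
            Options -FollowSymLinks +SymLinksIfOwnerMatch
        </Directory>

    Note that the ownership checks add lstat() calls per request, and there are known check-then-open race conditions, so this hardens rather than fully isolates users.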
