Search Results

Search found 13564 results on 543 pages for 'non transparent'.


  • UITableViewCell is transparent when not supposed to be

    - by David Liu
    My UITableViewCell is being transparent when it's not supposed to be. My table view has a background color and it shows through the table cells, even though they're supposed to be opaque. I'm not sure why this is. Relevant code:

        UITableViewCell *cell = [table dequeueReusableCellWithIdentifier:emptyIdentifier];
        if (cell == nil) {
            cell = [[[UITableViewCell alloc] initWithFrame:CGRectZero reuseIdentifier:emptyIdentifier] autorelease];
        }
        cell.textLabel.text = @"Empty";
        cell.textLabel.textAlignment = UITextAlignmentCenter;
        cell.textLabel.backgroundColor = [UIColor whiteColor];
        return cell;

    Read the article

  • Create two semi-transparent images that when stacked produce the target image

    - by posfan12
    Due to CSS limitations I am forced to stack two semi-transparent images on my website. I won't go into detail regarding the CSS, since if I can get this question answered the problem is moot. Anyway, I would like to modify image A in GIMP such that it will look like it did originally after being stacked on top of image B. Both image A and image B have their opacities set to 50%. Image B is a solid color throughout, whereas image A has some minor details such as a gradient. Here's what it looks like before image B is applied on top (and what it should look like in the end): http://i421.photobucket.com/albums/pp292/SharkD2161/Support/Website/th_website_testing_target_image.png Here's what it looks like after image B has been applied on top: http://i421.photobucket.com/albums/pp292/SharkD2161/Support/Website/th_website_testing_undesired_result.png Thanks! Mike
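
    For reference, the standard alpha "over" rule gives one way to derive the repainted image A. This is only a sketch: it assumes straight (non-premultiplied) alpha, an opaque page background, and Normal-mode blending, which may not match GIMP's exact math.

        C_{\text{out}} = \alpha\,C_A + (1 - \alpha)\,P
        C_{A'} = \frac{T - (1 - \alpha)\,P}{\alpha} = 2T - \tfrac{1}{2}C_B - \tfrac{1}{2}C_{bg}   \quad (\alpha = 0.5)

    Here P is whatever already sits beneath layer A (image B at 50% over the page background, so P = (C_B + C_{bg})/2), T is the per-channel target color, and the result has to be clamped to the valid color range.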

    Read the article

  • Transparent child control

    - by pps
    Hello all, I'm writing a control that may have some of its parts transparent or semi-transparent. Basically this control shows a PNG image (with alpha channel). The control's parent window has some graphics on it. Therefore, to make the PNG control render correctly it needs to get an image of what the parent window would draw beneath it. The parent dialog might have the WS_CLIPCHILDREN flag set, which means the parent window won't draw anything under the PNG control, and in this case the PNG control won't work properly. This control has to work on Windows Mobile also, so it can't use WS_EX_TRANSPARENT

    Read the article

  • Transparent Arc on HTML5 Canvas

    - by Rigil
    Here I have an arc with some transparency applied to one of the two gradient stops it's using:

        ctx.arc(mouseX, mouseY, radius, 0, 2*Math.PI, false);
        var grd = ctx.createRadialGradient(mouseX, mouseY, 0, mouseX, mouseY, brushSize);
        grd.addColorStop(1, "transparent");
        grd.addColorStop(0.1, "#1f0000");
        ctx.fillStyle = grd;
        ctx.fill();

    Is there a way to now give the entire arc some transparency, affecting only the arc and none of the rest of the canvas? Thanks

    Read the article

  • Transparent QGLWidget on top of QGraphicsView

    - by maciej.gryka
    I'm using QGraphicsView to show a 2D image and also have a separate QGLWidget window to display some 3D object. I'm dynamically changing the image displayed in QGraphicsView based on the rotation of the 3D object. I would like to render a semi-transparent 3D object on top of the 2D image, something like Maya 2009 used to do (notice the cube in the upper right corner of the viewport). Is it possible to do this with my current widgets? If not, how could it be done? One option I can think of would be to render everything in QGLWidget and display the 2D image as a texture on a background plane, but that seems slightly painful.

    Read the article

  • jQuery drag and drop behavior with partially transparent image

    - by Aaron
    I'm trying to develop a drag-and-drop behavior based on the jQuery UI draggable behavior but am running into some road blocks. I want to be able to drag several images with transparent regions around a region of the screen. I want the user to be able to drag the image he clicks and not just whatever draggable div or PNG happens to be z-indexed on top. The below image is a screen grab from my test page. If I click the lower left region of the blue square through the red thing I should drag the square and not the red thing. The red thing is what gets dragged though because it is on top and the browser does not care about the transparency. My question is, how can I make it behave as expected in this situation and drag the square instead? Edit: Seems I can't attach images as a new user. See this URL for my example image: http://i42.tinypic.com/r1g4sk.png

    Read the article

  • How do I use a transparent image in a transparent image using GD?

    - by hogofwar
    I have tried every way I can find, but the background is always black. Is there any way to keep the transparency of the first image when GD uses it in PHP? I'm using imagecopymerge to place the image, though I am not sure if this is the right way. imagecopymerge($dest, $char, 0, 0, 0, 0, 150, 300, 100); The image is like so: http://filesmelt.com/dl/draw7.php.png See that the background is black, whereas the original picture was transparent, as the modified picture should be as well.

    Read the article

  • Problem with Juggernaut and transparent wmode in Adobe AIR

    - by vortexmk
    Hi, I am trying to use juggernaut in Adobe AIR application, however the big issue with this is that the juggernaut flash file should have wmode parameter set to 'transparent'. This is a bit of a problem for juggernaut because it seems that it doesn't work with that parameter included. The interesting part is that it is working under Linux on all browsers and in air application, but on Windows it's working only with Safari all other browsers including the AIR application are not even connecting to the juggernaut server. So the question is, what is the cause of this and if anyone had similar problem I would be interested to hear about solutions or workarounds. Thanks, Peco

    Read the article

  • Implementing Transparent Persistence

    - by Jules
    Transparent persistence allows you to use regular objects instead of a database. The objects are automatically read from and written to disk. Examples of such systems are Gemstone and Rucksack (for Common Lisp). Simplified version of what they do: if you access foo.bar and bar is not in memory, it gets loaded from disk. If you do foo.bar = baz, then the foo object gets updated on disk. Most systems also have some form of transactions, and they may have support for sharing objects across programs and even across a network. My question is: what are the different techniques for implementing these kinds of systems, and what are the trade-offs between these implementation approaches?
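
    One recurring technique is a load-on-demand proxy with write-through (or dirty tracking): reads fault the object in from disk, writes schedule it for persistence. The C# sketch below only illustrates that idea under invented assumptions (a JSON file per object, immediate write-through); it is not how Gemstone or Rucksack actually work.

        using System.IO;
        using System.Text.Json;

        // Hypothetical persistent slot: faults its value in from disk on first
        // read and writes every assignment straight back. Real systems intercept
        // plain field access (proxies, bytecode rewriting) and defer the write
        // until the enclosing transaction commits.
        public class PersistentSlot<T>
        {
            private readonly string _path;   // where this object's state lives
            private T _value;
            private bool _loaded;

            public PersistentSlot(string path) { _path = path; }

            public T Value
            {
                get
                {
                    if (!_loaded)            // fault in on first access
                    {
                        _value = JsonSerializer.Deserialize<T>(File.ReadAllText(_path));
                        _loaded = true;
                    }
                    return _value;
                }
                set
                {
                    _value = value;
                    _loaded = true;
                    File.WriteAllText(_path, JsonSerializer.Serialize(value));  // write-through
                }
            }
        }

    The main trade-off is exactly where that interception happens (explicit wrapper types as above, runtime proxies, or compile-time instrumentation) and whether writes go through immediately or are batched into transactions.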

    Read the article

  • Technical non-terminating condition in a loop

    - by Snarfblam
    Most of us know that a loop should not have a non-terminating condition. For example, this C# loop has a non-terminating condition: any even value of i. This is an obvious logic error.

        void CountByTwosStartingAt(byte i) {
            // If i is even, it never exceeds 254
            for (; i < 255; i += 2) {
                Console.WriteLine(i);
            }
        }

    Sometimes there are edge cases that are extremely unlikely, but technically constitute non-terminating conditions (stack overflows and out-of-memory errors aside). Suppose you have a function that counts the number of sequential zeros in a stream:

        int CountZeros(Stream s) {
            int total = 0;
            while (s.ReadByte() == 0)
                total++;
            return total;
        }

    Now, suppose you feed it this thing:

        class InfiniteEmptyStream : Stream {
            // ... Other members ...
            public override int Read(byte[] buffer, int offset, int count) {
                Array.Clear(buffer, offset, count); // Output zeros
                return count; // Never returns -1 (end of stream)
            }
        }

    Or more realistically, maybe a stream that returns data from external hardware, which in certain cases might return lots of zeros (such as a game controller sitting on your desk). Either way we have an infinite loop. This particular non-terminating condition stands out, but sometimes they don't. A completely real-world example is in an app I'm writing. An endless stream of zeros will be deserialized into infinite "empty" objects (until the collection class or GC throws an exception because I've exceeded two billion items). But this would be a completely unexpected circumstance (considering my data source). How important is it to have absolutely no non-terminating conditions? How much does this affect "robustness?" Does it matter if they are only "theoretically" non-terminating (is it okay if an exception represents an implicit terminating condition)? Does it matter whether the app is commercial? If it is publicly distributed? Does it matter if the problematic code is in no way accessible through a public interface/API? Edit: One of the primary concerns I have is unforeseen logic errors that can create the non-terminating condition. If, as a rule, you ensure there are no non-terminating conditions, you can identify or handle these logic errors more gracefully, but is it worth it? And when? This is a concern orthogonal to trust.
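
    One way to turn the "theoretical" non-terminating condition into an explicit, testable one is to cap the loop and fail loudly when the cap is hit. A minimal sketch (the limit and the exception type are arbitrary choices, not from the original post):

        using System.IO;

        static class StreamCounters
        {
            // Counts consecutive leading zero bytes, but refuses to spin forever:
            // an absurdly long run of zeros becomes a visible failure instead of
            // an infinite loop on a never-ending stream.
            public static int CountZeros(Stream s, int maxZeros = 1_000_000)
            {
                int total = 0;
                while (s.ReadByte() == 0)
                {
                    if (++total > maxZeros)
                        throw new InvalidDataException(
                            "More than " + maxZeros + " consecutive zeros; input looks unbounded.");
                }
                return total;
            }
        }

    Whether the cap lives in the loop or is expressed as a bounded read on the stream itself is a design choice; the point is that the terminating condition is stated rather than assumed.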

    Read the article

  • How to put transparent swf into html/php?

    - by SunSky
        <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
        <html xmlns="http://www.w3.org/1999/xhtml" xml:lang="en-US" lang="en-US">
        <div id="divAnima01">
          <object>
            <embed src="anima/anima01.swf" width="340" height="590">
              <param name="wmode" value="transparent" />
            </embed>
          </object>
        </div>

    Everything works except transparency - the swf has a white background. I tried to put wmode outside the embed tag - without result.

    Read the article

  • Is there an HTML code that can make my background picture transparent and my text non-transparent?

    - by user1831312
    Okay, so I've been typing some HTML code for a technology class that I need to satisfy for my Education major. This is what I have for my background:

        body {
            background-image: url('islandbeach.jpg');
            background-repeat: repeat;
            background-position: center;
            background-attachment: fixed;
            background-size: cover;
        }

    Now, I want to make my background transparent or faded so I can see the text and the other image that I have. The background is too colorful to be able to see the words without having to squint. Are there any HTML codes that can do this for me? I am not a pro at this stuff, I've just been following everything my professor has told me to do, so please explain stuff in baby steps if you do have an answer. Thank you so so much!

    Read the article

  • Non-Dom Element Event Binding with jQuery

    - by Rick Strahl
    Yesterday I had a short discussion with Dave Reed on Twitter regarding setting up fake ‘events’ on objects that are hookable. jQuery makes it real easy to bind events on DOM elements and with a little bit of extra work (that I didn’t know about) you can also set up binding to non-DOM element ‘event’ bindings. Assume for a second that you have a simple JavaScript object like this: var item = { sku: "wwhelp" , foo: function() { alert('orginal foo function'); } }; and you want to be notified when the foo function is called. You can use jQuery to bind the handler like this: $(item).bind("foo", function () { alert('foo Hook called'); } ); Binding alone won’t actually cause the handler to be triggered so when you call: item.foo(); you only get the ‘original’ message. In order to fire both the original handler and the bound event hook you have to use the .trigger() function: $(item).trigger("foo"); Now if you do the following complete sequence: var item = { sku: "wwhelp" , foo: function() { alert('orginal foo function'); } }; $(item).bind("foo", function () { alert('foo hook called'); } ); $(item).trigger("foo"); You’ll see the ‘hook’ message first followed by the ‘original’ message fired in succession. In other words, using this mechanism you can hook standard object functions and chain events to them in a way similar to the way you can do with DOM elements. The main difference is that the ‘event’ has to be explicitly triggered in order for this to happen rather than just calling the method directly. .trigger() relies on some internal logic that checks for event bindings on the object (attached via an expando property) which .trigger() searches for in its bound event list. Once the ‘event’ is found it’s called prior to execution of the original function. This is pretty useful as it allows you to create standard JavaScript objects that can act as event handlers and are effectively hookable without having to explicitly override event definitions with JavaScript function handlers. You get all the benefits of jQuery’s event methods including the ability to hook up multiple events to the same handler function and the ability to uniquely identify each specific event instance with post fix string names (ie. .bind("MyEvent.MyName") and .unbind("MyEvent.MyName") to bind MyEvent). Watch out for an .unbind() Bug Note that there appears to be a bug with .unbind() in jQuery that doesn’t reliably unbind an event and results in a elem.removeEventListener is not a function error. The following code demonstrates: var item = { sku: "wwhelp", foo: function () { alert('orginal foo function'); } }; $(item).bind("foo.first", function () { alert('foo hook called'); }); $(item).bind("foo.second", function () { alert('foo hook2 called'); }); $(item).trigger("foo"); setTimeout(function () { $(item).unbind("foo"); // $(item).unbind("foo.first"); // $(item).unbind("foo.second"); $(item).trigger("foo"); }, 3000); The setTimeout call delays the unbinding and is supposed to remove the event binding on the foo function. It fails both with the foo only value (both if assigned only as “foo” or “foo.first/second” as well as when removing both of the postfixed event handlers explicitly. Oddly the following that removes only one of the two handlers works: setTimeout(function () { //$(item).unbind("foo"); $(item).unbind("foo.first"); // $(item).unbind("foo.second"); $(item).trigger("foo"); }, 3000); this actually works which is weird as the code in unbind tries to unbind using a DOM method that doesn’t exist. 
    <shrug> A partial workaround for unbinding all ‘foo’ events is the following: setTimeout(function () { $.event.special.foo = { teardown: function () { alert('teardown'); return true; } }; $(item).unbind("foo"); $(item).trigger("foo"); }, 3000); which is a bit cryptic to say the least but it seems to work more reliably. I can’t take credit for any of this – thanks to Dave Reed and Damien Edwards who pointed out some of these behaviors. I didn’t find any good descriptions of the process so thought it’d be good to write it down here. Hope some of you find this helpful. © Rick Strahl, West Wind Technologies, 2005-2010. Posted in jQuery

    Read the article

  • Some non-generic collections

    - by Simon Cooper
    Although the collections classes introduced in .NET 2, 3.5 and 4 cover most scenarios, there are still some .NET 1 collections that don't have generic counterparts. In this post, I'll be examining what they do, why you might use them, and some things you'll need to bear in mind when doing so. BitArray System.Collections.BitArray is conceptually the same as a List<bool>, but whereas List<bool> stores each boolean in a single byte (as that's what the backing bool[] does), BitArray uses a single bit to store each value, and uses various bitmasks to access each bit individually. This means that BitArray is eight times smaller than a List<bool>. Furthermore, BitArray has some useful functions for bitmasks, like And, Xor and Not, and it's not limited to 32 or 64 bits; a BitArray can hold as many bits as you need. However, it's not all roses and kittens. There are some fundamental limitations you have to bear in mind when using BitArray: It's a non-generic collection. The enumerator returns object (a boxed boolean), rather than an unboxed bool. This means that if you do this: foreach (bool b in bitArray) { ... } Every single boolean value will be boxed, then unboxed. And if you do this: foreach (var b in bitArray) { ... } you'll have to manually unbox b on every iteration, as it'll come out of the enumerator an object. Instead, you should manually iterate over the collection using a for loop: for (int i=0; i<bitArray.Length; i++) { bool b = bitArray[i]; ... } Following on from that, if you want to use BitArray in the context of an IEnumerable<bool>, ICollection<bool> or IList<bool>, you'll need to write a wrapper class, or use the Enumerable.Cast<bool> extension method (although Cast would box and unbox every value you get out of it). There is no Add or Remove method. You specify the number of bits you need in the constructor, and that's what you get. You can change the length yourself using the Length property setter though. It doesn't implement IList. Although not really important if you're writing a generic wrapper around it, it is something to bear in mind if you're using it with pre-generic code. However, if you use BitArray carefully, it can provide significant gains over a List<bool> for functionality and efficiency of space. OrderedDictionary System.Collections.Specialized.OrderedDictionary does exactly what you would expect - it's an IDictionary that maintains items in the order they are added. It does this by storing key/value pairs in a Hashtable (to get O(1) key lookup) and an ArrayList (to maintain the order). You can access values by key or index, and insert or remove items at a particular index. The enumerator returns items in index order. However, the Keys and Values properties return ICollection, not IList, as you might expect; CopyTo doesn't maintain the same ordering, as it copies from the backing Hashtable, not ArrayList; and any operations that insert or remove items from the middle of the collection are O(n), just like a normal list. In short; don't use this class. If you need some sort of ordered dictionary, it would be better to write your own generic dictionary combining a Dictionary<TKey, TValue> and List<KeyValuePair<TKey, TValue>> or List<TKey> for your specific situation. ListDictionary and HybridDictionary To look at why you might want to use ListDictionary or HybridDictionary, we need to examine the performance of these dictionaries compared to Hashtable and Dictionary<object, object>. 
For this test, I added n items to each collection, then randomly accessed n/2 items: So, what's going on here? Well, ListDictionary is implemented as a linked list of key/value pairs; all operations on the dictionary require an O(n) search through the list. However, for small n, the constant factor that big-o notation doesn't measure is much lower than the hashing overhead of Hashtable or Dictionary. HybridDictionary combines a Hashtable and ListDictionary; for small n, it uses a backing ListDictionary, but switches to a Hashtable when it gets to 9 items (you can see the point it switches from a ListDictionary to Hashtable in the graph). Apart from that, it's got very similar performance to Hashtable. So why would you want to use either of these? In short, you wouldn't. Any gain in performance by using ListDictionary over Dictionary<TKey, TValue> would be offset by the generic dictionary not having to cast or box the items you store, something the graphs above don't measure. Only if the performance of the dictionary is vital, the dictionary will hold less than 30 items, and you don't need type safety, would you use ListDictionary over the generic Dictionary. And even then, there's probably more useful performance gains you can make elsewhere.
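
    To make the "wrapper class" suggestion above concrete, here is a minimal sketch of an adapter that exposes a BitArray as IEnumerable<bool> by indexing into it, so iteration never produces boxed booleans. It is only an illustration, not code from the article, and a full IList<bool> wrapper would need the remaining members as well.

        using System.Collections;
        using System.Collections.Generic;

        // Minimal adapter: iterate a BitArray through its bool indexer so no
        // boxed object values ever come out of an enumerator.
        public sealed class BitArrayEnumerable : IEnumerable<bool>
        {
            private readonly BitArray _bits;

            public BitArrayEnumerable(BitArray bits) { _bits = bits; }

            public IEnumerator<bool> GetEnumerator()
            {
                for (int i = 0; i < _bits.Length; i++)
                    yield return _bits[i];   // indexer returns an unboxed bool
            }

            IEnumerator IEnumerable.GetEnumerator() => GetEnumerator();
        }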

    Read the article

  • IntelliSense for Razor Hosting in non-Web Applications

    - by Rick Strahl
    When I posted my Razor Hosting article a couple of weeks ago I got a number of questions on how to get IntelliSense to work inside of Visual Studio while editing your templates. The answer to this question is mainly dependent on how Visual Studio recognizes assemblies, so a little background is required. If you open a template just on its own as a standalone file by clicking on it say in Explorer, Visual Studio will open up with the template in the editor, but you won’t get any IntelliSense on any of your related assemblies that you might be using by default. It’ll give Intellisense on base System namespace, but not on your imported assembly types. This makes sense: Visual Studio has no idea what the assembly associations for the single file are. There are two options available to you to make IntelliSense work for templates: Add the templates as included files to your non-Web project Add a BIN folder to your template’s folder and add all assemblies required there Including Templates in your Host Project By including templates into your Razor hosting project, Visual Studio will pick up the project’s assembly references and make IntelliSense available for any of the custom types in your project and on your templates. To see this work I moved the \Templates folder from the samples from the Debug\Bin folder into the project root and included the templates in the WinForm sample project. Here’s what this looks like in Visual Studio after the templates have been included:   Notice that I take my original example and type cast the Context object to the specific type that it actually represents – namely CustomContext – by using a simple code block: @{ CustomContext Model = Context as CustomContext; } After that assignment my Model local variable is in scope and IntelliSense works as expected. Note that you also will need to add any namespaces with the using command in this case: @using RazorHostingWinForm which has to be defined at the very top of a Razor document. BTW, while you can only pass in a single Context 'parameter’ to the template with the default template I’ve provided realize that you can also assign a complex object to Context. For example you could have a container object that references a variety of other objects which you can then cast to the appropriate types as needed: @{ ContextContainer container = Context as ContextContainer; CustomContext Model = container.Model; CustomDAO DAO = container.DAO; } and so forth. IntelliSense for your Custom Template Notice also that you can get IntelliSense for the top level template by specifying an inherits tag at the top of the document: @inherits RazorHosting.RazorTemplateFolderHost By specifying the above you can then get IntelliSense on your base template’s properties. For example, in my base template there are Request and Response objects. This is very useful especially if you end up creating custom templates that include your custom business objects as you can get effectively see full IntelliSense from the ‘page’ level down. For Html Help Builder for example, I’d have a Help object on the page and assuming I have the references available I can see all the way into that Help object without even having to do anything fancy. Note that the @inherits key is a GREAT and easy way to override the base template you normally specify as the default template. It allows you to create a custom template and as long as it inherits from the base template it’ll work properly. 
Since the last post I’ve also made some changes in the base template that allow hooking up some simple initialization logic so it gets much more easy to create custom templates and hook up custom objects with an IntializeTemplate() hook function that gets called with the Context and a Configuration object. These objects are objects you can pass in at runtime from your host application and then assign to custom properties on your template. For example the default implementation for RazorTemplateFolderHost does this: public override void InitializeTemplate(object context, object configurationData) { // Pick up configuration data and stuff into Request object RazorFolderHostTemplateConfiguration config = configurationData as RazorFolderHostTemplateConfiguration; this.Request.TemplatePath = config.TemplatePath; this.Request.TemplateRelativePath = config.TemplateRelativePath; // Just use the entire ConfigData as the model, but in theory // configData could contain many objects or values to set on // template properties this.Model = config.ConfigData as TModel; } to set up a strongly typed Model and the Request object. You can do much more complex hookups here of course and create complex base template pages that contain all the objects that you need in your code with strong typing. Adding a Bin folder to your Template’s Root Path Including templates in your host project works if you own the project and you’re the only one modifying the templates. However, if you are distributing the Razor engine as a templating/scripting solution as part of your application or development tool the original project is likely not available and so that approach is not practical. Another option you have is to add a Bin folder and add all the related assemblies into it. You can also add a Web.Config file with assembly references for any GAC’d assembly references that need to be associated with the templates. Between the web.config and bin folder Visual Studio can figure out how to provide IntelliSense. The Bin folder should contain: The RazorHosting.dll Your host project’s EXE or DLL – renamed to .dll if it’s an .exe Any external (bin folder) dependent assemblies Note that you most likely also want a reference to the host project if it contains references that are going to be used in templates. Visual Studio doesn’t recognize an EXE reference so you have to rename the EXE to DLL to make it work. Apparently the binary signature of EXE and DLL files are identical and it just works – learn something new everyday… For GAC assembly references you can add a web.config file to your template root. The Web.config file then should contain any full assembly references to GAC components: <configuration> <system.web> <compilation debug="true"> <assemblies> <add assembly="System.Web.Mvc, Version=3.0.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35" /> <add assembly="System.Web, Version=4.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a" /> <add assembly="System.Web.Extensions, Version=4.0.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35" /> </assemblies> </compilation> </system.web> </configuration> And with that you should get full IntelliSense. Note that if you add a BIN folder and you also have the templates in your Visual Studio project Visual Studio will complain about reference conflicts as it’s effectively seeing both the project references and the ones in the bin folder. 
    So it’s probably a good idea to use one or the other but not both at the same time :-) Seeing IntelliSense in your Razor templates is a big help for users of your templates. If you’re shipping an application level scripting solution especially it’ll be real useful for your template consumers/users to be able to get some quick help on creating customized templates – after all that’s what templates are all about – easy customization. Making sure that everything is referenced in your bin folder and web.config is a good idea and it’s great to see that Visual Studio (and presumably WebMatrix/Visual Web Developer as well) will be able to pick up your custom IntelliSense in Razor templates. © Rick Strahl, West Wind Technologies, 2005-2011. Posted in Razor

    Read the article

  • Movement prediction for non-shooters

    - by ShadowChaser
    I'm working on an isometric 2D game with moderate-scale multiplayer, approximately 20-30 players connected at once to a persistent server. I've had some difficulty getting a good movement prediction implementation in place. Physics/Movement The game doesn't have a true physics implementation, but uses the basic principles to implement movement. Rather than continually polling input, state changes (ie/ mouse down/up/move events) are used to change the state of the character entity the player is controlling. The player's direction (ie/ north-east) is combined with a constant speed and turned into a true 3D vector - the entity's velocity. In the main game loop, "Update" is called before "Draw". The update logic triggers a "physics update task" that tracks all entities with a non-zero velocity uses very basic integration to change the entities position. For example: entity.Position += entity.Velocity.Scale(ElapsedTime.Seconds) (where "Seconds" is a floating point value, but the same approach would work for millisecond integer values). The key point is that no interpolation is used for movement - the rudimentary physics engine has no concept of a "previous state" or "current state", only a position and velocity. State Change and Update Packets When the velocity of the character entity the player is controlling changes, a "move avatar" packet is sent to the server containing the entity's action type (stand, walk, run), direction (north-east), and current position. This is different from how 3D first person games work. In a 3D game the velocity (direction) can change frame to frame as the player moves around. Sending every state change would effectively transmit a packet per frame, which would be too expensive. Instead, 3D games seem to ignore state changes and send "state update" packets on a fixed interval - say, every 80-150ms. Since speed and direction updates occur much less frequently in my game, I can get away with sending every state change. Although all of the physics simulations occur at the same speed and are deterministic, latency is still an issue. For that reason, I send out routine position update packets (similar to a 3D game) but much less frequently - right now every 250ms, but I suspect with good prediction I can easily boost it towards 500ms. The biggest problem is that I've now deviated from the norm - all other documentation, guides, and samples online send routine updates and interpolate between the two states. It seems incompatible with my architecture, and I need to come up with a better movement prediction algorithm that is closer to a (very basic) "networked physics" architecture. The server then receives the packet and determines the players speed from it's movement type based on a script (Is the player able to run? Get the player's running speed). Once it has the speed, it combines it with the direction to get a vector - the entity's velocity. Some cheat detection and basic validation occurs, and the entity on the server side is updated with the current velocity, direction, and position. Basic throttling is also performed to prevent players from flooding the server with movement requests. After updating its own entity, the server broadcasts an "avatar position update" packet to all other players within range. The position update packet is used to update the client side physics simulations (world state) of the remote clients and perform prediction and lag compensation. 
Prediction and Lag Compensation As mentioned above, clients are authoritative for their own position. Except in cases of cheating or anomalies, the client's avatar will never be repositioned by the server. No extrapolation ("move now and correct later") is required for the client's avatar - what the player sees is correct. However, some sort of extrapolation or interpolation is required for all remote entities that are moving. Some sort of prediction and/or lag-compensation is clearly required within the client's local simulation / physics engine. Problems I've been struggling with various algorithms, and have a number of questions and problems: Should I be extrapolating, interpolating, or both? My "gut feeling" is that I should be using pure extrapolation based on velocity. State change is received by the client, client computes a "predicted" velocity that compensates for lag, and the regular physics system does the rest. However, it feels at odds to all other sample code and articles - they all seem to store a number of states and perform interpolation without a physics engine. When a packet arrives, I've tried interpolating the packet's position with the packet's velocity over a fixed time period (say, 200ms). I then take the difference between the interpolated position and the current "error" position to compute a new vector and place that on the entity instead of the velocity that was sent. However, the assumption is that another packet will arrive in that time interval, and it's incredibly difficult to "guess" when the next packet will arrive - especially since they don't all arrive on fixed intervals (ie/ state changes as well). Is the concept fundamentally flawed, or is it correct but needs some fixes / adjustments? What happens when a remote player stops? I can immediately stop the entity, but it will be positioned in the "wrong" spot until it moves again. If I estimate a vector or try to interpolate, I have an issue because I don't store the previous state - the physics engine has no way to say "you need to stop after you reach position X". It simply understands a velocity, nothing more complex. I'm reluctant to add the "packet movement state" information to the entities or physics engine, since it violates basic design principles and bleeds network code across the rest of the game engine. What should happen when entities collide? There are three scenarios - the controlling player collides locally, two entities collide on the server during a position update, or a remote entity update collides on the local client. In all cases I'm uncertain how to handle the collision - aside from cheating, both states are "correct" but at different time periods. In the case of a remote entity it doesn't make sense to draw it walking through a wall, so I perform collision detection on the local client and cause it to "stop". Based on point #2 above, I might compute a "corrected vector" that continually tries to move the entity "through the wall" which will never succeed - the remote avatar is stuck there until the error gets too high and it "snaps" into position. How do games work around this?
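
    The "corrected vector" idea in the question can be written down directly: when an update arrives, steer the remote entity toward where the sender should be at the end of a fixed convergence window instead of snapping it. The C# sketch below is only one possible shape for that logic; the type, field names, and the 250 ms window are assumptions taken from the question, and it deliberately ignores collisions and stop handling.

        using System.Numerics;

        public class RemoteEntity
        {
            public Vector3 Position;   // what the local simulation currently draws
            public Vector3 Velocity;   // consumed by the regular physics integrator

            // Called when an "avatar position update" packet arrives. Instead of
            // teleporting, compute a velocity that burns off the positional error
            // over convergeSeconds while still following the reported motion.
            public void ApplyUpdate(Vector3 reportedPos, Vector3 reportedVel,
                                    float latencySeconds, float convergeSeconds = 0.25f)
            {
                // Where the sender should be "now", given the packet's age.
                Vector3 extrapolated = reportedPos + reportedVel * latencySeconds;

                // Where it will be once the convergence window has elapsed.
                Vector3 target = extrapolated + reportedVel * convergeSeconds;

                // Velocity that carries our (possibly wrong) position onto that target.
                Velocity = (target - Position) / convergeSeconds;
            }
        }

    Once the window elapses (or a stop/state-change packet arrives) the velocity has to be reset to the reported one, otherwise the correction keeps pushing the entity past the target - which is exactly the "stuck against a wall" failure mode described above.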

    Read the article

  • Hardening non-root standalone Linux Tomcat install

    - by NoozNooz42
    I want to know if you have any tips as to how to strengthen the security of a non-root install of Tomcat in standalone mode, once Tomcat is already installed in a non-root account. I mention this because, for example, I'm not at all interested in the answers given here (because both Java and Tomcat require root privileges there to be installed, and I've got zero interest in running jsvc): http://serverfault.com/questions/43765 So far, here's what I've done for my non-root standalone Tomcat 6 install:

        - download and install the JRE .bin provided by Oracle/Sun (no need to be root here) (no need for a full JDK anymore, right, since Jasper [Tomcat's JSP engine] has its own compiler now?)
        - download and tar -xzf Tomcat 6 (no need to be root here)
        - set up transparent port-forwarding (must be root here)

    Note that my distribution is a Debian one and I have exactly zero interest in downloading Debian packages / backports / whatever... Because, once again, I DO NOT want to need to be root to install Java & Tomcat. The only moment I needed to be root was to configure the firewall to transparently do the port forwarding 80 <-- 8080 and 443 <-- 8443. I then deleted all the default webapps but one:

        cd ~/apache-tomcat-6.0.26/webapps
        rm -rf docs
        rm -rf examples/
        rm -rf manager/
        rm -rf ROOT/

    What about the directory ~/apache-tomcat-6.0.26/webapps/host-manager, do I need it or can I delete it? So, once I've installed Tomcat standalone in a non-root account (and taking into account that I don't want to enter the root password anymore and that I don't plan to install the whole Apache shebang), what more can I do? Are there connectors I can disable? (How?)

    Read the article

  • Ignore non-unicode programs language when installing software

    - by mitya
    This is something that has been driving me nuts for a while, and I haven't been able to find a solution for this problem anywhere. I am running Windows 7 and my "Language for non-Unicode programs" setting is set to Russian. I need it for some non-Unicode software that has a Russian UI. However, for most of my software I prefer to use the English UI. A lot of software out there is multilingual and is too smart for my liking. When installing, it switches the UI to Russian and the software UI stays in Russian after the installation without an option to change that, besides setting the "non-Unicode language" to English. It switches back to Russian once I revert the setting and reboot. Most of the time it is driver software, e.g. Intel, HP, etc. How can I force the installation to run in English and stay that way after install, ignoring the "Language for non-Unicode programs" setting? Now, I understand this might be specific to the installer: MSI, InstallShield, etc. But any solution will be good, even if I have to apply it for every software installation. Thanks in advance for any helpful information!

    Read the article

  • What books should I read to be able to communicate with programmers? [migrated]

    - by Zak833
    My experience is in online marketing, UI/UX and web design, but I know virtually no programming. I have recently been hired to build a new, fairly complex site from scratch, for which I will be working with an experienced programmer with whom I have worked extensively in the past. Although I have a decent understanding of certain technical concepts relating to web development, I would like to build a better appreciation of the programmer's craft, in order to improve communication with my programmer, as well as the client. I have heard Code Complete is quite a good book for this. Other than reading this and learning some basic programming, are there any other books or resources that could be recommended to the non-programmer who does not wish to become a programmer, yet wishes to understand the most common concepts involved in building software, web-based or otherwise?

    Read the article

  • Can I (reasonably) refuse to sign an NDA for pro bono work? [closed]

    - by kojiro
    A friend of mine (let's call him Joe) is working on a promising project, and has asked me for help. As a matter of friendship I agreed (orally) not to discuss the details, but now he has a potential investor who wants me to sign a non-disclosure agreement (NDA). Since thus far all my work has been pro bono I told Joe I am not comfortable putting myself under documented legal obligation without some kind of compensation for the risk. (It needn't be strictly financial. I would accept a small ownership stake or possibly even just guaranteed credits in the code and documentation.) Is my request reasonable, or am I just introducing unnecessary complexity?

    Read the article

  • Resize transparent images using C#

    - by MartinHN
    Does anyone have the secret formula to resizing transparent images (mainly GIFs) without ANY quality loss whatsoever? I've tried a bunch of stuff; the closest I get is not good enough. Take a look at my main image: http://www.thewallcompany.dk/test/main.gif And then the scaled image: http://www.thewallcompany.dk/test/ScaledImage.gif

        //Internal resize for indexed colored images
        void IndexedRezise(int xSize, int ySize)
        {
            BitmapData sourceData;
            BitmapData targetData;
            AdjustSizes(ref xSize, ref ySize);
            scaledBitmap = new Bitmap(xSize, ySize, bitmap.PixelFormat);
            scaledBitmap.Palette = bitmap.Palette;
            sourceData = bitmap.LockBits(new Rectangle(0, 0, bitmap.Width, bitmap.Height),
                ImageLockMode.ReadOnly, bitmap.PixelFormat);
            try
            {
                targetData = scaledBitmap.LockBits(new Rectangle(0, 0, xSize, ySize),
                    ImageLockMode.WriteOnly, scaledBitmap.PixelFormat);
                try
                {
                    xFactor = (Double)bitmap.Width / (Double)scaledBitmap.Width;
                    yFactor = (Double)bitmap.Height / (Double)scaledBitmap.Height;
                    sourceStride = sourceData.Stride;
                    sourceScan0 = sourceData.Scan0;
                    int targetStride = targetData.Stride;
                    System.IntPtr targetScan0 = targetData.Scan0;
                    unsafe
                    {
                        byte* p = (byte*)(void*)targetScan0;
                        int nOffset = targetStride - scaledBitmap.Width;
                        int nWidth = scaledBitmap.Width;
                        for (int y = 0; y < scaledBitmap.Height; ++y)
                        {
                            for (int x = 0; x < nWidth; ++x)
                            {
                                p[0] = GetSourceByteAt(x, y);
                                ++p;
                            }
                            p += nOffset;
                        }
                    }
                }
                finally
                {
                    scaledBitmap.UnlockBits(targetData);
                }
            }
            finally
            {
                bitmap.UnlockBits(sourceData);
            }
        }

    I'm using the above code to do the indexed resizing. Does anyone have improvement ideas?
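
    For non-indexed formats the usual route is to let GDI+ do the resampling onto a 32-bit ARGB surface with high-quality interpolation; a minimal sketch is below. Note the hedge: this keeps the alpha channel of PNGs intact but deliberately sidesteps the palette question, so an indexed GIF would still have to be re-quantized back to a palette before saving (not shown here).

        using System.Drawing;
        using System.Drawing.Drawing2D;
        using System.Drawing.Imaging;

        static class Resizer
        {
            // Scales any Image onto a fresh 32bpp ARGB bitmap, preserving the
            // alpha channel. Indexed formats (GIF) lose their palette here and
            // would need re-quantizing before being written out as GIF again.
            public static Bitmap ResizeWithAlpha(Image source, int width, int height)
            {
                var result = new Bitmap(width, height, PixelFormat.Format32bppArgb);
                using (var g = Graphics.FromImage(result))
                {
                    g.CompositingMode = CompositingMode.SourceCopy;        // don't blend onto black
                    g.InterpolationMode = InterpolationMode.HighQualityBicubic;
                    g.PixelOffsetMode = PixelOffsetMode.HighQuality;
                    g.DrawImage(source, new Rectangle(0, 0, width, height));
                }
                return result;
            }
        }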

    Read the article

  • CSS: Making two transparent images overlapping

    - by Pierre
    Hi all, I'm trying to make two transparent images (having the same size/dimensions) overlap inside a div at their top left corner. I tried:

        <html xmlns="http://www.w3.org/1999/xhtml">
        <body>
          <div style="margin:20px;">
            <div id="main" style="overflow:hidden;background-color:red;width:400px;height:400px;border:3px solid blue;">
              <img src="myimage1.png" style="position:relative;top:0px;left:0px;z-index:0;"/>
              <img src="myimage2.png" style="position:relative;top:0px;left:0px;z-index:10;"/>
            </div>
          </div>
        </body>
        </html>

    but it doesn't work; the two pictures are placed one after the other inside the parent div. Thanks for your help!

    Read the article

  • Can you overlay transparent images in Blackberry Apps?

    - by Greg
    I have a very simple application that has one screen and one button. The main screen has a verticalFieldManager with a BitmapField inside it, and one button beneath the bitmap. I want to be able to overlay another image on top of this when the user clicks a button. The overlay image is a PNG with transparent background, and this is important for design, so I can't use popupscreen or a new screen because the backgrounds are always white by default, and I've heard alpha doesn't really do the trick. I guess what I'm asking is if anyone knows a simple way to... A) take a standard verticalFieldManager and overlay a PNG on top of the inner contents B) overlay a PNG over the screen, no matter the contents The basic functionality of this app was intended to be - show an image. on click, show another overlaid on top. on click again, remove the popup image. I haven't found anything that addresses something like this online, but I have read of people doing similar things that utilize popupscreen and new screens in a way I don't need to do. Hopefully this makes sense. Thanks

    Read the article

  • iPhone: Use a view as a transparent overlay with closing button

    - by axooh
    I've got a map with three bar buttons for different markers to show up in the map. If I click on a bar button, the specific markers are shown in the map, which already works great. Now I would like to show a transparent overlay (popup window) with the description of the markers after I clicked on a bar button with a button to close the overlay again and show the markers (which are set in the background). The function of the bar button: - (IBAction)routeTwo:(id)sender { // The code for the overlay // ... // remove any annotations that exist [map removeAnnotations:map.annotations]; // Add any annotations which belongs to route 2 [map addAnnotation:[self.mapAnnotations objectAtIndex:2]]; [map addAnnotation:[self.mapAnnotations objectAtIndex:3]]; [map addAnnotation:[self.mapAnnotations objectAtIndex:4]]; [map addAnnotation:[self.mapAnnotations objectAtIndex:5]]; } I tried the following possibilities: 1. Using a modal view controller RouteDescriptionViewController *routeDescriptionView = [[RouteDescriptionViewController alloc] init]; [self presentModalViewController:routeDescriptionView animated:YES]; [routeDescriptionView release]; Works great, but the problem is: The map view in the background is not visible anymore (configuring alpha values of the modal view doesn't change anything). 2. Add RouteDescriptionView as a subview RouteDescriptionViewController *routeDescriptionView = [[RouteDescriptionViewController alloc] init]; [self.view addSubview:routeDescriptionView.view]; [routeDescriptionView release]; Works great as well, but the problem here is: I can't configure a close button on the subview to close/remove the subview (RouteDescriptionView). 3. Using UIAlertView Would work as expected, but the UIAlert is not really customizable and therefore not suitable for my needs. Any ideas how to achieve this?

    Read the article
