Search Results

Search found 2347 results on 94 pages for 'slightly frustrated'.


  • Code review recommendations and Code Smells

    - by Michael Freidgeim
    Some time ago Twitter told me that I am similar to Boris Lipschitz. Indeed, he is also a .NET programmer from Russia living in Australia. I've read his list of Code Review points and found them quite comprehensive. A few points were not clear to me, and that forced me into some further reading.
    In particular, the statement "Exception should not be used to return a status or an error code" wasn't fully clear to me, because sometimes we store an exception as an object with all the error details, and I believe that is a valid approach. However, I agree that throwing exceptions should be avoided if you expect to return an error as part of a normal flow. Related link: http://codeutopia.net/blog/2010/03/11/should-a-failed-function-return-a-value-or-throw-an-exception/
    Another point slightly puzzled me: "If Thread.Sleep() is used, can it be replaced with something else, e.g. Timer, AutoResetEvent, etc." I believe there are very rare cases in which anyone uses Thread.Sleep in production code; usually it is used in mocks and prototypes.
    I had to look further to clarify "Dependency injection is used instead of Service Location pattern". Even though most articles lean towards Dependency Injection, there are also advantages to using Service Location; see, for example, http://geekswithblogs.net/KyleBurns/archive/2012/04/27/dependency-injection-vs.-service-locator.aspx. http://www.cookcomputing.com/blog/archives/000587.html refers to the concluding thoughts of Martin Fowler: the choice between Service Locator and Dependency Injection is less important than the principle of separating service configuration from the use of services within an application.
    The post had a link to Jeff Atwood's excellent article Code Smells, but the statement that "code should not pass a review if it violates any of the code smells" sounds too strict for my environment. In particular, I disagree with the "Dead Code" recommendation to "Ruthlessly delete code that isn't being used. That's why we have source control systems!" If there is a chance that unused code will be required in the future, it is convenient to keep it as commented-out or #if/#endif blocks with an appropriate explanation of why it could be required later. TFS is a good source control system, but a context search in the source code of the current solution is much easier than finding something in previous versions of the code.
    I also found a link to a good book, "Clean Code: A Handbook of Agile Software Craftsmanship".
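
    To make the exception-versus-status point concrete, here is a minimal C# sketch (the type and method names are my own, not from the reviewed checklist): an expected failure is reported as data, with the exception kept as an object that carries the details rather than being thrown at the caller.

        using System;

        public sealed class OperationResult
        {
            public bool Succeeded { get; private set; }
            public Exception Error { get; private set; }   // failure details stored as an object, not thrown

            public static OperationResult Success()
            {
                return new OperationResult { Succeeded = true };
            }

            public static OperationResult Failure(Exception error)
            {
                return new OperationResult { Succeeded = false, Error = error };
            }
        }

        public static class ConfigReader
        {
            // Parsing that can fail as part of the normal flow returns a result instead of throwing.
            public static OperationResult TryParsePort(string text)
            {
                try
                {
                    int port = int.Parse(text);
                    return port > 0
                        ? OperationResult.Success()
                        : OperationResult.Failure(new ArgumentOutOfRangeException("text"));
                }
                catch (FormatException ex)
                {
                    return OperationResult.Failure(ex);
                }
            }
        }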

    Read the article

  • What are the tradeoffs for using 'partial view models'?

    - by Kenny Evitt
    I've become aware of an itch due to some non-DRY code pertaining to view model classes in an (ASP.NET) MVC web application and I'm thinking of scratching my itch by organizing code in various 'partial view model' classes. By partial-view-model, I'm referring to a class like a view model class in an analogous way to how partial views are like views, i.e. a way to encapsulate common info and behavior. To strengthen the 'analogy', and to aid in visually organizing the code in my IDE, I was thinking of naming the partial-view-model classes with a _ prefix, e.g. _ParentItemViewModel. As a slightly more concrete example of why I'm thinking along these lines, imagine that I have a domain-model-entity class ParentItem and the user-friendly descriptive text that identifies these items to users is complex enough that I'd like to encapsulate that code in a method in a _ParentItemViewModel class, for which I can then include an object or a collection of objects of that class in all the view model classes for all the views that need to include a reference to a parent item, e.g. ChildItemViewModel can have a ParentItem property of the _ParentItemViewModel class type, so that in my ChildItemView view, I can use @Model.ParentItem.UserFriendlyDescription as desired, like breadcrumbs, links, etc. Edited 2014-02-06 09:56 -05 As a second example, imagine that I have entity classes SomeKindOfBatch, SomeKindOfBatchDetail, and SomeKindOfBatchDetailEvent, and a view model class and at least one view for each of those entities. Also, the example application covers a lot more than just some-kind-of-batches, so that it wouldn't really be useful or sensible to include info about a specific some-kind-of-batch in all of the project view model classes. But, like the above example, I have some code, say for generating a string for identifying a some-kind-of-batch in a user-friendly way, and I'd like to be able to use that in several views, say as breadcrumb text or text for a link. As a third example, I'll describe another pattern I'm currently using. I have a Contact entity class, but it's a fat class, with dozens of properties, and at least a dozen references to other fat classes. However, a lot of view model classes need properties for referencing a specific contact and most of those need other properties for collections of contacts, e.g. possible contacts to be referenced for some kind of relationship. Most of these view model classes only need a small fraction of all of the available contact info, basically just an ID and some kind of user-friendly description (i.e. a friendly name). It seems to be pretty useful to have a 'partial view model' class for contacts that all of these other view model classes can use. Maybe I'm just misunderstanding 'view model class' – I understand a view model class as always corresponding to a view. But maybe I'm assuming too much.
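
    To show what is meant above by a 'partial view model', here is a rough C# sketch (all class and property names are invented for illustration, not taken from the actual project): a small _ParentItemViewModel encapsulates the user-friendly description once, and other view models compose it.

        // A small, reusable 'partial view model' that encapsulates how a parent item is described to users.
        public class _ParentItemViewModel
        {
            public int Id { get; set; }
            public string Name { get; set; }
            public string Code { get; set; }

            // The complex user-friendly description logic lives in one place.
            public string UserFriendlyDescription
            {
                get { return string.Format("{0} ({1})", Name, Code); }
            }
        }

        // A full view model for a child view composes the partial view model
        // instead of duplicating the description logic.
        public class ChildItemViewModel
        {
            public int Id { get; set; }
            public string Title { get; set; }
            public _ParentItemViewModel ParentItem { get; set; }
        }

    The ChildItemView view could then use @Model.ParentItem.UserFriendlyDescription for breadcrumbs or links, as described above.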

    Read the article

  • Web.NET: A Brief Retrospective

    - by Chris Massey
    It's been several weeks since I had the pleasure of visiting Milan and joining 150 enthusiastic web developers for a day of server-side frameworks and JavaScript. Lucky for me, I keep good notes. Overall the day went smoothly, with some solid logistics and very attentive organizers, and an impressively diverse audience drawn by the fact that the event was ambitiously run in English. This was great in that it drew a truly pan-European audience (11 countries were represented on the day, and at least 1 visa had to be procured to get someone there!). It was troublesome because, in some cases, it pushed speakers outside their comfort zone. Thankfully, despite a slightly rocky start, every session I attended was very well presented, and the consensus on the day was that the speakers were excellent.
    While I felt that a lot of the speakers had more that they wanted to cover, the topics were well-chosen, every room constantly had a stack of people in it, and all the sessions were pleasingly focused on code & demos. For all that the language barriers occasionally made networking a little challenging, organizers Simone & Ugo nailed the logistics. Registration was slick, lunch was plentiful, and session management was great. The very generous Rui was kind enough to showcase a short video about Glimpse in his session, which seemed to go down well (although the audio in the rooms was a little under-powered). Because I think you might need a mid-week chuckle, here are some out-takes:
    And let's not forget the Hackathon. The idea was that, having just learned about a stack of interesting technologies, attendees could spend an evening (fuelled by pizza and some good GitHub beer) hacking something together using them. Unfortunately, after a (great) 10-hour day, and in many cases facing international travel in the morning, many of the attendees headed straight for their hotel rooms. This idea could work so beautifully, and I'm excited to see how it pans out in 2013.
    On top of the slick sessions, getting to finally meet Ugo and Simone in the flesh was a pleasure, as was the serendipitous introduction to the most excellent Rui. They're all fantastic guys who are passionate about the web, and I'm looking forward to finding opportunities to work with them. Simone & Ugo put on a great event, and I'm excited to see what they do next year.

    Read the article

  • Executable execution path: does it depend on the place the executable is called from?

    - by Valkea
    As I'm still a new Linux user, I am still discovering behaviours and am not always able to tell whether they are "normal" or not. I searched the Internet, but as I can't really find an answer, I guess it's time to ask here.
    A few weeks ago I installed a small game called "Machinarium" and I played it... but a few days later, when I wanted to continue my game, I was unable to make the game start correctly. As I didn't have the time to search, I gave up. But yesterday, as I was working on a program of mine, I had exactly the same behaviour. So I searched a bit and discovered that when using Nautilus with the "List view", I was able to run the program (i.e. the program does find the sound, images and other resources) when I was literally "inside" the executable's folder, but unable to when I was in a parent folder and had only expanded it down to the executable's folder to run it. To illustrate the behaviour, here are two screenshots. It doesn't work if the executable is double-clicked from here. It does work if the executable is double-clicked from here. This is indeed the same "place", but the Nautilus view is slightly different as the current folder is not the same, and that seems to make a difference for the program.
    Furthermore, when I create a menu item via System Settings/Main Menu pointing to the executable, it behaves just as if the executable can't find the resources (that's why I was unable to play Machinarium the second time, as I had created a menu short-cut after my first game). So I made my program generate a text file at its root when running, and I started to launch it from different "parent" folders to see where the text file is generated. Each time, the file was generated in the top folder of the current Nautilus view. I expected to see it appear in the same folder as the executable (well, not once I had guessed what was happening, but before that I would have expected it to appear in the exe folder).
    Can anyone explain to me why it works like this (I guess it's normal)? How am I supposed to solve this when creating programs (should I detect the executable path in my C++ code, or should I organize the resource files another way than on Windows?)
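
    The underlying issue is that the game resolves its resource files relative to the process working directory, which is whatever folder the launcher happens to use, not the folder the binary lives in. As a hedged illustration of the usual fix (shown here in C# for brevity; the same idea applies to the C++ case, for example by querying /proc/self/exe on Linux), resolve resources relative to the executable's own location instead of the current directory:

        using System;
        using System.IO;

        static class ResourcePaths
        {
            // Directory containing the executable, independent of where it was launched from.
            static readonly string BaseDir = AppDomain.CurrentDomain.BaseDirectory;

            // Resolve a resource relative to the executable, not the working directory.
            public static string Resolve(string relativePath)
            {
                return Path.Combine(BaseDir, relativePath);
            }
        }

        class Demo
        {
            static void Main()
            {
                // The current directory depends on the launcher (Nautilus, menu item, terminal)...
                Console.WriteLine("Working directory: " + Directory.GetCurrentDirectory());
                // ...while this path is stable no matter how the program was started.
                Console.WriteLine("Sound file: " + ResourcePaths.Resolve("sounds/intro.ogg"));
            }
        }

    This is only a sketch; the file name sounds/intro.ogg is made up for the example.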

    Read the article

  • Subterranean IL: Volatile

    - by Simon Cooper
    This time, we'll be having a look at the volatile. prefix instruction, and one of the differences between volatile in IL and C#.
    The volatile. prefix
    volatile is a tricky one, as there's varying levels of documentation on it. From what I can see, it has two effects: It prevents caching of the load or store value; rather than reading or writing to a cached version of the memory location (say, the processor register or cache), it forces the value to be loaded or stored at the 'actual' memory location, so it is then immediately visible to other threads. It forces a memory barrier at the prefixed instruction. This ensures instructions don't get re-ordered around the volatile instruction. This is slightly more complicated than it first seems, and only seems to matter on certain architectures. For more details, Joe Duffy has a blog post going into the details. For this post, I'll be concentrating on the first aspect of volatile.
    Caching field accesses
    To demonstrate this, I created a simple multithreaded IL program. It boils down to the following code:
      .class public Holder {
          .field public static class Holder holder
          .field public bool stop
          .method public static specialname void .cctor() {
              newobj instance void Holder::.ctor()
              stsfld class Holder Holder::holder
              ret
          }
      }
      .method private static void Main() {
          .entrypoint
          // Thread t = new Thread(new ThreadStart(DoWork))
          // t.Start()
          // Thread.Sleep(2000)
          // Console.WriteLine("Stopping thread...")
          ldsfld class Holder Holder::holder
          ldc.i4.1
          stfld bool Holder::stop
          call instance void [mscorlib]System.Threading.Thread::Join()
          ret
      }
      .method private static void DoWork() {
          ldsfld class Holder Holder::holder
          // while (!Holder.holder.stop) {}
      DoWork:
          dup
          ldfld bool Holder::stop
          brfalse DoWork
          pop
          ret
      }
    If you compile and run this code, you'll find that the call to Thread.Join() never returns - the DoWork spinlock is reading a cached version of Holder.stop, which is never being updated with the new value set by the Main method. Adding volatile to the ldfld fixes this:
      dup
      volatile.
      ldfld bool Holder::stop
      brfalse DoWork
    The volatile ldfld forces the field access to read direct from heap memory, which is then updated by the main thread, rather than using a cached copy.
    volatile in C#
    This highlights one of the differences between IL and C#. In IL, volatile only applies to the prefixed instruction, whereas in C#, volatile is specified on a field to indicate that all accesses to that field should be volatile (interestingly, there's no mention of the 'no caching' aspect of volatile in the C# spec; it only focuses on the memory barrier aspect). Furthermore, this information needs to be stored within the assembly somehow, as such a field might be accessed directly from outside the assembly, but there's no concept of a 'volatile field' in IL! How this information is stored with the field will be the subject of my next post.
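
    For comparison, the C#-level equivalent of the fix is to declare the field itself volatile, so that every access the compiler emits carries the volatile. prefix. A minimal sketch (my own example, mirroring the IL above):

        using System;
        using System.Threading;

        class Holder
        {
            // 'volatile' on the field makes all accesses volatile, unlike IL,
            // where the prefix applies only to the individual prefixed instruction.
            public static volatile bool Stop;
        }

        class Program
        {
            static void DoWork()
            {
                // Without 'volatile', the JIT could hoist the read and spin on a cached value.
                while (!Holder.Stop) { }
            }

            static void Main()
            {
                var t = new Thread(DoWork);
                t.Start();
                Thread.Sleep(2000);
                Console.WriteLine("Stopping thread...");
                Holder.Stop = true;
                t.Join();
            }
        }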

    Read the article

  • How do you exclude yourself from Google Analytics on your website using cookies?

    - by Cold Hawaiian
    I'm trying to set up an exclusion filter with a browser cookie, so that my own visits to my site don't show up in my Google Analytics. I tried 3 different methods and none of them have worked so far. I would like help understanding what I am doing wrong and how I can fix this.
    Method 1
    First, I tried following Google's instructions, http://www.google.com/support/analytics/bin/answer.py?hl=en&answer=55481, for excluding traffic by Cookie Content: create a new page on your domain, containing the following code:
      <body onLoad="javascript:pageTracker._setVar('test_value');">
    Method 2
    Next, when that didn't work, I googled around and found this Google thread, http://www.google.com/support/forum/p/Google%20Analytics/thread?tid=4741f1499823fcd5&hl=en, where the most popular answer says to use a slightly different code. SHS Analytics wrote:
      <body onLoad="javascript:_gaq.push(['_setVar','test_value']);">
    "Thank you! This has now set a __utmv cookie containing "test_value", whereas the original pageTracker._setVar('test_value') (which Google is still recommending) did not manage to do that for me (in Mac Safari 5 and Firefox 3.6.8)." So I tried this code, but it didn't work for me.
    Method 3
    Finally, I searched StackOverflow and came across this thread, http://stackoverflow.com/questions/3495270/exclude-my-traffic-from-google-analytics-using-cookie-with-subdomain, which suggests that the following code might work:
      <script type="text/javascript">
        var _gaq = _gaq || [];
        _gaq.push(['_setVar', 'exclude_me']);
        _gaq.push(['_setAccount', 'UA-xxxxxxxx-x']);
        _gaq.push(['_trackPageview']);
        // etc...
      </script>
    This script appeared in the head element in the example, instead of in the onload event of the body like in the previous 2 examples. So I tried this too, but still had no luck with trying to exclude myself from Google Analytics.
    Re-iterate question
    So, I tried all 3 methods above with no success. Am I doing something wrong? How can I exclude myself from my Google Analytics using an exclusion cookie for my browser?
    Update
    I've been testing this for several days now, and I've confirmed that the 2nd method of excluding yourself from tracking does indeed work. The problem was that the filter settings weren't properly applied to my profile, which has been corrected. See the accepted answer below.

    Read the article

  • Pictures rendered from above and below using an orthographic camera do not match

    - by Roy T.
    I'm using an orthographic camera to render slices of a model (in order to voxelize it). I render each slice both from above and below in order to determine what is inside each slice.
    The model I render is a simple 'T' shape constructed from two cubes. The cubes have the same dimensions and have the same Y (height) coordinate. See figure 1 for a render of it in Blender. I render this model once directly from above and once directly from below. My expectation was that I would get exactly the same image (except for mirroring over the y-axis). However, when I render using a very low resolution render target (25x25), the position (in pixels) of the 'T' is different when rendered from above as opposed to rendered from below. See figures 2 and 3. The pink blocks are not part of the original rendering, but I've added them so you can easily count/see the differences.
    Figure 2: the T rendered from above
    Figure 3: the T rendered from below
    This is probably due to what I've read about pixel and texel coordinates, which might be biased to the top-left as seen from the camera. Since I'm using the same 'up' vector for both of my cameras, my bias only shows on the x-axis. I've tried to change the position of the camera and its look-at by what I thought should be half a pixel. I've tried both shifting a single camera and shifting both cameras, and while I see some effect, I am not able to get a pixel-by-pixel perfect copy from both cameras.
    Here I initialize the camera and compute what I believe to be half a pixel. boundsDimX and boundsDimZ is a slightly enlarged bounding box around the model, which I also use as the width and height of the view volume of the orthographic camera.
      Matrix projection = Matrix.CreateOrthographic(boundsDimX, boundsDimZ, 0.5f, sliceHeight + 0.5f);
      Vector3 halfPixel = new Vector3(boundsDimX / (float)renderTarget.Width, 0,
                                      boundsDimY / (float)renderTarget.Height) * 0.5f;
    This is the code where I set the camera position and camera look-at targets:
      // Position camera
      if (downwards)
      {
          float cameraHeight = bounds.Max.Y + 0.501f - (sliceHeight * i);
          Vector3 cameraPosition = new Vector3
          (
              boundsCentre.X, // possibly adjust by half a pixel?
              cameraHeight,
              boundsCentre.Z
          );
          camera.Position = cameraPosition;
          camera.LookAt = new Vector3(cameraPosition.X, cameraHeight - 1.0f, cameraPosition.Z);
      }
      else
      {
          float cameraHeight = bounds.Max.Y - 0.501f - (sliceHeight * i);
          Vector3 cameraPosition = new Vector3
          (
              boundsCentre.X,
              cameraHeight,
              boundsCentre.Z
          );
          camera.Position = cameraPosition;
          camera.LookAt = new Vector3(cameraPosition.X, cameraHeight + 1.0f, cameraPosition.Z);
      }
    Main Question
    Now that you've seen all the problems and code, my main question is: how do I align both cameras so that they each render exactly the same image (mirrored along the Y axis)?
    Figure 1: the original model rendered in Blender

    Read the article

  • WebLogic Server Weekly for March 26th, 2012: WLS 1211 Update, Java 7 Certification, Galleria, WebLogic for DBAs, REST and Enterprise Architecture, Singleton Services

    - by Steve Button
    WebLogic Server 12c Certified with Java 7 for Production Use
    WebLogic Server 12c (12.1.1) has been certified with JDK 7 for development usage since December, and we have now completed JDK 7 certification for use with production systems. In doing so, we have updated the WebLogic Server 12c (12.1.1) distributions, incorporating fixes associated with JDK 7 support as well as some bundled patches that address several issues that have been discovered since the initial release. These updated distributions are available for download from OTN and will be beneficial for all WebLogic Server 12c (12.1.1) users in general. What's New | Release Notes | Download Here!
    Updated Oracle WebLogic Server 12.1.1.0 distribution
    Never one to miss a trick, Markus Eisele was one of the first to notice the WebLogic Server 12c update and post a blog about it: "Sources told me that as of Friday last week you have an updated version of WebLogic Server 12c on OTN." http://blog.eisele.net/2012/03/updated-oracle-weblogic-server-12110.html
    Using WebLogic Server 12c with Java 7 - Video
    To illustrate the use of Java 7 with WebLogic Server 12c, I put together a screencam showing the creation of a domain using Java 7, and then building and deploying a simple web application that uses Java 7 syntax to show it working.
    Ireland OUG Presentation: WebLogic for DBAs
    Simon Haslam posted his slides from a presentation he gave in Dublin on 21/3/12 at the OUG Ireland conference. In this presentation, he explains the core concepts and ideas behind WebLogic Server, walks through an installation, and offers some tips and common gotchas to avoid. Simon also covers some aspects of installing and using Enterprise Manager 12c. Note: I usually install the JVM and use the generic .jar installer rather than using an installer bundled with a JVM. http://www.slideshare.net/Veriton/weblogic-for-dbas-10h
    Slightly Retro: Jeff West on Enterprise Architecture and REST
    In this week's flashback, we look at Jeff West's blog from early 2011, where he provides some thoughtful opinions on enterprise architecture and innovation, then jumps into his views on REST: "After I progressed in my career and did more team-leading and architecture type roles I was 'educated' on what it meant to have Asynchronous and Long-Running processes as part of your Enterprise Application architecture. If I had a synchronous process then I needed a thread available to service the request and then provide the response." https://blogs.oracle.com/jeffwest/entry/weblogic_integration_wli_web_services_and_soap_and_rest_part_1
    Starting Managed Servers without an Administration Server using Node Manager and WLST
    Blogger weblogic-tips shows how to start a managed server without going through the Administration Server, using the Node Manager and WLST: connect WLST to a Node Manager by entering the nmConnect command. http://www.weblogic-tips.com/2012/02/18/starting-managed-servers-without-an-administration-server-using-node-manager-and-wlst/
    Using WebLogic Server Singleton Services
    WebLogic Server has supported the notion of a Singleton Service for a number of releases, in which WebLogic Server will maintain a single instance of a configured singleton service on one managed server within a cluster. This blog demonstrates how the singleton service can be accessed and used from applications deployed on the cluster. http://buttso.blogspot.com.au/2012/03/weblogic-server-singleton-services.html

    Read the article

  • How do I make the launcher progress bar work with my application?

    - by Kevin Gurney
    Background Research
    I am attempting to update the progress bar within the Unity launcher for a simple Python/GTK application created using Quickly called test; however, following the instructions in this video, I have not been able to successfully update the progress bar in the Unity launcher. In the Unity Integration video, Quickly was not used, so the way that the application was structured was slightly different, and the code used in the video does not seem to function properly without modification in a default Quickly ubuntu-application template application.
    Screenshots
    Here is a screenshot of the application icon as it is currently displayed in the Unity Launcher. Here is a screenshot of the kind of Unity launcher progress bar functionality that I would like (overlayed on the mail icon: wiki.ubuntu.com).
    Code
      class TestWindow(Window):
          __gtype_name__ = "TestWindow"

          def finish_initializing(self, builder):  # pylint: disable=E1002
              """Set up the main window"""
              super(TestWindow, self).finish_initializing(builder)
              self.AboutDialog = AboutTestDialog
              self.PreferencesDialog = PreferencesTestDialog
              # Code for other initialization actions should be added here.
              self.add_launcher_integration()

          def add_launcher_integration(self):
              self.launcher = Unity.LauncherEntry.get_for_desktop_id("test.destkop")
              self.launcher.set_property("progress", 0.75)
              self.launcher.set_property("progress_visible", True)
    Expected Behavior
    I would expect the above code to show a progress bar that is 75% full overlayed on the icon for the test application in the Unity Launcher, but the application only runs and displays no progress bar when the command quickly run is executed.
    Problem Investigation
    I believe that the problem is that I am not properly getting a reference to the application's main window; however, I am not sure how to properly fix this. I also believe that the line self.launcher = Unity.LauncherEntry.get_for_desktop_id("test.destkop") may be another source of complication, because Quickly creates .desktop.in files rather than ordinary .desktop files, so I am not sure if that might be causing issues as well. Perhaps another source of the issue is that I do not entirely understand the difference between .desktop and .desktop.in files. Does it make sense to make a copy of the test.desktop.in file, rename it test.desktop, and place it in /usr/share/applications in order for get_for_desktop_id("test.desktop") to reference the correct .desktop file?
    Related Research Links
    Although I am still not clear on the difference between .desktop and .desktop.in files, I have done some research on .desktop files and I have come across a couple of links:
    Desktop Entry Files (library.gnome.org)
    Desktop File Installation Directory (askubuntu.com)
    Unity Launcher API (wiki.ubuntu.com)
    Desktop Files: putting your application in the desktop menus (developer.gnome.org)
    Desktop Menu Specification (standards.freedesktop.org)

    Read the article

  • PowerShell – Show a Notification Balloon

    - by BuckWoody
    In my presentations for PowerShell I sometimes want to start a process (like a backup) that will take some time. I normally pop up a notification "balloon" at the start, then do the bulk of the work, and then pop up a balloon at the end to let me know it's done. You can actually try out this little sample (on a test system, of course) without any other code to see what it does. Then just put the other PowerShell commands in the # Do some work part. Oh - throw an icon (.ico file) in a c:\temp directory or point that somewhere else. (No, this probably isn't original. I can't remember where I saw the original code, but I've modified it a bit anyway, so if you're the original author and this looks slightly familiar, post a comment.)
      [void] [System.Reflection.Assembly]::LoadWithPartialName("System.Windows.Forms")
      $objBalloon = New-Object System.Windows.Forms.NotifyIcon
      $objBalloon.Icon = "C:\temp\Folder.ico"
      # You can use the value Info, Warning, Error
      $objBalloon.BalloonTipIcon = "Info"
      # Put what you want to say here for the Start of the process
      $objBalloon.BalloonTipTitle = "Begin Title"
      $objBalloon.BalloonTipText = "Begin Message"
      $objBalloon.Visible = $True
      $objBalloon.ShowBalloonTip(10000)
      # Do some work
      # Put what you want to say here for the completion of the process
      $objBalloon.BalloonTipTitle = "End Title"
      $objBalloon.BalloonTipText = "End Message"
      $objBalloon.Visible = $True
      $objBalloon.ShowBalloonTip(10000)
    Script Disclaimer, for people who need to be told this sort of thing: Never trust any script, including those that you find here, until you understand exactly what it does and how it will act on your systems. Always check the script on a test system or Virtual Machine, not a production system. Yes, there are always multiple ways to do things, and this script may not work in every situation, for everything. It's just a script, people. All scripts on this site are performed by a professional stunt driver on a closed course. Your mileage may vary. Void where prohibited. Offer good for a limited time only. Keep out of reach of small children. Do not operate heavy machinery while using this script. If you experience blurry vision, indigestion or diarrhea during the operation of this script, see a physician immediately.

    Read the article

  • Stencil buffer appears to not be decrementing values correctly

    - by Alex Ames
    I'm attempting to use the stencil buffer as a clipper for my UI system, but I'm having trouble debugging a problem I'm running in to. This is what I'm doing: a widget can pass a rectangle to the stencil clipper functions, which will increment the stencil buffer values that it covers. Then it will draw its children, which will only get drawn in the stencilled area (so that if they extend outside they'll be clipped). After a widget is done drawing its children, it pops that rectangle from the stack and in the process decrements the values in the stencil buffer that it has previously incremented. The slightly simplified code is below:
      static void drawStencil(Rect& rect, unsigned int ref)
      {
          // Save previous values of the color and depth masks
          GLboolean colorMask[4];
          GLboolean depthMask;
          glGetBooleanv(GL_COLOR_WRITEMASK, colorMask);
          glGetBooleanv(GL_DEPTH_WRITEMASK, &depthMask);

          // Turn off drawing
          glColorMask(0, 0, 0, 0);
          glDepthMask(0);

          // Draw vertices here
          ...

          // Turn everything back on
          glColorMask(colorMask[0], colorMask[1], colorMask[2], colorMask[3]);
          glDepthMask(depthMask);

          // Only render pixels in areas where the stencil buffer value == ref
          glStencilFunc(GL_EQUAL, ref, 0xFF);
          glStencilOp(GL_KEEP, GL_KEEP, GL_KEEP);
      }

      void pushScissor(Rect rect)
      {
          // increment things only at the current stencil stack level
          glStencilFunc(GL_EQUAL, s_scissorStack.size(), 0xFF);
          glStencilOp(GL_KEEP, GL_INCR, GL_INCR);
          s_scissorStack.push_back(rect);
          drawStencil(rect, s_scissorStack.size());
      }

      void popScissor()
      {
          // undo what was done in the previous push,
          // decrement things only at the current stencil stack level
          glStencilFunc(GL_EQUAL, s_scissorStack.size(), 0xFF);
          glStencilOp(GL_KEEP, GL_DECR, GL_DECR);
          Rect rect = s_scissorStack.back();
          s_scissorStack.pop_back();
          drawStencil(rect, s_scissorStack.size());
      }
    And this is how it's being used by the widgets:
      if (m_clip)
          pushScissor(m_rect);
      drawInternal(target, states);
      for (auto child : m_children)
          target.draw(*child, states);
      if (m_clip)
          popScissor();
    This is the result of the above code: there are two things on the screen, a giant test button, and a window with some buttons and text areas on it. The text area scroll box is set to clip its children (so that the text doesn't extend outside the scroll box). The button is drawn after the window and should be on top of it completely. However, for some reason the text area is appearing on top of the button. The only reason I can think of that this would happen is if the stencil values were not getting decremented in the pop, and when it comes time to render the button, since those pixels don't have the right stencil value it doesn't draw over. But I can't figure out what's wrong with my code that would cause that to happen.

    Read the article

  • Is reliance on parametrized queries the only way to protect against SQL injection?

    - by Chris Walton
    All I have seen on SQL injection attacks seems to suggest that parametrized queries, particularly ones in stored procedures, are the only way to protect against such attacks. While I was working (back in the Dark Ages), stored procedures were viewed as poor practice, mainly because they were seen as less maintainable, less testable, highly coupled, and locking a system into one vendor (this question covers some other reasons). Although, when I was working, projects were virtually unaware of the possibility of such attacks, various rules were adopted to secure the database against corruption of various sorts. These rules can be summarised as:
    No client/application had direct access to the database tables.
    All accesses to all tables were through views (and all the updates to the base tables were done through triggers).
    All data items had a domain specified.
    No data item was permitted to be nullable - this had implications that had the DBAs grinding their teeth on occasion, but it was enforced.
    Roles and permissions were set up appropriately - for instance, a restricted role to give only views the right to change the data.
    So is a set of (enforced) rules such as this (though not necessarily this particular set) an appropriate alternative to parametrized queries in preventing SQL injection attacks? If not, why not? Can a database be secured against such attacks by database (only) specific measures?
    EDIT
    Emphasis of the question changed slightly, in the light of the initial responses received. Base question unchanged.
    EDIT2
    The approach of relying on parameterized queries seems to be only a peripheral step in defense against attacks on systems. It seems to me that more fundamental defenses are both desirable, and may render reliance on such queries not necessary, or less critical, even to defend specifically against injection attacks. The approach implicit in my question was based on "armouring" the database, and I had no idea whether it was a viable option. Further research has suggested that there are such approaches. I have found the following sources that provide some pointers to this type of approach:
    http://database-programmer.blogspot.com
    http://thehelsinkideclaration.blogspot.com
    The principal features I have taken from these sources are:
    An extensive data dictionary, combined with an extensive security data dictionary
    Generation of triggers, queries and constraints from the data dictionary
    Minimize code and maximize data
    While the answers I have had so far are very useful and point out difficulties arising from disregarding parameterized queries, ultimately they do not answer my original question(s) (now emphasised in bold).
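
    For reference, and to make the comparison concrete, here is a minimal C# sketch of the parameterized-query approach the question is weighing the alternatives against (the connection string and table/column names are placeholders, not from any particular system):

        using System.Data.SqlClient;

        static class UserQueries
        {
            public static int CountUsersByName(string connectionString, string userName)
            {
                using (var connection = new SqlConnection(connectionString))
                using (var command = new SqlCommand(
                    "SELECT COUNT(*) FROM Users WHERE Name = @name", connection))
                {
                    // The value travels as a parameter and is never concatenated into the SQL text,
                    // so a crafted userName cannot change the structure of the statement.
                    command.Parameters.AddWithValue("@name", userName);
                    connection.Open();
                    return (int)command.ExecuteScalar();
                }
            }
        }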

    Read the article

  • ZFS pool broken after upgrading to 14.04 LTS

    - by cruiserparts
    Well, I have been putting off upgrading to 14.04 for fear that I would break something - actually, for fear that it would break ZFS (or that I would break it). I am basically slightly better than a novice at Linux. I spent the last couple of hours trying to get the pool back. Now I am at the stage where I don't think I have a complete failure, but I am worried that I may break it. So if you could help me not break it, and recover it, I would be thankful. My ZFS is file storage and not boot. It was working fine for a year and was working perfectly before the upgrade (scrub and everything was fine). I was confident that the upgrade would work (or at least that I could fix it), because I had upgraded once in the past, the pool went missing, but I was able to get it back. I have reinstalled zfs, the zfs utilities, and some dependencies (after searching this forum). I think what happened is that 14.04 deleted some config file, or specifies disk names differently, but I could be wrong. When I set the pool up originally, I was using specific device IDs, as I recall (because I did not want to break things if they got reassigned at boot). So see if this helps. I can confirm that the old mountpoint folders are there but empty.
      no talloc stackframe at ../source3/param/loadparm.c:4864, leaking memory
        pool: naspool1
       state: UNAVAIL
      status: One or more devices could not be used because the label is missing or invalid.
              There are insufficient replicas for the pool to continue functioning.
      action: Destroy and re-create the pool from a backup source.
         see: http://zfsonlinux.org/msg/ZFS-8000-5E
        scan: none requested
      config:

              NAME                                           STATE     READ WRITE CKSUM
              naspool1                                       UNAVAIL      0     0     0  insufficient replicas
                raidz1-0                                     UNAVAIL      0     0     0  insufficient replicas
                  scsi-SATA_WDC_WD1001FALS-_WD-WMATV0990825  UNAVAIL      0     0     0
                  scsi-SATA_WDC_WD1001FALS-_WD-WMATV2995365  UNAVAIL      0     0     0
                  scsi-SATA_WDC_WD10EARS-00_WD-WMAV51894349  UNAVAIL      0     0     0

      ___@ourserver:~$ sudo zpool import naspool1
      cannot import 'naspool1': a pool with that name is already created/imported, and no additional pools with that name were found
      ___@ourserver:~$ sudo zfs list
      no datasets available
    What other output can I post to help? I'm thinking the update deleted some ZFS config files. It seems like the pool exists, and certainly 3 perfectly working disks did not fail at once. I am worried that I may break something without a little bit of guidance. Thanks.

    Read the article

  • I'd like to switch from 32-bit to 64-bit within the same version

    - by Marty Fried
    I have a 32-bit installation of 11.10 on my 64-bit (4 GB) home AMD system. I have recently read up a bit on the 64-bit version, and it seems that it would be a marginally better choice for me now. I have read about several methods to help reinstall all the various apps, using either dpkg's get-selections/set-selections and dselect in various ways, or using Synaptic's save/get markings. The problem here is that I've read several variations, and I'm not sure which is best. I have enough disk space to do this with a brand new partition, so I'm not too worried about destroying anything, but I don't really want to make it my life's work, hence my appeal for expert tips.
    Since it's the same version, would it be safe to copy configuration files from the 32-bit system? I'd guess my home directory and /etc might be enough, and that would save at least most of the time to reconfigure. But are there differences in configuration files in either of these directories for 32 vs 64 bits that might cause problems? After reinstalling to 64-bit, I can then continue along the 64-bit path for upgrades, but I thought it would be easier to switch within the same version than to try to reinstall apps and upgrade at the same time.
    Some methods I've seen suggested, among others:
    A. From the Ubuntu forums:
    On your old system (assuming it is still working), start up Synaptic and go: File->Save Markings and choose a file name along with a location (like a USB drive) that you can use when you have installed your new system. You need to check on the bottom: "Save full state, not only changes". This file contains a list of all your currently installed packages, and when you have installed and booted up your new system (and configured your repositories to the best for your location - as we all do, don't we?) then start up Synaptic and go: File->Read Markings and point it at your saved file, and after that has completed then select Apply to kick off the download & installation of all of those packages you had installed previously!
    B. From the same discussion:
    According to section 6.4.9 of the Debian Reference Manual, the following will save both the list of packages installed and their debconf configuration:
      # dpkg --get-selections "*" > myselections   # or use \*
      # debconf-get-selections > debconfsel.txt
    and the following will reinstall and reconfigure them:
      # dselect update
      # debconf-set-selections < debconfsel.txt
      # dpkg --set-selections < myselections
      # apt-get -u dselect-upgrade    # or dselect install
    C. A variation on the above I've seen a lot, this one from Stack Overflow:
      dpkg --get-selections > package_list
    then on the new install:
      cat package_list | sudo dpkg --set-selections && sudo apt-get dselect-upgrade
    I don't really understand B, or why it's slightly different from many others.

    Read the article

  • Efficient inline templates and C++

    - by Darryl Gove
    I've talked before about calling inline templates from C++, and I've also talked about calling inline templates efficiently. This time I want to talk about efficiently calling inline templates from C++. The obvious starting point is that I need to declare the inline templates as being extern "C":
      extern "C"
      {
        int mytemplate(int);
      }
    This enables us to call it, but the call may not be very efficient because the compiler will treat it as a function call, and may produce suboptimal code based on that premise. So we need to add the no_side_effect pragma:
      extern "C"
      {
        int mytemplate(int);
        #pragma no_side_effect(mytemplate)
      }
    However, this may still not produce optimal code. We've discussed how the no_side_effect pragma cannot be combined with exceptions; well, we know that the code cannot produce exceptions, but the compiler doesn't know that. If we tell the compiler that information, it may be able to produce even better code. We can do this by adding the "throw()" keyword to the template declaration:
      extern "C"
      {
        int mytemplate(int) throw();
        #pragma no_side_effect(mytemplate)
      }
    The following is an example of how these changes might improve performance. We can take our previous example code and migrate it to C++, adding the use of a try...catch construct:
      #include <iostream>

      extern "C"
      {
        int lzd(int);
        #pragma no_side_effect(lzd)
      }

      int a;
      int c = 0;

      class myclass
      {
        int routine();
      };

      int myclass::routine()
      {
        try
        {
          for (a = 0; a < 1000; a++)
          {
            c = lzd(c);
          }
        }
        catch (...)
        {
          std::cout << "Something happened" << std::endl;
        }
        return 0;
      }
    Compiling this produces a slightly suboptimal code sequence in the hot loop:
      $ CC -O -xtarget=T4 -S t.cpp t.il
      ...
      /* 0x0014  23 */  lzd     %o0,%o0
      /* 0x0018  21 */  add     %l6,1,%l6
      /* 0x001c     */  cmp     %l6,1000
      /* 0x0020     */  bl,pt   %icc,.L77000033
      /* 0x0024  23 */  st      %o0,[%l7]
    There's a store in the delay slot of the branch, so we're repeatedly storing data back to memory. If we change the function declaration to include "throw()", we get better code:
      $ CC -O -xtarget=T4 -S t.cpp t.il
      ...
      /* 0x0014  21 */  add     %i1,1,%i1
      /* 0x0018  23 */  lzd     %o0,%o0
      /* 0x001c  21 */  cmp     %i1,999
      /* 0x0020     */  ble,pt  %icc,.L77000019
      /* 0x0024     */  nop
    The store has gone, but the code is still suboptimal - there's a nop in the delay slot rather than useful work. However, it's good enough for this example. The point I'm making is that the compiler produces the better code with both the "throw()" and the no_side_effect pragma.

    Read the article

  • What is the difference between development and R&D?

    - by MainMa
    I was asked by a colleague to explain clearly the difference between ordinary development and research and development (R&D) and was unable to do it. After reading Wikipedia, I still don't have the precise answer. According to Wikipedia (slightly modified): There are two primary models: In one model, the primary function is to develop new products; in the other model, the primary function is to discover and create new knowledge about scientific and technological topics for the purpose of uncovering and enabling development of valuable new products, processes, and services. The first model is confusing. Does it mean that development (not R&D) consists exclusively in adding new features to a product, solving bugs and doing maintenance? What if something which was previously developed as a new feature becomes a separate product? The second model is less confusing, but still, how to qualify whether something is new knowledge or existent knowledge which is just rediscovered? Later, Wikipedia adds that ordinary development is different from R&D because of its: nearly immediate profit or immediate improvement. It's still not clear enough. How to qualify "nearly immediate profit"? What if a task has an immediate profit but requires heavy research? Or if it is basic but has uncertain profit, like the enforcement of a common style over the codebase? For example, does it belong to development or R&D to: Develop an engine which abstracts the access to the database, simplifying and shortening enormously the code of other applications (existent or ones which will be written in future) which should access to the database? Establish a new service-oriented architecture for the entire organization of company resources, in order to move from a bunch of separate and autonomous applications to a set of well-organized, interconnected web services, like what is used by Amazon? Design a new communication protocol to allow faster replication of data between two data centers of the company? Conceive a new type of software testing while working on a specific product, knowing that this type of testing will improve/simplify the testing process? Prove that Functional programming is more appropriate than OOP for a specific application, based on evidence, logic and previous experience? Enhance the existent application by adding gestures on tactile screens, after doing studies and testing that shows that those gestures improve the productivity of the users by a ratio of at least 1.4 for a precise set of tasks? Find a way to strongly enhance the Power usage effectiveness (PUE) of a data center? Create a Domain-Specific Language (DSL)? In short, how could I determine whether I'm doing R&D while working on something?

    Read the article

  • SEO to ensure visibility for a narrow, non-competitive, non-commercial site

    - by hen3ry
    I'm webmaster of a non-commercial site in English. A non-native English speaker asked me why our site doesn't produce hits in Google searches she conducts for relevant keywords in her native language. I asked her for a list of keywords in her native language, and I naively tried inserting those into the META info in the page headers and waited a couple of weeks. No help. A little searching informed me that Google doesn't use the META info, and has not done so for a very long time. D'oh!
    To be entirely concrete, suppose the StackExchange folks want Russian speakers to find this site, Pro Webmasters. The direct translation into Russian of "webmaster" - thanks, Google Translator - is "вебмастер". (Not sure this will render properly, but that's not essential to my question.) Assuming Pro Webmasters has a common template for all pages it generates, inserting "вебмастер" into the Keywords META for that template won't help, it seems. What could StackExchange do to make this site visible to users searching with the Russian keyword "вебмастер"?
    Pretty much all the advice I've seen boils down to this, if I understand correctly: use the desired search term often (but not too often) among site content, and the problem will be solved. That's great, but I don't think sprinkling "вебмастер" visibly all over Pro Webmasters is going to fly. Just for completeness, crawlers must be long immune to the invisible-to-visitors scheme, e.g. formatting "вебмастер" in a tiny text size in a color the same as an existing background, e.g. white-over-white, or putting that text inside a div styled 'style="visibility: hidden"'. Probably some other equivalents.
    I can only think of one slightly effective method, along these lines: place an unobtrusive link on the common template to a page titled "for international users", and on that page list desired synonyms for "webmaster" in various languages. A test case - admittedly, just one - using my site implies that a Google search for "international users" вебмастер will produce a hit for this page, and thus make the site minimally visible, despite the fact that the page will almost never be visited. At the moment, anyway.
    Note: All the SEO discussions I have found so far are about competitive and - almost certainly - commercial sites. To repeat: my site is non-commercial, and it is about an obscure, narrow topic that is of interest to only a small number of people worldwide. This isn't about clawing our way to the top of competitive rankings, just making this content minimally visible to interested non-native English speakers. Ideas? TIA

    Read the article

  • What should filenames and URLs of images contain for SEO benefit?

    - by Baumr
    We know that good site architecture usually looks like this: example-company.com/ example-company.com/about/ example-company.com/contact/ example-company.com/products/ example-company.com/products/category/ example-company.com/products/category/productname/ Now, when it comes to Google Image search, it is clear that the img alt tag, filename/URL, and surrounding text (captions, headings, paragraphs) have an effect on ranking. I want to ask about the filename of the images that we should use (e.g. product-photo.jpg). ...but first about the URL: Often web developers stick all images in a single folder in the root: example-company.com/img/ — and I have stopped doing that. (I don't want to get into it, but basically, it seems more semantic for images which make up part of the content at each sub-directory) However, when all images appear in a folder, I feel that their filename needs to reflect what they are a bit more than usual, for example: example-company.com/img/example-company-productname-category.jpg It's a longer filename than just product.png, but as long as it's relevant, I see no problem with regards to SEO (unless you're keyword stuffing), and it could even help rank for keywords: "example company" "productname" "category" So no questions there. But what about when we have places images in the site architecture we outlined at the beginning? In other words, what if image URL paths look like this: example-company.com/products/category/productname/productname.jpg My question is, should the URL be kept short like above and only have the "productname" (and some descriptive keywords) as part of it's filename? Or, should it also include the "example-company" and "category"? Like so: example-company.com/products/category/productname/example-company-category-productname.jpg That seems much longer, and redundant when we look at the URL, but here are a few considerations. Images are often downloaded onto computers, and, to the average user, they lose their original URL and thus — it isn't clear where they came from. Also, some social networks, forums, and other platforms leave the filename intact when uploaded. (Many others rewrite it, for example, Pinterest and Facebook.) Another consideration, will this really help (even if ever so slightly) rank in Google Image Search, or at least inform Google that the product is something specific to the "example-company"? For example, what if this product can only be bought at this store and is the flagship product? In addition to an abundance of internal links to this product page, would having the "example company" name and "category" help it appear in "example company" searches? In other words, is less more?

    Read the article

  • One Year Oracle SocialChat - The Movie

    - by mprove
    You've just watched - hopefully - my first short movie. Thank you! Here is a bit of the backstage story. About 6 weeks ago, colleagues from SNBC (Social Network and Business Collaboration) announced a Social Use Case Competition. Entrants were expected to submit a video of 2 to 5 minutes duration on the Social Enterprise (our internal phrase for Enterprise 2.0). Hmm - I had a few vague ideas, but no script, no actors, no experience in film making. Really the best conditions to try something! I chose our weekly SocialChats as my main topic. But if you don't do Danish Dogma cinema, you still need a script. Hence I played around with the SocialChat's archive, and all of a sudden a script and even the actors appeared in front of me. The words that you have just seen are weekly topics, slightly abridged and rearranged to form a story. Exciting; next phase. How to get it onto digital celluloid? I have to confess I am still impressed by epic. (Keep in mind, epic was done in 2004.) And my actors - words - call for a typographic style already. The main part was done over a weekend with Apple Keynote. And I even found a wonderful matching soundtrack among my albums: Didge Goes World by Delago. I picked parts of Second Day and Seventh Day. Literally, the rhythm was set, and I "just" had to complete the movie. Tools used - apart from trial and error: Keynote, Pixelmator, GarageBand, iMovie. Finally, I want to mention that I am extremely thankful to BSC Music for granting permission to use the tracks for this short film! Without this sound it would have been just an ordinary slide show. - Internal note: The next SocialChat is on Death by PowerPoint vs. Presentation Zen. CU this Friday 3pm Greenwich / 7am Pacific.

    Read the article

  • Wireframing: A Day In the Life of UX Workshop at Oracle

    - by ultan o'broin
    The Oracle Applications User Experience team's Day in the Life (DITL) of User Experience (UX) event was run in Oracle's Redwood Shores HQ for Oracle Usability Advisory Board (OUAB) members. I was charged with putting together a wireframing session, together with Director of Financial Applications User Experience, Scott Robinson (@scottrobinson). Example of the stunning new wireframing visuals we used at the DITL events. We put on a lively show, explaining the basics of wireframing, the concepts, what it is and isn't, considerations on wireframing tool choice, and then imparting some tips and best practices. But the real energy came when the OUAB customers and partners in the room were challenged to do some wireframing of their own. Wireframing is about bringing your business and product use cases to life in real UX visual terms, by creating a low-fidelity drawing to iterate and agree on in advance of prototyping and coding what is to be finally built and rolled out for users. All the best people wireframe. Leonardo da Vinci used "cartoons" on some great works, tracing outlines first and using red ochre or charcoal dropped through holes in the tracing parchment onto the canvas to outline the subject. (Image distributed under Wikimedia Commons license)
    Wireframing an application's user experience design enables you to:
    Obtain stakeholder buy-in.
    Enable faster iteration of different designs.
    Determine the task flow navigation paths (in Oracle Fusion Applications, navigation is linked with user roles).
    Develop a content strategy (readability, search engine optimization (SEO) of content, and so on).
    Lay out the pages, widgets, groups of features, and so on.
    Apply usability heuristics early (no replacement for usability testing, but a great way to do some heavy lifting up front).
    Decide upstream which functional user experience design patterns to apply (out-of-the-box solutions that expedite productivity).
    Assess which Oracle Application Development Framework (ADF) or equivalent technology components can be used (again, developer productivity is enhanced downstream).
    We ran a lively hands-on exercise where teams wireframed a choice of application scenarios using the time-honored tools of pen and paper. Scott worked the floor like a pro, pointing out great use of features, best practices, and innovations, and making sure that the whole concept of wireframing, the gestalt, transferred. "We need more buttons!" The cry of the energized. Not quite. The winning wireframe session (online shopping scenario) from the Applications UX DITL event is shown. Great fun, great energy, and great teamwork were evident in the room. Naturally, there were prizes for the best wireframe. Well, actually, prizes were handed out to the other attendees too! An exciting, slightly different aspect to the delivery of this session made the wireframing event one of the highlights of the day. And definitely something we will repeat when we get the chance. Thanks to everyone who attended, contributed, and helped organize.

    Read the article

  • Another Custom Property Locator: a Library of Books

    - by Cindy McMullen
    Introduction
    The previous post gave an introduction to custom property locators and showed how to create one using JDeveloper. This post continues on the custom locator theme, with a slightly more complex locator: a library of books. It demonstrates using the DAO pattern to delegate data access from the Locator, which is likely how many actual backing stores will integrate with the Locator. You can imagine that, rather than a library of books, the data store might be a user database of sorts; the same sort of pattern would apply.
    This post uses the BookLocator example originally shown in the WebCenter documentation, but:
    has updated source code to reflect the final Property APIs
    includes the steps for generating the namespace and property definition files via JDeveloper
    details usage of the PropertyService APIs
    Getting Started
    If you're new to JDeveloper, you might want to check out this tutorial. There is also the "Jump-Start to using Personalization" blog post that you might find useful. Otherwise, if you're already familiar with both, you can skip those tutorials and jump right in to using JDeveloper.
    Download the BookLocator.zip file (which has been updated from the original post) and unzip it to a new directory. Start JDeveloper, navigate to the BookLocator.jws file, and open it. It should look something like this:
    The Properties Namespace file contains the property definitions and property set definitions you define. It is explained in more detail in the Namespace documentation. Although this example doesn't show it, the property set definitions have the ability to reference multiple locators per property. This can be done by right-clicking on the 'Locator Info' box. Configure the contents of the Locator Map by editing locators and mapping them to available property names in the property set definition.
    Compiling, deploying, and running your locator
    The rest of the steps in this tutorial basically follow those in the previous blog on custom locators, and won't be repeated here. A scenario to invoke your locator is included with the sample app: see BookProperties.scenarios_diagram above.
    Summary
    This post demonstrates a simple library of books accessed by the BookPropertyLocator via the DAO layer. This is a useful pattern for more realistic property retrievals, such as a backing user store. It also points out the possibility of retrieving properties from multiple locators, which would be quite handy for retrieving user attributes from multiple sources.

    Read the article

  • More on Map Testing

    - by Michael Stephenson
    I have been chatting with Maurice den Heijer recently about his CodePlex project, the BizTalk Map Testing Framework (http://mtf.codeplex.com/). Some of you may remember the article I did for BizTalk 2009 and 2006 about how to test maps, but with Maurice's project he is effectively looking at how to improve productivity and quality by building some useful testing features within the framework to simplify the process of testing maps. As part of our discussion we realized that we both had slightly different approaches to how we validate the output from the map. Put simply, Maurice does some XPath validation of the data in various nodes, whereas my approach for most standard cases is to use serialization so you can validate the output using normal MSTest assertions. I'm not really going to go into the pros and cons of each approach because I think there is a place for both, and I'm sure others have various approaches which work too. What would be great is for the map testing framework to provide support for different ways of testing, covering everything from simple cases to some very specialized scenarios. So, as agreed with Maurice, I have put together the sample discussed in the rest of this article to show how we can use the serialization approach to create and compare the input and output from a map in normal development testing. Prerequisites One of the common patterns I usually implement when developing BizTalk solutions is to use xsd.exe to create .NET classes for most of the schemas used within the solution. In the testing pattern I will take advantage of these .NET classes. The Map In this sample the map we will use is very simple and just concatenates some data from the input message to the output message. Hopefully the picture below illustrates this well. The Test In the test I'm basically taking the following actions: use the .NET class generated from the schema to create an input message for the map; serialize the input object to a file; run the map from .NET using the standard BizTalk test method which was generated for running the map; deserialize the output file from the map execution to a .NET class representing the output schema; and use MSTest assertions to validate things about the output message. The picture below shows this: As you can see, the code for this is pretty simple and it's all strongly typed, which means schema changes that affect the tests are easily picked up as compilation errors. I can then choose to have one test which validates most of the output from the map, or many specific tests covering individual scenarios within the map. Summary Hopefully this post illustrates a powerful yet simple way of effectively testing many BizTalk mapping scenarios. I will probably have more conversations with Maurice about these approaches and perhaps some of the above will be included in the mapping test framework. The sample can be downloaded from here: http://cid-983a58358c675769.office.live.com/self.aspx/Blog%20Samples/More%20Map%20Testing/MapTestSample.zip
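    To make the serialize / run map / deserialize / assert round trip a little more concrete, here is a minimal sketch of the same idea. It is written in Java with JAXB and JUnit purely for illustration; the actual tests described above use xsd.exe-generated C# classes, the BizTalk-generated test method for the map, and MSTest. Every name in the sketch (InputMessage, OutputMessage, runMap) is a hypothetical stand-in, and runMap only fakes what the real map would do.

```java
import java.io.File;
import javax.xml.bind.JAXB;
import javax.xml.bind.annotation.XmlRootElement;
import org.junit.Test;
import static org.junit.Assert.assertEquals;

// Hypothetical stand-ins for the schema-generated input and output classes.
@XmlRootElement
class InputMessage {
    public String firstName;
    public String lastName;
}

@XmlRootElement
class OutputMessage {
    public String fullName;
}

public class MapSerializationTestSketch {

    // Stand-in for executing the map; the real test would invoke the
    // BizTalk-generated test method against the serialized input file.
    private void runMap(File input, File output) {
        InputMessage in = JAXB.unmarshal(input, InputMessage.class);
        OutputMessage out = new OutputMessage();
        out.fullName = in.firstName + " " + in.lastName; // the sample map just concatenates
        JAXB.marshal(out, output);
    }

    @Test
    public void mapConcatenatesNames() throws Exception {
        // 1. Build a strongly typed input message and serialize it to a file.
        InputMessage in = new InputMessage();
        in.firstName = "Jane";
        in.lastName = "Doe";
        File inputFile = File.createTempFile("mapInput", ".xml");
        File outputFile = File.createTempFile("mapOutput", ".xml");
        JAXB.marshal(in, inputFile);

        // 2. Execute the map against the serialized input.
        runMap(inputFile, outputFile);

        // 3. Deserialize the output and assert against the typed result.
        OutputMessage out = JAXB.unmarshal(outputFile, OutputMessage.class);
        assertEquals("Jane Doe", out.fullName);
    }
}
```

    The shape of the test is the part that carries over regardless of language: build a typed input, serialize it, execute the transform against the file, deserialize the result, and assert on typed properties rather than raw XML.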

    Read the article

  • How to properly do weapon cool-down reload timer in multi-player laggy environment?

    - by John Murdoch
    I want to handle weapon cool-down timers in a fair and predictable way on both client and server. Situation: Multiple clients are connected to the server, which is doing hit detection / physics. Clients have different latency for their connections to the server, ranging from 50ms to 500ms. They want to shoot weapons with fairly long reload/cool-down times (assume exactly 10 seconds). It is important that they get to shoot these weapons close to the cool-down time, as if some clients manage to shoot sooner than others (either because they are "early" or the others are "late") they gain a significant advantage. I need to show the time remaining for reload on the player's screen. Clients can have clocks which are flat-out wrong (bad timezones, etc.). What I'm currently doing to deal with latency: The client collects server-side state in a history, tagged with server timestamps. The client assesses his time difference with server time: behindServerTimeNs = (behindServerTimeNs + (System.nanoTime() - receivedState.getServerTimeNs())) / 2. The client renders all state received from the server 200 ms behind his current time, adjusted by what he believes his time difference with server time is (whether due to wrong clocks or lag). If he has server states on both sides of that calculated time, he (mostly LERP) interpolates between them; if not, he (LERP) extrapolates. No other client-side prediction of movement (e.g., to make his vehicle seem more responsive) is done so far, but it may be added later. So how do I properly add weapon reload timers? My first idea would be for the server to send each player the time when his reload will be done with each world state update; the client then adjusts it for the clock difference and thus can estimate when the reload will be finished in client time (perhaps also accounting for the latency that the shoot message from client to server will incur?), and if the user mashes the "shoot" button after (or perhaps even slightly before?) that time, it sends the shoot event. The server would get the shoot event and consider the time the shot was made to be the server time when it was received. It would then discard the event if it is nowhere near reload time, execute it immediately if it is past reload time, and hold it for a few physics cycles until reload is done in case it was received a bit early. It does all seem a bit convoluted, and I'm wondering whether it will work (e.g., whether players with lower ping end up with better reload rates), and whether there are more elegant solutions to this problem.
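    For what it's worth, here is a minimal Java sketch of the client-side bookkeeping that idea implies: smoothing the estimated offset to server time (using the same averaging formula as above) and converting a server-supplied "reload done" timestamp into client time for the HUD countdown and the shoot gate. The class and method names are hypothetical, and the server would of course remain authoritative about whether a shot is actually accepted.

```java
// Minimal sketch of the client-side clock-offset smoothing and cooldown
// conversion described in the question; all names are hypothetical.
public class ReloadTimerClient {

    // Running estimate of how far this client's clock is ahead of server time (ns).
    private long behindServerTimeNs = 0;

    // Called for every state update received from the server.
    public void onServerState(long serverTimeNs) {
        long sample = System.nanoTime() - serverTimeNs;
        // Same smoothing as in the question: average the new sample with the old estimate.
        behindServerTimeNs = (behindServerTimeNs + sample) / 2;
    }

    // Convert a server-side "reload finished" timestamp into this client's clock.
    public long reloadDoneClientTimeNs(long reloadDoneServerTimeNs) {
        return reloadDoneServerTimeNs + behindServerTimeNs;
    }

    // Seconds remaining to display on the HUD (clamped at zero).
    public double secondsUntilReady(long reloadDoneServerTimeNs) {
        long remainingNs = reloadDoneClientTimeNs(reloadDoneServerTimeNs) - System.nanoTime();
        return Math.max(0, remainingNs) / 1_000_000_000.0;
    }

    // Client-side gate before sending a shoot event; the server still validates
    // the shot against its own clock and can hold or reject it near the boundary.
    public boolean canShoot(long reloadDoneServerTimeNs) {
        return System.nanoTime() >= reloadDoneClientTimeNs(reloadDoneServerTimeNs);
    }
}
```

    One design note: because the server validates the shot against its own clock, a client whose offset estimate drifts only risks having a slightly-early shot held for a few physics ticks or rejected, rather than gaining a faster effective reload.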

    Read the article

  • How to run software, that is not offered though package managers, that requires ia32-libs

    - by Onno
    I'm trying to install the Arma 2 OA dedicated server on a VirtualBox VM so I can test my own missions in a sandbox environment in a way that lets me offload them to another computer in my network. (The other computer is running the VM, but it's a Windows machine, and I didn't want to hassle with its installation.) It needs at least 2 GB, and preferably 4 GB, of RAM, so I thought I would install the AMD64 version of Ubuntu 13.10 to get this going. 'How do you run a 32-bit program on a 64-bit version of Ubuntu?' already explained how to install 32-bit software through apt-get and/or dpkg, but that doesn't apply in this case. The server is offered as a compressed download on the site of BI Studio, the developer of the Arma games. Its installation instructions are obviously slightly out of date with the current state of the art (probably because the state of the art has been updated quite recently :) ). It states that I have to install ia32-libs, which has now apparently been deprecated. Now I have to find out how to get the right packages installed to make sure that it will run. My experience level is like novice-intermediate when it comes to these issues. I've installed a lot of packages through apt-get; I've solved dependency issues in the past; I haven't installed much software without using package managers. I can handle myself with basic administrative work like editing conf files and such. I have just gone ahead and tried to install it without ia32-libs, instead installing gcc through apt-get to pull in the libraries after all. My reasoning is that gcc will include the files needed for backward compatibility, and on Linux all libs are (as far as I can tell) installed at a system level in /lib. So far it seems to start up. (I can connect to the game server through my in-game network browser, so it's communicating.) I'm not sure if there's any dependency checking going on when running the game server program, so I'm left with a couple of questions: Does 13.10 catch any calls to ia32-libs libraries and translate the calls to the right code on amd64? If it runs, does that mean that all required libraries have been loaded correctly, or is there a chance of it crashing later on when a library that was needed is missing after all? Is it necessary to do a workaround such as installing gcc? How do I find out what libraries I might need to run this software (or any other piece of 32-bit software that isn't offered through a package manager)?
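    For reference, the multiarch route that replaced ia32-libs on recent Ubuntu releases looks roughly like the sketch below. The exact set of :i386 packages varies per program, the server binary name is only a placeholder, and ldd's "not found" lines are one practical way to answer the last question about discovering which libraries a 32-bit binary still needs.

```bash
# Enable the 32-bit architecture alongside amd64 (multiarch replaces ia32-libs).
sudo dpkg --add-architecture i386
sudo apt-get update

# Pull in the core 32-bit C and C++ runtimes; many binaries need little more.
sudo apt-get install libc6:i386 libstdc++6:i386

# Ask the dynamic linker which shared libraries the binary wants.
# Anything reported as "not found" still needs a matching :i386 package.
# ("./arma2oaserver" is a placeholder for the actual 32-bit binary.)
ldd ./arma2oaserver | grep "not found"
```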

    Read the article

  • Getting a virus is *very* annoying

    - by bconlon
    I spent most of yesterday removing an annoying virus from my PC. I feel slightly foolish for getting one in the first place, but after so many years I guess I was always going to succumb eventually. I was also a little surprised at how various tools failed to remove it. The virus would redirect the browser to websites including 'licosearch', 'hugosearch' and 'facebook', and the disk would be thrashing away, infecting DLLs in some way. I had the full, up-to-date version of McAfee installed. This identified that there was an issue in some DLLs on the system and was able to 'fix' them. But they kept getting re-infected. So I installed Microsoft Security Essentials, and this too was able to identify and 'fix' the infected DLLs. The system scans take forever and I really expected better results. I also tried Malwarebytes, Hitman Pro, AVG and Sophos, to no avail. Eventually I thought I'd investigate myself. It turned out that on reboot, the virus would start 3 instances of Firefox.exe, which I'm guessing would do bad things including infecting as many DLLs on the system as possible. I removed Firefox and the virus cleverly then launched 3 instances of Chrome! So I uninstalled Chrome and, yes, it then started to launch 3 instances of iexplore.exe. If I'm honest, by this stage I was just seeing if it would be able to use any of the browsers! As it was starting these on reboot, I looked in my User Startup folder and there was a <randomly named>.exe and several log files. I deleted these and rebooted. When I looked, they had been recreated. So I then looked in the registry Run and RunOnce entries: HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows\CurrentVersion\Run. Sure enough, there were entries to run a file in C:\Program Files\<random name folder>\<random name file>.exe. I deleted this and rebooted, and it was fixed. I also looked in the event log and found a warning that Winlogon had failed to start the file C:\Program Files\<random name folder>\<random name file>.exe, so I also checked HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Winlogon and this entry had also been changed. Finally, I ran a full system scan to clean up any infected DLLs. I hope it's gone for good!

    Read the article
