Search Results

Search found 54098 results on 2164 pages for 'something broken'.


  • Optimizing collision engine bottleneck

    - by Vittorio Romeo
    Foreword: I'm aware that optimizing this bottleneck is not a necessity - the engine is already very fast. However, for fun and educational purposes, I would love to find a way to make the engine even faster.

    I'm creating a general-purpose C++ 2D collision detection/response engine, with an emphasis on flexibility and speed. Here's a very basic diagram of its architecture: the main class is World, which owns (manages the memory of) a ResolverBase*, a SpatialBase* and a vector<Body*>. SpatialBase is a pure virtual class which deals with broad-phase collision detection. ResolverBase is a pure virtual class which deals with collision resolution. The bodies communicate with the World's SpatialBase* through SpatialInfo objects, owned by the bodies themselves.

    There is currently one spatial class: Grid : SpatialBase, which is a basic fixed 2D grid. It has its own info class, GridInfo : SpatialInfo. The Grid class owns a 2D array of Cell*. The Cell class contains two collections of (non-owned) Body*: a vector<Body*> which contains all the bodies that are in the cell, and a map<int, vector<Body*>> which contains the same bodies divided into groups. Bodies, in fact, have a groupId int that is used for collision groups. GridInfo objects also contain non-owning pointers to the cells the body is in.

    As I previously said, the engine is based on groups. Body::getGroups() returns a vector<int> of all the groups the body is part of. Body::getGroupsToCheck() returns a vector<int> of all the groups the body has to check collision against. Bodies can occupy more than a single cell. GridInfo always stores non-owning pointers to the occupied cells.

    After the bodies move, collision detection happens. We assume that all bodies are axis-aligned bounding boxes. Here's how broad-phase collision detection works:

    Part 1: spatial info update. For each Body body: the top-leftmost and bottom-rightmost occupied cells are calculated. If they differ from the previous cells, body.gridInfo.cells is cleared and refilled with all the cells the body occupies (a 2D for loop from the top-leftmost cell to the bottom-rightmost cell). body is now guaranteed to know what cells it occupies. For a performance boost, it then stores, for every cell it occupies and for every group in body->getGroupsToCheck(), a pointer to that cell's vector<Body*> for that group (looked up in the cell's map<int, vector<Body*>>). These pointers get stored in gridInfo->queries. body is now guaranteed to have a pointer to every vector<Body*> of bodies in the groups it needs to check collision against.

    Part 2: actual collision checks. For each Body body: body clears and fills a vector<Body*> bodiesToCheck, which contains all the bodies it needs to check against. Duplicates are avoided (bodies can belong to more than one group) by checking whether bodiesToCheck already contains the body we're trying to add.

        const vector<Body*>& GridInfo::getBodiesToCheck()
        {
            bodiesToCheck.clear();
            for(const auto& q : queries)
                for(const auto& b : *q)
                    if(!contains(bodiesToCheck, b)) bodiesToCheck.push_back(b);
            return bodiesToCheck;
        }

    The GridInfo::getBodiesToCheck() method IS THE BOTTLENECK. The bodiesToCheck vector must be filled on every body update because bodies could have moved in the meantime. It also needs to prevent duplicate collision checks. The contains function simply checks whether the vector already contains a body, using std::find.
    Collision is then checked and resolved for every body in bodiesToCheck. That's it. I've been trying to optimize this broad-phase collision detection for quite a while now. Every time I try something other than the current architecture/setup, something doesn't go as planned, or I make assumptions about the simulation that are later proven false. My question is: how can I optimize the broad-phase of my collision engine while maintaining the grouped-bodies approach? Is there some kind of magic C++ optimization that can be applied here? Can the architecture be redesigned to allow for more performance? Actual implementation: SSVSCollision - Body.h, Body.cpp, World.h, World.cpp, Grid.h, Grid.cpp, Cell.h, Cell.cpp, GridInfo.h, GridInfo.cpp
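    One avenue worth trying, sketched below under stated assumptions (the queryStamp member, the simplified container types, and the function-local stamp counter are all illustrative, not part of the real engine): replace the O(n) std::find duplicate check with a per-body "visited" stamp, so deduplication becomes a single compare and assign.

        // Hedged sketch, not the engine's real code: replace the linear
        // std::find-based duplicate check with a per-body "visited" stamp.
        // Body::queryStamp and the simplified container types are assumptions
        // made for illustration only.
        #include <cstdint>
        #include <vector>

        struct Body
        {
            std::uint64_t queryStamp{0}; // assumed extra member
            // ... AABB, groupId, etc.
        };

        struct GridInfo
        {
            std::vector<std::vector<Body*>*> queries; // one vector per (cell, group)
            std::vector<Body*> bodiesToCheck;

            const std::vector<Body*>& getBodiesToCheck()
            {
                // In the real engine the counter would live on the World so that
                // every query gets a globally unique stamp; a function-local
                // static keeps the sketch self-contained.
                static std::uint64_t stampCounter{0};
                const auto currentStamp = ++stampCounter;

                bodiesToCheck.clear();
                for(auto* q : queries)
                    for(Body* b : *q)
                        if(b->queryStamp != currentStamp) // O(1) duplicate test
                        {
                            b->queryStamp = currentStamp;
                            bodiesToCheck.push_back(b);
                        }
                return bodiesToCheck;
            }
        };

    Each candidate body is now touched exactly once per query. An alternative with a similar effect is to push everything, then std::sort the pointers and apply std::unique; both are worth benchmarking against the current std::find approach.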

    Read the article

  • NTFS Issues in Windows 7 and 2008 R2 - 'Is it a Bug?'

    - by renewieldraaijer
    I have been using the various versions of the Microsoft Windows product line since NT4, and I really thought I knew the ins and outs of the NTFS filesystem by now. There were always a few rules of thumb to understand what happens when you move data around. These rules were: "If you copy data, the copied data will inherit the permissions of the location it is being copied to. The same goes for moving data between disk partitions. Only when you move data within the same partition are the permissions kept."

    Recently I was asked to assist in troubleshooting some NTFS-related issues. This forced me to have another good look at this theory. To my surprise I found out that this theory no longer completely holds. Apparently some things have changed since the release of Windows Vista / Windows 2008. Since the release of these operating systems, a move within the same disk partition results in the data inheriting the permissions of the location it is being moved into. A major change in the NTFS filesystem, you would think! Not quite! The above only applies when the move operation is performed using Windows Explorer. A move using the 'move' command from within a cmd prompt, for example, retains the NTFS permissions, just like before in Windows XP and older systems. Conclusion: Windows Explorer is responsible for changing the ACLs of the moved data. This is a remarkable change, but if you follow this theory, the resulting ACL after a move operation is still predictable.

    We could say that since Windows Vista and Windows 2008, a new rule set applies: "If you copy data, the copied data will inherit the permissions of the location it is being copied to. The same goes for moving data between disk partitions and within disk partitions. Only when you move data within the same partition using something other than Windows Explorer are the permissions kept."

    The above behavior should be unchanged in Windows 7 / Windows 2008 R2 compared to Windows Vista / 2008. But somehow the NTFS permissions are not so predictable in Windows 7 and Windows 2008 R2. Moving data within the same disk partition one time results in the permissions being kept, and the next time results in permissions inherited from the destination location. I will try to demonstrate this in a few examples:

    Example 1 (incorrect behavior): Consider two folders, 'Folder A' and 'Folder B', with the following permissions configured. Now we create the test file 'test file 1.txt' in 'Folder A' and afterwards move this file to 'Folder B' using Windows Explorer. According to the new theory, the file should inherit the permissions of 'Folder B', and therefore 'Group B' should appear in the ACL of 'test file 1.txt'. In the screenshot the resulting permissions are displayed: the permissions from the originating location are kept, while the permissions of 'Folder B' should have been inherited.

    Example 2 (correct behavior): Again, consider the same two folders. This time we make a small modification to the ACL of 'Folder A': we add 'Group C' to the ACL, and again we create a file in 'Folder A', which we name 'test file 2.txt'. Next, we move 'test file 2.txt' to 'Folder B'. Again, we check the permissions of 'test file 2.txt' at the target location. We can now see that the permissions are inherited. This is what should be happening, and it can be considered correct behavior for Windows Vista / 2008 / 7 / 2008 R2.
    It remains uncertain why this behavior is so inconsistent. At this time, this is under investigation with Microsoft Support. The investigation has been going on for the last two weeks, and it is beginning to look like there is no rational reason for this other than a bug in Windows Explorer in Windows 7 and 2008 R2. As soon as there is any certainty on this, I will note it here in this blog.

    The examples above are harmless tests performed on my own laptop. If you create the same set of folders and groups, and configure exactly the same permissions, you will see exactly the same behavior. Be sure to use Windows 7 or Windows 2008 R2.

    Initially the problem arose at a customer site where move operations on data on the file server by users would produce unpredictable results. This resulted in the wrong set of people having access permissions on data that they should not have permissions to. Of course this is something we want to prevent at all costs.

    I have also done several tests with move operations using the move command in a cmd prompt. That way the behavior is always consistent. The inconsistent behavior is only exposed when using Windows Explorer to initiate the move operation, and only when using Windows 7 or Windows 2008 R2 systems. It is evident that this behavior changes when the ACL of a folder has been changed, for example by adding an extra entry. The reason for this remains uncertain, though. To be continued…

    A Dutch version of this post can be found at: http://blogs.platani.nl/?p=612
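    For readers who want to reproduce the non-Explorer side of the test programmatically, here is a minimal Win32 sketch (the paths are made up for the example). A same-volume MoveFileExW is essentially a rename, so the file keeps its original security descriptor - the same behavior the post observes with the 'move' command.

        // Hedged illustration only: move a file within one NTFS volume without
        // involving Windows Explorer. The paths below are hypothetical.
        #include <windows.h>
        #include <iostream>

        int main()
        {
            const wchar_t* source      = L"C:\\Folder A\\test file 1.txt";
            const wchar_t* destination = L"C:\\Folder B\\test file 1.txt";

            // A same-volume move is a rename; the file's own ACL is not rewritten.
            if (!MoveFileExW(source, destination, MOVEFILE_WRITE_THROUGH))
            {
                std::wcerr << L"Move failed, error " << GetLastError() << L"\n";
                return 1;
            }

            // Inspect the result afterwards, e.g. with: icacls "C:\Folder B\test file 1.txt"
            std::wcout << L"Moved without Explorer; check the ACL at the target.\n";
            return 0;
        }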

    Read the article

  • Auto DOP and Concurrency

    - by jean-pierre.dijcks
    After spending some time in the cloud, I figured it is time to come down to earth and start discussing some of the new Auto DOP features some more. As Database Machines (the v2 machine runs Oracle Database 11.2) are effectively selling like hotcakes, it makes some sense to talk about the new parallel features in more detail. For basic understanding, make sure you have read the initial post. The focus there is on Auto DOP and queuing, which is to some extent the focus here as well. But now I want to discuss concurrency a little and explain some of the relevant parameters and their impact, specifically in a situation with concurrency on the system.

    The goal of Auto DOP: The idea behind calculating the Automatic Degree of Parallelism is to find the highest possible DOP (the ideal DOP) that still scales. In other words, if we were to increase the DOP beyond a certain point, we would see the performance curve tail off and the resource cost / performance ratio become less optimal. The ideal DOP is therefore the best resource/performance point for that statement.

    The goal of queuing: On a normal production system we should see statements running concurrently. On a Database Machine we typically see high concurrency rates, so we need to find a way to deal with both high DOPs and high concurrency. Queuing is intended to make sure we don't throttle down a DOP because other statements are running on the system, and that we stay within the physical limits of a system's processing power. Instead of making statements go at a lower DOP, we queue them to make sure they will get all the resources they want to run efficiently without trashing the system. The theory – and hopefully the practice – is that by giving a statement the optimal DOP, the sum of all statements runs faster with queuing than without queuing.

    Increasing the number of potential parallel statements: To determine how many statements we will consider running in parallel, a single parameter should be looked at: PARALLEL_MIN_TIME_THRESHOLD. The default value is set to 10 seconds. So far there is nothing new here, but do realize that anything serial (i.e. anything that stays under the threshold) goes straight into processing and is not considered in the rest of this post. Now, if you have a system with two groups of queries - serial short-running ones and potentially parallel long-running ones - you may want to worry only about the long-running ones with this parallel statement threshold. As an example, let's assume the short-running stuff runs on average between 1 and 15 seconds in serial (and the business is quite happy with that). The long-running stuff is in the realm of 1 – 5 minutes. It might be a good choice to set the threshold somewhere north of 30 seconds. That way the short-running queries all run serially as they do today (if it ain't broken, don't fix it), while the long-running ones are evaluated for (higher degrees of) parallelism. This makes sense because the longer-running ones are (at least in theory) more interesting to unleash a parallel processing model on, and the benefits of running these in parallel are much more significant (again, that is mostly the case).

    Setting a maximum DOP for a statement: Now that you know how to control how many of your statements are considered to run in parallel, let's talk about the specific degree any given statement will be evaluated at. As the initial post describes, this is controlled by PARALLEL_DEGREE_LIMIT.
    This parameter controls the degree on the entire cluster, and by default it is set to CPU (meaning it equals the Default DOP). For the sake of an example, let's say our Default DOP is 32. Looking at our 5-minute queries from the previous paragraph, a limit of 32 means that none of the statements evaluated for Auto DOP ever runs at more than a DOP of 32.

    Concurrently running a high DOP: A basic assumption about running high-DOP statements at high concurrency is that at some point in time you will run into a resource limitation (and this is true on any parallel processing platform!). And yes, you can then buy more hardware (e.g. expand the Database Machine in Oracle's case), but that is not the point of this post. The goal is to find a balance between the highest possible DOP for each statement and the number of statements running concurrently, but with an emphasis on running each statement at its highest-efficiency DOP.

    The PARALLEL_SERVERS_TARGET parameter is the all-important concurrency slider here. Setting this parameter to a higher number means more statements get to run at their maximum parallel degree before queuing kicks in. PARALLEL_SERVERS_TARGET is set per instance (so it needs to be set to the same value on all 8 nodes in a full-rack Database Machine). As a side note, this parameter is set in parallel server processes, not in DOP: with 2 processes per unit of DOP and a default corresponding to 2 * Default DOP, the default value works out to 4 * Default DOP processes.

    Let's say we have PARALLEL_SERVERS_TARGET set to 128. With our limit set to 32 (the default), we are able to run 4 statements concurrently at the highest DOP possible on this system before we start queuing. If these 4 statements are running, any next statement will be queued. To run a system at high concurrency, PARALLEL_SERVERS_TARGET should be raised from its default to be much closer (start with 60% or so) to PARALLEL_MAX_SERVERS. By using both PARALLEL_SERVERS_TARGET and PARALLEL_DEGREE_LIMIT you can easily control how many statements run concurrently at good DOPs without excessive queuing. Because each workload is a little different, it makes sense to plan ahead, look at these parameters, and set them based on your requirements.
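    As a quick back-of-the-envelope check, the sketch below simply mirrors the arithmetic of the example above (the numbers are the post's own; nothing else is implied about how queuing is implemented):

        // Hedged helper that reproduces the arithmetic of the example:
        // PARALLEL_SERVERS_TARGET = 128 and PARALLEL_DEGREE_LIMIT = 32 give
        // roughly 128 / 32 = 4 statements at the full limit before queuing starts.
        #include <cstdio>

        int main()
        {
            const int parallelServersTarget = 128; // per-instance target, in processes
            const int parallelDegreeLimit   = 32;  // Default DOP in the example

            std::printf("statements at max DOP before queuing: about %d\n",
                        parallelServersTarget / parallelDegreeLimit);
            return 0;
        }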

    Read the article

  • Slide Creation Checklist

    - by Daniel Moth
    PowerPoint is a great tool for conference (large audience) presentations, which is the context for the advice below. The #1 thing to keep in mind when you create slides (at least for conference sessions) is that they are there to help you remember what you were going to say (the flow and key messages) and to give the audience a visual reminder of the key points. Slides are not there for the audience to read what you are going to say anyway. If they were, what would be the point of you being there? Slides are not holders for complete sentences (unless you are quoting) – use Microsoft Word for that purpose, either as a physical handout or as a URL link that you share with the audience. When you dry-run your presentation, if you find yourself reading the bullets on your slide, you have missed the point. You have a message to deliver that can be delivered regardless of your slides – remember that. The focus of your audience should be on you, not the screen. Based on that premise, I have created a checklist that I go over before I start a new deck and also once I think my slides are ready.

    1. Turn AutoFit OFF. I cannot stress this enough.
    2. For each slide, explicitly pick a slide layout. In my presentations, I only use one Title Slide, a Section Header per demo slide, and for the rest of my slides one of these three: Title and Content, Title Only, Blank. Most people who are new to PowerPoint get whatever default layout the New Slide command creates for them and then start deleting and adding placeholders to that. You can do better than that (and you'll be glad you did if you also follow the 'Reset' item below).
    3. Every slide must have an image.
    4. Remove all punctuation (e.g. periods, commas) other than exclamation points and question marks (! ?).
    5. Don't use color or other formatting (e.g. italics, bold) for text on the slide.
    6. Check your animations. Avoid animations that hide elements that were on the slide (instead use a new slide and a transition). Ensure that animations that bring new elements in bring them into white space instead of over other existing elements. A good test is to print the slide and see that it still makes sense even without the animation.
    7. Print the deck in black and white choosing the "6 slides per page" option. Can I still read each slide without losing any information? If the answer is "no", go back and fix the slides so the answer becomes "yes".
    8. Don't have more than 3 bullet levels/indents. In other words: you type some text on the slide, hit 'Enter', hit 'Tab', type some more text, and repeat that sequence at most one final time. Ideally your outer bullets have only one level of sub-bullets (i.e. one level of indentation beneath them).
    9. Don't have more than 3-5 outer bullets per slide. Space them out evenly, e.g. with blank lines in between.
    10. Don't wrap. For each bullet on all slides check: does the text for that bullet wrap to a second line? If it does, change the wording so it doesn't, or create a terser bullet and make the original long text a sub-bullet of that one (thus decreasing the font size, but still being consistent) and have no wrapping.
    11. Use the same consistent fonts (i.e. font face, font size, etc.) throughout the deck for each level of bullet. In other words, don't deviate from the PowerPoint template you chose (or that was chosen for you).
    12. Go on each slide and hit 'Reset'. 'Reset' is a button on the 'Home' tab of the ribbon, or you can find the 'Reset Slide' menu when you right-click a slide in the left 'Slides' list. If your slides can survive that without you "fixing" things after the Reset action, you are golden!
    13. For each slide ask yourself: if I had to replace this slide with a single sentence that conveys the key message, what would that sentence be? This exercise leads you to merge slides (where the key message is split) or to split a slide into many if there were too many key messages on the slide in the first place. It can also lead you to redesign a slide so the text on it really is just explanation or evidence for the key message you are trying to convey.
    14. Get the length right. Is the length of this deck suitable for the time you have been given to present? If not, cut content! It is far better to deliver less in a relaxed, polished, engaging, memorable way than to deliver more content in great haste. As a rule of thumb, multiply 2 minutes by the number of slides you have, add the time you need for each demo, and check whether that adds up to more than the time you have been allotted. If it does, start cutting content – we've all been there and it has to be done.

    As always, rules and guidelines are there to be bent and even broken sometimes. Start with the above and on a slide-by-slide basis decide which rules you want to bend. That is smarter than throwing all the rules out from the start, right? Comments about this post are welcome at the original blog.

    Read the article

  • April Omnibus

    - by KKline
    I freely admit it - I'm a sluggard. I should be blogging a couple of times per week and tweeting in between. But, for some unknown reason, April has been a tough month to get this in gear. Hence, I'm putting out an omnibus post to cover all of the stuff I've been up to, instead of the one-offs I usually post when I've got something new to mention. Isn't it funny how life gets in the way of the stuff we want and intend to do? As they say, "The road to hell is paved with good intentions", or was that...(read more)

    Read the article

  • Lightweight PHP/HTML/CSS editor with code browser

    - by Nisto
    I'm looking for a freeware editor which has syntax highlighting and a code browser (or code suggestions/hints). Preferably a freeware license! I've tried out quite a few editors, but a lot of them are unfortunately very resource-heavy and provide a lot more functionality than I ever needed. So far, there are two editors that I really like and that are lightweight: jEdit and Notepad++. Unfortunately, Notepad++ doesn't have code browser support for both control structures and functions for PHP, and there's no code browser for HTML at all. I really liked jEdit as well, but there doesn't seem to be a code browser for it, except for maybe Completion - but it's a bothersome plugin, and it doesn't show the code browser unless you type something in and press CTRL+B. Other editors I've tried but wasn't satisfied with: Adobe Dreamweaver, CodeLobster PHP Edition, Aptana Studio, Komodo Edit, EditPlus, BlueFish, PHP Designer 2007 - Personal, PhpStorm, Scriptly, Eclipse, UltraEdit, Notepad2, EditPad Pro, Rapid PHP. EDIT: I'm using Windows XP.

    Read the article

  • SQL SERVER – SSIS Parameters in Parent-Child ETL Architectures – Notes from the Field #040

    - by Pinal Dave
    [Notes from Pinal]: SSIS is a very well-explored subject; however, there are so many interesting elements that whenever we read about it, we learn something new. One such concept is the Parent-Child ETL architecture relationship in SSIS. Linchpin People are database coaches and wellness experts for a data driven world. In this 40th episode of the Notes from the Field series, database expert Tim Mitchell (partner at Linchpin People) shares a very interesting conversation about understanding SSIS parameters in Parent-Child ETL architectures.

    In this brief Notes from the Field post, I will review the use of SSIS parameters in parent-child ETL architectures. A very common design pattern used in SQL Server Integration Services is one I call the parent-child pattern. Simply put, this is a pattern in which packages are executed by other packages. An ETL infrastructure built using small, single-purpose packages is very often easier to develop, debug, and troubleshoot than large, monolithic packages. For a more in-depth look at parent-child architectures, check out my earlier blog post on this topic.

    When using the parent-child design pattern, you will frequently need to pass values from the calling (parent) package to the called (child) package. In older versions of SSIS, this process was possible but not necessarily simple. When using SSIS 2005 or 2008, or even when using SSIS 2012 or 2014 in package deployment mode, you would have to create package configurations to pass values from parent to child packages. Package configurations, while effective, were not the easiest tool to work with. Fortunately, starting with SSIS in SQL Server 2012, you can now use package parameters for this purpose.

    In the example I will use for this demonstration, I'll create two packages: one intended for use as a child package, and the other configured to execute said child package. In the parent package I'm going to build a Foreach Loop container in SSIS, and use package parameters to pass in a value – specifically, a ClientID – for each iteration of the loop. The child package will be executed from within the Foreach Loop, and will create one output file for each client, with the source query and filename dependent on the ClientID received from the parent package.

    Configuring the Child and Parent Packages: When you create a new package, you'll see the Parameters tab at the package level. Clicking over to that tab allows you to add, edit, or delete package parameters. As shown above, the sample package has two parameters. Note that I've set the name, data type, and default value for each of these. Also note the column entitled Required: this allows me to specify whether the parameter value is optional (the default behavior) or required for package execution. In this example, I have one parameter that is required, and the other is not.

    Let's shift over to the parent package briefly and demonstrate how to supply values to these parameters in the child package. Using the Execute Package task, you can easily map variable values in the parent package to parameters in the child package. The Execute Package task in the parent package, shown above, has the variable vThisClient from the parent package mapped to the pClientID parameter shown earlier in the child package. Note that there is no value mapped to the child package parameter named pOutputFolder.
    Since this parameter has the Required property set to False, we don't have to specify a value for it, which will cause the parameter to use the default value we supplied when designing the child package.

    The last step in the parent package is to create the Foreach Loop container I mentioned earlier and place the Execute Package task inside it. I'm using an object variable to store the distinct client ID values, and I use that as the iterator for the loop (I describe how to do this in more depth here). For each iteration of the loop, a different client ID value will be passed into the child package parameter.

    The final step is to configure the child package to actually do something meaningful with the parameter values passed into it. In this case, I've modified the OLE DB source query to use the pClientID value in the WHERE clause of the query to restrict results for each iteration to a single client's data. Additionally, I'll use both the pClientID and pOutputFolder parameters to dynamically build the output filename. As shown, pClientID is used in the WHERE clause, so we only get the current client's invoices for each iteration of the loop. For the flat file connection, I'm setting the Connection String property using an expression that uses both of the parameters for this package, as shown above.

    Parting Thoughts: There are many uses for package parameters beyond a simple parent-child design pattern. For example, you can create standalone packages (those not intended to be used as a child package) and still use parameters. Parameter values may be supplied to a package directly at runtime by a SQL Server Agent job, through the command line (via dtexec.exe), or through T-SQL. You can also have project parameters as well as package parameters. Project parameters work in much the same way as package parameters, but they apply to all packages in a project, not just a single package.

    Conclusion: Of the numerous advantages of using the catalog deployment model in SSIS 2012 and beyond, package parameters are near the top of the list. Parameters allow you to easily share values from parent to child packages, enabling more dynamic behavior and better code encapsulation.

    If you want me to take a look at your server and its settings, or if your server is facing any issue, we can Fix Your SQL Server.

    Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: Notes from the Field, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL

    Read the article

  • Java Transaction Service without the application server

    - by johnny
    Is it possible to have a standalone Java application (no application server attached) that exposes some operations a client can call, with the client being the one to manage the transactions? I was thinking of having this application expose JNDI resources, getting hold of a java:comp/UserTransaction from there, also getting a bean from there, calling methods A, B and C on it, and coordinating the transaction from the client. The application I'm writing isn't complex enough to need a big application server around it, so I'm thinking of having a standalone JTS inside it that the client could interact with from a transactions point of view. I don't have much experience with distributed transactions and don't really know how to tackle the issue. Is it even possible? Am I getting myself into something beyond what a mere mortal (programmer) can handle? How can I approach this?

    Read the article

  • Can't login, kde loads, then back to kdm

    - by Daniel
    Hi @all (K)Ubuntu users, I installed Kubuntu 10.10 right after its release. (Ordinarily I use Ubuntu, but this time I wanted to try Kubuntu, too.) Now I can't log in to Kubuntu: when I log in with my username and password, KDE loads (I mean the splash screen), but when it is nearly ready, the screen goes dark and I'm back in the login manager. I have tried many things: a new user, installing gdm, and reinstalling (twice!). Thank you for helping. PS: Ubuntu works normally. Sorry for my bad English ;-) EDIT: The text console mode (or however it's called in English) isn't working at all either; it seems like a graphics bug or something similar. And there aren't very many hidden ".folders", just .kde, .config, .dbus, .fontconfig and some ".files".

    Read the article

  • What Did You Do? is a Bad Question

    - by Ajarn Mark Caldwell
    Brian Moran (blog | Twitter) did a great presentation today for the PASS Professional Development Virtual Chapter on The Art of Questions. One of the points that Brian made was that there are good questions and bad (or at least not-as-good) questions. Good questions tend to open up the conversation and engender positive reactions (perhaps even trust and respect) between the participants, while bad questions tend to close down a conversation, either through the narrow list of possible responses (e.g. strictly Yes/No) or through the negative reactions they can produce. And this explains why I so frequently had problems troubleshooting real-time problems with users in the past. I'll explain that in more detail below, but before we go on, let me recommend that you watch the recording of Brian's presentation to learn why the question "Why?" is often problematic in the U.S. and yet we so often resort to it. For a short portion (3 years) of my career, I taught basic computer skills and Office applications in an adult vocational school, and this gave me ample opportunity to do live troubleshooting of user challenges with computers. And like many people who ended up in computer-related jobs, I have also had numerous times when I was called upon by less computer-savvy individuals to help them with some challenge they were having, whether it was part of my job or not. One of the things that I noticed, especially during my time as a teacher, was that when I was helping somebody, typically the first question I would ask them was, "What did you do?" This seemed to me like a good way to start my detective work: trying to figure out what happened, what went wrong, how to fix it, and how to help the person avoid it again in the future. I always asked it in a polite tone of voice, as I was just trying to gather the facts before diving in deeper. However, 99.999% of the time I got the same answer: "Nothing!" For a long time this frustrated me because (remember, I'm in detective mode at that point) I knew it could not possibly be true. They HAD to have done SOMETHING…just tell me the last actions you took before this problem presented itself. But no, they always stuck with "Nothing". At which point, with frustration growing, and not a little bit of disdain for their lack of helpfulness, I would usually ask them to move aside while I took over their machine and got them out of whatever they had gotten themselves into. After a while I just grew used to the fact that this was the answer I would usually receive, but I always kept asking, because for the .001% of people who would actually tell me, I could then help them understand what went wrong and how to avoid it in the future. Now, after hearing Brian's talk, I understand what the problem was. Even though I meant to be purely in information-gathering mode, the words I was using, "What did YOU do?", have such a strong negative connotation that people would instinctively go into defense mode and stop sharing information that might make them look bad. Many of them probably were not even consciously aware that they had gone on the defensive, but the self-preservation instinct, especially self-preservation of the ego, is so strong that people would end up there without even realizing it. So, if "What did you do?" is a bad question, what would have been better?
    Well, one suggestion that Brian makes in his talk is something along the lines of, "Can you tell me what led up to this?" or "What was happening on the computer right before this came up?" It's subtle, but the point is to take the focus off the person and their behavior; instead, depersonalize it and talk about events from more of a third-party observer's point of view. With this approach, people will be more likely to talk about what the computer did and what they did in response to it, without feeling that the interrogation spotlight is on them. They are also more likely to mention other events that occurred around the same time that may or may not be related, but which could certainly help you troubleshoot a larger problem if it is not just user actions. And that is the ultimate goal of your asking the questions. So yes, it does matter how you ask the question; and there are such things as good questions and bad questions. Excellent topic, Brian! Thanks for getting the thinking gears churning! (Cross-posted to the Professional Development Virtual Chapter blog.)

    Read the article

  • Richmond Code Camp 2010.1

    - by andyleonard
    I can't believe it - Richmond Code Camp 2010.1 is less than two weeks away! Once again, the leadership team has outdone themselves. We have a bunch of great speakers, 9 tracks, 45 sessions - there's something for everyone. If you're going to be in the area and are interested, register today. :{> ...(read more)

    Read the article

  • IIS.net is running on IIS 8.0 Beta!

    - by The Official Microsoft IIS Site
    Here at Microsoft we're pretty passionate about testing our own software. We often ask our customers to test the pre-release versions of our new software products, and we wouldn't ask our customers to try something that we're unwilling to do. To that end, we are pleased to announce that IIS.net is fully running on IIS 8.0 Beta. Some of you may have noticed the "Running on IIS8" button above the IIS.net menu bar; this message lets you know that you're browsing to a server...(read more)

    Read the article

  • SQL SERVER – BI Quiz – Troubleshooting Cube Performance

    - by pinaldave
    My friend Jacob Sebastian runs the SQL BI Quiz competition, with 30 different questions, one on each day of the month. Participating in the quiz is an opportunity to learn something new and win great awards. Working with huge data is very common when it comes to data warehousing. It is necessary to create cubes on the data to make it meaningful and consumable. There are cases when retrieving the data from a cube takes a lot of time. Let us assume that your cube has been returning data very quickly, but suddenly one day it starts returning the data very slowly. What are the three things you will do in order to diagnose this? After diagnosing, what will you do to resolve the performance issue? Participate in my question over here. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Business Intelligence, Pinal Dave, PostADay, Readers Question, SQL, SQL Authority, SQL Performance, SQL Puzzle, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology

    Read the article

  • Universal navigation menu across domains - would it be considered duplicate content?

    - by Jon Harley
    Across different sites on different second-level domains exists a universal navigation bar with a collection of roughly 30 links. This universal bar is exactly the same for every page on each domain. The bar's HTML, CSS and JavaScript are all stored in a subfolder for each domain, and the HTML is embedded when the page is served, not injected on the client side. None of the links use any rel directives and they are as vanilla as can be. My question is about Google's duplicate content rule. Would something like this be considered duplicate content? Matt Cutts's blog post about duplicate content mentions boilerplate repetition, but then he mentions lengthy legalese. Since the text in this universal bar is brief and uses common terms, I wonder if the same rule applies. If this is considered duplicate content, what would be a good way to correct the problem?

    Read the article

  • SQL SERVER – SQLServer Quiz 2011 – Do you know your execution plan – Two questions – One Answer

    - by pinaldave
    My friend Jacob Sebastian has launched SQL Server Quiz 2011. This time, when he asked me to come up with a quiz question, I wanted to come up with something new that would make participants think. After thinking carefully, I came up with a question I would really like to solve myself. Here are the details:

    1) Using a single table, only once, in a single SELECT statement, generate an execution plan which has a JOIN operator. Explain the reason for the same.

    2) Using a single table, only once, in a single SELECT statement, generate an execution plan which has a parallelism operator. Explain the reason for the same.

    Bonus: Create a single query which satisfies both of the above statements.

    To answer this question and win exciting gifts, please visit the SQL Server Quiz website. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Pinal Dave, PostADay, Readers Contribution, Readers Question, SQL, SQL Authority, SQL Puzzle, SQL Query, SQL Scripts, SQL Server, SQL Tips and Tricks, SQLServer, T SQL, Technology

    Read the article

  • KISS and Tell - MVVM and the ViewModelLocator

    - by Bobby Diaz
    A popular topic that comes up when talking about MVVM is the use of a ViewModelLocator and the many different ways one can be implemented. Rather than getting into the pros and cons of when or why you should use it, I decided I would just post my version of a simple ViewModelLocator and let those who like it use it, and those who don't, well you know… :) First, a disclaimer: I have not used this code in a production application; it is just something I was tossing around while reading others' posts on the subject. The sample consists of three files (code listings in the original post): 1. MainView.xaml, 2. MainViewModel.cs, 3. ViewModelLocator.cs. I have a codepaste of the ViewModelLocator.cs file if you are interested but don't feel like re-typing the 50 lines of code! Enjoy! Additional Resources: Simple ViewModel Locator for MVVM: The Patients Have Left the Asylum - by John Papa; ViewModel binding with the Managed Extensibility Framework - by Jeremy Likness; MVVM Light Toolkit - by Laurent Bugnion

    Read the article

  • TechEd 2012 - last day

    - by Stefan Barrett
    I miss when TechEd was 5 days long! It's Thursday already and we are on the last day. The snacks haven't appeared, but more developer sessions have. Having access to the online schedule is very important, since the new sessions are usually the more interesting ones. On the whole, I think the wifi network has been worse this year - more blank spots, and more areas where performance is bad. I do think it's funny that I get better reception on my iPad than on my phones (iPad & Nokia/Microsoft). There seem to be fewer areas for people to plug in their own laptops this year. I do wonder, since more and more people have smartphones, and since most of the attendees are from America, perhaps they are not using the wifi but rather their own phone provider. If I were in Japan, I would probably do the same. About to attend a session on F#, something which is probably going to be important for me over the next year.

    Read the article

  • tech-ed 2012

    - by foxjazz
    So, I am not going to TechEd this year. I didn't get much benefit from going last year, but I did meet a lot of nice folks. I am working on my first official Silverlight project, and it's going OK. I'm having a few issues which I may resolve with WCF services. I am still green around the edges with this technology, but I am getting the hang of it slowly. I'm learning a lot about IQueryable and how to handle databases. Depending on what I am looking to do, I may use some messaging services within the app. It has been a hard month of study: learning Silverlight, jQuery, more CSS and website work, code-first, Node.js, SignalR. There seems to be a lot to do to keep up with the technology. I hope to post more often, but I am hammering on something new most of the time.

    Read the article

  • What should web programmers know about cryptography?

    - by davidhaskins
    Should programmers who build websites/web applications understand cryptography? I have no idea how most cryptographic algorithms work, and I really don't understand the differences between MD5/DES/AES/etc. Have any of you found any need for an in-depth understanding of cryptography? I haven't needed it, but I wonder if perhaps I'm missing something. I've used salt + MD5 hashing to protect passwords, and I tell web servers to use SSL. Beyond that, I can't say I've used much else, nor can I say with any certainty how secure these methods are. I only use them because other people claim they are safe. Have you ever found a need to use cryptography in web programming aside from these two simple examples?
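    As a point of reference for the password example, here is a minimal sketch of the salt-then-hash idea using a proper key-derivation function (PBKDF2 via OpenSSL) instead of a single MD5 pass; the salt size, iteration count, and output length are illustrative choices, not recommendations from the original question.

        // Hedged sketch: salted password hashing with PBKDF2 (OpenSSL).
        // This stands in for the salt+MD5 approach mentioned in the question;
        // the concrete numbers below are illustrative only.
        #include <openssl/evp.h>
        #include <openssl/rand.h>
        #include <cstdio>
        #include <string>
        #include <vector>

        int main()
        {
            const std::string password = "correct horse battery staple";

            unsigned char salt[16];                       // random per-user salt
            if (RAND_bytes(salt, sizeof salt) != 1) return 1;

            const int iterations = 100000;                // slows down brute force
            std::vector<unsigned char> derived(32);       // 256-bit derived hash

            if (PKCS5_PBKDF2_HMAC(password.c_str(), (int)password.size(),
                                  salt, sizeof salt, iterations,
                                  EVP_sha256(),
                                  (int)derived.size(), derived.data()) != 1)
                return 1;

            // Store salt + iteration count + derived hash; never the password itself.
            for (unsigned char b : derived) std::printf("%02x", b);
            std::printf("\n");
            return 0;
        }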

    Read the article

  • Oracle Fusion Applications User Experience Design Patterns: Feeling the Love after Launch

    - by mvaughan
    By Misha Vaughan, Oracle Applications User Experience

    In the first video by the Oracle Applications User Experience team on the Oracle Partner Network, Vice President Jeremy Ashley said that Oracle is looking to expand the ecosystem of support for Oracle's applications customers as they begin to assess their investment in and adoption of Oracle Fusion Applications. Oracle has made a massive investment to maintain the benefits of the Fusion Applications user experience. This summer, the Applications User Experience team released the Oracle Fusion Applications user experience design patterns. Design patterns help create consistent experiences across devices.

    The launch has been very well received. Angelo Santagata, Senior Principal Technologist and Fusion Middleware evangelist for Oracle, wrote this to the system integrator community: "The web site is the result of many years of Oracle R&D into user interface design for Fusion Applications and features a really cool web app which allows you to visualise the UI components in action." Grant Ronald, Director of Product Management, Application Development Framework (ADF), said: "It's a science I don't understand, but now I don't have to ... Now you can learn from the UX experience of Fusion Applications." Frank Nimphius, Senior Principal Product Manager, Oracle (ADF), wrote about the launch of the design patterns for the ADF Code Corner, and Jürgen Kress, Senior Manager EMEA Alliances & Channels for Fusion Middleware and Service Oriented Architecture (SOA), shared the news with his Partner Community.

    Oracle Twitter followers also helped spread the message about the design patterns launch. @bex (Brian Huff, founder and Chief Software Architect for Bezzotech, and Oracle ACE Director): "Nifty! The Oracle Fusion UX team just released new ADF design patterns." @maiko_rocha (Maiko Rocha, Oracle Consulting Solutions Architect and Oracle FMW engineer): "Haven't seen any other vendor offer such comprehensive UX Design Patterns catalog for free!" @zirous_chad (Chad Thompson, Senior Solutions Architect for Zirous, Inc. and ADF Developer): "Wow - @ultan and company did a great job with the Fusion UX Patterns."

    What is a user experience design pattern? A user experience design pattern is a re-usable, usability-tested functional blueprint for a particular user experience. Some examples are guided processes, shopping carts, and search and search results. Ultan O'Broin discusses the top design patterns every developer should know. The patterns that were just released are based on thousands of hours of end-user field studies, state-of-the-art user interface assessments, and usability testing. To be clear, these are functional design patterns, not the technical design patterns that developers may be used to working with. Because we know there is a gap, we are putting together some training that will help close that gap.

    Who should care? This is an offering targeted primarily at Application Development Framework (ADF) developers. If you are faced with the following questions regarding Fusion Applications, you will want to know and learn more:

    • How do I build something that looks like Fusion Applications?
    • How do I build a next-generation application?
    • How do I extend a Fusion Application and maintain the user experience?
    • I don't want to re-invent the wheel on the user interface, so where do I start?
    • I need to build something that will eventually co-exist with Fusion Applications. How do I do that?

    These questions are relevant to partners with an ADF competency, individual practitioners or small consultancies with an ADF specialization, and customers who are trying to shift their IT staff over to supporting Fusion Applications.

    Where can you find out more? Online: Our Fusion User Experience design patterns maven is Ultan O'Broin. The Oracle Partner Network is helping our team bring this first e-seminar to you in order to go into more detail on what this means and how to take advantage of it: Webinar: Build a Better User Experience with Oracle: Oracle Fusion Applications Functional Design Patterns, Sept 20, 2012, 10:30am-11:30am Pacific. Dial-In: 1.877.664.9137 / Passcode 102546; International: 706-634-9619; http://www.intercall.com/national/oracleuniversity/gdnam.html. Access the live event or via web conference at http://ouweb.webex.com and enter session number 598036234.

    At a user group event: The Fusion User Experience Advocates (FXA) are also going to be getting some deep-dive training on this content and can share it with local user groups.

    At OpenWorld: If you will be at OpenWorld this year, our own Ultan O'Broin will be visiting the ADF demopod to say hello, thanks to Shay Shmeltzer, Senior Group Manager for ADF outbound communication, and at the OTN lounge: Monday 10-10:45, Tuesday 2:15-2:45, Wednesday 2:15-3:30 – Oracle JDeveloper and Oracle ADF, Moscone South, Right - S-207; "ADF Meet and Greet", OTN Lounge, Wednesday 4:30. And I cannot talk about OpenWorld and ADF without mentioning Chris Muir's ADF EMG event: the Year After the Year Of the ADF Developer – Sunday, Sept 30 of OpenWorld. Chris has played host to Ultan and the Applications user experience message for his online community and is now a seasoned UX expert.

    Expect to see additional announcements about expanded training on similar topics in the future.

    Read the article

  • Does programming knowledge have a half-life?

    - by Gary Rowe
    In answering this question, I asserted that programming knowledge has a half-life of about 18 months. In physics, radioactive decay is the process by which a radioactive element transforms into something less energetic, and the half-life is the measure of how long it takes for this process to leave only half of the material remaining. A parallel concept might be that over time our programming knowledge ceases to be the current idiom and eventually becomes irrelevant. Noting that a half-life is asymptotic (so some knowledge will always be relevant), what are your thoughts on this? Is 18 months a good estimate? Is it even the case? Does it apply to design patterns, but over a longer period? What are the inherent advantages/disadvantages of this half-life? Update: Just found this question, which covers the material fairly well: "Half of everything you know will be obsolete in 18-24 months" = ( True, or False? )
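    To make the analogy concrete, here is a small illustration of what an 18-month half-life would imply; it simply applies the standard decay formula, purely for the sake of argument, to "fraction of knowledge still current".

        // Hedged illustration of the 18-month half-life claim: fraction of
        // knowledge still current after t months, using N(t) = 0.5^(t / 18).
        // Note the curve is asymptotic - it never quite reaches zero.
        #include <cmath>
        #include <cstdio>
        #include <initializer_list>

        int main()
        {
            const double halfLifeMonths = 18.0; // the figure asserted in the post

            for (int months : {0, 18, 36, 54, 72})
            {
                const double remaining = std::pow(0.5, months / halfLifeMonths);
                std::printf("after %2d months: %5.1f%% still current\n",
                            months, remaining * 100.0);
            }
            return 0;
        }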

    Read the article

  • How do you decide site availability requirements?

    - by Nathan Long
    I work on a web application to file a specific kind of county taxes. Our company wants our state to mandate that counties must accept electronic filings (as opposed to paper) from any system that meets some sensible requirements for uptime, security, data validation, etc. (Yes, this would help us as a business, but it would also force county governments to be more efficient.) We're creating a draft of those requirements to be reviewed and tweaked with the state. One of the sections is "availability." We want to specify something reasonably high, but not so high that any unexpected problem will get us (or a competitor) penalized. How do we decide what's reasonable for availability requirements?
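    One practical way to ground the discussion (a sketch only; the percentages are common industry tiers, not figures from the draft requirements) is to translate each candidate availability target into the downtime it actually allows per year:

        // Hedged sketch: convert availability targets into allowed downtime,
        // which makes "reasonably high" concrete when drafting requirements.
        #include <cstdio>
        #include <initializer_list>

        int main()
        {
            const double hoursPerYear = 365.25 * 24.0;

            for (double availability : {0.99, 0.995, 0.999, 0.9999})
            {
                const double downtimeHours = (1.0 - availability) * hoursPerYear;
                std::printf("%.2f%% uptime -> about %5.1f hours of downtime per year\n",
                            availability * 100.0, downtimeHours);
            }
            return 0;
        }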

    Read the article

  • What's the best way of marketing to programmers?

    - by Stuart
    Disclaimer up front - I'm definitely not going to include any links in here - this question isn't part of my marketing! I've had a few projects recently where the end product is something that developers will use. In the past I've been on the receiving end of all sorts of marketing - as a developer I've gotten no end of junk - 1000s of pens, tee-shirts and mouse pads; enough CDs to keep my desk tea-free; some very useful USB keys with some logos I no longer recognise; a small forest's worth of leaflets; a bulging spam folder full of ignored emails, etc... So that's my question - What are good ways to market to developers? And as an aside - are developers the wrong people to target? - since we so often don't have a purchasing budget anyways!

    Read the article

  • Expected sprint completion rate and load in scrum?

    - by bjarkef
    Recently at work there has been an increased focus on completion rate and load on the developers in our sprints. By completion rate I mean: if we plan 20 user stories for a sprint, what percentage of these user stories is closed at the end of the sprint. And by load I mean: if we have a sprint with 3 developers at 60 hours each, i.e. 180 hours for the sprint, how many hours' worth of user stories do we schedule for that sprint. So I am really interested in others' experience with this; I guess this is something everybody working with Scrum deals with. My question is: what completion rate and load are expected/usual, and how is your team doing with respect to these parameters?
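    For clarity, the two metrics as defined above boil down to simple ratios; the sketch below just restates them, using the numbers from the question plus hypothetical outcomes for illustration.

        // Hedged sketch of the two metrics described in the question.
        // Planned stories and capacity come from the post; the "closed" and
        // "scheduled" figures are hypothetical, for illustration only.
        #include <cstdio>

        int main()
        {
            const int planned = 20;                 // stories planned for the sprint
            const int closed  = 16;                 // hypothetical stories closed

            const double capacityHours  = 3 * 60.0; // 3 developers x 60 hours = 180
            const double scheduledHours = 150.0;    // hypothetical scheduled work

            const double completionRate = 100.0 * closed / planned;
            const double load           = 100.0 * scheduledHours / capacityHours;

            std::printf("completion rate: %.0f%%, load: %.1f%% of capacity\n",
                        completionRate, load);
            return 0;
        }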

    Read the article

  • How to Print or Save a Directory Listing to a File

    - by Lori Kaufman
    Printing a directory listing is something you may not do often, but when you need to print a listing of a directory with a lot of files in it, you would rather not manually type the filenames. You may want to print a directory listing of your videos, music, ebooks, or other media. Or someone at work may ask you for a list of the test case files you have created for the software you're developing, or a list of the chapter files for the user guide, etc. If the list of files is small, writing it down or manually typing it out is not a problem. However, if you have a lot of files, automatically creating a directory listing gets the task done quickly and easily. This article shows you how to write a directory listing to a file using the command line and how to use a free tool to print or save a directory listing in Windows Explorer.
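    If you'd rather script it than use the command-line approach or a separate tool, a small program can do the same job; the sketch below (the folder path is a placeholder) walks a directory tree and writes one path per line to listing.txt.

        // Hedged sketch: a programmatic alternative for saving a directory
        // listing to a file, using std::filesystem (C++17). The folder path
        // is a placeholder for the example.
        #include <filesystem>
        #include <fstream>
        #include <iostream>

        namespace fs = std::filesystem;

        int main()
        {
            const fs::path folder = "C:/Users/Public/Videos"; // hypothetical folder
            std::ofstream out("listing.txt");

            // recursive_directory_iterator also walks subfolders; use
            // fs::directory_iterator instead for the top level only.
            for (const auto& entry : fs::recursive_directory_iterator(folder))
                out << entry.path().string() << '\n';

            std::cout << "Wrote listing.txt\n";
            return 0;
        }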

    Read the article
