Search Results

Search found 12397 results on 496 pages for 'maybe'.


  • Approach for packing 2D shapes while minimizing total enclosing area

    - by Dennis
    Not sure on my tags for this question, but in short: I need to solve a problem of packing industrial parts into crates while minimizing the total containing area. These parts are motors, pumps, or custom-made components, and they have quite unusual shapes. For some, it may be possible to assume that a part is a rectangular cuboid, but some are not so simple, i.e. they take a shape more like that of a hammer or the letter T. With those (assuming a 2D shape), by alternating the direction of top and bottom, one can pack more objects into the same space than if all tops faced the same direction. [Crude ASCII diagram of alternating "T"-shaped parts appeared here; the columns were garbled in extraction.]

    Right now we are solving the problem something like this:

    1. Using CAD software, make actual models of how things fit in crate boxes.
    2. Make estimates of actual crate dimensions and write them into an Excel file.

    Step (1) is a crazy amount of work, and as a result we have only a limited number of possible entries in (2), the Excel file. The good thing is that programming this is relatively easy: given a combination of products to go into crates, we do a lookup, and if an entry exists in the Excel file (or database), we bring it out. If it doesn't, we say "sorry, no data!". I don't necessarily want to go full force on making up some crazy algorithm that, given a geometrical part description, can align, rotate, and figure out the best packing of parts into a crate given its shape, but maybe I do.

    Question: assuming that I can represent my parts in 2D (how is to be determined), and that some parts look like the letter T while some look like rectangles, which algorithm can I use to get a good estimate of the dimensions of the encompassing area, while ensuring that the parts are packed into the minimal possible area, to minimize crating/shipping costs? Are there approximation algorithms? Seeing how this can get complex, is there an existing library I could use?

    My thought / approach: my naive approach would be to define a way to describe the position of parts, place the first part, and compute the total enclosing area and dimensions. Then place the 2nd part at 0-degree orientation, repeat, place it at 180-degree orientation, repeat (for my case I don't think 90-degree rotations will be meaningful, due to the long lengths of the parts). Proceed by brute force, "tacking on" other parts to the enclosing area until all parts are processed. I may have to shift some parts a tad (see the "T" example above), which adds a layer of 2D complexity rather than 1D, and I am not sure how to approach that. One idea I have is genetic algorithms, but I think those will take up too much processing power and time.

    I will need to look out for shape collisions, as well as add extra padding space, since we are talking about real parts with irregularities rather than perfect imaginary blocks. I'm afraid this can get geometrically messy fairly fast, and I'd rather keep things simple if I can. But what if the best (practical) solution is to pack things into different crate boxes rather than just one? That gets a bit more tricky. There is a human element involved as well: some parts must go into the same box and are thus a constraint to be considered; some parts that are not the same are sometimes grouped together for shipping and can be treated as a single grouped item; and sometimes customers want things shipped their way, which adds human constraints, so there will have to be some customization.
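
    As a concrete starting point for the brute-force idea above: if each part is reduced to its bounding rectangle, the classic first-fit decreasing-height "shelf" heuristic is a known approximation for this kind of strip packing. Below is a minimal sketch under that simplifying assumption; every name is illustrative, not from any particular library.

        using System;
        using System.Collections.Generic;
        using System.Linq;

        static class ShelfPacker
        {
            // Packs bounding rectangles onto horizontal shelves of a fixed crate
            // width; returns the total height used, so the enclosing area is
            // crateWidth * returned height.
            public static int Pack(IEnumerable<(int W, int H)> parts, int crateWidth)
            {
                int shelfY = 0, shelfHeight = 0, cursorX = 0;
                // Decreasing height: the tallest remaining part anchors each shelf.
                foreach (var part in parts.OrderByDescending(x => x.H))
                {
                    if (cursorX + part.W > crateWidth)   // shelf full: open a new one
                    {
                        shelfY += shelfHeight;
                        cursorX = 0;
                        shelfHeight = 0;
                    }
                    // Part occupies (cursorX, shelfY)..(cursorX + part.W, shelfY + part.H).
                    cursorX += part.W;
                    shelfHeight = Math.Max(shelfHeight, part.H);
                }
                return shelfY + shelfHeight;
            }
        }

    Handling the interlocking-T trick would mean replacing the bounding rectangle with the two alternating profiles (0 and 180 degrees) and testing both at each placement, which is exactly where the problem stops being one-dimensional.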

    Read the article

  • Framework for Everything - Where to begin? [Longer post]

    - by SquaredSoft
    Back story of this question; feel free to skip down for the specific question. Hello, I've been very interested in the idea of abstract programming for the last few years. I've made about 30 attempts at creating a piece of software that is capable of almost anything you throw at it. Some of these attempts have taken upwards of a year, and while I got close, I never released anything beyond my compiler. This is something I've always tried wrapping my head around, and something is always missing. From the title, I'm sure you're assuming, "Yes of course, you noob! You can't account for everything!" To which I have to reply, "Why not?"

    To give you some background into what I'm talking about: this all started with writing maybe a shade-of-gray-hat SEO software. I found myself constantly having to create similar, but slightly different, sets of code. I've gone through as many iterations of ways to communicate over HTTP as the universe has particles. "How many times am I going to have to write this multi-threaded class?" is something I found myself asking a lot. Sure, I could create a class library and just work with that, but I always felt I could optimize what I had, which was often a large undertaking and typically involved frequent use of the CTRL+A keyboard shortcut mixed with the delete button.

    It dawned on me that it was time to invest in a plugin system. This would allow me to simply add snippets of code as time went on, keep pieces under Subversion, and distribute small chunks of code rather than something that encompasses only a specific function or design. This comes with its own complexity, of course, and by the time I had finished the software scope for this addition, it hit me that I would want to add to everything in the software, not just a new HTTP method or automation code for a specific website. Great, we're getting more abstract. However, the software that I have in my mind comes down to quite a few questions regarding its execution. I have to have some parameters for what I am going to do. After writing down what the perfect software would do in my mind, I came up with this list of requirements:

    - It should be able to use networking.
    - A "macro" or "expression" system, which would allow people to do something like: =First(=ParseToList(=GetUrl("http://www.google.com?q=helloworld!"), Template.Google))
    - Multithreaded.
    - Able to add UI elements through some type of XML, so people can make their own addons, etc.
    - Can use third-party APIs through the plugins, such as Microsoft CRM, Exchange, etc.

    This would allow the software to be used for essentially everything: really, any task you wish to automate, in a simple way. Making the UI was also extremely hard. How do you do all of this? It's very difficult.

    So, my question: with so many attempts at this, I'm out of ideas for how to successfully complete it. I have a very specific idea in my mind, but I keep failing to execute it. I'm a self-taught programmer. I've been doing it for years and work professionally in it, but I've never encountered something as complex and in-depth as a system which essentially does everything. Where would you start? What are the best practices for design? How can I avoid constantly having to go back and optimize my software? What can I do to generalize this and draw everything out to completion? These are things I struggle with. P.S., I'm using C# as my main language. I feel like in this example I might be hitting the outer limit of the language, although I don't know if that is the case or if I'm just a bad programmer. Thanks for your time.
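
    P.P.S. For what it's worth, the plugin seam described above often starts as small as this. A minimal sketch, with every name illustrative rather than an existing API:

        using System;
        using System.Collections.Generic;

        // The host compiles against this contract only; concrete plugins ship in
        // separate assemblies and are discovered at runtime (e.g. via reflection).
        public interface IPlugin
        {
            string Name { get; }
            object Execute(IDictionary<string, object> args);
        }

        public sealed class PluginHost
        {
            private readonly Dictionary<string, IPlugin> plugins =
                new Dictionary<string, IPlugin>(StringComparer.OrdinalIgnoreCase);

            public void Register(IPlugin plugin) { plugins[plugin.Name] = plugin; }

            // An expression like =GetUrl(...) would resolve to a registered plugin
            // name plus an argument bag, so nesting becomes nested Run calls.
            public object Run(string name, IDictionary<string, object> args)
            {
                return plugins[name].Execute(args);
            }
        }

    The macro system then reduces to a parser that rewrites =First(=ParseToList(...)) into nested Run calls, which keeps the host itself small.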

    Read the article

  • Violation of the DRY Principle

    - by Onorio Catenacci
    I am sure there's a name for this anti-pattern somewhere; however, I am not familiar enough with the anti-pattern literature to know it. Consider the following scenario: or0 is a member function in a class. For better or worse, it's heavily dependent on class member variables. Programmer A comes along and needs functionality like or0, but rather than calling or0, Programmer A copies and renames the entire class. I'm guessing that she doesn't call or0 because, as I say, it's heavily dependent on member variables for its functionality. Or maybe she's a junior programmer and doesn't know how to call it from other code. So now we've got or0 and c0 (c for copy). I can't completely fault Programmer A for this approach; we all get under tight deadlines and we hack code to get work done.

    Several programmers maintain or0, so it's now version orN. c0 is now version cN. Unfortunately, most of the programmers that maintained the class containing or0 seem to have been completely unaware of c0, which is one of the strongest arguments I can think of for the wisdom of the DRY principle. And there may also have been independent maintenance of the code in c. Either way, it appears that or0 and c0 were maintained independently of each other. And, joy and happiness, an error is occurring in cN that does not occur in orN.

    So I have a few questions:

    1.) Is there a name for this anti-pattern? I've seen this happen so often I'd find it hard to believe it is not a named anti-pattern.

    2.) I can see a few alternatives:

    a.) Fix orN to take parameters that specify the values of all the member variables it needs, then modify cN to call orN with all of the needed parameters passed in.
    b.) Try to manually port fixes from orN to cN. (Mind you, I don't want to do this, but it is a realistic possibility.)
    c.) Recopy orN to cN. Again, yuck, but I list it for the sake of completeness.
    d.) Try to figure out where cN is broken and then repair it independently of orN.

    Alternative a seems like the best fix in the long term, but I doubt the customer will let me implement it. Never time or money to fix things right, but always time and money to repair the same problem 40 or 50 times, right? Can anyone suggest other approaches I may not have considered? If you were in my place, which approach would you take? If there are other questions and answers here along these lines, please post links to them. I don't mind removing this question if it's a dupe, but my searching hasn't turned up anything that addresses it yet.

    EDIT: Thanks, everyone, for all the thoughtful responses. I asked about a name for the anti-pattern so I could research it further on my own. I'm surprised this particular bad coding practice doesn't seem to have a "canonical" name for it.
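
    To make alternative a concrete, here is a minimal sketch of the refactoring; the member names and types are placeholders for whatever state or0 actually reads (assumes System.Linq and System.Collections.Generic):

        // Before: orN read this.rate and this.items directly, which is why
        // Programmer A had to clone the whole class to reuse the logic.
        public decimal OrN(decimal rate, IList<Item> items)   // Item is a placeholder type
        {
            // ...the original logic, now touching only its parameters...
            return items.Sum(i => i.Price) * rate;
        }

        // cN collapses into a delegating call instead of a divergent copy,
        // so a fix lands in exactly one place:
        public decimal CN()
        {
            return OrN(this.rate, this.items);
        }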

    Read the article

  • Pantech Link II, Ubuntu and Virtual XP

    - by user85041
    Okay, this is my problem. I have a Pantech Link II; dmesg states:

        [  896.072037] usb 2-3: new high-speed USB device number 3 using ehci_hcd
        [  896.258562] cdc_acm 2-3:1.0: ttyACM0: USB ACM device
        [  896.260039] usbcore: registered new interface driver cdc_acm
        [  896.260042] cdc_acm: USB Abstract Control Model driver for USB modems and ISDN adapters

    I have it installed through Wine (PC Suite and driver), and it doesn't see the phone. Virtual XP through VMWare Player sees my device and knows it needs a driver; under removable devices it says "Curitel Pantech USB Device (Maybe Driver)". I have PC Suite installed in XP, and I install the driver through the executable. It says there was a problem installing the hardware, and then the device disappears. Ubuntu sees it after a restart, but if I start XP with that driver installed, it disappears from both and I get these errors in dmesg:

        [ 1047.760555] /dev/vmmon[2882]: PTSC: initialized at 3093322000 Hz using TSC, TSCs are synchronized.
        [ 1048.174033] /dev/vmmon[2882]: Monitor IPI vector: 0
        [ 1055.293060] /dev/vmnet: open called by PID 3163 (vmx-vcpu-0)
        [ 1055.293074] /dev/vmnet: port on hub 8 successfully opened
        [ 1055.293088] /dev/vmnet: open called by PID 3163 (vmx-vcpu-0)
        [ 1055.293094] /dev/vmnet: port on hub 8 successfully opened
        [ 1072.446305] /dev/vmnet: open called by PID 3163 (vmx-vcpu-0)
        [ 1072.446316] /dev/vmnet: port on hub 8 successfully opened
        [ 1072.446328] /dev/vmnet: open called by PID 3163 (vmx-vcpu-0)
        [ 1072.446334] /dev/vmnet: port on hub 8 successfully opened
        [ 1072.856024] usb 1-1: reset high-speed USB device number 2 using ehci_hcd
        [ 1079.292024] usb 1-1: reset high-speed USB device number 2 using ehci_hcd
        [ 1079.732024] usb 1-1: reset high-speed USB device number 2 using ehci_hcd
        [ 1127.743034] NET: Registered protocol family 39
        [ 1127.749320] [3163]: VMCI: IOCTL_VMCI_QUEUEPAIR_ALLOC (cid=1522210225,result=4).
        [ 1144.104031] usb 2-3: reset high-speed USB device number 3 using ehci_hcd
        [ 1144.412031] usb 2-3: reset high-speed USB device number 3 using ehci_hcd
        [ 1155.889976] ehci_hcd 0000:00:13.2: force halt; handshake ffffc90000642024 00004000 00000000 -> -110
        [ 1155.889980] ehci_hcd 0000:00:13.2: HC died; cleaning up
        [ 1155.890008] usb 2-3: USB disconnect, device number 3
        [ 1155.890013] usb 2-3: usbfs: usb_submit_urb returned -110
        [ 1658.310777] [3163]: VMCI: IOCTL_VMCI_QUEUEPAIR_DETACH (cid=1522210225,result=3).
        [ 1658.392018] NET: Unregistered protocol family 39
        [ 1666.546438] /dev/vmnet: open called by PID 3163 (vmx-vcpu-0)
        [ 1666.546450] /dev/vmnet: port on hub 8 successfully opened
        [ 1666.546462] /dev/vmnet: open called by PID 3163 (vmx-vcpu-0)
        [ 1666.546467] /dev/vmnet: port on hub 8 successfully opened
        [ 1671.431383] uvcvideo: Found UVC 1.00 device USB2.0 Camera (1871:0101)
        [ 1671.432533] input: USB2.0 Camera as /devices/pci0000:00/0000:00:12.2/usb1/1-1/1-1:1.0/input/input13

        lessa@X:~$ dmesg|tail
        [ 1155.890008] usb 2-3: USB disconnect, device number 3
        [ 1155.890013] usb 2-3: usbfs: usb_submit_urb returned -110
        [ 1658.310777] [3163]: VMCI: IOCTL_VMCI_QUEUEPAIR_DETACH (cid=1522210225,result=3).
        [ 1658.392018] NET: Unregistered protocol family 39
        [ 1666.546438] /dev/vmnet: open called by PID 3163 (vmx-vcpu-0)
        [ 1666.546450] /dev/vmnet: port on hub 8 successfully opened
        [ 1666.546462] /dev/vmnet: open called by PID 3163 (vmx-vcpu-0)
        [ 1666.546467] /dev/vmnet: port on hub 8 successfully opened
        [ 1671.431383] uvcvideo: Found UVC 1.00 device USB2.0 Camera (1871:0101)
        [ 1671.432533] input: USB2.0 Camera as /devices/pci0000:00/0000:00:12.2/usb1/1-1/1-1:1.0/input/input13

    I have tried uninstalling, then installing manually from Device Manager ("Update driver" while it still has the warning sign); it doesn't see the drivers as valid. No idea how to fix this; I would prefer not to have to go to another computer. I'm not trying to do anything but get the pictures off of it. I have to restart Ubuntu and plug in the device for Ubuntu to see it correctly again. I am about a month and a half old as a Linux newbie, so I have no idea what commands I could use for this, and I don't have a memory card in the phone to mount.

    Read the article

  • Get More Value From Your Oracle Premier Support Investment

    - by Get Proactive Customer Adoption Team
    The Return on Investment in Support Training

    I'm a typical software user. I've been using spreadsheets almost daily for the past 10 years or so. I know how to enter simple formulas, format cells, and import files, and I can sort and filter. Sometimes I even use a pivot table. I never attended training; I learnt everything I know on the fly. Sometimes it was intuitive and easy; other times I had to spend minutes or even hours searching for a solution. Yet when I see what some other people can do with their spreadsheets, I know I'm utilizing maybe 15% of the functionality. A pity; one day I really have to sign up for training. Why haven't I done it yet? Ah, you know, I'm a busy person, I have work to do. And if I need to use a feature that I am unfamiliar with, I'll spend time on it only when I really need it.

    Now wait. When I recall how much time I spent trying to figure out how things work, compared to the time I spent doing productive work, I realize it was not insignificant. I'm unable to sum up all the time I spent 'learning' on the fly, but I'm sure it adds up to days or even weeks. And after all this time, I've mastered 15% of the features. If only I had attended training years ago, that investment would have paid back 10 times over!

    Working with My Oracle Support is no different. Our customers typically use simple search, create service requests, and download patches. They think they know how to use My Oracle Support. And they're right: they know something, but often they're utilizing only a fragment of My Oracle Support's potential. For the investment that has been made, using only a small subset of the capabilities offered in My Oracle Support leaves value on the table. There is much more available: dozens of diagnostic tools and proactive health checks will keep verifying your Oracle environments against the best practices that Oracle gathers every day thanks to our comprehensive knowledge management process; automated patch recommendations will help prevent known issues; and upgrade planning and more is included in My Oracle Support.

    Why are you not utilizing all of these best practices, capabilities and tools? Is it because you don't have the 2-3 hours it takes to learn about the features? Simply because you think you can learn on the fly, like I thought I could? Does learning on the fly how to properly use the service request escalation process, when you already have a critical issue, sound like a good idea?

    My advice is: invest your time now to learn how My Oracle Support can help you prevent issues on your systems. Learn how to find answers faster and resolve problems more efficiently. Understand how to properly complete a service request. Invest in Support training, offered at no additional cost to Oracle Premier Support customers. It will pay back quicker than you think, and it will bring you more value than you think. Discover your advantage with Oracle Premier Support's Proactive Portfolio.

    Read the article

  • Dual monitor not working completely in 12.10 after upgrade

    - by Mark Baldridge
    At 12.04, dual monitors worked perfectly. After upgrading to 12.10, the primary monitor works, but the second monitor only partly works. I am sure there is some difference between the releases that I have missed setting properly.

    System Settings - Displays shows both correctly as Acer 22" monitors at 1680x1050 (16:10). An icon on monitor 2 is present but elongated, almost an artifact, since the other icons from the primary screen are absent, yet this one icon is there on the second monitor. Icons on both screens can be selected. Painting is weird on monitor 2. The launcher exists and works on both screens, but even with sticky edges off, the cursor stops at the left edge of monitor 2. Clicking on the text editor in the screen 2 launcher will launch gedit there. If I drag it, it leaves a trail of after-images, as if repainting is failing. If I move the cursor over the launcher, the help tags like "LibreOffice Writer" appear but stay on screen unless I drag the active gedit window over them. Then parts of the help bubbles are overwritten, leaving behind after-images of the gedit window on screen.

    What is really fascinating is that System Settings - Displays is now ignoring monitor selection, after allowing it earlier. Just before this, the help popup which said "Select a monitor to change its properties; drag to rearrange its placement" actually let me do that. Maybe it is a trick of where I grab the edge of the monitor in the Displays setting; I just found a working handle. When I drag monitor 1 to the right of monitor 2, "Apply" and confirm, both monitors work normally (although the right monitor lets the cursor slide off the right edge onto the left edge of monitor 1, which sounds correct). Painting of windows does not leave an after-image.

    However, success is only temporary. The setting survives a reboot, but painting on the left monitor, now monitor 2, then replicates the issues from before. The after-image of the gedit window and the small window for "Are you sure you want to close all programs and restart the computer?" are still on monitor 2 (on the left now), even though they are not real windows, nor do they have processes behind them. Curiously, in Displays, the "green" monitor on the left in the display window is matched by the right monitor's color in its upper left corner; that probably makes sense, as the one on the right is now monitor 1. If I repeat the "drag the left monitor to the right of the right monitor" step in the Displays window, things are oriented properly, with no display artifacts as I drag windows around either screen. The description bubbles that pop up are also overwritten properly on both screens, so none of those artifacts remain either. This goodness does not survive a reboot, however. I have not tried logging out and back in.

    All of this came after positing that the motherboard VGA and HDMI ports could have been the issue, so I installed an e-GeForce 7600 GT Dual DVI (I know the web thinks it is VGA rather than DVI, but the connectors are DVI). No change to the weird behavior: the good parts continue to work, the weirdness also persists, and swapping monitor positions seems to cure the issue. So, is there a setting I have missed? Given that "swapping" monitor 1 and 2 in System Settings - Displays makes it work, just not across a boot, I suspect so.

    Read the article

  • When things go awry

    - by Phil Factor
    The moment the Entrepreneur opened his mouth on prime-time national TV, spelled out the URL and waxed big on how exciting ‘his’ new website was, I knew I was in for a busy night. I’d designed and built it. All at once, half a million people tried to log into the website. Although all my stress-testing paid off, I have to admit that the network locked up tight long before there was any danger of a database or website problem.

    Soon afterwards, the Entrepreneur and the Big Boss were there in the autopsy meeting. We picked through all our systems in detail to see how they’d borne the unexpected strain. Mercifully, in view of the sour mood of the Big Boss, it turned out that the only thing we could have done better was buy a bigger pipe to and from the internet. We’d specified that ‘big pipe’ when designing the system. The Big Boss had then railed at the cost and so we’d subsequently compromised. I felt that my design decisions were vindicated.

    The Big Boss brooded for a while. Then he made the significant comment: “What really ****** me off is the fact that, for ten minutes, we couldn’t take people’s money.” At that point I stopped feeling smug. Had the internet connection been better, the system would have reached its limit and failed rather precipitously, and that wasn’t what he wanted. Then it occurred to me that what had gummed up the connection was all those images on the site, that had made it so impressive for the visitors. If there had been a way to automatically pare down the site to the bare essentials under stress… Hmm.

    I began to consider disaster-recovery in the broadest sense – maintaining a service in spite of unusual or unexpected events. What he said makes a lot of sense: sacrifice whatever isn’t essential to keep the core service running when we approach the capacity limits. Maybe in IT we should borrow (or revive) the business concept of the ‘Skeleton service’, maintaining only the priority parts under stress, using a process that is well-prepared and carefully rehearsed. How might this work?

    Whatever the event we have to prepare for, it is all about understanding the priorities; knowing what one can dispense with when the going gets tough. In the event of database disaster, it’s much faster to deploy a skeletal system with only the essential data than to restore the entire system, though there would have to be a reconciliation process to update the revived database retrospectively, once the emergency was over.

    It isn’t just the database that could be designed for resilience. One could prepare for unusually high traffic in a website by designing a system that degraded gradually to a ‘skeletal’ site, one that maintained the commercial essentials without fat images, JavaScript libraries and razzmatazz. This is all what the Big Boss scathingly called ‘a mere technicality’. It seems to me that what is needed first is a culture of application and database design which acknowledges that we live in a very imperfect world, and react accordingly when things go awry.

    Read the article

  • Resolving collisions between dynamic game objects

    - by TheBroodian
    I've been building a 2D platformer for some time now, and I'm getting to the point where I am adding dynamic objects to the stage for testing. This has prompted me to consider how I would like my character and other objects to behave when they collide. A typical staple in many 2D platformer-type games is that the player takes damage upon touching an enemy, then essentially becomes able to pass through enemies during a period of invulnerability, while enemies are able to pass through each other freely. I personally don't want to take this approach. It feels strange to me that the player should receive arbitrary damage for harmless contact with an enemy, regardless of whether the enemy is attacking or not, and I would like my enemies' interactions with each other (and my player) to be a little more organic, so to speak.

    In my head I sort of have this idea where a game object (player or non-player) would be able to push other game objects around by 'pushing' them out of one another's bounding boxes when there is an intersection, and maybe correlate the repelling force with how much the bounding boxes intersect. The problem I'm experiencing is that I have no idea what the math might look like for something like this. I'll show what work I've done so far; it sort of works, but it's jittery and generally not quite what I would pass in a functional game:

        //Clears the anti-duplicate buffer
        collisionRecord.Clear();
        //pick a thing
        foreach (GameObject entity in entities)
        {
            //pick another thing
            foreach (GameObject subject in entities)
            {
                //check to make sure both things aren't the same thing
                if (!ReferenceEquals(entity, subject))
                {
                    //check to see if thing2 is in semi-near proximity to thing1
                    if (entity.WideProximityArea.Intersects(subject.CollisionRectangle) ||
                        entity.WideProximityArea.Contains(subject.CollisionRectangle))
                    {
                        //check to see if thing2 and thing1 are colliding.
                        if (entity.CollisionRectangle.Intersects(subject.CollisionRectangle) ||
                            entity.CollisionRectangle.Contains(subject.CollisionRectangle) ||
                            subject.CollisionRectangle.Contains(entity.CollisionRectangle))
                        {
                            //check if we've already resolved their collision or not.
                            if (!collisionRecord.ContainsKey(entity.GetHashCode()))
                            {
                                //more duplicate resolution checking.
                                if (!collisionRecord.ContainsKey(subject.GetHashCode()))
                                {
                                    //if thing1 is traveling right...
                                    if (entity.Velocity.X > 0)
                                    {
                                        //if it isn't too far to the right...
                                        if (subject.CollisionRectangle.Contains(new Microsoft.Xna.Framework.Rectangle(
                                                entity.CollisionRectangle.Right, entity.CollisionRectangle.Y, 1, entity.CollisionRectangle.Height)) ||
                                            subject.CollisionRectangle.Intersects(new Microsoft.Xna.Framework.Rectangle(
                                                entity.CollisionRectangle.Right, entity.CollisionRectangle.Y, 1, entity.CollisionRectangle.Height)))
                                        {
                                            //Find how deep thing1 is intersecting thing2's collision box.
                                            float offset = entity.CollisionRectangle.Right - subject.CollisionRectangle.Left;
                                            //Move both things in opposite directions half the length of the intersection,
                                            //pushing thing1 to the left, and thing2 to the right.
                                            entity.Velocities.Add(new Vector2(-(((offset * 4) * (float)gameTime.ElapsedGameTime.TotalMilliseconds)), 0));
                                            subject.Velocities.Add(new Vector2((((offset * 4) * (float)gameTime.ElapsedGameTime.TotalMilliseconds)), 0));
                                        }
                                    }
                                    //if thing1 is traveling left...
                                    if (entity.Velocity.X < 0)
                                    {
                                        //if thing1 isn't too far left...
                                        if (entity.CollisionRectangle.Contains(new Microsoft.Xna.Framework.Rectangle(
                                                subject.CollisionRectangle.Right, subject.CollisionRectangle.Y, 1, subject.CollisionRectangle.Height)) ||
                                            entity.CollisionRectangle.Intersects(new Microsoft.Xna.Framework.Rectangle(
                                                subject.CollisionRectangle.Right, subject.CollisionRectangle.Y, 1, subject.CollisionRectangle.Height)))
                                        {
                                            //Find how deep thing1 is intersecting thing2's collision box.
                                            float offset = subject.CollisionRectangle.Right - entity.CollisionRectangle.Left;
                                            //Move both things in opposite directions half the length of the intersection,
                                            //pushing thing1 to the right, and thing2 to the left.
                                            entity.Velocities.Add(new Vector2((((offset * 4) * (float)gameTime.ElapsedGameTime.TotalMilliseconds)), 0));
                                            subject.Velocities.Add(new Vector2(-(((offset * 4) * (float)gameTime.ElapsedGameTime.TotalMilliseconds)), 0));
                                        }
                                    }
                                    //Make record that thing1 and thing2 have interacted and the collision has been
                                    //solved, so that if thing2 is picked next in the foreach loop, it isn't checked
                                    //against thing1 a second time before the next update.
                                    collisionRecord.Add(entity.GetHashCode(), subject.GetHashCode());
                                }
                            }
                        }
                    }
                }
            }
        }

    One of the biggest issues with my code, aside from the jitteriness, is that if one character lands on top of another character, the collision resolves very suddenly and abruptly, whereas I would like a more subtle and gradual resolution. Any thoughts or ideas are incredibly welcome and helpful.
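
    For the "push out of the bounding box" math itself, one common formulation is to separate along the axis of least penetration, splitting the correction between the two objects. A minimal sketch using XNA's Rectangle, meant as an illustration rather than a drop-in replacement for the loop above:

        using Microsoft.Xna.Framework;

        static class CollisionMath
        {
            // Pushes two overlapping AABBs apart along the shallow axis,
            // each moving half of the overlap.
            public static void Separate(ref Rectangle a, ref Rectangle b)
            {
                Rectangle overlap = Rectangle.Intersect(a, b);
                if (overlap.IsEmpty)
                    return;

                if (overlap.Width < overlap.Height)
                {
                    // Least penetration is horizontal: push along X.
                    int sign = a.Center.X < b.Center.X ? -1 : 1;
                    a.Offset(sign * overlap.Width / 2, 0);
                    b.Offset(-sign * (overlap.Width - overlap.Width / 2), 0);
                }
                else
                {
                    // Least penetration is vertical: push along Y, which keeps a
                    // character landing on another from snapping sideways.
                    int sign = a.Center.Y < b.Center.Y ? -1 : 1;
                    a.Offset(0, sign * overlap.Height / 2);
                    b.Offset(0, -sign * (overlap.Height - overlap.Height / 2));
                }
            }
        }

    Applying only a fraction of that correction per frame (scaled by elapsed time, as the velocities above already are) is what turns the abrupt snap into a gradual resolution.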

    Read the article

  • Drive

    - by erikanollwebb
    Picking up where we left off, let's summarize. People have both intrinsic motivation and extrinsic motivation, and whether reward works depends a bit on what you are rewarding. Rewards don't decrease intrinsic motivation, provided you know what you are getting and why, and when you reward high performance. But as anyone who has watched the great animation of Dan Pink's TED talk knows, even that doesn't tell the whole story. Although people may not be less intrinsically motivated by rewards, the impact of rewards on actual performance is a really odd question. Larger rewards don't necessarily lead to better performance and, in fact, sometimes lead to worse performance. Pink argues that people are driven and engaged when they have autonomy, mastery and purpose. If they can self-direct, can be good at what they do, and have a sense of purpose for what they are doing, they show the highest engagement. (Personally, I would add progress to the list. My experience is that if you have autonomy, mastery and a sense of purpose but don't get a feeling that you are making any progress day to day, your level of engagement will drop rapidly.)

    So Pink is arguing that if we could set up work so that people have a sense of purpose in what they do, have some autonomy and the ability to build mastery, you'll have better companies. And that's probably true in a lot of ways, but there's a problem. Sometimes you have things you need to do but maybe don't really want to do. Or that you don't really see the point of. Or that don't have a lot of value to you at the end of the day. Then what does a company do?

    Let me give you an example. I've worked on some customer relationship management (CRM) tools over the years and done user research with salespeople to try to understand their world. And there's a funny thing about sales tools in CRM: sometimes what the company wants a salesperson to do is at odds with what the salesperson thinks is useful to them. For example, companies would like to know who a salesperson talked to at a company, down to the person level. They'd like to know what they talked about, when, and whether the deals closed. Those metrics would help you build a better sales force and understand what works and what does not. But salespeople see that as busywork that doesn't add any value to their ability to sell.

    So you have a salesperson who has a lot of autonomy; they like to do things that improve their ability to sell, and they usually feel a sense of purpose: the group is trying to make a quota! That quota will help the company succeed! But then you have tasks that they don't think fit into that equation. The company would like to know more about what makes them successful, get metrics on what they do, and frankly have a record of what they do in case they leave, but the salesperson thinks it's a waste of time to put all that information into a sales application. They have drive, just not for all the things the company would like.

    You could punish them for not entering the information, or you could try to reward them for doing it, but you still have an imperfect model of engagement. Ideally, you'd like them to want to do it. If they want to do it, if they are motivated to do it, then the company wins. If *something* about it is rewarding to them, then they are more engaged and more likely to do it. So the question becomes: how do you create that interest in doing something?

    Read the article

  • Setting up your project

    - by ssoolsma
    Before any coding, we first make sure that the project is set up correctly. (Please note that this blog is all about how I do it, so in case I forget, I can return here and read how I used to do it. Maybe you'll come up with some ideas for yourself too.) In this series we will create a minigolf scoring cart. Please note that we will eventually create a fully functional application which you cannot use unless you pay me a lot of money! (And I mean a lot!)

    1. Download and install the appropriate tools.

    Download the following:
    - TestDriven.Net (free version at the bottom of the download page)
    - nUnit

    TestDriven is a Visual Studio plugin for many unit-test frameworks, which allows you to run / test code very easily with a right click -> Run Test. nUnit is the test framework of choice; it works seamlessly with TestDriven.

    2. Create your project.

    Fire up Visual Studio and create your data access project; MidgetWidget.DataAccess is its name. (I chose MidgetWidget as the name for the solution.) Also, make sure that the MidgetWidget.DataAccess project is a C# class library. Hit OK to create the solution. (In the above example the checkbox "Create directory for solution" is checked, because I'm pointing the location to the root of c:\development, where I want MidgetWidget to be created.)

    3. Set up the database.

    You should have thought about a database when you reach this point. Let's assume that you've created a database as follows:

    Table name: LoginKey
    Fields: Id (PK), KeyName (uniqueidentifier), StartDate (datetime), EndDate (datetime)

    Table name: Party
    Fields: Id (PK), Key (uniqueidentifier), Created (datetime)

    Table name: Person
    Fields: Id (PK), PartyId (int), Name (varchar)

    Table name: Score
    Fields: Id (PK), TrackId (int), PersonId (int), Strokes (int)

    Table name: Track
    Fields: Id (PK), Name (varchar)

    A few things to take note of about the database setup: I've singularized all table names (not "Persons" but "Person"). This is because in a few minutes, when this is in our code, we refer to the database objects as single rows: we retrieve a single Person, not a single "Persons", from the database.

    4. Create the Entity Framework model.

    In your solution tree, create a new folder and call it "DataModel". Inside this folder: Add New Item -> choose ADO.NET Entity Data Model. Name it "Entities.edmx" and hit "Add". Once the edmx is added, open it (double click), right click the white area and choose "Update model from database…". Now point it to your database (I include sensitive data in the connection string) and select all the tables. After that, hit "Finish" and let the Entity Framework do its code generation. Et voilà: after a few seconds you have set up your entity model. Next post we will start building the data access! I'm off to the beach.
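
    As a quick smoke test of the generated model, something like the following should work once the edmx is in place. This is a hedged sketch: the Entities context name and the EF v1-style AddTo... method follow what the designer typically generates from Entities.edmx, and may differ in your project.

        // Insert one Track row through the generated ObjectContext.
        using (var db = new Entities())   // context class name assumed from Entities.edmx
        {
            // Track is the entity class generated from the Track table.
            var track = new Track { Name = "Windmill" };
            db.AddToTrack(track);         // designer-generated AddTo<EntitySet>; set name assumed
            db.SaveChanges();
        }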

    Read the article

  • Why is x=x++ undefined?

    - by ugoren
    It's undefined because it modifies x twice between sequence points. The standard says it's undefined, therefore it's undefined. That much I know. But why? My understanding is that forbidding this allows compilers to optimize better. This could have made sense when C was invented, but now it seems like a weak argument. If we were to reinvent C today, would we do it this way, or can it be done better? Or maybe there's a deeper problem that makes it hard to define consistent rules for such expressions, so it's best to forbid them?

    So suppose we were to reinvent C today. I'd like to suggest simple rules for expressions such as x=x++, which seem to me to work better than the existing rules. I'd like to get your opinion on the suggested rules compared to the existing ones, or other suggestions.

    Suggested rules:
    - Between sequence points, the order of evaluation is unspecified.
    - Side effects take place immediately.

    There's no undefined behavior involved. Expressions evaluate to this value or that, but surely won't format your hard disk (strangely, I've never seen an implementation where x=x++ formats the hard disk).

    Example expressions:
    - x=x++ - Well defined; doesn't change x. First, x is incremented (immediately when x++ is evaluated), then its old value is stored in x.
    - x++ + ++x - Increments x twice; evaluates to 2*x+2. Though either side may be evaluated first, the result is either x + (x+2) (left side first) or (x+1) + (x+1) (right side first).
    - x = x + (x=3) - Unspecified; x is set to either x+3 or 6. If the right side is evaluated first, it's x+3. It's also possible that x=3 is evaluated first, so it's 3+3. In either case, the x=3 assignment happens immediately when x=3 is evaluated, so the value stored is overwritten by the other assignment.
    - x+=(x=3) - Well defined; sets x to 6. You could argue that this is just shorthand for the expression above, but I'd say that += must be executed after x=3, and not in two parts (read x, evaluate x=3, add and store the new value).

    What's the advantage? Some comments raised this good point: it's not that I'm after the pleasure of using x=x++ in my code. It's a strange and misleading expression. What I want is to be able to understand complicated expressions. Normally, a complicated expression is no more than the sum of its parts. If you understand the parts and the operators combining them, you can understand the whole. C's current behavior seems to deviate from this principle. One assignment plus another assignment suddenly doesn't make two assignments. Today, when I look at x=x++, I can't say what it does. With my suggested rules, I can, by simply examining its components and their relations.

    Read the article

  • Should I expose IObservable<T> on my interfaces?

    - by Alex
    My colleague and I have a dispute. We are writing a .NET application that processes massive amounts of data. It receives data elements, groups subsets of them into blocks according to some criterion, and processes those blocks. Let's say we have data items of type Foo arriving from some source (from the network, for example) one by one. We wish to gather subsets of related objects of type Foo, construct an object of type Bar from each such subset, and process objects of type Bar.

    One of us suggested the following design. Its main theme is exposing IObservable objects directly from the interfaces of our components.

        // ********* Interfaces **********
        interface IFooSource
        {
            // this is the event-stream of objects of type Foo
            IObservable<Foo> FooArrivals { get; }
        }

        interface IBarSource
        {
            // this is the event-stream of objects of type Bar
            IObservable<Bar> BarArrivals { get; }
        }

        // ********* Implementations *********
        class FooSource : IFooSource
        {
            // Here we put logic that receives Foo objects from the network
            // and publishes them to the FooArrivals event stream.
        }

        class FooSubsetsToBarConverter : IBarSource
        {
            IFooSource fooSource;

            IObservable<Bar> BarArrivals
            {
                get
                {
                    // Do some fancy Rx operators on fooSource.FooArrivals, like
                    // Buffer, Window, Join and others, and return IObservable<Bar>
                }
            }
        }

        // this class will subscribe to the bar source and do processing
        class BarsProcessor
        {
            BarsProcessor(IBarSource barSource);
            void Subscribe();
        }

        // ******************* Main ************************
        class Program
        {
            public static void Main(string[] args)
            {
                var fooSource = FooSourceFactory.Create();
                // this will create FooSubsetToBarConverter and BarsProcessor
                var barsProcessor = BarsProcessorFactory.Create(fooSource);
                barsProcessor.Subscribe();
                // this enters a loop of listening for Foo objects from the
                // network and notifying about their arrival.
                fooSource.Run();
            }
        }

    The other suggested a different design, whose main theme is using our own publisher/subscriber interfaces and using Rx inside the implementations only when needed:

        //********** interfaces *********
        interface IPublisher<T>
        {
            void Subscribe(ISubscriber<T> subscriber);
        }

        interface ISubscriber<T>
        {
            Action<T> Callback { get; }
        }

        //********** implementations *********
        class FooSource : IPublisher<Foo>
        {
            public void Subscribe(ISubscriber<Foo> subscriber) { /* ... */ }
            // here we put logic that receives Foo objects from some source
            // (the network?) and publishes them to the registered subscribers
        }

        class FooSubsetsToBarConverter : ISubscriber<Foo>, IPublisher<Bar>
        {
            void Callback(Foo foo)
            {
                // here we put logic that aggregates Foo objects and publishes
                // Bars when we have received a subset of Foos that match our
                // criteria; maybe we use Rx here internally.
            }

            public void Subscribe(ISubscriber<Bar> subscriber) { /* ... */ }
        }

        class BarsProcessor : ISubscriber<Bar>
        {
            void Callback(Bar bar)
            {
                // here we put code that processes Bar objects
            }
        }

        //********** program *********
        class Program
        {
            public static void Main(string[] args)
            {
                var fooSource = fooSourceFactory.Create();
                // this will create BarsProcessor and perform all the necessary
                // subscriptions
                var barsProcessor = barsProcessorFactory.Create(fooSource);
                // this enters a loop of listening for Foo objects from the
                // network and notifying about their arrival.
                fooSource.Run();
            }
        }

    Which one do you think is better: exposing IObservable and making our components create new event streams from Rx operators, or defining our own publisher/subscriber interfaces and using Rx internally if needed?

    Here are some things to consider about the designs:
    - In the first design, the consumer of our interfaces has the whole power of Rx at his/her fingertips and can perform any Rx operators. One of us claims this is an advantage; the other claims it is a drawback.
    - The second design allows us to use any publisher/subscriber architecture under the hood, whereas the first design ties us to Rx.
    - If we wish to use the power of Rx, the second design requires more work, because we need to translate the custom publisher/subscriber implementation to Rx and back. It requires writing glue code for every class that wishes to do some event processing.
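
    For concreteness, a minimal sketch of what the first design's converter property might contain, assuming a simple time-window grouping rule and a Bar constructor that takes a batch of Foos (both of which are assumptions, not part of the original design):

        using System;
        using System.Reactive.Linq;

        // Inside FooSubsetsToBarConverter:
        public IObservable<Bar> BarArrivals
        {
            get
            {
                return fooSource.FooArrivals
                                .Buffer(TimeSpan.FromSeconds(1))   // batch Foos arriving close together
                                .Where(batch => batch.Count > 0)   // drop empty windows
                                .Select(batch => new Bar(batch));  // hypothetical Bar(IList<Foo>) constructor
            }
        }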

    Read the article

  • Is Visual Source Safe (The latest Version) really that bad? Why? What's the Best Alternative? Why? [closed]

    - by hanzolo
    Over the years I've constantly heard horror stories, had people say "real programmers don't use VSS", and so on. BUT, in the workplace I've worked at two companies: one a very well known, public-facing, high-traffic website, and another a high-end financial services "web-based" hosted solution catering to some very large, very well known companies, which is where I currently reside, and everything's working just fine (KNOCK KNOCK!!). I'm constantly interfacing with EXTREMELY old technology at some of these financial institutions. OLD LIKE YOU WOULDN'T BELIEVE. Which leads me to the conclusion that if it works, "LEAVE IT", and that maybe there's some value in old technology? At least enough value to overrule a rewrite!? Right??

    Is there something fundamentally flawed with the underlying technology that VSS uses? I have a feeling that if I said "someone said VSS sucks", they would beg to differ, most likely give me a look like I don't know anything, and I'd never regain their respect and my credibility (well, that'll be hard to blow... lol). BUT, give me an argument that I can take to someone who's been coding for 30 years, who builds platforms that leverage current technology (.NET 3.5 / SQL 2008 R2), writes their own ORM with scaffolding, and is able to provide a quality platform that supports thousands of concurrent users on a multi-tenant hosted solution, yet does not agree with any of the benefits of having source control integrated, and still uses the infamous Visual Source Safe.

    I have extensive experience with TFS up to 2010, and honestly I think it's great when a team (beyond developers) can embrace it. I've worked side by side with someone who's a die-hard SVN'r, and from a purist standpoint I see the beauty in it (I need a bit more out of my source control, but it surely suffices). So why are such smarties not running away from Visual Source Safe? Surely if it was so bad, it would have been realized by now, and I would not be sitting here with this simple old check-in, check-out, version-resistant, label-intensive system. But here I am... I would love to drop an argument that would be the end-all argument, but if it's a matter of opinion and personal experience, there seems to be too much leeway for keeping VSS.

    UPDATE: I guess the best case is to have the VSS supporters check other people's experiences and draw from that until we (please, no) experience the breaking factor ourselves. Until then, I won't be engaging in a discussion to migrate off of VSS.

    UPDATE 11-2012: So I was able to convince everyone at my workplace that, since MS is sunsetting Visual Source Safe, it might be time to migrate over to TFS. I was able to convince them, and we have recently upgraded our team to Visual Studio 2012 and TFS 2012. The migration was fairly painless: I had to run analyze.exe, which found a bunch of errors (not sure they'll ever affect the project), and then manually run VSSConverter.exe. Again painless, except that it took 16 hours to migrate 5 years' worth of everything. And now we're on TFS: much more integrated, much cooler. So all in all, VSS served its purpose for years without hiccup. There were no horror stories, and Visual Source Safe as source control worked just fine. So to all the naysayers (me included): there's nothing wrong with using VSS. I wouldn't start a new project with it, and I would definitely consider migrating to TFS (it's really not super difficult, and a new "wizard"-type converter is due out any day now, so migrating should be painless). But from my experience, it worked just fine and got the job done.

    Read the article

  • Problem with sprite direction and rotation

    - by user2236165
    I have a sprite called Tool that moves with a speed represented as a float and in a direction represented as a Vector2. When I click the mouse on the screen, the sprite changes its direction and starts to move towards the click. In addition, I rotate the sprite so that it faces in the direction it is heading. However, when I add a camera that is supposed to follow the sprite so that the sprite is always centered on the screen, the sprite won't move in the given direction and the rotation isn't accurate anymore. This only happens when I add Camera.View in spriteBatch.Begin(). I was hoping anyone could shed a light on what I am missing in my code; that would be highly appreciated.

    Here is the camera class I use:

        public class Camera
        {
            private const float zoomUpperLimit = 1.5f;
            private const float zoomLowerLimit = 0.1f;
            private float _zoom;
            private Vector2 _pos;
            private int ViewportWidth, ViewportHeight;

            #region Properties

            public float Zoom
            {
                get { return _zoom; }
                set
                {
                    _zoom = value;
                    if (_zoom < zoomLowerLimit)
                        _zoom = zoomLowerLimit;
                    if (_zoom > zoomUpperLimit)
                        _zoom = zoomUpperLimit;
                }
            }

            public Rectangle Viewport
            {
                get
                {
                    int width = (int)((ViewportWidth / _zoom));
                    int height = (int)((ViewportHeight / _zoom));
                    return new Rectangle((int)(_pos.X - width / 2), (int)(_pos.Y - height / 2), width, height);
                }
            }

            public void Move(Vector2 amount)
            {
                _pos += amount;
            }

            public Vector2 Position
            {
                get { return _pos; }
                set { _pos = value; }
            }

            public Matrix View
            {
                get
                {
                    return Matrix.CreateTranslation(new Vector3(-_pos.X, -_pos.Y, 0)) *
                           Matrix.CreateScale(new Vector3(Zoom, Zoom, 1)) *
                           Matrix.CreateTranslation(new Vector3(ViewportWidth * 0.5f, ViewportHeight * 0.5f, 0));
                }
            }

            #endregion

            public Camera(Viewport viewport, float initialZoom)
            {
                _zoom = initialZoom;
                _pos = Vector2.Zero;
                ViewportWidth = viewport.Width;
                ViewportHeight = viewport.Height;
            }
        }

    And here are my Update and Draw methods:

        protected override void Update(GameTime gameTime)
        {
            float elapsed = (float)gameTime.ElapsedGameTime.TotalSeconds;
            TouchCollection touchCollection = TouchPanel.GetState();
            foreach (TouchLocation tl in touchCollection)
            {
                if (tl.State == TouchLocationState.Pressed || tl.State == TouchLocationState.Moved)
                {
                    //direction the tool shall move towards
                    direction = touchCollection[0].Position - toolPos;
                    if (direction != Vector2.Zero)
                    {
                        direction.Normalize();
                    }
                    //change the direction the tool is moving in and find the rotation
                    //angle the texture must rotate by to point in the given direction
                    toolPos += (direction * speed * elapsed);
                    RotationAngle = (float)Math.Atan2(direction.Y, direction.X);
                }
            }
            if (direction != Vector2.Zero)
            {
                direction.Normalize();
            }
            //move tool in given direction
            toolPos += (direction * speed * elapsed);
            //change camera centre to the tool's position
            Camera.Position = toolPos;
            base.Update(gameTime);
        }

        protected override void Draw(GameTime gameTime)
        {
            graphics.GraphicsDevice.Clear(Color.Blue);
            spriteBatch.Begin(SpriteSortMode.BackToFront, BlendState.AlphaBlend, null, null, null, null, Camera.View);
            spriteBatch.Draw(tool, new Vector2(toolPos.X, toolPos.Y), null, Color.White, RotationAngle,
                originOfToolTexture, 1, SpriteEffects.None, 1);
            spriteBatch.End();
            base.Draw(gameTime);
        }
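
    One detail worth checking, sketched below: once Camera.View is passed to spriteBatch.Begin, sprites are drawn in world space, but TouchPanel still reports screen coordinates, so the touch position needs to be mapped through the inverse view matrix before computing the direction. This is a hedged suggestion, not a confirmed fix:

        // Convert the screen-space touch position into world space before
        // steering the tool (XNA's Vector2.Transform and Matrix.Invert).
        Vector2 worldTouch = Vector2.Transform(
            touchCollection[0].Position,
            Matrix.Invert(Camera.View));
        direction = worldTouch - toolPos;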

    Read the article

  • How to handle multi-processing of libraries which already spawn sub-processes?

    - by exhuma
    I am having some trouble coming up with a good solution to limit sub-processes in a script which uses a multi-processed library when the script itself is also multi-processed. Both the library and the script are modifiable by us. I believe the question is more about design than actual code, but for what it's worth, it's written in Python.

    The goal of the library is to hide implementation details of various internet routers. For that reason, the library has a "Proxy" factory method which takes the IP of a router as a parameter. The factory then probes the device using a set of possible proxies. Usually, there is one proxy which immediately knows that it is able to send commands to this device. All others usually take some time to return (given a timeout). One thought was to simply query the device for an identifier and then select the proper proxy using that, but in order to do so, you would already need to know how to query the device. Abstracting this knowledge is one of the main purposes of the library, so that becomes a little bit of a circular requirement/deadlock: to connect to a device, you need to know what proxy to use, and to know what proxy to create, you need to connect to a device. So probing the device is, as we can see, the best solution so far, apart from keeping a lookup table somewhere.

    The library currently kills all remaining processes once a valid proxy has been found. And yes, there is always only one good proxy per device. Currently there are about 12 proxies. So if one creates a proxy instance using the factory, 12 sub-processes are spawned. So far, this has been really useful and worked very well. But recently someone else wanted to use this library to "broadcast" a command to all devices. So he took the library and wrote his own multi-processed script. This obviously spawned 12 * n processes, where n is the number of IPs to which he broadcasted. This has given us two problems:

    1. The host on which the command was executed slowed down to a near halt.
    2. Aborting the script with CTRL+C ground the system to a total halt. Not even the hardware console responded anymore! This may be due to some Python strangeness which still needs to be investigated. Maybe related to http://bugs.python.org/issue8296

    The big underlying question is how to design a library which does multi-processing so that other applications which use this library, and want to be multi-processed themselves, do not run into system limitations. My first thought was to require a pool to be passed to the library and execute all tasks in that pool. That way, the person using the library has control over the usage of system resources. But my gut tells me that there must be a better solution.

    Disclaimer: my experience with multiprocessing is fairly limited. I have implemented a few straightforward cases which did not require access control to resources, so I do not yet have any practical experience with semaphores or mutexes.

    P.S.: In the future, we may have enough information to do this without the probing, but the database which would contain the proper information is not yet operational. Also, the design question of multiprocessing a multiprocessed library intrigues me :)

    Read the article

  • Criminals and Other Illegal Characters

    - by Most Valuable Yak (Rob Volk)
    SQLTeam's favorite Slovenian blogger Mladen (b | t) had an interesting question on Twitter: http://www.twitter.com/MladenPrajdic/status/347057950470307841

    I liked Kendal Van Dyke's (b | t) reply: http://twitter.com/SQLDBA/status/347058908801667072

    And he was right! This is one of those pretty-useless-but-sounds-interesting propositions that I've based all my presentations on, and most of my blog posts. If you read all the replies you'll see a lot of good suggestions. I particularly like Aaron Bertrand's (b | t) idea of going into the Unicode character set, since there are over 65,000 characters available. But how to find an illegal character? Detective work?

    I'm working on the premise that if SQL Server rejects a character as a name, it will throw an error. So all we have to do is generate all Unicode characters, rename a database with each character, and catch any errors. It turns out that dynamic SQL can lend a hand here:

        IF DB_ID(N'a') IS NULL
            CREATE DATABASE [a];

        DECLARE @c INT=1, @sql NVARCHAR(MAX)=N'', @err NVARCHAR(MAX)=N'';

        WHILE @c<65536
        BEGIN
            BEGIN TRY
                SET @sql=N'alter database ' +
                    QUOTENAME(CASE WHEN @c=1 THEN N'a' ELSE NCHAR(@c-1) END) +
                    N' modify name=' + QUOTENAME(NCHAR(@c));
                RAISERROR(N'*** Trying %d',10,1,@c) WITH NOWAIT;
                EXEC(@sql);
                SET @c+=1;
            END TRY
            BEGIN CATCH
                SET @err=ERROR_MESSAGE();
                RAISERROR(N'Ooops - %d - %s',10,1,@c,@err) WITH NOWAIT;
                BREAK;
            END CATCH
        END

        SET @sql=N'alter database ' + QUOTENAME(NCHAR(@c-1)) + N' modify name=[a]';
        EXEC(@sql);

    The script creates a dummy database "a" if it doesn't already exist, and only tests single characters as a database name. If you have databases with single-character names, then you shouldn't run this on that server.

    It takes a few minutes to run, but if you do, you'll see that no errors are thrown for any of the characters. It seems that SQL Server will accept any character, no matter where it's from. (Well, there's one, but I won't tell you which. Actually there are 2, but one of them requires some deep existential thinking.)

    The output is also interesting, as quite a few codes do some weird things there. I'm pretty sure it's due to the font used in SSMS for the messages output window; not all characters are available. If you run it using the SQLCMD utility, with the -o switch to output to a file and -u for Unicode output, you can open the file in Notepad or another text editor and see the whole thing.

    I'm not sure what character I'd recommend to answer Mladen's question. I think the standard tab (ASCII 9) is fine. There are also several specific separator characters in the original ASCII character set (decimal 28-31). But of all the choices available in Unicode whitespace, I think my favorite would be the Mongolian Vowel Separator. Or maybe the zero-width space (that'll be fun to print!). And since this is Mladen we're talking about, here's a good selection of "intriguing" characters he could use.

    Read the article

  • Direct3D - Zooming into Mouse Position

    - by roohan
    I'm trying to implement my camera class for a simulation, but I can't figure out how to zoom into my world based on the mouse position. I mean the object under the mouse cursor should remain at the same screen position. My zooming looks like this:

        VOID ZoomIn(D3DXMATRIX& WorldMatrix, FLOAT const& MouseX, FLOAT const& MouseY)
        {
            this->Position.z = this->Position.z * 0.9f;
            D3DXMatrixLookAtLH(&this->ViewMatrix, &this->Position, &this->Target, &this->UpDirection);
        }

    I passed the world matrix to the function because I had the idea of moving my drawing origin according to the mouse position. But I can't find out how to calculate the offset to move my drawing origin by. Anyone got an idea how to calculate this? Thanks in advance.

    SOLVED. Okay, I solved my problem. Here is the code if anyone is interested:

        VOID CAMERA2D::ZoomIn(FLOAT const& MouseX, FLOAT const& MouseY)
        {
            // Get the settings of the current view port.
            D3DVIEWPORT9 ViewPort;
            this->Direct3DDevice->GetViewport(&ViewPort);

            // Convert the screen coordinates of the mouse to world space coordinates.
            D3DXVECTOR3 VectorOne;
            D3DXVECTOR3 VectorTwo;
            D3DXVec3Unproject(&VectorOne, &D3DXVECTOR3(MouseX, MouseY, 0.0f), &ViewPort,
                &this->ProjectionMatrix, &this->ViewMatrix, &WorldMatrix);
            D3DXVec3Unproject(&VectorTwo, &D3DXVECTOR3(MouseX, MouseY, 1.0f), &ViewPort,
                &this->ProjectionMatrix, &this->ViewMatrix, &WorldMatrix);

            // Calculate the resulting vector components.
            float WorldZ = 0.0f;
            float WorldX = ((WorldZ - VectorOne.z) * (VectorTwo.x - VectorOne.x)) / (VectorTwo.z - VectorOne.z) + VectorOne.x;
            float WorldY = ((WorldZ - VectorOne.z) * (VectorTwo.y - VectorOne.y)) / (VectorTwo.z - VectorOne.z) + VectorOne.y;

            // Move the camera into the screen.
            this->Position.z = this->Position.z * 0.9f;
            D3DXMatrixLookAtLH(&this->ViewMatrix, &this->Position, &this->Target, &this->UpDirection);

            // Calculate the world space vector again, based on the new view matrix.
            D3DXVec3Unproject(&VectorOne, &D3DXVECTOR3(MouseX, MouseY, 0.0f), &ViewPort,
                &this->ProjectionMatrix, &this->ViewMatrix, &WorldMatrix);
            D3DXVec3Unproject(&VectorTwo, &D3DXVECTOR3(MouseX, MouseY, 1.0f), &ViewPort,
                &this->ProjectionMatrix, &this->ViewMatrix, &WorldMatrix);

            // Calculate the resulting vector components.
            float WorldZ2 = 0.0f;
            float WorldX2 = ((WorldZ2 - VectorOne.z) * (VectorTwo.x - VectorOne.x)) / (VectorTwo.z - VectorOne.z) + VectorOne.x;
            float WorldY2 = ((WorldZ2 - VectorOne.z) * (VectorTwo.y - VectorOne.y)) / (VectorTwo.z - VectorOne.z) + VectorOne.y;

            // Create a temporary translation matrix for calculating the origin offset.
            D3DXMATRIX TranslationMatrix;
            D3DXMatrixIdentity(&TranslationMatrix);

            // Calculate the origin offset.
            D3DXMatrixTranslation(&TranslationMatrix, WorldX2 - WorldX, WorldY2 - WorldY, 0.0f);

            // Add the offset to the camera's world matrix.
            this->WorldMatrix = this->WorldMatrix * TranslationMatrix;
        }

    Maybe someone has an even better solution than mine.

    Read the article

  • Validation and authorization in layered architecture

    - by SonOfPirate
    I know you are thinking (or maybe yelling), "not another question asking where validation belongs in a layered architecture?!?" Well, yes, but hopefully this will be a little bit of a different take on the subject. I am a firm believer that validation takes many forms, is context-based, and varies at each level of the architecture. That is the basis for the post - helping to identify what type of validation should be performed in each layer. In addition, a question that often comes up is where authorization checks belong. The example scenario comes from an application for a catering business. Periodically during the day, a driver may turn in to the office any excess cash they've accumulated while taking the truck from site to site. The application allows a user to record the 'cash drop' by collecting the driver's ID and the amount. Here's some skeleton code to illustrate the layers involved:

        public class CashDropApi  // This is in the Service Facade Layer
        {
            [WebInvoke(Method = "POST")]
            public void AddCashDrop(NewCashDropContract contract)
            {
                // 1
                Service.AddCashDrop(contract.Amount, contract.DriverId);
            }
        }

        public class CashDropService  // This is the Application Service in the Domain Layer
        {
            public void AddCashDrop(Decimal amount, Int32 driverId)
            {
                // 2
                CommandBus.Send(new AddCashDropCommand(amount, driverId));
            }
        }

        internal class AddCashDropCommand  // This is a command object in the Domain Layer
        {
            public AddCashDropCommand(Decimal amount, Int32 driverId)
            {
                // 3
                Amount = amount;
                DriverId = driverId;
            }

            public Decimal Amount { get; private set; }
            public Int32 DriverId { get; private set; }
        }

        internal class AddCashDropCommandHandler : IHandle<AddCashDropCommand>
        {
            internal ICashDropFactory Factory { get; set; }       // Set by IoC container
            internal ICashDropRepository CashDrops { get; set; }  // Set by IoC container
            internal IEmployeeRepository Employees { get; set; }  // Set by IoC container

            public void Handle(AddCashDropCommand command)
            {
                // 4
                var driver = Employees.GetById(command.DriverId);
                // 5
                var authorizedBy = CurrentUser as Employee;
                // 6
                var cashDrop = Factory.CreateCashDrop(command.Amount, driver, authorizedBy);
                // 7
                CashDrops.Add(cashDrop);
            }
        }

        public class CashDropFactory
        {
            public CashDrop CreateCashDrop(Decimal amount, Employee driver, Employee authorizedBy)
            {
                // 8
                return new CashDrop(amount, driver, authorizedBy, DateTime.Now);
            }
        }

        public class CashDrop  // The domain object (entity)
        {
            public CashDrop(Decimal amount, Employee driver, Employee authorizedBy, DateTime at)
            {
                // 9
                ...
            }
        }

        public class CashDropRepository  // The implementation is in the Data Access Layer
        {
            public void Add(CashDrop item)
            {
                // 10
                ...
            }
        }

    I've indicated 10 locations where I've seen validation checks placed in code. My question is which checks, if any, you would perform at each, given the following business rules (along with standard checks for length, range, format, type, etc.): the amount of the cash drop must be greater than zero; the cash drop must have a valid Driver; the current user must be authorized to add cash drops (the current user is not the driver). Please share your thoughts, how you have or would approach this scenario, and the reasons for your choices.
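    (Not part of the original question - just to make the discussion concrete.) One common split is to put invariants that can never be valid in the command constructor (location 3) and rules that need repository or user context in the handler (locations 4-6). A minimal sketch under that assumption; BusinessRuleViolationException and Employee.Id are made up for illustration:

        internal class AddCashDropCommand
        {
            public AddCashDropCommand(Decimal amount, Int32 driverId)
            {
                // Rule 1: a non-positive amount can never become valid,
                // so reject it before the command enters the bus.
                if (amount <= 0m)
                    throw new ArgumentOutOfRangeException("amount", "Amount must be greater than zero.");

                Amount = amount;
                DriverId = driverId;
            }

            public Decimal Amount { get; private set; }
            public Int32 DriverId { get; private set; }
        }

        internal class AddCashDropCommandHandler : IHandle<AddCashDropCommand>
        {
            public void Handle(AddCashDropCommand command)
            {
                // Rule 2: driver existence needs the repository, so it belongs here.
                var driver = Employees.GetById(command.DriverId);
                if (driver == null)
                    throw new BusinessRuleViolationException("Cash drop requires a valid driver.");

                // Rule 3: authorization needs the current user's identity.
                var authorizedBy = CurrentUser as Employee;
                if (authorizedBy == null || authorizedBy.Id == driver.Id)
                    throw new BusinessRuleViolationException("Current user is not authorized to add cash drops.");

                var cashDrop = Factory.CreateCashDrop(command.Amount, driver, authorizedBy);
                CashDrops.Add(cashDrop);
            }
        }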

    Read the article

  • Android Game Development: AsyncTask, Loading Bitmap Images and Sounds

    - by user2534694
    I'm working on this game for Android, and wanted to know if my thread architecture is right or wrong. Basically, I am loading all the bitmaps, sounds, etc. in the initializeVariables() method, but sometimes the game crashes and sometimes it doesn't. So I decided to use AsyncTask, but that doesn't seem to work either (it, too, loads at times and crashes at times).

        @Override
        protected void onCreate(Bundle savedInstanceState) {
            super.onCreate(savedInstanceState);
            setFullScreen();
            initializeVariables();
            new initVariables().execute();
            // setContentView(ourV);
        }

        private void setFullScreen() {
            requestWindowFeature(Window.FEATURE_NO_TITLE);
            getWindow().setFlags(WindowManager.LayoutParams.FLAG_FULLSCREEN,
                    WindowManager.LayoutParams.FLAG_FULLSCREEN);
            getWindow().setFlags(WindowManager.LayoutParams.FLAG_KEEP_SCREEN_ON,
                    WindowManager.LayoutParams.FLAG_KEEP_SCREEN_ON);
        }

        private void initializeVariables() {
            ourV = new OurView(this);
            stats = getSharedPreferences(filename, 0);
            ballPic = BitmapFactory.decodeResource(getResources(), R.drawable.ball5);
            platform = BitmapFactory.decodeResource(getResources(), R.drawable.platform3);
            gameB = BitmapFactory.decodeResource(getResources(), R.drawable.game_back2);
            waves = BitmapFactory.decodeResource(getResources(), R.drawable.waves);
            play = BitmapFactory.decodeResource(getResources(), R.drawable.play_icon);
            pause = BitmapFactory.decodeResource(getResources(), R.drawable.pause_icon);
            platform2 = BitmapFactory.decodeResource(getResources(), R.drawable.platform4);
            countdown = BitmapFactory.decodeResource(getResources(), R.drawable.countdown);
            bubbles = BitmapFactory.decodeResource(getResources(), R.drawable.waves_bubbles);
            backgroundMusic = MediaPlayer.create(this, R.raw.music);
            jump = MediaPlayer.create(this, R.raw.jump);
            click = MediaPlayer.create(this, R.raw.jump_crack);
            sm = (SensorManager) getSystemService(SENSOR_SERVICE);
            acc = sm.getDefaultSensor(Sensor.TYPE_ACCELEROMETER);
            sm.registerListener(this, acc, SensorManager.SENSOR_DELAY_GAME);
            ourV.setOnTouchListener(this);
            dialog = new Dialog(this, android.R.style.Theme_Translucent_NoTitleBar_Fullscreen);
            dialog.setContentView(R.layout.pausescreen);
            dialog.hide();
            dialog.setOnDismissListener(this);
            resume = (Button) dialog.findViewById(R.id.bContinue);
            menu = (Button) dialog.findViewById(R.id.bMainMenu);
            newTry = (Button) dialog.findViewById(R.id.bNewTry);
            tv_time = (TextView) dialog.findViewById(R.id.tv_time);
            tv_day = (TextView) dialog.findViewById(R.id.tv_day);
            tv_date = (TextView) dialog.findViewById(R.id.tv_date);
            resume.setOnClickListener(this);
            menu.setOnClickListener(this);
            newTry.setOnClickListener(this);
        }

        @Override
        protected void onResume() {
            // If it's running for the first time, start the view's thread.
            if (firstStart) {
                ourV.onResume();
                firstStart = false;
            }
        }

    What onResume in ourV does is start the thread:

        // This is ourV.onResume
        public void onResume() {
            t = new Thread(this);
            isRunning = true;
            t.start();
        }

    Now what I want is to initialize all the bitmaps, sounds, etc. in the AsyncTask's background method:

        public class initVariables extends AsyncTask<Void, Integer, Void> {
            ProgressDialog pd;

            @Override
            protected void onPreExecute() {
                pd = new ProgressDialog(GameActivity.this);
                pd.setProgressStyle(ProgressDialog.STYLE_HORIZONTAL);
                pd.setMax(100);
                pd.show();
            }

            @Override
            protected Void doInBackground(Void... arg0) {
                synchronized (this) {
                    for (int i = 0; i < 20; i++) {
                        publishProgress(5);
                        try {
                            Thread.sleep(89);
                        } catch (InterruptedException e) {
                            e.printStackTrace();
                        }
                    }
                }
                return null;
            }

            @Override
            protected void onProgressUpdate(Integer... values) {
                pd.incrementProgressBy(values[0]);
            }

            @Override
            protected void onPostExecute(Void result) {
                pd.dismiss();
                setContentView(ourV);
            }
        }

    Since I am new to this, could you tell me whether AsyncTask is even required for this kind of loading, or whether there is another way of doing it normally?

    Read the article

  • Identity in .NET 4.5 – Part 3: (Breaking) changes

    - by Your DisplayName here!
    I recently started porting a private build of Thinktecture.IdentityModel to .NET 4.5 and noticed a number of changes. The good news is that I can delete large parts of my library, because many features are now in the box. Along the way I found some other nice additions:

    - ClaimsIdentity now has methods to query the claims collection, e.g. HasClaim(), FindFirst(), FindAll().
    - ClaimsPrincipal has those methods as well, but they work across all contained identities. Nice!
    - ClaimsPrincipal.Current retrieves the ClaimsPrincipal from Thread.CurrentPrincipal. Combined with the above changes, no casting is necessary anymore.
    - SecurityTokenHandler now has read and write methods that work directly with strings. This makes it much easier to deal with non-XML tokens like SWT or JWT.
    - A new session security token handler uses the ASP.NET machine key to protect the cookie. This makes it easier to get started in web farm scenarios.
    - No need for a custom service host factory or the federation behavior anymore. WCF can be switched into "WIF mode" with the useIdentityConfiguration switch (odd name though).
    - Tooling has become better, and the new test STS makes it very easy to get started.

    On the other hand - and this was kind of expected - bringing claims into the core framework also means some breaking changes for WIF code. If you want to migrate (and I would recommend that), most changes to your code are mechanical. The following is a brain dump of the changes I encountered:

    - The assembly Microsoft.IdentityModel is gone. The new functionality is now in mscorlib, System.IdentityModel(.Services) and System.ServiceModel. All the namespaces have changed as well.
    - No IClaimsPrincipal and IClaimsIdentity anymore.
    - The configuration section has been split into <system.identityModel /> and <system.identityModel.services />. The WCF configuration story has changed as well.
    - Claim.ClaimType is now Claim.Type.
    - ClaimCollection is now IEnumerable<Claim>.
    - IsSessionMode is now IsReferenceMode.
    - Bootstrap token handling is different now.
    - ClaimsPrincipalHttpModule is gone. It is not really needed anymore, apart from maybe claims transformation (see here).
    - Various factory methods on ClaimsPrincipal are gone (e.g. ClaimsPrincipal.CreateFromIdentity()).
    - SecurityTokenHandler.ValidateToken now returns a ReadOnlyCollection<ClaimsIdentity>.
    - Some lower-level helper classes are gone or internal now (e.g. KeyGenerator).
    - The WCF WS-Trust bindings are gone. I think this is a pity - they were *really* useful when doing work with WSTrustChannelFactory.

    Since WIF is part of the Windows operating system and also supported in future versions of .NET, there is no urgent need to migrate to the 4.5 claims model. But obviously, going forward, at some point you will want to make the move.
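    As a quick illustration of the first three additions (a minimal sketch; the role and claim values are made up):

        using System.Security.Claims;

        // No IClaimsPrincipal and no casting needed anymore.
        ClaimsPrincipal principal = ClaimsPrincipal.Current;

        // The query methods work across all contained identities.
        if (principal.HasClaim(ClaimTypes.Role, "Sales"))
        {
            Claim email = principal.FindFirst(ClaimTypes.Email);
            // ... use email.Value
        }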

    Read the article

  • Cloud Fact for Business Managers #3: Where Your Data Is, and Who Has Access to It, Might Surprise You

    - by yaldahhakim
    Written by: David Krauss

    While data security and operational risk conversations usually happen around the desk of a CCO/CSO (chief compliance and/or security officer), or perhaps the CFO, since business managers are now selecting cloud providers, they need to be able to at least ask some high-level questions on the topic of risk and compliance. While the report found that 76% of adopters were motivated to adopt cloud apps because of quick access to software, most of these managers found that after they made a purchase decision, their access to exciting new capabilities in the cloud could be hindered by performance and scalability constraints put forth by their cloud provider. If you are going to let your business consume its mission-critical business applications as a service, then it's important to understand who is providing those cloud services and what kind of performance you are going to get. Different types of departments, companies and industries all have unique requirements, so it's key to take these into consideration as well.

    Nothing puts a CEO in a bad mood like a public data breach, or finding out the company lost money because customers couldn't buy a product or service when the cloud service provider had a problem. With 42% of business managers having seen a data security breach in their department associated directly with the use of cloud applications, this is happening more than you think.

    We've talked about the importance of being able to avoid information silos through a unified cloud approach and platform. This is also important for keeping your data safe and secure, and it's a key conversation to have with your cloud provider. Your customers want to know that their information is protected when they do business with you, just like you want your own company information protected. This is really hard to do when each line of business is running different cloud application services managed by different cloud providers, all with different processes and controls. It only adds to the complexity, and the more complex, the more risky, and the greater the chance that something will go wrong.

    What about compliance? Depending on the cloud provider, it can be difficult at best to understand who has access to your data and where your data is actually stored. Add to this multiple cloud providers spanning multiple departments, and it becomes very problematic when trying to comply with certain industry and country data security regulations. With 73% of business managers complaining that having cloud data handled externally by one or more cloud vendors makes it hard for their department to be compliant, this is a big time sink for executives and it puts the organization at risk.

    Is There a Complete, Integrated, Modern Cloud Out There for Business Executives? If you are a business manager looking to drive faster innovation for your business and want a cloud application that your CIO would approve of, I would encourage you to take a look at Oracle Cloud. It's everything you want from a SaaS-based application, but without compromising on functionality and other modern capabilities like embedded business intelligence, social relationship management (for your entire business), and advanced mobile. And because Oracle Cloud is built and managed by Oracle, you can be confident that your cloud application services are enterprise-grade. Over 25 million users and 10 thousand companies around the globe rely on Oracle Cloud application services every day - maybe your business should too.

    For more information, visit cloud.oracle.com.

    Additional Resources
    •    Try it: cloud.oracle.com
    •    Learn more: http://www.oracle.com/us/corporate/features/complete-cloud/index.html
    •    Research Report: Cloud for Business Managers: The Good, the Bad, and the Ugly

    Read the article

  • Using NSpec at various architectural layers

    - by nono
    Having read the quick start at nspec.org, I realized that NSpec might be a useful tool in a scenario which was becoming a bit cumbersome with NUnit alone. I'm adding OAuth (or, DotNetOpenAuth) to a website and quickly made a mess of writing test methods such as

        [Test]
        public void UserIsLoggedInLocallyPriorToInvokingExternalLoginAndExternalLoginSucceedsAndExternalProviderIdIsNotAlreadyAssociatedWithUserAccount()
        {
            ...
        }

    ... and I wound up with maybe a dozen permutations of this theme: the user already being logged in locally or not, the external login succeeding or failing, etc. Not only were the method names unwieldy, but every test needed a setup that contained parts in common with a different set of other tests. I realized that NSpec's incremental setup capabilities would work great for this, and for a while I was trucking along wonderfully, with code like

        act = () => { actionResult = controller.ExternalLoginCallback(returnUrl); };

        context["The user is already logged in"] = () =>
        {
            before = () => identity.Setup(x => x.IsAuthenticated).Returns(true);

            context["The external login succeeds"] = () =>
            {
                before = () => oauth.Setup(x => x.VerifyAuthentication(It.IsAny<string>()))
                    .Returns(new AuthenticationResult(true, providerName, "provideruserid", "username", new Dictionary<string, string>()));

                context["External login already exists for current user"] = () =>
                {
                    before = () => authService.Setup(x => x.ExternalLoginExistsForUser(It.IsAny<string>(), It.IsAny<string>(), It.IsAny<string>())).Returns(true);

                    it["Should add 'login successful' alert"] = () =>
                    {
                        var alerts = (IList<Alert>)controller.TempData[TempDataKeys.AlertCollection];
                        alerts[0].Message.should_be_same("Login successful");
                        alerts[0].AlertType.should_be(AlertType.Success);
                    };

                    it["Should return a redirect result"] = () => actionResult.should_cast_to<RedirectToRouteResult>();
                };

                context["External login already exists for another user"] = () =>
                {
                    before = () => authService.Setup(x => x.ExternalLoginExistsForAnyOtherUser(It.IsAny<string>(), It.IsAny<string>(), It.IsAny<string>())).Returns(true);

                    it["Adds an error alert"] = () =>
                    {
                        var alerts = (IList<Alert>)controller.TempData[TempDataKeys.AlertCollection];
                        alerts[0].Message.should_be_same("The external login you requested is already associated with a different user account");
                        alerts[0].AlertType.should_be(AlertType.Error);
                    };

                    it["Should return a redirect result"] = () => actionResult.should_cast_to<RedirectToRouteResult>();
                };
            };
        };

    This approach seemed to work magnificently until I prepared to write test code for my ApplicationServices layer, to which I delegate viewmodel manipulation from my MVC controllers, and which coordinates the operations of the lower data repository layer:

        public void CreateUserAccountFromExternalLogin(RegisterExternalLoginModel model)
        {
            throw new NotImplementedException();
        }

        public void AssociateExternalLoginWithUser(string userName, string provider, string providerUserId)
        {
            throw new NotImplementedException();
        }

        public string GetLocalUserName(string provider, string providerUserId)
        {
            throw new NotImplementedException();
        }

    I have no idea what in the world to name the test class or the test methods, or even whether I should perhaps fold the testing for this layer into the test class from my large code snippet above, so that a single feature or user action could be tested without regard to architectural layering. I can't find any tutorials or blog posts which cover more than simple examples, so I would appreciate any recommendations or pointers in the right direction. I would even welcome "your question is invalid"-type answers, as long as some explanation is provided.
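    (Not from the original question - just to make one option concrete.) If the service layer were specced in the same NSpec style, the class and method names can carry the context that the long NUnit method names were straining to express. A sketch, with all type and member names hypothetical:

        class describe_AccountServices : nspec
        {
            AccountServices services;         // hypothetical system under test
            Mock<IUserRepository> users;      // hypothetical Moq dependency

            void before_each()
            {
                users = new Mock<IUserRepository>();
                services = new AccountServices(users.Object);
            }

            void when_associating_an_external_login_with_a_user()
            {
                context["login is not yet associated with any account"] = () =>
                {
                    it["persists the new association"] = todo;
                };

                context["login is already associated with another account"] = () =>
                {
                    it["reports a conflict instead of re-associating"] = todo;
                };
            }
        }

    Whether these service specs live in the same class as the controller specs then becomes a packaging question rather than a naming one.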

    Read the article

  • Too complex/too many objects?

    - by Mike Fairhurst
    I know that this will be a difficult question to answer without context, but hopefully there are at least some good guidelines to share on this. The questions are at the bottom if you want to skip the details; most are about OOP in general.

    Begin context. I am a jr dev on a PHP application, and in general the devs I work with consider themselves to use many more OO concepts than most PHP devs. Still, in my research on clean code I have read about so many ways of using OO features to make code flexible, powerful, expressive, testable, etc., that are just plain not in use here. The current strongly OO API that I've proposed is being called too complex, even though it is trivial to implement.

    The problem I'm solving is that our permission checks are done via a message object (my API - they wanted to use arrays of constants), and the message object does not hold the validation object accountable for checking all provided data. Metaphorically, if your perm containing 'allowable' and 'rare but disallowed' is sent into a validator, the validator may not know to look for 'rare but disallowed' but approve 'allowable', which will actually approve the whole perm check. We have around 11 validators - too many to easily track at such minute detail.

    So I proposed an AtomicPermission class. To fix the previous example, the perm would instead contain two atomic permissions, one wrapping 'allowable' and the other wrapping 'rare but disallowed'. Where previously the validator would say 'the check is OK because it contains allowable', now it would instead say '"allowable" is ok', at which point the check ends... and the check fails, because 'rare but disallowed' was not specifically okay-ed. The implementation is just 4 trivial objects, and rewriting a 10-line function into a 15-line function:

        abstract class PermissionAtom
        {
            public function allow();       // maybe deny() as well
            public function wasAllowed();
        }

        class PermissionField extends PermissionAtom
        {
            public function getName();
            public function getValue();
        }

        class PermissionIdentifier extends PermissionAtom
        {
            public function getIdentifier();
        }

        class PermissionAction extends PermissionAtom
        {
            public function getType();
        }

    They say that this is 'not going to get us anything important', is 'too complex', and 'will be difficult for new developers to pick up'. I respectfully disagree, and there I end my context to begin the broader questions.

    So the question is about my OOP. Are there any guidelines I should know?

    - Is this too complicated/too much OOP? (Not that I expect to get more than 'it depends, I'd have to see if...')
    - When is OO abstraction too much? When is it too little?
    - How can I determine when I am overthinking a problem vs. fixing one?
    - How can I determine when I am adding bad code to a bad project?
    - How can I pitch these APIs? I feel the other devs would rather just say 'it's too complicated' than ask 'can you explain it?' whenever I suggest a new class.

    Read the article

  • A* algorithm very slow

    - by Amaranth
    I am programming an RTS game (I use XNA with C#). The pathfinding works fine, except that when there are a lot of nodes to search, there is a lag of one or two seconds. It happens mainly when there is no path to the target destination, since in that situation there are more nodes to explore. I have the same problem when the path is shorter but more than 3 units are selected (they can't take the same path, since the selected units can be in different parts of the map).

        private List<NodeInfo> FindPath(Unit u, NodeInfo start, NodeInfo end)
        {
            Map map = GameInfo.GetInstance().GameMap;
            _nearestToTarget = start;
            start.MoveCost = 0;
            // getTileByPos simply gets the tile in a 2D array with the X and Y indexes.
            Vector2 endPosition = map.getTileByPos(end.X, end.Y).Position;
            start.EstimatedRemainingCost = (int)(endPosition - map.getTileByPos(start.X, start.Y).Position).Length();
            start.Parent = null;

            List<NodeInfo> openedNodes = new List<NodeInfo>();
            List<NodeInfo> closedNodes = new List<NodeInfo>();
            Point[] movements = GetMovements(u.UnitType);
            openedNodes.Add(start);

            while (!closedNodes.Contains(end) && openedNodes.Count > 0)
            {
                // Loop over the open nodes to find the lowest cost.
                NodeInfo currentNode = FindLowestCostOpenedNode(openedNodes);
                openedNodes.Remove(currentNode);
                closedNodes.Add(currentNode);

                Vector2 previousMouvement;
                if (currentNode.Parent == null)
                {
                    previousMouvement = ConvertRotationToDirectionVector(u.Rotation);
                }
                else
                {
                    previousMouvement = map.getTileByPos(currentNode.X, currentNode.Y).Position -
                        map.getTileByPos(currentNode.Parent.X, currentNode.Parent.Y).Position;
                    previousMouvement.Normalize();
                }

                // For each neighbor:
                foreach (Point movement in movements)
                {
                    Point exploredGridPos = new Point(currentNode.X + movement.X, currentNode.Y + movement.Y);

                    // Check that it is a valid move and not in the closed nodes list.
                    if (ValidNavigableNode(u.UnitType, new Point(currentNode.X, currentNode.Y), exploredGridPos) &&
                        !closedNodes.Contains(_gridMap[exploredGridPos.Y, exploredGridPos.X]))
                    {
                        NodeInfo exploredNode = _gridMap[exploredGridPos.Y, exploredGridPos.X];
                        Tile.TileType exploredTerrain = map.getTileByPos(exploredGridPos.X, exploredGridPos.Y).TerrainType;

                        if (openedNodes.Contains(exploredNode))
                        {
                            int newCost = currentNode.MoveCost + GetMoveCost(previousMouvement, movement, exploredTerrain);
                            if (newCost < exploredNode.MoveCost)
                            {
                                exploredNode.Parent = currentNode;
                                exploredNode.MoveCost = newCost;

                                // Track the nearest tile to the target (in case no path to the target is found).
                                // Only compares the node to the current nearest.
                                FindNearest(exploredNode);
                            }
                        }
                        else
                        {
                            exploredNode.Parent = currentNode;
                            exploredNode.MoveCost = currentNode.MoveCost + GetMoveCost(previousMouvement, movement, exploredTerrain);
                            Vector2 exploredNodeWorldPos = map.getTileByPos(exploredGridPos.X, exploredGridPos.Y).Position;
                            exploredNode.EstimatedRemainingCost = (int)(endPosition - exploredNodeWorldPos).Length();

                            // Track the nearest tile to the target (in case no path to the target is found).
                            // Only compares the node to the current nearest.
                            FindNearest(exploredNode);
                            openedNodes.Add(exploredNode);
                        }
                    }
                }
            }

            return closedNodes;
        }

    After that, I simply check if the end node is contained in the returned nodes. If so, I add the end node and each parent until I reach the start. If not, I add the nearestToTarget and each parent until I reach the start. I added a condition before calling FindPath so that only one unit can request a path each frame (at 60 frames per second), but it makes no difference.

    I thought maybe I could solve this by letting the pathfinding run in the background while the game continues to run correctly, even if it takes a few frames (it is currently sequential, since it is called in the unit's update() when there's a target location but no path), but I don't really know how... I also thought about sorting my opened nodes list by cost so I don't have to loop over them, but I don't know whether that would have an effect on performance... Would there be other solutions?

    P.S. In the code, when I get the move cost, I check whether the unit has to turn to perform the move, and the terrain type - nothing hard to do.
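    (Not from the original question.) On that last point: replacing the linear FindLowestCostOpenedNode scan with a binary min-heap keyed on MoveCost + EstimatedRemainingCost makes extraction O(log n) per node, which is usually the single biggest A* win. A sketch under the assumption that NodeInfo exposes those two properties as in the code above (XNA-era .NET has no built-in priority queue):

        using System.Collections.Generic;

        // Minimal binary min-heap for the A* open list.
        public class OpenList
        {
            private readonly List<NodeInfo> heap = new List<NodeInfo>();

            private static int F(NodeInfo n) { return n.MoveCost + n.EstimatedRemainingCost; }

            public int Count { get { return heap.Count; } }

            public void Add(NodeInfo node)
            {
                heap.Add(node);
                int i = heap.Count - 1;
                while (i > 0)  // Sift the new node up until its parent is cheaper.
                {
                    int parent = (i - 1) / 2;
                    if (F(heap[parent]) <= F(heap[i])) break;
                    NodeInfo tmp = heap[parent]; heap[parent] = heap[i]; heap[i] = tmp;
                    i = parent;
                }
            }

            public NodeInfo PopCheapest()
            {
                NodeInfo top = heap[0];
                int last = heap.Count - 1;
                heap[0] = heap[last];
                heap.RemoveAt(last);
                int i = 0;
                while (true)  // Sift the moved node down to restore heap order.
                {
                    int left = 2 * i + 1, right = left + 1, smallest = i;
                    if (left < heap.Count && F(heap[left]) < F(heap[smallest])) smallest = left;
                    if (right < heap.Count && F(heap[right]) < F(heap[smallest])) smallest = right;
                    if (smallest == i) break;
                    NodeInfo tmp = heap[smallest]; heap[smallest] = heap[i]; heap[i] = tmp;
                    i = smallest;
                }
                return top;
            }
        }

    Two caveats with this approach: when a node's MoveCost improves (the decrease-key branch in the question's inner loop), the simplest heap-friendly fix is to Add the node again and skip entries already closed when they are popped; and closedNodes.Contains on a List is itself O(n) per call, so a HashSet<NodeInfo> (or a boolean flag on the node) removes another hidden linear scan.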

    Read the article

  • Hosted Monitoring

    - by Grant Fritchey
    The concept of using services to take the place of writing a lot of your own code goes way, way back in computing history. The fundamentals of the concept go back to the dawn of computing, with places like IBM hosting time-shares for computing power that you could rent for short periods of time. But things really took off with the building of the Web. Now, with all the growth in virtual machines, hosted machines, and hosted services from vendors like Amazon and Microsoft, the need to keep all of your software locally on physical boxes is just going the way of the dodo. There will likely always be some pieces of software that you keep on machines on your property or on your person, but the concept of keeping fundamental services locally is going away. As someone once put it to me, if you were starting a business right now, would you bother setting up an Exchange server to manage your email, or would you just go to one of the external mail services for everything? For most of us (who are not Exchange admins) the answer is pretty easy.

    With all this momentum toward having external services manage more and more of the infrastructure that's not business-unique, why would you burn up a server and a license instance setting up monitoring for your SQL Servers? Of course, some of you are dealing with hyper-sensitive data that might require, through law or treaty, that you lock it down and never expose it to the intertubes, but most of us are not. So, what if someone else took on the basic hassle of setting up monitoring on your systems?

    That's what we're working on here at Red Gate. Right now it's a private test, but we're growing it and developing it, and it'll be going to a public beta, probably (hopefully) this year. I'm running it on my machines right now. The concept is pretty simple: you put a relay on your server, poke a hole in your firewall for it, and we start monitoring your server using SQL Monitor. It's actually shocking how easy it is to get going. You still have to adjust your alerting thresholds, but that's a standard part of alerting - your pain threshold and my pain threshold for any given alert may be different. But from there, we do all the heavy lifting: keeping your data online and available, providing you with access to the information about how your servers are behaving, everything.

    Maybe it's just me, but I'm really excited by this. I think we're getting to a place where we can really help small and medium-sized businesses get a monitoring solution in place, quickly and easily. All you crazy busy, and possibly accidental, DBAs and system admins can finally set up monitoring without taking all the time to configure systems, run installs, and all the rest. You just have to tweak your alerts and you're ready to run. If you are interested in checking it out, you can apply for the closed beta through the Monitor web page.

    Read the article

< Previous Page | 440 441 442 443 444 445 446 447 448 449 450 451  | Next Page >