Search Results

Search found 11396 results on 456 pages for 'simply denis'.


  • Deliberate Practice

    - by Jeff Foster
    It’s easy to assume, as software engineers, that there is little need to “practice” writing code. After all, we write code all day long! Just by writing a little each day, we’re constantly learning and getting better, right? Unfortunately, that’s just not true. Of course, developers do improve with experience. Each time we encounter a problem we’re more likely to avoid it next time. If we’re in a team that deploys software early and often, we hone and improve the deployment process each time we practice it. However, not all practice makes perfect. To develop true expertise requires a particular type of practice, deliberate practice, the only goal of which is to make us better programmers. Everyday software development has other constraints and goals, not least the pressure to deliver. We rarely get the chance in the course of a “sprint” to experiment with potential solutions that are outside our current comfort zone. However, if we believe that software is a craft then it’s our duty to strive continuously to raise the standard of software development. This requires specific and sustained efforts to get better at something we currently can’t do well (from Harvard Business Review July/August 2007). One interesting way to introduce deliberate practice, in a sustainable way, is the code kata. The term kata derives from martial arts and refers to a set of movements practiced either solo or in pairs. One of the better-known examples is the Bowling Game kata by Bob Martin, the goal of which is simply to write some code to do the scoring for 10-pin bowling. It sounds too easy, right? What could we possibly learn from such a simple example? Trust me, though, that it’s not as simple as five minutes of typing and a solution. Of course, we can reach a solution in a short time, but the important thing about code katas is that we explore each technique fully and in a controlled way. We tackle the same problem multiple times, using different techniques and making different decisions, understanding the ramifications of each one, and exploring edge cases. The short feedback loop optimizes opportunities to learn. Another good example is Conway’s Game of Life. It’s a simple problem to solve, but try solving it in a functional style. If you’re used to mutability, solving the problem without mutating state will push you outside of your comfort zone. Similarly, if you try to solve it with the focus of “tell-don’t-ask”, how will the responsibilities of each object change? As software engineers, we don’t get enough opportunities to explore new ideas. In the middle of a development cycle, we can’t suddenly start experimenting on the team’s code base. Code katas offer an opportunity to explore new techniques in a safe environment. If you’re still skeptical, my challenge to you is simply to try it out. Convince a willing colleague to pair with you and work through a kata or two. It only takes an hour and I’m willing to bet you learn a few new things each time. The next step is to make it a sustainable team practice. Start with an hour every Friday afternoon (after all, who wants to commit code to production just before they leave for the weekend?) for a month and see how that works out. Finally, consider signing up for the Global Day of Code Retreat. It’s like a daylong code kata; it’s on December 8th and there’s probably an event in your area!
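
    To make the functional-style exercise concrete, here is a minimal C# sketch of a single Game of Life generation computed with no mutable state (representing the board as a set of live cells is my own choice for this sketch, not something from the original post):

      using System.Collections.Generic;
      using System.Linq;

      static class Life
      {
          // One generation, computed as a brand-new value; nothing is mutated.
          // A board is just the set of live cell coordinates.
          public static HashSet<(int X, int Y)> Step(HashSet<(int X, int Y)> live)
          {
              // Count, for every cell, how many live neighbours it has.
              var neighbourCounts = live
                  .SelectMany(Neighbours)
                  .GroupBy(c => c)
                  .ToDictionary(g => g.Key, g => g.Count());

              // Birth on exactly 3 neighbours; survival on 2 or 3.
              var next = neighbourCounts
                  .Where(kv => kv.Value == 3 || (kv.Value == 2 && live.Contains(kv.Key)))
                  .Select(kv => kv.Key);

              return new HashSet<(int X, int Y)>(next);
          }

          static IEnumerable<(int X, int Y)> Neighbours((int X, int Y) c) =>
              from dx in Enumerable.Range(-1, 3)
              from dy in Enumerable.Range(-1, 3)
              where dx != 0 || dy != 0
              select (c.X + dx, c.Y + dy);
      }

    Trying to write Step without a single assignment to existing state is exactly the kind of constraint the kata is meant to impose.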

    Read the article

  • concurrency::extent<N> from amp.h

    - by Daniel Moth
    Overview: We saw in a previous post how index<N> represents a point in N-dimensional space, and in this post we'll see how to define the N-dimensional space itself. With C++ AMP, an N-dimensional space can be specified with the template class extent<N>, where you define the size of each dimension. From a look and feel perspective, you'd expect the programmatic interface of a point type and size type to be similar (even though the concepts are different). Indeed, exactly like index<N>, extent<N> is essentially a coordinate vector of N integers ordered from most- to least-significant, BUT each integer represents the size for that dimension (and hence cannot be negative). So, if you read the description of index, you won't be surprised by the below description of extent<N>. There is the rank field returning the value of N you passed as the template parameter. You can construct one extent from another (via the copy constructor or the assignment operator), you can construct it by passing an integer array, or via convenience constructor overloads for 1-, 2-, and 3-dimension extents. Note that the parameterless constructor creates an extent of the specified rank with all bounds initialized to 0. You can access the components of the extent through the subscript operator (passing it an integer). You can perform some arithmetic operations between extent objects through operator overloading, i.e. ==, !=, +=, -=, +, -. There are operator overloads so that you can perform operations between an extent and an integer: -- (pre- and post-decrement), ++ (pre- and post-increment), %=, *=, /=, +=, -=, and, finally, there are additional overloads for plus and minus (+, -) between extent<N> and index<N> objects, returning a new extent object as the result. In addition to the usual suspects, extent offers a contains function that tests if an index is within the bounds of the extent (assuming an origin of zero). It also has a size function that returns the total linear size of this extent<N> in units of elements. Example code:

      extent<2> e(3, 4);
      _ASSERT(e.rank == 2);
      _ASSERT(e.size() == 3 * 4);
      e += 3;
      e[1] += 6;
      e = e + index<2>(3, -4);
      _ASSERT(e == extent<2>(9, 9));
      _ASSERT( e.contains(index<2>(8, 8)));
      _ASSERT(!e.contains(index<2>(8, 9)));

    grid<N>: Our upcoming pre-release bits also have a similar type to extent, grid<N>. The way you create a grid is by passing it an extent, e.g.

      extent<3> e(4, 2, 6);
      grid<3> g(e);

    I am not going to dive deeper into grid; suffice it for now to think of grid<N> simply as an alias for the extent<N> object that you create when you encounter a function that expects a grid object instead of an extent object. Usage: The extent class on its own simply defines the size of the N-dimensional space. We'll see in future posts that when you create containers (arrays) and wrappers (array_views) for your data, it is an extent<N> object that you'll need to use to create those (and use an index<N> object to index into them). We'll also see that it is a grid<N> object that you pass to the new parallel_for_each function that I'll cover in the next post. Comments about this post by Daniel Moth welcome at the original blog.

    Read the article

  • Move over DFS and Robocopy, here is SyncToy!

    - by andywe
    Ever since Windows 2000, I have always had the need to replicate data to multiple endpoints with the same content. Until DFS was introduced, the method of thinking was to either manually copy the data location by location, or to batch script it with xcopy and schedule a task. Even though this worked (and still does today), it was cumbersome and intensive on the network, especially when dealing with larger amounts of data. Then along came robocopy, as an internal tool written by an enterprising programmer at Microsoft. We used it quite a bit, especially when we could not use DFS in the early days. It was received so well, it made it into the public realm. At least now we could have the ability to determine what files had changed and only replicate those. Well, over time there has been evolution of this ideal. DFS is obviously the Windows enterprise-class service to do this, along with BranchCache. However, you don't always need or want the power of DFS, especially when it comes to small datacenter installations or remote offices. I have specific data sets that are on closed or restricted networks, that either have a security need for this, or are in remote countries where bandwidth is at a premium. For this, I use the latest evolution for one-off replication, named SyncToy. SyncToy is a Microsoft tool, released around 2009, that wraps a nice GUI around setting up a paired set of folders (remember the mobile briefcase from Windows 98?) and gives you the choice of synchronization methods: 1-way or 2-way. Simply create a paired set of folders on the source and destination, choose your options for content, exclude any file types you don't want to replicate, and click run. Scheduling is even easier: MS has included a wrapper for doing just this, so all you enter in your task schedule is SyncToyCmd.exe, -R as an argument, and the time schedule. No more complicated command lines or scripts. I find this especially useful when I use MS Backup to back up a system volume, but only want subsets of backup information of a data share, and ONLY when that dataset has changed, rather than relying on full and incremental backups. An example of this is my application installation master share. I back this up with SyncToy because I do not need multiple backup copies; one copy elsewhere suffices. At home, it is very useful for your pictures, videos, music, etc.: the backup is online and ready to access, not waiting for you to restore a backup file, and there is no need to institute a domain simply to have DFS. Do note there is a risk: if you accidentally delete a file and do not catch this before the next sync, then depending on your SyncToy settings you can indeed lose that file as the destination updates, so due diligence applies. I make it a rule to sync mainly one way: I use my master share for making changes, and allow the schedule to follow suit. Any really important file I lock down as read-only through file permissions so it cannot be deleted unless I intervene. Check out the tool and have some fun! http://www.microsoft.com/en-us/download/details.aspx?DisplayLang=en&id=15155
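
    For illustration only (the install path and timing here are assumptions on my part, not from the post), a scheduled run of all active SyncToy pairs might be created like this:

      schtasks /Create /TN "SyncToy nightly" /SC DAILY /ST 02:00 /TR "\"C:\Program Files\SyncToy 2.1\SyncToyCmd.exe\" -R"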

    Read the article

  • Extreme Makeover, Phone Edition: Comcast's xfinity

    Mobile Makeover: For many companies the first foray into Windows Phone 7 (WP7) may be in porting their existing mobile apps. It is tempting to simply transfer existing functionality, avoiding the additional design costs. Readdressing business needs and taking advantage of the WP7 platform can reduce cost and is essential to a successful re-launch. To better understand the advantage of new development, let's examine a conceptual upgrade of Comcast's existing mobile app. Before: Comcast has a great mobile app that provides several key features: the ability to browse the lineup using a guide, a client for Comcast email accounts, an On Demand gallery, and much more. We will leverage these and build on them using some of the incredible WP7 features. After: With the proliferation of DVRs (Digital Video Recorders) and a variety of media devices (TV, PC, Mobile), content providers are challenged to find creative ways to build their brands. Every client touch point must provide both value-added services as well as opportunities for marketing and up-sell; WP7 makes it easy to focus on those opportunities. The new app is an excellent vehicle for presenting Comcast's newly rebranded TV, Voice, and Internet services. These services now fly under the banner of xfinity and have been expanded to provide the best experience for Comcast customers. The Windows Phone 7 app will increase the surface area of this service revolution. The home menu is simplified and highlights Comcast's Triple Play: Voice, TV, and Internet. The inbox has been replaced with a messages view, and message management is handled by a WP7 hub. The hub presents emails, tweets, and IMs from Comcast and other viewers the user follows on Twitter. The popular view orders shows based on the user's viewing history and current cable package. The first show, Glee, is both popular and participating in a conceptual co-marketing effort, so it receives prime positioning. The second spot goes to a hit show on a premium channel, in this example HBO's The Pacific, encouraging viewers to upgrade for this premium content. The remaining spots are ordered based on viewing history and popularity. Tapping the play button moves the user to the theatre, where they can watch previews or full episodes streaming from Fancast. Tapping an extra presents the user with show details as well as interactive content that may be included as part of co-marketing efforts. Co-Marketing with Dynamic Content: The success of Comcast's services is tied to the success of the networks and shows it purveys, making co-marketing efforts essential. In this concept FOX is co-marketing its popular show Glee. A customized panorama is updated with the latest gleeks' tweets, streaming HD episodes, and extras featuring photos and video of the cast. If WP7 apps can be dynamically extended with web-hosted .xap files, including sandboxed partner experiences would enable interactive features such as the Gleek Peek, in which a viewer can select a character from a panorama to view the actor's profile. This dynamic inline experience has a tailored appeal to aspiring creatives and is technically possible with Windows Phone 7. Summary: The conceptual Comcast mobile app for Windows Phone 7 highlights just a few of the incredible experiences and business opportunities that can be unlocked with this latest mobile solution. It is critical that organizations recognize and take full advantage of these new capabilities.
Simply porting existing mobile applications does not leverage these powerful tools; re-examining existing applications and upgrading them to Windows Phone 7 will prove essential to the continued growth and success of your brand.

    Read the article

  • Cloud – the forecast is improving

    - by Rob Farley
    There is a lot of discussion about “the cloud”, and how that affects people’s data stories. Today the discussion enters the realm of T-SQL Tuesday, hosted this month by Jorge Segarra. Over the years, companies have invested a lot in making sure that their data is good, and I mean every aspect of it – the quality of it, the security of it, the performance of it, and more. Experts such as those of us at LobsterPot Solutions have helped these companies with this, and continue to work with clients to make sure that data is a strong part of their business, not an oversight. Whether business intelligence systems are being utilised or not, every business needs to be able to rely on its data, and have the confidence in it. Data should be a foundation upon which a business is built. In the past, data had been stored in paper-based systems. Filing cabinets stored vital information. Today, people have server rooms with storage of various kinds, recognising that filing cabinets don’t necessarily scale particularly well. It’s easy to ‘lose’ data in a filing cabinet, when you have people who need to make sure that the sheets of paper are in the right spot, and that you know how things are stored. Databases help solve that problem, but still the idea of a large filing cabinet continues, it just doesn’t involve paper. If something happens to the physical ‘filing cabinet’, then the problems are larger still. Then the data itself is under threat. Many clients have generators in case the power goes out, redundant cables in case the connectivity dies, and spare servers in other buildings just in case they’re required. But still they’re maintaining filing cabinets. You see, people like filing cabinets. There’s something to be said for having your data ‘close’. Even if the data is not in readable form, living as bits on a disk somewhere, the idea that its home is ‘in the building’ is comforting to many people. They simply don’t want to move their data anywhere else. The cloud offers an alternative to this, and the human element is an obstacle. By leveraging the cloud, companies can have someone else look after their filing cabinet. A lot of people really don’t like the idea of this, partly because the administrators of the data, those people who could potentially log in with escalated rights and see more than they should be allowed to, who need to be trusted to respond if there’s a problem, are now a faceless entity in the cloud. But this doesn’t mean that the cloud is bad – this is simply a concern that some people may have. In new functionality that’s on its way, we see other hybrid mechanisms that mean that people can leverage parts of the cloud with less fear. Companies can use cloud storage to hold their backup data, for example, backups that have been encrypted and are therefore not able to be read by anyone (including administrators) who don’t have the right password. Companies can have a database instance that runs locally, but which has its data files in the cloud, complete with Transparent Data Encryption if needed. There can be a higher level of control, making the change easier to accept. Hybrid options allow people who have had fears (potentially very justifiable) to take a new look at the cloud, and to start embracing some of the benefits of the cloud (such as letting someone else take care of storage, high availability, and more) without losing the feeling of the data being close. @rob_farley

    Read the article

  • Try the Oracle Database Appliance Manager Configurator - For Fun!

    - by pwstephe-Oracle
    If you would like to get a first-hand glimpse of how easy it is to configure an ODA, even if you don’t have access to one, it’s possible to download the Appliance Manager Configurator from the Oracle Technology Network and run it standalone on your PC or Linux/Unix workstation. The configurator is packaged in a zip file that contains the complete Java environment to run standalone. Once the package is downloaded and unzipped, it’s simply a matter of launching it using the config command or shell, depending on your runtime environment. Oracle Appliance Manager Configurator is a Java-based tool that enables you to input your deployment plan and validate your network settings before an actual deployment, or you can just preview and experiment with it. Simply download and run the configurator on a local client system, which can be a Windows, Linux, or UNIX system. (For Windows, launch the batch file config.bat; for Linux/Unix environments, run ./config.sh.) You will be presented with the very same dialogs and options used to configure a production ODA, but on your workstation. At the end of a configurator session, you may save your deployment plan in a configuration file. If you were actually ready to deploy, you could copy this configuration file to a real ODA, where the online Oracle Appliance Manager Configurator would use the contents to deploy your plan in production. You may also print the file’s content and use the printout as a checklist for setting up your production external network configuration. Be sure to use the actual production network addresses you intend to use, as this will only work correctly if your client system is connected to the same network that will be used for the ODA. (This step is not necessary if you are just previewing the Configurator.) This is a great way to get an introductory look at the simple and intuitive Database Appliance configuration interface and the steps to configure a system.

    Read the article

  • Can someone explain the true landscape of Rails vs PHP deployment, particularly within the context of Reseller-based web hosting (e.g., Hostgator)?

    - by rcd
    Currently, I have a reseller account with the company HostGator. I design websites, which up until now have occasionally been wrapped in Wordpress CMSs and the like (PHP applications). I then sell hosting (of the site I've designed) to the client, which is pretty simple, in that I can simply click a button and add a new shared hosting account/site with whatever settings I want. Furthermore, I then utilize WHMCS to automate billing and account management. It's a nice package and pretty simple. I pay something like $25 a month, and can sell a hundred accounts under this (because my clients bandwidth requirements are low). Now I am finding the need to develop more customized applications, including a minimalist CMS and several proprietary things. I soon anticipate developing these apps for clients as well. Thus, I've spent the past few months learning Rails, and it's coming along well now. The thing that has nagged at me all along, though, is the deployment issue. I can't wrap my brain around it. It seems like all of the popular options (Heroku, etc) have nice automation with git and are set up in the "Rails Way". I get that (sort of). But it's terribly expensive... a single dyno, a helper, and the cheapest database (which they say is mainly suitable for testing) that isn't limited to 5MB runs $51. This is for ONE app!!! Throw in a "production" DB and you're over $200. This is like... the same prices as getting a server somewhere, right? Meanwhile, going back to what I guess is a "traditional" hosting environment with Hostgator, their server only has Ruby 1.8.7 and Rails 2.3.5... No Rails 3. AND, no Passenger (not that I really understand the difference in CGI or mod_rails or whatever, but they say Passenger is the simplest). So I'm to understand that if I build an app in Rails 3, it won't run at all on this host? But damn, I already have these accounts under my reseller account there, all running static html and/or PHP stuff, right? So what now? How do I get all of this under one simple (and affordable) roof? Forgive my ignorance, but I just don't get it. Managing a VPS is cool and all, but entails learning server admin stuff and security... And it's expensive. I get that a shared and/or reseller "server-based" (forgive the terminology) may be inadequate for large-scale apps that use a lot of bandwidth... But what about for those of us who are building real (but small and low bandwidth) apps (with Rails) and who want to deploy them simply, cheaply, using the same conceptual approach as PHP? Even after learning all of this Ruby and Rails stuff for months, I'm questioning whether it's worth it when it comes to deployment. I want to build a small app, upload it to my home directory on a shared server account, and just make it run. Why should that be so hard? Am I just choosing the wrong language/framework? Forgive my ignorance in the subject; these questions are not rhetorical; just trying to learn here. So: 1) I'd appreciate if someone could give me a good rundown of how to understand deployment in Rails vs. PHP. 2) I'd appreciate if someone could address my issue with running a hosting/web business around reseller hosting (Hostgator) while also being able to host Rails apps. Can it be done? And how can a company like Hostgator completely ignore what's current in Rails/Ruby? Thanks.

    Read the article

  • Career-Defining Moments

    - by Robz / Fervent Coder
    Originally posted on: http://geekswithblogs.net/robz/archive/2013/06/25/career-defining-moments.aspx Fear holds us back from many things. A little fear is healthy, but don’t let it overwhelm you into missing opportunities. In every career there is a moment when you can either step forward and define yourself, or sit down and regret it later. Why do we hold back: is it fear, constraints, family concerns, or that we simply can't do it? I think in many cases it comes down to the unknown, and we are good at fearing the unknown. Some people hold back because they are fearful of what they don’t know. Some hold back because they are fearful of learning new things. Some hold back simply because to take on a new challenge it means they have to give something else up. The phrase sometimes used is “It’s the devil you know versus the one you don’t.” That fear sometimes allows us to miss great opportunities. In many people’s case it is the opportunity to go into business for yourself, to start something that never existed. Most hold back here for fear of failing. We’ve all heard the phrase “What would you do if you knew you couldn’t fail?”, which is intended to get people to think about the opportunities they might create. A better term I heard recently on the Ruby Rogues podcast was “What would be worth doing even if you knew you were going to fail?” I think that wording suits the intent better. If you knew (or thought) going in that you were going to fail and you didn’t care, it would open you up to the possibility of paying more attention to the journey and not the outcome. In my case it is a fear of acceptance. I am fearful that I may not learn what I need to learn or may not do a good enough job to be accepted. At the same time that fear drives me and makes me want to leap forward. Some folks would define this as “The Flinch”. I’m learning Ruby and Puppet right now. I have limited experience with both, limited to the degree it scares me some that I don’t know much about either. Okay, it scares me quite a bit! Some people’s defining moment might be going to work for Microsoft. All of you who know me know that I am in love with automation, from low-tech to high-tech automation. So for me, my “mecca” is a little different in that regard. A while back I sat down and defined where I wanted my career to go, and it had to do more with DevOps, defined as applying developer practices to system administration operations (I could not find this definition when I searched). It’s an area that interests me and why I really want to expand chocolatey into something more awesome. I want to see Windows be as automatable and awesome as other operating systems that are out there. Back to the career-defining moment. Sometimes these moments only come once in a lifetime. The key is to recognize when you are in one of these moments and step back to evaluate it before choosing to dive in head first. So I am about to embark on what I define as one of these “moments.” On July 1st I will be joining Puppet Labs and working to help make the Windows automation experience rock solid! I’m both scared and excited about the opportunity!

    Read the article

  • A* algorithm very slow

    - by Amaranth
    I am programming an RTS game (I use XNA with C#). The pathfinding is working fine, except that when it has a lot of nodes to search, there is a lag period of one or two seconds; it happens mainly when there is no path to the target destination, since in that situation there are more nodes to explore. I have the same problem when the path is shorter but more than 3 units are selected (they can't take the same path, since the selected units can be in different parts of the map).

      private List<NodeInfo> FindPath(Unit u, NodeInfo start, NodeInfo end)
      {
          Map map = GameInfo.GetInstance().GameMap;
          _nearestToTarget = start;
          start.MoveCost = 0;
          // getTileByPos simply gets the tile in a 2D array with the X and Y indexes
          Vector2 endPosition = map.getTileByPos(end.X, end.Y).Position;
          start.EstimatedRemainingCost = (int)(endPosition - map.getTileByPos(start.X, start.Y).Position).Length();
          start.Parent = null;

          List<NodeInfo> openedNodes = new List<NodeInfo>();
          List<NodeInfo> closedNodes = new List<NodeInfo>();
          Point[] movements = GetMovements(u.UnitType);
          openedNodes.Add(start);

          while (!closedNodes.Contains(end) && openedNodes.Count > 0)
          {
              // Loop in nodes to find lowest cost
              NodeInfo currentNode = FindLowestCostOpenedNode(openedNodes);
              openedNodes.Remove(currentNode);
              closedNodes.Add(currentNode);

              Vector2 previousMouvement;
              if (currentNode.Parent == null)
              {
                  previousMouvement = ConvertRotationToDirectionVector(u.Rotation);
              }
              else
              {
                  previousMouvement = map.getTileByPos(currentNode.X, currentNode.Y).Position -
                                      map.getTileByPos(currentNode.Parent.X, currentNode.Parent.Y).Position;
                  previousMouvement.Normalize();
              }

              // For each neighbor
              foreach (Point movement in movements)
              {
                  Point exploredGridPos = new Point(currentNode.X + movement.X, currentNode.Y + movement.Y);

                  // Checks if valid move and checks if not in closed nodes list
                  if (ValidNavigableNode(u.UnitType, new Point(currentNode.X, currentNode.Y), exploredGridPos) &&
                      !closedNodes.Contains(_gridMap[exploredGridPos.Y, exploredGridPos.X]))
                  {
                      NodeInfo exploredNode = _gridMap[exploredGridPos.Y, exploredGridPos.X];
                      Tile.TileType exploredTerrain = map.getTileByPos(exploredGridPos.X, exploredGridPos.Y).TerrainType;

                      if (openedNodes.Contains(exploredNode))
                      {
                          int newCost = currentNode.MoveCost + GetMoveCost(previousMouvement, movement, exploredTerrain);
                          if (newCost < exploredNode.MoveCost)
                          {
                              exploredNode.Parent = currentNode;
                              exploredNode.MoveCost = newCost;
                              // Find nearest tile to the target (in case no path to the target is found)
                              // Only compares the node to the current nearest
                              FindNearest(exploredNode);
                          }
                      }
                      else
                      {
                          exploredNode.Parent = currentNode;
                          exploredNode.MoveCost = currentNode.MoveCost + GetMoveCost(previousMouvement, movement, exploredTerrain);
                          Vector2 exploredNodeWorldPos = map.getTileByPos(exploredGridPos.X, exploredGridPos.Y).Position;
                          exploredNode.EstimatedRemainingCost = (int)(endPosition - exploredNodeWorldPos).Length();
                          // Find nearest tile to the target (in case no path to the target is found)
                          // Only compares the node to the current nearest
                          FindNearest(exploredNode);
                          openedNodes.Add(exploredNode);
                      }
                  }
              }
          }

          return closedNodes;
      }

    After that, I simply check if the end node is contained in the returned nodes. If so, I add the end node and each parent until I reach the start. If not, I add the nearestToTarget and each parent until I reach the start. I added a condition before calling FindPath so that only one unit can call a find path each frame (60 frames per second), but it makes no difference.
    I thought maybe I could solve this by allowing the find path to run in the background while the game continues to run correctly, even if it takes a few frames (it is currently sequential, since it is called in the update() of the unit if there's a target location but no path), but I don't really know how... I also thought about sorting my opened nodes list by cost so I don't have to loop over them, but I don't know if that would have an effect on the performance... Would there be other solutions? P.S. In the code, when I get the move cost, I check if the unit has to turn to perform the move, and the terrain type; nothing hard to do.
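
    On the sorting idea: keeping the open list in a binary min-heap, rather than scanning it with FindLowestCostOpenedNode, usually helps a lot, since extracting the cheapest node drops from O(n) to O(log n). Here is a rough C# sketch keyed on the question's own MoveCost + EstimatedRemainingCost fields (the heap class itself is illustrative, not from the post):

      // Minimal binary min-heap ordered by F = MoveCost + EstimatedRemainingCost.
      class NodeHeap
      {
          private readonly List<NodeInfo> items = new List<NodeInfo>();

          private static int F(NodeInfo n) { return n.MoveCost + n.EstimatedRemainingCost; }

          public int Count { get { return items.Count; } }

          public void Push(NodeInfo n)
          {
              items.Add(n);
              int i = items.Count - 1;
              while (i > 0 && F(items[i]) < F(items[(i - 1) / 2]))
              {
                  NodeInfo tmp = items[i];              // sift up
                  items[i] = items[(i - 1) / 2];
                  items[(i - 1) / 2] = tmp;
                  i = (i - 1) / 2;
              }
          }

          public NodeInfo Pop()                         // removes the lowest-cost node
          {
              NodeInfo top = items[0];
              items[0] = items[items.Count - 1];
              items.RemoveAt(items.Count - 1);
              int i = 0;
              while (true)
              {
                  int l = 2 * i + 1, r = l + 1, smallest = i;
                  if (l < items.Count && F(items[l]) < F(items[smallest])) smallest = l;
                  if (r < items.Count && F(items[r]) < F(items[smallest])) smallest = r;
                  if (smallest == i) break;
                  NodeInfo tmp = items[i];              // sift down
                  items[i] = items[smallest];
                  items[smallest] = tmp;
                  i = smallest;
              }
              return top;
          }
      }

    Two caveats: when a node's MoveCost is lowered while already queued, the cheap trick is to push it again and skip stale pops via the closed set; and the closedNodes.Contains / openedNodes.Contains calls on List<NodeInfo> are linear scans themselves, so mirroring membership in a HashSet<NodeInfo> may well be an even bigger win than the heap.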

    Read the article

  • Create Custom Speech Bubbles in Silverlight.

    - by mbcrump
    I had a reader email me the following question: “How do you create Speech Bubbles in Silverlight/WPF without adding any extra .dlls?” Right off the bat, I know at least two ways to create the speech bubbles that look just like the ones in comic books. Using the Callout Shapes included with Blend 4. Using the free 3rd-party control named FreeBubbles (I used this before Blend 4). Unfortunately, we cannot use either of these as they will both add extra .dlls to the project. So why wouldn’t you want to use one of those? I can think of a few reasons: You do not want to increase the size of your .XAP by including extra .dlls. You do not have Expression Blend or a license to use the .dlls. You want a custom Speech Bubble that is not included in the four “Callout” controls with Blend. Instead of using one of these methods, we will create a Speech Bubble in Blend 4 using the Path element and a TextBlock. Before we get started, let’s look at the Callout Shapes included with Blend 4. Using Blend 4 you can simply drag/drop these controls onto your Silverlight application and you are ready to go. We can create all of these Speech Bubbles and even some of the modern bubbles used in recent comic books. Let’s get started. Start up Expression Blend 4 and select the Pen tool. On the Artboard, start connecting the dots like I did below. You can add a color if you wish. …keep going …complete Let’s go ahead and add some text to the Speech Bubble. Drag a TextBlock from the panel and put it directly inside the Speech Bubble. Go ahead and set the TextAlignment to Center for the TextBlock, and give it some text. At this point, you could go ahead and create a user control if you want to reuse the Speech Bubble you created. Select both the Path and the TextBlock by clicking them while holding down CTRL, and then right-click them. Select Make Into User Control. Give it a name and then build your project. Let’s create another one using the Ellipse for the older comic-book style of Speech Bubbles. Drag an Ellipse to the Artboard and give it a color. Now, grab the Pen and draw a triangle like I did below. Simply drag it over a corner of the Ellipse. Select Combine then Unite and you will have a Path. At this point, you can go ahead and add a TextBlock like we did earlier. Let’s go ahead and create a rounded-rectangle one by adding a Rectangle to the Artboard. Go ahead and set the RadiusX and RadiusY to 25 to give it rounded edges. Let’s create another path and drag it right on top of our rounded rectangle like we did earlier. …looking good Select Combine then Unite and you will have a Path. At this point, you can go ahead and add a TextBlock like we did earlier. So let’s look at what we’ve created today using the Path element and TextBlock. As you can tell, it required more work but meets the requirements. This was actually fun to do and I encourage anyone that visits my blog to send in requests like this.  Subscribe to my feed

    Read the article

  • Gomoku array-based AI-algorithm?

    - by Lasse V. Karlsen
    Way, way back (think 20+ years) I encountered Gomoku game source code in a magazine, which I typed in on my computer and had a lot of fun with. The game was difficult to win against, but the core algorithm for the computer AI was really simple and didn't amount to a lot of code. I wonder if anyone knows this algorithm and has some links to some source or theory about it. The things I remember were that it basically allocated an array that covered the entire board. Then, whenever I, or it, placed a piece, it would add a number of weights to all locations on the board that the piece would possibly impact. For instance (note that the weights are definitely wrong as I don't remember those):

      1   1   1
       2  2  2
        3 3 3
         444
      1234X4321
         444
        3 3 3
       2  2  2
      1   1   1

    Then it simply scanned the array for an open location with the lowest or highest value. Things I'm fuzzy on: Perhaps it had two arrays, one for me and one for itself, and there was a min/max weighting? There might've been more to the algorithm, but at its core it was basically an array and weighted numbers. Does this ring a bell with anyone at all? Anyone got anything that would help?
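
    Reading the description literally, here is a small C# sketch of that core (the board size and weight values are placeholders, since the originals are forgotten): every placed piece stamps weights along its four lines, and the computer just picks the highest-weighted open cell.

      // Point is System.Drawing.Point; N and Weight are assumptions for illustration.
      const int N = 15;                                  // board size
      static readonly int[] Weight = { 0, 4, 3, 2, 1 };  // weight by distance 1..4 from the piece
      static int[,] score = new int[N, N];
      static bool[,] occupied = new bool[N, N];

      // Stamp influence around a placed piece along the 4 line directions.
      static void Place(int x, int y)
      {
          occupied[x, y] = true;
          int[,] dirs = { { 1, 0 }, { 0, 1 }, { 1, 1 }, { 1, -1 } };
          for (int d = 0; d < 4; d++)
              for (int k = -4; k <= 4; k++)
              {
                  int nx = x + k * dirs[d, 0], ny = y + k * dirs[d, 1];
                  if (k != 0 && nx >= 0 && nx < N && ny >= 0 && ny < N)
                      score[nx, ny] += Weight[Math.Abs(k)];
              }
      }

      // The computer's reply: the open cell with the highest accumulated weight.
      static Point BestMove()
      {
          Point best = new Point(-1, -1);
          int bestScore = -1;
          for (int x = 0; x < N; x++)
              for (int y = 0; y < N; y++)
                  if (!occupied[x, y] && score[x, y] > bestScore)
                  {
                      best = new Point(x, y);
                      bestScore = score[x, y];
                  }
          return best;
      }

    A min/max flavour falls out naturally if the opponent's pieces stamp a second array (or negative weights) so the scan can balance attack against defence, which may be the two-array detail you half remember.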

    Read the article

  • ASP.NET MVC Routing - Redirect to aspx?

    - by bmoeskau
    This seems like it should be easy, but for some reason I'm having no luck. I'm migrating an existing WebForms app to MVC, so I need to keep the root of the site pointing to my existing aspx pages for now and only apply routing to named routes. Here's what I have: public static void RegisterRoutes(RouteCollection routes) { routes.IgnoreRoute("{resource}.axd/{*pathInfo}"); routes.IgnoreRoute("{resource}.aspx/{*pathInfo}"); RouteTable.Routes.Add( "Root", new Route("", new DefaultRouteHandler()) ); routes.MapRoute( "Default", // Route name "{controller}/{action}/{id}", // URL with parameters new { controller = "Calendar2", action = "Index", id = "" } // Parameter defaults ); } So aspx pages should be ignored, and the default root url should be handled by this handler: public class DefaultRouteHandler : IRouteHandler { public IHttpHandler GetHttpHandler(RequestContext requestContext) { return System.Web.Compilation.BuildManager.CreateInstanceFromVirtualPath( "~/Dashboard/default.aspx", typeof(Page)) as IHttpHandler; } } This seems to work OK, but the resulting YPOD gives me this: Multiple controls with the same ID '__Page' were found. Trace requires that controls have unique IDs. which seems to imply that the page is somehow getting rendered twice. If I simply type in the url to my dashboard page directly it works fine (no routing, no error). I have no idea why the handler code would be doing anything differently. Bottom line -- I'd like to simply redirect the root url path to an aspx of my choosing -- can anyone shed some light?
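
    One hedged suggestion, applicable only if the project can target .NET 4: System.Web.Routing can route a URL pattern straight to a Web Form via MapPageRoute, which sidesteps the hand-rolled handler (and whatever double rendering it is triggering) entirely:

      public static void RegisterRoutes(RouteCollection routes)
      {
          routes.IgnoreRoute("{resource}.axd/{*pathInfo}");
          routes.IgnoreRoute("{resource}.aspx/{*pathInfo}");

          // Route the site root to an existing aspx page (requires ASP.NET 4).
          routes.MapPageRoute("Root", "", "~/Dashboard/default.aspx");

          routes.MapRoute(
              "Default",
              "{controller}/{action}/{id}",
              new { controller = "Calendar2", action = "Index", id = "" });
      }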

    Read the article

  • .NET C#: WebBrowser control Navigate() does not load targeted URL

    - by Dave
    Hey guys, I'm trying to programmatically load a web page via the WebBrowser control with the intent of testing the page & its JavaScript functions. Basically, I want to compare the HTML & JavaScript run through this control against a known output to ascertain whether there is a problem. However, I'm having trouble simply creating and navigating the WebBrowser control. The code below is intended to load the HtmlDocument into the WebBrowser.Document property:

      WebBrowser wb = new WebBrowser();
      wb.AllowNavigation = true;
      wb.Navigate("http://www.google.com/");

    When examining the web browser's state via IntelliSense after Navigate() runs, the WebBrowser.ReadyState is 'Uninitialized', WebBrowser.Document = null, and it overall appears completely unaffected by my call. On a contextual note, I'm running this control outside of a Windows form object: I do not need to load a window or actually look at the page. Requirements dictate the need to simply execute the page's JavaScript and examine the resultant HTML. Any suggestions are greatly appreciated, thanks!
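
    A likely culprit, offered as a hedged guess: Navigate() is asynchronous, and the WebBrowser control needs an STA thread with a running message loop before it does anything, so without a pump the ReadyState never leaves Uninitialized. A minimal console-style sketch:

      using System;
      using System.Windows.Forms;

      class Program
      {
          [STAThread]
          static void Main()
          {
              WebBrowser wb = new WebBrowser();
              wb.AllowNavigation = true;
              wb.ScriptErrorsSuppressed = true;

              bool done = false;
              wb.DocumentCompleted += (s, e) => done = true;
              wb.Navigate("http://www.google.com/");

              // Pump messages until the asynchronous navigation finishes.
              while (!done)
                  Application.DoEvents();

              Console.WriteLine(wb.Document != null ? "Loaded" : "Still null");
          }
      }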

    Read the article

  • Streaming to the Android MediaPlayer

    - by Rob Szumlakowski
    Hi. I'm trying to write a light-weight HTTP server in my app to feed dynamically generated MP3 data to the built-in Android MediaPlayer. I am not permitted to store my content on the SD card. My input data is essentially of an infinite length. I tell MediaPlayer that its data source should basically be something like "http://localhost/myfile.mp3". I've a simple server set up that waits for MediaPlayer to make this request. However, MediaPlayer isn't very cooperative. At first, it makes an HTTP GET and tries to grab the whole file. It times out if we try and simply dump data into the socket so we tried using the HTTP Range header to write data in chunks. MediaPlayer doesn't like this and doesn't keep requesting the subsequent chunks. Has anyone had any success streaming data directly into MediaPlayer? Do I need to implement an RTSP or Shoutcast server instead? Am I simply missing a critical HTTP header? What strategy should I use here? Rob Szumlakowski

    Read the article

  • Rotate a Swing JLabel

    - by Johannes Rössel
    I am currently trying to implement a Swing component, inheriting from JLabel, which should simply represent a label that can be oriented vertically. Beginning with this:

      public class RotatedLabel extends JLabel {
          public enum Direction {
              HORIZONTAL, VERTICAL_UP, VERTICAL_DOWN
          }

          private Direction direction;

    I thought it'd be a nice idea to just alter the results from getPreferredSize():

      @Override
      public Dimension getPreferredSize() {
          // swap size for vertical alignments
          switch (getDirection()) {
          case VERTICAL_UP:
          case VERTICAL_DOWN:
              return new Dimension(super.getPreferredSize().height,
                                   super.getPreferredSize().width);
          default:
              return super.getPreferredSize();
          }
      }

    and then simply transform the Graphics object before I offload painting to the original JLabel:

      @Override
      protected void paintComponent(Graphics g) {
          Graphics2D gr = (Graphics2D) g.create();
          switch (getDirection()) {
          case VERTICAL_UP:
              gr.translate(0, getPreferredSize().getHeight());
              gr.transform(AffineTransform.getQuadrantRotateInstance(-1));
              break;
          case VERTICAL_DOWN:
              // TODO
              break;
          default:
          }
          super.paintComponent(gr);
      }

    It seems to work, somehow, in that the text is now displayed vertically. However, placement and size are off: the width of the background (orange in this case) is identical to the height of the surrounding JFrame, which is ... not quite what I had in mind. Any ideas how to solve that in a proper way? Is delegating rendering to superclasses even encouraged?

    Read the article

  • ASP.NET MVC Session Expiration

    - by Andrew Flanagan
    We have an internal ASP.NET MVC application that requires a logon. Log on works great and does what's expected. We have a session expiration of 15 minutes. After sitting on a single page for that period of time, the user has lost the session. If they attempt to refresh the current page or browse to another, they will get a log on page. We keep their request stored so once they've logged in they can continue on to the page that they've requested. This works great. However, my issue is that on some pages there are AJAX calls. For example, they may fill out part of a form, wander off and let their session expire. When they come back, the screen is still displayed. If they simply fill in a box (which will make an AJAX call) the AJAX call will return the Logon page (inside of whatever div the AJAX should have simply returned the actual results). This looks horrible. I think that the solution is to make the page itself expire (so that when a session is terminated, they automatically are returned to the logon screen without any action by them). However, I'm wondering if there are opinions/ideas on how best to implement this specifically in regards to best practices in ASP.NET MVC.
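
    One common pattern, sketched here with illustrative names (it is not an out-of-the-box MVC facility): subclass AuthorizeAttribute so AJAX requests get a bare status code instead of the logon page, and let a small piece of client script react to it. Requires MVC 2 or later for HttpStatusCodeResult.

      // AJAX calls get a bare 403 (not 401, which the Forms Authentication module
      // would convert into the logon-page redirect); full-page requests keep the
      // normal redirect-to-logon behaviour.
      public class AjaxAwareAuthorizeAttribute : AuthorizeAttribute
      {
          protected override void HandleUnauthorizedRequest(AuthorizationContext filterContext)
          {
              if (filterContext.HttpContext.Request.IsAjaxRequest())
                  filterContext.Result = new HttpStatusCodeResult(403);
              else
                  base.HandleUnauthorizedRequest(filterContext);
          }
      }

    On the client, a global handler such as jQuery's ajaxError can watch for that 403 and set window.location to the logon URL, so the user sees the logon page instead of having it injected into a div; a timer matched to the session timeout achieves the "page expires on its own" behaviour you describe.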

    Read the article

  • Need advice or pointers on Release Management Strategies

    - by Murray
    I look after an internal web-based (Java, JSP, Mediasurface, etc.) system that is in constant use (24/5). Users raise tickets for enhancements, bug fixes and other business changes. These issues are signed off individually and assigned to one of three or four developers. Once an issue is complete, it is built and the code committed only to SVN. The changed files (templates, html, classes, jsp) are then copied to a dev server and committed to a different repository, from where they are checked out to the UAT server for testing (this often requires the Tomcat service to be restarted, and occasionally the Mediasurface service as well). The users then test and either reject or approve the release. If approved, the edited files are checked out to the Live server and the same process as with UAT undertaken. If rejected, the developer makes the relevant changes and starts the release process again. This is all done manually without much control. Where different developers are working on similar files, changes sometimes get overwritten by builds done on out-of-sync code; in other cases, changes in UAT are moved to Live in error as they are mixed up in files associated with a signed-off release. I would like to move this to a more controlled and automated process where all source code and output files are held in SVN and releases to Dev, UAT and Live are managed by a CI system (we have TeamCity in house for our .NET applications). My question is on how to manage the releases of multiple changes where some will be signed off and moved on and others rejected and returned to the developer. The changes may be on overlapping files, and simply merging each release into a Release Branch means that the rejected changes would have to be backed out of the branch. Is there a way to manage this using SVN and CI, or will I simply have to live with the current system?

    Read the article

  • WMD markdown editor - HTML to Markdown conversion

    - by gg1
    I am using the WMD markdown editor on a project and had a question: When I post the form containing the markdown textarea, it (as expected) posts HTML to the server. However, say upon server-side validation something fails and I need to send the user back to edit their entry; is there any way to refill the textarea with just the markdown and not the HTML? Since, as I have it set up, the server only has access to the post data (which is in the form of HTML), I can't seem to think of a way to do this. Any ideas? Preferably a non-JavaScript-based solution. Update: I found an HTML-to-Markdown converter called Markdownify. I guess this might be the best solution for displaying the markdown back to the user... any better alternatives are welcome! Update 2: I found this post on SO and I guess there is an option to send the data to the server as markdown instead of HTML. Are there any downsides to simply storing the data as markdown in the database? What about displaying it back to the user (outside of an editor)? Maybe it would be best to post both versions (HTML AND markdown) to the server... SOLVED: I can simply use PHP Markdown to convert the markdown to HTML server-side.

    Read the article

  • jQuery and XML - How to add the values of nodes

    - by Matias
    Hi Experts, This may be a simple question, though I can't figure out how to do it. I am parsing an XML file with jQuery Ajax. It contains dates and rates. The XML looks something like:

      <rate>
        <date>Today</date>
        <price>66</price>
      </rate>
      <rate>
        <date>Tomorrow</date>
        <price>99</price>
      </rate>

    I simply want to figure out how to calculate the total price of both days, Today and Tomorrow. I thought that by using JavaScript's Number it would simply return the total value of the nodes:

      $(xml).find("rate").each(function() {
          $(this).find("price").each(function() {
              $("#TOTALPRICE").append(Number($(this).text()));
          });
      });

      // output is: 6699

    However, it's just concatenating the values, not adding them. I greatly appreciate your help!! Thanks

    Read the article

  • Desktop Notifications, aka Internal Alert System

    - by Refracted Paladin
    It has become apparent that where I work needs, internally, a "notification system". The issue being that we are very spread out throughout multiple buildings and the bulk of the workforce regularly keeps their email closed for hours at a time. I need to create a simple way to be able to push out a message and have it "pop up" on everyone's computer (or a single computer). My first thought was to write a Windows service that calls a WinForms/WPF app that resides on each computer and simply pops up with the message. Not sure how viable an idea that is, but this is just brain-storming. A different route, I thought, could be an app that resides in the systray on each computer that polls a db table and, using Query Notifications, could pop up a message each time a new row is added. Then simply create an insanely basic app for writing a row to that table. So, what I am asking is if anyone else has walked this path. If so, how? What things did you take into consideration? Are either of my ideas valid starting points or are "egg and my face in perfect alignment"? Is there a different way that is even simpler? Thanks
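
    A minimal sketch of the systray flavour (the table and column names are invented for illustration, and connectionString is assumed): a NotifyIcon plus a WinForms timer that polls the table and balloons anything new. SqlDependency/Query Notifications could replace the polling later without changing the shape of this.

      // Namespaces assumed: System.Windows.Forms, System.Data.SqlClient, System.Drawing.
      // Assumed schema: CREATE TABLE Notifications (Id BIGINT IDENTITY, Message NVARCHAR(500))
      NotifyIcon icon = new NotifyIcon { Icon = SystemIcons.Information, Visible = true };
      Timer timer = new Timer { Interval = 30000 };   // System.Windows.Forms.Timer, 30 s
      long lastId = 0;

      timer.Tick += delegate
      {
          using (SqlConnection conn = new SqlConnection(connectionString))
          {
              conn.Open();
              SqlCommand cmd = new SqlCommand(
                  "SELECT Id, Message FROM Notifications WHERE Id > @last ORDER BY Id", conn);
              cmd.Parameters.AddWithValue("@last", lastId);
              using (SqlDataReader rdr = cmd.ExecuteReader())
              {
                  while (rdr.Read())
                  {
                      lastId = rdr.GetInt64(0);
                      icon.ShowBalloonTip(5000, "Notice", rdr.GetString(1), ToolTipIcon.Info);
                  }
              }
          }
      };
      timer.Start();
      Application.Run();   // keep the tray app's message loop alive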

    Read the article

  • WPF designer gives exception when databinding a label to a checkbox

    - by John
    I'm sure it's something stupid, but I'm playing around with databinding. I have a checkbox and a label on a form. What I'm trying to do is simply bind the Content of the label to the checkbox's IsChecked value. What I've done runs fine (no compilation errors and it acts as expected), but if I touch the label in the XAML, the designer throws an exception:

      System.NullReferenceException: Object reference not set to an instance of an object.
         at MS.Internal.Designer.PropertyEditing.Editors.MarkupExtensionInlineEditorControl.BuildBindingString(Boolean modeSupported, PropertyEntry propertyEntry)

    The XAML:

      <Window x:Class="UnitTestHelper.MainWindow"
              xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
              xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
              xmlns:FileSysCtls="clr-namespace:WPFFileSystemUserControls;assembly=WPFFileSystemUserControls"
              xmlns:HelperClasses="clr-namespace:UnitTestHelper"
              Title="MainWindow" Height="406" Width="531">
          <Window.Resources>
              <HelperClasses:ThreestateToBinary x:Key="CheckConverter" />
          </Window.Resources>
          <Grid Height="367" Width="509">
              <CheckBox Content="Step into subfolders" Height="16" HorizontalAlignment="Left" Margin="17,254,0,0" Name="chkSubfolders" VerticalAlignment="Top" Width="130" IsThreeState="False" />
              <Label Height="28" HorizontalAlignment="Left" Margin="376,254,0,0" Name="lblStepResult" VerticalAlignment="Top" Width="120" IsEnabled="True" Content="{Binding IsChecked, ElementName=chkSubfolders, Mode=OneWay, UpdateSourceTrigger=PropertyChanged, Converter={StaticResource CheckConverter}}" />
          </Grid>
      </Window>

    The ThreestateToBinary class is as follows:

      class ThreestateToBinary : IValueConverter
      {
          #region IValueConverter Members

          public object Convert(object value, Type targetType, object parameter, System.Globalization.CultureInfo culture)
          {
              if ((bool)value)
                  return "Checked";
              else
                  return "Not checked";
              //throw new NotImplementedException();
          }

          public object ConvertBack(object value, Type targetType, object parameter, System.Globalization.CultureInfo culture)
          {
              return ((string)value == "Checked");
              //throw new NotImplementedException();
          }

          #endregion
      }

    Quite honestly, I'm playing around with it at this point. It was originally simpler (not using the ValueConverter) but was displaying similar behavior when I simply had the content set to: Content="{Binding IsChecked, ElementName=chkSubfolders, UpdateSourceTrigger=PropertyChanged}" Any ideas? Thanks, John
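
    One thing worth ruling out (a guess, since the stack trace points into the designer's own property editor): IsChecked is bool?, so Convert can legitimately receive null, and the unguarded (bool)value cast would then throw exactly a NullReferenceException. A null-tolerant version costs nothing:

      public object Convert(object value, Type targetType, object parameter, System.Globalization.CultureInfo culture)
      {
          // value may arrive as null because IsChecked is Nullable<bool>.
          return (value as bool?) == true ? "Checked" : "Not checked";
      }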

    Read the article

  • How to produce an HTTP 403-equivalent WCF Message from an IErrorHandler?

    - by Andras Zoltan
    I want to write an IErrorHandler implementation that will handle AuthenticationException instances (a proprietary type) and then, in the implementation of ProvideFault, provide a traditional HTTP response with a status code of 403 as the fault message. So far I have my first best guess wired into a service, but WCF appears to be ignoring the output message completely, even though the error handler is being called. At the moment, the code looks like this:

      public class AuthWeb403ErrorHandler : IErrorHandler
      {
          #region IErrorHandler Members

          public bool HandleError(Exception error)
          {
              return error is AuthenticationException;
          }

          public void ProvideFault(Exception error, MessageVersion version, ref Message fault)
          {
              //first attempt - just a stab in the dark, really
              HttpResponseMessageProperty property = new HttpResponseMessageProperty();
              property.SuppressEntityBody = true;
              property.StatusCode = System.Net.HttpStatusCode.Forbidden;
              property.StatusDescription = "Forbidden";

              var m = Message.CreateMessage(version, null);
              m.Properties[HttpResponseMessageProperty.Name] = property;
              fault = m;
          }

          #endregion
      }

    With this in place, I just get the standard WCF HTML 'The server encountered an error processing the request. See server logs for more details.', which is what would happen if there was no IErrorHandler. Is this a feature of the behaviours added by WebServiceHost? Or is it because the message I'm building is simply wrong!? I can verify that the event log is indeed not receiving anything. My current test environment is a WebGet method (both XML and JSON) hosted in a service that is created with the WebServiceHostFactory, and ASP.NET compatibility switched off. The service method simply throws the exception in question.
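
    For comparison, and strictly as a hedged alternative rather than a diagnosis of the handler above: on .NET 4's WCF web HTTP stack, the same outcome can be had by throwing WebFaultException from the operation (or translating AuthenticationException into it), letting the plumbing build the 403 response for you:

      // Requires .NET 4 and a reference to System.ServiceModel.Web.
      throw new WebFaultException(System.Net.HttpStatusCode.Forbidden);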

    Read the article

  • Cookies not present after using XMLHttpRequest

    - by Joe B
    I'm trying to make a bookmarklet to download videos off of YouTube, but I've come across a little problem. To detect the highest quality video available, I use a sort of brute-force method, in which I make requests using the XMLHttpRequest object until a 404 isn't returned (I can't wait for a 200 OK to be returned, because YouTube redirects to a different server if the video is available, and the cross-domain policy won't allow me to access any of that data). Once a working URL is found, I simply set window.location to the URL and the download should start, right? Wrong. A request is made, but for reasons unknown to me, the cookies are stripped and YouTube returns a 403 Access Denied. This does not happen if the XML requests aren't made before it, i.e. if I just set the window.location to the URL everything works fine; it's when I do the XMLHttpRequest that the cookies aren't sent. It's hard to explain, so here's the script:

      var formats = ["37", "22", "35", "34", "18", ""];
      var url = "/get_video?video_id=" + yt.getConfig('SWF_ARGS')['video_id'] +
                "&t=" + (unescape(yt.getConfig('SWF_ARGS')['t'])) + "&fmt=";

      for (var i = 0; i < formats.length; i++) {
          xmlhttp = new XMLHttpRequest();
          xmlhttp.open("HEAD", url + formats[i], false);
          xmlhttp.send(null);
          if (xmlhttp.status != 404) {
              document.location = url + formats[i];
              break;
          }
      }

    That script does not send the cookies after setting the document.location and thus does not work. However, simply doing this:

      document.location = "/get_video?video_id=" + yt.getConfig('SWF_ARGS')['video_id'] +
                          "&t=" + (unescape(yt.getConfig('SWF_ARGS')['t']))

    DOES send the cookies along with the request, and does work. The only downside is I can't automatically detect the highest quality; I just have to try every "fmt" parameter manually until I get it right. So my question is: why is the XMLHttpRequest object removing cookies from subsequent requests? This is the first time I've ever done anything in JS, by the way, so please, go easy on me. ;)

    Read the article

  • Firebird sequence-backed ID shorthand

    - by pilcrow
    What do others do to simplify the creation of simple, serial surrogate keys populated by a SEQUENCE (a.k.a. GENERATOR) in Firebird >= 2.1? I find the process comparatively arduous. For example, in PostgreSQL, I simply type:

      pg> CREATE TABLE tbl (
        >     id SERIAL NOT NULL PRIMARY KEY,
        >     ...

    In MySQL, I simply type:

      my> CREATE TABLE tbl (
        >     id INT NOT NULL PRIMARY KEY AUTO_INCREMENT,
        >     ...

    But in Firebird I type:

      fb> CREATE TABLE tbl (
        >     id BIGINT NOT NULL PRIMARY KEY,
        >     ...
      fb> CREATE SEQUENCE tbl_id_seq;
      fb> SET TERM !!;
        > CREATE TRIGGER tbl_id_trg FOR tbl
        >     ACTIVE BEFORE INSERT POSITION 0
        > AS
        > BEGIN
        >     IF ((new.id IS NULL) OR (new.id <= 0)) THEN
        >     BEGIN
        >         new.id = GEN_ID(tbl_id_seq, 1);
        >     END
        > END !!
        > SET TERM ; !!

    ... and I get pretty bored by the time I reach trigger definition. However, I routinely make SEQUENCE-backed ID fields for temporary, development and throw-away tables. What do others do to simplify this? Work with an IDE? Run a pre-processing, in-house Perl script over the DDL file? Etc.

    Read the article

  • Switching between iScroll and standard WebView Scrolling functionality

    - by Jonathan
    I'm using iScroll 4 for my rather heavy iPhone web/PhoneGap app and trying to find a way to switch between iScroll's scrolling functionality and standard "native" webview scrolling, basically on click. Why I want this is described below. My app has several different subpages, all in one file. Some subpages have input fields, some don't. As we all know, iScroll + input fields = you're out of luck. I've wrapped iScroll's wrapper div (and all its functionality) around the one subpage where scrolling is crucial, and where there are no input fields. The other sections I've simply placed outside this div, which gives them no scrolling functionality at all. I've of course tried wrapping everything inside the wrapper and enabling/disabling scrolling (shown below), but that didn't work for me at all:

      myScroll.disable()
      myScroll.enable()

    By placing some subpages outside the main scrolling area / iScroll div, I've disabled both iScroll's and the standard webview scrolling (the latter, which I guess iScroll does), which leaves me with only basic scrolling, hence basically no scrolling at all. One can move around vertically, but once you let go of the screen, the "scrolling" stops. Quite natural, but alas so nasty. Therefore, I'm searching for a way to enable standard webview scrolling on the subpages placed outside of iScroll's wrapper div. I've tried different approaches, such as the one above, and using:

      document.removeEventListener('touchmove', preventDefault, false);
      document.removeEventListener('touchmove', preventDefault, true);

    But with no success. Sorry for not providing you guys with any hard code or demos; it's simply too much code, and it would be presented so out of its context that nobody would be able to debug it. So, is there a way in JavaScript to do this, switching between iScroll scrolling functionality and standard "native" webview scrolling? I would rather not rebuild the entire DOM framework, so a solution like the one described above would be preferable.

    Read the article
