Search Results

Search found 12439 results on 498 pages for 'wondering'.


  • How to tell whether your programmers are under-performing?

    - by A Team Lead
    I am a team lead with 5+ developers. I have a developer (let's call him A) who is a good programmer, who writes good, clean, easy to understand code. However he is somewhat difficult to manage, and sometimes I wonder whether he is really under-performing or not. Our company requires the developers to indicate their work progress in the bug tracker we use, not so much to monitor the programmers as to let the stakeholders know the progress. The thing is, A only updates a task's progress when it is done (maybe 3 weeks after it is first worked on), and this leaves everyone wondering what is going on in the middle of the development week. He wouldn't change his habit despite repeated probing. (It's OK, developers hate paperwork, I do, too.) In the recent 2-3 months he has been on leave quite often due to various events-- either he is sick, or has to attend a lot of personal events, etc. (It's OK, bad things happen in a string. It's just a coincidence.) We define sprints, or roadmaps, for each month. At the beginning of the sprint, we discuss the amount of work each of the developers has to do in the sprint, and the developers get to set the amount of time they need for each task. He usually won't be able to complete all of them. (It's OK, developers regularly miss deadlines through no fault of their own.) If only one or two of the above things happened, I wouldn't feel that A is under-performing, but they all happen together. So I have the feeling that A is under-performing and maybe-- God forbid-- slacking off. This is just a feeling based on my years of experience as a programmer. But I could be wrong.
    It is notoriously hard to measure the work of a programmer, given that no two tasks are alike and there is no standard, objective way to measure a programmer's commitment to your company. It is downright impossible to tell whether the programmer is doing his job or slacking off. All you can do is trust them-- yeah, trusting and giving them autonomy is the best way for programmers to work, I know that, so don't start a lecture on why you need to trust your programmers, thank you very much-- but if they abuse your trust, can you know? My question is: how can you tell whether your programmers are under-performing? Surely there are experienced team leads who know better than I do on this?
    Outcome: I had a straight talk with him regarding my perception of his performance. He was indignant when I suggested that I had the feeling he wasn't performing at his best level. He felt that this was a completely unfair feeling. I then replied that this was my feeling and I didn't know whether my feeling was right or not. He would have none of this and ended the discussion immediately. Before he left he said that he "would try to give more to the company" in a very cold tone. I was taken aback by his reaction. I am sure that I offended him in some way. Not too sure whether it was the right thing for me to be so frank with him, though.
    Extra notes: I hate micromanaging. So all that we have for our software process is the sprint (where tasks get prioritized and assigned, and at the end of the month there is a review of the amount of work done). Developers are required to update their tasks as they go along every day. There is no standup meeting, or anything of the sort, mainly because we have the freedom to work from home and everyone cherishes this freedom. Although I am the one who sets the deadline, the developers provide the estimate for each task and I decide-- based on those estimates-- which tasks go into a particular sprint. If they can't finish the tasks at the end of the sprint, I push them to the next one. So theoretically one could do only 1 or 2 tasks during the whole sprint and push the remaining 99 tasks to the next sprint, and he would still be fine as long as he justifies this-- in the form of daily work progress updates.

    Read the article

  • Defaulting the HLSL Vertex and Pixel Shader Levels to Feature Level 9_1 in VS 2012

    - by Michael B. McLaughlin
    I love Visual Studio 2012. But this is not a post about that. This is a post about tweaking one particular parameter that I've found a bit annoying. Disclaimer: You will be modifying important MSBuild files. If you screw up you will break your build tools. And maybe your computer will catch fire. I'm not responsible. No warranties or guarantees of any sort. This info is provided "as is". By default, if you add a new vertex shader or pixel shader item to a project, it will be set to build with shader profile 4.0_level_9_3. If you need 9_3 functionality, this is all well and good. But (especially for Windows Store apps) you really want to target the lowest shader profile possible so that your game will run on as many computers as possible. So it's a good idea to default to 9_1. To do this you could add in new HLSL files via "Add->New Item->Visual C++->HLSL->______ Shader File (.hlsl)" and then edit the shader files' properties to set them manually to use 9_1 via "Properties->HLSL Compiler->General->Shader Model". This is fine unless you forget to do this once and then submit your game with 9_3 shaders instead of 9_1 shaders to the Windows Store or to some other game store. Then you'd wind up with either rejection or angry "this doesn't work on my computer! ripoff!" messages. There's another option though. In "Program Files (x86)\Microsoft Visual Studio 11.0\Common7\IDE\ItemTemplates\VC\HLSL\1033\VertexShader" (note the path might vary slightly for you if you are using a 32-bit system or have a non-ENU version of Visual Studio 2012) you will find a "VertexShader.vstemplate" file. If you open this file in a text editor (e.g. Notepad++), then inside the CustomParameters tag within the TemplateContent tag you should see a CustomParameter tag for the ShaderType, i.e.:
        <CustomParameter Name="$ShaderType$" Value="Vertex"/>
    On a new line, we are going to add another CustomParameter tag to the CustomParameters tag. It will look like this:
        <CustomParameter Name="$ShaderModel$" Value="4.0_level_9_1"/>
    such that we now have:
        <CustomParameters>
          <CustomParameter Name="$ShaderType$" Value="Vertex"/>
          <CustomParameter Name="$ShaderModel$" Value="4.0_level_9_1"/>
        </CustomParameters>
    You can then save the file (you will need to be an Administrator or have Administrator access). Back in the 1033 directory (or whatever the number is for your language), go into the "PixelShader" directory. Edit the "PixelShader.vstemplate" file and make the same change (note that this time $ShaderType$ is "Pixel" not "Vertex"; you shouldn't be changing that line anyway, but if you were to just copy and replace the above four lines then you would wind up creating pixel shaders that the HLSL compiler would try to compile as vertex shaders, with all sorts of weird errors as a result). Once you've added the $ShaderModel$ line to "PixelShader.vstemplate" and have saved it, everything should be done. Since Feature Level 9_1 and 9_3 don't support any of the other shader types, those are set to default to their appropriate minimums already: Compute and Geometry are set to "4.0" and Domain and Hull are set to "5.0", which are their respective minimums (though not all 4.0 cards support Compute shaders; they were an optional feature added with DirectX 10.1 and only became required for DirectX 11 hardware).
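    For reference, a minimal sketch of what the CustomParameters block in "PixelShader.vstemplate" should end up looking like after that same edit (assuming the rest of the file is left untouched; only the $ShaderModel$ line is new):
        <CustomParameters>
          <CustomParameter Name="$ShaderType$" Value="Pixel"/>
          <CustomParameter Name="$ShaderModel$" Value="4.0_level_9_1"/>
        </CustomParameters>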
In case you are wondering where these magic values come from, you can find them all in the “fxc.xml” file in the “\Program Files (x86)\MSBuild\Microsoft.CPP\v4.0\V110\1033” directory (or whatever your language number is; 1033 is ENU and various other product languages have their own respective numbers (see: http://msdn.microsoft.com/en-us/goglobal/bb964664.aspx ) such that Japanese is 1041 (for example), though for all I know MSBuild tasks might be 1033 for everyone). If, like me, you installed VS 2012 to a drive other than the C:\ drive, you will find the vstemplate files in the drive to which you installed VS 2012 (D:\ in my case) but you will find the fxc.xml file on the C:\ drive. You should not edit fxc.xml. You will almost definitely break things by doing that; it’s just something you can look through to see all the other options that the FXC task takes such that you could, if needed, add further CustomParameter tags if you wanted to default to other supported options. I haven’t tried any others though so I don’t have any advice on how to set them.

    Read the article

  • Use your own domain email and tired of SPAM? SPAMfighter FTW

    - by Dave Campbell
    I wouldn't post this if I hadn't tried it... and I paid for it myself, so don't anybody be thinking I'm reviewing something someone sent me! Long ago and far away I got very tired of local ISPs and 2nd phone lines and took the plunge and got hooked up to cable... yeah I know the 2nd phone line concept may be hard for everyone to understand, but that's how it was in 'the old days'. To avoid having to change email addresses all the time, I decided to buy a domain name, get minimal hosting, and use that for all email into the house. That way if I changed providers, all the email addresses wouldn't have to change. Of course, about a dozen domains later, I have LOTS of pop email addresses and even an exchange address to my client's server... times have changed. What also has changed is the fact that we get SPAM... 'back in the day' when I was a beta tester for the first ISP in Phoenix, someone tried sending an ad to all of us, and what he got in return for his trouble was a bunch of core dumps that locked up his email... if you don't know what a core dump is, ask your grandfather. But in today's world, we're all much more civilized than that, and as with many things, the criminals seem to have many more rights than we do, so we get inundated with email offering all sorts of wild schemes that you'd have to be brain-dead to accept, but yet... if people weren't accepting them, they'd stop sending them. I keep hoping that survival of the smartest would weed out the mental midgets that respond and then the junk email would stop, but that hasn't happened yet, any more than finding high-quality hearing aids at the checkout line of Safeway because of all the dimwits playing music too loud inside their car... but that's another whole topic and I digress. So what's the solution for all the spam? And I mean *all*... on that old personal email address, I am now getting over 150 spam messages a day! Yes I know that's why God invented the delete key, but I took it on as a challenge, and it's a matter of principle... why should I switch email addresses, or convert from [email protected] to something else, or have all my email filtered through some service just because some A-Hole somewhere has a site up trying to phish Ma & Pa Kettle (ask your grandfather about that too) out of their retirement money? Well... I got an email from my cousin the other day while I was writing yet another email rule, and there was a banner on the bottom of his email that said he was protected by SPAMfighter. SPAMfighter huh.... so I took a look at their site, and found yet one more of the supposed tools to help us. But... I read that they're a Microsoft Gold Partner... and that doesn't come lightly... so I took a gamble and here's what I found:
    I installed it, and had to do a couple things:
    1) SPAMfighter stuffed the SPAMfighter folder into my client's exchange address... I deleted it, made a new SPAMfighter folder where I wanted it to go, then in the SPAMfighter Clients settings for Outlook, I told it to put all spam there.
    2) It didn't seem to be doing anything. There's a ribbon button where you can select "Block", and I did that, wondering if I was 'training' it, but it wasn't picking up duplicates.
    3) I sent email to support, and wrote a post on the forum (note to self: reply to that post). By the time the folks from the home office responded, it was the next day, and first up, SPAMfighter knocked down everything that came through when Outlook opened... two thumbs up! I disabled my 'garbage collection' rule from Outlook, and told Outlook not to use the junk folder, thinking it was interfering.
    4) Day 2 seemed to go about like Day 1... but I hung in there.
    5) Day 3 is now a whole new day... I had left Outlook open and hadn't looked at the PC since sometime late yesterday afternoon, and when I looked this morning, *every bit* of spam was in the SPAMfighter folder!!
    I'm a new paying customer. After watching SPAMfighter work this morning, I've purchased a 1-year license, and I now can sit and watch as emails come in and disappear from my inbox into the SPAMfighter folder. No more continual tweaking of the rules. I've got SPAMfighter set to 'Very Hard' filtering... personally I'd rather pull the few real emails out of the SPAMfighter folder than pull spam out of the real folders. Yes this is simply another way of using the delete key, but you know what? ... it feels good :) Here's a screenshot of the stats after just about 48 hours of being onboard: Note that all the ones blocked by me were during Day 1 and 2... I've blocked none today, and everything is blocked. Stay in the 'Light!

    Read the article

  • Microsoft BUILD 2013 Day 1&ndash;Keynote

    - by Tim Murphy
    Originally posted on: http://geekswithblogs.net/tmurphy/archive/2013/06/27/microsoft-build-2013-day-1ndashkeynote.aspx This one is going to be a little long because the keynote was jam-packed, so bear with me. The keynote for the first day of BUILD 2013 was kicked off by Steve Ballmer.  He made it very clear that Microsoft's focus is on accelerating its time to market with products and product updates.  His quote was that "Rapid release" is the new norm.  He continued by showing off several new Lumias that have been buzzing around the internet for a while and announced that Sprint will now be carrying the HTC 8XT and Samsung ATIV. Ballmer is known for repeating words or phrases for effect.  This time it was "Rapid release, rapid release" and "Touch, touch, touch, touch, touch, …".  This was fun, but even more fun was when he announced that all attendees would receive an Acer Iconia 8" tablet. SCORE! The next subject Ballmer focused on was new apps.  The three new ones were Flipboard, Facebook and NFL Fantasy Football.  I liked the first two because these are ones that people coming from other platforms are missing.  The NFL app is great just because it targets a demographic that can be fanatical.  If these types of apps keep coming then the missing-app argument goes away. While many Negative Nancys are describing Windows 8.1 as "Windows 180", Steve Ballmer chose to call it a "refined blend", as in a coffee that has been improved with a new mix.  This includes more multi-tasking options and leveraging Bing throughout the entire ecosystem. He ended this first section by explaining that this will also bring more Bing development opportunities to the community. Steve Ballmer was followed by Julie Larson-Green, who, from my point of view, spent her time on stage selling us on Windows 8 all over again.  Something that I would not have thought was needed until I had listened to some other attendees who had a number of concerns and complaints.  She showed a number of new gestures that will come with Windows 8.1, and while they were cool I was left wondering if they really improved the experience.  I guess only time will tell. I did like the fact that the UI implementation to bring up "All Apps" now mirrors that of Windows Phone.  The consistency is a big step forward that I hope to see continue.  The cool factor went up from there as she swiped content from a desktop (mega-tablet) to the XBox One.  This seamless experience, I believe, is what is really needed for any future platform to be relevant. I was much more enthused by the presentation of Antoine Leblond, who humbled us by letting us know that there are 5k new APIs.  How that can be, or how anyone would ever use all of them, is another question.  His announcement was that the Visual Studio 2013 preview would be available today along with the Windows 8.1 bits.  One of the features of VS2013 that he demonstrated is the power consumption profiler.  With battery life being a key factor for consumer devices this is a welcome addition. He didn't limit his presentation to VS2013 features though.  He showed how the Store has been redesigned to enable better search and discoverability of apps and how Win 8.1 can automatically scale the screen depending on the resolution of the device.  The last feature he demoed was the real time video streaming API, which he made sure we understood by attaching a Surface to a little robot.  Oh, but there was one more thing.
    Antoine and Julie announced that all attendees would also be getting Surface Pros.  BONUS! How much more could there be?  Gurdeep Singh Pall was about to pile on.  He introduced us to Bing as a platform (BaaP?).  He said if they (Microsoft) could do something good with an API, 3rd party developers can do something that is dynamite, and he showed us some of the tools they had produced.  These included natural user interface improvements such as voice commands that looked to put Siri to shame.  Add to that 3D, OCR and translation capabilities and the future looks to be full of opportunities. Ballmer then came out to show us one last thing.  Project Spark is a game design environment that will be available for Windows 8.1, XBox 360 and XBox One.  All I can say is that if my kids get their hands on this they are going to be able to learn some of what dad does in a much more enjoyable way. At the end of it all I was both exhausted and energized by what I saw.  What could they have possibly left for the day 2 keynote?  I hear it will feature Scott Hanselman.  If that is right we are in for a treat.  See you there. del.icio.us Tags: BUILD 2013,Windows 8.1,Windows Phone,XAML,Keynote,Bing,Visual Studio 2013,Project Spark

    Read the article

  • Taking a Flying Leap

    - by Lance Shaw
    Yesterday, I went skydiving with three of my children.  It was thrilling, scary, invigorating and exciting. While there is obvious risk involved, the reward and feeling of success was well worth it. You might already be wondering what skydiving would have to with WebCenter, so let me explain. Implementing a skydiving program and becoming an instructor does not happen overnight.  It does not happen with the purchase of the needed technology. Not one of us would go out, buy a parachute, the harnesses, helmet and all the gear and be able to convince anyone that we are now ready to be a skydiving instructor. The fact is that obtaining the technology is merely a small piece of the overall process and so is the case with managing content in your company. You don't just buy the right software (Oracle WebCenter Content) and go to your boss and declare information management success. There is planning, research and effort that goes into deploying software of any kind and especially when it is as mission-critical to the success of your business as Enterprise Content Management. To become a certified skydiving instructor takes at least 3 years of commitment and often longer. In the United States, candidates must complete over 500 solo jumps of their own over a minimum of 36 months and then must complete additional rigorous training under observation.  When you consider the amount of time and effort involved, it's not unlike getting a college degree and anyone that has trusted their lives to one of these instructors will no doubt appreciate their dedication to the curriculum.  Implementing an ECM system won't take that long, but it certainly requires commitment, analysis and consideration. But guess what?  Humans are involved and that means that mistakes can happen and that rules change.  This struck me while reading an excellent post on darkreading.com by Glenn S. Phillips entitled "Mission Impossible: 4 Reasons Compliance is Impossible".  His over-arching point was that with information management and security, environments change and people are involved meaning the work is never done.  He stated that you can never claim your compliance efforts are complete because of the following reasons. People are involved.  And lets face it, some are more trustworthy than others. Change is Constant. There is always some new technology coming along that is disruptive. Consumer grade cloud file sharing and sync tools come to mind here. Compliance is interpreted, not defined.  Laws and the judges that read them are always on the move. Technology is a tool, not a complete solution. There is no magic pill. The skydiving analogy holds true here as well.  Ultimately, a single person packs your parachute.  For obvious reasons, you prefer that this person be trustworthy but there are no absolute guarantees of a 100% error-free scenario.  Weather and wind conditions are never a constant and the best-laid plans for a great day of skydiving are easily disrupted by forces outside of your control.  Rules and regulations vary by location and may be updated at any time and as I mentioned early on, even the best technology on its own will only get you started. The good news is that, like skydiving, with the right technology, the right planning, the right team and a proper understanding of the rules and regulations that govern your industry, your ECM deployment can be a great success.  
Failure to plan for any of the 4 factors that Glenn outlined in his article will certainly put your deployment and maybe even your company at risk, so consider them carefully. As a final aside, for those of you who consider skydiving an incredibly dangerous and risky pastime, consider this comparative statistic.  In 2012, the U.S. Parachute Association recorded 19 fatal skydiving accidents in the U.S. out of roughly 3.1 million jumps.  That’s 0.006 fatalities per 1,000 jumps. By comparison, the U.S. National Highway Traffic Safety Administration reports that there were 34,080 deaths due to car accidents in 2012.  Based on the percentages, one could argue that it is safer to jump out of a plane than to drive to the airport where the skydiving will take place. While the way you manage, secure, classify, control, retain and dispose of company files may not carry as much risk as driving or skydiving, it certainly carries risk for the organization when not planned and deployed appropriately.  Consider all the factors involved in your organization as you make your content management plans.  For additional areas of consideration, be sure to download our free whitepaper on the topic entitled "The Top 10 Criteria for Choosing an ECM System" which is available for download here.

    Read the article

  • The Enterprise is a Curmudgeon

    - by John K. Hines
    Working in an enterprise environment is a unique challenge.  There's a lot more to software development than developing software.  A project lead or Scrum Master has to manage personalities and intra-team politics, has to manage accomplishing the task at hand while creating the opportunities and a reputation for handling desirable future work, has to create a competent, happy team that actually delivers while being careful not to burn bridges or hurt feelings outside the team.  Which makes me feel surprised to read advice like: " The enterprise should figure out what is likely to work best for itself and try to use it." - Ken Schwaber, The Enterprise and Scrum. The enterprises I have experience with are fundamentally unable to be self-reflective.  It's like asking a Roman gladiator if he'd like to carve out a little space in the arena for some silent meditation.  I'm currently wondering how compatible Scrum is with the top-down hierarchy of life in a large organization.  Specifically, manufacturing-mindset, fixed-release, harmony-valuing large organizations.  Now I understand why Agile can be a better fit for companies without much organizational inertia. Recently I've talked with nearly two dozen software professionals and their managers about Scrum and Agile.  I've become convinced that a developer, team, organization, or enterprise can be Agile without using Scrum.  But I'm not sure about what process would be the best fit, in general, for an enterprise that wants to become Agile.  It's possible I should read more than just the introduction to Ken's book. I do feel prepared to answer some of the questions I had asked in a previous post: How can Agile practices (including but not limited to Scrum) be adopted in situations where the highest-placed managers in a company demand software within extremely aggressive deadlines? Answer: In a very limited capacity at the individual level.  The situation here is that the senior management of this company values any software release more than it values developer well-being, end-user experience, or software quality.  Only if the developing organization is given an immediate refactoring opportunity does this sort of development make sense to a person who values sustainable software.   How can Agile practices be adopted by teams that do not perform a continuous cycle of new development, such as those whose sole purpose is to reproduce and debug customer issues? Answer: It depends.  For Scrum in particular, I don't believe Scrum is meant to manage unpredictable work.  While you can easily adopt XP practices for bug fixing, the project-management aspects of Scrum require some predictability.  My question here was meant toward those who want to apply Scrum to non-development teams.  In some cases it works, in others it does not. How can a team measure if its development efforts are both Agile and employ sound engineering practices? Answer: I'm currently leaning toward measuring these independently.  The Agile Principles are a terrific way to measure if a software team is agile.  Sound engineering practices are those practices which help developers meet the principles.  I think Scrum is being mistakenly applied as an engineering practice when it is essentially a project management practice.  In my opinion, XP and Lean are examples of good engineering practices. How can Agile be explained in an accurate way that describes its benefits to sceptical developers and/or revenue-focused non-developers? 
Answer: Agile techniques will result in higher-quality, lower-cost software development.  This comes primarily from finding defects earlier in the development cycle.  If there are individual developers who do not want to collaborate, write unit tests, or refactor, then these are simply developers who are either working in an area where adding these techniques will not add value (i.e. they are an expert) or they are a developer who is satisfied with the status quo.  In the first case they should be left alone.  In the second case, the results of Agile should be demonstrated by other developers who are willing to receive recognition for their efforts.  It all comes down to individuals, doesn't it?  If you're working in an organization whose Agile adoption consists exclusively of Scrum, consider ways to form individual Agile teams to demonstrate its benefits.  These can even be virtual teams that span people across org-chart boundaries.  Once you can measure real value, whether it's Scrum, Lean, or something else, people will follow.  Even the curmudgeons.

    Read the article

  • Adaptive Layout for ADF Faces on Tablets

    - by Shay Shmeltzer
    In the 11.1.1.6 version of Oracle ADF we started adding specific features to the ADF Faces components so they'll work better on iPad tablets. In this entry I'm going to highlight some new capabilities that we have added to the 11.1.2.3 release. (Note: if you are still on the 11.1.1.* branch, you'll need to wait for 11.1.1.7 to get the features discussed here.) The two key additions in the 11.1.2.3 version compared to the 11.1.1.6 features for iPad support include pagination for tables and adaptive flow layout. The pagination for tables is self explanatory: since iPads don't show scroll bars, we automatically switch the table component to render with a pagination toolbar that allows you to scroll through sets of records or jump directly to a specific set. See the image below. The adaptive flow layout takes a bit more explanation. On regular desktops the UI that you usually build for ADF Faces screens is going to use stretch layout - meaning that it stretches to fill the whole area of the browser window. If you resize the browser window, the ADF Faces page resizes with it. If your browser window is too small, scroll bars will appear to allow you to scroll to areas that are "hidden". However on an iPad, this is probably not the type of layout you want - you would rather have a flow layout that eliminates scroll bars and instead allows you to scroll down the page. Basically you want the page to be sized based on its content, rather than based on the browser window size. In ADF Faces terminology this can be done with the dimensionsFrom property set to "children". And here comes the tricky part: in the past (and also today), when you create an ADF Faces page and add a stretchable component to it, the dimensionsFrom property is set to parent by default. This is true for other layout components you'll add as well. At this point you might be wondering "Does this mean I'll need to go to each of the layout components in my page and modify the dimensionsFrom property value to be children?" ADF Faces to the rescue... To eliminate the need for these tedious manual changes, we introduced a new web.xml parameter, "oracle.adf.view.rich.geometry.DEFAULT_DIMENSIONS". You'll basically add the following to your web.xml:
        <context-param>
          <description>
            This parameter controls the default value for component geometry on the page.
            Supported values are:
              legacy - component attributes use the default values as specified for the attributes
                       in the tag documentation (default value)
              auto   - component attributes use the correct default value given the value of their
                       parent component. For example, with this setting, the panelStretchLayout
                       will use "auto" as the default value for its "dimensionsFrom" attribute
                       instead of "parent".
          </description>
          <param-name>oracle.adf.view.rich.geometry.DEFAULT_DIMENSIONS</param-name>
          <param-value>auto</param-value>
        </context-param>
    Once you set this parameter, you only need to set the dimensionsFrom attribute for the top level layout component on your page, and the rest of the components will adjust accordingly. One trick that you can use, and that is used in the demo below, is to have the dimensionsFrom property depend on the type of client that accesses your application. This way you can switch between stretch or flow layout based on the device accessing your application.
    For example, I use the following in my page:
        <af:panelStretchLayout topHeight="70px" startWidth="0px" endWidth="0px"
            dimensionsFrom="#{adfFacesContext.agent.capabilities['touchScreen'] eq 'none' ? 'parent' : 'children' }">
    This results in a flow layout for iPads and a stretch layout for regular browsers. Check out the result in the demo below.

    Read the article

  • "Imprinting" as a language feature?

    - by MKO
    Idea: I had this idea for a language feature that I think would be useful; does anyone know of a language that implements something like this? The idea is that besides inheritance a class can also use something called "imprinting" (for lack of a better term). A class can imprint one or several (non-abstract) classes. When a class imprints another class it gets all its properties and all its methods. It's like the class storing an instance of the imprinted class and redirecting its methods/properties to it. A class that imprints another class therefore by definition also implements all its interfaces and its abstract class. So what's the point? Well, inheritance and polymorphism are hard to get right. Often composition gives far more flexibility. Multiple inheritance offers a slew of different problems without much benefit (IMO). I often write adapter classes (in C#) by implementing some interface and passing along the actual methods/properties to an encapsulated object. The downside to that approach is that if the interface changes, the class breaks. You also have to put in a lot of code that does nothing but pass things along to the encapsulated object. A classic example is that you have some class that implements IEnumerable or IList and contains an internal class it uses. With this technique things would be much easier.
    Example (C#):
        [imprint List<Person> as peopleList]
        public class People : PersonBase
        {
            public void SomeMethod()
            {
                DoSomething(this.Count); //Count is from List
            }
        }
        //Now People can be treated as a List<Person>
        People people = new People();
        foreach(Person person in people) { ... }
    peopleList is an alias/variable name (of your choice) used internally to alias the instance, but can be skipped if not needed. One thing that's useful is to override an imprinted method; that could be achieved with the ordinary override syntax:
        public override void Add(Person person)
        {
            DoSomething();
            personList.Add(person);
        }
    Note that the above is functionally equivalent (and could be rewritten by the compiler) to:
        public class People : PersonBase, IList<Person>
        {
            private List<Person> personList = new List<Person>();
            public override void Add(object obj) { this.personList.Add(obj); }
            public override int IndexOf(object obj) { return personList.IndexOf(obj); }
            //etc etc for each signature in the interface
        }
    Only if IList changes will your class break. IList won't change, but an interface that you, someone in your team, or a third party has designed might just change. Also this saves you writing a whole lot of code for some interfaces/abstract classes.
    Caveats: There are a couple of gotchas. First, syntax must be added to call the imprinted classes' constructors from the imprinting class's constructor. Also, what happens if a class imprints two classes which have the same method? In that case the compiler would detect it and force the class to define an override of that method (where you could choose whether to call either imprinted class or both). So what do you think, would it be useful, any caveats? It seems it would be pretty straightforward to implement something like that in the C# language, but I might be missing something :)
    Sidenote - Why is this different from multiple inheritance? OK, so some people have asked about this. Why is this different from multiple inheritance, and why not multiple inheritance? In C# methods are either virtual or not. Say that we have ClassB which inherits from ClassA. ClassA has the methods MethodA and MethodB. ClassB overrides MethodA but not MethodB. Now say that MethodB has a call to MethodA. If MethodA is virtual, it will call the implementation that ClassB has; if not, it will use the base class ClassA's MethodA, and you'll end up wondering why your class doesn't work as it should. With the terminology so far you might already be confused. So what happens if ClassB inherits both from ClassA and another ClassC? I bet both programmers and compilers will be scratching their heads. The benefit of this approach IMO is that the imprinted classes are totally encapsulated and need not be designed with multiple inheritance in mind. You can basically imprint anything.
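    Not an answer to the language-design question, but a hedged sketch of how part of this can be approximated in today's C# (the Person/People names are just the ones from the example above; the implicit conversion operator is my own workaround, not part of the proposal). It lets callers hand a People instance to code expecting the wrapped list without forwarding every IList<Person> member by hand, at the cost of exposing the inner list:
        using System.Collections.Generic;

        public class Person
        {
            public string Name { get; set; }
        }

        public class People
        {
            private readonly List<Person> personList = new List<Person>();

            public int Count { get { return personList.Count; } }

            public void Add(Person person)
            {
                // Custom behavior can be layered on top of the forwarded call here.
                personList.Add(person);
            }

            // Lets People be passed wherever a List<Person> is expected,
            // instead of implementing the interface member-by-member.
            public static implicit operator List<Person>(People people)
            {
                return people.personList;
            }
        }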

    Read the article

  • Clone a VirtualBox Machine

    I just installed VirtualBox, which I want to try out based on recommendations from peers for running a server from within my Windows 7 x64 OS.  I've never used VirtualBox, so I'm certainly no expert at it, but I did want to share my experience with it thus far.  Specifically, my intention is to create a couple of virtual machines.  One I intend to use as a build server, for which a virtual machine makes sense because I can easily move it around as needed if there are hardware issues (it's worth noting my need for setting up a build server at the moment is a result of a disk failure on the old build server).  The other VM I want to set up will act as a proxy server for the issue tracking system we're using at Code Project, Axosoft OnTime.  They have a Remote Server application for this purpose, and since the OnTime install is 300 miles away from my location, the Remote Server should speed up my use of the OnTime client by limiting the chattiness with the database (at least, that's the hope). So, I need two VMs, and I'm lazy.  I don't want to have to install the OS and such twice.  No problem, it should be simple to clone a VirtualBox machine, or clone a VirtualBox hard drive, right?  Well unfortunately, if you look at the UI for VirtualBox, there's no such command.  You're left wondering "How do I clone a VirtualBox machine?" or the slightly related "How do I clone a VirtualBox hard drive?" If you've used VirtualPC, then you know that it's actually pretty easy to copy and move around those VMs.  Not quite so easy with VirtualBox.  Finding the files is easy; they're located in your user folder within the .VirtualBox folder (possibly within a HardDisks folder).  The disks have a .vdi extension and will be pretty large if you've installed anything.  The one shown here has just Windows Server 2008 R2 installed on it - nothing else. If you copy the .vdi file and rename it, you can use the Virtual Media Manager to view it and you can create a new machine and choose the new drive to attach to.  Unfortunately, if you simply make a copy of the drive, this won't work and you'll get an error that says something to the effect of: Cannot register the hard disk PATH with UUID {id goes here} because a hard disk PATH2 with UUID {same id goes here} already exists in the media registry (PATH to XML file). There are command line tools you can use to do this in a way that avoids this error.  Specifically, the c:\Program Files\Sun\VirtualBox\VBoxManage.exe program is used for all command line access to VirtualBox, and to copy a virtual disk (.vdi file) you would call something like this:
        VBoxManage clonehd Disk1.vdi Disk1_Copy.vdi
    However, in my case this didn't work.  I got basically the same error I showed above, along with some debug information for line 628 of VBoxManageDisk.cpp.  As my main task was not to debug the C++ code used to write VirtualBox, I continued looking for a simple way to clone a virtual drive.  I found it in this blog post.
    The Secret setvdiuuid Command
    VBoxManage has a whole bunch of commands you can use with it - just pass it /? to see the list.  However, it also has a special command called internalcommands that opens up access to even more commands.  The one that's interesting for us here is the setvdiuuid command.  By calling this command and passing in the file path to your vdi file, it will reset the UUID to a new (random, apparently) UUID.  This then allows the Virtual Media Manager to cope with the file, and lets you set up new machines that reference the newly UUID'd virtual drive.
    The full command line would be:
        VBoxManage internalcommands setvdiuuid MyCopy.vdi
    The following screenshot shows the error when trying clonehd as well as the successful use of setvdiuuid.
    Summary
    Now that I can clone machines easily, it's a simple matter to set up base builds of any OS I might need, and then fork from there as needed.  Hopefully the GUI for VirtualBox will be improved to include better support for copying machines/disks, as this is, I'm sure, a very common scenario.
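    Putting the workaround together, the whole thing boils down to two commands (a sketch based on the steps above; adjust the file names and the VBoxManage install path for your own system):
        copy Disk1.vdi Disk1_Copy.vdi
        VBoxManage internalcommands setvdiuuid Disk1_Copy.vdi
    Once the UUID has been reset, the copied disk registers cleanly in the Virtual Media Manager and can be attached to a new machine.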

    Read the article

  • Adjusting server-side tickrate dynamically

    - by Stuart Blackler
    I know nothing of game development/this site, so I apologise if this is completely foobar. Today I experimented with building a small game loop for a network game (think MW3, CSGO etc). I was wondering why they do not build in automatic rate adjustment based on server performance? Would it affect the client that much if the client knew this frame is based on this tickrate? Has anyone attempted this before? Here is what my noobish C++ brain came up with earlier. It will improve the tickrate if it has been stable for x ticks. If it "lags", the tickrate will be reduced down by y amount:
        // GameEngine.cpp : Defines the entry point for the console application.
        //
        #ifdef WIN32
        #include <Windows.h>
        #else
        #include <sys/time.h>
        #include <ctime>
        #endif
        #include <iostream>
        #include <dos.h>
        #include "stdafx.h"

        using namespace std;

        UINT64 GetTimeInMs()
        {
        #ifdef WIN32
            /* Windows */
            FILETIME ft;
            LARGE_INTEGER li;
            /* Get the amount of 100 nano seconds intervals elapsed since January 1, 1601 (UTC)
             * and copy it to a LARGE_INTEGER structure. */
            GetSystemTimeAsFileTime(&ft);
            li.LowPart = ft.dwLowDateTime;
            li.HighPart = ft.dwHighDateTime;
            UINT64 ret = li.QuadPart;
            ret -= 116444736000000000LL; /* Convert from file time to UNIX epoch time. */
            ret /= 10000;                /* From 100 nano seconds (10^-7) to 1 millisecond (10^-3) intervals */
            return ret;
        #else
            /* Linux */
            struct timeval tv;
            gettimeofday(&tv, NULL);
            uint64 ret = tv.tv_usec;
            /* Convert from micro seconds (10^-6) to milliseconds (10^-3) */
            ret /= 1000;
            /* Adds the seconds (10^0) after converting them to milliseconds (10^-3) */
            ret += (tv.tv_sec * 1000);
            return ret;
        #endif
        }

        int _tmain(int argc, _TCHAR* argv[])
        {
            int sv_tickrate_max = 1000;   // The maximum amount of ticks per second
            int sv_tickrate_min = 100;    // The minimum amount of ticks per second
            int sv_tickrate_adjust = 10;  // How much to de/increment the tickrate by
            int sv_tickrate_stable_before_increment = 1000; // How many stable ticks before we increase the tickrate again
            int sys_tickrate_current = sv_tickrate_max; // Always start at the highest possible tickrate for the best performance
            int counter_stable_ticks = 0; // How many ticks we have not lagged for

            UINT64 __startTime = GetTimeInMs();
            int ticks = 100000;
            while(ticks > 0)
            {
                int maxTimeInMs = 1000 / sys_tickrate_current;
                UINT64 _startTime = GetTimeInMs();

                // Long code here...
                cout << ".";

                UINT64 _timeTaken = GetTimeInMs() - _startTime;
                if(_timeTaken < maxTimeInMs)
                {
                    Sleep(maxTimeInMs - _timeTaken);
                    counter_stable_ticks++;
                    if(counter_stable_ticks >= sv_tickrate_stable_before_increment)
                    {
                        // reset the stable # ticks counter
                        counter_stable_ticks = 0;
                        // make sure that we don't go over the maximum tickrate
                        if(sys_tickrate_current + sv_tickrate_adjust <= sv_tickrate_max)
                        {
                            sys_tickrate_current += sv_tickrate_adjust;
                            // let me know in console #DEBUG
                            cout << endl << "Improving tickrate. New tickrate: " << sys_tickrate_current << endl;
                        }
                    }
                }
                else if(_timeTaken > maxTimeInMs)
                {
                    cout << endl;
                    if((sys_tickrate_current - sv_tickrate_adjust) > sv_tickrate_min)
                    {
                        sys_tickrate_current -= sv_tickrate_adjust;
                    }
                    else
                    {
                        if(sys_tickrate_current == sv_tickrate_min)
                        {
                            cout << "Please reduce sv_tickrate_min..." << endl;
                        }
                        else
                        {
                            sys_tickrate_current = sv_tickrate_min;
                        }
                    }
                    // let me know in console #DEBUG
                    cout << "The server has lag. Reduced tickrate to: " << sys_tickrate_current << endl;
                }
                ticks--;
            }
            UINT64 __timeTaken = GetTimeInMs() - __startTime;
            cout << endl << endl << "Total time in ms: " << __timeTaken;
            cout << endl << "Ending tickrate: " << sys_tickrate_current;
            char test;
            cin >> test;
            return 0;
        }
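    As a side note on the "would it affect the client" question: one hedged option (my own sketch; the struct and field names are hypothetical, not from any real engine) is to stamp every state snapshot with the tick duration that produced it, so clients can scale interpolation and prediction to whatever rate the server is currently running at:
        #include <cstdint>

        // Hypothetical per-snapshot header sent to clients each tick.
        struct SnapshotHeader
        {
            std::uint32_t tickNumber;  // server tick the snapshot was produced on
            std::uint16_t tickRateHz;  // sys_tickrate_current when it was built
            std::uint16_t tickTimeMs;  // 1000 / tickRateHz, precomputed for the client
        };

        // Fill the header each tick before serializing the game state.
        SnapshotHeader MakeHeader(std::uint32_t tick, int currentTickRate)
        {
            SnapshotHeader h;
            h.tickNumber = tick;
            h.tickRateHz = static_cast<std::uint16_t>(currentTickRate);
            h.tickTimeMs = static_cast<std::uint16_t>(1000 / currentTickRate);
            return h;
        }
    Clients that interpolate between the last two snapshots can then use tickTimeMs instead of assuming a fixed server rate.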

    Read the article

  • Efficient Way to Draw Grids in XNA

    - by sm81095
    So I am working on a game right now, using Monogame as my framework, and it has come time to render my world. My world is made up of a grid (think Terraria but top-down instead of from the side), and it has multiple layers of grids in a single world. Knowing how inefficient it is to call SpriteBatch.Draw() a lot of times, I tried to implement a system where the tile would only be drawn if it wasn't hidden by the layers above it. The problem is, I'm getting worse performance by checking if it's hidden than when I just let everything draw even if it's not visible. So my question is: how do I efficiently check if a tile is hidden to cut down on the draw() calls? Here is my draw code for a single layer, drawing floors, and then the tiles (which act like walls):
        public void Draw(GameTime gameTime)
        {
            int drawAmt = 0;
            int width = Tile.TILE_DIM;
            int startX = (int)_parent.XOffset;
            int startY = (int)_parent.YOffset;

            //Gets the starting tiles and the dimensions to draw tiles, so only onscreen tiles are drawn, allowing for the drawing of large worlds
            int tileDrawWidth = ((CIGame.Instance.Graphics.PreferredBackBufferWidth / width) + 4);
            int tileDrawHeight = ((CIGame.Instance.Graphics.PreferredBackBufferHeight / width) + 4);
            int tileStartX = (int)MathHelper.Clamp((-startX / width) - 2, 0, this.Width);
            int tileStartY = (int)MathHelper.Clamp((-startY / width) - 2, 0, this.Height);

            #region Draw Floors and Tiles
            CIGame.Instance.GraphicsDevice.SetRenderTarget(_worldTarget);
            CIGame.Instance.GraphicsDevice.Clear(Color.Black);
            CIGame.Instance.SpriteBatch.Begin();

            //Draw floors
            for (int x = tileStartX; x < (int)MathHelper.Clamp(tileStartX + tileDrawWidth, 0, this.Width); x++)
            {
                for (int y = tileStartY; y < (int)MathHelper.Clamp(tileStartY + tileDrawHeight, 0, this.Height); y++)
                {
                    //Check if this tile is hidden by layer above it
                    bool visible = true;
                    for (int i = this.LayerNumber; i <= _parent.ActiveLayer; i++)
                    {
                        if (this.LayerNumber != (_parent.Layers - 1) &&
                            (_parent.GetTileAt(x, y, i + 1).Opacity >= 1.0f || _parent.GetFloorAt(x, y, i + 1).Opacity >= 1.0f))
                        {
                            visible = false;
                            break;
                        }
                    }

                    //Only draw if visible under the tile above it
                    if (visible && this.GetTileAt(x, y).Opacity < 1.0f)
                    {
                        Texture2D tex = WorldTextureManager.GetFloorTexture((Floor)_floors[x, y]);
                        Rectangle source = WorldTextureManager.GetSourceForIndex(((Floor)_floors[x, y]).GetTextureIndexFromSurroundings(x, y, this), tex);
                        Rectangle draw = new Rectangle(startX + x * width, startY + y * width, width, width);
                        CIGame.Instance.SpriteBatch.Draw(tex, draw, source, Color.White * ((Floor)_floors[x, y]).Opacity);
                        drawAmt++;
                    }
                }
            }

            //Draw tiles
            for (int x = tileStartX; x < (int)MathHelper.Clamp(tileStartX + tileDrawWidth, 0, this.Width); x++)
            {
                for (int y = tileStartY; y < (int)MathHelper.Clamp(tileStartY + tileDrawHeight, 0, this.Height); y++)
                {
                    //Check if this tile is hidden by layers above it
                    bool visible = true;
                    for (int i = this.LayerNumber; i <= _parent.ActiveLayer; i++)
                    {
                        if (this.LayerNumber != (_parent.Layers - 1) &&
                            (_parent.GetTileAt(x, y, i + 1).Opacity >= 1.0f || _parent.GetFloorAt(x, y, i + 1).Opacity >= 1.0f))
                        {
                            visible = false;
                            break;
                        }
                    }

                    if (visible)
                    {
                        Texture2D tex = WorldTextureManager.GetTileTexture((Tile)_tiles[x, y]);
                        Rectangle source = WorldTextureManager.GetSourceForIndex(((Tile)_tiles[x, y]).GetTextureIndexFromSurroundings(x, y, this), tex);
                        Rectangle draw = new Rectangle(startX + x * width, startY + y * width, width, width);
                        CIGame.Instance.SpriteBatch.Draw(tex, draw, source, Color.White * ((Tile)_tiles[x, y]).Opacity);
                        drawAmt++;
                    }
                }
            }

            CIGame.Instance.SpriteBatch.End();
            Console.WriteLine(drawAmt);
            CIGame.Instance.GraphicsDevice.SetRenderTarget(null); //TODO: Change to new rendertarget instead of null
            #endregion
        }
    So I was wondering if this is an efficient approach that I'm just going about wrongly, or if there is a different, more efficient way to check whether the tiles are hidden. EDIT: As an example of how much it affects performance: using a world with three layers, allowing everything to draw no matter what gives me 60FPS, but checking if it's visible against all of the layers above it gives me only 20FPS, while checking only the layer immediately above it gives me a fluctuating FPS between 30 and 40FPS.
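    One direction worth trying (this is a suggestion of mine, not something from the code above): move the layer check out of Draw entirely. Precompute a per-layer visibility grid whenever a tile or floor changes opacity, and let Draw do a single array lookup per tile. A rough sketch, reusing the accessors from the code above:
        // Cached visibility for this layer; rebuild only when tiles/floors change.
        private bool[,] _visible;

        public void RebuildVisibility()
        {
            _visible = new bool[this.Width, this.Height];
            for (int x = 0; x < this.Width; x++)
            {
                for (int y = 0; y < this.Height; y++)
                {
                    bool visible = true;
                    // Hidden if any layer above (up to the active layer) is fully opaque here.
                    for (int i = this.LayerNumber; i <= _parent.ActiveLayer; i++)
                    {
                        if (this.LayerNumber != (_parent.Layers - 1) &&
                            (_parent.GetTileAt(x, y, i + 1).Opacity >= 1.0f || _parent.GetFloorAt(x, y, i + 1).Opacity >= 1.0f))
                        {
                            visible = false;
                            break;
                        }
                    }
                    _visible[x, y] = visible;
                }
            }
        }

        // In Draw(), replace the inner layer loop with:
        //     if (_visible[x, y]) { /* draw the floor/tile as before */ }
    Rebuilding only on change (or only for the cells that changed) moves the cost off the per-frame path, which is where the FPS drop described in the EDIT is coming from.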

    Read the article

  • The Dreaded Startup Repair Loop on Win 7

    - by HighAltitudeCoder
    For most people, upgrading to Windows 7 has been a relatively painless process.  Not me.  I am in the unlucky 1% or less who had a somewhat less pleasant experience.  First, I cloned my entire hard drive onto a larger (and much faster) solid state hard drive, only experiencing minimal problems. Then, I bought the Retail version of Windows 7 Ultimate, took a deep breath and... oh yeah, I almost forgot - BACK UP THE COMPUTER.  The next morning I upgraded to Win 7 and everything seemed fine, until... I rebooted the system, the nice Windows 7 launch graphics came up, it was about to launch and AWWW, are you kidding me?!?!  Back to the BIOS splash screen?  Next comes the sequence of failure - attempt repair - unable to repair - do-you-want-to-wipe-your-hard-drive decisions. Because I purchased the retail version, a number was provided where I could call Microsoft Tech support.  When I did, they instructed me to click "Install" from my installation CD, which did not work.  When I tried the "Upgrade" option, it reached an impasse, telling me that I had a newer version of Win 7, and thus could not Upgrade.  If you choose "Install" you will lose everything... files, programs, EVERYTHING.  Or at least this is what it tells you.  I was not willing to take the risk. To make things worse, I had installed a new antivirus software application before I realized my system was unstable (Trend Micro Titanium Internet Security), and this was causing additional problems. One interesting thing, and the only saving grace as it turns out, was that my system WOULD successfully reboot into the OS if I chose to restart it, rather than shut it down.  If I chose to shut down, I would have to go through the loop again until I was given the option to restart. As it turned out, I needed to update my BIOS.  Since I had updated my BIOS a long time ago to settings that were stable under Windows Vista Ultimate x64, I incorrectly expected Win 7 to adopt the same settings and didn't expect there to be any problems.  WRONG. My BIOS had a setting to halt the boot cycle if various kinds of errors were detected.  Windows Vista didn't care about this, but forget it under Windows 7.  I immediately corrected that BIOS setting.  Next, there were the two separate BIOS settings: enable USB mouse and enable USB keyboard.  The only sequence of events that would work was to start my reboot process over from scratch with a hard-wired, non-USB keyboard and mouse.  When the system booted under these settings, it didn't detect any errors due to either the mouse or keyboard, and actually booted for the first time in a long while (let me tell ya, that's an amazing experience after fiddling with settings for two entire weekends!) Next step: leave your old mouse and keyboard connected, but also connect your other two devices (mouse, keyboard) that use USB connections.  During the boot cycle, the operating system will not fail due to missing requirements during startup, and it will then pick up the new drivers necessary to use your new hardware. If you think you are in the clear here, you are wrong.  The next VERY IMPORTANT step is to remember to change your settings in the BIOS upon the next startup.  Specifically, you will need to change your BIOS to enable USB mouse and USB keyboard input.  If you don't, Windows will detect an incompatibility upon the next startup, and you will be stuck once again in the endless cycle of reboot/Startup Repair/reboot/Startup Repair, without ever reaching a successful boot.
    Here's the thing - the BIOS and the drivers registered in Win 7 need to match.  If they don't, you're going to lose another weekend worrying and fiddling, all the while wondering if you've permanently damaged your hard drive beyond repair. (Sigh.)  In the end, things worked out.  I must note that it is saddening to see how many posts there are out there that recommend just doing a clean install, as if it's the only option.  How many countless poor souls have lost their data, their backups, their pictures and videos, all for nothing other than the fact that the person giving advice just didn't know what to do at that point? My advice to you: try having a look at your BIOS settings first, make sure Win 7 is compatible with them, and disable in your BIOS anything that might halt your system boot-up process if it encounters errors.

    Read the article

  • The architecture and technologies to use for a secure, fast, reliable and easily scalable web application

    - by DSoul
    ^ For actual questions, skip to the lists down below. I understand that this is a vague topic, but please, before you turn the other way and disregard me, hear me out. I am currently doing research for a web application (I don't know if application is the correct word for it, but I will proceed w/ that for now) that one day might need to be everything mentioned in the title. I am bound by nothing. That means that every language, OS and framework is acceptable, but only if it proves its usefulness. And if you are going to say that scalability and speed depend on the code I write for this application, then I agree, but I am just trying to find something that wouldn't stand in my way later on. I have done quite a bit of reading on this subject, but I still don't have a clear picture of what suits my needs, so I come to you, StackOverflow, to give me directions. I know you all must be wondering what I'm building, but I assure you that it doesn't matter. I have heard of the 12 factor app, though; if you have any similar guidelines or whatnot to suggest, then please, go ahead. For the sake of keeping your answers as open as possible, I'm not gonna provide you my experience regarding anything written in this question.
    ^ Skippers, start here
    First off - the weights of the requirements are probably something like this (on a scale of 10):
    Security - 10
    Speed - 5
    Reliability (concurrency) - 7.5
    Scalability - 10
    Speed and concurrency are not a top priority, in the sense that the program can be CPU intensive, and therefore slow, and only accept a not-that-high number of concurrent users, but both of these factors must be improvable by scaling the system. Anyway, here are my questions:
    How many layers should the application have, so it would be future-proof and could best fulfill the aforementioned requirements? For now, what I have in mind is the most common version: a completely separated front end, that might be a web page or an MMI application or even both; some middleware handling communication between the front and the back end - this is probably a server that communicates w/ the front end via HTTP, and how the communication w/ the back end should be handled is probably dependent on the back end; and the back end - something that handles data through resources like a DB and does various computations w/ the data. This, as the highest priority part of the software, must be easily spread to multiple computers later on and have no known security holes. I think ideally the middleware should send a request to a queue, from where one of the back end processes takes this request, chops it up into smaller parts and puts these parts back onto the same queue as the initial request, after which these parts will then be handled by other back end processes. Something *map-reduce*y, so to say.
    What frameworks, languages and etc. should these layers use? The technologies used here are not that important at this moment; you can ignore this part for now. I've been pointed to node.js for this part. Do you guys know any better alternatives, or have any reasons why I should (not) use node.js for this particular job? I actually have no good idea what to use for this job - there are too many options out there, so please direct me. This part (and the 2nd one also, I think) depends a lot on the OS, so suggest any OSs alongside the technologies/frameworks. Initially, all computers (or 1 for starters) hosting the back end are going to be virtual machines.
    Please do give suggestions for any part of the question that you feel you have comprehensive knowledge and/or experience of. And also, point out if you feel that any part of the current set-up means an instant (or even distant) failure, or if I missed a very important aspect to consider. I'm not looking for a definitive answer for how to achieve my goals, because there certainly isn't one, for I haven't provided you w/ all the required information. I'm just looking for recommendations and directions on what to look into. Also, bear in mind that this isn't something that I have to get done quickly, to sell and let it be re-written by the new owner (which, I've been told multiple times, is what I should aim for). I have all the time in the world and I really just want to learn doing something really high-end. Also, excuse me if my language isn't the best; I'm not a native. Anyway, thanks in advance to anyone who takes the time to help me out here. PS. When I do seem to come up w/ a good architecture/design for this project, I will certainly make it an open project and keep you guys up to date w/ its development. As in, what you could have told me earlier and etc. For obvious reasons the very same question got closed on SO, but could you guys still help me?

    Read the article

  • C++ strongly typed typedef

    - by Kian
    I've been trying to think of a way of declaring strongly typed typedefs, to catch a certain class of bugs in the compilation stage. It's often the case that I'll typedef an int into several types of ids, or a vector to position or velocity: typedef int EntityID; typedef int ModelID; typedef Vector3 Position; typedef Vector3 Velocity; This can make the intent of code more clear, but after a long night of coding one might make silly mistakes like comparing different kinds of ids, or adding a position to a velocity perhaps. EntityID eID; ModelID mID; if ( eID == mID ) // <- Compiler sees nothing wrong { /*bug*/ } Position p; Velocity v; Position newP = p + v; // bug, meant p + v*s but compiler sees nothing wrong Unfortunately, suggestions I've found for strongly typed typedefs include using boost, which at least for me isn't a possibility (I do have c++11 at least). So after a bit of thinking, I came upon this idea, and wanted to run it by someone. First, you declare the base type as a template. The template parameter isn't used for anything in the definition, however: template < typename T > class IDType { unsigned int m_id; public: IDType( unsigned int const& i_id ): m_id {i_id} {}; friend bool operator==<T>( IDType<T> const& i_lhs, IDType<T> const& i_rhs ); }; Friend functions actually need to be forward declared before the class definition, which requires a forward declaration of the template class. We then define all the members for the base type, just remembering that it's a template class. Finally, when we want to use it, we typedef it as: class EntityT; typedef IDType<EntityT> EntityID; class ModelT; typedef IDType<ModelT> ModelID; The types are now entirely separate. Functions that take an EntityID will throw a compiler error if you try to feed them a ModelID instead, for example. Aside from having to declare the base types as templates, with the issues that entails, it's also fairly compact. I was hoping someone had comments or critiques about this idea. One issue that came to mind while writing this, in the case of positions and velocities for example, would be that I can't convert between types as freely as before. Before, multiplying a vector by a scalar would give another vector, so I could do: typedef float Time; typedef Vector3 Position; typedef Vector3 Velocity; Time t = 1.0f; Position p = { 0.0f }; Velocity v = { 1.0f, 0.0f, 0.0f }; Position newP = p + v*t; With my strongly typed typedef I'd have to tell the compiler that multiplying a Velocity by a Time results in a Position. class TimeT; typedef Float<TimeT> Time; class PositionT; typedef Vector3<PositionT> Position; class VelocityT; typedef Vector3<VelocityT> Velocity; Time t = 1.0f; Position p = { 0.0f }; Velocity v = { 1.0f, 0.0f, 0.0f }; Position newP = p + v*t; // Compiler error To solve this, I think I'd have to specialize every conversion explicitly, which can be kind of a bother. On the other hand, this limitation can help prevent other kinds of errors (say, multiplying a Velocity by a Distance, perhaps, which wouldn't make sense in this domain). So I'm torn, and wondering if people have any opinions on my original issue, or my approach to solving it.
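    As a hedged illustration of the "specialize every conversion explicitly" point, here is one way to declare the single allowed conversion Velocity * Time -> Position while every other mix of tags stays a compile error. The wrapper types (Scalar, Vec3) and tag names are invented stand-ins, not the asker's actual Vector3/Float templates.

    ```cpp
    // Sketch only: minimal tagged wrappers. The point is that operator* is
    // defined exactly once, for the (Velocity, Time) -> Position combination,
    // so any other mix of tags remains a compile-time error.
    struct PositionT; struct VelocityT; struct TimeT;   // tag types, never defined

    template <typename Tag>
    struct Scalar {
        float value;
        explicit Scalar(float v) : value(v) {}
    };

    template <typename Tag>
    struct Vec3 {
        float x, y, z;
    };

    using Time     = Scalar<TimeT>;
    using Position = Vec3<PositionT>;
    using Velocity = Vec3<VelocityT>;

    // The one explicit "conversion": Velocity * Time -> Position.
    inline Position operator*(Velocity const& v, Time const& t) {
        return Position{ v.x * t.value, v.y * t.value, v.z * t.value };
    }

    inline Position operator+(Position const& p, Position const& d) {
        return Position{ p.x + d.x, p.y + d.y, p.z + d.z };
    }

    int main() {
        Time t{1.0f};
        Position p{0.0f, 0.0f, 0.0f};
        Velocity v{1.0f, 0.0f, 0.0f};
        Position newP = p + v * t;   // compiles
        // Position bad = p + v;     // would not compile: no operator+(Position, Velocity)
        (void)newP;
    }
    ```

    The cost is exactly what the asker anticipates: each meaningful combination needs its own operator, but that is also what rules out the nonsensical ones.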

    Read the article

  • "Mega Menus" for SEO [duplicate]

    - by Thought Space Designs
    This question already has an answer here: How do I handle having to many links on a webpage because of my menu 4 answers I'm using the term "Mega Menus" loosely here. I'm redesigning my WordPress site (it's going to be responsive), and as part of the redesign, I was debating incorporating some sort of descriptive menu setup. For example, normal navigation drop down menus come in the form of unordered lists of links like so: <nav> <ul> <li> <a href="#">Link1</a> </li> <li> <a href="#">Link2</a> </li> <li> <a href="#">Link3</a> <ul> <li> <a href="#">Sub Link1</a> </li> <li> <a href="#">Sub Link2</a> </li> <li> <a href="#">Sub Link3</a> </li> </ul> </li> <li> <a href="#">Link4</a> </li> </ul> </nav> What I'm looking to do is build my drop down menus with more information than your standard menu. For example, I have a top level link named "Team", and under that link, I want to make a large drop down that contains head shots, headers (in the form of styled p tags) and brief (<100 words) descriptions of each team member (only 2 currently). I want to accompany this with a "Read More" link that takes you to their actual team page. This is just one example, of course, and the other top level links would also have descriptive drop downs in the same fashion. On mobile, I was planning on hiding the "mega menu", and delivering a standard unordered list of links. Here's what I was thinking for overall structure and syntax: <nav> <ul> <li> <a href="#">Home</a> </li> <li> <a href="#">About</a> </li> <li> <a href="#">Team</a> <ul> <!-- DESKTOP --> <li class="mega-menu row"> <a class="col-sm-6" href="#"> <div class="row"> <div class="col-sm-4"> <img src="#" alt="Team Member 1" /> </div> <div class="col-sm-8"> <p class="header">Team Member 1</p> <p>Short description goes here.</p> </div> </div> </a> <a class="col-sm-6" href="#"> <!-- OTHER TEAM MEMBER INFO --> </a> </li> <!-- END DESKTOP --> <!-- MOBILE --> <li> <a href="#">Team Member 1</a> </li> <li> <a href="#">Team Member 2</a> </li> <!-- END MOBILE --> </ul> </li> <li> <a href="#">Contact</a> </li> </ul> </nav> Can anybody think of any potential SEO ramifications of doing this? I'm not going to be loading these menus full of links, so it shouldn't hurt page rank, but what are the effects of having a good bit of text and maybe even forms within nav elements? Is there such a thing as overloading nav with HTML? EDIT: Here's an example of what the menu would look like rendered on desktop. I'm currently hovering the "Team" menu, but you can't see because my mouse went away when I took the screenshot. EDIT 2: This question is not a duplicate. I'm not going to have "too many" links in my menus. I'm wondering how having images and text inside of header navigation will affect my menus. Also, I don't just want "yes, this is bad" answers. Please cite your sources and be specific with reasoning.

    Read the article

  • GooglePlayServicesNotAvailable: GooglePlayServices not available due to error 1

    - by Mathias Lin
    I'm on Galaxy S III with Android 4.0.4, Google Play installed. In my app I try to get a token from the Google Play services, as described on https://developers.google.com/android/google-play-services/authentication. Since it's all quite new (the Google pages were last updated this week), there's not much documentation to be found, especially about each specific error code. final String token = GoogleAuthUtil.getToken(this, "[email protected]", "scope"); gives me an exception: 09-30 11:24:36.075: ERROR/GoogleAuthUtil(11984): GooglePlayServices not available due to error 1 09-30 11:24:36.105: ERROR/AuthTokenCheck_(11984): Error 1 com.google.android.gms.auth.GooglePlayServicesAvailabilityException: GooglePlayServicesNotAvailable at com.google.android.gms.auth.GoogleAuthUtil.f(Unknown Source) at com.google.android.gms.auth.GoogleAuthUtil.getToken(Unknown Source) at com.google.android.gms.auth.GoogleAuthUtil.getToken(Unknown Source) at mobi.app.activity.AuthTokenCheck.getAndUseAuthTokenBlocking(AuthTokenCheck.java:148) at mobi.app.activity.AuthTokenCheck$1.doInBackground(AuthTokenCheck.java:61) at android.os.AsyncTask$2.call(AsyncTask.java:264) at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:305) at java.util.concurrent.FutureTask.run(FutureTask.java:137) at android.os.AsyncTask$SerialExecutor$1.run(AsyncTask.java:208) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1076) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:569) at java.lang.Thread.run(Thread.java:856) https://developers.google.com/android/google-play-services/reference/com/google/android/gms/auth/package-summary tells me: GooglePlayServicesAvailabilityExceptions are special instances of UserRecoverableAuthExceptions which are thrown when the expected Google Play services app is not available for some reason. But what exactly does that mean? And how to resolve it? I've added the Google Play services extras in my SDK and the jar to my project, marked as 'exported'. I'm also wondering what the "Google Play services app" exactly is. Unfortunately it's all not very clearly described at https://developers.google.com/android/google-play-services/. The Google Play services component is delivered as an APK through the Google Play Store, so updates to Google Play services are not dependent on carrier or OEM system image updates. Newer devices will also have Google Play services as part of the device's system image, but updates are still pushed to these newer devices through the Google Play Store. Isn't "Google Play services" app the same as the "Google Play" app? Another question I have, due to lack of documentation: what is the scope parameter for? The documentation just says the following, but not defining what an 'authentication scope' exactly is: scope String representing the authentication scope.
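    For reference, a hedged sketch (not taken from the question) of the availability check shipped with the client library; error code 1 appears to correspond to ConnectionResult.SERVICE_MISSING, i.e. the separate "Google Play services" APK is not installed on the device, which is distinct from the "Google Play" store app itself.

    ```java
    // Hedged sketch: lives in the same Activity as the getToken() call and
    // needs the com.google.android.gms.common.* imports. Checks whether the
    // Google Play services APK is installed and usable before calling
    // GoogleAuthUtil.getToken(), and shows the standard recovery dialog if not.
    private static final int REQUEST_CODE_RECOVER_PLAY_SERVICES = 1001; // arbitrary request code

    private boolean checkPlayServices() {
        int status = GooglePlayServicesUtil.isGooglePlayServicesAvailable(this);
        if (status == ConnectionResult.SUCCESS) {
            return true; // safe to call getToken() from a background thread
        }
        if (GooglePlayServicesUtil.isUserRecoverableError(status)) {
            // Shows the standard "install/update Google Play services" dialog.
            GooglePlayServicesUtil.getErrorDialog(status, this,
                    REQUEST_CODE_RECOVER_PLAY_SERVICES).show();
        }
        return false;
    }
    ```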

    Read the article

  • CodeIgniter: Passing variables via URL - alternatives to using GET

    - by John Durrant
    I'm new to CodeIgniter and have just discovered the difficulties using the GET method of passing variables via the URL (e.g. domain.com/page.php?var1=1&var2=2). I gather that one approach is to pass the variables in the URI segments, but I haven't quite figured out how to do that yet, as it seems to create the expectation of having a function in the controller named after the specific URI segment (a sketch of that approach follows at the end of this question). Anyway, instead of using GET I've decided to use POST by adapting a submit button (disguised as a link) with the variables in hidden input fields. I've created the following solution which seems to work fine, but am wondering whether I'm on the right track here or whether there is an easier way of passing variables via a link within CodeIgniter? I've created the following class in application/libraries/ <?php if ( ! defined('BASEPATH')) exit('No direct script access allowed'); class C_variables { function variables_via_link($action, $link_text, $style, $link_data) { $attributes = array('style' => 'margin:0; padding:0; display: inline;'); echo form_open($action, $attributes); $attributes = array('class' => $style, 'name' => 'link'); echo form_submit($attributes, $link_text); foreach ($link_data as $key => $value){ echo form_hidden($key, $value); } echo form_close(); } } ?> With the following CSS: /* SUBMIT BUTTON AS LINK adapted from thread: http://forums.digitalpoint.com/showthread.php?t=403667 Cross browser support (apparently). */ .submit_as_link { background: transparent; border-top: 0; border-right: 0; border-bottom: 1px solid #00F; border-left: 0; color: #00F; display: inline; margin: 0; padding: 0; cursor: hand /* Added to show hand when hovering */ } *:first-child+html .submit_as_link { /* hack needed for IE 7 */ border-bottom: 0; text-decoration: underline; } * html .submit_as_link { /* hack needed for IE 5/6 */ border-bottom: 0; text-decoration: underline; } The link is then created using the following code in the VIEW: <?php $link = new C_variables; $link_data=array('var1' => 1, 'var2' => 2); $link ->variables_via_link('destination_page', 'here is a link!', 'submit_as_link', $link_data); ?> Thanks for your help...
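    For reference, a hedged sketch of the URI-segment approach mentioned above. The controller and parameter names are invented, and on CodeIgniter 2.x the base class is CI_Controller rather than Controller. A URL such as domain.com/index.php/orders/show/1/2 calls the show() method below, and the trailing segments arrive as its arguments, so no query string is required.

    ```php
    <?php if ( ! defined('BASEPATH')) exit('No direct script access allowed');

    // Hypothetical controller: application/controllers/orders.php
    class Orders extends Controller {

        // domain.com/index.php/orders/show/1/2  =>  show('1', '2')
        public function show($var1 = NULL, $var2 = NULL)
        {
            // Equivalent explicit form, if you prefer reading the segments yourself:
            // $var1 = $this->uri->segment(3);
            // $var2 = $this->uri->segment(4);
            echo "var1 = $var1, var2 = $var2";
        }
    }
    ```

    In a view, such a link can be produced with the URL helper, e.g. anchor('orders/show/1/2', 'here is a link!'), instead of a styled submit button.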

    Read the article

  • MVVM load data during or after ViewModel construction?

    - by mkmurray
    My generic question is as the title states, is it best to load data during ViewModel construction or afterward through some Loaded event handling? I'm guessing the answer is after construction via some Loaded event handling, but I'm wondering how that is most cleanly coordinated between ViewModel and View? Here's more details about my situation and the particular problem I'm trying to solve: I am using the MVVM Light framework as well as Unity for DI. I have some nested Views, each bound to a corresponding ViewModel. The ViewModels are bound to each View's root control DataContext via the ViewModelLocator idea that Laurent Bugnion has put into MVVM Light. This allows for finding ViewModels via a static resource and for controlling the lifetime of ViewModels via a Dependency Injection framework, in this case Unity. It also allows for Expression Blend to see everything in regard to ViewModels and how to bind them. So anyway, I've got a parent View that has a ComboBox databound to an ObservableCollection in its ViewModel. The ComboBox's SelectedItem is also bound (two-way) to a property on the ViewModel. When the selection of the ComboBox changes, this is to trigger updates in other views and subviews. Currently I am accomplishing this via the Messaging system that is found in MVVM Light. This is all working great and as expected when you choose different items in the ComboBox. However, the ViewModel is getting its data during construction time via a series of initializing method calls. This seems to only be a problem if I want to control what the initial SelectedItem of the ComboBox is. Using MVVM Light's messaging system, I currently have it set up where the setter of the ViewModel's SelectedItem property is the one broadcasting the update and the other interested ViewModels register for the message in their constructors. It appears I am currently trying to set the SelectedItem via the ViewModel at construction time, which hasn't allowed sub-ViewModels to be constructed and register yet. What would be the cleanest way to coordinate the data load and initial setting of SelectedItem within the ViewModel? I really want to stick with putting as little in the View's code-behind as is reasonable. I think I just need a way for the ViewModel to know when stuff has Loaded and that it can then continue to load the data and finalize the setup phase. Thanks in advance for your responses.
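    Not the asker's code, but a hedged sketch of the compromise hinted at above (Items, LoadItems and OnViewLoaded are invented names): keep the constructor cheap and defer both the data load and the initial SelectedItem assignment to an explicit method that the view calls from its Loaded event, by which time the sub-ViewModels have been constructed and have registered for the Messenger message.

    ```csharp
    using System.Collections.ObjectModel;
    using System.Linq;

    public class ParentViewModel
    {
        public ObservableCollection<string> Items { get; private set; }
        public string SelectedItem { get; set; }   // real code would raise PropertyChanged

        private bool _isLoaded;

        public ParentViewModel()
        {
            // Constructor stays cheap: no data access here.
            Items = new ObservableCollection<string>();
        }

        public void OnViewLoaded()
        {
            if (_isLoaded) return;     // Loaded can fire more than once (e.g. when re-shown)
            _isLoaded = true;

            LoadItems();                              // populate the collection
            SelectedItem = Items.FirstOrDefault();    // push the initial selection only now,
                                                      // after sub-ViewModels have registered
        }

        private void LoadItems()
        {
            Items.Add("First");
            Items.Add("Second");
        }
    }

    // View code-behind, the single line needed:
    //   Loaded += (s, e) => ((ParentViewModel)DataContext).OnViewLoaded();
    ```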

    Read the article

  • Making a ViewFlipper like the Home Screen using MotionEvent.ACTION_MOVE

    - by DJTripleThreat
    Ok I have a ViewFlipper with three LinearLayouts nested inside it. It defaults to showing the first one. This code: // Assumptions in my Activity class: // oldTouchValue is a float // vf is my view flipper @Override public boolean onTouchEvent(MotionEvent touchEvent) { switch (touchEvent.getAction()) { case MotionEvent.ACTION_DOWN: { oldTouchValue = touchEvent.getX(); break; } case MotionEvent.ACTION_UP: { float currentX = touchEvent.getX(); if (oldTouchValue < currentX) { vf.setInAnimation(AnimationHelper.inFromLeftAnimation()); vf.setOutAnimation(AnimationHelper.outToRightAnimation()); vf.showNext(); } if (oldTouchValue > currentX) { vf.setInAnimation(AnimationHelper.inFromRightAnimation()); vf.setOutAnimation(AnimationHelper.outToLeftAnimation()); vf.showPrevious(); } break; } case MotionEvent.ACTION_MOVE: { // TODO: Some code to make the ViewFlipper // act like the home screen. break; } } return false; } public static class AnimationHelper { public static Animation inFromRightAnimation() { Animation inFromRight = new TranslateAnimation( Animation.RELATIVE_TO_PARENT, +1.0f, Animation.RELATIVE_TO_PARENT, 0.0f, Animation.RELATIVE_TO_PARENT, 0.0f, Animation.RELATIVE_TO_PARENT, 0.0f); inFromRight.setDuration(350); inFromRight.setInterpolator(new AccelerateInterpolator()); return inFromRight; } public static Animation outToLeftAnimation() { Animation outtoLeft = new TranslateAnimation( Animation.RELATIVE_TO_PARENT, 0.0f, Animation.RELATIVE_TO_PARENT, -1.0f, Animation.RELATIVE_TO_PARENT, 0.0f, Animation.RELATIVE_TO_PARENT, 0.0f); outtoLeft.setDuration(350); outtoLeft.setInterpolator(new AccelerateInterpolator()); return outtoLeft; } // for the next movement public static Animation inFromLeftAnimation() { Animation inFromLeft = new TranslateAnimation( Animation.RELATIVE_TO_PARENT, -1.0f, Animation.RELATIVE_TO_PARENT, 0.0f, Animation.RELATIVE_TO_PARENT, 0.0f, Animation.RELATIVE_TO_PARENT, 0.0f); inFromLeft.setDuration(350); inFromLeft.setInterpolator(new AccelerateInterpolator()); return inFromLeft; } public static Animation outToRightAnimation() { Animation outtoRight = new TranslateAnimation( Animation.RELATIVE_TO_PARENT, 0.0f, Animation.RELATIVE_TO_PARENT, +1.0f, Animation.RELATIVE_TO_PARENT, 0.0f, Animation.RELATIVE_TO_PARENT, 0.0f); outtoRight.setDuration(350); outtoRight.setInterpolator(new AccelerateInterpolator()); return outtoRight; } } ... handles the view flipping but the animations are very "on/off". I'm wondering if someone can help me out with the last part. Assuming that I can access the LinearLayouts, is there a way where I can set the position of the layouts based on deltaX and deltaY? Someone gave me this link: https://android.git.kernel.org/?p=platform/frameworks/base.git;a=blob;f=core/java/android/view/animation/TranslateAnimation.java;h=d2ff754ac327dda0fe35e610752abb78904fbb66;hb=HEAD and said I should refer to the applyTransformation method for hints on how to do this but I don't know how to repeat this same behavior.
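    As a hedged sketch only (the threshold, direction handling and the scrollTo() trick are assumptions, not a known-good home-screen clone): one low-API way to make the visible child track the finger is to scroll it during ACTION_MOVE and snap it back on ACTION_UP before flipping. The asker's existing setInAnimation/setOutAnimation calls can stay in the ACTION_UP branch; a full home-screen clone would also scroll the neighbouring child into view.

    ```java
    // Hedged variant of the onTouchEvent() from the question, assuming the
    // same fields (vf, oldTouchValue). Returning true keeps MOVE events coming.
    @Override
    public boolean onTouchEvent(MotionEvent touchEvent) {
        switch (touchEvent.getAction()) {
            case MotionEvent.ACTION_DOWN:
                oldTouchValue = touchEvent.getX();
                break;
            case MotionEvent.ACTION_MOVE: {
                // Drag the currently visible child with the finger; scrollTo()
                // shifts the child's content horizontally by deltaX pixels.
                int deltaX = (int) (oldTouchValue - touchEvent.getX());
                vf.getCurrentView().scrollTo(deltaX, 0);
                break;
            }
            case MotionEvent.ACTION_UP: {
                // Snap back, then decide whether the drag was far enough to flip.
                vf.getCurrentView().scrollTo(0, 0);
                float deltaX = oldTouchValue - touchEvent.getX();
                if (deltaX < -100) vf.showNext();          // finger moved right; threshold arbitrary
                else if (deltaX > 100) vf.showPrevious();  // finger moved left
                break;
            }
        }
        return true;
    }
    ```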

    Read the article

  • Creating Entity Framework objects with Unity for Unit of Work/Repository pattern

    - by TobyEvans
    Hi there, I'm trying to implement the Unit of Work/Repository pattern, as described here: http://blogs.msdn.com/adonet/archive/2009/06/16/using-repository-and-unit-of-work-patterns-with-entity-framework-4-0.aspx This requires each Repository to accept an IUnitOfWork implementation, e.g. an EF data context extended with a partial class to add an IUnitOfWork interface. I'm actually using .net 3.5, not 4.0. My basic Data Access constructor looks like this: public DataAccessLayer(IUnitOfWork unitOfWork, IRealtimeRepository realTimeRepository) { this.unitOfWork = unitOfWork; this.realTimeRepository = realTimeRepository; } So far, so good. What I'm trying to do is add Dependency Injection using the Unity Framework. Getting the EF data context to be created with Unity was an adventure, as it had trouble resolving the constructor - what I did in the end was to add a new overloaded constructor in my partial class and mark it with [InjectionConstructor]: [InjectionConstructor] public communergyEntities(string connectionString, string containerName) :this() { (I know I need to pass the connection string to the base object; that can wait until I've got all the objects initialising correctly.) So, using this technique, I can happily resolve my entity framework object as an IUnitOfWork instance thus: using (IUnityContainer container = new UnityContainer()) { container.RegisterType<IUnitOfWork, communergyEntities>(); container.Configure<InjectedMembers>() .ConfigureInjectionFor<communergyEntities>( new InjectionConstructor("a", "b")) DataAccessLayer target = container.Resolve<DataAccessLayer>(); Great. What I need to do now is create the reference to the repository object for the DataAccessLayer - the DAL only needs to know the interface, so I'm guessing that I need to instantiate it as part of the Unity Resolve statement, passing it the appropriate IUnitOfWork interface. In the past, I would have just passed the Repository constructor the db connection string, and it would have gone away, created a local Entity Framework object and used that just for the lifetime of the Repository method. This is different, in that I create an Entity Framework instance as an IUnitOfWork implementation during the Unity Resolve statement, and it's that instance I need to pass into the constructor of the Repository - is that possible, and if so, how? I'm wondering if I could make the Repository a property and mark it as a Dependency, but that still wouldn't solve the problem of how to create the Repository with the IUnitOfWork object that the DAL is being Resolved with. I'm not sure if I've understood this pattern correctly, and will happily take advice on the best way to implement it - Entity Framework is staying, but Unity can be swapped out if not the best approach. If I've got the whole thing upside down, please tell me. Thanks
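    A hedged sketch of one way Unity can hand the same IUnitOfWork instance to both the repository and the DataAccessLayer. RealtimeRepository and the connection-string values are invented names, and this is not presented as the only correct approach; the shared lifetime manager on the IUnitOfWork registration is the key part.

    ```csharp
    using Microsoft.Practices.Unity;

    // Invented stand-in for whatever implements IRealtimeRepository;
    // add whatever members the interface actually declares.
    public class RealtimeRepository : IRealtimeRepository
    {
        private readonly IUnitOfWork unitOfWork;

        public RealtimeRepository(IUnitOfWork unitOfWork)
        {
            this.unitOfWork = unitOfWork;   // injected by Unity, same instance the DAL receives
        }
    }

    public static class CompositionRoot
    {
        public static DataAccessLayer BuildDataAccessLayer()
        {
            var container = new UnityContainer();

            // One IUnitOfWork per container: every Resolve in this container
            // (the repository's constructor and the DAL's constructor) gets it.
            container.RegisterType<IUnitOfWork, communergyEntities>(
                new ContainerControlledLifetimeManager(),
                new InjectionConstructor("connectionString", "containerName"));

            container.RegisterType<IRealtimeRepository, RealtimeRepository>();

            return container.Resolve<DataAccessLayer>();
        }
    }
    ```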

    Read the article

  • Facebook connect displaying invite friends dialog and closing on completion

    - by Dougnukem
    I'm trying to create a Facebook Connect application that displays a friend invite dialog within the page using Facebook's Javascript API (through a FBMLPopupDialog). The trouble is to display a friend invite dialog you use a multi-friend form which requires an action="url" attribute that represents the URL to redirect your page to when the user completes or skips the form. The problem is that I want to just close the FBMLPopupDialog (the same behavior as if the user just hit the 'X' button on the popup dialog). The best I can do is redirect the user back to the page they were on basically a reload but they lose all AJAX/Flash application state. I'm wondering if any Facebook Connect developers have run into this issue and have a good way to simply display a friend invite "lightbox" dialog within their website where they don't want to "refresh" or "redirect" when the user finishes. The facebook connect JS API provides a FB.Connect.inviteConnectUsers, which provides a nice dialog but only connects existing users of your application who also have a Facebook account and haven't connected. http://bugs.developers.facebook.com/show%5Fbug.cgi?id=4916 function fb_inviteFriends() { //Invite users log("Inviting users..."); FB.Connect.requireSession( function() { //Connect succes var uid = FB.Facebook.apiClient.get_session().uid; log('FB CONNECT SUCCESS: ' + uid); //Invite users log("Inviting users..."); //Update server with connected account updateAccountFacebookUID(); var fbml = fb_getInviteFBML() ; var dialog = new FB.UI. FBMLPopupDialog("Weblings Invite", fbml) ; //dialog.setFBMLContent(fbml); dialog.setContentWidth(650); dialog.setContentHeight(450); dialog.show(); }, //Connect cancelled function() { //User cancelled the connect log("FB Connect cancelled:"); } ); } function fb_getInviteFBML() { var uid = FB.Facebook.apiClient.get_session().uid; var fbml = ""; fbml = '<fb:fbml>\n' + '<fb:request-form\n'+ //Redirect back to this page ' action="'+ document.location +'"\n'+ ' method="POST"\n'+ ' invite="true"\n'+ ' type="Weblings Invite"\n' + ' content="I need your help to discover all the Weblings and save the Internet! WebWars: Weblings is a cool new game where we can collect fantastic creatures while surfing our favorite websites. Come find the missing Weblings with me!'+ //Callback the server with the appropriate Webwars Account URL ' <fb:req-choice url=\''+ WebwarsFB.WebwarsAccountServer +'/SplashPage.aspx?action=ref&reftype=Facebook' label=\'Check out WebWars: Weblings\' />"\n'+ '>\n'+ ' <fb:multi-friend-selector\n'+ ' rows="2"\n'+ ' cols="4"\n'+ ' bypass="Cancel"\n'+ ' showborder="false"\n'+ ' actiontext="Use this form to invite your friends to connect with WebWars: Weblings."/>\n'+ ' </fb:request-form>'+ ' </fb:fbml>'; return fbml; }

    Read the article

  • Spring mail support - no subject

    - by Trick
    I have updated my libraries, and now e-mails are sent without a subject. I don't know where this went wrong... The Mail API is 1.4.3, Spring is 2.5.6 and Spring Integration Mail is 1.0.3.RELEASE. <!-- Definitions for SMTP server --> <bean id="mailSender" class="org.springframework.mail.javamail.JavaMailSenderImpl"> <property name="host" value="${mail.host}" /> <property name="username" value="${mail.username}" /> <property name="password" value="${mail.password}" /> </bean> <bean id="adminMailTemplate" class="org.springframework.mail.SimpleMailMessage" > <property name="from" value="${mail.admin.from}" /> <property name="to" value="${mail.admin.to}" /> <property name="cc"> <list> <value>${mail.admin.cc1}</value> </list> </property> </bean> <!-- Mail service definition --> <bean id="mailService" class="net.bbb.core.service.impl.MailServiceImpl"> <property name="sender" ref="mailSender"/> <property name="mail" ref="adminMailTemplate"/> </bean> The properties mail.host, mail.username, mail.password, mail.admin.from, mail.admin.to and mail.admin.cc1 are all set. Java class: /** The sender. */ private MailSender sender; /** The mail. */ private SimpleMailMessage mail; public void sendMail() { this.mail.setSubject("Subject"); this.mail.setText("msg body"); try { getSender().send(this.mail); } catch (MailException e) { log.error("Error sending mail!",e); } } public SimpleMailMessage getMail() { return this.mail; } public void setMail(SimpleMailMessage mail) { this.mail = mail; } public MailSender getSender() { return this.sender; } public void setSender(MailSender mailSender1) { this.sender = mailSender1; } Everything worked before; I am wondering if there may be a conflict with the new libraries.
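    One hedged thing worth ruling out (an assumption, not a confirmed diagnosis): the adminMailTemplate bean is a shared singleton that sendMail() mutates on every call, so anything else that touches or replaces that bean can leave the subject empty. Copying the template into a fresh message per send, which SimpleMailMessage's copy constructor supports, keeps the template read-only:

    ```java
    // Drop-in variant of the sendMail() method from the question; uses the same
    // fields and imports as the class shown above.
    public void sendMail() {
        // Copies from/to/cc from the injected adminMailTemplate into a new message.
        SimpleMailMessage msg = new SimpleMailMessage(this.mail);
        msg.setSubject("Subject");
        msg.setText("msg body");
        try {
            getSender().send(msg);
        } catch (MailException e) {
            log.error("Error sending mail!", e);
        }
    }
    ```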

    Read the article

  • WPF Animation FPS vs. CPU usage - Am I expecting too much?

    - by Cory Charlton
    Working on a screen saver for my wife, http://cchearts.codeplex.com/, and while I've been able to improve FPS on lower end machines (switch from Path to StreamGeometry, use DrawingVisual instead of UserControl, etc) the CPU usage still seems very high. Here are some numbers I ran from a few 5 minute sampling periods: ~60FPS 35% average CPU on Core 2 Duo T7500 @ 2.2GHz, 3GB ram, NVIDIA Quadro NVS 140M (128MB), Vista [My dev laptop] ~40FPS 50% average CPU on Pentium D @ 3.4GHz, 1.5GB ram, Standard VGA Graphics Adapter (unknown), 2003 Server [A crappy desktop] I can understand the lower frame rate and higher CPU usage on the crappy desktop but it still seems pretty high, and 35% on my dev laptop seems high as well. I'd really like to analyze the application to get more details but I'm having issues there as well, so I'm wondering if I'm doing something wrong (never profiled WPF before). WPF Performance Suite: Process Launch Error Unable to attach to process: CCHearts.exe Do you want to kill it? This error message occurs when I click cancel after attempting launch. If I don't click cancel it sits there idle, I guess waiting to attach. Performance Explorer: Could not launch C:\Projects2\CC.Hearts\CC.Hearts\bin\Debug (USEVISUAL)\CCHearts.exe. Previous attempt to profile the application finished unsuccessfully. Please restart the application. Output Window from Performance: Profiling started. Profiling process ID 5360 (CCHearts). Process ID 5360 has exited. Data written to C:\Projects2\CC.Hearts\CCHearts100608.vsp. Profiling finished. PRF0025: No data was collected. Profiling complete. So I'm stuck wanting to improve performance but have no concrete way to determine where the bottleneck is. Have been relatively successful throwing darts at this point but I'm beyond that now :) PS: Screensaver is hosted at CodePlex if you want to look at the source and missed the link above. Edit: My RenderOptions darts... // NOTE: Grasping at straws here ;-) RenderOptions.SetBitmapScalingMode(newHeart, BitmapScalingMode.LowQuality); RenderOptions.SetCachingHint(newHeart, CachingHint.Cache); RenderOptions.SetEdgeMode(newHeart, EdgeMode.Aliased); I threw those a little while back and didn't see much difference (not sure if the bitmap scaling even comes into play). Really wish I could get profiling working to know where I should try to optimize. For now I assume there is some overhead in creating a new HeartVisual and the DrawingVisual contained inside. Maybe if I reset and reused the hearts (tossed them in a queue once they completed or something) I'd see an improvement. Shrug. Throwing darts while blindfolded is always fun.

    Read the article

  • Why do many software projects fail today?

    - by TomTom
    As long as there have been software projects, the world has been wondering why they fail so often. I would like to know if there is a list or something equivalent which shows how many software projects fail today. It would be nice if there were a comparison over the last 20 - 30 years. You can also add your top reason why a software project fails. Mine is "Requirements are poor or not even existent", which also includes "No (real) customer / user involved". EDIT: It is nearly impossible to clearly define the term "fail". Let's say that fail means: the project was more than 10% over budget and time. In my opinion the 10% +/- is a good range for an offer / tender. EDIT: Until now (Feb 11) it seems that most posters agree that a failed project is basically a failure of the project management (whatever "fail" means). But IMHO it turns out that most developers are not happy with this situation. Perhaps because it is not the manager who gets penalized when a project is not successful, but the "lazy, incompetent" developer teams? When I read the posts I can also sense that there is a big "gap" between the developer side and the management side. The expectations (perhaps also the requirements) seem to be so different that a project cannot be successful in the end (over time / budget; users are not happy; not all first-priority features implemented; too many bugs because developers were forced to implement in too short timeframes ...). I'm asking myself: how can we improve this? Or do we even have the possibility to improve it? Everybody seems to be unsatisfied with the way it goes now. Can we close the gap between these two worlds? Should we (the developers) go on strike and fight for "high quality requirements" and "realistic / iteration based time schedules"? EDIT: Ralph Westphal and Stefan Lieser have founded a new "community" called Clean-Code-Developer. The aim of the group is to bring more professionalism into software engineering. Regardless of whether a developer has a degree or tons of years of experience, anyone can be part of this movement. Clean Code Developers live principles like SOLID every day. A professional developer is the biggest reviewer of his own work, and he has an internal value system which helps him to improve and become better. Check it out at: Clean Code Developer EDIT: Our company is currently doing a thing called "Application Development and Maintenance Benchmarking". This is a service offered by IBM to get feedback from someone external on your software engineering process quality etc. When we get the results, I will tell you more about it.

    Read the article

  • Wrap or adorn wpf listview datatemplate

    - by Chris Cap
    I'm attempting to essentially wrap the contents of a DataTemplate in a listview gridview column with a border. What I want to know is if it's possible to supply an adorner that will surround that template so that I don't have to specify the border in every single datatemplate on every column (which is what I'm doing now). I've got something like this, but I know it's not right: <Style TargetType="{x:Type ListBoxItem}"> <Setter Property="TemplateContent"> <Setter.Value> <ControlTemplate> <StackPanel> <Border BorderBrush="Green" BorderThickness="1"> <AdornedElementPlaceholder /> </Border> </StackPanel> </ControlTemplate> </Setter.Value> </Setter> </Style> This complains that Templatecontent is not a valid type. I've also tried with DataTemplate and that doesn't work either (understandably so). I know I could just create a DataTemplate, however the content for each column is different. At the very least, it binds to different fields. I'm wondering if there's a solution using a dynamic resource, but I don't know much about it. Thanks for your help. EDIT: here's a sample of my listview <ListView ItemsSource="{Binding Path=OrderLines}" ItemContainerStyle="{StaticResource ResourceKey=ListViewItemContainerStyle}"> <ListView.View> <GridView> <GridViewColumn> <GridViewColumn.CellTemplate> <DataTemplate> <TextBox MaxWidth="30" Width="30" MaxLength="2" Text="{Binding Path=Quantity,ValidatesOnDataErrors=True}" /> </DataTemplate> </GridViewColumn.CellTemplate> </GridViewColumn> </GridView> </ListView.View> </ListView> Essentially I want to wrap that text box in the data template and any other items in additional columns.

    Read the article

< Previous Page | 474 475 476 477 478 479 480 481 482 483 484 485  | Next Page >