Search Results

Search found 1470 results on 59 pages for 'hands on'.


  • Specs, Form and Function – What am I Missing?

    - by Barry Shulam
    Friday, October 26th, the Microsoft Surface RT arrived at the office. I was summoned to my boss's office for the grand unpacking. If I had planned ahead I could have used my iPhone 4 to film the event and post it on YouTube, but the desire to hold the device and turn it ON was more inviting than becoming a proxy reviewer for Engadget's website. 1980 was the first time we had a personal computer in our house. It was a Kaypro computer. It weighed 29 pounds, more than any person's lap could hold. Back then the term "portable computer" meant you could remove it from the building and take it elsewhere. Today I am typing this entry on a MacBook Air which weighs 2.38 pounds. This morning Amazon's front-page main title is: "Much More for Much Less". I was born at the right time, starting with the CP/M operating system on the Kaypro and moving through the DOS, Windows, Linux, Mac OS X and mobile phone operating systems and languages. If you are not aware, technology is moving at a rapid pace. The new iPad (for those keeping score, iPad 4) is replacing a 7-month-old machine, the previous new iPad (iPad 3). I have used and owned many technology devices in my life. The main point that most readers in the USA overlook is the fact that we are in the USA. The devices we purchase have a great digital garden to support them. The Kaypro computer had a 7-inch screen. It was a TV tube with two colors – black and green. You could see the 80-column screen flicker with characters – have you ever played Pac-Man emulated on the screen with the ABC characters? Traveling across the world you will find that not all apps on your device will function as they did back home, because they are not offered outside of your country of origin. I think the main question a buyer of technology should be asking is about function. The greatest specs without function limit you. The most beautiful form without function is the same as a crystal vase on your shelf – not a good cereal bowl in the morning. Microsoft Surface RT, Amazon Kindle Fire and Apple iPad are all great devices in their respective customers' hands. My advice for those looking to purchase this year: if the device is your only technology device, buy what you WANT and LIKE. Consider the parallel universe where it's not your only device. Ever go shopping for clothing, shoes, and accessories with your wife, girlfriend, sister or mother? If you listen carefully you will hear the little voices coming out of their heads saying: "This goes well with that and I can use it also with that outfit", "Do you think this clashes with that?", "Ohh, I love how that combination looks on you". Portable devices such as tablets and computers can offer a whole lot more when they are combined with the digital ecosystem you have at home and the one the manufacturer offers online. Pros of each device: Microsoft Surface RT: There is a new feature named SmartGlass which will let you share the content off your tablet to your Xbox 360.
    Microsoft Office is loaded on the tablet. You can have more than one user profile on the tablet if you share it with others. Amazon Kindle or Kindle HD: If you are an Amazon customer with an annual Amazon Prime subscription you can consume videos and read books off the Amazon site. It's the cheapest device. It's a step up from the Kindle reader in many ways. Apple iPad or iPad mini: Over 270 thousand applications. AirPlay gives you the ability to share to your TV screen. If you are a cord cutter (a person who gets their entertainment content over the web or the air instead of from cable providers), AirPlay or SmartGlass is a huge bonus. iPad mini or not: The mini will fit in a purse where the larger one will not. It's lighter, which makes it nice to hold for prolonged periods. It has an option for LTE wireless, which none of the other sub-9-inch tablets offer. The screen is non-Retina, which means the applications are smaller. Speaking with individuals over 50 who wear glasses, the Retina display does not make a difference for them; however, they prefer the larger iPad over the new mini. Happy shopping this Chanukah season. The Kosher Coder. Follow me on Twitter @KosherCoder

    Read the article

  • Microsoft BUILD 2013 Day 1 – Keynote

    - by Tim Murphy
    Originally posted on: http://geekswithblogs.net/tmurphy/archive/2013/06/27/microsoft-build-2013-day-1ndashkeynote.aspx This one is going to be a little long because the keynote was jam-packed, so bear with me. The keynote for the first day of BUILD 2013 was kicked off by Steve Ballmer. He made it very clear that Microsoft's focus is on accelerating its time to market with products and product updates. His quote was that "rapid release" is the new norm. He continued by showing off several new Lumias that have been buzzing around the internet for a while and announced that Sprint will now be carrying the HTC 8XT and Samsung ATIV. Ballmer is known for repeating words or phrases for effect. This time it was "Rapid release, rapid release" and "Touch, touch, touch, touch, touch, …". This was fun, but even more fun was when he announced that all attendees would receive an Acer Iconia 8" tablet. SCORE! The next subject Ballmer focused on was new apps. The three new ones were Flipboard, Facebook and NFL Fantasy Football. I liked the first two because these are ones that people coming from other platforms are missing. The NFL app is great just because it targets a demographic that can be fanatical. If these types of apps keep coming then the missing-app argument goes away. While many Negative Nancys are describing Windows 8.1 as Windows 180, Steve Ballmer chose to call it a "refined blend", as in a coffee that has been improved with a new mix. This includes more multi-tasking options and leveraging Bing throughout the entire ecosystem. He ended this first section by explaining that this will also bring more Bing development opportunities to the community. Steve Ballmer was followed by Julie Larson-Green, who from my point of view spent her time on stage selling us on Windows 8 all over again. That is something I would not have thought was needed until I had listened to some other attendees who had a number of concerns and complaints. She showed a number of new gestures that will come with Windows 8.1, and while they were cool I was left wondering if they really improved the experience. I guess only time will tell. I did like the fact that the UI implementation to bring up "All Apps" now mirrors that of Windows Phone. The consistency is a big step forward that I hope to see continue. The cool factor went up from there as she swiped content from a desktop (mega-tablet) to the Xbox One. This seamless experience, I believe, is what is really needed for any future platform to be relevant. I was much more enthused by the presentation of Antoine Leblond, who humbled us by letting us know that there are 5,000 new APIs. How that can be, or how anyone would ever use all of them, is another question. His announcement was that the Visual Studio 2013 preview would be available today along with the Windows 8.1 bits. One of the features of VS2013 that he demonstrated is the power consumption profiler. With battery life being a key factor for consumer devices this is a welcome addition. He didn't limit his presentation to VS2013 features though. He showed how the Store has been redesigned to enable better search and discoverability of apps, and how Windows 8.1 can automatically scale to multiple screen sizes depending on the resolution of the device. The last feature he demoed was the real-time video streaming API, which he made sure we understood by attaching a Surface to a little robot. Oh, but there was one more thing.
    Antoine and Julie announced that all attendees would also be getting Surface Pros. BONUS! How much more could there be? Gurdeep Singh Pall was about to pile on. He introduced us to Bing as a platform (BaaP?). He said that if they (Microsoft) could do something good with an API, then 3rd-party developers can do something that is dynamite, and he showed us some of the tools they had produced. These included natural user interface improvements such as voice commands that looked to put Siri to shame. Add to that 3D, OCR and translation capabilities and the future looks to be full of opportunities. Ballmer then came out to show us one last thing. Project Spark is a game design environment that will be available for Windows 8.1, Xbox 360 and Xbox One. All I can say is that if my kids get their hands on this they are going to be able to learn some of what dad does in a much more enjoyable way. At the end of it all I was both exhausted and energized by what I saw. What could they have possibly left for the day 2 keynote? I hear it will feature Scott Hanselman. If that is right we are in for a treat. See you there. del.icio.us Tags: BUILD 2013, Windows 8.1, Windows Phone, XAML, Keynote, Bing, Visual Studio 2013, Project Spark

    Read the article

  • What's My Problem? What's Your Problem?

    - by Jacek Ziabicki
    Software installers are not made for building demo environments. I can say this much after 12 years (on and off) of supporting my fellow sales consultants with environments for software demonstrations. When we release software, we include installation programs and procedures that are designed for use by our clients – to build a production environment and a limited number of testing, training and development environments. Different Objectives Your priorities when building an environment for client use vs. building a demo environment are very different. In a production environment, security, stability, and performance concerns are paramount. These environments are built on a specific server and rarely, if ever, moved to a different server or different network address. There is typically just one application running on a particular server (physical or virtual). Once built, the environment will be used for months or years at a time. Because of security considerations, the installation program wants to make these environments very specific to the organization using the software and the use case, encoding a fully qualified name of the server, or even the IP address on the network, in the configuration. So you either go through the installation procedure for each environment, or learn how to clone and reconfigure the software as a separate instance to build all your non-production environments. This may not matter much if the installation is as simple as clicking on the Setup program. But for enterprise applications, you have a number of configuration settings that you need to get just right – so whether you are installing from scratch or reconfiguring an existing installation, this requires both time and expertise in the particular piece of software. If you need a setup of several applications that are integrated to talk to one another, it is a whole new level of complexity. Now you need the expertise in all of the applications involved (plus the supporting technology products), and in addition to making each application work, you also have to configure the integration endpoints. Each application needs the URLs and credentials to call the integration layer, and the integration must be able to call each application. Then you have to make sure that each app has the right data so a business process initiated in one application can continue in the next. And, you will need to check that each application has the correct version and patch level for the integration to work. When building demo environments, your #1 concern is agility. If you can get away with a small number of long-running environments, you are lucky. More likely, you may get a request for a dedicated environment for a demonstration that is two weeks away: how quickly can you make this available so we still have the time to build the client-specific data? We are running a hands-on workshop next month, and we’ll need 15 instances of application X environment so each student can have a separate server for the exercises. We cannot connect to our data center from the client site, the client’s security policy won’t allow our VPN to go through – so we need a portable environment that we can bring with us. Our consultants need to be able to work at the hotel, airport, and the airplane, so we really want an environment that can run on a laptop. The client will need two playpen environments running in the cloud, accessible from their network, for a series of workshops that start two weeks from now. 
    We have seen all of these scenarios and more. Here you would be much better served by a generic installation that would be easy to clone. Welcome to the Wonder Machine The reason I started this blog is to share a particular design of a demo environment, a special way to install software, that can address the above requirements, even for integrated setups. This design was created by a team at Oracle Utilities Global Business Unit, and we are using this setup for most of our demo environments. In a bout of modesty we called it the Wonder Machine. Over the next few posts – think of it as a novel in parts – I will tell you about the big idea, how it was implemented and what you can do with it. After we have laid down the groundwork, I would like to share some tips and tricks for users of our Wonder Machine implementation, as well as things I am learning about building portable, cloneable environments. The Wonder Machine is by no means a closed specification; it is under active development! I am hoping this blog will be of interest to two groups of readers – the users of the Wonder Machine we have built at Oracle Utilities, who want to get the most out of their demo environments and be able to reconfigure them to their needs – and people who need to build environments for demonstration, testing, training, or development and would like to make them cloneable and portable to maximize the reuse of their effort. Surely we are not the only ones facing this problem? If you can think of a better way to solve it, or if you can help us improve on our concept, I will appreciate your comments!

    Read the article

  • A Knight's Tale

    - by Phil Factor
    There are so many lessons to be learned from the story of Knight Capital losing nearly half a billion dollars as a result of a deployment gone wrong. The Knight Capital Group (KCG N) was an American global financial services firm engaging in market making, electronic execution, and institutional sales and trading. According to the recent order (File No. 3.15570) against Knight Capital by the U.S. Securities and Exchange Commission, Knight had for many years used software which broke up incoming "parent" orders into smaller "child" orders that were then transmitted to various exchanges or trading venues for execution. A tracking 'cumulative quantity' function counted the number of 'child' orders and stopped the process once the total of child orders matched the 'parent', meaning the parent order had been completed. Back in the mists of time, some code had been added to it which was executed if a particular flag was set. It was called 'Power Peg' and seems to have had a similar design and purpose, but, one guesses, would have shared the same tracking function. This code had been abandoned in 2003, but never deleted. In 2005, the tracking function was moved to an earlier point in the main process. It would seem from the account that, from that point on, had that flag ever been set, the old 'Power Peg' would have been executed like Godzilla bursting from the ice, making child orders without limit and without any tracking function. It wasn't, presumably because the software that set the flag was removed. In 2012, nearly a decade after 'Power Peg' was abandoned, Knight prepared a new module for their software to cope with the imminent Retail Liquidity Program (RLP) for the New York Stock Exchange. By this time the flag had remained unused, and someone made the fateful decision to reuse it and replace the old 'Power Peg' code with this new RLP code. Had the two actions been done together in a single automated deployment, and the new deployment tested, all would have been well. It wasn't. To quote… "Beginning on July 27, 2012, Knight deployed the new RLP code in SMARS in stages by placing it on a limited number of servers in SMARS on successive days. During the deployment of the new code, however, one of Knight's technicians did not copy the new code to one of the eight SMARS computer servers. Knight did not have a second technician review this deployment and no one at Knight realized that the Power Peg code had not been removed from the eighth server, nor the new RLP code added. Knight had no written procedures that required such a review." (para 15) "On August 1, Knight received orders from broker-dealers whose customers were eligible to participate in the RLP. The seven servers that received the new code processed these orders correctly. However, orders sent with the repurposed flag to the eighth server triggered the defective Power Peg code still present on that server. As a result, this server began sending child orders to certain trading centers for execution. Because the cumulative quantity function had been moved, this server continuously sent child orders, in rapid sequence, for each incoming parent order without regard to the number of share executions Knight had already received from trading centers.
    Although one part of Knight's order handling system recognized that the parent orders had been filled, this information was not communicated to SMARS." (para 16) SMARS routed millions of orders into the market over a 45-minute period, and obtained over 4 million executions in 154 stocks for more than 397 million shares. By the time that Knight stopped sending the orders, Knight had assumed a net long position in 80 stocks of approximately $3.5 billion and a net short position in 74 stocks of approximately $3.15 billion. Knight's shares dropped more than 20% after traders saw extreme volume spikes in a number of stocks, including preferred shares of Wells Fargo (JWF) and semiconductor company Spansion (CODE). Both stocks, which see roughly 100,000 trades per day, had changed hands more than 4 million times by late morning. Ultimately, Knight lost over $460 million from this wild 45 minutes of trading. Obviously, I'm interested in all this because, at one time, I used to write trading systems for the City of London. Obviously, the US SEC is in a far better position than any of us to work out the failings of Knight's IT department, and the report makes for painful reading. I can't help observing, though, that even with the breathtaking mistakes all along the way, a robust automated deployment process that was 'all-or-nothing', and tested from soup to nuts, would have prevented the disaster. The report reads like a Greek tragedy. All the way along one wants to shout 'No! Not that way!' and 'Aargh! Don't do it!'. As the tragedy unfolds, the audience weeps for the players, trapped by a cruel fate. All application development and deployment requires defense in depth. All IT goes wrong occasionally, but if there is a culture of defensive programming throughout, the consequences are usually containable. For financial systems, these defenses are required by statute, and ignored only by the foolish. Knight's mistakes weren't made by just one hapless sysadmin, but were progressive errors by an IT culture spanning at least ten years. One can spell these out, but I think they're obvious. One can only hope that the industry studies what happened in detail, learns from the mistakes, and draws the right conclusions.
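    To make the 'all-or-nothing' point concrete, here is a rough sketch (my own illustration, not Knight's system or anything from the SEC order) of the kind of automated post-deployment check that would have caught a server still running the old code: hash the deployed module on every target server and refuse to proceed unless every hash matches. The server names, paths and module name below are all hypothetical.

        using System;
        using System.IO;
        using System.Linq;
        using System.Security.Cryptography;

        // Hypothetical post-deployment verification: every server must run the identical build
        // before the repurposed flag is allowed to be switched on anywhere.
        static class DeploymentCheck
        {
            static string HashFile(string path)
            {
                using (var sha = SHA256.Create())
                using (var stream = File.OpenRead(path))
                    return BitConverter.ToString(sha.ComputeHash(stream));
            }

            static void Main()
            {
                // Illustrative UNC paths to the deployed routing module on each of the eight servers.
                var servers = Enumerable.Range(1, 8)
                    .Select(i => string.Format(@"\\smars{0:00}\apps\OrderRouter.dll", i))
                    .ToArray();

                var hashes = servers.ToDictionary(path => path, HashFile);

                if (hashes.Values.Distinct().Count() != 1)
                {
                    foreach (var entry in hashes)
                        Console.Error.WriteLine("{0} -> {1}", entry.Key, entry.Value);
                    Console.Error.WriteLine("Deployment is inconsistent: abort, roll back, redeploy.");
                    Environment.Exit(1);   // all-or-nothing: nothing goes live until all servers agree
                }

                Console.WriteLine("All {0} servers run identical binaries; safe to proceed.", servers.Length);
            }
        }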

    Read the article

  • Speakers, Please Check Your Time

    - by AjarnMark
    Woodrow Wilson was once asked how long it would take him to prepare for a 10 minute speech. He replied "Two weeks". He was then asked how long it would take for a 1 hour speech. "One week", he replied. 2 hour speech? "I'm ready right now," he replied.  Whether that is a true story or an urban legend, I don’t really know, but either way, it is a poignant reminder for all speakers, and particularly apropos this week leading up to the PASS Community Summit. (Cross-posted to the PASS Professional Development Virtual Chapter blog #PASSProfDev.) What’s the point of that story?  Simply this…if you have plenty of time to do your presentation, you don’t need to prepare much because it is easy to throw in more and more material to stretch out to your allotted time.  But if you are on a tight time constraint, then it will take significant preparation to distill your talk down to only the essential points. I have attended seven of the last eight North American Summit events, and every one of them has been fantastic.  The speakers are great, the material is timely and relevant, and the networking opportunities are awesome.  And every year, there is one little thing that just bugs me…speakers going over their allotted time.  Why does it bother me so?  Well, if you look at a typical schedule for a Summit, you’ll see that there are six or more sessions going on at the same time, and only 15 minutes to move from one to another.  If you’re trying to maximize your training dollar by attending something during every session time slot, and you don’t want to be the last guy trying to squeeze into the middle of the row, then those 15 minutes can be critical.  All the more so if you need to stop and use the bathroom or if you have to hike to the opposite end of the convention center.  It is really a bad position to find yourself having to choose between learning the last key points of Speaker A who is going over time, and getting over to Speaker B on time so you don’t miss her key opening remarks. And frankly, I think it is just rude.  Yes, the speakers are the function, after all they are bringing the content that the rest of us are paying to learn.  But it is also an honor to be given the opportunity to speak at a conference like this, and no one speaker is so important that the conference would be a disaster without him.  Speakers know when they submit their abstract, long before the conference, how much time they will have.  It has been the same pattern at the Summit for at least the last eight years.  Program Sessions are 75 minutes long.  Some speakers who have a good track record, and meet other qualifying criteria, are extended an invitation to present a Spotlight Session which is 90 minutes (a 20% increase).  So there really is no excuse.  It’s not like you were promised a 2-hour segment and then discovered when you got here that it was only 75 minutes.  In fact, it’s not like PASS advertised 90-minute sessions for everyone and then a select few were cut back to only 75.  As a speaker, you know well before you get here which type of session you are doing and how long it is, so as a professional, you should plan accordingly. Now you might think that this only happens to rookies, but I’ll tell you that some of the worst offenders are big-name veterans who draw huge attendance numbers for their sessions.  
Some attendees blow this off as, “Hey, it’s so-and-so, and I’d stay here for hours and listen to him/her talk.”  To which I would reply, “Then they should have submitted for a pre- or post-conference day-long seminar instead, but don’t try to squeeze your day-long talk into a 90-minute session.”  Now I don’t really believe that these speakers are being malicious or just selfishly trying to extend their time in the spotlight.  I think that most of them are merely being undisciplined and did not trim their presentation sufficiently, or allowed themselves to get off-track (often in a generous attempt to help someone in the audience with a question or problem that really should have been noted for further discussion after the session). So here is my recommendation…my plea, even.  TRIM THE FAT!  Now.  Before it’s too late.  Before you even get on the airplane, take a long, hard look at your presentation and eliminate some of the points that you originally thought you had to make, but in reality are not truly crucial to your main topic.  Delete a few slides.  Test your demos and have them already scripted rather than typing them during your talk.  It is better to cut out too much and end up with plenty of time at the end for Questions & Answers.  And you can always keep some notes on the stuff that you cut out so that you could fill it back in at the end as bonus material if you really do end up with a whole bunch of time on your hands.  But I don’t think you will.  And if you do, that will look even better to the audience as it will look like you’re giving them something extra that not every audience gets.  And they will thank you for that.

    Read the article

  • WMemoryProfiler is Released

    - by Alois Kraus
    What is it? WMemoryProfiler is a managed profiling API to aid integration testing. This free library can get managed heap statistics and memory usage for your own process (remember, testing) and other processes as well. The best thing is that it works from .NET 2.0 up to .NET 4.5, in x86 and x64. To make it more interesting, it can attach to any running .NET process. The reason why I mention this is that commercial profilers support this functionality only in their professional editions, and normally only since .NET 4.0, because only from that version on does the profiling API support attaching to a running process. This thing differs in many aspects from "normal" profilers, because while profiling yourself you can get all objects from all managed heaps back as an object array. If you ever wanted to change the state of an object which exists only as a method local in another thread, you can get your hands on it now … Enough theory. Show me some code /// <summary> /// Show feature to not only get statistics out of a process but also the newly allocated /// instances since the last call to MarkCurrentObjects. /// GetNewObjects does return the newly allocated objects as object array /// </summary> static void InstanceTracking() { using (var dumper = new MemoryDumper()) /* if you have problems use new MemoryDumper(true, true) to see the debugger windows */ { dumper.MarkCurrentObjects(); Allocate(); ILookup<Type, object> newObjects = dumper.GetNewObjects() .ToLookup( x => x.GetType() ); Console.WriteLine("New Strings:"); foreach (var newStr in newObjects[typeof(string)] ) { Console.WriteLine("Str: {0}", newStr); } } } … New Strings: Str: qqd Str: String data: Str: String data: 0 Str: String data: 1 … This is really hot stuff. Not only can you get heap statistics, but you can directly examine the new objects and make queries upon them. When I find more time I will reconstruct the object root graph from it from my own process. Is this cool or what? You can also peek into the finalization queue to check if you did accidentally forget to dispose a whole bunch of objects … /// <summary> /// .NET 4.0 or above only. Get all finalizable objects which are ready for finalization and have no other object roots anymore. /// </summary> static void NotYetFinalizedObjects() { using (var dumper = new MemoryDumper()) { object[] finalizable = dumper.GetObjectsReadyForFinalization(); Console.WriteLine("Currently {0} objects of types {1} are ready for finalization. Consider disposing them before.", finalizable.Length, String.Join(",", finalizable.ToLookup( x=> x.GetType() ) .Select( x=> x.Key.Name)) ); } } How does it work? The W of WMemoryProfiler is a good hint. It employs Windbg and the SOS dll to do the heavy lifting and concentrates on an easy-to-use API which completely hides Windbg. If you do not want to see Windbg you will never see it. In my experience the most complex thing is actually to download Windbg from the Windows 8 Standalone SDK. This is described in the Readme, and in much greater detail in the exception you are greeted with if it is missing, so I will not go into this here. What Next? Depending on the feedback I get I can imagine some features which might be useful as well: Calculate first-order GC roots from the actual object graph. Identify global statics in types in the object graph. Support reading out the finalization queue of .NET 2.0 as well.
    Support memory dump analysis (again a feature only supported by commercial profilers in their professional editions, if it is supported at all). Deserialize objects from a memory dump back into a live process (this would need some more investigation but it is doable). The last item needs some explanation. Why on earth would you want to do that? The basic idea is to store in your live process some logging/tracing data which can become quite big, but since it is never written out it is very fast to generate. When your process crashes with a memory dump you could transfer this data structure back into a live viewer which can then nicely display your program state at the point it crashed. This is an advanced troubleshooting technique I have not seen anywhere yet, but it could be quite useful. You can have a look here at the current feature list of WMemoryProfiler with some examples. How To Get Started? First I would download the released source package (it is tiny) and compile the complete project. Then you can compile the Example project (it has this name) and uncomment in the main method the scenario you want to check out. If you are greeted with an exception it is time to install the Windows 8 Standalone SDK, which is described in great detail in the exception text. That's it for the first round. I saw something more limited in the Java world some years ago (now I cannot find the link anymore), but anyway, now we have something much better.
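    To make the integration-testing angle concrete, here is a minimal sketch of how the library could be used to assert that an operation does not leak instances of a particular type. It relies only on the MemoryDumper, MarkCurrentObjects and GetNewObjects members shown above (so it assumes the WMemoryProfiler namespace is referenced); the DoWork method is a hypothetical stand-in for the code under test.

        using System;
        using System.Linq;

        // Leak-check sketch built on the MemoryDumper API shown above (WMemoryProfiler reference assumed).
        static class LeakCheck
        {
            // Hypothetical operation under test.
            static void DoWork() { /* ... exercise the code that should not leak ... */ }

            static void AssertNoNewInstancesOf<T>()
            {
                using (var dumper = new MemoryDumper())
                {
                    dumper.MarkCurrentObjects();            // remember what is on the heap right now
                    DoWork();                               // run the operation under test
                    GC.Collect();
                    GC.WaitForPendingFinalizers();
                    GC.Collect();                           // collect garbage so only rooted objects remain

                    int leaked = dumper.GetNewObjects()     // objects allocated since MarkCurrentObjects
                                       .Count(o => o is T);
                    if (leaked > 0)
                        throw new InvalidOperationException(
                            string.Format("{0} new {1} instance(s) survived DoWork", leaked, typeof(T).Name));
                }
            }
        }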

    Read the article

  • Lessons learnt in implementing Scrum in a Large Organization that has traditional values

    - by MarkPearl
    I recently had the experience of being involved in a "test" scrum implementation in a large organization that was used to a traditional project management approach. Here are some lessons that I learnt from it. Don't let the Project Manager be the Product Owner The first lesson learnt is to identify the correct product owner – in this instance the product manager assumed the role of the product owner, which was a mistake. The product owner is the one who has the most to lose if the project fails. With a methodology that advocates removing the role of the project manager from the process, it is not in the interests of the person who is employed as a project manager to be the product owner – in fact they have the most to gain should the project fail. Know the time commitments of team members to the project The second lesson learnt is to get a firm time commitment from the members of the team for the sprint and to hold them to it. In this project instance many of the issues we faced were with team members having to double up on supporting existing projects/systems and the scrum project. In many situations they just didn't get round to doing any work on the scrum project for several days while they tried to meet other commitments. Initially this was not made transparent to the team – in stand-up, team members would say they had done some work but would be very vague on how much time they had actually spent, using the black hole of their other legacy projects as an excuse – putting up a time burn-down chart made time allocations transparent and easy to hold the team to. In addition, how can you plan for a sprint without knowing the actual time available of the members? By actual time I mean the result of getting them to go through all their appointments and lunch times and breaks and removing those from their time commitment, which helps get you to a realistic time that they can dedicate. Make sure you meet your minimum team sizes In a recent post I wrote about the difference between a partnership and a team. If you are going to do scrum in a large organization make sure you have a minimum team size of at least 3 developers. My experience with larger organizations is that people have a tendency to be sick more, take more leave and generally not be around – if you have a team size of two it is so easy to lose momentum on the project – the more people you have in the team (up to about 9) the more momentum the project will have when people are not around. Swapping from one methodology to another can seem like waste to the customer It sounds bad, but most customers don't care what methodology you use. Often they have bought into the "big plan upfront". If you can, avoid taking a project on midstream from a traditional approach unless the customer has not bought into the process – with this particular project they had a detailed upfront planning breakaway with the customer using the traditional approach, and then before the project started we moved onto a scrum implementation – this seemed like waste to the customer. We should have managed the customer's expectations properly. Don't play the role of the scrum master if you can't be the scrum master With this particular implementation I was the "scrum master". But all I did was go through the process of the formal meetings of scrum – I attended stand-up, retrospectives and planning – but I was not hands on the ground. I was not performing the most important role of removing blockages – and by the end of the project there were a number of blockages "cropping up".
    What could have been a better approach was to take someone on the team and train them to be the scrum master, and be present to coach them. Alternatively, actually be on the team on a full-time basis and be the scrum master. Just going through the meetings of scrum didn't mean we were doing scrum. So we failed with this one; if you fail, look at it from an agile perspective As this particular project drew to a close and it became more and more apparent that it was not going to succeed, the failure of it became depressing. Emotions were expressed by various people on the team that were not encouraging and reinforced the failure. Embracing the failure and looking at it for what it is, instead of taking it as the end of the world, can change how you grow from the experience. Acknowledging that it failed and then focusing on learning why, and how to avoid the failure in the future, can change how you feel emotionally about the team, the project and the organization.

    Read the article

  • UK OUG Conference Highlights and Insights

    - by Richard Bingham
    As per my preemptive post, this was the first time the annual conference organized by the UK Oracle User Group (UKOUG) was split into two events, one for Oracle Applications and another in December for Oracle Technology. Apps13, as it was branded, was hailed as a success, with over 1000 registered attendees and three days of sessions, exhibition, round-tables and many other types of content. As this poster on their stand illustrates, the UKOUG is a strong community with popular participants from both big and small Oracle partners and customers. The venue was a more intimate setting than in previous years too, allowing everyone to casually bump into those they hoped to. It gave a real feeling of an Apps Community. The main themes over the days were CRM and Customer Experience, HCM, and FIN/SCM. This allowed people to attend just one focused day if they wanted. In addition the Apps Transformation stream ran across all three days, offering insights, advice, and details on the newer product solutions like Fusion Applications. Here are some of the key take-aways I got from the conference, specific to my role in Fusion Applications Developer Relations: User Experience continues to be a significant reason for adopting some of the newer application products available, with immediately obvious gains in user productivity and satisfaction reported by customers. This doesn't stop with the baked-in UX either, with their Design Patterns proving popular and indeed currently being extended to include things like extending to ADF Mobile and customizing the Simplified UI. More on this to come from us soon. The executive sessions emphasized the "it's a journey" phrase, illustrating that modern business applications are powered by technologies such as Cloud, Mobile, Social and Big Data, and these can be harnessed to help propel your organization forward. Indeed the emphasis is away from the traditional vendor-prescribed linear applications road map, and towards plotting a course based on business priorities supported by a broad range of integrated solutions. To help with this, several conference sessions demoed the new "Applications Navigator" tool, developed in partnership with OUG members, which offers a visual framework to help organizations plan their Oracle Applications investments around business and technology imperatives. Initial reaction was positive, especially as customers do not need to decipher Oracle's huge product catalog, and it embeds the best blend of proven and integrated applications solutions. We'll share more on this when it is generally available. Several sessions focused around explanations and interpretation of Oracle OpenWorld 2013, helping highlight the key Oracle Applications messages and directions. With a relatively small percentage of conference attendees also at OpenWorld (from a show of hands) this was a popular way to distill the information available down into specific items of interest for the community. Please note the original OpenWorld 2013 content is still available for download but will not remain available forever (via the Oracle website OpenWorld Content Catalog > pick a session > see the PDF download). With the release of E-Business Suite 12.2 the move to develop and deploy on the Fusion Middleware stack becomes a reality for many Oracle Applications customers.
This coupled with recent E-Business Suite features such as the Integrated SOA Gateway and the E-Business Suite SDK for Java, illustrates how the gap between the technologies and techniques involved in extending E-Business Suite and Fusion Applications is quickly narrowing. We'll see this merging continue to evolve going forwards. Getting started with Oracle Cloud Applications is actually easier than many customers expected, with a broad selection of both large and medium sized organizations explaining how they added new features to their existing Oracle Applications portfolios. New functionality available from Fusion HCM and CX are popular extensions that do not have to disrupt those core business services. Coexistence is the buzzword here, and the available integration is also simpler than many expected, commonly involving an initial setup data load, then regularly incremental synchronizations, often without a need for real-time constant communication between systems. With much of this pre-built already the implementation process is also quite rapid. With most people dressed in suits, we wanted to get the conversations going without the traditional english reserve, so we decided to make ourselves a bit more obvious, as the photo below shows. This seemed to be quite successful and helped those interested identify and approach us. Keep a look out for similar again. In fact if you're in the UK there is an "Apps Transformation Day" planned by the UKOUG for the 19th March 2014, with more details to follow. Again something we'll be sure to participate in. I am hoping to attend the next half of the UKOUG annual conference, Tech13, that focuses more on Oracle technology and where there is more likely to be larger attendance of those interested in the lower-level aspects of applications customization and development. If you're going, let me know and maybe we can meet up.

    Read the article

  • Lesi, from Graduate Trainee to Territory Manager

    - by Maria Sandu
    It's the final year; University is now coming to an end. A new chapter now awaits my arrival. This part of my life is called "Looking for a Job". With no form of experience whatsoever, getting a job at a well-renowned IT company is something that every IT student dreams about. CV: ✓, Application form: ✓, Interviews: ✓. Acceptance call: "Lesi, I'm pleased to inform you that you have been accepted to be part of the Oracle Graduate Program for 2012". Life would never again be the same. Being Part of the Graduate Program Going into the Graduate Program, I felt like a baby seeing candy for the first time. The Program gave me the platform to not only break into the workplace but also to help launch my career. Over the next 3 months, I went through various trainings / workshops / events / coaching / mentorship sessions. Like a construction worker building a solid foundation for a beautifully designed architecture, a clear path to build my career was set. With training out of the way, it was now time to start working closely with my team. For the rest of the year, it was all about selling. Sales, pipeline, forecasting and numbers soon became the common words in my career. As the saying goes, "once a salesman, always a salesman". There was no turning back now; a career in sales was the new hustle in my life. I worked closely with my mentor & coach (Ibrahim), who was heading up Zambia and Malawi. This was to be one of my best moments in the program as I started engaging with customers and getting some hands-on experience in the field. By the end of the program all the experience, hard work, training and resources came in handy as I was now ready and fully groomed to be a sales rep. Life after the Graduate Program I'm proud to say that I'm now a Territory Manager, heading up Malawi, selling Technology, Middleware & Applications across all industries. I'm part of the Transition Cluster Team, a powerful team headed by a seasoned Senior Director. As a Territory Manager my role is to push for coverage, to penetrate the market by selling Oracle end-to-end to all accounts in Malawi. I now spend my days living out of a suitcase, moving from hotel to hotel, chasing after business in all areas of Malawi. It's the life of a salesman and I'm enjoying every minute of it. I'm truly fortunate and grateful to have been part of such a wonderful graduate program. I owe my sales career to the graduate program, and I truly hope that the program will continue to develop and to groom new talent amongst the youth of this world. If you're interested in joining the Graduate Program in South Africa, keep an eye on our CampusatOracle Facebook Page to get the latest updates!

    Read the article

  • Windows Phone 8 Launch Event Summary

    - by Tim Murphy
    Today was the official coming-out party for Windows Phone 8. Below is a summary of the launch event. There is a lot here, so stay with me. They started with a commercial starring Joe Belfiore showing how his Windows Phone 8 was personal to him, which highlights something I think Microsoft has done well over the last couple of events: spotlighting how Windows Phone is a different experience from other smartphones. Joe actually called iPhone and Android "tired old metaphors" and explained that the idea around Windows Phone was to "reinvent the smartphone around you" as "the most personal smartphone operating system". This is the message that they need to drive home in their ads. The only real technical aspect we found out was that they have optimized the operating system around the dual-core Qualcomm Snapdragon chipset. It seems like all of the other hardware goodies had already been announced. The remainder of the event was centered around new features of the OS and app announcements. So what are we getting? The integrated features included lock screen live tiles, Data Sense, Rooms and Kids Corner. There wasn't a lot of information about it, but Joe also talked about apps not just having live tiles, but being live apps that can integrate with the Wallet and the hub. The lock screen will now be able to be personalized with live tile data or even a photo slide show. This gives the lock screen an even better ability to give you the information you want to know before you even unlock the phone. The Kids Corner allows you as a parent to set up an area on your phone that your kids can go into and use without disturbing your apps. They can play games or use apps that you have designated and will only see those apps. It even has a special lock screen gesture just for the Kids Corner. Rooms allow you to organize your phone around the groups of people in your life. You get a shared calendar, a room wall as well as shared notes, beyond just being able to send messages to a group. You can also invite people not on the Windows Phone platform to access an online version of the room. Data Sense is a new feature that gives you better control and understanding of your data plan usage. You can see which applications are using data, and it can automatically adjust the way your phone behaves as you get close to your data limit. Add to these features the fact that the entire Windows ecosystem is integrated with SkyDrive and you have an available-anywhere experience that is unequaled by any other platform. Your documents, photos and music are available on your Windows Phone, Windows 8 device and Xbox. SkyDrive also doesn't limit how long you can keep files like the competing cloud platforms do, and it gives more free storage. It was interesting the way they made the launch event more personal. First Joe brought out his own kids to demo the Kids Corner. They followed this up by bringing out Jessica Alba to discuss her experience on the Windows Phone 8. They need to keep putting a face on the product instead of just showing features as a cold list. Then we get to apps. We knew that the new Skype was coming, but we found out that it was created in such a way that it can receive calls without running constantly in the background, which would eat up battery. This announcement was followed by the coming Facebook app that is optimized for Windows Phone 8. As a matter of fact they indicated that just after launch the marketplace would have 46 of the top 50 apps used across all smartphone platforms.
    In a rational world this, tied with the over 120,000 apps currently in the marketplace, should end any argument about the Windows Phone ecosystem. Those of us who develop for Windows Phone and weren't on the early adoption program will finally get access to the SDK tomorrow after an announcement at Build (more waiting). Perhaps we will get a few new features then. In the end I wouldn't say there were any huge surprises, but I am really excited about getting my hands on the devices next month and starting to develop. Stay tuned. del.icio.us Tags: Windows Phone, Windows Phone 8, Windows Phone 8 Launch, Joe Belfiore, Jessica Alba

    Read the article

  • Revisiting the Generations

    - by Row Henson
    I was asked earlier this year to contribute an article to the IHRIM publication – Workforce Solutions Review.  My topic focused on the reality of the Gen Y population 10 years after their entry into the workforce.  Below is an excerpt from that article: It seems like yesterday that we were all talking about the entry of the Gen Y'ers into the workforce and what a radical change that would have on how we attract, retain, motivate, reward, and engage this new, younger segment of the workforce.  We all heard and read that these youngsters would be more entrepreneurial than their predecessors – the Gen X'ers – who were said to be more loyal to their profession than their employer. And, we heard that these “youngsters” would certainly be far less loyal to their employers than the Baby Boomers or even earlier Traditionalists. It was also predicted that – at least for the developed parts of the world – they would be more interested in work/life balance than financial reward; they would need constant and immediate reinforcement and recognition and we would be lucky to have them in our employment for two to three years. And, to keep them longer than that we would need to promote them often so they would be continuously learning since their long-term (10-year) goal would be to own their own business or be an independent consultant.  Well, it occurred to me recently that the first of the Gen Y'ers are now in their early 30s and it is time to look back on some of these predictions. Many really believed the Gen Y'ers would enter the workforce with an attitude – expect everything to be easy for them – have their employers meet their demands or move to the next employer, and I believe that we can now say that, generally, has not been the case. Speaking from personal experience, I have mentored a number of Gen Y'ers and initially felt that with a 40-year career in Human Resources and Human Resources Technology – I could share a lot with them. I found out very quickly that I was learning at least as much from them! Some of the amazing attributes I found from these under-30s was their fearlessness, ease of which they were able to multi-task, amazing energy and great technical savvy. They were very comfortable with collaborating with colleagues from both inside the company and peers outside their organization to problem-solve quickly. Most were eager to learn and willing to work hard.  This brings me to the generation that will follow the Gen Y'ers – the Generation Z'ers – those born after 1998. We have come full circle. If we look at the Silent Generation or Traditionalists, we find a workforce that preceded the television and even very early telephones. We Baby Boomers (as I fall right squarely in this category) remembered the invention of the television and telephone – but laptop computers and personal digital assistants (PDAs) were a thing of “StarTrek” and other science fiction movies and publications. Certainly, the Gen X'ers and Gen Y'ers grew up with the comfort of these devices just as we did with calculators. But, what of those under the age of 10 – how will the workplace look in 15 more years and what type of workforce will be required to operate in the mobile, global, virtual world. I spoke to a friend recently who had her four-year-old granddaughter for a visit. She said she found her in the den in front of the TV trying to use her hand to get the screen to move! So, you see – we have come full circle. 
    The under-70 Traditionalist grew up in a world without TV, and the Generation Z'er may never remember the TV we knew just a few years ago. As with every generation, we spend much time generalizing about their characteristics. The most important thing to remember is that every generation – just like every individual – is different. The important thing for those of us in Human Resources to remember is that one size doesn't fit all. What motivates one employee to come to work for you, stay there and be productive is very different from what the next employee is looking for, and the organization that can provide this fluidity and flexibility will be the survivor for generations to come. And, finally, just when we think we have it figured out, a multitude of external factors such as the economy, world politics, industries, and technologies we haven't even thought about will come along and change those predictions. As I reach retirement age, I do so believing that our organizations are in good hands with the generations to follow – energetic, collaborative and capable of working hard while still understanding the need for balance at work, at home and in the community!

    Read the article

  • Using Optical Flow in EmguCV

    - by Meko
    Hi. I am trying to create a simple touch game using EmguCV. Should I use optical flow to detect interaction between the images on screen and my hand? That is, if more than 100 points change in the region where an image is, does that mean my hand is over the image? But how can I track these new points? I can draw the previous points and the new points on screen, but it shows more points on my head than on my hand and I cannot track my hand's movements. void Optical_Flow_Worker(object sender, EventArgs e) { { Input_Capture.SetCaptureProperty(Emgu.CV.CvEnum.CAP_PROP.CV_CAP_PROP_POS_FRAMES, ActualFrameNumber); ActualFrame = Input_Capture.QueryFrame(); ActualGrayFrame = ActualFrame.Convert<Gray, Byte>(); NextFrame = Input_Capture.QueryFrame(); NextGrayFrame = NextFrame.Convert<Gray, Byte>(); ActualFeature = ActualGrayFrame.GoodFeaturesToTrack(500, 0.01d, 0.01, 5); ActualGrayFrame.FindCornerSubPix(ActualFeature, new System.Drawing.Size(10, 10), new System.Drawing.Size(-1, -1), new MCvTermCriteria(20, 0.3d)); OpticalFlow.PyrLK(ActualGrayFrame, NextGrayFrame, ActualFeature[0], new System.Drawing.Size(10, 10), 3, new MCvTermCriteria(20, 0.03d), out NextFeature, out Status, out TrackError); OpticalFlowFrame = new Image<Bgr, Byte>(ActualFrame.Width, ActualFrame.Height); OpticalFlowFrame = NextFrame.Copy(); for (int i = 0; i < ActualFeature[0].Length; i++) DrawFlowVectors(i); ActualFrameNumber++; pictureBox1.Image = ActualFrame.Resize(320, 400).ToBitmap(); pictureBox3.Image = OpticalFlowFrame.Resize(320, 400).ToBitmap(); } } private void DrawFlowVectors(int i) { System.Drawing.Point p = new Point(); System.Drawing.Point q = new Point(); p.X = (int)ActualFeature[0][i].X; p.Y = (int)ActualFeature[0][i].Y; q.X = (int)NextFeature[i].X; q.Y = (int)NextFeature[i].Y; double angle = Math.Atan2(p.Y - q.Y, p.X - q.X); /* angle of the flow vector; this declaration appears to have been dropped from the original post */ p.X = (int)(q.X + 6 * Math.Cos(angle + Math.PI / 4)); p.Y = (int)(q.Y + 6 * Math.Sin(angle + Math.PI / 4)); p.X = (int)(q.X + 6 * Math.Cos(angle - Math.PI / 4)); p.Y = (int)(q.Y + 6 * Math.Sin(angle - Math.PI / 4)); OpticalFlowFrame.Draw(new Rectangle(q.X, q.Y, 1, 1), new Bgr(Color.Red), 1); OpticalFlowFrame.Draw(new Rectangle(p.X, p.Y, 1, 1), new Bgr(Color.Blue), 1); }
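    On the tracking question itself, a common pattern with PyrLK (sketched below using the same EmguCV calls as above; the class name, variable names and the re-seed threshold are just illustrative) is to keep only the points whose Status entry is 1 and feed them back in as the input features for the next frame, re-running GoodFeaturesToTrack only when too few points survive:

        using System.Drawing;
        using System.Linq;
        using Emgu.CV;
        using Emgu.CV.Structure;

        // Illustrative sketch: carry successfully tracked points forward from frame to frame.
        class FlowTracker
        {
            private PointF[] trackedPoints;

            public void Track(Image<Gray, byte> prevGray, Image<Gray, byte> nextGray)
            {
                if (trackedPoints == null || trackedPoints.Length < 50)   // arbitrary re-seed threshold
                    trackedPoints = prevGray.GoodFeaturesToTrack(500, 0.01d, 0.01, 5)[0];

                PointF[] nextPoints;
                byte[] status;
                float[] trackError;
                OpticalFlow.PyrLK(prevGray, nextGray, trackedPoints,
                                  new Size(10, 10), 3, new MCvTermCriteria(20, 0.03d),
                                  out nextPoints, out status, out trackError);

                // Keep only the points PyrLK actually found in the next frame (status == 1)
                // and use them as the starting points for the following call.
                trackedPoints = nextPoints.Where((pt, i) => status[i] == 1).ToArray();
            }
        }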

    Read the article

  • Qt vs .NET - plz no n00bs who don't know wtf they're talking about [closed]

    - by Pirate for Profit
    Man, in all these Qt vs. .NET discussions 90% of these people don't know WTF they're talking about. Trying to get a real comparison chart going before we embark on a major fucking project. And yes I'm drunk, and yes I use cocaine. Event Handling In Qt's event handling system you just emit signals when something cool happens and then catch them in slots, for instance emit valueChanged(int percent, bool something); and void MyCatcherObj::valueChanged(int p, bool ok){}, blocking them and disconnecting them when needed, doing it across threads... once you get the hang of it, it just seems a lot more natural and intuitive than the way the .NET event handling is set up (you know, object sender, CustomEventArgs e). And I'm not just talking about syntax, because in the end the .NET delegate crap is the bomb. I'm also talking about more than just reflection (because, yes, .NET obviously has much stronger reflection capabilities). I'm talking about the way the system feels to a human being. Qt wins hands down IMO. Basically, the footprints make more sense and you can visualize the project easier without the clunky event handling system. I wish I could explain it better. The only thing is, I do love some of the ease of C# compared to C++, and .NET's assembly architecture. That is a big bonus for modular projects, which are a PITA to do in C++. Database Ease of Doing Crap Also, what about datasets and database manipulations? I think .NET wins here but I'm not sure. Threading/Concurrency How do you guys think of the threading? In .NET, all I've ever done is make a list of master worker threads with locks. I like the QtConcurrent framework: you don't worry about locks or anything, and with the ease of the signal/slot system across threads it's nice to get notified about the progress of things. Memory Usage Also, what do you think of the overall memory usage comparison? Is the .NET garbage collector pretty on the ball and quick compared to the instantaneous nature of native memory management? Or does it just let programs leak up a storm and lag the computer, then clean it up when it's about to really lag? However, I am a n00b who doesn't know what I'm talking about, please school me on the subject.
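    For anyone weighing the two styles side by side, here is a minimal sketch of the .NET counterpart to that valueChanged example, using the standard (object sender, EventArgs e) delegate pattern the post refers to; all class and member names below are invented for illustration.

        using System;

        // Invented names; roughly the .NET equivalent of emitting valueChanged(int, bool) in Qt.
        class ValueChangedEventArgs : EventArgs
        {
            public int Percent { get; private set; }
            public bool Ok { get; private set; }
            public ValueChangedEventArgs(int percent, bool ok) { Percent = percent; Ok = ok; }
        }

        class Worker
        {
            public event EventHandler<ValueChangedEventArgs> ValueChanged;   // the "signal"

            public void DoStep(int percent)
            {
                var handler = ValueChanged;                                  // the "emit"
                if (handler != null) handler(this, new ValueChangedEventArgs(percent, true));
            }
        }

        class Program
        {
            static void Main()
            {
                var worker = new Worker();
                // Subscribing a handler plays the role of connecting a slot.
                worker.ValueChanged += (sender, e) =>
                    Console.WriteLine("value changed: {0}% (ok={1})", e.Percent, e.Ok);
                worker.DoStep(42);
            }
        }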

    Read the article

  • Entrepreneur Needs Programmers, Architects, or Engineers?

    - by brand-newbie
    Hi guys (ladies included). I posted on a related site, but this is the place to be. I want to build a specialized website. I am an entrepreneur and am now refining valuations for venture capitalists, i.e., determining how much cash I will need. I need help understanding what human resources I need (software programmers, architects, engineers, etc.). Trust me, I have read most, if not all, of the threads here on the subject, and I am no closer to the answer than ever. Here's my technology problem: the website will include two main components, a search engine (web crawler) and a very large database. The search engine will not be a competitor to Google, obviously; however, it will require bots to scour the web. The website will be, basically, a statistical database, where users should be able to pull up any statistic from numerous fields. Like any entrepreneur with a web-based vision, I'm hoping to get 100+ million registered users eventually; practically, though, we will start as small as feasible. As regards the technology (database architecture, servers, etc.), I want quality, quality, quality. My priorities are speed and the capability to scale, so that if I did get globally large, we could do it without having to re-engineer anything. In other words, I want the back end and the infrastructure to be scalable and professional, with emphasis on quality. I am not an IT professional. Although I've built several Joomla-based websites, I'm just a rookie who has only used minor JavaScript coding to modify a few plug-ins and components. The business I'm trying to create requires specialization and experts. I want to define the problem, let a capable team create the final product, and stay totally hands off. So who do you suggest I hire to run this thing? A software engineer? I was thinking I would need a database engineer, a systems security engineer, and maybe 2 or 3 programmers for the search engine, plus a web designer and maybe a part-time graphic designer, everyone working under a single software engineer. What do you think? Who should I hire? I really need help from people in the industry (you) on this. Is this project doable in 6 months? If so, how many people will I need, and who exactly should head it up: a senior software engineer, an embedded engineer, a C/C++ engineer, a Java engineer, a database engineer? And do I build this thing in Ruby or Java?

    Read the article

  • SSH login with expect(1). How to exit expect and remain in SSH?

    - by Koroviev
    So I wanted to automate my SSH logins. The host I'm with doesn't allow key authentication on this server, so I had to be more inventive. I don't know much about shell scripting, but some research turned up the expect(1) command and some scripts that use it for exactly this purpose. I set up a script and ran it, and it logs in perfectly.

        #!/usr/bin/env expect -f
        set password "my_password"
        match_max 1000
        spawn ssh -p 2222 "my_username"@11.22.11.22
        expect "*?assword:*"
        send -- "$password\r"
        send -- "\r"
        expect eof

    Initially, it runs as it should:

        Last login: Wed May 12 21:07:52 on ttys002
        esther:~ user$ expect expect-test.exp
        spawn ssh -p 2222 [email protected]
        [email protected]'s password:
        Last login: Wed May 12 15:44:43 2010 from 20.10.20.10
        -jailshell-3.2$

    But that's where the success ends. Commands do not work; hitting Enter just makes a new line, and arrow keys and other non-alphanumeric keys produce symbols like '^[[C', '^[[A', '^[OQ' etc.[1] No other prompt appears except the two initially created by the expect script. Any ignored commands are then executed by my local shell once expect times out. An example:

        -jailshell-3.2$ whoami
        ls
        pwd
        hostname
        (...time passes, expect times out...)
        esther:~ user$ whoami
        user
        esther:~ ciaran$ ls
        Books  Documents  Movies  Public  Code  Downloads  Music  Sites  Desktop  Library  Pictures  expect-test.exp
        esther:~ ciaran$ pwd
        /Users/ciaran
        esther:~ ciaran$ hostname
        esther.local

    As I said, I have no shell-scripting experience, but I think the cause is that I'm still "inside" expect but not "inside" SSH. Is there any way to terminate expect once I've logged in and have it hand the SSH session over to me? I've tried commands like 'close' and 'exit' after send -- "\r". They do end expect, but they vindictively take the SSH session down with them, leaving me back where I started. What I really need is for expect to do its job and then get out of the way, leaving the SSH session in my hands as if I had logged in manually. All help is appreciated, thanks.

    [1] I know there's a name for these symbols, but I don't know what it is, and it's one of those frightening things that can't be googled because the punctuation characters are ignored. As a side question, what is the name?

    Read the article

  • Yii - Custom GridView with Multiple Tables

    - by savinger
    So, I've extended GridView to include an Advanced Search feature tailored to the needs of my organization. Filter lets you show/hide columns in the table, and you can also reorder columns by dragging the little drag icon to the left of each item. Sort allows the selection of multiple columns, ascending or descending. Search lets you select a column and enter search parameters, with operators tailored to the data type of the selected column. Version 1 works, albeit slowly: basically, I had my hands in the inner workings of CGridView, where I grab the results from the DataProvider and do the searching and sorting in PHP before rendering the table contents. Now I'm writing Version 2, where I aim to focus on clever CDbCriteria creation, letting MySQL do the heavy lifting so it runs quicker. The implementation is trivial when dealing with a single database table; the difficulty arises when I'm dealing with 2 or more tables. For example, if the user intends to search on a field that is a STAT relation, I need that relation to be present in my query. Here's the question: how do I ensure that Yii includes all the relations in my query so that I can add comparisons against them? I've included all my relations with my criteria in the model's search function and I've tried CDbCriteria's together:

        public function search()
        {
            $criteria = new CDbCriteria;
            $criteria->compare('id', $this->id);
            $criteria->compare( ...
            ...
            $criteria->with = array('relation1', 'relation2', 'relation3');
            $criteria->together = true;
            return new CActiveDataProvider(get_class($this), array(
                'criteria' => $criteria,
                'pagination' => array('pageSize' => 50),
            ));
        }

    But I still get errors like this:

        CDbCommand failed to execute the SQL statement: SQLSTATE[42S22]: Column not found:
        1054 Unknown column 't.relation3' in 'where clause'. The SQL statement executed was:
        SELECT COUNT(DISTINCT `t`.`id`) FROM `table` `t`
        LEFT OUTER JOIN `relation_table` `relation0` ON (`t`.`id`=`relation0`.`id`)
        LEFT OUTER JOIN `relation_table` `relation1` ON (`t`.`id`=`relation1`.`id`)
        WHERE (`t`.`relation3` < 1234567890)

    Here relation0 and relation1 are BELONGS_TO relations, but any STAT relations are missing. Furthermore, why is the query a SELECT COUNT(DISTINCT `t`.`id`)?

    Read the article

  • Is there a way to handle the dynamic change of a dropdown for a single row in a grid-based datawindow?

    - by TomatoSandwich
    Is there a way to handle the dynamic change of a dropdown for a single row in a grid-based datawindow? Example:

        NAME      LIKABILITY            PURCHASED IN   COLOUR
        (Text)    (DropDown*)           (Text)         (Text)
        Bananas   [Good]                Hands          Yellow
                  [Bad]
                  [Bananas are good]
        Apples    [Good]                Bags           Red
                  [Bad]

    In the grid-based datawindow above, the fields NAME, PURCHASED IN and COLOUR are text fields, whereas the LIKABILITY field is a dropdown*. I say dropdown* because the same visual representation can be created with a DropDownList (hardcoded within the datawindow element at design time) or with a DropDownDW (a DDDW, a select statement that can be based on other elements in the datawindow). However, there is no way I can get 'Bananas' to have its 3 dropdown options while 'Apples' has only 2. If I enter multiple rows of Bananas, then all rows have 3 options, but as soon as I add an Apples row, all dropdowns revert to 2 selections. To attempt to achieve this functionality, I have tried the following options:

        1) dw_1.Object.likability.values("Good~tG/Bad~tB/Bananas are good~tDRWHO") on ue_itemchange when editing NAME.
           FAILS: edits all instances of LIKABILITY instead of the current row.
        2) Duplicate dropdowns, one filtered and one unfiltered selection list per row, visible based on the NAME selection.
           FAILS: can't set visibility/overlapping columns on a grid-based datawindow. (Source)
        3) Hard-code the display value as the database value, or vice versa: have 'GOOD', 'BAD', 'BANANASAREGOOD' as the display and database values, and change the handling of options from G, B, DRWHO to these new values.
           FAILS: the 3rd option appears for all rows and is still selectable on Apples rows, which is wrong.
        4) DDDW retrieves the list of options for the dropdown: create a DDDW that uses the value of NAME to determine which selections it should offer.
           FAILS: edits all instances of the dropdown, not just the current row.
        5) DDDW retrieves a counter of available options (if B then 3 else 2), then duplicate dropdown columns protect/unprotect based on the DDDW counter.
           FAILS: can't autoselect the DDDW value to populate the column so as to trigger protect on the other two columns; an ugly solution in any case.

    There is now a bounty on this question for anyone who can give me a solution that will enable me to edit a dropdown column for a single row on a grid-based datawindow in PB 10.5.

    Read the article

  • QT vs. Net - REAL comparisons for R.A.D. projects

    - by Pirate for Profit
    In most Qt vs. .NET discussions people argue about the least important details. I'm trying to get a real comparison chart going here, because I know a little about both frameworks but I don't know everything. I believe Qt and .NET both have strengths and weaknesses. This is meant to be a comparison that highlights them, so people can make more informed decisions before embarking on a project, in the spirit of RAD.
    Event Handling
    In Qt the event handling system is very simple: you just emit signals when something interesting happens and then catch them in slots, e.g. // run some calculations, then emit valueChanged(30, false, 20.2); and any object can declare a slot to receive that message easily: void MyObj::valueChanged(int percent, bool ok, float timeRemaining). It's easy to block an event or disconnect when needed, and it works seamlessly across threads. Once you get the hang of it, it just seems a lot more natural and intuitive than the way .NET event handling is set up (you know, void valueChanged(object sender, CustomEventArgs e)). And I'm not just talking about syntax, because in the end .NET anonymous delegates are excellent. I'm also not just talking about reflection (yes, .NET obviously has much stronger reflection capabilities). I'm talking about the way the system feels to a human being. Qt wins hands down for the simplest yet still flexible event handling system, in my opinion.
    Plugins and such
    I do love some of the ease of C# compared to C++, as well as .NET's assembly architecture, even though it leads to a bunch of DLLs (there are ways to combine everything into a single exe, though). That is a big bonus for modular projects, which are a pain to pull together in C++ as far as RAD is concerned.
    Database Access
    Also, what about datasets and database manipulation? I think .NET wins here, but I'm not sure.
    Threading/Concurrency
    What do you think of the threading? In .NET, all I've ever done is make a list of master/worker threads with locks. I like QtConcurrent: you don't worry about locks or anything, and with the ease of the signal/slot system across threads it's nice to get notified about the progress of things. QtConcurrent is the simplest threading mechanism I've ever played with.
    Memory Usage
    Also, what do you think of the overall memory usage comparison? Is the .NET garbage collector quick and on the ball compared to the immediacy of native memory management, or does it let programs leak up a storm and lag the computer before cleaning up? And doesn't the just-in-time compiler produce native code that is pretty good, with that cost paid only when the program starts? I'm still a novice who doesn't know much about this, so please school me on the subject.
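
    On the concurrency point, the framework the post calls "QConcurrentFramework" is QtConcurrent. Below is a minimal sketch of how it is typically used with a QFutureWatcher for progress notification; the function name heavyCalculation is illustrative, and the snippet assumes a Qt 5 build with the concurrent module enabled (QT += concurrent).

        #include <QCoreApplication>
        #include <QDebug>
        #include <QFutureWatcher>
        #include <QtConcurrent>

        // Placeholder for the expensive work; QtConcurrent runs it on the global thread pool.
        static int heavyCalculation() {
            int sum = 0;
            for (int i = 0; i < 1000000; ++i)
                sum += i % 7;
            return sum;
        }

        int main(int argc, char *argv[]) {
            QCoreApplication app(argc, argv);

            QFutureWatcher<int> watcher;
            // No explicit locks: the finished signal is delivered back on the main thread.
            QObject::connect(&watcher, &QFutureWatcher<int>::finished, [&]() {
                qDebug() << "result:" << watcher.result();
                app.quit();
            });

            watcher.setFuture(QtConcurrent::run(heavyCalculation));
            return app.exec();
        }

    The rough .NET analogue would be Task.Run plus a continuation; both sides avoid manual lock bookkeeping for this kind of fire-and-forget work.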

    Read the article

  • Favorite Visual Studio keyboard remappings?

    - by hoytster
    Stack Overflow has covered favorite shortcuts and add-ins, optimizations and preferences, great topics all. If this one has been covered, I can't find it, so thanks in advance for the link. What are your favorite Visual Studio keyboard remappings? Mine are motivated by the fact that I'm a touch-typist. Mouse, function keys, arrow keys, Home, End: bleh. These are commands I use all day every day, so I've remapped them to sequences I can execute without moving my hands from the home row. The command that is remapped in Tools > Customize > [Keyboard] is shown in parentheses. I'm 100% positive that there are better remappings than these, so please post yours! Please include the command; oftentimes, figuring it out is a challenge. -- Hoytster

    Running the app and operating the debugger
        Ctrl+Q + Ctrl+R   Run the application in debug mode (Debug.Start)
        Ctrl+Q + Ctrl+Q   Quit (stop) the application (Debug.StopDebugging)
        Ctrl+T            Toggle a breakpoint at the current line (Debug.ToggleBreakpoint)
        Ctrl+K + Ctrl+I   Step Into the method (Debug.StepInto)
        Ctrl+K + Ctrl+O   Step Out of the method (Debug.StepOut)
        Ctrl+N            Step over the method to the Next statement (Debug.StepOver)
        Ctrl+K + Ctrl+C   Run the code, stopping at the Cursor position (Debug.RunToCursor)
        Ctrl+K + Ctrl+E   Set the next statement to Execute (Debug.SetNextStatement)

    Navigating the code
        Ctrl+S            Move a character LEFT (Edit.CharLeft)
        Ctrl+D            Move a character RIGHT (Edit.CharRight)
        Ctrl+Q + Ctrl+S   Move to the LEFT END of the current line (Edit.LineStart)
        Ctrl+Q + Ctrl+D   Move to the RIGHT END of the current line (Edit.LineEnd)
        Ctrl+E            Move a line UP (Edit.LineUp)
        Ctrl+X            Move a line DOWN (Edit.LineDown)
        Ctrl+K + Ctrl+K   Toggle (add or remove) a bookmark (Edit.ToggleBookmark)
        Ctrl+K + Ctrl+N   Move to the NEXT bookmark (Edit.NextBookmark)
        Ctrl+K + Ctrl+P   Move to the PREVIOUS bookmark (Edit.PreviousBookmark)
        Ctrl+Q + Ctrl+W   Save all modified windows (File.SaveAll)
        Ctrl+L            Find the NEXT instance of the search string (Edit.FindNext)
        Ctrl+K + Ctrl+L   Find the PREVIOUS instance of the search string (Edit.FindPrevious)
        Ctrl+Q + Ctrl+L   Drop down the list of open files (Window.ShowEzMDIFileList)

    The last sequence is like clicking the downward-facing triangle in the upper-right corner of the code editor window. VS will display a list of all the open windows. You can select from the list by typing the file name; the matching file will be selected as you type. Pause for a second and resume typing, and the matching process starts over, so you can select a different file. Nice, VS team. The Enter key takes you to the tab for the selected file.

    Read the article

  • Performance of looping over an Unboxed array in Haskell

    - by Joey Adams
    First of all, Haskell is great. However, I came across a situation where my benchmarks turned up weird results. I am new to Haskell, and this is the first time I've gotten my hands dirty with mutable arrays and monads. The code below is based on this example. I wrote a generic monadic for function that takes numbers and a step function rather than a range (like forM_ does). I compared using my generic for function (Loop A) against embedding an equivalent recursive function (Loop B). Having Loop A is noticeably faster than having Loop B. Weirder, having both Loop A and Loop B together is faster than having Loop B by itself (but slightly slower than Loop A by itself). Some possible explanations I can think of for the discrepancies (note that these are just guesses): something I haven't learned yet about how Haskell extracts results from monadic functions; Loop B faulting the array in a less cache-efficient manner than Loop A (but why?); or a dumb mistake on my part, so that Loop A and Loop B are actually different. Note that in all 3 cases (either or both of Loop A and Loop B), the program produces the same output. Here is the code; I tested it with ghc -O2 for.hs using GHC version 6.10.4.

        import Control.Monad
        import Control.Monad.ST
        import Data.Array.IArray
        import Data.Array.MArray
        import Data.Array.ST
        import Data.Array.Unboxed

        for :: (Num a, Ord a, Monad m) => a -> a -> (a -> a) -> (a -> m b) -> m ()
        for start end step f = loop start
          where loop i
                  | i <= end  = do f i
                                   loop (step i)
                  | otherwise = return ()

        primesToNA :: Int -> UArray Int Bool
        primesToNA n = runSTUArray $ do
            a <- newArray (2,n) True :: ST s (STUArray s Int Bool)
            let sr = floor . (sqrt::Double->Double) . fromIntegral $ n+1

            -- Loop A
            for 4 n (+ 2) $ \j -> writeArray a j False

            -- Loop B
            let f i | i <= n = do writeArray a i False
                                  f (i+2)
                    | otherwise = return ()
             in f 4

            forM_ [3,5..sr] $ \i -> do
                si <- readArray a i
                when si $
                    forM_ [i*i,i*i+i+i..n] $ \j -> writeArray a j False
            return a

        primesTo :: Int -> [Int]
        primesTo n = [i | (i,p) <- assocs . primesToNA $ n, p]

        main = print $ primesTo 30000000

    Read the article

  • What is your most productive shortcut with Vim?

    - by Olivier Pons
    I've heard a lot about Vim, both pros and cons. It really seems you should be (as a developer) faster with Vim than with any other editor. I'm using Vim to do some basic stuff, and I'm at best 10 times less productive with it. The only two things you should care about when you talk about speed (you may not care enough about them, but you should) are: alternating between the left and right hands is the fastest way to use the keyboard, and never touching the mouse is the second way to be as fast as possible. It takes ages to move your hand, grab the mouse, move it, and bring it back to the keyboard (and you often have to look at the keyboard to be sure you returned your hand properly to the right place). Here are two examples demonstrating why I'm far less productive with Vim. Copy/cut and paste: I do it all the time. With all the classical editors you press Shift with the left hand and move the cursor with your right hand to select text; then Ctrl+C copies, you move the cursor, and Ctrl+V pastes. With Vim it's horrible: yy copies one line (and you almost never want the whole line!), and [number xx]yy copies xx lines into the buffer, but you never know exactly whether you've selected what you wanted, so I often have to do [number xx]dd and then u to undo. Search and replace: in PSPad it's Ctrl+F, then type what you want to search for, then press Enter. In Vim it's /, then type what you want to search for, then if there are special characters put \ before each special character, then press Enter. And everything with Vim is like that: it seems I don't know how to handle it the right way. NB: I've already read the Vim cheat sheet :) My question is: what is the way you use Vim that makes you more productive than with a classical editor?

    Read the article

  • Unique_ptr compiler errors

    - by Godric Seer
    I am designing an entity-component system for a project, and C++ memory management is giving me a few issues. I just want to make sure my design is legitimate. To start, I have an Entity class which stores a vector of Components:

        class Entity
        {
        private:
            std::vector<std::unique_ptr<Component> > components;

        public:
            Entity() { };

            void AddComponent(Component* component)
            {
                this->components.push_back(std::unique_ptr<Component>(component));
            }

            ~Entity();
        };

    If I am not mistaken, this means that when the destructor is called (even the default, compiler-created one), the destructor for the Entity will destroy components, which will call ~std::unique_ptr for each element in the vector and lead to the destruction of each Component, which is what I want. The Component class has virtual methods, but the important part is its constructor:

        Component::Component(Entity parent)
        {
            parent.AddComponent(this); // I am not sure this works like I expect
            // Other things here
        }

    As long as passing this to the method works, this also does what I want. My confusion is in the factory. What I want to do is something along the lines of:

        std::shared_ptr<Entity> createEntity()
        {
            std::shared_ptr<Entity> entityPtr(new Entity());
            new Component(*entityPtr);
            // Initialize more, and other types of Components
            return entityPtr;
        }

    Now, I believe that this setup leaves ownership of each Component in the hands of its parent Entity, which is what I want. First, a small question: do I need to pass the entity into the Component constructor by reference or pointer? If I understand C++, as written it is passed by value, which means it gets copied, and the copied entity would die at the end of the constructor. The second, and main, question is that code based on this sample will not compile. The complete error is too large to print here, but I think I know roughly what is going on: the compiler says I can't delete an incomplete type. My Component class has a purely virtual destructor with an implementation,

        inline Component::~Component() { };

    at the end of the header. However, the whole point is that Component is actually an interface. I know from here that a complete type is required for unique_ptr destruction. The question is, how do I work around this? For reference I am using gcc 4.4.6.
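
    For the incomplete-type error, the usual workaround is to keep Component forward-declared in the Entity header and move every member that has to destroy a unique_ptr<Component> (in particular the destructor) into a source file where Component is a complete type. A minimal sketch of that split follows; the file names and the exact member set are illustrative, not taken from the post, and component.h is assumed to declare class Component with its virtual destructor as described above.

        // entity.h -- only a forward declaration of Component is needed here.
        #include <memory>
        #include <vector>

        class Component;

        class Entity
        {
        public:
            Entity();
            ~Entity();                      // declared here, defined in entity.cpp
            void AddComponent(Component* component);

        private:
            std::vector<std::unique_ptr<Component> > components;
        };

        // entity.cpp -- Component is complete here, so the unique_ptr<Component>
        // deleter instantiated by ~Entity can see Component's destructor.
        #include "component.h"

        Entity::Entity() { }
        Entity::~Entity() { }

        void Entity::AddComponent(Component* component)
        {
            components.push_back(std::unique_ptr<Component>(component));
        }

    An alternative, if the header layout cannot change, is a custom deleter whose call operator is defined out of line, but the out-of-line destructor above is usually enough.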

    Read the article

  • How to avoid repetition when working with primitive types?

    - by I82Much
    I need to perform algorithms on various primitive types; the algorithm is essentially the same except for the type of the variables. So, for instance:

        /**
         * Determine if <code>value</code> is the bitwise OR of elements of the
         * <code>validValues</code> array. For instance, if our valid choices are
         * 0001, 0010, and 1000 and we are given a value of 1001, this is valid
         * because it can be made by ORing together 0001 and 1000. On the other
         * hand, a value of 1111 is invalid because you cannot turn on the second
         * bit from the left by ORing together those 3 valid values.
         */
        public static boolean isValid(long value, long[] validValues) {
            for (long validOption : validValues) {
                value &= ~validOption;
            }
            return value != 0;
        }

        public static boolean isValid(int value, int[] validValues) {
            for (int validOption : validValues) {
                value &= ~validOption;
            }
            return value != 0;
        }

    How can I avoid this repetition? I know there's no way to genericize primitive arrays, so my hands seem tied. I have instances of primitive arrays, not boxed arrays of, say, Number objects, so I do not want to go that route either. I know there are a lot of questions about primitives with respect to arrays, autoboxing, etc., but I haven't seen it formulated in quite this way, and I haven't seen a decisive answer on how to interact with these arrays. I suppose I could do something like:

        public static <E extends Number> boolean isValid(E value, List<E> numbers) {
            long theValue = value.longValue();
            for (Number validOption : numbers) {
                theValue &= ~validOption.longValue();
            }
            return theValue != 0;
        }

    and then

        public static boolean isValid(long value, long[] validValues) {
            return isValid(value, Arrays.asList(ArrayUtils.toObject(validValues)));
        }

        public static boolean isValid(int value, int[] validValues) {
            return isValid(value, Arrays.asList(ArrayUtils.toObject(validValues)));
        }

    Is that really much better, though? Any thoughts on this matter would be appreciated.

    Read the article

  • Need help iterating over an array, retrieving two possibilities, no repeats, for Poker AI

    - by elguapo-85
    I can't really think of a good way to word this question, nor a good title, and maybe the answer is so ridiculously simple that I am missing it. I am working on a poker AI, and I want to calculate the number of hands that exist which are better than mine. I understand how to do that; what I can't figure out is the best way to iterate over a group of cards. Say I am at the flop: I know what my two cards are and there are 3 cards on the board, so there are 47 unknown cards, and I want to iterate over all possible combinations of two of those 47 cards (the two that could have been dealt to an opponent). You can't have two cards of the same rank and suit, and if you have previously evaluated a pair you don't want to do it again, because that wastes time and this will be called many times. If you don't understand what I am asking, please tell me and I will clarify. So I can set something up like this, where an element equal to one means the card is not in my hand and not on the board, with 4 suits and 13 ranks:

        setOfCards[4][13]

    If I do a simple set of nested for loops like this (pseudocode):

        // remove cards I know are in play from setOfCards by setting their values to zero
        for (int i = 0; i < 4; i++)
          for (int j = 0; j < 13; j++)
            for (int k = 0; k < 4; k++)
              for (int l = 0; l < 13; l++)
              {
                // skip if either value equals zero
                card1 = setOfCards[i][j]
                card2 = setOfCards[k][l]
                // now compare card1, card2 and the set of board cards
              }

    then it repeats many pairs; for example, card1 = AceOfHearts, card2 = KingOfHearts is the same as card1 = KingOfHearts, card2 = AceOfHearts, and counting both will also skew my calculations. How should I go about avoiding this? Also, is there a name for this technique? Thank you.
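
    The standard trick is to impose an order on the candidates and only visit pairs where the second index is greater than the first, so each two-card combination is generated exactly once. Here is a minimal sketch of that in C++; flattening the deck to indices 0..51 and the placeholder list of five known cards are assumptions for illustration only.

        #include <array>
        #include <cstdio>

        int main() {
            // true = card is in my hand or on the board; index = suit * 13 + rank.
            std::array<bool, 52> seen{};
            for (int idx : {0, 14, 27, 40, 51})      // placeholder: the 5 known cards
                seen[idx] = true;

            long pairs = 0;
            for (int a = 0; a < 52; ++a) {
                if (seen[a]) continue;
                for (int b = a + 1; b < 52; ++b) {   // b > a, so (a,b) and (b,a) are never both visited
                    if (seen[b]) continue;
                    // ... compare the opponent hand (a,b) plus the board against my hand here
                    ++pairs;
                }
            }
            std::printf("%ld opponent combinations\n", pairs);  // 47*46/2 = 1081
            return 0;
        }

    What this enumerates are combinations (n choose 2) rather than permutations, which is why the count comes out to 47*46/2 = 1081.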

    Read the article

< Previous Page | 49 50 51 52 53 54 55 56 57 58 59  | Next Page >