Search Results

Search found 12376 results on 496 pages for 'side effects'.


  • Seven Random Thoughts on JavaOne

    - by HecklerMark
    As most people reading this blog may know, last week was JavaOne. There are a lot of summary/recap articles popping up now, and while I didn't want to just "add to the pile", I did want to share a few observations. Disclaimer: I am an Oracle employee, but most of these observations are either externally verifiable or based upon a collection of opinions from Oracle and non-Oracle attendees alike. Anyway, here are a few take-aways:

    - The Java ecosystem is alive and well, with a breadth and depth that is impossible to adequately describe in a short post...or a long post, for that matter. If there is any one area within the Java language or JVM that you would like to - or need to - know more about, it's well-represented at J1.

    - While there are several IDEs used to great effect by the developer community, NetBeans is on a roll. I lost count of how many sessions mentioned or used NetBeans; it was by far the dominant IDE in use at J1. As a recent re-convert to NetBeans, I wasn't surprised others liked it so well, only how many did.

    - OpenJDK, OpenJFX, etc. Many developers were understandably concerned with the change of sponsorship/leadership when Java creator and longtime steward Sun Microsystems was acquired by Oracle. The read I got from attendees regarding Oracle's stewardship was almost universally positive, and the push for "openness" is deep and wide within the current Java environs. Few would probably have imagined it to be this good, this soon. Someone observed that "Larry (Ellison) is competitive, and he wants to be the best...so if he wants to have a community, it will be the best community on the planet." Like any company, Oracle is bound to make missteps, but leadership seems to be striking an excellent balance between embracing open efforts and innovating in competitive paid offerings.

    - JavaFX (2.x) isn't perfect or comprehensive, but a great many people (myself included) see great potential, are developing for it, and are really excited about where it is and where it may be headed. This is another part of the Java ecosystem that has impressive depth for being so new (JavaFX 1.x aside). If you haven't kicked the tires yet, give it a try! You'll be surprised at how capable and versatile it is, and you'll probably catch yourself smiling while coding again. :-)

    - JavaEE is everywhere. Not exactly a newsflash, but there is a lot of buzz around EE still/again/anew. Sessions ranged from updated component specs/technologies to WebSockets/HTML5, from frameworks to profiles and application servers. Programming "server-side" Java isn't confined to the server (as you no doubt realize), and if you still consider JavaEE a cumbersome beast, you clearly haven't been using the last couple of versions. Download GlassFish or the WebLogic Zip distro (or another JavaEE 6 implementation) and treat yourself.

    - JavaOne is not inexpensive, but to paraphrase an old saying, "If you think that's expensive, you should try ignorance." :-) I suppose it's possible to attend J1 and learn nothing, but you'd have to really work at it! Attending even a single session is bound to expand your horizons and make you approach your code, your problem domain, differently...even if it's a session about something you already know quite well. The various presenters offer vastly different perspectives and challenge you to re-think your own approach(es).
    And finally, if you think the scheduled sessions are great - and make no mistake, most are clearly outstanding - wait until you see what you pick up from what I like to call the "hallway sessions". Between the presentations, people freely mingle in the hallways, go to lunch and dinner together, and talk. And talk. And talk. Ideas flow freely, sparking other ideas and the "crowdsourcing" of knowledge in a way that is hard to imagine outside of a conference of this magnitude. Consider this the "GO" part of a "BOGO" (Buy One, Get One) offer: you buy the ticket to the "structured" part of JavaOne and get the hallway sessions at no additional charge. They're really that good.

    If you weren't able to make it to JavaOne this year, you can still watch/listen to the sessions online by visiting the JavaOne course catalog and clicking the media link(s) in the right column - another demonstration of Oracle's commitment to the Java community. But make plans to be there next year to get the full benefit! You'll be glad you did.

    All the best,
    Mark

    P.S. - I didn't mention several other exciting developments in areas like the embedded space and the "internet of things" (M2M), robotics, optimization, and the cloud (among others), but I think you get the idea. JavaOne == brainExpansion; Hope to see you there next year!


  • How Important is Project Team Communication in the Public Sector?

    - by Melissa Centurio Lopes
    By Paul Bender, Director of Public Administration Strategy, Oracle Primavera It goes without saying that communication between project team members is a core competency that connects every member of a project team to a common set of strategies, goals and actions. If these components are not effectively shared by project leads and understood by stakeholders, project outcomes can be jeopardized and budgets may incur unnecessary risk. As reported by PMI’s 2013 Pulse of the Profession, an organization’s ability to meet project timelines, budgets and especially goals significantly impacts its ability to survive—and even thrive. The Pulse study revealed that the most crucial success factor in project management is effective communication to all stakeholders—a critical core competency for public agencies. PMI’s 2013 Pulse of the Profession report revealed that US$135 million is at risk for every US$1 billion spent on a project. Further research on the importance of effective project team communication uncovers that a startling 56 percent (US$75 million of that US$135 million) is at risk due to ineffective communication. Simply stated: public agencies cannot execute strategic initiatives unless they can effectively communicate their strategic alignment and business benefits. Executives and project managers around the world agree that poor communication between project team members contributes to project failure. A Forbes Insights 2010 Strategic Initiatives Study “Adapting Corporate Strategy to the Changing Economy,” found that nine out of ten CEOs believe that communication is critical to the success of their strategic initiatives, and nearly half of respondents cite communication as an integral and active component of their strategic planning and execution process. Project managers see it similarly from their side as well. According to PMI’s Pulse research, 55 percent of project managers agree that effective communication to all stakeholders is the most critical success factor in project management.
    As we all know, not all projects succeed. On average, two in five projects do not meet their original goals and business intent, and one-half of those unsuccessful projects are related to ineffective communication. Results reveal that while all aspects of project communication can be challenging to public agencies, the biggest problem areas are:

    - A gap in understanding the business benefits.
    - Challenges surrounding the language used to deliver project-related information, which is often unclear and peppered with project management jargon.

    Public agencies -- federal, state, and local -- have difficulty communicating at the appropriate levels with clarity and detail. This difficulty is likely exacerbated by the divide between each key audience and its understanding of project-specific, technical language. For those involved in public sector project and portfolio management, I would be interested to hear your thoughts -- and please visit Primavera EPPM solutions for public sector.


  • #OOW 2012 @PARIS...talking Oracle and Clouds, and Optimized Datacenter

    - by Eric Bezille
    For those of you who want to get the most out of Oracle technologies to evolve your IT to the next wave, I encourage you to register for the upcoming Oracle Optimized Datacenter event that will take place in Paris on November 28th. You will get the opportunity to exchange with Oracle experts and with customers who have successfully evolved their IT by leveraging Oracle technologies. You will also get the latest news on some of the Oracle systems announcements made during OOW 2012. During this event we will give an update on Oracle and Clouds, from private to public and hybrid models. So in preparing this session, I thought it was a good start to take stock of Cloud Computing in France, and of CIO requirements in particular. Starting in 2009 with the first Cloud Camp in Paris, the market has evolved, but the basics are still the same: think hybrid.

    From Traditional IT to Clouds

    One size doesn't fit all, and for big companies that already have an IT in place, there will be parts eligible for the external (public) cloud, and parts that will be required to stay inside the firewalls, so the ability to integrate both sides is key. Nonetheless, one of the major impacts of the Cloud Computing trend on IT, reported by Forrester, is the pressure it puts on CIOs to evolve towards the same model that end-users are now used to in their day-to-day lives, where self-service and flexibility are paramount. This is what is driving IT to transform itself into "a Global Service Provider", or for some into "IT "is" the Business" (see: Gartner Identifies Four Futures for IT and CIO), and in both models toward a Private Cloud Service Provider. In this journey, there is still a big difference between most existing external Clouds and a firm's IT: the number of applications that a CIO has to manage. Most cloud providers today are highly specialized, but at the end of the day, there are very few business processes that rely on only one application. So CIOs have to combine everything together, external and internal. And the internal parts that they will have to evolve into a Private Cloud can cover a very large scope. This will often require CIOs to move from their traditional approach to more disruptive ones; the time has come to introduce new standards and processes if they want to succeed. So let's have a look at the different Cloud models, what type of users they are addressing, what value they bring and, most importantly, what needs to be done by the Cloud Provider, and what is left to the user.

    IaaS, PaaS, SaaS: what's provided and what needs to be done

    First of all, the Cloud Provider will have to provide all the infrastructure needed to deliver the service. And the more value IT wants to provide, the more IT will have to deliver and integrate: from disks to applications. As we can see in the above picture, providing pure IaaS leaves a lot for the end-user to cover; that's why the end-users targeted by this Cloud Service are IT people. If you want to bring more value to developers, you need to provide them a development platform ready to use, which is what PaaS stands for: providing not only the processing power, storage and OS, but also the Database and Middleware platform. SaaS is the last mile of the Cloud, providing an application ready to use by business users; the remaining part for the end-users is configuring and specifying the application for their specific usage.
    In addition to that, there are common challenges encompassing all types of Cloud Services:

    - Security: covering all aspects, not only of user management but also data flows and data privacy
    - Charge back: measuring what is used and by whom
    - Application management: providing capabilities not only to deploy, but also to upgrade, from OS for IaaS, Database and Middleware for PaaS, to a full Business Application for SaaS
    - Scalability: ability to evolve ALL the components of the Cloud Provider stack as needed
    - Availability: ability to cover "always on" requirements
    - Efficiency: providing an infrastructure that leverages shared resources in an efficient way and still complies with SLAs (performance, availability, scalability, and ability to evolve)
    - Automation: providing the orchestration of ALL the components across the whole service life-cycle (deployment, growth & shrink (elasticity), upgrades, ...)
    - Management: providing monitoring, configuration and self-service up to the end-users

    Oracle Strategy and Clouds

    For CIOs to succeed in their Private Cloud implementation means covering all those aspects for the life-cycle of each component they selected to build their Cloud. That's where a multi-vendor layered approach comes up short in terms of efficiency, and that's the reason why Oracle focuses on taking care of all those aspects directly at the engineering level, to truly provide efficient Cloud Services solutions for IaaS, PaaS and SaaS. We are going as far as embedding software functions in hardware (storage, processor level, ...) to ensure the best SLA with the highest efficiency. The beauty of it, as we rely on standards, is that the Oracle components that you are running today in-house are exactly the same ones that we are using to build Clouds, bringing you flexibility, reversibility and a fast path to adoption.

    With Oracle Engineered Systems (Exadata, Exalogic & SPARC SuperCluster, more specifically when talking about Cloud), we are delivering all those components, hardware and software, already engineered together at the Oracle factory, with a single pane of glass for the management of ALL the components through Oracle Enterprise Manager, and with high availability, scalability and the ability to evolve by design. To give you a feeling of what that brings in terms of implementation project timeline alone: with Oracle SPARC SuperCluster, for example, we have a consistent track record of having the system plugged into an existing datacenter and ready in a week. This includes Oracle Database, OS, virtualization, Database Storage (Exadata Storage Cells in this case), Application Storage, and all network configuration. This strategy enables CIOs to very quickly build Cloud Services, taking out not only the complexity of integrating everything together but also the automation and evolution complexity and cost.

    I invite you to discuss all those aspects with regard to your particular context face-to-face on November 28th.


  • Lazy HTML attributes wrapping in Internet Explorer

    - by AGS777
    Having encountered this Internet Explorer (all versions) behavior several times previously, I eventually decided to share this most probably useless knowledge. Excuse my lengthy explanations, because I am going to show the behavior along with a very simple case in which one can come across it inadvertently.

    Let's say I want to implement some simple templating solution in JavaScript. I wrote an HTML template with the intention of binding data to it on the client side. Please note that the name of the "sys-template" class is just a coincidence; I do not use any ASP.NET AJAX code in this simple example. As you can see, we need to replace placeholders (property names wrapped in curly braces) with actual data. Also, as you can see, many of the placeholders sit within attribute values, and that is where the danger lies.

    I am going to use the <a /> element's HTML as a template and replace each placeholder pattern with the respective properties' values with a little bit of jQuery. You can find the complete code, along with the contextFormat() method definition, at the end of the post.

    Let's assume that the value of the name property (which we want to put in the title attribute) of the first data item is "first tooltip", so it consists of two words. Once the replacement has occurred, the title attribute should contain the text "first tooltip", which we should see as a tooltip for the <a /> element. But let's run the sample code in Internet Explorer and check it out. What you'll see is that only the first word of the supposed "title" attribute's content is shown. So, where is the rest of my attribute, and what happened?

    The answer is obvious once you see the result of the jQuery(".sys-template").html() line for the given HTML markup. In IE you'll get the following:

    <A id={id} class={cssClass} title={name} href="{source}" myAttr="{attr}">Link to {source}</A>

    See any difference between this HTML and the one shown earlier? No? Then look carefully. While the original HTML of the <a /> element is well-formed and all the attributes are correctly quoted, when you take the same HTML back in Internet Explorer (it doesn't matter whether you use the html() method from the jQuery library or IE's innerHTML directly), you lose the attribute quotes for some of the attributes. Then, after replacement, we'll get the following HTML for our first data item (I marked the attribute value in question):

    <A id=1 class=first title=first tooltip href="first.html" myAttr="firstAttr">Link to first.html</A>

    Now you can easily imagine for yourself what happens when this HTML is inserted into the document, and why we do not see the second word (or any subsequent words) of our title attribute in the tooltip.

    There are still two important things to note. The first one (and it is actually the reason why I named the post "lazy wrapping") is that if the value of an HTML attribute does contain spaces in the original HTML, then it WILL be wrapped with quotation marks. For example, if I wrote the following on my page (note the trailing space in the title attribute value)

    <a href="{source}" title="{name}  " id="{id}" myAttr="{attr}" class="{cssClass}">Link to {source}</a>

    then I would have my placeholder quoted correctly and the result of the replacement would render as expected. The second important thing to note is that there are exceptions to the lazy attribute wrapping rule in IE.
    As you can see, the href attribute value did not contain spaces, exactly like all the other attributes with placeholders, but it was still returned correctly quoted. The custom attribute myAttr is also quoted correctly when returned back from the document, though its placeholder value does not contain spaces either.

    Now, on account of the highly unlikely probability that you found this information useful and need a solution to the problem the aforementioned behavior introduces for Internet Explorer, I can suggest a simple workaround: manually quote the mischievous attributes before the placeholder pattern is replaced. Using the code of the contextFormat() method shown below, you would need to add the following line right before the return statement:

    result = result.replace(/=({([^}]+)})/g, '="$1"');
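    To make the fix concrete, here is a minimal sketch of what a contextFormat()-style helper and the data binding could look like. Only the attribute-quoting regex comes from the post itself; the function shape, the sample data and the use of document.body as the target are illustrative assumptions, not the author's original listing:

        // Minimal sketch (assumed implementation); only the attribute-quoting
        // regex below is taken from the post itself.
        function contextFormat(template, data) {
            var result = template;
            // Re-quote any ={placeholder} attribute values that IE's innerHTML
            // returned without quotation marks (the "lazy wrapping" behavior).
            result = result.replace(/=({([^}]+)})/g, '="$1"');
            // Replace each {property} placeholder with the matching property value.
            return result.replace(/{([^}]+)}/g, function (match, name) {
                return (name in data) ? data[name] : match;
            });
        }

        // Illustrative usage with the data items described in the post.
        var template = jQuery('.sys-template').html();
        var items = [
            { id: 1, cssClass: 'first',  name: 'first tooltip',  source: 'first.html',  attr: 'firstAttr'  },
            { id: 2, cssClass: 'second', name: 'second tooltip', source: 'second.html', attr: 'secondAttr' }
        ];
        jQuery.each(items, function (i, item) {
            jQuery(document.body).append(contextFormat(template, item));
        });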


  • Windows for IoT, continued

    - by Valter Minute
    Originally posted on: http://geekswithblogs.net/WindowsEmbeddedCookbook/archive/2014/08/05/windows-for-iot-continued.aspx

    I received a lot of interesting feedback on my previous blog post and I tried to find some time to do some additional tests. Bert Kleinschmidt pointed out that pins 2, 3 and 10 of the Galileo are connected directly to the SOC, while pin 13, the one used for the sample sketch, is controlled via an I2C I/O expander. I changed my code to use pin 2 instead of 13 (just changing the variable assignment at the beginning of the code; see the sketch at the end of this post) and latency was greatly reduced. Now each pulse lasts for 1.44 ms, 44% more than the expected time, but way better than the result we got using pin 13. I also used SetThreadPriority to increase the priority of the thread that was running the sketch to THREAD_PRIORITY_HIGHEST, but that didn't change the results. When I was using the I2C-controlled pin I tried the same, and the timings got much worse (increasing more than 10 times), so I did not comment on that part, wanting to investigate the issue a bit more in detail. It seems that increasing the priority of the application thread negatively impacts the I2C communication.

    I also tried the Linux-based implementation (using a different Galileo board, since the one provided by MS seems to use a different firmware), and the results of running the sample blink sketch modified to use pin 2 and blink the led for 1 ms are similar to those we got on the same board running Windows. Here the difference between expected time and measured time is worse, at around 3.2 ms instead of 1 (320% compared to 150% using Windows, but far from the 100.1% we got with the 8-bit Arduino). Both systems were not under load during the test; loading some applications that use part of the CPU time would probably make those timings even less reliable, but I think that those numbers are enough to draw some conclusions. It may not be worth running a full OS if what you need is Arduino compatibility. The Arduino UNO is probably the best Arduino you can find to perform this kind of development.

    The Galileo, running the Linux-based stack or running Windows for IoT, is targeted to be a platform for "Internet of Things" devices, whatever that means. At the moment I don't see the "I" part of IoT. We have low-level interfaces (SPI, I2C, the GPIO pins) that can be used to connect sensors, but the support for connectivity is limited, and the amount of work required to deliver some data to the cloud (using a secure HTTP request or a message queuing system like AMQP or MQTT) is still big, and the rich OS underneath seems not to provide any help doing that. Why should I have to use sockets, without access to all the high-level connectivity features we have on "full" Windows? I know that it's possible to use some third-party libraries, try to build them using the Windows For IoT SDK etc., but this means re-inventing the wheel every time and can also lead to some IP concerns if used for products meant to be closed-source. I hope that MS and Intel (and others) will focus less on the "coolness" of running (some) Arduino sketches and more on providing a better platform to people who really want to design devices that leverage internet connectivity and the cloud's processing power to deliver better products and services. Providing a reliable set of connectivity services would be a great start. Providing support for .NET would be even better, leaving native code available for hardware access etc.
    I know that those components may require additional storage and memory etc., so making the OS componentizable (or, at least, providing a way to install additional components) would be a great way to let developers pick the parts of the system they need to develop their solution, knowing that they will integrate well together. I can understand that the Arduino and Raspberry Pi* success may have attracted the attention of marketing departments worldwide, and almost any new development board these days is promoted as "XXX response to Arduino" or "YYYY alternative to Raspberry Pi", but this is misleading and prevents companies from focusing on how to deliver good products and how to integrate "IoT" features with their existing offering to provide, in the end, a better product or service to their customers. Marketing is important, but it can't decide the key features of a product (the OS) that is going to be used to develop full products for end customers, integrating it with hardware and application software.

    I really like the "hackable" nature of open-source devices and like to see that companies are getting more and more open in releasing information, providing "hackable" devices and supporting developers with documentation, good samples etc. On the other hand, being able to run a sketch designed for an 8-bit microcontroller on a full-featured application processor may sound cool and an easy upgrade path for people who have just experimented with sensors etc. on Arduino, but it's not, in my humble opinion, the main path to follow for people who want to deliver real products.

    *Shameless self-promotion: if you are looking for a good book in Italian about the Raspberry Pi, try mine: http://www.amazon.it/Raspberry-Pi-alluso-Digital-LifeStyle-ebook/dp/B00GYY3OKO
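    For reference, here is a minimal reconstruction of the kind of blink sketch being timed in these tests. The post does not include its exact code, so everything beyond the use of pin 2 and a 1 ms pulse is an assumption:

        // Assumed reconstruction, not the author's original code.
        // Pin 2 is wired directly to the SoC on the Galileo; pin 13 goes through
        // an I2C I/O expander, which is what made the original timings so poor.
        const int ledPin = 2;

        void setup() {
            pinMode(ledPin, OUTPUT);
        }

        void loop() {
            digitalWrite(ledPin, HIGH);
            delay(1);   // expected 1 ms pulse; measured ~1.44 ms on Windows for IoT
            digitalWrite(ledPin, LOW);
            delay(1);
        }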


  • Yes, I did it - Skydiving in Mauritius

    Finally, I did it, or better said, we did it. Back in November last year I saw the big billboard advertisement for Skydive Austral Mauritius near Caudan Waterfront in Port Louis and decided for myself that this was going to be the perfect birthday gift for my wife. Simply out of curiosity, I would join her tandem jump with a second instructor. Due to her pregnancy with our son I had to be patient... But then finally her birthday arrived, and at our midnight celebration session I showed her her netbook with the website preloaded. Actually, it was the "perfect" timing... Recovery from her cesarean was fine, local weather conditions were gorgeous, and the children were under the surveillance of my mum, who was spending her annual holidays on the island. So, after a late wake-up in the morning, we packed our stuff and off we went. According to Google Maps we had to drive roughly 50 km (only), but traffic here in Mauritius is always challenging. The dropzone is at the Zone Industrielle Mon Loisir Sugar Estate near Riviere du Rempart on the north-east coast. Anyway, we were not in a hurry and arrived there shortly after noon. The access roads to the airfield are just small down-driven paths through sugar cane fields, and according to our daughter, "it's bumpy!". True, true, true...

    The facilities at Skydive Austral Mauritius are complete, except for food. Enough space for parking, easy handling at the reception, and a lot to see for the kids. There's even a big terrace with several sets of tables and chairs and a small bar for soft drinks, strictly non-alcoholic. The team over there is all welcoming and warm-hearted! Having the kids with us was no issue at all. Quite the opposite: our daughter was allowed to discover a lot more things than we adults did. Even visiting the small airplane was on the menu for her. Really great stuff! While waiting for our turn we enjoyed watching other people getting ready in the jump gear, taking off with the Cessna, and finally coming back down on the tandem parachute. Actually, the different expressions on their faces were one of the best parts of the waiting. Great mental preparation, as my wife was getting more anxious about her first jump...

    First, we got some information about the procedures on the plane: how to get seated, tied up with our instructors, and how to get ready for the jump off the plane as soon as we reached the height of 10,000 ft. All well explained and easy to understand, after all. Next, we met our jumpers Chris and Lee aka "Rasta" to get dressed and ready for take-off. Those guys are really cool and relaxed about their job. From that point on, the DVD session / recording for my wife's birthday started and we really had a lot of fun... The difference between that small Cessna and a commercial flight on an Airbus or a Boeing is astronomic! The climb up to 10,000 ft took us roughly 25 minutes and we enjoyed the magnificent view over the turquoise lagoons near Poste de Flacq, Lafayette and Isle d'Ambre on the north-east coast. After flying through the clouds we sun-bathed and looked out over "icing-sugar covered" Mauritius. You might have a look at the picture gallery of Skydive Mauritius for a better impression. The moment of truth, or better said, the point of no return, came after approximately 25 minutes. The door opens, we move into position on the side on top of the wheel and... out! Back flip and free fall! Slight turns and Wooooohooooo! through the clouds... It is so amazing and breath-taking!
    So indescribable! You have to experience this yourself! Some seconds later the parachute opened and we glided smoothly, with some turns and spins, back down to the dropzone. The rest of the family could soon hear and see us, and the landing was easy going. We never had any doubts or fear about our instructors. They did a great job and we are looking forward to booking our next jump. I might even consider taking skydiving classes and earning a license. By the way, feel free to get in touch with Skydive Austral Mauritius, either via the contact details on their website or by tweeting a little bit with them. Follow the tweets of Chris and fellows on SkydiveAustral.


  • C# Open Source software that is useful for learning Design Patterns

    - by Fathom Savvy
    In college I took a class in Expert Systems. The language the book taught (CLIPS) was esoteric - Expert Systems: Principles and Programming, Fourth Edition. I remember having a tough time with it. So, after almost failing the class, I needed to create the most awesome Expert System for my final presentation. I chose to create an expert system that would calculate risk analysis for a person's retirement portfolio. In short, the system would provide the services normally performed by one's financial adviser. In other words, based on personality, age, state of the macro economy, and other factors, should one's portfolio be conservative, moderate, or aggressive?

    In the appendix of the book (or on the CD-ROM), there was this in-depth example program for something unrelated to my presentation. Over my break, I read and re-read every line of that program until I understood it to the letter. Even though it was unrelated, I learned more than I ever could by reading all of the chapters. My presentation turned out to be pretty damn good and I received praises from my professor and classmates. So, the moral of the story is..., by understanding other people's code, you can gain greater insight into a language/paradigm than by reading canonical examples.

    Still, to this day, I am having trouble with everyday design patterns such as the Factory Pattern. I would like to know if anyone could recommend open source software that would help me understand the Gang of Four design patterns, at the very least. I have read the books, but I'm having trouble writing code for the concepts in the real world. Perhaps, by studying code used in today's real world applications, it might just "click". I realize a piece of software may only implement one kind of design pattern. But, if the pattern is an implementation you think is good for learning, and you know what pattern to look for within the source, I'm hoping you can tell me about it.

    For example, the System.Linq.Expressions namespace has a good example of the Visitor Pattern. The client calls Expression.Accept(new ExpressionVisitor()), which calls ExpressionVisitor (VisitExtension), which calls back to Expression (VisitChildren), which then calls Expression (Accept) again - wooah, kinda convoluted. The point to note here is that VisitChildren is a virtual method. Both Expression and those classes derived from Expression can implement the VisitChildren method any way they want. This means that one type of Expression can run code that is completely different from another type of derived Expression, even though the ExpressionVisitor class is the same in the Accept method. (As a side note, Expression.Accept is also virtual). In the end, the code provides a real world example that you won't get in any book, because it's kinda confusing.

    To summarize: if you know of any open source software that uses a design pattern implementation you were impressed by, please list it here. I'm sure it will help many others besides just me.
    // Adapted from System.Linq.Expressions for illustration. Members such as
    // CanReduce, Reduce, Type, Error and TypeUtils come from the real framework
    // and are not defined in this snippet.
    public class VisitorPatternTest
    {
        public void Main()
        {
            // Accept the same visitor on two different node types; each node type
            // dispatches to a different visitor method (double dispatch).
            Expression normalExpr = new Expression();
            normalExpr.Accept(new ExpressionVisitor());

            Expression binExpr = new BinaryExpression();
            binExpr.Accept(new ExpressionVisitor());
        }
    }

    public class Expression
    {
        // Virtual Accept routes the visitor to the Visit* method for this node type.
        protected internal virtual Expression Accept(ExpressionVisitor visitor)
        {
            return visitor.VisitExtension(this);
        }

        protected internal virtual Expression VisitChildren(ExpressionVisitor visitor)
        {
            if (!this.CanReduce) { throw Error.MustBeReducible(); }
            return visitor.Visit(this.ReduceAndCheck());
        }

        public virtual Expression Visit(Expression node)
        {
            if (node != null) { return node.Accept(this); }
            return null;
        }

        public Expression ReduceAndCheck()
        {
            if (!this.CanReduce) { throw Error.MustBeReducible(); }
            Expression expression = this.Reduce();
            if ((expression == null) || (expression == this)) { throw Error.MustReduceToDifferent(); }
            if (!TypeUtils.AreReferenceAssignable(this.Type, expression.Type)) { throw Error.ReducedNotCompatible(); }
            return expression;
        }
    }

    public class BinaryExpression : Expression
    {
        // Override Accept so the visitor's BinaryExpression-specific logic runs.
        protected internal override Expression Accept(ExpressionVisitor visitor)
        {
            return visitor.VisitBinary(this);
        }

        protected internal override Expression VisitChildren(ExpressionVisitor visitor)
        {
            return CreateDummyExpression();
        }

        protected internal Expression CreateDummyExpression()
        {
            Expression dummy = new Expression();
            return dummy;
        }
    }

    public class ExpressionVisitor
    {
        public virtual Expression Visit(Expression node)
        {
            if (node != null) { return node.Accept(this); }
            return null;
        }

        protected internal virtual Expression VisitExtension(Expression node)
        {
            return node.VisitChildren(this);
        }

        protected internal virtual Expression VisitBinary(BinaryExpression node)
        {
            return ValidateBinary(node,
                node.Update(this.Visit(node.Left),
                            this.VisitAndConvert<LambdaExpression>(node.Conversion, "VisitBinary"),
                            this.Visit(node.Right)));
        }
    }


  • Best Practices Generating WebService Proxies for Oracle Sales Cloud (Fusion CRM)

    - by asantaga
    I've recently been building a REST Service wrapper for Oracle Sales Cloud, and initially all was going well; however, as soon as I added all of my Web Service proxies I started to get weird errors. My project structure looks like this. What I found was that if I only had the InteractionsService and OpportunityService WebService proxies then all worked OK, but as soon as I added the LocationsService proxy, I would start to see strange JAXB errors.

    Example of the error message:

    Exception in thread "main" javax.xml.ws.WebServiceException: Unable to create JAXBContext
        at com.sun.xml.ws.model.AbstractSEIModelImpl.createJAXBContext(AbstractSEIModelImpl.java:164)
        at com.sun.xml.ws.model.AbstractSEIModelImpl.postProcess(AbstractSEIModelImpl.java:94)
        at com.sun.xml.ws.model.RuntimeModeler.buildRuntimeModel(RuntimeModeler.java:281)
        at com.sun.xml.ws.client.WSServiceDelegate.buildRuntimeModel(WSServiceDelegate.java:762)
        at weblogic.wsee.jaxws.spi.WLSProvider$ServiceDelegate.buildRuntimeModel(WLSProvider.java:982)
        at com.sun.xml.ws.client.WSServiceDelegate.createSEIPortInfo(WSServiceDelegate.java:746)
        at com.sun.xml.ws.client.WSServiceDelegate.addSEI(WSServiceDelegate.java:737)
        at com.sun.xml.ws.client.WSServiceDelegate.getPort(WSServiceDelegate.java:361)
        at weblogic.wsee.jaxws.spi.WLSProvider$ServiceDelegate.internalGetPort(WLSProvider.java:934)
        at weblogic.wsee.jaxws.spi.WLSProvider$ServiceDelegate$PortClientInstanceFactory.createClientInstance(WLSProvider.java:1039)
        ......

    Looking further down, I see the error message is related to JAXB not being able to find an ObjectFactory for one of its types:

    Caused by: java.security.PrivilegedActionException: com.sun.xml.bind.v2.runtime.IllegalAnnotationsException: 6 counts of IllegalAnnotationExceptions
    There's no ObjectFactory with an @XmlElementDecl for the element {http://xmlns.oracle.com/apps/crmCommon/activities/activitiesService/}AssigneeRsrcOrgId
    this problem is related to the following location:
        at protected javax.xml.bind.JAXBElement com.oracle.xmlns.apps.crmcommon.activities.activitiesservice.ActivityAssignee.assigneeRsrcOrgId
        at com.oracle.xmlns.apps.crmcommon.activities.activitiesservice.ActivityAssignee

    This is very strange... My first thought was that when I generated the WebService Proxy I entered the package name as "oracle.demo.pts.fusionproxy.servicename" and left the generated types package blank. This way all the generated types get put into the same package hierarchy, and when deployed they get merged... Sounds reasonable, and it appears to work, but not in this case.

    To resolve this I regenerated the proxy, but this time setting:

    Package name: the name of my package, e.g. oracle.demo.pts.fusionproxy.interactions
    Root Package for Generated Types: the package where the types will be generated to, e.g. oracle.demo.pts.fusionproxy.SalesParty.types

    When I ran the application now, it all worked. Awesome, eh???? Alas no, there is a serious side effect. The problem now is that, to help coding, I've created a collection of helper classes, and these helper classes take parameters which use some of the "generic" datatypes, like FindCriteria. E.g. this won't work any more:

    public static FindCriteria createCustomFindCriteria(FindCriteria pFc, String pAttributes)

    Here lies a gremlin of a problem: I can't use this method anymore, because the FindCriteria datatype is now being defined two or more times in the generated code for my project.
    If you leave the Root Package for types blank, the types get generated into com.oracle.xmlns; if you populate it, they get generated into your custom package. The two datatypes look the same, sound the same (and if this were a duck, would sound the same), but THEY ARE NOT THE SAME... Speaking to development, they recommend you should not be entering anything in the Root Packages section, so the mystery thickens as to why it works. Well, after spending some time with some colleagues of mine in development, we've identified the issue. Alas, different parts of Oracle Fusion Development have multiple schemas with the same namespace; when the WebService generator generates its classes, it is not seeing the other schemas properly and not generating the Object Factories correctly. Thankfully I've found a workaround.

    Solution Overview

    1. When generating the proxies, leave the Root Package for Generated Types BLANK.
    2. When you have finished generating your proxies, use the JAXB tool XJC and generate Java classes for all datatypes.
    3. Create a project within your JDeveloper 11g workspace and import the Java classes into this project.
    4. Final bit: within the project dependencies, ensure that the JAXB/XJC generated classes are FIRST in the classpath.

    Solution Details

    Generate the WebService SOAP proxies. When generating the proxies, your generation dialog should look like this. Ensure the "unwrap" parameters option is selected; if it isn't, that's OK, it simply means when issuing a "get" you need to extract out the Element.

    Generate the JAXB classes using XJC. XJC provides a command line switch called -wsdl, which (although experimental/beta) accepts an HTTP WSDL and will generate the relevant classes. You can put these into a single batch/shell script:

    xjc -wsdl https://fusionservername:443/appCmmnCompInteractions/InteractionService?wsdl
    xjc -wsdl https://fusionservername:443/opptyMgmtOpportunities/OpportunityService?wsdl

    Create a project in JDeveloper to store the XJC-generated JAXB classes. Within the project folder create a filesystem folder called "src" and copy the generated files into this folder. JDeveloper 11g should then see the classes and display them; if it doesn't, try clicking the "refresh" button.

    In your main project, ensure that the JDeveloper XJC project is selected as a dependency and, IMPORTANT, make sure it is at the top of the list. This ensures that the classes are at the front of the classpath.

    And voilà.. Hopefully you won't see any JAXB generation errors and you can use common datatypes interchangeably in your project (e.g. FindCriteria etc.).


  • Oracle Enterprise Manager Cloud Control 12c Release 2 (12.1.0.2) is Available Now !

    - by Anand Akela
    Oracle today announced the availability of Oracle Enterprise Manager Cloud Control 12c Release 2 (12.1.0.2). It is now available for download on OTN on ALL platforms. This is the first major release since the launch of Enterprise Manager 12c in October of 2011, and the first time an Enterprise Manager release is available on all platforms simultaneously. This is primarily a stability release, which incorporates fixes for many of the issues and much of the feedback reported by early adopters. In addition, this release contains many new features and enhancements in areas across the board.

    New Capabilities and Features

    - Enhanced management capabilities for enterprise private clouds: introduces new capabilities to allow customers to build and manage a Java Platform-as-a-Service (PaaS) cloud based on Oracle WebLogic Server. The new capabilities include guided set-up of the PaaS Cloud, self-service provisioning, automatic scale-out, and metering and chargeback.
    - Enhanced lifecycle management capabilities for Oracle WebLogic Server environments: combining in-context multiple domain, patching and configuration file synchronizations.
    - Integrated hardware-software management for Oracle Exalogic Elastic Cloud through features such as rack schematics visualization and integrated monitoring of all hardware and software components.
    - The latest management capabilities for business-critical applications, including Business Application Management: a new Business Application (BA) target type and dashboard with flexible definitions provides a logical view of an application's business transactions, end-user experiences and the cloud infrastructure the monitored application is running on.
    - Enhanced User Experience Reporting: Oracle Real User Experience Insight has been enhanced to provide reporting capabilities on client-side issues for applications running in the cloud, and has been more tightly coupled with Oracle Business Transaction Management to help ensure that real-time user experience and transaction tracing data is provided to users in context.
    - Several key improvements address ease of administration, reporting and extensibility for massively scalable cloud environments, including dynamic groups, self-updateable monitoring templates, bulk operations against many events, etc.
    - New and Revised Plug-ins: several plug-ins have been updated as a part of this release, resulting in either new versions or revisions. Revised plug-ins contain only bug fixes, while new plug-ins incorporate both bug fixes and new functionality. See the table below.
    Plug-In Name                                            | Version
    Enterprise Manager for Oracle Database                  | 12.1.0.2 (revision)
    Enterprise Manager for Oracle Fusion Middleware         | 12.1.0.3 (new)
    Enterprise Manager for Chargeback and Capacity Planning | 12.1.0.3 (new)
    Enterprise Manager for Oracle Fusion Applications       | 12.1.0.3 (new)
    Enterprise Manager for Oracle Virtualization            | 12.1.0.3 (new)
    Enterprise Manager for Oracle Exadata                   | 12.1.0.3 (new)
    Enterprise Manager for Oracle Cloud                     | 12.1.0.4 (new)

    Installation and Upgrade

    - All major platforms have been released simultaneously (Linux 32/64-bit, Solaris (SPARC), Solaris x86-64, IBM AIX 64-bit, and Windows x86-64 (64-bit)).
    - Enterprise Manager 12.1.0.2 is a complete release that includes both the EM OMS and Agent versions of 12.1.0.2.
    - Installation options available with EM 12.1.0.2: users can do a fresh install or an upgrade from versions EM 10.2.0.5, 11.1, or 12.1.0.1 (Bundle Patch 1 not mandatory).
    - Upgrading to EM 12.1.0.2 from EM 12.1.0.1 is not a patch application (similar to Bundle Patch 1) but is achieved through a 1-system upgrade.

    Documentation

    - The Oracle Enterprise Manager Cloud Control Introduction Document provides a broad overview of capabilities and highlights "What's New" in EM 12.1.0.2.
    - All updated Oracle Enterprise Manager documentation can be found on OTN: Upgrade Guide.

    Please feel free to ask questions related to the new Oracle Enterprise Manager Cloud Control 12c Release 2 (12.1.0.2) at the Oracle Enterprise Manager Forum. You could also share your feedback on Twitter using hash tag #em12c or on Facebook.

    Stay Connected: Twitter | Facebook | YouTube | LinkedIn | Newsletter


  • JavaDay Taipei 2014 Trip Report

    - by reza_rahman
    JavaDay Taipei 2014 was held at the Taipei International Convention Center on August 1st. Organized by Oracle University, it is one of the largest Java developer events in Taiwan. This was another successful year for JavaDay Taipei, with a fully sold out venue packed with youthful, energetic developers (this was my second time at the event and I have already been invited to speak again next year!). In addition to Oracle speakers like me, Steve Chin and Naveen Asrani, the event also featured a bevy of local speakers, including Taipei Java community leaders. Topics included Java SE, Java EE, JavaFX, cloud and Big Data.

    It was my pleasure and privilege to present one of the opening keynotes for the event. I presented my session on Java EE titled "JavaEE.Next(): Java EE 7, 8, and Beyond". I covered the changes in Java EE 7 as well as what's coming in Java EE 8. I demoed the Cargo Tracker Java EE BluePrints. I also briefly talked about Adopt-a-JSR for Java EE 8. The slides for the keynote are available online (click here to download and view the actual PDF).

    In the afternoon I did my JavaScript + Java EE 7 talk titled "Using JavaScript/HTML5 Rich Clients with Java EE 7". This talk is basically about aligning EE 7 with the emerging JavaScript ecosystem (specifically AngularJS). The talk was completely packed. The slide deck for the talk is here: JavaScript/HTML5 Rich Clients Using Java EE 7 from Reza Rahman. The demo application code is posted on GitHub. The code should be a helpful resource if this development model is something that interests you. Do let me know if you need help with it, but the instructions should be fairly self-explanatory. I am delivering this material at JavaOne 2014 as a two-hour tutorial. This should give me a little more bandwidth to dig a little deeper, especially on the JavaScript end.

    I finished off JavaDay Taipei with my talk titled "Using NoSQL with ~JPA, EclipseLink and Java EE" (this was the last session of the conference). The talk covers an interesting gap that there is surprisingly little material on out there. The talk has three parts: a birds-eye view of the NoSQL landscape; how to use NoSQL via a JPA-centric facade using EclipseLink NoSQL, Hibernate OGM, DataNucleus, Kundera, Easy-Cassandra, etc.; and how to use NoSQL native APIs in Java EE via CDI. The slides for the talk are here: Using NoSQL with ~JPA, EclipseLink and Java EE from Reza Rahman. The JPA based demo is available here, while the CDI based demo is available here. Both demos use MongoDB as the data store. Do let me know if you need help getting the demos up and running. After the event the Oracle University folks hosted a reception in the evening, which was very well attended by organizers, speakers and local Java community leaders.

    I am extremely saddened by the fact that this otherwise excellent trip was scarred by terrible tragedy. After the conference I joined a few folks for a hike on Maokong Mountain on Saturday. The group included friends in the Taiwanese Java community, including Ian and Robbie Cheng. Without warning, fatal tragedy struck on a remote part of the trail. Despite best efforts by us, the excellent Taiwanese Emergency Rescue Team and world-class Taiwanese physicians, we were unable to save our friend Robbie Cheng's life. Robbie was just thirty-four years old and is survived by his younger brother, mother and father.
    Being the father of a young child myself, I can only imagine the deep sorrow that this senseless loss unleashes. Robbie was a key member of the Taiwanese Java community and a Java Evangelist at Sun at one point. Ironically, the only picture I was able to take of the trail was mere moments before tragedy. I thought I should place him in that picture in profoundly respectful memoriam.

    Perhaps there is some solace in the fact that there is something inherently honorable in living a bright life, dying young and meeting one's end on a beautiful remote mountain trail few venture to behold, let alone attempt to ascend, in a long and tired lifetime. Perhaps I'd even say it's a fate I would not entirely regret facing if it were my own. With that thought in mind it seems appropriate to quote some lyrics from the song "Runes to My Memory" by legendary Swedish heavy metal band Amon Amarth, idealizing a fallen Viking warrior cut down in his prime:

    "Here I lie on wet sand
    I will not make it home
    I clench my sword in my hand
    Say farewell to those I love

    When I am dead
    Lay me in a mound
    Place my weapons by my side
    For the journey to Hall up high

    When I am dead
    Lay me in a mound
    Raise a stone for all to see
    Runes carved to my memory"

    I submit my deepest condolences to Robbie's family and hope my next trip to Taiwan ends on a less somber note.


  • Profiling Startup Of VS2012 – YourKit Profiler

    - by Alois Kraus
    The YourKit (v7.0.5) profiler is interesting in terms of price (79€ single-seat license, 409€ with 1 year of support and upgrades) and feature set. You get a performance and memory profiler in one package, something you normally have to pay extra for with the other vendors. As an interesting side note, the profiler UI is written in Java, because they also sell Java profilers with the same feature set.

    To get all methods of a VS startup you first need to configure it to include System* in the profiled methods, and you need to configure * to measure wall clock time. By default it records only CPU times, which lets you optimize CPU-hungry operations, but you will never see a Thread.Sleep(10000) blocking the UI in this mode (see the snippet below). Like the others, it can profile processes started from within the profiler, but it can also profile the next or all started processes. As usual, it can profile in sampling and tracing mode. But since it is a memory profiler as well, it does by default also record all object allocations > 1 MB. With allocation recording enabled VS2012 did crash, but without allocation recording there were no problems.

    The CPU tab contains the time line of the application, and when you click in the graph you see the call stacks of all threads at this time. This is really a nice feature. When you select a time region you see the CPU usage estimation for this time window. I have seen many applications consuming 100% CPU only because they were creating garbage like crazy. For this, the Garbage Collection tab is interesting in conjunction with a time range. This view is like the CPU tab except that the CPU graph (green) is missing. All relevant information except for GCs/s is already visible in the CPU tab. Very handy to pinpoint excessive GC or CPU bound issues.

    The Threads tab shows the thread names and their lifetimes. This is useful to see thread interactions or which thread is hottest in terms of CPU consumption. On the CPU tab the call tree exists in a merged and a thread-specific view. When you click on a method you get below it a list of all called methods. There you can sort for methods with a high own time, which are worth optimizing. In the Method List you can select which scope you want to see. Back Traces are the methods which did call you. Callees is the list of methods called directly or indirectly by your method, as a flat list. This is not a call stack, but it is still very useful to see which methods were slow, so you can find the "root" cause quite quickly without the need to click through long call stacks. The last view, Merged Callees, is a call-stacked view of the previous one. This helps a lot to understand who did call each method at run time. You would get the same view with a debugger for one call invocation, but here you get the full statistics (invocation count) as well.

    Since YourKit is also a memory profiler, you can directly see which objects you have on your managed heap and which objects hold most of your precious memory. In the Object Explorer view you can also examine the contents of your objects (strings or whatsoever) to get a better understanding of which objects were potentially allocating this stuff.

    YourKit is a very easy to use combined memory and performance profiler in one product. The unbeatable single-license price makes it very attractive to simply buy it outright. Although it has a Java UI, it is very responsive, and the memory consumption is considerably lower compared to dotTrace and ANTS profiler.
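    To make the wall-clock versus CPU-time point concrete, here is a toy snippet (my illustration, not from the original post or YourKit's documentation). In CPU-time mode a profiler attributes almost nothing to this method, because sleeping burns no CPU; only wall-clock measurement reveals the ten-second stall:

        using System;
        using System.Threading;

        class WallClockDemo
        {
            static void Main()
            {
                BlockUi();
                Console.WriteLine("~10 s of wall-clock time, ~0 s of CPU time.");
            }

            // Blocks the thread without consuming CPU; invisible to CPU-time profiling.
            static void BlockUi()
            {
                Thread.Sleep(10000);
            }
        }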
    What I really like is to start the YourKit UI and then start the processes I want to profile as usual. There is no need to alter your own application code to be able to inject a profiler into your newly started processes. For performance and memory profiling you can simply select the process you want to investigate from the list of started processes. That's the way I like to use profilers: just get out of the way and let the application run without any special preparations.

    Next: Telerik JustTrace


  • Alternative way of developing for ASP.NET to WebForms - Any problems with this?

    - by John
    So I have been developing in ASP.NET WebForms for some time now, but often get annoyed with all the overhead (like ViewState and all the JavaScript it generates) and the way WebForms takes over a lot of the HTML generation. Sometimes I just want full control over the markup and produce efficient HTML of my own, so I have been experimenting with what I like to call HtmlForms. Essentially this is using ASP.NET WebForms but without the form runat="server" tag. Without this tag, ASP.NET does not seem to add anything to the page at all. From some basic tests it seems that it runs well and you still have the ability to use code-behind pages, and many ASP.NET controls such as repeaters.

    Of course, without the form runat="server" many controls won't work. A post at Enterprise Software Development lists the controls that do require the tag. From that list you will see that all of the form elements like TextBoxes, DropDownLists, RadioButtons, etc. cannot be used. Instead you use normal HTML form controls.

    But how do you access these HTML controls from the code-behind? Retrieving values on post back is easy: you just use Request.QueryString or Request.Form. But passing data to the control could be a little messy. Do you use an ASP.NET Literal control in the value field, or do you use <%= value %> in the markup page? I found it best to add runat="server" to my HTML controls; then you can access the control in your code-behind like this:

    ((HtmlInputText)txtName).Value = "blah";

    Here's an example that shows what you can do with a textbox and drop down list:

    Default.aspx

    <%@ Page Language="C#" AutoEventWireup="true" CodeBehind="Default.aspx.cs" Inherits="NoForm.Default" %>
    <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
    <html xmlns="http://www.w3.org/1999/xhtml">
    <head runat="server">
        <title></title>
    </head>
    <body>
        <form action="" method="post">
            <label for="txtName">Name:</label>
            <input id="txtName" name="txtName" runat="server" /><br />
            <label for="ddlState">State:</label>
            <select id="ddlState" name="ddlState" runat="server">
                <option value=""></option>
            </select><br />
            <input type="submit" value="Submit" />
        </form>
    </body>
    </html>

    Default.aspx.cs

    using System;
    using System.Web.UI.HtmlControls;
    using System.Web.UI.WebControls;

    namespace NoForm
    {
        public partial class Default : System.Web.UI.Page
        {
            protected void Page_Load(object sender, EventArgs e)
            {
                // Default values
                string name = string.Empty;
                string state = string.Empty;

                if (Request.RequestType == "POST")
                {
                    // If form submitted (post back)
                    name = Request.Form["txtName"];
                    state = Request.Form["ddlState"];
                    // Server side form validation would go here
                    // and actions to process form and redirect
                }

                ((HtmlInputText)txtName).Value = name;
                ((HtmlSelect)ddlState).Items.Add(new ListItem("ACT"));
                ((HtmlSelect)ddlState).Items.Add(new ListItem("NSW"));
                ((HtmlSelect)ddlState).Items.Add(new ListItem("NT"));
                ((HtmlSelect)ddlState).Items.Add(new ListItem("QLD"));
                ((HtmlSelect)ddlState).Items.Add(new ListItem("SA"));
                ((HtmlSelect)ddlState).Items.Add(new ListItem("TAS"));
                ((HtmlSelect)ddlState).Items.Add(new ListItem("VIC"));
                ((HtmlSelect)ddlState).Items.Add(new ListItem("WA"));

                if (((HtmlSelect)ddlState).Items.FindByValue(state) != null)
                    ((HtmlSelect)ddlState).Value = state;
            }
        }
    }
less overhead like ViewState and all the JavaScript ASP.NET adds. Interestingly, you can also use HttpPostedFile to handle file uploads using your own input type="file" control (and the necessary form enctype="multipart/form-data"). So my question is: can you see any problems with this method, and any thoughts on its usefulness? I have further details and tests on my blog.
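To illustrate the file-upload variation mentioned above, here is a minimal sketch (untested; the control name, upload folder and checks are illustrative assumptions, not from the original post). The markup is a plain HTML form:

<form action="" method="post" enctype="multipart/form-data">
    <input type="file" name="fileUpload" />
    <input type="submit" value="Upload" />
</form>

And the code-behind reads the file straight from the request:

using System.IO; // for Path.GetFileName

protected void Page_Load(object sender, EventArgs e)
{
    if (Request.RequestType == "POST")
    {
        // Request.Files is populated even without form runat="server"
        HttpPostedFile file = Request.Files["fileUpload"];
        if (file != null && file.ContentLength > 0)
        {
            // The uploads folder is illustrative only; validate type and size in real code
            string safeName = Path.GetFileName(file.FileName);
            file.SaveAs(Server.MapPath("~/uploads/" + safeName));
        }
    }
}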

    Read the article

  • Taking the Plunge - or Dipping Your Toe - into the Fluffy IAM Cloud by Paul Dhanjal (Simeio Solutions)

    - by Greg Jensen
In our last three posts, we’ve examined the revolution that’s occurring today in identity and access management (IAM). We looked at the business drivers behind the growth of cloud-based IAM, the shortcomings of the old, last-century IAM models, and the new opportunities that federation, identity hubs and other new cloud capabilities can provide by changing the way you interact with everyone who does business with you. In this, our final post in the series, we’ll cover the key things you, the enterprise architect, should keep in mind when considering moving IAM to the cloud. Invariably, what starts the consideration process is a burning business need: a compliance requirement, security vulnerability or belt-tightening edict. Many on the business side view IAM as the “silver bullet” – and for good reason. You can almost always devise a solution using some aspect of IAM. The most critical question to ask first when using IAM to address the business need is, simply: is my solution complete? Typically, “business” is not focused on the big picture. Understandably, they’re focused instead on the need at hand: Can we be HIPAA compliant in 6 months? Can we tighten our new hire, employee transfer and termination processes? What can we do to prevent another password breach? Can we reduce our service center costs by the end of next quarter? The business may not be focused on the complete set of services offered by IAM but rather a single aspect or two. But it is the job – indeed the duty – of the enterprise architect to ensure that all aspects are being met. It’s like remodeling a house but failing to consider the impact on the foundation, the furnace, or the zoning and setback requirements. While the homeowners may not be thinking of such things, the architect, of course, must. At Simeio Solutions, the way we ensure that all aspects are being taken into account – to expose any gaps or weaknesses – is to assess our client’s IAM capabilities against a five-step maturity model ranging from “ad hoc” to “optimized.” The model we use is similar to Capability Maturity Model Integration (CMMI) developed by the Software Engineering Institute (SEI) at Carnegie Mellon University. It’s based upon some simple criteria, which can provide a visual representation of how well our clients fare when evaluated against four core categories: · Program Governance · Access Management (e.g., Single Sign-On) · Identity and Access Governance (e.g., Identity Intelligence) · Enterprise Security (e.g., DLP and SIEM) Often our clients believe they have a solution with all the bases covered, but the model exposes the gaps or weaknesses. The gaps are ideal opportunities for the cloud to enter into the conversation. The complete process is straightforward: 1. Look at the big picture, not just the immediate need – what is our roadmap and how does this solution fit? 2. Determine where you stand with respect to the four core areas – what are the gaps? 3. Decide how to cover the gaps – what role can the cloud play? Returning to our home remodeling analogy, at some point, if gaps or weaknesses are discovered when evaluating the complete impact of the proposed remodel – if the existing foundation wouldn’t support the new addition, for example – the owners need to decide if it’s time to move to a new house instead of trying to remodel the old one. However, with IAM it’s not an either-or proposition – i.e., either move to the cloud or fix the existing infrastructure.
It’s possible to use new cloud technologies just to cover the gaps. Many of our clients start their migration to the cloud this way, dipping in a toe instead of taking the plunge all at once. Because our cloud services offering is based on the Oracle Identity and Access Management Suite, we can offer a tremendous amount of flexibility in this regard. The Oracle platform is not a collection of point solutions, but rather a complete, integrated, best-of-breed suite. Yet it’s not an all-or-nothing proposition. You can choose just the features and capabilities you need using a pay-as-you-go model, incrementally turning services on and off as needed. Better still, all the other capabilities are there, at the ready, whenever you need them. Spooling up these cloud-only services takes just a fraction of the time it would take a typical organization to deploy internally. SLAs in the cloud may be higher than on premises, too. And by using a suite of software that’s complete and integrated, you can dramatically lower cost and complexity. If your in-house solution cannot be migrated to the cloud, you might consider using hardware appliances such as Simeio’s Cloud Interceptor to extend your enterprise out into the network. You might also consider using Expert Managed Services. Cost is usually the key factor – not just development costs but also operational sustainment costs. Talent or resourcing issues often come into play when thinking about sustaining a program. Expert Managed Services such as those we offer at Simeio can address those concerns head on. In a cloud offering, identity and access services lend themselves to the new paradigms described in my previous posts. Most importantly, it allows us all to focus on what we're meant to do – provide value, lower costs and increase security to our respective organizations. It’s that magic “silver bullet” that business knew you had all along. If you’d like to talk more, you can find us at simeiosolutions.com.

    Read the article

  • Using Unity – Part 4

    - by nmarun
In this part, I’ll be discussing constructor and property (setter) injection. I’ve created a new class – Product3: 1: public class Product3 : IProduct 2: { 3: public string Name { get; set; } 4: [Dependency] 5: public IDistributor Distributor { get; set; } 6: public ILogger Logger { get; set; } 7:  8: public Product3(ILogger logger) 9: { 10: Logger = logger; 11: Name = "Product 1"; 12: } 13:  14: public string WriteProductDetails() 15: { 16: StringBuilder productDetails = new StringBuilder(); 17: productDetails.AppendFormat("{0}<br/>", Name); 18: productDetails.AppendFormat("{0}<br/>", Logger.WriteLog()); 19: productDetails.AppendFormat("{0}<br/>", Distributor.WriteDistributorDetails()); 20: return productDetails.ToString(); 21: } 22: } This version has a property of type IDistributor and takes a constructor parameter of type ILogger. The IDistributor property has a Dependency attribute (Microsoft.Practices.Unity namespace) applied to it. IDistributor and its implementation are shown below: 1: public interface IDistributor 2: { 3: string WriteDistributorDetails(); 4: } 5:  6: public class Distributor : IDistributor 7: { 8: public List<string> DistributorNames = new List<string>(); 9:  10: public Distributor() 11: { 12: DistributorNames.Add("Distributor1"); 13: DistributorNames.Add("Distributor2"); 14: DistributorNames.Add("Distributor3"); 15: DistributorNames.Add("Distributor4"); 16: } 17: public string WriteDistributorDetails() 18: { 19: StringBuilder distributors = new StringBuilder(); 20: for (int i = 0; i < DistributorNames.Count; i++) 21: { 22: distributors.AppendFormat("{0}<br/>", DistributorNames[i]); 23: } 24: return distributors.ToString(); 25: } 26: } ILogger and the FileLogger have the following definition: 1: public interface ILogger 2: { 3: string WriteLog(); 4: } 5:  6: public class FileLogger : ILogger 7: { 8: public string WriteLog() 9: { 10: return string.Format("Type: {0}", GetType()); 11: } 12: } The Unity container creates an instance of the dependent class (the Distributor class) within the scope of the target object (an instance of the Product3 class that will be created by doing a Resolve<IProduct>() in the calling code) and assigns this dependent object to the attributed property of the target object. To add to it, property injection is a form of optional injection of dependent objects. The dependent object instance is generated before the container returns the target object. Unlike constructor injection, you must apply the appropriate attribute in the target class to initiate property injection. Let’s see how to change the config file to make this work.
The first step is to add all the type aliases: 1: <typeAlias alias="Product3" type="ProductModel.Product3, ProductModel"/> 2: <typeAlias alias="ILogger" type="ProductModel.ILogger, ProductModel"/> 3: <typeAlias alias="FileLogger" type="ProductModel.FileLogger, ProductModel"/> 4: <typeAlias alias="IDistributor" type="ProductModel.IDistributor, ProductModel"/> 5: <typeAlias alias="Distributor" type="ProductModel.Distributor, ProductModel"/> Now define mappings for these aliases: 1: <type type="ILogger" mapTo="FileLogger" /> 2: <type type="IDistributor" mapTo="Distributor" /> Next step is to define the constructor and property injection in the config file: 1: <type type="IProduct" mapTo="Product3" name="ComplexProduct"> 2: <typeConfig extensionType="Microsoft.Practices.Unity.Configuration.TypeInjectionElement, Microsoft.Practices.Unity.Configuration"> 3: <constructor> 4: <param name="logger" parameterType="ILogger" /> 5: </constructor> 6: <property name="Distributor" propertyType="IDistributor"> 7: <dependency /> 8: </property> 9: </typeConfig> 10: </type> There you see a constructor element that says there’s a parameter named ‘logger’ that is of type ILogger. By default, the type of ILogger gets resolved to type FileLogger. There’s also a property named ‘Distributor’ which is of type IDistributor and which will get resolved to type Distributor. On the calling side, I’ve added a new button, whose click event does the following: 1: protected void InjectionButton_Click(object sender, EventArgs e) 2: { 3: unityContainer.RegisterType<IProduct, Product3>(); 4: IProduct product3 = unityContainer.Resolve<IProduct>(); 5: productDetailsLabel.Text = product3.WriteProductDetails(); 6: } This renders the following output: This completes the part for constructor and property injection. In the next blog, I’ll talk about Arrays and Generics. Please see the code used here.
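As a side note, the same wiring can also be done in code instead of configuration. A minimal sketch (assuming the Unity 2.x API with InjectionConstructor and InjectionProperty, and the same types as above; the registration name mirrors the config file):

using Microsoft.Practices.Unity;

var container = new UnityContainer();
container.RegisterType<ILogger, FileLogger>();
container.RegisterType<IDistributor, Distributor>();

// Mirrors the <constructor> and <property> elements from the config file
container.RegisterType<IProduct, Product3>("ComplexProduct",
    new InjectionConstructor(typeof(ILogger)),
    new InjectionProperty("Distributor"));

IProduct product = container.Resolve<IProduct>("ComplexProduct");

Whether you prefer the config file or the fluent API is mostly a deployment question: configuration can be changed without recompiling, while code registrations are checked by the compiler.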

    Read the article

  • Collision Detection problems in Voxel Engine (XNA)

    - by Darestium
I am creating a Minecraft-like terrain engine in XNA and have had some collision problems for quite some time. I have checked and changed my code based on other people's collision code and I still have the same problem. It always seems to be off by about a block. For instance, if I walk across a bridge which is one block high, I fall through it. Also, if you walk towards a "row" of blocks like this: You are able to stand "inside" the leftmost one, and you collide with nothing on the rightmost side (where there is no block and is not visible in this image). Here is all my collision code: private void Move(GameTime gameTime, Vector3 direction) { float speed = playermovespeed * (float)gameTime.ElapsedGameTime.TotalSeconds; Matrix rotationMatrix = Matrix.CreateRotationY(player.Camera.LeftRightRotation); Vector3 rotatedVector = Vector3.Transform(direction, rotationMatrix); rotatedVector.Normalize(); Vector3 testVector = rotatedVector; testVector.Normalize(); Vector3 movePosition = player.position + testVector * speed; Vector3 midBodyPoint = movePosition + new Vector3(0, -0.7f, 0); Vector3 headPosition = movePosition + new Vector3(0, 0.1f, 0); if (!world.GetBlock(movePosition).IsSolid && !world.GetBlock(midBodyPoint).IsSolid && !world.GetBlock(headPosition).IsSolid) { player.position += rotatedVector * speed; } //player.position += rotatedVector * speed; } ... public void UpdatePosition(GameTime gameTime) { player.velocity.Y += playergravity * (float)gameTime.ElapsedGameTime.TotalSeconds; Vector3 footPosition = player.Position + new Vector3(0f, -1.5f, 0f); Vector3 headPosition = player.Position + new Vector3(0f, 0.1f, 0f); // If the block below the player is solid the Y velocity should be zero if (world.GetBlock(footPosition).IsSolid || world.GetBlock(headPosition).IsSolid) { player.velocity.Y = 0; } UpdateJump(gameTime); UpdateCounter(gameTime); ProcessInput(gameTime); player.Position = player.Position + player.velocity * (float)gameTime.ElapsedGameTime.TotalSeconds; velocity = Vector3.Zero; } and the one and only function in the camera class: protected void CalculateView() { Matrix rotationMatrix = Matrix.CreateRotationX(upDownRotation) * Matrix.CreateRotationY(leftRightRotation); lookVector = Vector3.Transform(Vector3.Forward, rotationMatrix); cameraFinalTarget = Position + lookVector; Vector3 cameraRotatedUpVector = Vector3.Transform(Vector3.Up, rotationMatrix); viewMatrix = Matrix.CreateLookAt(Position, cameraFinalTarget, cameraRotatedUpVector); } which is called when the rotation variables are changed: public float LeftRightRotation { get { return leftRightRotation; } set { leftRightRotation = value; CalculateView(); } } public float UpDownRotation { get { return upDownRotation; } set { upDownRotation = value; CalculateView(); } } World class: public Block GetBlock(int x, int y, int z) { if (InBounds(x, y, z)) { Vector3i regionalPosition = GetRegionalPosition(x, y, z); Vector3i region = GetRegionPosition(x, y, z); return regions[region.X, region.Y, region.Z].Blocks[regionalPosition.X, regionalPosition.Y, regionalPosition.Z]; } return new Block(BlockType.none); } public Vector3i GetRegionPosition(int x, int y, int z) { int regionx = x == 0 ? 0 : x / Variables.REGION_SIZE_X; int regiony = y == 0 ? 0 : y / Variables.REGION_SIZE_Y; int regionz = z == 0 ? 0 : z / Variables.REGION_SIZE_Z; return new Vector3i(regionx, regiony, regionz); } public Vector3i GetRegionalPosition(int x, int y, int z) { int regionx = x == 0 ?
0 : x / Variables.REGION_SIZE_X; int X = x % Variables.REGION_SIZE_X; int regiony = y == 0 ? 0 : y / Variables.REGION_SIZE_Y; int Y = y % Variables.REGION_SIZE_Y; int regionz = z == 0 ? 0 : z / Variables.REGION_SIZE_Z; int Z = z % Variables.REGION_SIZE_Z; return new Vector3i(X, Y, Z); } Any ideas how to fix this problem? EDIT 1: Graphic of the problem: EDIT 2 GetBlock, Vector3 version: public Block GetBlock(Vector3 position) { int x = (int)Math.Floor(position.X); int y = (int)Math.Floor(position.Y); int z = (int)Math.Ceiling(position.Z); Block block = GetBlock(x, y, z); return block; } Now, the thing is I tested the theory that the Z is always "off by one", and by ceiling the value it actually works as intended. Although it could still be much more accurate (when you go down holes you can see through the sides, and I doubt it will work with negative positions). It also does not feel clean flooring the X and Y values and just ceiling the Z. I am surely still not doing something correctly.
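One avenue worth checking (an assumption based on the code shown, not a verified fix): the fact that ceiling only Z "works" suggests the world-to-grid conversion and the renderer's block placement disagree by one on that axis. A consistent conversion would floor all three axes; a sketch:

// Sketch: consistent world-to-grid conversion, flooring every axis.
// Math.Floor also behaves correctly for negative positions, unlike
// integer truncation. If the off-by-one persists with this version,
// the mismatch is likely in how blocks are positioned when rendered,
// not in this lookup.
public Block GetBlock(Vector3 position)
{
    int x = (int)Math.Floor(position.X);
    int y = (int)Math.Floor(position.Y);
    int z = (int)Math.Floor(position.Z);
    return GetBlock(x, y, z);
}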

    Read the article

  • Synergy - easy share of keyboard and mouse between multiple computers

Did you ever have the urge to share one set of keyboard and mouse between multiple machines? If so, please read on... Using multiple machines Honestly, as a software craftsman it is my daily business to run multiple machines - either physical or virtual - to be able to solve my customers' requirements. Recent hardware equipment allows this very easily. For laptops it's a no-brainer to attach a second or even a third screen in order to extend your native display. This works quite handily, and in my case I used to attach two additional screens - one via HD15 connector, the other via HDMI. But... as it's a laptop and therefore a mobile unit there are slight restrictions. Detaching and re-attaching all cables when changing locations is one of them, and hardware limitations, too. After all, it's a laptop and not a workstation. I guess that anyone working in IT (or ICT) has more than one machine at their workplace or their home office, and at least I find it quite annoying to have multiple sets of keyboard and mouse conquering the remaining space on my desk. Despite the ugly look of all those cables and the general 'chaos of distraction' I prefer a cleaner solution and working environment. This allows me to actually focus on my work and tasks to do rather than worry about choosing the right combination of keyboard/mouse. My current workplace is a patchwork of various pieces of hardware (approx. 2-3 years): DIY desktop on Ubuntu 12.04 64-bit, Core2 Duo (E7400, 2.8GHz), 4GB RAM, 2x 250GB HDD, nVidia GPU 512MB Dell Inspiron 1525 on Windows 8 64-bit, 4GB RAM, 200GB HDD HP Compaq 6720s on Windows Vista 32-bit, Core2 Duo (T5670, 1.8GHz), 2GB RAM, 160GB HDD Mac mini on Mac OS X 10.7, Core i5 (2.3 GHz), 2GB RAM, 500GB HDD I know... Not the latest and greatest but a decent combination to work with. New system(s) is/are already on the shopping list but I live in the 'wrong' country to buy computer hardware. So, the next trip abroad will provide me with some new stuff. Using multiple operating systems The list of hardware above already names different operating systems, and actually I have only one preference: Linux. But still my job as a software craftsman for Visual FoxPro and .NET development requires other OSes, too. Not a big deal, it's just like this. In addition to those physical machines, there are a bunch of virtual machines around, most of them running either Windows XP or Windows 7. For years my practice has been to isolate development for each customer into its own virtual machine and environment. This keeps it clean and version-safe. But as you can easily imagine, with that setup there are a couple of constraints regarding keyboard and mouse. Usually, those systems require their own pieces of hardware attached. As stated, I don't like clutter on my desk's surface, so a cross-platform solution has to come in here. In the past, I tried it with various applications, hardware or network protocols like X11, RDP, NX, TeamViewer, RAdmin, KVM switch, etc. but the problem in this case is that they either allow you to remotely connect to the other system or exclusively 'bind' your peripherals to the active system. Not optimal after all. Synergy to the rescue Quote from their website: "Synergy lets you easily share your mouse and keyboard between multiple computers on your desk, and it's Free and Open Source. Just move your mouse off the edge of one computer's screen on to another. You can even share all of your clipboards. All you need is a network connection.
Synergy is cross-platform (works on Windows, Mac OS X and Linux)." Yep, that's it! All I need for my setup here... Actually, I couldn't believe it myself that I didn't stumble over Synergy earlier, but 'Get over it' and there we go. And not only is it Open Source, it's also free. Donations for the developers are very welcome, and recently they introduced Synergy Premium: a possibility to buy so-called premium votes that can be used to put more weight / importance on specific issues or bugs that you would like the developers to look into. Installation and configuration Simply download the installation packages for your systems of choice, run the installer and enter some minor information about your network setup. I chose my desktop machine for the role of the Synergy server and configured my screen setup as follows: The screen setup currently allows you to connect up to 15 machines. The number of screens can be higher, as those machines might have multiple screens physically attached. Synergy takes this into account in the overall calculations and simply works as expected. I tried it for fun with a second monitor connected to each of the two laptops, for a total of 6 active screens. No flaws at all - stunning! All the other machines are configured as clients like so: Side note: The screenshot was taken on Windows 8 and pasted via clipboard into Gimp running on Ubuntu. Summary Synergy is now definitely in my box of tools for my daily work, and amongst the first pieces of software I install after the operating system. It just simplifies my life and cleans my desk. Never again without Synergy! Now, only waiting for an Android version to integrate my Galaxy Tab 10.1, too. ;-) Please, check out that superb product and enjoy sharing one keyboard, one mouse and one clipboard between your various machines and operating systems.
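For reference, on Linux the same screen arrangement can also be expressed in a plain-text synergy.conf instead of the GUI. A minimal sketch under the classic 1.x config format (the hostnames desktop and laptop are placeholders for your actual machine names):

section: screens
    desktop:
    laptop:
end

section: links
    desktop:
        right = laptop
    laptop:
        left = desktop
end

The links section is what makes the mouse "fall off" the right edge of the desktop onto the laptop and back again; each direction has to be declared explicitly.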

    Read the article

  • Windows Phone 8 Announcement

    - by Tim Murphy
As if the Surface announcement on Monday wasn’t exciting enough, today Microsoft announced that Windows Phone 8 will be coming this fall. That itself is great news, but the features coming were like confetti flying in all different directions. Given this speed I couldn’t capture every feature they covered. A summary of what I did capture is listed below starting with their eight main features. Common Core The first thing that they covered is that Windows Phone 8 will share a core OS with Windows 8. It will also run natively on multiple cores. They mentioned that they have run it on up to 64 cores to this point. The phones as you might expect will at least start as dual core. If you remember, there were metrics saying that Windows Phone 7 performed operations faster on a single core than other platforms did with dual cores. The metrics they showed here indicate that Windows Phone 8 runs faster on comparable dual core hardware than other platforms. New Screen Resolutions Screen resolution has never been an issue for me, but it has been a criticism of Windows Phone 7 in the media. Windows Phone 8 will support three screen resolutions: WVGA 800 x 480, WXGA 1280 x 768, and 720p 1280 x 720. Hopefully this makes pixel counters a little happier. MicroSD Support This was one of my pet peeves when I got my Samsung Focus. With Windows Phone 8 the operating system will support adding MicroSD cards after initial setup. Of course this is dependent on the hardware company implementing it, but I think we have seen that even feature phone manufacturers have not had a problem supporting this in the past. NFC NFC has been an anticipated feature for some time. What Microsoft showed today included the fact that they didn’t just want it to be for the phone. There is cross platform NFC functionality between Windows Phone 8 and Windows 8. The demos, while possibly a bit fanciful, showed what could be achieved even in a retail environment. We are getting closer and closer to a Minority Report world with these technologies. Wallet Windows Phone 8 isn’t the first platform to have a wallet concept. What they have done to differentiate themselves is to make it so that it is not dependent on a SIM type chip like other platforms. They have also expanded the concept beyond just banks to other types of credits such as airline miles. Nokia Mapping People have been envious of the Lumia phones having the Nokia mapping software. Now all Windows Phone 8 devices will use NavTeq data and will have the capability to run in an offline fashion. This is a major step forward from the Bing “touch for the next turn” maps. IT Administration The lack of features for enterprise administration and deployment was a complaint even before Windows Phone 7 was released. With the Windows Phone 8 release such features as BitLocker and Secure Boot will be baked into the OS. We will also have the ability to privately sign and distribute applications. Changing Start Screen Joe Belfiore made a big deal about this aspect of the new release. Users will have more color themes available to them and the live tiles will be highly customizable. You will have the ability to resize and organize the tiles in a more dynamic way. This allows for less important tiles or ones with less information to be made smaller. And There Is More So what other tidbits came out of the presentation? Later this summer the API for WP8 will be available. There will be developer events coming to a city near you.
Another announcement of interest to developers is the ability to write applications at a native code level. This is a boon for game developers and those who need highly efficient applications. As a topper on the cake there was mention of in-app payment. On the consumer side we also found out that all updates will be available over the air. Along with this came the fact that Microsoft will support all devices with updates for at least 18 months, and you will be able to subscribe for early updates. An update to WP7.8 is coming for Windows Phone 7.5 customers. The main enhancement will be the new live tile features. The big bonus is that the update will bypass the carriers. I would assume though that you will be brought up to date with all previous patches that your carrier may not have released. There is so much more, but that is enough for one post. Needless to say, EXCITING! del.icio.us Tags: Windows Phone 8,WP8,Windows Phone 7,WP7,Announcements,Microsoft

    Read the article

  • LINQ and ArcObjects

    - by Marko Apfel
Motivation LINQ (language integrated query) has been a component of the Microsoft .NET Framework since version 3.5. It allows SQL-like queries against various data sources such as SQL, XML, etc. Like SQL, LINQ provides a declarative notation for problem solving – i.e. you don’t describe in detail how a task is to be solved; you describe what is to be solved at all. This frees the developer from error-prone iterator constructs. Ideally, of course, you would access features the same way. Then this construct is conceivable: var largeFeatures = from feature in features where (feature.GetValue("SHAPE_Area").ToDouble() > 3000) select feature; or its equivalent as a lambda expression: var largeFeatures = features.Where(feature => (feature.GetValue("SHAPE_Area").ToDouble() > 3000)); This requires an appropriate provider, which manages the corresponding iterator logic. This is easier than you might think at first sight - you have to deliver only the desired entities as IEnumerable<IFeature>. LINQ automatically establishes a state machine in the background, whose execution is delayed (deferred execution) - only when you actually request entities (foreach, Count(), ToList(), ...) does processing take place, although the query was already created at a completely different place. Especially with multiple iterations through the entities, in your first debugging sessions you will rub your eyes when the execution pointer magically jumps back into the iterator logic. Realization A very concise logic for constructing IEnumerable<IFeature> can be achieved by running through an IFeatureCursor. You return each feature via yield. For easier usage I have put the logic in an extension method GetFeatures() for IFeatureClass: public static IEnumerable<IFeature> GetFeatures(this IFeatureClass featureClass, IQueryFilter queryFilter, RecyclingPolicy policy) { IFeatureCursor featureCursor = featureClass.Search(queryFilter, RecyclingPolicy.Recycle == policy); IFeature feature; while (null != (feature = featureCursor.NextFeature())) { yield return feature; } //this is skipped in unit tests with cursor-mock if (Marshal.IsComObject(featureCursor)) { Marshal.ReleaseComObject(featureCursor); } } So you can now easily generate the IEnumerable<IFeature>: IEnumerable<IFeature> features = _featureClass.GetFeatures(null, RecyclingPolicy.DoNotRecycle); You have to be careful with the recycling cursor. After a delayed execution in the same context it is not a good idea to re-iterate over the features. In this case only the content of the last (recycled) feature is provided and all the features are the same in the second set. Therefore, this expression would be critical: largeFeatures.ToList().ForEach(feature => Debug.WriteLine(feature.OID)); because ToList() iterates once through the list, and so the cursor has already moved through the features. So the extension method ForEach() always delivers the same feature. In such situations, you must not use a recycling cursor. Repeated execution of ForEach() is not a problem, because each time the state machine is re-instantiated and thus the cursor runs again - that's the magic already mentioned above. Perspective Now you can also go one step further and realize your own implementation of the interface IEnumerable<IFeature>. This only requires programming the method and property that expose the enumerator. In the enumerator itself, the Reset() method organizes the re-execution of the search.
This could be achieved with an appropriate delegate in the constructor: new FeatureEnumerator<IFeatureClass>(_featureClass, featureClass => featureClass.Search(_filter, isRecyclingCursor)); which is called in Reset(): public void Reset() { _featureCursor = _resetCursor(_t); } In this manner, enumerators for completely different scenarios can be implemented, and they are used on the client side exactly as described above. Thus cursors, selection sets, etc. merge into a single abstraction, and the reusability of code increases immensely. On top of that, in automated unit tests an IEnumerable can be mocked very easily - a major step towards better software quality. Conclusion Nevertheless, caution should be exercised with these constructs in performance-relevant queries. Because a state machine is managed in the background, a lot of overhead is created. The processing costs additional time - about 20 to 100 percent. In addition, working without a recycling cursor can quickly become a performance gap. However, declarative LINQ code is much more elegant, flawless and easy to maintain than manually iterating, comparing and building a list of results. In my experience, code size is reduced by an average of 75 to 90 percent! So I like to wait a few milliseconds longer. As so often, it has to be balanced between maintainability and performance - and for me maintainability is gaining in priority. In times of multi-core processors, the processing time of most business processes is anyway not dominated by code execution but by waiting for user input. Demo source code You can download the source code for this prototype, with several unit tests, here: https://github.com/esride-apf/Linq2ArcObjects.
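To make the recycling pitfall concrete, here is a short usage sketch (assuming the extension method above; GetValue and ToDouble are the helper extensions the post's opening example already presupposes):

// Safe: a non-recycling cursor, so materializing the results once
// and re-reading the list afterwards is fine.
var largeFeatures = _featureClass
    .GetFeatures(null, RecyclingPolicy.DoNotRecycle)
    .Where(feature => feature.GetValue("SHAPE_Area").ToDouble() > 3000)
    .ToList();   // the cursor is iterated exactly once here

largeFeatures.ForEach(feature => Debug.WriteLine(feature.OID));

// With RecyclingPolicy.Recycle, the list above would hold N references
// to the single recycled feature object - every entry "the same feature".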

    Read the article

  • Styling ASP.NET MVC Error Messages

    - by MightyZot
Originally posted on: http://geekswithblogs.net/MightyZot/archive/2013/11/11/styling-asp.net-mvc-error-messages.aspx Off the cuff, it may look like you’re stuck with the presentation of your error messages (model errors) in ASP.NET MVC. That’s not the case, though. You actually have quite a number of options with regard to styling those boogers. Like many of the helpers in MVC, the Html.ValidationMessageFor helper has multiple overloads. One of those overloads lets you pass a dictionary, or anonymous object, representing attribute values for the resulting markup. @Html.ValidationMessageFor( m => Model.Whatever, null, new { @class = "my-error" }) By passing the htmlAttributes parameter, which is the last parameter in the call to the overload of Html.ValidationMessageFor shown above, I can style the resulting markup by associating styles to the my-error css class. When you run your MVC project and view the source, you’ll notice that MVC adds the class field-validation-valid or field-validation-error to a span created by the helper. You could actually just style those classes instead of adding your own…it’s really up to you. Now, what if you wanted to move that error message around? Maybe you want to put that error message in a box or a callout. How do you do that? When I first started using MVC, it didn’t occur to me that the Html.ValidationMessageFor helper just spits out a little bit of markup. I wanted to put the error messages in boxes with white backgrounds (our site originally had a black background) and show a little nib on the side to make them look like callouts or conversation bubbles. Not realizing how much freedom there is in the styling and markup, and after reading someone else’s post, I created my own version of the ValidationMessageFor helper that took out the span and replaced it with divs. I styled the divs to produce the effect of a popup box and had a lot of trouble with sizing and such. That’s a really silly and unnecessary way to solve this problem. If you want to move your error messages around, all you have to do is move the helper. MVC doesn’t appear to care where you put it, which makes total sense when you think about it. Html.ValidationMessageFor is just spitting out a little markup using a little bit of reflection on the name you’re passing it. All you’ve got to do to style it the way you want it is to put it in whatever markup you desire. Take a look at this, for example… <div class="my-anchor">@Html.ValidationMessageFor( m => Model.Whatever )</div> @Html.TextBoxFor(m => Model.Whatever) Now, given that bit of HTML, consider the following CSS… <style> .my-anchor { position:relative; } .field-validation-error { background-color:white; border-radius:4px; border: solid 1px #333; display: block; position: absolute; top:0; right:0; left:0; text-align:right; } </style> The my-anchor class establishes an anchor for the absolutely positioned error message. Now you can move the error message wherever you want it relative to the anchor. Using CSS3, there are some other tricks. For example, you can use the :not(:empty) selector to select the span and apply styles based upon whether or not the span has text in it. Keep it simple, though. Moving your elements around using absolute positioning may cause you issues on devices with screens smaller than your standard laptop or PC. While looking for something else recently, I saw someone asking how to style the output for Html.ValidationSummary.
Html.ValidationSummary is the helper that will spit out a list of property errors, general model errors, or both. Html.ValidationSummary spits out fairly simple markup as well, so you can use the techniques described above with it also. The resulting markup is a <ul><li></li></ul> unordered list of error messages that carries the class validation-summary-errors. In the forum question, the user was asking how to hide the error summary when there are no errors. Their errors were in a red box and they didn’t want to show an empty red box when there aren’t any errors. Obviously, you can use the CSS3 selectors to apply different styles to the list when it’s empty and when it’s not empty; however, that’s not supported in all browsers. Well, it just so happens that the unordered list carries the style validation-summary-valid when the list is empty. While Html.ValidationSummary renders a visible div containing one invisible list item, you can always just style the whole div with "display:none" when the validation-summary-valid class is applied and make it visible when the validation-summary-errors class is applied. Or, if you don’t like that solution, which I like quite well, you can also check the model state for errors with something like this… int errors = ViewData.ModelState.Sum(ms => ms.Value.Errors.Count); That’ll give you a count of the errors that have been added to ModelState. You can check that and conditionally include markup in your page if you want to. The choice is yours. Obviously, doing most everything you can with styles increases the flexibility of the presentation of your solution, so I recommend going that route when you can. That picture of the fat guy jumping has nothing to do with the article. That’s just a picture of me on the roof and I thought it was funny. Doesn’t every post need a picture?
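Putting that last tip together, a small Razor sketch (the view markup and the error-box class name are illustrative, not from the original post):

@* Only render the summary box when ModelState actually has errors *@
@if (ViewData.ModelState.Sum(ms => ms.Value.Errors.Count) > 0)
{
    <div class="error-box">
        @Html.ValidationSummary(false)
    </div>
}

Passing false to ValidationSummary includes property-level errors in the list; pass true to show only the general model errors.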

    Read the article

  • Efficient Way to Draw Grids in XNA

    - by sm81095
So I am working on a game right now, using Monogame as my framework, and it has come time to render my world. My world is made up of a grid (think Terraria but top-down instead of from the side), and it has multiple layers of grids in a single world. Knowing how inefficient it is to call SpriteBatch.Draw() a lot of times, I tried to implement a system where the tile would only be drawn if it wasn't hidden by the layers above it. The problem is, I'm getting worse performance by checking if it's hidden than when I just let everything draw even if it's not visible. So my question is: how do I efficiently check if a tile is hidden to cut down on the draw() calls? Here is my draw code for a single layer, drawing floors, and then the tiles (which act like walls): public void Draw(GameTime gameTime) { int drawAmt = 0; int width = Tile.TILE_DIM; int startX = (int)_parent.XOffset; int startY = (int)_parent.YOffset; //Gets the starting tiles and the dimensions to draw tiles, so only onscreen tiles are drawn, allowing for the drawing of large worlds int tileDrawWidth = ((CIGame.Instance.Graphics.PreferredBackBufferWidth / width) + 4); int tileDrawHeight = ((CIGame.Instance.Graphics.PreferredBackBufferHeight / width) + 4); int tileStartX = (int)MathHelper.Clamp((-startX / width) - 2, 0, this.Width); int tileStartY = (int)MathHelper.Clamp((-startY / width) - 2, 0, this.Height); #region Draw Floors and Tiles CIGame.Instance.GraphicsDevice.SetRenderTarget(_worldTarget); CIGame.Instance.GraphicsDevice.Clear(Color.Black); CIGame.Instance.SpriteBatch.Begin(); //Draw floors for (int x = tileStartX; x < (int)MathHelper.Clamp(tileStartX + tileDrawWidth, 0, this.Width); x++) { for (int y = tileStartY; y < (int)MathHelper.Clamp(tileStartY + tileDrawHeight, 0, this.Height); y++) { //Check if this tile is hidden by layer above it bool visible = true; for (int i = this.LayerNumber; i <= _parent.ActiveLayer; i++) { if (this.LayerNumber != (_parent.Layers - 1) && (_parent.GetTileAt(x, y, i + 1).Opacity >= 1.0f || _parent.GetFloorAt(x, y, i + 1).Opacity >= 1.0f)) { visible = false; break; } } //Only draw if visible under the tile above it if (visible && this.GetTileAt(x, y).Opacity < 1.0f) { Texture2D tex = WorldTextureManager.GetFloorTexture((Floor)_floors[x, y]); Rectangle source = WorldTextureManager.GetSourceForIndex(((Floor)_floors[x, y]).GetTextureIndexFromSurroundings(x, y, this), tex); Rectangle draw = new Rectangle(startX + x * width, startY + y * width, width, width); CIGame.Instance.SpriteBatch.Draw(tex, draw, source, Color.White * ((Floor)_floors[x, y]).Opacity); drawAmt++; } } } //Draw tiles for (int x = tileStartX; x < (int)MathHelper.Clamp(tileStartX + tileDrawWidth, 0, this.Width); x++) { for (int y = tileStartY; y < (int)MathHelper.Clamp(tileStartY + tileDrawHeight, 0, this.Height); y++) { //Check if this tile is hidden by layers above it bool visible = true; for (int i = this.LayerNumber; i <= _parent.ActiveLayer; i++) { if (this.LayerNumber != (_parent.Layers - 1) && (_parent.GetTileAt(x, y, i + 1).Opacity >= 1.0f || _parent.GetFloorAt(x, y, i + 1).Opacity >= 1.0f)) { visible = false; break; } } if (visible) { Texture2D tex = WorldTextureManager.GetTileTexture((Tile)_tiles[x, y]); Rectangle source = WorldTextureManager.GetSourceForIndex(((Tile)_tiles[x, y]).GetTextureIndexFromSurroundings(x, y, this), tex); Rectangle draw = new Rectangle(startX + x * width, startY + y * width, width, width); CIGame.Instance.SpriteBatch.Draw(tex, draw, source, Color.White * ((Tile)_tiles[x, y]).Opacity);
drawAmt++; } } } CIGame.Instance.SpriteBatch.End(); Console.WriteLine(drawAmt); CIGame.Instance.GraphicsDevice.SetRenderTarget(null); //TODO: Change to new rendertarget instead of null #endregion } So I was wondering: is this an efficient approach that I'm just implementing wrongly, or is there a different, more efficient way to check whether the tiles are hidden? EDIT: As an example of how much it affects performance: using a world with three layers, allowing everything to draw no matter what gives me 60FPS, but checking if it's visible against all of the layers above it gives me only 20FPS, while checking only the layer immediately above it gives me a fluctuating FPS between 30 and 40FPS.
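One likely culprit (an assumption from reading the loops, not a verified fix): the visibility test re-walks every layer above a tile, calling GetTileAt and GetFloorAt per layer, per tile, and the whole thing runs twice - once in the floor pass and once in the tile pass. Precomputing an occlusion flag once per frame turns those inner loops into a single array lookup; a sketch:

// Sketch: build the occlusion mask once per frame, before the two draw loops.
// tileEndX/tileEndY stand in for the clamped bounds already computed in Draw();
// in real code, reuse the array across frames to avoid per-frame allocation.
bool[,] occluded = new bool[Width, Height];
for (int x = tileStartX; x < tileEndX; x++)
{
    for (int y = tileStartY; y < tileEndY; y++)
    {
        for (int i = this.LayerNumber + 1; i <= _parent.ActiveLayer; i++)
        {
            if (_parent.GetTileAt(x, y, i).Opacity >= 1.0f ||
                _parent.GetFloorAt(x, y, i).Opacity >= 1.0f)
            {
                occluded[x, y] = true;
                break;
            }
        }
    }
}
// In both draw loops, the per-layer check then collapses to:
// if (!occluded[x, y]) { ...draw... }

This halves the layer walking immediately (one pass instead of two) and opens the door to incremental updates: if tiles change rarely, the mask only needs recomputing when a layer's contents actually change, not every frame.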

    Read the article

  • The design of a generic data synchronizer, or, an [object] that does [actions] with the aid of [helpers]

    - by acheong87
I'd like to create a generic data-source "synchronizer," where data-source "types" may include MySQL databases, Google Spreadsheets documents, CSV files, among others. I've been trying to figure out how to structure this in terms of classes and interfaces, keeping in mind (what I've read about) composition vs. inheritance and is-a vs. has-a, but each route I go down seems to violate some principle. For simplicity, assume that all data-sources have a header-row-plus-data-rows format. For example, assume that the first rows of Google Spreadsheets documents and CSV files will have column headers, a.k.a. "fields" (to parallel database fields). Also, eventually, I would like to implement this in PHP, but avoiding language-specific discussion would probably be more productive. Here's an overview of what I've tried. Part 1/4: ISyncable class CMySQL implements ISyncable GetFields() // sql query, pdo statement, whatever AddFields() RemFields() ... _dbh class CGoogleSpreadsheets implements ISyncable GetFields() // zend gdata api AddFields() RemFields() ... _spreadsheetKey _worksheetId class CCsvFile implements ISyncable GetFields() // read from buffer AddFields() RemFields() ... _buffer interface ISyncable GetFields() AddFields($field1, $field2, ...) RemFields($field1, $field2, ...) ... CanAddFields() // maybe the spreadsheet is locked for write, or CanRemFields() // maybe no permission to alter a database table ... AddRow() ModRow() RemRow() ... Open() Close() ... First Question: Does it make sense to use an interface, as above? Part 2/4: CSyncer Next, the thing that does the syncing. class CSyncer __construct(ISyncable $A, ISyncable $B) Push() // sync A to B Pull() // sync B to A Sync() // Push() and Pull() only differ in direction; factor. // Sync()'s job is to make sure that the fields on each side // match, to add fields where appropriate and possible, to // account for different column-orderings, etc., and of // course, to add and remove rows as necessary to sync. ... _A _B Second Question: Does it make sense to define such a class, or am I treading dangerously close to the "Kingdom of Nouns"? Part 3/4: CTranslator? ITranslator? Now, here's where I actually get lost, assuming the above is passable. Sometimes, two ISyncables speak different "dialects." For example, believe it or not, Google Spreadsheets (accessed through the Google Data API "list feed") returns column headers lower-cased and stripped of all spaces and symbols! That is, sys_TIMESTAMP is systimestamp, as far as my code can tell. (Yes, I am aware that the "cell feed" does not strip the name this way; however, cell-by-cell manipulation is too slow for what I'm doing.) One can imagine other hypothetical examples. Perhaps even the data itself can be in different "dialects." But let's take it as given for now, and not argue this if possible. Third Question: How would you implement "translation"? Note: Taking all this as an exercise, I'm more interested in the "idealized" design, rather than the practical one. (God knows that ship sailed when I began this project.) Part 4/4: Further Thought Here's my train of thought to demonstrate I've thunk, albeit unfruitfully: First, I thought, primitively, "I'll just modify CMySQL::GetFields() to lower-case and strip field names so they're compatible with Google Spreadsheets." But of course, then my class should really be called, CMySQLForGoogleSpreadsheets, and that can't be right. So, the thing which translates must exist outside of an ISyncable implementor.
And surely it can't be right to make each translation a method in CSyncer. If it exists outside of both ISyncable and CSyncer, then what is it? (Is it even an "it"?) Is it an abstract class, i.e. abstract CTranslator? Is it an interface, since a translator only does, not has, i.e. interface ITranslator? Does it even require instantiation? e.g. If it's an ITranslator, then should its translation methods be static? (I learned what "late static binding" meant, today.) And, dear God, whatever it is, how should a CSyncer use it? Does it "have" it? Is it, "it"? Who am I? ...am I, "I"? I've attempted to break up the question into sub-questions, but essentially my question is singular: How does one implement an object A that conceptually "links" (has) two objects b1 and b2 that share a common interface B, where certain pairs of b1 and b2 require a helper, e.g. a translator, to be handled by A? Something tells me that I've overcomplicated this design, or violated a principle much higher up. Thank you all very much for your time and any advice you can provide.
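One possible shape for an answer (a sketch only, written in C# rather than the PHP the asker plans to use, and every name here is illustrative): treat translation as a small strategy interface that CSyncer consults when comparing fields, with an identity implementation for dialect-free pairs. The translator is then has-a state of the syncer, not of either ISyncable.

// Translates field names between two ISyncable "dialects".
public interface ITranslator
{
    string ToB(string fieldNameInA);   // e.g. "sys_TIMESTAMP" -> "systimestamp"
    string ToA(string fieldNameInB);
}

// Used when both sides already speak the same dialect.
public sealed class IdentityTranslator : ITranslator
{
    public string ToB(string f) => f;
    public string ToA(string f) => f;
}

public class CSyncer
{
    private readonly ISyncable _a;
    private readonly ISyncable _b;
    private readonly ITranslator _translator;

    public CSyncer(ISyncable a, ISyncable b, ITranslator translator = null)
    {
        _a = a;
        _b = b;
        _translator = translator ?? new IdentityTranslator();
    }

    // All field comparison inside Push/Pull/Sync goes through _translator,
    // so neither ISyncable implementor ever learns the other's dialect.
}

This keeps CMySQL ignorant of Google Spreadsheets (answering the CMySQLForGoogleSpreadsheets worry), keeps translation out of CSyncer's methods (it only delegates), and lets a pair-specific translator like a hypothetical MySqlToListFeedTranslator live in one small class.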

    Read the article

  • The sign of a true manager is delegation (C# style)

    - by MarkPearl
Today I thought I would write a bit about delegates in C#. Until recently I have managed to sidestep any real understanding of what delegates do and why they are useful – I mean, I know roughly what they do and have used them a lot, but I have never really got down and dirty with them and mucked about. Recently, however, with my renewed interest in Silverlight, delegates came up again as a possible solution to a particular problem, and suddenly I found myself opening a bland little console application to just see exactly how far I could take delegates with my limited knowledge. So, let’s first look at the MSDN definition of delegates… A delegate declaration defines a reference type that can be used to encapsulate a method with a specific signature. A delegate instance encapsulates a static or an instance method. Delegates are roughly similar to function pointers in C++; however, delegates are type-safe and secure. Well, don’t you love MSDN for such a useful definition. I must give it credit though… later on it really explains it a bit better by saying “A delegate lets you pass a function as a parameter. The type safety of delegates requires the function you pass as a delegate to have the same signature as the delegate declaration.” A little more reading up on delegates mentions that delegates are similar to interfaces in that they enable the separation of specification and implementation. A delegate declares a single method, while an interface declares a group of methods. So enough reading - let’s look at some code and see a basic example of a delegate… Let’s assume we have a console application with a simple delegate declared called AdjustValue like below… class Program { private delegate int AdjustValue(int val); static void Main(string[] args) { } } In a sense, all we have said is that we will be creating one or more methods that follow the same pattern as AdjustValue – i.e. they will take one input value of type int and return an integer. We could then expand our code to have various methods that match the structure of our delegate AdjustValue (remember the structure is int xxx (int xxx)) class Program { private delegate int AdjustValue(int val); private static int Dbl(int val) { return val * 2; } private static int AlwaysOne(int val) { return 1; } static void Main(string[] args) { } }  Above I have expanded my project to have two methods, one called Dbl and the other AlwaysOne. Dbl always returns double the input val and AlwaysOne always returns 1. I could now declare a variable and assign it to be one of those functions, like the following… class Program { private delegate int AdjustValue(int val); private static int Dbl(int val) { return val * 2; } private static int AlwaysOne(int val) { return 1; } static void Main(string[] args) { AdjustValue myDelegate; myDelegate = Dbl; Console.WriteLine(myDelegate(1).ToString()); Console.ReadLine(); } } In this instance I have declared an instance of the AdjustValue delegate called myDelegate; I have then told myDelegate to point to the method Dbl, and then called myDelegate(1). What would the result be? Yes, in this instance it would be exactly the same as me calling the following code… static void Main(string[] args) { Console.WriteLine(Dbl(1).ToString()); Console.ReadLine(); }   So why all the extra work for delegates when we could just do what we did above and call the method directly? Well… that separation of specification from implementation comes to mind. So, this all seems pretty simple.
Let’s take a slightly more complicated variation to the console application. Assume that my project is the same as the one previously except that my main method is adjusted as follows… static void Main(string[] args) { AdjustValue myDelegate; myDelegate = Dbl; myDelegate = AlwaysOne; Console.WriteLine(myDelegate(1).ToString()); Console.ReadLine(); } What would happen in this scenario? Quite simply, “1” would be written to the console, the reason being that myDelegate was last pointing to the AlwaysOne method before it was called. Make sense? In a way, myDelegate is a variable method that can be swapped and changed when needed. Let’s make the code a little more confusing by using a delegate in the declaration of another delegate as shown below… class Program { private delegate int AdjustValue(InputValue val); private delegate int InputValue(); private static int Dbl(InputValue val) { return val()*2; } private static int GetInputVal() { Console.WriteLine("Enter a whole number : "); return Convert.ToInt32(Console.ReadLine()); } static void Main(string[] args) { AdjustValue myDelegate; myDelegate = Dbl; Console.WriteLine(myDelegate(GetInputVal).ToString()); Console.ReadLine(); } }   Now it gets really interesting because it looks like we have passed a method into a function in the main method by declaring… Console.WriteLine(myDelegate(GetInputVal).ToString()); So, what is the output? Well, try to take a guess at what will happen – then copy the code and see if you got it right. Well, that brings me to the end of this short explanation of delegates. Hopefully it made sense!
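Worth adding as a footnote (my addition, not from the original post): since .NET 3.5 you rarely need to declare delegate types like AdjustValue by hand, because the framework ships generic delegates that fit most signatures. A quick sketch:

// Func<int, int> has the same shape as AdjustValue: one int in, int out
Func<int, int> myDelegate = val => val * 2;   // behaves like Dbl
Console.WriteLine(myDelegate(1));             // prints 2

myDelegate = val => 1;                        // behaves like AlwaysOne
Console.WriteLine(myDelegate(1));             // prints 1

// Func<int> matches the InputValue delegate: no input, int out
Func<int> getInput = () => 42;

Custom delegate declarations still earn their keep when the name itself documents intent (AdjustValue reads better than Func<int, int> in a public API), but for quick wiring the built-in Func<> and Action<> families save a lot of ceremony.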

    Read the article

  • Collaborative Whiteboard using WebSocket in GlassFish 4 - Text/JSON and Binary/ArrayBuffer Data Transfer (TOTD #189)

    - by arungupta
This blog has published a few posts on using JSR 356 Reference Implementation (Tyrus) as its integrated in GlassFish 4 promoted builds. TOTD #183: Getting Started with WebSocket in GlassFish TOTD #184: Logging WebSocket Frames using Chrome Developer Tools, Net-internals and Wireshark TOTD #185: Processing Text and Binary (Blob, ArrayBuffer, ArrayBufferView) Payload in WebSocket TOTD #186: Custom Text and Binary Payloads using WebSocket One of the typical use cases for WebSocket is online collaborative games. This Tip Of The Day (TOTD) explains a sample that can be used to build such games easily. The application is a collaborative whiteboard where different shapes can be drawn in multiple colors. The shapes drawn on one browser are automatically drawn on all other peer browsers that are connected to the same endpoint. The shape, color, and coordinates of the image are transferred using a JSON structure. A browser may opt out of sharing the figures. Alternatively any browser can send a snapshot of their existing whiteboard to all other browsers. Take a look at this video to understand how the application works and the underlying code. The complete sample code can be downloaded here. The code behind the application is also explained below. The web page (index.jsp) has a HTML5 Canvas as shown: <canvas id="myCanvas" width="150" height="150" style="border:1px solid #000000;"></canvas> And some radio buttons to choose the color and shape. By default, the shape, color, and coordinates of any figure drawn on the canvas are put in a JSON structure and sent as a message to the WebSocket endpoint. The JSON structure looks like: { "shape": "square", "color": "#FF0000", "coords": { "x": 31.59999942779541, "y": 49.91999053955078 }} The endpoint definition looks like: @WebSocketEndpoint(value = "websocket",encoders = {FigureDecoderEncoder.class},decoders = {FigureDecoderEncoder.class})public class Whiteboard { As you can see, the endpoint has a decoder and an encoder registered that decode JSON to a Figure (a POJO class) and vice versa, respectively. The decode method looks like: public Figure decode(String string) throws DecodeException { try { JSONObject jsonObject = new JSONObject(string); return new Figure(jsonObject); } catch (JSONException ex) { throw new DecodeException("Error parsing JSON", ex.getMessage(), ex.fillInStackTrace()); }} And the encode method looks like: public String encode(Figure figure) throws EncodeException { return figure.getJson().toString();} FigureDecoderEncoder implements both decoder and encoder functionality but that's purely for convenience. The recommended design pattern is to keep them in separate classes. In certain cases, you may even need only one of them. On the client-side, the Canvas is initialized as: var canvas = document.getElementById("myCanvas");var context = canvas.getContext("2d");canvas.addEventListener("click", defineImage, false); The defineImage method constructs the JSON structure as shown above and sends it to the endpoint using websocket.send(). An instant snapshot of the canvas is sent using binary transfer with WebSocket. The WebSocket is initialized as: var wsUri = "ws://localhost:8080/whiteboard/websocket";var websocket = new WebSocket(wsUri);websocket.binaryType = "arraybuffer"; The important part is to set the binaryType property of WebSocket to arraybuffer. This ensures that any binary transfers using WebSocket are done using ArrayBuffer, as the default type seems to be blob.
The actual binary data transfer is done using the following: var image = context.getImageData(0, 0, canvas.width, canvas.height);var buffer = new ArrayBuffer(image.data.length);var bytes = new Uint8Array(buffer);for (var i=0; i<bytes.length; i++) { bytes[i] = image.data[i];}websocket.send(bytes); This comprehensive sample shows the following features of JSR 356 API: Annotation-driven endpoints Send/receive text and binary payload in WebSocket Encoders/decoders for custom text payload In addition, it also shows how images can be captured and drawn using HTML5 Canvas in a JSP. How could this be turned into an online game? Imagine drawing a Tic-tac-toe board on the canvas with two players playing and others watching. Then you can build access rights and controls within the application itself. Instead of sending a snapshot of the canvas on demand, a new peer joining the game could be automatically transferred the current state as well. Do you want to build this game? I built a similar game a few years ago. Does somebody want to rewrite the game using WebSocket APIs? :-) Many thanks to Jitu and Akshay for helping through the WebSocket internals! Here are some references for you: JSR 356: Java API for WebSocket - Specification (Early Draft) and Implementation (already integrated in GlassFish 4 promoted builds) Subsequent blogs will discuss the following topics (not necessarily in that order) ... Error handling Interface-driven WebSocket endpoint Java client API Client and Server configuration Security Subprotocols Extensions Other topics from the API

    Read the article

  • Processing Text and Binary (Blob, ArrayBuffer, ArrayBufferView) Payload in WebSocket - (TOTD #185)

    - by arungupta
The WebSocket API defines different send(xxx) methods that can be used to send text and binary data. This Tip Of The Day (TOTD) will show how to send and receive text and binary data using WebSocket. TOTD #183 explains how to get started with a WebSocket endpoint using GlassFish 4. A simple endpoint from that blog looks like: @WebSocketEndpoint("/endpoint") public class MyEndpoint { public void receiveTextMessage(String message) { . . . } } A method whose first parameter is of type String is invoked when a text payload is received. The payload of the incoming WebSocket frame is mapped to this first parameter. An optional second parameter, Session, can be specified to map to the "other end" of this conversation. For example: public void receiveTextMessage(String message, Session session) {     . . . } The return type is void and that means no response is returned to the client that invoked this endpoint. A response may be returned to the client in two different ways. First, set the return type to the expected type, such as: public String receiveTextMessage(String message) { String response = . . . . . . return response; } In this case a text payload is returned back to the invoking endpoint. The second way to send a response back is to use the mapped session to send response using one of the sendXXX methods in Session, when and if needed. public void receiveTextMessage(String message, Session session) {     . . .     RemoteEndpoint remote = session.getRemote();     remote.sendString(...);     . . .     remote.sendString(...);    . . .    remote.sendString(...); } This shows how duplex and asynchronous communication between the two endpoints can be achieved. This can be used to define different message exchange patterns between the client and server. The WebSocket client can send the message as: websocket.send(myTextField.value); where myTextField is a text field in the web page. Binary payload in the incoming WebSocket frame can be received if ByteBuffer is used as the first parameter of the method signature. The endpoint method signature in that case would look like: public void receiveBinaryMessage(ByteBuffer message) {     . . . } From the client side, the binary data can be sent using Blob, ArrayBuffer, and ArrayBufferView. Blob is just raw data and the actual interpretation is left to the application. ArrayBuffer and ArrayBufferView are defined in the TypedArray specification and are designed to send binary data using WebSocket. In short, ArrayBuffer is a fixed-length binary buffer with no format and no mechanism for accessing its contents. These buffers are manipulated using one of the views defined by one of the subclasses of ArrayBufferView listed below: Int8Array (signed 8-bit integer or char) Uint8Array (unsigned 8-bit integer or unsigned char) Int16Array (signed 16-bit integer or short) Uint16Array (unsigned 16-bit integer or unsigned short) Int32Array (signed 32-bit integer or int) Uint32Array (unsigned 32-bit integer or unsigned int) Float32Array (signed 32-bit float or float) Float64Array (signed 64-bit float or double) WebSocket can send binary data using ArrayBuffer with a view defined by a subclass of ArrayBufferView or a subclass of ArrayBufferView itself. The WebSocket client can send the message using Blob as: blob = new Blob([myField2.value]);websocket.send(blob); where myField2 is a text field in the web page.
    The WebSocket client can send the message using ArrayBuffer as:

        var buffer = new ArrayBuffer(10);
        var bytes = new Uint8Array(buffer);
        for (var i = 0; i < bytes.length; i++) {
            bytes[i] = i;
        }
        websocket.send(buffer);

    A concrete implementation of receiving the binary message may look like:

        @WebSocketMessage
        public void echoBinary(ByteBuffer data, Session session) throws IOException {
            System.out.println("echoBinary: " + data);
            for (byte b : data.array()) {
                System.out.print(b);
            }
            session.getRemote().sendBytes(data);
        }

    This method just prints the binary data for verification, but you could store it in a database, convert it to an image, or do something more meaningful (a small persistence sketch follows the list of upcoming topics below). Be aware of TYRUS-51 if you are trying to send binary data from server to client using the method return type. Here are some references for you:

        JSR 356: Java API for WebSocket - Specification (Early Draft) and Implementation (already integrated in GlassFish 4 promoted builds)
        TOTD #183 - Getting Started with WebSocket in GlassFish
        TOTD #184 - Logging WebSocket Frames using Chrome Developer Tools, Net-internals and Wireshark

    Subsequent blogs will discuss the following topics (not necessarily in that order) ...

        Error handling
        Custom payloads using encoder/decoder
        Interface-driven WebSocket endpoint
        Java client API
        Client and Server configuration
        Security
        Subprotocols
        Extensions
        Other topics from the API
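    The "something more meaningful" suggested above could be as simple as persisting each payload to disk. Here is a hedged sketch using only standard java.nio; SnapshotStore and save are hypothetical names, and the explicit copy avoids ByteBuffer.array(), which fails on buffers that are not array-backed:

        import java.io.IOException;
        import java.nio.ByteBuffer;
        import java.nio.file.Files;
        import java.nio.file.Path;
        import java.nio.file.Paths;

        public class SnapshotStore {

            // Hypothetical helper: copy the payload out of the buffer and write it to disk.
            public static Path save(ByteBuffer data, String fileName) throws IOException {
                byte[] bytes = new byte[data.remaining()];
                data.get(bytes); // safe whether or not the buffer is array-backed
                return Files.write(Paths.get(fileName), bytes);
            }
        }

    Calling SnapshotStore.save(data.duplicate(), "snapshot.raw") from echoBinary would leave the original buffer's position intact for the subsequent sendBytes(data) call.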

    Read the article

  • How to Hashtag (Without Being #Annoying)

    - by Mike Stiles
    The right tool in the wrong hands can be a dangerous thing. Giving a chimpanzee a chain saw would not be a pretty picture. And putting Twitter hashtags in the hands of social marketers who were never really sure how to use them can be equally unattractive. Boiled down, hashtags are for search and organization of tweets. A notch up from that, they can also be used as part of a marketing strategy.

    In terms of search, if you’re in the organic apple business, you want anyone who searches “organic” on Twitter to see your posts about your apples. It’s a keyword tactic, not unlike website keyword-search tactics. So get a clear idea of what keywords are relevant for your tweet. It’s reasonable to include #organic in your tweet. Is it fatal if you don’t hashtag the word? It depends on the person searching. If they search “organic,” your tweet’s going to come up even if you didn’t put the hashtag in front of it. If the searcher enters “#organic,” your tweet needs the hashtag. Err on the side of caution and hashtag it so it comes up no matter how the searcher enters it.

    You’ll also want to hashtag it for the second big reason people hashtag: organization. You can follow a hashtag. So can the rest of the Twitterverse. If you’re that into organic munchies, you can set up a stream populated only with tweets hashtagged #organic. If you’ve established a hashtag for your brand, like #nobugsprayapples, you (and everyone else) can watch what people are tweeting about your company.

    So what kind of hashtags should you include? They should be directly related to the core message of your tweet. Ancillary or very loosely related hashtags = annoying. Hashtagging your brand makes sense. Hashtagging your core area of interest makes sense. Creating a specific event or campaign hashtag you want others to include and spread makes sense (the burden is on you to promote it and get it going). Hashtagging nearly every word in the tweet is highly annoying. Far and away, the majority of hashtagged words in such tweets have no relevance, are not terms that would be searched, and are not terms needed for categorization. It looks desperate and spammy. Two is fine. One is better. And it is possible to tweet with --gasp-- no hashtags!

    Make your hashtags as short as you can. In fact, if your brand’s name really is #nobugsprayapples, you’re burning up valuable, limited characters and risking the inability of others to retweet with added comments. Also try to narrow your topic hashtag down. You’ll find a lot of relevant users with #organic, but a lot of totally uninterested users with #food.

    Just as you can join online forums and gain credibility and a reputation by contributing regularly to that forum, you can follow hashtagged topics and gain the same kind of credibility in your area of expertise. Don’t just parachute in for the occasional marketing message. And if you’re constantly retweeting one particular person, stop it. It’s kissing up and it’s obvious.

    Which brings us to the king of hashtag annoyances, “hashjacking.” This is when you see what terms are hot and include them in your marketing tweet as hashtags, even though they’re unrelated to your content. Justify it all you want, but #justinbieber has nothing to do with your organic apples. Equally annoying is piggybacking on a popular event’s hashtag to tweet something not connected to the event. You’re only fostering ill will and mistrust toward your account from the people you’ve tricked into seeing your tweet.

    Lastly, don’t @ mention people just to make sure they see your tweet. If the tweet’s not for them or about them, it’s spammy.

    What I haven’t covered is use of the hashtag for comedy’s sake. You’ll see this a lot, and it’s a matter of personal taste. No one will search these hashtagged terms or need to categorize them; they’re just there for self-expression and laughs. Twitter is, after all, supposed to be fun. What are some of your biggest Twitter pet peeves? #blogsovernow

    Read the article
