Search Results

Search found 5885 results on 236 pages for 'finally'.


  • Commerce, Anyway You Want It

    - by David Dorf
    I believe our industry is finally starting to realize the importance of letting consumers determine how, when, and where to interact with retailers.  Over the last few months I've seen several articles discussing the importance of removing the barriers between existing channels. Paula Rosenblum of RSR first brought the term omni-channel to my attention back in September. She stated, "omni-channel retail isn’t the merging of channels – rather, it’s the use of all possible channels (present and future) to enhance the customer experience in a profitable way." I added to her thoughts in this blog posting in which I said, "For retailers to provide an omni-channel experience, there needs to be one logical representation of products, prices, promotions, and customers across all channels. The only thing that varies is the presentation of the content based on the delivery mechanism (e.g. shelf labels, mobile phone, web site, print, etc.) and often these mechanisms can be combined in various ways." More recently Brian Walker of Gartner suggested we stop using the term multi-channel and begin thinking more about consumer touch-points. "It is time for organizations to leave their channel-oriented ways behind, and enter the era of agile commerce--optimizing their people, processes and technology to serve today's empowered, ever-connected customers across this rapidly evolving set of customer touch points." Now Jason Goldberg, better known as RetailGeek, says we should start breaking down the channel silos by re-casting the VP of E-Commerce as the VP of Digital Marketing, and change his/her focus to driving sales across all channels using digital media. This logic is based on the fact that consumers switch between channels, or touch-points as Brian prefers, as part of their larger buying process. Today's smart consumer leverages the Web, mobile, and stores to provide the best shopping experience, so retailers need to make this easier. Regardless of what we call it, the key take-away is that "multi-channel" is not only an antiquated term but also an idea who's time has passed.  Today, retailers must look at e-commerce, m-commerce, f-commerce, catalogs, and traditional store sales collectively and through the consumers' eyes.  The goal is not to drive sales through each channel but rather to just drive sales -- using whatever method the customer prefers.  There really should be just one cart.

    Read the article

  • JavaOne: Parleys.com, Spring Vs. Java EE and HTML5 tooling

    - by delabassee
    Parleys.com, a 2012 Duke's Choice Award winner, is an E-Learning platform that hosts content from different sources (conferences, JUG meetings, etc.). There is a lot of technical content available for online and offline consumption, including many sessions on Java EE. Parleys has just released, for free, all the Devoxx 2011 sessions (video and slides sync'ed!). From a technical point of view, Parleys.com is interesting as they have switched from Spring to Java EE 6 to avoid being locked into a proprietary framework. During the GlassFish Community BoF, Stephan Janssen (Parleys.com and Devoxx founder) also presented how GlassFish is used to support 2000 concurrent Parleys users on a cluster of 2 GlassFish instances. Talking about Java EE and/or Spring, Harshad Oak has posted an update on the 'Spring Vs. Java EE' panel discussion that took place on Tuesday. As Arun said, standards such as Java EE do not necessarily hold back innovation: "JBoss Forge & Arquillian from RedHat are great examples of innovation in the JavaEE community. Standardization is important but innovation does continue even within that framework." Simplicity and productivity, along with HTML5, are the driving themes of Java EE 7. In terms of simplicity and productivity, the developer experience can also be improved by the tooling. Every NetBeans release comes with a large set of improvements, and the just-released NetBeans 7.3 beta is no exception. The goal of NetBeans 7.3's Project Easel is to improve HTML5 development, something that will be handy for Java EE 7 developers. Project Easel can, for example, communicate directly with Chrome's WebKit engine; this feature was shown during Sunday's Technical Keynote at the end of the Java EE section. In this beta release, Chrome and the embedded JavaFX browser are the only supported browsers, but the NetBeans team plans to add support, over time, for other WebKit-based browsers. See also: NetBeans 7.3 beta and the NetBeans 7.3 screencasts. Today (i.e. Wednesday 3rd) is also the final exhibition day, so make sure to visit the Java EE and the GlassFish pods on the Java DEMOgrounds (Hilton Grand Ballroom, 9:30 am - 5:00 pm). Finally, here are some Java EE and GlassFish related activities worth attending today if you are at JavaOne:

    Wednesday, October 3rd
    8:30-9:30am - What's New in Servlet 3.1: An Overview (Parc 55 Mission)
    8:30-9:30am - Bean Validation 1.1: What's New Under the Hood (Parc 55 Cyril Magnin II/III)
    10:00-11:00am - JSR 353: Java API for JSON Processing (Parc 55 Mission)
    10:00am-12:00pm - Tutorial: Integrating Your Service into the GlassFish PaaS Platform (Parc 55 Devisidero)
    11:30-12:30pm - What's New in JSF: A Complete Tour of JSF 2.2 (Parc 55 Cyril Magnin I)
    11:30-12:30pm - Best of Both Worlds: Java Persistence with NoSQL and SQL (Parc 55 Mission)
    1:00-2:00pm - Sharding Middleware to Achieve Elasticity and High Availability in the Cloud (Parc 55 Market Street)
    1:00-2:00pm - Pimp My RESTful Java Applications (Parc 55 Cyril Magnin I)
    3:00-4:00pm - Migrating Spring to Java EE (Parc 55 Cyril Magnin II/III)
    4:30-5:30pm - JavaEE.Next(): Java EE 7, 8, and Beyond (Parc 55 Cyril Magnin II/III)
    4:30-5:30pm - HTML5 WebSocket and Java (Parc 55 Cyril Magnin I)
    4:30-5:30pm - Easy Middleware for Your Embedded Device (Nikko Ballroom II/III)

    Read the article

  • Business Choices and Evony

    - by Robert May
    Recently, I’ve been playing a game called Evony, and I finally decided to quit the game and thought I should warn others who might be tempted.  I also find a lot of insight with this game as an example.  A few of the companies that I’ve worked with or worked for have been like this and they are NOT good places to be. Evony is a joke designed to milk as much money out of people as possible.  As a professional software developer who mentors teams on how to build better software, here's what I see: They obviously offshore all development and have little oversight over that offshore development, and they probably have a small team at that.  Evidenced by the poor grammar throughout the game. They're seeking to maximize revenue and pushing to do as little development as possible, which would mean a small team. They're horribly understaffed in the customer support department as evidenced by never replying to this forum and never responding to bug reports or help requests (I've had one open with no response AT ALL for over a month . . .) They have way inadequate testing, no CI, and probably no automated unit tests.  You can see this by the poor grammar throughout the game and the type of bugs that show up. They aren't following a formal development process (no Agile, Waterfall, or anything else) as evidenced by their lack of predictable release cycle and lack of visibility. I'm guessing that the internal code base is terrible, otherwise, there wouldn't be an "Age II" that had nothing more than a new visual interface and a few rule tweaks.  This is also evidenced by the itty bitty scope of bug fixes and their inability to really fix bugs. Their Architect sucks.  Really, 42k user is all you can handle on a single server?  Could you REALLY not come up with a better way to scale to handle users?  They've built isolated worlds, instead of a single continuous world. Back to milking people for money--to really progress, you have to spend money. All of this adds up to knowing, deliberate actions on the part of management.  They CHOOSE to do this (like AOL choosing to send more discs instead of improve quality). So, what can we learn? This game will never really improve, since the bosses don't care, they're only in it for the money. The game will never have good support.  Again, the owners don't care. Giving them money only perpetuates this scam (and yes, I've given them money, way too much money. :() They don't care if you quit.  There's a new sucker born every day. Don't EVER go to work for them.  I've worked both with and for people like this and the culture is NEVER good. Ah well. Technorati Tags: Evony

    Read the article

  • box2d resize bodies around point

    - by philipp
    I have a compound object consisting of a b2Body, vector graphics and a list of polygons which describe the b2Body's shapes. This object has its own transformation matrix to centralize the storage of transformations. So far everything is working quite fine, even scaling works, but not if I scale around a point. In the initialization phase of the object it is scaled around a point. This happens in this order: transform the main matrix, transform the vector graphics and the polygons, recreate the b2Body. After this function ran, the shapes and all the graphics are exactly where they should be, BUT: after the first steps of the b2World the graphical stuff moves away from the body. When I ran the debugger I found out that the position of the body is 0/0. The red dot shows the center of scaling; the first image shows the basic setup and the second the final position of the graphics. This distance stays constant for the rest of the simulation. If I set the position via myBody.SetPosition( sx, sy ); the whole scenario just plays out a bit more distant from the origin. Any idea how to fix this? EDIT: I dug deeper into the problem and it lies in the fact that I must not scale the transform matrix for the b2Body shapes around the center, but set the b2Body's position back to the point after scaling. But how can I calculate that point? EDIT 2: I dug even deeper into it, and even solved it, but this is a slow solution and I hope that there is somebody who understands what formula I need. Assuming I have a set of polygons relative to an origin as basis shapes for a b2Body, scaling the whole object around a certain point is done in the following steps: I scale everything around the center except the polygons; I create a clone of the polygons' matrix; I scale this clone around the point; I calculate dx, dy as the difference of clone.tx - original.tx and clone.ty - original.ty; I scale the original polygon matrix NOT around the point; I recreate the body; I create the fixture; I set the position of the body to dx and dy; done! So what I am interested in is a formula for dx and dy without cloning matrices, scaling the clone around a point, getting dx and dy and finally scaling the vertex matrix.
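    There is a closed-form version of that clone-and-diff step. Scaling a matrix whose translation is (tx, ty) by a uniform factor s around a pivot (px, py) moves the translation to px + s*(tx - px), so the offset is just (s - 1)*(tx - px). A minimal sketch of that arithmetic (in Java, with hypothetical variable names, assuming uniform scaling):

        public final class ScaleAroundPoint {

            /**
             * Offset produced when a matrix with translation (tx, ty) is scaled
             * by the uniform factor s around the pivot (px, py) instead of
             * around its own origin: new translation = pivot + s * (old - pivot).
             */
            public static double[] offset(double tx, double ty,
                                          double px, double py, double s) {
                double dx = (s - 1.0) * (tx - px); // == (px + s*(tx - px)) - tx
                double dy = (s - 1.0) * (ty - py);
                return new double[] { dx, dy };
            }

            public static void main(String[] args) {
                // Example: body translation (100, 50), scaled by 2 around (250, 250)
                double[] d = offset(100, 50, 250, 250, 2.0);
                System.out.printf("dx=%.1f dy=%.1f%n", d[0], d[1]); // dx=-150.0 dy=-200.0
            }
        }

    If that matches the dx/dy obtained from the clone-and-diff approach, the body can be recreated from the unshifted polygons and then repositioned with myBody.SetPosition( dx, dy ), without ever cloning a matrix.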

    Read the article

  • XNA Moddable Game - Architecture Design and Reflection

    - by David K
    I've decided to embark on an XNA moddable game project of a simple rogue style. For all purposes of this question, I'm going to not be using a scripting engine, but rather allow modders to directly compile assemblies that are loaded by the game at run time. I know about the security problems this may raise. So in order to expose the moddable content, I have gone about creating a generic project in XNA called MyModel. This contains a number of interfaces that all inherit from IPlugin, such as IGameSystem, IRenderingSystem, IHud, IInputSystem etc. Then I've created another project called MyRogueModel. This references MyModel project, and holds interfaces such as IMonster, IPlayer, IDungeonGenerator, IInventorySystem. More rogue specific interfaces, but again, all interfaces in this project inherit from IPlugin. Then finally, I've created another project called MyRogueGame, that references both MyModel and MyRogueModel projects. This project will be the game that you run and play. Here I have put the actual implementation of the Monster, DungeonGenerator, InputSystem and RenderingSystem classes. This project will also scan the mods directory during run time and load any IPlugins it finds using reflection and override anything it finds from the default. For example if it finds a new implementation of the DungeonGenerator it will use that one instead. Now my question is, in order to get this far, I have effectively 2 projects that contain nothing but interfaces... which seems a little... strange ? For people to create mods for the game, I would give them both the MyModel and MyRogueModel assemblies in which they would reference. I'm not sure whether this is the right way to do it, but my reasoning goes as follows : If I write 1 input system, I can use it in any game I write. If I create 3 rogue like games, and a modder writes 1 rendering system, that modder could use the rendering system for all 3 games, because it all comes from the MyModel project. I come from a more web based C# role, so having empty interface projects doesn't seem wrong, its just something I haven't done before. Before I embark on something that might be crazy, I'd just like to know whether this is a foolish idea and whether there's a better (or established) design principle I should be following ?
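    To make the discovery step concrete: the "scan a mods directory and load whatever implements IPlugin" idea is much the same in any managed runtime. Below is a small, purely illustrative sketch of it, written in Java rather than C#/XNA and with a hypothetical Plugin interface standing in for IPlugin, using the standard ServiceLoader mechanism; the C# equivalent would enumerate the types in each loaded assembly and keep those implementing the interface.

        import java.io.File;
        import java.net.URL;
        import java.net.URLClassLoader;
        import java.util.ArrayList;
        import java.util.List;
        import java.util.ServiceLoader;

        // Hypothetical plugin contract, standing in for the IPlugin interface above.
        interface Plugin {
            String name();
        }

        public final class ModLoader {

            /** Loads every Plugin implementation advertised by the jars in modsDir. */
            public static List<Plugin> loadMods(File modsDir) throws Exception {
                List<URL> urls = new ArrayList<URL>();
                File[] files = modsDir.listFiles();
                if (files != null) {
                    for (File f : files) {
                        if (f.getName().endsWith(".jar")) {
                            urls.add(f.toURI().toURL());
                        }
                    }
                }
                ClassLoader loader = new URLClassLoader(urls.toArray(new URL[0]),
                                                        ModLoader.class.getClassLoader());
                List<Plugin> plugins = new ArrayList<Plugin>();
                // ServiceLoader discovers implementations listed under META-INF/services
                // inside each mod jar, so no class names need to be hard-coded.
                for (Plugin p : ServiceLoader.load(Plugin.class, loader)) {
                    plugins.add(p);
                }
                return plugins;
            }
        }

    Anything found this way would then replace the default registered under the same contract, which matches the override behaviour described above for the DungeonGenerator.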

    Read the article

  • Loading levels from .txt or .XML for XNA

    - by Dave Voyles
    I'm attempting to add multiple levels to my pong game. I'd like to simply exchange a few elements with each level, nothing crazy. Just the background texture, the color of the AI paddle (the one on the right side), and the music. It seems that the best way to go about this is by utilizing the StreamReader to read and write the files from XML. If there is a better or more efficient alternative, then I'm all for it. In looking over the XNA Starter Platformer Kit provided by MS, it seems that they've done it in this manner as well. I'm perplexed by a few things, however, namely parts within the Level class which aren't commented.

        /// <summary>
        /// Iterates over every tile in the structure file and loads its
        /// appearance and behavior. This method also validates that the
        /// file is well-formed with a player start point, exit, etc.
        /// </summary>
        /// <param name="fileStream">
        /// A stream containing the tile data.
        /// </param>
        private void LoadTiles(Stream fileStream)
        {
            // Load the level and ensure all of the lines are the same length.
            int width;
            List<string> lines = new List<string>();
            using (StreamReader reader = new StreamReader(fileStream))
            {
                string line = reader.ReadLine();
                width = line.Length;
                while (line != null)
                {
                    lines.Add(line);
                    if (line.Length != width)
                        throw new Exception(String.Format("The length of line {0} is different from all preceeding lines.", lines.Count));
                    line = reader.ReadLine();
                }
            }

    What does width = line.Length mean exactly? I mean, I know how it reads the line, but what difference does it make if one line is longer than any of the others? Finally, their levels are simply text files that look like this:

        ....................
        ....................
        ....................
        ....................
        ....................
        ....................
        ....................
        .........GGG........
        .........###........
        ....................
        ....GGG.......GGG...
        ....###.......###...
        ....................
        .1................X.
        ####################

    It can't be that easy..... Can it?

    Read the article

  • Using multiple diagrams per model in Entity Framework 5.0

    - by nikolaosk
    I have downloaded .Net Framework 4.5 and Visual Studio 2012 since it was released to MSDN subscribers on the 15th of August. For people that do not know about that yet, please have a look at Jason Zander's excellent blog post. Since then I have been investigating the many new features that have been introduced in this release. In this post I will be looking into the new ability to use multiple diagrams per model in the Entity Framework 5.0 designer. In order to follow along with this post you must have Visual Studio 2012 and .Net Framework 4.5 installed on your machine. Download and install VS 2012 using this link. My machine runs on Windows 8 and Visual Studio 2012 works just fine. I have also installed SQL Server 2012 developer edition on my machine, and I have downloaded and installed the AdventureWorksLT2012 database. You can download this database from the codeplex website. Before I start showcasing the demo I want to say that I strongly believe that Entity Framework is maturing really fast and now, at version 5.0, can be used as your data access layer in all your .Net projects. I have posted extensively about Entity Framework in my blog. Please find all the EF related posts here. In this demo I will show you how to split an entity model into multiple diagrams using the new enhanced EF designer. We will not build an application in this demo. Sometimes our model can become too large to edit or view. In earlier versions we could only have one diagram per EDMX file. In EF 5.0 we can split the model into more diagrams.

    1) Launch VS 2012. The Express edition will work fine.
    2) Create a New Project. From the available templates choose a Web Forms application.
    3) Add a new item to your project, an ADO.Net Entity Data Model. I have named it AdventureWorksLT.edmx. Then we will create the model from the database and click Next. Create a new connection by specifying the SQL Server instance and the database name and click OK. Then click Next in the wizard. In the next screen of the wizard select all the tables from the database and hit Finish.
    4) It will take a while for our .edmx diagram to be created. When I select an Entity (e.g. Customer) from my diagram and right-click on it, a new option appears, "Move to new Diagram". Make sure you have the Model Browser window open. Have a look at the picture below.
    5) When we do that, a new diagram is created and our new Entity is moved there. Have a look at the picture below.
    6) We can also right-click and include the related entities. Have a look at the picture below.
    7) When we do that, the related entities are copied to the new diagram. Have a look at the picture below.
    8) Now we can cut (CTRL+X) the entities from Diagram2 and paste them back to Diagram1.
    9) Finally, another great enhancement of the EF 5.0 designer is that you can change the colors of the various entities that make up the model. Select the entities whose color you want to change, then in the Properties window choose the color of your choice. Have a look at the picture below.

    To recap, we have demonstrated how to split your entity model into multiple diagrams, which comes in handy for EF models that have a large number of entities in them. Hope it helps!!!!

    Read the article

  • Is CX a new concept?

    - by Isabel F. Peñuelas
    The Marketing Industry and the Web Industry have been talking about CX for some time. However, it is only very recently that the concept has reached some common meaning accepted by analysts and the IT community. The new CX model depends on two prior developments: the expansion of social media, and the impact of the new advanced features of mobile devices on brand-customer interaction.
    CXers vs UXers: First, there is some need to disambiguate User Experience and Customer Experience. User Experience (UX) is a much more established concept related to the design of user interactions for particular devices. UX people are interested in the multiple touch points of digital interfaces, while CX people are interested in all kinds of interfaces, including physical ones. UX is an evolution of Web Usability, while CX is a marketing concept. UX is an instrument of CX. CX, in fact, is all about Connections and Interactions.
    Connections: Don Draper, the creative director in Mad Men, understands very well that marketing effectively means connecting with people, and that the best way to connect with people is to use the connections people have with other people: understanding social media connections and taking the pulse of customers on those media are strong facilitators of CX strategies.
    Interactions: We can very simply define CX as the relationship that a customer establishes with a brand through multiple touch points (interactions, channels) over the entire life cycle of his relationship, direct or indirect, with the brand. Interactions can be grouped into Customer Journeys through multiple touch points, a customer journey being defined as the path a customer follows to achieve a goal.
    Processes: A customer journey today usually starts the moment the customer surfs the Web; then he makes a purchase decision, purchases the product, requests a particular service, and finally recommends or does not recommend the product. Customer Journeys are processes, and to analyze customer journeys there exists today a broad offering of modern Customer Journey tools, actually very similar to the use cases or UML activity diagrams used in IT systems design.
    In summary, CX is nothing more and nothing less than applying process analysis methods to better understand how to create value through customer interactions across the user's multiple touch points with the brand.

    Read the article

  • Pro SharePoint 2010 Business Intelligence Solutions

    - by Sahil Malik
    Ad:: SharePoint 2007 Training in .NET 3.5 technologies (more information). Oh yeah baby, it’s out finally! This book is what I wanted to write for so long now, but never really got a chance to. For SharePoint 2007, I authored the SharePoint section of “Smart BI Solutions with SQL Server 2008” for MS Press. But never really got the time, to author a full book that this topic deserved. Until SharePoint 2010, we actually have a full book on this topic. So first things first, I didn’t actually write it. My role was limited to the overall concept, the outline, the layout, completion of it, code samples, identifying what we need in here, vouching for technical accuracy, identifying authors etc. The real work was done by Srini (5 chapters), and Steve (1 chapter). So credit given where it is due. But, with that said, this is a pretty good book. It has always been a challenge to find the superman that knows both, data ware housing concepts, and SharePoint concepts. The data ware housing concepts include basic stuff you need to know to work in the BI area such as cubes, MDX queries, etc. So chapter 1 covers that – and if you’re a hardcore DBA, feel free to skip Chapter 1. Then beyond that, we take every single SharePoint 2010 BI topic, and slice and dice it in detail. The topics we deal with are - Visio Services Reporting services Business Connectivity Services Excel Services PerformancePoint Services And in covering each of these topics, we ensure that a general layout was followed for each topic, to ensure completeness of content. We make sure we cover Setup related issues and advice Point and click usage Code usage, i.e. extensibility using visual studio and a walkthrough of the administration side of things, including powershell. (Yes, I insisted on that in being there in every chapter). Writing a book is always a lot of work, so we hope you find it useful. And it should go very well with the other book I just reviewed, which is Microsoft ADO.NET 4, step by step. Comment on the article ....

    Read the article

  • #SSAS #Tabular Workshop and Community Events in Netherlands and Denmark

    - by Marco Russo (SQLBI)
    Next week I will finally start the roadshow of the SSAS Tabular Workshop, a 2-day seminar about the new BISM Tabular model for Analysis Services that has been introduced in SQL Server 2012. During these roadshows, we always try to arrange some speeches at local community events in the evening - we already defined for Copenhagen, we have some logistic issue in Amsterdam that we're trying to solve. Here is the timetable: Netherlands SSAS Workshop in Amsterdam, NL – April 16-17, 2012 2-day seminar, I and Alberto will be the trainers for this event, register here We're trying to manage a Community event but we still don't have a confirmation, stay tuned        Denmark SSAS Workshop in Copenhagen, DK – April 26-27, 2012 2-day seminar, I and Alberto will be the trainers for this event, register here Community event on April 26, 2012 This event will run in Hellerup, at Microsoft venue All details available here: http://msbip.dk/events/26/msbip-mode-nr-5/ People from Sweden are welcome! Just register to this private group on LinkedIn in order to announce your presence, so we’ll know how many people will attend In community events we’ll deliver two speeches – here are the descriptions: Inside xVelocity (VertiPaq) PowerPivot and BISM Tabular models in Analysis Services share a great columnar-based database engine called xVelocity in-memory analytics engine (VertiPaq). If you want to improve performance and optimize memory used, you have to understand some basic principles about how this engine works, how data is compressed, and how you can design a data model for better optimization. Prepare yourself to change your mind. xVelocity optimization techniques might seem counterintuitive and are absolutely different than OLAP and SQL ones! Choosing between Tabular and Multidimensional You have a new project and you have to make an important decision upfront. Should you use Tabular or Multidimensional? It is not easy to answer, because sometime there is a clear choice, but most of the times both decisions might be correct, at least at the beginning. In this session we’ll help you making an informed decision, correctly evaluating pros and cons of each one according to common scenarios, considering both short-term and long-term consequences of your choice. I hope to meet many people in this first dates. We have many other events coming in May and June, including an online event (for US time zones), and you can also attend our PreCon Day at TechEd US in Orland (PRC06) or TechEd Europe in Amsterdam. I’ll be a good customer for airline companies in the next three months! I’m just sorry that I hadn’t time to write other articles in the last month, but I’m accumulating material that I will need to write down during some flight – stay tuned…

    Read the article

  • Conflict between Change Control and ASL Mapping

    - by Jie Chen
    Yesterday I got a strange report that on Agile 9.3.1.2, adding a Supplier on an Item's Suppliers tab will always remove all the data from the Item.PageTwo.MultiList01 field, which is assigned to a User Group list. The detailed problem description is as follows. In JavaClient, the MultiList01 attribute on the Parts class's PageTwo tab is enabled and assigned to the User Group list. In WebClient, the user created a new Part and assigned MultiList01 two UserGroups: "Global User Group Test1" and "Personal Group_Test1". He then went to the Suppliers tab to add three Suppliers. Switching back to the Part's TitleBlock, he saw that MultiList01 had lost the User Group data. To confirm whether MultiList01 really loses the data or saves other, wrong data, I checked the database and found that MultiList01 actually stores the strange value ",7976911,7976907,7976959,", which is exactly the IDs of these three Suppliers. So I suspected the Supplier attribute on the Suppliers tab must be mapped to MultiList01. However, when I checked Supplier in JavaClient, "ASL mapped to" was blank. More interestingly, the database clearly shows that the Supplier attribute (Base ID = 2000004219) is mapped to 2090, which is the PageTwo.MultiList01 Base ID. Up to this point we can conclude that Supplier data really is mapped to MultiList01, even though we assigned MultiList01 to the User Group list and Supplier does not set "ASL mapped to". There must be another function which overrides the "ASL mapped to" visibility in JavaClient with higher priority: the "Change Controlled" function. We immediately see "ASL mapped to" with the value "MultiList01" when we disable Change Controlled for MultiList01. If an attribute is Change Controlled, Supplier data theoretically cannot be mapped to it, because Supplier could be dynamically modified by users, not by Changes. In the real situation of Agile 9.3.1.2, it could be a Code Defect. We can imagine the scenario the customer ran into. He set up Parts.PageTwo.MultiList01 assigned to the Supplier list, then in the Parts.Suppliers.Supplier attribute he set "ASL mapped to" to "MultiList01". Later the company business changed, so he made MultiList01 Change Controlled and re-assigned it to the User Group list. He forgot to remove "ASL mapped to" before he made those modifications to MultiList01. Finally we know the solution; it depends on the real business need. If we still need to map Supplier to a Parts.PageTwo attribute, we should change "ASL mapped to" to another attribute which is already assigned to the Suppliers list. If we do not need the "ASL mapped to" function, we should delete the data at the database level. We cannot do it from the JavaClient UI.

        delete propertytable
        where id in (select p.id
                     from propertytable p, nodetable n
                     where p.parentid = n.id
                       and n.inherit = 2000004219
                       and propertyid = 794)

    Read the article

  • 6 Ways to Modernize Your Customer Experience

    - by Mike Stiles
    If customers have changed, if the way they research and shop have changed, if their expectations have changed, if their ability to act on dissatisfaction has changed, but your customer experience has NOT changed, what was once “good enough” may now be crippling. Well, the customer has changed, and why wouldn’t they? You’ve probably changed too in your role as consumer. There’s more info available, it’s easier to get, there’s more choice, you’re more mobile, you’re more connected, it’s easier to buy, and yes, it’s easier to switch brands if experiences don’t meet your now higher expectations. Thanks to technological advances, we as marketers can increasingly work borderline miracles. But if we’re still not adamantly adopting customer centricity, and if we aren’t making the customer experience paramount amongst business goals, the tech is wasted. A far more modern customer experience is called for. Here are 6 ways to get there: 1. Modern Marketing: Marketing data is aggregated and targeted to the right customers, who are getting personal, relevant communications. In return, you’re getting insight that finally properly attributes revenue to your marketing efforts. 2. Modern Selling: Demand is being driven across all channels with modern selling tools. Productivity is up thanks to coordinated communication and selling, and performance is ever optimized using powerful analytics. 3. Modern CPQ: You’re cross-selling and upselling more effectively since reps and channel partners have been empowered with the ability to quickly, automatically generate 100% accurate, customer-friendly quotes complete with price controls and automated approvals. 4. Modern Commerce: You’re leveraging data and delivering personalized, targeted digital experiences to everyone. You’re attracting more visitors, and you’re able to scale and keep up with the market and control the experience. 5. Modern Service: You’re better serving your customers by making it easier for them to engage with your brand, plus you’re lowering your costs by increasing agent and tech support efficiencies. 6. Modern Social: You’re getting faster, deeper, more accurate insights from social and turning content around faster, which then goes out to the right people at the right time in the right place. You’ve also gotten proactive in your service, and customers love that. For far too many brands, the buying journey of Need, Research, Select, Buy, Use, Recommend across the multiple connect points of Social, Mobile, Store, Call Center, Site, Ecommerce is a disconnected mess. Oracle’s approach to CX is to connect every interaction your customer has with your brand, avoiding the revenue losses lousy customer experiences bring. How important is the experience to customers? 94% are willing to pay more of their hard-earned money to have better ones, while a meager 1% say they get the good, consistent experiences they expect. Brands, your words aren’t as loud anymore, so your actions as they relate to customer experience are going to have to do the talking. @mikestiles @oraclesocialPhoto: Julien Tromeur, freeimages.com

    Read the article

  • February 2011 Java SE and Java for Business Critical Patch Update Released

    - by eric.maurice
    Hello, this is Eric Maurice again. Oracle released the February 2011 Critical Patch Update for Java SE and Java for Business today. As discussed in a previous blog entry, Oracle currently maintains a separate Critical Patch Update schedule for Java SE and Java for Business because of commitments made prior to the Oracle acquisition in regards to the timing for the publication of Java fixes. Today's Java Critical Patch Update includes fixes for 21 vulnerabilities. The most severe CVSS Base Score for vulnerabilities fixed in this CPU is 10.0, and this Base Score affects 8 vulnerabilities. Out of these 21 vulnerabilities, 13 affect Java client deployments. 12 of these 13 vulnerabilities can be exploited through Untrusted Java Web Start applications and Untrusted Java Applets, which run in the Java sandbox with limited privileges. One of these 13 vulnerabilities can be exploited by running a standalone application. In addition, one of the client vulnerability affects Java Update, a Windows-specific component. 3 of the 21 vulnerabilities affect client and server deployments. These vulnerabilities can be exploited through Untrusted Java Web Start applications and Untrusted Java Applets, as well as be exploited by supplying malicious data to APIs in the specified components, such as, for example, through a web service. 3 vulnerabilities affect Java server deployments only. These vulnerabilities can be exploited by supplying malicious data to APIs in the specified Java components. Note that one of these vulnerabilities (CVE-2010-4476) was the subject of a Security Alert released on February 8th. Finally, one of these vulnerabilities is specific to Java DB, a component in the Java JDK, but not included in the Java Runtime Environment (JRE). As usual, because of the severity of the vulnerabilities fixed in this Critical Patch Update, Oracle recommends that Java customers apply it as soon as possible. The Critical Patch Advisory provides more details about the vulnerabilities addressed in the Critical Patch Update as well as instructions on how to install the fixes and where to get them. Home users should use the Java auto-update mechanism to install the latest version of the Java Runtime Environment 6 update 24 or higher (JRE), which includes the fix for this vulnerability. For More Information: The Critical Patch Updates and Security Alerts page is located at http://www.oracle.com/technetwork/topics/security/alerts-086861.html More information on Oracle Software Security Assurance is located at http://www.oracle.com/us/support/assurance/index.html Consumers can go to http://www.java.com/en/download/installed.jsp to ensure that they have the latest version of Java running on their desktops. More information on Java Update is available at http://www.java.com/en/download/help/java_update.xml

    Read the article

  • Sometimes keyboard & touchpad work... sometimes not

    - by Voyagerfan5761
    When I first ran Ubuntu from CD on this Dell Inspiron 2650, it worked for about ten to fifteen minutes, then it hung (I was probably trying to do too much at once from a Live CD). The next time, my mouse and keyboard didn't work. I rebooted three times and finally got them working. I then installed Ubuntu alongside Windows XP. After installing, selecting the OS in GRUB worked, but my touchpad and keyboard were again not working. I rebooted, and they worked. (I fortunately had a USB mouse with which to reboot.) Booting Ubuntu and then rebooting to enable my keyboard and touchpad has become a routine ever since. Often several reboots are required; at one point I had to reboot over a dozen times in a row before getting a session where everything worked properly. (My installation has been in place for about three days a week now.) I've looked around for a device manager equivalent to no avail. Sometimes the hardware is properly detected, and sometimes it's not. Once or twice I've had the keyboard detected properly but the touchpad not. Plugging in my wireless card also sometimes requires a plug, unplug, and plug again to get it working. So is there some solution? I'm without an Internet connection at home, and this "laptop" is really a wall wart on my desk, so suggestions for packages may take a while to test. Xorg logs I captured two three four sample Xorg logs: one from a startup where the devices worked; one from when they didn't; one from a session where Ubuntu thought my touchpad was a normal mouse; and one from a session where my keyboard worked but the touchpad didn't. See this gist. Updated 2010-12-15 01:50 UTC with Xorg.0.log.keyboardonly file illustrating the case where the keyboard worked but not the touchpad. Updated 2011-01-11 04:10 UTC with Xorg.0.log.touchpadregmouse to illustrate a case where the touchpad was detected as a regular mouse (no "Touchpad" tab in mouse prefs).

    Read the article

  • Shrink NTFS Windows 7 Partition with GParted

    - by user15961
    I am running a dual-boot system with Windows 7 and Ubuntu 10.10. Initially I allocated about 20GB for my Ubuntu partition; however, I quickly ran out of that space and am now looking to expand my partition. Currently my NTFS partition (450GB) has about 130GB of free space. I tried using GParted to shrink the partition but encountered the following error. I booted into windows so I could run chkdsk but the countdown freezes at 1 upon reboot. I tried multiple methods to resolve that issue but nothing seems to work. Finally I gave up, and now I just want to know what is the best way for me to force GParted to shrink the partition regardless of the errors. I don't really have anything important and I don't mind risking the data. I just don't want to wipe the entire NTFS partition because I don't have a Windows install CD and might require Windows later on for some programs. I tried using sudo ntfsresize but that spews out the same error as GParted... Any ideas? Check and repair file system (ntfs) on /dev/sda2 00:00:09 ( ERROR ) calibrate /dev/sda2 00:00:00 ( SUCCESS ) path: /dev/sda2 start: 36944325 end: 976771119 size: 939826795 (448.14 GiB) check file system on /dev/sda2 for errors and (if possible) fix them 00:00:09 ( ERROR ) ntfsresize -P -i -f -v /dev/sda2 ntfsresize v2.0.0 (libntfs 10:0:0) Device name : /dev/sda2 NTFS volume version: 3.1 Cluster size : 4096 bytes Current volume size: 481191318016 bytes (481192 MB) Current device size: 481191319040 bytes (481192 MB) Checking for bad sectors ... Checking filesystem consistency ... Cluster 63468 is referenced multiple times! Cluster 63469 is referenced multiple times! Cluster 63465 is referenced multiple times! Cluster 63466 is referenced multiple times! Cluster 63467 is referenced multiple times! Cluster 165621 is referenced multiple times! Cluster 165622 is referenced multiple times! Cluster 165623 is referenced multiple times! Cluster 165624 is referenced multiple times! ERROR: Filesystem check failed! ERROR: 9 clusters are referenced multiply times. NTFS is inconsistent. Run chkdsk /f on Windows then reboot it TWICE! The usage of the /f parameter is very IMPORTANT! No modification was and will be made to NTFS by this software until it gets repaired.

    Read the article

  • Mobile Apps: An Ongoing Revolution

    - by Steve Walker
    a guest post from Suhas Uliyar, VP Mobile Strategy, Product Management, Oracle The rise of smartphone apps have proved transformational for businesses, increasing the productivity of employees while simultaneously creating some seriously cool end user experiences. But this is a revolution that is only just beginning. Over the next few years, apps will change everything about the way enterprises work as well as overhauling the experiences of customers. The spark for this revolution is simplicity. Simplicity has already proved important for the front-end of apps, which are now often as compelling and intuitive as consumer apps. Businesses will encourage this trend, both to further increase employee productivity and to attract ‘digital natives’ (as employees and customers). With the variety of front-end development tools available already, this should be a simple mission for developers to accomplish – but front-end simplicity alone is not enough for the enterprise mobile revolution. Without the right content even the most user-friendly app is useless. Yet when it comes to integrating apps with ‘back-end’ systems to enable this content, developers often face a complex, costly and time-consuming task. Then there is security: how can developers strike a balance between complying with enterprise security policies and keeping the user experience simple? Complexity has acted as a brake on innovation, with integration and security compliance swallowing enterprise resources. This is why the simplification of integration, security and scalability is so important: it frees time and money for revolutionary innovation. The key is to put in place a complete and unified SOA integration platform that runs across the entire enterprise and enables organizations to easily integrate and connect applications across IT environments. The platform must also be capable of abstracting apps from the underlying OS and enabling a ‘write-once, run- anywhere’ capability for mobile devices - essential for BYOD environments and integrating third-party apps. Mobile Back-end-as-a-Service can also be very important in streamlining back-end integration. Mobile services offered through the cloud can simplify mobile application development with a standard approach to dealing with complex server-side programming and integration issues. This allows the business to innovate at its own pace while providing developers with a choice of tools to speed development and integration. Finally, there is security, which must be done in a way that encourages users to make the most of their mobile devices and applications. As mobile users, we want convenience and that is why we generally approve of businesses that adopt BYOD policies. Enterprises can safely encourage BYOD as they can separate, protect, and wipe corporate applications by installing a secure ‘container’ around corporate applications on any mobile device. BYOD management also means users’ personal applications and data can be kept separate from the enterprise information – giving them the confidence they need to embrace the use of their devices for corporate apps. Enterprises that place mobility at the heart of what they do will fundamentally transform their businesses and leap ahead of the competition. As businesses take to mobile platforms that simplify integration, security and scalability we will see a blossoming of innovation that will drive new levels of user convenience and create new ways of working that we are only beginning to imagine.

    Read the article

  • Issues with shooting in an HTML5 platformer game

    - by fnx
    I'm coding a 2D sidescroller using only JavaScript and HTML5 canvas, and in my game I have two problems with shooting: 1) The player shoots a continuous stream of bullets. I want the player to shoot only a single bullet even though the shoot button is being held down. 2) Also, I get an error "Uncaught TypeError: Cannot call method 'draw' of undefined" when all the bullets are removed. My shooting code goes like this. When the player shoots, I do game.bullets.push(new Bullet(this, this.scale)); and after that:

        function Bullet(source, dir) {
            this.id = "bullet";
            this.width = 10;
            this.height = 3;
            this.dir = dir;
            if (this.dir == 1) {
                this.x = source.x + source.width - 5;
                this.y = source.y + 16;
            }
            if (this.dir == -1) {
                this.x = source.x;
                this.y = source.y + 16;
            }
        }

        Bullet.prototype.update = function() {
            if (this.dir == 1) this.x += 8;
            if (this.dir == -1) this.x -= 8;
            for (var i in game.enemies) {
                checkCollisions(this, game.enemies[i]);
            }
            // Check if bullet leaves the viewport
            if (this.x < game.viewX * 32 || this.x > (game.viewX + game.tilesX) * 32) {
                removeFromList(game.bullets, this);
            }
        }

        Bullet.prototype.draw = function() {
            // bullet flipping uses orientation of the player
            var posX = game.player.scale == 1 ? this.x : (this.x + this.width) * -1;
            game.ctx.scale(game.player.scale, 1);
            game.ctx.drawImage(gameData.getGfx("bullet"), posX, this.y);
        }

    I handle removing with this function:

        function removeFromList(list, object) {
            for (i in list) {
                if (object == list[i]) {
                    list.splice(i, 1);
                    break;
                }
            }
        }

    And finally, in the main game loop I have this:

        for (var i in game.bullets) {
            game.bullets[i].update();
            game.bullets[i].draw();
        }

    I have tried adding if (game.bullets.length > 0) to the main game loop before the above draw & update calls, but I still get the same error.

    Read the article

  • Configuring Full-Text Search for pdf and docx files

    - by Lukasz Kurylo
    I think it was in May that I was creating a little filters module based on Full-Text Search. I configured my dev machine, and did the same for two testing servers: one in our company for internal testing before we deployed it to the client, and then the client's testing server. Until last week this build was still on the testing server, and finally we got feedback that we could deploy it to the production one. I will only say that I lost half a day because I had not correctly remembered what I did to configure FTS on the previous servers, and I had no notes for it. I foolishly believed in my memory. Lesson learned.

    For future reference, here is the bunch of steps to configure FTS for searching in *.pdf and *.docx files (and, by the way, other Office files like *.xlsx).

    1. From the page (link) download and install the *.pdf IFilter for FTS.
    2. To the PATH global system variable add the path to the folder where you installed the plugin. The default for this version is: C:\Program Files\Adobe\Adobe PDF iFilter 9 for 64-bit platforms\bin
    3. From the page (link) download FilterPackx64.exe and install it.
    4. Now from SSMS execute the following procedures:
       - sp_fulltext_service 'load_os_resources', 1
       - sp_fulltext_service 'verify_signature', 0
    5. Restart the server.
    6. Now we must check if the plugins are visible:
       - select document_type, path from sys.fulltext_document_types where document_type = '.pdf'
       - select document_type, path from sys.fulltext_document_types where document_type = '.docx'
    7. If we see a result, then we can assume that everything is OK*.
    8. Now we can create a catalog for FTS and indexes on the appropriate columns.

    *I lost a lot of hours finding out why the plugin for *.pdf files wasn't indexing any file in the database, even though the sys.fulltext_document_types table had a row for this plugin. After deeper investigation I found that the *.pdf files actually were indexed; at least the EOF sign was added to the indexes, and nothing more, for each file. In the end the problem was that I forgot to add the /bin at the end of the plugin path in the PATH variable.

    Read the article

  • Why is my laptop so sluggish? Or Damn You Facebook and Twitter! Or All Hail Chrome!

    - by John Conwell
    In the past three weeks, I've noticed that my laptop (dual core 2.1GHz, 2Gb RAM) has become amazingly sluggish.  I only uses for communications and data lookup workflows, so the slowness was tolerable.  But today I finally got fed up with the suckyness and decided to get to the root of the problem (I do have strong performance roots after all). It actually didn't take all that long to figure it out.  About a year ago I converted to Google Chrome (away from FireFox).  One of the great tools Chrome has is a "Task Manager" tool, that gives you Windows Task Manager like details for all the tabs open in the browser (Shift + Esc).  Since every tab runs in its own process, its easy from Task Manager (both Windows or Chrome) to identify and kill a single performance offending tab.  This is unlike IE, where you only get aggregate data about all tabs open.  Anyway, I digress.  Today my laptop sucked.  Windows Task Manager told me that I had two memory hogging Chrome tabs, but couldn't tell me which web page those tabs are showing.  Enter Chrome Task Manager which tells you the page title, along with CPU, memory and network utilization of each tab.  Enter my amazement.  Turns out Facebook was using just shy of half a Gb of RAM.  Half a Gigabyte!  That's 512 Megabytes!524,288 Kilobytes! 536,870,912 Bytes!  Or 4,294,967,296 Bits!  In other words, that's a frackin boat load of memory.  Now consider that Facebook is running on pretty much 96.3% (statistics based on absolutely nothing) of every house hold desktop, laptop, netbook, and mobile device in America, that is pretty horrific! And I wasn't playing any Facebook games like FarmWars or MafiaVille.  I just had my normal, default home page up showing me who just had breakfast, or just got finished with their morning run. I'm sorry...let me say that again...HALF A GIG OF RAM!  That is just unforgivable. I can just see my mom calling me up:  Mom: "John...I think I need a new computer.  Mine is really slow these days" John: "What do you have running?" Mom: "Oh, just Facebook" John: "Ok, close Facebook and tell me how fast your computer feels" Mom: "Well...I don't know how fast it is.  All I do is use Facebook" John: "Ok Mom, I'll send you a new computer by Tuesday" Oh yea...and the other offending web page?  It was Twitter, using a quarter of a Gigabyte. God I love social networks!

    Read the article

  • JavaOne Latin America 2011: Keynotes, Sessions, Hands-on Lab, Geek Bike Ride, etc.

    - by arungupta
    After a very successful JavaOne San Francisco, the first JavaOne on the road for 2011 is heading to Latin America next week. There are 59 sessions delivered by several rock star speakers and with 60% sessions delivered by the local community. There are strategy, technical and community keynotes. The community keynote on Thursday will particularly be lot of fun with appearances from Java Champions, JUG leaders, jHome, and several others. Also check out the Exhibitor Floor Plan and don't forget to Register! The complete session schedule gives an overview for the list of technical sessions and hands-on lab. There are several Java EE, GlassFish, and WebLogic sessions and are highlighted below: Tuesday, Dec 6 Oracle WebLogic Server XML-Free Programming: Java Server and Client Development without <> Java EE Application in Production: Tips and Tricks to achieve zero downtime Web Applications and Wicket Scala on GlassFish and Java EE 6 REST and Java best practices, issues and solutions for the Enterprise Building a RESTful Web Application with JAX-RS and Ext JS 4 Wednesday, Dec 7 Oracle GlassFish Server in the Virtual World JAX-RS 2.0: What's in JSR 339 ? JSF 343: What's coming in Java Message Service 2.0 ? The Great News of JSF 2.0! Thursday, Dec 8 Servlet 3.1 Update Develop, Deploy, and Monitor a Java EE 6 Application with Clustered GlassFish 3.1 Migrating from EJB/SOAP to REST with JAX-RS: The Case of the Central Bank of Brazil GlassFish REST Administration Back End: An Insider look at a real REST Application Scripting and Agile Java EE Applications with Jython And this is Brazil so a fun element is important. There are the usual Caiprihinas, Churrascaria, late night social dinners, community engagement, and multiple other fun activities. Fabiane Nardon and SOUJava gang are also organizing a Geek Bike Ride on the Sunday (Dec 4th) before JavaOne. The 20k ride (map) starts at 7am and goes through the streets of Sao Paulo. This is an opportunity to meet some of the JavaOne speakers and attendees outside the conference. They've even designed a t-shirt and 32 geeks have signed up so far. I'm glad my discussion with Fabiane during FISL early this year for arranging this bike ride is finally taking shape! I'm definitely looking forward to it and will be bringing nice fruity Odwalla bars for all the riders. Be there to ride with me and many others :-) Stay updated by following @oracledobrasil and @javaoneconf. I'll be there, will you ? Don't wait and register now! And in case you are interested in reading about the experience from last year ... it was lot of fun! Just check out a collage of pictures yourself ... And the complete album at:

    Read the article

  • Improving the performance of JDeveloper11g (part 2) and JVMs in general

    - by asantaga
    Just received an email from one of our JVM developers who read my blog entry on Performance tuning JDeveloper11g and he's confirmed that all of the above parameters are totally supported :-) He's also provided a description of the parameters so we can learn what magic is actually being applied. - -XX:+AggressiveOpts -- this enables the latest and greatest JVM optimizations. It will likely help most Java applications. It's fully supported. The downside of it is that because it has the latest and greatest optimizations, there is some small probability that it may not offer as good of an experience. As those features enabled with this command line option have "matured", they are made the default in a future JDK release. So, you can think of this command line option as the place where the newest optimizations get introduced. Some time later they are moved out from under AggressiveOpts to become default behavior. -XX:+OptimizeStringConcat -- only works with the -server JVM. It may be enabled by the default in a future JDK 7 update release. This option delays the construction of a StringBuilder/StringBuffer and attempts to avoid re-sizing the underlying char[] by attempting to detect the size of the char[] to allocate based on what's being appended to the StringBuilder/StringBuffer. -XX:+UseStringCache -- I would not suggest using this unless you knew that JDeveloper allocated the same string over and over again. And, the string that's allocated over and over again is one of the first 100,000 allocated strings. In short, I'd recommend against using it. And, in fact, in Java 7 (currently) does not include this feature. -XX:+UseCompressedOops -- applicable to 64-bit JVMs. And, if you're using a 64-bit JVM, I'd suggest you use it. It's auto enabled in JDK 7 64-bit JVMs and later JDK 6 64-bit JVMs enable it by default too. -XX:+UseGCOverheadLimit -- by default this option is already enabled. One other command line option to consider is -XX:+TieredCompilation for a JDK 6 Update 25 or later, or JDK 7. This gives you the startup of a -client JVM and the peak performance of a -server JVM. Awesome-ness!  Finally, Charlies also pointed out to me a "new" book he's just published where he goes into the details of JVM tuning, a must for all Fusion Middleware tuning exercises..  (click the book)  Thanks Charlie!
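    If you want to confirm which of these flags a running JVM actually picked up, one quick way (a generic sketch using only the standard java.lang.management API, nothing JDeveloper-specific) is to dump the JVM's input arguments:

        import java.lang.management.ManagementFactory;
        import java.lang.management.RuntimeMXBean;

        public class ShowJvmFlags {
            public static void main(String[] args) {
                RuntimeMXBean runtime = ManagementFactory.getRuntimeMXBean();
                // Prints every -XX:... / -Xmx... argument the JVM was started with,
                // so you can verify that the options you added actually took effect.
                for (String arg : runtime.getInputArguments()) {
                    System.out.println(arg);
                }
            }
        }

    For a JVM you did not start yourself, the jinfo tool shipped with the JDK reports the same kind of information for a given process id.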

    Read the article

  • How do I make this rendering thread run together with the main one?

    - by funk
    I'm developing an Android game and need to show an animation of an exploding bomb. It's a spritesheet with 1 row and 13 different images. Each image should be displayed in sequence, 200 ms apart. There is one Thread running for the entire game:

        package com.android.testgame;

        import android.graphics.Canvas;

        public class GameLoopThread extends Thread {
            static final long FPS = 10; // 10 Frames per Second
            private final GameView view;
            private boolean running = false;

            public GameLoopThread(GameView view) {
                this.view = view;
            }

            public void setRunning(boolean run) {
                running = run;
            }

            @Override
            public void run() {
                long ticksPS = 1000 / FPS;
                long startTime;
                long sleepTime;
                while (running) {
                    Canvas c = null;
                    startTime = System.currentTimeMillis();
                    try {
                        c = view.getHolder().lockCanvas();
                        synchronized (view.getHolder()) {
                            view.onDraw(c);
                        }
                    } finally {
                        if (c != null) {
                            view.getHolder().unlockCanvasAndPost(c);
                        }
                    }
                    sleepTime = ticksPS - (System.currentTimeMillis() - startTime);
                    try {
                        if (sleepTime > 0) {
                            sleep(sleepTime);
                        } else {
                            sleep(10);
                        }
                    } catch (Exception e) {}
                }
            }
        }

    As far as I know I would have to create a second Thread for the bomb.

        package com.android.testgame;

        import android.graphics.Bitmap;
        import android.graphics.Canvas;
        import android.graphics.Rect;

        public class Bomb {
            private final Bitmap bmp;
            private final int width;
            private final int height;
            private int currentFrame = 0;
            private static final int BMPROWS = 1;
            private static final int BMPCOLUMNS = 13;
            private int x = 0;
            private int y = 0;

            public Bomb(GameView gameView, Bitmap bmp) {
                this.width = bmp.getWidth() / BMPCOLUMNS;
                this.height = bmp.getHeight() / BMPROWS;
                this.bmp = bmp;
                x = 250;
                y = 250;
            }

            private void update() {
                currentFrame++;
                new BombThread().start();
            }

            public void onDraw(Canvas canvas) {
                update();
                int srcX = currentFrame * width;
                int srcY = height;
                Rect src = new Rect(srcX, srcY, srcX + width, srcY + height);
                Rect dst = new Rect(x, y, x + width, y + height);
                canvas.drawBitmap(bmp, src, dst, null);
            }

            class BombThread extends Thread {
                @Override
                public void run() {
                    try {
                        sleep(200);
                    } catch (InterruptedException e) {
                    }
                }
            }
        }

    The Threads would then have to run simultaneously. How do I do this?
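    For what it's worth, one common alternative to spawning a BombThread at all is to let the existing game loop drive the animation and advance the frame only once enough wall-clock time has passed. A minimal sketch of that idea (the new field names are hypothetical; the 200 ms per frame is taken from the question, and currentFrame/BMPCOLUMNS are the fields already defined in Bomb):

        // Inside Bomb: advance the frame from onDraw() based on elapsed time,
        // so no extra thread is needed and the game loop stays in control.
        private static final long FRAME_DURATION_MS = 200;
        private long lastFrameTime = 0;

        private void update() {
            long now = System.currentTimeMillis();
            if (now - lastFrameTime >= FRAME_DURATION_MS) {
                lastFrameTime = now;
                currentFrame++;
                if (currentFrame >= BMPCOLUMNS) {
                    currentFrame = BMPCOLUMNS - 1; // or flag the bomb as finished
                }
            }
        }

    The GameLoopThread then simply keeps calling view.onDraw(c) as it already does; the bomb redraws its current frame every tick and only switches to the next image every 200 ms.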

    Read the article

  • Interconnect nodes in a Java distributed infrastructure for tweet processing

    - by David Moreno García
    I'm working on a new version of an old project that I used to download and process user statuses from Twitter. The main problem with that project was its infrastructure. I used multiple instances of a Java application (trackers) to download from Twitter given a specific task (basically terms to search for), connected to a central node (a web application) that had to process all tweets once per day and generate a new task for each tracker once every 15 minutes. The central node also had to monitor all trackers and enable/disable them on user request. This, as I said, was too slow because I had multiple bottlenecks, so in this new version I want to improve the infrastructure and isolate each piece of functionality in its own node. I also need a good notification system to receive notifications from any node. The next diagram shows the components I'll need in this new version. As you can see, there are more nodes. Here are some notes about them:
    Dashboard: Controls tracker statuses and sends a single task to each of them (on user request). The trackers will use this task until it is replaced with a new one (when that happens, not every 15 minutes like before).
    Search engine: I need to store all the tweets. They are first stored in a local database for each tracker, but after that I'm thinking of using something like Elasticsearch to be able to do fast searches.
    Tweet processor: Just an isolated component with its own database (maybe something like the search engine, to have fast access to the info generated by the module). In the future more could be added.
    Application UI: A web application sharing a database with the Dashboard (mainly to store user information and preferences). Indeed, both could be merged into a single web application. The main difference from the previous version of the project is that now they will be isolated and will only show information and send requests; I will not do any heavy work in them (like processing tweets, as I did before).
    So, having these components, my main headache is how to structure it all so that I don't have to rewrite a lot of code every time I need to access any new data. Another headache is how to interconnect the nodes. I could use sockets, but that is a pain in the ass. Maybe a REST layer? And finally, if all the nodes are isolated, how could I generate notifications for each user whose info is only in the database used by the Application UI? I'm programming this with Java and Spring (at least I used them in the last version), but I have no problem changing the language if I can take advantage of a tool/library/engine that makes my life easier and gives me a better platform. Any comment will be appreciated.
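
    To make the REST-layer idea a bit more concrete, here is a minimal sketch (my own illustration; the class, paths, and payload are hypothetical, not taken from the project) of the kind of JAX-RS resource a tracker could expose so the Dashboard can push a new task and poll status over plain HTTP instead of raw sockets; for the notification side, a message broker or a similar HTTP callback would play the same role:

        import javax.ws.rs.Consumes;
        import javax.ws.rs.GET;
        import javax.ws.rs.PUT;
        import javax.ws.rs.Path;
        import javax.ws.rs.Produces;
        import javax.ws.rs.core.MediaType;

        // Hypothetical tracker-side endpoint: the Dashboard PUTs a new task and
        // GETs the tracker's status. Names and payloads are illustrative only.
        @Path("/tracker")
        public class TrackerResource {

            // Replaced whenever the Dashboard pushes a new task; the tracker
            // keeps using it until the next replacement arrives.
            private static volatile String currentTask = "";

            @GET
            @Path("/status")
            @Produces(MediaType.APPLICATION_JSON)
            public String status() {
                // A real implementation would serialize a proper status object.
                return "{\"enabled\": true, \"task\": \"" + currentTask + "\"}";
            }

            @PUT
            @Path("/task")
            @Consumes(MediaType.TEXT_PLAIN)
            public void setTask(String task) {
                currentTask = task;
            }
        }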

    Read the article

  • C++11 Tidbits: Decltype (Part 2, trailing return type)

    - by Paolo Carlini
    Following on from the last tidbit showing how the decltype operator essentially queries the type of an expression, the second part of this overview discusses how decltype can be syntactically combined with auto (itself the subject of the March 2010 tidbit). This combination can be used to specify trailing return types, also known informally as "late specified return types". Leaving aside the technical jargon, a simple example from section 8.3.5 of the C++11 standard usefully introduces this month's topic. Let's consider a template function like: template <class T, class U> ??? foo(T t, U u) { return t + u; } The question is: what should replace the question marks? The problem is that we are dealing with a template, thus we don't know at the outset the types of T and U. Even if they were restricted to be arithmetic builtin types, non-trivial rules in C++ relate the type of the sum to the types of T and U. In the past - in the GNU C++ runtime library too - programmers used to address these situations by way of rather ugly tricks involving __typeof__ which now, with decltype, could be rewritten as: template <class T, class U> decltype((*(T*)0) + (*(U*)0)) foo(T t, U u) { return t + u; } Of course the latter is guaranteed to work only for builtin arithmetic types, e.g., '0' must make sense. In short: it's a hack. On the other hand, in C++11 you can use auto: template <class T, class U> auto foo(T t, U u) -> decltype(t + u) { return t + u; } This is much better. It's generic and a construct fully supported by the language. Finally, let's see a real-life example directly taken from the C++11 runtime library as implemented in GCC: template<typename _IteratorL, typename _IteratorR> inline auto operator-(const reverse_iterator<_IteratorL>& __x, const reverse_iterator<_IteratorR>& __y) -> decltype(__y.base() - __x.base()) { return __y.base() - __x.base(); } By now it should be completely straightforward. The availability of trailing return types in C++11 allowed fixing a real bug in the C++98 implementation of this operator (and many similar ones). In GCC, C++98 mode, this operator is: template<typename _IteratorL, typename _IteratorR> inline typename reverse_iterator<_IteratorL>::difference_type operator-(const reverse_iterator<_IteratorL>& __x, const reverse_iterator<_IteratorR>& __y) { return __y.base() - __x.base(); } This was guaranteed to work well with heterogeneous reverse_iterator types only if difference_type was the same for both types.

    Read the article

  • Advice for how to handle company pride

    - by user17971
    We have this "amazing" little product built with the latest development methodologies and components with all the bells and whistles. I took over this product maybe 6 months ago and have struggled with it from day one. It is supposedly state of the art because of all its amazing structure: dependency injection, inversion of control from the Unity framework, hibernation, and domain-driven design in a .NET MVVM XAML application, all to make it streamlined and modular. I knew from the moment I saw the monolith that it was going to be an uphill struggle for me. A lot of little code-bits scattered all around in neatly organized paradigms. Debugging is difficult, tracing the code is difficult, making new code is difficult; some modifications are surprisingly easy, but that doesn't outweigh the problems I have with the code by a long shot. When I took over the project I was told that the new management console was ready for delivery and all I had to do was compile it and drop it. This was the beginning of an uphill struggle: our customer didn't agree at all that this was the functionality they had asked for, so I had to modify the program to their specifications. Since the project has pretty much been overdue since I took it over, it has always been important that we didn't add or change much in the original system; I could only modify the existing bits. Fast forward to today, when I have finally dealt with all their comments and issues with the program, but now I think the users have opened their eyes (even though they saw this program many times) to the fact that they will be going backwards with this new system, that it will be much worse than the tool they have today (for a long time, because I'm the only resource on the project: project manager, tester, developer, integration specialist, etc.). My problem is that I lost faith in this system quite early due to the nature of the program. Although I have made many changes and improvements to the system, I wholeheartedly sympathize with the poor users who are going to start using it. It does not do nearly all the things it should do. I had a conversation internally with my boss where I told him what I thought about it: that if I were the customer, I wouldn't have spent money developing it. So what do I do now? The system is ready, on a staging system, and nobody likes it; it's too slow and boring and does maybe 50% of what they need it to do. Despite all the energy I've put into this project, working around the clock, I wouldn't mind scrapping the system, but we've spent a lot of money (well, my salary) developing it, and my company wants us to be proud of everything we do and advocate it. How should I tackle the contractor when he asks for advice? Surely I can tell him "this is what we agreed upon based on your use case scenarios" and be done with it? How do I inform my boss about this progress? He knows how I feel about it, but I always get the feeling he lets my criticism pass him by as just hot air, gone tomorrow.

    Read the article

< Previous Page | 109 110 111 112 113 114 115 116 117 118 119 120  | Next Page >