Search Results

Search found 30146 results on 1206 pages for 'back to the future'.


  • Moving between Android activities on button clicks

    - by cppdev
    I am writing an Android application where, on the startup activity view, I have a "DoIt" button. When the user clicks the "DoIt" button, I start another activity with a different layout. On the newly started activity, I have a "Back" button which should take me back to the first activity. How do I accomplish this? What code should I write in the OnClick method of the "Back" button? Also, I want the newly created activity to die after the back button is pressed and the application comes back to the start-up activity.
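    For illustration, a minimal sketch of the usual approach (not from the article; the class and resource names SecondActivity, R.layout.second_layout and R.id.back_button are hypothetical): calling finish() in the second activity's click handler destroys that activity and returns the user to the one that started it, which is exactly the behaviour asked for.

        import android.app.Activity;
        import android.os.Bundle;
        import android.view.View;
        import android.widget.Button;

        public class SecondActivity extends Activity {
            @Override
            protected void onCreate(Bundle savedInstanceState) {
                super.onCreate(savedInstanceState);
                setContentView(R.layout.second_layout); // hypothetical layout
                Button backButton = (Button) findViewById(R.id.back_button);
                backButton.setOnClickListener(new View.OnClickListener() {
                    @Override
                    public void onClick(View v) {
                        // Destroys this activity and returns to the start-up activity
                        finish();
                    }
                });
            }
        }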

    Read the article

  • Is loose coupling w/o use cases an anti-pattern?

    - by dsimcha
    Loose coupling is, to some developers, the holy grail of well-engineered software. It's certainly a good thing when it makes code more flexible in the face of changes that are likely to occur in the foreseeable future, or avoids code duplication. On the other hand, efforts to loosely couple components increase the amount of indirection in a program, thus increasing its complexity, often making it more difficult to understand and often making it less efficient. Do you consider a focus on loose coupling without any use cases for the loose coupling (such as avoiding code duplication or planning for changes that are likely to occur in the foreseeable future) to be an anti-pattern? Can loose coupling fall under the umbrella of YAGNI?
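    To make the trade-off concrete, a small hypothetical Java sketch (all names invented for illustration): the "loosely coupled" version introduces an interface and a factory even though only one implementation exists and none is planned – precisely the speculative indirection the question asks about.

        // Loosely coupled, with no use case for the flexibility: an interface
        // and a factory wrap what is, and will remain, a single implementation.
        interface PriceFormatter {
            String format(double price);
        }

        class DefaultPriceFormatter implements PriceFormatter {
            public String format(double price) {
                return String.format("$%.2f", price);
            }
        }

        class PriceFormatterFactory {
            static PriceFormatter create() {
                return new DefaultPriceFormatter();
            }
        }

        // The tightly coupled alternative: one concrete class, called directly.
        class SimplePriceFormatter {
            String format(double price) {
                return String.format("$%.2f", price);
            }
        }

    The first version costs two extra types and an extra hop for every reader of the code; whether that cost buys anything is exactly what the question's "use cases" condition is probing.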

    Read the article

  • More free geek-read. December SolidQ Journal is online

    - by Greg Low
    I'm really excited to see the last SolidQ Journal for this year out the door. It's our free online magazine. I've been wondering about the future of printed technical magazines for a long time. I doubt they have much of a future, as online publications become more prevalent and more timely. By the time a print magazine gets to you, it's such a long time since the author wrote the material that it's hard to even retain relevance in a fast moving world. That's why I'm so happy to have the format we...(read more)

    Read the article

  • When someone deletes a shared data source in SSRS

    - by Rob Farley
    SQL Server Reporting Services plays nicely. You can have things in the catalogue that get shared. You can have Reports that have Links, Datasets that can be used across different reports, and Data Sources that can be used in a variety of ways too. So if you find that someone has deleted a shared data source, you potentially have a bit of a horror story going on. And this works for this month's T-SQL Tuesday theme, hosted by Nick Haslam, who wants to hear about horror stories. I don't write about LobsterPot client horror stories, so I'm writing about a situation that a fellow MVP friend asked me about recently instead.

    The best thing to do is to grab a recent backup of the ReportServer database, restore it somewhere, and figure out what's changed. But of course, this isn't always possible. And it's much nicer to help someone with this kind of thing than to be trying to fix it yourself when you've just deleted the wrong data source. Unfortunately, SSRS lets you delete data sources without trying to scream that the data source is shared across over 400 reports in over 100 folders, as was the case for my friend's colleague. So, suddenly there's a big problem – lots of reports are failing, and the time to turn it around is small. You probably know which data source has been deleted, but getting the shared data source back isn't the hard part (that's just a connection string really). The nasty bit is all the re-mapping, to get those 400 reports working again.

    I know from exploring this kind of stuff in the past that the ReportServer database (using its default name) has a table called dbo.Catalog to represent the catalogue, and that Reports are stored here. However, the information about what data sources these deployed reports are configured to use is stored in a different table, dbo.DataSource. You could be forgiven for thinking that shared data sources would live in this table, but they don't – they're catalogue items just like the reports. Let's have a look at the structure of these two tables (although if you're reading this because you have a disaster, feel free to skim past). Frustratingly, there doesn't seem to be a Books Online page for this information, sorry about that. I'm also not going to look at all the columns, just the ones that I find interesting enough to mention, and that are related to the problem at hand. These fields are consistent all the way through to SQL Server 2012 – there doesn't seem to have been any change here for quite a while.

    dbo.Catalog

    - The Primary Key is ItemID. It's a uniqueidentifier. I'm not going to comment any more on that. A minor nice point about using GUIDs in unfamiliar databases is that you can more easily figure out what's what. But foreign keys are for that too…
    - Path, Name and ParentID tell you where in the folder structure the item lives. Path isn't actually required – you could've done recursive queries to get there. But as that would be quite painful, I'm more than happy for the Path column to be there. Path contains the Name as well, incidentally.
    - Type tells you what kind of item it is. Some examples are 1 for a folder and 2 for a report. 4 is a linked report, 5 is a data source, 6 is a report model. I forget the others for now (but feel free to put a comment giving the full list if you know it).
    - Content is an image field, remembering that image doesn't necessarily store images – these days we'd rather use varbinary(max), but even in SQL Server 2012, this field is still image. It stores the actual item definition in binary form, whether it's actually an image, a report, whatever.
    - LinkSourceID is used for Linked Reports, and has a self-referencing foreign key (allowing NULL, of course) back to ItemID.
    - Parameter is an ntext field containing XML for the parameters of the report. Not sure why this couldn't be a separate table, but I guess that's just the way it goes. This field gets changed when the default parameters get changed in Report Manager.

    There is nothing in dbo.Catalog that describes the actual data sources that the report uses. The default data sources would be part of the Content field, as they are defined in the RDL, but when you deploy reports, you typically choose NOT to replace the data sources. Anyway, they're not in this table. Maybe it was already considered a bit wide to throw in another ntext field, I'm not sure. They're in dbo.DataSource instead.

    dbo.DataSource

    - The Primary Key is DSID. Yes, it's a uniqueidentifier...
    - ItemID is a foreign key reference back to dbo.Catalog.
    - Fields such as ConnectionString, Prompt, UserName and Password do what they say on the tin, storing information about how to connect to the particular source in question.
    - Link is a uniqueidentifier, which refers back to dbo.Catalog. This is used when a data source within a report refers back to a shared data source, rather than embedding the connection information itself. You'd think this should be enforced by a foreign key, but it's not. It does allow NULLs though.
    - Flags is an int, and I'll come back to this.

    When a Data Source gets deleted out of dbo.Catalog, you might assume that it would be disallowed if there are references to it from dbo.DataSource. Well, you'd be wrong. And not because of the lack of a foreign key either. Deleting anything from the catalogue is done by calling a stored procedure called dbo.DeleteObject. You can look at the definition in there – it feels very much like the kind of Delete stored procedure that many people write, the kind of thing that means they don't need to worry about allowing cascading deletes with foreign keys – because the stored procedure does the lot.

    Except that it doesn't quite do that. If it deleted everything on a cascading delete, we'd've lost all the data sources as configured in dbo.DataSource, and that would be bad. This is fine if the ItemID from dbo.DataSource hooks in – if the report is being deleted. But if a shared data source is being deleted, you don't want to lose the existence of the data source from the report. So it sets it to NULL, and it marks it as invalid. We see this code in that stored procedure:

        UPDATE [DataSource]
        SET
            [Flags] = [Flags] & 0x7FFFFFFD, -- broken link
            [Link] = NULL
        FROM [Catalog] AS C
            INNER JOIN [DataSource] AS DS ON C.[ItemID] = DS.[Link]
        WHERE (C.Path = @Path OR C.Path LIKE @Prefix ESCAPE '*')

    Unfortunately there's no semi-colon on the end (but I'd rather they fix the ntext and image types first), and don't get me started about using the table name in the UPDATE clause (it should use the alias DS). But there is a nice comment about what's going on with the Flags field.

    What I'd LIKE it to do would be to set the connection information to a report-embedded copy of the connection information that's in the shared data source, the one that's about to be deleted. I understand that this would cause someone to lose the benefit of having the data sources configured in a central point, but I'd say that's probably still slightly better than LOSING THE INFORMATION COMPLETELY. Sorry, rant over. I should log a Connect item – I'll put that on my todo list.

    So it sets the Link field to NULL, and marks the Flags to tell you they're broken. So this is your clue to fixing it. A bitwise AND with 0x7FFFFFFD is basically stripping out the '2' bit from a number. So numbers like 2, 3, 6, 7, 10, 11, etc, whose binary representation ends in either 11 or 10, get turned into 0, 1, 4, 5, 8, 9, etc. We can test for it using a WHERE clause that matches the SET clause we've just used. I'd also recommend checking for Link being NULL and also having no ConnectionString. And join back to dbo.Catalog to get the path (including the name) of the broken reports – in case you get a surprise from a different data source that was broken some time in the past.

        SELECT c.Path, ds.Name
        FROM dbo.[DataSource] AS ds
        JOIN dbo.[Catalog] AS c ON c.ItemID = ds.ItemID
        WHERE ds.[Flags] = ds.[Flags] & 0x7FFFFFFD
        AND ds.[Link] IS NULL
        AND ds.[ConnectionString] IS NULL;

    When I just ran this on my own machine, having deleted a data source to check my code, I noticed a Report Model in the list as well – so if you had thought it was just going to be reports that were broken, you'd be forgetting something. So to fix those reports, get your new data source created in the catalogue, and then find its ItemID by querying Catalog, using Path and Name to find it. And then use this value to fix them up. To fix the Flags field, just add 2. I prefer to use bitwise OR, which should do the same. Use the OUTPUT clause to get a copy of the DSIDs of the ones you're changing, just in case you need to revert something later after testing (doing it all in a transaction won't help, because you'll just lock out the table, stopping you from testing anything).

        UPDATE ds
        SET [Flags] = [Flags] | 2,
            [Link] = '3AE31CBA-BDB4-4FD1-94F4-580B7FAB939D' /* insert your own GUID */
        OUTPUT deleted.Name, deleted.DSID, deleted.ItemID, deleted.Flags
        FROM dbo.[DataSource] AS ds
        JOIN dbo.[Catalog] AS c ON c.ItemID = ds.ItemID
        WHERE ds.[Flags] = ds.[Flags] & 0x7FFFFFFD
        AND ds.[Link] IS NULL
        AND ds.[ConnectionString] IS NULL;

    But please be careful. Your mileage may vary. And there's no reason why 400-odd broken reports need to be quite the nightmare that they could be. Really, it should be less than five minutes.

    @rob_farley

    Read the article

  • Harnessing Business Events for Predictive Decision Making - part 1 / 3

    - by Sanjeev Sharma
    Businesses have long relied on data mining to elicit patterns and forecast future demand and supply trends. Improvements in computing hardware, specifically storage and compute capacity, have significantly enhanced the ability to store and analyze mountains of data in ever-shrinking time-frames. Nevertheless, the reality is that data growth is outpacing storage capacity by a factor of two, and computing power is still very much bounded by Moore's Law, doubling only every 18 months.

    Faced with this data explosion, businesses are exploring means to develop human-brain-like capabilities in their decision systems (including BI and Analytics) to make sense of the data storm – in other words, business events – in real time, and to respond proactively rather than reactively. It is more like having a little bit of the right information just a little bit beforehand than having all of the right information after the fact. To appreciate this thought better, let's first understand the workings of the human brain.

    Neuroscience research has revealed that the human brain is predictive in nature, and that talent is nothing more than exceptional predictive ability. The cerebral cortex, the part of the human brain responsible for cognition, thought, language, etc., comprises five layers. The lowest layer in the hierarchy is responsible for sensory perception, i.e. discrete, detail-oriented tasks, whereas each of the layers above is increasingly focused on assembling higher-order conceptual models. Information flows both up and down the layered memory hierarchy. This allows the conceptual mental models to be refined over time through experience and repetition. Secondly, and more importantly, the top layers are able to prime the lower layers to anticipate certain events based on the existing mental models, thereby giving the brain a predictive ability. In a way, the human brain develops a "memory of the future", a sort of anticipatory thinking which lets it predict based on the occurrence of events in real time. A higher order of predictive ability stems from being able to recognize the lack of certain events. For instance, it is one thing to recognize the beats in a music track and another to detect beats that were missed, which involves a higher-order predictive ability.

    Existing decision systems analyze historical data to identify patterns and use statistical forecasting techniques to drive planning. They are similar to the human brain in that they employ business rules, very much like mental models, to chunk and classify information. However, unlike the human brain, existing decision systems are unable to evolve these rules automatically (AI is still best suited for highly specific tasks) and predict the future based on real-time business events. Make no mistake: existing decision systems remain vital to driving long-term and broader business planning. For instance, a telco will still rely on BI and Analytics software to plan promotions and optimize inventory, but will tap into predictive insight enabled by business events to identify specifically which customers are likely to churn and engage with them proactively.

    In the next post, I will describe the technology components that enable businesses to harness real-time events and drive predictive decision making.

    Read the article

  • What is the correct step-by-step logic for exporting a scene with baked occlusion for loading it at runtime?

    - by myWallJSON
    I wonder what the correct step-by-step logic is for exporting a scene with baked occlusion (culling data) so that the scene can be loaded at runtime (on the fly, from the internet, for example). Currently my plan looks like this:

    1. I create prefabs.
    2. I place them onto my scene (into the Hierarchy) (say, create 20 buffaloes, some horses and some buildings).
    3. I create an empty prefab and drag all my scene objects from the Hierarchy onto it.
    4. I export the prefab.

    So generally I put all my scene objects into one large prefab and export it, but it seems that all objects that were marked as static get this property turned off when loading them at runtime, and so no frustum culling and no occlusion culling happens. So I wonder what the correct way is to export Scene + Objects + Occlusion (and other culling) data for a future load of such a scene at runtime? I'm asking about the current 3.5.2 Pro and the future 4 Pro versions of Unity3D.

    Read the article

  • How do I update with a newly-created detached entity using NHibernate?

    - by Daniel T.
    Explanation: Let's say I have an object graph that's nested several levels deep and each entity has a bi-directional relationship with the others:

        A -> B -> C -> D -> E

    Or in other words, A has a collection of B and B has a reference back to A, and B has a collection of C and C has a reference back to B, etc. Now let's say I want to edit some data for an instance of C. In Winforms, I would use something like this:

        C instanceOfC;
        using (var session = SessionFactory.OpenSession())
        {
            // get the instance of C with Id = 3
            instanceOfC = session.Linq<C>().Where(x => x.Id == 3).First();
        }

        SendToUIAndLetUserUpdateData(instanceOfC);

        using (var session = SessionFactory.OpenSession())
        {
            // re-attach the detached entity and update it
            session.Update(instanceOfC);
        }

    In plain English, we grab a persistent instance out of the database, detach it, give it to the UI layer for editing, then re-attach it and save it back to the database.

    Problem: This works fine for Winforms applications because we're using the same entity all throughout, the only difference being that it goes from persistent to detached to persistent again. The problem occurs when I'm using a web service and a browser, sending over JSON data. In this case, the data that comes back is no longer a detached entity, but rather a transient one that just happens to have the same ID as the persistent one. If I use this entity to update, it will wipe out the relationships to B and D unless I send the entire object graph over to the UI and get it back in one piece.

    Question: My question is, how do I serialize detached entities over the web, receive them back, and save them, while preserving any relationships that I didn't explicitly change? I know about ISession.SaveOrUpdateCopy and ISession.Merge() (they seem to do the same thing?), but this will still wipe out the relationships if I don't explicitly set them. I could copy the fields from the transient entity to the persistent entity one by one, but this doesn't work too well when it comes to relationships, and I'd have to handle version comparisons manually.

    Read the article

  • Which is the next dominant programming paradigm? [closed]

    - by Kugathasan Abimaran
    What will the next programming paradigm be if OOP loses its hold on the market? Or will OOP be around forever? What is your advice for future developers? Which paradigm should we be aware of? I ask because, before OOP, the structured programming paradigm was there with C. Please don't close this question, because I need to know which paradigm has the ability to withstand the future:

    - Aspect-oriented programming
    - Declarative programming
    - Functional programming
    - Object-oriented programming
    - Any others?

    This describes programming paradigms according to their kernel language.

    Read the article

  • jQuery gallery scrolling effect with ease

    - by Sebastian Otarola
    So, I got this page from a friend and I think the gallery is amazingly done. Too bad it's in Flash: http://selected.com/en/#/collection/homme/

    Now, I'm trying to replicate the effect with jQuery. I've made all the crazy searches on Google one could think of. Zooming the picture is not a problem; the problem lies in the scrolling, and how the images come together in the easing. I'm looking for a solution for how to make the thumbnails animate when you scroll the page, so that they drag behind and in front of each other in a very subtle way. I've got (with a lot of help from Whirl3d in the jQuery IRC channel) this for the scroll up/down part of the mouse, but the scrolling goes haywire. I thought I'd post it here, where I've come many times to get answers to a lot of questions and code errors. This is my first post on Stack Overflow and I know you guys are geniuses! Give it a shot! Thanks in advance!

    The jQuery part:

        $(document).ready(function() {
            var fronts = $(".front");
            var backs = $(".back");
            var tempScrollTop, currentScrollTop = 0;

            $(document).scroll(function() {
                currentScrollTop = $(document).scrollTop();
                if (tempScrollTop < currentScrollTop) {
                    // Scroll down
                    fronts.animate({marginTop: "-=100"}, {duration: 500, queue: false, easing: "easeOutBack"});
                    backs.animate({marginTop: "-=100"}, {duration: 300, queue: false, easing: "easeOutBack"});
                    console.log('scroll down');
                } else if (tempScrollTop > currentScrollTop) {
                    // Scroll up
                    fronts.animate({marginTop: "+=100"}, {duration: 500, queue: false, easing: "easeOutBack"});
                    backs.animate({marginTop: "+=100"}, {duration: 300, queue: false, easing: "easeOutBack"});
                    console.log('scroll up');
                }
                tempScrollTop = currentScrollTop;
            });
        });

    The HTML:

        <html>
        <head>
            <link rel="stylesheet" type="text/css" href="style.css" />
            <script type="text/javascript" src="http://code.jquery.com/jquery-1.7.2.min.js"></script>
            <script type="text/javascript" src="http://www.paigeharvey.net/assets/js/jquery.easing.js"></script>
            <script type="text/javascript" src="gallery.js"></script>
            <title>Parallax testing image gallery</title>
        </head>
        <body>
            <div class="container">
                <div class='box front'>First Group</div>
                <div class='box back'>First Group</div>
                <div class='box front'>First Group</div>
                <div class='box back'>First Group</div>
                <br style="clear:both"/>
                <div class='box front'>Second Group</div>
                <div class='box back'>Second Group</div>
                <div class='box front'>Second Group</div>
                <div class='box back'>Second Group</div>
                <br style="clear:both"/>
                <div class='box front'>Third Group</div>
                <div class='box back'>Third Group</div>
                <div class='box front'>Third Group</div>
                <br style="clear:both"/>
            </div>
        </body>
        </html>

    And finally the CSS:

        .container { margin: auto; width: 410px; border: 1px solid red; }
        .box.front { border: 1px solid red; background-color: black; color: white; z-index: 500; }
        .box.back { border: 1px solid green; z-index: 300; background-color: white; }
        .box { float: left; text-align: center; width: 100px; height: 100px; }

    Read the article

  • How to limit the number of instances of the same Activity on the stack in an Android application

    - by johnrock
    Is this possible in an Android app? I want to make it so that no matter how many times a user starts activityA, when they hit the back button they will never get more than one occurrence of activityA. What I am finding in my current code is that I have only two options: (1) I can call finish() in activityA, which will prevent it from being accessible via the back button completely, or (2) I do not call finish(), and then if the user starts activityA n times during their usage, there will be n instances when hitting the back button. Again, I want to have activityA accessible by hitting the back button, but there is no reason to keep multiple instances of the same activity on the stack. Is there a way to limit the number of instances of an activity on the stack to only one?
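    For illustration, a hedged sketch of one common approach (ActivityA is the asker's name; the surrounding activity and helper method are hypothetical): launching with Intent.FLAG_ACTIVITY_CLEAR_TOP clears any older copies of ActivityA off the task's back stack, so at most one instance survives. Declaring android:launchMode="singleTop" or "singleTask" on the activity in the manifest is a declarative alternative.

        import android.app.Activity;
        import android.content.Intent;

        public class SomeOtherActivity extends Activity {
            // Hypothetical helper: relaunch ActivityA without stacking a second copy.
            // CLEAR_TOP removes everything above an existing ActivityA in the task,
            // so at most one instance of ActivityA remains on the back stack
            // (combine with FLAG_ACTIVITY_SINGLE_TOP to reuse the existing
            // instance instead of recreating it).
            private void goToActivityA() {
                Intent intent = new Intent(this, ActivityA.class);
                intent.addFlags(Intent.FLAG_ACTIVITY_CLEAR_TOP);
                startActivity(intent);
            }
        }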

    Read the article

  • Square One to Game Development

    - by Ian Quach
    How does someone even get into developing a game? What would they need to know, and how would someone find the knowledge to program a game? I've always looked at game development as a future career. Now that I'm getting closer to university, I was hoping to find a way to get a head start on this future in game development. What would be the best place to start? I would love any help or tips from anyone. Thanks for reading this. :)

    Read the article

  • Identity Globe Trotters (Sep Edition): The Social Customer

    - by Tanu Sood
    Welcome to the inaugural edition of our monthly series - Identity Globe Trotters. Starting today, the last Friday of every month, we will explore regional commentary on Identity Management. We will invite guest contributors from around the world to share their opinions and experiences around Identity Management and highlight regional nuances, specific drivers, solutions and more. Today's feature is contributed by Michael Krebs, Head of Business Development at esentri consulting GmbH, a (SOA) specialized Oracle Gold Partner based in Ettlingen, Germany. In his current role, Krebs is dealing with the latest developments in Enterprise Social Networking and the Integration of Social Media within business processes.

    By Michael Krebs

    The relevance of "easy sign-on" in the age of the "Social Customer"

    With the growth of Social Networks, the time people spend within those closed "eco-systems" is growing year by year. With social networks looking to integrate search engines, as Facebook announced some weeks ago, their relevance will continue to grow in contrast to the more conventional search engines. This is one of the reasons why the social network accounts of users are becoming more and more like a virtual fingerprint. With the growing relevance of social networks, the importance of a simple way for customers to get in touch with, say, customer care or contract departments will be crucial for sales processes in critical markets. Customers want to have one single point of contact and also an easy "login method" with no dedicated usernames, passwords or proprietary accounts. The golden rule in the future social-media-driven markets will be: the lower the complexity of the initial contact, the better a company can profit from social networks. If you, for example, can generate a smart way for an existing customer to use self-service portals, the cost of providing phone support can be lowered significantly.

    Recruiting and Hiring of "Digital Natives"

    Another particular example is "social" recruiting processes. The so-called "digital natives" don't want to type their profile facts and CVs into proprietary systems. Why not use the actual LinkedIn profile? In the German-speaking region, the market in the area of professional social networks is dominated by XING, the equivalent of LinkedIn. A few weeks back, this network also opened up its interfaces for integrating social sign-ons or the usage of profile data for recruiting purposes. In the European (and especially the German) employment market, where the number of young candidates is shrinking because of the low birth rate in the region, it will become essential to use social-media-supported hiring processes to find and on-board the rare talents. In fact, you will see traditional recruiting websites integrated with social hiring to attract the best talents in the market, where the pool of potential candidates has decreased dramatically over the years.

    Identity Management as a key factor in the Customer Experience process

    To create the biggest value for customers and also future employees, companies need to connect their HCM or CRM systems with powerful Identity Management solutions. With the highly efficient Oracle (social & mobile enabling) Identity Management solution, enterprises can combine easy sign-on with secure connections to the backend infrastructure. This combination enables a "one-stop" service with personalized content for customers and talents. In addition, companies can collect valuable data for the enrichment of their CRM data. The goal is to enrich the so-called "Customer Experience" via all available customer channels and contact points. Those systems have already gained importance in the B2C markets and will gradually spread out to B2B channels in the near future.

    Conclusion: Central and "Social" Identity Management is key to Customer Experience Management and Talent Management

    For a seamless delivery of "Customer Experience Management" and a modern way of recruiting the best talent, companies need to integrate Social Sign-on capabilities with modern CX and Talent Management infrastructure. This lowers the barrier for existing and future customers or employees to get in touch with sales, support or human resources. Identity Management is the technology enabler and backbone for a modern Customer Experience infrastructure. Oracle Identity Management solutions provide the opportunity to secure Social Applications and connect them with modern CX solutions. In the end, companies benefit from "best of breed" processes and solutions for enriching customer experience without compromising security.

    About esentri: esentri is a provider of enterprise social networking and brings the benefits of social network communication into business environments. As one key strength, esentri uses Oracle Identity Management solutions for delivering Social and Mobile access for Oracle's CRM and HCM solutions.

    …..End Guest Post….

    With new and enhanced features optimized to secure the new digital experience, the recently announced Oracle Identity Management 11g Release 2 enables organizations to securely embrace cloud, mobile and social infrastructures and reach new user communities to help further expand and develop their businesses.

    Additional Resources:
    - Oracle Identity Management 11gR2 release
    - Oracle Identity Management website
    - Datasheet: Mobile and Social Access (pdf)
    - IDM at OOW: Focus on Identity Management
    - Facebook: OracleIDM
    - Twitter: OracleIDM

    We look forward to your feedback on this post and welcome your suggestions for topics to cover in Identity Globe Trotters. Last Friday, every month!

    Read the article

  • Meet Windows Azure Sweden & SWAG Sommeravslutning

    - by Alan Smith
    The Meet Windows Azure event last week saw some great announcements about the current and future developments on the Windows Azure platform. Microsoft Sweden will be hosting an event at their offices that will run through these releases and demo some of the new technologies. It will be a great chance to see the new capabilities in action, and chat to Microsoft Evangelists, MVPs and other developers about the future of the platform. This will also be the last Sweden Windows Azure Group (SWAG) meeting before the summer break, so there will be food, drinks, and the chance of some “SWAG”. We will be back in force after the summer, and have a number of great events planned for the rest of the year. We will have a big announcement to make regarding one of these, so be there and get the chance to register! Registration is here.

    Read the article

  • How do people manage changes to common library files stored across multiple (Mercurial) repositories?

    - by mckoss
    This is perhaps not a question unique to Mercurial, but that's the SCM I've been using most lately. I work on multiple projects and tend to copy source code for libraries or utilities from a previous project to get a leg up on starting a new project. The problem comes in when I want to merge all the changes I made in my latest project back into a "master" copy of those shared library files. Since the files stored in disjoint repositories will have distinct version histories, Mercurial won't be able to perform an intelligent merge if I just copy the files back to the master repo (or even between two independent projects). I'm looking for an easy way to preserve the change history so I can merge library files back to the master with a minimum of external record keeping (which is one of the reasons I'm using SVN less, as merges require remembering when copies were made across branches). Perhaps I need to do a bit more up-front organization of my repository to prepare for a future merge back to a common master.

    Read the article

  • Is Microsoft Prism alive and active?

    - by Mike
    I've been doing a lot of reading these last two days on Microsoft Prism, but the thing I'm still not very sure of is what the future looks like for it. I know that version 4.1 was just released a few months ago, but besides Microsoft's own documentation, I haven't found many blog posts written in the last year on the subject; most of what I find is from 2009-2010. It definitely looks interesting, but the learning curve seems to be a bit steep and I wouldn't want to embark if it's going to become obsolete in the near future. Does anyone have any insight on this?

    Read the article

  • What to "CRM" in San Francisco? CRM Highlights for OpenWorld '12

    - by Richard Lefebvre
    There is plenty to SEE for CRM during OpenWorld in San Francisco, September 30 - October 4! Here are some of the sessions in the CRM Track that you might want to consider attending for products you currently own or might consider for the future. I think you'll agree, there is quite a bit of investment going on across Oracle CRM. Please use OpenWorld Schedule Builder or check the OpenWorld Content Catalog for all of the session details and any time or location changes. Tip: Pre-enrolled session registrants via Schedule Builder are allowed into the session rooms before anyone else, so Schedule Builder will guarantee you a seat. Many of the sessions below will likely be at capacity.

    General Session: Oracle Fusion CRM—Improving Sales Effectiveness, Efficiency, and Ease of Use (Session ID: GEN9674) - Oct 2, 11:45 AM - 12:45 PM. Anthony Lye, Senior VP, Oracle, leads this general session focused on Oracle Fusion CRM. Oracle Fusion CRM optimizes territories, combines quota management and incentive compensation, integrates sales and marketing, and cleanses and enriches data—all within a single application platform. Oracle Fusion can be configured, changed, and extended at runtime by end users, business managers, IT, and developers. Oracle Fusion CRM can be used from the Web, from a smartphone, from Microsoft Outlook, or from an iPad. Deloitte, sponsor of the CRM Track, will also present key concepts on CRM implementations.

    Oracle Fusion Customer Relationship Management: Overview/Strategy/Customer Experiences/Roadmap (CON9407) - Oct 1, 3:15 PM - 4:15 PM. In this session, learn how Oracle Fusion CRM enables companies to create better sales plans, generate more quality leads, and achieve higher win rates, and find out why customers are adopting Oracle Fusion CRM. Gain a deeper understanding of the unique capabilities only Oracle Fusion CRM provides, and learn how Oracle's commitment to CRM innovation is driving a wide range of future enhancements.

    Oracle RightNow CX Cloud Service Vision and Roadmap (CON9764) - Oct 1, 10:45 AM - 11:45 AM. Oracle RightNow CX Cloud Service combines Web, social, and contact center experiences for a unified, cross-channel service solution in the cloud, enabling organizations to increase sales and adoption, build trust, strengthen relationships, and reduce costs and effort. Come to this session to hear from Oracle experts about where the product is going and how Oracle is committed to accelerating the pace of innovation and value to its customers.

    Siebel CRM Overview, Strategy, and Roadmap (CON9700) - Oct 1, 12:15 PM - 1:15 PM. The world's most complete CRM solution, Oracle's Siebel CRM helps organizations differentiate their businesses. Come to this session to learn about the Siebel product roadmap and how Oracle is committed to accelerating the pace of innovation and value for its customers on this platform. Additionally, the session covers how Siebel customers can leverage many Oracle assets such as Oracle WebCenter Sites; InQuira, RightNow, and ATG/Endeca applications; and Oracle Policy Automation in conjunction with their current Siebel investments.

    Oracle Fusion Social CRM Strategy and Roadmap: Future of Collaboration and Social Engagement (CON9750) - Oct 4, 11:15 AM - 12:15 PM. Social is changing the customer experience! Come find out how Oracle can help you know your customers better, encourage brand affinity, and improve collaboration within your ecosystem. This session reviews Oracle's social media solution and shows how you can discover hidden insights buried in your enterprise and social data. Also learn how Oracle Social Network revolutionizes how enterprise users work, collaborate, and share to achieve successful outcomes.

    Oracle CRM On Demand Strategy and Roadmap (CON9727) - Oct 1, 10:45 AM - 11:45 AM. Oracle CRM On Demand is a powerful cloud-based customer relationship management solution. Come to this session to learn directly from Oracle experts about future product plans and hear how Oracle is committed to accelerating the pace of innovation and value to its customers.

    Knowledge Management Roadmap and Strategy (CON9776) - Oct 1, 12:15 PM - 1:15 PM. Learn how to harness the knowledge created as a natural byproduct of day-to-day interactions to lower costs and improve customer experience by delivering the right answer at the right time across channels. This session includes an overview of Oracle's product roadmap and vision for knowledge management for both the Oracle RightNow and Oracle Knowledge (formerly InQuira) product families.

    Oracle Policy Automation Roadmap: Supercharging the Customer Experience (CON9655) - Oct 1, 12:15 PM - 1:15 PM. Oracle Policy Automation delivers rapid customer value by streamlining the capture, analysis, and deployment of policies across every facet of the customer experience. This session discusses recent Oracle Policy Automation enhancements for policy analytics; the latest Oracle Policy Automation Connector for Siebel; and planned new capabilities, including availability with the Oracle RightNow product line.

    There is much more, so stay tuned for more highlights or check out the Content Catalog and search for your areas of interest.

    Read the article

  • Does attending the upcoming Devdays 2011 have some value for a resume?

    - by systempuntoout
    This fall I'm 99% sure I'm going to London to attend the awesome DevDays 2011; I have many reasons to go there and some of them are:

    - Professional stuff
    - Great people
    - Awesome topics
    - Unicorns
    - Passion
    - London :)

    Obviously all the cool technologies that will be discussed are light years away from my daily work, but useful for my side projects and maybe for some future employment. Now, to get to the point: a coworker said to me that he won't come with me because DevDays London is expensive, and something expensive should reward you with a certificate, a certificate that could have some value in the eyes of an employer. Is he right? Do you think attending this kind of event has some value on a resume? Should it be highlighted? Does it have any value for a future employer?

    Read the article

  • Python modules not updating after restarting the main module.

    - by Ian
    I've recently come back to a project having had to stop for about 6 months, and after reinstalling my operating system and coming back to it I'm having all kinds of crazy things happen. I made sure to install the same version (2.6) of Python that I was using before. It started by giving me a strange Tkinter error that I hadn't had trouble with before; the program is relatively simple, and the 2 or 3 bugs that were left when I quit were documented and weren't related to the interface. Things got even weirder when the same error would pop up even after I had removed the offending section of code. In fact, the traceback pointed to a line that didn't even exist in the module it was referencing, e.g. line 262 when the module was only 200 lines long. After just starting a completely new file for the main module and copy/pasting, it finally recognized that the offending code was gone and I stopped getting the error, only to find that any updates to the code I made in another module didn't show up when I restarted the program through the shell. (I didn't forget to save.) After fiddling with this, of course, the old interface error came back, only in a different section of code that had been working previously. In fact, if I revert back to the files I had six months ago, the program works fine. As soon as I change anything in the main module, however, the interface bug comes back. Here's the original error:

        Exception in Tkinter callback
        Traceback (most recent call last):
          File "C:\Python26\lib\lib-tk\Tkinter.py", line 1410, in __call__
            return self.func(*args)
          File "C:\PyStuff\interface.py", line 202, in dispOne
            __main__.top.destroy()
          File "C:\Python26\lib\lib-tk\Tkinter.py", line 1938, in destroy
            self.tk.call('destroy', self._w)
        TclError: can't invoke "destroy" command: application has been destroyed

    I'm guessing something else is going on here other than my own poor programming. Anyone have any ideas?

    Read the article

  • 20 Years of Solaris - 25 Years of SPARC!

    - by Stefan Hinker
    I don't usually duplicate what can be found elsewhere. But this is worth an exception.

    20 Years of Solaris - Guess who got all those innovation awards!
    25 Years of SPARC - And the future has just begun :-)

    Check out those pages for some links pointing to the past, and, more interestingly, to the future... There are also some nice videos: 20 Years of Solaris - 25 Years of SPARC. (Come to think of it - I got to be part of all but the first 4 years of Solaris. I must be getting older...)

    Read the article

  • Set environment variable in Ubuntu

    - by Junho Park
    In Ubuntu, I'd like to switch my JAVA_HOME environment variable back and forth between Java 5 and 6. I open a terminal and type in the following to set the JAVA_HOME environment variable:

        export JAVA_HOME=/usr/lib/jvm/java-1.5.0-sun

    And in that same terminal window, I type the following to check that the environment variable has been updated:

        echo $JAVA_HOME

    And I see /usr/lib/jvm/java-1.5.0-sun, which is what I'm expecting to see. In addition, I modify ~/.profile and set the JAVA_HOME environment variable to /usr/lib/jvm/java-1.5.0-sun. And now for the problem: when I open a new terminal window and check my JAVA_HOME environment variable by typing in echo $JAVA_HOME, I see that it has been reverted back to Java 6. When I reboot my machine (or log out and back in, I suppose) the JAVA_HOME environment variable is set to Java 5 (presumably because of the modification I made in my ~/.profile). Is there a way around this so that I can change my JAVA_HOME environment variable without having to log out and back in (AND make that environment variable change stick in all new terminal windows)?

    Read the article

  • Software Architecture Analysis Method (SAAM)

    Software Architecture Analysis Method (SAAM) is a methodology used to determine how specific application quality attributes were achieved, and how possible changes in the future will affect those quality attributes, based on hypothetical case studies. Common quality attributes that can be evaluated with this methodology include modifiability, robustness, portability, and extensibility.

    Quality Attribute: Application Modifiability

    The modifiability quality attribute refers to how easy changing the system in the future will be. This to me is a very open-ended attribute, because a business could decide to transform a Point of Sale (POS) system into a Lead Tracking system overnight. (Yes, this did actually happen to me.) In order for SAAM to be properly applied to checking this attribute, specific hypothetical case studies need to be created and reviewed for the modifiability attribute, because various scenarios will return various results depending on the amount of change. In the case of the POS system, swapping out a payment gateway or adding an additional payment method would have scored very high in comparison to changing the system over to a lead management system. I personally would evaluate this quality attribute based on the S.O.L.I.D principles of software design. I have found from my experience that the use of S.O.L.I.D in software design allows for the adoption of changes within a system.

    Quality Attribute: Application Robustness

    The robustness quality attribute refers to how an application handles the unexpected. The unexpected includes, but is not limited to, anything not anticipated in the original design of the system. For example: bad data, limited to no network connectivity, invalid permissions, or any unexpected application exceptions. I personally would evaluate this quality attribute based on how the system handles exceptions.

    Robustness considerations:
    - Did the system stop, or did it handle the unexpected error?
    - Did the system log the unexpected error for future debugging?
    - What message did the user receive about the error?

    Quality Attribute: Application Portability

    The portability quality attribute refers to the ease of porting an application to run on a new operating system or device. For example, it is much easier to alter an ASP.net website to be accessible from a PC, Mac, iPhone, Android phone, mini PC, or tablet than to do the same for a desktop application written in VB.net, because a lot more work would be involved to get the desktop app to the point where it would be viable to port it over to the various environments and devices. I personally would evaluate this quality attribute based on each new environment the hypothetical case study identifies. I would pay particular attention to the following items.

    Portability considerations:
    - Hardware dependencies
    - Operating system dependencies
    - Data source dependencies
    - Network dependencies and availability

    Quality Attribute: Application Extensibility

    The extensibility quality attribute refers to the ease of adding new features to an existing application without impacting existing functionality. I personally would evaluate this quality attribute based on each new environment, with the following in mind.

    Extensibility considerations:
    - Hard-coded variables versus configurable variables (see the sketch below)
    - Application documentation (external documents and codebase documentation)
    - The use of S.O.L.I.D design principles
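    As a rough sketch of the hard-coded-versus-configurable contrast listed under the extensibility considerations above (the class name, file path, and property key are all hypothetical):

        import java.io.FileInputStream;
        import java.io.IOException;
        import java.util.Properties;

        public class TaxCalculator {
            // Hard-coded: changing the rate means editing and redeploying code.
            private static final double HARD_CODED_TAX_RATE = 0.08;

            // Configurable: the rate lives in an external properties file,
            // so it can change without touching the codebase.
            public static double loadTaxRate(String configPath) throws IOException {
                Properties props = new Properties();
                try (FileInputStream in = new FileInputStream(configPath)) {
                    props.load(in);
                }
                // Fall back to the old default if the key is absent
                return Double.parseDouble(props.getProperty("tax.rate", "0.08"));
            }
        }

    Changing tax.rate in the file changes the application's behaviour without a rebuild, which is the kind of change a SAAM case study would score favourably for extensibility.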

    Read the article
