Search Results

Search found 21963 results on 879 pages for 'lance may'.

  • How Does Microsoft Market DotNet?

    - by Fendy
    I just read Joel's article about Microsoft's breaking change (non-backwards compatibility) with .NET's introduction. It is interesting and explicitly reflects the conditions of that time. But now almost 10 years have passed.

    The breaking change
    It is mainly about how bad it was for Microsoft to introduce a non-backwards-compatible development platform such as .NET, instead of improving the already widely used ASP Classic or VB6. As many know, .NET is not natively embedded in Windows XP (it is in Vista and 7), so in order to use .NET apps you had to install the .NET Framework, over 300 MB (which was big in those days). Yet nowadays many businesses use .NET as their main development platform, with ASP.NET or MVC for their web-based applications, and C# is now one of the top programming languages (it has the most questions on Stack Overflow). The more interesting part is that the Win32 API is still alive, and still widely used, even though newer technology is out there. Imagine if Microsoft had not introduced the breaking change: many corporations would still be using ASP Classic or VB-based applications (some still do, but not that many), and many corporations now use additional services such as Azure or SharePoint (however expensive they are). Please note that I also know there are many flagship applications (maybe Adobe's and Blizzard's) that still use C or older languages and have not been ported to newer high-level languages.

    The question
    How did Microsoft persuade users to migrate their old applications to .NET? As we know, rewriting applications is very hard, yields no immediate value (remember the Netscape story), and is very risky. I am interested in Microsoft's approach, not in opinions such as "because .NET is OOP" or ".NET is DLL-embeddable". This question should be constructive, as technology has been changing rapidly lately. As we can see, Microsoft changed ASP.NET WebForms to MVC, WinForms is legacy now, distribution is starting to shift to the Windows Store rather than traditional installers, then touchscreens, and later on we will have see-through applications such as Google Glass. Those will be breaking changes too. We need to account for portability as an issue nowadays. We need more than a mere technology choice; we also need migration plans. Maybe we even need something as drastic as a multi-platform language compiler, as approached by Joel's Wasabi. (Hey, I read his articles too much!)

  • Fraud and Anomaly Detection using Oracle Data Mining YouTube-like Video

    - by chberger
    I've created and recorded another YouTube-like presentation with "live" demos of the Oracle Advanced Analytics Option, this time focusing on fraud and anomaly detection using Oracle Data Mining. [Note: it is a large MP4 file that will open and play in place. The sound quality is weak, so you may need to turn up the volume.]

    Data is your most valuable asset. It represents the entire history of your organization and its interactions with your customers. Predictive analytics leverages data to discover patterns and relationships, and even helps you make informed predictions. Oracle Data Mining (ODM) automatically discovers relationships hidden in data. Predictive models and insights discovered with ODM address business problems such as predicting customer behavior, detecting fraud, analyzing market baskets, profiling, and loyalty. Oracle Data Mining, part of the Oracle Advanced Analytics (OAA) Option to Oracle Database EE, embeds 12 high-performance data mining algorithms in the SQL kernel of the Oracle Database. This eliminates data movement, delivers scalability, and maintains security.

    But how do you find these very important needles, or possibly fraudulent transactions, in huge haystacks of data? Oracle Data Mining's One-Class Support Vector Machine algorithm is specifically designed to identify rare or anomalous records. The one-class SVM anomaly detection algorithm trains on what are considered "normal" records and builds a descriptive and predictive model, which can then be used to flag records that, on a multi-dimensional basis, appear not to fit in, or to be different. Combined with clustering techniques, to sort transactions into more homogeneous sub-populations for more focused anomaly detection analysis, and with Oracle Business Intelligence, Enterprise Applications, and/or real-time environments to "deploy" fraud detection, Oracle Data Mining delivers a powerful advanced analytical platform for solving important problems. With OAA/ODM you can find suspicious expense report submissions, flag non-compliant tax submissions, fight fraud in healthcare claims, and save huge amounts of money lost to fraudulent claims and abuse. This presentation and several brief demos show Oracle Data Mining's fraud and anomaly detection capabilities.
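
    For a concrete picture, here is a minimal sketch of what building and scoring such a one-class SVM model with the DBMS_DATA_MINING PL/SQL API can look like. This is not taken from the presentation; the CLAIMS table, its columns, and the outlier rate are hypothetical placeholders:

        -- Settings table: pick the SVM algorithm; the NULL target below makes it one-class.
        CREATE TABLE odm_settings (
          setting_name  VARCHAR2(30),
          setting_value VARCHAR2(4000)
        );

        BEGIN
          INSERT INTO odm_settings VALUES
            (dbms_data_mining.algo_name, dbms_data_mining.algo_support_vector_machines);
          INSERT INTO odm_settings VALUES
            (dbms_data_mining.svms_outlier_rate, '0.05');  -- treat roughly 5% of records as outliers

          -- Train on records assumed to be "normal"; a NULL target signals anomaly detection.
          DBMS_DATA_MINING.CREATE_MODEL(
            model_name          => 'CLAIMS_ANOMALY_MODEL',
            mining_function     => dbms_data_mining.classification,
            data_table_name     => 'CLAIMS',
            case_id_column_name => 'CLAIM_ID',
            target_column_name  => NULL,
            settings_table_name => 'ODM_SETTINGS');
        END;
        /

        -- Flag the records the model considers least "normal" (prediction 0 = anomaly).
        SELECT claim_id
          FROM claims
         WHERE PREDICTION(claims_anomaly_model USING *) = 0
         ORDER BY PREDICTION_PROBABILITY(claims_anomaly_model, 0 USING *) DESC;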

  • Updating My Online Boggle Solver Using jQuery Templates and WCF

    With WebForms, each ASP.NET page's rendered output includes a <form> element that performs a postback to the same page whenever a Button control within the form is clicked, or whenever the user modifies a control whose AutoPostBack property is set to True. This model simplifies web page development, but carries with it some costs - namely, the large amount of data exchanged between the client and the server during a postback. On postback the browser sends the values of all of its form fields (including hidden ones, like view state, which may be quite large) to the server; the server then sends back the entire contents of the web page. While there are some scenarios where this amount of information needs to be exchanged, in many cases the user has performed some action that requires far less information to be exchanged. With a little bit of forethought and code we can have the browser and server exchange much less data, which leads to more responsive web pages and an improved user experience.

    Over the past several weeks I've been writing an article series on accessing server-side data from client script. Rather than rely solely on forms and postbacks, many websites use JavaScript code to asynchronously communicate with the server in response to the page loading or some other user action. The server, upon receiving the JavaScript-initiated request, returns just the data needed by the browser, which the browser then seamlessly integrates into the web page. There are a variety of technologies and techniques that can be employed to provide both the needed server- and client-side functionality. Last week's article, Using WCF Services with jQuery and the ASP.NET Ajax Library, explored using the Windows Communication Foundation, or WCF, to serve data from the web server and showed how to consume such a service using both the ASP.NET Ajax Library and jQuery.

    In a previous 4Guys article, Creating an Online Boggle Solver, I built an application to find all solutions in a game of Boggle. (Boggle is a word game trademarked by Parker Brothers and Hasbro that involves several players trying to find as many words as they can in a 4x4 grid of letters.) This article takes the lessons learned in Using WCF Services with jQuery and the ASP.NET Ajax Library and uses them to update the user interface for my online Boggle solver, replacing the existing WebForms-based user interface with a more modern and responsive interface. I also used jQuery Templates, a JavaScript-based templating library that is useful for displaying the results from a server-side service. Read on to learn more!

  • Cheating on Technical Debt

    - by Tony Davis
    One bad practice guaranteed to cause dismay amongst your colleagues is passing on technical debt without full disclosure. There can only be two reasons for it: either the developer or DBA didn't know the difference between good and bad practices, or they concealed the debt. Neither reflects well on their professional competence.

    Technical debt, or code debt, is a convenient term to cover all the compromises between the ideal solution and the actual solution, reflecting the reality of the pressures of commercial coding. The one time you're guaranteed to hear one developer, or DBA, pass judgment on another is when he or she inherits their project, and is surprised by the amount of technical debt left lying around in the form of inelegant architecture, incomplete tests, confusing interface design, no documentation, and so on.

    It is often expedient for a Project Manager to ignore the build-up of technical debt, the cut corners, not-quite-finished features and rushed designs that mean progress is satisfyingly rapid in the short term. It's far less satisfying for the poor person who inherits the code. Nothing sends a colder chill down the spine than the dawning realization that you've inherited a system crippled with performance and functional issues that will take months of pain to fix before you can even begin to make progress on any of the planned new features. It's often hard to justify this 'debt paying' time to the project owners and managers. It just looks as if you are making no progress, in marked contrast to your predecessor.

    There can be many good reasons for allowing technical debt to build up, at least in the short term. Often, rapid prototyping is essential, there is a temporary shortfall in test resources, or the domain knowledge is incomplete. It may be necessary to hit a specific deadline with a prototype, or proof-of-concept, to explore a possible market opportunity, with planned iterations and refactoring to follow later. However, it is a crime for a developer to build up technical debt without making this clear to the project participants. He or she needs to record it explicitly. A design compromise made in order to hit a deadline, be it an outright hack, or a decision made without time for rigorous investigation and testing, needs to be documented with the same rigor with which one tracks a bug.

    What's the best way to do this? Ideally, we'd have some kind of objective assessment of the level of technical debt in a software project, although that smacks of science fiction even as I write it. I'd be interested to hear of any methods you've used, but I'm sure most teams have to rely simply on the integrity of their colleagues and the clear perceptions of the project manager...

    Cheers, Tony.

  • Code Coverage for Maven Integrated in NetBeans IDE 7.2

    - by Geertjan
    In NetBeans IDE 7.2, JaCoCo is supported natively, i.e., out of the box, as a code coverage engine for Maven projects, since Cobertura does not work with JDK 7 language constructs. (Note, though, that Cobertura is supported as well in NetBeans IDE 7.2.) It isn't part of NetBeans IDE 7.2 Beta, so don't even try there; you need a development build from after that. I downloaded the latest development build today.

    To enable JaCoCo features in NetBeans IDE, you need to do nothing different from what you'd do when enabling JaCoCo in Maven itself, which is rather wonderful. In both cases, all you need to do is add this to the "plugins" section of your POM:

        <plugin>
            <groupId>org.jacoco</groupId>
            <artifactId>jacoco-maven-plugin</artifactId>
            <version>0.5.7.201204190339</version>
            <executions>
                <execution>
                    <goals>
                        <goal>prepare-agent</goal>
                    </goals>
                </execution>
                <execution>
                    <id>report</id>
                    <phase>prepare-package</phase>
                    <goals>
                        <goal>report</goal>
                    </goals>
                </execution>
            </executions>
        </plugin>

    Now you're done and ready to examine the code coverage of your tests, whether they are JUnit or TestNG. At this point, i.e., for no other reason than that you added the above snippet to your POM, you will have a new Code Coverage menu when you right-click on the project node. If you click Show Report there, the Code Coverage Report window opens. Here, once you've run your tests, you can see how many classes have been covered by your tests, which is pretty useful, since 100% of tests passing doesn't mean much when you've only tested one class. Then, when you click the bars in the Code Coverage Report window, the class under test is shown, with the methods for which tests exist highlighted in green and those that haven't been covered in red.

    (Note: Of course, striving for 100% code coverage is a bit nonsensical. For example, writing tests for your getters and setters may not be the most useful way to spend one's time. But being able to measure, and visualize, code coverage is certainly useful regardless of the percentage you're striving to achieve.)

    Best of all, everything you see above is available out of the box in NetBeans IDE 7.2. Take a look at what else NetBeans IDE 7.2 brings for the first time to the world of Maven: http://wiki.netbeans.org/NewAndNoteworthyNB72#Maven

  • A strong component keeps everything together

    - by Justin Paul-Oracle
    Most of the time, when you implement a WebCenter Content based system, you require some sort of customization. Sometimes these customizations need a Java class or two, or libraries (for example, the JavaMail API), or database objects (like new tables, views, indexes, etc.). I have seen that libraries and database objects are usually put in place using manual steps. This means that the library jar files are copied to one of the common classes directories (set in the Content CLASSPATH variable) and/or the database scripts are executed manually. I have also seen people place the custom Java classes in the common classes directory. While this may seem like an easy solution, think about a scenario where you need to disable or uninstall the component, or where you have to upgrade or migrate the system. You have to keep these manual steps documented and execute them every time you encounter the above scenarios. It is very common that some of these manual steps are missed when you have multiple teams and people working on the system. Here are a few points to ponder:

    - Place all your custom Java classes within your component. Create a new directory, say ${COMPONENT_DIR}/classes, and place your code there. You can choose to bundle all your classes into a jar or you can place the entire class directory structure. Add a path entry to the Build Settings so that it is bundled with the component when you build it. You also need to update the Custom Class Path and the Custom Class Path Load Order under the Advanced Build Settings. This will ensure that the system CLASSPATH is updated to add this new directory.

    - Create a new component for any new library that you want to add. Add the appropriate path entries to the Build Settings so that it is bundled with the component when you build it. You also need to update the Custom Class Path, Custom Class Path Load Order and/or the Custom Library Path under the Advanced Build Settings. Enter a comma-separated list of features that this component will provide. When you create other components that will use the features exposed by this component, make sure that you specify a dependency on this library component by specifying the comma-separated list of features in the Advanced Build Settings.

    - The component wizard allows you to create custom install/uninstall Java code. The wizard will create an install filter class when you check the "Has Install" checkbox on the "Install/Uninstall Settings" tab. Consider using this filter class to create database objects when you install the component and drop the objects when you uninstall the component. If you do a lot of custom component development, consider creating an install/uninstall Java class which can execute queries defined within the component.

    To sum up, whenever you write a new custom component, make sure that you bundle everything within the component.

  • #SSAS #Tabular Workshop and Community Events in Netherlands and Denmark

    - by Marco Russo (SQLBI)
    Next week I will finally start the roadshow of the SSAS Tabular Workshop, a 2-day seminar about the new BISM Tabular model for Analysis Services introduced in SQL Server 2012. During these roadshows, we always try to arrange some speeches at local community events in the evening - we have already defined one for Copenhagen, and we have a logistics issue in Amsterdam that we're trying to solve. Here is the timetable:

    Netherlands
    - SSAS Workshop in Amsterdam, NL - April 16-17, 2012: 2-day seminar; Alberto and I will be the trainers for this event, register here. We're trying to arrange a community event, but we still don't have a confirmation - stay tuned.

    Denmark
    - SSAS Workshop in Copenhagen, DK - April 26-27, 2012: 2-day seminar; Alberto and I will be the trainers for this event, register here.
    - Community event on April 26, 2012: this event will run in Hellerup, at the Microsoft venue. All details are available here: http://msbip.dk/events/26/msbip-mode-nr-5/. People from Sweden are welcome! Just register with this private group on LinkedIn in order to announce your presence, so we'll know how many people will attend.

    In the community events we'll deliver two speeches - here are the descriptions:

    Inside xVelocity (VertiPaq)
    PowerPivot and BISM Tabular models in Analysis Services share a great columnar-based database engine called the xVelocity in-memory analytics engine (VertiPaq). If you want to improve performance and optimize the memory used, you have to understand some basic principles about how this engine works, how data is compressed, and how you can design a data model for better optimization. Prepare yourself to change your mind: xVelocity optimization techniques might seem counterintuitive and are absolutely different from OLAP and SQL ones!

    Choosing between Tabular and Multidimensional
    You have a new project and you have to make an important decision upfront: should you use Tabular or Multidimensional? It is not easy to answer, because sometimes there is a clear choice, but most of the time both decisions might be correct, at least at the beginning. In this session we'll help you make an informed decision, correctly evaluating the pros and cons of each one according to common scenarios, considering both the short-term and long-term consequences of your choice.

    I hope to meet many people on these first dates. We have many other events coming in May and June, including an online event (for US time zones), and you can also attend our PreCon Day at TechEd US in Orlando (PRC06) or TechEd Europe in Amsterdam. I'll be a good customer for airline companies in the next three months! I'm just sorry that I haven't had time to write other articles in the last month, but I'm accumulating material that I will write down during some flight - stay tuned...

  • Ubuntu won't connect to wired network

    - by djeikyb
    I'm running 10.04, upgraded from 9.10, maybe, but probably not upgraded from 9.04. I have two wifi routers. Zeus is connected to the DSL modem. Hermes uses a WDS bridge with Zeus to extend the network. My desktop (Daedalus) is ethernetted to Hermes. My laptop (Clyde) is on wifi, switching to Hermes or Zeus as needed.

    Occasionally, as in whenever I transfer a large file from desktop to laptop, the WDS bridge will die. Fixing it means restarting both routers, though it seems Hermes should boot first. This is ridiculous, and eventually I'll get around to asking you guys to help me stop it from happening. More importantly, my desktop requires a reboot to get back on the network. WTF.

    ifconfig shows my NIC has no IP. /etc/init.d/networking restart doesn't do anything, not even give me a lousy IP. dhcpcd eth1 grants me an IP address, but doesn't help with internet access. route -n shows what looks like my normal routing table, but pinging google.com informs me it's an unknown host.

        jake@daedalus:~$ route -n
        Kernel IP routing table
        Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
        10.1.1.0        0.0.0.0         255.255.255.0   U     0      0        0 eth1
        169.254.0.0     0.0.0.0         255.255.0.0     U     0      0        0 eth1
        0.0.0.0         10.1.1.1        0.0.0.0         UG    0      0        0 eth1

    It may be worth noting that I can ping both Zeus (10.1.1.1) and Hermes (10.1.1.4) and my laptop (10.1.1.55). Much obliged for any help. Rebooting is, well, trivial in this instance. But it's stupid. I switched to linux because I like the idea that if one part breaks, you fix it instead of reboot reboot reboot. I've left my poor desktop in disarray, confining myself to my little netbook. My desktop is broken, awaiting magical commands from you brilliant folk.

    (And yes, I know Clyde the netbook should be named Icarus. It was its original name. Ironically the SSD burned out, and I felt the name wasn't right when it came to reinstalling.)

  • links for 2011-01-12

    - by Bob Rhubart
    WebCenter Spaces 11g PS2 Template Customization (Javier Ductor's Blog)
    "Recently, we have been involved in a WebCenter Spaces customization project. A customer sent us a prototype website in HTML, and we had to transform Spaces to set the same look and feel as in the prototype..." - Javier Ductor (tags: oracle otn webcenter enteprise2.0)

    Matt Carter: Risky Business
    "Incorporating risk detection and mitigation capabilities into apps is becoming all the rage. There are plenty of real-life examples of cases where prevention of cyber-security threats and fraudsters might have kept governments and companies out of the news, and with more money in their accounts." (tags: oracle otn security middleware)

    John Brunswick: 5 Surprisingly Good Benefits of Corporate Blogs
    "Some may still propose that not all corporations are going to be able to provide the five benefits above and are more focused around shameless self promotion of products and services. If that is the case, that corporation is most likely not producing something of high value." - John Brunswick (tags: oracle otn enterprise2.0 blogging)

    InfoQ: IT And Architecture: Inside-Out Perspectives
    The software industry is in disarray, costs are escalating, and quality is diminishing. Promises of newer technologies and processes and methodologies in IT are still far from materializing on any significant scale. Bruce Laidlaw and Michael Poulin - each with more than 30 years of experience - compared notes on the past and present of IT and provide insights on what IT needs to make progress. (tags: ping.fm)

    SOA & Middleware: Canceling a running composite instance - example
    Useful tips from Niall Commiskey. (tags: soa middleware oracle)

    BPEL 11.1.1.2 Certified for Prebuilt E-Business Suite 12.1.3 SOA Integrations (Oracle E-Business Suite Technology)
    "A new certification was released simultaneously with the E-Business Suite 12.1.3 Maintenance Pack late last year: the use of BPEL 11g Version 11.1.1.2 with E-Business Suite 12.1.3." - Steven Chan (tags: oracle bpel)

    Marc Kelderman: OSB: Deploy Service Level Agreement (SLA), aka Alert Rule
    "The big issue with these SLAs is the deployment. If you have dozens of services, with multiple operations, and you have a lot of environments it takes a while to create them... [But] I have a nice workaround." - Marc Kelderman (tags: oracle otn soa osb sla)

    @myfear: Java EE 7 - what's coming up for 2012? First hints.
    "Even if the actual Java EE 6 version is still not too widespread, we already have seen the first signs of the next EE 7 version written to the sky." - Markus "myfear" Eisele (tags: oracle otn oracleace java)

  • Retail in New York - a walk down 5th Avenue

    - by sarah.taylor(at)oracle.com
    It's the week of the NRF Big Show and all eyes in the retail industry are on New York. The Big Apple is famous for big retail, with a proliferation of incredibly iconic stores. The environment is exciting and familiar even to people visiting this small island for the first time. Most of us have travelled down Fifth Avenue watching movies and TV even if we have never set foot on American soil. I find it one of the most exciting retail cities in the world, and I am thrilled this year to be here with so many of Oracle's international retail customers who are joining us for the Retail Exchange. The Oracle program brings retailers from all over the planet together to share ideas and be inspired by New York retail and the NRF event. The show celebrates its 100th year in 2011, and New York itself has been recognized globally as the capital of innovative retail for just as long.

    Fifth Avenue is where many global brands have placed their flagship stores, and businesses are in constant competition to set themselves apart from their competitors - both in the store and from the street. These flagship retail destinations present what today's customers are finding most exciting and delightful about retail. The tourist market may only visit these stores once, but the impression that a trip to a flagship store leaves with a customer can last a lifetime. One of the stores that is currently turning heads on Fifth Avenue is Hollister, sister brand to Abercrombie and Fitch, which has filled its shop front with a massive live video (and audio) feed of surfers on the beach in California. To complete the effect, they also have troughs of water in front of the video screens to bring the sea to the street. And this isn't the only kind of surfing that retailers are considering today: multi-channel retail is a hot topic that all of the retailers joining the Retail Exchange are considering.

    The rest of the world looks to the brands along Fifth Avenue for inspiration - how they take advantage of new opportunities, how they set themselves apart from their competitors and how they keep their products fresh and desirable. With these inspiring pioneers in New York, it's little wonder that NRF's Big Show is so popular, and that New York is viewed as one of the retail capitals of the world. It is a pleasure to be here with so many of the world's greatest international retailers.

  • Sometimes keyboard & touchpad work... sometimes not

    - by Voyagerfan5761
    When I first ran Ubuntu from CD on this Dell Inspiron 2650, it worked for about ten to fifteen minutes, then it hung (I was probably trying to do too much at once from a Live CD). The next time, my mouse and keyboard didn't work. I rebooted three times and finally got them working. I then installed Ubuntu alongside Windows XP. After installing, selecting the OS in GRUB worked, but my touchpad and keyboard were again not working. I rebooted, and they worked. (I fortunately had a USB mouse with which to reboot.) Booting Ubuntu and then rebooting to enable my keyboard and touchpad has become a routine ever since. Often several reboots are required; at one point I had to reboot over a dozen times in a row before getting a session where everything worked properly. (My installation has been in place for about three days a week now.) I've looked around for a device manager equivalent to no avail. Sometimes the hardware is properly detected, and sometimes it's not. Once or twice I've had the keyboard detected properly but the touchpad not. Plugging in my wireless card also sometimes requires a plug, unplug, and plug again to get it working. So is there some solution? I'm without an Internet connection at home, and this "laptop" is really a wall wart on my desk, so suggestions for packages may take a while to test. Xorg logs I captured two three four sample Xorg logs: one from a startup where the devices worked; one from when they didn't; one from a session where Ubuntu thought my touchpad was a normal mouse; and one from a session where my keyboard worked but the touchpad didn't. See this gist. Updated 2010-12-15 01:50 UTC with Xorg.0.log.keyboardonly file illustrating the case where the keyboard worked but not the touchpad. Updated 2011-01-11 04:10 UTC with Xorg.0.log.touchpadregmouse to illustrate a case where the touchpad was detected as a regular mouse (no "Touchpad" tab in mouse prefs).

  • Phone number in meta description: good or bad for local (NAP) rankings?

    - by bybe
    Once again I'm at it with increasing people's local rankings, and I've learnt so much about local rankings in the past 2 weeks it feels like my brain is going to pop. Anyway, the question is fairly simple for someone who engages in local rankings, and I appreciate the answer may involve a little guesswork - but isn't SEO mostly guessing anyway?

    From what I've read and learned, Google works off a system called NAP for local rankings (with many other factors, but this question is purely based on NAP). For people who care about local rankings, NAP stands for Name of Business / Address of Business / Phone Number of Business. From what I've read, you don't need the whole NAP to be on one website; a P alone or an N alone can help towards your local rankings. It's believed that a full NAP is rewarded more than just a P and an N, for example, but knowing Google they might have a diversity checker, which is my concern, as you'll see in a moment. Of course, sites weigh differently depending on where your business is posted; it's certainly going to be more credible if your NAP details are in your national phone book than on, say, a blog site - so take this into consideration too.

    Pure guess (not part of the question, but nonetheless a good read on my thinking)
    My guesswork would make me believe that the formula looks something like (N + A + P) x (T), where N, A and P are each 1 or 0 to indicate whether the name, address and phone number are present, and T is 1-100 to indicate the level of trust. So a phone number appearing on YouTube might look something like (0 + 0 + 1) x 95 = 95, and a NAP appearing in your national phone book might look something like (1 + 1 + 1) x 100 = 300. Please note that I'm not saying this is the sole factor, and I'm sure it's way more complex than this, with other factors on the page and off the page (reviews, links, clicks) and so on - but it's still a contributor.

    The question
    My question is fairly simple, and I'd imagine hard to impossible to answer definitively, but maybe someone has seen official wording elsewhere on this: is it bad to include the address or phone number in the meta description? The reason I ask is that one of my competitors has these elements in their meta descriptions and their local rankings are absolutely superb. The problem I have with this is scraper sites like 'Similar Too', 'Seo Rankings' and thousands of the other scraper networks that scrape sites and then make URLs with your site information; they are mostly limited to your meta description, which means that your phone number, address and sometimes even your company name (if the domain is exact) will appear as AP, and even NAP, on thousands of websites. So, is it a bad strategy to include phone number and address in the meta description? Everything I've read would suggest it's good, with the possible downside of lowering the quality of the description for click-throughs - but top rankings would offset that tenfold anyhow.

  • Oracle OpenWorld Call for MDM Papers

    - by david.butler(at)oracle.com
    As the MDM Track owner, I would like to invite everyone to respond to the Oracle OpenWorld (October 2-6, Moscone Center, San Francisco) Call for Papers (https://oracleus.wingateweb.com/portal/cfp/ ). The Call for Papers is open now through Sunday, March 27. This is an outstanding opportunity for organizations familiar with MDM to tell their story to a very large, knowledgeable and intensely interested community. Opportunities for feedback and networking abound.

    I would love to see MDM papers on: business drivers; business benefits; quantified ROI stories; business process optimization; implementation styles; implementation lessons learned; using master data as a service; data governance best practices; end-to-end data quality experiences; support for SOA; Chart of Accounts issues fixed; how to leverage reference data; improving EPM and/or BI across the board; operationalizing a data warehouse; support for cloud computing; compliance success stories; architecture, scalability, and mixed workload RAC platform performance examples; industry-specific value propositions (Financial Services, Retail, Telecom, Manufacturing, High Tech Manufacturing, Public Sector, Health Care, ...); and line-of-business-specific value propositions (CRM, ERP, PLM, SCM, ...); etc. In fact, given that MDM positively impacts all areas of operations and analytics, there are no limits to the ideas you may have for an OpenWorld presentation.

    When you follow the submission process, be sure to use "Master Data Management" for either the Primary or Optional track. Add "Master Data Management" as an Optional track if you are adding MDM content to a presentation on one of the following tracks: Agile; Customer Relationship Management; Oracle E-Business Suite; Product Lifecycle Management; Siebel; Sourcing and Procurement; Supply Chain Management; or one of the 18 available industry tracks. If Cloud Computing is included, please add "Cloud Computing" as a Cross-Stream Track. And don't forget to make "MDM" a Tag, along with Business Intelligence, Cloud, CRM, Data Integration, Data Migration, Data Warehousing, EPM, or Service-Oriented Architecture whenever your content includes these items.

    I will personally review each submission. I hope you all keep me very busy over the next few weeks.

  • Do you know about the Visual Studio 2010 Database Projects Guidance?

    - by Martin Hinshelwood
    Early on in the Team System (now Visual Studio ALM) cycle, a new product surfaced within Team System that was affectionately called "Data Dude" but had the more formal name of "Visual Studio 2005 Team Edition for Database Professionals". The purpose of this product was to try to make the database a "first class citizen" in the development world. Those that started using Visual Studio 2005 Team Edition for Database Professionals (Data Dude) loved it, but everyone else did not get it. The capabilities were a little patchy, but the one thing it did bring to the party was the ability to put your database schema under source control. This was revolutionary, as previously your DBA sat as far away from the team as possible, usually in a dark cupboard; now they could partake of all the goodness of version control, work item tracking and automated builds. The problem was that the understanding required to manage these projects was very different from that needed previously. Then the Visual Studio ALM Rangers got hold of it... and produced some of the best guidance available.

    This guidance, which you can download from http://vsdatabaseguide.codeplex.com/, discusses scenarios and approaches for using the Database Projects in Visual Studio 2010 to help you use the tools more effectively and maximize their value to your organization. The guidance is focused on these five areas:

    - Solution and Project Management
    - Source Code Control and Configuration Management
    - Integrating External Changes with the Project System
    - Build and Deployment Automation with Visual Studio Database Projects
    - Database Testing and Deployment Verification

    Each of these areas has common guidance, usage scenarios, hands-on labs, and lessons learned from real-world engagements and the community discussions. The guidance is broken down into three packages:

    - Guidance documentation
    - Hands-on-lab (HOL) documentation (note: the documentation is available in XPS-only format packages or complete XPS, PDF, DOCX format packages)
    - HOL package

    If you need assistance and no one else can help, then you may need to call the Visual Studio ALM Rangers. The Visual Studio ALM Rangers have the mission to provide out-of-band solutions for missing features or guidance. They are supported by the Microsoft Product Group, Microsoft Consulting Services, Microsoft Most Valued Professionals (MVPs) and technical specialists from technology communities around the globe, giving you a real-world view from the field, where the technology has been tested and used. For more information on the Rangers please visit http://msdn.microsoft.com/en-us/vstudio/ee358786.aspx and for a list of other Rangers projects please see http://msdn.microsoft.com/en-us/vstudio/ee358787.aspx.

  • Guiding Management to the Correct Decision

    - by Blumer
    My supervisor (also a developer) and I have a running joke about writing a book called "Managing From Beneath: Subversively Guiding Management to the Right Decision" and including a number of "techniques" we've developed for helping those who make the decisions to make the right ones. So far, we've got (cynicism warning!):

    - BIC It! BIC stands for "Bury In Committee." When a bad idea comes up that someone wants to champion, we try to get it deferred to a committee for input. Typically it will either get killed outright (especially if other members of the committee are competing for you as a resource), or it will be hung up long enough that the proponent forgets about it.

    - Smart, Stupid, or Expensive? When someone gets a visionary idea, offer them three ways to do it: a smart way, a stupid way, and an expensive way. The hope is that you've at least got a 2/3 shot of not having to do it the way that makes a piece of your soul die.

    - All-Pro. It's a preemptive pro/con list in which you get into the mind of the (pr)opponent and think what would be cons against doing it your way. Twist them into pros and present them in your pro list before they have a chance to present them as cons.

    - Dependicitis. Link pending decisions together, ideally with the proponent's pet project as the final link in the chain. Use this leverage to force action on those that have been put off.

    - Preemptive Acceptance. Sometimes it's clear that management is going to go a particular direction regardless of advice to the contrary, and it's time to make the best of it. Take the opportunity to get something else you need, though. Approach the sponsor out of the blue and take the first step: "You know, I've been thinking about it, and while it's not the route I would advise, as long as we can get the schedule and budget for Project Awesome loosened up, I can work some magic to make your project fly."

    So ... what techniques have you come up with to try to head off the problem projects or make the best of what may come?

  • Five Fake Sounds Engineered to Make You Feel Better [Science]

    - by Jason Fitzpatrick
    As objects in our environment (like cars, ATMs, and phones) have grown lighter and quieter, scientists have been carefully engineering their sounds so that they continue to sound like we expect them to. Read on to see how.

    At the design blog Humans Invent they share five interesting ways that the world around us is being engineered so it sounds the way we expect it to. They start with the example of the car door. Years ago cars were almost entirely steel, the doors were weighty, and when you slammed them it sounded like one big hunk of steel locking into another big hunk of steel (which, in fact, it was). Newer cars are lighter, but people still crave that substantial clunk. Humans Invent highlights the effect of consumer desire:

        A car door is essentially a hollow shell with parts placed inside it. Without careful design the door frame amplifies the rattling of mechanisms inside. Car companies know that if buyers don't get a satisfying thud when they close the door, it dents their confidence in the entire vehicle. To produce the ideal clunk, car doors are designed to minimise the amount of high frequencies produced (we associate them with fragility and weakness) and emphasise low, bass-heavy frequencies that suggest solidity. The effect is achieved in a range of different ways - car companies have piled up hundreds of patents on the subject - but usually involves some form of dampener fitted in the door cavity. Locking mechanisms are also tailored to produce the right sort of click and the way seals make contact is precisely controlled. On average it takes 1.8 seconds to close a car door but in that time you're witnessing a strange kind of symphony composed by engineers and designers whose goal is to reassure you that it's rock solid.

    They mention lock mechanisms, something you may never have thought about. A friend of mine had a Ford Focus some years ago, and that particular model had electric locks that, instead of giving a satisfying thunk or solid click, made this horrible gates-of-the-prison buzzing sound that was completely unnerving. Hit up the link below to see how sounds are engineered for car doors, electric motors, ATM machines, and more.

    5 Fake Sounds Designed to Help Humans [Humans Invent via Boing Boing]

  • RPG Monster-Area, Spawn, Loot table Design

    - by daemonfire300
    I currently struggle with creating the database structure for my RPG. So far I've got these tables:

    - area (id)
    - monster (id, area_id, hp, attack, defense, name)
    - item (id, some other values)
    - loot (monster_id, item_id, chance)
    - spawn (area_id, monster_id, count)

    It is a browser-based game like e.g. Castle Age. The player can move from area to area. If a player enters an area, the system spawns new monsters into the monster table, based on the area id and using the spawn table data. If a player kills a monster, the system takes the monster id, looks up the items via the loot table, and adds those items to the player's inventory.

    First, is this smart? Second, I need some kind of "monster_instance" table and "area_instance" table, since each player enters his very own "area" and does damage to his very own "monsters". Another approach would be adding the player id to the monster table, so each monster spawned has its own "player", but I would still need to assign them to an area, and I think putting both the player id and the area id into the monster table would overload it. What are your thoughts?

    Temporary solution:

    - monster (id, attack_damage, defense, hp, exp, etc.)
    - monster_instance (id, player_id, area_instance_id, hp, attack_damage, defense, monster_id, etc.)
    - area (id, name, access restriction)
    - area_instance (id, area_id, last_visited)
    - spawn (id, area_id, monster_id)
    - loot (id, monster_id, chance, amount, ?area_id?)

    An example system flow would be:

    1. Player enters area 1: the system creates an area_instance with area_id = 1 and sets player.location to area_id = 1.
    2. Player wants to battle monsters in the current area: the system fetches all spawn entries matching area_id == player.location and creates a new monster_instance for each spawn, fetching the corresponding monster base data from the monster table. If a monster is fetched more than once it may be cached.
    3. Player actually attacks a monster: the system updates the corresponding monster_instance; if the monster dies, the instance is removed after creating the loot.
    4. Player leaves the area: area_instance.last_visited is set to NOW(); if the player doesn't return to the area within a certain amount of time, the area_instance, including all its monster_instances, is deleted.
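
    To make the "temporary solution" above concrete, here is one possible rendering of it in generic SQL. This is a sketch, not a definitive design: key types, column names, auto-increment key generation, and the handling of spawn.count are assumptions or omitted for brevity.

        CREATE TABLE area (
          id   INTEGER PRIMARY KEY,
          name VARCHAR(64)
        );

        CREATE TABLE monster (
          id            INTEGER PRIMARY KEY,
          name          VARCHAR(64),
          attack_damage INTEGER,
          defense       INTEGER,
          hp            INTEGER,
          exp           INTEGER
        );

        CREATE TABLE area_instance (
          id           INTEGER PRIMARY KEY,
          area_id      INTEGER REFERENCES area(id),
          player_id    INTEGER,            -- one instance per visiting player
          last_visited TIMESTAMP
        );

        CREATE TABLE monster_instance (
          id               INTEGER PRIMARY KEY,
          monster_id       INTEGER REFERENCES monster(id),
          area_instance_id INTEGER REFERENCES area_instance(id),
          hp               INTEGER          -- copied from monster at spawn time
        );

        CREATE TABLE spawn (
          id         INTEGER PRIMARY KEY,
          area_id    INTEGER REFERENCES area(id),
          monster_id INTEGER REFERENCES monster(id),
          count      INTEGER
        );

        CREATE TABLE loot (
          id         INTEGER PRIMARY KEY,
          monster_id INTEGER REFERENCES monster(id),
          item_id    INTEGER,
          chance     NUMERIC(5,4),          -- drop probability, e.g. 0.1250
          amount     INTEGER
        );

        -- Step 2 of the flow: spawn fresh monster instances when a player enters
        -- an area (:area_instance_id and :area_id are bind variables; one row per
        -- spawn entry here, ignoring spawn.count for brevity).
        INSERT INTO monster_instance (monster_id, area_instance_id, hp)
        SELECT m.id, :area_instance_id, m.hp
          FROM spawn s
          JOIN monster m ON m.id = s.monster_id
         WHERE s.area_id = :area_id;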

  • Ubuntu 12.04 Automounting ntfs partition

    - by kuzyt
    I've looked everywhere to fix this problem but I can't seem to figure out why it's happening. I have the following /etc/fstab entry to mount an NTFS partition using ntfs-3g:

        UUID=01CD842715EC2180 /media/mediahd02 ntfs defaults,user,noexec,uid=1000,gid=1000,dmask=007,fmask=117 0 2

    The volume label for this partition is "MEDIA02". I have had no problems with the fstab mounting. The problem, however, is that it automounts again using the MEDIA02 label. I'm not sure automounting is the right term for this, as it's just an empty directory. Deleting this directory and rebooting causes it to appear again. So listing /media I see both MEDIA02 and mediahd02.

        htpc@htpc:~$ cat /etc/fstab
        # /etc/fstab: static file system information.
        #
        # Use 'blkid' to print the universally unique identifier for a
        # device; this may be used with UUID= as a more robust way to name devices
        # that works even if disks are added and removed. See fstab(5).
        #
        # <file system> <mount point> <type> <options> <dump> <pass>
        proc /proc proc nodev,noexec,nosuid 0 0
        # / was on /dev/sdf1 during installation
        UUID=ec027544-b0e7-4145-99a4-905543a9781a / ext4 errors=remount-ro,noatime,discard 0 1
        # swap was on /dev/sdf5 during installation
        UUID=1794409e-723f-41ac-9f31-ae059f377613 none swap sw 0 0
        # Added all the lines below this
        tmpfs /tmp tmpfs defaults,noatime,mode=1777 0 0
        UUID=0F70-3B06 /media/mediahd01 vfat defaults,user,noexec,uid=1000,gid=1000,dmask=007,fmask=117 0 2
        UUID=01CD842715EC2180 /media/mediahd02 ntfs defaults,user,noexec,uid=1000,gid=1000,dmask=007,fmask=117 0 2

        htpc@htpc:~$ cat /etc/mtab
        /dev/sdc1 / ext4 rw,noatime,errors=remount-ro,discard 0 0
        proc /proc proc rw,noexec,nosuid,nodev 0 0
        sysfs /sys sysfs rw,noexec,nosuid,nodev 0 0
        none /sys/fs/fuse/connections fusectl rw 0 0
        none /sys/kernel/debug debugfs rw 0 0
        none /sys/kernel/security securityfs rw 0 0
        udev /dev devtmpfs rw,mode=0755 0 0
        devpts /dev/pts devpts rw,noexec,nosuid,gid=5,mode=0620 0 0
        tmpfs /tmp tmpfs rw,noatime,mode=1777 0 0
        tmpfs /run tmpfs rw,noexec,nosuid,size=10%,mode=0755 0 0
        none /run/lock tmpfs rw,noexec,nosuid,nodev,size=5242880 0 0
        none /run/shm tmpfs rw,nosuid,nodev 0 0
        /dev/sdc1 /media/usbhd-sdc1 ext4 rw,relatime 0 0
        /dev/sdb1 /media/mediahd02 fuseblk rw,noexec,nosuid,nodev,allow_other,default_permissions,blksize=4096 0 0
        /dev/sda5 /media/mediahd01 vfat rw,noexec,nosuid,nodev,uid=1000,gid=1000,dmask=007,fmask=117 0 0
        /dev/sdh1 /media/Windows_7 fuseblk rw,nosuid,nodev,allow_other,blksize=4096 0 0

    Can someone shed some light on why it's doing this?

  • A "First" at Oracle OpenWorld

    - by Kathryn Perry
    A guest post by Adam May, Director, Fusion CRM, Oracle Applications Development

    There are always firsts at OpenWorld. These firsts keep the conference fresh and are the reason people come back year after year. An important first this year is our Fusion CRM customers who are using the product and deriving real benefit from Fusion CRM. Everyone can learn from and interact with them -- including us! We love talking to customers, especially those who are using our solutions in unexpected ways, because they challenge us!

    At previous OpenWorlds, we presented our overall Fusion vision and our plans for Fusion CRM. Those presentations helped customers plan their strategies and map out their new release uptakes. Fast forward to March of this year, when the first Fusion CRM customer went live. Since then we've watched the pace of go-lives accelerate every single month. Now we're at the threshold of another OpenWorld -- with over 45,000 attendees, 2,500 sessions and LOTS of other activities. To avoid having our customers curl into a ball with sensory overload, we designed a Focus On document to outline the most important Fusion CRM activities. Here are some of the highlights:

    - Anthony Lye's "Oracle Fusion Customer Relationship Management: Overview/Strategy/Customer Experiences/Roadmap" on Monday at 3:15 p.m.
    - The CRM Pavilion, open in Moscone West from Monday through Wednesday; features our strategic Fusion CRM partners and provides live demonstrations of their capabilities
    - General Session: "Oracle Fusion CRM--Improving Sales Effectiveness, Efficiency, and Ease of Use" on Tuesday at 11:45 a.m.; features Anthony Lye and Deloitte
    - "Meet the Fusion CRM Experts" on Tuesday at 5:00 p.m.; this session gives customers the opportunity to interact one-on-one with Fusion experts divided into eight categories of expertise
    - CRM Social Reception on Tuesday from 6-8 p.m.; there's no better way to spend the early evening than discussing Fusion CRM with Oracle experts and strategic partners over appetizers and drinks
    - Wednesday night is Oracle's Customer Appreciation event; enjoy Pearl Jam, Kings of Leon, etc. beginning at 7:30 p.m. at Treasure Island

    Be sure to drink plenty of water before sleeping Wednesday night, and don't stay out too late, because we have lots of great content on Thursday; at the top of the list is "Oracle Fusion Social CRM Strategy and Roadmap: Future of Collaboration and Social Engagement" at 11:15 a.m.

    We hope you have a fantastic experience at OpenWorld 2012! And here's a little video treat to whet your appetite: http://www.youtube.com/user/FusionAppsAtOracle

  • Office 2010 Professional Plus (Top 10 reasons to upgrade)

    - by mbcrump
    Being a huge nerd, I decided that I would go ahead and upgrade to the latest and greatest Office: Office 2010 Professional Plus. The biggest concern that I had was losing all my mail settings from Outlook 2007. Thankfully, it upgraded gracefully and worked like a charm. So let's start this top 10 list.

    1) You can upgrade without fear of losing all your stuff! As the setup screen shows, you can select what you want to do. I selected to remove all previous versions.

    2) Outlook conversations: just like Gmail, you can now group emails by conversation. This is simply awesome and a must-have.

    3) The ability to ignore conversations. If you are on an email thread that has nothing to do with you, simply "ignore" the conversation and all its emails go into the deleted folder.

    4) Quick Steps: do you send an email to the same team member or group constantly? With Quick Steps, it's just one click away.

    5) Spell check in the subject line!

    6) Easier screenshots: built in, just click the button. No more ALT+PrintScreen, for those that are not aware of the awesome SnagIt 10 that's out.

    7) Open in Protected View. When you open a document from an email attachment, it lets you know the file may be unsafe. You can click a button to enable editing. This is great for preventing macro attacks.

    8) Excel has always had a variety of charts and graphs available to visually depict data and trends. With Excel 2010, though, Microsoft has added a new feature called Sparklines, which allows you to place a mini-graph or trend line in a single cell. Sparklines are a cool way to quickly and simply add a visual element without having to go through the effort of inserting a graph or chart that overwhelms the worksheet.

    9) Contact actions. If you hover over a name in the form or fields of an email, you get a popup giving you several actions you can perform on the person, such as adding them to your Outlook contacts, scheduling a meeting, viewing their stored contact information if they are already in your contacts, sending an instant message, or even starting a telephone call.

    10) Windows 7 taskbar context menu - I love the jump list. I don't know how much I would actually use it, but it just rocks.

  • What are some reasonable stylistic limits on type inference?

    - by Jon Purdy
    C++0x adds pretty darn comprehensive type inference support. I'm sorely tempted to use it everywhere possible to avoid undue repetition, but I'm wondering if removing explicit type information all over the place is such a good idea. Consider this rather contrived example:

    Foo.h:

        #include <set>

        class Foo {
        private:
            static std::set<Foo*> instances;

        public:
            Foo();
            ~Foo();

            // What does it return? Who cares! Just forward it!
            static decltype(instances.begin()) begin() { return instances.begin(); }
            static decltype(instances.end()) end() { return instances.end(); }
        };

    Foo.cpp:

        #include "Foo.h"
        #include "Bar.h"

        // The type need only be specified in one location!
        // But I do have to open the header to find out what it actually is.
        decltype(Foo::instances) Foo::instances;

        Foo::Foo() {
            // What is the type of x?
            auto x = Bar::get_something();

            // What does do_something() return?
            auto y = x.do_something(*this);

            // Well, it's convertible to bool somehow...
            if (!y) throw "a constant, old school";

            instances.insert(this);
        }

        Foo::~Foo() {
            instances.erase(this);
        }

    Would you say this is reasonable, or is it completely ridiculous? After all, especially if you're used to developing in a dynamic language, you don't really need to care all that much about the types of things, and can trust that the compiler will catch any egregious abuses of the type system. But for those of you that rely on editor support for method signatures, you're out of luck, so using this style in a library interface is probably really bad practice.

    I find that writing things with all possible types implicit actually makes my code a lot easier for me to follow, because it removes nearly all of the usual clutter of C++. Your mileage may, of course, vary, and that's what I'm interested in hearing about. What are the specific advantages and disadvantages to radical use of type inference?

  • Configuring Full-Text Search for pdf and docx files

    - by Lukasz Kurylo
    Back in May, I think, I was creating a little filters module based on Full-Text Search. I configured my dev machine, then did the same for two testing servers - one in our company for internal testing before we deployed to the client, and then the testing server at the client. Until last week this build was still on the testing server, when we finally got feedback that we could deploy it to the production one. I'll only say that I lost half a day because I had not correctly remembered what I did to configure FTS on the previous servers, and I had no notes for that. I foolishly believed in my memory. Lesson learned.

    For future reference, here is the bunch of steps to configure FTS for searching in *.pdf and *.docx files (and, by the way, in other Office files like *.xlsx):

    1. From the page (link) download and install the *.pdf IFilter for FTS.
    2. To the PATH global system variable, add the path to the directory where you installed the plugin. The default for this version is: C:\Program Files\Adobe\Adobe PDF iFilter 9 for 64-bit platforms\bin
    3. From the page (link) download FilterPackx64.exe and install it.
    4. Now from SSMS execute the following procedures:
       - sp_fulltext_service 'load_os_resources', 1
       - sp_fulltext_service 'verify_signature', 0
    5. Restart the server.
    6. Now we must check whether the plugins are visible:
       - select document_type, path from sys.fulltext_document_types where document_type = '.pdf'
       - select document_type, path from sys.fulltext_document_types where document_type = '.docx'
    7. If we see a result, then we can assume that everything is OK*.
    8. Right now we can create a catalog for FTS and indexes on the appropriate columns.

    *I lost a lot of hours finding out why the plugin for the *.pdf files wasn't indexing any file in the database, even though a line for this plugin was present in the sys.fulltext_document_types table. After a deeper investigation I found that the *.pdf files actually were indexed - at least the EOF sign was added to the indexes, and nothing more, for each file. In the end the problem was that I forgot to add the /bin part of the path to the plugin in the PATH variable.
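
    For step 8, here is a minimal T-SQL sketch of what the catalog and index creation can look like. The Documents table, its PK_Documents key, and the DocContent/DocExtension columns are hypothetical placeholders (DocContent holds the file bytes; DocExtension values like '.pdf' or '.docx' tell FTS which IFilter to use):

        -- One catalog for the database, used as the default.
        CREATE FULLTEXT CATALOG DocumentsCatalog AS DEFAULT;

        -- Index the varbinary(max) column; TYPE COLUMN names the column holding
        -- the file extension, so the right IFilter is chosen per row.
        CREATE FULLTEXT INDEX ON dbo.Documents
        (
            DocContent TYPE COLUMN DocExtension LANGUAGE 1033
        )
        KEY INDEX PK_Documents
        ON DocumentsCatalog
        WITH CHANGE_TRACKING AUTO;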

  • 6 Ways to Modernize Your Customer Experience

    - by Mike Stiles
    If customers have changed, if the way they research and shop has changed, if their expectations have changed, if their ability to act on dissatisfaction has changed, but your customer experience has NOT changed, what was once "good enough" may now be crippling. Well, the customer has changed, and why wouldn't they? You've probably changed too in your role as consumer. There's more info available, it's easier to get, there's more choice, you're more mobile, you're more connected, it's easier to buy, and yes, it's easier to switch brands if experiences don't meet your now higher expectations. Thanks to technological advances, we as marketers can increasingly work borderline miracles. But if we're still not adamantly adopting customer centricity, and if we aren't making the customer experience paramount amongst business goals, the tech is wasted. A far more modern customer experience is called for. Here are 6 ways to get there:

    1. Modern Marketing: Marketing data is aggregated and targeted to the right customers, who are getting personal, relevant communications. In return, you're getting insight that finally properly attributes revenue to your marketing efforts.

    2. Modern Selling: Demand is being driven across all channels with modern selling tools. Productivity is up thanks to coordinated communication and selling, and performance is ever optimized using powerful analytics.

    3. Modern CPQ: You're cross-selling and upselling more effectively since reps and channel partners have been empowered with the ability to quickly, automatically generate 100% accurate, customer-friendly quotes complete with price controls and automated approvals.

    4. Modern Commerce: You're leveraging data and delivering personalized, targeted digital experiences to everyone. You're attracting more visitors, and you're able to scale and keep up with the market and control the experience.

    5. Modern Service: You're better serving your customers by making it easier for them to engage with your brand, plus you're lowering your costs by increasing agent and tech support efficiencies.

    6. Modern Social: You're getting faster, deeper, more accurate insights from social and turning content around faster, which then goes out to the right people at the right time in the right place. You've also gotten proactive in your service, and customers love that.

    For far too many brands, the buying journey of Need, Research, Select, Buy, Use, Recommend across the multiple connect points of Social, Mobile, Store, Call Center, Site, Ecommerce is a disconnected mess. Oracle's approach to CX is to connect every interaction your customer has with your brand, avoiding the revenue losses lousy customer experiences bring. How important is the experience to customers? 94% are willing to pay more of their hard-earned money to have better ones, while a meager 1% say they get the good, consistent experiences they expect. Brands, your words aren't as loud anymore, so your actions as they relate to customer experience are going to have to do the talking.

    @mikestiles @oraclesocial
    Photo: Julien Tromeur, freeimages.com

  • ASP.NET vNext Blog Post Series

    - by Soe Tun
    Originally posted on: http://geekswithblogs.net/stun/archive/2014/06/04/asp.net-vnext-blog-post-series.aspx

    ASP.NET vNext was announced at TechEd 2014, and I have been playing around with it a bit. ASP.NET vNext is an exciting and revolutionary change for the Microsoft .NET development platform. ASP.NET vNext is now open source, and available on GitHub at this location: https://github.com/aspnet/Home. I want to start a blog post series on ASP.NET vNext and share my experience as I learn more about it.

    Keeping it simple
    Each blog post in the series will be short and simple so I can write them in a short amount of time and keep each focused on one (at most two) topic(s) per post. My goal is to make it easy to absorb the information, as there is a ton of great new stuff to cover. Many other people in the community have blogged about the key new features of ASP.NET vNext. I will link to those blog posts in my next blog post.

    MVC 6 POCO Controller
    Today, I want to start this blog post series with a teaser code snippet for those developers familiar with ASP.NET MVC. The Getting Started with ASP.NET MVC 6 article from the ASP.NET website shows how to write a lightweight POCO (plain-old CLR object) MVC controller class in the upcoming ASP.NET MVC 6. However, it doesn't show us how to use the IActionResultHelper interface to render a view. This is how I wrote my POCO MVC controller, based on the https://github.com/aspnet/Home/blob/master/samples/HelloMvc/Controllers/HomeController.cs sample from GitHub. Note that this may not be the best way to write it, but it is good enough for now.

        using Microsoft.AspNet.Mvc;
        using Microsoft.AspNet.Mvc.ModelBinding;
        using MvcSample.Web.Models;

        namespace MvcSample.Web
        {
            public class HomeController
            {
                IActionResultHelper html;
                IModelMetadataProvider mmp;

                public HomeController(IActionResultHelper h, IModelMetadataProvider mmp)
                {
                    this.html = h;
                    this.mmp = mmp;
                }

                public IActionResult Index()
                {
                    var viewData = new ViewDataDictionary<User>(mmp) { Model = User() };
                    return html.View("Index", viewData);
                }

                public User User()
                {
                    return new User { Name = "My name", Address = "My address" };
                }
            }
        }

    Please feel free to give me feedback, as this will greatly help me organize the blog posts in this series and plan ahead. Thanks for reading!

  • Cloud service and IM protocol advice, for a backend to group chat mobile app

    - by Jonathan
    Overview
    I'm going to develop an app on Android and iOS. It will allow users to set up group 'chat rooms' and talk in chat rooms set up by other users. The service needs to be highly scalable, such that it could accommodate a massive increase in users overnight (we can only dream).

    Chat requirements
    The chat protocol used should be flexible: it should allow me to determine who can view/post in 'chat rooms' based on certain other factors, as determined by the first poster/creator of the particular 'chat room'. It should also allow users to simply install the app and begin using the service after only providing a simple nickname (which could be changed later).

    Chat protocol plans
    Having looked around, I think the XMPP protocol is the best candidate. In particular, the Multi-User Chat extension looks like what I'll need. Would this be most suited to my requirements, or do you know of another potential solution?

    Cloud service
    I have been deciding between Amazon Web Services, Google App Engine and Windows Azure. I'm coming to the conclusion that Azure will be best, as it is easier to manage than AWS (ease of scalability will be a key factor in the design), I think it may be less restricted than GAE, and Azure will soon have toolkits to allow easy interfacing with both Android and iOS phones. Is this the decision you would have made, or would you recommend/look into other cloud services?

    General project philosophy
    I have only recently started looking into this project's feasibility, and am no expert on any of its aspects. So wherever possible I will leave the actual implementations to experts, i.e. choosing a higher-level cloud service, using a well-documented plugin of a proven, reliable group chat protocol, etc.

    My background
    I have some programming knowledge from a computer science degree. The main languages I've used have been Java and Python, but I don't want this to affect design decisions for the project. The most appropriate languages for the task should be used, i.e. I don't mind learning a lot of new skills (my current programming level is relatively basic anyway).

    Thank you
    Thanks for reading. Any advice you have about any aspect would be greatly appreciated :-)
