Search Results

Search found 6561 results on 263 pages for 'developing'.


  • Guest blog: A Closer Look at Oracle Price Analytics by Will Hutchinson

    - by Takin Babaei
    Overview: Price Analytics helps companies understand how much of each sale goes into discounts, special terms, and allowances. This visibility lets sales management see the panoply of discounts and start seeing whether each discount drives desired behavior. Price Analytics monitors parts of the quote-to-order process, tracking quotes (including the whole price waterfall) and seeing which result in orders. The “price waterfall” shows all discounts between list price and “pocket price”. Pocket price is the final price the vendor puts in its pocket after all discounts are taken.

    The value proposition: Based on benchmarks from leading consultancies and from companies I have talked to that have studied the effects of discounting and started enforcing what many of them call “discount discipline”, they find they can increase the pocket price by 0.8-3%. Yes, in today’s zero or negative inflation environment, one can, through better monitoring of discounts, collect what amounts to a price rise of a few percent. We are not talking about selling more product, merely about collecting a higher pocket price without decreasing quantities sold. Higher prices fall straight to the bottom line. The best reference I have ever found for understanding this phenomenon is an article from the September-October 1992 issue of Harvard Business Review called “Managing Price, Gaining Profit” by Michael Marn and Robert Rosiello of McKinsey & Co. They describe the outsized impact price management has on bottom-line performance compared to selling more product or cutting variable or fixed costs. Price Analytics manages what Marn and Rosiello call “transaction pricing”, namely the prices of a given transaction, as opposed to what is on the price list or pricing according to the value received. They make the point that if the vendor does not manage the price waterfall, customers will, to the vendor’s detriment. The article also discusses the finding that, in the companies studied, there was no correlation between discount levels and any indication of customer value. I urge you to read this article.

    What Price Analytics does: Price Analytics looks at quotes the company issues and tracks them until either the quote is accepted or rejected or it expires. There are prebuilt adapters for EBS and Siebel as well as a universal adapter. The target audience includes pricing analysts, product managers, sales managers, and VPs of sales, marketing, finance, and sales operations. It tracks how effective discounts have been, the win rate on quotes, how well pricing policies have been followed, customer and product profitability, and customer performance against commitments. It has the concepts of the price waterfall, the deal lifecycle, and price segmentation built into the product. These help product and sales managers understand their pricing and its effectiveness in driving revenue and profit. They also help people understand how terms are adhered to during negotiations, what segments exist, and how well those segments are adhered to. To help your company increase its profits and revenues, I urge you to look at this product. If you have questions, please contact me.

    Will Hutchinson, Master Principal Sales Consultant – Analytics, Oracle Corp. Will Hutchinson has worked in business intelligence and data warehousing for over 25 years. He started building data warehouses in 1986 at Metaphor, advancing to running Metaphor UK’s sales consulting area. He also worked in A.T. Kearney’s business intelligence practice for over four years, running projects and providing training to new consultants in the IT practice. He also worked at Informatica and then Siebel, before coming to Oracle with the Siebel acquisition. He became Master Principal Sales Consultant in 2009. He has worked on developing ROI and TCO models for business intelligence for over ten years. Mr. Hutchinson has a BS degree in Chemical Engineering from Princeton University and an MBA in Finance from the University of Chicago.

    Read the article

  • Be the surgeon

    - by Rob Farley
    It’s a phrase I use often, especially when teaching, and I wish I had realised the concept years earlier. (And of course, fits with this month’s T-SQL Tuesday topic, hosted by Argenis Fernandez) When I’m sick enough to go to the doctor, I see a GP. I used to typically see the same guy, but he’s moved on now. However, when he has been able to roughly identify the area of the problem, I get referred to a specialist, sometimes a surgeon. Being a surgeon requires a refined set of skills. It’s why they often don’t like to be called “Doctor”, and prefer the traditional “Mister” (the history is that the doctor used to make the diagnosis, and then hand the patient over to the person who didn’t have a doctorate, but rather was an expert cutter, typically from a background in butchering). But if you ask the surgeon about the pain you have in your leg sometimes, you’ll get told to ask your GP. It’s not that your surgeon isn’t interested – they just don’t know the answer. IT is the same now. That wasn’t something that I really understood when I got out of university. I knew there was a lot to know about IT – I’d just done an honours degree in it. But I also knew that I’d done well in just about all my subjects, and felt like I had a handle on everything. I got into developing, and still felt that having a good level of understanding about every aspect of IT was a good thing. This got me through for the first six or seven years of my career. But then I started to realise that I couldn’t compete. I’d moved into management, and was spending my days running projects, rather than writing code. The kids were getting older. I’d had a bad back injury (ask anyone with chronic pain how it affects  your ability to concentrate, retain information, etc). But most of all, IT was getting larger. I knew kids without lives who knew more than I did. And I felt like I could easily identify people who were better than me in whatever area I could think of. Except writing queries (this was before I discovered technical communities, and people like Paul White and Dave Ballantyne). And so I figured I’d specialise. I wish I’d done it years earlier. Now, I can tell you plenty of people who are better than me at any area you can pick. But there are also more people who might consider listing me in some of their lists too. If I’d stayed the GP, I’d be stuck in management, and finding that there were better managers than me too. If you’re reading this, SQL could well be your thing. But it might not be either. Your thing might not even be in IT. Find out, and then see if you can be a world-beater at it. But it gets even better, because you can find other people to complement the things that you’re not so good at. My company, LobsterPot Solutions, has six people in it at the moment. I’ve hand-picked those six people, along with the one who quit. The great thing about it is that I’ve been able to pick people who don’t necessarily specialise in the same way as me. I don’t write their T-SQL for them – generally they’re good enough at that themselves. But I’m on-hand if needed. Consider Roger Noble, for example. He’s doing stuff in HTML5 and jQuery that I could never dream of doing to create an amazing HTML5 version of PivotViewer. Or Ashley Sewell, a guy who does project management far better than I do. I could go on. My team is brilliant, and I love them to bits. We’re all surgeons, and when we work together, I like to think we’re pretty good! @rob_farley

    Read the article

  • WebCenter Customer Spotlight: Textron Inc.

    - by me
    Author: Peter Reiser - Social Business Evangelist, Oracle WebCenter

    Solution Summary: Textron Inc. is one of the world's best known multi-industry companies and is a pioneer of the diversified business model. Founded in 1923, it has grown into a network of businesses—including Bell Helicopter, E-Z-GO, Cessna, and Jacobsen—with facilities and a presence in 25 countries, serving a diverse and global customer base. Textron is ranked 236th on the Fortune 500 list of the largest US companies. Textron needed a Web experience management solution to centralize control, minimize costs, and enable more efficient operations. Specifically, the company wanted to take IT out of the picture as much as possible, enabling sales and marketing leads for subsidiaries to make Website updates as they deem appropriate for their business. Textron worked with Oracle partner Element Solutions to consolidate its Website management systems onto Oracle WebCenter Sites. The implementation enables Textron’s subsidiaries to adjust more quickly to customer demands, reduces Website management costs and the time needed to update content, and allows Textron to integrate its Website updates more closely with social media and mobile platforms.

    Company Overview: Textron Inc. is one of the world's best known multi-industry companies and is a pioneer of the diversified business model. Founded in 1923, it has grown into a network of businesses—including Bell Helicopter, E-Z-GO, Cessna, and Jacobsen—with facilities and a presence in 25 countries, serving a diverse and global customer base. Textron is ranked 236th on the Fortune 500 list of the largest US companies.

    Business Challenges: With numerous subsidiaries and more than 50 public Websites, Textron needed a Web experience management solution to centralize control, minimize costs, and enable more efficient operations. Specifically, the company wanted to take IT out of the picture as much as possible, enabling sales and marketing leads for subsidiaries to make Website updates as they deem appropriate for their business.

    Solution Deployed: Textron worked with Oracle partner Element Solutions to consolidate its Website management systems onto Oracle WebCenter Sites. Specifically, Textron:
    - Used Oracle WebCenter Sites to integrate Web experience management capabilities for all Textron brands, including Bell Helicopter, E-Z-GO, Cessna, and Jacobsen
    - Developed Website templates to enable marketing and communications professionals to easily make updates to their Websites, without having to work with IT
    - Reduced Website management costs, as it costs more for IT to coordinate Website updates than for marketing and communications to do so
    - Enabled IT to concentrate on other activities that enhance overall operations for Textron, such as project workflows
    - Acquired a platform that enables marketing teams to integrate their Websites with social media and mobile platforms, allowing subsidiaries to make updates and contact customers anytime and everywhere—including through tablets and smartphones
    - Reduced the time it takes to update content on a Website, including press releases, by enabling communications professionals to make updates directly
    - Developed more appealing visual designs for Websites to help enhance customer purchases

    Business Results: The implementation enabled Textron’s subsidiaries to adjust more quickly to customer demands and Textron’s IT staff to concentrate on other processes, such as writing code and developing new workflows, enabling them to enhance company processes. In addition, Textron can use Oracle WebCenter Sites to integrate its Website updates more closely with social media and mobile platforms, enabling marketing and communications teams to make updates anytime and everywhere. The initiative has enabled Textron to save money by freeing IT up to work on more important tasks, by instituting new e-commerce and mobile initiatives to better engage customers, and by ensuring efficient Website management processes that quickly adjust to customer demands.

    “We considered a number of products, but chose Oracle WebCenter Sites because it provides the best user interface. We reviewed customer references and analyst reports, and Oracle WebCenter Sites was consistently at the top of the list,” said Brad Hof, Manager, Advanced Business Solutions and Web Communications, Textron Inc.

    Additional Information: Textron Inc. Customer Snapshot; Oracle WebCenter Sites

    Read the article

  • Mind Reading with the Raspberry Pi

    - by speakjava
    Mind Reading With The Raspberry Pi At JavaOne in San Francisco I did a session entitled "Do You Like Coffee with Your Dessert? Java and the Raspberry Pi".  As part of this I showed some demonstrations of things I'd done using Java on the Raspberry Pi.  This is the first part of a series of blog entries that will cover all the different aspects of these demonstrations. A while ago I had bought a MindWave headset from Neurosky.  I was particularly interested to see how this worked as I had had the opportunity to visit Neurosky several years ago when they were still developing this technology.  At that time the 'headset' consisted of a headband (very much in the Bjorn Borg style) with a sensor attached and some wiring that clearly wasn't quite production ready.  The commercial version is very simple and easy to use: there are two sensors, one which rests on the skin of your forehead, the other is a small clip that attaches to your earlobe. Typical EEG sensors used in hospitals require lots of sensors and they all need copious amounts of conductive gel to ensure the electrical signals are picked up.  Part of Neurosky's innovation is the development of this simple dry-sensor technology.  Having put on the sensor and turned it on (it powers off a single AAA size battery) it collects data and transmits it to a USB dongle plugged into a PC, or in my case a Raspberry Pi. From a hacking perspective the USB dongle is ideal because it does not require any special drivers for any complex, low level USB communication.  Instead it appears as a simple serial device, which on the Raspberry Pi is accessed as /dev/ttyUSB0.  Neurosky have published details of the command protocol.  In addition, the MindSet protocol document, including sample code for parsing the data from the headset, can be found here. To get everything working on the Raspberry Pi using Java the first thing was to get serial communications going.  Back in the dim distant past there was the Java Comm API.  Sadly this has grown a bit dusty over the years, but there is a more modern open source project that provides compatible and enhanced functionality, RXTXComm.  This can be installed easily on the Pi using sudo apt-get install librxtx-java.  Next I wrote a library that would send commands to the MindWave headset via the serial port dongle and read back data being sent from the headset.  The design is pretty simple, I used an event based system so that code using the library could register listeners for different types of events from the headset.  You can download a complete NetBeans project for this here.  This includes javadoc API documentation that should make it obvious how to use it (incidentally, this will work on platforms other than Linux.  I've tested it on Windows without any issues, just by changing the device name to something like COM4). To test this I wrote a simple application that would connect to the headset and then print the attention and meditation values as they were received from the headset.  Again, you can download the NetBeans project for that here. Oracle recently released a developer preview of JavaFX on ARM which will run on the Raspberry Pi.  I thought it would be cool to write a graphical front end for the MindWave data that could take advantage of the built in charts of JavaFX.  Yet another NetBeans project is available here.  Screen shots of the app, which uses a very nice dial from the JFxtras project, are shown below. 
I probably should add labels for the EEG data so the user knows which is the low alpha, mid gamma waves and so on.  Given that I'm not a neurologist I suspect that it won't increase my understanding of what the (rather random looking) traces mean. In the next blog I'll explain how I connected a LEGO motor to the GPIO pins on the Raspberry Pi and then used my mind to control the motor!

    Read the article

  • On The Road with the HR Community

    - by Kathryn Perry
    A guest post by Steve Boese, Director, Talent Strategy, Oracle One of the best ways to connect with and to get a feel for what is on the minds of Human Resources leaders is to get out of the office and hit the road. I’ve had the great honor to attend and/or present at a number of events recently, including the massive SHRM Annual Conference, the HR Florida Conference, and Taleo World in Chicago. These events, and many others, offer solution providers, talent management professionals, business leaders, and even more casual observers of the Human Resources field with tremendous opportunities to connect, to share information, and to learn from each other. Attending the conferences also give people a sense of how they can improve and enhance their skills and knowledge, learn about the latest workforce technologies, and bring new and innovative ideas back to their organizations. And sure, the parties and conference swag can be pretty nice as well! If you attend a few of these industry events, one of the most beneficial by-products that you can emerge with -- whether you are on the front lines in HR at your organization, or as we are at Oracle, in the business of developing and delivering innovative and impactful technology solutions to our customers -- is to get a larger sense of the big ideas and major trends, concerns, and challenges facing organizations all across the landscape, and to be able to better understand how your strategies and solutions can be improved with this greater perspective. So what are HR folks discussing and debating? What questions and problems keep them up at night? What are the bloggers and large community of HR social media enthusiasts buzzing about? From my perspective some of the common themes you see over and again across the HR community break down (broadly), into three main areas: Talent attraction - How can we locate, attract, recruit, and hire the best talent possible? What new strategies, approaches, and technologies can help us in this critically important area? What role do external social networks like LinkedIn, Facebook, and Twitter play in the increasingly competitive search for talent? Talent Retention - How can we make sure to keep that talent on our team? What engagement, development, recognition, and compensation tools can help us in this regard? How can we continue, (or become), an employer of choice? What is our unique and compelling employer value proposition? Talent Empowerment - How can we put our employees in the best position to succeed? What can we do to better align our talent with the organization’s mission and goals, while simultaneously providing the best and most driven to succeed individuals a clear path to achieve their career goals and aspirations? How can new technologies, particularly social and collaboration tools help in this area? While these are the ‘big themes’ that I know I have seen this year, certainly they are not really new, nor are they likely to fundamentally change in the next year or two. I think the reason is that at the core of any successful enterprise is a collection of smart, interested, engaged, challenged, and empowered group of people. And that was likely the case 10 or 20 years ago, and will probably be the case 10 or 20 years into the future. 
But what has changed, and what you can see -- evidenced by simply following the Twitter backchannel for an event and by reading some of the many fantastic HR blogs out there -- is that the HR professional's ability, along with technology solution providers like Oracle, to connect, to more openly share information with each other, and to make each other better in the process, (and to create new, improved, and more innovative solutions), has never been greater. And I think it is with this heretofore unprecedented level of opportunity to connect with other members of the community that HR professionals will be better equipped to help their organizations attract, retain, and empower their teams. We at Oracle HCM look forward to continuing to meet, engage, and connect with the HR community in the coming months. Until then -- follow us on Twitter and Facebook.

    Read the article

  • As a person getting into mobile development, what's the best mobile platform in terms of profitability? [closed]

    - by Kyle Loman
    I realize this question can range very far so would love to hear any and all opinions on this. However, I'll be honest and say that I have been thinking of this in terms of most profitable. I know how this may sound either way but this is one of my main sticking points. I realize that I'm not guaranteed a single cent and success is never guaranteed but I'm going into this with the thought of making something out of it both financially and also for my own interest. I know that iOS gets a lot of attention on this front but Android commands a lot more market share. However, I know there are drawbacks to Android too, whether it's in the actual development process and programming (though I've heard conflicting reports on this, such as how easy/difficult it is for to address screen res in different devices) or the app ecosystem being flooded. But iOS's app ecosystem has been described as too saturated and harder to compete in for that reason. Since Windows Phone has fewer apps than both of those two, that might be the best place to start in order to be closer to the ground floor of the store and be noticed more? Less saturation = better chances of sales or differentiating? Something like the gold rush during the first years of the iOS App Store (not exactly but at least in concept)? Would it be that despite fewer users on the platform, there's more exposure due to less competition so that may translate to better success at sales? Plus, I know MS is in it for the long haul so I'm not too fearful of something like WebOS going away. Obviously RIM isn't very popular nowadays but I read a recent article that says Blackberry actually has the apps that make the most money, any thoughts on that: http://gigaom.com/mobile/which-mobile-oss-apps-make-most-money-surprise-its-blackberry/ Again, this is all I've heard or known about so if there's anything to add or correct here, please do. In addition, this has actually affected my next personal phone upgrade. I'm eligible for a carrier discount now and I've had my eye on the iPhone 5. However, the Lumia 920 is the one I'm holding out for and I'm open to trying an Android but I'm not sure I can wait that long for any new Nexus or even the Razr HD. Even the new Lumia in November is making me antsy, I'm so close to just getting an iPhone 5. But when I say this has affected my phone choice, I'd want to be able to carry the apps I write with me so that I'm able to pull my phone out to show people without having to carry around a second device to do so. So that's why I'd like to make my personal phone match the main platform I'm developing for. Of course, I will likely expand to other platforms if I gain any decent success but the one I target now would serve well as my personal phone I carry around so that I can use it as a marketing tool, in a sense, showing people my apps if the opportunity presents itself. So what's the best mobile platform to choose, and especially in regards to most lucrative? As said previously, this would influence my personal phone choice greatly. Thanks in advance and I hope this isn't taken the wrong way - I understand there are trade-offs and other factors that may balance this out but making some revenue is key among that. 
For some background, I have done software development and know programming language concepts so I'm not entirely new to it and I do get the notion of being familiar with these things so that I can translate this skill among a variety of languages but I'm currently just having difficulty choosing my first main mobile platform based on the criteria I've outlined above.

    Read the article

  • Desktop Applications Versus Web Applications

    Up until the advent of the internet, programmers really only developed one type of application used by end-users. This type of application was called a desktop application. As the name implies, these applications ran strictly from a desktop computer and were limited by the resources available to that computer. Initially, this type of application did not need resources outside the scope of the computer on which it was installed. The problem with this type of application is that if multiple end-users need to access the same desktop application, then the application must be installed on each end-user’s computer. In this age of software development, security was not as big a concern as it is today with other types of applications. This is primarily because an end-user must have access to the computer where the software is installed in order to access the application. In addition, developers could also password-protect the application just in case an unauthorized end-user was able to gain access to the computer.

    With the birth of the internet, a second form of application emerged because developers were trying to solve inherent issues with the preexisting desktop application. One of the solutions to overcome some of the shortcomings of desktop applications is the web application. Web applications are hosted on a centralized server, and clients only need network access and a web browser in order to access the application. Because a web application can be installed on a remote server, it removes the need for individual installations of the same application on each end-user’s computer. The main benefit of an application being hosted on a server is increased accessibility, because nothing has to be installed on a desktop computer for an end-user to be able to access the application. In addition, web applications are much easier to maintain because any change to the application is applied on the server and is inherently applied to any end-user trying to use the application. This removes the time needed to install and maintain individual installations of a desktop application. However, with the increased accessibility come additional costs compared to a desktop application, because of the extra cost and maintenance of a server hosting the application. Typically, after a desktop application is purchased there are no additional recurring fees associated with the application.

    When developing a web-based application there are additional considerations that must be addressed compared to a desktop application. The added benefit of increased accessibility also introduces a new failure point when trying to gain access to an application: an end-user now must have network connectivity in order to access the application. This issue is not a concern for desktop applications because their resources are typically bound to the computer on which they run. Since the availability of an application is increased with the use of the client-server model in a web-based application, additional security concerns now come into play. As stated before, a desktop application is bound to the end-user having access to the computer on which the application is installed. This is not the case with web-based applications because they could potentially be accessed from anywhere with the proper internet/network connection. Additional security steps are required to ensure the integrity of the application and its data. Examples of these steps include, but are not limited to, the following:
    - Restricted/password areas: used when specific information can only be accessed by end-users based on a set of accessibility rules.
    - IP restrictions: used when only specific locations need to access an application; this form of security is applied from within the web server or a firewall.
    - Network restrictions (firewalls): used to contain access to an application within a specific subset of a network.
    - Data encryption: used to transform personally identifiable information into something unreadable so that it can be stored safely for future use (see the sketch below).
    - Encrypted protocols (HTTPS): used to prevent others from reading messages being sent between applications over a network.
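    To make the data-encryption step concrete, here is a minimal C# sketch (my own illustration, not part of the article; the key handling is deliberately simplified, and a real application would keep the key in a key store rather than pass raw bytes around) that encrypts a piece of personally identifiable information with AES before it is stored:

        using System;
        using System.IO;
        using System.Security.Cryptography;
        using System.Text;

        static class PiiProtector
        {
            // Turns readable text into ciphertext so only the encrypted bytes are persisted.
            public static byte[] Encrypt(string plainText, byte[] key, byte[] iv)
            {
                using (var aes = Aes.Create())
                {
                    aes.Key = key;
                    aes.IV = iv;
                    using (var ms = new MemoryStream())
                    using (var cs = new CryptoStream(ms, aes.CreateEncryptor(), CryptoStreamMode.Write))
                    {
                        byte[] data = Encoding.UTF8.GetBytes(plainText);
                        cs.Write(data, 0, data.Length);
                        cs.FlushFinalBlock();
                        return ms.ToArray();   // store these bytes instead of the raw value
                    }
                }
            }
        }

    The same Aes object can produce a matching decryptor (aes.CreateDecryptor()) when the stored data needs to be read back.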

    Read the article

  • Why are you doing this? [closed]

    - by NIcholas Lawson
    I am working on a story that I am going to be querying to several magazines in my hometown about this work that is being done by the AXR group. This is a group of people who have networked online and are working on developing a higher level syntax structure than CSS and HTML currently offer. I am covering this is as a story because I see potential in this as a human interest story in cosmopolitan society. I have been asked by the group to pose this question to you and would appreciate any and all comments you would have on the following ... To AXR: So when does the internet become finished? At what point does a computer scientist say to himself ... my job here is finished ... the internet is complete? When is the internet ready to be more about the display of content than the uploading of new websites or computer tech? You are embarking on upon a sixty year project every day you work with this internet, what drives you? Why are you spending your hard earned hours working on the code to this computer? I spend thirty hours a week online because I love the writing and I know what would make the internet better ... ease of use ... i know it is difficult to program but I see some very elegant solutions online ... in this early inception phase of your programming development for this HSS prototype ... I would like to know why I do not see you programmers asking questions such as ... What would make the end user's life the easiest when using this code? I know you can solve the problem but an evolution forward would be simple, not simple to a computer scientist but simple to use for a career janitor ... if you could solve the problem of alleviating the stress at using a the computer you could get better content out of the computer ... right now the main problem is that the best content is in the hands of the people least likely to use the computer and the more simple you make the computer to use ... the better the content collection will be in the long run ... That is not what I want to talk about though ... why are you writing code when you could be writing stories? I know the computer is worthless without content so I build content, I know the book is worthless without the combinations of words in them, i know the television is worthless without the television news anchor or the actor, what I want to know from you folks in a very journalistic sense is why are you even bothering to bother to write code for a machine that has only made our lives i would dare say less interesting. why are you feeding the beast your time when you could be writing stories or being an actor or musician or auto mechanic ... why code? why this machine? what do you love about it? what do you hate about it? what do you wonder about it? I want to know so that starting out I know how to further shape my questions with axr ... i want the full story ... i want the real answers ... and i want to know why you are doing this, it would make for great writing if you could elucidate on this point.

    Read the article

  • Company wants to write custom project management tool, rather then use third party product.

    - by Jason Evans
    At the company I work, we are really wanting to get into the agile methodology for developing software. One thing that I'm not excited about is the fact that management wants us to build a custom project management feature inside the company's Intranet. I think this is a total waste of time. There are many great third party tools available (e.g. Axosoft OnTime) that can do everything we need, and more. For how much development time it would cost us to build our own project management module, we could buy numerous licences for a third party product. One concern is that, whilst we are writing code for a client, and using our custom Intranet project management module, we find bugs in the module that need fixing ASAP. That means having to stop work on the client code to fix the Intranet. That just puts shivers down my spine. Another worry I have is lack of functionality. This custom module is going to be so basic, that it will just feel really crap to use. That might sound a bit snooty, but for goodness sake, many third party tools are so feature rich, that the idea of having to write our own tool makes feel very uneasy. In fact, I can't be bothered. What do you guys think? I'm going to raise this issue with my boss, since I feel it's such an important topic to talk about. EDIT: Thanks for the great responses, much appreciated. To summarize some of them: Money Naturally my boss does want to save money, by not forking out a few hundred £'s for licences. However, for us to write a custom tool, it will take x number of days, multiplied by approx £500, which is our costs. I don't see the business value in this. Management have mentioned that they want to sell the Intranet as a product in the future, but it's so custom to our needs (and downright basic), that in order to give it to another client, I can see us having to fork a version of the code and rebuild the majority of it anyway. So it's not like we're gaining anything there in reuse. Features Having our own custom module means not feature bloat - only the functionality we require will be in the product. My issue is that there are plenty of free, open-source project management tools out there with minimal features already. So even if cost is an issue, we could look into open-source. Again it all boils down to the fact that I don't see the point in writing a project management tool in this day and age. It's a bit like writing your own web browser - why?, what's the point? Although management are asking for this tool, just because they are, it does not mean I'm going to please them and do it just because they asked for it. If something does not make sense, then I will raise it as a concern. At the end of the day, it's the developers who write the code, it's the developers who make money for a business. Thus, as far I'm concerned, the devs have a very big role in deciding how a company should manage projects and what tools are used. "I am Spartan, argh!" :) Hmm, I've not been able to make this question a wiki for some reason, thus I'm going to have to pick an answer to accept. Cheers. Jas.

    Read the article

  • SharePoint: Numeric/Integer Site Column (Field) Types

    - by CharlesLee
    What field type should you use when creating number based site columns as part of a SharePoint feature? Windows SharePoint Services 3.0 provides you with an extensible and flexible method of developing and deploying Site Columns and Content Types (both of which are required for most SharePoint projects requiring list or library based data storage) via the feature framework (more on this in my next full article.) However there is an interesting behaviour when working with a column or field which is required to hold a number, which I thought I would blog about today. When creating Site Columns in the browser you get a nice rich UI in order to choose the properties of this field: However when you are recreating this as a feature defined in CAML (Collaborative Application Mark-up Language), which is a type of XML (more on this in my article) then you do not get such a rich experience.  You would need to add something like this to the element manifest defined in your feature: <Field SourceID="http://schemas.microsoft.com/sharepoint/3.0"        ID="{C272E927-3748-48db-8FC0-6C7B72A6D220}"        Group="My Site Columns"        Name="MyNumber"        DisplayName="My Number"        Type="Numeric"        Commas="FALSE"        Decimals="0"        Required="FALSE"        ReadOnly="FALSE"        Sealed="FALSE"        Hidden="FALSE" /> OK, its not as nice as the browser UI but I can deal with this. Hang on. Commas="FALSE" and yet for my number 1234 I get 1,234.  That is not what I wanted or expected.  What gives? The answer lies in the difference between a type of "Numeric" which is an implementation of the SPFieldNumber class and "Integer" which does not correspond to a given SPField class but rather represents a positive or negative integer.  The numeric type does not respect the settings of Commas or NegativeFormat (which defines how to display negative numbers.)  So we can set the Type to Integer and we are good to go.  Yes? Sadly no! You will notice at this point that if you deploy your site column into SharePoint something has gone wrong.  Your site column is not listed in the Site Column Gallery.  The deployment must have failed then?  But no, a quick look at the site columns via the API reveals that the column is there.  What new evil is this?  Unfortunately the base type for integer fields has this lovely attribute set on it: UserCreatable = FALSE So WSS 3.0 accordingly hides your field in the gallery as you cannot create fields of this type. However! You can use them in content types just like any other field (except not in the browser UI), and if you add them to the content type as part of your feature then they will show up in the UI as a field on that content type.  Most of the time you are not going to be too concerned that your site columns are not listed in the gallery as you will know that they are there and that they are still useable. So not as bad as you thought after all.  Just a little quirky.  But that is SharePoint for you.
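    For completeness, here is a minimal C# sketch (my own illustration, not from the post; the site URL and content type name are hypothetical) of provisioning the same column through the WSS 3.0 server object model and attaching it to a content type, so it surfaces in the UI even though it is hidden from the gallery:

        using Microsoft.SharePoint;

        using (SPSite site = new SPSite("http://myserver/sites/demo"))   // hypothetical URL
        using (SPWeb web = site.OpenWeb())
        {
            string fieldXml =
                "<Field ID='{C272E927-3748-48db-8FC0-6C7B72A6D220}' Group='My Site Columns' " +
                "Name='MyNumber' DisplayName='My Number' Type='Integer' Required='FALSE' />";

            // Add the site column; Integer fields will not show in the gallery
            // because the base type is marked as not user-creatable.
            web.Fields.AddFieldAsXml(fieldXml);

            // Attach the column to an existing content type so it appears on forms.
            SPContentType ct = web.ContentTypes["My Content Type"];      // hypothetical name
            ct.FieldLinks.Add(new SPFieldLink(web.Fields["My Number"]));
            ct.Update();
        }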

    Read the article

  • ArchBeat Link-o-Rama Top 10 for November 2012

    - by Bob Rhubart
    Every day ArchBeat searches the web for content created by and for community members, and then shares that content via social media. Here's the list of the Top 10 most popular items posted on the OTN ArchBeat Facebook Page for November 2012. One-Stop Shop for Oracle Webcasts Webcasts can be a great way to get information about Oracle products without having to go cross-eyed reading yet another document off your computer screen. Oracle's new Webcast Center offers selectable filtering to make it easy to get to the information you want. Yes, you have to register to gain access, but that process is quick, and with over 200 webcasts to choose from you know you'll find useful content. OAM/OVD JVM Tuning Vinay from the Oracle Fusion Middleware Architecture Group (otherwise known as the A-Team) shares a process for analyzing and improving performance in Oracle Virtual Directory and Oracle Access Manager. White Paper: Oracle Exalogic Elastic Cloud: Advanced I/O Virtualization Architecture for Consolidating High-Performance Workloads This new white paper by Adam Hawley (with contributions from Yoav Eilat) describes in great detail the incorporation into Oracle Exalogic of virtualized InfiniBand I/O interconnects using Single Root I/O Virtualization (SR-IOV) technology. Architected Systems: "If you don't develop an architecture, you will get one anyway..." "Can you build a system without taking care of architecture?," asks Manuel Ricca. "You certainly can. But inevitably the system will be unbalanced, neglecting the interests of key stakeholders, and problems will soon emerge." Backup and Recovery of an Exalogic vServer via rsync "On Exalogic a vServer will consist of a number of resources from the underlying machine," says the man known only as Donald. "These resources include compute power, networking and storage. In order to recover a vServer from a failure in the underlying rack all of these components have to be thoughts about. This article only discusses the backup and recovery strategies that apply to the storage system of a vServer." This Week on the OTN Architect Community Home Page Make time to check out this week's features on the OTN Solution Architect Homepage, including: SOA Practitioner Guide: Identifying and Discovering Services Technical article by Yuli Vasiliev on Setting Up, Configuring, and Using an Oracle WebLogic Server Cluster Podcast: Are You Future Proof? Clustering ODI11g for High-Availability Part 1: Introduction and Architecture | Richard Yeardley "JEE agents can be deployed alongside, or instead of, standalone agents," says Rittman Meade's Richard Yeardley. "But there is one key advantage in using JEE agents and WebLogic – when you deploy JEE agents as part of a WebLogic cluster they can be configured together to form a high availability cluster." Learn more in Yeardley's extensive post. OIM 11g : Multi-thread approach for writing custom scheduled job | Saravanan V S Saravanan shares insight and expertise relevant to "designing and developing an OIM schedule job that uses multi threaded approach for updating data in OIM using APIs." How to Create Virtual Directory in Weblogic Server | Zeeshan Baig Oracle ACE Zeeshan Baig shows you how in six easy steps. SOA Galore: New Books for Technical Eyes Only Shake up up your technical skills with this trio of new technical books from community members covering SOA and BPM. 
Thought for the Day "Humans are the best value in computers -- where else can you get a non-linear computer weighing only about 160lbs, having a billion binary decision elements, that can be mass-produced by unskilled labour?" — Anonymous Source: SoftwareQuotes.com

    Read the article

  • Acceptance tests done first...how can this be accomplished?

    - by Crazy Eddie
    The basic gist of most Agile methods is that a feature is not "done" until it's been developed, tested, and in many cases released. This is supposed to happen in quick turnaround chunks of time such as "Sprints" in the Scrum process. A common part of Agile is also TDD, which states that tests are done first. My team works on a GUI program that does a lot of specific drawing and such. In order to provide tests, the testing team needs to be able to work with something that at least attempts to perform the things they are trying to test. We've found no way around this problem. I can very much see where they are coming from because if I was trying to write software that targeted some basically mysterious interface I'd have a very hard time. Although we have behavior fairly well specified, the exact process of interacting with various UI elements when it comes to automation seems to be too unique to a feature to allow testers to write automated scripts to drive something that does not exist. Even if we could, a lot of things end up turning up later as having been missing from the specification. One thing we considered doing was having the testers write test "scripts" that are more like a set of steps that must be performed, as described from a use-case perspective, so that they can be "automated" by a human being. This can then be performed by the developer(s) writing the feature and/or verified by someone else. When the testers later get an opportunity they automate the "script" for regression purposes mainly. This didn't end up catching on in the team though. The testing part of the team is actually falling behind us by quite a margin. This is one reason why the apparently extra time of developing a "script" for a human being to perform just did not happen....they're under a crunch to keep up with us developers. If we waited for them, we'd get nothing done. It's not their fault really, they're a bottle neck but they're doing what they should be and working as fast as possible. The process itself seems to be set up against them. Very often we end up having to go back a month or more in what we've done to fix bugs that the testers have finally gotten to checking. It's an ugly truth that I'd like to do something about. So what do other teams do to solve this fail cascade? How can we get testers ahead of us and how can we make it so that there's actually time for them to write tests for the features we do in a sprint without making us sit and twiddle our thumbs in the meantime? As it's currently going, in order to get a feature "done", using agile definitions, would be to have developers work for 1 week, then testers work the second week, and developers hopefully being able to fix all the bugs they come up with in the last couple days. That's just not going to happen, even if I agreed it was a reasonable solution. I need better ideas...
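    One lightweight way to capture the step-based "scripts" described above, so they can start life as a manual checklist and later be automated, is sketched below (a hypothetical example of my own, assuming NUnit; the feature and step names are invented):

        using NUnit.Framework;

        [TestFixture]
        public class DrawRectangleAcceptance          // hypothetical feature
        {
            // Each comment mirrors a line of the tester-written use-case script.
            // While the feature is in flight, a human performs the steps; once the
            // UI settles, the comments are replaced with UI-automation calls.
            [Test]
            public void UserCanDrawAndResizeARectangle()
            {
                // Step 1: open a new document
                // Step 2: select the rectangle tool and drag across the canvas
                // Step 3: verify a rectangle of the dragged size is selected
                Assert.Inconclusive("Manual script - automation pending.");
            }
        }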

    Read the article

  • Moving from Silverlight 4 Beta to RC - Part 1

    The other day I had finished up my Task-It Webinar, written a few blog posts, and knew the time had come to move from my Silverlight 4 Beta environment up to the latest RC (release candidate) bits that were released last Monday. What disappointed me when I went to the Silverlight 4 Information Page is that it told me what to install, but not what to uninstall first.

    Uninstalling: I'm not entirely sure if I had to uninstall anything, or if installing the new stuff would just work, but in poking around the web I found posts stating that you must uninstall the following items first. Unfortunately I'm going by memory here and have not been able to find my way back to the magic page I found in the myriad of posts that I went through:
    - Microsoft VisualStudio Beta 2
    - Microsoft .NET Framework 4 Extended (apparently this must be done *before* the next one)
    - Microsoft .NET Framework Client Profile
    - Microsoft Silverlight 4 Tools for Visual Studio 2010
    - WCF RIA Services Preview for Visual Studio 2010
    While I was at it, I removed a bunch of other stuff, like Blend 3, Blend 4, the SDKs associated with them, and a bunch of other stuff that was old. Of course, I didn't really want/need to keep any Silverlight 3 stuff around as I am developing Task-It in Silverlight 4. If I need a Silverlight 3 environment at some point I'll set it up in a virtual environment. NOTE: One thing that I did not uninstall is the Microsoft Silverlight 4 Toolkit November 2009. The reason is that they have not released the March version yet, so if you uninstall this, you'll end up having to reinstall it.

    Installing: OK, now that I had all of that old stuff off my machine, it was time to get the new stuff. For this part I liked Tim Heuer's post, A Guide to What has Changed in Silverlight 4 RC, better than the Silverlight 4 Information Page.
    - VisualStudio 2010 RC - I installed Ultimate, but you may not need that version. No harm, it's free for now anyway. I downloaded the .exe and the 3 .rar files, then ran the .exe. I then extracted the contents of the ISO (using WinZip) to a new directory, and now had Setup.exe, a bunch of .cab files and some other assorted stuff. I simply ran Setup.exe and chose custom install (only because I wanted to uncheck Visual C++...I don't really have a need for that).
    - Silverlight 4 Tools for Visual Studio 2010 - As mentioned in Tim's blog post, this installs the Silverlight developer runtime, SDK, tools, and WCF RIA Services.
    - WCF RIA Services Toolkit March 2010 - I'm not sure if/when I'll need any of this stuff, but no harm in installing it anyway.
    - Expression Blend 4 beta - Only if you plan to use Blend, which I do.
    - Windows Phone Developer Tools - Only if you are interested in playing with Windows Phone 7 development.

    Wrap Up: Hopefully I got those steps right. If anyone finds anything I've missed, please just add a comment to this post and I'll update it accordingly.

    Read the article

  • Asynchrony in C# 5 (Part I)

    - by javarg
    I’ve been playing around with the new Async CTP preview available for download from Microsoft. It’s amazing how language trends are influencing the evolution of Microsoft’s development platform. Much more effort is being put in at the language level today than in previous versions of .NET. In this post series I’ll review some major features contained in this release:
    - Asynchronous functions
    - TPL Dataflow
    - Task-based asynchronous pattern

    Part I: Asynchronous Functions. These are a means of expressing asynchronous operations. This kind of function must return void or Task/Task<> (functions returning void let us implement Fire & Forget asynchronous operations). The two new keywords introduced are async and await.
    - async: marks a function as asynchronous, indicating that some part of its execution may take place some time later (after the method call has returned). Thus, all async functions must include some kind of asynchronous operation. This keyword on its own does not make a function asynchronous though; its nature depends on its implementation.
    - await: allows us to define operations inside a function that will be awaited for continuation (more on this later).

    Async function sample:

        async void ShowDateTimeAsync()
        {
            while (true)
            {
                var client = new ServiceReference1.Service1Client();
                var dt = await client.GetDateTimeTaskAsync();
                Console.WriteLine("Current DateTime is: {0}", dt);
                await TaskEx.Delay(1000);
            }
        }

    The previous sample is a typical usage scenario for these new features. Suppose we query some external Web Service to get data (in this case the current DateTime) and we do so at regular intervals in order to refresh the user’s UI. Note the async and await keywords working together. The ShowDateTimeAsync method indicates its asynchronous nature to the caller using the keyword async (it may complete after returning control to its caller). The await keyword indicates that the flow control of the method will continue executing asynchronously after client.GetDateTimeTaskAsync returns. The latter is the most important thing to understand about the behavior of this method and how this actually works. The flow control of the method will be reconstructed after any asynchronous operation completes (specified with the keyword await). This reconstruction of flow control is the real magic behind the scenes, and it is done by the C#/VB compilers. Note how we didn’t use any of the regular existing async patterns and we’ve defined the method very much like a synchronous one.

    Now, compare the following snippet, written with the traditional UI async pattern, with the previous async/await version:

        void ComplicatedShowDateTime()
        {
            var client = new ServiceReference1.Service1Client();
            client.GetDateTimeCompleted += (s, e) =>
            {
                Console.WriteLine("Current DateTime is: {0}", e.Result);
                client.GetDateTimeAsync();
            };
            client.GetDateTimeAsync();
        }

    The previous implementation is somewhat similar to the first one shown, but more complicated. Note how the while loop is implemented as a chained callback to the same method (client.GetDateTimeAsync) inside the event handler (please, do not do this in your own application, this is just an example). How does it work? Using a state workflow (actually a jump table), the compiler expands our code and creates the necessary steps to execute it, resuming pending operations after any asynchronous one. The intention of the new async/await pattern is to let us think and code as we normally do when designing an algorithm. It also allows us to preserve the logical flow control of the program (without using any tricky coding patterns to accomplish this). The compiler will then create the necessary workflow to execute operations as they happen in time.
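    As a usage illustration (my own sketch, not from the post), the same keywords compose when the async function returns a value through Task<T>; note that in the released versions of C# 5 / .NET 4.5 the CTP’s TaskEx helpers became Task.Delay and Task.Run:

        using System;
        using System.Net.Http;
        using System.Threading.Tasks;

        class Demo
        {
            // An async function returning Task<string>: callers can await its result.
            static async Task<string> DownloadSnippetAsync(string url)
            {
                using (var http = new HttpClient())
                {
                    string html = await http.GetStringAsync(url);   // suspends here, resumes when done
                    return html.Length > 40 ? html.Substring(0, 40) : html;
                }
            }

            // The caller is itself async, so it can await without blocking a thread.
            static async Task RunAsync()
            {
                string snippet = await DownloadSnippetAsync("http://example.com");
                Console.WriteLine(snippet);
                await Task.Delay(1000);   // the released equivalent of TaskEx.Delay
            }
        }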

    Read the article

  • Tetris Movement - Implementation

    - by James Brauman
    Hi gamedev, I'm developing a Tetris clone and working on the input at the moment. When I was prototyping, movement was triggered by releasing a directional key. However, in most Tetris games I've played the movement is a bit more complex. When a directional key is pressed, the shape moves one space in that direction. After a short interval, if the key is still held down, the shape starts moving in that direction continuously until the key is released. In the case of the down key being pressed, there is no pause between the initial movement and the subsequent continuous movement. I've come up with a solution, and it works well, but it's totally over-engineered. Hey, at least I can recognize when things are getting silly, right? :)

        public class TetrisMover
        {
            List<Keys> registeredKeys;
            Dictionary<Keys, TimeSpan> continuousPressedTime;
            Dictionary<Keys, TimeSpan> totalPressedTime;
            Dictionary<Keys, TimeSpan> initialIntervals;
            Dictionary<Keys, TimeSpan> continousIntervals;
            Dictionary<Keys, Action> keyActions;
            Dictionary<Keys, bool> initialActionDone;
            KeyboardState currentKeyboardState;

            public TetrisMover() { *snip* }

            public void Update(GameTime gameTime)
            {
                currentKeyboardState = Keyboard.GetState();
                foreach (Keys currentKey in registeredKeys)
                {
                    if (currentKeyboardState.IsKeyUp(currentKey))
                    {
                        continuousPressedTime[currentKey] = TimeSpan.Zero;
                        totalPressedTime[currentKey] = TimeSpan.Zero;
                        initialActionDone[currentKey] = false;
                    }
                    else
                    {
                        if (initialActionDone[currentKey] == false)
                        {
                            keyActions[currentKey]();
                            initialActionDone[currentKey] = true;
                        }

                        totalPressedTime[currentKey] += gameTime.ElapsedGameTime;
                        if (totalPressedTime[currentKey] >= initialIntervals[currentKey])
                        {
                            continuousPressedTime[currentKey] += gameTime.ElapsedGameTime;
                            if (continuousPressedTime[currentKey] >= continousIntervals[currentKey])
                            {
                                keyActions[currentKey]();
                                continuousPressedTime[currentKey] = TimeSpan.Zero;
                            }
                        }
                    }
                }
            }

            public void RegisterKey(Keys key, TimeSpan initialInterval, TimeSpan continuousInterval, Action keyAction)
            {
                if (registeredKeys.Contains(key))
                    throw new InvalidOperationException(
                        string.Format("The key {0} is already registered.", key));
                registeredKeys.Add(key);
                continuousPressedTime.Add(key, TimeSpan.Zero);
                totalPressedTime.Add(key, TimeSpan.Zero);
                initialIntervals.Add(key, initialInterval);
                continousIntervals.Add(key, continuousInterval);
                keyActions.Add(key, keyAction);
                initialActionDone.Add(key, false);
            }

            public void UnregisterKey(Keys key) { *snip* }
        }

    I'm updating it every frame, and this is how I'm registering keys for movement:

        tetrisMover.RegisterKey(
            Keys.Left, keyHoldStartSpecialInterval, keyHoldMovementInterval,
            () => { Move(Direction.Left); });
        tetrisMover.RegisterKey(
            Keys.Right, keyHoldStartSpecialInterval, keyHoldMovementInterval,
            () => { Move(Direction.Right); });
        tetrisMover.RegisterKey(
            Keys.Down, TimeSpan.Zero, keyHoldMovementInterval,
            () => { PerformGravity(); });

    Issues that this doesn't address:
    - If both left and right are held down, the shape moves back and forth really quickly.
    - If a directional key is held down and the turn finishes and the shape is replaced by a new one, the new one will move quickly in that direction instead of the little pause it is supposed to have.
    I could fix the issues, but I think it will make the solution even worse. How would you implement this?
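    For comparison, here is a stripped-down sketch of the same delayed auto-repeat idea for a single key (my own simplification, not the asker's code; Move and Direction are the asker's members, and the interval values are arbitrary):

        // Move once immediately, pause for an initial delay, then repeat the
        // move at a fixed interval for as long as the key stays down.
        TimeSpan held = TimeSpan.Zero;
        TimeSpan sinceRepeat = TimeSpan.Zero;
        readonly TimeSpan initialDelay = TimeSpan.FromMilliseconds(250);
        readonly TimeSpan repeatInterval = TimeSpan.FromMilliseconds(60);

        void UpdateLeftKey(GameTime gameTime, KeyboardState keyboard)
        {
            if (keyboard.IsKeyUp(Keys.Left))
            {
                held = TimeSpan.Zero;
                sinceRepeat = TimeSpan.Zero;
                return;
            }

            if (held == TimeSpan.Zero)
                Move(Direction.Left);                 // the immediate single step

            held += gameTime.ElapsedGameTime;
            if (held < initialDelay)
                return;                               // still inside the initial pause

            sinceRepeat += gameTime.ElapsedGameTime;
            if (sinceRepeat >= repeatInterval)
            {
                Move(Direction.Left);                 // continuous movement
                sinceRepeat = TimeSpan.Zero;
            }
        }

    For the down key the initial delay would simply be TimeSpan.Zero, matching the no-pause behaviour described above.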

    Read the article

  • Development Quirk From ASP.NET Dynamic Compilation

    - by jkauffman
    The Problem I got a compilation error in my ASP.NET MVC3 project that tested my sanity today. (As always, names are changed to protect the innocent) The type or namespace name 'FishViewModel' does not exist in the namespace 'Company.Product.Application.Models' (are you missing an assembly reference?) Sure looks easy! There must be something in the project referring to a FishViewModel. The Confusing Part The first thing I noticed was the that error was occuring in a folder clearly not in my project and in files that I definitely had not created: %SystemRoot%\Microsoft.NET\Framework\(versionNumber)\Temporary ASP.NET Files\ App_Web_mezpfjae.1.cs I also ascertained these facts, each of which made me more confused than the last: Rebuild and Clean had no effect. No controllers in the project ever returned a ViewResult using FishViewModel. No views in the project defined that they use FishViewModel. Searching across all files included in the project for “FishViewModel” provided no results. The build server did not report a problem. The Solution The problem stemmed from a file that was not included in the project but still present on the file system: (By the way, if you don’t know this trick already, there is a toolbar button in the Solution Explorer window to “Show All Files” which allows you to see files all files in the file system) In my situation, I was working on the mission-critical Fish view before abandoning the feature. Instead of deleting the file, I excluded it from the project. However, this was a bad move. It caused the build failure, and in order to fix the error, this file must be deleted. By the way, this file was not in source control, so the build server did not have it. This explains why my build server did not report a problem for me. The Explanation So, what’s going on? This file isn’t even a part of the project, so why is it failing the build? This is a behavior of the ASP.NET Dynamic Compilation. This is the same process that occurs when deploying a webpage; ASP.NET compiles the web application’s code. When this occurs on a production server, it has to do so without the .csproj file (which isn’t usually deployed, if you’ve taken your time to do a deployment cleanly). This process has merely the file system available to identify what to compile. So, back in the world of developing the webpage in visual studio on my developer box, I run into the situation because the same process is occuring there. This is true even though I have more files on my machine than will actually get deployed. I can’t help but think that this error could be attributed back to the real culprit file (Fish.cshtml, rather than the temporary files) with some work, but at least the error had enough information in it to narrow it down. The Conclusion I had previously been accustomed to the idea that for c# projects, the .csproj file always “defines” the build behavior. This investigation has taught me that I’ll need to shift my thinking a bit to remember that the file system has the final say when it comes to web applications, even on the developer’s machine!

    Read the article

  • Circle-Line Collision Detection Problem

    - by jazzdawg
    I am currently developing a breakout clone and I have hit a roadblock in getting collision detection between a ball (circle) and a brick (convex polygon) working correctly. I am using a Circle-Line collision detection test where each line represents an edge on the convex polygon brick. For the majority of the time the Circle-Line test works properly and the points of collision are resolved correctly. (Image: collision detection working correctly.) However, occasionally my collision detection code returns false due to a negative discriminant when the ball is actually intersecting the brick. (Image: collision detection failing.) I am aware of the inefficiency of this method and I am using axis-aligned bounding boxes to cut down on the number of bricks tested. My main concern is whether there are any mathematical bugs in my code below.

    /*
     * from and to are points at the start and end of the convex polygon's edge.
     * This function is called for every edge in the convex polygon until a
     * collision is detected.
     */
    bool circleLineCollision(Vec2f from, Vec2f to)
    {
        Vec2f lFrom, lTo, lLine;
        Vec2f line, normal;
        Vec2f intersectPt1, intersectPt2;
        float a, b, c, disc, sqrt_disc, u, v, nn, vn;
        bool one = false, two = false;

        // set line vectors
        lFrom = from - ball.circle.centre; // localised
        lTo = to - ball.circle.centre;     // localised
        lLine = lFrom - lTo;               // localised
        line = from - to;

        // calculate a, b & c values
        a = lLine.dot(lLine);
        b = 2 * (lLine.dot(lFrom));
        c = (lFrom.dot(lFrom)) - (ball.circle.radius * ball.circle.radius);

        // discriminant
        disc = (b * b) - (4 * a * c);

        if (disc < 0.0f)
        {
            // no intersections
            return false;
        }
        else if (disc == 0.0f)
        {
            // one intersection
            u = -b / (2 * a);
            intersectPt1 = from + (lLine.scale(u));
            one = pointOnLine(intersectPt1, from, to);
            if (!one)
                return false;
            return true;
        }
        else
        {
            // two intersections
            sqrt_disc = sqrt(disc);
            u = (-b + sqrt_disc) / (2 * a);
            v = (-b - sqrt_disc) / (2 * a);
            intersectPt1 = from + (lLine.scale(u));
            intersectPt2 = from + (lLine.scale(v));
            one = pointOnLine(intersectPt1, from, to);
            two = pointOnLine(intersectPt2, from, to);
            if (!one && !two)
                return false;
            return true;
        }
    }

    bool pointOnLine(Vec2f p, Vec2f from, Vec2f to)
    {
        if (p.x >= min(from.x, to.x) && p.x <= max(from.x, to.x) &&
            p.y >= min(from.y, to.y) && p.y <= max(from.y, to.y))
            return true;
        return false;
    }
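    For what it's worth, an equivalent overlap test can avoid the discriminant entirely by clamping the closest point on the edge to the segment and comparing its distance to the radius. The sketch below uses C# with System.Numerics.Vector2 as a stand-in for the Vec2f and ball types in the post, so the names and the method signature are assumptions, not a drop-in replacement.

    // Sketch: circle vs. line segment by closest-point-on-segment distance.
    using System.Numerics;

    static class CircleSegment
    {
        public static bool Intersects(Vector2 from, Vector2 to, Vector2 centre, float radius)
        {
            Vector2 edge = to - from;
            float lengthSq = Vector2.Dot(edge, edge);

            // Parameter of the closest point, clamped to the segment [0, 1].
            float t = 0f;
            if (lengthSq > 0f)
            {
                t = Vector2.Dot(centre - from, edge) / lengthSq;
                if (t < 0f) t = 0f;
                else if (t > 1f) t = 1f;
            }

            Vector2 closest = from + t * edge; // closest point on the segment
            return Vector2.DistanceSquared(closest, centre) <= radius * radius;
        }
    }

    Note that, like the quadratic version, this only detects overlap with an edge; a ball that has ended up entirely inside the brick (for example after a large timestep) will still report no hit and needs a separate containment or swept test.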

    Read the article

  • Setup and configure a MVC4 project for Cloud Service(web role) and SQL Azure

    - by MagnusKarlsson
    I aim to keep this blog post updated and to add related posts to it. Since there are a lot of these out there, I link to others who have done much the same before me, a kind of blog-DRY pattern that I'm aiming for. I also keep all mistakes and misconceptions in for others to see. As an example: if I hit a stack trace I will google it if I don't directly figure out the reason for it. I will then probably take the most plausible result and try it out. If it fails because I misinterpreted the error, I will not delete it from the log but keep it for future reference and for others to see. That way people who find this blog can see multiple solutions for indexed stack traces, and I can better remember how to do stuff. To avoid my errors, I recommend you read through it all before going from start to finish.
    The steps:
    1. Set up the project in VS2012 (msdn blog).
    2. Set up Azure services (half of the mpspartners.com blog).
    3. Set up connection strings and configuration files (msdn blog + notes).
    4. Export certificates.
    5. Create the Azure package from VS2012 and deploy to staging (same steps as for production).
    6. Connection string error.
    1. Set up the Visual Studio project: http://blogs.msdn.com/b/avkashchauhan/archive/2011/11/08/developing-asp-net-mvc4-based-windows-azure-web-role.aspx
    2. Then log in to Azure to set up the services. Stop following this guide at the "publish website" part, since we'll be uploading a package: http://www.mpspartners.com/2012/09/ConfiguringandDeployinganMVC4applicationasaCloudServicewithAzureSQLandStorage/
    3. When set up (connection strings for debug and release and all), follow this guide to set up the configuration files: http://msdn.microsoft.com/en-us/library/windowsazure/hh369931.aspx Trying to package our application at this step will generate the following warning: 3>MvcWebRole1(0,0): warning WAT170: The configuration setting 'Microsoft.WindowsAzure.Plugins.Diagnostics.ConnectionString' is set up to use the local storage emulator for role 'MvcWebRole1' in configuration file 'ServiceConfiguration.Cloud.cscfg'. To access Windows Azure storage services, you must provide a valid Windows Azure storage connection string. Right-click the web role under Roles in Solution Explorer and choose Properties. Choose "Service configuration: Cloud". At "specify storage account credentials" we will copy/paste our account name and key from the Azure management platform. (Screenshot 3.1)
    4. Right-click Remote Desktop Configuration, select the certificate and export it to a file. We need to allow it in the portal manager. (Screenshot 4.1)
    5. Now right-click the cloud project and select Package. (Screenshot 5.1: dialogue box. Screenshot 5.2: package success.) Now copy the path to the packaged file and go to the management portal again. Click your web role and choose staging (or production). Upload. (Screenshot 5.3) Tick the box about the single instance if that's what you want or you don't know what it means; otherwise the following will happen (see image 4.6). (Screenshot 5.4: dialogue box.) When you have clicked the accept button you will see the following screen with some green indicators down at the right corner. Click them if you want to see status. (Screenshot 5.5: information screen.) (Screenshot 5.6) "Failed to deploy application. The upload application has at least one role with only one instance. We recommend that you deploy at least two instances per role to ensure high availability in case one of the instances becomes unavailable." To fix, go to step 5.4. If you forgot to (or just didn't know you were supposed to) export your certificates, the following error will occur. Side note: the following thread suggests, as prevention, "Enable Remote Desktop for all roles" when right-clicking BIAB and choosing "Package". But in my case it was the not-so-present certificates. I found the solution here: http://social.msdn.microsoft.com/Forums/en-US/dotnetstocktradersampleapplication/thread/0e94c2b5-463f-4209-86b9-fc257e0678cd (Screenshots 5.7 and 5.8: success! Screenshot 5.9: nice URL n' all. More on that in another blog post.)
    6. If you try to log in and get [error screenshot missing]: when this error occurs, many web sites suggest this is because you need http://nuget.org/packages/Microsoft.AspNet.Providers.LocalDB or http://nuget.org/packages/Microsoft.AspNet.Providers. But it can also be that you don't have the correct setup for converting connection strings between your web.config and your debug.web.config (or release.web.config, whichever you're using). Run it as suggested in the "ordinary" project in your solution. Go to the management portal and click update.
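    As an aside on step 3: a minimal sketch of reading that storage setting from code rather than from web.config, assuming the Microsoft.WindowsAzure.ConfigurationManager NuGet package is installed. The setting name is the one from the WAT170 warning above; the class and method below (other than CloudConfigurationManager.GetSetting itself) are invented for illustration.

    using Microsoft.WindowsAzure;

    public static class StorageSettings
    {
        // Reads from ServiceConfiguration.*.cscfg when running as a web role
        // (or in the emulator), otherwise falls back to appSettings in
        // web.config/app.config.
        public static string DiagnosticsConnectionString()
        {
            return CloudConfigurationManager.GetSetting(
                "Microsoft.WindowsAzure.Plugins.Diagnostics.ConnectionString");
        }
    }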

    Read the article

  • USDM and Oracle Offer a New Part 11 Compliant Solution for Life Sciences

    - by Michael Snow
    Guest post today provided by Oracle partner, USDM.
    Regulated Content in WebCenter
    USDM and Oracle offer a new Part 11 compliant solution for Life Sciences (White Paper). Life science customers now have the ability to take advantage of all of the benefits of Oracle's WebCenter Content, a global leader in Enterprise Content Management. For the past year, USDM has been developing best practice compliance solutions to meet regulated content management requirements for 21 CFR Part 11 in WebCenter Content. USDM has been an expert in ECM for life sciences since 1999 and in 2011 certified that WebCenter was a 21 CFR Part 11 compliant content management platform (White Paper). In addition, USDM has built Validation Accelerator Packs for WebCenter to enable life science organizations to quickly and cost effectively validate this world class solution. With the Part 11 certification, Oracle's WebCenter now provides regulated life science organizations the ability to manage REGULATORY content in WebCenter, as well as the ability to take advantage of ALL of the additional functionality of WebCenter, including a complete, open, and integrated portfolio of portal, web experience management, content management and social networking technology. Here are a few screen shot examples of Part 11 functionality included in the product: E-Sign, E-Sign Rendor, Meta Data History, Audit Trail Report, and Access Reporting. Gone are the days that life science companies have to spend millions of dollars a year to implement, maintain, and validate ECM systems that no longer meet the ever-changing business and regulatory requirements. Life science companies now have the ability to use WebCenter Content, an ECM system with a substantially lower cost of ownership and unsurpassed functionality. Oracle has been #1 in life sciences because of their ability to develop cost effective, easy-to-use, scalable solutions which help increase insight and efficiency to drive growth for their customers. Adding a world class ECM solution to this product portfolio allows life science organizations the chance to get rid of costly ECM systems that no longer meet their needs and use WebCenter, part of the Oracle Fusion Technology stack, with their other leading enterprise applications.
    USDM provides:
    • Expertise in Life Science ECM Business Processes
    • Prebuilt Life Science Configuration in WebCenter
    • Validation Accelerator Packs for WebCenter
    USDM is very proud to support Oracle's expanding commitment to Life Sciences. For more information please contact: [email protected] Oracle will be exhibiting at DIA 2012 in Philadelphia on June 25-27. 
Stop by our booth (#2825) to learn more about the advantages of a centralized ECM strategy and see the Oracle WebCenter Content solution, our 21 CFR Part 11 compliant content management platform.

    Read the article

  • Replicating between Cloud and On-Premises using Oracle GoldenGate

    - by Ananth R. Tiru
    Do you have applications running on the cloud that you need to connect with on-premises systems? The most likely answer to this question is a resounding YES! If so, then you understand the importance of keeping the data fresh at all times across the cloud and on-premises environments. This is also one of the key focus areas for the new GoldenGate 12c release, which we announced a couple of weeks ago via a press release. Most enterprises have spent years avoiding the data "silos" that inhibit productivity. For example, an enterprise which has adopted a CRM strategy could be relying on an on-premises marketing application used for developing and nurturing leads. At the same time it could be using a SaaS-based sales application to create opportunities and quotes. The sales and the marketing teams which use these systems need to be able to access and share the data in a reliable and cohesive way. This example can be extended to other application areas such as HR, supply chain, and finance, and the demands the users place on getting a consistent view of the data. When it comes to moving data in hybrid environments, some of the key requirements include minimal latency, reliability and security: Data must remain fresh. As data ages it becomes less relevant and less valuable; day-old data is often insufficient in today's competitive landscape. Reliability must be guaranteed despite system or connectivity issues that can occur between the cloud and on-premises instances. Security is a key concern when replicating between cloud and on-premises instances. There are several options to consider when replicating between the cloud and on-premises instances.
    Option 1 – Secured network established between the cloud and on-premises. A secured network is established between the cloud and on-premises which enables the applications (including replication software) running on the cloud and on-premises to have seamless connectivity to other applications irrespective of where they are physically located.
    Option 2 – Restricted network established between the cloud and on-premises. A restricted network is established between the cloud and on-premises instances which enables certain ports (required by replication) to be opened on both the cloud and the on-premises instances and white-lists the IP addresses of the cloud and on-premises instances.
    Option 3 – Restricted network access from on-premises and cloud through HTTP proxy. This option can be considered when the ports required by the applications (including replication software) are not open and the cloud instance is not white-listed on the on-premises instance. This option of tunneling through an HTTP proxy may only be considered when proper security exceptions are obtained.
    Oracle GoldenGate
    Oracle GoldenGate is used by major Fortune 500 companies and other industry leaders worldwide to support mission-critical systems for data availability and integration. Oracle GoldenGate addresses the requirements for ensuring data consistency between cloud and on-premises instances, thus enabling business processes to run effectively and reliably. The architecture diagram below illustrates the scenario where the cloud and the on-premises instance are connected using GoldenGate through a secured network. In the above scenario, Oracle GoldenGate is installed and configured on both the cloud and the on-premises instances. On the cloud instance, Oracle GoldenGate is installed and configured on the machine where the database instance can be accessed.
    Oracle GoldenGate can be configured for unidirectional or bi-directional replication between the cloud and on-premises instances. The specific configuration details of the Oracle GoldenGate processes will depend upon the option selected for establishing connectivity between the cloud and on-premises instances. The knowledge article (ID 1588484.1) titled 'Replicating between Cloud and On-Premises using Oracle GoldenGate' discusses in detail the options for replicating between the cloud and on-premises instances. The article can be found on My Oracle Support. To learn more about Oracle GoldenGate 12c, register for our launch webcast where we will go into these new features in more detail. You may also want to download our white paper "Oracle GoldenGate 12c Release 1 New Features Overview". I would love to hear your requirements for replicating between on-premises and cloud instances, as well as your comments about the strategy discussed in the knowledge article to address your needs. Please post your comments in this blog or in the Oracle GoldenGate public forum: https://forums.oracle.com/community/developer/english/business_intelligence/system_management_and_integration/goldengate

    Read the article

  • Comparing Apples and Pairs

    - by Tony Davis
    A recent study, High Costs and Negative Value of Pair Programming, by Capers Jones, pulls no punches in its assessment of the costs-to-benefits ratio of pair programming: two programmers working together at a single computer, rather than separately. He implies that pair programming is a method rushed into production on a wave of enthusiasm for Agile or Extreme Programming, without any real regard for its effectiveness. Despite admitting that his data represented a far from complete study of the economics of pair programming, his conclusions were stark: it was 2.5 times more expensive, resulted in a 15% drop in productivity, and offered no significant quality benefits. The author provides a more scientific analysis than Jon Evans' Pair Programming Considered Harmful, but the theme is the same. In terms of upfront coding costs, pair programming is surely more expensive. The claim of productivity loss is dubious and contested by other studies. The third claim, though, did surprise me. The author's data suggests that if both the pair and the individual programmers employ static code analysis and testing, then there is no measurable difference in the resulting code quality, in terms of defects per function point. In other words, pair programming incurs a massive extra cost for no tangible return on investment. There were, inevitably, many criticisms of his data and his conclusions, a few of which are persuasive. Firstly, that the driver/observer model of pair programming, on which the study bases its findings, is far from the most effective. For example, many find Ping-Pong pairing, based on use of test-driven development, far more productive. Secondly, that it doesn't distinguish between "expert" and "novice" pair programmers; that is, independently of other programming skills, how skilled an individual was at pair programming. Thirdly, that his measure of quality is too narrow. This point rings true, certainly at Red Gate, where developers don't pair program all the time, but use the method in short bursts, while tackling a tricky problem and needing a fresh perspective on the best approach, or more in-depth knowledge in a particular domain. All of them argue that pair programming, and collective code ownership, offers significant rewards, if not in terms of immediate "bug reduction", then in removing the likelihood of single points of failure, and improving the overall quality and longer-term adaptability/maintainability of the design. There is also a massive learning benefit for both participants. One developer told me how he once worked in the same team over consecutive summers, the first time with no pair programming and the second time pair programming two-thirds of the time, and described the increased rate of learning the second time as "phenomenal". There are a great many theories on how we should develop software (Scrum, XP, Lean, etc.), but woefully little scientific research into their effectiveness. For a group that spends so much time crunching other people's data, I wonder if developers spend enough time crunching data about themselves. Capers Jones' data may be incomplete, but it should cause a pause for thought, especially for any large IT departments, supporting commerce and industry, who are considering pair programming. It certainly shouldn't discourage teams from exploring new ways of developing software, as long as they also think about how to gather hard data to gauge their effectiveness.

    Read the article

  • Passed: Exam 70-480: Programming in HTML5 with JavaScript and CSS3

    First off: mission accomplished. And it was fun! Using the resources listed in my previous article about Learning Content, I'd like to thank Microsoft Technical Evangelists Jeremy Foster and Michael Palermo for their excellent jump start videos on Channel 9, and the various authors at Pluralsight.
    Local Prometric testing centre
    Back in November I chose a local testing centre which was the easiest to access from my office, despite the horrible traffic you might experience here on the island. Actually, it was not the closest one. But due to their website, their awards as a Microsoft Learning Center, and my general curiosity about the premises, I gave FRCI priority. Boy, how I would regret this decision this morning... The official Prometric exam guide asks any attendee to show up at least 30 minutes prior to the scheduled time of the test. Well, this should have been the easier part, but unfortunately, due to heavier traffic than usual, I arrived only 20 minutes before time. Not too bad, but more was to come. The building called 'le Hub' is nicely renovated and provides the right environment for an IT group of companies like FRCI. I think they currently have 5 independent IT departments over there. Even the handling at the reception was straightforward, welcoming and put me at ease. But then... first shock: "We don't have any exam registration for today." Hm, that's nice... Here's my mail confirmation from Prometric. First attack successfully handled, and the lady went off again to check their records. Next shock: a couple of minutes later, another guy tries to explain to me that "the staff of the testing centre is already on vacation and the centre is officially closed." Are you kidding me? Here's the official confirmation by Prometric, and I don't find it funny that I take a day off today only to hear this kind of blubbering nonsense. I thought that I'd be on the safe side choosing a company with a good reputation here on the island. Another 40 (!) minutes later, they finally come back to the waiting area with a pre-filled form about the test appointment. And finally, after an hour of waiting, discussing, restarting the testing PC, and lots of talk, I am allowed to sit down and take the exam.
    Exam details
    Well, you know the rules. Signing an NDA doesn't allow me to provide you any details about the questions or topics that have been covered. Please check out the official exam description, and you're on the right way. Sorry, guys... ;-)
    The result
    "Congratulations! You have passed this Microsoft Certification exam." In general, I have to admit that the parts on HTML5 and CSS3 were the easiest after all, and that I have to get myself a little bit more familiar with certain JavaScript features like class definitions, inheritance and data security. Anyway, exam passed - who cares about the details?
    Next goal
    Of course, the journey to Microsoft certifications continues and my next goal is to pass exams 70-481 - Essentials of Developing Windows Store Apps using HTML5 and JavaScript and 70-482 - Advanced Windows Store App Development using HTML5 and JavaScript. This would allow me to achieve the certification of MCSD: Windows Store Apps using HTML5. I guess, during 2013 I'll be busy with various learning and teaching lessons.

    Read the article

  • ARTS Reference Model for Retail

    - by Sanjeev Sharma
    Consider a hypothetical scenario where you have been tasked to set up retail operations for an electronic goods, daily consumables, or luxury brand retailer. It is very likely you will be faced with the following questions: What are the essential business capabilities that you must have in place? What are the essential business activities under-pinning each of the business capabilities identified in Step 1? What are the set of steps that you need to perform to execute each of the business activities identified in Step 2? Answers to the above will drive your investments in software and hardware to enable the core retail operations. More importantly, the choices you make in responding to the above questions will have several implications in the short-run and in the long-run. In the short-term, you will incur the time and cost of defining your technology requirements, procuring the software/hardware components and getting them up and running. In the long-term, as you grow in operations organically or through M&A, partnerships and franchise business models, you will invariably need to make more technology investments to manage the greater complexity (scale and scope) of business operations. "As new software applications, such as time & attendance, labor scheduling, and POS transactions, just to mention a few, are introduced into the store environment, it takes a disproportionate amount of time and effort to integrate them with existing store applications. These integration projects can add up to 50 percent to the time needed to implement a new software application and contribute significantly to the cost of the overall project, particularly if a systems integrator is called in. This has been the reality that all retailers have had to live with over the last two decades. The effect of the environment has not only been to increase costs, but also to limit retailers' ability to implement change and the speed with which they can do so." (excerpt taken from here) Now, one would think a lot of retailers would have already gone through the pain of finding answers to these questions, so why re-invent the wheel? Precisely so: a major effort began almost 17 years ago in the retail industry to make it less expensive and less difficult to deploy new technology in stores and at the retail enterprise level. This effort is called the Association for Retail Technology Standards (ARTS). Without standards such as those defined by ARTS, you would very likely end up experiencing the following:
    Increased Time and Cost, due to resource wastage arising from re-inventing the wheel (i.e. re-creating vanilla processes from scratch) and incurring otherwise avoidable mistakes and errors by ignoring the experience of others.
    Sub-optimal Process Efficiency, due to a narrow, isolated view of processes that ignores process inter-dependencies (i.e. optimizing the parts but not the whole), resulting in a lack of transparency and inter-departmental finger-pointing.
    Embracing ARTS standards as a blueprint for establishing, managing, or streamlining your retail operations can benefit you in the following ways:
    Improved Time-to-Market, from parity with industry best-practice processes (e.g. ARTS), thus avoiding "reinventing the wheel" for common retail processes and focusing more on customizing processes for differentiation, and lowering integration complexity and risk with a standardized vocabulary for exchange between internal and external (i.e. partner) systems.
    Lower Operating Costs, by embracing the ARTS enterprise-wide process reference model for developing and streamlining retail operations holistically instead of through a narrow, silo-ed view, and by procuring IT systems in compliance with ARTS, thus avoiding IT budget marginalization.
    While parity with an industry standard such as the ARTS business process model does not by itself create a differentiation, it does provide a higher starting point for bridging the strategy-execution gap in setting up and improving retail operations.

    Read the article

  • How can a developer realize the full value of his work [closed]

    - by Jubbat
    I, honestly, don't want to work as a developer in a company anymore after all I have seen. I want to continue developing software, yes, but not in the way I see it all around me. And I'm in London, a city that congregates lots of great developers from the whole world, so it shouldn't be a problem of location. So, what are my concerns? First of all, best case scenario: you are paying the manager's salary out of yours. You are consistently underpaid, making up for the average manager's negative net return plus his whole salary. Typical scenario: I am a reasonably good developer with common sense who cares for readable code with attention to basic principles. I have found, way too often, overconfident and arrogant developers with a severe lack of common sense. Personally, I don't want to follow TDD or Agile practices like all the cool kids nowadays. I would read about them, form my own opinion and take what I feel is useful, but not follow them sheepishly. I want to work with people who understand that you have to design good interfaces, that you absolutely have to document your code, and that readability is at the top of your priorities. Also people who don't have a cargo cult mentality. For instance, the same person who asked me about design patterns in a job interview later told me that something like a List of Map of Vector of Map of Set (in Java) is very readable. Why would someone ask me about design patterns if they can't even grasp encapsulation? These kinds of things are the norm. I've seen many examples. I've seen worse than that too, from very well paid senior devs, by the way. Every second that you spend working with people with such a lack of common sense and clear thinking, you are effectively losing money by being terribly inefficient with your time. Yet, with all these inefficiencies, the average developer earns a high salary. So I tried working on my own, although I don't like the idea. I prefer healthy exchange of opinions and ideas and task division. I then did a bit of online freelancing for a while, but I think working in a sweatshop might be more enjoyable. Also, I studied computer engineering, and you are in an environment in which your client will presume you don't have any formal education because there is no way to prove it. Again, you are undervalued. You could try building a product, yes. But, of course, luck is a big factor. I wonder if there is a way to work in something you can do well, software development, and be valued for the quality of your work and be paid accordingly, where you and only you get fairly paid for the value you generate. I know that what I have written seems somewhat unlikely, but I strongly feel this way. Hopefully someone will understand me and has already figured this out. I don't think I'm alone in this kind of feeling.
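    To make the encapsulation point concrete: the original example is Java, but the same smell reads the same way in any language. The sketch below is a hypothetical C# equivalent, with all type and member names invented for illustration; the nested generic is technically usable, while a small named class says far more to the reader.

    using System.Collections.Generic;

    // The "very readable" version the interviewer defended would look like:
    //   Dictionary<string, Dictionary<string, List<HashSet<int>>>> data;

    // Encapsulation hides the nesting behind a domain concept and an intent-revealing method.
    public class CustomerOrders
    {
        private readonly Dictionary<string, Dictionary<string, List<HashSet<int>>>> byRegion =
            new Dictionary<string, Dictionary<string, List<HashSet<int>>>>();

        public void AddOrderItems(string region, string customer, HashSet<int> itemIds)
        {
            if (!byRegion.TryGetValue(region, out var customers))
                byRegion[region] = customers = new Dictionary<string, List<HashSet<int>>>();
            if (!customers.TryGetValue(customer, out var orders))
                customers[customer] = orders = new List<HashSet<int>>();
            orders.Add(itemIds);
        }
    }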

    Read the article

  • Getting Help with 'SEPA' Questions

    - by MargaretW
    What is 'SEPA'?
    The Single Euro Payments Area (SEPA) is a self-regulatory initiative for the European banking industry championed by the European Commission (EC) and the European Central Bank (ECB). The aim of the SEPA initiative is to improve the efficiency of cross-border payments and the economies of scale by developing common standards, procedures, and infrastructure. The SEPA territory currently consists of 33 European countries -- the 28 EU states, together with Iceland, Liechtenstein, Monaco, Norway and Switzerland. Part of that infrastructure includes two new SEPA instruments that were introduced in 2008:
    SEPA Credit Transfer (a Payables transaction in Oracle EBS)
    SEPA Core Direct Debit (a Receivables transaction in Oracle EBS)
    A SEPA Credit Transfer (SCT) is an outgoing payment instrument for the execution of credit transfers in Euro between customer payment accounts located in SEPA. SEPA Credit Transfers are executed on behalf of an Originator holding a payment account with an Originator Bank in favor of a Beneficiary holding a payment account at a Beneficiary Bank. In R12 of Oracle applications, the current SEPA credit transfer implementation is based on Version 5 of the "SEPA Credit Transfer Scheme Customer-To-Bank Implementation Guidelines" and the "SEPA Credit Transfer Scheme Rulebook" issued by the European Payments Council (EPC). These guidelines define the rules to be applied to the UNIFI (ISO20022) XML message standards for the implementation of SEPA Credit Transfers in the customer-to-bank space. This format is compliant with SEPA Credit Transfer version 6. A SEPA Core Direct Debit (SDD) is an incoming payment instrument used for making domestic and cross-border payments within the 33 countries of SEPA, wherein the debtor (payer) authorizes the creditor (payee) to collect the payment from his bank account. The payment can be a fixed amount like a mortgage payment, or variable amounts such as those of invoices. The "SEPA Core Direct Debit" scheme replaces various country-specific direct debit schemes currently prevailing within the SEPA zone. SDD is based on the ISO20022 XML messaging standards, version 5.0 of the "SEPA Core Direct Debit Scheme Rulebook", and the "SEPA Direct Debit Core Scheme Customer-to-Bank Implementation Guidelines". This format is also compliant with SEPA Core Direct Debit version 6. EU Regulation #260/2012 established the technical and business requirements for both instruments in euro. The regulation is referred to as the "SEPA end-date regulation", and also defines the deadlines for the migration to the new SEPA instruments:
    Euro Member States: February 1, 2014
    Non-Euro Member States: October 31, 2016
    Oracle and SEPA
    Within the Oracle E-Business Suite of applications, Oracle Payables (AP), Oracle Receivables (AR), and Oracle Payments (IBY) provide SEPA transaction capabilities for the following releases, as noted:
    Release 11.5.10.x - AP & AR
    Release 12.0.x - AP & AR & IBY
    Release 12.1.x - AP & AR & IBY
    Release 12.2.x - AP & AR & IBY
    Resources
    To assist our customers in migrating, using, and troubleshooting SEPA functionality, a number of resource documents related to SEPA are available on My Oracle Support (MOS), including:
    R11i: AP: White Paper - SEPA Credit Transfer V5 support in Oracle Payables, Doc ID 1404743.1
    R11i: AR: White Paper - SEPA Core Direct Debit v5.0 support in Oracle Receivables, Doc ID 1410159.1
    R12: IBY: White Paper - SEPA Credit Transfer v5 support in Oracle Payments, Doc ID 1404007.1
    R12: IBY: White Paper - SEPA Core Direct Debit v5 support in Oracle Payments, Doc ID 1420049.1
    R11i/R12: AP/AR/IBY: Get Help Setting Up, Using, and Troubleshooting SEPA Payments in Oracle, Doc ID 1594441.2
    R11i/R12: Single European Payments Area (SEPA) - UPDATES, Doc ID 1541718.1
    R11i/R12: FAQs for Single European Payments Area (SEPA), Doc ID 791226.1

    Read the article

< Previous Page | 199 200 201 202 203 204 205 206 207 208 209 210  | Next Page >