Search Results

Search found 1481 results on 60 pages for 'paper'.


  • Highlights from the Oracle Customer Experience Summit @ OpenWorld

    - by Richard Lefebvre
    The Oracle Customer Experience Summit was the first-ever event covering the full breadth of Oracle's CX portfolio -- Marketing, Sales, Commerce, and Service. The purpose of the Summit was to articulate the customer experience imperative and to showcase the suite of Oracle products that can help our customers create the best possible customer experience. This topic has always been an important one, but now that there are so many alternative companies to do business with, and because people have such public ways to voice their displeasure, vendors need multiple listening posts in place to gauge consumer sentiment. They need to know what is going on in real time and be able to react quickly to turn negative situations into positive ones. Those wins can then be shared in a social manner to enhance the brand and turn the customer into a repeat customer. The Summit was focused on Oracle's portfolio of products and entirely dedicated to customers who are committed to building great customer experiences within their businesses. Rather than DBAs, the attendees were business people looking to collaborate with other like-minded experts and find out how Oracle can help in terms of technology, best practices, and expertise. The event was held at the Westin St. Francis Hotel in San Francisco as part of Oracle OpenWorld. We had eight hundred people attend, which was great for the first year. Next year, there's no doubt in my mind, we can raise that number to 5,000.

    Alignment and Logic

    Oracle's Customer Experience portfolio is made up of a combination of acquired and organic products owned by many people who are new to Oracle. We include homegrown Fusion CRM, as well as RightNow, Inquira, OPA, Vitrue, ATG, Endeca, and many others. The attendees knew of the acquisitions, so naturally they wanted to see how the products all fit together and hear the logic behind the portfolio. To tell them about our alignment, we needed to be aligned. To accomplish that, a cross-functional team at Oracle agreed on the messaging so that every single Oracle presenter could cover the big picture before going deep into a product or topic. Talking about the full suite of products in one session produced overflow value for other products. And even though this internal coordination was a huge effort, everyone saw the value for our customers and for our long-term cooperation and success.

    Keynotes, Workshops, and Tents of Innovation

    We scored by having Seth Godin as our keynote speaker -- always provocative and popular. The opening keynote was a session orchestrated by Mark Hurd, Anthony Lye, and me. Mark set the stage by giving real-world examples of bad customer experiences, Anthony clearly articulated the business imperative for addressing these experiences, and I brought it all to life by taking the audience around the Customer Lifecycle and showing demos and videos, with partners included at each of the stops around the lifecycle. Brian Curran, a VP for RightNow Product Strategy, presented a session that was in high demand called The Economics of Customer Experience. People loved hearing how to build a business case and justify the cost of building a better customer experience. John Kembel, another VP for RightNow Product Strategy, held a workshop that customers raved about. It was based on the journey mapping methodology he created, which is a way to talk to customers about where they want to make improvements to their customers' experiences. He divided the audience into groups led by facilitators. Each person had the opportunity to engage with experts and peers and construct some real takeaways. The conference hotel was across from Union Square, so we used that space to set up Innovation Tents. During the day we served lunch in the tents and partners showed their different innovative ideas. It was very interesting to see all the technologies and advancements. It also gave people a place to mix and mingle and to think about the fringe of where we could all take these ideas.

    Product Portfolio Plus Thought Leadership

    Of course there is always room for improvement, but the feedback on the format of the conference was positive. Ninety percent of the sessions had either a partner or a customer teamed with an Oracle presenter. The presentations weren't dry, one-way information dumps, but more interactive. I just followed up with a CEO who attended the conference with his Head of Marketing. He told me that they are using John Kembel's journey mapping methodology across the organization to pull people together. This sort of thought leadership in these highly competitive areas gives Oracle permission to engage around the technology. We have to differentiate ourselves, and that's harder to do on the product side because everyone looks the same on paper. But on thought leadership -- we can, and did, take some really big steps.

    David Vap
    Group Vice President
    Oracle Applications Product Development

    Read the article

  • Pace Layering Comes Alive

    - by Tanu Sood
    Rick Beers is Senior Director of Product Management for Oracle Fusion Middleware. Prior to joining Oracle, Rick held a variety of executive operational positions at Corning, Inc. and Bausch & Lomb. With a professional background that includes senior management positions in manufacturing, supply chain and information technology, Rick brings a unique set of experiences to cover the impact that technology can have on business models, processes and organizations. Rick hosts the IT Leaders Editorial on a monthly basis.

    By now, readers of this column are quite familiar with Oracle AppAdvantage, a unified framework of middleware technologies, infrastructure and applications utilizing a pace layered approach to enterprise systems platforms:

    1. Standardize and Consolidate core Enterprise Applications by removing invasive customizations, costly workarounds and the complexity that multiple instances create.
    2. Move business-specific processes and applications to the Differentiate Layer, thus creating greater business agility with process extensions and best-of-breed applications managed by cross-application process orchestration.
    3. The Innovate Layer contains all the business capabilities required for engagement, collaboration and intuitive decision making. This is the layer where innovation will occur, as people engage one another in a secure yet open and informed way.
    4. Simplify IT by minimizing complexity, improving performance and lowering cost with secure, reliable and managed systems across the entire Enterprise.

    But what hasn't been discussed is the pace layered architecture that Oracle AppAdvantage adopts. What is it, what are its origins, and why is it relevant to enterprise-scale applications and technologies? It's actually a fascinating tale that spans the past 20 years, and a basic understanding of it provides a wonderful context for what is evolving as the future of enterprise systems platforms.

    It all begins in 1994 with a book by noted architect Stewart Brand, of 'Whole Earth Catalog' fame. In his 1994 book How Buildings Learn, Brand popularized the term 'Shearing Layers', arguing that any building is actually a hierarchy of pieces, each of which inherently changes at a different rate. In 1997 he produced a six-part BBC series adapted from the book, in which Part 6 focuses on Shearing Layers. In this segment Brand begins to introduce the concept of 'pace'. Brand further refined this idea in his subsequent book, The Clock of the Long Now, which began to link the concept of Shearing Layers to computing and introduced the term 'pace layering', where he proposes that: "An imperative emerges: an adaptive [system] has to allow slippage between the differently-paced systems ... otherwise the slow systems block the flow of the quick ones and the quick ones tear up the slow ones with their constant change. Embedding the systems together may look efficient at first but over time it is the opposite and destructive as well."

    In 2000, IBM architects Ian Simmonds and David Ing published a paper entitled A Shearing Layers Approach to Information Systems Development, which applied the concept of Shearing Layers to systems design and development. It argued that systems at the time were still too rigid; that they constrained organizations by their inability to adapt to changes. The findings in the Conclusions section are particularly striking: "Our starting motivation was that enterprises need to become more adaptive, and that an aspect of doing that is having adaptable computer systems. The challenge is then to optimize information systems development for change (high maintenance) rather than stability (low maintenance). Our response is to make it explicit within software engineering the notion of shearing layers, and explore it as the principle that systems should be built to be adaptable in response to the qualitatively different rates of change to which they will be subjected. This allows us to separate functions that should legitimately change relatively slowly and at significant cost from that which should be changeable often, quickly and cheaply."

    The problem at the time, of course, was that this vision of adaptable systems was simply not possible within the confines of first-generation ERP systems, which were conceived, designed and developed for standardization and compliance. It wasn't until the maturity of open, standards-based integration, and the middleware innovation that followed, that pace layering became an achievable goal. And Oracle is leading the way. Oracle's AppAdvantage framework makes pace layering come alive by taking a strategic vision 20 years in the making and transforming it into a reality. It allows enterprises to retain and even optimize their existing ERP systems, while wrapping around those ERP systems three layers of capabilities that inherently adapt as needed, at a pace that's optimal for the enterprise.

    Read the article

  • Access Control Service v2: Registering Web Identities in your Applications [concepts]

    - by Your DisplayName here!
    ACS v2 supports two fundamental types of client identities - I like to call them "enterprise identities" (WS-*) and "web identities" (Google, LiveID, OpenID in general...). I also see two different "mind sets" when it comes to application design using the above identity types:

    - Enterprise identities - often the fact that a client can present a token from a trusted identity provider means he is a legitimate user of the application. Trust relationships and authorization details have been negotiated out of band (often on paper).
    - Web identities - the fact that a user can authenticate with Google et al. does not necessarily mean he is a legitimate (or registered) user of an application. Typically additional steps are necessary (like filling out a form, email confirmation etc.).

    Sometimes a mixture of both approaches exists; for the sake of this post, I will focus on the web identity case. I got a number of questions about how to implement the web identity scenario, and after some conversations it turns out it is the old authentication vs. authorization problem that gets in the way. Many people use the IsAuthenticated property on IIdentity to make security decisions in their applications (or deny user="?" in ASP.NET terms). That's a very natural thing to do, because authentication was done inside the application and we knew exactly when the IsAuthenticated condition was true. Been there, done that. Guilty ;)

    The fundamental difference between these "old style" apps and federation is that authentication is not done by the application anymore. It is done by a third-party service, and in the case of web identity providers, by services that are not under our control (nor do we have a formal business relationship with these providers). Now the issue is: when you switch to ACS and someone with a Google account authenticates, IsAuthenticated is indeed true - because that's what he is! This does not mean that he is also authorized to use the application. It just proves he was able to authenticate with Google. Now this obviously leads to confusion. How can we solve that? Easy answer: we have to deal with authentication and authorization separately. Job done ;)

    For many application types I see this general approach:

    - The application uses ACS for authentication (maybe both enterprise and web identities; we focus on web identities here, but you could easily have a dual approach).
    - The application offers to authenticate (or sign in) via web identity accounts like LiveID, Google, Facebook etc.
    - The application also maintains a database of its "own" users, typically storing additional information about each user.

    In such an application type it is important to have a unique identifier for your users (think of the primary key of your user database). What would that be? Most web identity providers (and all the standard ACS v2 supported ones) emit a NameIdentifier claim. This is a stable ID for the client (scoped to the relying party - more on that later). Furthermore, ACS emits a claim identifying the identity provider (like the original issuer concept in WIF). When you combine these two values, you have a unique identifier for the user, e.g.:

    Facebook-134952459903700\799880347

    You can now check on incoming calls whether the user is already registered, and if yes, swap the ACS claims with claims coming from your user database. One claim might be a role like "Registered User", which can then easily be used to do authorization checks in the application. The WIF claims authentication manager is a perfect place to do the claims transformation.

    If the user is not registered, show a registration form. Maybe you can use some claims from the identity provider to pre-fill form fields (see here, where I show how to use the Facebook API to fetch additional user properties). After successful registration (which may include other mechanisms, like a confirmation email), flip the bit in your database to make the web identity a registered user. This is all very theoretical; in the next post I will show some code and provide a download link for the complete sample.

    More on NameIdentifier

    Identity providers "guarantee" that the name identifier for a given user in your application will always be the same, but different applications (in the case of ACS, different ACS namespaces) will see different name identifiers. This is by design, to protect the privacy of users, because identical name identifiers could be used to create "profiles" of some sort for that user. In technical terms, they create the name identifier approximately like this:

    name identifier = Hash((Provider Internal User ID) + (Relying Party Address))

    Why is this important to know? Well - when you change the name of your ACS namespace, the name identifiers will change as well, and you will lose the "connection" to your existing users. Oh, and btw - never use any other claims (like email address or name) to form a unique ID - these can often be changed by users.
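    To illustrate the registration check described above, here is a minimal, hypothetical sketch (in Python for brevity; in WIF this logic would live in a ClaimsAuthenticationManager subclass). The claim type URIs are the standard ones, but the user store, role name and function shape are assumptions for illustration only:

    NAME_ID = "http://schemas.xmlsoap.org/ws/2005/05/identity/claims/nameidentifier"
    IDP = "http://schemas.microsoft.com/accesscontrolservice/2010/07/claims/identityprovider"

    # Stand-in for the application's own user database.
    registered_users = {
        "Facebook-134952459903700\\799880347": ["Registered User"],
    }

    def transform(acs_claims):
        """Swap incoming ACS claims for the application's own claims."""
        # Combine identity provider + name identifier into a stable unique key.
        unique_id = "%s\\%s" % (acs_claims[IDP], acs_claims[NAME_ID])
        roles = registered_users.get(unique_id)
        if roles is None:
            return None  # not registered: show the registration form instead
        return {"uid": unique_id, "roles": roles}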

    Read the article

  • What Counts For a DBA: Fitness

    - by Louis Davidson
    If you know me, you can probably guess that physical exercise is not really my thing. There was a time in my past when it was a larger part of my life, but even then never in the same sort of passionate way as for a number of our SQL friends. For me, I find that mental exercise satisfies what I believe to be the same inner need that drives people to run farther than I like to drive on most Saturday mornings, and it is certainly just as addictive. Mental fitness shares many common traits with physical fitness, especially the need to attain it through repetitive training. I only wish that mental training burned off a bacon cheeseburger in the same manner as jogging around a dewy park on Saturday morning does.

    In physical training, there are at least two goals, the first of which is to be physically able to do a task. The second is to train the brain to perform the task without thinking too hard about it. No matter how long it has been since you last rode a bike, you will almost certainly be able to hop on and start riding without thinking about the process of pedaling or balancing. If you've never ridden a bike, you could be a physics professor/Olympic athlete and still crash the first few times you try, even though you are as strong as an ox and your knowledge of the physics of bicycle riding makes the concept child's play.

    For programming tasks, the process is very similar. As a DBA, you will come to know intuitively how to back up, optimize, and secure database systems. As a data programmer, you will work to instinctively use the clauses of Transact-SQL DML so that, when you need to group data three ways (and not four), you will know to use the GROUP BY clause with GROUPING SETS without resorting to a search engine. You have the skill; making it natural then requires repetition, and experience is the primary requirement, not just simply learning about a topic. The hardest part of being really good at something is this difference between knowledge and skill.

    I have recently taken several informative training classes with Kimball University on data warehousing and ETL. Now I have a lot more knowledge about designing data warehouses than before. I have also done a good bit of data warehouse designing of late and have started to reach some level of proficiency with the theory. Yet, for all of this head knowledge, it is still a struggle to take what I have learned and apply it to the designs I am working on. Data warehousing is still a task that is not yet deeply ingrained in my brain's muscle memory.

    On the other hand, relational database design is something that, no matter how much or how little I may get to do it, I am comfortable doing. I have done it as a profession now for well over a decade, I teach classes on it, and I also do (and continue to do) a lot of mental training beyond the work day. Sometimes the training is just basic education, some of it reading blogs and attending sessions at PASS events. My best training comes from spending time working on other people's design issues in forums (though not nearly as much as I would like to lately). Working through other people's problems is a great way to exercise your brain on problems with which you're not immediately familiar.

    The final bit of exercise I find useful for cultivating mental fitness for a data professional is also probably the nerdiest thing that I will ever suggest you do. Akin to running in place, the idea is to work through designs in your head. I have designed more than one database system that would revolutionize grocery store operations, sales at my local Target store, the ordering process at Amazon, and ways to improve Disney World operations to get me through a line faster (some of which they are starting to implement without any of my help). Never are the designs truly fleshed out, but enough to work through structures and processes. On "paper", I have designed database systems to catalog things as trivial as my Lego creations, rental car companies, and my audio and video collections. Once I get the database designed mentally, sometimes I will create the database, add some data (often using Red Gate's Data Generator), and write a few queries to see if a concept was realistic, but I will rarely fully flesh out the database since I have no desire to do any user interface programming anymore. The mental training allows me to keep in practice for when the time comes to do the work I love the most for real... even if I have been spending most of my work time lately building data warehouses.

    If you are really strong of mind and body, perhaps you can mix a mental run with a physical run; though don't run off a cliff while contemplating how you might design a database to catalog the trees on a mountain... that would be contradictory to the purpose of both types of exercise.

    Read the article

  • Cloud to On-Premise Connectivity Patterns

    - by Rajesh Raheja
    Do you have a requirement to convert an Opportunity in Salesforce.com to an Order/Quote in Oracle E-Business Suite? Or maybe you want the creation of an Oracle RightNow Incident to trigger an on-premise Oracle E-Business Suite Service Request creation for RMA and Field Scheduling? If so, read on. In a previous blog post, I discussed integrating TO cloud applications; the use cases above are the reverse, i.e. receiving data FROM cloud applications (SaaS) TO on-premise applications/databases that sit behind a firewall. Oracle SOA Suite is assumed to be on-premise, with Oracle Service Bus as the mediation and virtualization layer. The main considerations for the patterns are security, i.e. shielding enterprise resources, and scalability, i.e. minimizing firewall latency.

    Let me use an analogy to help visualize the patterns: the on-premise system is your home - with your most valuable possessions - and the SaaS app is your favorite on-line store, which regularly ships (inbound calls) various types of parcels/items (message types/service operations). You need the items at home (on-premise) but want to safeguard against misguided elements of society (internet threats) who may masquerade as postal workers and vandalize property (denial of service?). Let's look at the patterns.

    Pattern: Pull from Cloud

    The on-premise system polls the SaaS apps and picks up the message instead of having it delivered. This may be done using Oracle RightNow Object Query Language or SOAP APIs. This is particularly suited for integration approaches wherein messages are trickling in and can be centralized and batched, e.g. retrieving event notifications on an hourly schedule from the Oracle Messaging Service. To compare this pattern with the home analogy, you are avoiding any deliveries to your home and instead go to the post office/UPS/FedEx store to pick up your parcel. Every time.
    Pros: On-premise assets not exposed to the Internet; firewall issues avoided by only initiating outbound connections.
    Cons: Polling mechanisms may affect performance; may not satisfy near real-time requirements.

    Pattern: Open Firewall Ports

    The on-premise system exposes the web services that need to be invoked by the cloud application. This requires opening up firewall ports and routing calls to the appropriate internal services behind the firewall. Fusion Applications uses this pattern, and auto-provisions the services on the various virtual hosts to secure the topology. This works well for service integration, but may not suffice for large-volume data integration. Using the home analogy, you have now decided to receive parcels instead of going to the post office every time. A mail slot cut out of the door lets the postman drop off small parcels, but there is still concern about cutting new holes for larger packages.
    Pros: Optimal pattern for near real-time needs; simpler administration once the service is provisioned.
    Cons: Needs firewall ports to be opened up for new services; may not suffice for batch integration requiring direct database access.

    Pattern: Virtual Private Networking

    The on-premise network is "extended" to the cloud (or an intermediary on-demand / managed service offering) using Virtual Private Networking (VPN) so that messages are delivered to the on-premise system in a trusted channel. Using the home analogy, you entrust a set of keys to a neighbor or property manager who receives the packages and then drops them inside your home.
    Pros: Individual firewall ports don't need to be opened; more suited for high scalability needs; can support large-volume data integration; easier management of one connection vs. a multitude of open ports.
    Cons: VPN setup; specific hardware support; requires the cloud provider to support virtual private computing.

    Pattern: Reverse Proxy / API Gateway

    The on-premise system uses reverse proxy "API gateway" software on the DMZ to receive messages. The reverse proxy can be implemented using various mechanisms, e.g. Oracle API Gateway provides firewall and proxy services along with comprehensive security, auditing and throttling benefits. If a firewall already exists, then Oracle Service Bus or Oracle HTTP Server virtual hosts can provide reverse proxy implementations on the DMZ. Custom-built implementations are also possible if specific functionality (such as message store-and-forward) is needed. In the home analogy, this pattern sits in between cutting mail slots and handing over keys: you install (and maintain) a mailbox on your premises outside your door. The post office delivers the parcels into your mailbox, from where you can securely retrieve them.
    Pros: Very secure; very flexible.
    Cons: Introduces a new software component; needs DMZ deployment and management.

    Pattern: On-Premise Agent (Tunneling)

    A lightweight "agent" software sits behind the firewall and initiates the communication with the cloud, thereby avoiding firewall issues. It then maintains a bi-directional connection with either pull- or push-based approaches using (or abusing, depending on your viewpoint) the HTTP protocol. Protocols such as Comet, WebSockets, HTTP CONNECT, HTTP SSH tunneling etc. are possible implementation options. In the home analogy, a resident receives the parcel from the postal worker by opening the door; however, you still take precautions with chain locks and package inspections.
    Pros: Lightweight software; IT doesn't need to set up anything.
    Cons: May bypass critical firewall checks, e.g. virus scans; separate software download; proliferation of non-IT-managed software.

    Conclusion

    The patterns above are some of the most commonly encountered ones for cloud to on-premise integration. Selecting the right pattern for your project involves looking at your scalability needs, security restrictions, sync vs. asynchronous implementation, near real-time vs. batch expectations, cloud provider capabilities, budget, and more. In some cases the basic "Pull from Cloud" may be acceptable, whereas in others an extensive VPN topology may be well justified. For more details on the Oracle cloud integration strategy, download this white paper.
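    To make the "Pull from Cloud" pattern concrete, here is a minimal, hypothetical polling sketch in Python. The endpoint URL, credentials and JSON message shape are illustrative assumptions only, not a real RightNow or Oracle Messaging Service API:

    import time
    import requests  # any HTTP client would do

    # Hypothetical endpoint and credentials -- placeholders only.
    POLL_URL = "https://saas.example.com/api/events"
    AUTH = ("integration_user", "secret")
    POLL_INTERVAL_SECONDS = 3600  # e.g. retrieve notifications hourly

    def process(event):
        # Hand the event to the on-premise system (EBS, a database, the ESB...).
        print("forwarding event", event["id"])

    while True:
        # Outbound-only connection: nothing behind the firewall is exposed.
        response = requests.get(POLL_URL, auth=AUTH, timeout=30)
        response.raise_for_status()
        for event in response.json():
            process(event)
        time.sleep(POLL_INTERVAL_SECONDS)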

    Read the article

  • From the Classroom to the Boardroom

    - by Maria Sandu
    Pens and paper... these are the only things being a student and being a graduate/professional have in common. Walking into the offices of Oracle South Africa as a graduate, the first thing you notice is how polished and sleek all the people who work here look. 80% of the ladies wear sky-scraper heels and walk with the greatest grace. This was the first of many rude awakenings to remind me that I am no longer a student but a graduate.

    My first struggle was having to wake up in the wee hours of the morning to prepare for work. As a student, going to class was almost an optional thing: if you missed a morning class you could always attend an evening class to make up for it, or simply attend with another group. But in the workplace, you HAVE to show up every single morning at the same time, with no option of coming in when it suits you, and there is definitely no coming in with the evening class/shift. As a student, the earliest hour I ever woke up was 7:00am; anything earlier than that was considered inhumane torture. My reason for waking up every morning as a student was "you have a degree to go get", but as a graduate having to go to work I have to say to myself "here's to a new day of learning and growing".

    My second struggle came in having to change my beloved wardrobe. Everyone who knows me knows how passionate I am about fashion and shopping. For me, shopping is a BASIC HUMAN RIGHT that should not be messed with. Therefore it was with great sadness that I swopped my rippled skinny jeans for pin-striped formal pants, my long chandelier earrings for simple studs, my flat shoes for heels, my sheer blouses for crisp white shirts; even my beloved wild hair had to make way for a simple ponytail. Our looks as ladies also came under great scrutiny, and we had to acquaint ourselves with some serious grooming tools: mascara, blush, lip-gloss, a touch of lipstick and a manicure set.

    Language was a struggle of its own as well. As a student you learn to relate to your peers in an informal way; in the workplace you have to address everyone with the same respect, including your peers. Words like "Hey buddy" had to make way for "good morning friend".

    The month-long winter school holiday was one of the things I looked forward to as a student. This was a time when we got to be at home and avoid the coldest month of the year, July. It was the most amazing thing ever, just sleeping and snuggling up to all sorts of warm things, but sadly it is now a thing of the past. It is currently winter in South Africa, and going to work has become the most unfashionable thing with all the jackets, boots, scarves and gloves. But summer is coming, and I will miss those holidays too. As a student, the school holidays were like a gift for us to catch a break and not think for a while, which was why it was unimaginable how someone could go on for the entire year without a break, with only the promise of a mere 21 days' annual leave!! Right now I am sure we are all looking forward to taking that annual leave when the time is right.

    The worst rude awakening, I must say, has to be presenting in front of clients and managers. As a student you have the same classmates for almost four years, so presenting in front of them becomes the norm over the years, and your lecturer will always go gently on you. What they don't tell you at university is that in the real world, time is money, and clients pay money to see you present, so there is no room for error. Clients are not there to give you a score and boost your ego; they expect nothing less than 100% and they will let you know without a second thought. For a graduate this can feel like being fed to the sharks: you either get eaten or you swim for your life.

    At the end of the day, it is all an experience that is meant to groom us into better professionals and make us part of the Red Team. All the sacrifices are worth it and they lead us to being better and more polished professionals. So if you are interested in joining the ECEMEA Sales and Presales Internship Programme, please have a look at http://campus.oracle.com for more information and for our latest vacancies and internships.

    Read the article

  • Some Early Considerations

    - by Chris Massey
    Following on from my previous post, I want to say "thank you" to everyone who has got in touch and got involved - you are pioneers! An update on where we are right now: paper prototypes v1. To be more specific, we've picked two of the ideas that seem to have more pros than cons, turned them into Balsamiq mockups, and are getting them fleshed out with realistic content. We'll initially make these available to the aforementioned pioneers (thank you again), roll in the feedback, and then open up to get more data on what works and what doesn't. If you've got any questions about this (or what we're working on right now), feel free to ask me in the comments below. I've had a few people express an interest in the process we're going through, and I'm more than happy to share details more frequently as we go along - not least because you, dear reader, will help us stay on target and create something Good. To start with, here's a quick flashback to bring you all up to speed.

    A Brief Retrospective

    As you may already know, we're creating a new publishing asset specifically focused on providing great content for web developers. We don't yet know exactly what this thing will look like, or exactly how it will work, but we know we want to create something that is usefully different. For my part, I'm seriously excited at the prospect of building a genuinely digital publishing system (as opposed to what most publishing is these days, which is print-style publishing that just happens to be on the web). The main challenge at this point is working out our build-measure-assess loop to speed up our experimental turn-around, and that'll get better as we run more trials. Of course, there are a few things we've been pondering at this early conceptual stage:

    - Do we publish about heterogeneous technology stacks from day 1, or do we start with ASP.NET (which we're familiar with) and branch out later? There are challenges with either approach.
    - What publishing "modes" are already being well-handled? For example, the likes of Pluralsight, TekPub, and Treehouse have pretty much nailed video training (debate about price, if you like), and unless we think we can do it faster / better / cheaper (unlikely, for the record), we should leave them to it.
    - Where should we base whatever we create? Should we create a completely new asset under a new name, graft something onto Simple-Talk (like the labs), or just build something directly into Simple-Talk? It sounds trivial, but it does have at least some impact on infrastructure and on how we manage the different types of content we (will) have.
    - Are there any obvious problems or niches that we think we could address really well, or should we just throw ideas out and see what readers respond to?
    - What kind of users do we want to provide for? This actually deserves a little bit of unpacking...

    Why are you here? We currently divide readers into (broadly) these categories:

    - Category 1: I know nothing about X, and I'd like to learn about it.
    - Category 2: I know something about X, but I'd like to learn how to do something specific with it.
    - Category 3: Ah man, I have a problem with X, and I need to fix it now.

    Now that I think about it, I might also include a 4th class of reader:

    - Category 4: I'm looking for something interesting to engage my brain.

    These are clearly task-based categorizations, and depending on which task you're performing when you arrive here, you're going to need different types of content, or will have specific discovery needs. One of the questions that's at the back of my mind whenever I consider a new idea is "How many of the categories will this satisfy?" As an example, typical video training is very well suited to categories 1, 2, and 4. StackOverflow is very well suited to category 3, and serves as a sign-posting system to the rest. Clearly it's not necessary to satisfy every category need to be useful and popular, but being aware of what behavior readers might be exhibiting when they arrive will help us tune our ideas appropriately.

    < / Flashback >

    We don't have clean answers to most of these considerations - they're things we're aware of, and each idea we look at is going to be best suited to a different mix of the options I've described. Our first experimental loop will be coming full circle in the next few days, so we should start to see how the different possibilities vary between ideas. Feel free to chime in with questions and suggestions about anything I've just brain-dumped, or at any stage as we go along. If you see anything that intrigues or enrages you, or just have an idea you'd like to share, I'd love to hear from you.

    Read the article

  • Control HelpButton, HelpRequested, HelpButtonClicked - Instant help for windows Dialog Form components

    Instant help for Windows dialog components is a great feature, and one that has been around since Windows 98, but I have seen many, many people who are not aware of it and search Google for help instead. The "help button" on dialogs lets you or your customers get help instantly. Every dialog window has a help icon if that dialog was coded to enable it. It really helps you learn the functionality of the components quickly. For example, I was trying to print a document from Acrobat Reader and opened the printer properties to print the content on the front and back of the paper. If you observe, there is a help button before the close button. To get help on the options of "Print on Both Sides", you click the help button first and then click the area for which you want to see the help. The picture above shows the help text for the options of "Print on Both Sides". If you would like to get the help using the keyboard, you can press the F1 key. The help button can be displayed only if the minimize button and maximize button are both hidden, unless you go with custom buttons. Below is how to get a help button on a Windows Form.

    In this sample demo I want to have a checkbox, and I need to show help when I press F1 on the checkbox. So I created a form with a country checkbox and a help label, as shown in the adjacent picture. Below is the code for your code-behind file.

    using System;
    using System.Windows.Forms;

    namespace WindowsFormsApplication1
    {
        public partial class Form1 : Form
        {
            public Form1()
            {
                InitializeComponent();
            }

            private void Form1_Load(object sender, EventArgs e)
            {
                this.Text = "Help Button Demo Form";
                lblHelp.Text = "Press F1 on any component to get Instant Help";
                // The help button only appears when both the minimize and
                // maximize boxes are disabled.
                this.HelpButton = true;
                this.MaximizeBox = false;
                this.MinimizeBox = false;
                // Keep each control's help text in its Tag property.
                chkCountry.Tag = "Check or Uncheck Country Check Box";
                chkCountry.HelpRequested += new HelpEventHandler(chkCountry_HelpRequested);
                chkCountry.MouseLeave += new EventHandler(chkCountry_MouseLeave);
            }

            void chkCountry_HelpRequested(object sender, HelpEventArgs hlpevent)
            {
                Control requestingControl = (Control)sender;
                lblHelp.Text = (string)requestingControl.Tag;
                hlpevent.Handled = true;
            }

            void chkCountry_MouseLeave(object sender, EventArgs e)
            {
                lblHelp.Text = "Press F1 on any component to get Instant Help";
            }
        }
    }

    In the above code, "HelpRequested" is the event that fires when you press F1 while the Country checkbox has focus. I stored the help information in the checkbox property called "Tag". You might also maintain a property file to keep the help text for each component separately. If you press F1 when focus is on the main form instead of on an individual component, a separate help window generally opens. This can be done using the "Form.HelpRequested" event to open a help window, as in the code below.

    this.HelpRequested += new HelpEventHandler(Form1_HelpRequested);

    void Form1_HelpRequested(object sender, HelpEventArgs hlpevent)
    {
        frmHelp.Show();
    }

    Read the article

  • getting autodiscover URL from Exchange email address

    - by Anthony
    I'm starting with an address for an Exchange 2007 server: [email protected]

    I attempted to send an autodiscover request, as documented at MSDN, using the generic autodiscover address documented in the TechNet white paper. So, using curl in PHP, I sent the following request:

    <Autodiscover xmlns="http://schemas.microsoft.com/exchange/autodiscover/outlook/requestschema/2006">
      <Request>
        <EMailAddress>[email protected]</EMailAddress>
        <AcceptableResponseSchema>
          http://schemas.microsoft.com/exchange/autodiscover/outlook/responseschema/2006a
        </AcceptableResponseSchema>
      </Request>
    </Autodiscover>

    to the following URL:

    https://domain.exchangeserver.org/autodiscover/autodiscover.xml

    But I got no response, just an eventual timeout. I also tried:

    https://autodiscover.domain.exchangeserver.org/autodiscover/autodiscover.xml

    with the same result. Now, since my larger goal is to use Autodiscover with Exchange Web Services, and since all of the EWS URLs start with the same sub-domain as the Outlook Web Access address, I thought I'd give that a try:

    OWA: https://wmail.domain.exchangeserver.org

    So I tried:

    https://wmail.domain.exchangeserver.org/autodiscover/autodiscover.xml

    And sure enough, I got back the expected response. However, I only knew the OWA sub-domain because it's the server I have access to and that I'm using to test everything. I would not know it for sure, or be able to guess it, if this were a live app and the user were entering their own Exchange email. I know that whatever generic autodiscover settings must be turned on, because I can enter [email protected] into Apple Mail on Snow Leopard and it finds everything without trouble.

    So the question is... Should https://domain.exchangeserver.org/autodiscover/autodiscover.xml have worked, and I just missed a step when trying to connect to it? Or is there some trick (maybe involving pinging the email address?) that Apple Mail and other clients use to resolve the address to the OWA subdomain before sending the autodiscover request? Thanks to anyone who knows or can take a wild guess.
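    For reference, a minimal sketch of the two candidate requests described above (Python with the requests library; the email address is a placeholder, and note that a real Exchange server will typically also demand credentials on this endpoint):

    import requests

    EMAIL = "user@domain.exchangeserver.org"  # placeholder address
    DOMAIN = EMAIL.split("@", 1)[1]

    BODY = """<Autodiscover xmlns="http://schemas.microsoft.com/exchange/autodiscover/outlook/requestschema/2006">
      <Request>
        <EMailAddress>%s</EMailAddress>
        <AcceptableResponseSchema>http://schemas.microsoft.com/exchange/autodiscover/outlook/responseschema/2006a</AcceptableResponseSchema>
      </Request>
    </Autodiscover>""" % EMAIL

    # The two well-known candidate endpoints tried in the question.
    for url in ["https://%s/autodiscover/autodiscover.xml" % DOMAIN,
                "https://autodiscover.%s/autodiscover/autodiscover.xml" % DOMAIN]:
        try:
            r = requests.post(url, data=BODY,
                              headers={"Content-Type": "text/xml"},
                              timeout=10)
            print(url, "->", r.status_code)
            if r.ok:
                break
        except requests.RequestException as exc:
            print(url, "failed:", exc)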

    Read the article

  • What is in your Mathematica tool bag?

    - by Timo
    We all know that Mathematica is great, but it also often lacks critical functionality. What kind of external packages / tools / resources do you use with Mathematica? I'll edit this main post (and invite anyone else to do so too) to include resources which are focused on general applicability in scientific research and which as many people as possible will find useful. Feel free to contribute anything, even small code snippets (as I did below for a timing routine). Also, undocumented and useful features in Mathematica 7 and beyond that you found yourself, or dug up from some paper/site, are most welcome. Please include a short description or comment on why something is great or what utility it provides. If you link to books on Amazon with affiliate links, please mention it, e.g. by putting your name after the link.

    Packages:
    - LevelScheme is a package that greatly expands Mathematica's capability to produce good-looking plots. I use it, if not for anything else, for the much, much improved control over frame/axes ticks.
    - David Park's Presentation Package ($50 - no charge for updates)

    Tools:
    - MASH is Daniel Reeves's excellent Perl script essentially providing scripting support for Mathematica 7. (This is finally built in as of Mathematica 8 with the -script option.)

    Resources:
    - Wolfram's own repository MathSource has a lot of useful, if narrow, notebooks for various applications. Also check out the other sections, such as Current Documentation, Courseware for lectures, and Demos for, well, demos.

    Books:
    - Mathematica programming: an advanced introduction by Leonid Shifrin (web, pdf) is a must read if you want to do anything more than For loops in Mathematica.
    - Quantum Methods with Mathematica by James F. Feagin (amazon)
    - The Mathematica Book by Stephen Wolfram (amazon) (web)
    - Schaum's Outline (amazon)
    - Mathematica in Action by Stan Wagon (amazon) - 600 pages of neat examples, going up to Mathematica version 7. The visualization techniques are especially good; you can see some of them on the author's Demonstrations Page.
    - Mathematica Programming Fundamentals by Richard Gaylord (pdf) - a good, concise introduction to most of what you need to know about Mathematica programming.

    Undocumented (or scarcely documented) Features:
    - How to customize Mathematica keyboard shortcuts. See this question.
    - How to inspect patterns and functions used by Mathematica's own functions. See this answer.
    - How to achieve consistent size for GraphPlots in Mathematica? See this question.

    Read the article

  • Is there an algorithm for converting quaternion rotations to Euler angle rotations?

    - by Will Baker
    Is there an existing algorithm for converting a quaternion representation of a rotation to an Euler angle representation? The rotation order for the Euler representation is known and can be any of the six permutations (i.e. xyz, xzy, yxz, yzx, zxy, zyx). I've seen algorithms for a fixed rotation order (usually the NASA heading, bank, roll convention) but not for arbitrary rotation order. Furthermore, because there are multiple Euler angle representations of a single orientation, this result is going to be ambiguous. This is acceptable (because the orientation is still valid, it just may not be the one the user is expecting to see); however, it would be even better if there was an algorithm which took rotation limits (i.e. the number of degrees of freedom and the limits on each degree of freedom) into account and yielded the 'most sensible' Euler representation given those constraints. I have a feeling this problem (or something similar) may exist in the IK or rigid body dynamics domains.

    Solved: I just realised that it might not be clear that I solved this problem by following Ken Shoemake's algorithms from Graphics Gems. I did answer my own question at the time, but it occurs to me it may not be clear that I did so. See the answer, below, for more detail.

    Just to clarify - I know how to convert from a quaternion to the so-called 'Tait-Bryan' representation - what I was calling the 'NASA' convention. This is a rotation order (assuming the convention that the 'Z' axis is up) of zxy. I need an algorithm for all rotation orders. Possibly the solution, then, is to take the zxy order conversion and derive from it five other conversions for the other rotation orders. I guess I was hoping there was a more 'overarching' solution. In any case, I am surprised that I haven't been able to find existing solutions out there.

    In addition, and this perhaps should be a separate question altogether, any conversion (assuming a known rotation order, of course) is going to select one Euler representation, but there are in fact many. For example, given a rotation order of yxz, the two representations (0,0,180) and (180,180,0) are equivalent (and would yield the same quaternion). Is there a way to constrain the solution using limits on the degrees of freedom, like you do in IK and rigid body dynamics? I.e. in the example above, if there were only one degree of freedom about the Z axis, then the second representation could be disregarded.

    I have tracked down one paper which could be an algorithm in this pdf, but I must confess I find the logic and math a little hard to follow. Surely there are other solutions out there? Is arbitrary rotation order really so rare? Surely every major 3D package that allows skeletal animation together with quaternion interpolation (i.e. Maya, Max, Blender, etc.) must have solved exactly this problem?
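    Update for later readers: SciPy now ships exactly this conversion for all six rotation orders. A minimal sketch, assuming SciPy's scalar-last (x, y, z, w) quaternion convention:

    from scipy.spatial.transform import Rotation

    # 90 degrees about Z, in scalar-last (x, y, z, w) order.
    q = [0.0, 0.0, 0.7071068, 0.7071068]

    r = Rotation.from_quat(q)
    # Lowercase seq means extrinsic (fixed-axis) rotations; uppercase
    # would mean intrinsic rotations about the body axes.
    for order in ("xyz", "xzy", "yxz", "yzx", "zxy", "zyx"):
        print(order, r.as_euler(order, degrees=True))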

    Read the article

  • Diophantine Equation [closed]

    - by ANIL
    In mathematics, a Diophantine equation (named for Diophantus of Alexandria, a third-century Greek mathematician) is a polynomial equation where the variables can only take on integer values. Although you may not realize it, you have seen Diophantine equations before: one of the most famous Diophantine equations is X^n + Y^n = Z^n.

    We are not certain that McDonald's knows about Diophantine equations (actually we doubt that they do), but they use them! McDonald's sells Chicken McNuggets in packages of 6, 9 or 20 McNuggets. Thus it is possible, for example, to buy exactly 15 McNuggets (with one package of 6 and a second package of 9), but it is not possible to buy exactly 16 nuggets, since no non-negative integer combination of 6's, 9's and 20's adds up to 16. To determine if it is possible to buy exactly n McNuggets, one has to solve a Diophantine equation: find non-negative integer values of a, b, and c, such that 6a + 9b + 20c = n.

    Problem 1

    Show that it is possible to buy exactly 50, 51, 52, 53, 54, and 55 McNuggets, by finding solutions to the Diophantine equation. You can solve this in your head, using paper and pencil, or by writing a program. However you choose to solve this problem, list the combinations of 6, 9 and 20 packs of McNuggets you need to buy in order to get each of the exact amounts. Given that it is possible to buy sets of 50, 51, 52, 53, 54 or 55 McNuggets by combinations of 6, 9 and 20 packs, show that it is possible to buy 56, 57, ..., 65 McNuggets. In other words, show how, given solutions for 50-55, one can derive solutions for 56-65.

    Problem 2

    Write an iterative program that finds the largest number of McNuggets that cannot be bought in exact quantity. Your program should print the answer in the following format (where the correct number is provided in place of n): "Largest number of McNuggets that cannot be bought in exact quantity: n"

    Hints:
    1. Hypothesize possible instances of numbers of McNuggets that cannot be purchased exactly, starting with 1.
    2. For each possible instance, called n:
       a. Test if there exist non-negative integers a, b, and c, such that 6a + 9b + 20c = n. (This can be done by looking at all feasible combinations of a, b, and c.)
       b. If not, n cannot be bought in exact quantity; save n.
    3. When you have found six consecutive values of n that do pass the test of having an exact solution, the last answer that was saved (not the last value of n that had a solution) is the correct answer, since you know by the theorem that any larger amount can also be bought in exact quantity.
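    A straightforward iterative solution to Problem 2 might look like the following Python sketch (the pack sizes and output format are from the problem statement; everything else is one obvious implementation choice):

    def buyable(n):
        """True if exactly n McNuggets can be bought with 6, 9 and 20 packs."""
        for a in range(n // 6 + 1):
            for b in range(n // 9 + 1):
                for c in range(n // 20 + 1):
                    if 6 * a + 9 * b + 20 * c == n:
                        return True
        return False

    largest = 0
    consecutive = 0
    n = 0
    while consecutive < 6:  # six in a row means everything above is buyable
        n += 1
        if buyable(n):
            consecutive += 1
        else:
            consecutive = 0
            largest = n

    print("Largest number of McNuggets that cannot be bought "
          "in exact quantity: %d" % largest)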

    Read the article

  • Utility that helps in file locking - expert tips wanted

    - by maix
    I've written a subclass of file that a) provides methods to conveniently lock it (using fcntl, so it only supports Unix, which is however OK for me atm) and b) when reading or writing asserts that the file is appropriately locked. Now I'm not an expert at such stuff (I've just read one paper [de] about it) and would appreciate some feedback: Is it secure, are there race conditions, are there other things that could be done better? Here is the code:

        from fcntl import flock, LOCK_EX, LOCK_SH, LOCK_UN, LOCK_NB

        class LockedFile(file):
            """
            A wrapper around `file` providing locking. Requires a shared lock
            to read and an exclusive lock to write.

            Main differences:

            * Additional methods: lock_ex, lock_sh, unlock
            * Refuse to read when not locked, refuse to write when not
              locked exclusively.
            * mode cannot be `w` since then the file would be truncated
              before it could be locked.

            You have to lock the file yourself, it won't be done for you
            implicitly. Only you know what lock you need.

            Example usage::

                def get_config():
                    f = LockedFile(CONFIG_FILENAME, 'r')
                    f.lock_sh()
                    config = parse_ini(f.read())
                    f.close()

                def set_config(key, value):
                    f = LockedFile(CONFIG_FILENAME, 'r+')
                    f.lock_ex()
                    config = parse_ini(f.read())
                    config[key] = value
                    f.truncate()
                    f.write(make_ini(config))
                    f.close()
            """

            def __init__(self, name, mode='r', *args, **kwargs):
                if 'w' in mode:
                    raise ValueError('Cannot open file in `w` mode')
                super(LockedFile, self).__init__(name, mode, *args, **kwargs)
                self.locked = None

            def lock_sh(self, **kwargs):
                """
                Acquire a shared lock on the file. If the file is already
                locked exclusively, do nothing.

                :returns: Lock status from before the call (one of 'sh', 'ex', None).
                :param nonblocking: Don't wait for the lock to be available.
                """
                if self.locked == 'ex':
                    return  # would implicitly remove the exclusive lock
                return self._lock(LOCK_SH, **kwargs)

            def lock_ex(self, **kwargs):
                """
                Acquire an exclusive lock on the file.

                :returns: Lock status from before the call (one of 'sh', 'ex', None).
                :param nonblocking: Don't wait for the lock to be available.
                """
                return self._lock(LOCK_EX, **kwargs)

            def unlock(self):
                """
                Release all locks on the file. Flushes if there was an
                exclusive lock.

                :returns: Lock status from before the call (one of 'sh', 'ex', None).
                """
                if self.locked == 'ex':
                    self.flush()
                return self._lock(LOCK_UN)

            def _lock(self, mode, nonblocking=False):
                flock(self, mode | bool(nonblocking) * LOCK_NB)
                before = self.locked
                self.locked = {LOCK_SH: 'sh', LOCK_EX: 'ex', LOCK_UN: None}[mode]
                return before

            def _assert_read_lock(self):
                assert self.locked, "File is not locked"

            def _assert_write_lock(self):
                assert self.locked == 'ex', "File is not locked exclusively"

            def read(self, *args):
                self._assert_read_lock()
                return super(LockedFile, self).read(*args)

            def readline(self, *args):
                self._assert_read_lock()
                return super(LockedFile, self).readline(*args)

            def readlines(self, *args):
                self._assert_read_lock()
                return super(LockedFile, self).readlines(*args)

            def xreadlines(self, *args):
                self._assert_read_lock()
                return super(LockedFile, self).xreadlines(*args)

            def __iter__(self):
                self._assert_read_lock()
                return super(LockedFile, self).__iter__()

            def next(self):
                self._assert_read_lock()
                return super(LockedFile, self).next()

            def write(self, *args):
                self._assert_write_lock()
                return super(LockedFile, self).write(*args)

            def writelines(self, *args):
                self._assert_write_lock()
                return super(LockedFile, self).writelines(*args)

            def flush(self):
                self._assert_write_lock()
                return super(LockedFile, self).flush()

            def truncate(self, *args):
                self._assert_write_lock()
                return super(LockedFile, self).truncate(*args)

            def close(self):
                self.unlock()
                return super(LockedFile, self).close()

    (The example in the docstring is also my current use case for this.) Thanks for having read until down here, and possibly even answering :)

    Read the article

  • VBA/SQL recordsets

    - by intruesiive
    The project I'm asking about is for sending an email to teachers asking what books they're using for the classes they're teaching next semester, so that the books can be ordered. I have a query that compares the course number of this upcoming semester's classes to the course numbers of historical textbook orders, pulling out only those classes that are being taught this semester. That's where I get lost. I have a table that contains the following:

    - Professor
    - Course Number
    - Year
    - Book
    - Title

    The data looks like this:

        professor  year  course number  title
        smith      13    1111           Pride and Prejudice
        smith      13    1111           The Fountainhead
        smith      13    1222           The Alchemist
        smith      12    1111           Pride and Prejudice
        smith      11    1222           Infinite Jest
        smith      10    1333           The Bible
        smith      13    1333           The Bible
        smith      12    1222           The Alchemist
        smith      10    1111           Moby Dick
        johnson    12    1222           The Tipping Point
        johnson    11    1333           Anna Kerenina
        johnson    10    1333           Everything is Illuminated
        johnson    12    1222           The Savage Detectives
        johnson    11    1333           In Search of Lost Time
        johnson    10    1333           Great Expectations
        johnson    9     1222           Proust on the Shore

    Here's what I need the code to do "on paper": Group the records by professor. Determine every unique course number in that group, and group records by course number. For each unique course number, determine the highest year associated. Then spit out every record with that professor + course number + year combination. With the sample data, the results would be:

        professor  year  course number  title
        smith      13    1111           Pride and Prejudice
        smith      13    1111           The Fountainhead
        smith      13    1222           The Alchemist
        smith      13    1333           The Bible
        johnson    12    1222           The Tipping Point
        johnson    11    1333           Anna Kerenina
        johnson    12    1222           The Savage Detectives
        johnson    11    1333           In Search of Lost Time

    I'm thinking I should make a record set for each teacher, and within that, another record set for each course number. Within the course number record set, I need the system to determine what the highest year number is - maybe store that in a variable? Then pull out every associated record so that if the teacher ordered 3 books the last time they taught that class (whether it was in 2013 or 2012 and so on) all three books display. I'm not sure I'm thinking of record sets in the right way, though. My SQL so far is basic and clearly doesn't work:

        SELECT [All].Professor, [All].Course, Max([All].Year)
        FROM [All]
        GROUP BY [All].Professor, [All].Course;

    Thanks for your help.
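    The usual shape for this is a greatest-n-per-group self-join rather than nested recordsets: a subquery finds the latest year per (professor, course), and the join keeps every book row from that offering. A runnable sketch using Python's sqlite3 as a stand-in for the Access backend (the table and column names follow the question; everything else is illustrative):

        # Sketch: greatest-year-per-(professor, course) via a self-join,
        # demonstrated against sqlite3. The inner query finds the latest
        # year each professor taught each course; the outer join keeps
        # every book row from that offering.
        import sqlite3

        conn = sqlite3.connect(":memory:")
        conn.execute("CREATE TABLE [All] (Professor TEXT, Year INT, Course INT, Title TEXT)")
        conn.executemany("INSERT INTO [All] VALUES (?, ?, ?, ?)", [
            ("smith", 13, 1111, "Pride and Prejudice"),
            ("smith", 13, 1111, "The Fountainhead"),
            ("smith", 12, 1111, "Pride and Prejudice"),
            ("smith", 13, 1222, "The Alchemist"),
            ("smith", 11, 1222, "Infinite Jest"),
        ])

        rows = conn.execute("""
            SELECT a.Professor, a.Year, a.Course, a.Title
            FROM [All] AS a
            INNER JOIN (SELECT Professor, Course, MAX(Year) AS MaxYear
                        FROM [All]
                        GROUP BY Professor, Course) AS m
                ON a.Professor = m.Professor
               AND a.Course = m.Course
               AND a.Year = m.MaxYear
        """).fetchall()
        for row in rows:
            print(row)

    The same SELECT, minus the bracket quoting differences, should run as an Access query without any VBA recordset loops.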

    Read the article

  • Is One Tool or a Suite of Tools Better for Scrum?

    - by Rob Wells
    G'day,

    Edit: We've been using Scrum very successfully for several years on several projects of varying sizes. In fact, our team developed the successful iPlayer project for the BBC using a classical Scrum approach. After using various combinations of tools, some high-tech, some low-tech, across these projects, we now wish to try adopting a suitable tool suite. Our manager is to some extent attempting to force the adoption of a single suite of tools for Scrum. I've looked at the SO question "Best Scrum tools" and most people seem to recommend either: a suite of low-tech solutions, e.g. whiteboards, post-its, index cards, etc., or a monolithic tool that tries to satisfy as much as possible of the process, e.g. Agilo, Mingle, ScrumWorks, Target Process, etc.

    Our team is currently evaluating several different Scrum tools. However, we are looking at selecting a single, monolithic tool, e.g. Agilo. All of the "one-stop" solutions have their strengths and weaknesses, with the serious enterprise-type solutions being the best sort of fit, but all have some shortcomings. After reading the paper "Peer Code Review: An Agile Process" over at SmartBear, I started wondering if we were trying to force adoption of a tool on a "best fit" basis.

    I think you can take a couple of reference artefacts of the Scrum development process, say user stories, epics and themes, and the code base, which must use a well-known SCM, e.g. SVN, Hg, etc. If we take those as the common reference points for the tools employed, then we would be able to use a group of tools to handle the different aspects of the Scrum process, rather than trying to force the fit of a single tool, which is a bit like forcing a square peg into a round hole. In this way, provided you've agreed your common reference points, you can use several tools, each performing its role better than could be done by a single component in a monolithic tool suite.

    Is this a more sensible approach? Are the two reference points I mentioned above suitable, or is there a better choice of points where the tools would meet?

    cheers,

    Read the article

  • Do you still limit line length in code?

    - by Noldorin
    This is a matter on which I would like to gauge the opinion of the community: Do you still limit the length of lines of code to a fixed maximum? This was certainly a convention of the past for many languages; one would typically cap the number of characters per line to a value such as 80 (and more recently 100 or 120, I believe). As far as I understand, the primary reasons for limiting line length are:

    Readability - You don't have to scroll horizontally when you want to see the end of some lines.

    Printing - Admittedly (at least in my experience), most code that you are working on does not get printed out on paper, but by limiting the number of characters you can ensure that formatting doesn't get messed up when printed.

    Past editors (?) - Not sure about this one, but I suspect that at some point in the distant past of programming, (at least some) text editors may have been based on a fixed-width buffer.

    I'm sure there are points that I am still missing, so feel free to add to these... Now, when I observe C or C# code nowadays, I often see a number of different styles, the main ones being:

    Line length capped to 80, 100, or even 120 characters. As far as I understand, 80 is the traditional length, but the longer limits of 100 and 120 have appeared because of the widespread use of high resolutions and widescreen monitors nowadays.

    No line length capping at all. This tends to be pretty horrible to read, and I don't see it too often, though it's certainly not too rare either.

    Inconsistent capping of line length. Some lines are limited to a fixed maximum (or even a maximum that changes depending on the file/location in code), while others (possibly comments) are not limited at all.

    My personal preference here (at least recently) has been to cap the line length to 100 in the Visual Studio editor. This means that in a decently sized window (on a non-widescreen monitor), the ends of lines are still fully visible. I can however see a few disadvantages in this, especially when you end up writing code that's indented 3 or 4 levels and then have to include a long string literal - though I often take this as a sign to refactor my code! In particular, I am curious what the C and C# coders (or anyone who uses Visual Studio for that matter) think about this point, though I would be interested in hearing anyone's thoughts on the subject.

    Edit: Thanks for all the answers - I appreciate the variety of opinions here, all presenting sound reasons. Consensus does seem to be tipping in the direction of always (or almost always) limiting the line length. Interestingly, various coding standards seem to limit the line length. Judging by some of the answers, both the Python and Google C++ guidelines set the limit at 80 chars. I haven't seen anything similar regarding C# or VB.NET, but I would be curious to see if there are ones anywhere.

    Read the article

  • Fuzzy Regex, Text Processing, Lexical Analysis?

    - by justinzane
    I'm not quite sure what terminology to search for, so my title is funky... Here is the workflow I've got: Semi-structured documents are scanned to file. The files are OCR'd to text. The text is parsed into Python objects. The objects are serialized (to SQL, JSON, whatever) for use. The documents are structured like this:

        HEADER blah blah, Page ### blah
        Garbage text...
        1. Question Text... continued until now.
        A. Choice text... adsadsf.
        B. Another Choice...
        2. Another Question...

    I need to extract the questions and choices. The problem is that, because the text is OCR output, there are occasional strange substitutions like '2' -> 'Z', which makes ordinary regular expressions useless. I've tried the Levenshtein module and it helps, but it requires prior knowledge of what edit distance is to be expected. I don't know whether I'm looking to create a parser? A lexer? Something else? This has led me down all kinds of interesting but irrelevant paths. Guidance would be greatly appreciated.

    Oh, also, the text is generally from specific technical domains, so general spelling tools are not so helpful. Regarding the structure of the documents, there is no clear visual pattern - like line breaks or indentation - with the exception of the fact that "questions" usually begin a line. Crap on the document can cause characters to appear before the actual beginning of the line, which means that something along the lines of r'^[0-9]+' does not reliably work. Though the "questions" always begin with an int, a period and a space, the OCR can substitute other characters or skip characters. This is not so much a problem with Tesseract or CuneiForm, rather with the poor quality of the paper documents.

    # Note: for the project in question, it was decided that having a human prep the OCR'd text was better than spending the time coding a solution. I'd still love good pointers, however.
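    One tolerant approach, sketched below under stated assumptions (the confusion table, the amount of leading junk tolerated, and the patterns are all invented for illustration, not tuned values): normalize the usual OCR digit/letter confusions first, allow a little leading crud, and only then apply an ordinary regex.

        # Sketch: detect OCR'd "question" lines despite character substitutions.
        import re

        # Common letter-for-digit confusions, used only for matching;
        # the original text is returned untouched.
        CONFUSIONS = str.maketrans({"O": "0", "o": "0", "l": "1", "I": "1",
                                    "Z": "2", "S": "5", "B": "8"})

        # Up to 3 junk characters, then 1-2 digits, a dot (or comma misread
        # as a dot), then whitespace and the question text.
        QUESTION_RE = re.compile(r"^.{0,3}?(\d{1,2})\s*[.,]\s+(.*)")

        def match_question(line):
            normalized = line.translate(CONFUSIONS)  # same length as `line`
            m = QUESTION_RE.match(normalized)
            if not m:
                return None
            # Indices align because the translation is one-to-one.
            return int(m.group(1)), line[m.start(2):]

        print(match_question("Z. Another Question..."))  # -> (2, 'Another Question...')
        print(match_question("~1. Question Text..."))    # -> (1, 'Question Text...')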

    Read the article

  • Reading Source Code Aloud

    - by Jon Purdy
    After seeing this question, I got to thinking about the various challenges that blind programmers face, and how some of them are applicable even to sighted programmers. Particularly, the problem of reading source code aloud gives me pause. I have been programming for most of my life, and I frequently tutor fellow students in programming, most often in C++ or Java. It is uniquely aggravating to try to verbally convey the essential syntax of a C++ expression. The speaker must give either an idiomatic translation into English, or a full specification of the code in verbal longhand, using explicit yet slow terms such as "opening parenthesis", "bitwise and", et cetera. Neither of these solutions is optimal.

    On the one hand, an idiomatic translation is only useful to a programmer who can de-translate back into the relevant programming code - which is not usually the case when tutoring a student. In turn, education (or simply getting someone up to speed on a project) is the most common situation in which source is read aloud, and there is a very small margin for error. On the other hand, a literal specification is aggravatingly slow. It takes far, far longer to say "pound, include, left angle bracket, iostream, right angle bracket, newline" than it does to simply type #include <iostream>. Indeed, most experienced C++ programmers would read this merely as "include iostream", but again, inexperienced programmers abound and literal specifications are sometimes necessary.

    So I've had an idea for a potential solution to this problem. In C++, there is a finite set of keywords - 63 - and operators - 54, discounting named operators and treating compound assignment operators and prefix versus postfix auto-increment and decrement as distinct. There are just a few types of literal, a similar number of grouping symbols, and the semicolon. Unless I'm utterly mistaken, that's about it. So would it not then be feasible to simply ascribe a concise, unique pronunciation to each of these distinct concepts (including one for whitespace, where it is required) and go from there? Programming languages are far more regular than natural languages, so the pronunciation could be standardised. Speakers of any language would be able to verbally convey C++ code, and due to the regularity and fixity of the language, speech-to-text software could be optimised to accept C++ speech with a high degree of accuracy.

    So my question is twofold: first, is my solution feasible; and second, does anyone else have other potential solutions? I intend to take suggestions from here and use them to produce a formal paper with an example implementation of my solution.
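    To make the idea concrete, a toy sketch (every pronunciation below is invented purely for illustration - the point is only that the token-to-word table is small and fixed):

        # Toy sketch of the proposal: a fixed spoken form per C++ token,
        # applied by a deliberately naive tokenizer. All pronunciations
        # here are invented for illustration.
        import re

        SPOKEN = {
            "#include": "include",
            "<": "angle", ">": "close angle",
            "(": "paren", ")": "close paren",
            "{": "begin", "}": "end",
            "&&": "and", "&": "bitwise and",
            ";": "stop",
        }

        TOKEN_RE = re.compile(r"#\w+|\w+|&&|\|\||[^\s\w]")

        def speak(code):
            tokens = TOKEN_RE.findall(code)
            return " ".join(SPOKEN.get(t, t) for t in tokens)

        print(speak("#include <iostream>"))
        # -> "include angle iostream close angle"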

    Read the article

  • Why do I get rows of zeros in my 2D fft?

    - by Nicholas Pringle
    I am trying to replicate the results from a paper. "Two-dimensional Fourier Transform (2D-FT) in space and time along sections of constant latitude (east-west) and longitude (north-south) were used to characterize the spectrum of the simulated flux variability south of 40degS." - Lenton et al (2006). The figures published show "the log of the variance of the 2D-FT". I have tried to create an array consisting of the seasonal cycle of similar data as well as the noise. I have defined the noise as the original array minus the signal array. Here is the code that I used to plot the 2D-FT of the signal array averaged in latitude:

        import numpy as np
        from numpy import ma
        from matplotlib import pyplot as plt
        from Scientific.IO.NetCDF import NetCDFFile

        ### input directory
        indir = '/home/nicholas/data/'

        ### get the flux data, which is in
        ### [time (5-day averages for 10 years), latitude, longitude]
        nc = NetCDFFile(indir + 'CFLX_2000_2009.nc', 'r')
        cflux_southern_ocean = nc.variables['Cflx'][:, 10:50, :]
        cflux_southern_ocean = ma.masked_values(cflux_southern_ocean, 1e+20)  # mask land
        nc.close()
        cflux = cflux_southern_ocean * 1e08  # change units of data from mmol/m^2/s

        ### create an array that consists of the seasonal signal for each pixel
        year_stack = np.split(cflux, 10, axis=0)
        year_stack = np.array(year_stack)
        signal_array = np.tile(np.mean(year_stack, axis=0), (10, 1, 1))
        signal_array = ma.masked_where(signal_array > 1e20, signal_array)  # need to mask

        ### average the array over latitude (or longitude)
        signal_time_lon = ma.mean(signal_array, axis=1)

        ### do a 2D Fourier Transform of the time/space image
        ft = np.fft.fft2(signal_time_lon)
        mgft = np.abs(ft)
        ps = mgft**2
        log_ps = np.log(mgft)
        log_mgft = np.log(mgft)

    Every second row of the ft consists completely of zeros. Why is this? Would it be acceptable to add a small random number to the signal to avoid this?

        signal_time_lon = signal_time_lon + np.random.randint(0, 9, size=(730, 182)) * 1e-05

    EDIT: Adding images and clarifying meaning. The output of rfft2 still appears to be a complex array. Using fftshift shifts the edges of the image to the centre; I still have a power spectrum regardless. I expect that the reason I get rows of zeros is that I have re-created the time series for each pixel. The ft[0, 0] pixel contains the mean of the signal, so ft[1, 0] corresponds to a sinusoid with one cycle over the entire signal in the rows of the starting image. Here is the starting image, plotted with the following code:

        plt.pcolormesh(signal_time_lon); plt.colorbar(); plt.axis('tight')

    Here is the result of the following code:

        ft = np.fft.rfft2(signal_time_lon)
        mgft = np.abs(ft)
        ps = mgft**2
        log_ps = np.log1p(mgft)
        plt.pcolormesh(log_ps); plt.colorbar(); plt.axis('tight')

    It may not be clear in the image, but it is only every second row that contains completely zeros. Every tenth pixel (log_ps[10, 0]) is a high value. The other pixels (log_ps[2, 0], log_ps[4, 0], etc.) have very low values.
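    Incidentally, the exact zeros are what the tiling predicts: the DFT of a signal repeated r times is zero except at frequency indices that are multiples of r, so with ten tiled years every row index that is not a multiple of 10 (which includes every second row) is exactly zero. A self-contained sketch of that property (synthetic data, not the question's):

        # Sketch: the DFT of a signal tiled r times is zero except at bins
        # whose index is a multiple of r (where it is r times the original bin).
        import numpy as np

        rng = np.random.default_rng(0)
        base = rng.standard_normal(73)    # one "year" of 5-day samples
        tiled = np.tile(base, 10)         # ten identical years, length 730

        spectrum = np.abs(np.fft.fft(tiled))
        nonzero = np.nonzero(spectrum > 1e-9)[0]
        print(np.all(nonzero % 10 == 0))  # True: only every 10th bin survives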

    Read the article

  • Simplifying a four-dimensional rule table in Matlab: addressing rows and columns of each dimension

    - by Cate
    Hi all. I'm currently trying to automatically generate a set of fuzzy rules for a set of observations which contain four values for each observation, where each observation will correspond to a state (a good example is Fisher's Iris data). In Matlab I am creating a four-dimensional rule table where a single cell (a,b,c,d) will contain the corresponding state. To reduce the table I am following the Hong and Lee method of row and column similarity checking, but I am having difficulty understanding how to address the third and fourth dimensions' rows and columns. From the method, it is my understanding that each dimension is addressed individually and, if the rule is true, the table is simplified. The rules for merging are as follows:

    1. If all cells in adjacent columns or rows are the same.
    2. If two cells are the same or if either of them is empty in adjacent columns or rows, and at least one cell in both is not empty.
    3. If all cells in a column or row are empty and if cells in its two adjacent columns or rows are the same, merge the three.
    4. If all cells in a column or row are empty and if cells in its two adjacent columns or rows are the same or either of them is empty, merge the three.
    5. If all cells in a column or row are empty and if all the non-empty cells in the column or row to its left have the same region, and all the non-empty cells in the column or row to its right have the same region, but one different from the previously mentioned region, merge these three columns into two parts.

    Now for the confusing bit. Simply checking if the entire row/column is the same as the adjacent one (rule 1) seems simple enough:

        if (a,:,:,:) == (a+1,:,:,:)
        (:,b,:,:) == (:,b+1,:,:)
        (:,:,c,:) == (:,:,c+1,:)
        (:,:,:,d) == (:,:,:,d+1)

    Is this correct? But to check if the elements in the row/column match, or either is zero (rules 2 and 4), I am a bit lost. Would it be something along these lines:

        for a = 1:20
            for i = 1:length(b)
                if (a+1,i,:,:) == (a,i,:,:)
                    ...
                else if (a+1,i,:,:) == 0
                    ...
                else if (a,i,:,:) == 0
                    ...

    and for the third and fourth dimensions:

        for c = 1:20
            for i = 1:length(a)
                if (i,:,c,:) == (i,:,c+1,:)
                    ...
                else if (i,:,c+1,:) == 0
                    ...
                else if (i,:,c,:) == 0
                    ...

        for d = 1:20
            for i = 1:length(a)
                if (i,:,:,d) == (i,:,:,d+1)
                    ...
                else if (i,:,:,d+1) == 0
                    ...
                else if (i,:,:,d) == 0
                    ...

    Any help with four-dimensional arrays would be useful, as I'm so confused by the thought of more than three! I would advise you look at the paper to understand my meaning - they themselves have used the Iris data but only give an example with a 2D table. Thanks in advance, hopefully!
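    If it helps to see the indexing written out, here is a sketch of rules 1 and 2 translated into NumPy terms (an assumption-laden translation, not Matlab: T is the 4-D table, 0 means "empty", and the axis argument picks which of the four dimensions is being merged, so four copies of the loop aren't needed):

        # Sketch: adjacent-slice checks on a 4-D rule table T (0 = empty).
        import numpy as np

        def rule1(T, axis, i):
            """All cells equal in slices i and i+1 along `axis`."""
            a = np.take(T, i, axis=axis)
            b = np.take(T, i + 1, axis=axis)
            return np.array_equal(a, b)

        def rule2(T, axis, i):
            """Cells agree wherever both are non-empty, and neither slice is all empty."""
            a = np.take(T, i, axis=axis)
            b = np.take(T, i + 1, axis=axis)
            both = (a != 0) & (b != 0)
            return bool(np.all(a[both] == b[both]) and a.any() and b.any())

        T = np.zeros((20, 20, 20, 20), dtype=int)
        print(rule1(T, axis=2, i=4))  # compares slices 4 and 5 along the third dimension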

    Read the article

  • CoreGraphics taking a while to show on a large view - can i get it to repeat pixels?

    - by Andrew
    This is my Core Graphics code:

        void drawTopPaperBackground(CGContextRef context, CGRect rect) {
            CGRect paper3 = CGRectMake(10, 14, 300, rect.size.height - 14);
            CGRect paper2 = CGRectMake(13, 12, 294, rect.size.height - 12);
            CGRect paper1 = CGRectMake(16, 10, 288, rect.size.height - 10);

            //Shadow
            CGContextSetShadowWithColor(context, CGSizeMake(0,0), 10, [[UIColor colorWithWhite:0 alpha:0.5] CGColor]);
            CGPathRef path = createRoundedRectForRect(paper3, 0);
            CGContextSetFillColorWithColor(context, [[UIColor blackColor] CGColor]);
            CGContextAddPath(context, path);
            CGContextFillPath(context);

            //Layers of paper
            //CGContextSaveGState(context);
            drawPaper(context, paper3);
            drawPaper(context, paper2);
            drawPaper(context, paper1);
            //CGContextRestoreGState(context);
        }

        void drawPaper(CGContextRef context, CGRect rect) {
            //Shadow
            CGContextSaveGState(context);
            CGContextSetShadowWithColor(context, CGSizeMake(0,0), 1, [[UIColor colorWithWhite:0 alpha:0.5] CGColor]);
            CGPathRef path = createRoundedRectForRect(rect, 0);
            CGContextSetFillColorWithColor(context, [[UIColor blackColor] CGColor]);
            CGContextAddPath(context, path);
            CGContextFillPath(context);
            //CGContextRestoreGState(context);

            //Gradient
            //CGContextSaveGState(context);
            CGColorRef startColor = [UIColor colorWithWhite:0.92 alpha:1.0].CGColor;
            CGColorRef endColor = [UIColor colorWithWhite:0.94 alpha:1.0].CGColor;
            CGRect firstHalf = CGRectMake(rect.origin.x, rect.origin.y, rect.size.width / 2, rect.size.height);
            CGRect secondHalf = CGRectMake(rect.origin.x + (rect.size.width / 2), rect.origin.y, rect.size.width / 2, rect.size.height);
            drawVerticalGradient(context, firstHalf, startColor, endColor);
            drawVerticalGradient(context, secondHalf, endColor, startColor);
            //CGContextRestoreGState(context);

            //CGContextSaveGState(context);
            CGRect redRect = rectForRectWithInset(rect, -1);
            CGMutablePathRef redPath = createRoundedRectForRect(redRect, 0);
            //CGContextSaveGState(context);
            CGContextSetStrokeColorWithColor(context, [[UIColor blackColor] CGColor]);
            CGContextAddPath(context, path);
            CGContextClip(context);
            CGContextAddPath(context, redPath);
            CGContextSetShadowWithColor(context, CGSizeMake(0, 0), 15.0, [[UIColor colorWithWhite:0 alpha:0.1] CGColor]);
            CGContextStrokePath(context);
            CGContextRestoreGState(context);
        }

    The view is a UIScrollView, which contains a text view. Every time the user types something and goes onto a new line, I call [self setNeedsDisplay]; and the view redraws. But when the view starts to get long - around 1000 points in height - it has very noticeable lag. How can I make this code more efficient? Can I take a line of pixels and make it just repeat that, or stretch it, all the way down?

    Read the article

  • Proper HTML technique to create an web form out of an image

    - by Lars
    I plan to create an interactive golf scorecard for my website (XHTML). (Btw, that's how such a scorecard looks: ScoreCard.) At the end, one should be able to insert a score for each hole in the appropriate input field in the virtual scorecard on the website. For me it is very important that the interactive scorecard really looks the same as the original (paper) scorecard does, and so my first approach was to scan and slice the scorecard image to reach that appearance. Here you can see the way I sliced the image.

    The idea was to insert an HTML text input for each score field, ending up with something like this. After I sliced the image, I reconstructed it using an HTML table. To do that, I put the image slices as the cell backgrounds:

        <table>
          <tr>
            <td style="background: url('slice1.jpg')" width="58px" height="25px">
              <input type="text" />
            </td>
          </tr>
          ...
        </table>

    At first this worked fine (as Gimp offers quite a nice feature for this). Then the problem was that I had to create an HTML table to recreate the exact layout. As you can see, the lower part of the layout is split up into 3 columns. The middle column is split up into several rows (one for each hole), so the left and right columns have to be spanned over those rows. OK, finally that worked, but it led to a scaling problem: if I zoom in or out on the table, the middle column (and only that one) is not scaled the right way. I am not able to fix this, and so I am starting to doubt that this is the right technique for HTML image virtualization.

    I am really no specialist in the area of creating websites, so I would really appreciate any help on this. Maybe there is a completely different and better technique to do this, as I think it is a common job in web creation. I couldn't find any nice examples or tutorials on it.

    Read the article

  • MySQL Split Time Ranges into Smaller Chunks

    - by Neren
    Hello all, I've recently been tasked with finishing a PHP/MySQL web app when the developer quit last week. I'm no MySQL expert, so I apologize if this is an intensely simple question. I've searched SO for the better part of two days trying to find a relatively easy solution to my problem, which is as follows.

    Problem in a nutshell: I have a MySQL table full of start and end datetime (GMT -5) and UNIX timestamp values covering durations of irregular length, and I need to break/split/divide them into more-regular time chunks (5 minutes). I'm not after a count of row entries per time chunk/bucket/period, if that makes any sense.

    Data example:

        started, ended, started_UNIX, ended_UNIX
        2010-10-25 15:12:33, 2010-10-25 15:47:09, 1288033953, 1288036029

    What I'm hoping to get:

        2010-10-25 15:12:33, 2010-10-25 15:15:00, 1288033953, 1288034100
        2010-10-25 15:15:00, 2010-10-25 15:20:00, 1288034100, 1288034400
        2010-10-25 15:20:00, 2010-10-25 15:25:00, 1288034400, 1288034700
        2010-10-25 15:25:00, 2010-10-25 15:30:00, 1288034700, 1288035000
        2010-10-25 15:30:00, 2010-10-25 15:35:00, 1288035000, 1288035300
        2010-10-25 15:35:00, 2010-10-25 15:40:00, 1288035300, 1288035600
        2010-10-25 15:40:00, 2010-10-25 15:45:00, 1288035600, 1288035900
        2010-10-25 15:45:00, 2010-10-25 15:47:09, 1288035900, 1288036029

    If you're interested, here's the quick & dirty on the app and why I need the data. App overview: the application receives very simple POST requests generated by a basic sensor device when its input pins go to ground, which submits an INSERT query to the database where MySQL records a timestamp (as started). When the input pins return from a grounded state, the device submits a different POST request, which causes the PHP app to submit an UPDATE query, where a modification timestamp is inserted (as ended). My employer recently changed the periodic reporting unit of measure from Seconds "On" Per Day to Seconds "On" Per 5 Minute Interval. I had formulated what I thought would be a workable solution, but when I looked at it on paper, it looked like Rube Goldberg's nightmare constructed in MySQL, so that was out.

    Any suggestions as to how to break these spans into 5-minute blocks? Keeping it all in MySQL would be my preference, though I'll take any suggestions. Thank you for any suggestions you may have. Again, I apologize if this is a no-brainer. If I ask any additional questions of the SO collective consciousness in the future, I'll try to word them a bit better. Any help will be happily welcomed. Thanks, Neren
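    Since any suggestion is welcome, here is a sketch of the splitting logic pulled out into Python (the function name and the alignment to :00/:05/:10 boundaries are assumptions; rows could just as easily come from a MySQL cursor and be re-inserted):

        # Sketch: split a (start, end) span into 5-minute-aligned chunks.
        from datetime import datetime, timedelta

        CHUNK = timedelta(minutes=5)

        def five_minute_chunks(start, end):
            """Yield (chunk_start, chunk_end) pairs covering [start, end)."""
            # First boundary at or after `start`, aligned to :00/:05/:10/...
            floor = start.replace(minute=start.minute - start.minute % 5,
                                  second=0, microsecond=0)
            edge = min(floor + CHUNK, end)
            while start < end:
                yield (start, edge)
                start, edge = edge, min(edge + CHUNK, end)

        start = datetime(2010, 10, 25, 15, 12, 33)
        end = datetime(2010, 10, 25, 15, 47, 9)
        for a, b in five_minute_chunks(start, end):
            print(a, b)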

    Read the article

  • circles and triangles problem

    - by Faken
    Hello everyone, I have an interesting problem here that I've been trying to solve for the last little while: I have 3 circles on a 2D xy plane, each with the same known radius. I know the coordinates of each of the three centers (they are arbitrary and can be anywhere). What is the largest triangle that can be drawn such that each vertex of the triangle sits on a separate circle, and what are the coordinates of those vertices?

    I've been looking at this problem for hours and asked a bunch of people, but so far only one person has been able to suggest a plausible solution (though I have no way of proving it). The solution that we have come up with involves first creating a triangle about the three circle centers. Next we look at each circle individually and calculate the equation of a line that passes through the circle's center and is perpendicular to the opposite edge. We then calculate the two intersection points of that line with the circle. This is done for all three circles, with a result of 6 points. We iterate over the 8 possible 3-point triangles that these 6 points create (the restriction is that each point of the big triangle must be on a separate circle) and find the maximum size. The results look reasonable (at least when drawn out on paper), and the method passes the special case of the centers of the circles all falling on a straight line (it gives the known largest triangle). Unfortunately I have no way of proving whether this is correct or not. I'm wondering if anyone has encountered a problem similar to this and, if so, how did you solve it?

    Note: I understand that this is mostly a math question and not programming, however it is going to be implemented in code and it must be optimized to run very fast and efficiently. In fact, I already have the above solution in code and tested to be working; if you would like to take a look, please let me know. I chose not to post it because it's all in vector form and pretty much impossible to figure out exactly what is going on (because it's been condensed to be more efficient).

    Lastly, yes this is for school work, though it is NOT a homework question/assignment/project. It's part of my graduate thesis (albeit a very, very small part, but still technically part of it). Thanks for your help.
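    For reference, the heuristic described above is only a few lines when written plainly. A sketch (my own reconstruction from the description, so treat the details as assumptions; in particular it assumes the three centers are distinct):

        # Sketch of the described heuristic: for each circle, intersect it with
        # the line through its own center perpendicular to the opposite edge of
        # the center triangle (two points per circle), then take the largest of
        # the 8 candidate triangles by area.
        import itertools
        import numpy as np

        def largest_triangle(centers, r):
            centers = [np.asarray(c, dtype=float) for c in centers]
            candidates = []
            for i in range(3):
                c = centers[i]
                a, b = centers[(i + 1) % 3], centers[(i + 2) % 3]
                d = b - a                    # opposite edge direction
                u = np.array([-d[1], d[0]])  # perpendicular to that edge
                u /= np.linalg.norm(u)       # assumes a != b
                candidates.append((c + r * u, c - r * u))

            def area(t):
                (x1, y1), (x2, y2), (x3, y3) = t
                return abs((x2 - x1) * (y3 - y1) - (x3 - x1) * (y2 - y1)) / 2

            return max(itertools.product(*candidates), key=area)

        print(largest_triangle([(0, 0), (10, 0), (5, 8)], r=2.0))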

    Read the article

< Previous Page | 51 52 53 54 55 56 57 58 59 60  | Next Page >