Search Results

Search found 59199 results on 2368 pages for 'time wasters'.


  • How should an object that uses composition set its composed components?

    - by Casey
    After struggling with various problems and reading up on component-based systems, in particular the Components chapter of Bob Nystrom's excellent book "Game Programming Patterns", I determined that this is a horrible idea:

        //Class intended to be inherited by all objects. Engine uses Objects exclusively.
        class Object : public IUpdatable, public IDrawable
        {
        public:
            Object();
            Object(const Object& other);
            Object& operator=(const Object& rhs);
            virtual ~Object() = 0;

            virtual void SetBody(const RigidBodyDef& body);
            virtual const RigidBody* GetBody() const;
            virtual RigidBody* GetBody();

            //Inherited from IUpdatable
            virtual void Update(double deltaTime);

            //Inherited from IDrawable
            virtual void Draw(BITMAP* dest);

        protected:
        private:
        };

    I'm attempting to refactor it into a more manageable system. Mr. Nystrom uses the constructor to set the individual components, so changing those components at run-time is impossible; the class is intended to be used through derived classes or factory methods whose constructor calls do not change at run-time. His Bjorne object, for example, is just a factory method that makes one specific call to the GameObject constructor. Is this a good idea? Should the object have a default constructor and setters to facilitate run-time changes, or no default constructor and no setters, using a factory method instead? Given:

        class Object
        {
        public:
            //...See below for constructor implementation concerns.
            Object(const Object& other);
            Object& operator=(const Object& rhs);
            virtual ~Object() = 0;

            //See below for setter concerns.
            IUpdatable* GetUpdater();
            IDrawable* GetRenderer();

        protected:
            IUpdatable* _updater;
            IDrawable* _renderer;

        private:
        };

    should the components be read-only and passed in to the constructor:

        class Object
        {
        public:
            //No default constructor.
            Object(IUpdatable* updater, IDrawable* renderer);
            //...remainder is the same as above...
        };

    or should a default constructor be provided so that the components can be set at run-time:

        class Object
        {
        public:
            Object();
            //...
            void SetUpdater(IUpdatable* updater);
            void SetRenderer(IDrawable* renderer);
            //...remainder is the same as above...
        };

    or both?

        class Object
        {
        public:
            Object();
            Object(IUpdatable* updater, IDrawable* renderer);
            //...
            void SetUpdater(IUpdatable* updater);
            void SetRenderer(IDrawable* renderer);
            //...remainder is the same as above...
        };
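    A minimal sketch of the "both" option is shown below, assuming illustrative IUpdatable/IDrawable interfaces along the lines of those above and non-owning raw pointers; the interface bodies, the null checks and the factory comment are my own additions for illustration, not taken from the book or the question:

        struct IUpdatable { virtual ~IUpdatable() {} virtual void Update(double deltaTime) = 0; };
        struct IDrawable  { virtual ~IDrawable()  {} virtual void Draw() = 0; };

        class Object
        {
        public:
            // Default-construct with no components; they can be injected later.
            Object() : _updater(0), _renderer(0) {}

            // Or inject both components up front, e.g. from a factory method.
            Object(IUpdatable* updater, IDrawable* renderer)
                : _updater(updater), _renderer(renderer) {}

            // Run-time swapping is just pointer reassignment, because the object
            // only forwards calls to whatever component is currently installed.
            void SetUpdater(IUpdatable* updater)  { _updater = updater; }
            void SetRenderer(IDrawable* renderer) { _renderer = renderer; }

            void Update(double deltaTime) { if (_updater)  _updater->Update(deltaTime); }
            void Draw()                   { if (_renderer) _renderer->Draw(); }

        private:
            IUpdatable* _updater;  // non-owning; ownership managed elsewhere
            IDrawable*  _renderer; // non-owning; ownership managed elsewhere
        };

    A factory function can still hide the constructor call, Bjorne-style, while the setters remain available for the cases that genuinely need to swap behaviour mid-game; whether those setters should exist at all is exactly the design question being asked.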

    Read the article

  • What is the best Broadphase Interface for moving spheres?

    - by Molmasepic
    As of now I am working on optimizing the performance of the physics and collision, and I am seeing some slowdowns on my other computers compared to my main one. I have well over 3000 btSphereShape rigid bodies, and 2/3 of them do not move at all, but I am noticing (from the profile below) that collision is taking a fair bit of time. Each sample counts as 0.01 seconds.

        %time  cumulative-sec  self-sec    calls  self-ms/call  total-ms/call  name
        10.09      0.65          0.65             SphereTriangleDetector::collide(btVector3 const&, btVector3&, btVector3&, float&, float&, float)
         7.61      1.14          0.49             btSphereTriangleCollisionAlgorithm::processCollision(btCollisionObject*, btCollisionObject*, btDispatcherInfo const&, btManifoldResult*)
         5.59      1.50          0.36             btConvexTriangleCallback::processTriangle(btVector3*, int, int)
         5.43      1.85          0.35             btQuantizedBvh::reportAabbOverlappingNodex(btNodeOverlapCallback*, btVector3 const&, btVector3 const&) const
         4.97      2.17          0.32             btBvhTriangleMeshShape::processAllTriangles(btTriangleCallback*, btVector3 const&, btVector3 const&) const::MyNodeOverlapCallback::processNode(int, int)
         4.19      2.44          0.27             btSequentialImpulseConstraintSolver::resolveSingleConstraintRowGeneric(btRigidBody&, btRigidBody&, btSolverConstraint const&)
         4.04      2.70          0.26             btSequentialImpulseConstraintSolver::resolveSingleConstraintRowLowerLimit(btRigidBody&, btRigidBody&, btSolverConstraint const&)
         3.73      2.94          0.24             Ogre::OctreeSceneManager::walkOctree(Ogre::OctreeCamera*, Ogre::RenderQueue*, Ogre::Octree*, Ogre::VisibleObjectsBoundsInfo*, bool, bool)
         3.42      3.16          0.22             btTriangleShape::getVertex(int, btVector3&) const
         2.48      3.32          0.16             Ogre::Frustum::isVisible(Ogre::AxisAlignedBox const&, Ogre::FrustumPlane*) const
         2.33      3.47          0.15  1246357    0.00  0.00   Gorilla::Layer::setVisible(bool)
         2.33      3.62          0.15             SphereTriangleDetector::getClosestPoints(btDiscreteCollisionDetectorInterface::ClosestPointInput const&, btDiscreteCollisionDetectorInterface::Result&, btIDebugDraw*, bool)
         1.86      3.74          0.12             btCollisionDispatcher::findAlgorithm(btCollisionObject*, btCollisionObject*, btPersistentManifold*)
         1.86      3.86          0.12             btSequentialImpulseConstraintSolver::setupContactConstraint(btSolverConstraint&, btCollisionObject*, btCollisionObject*, btManifoldPoint&, btContactSolverInfo const&, btVector3&, float&, float&, btVector3&, btVector3&)
         1.71      3.97          0.11             btTriangleShape::getEdge(int, btVector3&, btVector3&) const
         1.55      4.07          0.10             _Unwind_SjLj_Register
         1.55      4.17          0.10             _Unwind_SjLj_Unregister
         1.55      4.27          0.10             Ogre::D3D9HardwareVertexBuffer::updateBufferResources(char const*, Ogre::D3D9HardwareVertexBuffer::BufferResources*)
         1.40      4.36          0.09             btManifoldResult::addContactPoint(btVector3 const&, btVector3 const&, float)
         1.40      4.45          0.09             btSequentialImpulseConstraintSolver::setupFrictionConstraint(btSolverConstraint&, btVector3 const&, btRigidBody*, btRigidBody*, btManifoldPoint&, btVector3 const&, btVector3 const&, btCollisionObject*, btCollisionObject*, float, float, float)
         1.24      4.53          0.08             btSequentialImpulseConstraintSolver::convertContact(btPersistentManifold*, btContactSolverInfo const&)
         1.09      4.60          0.07   408760    0.00  0.00   Living::MapHide()
         1.09      4.67          0.07             btSphereTriangleCollisionAlgorithm::~btSphereTriangleCollisionAlgorithm()
         1.09      4.74          0.07             inflate_fast

    EDIT: Updated to show the current profile. I have only listed the functions using over 1% of the time, out of the many functions being used. One other thing: each monster has a certain area that it stays in and is only active when a player is in that area. I was wondering if there is a way to deactivate the non-active monsters in Bullet (reactivating them once a player is in the area again), or whether there is a different broadphase interface that I should use. The current broadphase interface is btDbvtBroadphase.

    EDIT: Here is the profile on the other computer (the top one is my main). Each sample counts as 0.01 seconds.

        %time  cumulative-sec  self-sec    calls  self-ms/call  total-ms/call  name
        12.18      1.19          1.19             SphereTriangleDetector::collide(btVector3 const&, btVector3&, btVector3&, float&, float&, float)
         6.76      1.85          0.66             btSphereTriangleCollisionAlgorithm::processCollision(btCollisionObject*, btCollisionObject*, btDispatcherInfo const&, btManifoldResult*)
         5.83      2.42          0.57             btQuantizedBvh::reportAabbOverlappingNodex(btNodeOverlapCallback*, btVector3 const&, btVector3 const&) const
         5.12      2.92          0.50             btConvexTriangleCallback::processTriangle(btVector3*, int, int)
         4.61      3.37          0.45             btTriangleShape::getVertex(int, btVector3&) const
         4.09      3.77          0.40             _Unwind_SjLj_Register
         3.48      4.11          0.34             btBvhTriangleMeshShape::processAllTriangles(btTriangleCallback*, btVector3 const&, btVector3 const&) const::MyNodeOverlapCallback::processNode(int, int)
         2.46      4.35          0.24             btSequentialImpulseConstraintSolver::resolveSingleConstraintRowLowerLimit(btRigidBody&, btRigidBody&, btSolverConstraint const&)
         2.15      4.56          0.21             _Unwind_SjLj_Unregister
         2.15      4.77          0.21             SphereTriangleDetector::getClosestPoints(btDiscreteCollisionDetectorInterface::ClosestPointInput const&, btDiscreteCollisionDetectorInterface::Result&, btIDebugDraw*, bool)
         1.84      4.95          0.18             btTriangleShape::getEdge(int, btVector3&, btVector3&) const
         1.64      5.11          0.16             btSequentialImpulseConstraintSolver::resolveSingleConstraintRowGeneric(btRigidBody&, btRigidBody&, btSolverConstraint const&)
         1.54      5.26          0.15             btSequentialImpulseConstraintSolver::setupContactConstraint(btSolverConstraint&, btCollisionObject*, btCollisionObject*, btManifoldPoint&, btContactSolverInfo const&, btVector3&, float&, float&, btVector3&, btVector3&)
         1.43      5.40          0.14             Ogre::D3D9HardwareVertexBuffer::updateBufferResources(char const*, Ogre::D3D9HardwareVertexBuffer::BufferResources*)
         1.33      5.53          0.13             btManifoldResult::addContactPoint(btVector3 const&, btVector3 const&, float)
         1.13      5.64          0.11             btRigidBody::predictIntegratedTransform(float, btTransform&)
         1.13      5.75          0.11             btTriangleIndexVertexArray::getLockedReadOnlyVertexIndexBase(unsigned char const**, int&, PHY_ScalarType&, int&, unsigned char const**, int&, int&, PHY_ScalarType&, int) const
         1.02      5.85          0.10             btSphereTriangleCollisionAlgorithm::CreateFunc::CreateCollisionAlgorithm(btCollisionAlgorithmConstructionInfo&, btCollisionObject*, btCollisionObject*)
         1.02      5.95          0.10             btSphereTriangleCollisionAlgorithm::btSphereTriangleCollisionAlgorithm(btPersistentManifold*, btCollisionAlgorithmConstructionInfo const&, btCollisionObject*, btCollisionObject*, bool)

    Edited the same way as the other profile (only functions over 1% listed).
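    One way to approach the "sleeping monsters" part of the question, independent of which broadphase is used, is to freeze the rigid bodies of monsters whose area has no player in it and wake them again when a player enters. A rough sketch of that idea with the Bullet API is below; the MonsterArea type, its members, and the way areas are tracked are assumptions made for illustration, and the exact activation-state handling may need tuning for a specific setup:

        #include <vector>
        #include <btBulletDynamicsCommon.h>

        struct MonsterArea
        {
            btVector3                 center;
            btScalar                  radius;
            std::vector<btRigidBody*> monsters;   // bodies belonging to this area
            bool                      frozen;
        };

        // Call once per frame (or at some lower rate) with the player position.
        void UpdateMonsterActivation(std::vector<MonsterArea>& areas, const btVector3& playerPos)
        {
            for (std::size_t i = 0; i < areas.size(); ++i)
            {
                MonsterArea& area = areas[i];
                const bool playerInside =
                    (playerPos - area.center).length2() < area.radius * area.radius;

                if (!playerInside && !area.frozen)
                {
                    // Take the whole group out of the simulation: no collision
                    // detection or integration is performed for these bodies.
                    for (std::size_t m = 0; m < area.monsters.size(); ++m)
                        area.monsters[m]->forceActivationState(DISABLE_SIMULATION);
                    area.frozen = true;
                }
                else if (playerInside && area.frozen)
                {
                    // Re-enable and wake the bodies when the player comes back.
                    for (std::size_t m = 0; m < area.monsters.size(); ++m)
                    {
                        area.monsters[m]->forceActivationState(ACTIVE_TAG);
                        area.monsters[m]->activate(true);
                    }
                    area.frozen = false;
                }
            }
        }

    Whether this (or removing the inactive bodies from the world entirely) beats switching to another broadphase such as btAxisSweep3 would need to be measured against the profiles above.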

    Read the article

  • JavaOne Latin America 2012 Trip Report

    - by reza_rahman
    JavaOne Latin America 2012 was held at the Transamerica Expo Center in Sao Paulo, Brazil on December 4-6. The conference was a resounding success with a great vibe, excellent technical content and numerous world-class speakers. Some notable local and international speakers included Bruno Souza, Yara Senger, Mattias Karlsson, Vinicius Senger, Heather Vancura, Tori Wieldt, Arun Gupta, Jim Weaver, Stephen Chin, Simon Ritter and Henrik Stahl. Topics covered included the JCP/JUGs, Java SE 7, HTML 5/WebSocket, CDI, Java EE 6, Java EE 7, JSF 2.2, JMS 2, JAX-RS 2, Arquillian and JavaFX.

    Bruno Borges and I manned the GlassFish booth at the Java Pavilion on Tuesday and Wednesday. The booth traffic was decent and not too hectic. We met a number of GlassFish adopters, including perhaps one of the largest GlassFish deployments in Brazil, as well as some folks migrating to Java EE from Spring, and we invited them to share their stories with us. We also talked with some key members of the local Java community. Tuesday evening we had the GlassFish party at the Tribeca Pub. The party was definitely a hit and we could have used a larger venue (this was the first time we had the GlassFish party in Brazil). Along with GlassFish enthusiasts, a number of Java community leaders were there. We met some of the same folks again at the JUG leaders' party on Wednesday evening.

    On Thursday Arun Gupta, Bruno Borges and I ran a hands-on lab on JAX-RS, WebSocket and Server-Sent Events (SSE) titled "Developing JAX-RS Web Applications Utilizing Server-Sent Events and WebSocket". This is the same Java EE 7 lab run at JavaOne San Francisco. The lab gives developers a first-hand glimpse of what an HTML 5 powered Java EE application might look like. We had an overflow crowd for the lab (at one point we had about twenty people standing) and the lab went very well. The slides for the lab are here: Developing JAX-RS Web Applications Utilizing Server-Sent Events and WebSocket from Reza Rahman. The actual contents for the lab are available here. Give me a shout if you need help getting it up and running.

    I gave two solo talks following the lab. The first was on JMS 2, titled "What’s New in Java Message Service 2". This was essentially the same talk given by JMS 2 specification lead Nigel Deakin at JavaOne San Francisco. I talked about the JMS 2 simplified API, JMSContext injection, delivery delays, asynchronous send, JMS resource definition in Java EE 7, standardized configuration for JMS MDBs in EJB 3.2, mandatory JCA pluggability and the like. The session went very well, there was good Q & A, and someone even told me this was the best session of the conference! The slides for the talk are here: What’s New in Java Message Service 2 from Reza Rahman.

    My last talk for the conference was on JAX-RS 2, in the keynote hall. Titled "JAX-RS 2: New and Noteworthy in the RESTful Web Services API", this was basically the same talk given by the specification leads Santiago Pericas-Geertsen and Marek Potociar at JavaOne San Francisco. I talked about the JAX-RS 2 client API, asynchronous processing, filters/interceptors, hypermedia support, server-side content negotiation and the like. The talk went very well and I got a few very kind compliments afterwards. The slides for the talk are here: JAX-RS 2: New and Noteworthy in the RESTful Web Services API from Reza Rahman.

    On a more personal note, Sao Paulo has always had a special place in my heart as the incubating city for Sepultura and Soulfly, two of my favorite heavy metal groups of all time! Consequently, the city has a perpetually alive and kicking metal scene pretty much any given day of the week. This time I got to check out a solid performance by local metal act Republica at the legendary Manifesto Bar. I also wanted to see a Dio tribute at the Blackmore but ran out of time and energy... Overall I enjoyed the conference and Sao Paulo, and look forward to going to Brazil again next year!

    Read the article

  • Get Your Enterprise Working With Oracle On Track Communication 1.0

    - by Josh Lannin
    The On Track Development team is very pleased to announce that today On Track is available for our customers to download and evaluate.  To learn more about what On Track does start with our whitepaper and datasheet.   If you are a developer, take a look at our documentation and samples posted to our OTN page. For this first blog post, I’ll be speaking to several notable points about our product. Graceful Escalation via Conversations: On Track addresses the “Collaboration Problem” through a single guiding principle – graceful escalation – within the construct of a Conversation. In On Track, collaboration is based on a context (called a “Conversation”) that gracefully escalates in form, structure, and content, as dictated by the particular needs of a given collaboration.  Within that context, On Track provides a rich set of tools to choose from.  These tools provide for communication, coordination, content management, organization, decision making, and analysis -- all essential aspects of collaboration, but not all of them are essential all of the time.  Every collaborative interaction will evolve differently.  Some will evolve to represent work spreading over the course of years and involving a large, distributed team, while others may involve few people and not evolve at all.  Regardless, all collaborative contexts are built from the same parts, utilize the same concepts, and start the same way.  The principle of graceful escalation is that you only use the tools and structure you need; so you only incur the complexity you need. Purposeful Collaboration: Through application integration, On Track Conversations bring enterprise application users the communication and collaboration capabilities required to complete business process.  By association with specific processes or business objects conversations extend the possible interactions and broaden participation to internal or external non-application users and provide a sophisticated interaction experience, all the while enhancing the data set within the owning application.  Purposeful collaboration not only needs to happen in the context of applications, it must support a full range of real-time and long-running interactions to provide the greatest value. Multi Client, Multi Modal: This On Track 1.0 product release includes the same day availability of  multiple clients, including iPhone and iPad applications which are now available on the Apple Store, a fully capable and accessible Outlook Add-In, along with our browser web client.  With each client we have sought to leverage the strengths of each unique device- our iPhone client supports picture and voice posts, the iPad is optimized for meeting room situations and document viewing, and our Outlook add-in allows you to take emails in context and bring them into On Track.  In addition to supporting a diverse array of clients, On Track provides a unified multi modal experience support starting with basic messages moving through to integrated documents with live annotations, snapshots, application sharing, and voice. Next Generation Web Architecture: We believe On Track will help move the bar higher for what users can expect from all web applications, most notably ones that involve real-time activity.  On Track is built from the ground up with an innovative, real-time architecture that leverages the extensive push capabilities of our server.  
Whether you are receiving a new message, viewing where crowds of people are collaborating, or doing live annotation on a document with a set of people, that information comes to you immediately without refreshes or moving back and forth between pages.  We’ve leveraged this core architecture across the product experience and raised the user experience bar for this type of application.  As well these capabilities are based on open standards and protocols, and are fully extensible by anyone- enabling sophisticated integrations to be created with a wide variety of both legacy and next-generation applications. Agile Product Development: As a product team we operate using continuous feedback and modified agile development methodologies.  We have thousands of active internal Oracle users who have helped pilot our product for critical business functions, and the On Track product development team uses our product as our primary vehicle for all our collaboration.  Additionally we been working with early access customers who are adopting our technology and providing us valuable feedback - which our process has rapidly realized in improvements to our software.  On Track agility extends to our server as well, which is built to scale, and is very simple to install and configure. We are pleased to make this product announcement and encourage you to join us on Facebook or follow us on Twitter, as well as checking back here for the latest product information.

    Read the article

  • Graduate Life in Oracle by Ramakrishna Nalabothula

    - by david.talamelli
    Preparation for the BIG interview: I prepared in both technical and logical aspects to face the Oracle Interview. I had to cover almost all main areas in technical and many types of problems in logical areas. We attended mock- interviews and written tests at our college, browsed websites and communities. Having started with such a rigorous preparation before Oracle visited the college; it was possible for me to make it into the list of final selection. I put in a big effort to reach this position and I am very happy to achieve this. Why I chose Oracle: Oracle is one of the best technology providers and has a large customer base. I think it is not an easy job to offer services to that many customers. So, the company needs young and dynamic people and I wanted to be one among them. This gave me spirit and led me to walk into Oracle. I am working on different technologies and learning something new in the field. Having many customers is challenging Oracle and my work is challenging me. I am confident enough that customers to both me and Oracle will never lose their faith. Learning at Oracle: The style of learning is good and never resembles a classroom session in a college. It is always fun to learn here. There is no exam to track the performance. There is enough time until we completely learn it. There is no concept of stiff competition. Peers help through KT (Knowledge Transfers) and there are good resources in the Oracle to learn. People are always there to direct you to those. There are lots of opportunities for Web learning too! My Work Area: My team gives me a great opportunity to offer service to the entire product. There were no situations when I got tense with my work or targets and deliverables. I work with a Global Team and my manager is based in the UK. I have a lot of freedom and flexibility. I use the work from home option in case of any disturbances in the city or due to personal problems. I have a weekly meeting with my manager and use Instant Messenger for status updates. My manager plans very well to give me enough time to complete tasks. I have good coordination from the team towards our deadlines. My work has also brought me close to many people across various technology and product teams. I am glad to make many friends across Oracle. I am enjoying my time and work here. I cover all the major activities in the team. I am thankful to everyone from the Development and Quality Assurance Team to have high confidence in me by assigning such big responsibilities. Primary tasks are maintaining the environments that are very unstable at times. This really requires big time and effort to trace the root causes. I am working and still learning on all these areas. The happiest thing is I got chances to travel to USA & UK for training and for supporting a few customer demo projects. I have got to explore more across two countries and got sponsored to visit the places around due to Oracle's policies. I am very much thankful for these what I have from Oracle and for the cooperation there from other colleagues. Fun at Work: Oracle has a club from Employees to conduct games and events. I had an opportunity to participate in competitions, tournaments inside Oracle and Inter-Corporate for all games. I thank Oracle for providing me all these opportunities and I would like to extend my thanks to Senior Management for their confidence in me. I thank Oracle HR Recruiting Team too for selecting me into Oracle and giving me this opportunity to share my experience and feelings. 
Ramakrishna Nalabothula

    Read the article

  • SQL Saturday 27 (Portland, Oregon)

    - by BuckWoody
    I’m sitting in the Seattle airport, waiting for my flight to Silicon Valley, California for the SQL Server 2008 R2 Launch Event. By some quirk of nature, they are asking me to emcee the event – but that’s another post entirely.

    I’m reflecting on the SQL Saturday 27 event that was just held in Portland, Oregon this last Saturday. These are not Microsoft-sponsored events – it’s truly the community at work. Think of a big user-group meeting – I mean REALLY big – held in a central location, like at a college (as ours was) or some larger, inexpensive venue like that. Everyone there is volunteering – it’s my own money and time to drive several hours to a hotel for the night, feed myself and present. It’s their own time and money for the folks that organize the event – unless a vendor or two steps in to help. It’s their own time and money for the attendees to drive a long way, spend the night and their Saturday to listen to the speakers. Why do all this?

    Because everybody benefits. Every speaker learns something new, meets new people, and reaches a new audience. Every volunteer does the same. And the attendees? Well, it’s pretty obvious what they get: a 7 AM to 10 PM extravaganza of knowledge from every corner of the product. In fact, this year the Portland group hooked up with the CodeCamp folks and held a combined event. We had over 850 people, and I had everyone from data professionals to developers in my sessions.

    So I’ll take this opportunity to do two things: to say “thank you” to all of the folks who attended, from those who spoke to those who worked and those who came to listen, and to challenge you to attend the next SQL Saturday anywhere near you. You can find the list here: http://www.sqlsaturday.com/. Don’t see anything in your area? Start one! The PASS folks have a package that will show you how. Sure, it’s a big job, but the key is to get as many people helping you as possible. Even if you have only a few dozen folks show up the first time, no worries. The first events I presented at had about 20 in the room. But not this week.

    See you at the Launch Event if you’re near the San Francisco area tomorrow, and see you at the Redmond SQL Saturday and TechEd if not.

    Read the article

  • Unity GUI not in build, but works fine in editor

    - by Darren
    I have:

    1. A GUITexture attached to an object.
    2. A script that has GUIStyles created for the textfield and buttons that are created in OnGUI(). This script is attached to the same object as number 1.
    3. Three GUIText objects, each separate from the above.
    4. A script that enables the GUITexture and the script in numbers 1 and 2 respectively.

    This is how it is supposed to work: when I cross the finish line, the number 4 script enables the number 1 GUITexture component and the number 2 script component. The script component uses one of number 3's GUIText objects to show you your best lap time, and also makes a GUI.TextField for name entry and two GUI.Buttons for "Submit" and "Skip". If you hit "Submit" the script will submit the time. No matter which button you press, the remaining two GUIText objects from number 3 will show you the top 10 best times.

    For some reason, when I run it in the editor everything works 100%, but in different kinds of builds the results vary. In a webplayer, the GUITexture, the textfield and the buttons appear, but the textfield and buttons are plain and show no evidence of the GUIStyles. When I click one of the buttons, the score gets submitted but I do not get the fastest times showing. In a standalone build, the GUITexture shows up but nothing else does. If I remove the GUIStyle parameter from the GUI.TextField and GUI.Button calls, they show up. Why am I getting these variations and how can I fix it? Code below:

        void Start () {
            Names.text = "";
            Times.text = "";
            YourBestTime.text = "Your Best Lap: " + bestTime + "\nEnter your name:";
            //StartCoroutine(GetTimes("Test"));
        }

        void Update() {
            if (!ShowButtons && !GettingTimes) {
                StartCoroutine(GetTimes());
                GettingTimes = true;
            }
        }

        IEnumerator GetTimes () {
            Debug.Log("Getting times");
            YourBestTime.text = "Loading Best Lap Times";
            WWW times_get = new WWW(GetTimesUrl);
            yield return times_get;
            WWW names_get = new WWW(GetNamesUrl);
            yield return names_get;
            if (times_get.error != null || names_get.error != null) {
                print("There was an error retrieving the data: " + names_get.error + times_get.error);
            } else {
                Times.text = times_get.text;
                Names.text = names_get.text;
                YourBestTime.text = "Your Best Lap: " + bestTime;
            }
        }

        IEnumerator PostLapTime (string Name, string LapTime) {
            string hash = MD5.Md5Sum(Name + LapTime + secretKey);
            string bestTime_url = SubmitTimeUrl + "&Name=" + WWW.EscapeURL(Name) + "&LapTime=" + LapTime + "&hash=" + hash;
            Debug.Log(bestTime_url);
            // Post the URL to the site and create a download object to get the result.
            WWW hs_post = new WWW(bestTime_url);
            //label = "Submitting...";
            yield return hs_post; // Wait until the download is done
            if (hs_post.error != null) {
                print("There was an error posting the lap time: " + hs_post.error);
                //label = "Error: " + hs_post.error;
                //show = false;
            } else {
                Debug.Log("Posted: " + hs_post.text);
                ShowButtons = false;
                PostingTime = false;
            }
        }

        void OnGUI() {
            if (ShowButtons) {
                //makes text box
                nameString = GUI.TextField(new Rect((Screen.width/2)-111, (Screen.height/2)-130, 222, 25), nameString, 20, TextboxStyle);
                if (GUI.Button(new Rect((Screen.width/2-74.0f), (Screen.height/2)-90, 64, 32), "Submit", ButtonStyle)) {
                    //SUBMIT TIME
                    if (nameString == "") {
                        nameString = "Player";
                    }
                    if (!PostingTime) {
                        StartCoroutine(PostLapTime(nameString, bestTime));
                        PostingTime = true;
                    }
                } else if (GUI.Button(new Rect((Screen.width/2+10.0f), (Screen.height/2)-90, 64, 32), "Skip", ButtonStyle)) {
                    ShowButtons = false;
                }
            }
        }

    Read the article

  • SQL SERVER – Monitoring SQL Server Database Transaction Log Space Growth – DBCC SQLPERF(logspace) – Puzzle for You

    - by pinaldave
    First of all – if you are going to say this is a very old subject, I agree, this is a very (very) old subject. In earlier times this was just about the only option we had to monitor log space. As new versions of SQL Server were released we were all equipped with DMVs, Performance Counters, Extended Events and many more new enhancements. However, during all these years I have always used DBCC SQLPERF(logspace) to get the details of the logs. It may be because I learned this command when I started my career and it has done what I wanted all this time. Recently I received an interesting question and I thought I should request your help. However, before I do, let us see the traditional usage of DBCC SQLPERF(logspace).

    Every time I have to get the details of the log I run the following script. Additionally, I like to store the time when the log file snapshot was taken, so I can go back and review how the log file grew. This gives me a fair estimate of when the log file was growing.

        CREATE TABLE dbo.logSpaceUsage
        (
            id INT IDENTITY (1,1),
            logDate DATETIME DEFAULT GETDATE(),
            databaseName SYSNAME,
            logSize DECIMAL(18,5),
            logSpaceUsed DECIMAL(18,5),
            [status] INT
        )
        GO

        INSERT INTO dbo.logSpaceUsage (databaseName, logSize, logSpaceUsed, [status])
        EXEC ('DBCC SQLPERF(logspace)')
        GO

        SELECT *
        FROM dbo.logSpaceUsage
        GO

    I used to record the details of log file growth every hour of the day and then we would plot charts using Reporting Services (and Excel in much earlier times). If you look at the script above, it is a very simple script. Now here is the puzzle for you.

    Puzzle 1: Write a script, based on the table, which gives you the time period with the highest growth, based on the data stored in the table.

    Puzzle 2: Write a script, based on the table, which gives you the amount of log file growth from the beginning of the table to the latest recording of the data.

    You may have to run the above script at some interval to collect various data samples of the log file to answer the puzzles. To make things simple, I am giving you a sample script, with the expected answers listed below, for both of the puzzles. Here is the sample query for the puzzle:

        -- This is sample query for puzzle
        CREATE TABLE dbo.logSpaceUsage
        (
            id INT IDENTITY (1,1),
            logDate DATETIME DEFAULT GETDATE(),
            databaseName SYSNAME,
            logSize DECIMAL(18,5),
            logSpaceUsed DECIMAL(18,5),
            [status] INT
        )
        GO

        INSERT INTO dbo.logSpaceUsage (databaseName, logDate, logSize, logSpaceUsed, [status])
        SELECT 'SampleDB1', '2012-07-01 7:00:00.000', 5, 10, 0
        UNION ALL
        SELECT 'SampleDB1', '2012-07-01 9:00:00.000', 16, 10, 0
        UNION ALL
        SELECT 'SampleDB1', '2012-07-01 11:00:00.000', 9, 10, 0
        UNION ALL
        SELECT 'SampleDB1', '2012-07-01 14:00:00.000', 18, 10, 0
        UNION ALL
        SELECT 'SampleDB3', '2012-06-01 7:00:00.000', 5, 10, 0
        UNION ALL
        SELECT 'SampleDB3', '2012-06-04 7:00:00.000', 15, 10, 0
        UNION ALL
        SELECT 'SampleDB3', '2012-06-09 7:00:00.000', 25, 10, 0
        GO

    Expected Result of Puzzle 1: you will notice that there are two entries for database SampleDB3, as there were two instances where the log file grew by the same amount.

    Expected Result of Puzzle 2

    Please leave a comment with a valid answer and I will post the valid answers, with due credit, next week. Not to mention that winners will get a surprise gift from me.

    Reference: Pinal Dave (http://blog.sqlauthority.com)

    Read the article

  • Wisdom Lies in Collaborative Power and Intelligence

    - by kellsey.ruppel
    By Alakh Verma, Director, Platform Technology Solutions   In my recent blog posts, I shared insights on Predictive Analytics (Will Predictive Analytics at 'Speed of Thoughts' Help Businesses?), Real Time Decisions (How critical are Real Time decisions in business today?) and their significance in our lives in general and in businesses today. In the current business paradigm shift- with evolutionary social business, it is paramount that businesses look for wisdom in collaborative power and intelligence and equip their employees with the tools to engage with one another. There is an old time saying that 5 sticks tied together are stronger and unable to break as opposed to an individual stick. We have recently witnessed the power of ordinary people uniting together and fought collaboratively using Facebook and Twitter to topple down dictators in Tunisia, Egypt and Libya—and are threatening absolute rule in Syria. And an India one man’s (Anna Hazare) campaign against corruption went viral, bringing thousands to the streets in support. As anyone who has worked in a sizeable organization knows, there is no guarantee that the organization as a whole will perform efficiently and achieve its goals, even if each employee is individually efficient and every team has a high level of productivity. To achieve enterprise productivity, it is necessary not only for individuals and groups to “do things right” by working productively but also for the enterprise as a whole to “do the right things” - form the right teams, make the right decisions, allocate resources correctly, and effectively coordinate activities across the entire organization. Most organizations fall short of the optimal level of enterprise productivity because of one or more of these reasons, all at a great cost to the business.  They are disconnected from themselves with various parts of the organization unintentionally working at cross-purposes with each other.  Information that exists is not getting shared or reused.  Human talent is not being applied where it is most needed.  The same problems are being solved repeatedly by multiple groups. Intelligent collaboration through automated business processes has the ability to alter the course of any important business activity, with a potentially dramatic impact on the financial performance of the business. Whether it is a simple email exchange, a physical or virtual meeting, a task force, or a large-scale project, the activity is inherently collaborative.  In fact, collaboration can be defined as the work that takes place among people when a business process is not pre-determining how the work should take place. Collaboration is many things: information sharing, brainstorming, problem solving, best practice negotiation, innovation, coordination of activity, alignment of purpose, and so forth.  Collaboration is the “white space” between the business processes; it is the glue that holds an organization together, and the lubricant that allows the machinery to keep running.  Real time search and collaborative capabilities of the right people with the right content supported by defined processes will provide unparallel wisdom in the organization in the most competitive business environment today. Interestingly, technologies such as Oracle WebCenter offer these capabilities in our Web based business transactions and compliment in the overall collaborative intelligence and power to truly transform organizations to social businesses. 
Looking to learn more about engaging your employees to collaborate together and providing a complete user experience for your customers? You won't want to miss our webcast today! Drive Online Engagement with Intuitive Portals and Websites

    Read the article

  • Open source adventures with... wait for it... Microsoft

    - by Jeff
    Last week, Microsoft announced that it was going to open source the rest of the ASP.NET MVC Web stack. The core MVC framework has been open source for a long time now, but the other pieces around it are also now out in the wild. Not only that, but it's not what I call "big bang" open source, where you release the source with each version. No, they're actually committing in real time to a public repository. They're also taking contributions where it makes sense. If that weren't exciting enough, CodePlex, which used to be a part of the team I was on, has been re-org'd to a different part of the company where it is getting the love and attention (and apparently money) that it deserves. For a period of several months, I lobbied to get a PM gig with that product, but got nowhere. A year and a half later, I'm happy to see it finally treated right. In any case, I found a bug in Razor, the rendering engine, before the beta came out. I informally sent the bug info to some people, but it wasn't fixed for the beta. Now, with the project being developed in the open, I was able to submit the issue, and went back and forth with the developer who wrote the code (I met him once at a meet up in Bellevue, I think), and he committed a fix. I tried it a day later, and the bug was gone. There's a lot to learn from all of this. That open source software is surprisingly efficient and often of high quality is one part of it. For me the win is that it demonstrates how open and collaborative processes, as light as possible, lead to better software. In other words, even if this were a project being developed internally, at a bank or something, getting stakeholders involved early and giving people the ability to respond leads to awesomeness. While there is always a place for big thinking, experience has shown time and time again that trying to figure everything out up front takes too long, and rarely meets expectations. This is a lesson that probably half of Microsoft has yet to learn, including the team I was on before I split. It's the reason that team still hasn't shipped anything to general availability. But I've seen what an open and iterative development style can do for teams, at Microsoft and other places that I've worked. When you can have a conversation with people, and take ideas and turn them into code quickly, you're winning. So why don't people like winning? I think there are a lot of reasons, and they can generally be categorized into fear, skepticism and bad experiences. I can't give the Web stack teams enough credit. Not only did they dream big, but they changed a culture that often seems immovable and hopelessly stuck. This is a very public example of this culture change, but it's starting to happen at every scale in Microsoft. It's really interesting to see in a company that has been written off as dead the last decade.

    Read the article

  • How Do You Actually Model Data?

    Since the 1970’s, developers, analysts and DBAs have been able to represent concepts and relations in the form of data through the use of generic symbols. But what is data modeling? The first time I actually heard this term I could not understand why anyone would want to display a computer on a fashion show runway. Hey, what do you expect? At that time I was a freshman in community college, and obviously this was a long time ago. I have since had the chance to learn what data modeling truly is through using it.

    Data modeling is a process of breaking down information and/or requirements into common categories called objects. Once objects start being defined, relationships start to form based on dependencies found among other existing objects. Currently, there are several tools on the market that help data designers actually map out objects and their relationships through the use of symbols and lines. These diagrams allow designs to be reviewed from several perspectives so that designers can ensure that they have the optimal data design for their project and that the design is flexible enough to allow for potential changes and/or extension in the future. Additionally, these basic models can always be further refined to show different levels of detail depending on the target audience, through the use of three different types of models.

    Conceptual Data Model (CDM)
    Conceptual Data Models include all key entities and relationships, giving a viewer a high-level understanding of attributes. Conceptual data models are created by gathering and analyzing information from various sources pertaining to a project during the typical planning phase of a project.

    Logical Data Model (LDM)
    Logical Data Models are conceptual data models that have been expanded to include implementation details pertaining to the data that will be stored. Additionally, this model typically represents an organization’s business requirements and business rules by defining the various attribute data types and relationships regarding each entity. This additional information can be directly translated to the Physical Data Model, which reduces the actual time needed to implement it.

    Physical Data Model (PDM)
    Physical Data Models are transformed Logical Data Models that include the necessary tables, columns, relationships and database properties for the creation of a database. This model also allows for considerations regarding performance, indexing and denormalization that are applied through database rules and data integrity.

    Further reasons why we actually use models in modern application/database development can be seen in the benefits that data modeling provides for data modelers and projects themselves. Benefits of data modeling, according to Applied Information Science:

    Abstraction allows data designers to remove concepts and ideas from hard facts in the form of data. This gives data designers the ability to express general concepts and/or ideas in a generic form through the use of symbols to represent data items and the relationships between the items.

    Transparency through the use of data models allows complex ideas to be translated into simple symbols so that the concept can be understood from all viewpoints, limiting the amount of confusion and misunderstanding.

    Effectiveness in regards to tuning a model for acceptable performance while maintaining affordable operational costs. In addition, it allows systems to be built on a solid foundation in terms of data.

    I shudder at the thought of a world without data modeling; think about it: data is everywhere in our lives. Data modeling allows a design to be optimized for performance and for the reduction of duplication. If one were to design a database without data modeling, I would expect the first thing to be impacted to be database performance, due to the poorly designed database, and there would be a greater chance of unnecessary data duplication, which would also contribute to excessive query times because unneeded records would need to be processed. You could say that a data designer designing a database without a model is like a box of chocolates: you will never know what kind of database you will get until after it is built.

    Read the article

  • Clever memory usage through the years

    - by Ben Emmett
    A friend and I were recently talking about the really clever tricks people have used to get the most out of memory. I thought I’d share my favorites, and would love to hear yours too!

    Interleaving on drum memory
    Back in the ye olde days before I’d been born (we’re talking the 50s / 60s here), working memory commonly took the form of rotating magnetic drums. These would spin at a constant speed, and a fixed head would read from memory when the correct part of the drum passed it by, a bit like a primitive platter disk. Because each revolution took a few milliseconds, programmers took to manually arranging information non-sequentially on the drum, timing when an instruction or memory address would need to be accessed, then spacing information accordingly around the edge of the drum, thus reducing the access delay. Similar techniques were still used on hard disks and floppy disks into the 90s, but have become irrelevant with modern disk technologies.

    The Hashlife algorithm
    Conway’s Game of Life has attracted numerous implementations over the years, but Bill Gosper’s Hashlife algorithm is particularly impressive. Taking advantage of the repetitive nature of many cellular automata, it uses a quadtree structure to store the hashes of pieces of the overall grid. Over time there are fewer and fewer new structures which need to be evaluated, so it starts to run faster with larger grids, drastically outperforming other algorithms both in terms of speed and the size of grid which can be simulated. The actual amount of memory used is huge, but it’s used in a clever way, so it makes the list.

    Elite’s procedural generation
    Ok, so this isn’t exactly a memory optimization – more a storage optimization – but it gets an honorable mention anyway. When writing Elite, David Braben and Ian Bell wanted to build a rich world which gamers could explore, but their 22K of memory was something of a limitation (for comparison, that’s about the size of my avatar picture at the top of this page). They procedurally generated all the characteristics of the 2048 planets in their virtual universe, including the names, which were stitched together using a lookup table of parts of names. In fact the original plans were for 2^52 planets, but it was decided that that was probably too many. Oh, and they did all that in assembly language. Other games of the time used similar techniques too – The Sentinel’s landscape generation algorithm being another example.

    Modern Garbage Collectors
    Garbage collection in managed languages like Java and .NET ensures that most of the time, developers stop needing to care about how they use and clean up memory, as the garbage collector handles it automatically. Achieving this without killing performance is a near-miraculous feat of software engineering. Much like when learning chemistry, you find that every time you think you understand how the garbage collector works, it turns out to be a mere simplification; there are yet more complexities and heuristics to help it run efficiently. Of course introducing memory problems is still possible (and there are tools like our memory profiler to help if that happens to you) but they’re much, much rarer.

    A cautionary note
    In the examples above, there were good and well understood reasons for the optimizations, but cunningly optimized code has usually had to trade away readability and maintainability to achieve its gains. Trying to optimize memory usage without being pretty confident that there’s actually a problem is doing it wrong.

    So what have I missed? Tell me about the ingenious (or stupid) tricks you’ve seen people use.

    Ben
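    As an addendum to the Hashlife entry above, here is a heavily simplified C++ sketch of the hash-consing trick it relies on: quadtree nodes are canonicalized in a table keyed by their four children, so any sub-region of the grid that repeats (in space or across time) is stored exactly once. This only illustrates the memory-sharing idea, not Gosper's full algorithm (no memoized stepping, no tuned hashing, no garbage collection of the node table), and all of the names are mine:

        #include <map>
        #include <tuple>

        struct Node
        {
            int level;                       // level 0 = a single cell
            bool alive;                      // only meaningful at level 0
            const Node *nw, *ne, *sw, *se;   // children, each at level - 1
        };

        // Canonical store: at most one Node exists per distinct set of children.
        typedef std::tuple<const Node*, const Node*, const Node*, const Node*> Key;
        static std::map<Key, const Node*> g_interned;

        const Node* MakeCell(bool alive)
        {
            static const Node dead = {0, false, 0, 0, 0, 0};
            static const Node live = {0, true,  0, 0, 0, 0};
            return alive ? &live : &dead;
        }

        // Returns the canonical node with these children, creating it only once.
        const Node* MakeNode(const Node* nw, const Node* ne, const Node* sw, const Node* se)
        {
            Key key(nw, ne, sw, se);
            std::map<Key, const Node*>::iterator it = g_interned.find(key);
            if (it != g_interned.end())
                return it->second;           // this region already exists, so reuse it

            const Node* n = new Node{nw->level + 1, false, nw, ne, sw, se};
            g_interned[key] = n;
            return n;
        }

    Because identical sub-regions collapse to a single shared Node, an enormous but highly repetitive grid (say, mostly empty space) costs only a handful of nodes, and testing whether two regions are identical becomes a pointer comparison. Gosper's real algorithm adds a memoized "advance this node into the future" function on top of the same structure, which is where the dramatic speedups come from.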

    Read the article

  • The inevitable Hello World post!

    - by brendonpage
    Greetings to anyone reading this! This is my first of hopefully many posts. I would like to use this post to introduce myself and to let you know what to expect from this blog in future.

    Okay, so a bit about myself. In case you missed the name of this blog, my name is Brendon Page! I am a software developer from South Africa and work for a small company whose main focus is producing software for the kitchen cupboard industry, although from time to time we do produce custom solutions for other industries. I work in a small team of 3, including myself, and am fortunate enough to work from home! I have been involved in IT since 1996, which is when I got my first PC, and started working as a junior programmer in 2003. Outside of work I enjoy playing squash, PC games and of course LANing with my friends. If I get any free time between all of that I will usually dedicate some of it to a personal project; these are mainly prototypes for an idea I have had or for something that could be useful at work.

    I was in two minds on whether to include a photo of myself. The reason for this is that while I was looking for a suitable photo to use, it dawned on me how much time I dedicate to pulling funny faces in photos! I also realized how little I shave, which I blame completely on working from home. So after much debate here I am, funny face, beard and all!

    Now that you know a bit about me, let's move on to what to expect from this blog. I work predominantly with Microsoft technologies, so most if not all of my posts will be related to something Microsoft. Since most of my job entails software development, you can expect a lot of posts which deal with the .NET Framework. I am currently working on a large Silverlight project, so my first few posts will be targeted in that direction. I will be striving to make the content of my posts as useful as possible from both an explanation and a code perspective, and I aim to include a working solution for every post, which I will put up on my SkyDrive for download.

    Here is what I have planned for my next few posts:

    Where did my session variables go? Here I will take you through the lessons I learnt the hard way about the ASP.NET session. I am not going to go into too much depth in this post, as there is already a lot of information available on it. I mainly want to cover it in an effort to keep the scope creep of my posts to a minimum; some of the solutions I upload will use it and I would like to have a post that I can reference to explain why I am doing something a certain way.

    Uploading files through Silverlight. Again there is a lot of existing information on this topic, so I won't be going into too much depth, but I will be using the solution from this as a base for my next post.

    Generating and Displaying DeepZoom images dynamically in Silverlight. Well, the title pretty much speaks for itself on this one. As I mentioned, I will be building off the solution that I create in my ‘Uploading files through Silverlight’ post.

    Securing DeepZoom images using a custom implementation of the MultiScaleTileSource. In this post I will look at the privacy issue surrounding the default usage of DeepZoom images in Silverlight and how to overcome it. This makes the use of DeepZoom in privacy-conscious applications more viable.

    Thanks to anyone who actually read this post! I look forward to producing more, which will hopefully be helpful to you.

    Read the article

  • Oracle Retail Mobile Point-of-Service

    - by David Dorf
    When most people discuss mobile in retail, they immediately go to shopping applications.  While I agree the consumer side of mobile is huge, I believe its also important to arm store associates with mobile tools.  There are around a dozen major roll-outs of mobile POS to chain retailers, and all have been successful.  This does not, however, signal the demise of traditional registers.  Retailers will adopt mobile POS slowly and reduce the number of fixed registers over time, but there's likely to be a combination of both for the foreseeable future.  Even Apple retains at least one fixed register in every store, you just have to know where to look. The business benefits for mobile POS are pretty straightforward: 1. Faster checkout.  Walmart's CFO recently reported that for every second they shave off the average transaction time, they can potentially save $12M a year in labor.  I think its more likely that labor will be redeployed to enhance the customer experience. 2. Smarter associates.  The sales associates on the floor need the same access to information that consumers have, if not more.  They need ready access to product details, reviews, inventory, etc. to meet consumer expectations.  In a recent study, 40% of consumers said a savvy store associate can impact their final product selection more than a website. 3. Lower costs.  Mobile POS hardware (iPod touch + sled) costs about a fifth of fixed registers, not to mention the reclaimed space that can be used for product displays. But almost all Mobile POS solutions can claim those benefits equally.  Where there's differentiation is on the technical side.  Oracle recently announced availability of the Oracle Retail Mobile Point-of-Service, and it has three big technology advantages in the market: 1. Portable. We used a popular open-source component called PhoneGap that abstracts the app from the underlying OS and hardware so that iOS, Android, and other platforms could be supported.  Further, we used Web technologies such as HTML5 and JavaScript, which are commonly known by many programmers, as opposed to ObjectiveC which is more difficult to find.  The screen can adjust to different form-factors and sizes, just like you see with browsers.  In the future when a new, zippy device gets released, retailers will have the option to move to that device more easily than if they used a native app. 2. Flexible.  Our Mobile POS is free with the Oracle Retail Point-of-Service product.  Retailers can use any combination of fixed and mobile registers, and those ratios can change as required.  Perhaps start with 1 mobile and 4 fixed per store, then transition over time to 4 mobile and 1 fixed without any additional software licenses.  Our scalable solution supports lots of combinations. 3. Consistent.  Because our Mobile POS is fully integrated to our traditional POS, the same business logic is reused.  Third-party Mobile POS solutions often handle pricing, promotions, and tax calculations separately leading to possible inconsistencies within the store.  That won't happen with Oracle's solution. For many retailers, Mobile POS can lower costs, increase customer service, and generally enhance a consumer's in-store experience.  Apple led the way, but lots of other retailers are discovering the many benefits of adding mobile capabilities in their stores.  Just be sure to examine both the business and technology benefits so you get the most value from your solution for the longest period of time.

    Read the article

  • Delivering SOA Governance with EAMS and Oracle Enterprise Repository by Link Consulting Team

    - by JuergenKress
    In the last 12 years Link Consulting has been making its presence in specific areas such as Governance and Architecture, both in terms of practices and methodologies, products, know-how and technological expertise. The Enterprise Architecture Management System - Oracle Enterprise Edition (EAMS - OER Edition) is the result of this experience and combines the architecture management solution with OER in order to deliver a product specialized for SOA Governance that gathers the better of two worlds in solution that enables SOA Governance projects, initiatives and programs. Enterprise Architecture Management System Enterprise Architecture Management System (EAMS), is an automation based solution that enables the efficient management of Enterprise Architectures. The solution uses configured enterprise repositories and takes advantages of its features to provide automation capabilities to the users. EAMS provides capabilities to create/customize/analyze repository data, architectural blueprints, reports and analytic charts. Oracle Enterprise Repository Oracle Enterprise Repository (OER) is one of the major and central elements of the Oracle SOA Governance solution. Oracle Enterprise Repository provides the tools to manage and govern the metadata for any type of software asset, from business processes and services to patterns, frameworks, applications, components, and models. OER maps the relationships and inter-dependencies that connect those assets to improve impact analysis, promote and optimize their reuse, and measure their impact on the bottom line. It provides the visibility, feedback, controls, and analytics to keep your SOA on track to deliver business value. The intense focus on automation helps to overcome barriers to SOA adoption and streamline governance throughout the lifecycle. Core capabilities of the OER include: Asset Management Asset Lifecycle Management Usage Tracking Service Discovery Version Management Dependency Analysis Portfolio Management EAMS - OER Edition The solution takes the advantages and features from both products and combines them in a symbiotic tool that enhances the quality of SOA Governance Initiatives and Programs. EAMS is able to produce a vast number of outputs by combining its analytical engine, SOA-specific configurations and the assets in OER and other related tools, catalogs and repositories. The configurations encompass not only the extendable parametrization of the metadata but also fully configurable blueprints, PowerPoint reports, charts and queries. The SOA blueprints The solution comes with a set of predefined architectural representations that help the organization better perceive their SOA landscape. More blueprints can be easily created in order to accommodate the organizations needs in terms of detail, audience and metadata. Charts & Dashboards The solution encompasses a set of predefined charts and dashboards that promote a more agile way to control and explore the assets. Time Based Visualization All representations are time bound, and with EAMS - OER you can truly govern SOA with a complete view of the Past, Present and Future; The solution delivers Gap Analysis, a project oriented approach while taking into consideration the As-Was, As-Is an To-Be. Time based visualization differentiating factors: Extensive automation and maintenance of architectural representations Organization wide solution. Easy access and navigation to and between all architectural artifacts and representations. 
Flexible meta-model, customization and extensibility capabilities. Lifecycle management and enforcement of the time dimension over all the repository content. Profile based customization. Comprehensive visibility Architectural alignment Friendly and striking user interfaces For more information on EAMS visit us here. For more information on SOA visit us here. SOA & BPM Partner Community For regular information on Oracle SOA Suite become a member in the SOA & BPM Partner Community for registration please visit  www.oracle.com/goto/emea/soa (OPN account required) If you need support with your account please contact the Oracle Partner Business Center. Blog Twitter LinkedIn Mix Forum Technorati Tags: Link Consulting,OER,OSR,SOA Governance,SOA Community,Oracle SOA,Oracle BPM,BPM Community,OPN,Jürgen Kress

    Read the article

  • Acceptance tests done first...how can this be accomplished?

    - by Crazy Eddie
    The basic gist of most Agile methods is that a feature is not "done" until it's been developed, tested, and in many cases released. This is supposed to happen in quick turnaround chunks of time such as "Sprints" in the Scrum process. A common part of Agile is also TDD, which states that tests are done first. My team works on a GUI program that does a lot of specific drawing and such. In order to provide tests, the testing team needs to be able to work with something that at least attempts to perform the things they are trying to test. We've found no way around this problem. I can very much see where they are coming from, because if I were trying to write software that targeted some basically mysterious interface I'd have a very hard time. Although we have the behavior fairly well specified, the exact process of interacting with various UI elements when it comes to automation seems to be too unique to a feature to allow testers to write automated scripts to drive something that does not exist. Even if we could, a lot of things end up turning up later as having been missing from the specification. One thing we considered doing was having the testers write test "scripts" that are more like a set of steps that must be performed, as described from a use-case perspective, so that they can be "automated" by a human being. This can then be performed by the developer(s) writing the feature and/or verified by someone else. When the testers later get an opportunity, they automate the "script", mainly for regression purposes. This didn't end up catching on in the team though. The testing part of the team is actually falling behind us by quite a margin, which is one reason why the apparent extra time of developing a "script" for a human being to perform just did not happen: they're under a crunch to keep up with us developers. If we waited for them, we'd get nothing done. It's not their fault really; they're a bottleneck, but they're doing what they should be and working as fast as possible. The process itself seems to be set up against them. Very often we end up having to go back a month or more in what we've done to fix bugs that the testers have finally gotten around to checking. It's an ugly truth that I'd like to do something about. So what do other teams do to solve this fail cascade? How can we get testers ahead of us, and how can we make it so that there's actually time for them to write tests for the features we do in a sprint without making us sit and twiddle our thumbs in the meantime? As it's currently going, getting a feature "done" by agile definitions would mean having developers work for one week, then testers work the second week, with developers hopefully able to fix all the bugs they come up with in the last couple of days. That's just not going to happen, even if I agreed it was a reasonable solution. I need better ideas...

    Read the article

  • A developer&rsquo;s WBS &ndash; 3 factors of 5

    - by johndoucette
    As a development manager, I have requested work breakdown structures (WBS) many times from the dev leads. Everyone has their own approach, and why it sometimes takes days to get this simple list is often frustrating. Here is a simple way to get that elusive WBS done in 30 minutes and have 125 items in your list – well, 126. The WBS is made up of parent-child entities representing the overall outcome of the project. At the bottom of the hierarchical list should be the task item that a developer would perform in support of the branch in the list or WBS. Because I work with different dev leads on every project, I always ask, “what time value would you like to see at the lowest task in order to assign it to a developer and ensure it gets done within the timeframe?” I am partial to a task being 8 hours. Some like 8 to 24 hours. Stay away from tasks defaulting to 1 week. The task becomes way too vague and completeness becomes hard to manage, especially on short budgets. As a developer, your focus is identifying the tasks you need to accomplish in order to deliver the product. As a project manager, you will take the developer's WBS and add all the “other stuff” like quality testing, meetings, documentation, transition to maintenance, etc… Start your exercise with the name of the product you are delivering as a result of the project. You should be able to represent what you are building and deploying with one to three words. Example: XYZ Public Website, Middleware BizTalk Application. The reason you start with that single identifier is to always see the list as the product. It helps during each of the next three passes. Now, choose 5 tasks which in their entirety represent the product you will be delivering and add them to the list under the product name you created earlier:
Public Website
    Security
    Sites
    Infrastructure
    Publishing
    Creative
Continue this concept of seeing the list as the complete picture and decompose it one more level. You should have 25 items.
Public Website
    Security
        Authentication
        Login Control
        Administration
        DRM
        Workflow
    Sites
        Masterpages
        Page Layouts
        Web Parts (RIA, Multimedia)
        Content Types
        Structures
    Infrastructure
        ...
    Publishing
        ...
    Creative
        ...
And one more time for a total of 125 items. The top item makes the list 126.
Public Website
    Security
        Authentication
            Install (AD/ADAM/LDAP/SQL)
            Configuration
            Management
            Web App Configuration
            Implement Provider
        Login Control
            Login Form
            Login/Logoff
            pw change
            pw recover/forgot
            email verification
        Administration
            ...
        DRM
            ...
        Workflow
            ...
    Sites
        Masterpages
        Page Layouts
        Web Parts (RIA, Multimedia)
        Content Types
        Structures
    Infrastructure
        ...
    Publishing
        ...
    Creative
        ...
The next step is to make sure the task at the bottom of every branch represents the “time value” you planned for the project. You can add more to the WBS, and of course if you can’t find 5 items, 4 is fine. If a task can be done in a fraction of the time value you determined for the project, try to roll it up into a larger task. In the task actions (later, when the iteration is being planned), decompose the details back to the simple tasks. Now, go estimate!
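As a side note, if your team keeps the WBS in a database rather than a spreadsheet, a minimal sketch of the parent-child shape might look like the following; the table, column names and the 8-hour threshold are illustrative assumptions, not something from the original post:

    -- Illustrative parent-child WBS storage (hypothetical names)
    CREATE TABLE dbo.WbsItem
    (
        WbsItemId     int          NOT NULL PRIMARY KEY,
        ParentId      int          NULL REFERENCES dbo.WbsItem (WbsItemId),
        Name          varchar(200) NOT NULL,
        EstimateHours decimal(5,1) NULL   -- filled in only for leaf-level tasks
    );

    -- Leaf tasks are rows no other row points to; anything over the agreed
    -- 8-hour time value is a candidate for further decomposition.
    SELECT w.WbsItemId, w.Name, w.EstimateHours
    FROM dbo.WbsItem AS w
    WHERE NOT EXISTS (SELECT 1 FROM dbo.WbsItem AS c WHERE c.ParentId = w.WbsItemId)
      AND w.EstimateHours > 8;

A leaf row is simply one that no other row references as a parent, which also makes it a convenient place to enforce the “time value” rule during planning.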

    Read the article

  • Let Me Show You Something: Instagram, Vine and Snapchat for Brands

    - by Mike Stiles
    While brands are well aware of how much more impactful images are than text-only posts on social channels, today you’re additionally being presented with platform after additional platform for hosting, doctoring and sharing photos and videos.  Can you play in every sandbox? And if you do, can you be brilliant on all of them? As has usually been the case, so far brands are sticking their toes into new platforms while not actually committing to them, or strategizing for them, or resourcing them. TrackMaven found of the 123 F500 companies using Instagram, only 22% of them are active on it. Likewise, research from Simply Measured found brands are indeed jumping in, with the number establishing a presence on Instagram up 55% over the past year. Users want them there…brand engagement has exploded 350%, and over 1/3 of the top brands have at least 10,000 followers. BUT…the top 10 brands are generating 33% of all posts, reaping 83% of all engagement. Things are also growing on Twitter’s Vine, the 6-second looping video app that hit 40 million users in August. The 7th Chamber says 5 tweets a second contain a Vine link. Other studies say branded Vines are 4 times more likely to be shared and seen than rank-and-file branded videos. Why? Users know that even if a video is pure junk, they won’t get robbed of too much of their valuable time. Vine is always upgrading so you can make sure your videos are worth viewers’ time. You can now edit videos, and save & work on several projects concurrently. What you can’t do is upload a finely crafted video into Vine, but you can do that with Instagram. The key to success? Same as with all other content; make it of value. Deliver a laugh or a lesson or both. How-to, behind the scenes peeks, contests, demos, all make sense in the short video format. Or follow Nash Grier’s example, which is to just have fun with and connect to your viewers, earning their trust that your next Vine will be as good as the last. Nash is only 15, has over 1.4 million followers, and adds about 100,000 a week. He broke out when one of his videos was re-Vined by some other kid with 300,000 followers. Make good stuff, get it in front of influencers, and your brand Vines could break out as well. Then there’s Snapchat, the “this photo will self destruct” platform. How can that be of use to brands besides offering coupons that really expire? The jury is out. But with an audience of over 100 million and a valuation of $800 million, media-with-a-time-limit is compelling. Now there’s “Snapchat Stories” that can last 24 hours and be shared to the public at large. You might be able to capitalize on how much more focus gets put on content when there’s a time limit on its availability. The underlying truth to all of this is, these are all tools. Very cool, feature rich tools, but tools. You can give the exact same art kit to 5 different people and you’d get back 5 very different works, ranging from worthless garbage to masterpiece. Brands are being called upon to be still and moving image artists. That’s what your customers are used to seeing, from a variety of sources. Commit to communicating with them accordingly. @mikestiles Photo: stock.xchng

    Read the article

  • Good DBAs Do Baselines

    - by Louis Davidson
    One morning, you wake up and feel funny. You can’t quite put your finger on it, but something isn’t quite right. What now? Unless you happen to be a hypochondriac, you likely drag yourself out of bed, get on with the day and gather more “evidence”. You check your symptoms over the next few days; do you feel the same, better, worse? If better, then great, it was some temporary issue, perhaps caused by an allergic reaction to some suspiciously spicy chicken. If the same or worse, then you go to the doctor for some health advice, but armed with some data to share, and having ruled out certain possible causes that are fixed with a bit of rest and perhaps an antacid. Whether you realize it or not, in comparing how you feel one day to the next, you have taken baseline measurements. In much the same way, a DBA uses baselines to gauge the health of their database servers. Of course, while SQL Server is very willing to share data regarding its health and activities, it has almost no idea of the difference between good and bad. Over time, experienced DBAs develop “mental” baselines with which they can gauge the health of their servers almost as easily as their own bodies. They accumulate knowledge of the daily, natural state of each part of their database system, and so know instinctively when one of their databases “feels funny”. Equally, they know when an “issue” is just a passing tremor. They see their SQL Server with all of its four CPU cores running close to 100% and don’t panic anymore. Why? It’s 5PM and every day the same thing occurs when the end-of-day reports, which are very CPU intensive, are running. Equally, they know when they need to respond in earnest when it is the first time they have heard about an issue, even if it has been happening every day. Nevertheless, no DBA can retain mental baselines for every characteristic of their systems, so we need to collect physical baselines too. In my experience, surprisingly few DBAs do this very well. Part of the problem is that SQL Server provides a lot of instrumentation. If you look, you will find an almost overwhelming amount of data regarding user activity on your SQL Server instances, and use and abuse of the available CPU, I/O and memory. It seems like a huge task even to work out which data you need to collect, let alone start collecting it on a regular basis, managing its storage over time, and performing detailed comparative analysis. Without baselines, though, it is very difficult to pinpoint what ails a server just by looking at a single snapshot of the data, or to spot retrospectively what caused the problem by examining aggregated data for the server, collected over many months. It isn’t as hard as you think to get started. You’ve probably already established some troubleshooting queries of the type SELECT Value FROM SomeSystemTableOrView. Capturing a set of baseline values for such a query can be as easy as changing it as follows: INSERT INTO BaseLine.SomeSystemTable (value, captureTime) SELECT Value, SYSDATETIME() FROM SomeSystemTableOrView; Of course, there are monitoring tools that will collect and manage this baseline data for you, automatically, and allow you to perform comparison of metrics over different periods. However, to get yourself started and to prove to yourself (or perhaps the person who writes the checks for tools) the value of baselines, stick something similar to the above query into an agent job, running every hour or so, and you are on your way with no excuses! 
Then, the next time you investigate a slow server, and see x open transactions, y users logged in, and z rows added per hour in the Orders table, compare to your baselines and see immediately what, if anything, has changed!
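To make that concrete, here is a minimal sketch of an hourly capture, assuming SQL Server and using the wait-statistics DMV purely as an example source; the BaseLine schema and table below are illustrative, not something the article prescribes:

    -- One-time setup: somewhere to keep the samples (schema/table names are illustrative)
    CREATE SCHEMA BaseLine;
    GO
    CREATE TABLE BaseLine.WaitStats
    (
        wait_type           nvarchar(60) NOT NULL,
        waiting_tasks_count bigint       NOT NULL,
        wait_time_ms        bigint       NOT NULL,
        signal_wait_time_ms bigint       NOT NULL,
        capture_time        datetime2    NOT NULL
    );
    GO
    -- Schedule this statement in an agent job every hour or so to accumulate baseline samples
    INSERT INTO BaseLine.WaitStats
        (wait_type, waiting_tasks_count, wait_time_ms, signal_wait_time_ms, capture_time)
    SELECT wait_type, waiting_tasks_count, wait_time_ms, signal_wait_time_ms, SYSDATETIME()
    FROM sys.dm_os_wait_stats;

Comparing this week’s 5PM sample with last week’s then becomes a simple query against capture_time rather than guesswork.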

    Read the article

  • LUKOIL Overseas Holding Optimizes Oil Field Development Projects with Integrated Project Management

    - by Melissa Centurio Lopes
    LUKOIL Overseas Group is a growing oil and gas company that is an integral part of the vertically integrated oil company OAO LUKOIL. It is engaged in the exploration, acquisition, integration, and efficient development of oil and gas fields outside the Russian Federation to promote transforming LUKOIL into a transnational energy company. In 2010, the company signed a 20-year development project for the giant West Qurna 2 oil field in Iraq. Executing 10,000 to 15,000 project activities simultaneously on 14 major construction and drilling projects for the West Qurna-2 project meant the company needed a clear picture, in real time, of the dependencies between its capital construction, geologic exploration and sinking projects—required for its infrastructure-building oil field development projects in Iraq. LUKOIL Overseas Holding deployed Oracle’s Primavera P6 Enterprise Project Portfolio Management to generate structured project management information and optimize planning, monitoring, and analysis of all engineering and commercial activities—such as tenders and bulk procurement of materials and equipment—related to oil field development projects.
A word from LUKOIL Overseas Holding Ltd. “Previously, we created project schedules on desktop computers and uploaded them to the project server to be merged into one big file for each project participant to access. This was not scalable, as we’ve grown and now run up to 15,000 activities in numerous projects and subprojects at any time. With Oracle’s Primavera P6 Enterprise Project Portfolio Management, we can now work concurrently on projects with many team members, enjoy absolute security, and issue new baselines for all projects and project participants once a week, with ease.” – Sergey Kotov, Head of IT and the Communication Office, LUKOIL Mid-East Ltd.
Oracle Primavera Solutions:
· Facilitated managing dependencies between projects by enabling the general scheduler to reschedule all projects and subprojects once a week, realigning the 10,000 to 15,000 project activities that the company runs at any time
· Replaced Microsoft Project and a paper-based system with a complete solution that provides structured project data
· Enhanced data security by establishing project management security policies that enable only authorized project members to edit their project tasks, while enabling each project participant to view all project data relevant to that individual’s task
· Enabled the company to monitor project progress in comparison to the projected plan, based on physical project assets, to determine if each project is on track to conclude within its time and budget limitations
To view the full list of solutions, view here.
“Oracle Gold Partner Parma Telecom was key to our successful Primavera deployment, implementing the software’s basic functionalities, such as project content, timeframes management, and cost management, in addition to performing its integration with our enterprise resource planning system and intranet portal within ten months and in accordance with budgets,” said Rafik Baynazarov, head of the master planning and control office, LUKOIL Mid-East Ltd. To read the full version of the customer success story, please view here.

    Read the article

  • SQL Saturday 27 (Portland, Oregon)

    - by BuckWoody
    I’m sitting in the Seattle airport, waiting for my flight to Silicon Valley, California, for the SQL Server 2008 R2 Launch Event. By some quirk of nature, they are asking me to emcee the event – but that’s another post entirely. I’m reflecting on the SQL Saturday 27 event that was just held in Portland, Oregon this last Saturday. These are not Microsoft-sponsored events – it’s truly the community at work. Think of a big user-group meeting – I mean REALLY big – held in a central location, like at a college (as ours was) or some larger, inexpensive venue like that. Everyone there is volunteering – it’s my own money and time to drive several hours to a hotel for the night, feed myself and present. It’s their own time and money for the folks that organize the event – unless a vendor or two steps in to help. It’s their own time and money for the attendees to drive a long way, spend the night and their Saturday to listen to the speakers. Why do all this? Because everybody benefits. Every speaker learns something new, meets new people, and reaches a new audience. Every volunteer does the same. And the attendees? Well, it’s pretty obvious what they get: a 7 AM to 10 PM extravaganza of knowledge from every corner of the product. In fact, this year the Portland group hooked up with the CodeCamp folks and held a combined event. We had over 850 people, and I had everyone from data professionals to developers in my sessions. So I’ll take this opportunity to do two things: to say “thank you” to all of the folks who attended, from those who spoke to those who worked and those who came to listen, and to challenge you to attend the next SQL Saturday anywhere near you. You can find the list here: http://www.sqlsaturday.com/. Don’t see anything in your area? Start one! The PASS folks have a package that will show you how. Sure, it’s a big job, but the key is to get as many people helping you as possible. Even if you have only a few dozen folks show up the first time, no worries. The first events I presented at had about 20 in the room. But not this week. See you at the Launch Event if you’re near the San Francisco area tomorrow, and see you at the Redmond SQL Saturday and TechEd if not.

    Read the article

  • Learn Many Languages

    - by Jeff Foster
    My previous blog, Deliberate Practice, discussed the need for developers to “sharpen their pencil” continually, by setting aside time to learn how to tackle problems in different ways. However, the Sapir-Whorf hypothesis, a contested and somewhat-controversial concept from language theory, seems to hold reasonably true when applied to programming languages. It states that: “The structure of a language affects the ways in which its speakers conceptualize their world.” If you’re constrained by a single programming language, the one that dominates your day job, then you only have the tools of that language at your disposal to think about and solve a problem. For example, if you’ve only ever worked with Java, you would never think of passing a function to a method. A good developer needs to learn many languages. You may never deploy them in production, you may never ship code with them, but by learning a new language, you’ll have new ideas that will transfer to your current “day-job” language. With the abundant choices in programming languages, how does one choose which to learn? Alan Perlis sums it up best. “A language that doesn‘t affect the way you think about programming is not worth knowing“ With that in mind, here’s a selection of languages that I think are worth learning and that have certainly changed the way I think about tackling programming problems. Clojure Clojure is a Lisp-based language running on the Java Virtual Machine. The unique property of Lisp is homoiconicity, which means that a Lisp program is a Lisp data structure, and vice-versa. Since we can treat Lisp programs as Lisp data structures, we can write our code generation in the same style as our code. This gives Lisp a uniquely powerful macro system, and makes it ideal for implementing domain specific languages. Clojure also makes software transactional memory a first-class citizen, giving us a new approach to concurrency and dealing with the problems of shared state. Haskell Haskell is a strongly typed, functional programming language. Haskell’s type system is far richer than C# or Java, and allows us to push more of our application logic to compile-time safety. If it compiles, it usually works! Haskell is also a lazy language – we can work with infinite data structures. For example, in a board game we can generate the complete game tree, even if there are billions of possibilities, because the values are computed only as they are needed. Erlang Erlang is a functional language with a strong emphasis on reliability. Erlang’s approach to concurrency uses message passing instead of shared variables, with strong support from both the language itself and the virtual machine. Processes are extremely lightweight, and garbage collection doesn’t require all processes to be paused at the same time, making it feasible for a single program to use millions of processes at once, all without the mental overhead of managing shared state. The Benefits of Multilingualism By studying new languages, even if you won’t ever get the chance to use them in production, you will find yourself open to new ideas and ways of coding in your main language. For example, studying Haskell has taught me that you can do so much more with types and has changed my programming style in C#. A type represents some state a program should have, and a type should not be able to represent an invalid state. 
I often find myself refactoring methods like this…

    void SomeMethod(bool doThis, bool doThat)
    {
        if (!(doThis ^ doThat))
            throw new ArgumentException("At least one arg should be true");
        if (doThis) DoThis();
        if (doThat) DoThat();
    }

…into a type-based solution, like this:

    enum Action { DoThis, DoThat, Both };

    void SomeMethod(Action action)
    {
        if (action == Action.DoThis || action == Action.Both) DoThis();
        if (action == Action.DoThat || action == Action.Both) DoThat();
    }

At this point, I’ve removed the runtime exception in favor of a compile-time check. This is a trivial example, but it is just one of many ideas that I’ve taken from one language and implemented in another.

    Read the article

  • Cloud Computing Pricing - It's like a Hotel

    - by BuckWoody
    I normally don't go into the economics or pricing side of Distributed Computing, but I've had a few friends that have been surprised by a bill lately and I wanted to quickly address at least one aspect of it. Most folks are used to buying software and owning it outright - like buying a car. We pay a lot for the car, and then we use it whenever we want. We think of the "cloud" services as a taxi - we'll just pay for the ride we take and no more. But it's not quite like that. It's actually more like a hotel. When you subscribe to Azure using a free offering like the MSDN subscription, you don't have to pay anything for the service. But when you create an instance of a Web or Compute Role, Storage, that sort of thing, you can think of it as checking into a hotel room. You get the key, you pay for the room. For Azure, using bandwidth, CPU and so on is billed just like it states in the Azure Portal. So in effect there is a cost for the service and then a cost to use it, like water or power or any other utility. Where this bit some folks is that they created an instance, played around with it, and then left it running. No one was using it, no one was on - so they thought they wouldn't be charged. But they were. It wasn't much, but it was a surprise. They had the hotel room key, but they weren't in the room, so to speak. To add to their frustration, they had to talk to someone on the phone to cancel the account. I understand the frustration. Although we have all this spelled out in the sign-up area, not everyone has the time to read through all that. I get that. So why not make this easier? As an explanation, we bill for that time because the instance is still running, and we have to tie up resources to be available the second you want them, and that costs money. As far as being able to cancel from the portal, that's also something that needs to be clearer. You may not be aware that you can spin up instances using code - and so cancelling from the Portal would allow you to do the same thing. Since a mistake in code could erase all of your instances and the account, we make you call to make sure you're you and you really want to take it down. Not a perfect system by any means, but we'll evolve this as time goes on. For now, I wanted to make sure you're aware of what you should do. By the way, you don't have to cancel your whole account to avoid being billed. Just delete the instance from the portal and you won't be charged. You don't have to call anyone for that. And just FYI - you can download the SDK for Azure and never even hit the online version at all for learning and playing around. No sign-up, no credit card, PO, nothing like that. In fact, that's how I demo Azure all the time. Everything runs right on your laptop in an emulated environment.

    Read the article

  • lvm disappeared after disc replacement on raid10

    - by user142295
    here my problem: I am running ubuntu 12.04 on a raid10 (4 disks), on top of which I installed an lvm with two volume groups (one for /, one for /home). The layout of the disks are as follows: Disk /dev/sda: 1500.3 GB, 1500301910016 bytes 255 heads, 63 sectors/track, 182401 cylinders, total 2930277168 sectors Units = sectors of 1 * 512 = 512 bytes Sector size (logical/physical): 512 bytes / 512 bytes I/O size (minimum/optimal): 512 bytes / 512 bytes Disk identifier: 0x0003f3b6 Device Boot Start End Blocks Id System /dev/sda1 * 63 481949 240943+ 83 Linux /dev/sda2 481950 2910640634 1455079342+ fd Linux raid autodetect /dev/sda3 2910640635 2930272064 9815715 82 Linux swap / Solaris Disk /dev/sdb: 1500.3 GB, 1500301910016 bytes 255 heads, 63 sectors/track, 182401 cylinders, total 2930277168 sectors Units = sectors of 1 * 512 = 512 bytes Sector size (logical/physical): 512 bytes / 512 bytes I/O size (minimum/optimal): 512 bytes / 512 bytes Disk identifier: 0x00069785 Device Boot Start End Blocks Id System /dev/sdb1 63 2910158684 1455079311 fd Linux raid autodetect /dev/sdb2 2910158685 2930272064 10056690 82 Linux swap / Solaris Disk /dev/sdc: 1500.3 GB, 1500301910016 bytes 255 heads, 63 sectors/track, 182401 cylinders, total 2930277168 sectors Units = sectors of 1 * 512 = 512 bytes Sector size (logical/physical): 512 bytes / 512 bytes I/O size (minimum/optimal): 512 bytes / 512 bytes Disk identifier: 0x00000000 Device Boot Start End Blocks Id System /dev/sdc1 63 2910158684 1455079311 fd Linux raid autodetect /dev/sdc2 2910158685 2930272064 10056690 82 Linux swap / Solaris Disk /dev/sdd: 1500.3 GB, 1500301910016 bytes 255 heads, 63 sectors/track, 182401 cylinders, total 2930277168 sectors Units = sectors of 1 * 512 = 512 bytes Sector size (logical/physical): 512 bytes / 512 bytes I/O size (minimum/optimal): 512 bytes / 512 bytes Disk identifier: 0x000f14de Device Boot Start End Blocks Id System /dev/sdd1 63 2910158684 1455079311 fd Linux raid autodetect /dev/sdd2 2910158685 2930272064 10056690 82 Linux swap / Solaris The first disk (/dev/sda) contains the /boot partition on /dev/sda1. I use grub2 to boot the system off this partition. On top of this raid10 I installed two volume groups, one for /, one for /home. This system worked well, I even exchanged two disks during the last two years. It always worked. But not this time. For the first time, /dev/sda broke. I do not know if this is an issue – I know I would have struggled anyways to overcome the problem with /boot installed on that disk and grub2 installed on the mbr of /dev/sda. 
Anyways, I did what I always did: start knoppix fire up the raid sudo mdadm --examine -scan which returns ARRAY /dev/md127 UUID=0dbf4558:1a943464:132783e8:19cdff95 start it up sudo mdadm --assemble /dev/md127 fail the failing disk (smart event) sudo mdadm /dev/md127 --fail /dev/sda2 remove the failing disk sudo mdadm /dev/md127 --remove /dev/sda2 stop the raid sudo mdadm -S /dev/md127 take out the disk replace it with a new one create the same partitions as on the failling one add it to the raid sudo mdadm --assemble /dev/md127 sudo mdadm /dev/md127 --add /dev/sda2 wait 4 hours All looks fine: cat /proc/mdstat returns: Personalities : [raid10] md127 : active raid10 sda2[0] sdd1[3] sdc1[2] sdb1[1] 2910158464 blocks 64K chunks 2 near-copies [4/4] [UUUU] unused devices: <none> and sudo mdadm --detail /dev/md127 returns /dev/md127: Version : 0.90 Creation Time : Wed Jun 10 13:08:46 2009 Raid Level : raid10 Array Size : 2910158464 (2775.34 GiB 2980.00 GB) Used Dev Size : 1455079232 (1387.67 GiB 1490.00 GB) Raid Devices : 4 Total Devices : 4 Preferred Minor : 127 Persistence : Superblock is persistent Update Time : Thu Mar 21 16:27:40 2013 State : clean Active Devices : 4 Working Devices : 4 Failed Devices : 0 Spare Devices : 0 Layout : near=2 Chunk Size : 64K UUID : 0dbf4558:1a943464:132783e8:19cdff95 (local to host Microknoppix) Events : 0.4824680 Number Major Minor RaidDevice State 0 8 2 0 active sync /dev/sda2 1 8 17 1 active sync /dev/sdb1 2 8 33 2 active sync /dev/sdc1 3 8 49 3 active sync /dev/sdd1 However, there is no trace of the volume groups. Rebooting into knoppix does not help Restarting the old system (I actually replugged and re-added the failing disk for that – the system begins to start, but then fails to see the / partition – no wonder if the volume group is gone) does not help. sudo vgscan, sudo vgdisplay, sudo lvs, sudo lvdisplay, sudo vgscan –mknodes all returned No volume groups found. I am completely at a loss. Can anyone tell me if and how I can recover my data? Thanks in advance!

    Read the article

  • Faster Trip to Innovation with Simplified Data Integration: Sabre Holdings Case Study

    - by Tanu Sood
    Author: Irem Radzik, Director of Product Marketing, Data Integration, Oracle
In today’s fast-paced, competitive environment, IT teams are under pressure to deliver technology solutions for many critical business initiatives as fast as possible. When the focus is on speed, it can be easy to continue to use old-style, point-to-point custom scripts that grow organically to the point where they are unmanageable and too costly to maintain. As data volumes, data sources, and end users grow, uncoordinated data integration efforts create significant inefficiencies for both IT and business users. In addition to losing IT productivity due to maintaining spaghetti architecture, data integrity becomes a concern as well. Errors caused by inconsistent data and manual data entry can prove very costly for companies and disrupt business activities. Many industry leaders now recognize that data should be moved in an automated and reliable manner across all platforms to have one version of the truth. By simplifying their data integration architecture and standardizing on a centralized approach, IT teams can accelerate time to market. In particular, using a centralized, shared-service approach brings agility, increases IT productivity, and frees up resources for innovation. One such industry leader that simplified its data integration architecture is Sabre Holdings. Sabre Holdings provides distribution and technology solutions for the travel industry, and is a winner of the Oracle Excellence Awards for Fusion Middleware in 2011 in the data integration category. I had the pleasure of hosting Sabre Holdings on a public webcast to discuss their data integration best practices for data warehousing. In this webcast, Sabre’s Amjad Saeed presented how the company reduced complexity by consolidating systems and standardizing development on Oracle Data Integrator and Oracle GoldenGate for its global data warehouse development team. With Oracle’s complete real-time data integration solution, Sabre also streamlined support and maintenance operations, achieved a real-time view of the execution of its integration processes, and can manage data warehouse and business intelligence solution performance on demand. By reducing complexity and leveraging timely market insights, the company was able to decrease time to market by 40%. You can now listen to the webcast on demand: Sabre Holdings Case Study: Accelerating Innovation using Oracle Data Integration. I invite you to hear directly from Sabre how to use advanced data integration capabilities to enable accelerated innovation. To learn more about Oracle’s data integration offering, you can download our free resources.

    Read the article
