Search Results

Search found 3339 results on 134 pages for 'frequency analysis'.

  • Oracle Joins XBRL US To Help Drive Adoption

    - by Theresa Hickman
    Recently, Oracle joined XBRL US, the national consortium for XML business reporting standards, to stay ahead of the technology and help increase XBRL adoption by U.S. companies by 2011. Large accelerated filers were mandated to use XBRL starting in 2009; other large filers started in 2010, and all other public companies must comply in June 2011. Other organizations that recently joined XBRL US include Oracle, Citi, Federal Filings LLC, Edgar Agents LLC, and XSP. For those of you who have been living under a rock, XBRL stands for eXtensible Business Reporting Language. Simply put, it's financial reporting in electronic form. Just as PDFs and spreadsheets are types of output, XBRL is another electronic output option. Right now, the transition to XBRL means extra work for publicly traded companies because they need to file their financial statements in both EDGAR and XBRL formats. Once the SEC phases out the EDGAR system, XBRL will be the primary way to deliver financial information with footnotes and supporting schedules to multiple audiences without having to re-key or reformat the information. A single XBRL document can be converted to printed output, published via the Web, fed into an SEC database (e.g. EDGAR), or forwarded to a creditor for analysis.
    Question: How does Oracle support XBRL reporting?
    Answer: The latest XBRL 2.1 specifications are supported by Oracle Hyperion Disclosure Management, which is part of Oracle's Hyperion Financial Close Suite along with Hyperion Financial Management, Hyperion Financial Data Quality Management, and Hyperion Financial Close Management. Hyperion Disclosure Management supports the authoring of financial filings in Microsoft Office, with "hot links" to reports and data stored in Hyperion Financial Management or Oracle Essbase. It supports the XBRL tagging of financial statements as well as the disclosures and footnotes within your 10-K and 10-Q filings. Because many of our customers use Hyperion Financial Management (HFM) for their consolidation needs, they simply generate XBRL statements from their consolidated financial results.
    Question: What if you don't use Hyperion Financial Management, and you only use E-Business Suite General Ledger or PeopleSoft General Ledger?
    Answer: No problem: all you need is Hyperion Disclosure Management to generate XBRL from your general ledger. Here are the steps:
    1. Upload the XBRL taxonomy from the SEC or XBRL website into Hyperion Disclosure Management.
    2. Publish your financial statements out of general ledger to Excel.
    3. Perform the XBRL tag mapping from the Excel output to Hyperion Disclosure Management.
    For more information and some interesting background on XBRL, I recommend reading What You Need To Know About XBRL, written by our EPM expert, John O'Rourke.

    Read the article

  • Drawing multiple Textures as tilemap

    - by DocJones
    I am trying to draw a 2D game map and the objects on the map in a single pass. Here is my OpenGL initialization code:

        // Turn off unnecessary operations
        glDisable(GL_DEPTH_TEST);
        glDisable(GL_LIGHTING);
        glDisable(GL_CULL_FACE);
        glDisable(GL_STENCIL_TEST);
        glDisable(GL_DITHER);
        glEnable(GL_BLEND);
        glEnable(GL_TEXTURE_2D);
        // activate pointers to vertex & texture arrays
        glEnableClientState(GL_VERTEX_ARRAY);
        glEnableClientState(GL_TEXTURE_COORD_ARRAY);

    My drawing code is being called by an NSTimer every 1/60 s. Here is the drawing code of my world object:

        - (void) draw:(NSRect)rect withTimedDelta:(double)d {
            GLint *t;
            glActiveTexture(GL_TEXTURE0);
            glBindTexture(GL_TEXTURE_2D, [_textureManager textureByName:@"blocks"]);
            glTexEnvi(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_REPLACE);
            for (int x = 0; x < [_map getWidth]; x++) {
                for (int y = 0; y < [_map getHeight]; y++) {
                    GLint v[] = { 16*x,    16*y,
                                  16*x+16, 16*y,
                                  16*x+16, 16*y+16,
                                  16*x,    16*y+16 };
                    t = [_textureManager getBlockWithNumber:[_map getBlockAtX:x andY:y]];
                    glVertexPointer(2, GL_INT, 0, v);
                    glTexCoordPointer(2, GL_INT, 0, t);
                    glDrawArrays(GL_QUADS, 0, 4);
                }
            }
        }

    (_textureManager is a singleton that only loads a texture once!) The object drawing code is identical (except for the nested loops) in terms of OpenGL calls:

        - (void) drawWithTimedDelta:(double)d {
            GLint *t;
            GLint v[] = { 16*xpos,    16*ypos,
                          16*xpos+16, 16*ypos,
                          16*xpos+16, 16*ypos+16,
                          16*xpos,    16*ypos+16 };
            glBindTexture(GL_TEXTURE_2D, [_textureManager textureByName:_textureName]);
            t = [_textureManager getBlockWithNumber:12];
            glVertexPointer(2, GL_INT, 0, v);
            glTexCoordPointer(2, GL_INT, 0, t);
            glDrawArrays(GL_QUADS, 0, 4);
        }

    As soon as my central drawing routine calls the two drawing methods, the second call overlays the first one. I would expect the call to world.draw to draw the map and "stamp" the objects upon it. Debugging shows me that the first call is performed correctly (the world is being drawn), but the following call to all objects ONLY draws the objects; the rest of the scene turns black. I think I need to blend the drawn textures, but I can't seem to figure out how. Any help is appreciated. Thanks. PS: Here is the github link to the project. It may not be in sync with my post here, but for some more in-depth analysis it may help.
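    A possible culprit worth checking (my speculation, not a confirmed diagnosis): glEnable(GL_BLEND) on its own leaves the default blend factors GL_ONE, GL_ZERO in place, which effectively disables blending, and clearing the framebuffer between the two passes would also wipe out the map. A minimal sketch of the per-frame ordering and blend state, with drawWorld/drawObjects standing in for the draw methods above:

    ```c
    #include <OpenGL/gl.h>  /* <GL/gl.h> on non-Apple platforms */

    /* Hypothetical per-frame render pass: clear once, set a standard
     * alpha blend (the default factors GL_ONE/GL_ZERO do no blending),
     * then draw the tilemap first and the sprites on top of it. */
    void renderFrame(void (*drawWorld)(void), void (*drawObjects)(void)) {
        glClear(GL_COLOR_BUFFER_BIT);
        glEnable(GL_BLEND);
        glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
        drawWorld();    /* tilemap pass   */
        drawObjects();  /* sprite overlay */
    }
    ```

    For sprites with transparent texels this also assumes the textures carry an alpha channel; with glTexEnvi(..., GL_REPLACE) the texture's own alpha is what feeds the blend.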

    Read the article

  • Windows Azure Recipe: Big Data

    - by Clint Edmonson
    As the name implies, what we're talking about here is the explosion of electronic data that comes from huge volumes of transactions, devices, and sensors being captured by businesses today. This data often comes in unstructured formats and/or too fast for us to effectively process in real time. Collectively, we call these the 4 big data V's: Volume, Velocity, Variety, and Variability. These qualities make this type of data best managed by NoSQL systems like Hadoop, rather than by conventional Relational Database Management Systems (RDBMS). We know that there are patterns hidden inside this data that might provide competitive insight into market trends. The key is knowing when and how to leverage these "NoSQL" tools combined with traditional systems such as SQL-based relational databases and warehouses and other business intelligence tools.
    Drivers
    - Petabyte scale data collection and storage
    - Business intelligence and insight
    Solution
    The sketch below shows one of many big data solutions using Hadoop's unique highly scalable storage and parallel processing capabilities combined with Microsoft Office's Business Intelligence Components to access the data in the cluster.
    Ingredients
    - Hadoop – this big data industry heavyweight provides both large-scale data storage infrastructure and a highly parallelized map-reduce processing engine to crunch through the data efficiently. Here are the key pieces of the environment:
      - Pig – a platform for analyzing large data sets that consists of a high-level language for expressing data analysis programs, coupled with infrastructure for evaluating these programs.
      - Mahout – a machine learning library with algorithms for clustering, classification, and batch-based collaborative filtering that are implemented on top of Apache Hadoop using the map/reduce paradigm.
      - Hive – data warehouse software built on top of Apache Hadoop that facilitates querying and managing large datasets residing in distributed storage. Directly accessible to Microsoft Office and other consumers via add-ins and the Hive ODBC data driver.
      - Pegasus – a peta-scale graph mining system that runs in a parallel, distributed manner on top of Hadoop and provides algorithms for important graph mining tasks such as Degree, PageRank, Random Walk with Restart (RWR), Radius, and Connected Components.
      - Sqoop – a tool designed for efficiently transferring bulk data between Apache Hadoop and structured data stores such as relational databases.
      - Flume – a distributed, reliable, and available service for efficiently collecting, aggregating, and moving large amounts of log data to HDFS.
    - Database – directly accessible to Hadoop via the Sqoop-based Microsoft SQL Server Connector for Apache Hadoop, so data can be efficiently transferred to traditional relational data stores for replication, reporting, or other needs.
    - Reporting – provides easily consumable reporting when combined with a database being fed from the Hadoop environment.
    Training
    These links point to online Windows Azure training labs where you can learn more about the individual ingredients described above.
    - Hadoop Learning Resources (20+ tutorials and labs) – a huge collection of resources for learning about all aspects of Apache Hadoop-based development on Windows Azure and the Hadoop and Windows Azure ecosystems.
    - SQL Azure (7 labs) – Microsoft SQL Azure delivers on the Microsoft Data Platform vision of extending the SQL Server capabilities to the cloud as web-based services, enabling you to store structured, semi-structured, and unstructured data.
See my Windows Azure Resource Guide for more guidance on how to get started, including links to web portals, training kits, samples, and blogs related to Windows Azure.

    Read the article

  • Don Knuth and MMIXAL vs. Chuck Moore and Forth -- Algorithms and Ideal Machines -- was there cross-pollination / influence in their ideas / work?

    - by AKE
    Question: To what extent is it known (or believed) that Chuck Moore and Don Knuth had influence on each other's thoughts on ideal machines, or their work on algorithms? I'm interested in citations, interviews, articles, links, or any other sort of evidence. It could also be evidence of the form "A and B here suggest that Moore might have borrowed or influenced C and D from Knuth here," or vice versa. (Opinions are of course welcome, but references / links would be better!) Context: Until fairly recently, I have been primarily familiar with Knuth's work on algorithms and computing models, mostly through TAOCP but also through his interviews and other writings. However, the more I have been using Forth, the more I am struck by both the power of a stack-based machine model, and the way in which the spareness of the model makes fundamental algorithmic improvements more readily apparent. A lot of what Knuth has done in fundamental analysis of algorithms has, it seems to me, a very similar flavour, and I can easily imagine that in a parallel universe, Knuth might perhaps have chosen Forth as his computing model. That's the software / algorithms / programming side of things. When it comes to "ideal computing machines", Knuth in the 70s came up with the MIX computer model, and then, collaborating with designers of state-of-the-art RISC chips through the 90s, updated this with the modern MMIX model and its attendant assembly language MMIXAL. Meanwhile, Moore, having been using and refining Forth as a language, but using it on top of whatever processor happened to be in the computer he was programming, began to imagine a world in which the efficiency and value of stack-based programming were reflected in hardware. So he went on in the 80s to develop his own stack-based hardware chips, defining the term MISC (Minimal Instruction Set Computers) along the way, and ending up eventually with the first Forth chip, the MuP21. Both are brilliant men with keen insight into the art of programming and algorithms, and both work at the intersection between algorithms, programs, and bare metal hardware (i.e. hardware without the clutter of operating systems). Which leads me to the headlined question... Question: To what extent is it known (or believed) that Chuck Moore and Don Knuth had influence on each other's thoughts on ideal machines, or their work on algorithms?
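    To make the "spareness" point concrete, here is a minimal sketch (my own illustration in C, not MuP21 or MMIX semantics) of how little machinery a stack-based evaluator needs: a data stack and a handful of opcodes, with operands implicit on the stack rather than named in the instruction.

    ```c
    #include <stdio.h>

    /* A tiny Forth-flavored stack machine: push and two arithmetic
     * primitives are enough to evaluate postfix expressions. */
    typedef enum { OP_PUSH, OP_ADD, OP_MUL, OP_HALT } Op;
    typedef struct { Op op; int arg; } Instr;

    int run(const Instr *code) {
        int stack[64], sp = 0;
        for (;; code++) {
            switch (code->op) {
            case OP_PUSH: stack[sp++] = code->arg; break;
            case OP_ADD:  sp--; stack[sp-1] += stack[sp]; break;
            case OP_MUL:  sp--; stack[sp-1] *= stack[sp]; break;
            case OP_HALT: return stack[sp-1];
            }
        }
    }

    int main(void) {
        /* (2 + 3) * 4 in postfix: 2 3 + 4 * */
        const Instr prog[] = {
            {OP_PUSH, 2}, {OP_PUSH, 3}, {OP_ADD, 0},
            {OP_PUSH, 4}, {OP_MUL, 0}, {OP_HALT, 0}
        };
        printf("%d\n", run(prog));  /* prints 20 */
        return 0;
    }
    ```

    With no registers or addressing modes, the cost of each operation is plainly visible, which is the quality the question attributes to both Forth and Knuth-style algorithm analysis.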

    Read the article

  • Removing hard-coded values and defensive design vs YAGNI

    - by Ben Scott
    First a bit of background. I'm coding a lookup from Age to Rate. There are 7 age brackets, so the lookup table is 3 columns (From|To|Rate) with 7 rows. The values rarely change - they are legislated rates (first and third columns) that have stayed the same for 3 years. I figured that the easiest way to store this table without hard-coding it is in the database, in a global configuration table, as a single text value containing a CSV (so "65,69,0.05,70,74,0.06" is how the 65-69 and 70-74 tiers would be stored). Relatively easy to parse and then use. Then I realised that to implement this I would have to create a new table, a repository to wrap around it, data layer tests for the repo, unit tests around the code that unflattens the CSV into the table, and tests around the lookup itself. The only benefit of all this work is avoiding hard-coding the lookup table. When talking to the users (who currently use the lookup table directly - by looking at a hard copy), the opinion is pretty much that "the rates never change." Obviously that isn't actually correct - the rates were only created three years ago, and in the past things that "never change" have had a habit of changing - so for me to defensively program this I definitely shouldn't store the lookup table in the application. Except when I think YAGNI. The feature I am implementing doesn't specify that the rates will change. If the rates do change, they will still change so rarely that maintenance isn't even a consideration, and the feature isn't actually critical enough that anything would be affected if there was a delay between the rate change and the updated application. I've pretty much decided that nothing of value will be lost if I hard-code the lookup, and I'm not too concerned about my approach to this particular feature. My question is, as a professional, have I properly justified that decision? Hard-coding values is bad design, but going to the trouble of removing the values from the application seems to violate the YAGNI principle. EDIT: To clarify the question, I'm not concerned about the actual implementation. I'm concerned that I can either do a quick, bad thing, and justify it by saying YAGNI, or I can take a more defensive, high-effort approach that even in the best case ultimately has low benefits. As a professional programmer, does my decision to implement a design that I know is flawed simply come down to a cost/benefit analysis?
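    For scale, here is roughly what the hard-coded option amounts to (a sketch in C; the 65-69 and 70-74 rows use the rates quoted above, and the remaining five legislated brackets are elided as placeholders, not real values):

    ```c
    #include <stddef.h>

    /* Hard-coded age-rate lookup: the whole "design flaw" in one place. */
    typedef struct { int from; int to; double rate; } RateBracket;

    static const RateBracket kBrackets[] = {
        /* ... five earlier brackets elided ... */
        { 65, 69, 0.05 },
        { 70, 74, 0.06 },
    };

    double rateForAge(int age) {
        for (size_t i = 0; i < sizeof kBrackets / sizeof kBrackets[0]; i++)
            if (age >= kBrackets[i].from && age <= kBrackets[i].to)
                return kBrackets[i].rate;
        return 0.0;  /* age outside all brackets */
    }
    ```

    Weighing these dozen lines against a new table, a repository, and three layers of tests is essentially the cost/benefit analysis the question asks about.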

    Read the article

  • Secret of SQL Trace Duration Column

    - by Dan Guzman
    Why would a trace of long-running queries not show all queries that exceeded the specified duration filter? We have a server-side SQL Trace that includes RPC:Completed and SQL:BatchCompleted events with a filter on Duration >= 100000. Nearly all of the queries on this busy OLTP server run in under this 100 millisecond threshold, so any that appear in the trace are candidates for root cause analysis and/or performance tuning opportunities. After an application experienced query timeouts, the DBA looked at the trace data to corroborate the problem. Surprisingly, he found no long-running queries in the trace from the application that experienced the timeouts, even though the application's error log clearly showed detail of the problem (query text, duration, start time, etc.). The trace did show, however, that there were hundreds of other long-running queries from different applications during the problem timeframe. We later determined those queries were blocked by a large UPDATE query against a critical table that was inadvertently run during this busy period. So why didn't the trace include all of the long-running queries? The reason is that the SQL Trace event duration doesn't include the time a request was queued while awaiting a worker thread. Remember that the server was under considerable stress at the time due to the severe blocking episode. Most of the worker threads were in use by blocked queries and new requests were queued awaiting a worker to free up (a DMV query on the DAC connection will show this queuing: "SELECT scheduler_id, work_queue_count FROM sys.dm_os_schedulers;"). Technically, those queued requests had not started. As worker threads became available, queries were dequeued and completed quickly. These weren't included in the trace because the duration was under the 100ms duration filter. The duration reflected the time it took to actually run the query but didn't include the time queued waiting for a worker thread. The important point here is that duration is not end-to-end response time. The duration of RPC:Completed and SQL:BatchCompleted events doesn't include time before a worker thread is assigned, nor does it include the time required to return the last result buffer to the client. In other words, duration only includes time after the worker thread is assigned until the last buffer is filled. But be aware that duration does include the time needed to return intermediate result set buffers back to the client, which is a factor when large query results are returned. Clients that are slow in consuming result sets can increase the duration value reported by the trace "completed" events.
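    As a rough model (my summary of the explanation above, not a formula from the SQL Trace documentation):

        end_to_end_response_time ≈ worker_queue_wait + trace_duration + final_buffer_return

    Only the middle term is what the Duration column reports, which is why a query that times out at the client while stuck in the worker queue can still fall under the trace's duration filter.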

    Read the article

  • Windows Azure Recipe: High Performance Computing

    - by Clint Edmonson
    One of the most attractive ways to use a cloud platform is for parallel processing. Commonly known as high-performance computing (HPC), this approach relies on executing code on many machines at the same time. On Windows Azure, this means running many role instances simultaneously, all working in parallel to solve some problem. Doing this requires some way to schedule applications, which means distributing their work across these instances. To allow this, Windows Azure provides the HPC Scheduler. This service can work with HPC applications built to use the industry-standard Message Passing Interface (MPI); a minimal MPI example follows the ingredient list below. Software that does finite element analysis, such as car crash simulations, is one example of this type of application, and there are many others. The HPC Scheduler can also be used with so-called embarrassingly parallel applications, such as Monte Carlo simulations. Whatever problem is addressed, the value this component provides is the same: it handles the complex problem of scheduling parallel computing work across many Windows Azure worker role instances.
    Drivers
    - Elastic compute and storage resources
    - Cost avoidance
    Solution
    Here's a sketch of a solution using our Windows Azure HPC SDK:
    Ingredients
    - Web Role – hosts an HPC Scheduler web portal to allow web-based job submission and management. It also exposes an HTTP web service API to allow other tools (including Visual Studio) to post jobs as well.
    - Worker Role – typically multiple worker roles are enlisted, including at least one head node that schedules jobs to be run among the remaining compute nodes.
    - Database – stores state information about the job queue and resource configuration for the solution.
    - Blobs, Tables, Queues, Caching (optional) – many parallel algorithms persist intermediate and/or permanent data as a result of their processing. These fast, highly reliable, parallelizable storage options are all available to all the jobs being processed.
    Training
    Here is a link to online Windows Azure training labs where you can learn more about the individual ingredients described above. (Note: the entire Windows Azure Training Kit can also be downloaded for offline use.)
    - Windows Azure HPC Scheduler (3 labs) – the Windows Azure HPC Scheduler includes modules and features that enable you to launch and manage high-performance computing (HPC) applications and other parallel workloads within a Windows Azure service. The scheduler supports parallel computational tasks such as parametric sweeps, Message Passing Interface (MPI) processes, and service-oriented architecture (SOA) requests across your computing resources in Windows Azure. With the Windows Azure HPC Scheduler SDK, developers can create Windows Azure deployments that support scalable, compute-intensive, parallel applications.
    See my Windows Azure Resource Guide for more guidance on how to get started, including links to web portals, training kits, samples, and blogs related to Windows Azure.
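    Here is a minimal sketch of the kind of MPI program such a scheduler dispatches (generic MPI in C, nothing Azure-specific; the per-rank partial value is a stand-in for real work such as one slice of a Monte Carlo run):

    ```c
    #include <mpi.h>
    #include <stdio.h>

    /* Each node computes a partial result in parallel; rank 0 combines
     * them. The scheduler's job is to place these ranks on instances. */
    int main(int argc, char **argv) {
        int rank, size;
        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        double partial = rank + 1.0;  /* placeholder for real work */
        double total   = 0.0;
        MPI_Reduce(&partial, &total, 1, MPI_DOUBLE, MPI_SUM, 0,
                   MPI_COMM_WORLD);

        if (rank == 0)
            printf("sum over %d nodes = %f\n", size, total);
        MPI_Finalize();
        return 0;
    }
    ```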

    Read the article

  • How to recover from finite-state-machine breakdown?

    - by Earl Grey
    My question may seem very scientific, but I think it's a common problem, and seasoned developers and programmers hopefully will have some advice to avoid the problem I mention in the title. Btw, what I describe below is a real problem I am trying to proactively solve in my iOS project; I want to avoid it at all cost. By finite state machine I mean this: I have a UI with a few buttons, several session states relevant to that UI and what this UI represents, I have some data whose values are partly displayed in the UI, and I receive and handle some external triggers (represented by callbacks from sensors). I made state diagrams to better map the relevant scenarios that are desirable and allowable in that UI and application. As I slowly implement the code, the app starts to behave more and more like it should. However, I am not very confident that it is robust enough. My doubts come from watching my own thinking and implementation process as it goes. I was confident that I had everything covered, but it was enough to make a few brute-force tests in the UI and I quickly realized that there were still gaps in the behavior... I patched them. However, as each component depends on and behaves based on input from some other component, a certain input from the user or some external source triggers a chain of events, state changes, etc. I have several components and each behaves like this: trigger received on input - trigger and its sender analyzed - output something (a message, a state change) based on the analysis. The problem is, this is not completely self-contained, and my components (a database item, a session state, some button's state) COULD be changed, influenced, deleted, or otherwise modified outside the scope of the event chain or desirable scenario (the phone crashes, the battery is empty, the phone turns off suddenly). This will introduce an invalid situation into the system, from which the system potentially might not be able to recover. I see this (although people do not realize this is the problem) in many of my competitors' apps on the Apple store; customers write things like "I added three documents, and after going there and there, I cannot open them, even if I see them," or "I recorded videos every day, but after recording a too-long video, I cannot turn off captions on them, and the button for captions doesn't work." These are just shortened examples; customers often describe it in more detail. From the descriptions and the behavior described in them, I assume that the particular app has an FSM breakdown. So the ultimate question is: how can I avoid this, and how can I protect the system from blocking itself? EDIT: I am talking in the context of one view controller's view on the phone, I mean one part of the application. I understand the MVC pattern; I have separate modules for distinct functionality. Everything I describe is relevant to one canvas on the UI.
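    One defensive tactic that addresses the scenario above (a sketch of my reading of the problem, with illustrative state and field names, not code from the post): treat any state restored after a crash or interruption as untrusted input, check its invariants on load, and fall back to a known-good baseline instead of resuming blindly.

    ```c
    #include <stdbool.h>

    /* Hypothetical session for one view controller's canvas. */
    typedef enum { STATE_IDLE, STATE_RECORDING, STATE_REVIEW } SessionState;

    typedef struct {
        SessionState state;
        int          documentCount;
        bool         sensorActive;
    } Session;

    /* Invariants the state diagram promises; a crash, dead battery, or
     * external mutation can violate them between runs. */
    static bool sessionIsConsistent(const Session *s) {
        if (s->state == STATE_RECORDING && !s->sensorActive) return false;
        if (s->documentCount < 0) return false;
        return true;
    }

    void restoreSession(Session *s) {
        if (!sessionIsConsistent(s)) {
            /* Recover to the baseline instead of blocking the FSM. */
            s->state = STATE_IDLE;
            s->sensorActive = false;
            if (s->documentCount < 0) s->documentCount = 0;
        }
    }
    ```

    The point is that every state the machine can wake up in, valid or not, has a defined exit, so no persisted corruption can strand the UI permanently.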

    Read the article

  • Do you care about your Oracle System Support experience?

    - by user12244613
    It has been a while since I blogged about Systems Support within Oracle. I want to take this opportunity to raise awareness of how Oracle is communicating out to its systems customers. Previously every item to be communicated was sent independently via an email message; however, not all messages appear to be getting the attention they require. In an effort to ensure Oracle is reaching all of our Sun and Oracle System customers, we have created the Oracle Systems Support Newsletter. This monthly newsletter will have a summary of support-relevant information for you to use and will cover topics that impact your support experience. For example:
    1. Did you know that sending explorer content to email addresses with @sun.com is going away soon? For more information, review Document 1362484.1.
    2. Are you an Auto Service Request (ASR) user? If yes, here are the latest changes:
       - ASR Manager accepts My Oracle Support User Name (email address) and password. [Doc ID 1345484.1]
       - ASR IP Address for secure file transfer has changed. [Doc ID 1338575.1]
       - ASR No Heartbeat Status - find out how to resolve. [Doc ID 1346328.1]
    3. Did you notice we have changed the Service Request options for Hardware and introduced a new problem category called "Automated Diagnosis"? This service streamlines the data you send in and then automatically provides an update of known issues found in your My Oracle Support Service Request. This feature also fast-tracks hardware failures by sending parts as soon as the data is analyzed. Have you used this new feature? If yes, tell us about it - take the 5-minute survey.
    4. Are you being proactive or are you still 'fire fighting' in reactive mode? If you are being proactive for your Oracle System products you might have used Oracle Sun System Analysis. Did you find this helpful? Can we improve it? You tell us - take the 5-minute survey.
    5. Are you aware that if you attach files to your Service Request it enables the support engineer to start work straight away? For a summary of products and files, review the Newsletter.
    6. Are you struggling to find patches, firmware, or product downloads? If yes, these types of issues are all addressed in the Newsletter.
    If this is the type of information you want to know about each month, then take time to read the Newsletter and bookmark it in My Oracle Support so you can stay informed. Thanks for your time.

    Read the article

  • Suggestions for connecting .NET WPF GUI with Java SE Server

    - by Sam Goldberg
    BACKGROUND: We are building a Java (SE) trading application which will be monitoring market data and sending trade messages based on the market data, and also on user-defined configuration parameters. We are planning to provide the user with a thin client, built in .NET (WPF), for managing the parameters, controlling the server behavior, and viewing the current state of the trading. The client doesn't need real-time updates; it will instead update the view once every few seconds (or whatever interval is configured by the user). The client has about 6 different operations it needs to perform with the server, for example: CRUD with configuration parameters; query a subset of the data; receive updates of current positions from the server. It is possible that most of the different operations (except for receiving data) are just different flavors of managing the configuration parameters, but it's too early in our analysis for us to be sure. To connect the client with the server, we have been considering using: a SOAP web service; a RESTful service; or building a custom TCP/IP-based API (text or XML) (least preferred - but we use this approach with other applications we have). As best as I understand, the pros and cons of the different web service flavors are:
    SOAP - pro: totally automated in .NET (and Java); modifying the server-side interface requires no code changes in the communication layer, just running a refresh on the Web Service reference to regenerate the classes. Con: more overhead in the communication layer, sending more text, etc. We're not using a J2EE container, so maybe it doesn't work so well with J2SE.
    REST - pro: lighter weight, less data. Has good .NET and Java support. (I don't have any real experience with this, so I don't know what other benefits it has.) Con: the client will not be automatically aware if there are any new operations or properties added (?), so the communication layer needs to be updated by the developer if the server interface changes. Con (both approaches): the server cannot really push updates to the client at regular intervals (?). (However, we won't mind if the client polls the server to get updates.)
    QUESTION: What are your opinions on the above options, or suggestions for other ways to connect the 2 parts? (Ideally, we don't want to put much work into the communication layer, because it's not the significant part of the application, so the more off-the-shelf and automated the better.)
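    To give a feel for how little machinery the RESTful option needs on the polling side, here is a sketch in C with libcurl (the idea is language-neutral; the real client would be .NET/WPF, and the URL is hypothetical):

    ```c
    #include <curl/curl.h>
    #include <stdio.h>

    /* One poll of a hypothetical "current positions" resource. libcurl
     * writes the response body to stdout by default; a real client
     * would install a write callback and parse the JSON/XML payload,
     * then repeat this on the user-configured timer. */
    int main(void) {
        curl_global_init(CURL_GLOBAL_DEFAULT);
        CURL *curl = curl_easy_init();
        if (!curl) return 1;
        curl_easy_setopt(curl, CURLOPT_URL,
                         "http://tradeserver.example/api/positions");
        CURLcode rc = curl_easy_perform(curl);
        curl_easy_cleanup(curl);
        curl_global_cleanup();
        return rc == CURLE_OK ? 0 : 1;
    }
    ```

    The trade-off in the "con" above still applies: the server's resource paths are a contract the developer maintains by hand, unlike a regenerated SOAP proxy.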

    Read the article

  • Why do some user agents have spam urls in them?

    - by Erx_VB.NExT.Coder
    If you go to (say) the last 100 entries (visits) on the botsvsbrowsers.com website (exact link, feel free to take a look: http://www.botsvsbrowsers.com/recent/listings/index.html ), you'd notice that almost every User Agent that has the keywords "Opera" and "Presto" inside it will almost certainly have a web link (URL/web address) inside it, and it won't just be a normal web address, but an HTML anchor tag/link to that address. Why is this so? I could not even find a single discussion about it on the internet, nowhere; I tried varying my search terms many times. If the user agent contains the words "Opera" and "Presto" it doesn't mean it will have this web link, but it means there is about an 80% chance that it will. A typical anchor tag/link inside a user agent will look like this: Mozilla/4.0 <a href="http://osis-uk.co.uk/disabled-equipment">disability equipment</a> (Windows NT 5.1; U; en) Presto/2.10.229 Version/11.60 If you check it out at the website, http://www.botsvsbrowsers.com/recent/listings/index.html , you will notice that the angle brackets are in their unescaped format. This isn't just true for botsvsbrowsers, but several other user agent listing sites. I'm really confused and feel like I'm in a room full of 10,000 people and am the only one seeing this ghost :). If I'm doing statistical analysis, should I include or exclude this type of user agent from my listing (i.e. are these just normal users who've set their user agents to attempt to drive some traffic to their sites as they browse the web), or is there something else going on? The fact that it is so consistent in terms of its format leads me to believe that it is an automated process (the setting or alteration of the user agent), but I cannot decide or understand the process by which this change is made (I know how to change a user agent), and I am unsure which program or facility is doing this, especially since it is exclusive to Opera (Presto) user agents that are beyond, I think, an 8 or 9 point something browser version. I've run some statistical tests, parsing entries from all over the place and writing custom programs, to get a better understanding of this. Keep in mind that I see normal URLs in user agents infrequently; they are just text such as +http://www.someSite.com appended to a user agent, especially if it's a crawler or bot providing its service URL. This is normal and isn't done with an embedded link (A HREF=) etc., so I'm not talking about "those".
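    For the statistical analysis question, one workable rule (my sketch, based only on the patterns described above) is to separate the spam signature - an embedded anchor tag - from the bare "+http://..." service URLs that legitimate crawlers append:

    ```c
    #include <stdbool.h>
    #include <string.h>

    /* Flag a user agent as link spam if it embeds an HTML anchor;
     * a bare "+http://..." URL is ordinary crawler courtesy. Both
     * checks are case-sensitive substring tests for simplicity. */
    bool uaHasEmbeddedAnchor(const char *ua) {
        return strstr(ua, "<a href=") != NULL;   /* spam signature   */
    }

    bool uaHasBareServiceUrl(const char *ua) {
        return strstr(ua, "+http://") != NULL;   /* crawler courtesy */
    }
    ```

    Entries matching the first test can then be excluded or counted separately without throwing away ordinary crawler traffic.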

    Read the article

  • What is new in Oracle SOA Suite 11g R1 PS6? by Shanny Anoep

    - by JuergenKress
    Oracle has released a new version, 11.1.1.7.0, of their Oracle Fusion Middleware product line. This version includes Patch Set #6 (PS6) for Oracle SOA Suite 11g R1, with a big list of improvements and fixes for each component in that suite. In this post we will highlight some of the interesting updates with regard to troubleshooting, performance, reliability, and scalability.
    Infrastructure/Purging scripts: Database growth is a common problem for large-scale Oracle SOA Suite deployments. Oracle already provides multiple purging strategies for the SOA Suite runtime database. This patch set includes two new scripts for purging most of the runtime data:
    - Table Recreation Script (TRS): This script can be used to reclaim as much database space as possible while still retaining the open instances. It can be used as a corrective action for databases that grew excessively, for example when purging was not performed at all. This should be used as a single corrective action only; the script does not replace the normal purging scripts.
    - Truncate script: Removes all records from the SOA Suite runtime tables without dropping the tables. This script can be used for cloning SOA Suite environments without copying the instance data, or for recreating test scenarios by cleaning all the runtime data.
    The Oracle SOA Suite Administrator's Guide contains a table with the available purging strategies.
    Diagnostic dumps: Using WLST you could already dump diagnostic information about various components of the SOA Suite. This version adds support to retrieve more information on BPEL and Adapters from the command line. New diagnostic dumps are available for BPEL to get information on thread pools, average processing time for BPEL components, and average waiting times for asynchronous instances. This information can be very useful for performance analysis or troubleshooting. With WLST this information can be retrieved from the command line and included for monitoring or reporting. Read the full article here.
    SOA & BPM Partner Community: For regular information on Oracle SOA Suite become a member of the SOA & BPM Partner Community; for registration please visit www.oracle.com/goto/emea/soa (OPN account required). If you need support with your account please contact the Oracle Partner Business Center. (Blog, Twitter, LinkedIn, Facebook, Wiki, Mix, Forum) Technorati Tags: SOA Suite PS6, SOA Community, Oracle SOA, Oracle BPM, Community, OPN, Jürgen Kress

    Read the article

  • SQL SERVER – Free eBook Download – EPUB, MOBI, PDF Format

    - by pinaldave
    Microsoft has recently released free eBooks on various Microsoft technologies. The best part is that all these books are available in ePub, Mobi, and PDF. You can download them to your local machine or eBook reader and read them. This is a great start, as many important subjects are now covered and converted into eBooks. I personally read through a few of the books and found they are very comprehensive and detailed. The goal is not to cover a complete technology in a single book but rather to pick a single topic and discuss it in detail. The sources of the books are white papers, the TechNet wiki, as well as Books Online, and the source is clearly listed right below each book title. The following books are available for SQL Server technology, and I encourage all of you to have a look at them as they are great resources:
    - Master Data Services Capacity Guidelines
    - Microsoft SQL Server AlwaysOn Solutions Guide for High Availability and Disaster Recovery
    - Microsoft SQL Server Analysis Services Multidimensional Performance and Operations Guide
    - QuickStart: Learn DAX Basics in 30 Minutes
    - SQL Server 2012 Transact-SQL DML Reference
    You can download the above eBooks from here. This is indeed a great attempt, as each book talks about a single subject in depth, keeping the author focused on a single and simple subject. I have previously written two books focusing on the same subject and had great pleasure writing them as well. Writing on a focused subject gives the author complete freedom to explore a single subject without the burden of covering everything associated with that technology at large. Just like the eBooks mentioned earlier, my SQL Server Wait Stats book was inspired by my article series on SQL Wait Stats. The latest book, SQL Server Interview Questions and Answers, was derived from my article series on SQL Interview Q&A. Writing a book is an absolutely different concept than writing blog posts. When I was converting my blog posts to books, I ended up writing 50% new material and removing much of the repetitive content that shows up in a blog series. It was indeed fun to write a focused book, and at the same time it was a great learning experience as an individual. Reference: TechNet Wiki, Pinal Dave (http://blog.SQLAuthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Documentation, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology

    Read the article

  • ArchBeat Link-o-Rama for 2012-06-06

    - by Bob Rhubart
    Creating an Oracle Endeca Information Discovery 2.3 Application Part 3 : Creating the User Interface | Mark Rittman Oracle ACE Director Mark Rittman continues his article series. WebLogic Advisor WebCasts on-demand A series of videos by WebLogic experts, available to those with access to support.oracle.com. Integrating OBIEE 11g into Weblogic’s SAML SSO | Andre Correa A-Team blogger Andre Correa illustrates a transient federation scenario. InfoQ: Cloud 2017: Cloud Architectures in 5 Years Andrew Phillips, Mark Holdsworth, Martijn Verburg, Patrick Debois, and Richard Davies review the evolution of cloud computing so far and look five years into the future. Call for Nominations: Oracle Fusion Middleware Innovation Awards 2012 - Win a free pass to #OOW12 These awards honor customers for their cutting-edge solutions using Oracle Fusion Middleware. Either a customer, their partner, or an Oracle representative can submit the nomination form on behalf of the customer. Submission deadline: July 17. Winners receive a free pass to Oracle OpenWorld 2012 in San Francisco. SOA Analysis within the Department of Defense Architecture Framework (DoDAF) 2.0 – Part II | Dawit Lessanu The conclusion of Lessanu's two-part series for Service Technology Magazine. Driving from Business Architecture to Business Process Services | Hariharan V. Ganesarethinam "The perfect mixture of EA, SOA and BPM make enterprise IT highly agile so it can quickly accommodate dynamic business strategies, alignments and directions," says Ganesarethinam. "However, there should be a structured approach to drive enterprise architecture to service-oriented architecture and business processes." Book Review: Oracle Application Integration Architecture (AIA) Foundation Pack 11gR1: Essentials | Rajesh Raheja Rajesh Raheja reviews the new AIA book from Packt Publishing. ODTUG Kscope12 - June 24-28 - San Antonio, TX San Antonio, TX June 24-28, 2012 Kscope12, sponsored by ODTUG, is your home for Application Express, BI and Oracle EPM, Database Development, Fusion Middleware, and MySQL training by the best of the best! Oracle Enterprise Manager Ops Center 12c : Enterprise Controller High Availability (EC HA) | Mahesh Sharma Mahesh Sharma describes EC HA, looks at the prerequisites, and shares screen shots. The right way to transform your business via the cloud | David Linthicum A couple of quick tests will show you where you need to focus your transition efforts. Thought for the Day "Most software isn't designed. Rather, it emerges from the development team like a zombie emerging from a bubbling vat of Research and Development juice. When a discipline is hugging the ragged edge of technology, this might be expected, but most of today's software is comprised of mostly 'D' and very little 'R'." — Alan Cooper Source: softwarequotes.com

    Read the article

  • Chargeback and showback...both a 'throw back'

    - by llaszews
    I've been getting asked again by customers and partners about chargeback and showback in the cloud, so I thought I would blog my response to this question.
    Chargeback background, information, and industry analysis: Cloud computing is all about shared resources. These shared resources are computer servers (including memory and CPU), network devices, hard disk storage, database servers, application servers, cooling, floor space, electricity, and more. These resources are shared by departments within a company, or by a number of companies when resources are hosted in the public or hybrid cloud. Currently, hosting providers that run other companies on their cloud platforms do not have an accurate way to measure the shared computing resources used by a specific user, let alone by a specific customer. Additionally, companies running their own cloud data centers, for private or hybrid clouds, have no way of measuring and charging back the departments in the company that are using these shared cloud resources. In both cases, the inability to determine shared resource costs and charge them back to the company, department, or user consuming those resources limits any clear measure of business benefit and impacts the company's ability to measure Return on Investment (ROI). An IT chargeback system is an accounting strategy that applies the costs of IT services, hardware, or software to the business unit in which they are used. This system contrasts with traditional IT accounting models in which a centralized department bears all of the IT costs in an organization and those costs are treated simply as corporate overhead. Showback involves showing the IT costs to a department or customer but not actually charging them for their IT usage. Showback is a gradual method of introducing chargeback into an enterprise. Most companies implement a showback mechanism before a full chargeback system is put in place.
    Oracle chargeback product: Oracle Enterprise Manager provides tools for defining detailed chargeback plans spanning different metrics collected for each type of resource, as well as defining cost centers for grouping costs across multiple developers. Chargeback plans can use not only usage-based costs, but also configuration-based costs (e.g. version of the platform) or fixed costs (e.g. a flat-rate management fee). Chargeback has rich out-of-the-box reports. Trending reports show how charge and resource consumption vary over time, while summary reports show the breakdown of charges or usage by different dimensions such as cost center or target type. These reports help consumers understand how their charges relate to their consumption and also assist the IT department with budgeting and planning activities. With BI Publisher, the reports can be made available in a variety of formats such as PDF, HTML, Word, Excel, or PowerPoint.

    Read the article

  • viable part-time career in IT/programming?

    - by Rider
    Hi, I'd like to ask for some career advice from you people. Is there a viable job/career that can be done in programming/IT for the long term? Right now, I am thinking about the website (PHP?) developer path. My background: I have a degree in computer science and have been a programmer/system analyst for almost 10 years. Lately I took a big break from programming and studied for a B.Arch. degree (yes, architecture), only to discover that architecture has offered zero (0) jobs where I'm from for 3 years already (and no, I am not going to move, and the grass is not greener in other places). I have never been particularly interested in programming; in fact I was bored by it. But I was always quite good at both programming and system analysis, and very valued by practically all my employers. On the other hand, I have never been valued or offered a good job in any other field (although I can do many things, like design, architecture, translations, documentation, teaching, etc.). I guess the human component has always been more important for me in programming jobs - I value all the good people I worked with, but not the projects. However, I have about zero skills or desire to be a project manager. I also have close to zero skills for selling myself. I like it best when I can do "my thing" and have my niche, have ownership of some project. Right now my career perspective is to do part-time programming and teach yoga part-time. I have already started the yoga teaching part. Do you think that part-time programming is viable? And what niche works best for that? I have considered web development, QA, or software development in a company like I did before. However, my fear is that when you do programming part-time, you get the most boring coding work, only to see your colleagues move to more interesting projects and up their respective career ladders. I also fear that part-timers are not especially needed either. And, since I don't share much enthusiasm for programming, I'd rather not be around young programmers boiling with geeky enthusiasm about coding; a QA mindset, with people from different backgrounds and life paths, might work better for me. Thanks for any advice, --Rider

    Read the article

  • links for 2011-03-17

    - by Bob Rhubart
    Siba Prasad: Oracle Database on Amazon RDS Siba Prasad shares an analysis of the pros and cons. (tags: oracle database cloud amazon) LIVE WEBCAST March 24 2pm PT - Why Switch from Red Hat and SUSE Linux to Oracle Linux? (Oracle's Linux Blog) Featuring Oracle's Monica Kumar, Sr. Director of Linux, Oracle VM and MySQL, and Avi Miller, Principal Sales Consultant, Linux and Virtualization. (tags: oracle linux) Webcast: IBM SOA vs. Oracle SOA, March 24, 1pm ET / 10am PT Maneesh Joshi and Bruce Tierney guide you to a solid understanding of the differences between the Oracle and IBM approach to comprehensive SOA. (tags: oracle soa bpm) Finding the Right Solution to Source and Manage Your Contractors (PeopleSoft Apps Strategy) "Talent has become a primary competitive advantage for most organizations. Contingent labor offers talent on flexible terms; it offers the ability to scale up operations, close skill gaps, and manage risk in the process of delivering services." - Mark Rosenberg (tags: oracle peoplesoft enterprisearchitecture) Oracle Business Intelligence Customers: Have Your Voice Heard in the "2011 Wisdom of the Crowds Business Intelligence Market Survey" (BI & Analytics Pulse) "The Wisdom of the Crowds survey combines social media, crowd sourcing, and good old fashioned market research to provide vendors and customers alike an unvarnished and insightful snapshot of what's top of mind with business intelligence professionals." (tags: oracle businessintelligence) Martin Bach: Troubleshooting Grid Infrastructure startup Martin Bach hunts down the problem that caused one of his blades to reboot after an EXT3 journal error. (tags: oracle grid rac) Oracle WebCenter: Social Networking & Collaboration (Oracle Enterprise 2.0 Blog) Kelley Ruppel with information on "how the new release of Oracle WebCenter provides unprecedented Social Networking and Collaboration." (tags: oracle webcenter enterprise2.0 collaboration) VirtaThon: 100% Virtual Java/Oracle/MySQL Conference! | Bex Huff "The goal is simple," says Oracle ACE Director Bex Huff. "Because it's all online, the conference is very cheap. Pricing is not yet announced... but it should be around $300. Also, unlike other conferences, every speaker gets paid a small fee depending on the popularity of his or her session." (tags: oracle oracleace java mysql) Griffiths Waite Blog: BPM 11g PS3 GW's Ian Heathcock shares a link to "a most interesting article on Oracle's recent release discussing the new features and how PS3 adds value to the whole SOA message." (tags: oracle soa) The Buttso Blathers: Tutorial: JSF 2.0 and JPA 2.0 with WebLogic Server using NetBeans Should you take application architecture advice from a man named Buttso? In this case, yes. (tags: oracle jsf jpa weblogic) Setting up a Highly Available Tuned SOA Environment Middleware Magic (tags: ping.fm) How to Configure WebLogic Messaging Bridge with JBoss Middleware Magic (tags: ping.fm Weblogic JBoss) Richard Veryard on Architecture: Emergent Architecture (tags: ping.fm entarch emergence)

    Read the article

  • SQL SERVER – Template Browser – A Very Important and Useful Feature of SSMS

    - by pinaldave
    Let me start today’s blog post with a direct question: how many of you have ever used Template Browser? Template Browser is a very important and useful feature of SQL Server Management Studio (SSMS). Every time I am talking about SQL Server, someone always comes up with the question of why there is no step-by-step procedure included in SSMS for features. Honestly, every time I get this question, the question I ask back is: how many of you have ever used Template Browser? I think the answer to this question is, most of the time, either "no" or "we have not heard of the feature." One of the people asked me back - have you ever written about it on your blog? I have not yet written about it. Basically there is not much to write about it. It is a pretty straightforward feature, like any other feature, and it is indeed difficult to elaborate. However, I will try to give a quick introduction to this feature. Templates are like a quick cheat sheet or quick reference. Templates are available to create objects like databases, tables, views, indexes, stored procedures, triggers, statistics, and functions. Templates are also available for Analysis Services as well. The template scripts contain parameters to help you customize the code. You can use the Replace Template Parameters dialog box to insert values into the script. Additionally, users can create new custom templates as well, with their own folder structure. To open a template from Template Explorer, go to the View menu >> Template Explorer, or type CTRL+ALT+L. You will find a list of categories; click on any category and expand the folder structure. For our sample example let us expand the Index folder. In this folder you will notice various T-SQL scripts. These scripts can be opened by double-clicking, or can be dragged to the editor area and modified as needed. The sample template is now available in the query editor area with all the necessary parameter placeholders. You can replace the parameters by typing CTRL+SHIFT+M or by going to the Query menu >> Specify Values for Template Parameters. This will show the Specify Values for Template Parameters dialog box; accept each value or replace it with a new one. This will now get your script ready to go. Check it one more time and change the script to fit your requirements. I personally use Template Explorer for two things. The first is obviously for templates, but the hidden and important one is for learning new features and T-SQL commands. There is so much to learn and so little time. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Server Management Studio, SQL Tips and Tricks, T SQL, Technology

    Read the article

  • Outlying DBAs

    - by steveh99999
    I read an interesting book recently, ‘Outliers – The Story of Success’ by Malcolm Gladwell. There’s a good synopsis of the book here on Wikipedia. I don’t want to write a detailed review of the book, but it’s well worth a read. There were a couple of sections which I thought were possibly relevant to IT professionals and DBAs in particular. Firstly, ‘the 10,000 hour rule’. In this section Gladwell asserts that to be a real ‘elite performer’ takes 10,000 hours of practice: ‘Practice isn’t the thing you do once you’re good, it’s the thing you do that makes you good.’ He gives many interesting examples - the Beatles, Bill Gates, etc. - but I was wondering, could this be applied to DBAs? If it takes 10,000 hours to be a really elite DBA, how long does that really take? 8 hours a day makes 1250 days. If we assume that most DBAs work around 230 days a year, then it takes around 5 and a half years to become an elite DBA. But how much time per day does a DBA spend actually doing DBA work? Certainly it’s my experience that the more experienced I get as a DBA, the less time I seem to spend actually doing DBA work - i.e. meetings, change-control meetings, project planning, liaising with other teams, appraisals, etc. Is it more accurate to assume that a DBA spends half their time actually doing ‘real’ DBA work - or is that just my bad luck? So, in reality, I’d argue it can take at least 5 1/2, and more likely closer to 10, years to become an elite DBA. Why do I keep receiving CVs for senior DBAs with 2-4 years of actual DBA experience? In the second section I found particularly interesting, Gladwell writes about the analysis of plane crashes and the importance of in-cockpit communications. He describes a couple of crashes involving Korean Airlines - where co-pilots were often deferential to pilots, and unwilling to openly criticise their more senior colleagues or point out errors when things were going badly wrong... There’s a better summary of Gladwell’s concepts on mitigation here - but to apply this to a DBA role: if you are a DBA and you do not agree with a decision of one of your superiors, then it’s your duty as a DBA to say what you think is wrong, before it’s too late... Obviously there’s a fine line between constructive criticism and moaning, but a good senior DBA or manager should be able to take well-researched criticism/debate from a more junior DBA. Is this really possible?

    Read the article

  • SQL – Biggest Concerns in a Data-Driven World

    - by Pinal Dave
    The ongoing chaos over government agencies’ snooping has ignited a heated debate on the privacy of personal data and its use by government and/or other institutions. It has created a feeling of disapproval and distrust among users. This incident proves to be a lesson for companies that are looking to leverage their business using a data-driven approach. According to analysts, the goal of gathering personal information should be to deliver benefits to both parties - the user as well as the data collector (government or business). Using data the right way is crucial, and companies need to deploy the right software applications and systems to ensure that their efforts are well directed. However, there are various issues plaguing analysts regarding available software, which are highlighted below. According to an InformationWeek 2013 Survey of Analytics, Business Intelligence and Information Management, where 541 business technology professionals contributed as respondents, it was discovered that the biggest concern was deemed to be the scarcity of expertise and the high costs associated with it. This concern was voiced by as many as 38% of the participants. A close second came out to be the issue of data warehouse appliance platforms being expensive, with 33% of those present believing it to be a huge roadblock. Another revelation made in this respect was that 31% of professionals weren’t even sure how data analytics could create business opportunities for them. Another 17% shared that they found data platform technologies such as Hadoop and NoSQL hard to learn. These results clearly point out that there are awareness and expertise issues that also need much attention. Unless the demand-supply gap of Business Intelligence professionals well versed in data analysis technologies is met, this divide is going to affect how companies make the most of their BI campaigns. One of the key action points that can be taken to salvage the situation is to provide training on data analytics concepts. Koenig Solutions offers courses on many such technologies, including a course on MCSE SQL Server 2012: BI Platform. So it’s time to brush up your skills and get down to work in the data-driven world that awaits you. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: Big Data, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL

    Read the article

  • Three Master Data Management Deployment Tips

    - by david.butler(at)oracle.com
    MDM is all about data quality and data governance. We now know that improved data quality raises all operational and analytical boats. But it's not just about deploying data quality tools. It's about deploying data quality tools within and across the IT landscape - from a thousand points of data entry to a single version of the truth. Here are three tips for deploying MDM across your applications and enterprise.
    #1: Identify a tactical, high-value business problem where MDM can materially help.
    - Support a customer acquisition and retention program with a 'customer' master data solution.
    - Accelerate new products and services to market with a 'product' master data solution.
    - Reduce supplier exceptions or support spend-control initiatives with a 'supplier' master data solution.
    - Support new store (branch, campus, restaurant, hospital, office, well head) location analysis with a 'site' master data solution.
    - Fix long-standing Chart of Accounts and Cost Center problems with a 'financial' master data solution.
    - Support M&A activity, application upgrades, an SOA initiative, a cloud computing program, or a new business intelligence deployment by implementing a mix of master data solutions.
    #2: Incrementally expand to a full information architecture. Quite often, the measurable return on investment from tactical MDM initiatives will fund future deployments. Over time, the MDM solution expands into its full architecture to cover the entire IT landscape. Operations and analytics are united, IT flexibility is restored, and sustainable competitive advantage is achieved.
    #3: Bring the business into every MDM deployment. To be successful, MDM must work hand in hand with data governance. In fact, Oracle MDM incorporates data governance tools for business users. IT can ensure data quality, but only after the business side has defined what quality means. The business establishes the rules for governing the master data, and then IT enforces the rules via the MDM applications. Without this business/IT collaboration, MDM initiatives seldom achieve their full potential.
    It is not very often that a technology comes along that can measurably assist organizations across a wide variety of top IT initiatives. Reducing costs, increasing flexibility, getting more out of existing assets, and aligning business and IT are not easy tasks for any CIO. But with MDM, success is achievable. IT can regain its place as a center for innovation. For more information on this topic, take a look at my article Master Data Management Deployment Tips in the Opinion section of Oracle's Profit Online magazine.

    Read the article

  • In a multidisciplinary team, how much should each member's skills overlap?

    - by spade78
    I've been working in embedded software development for this small startup and our team is pretty small: about 3-4 people. We're responsible for all engineering, which involves an RF device controlled by an embedded microcontroller that connects to a PC host running some sort of data collection and analysis software. I have come to develop two guidelines when I work with my colleagues: 1) Define a clear separation of responsibilities and make sure each person's contribution to the final product doesn't overlap. 2) Don't assume your colleagues know everything about their responsibilities; I assume there is some technology I will need to be competent in to properly interface with their work. The first point is pretty easy for us. I do firmware, one guy does the RF, another does the PC software, and the last does the DSP work. Nothing overlaps in terms of two people's work being mixed into the final product. For that to happen, one guy has to hand off work to another guy, who will vet it and integrate it himself. The second point is the heart of my question. I've learned the hard way not to trust the knowledge of my colleagues absolutely, no matter how many years of experience they claim to have - at least not until they've demonstrated it to me a couple of times. So, whenever I develop a piece of firmware that interfaces with some technology I don't know, I'll try to learn it and develop a piece of test code that helps me understand what they're doing. That way, if my piece of the product comes into conflict with another piece, I have some knowledge about possible causes. For example, the PC guy has started implementing his GUIs in .NET WPF (C#) and using LibUsbDotNet for USB access. So I've been learning C# and the .NET USB library that he uses, and I built a little console app to help me understand how that USB library works. Now all this takes extra time and energy, but I feel it's justified, as it gives me a foothold to confront integration problems. Also, I like learning this new stuff, so I don't mind. On the other hand, I can see how this can turn into a time sink for work that won't make it into the final product and may never turn into a problem. So how much experience/skills overlap do you expect in your teammates relative to your own skills? Does this issue go away as teams get bigger and more diverse?
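    A rough analogue of that kind of exploratory test code (a sketch only: the original post uses a C# console app with LibUsbDotNet; this version uses Python with the pyusb library instead, purely to illustrate the idea of a small throwaway program that shows what the USB stack actually reports):

    # Enumerate attached USB devices to see what the host-side library sees,
    # before integrating with a colleague's PC software.
    # Requires pyusb (pip install pyusb) and a libusb backend on the machine.
    import usb.core

    for dev in usb.core.find(find_all=True):
        # idVendor/idProduct identify each device; handy for checking that the
        # embedded hardware enumerates the way the PC software expects.
        print(f"vendor=0x{dev.idVendor:04x} product=0x{dev.idProduct:04x}")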

    Read the article

  • Introducing Agile development after traditional project inception

    - by Riggy
    About a year and a half ago, I entered a workplace that claimed to do Agile development. What I learned was that this place had adopted several agile practices (such as daily standups, sprint planning and sprint reviews) but none of the principles (a just-in-time / just-good-enough mentality, exposing failure early, rich communication). I've now been tasked with making the team more agile, and I've been assured that I have complete buy-in from the devs and the business team. As a pilot program, they've given me a project that has just completed 15 months of requirements gathering, has a 110-page Analysis & Design document (to be considered "written in stone"), and where I have no access to the end users (only to a committee made up of the users' managers, who won't actually be using the product). I started small, giving them a list of expected deliverables for the first 5 sprints (leaving the future sprints undefined) and a list of goals for the first sprint, and I dissected the A&D doc to get enough user stories to meet the first sprint's goals. Since then, they've asked why we don't have all the requirements for all the sprints, why I haven't started working on stuff for the third sprint (which they consider more important, but which is based on the deliverables of the first 2 sprints), and they are pressing for even more documentation that my entire IT team considers busy-work or unrelated to us (such as writing the user manual up front, documenting all the data fields from all the sprints up front, and more "up-front" work). This has been pretty rough for me as a new project manager, but there are improvements I have effectively implemented, such as Scrumban for story management, pair programming, and having the business give us customer acceptance tests up front (as part of the requirements documentation). So my questions are: What can I do to more effectively introduce change to a resistant business? Are there other practices that I can introduce on the IT side to help show the business the benefits of agile? The burden of documentation is strangling us - the business still sees it as a risk management strategy instead of as a risk. What can we do to alleviate their documentation concerns and demands (specifically the quantity of documentation and their need for all of it up front)? We are in a separate building from our business, about 3 blocks away, and they refuse to have their people on the project co-locate with us because that person "won't be able to work on their other projects while they're at our building." They expect us to always go over there and to bundle our questions so that we can ask them all at once and not waste that person's time with "constant interruptions." What can we do to get richer communication from them? Any additional advice would also be appreciated. Thanks!
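    To make "customer acceptance tests up front" concrete, one approach is to turn each requirement from the A&D document into a small executable check before any implementation exists (a minimal sketch in Python/pytest; the module, function and field names here are hypothetical, not from the original project):

    # Acceptance test derived from a hypothetical requirement:
    # "a submitted order must receive a unique tracking number".
    # order_service.create_order is a placeholder for the system under test.
    from order_service import create_order

    def test_submitted_order_gets_unique_tracking_number():
        first = create_order(customer_id=1)
        second = create_order(customer_id=1)
        assert first.tracking_number is not None
        assert first.tracking_number != second.tracking_number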

    Read the article

  • Endeca Information Discovery 3-Day Hands-on Training Workshop

    - by Mike.Hallett(at)Oracle-BI&EPM
    For Oracle Partners, on October 3-5, 2012 in Milan, Italy: Register here. Endeca Information Discovery plays a key role in your big data analysis and complements Oracle Business Intelligence solutions such as OBIEE. This FREE hands-on workshop for Oracle Partners highlights technical know-how of the product and helps you understand its value proposition. We will walk you through four key components of the product:
    - Oracle Endeca Server: a highly scalable, search-analytical database that derives the data model from the data presented to it, thereby reducing data modeling requirements.
    - Studio: a highly interactive, component-based user interface for configuring advanced, yet intuitive, analytical applications.
    - Integration Suite: provides rapid unification and enrichment of diverse sources of information into a single integrated view.
    - Extensible Value-Added Modules: add-on modules that provide value quickly through configuration instead of custom coding.
    Topics covered will include Data Exploration with Endeca Information Discovery, Data Ingest, Project Lifecycle, building an Endeca Server data model and advanced modeling techniques, and working with Studio.
    Lab Outline: The labs showcase Oracle Endeca Information Discovery components and functionality by providing expertise on features and know-how of building such applications. The hands-on activities are based on a Quick Start application provided during the class.
    Audience: Oracle Partners, Big Data Analytics developers and architects, BI and EPM application developers and implementers, data warehouse developers.
    Equipment Requirements: This workshop requires attendees to provide their own laptops, which must meet the following minimum hardware/software requirements.
    Hardware:
    - 8 GB RAM is highly recommended (a 64-bit Windows machine is required)
    - 40 GB free space (includes staging)
    - USB 2.0 port (at least one available)
    Software:
    - One of the following operating systems: a 64-bit Windows host/laptop OS (Windows 7 or Windows Server 2008), or a 64-bit host/laptop OS with a Windows VM (Server, or Win 7, BIC2g, etc.)
    - Internet Explorer 8.x, Firefox 3.6 or Firefox 6.0
    - WinRAR or 7-Zip utility to unzip workshop files: downloadable from http://www.win-rar.com/download.html or http://www.7zip.com/
    Oracle Endeca Information Discovery Workshop. Register here: October 3-5, 2012: Cinisello Balsamo, Milan. We will confirm your place within 2 weeks. Questions? Send email to [email protected] (Oracle Platform Technologies Enablement Services).

    Read the article

< Previous Page | 96 97 98 99 100 101 102 103 104 105 106 107  | Next Page >