Search Results

Search found 10556 results on 423 pages for 'practical approach'.


  • Aligning text to the bottom of a div: am I confused about CSS or about blueprint? [closed]

    - by larsks
    I've used Blueprint to prototype a very simple page layout...but after reading up on absolute vs. relative positioning and a number of online tutorials regarding vertical positioning, I'm not able to get things working the way I think they should. Here's my html:

        <div class="container" id="header>
          <div class="span-4" id="logo">
            <img src="logo.png" width="150" height="194" />
          </div>
          <div class="span-20 last" id="title">
            <h1 class="big">TITLE</h1>
          </div>
        </div>

    The document does include the blueprint screen.css file. I want TITLE aligned with the bottom of the logo, which in practical terms means the bottom of #header. This was my first try:

        #header { position: relative; }
        #title { font-size: 36pt; position: absolute; bottom: 0; }

    Not unexpectedly, in retrospect, this puts TITLE flush left with the left edge of #header...but it failed to affect the vertical positioning of the title. So I got exactly the opposite of what I was looking for. So I tried this:

        #title { position: relative; }
        #title h1 { font-size: 36pt; position: absolute; bottom: 0; }

    My theory was that this would align the h1 element with the bottom of the containing div element...but instead it made TITLE disappear completely. I guess this means that it's rendering off the visible screen somewhere. At this point I'm baffled. I'm hoping someone here can point me in the right direction. Thanks!
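    Editor's note: one common way to get the effect described above, sketched against the Blueprint classes in the snippet. Two things to check first: the pasted markup has an unclosed quote in id="header (if that is in the real page and not just a paste typo, the #header rule never matches and the absolutely positioned title has no positioned ancestor to anchor to), and the h1's default margin will keep the text off the bottom edge even when the positioning is right.

        /* a minimal sketch, assuming the markup above with the quote fixed */
        #header { position: relative; }   /* containing block for the title        */
        #title  {
          position: absolute;             /* taken out of the grid flow            */
          bottom: 0;                      /* flush with the bottom of #header      */
          right: 0;                       /* pin to the right edge; adjust offsets */
        }
        #title h1.big { font-size: 36pt; margin: 0; }  /* strip the default h1 margin */

    Because Blueprint's span columns are floated, #header may also need a clearing rule (or overflow: hidden) so its height actually wraps the logo; that detail depends on the rest of the page, so treat it as an assumption to verify.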

    Read the article

  • How to talk a client out of a Flash website?

    - by bunglestink
    I have recently been doing a bunch of web side projects through word-of-mouth recommendations only. Although I am much more of a programmer than a designer, my design skills are not terrible, and I do not hate dealing with UI like many programmers do. As a result, I find myself lured into a bunch of side projects where, aside from a minimal back end for content administration, most of the programming is on front-end interfaces (read javascript/css). By far the biggest frustration I have had is convincing clients that they do not want Flash. Aside from the fact that I really do not enjoy Flash "development", there are many practical reasons why Flash is not desirable (lack of compatibility across devices, decreased client accessibility, plug-in requirements, increased development time, etc.). Instead of just flat out telling the clients "I will not build you a Flash website", I would much rather use tactics to convince/explain to them that this is not what they actually want, ie: that Flash does not meet their requirements any better than standard html/css/js and only distracts users from their content. What kind of first-hand experience do others have with this? How do you explain to someone that javascript/css/AJAX is usually a better option for most websites? Why do people want to use Flash so badly to begin with? This question pertains to clients who do not have any technical reasons for wanting Flash, but just want it because they think it makes pretty websites.

    Read the article

  • How to connect to internet using huawei E303C datacard

    - by Rahul Choudhary
    It is very difficult to use Ubuntu. After six hours of research on various websites and Ubuntu blogs, I was unable to connect to the internet using my Huawei DataCard E303C. I wonder, if it takes this long just to figure out how to connect to the internet with a 3G datacard (Huawei E303C), how would I do my daily computing tasks using Ubuntu? Is there anyone who can help me with that? My personal practical experience says that using Ubuntu instead of Windows is not just different but tough. Yes, it is very tough to use Ubuntu. Why can't it be like Windows, where we double-click and the software gets installed? Why do we always have to use the terminal to install everything? Why is everything in Ubuntu so difficult and tough to accomplish? I am drifting away from Ubuntu. Can any developer at Ubuntu help keep my interest in Ubuntu? Is there anyone who can lure me to use Ubuntu again?

    Read the article

  • Good Laptop .NET Developer VM Setup

    - by Steve Brouillard
    I was torn between putting this question on this site or SuperUsers. I've tried to do a good bit of searching on this, and while I find plenty of info on why to go with a VM or not, there isn't much practical advice on HOW to best set things up. Here's what I currently HAVE:

    - HP EliteBook 1540, quad-core, 8GB memory, 500GB 7200 RPM HD, eSATA port. Decent machine. Should work just fine.
    - Windows 7 64-bit Host OS. This also acts as my OS for day-to-day basic stuff (email, Word docs, etc...).
    - VMWare Desktop with a Windows 7 64-bit Guest OS and all my .NET dev tools, frameworks, etc. loaded on it. It's configured to use 2 cores and up to 6GB of memory. I figure that the dev env will need more than email, Word, etc...

    So, this seemed like a good option to me, but I find with the VM running, things tend to slow down all around on both the host and guest OS. Memory and CPU utilization don't seem to be an issue, but I/O does. I tried running the VM on an external eSATA drive, figuring that the extra channel might pick up the slack. Things only got worse (could be my eSATA enclosure). So, for all of that I have basically two questions in one: Has anyone used this sort of setup, and are there any gotchas either around the VMWare configuration or anything else I may have missed here that you can point me to? Is there another option that might work better? For example, I've considered trying a lighter-weight host OS and running both of my environments as VMs. I tried this with Server 2008 Hyper-V, but I lose too much laptop functionality going this route, so I never completed setup. I'm not averse to Linux as a host OS, though I'm no Linux expert. If I'm missing any critical info, feel free to ask. Thanks in advance for your help. Steve

    Read the article

  • Why do some programmers think there is a contrast between theory and practice?

    - by Giorgio
    Comparing software engineering with civil engineering, I was surprised to observe a different way of thinking: any civil engineer knows that if you want to build a small hut in the garden you can just get the materials and go build it whereas if you want to build a 10-storey house you need to do quite some maths to be sure that it won't fall apart. In contrast, speaking with some programmers or reading blogs or forums I often find a wide-spread opinion that can be formulated more or less as follows: theory and formal methods are for mathematicians / scientists while programming is more about getting things done. What is normally implied here is that programming is something very practical and that even though formal methods, mathematics, algorithm theory, clean / coherent programming languages, etc, may be interesting topics, they are often not needed if all one wants is to get things done. According to my experience, I would say that while you do not need much theory to put together a 100-line script (the hut), in order to develop a complex application (the 10-storey building) you need a structured design, well-defined methods, a good programming language, good text books where you can look up algorithms, etc. So IMO (the right amount of) theory is one of the tools for getting things done. So my question is why do some programmers think that there is a contrast between theory (formal methods) and practice (getting things done)? Is software engineering (building software) perceived by many as easy compared to, say, civil engineering (building houses)? Or are these two disciplines really different (apart from mission-critical software, software failure is much more acceptable than building failure)?

    Read the article

  • SQLAuthority News – Download Whitepaper – Choosing a Tabular or Multidimensional Modeling Experience in SQL Server 2012 Analysis Services

    - by pinaldave
    Data modeling is the most important task for any BI professional. As a matter of fact, the biggest challenge is organizing disparate data into an analytic model that effectively and efficiently supports reporting and analysis. SQL Server 2012 introduces the BI Semantic Model (BISM), a single model that can support a broad range of reporting and analysis while blending two Analysis Services modeling experiences behind the scenes. Multidimensional modeling – enables BI professionals to create sophisticated multidimensional cubes using traditional online analytical processing (OLAP). Tabular modeling – provides self-service data modeling capabilities to business and data analysts. As data modeling evolves and business needs grow, new technologies and tools are emerging to help end users make the necessary adjustments to their reporting and analysis needs. This white paper provides practical guidance to help you decide which SQL Server 2012 Analysis Services modeling experience – tabular or multidimensional – is right for you. Do let me know your opinion in a comment. In simple words – I would like to know when you would use Tabular modeling and when Multidimensional modeling. Download Choosing a Tabular or Multidimensional Modeling Experience in SQL Server 2012 Analysis Services Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Business Intelligence, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, SQL White Papers, T SQL, Technology

    Read the article

  • New Cloud Security Book: Securing the Cloud by Vic Winkler

    - by user12608550
    It's rare that I read a technical book straight through; I usually read key chapters and save the rest for later reference. But Winkler's book, written by an accomplished and highly experienced security professional, was worth a complete read, cover to cover. Of the recently published cloud security books, such as... Cloud Security and Privacy: An Enterprise Perspective on Risks and Compliance, by Tim Mather, Subra Kumaraswamy, and Shahed Latif; O'Reilly Media Inc, 2009; Cloud Computing: Implementation, Management, and Security, by John Rittenhouse and James Ransome; CRC Press 2010; Cloud Security: A Comprehensive Guide to Secure Cloud Computing, by Ronald Krutz and Russell Vines; Wiley Publishing Inc, 2010 ...Securing the Cloud is the most useful and informative about all aspects of cloud security. Clearly, through his experience, the author has thought through many practical issues of securing large, virtualized IT installations. His Chapter 6 on Best Practices and Chapter 9 with its valuable checklists are worth the price of the book. If you are among the many new cloud computing professionals, Securing the Cloud is an essential reference for your work.

    Read the article

  • What stands out on a Junior's CV

    - by DeanMc
    Hi all, I am looking for advice, hopefully from people more experienced than me. I am going to start applying for junior positions in .NET development in the summer. At the moment the only thing on my CV is a project I have done for Windows Phone 7 called Sprout SMS. It allows the user to send webtexts as provided by Irish operators. The app is doing well and is top ten in my local marketplace (Ireland). By trade I am a salesman and that is the extent of most of my employment. I haven't been to college, and due to financial commitments I would not be able to go down the road of full-time education. I have kept up to date with various .NET-related tech in a junior capacity and am now looking to change careers. What I am looking to see is what stands out on a junior's CV. My lack of education stands against me, so I am looking to offset this with practical experience. I am open to all suggestions, and from the end of this month I am free to pursue "notches" which will make my CV stand out. So in short, if you were hiring a junior, what would you like to see that would make you take a second look or request an interview? NOTE: I do fear this is a subjective subject; rather than debate what the best items on a junior's CV are, I would like to concentrate on what info you would give to a junior who is looking to apply for a job this year. Thanks to all that respond.

    Read the article

  • Is it possible to implement an infinite IEnumerable without using yield with only C# code?

    - by sinelaw
    This isn't a practical problem, it's more of a riddle.

    Problem: I'm curious to know if there's a way to implement something equivalent to the following, but without using yield:

        IEnumerable<T> Infinite<T>()
        {
            while (true) { yield return default(T); }
        }

    Rules:
    - You can't use the yield keyword.
    - Use only C# itself directly - no IL code, no constructing dynamic assemblies, etc.
    - You can only use the basic .NET lib (only mscorlib.dll, System.Core.dll? not sure what else to include). However, if you find a solution with some of the other .NET assemblies (WPF?!), I'm also interested.
    - Don't implement IEnumerable or IEnumerator.

    Notes: The closest I've come yet:

        IEnumerable<int> infinite = null;
        infinite = new int[1].SelectMany(x => new int[1].Concat(infinite));

    This is "correct" but hits a StackOverflowException after 14399 iterations through the enumerable (not quite infinite). I'm thinking there might be no way to do this due to the CLR's lack of tail recursion optimization. A proof would be nice :)
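    Editor's sketch, not a full answer to the riddle: composing existing LINQ operators avoids the recursive Concat that blows the stack, but it only gets you to int.MaxValue * int.MaxValue elements ("practically" unbounded rather than truly infinite), and the operators themselves are iterator-based under the covers.

        using System.Collections.Generic;
        using System.Linq;

        static class AlmostInfinite
        {
            // Roughly 4.6 * 10^18 elements, produced lazily with constant stack depth,
            // using only mscorlib/System.Core and no yield in our own code.
            public static IEnumerable<T> Of<T>()
            {
                return Enumerable.Repeat(0, int.MaxValue)
                                 .SelectMany(_ => Enumerable.Repeat(default(T), int.MaxValue));
            }
        }

    It never overflows the stack, but it also never quite reaches "infinite", so the original riddle still stands.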

    Read the article

  • JavaOne+Develop vs Oracle OpenWorld

    - by Rick Ramsey
    This year, Oracle OpenWorld will be held Sep 19-23rd in San Francisco. Also this year, JavaOne+Develop will be held Sep 19-23rd in San Francisco. How can that be? Simple. Oracle has acquired the city of San Francisco. OK, not all of it. But an awful lot of it. And it didn't actually acquire the city of San Francisco. It just sorta borrowed it. So, alongside Oracle OpenWorld, the world's most important developer conferences are creating the world's coolest neighborhood. The Zone--San Francisco's Hotel Nikko, Hilton San Francisco, and Parc 55 hotels and the surrounding area--will be dedicated to developers during the week of JavaOne + Develop. Unparalleled education and practical hands-on sessions, engaging activities, exceptional entertainment, and food and drink in the Zone will be exclusively geared toward the developer community converging at JavaOne + Develop. Network, share information, and learn from leading experts in the Java, PL/SQL, rich internet application development, and SOA communities, and more. Forget the business casual dress code and golf simulation: The Zone does things the developers' way. See for yourself, September 19 - 23, 2010. Participate in dozens of hands-on labs, including Oracle Database, Oracle Application Express, Oracle WebLogic Server, Java, SOA, .NET, Oracle JDeveloper, Eclipse, Oracle Solaris Studio, and application grid technologies. The Develop 2010 call for papers is now closed. Review and selection is under way, and we expect to notify presenters by mid-May, 2010. If the submissions we've received are any indication, you can look forward to an outstanding developer conference this year in San Francisco. Thanks to all of you who contributed papers by the March 21 deadline.

    Read the article

  • What Poor Project Management Might Be Costing You

    - by Sylvie MacKenzie, PMP
    For project-intensive organizations, capital investment decisions define both success and failure. Getting them wrong—the risk of delays and schedule and cost overruns are ever present—introduces the potential for huge financial losses. The resulting consequences can be significant, and directly impact both a company’s profit outlook and its share price performance—which in turn is the fundamental measure of executive performance. This intrinsic link between long-term investment planning and short-term market performance is investigated in the independent report Stock Shock, written by a consultant from Clarity Economics and commissioned by the EPPM Board. A new international steering group organized by Oracle, the EPPM Board brings together senior executives from leading public and private sector organizations to explore the critical role played by enterprise project and portfolio management (EPPM). Stock Shock reviews several high-profile recent project failures, and combined with other research reviews the lessons to be learned. It analyzes how portfolio management is an exercise in balancing risk and reward, a process that places the emphasis firmly on executives to correctly determine which potential investments will deliver the greatest value and contribute most to the bottom line. Conversely, it also details how poor evaluation decisions can quickly impact the overall value of an organization’s project portfolio and compromise long-range capital planning goals. Failure to Deliver—In Search of ROI The report also cites figures from the Economist Intelligence Unit survey that found that more organizations (12 percent) expected to deliver planned ROI less than half the time, than those (11 percent) who claim to deliver it 90 percent or more of the time. This fact is linked to a recent report from Booz & Co. that shows how the average tenure of a global chief executive has fallen from 8.1 years to 6.3 years. “Senior executives need to begin looking at effective project delivery not as a bonus, but as an essential facet of business success,” according to Stock Shock author Phil Thornton. “Consolidated and integrated visibility into individual projects is the most practical solution to overcoming these challenges, which explains the increasing popularity of PPM technologies as an effective oversight and delivery platform.” Stock Shock is available for download on the EPPM microsite at http://www.oracle.com/oms/eppm/us/stock-shock-report-1691569.html

    Read the article

  • ArchBeat Link-o-Rama for 2012-09-20

    - by Bob Rhubart
    Attend OTN Architect Day – by Architects, for Architects – October 25: You won't need 3D glasses to take in these live presentations (8 sessions, two tracks) on Cloud computing, SOA, and engineered systems. And the ticket price is: Zero. Nothing. Absolutely free. Register now for Oracle Technology Network Architect Day in Los Angeles. Thursday October 25, 2012, 8:00am – 5:00pm, Sofitel Los Angeles, 8555 Beverly Boulevard, Los Angeles, CA 90048

    Loving VirtualBox 4.2… | The ORACLE-BASE Blog: Is it wrong for a man to love a technology? Oracle ACE Director Tim Hall has several very good reasons for his feelings…

    Running RichFaces on WebLogic 12c | Markus Eisele: "With all the JMS magic and the different provider checks in the showcase this has become some kind of a challenge to simply build and deploy it," says Oracle ACE Director Markus Eisele. His detailed post will help you to meet that challenge.

    Oracle ADF Coverage at OOW | Frank Nimphius: Frank Nimphius shares a comprehensive and well-organized list of Oracle ADF sessions and activities scheduled for Oracle OpenWorld in San Francisco.

    OIM 11g R2 Catalog Customization Example | Daniel Gralewski: Oracle Fusion Middleware A-Team member Daniel Gralewski's post shows "how OIM catalog can be customized by using OIM UI capabilities such as managed beans and EL expressions. The post first describes the use case and the solution to address the use case; then it describes the solution details as well as provides links to the artifacts."

    New Book: Oracle BPM Suite 11g: Advanced BPMN Topics | Mark Nelson: Redstack blogger Mark Nelson shares an overview of Oracle BPM Suite 11g: Advanced BPMN Topics, the new book he co-authored with Tanya Williams. Nelson describes the book as "a concise presentation of both theory and practical examples of the areas of BPMN where we have encountered the most widespread confusion and misunderstanding."

    Thought for the Day: "I strive for an architecture from which nothing can be taken away." — Helmut Jahn (Source: Brainy Quote)

    Read the article

  • Visual Studio 2010 on Macbook Air

    - by Kyle B.
    Does anyone here run Visual Studio 2010 (or VS12 RC) on a MacBook Air? I have the current model with 4GB RAM, 13" screen, and 256GB SSD drive. Before I go through the effort of configuring this, I'd like to know if anyone from the community has done this and: Was the performance acceptable? If it is, I plan to get a larger Cinema Display as a second monitor and do all my coding on this machine, ditching my desktop. Did you use Boot Camp, Parallels, or VMWare? I feel that to maximize performance Boot Camp would be necessary to make the best use of the memory, but am not sure if this is completely necessary. I'd prefer to use a VM, but wasn't sure if this was practical and would value your input before buying a license. Did you also run anything else on the Windows installation, such as SQL Server Express, IIS Express, etc.? Did performance lag after a certain point? Note: I would have asked this on superuser.com, but felt this applied more directly to the programming community.

    Read the article

  • Solving Big Problems with Oracle R Enterprise, Part II

    - by dbayard
    Part II – Solving Big Problems with Oracle R Enterprise In the first post in this series (see https://blogs.oracle.com/R/entry/solving_big_problems_with_oracle), we showed how you can use R to perform historical rate of return calculations against investment data sourced from a spreadsheet.  We demonstrated the calculations against sample data for a small set of accounts.  While this worked fine, in the real-world the problem is much bigger because the amount of data is much bigger.  So much bigger that our approach in the previous post won’t scale to meet the real-world needs. From our previous post, here are the challenges we need to conquer: The actual data that needs to be used lives in a database, not in a spreadsheet The actual data is much, much bigger- too big to fit into the normal R memory space and too big to want to move across the network The overall process needs to run fast- much faster than a single processor The actual data needs to be kept secured- another reason to not want to move it from the database and across the network And the process of calculating the IRR needs to be integrated together with other database ETL activities, so that IRR’s can be calculated as part of the data warehouse refresh processes In this post, we will show how we moved from sample data environment to working with full-scale data.  This post is based on actual work we did for a financial services customer during a recent proof-of-concept. Getting started with the Database At this point, we have some sample data and our IRR function.  We were at a similar point in our customer proof-of-concept exercise- we had sample data but we did not have the full customer data yet.  So our database was empty.  But, this was easily rectified by leveraging the transparency features of Oracle R Enterprise (see https://blogs.oracle.com/R/entry/analyzing_big_data_using_the).  The following code shows how we took our sample data SimpleMWRRData and easily turned it into a new Oracle database table called IRR_DATA via ore.create().  The code also shows how we can access the database table IRR_DATA as if it was a normal R data.frame named IRR_DATA. If we go to sql*plus, we can also check out our new IRR_DATA table: At this point, we now have our sample data loaded in the database as a normal Oracle table called IRR_DATA.  So, we now proceeded to test our R function working with database data. As our first test, we retrieved the data from a single account from the IRR_DATA table, pull it into local R memory, then call our IRR function.  This worked.  No SQL coding required! Going from Crawling to Walking Now that we have shown using our R code with database-resident data for a single account, we wanted to experiment with doing this for multiple accounts.  In other words, we wanted to implement the split-apply-combine technique we discussed in our first post in this series.  Fortunately, Oracle R Enterprise provides a very scalable way to do this with a function called ore.groupApply().  You can read more about ore.groupApply() here: https://blogs.oracle.com/R/entry/analyzing_big_data_using_the1 Here is an example of how we ask ORE to take our IRR_DATA table in the database, split it by the ACCOUNT column, apply a function that calls our SimpleMWRR() calculation, and then combine the results. (If you are following along at home, be sure to have installed our myIRR package on your database server via  “R CMD INSTALL myIRR”). 
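    (The original code for this step appears only as screenshots, which are not reproduced in this excerpt. The sketch below reconstructs the calls described in the text: the table, column, data, and function names come from the post itself, while the connection details and exact argument names are assumptions to check against the Oracle R Enterprise documentation.)

        library(ORE)
        ore.connect(user = "rquser", password = "...", sid = "orcl",
                    host = "dbhost", all = TRUE)        # placeholder credentials

        # Push the sample data.frame into the database as table IRR_DATA,
        # then work with it through its ore.frame proxy of the same name.
        ore.create(SimpleMWRRData, table = "IRR_DATA")

        results <- ore.groupApply(
          IRR_DATA,                       # ore.frame proxy for the IRR_DATA table
          IRR_DATA$ACCOUNT,               # split the rows by account
          function(dat) SimpleMWRR(dat),  # run the IRR calculation per account
          parallel = TRUE)                # let the database run parallel R engines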
The interesting thing about ore.groupApply is that the calculation is not actually performed in my desktop R environment from which I am running.  What actually happens is that ore.groupApply uses the Oracle database to perform the work.  And the Oracle database is what actually splits the IRR_DATA table by ACCOUNT.  Then the Oracle database takes the data for each account and sends it to an embedded R engine running on the database server to apply our R function.  Then the Oracle database combines all the individual results from the calls to the R function. This is significant because now the embedded R engine only needs to deal with the data for a single account at a time.  Regardless of whether we have 20 accounts or 1 million accounts or more, the R engine that performs the calculation does not care.  Given that normal R has a finite amount of memory to hold data, the ore.groupApply approach overcomes the R memory scalability problem since we only need to fit the data from a single account in R memory (not all of the data for all of the accounts). Additionally, the IRR_DATA does not need to be sent from the database to my desktop R program.  Even though I am invoking ore.groupApply from my desktop R program, because the actual SimpleMWRR calculation is run by the embedded R engine on the database server, the IRR_DATA does not need to leave the database server- this is both a performance benefit because network transmission of large amounts of data take time and a security benefit because it is harder to protect private data once you start shipping around your intranet. Another benefit, which we will discuss in a few paragraphs, is the ability to leverage Oracle database parallelism to run these calculations for dozens of accounts at once. From Walking to Running ore.groupApply is rather nice, but it still has the drawback that I run this from a desktop R instance.  This is not ideal for integrating into typical operational processes like nightly data warehouse refreshes or monthly statement generation.  But, this is not an issue for ORE.  Oracle R Enterprise lets us run this from the database using regular SQL, which is easily integrated into standard operations.  That is extremely exciting and the way we actually did these calculations in the customer proof. As part of Oracle R Enterprise, it provides a SQL equivalent to ore.groupApply which it refers to as “rqGroupEval”.  To use rqGroupEval via SQL, there is a bit of simple setup needed.  Basically, the Oracle Database needs to know the structure of the input table and the grouping column, which we are able to define using the database’s pipeline table function mechanisms. Here is the setup script: At this point, our initial setup of rqGroupEval is done for the IRR_DATA table.  The next step is to define our R function to the database.  We do that via a call to ORE’s rqScriptCreate. Now we can test it.  The SQL you use to run rqGroupEval uses the Oracle database pipeline table function syntax.  The first argument to irr_dataGroupEval is a cursor defining our input.  You can add additional where clauses and subqueries to this cursor as appropriate.  The second argument is any additional inputs to the R function.  The third argument is the text of a dummy select statement.  The dummy select statement is used by the database to identify the columns and datatypes to expect the R function to return.  The fourth argument is the column of the input table to split/group by.  
The final argument is the name of the R function as you defined it when you called rqScriptCreate(). The Real-World Results In our real customer proof-of-concept, we had more sophisticated calculation requirements than shown in this simplified blog example.  For instance, we had to perform the rate of return calculations for 5 separate time periods, so the R code was enhanced to do so.  In addition, some accounts needed a time-weighted rate of return to be calculated, so we extended our approach and added an R function to do that.  And finally, there were also a few more real-world data irregularities that we needed to account for, so we added logic to our R functions to deal with those exceptions.  For the full-scale customer test, we loaded the customer data onto a Half-Rack Exadata X2-2 Database Machine.  As our half-rack had 48 physical cores (and 96 threads if you consider hyperthreading), we wanted to take advantage of that CPU horsepower to speed up our calculations.  To do so with ORE, it is as simple as leveraging the Oracle Database Parallel Query features.  Let’s look at the SQL used in the customer proof: Notice that we use a parallel hint on the cursor that is the input to our rqGroupEval function.  That is all we need to do to enable Oracle to use parallel R engines. Here are a few screenshots of what this SQL looked like in the Real-Time SQL Monitor when we ran this during the proof of concept (hint: you might need to right-click on these images to be able to view the images full-screen to see the entire image): From the above, you can notice a few things (numbers 1 thru 5 below correspond with highlighted numbers on the images above.  You may need to right click on the above images and view the images full-screen to see the entire image): The SQL completed in 110 seconds (1.8minutes) We calculated rate of returns for 5 time periods for each of 911k accounts (the number of actual rows returned by the IRRSTAGEGROUPEVAL operation) We accessed 103m rows of detailed cash flow/market value data (the number of actual rows returned by the IRR_STAGE2 operation) We ran with 72 degrees of parallelism spread across 4 database servers Most of our 110seconds was spent in the “External Procedure call” event On average, we performed 8,200 executions of our R function per second (110s/911k accounts) On average, each execution was passed 110 rows of data (103m detail rows/911k accounts) On average, we did 41,000 single time period rate of return calculations per second (each of the 8,200 executions of our R function did rate of return calculations for 5 time periods) On average, we processed over 900,000 rows of database data in R per second (103m detail rows/110s) R + Oracle R Enterprise: Best of R + Best of Oracle Database This blog post series started by describing a real customer problem: how to perform a lot of calculations on a lot of data in a short period of time.  While standard R proved to be a very good fit for writing the necessary calculations, the challenge of working with a lot of data in a short period of time remained. This blog post series showed how Oracle R Enterprise enables R to be used in conjunction with the Oracle Database to overcome the data volume and performance issues (as well as simplifying the operations and security issues).  It also showed that we could calculate 5 time periods of rate of returns for almost a million individual accounts in less than 2 minutes. 
In a future post, we will take the same R function and show how Oracle R Connector for Hadoop can be used in the Hadoop world.  In that next post, instead of having our data in an Oracle database, our data will live in Hadoop, and we will show how to use the Oracle R Connector for Hadoop and other Oracle Big Data Connectors to move data between Hadoop, R, and the Oracle Database easily.
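    (Editor's note on the SQL flavor: the rqGroupEval setup and invocation in the post exist only as screenshots. The rough sketch below follows the argument order the post describes (input cursor, parameter cursor, dummy select defining the output shape, grouping column, R script name) plus the parallel hint mentioned for the Exadata run. The package, function, and script names here are hypothetical, and the DDL follows the general Oracle R Enterprise pattern; check it against the product documentation rather than taking it as-is.)

        -- One-time setup: a ref-cursor type over IRR_DATA and a pipelined
        -- table function wired to ORE's group-eval implementation.
        CREATE OR REPLACE PACKAGE irrPkg AS
          TYPE cur IS REF CURSOR RETURN IRR_DATA%ROWTYPE;
        END irrPkg;
        /
        CREATE OR REPLACE FUNCTION irr_dataGroupEval(
            inp_cur irrPkg.cur, par_cur SYS_REFCURSOR,
            out_qry VARCHAR2,   grp_col VARCHAR2, exp_txt CLOB)
          RETURN SYS.AnyDataSet PIPELINED
          PARALLEL_ENABLE (PARTITION inp_cur BY HASH (ACCOUNT))
          CLUSTER inp_cur BY (ACCOUNT)
          USING rqGroupEvalImpl;
        /
        -- Per-account IRR as part of a normal SQL statement.
        SELECT *
          FROM TABLE(irr_dataGroupEval(
                 CURSOR(SELECT /*+ PARALLEL(t, 72) */ * FROM IRR_DATA t),  -- input rows
                 NULL,                                                     -- no extra parameters
                 'SELECT ''acct'' ACCOUNT, 1 IRR FROM dual',               -- output shape
                 'ACCOUNT',                                                -- split/group column
                 'SimpleMWRR_script'));                                    -- name given to rqScriptCreate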

    Read the article

  • Ubuntu impossible to install: "unable to find a medium containing a live filesystem"

    - by Lorenzo
    Yes, I have read questions and answers with similar titles for this issue, which prevented me from installing Ubuntu for several MONTHS now, trying to figure it out. I have a MacBookPro with triple partition (one for Mac Snow Leopard, one for Windows 7, one for Linux) created with reFIT firmware (not BootCamp). I set up the system according to these instructions for reFIT: http://lifehacker.com/5531037/how-to-triple+boot-your-mac-with-windows-and-linux-no-boot-camp-required Now. There is a free partition ready to accept Linux into its arms, but Linux does not want to participate. Most answers to the issue "unable to find a medium containing a live filesystem" point to changing the BIOS booting system (which I don't know how to do, especially using this reFIT booting system), and to changing the socket of the USB (which does not concern me, since I am using a CD, actually I tried with a CD, then with a DVD, since a blank CD is only 700MB while the iso image file of Ubuntu is about 731MB). Anyway. This is what happens: I am in the Mac system (using the Mac partition) I insert the DVD with the burned image of Ubuntu (yes I have tried burning it again and agin on both CD and DVD blank discs). I restart the computer. When reFIT loads, I hold down the ALT key until the CD image appears. I select it, and hit Enter. A small Ubuntu icon appears at the bottom of the screen. Then a Ubuntu sign appears in the middle of the screen with small dots underneath, lighting up progressively over and over to indicate it's loading. Then everything turns black and the following message appears, at the end of a few lines of text: "unable to find medium with live file system". Please provide very practical suggestions on what to do to an unexperienced wannabe Ubuntu very patient user. Please start by saying how do I access the BIOS setup from reFIT bootup, and exactly what and why I need to try and change. (Will this mess up my reFIT bootup?) And anything else I need to do to finally be able to install Ubuntu. Thanks

    Read the article

  • Big GRC: Turning Data into Actionable GRC Intelligence

    - by Jenna Danko
    While it’s no longer headline news that Governments have carried out large scale data-mining programmes aimed at terrorism detection and identifying other patterns of interest across a wide range of digital data sources, the debate over the ethics and justification over this action, will clearly continue for some time to come. What is becoming clear is that these programmes are a framework for the collation and aggregation of massive amounts of unstructured data and from this, the creation of actionable intelligence from analyses that allowed the analysts to explore and extract a variety of patterns and then direct resources. This data included audio and video chats, phone calls, photographs, e-mails, documents, internet searches, social media posts and mobile phone logs and connections. Although Governance, Risk and Compliance (GRC) professionals are not looking at the implementation of such programmes, there are many similar GRC “Big data” challenges to be faced and potential lessons to be learned from these high profile government programmes that can be applied a lot closer to home. For example, how can GRC professionals collect, manage and analyze an enormous and disparate volume of data to create and manage their own actionable intelligence covering hidden signs and patterns of criminal activity, the early or retrospective, violation of regulations/laws/corporate policies and procedures, emerging risks and weakening controls etc. Not exactly the stuff of James Bond to be sure, but it is certainly more applicable to most GRC professional’s day to day challenges. So what is Big Data and how can it benefit the GRC process? Although it often varies, the definition of Big Data largely refers to the following types of data: Traditional Enterprise Data – includes customer information from CRM systems, transactional ERP data, web store transactions, and general ledger data. Machine-Generated /Sensor Data – includes Call Detail Records (“CDR”), weblogs and trading systems data. Social Data – includes customer feedback streams, micro-blogging sites like Twitter, and social media platforms like Facebook. The McKinsey Global Institute estimates that data volume is growing 40% per year, and will grow 44x between 2009 and 2020. But while it’s often the most visible parameter, volume of data is not the only characteristic that matters. In fact, according to sources such as Forrester there are four key characteristics that define big data: Volume. Machine-generated data is produced in much larger quantities than non-traditional data. This is all the data generated by IT systems that power the enterprise. This includes live data from packaged and custom applications – for example, app servers, Web servers, databases, networks, virtual machines, telecom equipment, and much more. Velocity. Social media data streams – while not as massive as machine-generated data – produce a large influx of opinions and relationships valuable to customer relationship management as well as offering early insight into potential reputational risk issues. Even at 140 characters per tweet, the high velocity (or frequency) of Twitter data ensures large volumes (over 8 TB per day) need to be managed. Variety. Traditional data formats tend to be relatively well defined by a data schema and change slowly. In contrast, non-traditional data formats exhibit a dizzying rate of change. 
Without question, all GRC professionals work in a dynamic environment and as new services, new products, new business lines are added or new marketing campaigns executed for example, new data types are needed to capture the resultant information.  Value. The economic value of data varies significantly. Typically, there is good information hidden amongst a larger body of non-traditional data that GRC professionals can use to add real value to the organisation; the greater challenge is identifying what is valuable and then transforming and extracting that data for analysis and action. For example, customer service calls and emails have millions of useful data points and have long been a source of information to GRC professionals. Those calls and emails are critical in helping GRC professionals better identify hidden patterns and implement new policies that can reduce the amount of customer complaints.   Now on a scale and depth far beyond those in place today, all that unstructured call and email data can be captured, stored and analyzed to reveal the reasons for the contact, perhaps with the aggregated customer results cross referenced against what is being said about the organization or a similar peer organization on social media. The organization can then take positive actions, communicating to the market in advance of issues reaching the press, strengthening controls, adjusting risk profiles, changing policy and procedures and completely minimizing, if not eliminating, complaints and compensation for that specific reason in the future. In this one example of many similar ones, the GRC team(s) has demonstrated real and tangible business value. Big Challenges - Big Opportunities As pointed out by recent Forrester research, high performing companies (those that are growing 15% or more year-on-year compared to their peers) are taking a selective approach to investing in Big Data.  "Tomorrow's winners understand this, and they are making selective investments aimed at specific opportunities with tangible benefits where big data offers a more economical solution to meet a need." (Forrsights Strategy Spotlight: Business Intelligence and Big Data, Q4 2012) As pointed out earlier, with the ever increasing volume of regulatory demands and fines for getting it wrong, limited resource availability and out of date or inadequate GRC systems all contributing to a higher cost of compliance and/or higher risk profile than desired – a big data investment in GRC clearly falls into this category. However, to make the most of big data organizations must evolve both their business and IT procedures, processes, people and infrastructures to handle these new high-volume, high-velocity, high-variety sources of data and be able integrate them with the pre-existing company data to be analyzed. GRC big data clearly allows the organization access to and management over a huge amount of often very sensitive information that although can help create a more risk intelligent organization, also presents numerous data governance challenges, including regulatory compliance and information security. In addition to client and regulatory demands over better information security and data protection the sheer amount of information organizations deal with the need to quickly access, classify, protect and manage that information can quickly become a key issue  from a legal, as well as technical or operational standpoint. 
However, by making information governance processes a bigger part of everyday operations, organizations can make sure data remains readily available and protected. The Right GRC & Big Data Partnership Becomes Key  The "getting it right first time" mantra used in so many companies remains essential for any GRC team that is sponsoring, helping kick start, or even overseeing a big data project. To make a big data GRC initiative work and get the desired value, partnerships with companies, who have a long history of success in delivering successful GRC solutions as well as being at the very forefront of technology innovation, becomes key. Clearly solutions can be built in-house more cheaply than through vendor, but as has been proven time and time again, when it comes to self built solutions covering AML and Fraud for example, few have able to scale or adapt appropriately to meet the changing regulations or challenges that the GRC teams face on a daily basis. This has led to the creation of GRC silo’s that are causing so many headaches today. The solutions that stand out and should be explored are the ones that can seamlessly merge the traditional world of well-known data, analytics and visualization with the new world of seemingly innumerable data sources, utilizing Big Data technologies to generate new GRC insights right across the enterprise.Ultimately, Big Data is here to stay, and organizations that embrace its potential and outline a viable strategy, as well as understand and build a solid analytical foundation, will be the ones that are well positioned to make the most of it. A Blueprint and Roadmap Service for Big Data Big data adoption is first and foremost a business decision. As such it is essential that your partner can align your strategies, goals, and objectives with an architecture vision and roadmap to accelerate adoption of big data for your environment, as well as establish practical, effective governance that will maintain a well managed environment going forward. Key Activities: While your initiatives will clearly vary, there are some generic starting points the team and organization will need to complete: Clearly define your drivers, strategies, goals, objectives and requirements as it relates to big data Conduct a big data readiness and Information Architecture maturity assessment Develop future state big data architecture, including views across all relevant architecture domains; business, applications, information, and technology Provide initial guidance on big data candidate selection for migrations or implementation Develop a strategic roadmap and implementation plan that reflects a prioritization of initiatives based on business impact and technology dependency, and an incremental integration approach for evolving your current state to the target future state in a manner that represents the least amount of risk and impact of change on the business Provide recommendations for practical, effective Data Governance, Data Quality Management, and Information Lifecycle Management to maintain a well-managed environment Conduct an executive workshop with recommendations and next steps There is little debate that managing risk and data are the two biggest obstacles encountered by financial institutions.  
Big data is here to stay and risk management certainly is not going anywhere, and ultimately financial services industry organizations that embrace its potential and outline a viable strategy, as well as understand and build a solid analytical foundation, will be best positioned to make the most of it. Matthew Long is a Financial Crime Specialist for Oracle Financial Services. He can be reached at matthew.long AT oracle.com.

    Read the article

  • E.T. Phone "Home" - Hey I've discovered a leak..!

    - by Martin Deh
    Being a member of the WebCenter ATEAM, we are often asked to performance tune a WebCenter custom portal application or a WebCenter Spaces deployment.  Most of the time, the process is pretty much the same.  For example, we often use tools like httpWatch and FireBug to monitor the application, and then perform load tests using JMeter or Selenium.  In addition, there are the fine tuning of the different performance based tuning parameters that are outlined in the documentation and by blogs that have been written by my fellow ATEAMers (click on the "performance" tag in this ATEAM blog).  While performing the load test where the outcome produces a significant reduction in the systems resources (memory), one of the causes that plays a role in memory "leakage" is due to the implementation of the navigation menu UI.  OOTB in both JDeveloper and WebCenter Spaces, there are sample (page) templates that include a "default" navigation menu.  In WebCenter Spaces, this is through the SpacesNavigationModel taskflow region, and in a custom portal (i.e. pageTemplate_globe.jspx) the menu UI is contructed using standard ADF components.  These sample menu UI's basically enable the underlying navigation model to visualize itself to some extent.  However, due to certain limitations of these sample menu implementations (i.e. deeper sub-level of navigations items, look-n-feel, .etc), many customers have developed their own custom navigation menus using a combination of HTML, CSS and JQuery.  While this is supported somewhat by the framework, it is important to know what are some of the best practices in ensuring that the navigation menu does not leak.  In addition, in this blog I will point out a leak (BUG) that is in the sample templates.  OK, E.T. the suspence is killing me, what is this leak? Note: for those who don't know, info on E.T. can be found here In both of the included templates, the example given for handling the navigation back to the "Home" page, will essentially provide a nice little memory leak every time the link is clicked. Let's take a look a simple example, which uses the default template in Spaces. The outlined section below is the "link", which is used to enable a user to navigation back quickly to the Group Space Home page. When you (mouse) hover over the link, the browser displays the target URL. From looking initially at the proposed URL, this is the intended destination.  Note: "home" in this case is the navigation model reference (id), that enables the display of the "pretty URL". Next, notice the current URL, which is displayed in the browser.  Remember, that PortalSiteHome = home.  The other highlighted item adf.ctrl-state, is very important to the framework.  This item is basically a persistent query parameter, which is used by the (ADF) framework to managing the current session and page instance.  Without this parameter present, among other things, the browser back-button navigation will fail.  In this example, the value for this parameter is currently 95K25i7dd_4.  Next, through the navigation menu item, I will click on the Page2 link. Inspecting the URL again, I can see that it reports that indeed the navigation is successful and the adf.ctrl-state is also in the URL.  For those that are wondering why the URL displays Page3.jspx, instead of Page2.jspx. Basically the (file) naming convention for pages created ar runtime in Spaces start at Page1, and then increment as you create additional pages.  The name of the actual link (i.e. Page2) is the page "title" attribute.  
So the moral of the story is, unlike design time created pages, run time created pages the name of the file will 99% never match the name that appears in the link. Next, is to click on the quick link for navigating back to the Home page. Quick investigation yields that the navigation was indeed successful.  In the browser's URL there is a home (pretty URL) reference, and there is also a reference to the adf.ctrl-state parameter.  So what's the issue?  Can you remember what the value was for the adf.ctrl-state?  The current value is 3D95k25i7dd_149.  However, the previous value was 95k25i7dd_4.  Here is what happened.  Remember when (mouse) hovering over the link produced the following target URL: http://localhost:8888/webcenter/spaces/NavigationTest/home This is great for the browser as this URL will navigate to the intended targer.  However, what is missing is the adf.ctrl-state parameter.  Since this parameter was not present upon navigation "within" the framework, the ADF framework produced another adf.ctrl-state (object).  The previous adf.ctrl-state basically is orphaned while continuing to be alive in memory.  Note: the auto-creation of the adf.ctrl state does happen initially when you invoke the Spaces application  for the first time.  The following is the line of code which produced the issue: <af:goLink destination="#{boilerBean.globalLogoURIInSpace} ... Here the boilerBean is responsible for returning the "string" url, which in this case is /spaces/NavigationTest/home. Unfortunately, again what is missing is adf.ctrl-state. Note: there are more than one instance of the goLinks in the sample templates. So E.T. how can I correct this? There are 2 simple fixes.  For the goLink's destination, use the navigation model to return the actually "node" value, then use the goLinkPrettyUrl method to add the current adf.ctrl-state: <af:goLink destination="#{navigationContext.defaultNavigationModel.node['home'].goLinkPrettyUrl}"} ... />  Note: the node value is the [navigation model id]  Using a goLink does solve the main issue.  However, since the link basically does a redirect, some browsers like IE will produce a somewhat significant "flash".  In a Spaces application, this may be an annoyance to the users.  Another way to solve the leakage problem, and also remove the flash between navigations is to use a af:commandLink.  For example, here is the code example for this scenario: <af:commandLink id="pt_cl2asf" actionListener="#{navigationContext.processAction}" action="pprnav">    <f:attribute name="node" value="#{navigationContext.defaultNavigationModel.node['home']}"/> </af:commandLink> Here, the navigation node to where home is located is delivered by way of the attribute to the commandLink.  The actual navigation is performed by the processAction, which is needing the "node" value. E.T. OK, you solved the OOTB sample BUG, what about my custom navigation code? I have seen many implementations of creating a navigation menu through custom code.  In addition, there are some blog sites that also give detailed examples.  The majority of these implementations are very similar.  The code usually involves using standard HTML tags (i.e. DIVS, UL, LI, .,etc) and either CSS or JavaScript (JQuery) to produce the flyout/drop-down effect.  The navigation links in these cases are standard <a href... > tags.  Although, this type of approach is not fully accepted by the ADF community, it does work.  
The important thing to note here is that the <a> tag value must use the goLinkPrettyURL method of contructing the target URL.  For example: <a href="${contextRoot}${menu.goLinkPrettyUrl}"> The main reason why this type of approach is popular is that links that are created this way (also with using af:goLinks), the pages become crawlable by search engines.  CommandLinks are currently not search friendly.  However, in the case of a Spaces instance this may be acceptable.  So in this use-case, af:commandLinks, which would replace the <a>  (or goLink) tags. The example code given of the af:commandLink above is still valid. One last important item.  If you choose to use af:commandLinks, special attention must be given to the scenario in which java script has been used to produce the flyout effect in the custom menu UI.  In many cases that I have seen, the commandLink can only be invoked once, since there is a conflict between the custom java script with the ADF frameworks own scripting to control the view.  The recommendation here, would be to use a pure CSS approach to acheive the dropdown effects. One very important thing to note.  Due to another BUG, the WebCenter environement must be patched to BP3 (patch  p14076906).  Otherwise the leak is still present using the goLinkPrettyUrl method.  Thanks E.T.!  Now I can phone home and not worry about my application running out of resources due to my custom navigation! 
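    (To keep the two leak-free patterns from this post in one place, here they are as a consolidated snippet. Both are taken from the examples above, with the stray brace in the goLink cleaned up; 'home' is the navigation model id used throughout the post, and the text attributes are added only so the snippet stands alone.)

        <!-- Option 1: goLink built from the navigation model; goLinkPrettyUrl carries
             the current adf.ctrl-state, but navigation is a redirect, so some
             browsers show a brief flash. -->
        <af:goLink text="Home"
                   destination="#{navigationContext.defaultNavigationModel.node['home'].goLinkPrettyUrl}"/>

        <!-- Option 2: commandLink driven by processAction; PPR navigation, no flash. -->
        <af:commandLink id="pt_cl2asf" text="Home"
                        actionListener="#{navigationContext.processAction}"
                        action="pprnav">
          <f:attribute name="node" value="#{navigationContext.defaultNavigationModel.node['home']}"/>
        </af:commandLink>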

    Read the article

  • Customer Concepts TV

    - by Richard Lefebvre
    Eliminate the Guesswork in Your Customer's Sales Organization Selling is the lifeblood of every business. In the past, companies would increase headcount to boost sales. In today’s business environment, companies need to re-evaluate the way in which they sell. Sales and marketing organisations must optimise performance, increase team productivity and focus on the best opportunities. Oracle Fusion CRM has been specifically designed with tools to help sales and marketing teams improve efficiency and drive revenue. Territory modeling and management, quota and commission management, collaborative features, real-time customer information and mobile device integration are just some features incorporated. Join us on Customer Concepts TV as we aim to help you find the right strategy for your prospect and customers. Whether they already have a CRM solution in place or are looking for the next level of CRM implementation, this online TV show will give you very practical advice that can help you to make the most out of your CRM implementation.Register now to reserve your spot for this exclusive, live-stream event. Customer Concepts TV comes to you on April 24. Watch the Customer Concepts TV trailer here

    Read the article

  • Does this BSD-like license achieve what I want it to?

    - by Joseph Szymborski
    I was wondering if this license is:

    - self-defeating
    - just a clone of an existing, better-established license
    - practical
    - any more "corporate-friendly" than the GPL
    - too vague/open-ended

    and finally, if there is a better license that achieves a similar effect. I wanted a license that would (in simple terms):

    - be as flexible/simple as the "Simplified BSD" license (which is essentially the MIT license)
    - allow anyone to make modifications as long as I'm attributed
    - require that I get a notification that such a derived work exists
    - require that I have access to the source code and be given license to use the code
    - not oblige the author of the derivative work to have to release the source code to the general public
    - not oblige the author of the derivative work to license the derivative work under a specific license

    Here is the proposed license, which is just the simplified BSD with a couple of additional clauses (all of which are bolded in the original).

        Copyright (c) (year), (author) (email) All rights reserved.

        Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met:

        - Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimer.
        - Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution.
        - The copyright holder(s) must be notified of any redistributions of source code.
        - The copyright holder(s) must be notified of any redistributions in binary form.
        - The copyright holder(s) must be granted access to the source code and/or the binary form of any redistribution upon the copyright holder's request.

        THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.

    Read the article

  • APress Deal of the Day - 1/June/2012 - Introducing Visual C# 2010

    - by TATWORTH
Today's $10 Deal of the Day from APress at http://www.apress.com/9781430231714 is Introducing Visual C# 2010. "If you're new to C# programming, this book is the ideal way to get started. Respected author Adam Freeman guides you through the C# language by carefully building up your knowledge from fundamental concepts to advanced features." Adam Freeman is an excellent author. This is both an excellent introduction to C# programming and a manual for those with experience. Having read through the book, I am very impressed by its practical approach to C#. I cannot improve on the by-line: "Get started on your C# journey with an expert by your side leading by example." Adam Freeman teaches C# by precept and example. I suspect he drives a Volvo C30, as it comes up in many of the code examples! Throughout the book there are numerous links back and forth so as to avoid over-complicating the current topic. I have no hesitation in recommending this book both to programmers starting out with C# and to the seasoned professional. It is a book that should be on every C# development team's bookshelf.

    Read the article

  • What I saw at TechEd North America 2014

    - by Brian Schroer
    Originally posted on: http://geekswithblogs.net/brians/archive/2014/05/19/teched-north-america-2014.aspxI was thrilled to be able to attend TechEd North America 2014 in Houston last week. I got to go to Orlando in 2008, and since then I’ve had to settle for watching the sessions online (which ain’t bad – They’re all available on Channel 9 for streaming or downloading. Here are links to the Developer Track sessions and to the sessions from all tracks.) The sessions I attended (with my favorites bolded) were: Shiny new stuff The Microsoft Application Platform for Developers: Create Applications That Span Devices and Services INTRODUCING: The Future of .NET on the Server DEEP DIVE: The Future of .NET on the Server ASP.NET: Building Web Application Using ASP.NET and Visual Studio The Next Generation of .NET for Building Applications The Future of Visual Basic and C# Stuff you can use now Building Rich Apps with AngularJS on ASP.NET Get the Most Out of Your Code Maps SignalR: Building Real-Time Applications with ASP.NET SignalR Performance Optimize Your ASP.NET Web App Modern Web and Visual Studio Visual Studio Power User: Tips and Tricks Debugging Tips and Tricks in Visual Studio 2013 In a world where the whole company uses TFS… Using Functional, Exploratory and Acceptance Testing to Release with Confidence A Practical View of Release Management for Visual Studio 2013 From Vanity to Value, Metrics That Matter: Improving Lean and Agile, Kanban, and Scrum Ain’t Nobody Got Time for That As usual, there were some time slots with nothing of interest and others with 5 things I wanted to see at the same time. Here are the sessions I’m still planning to watch… Getting Started with TypeScript Building a Large Scale JavaScript Application in TypeScript Modern Application Lifecycle Management Why a Hacker Can Own Your Web Servers in a Day! Async Best Practices for C# and Visual Basic Building Multi-Device Apps with the New Visual Studio Tooling for Apache Cordova Applying S.O.L.I.D. Principles in .NET/C# Native Mobile Application Development for iOS, Android, and Windows in C# and Visual Studio Using Xamarin Latest Innovations in Developing ASP.NET MVC Web Applications Zero to Hero: Untested to Tested with Microsoft Fakes Using Visual Studio Cool and Elegant ASP.NET Web Forms with HTML 5 for the Modern Web The Present and Future of .NET in a World of Devices and Services

    Read the article

  • Key ATG architecture principles

    - by Glen Borkowski
Overview

The purpose of this article is to describe some of the important foundational concepts of ATG. It is not intended to cover all areas of the ATG platform, just the most important subset - the ones that allow ATG to be extremely flexible, configurable, high performance, and so on. For more information on these topics, please see the online product manuals.

Modules

The first concept is the 'ATG Module'. Simply put, you can think of modules as the building blocks for ATG applications. The ATG development team builds the out-of-the-box product using modules (these are the 'out of the box' modules). Then, when a customer is implementing their site, they build their own modules that sit 'on top' of the out-of-the-box ATG modules. Modules can be very simple - containing a minimal definition and perhaps a small amount of configuration. Alternatively, a module can be rather complex - containing custom logic, database schema definitions, configuration, one or more web applications, and so on. Modules generally have dependencies on other modules (the modules beneath them). For example, the Commerce Reference Store (CRS) module requires the DCS (out-of-the-box commerce) module.

Modules have a ton of value because they provide a way to decouple a customer's implementation from the out-of-the-box ATG modules. This makes it much easier to upgrade the ATG platform later. Modules are also a very useful way to group functionality into a single package which can be leveraged across multiple ATG applications.

One very important thing to understand about modules - or, more accurately, about ATG as a whole - is that when you start ATG, you tell it which module(s) you want to start. One of the first things ATG does is look through all the modules you specified and, for each one, determine the list of modules that are also required to start (based on each module's dependencies). Once this final, ordered list is determined, ATG continues to boot. Because each module can contain its own classes and configuration, the ordered list of modules drives the unified classpath and configpath during boot. This is what determines which classes override others, and which configuration overrides other configuration. Think of it as a layered approach.

The structure of a module is well defined. It simply looks like a folder in a filesystem that contains certain other folders and files. Here is a list of items that can appear in a module (for example, MyModule):

- META-INF - required, along with a file called MANIFEST.MF which describes certain properties of the module. One important property is which other modules this module depends on.
- config - typically present in most modules. It defines a tree structure (folders containing properties files, XML, etc.) that maps to ATG components (described below).
- lib - contains the classes (typically in jarred format) for any code defined in this module.
- j2ee - this is where any web apps would be stored.
- src - in case you want to include the source code for this module, it's standard practice to put it here.
- sql - if your module requires any additions to the database schema, you should place that schema here.

Modules can also contain sub-modules. A dot notation is used when referring to these sub-modules (i.e. MyModule.Versioned, where Versioned is a sub-module of MyModule).
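To make that boot-time behaviour concrete, here is a small, illustrative Java sketch of the idea described above: expanding the modules you ask ATG to start into a full, dependency-ordered list. This is not ATG's actual implementation; the module names and dependency map in main are assumptions used only for the example (loosely modelled on the classic DAS/DPS/DCS layering that the article's DCS/CRS example hints at).

    import java.util.*;

    // Illustrative only: resolves a requested list of modules into a full,
    // dependency-ordered start list, as the article describes ATG doing at boot.
    public class ModuleResolver {

        private final Map<String, List<String>> dependencies;

        public ModuleResolver(Map<String, List<String>> dependencies) {
            this.dependencies = dependencies;
        }

        /** Returns modules ordered so each module appears after the modules it depends on. */
        public List<String> resolve(List<String> requested) {
            LinkedHashSet<String> ordered = new LinkedHashSet<>();
            for (String module : requested) {
                visit(module, ordered, new HashSet<>());
            }
            return new ArrayList<>(ordered);
        }

        private void visit(String module, LinkedHashSet<String> ordered, Set<String> inProgress) {
            if (ordered.contains(module)) {
                return; // already placed on the list
            }
            if (!inProgress.add(module)) {
                throw new IllegalStateException("Circular module dependency at " + module);
            }
            for (String dep : dependencies.getOrDefault(module, List.of())) {
                visit(dep, ordered, inProgress); // lower layers go first
            }
            ordered.add(module);
            inProgress.remove(module);
        }

        public static void main(String[] args) {
            Map<String, List<String>> deps = Map.of(
                    "MyModule", List.of("DCS"), // hypothetical custom module on top of commerce
                    "DCS", List.of("DPS"),
                    "DPS", List.of("DAS"),
                    "DAS", List.of());
            System.out.println(new ModuleResolver(deps).resolve(List.of("MyModule")));
            // -> [DAS, DPS, DCS, MyModule]: lower layers first, customisations layered on top
        }
    }

The ordering matters because, as described above, the modules later in the list (the higher layers) override the classes and configuration of the modules beneath them.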
Finally, it is important to completely understand how modules work if you are going to leverage them effectively. There are many different ways to design the modules you create; some approaches are better than others, especially if you plan to share functionality between multiple ATG applications.

Components

A component in ATG can be thought of as a single item that performs a certain set of related tasks. An example could be a ProductViews component, used to store information about which products the current customer has viewed. Components have properties (also called attributes). The ProductViews component could have properties like lastProductViewed (stores the ID of the last product viewed) or productViewList (stores the IDs of products viewed, in the order they were viewed). Such component properties would typically also offer get and set methods used to retrieve and store the property values. Components typically offer other useful methods aside from get and set. In the ProductViews component, we might want to offer a hasViewed method that tells you whether the customer has viewed a certain product.

Components are organized in a tree-like hierarchy called 'nucleus'. Nucleus is used to locate and instantiate ATG components, so when you create a new ATG component, it will be able to be found 'within' nucleus. Nucleus allows ATG components to reference one another - this is how components are strung together to perform meaningful work. It's also a mechanism to prevent redundant configuration - define it once and refer to it from everywhere.

Components can be extremely simple (i.e. a single property with a get method), or rather complex, offering many properties and methods. To be an ATG component, a few things are required:

- a class - you can reference an existing out-of-the-box class or write your own
- a properties file - this is used to define your component
- both items must be located 'within' nucleus, by placing them in the correct spot in your module's config folder

Within the properties file, you will need to point to the class you want to use: $class=com.mycompany.myclass. You may also want to define the scope of the component (request, session, or global): $scope=session.

In summary, ATG components live in nucleus, generally have links to other components, and perform some meaningful type of work. You can configure components as well as extend their functionality by writing code.

Repositories

The repository system (a.k.a. the Data Anywhere Architecture) is the mechanism that ATG uses to access data stored primarily in relational databases, but also in LDAP or other backend systems. ATG applications are required to be very high performance, and data access is critical: if not handled properly, it can create a bottleneck. ATG's repository functionality has been around for a long time and has proven to be extremely scalable. Developers new to ATG need to understand how repositories work, as this is a critical aspect of the ATG architecture. Repositories essentially map relational tables to objects in ATG, and also handle caching. ATG defines many repositories out of the box (i.e. user profile, catalog, orders, etc.); each is comprised of both the underlying database schema and the associated repository definition files (XML).
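Before moving on to the repository details, here is a minimal, illustrative sketch of the ProductViews component described in the Components section above. Only the names (ProductViews, lastProductViewed, productViewList, hasViewed) and the $class/$scope properties-file lines come from the article; everything else is an assumption for illustration. A real component would be registered in nucleus through a properties file in the module's config tree and could be a plain JavaBean like this one (or extend one of ATG's base classes).

    import java.util.ArrayList;
    import java.util.List;

    // Sketch of a ProductViews-style component as described above.
    // Assumed to be wired up via a properties file in the module's config tree, e.g.:
    //   $class=com.mycompany.ProductViews
    //   $scope=session
    public class ProductViews {

        private String lastProductViewed;                                // ID of the last product viewed
        private final List<String> productViewList = new ArrayList<>();  // IDs in viewing order

        public String getLastProductViewed() {
            return lastProductViewed;
        }

        public void setLastProductViewed(String productId) {
            this.lastProductViewed = productId;
        }

        public List<String> getProductViewList() {
            return productViewList;
        }

        /** Records a product view, keeping both properties in sync. */
        public void addProductView(String productId) {
            productViewList.add(productId);
            lastProductViewed = productId;
        }

        /** Extra behaviour beyond get/set: has the customer viewed this product? */
        public boolean hasViewed(String productId) {
            return productViewList.contains(productId);
        }
    }

With $scope=session in the properties file, each customer session would get its own instance of the component, which is what makes a per-customer "products viewed" list possible.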
It is fully expected that implementations will extend or change the out-of-the-box repository definitions, so there is a prescribed approach to doing this. The first thing to be sure of is to encapsulate your repository definition additions and changes within your own module (as described above). The other important best practice is to never modify the out-of-the-box schema - in other words, don't add columns to existing ATG tables; just create your own new tables. These practices will help ensure you can easily upgrade your application at a later date.

xml-combination

As mentioned earlier, when you start ATG, the order of the modules determines the final configpath. Files within this configpath are 'layered' such that modules on top can override the configuration of modules below them. The same concept applies to repository definition files. If you want to add a few properties to the out-of-the-box user profile, you simply create an XML file containing only your additions and place it in the correct location in your module. At boot time, your definition will be combined (hence the term xml-combination) with the lower, out-of-the-box modules, with the result being a user profile that contains everything (out of the box, plus your additions). Aside from adding properties, there are also ways to remove and change properties.

types of properties

Aside from the normal 'database-backed' properties, there are a few other interesting types:

- transient properties - properties that live in memory but are not backed by any database column. These are useful for temporary storage.
- java-backed properties - by nature these are transient, but in addition, when you access such a property (by calling the get method), instead of looking up a piece of data it performs some logic and returns the result. 'Age' is a good example: if you're storing a birth date on the profile, but your business rules are defined in terms of someone's age, you could create a simple java-backed property that looks at the birth date, compares it to the current date, and returns the person's age.
- derived properties - this is what allows for inheritance within the repository structure. You could define a property at the category level and have the product inherit its value as well as override it. This is useful for setting defaults, with the ability to override.

caching

There are a number of different caching modes which are useful at different times, depending on the nature of the data being cached. For example, simple cache mode is useful for things like user profiles, because a user profile will typically be used on only a single instance of ATG at one time. Simple cache mode is also useful for read-only types of data such as the product catalog. Locked cache mode is useful when you need to ensure that only one ATG instance writes to a particular item at a time - an example would be a customer's order. There are many options for configuring caching which are outside the scope of this article - please refer to the product manuals for more details.

Other important concepts - out of scope for this article

There are a whole host of concepts that are very important pieces of the ATG platform but are out of scope for this article. Here's a brief description of some of them:

- formhandlers - ATG components that handle form submissions by users.
- pipelines - configurable chains of logic that are used for things like handling a request (request pipeline) or checking out an order.
- special kinds of repositories (versioned, files, secure, ...) - there are a couple of different types of repositories that are used in various situations. See the manuals for more information.
- web development - JSP / DSP tag library - ATG provides a traditional approach to developing web applications by providing a tag library called the DSP library. This library is used throughout your JSP pages to interact with all the ATG components.
- messaging - a message sub-system used as another way for components to interact.
- personalization - the ability for business users to define a personalized user experience for customers. See the other blog posts related to personalization.
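Returning to the java-backed 'age' property mentioned under "types of properties" above, here is a plain-Java sketch of the idea only: the value is never stored in a column, it is computed from the stored birth date each time the get method is called. The real ATG mechanism is wired up through repository property descriptors (see the product manuals); the class and names below are assumptions for illustration.

    import java.time.LocalDate;
    import java.time.Period;

    // Illustration of a computed, java-backed 'age' property: derived from the
    // database-backed birth date rather than persisted itself.
    public class AgeProperty {

        private final LocalDate birthDate; // the stored, database-backed value on the profile

        public AgeProperty(LocalDate birthDate) {
            this.birthDate = birthDate;
        }

        /** Computed on every call: compares the birth date to the current date. */
        public int getAge() {
            return Period.between(birthDate, LocalDate.now()).getYears();
        }

        public static void main(String[] args) {
            AgeProperty profileProperty = new AgeProperty(LocalDate.of(1985, 6, 15));
            System.out.println("Age: " + profileProperty.getAge());
        }
    }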

    Read the article

  • Join Domain and Dos App

    - by Austin Lamb
OK, so first off: yes, I have read all the related topics, and those fixes are either out of date or don't work. I am running Ubuntu 12.04 and would like to add it to a Windows Server 2008 network. After that, I would like to mount the server's F:\ drive somewhere on my Linux machine where it can be identified as drive F:\ by Wine or DOSEMU. If I can achieve all of that, I need to find out how to run a 16-bit MS-DOS point-of-sale graphics program in Ubuntu, whether through Wine, DOSEMU, or DOSBox - it does not matter which, it just has to be able to read and write to the server's F: drive, run the DOS app, and support LPT1 (I think) for printing receipts and loading tickets. I am a decently knowledgeable Windows tech - at least that's what my job description says - but this is my first encounter with Linux in a work environment, and it could be a real eye-opener if I can just show this is a practical, reasonable solution and get it to work. The first step is to get the machine joined to the domain. I have likewise-open (CLI and GUI versions), Samba, and GADMIN-SAMBA installed in attempts to get any of them to work. Any help in any area is greatly appreciated, especially with the domain joining, since it is the first step and what I thought would be the easiest one.

    Read the article

  • Skanska Builds Global Workforce Insight with Cloud-Based HCM System

    - by HCM-Oracle
    By David Baum - Originally posted on Profit Peter Bjork grew up building things. He started his work life learning all sorts of trades at his father’s construction company in the northern part of Sweden. So in college, it was natural for him to pursue a bachelor’s degree in construction engineering—but he broke new ground when he added a master’s degree in finance to his curriculum vitae. Written on a traditional résumé, Bjork’s current title (vice president of information systems strategies) doesn’t reveal the diversity of his experience—that he’s adept with hammer and nails as well as rows and columns. But a big part of his current job is to work with his counterparts in human resources (HR) designing, building, and deploying the systems needed to get a complete view of the skills and potential of Skanska’s 22,000-strong white-collar workforce. And Bjork believes that complete view is essential to Skanska’s success. “Our business is really all about people,” says Bjork, who has worked with Skanska for 16 years. “You can have equipment and financial resources, but to truly succeed in a business like ours you need to have the right people in the right places. That’s what this system is helping us accomplish.” In a global HR environment that suffers from a paradox of high unemployment and a scarcity of skilled labor, managers need to have a complete understanding of workforce capabilities to develop management skills, recruit for open positions, ensure that staff is getting the training they need, and reduce attrition. Skanska’s human capital management (HCM) systems, based on Oracle Talent Management Cloud, play a critical role delivering that understanding. “Skanska’s philosophy of having great people, encouraging their development, and giving them the chance to move across business units has nurtured a culture of collaboration, but managing a diverse workforce spread across the globe is a monumental challenge,” says Annika Lindholm, global human resources system owner in the HR department at Skanska’s headquarters just outside of Stockholm, Sweden. “We depend heavily on Oracle’s cloud technology to support our HCM function.” Construction, Workers For Skanska’s more than 60,000 employees and contractors, managing huge construction projects is an everyday job. Beyond erecting signature buildings, management’s goal is to build a corporate culture where valuable talent can be sought out and developed, bringing in the right mix of people to support and grow the business. “Of all the companies in our space, Skanska is probably one of the strongest ones, with a laser focus on people and people development,” notes Tom Crane, chief HR and communications officer for Skanska in the United States. “Our business looks like equipment and material, but all we really have at the end of the day are people and their intellectual capital. Without them, second only to clients, of course, you really can’t achieve great things in the high-profile environment in which we work.” During the 1990s, Skanska entered an expansive growth phase. A string of successful acquisitions paved the way for the company’s transformation into a global enterprise. “Today the company’s focus is on profitable growth,” continues Crane. 
“But you can’t really achieve growth unless you are doing a very good job of developing your people and having the right people in the right places and driving a culture of growth.” In the United States alone, Skanska has more than 8,000 employees in four distinct business units: Skanska USA Building, also known as the Construction Manager, builds everything at ground level and above—hospitals, educational facilities, stadiums, airport terminals, and other massive projects. Skanska USA Civil does everything at ground level and below, such as light rail, water treatment facilities, power plants or power industry facilities, highways, and bridges. Skanska Infrastructure Development develops public-private partnerships—projects in which Skanska adds equity and also arranges for outside financing. Skanska Commercial Development acts like a commercial real estate developer, acquiring land and building offices on spec or build-to-suit for its clients. Skanska's international portfolio includes construction of the new Meadowlands Stadium. Getting the various units to operate collaboratively helps Skanska deliver high value to clients and shareholders. “When we have this collaboration among units, it allows us to enrich each of the business units and, at the same time, develop our future leaders to be more facile in operating across business units—more accepting of a ‘one Skanska’ approach,” explains Crane. Workforce Worldwide But HR needs processes and tools to support managers who face such business dynamics. Oracle Talent Management Cloud is helping Skanska implement world-class recruiting strategies and generate the insights needed to drive quality hiring practices, internal mobility, and a proactive approach to building talent pipelines. With their new cloud system in place, Skanska HR leaders can manage everything from recruiting, compensation, and goal and performance management to employee learning and talent review—all as part of a single, cohesive software-as-a-service (SaaS) environment. Skanska has successfully implemented two modules from Oracle Talent Management Cloud—the recruiting and performance management modules—and is in the process of implementing the learn module. Internally, they call the systems Skanska Recruit, Skanska Talent, and Skanska Learn. The timing is apropos. With high rates of unemployment in recent years, there have been many job candidates on the market. However, talent scarcity continues to frustrate recruiters. Oracle Taleo Recruiting Cloud Service, one of the applications in the Oracle Talent Management cloud portfolio, enables Skanska managers to create more-intelligent recruiting strategies, pulling high-performer profile statistics to create new candidate profiles and using multitiered screening and assessments to ensure that only the best-suited candidate applications make it to the recruiter’s desk. Tools such as applicant tracking, interview management, and requisition management help recruiters and hiring managers streamline the hiring process. Oracle’s cloud-based software system automates and streamlines many other HR processes for Skanska’s multinational organization and delivers insight into the success of recruiting and talent-management efforts. “The Oracle system is definitely helping us to construct global HR processes,” adds Bjork. “It is really important that we have a business model that is decentralized, so we can effectively serve our local markets, and interact with our global ERP [enterprise resource planning] systems as well. 
We would not be able to do this without a really good, well-integrated HCM system that could support these efforts.” A key piece of this effort is something Skanska has developed internally called the Skanska Leadership Profile. Core competencies, on which all employees are measured, are used in performance reviews to determine weak areas but also to discover talent, such as those who will be promoted or need succession plans. This global profiling system brings consistency to the way HR professionals evaluate and review talent across the company, with a consistent set of ratings and a consistent definition of competencies. All salaried employees in Skanska are tied to a talent management process that gives opportunity for midyear and year-end reviews. Using the performance management module, managers can align individual goals with corporate goals; provide clear visibility into how each employee contributes to the success of the organization; and drive a strategic, end-to-end talent management strategy with a single, integrated system for all talent-related activities. This is critical to a company that is highly focused on ensuring that every employee has a development plan linked to his or her succession potential. “Our approach all along has been to deploy software applications that are seamless to end users,” says Crane. “The beauty of a cloud-based system is that much of the functionality takes place behind the scenes so we can focus on making sure users can access the data when they need it. This model greatly improves their efficiency.” The employee profile not only sets a competency baseline for new employees but is also integrated with Skanska’s other back-office Oracle systems to ensure consistency in the way information is used to support other business functions. “Since we have about a dozen different HR systems that are providing us with information, we built a master database that collects all the information,” explains Lindholm. “That data is sent not only to Oracle Talent Management Cloud, but also to other systems that are dependent on this information.” Collaboration to Scale Skanska is poised to launch a new Oracle module to link employee learning plans to the review process and recruitment assessments. According to Crane, connecting these processes allows Skanska managers to see employees’ progress and produce an updated learning program. For example, as employees take classes, supervisors can consult the Oracle Talent Management Cloud portal to monitor progress and align it to each individual’s training and development plan. “That’s a pretty compelling solution for an organization that wants to manage its talent on a real-time basis and see how the training is working,” Crane says. Rolling out Oracle Talent Management Cloud was a joint effort among HR, IT, and a global group that oversaw the worldwide implementation. Skanska deployed the solution quickly across all markets at once. In the United States, for example, more than 35 offices quickly got up to speed on the new system via webinars for employees and face-to-face training for the HR group. “With any migration, there are moments when you hold your breath, but in this case, we had very few problems getting the system up and running,” says Crane. Lindholm adds, “There has been very little resistance to the system as users recognize its potential. Customizations are easy, and a lasting partnership has developed between Skanska and Oracle when help is needed. 
They listen to us.” Bjork elaborates on the implementation process from an IT perspective. “Deploying a SaaS system removes a lot of the complexity,” he says. “You can downsize the IT part and focus on the business part, which increases the probability of a successful implementation. If you want to scale the system, you make a quick phone call. That’s all it took recently when we added 4,000 users. We didn’t have to think about resizing the servers or hiring more IT people. Oracle does that for us, and they have provided very good support.” As a result, Skanska has been able to implement a single, cost-effective talent management solution across the organization to support its strategy to recruit and develop a world-class staff. Stakeholders are confident that they are providing the most efficient recruitment system possible for competent personnel at all levels within the company—from skilled workers at construction sites to top management at headquarters. And Skanska can retain skilled employees and ensure that they receive the development opportunities they need to grow and advance.

    Read the article

  • There's A Virtual Developer Day in Your Future

    - by OTN ArchBeat
What are Virtual Developer Days? You really should know this by now. OTN Virtual Developer Days are online events created specifically for developers and architects, with a focus on no-fluff technical presentations, hands-on labs, and expert Q&A to sharpen your technical skills and bring you up to speed on the latest information on Oracle products and practical best practices for their use. The best part about OTN Virtual Developer Days is that you don't have to pack a suitcase or stand in line at an airport waiting for someone to pat you down. Instead, you stay where you are, flip open your laptop, and prepare your brain for a massive skills injection. In the next few weeks you'll have two such chances to ramp up your skills. On Tuesday November 5, 2013, Harnessing the Power of Oracle WebLogic and Oracle Coherence will guide you through tooling updates and best practices for developing applications with WebLogic and Coherence as target platforms. This two-track event covers app design and development (Track 1) and building, deploying, and managing applications (Track 2). Each track includes three presentations plus a hands-on lab. [9am-1pm PT / 12pm-4pm ET / 1pm-5pm BRT] Register now. This event will also be available in EMEA on December 3, 2013 [9am-1pm GMT / 1pm-5pm GST / 2:30pm-6:30pm IST]. On Tuesday November 19, 2013, Oracle ADF Development: Web, Mobile, and Beyond offers four tracks covering everything from the basics to advanced skills for application development using Oracle ADF and Oracle ADF Mobile. There are three sessions in each track, followed by hands-on labs in which you try out what you've learned. [9am-1pm PT / 12pm-4pm ET / 1pm-5pm BRT] Register now. This event will also be available in APAC on Thursday November 21, 2013 [10am-1:30pm IST (India) / 12:30pm-4pm SGT (Singapore) / 3:30pm-7pm AESDT] and in EMEA on Tuesday November 26, 2013 [9am-1pm GMT / 1pm-5pm GST / 2:30pm-6:30pm IST]. Registration for both events is absolutely free. So what are you waiting for?

    Read the article

< Previous Page | 98 99 100 101 102 103 104 105 106 107 108 109  | Next Page >