Search Results

Search found 14910 results on 597 pages for 'programs and features'.


  • Sync Google Calendar with SharePoint Calendar

    - by dataintegration
    The ADO.NET Providers for Google and SharePoint make it easy to retrieve and update data in both Google's web services and SharePoint. This article shows how the SQL interface to data makes it easy to build applications that need to move data from one source to another. The application described here is a demo Windows application that synchronizes calendar events between Google and SharePoint, but the RSSBus Providers can be used to achieve integrations on both the .NET and the Java platforms, including more sophisticated features like full automation. Getting the Events Step 1: Google accounts can have several calendars. Obtain a list of a user's Google Calendars by issuing a query to the Calendars table. For example: SELECT * FROM Calendars. Step 2: To get a list of the events from a given Google Calendar, issue a query to the CalendarEvents table while specifying the CalendarId from the Calendars table. The resulting events can be further filtered by using the StartDateTime or EndDateTime columns. For example: SELECT * FROM CalendarEvents WHERE (CalendarId = '[email protected]') AND (StartDateTime >= '1/1/2012') AND (StartDateTime <= '2/1/2012') Step 3: SharePoint stores data in Lists. There are various types of lists, e.g., document lists and calendar lists. A SharePoint account can have several lists of the same type. To find all the calendar lists in SharePoint, use the ListLists stored procedure and inspect the BaseTemplate column. Step 4: The SharePoint data provider models each SharePoint list as a table. Get the events in a particular calendar by querying the table with the same name as the list. The events may be filtered further by specifying the EventDate or EndDate columns. For example: SELECT * FROM Calendar WHERE (EventDate >= '1/1/2012') AND (EventDate <= '2/1/2012') Synchronizing the Events Synchronizing the events is a simple process. Once the events from Google and SharePoint are available, they can be compared and synchronized based on user preference. The sample application does this based on user input, but it is easy to create one that does the synchronization automatically. The INSERT, UPDATE, and DELETE statements available in both data providers make it easy to create, update, or delete events as needed. Pre-Built Demo Application The executable for the demo application can be downloaded here. Note that this demo is built using BETA builds of the ADO.NET Provider for Google V2 and ADO.NET Provider for SharePoint V2, and will expire in 2013. Source Code You can download the full source of the demo application here. You will need the Google ADO.NET Data Provider V2 and the SharePoint ADO.NET Data Provider V2, which can be obtained here.
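
    The snippet below is a minimal sketch of that query-and-compare loop using the generic ADO.NET classes. It is not taken from the demo application: the provider invariant name ("RSSBus.Google"), the connection string, and the placeholder account are assumptions, and the actual provider registration may differ; only the SQL mirrors the queries shown above.

        using System;
        using System.Data.Common;

        class CalendarSyncSketch
        {
            static void Main()
            {
                // Assumed provider invariant name; substitute whatever name the
                // installed ADO.NET provider registers in machine.config.
                DbProviderFactory googleFactory = DbProviderFactories.GetFactory("RSSBus.Google");
                using (DbConnection google = googleFactory.CreateConnection())
                {
                    // Placeholder connection string and account.
                    google.ConnectionString = "User=user@example.com;Password=...;";
                    google.Open();

                    DbCommand events = google.CreateCommand();
                    events.CommandText =
                        "SELECT Summary, StartDateTime, EndDateTime FROM CalendarEvents " +
                        "WHERE (StartDateTime >= '1/1/2012') AND (StartDateTime <= '2/1/2012')";

                    using (DbDataReader reader = events.ExecuteReader())
                    {
                        while (reader.Read())
                        {
                            // For each Google event, the sync step would compare against the
                            // SharePoint 'Calendar' table and INSERT/UPDATE/DELETE as needed.
                            Console.WriteLine("Candidate for sync: {0}", reader["Summary"]);
                        }
                    }
                }
            }
        }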

    Read the article

  • Survey: Your Plans for Adopting New Firefox Releases?

    - by Steven Chan (Oracle Development)
    Mozilla is committing to releasing new Firefox versions every six weeks.  Mozilla released Firefox 5 this week.  With this release, Mozilla states that Firefox 4 is End-of-Life and will not receive any additional security updates.  In a comment thread posted to Mike Kaply's blog article discussing these new Firefox policies, Asa Dotzler from Mozilla stated: ... Enterprise has never been (and I’ll argue, shouldn’t be) a focus of ours. Until we run out of people who don’t have sysadmins and enterprise deployment teams looking out for them, I can’t imagine why we’d focus at all on the kinds of environments you care so much about.  In a later comment, he added: ... A minute spent making a corporate user happy can better be spent making many regular users happy. I’d much rather Mozilla spending its limited resources looking out for the billions of users that don’t have enterprise support systems already taking care of them. Asa then confirmed that every new Firefox release will put the previous one into End-of-Life: As for John’s concern, “By the time I validate Firefox 5, what guarantee would I have that Firefox 5 won’t go EOL when Firefox 6 is released?” He has the opposite of guarantees that won’t happen. He has my promise that it will happen. Firefox 6 will be the EOL of Firefox 5. And Firefox 7 will be the EOL for Firefox 6.  He added: “You’re basically saying you don’t care about corporations.” Yes, I’m basically saying that I don’t care about making Firefox enterprise friendly. Kev Needham, Channel Manager at Mozilla, later stated to PC Mag: The Web and Web browsers continue to evolve rapidly. Mozilla's focus is on providing users with the best Web experience possible, and Firefox needs to evolve at the pace the Web's users and developers expect. By releasing small, focused updates more often, we are able to deliver improved security and stability even as we introduce new features, which is better for our users, and for the Web. We recognize that this shift may not be compatible with a large organization's IT Policy and understand that it is challenging to organizations that have effort-intensive certification policies. However, our development process is geared toward delivering products that support the Web as it is today, while innovating and building future Web capabilities. Tying Firefox product development to an organizational process we do not control would make it difficult for us to continue to innovate for our users and the betterment of the Web.  Your feedback needed for E-Business Suite certifications  Mozilla's new support policy has significant implications for enterprise users of Firefox with Oracle E-Business Suite.  We are reviewing the implications for our certification and support policies for Firefox now.  It would be very helpful if you could let me know about your organisation's plans for Firefox in light of this new information.  Please feel free to drop me a private email, or post a comment here if that's appropriate.

    Read the article

  • Data Virtualization: Federated and Hybrid

    - by Krishnamoorthy
    Data becomes useful when it can be leveraged at the right time. It is not only enterprise application stores that operate on a large volume, velocity, and variety of data; mobile and social computing also need to operate on that data. Replicating and transferring large swaths of data is one challenge faced in the field of data integration. However, smaller chunks of data aggregated from a variety of sources present an even more interesting challenge in the industry. Over the past few decades, technology trends focused on best user experience, operating systems, high performance computing, high performance web sites, analysis of warehouse data, service oriented architecture, social computing, cloud computing, and big data. Operating on ‘dark data’ becomes mandatory in future technology trends, although no solution can make dark data useful in a single day. Useful data can be characterized by contextual, personalized, and on-time delivery. In most cases, data from a single source does not complete the picture. Data has to be combined and computed from various sources, where data may be captured as hybrid data, meaning a combination of structured and unstructured data. Since related data is often found across disparate sources, effectively integrating these sources determines how useful this data ultimately becomes. Technology trends in 2013 are expected to focus on big data and private cloud. Consumers are not merely interested in where data is located or how data is retrieved and computed. Consumers are interested in how quickly the data can be leveraged. In many cases, data virtualization is the right solution, and is expected to play a foundational role for SOA, cloud integration, and big data. The Oracle Data Integration portfolio includes a data virtualization product called ODSI (Oracle Data Service Integrator). Unlike other data virtualization solutions, ODSI can perform both read and write operations on federated/hybrid data (RDBMS, web services, delimited files, and XML). The ODSI engine is built on XQuery, hence ODSI users can perform computations on data using either XQuery or SQL. Built-in data and query caching features reduce latency for repetitive calls. Rightly positioned, ODSI can result in a highly scalable model, reducing spend on additional hardware infrastructure.

    Read the article

  • Enterprise Manager 12c: New DSS Demos Available

    - by Javier Puerta
    Enterprise Manager Cloud Control 12c Application Replay Demo Now Available! User Experience Monitoring with Enterprise Manager Cloud Control 12c and Real User Experience Insight 12R1 Now Available! Oracle Enterprise Manager Cloud Control 12c: Database Management Packs demo upgrade Enterprise Manager Cloud Control 12c Application Replay Demo Now Available! We are pleased to announce the availability of the Oracle Application Replay demo, which showcases some of the key capabilities of performing realistic, production-scale testing of your web and packaged Oracle applications. This demo specifically focuses on capturing production web traffic from an E-Business Suite application and replaying the captured workload on a test E-Business Suite application to assess the impact of an application infrastructure change on the workload. The target audiences are application developers, quality assurance teams, IT managers and production control staff who deal with day-to-day change management activities and troubleshooting of production environments. Demo Highlights: Enterprise Manager 12c workflows for capturing application workload; seamless integration of Application Replay with Real User Experience Insight for application workload capture; Enterprise Manager 12c centralized workflows for replaying captured application workloads in a test environment; how to minimize risk when deploying a complex E-Business Suite application infrastructure change; and rich reporting capability for performance analysis and problem detection. User Experience Monitoring with Enterprise Manager Cloud Control 12c and Real User Experience Insight 12R1 Now Available! We are pleased to announce the availability of the Oracle Real User Experience Insight demo, which showcases some of the key capabilities of user experience monitoring. This demo specifically focuses on business reporting, integrated performance diagnostics, tracking of customer journeys through RUEI’s user-flow tracking capabilities, and its Key Performance Indicator tracking and configuration. Demo Highlights: application-centric dashboard; integration with Oracle Enterprise Manager 12c (JVMD, ADP and BTM); session diagnostics and user session replay; monitoring through “Key Performance Indicators” (KPIs) to create alerts/incidents; and Fusion Application-centric dashboards with integrated BI. Oracle Enterprise Manager Cloud Control 12c: Database Management Packs demo upgrade DSS is pleased to announce an upgrade to the Oracle Enterprise Manager Cloud Control 12c: Database Management Packs demo. While retaining the content from the initial release of the demo—Diagnostic and Tuning Packs, Test Data Management and Data Masking, and Real Application Testing—the demo now includes a new Data Masking for Real Application Testing scenario. Demo Features: Diagnostic and Tuning Packs; SQL Performance Analyzer; Database Replay; Data Masking; masking Real Application Testing workloads; testing pending Optimizer statistics; and Test Data Management.

    Read the article

  • What to do when you inherit an unmaintainable codebase?

    - by GordonM
    I'm currently working at a company with 2 other PHP developers aside from me, and 1 junior developer. The senior developer who originally built the system we're all working on has resigned and will only be here for a matter of weeks. The other developer, who is the only other guy who knows anything about the system, is unhappy here and is looking for a new job. I'm in very real danger of being left behind as the only experienced developer on this codebase. Since I've joined this company I've tried to push for better coding standards, project documentation, etc., and I do think I've made some headway, but the vast majority of the code is simply unmaintainable and uncommented. A lot of this has to do with the need to get things done fast at points in the project before I joined, but now the technical debt is enormous, even with the two developers who do understand the system on board. Without them, it will simply be impossible to do anything with it. The senior developer is working on trying to at least comment all his code before he leaves, but I think the codebase is simply too vast to properly document in the remaining time. Besides, when he does comment it still doesn't make things as clear as it could. If the system was better organized and documented I could probably start refactoring it incrementally, but the whole thing is so tightly coupled that it's very difficult to make any changes in one module without having unintended knock-on effects in other modules. Naturally, there's no unit tests either, and I honestly don't think this codebase could possibly be unit tested anyway given how it's implemented. There also never seems to be enough time to get things done even with 3 developers and 1 junior developer. With one developer and one junior, neither of which had significant input into the early design of the system, I don't see how we could possibly get anything done while keeping the current system working, implementing new features as needed, and developing a replacement for the current codebase that is better organized. Is there an approach I can take to cope with this situation, or should I be getting my own CV in order as well at this point? If it was just me and the junior developer who would be left, I'd go for the latter option almost without question. However, there's a team of front-end developers and content managers as well, and I'm worried what would become of them if I left and put them in a position where there would be no developers at all. The department might just be closed down altogether under such circumstances, and then I'd have their unemployment on my conscience as well!

    Read the article

  • Investigating Strategies For Functional Decomposition

    - by Liam McLennan
    Introducing Functional Decomposition Before I begin I must apologise. I think I am using the term ‘functional decomposition’ loosely, and probably incorrectly. For the purpose of this article I use functional decomposition to mean the recursive splitting of a large problem into increasingly smaller ones, so that the one large problem may be solved by solving a set of smaller problems. The justification for functional decomposition is that the decomposed problem is more easily solved. As software developers we recognise that the smaller pieces are more easily tested, since they do less and are more cohesive. Functional decomposition is important to all scientific pursuits. Once we understand natural selection we can start to look for humanity's ancestral species; once we understand the big bang we can trace our expanding universe back to its origin. Isaac Newton acknowledged the compositional nature of his scientific achievements: If I have seen further than others, it is by standing upon the shoulders of giants   The Two Strategies For Functional Decomposition of Computer Programs Private Methods When I was working on my undergraduate degree I was taught to functionally decompose problems by using private methods. Consider the problem of painting a house. The obvious solution is to solve the problem as a single unit: public void PaintAHouse() { // all the things required to paint a house ... } We decompose the problem by breaking it into parts: public void PaintAHouse() { PaintUndercoat(); PaintTopcoat(); } private void PaintUndercoat() { // everything required to paint the undercoat } private void PaintTopcoat() { // everything required to paint the topcoat } The problem can be recursively decomposed until a sufficiently granular level of detail is reached: public void PaintAHouse() { PaintUndercoat(); PaintTopcoat(); } private void PaintUndercoat() { prepareSurface(); fetchUndercoat(); paintUndercoat(); } private void PaintTopcoat() { fetchPaint(); paintTopcoat(); } According to Wikipedia, at least one computer programmer has referred to this process as “the art of subroutining”. The practical issues that I have encountered when using private methods for decomposition are: To preserve the top-level API all of the steps must be private. This means that they can’t easily be tested. The private methods often have little cohesion except that they form part of the same solution. Decomposing to Classes The alternative is to decompose large problems into multiple classes, effectively using a class instead of each private method. The API delegates to related classes, so the API is not polluted by the sub-steps of the problem, and the steps can be easily tested because they are each in their own highly cohesive class. Additionally, I think that this technique facilitates better adherence to the Single Responsibility Principle, since each class can be decomposed until it has precisely one responsibility. Revisiting my previous example using class composition: public class HousePainter { private UndercoatPainter undercoatPainter = new UndercoatPainter(); private TopcoatPainter topcoatPainter = new TopcoatPainter(); public void PaintAHouse() { undercoatPainter.Paint(); topcoatPainter.Paint(); } } Summary When decomposing a problem there is more than one way to represent the sub-problems. 
Using private methods keeps the logic in one place and prevents a proliferation of classes (thereby following the four rules of simple design), but class decomposition is more easily testable and more compatible with the Single Responsibility Principle.
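
    To make the testability claim concrete, here is a small MSTest-style sketch (not from the original article) showing how the class-decomposed HousePainter can be verified in isolation. It assumes the painters are pulled behind an interface and injected, which is a slight extension of the article's example; the type names are otherwise illustrative.

        using Microsoft.VisualStudio.TestTools.UnitTesting;

        public interface IPainter { void Paint(); }

        // Test double that records whether Paint() was invoked.
        public class RecordingPainter : IPainter
        {
            public bool WasCalled { get; private set; }
            public void Paint() { WasCalled = true; }
        }

        public class HousePainter
        {
            private readonly IPainter undercoatPainter;
            private readonly IPainter topcoatPainter;

            public HousePainter(IPainter undercoat, IPainter topcoat)
            {
                undercoatPainter = undercoat;
                topcoatPainter = topcoat;
            }

            public void PaintAHouse()
            {
                undercoatPainter.Paint();
                topcoatPainter.Paint();
            }
        }

        [TestClass]
        public class HousePainterTests
        {
            [TestMethod]
            public void PaintAHouse_PaintsUndercoatAndTopcoat()
            {
                var undercoat = new RecordingPainter();
                var topcoat = new RecordingPainter();

                new HousePainter(undercoat, topcoat).PaintAHouse();

                Assert.IsTrue(undercoat.WasCalled);
                Assert.IsTrue(topcoat.WasCalled);
            }
        }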

    Read the article

  • Creating a shared library that might be used with desktop applications and web projects

    - by dreza
    I have been involved in a number of MVC.NET and C# desktop projects in our company over the last year or so, while also managing to keep my nose poked into other projects (in a read-only learning capacity of course). From this I've noticed that across the various projects and teams there is a lot of functionality that has been well designed against good interfaces and abstractions. Because we tend to like our own work at times, I noticed a couple of projects had the exact same class and method copied into them, as they had obviously worked on one project and so were easily moved to a new one (probably by the same developer who originally wrote them). I mentioned this fact in one of our occasional programmer meetings and suggested we pull some of this functionality into a core company library that we can build up over time and use across multiple projects. Everyone agreed and I started looking into this possibility. However, I've come across a stumbling block pretty early on. Our team primarily focuses on MVC at the moment and we have projects mainly in 2.0 but are starting to branch to 3.0. We also have a number of desktop applications that might benefit from some shared classes and basic helper methods. Initially when creating this DLL I included some shared classes that could be used across any project type (web, client etc.), but then I started looking at adding some shared modules that would be useful in our MVC applications only. However, this meant I had to include a reference to some Microsoft web DLLs in order to leverage some of the classes I was creating (at this stage MVC 2.0). Now my issue is that we have a shared DLL with references to web-specific libraries that could also possibly be used in a client application. Not only that, our DLL initially referenced MVC 2.0, and we will eventually move on to MVC 3.0 for all projects. But a lot of the classes in this library I expect to still be relevant to MVC 3, etc. Our code within this DLL is separated into its own namespaces such as: CompanyDLL.Primitives CompanyDLL.Web.Mvc CompanyDLL.Helpers etc. So, my questions are: Is it OK to do a shared library like this, or if we have web-specific features in it should we create a separate web DLL only targeted at a specific framework or MVC version? If it's OK, what kind of issues might we face when using the library that references MVC 2 in an MVC 3 project, for example? I would be thinking that we might run into some sort of compatibility issue, or even issues where the developers using the library don't realize they need MVC 2.0 libraries. They might only want to use some of the generic classes, etc. The concept seemed like a good idea at the time, but I'm starting to think maybe it's not really a practical solution. But the number of times I've seen copied classes and methods across projects because they are proven tested code is a bit unnerving to be perfectly honest!
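
    For what it's worth, one way to avoid the web-reference problem described above is to keep only abstractions in the core assembly and move anything that touches web DLLs into the CompanyDLL.Web.Mvc assembly. The sketch below is hypothetical (the interface and class names are made up), but it shows the split: desktop projects reference only the core assembly, and an MVC 2 to MVC 3 upgrade stays confined to the web assembly.

        // Core assembly (CompanyDLL) -- no reference to any web or MVC DLLs.
        namespace CompanyDLL.Primitives
        {
            // Abstraction that core helpers and desktop apps code against.
            public interface ICurrentUserProvider
            {
                string GetCurrentUserName();
            }
        }

        // Web assembly (CompanyDLL.Web.Mvc) -- the only project that references
        // System.Web / MVC, so framework version changes are isolated here.
        namespace CompanyDLL.Web.Mvc
        {
            using System.Web;
            using CompanyDLL.Primitives;

            public class HttpContextUserProvider : ICurrentUserProvider
            {
                public string GetCurrentUserName()
                {
                    var context = HttpContext.Current;
                    return (context != null && context.User != null)
                        ? context.User.Identity.Name
                        : string.Empty;
                }
            }
        }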

    Read the article

  • Partner BI Applications 4-Day Hands-on Training Workshop

    - by Mike.Hallett(at)Oracle-BI&EPM
    12th - 15th February 2012, Oracle Reading (UK) - REGISTER NOW This training will provide attendees with an in-depth working understanding of the architecture and the technical and functional content of the Oracle Business Intelligence Applications, whilst also providing an understanding of their installation, configuration and extension. The course will cover the following topics: Overview of Oracle Business Intelligence Applications; Oracle BI Applications Fundamentals and Features; Configuring BI Applications for Oracle E-Business Suite; Understanding BI Applications Architecture; Fundamentals of BI Applications Security. Prerequisites - This training is only for OPN member Partners. Good understanding of basic data warehousing concepts; hands-on experience in Oracle Business Intelligence Enterprise Edition; hands-on experience in Informatica; good understanding of any of the following Oracle EBS modules: General Ledger, Accounts Receivables, Accounts Payables; some understanding of Oracle BI Applications is required (see Sales & Technical Tutorials for OBI, BI-Apps and Hyperion EPM). Please note that attendees are required to bring a laptop meeting these requirements: 4 GB RAM (recognized by 64-bit Windows), 80 GB free space on the hard drive or an external device, a Core 2 Duo CPU or higher, and Windows 7, Windows XP, or Windows 2003 (Windows Vista is not allowed), with an administrator user account.

    Read the article

  • Keep Learning After Your Oracle Training Class is Over - Save 50%!

    - by KJones
    Written by Amit Kumar, Senior Director, Oracle University Digital Training. Every training class you take about the latest Oracle application or technology moves you closer to developing the skills you need to succeed. But after class is over, how do you keep up with today’s accelerating pace of innovation? To keep up with the very latest technological advances, you need an ongoing and flexible training solution: one that lets you learn during your own downtime; knowledge that’s easy to access; interactive lessons where you connect with experts; a simple way to increase your knowledge, on your own time and at your own pace. The new Oracle Learning Streams is the flexible training solution you're looking for. Continuously Learn with Oracle Learning Streams Over time, Oracle Learning Streams help you develop the depth and breadth of knowledge that will give you the tools to become an expert in your field. By taking advantage of comprehensive and frequently updated information, you can keep learning continuously, at your own pace, when it's convenient for you. Sign up today and get 12 months of unlimited access to: hundreds of videos delivered by Oracle experts for fresh and continuous product learning; live connections with Oracle's top instructors; robust video search capability to find exactly what you’re looking for; and features that allow you to build your own custom learning queue and request new content. Oracle Learning Streams are now available for Oracle Database and Oracle Middleware. Take a moment to preview the content now.  For a Limited Time - Save 50% For a limited time, save 50% when you order Oracle Learning Streams with any other Oracle Classroom, Live Virtual Class or Training On Demand course. Now there is no reason for learning to stop when class is over!

    Read the article

  • UK Oracle User Group Event: Trends in Identity Management

    - by B Shashikumar
    As threat levels rise and new technologies such as cloud and mobile computing gain widespread acceptance, security is occupying more and more mindshare among IT executives. To help prepare for the rapidly changing security landscape, the Oracle UK User Group community and our partners at Enline/SENA have put together a User Group event in London on Apr 19 where you can learn more from your industry peers about upcoming trends in identity management. Here are some of the key trends in identity management and security that we predicted at the beginning of last year, and a look at how they have turned out so far. You have to admit that we have a pretty good track record when it comes to forecasting trends in identity management and security. Threat levels will grow—and there will be more serious breaches: We have since witnessed breaches of high value targets like RSA and Epsilon. Most organizations have not done enough to protect against insider threats. Organizations need to look for security solutions to stop user access to applications based on real-time patterns of fraud and for situations in which employees change roles or employment status within a company. Cloud computing will continue to grow—and require new security solutions: Cloud computing has since exploded into a dominant secular trend in the industry. Cloud computing continues to present many opportunities like low upfront costs, rapid deployment, etc. But cloud computing also increases policy fragmentation and reduces visibility and control. So organizations require solutions that bridge the security gap between the enterprise and cloud applications to reduce fragmentation and increase control. Mobile devices will challenge traditional security solutions: Since that time, we have witnessed the proliferation of mobile devices—combined with increasing numbers of employees bringing their own devices to work (BYOD) — these trends continue to dissolve the traditional boundaries of the enterprise. This, in turn, requires a holistic approach within an organization that combines strong authentication and fraud protection, externalization of entitlements, and centralized management across multiple applications—and open standards to make all that possible. Security platforms will continue to converge: As organizations move increasingly toward vendor consolidation, security solutions are also evolving. Next-generation identity management platforms have best-of-breed features, and must also remain open and flexible to remain viable. As a result, developers need products such as the Oracle Access Management Suite in order to efficiently and reliably build identity and access management into applications—without requiring security experts. Organizations will increasingly pursue "business-centric compliance": Privacy and security regulations have continued to increase. So businesses increasingly look for solutions that combine strong security and compliance management tools with a business-ready experience for faster, lower-cost implementations. If you'd like to hear more about the top trends in identity management and learn how to empower yourself, then join us for the Oracle UK User Group on Thu Apr 19 in London where Oracle and Enline/SENA product experts will come together to share security trends, best practices, and solutions for your business. Register Here.

    Read the article

  • Arçelik A.S. Uses Advanced Analytics to Improve Product Development

    - by Sylvie MacKenzie, PMP
    "Oracle’s Primavera P6 Enterprise Project Portfolio Management’s advanced analytics gives us better insight into the product development process by helping us to identify potential roadblocks.” – Iffet Iyigun Meydanli, Innovation and System Development Manager, R&D Center, Arçelik A.S. Founded in 1955, Arçelik A.S. is now the leading household appliance manufacturer in Turkey, and the third-largest household appliance company in Europe. It operates 14 production facilities in five countries (Turkey, Romania, Russia, China, and South Africa), with international sales and marketing offices in 20 countries. Additionally, the company manages 10 brands (Arçelik, Beko, Grundig, Blomberg, Elektrabregenz, Arctic, Leisure, Flavel, Defy, and Altus). The company has a household presence in more than 100 countries, including China and the United States. Arçelik’s Beko brand is among the top-10 household appliance brands in world, as a market leader for refrigerators, freezers, and washing machines in the United Kingdom. Arçelik implemented Oracle’s Primavera P6 Enterprise Project Portfolio Management for improved management of its design and manufacturing projects. With the solution, Arelik has improved its research and development (R&D) with the ability to evaluate technology risks when planning its projects. Also, it is now more easy to make plans for several locations, monitor all resources, and plan for future projects.  Challenges Improve monitoring of R&D resources?including human resources and critical laboratory equipment?to optimize management of the company’s R&D project portfolio Establish a transparent project platform to enable better product and process planning, gain insight into product performance, and facilitate advanced analytics that support R&D and overall business decisions Identify potential roadblocks for better risk management Solutions Worked with Oracle Partner PRM to implement Oracle’s Primavera P6 Enterprise Project Portfolio Management to manage the entire household-appliance, R&D project portfolio lifecycle, enabling managers and project leaders to better track and monitor resources and deliverables in real time Improved risk analysis and evaluation abilities for R&D projects Supported long-term planning needs Used advanced reporting features to capture data needed for budgeting and other project details, including employee performance evaluations Improved monitoring abilities and insight into the overall performance of products postproduction Enabled flexible, fast, and customized reporting with the P6 dashboard on a centralized platform to meet custom reporting needs for project leaders and support on-time and on-budget deliverables Integrated with other corporate departments, such as accounts payable, to upload project invoice data into the Primavera solution and the company’s e-mail system, so that project leaders will be alerted about milestones and other project related information Partner“Oracle Partner PRM provided us with a quick, reliable, and solution-focused approach to its support,” said Iffet Iyigun Meydanli, innovation and system development manager, R&D Center, Arçelik A.S. “The company’s service covered the entire spectrum of our needs, including implementation, training, configuration, problem solving, and integration.”

    Read the article

  • Software Tuned to Humanity

    - by Phil Factor
    I learned a great deal from a cynical old programmer who once told me that the ideal length of time for a compiler to do its work was the same time it took to roll a cigarette. For development work, this is oh so true. After intently looking at the editing window for an hour or so, it was a relief to look up, stretch, focus the eyes on something else, and roll the possibly-metaphorical cigarette. This was software tuned to humanity. Likewise, a user’s perception of the “ideal” time that an application will take to move from frame to frame, to retrieve information, or to process their input has remained remarkably static for about thirty years, at around 200 ms. Anything else appears, and always has, to be either fast or slow. This could explain why commercial applications, unlike games, simulations and communications, aren’t noticeably faster now than they were when I started programming in the Seventies. Sure, they do a great deal more, but the SLAs that I negotiated in the 1980s for application performance are very similar to what they are nowadays. To prove to myself that this wasn’t just some rose-tinted misperception on my part, I cranked up a Z80-based Jonos CP/M machine (1985) in the roof-space. Within 20 seconds from cold, it had loaded Wordstar and I was ready to write. OK, I got it wrong: some things were faster 30 years ago. Sure, I’d now have had all sorts of animations, wizzy graphics, and other comforting features, but it seems a pity that we have used all that extra CPU and memory to increase the scope of what we develop, and the graphical prettiness, but not to speed the processes needed to complete a business procedure. Never mind the weight, the response time’s great! To achieve 200 ms response times on a Z80, or similar, performance considerations influenced everything one did as a developer. If it meant writing an entire application in assembly code, applying every smart algorithm and shortcut imaginable to get the application to perform to spec, then so be it. As a result, I’m a dyed-in-the-wool performance freak and find it difficult to change my habits. Conversely, many developers now seem to feel quite differently. While all will acknowledge that performance is important, it’s no longer the virtue it once was, and other factors such as user experience now take precedence. Am I wrong? If not, then perhaps we need a new school of development technique to rival Agile, dedicated once again to producing applications that smoke the rear wheels rather than pootle elegantly to the shops; that forgo skeuomorphism, cute animation, or architectural elegance in favor of the smell of hot rubber. I struggle to name an application I use that is truly notable for its blistering performance, and would dearly love one to do my everyday work – just as long as it doesn’t go faster than my brain.

    Read the article

  • Oracle GoldenGate 11gR2 Event Marker System

    - by Doug Reid
    Oracle GoldenGate 11gR2 includes a number of refinements to the Event Marker system. Using event markers enables GoldenGate processes to take a defined action based on an event in the data stream. This feature within Oracle GoldenGate simplifies methods to embed specific custom processing in the areas of error handling, alerts, and notification. The event marker system effectively allows DML-driven workflows to be created within GoldenGate and enables customers to craft non-standard processing based on special events. There are a number of supported event actions, including: trace, log, checkpoint before, suspend, abort, and several others. With 11gR1, events can now be triggered by DDL operations, plus variables can be passed in and out of the system to shell scripts. Some good use cases for this feature are: automatic switchover to the secondary system during planned outages; better monitoring over source systems’ performance and automated switchover to the standby system in case of an outage with the primary system; automatic switchover from initial load to changed data movement; automatic synchronization of any type of batch processing taking place on both the source and target databases for database consistency; automatic stoppage of the Delivery module to allow end-of-day reporting; and finding, tracking, and reporting on transactions that are of interest, including the ones that do not have primary keys or transaction record numbers. If you would like to see a demo, please visit our YouTube channel (http://youtube.com/oraclegoldengate). To learn more about the new features of Oracle GoldenGate 11gR2 and to ask questions to the PM team, please join us on September 12th at 8am or 10am PST for our live webcast. Click here to register.

    Read the article

  • Investment scheme for a PC game project

    - by Alex Kamen
    Good day everyone, I am working on a PC game project that has 3 phases planned: micro, macro and mmo versions [if confused, see a brief description at the bottom]. I have found a potential investor for the micro version of the game, but naturally, he requested a detailed plan of how the game will pay back. And the problem is that the micro version itself is not supposed to be monetized much, other than some ads and limited in-game currency utilization. The idea is that with this combat demo already at hand, it should be possible to get a large enough investment (millions of dollars) and use it to pay back the initial small one (thousands of dollars) and take the project into the macro phase, which will really make profit. This way, everybody is going to win, provided that I can deliver the end-product. Yet while I am confident that both the conception of the macro and the real game-play of the micro versions are going to be appealing, I don’t know how to obtain any guarantee that I will be able to get funded once I have the prototype ready. And without that, I won’t receive the funds for the prototype in the first place! To summarize, my question is: how to figure out my future possibilities of getting funded once I have the combat demo out, basically “whom to write to and what”. Ideally, I would like some sort of a preliminary agreement with a game publisher, something that would basically state “If the developer provides the product in time and in quality corresponding to the specifications given, the publisher guarantees to allocate funds for distribution and further development, thereby acquiring the right to X part of all future profits”. Does this sound sane? It’s just that I don’t want to sell all of my rights out straight away by taking a big outside investment while the project is in such an early stage. I would appreciate it if you would share your thoughts on this kind of scheme, and be sure to ask questions, as I am sure I must have forgotten to mention a ton of important things, like the fact that initial funds are going to be spent on outsourcing (living in Siberia is really just great). [here’s a brief outline of what each version will feature] [micro] 1) turn based tactical combat rules 2) character development 3) arena/tournament system [macro] 4) ai-ruled dynamic interactive worlds 5) global map adventuring 6) strategic rpg + god simulator gameplay [mmo] 7) Persistent worlds system 8) Social structures system (“guilds/clans”) 9) god-simulation on the mmo scale P.S. Obviously, these features are incremental, so that the mmo version has all 9.

    Read the article

  • Building a Solaris 11 repository without network connection

    - by user12611852
    Solaris 11 has been released and is a fantastic new iteration of Oracle's rock solid, enterprise operating system.  One of the great new features is the repository-based Image Packaging System.  IPS not only introduces new cloud based package installation services, it is also integrated with our zones, boot environment and ZFS file systems to provide a safe, easy and fast way to perform system updates. My customers typically don't have network access and, in fact, can't connect to any network until they have "Authority to connect."  It's useful, however, to build up a Solaris 11 system with additional software using the new Image Packaging System and a locally stored repository. The Solaris 11 documentation describes how to create a locally stored repository with full explanations of what the commands do; I'm simply providing the quick and dirty steps.  The easiest way is to download the ISO image, burn it to a DVD and insert it into your DVD drive.  Then as root: pkg set-publisher -G '*' -g file:///cdrom/sol11repo_full/repo solaris Now you can install software using the GUI package manager or the pkg commands.  If you would like something more permanent (or don't have a DVD drive), however, it takes a little more work. After installing Solaris 11, download (on another system perhaps) the two files that make up the Solaris 11 repository from our download site Sneaker-net the files to your Solaris 11 system Unzip and cat the two files together to create one large ISO image. The file is about 6.9 GB in size zfs create rpool/export/repoSolaris11 zfs set atime=off rpool/export/repoSolaris11 zfs set compression=on rpool/export/repoSolaris11 (save some space) lofiadm -a sol-11-1111-repo-full.iso /dev/lofi/1 mount -F hsfs /dev/lofi/1 /mnt You could stop here and set the publisher to point to the /mnt/repo location; however, this mount will not be persistent across reboots. Copy the repository from the mounted ISO image to a permanent, on-disk location. rsync -aP /mnt/repo /export/repoSolaris11 pkgrepo -s /export/repoSolaris11 refresh pkg set-publisher -G '*' -g /export/repoSolaris11/repo solaris You now have a locally installed repository for adding additional software packages for Solaris 11.  The documentation also takes you through publishing your repository on the network so that others can access it.

    Read the article

  • More on Map Testing

    - by Michael Stephenson
    I have been chatting with Maurice den Heijer recently about his CodePlex project for the BizTalk Map Testing Framework (http://mtf.codeplex.com/). Some of you may remember the article I did for BizTalk 2009 and 2006 about how to test maps, but with Maurice's project he is effectively looking at how to improve productivity and quality by building some useful testing features within the framework to simplify the process of testing maps. As part of our discussion we realized that we both had slightly different approaches to how we validate the output from the map. Put simply, Maurice does some XPath validation of the data in various nodes, whereas my approach for most standard cases is to use serialization to allow you to validate the output using normal MSTest assertions. I'm not really going to go into the pros and cons of each approach because I think there is a place for both, and also I'm sure others have various approaches which work too. What would be great is for the map testing framework to provide support for different ways of testing which can cover everything from simple cases to some very specialized scenarios. So as agreed with Maurice, I have done the sample which I will talk about in the rest of this article to show how we can use the serialization approach to create and compare the input and output from a map in normal development testing. Prerequisites One of the common patterns I usually implement when developing BizTalk solutions is to use xsd.exe to create .NET classes for most of the schemas used within the solution. In the testing pattern I will take advantage of these .NET classes. The Map In this sample the map we will use is very simple and just concatenates some data from the input message to the output message. Hopefully the below picture illustrates this well. The Test In the test I'm basically taking the following actions: use the .NET class generated from the schema to create an input message for the map; serialize the input object to a file; run the map from .NET using the standard BizTalk test method which was generated for running the map; deserialize the output file from the map execution to a .NET class representing the output schema; use MSTest assertions to validate things about the output message. The below picture shows this: As you can see, the code for this is pretty simple and it's all strongly typed, which means changes to my schema which can affect the tests can be easily picked up as compilation errors. I can then choose to have one test which validates most of the output from the map, or to have many specific tests covering individual scenarios within the map. Summary Hopefully this post illustrates a powerful yet simple way of effectively testing many BizTalk mapping scenarios. I will probably have more conversations with Maurice about these approaches and perhaps some of the above will be included in the mapping test framework.   The sample can be downloaded from here: http://cid-983a58358c675769.office.live.com/self.aspx/Blog%20Samples/More%20Map%20Testing/MapTestSample.zip
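
    As a rough illustration of the serialization approach (my own sketch, not the downloadable sample), the test below builds the input with an xsd.exe-generated class, serializes it, runs the map, deserializes the output, and asserts with MSTest. The InputMessage/OutputMessage types are placeholders for the generated schema classes, and RunMap stands in for whatever map-execution helper a real BizTalk test project provides; here it simply applies the same concatenation the sample map performs so the sketch stays self-contained.

        using System.IO;
        using System.Xml.Serialization;
        using Microsoft.VisualStudio.TestTools.UnitTesting;

        // Placeholders for the xsd.exe-generated schema classes.
        public class InputMessage { public string FirstName; public string LastName; }
        public class OutputMessage { public string FullName; }

        [TestClass]
        public class ConcatenationMapTests
        {
            [TestMethod]
            public void Map_ConcatenatesNameFields()
            {
                // 1. Build the input message with the generated .NET class.
                var input = new InputMessage { FirstName = "Jane", LastName = "Doe" };

                // 2. Serialize the input object to a file.
                Serialize(input, "input.xml");

                // 3. Execute the map against the serialized input.
                RunMap("input.xml", "output.xml");

                // 4. Deserialize the map output into the generated output class.
                var output = Deserialize<OutputMessage>("output.xml");

                // 5. Validate the output with normal MSTest assertions.
                Assert.AreEqual("Jane Doe", output.FullName);
            }

            // Stand-in for invoking the generated BizTalk map test method.
            private static void RunMap(string inputPath, string outputPath)
            {
                var input = Deserialize<InputMessage>(inputPath);
                Serialize(new OutputMessage { FullName = input.FirstName + " " + input.LastName }, outputPath);
            }

            private static void Serialize<T>(T message, string path)
            {
                using (var writer = new StreamWriter(path))
                    new XmlSerializer(typeof(T)).Serialize(writer, message);
            }

            private static T Deserialize<T>(string path)
            {
                using (var reader = new StreamReader(path))
                    return (T)new XmlSerializer(typeof(T)).Deserialize(reader);
            }
        }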

    Read the article

  • Questions to ask to ensure someone understands programming? (and iOS)

    - by Stephen J
    So, I've been tutoring my friend for 2 years. Most people learn programming on their own in 3-6 months (sans algorithms). It's confusing 'cause he'll run anywhere I tell him to, understands how to read C and C++ honestly better than the average college student, and he'll modify and repeat anything I do... but for the love of god he doesn't move on to new things and he still has test anxiety. I've recently realized he's copied and toyed with existing code, but not once gained an understanding of why. I was under the impression he was learning fast because he could write it, but when you say "Make a function that takes an NSString" and he says "How?" and I say "The same way you make ANY function that takes any parameter, NSString is just a type like int" and all I hear is "No, it's an NSString, it's a special thing." and we get into an arguing match 'cause I'm like "It's just a class like any other class, you've used them for months now" and blah... I've subconsciously avoided comprehension questions because of this. Anyway, if you have him copy a program and say "Just initialize it" "Where?" "I don't care, didLoad or initWithCoder or Awake from nib, anywhere it gets initialized" and "No, it has to be exactly where you had it!" "No it doesn't!" I'm sick of this, but he won't give up. So I'm done avoiding these yelling matches and becoming a sadist from now on. I would like some help in finding questions to ask him that force him to understand what he's doing. I'd like some help and any resources I can find. CQuestions looked like a good site, but now I need some iPhone stuff. For example: *What do properties do? How are they changed? How do you change the name of the getter? *Why are Booleans inefficient? What advantage does int have over a boolean and how does the bit-shift operator help? *What does Copy do to a string? *What's the difference between a view controller and a UIView? *Write a program from memory that displays blah on screen, and flashes each view one by one. From beginner up to intermediate, hobbyist with some algebra at most. I'm just looking for resources to work with. I left in the backstory so you know to "twist" the questions so he doesn't know he's supposed to init a variable here or there, but has to figure it out, and learn why it goes "here" or that "anywhere is fine as long as it's". Sample programs, anything. I'm relatively open about this because, being a programmer, I seriously doubt he's the only one who has this issue. I'd like to know how others have overcome something similar. What made things "click" for you? Did you have a hard time finding answers on Google, and how did you learn a better way to find what you were looking for? (He's so exact, he'll search for how to write a checkers program with color X and Y inside a uiview as his search string, instead of breaking it up into components. I need help with that too, and believe it is related.) This type of problem has to remind one of us of someone they know. So, exercises to force them to think? Ways we overcame this thing in the past? I greatly appreciate any help.

    Read the article

  • Why do old programming languages continue to be revised?

    - by SunAvatar
    This question is not, "Why do people still use old programming languages?" I understand that quite well. In fact the two programming languages I know best are C and Scheme, both of which date back to the 70s. Recently I was reading about the changes in C99 and C11 versus C89 (which seems to still be the most-used version of C in practice and the version I learned from K&R). Looking around, it seems like every programming language in heavy use gets a new specification at least once per decade or so. Even Fortran is still getting new revisions, despite the fact that most people using it are still using FORTRAN 77. Contrast this with the approach of, say, the typesetting system TeX. In 1989, with the release of TeX 3.0, Donald Knuth declared that TeX was feature-complete and future releases would contain only bug fixes. Even beyond this, he has stated that upon his death, "all remaining bugs will become features" and absolutely no further updates will be made. Others are free to fork TeX and have done so, but the resulting systems are renamed to indicate that they are different from the official TeX. This is not because Knuth thinks TeX is perfect, but because he understands the value of a stable, predictable system that will do the same thing in fifty years that it does now. Why do most programming language designers not follow the same principle? Of course, when a language is relatively new, it makes sense that it will go through a period of rapid change before settling down. And no one can really object to minor changes that don't do much more than codify existing pseudo-standards or correct unintended readings. But when a language still seems to need improvement after ten or twenty years, why not just fork it or start over, rather than try to change what is already in use? If some people really want to do object-oriented programming in Fortran, why not create "Objective Fortran" for that purpose, and leave Fortran itself alone? I suppose one could say that, regardless of future revisions, C89 is already a standard and nothing stops people from continuing to use it. This is sort of true, but connotations do have consequences. GCC will, in pedantic mode, warn about syntax that is either deprecated or has a subtly different meaning in C99, which means C89 programmers can't just totally ignore the new standard. So there must be some benefit in C99 that is sufficient to impose this overhead on everyone who uses the language. This is a real question, not an invitation to argue. Obviously I do have an opinion on this, but at the moment I'm just trying to understand why this isn't just how things are done already. I suppose the question is: What are the (real or perceived) advantages of updating a language standard, as opposed to creating a new language based on the old?

    Read the article

  • With a little effort you can “SEMI”-protect your C# assemblies with obfuscation.

    - by mbcrump
    This method will not protect your assemblies from an experienced hacker. Every day we see new keygens, cracks, and serials being released that contain ways around copy protection from small companies. This is a simple process that will make a lot of hackers quit, because so many others use nothing. If you were a thief, would you pick the house that has security signs and an alarm, or one that has nothing? So, to begin: Obfuscation is the concealment of meaning in communication, making it confusing and harder to interpret. Let's begin by looking at the cartoon below: You are probably familiar with the term and probably ignored this like most programmers ignore user security. Today, I'm going to show you reflection and a way to obfuscate it. Please understand that I am aware of ways around this, but I believe some security is better than no security. In the sample program below, the code appears exactly as it does in Visual Studio. When the program runs, you get either a true or false in a console window. Sample Program. using System; using System.Diagnostics; using System.Linq; namespace ObfuscateMe { class Program { static void Main(string[] args) { Console.WriteLine(IsProcessOpen("notepad")); //Returns a True or False depending if you have notepad running. Console.ReadLine(); } public static bool IsProcessOpen(string name) { return Process.GetProcesses().Any(clsProcess => clsProcess.ProcessName.Contains(name)); } } } Pretend that this is a commercial application. The hacker will only have the executable and maybe a few config files, etc. After reviewing the executable, he can determine if it was produced in .NET by examining the file in ILDASM or Red Gate's Reflector. We are going to examine the file using Red Gate's Reflector. Upon launch, we simply drag/drop the exe over to the application. We have the following for the Main method: and for the IsProcessOpen method: Without any other knowledge as to how this works, the hacker could export the exe to get a VS project build, or copy this code in, and our application would run. Using Reflector output. using System; using System.Diagnostics; using System.Linq; namespace ObfuscateMe { class Program { static void Main(string[] args) { Console.WriteLine(IsProcessOpen("notepad")); Console.ReadLine(); } public static bool IsProcessOpen(string name) { return Process.GetProcesses().Any<Process>(delegate(Process clsProcess) { return clsProcess.ProcessName.Contains(name); }); } } } The code is not identical, but returns the same value. At this point, with a little bit of effort you could prevent the hacker from reverse engineering your code so quickly by using Eazfuscator.NET. Eazfuscator.NET is just one of many programs built for this. Visual Studio ships with a community version of Dotfuscator. So download and load Eazfuscator.NET and drag/drop your executable/project into the window. It will work for a few minutes depending on whether you have a quad-core or not. After it finishes, open the executable in Red Gate's Reflector and you will get the following: Main After Obfuscation IsProcessOpen Method after obfuscation: As you can see with the jumbled characters, it is not as easy as the first example.
I am aware of methods around this, but it takes more effort and unless the hacker is up for the challenge, they will just pick another program. This is also helpful if you are a consultant and make clients pay a yearly license fee. This would prevent the average software developer from jumping into your security routine after you have left. I hope this article helped someone. If you have any feedback, please leave it in the comments below.

    Read the article

  • Creating Asynchronous Methods in EJB 3.1

    - by cindo
    OBE of the Month: Creating Asynchronous Methods in EJB 3.1 This OBE covers creating an EJB 3.1 application that demonstrates the use of the @Asynchronous annotation on an Enterprise Java Bean (EJB) class or specific method. In this tutorial, you will create a Java EE 6 Web Application and add the following components to it - a Stateless Session Bean with two asynchronous methods. You define a Servlet to call the asynchronous methods and to keep track of the invocation and completion times to demonstrate the asynchronous nature of the method calls. The index.jsp will contain a form with a submit button, Run, allowing you to execute the application. The form will submit to the Servlet, which invokes the asynchronous methods defined in the session bean, and the response is redirected to response.jsp. Information about the asynchronous handling procedure is displayed to users. From this information, users will notice that the invoker thread and the called asynchronous thread are working concurrently. Check out this new OBE on the Oracle Learning Library: Creating Asynchronous Methods in EJB 3.1. This OBE is part of the new EJB 3.1 New Features Series. Related OBEs that might interest you: Creating a No-Interface View Session Bean and Packaging in a WAR File; Creating and Accessing a Session Bean in a Web Application.

    Read the article

  • Oracle R Distribution 3.1.1 Released

    - by Sherry LaMonica-Oracle
    Oracle R Distribution version 3.1.1 has been released to Oracle's public yum today. R-3.1.1 (code name "Sock it to Me") is an update to R-3.1.0 that consists mainly of bug fixes. It also includes enhancements related to accessing package help files, improved accuracy when importing data with large integers, and better integration with RStudio graphics. The full list of new features and bug fixes is in the NEWS file.

    To install Oracle R Distribution using yum, follow the instructions in the Oracle R Enterprise Installation and Administration Guide. Installing with yum resolves any operating system dependencies automatically, so we recommend using yum to install Oracle R Distribution. However, if yum is not available, you can install the Oracle R Distribution RPMs directly using RPM commands.

    For Oracle Linux 5, the Oracle R Distribution RPMs are available in the Enterprise Linux Add-Ons repository:
      R-3.1.1-1.el5.x86_64.rpm
      R-core-3.1.1-1.el5.x86_64.rpm
      R-devel-3.1.1-1.el5.x86_64.rpm
      libRmath-3.1.1-1.el5.x86_64.rpm
      libRmath-devel-3.1.1-1.el5.x86_64.rpm
      libRmath-static-3.1.1-1.el5.x86_64.rpm

    For Oracle Linux 6, the Oracle R Distribution RPMs are available in the Oracle Linux Add-Ons repository:
      R-3.1.1-1.el6.x86_64.rpm
      R-core-3.1.1-1.el6.x86_64.rpm
      R-devel-3.1.1-1.el6.x86_64.rpm
      libRmath-3.1.1-1.el6.x86_64.rpm
      libRmath-devel-3.1.1-1.el6.x86_64.rpm
      libRmath-static-3.1.1-1.el6.x86_64.rpm

    For example, this command installs the R 3.1.1 RPM on Oracle Linux x86-64 version 6:
      rpm -i R-3.1.1-1.el6.x86_64.rpm
    To complete the Oracle R Distribution 3.1.1 installation, repeat this command for each of the six RPMs, resolving dependencies as required.

    Oracle R Distribution 3.1.1 is not yet officially certified with Oracle R Enterprise. Refer to Table 1-2 in the Oracle R Enterprise Installation Guide for supported configurations of Oracle R Enterprise components, or check this blog for updates. The Oracle R Distribution 3.1.1 binaries for Windows, AIX, Solaris SPARC and Solaris x86 will be available on OSS, Oracle's Open Source Software portal, in the coming weeks.

    Read the article

  • JavaOne 2012: Camel, Twitter, Coherence, Wicket and GlassFish

    - by Bruno.Borges
    Before joining Oracle as Product Manager for WebLogic and GlassFish for Latin America, at the beginning of this year I proposed two talks to JavaOne USA that I had been presenting in Brazil for quite a while. One of them I presented last year at ApacheCon in Vancouver, Canada, as well as at JavaOne Brazil. In June I got the news that they were accepted as alternate sessions. Surprisingly enough, a few weeks later, at the same time I joined Oracle, I received the news that they were officially accepted and put on the schedule. Tomorrow I'll be flying to San Francisco, to my first JavaOne in the United States, and I wanted to share with you what I'm going to present there. My two sessions are these:

    Wed, 10/03, 4:30pm - CON2989 Leverage Enterprise Integration Patterns with Apache Camel and Twitter. In this one, you will be introduced to the Apache Camel framework that I had been talking about at conferences in Brazil before joining Oracle, and to a component I contributed to integrate with Twitter. You will also get a preview of a new component I've been working on to integrate Camel with the Oracle Coherence distributed cache.

    Thu, 10/04, 3:30pm - CON3395 How Scala, Wicket, and Java EE Can Improve Web Development. This one I've been working on for quite a while. It was based on the idea of an architecture that could be as agile for rapid web development as frameworks and technologies such as Ruby on Rails, PHP or Python. You will be introduced to the Apache Wicket framework, another Apache project I enjoy working with and have given many talks about at Brazilian conferences, including JavaOne Brazil, JustJava, QCon SP, and The Developers Conference. You will also be introduced to the Scala language and how to create nice DSLs to boost productivity. And last but not least, the Java EE 6 platform, which offers an awesome improvement over previous versions with its CDI, JPA, EJB 3 and JAX-RS features for web development.

    Other events I will be participating in during my stay in SF: Geeks Bike Ride, GlassFish Community Event, and the GlassFish and Friends Party. If you have any other event to suggest, please do! It's my first JavaOne and I'm really looking forward to enjoying everything. See you guys in a few days!!
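
    As a rough taste of the first session's topic, here is a hypothetical sketch of a Camel route that polls Twitter search results and logs them. This is an assumption-laden illustration, not the session's actual demo: the camel-twitter endpoint options vary across Camel versions, and the OAuth values below are placeholders that must be replaced with real credentials.

        import org.apache.camel.CamelContext;
        import org.apache.camel.builder.RouteBuilder;
        import org.apache.camel.impl.DefaultCamelContext;

        public class TwitterSearchRoute {
            public static void main(String[] args) throws Exception {
                CamelContext context = new DefaultCamelContext();
                context.addRoutes(new RouteBuilder() {
                    @Override
                    public void configure() {
                        // Poll the Twitter search API every 60 seconds for a keyword
                        // and log the text of each tweet that comes back.
                        // The OAuth options below are placeholders (assumptions).
                        from("twitter://search?type=polling&delay=60&keywords=javaone"
                                + "&consumerKey=KEY&consumerSecret=SECRET"
                                + "&accessToken=TOKEN&accessTokenSecret=TOKEN_SECRET")
                            .log("tweet: ${body.text}");
                    }
                });
                context.start();
                Thread.sleep(120000); // let the poller run for a couple of minutes
                context.stop();
            }
        }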

    Read the article

  • Oracle ENDECA Discovery 3.1 Partner Training 3-Day Workshop

    - by Mike.Hallett(at)Oracle-BI&EPM
    To find out more about the Endeca training, and to register for it, click here. June 24-26, 2014: Oracle Reading, UK – free to partners in EMEA. FREE of charge to OPN member partners, this Oracle Endeca Information Discovery (OEID) 3-day bootcamp is designed to give partners an understanding of OEID's features and how it complements the existing Oracle Business Intelligence suite. The workshop provides hands-on experience with Oracle Endeca Information Discovery. Topics covered include data exploration with Endeca Information Discovery, data ingest, the project lifecycle, building an Endeca Server data model and advanced modeling techniques, and working with Studio. You will also learn about working with ETL components for content acquisition and other aspects of the project such as security. After taking this course, you will be well prepared to architect, build, demo, and implement an end-to-end Endeca Information Discovery solution. If you are a big data analytics architect or developer, or a BI or data warehouse architect, developer or consultant, you don't want to miss this 3-day workshop. Click here to register.

    Read the article

  • From co-op to full-time: help with salary negotiation [closed]

    - by Peter
    Hey, I'm a co-op student who worked at a medium-size printing company for 8 months. I had a good time; it was relaxed, sometimes insufficiently challenging, but nonetheless I learned a whole lot. I stuck with them for another 5 months (including this month) at the same rate I was paid then, doing testing work, tool development, taking care of emergencies when the lead developers were away, and other smaller projects, and now bigger projects and problem handling (bad printer output, etc.). I know their ecommerce website inside out, I know their printing software inside out, and I have made many changes to both without a hitch. I have also done a lot of refactoring of the existing code base, which, as far as I can tell, I am the only one to have actually done, even though there is constant talk about it. I guess the unit testing paid off and lets me see the value in modularity a bit more. Nevertheless, I have faith in my skill, and the restructuring I did turned out better than I had imagined. Now the problem is that I finish school next month, so I asked for a full-time spot the month after. They have been expanding: they hired a new guy a few months after my co-op spot started, and just now they hired another new guy to deal with the CRM application. The lead developer who wrote all of the software left 5 months ago, so it was up to all of us to learn what he had done over 4 years (including the database and networking). So now I'm afraid that if I assert myself and ask for a salary similar to the other guys', which I believe I am certainly on par with, I will be seen as ungrateful. It's hard to flip a switch and say, "Hey, double my pay," even though I'm working with their bread and butter (printers), writing new features, and refactoring the whole application for extensibility. I love the work regardless of pay. I also feel I may be replaceable, although nobody knows the website better than myself and the lead web dev (not by a long shot), and nobody knows the printer software/drivers better than I do. I just thought they would have brought up a raise earlier on, and now it feels like they don't value my work. I'm also tired of worrying about it. I think my question is: what do I do next?

    Read the article

  • Block-level deduplicating filesystem

    - by James Haigh
    I'm looking for a deduplicating copy-on-write filesystem solution for general user data such as /home and backups of it. It should use online/inline/synchronous deduplication at the block level, using secure hashing (for a negligible chance of collisions) such as SHA-256 or TTH. Duplicate blocks need not even touch the disk. The idea is that I should be able to just copy /home/<user> to an external HDD formatted with the same filesystem to do a backup. Simple. No messing around with incremental backups, where corruption of any of the snapshots nearly always breaks all later snapshots, and no need to use a specific tool to delete or 'check out' a snapshot. Everything should simply be done from the file browser without worry. Can you imagine how easy this would be? I'd never have to think twice about backing up again! I don't mind a performance hit; reliability is the main concern. Although, with specific implementations of cp, mv and scp, and a file browser plugin, these operations would be very fast, especially when there is a lot of duplication, as they would only need to transfer the absent blocks. Accidentally using conventional copy tools that do not integrate with the FS would merely take longer, waste some bandwidth when copying remotely and waste some CPU, as the duplicate data would be re-read, re-transferred and re-hashed (although nothing would be re-written), but would absolutely not corrupt anything. (Some file-sharing software may also be able to benefit by integrating with the FS.) So what's the best way of doing this? I've looked at some options:
    - lessfs – looks unmaintained; any good?
    - Opendedup/SDFS (http://www.opendedup.org/) – Java? Could I use this on Android?! What does SDFS stand for?
    - Btrfs (https://en.wikipedia.org/wiki/Btrfs#Features) – some patches floating around in mailing list archives, but no real support.
    - ZFS (https://en.wikipedia.org/wiki/ZFS#Linux) – hopefully they'll one day relicense under a true Free/Open Source, GPL-compatible licence.
    Also, two years ago I made an attempt in Python, using FUSE at the file level on top of a typical solid FS such as ext4, but I found FUSE for Python under-documented and didn't manage to implement all of the system calls.
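
    For illustration, here is a minimal sketch (in Java, purely hypothetical and not tied to any of the filesystems listed above) of the core idea behind inline block-level deduplication: split data into fixed-size blocks, key each block by its SHA-256 digest, and store a block only the first time that digest is seen.

        import java.io.InputStream;
        import java.nio.file.Files;
        import java.nio.file.Path;
        import java.nio.file.Paths;
        import java.security.MessageDigest;
        import java.util.ArrayList;
        import java.util.Arrays;
        import java.util.HashMap;
        import java.util.List;
        import java.util.Map;

        // Content-addressed, block-level deduplication in miniature:
        // duplicate blocks are detected by hash before anything is written,
        // so they are stored (and would hit the disk) only once.
        public class BlockDedupSketch {

            static final int BLOCK_SIZE = 4096;

            // hash -> block contents (a real FS would persist this store on disk)
            static final Map<String, byte[]> blockStore = new HashMap<>();

            // Returns the list of block hashes that reconstruct the file.
            static List<String> ingest(Path file) throws Exception {
                MessageDigest sha256 = MessageDigest.getInstance("SHA-256");
                List<String> recipe = new ArrayList<>();
                try (InputStream in = Files.newInputStream(file)) {
                    byte[] buf = new byte[BLOCK_SIZE];
                    int n;
                    while ((n = in.read(buf)) > 0) {
                        byte[] block = Arrays.copyOf(buf, n);
                        String key = toHex(sha256.digest(block));
                        // a block whose hash is already known is never stored again
                        blockStore.putIfAbsent(key, block);
                        recipe.add(key);
                    }
                }
                return recipe;
            }

            static String toHex(byte[] bytes) {
                StringBuilder sb = new StringBuilder();
                for (byte b : bytes) sb.append(String.format("%02x", b));
                return sb.toString();
            }

            public static void main(String[] args) throws Exception {
                List<String> recipe = ingest(Paths.get(args[0]));
                System.out.println("blocks in file: " + recipe.size()
                        + ", unique blocks stored: " + blockStore.size());
            }
        }

    A real filesystem would also persist the per-file block lists and reference-count the blocks so they can be freed, but the lookup-by-hash step is exactly what lets duplicate data avoid being written a second time.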

    Read the article

< Previous Page | 494 495 496 497 498 499 500 501 502 503 504 505  | Next Page >