Search Results

Search found 5920 results on 237 pages for 'hand drawn'.


  • Issue 15: SVP Focus

    - by rituchhibber
         SVP FOCUS -- Chris Baker SVP Oracle Worldwide ISV-OEM-Java Sales Chris Baker is the Global Head of ISV/OEM Sales responsible for working with ISV/OEM partners to maximise Oracle's business through those partners, whilst maximising those partners’ business to their end users. Chris works with partners, customers, innovators, investors and employees to develop innovative business solutions using Oracle products, services and skills. RESOURCES -- Oracle PartnerNetwork (OPN) OPN Solutions Catalog Oracle Exastack Program Oracle Exastack Optimized Oracle Cloud Computing Oracle Engineered Systems Oracle and Java "By taking part in marketing activities, our partners accelerate their sales cycles." -- Firstly, could you please explain Oracle's current strategy for ISV partners, globally and in EMEA? Oracle customers use independent software vendor (ISV) applications to run their businesses. They use them to generate revenue and to fulfil obligations to their own customers. Our strategy is very straightforward. We want all of our ISV partners and OEMs to concentrate on the things that they do the best—building applications to meet the unique industry and functional requirements of their customers. We want to ensure that we deliver a best-in-class application platform so ISVs are free to concentrate their effort on their application functionality and user experience. We invest over four billion dollars in research and development every year, and we want our ISVs to benefit from all of that investment in operating systems, virtualisation, databases, middleware, engineered systems, and other hardware. By doing this, we help them to reduce their costs, gain more consistency and agility for quicker implementations, and also rapidly differentiate themselves from other application vendors. It's all about simplification because we believe that around 25 to 30 percent of the development costs incurred by many ISVs are caused by customising infrastructure and have nothing to do with their applications. Our strategy is to enable our ISV partners to standardise their application platform using engineered architecture, so they can write once to the Oracle stack and deploy seamlessly in the cloud, on-premise, or in hybrid deployments. It's really important that the architecture is the same in order to keep cost and time overheads at a minimum, so we provide standardisation and an environment that enables our ISVs to concentrate on the core business that makes them the most money and brings them success. How do you believe this strategy is helping the ISVs to work hand-in-hand with Oracle to ensure that end customers get the industry-leading solutions that they need? We work with our ISVs not just to help them be successful, but also to help them market themselves. We have something called the 'Oracle Exastack Ready Program', which enables ISVs to publicise themselves as 'Ready' to run the core software platforms that run on Oracle's engineered systems including Exadata and Exalogic. So, for example, they can become 'Database Ready' which means that they use the latest version of Oracle Database and therefore can run their application without modification on Exadata or the Oracle Database Appliance. Alternatively, they can become WebLogic Ready, Oracle Linux Ready and Oracle Solaris Ready which means they run on the latest release and therefore can run their application, with no new porting work, on Oracle Exalogic. 
Those 'Ready' logos are important in helping ISVs advertise to their customers that they are using the latest technologies which have been fully tested. We now also have Exadata Ready and Exalogic Ready programmes which allow ISVs to promote the certification of their applications on these platforms. This highlights these partners to Oracle customers as having solutions that run fluently on the Oracle Exadata Database Machine, the Oracle Exalogic Elastic Cloud or one of our other engineered systems. This makes it easy for customers to identify solutions and provides ISVs with an avenue to connect with Oracle customers who are rapidly adopting engineered systems. We have also taken this programme to the next level in the shape of 'Oracle Exastack Optimized' for partners whose applications run best on the Oracle stack and have invested the time to fully optimise application performance. We ensure that Exastack Optimized partner status is promoted and supported by press releases, and we help our ISVs go to market and differentiate themselves through the use of our technology and the standardisation it delivers. To date we have had several hundred organisations successfully work through our Exastack Optimized programme. How does Oracle's strategy of offering pre-integrated open platform software and hardware allow ISVs to bring their products to market more quickly? One of the problems for many ISVs is that they have to think very carefully about the technology on which their solutions will be deployed, particularly in the cloud or hosted environments. They have to think hard about how they secure these environments, whether the concern is, for example, middleware, identity management, or securing personal data. If they don't use the technology that we build into our products to help them to fulfil these roles, they then have to build it themselves. This takes time, requires testing, and must be maintained. By taking advantage of our technology, partners will now know that they have a standard platform. They will know that they can confidently talk about implementation being the same every time they do it. Very large ISV applications could once take a year or two to be implemented in an on-premise environment. But it wasn't just the configuration of the application that took the time, it was actually the infrastructure - the different hardware configurations, operating systems and configurations of databases and middleware. Now we strongly believe that it's all about standardisation and repeatability. It's about making sure that our partners can do it once and are then able to roll it out many different times using standard componentry. What actions would you recommend for existing ISV partners that are looking to do more business with Oracle and its customer base, not only to maximise benefits, but also to maximise partner relationships? My team, around the world and in the EMEA region, is available and ready to talk to any of our ISVs and to explore the possibilities together. We run programmes like 'Excite' and 'Insight' to help us to understand how we can help ISVs with architecture and widen their environments. But we also want to work with, and look at, new opportunities - for example, the Machine-to-Machine (M2M) market or 'The Internet of Things'. Over the next few years, many millions, indeed billions of devices will be collecting massive amounts of data and communicating it back to the central systems where ISVs will be running their applications. 
The only way that our partners will be able to provide a single vendor 'end-to-end' solution is to use Oracle integrated systems at the back end and Java on the 'smart' devices collecting the data—a complete solution from device to data centre. So there are huge opportunities to work closely with our ISVs, using Oracle's complete M2M platform, to provide the infrastructure that enables them to extract maximum value from the data collected. If any partners don't know where to start or who to contact, then they can contact me directly at [email protected] or indeed any of our teams across the EMEA region. We want to work with ISVs to help them to be as successful as they possibly can through simplification and speed to market, and we also want all of the top ISVs in the world to be based on Oracle. What opportunities are immediately opened to new ISV partners joining the OPN? As you know OPN is very, very important. New members will discover a huge amount of content that instantly becomes accessible to them. They can access a wealth of no-cost training and enablement materials to build their expertise in Oracle technology. They can download Oracle software and use it for development projects. They can help themselves become more competent by becoming part of a true community and uncovering new opportunities by working with Oracle and their peers in the Oracle Partner Network. As well as publishing massive amounts of information on OPN, we also hold our global Oracle OpenWorld event, at which partners play a huge role. This takes place at the end of September and the beginning of October in San Francisco. Attending ISV partners have an unrivalled opportunity to contribute to elements such as the OpenWorld / OPN Exchange, at which they can talk to other partners and really begin thinking about how they can move their businesses on and play key roles in a very large ecosystem which revolves around technology and standardisation. Finally, are there any other messages that you would like to share with the Oracle ISV community? The crucial message that I always like to reinforce is architecture, architecture and architecture! The key opportunities that ISVs have today revolve around standardising their architectures so that they can confidently think: "I will be able to do exactly the same thing whenever a customer is looking to deploy on-premise, hosted or in the cloud". The right architecture is critical to being competitive and to really start changing the game. We want to help our ISV partners to do just that; to establish standard architecture and to seize the opportunities it opens up for them. New market opportunities like M2M are enormous - just look at how many devices are all around you right now. We can help our partners to interface with these devices more effectively while thinking about their entire ecosystem, rather than just the piece that they have traditionally focused upon. With standardised architecture, we can help people dramatically improve their speed, reach, agility and delivery of enhanced customer satisfaction and value all the way from the Java side to their centralised systems. All Oracle ISV partners must take advantage of these opportunities, which is why Oracle will continue to invest in and support them. Oracle OpenWorld 2010 Whether you attended Oracle OpenWorld 2009 or not, don't forget to save the date now for Oracle OpenWorld 2010. The event will be held a little earlier next year, from 19th-23rd September, so please don't miss out. 
With thousands of sessions and hundreds of exhibits and demos already lined up, there's no better place to learn how to optimise your existing systems, get an inside line on upcoming technology breakthroughs, and meet with your partner peers, Oracle strategists and even the developers responsible for the products and services that help you get better results for your end customers. Register Now for Oracle OpenWorld 2010! Perhaps you are interested in learning more about Oracle OpenWorld 2010, but don't wish to register at this time? Great! Please just enter your contact information here and we will contact you at a later date. How to Exhibit at Oracle OpenWorld 2010 Sponsorship Opportunities at Oracle OpenWorld 2010 Advertising Opportunities at Oracle OpenWorld 2010

    Read the article

  • Dealing With Table Borders In OOXML

    - by Tim Murphy
    Note: Cross posted from Coding The Document. Formatting tables in a document programmatically can be a very complex task.  This is the major reason why we start our document generation projects with templates instead of building components in a document by hand. Borders are one aspect of a table that you may want to format.  Borders are used to make certain content in a table stand out.  If you need to conditionally set and remove borders there is something that you need to be aware of.  Even in OOXML you have the concepts of styles, inheriting styles and overriding styles. When Word defines a table it will reference a global style such as “TableGrid”.  This style will include the borders for the table.  Specifically the InsideHorizontalBorder and InsideVerticalBorder define the borders for the cells.  These can be overridden by the TableCellBorders collection of a particular cell.  Adding a double right border on a cell is as easy as the couple of lines of code below. wordprocessing.TableCellBorders borders = new wordprocessing.TableCellBorders(); borders.RightBorder = new RightBorder(){Val = BorderValues.Double, Color = "000000", ThemeColor = ThemeColorValues.Text1, Size = (UInt32Value)4U, Space = (UInt32Value)0U }; cell.TableCellProperties.Append(borders); If I want to revert back to the table’s style for cell borders I simply need to remove all children from the TableCellBorders collection.  It is like removing a class identifier from a TD tag in HTML.  The style in the parent object takes back over. With the knowledge of how the borders work you can take the concept and apply it to other effects of styles. del.icio.us Tags: OOXML,Office Open XML,Microsoft Office 2007,Microsoft Word 2007,table,style,border
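    The "revert to the table style" step can be sketched like this (the helper name and structure are ours, not from the post; it uses the Open XML SDK Wordprocessing types mentioned above):

      using DocumentFormat.OpenXml.Wordprocessing;

      static void ClearBorderOverrides(TableCell cell)
      {
          // Look for cell-level border overrides, if any were added.
          TableCellBorders borders = cell.TableCellProperties?.GetFirstChild<TableCellBorders>();
          if (borders != null)
          {
              // With no children left, the table style's InsideHorizontalBorder and
              // InsideVerticalBorder definitions apply to this cell again.
              borders.RemoveAllChildren();
          }
      }

    As the post says, this is the OOXML equivalent of dropping a class from a TD tag so the parent style takes over.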

    Read the article

  • IOUG Webcast Series on Identity Management

    - by Tanu Sood
    Identity Management for Business Empowerment Identity Management has gone from the realm of IT tools to being a business solution. Security and Identity Management offer confidence in doing secure and compliant business. But more than that, Identity Management today contributes to business growth with secure social, cloud, mobile and internal & external ecosystem enablement. Cloud computing has heightened the interest in user access security, mobile computing brings access to information beyond the enterprise and a bring your own device culture in-house, social media has added a new dimension to user identity and increasing security compliance pressure has made organizations rethink their roles and entitlements strategy. To discuss the industry trends, maturity and framework for security, compliance and business empowerment with identity management, Oracle is proud to collaborate with IOUG to launch a series of live webcasts. Covering a span of topics from identity platform to entitlements management, privileged access management and cloud, mobile and social security, these webcasts will provide direct access to subject matter experts and technology specialists. Hear first-hand about best practices, a pragmatic approach to security implementation, customer success stories and more. Register today for the individual webcasts or the series. And just a reminder that the conversation starts at COLLABORATE 12 in Las Vegas from April 22nd – 26th. In addition to our conference sessions, as an added value this year, we are offering a half-day deep dive session on Oracle Identity Management: Building a Security and Compliance Framework for Oracle Systems. The session is scheduled for Sunday, April 22nd from 9 am to 3 pm and will cover relevant topics such as: • A Primer on Identity Management • Security and Compliance with Oracle Identity Management • Security for Oracle Applications, Fusion Applications • Managing Identities in The Cloud and Mobile World • Best Practices: Building an Identity Roadmap and Getting Started To get a head start on your compliance and security program, pre-register for this session today.

    Read the article

  • Selling Android apps from Latvia? or should I just put banners?

    - by Roger Travis
    I am in Latvia (which is not supported for selling apps on the Android Market), so I am thinking about the best way of monetizing my app. So far I've come up with these options: somehow imitate that I am from a supported country, get a bank account there, etc.; use PayPal for in-app purchases. The player gets, say, the first 10 levels for free, but then is asked to pay 0.99$ for the rest of the game. Downsides: the player might not feel comfortable entering his PayPal details into an app; also, the Android Market might not really like that. Or make the app free and get money from advertising... let's do some calculation here: say I get 1m free downloads, and each user during his playtime would see 10 banners, therefore 10m / 1000 * 0.3 gives roughly 33k$ (if we use AdMob with their 0.3$ per 1000 impressions). On the other hand, if we use PayPal in-app purchase, we need a 3% or more conversion rate to beat this... hmm... What do you think about all this? Thanks! Edit: from what I just read all over the net, it looks like advertisers will change their eCPM price a lot without you understanding why... while using in-app PayPal purchase you can at least somehow monitor the cashflow.

    Read the article

  • Webcast - Oracle Database In-Memory Option

    - by Thanos Terentes Printzios
    Following the recent announcement by Larry Ellison on the Future of the Database, we are happy to share this exclusive series of live webcasts from Oracle Database Product Management, where you can learn more about the brand new Oracle Database 12c In-Memory option. Oracle Database In-Memory is Oracle’s new memory-optimized technology that transparently accelerates analytic, data warehousing, and reporting workloads, while also accelerating transaction processing (OLTP) workloads. Participants will learn about Oracle Database In-Memory benefits, features, and leading edge architecture.  The Database In-Memory architecture provides the ability to easily process data orders of magnitude faster by simply enabling the feature and identifying tables to bring in-memory without application changes. Details on Oracle Database In-Memory’s ease of use and management, scalability, and availability will also be covered. Please join us to learn more about Oracle Database In-Memory and get first-hand knowledge of this important new feature. Delivery Format This FREE online LIVE eSeminar will be delivered over the Web. These Oracle webcasts are FREE for Customers, System Integrators, ISVs, VARs and Platform Partners. Presenter: Richard Jacobs, Oracle Solution Architect. Europe Webcast 1 Date: August 29, 2014 @ 10:00 am to 11:00 am Central European Summer Time (CEST). Register Here! Europe Webcast 2 Date: September 29, 2014 @ 10:00 am to 11:00 am Central European Summer Time (CEST). Register Here!

    Read the article

  • Oracle Open World 2013 - JD Edwards at Your Fingertips

    - by KemButller
    The Oracle & JD Edwards Universe at Your Fingertips!  Oracle Open World features thousands of sessions from which attendees can choose, including keynotes, technical sessions, demos, and hands-on labs. Hundreds of exhibitors will be on hand to share what they’re bringing to the leading edge of Oracle technology. You will have an infinite number of opportunities to network, trade information with peers, and gain insights from experts. For JD Edwards’ customers this valuable experience is twofold. Enjoy the convenience of attending the core JD Edwards’ program featured at the Intercontinental Hotel and experience the keynotes, educational sessions, networking events and partner solutions exhibited at the adjacent Moscone Convention Center.  Highlights for JD Edwards Customers:  Kickoff with the JD Edwards General Session, followed by product strategy road map sessions.  Select from over 60 educational sessions specifically applicable to JD Edwards.  Deepen your knowledge by attending the JD Edwards EnterpriseOne technical hands on lab sessions including: o One View Reporting – basic and advanced o EnterpriseOne Page Generator o User Interface Personalization o Configuring Composite Applications with Café One  Choose from thousands of educational sessions offered throughout the entire conference covering Oracle applications, industries, middleware, server and storage systems and database.  Meet the JD Edwards experts in the Oracle DEMOGrounds and get hands on experience with the latest and hottest features in Applications, Tools and Technologies, Mobility, In-Memory Applications, Health and Safety Incident Management, User Experience and Reporting.  Visit the JD Edwards Partner Pavilion at the Intercontinental Hotel featuring partner organizations with solutions for JD Edwards’ customers.  Meet with the Oracle JD Edwards Upgrade team during the conference as part of the Upgrade Care Program. Maximize your conference experience and leave with the information and contacts you need to turbo-charge your upgrade planning. Contact Barbara.canham-AT-oracle-DOT-com prior to the conference for more information.  Arrive on Sunday to participate in sessions presented by the Special Interest Groups of Quest International User Group. Oracle OpenWorld

    Read the article

  • Web workflow solution - how should I approach the design?

    - by Tom Pickles
    We've been tasked with creating a web-based workflow tool to track change management. It has a single workflow with multiple synchronous tasks for the most part, but branches out at a point to tasks running in parallel which meet up later on. There will be all sorts of people using the application, and all of them will need to see their outstanding tasks for each change, but only theirs, not others'. There will also be a high-level group of people who oversee all changes, so they need to see everything. They will need to see tasks which have not been done in the specified time, who's responsible, etc. The data will be persisted to a SQL database. It'll all be put together using .Net. I've been trying to learn and implement OOP into my designs of late, but I'm wondering if this is moot in this instance as it may be better to have the business logic for this in stored procedures in the DB. I could use POCO's, a front end layer and a data access layer for the web application and just use it as a mechanism for CRUD actions on the DB, then use SP's fired in the DB to apply the business rules. On the other hand, I could use an object oriented design within the web app, but as the data in the app is stateless, is this a bad idea? I could try and model out the whole application into a class structure, implementing interfaces, base classes and all that good stuff. So I would create a change class, which would contain a list of task classes/types, which defined each task, and implement an ITask interface etc. Put end-user types into the tasks to identify who should be doing what task. Then apply all the business logic in the respective class methods etc. What approach do you guys think I should be using for this solution?
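    As a rough sketch of the class structure described in the question (the type and member names below are illustrative, not a prescribed design):

      using System;
      using System.Collections.Generic;
      using System.Linq;

      public enum Role { Requester, Implementer, Tester, ChangeManager }

      public interface ITask
      {
          string Name { get; }
          Role AssignedTo { get; }
          DateTime DueBy { get; }
          bool IsComplete { get; }
          void Complete(string userId);
      }

      public class ChangeRequest
      {
          public int Id { get; set; }
          public string Title { get; set; }
          public List<ITask> Tasks { get; } = new List<ITask>();

          // "Only my tasks" view for ordinary users.
          public IEnumerable<ITask> OutstandingFor(Role role) =>
              Tasks.Where(t => t.AssignedTo == role && !t.IsComplete);

          // Oversight view: anything not completed by its due date.
          public IEnumerable<ITask> Overdue(DateTime now) =>
              Tasks.Where(t => !t.IsComplete && t.DueBy < now);
      }

    Whether methods like these hold the business rules, or merely mirror rules kept in stored procedures, is exactly the trade-off the question is weighing.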

    Read the article

  • Help, broken Gsettings

    - by Rene
    I was trying to disable the global menu as per http://ubuntuhandbook.org/index.php/2013/07/disable-global-menu-on-ubuntu-13-10-saucy/#comment-8612, but while it didn't change anything, after running the autoremove command unity-tweak-tool broke. Obviously my first reaction was to re-install the removed package but it remains broken. TBH I don't know if it is even related or just a coincidence. When I start it from the launcher it just blinks and disappears. When I start it from the terminal I get this error:

      $ gnome-tweak-tool
      WARNING : Shell not installed or running
      WARNING : Error detecting shell
      Traceback (most recent call last):
        File "/usr/lib/python2.7/dist-packages/gtweak/tweaks/tweak_shell_extensions.py", line 199, in __init__
          raise Exception("Shell not running or DBus service not available")
      Exception: Shell not running or DBus service not available
      INFO : GSettings missing key org.gnome.nautilus.desktop (key computer-icon-visible)
      WARNING : Shell not running None
      INFO : GSettings missing key org.gnome.mutter (key workspaces-only-on-primary)
      Segmentation fault (core dumped)

    I had a look with dconf-editor to see if I could just add the missing key, but apparently keys aren't meant to be added "by hand". So how can I fix this? I'd prefer not to have to reinstall everything. Which package is broken, and can I just reinstall that? EDIT: I found that as root gnome-tweak-tool no longer crashed, so possibly a permission issue somewhere. I don't know that I changed any permissions. Another related problem, actually the reason I noticed the problem at all, is that unity-tweak-tool no longer seems to save the values I edit. I normally just have the Unity launcher on the primary display but wanted to check what it was like having it on both. I didn't like it so I went into unity-tweak-tool to set it back - but regardless of how many times I tick "only primary display" it never changes anything. What does the Unity-tweak-tool actually change and can I do this directly somehow?

    Read the article

  • NHibernate Pitfalls: Loading Foreign Key Properties

    - by Ricardo Peres
    This is part of a series of posts about NHibernate Pitfalls. See the entire collection here. When saving a new entity that has references to other entities (one to one, many to one), one has two options for setting their values: Load each of these references by calling ISession.Get and passing the foreign key; Load a proxy instead, by calling ISession.Load with the foreign key. So, what is the difference? Well, ISession.Get goes to the database and tries to retrieve the record with the given key, returning null if no record is found. ISession.Load, on the other hand, just returns a proxy to that record, without going to the database. This turns out to be a better option, because we really don’t need to retrieve the record – and all of its non-lazy properties and collections -, we just need its key. An example: 1: //going to the database 2: OrderDetail od = new OrderDetail(); 3: od.Product = session.Get<Product>(1); //a product is retrieved from the database 4: od.Order = session.Get<Order>(2); //an order is retrieved from the database 5:  6: session.Save(od); 7:  8: //creating in-memory proxies 9: OrderDetail od = new OrderDetail(); 10: od.Product = session.Load<Product>(1); //a proxy to a product is created 11: od.Order = session.Load<Order>(2); //a proxy to an order is created 12:  13: session.Save(od); So, if you just need to set a foreign key, use ISession.Load instead of ISession.Get.

    Read the article

  • Rails/Node.js interaction

    - by lpvn
    My co-worker and I are developing a web application with Rails and Node.js and we can't reach a consensus regarding a particular architectural decision. Our setup is basically a Rails server working with Node.js and Redis: when a client makes an HTTP request to our Rails API, in some cases our Rails application posts the response to a Redis database and then Node.js transmits the response via websocket. Our disagreement is over the following point: my co-worker thinks that using Node.js to send data to clients is somewhat business logic and should be inside the model, so in the first code he wrote he used broadcast commands in callbacks and other places in the model; he's convinced that the models are the best place for the interaction between Rails and Node. I, on the other hand, think that using Node.js belongs to the runtime realm; my take is that the broadcast commands and other Node.js interactions should be in the controller and should only be used in a model if passed through a well defined interface, just like the situation when a model needs to access the current user of a session. At this point we're tired of arguing over this same thing and our discussion consists of us repeating the same opinions to each other over and over. Could anyone, preferably with experience in the same setup, give us an unambiguous answer saying which solution is more appropriate and why?

    Read the article

  • Correct For Loop Design

    - by Yttrill
    What is the correct design for a for loop? Felix currently uses

      if len a > 0 do
        for var i in 0 upto len a - 1 do
          println a.[i];
        done
      done

    which is inclusive of the upper bound. This is necessary to support the full range of values of a typical integer type. However the for loop shown does not support zero length arrays, hence the special test, nor will the subtraction of 1 work convincingly if the length of the array is equal to the number of integers. (I say convincingly because it may be that 0 - 1 = maxval: this is true in C for unsigned int, but are you sure it is true for unsigned char without thinking carefully about integral promotions?) The actual implementation of the for loop by my compiler does correctly handle 0 but this requires two tests to implement the loop:

      continue:
        if not (i <= bound) goto break
        body
        if i == bound goto break
        ++i
        goto continue
      break:

    Throw in the hand coded zero check in the array example and three tests are needed. If the loop were exclusive it would handle zero properly, avoiding the special test, but there'd be no way to express the upper bound of an array with maximum size. Note the C way of doing this: for(i=0; predicate(i); increment(i)) has the same problem. The predicate is tested after the increment, but the terminating increment is not universally valid! There is a general argument that a simple exclusive loop is enough: promote the index to a large type to prevent overflow, and assume no one will ever loop to the maximum value of this type.. but I'm not entirely convinced: if you promoted to C's size_t and looped from the second largest value to the largest you'd get an infinite loop!
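    For comparison, here is the same two-test scheme written out in C# (a sketch, not from the post; uint stands in for the unsigned index type):

      using System;

      static class InclusiveLoop
      {
          // Inclusive loop over first..bound that never overflows: the exit test
          // runs before the increment, so i never exceeds bound, and the entry
          // test handles an already-empty range (first > bound).
          public static void ForUpto(uint first, uint bound, Action<uint> body)
          {
              uint i = first;
              while (i <= bound)
              {
                  body(i);
                  if (i == bound) break;
                  i++;
              }
          }
      }

    The zero-length array case still needs the caller's "len > 0" check, exactly as in the Felix example, because bound = len - 1 wraps around when len is 0.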

    Read the article

  • Ajax application: using SOAP vs REST ?

    - by coder
    I'm building an ajax heavy application (client-side strictly html/css/js) which will be getting all the data and using server business logic via webservices. I know REST seems to be the hot topic but I can't find any good arguments. The main argument seems to be its "light-weight". My impression so far is that wsdl/soap based services are more expressive and allow for a more complex transfer of data. It appears that soap would be more useful in the application I'm building where the only code consuming the services will be the js downloaded in the client browser. REST on the other hand seems to have a smaller entry barrier and so can be more useful for services like twitter in allowing other developers to consume these services easily. Also, REST seems to be better suited for simple data transfers. So in summary SOAP is useful for complex data transfer and REST is useful in simple data transfer. I'm currently under the impression that using SOAP would be best due to the complexity of the messages but perhaps there are other factors. What are your thoughts on the pros/cons of soap/rest for a heavy ajax web app? EDIT: While the wsdl is in xml, the data I'm transferring back and forth is actually in JSON. It just appears more natural to use wsdl/soap here due to the nature of the app. The verbs GET and POST may not be enough. I may want to say something like: processQueue, or executeTimer. This is why my conclusion has been wsdl/soap would be good for bridging a complex layer between two applications (client and server) whereas REST would be better (due to its simplicity) for allowing many developer-users to consume resources programmatically. So you could say the choice falls along two lines: Will the app be verb-oriented (completing tasks: use soap) or noun-oriented (consuming resources: use REST)? Will the api be consumed by few developers or many developers (REST is strong for many developers)? Since such an ajax heavy app would potentially use many verbs and would only be used by the client developer it appears soap/wsdl would be the best fit.

    Read the article

  • Oracle on Oracle: Is that all?

    - by Darin Pendergraft
    On October 17th, I posted a short blog and a podcast interview with Chirag Andani, talking about how Oracle IT uses its own IDM products. Blog link here. In response, I received a comment from reader Jaime Cardoso ([email protected]) who posted: “- You could have talked about how by deploying Oracle's Open standards base technology you were able to integrate any new system in your infrastructure in days. - You could have talked about how by deploying federation you were enabling the business side to keep all their options open in terms of companies to buy and sell while maintaining perfect employee and customer's single view. - You could have talked about how you are now able to cut response times to your audit and security teams into 1/10th of your former times Instead you spent 6 minutes talking about single sign on and self provisioning? If I didn't knew your IDM offer so well I would now be wondering what its differences from Microsoft's offer was. Sorry for not giving a positive comment here but, please your IDM suite is very good and, you simply aren't promoting it well enough” So I decided to send Jaime a note asking him about his experience, and to get his perspective on what makes the Oracle products great. What I found out is that Jaime is a very experienced IDM Architect with several major projects under his belt. Darin Pendergraft: Can you tell me a bit about your experience? How long have you worked in IT, and what is your IDM experience? Jaime Cardoso: I started working in "serious" IT in 1998 when I became Netscape's technical specialist in Portugal. Netscape Portugal didn't exist so, I was working for their VAR here. Most of my work at the time was with Netscape's mail server and LDAP server. Since that time I've been bouncing between the system's side like Sun resellers, Solaris stuff and even worked with Sun's Engineering in the making of an Hierarchical Storage Product (Sun CIS if you know it) and the application's side, mostly in LDAP and IDM. Over the years I've been doing support, service delivery and pre-sales / architecture design of IDM solutions in most big customers in Portugal, to name a few projects: - The first European deployment of Sun Access Manager (SAPO – Portugal Telecom) - The identity repository of 5/5 of the Biggest Portuguese banks - The Portuguese government federation of services project DP: OK, in your blog response, you mentioned 3 topics: 1. Using Oracle's standards based architecture; (you) were able to integrate any new system in days: can you give an example? What systems, how long did it take, number of apps/users/accounts/roles etc. JC: It's relatively easy to design a user management strategy for a static environment, or if you simply assume that you're an <insert vendor here> shop and all your systems will bow to that vendor's will. We've all seen that path, the use of proprietary technologies in interoperability solutions but, then reality kicks in. As an ISP I recall that I made the technical decision to use Active Directory as a central authentication system for the entire IT infrastructure. Clients, systems, apps, everything was there. As a good part of the systems and apps were running on UNIX, then a connector became needed in order to have UNIX boxes to authenticate against AD. And, that strategy worked but, each new machine required the component to be installed, monitoring had to be made for that component and each new app had to be independently certified. 
A self care user portal was an ongoing project, AD access assumes the client is inside the domain, something the ISP's customers (and UNIX boxes) weren't nor had any intention of ever being. When the Windows 2008 rollout was done, Microsoft changed the Active Directory interface. The Windows administrators didn't have enough know-how about directories and the way systems outside the MS world behaved so, on the go live, things weren't properly tested and a general outage followed. Several hours and 1 roll back later, everything was back working. But, the ISP still had to change all of its applications to work with the new access methods and reset the effort spent on the self service user portal. To keep with the same strategy, they would also have to trust Microsoft not to change interfaces again. Simply by putting up an Oracle LDAP server in the middle and replicating the user info from the AD into LDAP, most of the problems went away. Even systems for which no AD connector existed had PAM in them so, integration was made at the OS level, fully supported by the OS supplier. Sun Identity Manager already had a self care portal, combined with a user workflow so, all the clearances had to be given before the account was created or updated. Adding a new system as a client for these authentication services was simply a new checkbox in the OS installer and, even True64 systems were, for the first time integrated also with a 5 minute work of a junior system admin. True, all the windows clients and MS apps still went to the AD for their authentication needs so, from the start everybody knew that they weren't 100% free of migration pains but, now they had a single point of problems to look at. If you're looking for numbers: - 500K directory entries (users) - 2-300 systems After the initial setup, I personally integrated about 20 systems / apps against LDAP in 1 day while being watched by the different IT teams. The internal IT staff did the rest. DP: 2. Using Federation allows the business to keep options open for buying and selling companies, and yet maintain a single view for both employee and customer. What do you mean by this? Can you give an example? JC: The market is dynamic. The company that's being bought today tomorrow will be sold again. Companies that spread on different markets may see the regulator forcing a sale of part of a company due to monopoly reasons and companies that are in multiple countries have to comply with different legislations. Our job, as IT architects, while addressing the customers and employees authentication services, is quite hard and, quite contrary. On one hand, we need to give access to all of our employees to the relevant systems, apps and resources and, we already have marketing talking with us trying to find out who's a customer of the bough company but not from ours to address. On the other hand, we have to do that and keep in mind we may have to break up all that effort and that different countries legislation may became a problem with a full integration plan. That's a job for user Federation. you don't want to be the one who's telling your President that he will sell that business unit without it's customer's database (making the deal worth a lot less) or that the buyer will take with him a copy of your entire customer's database. Federation enables you to start controlling permissions to users outside of your traditional authentication realm. So what if the people of that company you just bought are keeping their old logins? 
Do you want, because of that, to have a dedicated system for their expenses reports? And do you want to keep their sales (and pre-sales) people out of the loop in terms of your group's path? Control the information flow, establish a Federation trust circle and give access to your apps to users that haven't (yet?) been brought into your internal login systems. You can still see your users in a unified view, you obviously control if a user has access to any particular application, either that user is in your local database or stored in a directory on the other side of the world. DP: 3. Cut response times of audit and security teams to 1/10. Is this a real number? Can you give an example? JC: No, I don't have any backing for this number. One of the companies I did system Administration for has a SOX compliance policy in place (I remind you that I live in Portugal so, this definition of SOX may be somewhat different from what you're used to) and, every time the audit team says they'll do another audit, we have to negotiate with them the size of the sample and we spend about 15 man/days gathering all the required info they ask. I did some work with Sun's Identity auditor and, from what I've been seeing, Oracle's product is even better and, I've seen that most of the information they ask would have been provided in a few hours with the help of this tool. I do stand by what I said here but, to be honest, someone from Identity Auditor team would do a much better job than me explaining this time savings. Jaime is right: the Oracle IDM products have a lot of business value, and Oracle IT is using them for a lot more than I was able to cover in the short podcast that I posted. I want to thank Jaime for his comments and perspective. We want these blog posts to be informative and honest – so if you have feedback for the Oracle IDM team on any topic discussed here, please post your comments below.

    Read the article

  • Do you think natively compiled languages have reached their EOL?

    - by Yuval A
    If we look at the major programming languages in use today it is pretty noticeable that the vast majority of them are, in fact, interpreted. Looking at the largest piece of the pie we have Java and C# which are both enterprise-ready, heavy-duty, serious programming languages which are basically compiled to byte-code only to be interpreted by their respective VMs (the JVM and the CLR). If we look at scripting languages, we have Perl, Python, Ruby and Lua which are all interpreted (either from code or from bytecode - and yes, it should be noted that they are absolutely not the same). Looking at compiled languages we have C which is nowadays used in embedded and low-level, real-time environments, and C++ which is still alive and kicking, when you want to get down to serious programming as close to the hardware as you can, but still have some nice abstractions to help you with day to day tasks. Basically, there is no real runner-up compiled language in the distance. Do you feel that languages which are natively compiled to executable, binary code are a thing of the past, taken over by interpreted languages which are much more portable and compatible? Does C++ mark an end of an era? Why don't we see any new compiled languages anymore? I think I should clarify: I do not want this to turn into a "which language is better" discussion, because that is not the issue at hand. The languages I gave as example are only examples. Please focus on the question I raised, and if you disagree with my statement that compiled languages are less frequent these days, that is totally fine, I am more than happy to be proved mistaken.

    Read the article

  • Good resources for learning Rails?

    - by Bobby Tables
    I just finished working through Peter Cooper's "Beginning Ruby". So now I've got a reasonable grounding in the Ruby language and would like to move onto learning Rails. This question's answers give some good pointers, but I'd like to hear some specific reviews of books and online materials. I generally learn best by working through books with good practical/technical examples AND some passive reading content that breaks up the study between practical and reading sessions (this is what made "Beginning Ruby" great for me), but I'm worried that RoR is evolving fast and that any printed book I order might be obsolete by the time I get it and work through it. Is this a fair worry? Or can anyone recommend a good Rails 3 book that should be up to date at least for the next year or so? Also, I had a brief look at some of the online resources from the other questions, and Rails for Zombies seems to get a lot of praise. Has anyone here actually used it as their introductory guide to Rails? Basically I'd like to hear first-hand accounts of people who went through this "Ruby-to-Rails" learning phase recently and which materials were useful to you.

    Read the article

  • Open source vs commercial game engines

    - by Vanangamudi
    How do commercial games accomplish stunning graphics with smooth game play? I am a huge die-hard fan and follower of GNU, Stallman and his philosophies and other Libre people. C'mon, how would I miss Linus. But I've got to admit commercial games do an excellent job. One such good example is Assassin's Creed from Ubisoft. It has good quality graphics and plays smoothly on my dual-core CPU with an Nvidia GeForce 8400ES. Rockstar's GTA4 has awesome graphics but it's slower than AC considering the graphics quality tradeoff. Age of Empires from Ensemble Studios, on the other hand, includes massive crowd AI simulation, yet it plays so smoothly with eye-candy graphics, very large weapon sets and different tech-tree elements. Open source games like Glest and 0 A.D. (still in alpha :) are not so smooth even though they have very restricted abilities. Coming to the question: how do game companies achieve such optimizations? Or is the open source community not doing optimizations, or are there proprietary technological elements that benefit only the companies, for example OpenSubDiv from Pixar, which was only just released open to the community? Something like that. And why is it hard to implement optimizations? Are there any legal restrictions?

    Read the article

  • Management Reporter Installation – Lessons Learned Part II - Dynamics GP

    - by Ryan McBee
    After feeling pretty good about my deployment skills of Management Reporter for Dynamics GP a few weeks ago, I ran into two additional lessons learned that I wanted to share. First, on another new deployment, I got the error shown below which says “An error occurred while creating the database.  View the installation log for additional information.”  This problem initially pointed me to KB 2406948 which did not provide a resolution. After several hours of troubleshooting, I found there is an issue if the default database locations in SQL Server are set to the root of a drive. You will want to set the default to something like the following to get it installed: C:\Program Files\Microsoft SQL Server\MSSQL10.MSSQLSERVER\MSSQL\DATA.  My default database locations for the data and log files were indeed sitting on the H:\ and I:\ drives. To change this property in your SQL Server instance you need to open SQL Server Management Studio, right-click on the server, and choose Properties and then Database Settings. When I initially got the error, I briefly considered creating the ManagementReporter database by hand, but experience tells me that would have created more headaches down the road. The second problem I ran into with this particular deployment of Management Reporter happened when I started the FRx conversion utility.  The error reads “The ‘Microsoft.ACE.OLEDB.12.0’ provider is not registered on the local machine.” I had a suspicion that this error was related to the fact FRx uses outdated technology and I happened to be on a new install of Server 2008 R2.  A knowledge base search quickly pointed me to KB 2102486. The resolution for this Management Reporter issue was to install the Microsoft Access Database Engine Redistributable, by following the link below. http://www.microsoft.com/downloads/details.aspx?familyid=C06B8369-60DD-4B64-A44B-84B371EDE16D&displaylang=en

    Read the article

  • Tips for debugging Samba performance?

    - by j-g-faustus
    Samba gives me 24 MB/s read and 44 MB/s write, while ftp gives 97 and 112 MB/s under the same circumstances. The documentation says that Generally, you should find that Samba performs similarly to ftp at raw transfer speed. In my case it clearly doesn't. Where can I find tips on how to debug Samba performance? Or alternatively tips for replacing Samba with something else? (I can't use ftp, unfortunately, as I need something that can be used with rsync/rsnapshot.) More details: Both computers are running Ubuntu 10.10 (using Samba because I have a Mac as well) The Samba share is on a local home network, mounted as $ mount ... //server.local/share/ on /mnt/share type cifs (rw,mand) Samba performance was tested by copying (cp) a single file of ~4GB to and from the share, using time for timing and calculating transfer speed by hand. ftp performance are the numbers from the ftp client for get/put of the same file. iperf gives network speed ~900 Mbits/s bonnie++ gives disk speeds 200 MB/s on both sides for block reads as well as block writes Tried changing the parameters suggested in the performance tuning HOWTO (read/write raw, read size, socket options), most of them made little to no difference. (The one that made a difference caused write speed to drop 50%.)

    Read the article

  • C# Adds Optional and Named Arguments

    Earlier this month Microsoft released Visual Studio 2010, the .NET Framework 4.0 (which includes ASP.NET 4.0), and new versions of their core programming languages: C# 4.0 and Visual Basic 10. In designing the latest versions of C# and VB, Microsoft has worked to bring the two languages into closer parity. Certain features available in C# were missing in VB, and vice versa. Last week I wrote about Visual Basic 2010's language enhancements, which include implicit line continuation, auto-implemented properties, and collection initializers - three useful features that were available in previous versions of C#. Similarly, C# 4.0 introduces new features to the C# programming language that were available in earlier versions of Visual Basic, namely optional arguments and named arguments. Optional arguments allow developers to specify default values for one or more arguments to a method. When calling such a method, these optional arguments may be omitted, in which case their default value is used. In a nutshell, optional arguments allow for a more terse syntax for method overloading. Named arguments, on the other hand, improve readability by allowing developers to indicate the name of an argument (along with its value) when calling a method. This article examines how to use optional arguments and named arguments in C# 4.0. Read on to learn more! Read More >
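    A quick illustration of both features (the method and values below are made up for the example; C# 4.0 or later):

      using System;

      class Demo
      {
          // 'subject' and 'highPriority' are optional arguments with default values.
          static void Send(string to, string subject = "(no subject)", bool highPriority = false)
          {
              Console.WriteLine("{0}: {1} (high priority: {2})", to, subject, highPriority);
          }

          static void Main()
          {
              Send("alice");                     // both optional arguments fall back to their defaults
              Send("alice", "Build failed");     // positional override of 'subject' only
              Send("alice", highPriority: true); // named argument lets us skip 'subject' entirely
          }
      }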

    Read the article

  • Web Application: Combining View Layer Between PHP and Javascript-AJAX

    - by wlz
    I'm developing a web application using PHP with the CodeIgniter MVC framework, with huge real-time client-side functionality needs. This is my first time building a large-scale client-side app, so I am combining PHP with a large set of Javascript modules in one project. As you already know, an MVC framework separates application modules into Model-View-Controller. My concern is about the View layer. I could display the data in the DOM with PHP's built-in script tags by loading some data in the Controller. Otherwise I could use AJAX to pull the data -- treating the Controller like a service only -- and display it with Javascript. Here is some visualization. I could put the data directly from the Controller: <label>Username</label> <input type="text" id="username" value="<?=$userData['username'];?>"><br /> <label>Date of birth</label> <input type="text" id="dob" value="<?=$userData['dob'];?>"><br /> <label>Address</label> <input type="text" id="address" value="<?=$userData['address'];?>"> Or pull them using AJAX: $.ajax({ type: "POST", url: config.indexURL + "user", dataType: "json", success: function(data) { $('#username').val(data.username); $('#dateOfBirth').val(data.dob); $('#address').val(data.address); } }); So, which approach is better given that my application has complex client-side functionality? On the other hand, PHP-CI has a default mechanism to put the data directly from the Controller, so why use AJAX?

    Read the article

  • null pointers vs. Null Object Pattern

    - by GlenH7
    Attribution: This grew out of a related P.SE question My background is in C / C++, but I have worked a fair amount in Java and am currently coding C#. Because of my C background, checking passed and returned pointers is second nature, but I acknowledge it biases my point of view. I recently saw mention of the Null Object Pattern where the idea is that an object is always returned. The normal case returns the expected, populated object and the error case returns an empty object instead of a null pointer. The premise being that the calling function will always have some sort of object to access and therefore avoids null access memory violations. So what are the pros / cons of a null check versus using the Null Object Pattern? I can see cleaner calling code with the NOP, but I can also see where it would create hidden failures that don't otherwise get raised. I would rather have my application fail hard (aka an exception) while I'm developing it than have a silent mistake escape into the wild. Can't the Null Object Pattern have similar problems as not performing a null check? Many of the objects I have worked with hold objects or containers of their own. It seems like I would have to have a special case to guarantee all of the main object's containers had empty objects of their own. Seems like this could get ugly with multiple layers of nesting.
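    As a rough C# sketch of the contrast being discussed (the types below are invented for illustration):

      public interface ICustomer
      {
          string Name { get; }
          decimal CreditLimit { get; }
      }

      // Null Object: a do-nothing stand-in with "safe" defaults.
      public sealed class NullCustomer : ICustomer
      {
          public static readonly NullCustomer Instance = new NullCustomer();
          public string Name { get { return "(unknown)"; } }
          public decimal CreditLimit { get { return 0m; } }
      }

      public class CustomerRepository
      {
          // Null Object style: callers always get an object, but a failed lookup is silent.
          public ICustomer FindOrNullObject(int id)
          {
              return Lookup(id) ?? NullCustomer.Instance;
          }

          // Null-check style: callers must test for null; a forgotten test fails fast
          // with a NullReferenceException at the point of use.
          public ICustomer FindOrNull(int id)
          {
              return Lookup(id);
          }

          private ICustomer Lookup(int id)
          {
              return null; // data access elided for the sketch
          }
      }

    The Null Object version keeps the calling code clean, but its "safe" defaults are exactly the silent failures the question worries about; the null-returning version pushes the decision to the caller and surfaces a forgotten check immediately.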

    Read the article

  • Thumbs Up or Thumbs Down – Intel Debuts Prototype Palm-Reading Tech to Replace Passwords [Poll]

    - by Asian Angel
    This week Intel debuted prototype palm-reading tech that could serve as a replacement for our current password system. Our question for you today is do you think this is the right direction to go for better security or do you feel this is a mistake? Photo courtesy of Jane Rahman. Needless to say password security breaches have been a hot topic as of late, so perhaps a whole new security model is in order. It would definitely eliminate the need to remember a large volume of passwords along with circumventing the problem of poor password creation/selection. At the same time the new technology would still be in the ‘early stages’ of development and may not work as well as people would like. Long-term refinement would definitely improve its performance, but would it really be worth pursuing versus the actual benefits? From the blog post: Intel researcher Sridhar Iyendar demonstrated the technology at Intel’s Developer Forum this week. Waving a hand in front of a “palm vein” detector on a computer, one of Iyendar’s assistants was logged into Windows 7, was able to view his bank account, and then once he moved away the computer locked Windows and went into sleeping mode.

    Read the article

  • Code Reuse is (Damn) Hard

    - by James Michael Hare
    Being a development team lead, the task of interviewing new candidates was part of my job.  Like any typical interview, we started with some easy questions to get them warmed up and help calm their nerves before hitting the hard stuff. One of those easier questions was almost always: “Name some benefits of object-oriented development.”  Nearly every time, the candidate would chime in with a plethora of canned answers which typically included: “it helps ease code reuse.”  Of course, this is a gross oversimplification.  Tools only ease reuse, its developers that ultimately can cause code to be reusable or not, regardless of the language or methodology. But it did get me thinking…  we always used to say that as part of our mantra as to why Object-Oriented Programming was so great.  With polymorphism, inheritance, encapsulation, etc. we in essence set up the concepts to help facilitate reuse as much as possible.  And yes, as a developer now of many years, I unquestionably held that belief for ages before it really struck me how my views on reuse have jaded over the years.  In fact, in many ways Agile rightly eschews reuse as taking a backseat to developing what's needed for the here and now.  It used to be I was in complete opposition to that view, but more and more I've come to see the logic in it.  Too many times I've seen developers (myself included) get lost in design paralysis trying to come up with the perfect abstraction that would stand all time.  Nearly without fail, all of these pieces of code become obsolete in a matter of months or years. It’s not that I don’t like reuse – it’s just that reuse is hard.  In fact, reuse is DAMN hard.  Many times it is just a distraction that eats up architect and developer time, and worse yet can be counter-productive and force wrong decisions.  Now don’t get me wrong, I love the idea of reusable code when it makes sense.  These are in the few cases where you are designing something that is inherently reusable.  The problem is, most business-class code is inherently unfit for reuse! Furthermore, the code that is reusable will often fail to be reused if you don’t have the proper framework in place for effective reuse that includes standardized versioning, building, releasing, and documenting the components.  That should always be standard across the board when promoting reusable code.  All of this is hard, and it should only be done when you have code that is truly reusable or you will be exerting a large amount of development effort for very little bang for your buck. But my goal here is not to get into how to reuse (that is a topic unto itself) but what should be reused.  First, let’s look at an extension method.  There’s many times where I want to kick off a thread to handle a task, then when I want to reign that thread in of course I want to do a Join on it.  But what if I only want to wait a limited amount of time and then Abort?  Well, I could of course write that logic out by hand each time, but it seemed like a great extension method: 1: public static class ThreadExtensions 2: { 3: public static bool JoinOrAbort(this Thread thread, TimeSpan timeToWait) 4: { 5: bool isJoined = false; 6:  7: if (thread != null) 8: { 9: isJoined = thread.Join(timeToWait); 10:  11: if (!isJoined) 12: { 13: thread.Abort(); 14: } 15: } 16: return isJoined; 17: } 18: } 19:  When I look at this code, I can immediately see things that jump out at me as reasons why this code is very reusable.  
    When I look at this extension method, I can immediately see things that jump out at me as reasons why it is very reusable.  Some of them are standard OO principles, and some are kind-of home-grown litmus tests:

    - Single Responsibility Principle (SRP) – The only reason this extension method need change is if the Thread class itself changes (one responsibility).
    - Stable Dependencies Principle (SDP) – This method only depends on classes that are more stable than it is (System.Threading.Thread), and is itself very stable, hence other classes may safely depend on it.  It is also not dependent on any business domain, and thus isn't subject to changes as the business itself changes.
    - Open-Closed Principle (OCP) – This class is inherently closed to change.
    - Small and Stable Problem Domain – This method only cares about System.Threading.Thread.
    - All-or-None Usage – A user of a reusable class should want the functionality of that class, not parts of that functionality.  That’s not to say they must use every method, but they shouldn’t be using a method just to get half of its result.
    - Cost of Reuse vs. Cost to Recreate – Since this class is highly stable and minimally complex, we can offer it up for reuse very cheaply by promoting it as “ready-to-go”, already unit tested (important!), and available through a standard release cycle (very important!).

    Okay, all seems good there; now let’s look at an entity and DAO.  I don’t know about you all, but there have been times I’ve been in organizations that get the grand idea that all DAOs and entities should be standardized and shared.  While this may work for small or static organizations, it’s near ludicrous for anything large or volatile.

        namespace Shared.Entities
        {
            public class Account
            {
                public int Id { get; set; }

                public string Name { get; set; }

                public Address HomeAddress { get; set; }

                public int Age { get; set; }

                public DateTime LastUsed { get; set; }

                // etc, etc, etc...
            }
        }

        ...

        namespace Shared.DataAccess
        {
            public class AccountDao
            {
                public Account FindAccount(int id)
                {
                    // dao logic to query and return account
                }

                ...
            }
        }

    Now, to be fair, I’m not saying there doesn’t exist an organization where some entities may be extremely static and unchanging.  But at best such entities and DAOs will be problematic cases of reuse.  Let’s examine those same tests:

    - Single Responsibility Principle (SRP) – The reasons to change for these classes will be strongly dependent on the definition of an account, which can change over time and may have multiple influences depending on the number of systems an account can cover.
    - Stable Dependencies Principle (SDP) – These classes depend on the data model beneath them, which in turn is largely dependent on the business definition of an account, which can be inherently very unstable.
    - Open-Closed Principle (OCP) – This class is not really closed for modification.  Every time the account definition changes, you’d need to modify this class.
    - Small and Stable Problem Domain – The definition of an account is inherently unstable and in fact may be very large.  What if you are designing a system that aggregates account information from several sources?
    - All-or-None Usage – What if your view of the account encompasses data from 3 different sources but you only care about one of those sources or one piece of data?  Should you have to take the hit of looking up all the other data?  On the other hand, should you have ten different methods returning portions of data in chunks people tend to ask for?  Neither is really a great solution.
    - Cost of Reuse vs. Cost to Recreate – DAOs are really trivial to rewrite, and unless your definition of an account is EXTREMELY stable, the cost to promote, support, and release a reusable account entity and DAO is usually far higher than the cost to recreate them as needed.

    It’s no accident that my case for reuse was a utility class and my case for non-reuse was an entity/DAO.  In general, the smaller and more stable an abstraction is, the higher its level of reuse.  When I became the lead of the Shared Components Committee at my workplace, one of the original goals we looked at satisfying was to find (or create), version, release, and promote a shared library of common utility classes, frameworks, and data access objects.  Now, of course, many of you will point to nHibernate and Entity Framework for the latter, but we were looking at larger, macro collections of data that span multiple data sources of varying types (databases, web services, etc.).

    As we got deeper and deeper into the details of how to manage and release these items, it quickly became apparent that while the case for reuse was typically a slam dunk for utilities and frameworks, the data access objects just didn’t “smell” right.  We ended up having session after session of design meetings to try and find the right way to share these data access components.  When someone asked me why it was taking so long to iron out the shared entities, my response was quite simple: “Reuse is hard...”  And that’s when I realized that while reuse is an awesome goal, and we should strive to make code maintainable, oftentimes you end up creating far more work for yourself than necessary by trying to force code to be reusable that inherently isn’t.

    Think about the times you’ve worked at a company where, in design sessions, people fight over the best way to implement a class to make it maximally reusable, extensible, and any other buzzword-able.  Then think about how quickly that design became obsolete.  Many times I set out to do a project and think, “Yes, this is the best design, I can extend it easily!” only to find out the business requirements change COMPLETELY in such a way that the design is rendered invalid.  Code, in general, tends to rust and age over time.  As such, writing reusable code can often be difficult, many times ends up being a futile exercise, and worse yet sometimes makes the code harder to maintain because it obfuscates the design in the name of extensibility or reusability.

    So what do I think are reusable components?

    - Generic Utility classes – These tend to be small classes that assist in a task and have no business context whatsoever.
    - Implementation Abstraction Frameworks – Home-grown frameworks that try to isolate changes to third-party products you may be depending on (like writing a messaging abstraction layer for publishing/subscribing that is independent of whether you use JMS, MSMQ, etc.).  A small sketch of this idea appears at the end of this post.
    - Simplification and Uniformity Frameworks – To some extent this is similar to an abstraction framework, but there may be one chosen provider and a development-shop mandate to perform certain complex tasks in a certain way.  Or, perhaps, to simplify and dumb down a complex task for the average developer (such as implementing a particular development shop’s method of encryption).

    And what are less reusable?

    - Application and Business Layers – These tend to fluctuate a lot as requirements change and new features are added, so they tend to be an unstable dependency.  They may be reused across applications but are also very volatile.
    - Entities and Data Access Layers – These tend to be tuned to the scope of the application, so reusing them can be hard unless the abstraction is very stable.

    So what’s the big lesson?  Reuse is hard.  In fact, it’s damn hard.  And much of the time I’m not convinced we should focus too hard on it.  If you’re designing a utility or framework, then by all means design it for reuse.  But you must also set down a good versioning, release, and documentation process to maximize your chances.  For anything else, design it to be maintainable and extendable, but don’t waste the effort on reusability for something that most likely will be obsolete in a year or two anyway.
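    To make the “Implementation Abstraction Framework” idea above a little more concrete, here is a minimal, hypothetical sketch of a messaging abstraction layer along those lines.  The IMessagePublisher interface and the MsmqPublisher class name are illustrative assumptions, not anything from the post itself:

        using System;

        namespace Shared.Messaging
        {
            // Application code depends only on this small, business-agnostic abstraction...
            public interface IMessagePublisher
            {
                void Publish(string topic, string message);
            }

            // ...while the concrete transport (MSMQ, JMS, a message broker, etc.) hides
            // behind it and can be swapped without touching any business code.
            public class MsmqPublisher : IMessagePublisher
            {
                public void Publish(string topic, string message)
                {
                    // Transport-specific send logic would go here; stubbed out for the sketch.
                    Console.WriteLine($"[MSMQ] {topic}: {message}");
                }
            }
        }

    The point of the sketch is that the interface is small, carries no business context, and is more stable than any of the transports beneath it, which is exactly why it passes the litmus tests above while the shared Account entity and DAO do not.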

    Read the article

  • Programming as a minor

    - by Tomas Cokis
    Hello Everyone! I've never asked a question here at programmers, and for reasons which will become obvious later I've never answered one here, but I do poke around in short bursts.  Anyway, I'm 15 right now, and I've been programming in C++ for 4 years, just working on my own projects that aim so high they never get finished.  I've been working on a single project for the last year, and every 3 months I add a new system into it.  It might be a value-tabling, directory-enabled log system, or a render system, or a class to load up xml files; whatever it is, I don't mind too much that the overall project (a 3d engine) isn't ever going to get finished, I just get some satisfaction from getting what I have done building and running.

    I don't know what I want to do when I grow up, although I suspect I'll go into some form of engineering, but I was interested in knowing, if I do choose to go into a career as a developer, what kind of material I could look at to push myself up and get myself experience that might help my career later.  I'm not talking about books in particular; I'm more interested in subject areas that will get me access to good job opportunities, or that will give me a hand-up if I do computer science and software-related courses at uni.  One of the things I was thinking of doing was designing some of the logic gate components of a small computer, which I started briefly over the holidays, working out integer addition, subtraction and multiplication.  That kind of stuff interests me, but is it really useful, or more useful than just more programming?

    But anyway, any advice?  Should I continue on my perpetual 3d engine?  Are there any other projects or particular accomplishments that would help my education?  Perhaps I should mention that I live in Perth, Australia, so local software companies are likely to be scarcer than usual.
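    As an aside on the logic-gate exercise mentioned above: working out integer addition from gates usually comes down to chaining full adders (sum = a XOR b XOR carry-in; carry-out = (a AND b) OR (carry-in AND (a XOR b))).  A minimal sketch of that idea follows, written in C# only to keep the examples on this page in one language, and purely illustrative rather than anything from the original question:

        using System;

        public static class GateAdder
        {
            // One full adder built from boolean "gates".
            public static (bool Sum, bool CarryOut) FullAdder(bool a, bool b, bool carryIn)
            {
                bool axb = a ^ b;                                     // XOR gate
                return (axb ^ carryIn, (a && b) || (carryIn && axb)); // sum, carry-out
            }

            // Ripple-carry addition of two 8-bit values, one bit at a time.
            public static byte Add(byte x, byte y)
            {
                byte result = 0;
                bool carry = false;
                for (int i = 0; i < 8; i++)
                {
                    bool a = ((x >> i) & 1) == 1;
                    bool b = ((y >> i) & 1) == 1;
                    var (sum, carryOut) = FullAdder(a, b, carry);
                    carry = carryOut;
                    if (sum)
                    {
                        result |= (byte)(1 << i);
                    }
                }
                return result; // any final carry is discarded, as in ordinary byte overflow
            }

            public static void Main()
            {
                Console.WriteLine(Add(100, 55)); // prints 155
            }
        }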

    Read the article
