Search Results

Search found 7580 results on 304 pages for 'coordinate systems'.

Page 78/304 | < Previous Page | 74 75 76 77 78 79 80 81 82 83 84 85  | Next Page >

  • EMEA OTN Virtual Technology Summit - Hands-On Learning

    - by Thanos Terentes Printzios
    The Oracle Technology Network (OTN) is excited to invite you to our first Virtual Technology Summit. EMEA – Thursday July 10th, 9am to 1pm BST / 10am to 2pm CET / 12pm to 4pm MSK/GST – Register Now. Learn first hand from Oracle ACEs, Java Champions, and Oracle product experts as they share their insight and expertise on using Oracle technologies to meet today's IT challenges. This interactive, online event offers four technical tracks, each with a unique focus on specific tools, technologies, and tips in these focus areas:

    - Java – Big Trends and Technologies – Java lets you mine Big Data, build robust apps with HTML5, JavaScript and Java EE, and expand into the Internet of Things. Experts will present and you'll be able to chat with them live online. Don't miss out on this great opportunity to learn from some of the best minds in the Java community.
    - Systems – OS Tips and Tricks for Sysadmins – Learn first hand how to configure Oracle Linux to run Oracle Database 11g and 12c, how to use the latest networking capabilities in Oracle Solaris 11, and how to troubleshoot networking problems in Unix and Linux systems.
    - Database – Mastering Oracle Database Management & Development Techniques – Experts will present advanced features and management methods that will help you master your Oracle Database capabilities and drive greater performance, agility and manageability of your IT implementation. This track will build upon your skills with data management, migration, and performance.
    - Middleware – The Architecture of Analytics: Big Time Big Data and Business Intelligence – This track will present a solution architect's perspective on how business intelligence products in Oracle's Fusion Middleware family and beyond fit into an effective big data architecture, and present insight and expertise from Oracle ACEs specializing in business intelligence to help you meet your big data business intelligence challenges.

    This same content is being offered on 3 different dates, at times convenient for all regions:

    - Americas – Wednesday July 9th: 9am to 1pm PST / 12pm to 4pm EST / 1pm to 5pm BRT – Register
    - EMEA – Thursday July 10th: 9am to 1pm BST / 10am to 2pm CET / 12pm to 4pm MSK/GST – Register
    - APAC (English) – July 16th: 10:00am IST / 12:30pm SG / 2:30pm AEST – Register

    The full event agenda is available at https://wikis.oracle.com/display/OTNVirtualTechSummit/Home

    Read the article

  • CA For A Large Intranet

    - by Tim Post
    I'm managing what has become a very large intranet (over 100 different hosts/services) and will be stepping down from my role in the near future. I want to make things easy for the next victim who takes my place. All hosts are secured via SSL. This includes various portals, wikis, data entry systems, HR systems and other sensitive things. We're using self-signed certificates, which worked OK in the past, but are now problematic because:

    - Browsers make it harder for users to understand exactly what is going on when a self-signed certificate is encountered, much less accept one.
    - Putting up a new host means 100 phone calls asking what "Add an exception" means.

    What we were doing is just importing the self-signed certs when we set up a new workstation. This was fine when we only had a dozen to deal with, but now it's just overwhelming. Our I.T. department has classified this as our problem; all we get from them is support for switch and router configurations. Beyond the user having connectivity, everything else is up to the intranet administrators. We have a mix of Ubuntu and Windows workstations. We'd like to set up our own self-signed CA root, which can sign certificates for each host that we deploy on the intranet. Client browsers would of course be told to trust our CA. My question is: would this be dangerous, and would we be better off going with intermediate certificates from someone like Verisign? Either way, I still have to import the root for the intermediate CA, so I don't really see what the difference is. Other than charging us money, what would Verisign be doing that we could not, beyond protecting the root CA cert so it can't be used to make forgeries?
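
    For reference, a minimal sketch of the "own root CA" idea in Go, using the standard crypto/x509 package. The subject name, key size and validity period are placeholders; in practice the same thing is commonly done with openssl or an internal PKI tool:

    ```go
    // Minimal sketch: generate a self-signed root CA key and certificate.
    // Adapt the subject, key size and validity to your own policy, and keep
    // the private key locked down.
    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "math/big"
        "os"
        "time"
    )

    func main() {
        // Key pair for the root CA.
        key, err := rsa.GenerateKey(rand.Reader, 4096)
        if err != nil {
            panic(err)
        }

        // Self-signed CA certificate template.
        tmpl := x509.Certificate{
            SerialNumber:          big.NewInt(1),
            Subject:               pkix.Name{CommonName: "Example Intranet Root CA"},
            NotBefore:             time.Now(),
            NotAfter:              time.Now().AddDate(10, 0, 0),
            IsCA:                  true,
            BasicConstraintsValid: true,
            KeyUsage:              x509.KeyUsageCertSign | x509.KeyUsageCRLSign,
        }

        // Sign the template with its own key (self-signed).
        der, err := x509.CreateCertificate(rand.Reader, &tmpl, &tmpl, &key.PublicKey, key)
        if err != nil {
            panic(err)
        }

        // Write the PEM certificate that client browsers would import as trusted.
        out, err := os.Create("intranet-root-ca.pem")
        if err != nil {
            panic(err)
        }
        defer out.Close()
        pem.Encode(out, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }
    ```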

    Read the article

  • Code base migration - old versioning system to modern

    - by JohnP
    Our current code base is contained in a versioning system that is old and outdated (Visual SourceSafe 5.0, mid-1990s), and contains a mix of packages that are no longer used, ones that are being used but no longer updated, and newer code. It is also a mix of 4 languages, and includes libraries for some of our system implementations (such as Dialogic and Sun Tzu {Clipper}). This breaks down into the following categories:

    - Legacy code – no longer used (systems that have been retired or replaced, etc.)
    - Legacy code – in current use (no intention of upgrades or minor bug fixes, only major fixes if needed)
    - Current code – in current use, and will be used for future versions/development
    - Support libraries – for both legacy and current code (some of the legacy libraries are no longer available as well)

    We would like to migrate this to a newer versioning system as we will be adding more developers and expanding the reach to include remote programmers. When migrating, how do you structure it? Do you just perform a dump of all the data and then import it into the new system, or do you segregate according to type before you bring it into the new system? Do you set up a separate area for libraries, or keep them with the relevant packages? Do you separate by language, system, or both? A general outline and methodology is fine; it doesn't need to be broken down to the individual program level.

    Read the article

  • Sound not working after installing PCI video card and then removing it

    - by bakhshu
    I am running 11.10 on a HP/Compaq Presario sr1010z, and the video/audio was working fine with whatever was in the machine already. Then I installed a new video card (PCI/nVidia), which disabled the audio/video on the old one automatically. But that card didn't work out too well so I removed it. Now the video is back to normal, but the audio is gone. I have tried the following:

    1. In BIOS, set audio to on/enable rather than Auto.
    2. Looked for hardware in System Settings > Sound, but nothing shows up there.
    3. When I run the sysinfo utility, I do get the description of the audio controller.

    When I do a 'sudo aplay -l', I get 'aplay: device_list:240: no soundcards found...'. And when I do a 'lspci -v | grep -A7 -i "audio"' I get the following:

    00:02.7 Multimedia audio controller: Silicon Integrated Systems [SiS] SiS7012 AC'97 Sound Controller (rev a0)
    Subsystem: Hewlett-Packard Company Device 2a05
    Flags: bus master, medium devsel, latency 32, IRQ 5
    I/O ports at a000 [size=256]
    I/O ports at a400 [size=128]
    Capabilities:
    00:03.0 USB Controller: Silicon Integrated Systems [SiS] USB 1.1 Controller (rev 0f) (prog-if 10 [OHCI])

    Any ideas on how to fix this? Thanks

    Read the article

  • Any good reason to open files in text mode?

    - by Tinctorius
    (Almost-)POSIX-compliant operating systems and Windows are known to distinguish between 'binary mode' and 'text mode' file I/O. While the former mode doesn't transform any data between the actual file or stream and the application, the latter 'translates' the contents to some standard format in a platform-specific manner: line endings are transparently translated to '\n' in C, and some platforms (CP/M, DOS and Windows) cut off a file when a byte with value 0x1A is found. These transformations seem a little useless to me. People share files between computers with different operating systems. Text mode would cause some data to be handled differently across some platforms, so when this matters, one would probably use binary mode instead. As an example: while Windows uses the sequence CR LF to end a line in text mode, UNIX text mode will not treat CR as part of the line-ending sequence, so applications have to filter that noise themselves. Older Mac versions use only CR as the line ending in text mode, so neither UNIX nor Windows would understand its files. If this matters, a portable application would probably implement the parsing itself instead of using text mode. Implementing newline interpretation in the parser might also remove the overhead of text mode: buffers have to be rewritten (and possibly resized) before being returned to the application, which may well be less efficient than doing the translation in the application. So, my question is: is there any good reason to still rely on the host OS to translate line endings and truncate files?
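
    As an aside, this is what "the application filters the noise itself" looks like in practice. The sketch below is in Go, which has no text mode at all; its standard line scanner simply drops an optional trailing CR, so CRLF and LF input behave identically without any OS translation:

    ```go
    // Minimal sketch: read a file as plain bytes and normalize line endings
    // in the application instead of relying on an OS text mode.
    package main

    import (
        "bufio"
        "fmt"
        "os"
    )

    func main() {
        f, err := os.Open("input.txt") // opened as raw bytes; no text/binary distinction
        if err != nil {
            panic(err)
        }
        defer f.Close()

        sc := bufio.NewScanner(f)
        for sc.Scan() {
            // The default split function (bufio.ScanLines) strips the trailing
            // "\n" and an optional "\r", so CRLF (Windows) and LF (UNIX) files
            // are handled identically by the application itself.
            fmt.Println(sc.Text())
        }
        if err := sc.Err(); err != nil {
            panic(err)
        }
    }
    ```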

    Read the article

  • How can Agile methodologies be adapted to High Volume processing system development?

    - by luckyluke
    I am developing high volume processing systems: mathematical models that calculate various parameters based on millions of records, calculated derived fields over millions of records, processing huge files of transactions, etc. I am well aware of unit testing methodologies, and if my code is in C# I have no problem unit testing it. The problem is that I often have code in T-SQL, C# code that is a SQL stored assembly, SSIS workflows with a good amount of logic (and outcomes etc.), or some SAS process. What is the approach you use when developing such systems? I usually develop several tests as stored procedures in a dedicated schema (TEST), then run them automatically overnight and check the results. But this is only for T-SQL, and continuous integration is hard. The bigger problem is testing SSIS packages. How do you test them? What is your preferred approach for stubbing data into tables (especially if you need a lot of data initialization)? I have an approach derived over the years, but maybe I am just not reading enough articles. So banking, telecom and risk developers out there: how do you test your mission critical apps that process millions of records at end of day, month end, etc.? What frameworks do you use? How do you validate that your SSIS package is correct (as you develop it)? How do you achieve continuous integration in such an environment (personally I never got there)? How do you test your map-reduce jobs, for example (I do not use Hadoop, but this is quite similar)? I hope this is not too open-ended a question.
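
    One way to picture the stub-data-and-assert pattern described above is a plain integration test against the TEST schema. The sketch below uses Go's database/sql purely as an illustration of the shape of such a test; the driver import, connection string, table names and stored procedure are hypothetical placeholders:

    ```go
    // Sketch only: seed known rows, run the procedure, assert on the output.
    package monthend_test

    import (
        "database/sql"
        "testing"

        _ "github.com/denisenkom/go-mssqldb" // one possible SQL Server driver (assumption)
    )

    func TestMonthEndAggregation(t *testing.T) {
        db, err := sql.Open("sqlserver", "sqlserver://user:pass@dbhost?database=TEST")
        if err != nil {
            t.Fatal(err)
        }
        defer db.Close()

        // 1. Stub a small, known data set into the input table.
        if _, err := db.Exec(`DELETE FROM TEST.Transactions`); err != nil {
            t.Fatal(err)
        }
        if _, err := db.Exec(`INSERT INTO TEST.Transactions (AccountID, Amount) VALUES (1, 100), (1, 250)`); err != nil {
            t.Fatal(err)
        }

        // 2. Run the logic under test (a stored procedure here; an SSIS package
        //    could be launched externally and its output checked the same way).
        if _, err := db.Exec(`EXEC TEST.CalculateMonthEnd @AccountID = 1`); err != nil {
            t.Fatal(err)
        }

        // 3. Assert on the derived output.
        var total float64
        if err := db.QueryRow(`SELECT Total FROM TEST.MonthEnd WHERE AccountID = 1`).Scan(&total); err != nil {
            t.Fatal(err)
        }
        if total != 350 {
            t.Errorf("expected 350, got %v", total)
        }
    }
    ```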

    Read the article

  • Should components have sub-components in a component-based system like Artemis?

    - by Daniel Ingraham
    I am designing a game using Artemis, although this is more of a philosophical question about component-based design in general. Let's say I have non-primitive data which applies to a given component (a component "animal" may have qualities such as "teeth" or "diet"). There are three ways to approach this in data-driven design, as I see it:

    1. Generate classes for these qualities using "traditional" OOP. I imagine this has negative implications for performance, as systems then must be made aware of these qualities in order to process them. It also seems counter to the overall philosophy of data-driven design.
    2. Include these qualities as sub-components. This seems off, in that we are now confusing the role of components with that of entities. Moreover, out of the box Artemis isn't capable of mapping these sub-components onto their parent components.
    3. Add "teeth", "diet", etc. as components to the overall entity alongside "animal". While this feels odd hierarchically, it may simply be a peculiarity of component-based systems.

    I suspect 3 is the correct way to think about things, but I was curious about other ideas.
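
    A minimal sketch of option 3, with "teeth" and "diet" stored as flat components next to "animal" (plain Go for illustration; these are not Artemis types, and all names are made up for the example):

    ```go
    // Option 3: flat components on the entity, keyed by component type.
    package ecs

    type Entity int

    type Animal struct{ Species string }
    type Teeth struct{ Count int }
    type Diet struct{ Kind string } // e.g. "herbivore", "carnivore"

    // A world holds one typed store per component; systems iterate only the
    // stores they care about.
    type World struct {
        Animals map[Entity]Animal
        Teeth   map[Entity]Teeth
        Diets   map[Entity]Diet
    }

    func NewWorld() *World {
        return &World{
            Animals: map[Entity]Animal{},
            Teeth:   map[Entity]Teeth{},
            Diets:   map[Entity]Diet{},
        }
    }

    // A "wolf" is just an entity with three independent components attached.
    func (w *World) NewWolf(e Entity) {
        w.Animals[e] = Animal{Species: "wolf"}
        w.Teeth[e] = Teeth{Count: 42}
        w.Diets[e] = Diet{Kind: "carnivore"}
    }
    ```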

    Read the article

  • How to document requirements for an API systematically?

    - by Heinrich
    I am currently working on a project where I have to analyze the requirements that two given IT systems, which use cloud computing, have for a Cloud API. In other words, I have to analyze what requirements these systems have for a Cloud API such that they would be able to switch to it while still being able to accomplish their current goals. Let me give you an example of some informal requirements of Project A:

    - When starting virtual machines in the cloud through the API, it must be possible to specify the memory size, CPU type, operating system and an SSH key for the root user.
    - It must be possible to monitor the inbound and outbound network traffic per hour per virtual machine.
    - The API must support the assignment of public IPs to a virtual machine and the retrieval of the public IPs.
    - ...

    In a later stage of the project I will analyze some cloud computing standards that standardize cloud APIs to find out where possible shortcomings in the current standards are. A finding could, and probably will, be that a certain standard does not support monitoring resource usage and thus is not currently usable. I am currently trying to find a way to systematically write down and classify my requirements. I feel that the way I currently have them written down (like the three points above) is too informal. I have read a couple of requirements engineering and software architecture books, but they all focus too much on details and implementation. I really only care about the functionality provided through the API/interface, and I don't think UML diagrams etc. are the right choice for me. I think the requirements that I have collected so far can be described as user stories, but is that already enough for a sophisticated requirements analysis? Probably I should go "one level deeper" ... Any advice/learning resources for me?
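
    Purely as an illustration of making such a requirement less informal, one could record each one with an explicit category, actor, capability and measurable parameters, which also makes gap analysis against a standard easier. The sketch below is in Go and every field name in it is invented for this example, not taken from any standard:

    ```go
    // Illustrative structure for classifying API requirements; the values are
    // taken from the post's first informal requirement.
    package requirements

    type Requirement struct {
        ID         string
        Project    string
        Category   string   // e.g. "Provisioning", "Monitoring", "Networking"
        Actor      string   // who exercises the capability
        Capability string   // what the API must allow
        Parameters []string // checkable details
        Priority   string   // e.g. "must", "should", "could"
    }

    var R1 = Requirement{
        ID:         "A-01",
        Project:    "Project A",
        Category:   "Provisioning",
        Actor:      "Deployment tooling",
        Capability: "Start a virtual machine through the API",
        Parameters: []string{"memory size", "CPU type", "operating system", "SSH key for root"},
        Priority:   "must",
    }
    ```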

    Read the article

  • The Start of a Blog

    - by dbradley
    So, here's my new blog up and running. Who am I and what am I planning to write here? First off, here's a little about me: I'm a recent graduate from university (coming up to a year ago since I finished), having studied Software Engineering on a four year course where the third year was an industrial placement. During the industrial placement I went to work for a company called Adfero in a "Technical Consultant" role as well as a junior "Information Systems Developer". Once I completed my placement I went back to complete my final year, but also continued in my developer role 2/3 days a week with the company. Working part time while at uni always seems like a great idea until you get half way through the year. For me the problem was not so much a lack of time, but rather a lack of interest in the course content, having had a chance to work on real projects in a live environment. Most people who have been graduated a little while find this too: looking back, uni work seems much more trivial from a problem-solving point of view, and the key to it is really your ability to show, through how you talk about something, that you comprehensively understand the basics. After completing uni I returned full time to Adfero purely in the developer role, which is where I've now been for almost a year, and I have now also taken on the title of "Information Systems Architect", working on some of the more high level design problems within the products. What I'm wanting to share on this blog is some of the interesting things I've learnt myself over the last year, the things they don't teach you in uni, and pretty much anything else I find interesting! My personal favorite areas are text indexing, search and particularly good software engineering design - good design combined with good code makes the first step towards a well-written, maintainable piece of software. Hopefully I'll also be able to share a few of the products I've worked on, the mistakes I've made and the software problems I've inherited from previous developers and had to heavily re-factor.

    Read the article

  • What's the best way to manage error logging for exceptions?

    - by Peter Boughton
    Introduction: If an error occurs on a website or system, it is of course useful to log it, and show the user a polite message with a reference code for the error. And if you have lots of systems, you don't want this information dotted around - it is good to have a single centralised place for it. At the simplest level, all that's needed is an incrementing id and a serialized dump of the error details. (And possibly the "centralised place" being an email inbox.) At the other end of the spectrum is perhaps a fully normalised database that also allows you to press a button and see a graph of errors per day, or identify what the most common type of error on system X is, whether server A has more database connection errors than server B, and so on. What I'm referring to here is logging code-level errors/exceptions by a remote system - not "human-based" issue tracking, such as done with Jira, Trac, etc.

    Questions: I'm looking for thoughts from developers who have used this type of system, specifically with regards to:

    - What are essential features you couldn't do without?
    - What are good-to-have features that really save you time?
    - What features might seem a good idea, but aren't actually that useful?

    For example, I'd say a "show duplicates" function that identifies multiple occurrences of an error (without worrying about 'unimportant' details that might differ) is pretty essential. A button to "create an issue in [Jira/etc] for this error" sounds like a good time-saver. Just to re-iterate, what I'm after is practical experiences from people that have used such systems, preferably backed up with why a feature is awesome/terrible. (If you're going to theorise anyway, at the very least mark your answer as such.)
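
    As an illustration of the "show duplicates" idea mentioned above, a centralised log can reduce each report to a stable fingerprint so repeated occurrences group under one record. The sketch below is in Go and all names are illustrative rather than any particular product's API:

    ```go
    // Sketch: fingerprint an error by type and code location, ignoring the
    // variable details (timestamps, IDs in the message), so duplicates collapse.
    package errorlog

    import (
        "crypto/sha1"
        "encoding/hex"
        "fmt"
        "time"
    )

    type ErrorReport struct {
        System     string    // which application/server reported it
        Type       string    // e.g. "DatabaseConnectionError"
        Message    string    // full, possibly unique, message text
        StackFrame string    // topmost code location, e.g. "orders.go:112"
        OccurredAt time.Time
    }

    // Fingerprint keys only on system, type and location, not the message,
    // so "the same" error always maps to one group.
    func (e ErrorReport) Fingerprint() string {
        h := sha1.Sum([]byte(fmt.Sprintf("%s|%s|%s", e.System, e.Type, e.StackFrame)))
        return hex.EncodeToString(h[:])
    }
    ```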

    Read the article

  • eSTEP TechCast-Special - October 2012

    - by uwes
    Dear partner, we are pleased to announce our eSTEP TechCast-Special on Thursday 18th of October and would be happy if you could join. Please see below the details for the next TechCast.

    Date and time: Thursday, 18 October 2012, 11:00 - 12:00 BST (12:00 - 13:00 CEST; 14:00 - 15:00 GST)
    Title: Oracle OpenWorld Systems Update
    Abstract: In this special TechCast we will give you a brief update on the news and announcements of Oracle OpenWorld 2012. Special focus will be on announcements around the Systems products and partner-relevant news from Oracle OpenWorld.
    Target audience: Tech Presales
    Speaker: HW Enablement Team
    Call info: Call-in toll-free number: 08006948154 (United Kingdom); Call-in number: +44-2081181001 (United Kingdom); Conference Code: 803 594 3; Security Passcode: 9876
    Webex info (Oracle Web Conference): Meeting Number: 593 893 048; Meeting Password: tech2011

    Playback / Recording / Archive: The webcasts will be recorded and will be available shortly after the event in the eSTEP portal under the Events tab, where you can also find material from already delivered eSTEP TechCasts. Use your email address and PIN: eSTEP_2011 to get access. Feel free to have a look. We are happy to get your comments and feedback.

    Read the article

  • Is the Alternate Ubuntu installer still required for LVM or Software RAID setup?

    - by jimp
    Over the past 5 years, I have been setting up Ubuntu servers using the Alternate installer. I need to provision a new server today, and I'm curious whether the Alternate CD is still the only way to set up LVM/RAID at installation time. In my limited experience with Red Hat Enterprise Linux, I noticed its single installer configures LVM automatically. Has Ubuntu's installer, at least the standard "Server" installer, added support for LVM/RAID, or is the Alternate installer still required for that kind of server setup? From http://mirror.anl.gov/pub/ubuntu-iso/DVDs/ubuntu/12.04.1/release/ : "Alternate install CD - The alternate install CD allows you to perform certain specialist installations of Ubuntu. It provides for the following situations: setting up automated deployments; upgrading from older installations without network access; LVM and/or RAID partitioning; installs on systems with less than about 384MiB of RAM (although note that low-memory systems may not be able to run a full desktop environment reasonably)." LVM has always been fundamental for our server needs, so I'm surprised if it is still not considered a server-worthy feature.

    Read the article

  • How bad would it be to focus on iOS/Android development for an indie developer?

    - by kender
    After some time developing games for others I'm thinking of moving towards my own productions. My background is 10+ years of software development, with the last 2 years spent on iOS development (Objective-C and CoronaSDK). With my current experience in Corona I can quickly develop for iOS and Android systems, and this is something that I'm probably going to do with several of the game ideas I have, at least for the prototype part. But I'm wondering if it's not a bad idea to focus on those 2 systems only. After all there are other mobile platforms, and there are PCs, Macs and Linux boxes - all of them with gamers using them. So I was wondering whether it would be a good idea to try some other SDK, giving me more flexibility when it comes to platform independence. There's Unity3D (I think I can develop a 2D game in it though), and there's MoAI from what I checked. I see a few options, and I'm not sure which one is best as I have little experience in this field (publishing my own games):

    - Stick with CoronaSDK the whole time, release for iOS and Android platforms, and screw other mobile devices and PCs.
    - Use Corona for prototyping, then when the idea moves into the "production" phase, rewrite it in MoAI or Unity3D for broader platform support.
    - Start with one of those 2 SDKs right now (which means the prototype phase will be delayed a bit, but after that I can jump right into real coding).

    Any clues here on what to do?

    Read the article

  • Why should I use MSBuild instead of Visual Studio Solution files?

    - by Sid
    We're using TeamCity for continuous integration and it's building our releases via the solution file (.sln). I've used Makefiles in the past for various systems but never MSBuild (which I've heard is sort of a Makefiles + XML mashup). I've seen many posts on how to use MSBuild directly instead of the solution files, but I don't see a very clear answer on why to do it. So, why should we bother migrating from solution files to an MSBuild 'makefile'? We do have a couple of releases that differ by a #define (featurized builds), but for the most part everything works. The bigger concern is that now we'd have to maintain two systems when adding projects/source code.

    UPDATE: Can folks shed light on the lifecycle and interplay of the following three components?

    - The Visual Studio .sln file
    - The many project-level .csproj files (which I understand are "sub" MSBuild scripts)
    - The custom MSBuild script

    Is it safe to say that the .sln and .csproj files are consumed/maintained as usual from within the Visual Studio IDE GUI, while the custom MSBuild script is hand-written and usually consumes the already existing individual .csproj files "as-is"? That's one way I can see to reduce overlap/duplication in maintenance... Would appreciate some light on this from other folks' operational experience.

    Read the article

  • What are the challenges of implementing an ERP system?

    When a company decides to roll out an ERP system as part of its core business processes, it must consider and provide solutions for the following general challenges. It is important to note that this list is generic and that every ERP rollout is as distinct as the company trying to implement the system.

    - Upper management support
    - Reengineering existing business processes and applications
    - Integration of the ERP with other existing departmental applications
    - Implementation time
    - Implementation costs
    - Employee training

    I recently read an article by Mano Billi called "What are the major challenges in implementing ERP?" where he basically outlines the common challenges to implementing an ERP system within a company. He discusses items like upper management support, altering existing systems, and how ERPs integrate with other independent systems. In addition, he also covers selecting an ERP vendor, ERP consultants, and the effects of an ERP system on employees. I personally think he did a great job of outlining common issues that can cause an ERP implementation to fail, or to be less effective than it potentially could be, if the challenges are not taken into account appropriately.

    Read the article

  • New SPC2 benchmark- The 7420 KILLS it !!!

    - by user12620172
    This is pretty sweet. The new SPC2 benchmark came out last week, and the 7420 not only came in 2nd of ALL speed scores, but came in #1 for price per MBPS. The 7420 score of 10,704 makes it really fast, but that's not the best part. The price one would have to pay in order to beat it is ridiculous. You can go see for yourself at http://www.storageperformance.org/results/benchmark_results_spc2 - the only system on the whole page that beats it was over twice the price per MBPS. Very sweet for Oracle. So let's see: the 7420 is the fastest per $, the 7420 is the cheapest per MBPS, the 7420 has incredible built-in features, management services, analytics, and protocols, it's extremely stable, and as a cluster it has no single point of failure. It won the Storage Magazine award for best NAS system this year. So how long will it be before it's the number 1 NAS system in the market? What are the biggest hurdles still stopping the widespread adoption of the ZFSSA? From what I see, it's three things:

    1. Administrators' comfort level with older legacy systems.
    2. Politics.
    3. Past issues with Oracle Support.

    I see all of these issues crop up regularly. Number 1 just takes time and education. Number 3 takes time with our new, better, and growing support team; many of them came from Oracle, and there were growing pains when they went from a straight software model to having to also support hardware. Number 2 is tricky, but it's the job of the sales teams to break through the internal politics and help their clients see the value in Oracle hardware systems. Benchmarks like this will help.

    Read the article

  • Generic rule parser for RPG board game rules - how to do it?

    - by burzum
    I want to build a generic rule parser for pen-and-paper style RPG systems. A rule can involve usually 1 to N entities, 1 to N rolls of the dice, and calculating values based on multiple attributes of an entity. For example: a player has STR 18, and his currently equipped weapon gives him a bonus of +1 STR but a malus of -1 DEX. He attacks a monster entity, and the game logic now has to run a set of rules or actions: the player rolls the dice, and if he gets, for example, 8 or more (the base attack value he needs to pass is one of his base attributes!) his attack is successful. The monster then rolls the dice to calculate whether the attack goes through its armor. If yes, the damage is taken; if not, the attack was blocked. Besides simple math, rules can also have constraints, like applying only to a certain class of user (warrior vs. wizard, for example) or any other attribute, so this is not just limited to mathematical operations. If you're familiar with RPG systems like Dungeons and Dragons you'll know what I'm up to. My issue is that I have no clue how to build this the best possible way. I want people to be able to set up any kind of rule and later simply do an action like selecting a player and a monster and running an action (a set of rules, like an attack). I'm asking less for help with the database side of things and more about how to come up with a structure and a parser for it that keeps my rules flexible. The language of choice for this is PHP, by the way.
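
    One common shape for this is to treat rules as plain data - constraints, a dice threshold, and a list of effects - interpreted by a single generic function, so new rules need no new code. The sketch below shows that shape in Go purely for illustration (the poster's target language is PHP), and every field name is made up for the example:

    ```go
    // Rules-as-data sketch: a rule is serializable data, and one interpreter
    // applies any rule to any pair of entities.
    package rules

    type Entity struct {
        Class string         // "warrior", "wizard", "monster", ...
        Stats map[string]int // "STR", "DEX", "HP", "BASE_ATTACK", ...
    }

    // A constraint such as "only warriors may use this rule".
    type Constraint struct {
        Attribute string // currently only "class" is interpreted below
        MustEqual string
    }

    // An effect such as "reduce target HP by actor STR / 2".
    type Effect struct {
        Stat       string // stat to modify on the target
        SourceStat string // actor stat the amount is derived from
        Divisor    int    // amount = actor.Stats[SourceStat] / Divisor (must be non-zero)
    }

    type Rule struct {
        Name        string
        Constraints []Constraint
        ThresholdBy string // actor stat the dice roll must reach, e.g. "BASE_ATTACK"
        Effects     []Effect
    }

    // Apply runs one rule generically: check constraints, compare the roll,
    // apply the effects. Returns true if the rule took effect.
    func Apply(r Rule, actor, target *Entity, roll int) bool {
        for _, c := range r.Constraints {
            if c.Attribute == "class" && actor.Class != c.MustEqual {
                return false
            }
        }
        if roll < actor.Stats[r.ThresholdBy] {
            return false // e.g. the attack roll failed
        }
        for _, e := range r.Effects {
            target.Stats[e.Stat] -= actor.Stats[e.SourceStat] / e.Divisor
        }
        return true
    }
    ```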

    Read the article

  • Oracle Solaris 11 How To Guides

    - by glynn
    Over the past year or so I've been writing a lot of How To Guides for different technologies. While we have really excellent product documentation (including the best set of manual pages available on any UNIX or Linux platform), the various How To Guides we have help to complement some of that learning, giving administrators a chance to learn the motivations for different technologies with a simple set of examples. Not only are they fun to research and write, they're also one of the more popular items on our Oracle Solaris 11 technology pages on OTN. So here's a link to bookmark and come back to on a regular basis: Oracle Solaris 11 How To Guides. We've got an excellent line up of articles there, and below is a list of the ones I've been involved in writing. Let us know if there are technologies that you think a How To Guide would help with and we'd be happy to get them onto our list!

    - Taking your First Steps with Oracle Solaris 11 – An introduction to installing Oracle Solaris 11, including the steps for installing new software and administering other system configuration.
    - Introducing the basics of IPS on Oracle Solaris 11 – How to administer an Oracle Solaris 11 system using IPS, including how to deal with software package repositories, install and uninstall packages, and update systems.
    - Advanced administration with IPS on Oracle Solaris 11 – Take a deeper look at advanced IPS to learn how to determine package dependencies, explore manifests, perform advanced searches, and analyze the state of your system.
    - How to create and publish packages with IPS on Oracle Solaris 11 – How to create new software packages for Oracle Solaris 11 and publish them to a network package repository.
    - How to update your Oracle Solaris 11 systems using Support Repository Updates – The steps for updating an Oracle Solaris 11 system with software packages provided by an active Oracle support agreement, plus how to ensure the update is successful and safe.
    - Introducing the basics of SMF on Oracle Solaris 11 – Simple examples of administering services on Oracle Solaris 11 with the Service Management Facility.
    - Advanced administration with SMF on Oracle Solaris 11 – Advanced administrative tasks with SMF, including an introduction to service manifests, understanding layering within the SMF configuration repository, and how best to apply configuration to a system.

    Read the article

  • Two views of Federation: inside out, and outside in

    - by Darin Pendergraft
    IDM customers that I speak to have spent a lot of time thinking about enterprise SSO - asking your employees to log in to multiple systems, each with distinct hard to guess (translation: hard to remember) passwords that fit the corporate security policy for length and complexity is a strategy that is just begging for a lot of help-desk password reset calls. So forward thinking organizations have implemented SSO for as many systems as possible. With the mix of Enterprise Apps moving to the cloud, it makes sense to continue this SSO strategy by Federating with those cloud apps and services. Organizations maintain control, since employee access to the externally hosted apps is provided via the enterprise account. If the employee leaves, their access to the cloud app is terminated when their enterprise account is disabled. The employees don't have to remember another username and password - so life is good.

    From the outside in - I am excited about the increasing use of Social Sign-on - or BYOI (Bring your own Identity). The convenience of single sign-on is extended to customers/users/prospects when organizations enable access to business services using a social ID. The last thing I want when visiting a website or blog is to create another account. So using my Google or Twitter ID is a very nice quick way to get access without having to go through a registration process that creates another username/password that I have to try to remember.

    The convenience of not having to maintain multiple passwords is obvious, whether you are an employee or customer - and the security benefit of not having lots of passwords to lose or forget is there as well. Are enterprises allowing employees to use their personal (social) IDs for enterprise apps? Not yet, but we are moving in the right direction, and we will get there some day.

    Read the article

  • Impact of Service Oriented Architecture (SOA) on Business and IT Operations

    The impact of Service Oriented Architecture (SOA) on business and IT operations varies from company to company. I think more and more companies are starting to view SOA as just another technology that they can incorporate in an existing or new system. One of the driving factors in using SOA is the reduction in maintenance costs and the decrease in the time needed to bring products to market. The reduced costs and turnaround time can be directly converted into increased profitability, because less expenditure is needed to maintain or create new systems. My personal perspective on SOA is that it is great for what it is actually intended to do. SOA allows systems to be distributed across networks, or even the world, while ensuring enterprise processing consistency and data integrity and preventing code duplication. That being said, a lot of preparation and work goes into properly designing and implementing an SOA, especially if an enterprise wants to take full advantage of its benefits. Even though SOA has recently gotten a lot of hype about its benefits, it is not a perfect fit for all situations. At the end of the day SOA is just another tool in my tool belt that I can pull from to create solutions that meet the business's needs. Based on current industry trends, SOA appears to be a very solid technology to use moving forward, especially as more and more companies shift towards cloud based computing. It is important to remember that SOA is one of many technologies that can be used in creating business solutions, and I think more time will be spent in the future evaluating whether SOA is the right technology for a solution once the initial hype around SOA has calmed down.

    Read the article

  • Opportunities in Cloud Computing

    - by Paul Sorensen
    A recent article from CIO Journal indicates that there is an extreme labor shortage (in certain technology areas) that is leading to upward pressure on wages for IT workers. This represents a great opportunity for those with certain skill-sets, among which include Java (Oracle certification is mentioned specifically). The article points out that a key driver of the labor shortage is the expansion of cloud computing. Cloud computing is set up to make life extremely simple for end users, but the model pushes the complexity to back-end systems, which are sophisticated, enterprise-level computing stacks (Oracle has an extensive set of cloud computing solutions). These complex systems require very highly skilled IT professionals (the best of the best) to successfully develop, implement, administer and maintain them. What this means for you is that there is opportunity for those who have the appropriate skills at the appropriate levels. If you want to be a part of this opportunity you should do a self-assessment of your own skill-sets and experience. Based upon your results you can decide where it would be most appropriate to spend your time and resources for the highest return on your investment. By expanding and sharpening your skills and by gaining greater experience you will be better prepared to take advantage of career opportunities (like this) that come along periodically. As you evaluate your needs, remember that Oracle University has a tremendous selection of high-quality education offerings (including training and certification) that can help you move your career forward. Thanks and best of luck!

    Read the article

  • MySQL documentation writer for MEM and Replication wanted!

    - by stefanhinz
    As MySQL is thriving and growing, we're looking for an experienced technical writer located in the UK or Ireland to join the MySQL documentation team. For this job, we need the best and most dedicated people around. You will be part of a geographically distributed documentation team responsible for the technical documentation of all MySQL products. Team members are expected to work independently, requiring discipline and excellent time-management skills as well as the technical facilities and experience to communicate across the Internet. Candidates should be prepared to work intensively with our engineers and support personnel. The overall team is highly distributed across different geographies and time zones. Our source format is DocBook XML. We're not just writing documentation, but also handling publication. This means you should be familiar with DocBook, and willing to learn our publication infrastructure. Your areas of responsibility would initially be MySQL Enterprise Monitor and MySQL Replication. This means you should be familiar with MySQL in general, and preferably also with the MySQL Enterprise offerings. A MySQL certification will be considered an advantage. Other qualifications you should have:

    - Native English speaker
    - 5 or more years previous experience in writing software documentation
    - Familiarity with distributed working environments and versioning systems such as SVN
    - Comfortable with working on multiple operating systems, particularly Windows, Mac OS X, and Linux
    - Ability to administer your own workstations and test environment
    - Excellent written and oral communication skills
    - Ability to provide (online) samples of your work, e.g. books or articles

    If you're interested, contact me under [email protected]. For reference, the job offer can be viewed here.

    Read the article

  • Fresh install of 64 bit 12.04 over 32 bit 11.10 alongside Windows 7

    - by Pareen
    I currently have Ubuntu 11.10 32 bit and Windows 7 dual booting in separate partitions. I am trying to do a fresh install of Ubuntu 12.04 64 bit (I mistakenly installed the 32 bit 11.10 a little while ago, and I need a 64 bit version to support an AOSP build) OVER the existing 11.10 partition. I have referenced "How to Install fresh 12.04 install to a PC with dual booting Windows 7 & Ubuntu 11.10?", as well as other posts on using the Live CD to do a fresh install. However, the problem I am experiencing is that when I bring up the install screen, it says the following: "This computer has multiple operating systems on it. What would you like to do?" with three options:

    - Install Ubuntu 12.04 alongside them
    - Replace all with Ubuntu 12.04 (Warning: this will delete files across ALL operating systems)
    - Something else (you can create or resize partitions yourself)

    This is different from what is in other posts, as mine states that there are multiple OSes and doesn't individually allow me to replace Ubuntu 11.10. I don't want to replace ALL OSes: I need to preserve Windows 7 and am only trying to replace the old Ubuntu 11.10 partition with the new 12.04 64 bit. I did have Ubuntu installed via Wubi (I believe it was 10.04) prior to putting 11.10 in a separate partition, but I have removed it via Add/Remove Programs in Windows. I was wondering how to go about doing this... Should I use the "Something else" option to bring up the partition manager, and just assign my existing 11.10 partition the root mount point plus swap space? Will this do the same thing by overwriting it with a fresh 12.04 install? I appreciate all your help.

    Read the article

  • Limiting my heavy thinking to my job [closed]

    - by Robin Castlin
    This might be a weird problem which is only half relevant to actual programming, but hopefully there are people here who know what I'm talking about. Basically, I'm proud of how I can deal with coding problems and fix them at short notice, and of many other aspects like building new systems and such. I'm fast at finding solutions, and I often think about the impact my changes have on existing systems and so on, therefore preventing problems from arising at all. I am simply happy with how my mind operates when it comes to programming and I wouldn't want to change it at all. The problem, however, is when I'm not programming. I find myself rather limited in social situations. I can't determine whether it comes from programming, but I sometimes think way too much about the consequences when it comes to being social. I know from my own experience that most of the time you gain by not thinking about consequences, but it's hard for me not to. Often my friends tell me "you think too much", and even though I agree, I can't seem to change this behavior. My brain wants to think, and it likes to overthink simple stuff. Does anyone recognize this bad habit of not leaving advanced thinking at work, and how do you deal with it? If this isn't a suitable place to ask this question, I apologize and hope you can point me to the right site.

    Read the article

  • ECS with Go - circular imports [migrated]

    - by Andreas
    I'm exploring both Go and Entity-Component-Systems. I understand how ECS works, and I'm trying to replicate what seems to be the go-to document on ECS, namely http://cowboyprogramming.com/2007/01/05/evolve-your-heirachy/ For performance, the document recommends using static arrays of every component type - that is, not arrays of component interfaces (arrays of pointers). The problem with this in Go is circular imports. I have one package, ecs, which contains the definitions for the Entity, Component and System types/interfaces as well as an EntityManager. Another package, ecs/components, contains the various components. Obviously, the ecs/components package depends on ecs. But to declare arrays of specific components in EntityManager, ecs would depend on ecs/components, therefore creating a circular import. Is there any way of avoiding this? I am aware that normally a high level system should not depend on lower systems. I also want to point out that using an array of pointers is probably fast enough for my purposes, but I'm interested in possible workarounds (for future reference). Thank you for your help!
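
    One layout that avoids the cycle while keeping typed, non-interface storage is to move the manager into a third package that imports both of the others, so the dependency graph stays acyclic. The sketch below is illustrative only; the package paths and component names are placeholders:

    ```go
    // file: ecs/ecs.go -- core definitions; imports nothing above it
    package ecs

    type Entity uint32
    ```

    ```go
    // file: ecs/components/components.go -- concrete component types;
    // may import "ecs" but nothing higher
    package components

    type Position struct{ X, Y float64 }
    type Velocity struct{ DX, DY float64 }
    ```

    ```go
    // file: ecs/world/world.go -- the entity manager with static, typed arrays;
    // imports both packages, so dependencies point one way only
    package world

    import (
        "yourgame/ecs"            // placeholder module path
        "yourgame/ecs/components"
    )

    type EntityManager struct {
        Positions  []components.Position // dense, typed storage as the article suggests
        Velocities []components.Velocity
        Entities   []ecs.Entity
    }
    ```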

    Read the article
