Search Results

Search found 6561 results on 263 pages for 'developing'.

  • Professional immigration

    - by etranger
    Hello all, Does anyone here have any practical advice on professional relocation from Russia to Europe? The reasons behind making such a decision are far beyond the subject, perhaps, so I'll stick to the practical part. Having done some of the "common stuff" for finding a job, I am now facing two serious problems: I am a "dual-class" person, with a university degree in marketing and multiple years of self-studied computer competence (hence my writing here), plus professional experience in both areas; and I don't currently hold a European work permit. From what I can see, this results in the typical HR person throwing out my CV as either "overqualified" or "too much trouble with making the permit". I do have the skills and character to start my own business, but that requires start-up capital I don't have: over the last few years I had to pay high bills for the medical treatment of a family member, who has since passed away. Now I'm almost out of debt. As you can probably guess, English is not a problem, and I'm open to new languages, but the first steps of entering the market, or the society, are the problematic part. I live close to Norway and am trying to get some professional contacts there, but it hasn't given me any practical prospects so far. Any advice is greatly appreciated.
    EDIT: I am currently making my living off web site development and occasional consulting services, both in IT and marketing. For purely geographic reasons I'm dealing with clients that reside in the same city where I live, pop. 350 000. Being quite local, market requirements for web sites are simple and stable - clients need to control navigation, write articles in a Word-like editor, upload illustrations and place ad banners, all with no additional programming. As many web developers do, I'm using my own content management system that fits these expectations. I have also started developing a newer version of this system with better support for international environments, but I'm too distant from the real market demand in Europe to say whether I'm on the right track here. Technically it's based on PHP/MySQL and uses XSLT for templating. It allows for quick website deployment and has an architectural neatness, the lack of which made me abandon similar open source solutions (Joomla and the like). Deployment time from rasterized design proofs is normally under 6-8 working hours; I don't know how that compares to world practice.
    EDIT 2: Can anyone share what the Norwegian (Scandinavian) web solutions market currently demands?

    Read the article

  • What to do when you inherit an unmaintainable codebase?

    - by GordonM
    I'm currently working at a company with 2 other PHP developers aside from me, and 1 junior developer. The senior developer who originally built the system we're all working on has resigned and will only be here for a matter of weeks. The other developer, who is the only other guy who knows anything about the system, is unhappy here and is looking for a new job. I'm in very real danger of being left behind as the only experienced developer on this codebase. Since I joined this company I've tried to push for better coding standards, project documentation, etc., and I do think I've made some headway, but the vast majority of the code is simply unmaintainable and uncommented. A lot of this has to do with the need to get things done fast at points in the project before I joined, but now the technical debt is enormous, even with the two developers who do understand the system on board. Without them, it will simply be impossible to do anything with it. The senior developer is trying to at least comment all his code before he leaves, but I think the codebase is simply too vast to properly document in the remaining time. Besides, when he does comment, it still doesn't make things as clear as it could. If the system were better organized and documented I could probably start refactoring it incrementally, but the whole thing is so tightly coupled that it's very difficult to make any changes in one module without unintended knock-on effects in other modules. Naturally, there are no unit tests either, and I honestly don't think this codebase could be unit tested anyway, given how it's implemented. There also never seems to be enough time to get things done, even with 3 developers and 1 junior developer. With one developer and one junior, neither of whom had significant input into the early design of the system, I don't see how we could possibly keep the current system working, implement new features as needed, and develop a replacement for the current codebase that is better organized. Is there an approach I can take to cope with this situation, or should I be getting my own CV in order as well at this point? If it were just me and the junior developer who would be left, I'd go for the latter option almost without question. However, there's a team of front-end developers and content managers as well, and I'm worried what would become of them if I left and put them in a position where there would be no developers at all. The department might just be closed down altogether under such circumstances, and then I'd have their unemployment on my conscience as well!

    Read the article

  • So, I thought I wanted to learn frontend/web development and break out of my comfort zone...

    - by ripper234
    I've been a backend developer for a long time, and I really swim in that field. C++/C#/Java, databases, NoSQL, caching - I feel very much at ease around these platforms/concepts. In the past few years, I started to taste end-to-end web programming, and recently I decided to take a job offer in a front end team developing a large, complex product. I wanted to break out of my comfort zone and become more of an "all around developer". Problem is, I'm getting more and more convinced I don't like it. Things I like about backend programming that are missing in frontend work:
      - More interesting problems - When I compare designing a server that handles massive data to adding another form to a page or changing the validation logic, I find the former a lot more interesting.
      - Refactoring, refactoring, refactoring - I am addicted to Visual Studio with ReSharper, or IntelliJ. I feel very comfortable writing code as it goes without investing too much thought, because I know that with a few clicks I can refactor it into beautiful code. To my knowledge, this doesn't exist at all in javascript.
      - Intellisense and navigation - I hate looking at a bunch of JS code without instantly being able to know what it does. In VS/IntelliJ I can summon the documentation, navigate to the code, climb up inheritance hierarchies ... life is sweet.
      - Auto-completion - Just hit Ctrl-Space on an object to see what you can do with it.
      - Easier to test - With almost any backend feature, I can use TDD to capture the requirements, see a bunch of failing tests, then implement, knowing that if the tests pass I did my job well. With frontend, while tests can help a bit, I find that most of the testing is still manual - fire up that browser and verify the site didn't break. I miss that feeling of "a green CI means everything is well with the world."
    Now, I've only seriously practiced frontend development for about two months, so this might seem premature ... but I'm getting a nagging feeling that I should abandon this quest and return to my comfort zone, because, well, it's so comfy and fun. Another point worth mentioning in this context is that while I am learning some frontend tools, a lot of what I'm learning is our company's specific infrastructure, which I'm not sure will be very useful later on in my career. Any suggestions or tips? Do you think I should give frontend programming "a proper chance" of at least six to twelve months before calling it quits? Could all my pains be growing pains that will magically disappear as I get more experienced? Or is gaining this perspective valuable enough, even if I plan to do more "backend stuff" later on, that it's worth gritting my teeth and continuing with my learning?

    Read the article

  • Keep Learning After Your Oracle Training Class is Over - Save 50%!

    - by KJones
    Written by Amit Kumar, Senior Director, Oracle University Digital Training
    Every training class you take about the latest Oracle application or technology moves you closer to developing the skills you need to succeed. But after class is over, how do you keep up with today's accelerating pace of innovation? To keep up with the very latest technological advances, you need an ongoing and flexible training solution:
      - One that lets you learn during your own downtime.
      - Knowledge that's easy to access.
      - Interactive lessons where you connect with experts.
      - A simple way to increase your knowledge, on your own time and at your own pace.
    The new Oracle Learning Streams is the flexible training solution you're looking for. Continuously Learn with Oracle Learning Streams: Over time, Oracle Learning Streams help you develop the depth and breadth of knowledge that will give you the tools to become an expert in your field. By taking advantage of comprehensive and frequently updated information, you can keep learning continuously, at your own pace, when it's convenient for you. Sign up today and get 12 months of unlimited access to:
      - Hundreds of videos delivered by Oracle experts for fresh and continuous product learning
      - Live connections with Oracle's top instructors
      - Robust video search capability to find exactly what you're looking for
      - Features that allow you to build your own custom learning queue and request new content
    Oracle Learning Streams are now available for Oracle Database and Oracle Middleware. Take a moment to preview the content now. For a Limited Time - Save 50%: For a limited time, save 50% when you order Oracle Learning Streams with any other Oracle Classroom, Live Virtual Class or Training On Demand course. Now there is no reason for learning to stop when class is over!

    Read the article

  • Move Data into the Grid for Scalable, Predictable Response Times

    - by JuergenKress
    CloudTran is pleased to introduce the availability of the CloudTran Transaction and Persistence Manager for creating scalable, reliable data services on the Oracle Coherence In-Memory Data Grid (IMDG). Use of IMDG architectures has been key to handling today's web-scale loads because it eliminates database latency by storing important and frequently accessed data in memory instead of on disk. The CloudTran product lets developers easily use an IMDG for full ACID-compliant transactions without having to be concerned about the location or spread of data. The system has its own implementation of fast, scalable distributed transactions that does NOT depend on XA protocols but still guarantees all ACID properties. Plus, CloudTran asynchronously replicates data going into the IMDG to back-end datastores and back-up data centers, again ensuring ACID properties. CloudTran can be accessed through the Java Persistence API (JPA via TopLink Grid) and now through a new Low-Level API, or LLAPI. This is ideal for use in SOA applications that need data reliability, high availability, performance, and scalability. Still in limited beta release, the LLAPI gives developers the ability to use the standard put/remove logic available in Coherence and then wrap that logic with simple Spring annotations or XML+AspectJ to start transactions. An important feature of LLAPI is the ability to join transactions. This is a common need for SOA applications that reduce network traffic by aggregating data into single cache entries and then doing SOA service processing in the node holding the data, which results in the need to orchestrate transaction processing across multiple service calls. CloudTran has the capability to handle these "multi-client" transactions at speed with no loss in ACID properties. Developing software around an IMDG like Oracle Coherence is an important choice for today's web-scale applications and services, but it introduces new architectural considerations to maintain scalability in light of increased network loads and data movement. Without CloudTran, developers face an incredibly difficult task in ensuring data reliability, availability, performance, and scalability when working with an IMDG. Working with highly distributed data that is entirely volatile while stored in memory presents numerous edge cases where failures can result in data loss. The CloudTran product takes care of all of this, leaving developers with the confidence and peace of mind that all data is processed correctly. For those interested in evaluating the CloudTran product and IMDGs, take a look at this link for more information: http://www.CloudTran.com/downloadAPI.php, or send your questions to [email protected].
    WebLogic Partner Community: For regular information, become a member of the WebLogic Partner Community. Please visit: http://www.oracle.com/partners/goto/wls-emea (OPN account required). If you need support with your account please contact the Oracle Partner Business Center. Technorati Tags: Coherence,cloudtran,cache,WebLogic Community,Oracle,OPN,Jürgen Kress
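    As a rough illustration of the put/remove style that the LLAPI builds on, here is a minimal Java sketch using the standard Coherence NamedCache API. The @GridTransactional annotation shown in the comment is hypothetical; it stands in for whatever CloudTran-specific annotation or AspectJ pointcut you would actually configure to start the transaction.
        import com.tangosol.net.CacheFactory;
        import com.tangosol.net.NamedCache;

        public class OrderCacheService {

            // Hypothetical marker for the Spring/AspectJ transaction wrapper described above.
            // @GridTransactional
            public void approveOrder(String orderId, Object order) {
                // Plain Coherence put/remove logic; the transaction manager's job would be
                // to make sure both cache operations commit or roll back together.
                NamedCache pending  = CacheFactory.getCache("pending-orders");
                NamedCache approved = CacheFactory.getCache("approved-orders");

                pending.remove(orderId);
                approved.put(orderId, order);
            }
        }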

    Read the article

  • What is the best way to implement collision detection using Bullet physics engine and a track generated from a curve?

    - by tigrou
    I am developing a small racing game where the track is generated from a curve. As said above, the track is generated, but not infinite. The track of one level can fit in memory with no problem and will contain a reasonably small number of triangles. For collisions, I would like to use the Bullet physics engine and want to know the best way to handle collisions with the track efficiently. NOTE: The track will be stored as a static rigid body (mass = 0). The player will be represented by a sphere shape for collisions. Here are some possibilities I have in mind:
      1. Create one rigid body, then put all triangles of the track (except non-collidable stuff) into it. Result: 1 body with many triangles (e.g. 30000 triangles). A rough sketch of this option is shown below.
      2. Split the track into several sections (e.g. 10 sections). Then, for each section, create a rigid body and put the corresponding triangles in it. Result: a small number of bodies, each with a relatively small number of triangles (e.g. 1500 triangles per section).
      3. Split the track into many sub-sections (e.g. 1200 sections), where one subsection = a very small step when generating the curve. Again, for each sub-section, create a body and put its triangles in it. Result: many bodies, each with a very small number of triangles (e.g. 20 triangles). Advantage: it would be possible to attach "extra data" to each subsection that could be used when handling collisions.
      4. Same as 2, but only put sections N and N+1 in the physics engine (where N = the current section the player is in). When the player reaches section N+1, unload section N and load section N+2, and so on... Issue: harder to implement, and there are problems if the player suddenly "jumps" from one section to another (e.g. the player flies away from section N and falls onto section N+4 underneath: no collision is handled and the player falls into the void).
      5. Same as 4, but with many sub-sections. Issues: since subsections are very small, new bodies will constantly be added to and removed from the physics engine at runtime. The chances of the player accidentally skipping some sections and falling into the void are higher than with option 4.
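    To make option 1 concrete, here is a minimal sketch using the Java port of Bullet (jBullet); class names and constructor signatures may differ slightly in other bindings, and the vertex/index buffers are assumed to come from your own track generator.
        import com.bulletphysics.collision.shapes.BvhTriangleMeshShape;
        import com.bulletphysics.collision.shapes.TriangleIndexVertexArray;
        import com.bulletphysics.dynamics.DiscreteDynamicsWorld;
        import com.bulletphysics.dynamics.RigidBody;
        import com.bulletphysics.dynamics.RigidBodyConstructionInfo;
        import com.bulletphysics.linearmath.DefaultMotionState;
        import com.bulletphysics.linearmath.Transform;
        import javax.vecmath.Vector3f;
        import java.nio.ByteBuffer;

        public class TrackCollision {

            // vertices: 3 floats per vertex, indices: 3 ints per triangle,
            // both filled elsewhere from the generated curve.
            public static RigidBody buildStaticTrack(DiscreteDynamicsWorld world,
                                                     ByteBuffer vertices, int vertexCount,
                                                     ByteBuffer indices, int triangleCount) {
                TriangleIndexVertexArray mesh = new TriangleIndexVertexArray(
                        triangleCount, indices, 3 * 4,   // 3 ints of 4 bytes per triangle
                        vertexCount, vertices, 3 * 4);   // 3 floats of 4 bytes per vertex

                // Static concave mesh shape; mass 0 makes the body static.
                BvhTriangleMeshShape shape = new BvhTriangleMeshShape(mesh, true);

                Transform identity = new Transform();
                identity.setIdentity();
                RigidBodyConstructionInfo info = new RigidBodyConstructionInfo(
                        0f, new DefaultMotionState(identity), shape, new Vector3f(0f, 0f, 0f));

                RigidBody track = new RigidBody(info);
                world.addRigidBody(track);
                return track;
            }
        }
    BvhTriangleMeshShape builds a bounding-volume hierarchy over the mesh, so even a single static body with around 30000 triangles is usually cheap to query; what option 3 mainly adds is the ability to hang per-subsection data off each body.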

    Read the article

  • Identifying which pattern fits better.

    - by Daniel Grillo
    I'm developing software to program a device. I have some commands like Reset, Read_Version, Read_memory, Write_memory, Erase_memory. Reset and Read_Version are fixed; they don't need parameters. Read_memory and Erase_memory need the same parameters, Length and Address. Write_memory needs Length, Address and Data. For each command, I have the same steps in sequence, something like sendCommand, waitForResponse, treatResponse. I'm having difficulty identifying which pattern I should use: Factory, Template Method, Strategy or some other pattern.
    Edit: I'll try to explain better, taking into account the given comments and answers. I've already written this software and now I'm trying to refactor it. I'm trying to use patterns even where it is not strictly necessary, because I'm taking advantage of this little program to learn about some patterns. Still, I think that one (or more) pattern fits here and could improve my code. When I want to read the version of the software on my device, I don't have to assemble the command with parameters; it is fixed. So I send it, wait for the response, and if there is a response I treat (or parse) it and return. To read a portion of memory (maximum of 256 bytes), I have to assemble the command using the parameters Len and Address. So I send it, wait for the response, and if there is a response I treat (or parse) it and return. To write a portion of memory (maximum of 256 bytes), I have to assemble the command using the parameters Len, Address and Data. So I send it, wait for the response, and if there is a response I treat (or parse) it and return. I think that I could use Template Method because I have almost the same algorithm for all of them. But the problem is that some commands are fixed while others have 2 or 3 parameters. I think the parameters should be passed in the constructor of each class, but then each class will have its own constructor on top of the abstract class constructor. Is this a problem for the Template Method? Should I use another pattern?
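    For what it's worth, a minimal Template Method sketch in Java might look like the following. All class names, method names and opcode bytes are made up for illustration; having different constructors per subclass is not a problem for the pattern, because the template method only relies on the abstract steps.
        // The fixed algorithm lives in execute(); subclasses only supply how
        // their command bytes are assembled and how the response is parsed.
        abstract class DeviceCommand {
            public final byte[] execute() {
                byte[] frame = assemble();            // varies per command
                send(frame);                          // same for every command
                byte[] response = waitForResponse();  // same for every command
                return parse(response);               // varies per command
            }

            protected abstract byte[] assemble();
            protected abstract byte[] parse(byte[] response);

            private void send(byte[] frame) { /* write to the serial port, omitted */ }
            private byte[] waitForResponse() { /* block until reply or timeout, omitted */ return new byte[0]; }
        }

        // Fixed command: no parameters, so no constructor arguments.
        class ReadVersion extends DeviceCommand {
            protected byte[] assemble() { return new byte[] { 0x01 }; }
            protected byte[] parse(byte[] response) { return response; }
        }

        // Parameterized command: the parameters arrive through the constructor.
        class ReadMemory extends DeviceCommand {
            private final int address, length;
            ReadMemory(int address, int length) { this.address = address; this.length = length; }
            protected byte[] assemble() { return new byte[] { 0x02, (byte) address, (byte) length }; }
            protected byte[] parse(byte[] response) { return response; }
        }
    A caller would then simply run new ReadMemory(0x10, 64).execute(); a Factory only becomes interesting if something else has to decide at runtime which command object to build.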

    Read the article

  • Develop web site from existing software or cherry pick and use a web framework?

    - by erisco
    A small team and I are tasked with developing a web site. The client has referenced a particular open source project (we'll call it X) when describing some of the features. Because of this, the team wants to start with X and adapt it to satisfy the client. I have looked at X and its code and, in my opinion, it would be unwise. However, my experience is limited, and I could really benefit from the insights of others so that I can figure out what I should be asserting as the right direction for the team. My red flags are going up, and this is why: X was developed in the earlier days of PHP; 500-line blocks of code are the norm; global variables are abundant; giant switch cases decide which page is shown. There is no clear mapping between a URL and where the code for that page sits. From a feature-set standpoint, X is actually software specialized for a different task, and it has dozens of features we don't need or have any use for that come as core assumptions. We will be unable to adapt X through its plugin system. That said, there are a few features which can be mapped, with some modification, to suit our purposes. I believe this is the attraction the team feels. I would feel comfortable if, instead of using X directly, we lifted what is salvageable and useful to us. We could then use that code, and the same 3rd party libraries X is using, in a new code base built on top of a PHP web framework (particularly Agavi, so you understand what I mean by 'web framework'). The web framework gives us a strong MVC structure and provides the common facilities for web development, or adapters to work with 3rd party libraries that do so. We will also have a clean slate feature-wise to work from, which means we can work additively instead of subtractively. Because the code base is better structured, and contains none of what we don't need, it will be easier to document, which is a critical requirement of our client. So to summarize, the team wants to use X, whereas I want to take the bits we can from X and use a web framework instead. I want to bounce this opinion off of others' experiences so that I can be more informed. Thanks for your insight.

    Read the article

  • Oracle BPM and Open Data integration development

    - by drrwebber
    Rapidly developing Oracle BPM application solutions with data source integration previously required significant Java and JDeveloper skills. Now, using open source tools for open data development significantly reduces the coding needed. Key tasks can be performed with visual drag-and-drop design, combined with menu-driven entry and automatic form generation directly from XSD schema definitions. The architecture used is extremely lightweight, portable, open and scalable, allowing integration with a variety of Oracle and non-Oracle data sources and systems. Two videos available on YouTube walk through the process, first at an introductory conceptual level and then as a deep dive into the programming needed using JDeveloper, Oracle BPM Composer and Oracle WLS (WebLogic Server), along with the CAM editor and Open-XDX open source tools. Also available are coding samples and resources from the GitHub project page, along with working online demonstration resources on the VerifyXML site. Combining Oracle BPM with these open source tools provides a comprehensive, simple and elegant solution set. Development times are slashed and rapid prototyping is enabled. Existing data sources can also be integrated using open data formats with either XML or JSON, along with CRUD access via the Open-XDX Java component. The Open-XDX tool is a code-free approach where data mapping is configured as templates using visual drag and drop in the CAM Editor open source tool. XML or JSON is then automatically generated or processed (output or input) and appropriate SQL statements are created to support the data access. Also included is the ability to integrate with fillable PDF forms via the XML templates and the Java PDF form-filling library. Again, minimal Java coding is needed to associate the XML source content with the PDF named fields. The Oracle BPM forms can be automatically generated from XSD schema definitions that are built from the data mapping templates. This dramatically simplifies development work, as all the integration artifacts needed are created by the open source editor toolset. The developer-level video is designed as a tutorial with segments, hands-on demonstrations and reviews. This allows developers to learn the techniques and approaches used in incremental steps. The intended audience ranges from data analysts to developers and assumes only entry-level Java skills and knowledge. Most actions are menu driven, while Java coding is limited to simply configuring values and parameters, along with performing builds and deployments from JDeveloper and Oracle WLS. Additional existing Oracle online training resources on Oracle BPM and WLS can be referenced that cover other normal delivery aspects such as user management and application deployment.

    Read the article

  • What arguments can I use to "sell" the BDD concept to a team reluctant to adopt it?

    - by S.Robins
    I am a bit of a vocal proponent of the BDD methodology. I've been applying BDD for a couple of years now, and have adopted StoryQ as my framework of choice when developing DotNet applications. Even though I have been unit testing for many years, and had previously shifted to a test-first approach, I've found that I get much more value out of using a BDD framework, because my tests capture the intent of the requirements in relatively clear English within my code, and because my tests can execute multiple assertions without ending the test halfway through - meaning I can see which specific assertions pass/fail at a glance without debugging to prove it. This has really been the tip of the iceberg for me, as I've also noticed that I am able to debug both test and implementation code in a more targeted manner, with the result that my productivity has grown significantly, and that I can more easily determine where a failure occurs if a problem happens to make it all the way to the integration build, thanks to the output that makes its way into the build logs. Further, the StoryQ API has a lovely fluent syntax that is easy to learn and which can be applied in an extraordinary number of ways, requiring no external dependencies in order to use it. So with all of these benefits, you would think it would be easy to introduce the concept to the rest of the team. Unfortunately, the other team members are reluctant to even look at StoryQ to evaluate it properly (let alone entertain the idea of applying BDD), and have convinced each other to try and remove a number of StoryQ elements from our own core testing framework, even though they originally supported the use of StoryQ and it doesn't impact any other part of our testing system. Doing so would end up increasing my workload significantly overall and really goes against the grain, as I am convinced through practical experience that it is a better way to work in a test-first manner in our particular working environment, and can only lead to greater improvements in the quality of our software, given that I've found it easier to stick with test-first using BDD. So the question really comes down to the following:
      1. What arguments can I use to really drive the point home that it would be better to use StoryQ, or at the very least apply the BDD methodology?
      2. Can you point me to any anecdotal evidence that I can use to support my argument to adopt BDD as our standard method of choice?
      3. What counter-arguments can you think of that could suggest that my wish to convert the team's efforts to BDD might be in error? Yes, I'm happy to be proven wrong provided the argument is a sound one.
    NOTE: I am not advocating that we rewrite our tests in their entirety, but rather that we simply start working in a different manner for all future testing work.

    Read the article

  • New grad; To overcome complete lack of experience, should I ditch a creative pet project in lieu of one that would demonstrate more applicable skills?

    - by Hart Simha
    I am currently working on a project on github that I think would be a good demonstration of my initiative, creativity and enthusiasm. It is an educational game I am developing in pygame that enables the user to improve their development productivity by using vim, specifically with python, though learning to code faster with vim should be transferable to any language. I think this is something that might have mass appeal and benefit a lot of people in a measurable way. -However- I am graduating from college in a month (my degree is computer science with a minor in English), with no experience that is relevant to helping me get any kind of job in the field, and a GPA that doesn't tout my merits. I could pursue a career in game development, but it's not necessarily what I'm most interested in, and I see myself applying to startups around the country. To the places I am looking at applying, showing that I have experience with pygame is going to be largely irrelevant, except as a demonstration of my ability to code, period. A lot of skills that ARE more marketable, such as data modeling, GIS, mobile application development, javascript, the .NET framework, and various web development technologies, are not going to be showcased by this project (on the upside, employers do like to see familiarity with git and python). I'm wondering if I should sink all my free time in the next couple of months into this project, since I'm motivated and interested in it, and whether the value of being able to demonstrate ambition and 'good ideas' (for lack of a better term, and in my own opinion) will compensate for the absence of demonstrating more sought-after skills. I am probably at a point where I should either commit fully to this project now, or put it on the backburner in favor of something else, and I am leaning towards continuing with what I am already working on, because I think it's a great idea, and something achievable for me with enough dedication over the next couple of months. But the most important thing to me is being able to get a job out of college, which I am exceedingly concerned about, as the professional landscape which I am navigating for the first time is a lot more intimidating than I could have anticipated, with almost every job (even short-term contract positions) requiring years of experience which I lack. So in brief, the common denominator in answering the question "How can I overcome experience requirements for a job?" seems to be "Show off your own project." I want to know WHICH project I should work on to best increase my chances of getting a job out of college, keeping in mind that I have no experience. I believe this question is applicable to any new grad that lacks demonstrable experience.

    Read the article

  • eSTEP Newsletter for October 2013 Now Available

    - by Cinzia Mascanzoni
    The October'13 issue of our Newsletter is now available. The issue contains information on the following topics:
      - Oracle Open World Summary Oracle Cloud: Oracle Engineered Systems Oracle Database and Middleware Oracle Applications and Software as a Service Oracle Industries Oracle Partners and the "Internet of Things" JavaOne News MySQL News Corporate News Create Your HR Strategic Vision at Oracle HCM World Oracle Database Protection Redefined A Preview: Oracle Database Backup Logging Recovery Appliance Oracle closed Tekelec acquisition Congratulations to ORACLE TEAM USA!
      - Tech section: ARC M6 Oracle's SPARC M6 Oracle SuperCluster M6-32 - Oracle's Most Scalable Engineered System Oracle Multitenant on SPARC Servers and Oracle Solaris Oracle Database 12c Enterprise Edition: Plug into the Cloud Oracle In-Memory Database Cache Oracle Virtual Compute Appliance New Benchmark-Results published (Sept. 2013) Video Interview: Elasticity, the Biggest Challenge Facing Data Centers Today
      - Tech blog: Announcing New Sun Storage 2500-M2 Drives SPARC Product Line Update ZFS RAID Calculator v6 What ships with ODA X3-2? Tech Article: Oracle Multitenant on SPARC Servers and Oracle Solaris New release of Sun Rack II capacity calculator available Announcing: Oracle Solaris Cluster Product Bulletin, September 2013
      - Learning & events: Planned TechCasts Quarterly Partner Update Live Webcast: Simplify and Accelerate Oracle Database deployment with Oracle VM Templates Join us for OTN's Virtual Developer Day - Harnessing the Power of Oracle WebLogic and Oracle Coherence. Learn from OOW 2013 what is going on in Virtualization How to Implementing Early Arriving Facts in ODI, Part I and Part II: Proof of Concept Overview Multi-Factor Authentication in Oracle WebLogic Using multi-factor authentication to protect web applications deployed on Oracle WebLogic. If Virtualization Is Free, It Can't Be Any Good—Right? Looking beyond System/HW SOA and User Interfaces Overcoming the challenges to developing user interfaces in a service oriented
      - References: Vodafone Romania Improves Business Agility and Customer Satisfaction, with 10x Faster Business Intelligence Delivery and 12x Faster Processing Emirates Integrated Telecommunications Captures 47% Market Share in a Competitive Market, Thanks to 24/7 Availability Home Credit and Finance Bank Accelerates Getting New Banking Products to Market
      - Extra: A Conversation with Java Champion Johan Vos
    You can find the Newsletter on our portal under eSTEP News ---> Latest Newsletter. You will need to provide your email address and the PIN below to get access. The link to the portal is shown below. URL: http://launch.oracle.com/ PIN: eSTEP_2011 Previously published Newsletters can be found under the Archived Newsletters section, and more useful information under the Events, Download and Links tab.

    Read the article

  • Pixels - A cry for some insight

    - by CarrotFile
    I'm pretty new to web development and I'd love some clarification. Despite reading more than one book on the topic, I cannot seem to wrap my head around the pixel concept. I encounter problems with this issue when trying to use CSS and pixel units for designs that fit different screen sizes. To my understanding, a pixel is the most basic unit used by a monitor to compose an image on the screen. So if my resolution is 800 by 600, everything on my screen is rendered using those 800*600 basic building blocks. If I were to increase my screen resolution, three things would occur:
      A. The basic image building block (the pixel) would shrink in size
      B. The pixels would move closer together
      C. Well, more pixels would now be available
    All these combined lead to a sharper (depending on the viewing distance) and more detailed image. Well, so far so good. Here is where I start getting lost: To my knowledge a pixel is not a physical, real object. Monitors are not embedded with a few thousand pixels. I am drawn to this conclusion because anyone can change his screen's resolution, making a pixel on his screen bigger or smaller, and adding or subtracting the total number of pixels on screen. Adding to that, I have heard that different monitors have different pixel densities, for example Apple's Retina monitors. Taking all of the above as my knowledge base, these are my questions:
      1. If a pixel has no real-world constant size, what does comparing different pixel densities matter? Each screen company could define its own pixel concept and declare the higher density.
      2. What does a bigger pixel density mean? Say we take two screens with the same physical dimensions, but with different pixel densities: am I to assert that the main difference would be the higher-density screen being able to display a higher maximum resolution? Or am I to assert that, given the same resolution on both monitors, the higher-density one would display a sharper, smaller image?
      3. If a pixel is not a fixed size within one monitor, is it a fixed size between the same resolution on two different monitors? For example, would two different monitors, set to the same resolution, be composed of pixels of the same size and quantity?
    I'd love some help (:
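    For reference, one way to make "pixel density" concrete is the usual pixels-per-inch calculation: the diagonal resolution in pixels divided by the physical diagonal in inches. A minimal sketch (the panel sizes and resolutions below are made-up examples):
        public class PixelDensity {

            // PPI = diagonal length in pixels / diagonal length in inches
            static double ppi(int widthPx, int heightPx, double diagonalInches) {
                double diagonalPx = Math.sqrt((double) widthPx * widthPx + (double) heightPx * heightPx);
                return diagonalPx / diagonalInches;
            }

            public static void main(String[] args) {
                // Two hypothetical panels at the same resolution but different physical sizes:
                System.out.printf("24\" 1920x1080: %.0f PPI%n", ppi(1920, 1080, 24.0)); // ~92 PPI
                System.out.printf("13\" 1920x1080: %.0f PPI%n", ppi(1920, 1080, 13.0)); // ~169 PPI
            }
        }
    At the same resolution, the physically smaller panel packs the same pixel grid into less space, which is what a higher PPI figure expresses.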

    Read the article

  • All hail the Excel Queen

    - by Tim Dexter
    An excellent question this past week from dear ol' Blighty; actually from Brian at Nextgen Clearing Ltd in the big smoke (London). Brian was developing an Excel template and wanted to be able to reference the data fields multiple times inside the Excel template. Damn good question, and I of course had some wacky solutions, from macros and cell referencing in Excel to pre-processing the data with an XSL stylesheet to copy the data multiple times so it could be referenced multiple times. All completely outlandish; enter our Queen of Excel, Shirley from the development team. Shirley is singlehandedly responsible for the Excel templates; I put her through six months of hell a few years back with a host of Excel template requirements. She was more than up to the challenge and has developed some great features. One of those is the ability to use the hidden XDO_METADATA sheet to map the data to custom named fields so they can be used multiple times in the template. So simple and very neat! Excel template developers and regular Excel users will know that you can only use a name once, i.e. the names have to be unique across the workbook, so you cannot reuse a cell/group name. To get around this you can come up with as many cell names as you want and map them in the XDO_METADATA sheet to the data columns/fields in your XML data set. For example:
        XDO_?DEPTNO_SUMMARY?   <?DEPTNO?>
        XDO_?DNAME_SUMMARY?    <?DNAME?>
        XDO_GROUP_?G_D_DETAIL? <xsl:for-each-group select=".//G_D" group-by="./DEPTNO">
        XDO_?DEPTNO_DETAIL?    <?DEPTNO?>
    As you can see, DEPTNO has been referenced twice and mapped to different named values in the left-hand column. These values can then be used to name individual cells in the Excel template. You'll also notice a mix of Publisher <? ... ?> and native XSL commands, so the world is your oyster on the mapping and the complexity you might need for calculations or string manipulation. Shirley has kindly built out a sample Excel template, data and result here so you can see how it all hangs together. The XDO_METADATA sheet is hidden; just right-click on the sheet names and use the Unhide command to show it.

    Read the article

  • Applying DDD principles in a RESTish web service

    - by Andy
    I am developing a RESTish web service. I think I got the idea of the difference between aggregation and composition. Aggregation does not enforce lifecycle/scope on the objects it references. Composition does enforce lifecycle/scope on the objects it contains/owns. If I delete a composite object then all the objects it contains/owns are deleted as well, while deleting an aggregate root does not delete referenced objects.
      1. If it is true that deleting aggregate roots does not necessarily delete referenced objects, what sense does it make not to have a repository for the referenced objects? Or is "aggregate root" as a term referring to what is known as a composite object?
      2. When you create a web service you will have multiple endpoints; in my case I have one entity named Book and another named Comment. It does not make sense to leave the comments in my application if the book is deleted. Therefore, Book is a composite object. I guess I should not have a repository for comments, since that would break the enforcement of the lifecycle and rules that the Book class may have. However I have URLs such as (examples only):
        GET /books/1/comments
        POST /books/1/comments
    Now, if I do not have a repository for comments, does that mean I have to load the book object and then return the referenced comments? Am I allowed to return a list of Comment entities from the BookRepository? Does that make sense? The repository for Book may eventually become rather big with all sorts of methods. Am I allowed to write JPQL (JPA queries) that target comments and not books inside the repository? What about pagination and filtering of comments? When adding a new comment triggered by the POST endpoint, do you need to load the book, add the comment to the book, and then update the whole book object? What I am currently doing is having my own CommentRepository, even though the comments are deleted with the book. I could use some direction on how to do this correctly. Since RESTish services expose more than just root objects, I wonder how to handle this at the backend. I am using Hibernate and Spring.
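    As a rough sketch of the composition side only (not a verdict on the repository question), this is how the Book/Comment ownership is often expressed with plain JPA so that deleting a book removes its comments; the entity and field names here are just placeholders.
        import javax.persistence.*;
        import java.util.ArrayList;
        import java.util.List;

        @Entity
        public class Book {
            @Id @GeneratedValue
            private Long id;

            // Composition: comments live and die with their book.
            @OneToMany(mappedBy = "book", cascade = CascadeType.ALL, orphanRemoval = true)
            private List<Comment> comments = new ArrayList<>();

            public void addComment(Comment comment) {
                comment.setBook(this);   // keep both sides of the association in sync
                comments.add(comment);
            }

            public List<Comment> getComments() { return comments; }
        }

        @Entity
        class Comment {                  // would normally live in its own file
            @Id @GeneratedValue
            private Long id;

            @ManyToOne(optional = false)
            private Book book;

            void setBook(Book book) { this.book = book; }
        }
    With a mapping like this, the POST endpoint can load the book, call addComment and save the book; whether you additionally keep a CommentRepository purely for paging and filtering reads is a pragmatic trade-off rather than a rule.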

    Read the article

  • How to appear very professional on my first freelance project?

    - by iamserious
    I need some help on appearing very professional on my first freelance project. How / what should you do to achieve this? Background: I started as a full-time web developer 18 months ago; two promotions later I am now a senior software engineer. I've never had any problems with designing / developing / coding a complicated system. I thought I could use some help for Christmas and I started bidding for a project, and now I have one - from a very reputable lawyers' association in London. I have no problem dealing with the actual implementation of the system, but I have no idea how to appear professional throughout the whole process. About the project: This lawyers' association is starting distance training courses and, in addition to having a website to show off all their clients etc., they want a students' area where their students would log in, download course materials allocated to them etc., and an admin section where they assign courses to students / create new ones / upload materials etc. Questions breakdown:
      1. How should I start with the requirement gathering? Is using Scrum a good idea, or should I use something like the Volere template - and what should I do with it? Should I submit a copy to the client etc.?
      2. How often should I meet the client? Would once a fortnight be good?
      3. What are the processes / protocols that I need to follow so that they would be satisfied with me and think that I am very professional?
      4. How much should I charge for the product?
      5. How should I get a quote / contract / receipt for the whole project?
      6. What are the steps that professional freelancers go through during the life cycle of a project?
    My research so far: Looking at "How much should I charge?" doesn't help; I live in London, zone 1, though I have no idea how much a project of this size would cost. Help on this would be appreciated. "How to be professional" articles talk about work / time management etc. and not the actual process, what real people would do, etc.; it's like academic theory, but not practical. If this needs revising, please let me know; do not close it for whatever reason, I will edit the question or details to fit the needs. Thanks for reading a lengthy question.

    Read the article

  • Forced to be trained [closed]

    - by steeb
    Is it OK to force employees to take training in order to make them sign a contract to stay in the job for X years? Here I'm the only developer in the company, and before this job I developed in VB6, VB.Net and others. There are other technicians here, but they query the database server to make reports and write little programs; they do not spend all day programming. I was hired here to manage some legacy data that had to be migrated to a new system, and I used my experience to accomplish the task. I have also made some utilities and other developments. Since we went live with the new system I've been developing a lot of side programs and add-ons, and I have changed its source code; basically I've been learning from it since then and have become some sort of jack of all trades when some feature needs to be changed or corrected. I have been at this company for a year and a half, but during the last 8 months I've been programming in the system entirely. There have been times when even the implementers ask me how I accomplish certain things. The issue here is that the company has come and told me and other co-workers (who do not program in the system but know basic programming and databases) to take basic training to program for the new system's platform, and when we finish it we will sign a contract to stay in the company for an unspecified time. They have also offered an unspecified better salary. I'm feeling very suspicious, because I already know the basics of the system and I don't understand why I have to take it, given that everybody knows about all the development work I've done. I met with my boss and told him that I should not take that course, because I feel that I can learn more from all the daily requests I'm given, and they could save that money. But he responded that I have to take it and sign after that. I think they want me to be with them for a long time, and I'd like to, but my opinion is that I would like to stay in the company because I feel comfortable with all aspects of the job (including salary), not because I signed a contract that forces me to take on more tasks and to say no if I get other, better job offers. What advice can you give me in this kind of situation?

    Read the article

  • Data Source Use of Oracle Edition Based Redefinition (EBR)

    - by Steve Felts
    Edition-based redefinition is a new feature in the 11gR2 release of the Oracle database. It enables you to upgrade the database component of an application while it is in use, thereby minimizing or eliminating downtime. It works by allowing a pre-upgrade and a post-upgrade view of the data to exist at the same time, providing a hot upgrade capability. You can then specify which view you want for a particular session. See the Oracle Database Advanced Application Developer's Guide for further information. There is also a good white paper at Edition Based Redefinition. Using this feature of the Oracle database does not require any new WebLogic Server functionality. It is set for each connection in the pool automatically by simply specifying SQL ALTER SESSION SET EDITION = edition_name in the Init SQL parameter in the data source configuration. This can be configured either via the console or via WLST (setInitSQL on the JDBCConnectionPoolParams). This SQL statement is executed for each newly created physical database connection. Note that we are assuming that a data source references only one edition of the database. To make use of this feature, you would have an earlier version of the application with a data source that references the earlier EDITION and a later version of the application with a data source that references the later EDITION. Once you start talking about multiple versions of a WLS application, you should be using the WLS "side-by-side" or "versioned" deployment feature. See Developing Applications for Production Redeployment for more information. By combining Oracle database EBR and WLS versioned deployment, the application can be failed over with no downtime, making the combination of features more powerful than either independently. There is a catch - you need to be running with a versioned database and a versioned application initially so that you can then switch versions. The recommended way to version a WLS application is to simply add the "Weblogic-Application-Version" property in the MANIFEST.MF file (you can also specify it at deployment time). The recommended way to configure the data source is to use a packaged data source descriptor that's stored in the ear or war so that everything is self-contained. There are some restrictions. You can't use a packaged data source with Logging Last Resource (LLR) - you need to use a system resource. You can't use an application-scoped packaged data source with EmulateTwoPhaseCommit for the global-transactions-protocol with a versioned application - use a global scope. See Configuring JDBC Application Modules for Deployment for more details. There's one known problem - it doesn't work correctly with an XA data source (patch available with bug 14075837).

    Read the article

  • How do I create weapon attachments?

    - by Tron86
    My question is: I am developing a game for XNA and I am trying to create a weapon attachment for my player model. My player model loads the .md3 format and reads tags for attachment points. I am able to get the tag of my model's hand, and I am also able to get the tag of my weapon's handle. For each tag I am able to get the rotation and position, and this is how I am calculating it:
        Model.worldMatrix = Matrix.CreateScale(Model.scale) *
                            Matrix.CreateRotationX(-MathHelper.PiOver2) *
                            Matrix.CreateRotationY(MathHelper.PiOver2);
    Pretty simple: the player model has a scale and its orientation (it loads on its side, so I just use a 90-degree X axis rotation, and a Y axis rotation to face away from the camera). I then calculate the torso tag on the lower body, which gives me a local coordinate at the waist. Then I take that matrix and calculate the tag_weapon in the upper body. This gives me the hand position in local space. I also get the rotation matrix from that tag, which I store for later use. All this seems to work fine. Now I move on to my weapon:
        Matrix weaponWorld = Matrix.CreateScale(CurrentWeapon.scale) *
                             Matrix.CreateRotationX(-MathHelper.PiOver2) *
                             TagRotationMatrix *
                             Matrix.CreateTranslation(HandTag.Position) *
                             Matrix.CreateRotationY(PlayerRotation) *
                             Matrix.CreateTranslation(CollisionBody.Position) *
    You may notice the weapon matrix gets rotated by 90 degrees on the X axis as well. This is because weapons also load in on their sides. Once again this seems pretty simple and follows the SRT order I keep reading about. My TagRotationMatrix is the hand's rotation. HandTag.Position is its position in local space. CreateRotationY(PlayerRotation) is the player's rotation in world space, and CollisionBody.Position is the player's world location. Everything seems to be in order, and it almost works in game. However, when the gun spawns and follows the player's hand it seems to be flipped on an axis every couple of frames. It's almost like the X or Y axis is being inverted and then put right back. It's hard to explain and I am totally stumped. Even removing all my X axis fixes does nothing to solve the problem. Hopefully I explained everything well enough, as I am a bit new to this! Thanks!

    Read the article

  • Are there any good Java/JVM libraries for my Expression Tree architecture?

    - by Snuggy
    My team and I are developing an enterprise-level application and I have devised an architecture for it that's best described as an "Expression Tree". The basic idea is that the leaf nodes of the tree are very simple expressions (perhaps simple values or strings). Nodes closer to the trunk get more and more complex, taking the simpler nodes as their inputs and returning more complex results for their parents. Looking at it the other way, the application performs some task, and for this it creates a root expression. The root expression divides its input into smaller units and creates child expressions which, when evaluated, it can use to build its own result. The subdividing process continues down to the simplest leaf nodes. There are two very important aspects of this architecture:
      1. It must be possible to manipulate nodes of the tree after it is built. The nodes may be given new input values to work with, and any change in the result for that node needs to be propagated back up the tree to the root node.
      2. The application must make best use of available processors and ultimately be scalable to other computers in a grid or in the cloud. Nodes in the tree will often be updating concurrently and notifying other interested nodes in the tree when they get a new value.
    Unfortunately, I'm not at liberty to discuss my actual application, but to aid understanding a little bit, you might imagine a kind of spreadsheet application being implemented with a similar architecture, where changes to cells in the table are propagated all over the place to other cells that need the result. The spreadsheet could get so massive that applying a multi-core, multi-computer distributed system to solve it would be of benefit. I've got my prototype "Expression Engine" working nicely on a single multi-core PC, but I've started to run into a few concurrency issues (as expected, because I haven't been taking too much care so far), so it's now time to start thinking about migrating the Engine to a more robust library, and that leads to a number of related questions:
      1. Is there any precedent for my "Expression Tree" architecture that I could research?
      2. What programming concepts should I consider? I realise this approach has many similarities to a functional programming style, and I'm already aware of the concepts of using futures and actors. Are there any others?
      3. Are there any languages or libraries that I should study? This question is inspired by my accidental discovery of Scala and the Akka library (which has good support for Actors, Futures, distributed workloads etc.), and I'm wondering if there is anything else I should be looking at as well?
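    For illustration only, here is a tiny Java sketch of the propagation idea described above: each node caches its result, and invalidation bubbles up to the root when an input changes. It deliberately ignores the concurrency and distribution concerns that motivate the question, and all names are made up.
        import java.util.ArrayList;
        import java.util.List;
        import java.util.function.Function;

        abstract class ExpressionNode {
            private ExpressionNode parent;
            private Object cachedResult;

            void setParent(ExpressionNode parent) { this.parent = parent; }

            Object result() {
                if (cachedResult == null) {
                    cachedResult = evaluate();
                }
                return cachedResult;
            }

            // A child calls this when its value changes; the invalidation bubbles to the root.
            void childChanged() {
                cachedResult = null;
                if (parent != null) {
                    parent.childChanged();
                }
            }

            protected abstract Object evaluate();
        }

        class LeafNode extends ExpressionNode {
            private Object value;

            LeafNode(Object value) { this.value = value; }

            void setValue(Object value) {     // manipulating the tree after it is built
                this.value = value;
                childChanged();
            }

            protected Object evaluate() { return value; }
        }

        class CompositeNode extends ExpressionNode {
            private final List<ExpressionNode> children = new ArrayList<>();
            private final Function<List<Object>, Object> combiner;

            CompositeNode(Function<List<Object>, Object> combiner) { this.combiner = combiner; }

            void addChild(ExpressionNode child) {
                child.setParent(this);
                children.add(child);
            }

            protected Object evaluate() {
                List<Object> inputs = new ArrayList<>();
                for (ExpressionNode child : children) {
                    inputs.add(child.result());
                }
                return combiner.apply(inputs);
            }
        }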

    Read the article

  • LibGDX - Textures rendering at wrong position

    - by ACluelessGuy
    Update 2: Let me further explain my problem, since I think I didn't make it clear enough: the Y coordinate at the bottom of my screen should be 0. Instead it is the height of my screen. That means the "higher" I touch/click the screen, the smaller my y-coordinate gets. On top of that, the origin is not inside my screen, at least not the 0 y-coordinate.
    Original post: I'm currently developing a tower defence game for fun using LibGDX. There are places on my map where the player is or is not allowed to put towers. So I created different ArrayLists holding rectangles representing tiles on my map (towerPositions):
        for(int i = 0; i < map.getLayers().getCount(); i++) {
            curLay = (TiledMapTileLayer) map.getLayers().get(i);
            //For all cells of the current layer
            for(int k = 0; k < curLay.getWidth(); k++) {
                for(int j = 0; j < curLay.getHeight(); j++) {
                    curCell = curLay.getCell(k, j);
                    //If there is an actual cell
                    if(curCell != null) {
                        tileWidth  = curLay.getTileWidth();
                        tileHeight = curLay.getTileHeight();
                        xTileKoord = tileWidth*k;
                        yTileKoord = tileHeight*j;
                        switch(curLay.getName()) {
                            //If the layer named "TowersAllowed" is picked
                            case "TowersAllowed":
                                towerPositions.add(new Rectangle(xTileKoord, yTileKoord, tileWidth, tileHeight));
                                // ... AND SO ON
    If the player clicks on an "allowed" field later on, he has the opportunity to build a tower of his choice via a menu. Now here is the problem: the towers render, but they render at the wrong position. (They appear really randomly on the map, with no pattern I can see.)
        for(Rectangle curRect : towerPositions) {
            if(curRect.contains(xCoord, yCoord)) {
                //Using a certain tower in this example (I left the menu out)
                if(gameControl.createTower("towerXY")) {
                    //RenderObject is just a class holding the Texture and x/y coordinates
                    renderList.add(new RenderObject(new Texture(Gdx.files.internal("TowerXY.png")), curRect.x, curRect.y));
                }
            }
        }
    Later on I render it:
        game.batch.begin();
        for(int i = 0; i < renderList.size(); i++) {
            game.batch.draw(renderList.get(i).myTexture, renderList.get(i).x, renderList.get(i).y);
        }
        game.batch.end();
    regards
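    One thing the description above suggests (screen y grows downwards, while the world y of the tile rectangles grows upwards) is that raw touch coordinates are being compared against world coordinates. The usual LibGDX idiom is to unproject the screen point through the camera before hit-testing; a minimal sketch, assuming the OrthographicCamera used for rendering is available:
        import com.badlogic.gdx.graphics.OrthographicCamera;
        import com.badlogic.gdx.math.Vector2;
        import com.badlogic.gdx.math.Vector3;

        public class TouchToWorld {

            // Convert a screen/touch position (origin top-left, y down) into world
            // coordinates (origin bottom-left, y up) before testing the tile rectangles.
            public static Vector2 toWorld(OrthographicCamera camera, int screenX, int screenY) {
                Vector3 touch = new Vector3(screenX, screenY, 0);
                camera.unproject(touch);          // uses the camera's viewport and position
                return new Vector2(touch.x, touch.y);
            }
        }
    Calling toWorld(camera, Gdx.input.getX(), Gdx.input.getY()) would then give xCoord/yCoord in the same space as the towerPositions rectangles.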

    Read the article

  • How to synchronize the ball in a network pong game?

    - by Thaars
    I'm developing a multiplayer network pong game, my first game ever. The current state is: I'm running the physics engine with the same configuration on the server and the clients. The player's own paddle movement is predicted and just gets confirmed by the authoritative server. If a difference is detected between them, I correct the position on the client by interpolation. The opponent's paddle is also interpolated 200ms to 100ms into the past, because the server broadcasts snapshots every 100ms to each client. So far it works very well, but now I have to simulate the ball and have a problem understanding the procedure. I've read Valve's (and many others') articles about fast-paced multiplayer several times and understood their approach. Maybe I can compare my ball with their bullets, but their advantage is that the bullets are not visible. When I have to display the ball, and I see my paddle in the present, the opponent in the past and the server somewhere in between, how can I synchronize the ball across all instances and ensure that it always gets hit by the paddle, even if the paddle is moving fast? Currently my ball's position is simply set by a server update, so it can happen that the ball bounces back even if the paddle is some pixels away (because of a delayed server position). Until now I've got no synced clock across all instances. I'm sending a client step index with each update to the server. If the server has done its job, it sends the snapshot with the last step index of each client back to the clients. Then I look up the stored position at the returned step index and compare them. Do I need a common clock to sync the ball?
    EDIT: I've tried to sync a common clock for the server and all clients with a timestamp, but I think it's better to use my own stepping instead of a timestamp (so I don't need to calculate with the ping and so on - and the timestamp will never be exact). The physics run 60 times per second, and I now use this for keeping them synchronized. Is that a good way? When the ball gets calculated by each client, the angle after bouncing can differ because of the different positions of the paddles (the opponent is 200ms in the past). When the server sends its ball position, velocity and angle (because it knows the position of each paddle and is authoritative), the ball could be in a very different position because of the different angles after bouncing (because the clients receive the server data after 100ms). How is it possible to interpolate such a huge difference? I posted this question some days ago on Stack Overflow but got no answer yet. Maybe this is the better place for this question.
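    As an illustration of the step-indexed reconciliation described above (a sketch only, with made-up names, written in Java even though the game's actual language isn't stated): the client keeps a short history of its predicted ball states keyed by step index, and when an authoritative snapshot for step N arrives it rewinds to N and re-simulates the steps it has already played past N.
        import java.util.HashMap;
        import java.util.Map;

        public class BallReconciler {

            /** Minimal predicted ball state for one simulation step. */
            public static class BallState {
                double x, y, vx, vy;
                BallState(double x, double y, double vx, double vy) {
                    this.x = x; this.y = y; this.vx = vx; this.vy = vy;
                }
            }

            private final Map<Long, BallState> history = new HashMap<>();
            private long currentStep;
            private BallState current = new BallState(0, 0, 0, 0);

            /** Called 60 times per second by the local simulation. */
            public void predictStep() {
                current = integrate(current);            // local physics step
                currentStep++;
                history.put(currentStep, current);
            }

            /** Called when an authoritative snapshot for serverStep arrives. */
            public void onServerSnapshot(long serverStep, BallState serverState) {
                BallState predicted = history.get(serverStep);
                if (predicted == null || differsTooMuch(predicted, serverState)) {
                    // Rewind to the server's state, then replay the steps already simulated locally.
                    BallState corrected = serverState;
                    for (long step = serverStep + 1; step <= currentStep; step++) {
                        corrected = integrate(corrected);
                        history.put(step, corrected);
                    }
                    current = corrected;                 // or blend towards it instead of snapping
                }
                history.keySet().removeIf(step -> step <= serverStep);   // drop confirmed history
            }

            private BallState integrate(BallState s) {
                double dt = 1.0 / 60.0;                  // placeholder physics: one fixed step
                return new BallState(s.x + s.vx * dt, s.y + s.vy * dt, s.vx, s.vy);
            }

            private boolean differsTooMuch(BallState a, BallState b) {
                double threshold = 0.01;                 // tune to taste
                return Math.abs(a.x - b.x) > threshold || Math.abs(a.y - b.y) > threshold;
            }
        }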

    Read the article

  • Building Enterprise Smartphone App – Part 3: Key Concerns

    - by Tim Murphy
    This is part 3 in a series of posts based on a talk I gave recently at the Chicago Information Technology Architects Group. Feel free to leave feedback.

    Key Concerns Of Smartphones In The Enterprise

    These are the factors that you need to be aware of and address in order to build successful enterprise smartphone applications. Most of them have nothing to do with the application itself, as you will see here.

    Managing Devices

    Managing devices is a factor that is going to affect how much your company will have to spend outside of developing the applications. How will you track the devices within the corporation? How often will you have to replace phones and, as a consequence, have to upgrade your applications to support new phones? The devices can represent a significant investment of capital. If these questions are not addressed, you will find a number of hidden costs throughout the life of your solution.

    Purchase or BYOD

    We have seen the trend of Bring Your Own Device (BYOD) lately within the enterprise. How many meetings have you been in where someone is on their personal iPad, iPhone, Android phone or Windows Phone? The question is whether you can afford to support everyone's choice of device. That is a lot to take on even if you only support the current release of each platform. Do you go with the most popular device, or do you pick a platform that best matches your current ecosystem and distribute company-owned devices? There is no easy answer here, but you should be able to give some dollar value to both hardware and development costs related to platform coverage.

    Asset Tracking/Insurance

    Smartphones are easier to lose or have stolen than laptops and desktops. Not only do you have your normal asset management concerns, but also the assignment of financial responsibility. You will also need to insure the devices against damage and theft, and add legal documents that spell out the responsibilities of the employees who use them.

    Personal vs. Corporate Data

    What happens when you terminate an employee? How do you recover the device? What happens when they have put personal data on the device? These are all situations that can cause possible loss of corporate intellectual property or legal repercussions from reclaiming a device with personal data on it. Policies need to be put in place that protect the company from being exposed to this type of loss. This can mean significant legal and procedural costs that you need to consider.

    Coming Up

    In the last installment of this series I will cover application development considerations.

    del.icio.us Tags: Smartphones, Enterprise Smartphone Apps, Architecture

    Read the article

  • Is it unusual for a small company (15 developers) not to use managed source/version control?

    - by LordScree
    It's not really a technical question, but there are several other questions here about source control and best practice. The company I work for (which will remain anonymous) uses a network share to host its source code and released code. It's the responsibility of the developer or manager to manually move source code to the correct folder depending on whether it's been released, what version it is, and so on. We have various spreadsheets dotted around where we record file names, versions and what's changed, and some teams also put details of different versions at the top of each file. Each of the 2-3 teams seems to do this differently within the company. As you can imagine, it's an organised mess: organised, because the "right people" know where their stuff is, but a mess because it's all different and it relies on people remembering what to do at any one time. One good thing is that everything is backed up on a nightly basis and kept indefinitely, so if mistakes are made, snapshots can be recovered. I've been trying to push for some kind of managed source control for a while, but I can't seem to get enough support for it within the company. My main arguments are:

    - We're currently vulnerable; at any point someone could forget to do one of the many release actions we have to do, which could mean whole versions are not stored correctly. It could take hours or even days to piece a version back together if necessary.
    - We're developing new features along with bug fixes, and often have to delay the release of one or the other because some work has not been completed yet. We also have to force customers to take versions that include new features even if they just want a bug fix, because there's only really one version we're all working on.
    - We're experiencing problems with Visual Studio because multiple developers are using the same projects at the same time (not the same files, but it's still causing problems).
    - There are only 15 developers, but we all do things differently; wouldn't it be better to have a standard company-wide approach we all have to follow?

    My questions are:

    1. Is it normal for a group of this size not to have source control?
    2. I have so far been given only vague reasons for not having source control - what reasons would you suggest could be valid for not implementing source control, given the information above?
    3. Are there any more reasons for source control that I could add to my arsenal?

    I'm asking mainly to get a feel for why I have had so much resistance, so please answer honestly. I'll give the answer to the person I believe has taken the most balanced approach and has answered all three questions. Thanks in advance.

    Read the article

< Previous Page | 191 192 193 194 195 196 197 198 199 200 201 202  | Next Page >