Search Results

Search found 5521 results on 221 pages for 'deeper understanding'.

Page 112/221

  • ArchBeat Link-o-Rama for October 17, 2013

    - by OTN ArchBeat
    - Oracle Author Podcast: Danny Coward on "Java WebSocket Programming". In this Oracle Author Podcast, Roger Brinkley talks with Java architect Danny Coward about his new book, Java WebSocket Programming, now available from Oracle Press.
    - Webcast: Why Choose Oracle Linux for your Oracle Database 12c Deployments. Sumanta Chatterjee, VP Database Engineering for Oracle, discusses the advantages of choosing Oracle Linux for Oracle Database, including key optimizations and features, and talks about tools to simplify and speed deployment of Oracle Database on Linux, including Oracle VM Templates, Oracle Validated Configurations, and the pre-install RPM.
    - Oracle BI Apps 11.1.1.7.1 – GoldenGate Integration - Part 1: Introduction | Michael Rainey. Michael Rainey launches a series of posts that guide you through "the architecture and setup for using GoldenGate with OBIA 11.1.1.7.1."
    - Should your team use a framework? | Sten Vesterli. "Some developers have an aversion to frameworks, feeling that it will be faster to just write everything themselves," observes Oracle ACE Director Sten Vesterli. He explains why that's a very bad idea in this short post.
    - Free Poster: Adaptive Case Management in Practice. Thanks to Masons of SOA member Danilo Schmiedel for providing a hi-res copy of the Adaptive Case Management poster, now available for download from the OTN ArchBeat Blog.
    - Oracle Internal Testing Overview: Understanding How Rigorous Oracle Testing Saves Time and Effort During Deployment. Want to understand Oracle Engineering's internal product testing methodology? This white paper takes you behind the curtain.
    - Thought for the Day: "If I see an ending, I can work backward." (Arthur Miller, American playwright, October 17, 1915 – February 10, 2005) Source: brainyquote.com

    Read the article

  • NVIDIA X Server TwinView issues with 2 GeForce 8600GT cards

    - by Big Daddy
    Full disclosure: I am relatively new to Linux and loving the learning curve, but I am undoubtedly capable of ignorant mistakes. I have a rig I am building for my home office desktop. The basics:
    - Gigabyte motherboard
    - AMD 64-bit processor
    - Ubuntu 12.04 64-bit
    - two NVIDIA GeForce 8600GT video cards using an SLI bridge
    - two 22" DVI-input HP monitors
    So here is my issue with X Server (it is driving me nuts). If I plug both monitors into GPU 0, X Server auto-configures TwinView and all is grand; it works like a charm, though both monitors are running off of GPU 0. If I plug one monitor into GPU 0 and one monitor into GPU 1, X Server enables the monitor on GPU 0, and sees but keeps the monitor on GPU 1 disabled. My presumption (we all know the saying about presumptions) is that all I would have to do is select the disabled monitor on GPU 1, drop down the Configure pull-down, and select TwinView... the problem is that when the monitors are plugged in this way, the TwinView option is greyed out and cannot be selected. What am I not understanding here? Is there some sort of configuration I need to do elsewhere for Ubuntu to utilize both GPUs? Any help will be most appreciated; thanks in advance.

    Read the article

  • Can too much abstraction be bad?

    - by m3th0dman
    As programmers, I feel our goal is to provide good abstractions over the given domain model and business logic. But where should this abstraction stop? How do you make the trade-off between abstraction and all its benefits (flexibility, ease of change, etc.) and ease of understanding the code and all its benefits? I believe I tend to write overly abstracted code, and I don't know how good that is; I often tend to write it like some kind of micro-framework, which consists of two parts:
    1. Micro-modules which are hooked up in the micro-framework: these modules are easy to understand, develop and maintain as single units. This code basically represents the code that actually does the functional stuff described in the requirements.
    2. Connecting code; and here, I believe, lies the problem. This code tends to be complicated because it is sometimes very abstracted and is hard to understand at first; this arises from the fact that it is pure abstraction, with the real work and business logic performed by the code described at 1; for this reason this code is not expected to change once tested.
    Is this a good approach to programming? That is, having the changing code very fragmented into many modules and very easy to understand, and the non-changing code very complex from the abstraction point of view? Should all the code be uniformly complex (that is, code 1 more complex and interlinked and code 2 simpler), so that anybody looking through it can understand it in a reasonable amount of time but change is expensive; or is the approach above good, where the "changing code" is very easy to understand, debug and change, and the "linking code" is kind of difficult? Note: this is not about code readability! Both the code at 1 and the code at 2 are readable, but the code at 2 comes with more complex abstractions while the code at 1 comes with simple abstractions.
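    To make the structure concrete, here is a rough, made-up sketch of what I mean (plain Python, every name invented for illustration; it is not my real code). The two small functions are the "micro-modules" from 1, and the Pipeline class is the abstract "connecting code" from 2:

        from typing import Callable, Dict

        # 1. Micro-modules: tiny, concrete, and expected to change often.
        def validate_order(order: dict) -> dict:
            if order.get("quantity", 0) <= 0:
                raise ValueError("quantity must be positive")
            return order

        def price_order(order: dict) -> dict:
            order["total"] = order["quantity"] * order["unit_price"]
            return order

        # 2. Connecting code: pure abstraction, knows nothing about orders or pricing,
        #    and is not expected to change once it has been tested.
        class Pipeline:
            def __init__(self) -> None:
                self._steps: Dict[str, Callable[[dict], dict]] = {}

            def register(self, name: str, step: Callable[[dict], dict]) -> None:
                self._steps[name] = step

            def run(self, data: dict, *step_names: str) -> dict:
                for name in step_names:
                    data = self._steps[name](data)
                return data

        pipeline = Pipeline()
        pipeline.register("validate", validate_order)
        pipeline.register("price", price_order)
        print(pipeline.run({"quantity": 2, "unit_price": 5.0}, "validate", "price"))

    The modules read trivially; all the difficulty is concentrated in the Pipeline-style plumbing, which is exactly the imbalance I am asking about.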

    Read the article

  • Student Life in Nigeria

    - by FelixWehmeyer
    They say that the Nigerian way of life involves being able and ready to succeed in life no matter the obstacles. My name is Olawale Obasola; I graduated from Covenant University in '07 and presently work as a graduate consultant at Oracle. I will give you an insight into student life in Nigeria. Nowadays, being a graduate in Nigeria is not the easiest thing; life after graduation is much harder than one would normally imagine. I guess it isn't helped by the economic turmoil of the last few years, but it's as grueling as it can get. Also, the upper class vs lower class phenomenon makes the journey a little too hurried, as it is easier to lose one's way. Joining Oracle is a dream come true, as it furthers my ambition of taking technology to the core of Nigeria. My initial perception about joining Oracle was that I was going to be monitored every single second and my life would practically revolve around work, but ever since I stepped in here, it has been an amazing experience. As much as the work can be stressful at times, it's also the best place I have ever worked, and the atmosphere is great. My team is the best Pre-Sales team in the region and I gain invaluable knowledge every minute; we have a bond and understanding that helps propel each other to achieve more. The life of a Nigerian graduate isn't a bed of roses, but I have a strong will and the personality to emerge from any hardship and succeed. We show the true strength and spirit of Africa; we never give up and would never settle for anything less than the best.

    Read the article

  • How bad is it to have two methods with the same name but different signatures in two classes?

    - by Super User
    I have a design problem related to a public interface, the names of methods, and the understanding of my API and code. I have two classes like this:

        class A:
            ...
            def collision(self):
                ...
            ...

        class B:
            ...
            def _collision(self, another_object, l, r, t, b):
                ...

    The first class has one public method named collision, and the second has one private method called _collision. The two methods differ in argument type and number. As an example, let's say that _collision checks whether the object is colliding with another object under certain conditions l, r, t, b (collide on the left side, right side, etc.) and returns true or false. The public collision method, on the other hand, resolves all the collisions of the object with other objects. The two methods have the same name because I think it's better to avoid overloading the design with different names for methods that do almost the same thing, but in distinct contexts and classes. Is this clear enough to the reader, or should I change the method's name?
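    To make the context concrete, here is roughly how the two methods end up being used (every name below is made up for illustration; it is not my real code): the public collision() resolves everything in the scene, while the private _collision() answers one narrow yes/no question for a single pair.

        class Body:
            def __init__(self, x, y, w, h):
                self.x, self.y, self.w, self.h = x, y, w, h

            def _collision(self, another_object, l, r, t, b):
                """True if this body overlaps another_object, considering only the enabled sides."""
                overlap_x = (self.x < another_object.x + another_object.w
                             and another_object.x < self.x + self.w)
                overlap_y = (self.y < another_object.y + another_object.h
                             and another_object.y < self.y + self.h)
                return overlap_x and overlap_y and (l or r or t or b)

            def resolve_against(self, other):
                pass  # push-out, bounce, etc. left out of the sketch

        class World:
            def __init__(self, objects):
                self.objects = objects

            def collision(self):
                """Resolve all collisions between all objects in the world."""
                for obj in self.objects:
                    for other in self.objects:
                        if other is not obj and obj._collision(other, l=True, r=True, t=False, b=False):
                            obj.resolve_against(other)

        world = World([Body(0, 0, 1, 1), Body(0.5, 0.5, 1, 1)])
        world.collision()   # same intent as A.collision(), delegating to B._collision()

    The question is whether reusing the name across the two classes reads naturally, or whether the narrower check deserves a more specific name.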

    Read the article

  • Interpolation between two 3D points?

    - by meds
    I'm working with some splines which define a path a character follows (you can see a gameplay video here to get a better understanding of what's going on: http://www.youtube.com/watch?v=BndobjOiZ6g). Basically the character's 'forward' look direction is set to the 'forward' direction of the spline, and when players tilt their phone left and right the character is strafed along its 'right' coordinate. The issue with this is (rather obviously) performance: interpolating over a spline to find the nearest position and tangent relative to the player is an incredibly costly operation. To get by this I cache a finite number of positions in what I call 'SplineDetails'; the class is as follows:

        public class SplineDetails
        {
            public SplineDetails()
            {
                Forward = Vector3.forward;
                Position = Vector3.one * float.MaxValue;
                Alpha = -1;
            }

            public float Alpha;      // [0,1] measured along the length of the spline, where 0 is the initial point and 1 is the end point of the spline
            public Vector3 Position; // the point of the spline at this alpha
            public Vector3 Forward;  // the forward tangent of the spline at this alpha
        }

    I populate this with, say, 30 coordinates, and I can give a rough estimate of a coordinate and 'forward' based on a position passed in. It's not as accurate, but it's much faster. But now I'd like to make the system work better by estimating positions and 'forward' directions by interpolating between two of the cached points, though I'm stuck trying to figure out some logic. My first problem is: how can I determine between which two points the object is? Given that each point can be placed at a different interval along the spline, two points in front of or behind the object can be closer to the object. The other problem is to figure out the proportion between the two points it's between, i.e. if there is a point a at coordinate (0,0,0) and a point b at coordinate (1,0,0), and the object is at position (0.5,0,0), then the result it should give is '0.5' (as it is an equal distance away from point a and point b). That's a simple example, but what if the object is at coordinate (0.5,3,0), for example?
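    The closest I've come to an approach is projecting the player's position onto every segment between consecutive cached points and keeping the nearest one; the projection parameter t would then be exactly the proportion I'm after, and it still gives 0.5 for (0.5,3,0) because the sideways offset drops out of the dot product. Here is a rough sketch in plain Python (not my actual Unity code, and the names are made up); would this be the right idea?

        # Rough sketch: find the nearest cached segment and the proportion along it.

        def sub(a, b):
            return (a[0] - b[0], a[1] - b[1], a[2] - b[2])

        def dot(a, b):
            return a[0] * b[0] + a[1] * b[1] + a[2] * b[2]

        def lerp(a, b, t):
            return tuple(a[i] + (b[i] - a[i]) * t for i in range(3))

        def closest_segment(points, p):
            """points: cached spline positions in order along the spline; p: player position.
            Returns (index of the segment's first point, t in [0,1], closest point on that segment)."""
            best = None
            for i in range(len(points) - 1):
                a, b = points[i], points[i + 1]
                ab = sub(b, a)
                denom = dot(ab, ab)
                t = 0.0 if denom == 0 else max(0.0, min(1.0, dot(sub(p, a), ab) / denom))
                q = lerp(a, b, t)                  # closest point on the segment a-b
                d2 = dot(sub(p, q), sub(p, q))     # squared distance from the player to it
                if best is None or d2 < best[0]:
                    best = (d2, i, t, q)
            _, i, t, q = best
            return i, t, q

        pts = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (2.0, 0.0, 1.0)]
        print(closest_segment(pts, (0.5, 3.0, 0.0)))   # -> (0, 0.5, (0.5, 0.0, 0.0))

    My thinking is that the same t could then be used to blend the two cached Forward vectors and Alpha values, but I'm not sure it handles unevenly spaced cache points well.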

    Read the article

  • Booting Ubuntu 13.10 from USB on server with no OS (Dell PowerEdge T110)

    - by user35581
    I have a Dell PowerEdge T110 (Xeon 1220v2) server with no OS. From my Mac, I was able to save the Ubuntu 13.10 server ISO (x86) and used UNetbootin to write the ISO to a USB stick (2GB). I was hoping to boot the server from USB, and all seemed to be going well; the BIOS even detected my USB drive when I plugged it in, but for some reason I'm getting a "Missing operating system" error. I checked the USB drive, and from a cursory glance it appears that UNetbootin put the correct files on it (although I'm not entirely sure what I should be looking for). Should I be able to boot a server with no OS from a UNetbootin-created Ubuntu 13.10 USB? And if so, why might the BIOS not find the right files? I had read that there is a USB Emulation setting in the BIOS, but I haven't been able to find it in the menus. My understanding is that by default it is set to on. I might try wiping the USB stick and running UNetbootin again. The USB is formatted as MS-DOS FAT16 (should it be formatted some other way?).

    Read the article

  • Field symbol and Data reference in SAP-ABAP

    - by PDP-21
    If we compare field symbols and data references with pointers in C, I concluded the following. In C, say we declare a variable "var" of type "Integer" with a default value of "5". The variable "var" will be stored somewhere in memory, and say the memory address which holds this variable is "1000". Now we define a pointer "ptr" and assign it to our variable. So "ptr" will hold "1000" and "*ptr" will be 5. Let's compare the above situation with SAP ABAP. Here we declare a field symbol "FS" and assign it to the variable "var". Now my question is: what does "FS" hold? I have searched rigorously on the internet but found that many ABAP consultants are of the opinion that FS holds the address of the variable, i.e. 1000. But that is wrong. While debugging I found out that FS holds only 5. So FS (in ABAP) is equivalent to *ptr (in C). Please correct me if my understanding is wrong. Now let's declare a data reference "dref" and another field symbol "fsym", and after creating the data reference we assign it to the field symbol. Now we can do operations on this field symbol. So the difference between a data reference and a field symbol is: in the case of a field symbol, first we declare a variable and assign it to the field symbol; in the case of a data reference, first we create a data reference and then assign that to a field symbol. Then what is the use of a data reference? The same functionality can be achieved through a field symbol alone.

    Read the article

  • I've been told that Exceptions should only be used in exceptional cases. How do I know if my case is exceptional?

    - by tieTYT
    My specific case here is that the user can pass a string into the application, and the application parses it and assigns it to structured objects. Sometimes the user may type in something invalid. For example, their input may describe a person but they may say their age is "apple". Correct behavior in that case is to roll back the transaction and tell the user an error occurred and they'll have to try again. There may be a requirement to report every error we can find in the input, not just the first. In this case, I argued we should throw an exception. He disagreed, saying, "Exceptions should be exceptional: it's expected that the user may input invalid data, so this isn't an exceptional case." I didn't really know how to argue that point, because by the definition of the word, he seems to be right. But it's my understanding that this is why exceptions were invented in the first place. It used to be that you had to inspect the result to see if an error occurred. If you failed to check, bad things could happen without you noticing. Without exceptions, every level of the stack needs to check the result of the methods it calls, and if a programmer forgets to check at one of these levels, the code could accidentally proceed and save invalid data (for example). It seems more error-prone that way. Anyway, feel free to correct anything I've said here. My main question is: if someone says exceptions should be exceptional, how do I know if my case is exceptional?
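    For concreteness, the kind of code I have in mind looks roughly like this (hypothetical parsing code in plain Python, not our real application): the exception carries every problem found, and a caller higher up the stack cannot silently ignore the failure.

        # Hypothetical sketch: validate user input, collect every error, raise once.

        class InvalidInputError(Exception):
            def __init__(self, errors):
                super().__init__("; ".join(errors))
                self.errors = errors

        def parse_person(fields):
            errors = []
            name = fields.get("name", "").strip()
            if not name:
                errors.append("name is required")
            age = None
            try:
                age = int(fields.get("age", ""))
                if age < 0:
                    errors.append("age must not be negative")
            except ValueError:
                errors.append("age must be a number, got %r" % fields.get("age"))
            if errors:
                raise InvalidInputError(errors)   # roll back / report; impossible to overlook
            return {"name": name, "age": age}

        try:
            parse_person({"name": "", "age": "apple"})
        except InvalidInputError as e:
            print("Please fix:", e.errors)

    With return codes instead, every level between the parser and the transaction boundary has to remember to check and propagate the result, which is exactly the failure mode I'm worried about.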

    Read the article

  • Does the term "Learning Curve" include knowing the gotchas?

    - by voroninp
    When you learn a new technology you spend time understanding its concepts and tools. But when technology meets real life, strange and unpleasant things happen. Requirements are often far from ideal and differ from the 'classic' scenario, and soon I find myself bending the technology to my real needs. At this point I begin to discover bugs in the system, or that it is not as flexible as it seemed at the very beginning. And this 'fighting' with the technology consumes a great part of the development time. What is more depressing is that such gotchas and workarounds are not collected in one place (a book, a site, etc.), and before you really confront one you cannot ask the correct question, because you do not even suspect the reason for the problem (an unknown unknown). So my question consists of three parts: 1) Do you really manage to predict possible future problems, and how? 2) How much time do you spend finding the workaround/fix/solution before you leave it and switch to other problems? 3) What are the criteria for considering yourself experienced in a technology? Do you take these gotchas into account?

    Read the article

  • ADF Mobile - Update through Web Service (with ADF Business Components)

    - by Shay Shmeltzer
    In my previous blog entry I went over the basics of exposing ADF Business Components through service interfaces, and developing a simple ADF Mobile application that accesses and fetches data from those services. In this entry we'll dive a bit deeper and address an update scenario through these web service interfaces. You can see the full demo video at the end of the post. In the first steps I show how to add an explicit method execution to fetch a specific record we want to update on the second page of a flow. For an update you'll be invoking a service method and passing the record you want to update as a parameter. As in many other Web services scenarios, we need to provide a complete object of a specific type to the method. The ADF Web service data control helps you here by offering an object of this type that you can drag and drop into your page. The next step is to make sure to fill that object with the values you want to update. In the demo we do this through coding in a backing bean that shows how to use the AdfmfJavaUtilities utility. The code gets the value from one field, gets a pointer to the parallel update field, and then copies from one to the other. At the end of the bean we manually execute the call to the update method on the Web service.

    Here is the demo (embedded video).

    Here is the code used in the backing bean in the demo above:

        package a.mobile;

        import javax.el.MethodExpression;
        import javax.el.ValueExpression;
        import oracle.adfmf.amx.event.ActionEvent;
        import oracle.adfmf.framework.api.AdfmfJavaUtilities;
        import oracle.adfmf.framework.model.AdfELContext;

        public class backing {

            public backing() {
            }

            public void copyAndUpdate(ActionEvent actionEvent) {
                AdfELContext adfELContext = AdfmfJavaUtilities.getAdfELContext();

                // Copy each value from the display binding to the parallel update binding.
                ValueExpression ve =
                    AdfmfJavaUtilities.getValueExpression("#{bindings.DepartmentName.inputValue}", String.class);
                ValueExpression ve3 =
                    AdfmfJavaUtilities.getValueExpression("#{bindings.DepartmentName1.inputValue}", String.class);
                ve3.setValue(adfELContext, ve.getValue(adfELContext));

                ve = AdfmfJavaUtilities.getValueExpression("#{bindings.DepartmentId.inputValue}", int.class);
                ve3 = AdfmfJavaUtilities.getValueExpression("#{bindings.DepartmentId1.inputValue}", int.class);
                ve3.setValue(adfELContext, ve.getValue(adfELContext));

                ve = AdfmfJavaUtilities.getValueExpression("#{bindings.ManagerId.inputValue}", int.class);
                ve3 = AdfmfJavaUtilities.getValueExpression("#{bindings.ManagerId1.inputValue}", int.class);
                ve3.setValue(adfELContext, ve.getValue(adfELContext));

                ve = AdfmfJavaUtilities.getValueExpression("#{bindings.LocationId.inputValue}", int.class);
                ve3 = AdfmfJavaUtilities.getValueExpression("#{bindings.LocationId1.inputValue}", int.class);
                ve3.setValue(adfELContext, ve.getValue(adfELContext));

                // Invoke the update method on the Web service through the binding.
                MethodExpression me =
                    AdfmfJavaUtilities.getMethodExpression("#{bindings.updateDepartmentsView1.execute}", Object.class, new Class[] {});
                me.invoke(adfELContext, new Object[] {});
            }
        }

    Read the article

  • Why is apt-cache so slow?

    - by Damn Terminal
    After upgrade to Trusty (14.04) from Saucy (13.10), all apt operations are very slow, even those that do not include downloading anything or connecting to any servers. For example, displaying the apt policy

        # time apt-cache policy
        [...]
        real    0m8.951s
        user    0m5.069s
        sys     0m3.861s

    takes almost ten seconds! Mostly a weird lag right after issuing the command, and it's the same even if I issue the same command again. On another system it doesn't take a tenth of a second:

        real    0m0.096s
        user    0m0.070s
        sys     0m0.023s

    The other system is a little beefier, but there was no noticeable difference before the upgrade. It's the same with apt-get and anything apt-related. How do I find out the source of this lag and fix it? Additional info:

        # cat /etc/nsswitch.conf
        # /etc/nsswitch.conf
        #
        # Example configuration of GNU Name Service Switch functionality.
        # If you have the `glibc-doc-reference' and `info' packages installed, try:
        # `info libc "Name Service Switch"' for information about this file.

        passwd:         compat
        group:          compat
        shadow:         compat
        hosts:          files dns
        networks:       files
        protocols:      db files
        services:       db files
        ethers:         db files
        rpc:            db files
        netgroup:       nis

    BTW, is my understanding of how apt-cache works correct? It doesn't make any network connections when I run apt-cache policy, right? In case I'm wrong and it matters, here are my sources: https://gist.github.com/anonymous/02920270ff68e23fc3ec

    Read the article

  • ADF Faces Skin Editor - How to Work with It

    - by Shay Shmeltzer
    The ODTUG Kscope11 conference was a great success, with lots of sessions about FMW running in a special track. I did several sessions and labs at the conference, and I thought it might be a good idea to at least give you a taste of what you might have missed. So here is most of what I demoed in my ADF Faces Skinning session (not all of it, though: that session was 60 minutes long, and even though everyone did end up leaving the building in the middle for about 5 minutes because of a fire drill, there were other things covered in the session as well). In the demo here you'll see how to generate new images and a default color scheme, how to identify a component class with Firebug, how to skin a component, how to identify the global selector of a property, how to change fonts and how to change strings. By the way, for more on ADF skinning you should also listen to the ADF Insider seminar that Frank Nimphius recorded on skinning; it will give you a better understanding of the overall skinning process. P.S. In the demo I add an entry to the web.xml file which prevents ADF Faces from compressing the HTML that is generated. The entry is for org.apache.myfaces.trinidad.DISABLE_CONTENT_COMPRESSION and I set it to true. This is very useful when you work on creating the skin, but don't forget to un-set it before you go to production.

    Read the article

  • Learning node.js

    - by john smith
    I am not sure if this is the right place to ask, but I thought it was the most suitable. I recently graduated from university, where I learned the full PHP stack; basically all the LAMP stuff, obviously without counting all the other subjects. I hadn't even got my degree and this whole node.js thing boomed out of nowhere. You can imagine how one can feel about this; the story is always the same: you never stop learning and studying. So I recently got my hands on node.js: reading books, tutorials, and everything imaginable on the internet. The problem is simple: this is nowhere near having a teacher standing next to you, helping you understand and solve your problems, especially when all you can do is post your doubts on a website and patiently wait for replies. It's not that it isn't good, it's just much slower than what I described above. So, in short: is there a place where one can find someone willing to teach you about such content? This would obviously be done via remote means, like Skype and such. Can anyone here point me in the right direction? Or just downvote me for being on the wrong website? Thanks in advance.

    Read the article

  • Customer Experience in the Year Ahead

    - by Christina McKeon
    With 2012 coming to an end soon, we find ourselves reflecting on the year behind us and the year ahead. Now is a good time to reflect on your customer experience initiatives to see how far you have come and where you need to go. Looking back on your customer experience efforts this year, were you able to accomplish the following?
    - Customer journey mapping
    - Align processes across the entire customer lifecycle (buying and owning)
    - Connect all functional areas to the same customer data
    - Deliver consistent and personal experiences across all customer touchpoints
    - Make it easy and rewarding to be your customer
    - Hire and develop talent that drives better customer experiences
    - Tie key performance indicators (KPIs) to each of your customer experience objectives
    This is by no means a complete checklist for your customer experience strategy, but it does help you determine if you have moved in the right direction for delivering great customer experiences. If you are just getting started with customer experience planning, or were not able to get to everything on your list this year, consider focusing on customer journey mapping in 2013. This exercise really helps your organization put the customer in the center and understand how everything you do affects that customer. At Oracle, we see organizations in various stages of customer experience maturity all learn a lot when they go through journey mapping. Companies just starting out with customer experience get a complete understanding of what it is like to be a customer and how everything they do affects that customer. And organizations that are further along with customer experience often find journey mapping helps provide perspective when revisiting their customer experience strategy. Happy holidays and best wishes for delivering great customer journeys in 2013!

    Read the article

  • Apress "Pro DLR in .NET 4" - ISBN 978-1-4302-3066-3 - Initial comments

    - by TATWORTH
    The dynamic language runtime (DLR) is a radical development of .NET. In some ways it is like the laser was 40 years ago: a solution looking for a problem. At the moment the DLR supports languages such as IronRuby and IronPython, together with dynamic extensions for C# and VB.NET. Where the DLR will also score is the ability to write your own .NET language for specialist areas. So how does this book fare in introducing the DLR? It is a book that will require careful study, and perhaps reading several times, before fully understanding the subject. You will need to spend time trying out the sample code. So who would I recommend this book to? I recommend it to C# development teams for their library. I recommend it to individuals who not only know C# but have a good history of learning other computer languages. It is not a book that can just be "dipped into", but will require one or more reads from start to finish. This is no reflection on the skill of the author, but of the newness of the material.

    Read the article

  • Pulling in changes from a forked repo without a request on GitHub?

    - by Alec
    I'm new to the social coding community and don't know how to proceed properly in this situation: I created a GitHub repository a couple of weeks ago. Someone forked the project and has made some small changes that have been on my to-do list. I'm thrilled someone forked my project and took the time to add to it. I'd like to pull the changes into my own code, but I have a couple of concerns. 1) I don't know how to pull in the changes via git from a forked repo. My understanding is that there is an easy way to merge the changes via a pull request, but it appears as though the forker has to issue that request. 2) Is it acceptable to pull in changes without a pull request? This relates to the first one. I'd put the code aside for a couple of weeks and came back to find that what I was going to work on next had been done by someone else, and I don't want to just copy their code without giving them credit in some way. Shouldn't there be a way to pull the changes in even if they don't explicitly ask you to? What's the etiquette here? I may be overthinking this, but thanks for your input in advance. I'm pretty new to the hacker community, but I want to do what I can to contribute!

    Read the article

  • Best practice for combining a Java Applet/Android interface?

    - by Pearsonartphoto
    I'm working on an online game for which I am seriously considering writing a Java applet. The game is not overly complex feature-wise. I'm considering at some point having at least 3 versions of the game: a stand-alone Java version, an applet, and an Android game. I know from Design Patterns that the best way to handle differing things like buttons and the like is to use a Bridge interface, where I have a common template for the common buttons. However, I'm having a bit of difficulty understanding what to do about the following. I know that Android programs use an Activity structure, which I am well familiar with, and that applets extend the Applet class, which I am not as familiar with. I also know that a stand-alone Java program basically uses a main() function, which doesn't have much structure. I'm convinced that there should be a way to design a common pattern between them, but somehow I'm missing what that is exactly. What can I do to make the different frameworks work with as much common code as possible?

    Read the article

  • Why is lshw only listing one of my two HDDs?

    - by Mark
    I have a 120GB HDD and a 1TB HDD in the system. I installed Ubuntu Server using LVM on the 120GB. After installation I added the 1TB to the existing volume group and added 10GB to /home as a test. My understanding is that lshw is supposed to list hardware. What's the difference here?

        mark@server:~$ sudo lshw -short -c disk
        H/W path            Device      Class  Description
        =========================================================
        /0/100/1f.2/0       /dev/sda    disk   120GB ST3120026AS
        /0/100/1f.2/1       /dev/cdrom  disk   DVD+-RW GH50N
        /0/1/0.0.0          /dev/sdc    disk   SCSI Disk
        /0/1/0.0.1          /dev/sdd    disk   SCSI Disk
        /0/1/0.0.2          /dev/sde    disk   SCSI Disk
        /0/1/0.0.3          /dev/sdf    disk   SCSI Disk

    The 1TB only shows up as a volume, not as a disk.

        mark@server:~$ sudo lshw -short
        ....
        /0/100/1f.2/0       /dev/sda    disk    120GB ST3120026AS
        /0/100/1f.2/0/1     /dev/sda1   volume  476MiB Linux filesystem partition
        /0/100/1f.2/0/2     /dev/sda2   volume  111GiB Linux LVM Physical Volume partition
        /0/100/1f.2/1       /dev/cdrom  disk    DVD+-RW GH50N
        /0/100/1f.2/0.0.0   /dev/sdb    volume  931GiB WDC WD1001FALS-0

    Thanks, Mark

    Read the article

  • Does your organization still use the term "screens" to describe a user interface?

    - by bit-twiddler
    I have been in the field long enough to remember when the term "screen" entered our lexicon. As difficult as it is to believe, the early systems on which I worked had no user interface (UI); that is, unless one counts a keypunch machine and job listings as a user interface. These systems ran as "card image" production jobs, back in a day when being a computer operator required a reasonably deep understanding of how computers worked. Flashing forward to today: I cringe every time I hear a systems practitioner use the term "screen." The metaphor no longer fits the medium. The term somewhat fit back when the user dialog consumed 100% of the available monitor real estate; however, it lost its relevance the moment we moved to windowed environments. With the above said, does your organization still use the term "screens" to describe an application's UI? Has anyone successfully purged the term from an organization? For those who do not use the term to describe UI dialog elements, what term do you use in place of "screen"?

    Read the article

  • Transition from 2D to 3D Game development [closed]

    - by jakebird451
    I have been working in the 2D world for a long time, from manual blitting in Windows to SDL to Python (pygame, pyopengl) and a bunch in between. Needless to say, I have been programming for a while. So a while ago I started to program in OpenGL via C++ on my Mac. I then got a little intricate with my work after a while (3D models with a skeleton structure, and terrain development). After a long time of tinkering I stopped, because all that heavy work yielded only a low-level understanding of how OpenGL works. Still interested in graphics and game development, I went on a search for a stable game engine with some features to grow on:
    - Licence requirement: anything other than GPL (LGPL will do)
    - OS requirement: Mac & Windows
    - Shader: GLSL or CG (GLSL preferred due to experience)
    - Models: any model structure with rigging (bone) support & animation
    I am looking at http://www.ogre3d.org/ currently and am starting to meddle around with some examples. However, I am a little reluctant to spend a lot of time on it only to hit another dead end. So instead of falling down a spiraling black pit, I am posting my question to you guys to lead me in the right direction based on my requirements. How was your experience with the engine you recommend? Is it well documented? Does it have well documented examples? Any library requirements (Boost, libpng, etc.)?

    Read the article

  • Developing momentum on open source projects

    - by sashang
    Hi, I've been struggling to develop momentum contributing to open source projects. I have in the past tried with gcc, and contributed a fix to libstdc++, but it was a one-off, and even though I spent months of my spare time on the dev mailing list and reading through things, I just never seemed to develop any momentum with the code. Eventually I unsubscribed, got my free time back and uncluttered my mailbox. Like a lot of people I have some little defunct open source projects lying around on the net, but they're not large and I'm the only contributor. At the moment I'm more interested in contributing to a large open source project, and I want to know how people got started, because I find it difficult while working full time to develop any momentum with the code base. Other more regular contributors, who are on the project full-time, are able to make changes at will, and as a result enter that positive feedback cycle where they understand the code and also know where it's heading. It makes the barrier to entry higher for those that come along later. My questions are for people who actively contribute to large open source projects, like the Linux kernel, gcc, clang/llvm, or anything else with, say, a developer head count of more than 10. How did you get started? Was there a large chunk of time in your life that you could just dedicate to working on the project? I know in Linus's case he had a chunk of time (6 months) to get it started. What barriers to entry did you encounter? Can you describe the initial stages of your time with the project, from when you had little understanding of the code to when you understood enough to commit regularly? Thanks

    Read the article

  • 3D space game development

    - by user1693061
    I want to develop a 3D game (sci-fi type with spaceships) which can be played in multiplayer mode, and by multiplayer I mean around 10 players for a start, as it will be a personal testing project and mostly educational. I have been searching for some days for the available languages and engines, but I am kind of confused. Since I have been learning Java during my first year at an IT university and have a pretty good understanding of it, I thought I would go with Java and develop the game as an applet so it could be played in a browser. After going through an applet game tutorial I understood how graphics work in an applet. So... 1st question: Could an applet carry the burden of a 3D game, especially in multiplayer? My thinking: it's a space game, so the graphics should not be such a big problem, since it won't be that crowded with entities apart from ships, planets and some effects. If a Java applet is not the way to go for my project, I wouldn't mind "developing it on the desktop" (I mean not making it a browser game). 2nd question: Should I use the Unity engine for my purpose (a space game)? If not, name another language/engine combo.

    Read the article

  • Invitation: Oracle Endeca Information Discovery Boot Camp

    - by mseika
    The Oracle Endeca Information Discovery (OEID) Boot Camp is designed to give partners an understanding of OEID's features, and how it complements the existing Oracle Business Intelligence suite. Participants will learn how to develop & implement solutions using a Data Discovery method. Training is in English.

    What will be covered?
    The Oracle Endeca Information Discovery (OEID) Boot Camp is a three-day class with a combination of lecture and hands-on exercises, tailored to make participants aware of the Oracle Endeca Information Discovery platform, and to gain valuable skills for the implementation of projects. The course will follow a combination of lectures and hands-on lab sessions, to allow participants to apply the knowledge they have gained by extracting from sample data sources, and creating an end-user application that will be used to answer several business questions.

    What You Will Learn
    • Architecture: OEID components, use of graphs, overview of clustering
    • OEID Installation: architecture planning, infrastructure requirements, installation process, production hints & tips
    • OEID Administration: data store management, administrative operations, portal configuration, data sources, system monitoring
    • Indexing: Integration Suite, data source analysis, graph (ETL) creation, record design techniques
    • Portlets: Studio portlets, custom portlet development, querying functions
    • Reporting: Studio applications & best practices, visualizations, EQL

    Prerequisites
    You must bring a laptop with you for the hands-on labs.

    ENVIRONMENT – LAPTOP REQUIREMENTS
    For the OEID boot camp, participants will perform the hands-on lab exercises using a virtual machine image. These virtual machines will be provided to participants within a cloud environment, requiring participants to bring a laptop to the Boot Camp that can access a Windows server utilizing Microsoft RDP. Participants will not need to install any software onto their laptops, but must ensure that they have the proper software installed for their OS to connect through RDP to a server.

    HARDWARE
    • CPU: Dual-core, x64, 1.8GHz or higher
    • RAM: 2GB

    SOFTWARE
    • Microsoft Remote Desktop Client
    • Internet Explorer 7, Firefox, or Google Chrome

    This boot camp is intended for prospective implementers of Oracle Endeca Information Discovery (OEID), or those in a presales role looking to gain insight into the technical benefits of this new package. Attendees should have experience and familiarity with the basic concepts of business intelligence.

    Where and When?
    Monday, October 15th until Wednesday, October 17th included, 9:00 - 18:00
    Oracle France
    15, boulevard Charles de Gaulle
    92715 Colombes
    Access
    Register Here
    Limited number of seats!

    Read the article

  • GeoTrust SSL brand name used by re-sellers

    - by Christopher
    I feel like I got the bait-and-switch from my web host provider, since they advertise "GeoTrust SSL" for $99. I purchased it thinking the certificate would be issued by geotrust.com, but then I got an email from Comodo saying they are providing it. My host provider says they get a discount by using Comodo. I purchased the certificate with the understanding it would be issued by GeoTrust. I called the host provider and they said they usually expect it from GeoTrust, but someone from email support responded saying the product name is "GeoTrust SSL" but they use Comodo to get a discount. I think this is bogus and an unfair trade practice. However, searching for "GeoTrust" on Google brings up a ton of websites selling "GeoTrust" certificates. How can companies get away with this? Since the host provider is part of the BBB, I plan to ask them to update the purchase page on their website to state clearly that... "This certificate is provided at a discount and may be issued by a provider other than GeoTrust.com, such as Comodo.com". Any feedback on this is appreciated.

    Read the article
