Search Results

Search found 10543 results on 422 pages for 'big bang theory'.


  • Talking JavaOne with Rock Star Charles Nutter

    - by Janice J. Heiss
    JavaOne Rock Stars, conceived in 2005, are the top-rated speakers from the JavaOne Conference. They are chosen by their peers, who recognize them through conference surveys for their outstanding sessions and speaking ability. Over the years many of the world's leading Java developers have been so recognized. We spoke with distinguished Rock Star Charles Nutter.

    A JRuby Update from Charles Nutter

    Charles Nutter of Red Hat is well known as a lead developer of JRuby, a Java implementation of the Ruby language that is tightly integrated with the JVM, allowing the interpreter to be embedded into any Java application with full two-way access between the Java and the Ruby code. Nutter is giving the following sessions at this year's JavaOne:

    CON7257 – "JVM Bytecode for Dummies (and the Rest of Us Too)"
    CON7284 – "Implementing Ruby: The Long, Hard Road"
    CON7263 – "JVM JIT for Dummies"
    BOF6682 – "I've Got 99 Languages, but Java Ain't One"
    CON6575 – "Polyglot for Dummies" (both with Thomas Enebo)

    I asked Nutter to give us the latest on JRuby. "JRuby seems to have hit a tipping point this past year," he explained, "moving from 'just another Ruby implementation' to 'the best Ruby implementation for X,' where X may be performance, scaling, big data, stability, reliability, security, and a number of other features important for today's applications. We're currently wrapping up JRuby 1.7, which improves support for Ruby 1.9 APIs, solves a number of user issues and concurrency challenges, and utilizes invokedynamic to outperform all other Ruby implementations by a wide margin. JRuby just gets better and better."

    When asked what he thought about the rapid growth of alternative languages for the JVM, he replied, "I'm very intrigued by efforts to bring a high-performance JavaScript runtime to the JVM. There's really no reason the JVM couldn't be the fastest platform for running JavaScript with the right implementation, and I'm excited to see that happen."

    And what is Nutter working on currently? "Aside from JRuby 1.7 wrap-up," he explained, "I'm helping the Hotspot developers investigate invokedynamic performance issues and test-driving their new invokedynamic code in Java 8. I'm also starting to explore ways to improve the general state of dynamic languages on the JVM using JRuby as a guide, and to help the JVM become a better platform for all kinds of languages."

    Originally published on blogs.oracle.com/javaone.
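    (As a minimal sketch of that embedding through the standard JSR-223 scripting API; it assumes the JRuby jar is on the classpath, which registers the "jruby" engine name:)

        import javax.script.ScriptEngine;
        import javax.script.ScriptEngineManager;
        import javax.script.ScriptException;

        public class JRubyEmbedDemo {
            public static void main(String[] args) throws ScriptException {
                // Requires the JRuby jar on the classpath to register "jruby"
                ScriptEngine jruby = new ScriptEngineManager().getEngineByName("jruby");
                // Java calls into Ruby; the evaluated result comes back to Java
                Object result = jruby.eval("[1, 2, 3].map { |n| n * 2 }.inspect");
                System.out.println(result); // prints [2, 4, 6]
            }
        }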

  • IoT? Time for Enterprise Architecture

    - by OTN ArchBeat
    Of course you've been listening to the latest OTN ArchBeat Podcast on the challenges and opportunities in the Internet of Things. If so, you'll also be interested in ZDNet blogger Joe McKendrick's recent post, "Will the 'Internet of Things' make CIOs' jobs harder?" In that post McKendrick offers this important bit of advice, which will certainly have architects saying "I told you so": enterprises need to develop architectural approaches to the management of data, meaning the development of repeatable processes to source, ingest, transform, and store information. For years, IT managers simply bought more hardware and addressed data with one-off integration projects. Now it's time for enterprise architecture. IoT is an important new phase in the evolution of enterprise IT. Challenging? You bet! But meeting any such challenge requires big, broad thinking and planning. In that context Enterprise Architecture has always been important. But as IoT gains traction and speed, enterprise architecture should be top of mind for all concerned.

  • IT Optimization Plan Pays Off For UK Retailer

    - by [email protected]
    I caught this article in ComputerworldUK yesterday. The headline reports that UK-based supermarket chain Morrisons is increasing its IT spend... OK, sounds good. Even nicer that Oracle is a big part of that. But what caught my eye were three things: 1) Morrisons truly has a long-term strategy for IT; in this case, modernizing and optimizing how they use IT for business advantage. 2) Even in a tough economic climate, Morrisons views IT investments as contributing to and improving the bottom line. Specifically, "The investment in IT contributed to a 21 percent increase in Morrisons' underlying profit." 3) The phased, three-year "Optimization Plan" took a holistic approach to their business, from CRM and Supply Chain systems to the underlying application infrastructure. On the infrastructure front, adopting a more flexible Service-Oriented Architecture enabled them to be more agile and adapt their business, and Identity Management helped with sometimes mundane (but costly) issues like lost passwords and being able to document who has access to what. Things don't always turn out so rosy. And I know it was a long and difficult process... but it's nice to see a happy ending every once in a while.

  • How to make the run button run the project, not the file, in Eclipse

    - by Roy T.
    I'm using the Spring IDE, a variant of Eclipse, to create a Java project. One big irritation I have is that when I press the run button, Eclipse tries to run the current file, which usually fails because it doesn't have a main method. I've set up run configurations in the hope that this would make the play button default to the run configuration instead of the current file, but that doesn't work either.

    Now, to run my application correctly, I have to press the little arrow next to play, select my favorite run configuration, and then it works. This is only two extra clicks, but it's tedious, the button is small, and I feel like I shouldn't have to perform these extra steps. I mean, what is the point of run configurations and projects if it still tries to run a file by default? Even more preferably, I wouldn't even want to touch the mouse but just press Ctrl+F11, but this has the same behavior. All of the above applies to debugging as well, by the way.

    So my question is this: how do I make the run and debug buttons (and their shortcut keys) default to the project's run configuration instead of trying (and failing) to run only the current file, much like it is in Visual Studio and other IDEs?

  • Ubuntu 11.10 won't boot on Dell XPS 8300

    - by Phil Gorman
    I have a brand new Dell Studio XPS 8300 desktop with an i7-2600 CPU, H67 chipset, 8GB DDR3, two 1TB HDDs in mirrored RAID, and an AMD Radeon 6770. Dell doesn't support Ubuntu here in Australia, so it came with Windows 7 and Windows software. Yes, I had to pay for an OS and software I didn't want to get the hardware I did want, all at a greatly inflated price. It's not all beer and skittles in the land of Oz. I changed boot priorities in the BIOS to DVD and ran Ubuntu 11.10 64-bit from the ISO with NOMODESET. The installation reformatted all partitions to rid me of the dreaded Windows. All was well until reboot. The BIOS does its thing, then it's "The Black Screen of Death" with a blinking cursor: no boot screen, no GRUB, no keyboard, no mouse. I've searched the Dell and Ubuntu forums in vain. Can you help? I would be really grateful for any advice that can help turn my big expensive paperweight into a really useful machine. Thank you in anticipation, kind people. Phil

  • How Visual Studio 2010 and Team Foundation Server enable Compliance

    - by Martin Hinshelwood
    One of the things that makes Team Foundation Server (TFS) the most powerful Application Lifecycle Management (ALM) platform is the traceability it provides to those that use it. This traceability is crucial in enabling many companies to adhere to the Compliance regulations to which they are bound (e.g. 21 CFR Part 11 or Sarbanes-Oxley). From something as simple as relating Tasks to check-ins, or being able to see the top 10 files in your codebase that are causing the most Bugs, to identifying which Bugs and Requirements are in which Release: all that information, and more, is available in TFS.

    Although all of this traceability is available within TFS, you do need to understand that it is not for free. Well... I say that, but if you are using TFS properly you will have this information with no additional work except for firing up the reporting. Using Visual Studio ALM and Team Foundation Server you can relate every line of code changed all the way up to Requirements and back down through Test Cases to the Test Results.

    Figure: The only thing missing is Build

    In order to build the relationship model below we need to examine how each of the relationships gets there. Each member of your team, from programmer to tester and Business Analyst to Business, has a role to play in knitting this together.

    Figure: The relationships required to make this work can get a little confusing

    If Build is added to this, to relate Work Items to Builds, then with knowledge of which builds are in which environments you can easily identify what is contained within a Release.

    Figure: How are things progressing

    Along with the ability to produce the progress and trend reports, the traceability that is built into TFS can be used to fulfil most audit requirements out of the box, and augmented to fulfil the rest. In order to understand the relationships, let's look at each of the important Artifacts and how they are associated with each other.

    Requirements – The root of all knowledge

    Requirements are the thing that the business cares about delivering. These could be derived as User Stories or Business Requirements Documents (BRDs), but they should be what the Business asks for. Requirements can be related to many of the Artifacts in TFS, so let's look at the model:

    Figure: If the centre of the world was a requirement

    We can track which releases Requirements were scheduled in, but this can change over time as more details come to light.

    Figure: Who edited the Requirement and when

    There is also the ability to query Work Items based on the history of changes that were made to them. This is particularly important with Requirements. It might not be enough to say what Requirements were completed in a given release, but also to know which Requirements were ever assigned to a particular release.

    Figure: Some magic required, but result still achieved

    As an augmentation to this, it is also possible to run a query that shows results from the past, just as if we had a time machine. You can take any Query in the system and add an "asof" clause at the end to query historical data in the operational store for TFS.

        select <fields> from WorkItems [where <condition>] [order by <fields>] [asof <date>]

    Figure: Work Item Query Language (WIQL) format

    In order to achieve this you do need to save the query as a *.wiql file on your local computer and edit it in Notepad, but once imported into TFS you can run it any time you want.
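    For instance, a minimal historical query of that shape might look like this (the field names are the standard TFS reference names; the date is illustrative):

        select [System.Id], [System.Title], [System.State]
        from WorkItems
        where [System.WorkItemType] = 'Requirement'
        order by [System.Id]
        asof '2010-06-01'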
    Figure: Saving Queries locally can be useful

    All of these Audit features are available throughout the Work Item Tracking (WIT) system within TFS.

    Tasks – Where the real work gets done

    Tasks are the workhorse of the development team, but they are only as useful as Excel if you do not relate them properly to other Artifacts.

    Figure: The Task Work Item Type has its own relationships

    Requirements should be broken down into Tasks that the development team works from to build what is required by the business. This may be done by a small dedicated group or by everyone that will be working on the software, but however it happens, all of the Tasks created should be a Child of a Requirement Work Item Type.

    Figure: Tasks are related to the Requirement

    Tasks should be used to track the day-to-day activities of the team working to complete the software, and as such they should be kept simple and short, lest developers think they are more trouble than they are worth.

    Figure: Task Work Item Type has a narrower purpose

    Although the Task Work Item Type describes the work that will be done, the actual development work involves making changes to files that are under Source Control. These changes are bundled together in a single atomic unit called a Changeset, which is committed to TFS in a single operation. During this operation developers can associate Work Items with the Changeset.

    Figure: Tasks are associated with Changesets

    Changesets – Who wrote this crap

    Changesets themselves are just an inventory of the changes that were made to a number of files to complete a Task.

    Figure: Changesets are linked by Tasks and Builds

    Figure: Changesets tell us what happened to the files in Version Control

    Although comments can be changed after the fact, the inventory and Work Item associations are permanent, which allows us to Audit all the way down to the individual change level.

    Figure: On Check-in you can resolve a Task, which automatically associates it

    Because of this we can view the history on any file within the system and see how many changes have been made and what Changesets they belong to.

    Figure: Changes are tracked at the File level

    What would be even more powerful would be if we could view these changes superimposed over the top of the lines of code. Some people call this a blame tool, because it is commonly used to find out which of the developers introduced a bug, but it can also be used as another method of Auditing changes to the system.

    Figure: Annotate shows the lines

    The Annotate functionality allows us to visualise the relationship between the individual lines of code and the Changesets.

    In addition to this you can create a Label and apply it to a version of your version control. The problem with Labels is that they can be changed after they have been created, with no traceability. This makes them practically useless for any sort of compliance audit. So what do you use?

    Branches – And why we need them

    Branches are a really powerful tool for development and release management, but they are most important for audits.

    Figure: One way to Audit releases

    The R1.0 branch can be created from the Label that the Build creates on the R1 line when a Release build was created. It can be created as soon as the Build has been signed off for release. However it is still possible that someone changed the Label between this time and its creation. Another, better method can be to explicitly link the Build output to the Build.
    Builds – Let's tie some more of this together

    Builds are the glue that enables the next level of traceability by tying everything together.

    Figure: The dashed pieces are not out of the box but can be enabled

    When the Build is called and starts, it looks at what it has been asked to build and determines what code it is going to get and build.

    Figure: The folder identifies what changes are included in the build

    The Build sets a Label on the Source with the same name as the Build, but the Build itself also includes the latest Changeset ID that it will be building. At the end of the Build, the Build Agent identifies the new Changesets it is building by looking at the Check-ins that have occurred since the last Build.

    Figure: What changes have been made since the last successful Build

    It will then use that information to identify the Work Items that are associated with all of those Changesets; the Changesets are associated with the Build, and the "Integrated In" field of those Work Items is changed.

    Figure: Find all of the Work Items to associate with

    The "Integrated In" field of all of the Work Items identified by the Build Agent as being integrated into the completed Build is updated to reflect the Build number that successfully integrated that change.

    Figure: Now we know which Work Items were completed in a build

    Now we can link a single line of code changed all the way back through the Task that initiated the action to the Requirement that started the whole thing, and back down to the Build that contains the finished Requirement. But how do we know whether that Requirement has been fully tested, or even meets the original Requirements?

    Test Cases – How we know we are done

    The only way we can know whether a Requirement has been completed to the required specification is to Test that Requirement. In TFS there is a Work Item type called a Test Case. Test Cases enable two scenarios.

    The first scenario is the ability to track and validate Acceptance Criteria in the form of a Test Case. If you agree with the Business on a set of goals that must be met for a Requirement to be accepted by them, it makes it difficult for them to reject a Requirement when it passes all of the tests, and it also provides a level of traceability and validation for audit that a feature has been built and tested to order.

    Figure: You can have many Acceptance Criteria for a single Requirement

    It is crucial for this to work that someone from the Business signs off on the Test Case moving from the "Design" to "Ready" states.

    The second is the ability to associate an MSTest test with the Test Case, thereby tracking the automated test. This is useful when you want to track a test, and the test results, of a Unit Test designed to prove the existence of a Bug and then guard against its re-emergence.

    Figure: Associating a Test Case with an automated Test

    Although it is possible, it may not make sense to track the execution of every Unit Test in your system; there are, however, many Integration and Regression tests that may be automated which it would make sense to track in this way.

    Bug – Let's not have regressions

    In order to know whether a Bug in the application has been fixed, and to make sure that it does not reoccur, it needs to be tracked.

    Figure: Bugs are the centre of their own world

    If the fix to a Bug is big enough to require that it is broken down into Tasks, then it is probably a Requirement. You can associate a check-in with a Bug and have it tracked against a Build.
    You would also have one or more Test Cases to prove the fix for the Bug.

    Figure: Bugs have many associations

    This allows you to track Bugs / Defects in your system effectively and report on them.

    Change Request – I am not a feature

    In the CMMI Process template, Change Requests can also be easily tracked through the system. In some cases it can be very important to track Change Requests separately, as an Auditor may want to know what was changed and who authorised it. Again, and similar to Bugs, if the Change Request is big enough that it would need to be broken down into Tasks, it is in reality a new feature and should be tracked as a Requirement.

    Figure: Make sure your Change Requests only affect Requirements and do not rewrite them

    Conclusion

    Visual Studio 2010 and Team Foundation Server together provide an exceptional Application Lifecycle Management platform that can help your team comply with even the harshest of Compliance requirements while still enabling them to be Agile. Most Audits are heavy on required documentation, but most of that information is captured for you as long as you do it right. You don't even need every team member to understand it all, as each of the Artifacts is relevant to a different type of team member:

    Business Analysts manage Requirements and Change Requests
    Programmers manage Tasks and check in against Change Requests and Bugs
    Testers manage Bugs and Test Cases
    Build Masters manage Builds

    Although there is some crossover, there are still roles or "hats" that are worn. Do you think this is all achievable? Have I missed anything that you think should be there?

  • Reading a large SQL Errorlog

    - by steveh99999
    I came across an interesting situation recently where a SQL instance had been configured with auditing of successful and failed logins written to the errorlog. This meant that every time a user or the application connected to the SQL instance, an entry was written to the errorlog, which in turn meant huge SQL Server errorlogs. Opening an errorlog in the usual way, using SQL Server Management Studio, was extremely slow. Luckily, I was able to use xp_readerrorlog to work around this. Here are some example queries.

    To show errorlog entries from the currently active log, just for today:

        DECLARE @now DATETIME
        DECLARE @midnight DATETIME
        SET @now = GETDATE()
        SET @midnight = DATEADD(d, DATEDIFF(d, 0, GETDATE()), 0)
        EXEC xp_readerrorlog 0, 1, NULL, NULL, @midnight, @now

    To find out how big the current errorlog actually is, and what the earliest and most recent entries in it are:

        CREATE TABLE #temp_errorlog (Logdate DATETIME, ProcessInfo VARCHAR(20), Text VARCHAR(4000))
        INSERT INTO #temp_errorlog EXEC xp_readerrorlog 0 -- for current errorlog
        SELECT COUNT(*) AS 'Number of entries in errorlog',
               MIN(logdate) AS 'ErrorLog Starts',
               MAX(logdate) AS 'ErrorLog Ends'
        FROM #temp_errorlog
        DROP TABLE #temp_errorlog

    To show just DBCC history information in the current errorlog:

        EXEC xp_readerrorlog 0, 1, 'dbcc'

    To show backup entries in the current errorlog:

        CREATE TABLE #temp_errorlog (Logdate DATETIME, ProcessInfo VARCHAR(20), Text VARCHAR(4000))
        INSERT INTO #temp_errorlog EXEC xp_readerrorlog 0 -- for current errorlog
        SELECT * FROM #temp_errorlog WHERE ProcessInfo = 'Backup' ORDER BY Logdate
        DROP TABLE #temp_errorlog

    xp_readerrorlog is an undocumented system stored procedure, so there is no official Microsoft link describing the parameters it takes; however, there's a good blog on this here. And if you do have a problem with huge errorlogs, please consider running the system stored procedure sp_cycle_errorlog on a nightly or regular basis. But if you do this, remember to change the number of errorlogs you retain; the default of 6 might not be sufficient for you.
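    As a rough sketch of that housekeeping (the retention value of 12 here is just an assumption, adjust to your own policy; the same setting is exposed in Management Studio by right-clicking SQL Server Logs):

        -- cycle the errorlog (e.g. from a nightly SQL Agent job)
        EXEC sp_cycle_errorlog

        -- raise the number of retained errorlogs from the default of 6 to 12
        EXEC xp_instance_regwrite N'HKEY_LOCAL_MACHINE',
             N'Software\Microsoft\MSSQLServer\MSSQLServer',
             N'NumErrorLogs', REG_DWORD, 12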

  • Effective way to check if an Entity/Player enters a region/trigger

    - by Chris
    I was wondering how multiplayer games detect whether you enter a special region. Let's assume there is a map so big that simply checking every region would become a huge performance issue. I've seen Bukkit (a modding API for Minecraft servers) fire an event on every single move. I don't think that larger games do the same, because even if you have only a few coordinates you are interested in, you have to loop through every trigger zone to see if the player is inside your region, for every player. This seems like an extremely CPU-intensive operation to me, even though I've never developed something like that. Is there a special algorithm that larger games use to accomplish this? The only thing I can imagine is to split the world up into multiple parts, register the event not on the movement itself but on all the parts that are covered by your area, and only check the areas that are registered in the current part.

    And another thing I would like to know: how could you detect when someone must have entered a trigger even though you never saw them directly inside it, because their client only sent you a move packet shortly before entering and another after leaving the trigger area? Drawing a line and calculating all the colliding parts seems rather CPU-intensive if you have to perform it every time.
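    A minimal sketch of that splitting idea, assuming 2D coordinates, axis-aligned trigger boxes, and a cell size at least as large as the biggest trigger (all assumptions), might look like this:

        # Spatial-grid sketch: register each trigger in every cell its AABB
        # overlaps, then test a player only against triggers in the cell
        # under the player's position.
        from collections import defaultdict

        CELL = 32.0  # cell size; should be >= the largest trigger extent

        class TriggerGrid:
            def __init__(self):
                self.cells = defaultdict(list)

            def _cell(self, x, y):
                return (int(x // CELL), int(y // CELL))

            def add(self, trigger):  # trigger = (min_x, min_y, max_x, max_y, name)
                min_x, min_y, max_x, max_y, _ = trigger
                cx0, cy0 = self._cell(min_x, min_y)
                cx1, cy1 = self._cell(max_x, max_y)
                for cx in range(cx0, cx1 + 1):
                    for cy in range(cy0, cy1 + 1):
                        self.cells[(cx, cy)].append(trigger)

            def triggers_at(self, x, y):
                hits = []
                for min_x, min_y, max_x, max_y, name in self.cells.get(self._cell(x, y), ()):
                    if min_x <= x <= max_x and min_y <= y <= max_y:
                        hits.append(name)
                return hits

        grid = TriggerGrid()
        grid.add((100.0, 100.0, 120.0, 130.0, "lava_pit"))
        print(grid.triggers_at(110.0, 125.0))  # ['lava_pit']
        print(grid.triggers_at(500.0, 500.0))  # []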

  • Internship in License Contract Management

    - by cristian.condurache(at)oracle.com
    Hi Everyone, My name is Luca. I am an intern on the License Contract Management team in Italy. I studied Economics and Business in Pescara and finished my Master's Degree in July 2009. After a short work experience near my home town, I decided to look for a job in an international company. I got in touch with Oracle in January 2010. I had a telephone interview and then a face-to-face interview. On a cold and grey morning, I arrived in Milan... my first impression was fantastic... a big modern building with wide TVs everywhere. I was a little nervous but very excited. I understood this could be a great opportunity...

    The interview went well and I started work in March. After a training period I was quickly involved in the closing of the last quarter of the fiscal year, of which May is the last month at Oracle. Working as a License Contract Manager is a real challenge for a fresh graduate. It involves thoroughly understanding the Oracle Policies and Practices with regard to License Contracts. In my experience, especially in May, I learnt to work under high pressure, within time constraints, and to keep up with constant changes. In this period I also had the opportunity to be involved in different negotiations, being directly in contact with the customers. This helped me to develop my relational skills during complex transactions.

    Looking back at the nine months at Oracle, I can say I have a better understanding of the IT world. It is a complex environment that changes continuously, offering new challenges to learn from every time. If you have any questions related to this article, feel free to contact [email protected]. You can find our job opportunities via http://campus.oracle.com.

    Technorati Tags: License Contract Management, opportunity, Oracle Policies, internship

  • What is a generic term for name/identifier? (as opposed to label)

    - by d3vid
    I need to refer to a number of things that have both an identifier value (used in code and configuration) and a human-readable label. These things include:

    database columns
    dropdown items
    subapplications
    objects stored in a dictionary

    I want two unambiguous terms: one to refer to the identifier/value/key, and one to refer to the label. As you can see, I'm pretty settled on the latter :) For the former, identifier seems best (not everything is strictly a key, and value and name could refer to the label; although identifier usually refers only to a variable name), but I would prefer to follow an established practice if there is one.

    Is there an established term for this? (Please provide a source.) If not, are there any examples of a choice from a significant source (the Java APIs, MSDN, a big FLOSS project)?

    (I wasn't sure if this should be posted here or to English Language & Usage. I thought this was the more appropriate expert audience. Happy to migrate if not.)

  • Oracle Partner Days and Oracle Days are coming to a city in EMEA near you!

    - by Javier Puerta
    Oracle Partner Days

    A new round of Oracle Partner Days is coming to a large number of European cities. These events are exclusive to Oracle partners and will deliver real business return on your OPN membership. You will hear about the business opportunities coming from the adoption of the entire Oracle stack, learn the latest products' value propositions and related sales strategy, and be able to connect directly with Oracle executives and find new business opportunities with other partners in your region. The EMEA Oracle Partner Days are local/regional live events targeting the key contacts in sales and consultancy, delivering Oracle strategy, engagement around the several perspectives of the Oracle portfolio, executive keynotes, and deep-dive, business-content-related breakout sessions. The first city will be Frankfurt, on Oct. 29. Check the full list to find an Oracle Partner Day in a city near you.

    Oracle Days

    Oracle Days will be hosted after Oracle OpenWorld across EMEA, throughout October and November. By attending an Oracle Day, customers and partners can:

    Learn how to leverage the power of the Oracle stack, by hearing customer case studies about successful business transformation, and by following cross-stack solution tracks within the agenda
    Discuss key issues for business and IT executives in cloud, big data, social, and mobile solutions, and network with peers who are facing the same challenges
    Meet Oracle experts and watch live demos of new products
    Get the latest news from Oracle OpenWorld

    See the full calendar and cities here.

  • Reflecting on 2010 and Looking into 2011

    - by Sam Abraham
    In early 2010, I blogged and shared my excitement as I was about to embark on a new journey, relocating to South Florida. As I settled down and adjusted to my new life, I was presented with an opportunity to get actively involved and volunteer in the local Florida .Net and Project Management communities. I have since devoted a significant portion of my time to community initiatives: coordinating the West Palm Beach .Net User Group, volunteering as a member of the INETA Speakers Bureau, and traveling to attend and speak at .Net code camps and user groups throughout the states of Florida and New York. I have also taken on various volunteer roles at the South Florida Chapter of the Project Management Institute, starting as a core team member on the chapter's mentoring initiative and ending the year as Project Manager of the chapter's mentoring program and as Director of Electronic Communications on the chapter's IT team. I am also serving a one-year term (2010-2011) as secretary and founding board member of Florida's first official chapter of the International Association for Software Architects (IASA). A big thank you is due to those who afforded me the opportunity and privilege to take part in these initiatives, and to those who provided guidance and encouragement when I needed them most.

    Looking ahead into 2011, I hope to continue my community involvement and volunteer activities. I will start by dedicating the first five weekends of the New Year to teaching a free comprehensive Microsoft PowerPoint class at church. My goal will be to start from scratch and slowly cover the various PowerPoint features that can be leveraged to create captivating presentations. Starting February, I will be resuming my user group/code camp speaking engagements at our South Florida .Net Code Camp and the West Palm Beach .Net User Group. I look forward to continuing to meet, chat and share with our technical community members, and to another active year in community service.

    All the best,
    --Sam Abraham

  • What are the best practices for phasing out obsolete code?

    - by P.Brian.Mackey
    I have the need to phase out an obsolete method. I am aware of the [Obsolete] attribute. Does Microsoft have a recommended best-practice guide for doing this? Here's my current plan:

    A. I do not want to create a new assembly, because developers would have to add a new reference to their projects and I expect to get a lot of grief from my boss and co-workers if they must do this. We also do not maintain multiple assembly versions; we only use the latest version. Changing this practice would require changing our deployment process, which is a big issue (we would have to teach people how to do things with TFS instead of FinalBuilder, and get them to give up FinalBuilder).

    B. Mark the old method obsolete.

    C. Because the implementation is changing (not the method signature), I need to rename the method rather than create an overload. So, to make users aware of the proper method, I plan to add a message to the [Obsolete] attribute. This part bothers me, because the only change I'm making is decoupling the method from the connection string. But because I'm not adding a new assembly, I see no way around this.

    Result:

        [Obsolete("Please don't use this anymore because it does not implement IMyDbProvider. Use XXX instead.")]
        public static Dictionary<string, Setting> ReadSettings(string settingName)
        {
            return ReadSettings(settingName, SomeGeneralClass.ConnectionString);
        }

        public Dictionary<string, Setting> ReadSettings2(string settingName)
        {
            // IMyDbProvider private member added to the class to supply the
            // connection string; probably have to make this an instance method.
            return ReadSettings(settingName, _myDbProvider.ConnectionString);
        }

  • JOB OF THE WEEK

    - by jessica.ebbelaar(at)oracle.com
    Help Desk Support Specialist - Budapest (Hungary)

    Do you have French and English language skills, and are you living in Hungary? Then this could be the role for you to start your career with at Oracle. We now have an opening as Help Desk Support Specialist in our office in Budapest. In this role you will respond to requests for technical assistance by phone, email and/or using our help desk management system. We are looking for candidates with a passion for customer service. Next to that, planning and organizing, problem solving, and time management are important competencies to have for this role. If you have already had some exposure to Bio-Pharmaceutical or Clinical companies, that is a big plus. It is a great opportunity not only for graduates, but for all who want to start their career at Oracle, and a unique chance to work in a multinational team together with colleagues from all over the world!

    If you are interested in this position, read more here! For all of our other vacancies and internships, please visit https://campus.oracle.com.

  • Masters or Second Bachelors Degree..or neither

    - by drD
    I have a degree in Business Administration, because at the time I didn't know what I wanted to do. I have been interested in programming for the past two years and have taken some action to teach myself. My experience/knowledge base is limited to the following:

    - Read Kochan's Programming in C
    - Read the iOS and Objective-C books from the Big Nerd Ranch series
    - Took a C++ course at NYU; I thought it would be a good way to start to get a grasp on OO and design

    I would like to continue developing my skills, but most of all, re-orient how I am perceived as a professional. I am fully aware of what a novice I am in this subject and would greatly appreciate any guidance anyone could give me. I currently have a job, so full-time study is not an option. My goal is to become a software/applications developer. My questions are:

    - Should I take up a second bachelor's in computer science? Or a master's? Or continue taking professional certificate programs (how are these viewed?)
    - If a master's in computer science, would that make sense if I don't have the formal foundation? (Being a chief without ever being an Indian.)
    - General advice for a novice to develop skill

    Thank you in advance for helping me out.

  • Second monitor not detected

    - by configurator
    Note: I've seen this question quite a lot, but in all the cases I could find with answers, the answer was either "I don't know" or "use nvidia-settings" (which is irrelevant to me).

    I'm using Intel Sandybridge Desktop graphics, with a P8H61-M LE motherboard. How do I get Ubuntu to detect my second monitor? Clicking "Detect Displays" here doesn't do anything. Here's some system info:

        $ lspci | grep VGA
        00:02.0 VGA compatible controller: Intel Corporation 2nd Generation Core Processor Family Integrated Graphics Controller (rev 09)

        $ uname -a
        Linux clyde 3.5.0-13-generic #13-Ubuntu SMP Tue Aug 28 08:31:47 UTC 2012 x86_64 x86_64 x86_64 GNU/Linux

        $ hardinfo [copied from the UI]
        -Display-
        Resolution : 1920x1080 pixels
        Vendor : The X.Org Foundation
        Version : 1.12.3
        -Monitors-
        Monitor 0 : 1920x1080 pixels
        -Extensions-
        BIG-REQUESTS Composite DAMAGE DOUBLE-BUFFER DPMS DRI2 GLX Generic Event Extension MIT-SCREEN-SAVER MIT-SHM RANDR RECORD RENDER SECURITY SGI-GLX SHAPE SYNC X-Resource XC-MISC XFIXES XFree86-DGA XFree86-VidModeExtension XINERAMA XInputExtension XKEYBOARD XTEST XVideo
        -OpenGL-
        Vendor : Intel Open Source Technology Center
        Renderer : Mesa DRI Intel(R) Sandybridge Desktop
        Version : 3.0 Mesa 8.1-devel
        Direct Rendering : Yes

    I've tried upgrading everything from ppa:xorg-edgers/ppa and ppa:glasen/intel-driver. I've also installed various tools I've found in other threads (e.g. hardinfo), but they weren't really helpful to me, as I don't know what to make of the data. How do I get Ubuntu to detect my second monitor?
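    (For anyone in the same spot, a hedged starting point from a terminal; output names such as VGA1 and HDMI1 vary by driver, so check the query listing first:)

        $ xrandr --query                                # list the outputs the driver reports
        $ xrandr --output HDMI1 --auto --right-of VGA1  # try enabling a detected-but-off output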

  • WolframAlpha Can Now Do In-depth Analysis of Your Facebook Account

    - by Jason Fitzpatrick
    If you're a big fan of WolframAlpha's ability to crunch the numbers on just about anything (and we certainly are), you'll likely be just as delighted as we were to watch it massage the data from your Facebook account. Find out your most liked, discussed, and shared posts, see your Facebook habits, and spot other neat trends.

    I unleashed it on my account this morning, not sure what to expect from the results. Within the results tabulation, WolframAlpha provided me with all sorts of neat data breakdowns. I now know exactly how many days it is to my next birthday, the composition of my aggregate posting habits (how many posts are status updates, links, or photos), the time of day when I do the most posting (and what the composition of those posts is), and my average post length. I also know my most liked post and my most commented-on post. It will even crunch the numbers on your network of friends (60.6% of my friends are married, for example). By far one of the more interesting analyses it performs on the friendship data, however, is organizing all your friends into relationship clusters, so you can see who in your Facebook network is friends with other people in your Facebook network.

    The service from WolframAlpha is free: simply visit the WolframAlpha search portal and type in "Facebook report" to start the process. You'll be prompted to create a WolframAlpha account if you don't have one and to authorize the WolframAlpha Facebook app to access your data. Your Facebook data is cached to your WolframAlpha account for one hour in order to crunch the numbers and display the results.

  • Reasons NOT to open source not-for-profit code?

    - by naught101
    I am a big fan of open source code. I think I understand most of the advantages of going open source. I'm a science student researcher, and I have to work with quite a surprising amount of software and code that is not open source (either it's proprietary, or it's not public). I can't really see a good reason for this, and I can see that the code, and the people using it, would definitely benefit from being more public (if nothing else, in science it's vital that your results can be replicated if necessary, and that's much harder if others don't have access to your code).

    Before I go out and start proselytising, I want to know: are there any good arguments for not releasing not-for-profit code publicly, and with an OSI-compliant license? (I realise there are a few similar questions on SE, but most focus on situations where the code is primarily used for making money, and I couldn't find much that was relevant in the answers.)

    Clarification: by "not-for-profit", I am including downstream profit motives, such as parent-company brand recognition and investor profit expectations. In other words, the question relates only to software for which there is no profit motive tied to the software whatsoever.

  • Rawr Code Clone Analysis&ndash;Part 0

    - by Dylan Smith
    Code Clone Analysis is a cool new feature in Visual Studio 11 (vNext). It analyzes all the code in your solution and attempts to identify blocks of code that are similar, and thus candidates for refactoring to eliminate the duplication. The power lies in the fact that the blocks of code don't need to be identical for Code Clone to identify them; it will report Exact, Strong, Medium and Weak matches indicating how similar the blocks of code in question are.

    People that know me know that I'm enthusiastic about both writing clean code and taking old crappy code and making it suck less. So the possibilities for this feature have me pretty excited, if it works well, and that's a big if that I'm hoping to explore over the next few blog posts. I'm going to grab the Rawr source code from CodePlex (a World of Warcraft gear-calculator program), run Code Clone Analysis against it, then go through the results one by one and refactor where appropriate, blogging along the way. My goals with this blog series are twofold:

    Evaluate and demonstrate Code Clone Analysis
    Provide some concrete examples of refactoring code to eliminate duplication and improve the code-base

    Here are the initial results. Code Clone Analysis has found:

    129 Exact Matches
    201 Strong Matches
    300 Medium Matches
    193 Weak Matches

    Also indicated is that there was a total of 45,181 potentially duplicated lines of code that could be eliminated through refactoring. Considering the entire solution only has 109,763 lines of code, that duplicated-lines figure, if accurate, is pretty significant. In the next post we'll start examining some of the individual results and determine if they really do indicate a potential refactoring.
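    To make the refactoring half concrete before we start, here is an invented C# pair of the kind Code Clone Analysis reports as a Strong match, and one way to collapse it; all of the names are hypothetical, not from the Rawr code-base:

        class Stats { public double AttackPower; public double CritRating; }

        class DpsCalculator
        {
            // Before: two near-identical formulas, differing only in one weight
            double MeleeDps(Stats s)  { return s.AttackPower * 0.45 + s.CritRating * 1.2; }
            double RangedDps(Stats s) { return s.AttackPower * 0.30 + s.CritRating * 1.2; }

            // After: the shared shape is hoisted and the difference becomes a parameter
            double Dps(Stats s, double attackPowerWeight)
            {
                return s.AttackPower * attackPowerWeight + s.CritRating * 1.2;
            }
        }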

  • What should every programmer know about web development?

    - by Joel Coehoorn
    What things should a programmer implementing the technical details of a web application consider before making the site public? If Jeff Atwood can forget about HttpOnly cookies, sitemaps, and cross-site request forgeries all in the same site, what important thing could I be forgetting as well?

    I'm thinking about this from a web developer's perspective, such that someone else is creating the actual design and content for the site. So while usability and content may be more important than the platform, you the programmer have little say in that. What you do need to worry about is that your implementation of the platform is stable, performs well, is secure, and meets any other business goals (like not costing too much, not taking too long to build, and ranking as well with Google as the content supports).

    Think of this from the perspective of a developer who's done some work for intranet-type applications in a fairly trusted environment, and is about to have his first shot at putting out a potentially popular site for the entire big bad world wide web.

    Also, I'm looking for something more specific than just a vague "web standards" response. I mean, HTML, JavaScript, and CSS over HTTP are pretty much a given, especially when I've already specified that you're a professional web developer. So going beyond that, which standards? In what circumstances, and why? Provide a link to the standard's specification.
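    (To pick one of those examples: the HttpOnly flag is a single attribute on the Set-Cookie response header, with illustrative values:)

        Set-Cookie: SESSIONID=a3fWa1x9; Path=/; Secure; HttpOnly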

  • Should I use procedural animation?

    - by user712092
    I have started to make a fantasy 3D FPS swordplay game and I want to add animations. I don't want to animate everything by hand because that would take a lot of time, so I have decided to use procedural animation. I would certainly use IK (starting with simply reaching for an object with a hand...). I also assume that procedural generation of animations will leave fewer animations to do by hand (I can blend animations...).

    I would also like a planner for animation that would simplify complex animations: those that can be split into a sequence (run and then jump, jump and then roll) or that are separable (legs running while the torso swings a sword). I want, for example, a character to chop the head off a big troll. If the troll crouches, the character would simply chop its head off; if it is standing, he would climb onto the troll. I know that I would have to describe the state ("troll is low", "troll is high", "chop troll head"...), which would imply which regions the animation takes place in (if there is a gap between them, the character would jump), which would in turn imply where the character can place his legs and hands, or which predefined animation to choose.

    My main goal is simplicity of coding, but I also want my game to look cool. Is it worth using procedural animation, or does it create more trouble than it solves? (There can be a lot of twiddling...) I am using the Blender Game Engine (therefore Python for scripting, and Bullet physics).
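    For flavour, here is a minimal CCD (cyclic coordinate descent) IK sketch in plain Python, one building block of what is described above; the bone lengths and target are illustrative:

        import math

        def ccd_ik(lengths, angles, target, iterations=10):
            """Cyclic coordinate descent: nudge each joint so the tip approaches target."""
            for _ in range(iterations):
                for j in reversed(range(len(angles))):
                    # forward kinematics: joint positions for the current angles
                    pts, a, x, y = [(0.0, 0.0)], 0.0, 0.0, 0.0
                    for length, ang in zip(lengths, angles):
                        a += ang
                        x += length * math.cos(a)
                        y += length * math.sin(a)
                        pts.append((x, y))
                    jx, jy = pts[j]
                    tip_x, tip_y = pts[-1]
                    # rotate joint j by the angle between joint->tip and joint->target
                    current = math.atan2(tip_y - jy, tip_x - jx)
                    desired = math.atan2(target[1] - jy, target[0] - jx)
                    angles[j] += desired - current
            return angles

        # two bones of length 1, reaching for a point within the chain's range
        print(ccd_ik([1.0, 1.0], [0.5, 0.5], (1.2, 0.8)))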

  • Meet The MySQL Experts Podcast: MySQL Utilities

    - by Wei-Chen Chiu
    Managing a MySQL database server can become a full-time job. On many occasions one MySQL DBA needs to manage multiple, even tens of, MySQL servers, and tools that bundle a set of related tasks into a common utility can be a big time saver, allowing you to spend more time improving performance and less time executing repetitive tasks. While there are several such utility libraries to choose from, it is often the case that you need to customize them to your needs. The MySQL Utilities library is the answer to that need. It is open source, so you can modify and expand it as you see fit.

    In the latest episode of the "Meet the MySQL Experts" podcast series, Chuck Bell, Sr. MySQL Software Developer at Oracle, introduces a variety of recently released MySQL Utilities and explains how DBAs can save significant time using them. Listen to the podcast and learn the highlights in 10 minutes. If you want further details, attend the on-demand webinar for a more complete introduction, including:

    Use cases for each utility
    How to group utilities for even more usability
    How to modify utilities for your needs
    How to develop and contribute new utilities

    Enjoy!

  • How can I convert the Nvidia driver installer into a deb?

    - by Oli
    Every so often there's a beta version of the Nvidia driver that I want to try out. This has happened today: there's been a big performance issue with version 295.40 and I want to try the shiny new XRandR-enabled 302.07. I'm more than able to download the installer, remove all the repo-installed driver files and install the new version, but it's frankly a pain in the bottom to turn that around and go back to the repo version. It also means I have to re-install the driver manually each time there's a kernel upgrade.

    The other option we commonly give people is a PPA, but in this case I'm being really impatient. It's going to be a few days before any PPA gets this, and I need to try it today. I've already manually installed it on the media centre and I'm eyeing up my desktop now.

    So how do I take an installer (e.g. NVIDIA-Linux-x86-302.07.run) and convert that into a new nvidia-current/nvidia-current-updates package? Another way of asking this might be: how do people package the Nvidia drivers?
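    (For reference, the .run wrapper can at least unpack and back itself out; these flags are part of NVIDIA's installer, though the real Debian packaging layers DKMS and alternatives handling on top:)

        $ sh NVIDIA-Linux-x86-302.07.run --extract-only    # unpack the installer contents for inspection
        $ sudo sh NVIDIA-Linux-x86-302.07.run --uninstall  # back out a manual install later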

  • How should I safely send bulk mail? [closed]

    - by Jerry Dodge
    First of all, we have a large software system we've developed, and a number of clients use it in their own environments. Each of them is responsible for their own equipment and resources; we don't provide any shared services. We have introduced an automated email system which sends emails via SMTP. Usually it only sends around 10-20 emails a day, but it's quite possible for it to send bulk email to thousands of people in a single day. That of course is a big haul of work, which isn't necessarily the problem.

    The issue arises with the SMTP server we're using. The email server is allowed a certain number of relays a day, which is paid for. That isn't really the issue either. The real risk is getting the email server blacklisted. It's inevitable, and we need to take all of this carefully into consideration.

    As far as I can see, the ideal setup would be to have at least 50 IP addresses on multiple servers, each of which hosts its own SMTP server. When sending bulk email, the system would divide the messages up across these servers, and each one would process its own queue. If one of those IPs gets blacklisted, it would be decommissioned and a new IP would replace it.

    Is there a better way that doesn't require us to invest in a large handful of servers? Perhaps a third-party service which is meant exactly for this?
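    A rough sketch of that fan-out idea in Python (hostnames are placeholders; a real run would batch per host and reuse connections):

        # Rotate outgoing mail across a pool of SMTP relays so no single IP
        # carries the whole run.
        import itertools
        import smtplib
        from email.mime.text import MIMEText

        RELAYS = ["smtp1.example.com", "smtp2.example.com", "smtp3.example.com"]

        def send_bulk(messages, sender):
            """messages: iterable of (recipient, subject, body) tuples."""
            pool = itertools.cycle(RELAYS)
            for recipient, subject, body in messages:
                msg = MIMEText(body)
                msg["From"], msg["To"], msg["Subject"] = sender, recipient, subject
                host = next(pool)
                # one connection per message for clarity only
                with smtplib.SMTP(host) as smtp:
                    smtp.sendmail(sender, [recipient], msg.as_string())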

  • installing ubuntu on SSD

    - by kunal
    Going to install Ubuntu 10.10 on a new Intel X25-M 80GB SSD. It will be a fresh install. I have been googling for the past few days and getting overwhelming numbers of articles/blogs/Q&As. One particularly useful one being: Optimize for SSD (I could not post other links as I don't have enough credits). But with so many suggestions and differences of opinion (on different links), this simple OS install process seems a daunting task to me, and I really want to stick with Ubuntu (although I have used it for only a very short period of time). Can someone help me by answering a few questions (yes, they are repeated, as I couldn't comprehend the answers elsewhere)?

    - Which file system (ext2/3/4 or something else)? (Consider SSD life.) Can it be changed after installation?
    - Should I partition the disk (as we do with a traditional HDD)? For now, no plan of dual booting. Only Ubuntu will live on the scarce space of the 80GB SSD.
    - I have 2GB RAM; should I still allocate swap space? (If I don't allocate swap space, can I still hibernate the machine?) Will swap space impact SSD life? Should I consider adding 1GB of RAM to avoid swap space?

    Linux experience - absolute novice
    Intended usage - heavy browsing, programming, regular video/music and some other non-CPU/RAM-intensive programs; will back up big files to an external hard drive
    Laptop config - 3-year-old Vaio, Core 2 Duo, 2GB RAM

    Please pardon the repetition, and I really appreciate anyone helping me get started with Ubuntu.
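    (One concrete example of the SSD tweaks those guides discuss: an illustrative /etc/fstab entry for an ext4 root, with a placeholder UUID.)

        # noatime cuts needless metadata writes; discard enables TRIM (ext4)
        UUID=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx  /  ext4  noatime,discard,errors=remount-ro  0  1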
