Search Results

Search found 14639 results on 586 pages for 'coding environment'.


  • Dark Visual Experience in Visual Studio 2012

    - by Jalpesh P. Vadgama
    I have written a whole series related to Visual Studio 2012 features, and this post is also part of that series. You can get all my posts related to Visual Studio from the following link: Visual Studio 2012 feature series. A few days ago I was searching for something and found a great way to change the visual experience of Visual Studio 2012. I found that there are two types of themes available in Visual Studio 2012, Light and Dark, under Tools -> Options -> Environment -> General. This is one of the newest features I have found in Visual Studio 2012. Read More >>

    Read the article

  • Legal Precautions of Customizing Ubuntu LiveCD

    - by Voulnet
    Hello everyone. The organization I work at wants to create a custom Ubuntu LiveCD. The customizations are: pre-installed programs, plugins, some device drivers, and aesthetics such as icons and backgrounds, as well as changing Firefox's homepage and removing unneeded packages. Not big changes, obviously, and we wish to distribute this custom image for clients to use as a bootable CD or USB stick in order to have a quick environment where all our tools are available instantly. What are the licensing and legal consequences of this? What if some of the programs or plugins to be pre-packaged are not GPL'd? I should finally note that we are not changing any code in the kernel or any other distro component. Thank you for your time!

    Read the article

  • Space Stations as Envisioned in the 1970s

    - by Jason Fitzpatrick
    Boy, they sure were ambitious back in the 70s; while today we're happy to have a small apartment-sized environment in orbit, back then they were dreaming of entire cities in space. Courtesy of the NASA Ames Research Center archives, we're treated to artist renderings of the space colonies of the future as imagined in the 1970s. The artwork spans visions of space colonies from 10,000 to 1,000,000 citizens strong; some of them include everything from bodies of water to office buildings. Hit up the link below for more images. Space Colony Art from the 1970s [via The Daily What]

    Read the article

  • MSBuild publishing vs. Visual Studio IDE publishing

    - by reggie
    I am currently working with MSBuild to publish my WinForms application based on the environment selected (Dev or Prod). I am using the MSBuild Community Tasks and referencing this article to achieve this purpose. I have a few theoretical doubts about publishing applications:
    1) Is there any difference between publishing through the Visual Studio IDE and through MSBuild?
    2) What do most developers prefer to use, and why?
    3) What are the advantages of using MSBuild to publish an application compared to publishing through the Visual Studio IDE?
    4) Which is faster?
    I am using a .NET 3.5 WinForms application developed in C#, and my question pertains to ClickOnce Windows applications only. Please help me clear up these doubts.
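    For what it's worth while weighing these options: the IDE's Publish command and a command-line build both drive the same ClickOnce publish target, so a scripted publish is typically a one-liner along these lines (a sketch; the project file name and version values are illustrative):

        msbuild MyWinFormsApp.csproj /target:publish /p:Configuration=Release /p:ApplicationVersion=1.0.0.7

    Because the same targets run either way, raw speed should be comparable; MSBuild's practical advantage is repeatability, since the Dev and Prod property sets live in the project file or a script rather than in wizard clicks, which is also what makes it the natural fit for build servers.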

    Read the article

  • Alcatel-Lucent Boosts Broadband Over Copper To 300Mbps

    - by Ratman21
    alphadogg at Slashdot writes "Alcatel-Lucent has come up with a way to move data at 300Mbps over copper lines. So far the results have only been reproduced in a lab environment - real products and services won't be available for at least a year. From the article: 'Researchers at the company's Bell Labs demonstrated the 300Mbps technology over a distance of 400 meters using VDSL2 (Very high bitrate Digital Subscriber Line), according to Stefaan Vanhastel, director of product marketing at Alcatel-Lucent Wireline Networks. The test showed that it can also do 100Mbps over a distance of 1,000 meters, he said. Currently, copper is the most common broadband medium. About 65 percent of subscribers have a broadband connection that's based on DSL, compared to 20 percent for cable and 12 percent for fiber, according to market research company Point Topic. Today, the average advertised DSL speeds for residential users vary between 9.2 Mbps and 1.9 Mbps in various parts of the world, Point Topic said.'" Discuss this story at: http://tech.slashdot.org/comments.pl?sid=10/04/21/239243

    Read the article

  • Monitoring Events in your BPEL Runtime - RSS Feeds?

    - by Ramkumar Menon
    @10g - It had been a while since I'd tried something different, so here's what I did this week! Whenever our developers deployed processes to the BPEL runtime, or a process got turned off due to connectivity issues, or someone retired a process, I needed to know. So here's what I did.

    Step 1: Downloaded the Quartz libraries and went through the documentation to understand what it takes to schedule a recurring job.

    Step 2: Cranked out two components using Oracle JDeveloper [within a new Web Project]:

    a) A simple Java class named FeedUpdater that extends org.quartz.Job. All this class does is connect to your BPEL runtime [via opmn:ormi] and fetch all events that occurred in the last "n" minutes. Events? If that doesn't ring a bell - they're right there on the BPEL Console. If you click on "Administration > Process Log" - what you see are events. The API to retrieve the events is:

        // Get the Locator reference for the domain you are interested in.
        Locator l = ....
        // Predicate to retrieve events for the last "n" minutes.
        WhereCondition wc = new WhereCondition(...)
        // Get all those events you needed.
        BPELProcessEvent[] events = l.listProcessEvents(wc);

    After you get all these events, write them out into an RSS feed XML structure and stream it into a file that resides either in your Apache htdocs, or wherever it can be accessed via HTTP. You can read all about RSS 2.0 here. At a high level, here is what it looks like:

        <?xml version="1.0" encoding="UTF-8"?>
        <rss version="2.0">
          <channel>
            <title>Live Updates from the Development Environment</title>
            <link>http://soadev.myserver.com/feeds/</link>
            <description>Live Updates from the Development Environment</description>
            <lastBuildDate>Fri, 19 Nov 2010 01:03:00 PST</lastBuildDate>
            <language>en-us</language>
            <ttl>1</ttl>
            <item>
              <guid>1290213724692</guid>
              <title>Process compiled</title>
              <link>http://soadev.myserver.com/BPELConsole/mdm_product/administration.jsp?mode=processLog&amp;processName=&amp;dn=all&amp;eventType=all&amp;eventDate=600&amp;Filter=++Filter++</link>
              <pubDate>Fri Nov 19 00:00:37 PST 2010</pubDate>
              <description>SendPurchaseOrderRequestService: 3.0 Time : Fri Nov 19 00:00:37 PST 2010</description>
            </item>
            ...
          </channel>
        </rss>

    For writing out XML content, read through the Oracle XML Parser APIs [search around for oracle.xml.parser.v2].

    b) Now that my "Job" was done, my job was half done. Next, I wrote up a simple scheduler servlet that schedules the above "Job" class to be executed every "n" seconds. It is very straightforward. Here is the primary section of the code:

        try {
            Scheduler sched = StdSchedulerFactory.getDefaultScheduler();
            // Get n and make a trigger that executes every "n" seconds.
            Trigger trigger = TriggerUtils.makeSecondlyTrigger(n);
            trigger.setName("feedTrigger" + System.currentTimeMillis());
            trigger.setGroup("feedGroup");
            JobDetail job = new JobDetail("SOA_Feed" + System.currentTimeMillis(), "feedGroup", FeedUpdater.class);
            sched.scheduleJob(job, trigger);
        } catch (Exception ex) {
            ex.printStackTrace();
            throw new ServletException(ex.getMessage());
        }

    Look up the Quartz API and documentation - it will make this look much simpler.

    Now that both components were ready, I packaged the application into a war file and deployed it onto my application server. When the servlet initialized, the "n"-second schedule was set. From then on, the servlet kept populating the RSS feed file. I just ensured that my "Job" code keeps only the 30 latest events, so that the feed file stays small and under control [a few KBs]. Next I opened the feed XML in my browser - it requested a subscription - and there I was, watching new deployments and life cycle events pop up on my browser toolbar every 5 (actually n) minutes! Well, you could do it in a browser/reader of your choice - or perhaps read them like you read an email in Thunderbird!

    Read the article

  • How to set up a DSL dialer for Ubuntu 12.04 LTS

    - by Mohammad Yaseen
    I have just installed Ubuntu 12.04 LTS and I'm unable to get my DSL dialer working properly. To set this up in Windows 7 I had to do the following:
    Control Panel --- Network and Internet
    Network and Sharing Center --- Set up a new network or connection
    Connect to the Internet --- Broadband (PPPoE)
    Enter username and password, click 'Connect', and done.
    I am doing the following steps in Ubuntu with no luck:
    Click on the 'two arrows' icon (I don't know what it's called) in the upper right corner.
    Configure VPN --- DSL tab --- Add
    Then I entered the username, password, MAC address and cloned MAC address (copied from Auto Ethernet). Save.
    The same setup used to work in Ubuntu 10.10, but it is not working here. Now whenever I click on 'DSL Connection 1' to connect the dialer, 'Auto Ethernet' gets disconnected and I end up with no Internet connection. I am new to Ubuntu, please suggest some easy steps. I have installed Ubuntu alongside Windows, and the dialer works fine in the Windows environment; I am writing this from Windows.

    Read the article

  • Always use dtexec.exe to test performance of your dataflows. No exceptions.

    - by jamiet
    Earlier this evening I posted a blog post entitled Investigation: Can different combinations of components effect Dataflow performance? where I compared the performance of three different dataflows all working toward the same overall goal. I wanted to make one last point related to the results, but I thought it warranted a blog post all of its own. Here is a screenshot of one of the dataflows that I was testing: Pretty complicated, I'm sure you'll agree. Now, when I executed this dataflow in the test it executed in ~19 seconds; however, in that case I was executing it using the command-line tool dtexec. I also tried executing it inside the BIDS development environment, and in that case it took much longer - 139 seconds. That's more than seven times as long. The point I want to make is very simple. If you are testing your dataflows for performance, please use dtexec. Nothing else will suffice. @Jamiet
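    For anyone wanting to reproduce this kind of comparison, a package run under dtexec looks something like the following (the path is illustrative); dtexec prints the elapsed time when the run finishes:

        dtexec /FILE "C:\packages\MyDataflow.dtsx"

    The gap is commonly attributed to the debugging and progress-visualization overhead that BIDS attaches to an executing package, overhead that a plain dtexec run does not carry.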

    Read the article

  • ASP.NET hosting: better, faster, cheaper

    After seven years with webhost4life, it was time to move on, especially because of all the troubles with webhost4life due to their internal migration to a new hosting environment (the company has been bought out). I've just moved all my websites elsewhere. I'm now using Arvixe and OrcsWeb. I use OrcsWeb for metaSapiens.com. OrcsWeb kindly offers me free ASP.NET hosting because I'm a Microsoft MVP. I'd like to publicly thank OrcsWeb for this, and I invite you to have a look at what they have to offer.

    Read the article

  • Impact of Truncate or Drop Table When Flashback Database is Enabled

    - by alejandro.vargas
    Recently I was working on a VLDB, implementing a disaster recovery environment configured with Data Guard physical standby and fast-start failover. One of the questions that came up was about the overhead of truncating and dropping tables. There are daily jobs on the database that truncate extremely large partitions, and as note 565535.1 explains, we knew there was an overhead for these operations. But the information in the note was not clear enough, so with the additional information I got from senior Oracle colleagues I compiled the document "Impact of Truncate or Drop Table When Flashback Database is Enabled", which further explains the case.

    Read the article


  • SSH forwarding error

    - by Ahsan
    I have some issues regarding SSH that I am unable to solve. I have completed bootstrap and the node status is "1 node allocated to maas". Now when I do juju status, it says "invalid SSH key, hostname cannot be found". I then went to the /etc/hosts file and changed "127.0.0.1 localhost" to "127.0.0.1 Node1". Now it gives me an SSH forwarding error: @@@@@@@@@@@@@@@@@@@@@@@@@@ I have also run the node after bootstrap and it shows an SSH key, but I didn't add any SSH key in my MAAS dashboard. Secondly, I want to ask how I can allocate more nodes to root. Do I have to rewrite the maas-oauth portion of the environment config with another API key? Kindly reply ASAP.

    Read the article

  • Enabling SSL Requests on Jdev's Integrated Weblogic

    - by Christian David Straub
    Often you will want to enable SSL access for such things as secure login or secure signup. By default, the integrated WLS that ships with JDev does not listen for SSL requests. However, this is easily fixed. Just navigate to http://127.0.0.1:7101/console. This will deploy the console app where you can configure WLS. By default the login credentials are: username: weblogic, password: weblogic1. Then go to Environment -> Servers -> DefaultServer. Check the "SSL Listen Port Enabled" box and your server will now listen for SSL requests (just make sure to use the listen port that is specified). For added security, you can always verify while processing a request that it is going through an SSL connection by first checking HttpServletRequest.isSecure().

    Read the article

  • Why is Eclipse hanging in debug mode?

    - by Pratik
    We are developing our web application using the Java GWT framework, with Eclipse Indigo as the development IDE. We are facing problems while debugging the Java GWT application in Eclipse: most of the time, Eclipse hangs while debugging. We tried increasing the memory buffer size in Eclipse, but no luck. We have tried running Eclipse in various environments such as Windows, Fedora 16, and CentOS, but somehow are not getting positive results. Can anyone help me decide which OS and which Eclipse version we should use to resolve the hanging issue? Thanks in advance. Pratik

    Read the article

  • Best practices for logging user actions in production

    - by anthonypliu
    I was planning on logging a lot of different things in my production environment, such as when a user: logs in, logs off, changes profile, edits account settings, changes password, etc. Is this a good practice in a production environment? Also, what is a good way to log all this? I am currently using the following code block for logging:

        public void LogMessageToFile(string msg)
        {
            System.IO.StreamWriter sw = System.IO.File.AppendText(
                GetTempPath() + @"MyLogFile.txt");
            try
            {
                string logLine = System.String.Format(
                    "{0:G}: {1}.", System.DateTime.Now, msg);
                sw.WriteLine(logLine);
            }
            finally
            {
                sw.Close();
            }
        }

    Will this be OK for production? My application is very new, so I'm not expecting millions of users right away or anything. I'm looking for the best practices for keeping track of actions on a website, or whether it's even best practice to do so.
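    A minimal hardening of the code above, as a starting point rather than a full answer: wrap the writer in a using block and serialize access, since two simultaneous requests can otherwise collide on the file (the directory parameter stands in for the GetTempPath() helper in the question):

        using System;
        using System.IO;

        public class UserActionLogger
        {
            // One lock per log file: concurrent requests otherwise race on
            // the file and can interleave or drop lines.
            private static readonly object LogLock = new object();

            private readonly string _logPath;

            public UserActionLogger(string directory)
            {
                _logPath = Path.Combine(directory, "MyLogFile.txt");
            }

            public void LogMessageToFile(string msg)
            {
                string logLine = string.Format("{0:G}: {1}.", DateTime.Now, msg);

                lock (LogLock)
                {
                    // using ensures the writer is flushed and closed even if
                    // WriteLine throws, replacing the manual try/finally.
                    using (StreamWriter sw = File.AppendText(_logPath))
                    {
                        sw.WriteLine(logLine);
                    }
                }
            }
        }

    That said, the more usual production practice is a logging library such as log4net or NLog, which handles buffering, rolling files, and concurrency for you; logging the user actions listed above is itself a sound and common auditing practice.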

    Read the article

  • Oracle ADF Mobile - Develop iOS and Android Mobile Applications with Oracle ADF

    - by Shay Shmeltzer
    We are very happy to announce the release of Oracle ADF Mobile. The new Oracle ADF Mobile enables developers to build applications that run on iOS and Android devices. Several unique aspects of the Oracle ADF Mobile solution:
    - Develop once, run on many - the same code base is used for both iOS and Android applications
    - Uses Java - no need to learn device-specific languages
    - Leverages ADF - the same concepts you are familiar with (component-based UI construction, taskflows, data controls)
    - Leverages JDeveloper - the same development environment you know, with the same declarative and visual style
    - Creates native-looking applications - HTML5-based UI components (that you can also skin)
    - Uses device services - leverage the camera, SMS, location, contacts, etc. without learning device-specific APIs
    - Creates hybrid applications - run on the device, able to consume remote data and UI if needed
    Here is the 3-minute introduction. Oracle ADF Mobile is available as an extension to Oracle JDeveloper 11.1.2.3 - use Help -> Check for Updates to install it. Then head over to the Oracle ADF Mobile page for all the resources you need. If you are an Oracle ADF developer, it's time to update your resume - you are now a mobile device developer too :-)

    Read the article

  • Premature-Optimization and Performance Anxiety

    - by James Michael Hare
    While writing my post analyzing the new .NET 4 ConcurrentDictionary class (here), I fell into one of the classic blunders that I myself always love to warn about. After analyzing the differences in time between a Dictionary with locking and the new ConcurrentDictionary class, I noted that the ConcurrentDictionary was faster with read-heavy multi-threaded operations. Then, I made the classic blunder of thinking that because the original Dictionary with locking was faster for those write-heavy uses, it was the best choice for those types of tasks. In short, I fell into the premature-optimization anti-pattern. Basically, the premature-optimization anti-pattern is when a developer is coding very early for a perceived (whether rightly or wrongly) performance gain and sacrificing good design and maintainability in the process. At best, the performance gains are usually negligible, and at worst they can either negatively impact performance or degrade maintainability so much that time to market suffers or the code becomes very fragile due to the complexity. Keep in mind the distinction above. I'm not talking about valid performance decisions. There are decisions one should make when designing and writing an application that are valid performance decisions. Examples of this are knowing the best data structures for a given situation (Dictionary versus List, for example) and choosing efficient algorithms (linear search vs. binary search). But these in my mind are macro optimizations. The error is not in deciding to use a better data structure or algorithm; the anti-pattern, as stated above, is when you attempt to over-optimize early on in such a way that it sacrifices maintainability. In my case, I was actually considering trading the safety and maintainability gains of the ConcurrentDictionary (no locking required) for a slight performance gain by using the Dictionary with locking. This would have been a mistake, as I would be trading maintainability (ConcurrentDictionary requires no locking, which helps readability) and safety (ConcurrentDictionary is safe for iteration even while being modified, and you don't risk the developer locking incorrectly) -- and I fell for it even when I knew to watch out for it. I think in my case, and it may be true for others as well, a large part of it was due to the time I was trained as a developer. I began college in the 90s, when C and C++ were king and hardware speed and memory were still relatively precious commodities not to be squandered. In those days, using a long instead of a short could waste precious resources, and as such, we were taught to try to minimize space and favor performance. This is why in many cases such early code-bases were very hard to maintain. I don't know how many times I heard back then to avoid too many function calls because of the overhead -- and in fact just last year I heard a new hire in the company where I work declare that she didn't want to refactor a long method because of function call overhead. Now back then, that may have been a valid concern, but with today's modern hardware, even if you're calling a trivial method in an extremely tight loop (which chances are the JIT compiler would optimize anyway), the results of removing method calls to speed up performance are negligible for the great majority of applications. Now, obviously, there are those who code applications where speed is absolutely king (for example, drivers, computer games, operating systems), where such sacrifices may be made.
    But I would strongly advise against such optimization because of its cost. Many folks who are performing an optimization think it's always a win-win: they're simply adding speed to the application, so what could possibly be wrong with that? What they don't realize is the cost of their choice. For every piece of straightforward code that you obfuscate with performance enhancements, you risk introducing bugs and adding to the long-term technical debt of the application. It will become so fragile over time that maintenance becomes a nightmare. I've seen such applications in places I have worked. There are times I've seen applications where the designer was so obsessed with performance that they even designed their own memory management system for their application to try to squeeze out every ounce of performance. Unfortunately, the application's stability often suffers as a result, and it is very difficult for anyone other than the original designer to maintain. I've even seen this recently, where I heard a C++ developer bemoaning that in VS2010 the iterators are about twice as slow as they used to be because Microsoft added range checking (probably as part of the 0x standard implementation). To me this was almost a joke. Twice as slow sounds bad, but it's almost never as bad as you think -- especially if you're gaining safety. The only time twice is really that much slower is when once was too slow to begin with. Think about it. 2 minutes is slow as a response time because 1 minute is slow. But if an iterator takes 1 microsecond to move one position and a new, safer iterator takes 2 microseconds, this is trivial! The only way you'd ever really notice this would be in iterating a collection just for the sake of iterating (i.e. no other operations). To my mind, the added safety makes the extra time worth it. Always favor safety and maintainability when you can. I know it can be a hard habit to break, especially if you started your career early or in a language such as C where developers are very performance-conscious. But in reality, these types of micro-optimizations only end up hurting you in the long run. Remember the two laws of optimization. I'm not sure where I first heard these, but they are so true: For beginners: Do not optimize. For experts: Do not optimize yet. This is so true. If you're a beginner, resist the urge to optimize at all costs. And if you are an expert, delay that decision. As long as you have chosen the right data structures and algorithms for your task, your performance will probably be more than sufficient. Chances are it will be network, database, or disk hits that will be your slow-down, not your code. As they say, 98% of your code's bottleneck is in 2% of your code, so premature optimization may add maintenance and safety debt that won't have any measurable impact. Instead, code for maintainability and safety, and then, and only then, when you find a true bottleneck, go back and optimize further.
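    To make the trade-off concrete, here is a minimal sketch of the two shapes discussed above (HitCounter and its counter logic are illustrative, not code from the original analysis). With the hand-rolled lock, correctness depends on every current and future call site remembering to take the lock - exactly the maintainability cost being traded for a small write-speed gain:

        using System.Collections.Generic;
        using System.Collections.Concurrent;

        class HitCounter
        {
            // Hand-rolled locking: every reader and writer must remember
            // the lock, and iteration while modifying is unsafe without it.
            private readonly object _sync = new object();
            private readonly Dictionary<string, int> _counts =
                new Dictionary<string, int>();

            public void IncrementLocked(string key)
            {
                lock (_sync)
                {
                    int current;
                    _counts.TryGetValue(key, out current);
                    _counts[key] = current + 1;
                }
            }

            // ConcurrentDictionary: the synchronization is the library's
            // problem, so the call site stays readable and is safe to
            // iterate even while being modified.
            private readonly ConcurrentDictionary<string, int> _concurrent =
                new ConcurrentDictionary<string, int>();

            public void IncrementConcurrent(string key)
            {
                _concurrent.AddOrUpdate(key, 1, (k, v) => v + 1);
            }
        }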

    Read the article

  • Experiencing the New Social Enterprise

    - by kellsey.ruppel(at)oracle.com
    Social media and networking tools, popularly known as Web 2.0 technologies, are rapidly transforming user expectations of enterprise systems. Many organizations are investing in these new tools to cultivate a modern user experience in an "Enterprise 2.0" environment that unlocks the full potential of traditional IT systems and fosters collaboration in key business processes. Is your organization a social enterprise? How are you using Web 2.0 and Enterprise 2.0 technologies? Read this white paper to learn how Oracle WebCenter Suite enables organizations to become social enterprises and is the modern user experience platform for the enterprise and the Web.

    Read the article

  • gnome-shell fails to launch when a dual monitor is detected (nvidia card)

    - by Terry Hu
    I installed a clean Precise. I like GNOME Shell, so I installed it. It worked well with a single monitor, but it does not work well when I go to the office and plug the external monitor into my laptop. I am using the nvidia driver. A colleague is using the same laptop and works fine with dual monitors; his laptop has an ATI graphics card. When I plug in the external monitor cable and choose GNOME, it actually goes to the GNOME 2 classic desktop environment, not GNOME 3. When I disconnect the external monitor and restart X, I can log in with GNOME 3. Do I have to choose between GNOME 3 and the external monitor? It's a hard choice.

    Read the article

  • How should I replan A*?

    - by Gregory Weir
    I've got a pathfinding boss enemy that seeks the player using the A* algorithm. It's a pretty complex environment, and I'm doing it in Flash, so the search can get a bit slow when it's searching over long distances. If the player were stationary, I could just search once, but at the moment I'm searching every frame. This takes long enough that my framerate is suffering. What's the usual solution to this? Is there a way to "replan" A* without redoing the entire search? Should I just search a little less often (every half-second or second) and accept that there will be a little inaccuracy in the path?
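    Two standard answers: incremental planners such as D* Lite repair the previous search result instead of starting over, but the cheaper fix is usually to throttle and cache - replan only when enough time has passed and the player has actually moved away from the goal of the cached path. A sketch of that shape (in C# rather than ActionScript; AStarSearch, FollowPath, and the tuning constants are illustrative stand-ins for the game's own code):

        using System;
        using System.Collections.Generic;

        struct Point
        {
            public double X, Y;
            public Point(double x, double y) { X = x; Y = y; }

            public static double Distance(Point a, Point b)
            {
                double dx = a.X - b.X, dy = a.Y - b.Y;
                return Math.Sqrt(dx * dx + dy * dy);
            }
        }

        class BossPathfinder
        {
            private List<Point> _path = new List<Point>(); // last A* result, reused across frames
            private Point _lastGoal;                       // player position at the last search
            private double _sinceReplan;                   // seconds since the last full search

            private const double ReplanInterval = 0.5;     // minimum time between searches
            private const double GoalTolerance = 2.0;      // how far the player may drift first

            // Called once per frame; runs the expensive search only when the
            // cached path is stale, so most frames just follow the old path.
            public void Update(Point bossPos, Point playerPos, double dt)
            {
                _sinceReplan += dt;

                bool intervalElapsed = _sinceReplan >= ReplanInterval;
                bool goalMoved = Point.Distance(playerPos, _lastGoal) > GoalTolerance;

                if (_path.Count == 0 || (intervalElapsed && goalMoved))
                {
                    _path = AStarSearch(bossPos, playerPos); // the existing search
                    _lastGoal = playerPos;
                    _sinceReplan = 0;
                }

                FollowPath(_path, bossPos);
            }

            // Stand-ins for the game's existing A* and steering code.
            private List<Point> AStarSearch(Point from, Point to) { return new List<Point>(); }
            private void FollowPath(List<Point> path, Point pos) { }
        }

    Tuning ReplanInterval and GoalTolerance trades path freshness against search cost; in the common case the player hasn't moved far, so most frames skip the search entirely.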

    Read the article

  • Level Design vs. Modeler

    - by Ecurbed
    From what I understand, being a level designer and being a character/environment/object/etc. modeler are two different jobs, yet sometimes it feels like a modeler can also do the job of a level designer. I know this also depends on the scale of the game: for small games maybe they are one and the same, but for bigger games they become two different jobs. I understand that a background in modeling can't hurt when it comes to level design, but the question I have is: do employers prefer level designers who can also model? That way they can kill two birds with one stone and have someone to both create the assets and design the level. What is your opinion of the training? Does level design involve skill sets that make it completely different from what a modeler does, or is it an easy transition for a modeler to become a level designer? Can you be a bad level designer but a good modeler, and vice versa?

    Read the article

  • Studies of Pair Programming on Translation Projects

    - by gmletzkojr
    I am looking for information (i.e., studies, metrics, etc.) on pair programming when translating a project from an "older" language to a "newer" language. In this particular case, translating means line-for-line translation wherever possible, modifying the design only when absolutely necessary, not when the modification would merely improve performance. I have done pair programming in new development, and I am well aware of the pros and cons of pairing in that environment. However, I haven't been able to find any information on this particular case. Any help is appreciated.

    Read the article

  • Compatibility of enums vs. string constants

    - by Yosi
    I was recently told that using an enum:

        public enum TaskEndState { Error, Completed, Running }

    may have compatibility/serialization issues, and thus it's sometimes better to use string constants:

        public const string TASK_END_STATE = "END_STATE";
        public const string TASK_END_STATE_ERROR = "TASK_END_STATE_ERROR";
        public const string TASK_END_STATE_COMPLETE = "TASK_END_STATE_COMPLETE";
        public const string TASK_END_STATE_RUNNING = "TASK_END_STATE_RUNNING";

    Can you give a practical use case where this may happen? Are there any guidelines for when enums should be avoided? Edit: My production environment has multiple WCF services (different versions of the same product). A later version may or may not include some new properties, such as a task end state (this is just an example). If we try to deserialize a new enum value in an older version of a specific service, it may not work.
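    One concrete version of the compatibility problem: with WCF data contract serialization, an enum value sent by a newer service version that the older version's enum does not declare will fail to deserialize, whereas a string always arrives intact and can be handled in code. A minimal defensive sketch, assuming the value travels as a string and an Unknown fallback member is acceptable (the names are illustrative, and Enum.TryParse requires .NET 4):

        using System;

        public enum TaskEndState
        {
            Unknown = 0,  // fallback for members this version does not know
            Error,
            Completed,
            Running
        }

        public static class TaskEndStates
        {
            // A newer peer may send e.g. "Queued"; TryParse degrades to
            // Unknown instead of throwing the way a strict deserializer would.
            public static TaskEndState Parse(string wireValue)
            {
                TaskEndState state;
                if (Enum.TryParse<TaskEndState>(wireValue, true, out state))
                    return state;
                return TaskEndState.Unknown;
            }
        }

    The string-constant approach in the question achieves the same tolerance implicitly; the cost is losing compile-time checking everywhere else the value is used.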

    Read the article

  • How to set different wallpapers in Ubuntu workspaces

    - by Steve
    I'm having an issue trying to customize Ubuntu workspaces in the GNOME environment. Assuming the default four workspaces (aka desktops), how can one have a different wallpaper for each one? When I go to an individual workspace to set its wallpaper, all of the workspaces use it. So if I set: wallpaper B on workspace 2, wallpaper C on workspace 3, what happens is that all the workspaces default to the last wallpaper set, no matter which workspace it was set in. What's even weirder is that the very first wallpaper ever set is what shows up when I call up the Workspaces tool, even though once I settle on a workspace, no matter which one, the original wallpaper disappears and the last wallpaper set is the one that always shows up.

    Read the article

  • Oracle Announces the Winners of the 2014 Oracle Sustainability Innovation Award

    - by Evelyn Neumayr
    Oracle will be honoring the winners of the 2014 Sustainability Innovation Award, one of the Oracle Excellence Awards, at the Oracle OpenWorld conference in San Francisco. This award recognizes the innovative use of Oracle technology to address global sustainability business challenges. The winning customers reduced their environmental footprint while also reducing costs using green business practices and Oracle technology. For these customers, environmental sustainability has become an essential ingredient to doing business responsibly and successfully. Oracle will also be awarding Lacey Lewis, Senior Vice President – Finance at Cox Enterprises, with Oracle's 2014 Chief Sustainability Officer of the Year award. Lacey is being honored for the comprehensive, deep-rooted environmental sustainability program at Cox Enterprises. With a focus on conserving and protecting the environment, Cox Enterprises uses Oracle Applications and technology to drive efficiency and green business processes throughout its organization. These awards will be presented by Jeff Henley, Oracle Chairman of the Board, in Oracle's seventh annual sustainability awards session. Please join us at this awards session on Wednesday October 1 in Moscone West Room 3002 if you will be attending Oracle OpenWorld.

    Read the article
