Search Results

Search found 13488 results on 540 pages for 'calculator date calculation'.

  • Compute directional light frustum from view frustum points and light direction

    - by Fabian
    I'm working on a friend's engine project, and my task is to construct a new frustum from the light direction that overlaps the view frustum and possible shadow casters. The project already has a function that creates a frustum for this, but it's way too big and includes far too many casters whose shadows can't be seen in the view frustum. The only parameters of this function are the normalized light direction vector and a view class that lets me extract the 8 view frustum points in world space; I don't have any additional information about the scene. I have read some of the related questions here, but none seem to fit my problem very well, as they often just point to cascaded shadow maps. Sadly I can't use DX or OpenGL functions directly, because this engine has a dedicated math library. From what I've read so far, the steps are: transform the view frustum points into light space, find the min/max x and y values (or sometimes the minima and maxima of all three axes), and create an AABB from the min/max vectors. But what comes after this step? How do I transform this new AABB back to world space? What I've done so far:

      CVector3 Points[8]; // filled with the 8 view frustum corners in world space
      CVector3 MinLight = CVector3(FLT_MAX), MaxLight = CVector3(-FLT_MAX);
      for (int i = 0; i < 8; ++i) {
          Points[i] = Points[i] * WorldToShadowMapMatrix;
          MinLight = Math::Min(Points[i], MinLight);
          MaxLight = Math::Max(Points[i], MaxLight);
      }
      AABox box(MinLight, MaxLight);

    I don't think this is the right way to do it. The near plane probably has to extend toward the light source to include potential shadow casters. I've read the Microsoft article about cascaded shadow maps, http://msdn.microsoft.com/en-us/library/windows/desktop/ee416307%28v=vs.85%29.aspx, which also includes some sample code, but it seems to use the scene's AABB to determine the near and far planes, which I can't do, since I can't access that information from the function I'm working in. Could you please link some example code that shows the calculation of such a frustum? Thanks in advance! Additional question: is there a way to construct a WorldToFrustum matrix that represents the above transformation?
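
    For what it's worth, a minimal sketch of the light-space AABB construction described above, including the trip back to world space (Python for readability; the basis construction, the up-vector choice, and the fixed extension amount are illustrative assumptions, not the engine's API):

      import math

      def normalize(v):
          n = math.sqrt(sum(c * c for c in v))
          return [c / n for c in v]

      def cross(a, b):
          return [a[1] * b[2] - a[2] * b[1],
                  a[2] * b[0] - a[0] * b[2],
                  a[0] * b[1] - a[1] * b[0]]

      def dot(a, b):
          return sum(x * y for x, y in zip(a, b))

      def light_space_box(frustum_points_ws, light_dir):
          # Build an orthonormal light-space basis; the up vector is an
          # arbitrary choice that must not be parallel to light_dir.
          z = normalize(light_dir)
          up = [0.0, 1.0, 0.0] if abs(z[1]) < 0.99 else [1.0, 0.0, 0.0]
          x = normalize(cross(up, z))
          y = cross(z, x)
          # Rotate each world-space corner into light space (rotation only).
          pts = [[dot(p, x), dot(p, y), dot(p, z)] for p in frustum_points_ws]
          mins = [min(p[i] for p in pts) for i in range(3)]
          maxs = [max(p[i] for p in pts) for i in range(3)]
          # Extend the box toward the light so casters outside the view
          # frustum still get included; the amount is scene-dependent.
          mins[2] -= 100.0
          # Back to world space: for a pure rotation the inverse is the
          # transpose, so each light-space corner maps to cx*x + cy*y + cz*z.
          corners_ws = [[cx * x[k] + cy * y[k] + cz * z[k] for k in range(3)]
                        for cx in (mins[0], maxs[0])
                        for cy in (mins[1], maxs[1])
                        for cz in (mins[2], maxs[2])]
          return mins, maxs, corners_ws

    The rotation used here, combined with a translation to the box centre, is essentially the WorldToFrustum matrix asked about at the end.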

  • Generating an EJB SDO Service Interface for Oracle SOA Suite by Edwin Biemond

    - by JuergenKress
    In Oracle SOA Suite you can use the EJB adapter as a reference or service in your composite applications. The EJB adapter has a flexible binding integration; there are 3 ways of integrating the remote interface with your composite. First, there is the Java interface way, which I described here; this follows the JAX-WS approach. It means you need to use Calendar for your Java date types, and it leads to one big WSDL when you add a wire to a service component. Read the full article here. For regular information on Oracle SOA Suite, become a member of the SOA & BPM Partner Community. For registration, please visit www.oracle.com/goto/emea/soa (OPN account required). If you need support with your account, please contact the Oracle Partner Business Center.

  • SharePoint 2010 Hosting :: HTTP Error 503. The Service is Unavailable.

    - by mbridge
    Error message: HTTP Error 503. The service is unavailable. The application pool is shut down, and the Event Viewer shows:

      Log Name:      Application
      Source:        Microsoft-Windows-User Profiles Service
      Date:          4/24/2010 4:58:28 PM
      Event ID:      1500
      Task Category: None
      Level:         Error
      Keywords:
      User:          GUERILLA\spapppool
      Computer:      SPS2010RTM.guerilla.local
      Description:   Windows cannot log you on because your profile cannot be loaded. Check that you are connected to the network, and that your network is functioning correctly. DETAIL – Unspecified error

    Solution:
    Option 1 – Log on locally with the service account once, to create a local profile for it.
    Option 2 – Modify the application pool by going into IIS > Application Pools > right-click the offending app pool > Advanced Settings > set "Load User Profile" to False.
    Give it a try! Good luck!

  • Innovative Applications with WebSphere Server Feature Packs and Rational Tooling

    This webcast will cover how the new open standards and programming models delivered through WebSphere Application Server Feature Packs can be used to create innovative web applications that help you stay ahead of your competition. The Feature Packs covered will include Web 2.0, Service Component Architecture, Communication Enabled Applications, and OSGi. It will also cover the IBM Rational tooling that can help you quickly leverage the new capabilities delivered in these Feature Packs and accelerate the delivery of new applications.

    Date / Time: Wednesday, December 15, 2010, 11:00 AM PT / 2:00 PM ET

  • Webinar: Meeting Customer Expectations in the New Age of Retail

    - by Sanjeev Sharma
    Webcast date: Thursday, November 8, 2012. Time: 10am PT / 1pm ET. The retail market has expanded into the online, mobile, and social worlds, but the key to success hasn't changed since the days of traditional, brick-and-mortar business: it's still about service. A successful retailer in today's world of omni-channel customer engagement must be able to deliver quality service that meets customer expectations. For many retailers, Oracle Web commerce applications help them achieve that success, allowing them to market, interact, and transact across multiple channels in a predictable, consistent, and personalized manner. Join us for this Webcast and learn what Oracle applications can do for your business. In this session, we will discuss:
    - The significance and dimensions of modern omni-channel customer experience
    - The Oracle Commerce platform
    - Real-world examples of business value derived by running customer-facing applications on Oracle Engineered Systems
    Register today. Speakers:
    - Sanjeev Sharma, Principal Product Director, Oracle Exalogic, Oracle
    - Kelly Goetsch, Senior Principal Product Manager, Oracle Commerce, Oracle
    - Dan Conway, Senior Product Manager, Oracle Retail, Oracle

  • Ways of Gathering Event Information From the Internet

    - by Ciwan
    What are the best ways of gathering information on events (of any type) from the internet, keeping in mind that different websites will present information in different ways? I was thinking of 'smart' web crawlers, but that can turn out to be extremely challenging, simply because of the hugely varied ways that different sites present their information. Then I thought of sifting through the official Twitter feeds of organisations, people with knowledge of events, etc., looking for the event hashtag, grabbing the tweet, and dissecting it to extract the relevant information about the event. The information I am interested in gathering is: the date and time of the event, the address where the event is being held, and any celebrities (or other famous people) attending it (if any). The reason for asking here is my hope that experienced folk will open my eyes to things I've missed, and I'm sure there are some.
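
    As a rough illustration of the tweet-dissecting idea, a sketch in Python (the patterns and the sample tweet are invented; real tweets are far messier, and addresses in particular usually need named-entity recognition rather than regexes):

      import re

      # Illustrative patterns only; they cover one common date/time style.
      DATE_RE = re.compile(r'\b\d{1,2}[/-]\d{1,2}[/-]\d{2,4}\b')
      TIME_RE = re.compile(r'\b\d{1,2}:\d{2}\s?(?:am|pm)?\b', re.IGNORECASE)
      HASHTAG_RE = re.compile(r'#\w+')
      MENTION_RE = re.compile(r'@\w+')

      def dissect_tweet(text):
          date = DATE_RE.search(text)
          time = TIME_RE.search(text)
          return {
              'date': date.group() if date else None,
              'time': time.group() if time else None,
              'hashtags': HASHTAG_RE.findall(text),
              'mentions': MENTION_RE.findall(text),  # candidate famous attendees
          }

      print(dissect_tweet("Gala opens 12/06/2013 7:30 pm at the Roundhouse "
                          "#TechGala with @someCelebrity"))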

  • What are the benefits of designing a KeyBinding relay?

    - by Adam Naylor
    The input system of Quake3 is handled using a key-binding relay, whereby each keypress is matched against a 'binding', which is then passed to the CLI along with a timestamp of when the keypress (or release) occurred. I just wanted to get an idea from developers of what they consider to be the key benefits of designing an input system around this approach. One thing I don't particularly like is the appending of the timestamp to the bound command; this seems like a bit of a hack to bend the CLI into handling the game's input. Also, I feel that detecting the keypress only to add the command to a stream of text that gets parsed later is a slightly latent way of responding to input (or is this unfounded?). The only real benefit I can see is that it allows you to bind 'complex' commands to keypresses, like 'switch weapon;+fire;' for example. Or maybe it helps for journaling purposes? Thanks for any insights!
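
    For concreteness, a toy key-binding relay in that spirit (Python; the binding names and command syntax are illustrative, not Quake3's actual commands):

      import time

      # Keys map to command strings; presses and releases are appended to a
      # command buffer that the game parses later.
      bindings = {
          'w': '+forward',
          'mouse1': '+fire',
          'q': 'switch weapon;+fire;',   # a 'complex' multi-command binding
      }
      command_buffer = []

      def on_key_event(key, pressed):
          cmd = bindings.get(key)
          if cmd is None:
              return
          if not pressed and cmd.startswith('+'):
              cmd = '-' + cmd[1:]   # '+fire' on press becomes '-fire' on release
          # The timestamp travels with the command, so movement and firing can
          # be credited to the moment of the press rather than the parse time.
          command_buffer.append((time.monotonic(), cmd))

      def pump_commands(execute):
          while command_buffer:
              stamp, cmd = command_buffer.pop(0)
              for part in cmd.split(';'):
                  if part.strip():
                      execute(part.strip(), stamp)

      on_key_event('mouse1', pressed=True)
      pump_commands(lambda cmd, t: print('%.3f: %s' % (t, cmd)))

    The carried timestamp is what keeps simulation results independent of how late the buffer is pumped, which is also what makes the journaled stream replayable.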

  • Extending Currying: Partial Functions in Javascript

    - by kerry
    Last week I posted about function currying in JavaScript. This week I am taking it a step further by adding the ability to call partial functions. Suppose we have a graphing application that will pull data via Ajax and perform some calculation to update a graph, using a method with the signature 'updateGraph(id, value)'. To do this, we have to do something like:

      for (var i = 0; i < objects.length; i++) {
        Ajax.request('/some/data', {id: objects[i].id}, function(json) {
          updateGraph(json.id, json.value);
        });
      }

    This works, but with this approach we need to return the id in the JSON response from the server, which is not that elegant and increases network traffic. Using partial function currying, we can bind the id parameter and add the second parameter later (when returning from the asynchronous call). To do this, we will need the updated curry method; I have added support for sending additional parameters at runtime for curried methods:

      Function.prototype.curry = function(scope) {
        scope = scope || window;
        var args = [];
        for (var i = 1, len = arguments.length; i < len; ++i) {
          args.push(arguments[i]);
        }
        var m = this;
        return function() {
          var all = args.slice(); // copy, so repeated calls don't accumulate arguments
          for (var i = 0, len = arguments.length; i < len; ++i) {
            all.push(arguments[i]);
          }
          return m.apply(scope, all);
        };
      };

    To partially curry this method, we call curry with the id parameter; the request will then call back on it with just the value, and any additional parameters are appended to the method call:

      for (var i = 0; i < objects.length; i++) {
        var id = objects[i].id;
        Ajax.request('/some/data', {id: id}, updateGraph.curry(null, id)); // null scope defaults to window
      }

    As you can see, partial currying is a very useful tool, and this simple method should be a part of every developer's toolbox.

  • MySQL Connector/Net 6.8.0 alpha has been released

    - by Roberto Garcia
    Dear MySQL users, MySQL Connector/Net 6.8.0, a new version of the all-managed .NET driver for MySQL, has been released. This is an alpha release for 6.8.x and it's not recommended for production environments. It is appropriate for use with MySQL Server versions 5.0-5.6. It is now available in source and binary form from http://dev.mysql.com/downloads/connector/net/#downloads and mirror sites (note that not all mirror sites may be up to date at this point; if you can't find this version on some mirror, please try again later or choose another download site). The 6.8.0 version of MySQL Connector/Net has support for Entity Framework 6.0, including:
    - Async Query and Save
    - Code-Based Configuration
    - Dependency Resolution
    - DbSet.AddRange/RemoveRange
    - Code First Mapping to Insert/Update/Delete Stored Procedures
    - Configurable Migrations History Table
    - DbContext can now be created with a DbConnection that is already opened
    - Custom Code First Conventions
    Documentation: you can view current Connector/Net documentation at http://dev.mysql.com/doc/refman/5.6/en/connector-net.html. You can find our team blog at http://blogs.oracle.com/MySQLOnWindows, and you can also post questions on our forums at http://forums.mysql.com/. Enjoy, and thanks for the support! Connector/NET Team

  • Best practices for managing deployment of code from dev to production servers?

    - by crosenblum
    I am hoping to find an easy tool or method for managing our code deployments. Here are the features I hope this solution has:
    - Either web-based or a batch file that, given a list of files, connects to our production server, backs those files up into separate folders, zips them, and puts them in a backup code folder.
    - It then records the name, date/time, and purpose of the deployment.
    - It then sends the files to their proper spot on the production server.
    I don't want too complex an interface for doing deployments, because then it might never get used. Or is what I am asking for too unrealistic? I just know that my self-discipline isn't perfect, and I'd rather have a tool I can rely on to do what needs to be done than my own memory of the exact steps I have to take every time. How do you make sure everything gets deployed correctly, and have an easy rollback in case of any mistakes?
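
    Not an endorsement of any particular tool, but a sketch of the backup-record-copy workflow described above (Python; every path and name here is hypothetical):

      import datetime
      import json
      import shutil
      import zipfile
      from pathlib import Path

      # Hypothetical locations; in practice these would point at the real
      # production share and a backup area.
      PROD_ROOT = Path('/srv/production/site')
      BACKUP_ROOT = Path('/srv/deploy-backups')

      def deploy(files, purpose, staging_root=Path('./staging')):
          """Back up current production copies, record the deployment,
          then copy the new files into place. staging_root mirrors the
          production layout."""
          stamp = datetime.datetime.now().strftime('%Y%m%d-%H%M%S')
          backup_dir = BACKUP_ROOT / stamp
          backup_dir.mkdir(parents=True)
          with zipfile.ZipFile(backup_dir / 'backup.zip', 'w') as zf:
              for rel in files:
                  live = PROD_ROOT / rel
                  if live.exists():
                      zf.write(live, rel)   # keep the old version for rollback
          manifest = {'date': stamp, 'purpose': purpose, 'files': files}
          (backup_dir / 'manifest.json').write_text(json.dumps(manifest, indent=2))
          for rel in files:
              dest = PROD_ROOT / rel
              dest.parent.mkdir(parents=True, exist_ok=True)
              shutil.copy2(staging_root / rel, dest)

      deploy(['index.cfm', 'js/app.js'], purpose='fix login redirect')

    Rollback is then just unzipping the recorded backup over the production tree, and the manifest answers the who/when/why questions.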

  • How to create a good sitemap for dynamic website

    - by Saif Bechan
    I have a website with dynamic content and different kinds of pages. Some pages rarely change, and some, like blog pages, change often. The blog pages also have links for sorting, for example sorting on date, ascending or descending. Some of the pages also have links to different tabbed content, and links that are just anchor links. Now, when I use an XML sitemap generator, all of these links are thrown into the sitemap, and I don't think they are all really relevant. The blog posts up until now are also taken into the sitemap. Is this really necessary? I think the links to the blog posts can be indexed just fine. Is the best way to make a sitemap just to manually assign the main menu links to it, or is indexing everything really recommended?
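
    For illustration, a minimal hand-curated generator along the lines of that last option, emitting only the canonical pages (Python; the URLs are placeholders):

      from xml.sax.saxutils import escape

      # Only canonical pages; sort/anchor/tab variants are deliberately left out.
      pages = [
          ('https://example.com/', 'daily', '1.0'),
          ('https://example.com/about', 'monthly', '0.5'),
          ('https://example.com/blog', 'daily', '0.8'),
          # individual blog posts could be added here, or left to normal crawling
      ]

      def build_sitemap(entries):
          lines = ['<?xml version="1.0" encoding="UTF-8"?>',
                   '<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">']
          for loc, changefreq, priority in entries:
              lines.append('  <url><loc>%s</loc><changefreq>%s</changefreq>'
                           '<priority>%s</priority></url>'
                           % (escape(loc), changefreq, priority))
          lines.append('</urlset>')
          return '\n'.join(lines)

      print(build_sitemap(pages))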

  • Why are there 2 Adobe Flash plugins in USC (Ubuntu Software Center)?

    - by LuC1F3R
    As you may know, the Adobe Flash plugin appears twice in the Ubuntu Software Center: one entry is called Adobe Flash Plugin and the other Adobe Flash Plugin 10. Which of the two should I install? Or rather, which is the recommended installation method? We can also install the Adobe Flash plugin for Firefox from the missing-plugin notification ("Install missing plugin"), or by going to the Adobe website and downloading the .deb package. So how does one properly install Flash Player on Ubuntu? (But my big question remains: why are there 2 Adobe Flash plugin entries in USC? For what? If you click "More Info", the descriptions are the same for both.)

  • Finding the lowest average Hamming distance when the order of the strings matter

    - by user1049697
    I have a sequence of binary strings that I want to match against a set of longer sequences of binary strings. A match means that the compared sequence gives the lowest average Hamming distance when all elements in the shorter sequence have been matched against a sequence in one of the longer sets. Let me try to explain with an example. I have a set of video frames that have been hashed using a perceptual hashing algorithm, so that video frames that look the same have roughly the same hash. I want to match a short video clip against a set of longer videos, to see if the clip is contained in one of them. This means that I need to find out where the sequence of hashed frames in the short video has the lowest average Hamming distance when compared with the long videos. The short video is the substrings Sub1, Sub2 and Sub3, and I want to match them against the hashes of the long videos in Src. The clue here is that the strings need to match in the specific order they are given in, e.g. Sub1 always has to match the element immediately before the one Sub2 matches, and Sub2 the element immediately before the one Sub3 matches. In this example it would map thus: Sub1-Src3, Sub2-Src4 and Sub3-Src5. So the question is this: is there an algorithm for finding the lowest average Hamming distance when the order of the elements compared matters? The naive approach of comparing the substring sequence at every source position won't cut it, of course, so I need something that preferably can match a (much) shorter substring sequence against a set of millions of elements. I have looked at MVP-trees, BK-trees and similar, but everything seems to take into account only one binary string and not a sequence of them.

      Sub1: 100111011111011101
      Sub2: 110111000010010100
      Sub3: 111111010110101101

      Src1: 001011010001010110
      Src2: 010111101000111001
      Src3: 101111001110011101
      Src4: 010111100011010101
      Src5: 001111010110111101
      Src6: 101011111111010101

    I have added a calculation of the examples below. (The Hamming distances aren't correct, but it doesn't matter.)

      Run 1: dist(Sub1, Src1) = 8,  dist(Sub2, Src2) = 10, dist(Sub3, Src3) = 12, average = 10
      Run 2: dist(Sub1, Src2) = 10, dist(Sub2, Src3) = 12, dist(Sub3, Src4) = 10, average = 11
      Run 3: dist(Sub1, Src3) = 7,  dist(Sub2, Src4) = 6,  dist(Sub3, Src5) = 10, average = 8
      Run 4: dist(Sub1, Src4) = 10, dist(Sub2, Src5) = 4,  dist(Sub3, Src6) = 2,  average = 5

    So the winner here is run 4, with an average distance of 5.
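
    To pin the problem down, a brute-force reference for the ordered matching (Python; fine as a baseline, but it is O(windows x clip length x hash length), so as noted above it won't scale to millions of sources on its own):

      def hamming(a, b):
          return sum(x != y for x, y in zip(a, b))

      def best_window(subs, srcs):
          best_offset, best_avg = None, float('inf')
          # Slide the whole sub-sequence over the source, keeping order fixed.
          for offset in range(len(srcs) - len(subs) + 1):
              avg = sum(hamming(s, srcs[offset + i])
                        for i, s in enumerate(subs)) / len(subs)
              if avg < best_avg:
                  best_offset, best_avg = offset, avg
          return best_offset, best_avg

      subs = ['100111011111011101', '110111000010010100', '111111010110101101']
      srcs = ['001011010001010110', '010111101000111001', '101111001110011101',
              '010111100011010101', '001111010110111101', '101011111111010101']
      print(best_window(subs, srcs))  # offset of the best-aligned window

    Any smarter structure (metric tree, LSH, early abandoning) would have to beat this baseline while preserving the fixed ordering constraint.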

  • NoSQL as file meta database

    - by fga
    I am trying to implement a virtual file system structure in front of an object storage (OpenStack). For availability reasons we initially chose Cassandra; however, while designing the file system data model, it turned out to look like a tree structure, similar to a relational model. Here is the dilemma: for availability and partition tolerance we need NoSQL, but our data model is relational. The intended file system must be able to handle filtered search based on date, name, etc., as fast as possible. So what path should I take? Stick to relational with some indexing mechanism backed by third-party tools like Apache Solr, or dig deeper into NoSQL and find a suitable model and database satisfying the model? P.S.: On the NoSQL side, Cassandra and MongoDB are the choices proposed by my colleagues.
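
    For illustration, one way the tree can be flattened so a document store can still serve the filtered searches, sketched in Python (field names are made up, and the query syntax in the comments is loosely MongoDB-flavored, not a product recommendation):

      # Each file or directory is one document with a materialized path,
      # so directory listings become prefix/equality queries on 'path'
      # and the remaining fields stay individually filterable.
      file_doc = {
          '_id': 'a1b2c3',                 # doubles as the object-storage key
          'name': 'report.pdf',
          'path': '/projects/alpha/',      # materialized path of the parent
          'is_dir': False,
          'size': 482133,
          'created': '2013-04-02T10:15:00Z',
          'modified': '2013-04-05T08:00:00Z',
      }

      # With indexes on 'path', 'name' and 'modified', the common queries map to:
      #   list a directory:      find {path: '/projects/alpha/'}
      #   search by name:        find {name: /report/}
      #   filter by date range:  find {modified: {$gte: ..., $lt: ...}}

    The trade-off is that renaming a directory means rewriting the paths of everything underneath it, which is the usual price of materialized paths.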

  • Log incoming requests

    - by Maxim Eliseev
    We have Tomcat running on an Ubuntu server. It runs a web service that is open to the internet. Sometimes it gets a sudden spike of traffic and goes down. There is nothing unusual in the Tomcat access logs; I suspect this is because some of the requests are so 'heavy' that they never finish, and hence are never recorded in the Tomcat access logs. Is there a way to configure Ubuntu to log incoming requests in the following format?

      Date, Time, URL (with query string params), IP address (of client)

    There should be one line per request. Each request should be logged before it is executed. Only incoming requests to ports 80 and 443 should be logged.

  • OpenWorld 2011 Call for Papers: Deadline March 27

    - by antonio romero
    OpenWorld 2011 is now open for the public to submit session proposals. We would like to encourage our customers and partners to participate in this call for papers (CFP) process. The CFP for the general public (non-Oracle-employee submitters) closes on March 27, 2011. Please share the information provided below with your contacts.

    General information:
    - Conference location: Moscone Convention Center, San Francisco, CA
    - Conference dates: Sunday - Thursday, October 2 - 6, 2011
    - Conference website: http://www.oracle.com/us/openworld
    - CFP website: https://oracleus.wingateweb.com/portal/cfp/

    Paper submission key dates:
    - Call for Papers begins: Wednesday, March 9
    - Call for Papers ends: Sunday, March 27, 11:59 pm PDT
    - Notifications for accepted and declined submissions sent: end of May

    For questions regarding the Call for Papers, send an email to [email protected]

  • Oracle OpenWorld 2012 Hands-on Lab: “Using Oracle AIA Foundation Pack for Oracle E-Business Suite Integration”

    - by Lionel Dubreuil
    Sharpen your Oracle skill sets and master Oracle technology in Oracle OpenWorld Hands-on Labs. In self-paced, practical learning sessions covering everything from business applications to middleware, database, storage, and enterprise management solutions, you'll discover new ways to derive maximum benefit from your Oracle hardware and software solutions. Oracle experts will be available in person to answer questions and guide you through each lab. Hands-on Labs fill up early, and seats are limited, so don't be late. HOL10233 - "Using Oracle AIA Foundation Pack for Oracle E-Business Suite Integration" is scheduled for:

    Date: Monday, Oct 1
    Time: 10:45 AM - 11:45 AM
    Location: Marriott Marquis - Nob Hill CD

    In this hands-on lab, learn how to integrate Oracle E-Business Suite with third-party applications in your ecosystem with Oracle Application Integration Architecture (AIA) Foundation Pack. The lab focuses on SOA-based integration with Oracle E-Business Suite, using Oracle E-Business Suite Integrated SOA Gateway or the Oracle Applications Adapter to orchestrate a process across disparate applications in your ecosystem. Objectives for this session are to:
    - Learn how to design and build an integration, from functional definition to development
    - Learn how to automatically build services with robust error handling logic
    - Learn how to discover and reuse services available in Oracle E-Business Suite

  • What method replaces GL_SELECT for box selection?

    - by Jake
    Two weeks ago I started working on a box selection that selects shapes using GL_SELECT, and I just got it working. While looking up resources online, I found a significant number of posts saying that GL_SELECT is deprecated as of OpenGL 3.0, but no mention of what replaces it. I learnt OpenGL 1.2 back in college two years ago, but checking Wikipedia now I realise we already have OpenGL 4.0, and I am unsure what I need to do to keep myself up to date. So, in the meantime, what would be the preferred method for box selection these days? EDIT: I found http://www.khronos.org/files/opengl-quick-reference-card.pdf; on page 5 this card still lists glRenderMode(GL_SELECT) as part of the OpenGL 3.2 reference.
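
    For context, the two common replacements are colour-ID picking (render each object in a unique flat colour to an offscreen buffer and read back the pixels under the selection box) and doing the selection test yourself on the CPU. A minimal sketch of the CPU route, assuming a row-major 4x4 MVP matrix and a made-up object representation (Python for brevity):

      def project(mvp, p):
          """Project a world-space point with a 4x4 MVP matrix (row-major,
          column-vector convention) into normalized device coordinates."""
          x, y, z = p
          clip = [mvp[r][0] * x + mvp[r][1] * y + mvp[r][2] * z + mvp[r][3]
                  for r in range(4)]
          w = clip[3]
          if w <= 0:
              return None                    # behind the camera
          return (clip[0] / w, clip[1] / w)  # NDC x, y in [-1, 1]

      def box_select(objects, mvp, rect):
          """rect = (min_x, min_y, max_x, max_y) in NDC; returns hit objects."""
          hits = []
          for obj in objects:
              ndc = project(mvp, obj['position'])
              if ndc and rect[0] <= ndc[0] <= rect[2] and rect[1] <= ndc[1] <= rect[3]:
                  hits.append(obj)
          return hits

    Testing all bounding-box corners instead of a single reference point, or intersecting a frustum built from the selection rectangle with each object's bounds, are the usual refinements.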

  • SQL Table stored as a Heap - the dangers within

    - by MikeD
    Nearly all of the time I create a table, I include a primary key, and often that PK is implemented as a clustered index. Those two don't always have to go together, but in my world they almost always do. On a recent project, I was working on a data warehouse and a set of SSIS packages to import data from an OLTP database into that warehouse. The data I was importing was mostly new rows, sometimes updates to existing rows, and sometimes deletes. I decided to use the MERGE statement to implement the insert, update or delete in the data warehouse. I found it quite performant to have a stored procedure that extracted all the new, updated, and deleted rows from the source database and dumped them into a working table in the warehouse, then to run a stored proc in the warehouse containing the MERGE statement that took the rows from the working table and updated the real fact table:

      USE Warehouse

      CREATE TABLE Integration.MergePolicy (
          PolicyId int,
          PolicyTypeKey int,
          Premium money,
          Deductible money,
          EffectiveDate date,
          Operation varchar(5)
      )

      CREATE TABLE fact.Policy (
          PolicyKey int identity primary key,
          PolicyId int,
          PolicyTypeKey int,
          Premium money,
          Deductible money,
          EffectiveDate date
      )

      CREATE PROC Integration.MergePolicyProc as
      begin
          begin tran

          Merge fact.Policy as tgt
          Using Integration.MergePolicy as Src
          On (tgt.PolicyId = Src.PolicyId)
          When not matched by Target then
              Insert (PolicyId, PolicyTypeKey, Premium, Deductible, EffectiveDate)
              values (src.PolicyId, src.PolicyTypeKey, src.Premium, src.Deductible, src.EffectiveDate)
          When matched and src.Operation = 'U' then
              Update set PolicyTypeKey = src.PolicyTypeKey,
                         Premium = src.Premium,
                         Deductible = src.Deductible,
                         EffectiveDate = src.EffectiveDate
          When matched and src.Operation = 'D' then
              Delete
          ;

          delete from Integration.MergePolicy

          commit
      end

    Notice that my worktable (Integration.MergePolicy) doesn't have any primary key or clustered index. I didn't think this would be a problem, since it was a relatively small table and was empty after each run of the stored proc. During the initial loads of the warehouse, one of the work tables was getting about 1.5 million rows inserted, processed, then deleted. Also, because of a bug in the extraction process, the same 1.5 million rows (plus a few hundred more each time) were getting inserted, processed, and deleted repeatedly. This was being done on a fairly hefty server that was otherwise unused, and no one was paying any attention to the time it was taking. This week I received a backup of this database and loaded it on my laptop to troubleshoot the problem, and of course it took a good ten minutes or more to run the process. However, what seemed strange to me was that after I fixed the problem and happened to run the merge sproc when the work table was completely empty, it still took almost ten minutes to complete. I immediately looked back at the MERGE statement to see if I had some sort of outer join that meant it would be scanning the target table (which had about 2 million rows in it), then turned on the execution plan output to see what was happening under the hood. Running the stored procedure again took a long time, and the plan output didn't show me much: 55% on the MERGE statement, 45% on the DELETE statement, and table scans on the work table in both places. I was surprised at the relative cost of the DELETE statement, because there were really 0 rows to delete, but I was expecting to see the table scans.

    I was beginning to suspect that my problem was because the work table was being stored as a heap. I turned on STATISTICS IO and ran the sproc again. The output was quite interesting:

      Table 'Worktable'. Scan count 0, logical reads 0, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
      Table 'Policy'. Scan count 0, logical reads 0, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
      Table 'MergePolicy'. Scan count 1, logical reads 433276, physical reads 60, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.

    I've reproduced the above from memory, so the details aren't exact, but the essential bit was the very high number of logical reads on the table stored as a heap. Even just doing a SELECT Count(*) from Integration.MergePolicy incurred that sort of output, even though the result was always 0. I suppose I should research more on the allocation and deallocation of pages for tables stored as a heap, but I haven't, and my original assumption that a heap table with no rows would only need to read one page to answer any query was definitely proven wrong. It's likely that some sort of physical defragmentation of the table may have cleaned that up, but it seemed that the easiest answer was to put a clustered index on the table. After doing so, the execution plan showed a clustered index scan, and the IO stats showed only a single page read. (I aborted my first attempt at adding a clustered index because it was taking too long; instead I ran TRUNCATE TABLE Integration.MergePolicy first and then added the clustered index, both of which took very little time.) I suspect I may not have noticed this if I had used TRUNCATE TABLE instead of DELETE FROM Integration.MergePolicy, since I'm guessing that the truncate operation does some rather quick releasing of the pages allocated to the heap table. In the future, I will likely be much more careful to have a clustered index on every table I use, even the working tables. Mike

  • MIX10 Big Announcements Speculation

    - by Ken Cox [MVP]
    What's your speculation on the big announcements to come from MIX10? A date for VS 2010 availability on MSDN? A release candidate for Silverlight 4 on the desktop? An SDK for Silverlight on Windows Mobile 7? A CTP of Internet Explorer 9? Something (anything!) new on Windows Live ID development? More jQuery in ASP.NET? Alas, the vast majority of .NET developers (me included) can't attend the MIX conference again this year. Fortunately, Channel 9 is putting much of the important content online. They...(read more)


  • WebCast: WebCenter Portal, April 03rd, 2012

    - by rituchhibber
    Our next WebCenter Portal webcast will be on April 3rd, 2012. This webcast will help you prepare for the WebCenter Portal Certified Implementation Specialist exam.

    Webcast details:
    - Date: April 3rd
    - Topic: WebCenter Portal Refresh Course
    - Speaker: Yannick Ongena, InfoMentum (WebCenter Portal Specialized Partner)
    - Web call details: Join Webcast; dial-in numbers: CC/SP: 1579222/9221
    - Time: 12:00 - 15:00 CET (break around 13:30)
    - Intercall details: Conference ID/Key: 9729232/0304

    For more details, please click here.

  • English as a system language but Russian regional settings

    - by mbaitoff
    I usually choose English as the installation language, since I believe the original is better than a translation. However, the environment I'm working in is mostly Russian, so I have to deal with locale specifics. Even worse is the fact that selecting English yields the imperial measurement system: feet, inches, and the damned Letter paper size. Whatever I do, I can't manage to get rid of the Letter paper size; here and there I stumble upon Letter as a hidden default, and that spoils my prints. How can I select and use English as my language, but use the metric system and A4 paper size everywhere, along with Russian regional settings (date, time, decimal separator, etc.)?
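
    One approach, sketched under the assumption of a standard glibc locale setup where the ru_RU.UTF-8 locale has been generated: keep LANG on English for messages and the interface, and override the individual LC_* categories in /etc/default/locale, something like:

      LANG=en_US.UTF-8
      # dates, times, numbers and currency follow Russian conventions
      LC_TIME=ru_RU.UTF-8
      LC_NUMERIC=ru_RU.UTF-8
      LC_MONETARY=ru_RU.UTF-8
      # paper defaults to A4, measurements to metric
      LC_PAPER=ru_RU.UTF-8
      LC_MEASUREMENT=ru_RU.UTF-8

    Applications that honor the locale categories will then format dates, numbers, and paper sizes the Russian/metric way while keeping messages in English; applications that hard-code Letter as a default are, unfortunately, beyond what the locale can fix.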

  • Watch the Dec. 8 Webcast: Oracle CFO Discusses Planning and Forecasting

    - by Theresa Hickman
    Watch CFO.com's webcast featuring Oracle's CFO, Jeff Epstein, discussing how CFOs and CIOs lead their organizations to better planning, forecasting, and performance.

    Date: Wednesday, December 8, 2010
    Time: 2:00 PM EST
    Duration: 30 mins

    In this webcast, Celina Rogers, director of research with CFO Research Services, summarizes the latest findings from a fall 2010 survey on timely, accurate, and relevant forecasting and planning. You will also hear firsthand from Jeff Epstein, CFO of Oracle, on how a senior finance leader can partner successfully with IT to support growth during the course of the economic recovery. Click here to register for this webcast.

  • Oracle Linux 6.3 has been released

    - by Lenz Grimmer
    We're happy to announce the availability of Oracle Linux 6.3, the third update release for Oracle Linux 6. ISO images can now be obtained from the Oracle Software Delivery Cloud; the individual RPM packages have already been published via our public yum repository. This distribution includes the Unbreakable Enterprise Kernel Release 2 (2.6.39-200), Oracle's recommended kernel version for Oracle Linux. For further details, please see the Oracle Linux 6.3 Release Notes. Remember, Oracle Linux can be downloaded, used, and distributed free of charge, and updates and errata are freely available. For support, you are free to decide which of your systems should receive a support subscription, and at which level each of them should be supported. This makes Oracle Linux an ideal choice for both your development and production systems: you decide which support coverage is best for each of your systems individually, while keeping all of them up to date and secure. Wim Coekaerts recently wrote several blog posts about the benefits of Oracle Linux, which are worth a read: Oracle Linux components, More Oracle Linux options, My own personal use of Oracle Linux.
