Search Results

Search found 5260 results on 211 pages for 'white paper'.


  • SQL - when should you use "with (nolock)"

    - by Andy White
    Can someone explain the implications of using "with (nolock)" on queries, when you should/shouldn't use it? For example, if you have a banking application with high transaction rates and a lot of data in certain tables, in what types of queries would nolock be okay? Are there cases when you should always use it/never use it?
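
    For context, a minimal sketch of the syntax being asked about (the Transfers table is hypothetical): the hint reads uncommitted data without taking shared locks, so it avoids blocking writers at the cost of possibly counting rows that are later rolled back, which is why it tends to suit rough reporting queries rather than anything a balance depends on.

        -- Dirty read of a busy table: fast and non-blocking, but not transactionally safe
        SELECT COUNT(*) AS PendingTransfers
        FROM dbo.Transfers WITH (NOLOCK)
        WHERE Status = 'Pending';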

    Read the article

  • User control event or method override where custom properties are valid?

    - by Curtis White
    I have an ASP.NET user control that is used inside another user control. The parent user control uses data-binding to bind to a custom property of the child user control. Which method can I override, or which page event can I handle, so that I am guaranteed the property has been set? I think in a page it would be the PageLoaded event versus the Page_Load override? I am looking for the equivalent in the user control, because my property is always null even though it is set. Thanks.
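
    One common approach (a minimal sketch, not necessarily the accepted fix) is to defer any work that depends on the bound property until PreRender, by which point declarative binding expressions have normally been evaluated; the ChildControl and valueLabel names below are hypothetical.

        using System;

        // Child user control: wait until PreRender before using the data-bound property.
        public partial class ChildControl : System.Web.UI.UserControl
        {
            public string BoundValue { get; set; }   // set via data-binding in the parent

            protected override void OnPreRender(EventArgs e)
            {
                base.OnPreRender(e);
                if (BoundValue != null)
                {
                    valueLabel.Text = BoundValue;    // a hypothetical Label declared in the .ascx markup
                }
            }
        }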

    Read the article

  • Wordpress Custom Post Type adding tags

    - by Nick White
    I am currently working on a WordPress site and have created some custom post types; they all work fine and I can create posts, etc. What I need, however, is custom taxonomies on some of the custom post types. I have set this up, and adding tags directly to the taxonomy works. However, when creating a post for one of the custom post types, clicking "add tag" in the taxonomy block just follows an anchor link to #Member news Category-add and nothing else happens. It's not a big bug, but I would like to fix it so everything is correct by the time I go live. Is this a known bug, or is there something I have probably missed when creating the custom post type? register_post_type('member_news', array( 'label' => 'Member News','description' => 'News content submitted by members of RRUKA.','public' => true,'show_ui' => true,'show_in_menu' => true,'capability_type' => 'post','hierarchical' => false,'rewrite' => array('slug' => 'member-news'),'query_var' => true,'has_archive' => true,'exclude_from_search' => false,'menu_position' => 5,'supports' => array('title','editor','excerpt','trackbacks','revisions','thumbnail','author','page-attributes',),'labels' => array ( 'name' => 'Member News', 'singular_name' => 'Member News', 'menu_name' => 'Member News', 'add_new' => 'Add Member News', 'add_new_item' => 'Add New Member News', 'edit' => 'Edit', 'edit_item' => 'Edit Member News', 'new_item' => 'New Member News', 'view' => 'View Member News', 'view_item' => 'View Member News', 'search_items' => 'Search Member News', 'not_found' => 'No Member News Found', 'not_found_in_trash' => 'No Member News Found in Trash', 'parent' => 'Parent Member News', ),) ); Any information on this would be very welcome. Thanks in advance.
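
    For reference, a minimal sketch (not the poster's exact setup) of registering a tag-style taxonomy for the member_news post type on the init hook, which is what the "Add" button in the tag meta box normally depends on; the taxonomy name, slug, and label below are assumptions.

        // Register a non-hierarchical (tag-like) taxonomy and attach it to member_news.
        add_action('init', function () {
            register_taxonomy('member_news_tag', 'member_news', array(
                'label'        => 'Member News Tags',
                'hierarchical' => false,   // behaves like post tags rather than categories
                'rewrite'      => array('slug' => 'member-news-tag'),
                'show_ui'      => true,
                'query_var'    => true,
            ));
        });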

    Read the article

  • Protecting routes with authentication in an AngularJS app

    - by Chris White
    Some of my AngularJS routes are to pages which require the user to be authenticated with my API. In those cases, I'd like the user to be redirected to the login page so they can authenticate. For example, if a guest accesses /account/settings, they should be redirected to the login form. From brainstorming I came up with listening for the $locationChangeStart event and, if it's a location which requires authentication, redirecting the user to the login form. I can do that simply enough in my application's run() block: .run(['$rootScope', function($rootScope) { $rootScope.$on('$locationChangeStart', function(event) { // Decide if this location required an authenticated user and redirect appropriately }); }]); The next step is keeping a list of all of my application's routes that require authentication, so I tried adding them as parameters to my $routeProvider: $routeProvider.when('/account/settings', {templateUrl: '/partials/account/settings.html', controller: 'AccountSettingCtrl', requiresAuthentication: true}); But I don't see any way to get the requiresAuthentication key from within the $locationChangeStart event. Am I overthinking this? I tried to find a way for Angular to do this natively but couldn't find anything.
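
    A sketch of the approach usually suggested for this (assuming a standard ngRoute setup and a hypothetical authService wrapper around the API session): listen for $routeChangeStart instead of $locationChangeStart, because the route definition, including custom keys such as requiresAuthentication, is passed straight to the handler.

        // "app" stands in for the application's module.
        app.run(['$rootScope', '$location', 'authService',
            function ($rootScope, $location, authService) {
                $rootScope.$on('$routeChangeStart', function (event, next) {
                    // "next" is the route definition object, so the custom key is visible here.
                    if (next && next.requiresAuthentication && !authService.isLoggedIn()) {
                        $location.path('/login');
                    }
                });
            }]);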

    Read the article

  • Java servlet's request parameter's name set to entire json object

    - by Geren White
    I'm sending a JSON object through Ajax to a Java servlet. The JSON object is key-value based, with three keys that point to arrays and a key that points to a single string. I build it in JavaScript like this: var jsonObject = {"arrayOne": arrayOne, "arrayTwo": arrayTwo, "arrayThree": arrThree, "string": stringVar}; I then send it to the Java servlet using Ajax as follows: httpRequest.open('POST', url, true); httpRequest.setRequestHeader("Content-type", "application/x-www-form-urlencoded"); httpRequest.setRequestHeader("Connection", "close"); var jsonString = jsonObject.toJSONString(); httpRequest.send(jsonString); This sends the string to my servlet, but it isn't showing up as I expect it to. The whole JSON string gets set as the name of one of my request's parameters. So in my servlet, if I call request.getParameterNames(), it returns an enumeration in which one of the keys is the entire object's contents. I may be mistaken, but my thought was that it should set each key to a different parameter name, so I should have 4 parameters: arrayOne, arrayTwo, arrayThree, and string. Am I doing something wrong, or is my thinking off here? Any help is appreciated. Thanks
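
    A sketch of the usual server-side fix (the JsonServlet class name is hypothetical): because the body is a single JSON string rather than name=value pairs, read it from the request's reader instead of relying on getParameter(), then hand the string to any JSON parser (Gson, Jackson, org.json, ...).

        import java.io.BufferedReader;
        import java.io.IOException;
        import javax.servlet.ServletException;
        import javax.servlet.http.HttpServlet;
        import javax.servlet.http.HttpServletRequest;
        import javax.servlet.http.HttpServletResponse;

        public class JsonServlet extends HttpServlet {
            @Override
            protected void doPost(HttpServletRequest request, HttpServletResponse response)
                    throws ServletException, IOException {
                StringBuilder body = new StringBuilder();
                String line;
                try (BufferedReader reader = request.getReader()) {
                    while ((line = reader.readLine()) != null) {
                        body.append(line);
                    }
                }
                // body.toString() now holds the full JSON object with all four keys.
                String json = body.toString();
            }
        }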

    Read the article

  • C# performance of static string[] contains() (slooooow) vs. == operator

    - by Andrew White
    Hiya, just a quick query: I had a piece of code which compared a string against a long list of values, e.g. if(str == "string1" || str == "string2" || str == "string3" || str == "string4") DoSomething(); In the interest of code clarity and maintainability I changed it to public static string[] strValues = { "String1", "String2", "String3", "String4"}; ... if(strValues.Contains(str)) DoSomething(); Only to find the code execution time went from 2.5 secs to 6.8 secs (executed ca. 200,000 times). I certainly understand a slight performance trade-off, but 300%? Is there any way I could define the static strings differently to enhance performance? Cheers.
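
    A sketch of one common alternative (assuming the case mismatch between "string1" and "String1" above is just a typo in the question): a static HashSet<string> gives hash-based membership checks instead of the linear scan that Enumerable.Contains performs over an array.

        using System;
        using System.Collections.Generic;

        static class KnownStrings
        {
            // Built once; lookups are O(1) rather than a walk over the array.
            private static readonly HashSet<string> Values =
                new HashSet<string>(StringComparer.Ordinal)
                {
                    "String1", "String2", "String3", "String4"
                };

            public static bool IsKnown(string str)
            {
                return Values.Contains(str);
            }
        }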

    Read the article

  • Java keep printing a new line in my recursive method

    - by Abra Grace Libretto White
    I am trying to write a recursive method to print n asterisks on a line and create a new line at the end. So, TriangleOps.line(5); would print ***** This is the code I wrote: public static void line (int n){ if(n>0){ System.out.println("*"); line(n-1); }} Instead, it prints * * * * * with a lot of space at the end. Can anyone tell me how to remove the line breaks?
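
    A sketch of the usual fix: use print (no newline) for each asterisk and emit a single newline only once the recursion bottoms out.

        public static void line(int n) {
            if (n > 0) {
                System.out.print("*");   // no line break per asterisk
                line(n - 1);
            } else {
                System.out.println();    // one line break after all n asterisks
            }
        }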

    Read the article

  • Error while debug (role redirection)

    - by Chris White
    What is wrong with my role redirection? protected void Login1_LoggedIn(object sender, EventArgs e) { { if (Roles.IsUserInRole(Login1.UserName, "Aemy")) Response.Redirect("~/Admin/Home.aspx"); else if (Roles.IsUserInRole(Login1.UserName, "User")) Response.Redirect("~/Welcome/User1.aspx"); } } Error: The name 'Roles' does not exist in the current context
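
    The usual cause of this compiler error is a missing namespace import: the Roles API lives in System.Web.Security. A minimal sketch of the code-behind (the Login class name is arbitrary, and role checks also assume the role provider is enabled via <roleManager enabled="true" /> in web.config):

        using System;
        using System.Web.Security;   // Roles.IsUserInRole lives in this namespace

        public partial class Login : System.Web.UI.Page
        {
            protected void Login1_LoggedIn(object sender, EventArgs e)
            {
                if (Roles.IsUserInRole(Login1.UserName, "Aemy"))
                    Response.Redirect("~/Admin/Home.aspx");
                else if (Roles.IsUserInRole(Login1.UserName, "User"))
                    Response.Redirect("~/Welcome/User1.aspx");
            }
        }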

    Read the article

  • SQL SERVER – Introduction to Extended Events – Finding Long Running Queries

    - by pinaldave
    The job of an SQL Consultant is very interesting as always. The month before, I was busy doing query optimization and performance tuning projects for our clients, and this month, I am busy delivering my performance in Microsoft SQL Server 2005/2008 Query Optimization and & Performance Tuning Course. I recently read white paper about Extended Event by SQL Server MVP Jonathan Kehayias. You can read the white paper here: Using SQL Server 2008 Extended Events. I also read another appealing chapter by Jonathan in the book, SQLAuthority Book Review – Professional SQL Server 2008 Internals and Troubleshooting. After reading these excellent notes by Jonathan, I decided to upgrade my course and include Extended Event as one of the modules. This week, I have delivered Extended Events session two times and attendees really liked the said course. They really think Extended Events is one of the most powerful tools available. Extended Events can do many things. I suggest that you read the white paper I mentioned to learn more about this tool. Instead of writing a long theory, I am going to write a very quick script for Extended Events. This event session captures all the longest running queries ever since the event session was started. One of the many advantages of the Extended Events is that it can be configured very easily and it is a robust method to collect necessary information in terms of troubleshooting. There are many targets where you can store the information, which include XML file target, which I really like. In the following Events, we are writing the details of the event at two locations: 1) Ringer Buffer; and 2) XML file. It is not necessary to write at both places, either of the two will do. -- Extended Event for finding *long running query* IF EXISTS(SELECT * FROM sys.server_event_sessions WHERE name='LongRunningQuery') DROP EVENT SESSION LongRunningQuery ON SERVER GO -- Create Event CREATE EVENT SESSION LongRunningQuery ON SERVER -- Add event to capture event ADD EVENT sqlserver.sql_statement_completed ( -- Add action - event property ACTION (sqlserver.sql_text, sqlserver.tsql_stack) -- Predicate - time 1000 milisecond WHERE sqlserver.sql_statement_completed.duration > 1000 ) -- Add target for capturing the data - XML File ADD TARGET package0.asynchronous_file_target( SET filename='c:\LongRunningQuery.xet', metadatafile='c:\LongRunningQuery.xem'), -- Add target for capturing the data - Ring Bugger ADD TARGET package0.ring_buffer (SET max_memory = 4096) WITH (max_dispatch_latency = 1 seconds) GO -- Enable Event ALTER EVENT SESSION LongRunningQuery ON SERVER STATE=START GO -- Run long query (longer than 1000 ms) SELECT * FROM AdventureWorks.Sales.SalesOrderDetail ORDER BY UnitPriceDiscount DESC GO -- Stop the event ALTER EVENT SESSION LongRunningQuery ON SERVER STATE=STOP GO -- Read the data from Ring Buffer SELECT CAST(dt.target_data AS XML) AS xmlLockData FROM sys.dm_xe_session_targets dt JOIN sys.dm_xe_sessions ds ON ds.Address = dt.event_session_address JOIN sys.server_event_sessions ss ON ds.Name = ss.Name WHERE dt.target_name = 'ring_buffer' AND ds.Name = 'LongRunningQuery' GO -- Read the data from XML File SELECT event_data_XML.value('(event/data[1])[1]','VARCHAR(100)') AS Database_ID, event_data_XML.value('(event/data[2])[1]','INT') AS OBJECT_ID, event_data_XML.value('(event/data[3])[1]','INT') AS object_type, event_data_XML.value('(event/data[4])[1]','INT') AS cpu, event_data_XML.value('(event/data[5])[1]','INT') AS duration, event_data_XML.value('(event/data[6])[1]','INT') 
AS reads, event_data_XML.value('(event/data[7])[1]','INT') AS writes, event_data_XML.value('(event/action[1])[1]','VARCHAR(512)') AS sql_text, event_data_XML.value('(event/action[2])[1]','VARCHAR(512)') AS tsql_stack, CAST(event_data_XML.value('(event/action[2])[1]','VARCHAR(512)') AS XML).value('(frame/@handle)[1]','VARCHAR(50)') AS handle FROM ( SELECT CAST(event_data AS XML) event_data_XML, * FROM sys.fn_xe_file_target_read_file ('c:\LongRunningQuery*.xet', 'c:\LongRunningQuery*.xem', NULL, NULL)) T GO -- Clean up. Drop the event DROP EVENT SESSION LongRunningQuery ON SERVER GO Just run the above query, afterwards you will find following result set. This result set contains the query that was running over 1000 ms. In our example, I used the XML file, and it does not reset when SQL services or computers restarts (if you are using DMV, it will reset when SQL services restarts). This event session can be very helpful for troubleshooting. Let me know if you want me to write more about Extended Events. I am totally fascinated with this feature, so I’m planning to acquire more knowledge about it so I can determine its other usages. Reference : Pinal Dave (http://blog.SQLAuthority.com) Filed under: Pinal Dave, SQL, SQL Authority, SQL Optimization, SQL Performance, SQL Query, SQL Scripts, SQL Server, SQL Tips and Tricks, SQL Training, SQLServer, T SQL, Technology Tagged: SQL Extended Events

    Read the article

  • Performance and Optimization Isn’t Evil

    - by Reed
    Donald Knuth is a fairly amazing guy.  I consider him one of the most influential contributors to computer science of all time.  Unfortunately, most of the time I hear his name, I cringe.  This is because it’s typically somebody quoting a small portion of one of his famous statements on optimization: “premature optimization is the root of all evil.” I mention that this is only a portion of the entire quote, and, as such, I feel that Knuth is being quoted out of context.  Optimization is important.  It is a critical part of every software development effort, and should never be ignored.  A developer who ignores optimization is not a professional.  Every developer should understand optimization – know what to optimize, when to optimize it, and how to think about code in a way that is intelligent and productive from day one. I want to start by discussing my own, personal motivation here.  I recently wrote about a performance issue I ran across, and was slammed by multiple comments and emails that effectively boiled down to: “You’re an idiot.  Premature optimization is the root of all evil.  This doesn’t matter.”  It didn’t matter that I discovered this while measuring in a profiler, and that it was a portion of my code base that can take “many hours to complete.”  Even so, multiple people instantly jump to “it’s premature – it doesn’t matter.” This is a common thread I see.  For example, StackOverflow has many pages of posts with answers that boil down to (mis)quoting Knuth.  In fact, just about any question relating to a performance related issue gets this quote thrown at it immediately – whether it deserves it or not.  That being said, I did receive some positive comments and emails as well.  Many people want to understand how to optimize their code, approaches to take, tools and techniques they can use, and any other advice they can discover. First, lets get back to Knuth – I mentioned before that Knuth is being quoted out of context.  Lets start by looking at the entire quote from his 1974 paper Structured Programming with go to Statements: “We should forget about small efficiencies, say about 97% of the time: premature optimization is the root of all evil. Yet we should not pass up our opportunities in that critical 3%. A good programmer will not be lulled into complacency by such reasoning, he will be wise to look carefully at the critical code; but only after that code has been identified.” Ironically, if you read Knuth’s original paper, this statement was made in the middle of a discussion of how Knuth himself had changed how he approaches optimization.  It was never a statement saying “don’t optimize”, but rather, “optimizing intelligently provides huge advantages.”  His approach had three benefits: “a) it doesn’t take long” … “b) the payoff is real”, c) you can “be less efficient in the other parts of my programs, which therefore are more readable and more easily written and debugged.” Looking at Knuth’s premise here, and reading that section of his paper, really leads to a few observations: Optimization is important  “he will be wise to look carefully at the critical code” Normally, 3% of your code – three lines out of every 100 you write, are “critical code” and will require some optimization: “we should not pass up our opportunities in that critical 3%” Optimization, if done well, should not be time consuming: “it doesn’t take long” Optimization, if done correctly, provides real benefits: “the payoff is real” None of this is new information.  
People who care about optimization have been discussing this for years – for example, Rico Mariani’s Designing For Performance (a fantastic article) discusses many of the same issues very intelligently. That being said, many developers seem unable or unwilling to consider optimization.  Many others don’t seem to know where to start.  As such, I’m going to spend some time writing about optimization – what it is, how we should think about it, and what we can do to improve our own code.

    Read the article

  • I’m a Phoenix… and I’m miffed

    - by Stan Spotts
    For personal reasons, almost 30 years ago I left school to enter the workforce. I decided late 2008 to go back to school and finish my degree. After the expected loss of credits for a transfer, from Temple University to University of Phoenix, I'm now about 75% done. The experience has been interesting. Classes are time compressed, only 5 weeks each. Because I have a family and a full time job, I'm taking one at a time. Even so, I've written more papers in these classes than I ever wrote at Temple. My own papers are one thing, but the team papers give me heartburn since I can't completely control what goes into them. Not a big deal except that they make up 30% of our grade. In any case, most of the class facilitators have been great. I had great ones for Accounting, Finance, and frankly most others. I've had a few (4, maybe) cases where I was less than 2 points from an A, and asked the facilitator if I could get any of my work reviewed to see if I could get those extra points. I figured it was worth a shot, and there were no extenuating circumstances to help make my case. I think that only one facilitator decided after a review of one paper that my interpretation was good, just not what he expected, and gave me another point, which gave me an A. So while none are pushovers, they've all been open to discussion, which is as much as I should expect. Overall, good experience. That is, until my last class. On the second week, the day I was due to hand in my personal assignment for the week, I was in an accident. An SUV creamed my little Ford Focus, and totaled it (estimated repair over $11K). I was pretty banged up, especially my left shoulder. I was scheduled for rotator cuff surgery for two weeks later, and getting hit against the door really made it worse. After dealing with the police, the EMT, the tow truck, and the Percocet and Flexeril for the pain, I crashed for the night and didn't get to upload my paper until the next day. The instructor took 30% off for it being late, even after I supplied photos of the car, my arm (huge bruises), and offered to supply the police report number. I figured I'd be okay since that's 2.7 points, and I could lose up to 5 before jeopardizing an A grade. Well, that wasn't the case as we lost more points than I expected on our team paper in Week 5. I ended up with a 94.3. Yes, 7/10 of a point from an A. Of course I asked the instructor to review the issue with the accident and give me just the 0.7 points I needed for the A. That got me a short response of "I have received your emails and review your work over the last five weeks. Your current grade will stand. If you would like to dispute your grade then please feel free to contact your academic advisor. I wish you much success in your professional and academic career." Brrrr….! So I asked my academic advisor to file a dispute. If it wasn't that a pretty bad car accident was the cause, I wouldn't have. Without the grade reduction, I would have had a 97 for the class, so I'll argue that I was performing at the A level throughout the class. Why her purported "review" of my work didn't then warrant such a minor adjustment, I don't know. An A- drops my GPA, and this ticked me off. Now I have to wait and see what the school says about the grade dispute.

    Read the article

  • Craig Mundie's video

    - by GGBlogger
    Timothy recently posted “Microsoft Shows Off Radical New UI, Could Be Used In Windows 8” on Slashdot. I took such grave exception to his post that I found it necessary to my senses to write this blog. We need to go back many years to the days of hand cranked calculators and early main frame computers. These devices had singular purposes – they were “number crunchers” used to make accounting easier. The front facing display in early mainframes was “blinken lights.” The calculators did provide printing – in the form of paper tape and the mainframes used line printers to generate reports as needed. We had other metaphors to work with. The typewriter was/is a mechanical device that substitutes for a type setting machine. The originals go back to 1867 and the keyboard layout has remained much the same to this day. In the earlier years the Morse code telegraphs gave way to Teletype machines. The old ASR33, seen on the left in this photo of one of the first computers I help manufacture, used a keyboard very similar to the keyboards in use today. It also generated punched paper tape that we generated to program this computer in machine language. Everything considered this computer which dates back to the late 1960s has a keyboard for input and a roll of paper as output. So in a very rudimentary fashion little has changed. Oh – we didn’t have a mouse! The entire point of this exercise is to point out that we still use very similar methods to get data into and out of a computer regardless of the operating system involved. The Altair, IMSAI, Apple, Commodore and onward to our modern machines changed the hardware that we interfaced to but changed little in the way we input, view and output the results of our computing effort. The mouse made some changes and the advent of windowed interfaces such as Windows and Apple made things somewhat easier for the user. My 4 year old granddaughter plays here Dora games on our computer. She knows how to start programs, use the mouse, play the game and is quite adept so we have come some distance in making computers useable. One of my chief bitches is the constant harangues leveled at Microsoft. Yup – they are a money making organization. You like Apple? No problem for me. I don’t use Apple mostly because I’m comfortable in the Windows environment but probably more because I don’t like Apple’s “Holier than thou” attitude. Some think they do superior things and that’s also fine with me. Obviously the iPhone has not done badly and other Apple products have fared well. But they are expensive. I just build a new machine with 4 Terabytes of storage, an Intel i7 Core 950 processor and 12 GB of RAMIII. It cost me – with dual monitors – less than 2000 dollars. Now to the chief reason for this blog. I’m going to continue developing software for as long as I’m able. For that reason I don’t see my keyboard, mouse and displays changing much for many years. I also don’t think Microsoft is going to spoil that for me by making radical changes to my developer experience. What Craig Mundie does in his video here:  http://www.ispyce.com/2011/02/microsoft-shows-off-radical-new-ui.html is explore the potential future of computer interfaces for the masses of potential users. Using a computer today requires a person to have rudimentary capabilities with keyboards and the mouse. Wouldn’t it be great if all they needed was hand gestures? Although not mentioned it would also be nice if computers responded intelligently to a user’s voice. 
There is absolutely no argument with the fact that user interaction with these machines is going to change over time. My personal prediction is that it will take years for much of what Craig discusses to come to a cost-effective reality, but it is certainly coming. I just don’t believe that what Craig discusses will be the future look of Windows 8.

    Read the article

  • WebCenter Customer Spotlight: Texas Industries, Inc.

    - by me
    Author: Peter Reiser - Social Business Evangelist, Oracle WebCenter  Solution SummaryTexas Industries, Inc. (TXI) is a leading supplier of cement, aggregate, and consumer product building materials for residential, commercial, and public works projects. TXI is based in Dallas and employs around 2,000 employees. The customer had the challenge of decentralized and manual processes for entering 180,000 vendor invoices annually.  Invoice entry was a time- and resource-intensive process that entailed significant personnel requirements. TXI implemented a centralized solution leveraging Oracle WebCenter Imaging, a smart routing solution that enables users to capture invoices electronically with Oracle WebCenter Capture and Oracle WebCenter Forms Recognition to send  the invoices through to Oracle Financials for approvals and processing.  TXI significantly lowered resource needs for payable processing,  increase productivity by 80% and reduce invoice processing cycle times by 84%—from 20 to 30 days to just 3 to 5 days, on average. Company OverviewTexas Industries, Inc. (TXI) is a leading supplier of cement, aggregate, and consumer product building materials for residential, commercial, and public works projects. With operating subsidiaries in six states, TXI is the largest producer of cement in Texas and a major producer in California. TXI is a major supplier of stone, sand, gravel, and expanded shale and clay products, and one of the largest producers of bagged cement and concrete  products in the Southwest. Business ChallengesTXI had the challenge of decentralized and manual processes for entering 180,000 vendor invoices annually.  Invoice entry was a time- and resource-intensive process that entailed significant personnel requirements. Their business objectives were: Increase the efficiency of core business processes, such as invoice processing, to support the organization’s desire to maintain its role as the Southwest’s leader in delivering high-quality, low-cost products to the construction industry Meet the audit and regulatory requirements for achieving Sarbanes-Oxley (SOX) compliance Streamline entry of 180,000 invoices annually to accelerate processing, reduce errors, cut invoice storage and routing costs, and increase visibility into payables liabilities Solution DeployedTXI replaced a resource-intensive, paper-based, decentralized process for invoice entry with a centralized solution leveraging Oracle WebCenter Imaging 11g. They worked with the Oracle Partner Keste LLC to develop a smart routing solution that enables users to capture invoices electronically with Oracle WebCenter Capture and then uses Oracle WebCenter Forms Recognition and the Oracle WebCenter Imaging workflow to send the invoices through to Oracle Financials for approvals and processing. 
Business Results Significantly lowered resource needs for payable processing through centralization and improved efficiency  Enabled the company to process invoices faster and pay bills earlier, allowing it to take advantage of additional vendor discounts Tracked to increase productivity by 80% and reduce invoice processing cycle times by 84%—from 20 to 30 days to just 3 to 5 days, on average Achieved a 25% reduction in paper invoice storage costs now that invoices are captured digitally, and enabled a 50% reduction in shipping costs, as the company no longer has to send paper invoices between headquarters and production facilities for approvals “Entering and manually processing more than 180,000 vendor invoices annually was time and labor intensive. With Oracle Imaging and Process Management, we have automated and centralized invoice entry and processing at our corporate office, improving productivity by 80% and reducing invoice processing cycle times by 84%—a very important efficiency gain.” Terry Marshall, Vice President of Information Services, Texas Industries, Inc. Additional Information TXI Customer Snapshot Oracle WebCenter Content Oracle WebCenter Capture Oracle WebCenter Forms Recognition

    Read the article

  • Two Candidates + One Job = Two Different Outcomes

    - by david.talamelli
    Recruiters have always headhunted (sidenote: I do not like this word, in general I think the type of people who use the phrase “headhunting” are the ones who are trying to sound more important than what they likely are). Any serious Recruiter engages in direct recruiting activity, it is part and parcel of the business it is not something unique. With the uptake in Social Media the past 4-5 years, we have seen an increase in the number of Recruiters proactively reaching out to people about job opportunities. We have also seen this activity increase across all levels of hire, from help desk roles to C-Level Executives. While getting approached about a role can be a nice boost to a person’s ego, do not let it give you an inflated sense of entitlement. It is The way that people handle themselves during these calls and subsequent interviews will have a large impact on their potential to land that job. Last week I spoke to two very different candidates, both about the same position and both with very different outcomes. On paper, Candidate #1 looked fantastic; they ticked many of the boxes that we were looking for. The person is working at global IT company and working in a similar role as the one we were hiring for but not in as senior as the role we had. This role would have been the perfect step to getting involved in more complex work for the person. Candidate #2 had less polished IT experience, ticked some of the boxes we were looking for and on paper in comparison to Candidate #1 was not as close a fit as Candidate #1 was. It seemed like I was comparing apples and oranges. After speaking to both candidates it turns out I was comparing apples and oranges except the person better suited for our role was not the one I was expecting it would be. The first candidate on paper looked great – they had the experience we were looking for and appeared to be just right for the role, but after talking to them, they gave me the impression that they thought the world owed them. The impression I was left with was that they did not equate success with hard work, they seemed more interested in “what is in it for me”. Rather than having a proper conversation with me, I was often cut off and asked to hurry it up when explaining our business, what we are doing, etc... . This person seemed more interested in the job title and money than how rather than think about ways to make the role successful. Candidate #2 who had limited experience, made up for any perceived lack of experience and them some with a demonstrated motivation to succeed and do the things needed to make that happen. Candidate #2 made a great first impression, they did not seem afraid of hard work and demonstrated a “team player” attitude. In talking to them they kept me engaged, listened and asked thoughtful questions that made me think this is the type of person who creates their own luck and who would thrive in a place like Oracle. Skills, capabilities, experience and a good resume can certainly get your foot in the door, but the wrong attitude or approach to work can close those opportunities just as easily. On the other hand, hard work, effort and a genuine work ethic may help open those doors that would otherwise closed for you. A resume with all the credentials gets you in the front door but that is just the beginning of the process. It is not how we start the race that is important, it’s how things end that matter most.

    Read the article

  • Working and Studying in Oracle, how I balance my time....

    - by anca.rosu
    Hi, my name is Laura. I am working as an Intern within Executive Administration at Oracle Denmark, whilst studying Information Management at Copenhagen Business school. I have recently handeding a paper on Information Systems which gave me exposure to Oracle. Once completing this paper I came across a job posting on my University’s intranet site and I applied directly online. When I submitted my application for the job offer, I wondered about what language I should use for the application form, as the job posting was in Danish, but the contact person and number looked Irish. I therefore chose English. Later that same day, Fiona, one of Oracle’s Graduates Recruitment Consultants based in Ireland, contacted me. This shows how global Oracle truly is. I went for my face-to-face interview in Oracle Denmark with Charlotte, one of the team managers. I spent 5 minutes waiting in the lobby, just looking around, thinking to myself, I really want to work here. The atmosphere seemed so pleasant with a relaxed approach between colleagues, employees and guests. The interview took about an hour, but we touched on a lot of different subjects. The profile I got of Oraclewas that this is a place where you are encouraged to think for yourself, and you are given the freedom to use your ideas. Later that evening, Fiona called and offered me the job. I was very happy. At Oracle Denmark we have 4 different zones: a Quiet Zone, a Project Zone, a Dialogue Zone and a Call Zone. Everyday when you arrive you consider what will be the most productive for the day’s task, and you take your toolbox and go find a desk in the zone you have decided on. It is therefore very unusual to be next to the same person two days in a row. At Oracle, people are located all over the world, and everybody has team members, colleagues or leaders in other countries, or even other time zones. Initially,I was worried about how I would adapt to this approach but I soon realized I had nothing to worry about and now I appreciate working this way. My colleagues have been very supportive and they have openly welcomed me into my new role. I typically work two days a week and have three days at University. During exam periods, I have the flexibility to work less hours and focus on the exams, in return for putting in more hours at work when needed. The first time I had to ask for time off before handing in a paper, my boss looked at me and said, ”Of course! Your education is the most important!” I hope that by sharing my experiences with you, I can inspire or encourage you to consider Oracle as a potential employer, where you can grow both professionally and personally. If you have any questions related to this article feel free to contact  [email protected].  You can find our job opportunities via http://campus.oracle.com Technorati Tags: Intern,Oracle Denmark,Information Systems,Business school,Copenhagen,Graduates Recruitment,Ireland,Quiet Zone,Project Zone,Dialogue Zone,Call Zone,University,flexibility

    Read the article

  • WebCenter Customer Spotlight: Hyundai Motor Company

    - by me
    Author: Peter Reiser - Social Business Evangelist, Oracle WebCenter  Solution SummaryHyundai Motor Company is one of the world’s fastest-growing car manufacturers, ranked as the fifth-largest in 2011. The company also operates the world’s largest integrated automobile manufacturing facility in Ulsan, Republic of Korea, which can produce 1.6 million units per year. They  undertook a project to improve business efficiency and reinforce data security by centralizing the company’s sales, financial, and car manufacturing documents into a single repository. Hyundai Motor Company chose Oracle Exalogic, Oracle Exadata, Oracle WebLogic Sever, and Oracle WebCenter Content 11g, as they provided better performance, stability, storage, and scalability than their competitors.  Hyundai Motor Company cut the overall time spent each day on document-related work by around 85%, saved more than US$1 million in paper and printing costs, laid the foundation for a smart work environment, and supported their future growth in the competitive car industry. Company OverviewHyundai Motor Company is one of the world’s fastest-growing car manufacturers, ranked as the fifth-largest in 2011. The company also operates the world’s largest integrated automobile manufacturing facility in Ulsan, Republic of Korea, which can produce 1.6 million units per year. The company strives to enhance its brand image and market recognition by continuously improving the quality and design of its cars. Business Challenges To maximize the company’s growth potential, Hyundai Motor Company undertook a project to improve business efficiency and reinforce data security by centralizing the company’s sales, financial, and car manufacturing documents into a single repository. Specifically, they wanted to: Introduce a smart work environment to improve staff productivity and efficiency, and take advantage of rapid company growth due to new, enhanced car designs Replace a legacy document system managed by individual staff to improve collaboration, the visibility of corporate documents, and sharing of work-related files between employees Improve the security and storage of documents containing corporate intellectual property, and prevent intellectual property loss when staff leaves the company Eliminate delays when downloading files from the central server to a PC Build a large, single document repository to more efficiently manage and share data between 30,000 staff at the company’s headquarters Establish a scalable system that can be extended to Hyundai offices around the world Solution DeployedAfter conducting a large-scale benchmark test, Hyundai Motor Company chose Oracle Exalogic, Oracle Exadata, Oracle WebLogic Sever, and Oracle WebCenter Content 11g, as they provided better performance, stability, storage, and scalability than their competitors. 
Business Results Lowered the overall time spent each day on all document-related work by approximately 85%—from 4.5 hours to around 42 minutes on an average day Saved more than US$1 million per year in printer, paper, and toner costs, and laid the foundation for a completely paperless environment Reduced staff’s time spent requesting and receiving documents about car sales or designs from supervisors by 50%, by storing and managing all documents across the corporation in a single repository Cut the time required to draft new-car manufacturing, sales, and design documents by 20%, by allowing employees to reference high-quality data, such as marketing strategy and product planning documents already in the system Enhanced staff productivity at company headquarters by 9% by reducing the document-related tasks of 30,000 administrative and research and development staff Ensured the system could scale to hold 3 petabytes of car sales, manufacturing, and design data by 2013 and be deployed at branches worldwide We chose Oracle Exalogic, Oracle Exadata, and Oracle WebCenter Content to support our new document-centralization system over their competitors as Oracle offers stable storage for petabytes of data and high processing speeds. We have cut the overall time spent each day on document-related work by around 85%, saved more than US$1 million in paper and printing costs, laid the foundation for a smart work environment, and supported our future growth in the competitive car industry. Kang Tae-jin, Manager, General Affairs Team, Hyundai Motor Company Additional Information Hyundai Motor Company Customer Snapshot Oracle WebCenter Content

    Read the article

  • Cloud – the forecast is improving

    - by Rob Farley
    There is a lot of discussion about “the cloud”, and how that affects people’s data stories. Today the discussion enters the realm of T-SQL Tuesday, hosted this month by Jorge Segarra. Over the years, companies have invested a lot in making sure that their data is good, and I mean every aspect of it – the quality of it, the security of it, the performance of it, and more. Experts such as those of us at LobsterPot Solutions have helped these companies with this, and continue to work with clients to make sure that data is a strong part of their business, not an oversight. Whether business intelligence systems are being utilised or not, every business needs to be able to rely on its data, and have the confidence in it. Data should be a foundation upon which a business is built. In the past, data had been stored in paper-based systems. Filing cabinets stored vital information. Today, people have server rooms with storage of various kinds, recognising that filing cabinets don’t necessarily scale particularly well. It’s easy to ‘lose’ data in a filing cabinet, when you have people who need to make sure that the sheets of paper are in the right spot, and that you know how things are stored. Databases help solve that problem, but still the idea of a large filing cabinet continues, it just doesn’t involve paper. If something happens to the physical ‘filing cabinet’, then the problems are larger still. Then the data itself is under threat. Many clients have generators in case the power goes out, redundant cables in case the connectivity dies, and spare servers in other buildings just in case they’re required. But still they’re maintaining filing cabinets. You see, people like filing cabinets. There’s something to be said for having your data ‘close’. Even if the data is not in readable form, living as bits on a disk somewhere, the idea that its home is ‘in the building’ is comforting to many people. They simply don’t want to move their data anywhere else. The cloud offers an alternative to this, and the human element is an obstacle. By leveraging the cloud, companies can have someone else look after their filing cabinet. A lot of people really don’t like the idea of this, partly because the administrators of the data, those people who could potentially log in with escalated rights and see more than they should be allowed to, who need to be trusted to respond if there’s a problem, are now a faceless entity in the cloud. But this doesn’t mean that the cloud is bad – this is simply a concern that some people may have. In new functionality that’s on its way, we see other hybrid mechanisms that mean that people can leverage parts of the cloud with less fear. Companies can use cloud storage to hold their backup data, for example, backups that have been encrypted and are therefore not able to be read by anyone (including administrators) who don’t have the right password. Companies can have a database instance that runs locally, but which has its data files in the cloud, complete with Transparent Data Encryption if needed. There can be a higher level of control, making the change easier to accept. Hybrid options allow people who have had fears (potentially very justifiable) to take a new look at the cloud, and to start embracing some of the benefits of the cloud (such as letting someone else take care of storage, high availability, and more) without losing the feeling of the data being close. @rob_farley

    Read the article

  • Raphael js text positioning: centering text in a circle

    - by j-man86
    Hey everyone, I am using Raphael js to draw circled numbers. The problem is that each number has a different width/height so using one set of coordinates to center the text isn't working. The text displays differently between IE, FF, and safari. Is there a dynamic way to find the height/width of the number and center it accordingly? Here is my test page: http://jesserosenfield.com/fluid/test.html and my code: function drawcircle(div, text) { var paper = Raphael(div, 26, 26); //<< var circle = paper.circle(13, 13, 10.5); circle.attr("stroke", "#f1f1f1"); circle.attr("stroke-width", 2); var text = paper.text(12, 13, text); //<< text.attr({'font-size': 15, 'font-family': 'FranklinGothicFSCondensed-1, FranklinGothicFSCondensed-2'}); text.attr("fill", "#f1f1f1"); } window.onload = function () { drawcircle("c1", "1"); drawcircle("c2", "2"); drawcircle("c3", "3"); }; Thanks very much!
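
    A sketch of one dynamic approach (not a tested cross-browser fix): measure the rendered label with getBBox() and translate it so its bounding-box midpoint lands on the circle's centre, rather than hand-tuning the x/y per browser.

        // Inside drawcircle(), after the text element has been created:
        var bbox = text.getBBox();
        text.translate(13 - (bbox.x + bbox.width / 2),    // 13, 13 is the circle's centre
                       13 - (bbox.y + bbox.height / 2));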

    Read the article

  • help in the Donalds B. Johnson's algorithm, i cannot understand the pseudo code (PART II)

    - by Pitelk
    Hi all, I cannot understand a certain part of the paper published by Donald Johnson about finding cycles (circuits) in a graph. More specifically, I cannot understand what the matrix Ak is, which is mentioned in the following line of the pseudo code: Ak := adjacency structure of strong component K with least vertex in subgraph of G induced by {s, s+1, ..., n}; To make things worse, some lines later it mentions "for i in Vk do" without declaring what Vk is. As far as I understand, we have the following: 1) In general, a strong component is a sub-graph of a graph in which, for every node of the sub-graph, there is a path to any other node of the sub-graph (in other words, you can reach any node of the sub-graph from any other node of the sub-graph). 2) A sub-graph induced by a list of nodes is a graph containing all these nodes plus all the edges connecting these nodes; in the paper the mathematical definition is "F is a subgraph of G induced by W if W is a subset of V and F = (W, {(u,y) | u,y in W and (u,y) in E})", where u and y are vertices, E is the set of all the edges in the graph, and W is a set of nodes. 3) In the code implementation the nodes are named by integer numbers 1 ... n. 4) I suspect that Vk is the set of nodes of the strong component K. Now to the question. Let's say we have a graph G = (V,E) with V = {1,2,3,4,5,6,7,8,9} which can be divided into 3 strong components SC1 = {1,4,7,8}, SC2 = {2,3,9}, SC3 = {5,6} (and their edges). Can anybody give me an example, for s = 1, s = 2, s = 5, of what Vk and Ak are going to be according to the code? The pseudo code is in my previous question at http://stackoverflow.com/questions/2908575/help-in-the-donalds-b-johnsons-algorithm-i-cannot-understand-the-pseudo-code and the paper can be found at http://stackoverflow.com/questions/2908575/help-in-the-donalds-b-johnsons-algorithm-i-cannot-understand-the-pseudo-code Thank you in advance

    Read the article

  • help in the Donalds B. Johnson's algorithm, i cannot understand the pseudo code

    - by Pitelk
    Hi, does anyone know Donald B. Johnson's algorithm, which enumerates all the elementary circuits (cycles) in a directed graph? link text I have the paper he published in 1975, but I cannot understand the pseudo-code. My goal is to implement this algorithm in Java. One question I have is, for example, what is the matrix Ak it refers to? The pseudo code mentions that Ak := adjacency structure of strong component K with least vertex in subgraph of G induced by {s,s+1,....n}; Does that mean I have to implement another algorithm that finds the Ak matrix? Another question is what the following means: begin logical f; Also, does the line "logical procedure CIRCUIT (integer value v);" mean that the CIRCUIT procedure returns a logical variable? The pseudo code also has the line "CIRCUIT := f;" – what does this mean? It would be great if someone could translate this 1970's pseudocode into a more modern style of pseudo code so I can understand it. If you are interested in helping but cannot find the paper, please email me at [email protected] and I will send you the paper. Thanks in advance

    Read the article

  • Raphael JS image animation performance.

    - by michael
    Hi, I'm trying to create an image animation using Raphael JS. I want the effect of a bee moving randomly across the page, I've got a working example but it's a bit "jittery", and I'm getting this warning in the console: "Resource interpreted as image but transferred with MIME type text/html" I'm not sure if the warning is causing the "jittery" movement or its just the way I approached it using maths. If anyone has a better way to create the effect, or improvements please let me know. I have a demo online here and heres my javascript code: function random(min, max) { return Math.floor(Math.random() * (max - min + 1)) + min; } function BEE(x, y, scale) { this.x = x; this.y = y; this.s = scale; this.paper = Raphael("head", 915, 250); this.draw = function() { this.paper.clear(); this.paper.image("bee.png", this.x, this.y, 159*this.s, 217*this.s); } this.update = function() { var deg = random(-25, 25); var newX = Math.cos(Raphael.rad(deg)) * 2; var newY = Math.sin(Raphael.rad(deg)) * 2; this.x += newX; this.y += newY; if( this.x > 915) { this.x = 0; } if( this.y > 250 || this.y < 0 ) { this.y = 125; } } } $(document).ready(function() { var bee = new BEE(100, 150, 0.4); var timer = setInterval(function(){ bee.draw(); bee.update(); }, 15); }
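
    The jitter most likely comes from picking a completely fresh direction every 15 ms; a sketch of one way to smooth it (reusing the random() helper above) is to keep a persistent heading on the bee and nudge it by a few degrees per frame.

        this.heading = 0;
        this.update = function() {
            this.heading += random(-5, 5);                        // small course change per frame
            this.x += Math.cos(Raphael.rad(this.heading)) * 2;
            this.y += Math.sin(Raphael.rad(this.heading)) * 2;
            if (this.x > 915) { this.x = 0; }
            if (this.y > 250 || this.y < 0) { this.y = 125; }
        };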

    Read the article

  • JavaScript (SVG drawing): Positioning x amount of points in an area

    - by Jack
    I'm using http://raphaeljs.com/ to try and draw multiple small circles. The problem I'm having is that the canvas has a fixed width, and if I want to draw, say, 1000 circles, they don't wrap onto a 'new line' (because you have to specify the xy position of each circle). E.g. I want this: .................................................. to look like this: ............................ ...................... At the moment I'm doing this: for ( var i = 0; i < 1000; i++ ) { var multiplier = i*3; if ( i <= 50 ) { paper.circle((2*multiplier),2,2); } else if ( i >= 51 && i <= 101 ) { paper.circle((2*multiplier) - 304,8,2); } else if ( i >= 152 && i <= 202 ) { paper.circle((2*multiplier) - 910,14,2); } } For reference: circle(x co-ord, y co-ord, radius) This is messy. I have to add an if statement for every new line I want. Must be a better way of doing it..?
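
    A sketch of a more general layout (the perRow and spacing values are arbitrary): derive the row and column from the loop index, so the circles wrap onto new lines without a separate if-branch per line.

        var perRow = 150, spacing = 6, radius = 2;
        for (var i = 0; i < 1000; i++) {
            var col = i % perRow;                  // position within the current row
            var row = Math.floor(i / perRow);      // which row we are on
            paper.circle(spacing + col * spacing, spacing + row * spacing, radius);
        }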

    Read the article

  • 256 Windows Azure Worker Roles, Windows Kinect and a 90's Text-Based Ray-Tracer

    - by Alan Smith
    For a couple of years I have been demoing a simple render farm hosted in Windows Azure using worker roles and the Azure Storage service. At the start of the presentation I deploy an Azure application that uses 16 worker roles to render a 1,500 frame 3D ray-traced animation. At the end of the presentation, when the animation was complete, I would play the animation delete the Azure deployment. The standing joke with the audience was that it was that it was a “$2 demo”, as the compute charges for running the 16 instances for an hour was $1.92, factor in the bandwidth charges and it’s a couple of dollars. The point of the demo is that it highlights one of the great benefits of cloud computing, you pay for what you use, and if you need massive compute power for a short period of time using Windows Azure can work out very cost effective. The “$2 demo” was great for presenting at user groups and conferences in that it could be deployed to Azure, used to render an animation, and then removed in a one hour session. I have always had the idea of doing something a bit more impressive with the demo, and scaling it from a “$2 demo” to a “$30 demo”. The challenge was to create a visually appealing animation in high definition format and keep the demo time down to one hour.  This article will take a run through how I achieved this. Ray Tracing Ray tracing, a technique for generating high quality photorealistic images, gained popularity in the 90’s with companies like Pixar creating feature length computer animations, and also the emergence of shareware text-based ray tracers that could run on a home PC. In order to render a ray traced image, the ray of light that would pass from the view point must be tracked until it intersects with an object. At the intersection, the color, reflectiveness, transparency, and refractive index of the object are used to calculate if the ray will be reflected or refracted. Each pixel may require thousands of calculations to determine what color it will be in the rendered image. Pin-Board Toys Having very little artistic talent and a basic understanding of maths I decided to focus on an animation that could be modeled fairly easily and would look visually impressive. I’ve always liked the pin-board desktop toys that become popular in the 80’s and when I was working as a 3D animator back in the 90’s I always had the idea of creating a 3D ray-traced animation of a pin-board, but never found the energy to do it. Even if I had a go at it, the render time to produce an animation that would look respectable on a 486 would have been measured in months. PolyRay Back in 1995 I landed my first real job, after spending three years being a beach-ski-climbing-paragliding-bum, and was employed to create 3D ray-traced animations for a CD-ROM that school kids would use to learn physics. I had got into the strange and wonderful world of text-based ray tracing, and was using a shareware ray-tracer called PolyRay. PolyRay takes a text file describing a scene as input and, after a few hours processing on a 486, produced a high quality ray-traced image. The following is an example of a basic PolyRay scene file. 
background Midnight_Blue   static define matte surface { ambient 0.1 diffuse 0.7 } define matte_white texture { matte { color white } } define matte_black texture { matte { color dark_slate_gray } } define position_cylindrical 3 define lookup_sawtooth 1 define light_wood <0.6, 0.24, 0.1> define median_wood <0.3, 0.12, 0.03> define dark_wood <0.05, 0.01, 0.005>     define wooden texture { noise surface { ambient 0.2  diffuse 0.7  specular white, 0.5 microfacet Reitz 10 position_fn position_cylindrical position_scale 1  lookup_fn lookup_sawtooth octaves 1 turbulence 1 color_map( [0.0, 0.2, light_wood, light_wood] [0.2, 0.3, light_wood, median_wood] [0.3, 0.4, median_wood, light_wood] [0.4, 0.7, light_wood, light_wood] [0.7, 0.8, light_wood, median_wood] [0.8, 0.9, median_wood, light_wood] [0.9, 1.0, light_wood, dark_wood]) } } define glass texture { surface { ambient 0 diffuse 0 specular 0.2 reflection white, 0.1 transmission white, 1, 1.5 }} define shiny surface { ambient 0.1 diffuse 0.6 specular white, 0.6 microfacet Phong 7  } define steely_blue texture { shiny { color black } } define chrome texture { surface { color white ambient 0.0 diffuse 0.2 specular 0.4 microfacet Phong 10 reflection 0.8 } }   viewpoint {     from <4.000, -1.000, 1.000> at <0.000, 0.000, 0.000> up <0, 1, 0> angle 60     resolution 640, 480 aspect 1.6 image_format 0 }       light <-10, 30, 20> light <-10, 30, -20>   object { disc <0, -2, 0>, <0, 1, 0>, 30 wooden }   object { sphere <0.000, 0.000, 0.000>, 1.00 chrome } object { cylinder <0.000, 0.000, 0.000>, <0.000, 0.000, -4.000>, 0.50 chrome }   After setting up the background and defining colors and textures, the viewpoint is specified. The “camera” is located at a point in 3D space, and it looks towards another point. The angle, image resolution, and aspect ratio are specified. Two lights are present in the image at defined coordinates. The three objects in the image are a wooden disc to represent a table top, and a sphere and cylinder that intersect to form a pin that will be used for the pin board toy in the final animation. When the image is rendered, the following image is produced. The pins are modeled with a chrome surface, so they reflect the environment around them. Note that the scale of the pin shaft is not correct, this will be fixed later. Modeling the Pin Board The frame of the pin-board is made up of three boxes, and six cylinders, the front box is modeled using a clear, slightly reflective solid, with the same refractive index of glass. The other shapes are modeled as metal. object { box <-5.5, -1.5, 1>, <5.5, 5.5, 1.2> glass } object { box <-5.5, -1.5, -0.04>, <5.5, 5.5, -0.09> steely_blue } object { box <-5.5, -1.5, -0.52>, <5.5, 5.5, -0.59> steely_blue } object { cylinder <-5.2, -1.2, 1.4>, <-5.2, -1.2, -0.74>, 0.2 steely_blue } object { cylinder <5.2, -1.2, 1.4>, <5.2, -1.2, -0.74>, 0.2 steely_blue } object { cylinder <-5.2, 5.2, 1.4>, <-5.2, 5.2, -0.74>, 0.2 steely_blue } object { cylinder <5.2, 5.2, 1.4>, <5.2, 5.2, -0.74>, 0.2 steely_blue } object { cylinder <0, -1.2, 1.4>, <0, -1.2, -0.74>, 0.2 steely_blue } object { cylinder <0, 5.2, 1.4>, <0, 5.2, -0.74>, 0.2 steely_blue }   In order to create the matrix of pins that make up the pin board I used a basic console application with a few nested loops to create two intersecting matrixes of pins, which models the layout used in the pin boards. The resulting image is shown below. The pin board contains 11,481 pins, with the scene file containing 23,709 lines of code. 
For the complete animation 2,000 scene files will be created, which is over 47 million lines of code. Each pin in the pin-board will slide out a specific distance when an object is pressed into the back of the board. This is easily modeled by setting the Z coordinate of the pin to a specific value. In order to set all of the pins in the pin-board to the correct position, a bitmap image can be used. The position of the pin can be set based on the color of the pixel at the appropriate position in the image. When the Windows Azure logo is used to set the Z coordinate of the pins, the following image is generated. The challenge now was to make a cool animation. The Azure Logo is fine, but it is static. Using a normal video to animate the pins would not work; the colors in the video would not be the same as the depth of the objects from the camera. In order to simulate the pin board accurately a series of frames from a depth camera could be used. Windows Kinect The Kenect controllers for the X-Box 360 and Windows feature a depth camera. The Kinect SDK for Windows provides a programming interface for Kenect, providing easy access for .NET developers to the Kinect sensors. The Kinect Explorer provided with the Kinect SDK is a great starting point for exploring Kinect from a developers perspective. Both the X-Box 360 Kinect and the Windows Kinect will work with the Kinect SDK, the Windows Kinect is required for commercial applications, but the X-Box Kinect can be used for hobby projects. The Windows Kinect has the advantage of providing a mode to allow depth capture with objects closer to the camera, which makes for a more accurate depth image for setting the pin positions. Creating a Depth Field Animation The depth field animation used to set the positions of the pin in the pin board was created using a modified version of the Kinect Explorer sample application. In order to simulate the pin board accurately, a small section of the depth range from the depth sensor will be used. Any part of the object in front of the depth range will result in a white pixel; anything behind the depth range will be black. Within the depth range the pixels in the image will be set to RGB values from 0,0,0 to 255,255,255. A screen shot of the modified Kinect Explorer application is shown below. The Kinect Explorer sample application was modified to include slider controls that are used to set the depth range that forms the image from the depth stream. This allows the fine tuning of the depth image that is required for simulating the position of the pins in the pin board. The Kinect Explorer was also modified to record a series of images from the depth camera and save them as a sequence JPEG files that will be used to animate the pins in the animation the Start and Stop buttons are used to start and stop the image recording. En example of one of the depth images is shown below. Once a series of 2,000 depth images has been captured, the task of creating the animation can begin. Rendering a Test Frame In order to test the creation of frames and get an approximation of the time required to render each frame a test frame was rendered on-premise using PolyRay. The output of the rendering process is shown below. The test frame contained 23,629 primitive shapes, most of which are the spheres and cylinders that are used for the 11,800 or so pins in the pin board. 
Once a series of 2,000 depth images has been captured, the task of creating the animation can begin.

Rendering a Test Frame

In order to test the creation of frames, and to get an approximation of the time required to render each frame, a test frame was rendered on-premise using PolyRay. The output of the rendering process is shown below. The test frame contained 23,629 primitive shapes, most of which are the spheres and cylinders that are used for the 11,800 or so pins in the pin board. The 1280x720 image contains 921,600 pixels, but as anti-aliasing was used the number of rays that were calculated was 4,235,777, with 3,478,754,073 object boundaries checked. The test frame of the pin board with the depth field image applied is shown below.

The tracing time for the test frame was 4 minutes 27 seconds, which means rendering the 2,000 frames in the animation would take over 148 hours, or a little over 6 days. Although this is much faster than an old 486, waiting almost a week to see the results of an animation would make it challenging for animators to create, view, and refine their animations. It would be much better if the animation could be rendered in less than one hour.

Windows Azure Worker Roles

The cost of creating an on-premise render farm to render animations increases in proportion to the number of servers. The table below shows the cost of servers for creating a render farm, assuming a cost of $500 per server.

Number of Servers    Cost
1                    $500
16                   $8,000
256                  $128,000

As well as the cost of the servers, there would be additional costs for networking, racks etc. Hosting an environment of 256 servers on-premise would require a server room with cooling, and some pretty hefty power cabling.

The Windows Azure compute services provide worker roles, which are ideal for performing processor-intensive compute tasks. With the scalability available in Windows Azure, a job that takes 256 hours to complete could be performed using different numbers of worker roles. The time and cost of using 1, 16 or 256 worker roles is shown below (in each case 256 instance hours are billed, at $0.12 per instance hour).

Number of Worker Roles    Render Time    Cost
1                         256 hours      $30.72
16                        16 hours       $30.72
256                       1 hour         $30.72

Using worker roles in Windows Azure results in the same cost for the 256-hour job, irrespective of the number of worker roles used. Provided the compute task can be broken down into many small units, and the worker role compute power can be used effectively, it makes sense to scale the application so that the task is completed quickly, making the results available in a timely fashion. The task of rendering 2,000 frames in an animation is one that can easily be broken down into 2,000 individual pieces, which can be performed by a number of worker roles.

Creating a Render Farm in Windows Azure

The architecture of the render farm is shown in the following diagram.

The render farm is a hybrid application with the following components:

·         On-Premise
o   Windows Kinect – Used in combination with the Kinect Explorer to create a stream of depth images.
o   Animation Creator – This application uses the depth images from the Kinect sensor to create scene description files for PolyRay. These files are then uploaded to the jobs blob container, and job messages added to the jobs queue. (A sketch of this submission step is shown below.)
o   Process Monitor – This application queries the role instance lifecycle table and displays statistics about the render farm environment and render process.
o   Image Downloader – This application polls the image queue and downloads the rendered animation files once they are complete.
·         Windows Azure
o   Azure Storage – Queues and blobs are used for the scene description files and completed frames. A table is used to store the statistics about the rendering environment.
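The Animation Creator is not listed in this article; the sketch below shows how its submission step could look, reusing the JobQueue and CloudRayBlob helper classes that appear in the worker role code later on. The folder path, file pattern, and use of these helpers on the creator side are assumptions.

using System.IO;

class AnimationCreator
{
    // Uploads each PolyRay scene description file and queues a render job for it.
    public void SubmitRenderJobs(string sceneFolder)
    {
        JobQueue jobQueue = new JobQueue("renderjobs");
        CloudRayBlob sceneBlob = new CloudRayBlob("scenes");

        foreach (string sceneFilePath in Directory.GetFiles(sceneFolder, "*.pi"))
        {
            // Upload the scene description file to the scenes blob container.
            sceneBlob.UploadFile(sceneFilePath);

            // Add a message to the jobs queue so that a worker role will render this frame.
            jobQueue.Add(Path.GetFileName(sceneFilePath));
        }
    }
}

The worker roles then pick these messages up from the renderjobs queue, as shown in the Run method below.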
The architecture of each worker role is shown below.

The worker role is configured to use local storage, which provides file storage on the worker role instance that can be used by the applications to render the image and transform the format of the image. The service definition for the worker role, with the local storage configuration highlighted, is shown below.

<?xml version="1.0" encoding="utf-8"?>
<ServiceDefinition name="CloudRay" >
  <WorkerRole name="CloudRayWorkerRole" vmsize="Small">
    <Imports>
    </Imports>
    <ConfigurationSettings>
      <Setting name="DataConnectionString" />
    </ConfigurationSettings>
    <LocalResources>
      <LocalStorage name="RayFolder" cleanOnRoleRecycle="true" />
    </LocalResources>
  </WorkerRole>
</ServiceDefinition>

The two executable programs, PolyRay.exe and DTA.exe, are included in the Azure project with Copy Always set as the file property. PolyRay will take the scene description file and render it to a Truevision TGA file. As the TGA format has not seen much use since the mid '90s, it is converted to a JPG image using Dave's Targa Animator, another shareware application from the '90s. Each worker role will use the following process to render the animation frames.

1.       The worker process polls the job queue; if a job is available, the scene description file is downloaded from blob storage to local storage.
2.       PolyRay.exe is started in a process with the appropriate command line arguments to render the image as a TGA file.
3.       DTA.exe is started in a process with the appropriate command line arguments to convert the TGA file to a JPG file.
4.       The JPG file is uploaded from local storage to the images blob container.
5.       A message is placed on the images queue to indicate a new image is available for download.
6.       The job message is deleted from the job queue.
7.       The role instance lifecycle table is updated with statistics on the number of frames rendered by the worker role instance, and the CPU time used.

The code for this is shown below.

public override void Run()
{
    // Set environment variables
    string polyRayPath = Path.Combine(Environment.GetEnvironmentVariable("RoleRoot"), PolyRayLocation);
    string dtaPath = Path.Combine(Environment.GetEnvironmentVariable("RoleRoot"), DTALocation);

    LocalResource rayStorage = RoleEnvironment.GetLocalResource("RayFolder");
    string localStorageRootPath = rayStorage.RootPath;

    JobQueue jobQueue = new JobQueue("renderjobs");
    JobQueue downloadQueue = new JobQueue("renderimagedownloadjobs");
    CloudRayBlob sceneBlob = new CloudRayBlob("scenes");
    CloudRayBlob imageBlob = new CloudRayBlob("images");
    RoleLifecycleDataSource roleLifecycleDataSource = new RoleLifecycleDataSource();

    Frames = 0;

    while (true)
    {
        // Get the render job from the queue
        CloudQueueMessage jobMsg = jobQueue.Get();

        if (jobMsg != null)
        {
            // Get the file details
            string sceneFile = jobMsg.AsString;
            string tgaFile = sceneFile.Replace(".pi", ".tga");
            string jpgFile = sceneFile.Replace(".pi", ".jpg");

            string sceneFilePath = Path.Combine(localStorageRootPath, sceneFile);
            string tgaFilePath = Path.Combine(localStorageRootPath, tgaFile);
            string jpgFilePath = Path.Combine(localStorageRootPath, jpgFile);

            // Copy the scene file to local storage
            sceneBlob.DownloadFile(sceneFilePath);

            // Run the ray tracer.
            string polyrayArguments =
                string.Format("\"{0}\" -o \"{1}\" -a 2", sceneFilePath, tgaFilePath);
            Process polyRayProcess = new Process();
            polyRayProcess.StartInfo.FileName =
                Path.Combine(Environment.GetEnvironmentVariable("RoleRoot"), polyRayPath);
            polyRayProcess.StartInfo.Arguments = polyrayArguments;
            polyRayProcess.Start();
            polyRayProcess.WaitForExit();

            // Convert the image
            string dtaArguments =
                string.Format(" {0} /FJ /P{1}", tgaFilePath, Path.GetDirectoryName(jpgFilePath));
            Process dtaProcess = new Process();
            dtaProcess.StartInfo.FileName =
                Path.Combine(Environment.GetEnvironmentVariable("RoleRoot"), dtaPath);
            dtaProcess.StartInfo.Arguments = dtaArguments;
            dtaProcess.Start();
            dtaProcess.WaitForExit();

            // Upload the image to blob storage
            imageBlob.UploadFile(jpgFilePath);

            // Add a download job.
            downloadQueue.Add(jpgFile);

            // Delete the render job message
            jobQueue.Delete(jobMsg);

            Frames++;
        }
        else
        {
            Thread.Sleep(1000);
        }

        // Log the worker role activity.
        roleLifecycleDataSource.Alive
            ("CloudRayWorker", RoleLifecycleDataSource.RoleLifecycleId, Frames);
    }
}

Monitoring Worker Role Instance Lifecycle

In order to get more accurate statistics about the lifecycle of the worker role instances used to render the animation, data was tracked in an Azure storage table. The following class was used to track the worker role lifecycles in Azure storage.

public class RoleLifecycle : TableServiceEntity
{
    public string ServerName { get; set; }
    public string Status { get; set; }
    public DateTime StartTime { get; set; }
    public DateTime EndTime { get; set; }
    public long SecondsRunning { get; set; }
    public DateTime LastActiveTime { get; set; }
    public int Frames { get; set; }
    public string Comment { get; set; }

    public RoleLifecycle()
    {
    }

    public RoleLifecycle(string roleName)
    {
        PartitionKey = roleName;
        RowKey = Utils.GetAscendingRowKey();
        Status = "Started";
        StartTime = DateTime.UtcNow;
        LastActiveTime = StartTime;
        EndTime = StartTime;
        SecondsRunning = 0;
        Frames = 0;
    }
}

A new instance of this class is created and added to the storage table when the role starts. It is then updated each time the worker renders a frame to record the total number of frames rendered and the total processing time. These statistics are used by the monitoring application to determine how effectively the resources in the render farm are being used. The Alive method that performs the update is not listed in this article; a sketch of what it could look like follows.
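This is a sketch only: the table name, the use of the classic StorageClient TableServiceContext API, and the way the row for the instance is located are my assumptions, as the original RoleLifecycleDataSource is not included in the article.

using System;
using System.Linq;
using Microsoft.WindowsAzure;
using Microsoft.WindowsAzure.ServiceRuntime;
using Microsoft.WindowsAzure.StorageClient;

public class RoleLifecycleDataSource
{
    private const string TableName = "RoleLifecycle";

    // Row key of the entry created for this instance at startup; how it is shared with the
    // worker role is an assumption here.
    public static string RoleLifecycleId { get; private set; }

    private readonly TableServiceContext context;

    public RoleLifecycleDataSource()
    {
        var account = CloudStorageAccount.Parse(
            RoleEnvironment.GetConfigurationSettingValue("DataConnectionString"));
        account.CreateCloudTableClient().CreateTableIfNotExist(TableName);
        context = new TableServiceContext(account.TableEndpoint.ToString(), account.Credentials);
    }

    // Creates the row for this worker role instance when the role starts.
    public void Started(string roleName)
    {
        var entity = new RoleLifecycle(roleName);
        RoleLifecycleId = entity.RowKey;
        context.AddObject(TableName, entity);
        context.SaveChangesWithRetries();
    }

    // Updates the row for this worker role instance with the latest rendering statistics.
    public void Alive(string roleName, string rowKey, int frames)
    {
        RoleLifecycle entity = context.CreateQuery<RoleLifecycle>(TableName)
            .Where(r => r.PartitionKey == roleName && r.RowKey == rowKey)
            .ToList()
            .FirstOrDefault();

        if (entity == null)
        {
            return;
        }

        entity.LastActiveTime = DateTime.UtcNow;
        entity.SecondsRunning = (long)(entity.LastActiveTime - entity.StartTime).TotalSeconds;
        entity.Frames = frames;
        entity.Status = "Running";

        context.UpdateObject(entity);
        context.SaveChangesWithRetries();
    }
}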
Rendering the Animation

The Azure solution was deployed to Windows Azure with the service configuration set to 16 worker role instances. This allows the application to be tested in the cloud environment, and its performance to be determined. When I demo the application at conferences and user groups I often start with 16 instances, and then scale up the application to the full 256 instances. The configuration to run 16 instances is shown below.

<?xml version="1.0" encoding="utf-8"?>
<ServiceConfiguration serviceName="CloudRay" xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceConfiguration" osFamily="1" osVersion="*">
  <Role name="CloudRayWorkerRole">
    <Instances count="16" />
    <ConfigurationSettings>
      <Setting name="DataConnectionString"
        value="DefaultEndpointsProtocol=https;AccountName=cloudraydata;AccountKey=..." />
    </ConfigurationSettings>
  </Role>
</ServiceConfiguration>

About six minutes after deploying the application, the first worker roles become active and start to render the first frames of the animation. The CloudRay Monitor application displays an icon for each worker role instance, with a number indicating the number of frames that the worker role has rendered. The statistics on the left show the number of active worker roles and statistics about the render process. The render time is the time since the first worker role became active; the CPU time is the total amount of processing time used by all worker role instances to render the frames.

Five minutes after the first worker role became active, the last of the 16 worker roles activated. By this time the first seven worker roles had each rendered one frame of the animation.

With 16 worker roles up and running, it can be seen that one hour and 45 minutes of CPU time has been used to render 32 frames, with a render time of just under 10 minutes.

At this rate it would take over 10 hours to render the 2,000 frames of the full animation. In order to complete the animation in under an hour, more processing power will be required.

Scaling the render farm from 16 instances to 256 instances is easy using the new management portal. The slider is set to 256 instances, and the configuration saved. We do not need to re-deploy the application, and the 16 instances that are up and running will not be affected. Alternatively, the configuration file for the Azure service could be modified to specify 256 instances.

<?xml version="1.0" encoding="utf-8"?>
<ServiceConfiguration serviceName="CloudRay" xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceConfiguration" osFamily="1" osVersion="*">
  <Role name="CloudRayWorkerRole">
    <Instances count="256" />
    <ConfigurationSettings>
      <Setting name="DataConnectionString"
        value="DefaultEndpointsProtocol=https;AccountName=cloudraydata;AccountKey=..." />
    </ConfigurationSettings>
  </Role>
</ServiceConfiguration>

Six minutes after the new configuration has been applied, 75 new worker roles have activated and are processing their first frames.

Five minutes later the full configuration of 256 worker roles is up and running. We can see that the average rate of frame rendering has increased from 3 to 12 frames per minute, and that over 17 hours of CPU time has been utilized in 23 minutes. In this test the time to provision 140 worker roles was about 11 minutes, which works out at about one every five seconds.

We are now half way through the rendering, with 1,000 frames complete. This has utilized just under three days of CPU time in a little over 35 minutes.

The animation is now complete, with 2,000 frames rendered in a little over 52 minutes. The CPU time used by the 256 worker roles is 6 days, 7 hours and 22 minutes, with an average frame rate of 38 frames per minute. The rendering of the last 1,000 frames took 16 minutes 27 seconds, which works out at a rendering rate of 60 frames per minute.
The frame counts in the server instances indicate that the use of a queue to distribute the workload has been very effective in balancing the load across the 256 worker role instances. The 16 instances that were deployed first have rendered between 11 and 13 frames each, whilst the 240 instances that were added when the application was scaled have rendered between 6 and 9 frames each.

Completed Animation

I’ve uploaded the completed animation to YouTube; a low resolution preview is shown below.

Pin Board Animation Created using Windows Kinect and 256 Windows Azure Worker Roles

The animation can be viewed in 1280x720 resolution at the following link: http://www.youtube.com/watch?v=n5jy6bvSxWc

Effective Use of Resources

According to the CloudRay monitor statistics, the animation took 6 days, 7 hours and 22 minutes of CPU time to render; this works out at 152 hours of compute time, rounded up to the nearest hour. As worker role instances are billed for the full hour, it may have been possible to render the animation using fewer than 256 worker roles. When deciding on the optimal usage of resources, the time required to provision and start the worker roles must also be considered. In the demo I started with 16 worker roles, and then scaled the application to 256 worker roles. It would have been more optimal to start the application with maybe 200 worker roles, and utilize the full hour that I was being billed for. This would, however, have prevented showing the ease of scalability of the application.

The new management portal displays the CPU usage across the worker roles in the deployment. The average CPU usage across all instances is 93.27%, with over 99% used when all the instances are up and running. This shows that the worker role resources are being used very effectively.

Grid Computing Scenarios

Although I am using this scenario for a hobby project, there are many scenarios where a large amount of compute power is required for a short period of time. Windows Azure provides a great platform for developing these types of grid computing applications, and can work out very cost-effective.

·         Windows Azure can provide massive compute power, on demand, in a matter of minutes.
·         The use of queues to manage the load balancing of jobs between role instances is a simple and effective solution.
·         Using a cloud-computing platform like Windows Azure allows proof-of-concept scenarios to be tested and evaluated on a very low budget.
·         The absence of charges for inbound data transfer makes the uploading of large data sets to Windows Azure Storage services cost-effective. (Transaction charges still apply.)

Tips for using Windows Azure for Grid Computing Scenarios

I found a render farm a fairly simple scenario to implement using Windows Azure. I was impressed by the ease of scalability that Azure provides, and by the short time that the application took to scale from 16 to 256 worker role instances. In this case it was around 13 minutes; in other tests it took between 10 and 20 minutes. The following tips may be useful when implementing a grid computing project in Windows Azure.

·         Using an Azure Storage queue to load-balance the units of work across multiple worker roles is simple and very effective. The design I have used in this scenario could easily scale to many thousands of worker role instances.
·         Windows Azure accounts are typically limited to 20 cores.
If you need to use more than this, a call to support and a credit card check will be required.
·         Be aware of how the billing model works. You will be charged for worker role instances for the full clock hour in which the instance is deployed. Schedule the workload to start just after the clock hour has started.
·         Monitor the utilization of the resources you are provisioning, and ensure that you are not paying for worker roles that are idle.
·         If you are deploying third-party applications to worker roles, you may well run into licensing issues. Purchasing software licenses on a per-processor basis when using hundreds of processors for a short time period would not be cost-effective.
·         Third-party software may also require installation onto the worker roles, which can be accomplished using startup tasks (a minimal sketch is shown after this list). Bear in mind that adding a startup task and a possible reboot will add to the time required for the worker role instance to start and activate. An alternative may be to use a prepared VM image with VM roles.
·         Consider using the Windows Azure Autoscaling Application Block (WASABi) to autoscale the worker roles in your application. When using a large number of worker roles, the utilization must be carefully monitored; if the scaling algorithms are not optimal, it could get very expensive!
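As a rough illustration of the startup task approach mentioned in the list above, a task can be declared in the service definition. The command file name below is a placeholder; the actual script would silently install whatever third-party software the worker role needs.

<WorkerRole name="CloudRayWorkerRole" vmsize="Small">
  <Startup>
    <!-- Runs before the role starts; install.cmd is a placeholder for the silent installer script. -->
    <Task commandLine="install.cmd" executionContext="elevated" taskType="simple" />
  </Startup>
  ...
</WorkerRole>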

    Read the article

  • Config files for xterm

    - by petersohn
    Are there any config files for xterm for default settings? For example, on my system, xterm starts with black text on a white background, and I want it the other way around. I can do it by starting it with: xterm -bg black -fg white. I want to set, in a config file, that if I run it without arguments it will start with these options.

    Read the article

< Previous Page | 35 36 37 38 39 40 41 42 43 44 45 46  | Next Page >