Search Results

Search found 64639 results on 2586 pages for 'work stealing'.


  • Spotlight on an office - Denmark

    - by jessica.ebbelaar(at)oracle.com
    Hi, my name is Michael. I work as an intern at the Danish office in Ballerup. My job is a part-time position alongside my bachelor studies in International Business at Copenhagen Business School. I joined Oracle at the end of February last year, and what a thrilling ride it has been! When I was offered the position, there was no doubt that I wanted to go for it, even though at the time I had only a vague idea of Oracle as a company and of the exciting assignments that lay ahead of me.

    My main role is internal communications: I am the editor of Newszone, a monthly employee newsletter. It is an interesting task, since it requires me to stay up to date on the different activities taking place within the Oracle Denmark office. I try to bring my colleagues articles that are relevant and interesting, and the role lets me interact with many different people at the office and learn from their experience, which gives me great inspiration and ideas for the magazine.

    Besides being the editor of Newszone, I also make sure that other communication flows freely at the Oracle Denmark office. I do this through our LCD screen channels: I update the internal channel with the latest information and important messages for employees, and on the external channel I circulate marketing videos featuring Oracle products and customer reference stories. In addition, I act as content manager of the Local Communication Denmark site on MyOracle (UCM). These are more or less my usual work assignments. On top of these I take care of various ad hoc assignments, such as updating the GCM database and renewing newspaper subscriptions.

    The Oracle Denmark office

    As part of the local employees' club I also help arrange social events outside working hours – evenings at the theater or cinema, for example – and attend many of the sports activities, such as our running club, cycling club, food club and book club. These activities have helped me grow my personal network within Oracle. The office is packed with engaging, high-paced and motivated people who still manage to take time off to spend a day on Corporate Social Responsibility initiatives, one of them being GVD (Global Volunteer Day), which approximately 40 employees attended. This proves the socially responsible side of Oracle.

    I was positively surprised by how the office (named O-Zone) is designed. It is divided into three distinct zones – the Call zone, the Project and Dialogue zone, and the Quiet zone – providing different working environments for different job roles. The other thing I like is that you do not have your own desk, which means you sit next to different people every day, picking up new ideas and inspiration as well as getting to know more people in the organization you work in.

    To sum up: if you are considering an internship or a career at Oracle after graduation, do it! You will not regret it. It has given me many relevant practical experiences alongside my studies, and I am sure many great experiences await you too.

    Want to know more about the current vacancies in Denmark? Check http://campus.oracle.com for all of our vacancies.

    Read the article

  • Oracle Tutor: XPDL conversion (and why you should care)

    - by mary.keane
    You may have noticed that the Oracle Business Process Converter feature in Tutor 14 supports "XPDL" conversion to Oracle Business Process Analysis Suite (BPA), Oracle Business Process Management Suite (BPM), and Oracle Tutor, and you may have briefly wondered "what is XPDL?" before you moved on to the Visio import feature (a very popular feature in Tutor 14). This posting is for those who do not yet understand (or care about) XPDL and process modeling.

    Many of us (and I'm including myself) have spent years working in the process definition arena: we've written procedures, designed systems and software to help others write procedures, and have been responsible for embedding policies and procedures into training material for employees. We've worked with tools such as Oracle Tutor, Microsoft Visio, Microsoft Word, and UPK. Most of us have never worked with "modeling tools" before, and we certainly never had to understand BPMN. It's a brave new world in this arena, and companies desperately need people with policy and procedural system expertise who can work with system analysts so there is a seamless transfer of knowledge from IT to employees. When working with applications, a picture is worth a thousand words, so eventually you're going to need to understand and be able to work with business process models.

    XPDL is an acronym for XML Process Definition Language, and it is an interchange format for business process models. It allows you to take a BPMN model that was developed in one workflow application, such as BizAgi, and import it into another workflow application or a true BPMN management system such as Oracle BPM. Specifically, the XPDL format contains the graphical information of a model as well as any executable information. By using a common format, models can be moved from a basic modeling application used by business owners to applications used by system architects. Over 80 applications support the XPDL format, including MetaStorm ProVision, BEA ALBPM, BizAgi, and Tibco. I mention these applications because we have provided XSLT mapping files specifically for these vendors. Oracle Business Process Converter was designed with user extensibility in mind, so users can add their own XML files and convert XPDL models from other vendors to BPM, BPA, and Oracle Tutor. Instructions on how to add your own files can be found in Appendix 4 of the Oracle Business Process Converter manual.
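
    To make the format a little more concrete, here is a rough sketch of peeking inside an exported XPDL file with LINQ to XML. The element names follow the XPDL 2.x schema, and the namespace URI shown is the XPDL 2.1 one – treat both as assumptions to verify against the files your own tool exports:

        using System;
        using System.Xml.Linq;

        class XpdlPeek
        {
            static void Main(string[] args)
            {
                // XPDL is plain XML, so any XML API can inspect it.
                // The namespace differs between XPDL versions; 2.1 shown here.
                XNamespace xpdl = "http://www.wfmc.org/2008/XPDL2.1";
                XDocument package = XDocument.Load(args[0]);

                foreach (XElement process in package.Descendants(xpdl + "WorkflowProcess"))
                {
                    Console.WriteLine("Process: " + (string)process.Attribute("Name"));

                    // Each Activity carries both executable details and, in its
                    // graphics elements, the diagram layout - which is what lets
                    // one model move between modeling tools intact.
                    foreach (XElement activity in process.Descendants(xpdl + "Activity"))
                        Console.WriteLine("  Activity: " + (string)activity.Attribute("Name"));
                }
            }
        }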
    Let's take a visual look at how this works. Here is an example of a model developed in BizAgi. This model can be created by the average business user without a large learning curve, and it's a good start for the system analyst who will be adding web services as well as for the business manager who manages the process described in the model. By exporting this model as XPDL, the information can be converted into Oracle BPA and Oracle BPM, as well as into Oracle Tutor to become the framework for a procedure. Through this conversion feature, one graphic illustration of a business process can be used by a system analyst, business analyst, business manager, and employee, as seen below.

    Model converted to Tutor procedure – below is the task section of the procedure after conversion from an XPDL file.
    Model converted to BPA
    Model converted to BPM

    End users still want step-by-step instructions on how to perform their jobs, so procedures (Oracle Tutor) and application simulations (UPK) are still a critical piece of the solution. But IT professionals need graphic descriptions of how the applications work, regardless of whether there are any tasks involving humans. Now there is a way to convert procedures (Oracle Tutor docx files) and basic models (XPDL files) so that business managers and system analysts can share process information.

    References:
    - Wikipedia: XPDL
    - Workflow Management Coalition: XPDL Support and Resources
    - Oracle Business Process Converter manual, Oracle Tutor 14
    - Oracle Business Process Management 11g

    If you have any XPDL conversion stories to share, we'd love to hear from you. Best wishes for the coming new year, Mary Keane, Senior Development Manager, Oracle Tutor and BPM

    Read the article

  • Automated Error Reporting = More Robust Software

    - by Laila
    I would like to tell you how to revolutionize your software development process </marketing hyperbole>

    On a more serious note, we (Red Gate's .NET development team) recently rolled a new tool into our development process which has made our lives dramatically easier AND improved the quality of our software, and I (and one of our developers, Alex Davies) just wanted to take a quick moment to share the love. I work with a development team that takes pride in what they ship, so we take software testing rather seriously. For every development project we run, we allocate at least one software tester for every two developers, and we never ship software without first shipping early access releases and betas to get user feedback. And therein lies the challenge – encouraging users to provide consistent, useful feedback is a headache, but without that feedback, improving the software is... tricky.

    Until fairly recently, we used the standard (if long-winded) approach of receiving bug reports of variable quality via email or through our support forums. If that didn't give us enough information to reproduce the problem – which was most of the time – we had to enter into a time-consuming to-and-fro conversation with the end user to scrape together the data we needed to work out where the problem lay. As I'm sure you're aware, this is painfully slow.

    To the delight of the team, we recently got to work with SmartAssembly, which lets us embed automated exception and error reporting into our software with very little pain, and we decided to do a little dogfooding. As a result, we've made a really handy (if perhaps slightly obvious) discovery: as soon as we release a beta, or indeed any release of software, we now get tonnes of customer feedback through automated error reports. Making this process easier for our users has dramatically increased the amount (and quality) of feedback we get. From their point of view, they get an experience similar to Microsoft's error reporting, and the process is essentially idiot-proof. From our side of things, we can now react much faster to the information we get, fixing the bugs and shipping a new-and-improved release, which our users rather appreciate. Smiles and hugs all round. Even more so because, as we use SmartAssembly's Automated Error Reporting, we avoid having to spend weeks building an exception reporting mechanism ourselves. It takes just a few minutes to add reporting to a project, and we get a bunch of useful information back, like a stack trace and the values of all the local variables, which we can use to fix bugs (for the curious, there's a bare-bones sketch of this kind of plumbing at the end of this post).

    Happily, "Automated Error Reporting = More Robust Software" can actually be read two ways: we've found that we not only ship higher quality software, but we also release within a shorter time. We can ship stable software that our users are happy to upgrade to, and we then bask in the glory of lots of positive customer feedback.

    Once we'd started working with SmartAssembly, we were curious to know how widespread error reporting was as a practice. Our product manager ran a survey in autumn last year, and found that 40% of software developers had never really considered deploying error reporting. Considering that we've now got plenty of experience on the subject, one of our dev guys, Alex Davies, thought we should share what we've learnt, and he's kindly offered to host a webinar on delivering robust software with Automated Error Reporting. Drawing on our own in-house development experiences, he'll cover how to add error reporting to your program, how to actually use the error reports to fix bugs (don't snigger, not everyone's as bright as you), how to customize the error report dialog that your users see, and how to automatically get log files from your users' machines. The webinar will take place on Jan 25th (that's next week). It's free to attend, but you'll still need to register to hear Alex's dulcet tones.
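
    As promised above, here is a bare-bones sketch of the kind of plumbing a hand-rolled reporter needs just to get started. This is purely illustrative – it is not SmartAssembly's API (SmartAssembly injects its reporting at build time), and a real reporter would also capture local variable values, environment details and user consent, then upload the report somewhere useful:

        using System;
        using System.IO;

        static class NaiveCrashReporter
        {
            public static void Install()
            {
                // Catch exceptions that would otherwise tear the process down.
                AppDomain.CurrentDomain.UnhandledException += (sender, e) =>
                {
                    var ex = e.ExceptionObject as Exception;
                    if (ex == null) return;

                    // Exception.ToString() includes the message, type and stack
                    // trace - but none of the local variable values that make
                    // automated reports so useful for reproducing bugs.
                    File.AppendAllText("crash.log",
                        DateTime.UtcNow + Environment.NewLine + ex + Environment.NewLine);
                };
            }
        }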

    Read the article

  • RC of Entity Framework 4.1 (which includes EF Code First)

    - by ScottGu
    Last week the data team shipped the Release Candidate of Entity Framework 4.1. You can learn more about it and download it here.

    EF 4.1 includes the new "EF Code First" option that I've blogged about several times in the past. EF Code First provides a really elegant and clean way to work with data, and enables you to do so without requiring a designer or XML mapping file (a minimal code sketch follows at the end of this post). Below are links to some tutorials I've written in the past about it:

    - Code First Development with Entity Framework 4.x
    - EF Code First: Custom Database Schema Mapping
    - Using EF Code First with an Existing Database

    The above tutorials were written against the CTP4 release of EF Code First (and so some APIs might be a little different) – but the concepts and scenarios outlined in them are the same as with the RC.

    Go Live License

    Last week's EF 4.1 RC ships with a "go live" license that enables you to use it in production environments. The final release of EF 4.1 will ship within the next 4 weeks and will be 100% API compatible with the RC release.

    Improvements with the RC

    The RC includes several improvements and enhancements. The EF team has a good blog post summarizing the RC changes. Scott Hanselman also has a nice video interview with the data team that talks more about the release. One of my favorite improvements introduced with last week's RC is its support for medium trust security. This enables you to use EF 4.1 (and Code First) within low-cost ASP.NET shared hosting web environments – without requiring a hoster to install anything to use it. EF 4.1 also now supports validation with not only Code First scenarios, but also model-first and database-first workflows.

    Upgrading from previous releases

    The RC does include a few API tweaks and changes from the prior CTP builds. Read the release notes that come with the release to get a more detailed listing of the changes. John Papa also has an excellent "Upgrading to EF 4.1 RC" blog post that describes the steps he took when upgrading a large project he wrote with the previous CTP5 release. The work to upgrade is pretty straightforward and easy – use his write-up as a guide on how to quickly update projects of your own.

    NuGet Package Rename

    One of the changes that the data team made between the CTP5 and RC releases was to rename the NuGet package from "EFCodeFirst" to "EntityFramework". They decided to make this change since the EF 4.1 release now includes several additions above and beyond just Code First. If you have already installed the "EFCodeFirst" NuGet package, you'll want to uninstall it and then install the new "EntityFramework" NuGet package. John Papa's blog post details the exact steps on how to do this (it only takes ~20 seconds).

    More EF Tutorials

    Julie Lerman has created some nice whitepapers and tutorials for MSDN that show using the new EF 4 and EF 4.1 feature set. Click here to find links to read and watch them.

    Summary

    I'm really excited about the EF 4.1 release that will be shipping next month. It significantly improves the Entity Framework, and makes it even easier and cleaner to work with data inside of .NET. You can take advantage of it within all ASP.NET projects (including both Web Forms and MVC), within client projects using Windows Forms and WPF, and within other project types like WCF, Console and Services. You can use NuGet to easily install it within all of them.

    Hope this helps,

    Scott

    P.S. I am also now using Twitter for quick updates and to share links. Follow me at: twitter.com/scottgu
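
    As referenced above, here is a minimal sketch of the Code First pattern the tutorials walk through – plain classes plus a DbContext, with no designer or XML mapping file. The class and property names are my own, invented for illustration:

        using System.Collections.Generic;
        using System.Data.Entity;   // from the "EntityFramework" NuGet package
        using System.Linq;

        // Plain classes define the model - no designer or XML mapping file needed.
        public class Blog
        {
            public int Id { get; set; }
            public string Name { get; set; }
            public virtual ICollection<Post> Posts { get; set; }
        }

        public class Post
        {
            public int Id { get; set; }
            public string Title { get; set; }
            public int BlogId { get; set; }
            public virtual Blog Blog { get; set; }
        }

        // The context exposes the model; by convention EF creates (or maps to)
        // a database based on the class and property names.
        public class BlogContext : DbContext
        {
            public DbSet<Blog> Blogs { get; set; }
            public DbSet<Post> Posts { get; set; }
        }

        class Program
        {
            static void Main()
            {
                using (var db = new BlogContext())
                {
                    db.Blogs.Add(new Blog { Name = "ScottGu's Blog" });
                    db.SaveChanges();

                    foreach (var blog in db.Blogs.OrderBy(b => b.Name))
                        System.Console.WriteLine(blog.Name);
                }
            }
        }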

    Read the article

  • Building a personal website using Silverlight.

    - by mbcrump
    I’ve always believed that, as a developer, you should have a hobby project going on. I think a hobby project needs to contain at least one of the following things:

    - Something that you have never done before.
    - Something that you are interested in.
    - Something that you can work on in your spare time without affecting your *paying* job.

    I decided my hobby project would be an entire web application written in Silverlight that could be used as a self-promotion/marketing tool. The goal of the site is to provide information on the work that I’ve done to conferences, future employers and anyone else who wants to learn more about me. Before I go any further: if you just want to check out the site, it is located at http://michaelcrump.info.

    So, what did I use to create it?

    - MVVM Light – I’m a big fan of this software. The item and project templates plus code snippets make this a huge win for any SL/WPF/WP7 application.
    - Jetpack theme by Microsoft – I suck at designing, so I used this template to help speed up the project.
    - ComponentOne 3rd-party controls – I have a license and really like several of their products.
    - A user control that Jeremy Likness created called DynamicXaml (used with his permission). I had created my own version of this a while back, but Jeremy’s implementation was simply better.

    Main page – designed to create my “brand”. This was built to give a quick glimpse of who I am and what I do.

    Blog – the best marketing tool for a developer is their blog. I decided to go with an HTML page displaying my site, which the user can pop into full screen if desired. I also included my feed and Silverlight-Zone (another site I work on).

    Online – this page links to sites that I have been featured on, as well as community involvement and awards. I also have a web service, so I can update this information without recompiling the Silverlight app.

    Projects – I’ve been wanting to use a CoverFlow for a really long time now. =) This page lists several hobby projects as well as a few professional projects.

    Resume page – this page only exists because I got tired of sending companies my resume by email. I can now provide a deep link to this page, and the recruiter can print, search or save my resume. The PDF of my resume lives in a folder that I can easily update without recompiling the app.

    Contact page – just a contact page with a web service that sends the email. The Send button becomes disabled after a successful send (a small sketch of this pattern follows at the end of this post). I thought of adding a captcha to this page, but in the end didn't think it was worth it.

    Looking back at this app, I’m happy with how it turned out. I love Silverlight, and I am already thinking of my next hobby project (another Windows Phone 7 app, or MVC3 perhaps). Subscribe to my feed
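
    As an aside, for anyone curious what the MVVM Light plumbing looks like in practice, here is a minimal sketch of the kind of ViewModel that could sit behind a page like the contact page above – the names are illustrative, not taken from the actual site's source:

        using GalaSoft.MvvmLight;          // MVVM Light Toolkit base classes
        using GalaSoft.MvvmLight.Command;

        public class ContactViewModel : ViewModelBase
        {
            private bool _sent;
            private string _message;

            public ContactViewModel()
            {
                // The command stays disabled once a message has been sent.
                SendCommand = new RelayCommand(Send, () => !_sent);
            }

            public RelayCommand SendCommand { get; private set; }

            public string Message
            {
                get { return _message; }
                set { _message = value; RaisePropertyChanged("Message"); }
            }

            private void Send()
            {
                // Call the mail-sending web service here (omitted), then
                // disable the Send button by invalidating the command.
                _sent = true;
                SendCommand.RaiseCanExecuteChanged();
            }
        }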

    Read the article

  • Sneak Peek: Social Developer Program at JavaOne

    - by Mike Stiles
    By guest blogger Roland Smart

    We're just days away from what is gunning to be the most exciting installment of OpenWorld to date, so how about an exciting sneak peek at the very first Social Developer Program? If your first thought is, "What's a social developer?" you're not alone. It’s an emerging term, and one we think will gain prominence as social experiences become more prevalent in enterprise applications.

    For those who keep an eye on the ever-evolving Facebook platform, you'll recall that they recently rebranded their PDC (preferred developer consultant) group as the PMD (preferred marketing developer), signaling the importance of development resources inside the marketing organization to unlock the potential of social. The marketing developer they're referring to could be considered a social developer in a broader context. While it's true social has really blossomed in the marketing context and CMOs are winning more and more technical resources, social is starting to work its way more deeply into the enterprise with the help of developers who work outside marketing. Developers, like the rest of us, have fallen in "like" with social functionality and are starting to imagine how social can transform enterprise applications in the way it has consumer-facing experiences.

    The thesis of my presentation is that social developers will take many pages from the marketing playbook as they apply social inside the enterprise. To support this argument, let's walk through a range of enterprise applications and explore how consumer-facing social experiences might be interpreted in this context.

    Here's one example of how a social experience could be integrated into a sales enablement application. As a marketer, I spend a great deal of time collaborating with my sales colleagues, so I have good insight into their working process. While at Involver, we grew our sales team quickly, and it became evident some of our processes broke with scale. For example, we used to have weekly team meetings at which we'd discuss what was working and what wasn't from a messaging perspective. One aspect of these sessions focused on "objections" and "responses", where the salespeople would walk through common objections to purchasing and share appropriate responses. We tried to map each context to the best answers, and we'd capture these on a wiki page. As our team grew, however, participation at scale just wasn't tenable, and our wiki pages quickly lost their freshness.

    Imagine giving salespeople a place where they could submit common objections and responses for their colleagues to see, sort, comment on, and vote on. What you'd get is an up-to-date and relevant repository of information. And, if you supported an application like this with a social graph, it would be possible to make good recommendations to individual salespeople about the objections they'd likely hear based on vertical, product, region or other graph data. Taking it even further, you could build in a badging/game element to reward the salespeople who participate the most. Both these examples are based on proven models at work inside consumer-facing applications.

    If you want to learn how HR, Operations, Product Development and Customer Support can leverage social experiences, you’re welcome to join us at JavaOne or join our Social Developer Community to find some of the presentations after OpenWorld.

    Read the article

  • What Counts For a DBA: Replaceable

    - by Louis Davidson
    Replaceable is what every employee in every company instinctively strives not to be. Yet, if you’re an irreplaceable DBA, meaning that the company couldn’t find someone else who could do what you do, then you’re not doing a great job. A good DBA is replaceable. I imagine some of you are already reaching for the lighter fluid, about to set the comments section ablaze, but before you destroy a perfectly good Commodore 64, read on…

    Everyone is replaceable, ultimately. Anyone, anywhere, in any job, could be sitting at their desk reading this, blissfully unaware that this is to be their last day at work. Morbidly, you could be about to take your terminal breath. Ideally, it will be because another company suddenly offered you a truck full of money to take a new job, forcing you to bid a regretful farewell to your current employer (with barely a “so long, suckers!” left wafting in the air as you zip out of the office like Wile E. Coyote wearing two pairs of rocket skates).

    I’ve often wondered what it would be like to be present at the meeting where your former work colleagues discuss your potential replacement. It is perhaps only at this point, as they struggle with the question “What kind of person do we need to replace old Wile?”, that you would know your true worth in their eyes. Of course, this presupposes you need replacing. I’ve known one or two people whose absence we adequately compensated for with a small rock, to keep their old chair from rolling down a slight incline in the floor. On another occasion, we bought a noise-making machine that frequently attracted attention its way, with unpleasant sounds, but never contributed anything worthwhile. These things never actually happened, of course, but you take my point: don’t confuse replaceable with expendable. Likewise, if the term “trained seal” comes up – someone they can teach to follow basic instructions and push buttons in the right order – then the replacement discussion is going to be over quickly.

    What, however, if your colleagues decide they’ll need a super-specialist to replace you? That’s a good thing, right? Well, usually, in my experience, no it is not. It often indicates that no one really knows what you do, or how. A typical example is the “senior” DBA who built a system just before 16-bit computing became all the rage and then settled into a long career managing it. Such systems are often central to the company’s operations and the DBA very skilled at what they do, but almost impossible to replace, because the system hasn’t evolved, and runs on processes and routines that others no longer understand or recognize.

    The only thing you really want to hear, at your replacement discussion, is that they need someone skilled at the fundamentals and adaptable. This means that the person they need understands that their goal is to be an excellent DBA, not a specialist in whatever-the-heck the company does. Someone who understands the new versions of SQL Server and can adapt the company’s systems to the way things work today, who uses industry-standard methods that any other qualified DBA/programmer can understand. More importantly, this person rarely wants to get “pigeonholed” and so documents and shares the specialized knowledge and responsibilities with their teammates.

    Being replaceable doesn’t mean being “a dime a dozen”. The company might need four people to take your place due to the depth of your skills, but still, they could find those replacements, and those replacements could step right in using techniques that any decent DBA should know. It is a tough question to contemplate, but take some time to think about the sort of person your colleagues would seek to replace you. If you think they would go looking for a “super-specialist”, then consider urgently how you can diversify and share your knowledge, and start documenting all the processes you know as if today were your last day. Because who knows, it just might be.

    Read the article

  • Workflow versioning

    - by Nitra
    I believe I have a fundamental misunderstanding when it comes to workflow engines, and I would appreciate your help sorting it out. I'm not sure if my misunderstanding is specific to the workflow engine I'm using, or if it's a general misunderstanding. I happen to use Windows Workflow Foundation (WWF).

    TLDR version

    WWF allows you to implement business processes in long-running workflows (think months or even years). Once started, the workflows can't be changed. But what business process can't change at any time? And if a business process changes, wouldn't you want your software to reflect this change for already-started 'instances' of the business process? What am I missing?

    Background

    In WWF you define a workflow by combining a set of activities. There are different types of activities – some of them are for flow control, such as the IfElseActivity and the WhileActivity, while others allow you to perform actual tasks, such as the CodeActivity, which allows you to run .NET code, and the InvokeWebServiceActivity, which allows you to call web services. The activities are combined into a workflow using a visual designer. You pretty much drag-and-drop activities from a toolbox to a designer area and connect the activities to each other. The workflow and activities have input parameters, output parameters and variables.

    We have a single workflow which sometimes runs in a matter of a few days, but it may run for 5-6 months. WWF takes care of persisting the workflow state (what activity we are currently executing, what the variable values are, and so on). So far, I think WWF makes sense. Some people will prefer to implement a software representation of a business process using a visual designer over writing all of it in code.

    So what's the issue then?

    What I don't really get is the following: WWF is designed to take care of long-running workflows, but at the same time it has no built-in functionality that allows you to modify the running workflows. So if you model a business process using a workflow and run it for 6 months, you'd better hope that the business process does not change. Because if it does, you'll have to have multiple versions of the workflow executing at the same time. This seems like a fundamental design mistake to me, but at the same time it seems more likely that I've misunderstood something.

    For us, this has had some real-world effects:

    - We release new versions every month, but some workflows may run for a year. This means that we have several versions of the workflow running in parallel – in other words, several versions of the business logic. This is the same as having many different versions of your code running in production in the same system at the same time, which becomes a bit hard for users to understand. (Depending on whether they clicked a 'Start' button 9 or 10 months ago, the software will behave differently.)
    - Our workflow refers to different types of entities, and since WWF has now persisted and serialized these, we can't really refactor the entities, because then existing workflows can't be resumed (deserialization will fail).

    We've received some suggestions on how to handle this:

    - When we create a new version of the workflow, cancel all running workflows and create new ones. But there's a lot of manual work involved in our workflows, and if we start from scratch a lot of people have to redo their work.
    - Track what has been done in the workflow, and when you create a new one, skip activities which have already been executed. I feel that this alternative may work for simple workflows, but it becomes hairy to automatically figure out which activities to skip if there's major refactoring done to a workflow.
    - When we create a new version of the workflow, upgrade old versions using the new WWF 4.5 functionality for upgrading workflows (the related side-by-side identity mechanism is sketched below). But then we would have to skip using the visual designer and write code to inject activities in the right places in the workflow. According to MSDN, this upgrade functionality is only intended for minor bug fixes, not larger changes.

    What am I missing?
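
    To make the versioning discussion concrete, here is a rough sketch of the WF 4.5 side-by-side pattern referred to in the last suggestion: every persisted instance is stamped with a WorkflowIdentity, and the host picks the matching (frozen) definition when it resumes an instance. Instance-store setup is omitted and the empty Sequences stand in for real definitions – treat this as an assumption-laden illustration, not production code:

        using System;
        using System.Activities;
        using System.Activities.Statements;

        class VersionedHost
        {
            static readonly WorkflowIdentity V1 =
                new WorkflowIdentity("OrderProcess", new Version(1, 0), null);
            static readonly WorkflowIdentity V2 =
                new WorkflowIdentity("OrderProcess", new Version(2, 0), null);

            // New instances always start on the latest definition...
            static WorkflowApplication StartNew()
            {
                return new WorkflowApplication(GetDefinition(V2), V2);
            }

            // ...while resumed instances are hosted with whatever definition
            // they were started under (the identity is stored with the state).
            static WorkflowApplication Resume(WorkflowIdentity storedIdentity)
            {
                return new WorkflowApplication(GetDefinition(storedIdentity), storedIdentity);
            }

            static Activity GetDefinition(WorkflowIdentity identity)
            {
                // Each released version's activity tree is kept frozen forever;
                // these empty Sequences stand in for the real definitions.
                return identity.Version.Major == 1
                    ? new Sequence { DisplayName = "OrderProcess v1" }
                    : new Sequence { DisplayName = "OrderProcess v2" };
            }
        }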

    Read the article

  • Review – Build Android and iOS apps in Visual Studio with Nomad

    - by Bill Osuch
    Nomad is a Visual Studio extension that allows you to build apps for both Android and iOS platforms in Visual Studio using HTML5. There is no need to switch between .NET, Java and Objective-C to target different platforms – write your code once in HTML5 and build for all common mobile platforms and tablets. You have access to the native hardware functions (such as the camera and GPS) through the PhoneGap library, and UI libraries such as jQuery Mobile allow you to create an impressive UI with minimal work.

    Nomad is still in an early access beta stage, so the documentation is a bit sparse. In fact, the only documentation is a simple series of steps on how to install the plug-in, set up a project, and build and deploy it. You're going to want to be at least a little familiar with the PhoneGap library and jQuery Mobile to really tap into the power of this.

    The sample project included with the download shows you just how simple it is to create projects in Visual Studio. The sample solution comes with an index.html file containing the HTML5 code, the Cordova (PhoneGap) library, jQuery libraries, and a jQuery style sheet. The HTML file is pretty straightforward. If you haven't experimented with jQuery Mobile before, some of the attributes (such as data-role) might be new to you, but some quick Googling will fill in everything you need to know. The first part of the file builds a simple (but attractive) list with some links in it. The second part of the file is where things get interesting and it taps into the PhoneGap library. For instance, it gets the geolocation position by reading position.coords.latitude and position.coords.longitude, and then displays it in a simple span.

    Building is pretty simple, at least for Android (I'm not an iOS developer, so I didn't look at that feature) – just configure the display name, version number, and package ID. There's no need to specify the Android version; Nomad supports 2.2 and later. Enter these bits of information, click the new "Build for Android" button (not the regular Visual Studio Build link...) and you get a dialog box saying that your code is being built by their cloud build service (so no building while away from a WiFi signal, apparently). After a couple of minutes you wind up with an .apk file that can be copied over to your device. Applications built with Nomad for Android currently use a temporary certificate, so you can test the app on your devices but you cannot publish it in the Google Play Store (yet). And I love the "success" dialog box.

    Since Nomad is still in beta, no pricing plans have been announced yet, so I'll be curious to see if this becomes a cost-effective solution to mobile app development. If it is, I may even be tempted to spring for the $99 iOS membership fee! In the meantime, I plan to work on porting some of my apps over to it and seeing how they work. My only quibble at this time is the lack of a centralized documentation location – I'd at least like to see which (if any) features of jQuery and PhoneGap are limited or not supported. Also, some notes on targeting different Android screen sizes would be nice, but it's relatively easy to find jQuery examples out on the InterWebs. Oh well, trial and error! You can download the Nomad extension for Visual Studio by going to their web site: www.vsnomad.com.

    Read the article

  • Ask the Readers: Would You Be Willing to Give Windows Up and Use a Different O.S.?

    - by Asian Angel
    When it comes to computers, Windows definitely rules the desktop in comparison to other operating systems. What we would like to know this week is whether you would actually be willing to give up using Windows altogether and move to a different operating system on your computers. Note: this week’s Ask the Readers post is posing a hypothetical situation, so please refrain from starting arguments or a flame war in the comments. Good reasoned discussion is always welcome.

    There is no doubt that Windows is the dominant operating system in use today. Everywhere you go or look it is easy to find computers with Windows installed, such as at work, home, the library, government offices, and more. For many people it is the operating system that they know and are comfortable with, which makes changing to a different operating system less appealing. Adding to the preference for Windows (or dependency, based on your view) is the custom software that many businesses use on a daily basis. Throw in the high volume of people who depend on and use Microsoft Office as a standard for their business documents, and it is little wonder that Windows is so dominant.

    So what would you use if you did decide to take a break from or permanently move away from Windows? If your choice is Linux, then you have a large and wonderful variety of distributions to choose from based on what you want out of your system. Want a distribution that is easy to work with? You could choose Ubuntu, Linux Mint, or others that are engineered to be ready to go “out of the box”. Like a challenge? Perhaps Arch Linux is more your style. One of the most attractive features of all about Linux is the price… it is very hard to beat free!

    Maybe Mac OS X sounds like the perfect choice. It has a certain mystique and elegance associated with it, and many OS X fans refuse to use anything else if given a choice. Then there is the soon-to-be-released Chrome OS, with its emphasis on cloud computing. This is a system that is definitely focused on being as low-maintenance and hassle-free as possible. Quick on, quick off, minimalist, and made to be portable. All of the system’s updates will occur automatically, leaving you free to work and play in the cloud. But it does have its limitations… no installing all of those custom apps that you love using on Windows or other systems… it is literally all about the browsing window and web apps.

    So there you have it. If the opportunity presented itself, would you, could you give Windows up and use a different operating system? Would it be easy or hard for you to do? Perhaps it would not really matter, so long as you could do what you needed or wanted to do on a computer. And maybe this is the perfect time to try something new and find out… that new favorite operating system could be just an install disc away. Let us know your thoughts in the comments!

    Read the article

  • Hell and Diplomacy: Notes on Software Integration

    - by ericajanine
    Well, I'm getting cabin fever and short-timer's ADD all at the same time. I haven't been anywhere outside of my greater city area in FOREVER, and I'm only days away from my vacation. I have brainlock because the last few days have been non-stop defusing of amazingly hostile conversations. I think I'll write about that.

    So then, I "do" software. At the end of the day, software is pretty straightforward. Software is that thing we love and try to make do things that are not currently in play, in existence. If a process around getting software to do something is broken (like most actually are), then we should acknowledge it and move on. We are professional. We are helpful beyond the normal call of duty. We live and breathe making lives better through the apps that are active in the world. But above all – the shocker: we are SERVICE.

    In a service frame of mind, all perspectives shift to what is best overall for system stabilization versus what must be in production to meet business objectives. It doesn't matter how much you like or dislike the creator of said software. It doesn't matter what time you went to bed last night or if your mate appreciates your Death March attitude. Getting a product in, and when, is an age-old dilemma in a software environment where more than, say, 3 people are involved. We know this. Taking a servant's perspective eliminates the drama surrounding what a group of half-baked developers forgot to tell each other in the 11th hour about their trampling changes before check-in. We, my counterparts in society, get paid to deal with that drama. I get paid to defuse that drama and make everything integrate as smoothly as possible. At the end of the day, attacking someone over a minor detail not only makes things worse, it's against the whole point of our real existence.

    Being in support or software integration means you are to keep your eyes on the end game. That end game? It's making a solution work for all stakeholders, not just you or your immediate superior. Development and technology groups exist because business groups need them to exist and solve their issues. The end game? Doing what is best for those business groups, ultimately. Period.

    Note: that does not mean you let your business users solely dictate when and if something gets changed in an environment you ultimately own. That's just crazy. Software and its environments are legitimately owned by those who manage them directly, no matter how important a business group believes it is to the existence of mankind. So, you both negotiate the terms of changing that environment, and only do so upon that negotiation. Diplomacy is in order.

    So, to finish my thoughts: if you have no ability to keep your mouth shut in a situation where a business or development group truly needs your help to make something work, even beyond a deadline, find another profession. Beating someone up verbally because they screwed up means a service attitude is not at the forefront of your motivation for doing what is ultimately their work and their product. Software, especially integration, requires a strong will and a soft touch to keep it on track. Not a hammer covered in broken glass.

    Read the article

  • RewriteRule not working for certain URLs

    - by keiki
    There are a few domains pointing towards the same server, and of course I need them all to redirect to only one of them. Redirects work, but only for certain URLs.

    What works: http://www.domain.com, http://domain.com, domain.com/index.html, domain.com/index.php, domain.com/nonExistentDirectory, and if I click in the menu the following URLs are also redirected correctly: domain.com/foo/bar, domain.com/foo/bar.html (or .php, or another extension).

    What doesn't work: domain.com/existentDirectory, domain.com/foo/bar (if I type the URL in the address bar).

    If anyone has the time, skill and will to tell me where the mistake is, I'll be deeply grateful. Here's my .htaccess file:

        AddHandler x-httpd-php .html .htm

        <ifModule mod_gzip.c>
        mod_gzip_on Yes
        mod_gzip_dechunk Yes
        mod_gzip_item_include file \.(html?|txt|css|js|php|pl)$
        mod_gzip_item_include handler ^cgi-script$
        mod_gzip_item_include mime ^text/.*
        mod_gzip_item_include mime ^application/x-javascript.*
        mod_gzip_item_exclude mime ^image/.*
        mod_gzip_item_exclude rspheader ^Content-Encoding:.*gzip.*
        </ifModule>

        <ifModule mod_expires.c>
        ExpiresActive On
        ExpiresDefault "access plus 1 seconds"
        ExpiresByType text/html "access plus 1 seconds"
        ExpiresByType image/gif "access plus 2592000 seconds"
        ExpiresByType image/jpeg "access plus 2592000 seconds"
        ExpiresByType image/png "access plus 2592000 seconds"
        ExpiresByType text/css "access plus 2592000 seconds"
        ExpiresByType text/javascript "access plus 216000 seconds"
        ExpiresByType application/x-javascript "access plus 216000 seconds"
        </ifModule>

        <ifModule mod_headers.c>
        <filesMatch "\.(ico|pdf|flv|jpg|jpeg|png|gif|swf)$">
        Header set Cache-Control "max-age=2592000, public"
        </filesMatch>
        <filesMatch "\.(css)$">
        Header set Cache-Control "max-age=2592000, public"
        </filesMatch>
        <filesMatch "\.(js)$">
        Header set Cache-Control "max-age=216000, private"
        </filesMatch>
        <filesMatch "\.(xml|txt)$">
        Header set Cache-Control "max-age=216000, public, must-revalidate"
        </filesMatch>
        <filesMatch "\.(html|htm|php)$">
        Header set Cache-Control "max-age=1, private, must-revalidate"
        </filesMatch>
        </ifModule>

        <ifModule mod_headers.c>
        Header unset ETag
        </ifModule>
        FileETag None

        <ifModule mod_headers.c>
        Header unset Last-Modified
        </ifModule>

        # BEGIN WordPress
        <IfModule mod_rewrite.c>
        RewriteEngine On
        RewriteBase /
        RewriteCond %{REQUEST_FILENAME} !-f
        RewriteCond %{REQUEST_FILENAME} !-d
        RewriteRule . /index.php [L]
        </IfModule>
        # END WordPress

        RewriteCond %{HTTP_HOST} ^foo\.com$ [OR]
        RewriteCond %{HTTP_HOST} ^www\.foo\.com$
        RewriteRule (.*) http://domain.com/$1 [R=301,L,QSA]

        RewriteCond %{HTTP_HOST} ^foo1\.com$ [OR]
        RewriteCond %{HTTP_HOST} ^www\.foo1\.com$
        RewriteRule (.*) http://domain.com/$1 [R=301,L,QSA]

        RewriteCond %{HTTP_HOST} ^foo2\.com$ [OR]
        RewriteCond %{HTTP_HOST} ^www\.foo2\.com$
        RewriteRule (.*) http://domain.com/$1 [R=301,L,QSA]

        RewriteCond %{HTTP_HOST} ^foo3\.com$ [OR]
        RewriteCond %{HTTP_HOST} ^www\.foo3\.com$
        RewriteRule (.*) http://domain.com/$1 [R=301,L,QSA]

        RewriteCond %{HTTP_HOST} ^foo8\.com$ [OR]
        RewriteCond %{HTTP_HOST} ^www\.foo8\.com$
        RewriteRule (.*) http://domain.com/$1 [R=301,L,QSA]

    Thinking that the above version was overkill, I've also tried to redirect all requests for domains other than the main one to it, like this:

        RewriteCond %{HTTP_HOST} !^domain\.com$ [NC]
        RewriteRule ^(.*)$ http://domain.com [L,R=301]

    Is it also wrong? Because it doesn't work either!

    P.S. @Sodved: I've tried that and it doesn't help (I comment here because I can't seem to be able to comment on your answer). Removing the following piece of code didn't solve the issue either, so the problem must be somewhere else:

        # BEGIN WordPress
        <IfModule mod_rewrite.c>
        RewriteEngine On
        RewriteBase /
        RewriteCond %{REQUEST_FILENAME} !-f
        RewriteCond %{REQUEST_FILENAME} !-d
        RewriteRule . /index.php [L]
        </IfModule>
        # END WordPress

    New details: using this tool for checking the redirects, I got the following results for the URLs that are not redirected:

        Checked link: http://domain.com/aDirectory/
        Type of link: direct link

    (note the trailing slash above) and:

        Checked link: http://domain.com/aDirectory
        Type of redirect: 301 Moved Permanently
        Redirected to: http://domain.com/aDirectory/

    (no trailing slash here)

    I hope/suspect I'm getting closer to the cause of this behavior.

    Read the article

  • Find Rules and Defaults using the PowerShell for SQL Server 2008 Provider

    - by BuckWoody
    I ran into an issue the other day where I couldn't set up some features in SQL Server 2008 because they don't support the use of Rules or Defaults. Let me explain a little more about that. In older versions of SQL Server, you could declare a "Rule" or "Default" just like you do with a table constraint today. You would then "bind" these rules or defaults to the tables you wanted them to apply to. Sure, there are advantages and disadvantages to this approach, but it certainly isn't standard Data Definition Language (DDL), so they are deprecated and many features don't work with them any more. Honestly, it's been so long since I've seen them in use I had forgotten to even check for them. My suspicion is that this was a new database created with an older script. Nevertheless, the feature failed when it ran into one.

    Immediately I thought that I had better build some logic into my process to try and catch those – but how? Lots of choices here, but since I was using PowerShell to do the rest of the work, I thought I would investigate how easy it would be just to do it there. And using the SQL Server 2008 provider, this could not be simpler. I won't show all of the script here, because I was testing for these as a condition and then bailing out of the script and sending a notification, but all it is using is the DIR command! Here's an example on my "UNIVAC" computer for the "pubs" database.

    Find Rules and Defaults using PowerShell:

        dir SQLSERVER:\SQL\UNIVAC\DEFAULT\Databases\pubs\Rules
        dir SQLSERVER:\SQL\UNIVAC\DEFAULT\Databases\pubs\Defaults

    And this one will look in all databases:

        dir SQLSERVER:\SQL\UNIVAC\DEFAULT\Databases | select-object -property Name, Rules, Defaults

    Awesome. Love me some PowerShell.

    Script disclaimer, for people who need to be told this sort of thing: never trust any script, including those that you find here, until you understand exactly what it does and how it will act on your systems. Always check the script on a test system or virtual machine, not a production system. Yes, there are always multiple ways to do things, and this script may not work in every situation, for everything. It's just a script, people. All scripts on this site are performed by a professional stunt driver on a closed course. Your mileage may vary. Void where prohibited. Offer good for a limited time only. Keep out of reach of small children. Do not operate heavy machinery while using this script. If you experience blurry vision, indigestion or diarrhea during the operation of this script, see a physician immediately.

    Read the article

  • What is the difference between Workcenters, Dashboards, and the Interaction Hub?

    - by Matthew Haavisto
    Oracle Open World has just concluded. Over the course of the conference, we presented several sessions covering different aspects of the PeopleSoft user experience, including Workcenters, Dashboards, and the PeopleSoft Interaction Hub (formerly known as the PeopleSoft Applications Portal). Although we've produced collateral on these features and covered them in sessions, it became apparent at the conference that customers still have many questions about these products, including how they are licensed, how they are installed, what their various purposes are, and how they can be used together synergistically.

    Let's Start with Licensing and Installation

    As you may know, we've extended the restricted use license (RUL) for the Interaction Hub. This grants customers with PeopleTools 8.52 licenses the right to install the Interaction Hub for free, for use as specified in the Tools license notes. Note that this means customers receive a restricted use license for the Interaction Hub that doesn't cost them an additional license fee, but it is a separate product, not part of PeopleTools or PeopleSoft applications, and is a separate installation. This means customers must provide the infrastructure to install and run the Hub, just like any other application. The benefits of using the Hub to unify your PeopleSoft user experience can be great. PeopleSoft applications have not yet delivered instances of the Hub with their products, though they may in the future.

    Workcenters and Dashboards, on the other hand, are frameworks provided by PeopleTools. No other license is required, and no additional installation of a separate product is needed (apart from PeopleTools and PeopleSoft applications). PeopleSoft applications are delivering instances of the workcenters and dashboards with their products. Some are available now, and more are coming in future releases. These delivered workcenter and dashboard instances require no additional licenses, and no additional installations beyond Tools and the applications that provide them. In addition, the workcenter and dashboard frameworks provided by PeopleTools can be used by customers to build their own workcenters and dashboards, and it's quite easy and simple to do so.

    What are Their Differences? What Purposes do they Serve?

    Workcenters, Dashboards and the Interaction Hub appear somewhat similar. They all contain pagelets, and have some visual characteristics in common. However, their strengths and purposes are very different, and they were designed to provide different benefits to your PeopleSoft ecosystem.

    Workcenters and Dashboards have the following characteristics:

    - Designed for specific roles
    - Focus on the daily tasks of those roles
    - Help to streamline the work performed most often
    - Personal view of my work world
    - Make navigation and search easier and quicker, particularly for transactions and decision support
    - Reports and data needed for day-to-day work
    - Personalizable, but minimal
    - Delivered by PeopleSoft applications, but can be altered by customers for their requirements
    - Customers can create their own
    - Workcenters can be used for guided processes

    The Interaction Hub is designed to aggregate content from multiple applications, and is used to unify the user experience of those applications. It offers a rich, web site-based user experience, and is often used to provide access to infrequently performed activities like benefits enrollment, payroll inquiries, life event changes, onboarding, and so on. Its characteristics:

    - Full-featured and robust
    - Centrally administered
    - Pushed to a large audience
    - Broad info like company news
    - Infrequent activities like benefits, not day-to-day tasks
    - Self-service, access to employer info
    - Central launch point for other activities; can navigate to workcenters and dashboards
    - Deployed by customers or consultants; instances not delivered by PeopleSoft (at this time)
    - Content management
    - Unified PeopleSoft application navigation

    Although these products are quite different and serve different purposes in your PeopleSoft environment, they can be used together to provide a richer, more efficient and engaging user experience for all your user communities.

    Read the article

  • Acceptance tests done first...how can this be accomplished?

    - by Crazy Eddie
    The basic gist of most Agile methods is that a feature is not "done" until it's been developed, tested, and in many cases released. This is supposed to happen in quick-turnaround chunks of time, such as "sprints" in the Scrum process. A common part of Agile is also TDD, which states that tests are done first.

    My team works on a GUI program that does a lot of specific drawing and such. In order to provide tests, the testing team needs to be able to work with something that at least attempts to perform the things they are trying to test. We've found no way around this problem. I can very much see where they are coming from, because if I were trying to write software that targeted some basically mysterious interface, I'd have a very hard time. Although we have behavior fairly well specified, the exact process of interacting with various UI elements, when it comes to automation, seems to be too unique to a feature to allow testers to write automated scripts to drive something that does not exist. Even if we could, a lot of things end up turning up later as having been missing from the specification.

    One thing we considered doing was having the testers write test "scripts" that are more like a set of steps that must be performed, as described from a use-case perspective, so that they can be "automated" by a human being. These can then be performed by the developer(s) writing the feature and/or verified by someone else. When the testers later get an opportunity, they automate the "script", mainly for regression purposes. This didn't end up catching on in the team, though. The testing part of the team is actually falling behind us by quite a margin, which is one reason why the apparently extra time of developing a "script" for a human being to perform just did not happen... they're under a crunch to keep up with us developers. If we waited for them, we'd get nothing done. It's not their fault, really; they're a bottleneck, but they're doing what they should be and working as fast as possible. The process itself seems to be set up against them. Very often we end up having to go back a month or more in what we've done to fix bugs that the testers have finally gotten around to checking. It's an ugly truth that I'd like to do something about.

    So what do other teams do to solve this fail cascade? How can we get testers ahead of us, and how can we make it so that there's actually time for them to write tests for the features we do in a sprint, without making us sit and twiddle our thumbs in the meantime? As it's currently going, the only way to get a feature "done", using Agile definitions, would be to have developers work for one week, then testers work the second week, with developers hopefully able to fix all the bugs they come up with in the last couple of days. That's just not going to happen, even if I agreed it was a reasonable solution. I need better ideas...

    Read the article

  • PASS: The Legal Stuff

    - by Bill Graziano
    I wanted to give a little background on the legal status of PASS. The Professional Association for SQL Server (PASS) is an American corporation chartered in the state of Illinois. In America, a corporation has to be chartered in a particular state. It has to abide by the laws of that state and potentially pay taxes to that state. Our bylaws and actions have to comply with Illinois state law and United States law. We maintain a mailing address in Chicago, Illinois, but our headquarters is currently in Vancouver, Canada. We have roughly a dozen people who work in our Vancouver headquarters and 4-5 more who work remotely on various projects. These aren't employees of PASS. They are employed by a management company that we hire to run the day-to-day operations of the organization. I'll have more on this arrangement in a future post.

    PASS is a non-profit corporation. The terms non-profit and not-for-profit are used interchangeably. In a for-profit corporation (or LLC) there are owners who are entitled to the profits of the company. In a non-profit there are no owners. As a non-profit, all the money earned by the organization must be retained or spent. There is no money that flows out to shareholders, owners or the board of directors. Any money not spent in furtherance of our mission is retained as financial reserves.

    Many non-profits apply for tax-exempt status. Being tax exempt means that an organization doesn't pay taxes on its profits. There are a variety of laws governing who can be tax exempt in the United States, and many professional associations are tax exempt; however, PASS isn't. Because our mission revolves around the software of a single company, we aren't eligible for tax-exempt status.

    PASS was founded in the late 1990s by Microsoft and Platinum Technologies. Platinum was later purchased by Computer Associates. As the founding partners, Microsoft and CA each have two seats on the Board of Directors. The other six directors and three officers are elected as specified in our bylaws. As a non-profit, our bylaws lay out our governing practices. They must conform to Illinois and United States law. These bylaws specify that PASS is governed by a Board of Directors elected by the membership, with two members each from Microsoft and CA. You can find our bylaws, as well as a proposed update to them, on the governance page of the PASS web site.

    The last point that I'd like to make is that PASS is completely self-funded. All of our $4 million in revenue comes from conference registrations, sponsorships and advertising. We don't receive any money from anyone outside those channels. While we work closely with Microsoft, we are independent of them and derive only a very small percentage of our revenue from them.

    Read the article

  • Feynman's inbox

    - by user12607414
    Here is Richard Feynman writing on the ease of criticizing theories, and the difficulty of forming them: The problem is not just to say something might be wrong, but to replace it by something — and that is not so easy. As soon as any really definite idea is substituted it becomes almost immediately apparent that it does not work. The second difficulty is that there is an infinite number of possibilities of these simple types. It is something like this. You are sitting working very hard, you have worked for a long time trying to open a safe. Then some Joe comes along who knows nothing about what you are doing, except that you are trying to open the safe. He says ‘Why don’t you try the combination 10:20:30?’ Because you are busy, you have tried a lot of things, maybe you have already tried 10:20:30. Maybe you know already that the middle number is 32 not 20. Maybe you know as a matter of fact that it is a five digit combination… So please do not send me any letters trying to tell me how the thing is going to work. I read them — I always read them to make sure that I have not already thought of what is suggested — but it takes too long to answer them, because they are usually in the class ‘try 10:20:30’. (“Seeking New Laws”, page 161 in The Character of Physical Law.) As a sometime designer (and longtime critic) of widely used computer systems, I have seen similar difficulties appear when anyone undertakes to publicly design a piece of software that may be used by many thousands of customers. (I have been on both sides of the fence, of course.) The design possibilities are endless, but the deep design problems are usually hidden beneath a mass of superfluous detail. The sheer numbers can be daunting. Even if only one customer out of a thousand feels a need to express a passionately held idea, it can take a long time to read all the mail. And it is a fact of life that many of those strong suggestions are only weakly supported by reason or evidence. Opinions are plentiful, but substantive research is time-consuming, and hence rare. A related phenomenon commonly seen with software is bike-shedding, where interlocutors focus on surface details like naming and syntax… or (come to think of it) like lock combinations. On the other hand, software is easier than quantum physics, and the population of people able to make substantial suggestions about software systems is several orders of magnitude bigger than Feynman’s circle of colleagues. My own work would be poorer without contributions — sometimes unsolicited, sometimes passionately urged on me — from the open source community. If a Nobel prize winner thought it was worthwhile to read his mail on the faint chance of learning a good idea, I am certainly not going to throw mine away. (In case anyone is still reading this, and is wondering what provoked a meditation on the quality of one’s inbox contents, I’ll simply point out that the volume has been very high, for many months, on the Lambda-Dev mailing list, where the next version of the Java language is being discussed. Bravo to those of my colleagues who are surfing that wave.) I started this note thinking there was an odd parallel between the life of the physicist and that of a software designer. On second thought, I’ll bet that is the story for anybody who works in public on something requiring special training. (And that would be pretty much anything worth doing.) In any case, Feynman saw it clearly and said it well.

    Read the article

  • Syntax of passing lambda

    - by Astara
    Right now, I'm working on refactoring a program that calls its parts by polling into a more event-driven structure. I've created Sched and Task classes, with Sched becoming a base class of the current main loop. A Task will be created for each meter so the meters can be called from the scheduler instead of being polled. Each of the meters that main calls gathers some info and displays it. When the program comes up, all enabled meters get 'constructed' by a main sub. In that sub, I want to store off the "this" pointer associated with the meter, as well as the common name for the "action" routine:

    void MeterMaker::Meter_n_Task(Meter *newmeter) {
        push(newmeter);                   // handle non-timed draw events
        Task *t = new Task(now() + 0.5L);
        t->period = {0, 1U};
        t->work_meter = newmeter;
        // Attempt at a lambda: capture the pointer by value, since a reference
        // to the parameter would dangle once this function returns.
        t->work = [newmeter]() { newmeter->checkevent(); };
        t->flags = T_Repeat;
        t->enable_task();
        _xos->sched_insert(t);
    }

    A sample call to it:

    Meter_n_Task(new CPUMeter(_xos, "CPU "));

    I've made the scheduler a base class of the main routine (that handles the loop), and I've tried several variations to get the Task class to be a base of the Meter class, but I keep running into roadblocks. It's a lot like whack-a-mole: pound in something to fix one thing, and a new problem pops out elsewhere. Part of the problem is that the sched.h file, which is trying to hold the Task queue, includes the Task header file. The Task file wants to refer to the most "base" Meter class. The Meter class pulls in the main class of the parent, since it passes a copy of the parent to the children so they can access the draw routines in the parent. Two references in the Task file are for the 'this' pointer of the meter and the meter's update sub (to be called via this):

    void *this_data = NULL;
    void (*this_func)() = NULL;

    Note -- I didn't really want to store these in the class, as I wanted to use a lambda in that meter-and-task routine above to store a routine plus context that could be used to call the meter's action routine, but I couldn't figure out the syntax. And I'm running into other syntax problems trying to store the pointers, such as:

    g++: COMPILE lsched.cc
    In file included from meter.h:13:0,
                     from ltask.h:17,
                     from lsched.h:13,
                     from lsched.cc:13:
    xosview.h:30:47: error: expected class-name before '{' token
     class XOSView : public XWin, public Scheduler {

    Like above, where it asks for a class where the class name "Scheduler" is. !?!? Huh? That IS a class name. I keep going in circles with things that don't make sense...

    Ideally I'd get the lambda to work right in the Meter_n_Task routine at the top. I wanted to store only one pointer in the Task class: a pointer to my lambda, which would have already captured the "this" value. But I couldn't get that syntax to work at all when I tried to store it into a var in the Task class. This project, FWIW, is my teething project on the new C++... (of course it's simple!.. ;-)). I've made quite a bit of progress in other areas of the code, but this lambda syntax has me stumped... it's at times like these that I appreciate the ease of this type of operation in perl. Sigh. Not sure the best way to ask for help here, as this isn't a simple question. But thought I'd try!... ;-) Too bad I can't attach files to this Q.

    Read the article

  • Head in the Clouds

    - by Tony Davis
    We're just past the second anniversary of the launch of Windows Azure. A couple of years' experience with Azure in the industry has provided some obvious success stories, but has deflated some of the initial marketing hyperbole. As a general principle, Azure seems to work well in providing a service-oriented architecture for enterprises that suffer wide fluctuations in demand. Instead of being obliged to provide hardware sufficient for the occasional peaks in demand, and seeing it underused the rest of the time, one can hire capacity only when it is needed, and the cost of hosting an application is no longer a capital cost. A customer-facing application such as a concert ticketing system, which suffers high demand in short, predictable bursts of activity, is a great example of an application that would work well in Azure.

    However, moving existing applications to Azure isn't something to be done on impulse. Unless your application is .NET-based and consists of 'stateless' components that communicate via queues, you are probably in for a lot of redevelopment work. It makes most sense for IT departments who are already deep in this .NET mindset, and who also want 'grown-up' methods of staging, testing, and deployment. Azure fits well with this culture and offers, as a bonus, good Visual Studio integration.

    The most commonly stated barrier to porting applications to Azure is the problem of reconciling use of the cloud with legislation on data privacy and security. Putting databases in the cloud is a sticky issue for many, and impossible for some, due to compliance and security concerns, the need for direct control over data, and so on. In the face of feedback from the early adopters of Azure, Microsoft has broadened the architectural choices to cater for a wide range of requirements. As well as SQL Azure Database (SAD), Windows Azure offers Azure storage, the unstructured 'BLOB and Entity-Attribute-Value' NoSQL alternative (which equates more closely to folders and files than to a database), and a range of further options, including services such as OData: developers programming for Windows Azure can simply choose the one most appropriate for their needs.

    Secondly, and crucially, the Windows Azure architecture allows you the freedom to produce hybrid applications, where only those parts that need cloud-based hosting are deployed to Azure, whereas those parts that must unavoidably be hosted in a corporate datacenter can stay there. By using a hybrid architecture, it will seldom, if ever, be necessary to move an entire application to the cloud along with its personal and financial data. In the ticketing example, we could port to Azure only those parts of the application that capture and process ticket orders; once an order is captured, the financial side can be processed in our own datacenter.

    In short, Windows Azure seems to be a very effective way of providing services that are subject to wide but predictable fluctuations in demand. Have you come to the same conclusions, or do you think I've got it wrong? If you've had experience with Azure, would you recommend it? It would be great to hear from you. Cheers, Tony.
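    To make the hybrid idea concrete, here is a minimal sketch; the IOrderQueue interface and both classes are invented for illustration, not taken from any Azure SDK. The capture side is stateless and could run in Azure during a demand spike, while payment processing drains the queue inside the corporate datacenter:

    // Hypothetical abstraction over whichever queue transport is used
    // (storage queues, a service bus, etc.); not a real Azure API.
    public interface IOrderQueue
    {
        void Enqueue(TicketOrder order);
        TicketOrder TryDequeue();   // returns null when the queue is empty
    }

    public class TicketOrder
    {
        public string CustomerId { get; set; }
        public int SeatCount { get; set; }
    }

    // Runs in the cloud: stateless, only captures orders during the spike.
    public class OrderCaptureService
    {
        private readonly IOrderQueue queue;
        public OrderCaptureService(IOrderQueue queue) { this.queue = queue; }

        public void Capture(TicketOrder order)
        {
            // No financial data is handled here; the order simply crosses
            // the queue boundary into the corporate datacenter.
            queue.Enqueue(order);
        }
    }

    // Runs on-premises: drains the queue and settles payment locally.
    public class PaymentProcessor
    {
        private readonly IOrderQueue queue;
        public PaymentProcessor(IOrderQueue queue) { this.queue = queue; }

        public void ProcessPending()
        {
            TicketOrder order;
            while ((order = queue.TryDequeue()) != null)
            {
                ChargeCustomer(order);   // settlement stays inside the datacenter
            }
        }

        private void ChargeCustomer(TicketOrder order) { /* local billing system */ }
    }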

    Read the article

  • Deploying Oracle ADF Essentials Applications to Glassfish

    - by Shay Shmeltzer
    With the new Oracle ADF Essentials offering you can now deploy applications that leverage Oracle ADF on the open source GlassFish 3.1 server. Deployment is documented in the official JDeveloper and ADF documentation (here), but below is a summary of the steps, plus a video, showing what you'll need to do to get a basic Oracle ADF Essentials application working on GlassFish. Note - to make starting/stopping GlassFish easier for my demo, I used my GlassFish extension, which you can get here.

    First we'll install some ADF Runtime libraries on GlassFish:
    1. Download and install GlassFish. (Note - if you also have an Oracle DB on the same machine, you'll want to switch GlassFish's HTTP port to something other than 8080.)
    2. Download the Oracle ADF Essentials packaging - this will get you an adf_essentials.zip file.
    3. Copy adf_essentials.zip to the lib directory of your GlassFish domain - on a default Windows install this would be: C:\glassfish3\glassfish\domains\domain1\lib
    4. Go to the above lib directory and issue: unzip -j adf_essentials.zip
    This will extract the ADF libraries to the directory. Now you can start the GlassFish server.

    Next, let's configure GlassFish to handle applications of the ADF type:
    1. Invoke the GlassFish admin console (http://localhost:4848) and log into your admin account.
    2. Go to Configurations->server-config->JVM Settings and choose the JVM Options tab.
    3. Add the following entries:
    -XX:MaxPermSize=512m (this entry should already exist, so just make sure it has a big enough value)
    -Doracle.mds.cache=simple

    While we are in the admin console, we can also define the JDBC connections that will be used by our application:
    1. Go into Resources->JDBC->JDBC Connection Pools and click to create a New one.
    2. Give it a name, choose the resource type javax.sql.XADataSource, and choose Oracle as the database driver vendor. Click Next.
    3. Scroll down to the Additional Properties section and fill in the information for your database. The values for an Oracle XE would be: user=hr, databaseName=XE, password=hr, serverName=localhost, driverType=thin, portNumber=1521. Click Finish.
    4. Click Ping to check that your connection works.
    5. Define a new JDBC Resource that uses the pool you just defined. In my example I called the resource jdbc/HRDS. You will need this name to match the name in your Application Module connection configuration.
    Now restart the GlassFish server for the changes to take effect.

    Then get an ADF application going (you can use the regular Fusion Application template for this):
    1. Go into the project properties of your viewController project; under the Deployment section, click to edit the deployment profile defined there. Go to Platform, choose GlassFish 3.1 from the drop-down list, and click OK to go back to your project.
    2. Go to Application -> Application Properties -> Deployment. Go to Platform, choose GlassFish 3.1 from the drop-down list, and click OK to go back to your project. This step makes sure that JDeveloper automatically adds the necessary ADF libraries to the EAR file being generated for deployment on GlassFish.
    3. Go to Application -> Deploy and deploy either to an EAR file or directly to a GlassFish server connection that you created.

    Things should just work, but if they don't, look at server.log in the log directory and check what error is in there.
    Here is a video demo of the various steps. Note - right now the deployment of an ADF application takes about 2 minutes on my machine; we hope to be able to improve this timing in the future. People who are more familiar with GlassFish might want to explore using exploded-directory deployment and see if they can get it to work.

    Read the article

  • Introducing the Oracle Linux Playground yum repo

    - by wcoekaer
    We just introduced a new yum repository/channel on http://public-yum.oracle.com called the playground channel. What we started doing is the following: when a new stable mainline kernel is released by Linus or GregKH, we internally build RPMs to test it and do some QA work around it to keep track of what's going on with the latest development kernels. It helps us understand how performance moves up or down, and if there are issues we try to help look into them and, of course, send that work back upstream.

    Many Linux users out there are interested in trying out the latest features, but there are some potential barriers to doing so. (1) In general, you are looking at an upstream development distribution, which means that everything changes, both in userspace (random applications) and in the kernel. Projects like Fedora are very useful, and for someone who wants to see how the entire distribution evolves with all the changes, this is a great way to stay current. A drawback, though, is that if you have applications that are not part of the distribution, there's a lot of manual work involved, or they might just not work because the changes are too drastic. The introduction of systemd is a good example. (2) Many of our customers who are interested in our database products or applications want the starting point of a supported/certified userspace/distribution, like Oracle Linux; that is a much easier way to get your feet wet and see what new and future Linux kernel enhancements could do.

    This is where the playground channel comes into play. When you install Oracle Linux 6 (which anyone can download and use from http://edelivery.oracle.com/linux), grab the latest public yum repository file http://public-yum.oracle.com/public-yum-ol6.repo, put it in /etc/yum.repos.d and enable the playground repo:

    [ol6_playground_latest]
    name=Latest mainline stable kernel for Oracle Linux 6 ($basearch) - Unsupported
    baseurl=http://public-yum.oracle.com/repo/OracleLinux/OL6/playground/latest/$basearch/
    gpgkey=http://public-yum.oracle.com/RPM-GPG-KEY-oracle-ol6
    gpgcheck=1
    enabled=1

    Now all you need to do is type yum update and you will be downloading the latest stable kernel, which will install cleanly on Oracle Linux 6. Thus you end up with a stable Linux distribution where you can install all your software, and then download the latest stable kernel (at the time of writing, 3.6.7) without having to recompile a kernel or jump through hoops.

    There is of course a big, very important disclaimer: this is NOT for PRODUCTION use. We want to make it easy for people who are interested, from a user perspective, in where the Linux kernel is going, to install the latest kernel and play around with new features, without having to learn how to compile a kernel and without necessarily having to install a completely new distribution that changes top to bottom. So we don't and won't introduce any new userspace changes; this project really is about making it easy to try out the latest upstream Linux kernels on an environment that's stable and that you can keep current, since all the latest errata for Oracle Linux 6 are published on the public yum repo as well. One repository location covers all your current changes and the upstream kernels. We hope this will get more users to try out the latest kernel and report their findings; we are always interested in understanding stability and performance characteristics.

    As new features go into the mainline kernel that could potentially be interesting or useful for various products, we will try to point them out on our blogs and give an example of how something can be used, so you can try it out for yourselves. Anyway, I hope people will find this useful and that it will help increase interest in upstream development, beyond reading lkml, among some of the more non-kernel-developer types.

    Read the article

  • Multithreading 2D gravity calculations

    - by Postman
    I'm building a space exploration game and I've currently started working on gravity (in C# with XNA). The gravity still needs tweaking, but before I can do that, I need to address some performance issues with my physics calculations. This is using 100 objects; normally, rendering 1000 of them with no physics calculations gets well over 300 FPS (which is my FPS cap), but anything more than 10 or so objects brings the game (and the single thread it runs on) to its knees when doing physics calculations. I checked my thread usage: the first thread was killing itself from all the work, so I figured I just needed to do the physics calculation on another thread. However, when I try to run the Gravity.cs class's Update method on another thread, even if that Update method has nothing in it, the game is still down to 2 FPS.

    Gravity.cs

    public void Update()
    {
        foreach (KeyValuePair<string, Entity> e in entityEngine.Entities)
        {
            Vector2 Force = new Vector2();
            foreach (KeyValuePair<string, Entity> e2 in entityEngine.Entities)
            {
                if (e2.Key != e.Key)
                {
                    float distance = Vector2.Distance(entityEngine.Entities[e.Key].Position, entityEngine.Entities[e2.Key].Position);
                    if (distance > (entityEngine.Entities[e.Key].Texture.Width / 2 + entityEngine.Entities[e2.Key].Texture.Width / 2))
                    {
                        double angle = Math.Atan2(entityEngine.Entities[e2.Key].Position.Y - entityEngine.Entities[e.Key].Position.Y, entityEngine.Entities[e2.Key].Position.X - entityEngine.Entities[e.Key].Position.X);
                        // Note the parentheses around (distance * distance): without them,
                        // the expression divides by distance and then multiplies by it again.
                        float mult = 0.1f * (entityEngine.Entities[e.Key].Mass * entityEngine.Entities[e2.Key].Mass) / (distance * distance);
                        Vector2 VecForce = new Vector2((float)Math.Cos(angle), (float)Math.Sin(angle));
                        VecForce.Normalize();
                        Force = Vector2.Add(Force, VecForce * mult);
                    }
                }
            }
            entityEngine.Entities[e.Key].Position += Force;
        }
    }

    Yeah, I know. It's a nested foreach loop, but I don't know how else to do the gravity calculation, and this seems to work; it's just so intensive that it needs its own thread. (Even if someone knows a super-efficient way to do these calculations, I'd still like to know how I COULD do it on multiple threads instead.)

    EntityEngine.cs (manages an instance of Gravity.cs)

    public class EntityEngine
    {
        public Dictionary<string, Entity> Entities = new Dictionary<string, Entity>();
        public Gravity gravity;
        private Thread T;

        public EntityEngine()
        {
            gravity = new Gravity(this);
        }

        public void Update()
        {
            foreach (KeyValuePair<string, Entity> e in Entities)
            {
                Entities[e.Key].Update();
            }
            T = new Thread(new ThreadStart(gravity.Update));
            T.IsBackground = true;
            T.Start();
        }
    }

    EntityEngine is created in Game1.cs, and its Update() method is called within Game1.cs. I need the physics calculation in Gravity.cs to run every time the game updates, in a separate thread, so that the calculation doesn't slow the game down to horribly low (0-2) FPS. How would I go about making this threading work? (Any suggestions for an improved planetary gravity system are welcome if anyone has them.) I'm also not looking for a lesson in why I shouldn't use threading or the dangers of using it incorrectly; I'm looking for a straight answer on how to do it. I've already spent an hour googling this very question with few results that I understood or found helpful.
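    For what it's worth, here is the direction I've been wondering about, as a minimal sketch rather than working engine code. It assumes .NET 4's Parallel.For is available (plus using System.Linq and using System.Threading.Tasks), snapshots the entities into an array, computes each force into its own slot of a results array in parallel, and only then mutates positions on the calling thread:

    Entity[] entities = entityEngine.Entities.Values.ToArray();
    Vector2[] forces = new Vector2[entities.Length];

    Parallel.For(0, entities.Length, i =>
    {
        Vector2 force = Vector2.Zero;
        for (int j = 0; j < entities.Length; j++)
        {
            if (i == j) continue;
            Vector2 delta = entities[j].Position - entities[i].Position;
            float distance = delta.Length();
            float minDistance = entities[i].Texture.Width / 2f + entities[j].Texture.Width / 2f;
            if (distance > minDistance)
            {
                // F = G * m1 * m2 / d^2, directed along the unit vector towards j.
                float magnitude = 0.1f * entities[i].Mass * entities[j].Mass / (distance * distance);
                force += (delta / distance) * magnitude;
            }
        }
        forces[i] = force;   // each iteration writes only its own slot, so no locks needed
    });

    // Apply the results on the calling thread once the parallel loop has finished.
    for (int i = 0; i < entities.Length; i++)
        entities[i].Position += forces[i];

    I have no idea yet whether copying the dictionary into an array every frame is too expensive, which is partly why I'm asking.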
    I don't mean to come off rude, but it always seems hard as a programming noob to get a straight, meaningful answer. I usually either get an answer so complex I'd easily be able to solve my issue if I understood it, or someone saying why I shouldn't do what I want to do while offering no helpful alternatives. Thank you for the help!

    Read the article

  • Make the Time

    - by WonderOfItAll
    Took the little one to the pool tonight for swim lessons. Okay, okay. They're not really lessons so much as they are "Hey, here's a few bucks, let me rent out a small section of your pool to swim around with my little one."

    Saw a dad at the pool. Bluetooth on, iPad in hand, and two-year-old somewhere around there. Saw a mom at the pool. Arguing with her five-year-old to NOT take a shower after swimming. Bluetooth on, iPad in hand, work laptop open on the stadium seats. Her reasoning for not wanting the child to shower: "Look, I have to get this stuff to the office by 6:30, we don't have time for you to shower. Let's go." Wait, isn't the whole point of this little experience called Mommy and Me (or, as in my case, Daddy and Me)? Wherein Mommy/Daddy is supposed to spend time with the little one. Not with the Bluetooth. Not with the work laptop.

    Dad (yeah, the same dad from earlier), in the pool. Bluetooth off (it's not waterproof or I'm sure he would've had it on), two-year-old in hand and iPad somewhere put away. Getting frustrated with the kid because he won't 'perform' on command. Here's a little exchange:
    Kid: "I don't wanna get in the water"
    Dad: "Well, we're here for 30 minutes, get in the water"
    Kid: "No, don't wanna"
    Dad: "Fine, I'm getting in" and, true to his word, in he goes, off to swim.
    Kid: Crying
    Dad: "Well, c'mon"
    Kid: Walking to stands
    Dad: Ignoring kid
    Kid: At stands
    Dad: Out of pool, drying off. Frustrated. Grabs bag, grabs kid, leaves.

    How sad. It really seems like I am living in a generation of parents who view their children as one big scheduled distraction after another. It's almost like the dad was saying, "Look, little two-year-old boy, I have a busy schedule. Right now my Outlook Calendar tells me that I have 30 mins to spend with you, so let's go, kid: PERFORM, because I have the time."

    Really? Can someone please tell me when the hell this happened? When did spending time with your kid, spending time with your family, spending time with your spouse, etc... become a distraction? I've seen people at work all day tweeting throughout the day, checked in on Foursquare, IM up and running constantly so they can 'stay in touch', only to see these same folks come home and be irritated because their kids or their spouse wants to connect with them. I've seen these very same people leave the house, go to the corner bar/store/you-name-the-place to be 'alone', only to find them there, plugged in, tweeting away, etc, etc, etc.

    I LOVE technology. I love working with technology. But I also know that I am a human being. A person who, by definition, is a social being. I need social interaction and contact--and, no, I'm not talking about the Social Graph kind of connections, I'm talking about those interactions which, *GASP*, involve eye-to-eye contact and human presence.

    A recent study found that the number one complaint of kids is that they feel they have to compete with technology for their parents' time and attention. The number one wish from high school kids? That their parents would turn off the computer/TV/cell phone at dinner. This, coming from high school kids. Shouldn't that tell you a whole helluva lot?

    So, do yourself a favor tomorrow. Plug into technology all day. Throw yourself into it. Be passionate about what you do. When you walk through the door to your family, turn it all off for 30 mins and be there with your loved ones. If you can manage to play Angry Birds, I'm sure you can handle being disconnected for 30 minutes. Make the time.

    Read the article

  • Life at Oracle Russia: Stanislav, Tech Sales Manager

    - by Maria Sandu
    Oracle is a place that brings together talented people from various countries and with a diversity of backgrounds. We often invite our employees to speak about their life at Oracle, as we think it is important to share an insight into what working for our company looks like. This time we asked Stanislav to speak about his experience at Oracle. He is a Technology Sales Manager at Oracle Russia. He joined the company in July 2011 as a Sales Representative for the Financial sector, having previously worked for another American IT company, and was promoted to a management position in 2013.

    "I have been in this industry for 15 years and I am now Technology Sales Manager, covering Database, BI and Fusion Middleware products. What I've learned in my role is that respect is one of the most important values a good professional should have. By respecting and embracing everyone's opinions, we create a very good work environment that encourages innovation and change. It eventually leads to a stronger team where people listen to each other and value each other's opinion. On the other hand, it is mandatory to have good knowledge about the area you work in and to continuously seek to improve your expertise. Last but not least, working as a team is a top priority, and that is something I've learned at Oracle. There's little you can achieve by yourself compared to what you can do when you're part of a team."

    Stanislav shared the top three words that best describe his team: professional, dynamic and smart. "The team I manage is a very professional, dynamic and smart one. I am really proud to work with such talented people! They are an asset to the Oracle business because they are the very best in the IT industry worldwide!"

    When asked why he would apply at Oracle if he were looking for a job, Stanislav responded: "I would say because Oracle is a legend of the IT industry. It is a very dynamic company where you can fulfill your potential and gain extremely valuable knowledge. No doubt this is the number 1 IT company!"

    We invite you to explore our career opportunities on oracle.com/careers and to discover more stories about the life at Oracle on our blog. You can get the latest updates about careers within Oracle by following Oracle LinkedIn, CareersatOracle Facebook or joinOracleEMEA Twitter.

    Read the article

  • There are 2 jobs available - which one sounds better all round [closed]

    - by Steve Gates
    I am currently employed at a company where we scrape by each year breaking even, sometimes making a little profit. The development environment is very relaxed and we have a laugh. My colleagues are not interested in improving their knowledge unless they have to, so trying to get them to adopt things like TDD is a non-starter. My development manager is stuck in .NET 2 land and refuses to use things like LINQ. He over-complicates architecture and writes very unreadable code; here's an example:

    SortedList<int, SortedList<int, SortedList<int, MyClass>>>

    The MD of the company has no drive and lets the one sales guy bring in the contracts. We are not busy all the time, and this allows me time to look at new technology and learn. As for things like TDD, my development manager has no problem with it and can kind of see the purpose of it; he just won't use it himself. This means I am alone in learning new things and often resort to Stack Overflow to make sure I get things right. The company has a lot of flexibility: I can work from home if need be, and when my daughter was born they let me work from home one day a week. However, they expect this flexibility in return, often asking me, occasionally on a Friday afternoon, to travel the following week, sometimes abroad. We are also pretty much on call 24/5, as we have engineers in various countries. And we have no testers, so most of the testing is done by us developers, with some testing by engineers. Either way, no one likes testing!

    I have been offered a role at a company I worked at 5 years ago. They were quite Victorian in their working practices, but things appear to have relaxed now, although I suspect they are still reasonably formal. There is a new team of developers I don't know, and they are about to move to new offices. The team lead is a guy who was there when I was, and I get the impression he takes his role seriously and likes his formal procedures and documentation. I think some of the Victorian practices may have rubbed off on him. However, he did say that if things crop up then, as long as he can trust the person, they can work at home, although he prefers people in the office. The team uses Scrum, TDD and SOLID design principles, so they are quite up to date in technology. They are reasonably Microsoft-focused. It appears the Technical Director might be the R&D man who researches new technology on his own, not allowing the developers to play with it. He may well be a super-developer who makes all the decisions that no one can argue with. They are currently moving from NHibernate to Entity Framework, based on issues where their queries sometimes seem to fail, and a feeling that NHibernate is stagnant. They have analysts and a QA team. The MD is focused, and they are an expanding company making a profit each year. I'm not sure what the team morale is or whether they have a laugh; when I had a tour around the office they sat in dead silence.

    I'm really unsure which role is best for me, and going with my gut instinct is useless, as I'm not sure what my gut is telling me. Based on the information above, which role would you choose, and why?

    Read the article
