Search Results

Search found 5543 results on 222 pages for 'legacy terms'.


  • Ubuntu Installation Help for an IBM R31 Thinkpad

    - by David Taylor
    I recently acquired an old IBM R31 Thinkpad, and I figured I'd install Lubuntu on it. I've followed the quick steps for USB installation on the help wiki page, but I can't seem to get it to boot from my formatted flash drive. I've checked the boot priority on the BIOS page, but the option to boot from USB doesn't even seem to be there. The only bootable options are legacy and USB floppy drives. The CD drive is shot, so I can't install from there either. Do I have any other options for installation without having to pay for a floppy drive or a replacement CD drive? The wiki page mentions something about installation from within Windows. Would it be possible to remove Windows using this option, or would it just create a partition? Thanks

    Read the article

  • Is HR/Recruitment Really Ready For Innovative Candidates

    - by david.talamelli
    Before I begin this blog post, I want to acknowledge that there are some great HR/Recruitment people out there who are innovative and are leading the way in using new means to successfully attract and connect with talented people. For those of you who fit in this category, please keep thinking outside the square - just because what you do may not be the norm doesn't mean it is bad. Ok, with that acknowledgment out of the way - earlier this morning (I started this post Friday morning) I came across this online profile via a tweet from Philip Tusing. I love the information that Jason has put on his web-pages. Through his work Jason clearly demonstrates not only his skills/experience, and I love how he relates that experience to show how it will help an employer and what the value-add of having him on your team is. Looking at Jason's profile makes me think, though: is HR/Recruitment in general terms ready to deal with innovative candidates? Sure, most Recruiters are online in some form or another, but how many actually have a process that is flexible enough to deal with someone who may not fit into your processes? Is your company's recruitment practice proactive enough to find Jason's web-pages? I am not sure what he is doing in terms of a job search, but if he is not mailing a resume or replying to ads on a Job Board - hopefully Jason comes up in some of the candidate searching you are doing. Once you find this information, would the information Jason provides fit nicely into your Applicant Tracking System or your Database? If not, how much of the intangible information are you losing and potentially not passing on to a Hiring Manager? I think what has worked in the past will not necessarily work in the future. Candidates want to work somewhere they will be challenged and learn and grow. If your HR/Recruitment team's processes don't convey this message, it could potentially turn away people who were once interested in your company. For example (and I have to admit I still do some of these things myself), after calling up and having a talk with a candidate, a company may say: 1) HR Question: Send me in a copy of your resume - Candidate Reply: you actually already have my resume, the web-page is http:// 2) HR Question: Come in for a chat so we can get to know you - Candidate Reply: if this is the basis of a meeting, you already know me and my thoughts by looking at my online links (blog, portfolio, homepage, etc...) These questions, if not handled properly, could turn a candidate from being interested in your company to not being interested in your company. It could demonstrate that your company is not social media savvy, or give the impression of not really being all that innovative. A candidate may think: if this company isn't able to take information I have provided in a public forum and use it, is it really a company I want to work for? I think when liaising with candidates a company should utilise the information the person has provided in the public domain. A candidate may inadvertently give you answers to many of the questions you are seeking through their online presence, saving everyone time instead of having to fill out forms or paperwork. If you build this into your conversations with your candidates it becomes a much more individualised service you are providing, and it really demonstrates to a candidate that you are thinking of them as an individual.
Yes, I know we need to have processes in place, and I am not saying don't work to those processes, but don't let process take away a candidate's individuality. Don't let your process inadvertently scare away the top candidates that you may want in your company. This article was originally posted on David Talamelli's Blog - David's Journal on Tap

    Read the article

  • Gartner PCC Follow-up: Interview with Chaeny Emanavin, Usability Lead - Office of Information Develo

    - by [email protected]
    Last week at the Gartner Portals, Content and Collaboration conference in Baltimore, Chaeny and I co-presented on Oracle Enterprise 2.0 and BIA's Citizen Portal. Chaeny's presentation about the BIA solution was very well received and I wanted to do a follow-up interview with Chaeny to discuss more details about their solution and its Enterprise 2.0 features. Ajay: What were the main objectives for the BIA Citizen Portal? Chaeny: The BIA Citizen Portal is designed to provide all the services of the Bureau of Indian Affairs to the community of 564 federally recognized tribes that include over 1.9 million American Indians and Alaska Natives. The BIA provides the same breadth of services that the entire U.S. Federal Government provides in one small Bureau. So, we needed a solution that was flexible enough to handle content ranging from law enforcement to housing to education. Key objectives for external users were to use the Web as a communications channel and keep them informed on what services are available. We also wanted to build an internal web presence and community for BIA's 5000 employees to ensure that they update their content, leverage internal experts and create single sources of truth for key policy documents. Ajay: How is the project being implemented? Chaeny: We are using a phased approach. In Phases 1 & 2, interim internal and external sites were built to ensure usability and functional requirements were being met. In Phases 3 & 4, we built out a modern internal and external presence using Oracle WebCenter Suite and Oracle Universal Content Management (UCM), including enabling delegated content management for our internal business units. Phase 4 was completed in January 2010. Phase 5 will add deeper Enterprise 2.0 collaboration capabilities to the solution. Ajay: Are you integrating any existing sites into the new solution? Chaeny: Yes, we have a SharePoint implementation that we are using for document management. We needed more precise functionality, however. We found that SharePoint would let individual administrators of a SharePoint site actually create new sites. In a three-month span, we had over 200 new sites created and most were not being used. So, we had an enormous sprawl problem. Our requirements mandated increased governance and more granular control over the creation of sites and flexible user access to content. In SharePoint this required custom code and was very time-intensive, which was unfeasible given our tight deadlines. We are piloting Oracle WebCenter Spaces as our collaboration solution to mitigate these issues. However, we must integrate our existing SharePoint investment, which we can do easily by using the SharePoint connectors available in Oracle WebCenter and UCM. Ajay: What were the key design parameters for your solution? Chaeny: We wanted everything driven by standards and policies. We created a cross-functional steering group called the Indian Affairs Web Council to codify policies that were baked into the system. Other key design areas were focused on security/governance, self-service content management, ease of use, integration with legacy applications and seamless single sign-on. We are using Dublin Core as our metadata standard. We are also using Java, APEX, and ADF as our development standards. Ajay: Why was it important to standardize on a platform? Chaeny: We initially looked at best-of-breed solutions, but we faced a lot of issues getting the different solutions to work together.
Going with an integrated solution was more economical, easier to learn and faster to deliver the solution. Ajay: What type of legacy applications are you integrating into the portal? Chaeny: Initially we are starting with administrative apps such as people directory and user admin and then we will integrate HR and Financial applications among others. Ajay: Can you describe some of the E20 collaboration features you are putting into the solution? Chaeny: We are adding Enterprise 2.0 using Oracle WebCenter Spaces to deliver different collaboration tools such as wikis, blogs and discussion forums. Wikis to create rapid, ad hoc monthly roll-up reports; discussion forums to provide context-specific help; blogs to capture tacit organization knowledge from experts, identify gurus and turn tacit knowledge into explicit knowledge. Ajay: Are you doing anything specifically to spur adoption and usage? Chaeny: Yes, we did several things that I think helped us ramp quickly. First, we met our commitments for the new system launch date and also provided extra resources for a customer support "hotline" during the launch period. Prior to launch, we did exhaustive usability studies to capture user requirements around functionality, navigation and other key interaction areas. We also created extensive training programs so that the content managers in each business unit were comfortable using the content management tools and knew the best practices for usage. Finally, to launch the Enterprise 2.0 collaboration capabilities, we are working with a pilot group from the Division of Forestry and Wildland Fire Management of BIA. This group of people in the past have been willing early adopters and they have a strong business need to collaborate with many agencies both internal and external across State, County and other Federal jurisdictions. Their feedback is key to helping us launch Enterprise 2.0 successfully in our broader organization. Ajay: What were the biggest benefits to internal BIA employees and to the external community of users? Chaeny: For our employees, the new Enterprise 2.0-based solution will make it easier to find information; enhance employee productivity by embedding standard business processes into the system and create more of a community by creating connections with experts via social collaboration to ultimately provide better services more quickly. For the external American Indian and Alaska Native communities, we have a better relationship with the users and the new site has improved BIA's perception as a more responsive and customer-centric organization.

    Read the article

  • Problem with ATI Radeon HD 7670M on Ubuntu 12.10

    - by Aniket
    I updated from Ubuntu 12.04 to 12.10 a couple of days back. My machine is a Dell Inspiron 15R with a Radeon HD 7670M graphics card. When I was using 12.04, I was able to install the proprietary driver using the notification you usually get. But after the upgrade, the graphics driver is not getting installed. I am not getting the notification to install the driver, and I also tried to follow the steps given in "What is the correct way to install ATI Catalyst Video Drivers (fglrx)?" I tried the legacy as well as the usual installation procedure. Both ways I failed and my graphics went to the fallback, disabling Unity. Note: My laptop is heating up to 80 degrees and can shut down at any point. Please help.

    Read the article

  • Friday Fun: Ghost’s Revenge

    - by Asian Angel
    This week’s game provides a spooky story in addition to the “spot the difference” challenges you will face on each level. Can you help this ghost unravel the mystery of the man who killed her?

    Read the article

  • System Update No. 10 for FY12 – October 25, 2011

    - by swalker
    Sun server, storage, and system software. Price list changes: worldwide price changes for Oracle system hardware and software; worldwide price changes for hardware and software for Oracle legacy systems. What do you need to do? The system updates cover price changes for system components and consist of two files: an announcement of the changes (PDF) and the detailed information (XLS). Using the ID numbers provided, you can look up in the worksheet the part numbers of the components affected by a change. Further information: visit the system pricing page on the OPN portal regularly. There you will find, in addition to further information about these updates, the current system price list and related resources. * Note: Pricing information for Oracle systems is confidential Oracle information and is provided to you under the Oracle PartnerNetwork agreement.

    Read the article

  • Attempted to dual boot with Windows and now can only run Ubuntu

    - by Zeusoflightning125
    Very recently, I decided to attempt to dual boot Ubuntu with my already installed Windows 8. Everything worked perfectly: I manually set up disk partitions (this is all on one hard drive), and it loaded up Ubuntu fine. However, now when I try to load up my computer it only has two options in the boot menu and both just load up Ubuntu (both were something related to the hard disk). I also can only boot from legacy hard disk options (that was already the case, aside from the USB that I installed Windows from). The Windows files are still accessible from Ubuntu, but I cannot just load Windows. There is no option to. I also don't have the two entries for each operating system I was expecting. I can only select what to load from the BIOS. So, my question is, how do I load the Windows partition on my hard drive? I'm sorry if I'm a bit clueless; I am just new to both Linux and dual-booting.

    Read the article

  • Building a Distributed Commerce Infrastructure in the Cloud using Azure and Commerce Server

    - by Lewis Benge
    One of the biggest questions I routinely get asked is how scalable Commerce Server is. Of course the textbook answer is that the product has been around for 10 years and powers some of the largest e-Commerce websites in the world, so it scales horizontally extremely well. But what if you can't predict the growth in demand on your Commerce Platform, or need the ability to scale up during busy seasons such as Christmas in a retail environment, but are hesitant to maintain the infrastructure on a year-round basis? The obvious answer is to utilise the many elastic cloud infrastructure providers that are establishing themselves in the ever-growing market; the problem, however, is that Commerce Server is still a product with a legacy, tightly coupled dependency on Windows and IIS components. Commerce Server 2009 codename "R2", however, introduced the concept of an n-tier deployment of Microsoft Commerce Server, meaning you are no longer tied to the core objects API but instead have serializable Commerce Entity objects and business logic, allowing Commerce Server to be built into a WCF-based SOA architecture. Presentation layers no longer need to remain on the same physical machine as the application server, meaning you can now build the user experience in multiple technologies and host it in multiple places – leveraging the transport benefits that a WCF service may bring, such as message queuing, security, and multiple endpoints. All of this logic will still need to remain in your internal infrastructure, for two reasons. Firstly, cloud-based computing infrastructure does not support PCI security requirements, and secondly, even though many of the legacy Commerce Server dependencies have been abstracted away in this version of the application, it is still not fully supported for deployment exclusively into the cloud. If you do wish to benefit from the scalability of the cloud, however, you can still achieve a great Commerce Server and Azure setup by utilising the Azure AppFabric service bus and authentication services, together with Windows Azure to host any online presence you may require. The architecture would be something similar to this: This setup would allow you to construct your Commerce Services as part of your on-site infrastructure. These services would contain all of the channel's custom business logic and provide the overall interface back into the underlying Commerce Server components. It would be recommended that services are constructed around the specific business domains of the application, which, based on your business model, would usually consist of separate services around Catalogue, Orders, Search, Profiles, and Marketing. The AppFabric service bus is then used to further abstract and aggregate the services, making them available to the cloud and subsequently secured by AppFabric's authentication services. These services are now available for consumption by any client, using any supported technology – not just .NET – meaning you are now able to construct apps for iPhone, integrate with Java-based POS devices, and support many other potential uses. This aggregation is useful and forms the basis of the further strategy around diversifying and enhancing the e-Commerce experience, but it also provides the foundation for the scalability we want to gain from utilising a cloud-based application platform.
    The Windows Azure application platform is Microsoft's solution for benefiting from the true economies of scale offered by the elasticity of the cloud. Just before the launch of the Azure platform, Domino's Pizza actually managed to run their whole Super Bowl operation on the scalability of Windows Azure, simply switching back to their traditional operation the next day with no residual infrastructure costs. The platform can also natively subscribe to services and messages exposed within the AppFabric service bus, making it an ideal solution for building and deploying a presentation layer which will need the support of scalable infrastructure – such as a high-demand, public-facing e-Commerce portal, or a promotional element of a brand. Windows Azure has excellent support for ASP.NET, including its own caching providers, meaning expensive operations such as catalogue queries can persist in memory on the application server, reducing the demand on internal infrastructure and prioritising it for more business-critical operations such as receiving orders and processing payments. Windows Azure also supports other languages, meaning that with this approach you can technically build a Commerce Server presentation layer in Java, PHP, or Ruby – or equally in ASP.NET or Silverlight – without having to change any of the underlying business logic or Commerce Server implementation. This SOA-style architecture is one of the primary differentiators for Commerce Server as a product in the e-Commerce market, and now, with the introduction of a WCF capability in Commerce Server 2009/2009 R2, the opportunities for extensibility of both the user experience and integration with third parties are drastically increased, all with no effect on the underlying channel logic. So if you are looking at deployment options for your e-Commerce application to help support demand in a cost-effective way, I would highly recommend you consider Windows Azure, and if you have any questions in particular about this style of deployment, please feel free to get in touch!

    Read the article

  • Using NServiceBus behind a custom web service

    - by Michael Stephenson
    In this post I'd like to talk about an architecture scenario we had recently and how we were able to utilise NServiceBus to help us address the problem.
    Scenario: Cognos is a reporting system used by one of my clients. A while back we developed a web service façade to allow line-of-business applications to access reports from Cognos to support their various functions. The service was intended to provide access to quick-running or pre-generated reports which could be accessed in real time, on demand. One of the key aims of the web service was to provide a simple, generic interface that let applications get any report without needing to worry about the complex .NET SDK for Cognos. The web service also supported multi-hop Kerberos delegation so that report data could be accessed under the context of the end user. This service was working well for a period of time.
    The Problem: The problem we encountered was that reports were now also required to be available to batch processes. The original design was optimised for low latency so users would enjoy a positive experience, but when the batch processes started to request 250+ concurrent reports over an extended period of time you can begin to imagine the sorts of problems that come into play. The key problems this new scenario caused were:
    - Users could be affected, and the latency of on-demand reports became significantly worse
    - The Cognos infrastructure was not scaled sufficiently to cope with these long peaks of load
    - From a cost perspective it just isn't feasible to scale the Cognos infrastructure to handle a load that only occurs in a window of a couple of hours each night
    We really needed to introduce a second pattern for accessing this service which would support high-throughput scenarios. We also had little control over the batch process in terms of being able to throttle its load. We could, however, make some changes to the way it accessed the reports.
    The Approach: My idea was to introduce a throttling mechanism between the web service façade and Cognos. This would allow the batch processes to push report requests hard at the web service, which we were confident the web service could handle. The web service would then queue these requests, process them behind the scenes, and make a call back to the batch application to provide the report once it had been accessed. In terms of technology we had some limitations because we were not able to use WCF or IIS 7, where MSMQ-activated WCF services could have helped, but we did have MSMQ as an option, and I thought NServiceBus could do just the job to help us here.
    The flow of how this would work was as follows:
    1. The batch application sends a request for a report to the web service
    2. The web service uses NServiceBus to send the message to a queue
    3. The NServiceBus generic host, running as a Windows service, has a message handler which subscribes to these messages
    4. The message handler gets the message and accesses the report from Cognos
    5. The message handler calls back to the original batch application; this is decoupled because the calling application provides a callback URL
    6. The report gets into the batch application and is processed as normal
    This approach looks something like the diagram below. The key point is that an application wanting to take advantage of the batch-driven reports needs to do the following: implement our callback contract; make a call to the service providing a callback URL; and provide a correlation ID so it knows how to tie each response back to its request.
    What does NServiceBus offer in this solution? This scenario is not the typical messaging service-bus type of solution people implement with NServiceBus, but it did offer the following: simplified interaction with MSMQ; the ability to configure the number of processes working through the queue, so we could find a balance between the load on Cognos and the application's end-to-end processing time; retries and a way to manage failed messages; and a high-availability setup. The simple thing is that NServiceBus gave us the platform to build the solution on. We just implemented a message handler which functionally processed a message, and we could rely on NServiceBus to do all of the hard work around managing the queues and all of the lower-level things that would have taken ages to write to any kind of robust level.
    Conclusion: With this approach we were able to deal with a fairly significant performance issue without too much rework. Hopefully this write-up gives people some insight into ideas on how to leverage the excellent NServiceBus framework to help solve integration and high-throughput scenarios.

    Read the article

  • Drupal Modules for SEO & Content

    - by Aditi
    When we talk about Drupal SEO, there are two things to consider: the relevant SEO practices and the appropriate Drupal modules available. Optimizing your website for search engines is one of the most important aspects of launching & promoting your website, especially if ranking matters to you.
    Understanding SEO: For starters, you have to begin with keyword research and then optimize your content according to your findings by tagging, meta tags etc. Drupal modules, once installed, help you manage a lot of such parameters:
    - Identifying the target keywords
    - Using the Page Title and Token modules
    - PathAuto configuration
    - <H1> heading tags
    - Optimizing Drupal's default robots.txt file
    - Etc.
    While Drupal gives you a lot of ability to make your website content worthy & search engine friendly, it is important to make sure you are not crossing the line or you could get penalized.
    Modules Overview: Drupal power is at its best when you have these modules & a great brain working together. The basic SEO improvements can be achieved easily with the modules listed below, but you can win magical rankings if you use them logically & wisely. Understanding your keyword competition & enhancing your content is the basic key to success, and of course the modules:
    Pathauto - Automatically creates search engine friendly, readable URLs from tokens. A token is a piece of data from content, say the author's username, or the content's title. For example, mysite.com/an-article rather than mysite.com/node/114 for every node you make.
    NodeWords - Amazingly useful Drupal module that allows you to create custom meta tags and descriptions for your nodes, which gives you the ability to target specific keywords and phrases.
    Page Title - Enables you to set an alternative title for the <title></title> tags and for the <h1></h1> tags on a node.
    Global Redirect - Manage content duplication, 301 redirects, and URL validation with this small but powerful module.
    Taxonomy manager - Makes large additions or changes to taxonomy very easy. This module provides a powerful interface for managing taxonomies. A vocabulary gets displayed in a dynamic tree view, where parent terms can be expanded to list their nested child terms or can be collapsed.
    robotstxt - A robots.txt file is vital for ensuring that search engine spiders don't index the unwanted areas of your site. This Drupal module gives you the ability to manage your robots.txt file through the CMS admin.
    xmlsitemap - An XML sitemap lets the search engines index your website content. This module helps in generating and maintaining a complete sitemap for your website and gives you control over exactly which parts of the site you want to be included in the index. It even gives you the ability to automatically submit your sitemap to Google, Yahoo!, Ask.com and Windows Live every time you update a node or at specific intervals.
    Node Import - This module allows you to import a set of nodes from a Comma Separated Values (CSV) or Tab Separated Values (TSV) text file. It makes it easy to import hundreds or thousands of CSV rows, lets you tie those rows to CCK fields (or locations), and can file them under the right taxonomy hierarchy. This is a super life-saver module.

    Read the article

  • Bert Ertman and Paul Bakker on Spring to Java EE 6 Migration Podcast

    - by arungupta
    NLJUG leader and Java Champion Bert Ertman and Paul Bakker talk about migrating Spring applications to Java EE 6 in the latest issue of the Java Spotlight Podcast, episode #85. Bert and Paul talk about how to migrate your legacy Spring applications to modern and lightweight Java EE 6 in five steps. The complete podcast is always fun, but feel free to jump to 3:49 into the show if you're in a hurry. They authored a series of articles on the exact same topic, starting here. There is an extensive set of articles available to help you migrate from Spring to Java EE 6. Subscribe to the podcast for future content.
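
    To give a concrete flavour of the kind of change such a migration involves, here is a minimal, hypothetical sketch (class names such as OrderService and OrderAuditor are illustrative, not taken from the podcast or the article series): a Spring-style service re-expressed as a Java EE 6 stateless session bean, where container-managed transactions replace Spring's @Transactional and @PersistenceContext injection replaces a Spring-configured EntityManager.

        import javax.ejb.Stateless;
        import javax.inject.Inject;
        import javax.persistence.EntityManager;
        import javax.persistence.PersistenceContext;

        @Stateless // container-managed transactions replace Spring's @Transactional
        public class OrderService {

            @PersistenceContext // container-injected, replacing a Spring-configured EntityManager
            private EntityManager em;

            @Inject // CDI injection (JSR-330 @Inject) replaces Spring's @Autowired
            private OrderAuditor auditor;

            public long countOrders() {
                // JPQL against a hypothetical Order entity defined elsewhere in the application
                long count = em.createQuery("select count(o) from Order o", Long.class)
                               .getSingleResult();
                auditor.record("order count requested: " + count);
                return count;
            }
        }

        // A plain CDI-managed helper bean; no XML bean definition or @Component registration required.
        class OrderAuditor {
            void record(String message) {
                System.out.println(message);
            }
        }

    The point of the sketch is only that the container now supplies what a Spring XML or annotation configuration used to: transactions, persistence context, and dependency injection.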

    Read the article

  • Oracle Cloud Office and Oracle Open Office 3.3

    - by trond-arne.undheim
    Industry's First Complete, Open Standards-Based Office Productivity Suites for Desktop, Web and Mobile Users were launched today, 15 December 2010 (press release). Based on the Open Document Format (ODF) and open web standards, Oracle Open Office enables users to share files on any system as it is compatible with both legacy Microsoft Office documents and de facto formats, Portable Document Format (PDF), and modern web 2.0 publishing. Oracle Cloud Office is the foundation of the open standard office stack based on the open document format (ODF), and has powerful social sharing capability, ubiquitous document authoring and collaboration. Together, the two solutions enable cross-company, enterprise class collaboration with true interoperability, including the flexibility to support users across a wide variety of devices and platforms.

    Read the article

  • Accelerate your SOA with Data Integration - Live Webinar Tuesday!

    - by dain.hansen
    Need to put wind in your SOA sails? Organizations are turning more and more to Real-time data integration to complement their Service Oriented Architecture. The benefit? Lowering costs through consolidating legacy systems, reducing risk of bad data polluting their applications, and shortening the time to deliver new service offerings. Join us on Tuesday April 13th, 11AM PST for our live webinar on the value of combining SOA and Data Integration together. In this webcast you'll learn how to innovate across your applications swiftly and at a lower cost using Oracle Data Integration technologies: Oracle Data Integrator Enterprise Edition, Oracle GoldenGate, and Oracle Data Quality. You'll also hear: Best practices for building re-usable data services that are high performing and scalable across the enterprise How real-time data integration can maximize SOA returns while providing continuous availability for your mission critical applications Architectural approaches to speed service implementation and delivery times, with pre-integrations to CRM, ERP, BI, and other packaged applications Register now for this live webinar!

    Read the article

  • The OTN Garage Blog Week in Review

    - by Rick Ramsey
    In case you missed the last few blogs on the OTN Garage (because somebody neglected to cross-post them here), here they are:
    What Day Is It and Why Am I Wearing a Little Furry Skirt? - Oracle VM Templates, Oracle Linux, Wim Coekaerts, and jet lag.
    A Real Cutting Edge - Oracle Sun blade systems architecture, Blade Clusters, and best practices.
    Which Version of Solaris Were You Running When ... - Oracle Solaris Legacy Containers and the Voyager 1
    Content Cluster: Understanding the Local Boot Option in the Automatic Installer of Oracle Solaris 11 Express - Resources to help you understand this cool option
    Rick - System Admin and Developer Community of the Oracle Technology Network

    Read the article

  • Is there a low carbon future for the retail industry?

    - by user801960
    Recently Oracle published a report in conjunction with The Future Laboratory and a global panel of experts to highlight the issue of energy use in modern industry and the serious need to reduce carbon emissions radically by 2050.  Emissions must be cut by 80-95% below the levels in 1990 – but what can the retail industry do to keep up with this? There are three key aspects to the retail industry where carbon emissions can be cut:  manufacturing, transport and IT.  Manufacturing Naturally, manufacturing is going to be a big area where businesses across all industries will be forced to make considerable savings in carbon emissions as well as other forms of pollution.  Many retailers of all sizes will use third party factories and will have little control over specific environmental impacts from the factory, but retailers can reduce environmental impact at the factories by managing orders more efficiently – better planning for stock requirements means economies of scale both in terms of finance and the environment. The John Lewis Partnership has made detailed commitments to reducing manufacturing and packaging waste on both its own-brand products and products it sources from third party suppliers. It aims to divert 95 percent of its operational waste from landfill by 2013, which is a huge logistics challenge.  The John Lewis Partnership’s website provides a large amount of information on its responsibilities towards the environment. Transport Similarly to manufacturing, tightening up on logistical planning for stock distribution will make savings on carbon emissions from haulage.  More accurate supply and demand analysis will mean less stock re-allocation after initial distribution, and better warehouse management will mean more efficient stock distribution.  UK grocery retailer Morrisons has introduced double-decked trailers to its haulage fleet and adjusted distribution logistics accordingly to reduce the number of kilometers travelled by the fleet.  Morrisons measures route planning efficiency in terms of cases moved per kilometre and has, over the last two years, increased the number of cases per kilometre by 12.7%.  See Morrisons Corporate Responsibility report for more information. IT IT infrastructure is often initially overlooked by businesses when considering environmental efficiency.  Datacentres and web servers often need to run 24/7 to handle both consumer orders and internal logistics, and this both requires a lot of energy and puts out a lot of heat.  Many businesses are lowering environmental impact by reducing IT system fragmentation in their offices, while an increasing number of businesses are outsourcing their datacenters to cloud-based services.  Using centralised datacenters reduces the power usage at smaller offices, while using cloud based services means the datacenters can be based in a more environmentally friendly location.  For example, Facebook is opening a massive datacentre in Sweden – close to the Arctic Circle – to reduce the need for artificial cooling methods.  In addition, moving to a cloud-based solution makes IT services more easily scaleable, reducing redundant IT systems that would still use energy.  In store, the UK’s Carbon Trust reports that on average, lighting accounts for 25% of a retailer’s electricity costs, and for grocery retailers, up to 50% of their electricity bill comes from refrigeration units.  On a smaller scale, retailers can invest in greener technologies in store and in their offices.  
    The report concludes that the widely shared objectives of energy security, reduced emissions and continued economic growth are dependent on the development of a smart grid capable of delivering energy efficiency and demand response, as well as integrating renewable and variable sources of energy. The report is available to download from http://emeapressoffice.oracle.com/imagelibrary/detail.aspx?MediaDetailsID=1766 and I'd be interested to hear your thoughts on it.

    Read the article

  • Bitwise operators in DX9 ps_2_0 shader

    - by lapin
    I've got the following code in a shader:

        // v & y are both floats
        nPixel = v;
        nPixel << 8;
        nPixel |= y;

    and this gives me the following error in compilation:

        shader.fx(80,10): error X3535: Bitwise operations not supported on legacy targets.
        shader.fx(92,18): ID3DXEffectCompiler::CompileEffect: There was an error compiling expression
        ID3DXEffectCompiler: Compilation failed

    The error is on the following line:

        nPixel |= y;

    What am I doing wrong here?

    Read the article

  • To sample or not to sample...

    - by [email protected]
    Ideally, we would know the exact answer to every question. How many people support presidential candidate A vs. B? How many people suffer from H1N1 in a given state? Does this batch of manufactured widgets have any defective parts? Knowing exact answers is expensive in terms of time and money and, in most cases, is impractical if not impossible. Consider asking every person in a region for their candidate preference, testing every person with flu symptoms for H1N1 (assuming every person reported when they had flu symptoms), or destructively testing widgets to determine if they are "good" (leaving no product to sell). Knowing exact answers, fortunately, isn't necessary or even useful in many situations. Understanding the direction of a trend or statistically significant results may be sufficient to answer the underlying question: who is likely to win the election, have we likely reached a critical threshold for flu, or is this batch of widgets good enough to ship? Statistics help us to answer these questions with a certain degree of confidence. This focuses on how we collect data. In data mining, we focus on the use of data, that is data that has already been collected. In some cases, we may have all the data (all purchases made by all customers), in others the data may have been collected using sampling (voters, their demographics and candidate choice). Building data mining models on all of your data can be expensive in terms of time and hardware resources. Consider a company with 40 million customers. Do we need to mine all 40 million customers to get useful data mining models? The quality of models built on all data may be no better than models built on a relatively small sample. Determining how much is a reasonable amount of data involves experimentation. When starting the model building process on large datasets, it is often more efficient to begin with a small sample, perhaps 1000 - 10,000 cases (records) depending on the algorithm, source data, and hardware. This allows you to see quickly what issues might arise with choice of algorithm, algorithm settings, data quality, and need for further data preparation. Instead of waiting for a model on a large dataset to build only to find that the results don't meet expectations, once you are satisfied with the results on the initial sample, you can  take a larger sample to see if model quality improves, and to get a sense of how the algorithm scales to the particular dataset. If model accuracy or quality continues to improve, consider increasing the sample size. Sampling in data mining is also used to produce a held-aside or test dataset for assessing classification and regression model accuracy. Here, we reserve some of the build data (data that includes known target values) to be used for an honest estimate of model error using data the model has not seen before. This sampling transformation is often called a split because the build data is split into two randomly selected sets, often with 60% of the records being used for model building and 40% for testing. Sampling must be performed with care, as it can adversely affect model quality and usability. Even a truly random sample doesn't guarantee that all values are represented in a given attribute. This is particularly troublesome when the attribute with omitted values is the target. A predictive model that has not seen any examples for a particular target value can never predict that target value! 
For other attributes, values may consist of a single value (a constant attribute) or all unique values (an identifier attribute), each of which may be excluded during mining. Values from categorical predictor attributes that didn't appear in the training data are not used when testing or scoring datasets. In subsequent posts, we'll talk about three sampling techniques using Oracle Database: simple random sampling without replacement, stratified sampling, and simple random sampling with replacement.
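
    As a concrete illustration of the build/test split described above, here is a minimal, generic sketch in Java (not Oracle-specific; the record count and seed are arbitrary): shuffle the case identifiers with a fixed seed, then cut the list so roughly 60% of the records are used for model building and 40% are held aside for testing.

        import java.util.ArrayList;
        import java.util.Collections;
        import java.util.List;
        import java.util.Random;

        public class BuildTestSplit {
            public static void main(String[] args) {
                // Stand-in for the identifiers of the cases (records) in the build data.
                List<Integer> caseIds = new ArrayList<Integer>();
                for (int i = 0; i < 10000; i++) {
                    caseIds.add(i);
                }

                // Shuffle with a fixed seed so the split is repeatable.
                Collections.shuffle(caseIds, new Random(42L));

                // 60% of the records build the model, 40% are held aside to test it.
                int cut = (int) (caseIds.size() * 0.60);
                List<Integer> buildSet = caseIds.subList(0, cut);
                List<Integer> testSet = caseIds.subList(cut, caseIds.size());

                System.out.println("build cases: " + buildSet.size()
                        + ", test cases: " + testSet.size());
            }
        }

    The same caveat from the post applies to any such split: a purely random cut does not guarantee that rare target values appear in both partitions, which is where stratified sampling comes in.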

    Read the article

  • New Shell In Oracle Solaris 11

    - by rickramsey
    In Oracle Solaris 11, Korn Shell 93 (/usr/bin/ksh or /usr/bin/ksh93) replaces both the Bourne Shell (/usr/bin/sh or /sbin/sh) and Korn Shell 88 (/usr/bin/ksh). There are some incompatibilities between the shells. They are described in /usr/share/doc/ksh/COMPATIBILITY. If a script has compatibility problems you can use the legacy shell by changing the she-bang line:

        If this doesn't work    Use this
        #!/bin/ksh              #!/usr/sunos/bin/ksh
        #!/usr/bin/ksh          #!/usr/sunos/bin/ksh
        #!/bin/sh               #!/usr/sunos/bin/sh
        #!/usr/bin/sh           #!/usr/sunos/bin/sh
        #!/sbin/sh              #!/usr/sunos/bin/sh

    - Mike Gerdts, http://blogs.oracle.com/zoneszone/

    Read the article

  • Switch interface implementation using configuration

    - by Marcos
    We want to allow the same core service either to be fully implemented or, as the other option, to act as a proxy toward a client's legacy system (via a WSDL, for example). That way, we have both implementations (proxy and full) and we switch which one to use in the configuration of the app. So, in a nutshell, some desired features:
    - Two different implementations (proxy, full) instead of one implementation with a switch inside
    - Switch implementations using configuration: dependency injection? reflection?
    - Nice-to-have: the package delivered to the client doesn't have to change depending on the choice between proxy and full
    - Nice-to-have: the client can develop their own custom implementation of the core interface and configure the application to use that one
    With this background, the question is: what alternatives do we have to choose one implementation or the other just by changing configuration? Thanks
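
    One possible shape for this, sketched below in Java purely for illustration (the interface, class, file, and property names are hypothetical, not taken from the question): read the concrete implementation's class name from a properties file at startup and instantiate it reflectively, so the same packaged build can run as the full implementation, the legacy-proxy implementation, or a client-supplied class, depending only on configuration.

        import java.io.FileInputStream;
        import java.util.Properties;

        interface CoreService {
            String fetchRecord(String id);
        }

        class FullCoreService implements CoreService {
            public String fetchRecord(String id) { return "full:" + id; }   // real implementation
        }

        class LegacyProxyCoreService implements CoreService {
            public String fetchRecord(String id) { return "proxy:" + id; }  // would delegate to the legacy WSDL
        }

        public class CoreServiceFactory {
            // e.g. core.properties contains: core.service.impl=LegacyProxyCoreService
            public static CoreService load(String configPath) throws Exception {
                Properties props = new Properties();
                FileInputStream in = new FileInputStream(configPath);
                try {
                    props.load(in);
                } finally {
                    in.close();
                }
                String className = props.getProperty("core.service.impl", "FullCoreService");
                return (CoreService) Class.forName(className).newInstance();
            }

            public static void main(String[] args) throws Exception {
                CoreService service = load("core.properties");
                System.out.println(service.fetchRecord("42"));
            }
        }

    A dependency injection container (CDI, Spring, Guice) can achieve the same effect with an alternative/profile mechanism instead of hand-rolled reflection; the delivered package stays identical either way, and only the configuration file changes.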

    Read the article

  • Oracle Desktop Virtualization Press Release

    - by [email protected]
    Even though Oracle has introduced new products (HW and SW), new pricing and part numbers, and modified licensing, and an EVP and an SVP have discussed openly where Oracle is going with virtualization, you may still have heard from 'the other guys' that Oracle isn't going to be keeping the legacy Sun 'Desktop' portfolio. I think that has soundly been addressed by the press release this morning. Click here for the release. This is a great way to kick off Oracle's new (fiscal) year. As there are more announcements coming, I'll just say "Enjoy!" and "Stay tuned".

    Read the article

  • Trying to install postgresql:i386 on 12.04 amd64

    - by tim jackson
    Due to some legacy 32-bit libraries being used in PostgreSQL functions, I need to get a 32-bit install of PostgreSQL on a natively 64-bit system. But it seems like there is a problem with multiarch not seeing all.debs as satisfying dependencies.

        uname -a: 3.8.0-29-generic #42-precise-Ubuntu SMP x86_64
        dpkg --print-architecture: amd64
        dpkg --print-foreign-architectures: i386

    apt-get install postgresql-9.1 returns:

        postgresql : Depends: postgresql-9.1 but it is not going to be installed
        postgresql-9.1:i386 : Depends: postgresql-common:i386 but it is not installable
                              Depends: ssl-cert:i386 but it is not installable
                              Depends: locales:i386 but it is not installable
        etc.

    But I have installed ssl-cert_1.0.28ubuntu0.1_all.deb and locales_..._all.deb, and postgresql-common is an all.deb. Does anyone have experience installing 32-bit packages on 64-bit systems that depend on packages that are all.debs? Or has anyone installed 32-bit Postgres on 64-bit? Any help appreciated.

    Read the article

  • Writing the tests for FluentPath

    Writing the tests for FluentPath is a challenge. The library is a wrapper around a legacy API (System.IO) that wasn't designed to be easily testable. If it were more testable, the sensible testing methodology would be to tell System.IO to act against a mock file system, which would enable me to verify that my code is doing the expected file system operations without having to manipulate the actual, physical file system: what we are testing here is FluentPath, not System.IO. Unfortunately, that...

    Read the article

  • How to boot Ubuntu 12.04-64bit from a USB from Compaq CQ58

    - by user208092
    I am trying to boot Ubuntu 12.04 64-bit on my Compaq CQ58 laptop from a USB stick, but it is not working. I've correctly installed Ubuntu on my pen drive following the instructions on the Ubuntu website (http://www.ubuntu.com/download/desktop/create-a-usb-stick-on-windows). These are my BIOS settings: Post Hotkey Delay (sec) <0 CD-ROM Boot Internal Network Adapter Boot Network Boot Protocol Legacy Support Secure Boot Platform Key Enrolled Pending Action None Clear All Secure Boot UEFI Boot Order: USB Diskette on Key/USB Hard Disk OS Boot Manager Internal CD/DVD ROM Drive ! Network Adapter. With these settings, when I restart my computer what shows up is: Boot Device Not Found. This is what I get in the Boot Manager: Boot Option Menu OS Boot Manager Boot From EFI File (Arrow Up) and (Arrow Down) to change option, ENTER to select an option. Press F10 for BIOS Setup Options, ESC to exit. PLEASE HELP... P.S. My laptop has Windows 8.

    Read the article

  • C++ Pointers: Number of levels of Indirection

    - by A B
    In a C++ program that doesn't contain legacy C code, is there a guideline regarding the maximum number of levels of indirection that should be used in the source code? I know that in C (as opposed to C++), some programmers have used pointers to pointers for a multi-dimensional array, but for the case of arrays there are data structures in C++ that can be used to avoid the pointers to pointers. Are users who still create pointers to pointers (or deeper) doing so only for performance reasons, etc.? I have tried not to use any more than a pointer to a pointer, and only in the case that a pointer needed modification; does anyone have any other official or unofficial guidelines or rules regarding the number of levels of indirection?

    Read the article

  • JavaOne 2012 - The Power of Java 7 NIO.2

    - by Sharon Zakhour
    At JavaOne 2012, Mohamed Taman of e-finance gave a presentation highlighting the power of NIO.2, the file I/O APIs introduced in JDK 7. He shared information on how to get the most out of NIO.2, gave tips on migrating your I/O code to NIO.2, and presented case studies. The File I/O (featuring NIO.2) lesson in the Java Tutorials has extensive coverage of NIO.2 and includes the following topics: Managing Metadata; Walking the File Tree; Finding Files, including information on using PathMatcher and globs; Watching a Directory for Changes; and Legacy File I/O Code, which includes information on migrating your code. From the conference session page, you can watch the presentation or download the materials.
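
    As a small taste of the APIs the tutorial covers, here is a minimal sketch combining two of the topics above: walking a file tree with Files.walkFileTree and matching file names against a glob with a PathMatcher. The starting directory and the "*.java" glob are arbitrary choices for illustration, not taken from the session.

        import java.io.IOException;
        import java.nio.file.FileSystems;
        import java.nio.file.FileVisitResult;
        import java.nio.file.Files;
        import java.nio.file.Path;
        import java.nio.file.PathMatcher;
        import java.nio.file.Paths;
        import java.nio.file.SimpleFileVisitor;
        import java.nio.file.attribute.BasicFileAttributes;

        public class FindJavaFiles {
            public static void main(String[] args) throws IOException {
                Path start = Paths.get(args.length > 0 ? args[0] : ".");
                final PathMatcher matcher =
                        FileSystems.getDefault().getPathMatcher("glob:*.java");

                Files.walkFileTree(start, new SimpleFileVisitor<Path>() {
                    @Override
                    public FileVisitResult visitFile(Path file, BasicFileAttributes attrs) {
                        // Match against the file name only, not the full path.
                        if (matcher.matches(file.getFileName())) {
                            System.out.println(file + " (" + attrs.size() + " bytes)");
                        }
                        return FileVisitResult.CONTINUE;
                    }
                });
            }
        }

    Run it with a directory as the first argument, for example: java FindJavaFiles /path/to/src.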

    Read the article
