Search Results

Search found 8252 results on 331 pages for 'live mesh'.


  • Friday Tips #6, Part 2

    - by Chris Kawalek
    Here is a question about updating Oracle VM: Question: How can I perform Oracle VM 3 server updates from Oracle VM Manager? Answer by Gregory King, Principal Best Practices Consultant, Oracle VM Product Management: Server Update Manager is a built-in feature of the Oracle VM Manager. Basically, Server Update Manager automatically configures YUM updates on all the Oracle VM Servers, pointing each to our Unbreakable Linux Network (ULN) update channel for Oracle VM. The servers periodically check with our Oracle YUM repository and notify the Oracle VM Manager that an update is available for each server. Actual server updates must be triggered by the Oracle VM administrator – they are not executed automatically. At this point, you can use the Oracle VM Manager to put a server into maintenance mode, which live migrates all the running Oracle VM Guests to other Oracle VM Servers in the server pool. Once all the Oracle VM Guests have been migrated, the Oracle VM administrator can trigger the update on the server. The entire process is documented in the Installation and Upgrade Guide of the Oracle VM Documentation, so I won’t spend time detailing the steps. Configuring the Server Update Manager, however, is exceedingly simple: navigate to the Tools and Resources tab in the Oracle VM Manager, select the link for Server Update Manager and ensure the following values are entered in the text boxes as shown in the illustration below:
        YUM Base URL: http://public-yum.oracle.com/repo/OracleVM/OVM3/latest/x86_64
        YUM GPG Key: file:///etc/pki/rpm-gpg/RPM-GPG-KEY-oracle
    Every server in the pool will be automatically configured for YUM updates once you choose the Apply button. Many thanks to Greg and Rick for providing the answers to this week's questions. If you want to ask us something, hit up Twitter and use hashtag #AskOracleVirtualization. See you next week! -Chris
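    For reference, below is a minimal sketch of what the per-server repository definition amounts to once Server Update Manager pushes these settings out; the file name, section name and the yum invocation are assumptions for illustration, not the exact artifacts the Manager creates.

        # Hypothetical repo file reflecting the values entered in the Manager (run as root on each Oracle VM Server)
        cat > /etc/yum.repos.d/ovm3-latest.repo <<'EOF'
        [ovm3_latest]
        name=Oracle VM 3 latest (x86_64)
        baseurl=http://public-yum.oracle.com/repo/OracleVM/OVM3/latest/x86_64
        gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-oracle
        gpgcheck=1
        enabled=1
        EOF
        yum clean metadata    # refresh so the next periodic check sees the channel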

    Read the article

  • TechEd 2010 Day Four: Learning how to help others learn

    - by BuckWoody
    I do quite a few presentations, teach at the University of Washington, and also teach other classes. But I'm always learning from others how to help others learn. At events like TechEd I have access to some of the best speakers around, so I try to find out what they do that works. I attended a great session by Allen White, in which he demonstrated a set of PowerShell scripts. He said that Dan Jones of the Microsoft Manageability team told him that while demonstrating a script he needed to provide some visual way to represent the process. Allen used one of the oldest visualizations around - a flowchart. It was the first time I'd seen one used to illustrate a PowerShell script, and it was very effective. I'm totally stealing the idea. All of us are teachers - we help others on our team understand what we're up to. Make sure you make notes on what you find effective when others teach you, and then meld that into your own way of teaching.

    Read the article

  • MRP/SCP (Not ASCP) Common Issues

    - by Annemarie Provisero
    ADVISOR WEBCAST: MRP/SCP (Not ASCP) Common Issues PRODUCT FAMILY: Manufacturing - Value Chain Planning   March 9, 2010 at 8 am PT, 9 am MT, 11 am ET   This session is intended for System Administrators, Database Administrators (DBAs), Functional Users, and Technical Users. We will discuss issues that are fairly common and provide general solutions to them. We will not only review PowerPoint information but also walk through some of the application setups/checks. TOPICS WILL INCLUDE: Gig data memory limitation Setup Requirements for MRP Manager, Planning Manager, and Standard Manager Why components are not planned Sales Order Flow to MRP Calendars Patching Miscellaneous Forecast Consumption - only if we have time A short, live demonstration (only if applicable) and question and answer period will be included. Oracle Advisor Webcasts are dedicated to building your awareness around our products and services. This session does not replace offerings from Oracle Global Support Services. Click here to register for this session ------------------------------------------------------------------------------------------------------------- The above webcast is a service of the E-Business Suite Communities in My Oracle Support. For more information on other webcasts, please reference the Oracle Advisor Webcast Schedule. Click here to visit the E-Business Communities in My Oracle Support. Note that all links require access to My Oracle Support.

    Read the article

  • Too many heap subpools might break the upgrade

    - by Mike Dietrich
    Recently one of our new upcoming Oracle Database 11.2 reference customers upgraded their production database - a huge EBS system - from Oracle 9.2.0.8 to Oracle Database 11.2.0.2. They had tested very well, and we had optimized the upgrade process, the recompilation timings etc. But once the live upgrade was done, it failed in the JAVA component piece with this error:
        begin if initjvmaux.startstep('CREATE_JAVA_SYSTEM') then
        *
        ORA-29553: class in use: SYS.javax/mail/folder
        ORA-06512: at "SYS.INITJVMAUX", line 23
        ORA-06512: at line 5
    Support's diagnosis was pretty quick and referred to Bug 10165223 - ORA-29553: class in use: sys.javax/mail/folder during database upgrade. But how could this happen? Actually I don't know, as we had used the same init.ora setup on test and production. The only difference: the prod system has more CPUs and RAM. Anyway, the bug lists as workarounds either decreasing the SGA to less than 1GB or decreasing the number of heap subpools to 1. Finally this query helped to diagnose the number of heap subpools: select count(distinct kghluidx) num_subpools from x$kghlu where kghlushrpool = 1; The result was 2, so we ran the upgrade with this parameter set: _kghdsidx_count=1 - and this time it worked well. One sad thing: after the upgrade failed, Support recommended restoring the whole database, which took an additional 3-4 hours. As the ORACLE SERVER component had already been upgraded successfully at the stage where the error occurred, it would have been fine to go on with the manual upgrade and start the catupgrd.sql script. It would have detected that the ORACLE SERVER is upgraded already and just picked up the non-upgraded components. The good news: finally I had one extra slide to add to our workshop presentation.
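    As a rough sketch (assuming SYSDBA access on the database being upgraded), the pre-upgrade check could be scripted like this; set the underscore parameter only if the subpool count is greater than 1 and Oracle Support advises it.

        # Count the shared pool heap subpools before starting the upgrade
        sqlplus -s / as sysdba <<'EOF'
        select count(distinct kghluidx) num_subpools
          from x$kghlu
         where kghlushrpool = 1;
        EOF
        # If the result is > 1, the documented workaround is _kghdsidx_count=1 in the
        # init.ora/spfile used for the upgrade (an underscore parameter -- Support-advised only).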

    Read the article

  • How can I make multiple displays work on my Asus UX32VD?

    - by oKtosiTe
    Original title: Why do I have two trash icons in the Unity Launcher? Whether I run Ubuntu as a live-USB or install it, I always have two trash bins on the Unity Launcher. Both work, and both open the same location. This seems a bit redundant; what could be done about it? Update: Turning auto-hide on made it obvious that I have multiple Launchers showing. With auto-hide off, they simply overlap, making it look like there's a double trash icon, but with auto-hide enabled, I can display one Launcher (and therefore one trash icon) at a time. Still, two are running simultaneously. Second update: This problem appears to be caused by the way Ubuntu handles multiple displays on my Asus UX32VD Ultrabook. Somehow, the laptop display cannot be used while my external display is connected. It is shown in the Displays list, but remains black no matter how I configure it. The external display runs at 1920x1200, the laptop monitor should run at 1920x1080. It therefore becomes obvious that the Launcher that's supposed to run on the laptop display, is actually displayed on the external monitor. Using nomodeset as a kernel parameter as indicated here makes the laptop display inaccessible altogether, detecting the external monitor as the laptop display and making resolutions other than 1920x1200 inaccessible. That is not an option.
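    As a diagnostic sketch from a terminal (the output names eDP1 and HDMI1 are assumptions; substitute whatever xrandr reports on the UX32VD), you can check whether the laptop panel is detected at all and try to drive both screens explicitly:

        xrandr --query                       # list connected outputs and their supported modes
        xrandr --output eDP1 --mode 1920x1080 \
               --output HDMI1 --mode 1920x1200 --right-of eDP1   # force both on, side by side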

    Read the article

  • Deleting Unused Swap Partitions

    - by Nikita Kononov
    Good evening everyone, I have a little issue with swap partitions. Due to some issues after installing Ubuntu the first time, I reinstalled it and now I have 3 swaps. Here is the sudo fdisk -l result:
        Disk /dev/sda: 750.2 GB, 750156374016 bytes
        255 heads, 63 sectors/track, 91201 cylinders, total 1465149168 sectors
        Units = sectors of 1 * 512 = 512 bytes
        Sector size (logical/physical): 512 bytes / 4096 bytes
        I/O size (minimum/optimal): 4096 bytes / 4096 bytes
        Disk identifier: 0xaa9693fe

        Device Boot      Start         End     Blocks   Id  System
        /dev/sda1         2048    52430847   26214400   1c  Hidden W95 FAT32 (LBA)
        /dev/sda2  *  52430848   540677076  244123114+   7  HPFS/NTFS/exFAT
        /dev/sda3    540678142  1465147391  462234625    5  Extended
        Partition 3 does not start on physical sector boundary.
        /dev/sda5   1452750848  1465147391    6198272   82  Linux swap / Solaris
        /dev/sda6   1440352256  1452742655    6195200   82  Linux swap / Solaris
        /dev/sda7    540678144  1427951615  443636736   83  Linux
        /dev/sda8   1427953664  1440339967    6193152   82  Linux swap / Solaris
    So the swaps in /dev/sda5 and /dev/sda6 are no longer in use, as far as I understand, and thus I was planning to delete them; however, I faced a problem. What I did is download and burn a GParted live CD and boot it up, and I tried to delete those partitions, but I have no idea how to add the 12GB of unallocated space to the existing OS partition, in this case to /dev/sda7. Is there any way I can delete the 2 swaps and extend /dev/sda7 into the unallocated space? Thank you in advance!
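    A rough outline of the usual procedure, based on the layout above (sda5/sda6 are the spare swaps, sda8 the swap still referenced, sda7 the Ubuntu root); partition surgery is destructive, so back up first and do the resize from the GParted live CD rather than the running system:

        cat /proc/swaps                      # confirm which swap partition is actually in use
        sudo swapoff -a                      # make sure no swap is active before editing
        sudo sed -i.bak '/\/dev\/sda5/d; /\/dev\/sda6/d' /etc/fstab   # drop stale swap entries (if fstab uses UUIDs, remove those lines instead)
        # In GParted (from the live CD): delete sda5 and sda6, move sda8 to the end of
        # the extended partition, then grow sda7 into the freed space and apply.
        sudo blkid /dev/sda8                 # afterwards, check the swap UUID and update /etc/fstab if it changed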

    Read the article

  • How to make and restore incremental snapshots of hard disk

    - by brunopereira81
    I use VirtualBox a lot for distro / application testing purposes. One of the features I simply love about it is virtual machine snapshots: it saves the state of a virtual machine and can restore it to its former glory if something you did went wrong, without any problems and without consuming all your hard disk space. On my live systems I know how to create a 1:1 image of the file system, but all the solutions I've known will create a new image of the complete file system. Are there any programs / file systems that are capable of taking a snapshot of the current file system, saving it in another location, but instead of making a complete new image each time, creating incremental backups? To describe simply what I want: it should be like dd images of a file system, but instead of only full backups it would also create incrementals. I am not looking for Clonezilla, etc. It should run within the system itself with no (or almost no) intervention from the user, but contain all the data of the file systems. I am also not looking for a "duplicity backup of your whole system excluding some folders" script + dd to save your MBR. I can do that myself; I'm looking for extra finesse. I'm looking for something I can do before making massive changes to a system, so that if something went wrong, or I burned my hard disk after spilling coffee on it, I can just boot from a live CD and restore a working snapshot to the hard disk. It does not need to be daily; it doesn't even need a schedule. Just run it once in a while and let it do its job, preferably raw-based, not file-copy based.
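    One block-level option that comes close to the "dd but incremental" idea is an LVM snapshot taken just before the risky change; a minimal sketch, assuming the root filesystem already sits on LVM (the volume group vg0 and logical volume root are hypothetical names) with free extents left in the volume group:

        sudo lvcreate --size 5G --snapshot --name root_pre_change /dev/vg0/root   # copy-on-write snapshot; only changed blocks consume space
        # ...make the massive changes; if they go wrong, roll the volume back:
        sudo lvconvert --merge /dev/vg0/root_pre_change    # merge takes effect when the volume is next activated (e.g. after a reboot)

    Filesystems such as Btrfs and ZFS offer the same idea natively (snapshots plus incremental send/receive); rsync with --link-dest gives a file-copy approximation if block-level tools are not available.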

    Read the article

  • Never Bet Against the Impossible

    - by BuckWoody
    My uncle used to say “If a man tells you that his car squirts milk in his eye when you lift the hood, don’t bet against that. You’ll end up with milk in your eye.” My friend Allen White tells me this is taken from a play (and was said about playing cards), but I think the sentiment holds, even in database work. I mentioned the other day that you should allow the other person to talk and actively listen before you propose a solution. Well, I saw a consultant “bet against the impossible” the other day – and it bit her. She explained to the person telling her the problem that the situation simply couldn’t exist that way, and he proceeded to show her that it did. She got silent, typed a few things, muttered a little, and then said “well, must be something else.” She just couldn’t admit she was wrong. So don’t go there. If someone explains a problem to you with their database, listen with purpose, and then explore the troubleshooting steps you know to find the problem. But keep your absolutes to yourself. In fact, I have a friend who has recently sent me one of those. He connects to a system with SQL Server Management Studio (SSMS) version 2008 (if I recall correctly) and it shows a certain version number of the target system in the connection tab. Then he connects to it using SSMS 2008 R2 and gets a different number. Now, as far as I know, we didn’t change the connection string information, and that’s provided by the target system, so this is impossible. But I won’t tell him that. Not until I look a little more. :)

    Read the article

  • Cannot enable Additional drivers for Broadcom in 12.10

    - by Compt
    tl;dr version: Additional Drivers does not work for enabling a driver in Ubuntu 12.10, although it worked in 12.04. I upgraded to Ubuntu 12.10 from 12.04 via a live USB. I did a fresh install to avoid any conflicts. From 12.04 I know that my wireless card has proprietary drivers that could be installed via Additional Drivers. I did eventually find Additional Drivers in the Software Sources menu; however, when I attempt to switch to the Broadcom 802.11 Linux STA wireless driver (from bcmwl-kernel-source), it not only does not work, it in fact disables my wireless completely. The only way to revert this is to restart and go back to "do not use the device". Then, after another restart, my wireless functions once again. I was curious whether anyone else has had this issue, and whether there is a fix for it yet. I looked on Launchpad and it may be a potential bug, but I am unsure. Any help would be greatly appreciated (and sorry for the wall of text). Until then, I'll continue to use my wireless as it is (or revert to 12.04), but I do notice a slower connection without the proprietary driver enabled.
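    A command-line sketch that sometimes helps narrow this down (the package and module names below are the standard ones for the Broadcom STA driver; adjust to what lspci reports for your card):

        lspci -nnk | grep -A3 -i network         # shows the chipset and the "Kernel driver in use"
        sudo apt-get install --reinstall bcmwl-kernel-source   # rebuild the STA (wl) module for the running kernel
        sudo modprobe -r b43 ssb brcmsmac        # unload open-source drivers that can conflict, if they are loaded
        sudo modprobe wl                         # load the proprietary module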

    Read the article

  • Feature Updates to the Windows Azure Portal

    - by Clint Edmonson
    Lots of activity over at the Windows Azure portal this weekend, including some exciting new features and major improvements to existing features. Here are the highlights:
    - Support for Managing Co-administrators: Set up account co-administrators to allow others to share service management duties for each Azure subscription.
    - Import/Export support for SQL Databases: Export existing SQL Azure databases to blob storage using SQL Server 2012’s BACPAC format. Create a new SQL Azure database from an existing BACPAC stored in blob storage.
    - Storage Container Management and Access Control: Create blob storage containers directly within the portal. Edit their public/private access settings. Drill into storage containers and see the blobs contained within them.
    - Improved Cloud Service Status Notifications: Detailed health status information about cloud services and roles as they transition between states.
    - Virtual Machine Experience Enhancements: Option to automatically delete corresponding VHD files from blob storage when deleting VM disks.
    - Service Bus Management and Monitoring: Ability to create and manage service bus Namespaces, Queues, Topics, Relays and Subscriptions. Rich monitoring of Topics, Queues, and Subscriptions with detailed and customizable dashboard metrics. Entity status (Topic, Queue, or Subscription) can be changed interactively via dashboard. Direct links to the Access Control Services (ACS) namespaces when working with service bus access keys.
    - Media Services Monitoring Support: Monitor encoding jobs that are queued for processing as well as active, failed and queued tasks for encoding jobs.
    The above features are all now live in production and available to use immediately. If you don’t already have a Windows Azure account, you can sign-up for a free trial and start using them today. Stay tuned to my twitter feed for Windows Azure announcements, updates, and links: @clinted Reference ID: P7VVJCM38V8R

    Read the article

  • Demantra Implementation Tip Windows and Unix or Linux

    - by user702295
    Hello! Are you implementing using a third party or consulting resources? Recently we have seen some cases where customers no longer have a Windows installation. After the initial install and configuration, once the instance has gone live, the Windows install is either deleted or, most likely, no longer with the customer, as it was installed on the implementer's laptop to start with. As a result, when Support comes back requesting the customer to apply a patch and/or upgrade, they do not have a Windows installation. This has started happening since Oracle Demantra gave them the option to configure the engine on Unix. Workaround: It is advisable that the customer keep their Windows installation intact for further patching and/or upgrades. It is also possible that the implementer installed Demantra on his Windows box and you no longer have access to it. It is possible that, with the web server and engine on Unix and the silent installer having downloaded all the executables for Business Modeler to work on the user's client machine, you may no longer need the Windows install. I have not tested the above.

    Read the article

  • 13.10 doesn't boot on Vaio Pro 13

    - by vaioonbuntu
    I just installed Ubuntu 13.10 on my new Vaio Pro 13, disabled Secure Boot, but used UEFI and not legacy mode. I did an encrypted LVM installation and erased the complete SSD. It booted just fine from USB, but after installation it doesn't boot; the Vaio failed-boot screen appears. I then tried the advice here: 13.10 on vaio pro with UEFI. Sadly it fails for me with "/usr/sbin/grub-probe: error: failed to get canonical path of /cow." I then mounted the encrypted partition with Nautilus and tried this: Cannot update grub with paramters on live USB, with /dev/sda2, and then to install GRUB to /dev/sda. That didn't succeed and warned me that the "GPT partition label contains no BIOS Boot Partition; embedding won't be possible". What do I have to do to fix GRUB and be able to boot my finished install? Here's my Boot Repair log: http://paste.ubuntu.com/6386598/ I would really appreciate any help. I'm so happy to finally be able to ditch my big fat MacBook Pro and use Ubuntu on my new, light Vaio Pro, if only I could fix GRUB. best, x
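    In case it helps others hitting the same errors: the /cow message usually means grub-install was run against the live session instead of the installed system, and the "no BIOS Boot Partition" warning means the BIOS (grub-pc) installer was used on a GPT/UEFI disk. A sketch of the chroot-based EFI reinstall from the live USB follows; every device name here is a guess (check with lsblk), and the encrypted-LVM volume names are assumptions:

        sudo cryptsetup luksOpen /dev/sda3 cryptroot          # unlock the LUKS container
        sudo mount /dev/mapper/ubuntu--vg-root /mnt           # the LVM root inside it (name assumed)
        sudo mount /dev/sda2 /mnt/boot                        # separate /boot, if the installer created one
        sudo mount /dev/sda1 /mnt/boot/efi                    # the EFI System Partition
        for d in /dev /dev/pts /proc /sys /run; do sudo mount --bind "$d" "/mnt$d"; done
        sudo chroot /mnt grub-install --target=x86_64-efi /dev/sda
        sudo chroot /mnt update-grub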

    Read the article

  • Trying to recover deleted Ubuntu partition

    - by user110984
    I made a mistake in logging into my 200 GB Ubuntu partition. I could not access GRUB after that. Using a live CD I then ran Boot-Repair and apparently deleted the partition, I guess because I ran it from my 70 GB Windows partition. I can send the results of boot_info from before that and of Boot-Repair. Then I ran TestDisk, which apparently found only /dev/sda - 320 GB / 298 GiB - WDC WD3200BEVT-22A23T0. (Was there any more I could have done with TestDisk? I looked at the TestDisk_Step_By_Step example and found no way forward, given that no other partitions turned up.) I have run gpart and found this: /sda1 - 15 GB, /sda2 - system reserved, /sda3 - 70.15 GB, /sda4 - extended 212.84, unallocated - 209.10, /sda5 - unknown 3.74. I have been told I can recover the partition using gparted's Rescue start end command, but I don't know what to enter for start and end. [--EDIT: TestDisk's Deeper Search stated that "the following partitions can't be recovered" and listed a 220-GB Linux partition 6 times. Then it stated that "The current number of heads per cylinder is 255 but the correct value may be 128" and that I could try to change it in the Geometry menu (because apparently these are overlapping partitions). So should I do that?--]
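    For the "start and end" question specifically, here is a sketch of the equivalent parted rescue invocation (the bounds below are placeholders; use the start and end of the unallocated gap that GParted reports, and note that rescue only re-adds a partition entry if it finds an intact filesystem):

        sudo parted /dev/sda unit GB print free      # note where the unallocated region begins and ends
        sudo parted /dev/sda rescue 88GB 297GB       # placeholder bounds -- substitute the gap's own start/end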

    Read the article

  • Lubuntu 12.04 on Acer laptop boots to blank blue screen

    - by WGCman
    My previous question on this was closed, but I am posting it again as the solution which my son eventually found may assist other users of the forum, or someone may be able to tweak the solution to improve the performance. Having installed Kubuntu 12.04.01 from a live USB onto my desktop, I wanted to do the same on my laptop, an Acer Aspire 1362 Laptop, which has 256MB RAM (actually 512 "on the box", but a good deal can be borrowed by the graphics!). I found Kubuntu wouldn't run on so little memory, but downloaded Lubuntu-12.04-alternate-i386.iso, which I understood was light enough to go. The laptop has one internal 40GB Toshiba hard drive divided into 3 partitions: C, 19GB with Windows XP, Windows program files and some data; D, 19GB mostly data; and a small 2GB partition with some Acer software, which XP can't normally "see". I transferred most of the contents of D to a memory stick, leaving 16GB free for Lubuntu. I did not want to dump XP yet, though it is painfully slow. I installed Lubuntu from the USB stick, accepting the default answers to most of the questions. The D: partition was further partitioned into a 500MB boot partition, 10GB for Linux, 2GB swap and 6GB for data shareable between Linux and Windows. I had no error messages during installation, rebooted, was offered the choice of Ubuntu or XP, and selected the former. After a few minutes, I get a dark blue screen announcing Lubuntu with five dots underneath which lighten in turn. Eventually the lights stopped, and whatever I try the screen remains blank apart from "Lubuntu". I tried several solutions suggested on the forum for "identical" questions, but without success.

    Read the article

  • EMEA OTN Virtual Technology Summit - Hands-On Learning

    - by Thanos Terentes Printzios
    The Oracle Technology Network (OTN) is excited to invite you to our first Virtual Technology Summit. EMEA – Thursday July 10th / 9am to 1pm BST / 10am – 2pm CET / 12pm to 4pm MSK / GST - Register Now. Learn first hand from Oracle ACEs, Java Champions, and Oracle product experts, as they share their insight and expertise on using Oracle technologies to meet today’s IT challenges. This interactive, online event offers four technical tracks, each with a unique focus on specific tools, technologies, and tips in these focus areas:
    - Java – Big Trends and Technologies – Java lets you mine Big Data, build robust apps with HTML5, JavaScript and Java EE, and expand into the Internet of Things. Experts will present and you’ll be able to chat with them live online. Don’t miss out on this great opportunity to learn from some of the best minds in the Java community.
    - Systems – OS Tips and Tricks for Sysadmins – Learn first hand how to configure Oracle Linux to run Oracle Database 11g and 12c, how to use the latest networking capabilities in Oracle Solaris 11, and how to troubleshoot networking problems in Unix and Linux systems.
    - Database – Mastering Oracle Database Management & Development Techniques – Experts will present advanced features and management methods that will help you master your Oracle Database capabilities and drive greater performance, agility and manageability of your IT implementation. This track will build upon your skills with data management, migration, and performance.
    - Middleware – The Architecture of Analytics: Big Time Big Data and Business Intelligence – This track will present a solution architect’s perspective on how business intelligence products in Oracle’s Fusion Middleware family and beyond fit into an effective big data architecture, and present insight and expertise from Oracle ACEs specializing in business intelligence to help you meet your big data business intelligence challenges.
    This same content is being offered at 3 different dates listed below, at times convenient for all regions:
    - Americas - Wednesday July 9th: 9am to 1pm PST / 12pm to 4pm EST / 1 to 5pm BRT - Register
    - EMEA – Thursday July 10th: 9am to 1pm BST / 10am – 2pm CET / 12pm to 4pm MSK / GST - Register
    - APAC English - July 16th: IST 10:00am / SG 12:30pm / AEST 2:30pm - Register
    The full event agenda is available at https://wikis.oracle.com/display/OTNVirtualTechSummit/Home

    Read the article

  • How do I (quickly) let people know that software I am providing for free is not abandon-ware?

    - by blueberryfields
    As an independent, individual programmer: How do I let people very quickly know that I have not abandoned the software I've written and given away for free? That I am putting in the effort required to maintain and support my software to a professional level? When software written by one or two developers is available for free, or marked as open-source, the default assumption is usually that it's abandon-ware. This is usually a safe assumption - check out the answers to this question if you doubt it: Why do programmers write applications and then make them free? There are lots of programmers who provide free and/or open-source tools which are not abandon-ware, though. If we're talking about large companies, i.e. Google, there's no real problem telling the difference between supported, live tools and software, and those which are abandoned or discontinued. A lively git repository isn't a quick signal - users will have to be savvy enough to understand the repository and know where to look for it. Consistent marketing and community management take more time and effort than I can put in on my own. Also, if my software becomes popular/successful, I assume those will grow on their own, and be supported by power users in the community.

    Read the article

  • Failed Project: When to call it?

    - by Dan Ray
    A few months ago my company found itself with its hands around a white-hot emergency of a project, and my entire team of six pulled basically a five week "crunch week". In the 48 hours before go-live, I worked 41 of them, two back to back all-nighters. Deep in the middle of that, I posted what has been my most successful question to date. During all that time there was never any talk of "failure". It was always "get it done, regardless of the pain." Now that the thing is over and we as an organization have had some time to sit back and take stock of what we learned, one question has occurred to me. I can't say I've ever taken part in a project that I'd say had "failed". Plenty that were late or over budget, some disastrously so, but I've always ended up delivering SOMETHING. Yet I hear about "failed IT projects" all the time. I'm wondering about people's experience with that. What were the parameters that defined "failure"? What was the context? In our case, we are a software shop with external clients. Does a project that's internal to a large corporation have more space to "fail"? When do you make that call? What happens when you do? I'm not at all convinced that doing what we did is a smart business move. It wasn't my call (I'm just a code monkey) but I'm wondering if it might have been better to cut our losses, say we're not delivering, and move on. I don't just say that due to the sting of the long hours--the company royally lost its shirt on the project, plus the intangible costs to the company in terms of employee morale and loyalty were large. Factor that against the PR hit of failing to deliver a high profile project like this one was... and I don't know what the right answer is.

    Read the article

  • Ensure Payroll Success with PeopleSoft Year-End Training for U.S. and Canada

    - by Breanne Cooley
    Year-end payroll processing and reporting is a requirement for your business. If you're responsible for completing these processes in either Canada or the United States using the PeopleSoft Payroll application, and if you're new to PeopleSoft Payroll or to performing these processes, consider enrolling in Oracle University's expert training. Our PeopleSoft Payroll specialists will guide you through the necessary steps to ensure you can smoothly and successfully perform your job. Training is specific to the country for which you are performing the processing and reporting. Training lasts one day and is delivered in our Live Virtual Class Format, which helps you avoid travel during this busy season. Here's the training we recommend: PeopleSoft Year-End Payroll - U.S. This course teaches you how to complete U.S. year-end processing and reporting using PeopleSoft Payroll for North America, step-by-step. Update tax reporting setup tables and update employees' income and tax records. Load each employee's year-end data into a single year-end record for processing and reporting.  Identify reports needed to reconcile the year-end data. Correct tax balances and other data as necessary. Generate final print and online W-2 forms and prepare the electronic file for the Social Security Administration.  Enter corrected W-2 information and print a W-2c form. Report periodic retirement distributions and related tax withholding amounts on form 1099-R.   Please Note: this course is intended for organizations using PeopleSoft release 8.81 or higher. PeopleSoft Year-End Payroll – Canada This course covers the steps necessary to perform Canadian year-end processing using Oracle's PeopleSoft Payroll for North America. Explore adjustments, balances, year-end slip processing, common pitfalls and errors and balancing reports.  Produce accurate year-end reporting results such as T4, T4A, RL-1 and RL-2.  Please Note: this course is intended for organizations using PeopleSoft release 8.81 or higher. See you in class! -Oracle University Marketing Team 

    Read the article

  • Data recovery on Ubuntu 11.10?! (after crashing with Seagate 320GB)

    - by Sam
    Just installed 11.10 last week and decided to transfer iTunes music (from my Windows dual boot) to my Seagate 320GB. I left it in, restarted, clicked Ubuntu at the boot screen, and then it froze after a few lines of code! I think I got to 3.7086 or something before I pressed CTRL+ALT+DEL and the system restarted after another few lines of code. I am completely new to Ubuntu, so after Googling, I made a live CD with 10.04, which I've heard is the most stable release, and I'm typing this from there now. However, when I go to mount my partition, only the Windows Vista partition (308GB) is there! It has all my Windows files, but my Ubuntu 11.10 ones are nowhere to be found. I need to restore the pictures I transferred from my camera using Shotwell the other day... any help is appreciated! p.s. 11.10 never crashed on me during my trial week, so I'm guessing it's the Seagate hard drive's fault. However, now I'm running it on 10.04 and it works fine.

    Read the article

  • Is there any place to find real-world usage-style tutorials for programming languages?

    - by OleDid
    Let's face it. When you want to learn something completely new, be it mathematics or foreign languages, it's easiest to learn when you get real world scenarios in front of you, with theory applied. For example, trigonometry can be extremely interesting when applied to the creation of 2D platform games. Norwegian can be really interesting to learn if you live in Norway. When I try to look at a new programming language, I always find these steps the hardest:
    1. What tools do I need to compile and how do I do it
    2. Introduction step: Why is this programming language so cool? Where and how is it used? (The step I am looking for, real-world scenarios)
    The rest - deep diving into the language, pure theory and such - is often much easier if you have completed steps 1 and 2. Because now you know what it's all about, and can just read the specification when you need to. What I ask is, do you have any recommendations for places I can find such material for programming languages? Be it websites or companies selling books in this style, I'm interested. Also, I am interested in all languages. (If I had found a "real-world usage" explained for even INTERCAL, I would be interested). In some other thread here, I found a book called "Seven Languages in Seven Weeks". This is kind of what I am looking for, but I believe there must be "more like this".

    Read the article

  • Partner Webcast - Oracle Taleo Cloud Service - 12 Dec 2012

    - by Thanos
    Talent Intelligence is the insight companies need to unlock the power of their most critical asset – their people. CEOs are charged with driving growth, and the one ingredient to growth that’s common across all industries and regions - both in good economic times and in bad – is people. In every economic environment, Talent Intelligence is a company’s biggest lever for driving growth, innovation and customer success. Oracle Taleo Cloud Service provides a comprehensive suite of SaaS products that help companies manage their investment in people by improving their Talent Intelligence. The Oracle Taleo Cloud Service enables enterprises and midsize businesses to recruit top talent, align that talent to key goals, manage performance, develop and compensate top performers, and turn today's best performers into tomorrow's leaders. Join us to find out more about the industry's broadest cloud-based talent management platform. Agenda: Oracle HCM footprint, Taleo value proposition, Taleo quick tour, why invest in Taleo, resources, demonstrating Taleo, Q&A. REGISTER NOW. Delivery Format: This FREE online LIVE eSeminar will be delivered over the Web. Registrations received less than 24 hours prior to start time may not receive confirmation to attend. Duration: 1 hour. For any questions please contact us at [email protected]. Visit our ISV Migration Center blog or follow us @oracleimc to learn more about Oracle Technologies, upcoming partner webcasts and events. Existing content is available on YouTube, SlideShare and Oracle Mix.

    Read the article

  • Register to Attend the AutoVue 20.2 Webcast on April 3, 2012

    - by Pam Petropoulos
    Want to learn more about the latest AutoVue 20.2 release? Discover what this latest major release of AutoVue can do for you. Join Celine Beck, AutoVue Product Management and Strategy Manager, during this live webcast to discover how the new release can transform your business processes and extend the value of your visualization investment. Hear how customers and partners are improving their workflows and creating differentiated offerings thanks to AutoVue enterprise visualization. Date: Tuesday, April 3, 2012. Time: 11:00 a.m. EST. Click here to register for this event. For complete details about the new release, also check out the What’s New in AutoVue 20.2 Datasheet, available here.

    Read the article

  • Process Improvement and the Data Professional

    - by BuckWoody
    Don’t be afraid of that title – I’m not talking about Six Sigma or anything super-formal here. In many organizations, there are more folks in other IT roles than in the Data Professional area. In other words, there are more developers, system administrators and so on than there are the “DBA” role. That means we often have more to do than the time we need to do it. And, oddly enough, the first thing that is sacrificed is process improvement – the little things we need to do to make the day go faster in the first place. Then we get even more behind, the work piles up and…well, you know all about that. Earlier I challenged you to find 10-30 minutes a day to study. Some folks wrote back and asked “where do I start”? Well, why not be super-efficient and combine that time with learning how to make yourself more efficient? Try out a new scripting language, learn a new tool that automates things or find out ways others have automated their systems. In general, find out what you’re doing and how, and then see if that can be improved. It’s kind of like doing a performance tuning gig on yourself! If you’re pressed for time, look for bite-sized articles (like the ones I’ve done here for PowerShell and SQL Server) that you can follow in a “serial” fashion. In a short time you’ll have a new set of knowledge you can use to make your day faster.

    Read the article

  • Sharing My Thoughts on Space Flight

    - by Grant Fritchey
    This went out in the DBA newsletter from Red Gate, but I enjoyed writing it so much, I thought I'd share it to a wider audience: I grew up watching the US space program. I watched men walk on the moon for the first time in 1969, when I was only six years old. From that moment on, I dreamed of going into space. I studied aeronautics and tried to get into the Air Force Academy, all in preparation for my long career as an astronaut. Clearly, that didn't quite work out for me. But it sure could for you. At Red Gate, we're running a new contest: DBA in Space. The prize is a sub-orbital flight. When I first got word of this contest, my immediate response was, "And you need me to go right away and do a test flight? Excellent!" No, no test flight needed, plus I was pretty low on the list of volunteers. "That's OK, I'll just enter." Then I was told that, as a Red Gate employee, I couldn't win. My next response was, "I quit".eventually, I was talked down off the ledge, and agreed to help make this special for some other DBA. Many (most?) of us are science fiction fans, either the soft science of Star Trek and Star Wars, or the hard science of Niven and Pournelle, or Allen Steele. We watched the Shuttles go up and land. We've been dreaming of our own trips into orbit and our vacation-home on the Moon for a long, long time. All that might not arrive on schedule, but you've got a shot at breaking clear of the atmosphere. The first stage is a video quiz, starring Brad McGehee, and it's live at www.DBAinSpace.com now. Go for it. Good luck and God speed!

    Read the article

  • What are the hard and fast rules for Cache Control?

    - by Metalshark
    Confession: sites I maintain have different rules for Cache-Control, mostly based on the default configuration of the server followed up with recommendations from the Page Speed & YSlow Firefox plug-ins and the Network Resources view in Google's Speed Tracer. Cache-Control is set to private/public depending on what they say to do, ETag/Last-Modified headers are only tinkered with if YSlow suggests there is something wrong, and Vary: Accept-Encoding seems necessary when manually gzipping files for Amazon CloudFront. When reading through the material on the different options and what they do, there seems to be conflicting information, rules for broken proxies and cargo-cult configurations. Any of the official information provided by the analysis tools mentioned above is quite inaccessible, as it deals with each topic individually instead of as a unified strategy (so there is no cross-referencing of techniques). For example, it seems to make no sense that the speed analysis tools rate a site with ETags the same as a site without them if they are meant to help with caching. What are the hard and fast rules for a platform-agnostic Cache-Control strategy? EDIT: A link to Jeff Atwood's article explains caching in superb depth. For the record, though, here are the hard and fast rules:
    - If the file is compressed using gzip, etc., use "Cache-Control: private", as a proxy may return the compressed version to a client that does not support it (the browser cache will hold files marked this way, though). Also remember to include "Vary: Accept-Encoding" to say that it is compressible.
    - Use Last-Modified in conjunction with ETag - belt-and-braces usage provides both validators; while ETag is based on file contents instead of modification time alone, using both covers all bases. NOTE: AOL's PageTest has a carte blanche approach against ETags for some reason.
    - If you are using Apache on more than one server to host the same content, then remove the implicitly declared inode from ETags by excluding it from the FileETag directive (i.e. "FileETag MTime Size") unless you are genuinely using the same live filesystem.
    - Use "Cache-Control: public" wherever you can - this means that proxy servers (and the browser cache) will return your content even if the rest of the page needs HTTP authentication, etc.
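    To see how close a given server is to these rules before changing anything, a quick shell check of the response headers (the URL is a placeholder) covers the validators and the compression-related headers in one pass:

        curl -sI -H 'Accept-Encoding: gzip' https://example.com/static/app.js \
          | grep -iE '^(cache-control|etag|last-modified|expires|vary|content-encoding):'
        # A follow-up conditional request should return 304 when the validators are honoured:
        curl -s -o /dev/null -w '%{http_code}\n' \
          -H 'If-None-Match: "etag-value-from-above"' https://example.com/static/app.js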

    Read the article
