Search Results

Search found 20358 results on 815 pages for 'disk management'.

Page 464/815 | < Previous Page | 460 461 462 463 464 465 466 467 468 469 470 471  | Next Page >

  • 409 CONFLICT : MAAS

    - by amir beygi
    I have a problem with my MAAS. juju bootstrap result: 2012-08-31 03:59:17,721 INFO Bootstrapping environment 'maas' (origin: distro type: maas)... Unexpected Error interacting with provider: 409 CONFLICT 2012-08-31 03:59:17,951 ERROR Unexpected Error interacting with provider: 409 CONFLICT. I also have 3 nodes in Commissioning status (the Delete Node action is disabled and there is no Start button). DHCP seems to be working, because LAN boot works, but the boot ends with: ALERT! /dev/disk/by-label/cloudimg-rootfs does not exist. Dropping to a shell! BusyBox.... (initramfs)

    Read the article

  • Bar Table Modded Into Standing Desk

    - by Jason Fitzpatrick
    This polished-looking standing desk combines a stand-alone bar-height counter with extra storage, cable management, and a monitor riser. The end result looks like a $$$$ standing desk at a fraction of the price. Courtesy of IKEA hacker Marc Marton, the build combines the Billsta Bar Table, the Ekby Alex Shelf, and Besta legs to raise the shelf off the desk and create a keyboard storage area. For more information about the build, hit up the link below. Billsta Bar Table into Standing Work Station [IKEAHacker]

    Read the article

  • Upcoming Conferences to Showcase Oracle's Latest Procurement Applications

    - by Paul Homchick
    The 2010 conference season is kicking off with a series of events featuring executive updates and demos of Oracle's newest procurement products. Attendees will also have the chance to meet with Oracle customers and technical representatives to discuss best practices for optimizing procurement processes. New Procurement Technologies: Oracle will use the events to showcase a number of procurement applications introduced since last year's Oracle OpenWorld: Oracle Supplier Lifecycle Management--a supplier-development application released this year to simplify the qualification, assessment, and performance monitoring of vendors (see related story). Oracle Supplier Hub--another 2010 introduction, the Oracle Supplier Hub unifies and shares critical information about all the suppliers in an organization's stable (see related story). Oracle Spend Classification--an intelligence-based application that improves spend and performance visibility. Oracle Procurement On Demand--the adaptive solution that enables and accelerates procurement transformation. Oracle Procurement and Spend Analytics 7.9.6.1--the latest release of Oracle Business Intelligence, extending new content and integration capabilities to additional platforms and languages. Click here to find an event near you: List of conferences by location.

    Read the article

  • Learn About Oracle’s Strategy for a Simple, Modern User Experience at OpenWorld 2012

    - by Applications User Experience
    By Kathy Miedema, Oracle Applications User Experience
    If you’re interested in what the best possible user experience looks like, you’ll want to hear what Oracle’s Applications User Experience team is planning for OpenWorld 2012, Sept. 30-Oct. 4 in San Francisco. This year, we will talk Fusion, Fusion, Fusion. We were among the first to show Oracle Fusion Applications in the last couple of years, and we’ll be showing it again this year so you can see what Oracle is planning for the next generation of enterprise applications. Attend our sessions to learn more about the user experience strategy in which Oracle is investing. Simplicity is the driving force behind the demos that we are unveiling now, which you can see at OpenWorld. “We want to create opportunities for productivity and efficiency, and deliver enterprise data across devices to help you do your work in the way best suited to your job and needs,” said Jeremy Ashley, Vice President, Oracle Applications User Experience. You can see the new look for Fusion Applications at a general session led by Ashley at 3:30 p.m. on Wednesday, Oct. 3. You’ll also have the chance to learn more about tailoring in Oracle Fusion Applications, and gain a new understanding of the investment in the user experience behind Fusion Applications at our sessions (see session information below). Photo: inside the Oracle Applications User Experience team’s on-site lab at Oracle OpenWorld 2011. Head to the demogrounds to see new demos from the Applications User Experience team, including the new look for Fusion Applications and what we’re building for mobile platforms. Take a spin on our eye tracker, a very cool tool that we use to research the usability of a particular design. Visit the Usable Apps OpenWorld page to find out where our demopods will be located. We are also recruiting participants for our on-site lab, in which we gather feedback on new user experience designs, and taking reservations for a charter bus that will bring you to Oracle headquarters for a lab tour on Thursday, Oct. 4, or Friday, Oct. 5. Tours leave at 10 a.m. and 1:45 p.m. from the Moscone Center in San Francisco. You’ll see more of our newest designs at the lab tour, and some of our research tools in action. Can’t participate in a customer feedback session or take a lab tour this time around? Visit Usable Apps to participate or book a tour another time. For more information on any OpenWorld sessions, check the content catalog – also available at www.oracle.com/openworld. For information on Applications User Experience (Apps UX) sessions and activities, go to the Usable Apps OpenWorld page.
    APPS UX OPENWORLD SESSIONS
    Oracle’s Roadmap to a Simple, Modern User Experience. Presenter: Jeremy Ashley, Vice President Applications User Experience, Oracle; with Debra Lilley, Fujitsu Consulting; Basheer Khan, Innowave; and Edward Roske, InterRel. Session ID: CON9467. Date: Wednesday, Oct. 3. Time: 3:30 - 4:30 p.m. Location: Moscone West - 3002/3004
    Oracle Fusion Applications: Transforming Insight into Action. Presenters: Killian Evers and Kristin Desmond, Oracle. Session ID: CON8718. Date: Thursday, Oct. 4. Time: 11:15 a.m. - 12:15 p.m. Location: Moscone West - 2008
    “FRIENDS OF UX” OPENWORLD SESSIONS
    Sessions by Oracle Usability Advisory Board (OUAB) members:
    Advances in Oracle Enterprise Governance, Risk, and Compliance Manager. Presenters: Koen Delaure, KPMG Advisory NV and Oracle Usability Advisory Board member; Russell Stohr, Oracle. Session ID: CON9389. Date: Tuesday, Oct. 2. Time: 1:15 - 2:15 p.m. Location: Palace Hotel - Concert
    Optimize Oracle E-Business Suite Procure-to-Pay: Cut Inefficiencies/Fraud with Oracle GRC Apps. Presenters: Koen Delaure, KPMG Advisory NV, and Solveig Wagner, Seadrill Management AS, both Oracle Usability Advisory Board members; and Swarnali Bag, Oracle. Session ID: CON9401. Date: Monday, Oct. 1. Time: 12:15 - 1:15 p.m. Location: InterContinental - Sutter
    Showcase of JD Edwards EnterpriseOne Mobility. Presenters: Jon Wells, Westmoreland Coal Co., Oracle Usability Advisory Board member; Rob Mills and Liz Davson, Town of Oakville; Keith Sholes and Louise Farner, Oracle. Session ID: CON9123. Date: Tuesday, Oct. 2. Time: 1:15 - 2:15 p.m. Location: InterContinental - Grand Ballroom B
    Sessions by the Fusion User Experience Advocates (FXA):
    Usability and Features of Oracle Fusion Applications, Built upon Oracle Fusion Middleware. Presenters: Debra Lilley, Fujitsu Consulting and Oracle Usability Advisory Board member; John King, King Training Resources. Session ID: UGF10371. Date: Sunday, Sept. 30. Time: 11 a.m. - 11:45 a.m. Location: Moscone West – 2010
    Ten Things to Love About Oracle Fusion Project Portfolio Management. Presenter: Floyd Teter, EiS Technologies. Session ID: CON6021. Date: Tuesday, Oct. 2. Time: 10:15 - 11:15 a.m. Location: Moscone West – 2003

    Read the article

  • "Virtual Machine Manager" and "Virtual Machine Server" setup manual

    - by urtihu
    Is there a manual available that covers the proper setup of a "Virtual Machine Server" with no GUI, managed from an Ubuntu workstation with a GUI and "Virtual Machine Manager" installed? Both are version 12.04. I get the following error message: unable to connect to libvirt. Verify that: - the libvirt-bin package is installed - the libvirt daemon has been started - you are a member of the libvirtd group. The package is installed, but for some reason starting the daemon seems to crash; libvirtd start reports: info: libvirt version 0.9.8 error: virExecWithHook:328 : cannot find 'pm-is-supported' in path: No such file or directory, and also qemucapsInit:856: Failed to get host power management capabilities. So I guess I did not set the server up correctly. All manuals I found do not mention "Virtual Machine Manager"; during the server installation I only chose the packages for remote SSH access and the "Virtual Machine Server" task. I would like to find a manual that covers this combination, rather than ones that only cover GUI machines running both on the same box, which would not really help with system performance for a hypervisor.
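    As a quick sanity check before pointing Virtual Machine Manager at the headless server, a short libvirt-python sketch like the one below can exercise the same remote URI the GUI would use (this assumes the Python bindings for libvirt are installed on the workstation and SSH access to the server works; the user name "admin" and host name "vmserver" are placeholders):

        import libvirt

        # Same connection URI Virtual Machine Manager uses for a remote QEMU/KVM host over SSH
        uri = "qemu+ssh://admin@vmserver/system"
        try:
            conn = libvirt.open(uri)
            print("Connected, hypervisor type:", conn.getType())
            print("Defined (inactive) guests:", conn.listDefinedDomains())
            conn.close()
        except libvirt.libvirtError as err:
            print("Connection failed:", err)

    If this fails with the same errors quoted above, the problem is on the server side (for example the 'pm-is-supported' helper missing from the PATH, as in the log) rather than in Virtual Machine Manager.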

    Read the article

  • career development: build release engineer or .net developer [closed]

    - by runner
    I have been working as a .NET developer for many years. Recently I got two offers: 1. Continue working as a .NET developer on a SaaS product; the job duty is to add new features and fix issues, similar to what I have been doing these years. 2. Become a software configuration management and build engineer, in charge of product builds, automation and releases; it requires some script coding, but not much. For career development, which one should I choose? Thanks.

    Read the article

  • FAT Volume and CE

    - by Kate Moss' Open Space
    Whenever we format a disk volume, it is a good idea to name the label so the volume is easier to categorize. To label a volume we can use the LABEL command or the UI, depending on your preference. Windows CE does provide a FAT driver that supports various formats (FAT12, FAT16, FAT32, ExFAT and TFAT - transaction-safe FAT) and many features that let you scan and even defrag a volume, but not label it. Any time you format a volume in CE and then mount it on a PC, the label is always empty! Of course, you can always label the volume on the PC, even if it was formatted in CE. So it looks like CE does not care about the volume label at all; it neither reports the label to the OS nor changes the label on the FAT volume. So how can we set the volume label in CE? To answer this question, we need to know how FAT stores the volume label. These online resources are handy for parsing FAT:
    http://en.wikipedia.org/wiki/File_Allocation_Table
    http://www.pjrc.com/tech/8051/ide/fat32.html
    http://www.microsoft.com/whdc/system/platform/firmware/fatgen.mspx
    You can refer to PUBLIC\COMMON\OAK\DRIVERS\FSD\FATUTIL\MAIN\bootsec.h and dosbpb.h, or the above links, for the fields we discuss here. The first sector of a FAT volume (which is not necessarily the first sector of the disk) is the FAT boot sector with its BPB (BIOS Parameter Block). At offset 43, bsVolumeLabel stores the volume label on FAT12/16 (on FAT32 the corresponding field, bgbsVolumeLabel, sits at offset 71 because of the larger BPB). Note, however, that the spec also says "FAT file system drivers should make sure that they update this field when the volume label file in the root directory has its name changed or created." So we can't simply update bgbsVolumeLabel; we also need to create a volume label file in the root directory. The volume label file is not a real file, just a file entry in the root directory with zero file length and a very special file attribute, ATTR_VOLUME_ID (defined in public\common\oak\drivers\fsd\fatutil\MAIN\fatutilp.h).
    Locating and accessing the boot sector is quite straightforward: as long as we know the starting sector of the FAT volume, that's it. But where is the root directory? The layout of a typical FAT volume is a boot sector, followed by the reserved sectors (1 on FAT12/16, 32 on FAT32), then one or two copies of the FAT table, after that the root directory (on FAT12/16), and then the file and directory data. On FAT12/16 the root directory is placed right after the FAT, so it is not hard to calculate its offset within the volume. On FAT32 this rule no longer holds: the first cluster of the root directory is determined by BGBPB_RootDirStrtClus (offset 44 in the boot sector). This field is usually 0x00000002 (that is how CE initializes the root directory after formatting a volume; we should never assume it is always true), but unlike on FAT12/16 the FAT32 root directory is not contiguous - just like a regular file, it can be fragmented. So to access the FAT32 root directory we have to hop from one cluster to the next by traversing the FAT table.
    Let's trace the code now. Although the source of the FAT driver is not available in the CE Shared Source program, the formatter, Fatutil.dll, is available in public\common\oak\drivers\fsd\fatutil\MAIN\formatdisk.cpp. Be aware that the public code only provides the formatter for FAT12/16/32; for ExFAT it is still not available. FormatVolumeInternal is the main worker function. With the knowledge above, you should be able to trace the code easily.
    But I would like to discuss the following code pieces:
        dwReservedSectors = (fo.dwFatVersion == 32) ? 32 : 1;
        dwRootEntries = (fo.dwFatVersion == 32) ? 0 : fo.dwRootEntries;
    Note that dwReservedSectors is 32 on FAT32 and 1 on FAT12/16. The root entry count is the other difference mentioned in the previous paragraph: 0 for FAT32 (dynamically allocated) and a fixed size on FAT12/16 (usually 512, defined by DEFAULT_ROOT_ENTRIES in public\common\sdk\inc\fatutil.h). And then here:
        memset(pBootSec->bsVolumeLabel, 0x20, sizeof(pBootSec->bsVolumeLabel));
    the formatter fills the volume label field with spaces, i.e. a blank label. Now let's carry on to the next section - writing the root directory:
        if (fo.dwFatVersion == 32) {
            if (!(fo.dwFlags & FATUTIL_FORMAT_TFAT)) {
                dwRootSectors = dwSectorsPerCluster;
            }
            else {
                DIRENTRY    dirEntry;
                DWORD       offset;
                int         iVolumeNo;
                memset(pbBlock, 0, pdi->di_bytes_per_sect);
                memset(&dirEntry, 0, sizeof(DIRENTRY));
                dirEntry.de_attr = ATTR_VOLUME_ID;
                // the first one is the volume label
                memcpy(dirEntry.de_name, "TFAT       ", sizeof (dirEntry.de_name));
                memcpy(pbBlock, &dirEntry, sizeof(dirEntry));
                ...
                // Skip the next step of zeroing out clusters
                dwCurrentSec += dwSectorsPerCluster;
                dwRootSectors = 0;
            }
        }
        // Each new root directory sector needs to be zeroed.
        memset(pbBlock, 0, cbSizeBlk);
        iRootSec = 0;
        while ( iRootSec < dwRootSectors) {
    Basically, the code zeroes out each entry in the root directory, driven by dwRootSectors. On FAT12/16, dwRootSectors is calculated as the number of sectors needed for the root entries (512 entries in most cases); on FAT32 it just zeroes out one cluster. Note that for a TFAT volume the formatter initializes the root directory with special volume label entries for its own purposes. Despite its unusual initialization process, the TFAT path does provide an example of how to create a volume label entry. With some minor modification, we can assign the volume label in the FAT formatter - and also remember to keep it in sync with bsVolumeLabel or bgbsVolumeLabel in the boot sector.
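    To make the layout above concrete, here is a minimal Python sketch (not CE code; the image file name "fat.img" and the FAT12/16-only arithmetic are assumptions for illustration) that reads both places the label can live: the bsVolumeLabel field in the boot sector and the ATTR_VOLUME_ID entry in the root directory.
        import struct

        ATTR_VOLUME_ID = 0x08  # directory-entry attribute marking a volume label

        with open("fat.img", "rb") as f:
            boot = f.read(512)
            bytes_per_sector, = struct.unpack_from("<H", boot, 11)
            reserved_sectors, = struct.unpack_from("<H", boot, 14)
            num_fats = boot[16]
            root_entries, = struct.unpack_from("<H", boot, 17)
            sectors_per_fat, = struct.unpack_from("<H", boot, 22)

            # bsVolumeLabel: 11 bytes at offset 43 on FAT12/16 (offset 71 on FAT32)
            print("Boot-sector label:", boot[43:54].decode("ascii", "replace"))

            # On FAT12/16 the root directory follows the reserved area and the FAT copies
            root_start = (reserved_sectors + num_fats * sectors_per_fat) * bytes_per_sector
            f.seek(root_start)
            for _ in range(root_entries):
                entry = f.read(32)
                if entry[0] == 0x00:                # end of directory
                    break
                if entry[11] == ATTR_VOLUME_ID:     # strict match skips LFN entries (0x0F)
                    print("Root-directory label entry:", entry[0:11].decode("ascii", "replace"))
                    break
    On a FAT32 image the label field moves to offset 71 and the root directory has to be walked cluster by cluster through the FAT, as described above.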

    Read the article

  • ClickThrough on Google Webmaster Tool and Traffic Source in Google Analytics

    - by Svetlana
    I'm new to SEO and website management, but eager to learn. I manage a newly revamped site and I'm tracking it in Google Analytics and in Google Webmaster Tools. The Webmaster Tools show that I get about 3200 impressions and 180 click-throughs a week. Google Analytics shows that no traffic comes from search engines; all of the traffic is direct. On average, I get about 60-80 visitors a day; shouldn't Google Analytics show at least a few of those visitors as having come from the search engines? What does that discrepancy mean? I can't seem to wrap my mind around it... Thank you in advance, Svetlana

    Read the article

  • NTFS partitions hidden under EXT4 file system / partition...want to recover files from NTFS

    - by user7534
    I am new to Ubuntu, but very impressed with the system. One day I tried installing Ubuntu 10.10 alongside Windows in dual boot. The first attempt didn't install properly; during the second attempt I got it right, but oh... I lost my Windows 7. Here is my problem and what I have done so far: 1. I have a hard disk with Ubuntu installed; the same disk holds the Windows partitions and I need to extract data from those (very, very important). 2. I tried to access them from Ubuntu and cannot. 3. I reinstalled Windows 7; the hard disk is not detected. 4. During that installation Ubuntu was gone, so I reinstalled it. A scan in Ubuntu says the hard disk is fine, and DiskInternals Linux Reader actually shows the NTFS partitions, but the recovery tool is not able to get any data out. Please help, I need the data from these partitions. I feel that I have put an ext4 partition on top of the NTFS filesystem and am now not able to access it.

    Read the article

  • Introduction to Oracle’s New StorageTek SL150 Modular Tape Library

    - by Cinzia Mascanzoni
    Join the product announcement webcast on Thursday, July 12, 2012 at 3pm CET (2pm GMT). This webcast will help you understand Oracle's new StorageTek SL150 modular tape library, the first scalable tape library designed for small and midsized companies that are experiencing high growth. Built from Oracle software and StorageTek library technology, it delivers a cost-effective combination of ease of use and scalability, resulting in overall TCO savings. During the webcast Cindy McCurley, from Tape Product Management, will introduce you to the latest addition to the Oracle Tape Storage product portfolio, the SL150 Modular Tape Library. This 60-minute webcast will cover the product’s features, positioning, unique selling points and a competitive overview of StorageTek. You can submit your questions via WebEx chat and there will be a live Q&A session at the end of the webcast. Register NOW!

    Read the article

  • Create Second Web Application using the Default port 80 In SharePoint2010

    - by ybbest
    As a SharePoint developer, one of the common tasks is to create a SharePoint web application. In this post I will show you how to create a second web application using the default port 80 in SharePoint 2010. You need to follow the steps below. 1. Go to Central Admin => Application Management => Manage web applications and click New Web Application. 2. I chose YBBEST as my IIS site name and host header name, changed the port number to 80 and left the rest of the settings at their defaults. 3. After the web application creation wizard completes, add an entry in the hosts file located at C:\Windows\System32\Drivers\etc\hosts (see the sketch below). 4. Create a root site collection for the new web application. After the site collection is created, you can browse to it using the URL http://ybbest.
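    For reference, step 3 amounts to appending one line to the hosts file so the host header resolves locally. A tiny, hypothetical Python sketch (run from an elevated prompt; the 127.0.0.1 address assumes you are browsing from the SharePoint server itself) would be:
        hosts_path = r"C:\Windows\System32\Drivers\etc\hosts"

        # Map the host header of the new web application to the local machine
        with open(hosts_path, "a") as hosts:
            hosts.write("127.0.0.1    ybbest\n")
    Editing the file by hand in Notepad (run as Administrator) achieves exactly the same thing.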

    Read the article

  • The Evolution of Link’s Swords [Wallpaper]

    - by Jason Fitzpatrick
    If you’re a fan of all things Legend of Zelda, this high-resolution wallpaper showcases all the swords from every Legend of Zelda game. In addition to the wallpaper that gathers all the swords together in one place, you can also check out the description on the wallpaper’s Deviant Art page to grab high-resolution images of each individual sword. Hit up the link below to grab both the wallpaper and the individual renderings. The Evolution of Link’s Swords [Deviant Art]

    Read the article

  • Getting WCF Bindings and Behaviors from any config source

    The need to load WCF bindings or behaviors from different sources, such as files on disk or databases, is a common requirement when dealing with configuration on either the client side or the service side. The traditional way to accomplish this in WCF is to load everything from the standard configuration section (the serviceModel section) or to create all the bindings and behaviors by hand in code. However, there is a solution in the middle that becomes handy when more flexibility is needed. This...

    Read the article

  • Donald Farmer comes to SQLBits

    What do medieval archaeology, fish farming, Southwestern University of Chongqing and Microsoft Business Intelligence have in common? If you know, you should tell Donald Farmer, because he has been deeply involved in all of them at various times. Donald has worked in the Microsoft Business Intelligence team for 8 years, covering many subject areas: data integration, information quality, metadata intelligence, master data management, OLAP, predictive analytics and self-service BI. He is a well-known speaker at Microsoft and other industry events, and the author of several books and articles. Great news from SQLBits! We can now confirm that Donald Farmer has agreed to do a pre-conference training day and the keynote for our SQL Server 2008 and SQL Server 2008 R2 day. As Program Manager for Project Gemini, no-one is better placed to tell you what is going to be in R2 and what is not! More information about the pre-conference training day and the SQL 2008 and R2 Friday will be released soon.

    Read the article

  • Can my ikmnet test results say something about career choice I should take?

    - by Nicke
    I took 2 tests via ikmnet and scored 70% on SQL and 65% on Java. While not bad, it can be improved. The subskills I need to improve according to the test are interfaces and inheritance, compilation and deployment, flow control, the java.lang package and "Java Program Construction", and these topics seem rather broad to me. Rather than just learning by programming, would you advise me to take a certification, follow a course or otherwise improve my skills? By the way, I enjoy Python more than Java, so should I market myself more as a Python programmer, or even aim for a role that some companies advertise as system analyst (evaluating systems in cooperation with management rather than programming), which seems like a system developer with more technical writing? Thank you for any comment and/or answer.

    Read the article

  • ~/.xsession-errors is 2.7gb big (and growing), on fresh install, caused by gnome-settings-daemon errors

    - by Alex Black
    I've just installed Ubuntu 10.10 x64 and activated the recommended Nvidia drivers, and I noticed my hard disk space is disappearing. I narrowed the culprit down to this:
    alex@alex-home:~$ ls -la .x*
    -rw------- 1 alex alex 4436076400 2010-11-19 22:35 .xsession-errors
    -rw------- 1 alex alex 10495 2010-11-19 21:46 .xsession-errors.old
    Any idea what this file is, why it's so big, and why it's growing? A few seconds later:
    alex@alex-home:~$ ls -la .x*
    -rw------- 1 alex alex 5143604317 2010-11-19 22:36 .xsession-errors
    -rw------- 1 alex alex 10495 2010-11-19 21:46 .xsession-errors.old
    Tailing it:
    alex@alex-home:~$ tail .xsession-errors
    (gnome-settings-daemon:1514): GLib-GObject-CRITICAL **: g_object_unref: assertion `G_IS_OBJECT (object)' failed
    (gnome-settings-daemon:1514): GLib-GObject-CRITICAL **: g_object_unref: assertion `G_IS_OBJECT (object)' failed
    (gnome-settings-daemon:1514): GLib-GObject-CRITICAL **: g_object_unref: assertion `G_IS_OBJECT (object)' failed
    (gnome-settings-daemon:1514): GLib-GObject-CRITICAL **: g_object_unref: assertion `G_IS_OBJECT (object)' failed
    (gnome-settings-daemon:1514): GLib-GObject-CRITICAL **: g_object_unref: assertion `G_IS_OBJECT (object)' failed
    Also, the gnome-settings-daemon process seems to be using 100% CPU:
    PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
    1514 alex 20 0 268m 10m 7044 R 100 0.1 7:06.10 gnome-settings-

    Read the article

  • Social Business Forum Milano: Day 2

    - by me
    @YourService. The business world has flipped and small business can capitalize, by Frank Eliason (Twitter: @FrankEliason)
    Technology and social media tools have made it easier than ever for companies to communicate with consumers. They can listen and join in on conversations, solve problems, get instant feedback about their products and services, and more. So why, then, are most companies not doing this? Instead, it seems as if customer service is at an all-time low, and the few companies who are choosing to focus on their customers are experiencing a great competitive advantage. At Your Service explains the importance of refocusing your business on your customers and your employees, and just how to do it. It explains how to create a culture of empowered employees who understand the value of a great customer experience, and advises on the need to communicate that experience to their customers and potential customers. Frank Eliason, recognized by BusinessWeek as the 'most famous customer service manager in the US, possibly in the world,' has built a reputation for helping large businesses improve the way they connect with customers and enhance their relationships.
    Quotes from the audience:
    Bertrand Duperrin @bduperrin: social service is not about shutting up the loudest customers! #sbf12 @frankeliason
    Paolo Pelloni @paolopelloni and Gautam Ghosh @GautamGhosh: RT @cecildijoux: #sbf12 @frankeliason you need to change things and fix the approach it's not about social media it's about driving change
    Peter H. Reiser @peterreiser: #sbf12 Company Experience = Product Experience + Customer Interactions + Employee Experience @yourservice Engage or lose!
    Socialize, mobilize, conversify: engage your employees to improve business performance, by Christian Finn (Twitter: @cfinn)
    First Christian presented the flying monkey. Then he outlined the four principles to fix the intranet: 1. Socialize the intranet 2. Get thee to a single repository 3. Mobilize the intranet 4. Conversationalize your processes
    Quotes from the audience:
    Oscar Berg @oscarberg: Engaged employees think their work bring out the best of their ideas @cfinn #sbf12 http://pic.twitter.com/68eddp48
    John Stepper @johnstepper: I like @cfinn's "conversify your processes" A nice related concept to "narrating your work", part of working out loud. http://johnstepper.com/2012/05/26/working-out-loud-your-personal-content-strategy/
    Oscar Berg @oscarberg: Organizations are talent markets - socializing your intranet makes this market function better @cfinn #sbf12
    For profit, productivity, and personal benefit: creating a collaborative culture at Deutsche Bank, by John Stepper (Twitter: @johnstepper)
    Driving adoption of collaboration and social media platforms at Deutsche Bank. John shared some great best practices on how to deploy an enterprise-wide community model in a large company. He started with the most important question: what is the commercial value of adding social? Then he talked about the success of the Communities of Practice deployment and outlined some key use cases, including the relevant measures to prove the ROI of the investment. Examples: community of practice -> measure: systematic collection of value stories; self-service website -> measure: based on representative models; optimizing asset inventory -> measure: actual counts. This last use case was particularly interesting: it is a crowd-sourced infrastructure spending/saving model in which users can cancel IT services they don't need (for example, Software xx). 5% of the saving goes to social responsibility projects. John then outlined some best practices on how to address the WIIFM (What's In It For Me) question for individual users: change from hierarchy to graph; working out loud = observable work + narrating your work; add social skills to career objectives (for example, a course on building a purposeful social network as part of the job development curriculum). And last but not least, John gave some important tips on how to get senior management buy-in: establish management-sponsored, division-level collaboration boards that define clear use cases and measures; these divisional use cases are then implemented on a common social platform. Thanks John - I learned a lot from your presentation!
    Quotes from the audience:
    Ana Silva @AnaDataGirl: #sbf12 what's in it for individuals at Deutsche Bank? Shapping their reputations in a big org says @johnstepper #e20
    Ana Silva @AnaDataGirl: Any reason why not? MT @magatorlibero #sbf12 is Deutsche B. experience on applying social inside company applicable to Italian people?
    Oscar Berg @oscarberg: Your career is not a ladder, it is a network that opens up opportunities - @johnstepper #sbf12
    Oscar Berg @oscarberg: @johnstepper: Institutionalizing collaboration is next - collaboration woven into the fabric of daily work #sbf12
    Ana Silva @AnaDataGirl: #sbf12 @johnstepper talking about how Deutsche Bank is using #socbiz to build purposeful CoP & save money

    Read the article

  • Clean up after Visual Studio

    - by psheriff
    As programmers, we know that if we create a temporary file while our application runs, we need to make sure it is removed when the application or process is complete. We do this, but why can’t Microsoft do it? Visual Studio leaves tons of temporary files all over your hard drive. This is why, over time, your computer loses hard disk space. This blog post will show you some of the most common places where these files are left and which ones you can safely delete.
    .NET Left Overs
    Visual Studio is a great development environment for creating applications quickly. However, it will leave a lot of miscellaneous files all over your hard drive. There are a few locations on your hard drive that you should be checking to see if there are left-over folders or files that you can delete. I have attempted to gather as much data as I can about the various versions of .NET and operating systems. Of course, your mileage may vary on the folders and files I list here. In fact, this problem is so prevalent that PDSA has created a Computer Cleaner specifically for the Visual Studio developer. Instructions for downloading our PDSA Developer Utilities (of which Computer Cleaner is one) are at the end of this blog entry. Each version of Visual Studio will create “temporary” files in different folders. The problem is that the files created are not always “temporary”; most of the time these files do not get cleaned up like they should. Let’s look at some of the folders that you should periodically review and clean out.
    Temporary ASP.NET Files
    As you create and run ASP.NET applications from Visual Studio, temporary files are placed into the <sysdrive>:\Windows\Microsoft.NET\Framework[64]\<vernum>\Temporary ASP.NET Files folder. The folders and files under this folder can be removed with no harm to your development computer. Do not remove the "Temporary ASP.NET Files" folder itself, just the folders underneath it. If you use IIS for ASP.NET development, you may need to run the iisreset.exe utility from the command prompt prior to deleting any files or folders under this folder. IIS will sometimes keep files in this folder in use, and iisreset releases the locks so the files and folders can be deleted.
    Website Cache
    This folder is similar to the Temporary ASP.NET Files folder in that it contains files from ASP.NET applications run from Visual Studio. This folder is located in each user's local settings folder, so the location will be a little different on each operating system. For example, on Windows Vista/Windows 7 the folder is located at <sysdrive>:\Users\<UserName>\AppData\Local\Microsoft\WebsiteCache. If you are running Windows XP this folder is located at <sysdrive>:\Documents and Settings\<UserName>\Local Settings\Application Data\Microsoft\WebsiteCache. Check these locations periodically and delete all files and folders under this directory.
    Visual Studio Backup
    This backup folder is used by Visual Studio to store temporary files while you develop. This folder never gets cleaned out, so you should periodically delete all files and folders under this directory. On Windows XP, this folder is located at <sysdrive>:\Documents and Settings\<UserName>\My Documents\Visual Studio 200[5|8]\Backup Files. On Windows Vista/Windows 7 this folder is located at <sysdrive>:\Users\<UserName>\Documents\Visual Studio 200[5|8]\.
    Assembly Cache
    No, this is not the global assembly cache (GAC). It appears that this cache is only created when doing WPF or Silverlight development with Visual Studio 2008 or Visual Studio 2010. This folder is located at <sysdrive>:\Users\<UserName>\AppData\Local\assembly\dl3 on Windows Vista/Windows 7. On Windows XP this folder is located at <sysdrive>:\Documents and Settings\<UserName>\Local Settings\Application Data\assembly. If you have not done any WPF or Silverlight development, you may not find this particular folder on your machine.
    Project Assemblies
    This is yet another folder where Visual Studio stores temporary files. You will find a folder for each project you have opened and worked on. This folder is located at <sysdrive>:\Documents and Settings\<UserName>\Local Settings\Application Data\Microsoft\Visual Studio\[8|9].0\ProjectAssemblies on Windows XP. On Windows Vista/Windows 7 you will find this folder at <sysdrive>:\Users\<UserName>\AppData\Local\Microsoft\Visual Studio\[8|9].0\ProjectAssemblies.
    Remember, not all of these folders will appear on your particular machine. Which ones show up will depend on what version of Visual Studio you are using, whether you are doing desktop or web development, and the operating system you are using.
    Summary
    Taking the time to periodically clean up after Visual Studio will help keep your computer running quickly and increase the space on your hard drive. Another place to make sure you are cleaning up is your TEMP folder. Check your OS settings for the location of your particular TEMP folder and be sure to delete any files in it that are not in use. I routinely clean up the files and folders described in this blog post, and I find that I actually eliminate errors in Visual Studio and increase my hard disk space.
    NEW! PDSA has just published a “pre-release” of our PDSA Developer Utilities at http://www.pdsa.com/DeveloperUtilities that contains a Computer Cleaner utility which will clean up the above-mentioned folders, as well as a lot of other miscellaneous folders that accumulate Visual Studio build-up. You can download a free trial at http://www.pdsa.com/DeveloperUtilities. If you wish to purchase our utilities through the month of November, 2011 you can use the RSVP code DUNOV11 to get them for only $39. This is $40 off the regular price.
    NOTE: You can download this article and many samples like the one shown in this blog entry at my website, http://www.pdsa.com/downloads. Select “Tips and Tricks”, then “Developer Machine Clean Up” from the drop-down list.
    Good luck with your coding,
    Paul Sheriff
    ** SPECIAL OFFER FOR MY BLOG READERS ** We frequently offer a FREE gift for readers of my blog. Visit http://www.pdsa.com/Event/Blog for your FREE gift!
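    If you prefer a scriptable alternative to deleting these folders by hand, here is a rough Python sketch along the lines the article describes (the framework and Visual Studio version folders below are examples only; adjust them to your machine and treat this as a starting point, not a substitute for the PDSA Computer Cleaner):
        import os
        import shutil

        user_profile = os.environ.get("USERPROFILE", r"C:\Users\Default")

        # Example locations from the article; version numbers will differ per machine
        candidates = [
            r"C:\Windows\Microsoft.NET\Framework\v4.0.30319\Temporary ASP.NET Files",
            os.path.join(user_profile, r"AppData\Local\Microsoft\WebsiteCache"),
            os.path.join(user_profile, r"Documents\Visual Studio 2008\Backup Files"),
            os.path.join(user_profile, r"AppData\Local\Microsoft\Visual Studio\9.0\ProjectAssemblies"),
        ]

        for folder in candidates:
            if not os.path.isdir(folder):
                continue
            for name in os.listdir(folder):        # empty the folder, keep the folder itself
                path = os.path.join(folder, name)
                try:
                    if os.path.isdir(path):
                        shutil.rmtree(path)
                    else:
                        os.remove(path)
                except OSError as err:
                    print("Skipped (probably in use):", path, "-", err)
            print("Cleaned:", folder)
    Remember to run iisreset first if IIS is holding locks on the Temporary ASP.NET Files folder, as noted above.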

    Read the article

  • NoSQL Memcached API for MySQL: Latest Updates

    - by Mat Keep
    With data volumes exploding, it is vital to be able to ingest and query data at high speed. For this reason, MySQL has implemented NoSQL interfaces directly to the InnoDB and MySQL Cluster (NDB) storage engines, which bypass the SQL layer completely. Without SQL parsing and optimization, key-value data can be written directly to MySQL tables up to 9x faster, while maintaining ACID guarantees. In addition, users can continue to run complex queries with SQL across the same data set, providing real-time analytics to the business or anonymizing sensitive data before loading to big data platforms such as Hadoop, while still maintaining all of the advantages of their existing relational database infrastructure. This and more is discussed in the latest Guide to MySQL and NoSQL, where you can learn more about using the APIs to scale new generations of web, cloud, mobile and social applications on the world's most widely deployed open source database. The native Memcached API is part of the MySQL 5.6 Release Candidate, and is already available in the GA release of MySQL Cluster. By using the ubiquitous Memcached API for writing and reading data, developers can preserve their investments in Memcached infrastructure by re-using existing Memcached clients, while also eliminating the need for application changes. Speed, when combined with flexibility, is essential in the world of growing data volumes and variability. Complementing NoSQL access, support for online DDL (Data Definition Language) operations in MySQL 5.6 and MySQL Cluster enables DevOps teams to dynamically update their database schema to accommodate rapidly changing requirements, such as the need to capture additional data generated by their applications. These changes can be made without database downtime. Using the Memcached interface, developers do not need to define a schema at all when using MySQL Cluster. Let's look a little more closely at the Memcached implementations for both InnoDB and MySQL Cluster.
    Memcached Implementation for InnoDB
    The Memcached API for InnoDB is previewed as part of the MySQL 5.6 Release Candidate. As illustrated in the accompanying figure (Figure 1: Memcached API Implementation for InnoDB), Memcached for InnoDB is implemented via a Memcached daemon plug-in to the mysqld process, with the Memcached protocol mapped to the native InnoDB API. With the Memcached daemon running in the same process space, users get very low latency access to their data while also leveraging the scalability enhancements delivered with InnoDB and a simple deployment and management model. Multiple web / application servers can remotely access the Memcached / InnoDB server to get direct access to a shared data set. With simultaneous SQL access, users can maintain all the advanced functionality offered by InnoDB, including support for foreign keys, XA transactions and complex JOIN operations. Benchmarks demonstrate that the NoSQL Memcached API for InnoDB delivers up to 9x higher performance than the SQL interface when inserting new key/value pairs, with a single low-end commodity server supporting nearly 70,000 transactions per second (Figure 2: Over 9x Faster INSERT Operations). The delivered performance demonstrates that MySQL with the native Memcached NoSQL interface is well suited for high-speed inserts with the added assurance of transactional guarantees.
    You can check out the latest Memcached / InnoDB developments and benchmarks here. You can learn how to configure the Memcached API for InnoDB here.
    Memcached Implementation for MySQL Cluster
    Memcached API support for MySQL Cluster was introduced with General Availability (GA) of the 7.2 release, and joins an extensive range of NoSQL interfaces that are already available for MySQL Cluster. Like Memcached, MySQL Cluster provides a distributed hash table with in-memory performance. MySQL Cluster extends Memcached functionality by adding support for write-intensive workloads, a full relational model with ACID compliance (including persistence), rich query support, auto-sharding and 99.999% availability, with extensive management and monitoring capabilities. All writes are committed directly to MySQL Cluster, eliminating cache invalidation and the overhead of data consistency checking to ensure complete synchronization between the database and cache. (Figure 3: Memcached API Implementation with MySQL Cluster.) Implementation is simple: 1. The application sends reads and writes to the Memcached process (using the standard Memcached API). 2. This invokes the Memcached Driver for NDB (which is part of the same process). 3. The NDB API is called, providing very quick access to the data held in MySQL Cluster’s data nodes. The solution has been designed to be very flexible, allowing the application architect to find a configuration that best fits their needs. It is possible to co-locate the Memcached API in either the data nodes or application nodes, or alternatively within a dedicated Memcached layer. The benefit of this flexible approach to deployment is that users can configure behavior on a per-key-prefix basis (through tables in MySQL Cluster) and the application doesn’t have to care – it just uses the Memcached API and relies on the software to store data in the right place(s) and to keep everything synchronized.
    Using Memcached for Schema-less Data
    By default, every key/value is written to the same table, with each key/value pair stored in a single row - thus allowing schema-less data storage. Alternatively, the developer can define a key prefix so that each value is linked to a pre-defined column in a specific table. Of course, if the application needs to access the same data through SQL, developers can map key prefixes to existing table columns, enabling Memcached access to schema-structured data already stored in MySQL Cluster.
    Conclusion
    Download the Guide to MySQL and NoSQL to learn more about NoSQL APIs and how you can use them to scale new generations of web, cloud, mobile and social applications on the world's most widely deployed open source database. See how to build a social app with MySQL Cluster and the Memcached API from our on-demand webinar, or take a look at the docs. Don't hesitate to use the comments section below for any questions you may have.
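    To illustrate the "re-use existing Memcached clients" point above, here is a minimal Python sketch (assuming the InnoDB memcached plugin, or a MySQL Cluster Memcached server, is already set up and listening on the default port 11211 of a host called "dbhost"; the host name and keys are placeholders, and the pymemcache library stands in for whatever Memcached client you already use):
        from pymemcache.client.base import Client

        client = Client(("dbhost", 11211))

        # Written through the Memcached protocol, stored in the mapped InnoDB/NDB table
        client.set("user:42", "alice")

        # Readable here and, at the same time, queryable with SQL against the same data
        print(client.get("user:42"))

        client.close()
    Any other standard Memcached client library could issue the same key/value operations without application changes, which is exactly the investment-preservation argument made above.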

    Read the article

  • We're Back: I'm Here

    - by Brian Dayton
    After a busy Fall and Winter post-Oracle OpenWorld 2009, Oracle's Application Strategy Blog is back. More on what we've been up to shortly. Me, I'm blogging here for the first time. After nearly 6 years at Oracle working on the Oracle Fusion Middleware business, I've recently joined the Oracle Applications team. For me, what's old is new again. Prior to working on applications infrastructure at Oracle...and at BEA Systems before that...I worked at PeopleSoft in a number of roles spanning Enterprise Performance Management, Supply Chain, Public Sector, Financial Services and more. Some of the acronyms are the same, and there are (of course) some new ones too. But what I'm really excited about is the intersection of Enterprise Applications and Applications Infrastructure that's happening right now. "Aligning IT with Business Strategy" has been the buzzphrase for longer than we can all remember---but what I've seen over the past 5 months makes me start to believe that it's finally starting to happen.

    Read the article

  • Don’t miss the Procurement AME New Features and Setup for Purchase Orders Webcast on December 6th and Follow up Live Chat

    - by MargaretW
    Webcast
    This one-hour session on December 6th is recommended for technical and functional users who are interested in learning more about the new 12.1.3 features for Procurement with the Approval Management Engine (AME). Topics will include: scope and limitations of AME functionality for purchase orders; setup and use of AME for purchase orders; and the new PO Review and PO E-Sign features. Demonstrations will be included. See Doc ID 1456150.1 to sign up now!
    Live Chat
    There will be a live chat in the Procurement Community on December 13th for follow-up questions and answers. Join us to share and gain knowledge!

    Read the article

  • How do I disable unwanted iPXE boot attempt in Libvirt/qemu-kvm?

    - by gertvdijk
    Somehow, after upgrading to 12.04, my virtual machines always try to boot from the network (iPXE) first, even though I don't have any PXE configuration set. I've tried: disabling SPICE, by changing the emulator from /usr/bin/kvm-spice to /usr/bin/kvm by editing the XML; Ctrl+B to configure iPXE, but it doesn't let me disable this as a boot option; setting another type of NIC - not an option, I need virtio for performance reasons, and e1000e doesn't work either; removing the NIC - that works, but I need networking. Googling around is hard: most results are about configured PXE boots that fail. It is not a big issue, but it increases boot times by 50-100% here (booting from SSD), so it is relatively long and annoys me. How can I disable this and boot directly from the virtual hard disk?

    Read the article

  • ERRNO 5 Input/Output Error

    - by CCarey
    I'm attempting my first Ubuntu installation and have encountered a critical error. Note that I am installing on my MacBook Pro and have already removed all other partitions (I'm installing from a CD). My Ubuntu version is "ubuntu-12.10-desktop-i386". Once the installation gets to around "Finishing copying files", a great big "[ERRNO5] Input/Output Error" pops up on the screen. Obviously this halts and crashes the whole installation. Now, I've already run a disk check, memtest, and CPU load test, and all came up green. I have also redownloaded Ubuntu twice (the MD5 matched both times) and burnt four discs. None got past this error. If anyone could help me out, that would be greatly appreciated! Cheers!

    Read the article

  • Don’t miss the Procurement Webcast for AME on October 30th, 2012

    - by user793553
    Procurement Support is pleased to announce a new webcast covering the topic ‘Approval Management Engine (AME) Setup, Use and Troubleshooting’. This one-hour session will include the topics:
    · Basic Setup: setup, and how the default approval list is built in AME
    · Diagnostic Steps: running the Test Workbench, and accessing and reviewing log files, the approval workflow and debug
    · An example of an AME setup, including defining attributes, conditions, action types and rules
    A short live demonstration and a question and answer period will be included. October 30, 2012 at 3:00 pm Cairo / 1:00 pm London / 6:00 am Pacific / 7:00 am Mountain / 9:00 am Eastern. From My Oracle Support, see Doc ID 1456150.1 for further details and to sign up.

    Read the article

  • Oracle Extends Life Sciences Edition in New Release

    - by charles.knapp
    By Chris Kanaracus, IDG News Service Oracle (ORCL) announced the 17th version of its on-demand CRM (customer relationship management) application Wednesday and made a fresh push into pharmaceutical sales with a Life Sciences edition of the software. New features in CRM on Demand Release 17 include tools for managing sales pipelines and performing forecasts of future business; a redesigned user interface; and added language support. But one CRM industry observer flagged the Life Sciences product as a particular point of interest. Read the full article here.

    Read the article

< Previous Page | 460 461 462 463 464 465 466 467 468 469 470 471  | Next Page >