Search Results

Search found 11512 results on 461 pages for 'usb mass storage'.


  • How to store bitmaps in memory?

    - by Geotarget
    I'm working with general-purpose image rendering and high-performance image processing, so I need to know how to store bitmaps in memory (24bpp/32bpp, compressed/raw, etc.). I'm not working with 3D graphics or DirectX/OpenGL rendering, so I don't need graphics-card-compatible bitmap formats. My questions:
    - What is the "usual" or "normal" way to store bitmaps in memory? (in C++ engines/projects?)
    - How should bitmaps be stored for high-performance algorithms, so that read/write times are fastest? (fixed array? with/without padding? 24-bpp or 32-bpp?)
    - How should bitmaps be stored in applications handling a lot of bitmap data, to minimize memory usage? (JPEG? or a faster [de]compression algorithm?)
    Some possible methods:
    - Use a fixed, packed 24-bpp or 32-bpp int[] array and simply access pixels using pointer arithmetic; all pixels are allocated in one contiguous memory chunk (could be 1-10 MB). (Sketched below.)
    - Use a form of "sparse" data storage where each line of the bitmap is allocated separately, using slightly more memory but requiring smaller contiguous memory segments.
    - Store bitmaps in compressed form (PNG, JPG, GIF, etc.) and unpack only when needed, reducing the amount of memory used; delete the unpacked data if it isn't used for 10 seconds.
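
    A minimal sketch of the first method above, assuming the packed-array layout the question describes. It is written in Java here even though the question mentions C++ engines; the layout idea (one contiguous allocation, row-major index arithmetic) carries over directly, and it is also how java.awt.image.BufferedImage stores TYPE_INT_ARGB pixels internally:

        /** Minimal packed 32-bpp (ARGB) bitmap held in one contiguous int[]. */
        public final class PackedBitmap {
            private final int width;
            private final int height;
            private final int[] pixels; // one int per pixel: 0xAARRGGBB

            public PackedBitmap(int width, int height) {
                this.width = width;
                this.height = height;
                this.pixels = new int[width * height]; // single contiguous allocation
            }

            /** O(1) read: row-major index arithmetic, no per-row indirection. */
            public int getPixel(int x, int y) {
                return pixels[y * width + x];
            }

            public void setPixel(int x, int y, int argb) {
                pixels[y * width + x] = argb;
            }
        }

    With 32-bpp each pixel is word-aligned, which generally trades a third more memory for simpler, faster access than a 24-bpp layout.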

    Read the article

  • Fusion CRM Release 7 RCDs and TOIs Now Available!

    - by Richard Lefebvre
    Fusion CRM Release 7 Release Content Documents (RCD) and Transfer of Information (TOI) presentations are now available. In addition, you can find 245 new or changed product features for Release 7 on Oracle Product Features. All the new RCDs and TOIs can be found on the Fusion Learning Center:
    - Customer Relationship Management: TOIs - Customer Center, Define Segmentation Strategy, Enterprise Contracts, Oracle Social Network, Sales, and Territory Management; Business Process Model (BPM) RCDs - Customer Service, Marketing, Order Fulfillment, and Sales
    - Financials: BPM RCDs - Asset Lifecycle Management, Cash and Treasury Management, and Financial Control and Reporting
    - Human Capital Management: TOIs - Workforce Development, Compensation, Benefits, Worker Performance, Workforce Profiles, Enterprise Structures, Talent Review, Manage Transaction and Batch Processing, Delete HCM Storage Data, and Load Batch Data; BPM RCDs - Compensation Management, Enterprise Information Management, Workforce Deployment, and Workforce Development
    - Procurement: TOI - Requisitions; BPM RCD - Procurement
    - Project Portfolio Management: TOIs - Project Resources, Evaluate and Assign Resources, Maintain Resource Assignments, Manage Resource Demand, Manage Resource Supply, Manage Resource Utilization and Analytics, Project Management, Set Up Project Management; BPM RCD - Project Management
    - Supply Chain Management: TOIs - Manage New Product Definition and Approval, Manage Product Change Orders, Product Hub, Define Item Class; BPM RCDs - Materials Management and Logistics, Product Management and Supply Chain Planning
    Partners and customers can access the content from the following locations:
    - Partner access: BPM RCDs and TOIs on the Oracle Partner Network Fusion Learning Center; New Feature RCDs on Oracle Product Features
    - Customer access: TOIs on My Oracle Support (Note 1528594.1); BPM RCDs on My Oracle Support (Note 1559828.1); New Feature RCDs on Oracle Product Features

    Read the article

  • svcbundle for easier SMF manifest creation

    - by Glynn Foster
    One of the new features we've introduced in Oracle Solaris 11.1 is a new utility called svcbundle. This utility allows for the easy creation of Service Management Facility (SMF) manifests and profiles, letting you take advantage of the benefits of automatic application restart without requiring full knowledge of the XML file format used when integrating with SMF. Integrating with SMF is one of the easiest and most obvious ways to take advantage of some of the mission-critical aspects of the operating system, but many customers found the initial learning curve of creating an XML manifest too steep. With the release of Oracle Solaris 11 11/11, SMF took on a more integral role, as more and more system configuration started to use the SMF configuration repository as its backing store. This provides a number of benefits, including the ability to carefully manage customized administrative configuration, site-specific configuration, and vendor-provided configuration at different layers, helping to preserve them during system update. I've written a new article, Using svcbundle to Create SMF Manifests and Profiles in Oracle Solaris 11, to give you a feel for the help we can provide in converting your applications over to SMF, or doing some site-wide configuration using profiles. This is the first pass at creating such a tool, so we'd love to hear feedback about your experiences using it.

    Read the article

  • Why can't tuxboot and ubuntu play well together?

    - by mmr
    I'm trying to get Clonezilla to run off of a USB stick, and it seems that the right way to do that is via tuxboot. Tuxboot is not compilable on Ubuntu: I used git to get it from the repository, but building it is apparently not supported (the build script just tries to install Windows things), so I ran the 'install' script instead. Qmake-linux wants my qmake executable to be in the same directory as the stuff I pulled down, and let's just say that if there's a way to do this easily, I ain't seein' it. So then I downloaded the prebuilt Linux binary, the most recent of which is tuxboot-linux-25. Tried to run it, got a failure that libpng12.so.0 isn't found. OK, so I installed that via instructions I found on the web, which Firefox seems to have already deleted from my history (yay!). Then I added the /usr/local/lib directory to ldconfig via emacs (had to install that too, of course): http://ubuntuforums.org/showthread.php?t=369848 I still get the error that libpng12.so.0 cannot be opened because 'No such file or directory'. ldconfig -p | grep libpng shows that the library is there, but it still doesn't seem to be findable. What to do next? (For the record, doing this in Windows is painless: download, click, and it's done. But I'm trying to be all linuxy and get away from Windows for this...)

    Read the article

  • Provide an OnChange event for an internal property which is controlled externally?

    - by NGLN
    For fun and by request I am updating this ImageGrid component, a kind of listbox for images that has a FileNames property of type TStrings. For ease of writing, I had been misusing its FileNames.Objects property for bitmap storage. But since the TStrings type suggests that users of the component could or would want to use the Objects property for custom data, e.g. like TListBox.Items, I am rewriting the component to store the bitmaps elsewhere and leave FileNames.Objects untouched for unknown future usage. Now I am wondering whether to provide an OnChange event, and if so, whether to fire it when one or more FileNames.Objects entries change. Trying to answer this myself, I dove into Delphi's own VCL and stumbled on:
    - TMemo: has an OnChange event, but ignores Lines.Objects
    - TListBox: has no OnChange event, but is capable of storing Items.Objects
    - TStringGrid: has no OnChange event, but is capable of storing Objects, Rows.Objects, and Cols.Objects
    So now I am somewhat puzzled, because I cannot imagine Borland's developers left out events for several Objects properties simply for ease of implementation. Sure, when a user changes a FileNames.Object in my component, he knows he does and could implement appropriate interaction himself. But wouldn't it be convenient if the component did so automatically? What would you expect from this component in this regard?

    Read the article

  • How can I triple boot Xubuntu, Ubuntu and Windows?

    - by ag.restringere
    Triple Booting Xubuntu, Ubuntu and Windows
    I'm an avid Xubuntu (Ubuntu + XFCE) user but I also dual boot with Windows XP. I originally created 3 partitions and wanted to use the empty one as a storage volume, but now I want to install Ubuntu 12.04 LTS (the one with Unity) on it to do advanced testing and packaging. Ideally I would love to keep the two totally separate, as I had problems in the past with conflicts between Unity and XFCE. This way I could wipe the Ubuntu-with-Unity installation if there are problems and really mess around with it. My disk looks like this:
    /dev/sda1 -- Windows XP
    /dev/sda2 --
    Disk /dev/sda: 200.0 GB, 200049647616 bytes
    255 heads, 63 sectors/track, 24321 cylinders, total 390721968 sectors
    Units = sectors of 1 * 512 = 512 bytes
    Sector size (logical/physical): 512 bytes / 512 bytes
    I/O size (minimum/optimal): 512 bytes / 512 bytes
    Device Boot      Start        End     Blocks  Id  System
    /dev/sda1 *         63   78139454   39069696   7  HPFS/NTFS/exFAT
    /dev/sda2     78141440  156280831   39069696  83  Linux
    /dev/sda3    156282878  386533375  115125249   5  Extended
    /dev/sda4    386533376  390721535    2094080  82  Linux swap / Solaris
    /dev/sda5    156282880  386533375  115125248  83  Linux
    I want to keep each OS in its own partition, totally separate, and be able to select any of the three systems from the GRUB boot menu:
    sda1 --- [Windows XP]
    sda2 --- [Ubuntu 12.04] "Unity"
    sda3(4,5) -- [Xubuntu 12.04] "Primary XFCE"
    What is the safest and easiest way to do this without messing my system up or requiring invasive activity?

    Read the article

  • Notification framework for object lifecycle

    - by rlandster
    I am looking for an application, framework, or library that would help us with "object life-cycle management". There are many things that are created for users, departments, and services that, all too often, are left unmanaged. Some examples:
    - user accounts
    - groups
    - SSL certificates
    - access rights
    - databases
    - software license provisionings
    - storage
    - list-serve accounts
    These objects are created and managed by a wide variety of applications and systems. Typically, a user (person) requests (either explicitly or implicitly) one of these objects. A centralized management tool would help us manage such administration chores as:
    - What objects does user X currently own/manage?
    - Move the ownership of object P to user X; move all objects owned by user X (who has just been fired) to user Y.
    - For all objects of type T that have expired, be sure the objects have been disabled or deleted by their provider.
    - How many active (expired, about-to-expire) objects of type P are there?
    - Send periodic notifications to all users who own active objects of type P reminding them of what they own.
    - There is a security alert for objects of type P; send a notification to all users who own these types of objects to take a specific remedial action.
    - Delete or disable a set of objects based on expiration (or some other criteria).
    These objects are directly managed through their own applications (Active Directory, MySql, file systems, etc.) and may even have their own notification systems, but I want to centralize this into an "object management system". The OMS should allow (see the sketch below):
    - association with an external identity provider that defines who the users and groups are (e.g., LDAP, Active Directory)
    - creation of objects
    - association of an object with a specific user and/or group
    - association with an expiration date
    - creation of flexible reporting, including letting users know what objects they currently own and their expiration dates
    - integration with an external object "provider" via a plug-in
    We could write something from scratch, but I am hoping there is something already out there that will help, either an entire application or a set of libraries that provide much of what is needed. Any ideas?
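
    For illustration only, a minimal Java sketch of the core record such an OMS would track, distilled from the requirements above; every name here is hypothetical, not a reference to any existing product:

        import java.time.LocalDate;

        /** Hypothetical sketch of the unit of tracking in an OMS (names invented). */
        record ManagedObject(
                String id,            // e.g. certificate serial, DB name, account id
                String type,          // "ssl-cert", "database", "list-serve", ...
                String ownerUser,     // individual owner from the identity provider
                String ownerGroup,    // owning department/group
                String provider,      // plug-in that actually manages the object
                LocalDate expires) {  // drives notifications and cleanup

            /** Expiration check that reports and retention policies would build on. */
            boolean expired(LocalDate today) {
                return expires != null && expires.isBefore(today);
            }
        }

    Nearly all of the chores listed above (ownership transfer, expiry sweeps, per-user reports) reduce to queries and updates over records of roughly this shape.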

    Read the article

  • Can't install Linux drivers for ASUS N-13 wireless adapter

    - by jcc
    I have a USB N13 wireless adapter with an install CD -- for Windows. I downloaded the Linux drivers for the N13 from ASUS. Disregard the install CD that came with the adapter; it's for Windows. I then installed the Windows Wireless Drivers app from the Software Center in Ubuntu 12.10. The problem is me: I am a newbie with all things Linux -- software sources, GDebi, the default archive manager, Synaptic package manager, and the Terminal. The downloaded driver file is a .zip file. I managed to extract it to a tar.gz file and then open that to get the contained files. When I use the Windows Wireless Drivers program, it ends up telling me there is no .inf file and goes no further. It wants to install an .inf file, but I don't even see one in all the files. Can someone please help me? I think you can tell by my wording I don't have a clue. I hope this isn't too chatty; I've tried to be explicit and to the point. Thank you. Oh, this is on an ASUS LAPTOP K53E. I've looked all over Ask Ubuntu and finally found some questions even on the N13, but they didn't help; there were still some differences in the exact problem.

    Read the article

  • Help with dual booting Windows 8.1 Professional and Ubuntu 13.10

    - by user1292548
    I recently installed a clean version of Windows 8.1 Professional on my Lenovo Y500 (with a Samsung 256GB 840 Pro SSD). I have Windows all set up and running normally. I am trying to dual boot Windows 8.1 and Ubuntu 13.10, but the installation procedure doesn't offer "Install alongside...", and it doesn't show my SSD partitions correctly when I choose the "Something Else" option. I have created a 25GB partition of free space in the Windows disk manager, but the Ubuntu installation screen shows the whole drive as free space. I have tried installing from both a burned .ISO disk and a bootable USB; the results are the same. Windows Disk Management screen: http://imageshack.us/a/img855/9504/59zu.jpg The Ubuntu installation screen: http://imageshack.us/a/img62/2712/9g6i.jpg I ran into this problem before when trying to dual boot Ubuntu and Windows 7 Professional a month ago, but I gave up and never resolved the issue. --EDIT-- I tried what Eero Aaltonen suggested, and this is my result:
    ubuntu@ubuntu:~$ sudo parted /dev/sda print
    Warning: /dev/sda contains GPT signatures, indicating that it has a GPT table. However, it does not have a valid fake msdos partition table, as it should. Perhaps it was corrupted -- possibly by a program that doesn't understand GPT partition tables. Or perhaps you deleted the GPT table, and are now using an msdos partition table. Is this a GPT partition table?
    Yes/No? yes
    Model: ATA Samsung SSD 840 (scsi)
    Disk /dev/sda: 256GB
    Sector size (logical/physical): 512B/512B
    Partition Table: gpt
    Number  Start  End  Size  File system  Name  Flags
    ubuntu@ubuntu:~$

    Read the article

  • How to connect MTS MBlaze on ubuntu 11.04

    - by murali_ma
    I have installed Ubuntu 11.04 alongside Windows XP, and I have an MTS MBlaze USB modem (my service provider is MTS MBlaze, India). I want to use the MBlaze in Ubuntu, so I followed these steps to set up the connection: from the task bar, Edit Connections -> Mobile Broadband -> choose country -> MTS MBlaze, then OK. I entered the username and password ([email protected] and password MTS) and clicked OK. Then I enabled Mobile Broadband and selected "MTS MBlaze connection 1" (the connection name) from the task bar. I followed the steps from http://randomshandom.wordpress.com/2010/12/20/how-to-connect-mts-mblaze-device-in-ubuntu-10-10-11-04/#more-3 The first time, it connected successfully and I could browse the internet. But after I restarted the system and tried to connect, it no longer connects; it shows "Modem network disconnected". I have tried many times, and deleted and recreated the connection, but it does not help. When I try to connect, the device does seem to find the network -- I can see the signal bars.

    Read the article

  • suggestions for lonewolf dev setup

    - by d33j
    I'm looking for some suggestions for a better development setup. Background: I'm a crusty old software engineer (mostly Java of late) and I have around 50-100 incomplete Java projects scattered everywhere -- USB keys, HDDs, spread across 5 or 6 computers, etc. -- which have been put on hold for a few years (ie: family). I have no version control at home. I've been using IntelliJ for around 10 years, so that's the only constant. I'm thinking of nominating one machine as a headless server to put all my projects on, maybe an Ubuntu box; that way it won't matter which device I'm on, all my projects can be accessed (and I don't have to waste time actually looking for them). I don't need to access the code over the net. These are my own 'happy place' projects, so I only work on them when I'm at home. However, I can see the benefit of the tasking app being online; that way if I think of something while on public transport, let's say, I can add it then and there. But it's not a requirement -- I can wait until I get home to create tasks. Summary: I need some sort of version control so I can roll back mistakes, and some sort of simple tasking software where I can assign tasks to myself for later, when I get time. I use Subversion, Sonar, Jira and Crucible at work, but I think that's a little overkill for me. What do you suggest?

    Read the article

  • wrong kernel running after install

    - by ticktockhouse
    I have installed Ubuntu 14.04 from unetbootin. When it reboots after the install, uname -r says: 3.5.0-17-generic ... this means that no modules have loaded for the kernel that is actually installed (3.13.0-32-generic). Does anyone know why this kernel should be installed by the install process? Is it an artifact of using Unetbootin? Booting into the Unetbootin image gives the correct kernel, and thus the modules load. Knowing why is one thing, but I'm not sure how to remedy it now. Because no modules are loaded, I can't connect to the network or connect a USB drive. I've tried update-grub, which seems to find the correct kernel but doesn't seem to tell the system to boot from it. I've also tried selecting the kernel at boot time using the "Advanced Options for Ubuntu", and the 3.13.x kernel is the only one listed. Selecting it led to the 3.5.x kernel stubbornly loading. I'm a fairly accomplished sysadmin, but this one has me flummoxed :) Can anyone help?

    Read the article

  • Network print to brother MFC-7420

    - by trampster
    I am trying to print to a Brother MFC-7420 from my Ubuntu 10.04 machine. The Brother is attached to a Windows XP machine and is shared. This is what I have tried: System -> Administration -> Printing, Add, expand Network Printer, Windows Printer via SAMBA, Browse (I can find the printer no problem here), Forward, Choose Driver dialog, Brother -- my printer is not in this list. So the next thing I tried was to download the printer driver from here: http://welcome.solutions.brother.com/bsc/public_s/id/linux/en/download_prn.html The driver installed fine but my printer still does not appear in the list. I also tried installing the CUPS wrapper, but that gave the following error:
    Restarting Common Unix Printing System: cupsd [ OK ]
    cp: cannot stat `/usr/share/cups/model/MFC7420.ppd': No such file or directory
    dpkg: error processing cupswrappermfc7420 (--install):
    subprocess installed post-installation script returned error exit status 1
    Errors were encountered while processing:
    cupswrappermfc7420
    I tried connecting the printer directly, but even though I have installed the driver, when I go to Printers and click on the printer (it shows up fine as a USB printer) it says "searching for drivers" and then gives me a list -- the same list as before, which doesn't have my printer. It really shouldn't be this hard. On Windows you don't have to install anything, it just works, and the same is true for my brother's Mac. How do I print to my printer?

    Read the article

  • Reflective discovery of an inner class in an API

    - by wassup
    Let me ask you, as this has bothered me for quite a while but appears, subjectively, to be the best solution for my problem: is reflective discovery of an inner class for API purposes really that bad an idea? First, let me explain what I mean by "reflective discovery" and all that stuff. I am sketching an API for a Java database system that will be centered around block-based entities (don't ask me what that means -- that's a long story), and those entities can be read and returned to the Java code as objects subclassed from the Entity class. I have an Entity.Factory class that, by means of fluent interfaces, takes a Class<? extends Entity> argument and then uses an instance of Section.Builder, Property.Builder, or whatever builder the entity has, to put it into the back-end storage. The idea of registering all entity types and their builders just doesn't appeal to me, so I thought the closest solution that would satisfy my design needs would be to discover, using reflection, all inner classes of Entity classes and find one that's called Builder. Looking for some expert insight :) And if I missed some important design details (which could happen as I tried to make this question as concise as possible), just tell me and I'll add them.
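
    As a sketch of the discovery step being described (hypothetical entity names, not the asker's actual API), the scan can use Class.getDeclaredClasses() to find a member class literally named Builder; caching the result per entity type keeps the reflection cost to a one-time lookup:

        import java.util.Optional;

        public final class BuilderDiscovery {

            /** Hypothetical entity type that follows the inner-Builder convention. */
            static class Section {
                static class Builder { }
            }

            /** Scans declared member classes for one named "Builder". */
            static Optional<Class<?>> findBuilder(Class<?> entityType) {
                for (Class<?> inner : entityType.getDeclaredClasses()) {
                    if (inner.getSimpleName().equals("Builder")) {
                        return Optional.of(inner);
                    }
                }
                return Optional.empty();
            }

            public static void main(String[] args) {
                // Prints the discovered class name; in a real factory this lookup
                // would be cached in a Map<Class<?>, Class<?>> per entity type.
                findBuilder(Section.class).ifPresent(b -> System.out.println(b.getName()));
            }
        }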

    Read the article

  • Accessing second hard drive

    - by Jonathan
    So I recently installed Ubuntu 10.10 64-bit on my computer. I installed it on my 60GB SSD, and during the installation it never acknowledged the existence of my second hard drive. The hard drive that I keep all my files on, and which I want to make my home folder if I can, is a Western Digital Caviar Black 1TB SATA 6Gb/s 64MB cache (WD1002FAEX). I've read the following: https://help.ubuntu.com/community/Mount but honestly cannot work out how to access the hard drive from my Ubuntu installation. I did have Windows 7 64-bit prior to installing Ubuntu. I have backed up all the files on the drive, but if I could just access them straight off, that would be super cool. Does anyone know how I can use the second hard drive? Thank you for your help
    EDIT: The following directories are currently in my /dev/ folder: ati/, block/, bsg/, bus/, char/, cpu/, disk/, input/, mapper/, net/, pktcdvd/, pts/, shm/, snd/, and usb/
    EDIT: Result from sudo fdisk -l
    Disk /dev/sda: 60.0 GB, 60022480896 bytes
    255 heads, 63 sectors/track, 7297 cylinders
    Units = cylinders of 16065 * 512 = 8225280 bytes
    Sector size (logical/physical): 512 bytes / 512 bytes
    I/O size (minimum/optimal): 512 bytes / 512 bytes
    Disk identifier: 0x000d2dfd
    Device Boot  Start   End    Blocks    Id  System
    /dev/sda1 *      1   6994  56174592  83  Linux
    /dev/sda2     6994   7298   2438145   5  Extended
    /dev/sda5     6994   7298   2438144  82  Linux swap / Solaris
    @djeykib So very close to fixing it... unfortunately on the last command you gave it says this:
    $ sudo apt-get install linux-lts-backport-natty
    Reading package lists... Done
    Building dependency tree
    Reading state information... Done
    E: Unable to locate package linux-lts-backport-natty
    Checking on http://www.ubuntuupdates.org/ppas reveals that it is only available for 10.04. Looks like I'll have to unplug and re-plug hardware if I want it working still :(

    Read the article

  • Oracle OpenWorld 2012 Hands-on Lab: “Leading Your Everyday Application Integration Projects with Enterprise SOA”

    - by Lionel Dubreuil
    Sharpen your Oracle skill sets and master Oracle technology in Oracle OpenWorld Hands-on Labs. In self-paced, practical learning sessions covering everything from business applications to middleware, database, storage, and enterprise management solutions, you'll discover new ways to derive maximum benefit from your Oracle hardware and software solutions. Oracle experts will be available in person to answer questions and guide you through each lab. Hands-on Labs fill up early, and seats are limited, so don't be late. HOL10093 - Leading Your Everyday Application Integration Projects with Enterprise SOA is scheduled for:
    Date: Monday, Oct 1
    Time: 10:45 AM - 11:45 AM
    Location: Marriott Marquis - Salon 5/6
    In this Hands-on Lab, experience firsthand how Oracle Enterprise Repository, Oracle Application Integration Architecture (AIA) Foundation Pack, and Oracle SOA Suite work together to help you drive your enterprisewide integration projects. From asset management, discovery, and management in Oracle Enterprise Repository to integration of content in Oracle AIA Foundation Pack operating on the Oracle SOA Suite platform, discover how you can develop integrations to support business agility. Take advantage of Oracle-delivered integration assets and validate your services for compliance, within Oracle JDeveloper. You will get your hands on the tools and talk with Oracle experts in this hands-on lab. Objectives for this session are to:
    - Use Oracle Enterprise Repository to manage application interfaces, composite applications, and business processes
    - See how Oracle Enterprise Repository can benefit every service-based application integration project
    - Learn how to govern services through the software lifecycle and validate your services for compliance

    Read the article

  • How does datomic handle "corrections"?

    - by blueberryfields
    tl;dr:
    - Rich Hickey describes Datomic as a system that implicitly deals with timestamps associated with data storage.
    - In my experience, data is often imperfectly stored in systems and on many occasions needs to be retroactively corrected (i.e., the question "was a true on Tuesday at 12:00pm?" will often have an incorrect answer stored in the database).
    - This seems like a spot where the abstractions behind Datomic might break -- do they? If they don't, how does the system handle such corrections?
    Rich Hickey, in several of his talks, justifies the creation of Datomic and explains its benefits. His work, if I understand correctly, is motivated by the core insight that humans, when speaking about data and facts, implicitly associate some of the related context into their work (a date-time). By pushing the work required to manage the implicit date-time component of context into the database, he has created a system that is both much easier to understand and much easier to program. This turns out to be relevant to most database programmers in practice -- his work saves everyone a lot of time managing complex, hard-to-produce/debug/fix time queries. However, especially in large databases, data is often damaged or incorrect (maybe it was not input correctly, maybe it eroded over time, etc.). While most database updates are insertions of new facts, and should indeed be treated that way, a non-trivial subset of the work required to manage time queries has to do with retroactive updates. I have yet to see any documentation that explains how such corrections, or retroactive updates, are handled by Datomic; in my experience, they are a non-trivial (and incredibly difficult to deal with) subset of the time-related data manipulation that database programmers are faced with. Does Datomic gracefully handle such updates? If so, how?

    Read the article

  • The Minimalist Approach to Content Governance - Request Phase

    - by Kellsey Ruppel
    Originally posted by John Brunswick. For each project, regardless of size, it is critical to understand the required ownership, business purpose, prerequisite education / resources needed to execute, and success criteria around it. Without doing this, there is no way to get a handle on the content life-cycle, resulting in a mass of orphaned material. This lowers the quality of end user experiences. The good news is that by using a simple process in this request phase, we will not have to revisit this phase unless something drastic changes in the project. For each of the elements mentioned above, the why, how (technically focused) and impact are outlined with the intent of providing the most value to a small team.
    1. Ownership
    Why - Without ownership information it will not be possible to track and manage any of the content and take advantage of many features of enterprise content management technology. To hedge against this, we need to ensure that both an individual and their group or department within the organization are associated with the content.
    How - Apply metadata that indicates the owner and the department or group that has responsibility for the content.
    Impact - It is possible to keep the content system optimized by running native reports against the metadata and acting on them based on what has been outlined for success criteria. This will maximize end user experience, as content will be faster to locate and more relevant to the user by virtue of working through a smaller collection.
    2. Business Purpose
    Why - This simple step will weed out requests that have tepid justification, as users will most likely not spend the effort to request resources if they do not have a real need.
    How - Use a simple online form to collect the request and workflow it to management, native to the content system.
    Impact - Minimizes the amount of user-generated content that is of low value to the organization.
    3. Prerequisite Education / Resources Needed
    Why - If a project cannot be properly staffed, the probability of its success is going to be low. By outlining the resources needed - in both skill set and duration - it will cause the requesting party to think critically about the commitment needed to complete their project and what gap must be closed with regard to education of those resources.
    How - In the simple request form outlined above, resources and a commitment to fulfilling any needed education should be included, with a brief acceptance clause that outlines the requesting party's commitment.
    Impact - This stage acts as a formal commitment to ensuring that resources are able to execute on the vision for the project.
    4. Success Criteria
    Why - Similar to the business purpose, this is a key element in helping to determine if the project and its respective content should continue to exist if it does not meet its intended goal.
    How - Set a review point for the project content that will check the progress against the originally outlined success criteria and then determine the fate of the content. This can even include logic that will tell the content system to remove items that have not been opened by any users in X amount of time.
    Impact - This ensures that projects and their contents do not live past their useful lifespans. Just as with orphaned content, non-relevant information will slow users' access to the relevant materials for their jobs.
    Request Phase Summary
    With a simple form that outlines the ownership of a project and its content, business purpose, education and resources, along with success criteria, we can ensure that an enterprise content management system will stay clean and relevant to end users - allowing it to deliver the most value possible. The key here is to make it straightforward to make the request and let the content management technology manage as much as possible through metadata, retention policies and workflow. Doing these basic steps will allow project content to get off to a great start in the enterprise! Stay tuned for the next installment - the "Create Phase" - covering security access and workflow involved in content creation, enabling a practical layer of governance over our enterprise content repository.

    Read the article

  • Gmail Now Supports Google Drive Integration; Share Files Up to 10GB

    - by Jason Fitzpatrick
    Gmail users can now easily send large files thanks to Google Drive's increased integration with Gmail -- blow through the 25MB in-email attachment limit and share files up to 10GB. From the official Gmail announcement: Have you ever tried to attach a file to an email only to find out it's too large to send? Now with Drive, you can insert files up to 10GB - 400 times larger than what you can send as a traditional attachment. Also, because you're sending a file stored in the cloud, all your recipients will have access to the same, most up-to-date version. Like a smart assistant, Gmail will also double-check that your recipients all have access to any files you're sending. This works like Gmail's forgotten attachment detector: whenever you send a file from Drive that isn't shared with everyone, you'll be prompted with the option to change the file's sharing settings without leaving your email. It'll even work with Drive links pasted directly into emails. The new Gmail/Drive integration is rolling out in waves to users over the next few days and is accessible via the new Gmail compose window.

    Read the article

  • Great tools to demonstrate and showcase the ODA appliance

    - by user12842161
    1. Introduction to the Oracle Database Appliance Video: http://www.youtube.com/watch?v=DU0hCO_-q-8
    2. Oracle Database Appliance Configurator (run in standalone mode on your PC): http://www.oracle.com/technetwork/server-storage/database-appliance/oracle-database-appliance-manager-1352946.zip
    3. Oracle Database Appliance 3D Demo: http://oracle.com.edgesuite.net/producttours/3d/databaseappliance/

    Read the article

  • Wireless connected but internet not working, modem is fine because I'm typing over wifi on my android

    - by Sandro Livaja
    I can't connect to the Internet in 11.10. I tried switching my wifi card to another USB slot but nothing changed. Is it possible that the card is able to connect while not being able to receive packets?
    $ ifconfig
    eth0      Link encap:Ethernet  HWaddr 40:61:86:35:11:b0
              UP BROADCAST MULTICAST  MTU:1500  Metric:1
              RX packets:0 errors:0 dropped:0 overruns:0 frame:0
              TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
              collisions:0 txqueuelen:1000
              RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)
              Interrupt:43 Base address:0x6000
    lo        Link encap:Local Loopback
              inet addr:127.0.0.1  Mask:255.0.0.0
              inet6 addr: ::1/128 Scope:Host
              UP LOOPBACK RUNNING  MTU:16436  Metric:1
              RX packets:0 errors:0 dropped:0 overruns:0 frame:0
              TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
              collisions:0 txqueuelen:0
              RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)
    wlan0     Link encap:Ethernet  HWaddr 74:ea:3a:8c:d5:5c
              inet addr:192.168.2.5  Bcast:192.168.2.255  Mask:255.255.255.0
              inet6 addr: fe80::76ea:3aff:fe8c:d55c/64 Scope:Link
              UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
              RX packets:12 errors:0 dropped:0 overruns:0 frame:0
              TX packets:102 errors:0 dropped:0 overruns:0 carrier:0
              collisions:0 txqueuelen:1000
              RX bytes:1209 (1.2 KB)  TX bytes:15329 (15.3 KB)

    Read the article

  • Lost Windows 7 files

    - by Pader
    My intention was to have a dual boot system with Ubuntu and Windows 7. Obviously I did something wrong, because although I had a boot menu (is it normal for it to appear DOS-like?) which gave me the option of booting into Windows 7, I was unable to do so. Also, when I booted into Ubuntu, my Windows 7 drive was not available. The Windows 7 drive was an internal 1TB drive partitioned into a 200GB (OS) partition and a second partition making up the remainder. I was still unable to access this Windows 7 drive even after deleting Ubuntu, as I kept getting a 'requires an NTFS drive' error, or something similar. I could not even re-install Windows 7, as the disk was not recognised. I did eventually get the drive back, but I cannot for the life of me remember how. I tried to recover my lost Windows 7 data using Ontrack Easy Recovery (which has always been successful in the past for post-format recovery), but it would not recognise the 1TB drive although it was now formatted as NTFS. From other posts on this site, I gather that this is considered a 'Windows 7 site' problem by Linux users. However, I would dearly love to recover some of my lost Windows 7 files. I had resigned myself to a lot of lost personal data, but I happened to notice that a 2TB drive I had connected through a USB docking station had been repartitioned. It must have happened when I installed Ubuntu, as I can think of no other explanation. I certainly do not remember consciously requiring Ubuntu to do this. The additional two partitions on the 2TB drive, the original Windows

    Read the article

  • Implementing new required feature after software release

    - by TiagoBrenck
    Fake Scenario
    There is a piece of software that was released 1 year ago. The software is used to map and register all kinds of animals on our planet. When the software was released, the client only needed to know the scientific name of the animal, a flag indicating whether it is at risk of extinction, and a dangerousness scale (this is a fake software and specification; I don't want to discuss that here). There are already 100,000 animal records saved in the DB.
    New Feature
    One year later, the client wants a new feature. It is really important to him to know each animal's class, so this must be a required field. He asks me to add a field for the animal's class and make it required. (Or maybe where the animal was discovered.)
    Problem
    I already have 100,000 recorded animals without a class or place of discovery, but I need to add a new column to store this information, and this column can't be null. I don't have a default value for this situation (there is no default animal class or place of discovery). I don't want to keep the requirement rule only in my software; my DB must enforce this requirement too (I like to keep business rules in the DB as well). What are the alternatives for solving this situation? I am in a situation where this new feature cannot be previewed or reviewed for the existing records; the time has already passed, and I can't go back in time to collect it.
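
    One common migration pattern for this situation -- offered as a hedged sketch, not the only answer -- is to add the column as nullable, backfill the existing rows with an explicit sentinel value agreed upon with the client, and only then tighten the NOT NULL constraint. A minimal JDBC sketch (table, column, and connection details are hypothetical; the ALTER syntax shown is PostgreSQL-flavored):

        import java.sql.Connection;
        import java.sql.DriverManager;
        import java.sql.Statement;

        public class AddRequiredColumnMigration {
            public static void main(String[] args) throws Exception {
                // Hypothetical JDBC URL and schema; adjust for your DBMS.
                try (Connection conn = DriverManager.getConnection(
                             "jdbc:postgresql://localhost/fauna", "app", "secret");
                     Statement st = conn.createStatement()) {
                    // 1. Add the column as nullable so the 100,000 existing rows stay valid.
                    st.executeUpdate("ALTER TABLE animal ADD COLUMN animal_class VARCHAR(64)");
                    // 2. Backfill legacy rows with an explicit sentinel: 'unknown' becomes
                    //    a deliberately recorded state instead of a silent NULL.
                    st.executeUpdate("UPDATE animal SET animal_class = 'UNCLASSIFIED' " +
                                     "WHERE animal_class IS NULL");
                    // 3. Only now can the business rule live in the DB as a hard constraint.
                    st.executeUpdate("ALTER TABLE animal ALTER COLUMN animal_class SET NOT NULL");
                }
            }
        }

    The sentinel keeps the requirement enforced in the DB while making "class unknown" an explicit, queryable state for the legacy records rather than a loophole in the constraint.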

    Read the article

  • New SPC2 benchmark- The 7420 KILLS it !!!

    - by user12620172
    This is pretty sweet. The new SPC2 benchmark came out last week, and the 7420 not only came in 2nd of ALL speed scores, but came in #1 for price per MBPS. Check out this table. The 7420 score of 10,704 makes it really fast, but that's not the best part: the price one would have to pay in order to beat it is ridiculous. You can go see for yourself at http://www.storageperformance.org/results/benchmark_results_spc2 The only system on the whole page that beats it was over twice the price per MBPS. Very sweet for Oracle. So let's see: the 7420 is the fastest per dollar; the 7420 is the cheapest per MBPS; the 7420 has incredible built-in features, management services, analytics, and protocols. It's extremely stable, and as a cluster it has no single point of failure. It won the Storage Magazine award for best NAS system this year. So how long will it be before it's the number 1 NAS system on the market? What are the biggest hurdles still stopping widespread adoption of the ZFSSA? From what I see, it's three things:
    1. Administrators' comfort level with older legacy systems.
    2. Politics.
    3. Past issues with Oracle Support.
    I see all of these issues crop up regularly. Number 1 just takes time and education. Number 3 takes time with our new, better, and growing support team; many of them came from Oracle, and there were growing pains when they went from a straight software model to also having to support hardware. Number 2 is tricky, but it's the job of the sales teams to break through the internal politics and help their clients see the value in Oracle hardware systems. Benchmarks like this will help.

    Read the article
