Search Results

Search found 18786 results on 752 pages for 'document storage'.

Page 18/752 | < Previous Page | 14 15 16 17 18 19 20 21 22 23 24 25  | Next Page >

  • Why might my Fedora 15 live USB persistent storage not work?

    - by Richard J Foster
    I created a Fedora 15 "live" USB stick using the live USB creator found at https://fedorahosted.org/liveusb-creator/ and the Fedora 15 i686 Desktop ISO image, with the persistent storage space set to 4096MB. (The USB stick I have available has an 8GB capacity, so there should be plenty of space.) Fedora appears to boot correctly; however, it seems that the persistent storage is not working. To verify this, I opened a terminal prompt, then did su - followed by yum update yum. As expected, I was informed that a new version was available (the live CD contains version 3.2.29-4; at the time of writing, 3.2.29-6 is the current version). After installing, I verified that the new version was installed by typing yum --version. I then shut down the system using shutdown now. After the system had shut down, I rebooted and returned to the terminal prompt. On typing yum --version, I was informed that the version was 3.2.29-4 (i.e. the original version). Why might the persistent storage not be working? Is there anything I can do to fix it?

  • External storage for 2TB of backups and 4TB of data: what RAID level? Hardware vs. software?

    - by Jerry Mayers
    I have a Mac Mini set up as a media center/file server. Currently I just have a hodgepodge mess of external drives for storage. I'm maxed out, I have some new laptops on the way with much larger drives, and I need to work out a good storage solution for backing them up as well as storing media on the server. I need around 2 TB of storage for the Time Machine backups from my various systems and around 2 TB more for media, and I would like to build this to handle around 6 TB total so I have some growing room. Since I'm using a Mac Mini as the server, I need to use external enclosure(s) that support USB 2, FireWire 800 (preferred), or gigabit Ethernet. Performance isn't a huge concern, since the majority of the access from other computers is done over 802.11n. I plan on using 2 TB drives for the final version, but initially I'll try to use my existing two 1 TB drives plus some new 2 TB drives, and swap the 1 TB ones out as I fill up. As to the actual questions: Should I use hardware RAID in some enclosure? If the enclosure dies, I'd have to find an identical one to get to my data, right? Wouldn't software RAID be better, since I could use any method of connecting the drives to the system? Remember, OS X Server is my OS. If I had to reinstall OS X, could I restore the software RAID easily? What RAID level should I use? For the 2 TB used for the Time Machine disk I don't see why I need RAID at all, just a single 2 TB drive, since it's already the backup; but the remaining 4 TB would be the only copy of the data, so I should build in some redundancy. Years ago I had a RAID 5 setup using a cheap RAID PCI card with a 2 TB array, and when a drive died it took 48 hours to rebuild. Is that crazy slow for an array of this size, or is it to be expected? Any suggestions as to drive enclosures?
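
    As a rough sanity check on the 48-hour rebuild: a rebuild has to read or write every block in the array, so the time is at best the array size divided by sustained throughput. The sketch below uses assumed throughput figures purely for illustration, not measurements of any particular enclosure.

      // Hedged back-of-the-envelope rebuild-time estimate; the throughput
      // figures are assumptions for illustration, not measured values.
      public class RebuildEstimate {
          public static void main(String[] args) {
              double arrayBytes = 2e12;                 // 2 TB array
              double[] throughputMBps = {25, 60, 120};  // old PCI card, USB2/FW800, decent SATA
              for (double mbps : throughputMBps) {
                  double hours = arrayBytes / (mbps * 1e6) / 3600;
                  System.out.printf("At %.0f MB/s a full 2 TB pass takes about %.1f hours%n", mbps, hours);
              }
          }
      }

    A real rebuild is usually slower than a raw streaming pass, since the array keeps serving other I/O and the controller has to compute parity, so 48 hours on an old PCI RAID card is slow but not implausible.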

  • Which is the best smart automatic file replication solution for cloud-storage-based systems?

    - by TORr0t
    I am looking for a solution for a project I am working on. We are developing a web system where people can upload their files and other people can download them (similar to the rapidshare.com model). The problem is that some files are in much higher demand than others. The scenario: I upload my birthday video to myproject.com and share it with all of my friends; it is stored on one of the storage nodes, which has a 100 Mbit connection. Once all of my friends want to download the file, they can't, because the bottleneck is that 100 Mbit link, which is roughly 12 MB per second; with 1000 friends downloading at once, each of them gets only about 12 KB per second (and that ignores the disk having to serve the same file to everyone). My network infrastructure is as follows: one 1 Gbit (client) server connected to 4 storage nodes that each have a 100 Mbit connection. The 1 Gbit server can handle the traffic of 1000 users if the storage tier can stream a file to it faster than a single node's 100 Mbit link allows; visitors would then stream directly from the client server instead of from the storage nodes. I can achieve that by replicating the file onto two nodes. But I don't want to replicate every file uploaded to my network, since that costs much more. So I need a cloud-based system that automatically pushes a file out to replica nodes when demand for it is high, and deletes the extra copies when demand is low, so the file ends up on only one node. I have looked at Gluster and asked in their IRC channel; Gluster can't do such a thing. It can only replicate all of the files or none of them, but I need the cluster software to do this automatically. Any solutions (other than recommending Amazon S3)?
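
    No system named in the question does this out of the box, so purely as a sketch of the policy being described (track recent demand per file and raise or lower its replica count), here is a hypothetical outline. The Replicator interface, thresholds, and class names are all invented for illustration and would have to be wired to whatever storage cluster is actually used.

      import java.util.Map;
      import java.util.concurrent.ConcurrentHashMap;

      // Hypothetical sketch of a demand-based replication policy.
      // The Replicator interface and the thresholds are invented for illustration.
      public class DemandBasedReplication {
          interface Replicator {                      // to be backed by the real storage cluster
              void setReplicaCount(String fileId, int replicas);
          }

          private final Map<String, Integer> downloadsLastHour = new ConcurrentHashMap<>();
          private final Replicator replicator;

          public DemandBasedReplication(Replicator replicator) { this.replicator = replicator; }

          public void recordDownload(String fileId) {
              downloadsLastHour.merge(fileId, 1, Integer::sum);
          }

          // Called periodically (e.g. once per hour): hot files get extra replicas,
          // cold files drop back to a single copy.
          public void rebalance() {
              for (Map.Entry<String, Integer> e : downloadsLastHour.entrySet()) {
                  int demand = e.getValue();
                  int replicas = demand > 500 ? 3 : demand > 100 ? 2 : 1;
                  replicator.setReplicaCount(e.getKey(), replicas);
              }
              downloadsLastHour.clear();              // start a fresh window
          }
      }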

  • Microsoft Word Document Recovery seems unresolvable

    - by LarsTech
    We work on a server (Windows Server 2003 Enterprise) that a bunch of us developers remote into using Remote Desktop Connection, and now every "other" time I open up Word I get this Document Recovery pane: as you can see, "Delete" and "Show Repairs" are disabled (I don't think this is my document). "Open" or "Save As" works, but Word never marks the document as recovered; it just keeps coming up every "other" time I open Word. I don't even care about this document. How can I remove this document from the list so that it doesn't come up anymore? Update: As per Moab's comment, I had the user delete the file. But the Document Recovery pane still shows up with that item every other time Word is opened. The only difference now is that clicking "Open" produces the (obvious) error message. How can I clear this list? BTW, the user that owned this document was never getting the Document Recovery pane.

  • Using Bulk Operations with Coherence Off-Heap Storage

    - by jpurdy
    Some NamedCache methods (including clear(), entrySet(Filter), aggregate(Filter, …), invoke(Filter, …)) may generate large intermediate results. The size of these intermediate results may result in out-of-memory exceptions on cache servers, and in some cases on cache clients. This may be particularly problematic if out-of-memory exceptions occur on more than one server (since these operations may be cluster-wide) or if these exceptions cause additional memory use on the surviving servers as they take over partitions from the failed servers. This may be particularly problematic with clusters that use off-heap storage (such as NIO or Elastic Data storage options), since these storage options allow greater than normal cache sizes but do nothing to address the size of intermediate results or final result sets. One workaround is to use a PartitionedFilter, which allows the application to break up a larger operation into a number of smaller operations, each targeting either a set of partitions (useful for reducing the load on each cache server) or a set of members (useful for managing client result set sizes). It is also possible to return a key set, and then pull in the full entries using that key set. This also allows the application to take advantage of near caching, though this may be of limited value if the result is large enough to result in near cache thrashing.
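
    For illustration, the partition-by-partition approach described above looks roughly like the following in Java: the same filter is executed once per partition so that no single call has to materialize the full result set. The cache name and filter are made up, and the class and method names (DistributedCacheService, PartitionSet, PartitionedFilter) are quoted from the Coherence API as I recall it, so treat the exact signatures as an assumption to verify against your Coherence version.

      import java.util.Set;
      import com.tangosol.net.CacheFactory;
      import com.tangosol.net.DistributedCacheService;
      import com.tangosol.net.NamedCache;
      import com.tangosol.net.partition.PartitionSet;
      import com.tangosol.util.Filter;
      import com.tangosol.util.filter.EqualsFilter;
      import com.tangosol.util.filter.PartitionedFilter;

      // Sketch: run an entrySet(Filter) query one partition at a time to keep
      // intermediate results small. Cache name and filter are illustrative.
      public class PartitionedQuery {
          public static void main(String[] args) {
              NamedCache cache = CacheFactory.getCache("orders");
              Filter filter = new EqualsFilter("getStatus", "OPEN");

              DistributedCacheService service = (DistributedCacheService) cache.getCacheService();
              int partitionCount = service.getPartitionCount();
              PartitionSet partitions = new PartitionSet(partitionCount);

              for (int p = 0; p < partitionCount; p++) {
                  partitions.add(p);                   // query just this partition
                  Set entries = cache.entrySet(new PartitionedFilter(filter, partitions));
                  // process this partition's entries before moving on ...
                  System.out.println("partition " + p + ": " + entries.size() + " entries");
                  partitions.remove(p);
              }
          }
      }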

  • C programming multiple storage backends

    - by ahjmorton
    I am starting a side project in C which requires multiple storage backends to be driven by a particular piece of logic. These storage backends would each be linked in, with the decision of which one to use specified at runtime. So, for example, if I invoke my program with one set of parameters it will perform the operations in memory, but if I change the program configuration it would write to disk. The underlying idea is that each storage backend should implement the same protocol; in other words, the logic for performing operations should not need to know which backend it is operating on. Currently the way I have thought to provide this indirection is to have a struct of function pointers, with the logic calling these function pointers. Essentially the struct would contain all the operations needed to implement the higher-level logic, e.g.:

      typedef struct Context {
          void (*doPartOfDoOp)(void);
          int (*getResult)(void);
      } Context;

      // logic.h
      void doOp(Context *context) {
          // bunch of stuff
          context->doPartOfDoOp();
      }

      int getResult(Context *context) {
          // bunch of stuff
          return context->getResult();
      }

    My question is whether this way of solving the problem is one a C programmer would understand. I am a Java developer by trade but enjoy using C/C++. Essentially the Context struct provides an interface-like level of indirection. However, I would like to know if there is a more idiomatic way of achieving this.

  • Retrieve the content of Microsoft Word document using OpenXml and C#

    - by ybbest
    One of my tasks involves retrieving the contents of a Microsoft Word document (Word 2007 and above). I tried to search for some resources online without much luck; most of the examples are about writing content to a Word document using OpenXml. I decided to blog this as my own reference, and hopefully people who read this post will find it useful as well. Retrieving the contents of a Microsoft Word document using OpenXml is extremely simple.
    1. First, download and install the Open XML SDK 2.0 for Microsoft Office. (Download link)
    2. Create a Console application, then add DocumentFormat.OpenXml.dll and WindowsBase.dll to the project; you can find these DLLs in the .NET tab of the Add Reference window.
    3. Write the following code to grab the contents from the Word document and display them in the console window. You can download the complete source code here.
    References: Getting Started with the Open XML SDK 2.0 for Microsoft Office Walkthrough: Word 2007 XML Format Word Processing How To Open XML SDK 2.0 for Microsoft Office Office Developer Center openxmldeveloper Open XML Package Explorer
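
    The C# snippet referred to in step 3 is not reproduced in this excerpt. As a rough illustration of the underlying idea (a .docx file is a ZIP package whose main part is word/document.xml, with the visible text held in w:t elements), here is a sketch in Java that uses only the standard library; it is not the Open XML SDK, which is a .NET library, and the file name is just a placeholder.

      import java.io.InputStream;
      import java.util.zip.ZipEntry;
      import java.util.zip.ZipFile;
      import javax.xml.parsers.DocumentBuilderFactory;
      import org.w3c.dom.Document;
      import org.w3c.dom.NodeList;

      // Sketch: pull the visible text out of a .docx by reading word/document.xml
      // directly from the ZIP package. Error handling kept minimal for brevity.
      public class DocxTextDump {
          private static final String W_NS =
              "http://schemas.openxmlformats.org/wordprocessingml/2006/main";

          public static void main(String[] args) throws Exception {
              try (ZipFile zip = new ZipFile(args[0])) {          // e.g. report.docx
                  ZipEntry main = zip.getEntry("word/document.xml");
                  DocumentBuilderFactory dbf = DocumentBuilderFactory.newInstance();
                  dbf.setNamespaceAware(true);
                  try (InputStream in = zip.getInputStream(main)) {
                      Document doc = dbf.newDocumentBuilder().parse(in);
                      NodeList texts = doc.getElementsByTagNameNS(W_NS, "t"); // w:t runs
                      for (int i = 0; i < texts.getLength(); i++) {
                          System.out.print(texts.item(i).getTextContent());
                      }
                      System.out.println();
                  }
              }
          }
      }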

  • What is a good layout for a somewhat advanced home network and storage solution?

    - by Shaun
    My home network/storage needs are changing and I am searching for some opinions and starting points on what a good network/storage layout would be that can serve my needs for a few years into the future. I think I have a decent starting point for equipment, but I am also willing to invest fairly heavily in a solution that can last me for a while. I am a bit of a tech nerd and I have a moderate tolerance for setup of the solution. I would prefer that maintenance of the system be fairly low once it is set up, but I am willing to accept some tradeoffs.
    Existing equipment:
    - Router - Netgear WNDR3700 (gigabit)
    - Router - DLink Gamerlounge DGL-4300 (gigabit)
    - Switch - 16 port Trendnet green switch (gigabit)
    - Switch - 5 port Trendnet green (gigabit)
    - Computer - i7-950 office computer (gigabit ethernet)
    - Computer - Q6600 quad core media center, hooked up to TV, records shows (gigabit ethernet)
    - Computer - Acer 1810T ultraportable laptop (gigabit and N ethernet)
    - NAS - Intel SS4200-E (gigabit)
    - External hard drive - 2TB WD Green drive (eSATA)
    - All kinds of miscellaneous network-connected devices: TV, Blu-ray player, Verizon network extender, HDHomeRun TV tuners, etc.
    Requirements:
    - Robust backup solution for a growing collection of huge family picture files and personal files, around 1.5TB (including offsite backup)
    - Central location for all users' files, while also keeping them secure from each other
    - Storage for terabytes of movie backups and recorded TV, and access to them from all computers (maybe around 4TB eventually)
    - Possibility to host files for friends and family easily
    Nice to have:
    - Backup of the terabytes of movie backups
    Intriguing possibilities:
    - Capability to have users' Windows desktops and files look the same from all network computers
    I am not sure if the new Windows Home Server 2011 would fit into this well, whether I need a domain server, how best to organize my backups, or how to most effectively use RAID. Currently I am simply backing up all computers to a RAID 1 on the NAS box, which I was thinking could prevent a situation where I reach for a backup and find that the disk is corrupt. One possibility that I am thinking about now is simply using my media center PC with a huge RAID of hard drives on which all files are stored. Pseudo-backup of all files would be present because of the RAID, but important files would also be backed up off-site by carrying hard drives to work. But what if corruption seeps into the files and the corrupted data is then backed up? Does RAID protect against this? I really want to take next to zero risks with the irreplaceable files; I can handle some degree of risk with the movies and other files. I'm looking for critiques on this idea as well as other possibilities. To summarize, my goal is high functionality, media capability, and robust backup of irreplaceable files.

  • Document-oriented vs Column-oriented database fit

    - by user1007922
    I have a data-intensive application that desperately needs a database makeover. The general data model: there are records with record IDs (RIDs), grouped together by group IDs (GIDs). The records have arbitrary data fields (maybe 5-15), with a few of them mandatory and the rest optional, and thus sparse. The general use model: there are LOTS and LOTS of writes. Millions to billions of records are stored. Very often they are associated with new GIDs, but sometimes they are associated with existing GIDs. There aren't as many reads, but when they happen they need to be pretty fast, or at least constant speed regardless of the database size. And when a read happens, it will need to retrieve all the records/RIDs with a certain GID. I don't have a need to search by the record field values; primarily, I will need to query by the GID and maybe the RID. What database implementation should I use? I did some initial research between document-oriented and column-oriented databases and it seems the document-oriented ones are a good fit, model-wise. I could store all the records together under the same document key using the GID, but I don't really have any use for their ability to search the document contents itself. I like the simplicity and scalability of column-oriented databases like Cassandra, but how should I model my data in this paradigm for optimal performance? Should my key be the GID, and should I create a column for each record/RID (there may be thousands or hundreds of thousands of records in a group/GID)? Or should my key be the RID and ensure each row has a column for the GID value? What results in faster writes and reads under this model?
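
    For the Cassandra option, one common way to serve the "fetch everything in a group" read is to make the GID the partition key and the RID a clustering column, so that all records of a group live in one partition and a single query returns them, while writes spread across partitions as new GIDs appear. The sketch below uses the DataStax Java driver (3.x-style API); the keyspace, table, and replication settings are illustrative only, and the sparse optional fields are folded into a map rather than one column per field.

      import com.datastax.driver.core.Cluster;
      import com.datastax.driver.core.ResultSet;
      import com.datastax.driver.core.Row;
      import com.datastax.driver.core.Session;

      // Illustrative Cassandra model: GID as partition key, RID as clustering column.
      public class GroupedRecords {
          public static void main(String[] args) {
              try (Cluster cluster = Cluster.builder().addContactPoint("127.0.0.1").build();
                   Session session = cluster.connect()) {

                  session.execute("CREATE KEYSPACE IF NOT EXISTS demo WITH replication = "
                          + "{'class': 'SimpleStrategy', 'replication_factor': 1}");
                  session.execute("CREATE TABLE IF NOT EXISTS demo.records ("
                          + "gid text, rid text, mandatory1 text, mandatory2 text, "
                          + "optional map<text, text>, "      // sparse fields as a map
                          + "PRIMARY KEY (gid, rid))");       // one partition per group

                  session.execute("INSERT INTO demo.records (gid, rid, mandatory1, mandatory2, optional) "
                          + "VALUES ('g42', 'r1', 'a', 'b', {'color': 'red'})");

                  // A read fetches the whole group with a single-partition query.
                  ResultSet rs = session.execute("SELECT rid, mandatory1 FROM demo.records WHERE gid = 'g42'");
                  for (Row row : rs) {
                      System.out.println(row.getString("rid") + " -> " + row.getString("mandatory1"));
                  }
              }
          }
      }

    One caveat: with hundreds of thousands of records per GID the partitions become quite wide, which Cassandra tolerates but which is worth testing at realistic sizes.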

  • Speaking in Raleigh NC June 15th

    - by Andrew Kelly
    Just a heads up to those in the area that I will be speaking at the (TriPASS) Raleigh SQL Server user group on the 15th of June 2010. The topic is Storage & I/O Best Practices. The abstract is listed below: SQL Server relies heavily on a well configured storage sub-system to perform at its peak but unfortunately this is one of the most neglected or mis-configured areas of a SQL Server instance. Here we will focus on the best practices related to how SQL Server works with the underlying storage...(read more)

  • PowerShell One-Liner: Duplicating a folder structure in a SharePoint document library

    - by Darren Gosbell
    I was asked by someone at work the other day whether it was possible in SharePoint to create a set of top-level folders in one document library based on the set of folders in another library. One document library has a set of top-level folders that is basically a client list, and we needed to create the same top-level folders in another library. I knew that it was possible to open a SharePoint document library in Explorer using a UNC-style path, and that you could map a drive using a technique like this one: http://www.endusersharepoint.com/2007/11/16/can-i-map-a-document-library-as-a-mapped-drive/. But while Explorer would let us copy the folders, it would also take all of the folder contents, which was not what we wanted. So I figured that some sort of PowerShell script was probably the way to go, and it turned out to be even easier than I thought. The following script did it in one line, so I thought I would post it here in my "online memory". :)

      dir "\\sharepoint\client documents" | where {$_.PSIsContainer} | % {mkdir "\\sharepoint\admin documents\$($_.Name)"}

    I use "dir" to get a listing from the source folder, pipe it through "where" to get only objects that are folders, and then do a foreach (using the % alias) and call "mkdir".

  • Is there an online storage service that works with Windows 7 Backup and Restore? [closed]

    - by user57813
    I love Windows 7 Backup and Restore. However, I find it somewhat inconvenient to use an external hard drive: plug it in, take it out, put it away safely, and so on. Is there an online storage service to which Windows Backup can save the backup files? Can I make Windows Backup store the backed-up data on an online storage service? I do not want to manually upload the backup files; I want it to save directly to the cloud. It doesn't matter if the service costs me some $$$. I am using Windows 7 Ultimate 32-bit edition.

  • Solaris 11.1: Encrypted Immutable Zones on (ZFS) Shared Storage

    - by darrenm
    Solaris 11 brought both ZFS encryption and the Immutable Zones feature, and I've talked about the combination in the past. Solaris 11.1 adds a fully supported method of storing zones in their own ZFS using shared storage, so let's update things a little and put all three parts together. When using an iSCSI (or other supported shared storage) target for a zone, we can either let the Zones framework set up the ZFS pool or we can do it manually beforehand and tell the Zones framework to use the one we made earlier. To enable encryption we have to take the second path, so that we can set up the pool with encryption before we start to install the zones on it. We start by configuring the zone and specifying a rootzpool resource:

      # zonecfg -z eizoss
      Use 'create' to begin configuring a new zone.
      zonecfg:eizoss> create
      create: Using system default template 'SYSdefault'
      zonecfg:eizoss> set zonepath=/zones/eizoss
      zonecfg:eizoss> set file-mac-profile=fixed-configuration
      zonecfg:eizoss> add rootzpool
      zonecfg:eizoss:rootzpool> add storage \
        iscsi://zs7120-tvp540-c.uk.oracle.com/luname.naa.600144f09acaacd20000508e64a70001
      zonecfg:eizoss:rootzpool> end
      zonecfg:eizoss> verify
      zonecfg:eizoss> commit
      zonecfg:eizoss>

    Now let's create the pool and specify encryption:

      # suriadm map \
        iscsi://zs7120-tvp540-c.uk.oracle.com/luname.naa.600144f09acaacd20000508e64a70001
      PROPERTY    VALUE
      mapped-dev  /dev/dsk/c10t600144F09ACAACD20000508E64A70001d0
      # echo "zfscrypto" > /zones/p
      # zpool create -O encryption=on -O keysource=passphrase,file:///zones/p eizoss \
        /dev/dsk/c10t600144F09ACAACD20000508E64A70001d0
      # zpool export eizoss

    Note that the keysource above is just for this example; realistically you should probably use an Oracle Key Manager or some other better key storage, but that isn't the purpose of this example. Note, however, that it does need to be one of file://, https:// or pkcs11:, and not prompt, for the key location. Also note that we exported the newly created pool. The name we used here doesn't actually matter, because it will get set properly on import anyway. So let's go ahead and do our install:

      zoneadm -z eizoss install -x force-zpool-import
      Configured zone storage resource(s) from:
        iscsi://zs7120-tvp540-c.uk.oracle.com/luname.naa.600144f09acaacd20000508e64a70001
      Imported zone zpool: eizoss_rpool
      Progress being logged to /var/log/zones/zoneadm.20121029T115231Z.eizoss.install
        Image: Preparing at /zones/eizoss/root.
        AI Manifest: /tmp/manifest.xml.ujaq54
        SC Profile: /usr/share/auto_install/sc_profiles/enable_sci.xml
        Zonename: eizoss
        Installation: Starting ...
        Creating IPS image
        Startup linked: 1/1 done
        Installing packages from:
          solaris
            origin: http://pkg.us.oracle.com/solaris/release/
        Please review the licenses for the following packages post-install:
          consolidation/osnet/osnet-incorporation (automatically accepted, not displayed)
        Package licenses may be viewed using the command:
          pkg info --license <pkg_fmri>
        DOWNLOAD       PKGS        FILES    XFER (MB)   SPEED
        Completed   187/187  33575/33575  227.0/227.0  384k/s
        PHASE                            ITEMS
        Installing new actions     47449/47449
        Updating package state database  Done
        Updating image state             Done
        Creating fast lookup database    Done
        Installation: Succeeded
        Note: Man pages can be obtained by installing pkg:/system/manual
        done.
        Done: Installation completed in 929.606 seconds.
        Next Steps: Boot the zone, then log into the zone console (zlogin -C)
          to complete the configuration process.
      Log saved in non-global zone as /zones/eizoss/root/var/log/zones/zoneadm.20121029T115231Z.eizoss.install

    That was really all we had to do; when the install is done, boot it up as normal. The zone administrator has no direct access to the ZFS wrapping keys used for the encrypted pool the zone is stored on. Due to how inheritance works in ZFS, he can still create new encrypted datasets that use those wrapping keys (without them ever being inside a process in the zone), or he can create encrypted datasets inside the zone that use keys of his own choosing. The output below shows the two cases: rpool is inheriting the key material from the global zone (note that we can see the value of the keysource property, but we don't use it inside the zone, nor does that path need to be, or is it, accessible inside the zone), whereas rpool/export/home/bob has set keysource locally.

      # zfs get encryption,keysource rpool rpool/export/home/bob
      NAME                   PROPERTY    VALUE                       SOURCE
      rpool                  encryption  on                          inherited from $globalzone
      rpool                  keysource   passphrase,file:///zones/p  inherited from $globalzone
      rpool/export/home/bob  encryption  on                          local
      rpool/export/home/bob  keysource   passphrase,prompt           local

  • How do I recover lost/inaccessible data from my storage device?

    - by TomWij
    What steps can I take to try to recover lost or inaccessible data from any storage device? Answers: This applies to any main storage device, e.g. an internal/external hard drive, USB stick, or flash memory. The most important thing is to STOP using it; any type of I/O can ruin your chances of a recovery. Separate answers cover the case where you suspect corruption or bad sectors, and the case where you suspect mechanical issues.

  • How to prevent a 3G USB modem's internal storage from being loaded by the Linux kernel?

    - by Krystian
    Hi, I've got a problem with my 3G modem [Huawei E122]. It has internal storage, and the kernel assigns a device [/dev/sdX] to it. Because of that, every second time my machine will not boot (kernel panic), as my USB HDD gets assigned /dev/sdb instead of /dev/sda. I cannot use LABEL or UUID in the root= kernel parameter, as that is only available when using an initrd, and I can't use one: I am using Debian on my router, a MIPS-architecture machine. I have to prevent this from happening, as my router has to start every day and I have to be sure it works OK. I don't have physical access to restart it when something goes wrong. I don't use the modem's internal storage, and there's no SD card inserted; however, the kernel detects the reader and loads it. I cannot prevent loading of USB drivers altogether, since my HDD is on USB as well. I would appreciate any ideas.

  • Web/Cloud Based OS with Torrent Features and Free Storage?

    - by Kristina E
    Hi, I want a web-based OS with a torrent client, and I want to link it to one of the many free cloud storage solutions. I think it would be really cool to be able to check and download torrents anywhere and not use my hardware or connection until I want to transfer the files down to my actual desktop (like burning a Linux ISO, or converting a file to IFO format). Anyway, I created accounts at 4Shared, EyeOS, GlideOS, ADrive and iCloud and am having no luck. There is an eyeTorrent app, but I can't seem to get it configured, and I can't log into my cloud storage from the cloud OS. Has anyone been able to pull this off, and if so, would you please explain how? Thanks, Kristina

  • Reducing storage cost by moving old files to external USB HDDs. Your thoughts?

    - by cparker4486
    I've got about 300GB of pictures and marketing data that is rarely accessed, and I'd like to get it off my main storage. I was thinking of simply adding two external USB HDDs to the server and moving all the files to one of the drives; the second drive would be the backup destination for the first. I'm working with Server 2003 R2 SP2. This will help me free a good amount of space on my main storage, as well as reduce the complexity, the backup window, and the use of tape for my backups.

  • How does Storage Spaces decide where to put my files?

    - by George Duckett
    With Windows 8 Storage Spaces, you can lump many hard disks of varying types, speeds and sizes together to use as a single storage space / logical drive. How does Windows decide what to place where? For example, will it move files around depending on frequency of access, or maybe split files that are frequently accessed together across hard disks? What does it optimize for: speed, reliability, etc.? If the above is asking too much, can I at least easily see where the files physically are (on which physical disk)?

  • Benefit of using Data URI to embed images within HTML document and its cross-browser compatibility

    - by Manoj Agarwal
    I want to embed an image within an HTML document using a Data URI, so that we don't need the image as a separate attachment; we get just one HTML file that contains the actual image. What are the advantages and disadvantages? Does IE10 support it? Is it useful to have such an implementation? I am working on an application where we have HTML documents that link to images stored in some location. If I use the tiny online editor, since the images are saved somewhere on the server, I can provide a link to an image while editing the document, but I can't preview the final document with images from within the tiny editor. If I choose to download the file locally, then I will need to download the images from the server side. It seems a bit of an overkill, so I wondered whether a Data URI could be used in such a situation.
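
    As a quick illustration of the mechanics (independent of any particular editor): a data URI is just the image bytes base64-encoded and prefixed with a MIME type, so producing the embedded form takes only a few lines. A sketch in Java using only the standard library follows; the file name is a placeholder.

      import java.nio.file.Files;
      import java.nio.file.Paths;
      import java.util.Base64;

      // Sketch: turn an image file into an <img> tag with an embedded data URI.
      public class DataUriExample {
          public static void main(String[] args) throws Exception {
              byte[] bytes = Files.readAllBytes(Paths.get("logo.png"));   // placeholder path
              String base64 = Base64.getEncoder().encodeToString(bytes);
              String imgTag = "<img src=\"data:image/png;base64," + base64 + "\" alt=\"logo\">";
              System.out.println(imgTag);  // paste the tag into the HTML document
          }
      }

    The trade-off is that the base64 form is roughly a third larger than the binary image and cannot be cached by the browser separately from the HTML.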

  • Setup & Usage of Document Sequencing in Oracle Receivables

    - by Robert Story
    Upcoming Webcast
    Title: Setup & Usage of Document Sequencing in Oracle Receivables
    Date: May 20, 2010
    Time: 11:00 am EDT
    Product Family: Receivables Community
    Summary: Understanding Document Sequencing and how it can be used to generate document numbers in Oracle Receivables. This one-hour session is recommended for both technical and functional users. Topics will include:
    - Review of important tables
    - Required setup steps
    - Use of Oracle Diagnostics to review critical setups
    - How to create gapless sequences
    - Common Errors
    - Troubleshooting Tips
    A short, live demonstration (only if applicable) and question-and-answer period will be included. Click here to register for this session. The above webcast is a service of the E-Business Suite Communities in My Oracle Support. For more information on other webcasts, please reference the Oracle Advisor Webcast Schedule. Click here to visit the E-Business Communities in My Oracle Support. Note that all links require access to My Oracle Support.

  • undefined control sequence in a NOWEB document

    - by Jean Baldraque
    I'm writing a TeX noweb document. I compile it with

      noweave -tex -filter "elide comment:*" texcode.nw > documentation.tex

    but when I try to compile the resulting file with

      xetex -halt-on-error documentation.tex

    I obtain the following error message:

      ! Undefined control sequence.
      <argument> ...on}\endmoddef \nwstartdeflinemarkup \nwenddeflinemarkup

    It seems that \nwenddeflinemarkup is not recognized. If I delete all the \nwstartdeflinemarkup\nwenddeflinemarkup sequences from the document, it compiles without errors. What could be the problem?

  • Multiple Document Interfaces in Visual Basic

    What is a Multiple Document Interface (MDI)? Most VB.NET applications use a single document interface (SDI); in this type of interface, every window is independent of the other windows. A multiple document interface, by contrast, works by having one parent window with child windows under it. In the screenshot accompanying the article, there is one parent window (in gray) and three child windows (in blue, violet and orange). You can have more than three child windows, depending on your application's requirements, but you can only have one parent window. Depending on the design of your MDI...
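
    The excerpt is about VB.NET, but the parent/child structure it describes is the same in any MDI toolkit. Purely as an analogue (not VB.NET code), here is a minimal Java Swing version using JDesktopPane and JInternalFrame: one parent frame hosting three child frames.

      import javax.swing.JDesktopPane;
      import javax.swing.JFrame;
      import javax.swing.JInternalFrame;
      import javax.swing.SwingUtilities;

      // Minimal MDI analogue in Swing: one parent frame, several child frames inside it.
      public class MdiSketch {
          public static void main(String[] args) {
              SwingUtilities.invokeLater(() -> {
                  JFrame parent = new JFrame("Parent window");        // the single MDI parent
                  JDesktopPane desktop = new JDesktopPane();
                  parent.setContentPane(desktop);

                  for (int i = 1; i <= 3; i++) {                      // three child windows
                      JInternalFrame child = new JInternalFrame("Child " + i, true, true, true, true);
                      child.setBounds(30 * i, 30 * i, 250, 150);
                      child.setVisible(true);
                      desktop.add(child);
                  }

                  parent.setSize(600, 400);
                  parent.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
                  parent.setVisible(true);
              });
          }
      }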

< Previous Page | 14 15 16 17 18 19 20 21 22 23 24 25  | Next Page >