Search Results

Search found 45804 results on 1833 pages for 'large files'.


  • How can I open binary image files? (.img)

    - by Simon Cahill
    I'm a Windows/Mac/Ubuntu and Android user, so I know what I'm talking about when I say: how do I open binary image files (.img)? They just won't open on any OS. I'm an Android dev currently working on a ROM (I program on Windows), and I need to extract files from .img files. I've converted them to .ext4.img, but they still aren't recognized by Linux (definitely not by Android), Mac OS, or Windows. In other words, I can't open, extract, or mount them. Can anyone help me? I'm kinda confused...
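
    A common way in on Linux is to loop-mount the image; Android system images are often "sparse" and need converting first. A minimal sketch, assuming an ext4 payload and the simg2img tool from the android-tools utilities (file names are placeholders):

        # Inspect what the image actually contains first.
        file system.img

        # Android sparse images must be converted to raw before mounting.
        simg2img system.img system.raw.img

        # Loop-mount the raw ext4 image read-only and copy files out.
        sudo mkdir -p /mnt/img
        sudo mount -o loop,ro system.raw.img /mnt/img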

    Read the article

  • How to Create a Folder from Selected Files in Windows

    - by Lori Kaufman
    We’ve previously written about a tool that allows you to create a bunch of folders at one time from a list of words or phrases. However, what if you want to create one or more folders from a bunch of selected text files? There’s a simple, free tool called Files 2 Folder that allows you to do that. Installing Files 2 Folder adds an option to the context menu for Windows Explorer. Simply extract the .zip file you downloaded (see the link at the end of this article). Right-click on the Files2Folder.exe file and select Run as administrator. If the User Account Control dialog box displays, click Yes to continue.

    Read the article

  • WordPress: Automatically transfer media files to Amazon S3

    - by Ron Ranieri
    I've been using a VPS to host 7 WordPress websites; most of them need a lot of storage but very little RAM and traffic. So I'm thinking of moving the static files (the uploads folder) to Amazon S3, and I'm looking for the most viable solution. I want every website to have its own bucket, with newly uploaded media files automatically uploaded to Amazon S3 without using a plugin. I'm OK with a cron job; for example, the files are uploaded first to my server, then transferred to S3 and deleted from my server every 24 hours. Or is there any way to change the default upload directory to my S3 bucket without sacrificing any WordPress functionality (resizing/titles etc.)? What do you think is the most efficient way to do this? Currently I'm looking at this plus a cron job, but I would like to know of a better option if one exists.
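
    One hedged sketch of the cron approach, assuming the AWS CLI is installed and configured; the script path and bucket names are placeholders, and the local delete should only be enabled once the copies are verified:

        #!/bin/sh
        # Hypothetical daily cron job, e.g. /etc/cron.daily/wp-uploads-to-s3.
        # Mirror each site's uploads folder to its own bucket.
        aws s3 sync /var/www/site1/wp-content/uploads s3://site1-media/uploads

        # Optionally prune local files older than a day after they are safely copied:
        # find /var/www/site1/wp-content/uploads -type f -mtime +1 -delete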

    Read the article

  • I cannot play some WMA files

    - by Lucio
    I did a backup of several music files from my older Windows XP system. Now I can play all the .MP3 files, but not all the .WMA files. Some kinds of WMA files play without problems, as you can see in the picture below: the file on the right can be played, but the one on the left can't. I have spent a long time looking at different sites and Q&A here, and installing many packages, but no luck. An example is this answer. What can I do? My system: Ubuntu 11.10 32-bit. Players: Banshee & VLC
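
    A first thing worth trying, sketched on the assumption that the failing files use a WMA variant (such as WMA Pro) that the default GStreamer plugins on 11.10 don't decode:

        # Restricted codec packs; the GStreamer ffmpeg plugin handles more
        # WMA variants than a stock install.
        sudo apt-get update
        sudo apt-get install ubuntu-restricted-extras gstreamer0.10-ffmpeg

        # Identify the codec inside a failing file (file name is a placeholder).
        sudo apt-get install mediainfo
        mediainfo broken.wma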

    Read the article

  • Game Asset Storage: Archive vs Individual files

    - by David Colson
    I am in the process of creating a 3D C++ game, and I was wondering what would be more beneficial for asset storage. I have seen some games use a single compressed asset file with everything in it, and others use lots of little compressed files. With lots of individual files I would not need to load one large file at once and use up memory, but the code would have to seek around the disk when the level loads to find all the correct files. No file seeking is needed with one large file, but then what about all the assets not currently needed that would get loaded with it? I could also have an asset file for each level, but then how do I deal with shared assets? This has been bothering me for a while, so tell me what other advantages and disadvantages there are to either way of doing things.
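
    Worth noting: a single archive does not force loading everything at once. If the archive begins with an index of offsets, the loader can seek to and read one asset at a time. A minimal C++ sketch with an invented on-disk format (a u32 entry count, then name/offset/size records, then the raw blobs); this is not any real engine's format:

        #include <cstdint>
        #include <fstream>
        #include <string>
        #include <unordered_map>
        #include <vector>

        struct AssetEntry { std::uint64_t offset; std::uint64_t size; };

        class PackFile {
        public:
            explicit PackFile(const std::string& path) : stream_(path, std::ios::binary) {
                std::uint32_t count = 0;
                stream_.read(reinterpret_cast<char*>(&count), sizeof count);
                for (std::uint32_t i = 0; i < count; ++i) {
                    std::uint32_t nameLen = 0;
                    stream_.read(reinterpret_cast<char*>(&nameLen), sizeof nameLen);
                    std::string name(nameLen, '\0');
                    stream_.read(&name[0], nameLen);
                    AssetEntry entry{};
                    stream_.read(reinterpret_cast<char*>(&entry.offset), sizeof entry.offset);
                    stream_.read(reinterpret_cast<char*>(&entry.size), sizeof entry.size);
                    index_[name] = entry;   // small in-memory index; blobs stay on disk
                }
            }

            // Load exactly one asset on demand: one seek plus one read, so memory
            // use is proportional to the asset, not the archive.
            std::vector<char> load(const std::string& name) {
                const AssetEntry& e = index_.at(name);
                std::vector<char> data(static_cast<std::size_t>(e.size));
                stream_.seekg(static_cast<std::streamoff>(e.offset));
                stream_.read(data.data(), static_cast<std::streamsize>(e.size));
                return data;
            }

        private:
            std::ifstream stream_;
            std::unordered_map<std::string, AssetEntry> index_;
        };

    Shared assets could then live in a common archive that is opened alongside each level's archive.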

    Read the article

  • Unable to mount location, Can't mount file

    - by user116008
    I'm a new user to Ubuntu and I have a problem. I had Windows XP on my computer with two partitions: C (for system data) and D (for my personal stuff). During the Ubuntu installation I chose Advanced Settings, formatted the C partition and left the D partition intact, then went back, chose "Install Ubuntu and replace Windows", and it installed fine. The problem is that now, when I open Nautilus and go to Computer, it shows my D partition as a 640 GB hard disk, but when I try to mount it I get the message: "Unable to mount location. Can't mount file". Please explain step by step what I need to do, because I'm not an advanced user. My computer specs: 2 GiB RAM, Pentium(R) Dual-Core CPU E5400 @ 2.70GHz × 2, unknown graphics (it's an Nvidia GeForce 220 {1GB} or something), 32-bit OS, 628.0 GB disk. P.S.: My HDD is internal; I'm not using external hard drives. Thank You!!! Mike
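
    A hedged first step from a terminal, assuming the D partition is NTFS (the device name below is a guess; check the fdisk output before mounting):

        # List partitions and look for the NTFS data partition.
        sudo fdisk -l

        # Make a mount point and mount it with the NTFS driver.
        sudo mkdir -p /media/d
        sudo mount -t ntfs-3g /dev/sda2 /media/d   # /dev/sda2 is an assumption

        # If Windows left the volume marked dirty, clearing the journal often helps
        # (ntfsfix ships in the ntfsprogs package on releases of this era):
        sudo apt-get install ntfsprogs
        sudo ntfsfix /dev/sda2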

    Read the article

  • Controlling the order in which files get processed

    - by [email protected]
    The File/FTP Adapter allows you to control the order in which files get processed. For example, you might want the files to be processed in order of their modified times, file sizes, etc. Luckily, the File/FTP adapters let you achieve this via a "FileSorter" attribute that you can define in the JCA file for your inbound File/FTP Adapter service. The adapters ship with two predefined sorters that sort on last-modified time. However, there are times when you would like to define the order yourself. In situations like this, you can implement a Java Comparator and register it with the File Adapter as described below: 1) Write a comparator; for example, a FileSizeSorter comparator that sorts the files in descending order of their sizes, as sketched below. 2) To compile this class, you will need fileAdapter.jar on the classpath.
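
    A minimal sketch of such a comparator, assuming the sorter is handed java.io.File objects (the exact type passed to the FileSorter varies by adapter version, so treat this as illustrative rather than the adapter's contract):

        import java.io.File;
        import java.util.Comparator;

        // Illustrative FileSizeSorter: orders files largest-first.
        public class FileSizeSorter implements Comparator<File> {
            public int compare(File a, File b) {
                // Compare b to a so that larger sizes sort earlier (descending).
                return Long.valueOf(b.length()).compareTo(Long.valueOf(a.length()));
            }
        }

    The fully qualified class name then goes in the FileSorter attribute of the inbound service's JCA file.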

    Read the article

  • Files don't sync with U1.

    - by wrenchman76
    I am running 10.1 (all current updates installed) and I'm unable to sync any files to U1. U1 shows my computer as added to my account, and I bought enough space to hold all the files I want backed up, but for some reason it still doesn't sync. I have marked the files to be synced and checked the Devices tab in the U1 preferences; it shows my computer as part of my account, but the Connect button is inoperable and Restart has no effect when clicked. I have read all the FAQs I can find relating to my problem and tried all the suggestions that seem to address problems similar to mine, as well as adding and removing my computer from my account a bunch of times. None of these had any effect one way or another. I'm not very techno-savvy, so step-by-step instructions and/or layman's terms are greatly appreciated. Thank you

    Read the article

  • Why doesn't file detect the mime-type of mp3 properly?

    - by Grumbel
    Something odd I recently encountered: running file --mime-type on a collection of MP3s gets the MIME type wrong a third of the time:

        $ for i in */*.mp3; do cat "$i" | file --mime-type -; done | sort | uniq -c
            140 /dev/stdin: application/octet-stream
            309 /dev/stdin: audio/mpeg

    There doesn't seem to be any obvious reason, as even MP3s from the same source will sometimes fail and sometimes not. Bug, feature, or anything obvious I am missing here?
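
    One hedged diagnostic: piped through cat, file can only sniff a buffer from stdin, and an MP3 that does not begin with an ID3v2 tag or a clean frame sync is easy to misclassify as application/octet-stream. Comparing direct and stdin invocations on a single failing file (the name is a placeholder) would narrow it down:

        # Direct invocation lets file open the file itself.
        file --mime-type song.mp3

        # Through a pipe it only sees leading bytes on stdin.
        cat song.mp3 | file --mime-type -

        # Inspect the first bytes: 'ID3' marks an ID3v2 tag; ff fb / ff fa is a bare MPEG frame.
        xxd -l 16 song.mp3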

    Read the article

  • Got problem with installation. "No root file system is defined."

    - by user92322
    I'm very new to Ubuntu and to Linux in general. Ubuntu seems like a really good and stable OS, so I decided to install it alongside my Windows 7 OS. I have a few problems with the installation. Here is what I did:
    1. I downloaded the 64-bit version from the official Ubuntu website and burned it to a DVD.
    2. I set the boot sequence to load from my CD-ROM first.
    3. The Ubuntu installation started, and I chose "Install Ubuntu" in the menu (where there is also a "Try Ubuntu" option).
    4. I clicked forward until I got to the installation type screen.
    That screen won't show the actual details of my hard drive! I have one 750 GB hard drive:
    - 80 GB - my main drive with the Windows 7 OS
    - 600 GB - all of my stuff
    - 20 GB - free space that I saved for Ubuntu
    But the installation won't show that!
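
    On the hedged guess that the installer simply can't read the partition table (a damaged or inconsistent MBR/GPT table is a common cause of an empty partition list and the "No root file system is defined" error), the "Try Ubuntu" live session can show what the kernel actually sees:

        # List disks and partitions as the kernel sees them.
        sudo fdisk -l

        # parted also complains loudly about an inconsistent partition table.
        sudo parted -l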

    Read the article

  • No such file but the file is there!

    - by user288757
    I'm trying to compile a C++ file with some includes. My main file (well, I didn't write it), hdf5_getters, includes a file which in turn includes hdf5.h; that header belongs to a downloaded library. Every time I try to compile, I get an error saying hdf5.h does not exist, while it clearly does. I started reading on the internet, and people say it can happen when a 32-bit binary runs on a 64-bit architecture. But I'm running a 32-bit Ubuntu, so that can't be it... I'm out of ideas, if anyone can help me please :) This is the error message with commands:

        make hdf5_getters
        g++ -c -Wall -std=c++0x -O2 -c hdf5_getters.cc
        In file included from H5Cpp.h:20:0,
                         from hdf5_getters.cc:34:
        H5Include.h:17:18: fatal error: hdf5.h: No such file or directory
         #include <hdf5.h>
                          ^
        compilation terminated.
        make: *** [hdf5_getters.o] Error 1
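
    The compile line has no -I flag, so g++ searches only the default include directories. A hedged fix, assuming the Ubuntu HDF5 development package (package names and header locations vary by release):

        # Install the HDF5 headers and libraries.
        sudo apt-get install libhdf5-serial-dev

        # Find where hdf5.h actually landed.
        dpkg -L libhdf5-serial-dev | grep 'hdf5\.h$'

        # Point the compiler at that directory (the path below is one common location).
        g++ -c -Wall -std=c++0x -O2 -I/usr/include/hdf5/serial hdf5_getters.cc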

    Read the article

  • How do I recover files from U1 cloud

    - by MarkWW
    OK, I have just stopped syncing a folder. This appears to have been a huge mistake: I did not realise the folder would disappear from my cloud storage. I assume (and hope) the sub-folders and files it contained still exist in the cloud. I hope they do, because they do not exist on my computer. I did not delete these files from the cloud (I have tried to recover deleted files; none are being recovered). Where are they, and how do I get them back into my storage?

    Read the article

  • "Google files": Building a web interface to find/ack/grep

    - by user27915816
    I am working on a project where we would like to build a web interface that gives the user the ability to "Google" files in a directory on a remote machine. For example, the user would type a string in a box, and the system would find all files that contain that string and present them in the browser. The system would then let the user click on any of the files to open or display them in the browser. We want to avoid reinventing the wheel if possible, but don't really know where to start (none of us on the team have much experience building websites). What software packages, libraries or tools exist that can help us get this done?

    Read the article

  • Needs management tool for .java files

    - by Chris Okyen
    When I open the src folder of the project in the Package Explorer in Eclipse, it says the following: Error retrieving content description for resource '/GuiAdd/src/GuiAdd.Java'. The directories of the source projects shown in the Eclipse Package Explorer don't always have the source file to link to, causing this message. I need a way to sync the folders into the correct directory without overwriting the newest files. Some source files in the project root, which Package Explorer links to, may be the correct versions, but other source files in the root may not have the latest source, or any source at all. I am not using svn/git or other repository tools.
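
    Without version control, one hedged option is rsync in update mode, which copies files into place but skips any destination file that is newer (both paths are placeholders):

        # -r recurse, -t preserve times (so -u can compare them),
        # -u skip files that are newer on the receiving side,
        # -n dry-run with -i to preview what would change.
        rsync -rtuni ~/backup/GuiAdd/src/ ~/workspace/GuiAdd/src/

        # Drop the n once the plan looks right.
        rsync -rtu ~/backup/GuiAdd/src/ ~/workspace/GuiAdd/src/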

    Read the article

  • Greenfoot project is read-only

    - by AzharHafiz.com
    I received this message when starting Greenfoot on Ubuntu 11.10. I'm a newbie and am not sure where the Greenfoot project files are located. The project is read-only; how and where do I change the permissions? "You will not be able to create objects or execute methods. Either the access rights of the project directory are set as 'read only' for you, or the whole file system is not writable (is it a CD?). To fully use this project, you must ensure that it is on a writable file system (usually your hard disk), and that you have write permission in the project directory and each file within it. This can often be accomplished by choosing "save as" from the Project menu after closing this dialog."
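
    If the project was copied from a CD or another account, taking ownership usually fixes it. A sketch, assuming the project lives under your home directory (the path is a placeholder):

        # Take ownership of the project directory, then make it writable for you.
        sudo chown -R "$USER":"$USER" ~/greenfoot/MyProject
        chmod -R u+rwX ~/greenfoot/MyProject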

    Read the article

  • Hide .desktop files from shares in VirtualBox

    - by Oli
    I share my desktop with VirtualBox. It allows me to work on current files in a nice easy way. I have quite a few utility launchers on my desktop. It's only a dozen or so at peak time but it makes navigating the list of real files a little harder when I'm working from Windows. I was wondering if there was a way of excluding the files from the share. Either at VirtualBox (I've no idea where it keeps its samba configuration -- or if it actually uses samba at all for that matter) or in Windows.

    Read the article

  • PowerShell script to find files that are consuming the most disk space

    As you know, SQL Server databases and backup files can take up a lot of disk space. When disk space is running low and you need to troubleshoot, the first thing to do is find the large files that are consuming it. In this article I will show you a PowerShell script that you can use to find large files on your disks.
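
    The article's script is not included in this excerpt; a minimal PowerShell sketch of the same idea, with a placeholder path and count (not the author's script):

        # List the 20 largest files under D:\, largest first.
        Get-ChildItem -Path D:\ -Recurse -ErrorAction SilentlyContinue |
            Where-Object { -not $_.PSIsContainer } |
            Sort-Object -Property Length -Descending |
            Select-Object -First 20 FullName,
                @{Name = 'SizeMB'; Expression = { [math]::Round($_.Length / 1MB, 1) }}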

    Read the article

  • Files Not Uploading

    - by Howdy_McGee
    So I'm running WordPress. I make changes to theme files and upload them successfully, but none of the changes show up on the actual website. At first I thought it was the wrong theme, but specific theme files I created are in there, so I'm using the right theme. Then I thought it was a server problem and maybe the company's servers were down, so I checked a few other websites and updated information just fine, all on Linux servers. Then I jumped into WordPress to make sure it wasn't a WordPress problem, but I can update the files fine from the Admin Panel. To make sure it wasn't a FileZilla cache or browser cache, I cleared them both. What could be the problem? If it were a FileZilla permissions issue, I imagine I would get an error upon uploading, but it uploads just fine. Suggestions would be extremely helpful; I have no clue.

    Read the article

  • System.getProperty("user.dir") cannot get my project root path ,but the path which my eclipse is located

    - by facebook-100005613813158
    As the title says, I have a class named GetException.java; inside it, I read an XML file in a static code block (because this document is shared):

        static {
            ...
            document = db.parse(new File(System.getProperty("user.dir")
                    + "/src/exception/ExceptionCode.xml"));
            ...
        }

    To test whether the file path is correct, I wrote a main function inside GetException.java, and it proves the path is correct: the XML file can be read successfully. My project root dir is "/home/wuchang/workspace/MongodbI". But when this class is loaded from another class, such as when I call one of its static functions, it reports the error message: /home/mrs/??/eclipse/src/exception/ExceptionCode.xml (No such file or directory) /home/mrs/??/eclipse/ is actually my Eclipse installation directory. So I wonder: how did System.getProperty("user.dir") return the Eclipse installation directory instead of my project root directory?
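
    user.dir is the current working directory of whichever JVM loads the class, not the project root, so code launched by a different process (Eclipse itself, a server, a test runner) resolves a different path. A more robust sketch loads the file from the classpath instead; this assumes src/ is a source folder, so the XML gets copied next to the compiled classes:

        import java.io.InputStream;
        import javax.xml.parsers.DocumentBuilder;
        import javax.xml.parsers.DocumentBuilderFactory;
        import org.w3c.dom.Document;

        public class GetException {
            private static final Document document;
            static {
                try {
                    DocumentBuilder db =
                            DocumentBuilderFactory.newInstance().newDocumentBuilder();
                    // Resolved against the classpath, not the launching JVM's
                    // working directory.
                    InputStream in = GetException.class
                            .getResourceAsStream("/exception/ExceptionCode.xml");
                    if (in == null) {
                        throw new IllegalStateException(
                                "ExceptionCode.xml not found on classpath");
                    }
                    document = db.parse(in);
                } catch (Exception e) {
                    throw new ExceptionInInitializerError(e);
                }
            }
        }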

    Read the article

  • What is the best way to work with large databases in Java depending on context?

    - by user19000
    We are trying to figure out the best practice for working with very large DBs in Java. What we do is a kind of BI (business intelligence), i.e. analyzing very large DBs and using them to create intermediate DBs that represent intelligent knowledge of the originals. We are currently using JDBC and just performing queries through a ResultSet. As more and more data is created, we are wondering whether more appropriate ways exist for parsing and manipulating these large DBs:
    - We need to support 'chunk' manipulation, not an entire DB at once (e.g. LIMIT in JDBC gives very poor performance).
    - We do not need to be constantly connected, since we are just pulling results and creating new tables of our own.
    - We want to understand the alternatives to JDBC, with respect to their advantages and disadvantages.
    Whether you think JDBC is the way to go or not, what are the best practices to follow depending on context (e.g. for large DBs queried in chunks)?
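
    Before abandoning JDBC, the driver's fetch size is worth checking: most drivers can stream a result set in chunks without LIMIT-based paging. A minimal sketch with placeholder connection details (note the driver-specific quirks: PostgreSQL needs autocommit off for cursor fetches, and MySQL's driver only streams with a fetch size of Integer.MIN_VALUE):

        import java.sql.*;

        public class ChunkedScan {
            public static void main(String[] args) throws SQLException {
                // Placeholder URL and credentials.
                try (Connection conn = DriverManager.getConnection(
                        "jdbc:postgresql://dbhost/warehouse", "user", "secret")) {
                    conn.setAutoCommit(false);    // required for cursor-based fetching on PostgreSQL
                    try (Statement stmt = conn.createStatement()) {
                        stmt.setFetchSize(10000); // rows per round trip, not total rows in memory
                        try (ResultSet rs = stmt.executeQuery(
                                "SELECT id, payload FROM big_table")) {
                            while (rs.next()) {
                                process(rs.getLong("id"), rs.getString("payload"));
                            }
                        }
                    }
                }
            }

            static void process(long id, String payload) { /* analysis goes here */ }
        }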

    Read the article

  • Access to my files on Android

    - by user18644
    I am thinking of subscribing to Dropbox, which is slightly more costly than Ubuntu One, but I need access to my files on the go, and I prefer to use my smartphone over my netbook most of the time, as I like to travel light. I do not want to stream music; I want access to my files only. Whereas there is a free app for Dropbox to access said files, there isn't one for Ubuntu One. I would be prepared to wait a while if you have this in hand; have you actually given it any thought? Please tell me whether I should ignore Ubuntu One and link up with Dropbox.

    Read the article

  • Large Object Heap Fragmentation

    - by Paul Ruane
    The C#/.NET application I am working on is suffering from a slow memory leak. I have used CDB with SOS to try to determine what is happening, but the data does not seem to make any sense, so I was hoping one of you may have experienced this before. The application is running on the 64-bit framework. It is continuously calculating and serialising data to a remote host and is hitting the Large Object Heap (LOH) a fair bit. However, most of the LOH objects I expect to be transient: once the calculation is complete and has been sent to the remote host, the memory should be freed. What I am seeing, however, is a large number of (live) object arrays interleaved with free blocks of memory, e.g., taking a random segment from the LOH:

        0:000> !DumpHeap 000000005b5b1000 000000006351da10
                 Address               MT     Size
        ...
        000000005d4f92e0 0000064280c7c970 16147872
        000000005e45f880 00000000001661d0  1901752 Free
        000000005e62fd38 00000642788d8ba8     1056 <--
        000000005e630158 00000000001661d0  5988848 Free
        000000005ebe6348 00000642788d8ba8     1056
        000000005ebe6768 00000000001661d0  6481336 Free
        000000005f214d20 00000642788d8ba8     1056
        000000005f215140 00000000001661d0  7346016 Free
        000000005f9168a0 00000642788d8ba8     1056
        000000005f916cc0 00000000001661d0  7611648 Free
        00000000600591c0 00000642788d8ba8     1056
        00000000600595e0 00000000001661d0   264808 Free
        ...

    Obviously I would expect this to be the case if my application were creating long-lived, large objects during each calculation. (It does do this, and I accept there will be a degree of LOH fragmentation, but that is not the problem here.) The problem is the very small (1056-byte) object arrays you can see in the above dump, which I cannot see being created in code and which are remaining rooted somehow. Also note that CDB is not reporting the type when the heap segment is dumped; I am not sure if this is related or not. If I dump the marked (<--) object, CDB/SOS reports it fine:

        0:015> !DumpObj 000000005e62fd38
        Name: System.Object[]
        MethodTable: 00000642788d8ba8
        EEClass: 00000642789d7660
        Size: 1056(0x420) bytes
        Array: Rank 1, Number of elements 128, Type CLASS
        Element Type: System.Object
        Fields:
        None

    The elements of the object array are all strings, and the strings are recognisable as coming from our application code. Also, I am unable to find their GC roots, as the !GCRoot command hangs and never comes back (I have even tried leaving it overnight). So, I would very much appreciate it if anyone could shed any light on why these small (<85k) object arrays are ending up on the LOH: in what situations will .NET put a small object array there? Also, does anyone happen to know an alternative way of ascertaining the roots of these objects? Thanks in advance.

    Update 1: Another theory I came up with late yesterday is that these object arrays started out large but have been shrunk, leaving the blocks of free memory that are evident in the memory dumps. What makes me suspicious is that the object arrays always appear to be 1056 bytes long (128 elements): 128 * 8 for the references plus 32 bytes of overhead. The idea is that perhaps some unsafe code in a library or in the CLR is corrupting the number-of-elements field in the array header. Bit of a long shot, I know...

    Update 2: Thanks to Brian Rasmussen (see accepted answer), the problem has been identified as fragmentation of the LOH caused by the string intern table!
    I wrote a quick test application to confirm this:

        static void Main()
        {
            const int ITERATIONS = 100000;

            for (int index = 0; index < ITERATIONS; ++index)
            {
                string str = "NonInterned" + index;
                Console.Out.WriteLine(str);
            }

            Console.Out.WriteLine("Continue.");
            Console.In.ReadLine();

            for (int index = 0; index < ITERATIONS; ++index)
            {
                string str = string.Intern("Interned" + index);
                Console.Out.WriteLine(str);
            }

            Console.Out.WriteLine("Continue?");
            Console.In.ReadLine();
        }

    The application first creates and dereferences unique strings in a loop. This is just to prove that the memory does not leak in this scenario. Obviously it should not, and it does not. In the second loop, unique strings are created and interned. This action roots them in the intern table. What I did not realise is how the intern table is represented. It appears it consists of a set of pages -- object arrays of 128 string elements -- that are created in the LOH. This is more evident in CDB/SOS:

        0:000> .loadby sos mscorwks
        0:000> !EEHeap -gc
        Number of GC Heaps: 1
        generation 0 starts at 0x00f7a9b0
        generation 1 starts at 0x00e79c3c
        generation 2 starts at 0x00b21000
        ephemeral segment allocation context: none
        segment    begin     allocated  size
        00b20000   00b21000  010029bc   0x004e19bc(5118396)
        Large object heap starts at 0x01b21000
        segment    begin     allocated  size
        01b20000   01b21000  01b8ade0   0x00069de0(433632)
        Total Size          0x54b79c(5552028)
        ------------------------------
        GC Heap Size        0x54b79c(5552028)

    Taking a dump of the LOH segment reveals the pattern I saw in the leaking application:

        0:000> !DumpHeap 01b21000 01b8ade0
        ...
        01b8a120 793040bc      528
        01b8a330 00175e88       16 Free
        01b8a340 793040bc      528
        01b8a550 00175e88       16 Free
        01b8a560 793040bc      528
        01b8a770 00175e88       16 Free
        01b8a780 793040bc      528
        01b8a990 00175e88       16 Free
        01b8a9a0 793040bc      528
        01b8abb0 00175e88       16 Free
        01b8abc0 793040bc      528
        01b8add0 00175e88       16 Free
        total 1568 objects
        Statistics:
              MT    Count TotalSize Class Name
        00175e88      784     12544      Free
        793040bc      784    421088 System.Object[]
        Total 1568 objects

    Note that the object array size is 528 (rather than 1056) because my workstation is 32-bit and the application server is 64-bit. The object arrays are still 128 elements long. So the moral of this story is to be very careful when interning. If the string you are interning is not known to be a member of a finite set, then your application will leak due to fragmentation of the LOH, at least in version 2 of the CLR. In our application's case, there is general code in the deserialisation path that interns entity identifiers during unmarshalling: I now strongly suspect this is the culprit. However, the developer's intentions were obviously good, as they wanted to make sure that if the same entity is deserialised multiple times, then only one instance of the identifier string is kept in memory.

    Read the article

  • Lock ups, crashing, transferring files using TrueCrypt with iSCSI

    - by Anthony
    I have looked into this error and it seems that it hasn't been discussed yet - or at least I can't find any information relating to it. I'm having issues transferring files, usually larger files over a couple of hundred MB. Here is the setup: a QNAP 410 as iSCSI target with multiple LUNs (CRC is turned on: Data Digest and Header Digest); Server 2003 with iSCSI Initiator version 2.08, build 3825. (I'm copying files from another machine to shares on Server 2003, i.e. into the TrueCrypt volume and thus onto the NAS.) I have mounted the LUN and formatted it with TrueCrypt using NTFS (a full format, not a quick one). What happens is that some files, mainly RAR/compressed files, appear to copy but fail. I've tested this in a number of ways and can repeat the process every time. So I thought to check transfer over iSCSI without TrueCrypt in between, on a plain NTFS format - no problem at all. So it would seem TrueCrypt is at least part of the problem here. I haven't tried copying directly from the server yet; I will try that. I also haven't tried it without CRC, but I fail to see how that would affect this. I will update with my findings later. In the meantime, does anyone have any ideas as to what could be wrong? Thanks for your time. Update: I copied a set of files, the ones I was having issues with, to the server, and from there I copied them into two places within the TrueCrypt volume (mounted on the NAS):
    - a separate directory created in the root of the volume
    - the same initial directory I was using in the first instance
    Both worked fine. So it now seems clear that this is a link between TrueCrypt, iSCSI and Windows shares. I say this because I originally set up the whole system using TrueCrypt volume files, not iSCSI. I changed it as it didn't suit my requirements - a day wasted as well. While I had that setup, though, I copied my entire file set to the volume files, and all files copied without error - over the network, from a PC, to the server where TrueCrypt had the volume files mounted from the NAS. I didn't bother turning off CRC on the iSCSI system, as I highly doubt that is the cause in light of this finding. So, any ideas?

    Read the article

  • Deletion of SQL Profiler Trace files (.trc)

    - by Mark
    We've noticed a lot of .trc files in our SQL data folder (\Program Files\Microsoft SQL Server\MSSQL.1\MSSQL\Data) on our server. The date range for these files spans over one day and the total file size of all files together is about 21 gigs. I'd like to free up this space but I'm not sure if I can just delete the files manually through Windows Explorer or if I need to do anything in SQL, like run a command or script. Any ideas?
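
    Before deleting anything, it's worth checking whether a server-side trace is still writing to those files; once no trace holds them open, closed .trc files can be deleted through Windows Explorer like any other file. A hedged T-SQL sketch (the trace id is an example):

        -- List server-side traces and the files they write to (SQL 2005+).
        SELECT id, path, status FROM sys.traces;

        -- Stop and close trace id 2 so it releases its file.
        EXEC sp_trace_setstatus @traceid = 2, @status = 0;  -- stop
        EXEC sp_trace_setstatus @traceid = 2, @status = 2;  -- close and delete the trace definition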

    Read the article

  • How to merge many text files data in databse

    - by Mirage
    I have around 100 text files. The files contain questions and 3 choices each, like below:

        ab001.txt  -- contains the question
        ab001a.txt -- the first choice
        ab001b.txt -- the second choice
        ab001c.txt -- the third choice

    There are thousands of files like this. Now I want to insert them into SQL, or maybe first into Excel, with the question in the first column and the three answers in the other columns. The first two characters are the same for some files; it looks like they signify some category, so around every 30 questions share the same first characters. Any ideas?
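
    A hedged sketch in Python that groups the files by base name and writes one CSV row per question; it assumes the naming is exactly base.txt plus base+a/b/c.txt, all in one directory (the directory name is a placeholder). The CSV opens in Excel and can be bulk-loaded into SQL:

        import csv
        import glob
        import os

        SRC = "questions"  # directory holding the .txt files (an assumption)

        def read_text(name):
            with open(os.path.join(SRC, name), encoding="utf-8") as f:
                return f.read().strip()

        rows = []
        for qpath in sorted(glob.glob(os.path.join(SRC, "*.txt"))):
            base = os.path.splitext(os.path.basename(qpath))[0]
            # Skip choice files: a trailing a/b/c whose stem is itself a question file.
            if base[-1] in "abc" and os.path.exists(
                    os.path.join(SRC, base[:-1] + ".txt")):
                continue
            rows.append([
                base[:2],                   # apparent category prefix
                read_text(base + ".txt"),   # question
                read_text(base + "a.txt"),  # first choice
                read_text(base + "b.txt"),  # second choice
                read_text(base + "c.txt"),  # third choice
            ])

        with open("questions.csv", "w", newline="", encoding="utf-8") as out:
            writer = csv.writer(out)
            writer.writerow(["category", "question", "choice_a", "choice_b", "choice_c"])
            writer.writerows(rows)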

    Read the article
