Search Results

Search found 85833 results on 3434 pages for 'general log file'.


  • How to convert djvu file to pdf or other more common file format?

    - by Zeta2
    I have downloaded a file in DjVu format. I want to convert it to PDF or some other more common format so I can read the document on other devices (e.g. my phone). I found a conversion utility at LizardTech, but when I converted the document using the LizardTech software, every page had a watermark - which rendered the converted document virtually useless. Does anyone know where I can get a free conversion utility that does not watermark the converted document?

    Read the article

  • After deleting log files, Ubuntu server still saying there is no space

    - by Mark
    My Ubuntu server has stopped due to a lack of disk space. I deleted some log files which had grown huge very quickly, but df -h still shows I have no space left. When I run du -sh /* I can see that I should have plenty of disk space left after deleting the logs. I ran lsof +L1 and it brought up two files: /var/log/mail.log and /var/log/mail.err. These are two of the logs I had deleted. I restarted apache, postfix and mysql (mysql won't restart because of the lack of disk space, I think) but df -h still shows no space.

    Read the article

  • Error while taking Transaction log backup

    - by Divya Kapoor
    Hello, I have scheduled a transaction log backup, but the backup is not happening. The error in the logs is this:

        Transaction Log Backup.Subplan_1,Error,0,ARCOTDB1\ARCOT_DB_INST1,Transaction Log Backup.Subplan_1,(Job outcome),,The job failed. Unable to determine if the owner (ARCOT-DB1\Superuser) of job Transaction Log Backup.Subplan_1 has server access (reason: Could not obtain information about Windows NT group/user 'ARCOT-DB1\Superuser'<c/> error code 0x534. [SQLSTATE 42000] (Error 15404))

    Please help!

    Read the article

  • Custom metadata fields in Windows file manager (or other software)

    - by Dave Gaebler
    I'm trying to organize a collection of maybe 500 or so journal articles, stored in a combination of .pdf and .djvu formats. I'd like to be able to sort the collection by author(s), title, journal name, year, and subject keywords. Is there a way to create metadata fields for this information in the Windows file system (similar to how .mp3 files come with tags for album, title, track length, etc)? Or, if not, is there some software (preferably free) that can do something similar?

    Read the article

  • Where to find a tag based file manager?

    - by samoanbiscuit
    I'm a CS uni student in a country with limited bandwidth, and I find that I download files (particularly install files) for reuse, reinstallation, or giving to my friends. I've kept these files in different folders, and now find it extremely hard to find them and to keep track of obsolete files (the almost weekly releases of iTunes, the latest 7zip release, or the .NET framework installers). What I would like to find is a file manager that supports tags, so instead of hierarchical folders, I could find/view files according to their tags...

    Read the article

  • Do your filesystems have un-owned files?

    - by darrenm
    As part of our work for integrated compliance reporting in Solaris we plan to provide a check for determining if the system has "un-owned files", ie those which are owned by a uid that does not exist in our configured nameservice.  Tests such as this already exist in the Solaris CIS Benchmark (9.24 Find Un-owned Files and Directories) and other security benchmarks.

    The obvious method of doing this would be using find(1) with the -nouser flag.  However that requires we bring into memory the metadata for every single file and directory in every local file system we have mounted.  That is probably not an acceptable thing to do on a production system that has a large amount of storage, and it is potentially going to take a long time.

    Just as I went to bed last night an idea for a much faster way of listing file systems that have un-owned files came to me. I've now implemented it and I'm happy to report it works very well and performs many orders of magnitude better than using find(1) ever will.

    ZFS (since pool version 15) has per-user space accounting and quotas.  We can report very quickly, and without actually reading any files at all, how much space any given user id is using on a ZFS filesystem.  Using that information we can implement a check to very quickly list which filesystems contain un-owned files.

    First a few caveats, because the output data won't be exactly the same as what you get with find, but it answers the same basic question.  This only works for ZFS and it will only tell you which filesystems have files owned by unknown users, not the actual files.  If you really want to know what the files are (ie to give them an owner) you still have to run find(1).  However it has the huge advantage that it doesn't use find(1), so it won't be dragging the metadata for every single file and directory on the system into memory. It also has the advantage that it can check filesystems that are not currently mounted (which find(1) can't do).

    It ran in about 4 seconds on a system with 300 ZFS datasets from 2 pools totalling about 3.2T of allocated space, and that includes the uid lookups and output.

        #!/bin/sh
        for fs in $(zfs list -H -o name -t filesystem -r rpool) ; do
            unknowns=""
            for uid in $(zfs userspace -Hipn -o name,used $fs | cut -f1); do
                if [ -z "$(getent passwd $uid)" ]; then
                    unknowns="$unknowns$uid "
                fi
            done
            if [ ! -z "$unknowns" ]; then
                mountpoint=$(zfs list -H -o mountpoint $fs)
                mounted=$(zfs list -H -o mounted $fs)
                echo "ZFS File system $fs mounted ($mounted) on $mountpoint \c"
                echo "has files owned by unknown user ids: $unknowns"
            fi
        done

    Sample output:

        ZFS File system rpool/ROOT/solaris-30/var mounted (no) on /var has files owned by unknown user ids: 6435 33667 101
        ZFS File system rpool/ROOT/solaris-32/var mounted (yes) on /var has files owned by unknown user ids: 6435 33667
        ZFS File system builds/bob mounted (yes) on /builds/bob has files owned by unknown user ids: 101

    Note that the above might not actually appear exactly like that in any future Solaris product or feature; it is provided just as an example of what you can do with ZFS user space accounting to answer questions like the above.

    Read the article

  • Shrinking a Linux OEL 6 virtual Box image (vdi) hosted on Windows 7

    - by AndyBaker
    Recently, for a customer demonstration, there was a requirement to build a VirtualBox image with Oracle Enterprise Manager Cloud Control 12c. This meant installing OEL Linux 6 as well as creating an 11gR2 database and Oracle Enterprise Manager Cloud Control 12c on a single virtual box. Storage was sized at 300Gb using dynamically allocated storage for the virtual box, and about 10Gb was used for Linux and the initial build. After copying over all the binaries and performing all the installations the virtual box grew to around 80Gb used size on the host operating system, although internally it only really needed around 20Gb. This meant 60Gb had been used when copying over all the binaries and, although now free, was not returned to the host operating system due to the growth of the virtual box storage '.vdi' file. Once the 'vdi' storage has grown it is not shrunk automatically afterwards. Space is always tight on the laptop, so it was desirable to shrink the virtual box back to a minimal size, and here is the process that was followed.

    Install the 'zerofree' Linux package into the OEL6 virtual box

    The RPM was downloaded and installed from a site similar to the one below; a simple internet search for 'zerofree Linux rpm' was enough to find the required rpm.
    http://rpm.pbone.net/index.php3/stat/4/idpl/12548724/com/zerofree-1.0.1-5.el5.i386.rpm.html

    Execute the 'zerofree' package on the desired Linux file system

    To execute this package the desired file system needs to be mounted read only. The following steps outline this process.

        As root: # umount /u01
        As root: # mount -o ro -t ext4 /u01

    NOTE: The -o is options and the -t is the file system type found in /etc/fstab.

    Next run zerofree against the required storage, which is located by a simple 'df -h' command to see the device associated with the mount.

        As root: # zerofree -v /dev/sda11

    NOTE: This takes a while to run, but the '-v' option gives feedback on the progress.

    What does zerofree do?

    Zerofree's purpose is to go through the file system and zero out any unused sectors on the volume, so that the later stages can shrink the virtual box storage and reclaim the free space. When zerofree has completed, the virtual box can be shut down, as the last stage is performed on the physical host where the virtual box vdi files are located.

    Compact the virtual box '.vdi' files

    The final stage is to get VirtualBox to shrink back the storage that has been correctly flagged as free space after executing zerofree. On the physical host, in this case a Windows 7 laptop, a DOS window was opened. At the prompt the first step is to put the VirtualBox binaries onto the PATH.

        C:\>echo %PATH%

    The above shows the current value of the PATH environment variable.

        C:\>set PATH=%PATH%;c:\program files\Oracle\Virtual Box;

    The above adds the VirtualBox binary location onto the existing path.

        C:\>cd c:\Users\xxxx\OEL6.1

    The above changes directory to where the VDI files are located for the required virtual box machine.

        C:\Users\xxxxx\OEL6.1>VBoxManage.exe modifyhd zzzzzz.vdi compact

    NOTE: The zzzzzz.vdi is the name of the required vdi file to shrink.

    Finally the above command is executed to perform the compact operation on the '.vdi' file(s). This also takes a long time to complete, but shrinks the VDI file back to a minimum size.
    In the case of the demonstration virtual box for OEM12c this reduced the virtual box from 80Gb to 20Gb, which was a great outcome to achieve.

    Read the article

  • How to get around batch file processing limit

    - by Patrick Cuff
    I have a Windows batch file that processes all the files in a given directory. I have 206,783 files I need to process:

        for %%f in (*.xml) do call :PROCESS %%f
        goto :STOP

        :PROCESS
        :: do something with the file
        program.exe %1 > %1.new
        set /a COUNTER=%COUNTER%+1
        goto :EOF

        :STOP
        @echo %COUNTER% files processed

    When I run the batch file, the following output is written:

        65535 files processed

    As part of the processing, an output file is created for each file processed, with a .new extension. When I do a dir *.new it reports that 65,535 files exist. So, it appears my command environment has a hard limit on the number of files it can recognize, and that limit is 64K - 1. Is there a way to extend the command environment to manage more than 64K - 1 files? If not, would a VBScript or JavaScript be able to process all 206,783 files? I'm running on Windows 2003 Server, Enterprise Edition, 32-bit.

    UPDATE: It looks like the root cause of my issue was the built-in Windows "extract" command for ZIP files. The files I have to process were copied from another system via a ZIP file. My server doesn't have a ZIP utility installed, just the native Windows commands. I right-clicked on the ZIP file and did an "Extract all...", which apparently just extracted the first 65,535 files. I downloaded and installed 7-zip onto my server, unzipped all the files, and my batch script worked as intended.

    Read the article

  • lseek/write suddenly returns -1 with errno = 9 (Bad file descriptor)

    - by Ger Teunis
    My application uses lseek to seek to the desired position to write data. The file is opened successfully using the open() call, and my application was able to use lseek and write lots of times. At a given time, for some users, and not easily reproducible, lseek returns -1 with an errno of 9. The file is not closed before this and the file handle (int) isn't reset. After this, another file is created, open succeeds again, and lseek and write work again. To make it even worse, this user tried the complete sequence again and all was well. So my question is: can the OS close the file handle for me for some reason? What could cause this? A file indexer or file scanner of some sort? What is the best way to solve this; is this pseudo code the best solution? (never mind the code layout, I will create functions for it)

        int fd = open(...);
        if (fd > -1) {
            long result = lseek(fd, ....);
            if (result == -1 && errno == 9) {
                close(fd);  /* make sure we try to close nicely */
                fd = open(...);
                result = lseek(fd, ....);
            }
        }

    Anybody have experience with something similar? Summary: file seek and write works okay for a given fd and suddenly gives back errno=9 without a reason.

    Read the article

  • Java File IO Compendium

    - by Warren Taylor
    I've worked in and around Java for nigh on a decade, but have managed to avoid ever doing serious work with files. Mostly I've written database-driven applications, but occasionally even those require some file IO. Since I do it so rarely, I end up googling around for quite some time to figure out the exact incantation that Java requires to read a file into a byte[], char[], String or whatever I need at the time. For a 'once and for all' list, I'd like to see all of the usual ways to get data from a file into Java, or vice versa. There will be a fair bit of overlap, but the point is to define all of the subtly different variants that are out there. For example:

    - Read/Write a text file from/to a single String.
    - Read a text file line by line.
    - Read/Write a binary file from/to a single byte[].
    - Read a binary file into a byte[] of size x, one chunk at a time.

    The goal is to show concise ways to do each of these. Samples do not need to handle missing files or other errors, as that is generally domain specific. Feel free to suggest more IO tasks that are somewhat common and I have neglected to mention.
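
    For instance, a minimal sketch of the first three tasks using the standard java.nio.file API (assuming Java 7 or later, UTF-8 text, and a hypothetical input file data.txt) might look like:

        import java.io.IOException;
        import java.nio.charset.StandardCharsets;
        import java.nio.file.Files;
        import java.nio.file.Path;
        import java.nio.file.Paths;
        import java.util.List;

        public class FileIoSketch {
            public static void main(String[] args) throws IOException {
                Path path = Paths.get("data.txt");  // hypothetical input file

                // Read a whole text file into a single String.
                String text = new String(Files.readAllBytes(path), StandardCharsets.UTF_8);

                // Read a text file line by line.
                List<String> lines = Files.readAllLines(path, StandardCharsets.UTF_8);

                // Read a binary file into a single byte[].
                byte[] data = Files.readAllBytes(path);

                // Write a String back out as a text file.
                Files.write(Paths.get("copy.txt"), text.getBytes(StandardCharsets.UTF_8));

                System.out.println(lines.size() + " lines, " + data.length + " bytes");
            }
        }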

    Read the article

  • Modify Executing Jar file

    - by pinkynobrain
    Hello Stack Overflow friends. I have a simple problem which I fear doesn't have a simple solution, and I need advice as to how to proceed. I am developing a Java application packaged as an executable JAR, but it needs to modify some of its JAR file contents during execution. At this stage I hit a problem because some OSes lock the file, preventing writes to it. It is essential that the user sees an updated version of the JAR file by the time the application exits, although I can be pretty flexible as to how to achieve this. A clean and efficient solution is obviously preferable, but portability is the only hard requirement. The following are three approaches I can see to solving the problem; feel free to comment on them or suggest others.

    1. Tell Java to unlock the JAR file for writing. (This doesn't seem possible, but it would be the easiest solution.)
    2. Copy the executable class files to a temporary file on application startup, use a class loader to load these files and unload the ones from the initial JAR file. (I have not had much experience with class loaders, but hopefully the JVM would then be smart enough to realize that the original JAR is no longer in use and so unlock it.)
    3. Put a second executable JAR file inside the first; on startup extract the inner JAR to a temporary file, invoke a new Java process and pass it the location of the outer JAR; the first process exits, and the second process modifies the outer JAR unencumbered. (This will work, but I'm not sure there is a platform-independent way of one Java app invoking another; see the sketch below.)

    I know this is a weird question but any help would be appreciated.
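
    On the third approach, one portable way to launch a second JVM is to locate the java binary through the java.home system property and start it with ProcessBuilder. This is only a minimal sketch; the inner and outer JAR paths below are hypothetical placeholders:

        import java.io.File;
        import java.io.IOException;

        public class Relauncher {
            public static void main(String[] args) throws IOException {
                // Locate the java executable of the currently running JVM.
                String javaBin = System.getProperty("java.home")
                        + File.separator + "bin" + File.separator + "java";

                // Hypothetical paths: the extracted inner JAR and the outer JAR to patch.
                String innerJar = "/tmp/inner-helper.jar";
                String outerJar = "/path/to/outer-app.jar";

                // Start the helper process, then let this JVM exit so the outer JAR is unlocked.
                new ProcessBuilder(javaBin, "-jar", innerJar, outerJar)
                        .inheritIO()
                        .start();
                System.exit(0);
            }
        }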

    Read the article

  • Problems saving a photo to a file

    - by Peter vdL
    Man, I am still not able to save a picture when I send an intent asking for a photo to be taken. Here's what I am doing:

    Make a URI representing the pathname:

        android.content.Context c = getApplicationContext();
        String fname = c.getFilesDir().getAbsolutePath() + "/parked.jpg";
        java.io.File file = new java.io.File(fname);
        Uri fileUri = Uri.fromFile(file);

    Create the Intent (don't forget the pkg name!) and start the activity:

        private static int TAKE_PICTURE = 22;
        Intent intent = new Intent(android.provider.MediaStore.ACTION_IMAGE_CAPTURE);
        intent.putExtra("com.droidstogo.boom1." + MediaStore.EXTRA_OUTPUT, fileUri);
        startActivityForResult(intent, TAKE_PICTURE);

    The camera activity starts, and I can take a picture and approve it. My onActivityResult() then gets called. But my file doesn't get written. The URI is:

        file:///data/data/com.droidstogo.boom1/files/parked.jpg

    I can create the thumbnail OK (by not putting the extra into the Intent), can write that file OK, and can later read it back. Can anyone see what simple mistake I am making? Nothing obvious shows up in the logcat - the camera is clearly taking the picture. Thanks, Peter

    Read the article

  • copy a text file in C#

    - by melt
    I am trying to copy a text file into another text file line by line. It seems that there is a buffer of 1024 characters: if there are fewer than 1024 characters in my file, my function will not copy anything to the other file. Also, if there are more than 1024 characters but not an exact multiple of 1024, the excess characters are not copied. For example:

        2048 characters in initial file - 2048 copied
        988 characters in initial file - 0 copied
        1256 characters in initial file - 1024 copied

    Thanks!

        private void button3_Click(object sender, EventArgs e)
        {
            // write code to take the name of the selected file and
            // add a "_poly.txt" suffix
            string ma_ligne;
            const int RMV_CARCT = 9;

            // declare the files
            FileStream apt_file = new FileStream(textBox1.Text, FileMode.Open, FileAccess.Read);
            textBox1.Text = textBox1.Text.Replace(".txt", "_mod.txt");
            FileStream mdi_file = new FileStream(textBox1.Text, FileMode.OpenOrCreate, FileAccess.ReadWrite);

            // read/write the files in question
            StreamReader apt = new StreamReader(apt_file);
            StreamWriter mdi_line = new StreamWriter(mdi_file, System.Text.Encoding.UTF8, 16);

            while (apt.Peek() >= 0)
            {
                ma_ligne = apt.ReadLine();
                //if (ma_ligne.StartsWith("GOTO"))
                //{
                //    ma_ligne = ma_ligne.Remove(0, RMV_CARCT);
                //    ma_ligne = ma_ligne.Replace(" ", "");
                //    ma_ligne = ma_ligne.Replace(",", " ");
                      mdi_line.WriteLine(ma_ligne);
                //}
            }

            apt_file.Close();
            mdi_file.Close();
        }

    Read the article

  • ASP.net file operations delay

    - by mtranda
    Ok, so here's the problem: I'm reading the stream from a FileUpload control, reading in chunks of n bytes and writing the array in a loop until I reach the stream's end. Now the reason I do this is because I need to check several things while the upload is still going on (rather than doing a Save(), which does the whole thing in one go). Here's the problem: when doing this from the local machine, I can see the file just fine as it's uploading and its size increases (I had to add a Sleep() call in the loop to actually get to see the file being written). However, when I upload the file from a remote machine, I don't get to see it until the file has completed uploading. Also, I've added another call to write the progress to a text file as the upload is going on, and I get the same thing. Local: the file updates as the upload goes on; remote: the token file only appears after the upload is done (which is somewhat useless since I need it while the upload is still happening). Is there some sort of security setting in IIS (or ASP.NET) that maybe saves files in a temporary location for remote machines as opposed to the local machine, and then moves them to the specified destination? I would liken this to ASP.NET displaying error messages when browsing from the local machine (even on the public hostname) as opposed to the generic compilation error page/generic exception page that is shown when browsing from a remote machine (when customErrors are not off). Any clues on this? Thanks in advance.

    Read the article

  • Read file:// URLs in IE XMLHttpRequest

    - by Dan Fabulich
    I'm developing a JavaScript application that's meant to be run either from a web server (over http) or from the file system (on a file:// URL). As part of this code, I need to use XMLHttpRequest to load files in the same directory as the page and in subdirectories of the page. This code works fine ("PASS") when executed on a web server, but doesn't work ("FAIL") in Internet Explorer 8 when run off the file system:

        <html><head>
        <script>
        window.onload = function() {
            var xhr = new XMLHttpRequest();
            xhr.open("GET", window.location.href, false);
            xhr.send(null);
            if (/TestString/.test(xhr.responseText)) {
                document.body.innerHTML = "<p>PASS</p>";
            }
        }
        </script>
        <body><p>FAIL</p></body>

    Of course, at first it fails because no scripts can run at all on the file system; the user is prompted with a yellow bar, warning that "To help protect your security, Internet Explorer has restricted this webpage from running scripts or ActiveX controls that could access your computer." But even once I click on the bar and "Allow Blocked Content" the page still fails; I get an "Access is Denied" error on the xhr.open call. This puzzles me, because MSDN says that "For development purposes, the file:// protocol is allowed from the Local Machine zone." This local file should be part of the Local Machine Zone, right? How can I get code like this to work? I'm fine with prompting the user with security warnings; I'm not OK with forcing them to turn off security in the control panel.

    EDIT: I am not, in fact, loading an XML document in my case; I'm loading a plain text file (.txt).

    Read the article

  • Programmatically downloaded file is larger than it should be

    - by Dan Revell
    I'm trying to download a file from SharePoint. The file itself is an InfoPath template, although this is probably inconsequential. If I put the URL into Internet Explorer and save the file to disk, it comes in at 4.47KB and works correctly. If I try to download from the same URL in code, it comes back as 21.9KB and is corrupt. Why it's coming down as corrupt is what I'm trying to work out. The following are two ways of downloading the file that both produce the corrupt file at 21.9KB:

        /// web client
        {
            WebClient wc = new WebClient();
            wc.Credentials = CredentialCache.DefaultCredentials;
            wc.DownloadFile(templateUri, file);
            byte[] bytes = wc.DownloadData(templateUri);
        }

        /// web request
        {
            WebRequest request = WebRequest.Create(templateUri);
            request.Credentials = CredentialCache.DefaultCredentials;
            WebResponse responce = request.GetResponse();
            StreamReader sr = new StreamReader(responce.GetResponseStream());
            byte[] bytes = System.Text.Encoding.UTF8.GetBytes(sr.ReadToEnd());
        }

    And this is how I write the data to disk:

        using (FileStream fs = new FileStream(file, FileMode.Create, FileAccess.Write))
        {
            fs.Write(bytes, 0, bytes.Length);
        }

    Read the article

  • iphone file download not working

    - by Anonymous
    Hi, in my app I'm first connecting to a web service, which in return sends a URL for a file. I use the URL to download the file and then display it on the new view. I get the correct URL but am not able to download the file from that location. I have another test app which downloads a file from the same location and it works like a charm. Following is my code for the web service / file download. This is a snippet of the code where I'm parsing the web service XML and then passing the result to NSData for the file download. Any suggestions where I am going wrong? I'm referring to the Web Service and PDF Viewer tutorials.

        if ([elementName isEqualToString:@"PRHPdfResultsResult"])
        {
            NSLog(soapResults);
            UIAlertView *alert = [[UIAlertView alloc] initWithTitle:@"Report downloaded from:"
                                                            message:soapResults
                                                           delegate:self
                                                  cancelButtonTitle:@"OK"
                                                  otherButtonTitles:nil];

            NSData *pdfData = [[NSData alloc] initWithContentsOfURL:[NSURL URLWithString:soapResults]];

            // Store the data locally as a PDF file
            NSString *resourceDocPath = [[NSString alloc] initWithString:[[[[NSBundle mainBundle] resourcePath]
                stringByDeletingLastPathComponent] stringByAppendingPathComponent:@"Documents"]];
            NSString *filePath = [resourceDocPath stringByAppendingPathComponent:@"myPDF.pdf"];
            [pdfData writeToFile:filePath atomically:YES];

            [alert show];
            [alert release];
            [soapResults setString:@""];
            elementFound = FALSE;
        }

    Read the article

  • Accents in uploaded file being replaced with '?'

    - by Katfish
    I am building a data import tool for the admin section of a website I am working on. The data is in both French and English, and contains many accented characters. Whenever I attempt to upload a file, parse the data, and store it in my MySQL database, the accents are replaced with '?'. I have text files containing data (charset is iso-8859-1) which I upload to my server using CodeIgniter's file upload library. I then read the file in PHP. My code is similar to this:

        $this->upload->do_upload();
        $data = array('upload_data' => $this->upload->data());

        $fileHandle = fopen($data['upload_data']['full_path'], "r");
        while (($line = fgets($fileHandle)) !== false) {
            echo $line;
        }

    This produces lines with accents replaced with '?'. Everything else is correct. If I download my uploaded file from my server over FTP, the charset is still iso-8859-1, but a diff reveals that the file has changed. However, if I open the file in TextEdit, it displays properly. I attempted to use PHP's stream_encoding method to explicitly set my file stream to iso-8859-1, but my build of PHP does not have the method. After running out of ideas, I tried wrapping my strings in both utf8_encode and utf8_decode. Neither worked. If anyone has any suggestions about things I could try, I would be extremely grateful.

    Read the article

  • PHP IIS problems downloading file says it is corrupt

    - by Matt
    Hi, I am running PHP on IIS 6 with MSSQL. I have uploaded a file to my web server through a PHP script. Upon checking the file on the server, the file is OK and not corrupt. However, when I then follow a link on my website to try and download the file, it says the file is corrupt. I know the file isn't corrupt as I can view it perfectly if I look at the file on the server. It seems like this is a common problem, as a similar problem was posted here: http://www.bigresource.com/Tracker/Track-php-1pAakBhT/ Any help would be much appreciated. Thanks, M

    My download code is as follows:

        $filesize = $rows->filesize;
        $filepath = $rows->filepath;

        header("Content-Disposition: attachment; filename=$filename");
        header("Content-length: $filesize");
        header("Content-type: application/pdf");
        header("Cache-control: must-revalidate");
        header("Content-Description: PHP Generated Data");
        readfile($filepath);

    Read the article

  • How to Save data in txt file in MATLAB

    - by Jessy
    I have 3 txt files: s1.txt, s2.txt, s3.txt. Each has the same format and number of data points. I want to combine only the second column of each of the 3 files into one file. Before I combine the data, I sort each file according to the 1st column.

    Unsorted files:

        s1.txt    s2.txt    s3.txt
        1 23      2 33      3 22
        4 32      4 32      2 11
        5 22      1 10      5 28
        2 55      8 11      7 11

    Sorted files:

        s1.txt    s2.txt    s3.txt
        1 23      1 10      2 11
        2 55      2 33      3 22
        4 32      4 32      5 28
        5 22      8 11      7 11

    Here is the code I have so far:

        BaseFile = 's';
        n = 3;
        fid = fopen('RT.txt', 'w');
        for i = 1:n
            % Open each file consecutively
            d(i) = fopen([BaseFile num2str(i) '.txt']);
            % read data from file
            A = textscan(d(i), '%f%f');
            a = A{1};
            b = A{2};
            ab = [a, b];
            % sort the data according to the 1st column
            B = sortrows(ab, 1);
            % delete the 1st column after it is sorted
            B(:,1) = [];
            % write to a new file
            fprintf(fid, '%d\n', B');
            %close (d(i));
        end
        fclose(fid);

    How can I get the output in the new txt file in this format?

        23 10 11
        55 33 22
        32 32 28
        22 11 11

    instead of this format?

        23 55 32 22
        10 33 32 11
        11 22 28 11

    Read the article

  • Sorting a very large text file in Java

    - by Alice
    Hi, I have a large text file I need to sort in Java. The format is:

        word [tab] frequency [new line]

    The algorithm for sorting is:

    1. Read some of the file, filtering for purely alphabetic words.
    2. Once you have X number of alphabetic words, call Collections.sort and write the result to a file.
    3. Repeat until you have finished reading the file.
    4. Start reading two sorted files, comparing line by line for the word with the higher frequency, and writing at the same time to a new file so as not to load much into memory.
    5. Repeat until all files are merged into one large file.

    Right now I've divided the large file into smaller ones (sorted by descending frequency) with 10,000 lines each. I know I need to somehow merge these files back together, but I'm not sure how to go about this. I've created a LinkedList to keep track of all the files created. The algorithm says to compare each line in the two files, except I've tried a case where, say, file1 = 8,6,5,3,1 and file2 = 9,8,8,8,8. Then if I compare them line by line I would get file3 = 9,8,8,6,8,5,8,3,8,1 which is incorrectly sorted (they should be in decreasing order). I think I'm misunderstanding some part of the algorithm. If someone could point out what I should do instead, I'd greatly appreciate it. Thanks.

    edit: Yes this is an assignment. We aren't allowed to increase memory unfortunately :(
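
    For reference, a minimal sketch of the two-file merge step described above, advancing only the file whose line was just written rather than both files each time (assuming the word[tab]frequency format and hypothetical chunk file names), might look like:

        import java.io.BufferedReader;
        import java.io.FileReader;
        import java.io.IOException;
        import java.io.PrintWriter;

        public class MergeSketch {
            // Frequency is the second tab-separated field on each line.
            private static long freq(String line) {
                return Long.parseLong(line.split("\t")[1]);
            }

            public static void main(String[] args) throws IOException {
                // Hypothetical names; both inputs are already sorted by descending frequency.
                try (BufferedReader r1 = new BufferedReader(new FileReader("chunk1.txt"));
                     BufferedReader r2 = new BufferedReader(new FileReader("chunk2.txt"));
                     PrintWriter out = new PrintWriter("merged.txt")) {

                    String a = r1.readLine();
                    String b = r2.readLine();
                    while (a != null && b != null) {
                        // Write the line with the higher frequency and advance only that file.
                        if (freq(a) >= freq(b)) {
                            out.println(a);
                            a = r1.readLine();
                        } else {
                            out.println(b);
                            b = r2.readLine();
                        }
                    }
                    // Drain whichever file still has lines left.
                    for (; a != null; a = r1.readLine()) out.println(a);
                    for (; b != null; b = r2.readLine()) out.println(b);
                }
            }
        }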

    Read the article

  • Move and rename file in android

    - by Andre Fróes
    I am trying to copy a file to another folder on the Android device, but so far I have had no success. I managed to do so with a selected image and when taking a photo, but not with arbitrary files. I've read and tried several solutions suggested by the community (I searched the forum and the internet), but none of them solved my copying problem.

    First things first, I added the permissions to my manifest:

        <uses-permission android:name="android.permission.WRITE_EXTERNAL_STORAGE" />
        <uses-permission android:name="android.permission.READ_EXTERNAL_STORAGE" />

    After that, before copying the file, I print its file path and the destination directory's file path:

        06-10 11:11:11.700: I/System.out(1442): /mimetype/storage/sdcard/Misc/Javascript erros for Submit and Plan buttons in IE.doc
        06-10 11:11:11.710: I/System.out(1442): /storage/sdcard/mywfm/checklist-files

    Both exist. To copy the file to the expected folder I used FileUtils:

        try {
            FileUtils.copyFile(selectedFile, dir);
        } catch (IOException e) {
            // TODO Auto-generated catch block
            e.printStackTrace();
        }

    The problem is: I get no exception and the file isn't there. I also tried this solution: How to move/rename file from internal app storage to external storage on Android? Same thing - no exception, but no file either.
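
    As a point of comparison, the copy can also be done by hand with plain streams, which takes the library out of the picture. This is only a sketch, and it assumes the FileUtils above is Apache Commons IO, whose two-argument copyFile expects a destination file rather than a directory:

        import java.io.File;
        import java.io.FileInputStream;
        import java.io.FileOutputStream;
        import java.io.IOException;
        import java.io.InputStream;
        import java.io.OutputStream;

        public class CopySketch {
            // Copy src into destDir under the same name; delete src afterwards to "move" it.
            public static void copyToDir(File src, File destDir) throws IOException {
                File dest = new File(destDir, src.getName());
                InputStream in = new FileInputStream(src);
                OutputStream out = new FileOutputStream(dest);
                try {
                    byte[] buffer = new byte[8192];
                    int read;
                    while ((read = in.read(buffer)) != -1) {
                        out.write(buffer, 0, read);
                    }
                } finally {
                    in.close();
                    out.close();
                }
            }
        }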

    Read the article

  • Migrating SQL Server Databases – The DBA’s Checklist (Part 1)

    - by Sadequl Hussain
    It is a fact of life: SQL Server databases change homes. They move from one instance to another, from one version to the next, from old servers to new ones. They move around as an organisation's data grows, applications are enhanced or new versions of the database software are released. If nothing else, servers become old and unreliable and databases eventually need to find a new home.

    Consider the following scenarios:

    1. A new database application is rolled out in a production server from the development or test environment
    2. A copy of the production database needs to be installed in a test server for troubleshooting purposes
    3. A copy of the development database is regularly refreshed in a test server during the system development life cycle
    4. A SQL Server is upgraded to a newer version. This can be an in-place upgrade or a side-by-side migration
    5. One or more databases need to be moved between different instances as part of a consolidation strategy. The instances can be running the same or different versions of SQL Server
    6. A database has to be restored from a backup file provided by a third party application vendor
    7. A backup of the database is restored in the same or a different instance for disaster recovery
    8. A database needs to be migrated within the same instance:
       a. Files are moved from direct attached storage to a storage area network
       b. The same database is copied under a different name for another application

    Migrating SQL Server database applications is a complex topic in itself. There are a number of components that can be involved: jobs, DTS or SSIS packages, logins or linked servers are only a few pieces of the puzzle. However, in this article we will focus only on the central part of migration: the installation of the database itself. Unless it is an in-place upgrade, typically the database is taken from a source server and installed in a destination instance. Most of the time, a full backup file is used for the rollout. The backup file is either provided to the DBA or the DBA takes the backup and restores it in the target server. Sometimes the database is detached from the source and the files are copied to and attached in the destination.

    Regardless of the method of copying, moving, refreshing, restoring or upgrading the physical database, there are a number of steps the DBA should follow before and after it has been installed in the destination. It is these post database installation steps we are going to discuss below. Some of these steps apply in almost every scenario described above, while some will depend on the type of objects contained within the database. Also, the principles hold regardless of the number of databases involved.

    Step 1: Make a copy of data and log files when attaching and detaching

    When detaching and attaching databases, ensure you have made copies of the data and log files if the destination is running a newer version of SQL Server. This is because once attached to a newer version, the database cannot be detached and attached back to an older version. Trying to do so will give you a message like the following:

        Server: Msg 602, Level 21, State 50, Line 1
        Could not find row in sysindexes for database ID 6, object ID 1, index ID 1. Run DBCC CHECKTABLE on sysindexes.
        Connection Broken

    If you try to back up the attached database and restore it in the source, it will still fail.
    Similarly, if you are restoring the database in a newer version, it cannot be backed up or detached and put back in an older version of SQL. Unlike the detach and attach method though, you do not lose the backup file or the original database here.

    When detaching and attaching a database, it is important you keep all the log files available along with the data files. It is possible to attach a database without a log file and SQL Server can be instructed to create a new log file; however this does not work if the database was detached when the primary file group was read-only. You will need all the log files in such cases.

    Step 2: Change database compatibility level

    Once the database has been restored or attached to a newer version of SQL Server, change the database compatibility level to reflect the newer version unless there is a compelling reason not to do so. When attaching or restoring from a previous version of SQL, the database retains the older version's compatibility level. The only time you would want to keep a database with an older compatibility level is when the code within your database is no longer supported by SQL Server. For example, outer joins with the *= or =* operators were still possible in SQL 2000 (with a warning message), but not in SQL 2005 anymore. If your stored procedures or triggers are using this form of join, you would want to keep the database with an older compatibility level. For a list of compatibility issues between older and newer versions of SQL Server databases, refer to the Books Online under the sp_dbcmptlevel topic.

    Application developers and architects can help you in deciding whether you should change the compatibility level or not. You can always change the compatibility mode from the newest to an older version if necessary. To change the compatibility level, you can either use the database's properties from SQL Server Management Studio or use the sp_dbcmptlevel stored procedure.

    Bear in mind that you cannot run the built-in reports for databases from SQL Server Management Studio if you keep the database with an older compatibility level. The following figure shows the error message I received when trying to run the "Disk Usage by Top Tables" report against a database. This database was hosted in a SQL Server 2005 system and still had a compatibility mode of 80 (SQL 2000).

    Continues…

    Read the article
