Search Results

Search found 40999 results on 1640 pages for 'duplicate files'.

  • No Sync of Files in Android U1 folder

    - by Oldbwl
    My desktop (Ubuntu 12.10) and laptop (Win 7) happily sync files; the Ubuntu One folders are perfectly in sync. I have an Asus Transformer with the Ubuntu One app installed. I can read files from the cloud, and they download to a U1 folder. But if I edit the files on the Asus (say, a spreadsheet in Kingsoft), it appears they never go back to the cloud unless I manually select the file to be uploaded. Is this correct?

  • What do Windows 7 encrypted files look like?

    - by Sean Farrell
    Ok, this is kind of an odd question: what do Windows 7 (Home Premium) encrypted files look like "from the outside"? Now here is the story. An acquaintance of a friend of mine got a nasty virus / scareware, so I whipped out my PC technician cap and went to work on it. What I did was remove the drive from the laptop and put the drive into my external drive bay. I scanned the drive and, yes, it was loaded with stuff. That basically cured the infection and I could start the system back up. To check that it had cured the problem I wanted to see the system while running. There were two user accounts, one with a password and one without (both admin users!?!). So I logged into the unprotected user and cleaned up the residual issues, like the proxy server set to localhost in the browser config. Now I wanted to do the same for the password-protected user. What I noticed was that, from my system and from the unprotected user account, the files of the protected user looked garbled. The filenames are something like 12 random alphanumeric characters, but the folders looked OK. Naive as I was, I thought this might be how encrypted files look "from the outside". (I never use Microsoft's own security features, so how would I know? TrueCrypt is one big blob.) Since the second user could not be reached, I thought "sod it" and removed the password from the account. (That might have been a mistake, I know.) Then I did the same cleanup tasks and all was nice and fine, except for the files, which were still "encrypted". So I looked into many Windows Encrypted Files recovery posts, and not all hope is lost, since I should be able to extract the certificate and, with the password, regain access to the files. Also note that Windows "only" prompted me that removing the password would be insecure, not that access to encrypted files would be lost, as is claimed in most recovery articles. Resetting the password did not help, and I gave up for the night. The question that nagged me half of the last night was: what if the files are not encrypted, but the scareware encrypted / destroyed them? I don't want to spend hours of work trying to recover files that are not recoverable. The thing is that the user does not remember turning encryption on, and aren't encrypted files marked in blue, with the filename still readable? Many thanks for input from users who have more knowledge about WEF...

  • Tool or script to detect moved or renamed files on Linux prior to a backup

    - by Pharaun
    Basically I am searching for a tool or script that can detect moved or renamed files, so that I can get a list of them and apply the same operations on the other end of the network to conserve bandwidth. Disk storage is cheap but bandwidth isn't, and the problem is that the files often get reorganized or moved into a better directory structure. When you use rsync to do the backup, rsync won't notice that a file was renamed or moved and re-transmits it over the network all over again, despite the same file existing on the other end. So I am wondering if there is a script or tool that can record where all the files are and their names, then just prior to a backup rescan and detect moved or renamed files, so I can take that list and re-apply the moves/renames on the other side. The "general" features of the files: they are large, they don't change, and they can be renamed or moved around. [Edit:] These are all good answers, and what I ended up doing was looking at all of them and writing some code to deal with this (see the sketch below). Basically what I am thinking/working on now is:

    - using something like AIDE for the "initial" scan, which also lets me keep checksums on the files (they are supposed to never change, so checksums help detect corruption);
    - creating an inotify daemon that monitors these files/directories and records any renames and moves to a log file;
    - since there are edge cases where inotify might fail to record that something happened to the file system, a final step of using find to search the file system for files that have a change time later than the last backup.

    This has several benefits:

    - checksums etc. from AIDE make it possible to check that some media did not get corrupted;
    - inotify keeps resource usage low, and there is no need to re-scan the filesystem over and over;
    - no need to patch rsync (if I have to patch things I can, but I would prefer to avoid it to keep the maintenance burden lower, i.e. no need to re-patch every time there is an update).

    I've used Unison before and it's really nice; however, I could've sworn that Unison keeps copies around on the filesystem and that its "archive" files can grow rather large?
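
    The sketch mentioned in the edit, reduced to its core in Java for illustration. The MD5 choice, the two-directory interface and the mv-style output are assumptions of this sketch, not part of the plan above:

        import java.io.InputStream;
        import java.nio.file.Files;
        import java.nio.file.Path;
        import java.security.MessageDigest;
        import java.util.HashMap;
        import java.util.Map;
        import java.util.stream.Stream;

        // Builds a checksum -> relative-path index of a tree, then diffs two
        // indexes to recover renames/moves. Because the files are large and
        // never change, an equal checksum is taken to mean "same file".
        public class RenameDetector {

            static String hex(byte[] digest) {
                StringBuilder sb = new StringBuilder();
                for (byte b : digest) sb.append(String.format("%02x", b));
                return sb.toString();
            }

            static Map<String, Path> index(Path root) throws Exception {
                Map<String, Path> idx = new HashMap<>();
                try (Stream<Path> walk = Files.walk(root)) {
                    for (Path p : walk.filter(Files::isRegularFile).toList()) {
                        MessageDigest md = MessageDigest.getInstance("MD5");
                        try (InputStream in = Files.newInputStream(p)) {
                            byte[] buf = new byte[1 << 16];
                            for (int n; (n = in.read(buf)) > 0; ) md.update(buf, 0, n);
                        }
                        idx.put(hex(md.digest()), root.relativize(p));
                    }
                }
                return idx;
            }

            public static void main(String[] args) throws Exception {
                Map<String, Path> before = index(Path.of(args[0])); // state saved at the last backup
                Map<String, Path> after  = index(Path.of(args[1])); // state just before this backup
                for (Map.Entry<String, Path> e : before.entrySet()) {
                    Path now = after.get(e.getKey());
                    if (now != null && !now.equals(e.getValue()))
                        System.out.println("mv \"" + e.getValue() + "\" \"" + now + "\"");
                }
            }
        }

    In a real setup the "before" index would be loaded from the state saved at the previous backup (e.g. the AIDE database) rather than re-hashed, and the emitted moves would be replayed on the remote side before running rsync.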

  • How do .so files avoid the problems with passing header-only templates that MS DLL files have?

    - by Doug T.
    Based on the discussion around this question, I'd like to know how .so files / the ELF format / the GCC toolchain avoid the problems with passing classes defined purely in header files (like the std library). According to Jan in that answer, the dynamic linker/loader only picks one version of such a class to load if it's defined in two .so files. So if two .so files have two definitions, perhaps built with different compiler options etc., the dynamic linker can pick one to use. Is this correct? How does this work with inlining? For example, MSVC inlines templates aggressively, which makes the solution I describe above untenable for DLLs. Does GCC never inline header-only templates like the std library the way MSVC does? If so, wouldn't that make the ELF functionality described above ineffective in these cases?

  • No inodes left error, but df -i says the contrary

    - by abhinavkulkarni
    I copied a lot of files to my mounted Windows drive from Ubuntu and subsequently ran into the error Error opening file '/media/windows/<some-file-path>': No space left on device. I checked the output of the df -i command to see if I had run out of inodes on the mounted Windows drive:

        Filesystem        Inodes   IUsed     IFree IUse% Mounted on
        /dev/sda5        2363904  504119   1859785   22% /
        udev              207621     522    207099    1% /dev
        tmpfs             211487     450    211037    1% /run
        none              211487       3    211484    1% /run/lock
        none              211487       7    211480    1% /run/shm
        none              211487      19    211468    1% /run/user
        /dev/sda2      458686680 2588876 456097804    1% /media/windows

    As the output above shows, lots of inodes are available for the /media/windows drive, and I have plenty of disk space left, around 500 GB. What's the problem then?

  • Uploading Files Using ASP.NET Web Forms, Generic Handler and jQuery

    - by bipinjoshi
    In order to upload files from the client machine to the server, ASP.NET developers use the FileUpload server control. The FileUpload server control essentially renders an INPUT element with its type set to file and allows you to select one or more files. The actual upload operation is performed only when the form is posted to the server. Instead of making a full page postback, you can use jQuery to make an Ajax call to the server and POST the selected files to a generic handler (.ashx). The generic handler can then save the files to a specified folder. The remainder of this post shows how this can be accomplished: http://www.bipinjoshi.net/articles/f2a2f1ee-e18a-416b-893e-883c800f83f4.aspx

  • Approach to retrieving files from a server

    - by Aerus
    I'm in the process of making a Java application with a corresponding update application. At any given time the user may want to update the application, and the updater will ask for a list of files of the latest release. Based on this list, the updater can determine which files need to be downloaded to complete the update. I now have two approaches to solve this, but I would like to know which approach will put the least stress on my application and server: (1) I send a list of the files I want to download to my server, and the server zips those files and simply returns the compressed archive to the application; (2) the updater sends a request for each separate file to the server, which simply returns that file. The application will be used mainly in Belgium and The Netherlands, and connections/bandwidth tend to be pretty decent here. The average size of a single file should be around 100 KB, and at most 1 MB. I expect an update to have anywhere between 10 and 50 new files, and at most 100 persons/day updating the application, i.e. in the week when a new version is released. I hope this is enough information to sketch my problem, and any advice is welcome. If there is another common way to tackle this, I'd be glad to hear it.
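
    For what it's worth, a minimal Java sketch of the first approach is below. The /update endpoint, the Updater class name and the newline-separated request body are assumptions made for illustration, not an existing API:

        import java.io.OutputStream;
        import java.net.HttpURLConnection;
        import java.net.URL;
        import java.nio.file.Files;
        import java.nio.file.Path;
        import java.nio.file.StandardCopyOption;
        import java.util.List;
        import java.util.zip.ZipEntry;
        import java.util.zip.ZipInputStream;

        // Option 1: POST the list of wanted files, receive a single zip back,
        // and unpack it into the install directory.
        public class Updater {
            public static void fetchUpdate(List<String> wanted, Path installDir) throws Exception {
                HttpURLConnection con = (HttpURLConnection)
                        new URL("https://example.com/update").openConnection();
                con.setRequestMethod("POST");
                con.setDoOutput(true);
                try (OutputStream out = con.getOutputStream()) {
                    out.write(String.join("\n", wanted).getBytes("UTF-8"));
                }
                try (ZipInputStream zip = new ZipInputStream(con.getInputStream())) {
                    for (ZipEntry e; (e = zip.getNextEntry()) != null; ) {
                        Path target = installDir.resolve(e.getName()).normalize();
                        if (!target.startsWith(installDir))   // guard against zip-slip entries
                            throw new SecurityException("bad entry: " + e.getName());
                        if (e.isDirectory()) {
                            Files.createDirectories(target);
                        } else {
                            Files.createDirectories(target.getParent());
                            Files.copy(zip, target, StandardCopyOption.REPLACE_EXISTING);
                        }
                    }
                }
            }
        }

    With 10-50 files of around 100 KB each and at most ~100 updates a day, either approach is light on the server; the zip mainly saves per-request overhead and gives the client the whole update in one response.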

  • Preferred Apache permissions for www files with several authors

    - by user1316464
    I can't for the life of me figure out how to design the permissions scheme for my Apache files. My requirements seem pretty simple:

    - Apache should have standard permissions of RX on directories and R on files;
    - web authors should have RWX on directories and RW on files;
    - "other" should get no access at all;
    - new files/folders should inherit the proper permissions.

    Here are the schemes I've tried. First, 570 for directories and 460 for files, with owner Apache and group Webdev. The problem here is that new files created by users in the Webdev group are owned by user:Webdev, so Apache can't read them; and if Apache were in the Webdev group, it would have the wrong permissions (i.e. it would have write permission on files). Second, 750 for directories and 640 for files, with owner Webdev and group Apache (Webdev being a member of Apache). The problem here is that there is only one Webdev account, and I have multiple people who need access to contribute. In theory this would work with only one developer, if Webdev were also a member of the Apache group. Any ideas?

  • Converting MOD files to QuickTime or MPEG for Adobe Premiere Pro

    I've been editing lots of videos lately. My company got a video camera, a Canon Legria FS200, which saves the movies in a digital format as MOD files. Unfortunately, Adobe Premiere doesn't work with these files, so I needed software to convert MOD files to QuickTime or MPEG files. I found a good free one called MPEG Streamclip. It works well, it's pretty fast, and it's free. What's not to like?

  • Google Site Search (commercial) not indexing files in sitemap

    - by melat0nin
    I have a client for whom we have purchased Google Site Search. It works well for HTML pages served by the CMS, but files aren't being reliably indexed. I wrote a script to generate an XML feed (sitemap) of all the files in the CMS, which I've plugged into Google Webmaster Tools for the site. It says that for that sitemap 923 URLs have been submitted, but only 26 have been indexed. The client relies heavily on searching within files, which is why we decided to use Google search, so this is a bit of a problem. Many of the files aren't linked to from any page on the site, as they are old and therefore don't merit having a page of their own, but they still need to be accessible through search for archiving purposes. The file archive XML can be found at www.sniffer.org.uk/file-archive and the standard XML sitemap (of pages) can be found at www.sniffer.org.uk/sitemap.xml. Any thoughts would be much appreciated!

  • Mac Mini (running Ubuntu 14.04) loses WLAN connection when uploading larger files (several hundred MB) via ownCloud

    - by ManekenT
    I installed Ubuntu 14.04 on an old Mac Mini with the intention of running it as a home server. Additionally I installed ownCloud and tried to sync some files, both from a laptop running elementary OS and from a desktop running Windows 7. Syncing smaller files works like a charm (4000 files at <10 MB each), but when it comes to bigger files (a 1 GB Ubuntu ISO, for example) the upload fails after 20-100 MB. I can't ping the server anymore and the server can't ping me, although it still shows up as connected in our router. Disconnecting and reconnecting the WLAN connection fixes the issue until the next sync attempt. Edit: I also had to install the WLAN driver following this guide: https://help.ubuntu.com/community/MacBookPro8-2#Wireless

  • Can rsyslog transfer files present in a directory?

    - by Tarun
    I have configured rsyslog and it's working fine, doing what it's intended to do. These are the conf files.

    Server side:

        $template OTHERS,"/rsyslog/test/log/%fromhost-ip%/others-log.log"
        $template APACHEACCESS,"/rsyslog/test/log/%fromhost-ip%/apache-access.log"
        $template APACHEERROR,"/rsyslog/test/log/%fromhost-ip%/apache-error.log"

        if $programname == 'apache-access' then ?APACHEACCESS
        & ~
        if $programname == 'apache-error' then ?APACHEERROR
        & ~
        *.* ?OTHERS

    Client side:

        # Apache default access file:
        $ModLoad imfile
        $InputFileName /var/log/apache2/access.log
        $InputFileTag apache-access:
        $InputFileStateFile stat-apache-access
        $InputFileSeverity info
        $InputRunFileMonitor

        # Apache default error file:
        $ModLoad imfile
        $InputFileName /var/log/apache2/error.log
        $InputFileTag apache-error:
        $InputFileStateFile stat-apache-error
        $InputFileSeverity error
        $InputRunFileMonitor

        if $programname == 'apache-access' then @10.134.125.179:514
        & ~
        if $programname == 'apache-error' then @10.134.125.179:514
        & ~
        *.* @10.134.125.179:514

    Now, instead of defining each file separately, can I point rsyslog at the whole directory, so that the client automatically sends all the log files present in /var/log/apache2, and on the syslog server side these files automatically get stored under different filenames?

  • Automatically minify and combine JavaScript and CSS files in any web site

    This article describes a complete package that speeds up loading of JavaScript, CSS, and images in an ASP.NET web site. It accomplishes this by minifying JavaScript and CSS files on the fly (and caching the minified content), combining JavaScript and CSS files, making it easy to load files from cookieless domains, and much more. This article has full step by step installation instructions for both IIS 7 and IIS 6 (each version requires different additions to web.config). It will also detail how to use all the features, complete with short examples.

  • How to get files that have been added/modified in a batch file

    - by Chris L
    I have the following batch file, which concatenates all of the files in a folder that have a .sql ending:

        set func=%~dp0%Stored Procedures\*.sql
        for %%i in (%func%) do type "%%i" >> InstallScript.sql

    We use SVN as our repository, and we're using branching. Currently the script concatenates all the .sql files, even the ones that haven't changed. I'd like to change it so it only concatenates files that have been modified and/or created after the branch was created. We could do that by looking at the datetime on the .svn folder in each folder (there are Stored Procedures, Views and Functions subfolders), but I don't know how to do that with batch files. Ideally something like this (pseudocode):

        set func=%~dp0%Stored Procedures\*.sql
        set branchDateTime=GetDateTime(%~dp0%.svn)  <-- gets the datetime when the .svn folder was created
        for %%i in (%func%) {
            if (%%i.LastModifiedOrCreated > branchDateTime) do type "%%i" >> InstallScript.sql
        }

  • Ubuntu 12.04 External HD Issues

    - by Anthie Georgiadi
    I am experiencing one more problem with my Ubuntu 12.04... I have an external HD (500 GB) which is NTFS-formatted. I connected this HD to my Ubuntu 12.04 machine and copied some files. When I then connected the HD to a Windows 7 machine, I could find the folders (they were visible), but I wasn't able to copy/cut/delete them. However, when I opened the folders I could handle the material (MP3s) in them. Does anybody know how I can fix this? How can I fully access and modify folders copied from Ubuntu on a Windows machine? Thanks in advance!

  • Why is MediaWiki auto-linking the word “files”?

    - by dfrankow
    Our MediaWiki installation is auto-linking the word "files", so "Here are some files: a, b, c" would result in the word "files" being linked to http://ourhost/mediawiki/files. Why is that happening, and how do I make it stop? I can use the nowiki tag, but perhaps it does not surprise you that the word "files" appears often, and it is aggravating to use that tag all the time. Here is some info on our MediaWiki installation from Special:Version. Yes, it's old.

    Installed software:

        MediaWiki 1.16.5
        PHP 5.2.14-pl0-gentoo (apache2handler)
        MySQL 5.0.84

    Installed extensions (parser hooks):

        GoogleDocs4MW (Version 1.1): adds tag for Google Docs' spreadsheets display (Jack Phoenix)
        SyntaxHighlight (Version 1.0.8.6): provides syntax highlighting using GeSHi Highlighter (Brion Vibber, Tim Starling, Rob Church and Niklas Laxström)
        WebServiceSequenceDiagram (Version 1.0): render inline sequence diagrams using websequencediagrams.com (Eddie Olsson)

    Other:

        MWSearch: MWSearch plugin (Kate Turner and Brion Vibber)

    Extension functions: efLucenePrefixSetup. Parser extension tags: gallery, googlespreadsheet, html, nowiki, pre, sequencediagram, source and syntaxhighlight. Parser function hooks: anchorencode, basepagename, basepagenamee, defaultsort, displaytitle, filepath, formatdate, formatnum, fullpagename, fullpagenamee, fullurl, fullurle, gender, grammar, int, language, lc, lcfirst, localurl, localurle, namespace, namespacee, ns, nse, numberingroup, numberofactiveusers, numberofadmins, numberofarticles, numberofedits, numberoffiles, numberofpages, numberofusers, numberofviews, padleft, padright, pagename, pagenamee, pagesincategory, pagesize, plural, protectionlevel, special, subjectpagename, subjectpagenamee, subjectspace, subjectspacee, subpagename, subpagenamee, tag, talkpagename, talkpagenamee, talkspace, talkspacee, uc, ucfirst and urlencode.

  • Deleted files not increasing available free space on Ubuntu (as reported by df -h)

    - by Homunculus Reticulli
    I am writing data-munging scripts (Python and bash) to munge data and import large quantities of text files into a database. I am currently in the test phase, so I am generating several thousand files and deleting them (the files consume about 20 GB of space). After a test run I delete the files (sometimes without having imported them into the database). I notice that there is a steady decrease in the amount of free space on my disk (as reported by df -h). I don't understand this, as I use rm * (in the data directory), and in the cases where I use Nautilus, I empty the Trash bin as well. Similarly, I notice that when I import the data into the (PostgreSQL) database and then delete it from the tables using DELETE FROM tablename;, the size consumed in the PostgreSQL data directory does not go down either. Currently I have lost approximately 200 GB from the hard drive, and I need to reclaim that space, but I don't know how. Any ideas? I am running Ubuntu 10.04 LTS + PostgreSQL 8.4.

  • XNA stopped compiling my model .x files

    - by HuseyinUslu
    So I have a 3D game project I'm working on, and I'm using two model files (SkyBlock.x and AimedBlock.x). Until now everything was all good: my model files compiled fine and I was able to use them within my game. With the latest changes (and I don't know what caused it, really), XNA stopped compiling my model files and instead only outputs these files:

        AimedBlock.xnb - 1 KB
        SkyDome.xnb - 1 KB
        SkyDomeTexture.xnb - 1,389 KB
        SkyDomeTexture_0.xnb - 419 KB

    So I created a test XNA game project, moved all my assets to the new solution's content project, tried compiling them, and saw that they were all good:

        AimedBlock.xnb - 2 KB
        SkyDome.xnb - 13 KB
        SkyDomeTexture.xnb - 4,097 KB
        SkyDomeTexture_0.xnb - 683 KB

    So I guess my main project sucks there, but I couldn't come up with a solution. I even tried overwriting my game's content project with the new game's content project (which was all okay), but it didn't work. Has anybody had similar issues?

  • chown select files only

    - by user114642
    I use the (excellent) Unison to sync two file servers, and I've just realised I've synced a number of files without using the switch in Unison that maintains file user ownership. These files now have an owner of root (because I have to run Unison as root). Can I chown to a specified user BUT only change the files that currently have the owner root, and do so recursively in the directory in question? Sure I can, but I'm not sure of the arguments for "find files with owner 0 and change them to owner xxxx". Thanks for any help...

  • Would MD5 hashes allow detection of synced files?

    - by codpursue
    We have to develop our own file management system in a Java web application. We need to sync files between our main server and client servers and find out whether all the client servers have the latest version of the files. Our files are in PDF, DOC and XLS formats, and they change every now and then, as and when required. We are thinking of using an MD5 checksum to compute the hash of each file on the main server and store it in a database; the same would be stored in each client server's database. After comparing the records in the databases, we would know whether the client servers are in sync or not. Please suggest if there are better ways to do the same.
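
    As a rough sketch of that idea in Java (only the hashing itself is shown; how the hex string is stored and compared against the database is left out, and the class name is illustrative):

        import java.io.InputStream;
        import java.nio.file.Files;
        import java.nio.file.Path;
        import java.security.MessageDigest;

        // Computes the MD5 checksum of one file; the resulting hex string is
        // what would be stored per file on the main server and on each client.
        public class FileChecksum {
            public static String md5Of(Path file) throws Exception {
                MessageDigest md = MessageDigest.getInstance("MD5");
                try (InputStream in = Files.newInputStream(file)) {
                    byte[] buf = new byte[8192];
                    for (int n; (n = in.read(buf)) > 0; ) md.update(buf, 0, n);
                }
                StringBuilder hex = new StringBuilder();
                for (byte b : md.digest()) hex.append(String.format("%02x", b));
                return hex.toString();
            }

            public static void main(String[] args) throws Exception {
                System.out.println(md5Of(Path.of(args[0])));
            }
        }

    Storing one row per file (path, checksum, last-modified) also tells you which files on a client are stale, not just whether the whole server is out of sync. MD5 is fine for change detection; if tampering is ever a concern, "SHA-256" is a drop-in replacement in getInstance.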

  • Tomcat directly serving static (CSS, JS) files shared by multiple applications

    - by Josvic Zammit
    I'm using the ExtJS framework, which has a bulk of JS and CSS files that are used for all apps. I intend to share these between a number of web applications (different war files). For this reason I would like to serve the ExtJS JS and CSS files directly from the web server, in my case Tomcat 6, which can be used to serve static files, as in this helpful link. Therefore I put my files under /var/lib/tomcat6/webapps/ROOT/extjs/. The static files directly under that directory are served correctly, e.g. /extjs/ext.js correctly serves the file at /var/lib/tomcat6/webapps/ROOT/extjs/ext.js. However, files in lower-level directories, for example /extjs/welcome/css/welcome.css, which should serve the file at /var/lib/tomcat6/webapps/ROOT/extjs/welcome/css/welcome.css, return a 404. TL;DR: Tomcat serves static files only at the top-level directory; a 404 is returned for files deeper in the hierarchy. Config file contents: server.xml, application's web.xml.

  • Best Way for Developers to Upload Files to Production Server

    - by ultrajohn
    Small team of developers doing their work here and there. We have a team leader who is solely responsible for uploading updated source files from the development server to the production server. So if an updated file needs to be uploaded to the prod server, the developer concerned notifies the team lead, and the team lead updates the files on the prod server. No developer has access to the prod server except the team lead. That's our current setup. Now, what we want to do is give developers a way to upload their updated files to the server without the team lead intervening in the process. What do you think is the best way to go about this?
