Search Results

Search found 40999 results on 1640 pages for 'duplicate files'.


  • Set up a cron job with crontab over SSH [duplicate]

    - by user225711
    This question already has an answer here: Why is my crontab not working, and how can I troubleshoot it? (3 answers)

    Can someone help me set up a cron job with crontab? I ran the command

        crontab -e -u root 01 0 * * * /usr/bin/php /var/www/web/daily/testssh.php

    but it always fails with the error:

        crontab: usage error: no arguments permitted after this option
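
    The error is expected: crontab -e only opens root's table in an editor and accepts no job text on the command line, so the schedule line has to be added inside the editor. A minimal non-interactive alternative is a sketch like the following, which appends the entry to root's existing table:

        ( crontab -l -u root 2>/dev/null; \
          echo '01 0 * * * /usr/bin/php /var/www/web/daily/testssh.php' ) | crontab -u root -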

    Read the article

  • USB mouse disconnects only in Boot Camp (Win7; works fine on OS X) [duplicate]

    - by gourounakis
    This question already has an answer here: Why is my USB mouse disconnecting and reconnecting randomly and often? (7 answers)

    I have a mid-2010 iMac with a Logitech G500 mouse which works fine on OS X. I game on Windows 7 in Boot Camp, and for a month now I have been getting random mouse disconnects while gaming - sometimes none, sometimes 2-3 per minute. The mouse lights go off, I get the disconnect sound from Windows 7, and then it reconnects after a second or two. I tried changing the port the mouse is connected to, but the same thing happens. The only USB devices are an Apple keyboard with an Apple extension cord, the mouse, and a Creative Sound Blaster Tactic 3D Alpha USB gaming headset. Any ideas?

    Read the article

  • Use a local IP and stay free of SSL warnings [duplicate]

    - by Timothy Clemans
    This question already has an answer here: Loopback to forwarded Public IP address from local network - Hairpin NAT (6 answers)

    I run a public-facing website for a doctor's office that provides access to medical records, served over SSL. The server is at the doctor's office. When I access the website from the same network as the server, I want DNS to resolve to the server's local IP address. I don't want an HTTP redirect to the local IP because of the scary SSL warning. What's the recommended way of doing this?
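
    Besides hairpin NAT, the other standard answer is split-horizon DNS: the office's internal resolver hands out the private address for the same public hostname, so the certificate still matches and no redirect is needed. A minimal dnsmasq sketch, where the hostname and address are placeholders:

        # /etc/dnsmasq.conf on the office resolver
        address=/emr.example.com/192.168.1.10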

    Read the article

  • Prevent being locked out [duplicate]

    - by Nick
    This question already has an answer here: How do you test iptables rules to prevent remote lockout and check matches? (3 answers)

    When you are configuring iptables or sshd over SSH, and the data center is thousands of kilometers away (so getting someone there to plug in a KVM is hard), what are some standard practices to prevent locking yourself out?
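
    A widely used safety net is a scheduled rollback: save the known-good ruleset and queue an automatic restore before applying changes, then cancel the job once you have confirmed the session survived. A sketch (the 5-minute window is arbitrary):

        sudo iptables-save > /root/iptables.good
        echo 'iptables-restore < /root/iptables.good' | sudo at now + 5 minutes
        # apply the new rules; if you can still log in afterwards:
        atq            # note the pending job number
        sudo atrm 1    # hypothetical job number taken from atq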

    Read the article

  • Loading .obj files in the Soya3D engine

    - by John Riselvato
    I recently found Soya3D, and from what I have seen in the tutorials I will be able to make exactly what I wanted with my Python skills. I have built a map generator; the only issue is that I cannot work out from any of the documents how to load .obj files. At first I figured that I had to convert them to a .data file, but I don't understand how to do this. I just want to load a simple model of a house. I tried using soya_editor, but I can't figure out how to do anything with it. Here is my script so far:

        import sys, os, os.path, soya, soya.sdlconst

        width, height = 760, 375
        soya.init("Generator 0.1", width, height)
        soya.path.append(os.path.join(os.path.dirname(sys.argv[0]), "data"))

        scene = soya.World()
        model = soya.Model.get("house")
        light = soya.Light(scene)
        light.set_xyz(0.5, 0.0, 2.0)
        camera = soya.Camera(scene)
        camera.z = 2.0
        soya.set_root_widget(camera)
        soya.MainLoop(scene).main_loop()

    "house" is in .obj form in the folder data/models. The error I get is:

        Traceback (most recent call last):
          File "introduction.py", line 7, in <module>
            model = soya.Model.get("house")
          File "/usr/lib/pymodules/python2.6/soya/__init__.py", line 259, in get
            return klass._alls.get(filename) or klass._alls.setdefault(filename, klass.load(filename))
          File "/usr/lib/pymodules/python2.6/soya/__init__.py", line 268, in load
            dirname = klass._get_directory_for_loading_and_check_export(filename)
          File "/usr/lib/pymodules/python2.6/soya/__init__.py", line 194, in _get_directory_for_loading_and_check_export
            dirname = klass._get_directory_for_loading(filename, ext)
          File "/usr/lib/pymodules/python2.6/soya/__init__.py", line 171, in _get_directory_for_loading
            raise ValueError("Cannot find a %s named %s!" % (klass, filename))
        ValueError: Cannot find a <class 'soya.Model'> named house!
        * Soya3D * Quit...

    So I am figuring that because I don't understand how to turn my files into .data files, I will need to learn that. My question is: how do I use my own models?

    Read the article

  • XAMPP - Unable to serve files larger than ~30MB [on hold]

    - by Sparx401
    I'm developing a site locally with XAMPP on Windows 7, and I'm unable to play media files larger than about 30 MB. Both video and audio files (MP4 and MP3 respectively) generate this error in Chrome (and similar errors in other browsers such as IE9 and Opera):

        No data received
        Unable to load the webpage because the server sent no data.
        Error 324 (net::ERR_EMPTY_RESPONSE): The server closed the connection without sending any data.

    The exact cutoff in MB seems to vary between browsers: one 34 MB video actually plays in Opera and IE9 but gives the above error in Chrome. I've checked that the file paths are typed correctly and that .htaccess has the directive to serve MP4s:

        AddType video/mp4 mp4

    I also have these directives set in the same .htaccess file:

        php_value upload_max_filesize "80M"
        php_value post_max_size "80M"
        php_value max_input_time 60
        php_value max_execution_time 60

    and memory_limit is set to "128M" in php.ini. So I'm left wondering: what is causing my files not to play, and which directives, if any, do I have to change on the server side? Perhaps something to do with limitations of the GET method (the method I see on Chrome's network tab among the other request/response headers)?
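
    Note that the php_value limits only affect PHP scripts; static files served directly by Apache never touch them. On Windows, Apache failing part-way through large static files is often worked around by disabling sendfile and memory mapping. A hedged sketch for httpd.conf or the same .htaccess - worth trying, not a confirmed fix for this particular setup:

        EnableSendfile Off
        EnableMMAP Off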

    Read the article

  • File backup utility with incremental backups that keeps the backup device clean

    - by Wojtek
    I've tested a few backup utilities and still haven't found one that satisfies me. Almost every one of them offers two options:

    - full backup - not an option to use frequently
    - incremental backup - seems right, but there's one problem with it

    An incremental backup builds on a base full backup, backing up only the files that were created or changed. The trouble is that after some time you have a lot of unwanted files from old backups bloating your backup device. Also, if you accidentally delete your full (first) backup file, the incremental backups built on it become useless (you can't restore from them). What I'm looking for is a program that backs up files simply by copying them. For each file it would check whether the backup device already contains it, unchanged:

    - if yes, proceed to the next file (the current version is already backed up)
    - if no, copy the file to the backup device
    - if the device contains a file that is no longer on our disk, delete it from the backup device

    Is there any utility that works this way? If not, do you have any hints on how to back up a fairly large amount of data (around 20 GB) frequently, with incremental backups, without the backup size puffing up?
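
    What is described is exactly a mirror-style backup, which rsync does out of the box: unchanged files are skipped, new or changed files are copied, and --delete removes files that no longer exist on the source. A minimal sketch (paths are placeholders; the trailing slashes matter to rsync):

        rsync -a --delete /home/user/data/ /media/backup/data/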

    Read the article

  • Data aggregation of CSV files in Java

    - by royB
    I have k CSV files (5, for example); each file has m fields which form a key and n values. I need to produce a single CSV file with the aggregated data. I'm looking for the most efficient solution, speed mainly; I don't think we will have memory issues. I would also like to know whether hashing is really a good solution, given that we would have to use a 64-bit hash to keep the collision probability under 1% (we have around 30,000,000 rows per aggregation). For example, file 1:

        f1,f2,f3,v1,v2,v3,v4
        a1,b1,c1,50,60,70,80
        a3,b2,c4,60,60,80,90

    file 2:

        f1,f2,f3,v1,v2,v3,v4
        a1,b1,c1,30,50,90,40
        a3,b2,c4,30,70,50,90

    result:

        f1,f2,f3,v1,v2,v3,v4
        a1,b1,c1,80,110,160,120
        a3,b2,c4,90,130,130,180

    Algorithms we have considered so far: hashing (using a ConcurrentHashMap), merge-sorting the files, or a database (MySQL, Hadoop, or Redis). The solution needs to handle huge amounts of data (each file more than two million rows). A clearer example, file 1:

        country,city,peopleNum
        england,london,1000000
        england,coventry,500000

    file 2:

        country,city,peopleNum
        england,london,500000
        england,coventry,500000
        england,manchester,500000

    merged file:

        country,city,peopleNum
        england,london,1500000
        england,coventry,1000000
        england,manchester,500000

    Here the key is country,city. This is just an example; my real key is of size 6 and the data columns are of size 8 - 14 columns in total. We want the solution that is fastest at processing the data.
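
    The question asks for Java, but as a baseline: a single streaming pass is often enough for flat files of this shape, and it sidesteps the hashing question entirely, since keys are grouped as plain strings. A sketch with awk, where the column counts match the first example (3 key + 4 value; the real 6+8 layout needs the indices adjusted):

        awk -F, '
          FNR == 1 { if (!header) { print; header = 1 } next }     # emit one header line only
          {
            key = $1 FS $2 FS $3
            if (!(key in seen)) { seen[key] = 1; order[++n] = key } # keep first-seen order
            for (i = 4; i <= 7; i++) sum[key, i] += $i
          }
          END {
            for (j = 1; j <= n; j++) {
              key = order[j]; printf "%s", key
              for (i = 4; i <= 7; i++) printf ",%d", sum[key, i]
              print ""
            }
          }' file1.csv file2.csv > merged.csv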

    Read the article

  • text extraction from video game dialogue files [on hold]

    - by wdwvt1
    As part of an academic project, I am trying to access the dialogue files (whether audio or text) from a variety of sports video games (Madden or NBA 2kX would be fantastic). I have searched extensively on other sites (scholarly text-mining publications, r/gaming, r/madden, modding sites, etc.) for guidance on how to extract dialogue files, but have been unsuccessful. Given that I don't even have the domain-specific language to ask the right question (i.e. the resources I am seeking are out there, I just can't find them), I am asking the SE game dev community for help with the two following questions:

    1. Is there a canonical resource I should study that would get me started with extracting text or audio files from games? I am very fluent in Python, which usually excels at mining information from sources, but I struggle with knowing where to start with a video game (as opposed to a more familiar database with a defined API).
    2. Is this even feasible, or will the protections included with newer games (e.g. NBA 2k13) make extraction of these resources in a programmatic way impossible?

    Thank you for your help!

    Read the article

  • Include Binary Files in DEB package

    - by user22611
    I need to build a DEB package from mainly Node.js JavaScript files, but it should include some binary files as well. They are listed in debian/source/include-binaries; without that I get the error message

        dpkg-source: error: unrepresentable changes to source

    The command in question is:

        bzr builddeb -- -us -uc

    After adding the include-binaries file and running bzr builddeb -- -us -uc again, I now get a different error:

        dpkg-source: error: aborting due to unexpected upstream changes, see /tmp/mailadmin_0.0-1.diff.n6m5_6

    I have no idea how to get rid of this. The next line of output tells me

        dpkg-source: info: you can integrate the local changes with dpkg-source --commit

    but if I run that command in the build area of my package, it gives me the "unrepresentable changes to source" error again, even though debian/source/include-binaries is present in the build area as well. I'm missing the way out of this; I tried deleting all files produced by the build process, still no success. Further details: the target directory is /opt/mailadmin. Since this directory is unusual, I listed it in the file debian/mailadmin.install, which contains one line:

        opt/mailadmin opt/

    The bzr builddeb process uses this file as expected.
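
    For reference, debian/source/include-binaries is a plain list with one path per line, relative to the top of the source tree; every binary that differs from, or is absent in, the upstream tarball has to be listed. A sketch with placeholder paths:

        opt/mailadmin/bin/helper
        opt/mailadmin/lib/native.node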

    Read the article

  • Ubuntu One, compressed files

    - by user8179
    I have uploaded some files to my Ubuntu One account and it seems to work great most of the time. I usually upload them directly from Nautilus by right-clicking the folder, using the "Synchronize this folder" option, and then making sure that the file I want to upload is published. Then I usually test the whole thing by trying to download it: I right-click the file again to get its URL and paste it into my web browser. This usually works fine.

    But yesterday I uploaded two compressed files (".tar.bz2"). When I tried to open them after downloading them with my web browser (Opera), it failed. I found that the downloaded file was bigger than the original (2358 B instead of 2335 B - 15 B added at the beginning of the file and 8 B added at the end), and someone on the Opera channel (IRC) at OperaNet (Europe) figured out that the reason is that the server compresses the file again, "without telling Opera". So to extract the file I need to add ".gz" to the file name and then extract it twice. If I download it with Firefox, however, I don't need to do that, so maybe Firefox figures this out in a way that Opera does not. Someone also tried to download the file with wget and another browser, and he got the same result as I did with Opera: the file is compressed a second time by the server. I guess "the server" is the Ubuntu One server, right?

    So why is this? Could it be done better somehow? Or did I do something wrong when uploading the files? This extra compression also does not seem to happen every time: when I tried again a few minutes ago, the file came down at its right size (2335 B) without the extra compression, but the other file (114 MiB) was still compressed twice.
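
    One way to confirm the diagnosis is to look at the response headers: if the server adds Content-Encoding: gzip on top of an already-compressed .tar.bz2, a client that ignores that header will save the doubly-compressed bytes. A quick sketch (the URL is a placeholder):

        curl -sI https://ubuntuone.com/p/your-file-id/ | grep -i '^content-encoding'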

    Read the article

  • Adding files and folders to a Root Folder (inode/directory)

    - by xBaldwin
    Ok, so I'm fairly new to Ubuntu and wasn't even the one who put it on this computer (my friend did while I was storing it at his house, because I was in the middle of transitioning between houses), but it's on here, so I need to learn what I can so I can use it more effectively. My question at the moment is: would it be safe to add files/folders to a folder (inode/directory) that requires root access? The system keeps informing me that the directory I am using is running low on space, which I found odd seeing how I should have a lot more room on this computer. That's when I started looking at the directories and found two with a bunch of unused space on them: one says it has 46.9 GB of free space and the other has 24.9 GB. It seems a complete waste not to use that space, and yet both say they require root access to add to them. I know that root folders and files are normally all system folders and files, and that changing or deleting them can mess up the computer, which right now I can't afford to do. I just don't know if it would mess anything up to add something to those folders. Thank you in advance to anyone who takes the time to reply and tries to teach me how all that works. I really do appreciate it, and will do the same if by some crazy (completely unlikely) reason I have an answer to your question. :-)
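
    If the roomy directories are mounted system partitions, the usual safe pattern is not to write into them as root ad hoc, but to create one directory there and hand its ownership to your user. A minimal sketch, where the mount point is a placeholder:

        df -h                                     # confirm which filesystem actually has the free space
        sudo mkdir -p /mnt/bigdisk/$USER
        sudo chown "$USER":"$USER" /mnt/bigdisk/$USER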

    Read the article

  • Do you keep intermediate files under version control?

    - by Subb
    Here's an example with a Flash project, but I'm sure a lot of projects are like this. Suppose I create an image with Photoshop. I then export this image as a JPEG for integration in Flash. I compile the .fla as an asset library, which is then used in my Flash Builder project to produce the final .swf. So it goes like:

        psd => jpg -> fla => swc -> Flash Builder project => swf

        =>  : produces
        ->  : is used in

    The .psd, .fla, and Flash Builder project are source files: they are not the result of some process. The .jpg and .swc are what I would call "intermediate" files: they are the product of one (or more) source file(s). The .swf is the final result. So, would you keep those intermediate files under version control? How do you deal with them?
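
    The common practice is to version only what cannot be regenerated and ignore the rest, so the repository stays the single source of truth while intermediates are rebuilt on demand. A .gitignore sketch for the pipeline above, assuming the exported assets really are fully reproducible from the .psd and .fla:

        *.jpg
        *.swc
        *.swf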

    Read the article

  • All files gone after running fsck. How can I recover my files?

    - by cinlung
    I am a newbie in Linux, so here is my story. I installed Ubuntu Server 10.04 LTS. It worked great for many months, until today I decided to run fsck on the system partition; although it warned me, I kept pressing yes, and now it will only boot into the grub prompt. I read some articles and tried a grub reinstall, but before performing it I decided to run fsck again from an Ubuntu 10.04 LTS desktop live CD. This time fsck passed, the drive is recognized as an ext4 filesystem again, and I can mount it. However, all I can see are the boot directory and lost+found. I tried reinstalling grub (grub-install and so on); my grub is still not loading right, my files are missing, and the weird thing is that boot plus lost+found account for only 5 GB while 8 GB of the HDD is used. So my files must be somewhere on the disk. Is there any simple way, maybe a Windows tool or something, to recover my files? I only need to retrieve my database backup; everything else can go. I am freaking out here. Please help.
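
    When fsck detaches files from their directories it usually drops them into lost+found under numeric names, so the data can survive even though the directory tree is gone. With the partition mounted from the live CD, searching lost+found by content is a quick first pass; a sketch, where the mount point and search string are placeholders:

        sudo grep -rl 'CREATE TABLE' /mnt/disk/lost+found | head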

    Read the article

  • Best place to put application files [closed]

    - by takeshin
    Possible Duplicate: 'Installing' Applications, where to put folders?

    Where should I put applications that do not require installation (extracted from an archive), e.g.:

    - Java-based programs
    - executable scripts

    In two variants:

    - for all users
    - for one user

    Sometimes the archive itself contains directories like lib or bin, for example apps like ArgoUML. Should I put all such apps in /usr/local/appname?
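
    By FHS convention, self-contained add-on software goes under /opt (or /usr/local) system-wide, and under the home directory for a single user. A sketch, with file names as placeholders:

        sudo mv argouml /opt/argouml             # all users
        mkdir -p ~/opt && mv argouml ~/opt/      # one user only
        # optionally expose a launcher on PATH:
        sudo ln -s /opt/argouml/argouml.sh /usr/local/bin/argouml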

    Read the article

  • Optimize Duplicate Detection

    - by Dave Jarvis
    Background: this is an optimization problem. Oracle Forms XML files have elements such as:

        <Trigger TriggerName="name" TriggerText="SELECT * FROM DUAL" ... />

    where the TriggerText is arbitrary SQL code. Each SQL statement has been extracted into uniquely named files such as:

        sql/module=DIAL_ACCESS+trigger=KEY-LISTVAL+filename=d_access.fmb.sql
        sql/module=REP_PAT_SEEN+trigger=KEY-LISTVAL+filename=rep_pat_seen.fmb.sql

    I wrote a script to generate a list of exact duplicates using a brute-force approach. The problem: there are 37,497 files to compare against each other, and it takes 8 minutes to compare one file against all the others, so the script will complete in approximately 208 days. Logically, if A = B and A = C, then there is no need to check whether B = C. So the problem is: how do you eliminate the redundant comparisons? The comparison script is as follows:

        #!/bin/bash
        echo Loading directory ...
        for i in $(find sql/ -type f -name \*.sql); do
          echo Comparing $i ...
          for j in $(find sql/ -type f -name \*.sql); do
            if [ "$i" = "$j" ]; then continue; fi
            # Case insensitive compare, ignore spaces
            diff -iEbwBaq $i $j > /dev/null
            # 0 = no difference (i.e., duplicate code)
            if [ $? = 0 ]; then
              echo $i :: $j >> clones.txt
            fi
          done
        done

    How would you optimize the script so that checking for cloned code is a few orders of magnitude faster? System constraints: a quad-core CPU with an SSD, on a Windows machine with Cygwin installed; I'm trying to avoid cloud services if possible. Algorithms or solutions in other languages are welcome. Thank you!
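
    The standard fix is to stop comparing pairs altogether: normalize each file once (case, whitespace), hash the result, and group files by hash - roughly 37,000 hash operations instead of about 700 million pairwise diffs. A sketch in the same shell style (it inherits the original script's assumption that file names contain no whitespace):

        for f in $(find sql/ -type f -name \*.sql); do
          h=$(tr -d '[:space:]' < "$f" | tr 'A-Z' 'a-z' | md5sum | cut -d' ' -f1)
          printf '%s %s\n' "$h" "$f"
        done | sort | awk '$1 == prev { print last " :: " $2 } { prev = $1; last = $2 }' >> clones.txt

    Files that land in the same hash group can then be confirmed with a single diff each, so a hash collision can never put a false positive in the final list.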

    Read the article

  • SQL SERVER – Fix : Error : 3117 : The log or differential backup cannot be restored because no files

    - by pinaldave
    I received the following email from one of my readers:

    "Dear Pinal, I am new to SQL Server and our regular DBA is on vacation. Our production database had some problem and I have just restored the full database backup to the production server. When I try to apply the log backup I get the following error. I am sure this is a valid log backup file. [Few other details regarding server/IP address removed]

        Msg 3117, Level 16, State 1, Line 1
        The log or differential backup cannot be restored because no files are ready to roll forward.
        Msg 3013, Level 16, State 1, Line 1
        RESTORE LOG is terminating abnormally.

    Screenshot attached. [Removed as it contained a live IP address] Please help immediately."

    Well, I answered this question in an earlier post two years ago: SQL SERVER - Fix : Error : Msg 3117, Level 16, State 4 The log or differential backup cannot be restored because no files are ready to rollforward. However, I will try to explain it a little more this time. For a SQL Server database to be used, it should be in the online state. A SQL Server database can be in one of multiple states:

    - ONLINE (available for data)
    - OFFLINE
    - RESTORING
    - RECOVERING
    - RECOVERY PENDING
    - SUSPECT
    - EMERGENCY (limited availability)

    If the database is online, it is active and in operational mode. It makes no sense to apply further log backups once operations have continued on the database. The common practice during a restore is to specify the RECOVERY keyword: with RECOVERY, SQL Server brings the database back online and will not accept any further log backups. However, if you want to restore more than one backup file - that is, after restoring the full backup you want to apply a differential or log backup - you cannot do so while the database is online and active. The database needs to be in a state where it can accept further backup data rather than online data requests; if SQL Server were online and also accepting backup files, there could be data inconsistency. This is why, when there is more than one backup file to restore, all but the last must be restored with the NORECOVERY keyword in the RESTORE operation. I suggest you all read one more post written by me earlier, where I explain the timeline with images and graphics: SQL SERVER - Backup Timeline and Understanding of Database Restore Process in Full Recovery Model. Sample code for reference:

        RESTORE DATABASE AdventureWorks
        FROM DISK = 'C:\AdventureWorksFull.bak'
        WITH NORECOVERY;

        RESTORE DATABASE AdventureWorks
        FROM DISK = 'C:\AdventureWorksDiff.bak'
        WITH RECOVERY;

    In this post I am not trying to cover complete backup and recovery; I am only addressing one type of error and its resolution. Please test these scenarios on a development server: playing with live database backup and recovery is always crucial and needs to be properly planned. Leave a comment here if you need help with this subject. Similar post: SQL SERVER - Restore Sequence and Understanding NORECOVERY and RECOVERY. Note: we will cover standby server maintenance and recovery in another blog post; it is intentionally not covered here. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Pinal Dave, Readers Question, SQL, SQL Authority, SQL Backup and Restore, SQL Error Messages, SQL Query, SQL Scripts, SQL Server, SQL Tips and Tricks, T SQL, Technology

    Read the article

  • Convert old AVI files to a modern format

    - by iWerner
    Hi, we have a collection of old home videos that were saved in AVI format a long time ago. I want to convert these files to a more modern format, because the Totem Movie Player that comes with Ubuntu 10.04 seems to be the only program capable of playing them. The files are encoded with an MJPEG codec; playing them in VLC or Windows Media Player plays only the sound, with no video. Avidemux was able to open the files, but the quality of the video is severely degraded: the video skips frames and is interlaced (it is not interlaced when played in Totem). Neither ffmpeg nor mencoder seems able to read the video stream. mencoder reports that it is using ffmpeg's codec; here is a section of its output:

        ==========================================================================
        Opening video decoder: [ffmpeg] FFmpeg's libavcodec codec family
        [mjpeg @ 0x92a7260]mjpeg: using external huffman table
        [mjpeg @ 0x92a7260]mjpeg: error using external huffman table, switching back to internal
        Unsupported PixelFormat -1
        Selected video codec: [ffmjpeg] vfm: ffmpeg (FFmpeg MJPEG)

    while running ffmpeg produces the following:

        $ ffmpeg -i input.avi output.avi
        FFmpeg version SVN-r0.5.1-4:0.5.1-1ubuntu1, Copyright (c) 2000-2009 Fabrice Bellard, et al.
          configuration: --extra-version=4:0.5.1-1ubuntu1 --prefix=/usr --enable-avfilter --enable-avfilter-lavf --enable-vdpau --enable-bzlib --enable-libgsm --enable-libschroedinger --enable-libspeex --enable-libtheora --enable-libvorbis --enable-pthreads --enable-zlib --disable-stripping --disable-vhook --enable-runtime-cpudetect --enable-gpl --enable-postproc --enable-swscale --enable-x11grab --enable-libdc1394 --enable-shared --disable-static
          libavutil     49.15. 0 / 49.15. 0
          libavcodec    52.20. 1 / 52.20. 1
          libavformat   52.31. 0 / 52.31. 0
          libavdevice   52. 1. 0 / 52. 1. 0
          libavfilter    0. 4. 0 /  0. 4. 0
          libswscale     0. 7. 1 /  0. 7. 1
          libpostproc   51. 2. 0 / 51. 2. 0
          built on Mar 4 2010 12:35:30, gcc: 4.4.3
        [avi @ 0x87952c0]non-interleaved AVI
        Input #0, avi, from 'input.avi':
          Duration: 00:00:15.24, start: 0.000000, bitrate: 22447 kb/s
            Stream #0.0: Video: mjpeg, yuvj422p, 720x544, 25 tbr, 25 tbn, 25 tbc
            Stream #0.1: Audio: pcm_s16le, 44100 Hz, stereo, s16, 1411 kb/s
        Output #0, avi, to 'output.avi':
            Stream #0.0: Video: mpeg4, yuv420p, 720x544, q=2-31, 200 kb/s, 90k tbn, 25 tbc
            Stream #0.1: Audio: mp2, 44100 Hz, stereo, s16, 64 kb/s
        Stream mapping:
          Stream #0.0 -> #0.0
          Stream #0.1 -> #0.1
        Press [q] to stop encoding
        frame= 0 fps= 0 q=0.0 Lsize= 143kB time=15.23 bitrate= 76.9kbits/s
        video:0kB audio:119kB global headers:0kB muxing overhead 20.101777%

    So the problem is that the output does not contain any video, as evidenced by the video:0kB at the end. In all of the above cases the audio comes out fine. So my question is: what can I do to convert these files to a more modern format with more modern codecs?
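
    The mencoder log points at the real failure: the MJPEG streams carry an external Huffman table that this old FFmpeg 0.5 build cannot use. Current FFmpeg builds handle such tables far better, so before anything exotic it is worth trying a modern re-encode - the syntax below is for today's FFmpeg, not the 0.5 version shown above, and success on these particular files is not guaranteed:

        ffmpeg -i input.avi -c:v libx264 -preset slow -crf 18 -c:a aac output.mp4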

    Read the article

  • SQLite UPSERT - ON DUPLICATE KEY UPDATE

    - by Alix Axel
    MySQL has something like this:

        INSERT INTO visits (ip, hits)
        VALUES ('127.0.0.1', 1)
        ON DUPLICATE KEY UPDATE hits = hits + 1;

    As far as I know, this feature doesn't exist in SQLite. What I want to know is whether there is any way to achieve the same effect without having to execute two queries. Also, if this is not possible, which do you prefer: SELECT + (INSERT or UPDATE), or UPDATE (+ INSERT if the UPDATE fails)?
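
    SQLite has since gained a native UPSERT clause (version 3.24.0, 2018), which requires a UNIQUE or PRIMARY KEY constraint on the conflict column. A sketch through the sqlite3 shell, where the database file and schema are assumed:

        sqlite3 visits.db "INSERT INTO visits (ip, hits) VALUES ('127.0.0.1', 1)
                           ON CONFLICT(ip) DO UPDATE SET hits = hits + 1;"

    On older versions, INSERT OR REPLACE is the closest single statement, but note that it deletes and re-inserts the conflicting row rather than updating it in place.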

    Read the article

  • FogBugz Duplicate Cases

    - by LeeHull
    I am using FogBugz's free hosting to manage my project bugs. I also have several customers I create custom software for, and I've been using FogBugz to keep everything organized. My question is this: there are times when a customer emails me a bug, I report it in my system, and they create a case for it as well. Instead of having two cases for the same bug, I would like to merge or link them together rather than just delete the duplicate. Is there a way to link them, maybe as a cross-reference, or even merge them into one case?

    Read the article
