Search Results

Search found 49453 results on 1979 pages for 'memory mapped files'.

Page 289/1979 | < Previous Page | 285 286 287 288 289 290 291 292 293 294 295 296  | Next Page >

  • Debian/Linux backup files changed by user

    - by verhogen
    I would like to back up my server, which hosts a few websites, in such a way that I can restore everything to the way it was from a fresh format. I know that I should back up all the home folders and probably my /etc folder. Is there a way to figure out all the folders that are relevant for backup, i.e. those that were not automatically generated or installed from apt-get? Ideally it would restore all the users with their current passwords as well. Basically, enough to clone the system while only copying configuration files.
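    A minimal sketch of one way to approach this on a Debian system, assuming the debsums package is available (the /backup paths are illustrative):

        dpkg --get-selections > /backup/package-selections.txt   # the reinstallable package set
        tar czf /backup/etc.tar.gz /etc                          # config, incl. passwd/shadow for user accounts
        tar czf /backup/home.tar.gz /home                        # user data
        debsums -ce > /backup/modified-conffiles.txt             # conffiles that differ from packaged versions

    On restore, dpkg --set-selections < package-selections.txt followed by apt-get dselect-upgrade reinstalls the package set before the tarballs are unpacked back over /etc and /home.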

    Read the article

  • How do I copy files in Jolicloud from one drive to another

    - by Jason
    I'm running Jolicloud 1.2 from a USB stick. I clicked the "run but don't install" option at the startup menu and then created an account. It says I am logged in in guest mode. How can I copy files from my original C:\ drive, which is listed in the file manager, to my USB stick? There's no button, and drag and drop doesn't work. Is there a way to get into a terminal? Is it perhaps restricted because I am a "guest"?
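    A rough workaround sketch from a terminal (Ctrl+Alt+T opens one on many Ubuntu derivatives, which Jolicloud is based on); the device name, user folder, and mount points below are guesses for your machine, and guest mode may still block sudo:

        sudo mkdir -p /mnt/windows
        sudo mount -t ntfs-3g /dev/sda1 /mnt/windows           # the Windows C:\ partition
        cp -r /mnt/windows/Users/you/Documents /media/usb-stick/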

    Read the article

  • uploading files greater than 1MB = connection resets

    - by Legit
    I'm using nginx on the frontend as a proxy cache and Apache on the backend. I've set my PHP settings to the following:

        error_log = /var/www/site1/php_error.log
        error_reporting = 22527
        file_uploads = On
        log_errors = On
        max_execution_time = 0
        max_file_uploads = 20
        max_input_time = -1
        memory_limit = 512M
        post_max_size = 0
        upload_max_filesize = 1000M

    What's the problem? Uploading files smaller than 1 MB succeeds, but for anything larger Google Chrome outputs: Error 101 (net::ERR_CONNECTION_RESET): The connection was reset. I already checked for the PHP error log file, but it doesn't exist in the directory. I also checked /var/log/httpd/error_log, but found no upload-related problems. I don't know what else might be causing this, so I'm reaching out for your helping hand. Thanks!
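    The sharp 1 MB boundary points at nginx rather than PHP: nginx's own request-body cap, client_max_body_size, defaults to 1 MB, and exceeding it behind a proxy can surface as a reset connection. A sketch of the fix (the value is illustrative; match it to upload_max_filesize):

        # in nginx.conf, inside the http, server, or location block
        client_max_body_size 1000m;

    Reload nginx afterwards (nginx -s reload) and retest. It is also usual advice to set post_max_size at least as large as upload_max_filesize rather than relying on 0.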

    Read the article

  • Copying List of Files Through Powershell

    - by Driftpeasant
    So I'm trying to copy 44k files from one server to another. My PowerShell script is:

        Import-CSV f:\script\Listoffiles.csv | foreach $line {Move-Item $_.Source $_.Destination}

    with this format for the CSV:

        Source, Destination
        E:\folder1\folder2\file with space.txt, \\1.2.3.4\folder1\folder2\file with space.txt

    I keep getting:

        A positional parameter cannot be found that accepts argument '\\1.2.3.4\folder1\folder2\file'.
        At line:1 char:10
        + move-item <<<< E:\folder1\folder2\file with space.txt \\1.2.3.4\folder1\folder2\file with space.txt
            + CategoryInfo          : InvalidArgument: (:) [Move-Item], ParameterBindingException
            + FullyQualifiedErrorId : PositionalParameterNotFound,Microsoft.PowerShell.Commands.MoveItemCommand

    So I've tried putting "s around both paths, and also 's, and I still get either that error or "Move-Item: Could not find a part of the path" errors. Can anyone help me?

    Read the article

  • Is there a way to diagnose a tar file when, on extract, files are missing but no errors are given?

    - by ljvillanueva
    I have tar files in which I archive about 250 files, each about 80 MB, without compression. In a few cases tar returns only some of the files. For example, when extracting with:

        tar -xvf 356.tar

    I got only 103 files when it should return 255, but tar does not give me an error. Furthermore, the tar archive is 15.8 GB while the extracted folder is just 6.4 GB. The tar files were created using:

        tar -cvf 356.tar 356

    where 356 is the name of the folder. All the steps were done on the same machines, under Ubuntu 6 and newer. Any idea if there is a way to recover the files that are not being extracted?
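    A diagnostic sketch worth trying: GNU tar treats a block of zeros as the end-of-archive marker, so a damaged or concatenated archive can end "early" without any error, which would explain the 15.8 GB archive yielding only 6.4 GB. The --ignore-zeros flag tells tar to read past such blocks:

        tar -tvf 356.tar | wc -l                   # entries tar sees normally
        tar --ignore-zeros -tvf 356.tar | wc -l    # entries visible past zeroed blocks
        tar --ignore-zeros -xvf 356.tar            # extract everything reachable

    If the second count is higher, the missing files are likely recoverable this way.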

    Read the article

  • Sharing files from my Macbook to Windows Desktop

    - by Vahe
    What's the most dependable way to share files and folders from OS X to Windows (I'm using 7)? I've followed this guide: http://support.apple.com/kb/HT1812 However, this method does not seem dependable (it works only some of the time). My MacBook does not always show up under Network on Windows. On certain occasions, turning off Windows file sharing completely (in OS X preferences), restarting my MacBook, and turning it back on helps. Is there any reason why this is happening? Is there an alternative method?
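    The flaky part is usually NetBIOS name discovery rather than SMB itself. One workaround sketch is to map the share directly by the Mac's IP address from a Windows command prompt (the IP, share name, and account below are placeholders):

        net use Z: \\192.168.1.10\macshare /user:macusername /persistent:yes

    Giving the Mac a static IP or a DHCP reservation makes this reliable across reboots, since nothing then depends on browse-list discovery.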

    Read the article

  • FFmpeg add multiple audio files to video at specific points

    - by Arran
    I have two audio files, each about 3 minutes long. I want to take the first 10 seconds of each file and add them to a video file at specific points: 0 seconds and 10 seconds. So the resulting video should be 20 seconds long. I've got this far:

        ffmpeg -i video.mov -ss 0 -t 20 -itsoffset 0 -i audio1.mp3 -itsoffset 10 -i audio2.mp3 -acodec copy -vcodec copy out.mov

    ...but the resulting video has 20 seconds of the first audio file only; the second audio file doesn't start at 10 seconds like it should. Any help would be appreciated, thanks!
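    One reason this fails is that only one audio stream ends up mapped, and -itsoffset combines poorly with stream copy. A sketch using the trim/delay/mix filters instead (this re-encodes the audio, so -acodec copy is dropped; adelay takes milliseconds, one value per channel):

        ffmpeg -i video.mov -i audio1.mp3 -i audio2.mp3 -filter_complex \
          "[1:a]atrim=0:10[a1];[2:a]atrim=0:10,adelay=10000|10000[a2];[a1][a2]amix=inputs=2:duration=longest[aout]" \
          -map 0:v -map "[aout]" -c:v copy -t 20 out.mov

    Note that amix rescales volume when mixing; here the two clips don't overlap, so that mostly matters at the 10-second seam.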

    Read the article

  • Updating an Uploaded Zip File without Re-uploading All the Files

    - by Stranger
    Imagine I have uploaded a zip file which contains thousands of images and is about 2 GB in size. Now I want to add 100 new pictures whose total size is about 10 MB. The only way I can imagine is to add these images to my 2 GB collection, then upload the whole collection AGAIN!!! But I'm looking for a way to inject only the new files without re-uploading the whole collection. So is there any way to do this? Any good way or software?
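    If you have shell access to the host, one sketch is to upload only the new pictures and let zip -u (update) add them to the archive on the server side; the host and paths are placeholders, and this assumes the images live at the archive root:

        ssh user@host 'mkdir -p /tmp/new'
        scp new_pictures/*.jpg user@host:/tmp/new/
        ssh user@host 'cd /tmp/new && zip -u /path/to/collection.zip *.jpg'

    Only the 10 MB of new images crosses the wire; the 2 GB archive is rewritten locally on the server.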

    Read the article

  • Delete protected files in ntfs?

    - by Balchev
    I want to delete an old Windows directory from my system drive (C:), but I am unable to due to NTFS permissions. I tried from Windows 7 and Windows 2003, and from safe mode as well, with the same result. Is there any way to work around this (other than formatting the drive)? Perhaps changing the owner or something? It errors out at files like "oldwin/bfsvc.exe". Is there some "superuser" in Windows similar to the Linux root account? Thanks
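    Changing the owner is exactly the standard route. A sketch for an elevated command prompt (run as administrator; the path is taken from the question):

        rem seize ownership recursively, grant Administrators full control, then delete
        takeown /f C:\oldwin /r /d y
        icacls C:\oldwin /grant administrators:F /t
        rd /s /q C:\oldwin

    takeown plus icacls is the closest Windows equivalent to what root would simply bypass on Linux: ownership is taken first, and full rights follow from it.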

    Read the article

  • can't open large remote rss files

    - by anarchOi
    I'm setting up an RSS feed in vBulletin to import the entries into the forum. The RSS file is very large (2000+ entries) and it is hosted on a different server (I have control over it). The problem is that I cannot add this feed to vBulletin; I am getting a weird error. If I edit the feed and remove half of the entries then it works correctly, which makes me think it is because the file is too large. I suppose I have to change a setting in php.ini or something to allow bigger files to be opened, but what do I need to look for? Thanks. I'm on Debian.
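    A guess at the php.ini knobs that most often bite on large remote feeds; which one matters depends on the actual error, so temporarily enabling display_errors would tell you more:

        memory_limit = 256M              ; the whole feed gets parsed in memory
        max_execution_time = 120         ; importing 2000+ entries takes time
        default_socket_timeout = 120     ; remote fetch of a large file

    After editing, restart Apache (e.g. /etc/init.d/apache2 restart on Debian) for the change to take effect.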

    Read the article

  • Does LINQ require significantly more processing cycles and memory than lower-level data iteration techniques?

    - by Matthew Patrick Cashatt
    Background

    I am recently in the process of enduring grueling tech interviews for positions that use the .NET stack, some of which include silly questions like this one, and some questions that are more valid. I recently came across an issue that may be valid but I want to check with the community here to be sure. When asked by an interviewer how I would count the frequency of words in a text document and rank the results, I answered that I would:

    1. Use a stream object to put the text file in memory as a string.
    2. Split the string into an array on spaces while ignoring punctuation.
    3. Use LINQ against the array to .GroupBy() and .Count(), then OrderBy() said count.

    I got this answer wrong for two reasons:

    1. Streaming an entire text file into memory could be disastrous. What if it was an entire encyclopedia? Instead I should stream one block at a time and begin building a hash table.
    2. LINQ is too expensive and requires too many processing cycles. I should have built a hash table instead and, for each iteration, only added a word to the hash table if it didn't otherwise exist, and then incremented its count.

    The first reason seems, well, reasonable. But the second gives me more pause. I thought that one of the selling points of LINQ is that it simply abstracts away lower-level operations like hash tables but that, under the veil, it is still the same implementation.

    Question

    Aside from a few additional processing cycles to call any abstracted methods, does LINQ require significantly more processing cycles to accomplish a given data iteration task than a lower-level approach (such as building a hash table) would?
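    For what it's worth, the streaming-plus-hash-table pipeline the interviewer described is easy to see outside .NET. A sketch in awk, where count[] is exactly the incrementally built hash table and the file is read line by line rather than loaded whole (punctuation handling omitted for brevity):

        awk '{ for (i = 1; i <= NF; i++) count[tolower($i)]++ }
             END { for (w in count) print count[w], w }' document.txt | sort -rn | head

    GroupBy() does materialize each group's elements, so the count-only hash table is lighter on memory, but both are a single O(n) pass over the words; it is the constant factors, not the asymptotics, that differ.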

    Read the article

  • Problem in my terminal related to clamav

    - by Hejar Hejar
    Recently I decided to use Avast antivirus. I uninstalled ClamAV and installed Avast. Now whenever I use apt in my terminal I get the following warnings:

        5 upgraded, 0 newly installed, 0 to remove and 0 not upgraded.
        Need to get 0 B/1,250 kB of archives.
        After this operation, 5,120 B of additional disk space will be used.
        Do you want to continue [Y/n]? y
        (Reading database ...
        dpkg: warning: files list file for package `clamav-daemon' missing, assuming package has no files currently installed.
        dpkg: warning: files list file for package `python-pyclamav' missing, assuming package has no files currently installed.
        dpkg: warning: files list file for package `clamav-base' missing, assuming package has no files currently installed.
        dpkg: warning: files list file for package `clamav-freshclam' missing, assuming package has no files currently installed.
        dpkg: warning: files list file for package `libclamav6' missing, assuming package has no files currently installed.
        dpkg: warning: files list file for package `python-clamav' missing, assuming package has no files currently installed.
        (Reading database ... 554940 files and directories currently installed.)

    Please help. Thank you
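    A common cleanup sketch for this state, where dpkg has lost the packages' file lists: reinstall the packages named in the warnings so dpkg regenerates its records, then purge them properly:

        sudo apt-get install --reinstall clamav-daemon clamav-base clamav-freshclam \
            libclamav6 python-pyclamav python-clamav
        sudo apt-get purge clamav-daemon clamav-base clamav-freshclam \
            libclamav6 python-pyclamav python-clamav

    Once the purge completes cleanly, the warnings should stop appearing on every apt run.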

    Read the article

  • Creating a foreign-word learning site using memory techniques (Web 2.0)? Will it work?

    - by Michal P.
    I would like to earn a little money by realizing a good, simple project. My idea is to build a website for learning a language chosen by me (for users who know English) using mnemonics. Users would be encouraged to enter English words with a translation into another language, describing the way to remember the foreign-language word (an association link). Example: if I choose teaching Spanish to people who know English well, it would look like this: every user would be encouraged to enter a way to remember a Spanish word chosen by him/her. So he/she would enter into the dictionary (my site database), e.g., the English word beach - playa (Spanish word). Then he/she would describe the method to remember the Spanish word, e.g., "Imagine that you are on the beach and you play volleyball" - we have the word play and recall playa (mnemonics). I would like to allow picture hotlinks, and to encourage fun or slightly shocking memory links, which is -- in the art of memory -- good. I would choose the language so as to take a niche in Google Search. The big question is whether I would just be losing my time on it. (Maybe I need to build a prototype to check the idea?)

    Read the article

  • Best Method to SFTP or FTPS Files via SSIS

    - by Registered User
    What is the best method using SSIS (SQL Server Integration Services) to upload a file to either a remote SFTP (secure FTP with the SSH2 protocol) or FTPS (FTP over SSL) site? I've used the following methods, but each has shortcomings I would like to avoid:

    COZYROC LIBRARY
    Method: Install the CozyRoc library on each development and production server and use the SFTP task to upload the files.
    Pros: Easy to use. It looks, smells, and feels like a normal SSIS task. SSIS also recognizes the password as sensitive information and allows you all the normal options for protecting it instead of just storing it in clear text in a non-secure manner. Works well with other SSIS tasks such as ForEach Loop containers. Errors out when uploads and downloads fail. Works well when you don't know the names of the files on the remote FTP site to download, or when you won't know the name of the file to upload until run-time.
    Cons: Costs money to license in a production environment. Makes you dependent upon the vendor to update their libraries between each version. Although they already have a 2008 version, this caused me a problem during the CTPs of 2008. Requires installing the libraries on each development and production machine.

    COMMAND-LINE SFTP PROGRAM
    Method: Install a free command-line SFTP application such as PuTTY and execute it either by running a batch file or an operating system process task.
    Pros: Free, free, and free. You can be sure it is secure if you are using PuTTY, since numerous GUI FTP clients appear to use PuTTY under the covers. You DEFINITELY know you are using SSH2 and not SSH.
    Cons: The two command-line utilities I tried (PuTTY and Cygwin) required storing the SFTP password in a non-secure location. I haven't found a good way to capture failures or errors when uploading files. The process doesn't look and smell like SSIS; most of the code is encapsulated in text files instead of SSIS itself. Difficult to use if you don't know the exact name of the file you are uploading or downloading.

    A 3RD-PARTY C# OR VB.NET LIBRARY
    Method: Install an SFTP or FTPS library and use a Script Task that references the library to upload the files. (I've never tried this, so I'm going to guess at the pros and cons.)
    Pros: Probably easy to capture errors. Should work well with variables, so it would probably be easy to use even when you don't know the exact name of the file you are uploading or downloading.
    Cons: It's a script task combined with .NET libraries. If you are using SSIS, then you are probably more comfortable with SSIS tasks than .NET code. Script tasks are also difficult to troubleshoot since they don't have the same debugging tools and features as regular .NET projects. Creates a dependency on 3rd-party code that may not work between different versions of SQL Server. To be fair, it is probably MORE likely to work between different versions of SQL Server than a 3rd-party SSIS task library. Another huge con -- I haven't found a free C# or VB.NET library that does this as of yet. So if anyone knows of one, then please let me know!
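    On the PuTTY route, two of the cons can be softened: psftp's batch mode surfaces failures through its exit code, and key-based authentication avoids the clear-text password. A sketch runnable from an SSIS Execute Process task or batch file (the host, key, and paths are placeholders):

        rem build a one-line psftp script, then run it non-interactively
        echo put C:\data\export.csv /inbox/export.csv > upload.scr
        psftp user@sftp.example.com -i C:\keys\upload.ppk -b upload.scr -batch
        if errorlevel 1 echo upload failed

    The -batch flag makes psftp abort with a nonzero exit code instead of prompting, which is what lets the calling task detect the failure.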

    Read the article

  • Script help: find command via atime, output to multiple files

    - by sswagner
    Here is a script I have written that I need help with. In the script I do a find for any file that has not been accessed for over 30, 60, 90, 180, 270 and 365 days. This works just fine; however, it takes a few days just to finish the 30-day portion. It is scanning a NAS (millions and millions of files). As you can see, the 30-day output really holds all the data needed for the rest of the script: the 60, 90, etc. portions just redo the same effort as the 30-day portion, only with an extended time frame. It would save weeks of re-scanning if somehow the 60, 90, 180, etc. portions could get their data from the 30-day output. This is where I am asking for help. The output is just like an ls -l command, and as you can see from the sample below, there are multiple years in this output.

        total 24
        -rw-r--r-- 1 root bin 60 Apr 12 13:07 config_file
        -rw-r--r-- 1 root bin 9 Apr 12 13:07 config_file.InProgress
        -rw-r--r-- 1 root bin 0 Apr 12 13:07 config_file.sids
        -rw-r--r-- 1 root bin 1284 Apr 19 10:41 rpt_file
        -rw-r--r-- 1 16074 5003 20083 Apr 26 2002 /nas/quota/slot_2/CR_APP002/eb_ora_bin1/sun8/product/9.2s/oem_webstage/oracle/sysman/qtour/console/dat1_01.gif
        -rw-r--r-- 1 16074 5003 20088 Apr 26 2002 /nas/quota/slot_2/CR_APP002/eb_ora_bin1/sun8/product/9.2s/oem_webstage/oracle/sysman/qtour/console/set1_04.gif
        -rw-r--r-- 1 16074 5003 2008 Apr 26 2002 /nas/quota/slot_2/CR_APP002/eb_ora_bin1/sun8/product/9.2s/oem_webstage/oracle/sysman/qtour/oapps/get2_03.htm
        -rw-r--r-- 1 16074 5003 20083 Apr 26 2002 /nas/quota/slot_2/CR_APP002/eb_ora_bin1/sun8/product/9.2s/oem_webstage/oracle/sysman/qtour/oapps/per1_01.gif

    Any help is appreciated. These are Linux distro boxes, so I am sure Perl is on there too if needed. Thanks! The script:

        #!/bin/ksh
        # Search shares for files that have not been accessed for a certain time.
        # NOTE: $IN = input search, $OUT = output directory for text files.
        # TESTS: numeric arguments can be specified as
        #   +n for greater than n, -n for less than n, n for exactly n.
        #   -atime n: file was last accessed n*24 hours ago.
        IN1=/nas/quota/slot_2/CR*
        IN2=/nas/quota/slot_3/CR*
        IN3=/nas/quota/slot_4/CR*
        IN4=/nas/quota/slot_5/CR*
        OUT=/nas/quota/slot_3/CR_PRJ144/steve
        mkdir ${OUT}
        for dir in ${IN1}; do find $dir -atime +30 -exec ls -l '{}' \; >> ${OUT}/30days.txt; done
        for dir in ${IN2}; do find $dir -atime +30 -exec ls -l '{}' \; >> ${OUT}/30days.txt; done
        for dir in ${IN3}; do find $dir -atime +30 -exec ls -l '{}' \; >> ${OUT}/30days.txt; done
        for dir in ${IN4}; do find $dir -atime +30 -exec ls -l '{}' \; >> ${OUT}/30days.txt; done
        for dir in ${IN1}; do find $dir -atime +60 -exec ls -l '{}' \; >> ${OUT}/60days.txt; done
        for dir in ${IN2}; do find $dir -atime +60 -exec ls -l '{}' \; >> ${OUT}/60days.txt; done
        for dir in ${IN3}; do find $dir -atime +60 -exec ls -l '{}' \; >> ${OUT}/60days.txt; done
        for dir in ${IN4}; do find $dir -atime +60 -exec ls -l '{}' \; >> ${OUT}/60days.txt; done
        for dir in ${IN1}; do find $dir -atime +90 -exec ls -l '{}' \; >> ${OUT}/90days.txt; done
        for dir in ${IN2}; do find $dir -atime +90 -exec ls -l '{}' \; >> ${OUT}/90days.txt; done
        for dir in ${IN3}; do find $dir -atime +90 -exec ls -l '{}' \; >> ${OUT}/90days.txt; done
        for dir in ${IN4}; do find $dir -atime +90 -exec ls -l '{}' \; >> ${OUT}/90days.txt; done
        for dir in ${IN1}; do find $dir -atime +180 -exec ls -l '{}' \; >> ${OUT}/180days.txt; done
        for dir in ${IN2}; do find $dir -atime +180 -exec ls -l '{}' \; >> ${OUT}/180days.txt; done
        for dir in ${IN3}; do find $dir -atime +180 -exec ls -l '{}' \; >> ${OUT}/180days.txt; done
        for dir in ${IN4}; do find $dir -atime +180 -exec ls -l '{}' \; >> ${OUT}/180days.txt; done
        for dir in ${IN1}; do find $dir -atime +270 -exec ls -l '{}' \; >> ${OUT}/270days.txt; done
        for dir in ${IN2}; do find $dir -atime +270 -exec ls -l '{}' \; >> ${OUT}/270days.txt; done
        for dir in ${IN3}; do find $dir -atime +270 -exec ls -l '{}' \; >> ${OUT}/270days.txt; done
        for dir in ${IN4}; do find $dir -atime +270 -exec ls -l '{}' \; >> ${OUT}/270days.txt; done
        for dir in ${IN1}; do find $dir -atime +365 -exec ls -l '{}' \; >> ${OUT}/365days.txt; done
        for dir in ${IN2}; do find $dir -atime +365 -exec ls -l '{}' \; >> ${OUT}/365days.txt; done
        for dir in ${IN3}; do find $dir -atime +365 -exec ls -l '{}' \; >> ${OUT}/365days.txt; done
        for dir in ${IN4}; do find $dir -atime +365 -exec ls -l '{}' \; >> ${OUT}/365days.txt; done
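    A sketch of the post-processing idea, assuming GNU date is available: scan once with -atime +30, then carve the older buckets out of 30days.txt by parsing the date columns. One caveat taken from the script itself: ls -l prints modification time, so for the filtering to track access time the find should use -exec ls -lu instead:

        #!/bin/ksh
        # derive 60days.txt from 30days.txt without rescanning the NAS
        cutoff=$(date -d '60 days ago' +%s)
        while read -r line; do
            # fields 6-8 of ls -l output hold the date ("Apr 26 2002" or "Apr 12 13:07")
            ts=$(date -d "$(echo "$line" | awk '{print $6, $7, $8}')" +%s 2>/dev/null) || continue
            [ "$ts" -lt "$cutoff" ] && echo "$line"
        done < ${OUT}/30days.txt > ${OUT}/60days.txt

    The same loop with different cutoffs yields the 90-, 180-, 270- and 365-day files from the one expensive scan.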

    Read the article

  • Munin not creating HTML files in Ubuntu Server 14.04

    - by lepe
    I have used Munin on several servers, and this is the first time it is taking me so much time to set up. When I telnet the munin node directly, I can list the services, there are no errors in the logs, and munin updates every 5 minutes. However, no HTML files are created. I'm using the default location (/var/cache/munin/www) and I can confirm the permissions of that directory are set to munin.munin. (The IP and domain have been changed.)

    munin.conf:

        dbdir  /var/lib/munin
        htmldir /var/cache/munin/www
        logdir /var/log/munin
        rundir /var/run/munin

        [example.com;]
            address 100.100.50.200

    munin-node.conf:

        log_level 4
        log_file /var/log/munin/munin-node.log
        pid_file /var/run/munin/munin-node.pid
        background 1
        setsid 1
        user root
        group root
        host_name example.com
        allow ^127\.0\.0\.1$
        allow ^100\.100\.50\.200$
        allow ^::1$

    /etc/hosts:

        100.100.50.200 example.com
        127.0.0.1 localhost

    Talking to the node directly works:

        $ telnet example.com 4949
        Trying 100.100.50.200...
        Connected to example.com.
        Escape character is '^]'.
        # munin node at example.com
        list
        apache_accesses apache_processes apache_volume cpu cpuspeed df df_inode entropy fail2ban forks fw_packets if_err_eth0 if_err_eth1 if_eth0 if_eth1 interrupts ipmi_fans ipmi_power ipmi_temp irqstats load memory munin_stats mysql_bin_relay_log mysql_commands mysql_connections mysql_files_tables mysql_innodb_bpool mysql_innodb_bpool_act mysql_innodb_insert_buf mysql_innodb_io mysql_innodb_io_pend mysql_innodb_log mysql_innodb_rows mysql_innodb_semaphores mysql_innodb_tnx mysql_myisam_indexes mysql_network_traffic mysql_qcache mysql_qcache_mem mysql_replication mysql_select_types mysql_slow mysql_sorts mysql_table_locks mysql_tmp_tables ntp_2001:e40:100:208::123 ntp_91.189.94.4 ntp_kernel_err ntp_kernel_pll_freq ntp_kernel_pll_off ntp_offset ntp_states open_files open_inodes postfix_mailqueue postfix_mailvolume proc_pri processes swap threads uptime users vmstat
        fetch df
        _dev_sda3.value 2.1762874086869
        _sys_fs_cgroup.value 0
        _run.value 0.0503536980635825
        _run_lock.value 0
        _run_shm.value 0
        _run_user.value 0
        _dev_sda5.value 0.0176986285727571
        _dev_sda8.value 1.08464646179852
        _dev_sda7.value 0.0346633563514803
        _dev_sda9.value 6.81031810822797
        .

    /var/log/munin/munin-node.log:

        Process Backgrounded
        2014/08/16-14:13:36 Munin::Node::Server (type Net::Server::Fork) starting! pid(19610)
        Binding to TCP port 4949 on host 100.100.50.200 with IPv4
        2014/08/16-14:23:11 CONNECT TCP Peer: "[100.100.50.200]:55949" Local: "[100.100.50.200]:4949"
        2014/08/16-14:36:16 CONNECT TCP Peer: "[100.100.50.200]:56209" Local: "[100.100.50.200]:4949"

    /var/log/munin/munin-update.log:

        ...
        2014/08/16 14:30:01 [INFO]: Starting munin-update
        2014/08/16 14:30:01 [INFO]: Munin-update finished (0.00 sec)
        2014/08/16 14:35:02 [INFO]: Starting munin-update
        2014/08/16 14:35:02 [INFO]: Munin-update finished (0.00 sec)
        2014/08/16 14:40:01 [INFO]: Starting munin-update
        2014/08/16 14:40:01 [INFO]: Munin-update finished (0.00 sec)

    The html directory stays empty apart from the static assets:

        $ ls -la /var/cache/munin/www/
        drwxr-xr-x 3 munin munin   19 Aug 16 13:55 .
        drwxr-xr-x 3 root  root    16 Aug 16 13:54 ..
        drwxr-xr-x 2 munin munin 4096 Aug 16 13:55 static

    Any ideas on why it is not working?

    EDIT: This is how /var/log/munin/ looks after some days:

        -rw-r----- 1 www-data    0 Aug 16 13:54 munin-cgi-graph.log
        -rw-r----- 1 www-data    0 Aug 16 13:54 munin-cgi-html.log
        -rw-rw-r-- 1 munin       0 Aug 16 13:55 munin-html.log
        -rw-r----- 1 munin       0 Aug 19 06:18 munin-limits.log
        -rw-r----- 1 munin     15K Aug 18 14:10 munin-limits.log.1
        -rw-r----- 1 munin    1.8K Aug 18 06:15 munin-limits.log.2.gz
        -rw-rw-r-- 1 munin    1.3K Aug 17 06:15 munin-limits.log.3.gz
        -rw-r--r-- 1 root     6.5K Aug 16 13:55 munin-node-configure.log
        -rw-r--r-- 1 root        0 Aug 17 06:18 munin-node.log
        -rw-r--r-- 1 root      420 Aug 16 14:52 munin-node.log.1.gz
        -rw-r----- 1 munin       0 Aug 19 06:18 munin-update.log
        -rw-r----- 1 munin     11K Aug 18 14:10 munin-update.log.1
        -rw-r----- 1 munin    1.6K Aug 18 06:15 munin-update.log.2.gz
        -rw-rw-r-- 1 munin    1.5K Aug 17 06:15 munin-update.log.3.gz
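    A diagnostic sketch: the empty munin-html.log and the 0.00-second updates suggest the master is not actually processing the node, so running the steps by hand as the munin user usually surfaces the reason (the paths are Ubuntu 14.04 defaults):

        sudo -u munin /usr/share/munin/munin-update --debug
        sudo -u munin /usr/share/munin/munin-html --debug
        ls -la /var/cache/munin/www/

    If munin-update exits almost instantly without contacting 100.100.50.200, the host tree in munin.conf (the [example.com;] group line) is the first thing to re-check.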

    Read the article

  • yui compressor maven plugin doesn't compress the js files

    - by hanumant
    I am using the YUI Compressor to compress the JS files in my web app. I have configured the plugin as indicated on the yui compressor maven plugin site. This is the pom plugin configuration:

        <plugin>
          <groupId>net.sf.alchim</groupId>
          <artifactId>yuicompressor-maven-plugin</artifactId>
          <version>0.7.1</version>
          <executions>
            <execution>
              <phase>compile</phase>
              <goals>
                <goal>jslint</goal>
                <goal>compress</goal>
              </goals>
            </execution>
          </executions>
          <configuration>
            <failOnWarning>true</failOnWarning>
            <nosuffix>true</nosuffix>
            <force>true</force>
            <aggregations>
              <aggregation>
                <!-- remove files after aggregation (default: false) -->
                <removeIncluded>false</removeIncluded>
                <!-- insert new line after each concatenation (default: false) -->
                <insertNewLine>false</insertNewLine>
                <output>${project.basedir}/${webcontent.dir}/js/compressedAll.js</output>
                <!-- files to include, path relative to output's directory or absolute path -->
                <!--inputDir>base directory for non absolute includes, default to parent dir of output</inputDir-->
                <includes>
                  <include>**/autocomplete.js</include>
                  <include>**/calendar.js</include>
                  <include>**/dialogs.js</include>
                  <include>**/download.js</include>
                  <include>**/folding.js</include>
                  <include>**/jquery-1.4.2.min.js</include>
                  <include>**/jquery.bgiframe.min.js</include>
                  <include>**/jquery.loadmask.js</include>
                  <include>**/jquery.printelement-1.1.js</include>
                  <include>**/jquery.tablesorter.mod.js</include>
                  <include>**/jquery.tablesorter.pager.js</include>
                  <include>**/jquery.dialogs.plugin.js</include>
                  <include>**/jquery.ui.autocomplete.js</include>
                  <include>**/jquery.validate.js</include>
                  <include>**/jquery-ui-1.8.custom.min.js</include>
                  <include>**/languageDropdown.js</include>
                  <include>**/messages.js</include>
                  <include>**/print.js</include>
                  <include>**/tables.js</include>
                  <include>**/tabs.js</include>
                  <include>**/uwTooltip.js</include>
                </includes>
                <!-- files to exclude, path relative to output's directory -->
              </aggregation>
            </aggregations>
          </configuration>
          <dependencies>
            <dependency>
              <groupId>rhino</groupId>
              <artifactId>js</artifactId>
              <scope>compile</scope>
              <version>1.6R5</version>
            </dependency>
            <dependency>
              <groupId>org.apache.maven</groupId>
              <artifactId>maven-plugin-api</artifactId>
              <version>2.0.7</version>
              <scope>provided</scope>
            </dependency>
            <dependency>
              <groupId>org.apache.maven</groupId>
              <artifactId>maven-project</artifactId>
              <version>2.0.7</version>
              <scope>provided</scope>
            </dependency>
            <dependency>
              <groupId>net.sf.retrotranslator</groupId>
              <artifactId>retrotranslator-runtime</artifactId>
              <version>1.2.9</version>
              <scope>runtime</scope>
            </dependency>
          </dependencies>
        </plugin>

    And here is the log at compress time:

        These will use the artifact files already in the core ClassRealm instead, to allow them to be included in PluginDescriptor.getArtifacts().
        [DEBUG] Configuring mojo 'net.sf.alchim:yuicompressor-maven-plugin:0.7.1:jslint'
        [DEBUG]   (f) failOnWarning = true
        [DEBUG]   (f) jswarn = true
        [DEBUG]   (f) outputDirectory = C:\test\target\classes
        [DEBUG]   (f) project = MavenProject: com.test.test1:test2:19-SNAPSHOT @ C:\test\pom.xml
        [DEBUG]   (f) resources = [Resource {targetPath: null, filtering: false, FileSet {directory: C:\test\src, PatternSet [includes: {}, excludes: {**/*.class, **/*.java, site/*}]}}]
        [DEBUG]   (f) sourceDirectory = C:\test\src\..\js
        [DEBUG]   (f) warSourceDirectory = C:\test\src\main\webapp
        [DEBUG]   (f) webappDirectory = C:\test\target\test2-19-SNAPSHOT
        [DEBUG] -- end configuration --
        [INFO] [yuicompressor:jslint {execution: default}]
        [INFO] nb warnings: 0, nb errors: 0
        [DEBUG] Configuring mojo 'net.sf.alchim:yuicompressor-maven-plugin:0.7.1:compress'
        [DEBUG]   (f) removeIncluded = false
        [DEBUG]   (f) insertNewLine = false
        [DEBUG]   (f) output = C:\test\WebContent\js\compressedAll.js
        [DEBUG]   (f) includes = [**/autocomplete.js, **/calendar.js, **/dialogs.js, **/download.js, **/folding.js, **/jquery-1.4.2.min.js, **/jquery.bgiframe.min.js, **/jquery.loadmask.js, **/jquery.printelement-1.1.js, **/jquery.tablesorter.mod.js, **/jquery.tablesorter.pager.js, **/jquery.dialogs.plugin.js, **/jquery.ui.autocomplete.js, **/jquery.validate.js, **/jquery-ui-1.8.custom.min.js, **/languageDropdown.js, **/messages.js, **/print.js, **/tables.js, **/tabs.js, **/uwTooltip.js]
        [DEBUG]   (f) aggregations = [net.sf.alchim.mojo.yuicompressor.Aggregation@65646564]
        [DEBUG]   (f) disableOptimizations = false
        [DEBUG]   (f) encoding = Cp1252
        [DEBUG]   (f) failOnWarning = true
        [DEBUG]   (f) force = true
        [DEBUG]   (f) gzip = false
        [DEBUG]   (f) jswarn = true
        [DEBUG]   (f) linebreakpos = 0
        [DEBUG]   (f) nomunge = false
        [DEBUG]   (f) nosuffix = true
        [DEBUG]   (f) outputDirectory = C:\test\target\classes
        [DEBUG]   (f) preserveAllSemiColons = false
        [DEBUG]   (f) project = MavenProject: com.test.test1:test2:19-SNAPSHOT @ C:\test\pom.xml
        [DEBUG]   (f) resources = [Resource {targetPath: null, filtering: false, FileSet {directory: C:\test\src, PatternSet [includes: {}, excludes: {**/*.class, **/*.java, site/*}]}}]
        [DEBUG]   (f) sourceDirectory = C:\test\src\..\js
        [DEBUG]   (f) statistics = true
        [DEBUG]   (f) suffix = -min
        [DEBUG]   (f) warSourceDirectory = C:\test\src\main\webapp
        [DEBUG]   (f) webappDirectory = C:\test\target\test2-19-SNAPSHOT
        [DEBUG] -- end configuration --
        [INFO] [yuicompressor:compress {execution: default}]
        [INFO] generate aggregation : C:\test\WebContent\js\compressedAll.js
        [INFO] compressedAll.js (407505b)
        [INFO] nb warnings: 0, nb errors: 0
        [DEBUG] Configuring mojo 'org.apache.maven.plugins:maven-resources-plugin:2.2:testResources'
        [DEBUG]   (f) filters = []
        [DEBUG]   (f) outputDirectory = C:\test\target\test-classes
        [DEBUG]   (f) project = MavenProject: com.test.test1:test2:19-SNAPSHOT @ C:\test\pom.xml
        [DEBUG]   (f) resources = [Resource {targetPath: null, filtering: false, FileSet {directory: C:\test\test, PatternSet [includes: {}, excludes: {**/*.class, **/*.java}]}}]
        [DEBUG] -- end configuration --

    The problem is that the files are getting aggregated into one file, but without compression. The link above uses version 1.1 and the plugin version I am using is 0.7.1. Is this going to make any difference? Can someone tell me what is wrong here?

    PS: I have obfuscated some text in the log for compliance reasons at my firm, so you may find it mismatching somewhere.

    Read the article

  • use stream_socket_client to retrieve 2 remote files at the same time

    - by Hintswen
    I have a script in PHP which retrieves two very similar files and performs some tasks on the data, then outputs a result. I'm currently using curl: getting one file, processing it, then getting the other and processing it. I want to switch to stream_socket_client, as I've heard you can retrieve both files at the same time and do the processing once they have been retrieved, but I am unsure how to do this.
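    In PHP the usual ingredients are stream_socket_client() with the STREAM_CLIENT_ASYNC_CONNECT flag plus stream_select() to wait on both sockets (curl_multi_* is the curl-side equivalent). For a feel of the effect being asked for, the shell analogue is simply two transfers in flight at once:

        curl -s http://example.com/a.xml -o a.xml &
        curl -s http://example.com/b.xml -o b.xml &
        wait    # both downloads proceed concurrently; process after this point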

    Read the article

  • Convert OpenXml Excel files to HTML

    - by necrostaz
    Hello. I'm developing a printing solution for MS Office 2007; Office automation is not good for me because it requires an installed copy of Office. Open XML Document Viewer is a solution for converting Word files (.docx) to HTML via an XSLT transform, but it works only for .docx. Can you suggest related or similar solutions for Excel spreadsheet files? Thanks.

    Read the article

  • c# write big files to blob sqlite

    - by brizjin-gmail-com
    I have a C# application which writes files to an SQLite database. It uses Entity Framework to model the data, and writes a file to a blob (an entity byte[] variable) with this line:

        row.file = System.IO.File.ReadAllBytes(file_to_load.FileName); // row.file is of type byte[]; row is the entity class for the table

    All works correctly when the files are small. When the size is more than 300 MB, the app throws an exception: Exception of type 'System.OutOfMemoryException' was thrown. How can I write to the blob directly, without buffering the whole file in memory?

    Read the article

  • Slow sync of files from iPad to webserver?

    - by MikeN
    I'm building a recording iPad application that will make some moderately large recordings on the iPad (5-10 minutes of full audio, roughly 5-10 megabytes in size). How can I sync such large files to my web server? I want the syncing to occur asynchronously in the background. Is there an existing library/utility to sync files in the megabyte range from an iPhone/iPad to a server in small chunks?

    Read the article

  • Export Core Data Entity as text files in Cocoa

    - by happyCoding25
    Hello, I have an entity in Core Data that has two attributes, both strings: "name" and "message". I need a method to create text files for all the objects the user has added. I want the file names to come from the name attribute and the contents from the message attribute. If anyone knows how to do this, any help would be great. Thanks for any help.

    Read the article

  • SVNKit: Commit files that were manually deleted from the filesystem (working copy)

    - by Jam
    I cannot solve a problem with collecting CommitItems (the changes to commit). More precisely, I have no problem with changed and added files, BUT files that I manually deleted from the file system (the working copy) do not appear in the CommitItem list... and those changes cannot be committed to the SVN server. If I delete a file using the API, then the problem does not exist... but with manual deletion it does. Has anyone had a similar problem?
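    This mirrors how the svn client itself behaves: a file removed from disk is only "missing" until it is explicitly scheduled for deletion, and SVNKit follows the same working-copy model. The command-line equivalent of the missing step, as a sketch:

        # files deleted on disk show up as '!' in svn status; schedule them for deletion
        svn status | awk '/^!/ {print $2}' | xargs -r svn rm
        svn commit -m "remove files deleted from the working copy"

    In SVNKit the analogous move is to run a delete on the missing paths before collecting the commit items, so they change state from "missing" to "scheduled for deletion".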

    Read the article
