Search Results

Search found 208 results on 9 pages for 'infile'.

Page 8 of 9

  • Preview result of update/insert query without committing changes to database in MySQL?

    - by Camsoft
    I am writing a script to import CSV files into existing tables within my database. I decided to do the insert/update operations myself using PHP and INSERT/UPDATE statements rather than MySQL's LOAD DATA INFILE command; I have good reasons for this. What I would like to do is emulate the insert/update operations, display the results to the user, and then give them the option of confirming that this is OK before committing the changes to the database. I'm using the InnoDB database engine with support for transactions. Not sure if this helps, but I was thinking along the lines of: insert/update, query data, display to user, then either commit or roll back the transaction? Any advice would be appreciated.
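
    A hedged sketch of that flow with PDO (the table and column names here are invented for illustration); the important detail is that uncommitted rows are visible only to the connection holding the transaction, so the preview SELECT must run on the same handle:

        <?php
        // Sketch only: assumes PDO, an InnoDB table, and hypothetical names.
        $db = new PDO('mysql:host=localhost;dbname=test', 'user', 'pass');
        $db->beginTransaction();
        $stmt = $db->prepare('UPDATE prices SET amount = ? WHERE sku = ?');
        $stmt->execute(array($amount, $sku));   // $amount/$sku from the CSV row
        // Preview: this SELECT sees the uncommitted changes because it runs
        // on the same connection that opened the transaction.
        foreach ($db->query('SELECT sku, amount FROM prices') as $r) {
            echo $r['sku'], ' => ', $r['amount'], "\n";
        }
        if ($userConfirmed) {                   // hypothetical flag from the UI
            $db->commit();
        } else {
            $db->rollBack();
        }

    One catch for a web UI: a transaction cannot outlive its connection, so it cannot span two HTTP requests; the confirmation either has to happen within the same request, or the preview has to be emulated another way (staging tables, for instance).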

  • mysql does not utilize my cpu and ram enough?

    - by vick
    Hello Everyone! I am importing a 2.5gb csv file to a mysql table. My storage engine is innodb. Here is the script: use xxx; DROP TABLE IF EXISTS `xxx`.`xxx`; CREATE TABLE `xxx`.`xxx` ( `xxx_id` int(10) unsigned NOT NULL AUTO_INCREMENT, `name` varchar(128) NOT NULL, `yy` varchar(128) NOT NULL, `yyy` varchar(64) NOT NULL, `yyyy` varchar(2) NOT NULL, `yyyyy` varchar(10) NOT NULL, `url` varchar(64) NOT NULL, `p` varchar(10) NOT NULL, `pp` varchar(10) NOT NULL, `category` varchar(256) NOT NULL, `flag` varchar(4) NOT NULL, PRIMARY KEY (`xxx_id`) ) ENGINE=InnoDB DEFAULT CHARSET=latin1; set autocommit = 0; load data local infile '/home/xxx/raw.csv' into table company fields terminated by ',' optionally enclosed by '"' lines terminated by '\r\n' ( name, yy, yyy, yyyy, yyyyy, url, p, pp, category, flag ); commit; Why does my PC (core i7 920 with 6gb ram) only consume 9% cpu power and 60% ram when running these queries?
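
    Low CPU here usually just means the load is disk-bound: a single LOAD DATA statement runs on one thread, and 9% is roughly one hardware thread of a hyper-threaded quad-core i7 920. A hedged sketch of session settings commonly relaxed for a one-off bulk InnoDB load (illustrative, not guaranteed gains):

        -- Sketch: loosen per-row overhead for the duration of the import.
        SET unique_checks = 0;
        SET foreign_key_checks = 0;
        SET autocommit = 0;
        -- ... the LOAD DATA LOCAL INFILE statement exactly as above ...
        COMMIT;
        SET unique_checks = 1;
        SET foreign_key_checks = 1;

    In my.cnf, a larger innodb_buffer_pool_size (and, for the duration of the import only, innodb_flush_log_at_trx_commit = 2) tends to matter more than anything CPU-side.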

  • Regular expressions in a Python find-and-replace script?

    - by Haidon
    I'm new to Python scripting, so please forgive me in advance if the answer to this question seems inherently obvious. I'm trying to put together a large-scale find-and-replace script using Python. I'm using code similar to the following: findreplace = [ ('term1', 'term2'), ] inF = open(infile,'rb') s=unicode(inF.read(),charenc) inF.close() for couple in findreplace: outtext=s.replace(couple[0],couple[1]) s=outtext outF = open(outFile,'wb') outF.write(outtext.encode('utf-8')) outF.close() How would I go about having the script do a find and replace for regular expressions? Specifically, I want it to find some information (metadata) specified at the top of a text file. Eg: Title: This is the title Author: This is the author Date: This is the date and convert it into LaTeX format. Eg: \title{This is the title} \author{This is the author} \date{This is the date} Maybe I'm tackling this the wrong way. If there's a better way than regular expressions please let me know! Thanks!
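
    A hedged sketch of the regex route: re.sub with a multiline pattern ((?m) makes ^ and $ match at each line) can rewrite all three header lines in one pass. The key list is an assumption; extend it to whatever metadata the files really carry:

        import re

        def metadata_to_latex(text):
            # Turn "Title: value" style header lines into \title{value} etc.
            # Assumes each key maps directly to a LaTeX command of the same name.
            return re.sub(r'(?m)^(Title|Author|Date):\s*(.+)$',
                          lambda m: '\\%s{%s}' % (m.group(1).lower(), m.group(2)),
                          text)

        s = metadata_to_latex(s)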

  • c++ strings and file input

    - by Dalton Conley
    Ok, it's been a while since I've done any file input or string manipulation, but what I'm attempting to do is as follows: while(infile >> word) { for(int i = 0; i < word.length(); i++) { if(word[i] == '\n') { cout << "Found a new line" << endl; lineNumber++; } if(!isalpha(word[i])) { word.erase(i); } if(islower(word[i])) word[i] = toupper(word[i]); } } Now I assume this is not working because >> skips the newline character? If so, what's a better way to do this?
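
    Exactly: operator>> skips all whitespace, newlines included, so '\n' can never appear inside word. Reading line-by-line with std::getline keeps the line count, and the loop below also fixes a quieter bug: erase(i) with one argument removes everything from i to the end, where erase(i, 1) removes a single character. A sketch:

        #include <cctype>
        #include <fstream>
        #include <sstream>
        #include <string>

        int countAndClean(std::ifstream& infile) {
            std::string line, word;
            int lineNumber = 0;
            while (std::getline(infile, line)) {   // getline consumes the '\n'
                ++lineNumber;
                std::istringstream iss(line);
                while (iss >> word) {
                    for (std::string::size_type i = 0; i < word.size(); ) {
                        if (!isalpha(static_cast<unsigned char>(word[i]))) {
                            word.erase(i, 1);  // one char; erase(i) would drop the rest
                        } else {
                            word[i] = toupper(static_cast<unsigned char>(word[i]));
                            ++i;               // only advance when nothing was erased
                        }
                    }
                }
            }
            return lineNumber;
        }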

  • How to change particular column entries in a mysql table when uploading data from csv file?

    - by understack
    I upload data into a mysql table from a csv file in a standard way like this: TRUNCATE TABLE table_name; load data local infile '/path/to/file/file_name.csv' into table table_name fields terminated by ',' enclosed by '"' lines terminated by '\r\n' (id, name, type, deleted); All 'deleted' column entries in the csv file have either a 'current' or a 'deleted' value. Question: when the csv data is being loaded into the table, I want to put the current date in the table for all the corresponding 'deleted' entries in the csv file, and null for the 'current' entries. How can I do this? Example: csv file: id_1, name_1, type_1, current id_2, name_1, type_2, deleted id_3, name_3, type_3, current Table after loading this data should look like this: id_1, name_1, type_1, null id_2, name_1, type_2, 2010-05-10 id_3, name_3, type_3, null Edit: I could probably run a separate query after loading the csv file, but I'm wondering if it could be done in the same query?
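
    It can be done in the same statement: LOAD DATA can read a field into a user variable and transform it in a SET clause. A sketch with the columns above:

        LOAD DATA LOCAL INFILE '/path/to/file/file_name.csv'
        INTO TABLE table_name
        FIELDS TERMINATED BY ',' ENCLOSED BY '"'
        LINES TERMINATED BY '\r\n'
        (id, name, type, @status)
        SET deleted = IF(@status = 'deleted', CURDATE(), NULL);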

  • setsockopt EOPNOTSUPP (Operation not supported)

    - by brant
    When I strace my MySQL process, I keep finding the same error over and over: setsockopt(240, SOL_IP, IP_TOS, [8], 4) = -1 EOPNOTSUPP (Operation not supported) futex(0x87ab944, FUTEX_WAKE_OP_PRIVATE, 1, 1, 0x87ab940, {FUTEX_OP_SET, 0, FUTEX_OP_CMP_GT, 1}) = 1 futex(0x87ab260, FUTEX_WAKE_PRIVATE, 1) = 1 select(13, [10 12], NULL, NULL, NULL) = 1 (in [12]) fcntl64(12, F_SETFL, O_RDWR|O_NONBLOCK) = 0 accept(12, {sa_family=AF_FILE, path="\246\32629iE"...}, [2]) = 803 fcntl64(12, F_SETFL, O_RDWR) = 0 getsockname(803, {sa_family=AF_FILE, path="/var/lib/mysql\1"...}, [28]) = 0 fcntl64(803, F_SETFL, O_RDONLY) = 0 fcntl64(803, F_GETFL) = 0x2 (flags O_RDWR) fcntl64(803, F_SETFL, O_RDWR|O_NONBLOCK) = 0 setsockopt(803, SOL_IP, IP_TOS, [8], 4) = -1 EOPNOTSUPP (Operation not supported) futex(0x87ab944, FUTEX_WAKE_OP_PRIVATE, 1, 1, 0x87ab940, {FUTEX_OP_SET, 0, FUTEX_OP_CMP_GT, 1}) = 1 futex(0x87ab260, FUTEX_WAKE_PRIVATE, 1) = 1 select(13, [10 12], NULL, NULL, NULL) = 1 (in [12]) fcntl64(12, F_SETFL, O_RDWR|O_NONBLOCK) = 0 accept(12, {sa_family=AF_FILE, path="\246\32629iE"...}, [2]) = 240 fcntl64(12, F_SETFL, O_RDWR) = 0 getsockname(240, {sa_family=AF_FILE, path="/var/lib/mysql\1"...}, [28]) = 0 fcntl64(240, F_SETFL, O_RDONLY) = 0 fcntl64(240, F_GETFL) = 0x2 (flags O_RDWR) fcntl64(240, F_SETFL, O_RDWR|O_NONBLOCK) = 0 setsockopt(240, SOL_IP, IP_TOS, [8], 4) = -1 EOPNOTSUPP (Operation not supported) When I look for running mysql processes I don't see anything out of the ordinary. I figured it might be someplace in my code, so I modified .htaccess to spit out a 502 error to prevent it from loading anything. The error still shows up, just less frequently. There have been quite a few threads that talk about this error, but no real answer as to how to solve it. my.conf, as per request: [mysqld] #skip-networking #log-slow-queries #safe-show-database #local-infile = 0 log-slow-queries = /var/log/mysql-slow.log max_connections = 200 query_cache_limit = 128643200 key_buffer_size = 1200144000 low_priority_updates = 1 concurrent_insert = 2 thread_cache_size = 7 query_cache_size = 662144000 table_cache = 1600 table_definition_cache = 1024 long_query_time = 2.5 open_files_limit = 2647 max_connect_errors=999999999

  • How can I get MySQL 5.5 to log warnings to one of the log files?

    - by Wodin
    I have found various things that say that you can log warnings to the MySQL error log, but I have not been able to actually make it happen. I do have the error log working, and MySQL prints stuff to it on startup and shutdown and occasionally at other times, but if I e.g. SELECT CAST('123' AS DATE); and then SHOW WARNINGS; I can see the warning, but it does not show up in any logs. I've also tried enabling the general log and the slow query log, but these don't show the warnings either. I've tried with log_warnings = 1 and log_warnings = 2, but still no warnings are logged. What am I doing wrong? mysql> show variables like '%error%'; +--------------------+--------------------------+ | Variable_name | Value | +--------------------+--------------------------+ | error_count | 0 | | log_error | /var/log/mysql/mysql.err | | max_connect_errors | 10 | | max_error_count | 1024 | | slave_skip_errors | OFF | +--------------------+--------------------------+ mysql> show variables like '%warn%'; +---------------+-------+ | Variable_name | Value | +---------------+-------+ | log_warnings | 1 | | sql_warnings | OFF | | warning_count | 0 | +---------------+-------+ 3 rows in set (0.06 sec) mysql> show variables like '%log%'; +-----------------------------------------+-------------------------------+ | Variable_name | Value | +-----------------------------------------+-------------------------------+ ... | general_log | ON | | general_log_file | /var/log/mysql/general.log | ... | log | ON | ... | log_error | /var/log/mysql/mysql.err | | log_output | FILE | | log_queries_not_using_indexes | ON | ... | log_warnings | 1 | ... | slow_query_log | ON | | slow_query_log_file | /var/log/mysql/mysql-slow.log | ... +-----------------------------------------+-------------------------------+ Edit: mysql> show global status like 'Aborted%'; +------------------+-------+ | Variable_name | Value | +------------------+-------+ | Aborted_clients | 24 | | Aborted_connects | 15 | +------------------+-------+ 2 rows in set (0.08 sec) Edit: Clarification: I do get [Warning] Aborted connection 1 to db... and [Warning] Access denied for user... messages logged, but not the warnings that you can see via SHOW WARNINGS after e.g. inserting something or running LOAD DATA INFILE... which is what I'm looking for.

  • avconv gets killed if mkv has subtitles

    - by Lukas Knuth
    What I'm trying to do is to take a movie (in a Matroska container), convert all audio tracks to AC3, and leave everything else untouched. I'm using this line: avconv -i infile.mkv -map 0 -vcodec copy -scodec copy -acodec ac3 -ab 256k outfile.mkv This works fine, except when there are subtitles embedded. Then, after some time processing with no progress, avconv just "dies" (output shortened, these seem to be the interesting parts): [matroska,webm @ 0xf867a0] max_analyze_duration reached [matroska,webm @ 0xf867a0] Estimating duration from bitrate, this may be inaccurate ... Incompatible sample format 's16' for codec 'ac3', auto-selecting format 'flt' ... Stream #0.0(eng): Video: H264 / 0x34363248, yuv420p, 1280x536 [PAR 1:1 DAR 160:67], q=2-31, 1k tbn, 1k tbc (default) Stream #0.1(ger): Audio: ac3, 48000 Hz, 5.1, flt, 256 kb/s (default) Stream #0.2(eng): Audio: ac3, 48000 Hz, 5.1, flt, 256 kb/s Stream #0.3(ger): Subtitle: dvdsub (default) (forced) Metadata: title : forced Stream #0.4(ger): Subtitle: dvdsub Metadata: title : complete Stream mapping: Stream #0:0 -> #0:0 (copy) Stream #0:1 -> #0:1 (dca -> ac3) Stream #0:2 -> #0:2 (dca -> ac3) Stream #0:3 -> #0:3 (copy) Stream #0:4 -> #0:4 (copy) Input stream #0:2 frame changed from rate:48000 fmt:s16 ch:6 to rate:48000 fmt:flt ch:6 Input stream #0:1 frame changed from rate:48000 fmt:s16 ch:6 to rate:48000 fmt:flt ch:6 frame= 2606 fps=1303 q=-1.0 size= 3kB time=107.36 bitrate= 0.2kbits/s ... frame=96141 fps=813 q=-1.0 size= 2195806kB time=2807.04 bitrate=6408.2kbits/s frame=96251 fps=810 q=-1.0 size= 2195806kB time=2807.04 bitrate=6408.2kbits/s ... frame=97015 fps=397 q=-1.0 size= 2195806kB time=2807.04 bitrate=6408.2kbits/s Getötet ["Killed", in English] I have no idea why this happens, as there is no error output. I'd like to just copy the subtitles over and not touch them at all. If that won't work, they can be completely dropped.
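
    If losing the subtitles is acceptable, a hedged workaround is to map only the video and audio streams instead of -map 0 (or add -sn to disable subtitle recording), so the dvdsub streams never enter the pipeline that appears to stall:

        avconv -i infile.mkv -map 0:v -map 0:a -vcodec copy -acodec ac3 -ab 256k outfile.mkv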

  • Correct MySQL username/password, but getting Access Denied error when run from script

    - by Nick
    I'm currently trying to run the following command from within a shell script. /usr/bin/mysql -u username -ppassword -h localhost database It works perfectly fine when executed manually, and not from within a script. When I try to execute a script that contains that command, I get the following error: ERROR 1045 (28000) at line 3: Access denied for user 'username'@'localhost' (using password: YES) I literally copied and pasted the working command into the script. Why the error? As a sidenote: the ultimate intent is to run the script with cron. EDIT: Here is a stripped down version of my script that I'm trying to run. You can ignore most of it up until the point where it connects to MySQL around line 19. #!/bin/sh #Run download script to download product data cd /home/dir/Scripts/Linux /bin/sh script1.sh #Run import script to import product data to MySQL cd /home/dir/Mysql /bin/sh script2.sh #Download inventory stats spreadsheet and rename it cd /home/dir /usr/bin/wget http://www.url.com/file1.txt mv file1.txt sheet1.csv #Remove existing export spreadsheet rm /tmp/sheet2.csv #Run MySQL queries in "here document" format /usr/bin/mysql -u username -ppassword -h localhost database << EOF --Drop old inventory stats table truncate table table_name1; --Load new inventory stats into table Load data local infile '/home/dir/sheet1.csv' into table table_name1 fields terminated by ',' optionally enclosed by '"' lines terminated by '\r\n'; --MySQL queries to combine product data and inventory stats here --Export combined data in spreadsheet format group by p.value into outfile '/tmp/sheet2.csv' fields terminated by ',' optionally enclosed by '"' lines terminated by '\r\n'; EOF EDIT 2: After some more testing, the issue is with the << EOF that is at the end of the command. This is there for the "here document". When removed, the command works fine. The problem is that I need << EOF there so that the MySQL queries will run.
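
    Worth checking, offered as an assumption rather than a diagnosis: cron and non-interactive shells start with a minimal environment, so HOME (and therefore which ~/.my.cnf gets read) can differ from an interactive shell, which is a classic source of "same command, different credentials" surprises. Moving the password into an option file sidesteps both that and the password-on-the-command-line issue; note that --defaults-extra-file must be the first option, and that MySQL comments need a space after the double dash:

        # Sketch: /home/dir/.my.cnf (chmod 600) would contain:
        #   [client]
        #   user=username
        #   password=password
        /usr/bin/mysql --defaults-extra-file=/home/dir/.my.cnf -h localhost database << EOF
        -- Drop old inventory stats table (note the space after "--")
        truncate table table_name1;
        ...
        EOF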

  • SQLSTATE[HY000]: General error: 2006 MySQL server has gone away

    - by Barkat Ullah
    Server details: RAM: 16GB HDD: 1000GB OS: Linux 2.6.32-220.7.1.el6.x86_64 Processor: 6 Core Please see the link below for my # top preview. I often see the error mentioned in the title in my Plesk panel. My /etc/my.cnf configuration is as below: bind-address=127.0.0.1 local-infile=0 datadir=/var/lib/mysql socket=/var/lib/mysql/mysql.sock user=mysql max_connections=20000 max_user_connections=20000 key_buffer_size=512M join_buffer_size=4M read_buffer_size=4M read_rnd_buffer_size=512M sort_buffer_size=8M wait_timeout=300 interactive_timeout=300 connect_timeout=300 tmp_table_size=8M thread_concurrency=12 concurrent_insert=2 query_cache_limit=64M query_cache_size=128M query_cache_type=2 transaction_alloc_block_size=8192 max_allowed_packet=512M [mysqldump] quick max_allowed_packet=512M [myisamchk] key_buffer_size=128M sort_buffer_size=128M read_buffer_size=32M write_buffer_size=32M [mysqlhotcopy] interactive-timeout [mysqld_safe] log-error=/var/log/mysqld.log pid-file=/var/run/mysqld/mysqld.pid open_files_limit=8192 My server's httpd conf is set in /etc/httpd/conf.d/swtune.conf and the configuration is as below: at prefork.c: <IfModule prefork.c> StartServers 8 MinSpareServers 10 MaxSpareServers 20 ServerLimit 1536 MaxClients 1536 MaxRequestsPerChild 4000 </IfModule> If I run grep -i maxclient /var/log/httpd/error_log then I can see this error every day: [root@u16170254 ~]# grep -i maxclient /var/log/httpd/error_log [Sun Apr 15 07:26:03 2012] [error] server reached MaxClients setting, consider raising the MaxClients setting [Mon Apr 16 06:09:22 2012] [error] server reached MaxClients setting, consider raising the MaxClients setting I have tried to explain everything I changed to keep my server healthy, but most of the time my server is down. Please help me work out which parameters I can change to keep the server up and make my sites load fast; right now they take far too long to load.

  • MySQL config for 2GB ram

    - by Tiffany Walker
    How is my config? Does it work well for 2GB? What would be an ideal config for a 2GB ram server? [mysqld] set-variable = max_connections=500 log-slow-queries safe-show-database local-infile=0 skip-networking symbolic-links=0 max_connections = 500 key_buffer = 256M myisam_sort_buffer_size = 64M join_buffer_size = 2M read_buffer_size = 2M sort_buffer_size = 2M read_rnd_buffer_size = 2M thread_concurrency = 16 table_cache = 1024 thread_cache_size = 50 wait_timeout = 7200 connect_timeout = 10 tmp_table_size = 32M max_allowed_packet = 160M max_connect_errors = 10 query_cache_limit = 1M query_cache_size = 32M query_cache_type = 1 [mysqld_safe] open_files_limit = 8192 [mysqldump] max_allowed_packet = 16M [myisamchk] key_buffer = 64M sort_buffer = 64M read_buffer = 16M write_buffer = 16M UPDATE 2012-03-28 12:58 EDT By RolandoMySQLDBA Please run these queries and paste them into your question: For MyISAM SELECT CONCAT(ROUND(KBS/POWER(1024, IF(PowerOf1024<0,0,IF(PowerOf1024>3,0,PowerOf1024)))+0.4999), SUBSTR(' KMG',IF(PowerOf1024<0,0, IF(PowerOf1024>3,0,PowerOf1024))+1,1)) recommended_key_buffer_size FROM (SELECT LEAST(POWER(2,32),KBS1) KBS FROM (SELECT SUM(index_length) KBS1 FROM information_schema.tables WHERE engine='MyISAM' AND table_schema NOT IN ('information_schema','mysql')) AA ) A, (SELECT 2 PowerOf1024) B; For InnoDB SELECT CONCAT(ROUND(KBS/POWER(1024, IF(PowerOf1024<0,0,IF(PowerOf1024>3,0,PowerOf1024)))+0.49999), SUBSTR(' KMG',IF(PowerOf1024<0,0, IF(PowerOf1024>3,0,PowerOf1024))+1,1)) recommended_innodb_buffer_pool_size FROM (SELECT SUM(data_length+index_length) KBS FROM information_schema.tables WHERE engine='InnoDB') A, (SELECT 2 PowerOf1024) B;

  • Apache and MySQL not working well after extending filesystem

    - by xtrimsky
    I had 4GB on my /var (/dev/mapper/vg00-var) filesystem, and I wanted to extend it to 160GB. I did it following this tutorial: http://faq.1and1.com/dedicated_servers/root_server/linux_admin_help/7.html Now I have 160GB: Filesystem Size Used Avail Use% Mounted on /dev/md1 4.0G 424M 3.6G 11% / /dev/mapper/vg00-usr 4.3G 1.4G 3.0G 32% /usr /dev/mapper/vg00-var 198G 6.5G 192G 4% /var /dev/mapper/vg00-home 4.3G 4.4M 4.3G 1% /home none 1.1G 0 1.1G 0% /tmp Now I have a problem: in order for Apache to work, each time I reboot I also need to restart Apache ("apachectl -k restart"), which is already terrible. I think this is because /var contains the htdocs. The worst part is that MySQL is not starting at all. MySQL also has files in /var. What have I done wrong? :( Thank you EDIT: Attaching /var/log/mysqld.log: 120602 11:17:44 InnoDB: Waiting for the background threads to start 120602 11:17:45 InnoDB: 1.1.8 started; log sequence number 8354009 120602 11:17:45 [ERROR] /usr/libexec/mysqld: unknown variable 'set-variable=local-infile=0' 120602 11:17:45 [ERROR] Aborting 120602 11:17:45 InnoDB: Starting shutdown... 120602 11:17:46 InnoDB: Shutdown completed; log sequence number 8354009 120602 11:17:46 [Note] /usr/libexec/mysqld: Shutdown complete 120602 11:17:46 mysqld_safe mysqld from pid file /var/run/mysqld/mysqld.pid ended
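
    The mysqld.log excerpt points at the MySQL half of the problem: `set-variable =` is the old pre-4.x option syntax, and newer servers abort on it ("unknown variable 'set-variable=local-infile=0'"). A sketch of the fix in /etc/my.cnf, assuming that line is the only old-style one:

        [mysqld]
        # old, rejected form:
        #   set-variable = local-infile=0
        local-infile=0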

  • Traditional IO vs memory-mapped

    - by Senne
    I'm trying to illustrate the difference in performance between traditional IO and memory-mapped files in Java to students. I found an example somewhere on the internet, but not everything is clear to me; I don't even think all the steps are necessary. I read a lot about it here and there, but I'm not convinced that either of them is implemented correctly. The code I'm trying to understand is: public class FileCopy{ public static void main(String args[]){ if (args.length < 1){ System.out.println(" Wrong usage!"); System.out.println(" Correct usage is : java FileCopy <large file with full path>"); System.exit(0); } String inFileName = args[0]; File inFile = new File(inFileName); if (inFile.exists() != true){ System.out.println(inFileName + " does not exist!"); System.exit(0); } try{ new FileCopy().memoryMappedCopy(inFileName, inFileName+".new" ); new FileCopy().customBufferedCopy(inFileName, inFileName+".new1"); }catch(FileNotFoundException fne){ fne.printStackTrace(); }catch(IOException ioe){ ioe.printStackTrace(); }catch (Exception e){ e.printStackTrace(); } } public void memoryMappedCopy(String fromFile, String toFile ) throws Exception{ long timeIn = new Date().getTime(); // read input file RandomAccessFile rafIn = new RandomAccessFile(fromFile, "rw"); FileChannel fcIn = rafIn.getChannel(); ByteBuffer byteBuffIn = fcIn.map(FileChannel.MapMode.READ_WRITE, 0,(int) fcIn.size()); fcIn.read(byteBuffIn); byteBuffIn.flip(); RandomAccessFile rafOut = new RandomAccessFile(toFile, "rw"); FileChannel fcOut = rafOut.getChannel(); ByteBuffer writeMap = fcOut.map(FileChannel.MapMode.READ_WRITE,0,(int) fcIn.size()); writeMap.put(byteBuffIn); long timeOut = new Date().getTime(); System.out.println("Memory mapped copy Time for a file of size :" + (int) fcIn.size() +" is "+(timeOut-timeIn)); fcOut.close(); fcIn.close(); } static final int CHUNK_SIZE = 100000; static final char[] inChars = new char[CHUNK_SIZE]; public static void customBufferedCopy(String fromFile, String toFile) throws IOException{ long timeIn = new Date().getTime(); Reader in = new FileReader(fromFile); Writer out = new FileWriter(toFile); while (true) { synchronized (inChars) { int amountRead = in.read(inChars); if (amountRead == -1) { break; } out.write(inChars, 0, amountRead); } } long timeOut = new Date().getTime(); System.out.println("Custom buffered copy Time for a file of size :" + (int) new File(fromFile).length() +" is "+(timeOut-timeIn)); in.close(); out.close(); } } When exactly is it necessary to use RandomAccessFile? Here it is used to read and write in memoryMappedCopy; is it actually necessary just to copy a file at all? Or is it part of memory mapping? In customBufferedCopy, why is synchronized used here?
I also found a different example that -should- test the performance between the 2: public class MappedIO { private static int numOfInts = 4000000; private static int numOfUbuffInts = 200000; private abstract static class Tester { private String name; public Tester(String name) { this.name = name; } public long runTest() { System.out.print(name + ": "); try { long startTime = System.currentTimeMillis(); test(); long endTime = System.currentTimeMillis(); return (endTime - startTime); } catch (IOException e) { throw new RuntimeException(e); } } public abstract void test() throws IOException; } private static Tester[] tests = { new Tester("Stream Write") { public void test() throws IOException { DataOutputStream dos = new DataOutputStream( new BufferedOutputStream( new FileOutputStream(new File("temp.tmp")))); for(int i = 0; i < numOfInts; i++) dos.writeInt(i); dos.close(); } }, new Tester("Mapped Write") { public void test() throws IOException { FileChannel fc = new RandomAccessFile("temp.tmp", "rw") .getChannel(); IntBuffer ib = fc.map( FileChannel.MapMode.READ_WRITE, 0, fc.size()) .asIntBuffer(); for(int i = 0; i < numOfInts; i++) ib.put(i); fc.close(); } }, new Tester("Stream Read") { public void test() throws IOException { DataInputStream dis = new DataInputStream( new BufferedInputStream( new FileInputStream("temp.tmp"))); for(int i = 0; i < numOfInts; i++) dis.readInt(); dis.close(); } }, new Tester("Mapped Read") { public void test() throws IOException { FileChannel fc = new FileInputStream( new File("temp.tmp")).getChannel(); IntBuffer ib = fc.map( FileChannel.MapMode.READ_ONLY, 0, fc.size()) .asIntBuffer(); while(ib.hasRemaining()) ib.get(); fc.close(); } }, new Tester("Stream Read/Write") { public void test() throws IOException { RandomAccessFile raf = new RandomAccessFile( new File("temp.tmp"), "rw"); raf.writeInt(1); for(int i = 0; i < numOfUbuffInts; i++) { raf.seek(raf.length() - 4); raf.writeInt(raf.readInt()); } raf.close(); } }, new Tester("Mapped Read/Write") { public void test() throws IOException { FileChannel fc = new RandomAccessFile( new File("temp.tmp"), "rw").getChannel(); IntBuffer ib = fc.map( FileChannel.MapMode.READ_WRITE, 0, fc.size()) .asIntBuffer(); ib.put(0); for(int i = 1; i < numOfUbuffInts; i++) ib.put(ib.get(i - 1)); fc.close(); } } }; public static void main(String[] args) { for(int i = 0; i < tests.length; i++) System.out.println(tests[i].runTest()); } } I more or less see what's going on; my output looks like this: Stream Write: 653 Mapped Write: 51 Stream Read: 651 Mapped Read: 40 Stream Read/Write: 14481 Mapped Read/Write: 6 What is making the Stream Read/Write so unbelievably long? And as a read/write test, to me it looks a bit pointless to read the same integer over and over (if I understand correctly what's going on in the Stream Read/Write). Wouldn't it be better to read ints from the previously written file and just read and write ints in the same place? Is there a better way to illustrate it? I've been racking my brain over a lot of these things for a while and I just can't get the whole picture.
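
    On the last question: "Stream Read/Write" is slow because RandomAccessFile is completely unbuffered, so every iteration pays for several system calls, while the mapped version only touches memory. It is also not quite re-reading one int: after readInt() the file pointer sits at end-of-file, so the writeInt() appends, growing the file by four bytes per iteration. Annotated, the hot loop is doing this:

        // Per iteration of "Stream Read/Write" (sketch of the costs involved):
        raf.seek(raf.length() - 4);   // length() plus seek: kernel round trips
        int v = raf.readInt();        // unbuffered 4-byte read syscall
        raf.writeInt(v);              // position is now EOF, so this APPENDS

    A fairer comparison would keep both variants doing identical work at fixed offsets, so the numbers isolate buffering and mapping rather than file growth.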

  • Linked List manipulation, issues retrieving data c++

    - by floatfil
    I'm trying to implement some functions to manipulate a linked list. The implementation is a template typename T and the class is 'List' which includes a 'head' pointer and also a struct: struct Node { // the node in a linked list T* data; // pointer to actual data, operations in T Node* next; // pointer to a Node }; Since it is a template, and 'T' can be any data, how do I go about checking the data of a list to see if it matches the data input into the function? The function is called 'retrieve' and takes two parameters, the data and a pointer: bool retrieve(T target, T*& ptr); // This is the prototype we need to use for the project "bool retrieve : similar to remove, but not removed from list. If there are duplicates in the list, the first one encountered is retrieved. Second parameter is unreliable if return value is false. E.g., " Employee target("duck", "donald"); success = company1.retrieve(target, oneEmployee); if (success) { cout << "Found in list: " << *oneEmployee << endl; } And the function is called like this: company4.retrieve(emp3, oneEmployee) So that when you cout *oneEmployee, you'll get the data of that pointer (in this case the data is of type Employee). (Also, this is assuming all data types have the appropriate overloaded operators) I hope this makes sense so far, but my issue is in comparing the data in the parameter and the data while going through the list. (The data types that we use all include overloads for equality operators, so oneData == twoData is valid) This is what I have so far: template <typename T> bool List<T>::retrieve(T target , T*& ptr) { List<T>::Node* dummyPtr = head; // point dummy pointer to what the list's head points to for(;;) { if (*dummyPtr->data == target) { // EDIT: it now compiles, but it breaks here and I get an Access Violation error. ptr = dummyPtr->data; // set the parameter pointer to the dummy pointer return true; // return true } else { dummyPtr = dummyPtr->next; // else, move to the next data node } } return false; } Here is the implementation for the Employee class: //-------------------------- constructor ----------------------------------- Employee::Employee(string last, string first, int id, int sal) { idNumber = (id >= 0 && id <= MAXID? id : -1); salary = (sal >= 0 ? sal : -1); lastName = last; firstName = first; } //-------------------------- destructor ------------------------------------ // Needed so that memory for strings is properly deallocated Employee::~Employee() { } //---------------------- copy constructor ----------------------------------- Employee::Employee(const Employee& E) { lastName = E.lastName; firstName = E.firstName; idNumber = E.idNumber; salary = E.salary; } //-------------------------- operator= --------------------------------------- Employee& Employee::operator=(const Employee& E) { if (&E != this) { idNumber = E.idNumber; salary = E.salary; lastName = E.lastName; firstName = E.firstName; } return *this; } //----------------------------- setData ------------------------------------ // set data from file bool Employee::setData(ifstream& inFile) { inFile >> lastName >> firstName >> idNumber >> salary; return idNumber >= 0 && idNumber <= MAXID && salary >= 0; } //------------------------------- < ---------------------------------------- // < defined by value of name bool Employee::operator<(const Employee& E) const { return lastName < E.lastName || (lastName == E.lastName && firstName < E.firstName); } //------------------------------- <= ---------------------------------------- // < defined by value of inamedNumber bool Employee::operator<=(const Employee& E) const { return *this < E || *this == E; } //------------------------------- > ---------------------------------------- // > defined by value of name bool Employee::operator>(const Employee& E) const { return lastName > E.lastName || (lastName == E.lastName && firstName > E.firstName); } //------------------------------- >= ---------------------------------------- // < defined by value of name bool Employee::operator>=(const Employee& E) const { return *this > E || *this == E; } //----------------- operator == (equality) ---------------- // if name of calling and passed object are equal, // return true, otherwise false // bool Employee::operator==(const Employee& E) const { return lastName == E.lastName && firstName == E.firstName; } //----------------- operator != (inequality) ---------------- // return opposite value of operator== bool Employee::operator!=(const Employee& E) const { return !(*this == E); } //------------------------------- << --------------------------------------- // display Employee object ostream& operator<<(ostream& output, const Employee& E) { output << setw(4) << E.idNumber << setw(7) << E.salary << " " << E.lastName << " " << E.firstName << endl; return output; } I will include a check for NULL pointer but I just want to get this working and will test it on a list that includes the data I am checking. Thanks to whoever can help and as usual, this is for a course so I don't expect or want the answer, but any tips as to what might be going wrong will help immensely!
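
    The access violation is very likely the unguarded walk: the for(;;) loop never tests dummyPtr against NULL, so when the target is absent (or the search reaches the last node) the loop dereferences past the end of the list. A sketch with the guards in place, using the same Node layout as above:

        template <typename T>
        bool List<T>::retrieve(T target, T*& ptr) {
            Node* cur = head;
            while (cur != NULL) {                        // stop at end of list
                if (cur->data != NULL && *cur->data == target) {
                    ptr = cur->data;                     // hand back the stored pointer
                    return true;
                }
                cur = cur->next;
            }
            return false;                                // not found; ptr is untouched
        }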

  • How to parse a CSV file containing serialized PHP? [migrated]

    - by garbetjie
    I've just started dabbling in Perl, to try and gain some exposure to different programming languages, so forgive me if some of the following code is horrendous. I needed a quick and dirty CSV parser that could receive a CSV file and split it into file batches containing "X" number of CSV lines (taking into account that entries could contain embedded newlines). I came up with a working solution, and it was going along just fine. However, among the CSV files that I'm trying to split, I came across one that contains serialized PHP code. This seems to break the CSV parsing. As soon as I remove the serialization, the CSV file is parsed correctly. Are there any tricks I need to know when it comes to parsing serialized data in CSV files? Here is a shortened sample of the code: use strict; use warnings; my $csv = Text::CSV_XS->new({ eol => $/, always_quote => 1, binary => 1 }); my $out; my $in; open $in, "<:encoding(utf8)", "infile.csv" or die("cannot open input file $inputfile"); open $out, ">outfile.000"; binmode($out, ":utf8"); while (my $line = $csv->getline($in)) { $lines++; $csv->print($out, $line); } I'm never able to get into the while loop shown above. As soon as I remove the serialized data, I suddenly am able to get into the loop. Edit: An example of a line that is causing me trouble (taken straight from Vim - hence the ^M): "26","other","1","20,000 Subscriber Plan","Some text here.^M\ Some more text","on","","18","","0","","0","0","recurring","0","","payment","totalsend","0","tsadmin","R34bL9oq","37","0","0","","","","","","","","","","","","","","","","","","","","","","","0","0","0","a:18:{i:0;s:1:\"3\";i:1;s:1:\"2\";i:2;s:2:\"59\";i:3;s:2:\"60\";i:4;s:2:\"61\";i:5;s:2:\"62\";i:6;s:2:\"63\";i:7;s:2:\"64\";i:8;s:2:\"65\";i:9;s:2:\"66\";i:10;s:2:\"67\";i:11;s:2:\"68\";i:12;s:2:\"69\";i:13;s:2:\"70\";i:14;s:2:\"71\";i:15;s:2:\"72\";i:16;s:2:\"73\";i:17;s:2:\"74\";}","","","0","0","","0","0","0.0000","0.0000","0","","","0.00","","6","1" "27","other","1","35,000 Subscriber Plan","Some test here.^M\ Some more text","on","","18","","0","","0","0","recurring","0","","payment","totalsend","0","tsadmin","R34bL9oq","38","0","0","","","","","","","","","","","","","","","","","","","","","","","0","0","0","a:18:{i:0;s:1:\"3\";i:1;s:1:\"2\";i:2;s:2:\"59\";i:3;s:2:\"60\";i:4;s:2:\"61\";i:5;s:2:\"62\";i:6;s:2:\"63\";i:7;s:2:\"64\";i:8;s:2:\"65\";i:9;s:2:\"66\";i:10;s:2:\"67\";i:11;s:2:\"68\";i:12;s:2:\"69\";i:13;s:2:\"70\";i:14;s:2:\"71\";i:15;s:2:\"72\";i:16;s:2:\"73\";i:17;s:2:\"74\";}","","","0","0","","0","0","0.0000","0.0000","0","","","0.00","","7","1" "28","other","1","50,000 Subscriber Plan","Some text here.^M\ Some more text","on","","18","","0","","0","0","recurring","0","","payment","totalsend","0","tsadmin","R34bL9oq","39","0","0","","","","","","","","","","","","","","","","","","","","","","","0","0","0","a:18:{i:0;s:1:\"3\";i:1;s:1:\"2\";i:2;s:2:\"59\";i:3;s:2:\"60\";i:4;s:2:\"61\";i:5;s:2:\"62\";i:6;s:2:\"63\";i:7;s:2:\"64\";i:8;s:2:\"65\";i:9;s:2:\"66\";i:10;s:2:\"67\";i:11;s:2:\"68\";i:12;s:2:\"69\";i:13;s:2:\"70\";i:14;s:2:\"71\";i:15;s:2:\"72\";i:16;s:2:\"73\";i:17;s:2:\"74\";}","","","0","0","","0","0","0.0000","0.0000","0","","","0.00","","8","1""73","other","8","10,000,000","","","","0","","0","","0","0","recurring","0","","payment","","0","","","75","0","10000000","","","","","","","","","","","","","","","","","","","","","","","0","0","0","a:17:{i:0;s:1:\"3\";i:1;s:1:\"2\";i:2;s:2:\"59\";i:3;s:2:\"60\";i:4;s:2:\"61\";i:5;s:2:\"62\";i:6;s:2:\"63\";i:7;s:2:\"64\";i:8;s:2:\"65\";i:9;s:2:\"66\";i:10;s:2:\"67\";i:11;s:2:\"68\";i:12;s:2:\"69\";i:13;s:2:\"70\";i:14;s:2:\"71\";i:15;s:2:\"72\";i:16;s:2:\"74\";}","","","0","0","","0","0","0.0000","0.0000","0","","","0.00","","14","0"
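
    A plausible explanation, hedged since only this sample is visible: Text::CSV_XS defaults to the double-quote character for both quote_char and escape_char (embedded quotes are doubled), but the serialized PHP escapes its quotes with backslashes (s:1:\"3\"), so getline() fails on those rows and returns undef, which is why the while loop is never entered. Telling the parser about the backslash convention looks like this:

        use Text::CSV_XS;

        my $csv = Text::CSV_XS->new({
            eol          => $/,
            always_quote => 1,
            binary       => 1,
            escape_char  => '\\',   # assumption: MySQL-style backslash escapes in the dump
        });
        # After a failed parse, $csv->error_diag() says what broke and where.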

  • perl threading problem

    - by Alice Wozownik
    I'm writing a multithreaded website uptime checker in perl, and here is the basic code so far (includes only threading part): #!/usr/bin/perl use LWP::UserAgent; use Getopt::Std; use threads; use threads::shared; my $maxthreads :shared = 50; my $threads :shared = 0; print "Website Uptime Checker\n"; my $infilename = $ARGV[0]; chomp($infilename); open(INFILE, $infilename); my $outfilename = $ARGV[1]; chomp($outfilename); open(OUTFILE, ">" . $outfilename); OUTFILE->autoflush(1); while ($site = <INFILE>) { chomp($site); while (1) { if ($threads < $maxthreads) { $threads++; my $thr = threads->create(\&check_site, $site); $thr->detach(); last; } else { sleep(1); } } } while ($threads > 0) { sleep(1); } sub check_site { $server = $_[0]; print "$server\n"; $threads--; } It gives an error after a while: Can't call method "detach" on an undefined value at C:\perl\webchecker.pl line 28, <INFILE> line 245. What causes this error? I know it is at detach, but what am I doing wrong in my code? Windows shows lots of free memory, so it should not be the computer running out of memory; this error occurs even if I set $maxthreads as low as 10 or possibly even lower.
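
    threads->create returns undef when a thread cannot be spawned, and on Windows each Perl thread clones the interpreter, so the limit can bite well before the OS looks short of memory; checking the return value avoids calling detach on undef. Separately, $threads-- from inside the workers without lock() is a race on the shared counter. A sketch of both fixes:

        my $thr = threads->create(\&check_site, $site);
        if (defined $thr) {
            $thr->detach();
            last;
        } else {
            warn "could not create thread for $site: $!";
            { lock($threads); $threads--; }   # undo the increment, loop and retry
        }

        sub check_site {
            my $server = $_[0];
            print "$server\n";
            lock($threads);                   # serialize updates to the shared counter
            $threads--;
        }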

  • Named pipe blocking with user nobody

    - by dnagirl
    I have 2 short scripts. The first, an awk script, processes a large file and prints to a named pipe 'myfifo.dat'. The second, a Perl script, runs a LOAD DATA LOCAL INFILE 'myfifo.dat'... command. Both of these scripts work when run locally like so: lee.awk big.file & lee.pl However, when I call these scripts from a PHP webpage, the named pipe blocks: $awk="/path/to/lee.awk {$_FILES['uploadfile']['tmp_name']} &"; $sql="/path/to/lee.pl"; if(!exec($awk,$return,$err)) throw new ZException(print_r($err,true)); //blocks here if(!exec($sql,$return,$err)) throw new ZException(print_r($err,true)); If I modify the awk and Perl scripts so that they write and read to a normal file, everything works fine from PHP. The permissions on the fifo and the normal file are 666 (for testing purposes). These operations run much more quickly through a named pipe, so I'd prefer to use one. Any ideas how to unblock it? ps. In case you're wondering why I'm going to all this aggravation, see this SO question.
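
    One hedged guess at the blocking: PHP's exec() does not return until the command's stdout closes, and a process backgrounded with & still inherits that stdout, so the first exec() sits waiting while the awk writer blocks on the FIFO for a reader that never gets started. Redirecting the writer's output usually breaks that deadlock:

        <?php
        // Sketch: fully detach the writer so exec() returns, then start the reader.
        $awk = "/path/to/lee.awk {$_FILES['uploadfile']['tmp_name']} > /dev/null 2>&1 &";
        exec($awk, $return, $err);                // returns immediately now
        exec("/path/to/lee.pl", $return, $err);   // opens the FIFO for reading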

  • Oracle sqlldr: column not allowed here

    - by Wade Williams
    Can anyone spot the error in this attempted data load? The '\\N' is because this is an import of an OUTFILE dump from mysql, which puts \N for NULL fields. The decode is to catch cases where the field might be an empty string, or might have \N. Using Oracle 10g on Linux. load data infile objects.txt discardfile objects.dsc truncate into table objects fields terminated by x'1F' optionally enclosed by '"' (ID INTEGER EXTERNAL NULLIF (ID='\\N'), TITLE CHAR(128) NULLIF (TITLE='\\N'), PRIORITY CHAR(16) "decode(:PRIORITY, BLANKS, NULL, '\\N', NULL)", STATUS CHAR(64) "decode(:STATUS, BLANKS, NULL, '\\N', NULL)", ORIG_DATE DATE "YYYY-MM-DD HH:MM:SS" NULLIF (ORIG_DATE='\\N'), LASTMOD DATE "YYYY-MM-DD HH:MM:SS" NULLIF (LASTMOD='\\N'), SUBMITTER CHAR(128) NULLIF (SUBMITTER='\\N'), DEVELOPER CHAR(128) NULLIF (DEVELOPER='\\N'), ARCHIVE CHAR(4000) NULLIF (ARCHIVE='\\N'), SEVERITY CHAR(64) "decode(:SEVERITY, BLANKS, NULL, '\\N', NULL)", VALUED CHAR(4000) NULLIF (VALUED='\\N'), SRD DATE "YYYY-MM-DD" NULLIF (SRD='\\N'), TAG CHAR(64) NULLIF (TAG='\\N') ) Sample Data (record 1). The ^_ represents the unprintable 0x1F delimiter. 1987^_Component 1987^_\N^_Done^_2002-10-16 01:51:44^_2002-10-16 01:51:44^_import^_badger^_N^_^_N^_0000-00-00^_none Error: Record 1: Rejected - Error on table objects, column SEVERITY. ORA-00984: column not allowed here
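
    Two candidates stand out, offered with hedging since only the control file is shown. First, inside the double-quoted SQL strings, BLANKS is not a keyword (it is only special in NULLIF/DEFAULTIF clauses), so Oracle parses it as a column name and raises ORA-00984, "column not allowed here". Second, the date mask "YYYY-MM-DD HH:MM:SS" repeats the month code; minutes are MI in Oracle masks. The decodes as written also omit a default, so ordinary values would become NULL too. A sketch of corrected fields:

        PRIORITY  CHAR(16) "decode(rtrim(:PRIORITY), NULL, NULL, '\\N', NULL, :PRIORITY)",
        STATUS    CHAR(64) "decode(rtrim(:STATUS), NULL, NULL, '\\N', NULL, :STATUS)",
        ORIG_DATE DATE "YYYY-MM-DD HH24:MI:SS" NULLIF (ORIG_DATE='\\N'),
        LASTMOD   DATE "YYYY-MM-DD HH24:MI:SS" NULLIF (LASTMOD='\\N'),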

  • Use LaTeX Listings to correctly detect and syntax highlight embedded code of a different language in

    - by D W
    I have scripts that have one-liners or short scripts from other languages within them. How can I have LaTeX listings detect this and change the syntax formatting language within the script? This would be especially useful for awk within bash, I believe. Bash #!/bin/bash ... # usage message to catch bad input without invoking R ... # any bash pre-processing of input ... # etc echo "hello world" R --vanilla << EOF # Data on motor octane ratings for various gasoline blends x <- c(88.5,87.7,83.4,86.7,87.5,91.5,88.6,100.3, 95.6,93.3,94.7,91.1,91.0,94.2,87.5,89.9, 88.3,87.6,84.3,86.7,88.2,90.8,88.3,98.8, 94.2,92.7,93.2,91.0,90.3,93.4,88.5,90.1, 89.2,88.3,85.3,87.9,88.6,90.9,89.0,96.1, 93.3,91.8,92.3,90.4,90.1,93.0,88.7,89.9, 89.8,89.6,87.4,88.9,91.2,89.3,94.4,92.7, 91.8,91.6,90.4,91.1,92.6,89.8,90.6,91.1, 90.4,89.3,89.7,90.3,91.6,90.5,93.7,92.7, 92.2,92.2,91.2,91.0,92.2,90.0,90.7) x length(x) mean(x);var(x) stem(x) EOF perl -n -e ' @t = split(/\t/); %t2 = map { $_ => 1 } split(/,/,$t[1]); $t[1] = join(",",keys %t2); print join("\t",@t); ' knownGeneFromUCSC.txt awk -F'\t' '{ n = split($2, t, ","); _2 = x split(x, _) # use delete _ if supported for (i = 0; ++i <= n;) _[t[i]]++ || _2 = _2 ? _2 "," t[i] : t[i] $2 = _2 }-3' OFS='\t' infile Python #!/usr/local/bin/python print "Hello World" os.system(""" VAR=even; sed -i "s/$VAR/odd/" testfile; for i in `cat testfile` ; do echo $i; done; echo "now the tr command is removing the vowels"; cat testfile |tr 'aeiou' ' ' """)
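
    As far as I know, listings has no automatic language detection; the usual approach is to mark the boundaries yourself and switch lexers per environment (bash, R, Awk and Python all ship with listings). A hedged sketch for the bash-plus-R case:

        \usepackage{listings}
        \lstset{basicstyle=\ttfamily\small}

        % Split the script at the embedded block and change language manually:
        \begin{lstlisting}[language=bash]
        #!/bin/bash
        echo "hello world"
        R --vanilla << EOF
        \end{lstlisting}
        \begin{lstlisting}[language=R]
        x <- c(88.5, 87.7, 83.4)
        mean(x); var(x)
        \end{lstlisting}
        \begin{lstlisting}[language=bash]
        EOF
        \end{lstlisting}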

  • Invalid argument in sendfile() with two regular files

    - by Daniel Hershcovich
    I'm trying to test the sendfile() system call under Linux 2.6.32 to zero-copy data between two regular files. As far as I understand, it should work: ever since 2.6.22, sendfile() has been implemented using splice(), and both the input file and the output file can be either regular files or sockets. The following is the content of sendfile_test.c: #include <sys/sendfile.h> #include <fcntl.h> #include <stdio.h> int main(int argc, char **argv) { int result; int in_file; int out_file; in_file = open(argv[1], O_RDONLY); out_file = open(argv[2], O_WRONLY | O_CREAT | O_TRUNC, 0644); result = sendfile(out_file, in_file, NULL, 1); if (result == -1) perror("sendfile"); close(in_file); close(out_file); return 0; } And when I'm running the following commands: $ gcc sendfile_test.c $ ./a.out infile The output is sendfile: Bad file descriptor Which means that the system call resulted in errno = -EINVAL, I think. What am I doing wrong?
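
    Two things are worth separating here. First, the sample run passes only one argument, so open(argv[2], ...) gets a null path, out_file is -1, and sendfile() duly reports EBADF ("Bad file descriptor"), which matches the output shown. Second, even with both arguments, sendfile() on kernels before 2.6.33 requires out_fd to refer to a socket, so a file-to-file copy on 2.6.32 should fail with EINVAL regardless. A sketch with the missing checks:

        #include <sys/sendfile.h>
        #include <fcntl.h>
        #include <stdio.h>
        #include <stdlib.h>
        #include <unistd.h>

        int main(int argc, char **argv)
        {
            if (argc < 3) {   /* usage: ./a.out infile outfile */
                fprintf(stderr, "usage: %s infile outfile\n", argv[0]);
                return EXIT_FAILURE;
            }
            int in_file = open(argv[1], O_RDONLY);
            if (in_file == -1) { perror("open infile"); return EXIT_FAILURE; }
            int out_file = open(argv[2], O_WRONLY | O_CREAT | O_TRUNC, 0644);
            if (out_file == -1) { perror("open outfile"); return EXIT_FAILURE; }
            /* Note: before Linux 2.6.33, out_fd had to refer to a socket. */
            if (sendfile(out_file, in_file, NULL, 1) == -1)
                perror("sendfile");
            close(in_file);
            close(out_file);
            return EXIT_SUCCESS;
        }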

  • export and import utf8 data in mysql: best practices

    - by ChrisRamakers
    We're often faced with the need to send a client a data file, extracted from our database, that he or she needs to translate. Most of the time this export is CSV or XLS. Most of the time we create a csv dump with phpMyAdmin and get an xls file in return with the translated data. The problem is that the data is usually UTF-8, and when the file comes back as xls, we end up with UTF-8 problems every time we load the data into MySQL again: characters not being displayed properly, etc. We've already double-checked everything in MySQL, from my.cnf to column character sets, and everything is set correctly to UTF-8. My question is not how to fix the encoding issue, since that's been solved, but how we would best proceed in the future when handling this situation. What export format should we hand over? How should we import (just mysql LOAD DATA INFILE, or our own processing scripts)? What is the general consensus on how to handle this situation? We would like to continue using Excel if possible, since that's the format almost everybody expects, including our clients' translation agencies. Our clients' ease of use is the most important factor here, without overloading us with major issues each time. The best of both worlds :)
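
    One habit that removes most of the guesswork, whatever interchange format is chosen: state the character set explicitly at both ends instead of trusting session defaults. Both statements accept a CHARACTER SET clause (sketch with placeholder names):

        -- Export:
        SELECT id, source_text
          FROM translations
          INTO OUTFILE '/tmp/translations.csv'
          CHARACTER SET utf8
          FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"'
          LINES TERMINATED BY '\n';

        -- Re-import after translation, again naming the encoding:
        LOAD DATA LOCAL INFILE '/tmp/translations_done.csv'
          INTO TABLE translations
          CHARACTER SET utf8
          FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"'
          LINES TERMINATED BY '\n';

    The remaining risk is Excel itself, which has historically mangled UTF-8 CSV on open and save; checking the returned file's actual encoding (for example with iconv or file) before loading keeps that failure visible early rather than after the import.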

  • importing a large txt file into MySQL?

    - by Taz
    Hi, I am loading text data into MySQL using the following command: mysql> Load Data local Infile 'C:\\Documents and Settings\\Scan\\My Documents\\Downloads\\instance_types_en.nt\\Copy of instance_types_en.txt' into table dbpediaentities.resources fields terminated by ' ' lines terminated by 'rn'; Data is like (actually there is a newline after '.') <a> <b> <c> . <a> <b> <c> . <a> <b> <c> . <a> <b> <c> .<a> <b> <c> . <a> <b> <c> . The table has an auto-increment ID field and then text fields for all three values. File size is about 750MB. The problems are: 1. <a> appears to be in the first text field; 2. only 2MB of data is imported.
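
    Two hedged guesses from the symptoms: the line terminator needs its backslashes ('\r\n'; as pasted it reads 'rn', which never matches a real line ending), and because the <...> tokens are themselves separated by spaces, fields terminated by ' ' cannot tell separators from content; if the real file is tab-separated, '\t' is the terminator to use. Since the table also has an auto-increment id, the target columns should be listed explicitly. A sketch (path shortened, column names are placeholders):

        LOAD DATA LOCAL INFILE 'C:/path/to/instance_types_en.txt'
        INTO TABLE dbpediaentities.resources
        FIELDS TERMINATED BY '\t'   -- assumption: the columns are tab-separated
        LINES TERMINATED BY '\r\n'  -- note the backslashes
        (col_a, col_b, col_c);      -- the id column fills itself in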

  • Python Error-Checking Standard Practice

    - by chaindriver
    Hi, I have a question regarding error checking in Python. Let's say I have a function that takes a file path as an input: def myFunction(filepath): infile = open(filepath) #etc etc... One possible precondition would be that the file should exist. There are a few possible ways to check for this precondition, and I'm just wondering what's the best way to do it. i) Check with an if-statement: if not os.path.exists(filepath): raise IOException('File does not exist: %s' % filepath) This is the way that I would usually do it, though the same IOException would be raised by Python if the file does not exist, even if I don't raise it. ii) Use assert to check for the precondition: assert os.path.exists(filepath), 'File does not exist: %s' % filepath Using asserts seems to be the "standard" way of checking for pre/postconditions, so I am tempted to use these. However, it is possible that these asserts are turned off when the -o flag is used during execution, which means that this check might potentially be turned off and that seems risky. iii) Don't handle the precondition at all This is because if filepath does not exist, there will be an exception generated anyway and the exception message is detailed enough for user to know that the file does not exist. I'm just wondering which of the above is the standard practice that I should use for my code.
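
    The usual Python answer is closest to (iii), often phrased as "easier to ask forgiveness than permission": let open() raise, or catch the error where context can be added. assert is for internal invariants, not input validation, precisely because -O strips it; and the pre-check in (i) is racy, since the file can disappear between the exists() test and the open(). One detail in the snippets above: Python's built-in exception is IOError, not IOException. A sketch:

        def myFunction(filepath):
            try:
                infile = open(filepath)
            except IOError as e:
                # Re-raise with context instead of pre-checking with os.path.exists.
                raise IOError('could not open %s: %s' % (filepath, e))
            try:
                return infile.read()
            finally:
                infile.close()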

  • Java: How to make this main thread wait for the new thread to terminate

    - by Jeff Bullard
    I have a java class that creates a process, called child, using ProcessBuilder. The child process generates a lot of output that I am draining on a separate thread to keep the main thread from getting blocked. However, a little later on I need to wait for the output thread to complete/terminate before going on, and I'm not sure how to do that. I think that join() is the usual way to do this but I'm not sure how to do that in this case. Here is the relevant part of the java code. // Capture output from process called child on a separate thread final StringBuffer outtext = new StringBuffer(""); new Thread(new Runnable() { public void run() { InputStream in = null; in = child.getInputStream(); try { if (in != null) { BufferedReader reader = new BufferedReader(new InputStreamReader(in)); String line = reader.readLine(); while ((line != null)) { outtext.append(line).append("\n"); ServerFile.appendUserOpTextFile(userName, opname, outfile, line+"\n"); line = reader.readLine(); } } } catch (IOException iox) { throw new RuntimeException(iox); } } }).start(); // Write input to for the child process on this main thread // String intext = ServerFile.readUserOpTextFile(userName, opname, infile); OutputStream out = child.getOutputStream(); try { out.write(intext.getBytes()); out.close(); } catch (IOException iox) { throw new RuntimeException(iox); } // ***HERE IS WHERE I NEED TO WAIT FOR THE THREAD TO FINISH *** // Other code goes here that needs to wait for outtext to get all // of the output from the process // Then, finally, when all the remaining code is finished, I return // the contents of outtext return outtext.toString();
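
    join() is indeed the tool; it just needs a reference to the Thread rather than starting it anonymously, and it throws InterruptedException. Calling child.waitFor() as well makes sure the process itself has exited. A sketch against the code above:

        Thread drainer = new Thread(new Runnable() {
            public void run() {
                // ... same output-draining loop as above ...
            }
        });
        drainer.start();

        // write to the child's stdin as before, then:
        try {
            drainer.join();     // blocks until the reader thread finishes
            child.waitFor();    // and until the process has exited
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();   // restore the interrupt flag
        }
        return outtext.toString();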

  • Using std::ifstream to load in an array of struct data type into a std::vector

    - by Sent1nel
    I am working on a bitmap loader in C++, and when moving from the C-style array to std::vector I have run into an unusual problem for which Google does not seem to have the answer. 8-bit and 4-bit bitmaps contain a colour palette. The colour palette has blue, green, red and reserved components, each 1 byte in size. // Colour palette struct BGRQuad { UInt8 blue; UInt8 green; UInt8 red; UInt8 reserved; }; The problem I am having is that when I create a vector of the BGRQuad structure, I can no longer use the ifstream read function to load data from the file directly into the BGRQuad vector. // This code throws an assert failure! std::vector<BGRQuad> quads; if (coloursUsed) // colour table available { // read in the colours quads.reserve(coloursUsed); inFile.read( reinterpret_cast<char*>(&quads[0]), coloursUsed * sizeof(BGRQuad) ); } Does anyone know how to read directly into the vector without having to create a C array and copy data into the BGRQuad vector?
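
    reserve() only allocates capacity; the vector's size stays zero, so &quads[0] indexes an empty vector, which is what the debug assert catches. Sizing the vector first makes the read legal (BGRQuad is four one-byte members, so reading raw bytes into it is safe on the usual platforms):

        #include <fstream>
        #include <vector>

        std::vector<BGRQuad> quads;
        if (coloursUsed) {
            quads.resize(coloursUsed);   // size, not just capacity
            inFile.read(reinterpret_cast<char*>(&quads[0]),
                        coloursUsed * sizeof(BGRQuad));
        }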
