Search Results

Search found 2280 results on 92 pages for 'tmp'.


  • MySQL load data null values

    - by SP1
    Hello, I have a file containing 3 to 4 columns of numerical values separated by commas. Empty fields are kept empty, except when they fall at the end of the row:

        1,2,3,4,5
        1,2,3,,5
        1,2,3

    The following table was created in MySQL:

        +-------+--------+------+-----+---------+-------+
        | Field | Type   | Null | Key | Default | Extra |
        +-------+--------+------+-----+---------+-------+
        | one   | int(1) | YES  |     | NULL    |       |
        | two   | int(1) | YES  |     | NULL    |       |
        | three | int(1) | YES  |     | NULL    |       |
        | four  | int(1) | YES  |     | NULL    |       |
        | five  | int(1) | YES  |     | NULL    |       |
        +-------+--------+------+-----+---------+-------+

    I am trying to load the data using the MySQL LOAD DATA command:

        load data infile '/tmp/testdata.txt' into table moo fields terminated by "," lines terminated by "\n";

    The resulting table:

        +------+------+-------+------+------+
        | one  | two  | three | four | five |
        +------+------+-------+------+------+
        |    1 |    2 |     3 |    4 |    5 |
        |    1 |    2 |     3 |    0 |    0 |
        |    1 |    2 |     3 | NULL | NULL |
        +------+------+-------+------+------+

    The problem is that when a field is present but empty in the raw data, MySQL does not use the column's default value (which is NULL) and inserts zero instead. NULL is used correctly when the field is missing altogether. Unfortunately, I have to be able to distinguish between NULL and 0 at this stage, so any help would be appreciated. Thanks S.
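
    A common workaround (a sketch, not from the original post; the variable names are made up) is to read the optional columns into user variables and convert empty strings to NULL with NULLIF:

        load data infile '/tmp/testdata.txt' into table moo
        fields terminated by ',' lines terminated by '\n'
        (one, two, three, @c4, @c5)
        set four = NULLIF(@c4, ''),
            five = NULLIF(@c5, '');

    Empty fields arrive as '' and become NULL via NULLIF; the missing-trailing-field case should also end up NULL, but verify that against your MySQL version.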


  • Migrating a Core Data Store from iCloud to local

    - by schmok
    I'm currently struggling with Core Data iCloud migration. I want to move a store from an iCloud ubiquity container (.nosync) to a local URL. The problem is that whenever I call something like this:

        NSPersistentStore *newStore = [self.persistentStoreCoordinator
            migratePersistentStore: currentiCloudStore
                             toURL: localURL
                           options: nil
                          withType: NSSQLiteStoreType
                             error: &error];

    I get this error:

        -[NSPersistentStoreCoordinator addPersistentStoreWithType:configuration:URL:options:error:](1055): CoreData: Ubiquity: Error: A persistent store which has been previously added to a coordinator using the iCloud integration options must always be added to the coordinator with the options present in the options dictionary. If you wish to use the store without iCloud, migrate the data from the iCloud store file to a new store file in local storage. file://localhost/Users/sch/Library/Containers/bla/Data/Documents/tmp.sqlite. This will be a fatal error in a future release

    Has anyone ever seen this error? Maybe I'm just missing the right migration options?
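
    A sketch of one likely fix (assuming a deployment target of OS X 10.9 / iOS 7 or later, where NSPersistentStoreRemoveUbiquitousMetadataOption is available): pass migration options that tell Core Data to strip the iCloud metadata while moving the store.

        NSDictionary *options = @{ NSPersistentStoreRemoveUbiquitousMetadataOption : @YES };
        NSError *error = nil;
        // migrate the ubiquitous store to a local file, dropping the iCloud metadata
        NSPersistentStore *newStore = [self.persistentStoreCoordinator
            migratePersistentStore: currentiCloudStore
                             toURL: localURL
                           options: options
                          withType: NSSQLiteStoreType
                             error: &error];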


  • Faster way to update 250k rows with SQL

    - by pablo
    I need to update about 250k rows in a table, and each field to update will have a different value depending on the row itself (the value is not calculated from the row id or the key, but supplied externally). I tried a parametrized query, but it turns out to be slow. (I can still try a table-valued parameter, SqlDbType.Structured, in SQL Server 2008, but I'd like a general way to do it across several databases, including MySQL, Oracle and Firebird.) Making a huge concatenation of individual UPDATE statements is also slow. What about creating a temp table and running an update joining my table and the tmp one? Will that be faster?
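
    The temp-table approach is usually the fastest portable option: bulk-load the new values, then update with a join. A sketch with made-up table and column names (the join-update syntax below is MySQL's; Oracle and Firebird would use MERGE instead):

        CREATE TEMPORARY TABLE tmp_updates (
            id        INT PRIMARY KEY,
            new_value INT
        );
        -- bulk-load the 250k (id, new_value) pairs here, e.g. LOAD DATA / batched inserts
        UPDATE target_table t
        JOIN tmp_updates u ON u.id = t.id
        SET t.value = u.new_value;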


  • Commit SVN working copy into Git repository

    - by mchr
    I am currently working on a checked-out SVN project, along with some plugins for that project. I want to keep all of this work - including the current version of my SVN checkout - within a single git repository. I thought I had achieved this by checking the SVN working copy in to git. However, when I did a pull on a new computer, the SVN working copy had been corrupted. In particular, it seemed that git had not checked in any of the .svn/tmp/ and .svn/props/ folders. I have now made a fresh checkout of the SVN project. Is there a way for me to add the ignored folders to my git repo (git status ignores them even though my .gitignore is empty), or to force SVN to regenerate them?
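
    Git does not track empty directories at all, which is the likely reason the (empty) .svn/tmp/ and .svn/props/ folders never made it into the repo - .gitignore is not involved. A common workaround, sketched below with GNU find and assuming the repo root is the current directory, is to drop a placeholder file into every empty directory so git has something to track:

        # give every empty directory outside .git a placeholder file
        find . -type d -empty -not -path './.git/*' -exec touch {}/.gitkeep \;

    On the SVN side, running svn cleanup in the fresh working copy should also recreate missing administrative folders.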


  • What is the best setting for using lighttpd on 8 GB RAM?

    - by user299415
    I am running an 8 GB RAM, 8 x Xeon 3361 system! What is the best setting for handling simultaneous connections? What is the maximum? Is a setting like this correct?

        server.max-keep-alive-requests = 0
        server.max-keep-alive-idle = 10
        server.max-read-idle = 60
        server.max-write-idle = 60
        server.event-handler = "linux-sysepoll"
        server.max-fds = 2048
        fastcgi.server = ( ".php" =>
          ( "localhost" =>
            ( "socket" => "/tmp/php-fastcgi.socket",
              "bin-path" => "/usr/bin/php-cgi",
              "max-procs" => 20,
              "bin-environment" => (
                "PHP_FCGI_CHILDREN" => "40",
                "PHP_FCGI_MAX_REQUESTS" => "800"
              ),
              "broken-scriptfilename" => "enable"
            )
          )
        )

    please help me!


  • DB function failed with error number 1 in Joomla admin panel

    - by sabuj
    When I access the Joomla article manager or module manager, I get the output below:

        500 - An error has occurred!
        DB function failed with error number 1
        Can't create/write to file '/tmp/#sql_57c0_0.MYD' (Errcode: 17)
        SQL=SELECT c.*, g.name AS groupname, cc.title AS name, u.name AS editor, f.content_id AS frontpage, s.title AS section_name, v.name AS author FROM jos_content AS c LEFT JOIN jos_categories AS cc ON cc.id = c.catid LEFT JOIN jos_sections AS s ON s.id = c.sectionid LEFT JOIN jos_groups AS g ON g.id = c.access LEFT JOIN jos_users AS u ON u.id = c.checked_out LEFT JOIN jos_users AS v ON v.id = c.created_by LEFT JOIN jos_content_frontpage AS f ON f.content_id = c.id WHERE c.state != -2 ORDER BY section_name , section_name, cc.title, c.ordering LIMIT 0, 20
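
    Errcode 17 is EEXIST ("File exists"): MySQL is trying to create an on-disk temporary table in /tmp, but a stale #sql_* file with the same name is already there, typically left behind by a crash. A sketch of the usual diagnosis and cleanup (assumes a standard init script; adjust paths to your system):

        $ perror 17
        OS error code  17:  File exists
        $ sudo /etc/init.d/mysql stop      # stop mysqld first
        $ sudo rm -f /tmp/#sql_*           # remove the orphaned temp-table files
        $ sudo /etc/init.d/mysql start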


  • Illegal offset type

    - by BFTrick
    Hello, I am having trouble uploading a file through PHP. I check the file type at the beginning of the process and I get an error. This is the error I am getting:

        Warning: Illegal offset type in /balblabla/DBfunctions.inc.php on line 183

    This is the printed-out $_FILES var:

        Array ( [Picture] => Array ( [name] => JPG.jpg [type] => image/jpeg [tmp_name] => /tmp/phpHlrNY8 [error] => 0 [size] => 192221 ) )

    Here is the segment of code that is giving me issues:

        function checkFile($file, $type) {
            if( in_array($_FILES[$file]['type'], $type) ){ // <--- LINE 183
                return true;
            }//if
            return false;
        } // end checkFile()

    This is the line of code that calls the function:

        if( checkFile( $_FILES['Picture'], array("image/jpeg") ) == true ){
            //do stuff
        }// end if

    I have used this piece of code on dozens of websites on my own server, so I am guessing that this is some different configuration option. How can I modify my code so that it works on this different server?
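
    The warning comes from using an array as an array key: the call site passes $_FILES['Picture'] (an array) as $file, and line 183 then evaluates $_FILES[$file], i.e. an array used as an offset. A sketch of one fix - use the array that was passed in directly:

        function checkFile($file, $type) {
            // $file already is the $_FILES['Picture'] array
            return in_array($file['type'], $type);
        }

    The call site can stay exactly as it is. (The other servers likely had a lower error_reporting level, which would explain why the warning never surfaced there.)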


  • How can I write a null ASCII character (nul) to a file with a Windows batch script?

    - by Matthew Murdoch
    I'm attempting to write an ASCII null character (NUL) to a file from a Windows batch script, without success. I initially tried using echo like this:

        echo <Alt+2+5+6>

    which seems like it should work (typing <Alt+2+5+6> in the command window does write a null character - or ^@ as it appears), but echo then outputs:

        More?

    and hangs until I press <Return>. As an alternative I tried using:

        copy con tmp.txt >nul
        <Alt+2+5+6><Ctrl+Z>

    which does exactly what I need, but only if I type it manually in the command window. If I run it from a batch file, it hangs until I press <Ctrl+Z>, and even then the output file is created but remains empty. I really want the batch file to stand alone, without requiring (for example) a separate file containing a null character which can be copied when needed.
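
    One self-contained approach on versions of Windows that ship certutil (a sketch; the file names are made up) is to decode a two-character hex dump into a file holding a single NUL byte:

        @echo off
        rem write the hex digits "00" to a scratch file
        > "%TEMP%\nulbyte.hex" echo 00
        rem certutil -decodehex turns the hex text into one 0x00 byte
        certutil -decodehex "%TEMP%\nulbyte.hex" nulbyte.bin >nul
        del "%TEMP%\nulbyte.hex"

    The resulting nulbyte.bin can then be appended to other output with copy /b.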


  • Lua - initializing

    - by Ockonal
    Hello, I can't initialize Lua correctly under Arch Linux (latest Lua version). Here is my code:

        #include <stdio.h>

        extern "C" {
        #include <lua.h>
        #include <lauxlib.h>
        #include <lualib.h>
        }

        int main() {
            lua_State *luaVM = luaL_newstate();
            if (luaVM == NULL) {
                printf("Error initializing lua!\n");
                return -1;
            }
            luaL_openlibs(luaVM);
            lua_close(luaVM);
            return 0;
        }

    And the build fails with:

        /tmp/cc0iJ6lW.o: In function `main':
        test_lua.cpp:(.text+0xa): undefined reference to `luaL_newstate'
        test_lua.cpp:(.text+0x34): undefined reference to `luaL_openlibs'
        test_lua.cpp:(.text+0x40): undefined reference to `lua_close'
        collect2: ld returned 1 exit status

    What's wrong?
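
    Those are linker errors, not initialization failures: the Lua library is never linked in. A sketch of a build line that should resolve them (the library name can vary by distribution, e.g. -llua5.1):

        g++ test_lua.cpp -o test_lua -llua -lm -ldl

    If pkg-config knows about Lua on the system, g++ test_lua.cpp $(pkg-config --cflags --libs lua) works too.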


  • Drupal 6 devel module dd() function not writing to drupal_debug.txt file

    - by Mike Munroe
    I am running a local development Drupal site on a Windows machine. I am trying to use the dd($data, $label = NULL) function from the devel module to help debug. Using this function should write debug info to a drupal_debug.txt file in the /tmp folder on the machine where the Drupal site is hosted. On my Windows machine, although I am using this function, the drupal_debug.txt file is not getting created anywhere, leading me to believe I am using the function incorrectly. Here is a snippet of how I am using it:

        <?php
        $test = "this is my test";
        dd($test, $label = NULL);

    I am looking for an example of the correct syntax for the dd($data, $label = NULL) function. I have the Devel module enabled.
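
    The signature dd($data, $label = NULL) just means the label is optional - dd($test) or dd($test, 'my label') is the normal call form; writing $label = NULL at the call site merely passes NULL. On Windows the more likely problem is the location: devel writes drupal_debug.txt under Drupal's temporary directory, which is usually not /tmp there. A sketch for finding it (file_directory_temp() is the Drupal 6 core helper):

        <?php
        // print the directory where drupal_debug.txt should appear
        dd(file_directory_temp(), 'temp dir');
        // then check that directory (often c:\windows\temp on Windows)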


  • SharpZipLib - can you add a file without it copying the entire zip first?

    - by schmoopy
    I'm trying to add an existing file to a .zip file using SharpZipLib - the problem is, the zip file is 1 GB in size. When I try to add one small file (400 KB), SharpZipLib creates a copy/temp of the original zip file before adding the new file. This poses a problem when the free disk space is less than 2x the size of the zip file you are trying to update. For example:

        1GB zip    myfile.zip
        1GB zip    myfile.zip.tmp.293

        ZipFile zf = new ZipFile(path);
        zf.BeginUpdate();
        zf.Add(file); // Adding a 400k file here causes a 1GB temp file to be created
        zf.EndUpdate();
        zf.Close();

    Is there a more efficient way to do this? Thanks :-)
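
    The parameterless BeginUpdate() uses SharpZipLib's "safe" update mode, which stages everything in a temp archive. The library also lets you supply the storage strategy yourself; if your SharpZipLib release exposes DiskArchiveStorage and FileUpdateMode.Direct (an assumption worth verifying against its source), updates are applied to the original file in place, trading the temp copy for the risk of a corrupt archive if the update is interrupted:

        using ICSharpCode.SharpZipLib.Zip;

        ZipFile zf = new ZipFile(path);
        // Direct mode writes into the existing archive instead of copying it first
        zf.BeginUpdate(new DiskArchiveStorage(zf, FileUpdateMode.Direct));
        zf.Add(file);
        zf.EndUpdate();
        zf.Close();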


  • PHP upload to GoDaddy hosted site

    - by 105894384987190582154
    Hi, I'm relatively new to both hosting and PHP, so apologies for (probably) missing the obvious, but... I built a page to allow file uploads to my site, following the W3Schools PHP upload exercise. Through the File Manager on my GoDaddy hosting, I created a folder named 'upload' so that files would land there after being uploaded through the page I had built. Part of the page returned after submitting the file reads:

        Temp file: d:\temp\tmp\phpE4C9.tmp
        Stored in: upload/testfile.txt

    which would indicate that the file was successfully uploaded, given the code in the example. However, I cannot see the file in the 'upload' folder via my File Manager, or anywhere else on my hosting (as far as I can see). I also cannot see the 'temp' folder anywhere either... Any help or clarification would be greatly appreciated. Thanks, Tim
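
    A common cause with the W3Schools-style code (a sketch, assuming that code; the form field name is the example's) is that move_uploaded_file() is given the relative path "upload/...", so the file is stored relative to whatever directory the script runs in - which on shared hosting is not always where you expect. Building an absolute destination path removes the guesswork:

        <?php
        $target = $_SERVER['DOCUMENT_ROOT'] . '/upload/' . basename($_FILES['file']['name']);
        if (move_uploaded_file($_FILES['file']['tmp_name'], $target)) {
            echo 'Stored in: ' . $target;
        } else {
            echo 'Upload failed - check that the folder exists and is writable.';
        }

    The d:\temp\tmp folder is PHP's scratch area for uploads; the temp file is deleted automatically after the request, so you will never see it in the File Manager.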


  • How can I get around MySQL Errcode 13 with SELECT INTO OUTFILE?

    - by Ryan Olson
    I am trying to dump the contents of a table to a CSV file using a MySQL SELECT INTO OUTFILE statement. If I do:

        SELECT column1, column2
        INTO OUTFILE 'outfile.csv'
        FIELDS TERMINATED BY ','
        FROM table_name;

    outfile.csv is created on the server in the same directory this database's files are stored in. However, when I change my query to:

        SELECT column1, column2
        INTO OUTFILE '/data/outfile.csv'
        FIELDS TERMINATED BY ','
        FROM table_name;

    I get:

        ERROR 1 (HY000): Can't create/write to file '/data/outfile.csv' (Errcode: 13)

    Errcode 13 is a permissions error, but it persists even if I change ownership of /data to mysql:mysql and give it 777 permissions. MySQL is running as user "mysql". Strangely, I can create the file in /tmp, just not in any other directory I've tried, even with permissions set such that user mysql should be able to write to the directory. This is MySQL 5.0.75 running on Ubuntu.
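
    On Ubuntu this is classically AppArmor rather than filesystem permissions: the mysqld profile whitelists only certain paths (which is why /tmp works no matter what you chmod). A sketch of the usual fix (the profile path is Ubuntu's default):

        # add /data to mysqld's AppArmor profile
        sudo nano /etc/apparmor.d/usr.sbin.mysqld
        #   inside the profile block, add:
        #     /data/ r,
        #     /data/** rw,
        sudo /etc/init.d/apparmor reload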


  • What's the Python way for recursively setting file permissions?

    - by Geoff
    What's the "Python way" to recursively set the owner and group of files in a directory? I could just pass a 'chown -R' command to the shell, but I feel like I'm missing something obvious. I'm mucking about with this:

        import os
        path = "/tmp/foo"
        for root, dirs, files in os.walk(path):
            for momo in dirs:
                os.chown(momo, 502, 20)

    This seems to work for setting the directory, but fails when applied to files. I suspect the files are not getting the whole path, so chown fails since it can't find them. The error is:

        OSError: [Errno 2] No such file or directory: 'foo.html'

    What am I overlooking here?
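
    That suspicion is correct: os.walk() yields names relative to each root, so they need os.path.join() before the chown - and the original loop also never touches the files. A sketch of the corrected version:

        import os

        path = "/tmp/foo"
        for root, dirs, files in os.walk(path):
            for name in dirs + files:
                os.chown(os.path.join(root, name), 502, 20)
        os.chown(path, 502, 20)  # os.walk never yields the top directory itself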


  • Weblogic 10.3 domain unpacking problem

    - by MarkoU
    Hi, I'm trying to unpack a WebLogic 10.3 domain on one of our production servers (SunOS 5.10), but I get the following error:

        $ /opt/bea10/wlserver_10.3/common/bin/unpack.sh -template=/tmp/CM.jar -domain=/opt/bea10/user_projects/CM
        Error: failed to create the temporary script file

    Assuming this is a privilege problem: where exactly does the unpack utility try to create its temporary script files? The unpack script calls a Java class, com.bea.plateng.domain.script.Unpacker, so reading the script itself does not reveal the location. I need to ask the sysadmin for the privileges, so an exact directory location is needed. Of course, the error message is so vague that this might also be some other issue. Any ideas? BR, Marko P.S. Sorry for cross-posting. I tried this question on Serverfault as well but got no replies. Perhaps programmers (like myself) do this kind of stuff anyway.
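
    Java tools like this usually create scratch files in java.io.tmpdir, which defaults to /tmp (or /var/tmp on some Solaris setups). A sketch worth trying - point the JVM at a directory you can certainly write to; WebLogic's wrapper scripts generally honor CONFIG_JVM_ARGS, though that is worth verifying for your release:

        $ mkdir -p $HOME/tmp
        $ export CONFIG_JVM_ARGS="-Djava.io.tmpdir=$HOME/tmp"
        $ /opt/bea10/wlserver_10.3/common/bin/unpack.sh -template=/tmp/CM.jar -domain=/opt/bea10/user_projects/CM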


  • Efficiently trimming PostgreSQL tables

    - by agilefall
    I have about 10 tables with over 2 million records each, and one with 30 million. I would like to efficiently remove older data from each of these tables. My general algorithm is:

        1. create a temp table for each large table and populate it with newer data
        2. truncate the original tables
        3. copy the tmp data back to the original tables using: insert into originaltable (select * from tmp_table)

    However, the last step of copying the data back is taking longer than I'd like. I thought about deleting the original tables and making the temp tables "permanent", but I would lose constraint/foreign key info. If I delete from the tables directly, it takes much longer. Given that I need to preserve all foreign keys and constraints, are there any faster ways of removing the older data? Thanks.
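
    Most of the time in step 3 typically goes into maintaining indexes row by row, so one sketch of a speed-up (table, column, and index names are made up) is to drop the indexes before the bulk insert and rebuild them afterwards, inside one transaction:

        BEGIN;
        CREATE TEMP TABLE keep AS
            SELECT * FROM big_table
            WHERE created_at >= now() - interval '90 days';
        TRUNCATE big_table;
        DROP INDEX IF EXISTS big_table_created_at_idx;  -- repeat for each index
        INSERT INTO big_table SELECT * FROM keep;
        CREATE INDEX big_table_created_at_idx ON big_table (created_at);
        COMMIT;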


  • How to warn for the use of unset variables in a korn shell script

    - by Lepu
    Is there any way to make a Korn shell script throw errors or warnings to prevent the use of unset variables? Let's assume I have a temporary folder that I want to remove:

        TEMP_FILES_DIR='/app/myapp/tmp'
        rm -Rf $TEMP_FILE_DIR   # notice the misspelling

    How can this kind of mistake be caught before it actually does damage? I know the script should check for file existence and empty strings before attempting to remove anything; this is just a silly example to illustrate a mistake that could have been avoided with some warnings. I don't know if this feature exists in ksh. If it does exist, how do you turn it on?
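
    ksh has exactly this switch: set -u (equivalently set -o nounset) turns the expansion of an unset variable into a fatal error instead of an empty string. A short sketch:

        #!/bin/ksh
        set -u                          # or: set -o nounset
        TEMP_FILES_DIR='/app/myapp/tmp'
        rm -Rf "$TEMP_FILES_DIR"        # fine: the variable is set
        rm -Rf "$TEMP_FILE_DIR"         # aborts: TEMP_FILE_DIR: parameter not set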


  • Capistrano update causes C: to be placed in the current directory (cygwin)

    - by user321775
    When I run cap deploy:update in a directory on my local machine (via cygwin), "C:" magically appears in the directory. Sure enough, I can cd to it and it's my Windows C: drive. Now I'm afraid to delete it, but I definitely don't want it in this directory (a Rails project under /home/username/blah/blah). Here's my config/deploy.rb file:

        # custom options
        set :application, "xyz.com"
        set :repository, "ssh://[email protected]:yyyy/home/git/xxx"
        set :user, "myname"
        set :runner, user
        set :use_sudo, false
        server "xxx.xxx.xxx.xxx:yyyy", :app, :web, :db, :primary => true

        # deploy to
        set :deploy_to, "/home/myname/public_html/xyz"

        # repository
        set :scm, :git
        set :deploy_via, :copy

        # ssh options
        default_run_options[:pty] = true
        ssh_options[:paranoid] = false
        ssh_options[:port] = yyyy

        # start passenger
        namespace :deploy do
          task :start do ; end
          task :stop do ; end
          task :restart, :roles => :app, :except => { :no_release => true } do
            run "#{try_sudo} touch #{File.join(current_path,'tmp','restart.txt')}"
          end
        end

    Anyone see the problem? And does anyone know a safe way of getting rid of the C: drives that have already shown up (this has happened in a few directories)?


  • Socket left in TIME_WAIT after file transfer via netcat

    - by com
    Following "Copying by NetCat", I am trying to copy files over the network with netcat. From the console it works pretty well: first I run the listening netcat on the destination machine, then I run the sending side on the source machine. The problem is that it doesn't work when run from a script on the source machine:

        ssh -f user@$desthost 'nc -l 1234 | tar xvf - /dev/null &'   # listening on destination host
        tar cv /tmp/file | nc $desthost 1234                         # sending to destination host

    I saw that after running, port 1234 was still open and the socket's status was TIME_WAIT. If you know what the problem is, please help me out. And by the way, after copying, how can I validate that the content is identical? Thanks! Addendum: I found one very strange thing - the same implementation with screen on the destination host works, but not reliably; sometimes it doesn't copy the file:

        ssh user@$desthost screen -dm -S test 'nc -l 1234 | tar xvf - '   # listening on destination host

    Maybe there is an issue with a timeout?
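
    TIME_WAIT itself is harmless - it is the normal state of a freshly closed TCP socket. The script-only failures are more likely a race (the sender starts before the listener is ready) or nc never exiting at end of input. A sketch of both mitigations plus a checksum validation; note that -w semantics differ between netcat flavors, so verify against yours:

        ssh -f user@$desthost 'nc -l 1234 -w 5 | tar xvf - &'   # -w 5: exit after 5 idle seconds
        sleep 1                                                 # give the listener time to come up
        tar cv /tmp/file | nc -w 5 $desthost 1234

        # validate the copy by comparing checksums on both ends
        md5sum /tmp/file
        ssh user@$desthost md5sum /tmp/file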


  • Variable loss in redirected bash while loop

    - by James Hadley
    I have the following code:

        for ip in $(ifconfig | awk -F ":" '/inet addr/{split($2,a," ");print a[1]}')
        do
            bytesin=0; bytesout=0;
            while read line
            do
                if [[ $(echo ${line} | awk '{print $1}') == ${ip} ]]
                then
                    increment=$(echo ${line} | awk '{print $4}')
                    bytesout=$((${bytesout} + ${increment}))
                else
                    increment=$(echo ${line} | awk '{print $4}')
                    bytesin=$((${bytesin} + ${increment}))
                fi
            done < <(pmacct -s | grep ${ip})
            echo "${ip} ${bytesin} ${bytesout}" >> /tmp/bwacct.txt
        done

    I would like this to print the incremented values to bwacct.txt, but instead the file is full of zeroes:

        91.227.223.66 0 0
        91.227.221.126 0 0
        127.0.0.1 0 0

    My understanding of Bash is that a while loop redirected from a process substitution should preserve variables. What am I doing wrong?
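
    That understanding is correct: done < <(...) runs the loop in the current shell, so the counters do survive - the zeroes mean the loop body is probably never matching anything (e.g. pmacct -s | grep ${ip} producing no output, or field $4 not being numeric). A minimal sketch contrasting the two forms:

        #!/bin/bash
        # pipeline: the while runs in a subshell, so n is lost afterwards
        n=0
        printf '1\n2\n' | while read line; do n=$((n+1)); done
        echo "after pipe: $n"      # prints 0

        # process substitution: the while runs in the current shell, n survives
        n=0
        while read line; do n=$((n+1)); done < <(printf '1\n2\n')
        echo "after < <(): $n"     # prints 2

    So the fix here is to debug the data path, e.g. add echo "got: ${line}" inside the loop.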


  • After mounting using sshfs I cannot commit my changes using subversion

    - by robUK
    Hello. Local machine: Fedora 13, Subversion 1.6.9. Remote machine: CentOS 5.3, Subversion 1.4.2. I have a project on the remote machine, [email protected]:projects/ssd1, which I have mounted on my local machine:

        sshfs [email protected]:projects/ssd1 /home/jbloggs/projects/mnt/ssd1

    Everything mounts OK, so I open my project using GNU Emacs 23.2.1. But when I try to commit my changes from Emacs, I get the following error:

        can't move /home/jbloggs/projects/mnt/ssd1/.svn/tmp/entries to /home/jbloggs/mnt/ssd1/.svn/entries: Operation not permitted

    Does anyone know of any way to resolve this issue? Many thanks for any advice.
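
    Subversion updates its administrative files by renaming a temp file over the existing one, and by default sshfs refuses renames onto existing targets. sshfs ships a workaround flag for exactly this case; a sketch (paths taken from the question):

        fusermount -u /home/jbloggs/projects/mnt/ssd1
        sshfs -o workaround=rename [email protected]:projects/ssd1 /home/jbloggs/projects/mnt/ssd1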


  • asynchronous writing and reading of a file

    - by tazim
    Hi, I have two processes:

        1. One process redirects the output of some Unix command to a file on the server side; the data is always appended to the file, e.g.: find / > tmp.txt
        2. Another process opens and reads the same file, stores it in a string, and sends the entire string to the client.

    These two things happen simultaneously. I am using Python. Any suggestions on possible ways to implement this scenario? Please explain with sample code. Thanks in advance. Tazim.
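
    One common pattern for the reading side (a sketch; send_to_client is a hypothetical function standing in for whatever transport you use) is a tail -f-style follower that remembers its file offset and yields whatever the writer has appended since the last read:

        import time

        def follow(path, poll=0.5):
            """Yield chunks of data appended to *path*, checking every *poll* seconds."""
            with open(path, "r") as f:
                while True:
                    chunk = f.read()      # reads from the current offset to EOF
                    if chunk:
                        yield chunk
                    else:
                        time.sleep(poll)  # nothing new yet; wait for the writer

        # usage sketch:
        # for data in follow("tmp.txt"):
        #     send_to_client(data)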


  • How to check if the internal typedef struct of a typedef struct is NULL ?

    - by watchloop
    typedef struct {
        uint32 item1;
        uint32 item2;
        uint32 item3;
        uint32 item4;
        <some_other_typedef struct> *table;
    } Inner_t;

    typedef struct {
        Inner_t tableA;
        Inner_t tableB;
    } Outer_t;

    Outer_t outer_instance = {
        {NULL},
        { 0, 1, 2, 3, table_defined_somewhere_else, }
    };

    My question is how to check whether tableA is "NULL", as in outer_instance above. I tried:

        if ( tmp->tableA == NULL )

    but I get "error: invalid operands to binary ==".
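
    tableA is a struct, not a pointer, so it can never be compared against NULL; the {NULL} initializer just sets its first member to zero (and, for an object with static storage like outer_instance, the remaining members are zero-initialized too). The usual idiom - a sketch assuming an all-zero Inner_t means "unused" - is to test a designated member, such as the table pointer:

        if (tmp->tableA.table == NULL) {
            /* tableA was never filled in */
        }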


  • Rails: Is there a way to check for incomplete JPG file upload?

    - by user206481
    I have a Rails app that processes and serves up jpg files that were uploaded via FTP. On several occasions the FTP process was disconnected and left many incomplete .jpg files. I was surprised to see that the incomplete jpgs behave like normal jpg files in my app even though they have incomplete image data. I have since implemented a more robust FTP process where the uploaded file is initially named *.tmp and gets renamed to *.jpg after an FTP success code is received. My problem is, I still have all of these incomplete jpg files on the server and can't figure out how to programmatically weed them out. I can actually display them in a view without generating any errors, but only a partial image is there. I tried RMagick, but they all load successfully (using Image.read) and report valid x & y resolutions. I have so far not been able to find a way to programmatically differentiate between an incomplete and a complete jpg. Any ideas?
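
    A complete JPEG ends with the two-byte EOI marker 0xFF 0xD9, and a transfer cut off mid-stream almost always loses it. A sketch of a check built on that (it assumes truncation is the only corruption mode, which matches the FTP-disconnect scenario):

        def complete_jpeg?(path)
          File.open(path, 'rb') do |f|
            return false if f.stat.size < 2
            f.seek(-2, IO::SEEK_END)               # look at the last two bytes
            f.read(2).unpack('C2') == [0xFF, 0xD9] # JPEG end-of-image marker
          end
        end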


  • Is it possible to open a pipe-based filehandle which prints to a variable in perl?

    - by blackkettle
    Hi, I know I can do this:

        open(F, ">", \$var);
        print F "something cool";
        close(F);
        print $var;

    or this:

        open(F, "| ./prog1 | ./prog2 > tmp.file");
        print F "something cool";
        close(F);

    but is it possible to combine them? The semantics of what I'd like to do should be clear from the following:

        open(F, "| ./prog1 | ./prog2", \$var);
        print F "something cool";
        close(F);
        print $var;

    However, the above clearly won't work. A few minutes of experimenting and googling seem to indicate that this is not possible, but I'd like to know if I'm stuck with using backticks to capture the output.
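
    A three-argument open can't do this, but the CPAN module IPC::Run can wire scalars to both ends of a pipeline; a sketch (IPC::Run must be installed):

        use strict;
        use warnings;
        use IPC::Run qw(run);

        my $in  = "something cool";
        my $out = '';
        # feed $in to ./prog1, pipe its output through ./prog2, collect stdout in $out
        run [ './prog1' ], '<', \$in, '|', [ './prog2' ], '>', \$out
            or die "pipeline failed: $?";
        print $out;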

