Search Results

Search found 82718 results on 3309 pages for 'large file download'.


  • Can I load data (Google App Engine) from http://localhost:8100/remote_api?

    - by zjm1126
    I can download data from GAE (http://zjm1126.appspot.com/remote_api) with this command:

        appcfg.py download_data --application=zjm1126 --url=http://zjm1126.appspot.com/remote_api --filename=a.csv

    and it succeeds:

        D:\zjm_demo\app>appcfg.py download_data --application=zjm1126 --url=http://zjm1126.appspot.com/remote_api --filename=a.csv
        Downloading data records.
        [INFO ] Logging to bulkloader-log-20100618.162421
        [INFO ] Throttling transfers:
        [INFO ] Bandwidth: 250000 bytes/second
        [INFO ] HTTP connections: 8/second
        [INFO ] Entities inserted/fetched/modified: 20/second
        [INFO ] Batch Size: 10
        [INFO ] Opening database: bulkloader-progress-20100618.162421.sql3
        [INFO ] Opening database: bulkloader-results-20100618.162421.sql3
        [INFO ] Connecting to zjm1126.appspot.com/remote_api
        Please enter login credentials for zjm1126.appspot.com
        Email: [email protected]
        Password for [email protected]:
        [INFO ] Downloading kinds: [u'LogText', u'Greeting', u'Forum', u'Thread']
        ....
        [INFO ] Have 0 entities, 0 previously transferred
        [INFO ] 0 entities (8804 bytes) transferred in 11.3 seconds

    Now I want to know whether I can download data from 127.0.0.1 (the dev server). This is my command:

        appcfg.py download_data --application=zjm1126 --url=http://localhost:8100/remote_api --filename=a.csv

    and the error is:

        D:\zjm_demo\app>appcfg.py download_data --application=zjm1126 --url=http://localhost:8100/remote_api --filename=a.csv
        Downloading data records.
        [INFO ] Logging to bulkloader-log-20100618.162325
        [INFO ] Throttling transfers:
        [INFO ] Bandwidth: 250000 bytes/second
        [INFO ] HTTP connections: 8/second
        [INFO ] Entities inserted/fetched/modified: 20/second
        [INFO ] Batch Size: 10
        [INFO ] Opening database: bulkloader-progress-20100618.162325.sql3
        [INFO ] Opening database: bulkloader-results-20100618.162325.sql3
        Please enter login credentials for localhost
        Email: [email protected]
        Password for [email protected]:
        [INFO ] Connecting to localhost:8100/remote_api
        [ERROR ] Exception during authentication
        Traceback (most recent call last):
          File "d:\Program Files\Google\google_appengine\google\appengine\tools\bulkloader.py", line 3169, in Run
            self.request_manager.Authenticate()
          File "d:\Program Files\Google\google_appengine\google\appengine\tools\bulkloader.py", line 1178, in Authenticate
            remote_api_stub.MaybeInvokeAuthentication()
          File "d:\Program Files\Google\google_appengine\google\appengine\ext\remote_api\remote_api_stub.py", line 542, in MaybeInvokeAuthentication
            datastore_stub._server.Send(datastore_stub._path, payload=None)
          File "d:\Program Files\Google\google_appengine\google\appengine\tools\appengine_rpc.py", line 346, in Send
            f = self.opener.open(req)
          File "D:\Python25\lib\urllib2.py", line 387, in open
            response = meth(req, response)
          File "D:\Python25\lib\urllib2.py", line 498, in http_response
            'http', request, response, code, msg, hdrs)
          File "D:\Python25\lib\urllib2.py", line 425, in error
            return self._call_chain(*args)
          File "D:\Python25\lib\urllib2.py", line 360, in _call_chain
            result = func(*args)
          File "D:\Python25\lib\urllib2.py", line 506, in http_error_default
            raise HTTPError(req.get_full_url(), code, msg, hdrs, fp)
        HTTPError: HTTP Error 404: Not Found
        [INFO ] Authentication Failed

    So what should I do? Thanks.
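    The 404 during authentication suggests that nothing is mounted at /remote_api on the dev server. A sketch of the usual fix for SDKs of that era (an assumption - check your SDK's bulkloader documentation) is to map the remote_api handler in app.yaml and point --url at the running dev server port (8100 here):

        handlers:
        - url: /remote_api
          script: $PYTHON_LIB/google/appengine/ext/remote_api/handler.py
          login: admin

    When the dev server then prompts for credentials, it typically accepts any e-mail address with a blank password.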


  • Why does File::Slurp return a scalar when it should return a list?

    - by BrianH
    I am new to the File::Slurp module, and on my first test with it, it was not giving the results I was expecting. It took me a while to figure it out, so now I am interested in why I was seeing this certain behavior. My call to File::Slurp looked like this: my @array = read_file( $file ) || die "Cannot read $file\n"; I included the "die" part because I am used to doing that when opening files. My @array would always end up with the entire contents of the file in the first element of the array. Finally I took out the "|| die" section, and it started working as I expected. Here is an example to illustrate: perl -de0 Loading DB routines from perl5db.pl version 1.22 Editor support available. Enter h or `h h' for help, or `man perldebug' for more help. main::(-e:1): 0 DB<1> use File::Slurp DB<2> $file = '/usr/java6_64/copyright' DB<3> x @array1 = read_file( $file ) 0 'Licensed material - Property of IBM.' 1 'IBM(R) SDK, Java(TM) Technology Edition, Version 6' 2 'IBM(R) Runtime Environment, Java(TM) Technology Edition, Version 6' 3 '' 4 'Copyright Sun Microsystems Inc, 1992, 2008. All rights reserved.' 5 'Copyright IBM Corporation, 1998, 2009. All rights reserved.' 6 '' 7 'The Apache Software License, Version 1.1 and Version 2.0' 8 'Copyright 1999-2007 The Apache Software Foundation. All rights reserved.' 9 '' 10 'Other copyright acknowledgements can be found in the Notices file.' 11 '' 12 'The Java technology is owned and exclusively licensed by Sun Microsystems Inc.' 13 'Java and all Java-based trademarks and logos are trademarks or registered' 14 'trademarks of Sun Microsystems Inc. in the United States and other countries.' 15 '' 16 'US Govt Users Restricted Rights - Use duplication or disclosure' 17 'restricted by GSA ADP Schedule Contract with IBM Corp.' DB<4> x @array2 = read_file( $file ) || die "Cannot read $file\n"; 0 'Licensed material - Property of IBM. IBM(R) SDK, Java(TM) Technology Edition, Version 6 IBM(R) Runtime Environment, Java(TM) Technology Edition, Version 6 Copyright Sun Microsystems Inc, 1992, 2008. All rights reserved. Copyright IBM Corporation, 1998, 2009. All rights reserved. The Apache Software License, Version 1.1 and Version 2.0 Copyright 1999-2007 The Apache Software Foundation. All rights reserved. Other copyright acknowledgements can be found in the Notices file. The Java technology is owned and exclusively licensed by Sun Microsystems Inc. Java and all Java-based trademarks and logos are trademarks or registered trademarks of Sun Microsystems Inc. in the United States and other countries. US Govt Users Restricted Rights - Use duplication or disclosure restricted by GSA ADP Schedule Contract with IBM Corp. ' Why does the || die make a difference? I have a feeling this might be more of a Perl precedence question instead of a File::Slurp question. I looked in the File::Slurp module and it looks like it is set to croak if there is a problem, so I guess the proper way to do it is to allow File::Slurp to croak for you. Now I'm just curious why I was seeing these differences.
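    A minimal Perl sketch of the precedence point raised above: || binds more tightly than the assignment and puts its left side in scalar context, so read_file() returns the whole file as one string; the low-precedence or leaves the call in list context.

        # with ||: parsed as  my @lines = ( read_file($file) || die ... );
        # read_file() is then called in scalar context and returns one big string
        my @lines = read_file($file)
            or die "Cannot read $file\n";   # 'or' binds after the assignment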


  • How to read a large SQLite file to be copied into an Android emulator or device from the assets folder?

    - by Peter SHINe ???
    I guess many people have already read this article, "Using your own SQLite database in Android applications": http://www.reigndesign.com/blog/using-your-own-sqlite-database-in-android-applications/comment-page-2/#comment-12368
    However, it keeps throwing an IOException at:

        while ((length = myInput.read(buffer)) > 0) {
            myOutput.write(buffer, 0, length);
        }

    I am trying to use a large DB file, about 8 MB. I built it using sqlite3 on Mac OS X, inserted UTF-8 encoded strings (I am using Korean), and added the android_meta table with ko_KR as the locale, as instructed above. However, when I debug, it keeps showing an IOException at length = myInput.read(buffer). I suspect it's caused by trying to read a big file; if not, I have no clue why. I tested the same code with a much smaller text file, and it worked fine. Can anyone help me out on this? I've searched many places, but none gave me a clear answer or a good solution - good meaning efficient or easy. I will try BufferedInput(Output)Stream, but if the simpler version cannot work, I don't think that will work either. Can anyone explain the fundamental limits on file input/output in Android, and the right way around them, possibly? I will really appreciate any considerate answer. Thank you.

    With more detail:

        private void copyDataBase() throws IOException {
            // Open your local db as the input stream
            InputStream myInput = myContext.getAssets().open(DB_NAME);

            // Path to the just created empty db
            String outFileName = DB_PATH + DB_NAME;

            // Open the empty db as the output stream
            OutputStream myOutput = new FileOutputStream(outFileName);

            // Transfer bytes from the input file to the output file
            byte[] buffer = new byte[1024];
            int length;
            while ((length = myInput.read(buffer)) > 0) {
                myOutput.write(buffer, 0, length);
            }

            // Close the streams
            myOutput.flush();
            myOutput.close();
            myInput.close();
        }


  • Char error in C language

    - by Nadeem tabbaa
    i have a project for a course, i did almost everything but i have this error i dont know who to solve it... the project about doing our own shell some of them we have to write our code, others we will use the fork method.. this is the code, #include <sys/wait.h> #include <dirent.h> #include <limits.h> #include <errno.h> #include <stdlib.h> #include <string.h> #include<stdio.h> #include<fcntl.h> #include<unistd.h> #include<sys/stat.h> #include<sys/types.h> int main(int argc, char **argv) { pid_t pid; char str[21], *arg[10]; int x,status,number; system("clear"); while(1) { printf("Rshell>" ); fgets(str,21,stdin); x = 0; arg[x] = strtok(str, " \n\t"); while(arg[x]) arg[++x] = strtok(NULL, " \n\t"); if(NULL!=arg[0]) { if(strcasecmp(arg[0],"cat")==0) //done { int f=0,n; char l[1]; struct stat s; if(x!=2) { printf("Mismatch argument\n"); } /*if(access(arg[1],F_OK)) { printf("File Exist"); exit(1); } if(stat(arg[1],&s)<0) { printf("Stat ERROR"); exit(1); } if(S_ISREG(s.st_mode)<0) { printf("Not a Regular FILE"); exit(1); } if(geteuid()==s.st_uid) if(s.st_mode & S_IRUSR) f=1; else if(getegid()==s.st_gid) if(s.st_mode & S_IRGRP) f=1; else if(s.st_mode & S_IROTH) f=1; if(!f) { printf("Permission denied"); exit(1); }*/ f=open(arg[1],O_RDONLY); while((n=read(f,l,1))>0) write(1,l,n); } else if(strcasecmp(arg[0],"rm")==0) //done { if( unlink( arg[1] ) != 0 ) perror( "Error deleting file" ); else puts( "File successfully deleted" ); } else if(strcasecmp(arg[0],"rmdir")==0) //done { if( remove( arg[1] ) != 0 ) perror( "Error deleting Directory" ); else puts( "Directory successfully deleted" ); } else if(strcasecmp(arg[0],"ls")==0) //done { DIR *dir; struct dirent *dirent; char *where = NULL; //printf("x== %i\n",x); //printf("x== %s\n",arg[1]); //printf("x== %i\n",get_current_dir_name()); if (x == 1) where = get_current_dir_name(); else where = arg[1]; if (NULL == (dir = opendir(where))) { fprintf(stderr,"%d (%s) opendir %s failed\n", errno, strerror(errno), where); return 2; } while (NULL != (dirent = readdir(dir))) { printf("%s\n", dirent->d_name); } closedir(dir); } else if(strcasecmp(arg[0],"cp")==0) //not yet for Raed { FILE *from, *to; char ch; if(argc!=3) { printf("Usage: copy <source> <destination>\n"); exit(1); } /* open source file */ if((from = fopen(argv[1], "rb"))==NULL) { printf("Cannot open source file.\n"); exit(1); } /* open destination file */ if((to = fopen(argv[2], "wb"))==NULL) { printf("Cannot open destination file.\n"); exit(1); } /* copy the file */ while(!feof(from)) { ch = fgetc(from); if(ferror(from)) { printf("Error reading source file.\n"); exit(1); } if(!feof(from)) fputc(ch, to); if(ferror(to)) { printf("Error writing destination file.\n"); exit(1); } } if(fclose(from)==EOF) { printf("Error closing source file.\n"); exit(1); } if(fclose(to)==EOF) { printf("Error closing destination file.\n"); exit(1); } } else if(strcasecmp(arg[0],"mv")==0)//done { if( rename(arg[1],arg[2]) != 0 ) perror( "Error moving file" ); else puts( "File successfully moved" ); } else if(strcasecmp(arg[0],"hi")==0)//done { printf("hello\n"); } else if(strcasecmp(arg[0],"exit")==0) // done { return 0; } else if(strcasecmp(arg[0],"sleep")==0) // done { if(x==1) printf("plz enter the # seconds to sleep\n"); else sleep(atoi(arg[1])); } else if(strcmp(arg[0],"history")==0) // not done { FILE *infile; //char fname[40]; char line[100]; int lcount; ///* Read in the filename */ //printf("Enter the name of a ascii file: "); //fgets(History.txt, sizeof(fname), stdin); /* Open the file. 
If NULL is returned there was an error */ if((infile = fopen("History.txt", "r")) == NULL) { printf("Error Opening File.\n"); exit(1); } while( fgets(line, sizeof(line), infile) != NULL ) { /* Get each line from the infile */ lcount++; /* print the line number and data */ printf("Line %d: %s", lcount, line); } fclose(infile); /* Close the file */ writeHistory(arg); //write to txt file every new executed command //read from the file once the history command been called //if a command called not for the first time then just replace it to the end of the file } else if(strncmp(arg[0],"@",1)==0) // not done { //scripting files // read from the file command by command and executing them } else if(strcmp(arg[0],"type")==0) //not done { //if(x==1) //printf("plz enter the argument\n"); //else //type((arg[1])); } else { pid = fork( ); if (pid == 0) { execlp(arg[0], arg[0], arg[1], arg[2], NULL); printf ("EXEC Failed\n"); } else { wait(&status); if(strcmp(arg[0],"clear")!=0) { printf("status %04X\n",status); if(WIFEXITED(status)) printf("Normal termination, exit code %d\n", WEXITSTATUS(status)); else printf("Abnormal termination\n"); } } } } } } void writeHistory(char *arg[]) { FILE *file; file = fopen("History.txt","a+"); /* apend file (add text to a file or create a file if it does not exist.*/ int i =0; while(strcasecmp(arg[0],NULL)==0) { fprintf(file,"%s ",arg[i]); /*writes*/ } fprintf(file,"\n"); /*new line*/ fclose(file); /*done!*/ getchar(); /* pause and wait for key */ //return 0; } the thing is when i compile the code, this what it gives me /home/ugics/st255375/ICS431Labs/Project/Rshell.c: At top level: /home/ugics/st255375/ICS431Labs/Project/Rshell.c:264: warning: conflicting types for ‘writeHistory’ /home/ugics/st255375/ICS431Labs/Project/Rshell.c:217: note: previous implicit declaration of ‘writeHistory’ was here can any one help me??? thanks
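    The two gcc messages quoted at the end point at the cause: writeHistory() is called inside main() before the compiler has seen any declaration of it, so an implicit declaration is assumed, and that later conflicts with the real definition. A minimal sketch of the fix is a forward declaration near the top of the file, after the #include lines:

        /* forward declaration so main() knows the real signature of writeHistory() */
        void writeHistory(char *arg[]);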


  • PostgreSQL: lots of large arrays and writes

    - by strife911
    Hi, I am running a Python program that spawns 8 threads, and each thread launches its own postmaster process via psycopg2. This is to maximize the use of my CPU cores (8). Each thread calls a series of SQL functions. Most of these functions go through many thousands of rows, each associated with a large FLOAT8[] array (250-300 values), using unnest() and multiplying each FLOAT8 by another FLOAT8 associated with the row. This array approach minimizes the size of the indexes and the tables. Each function ends with an insert into another table of a row of the same form (pk INT4, array FLOAT8[]). Some SQL functions called from Python will update a row in these kinds of tables (with large arrays). I currently have PostgreSQL configured to use most of the memory for cache (effective_cache_size of 57 GB, I think) and only a small amount of it for shared memory (1 GB, I think). First, I was wondering what the difference between cache and shared memory is in regards to PostgreSQL (and my application). What I have noticed is that only about 20-40% of my total CPU processing power is used during the most read-intensive parts of the application (SELECT unnest(array), etc.). So secondly, I was wondering what I could do to improve this so that 100% of the CPU is used. Based on my observations, it does not seem to have anything to do with Python or its GIL. Thanks
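    For the cache vs. shared memory part of the question, a minimal postgresql.conf sketch - the values are illustrative assumptions taken from the figures above, not recommendations:

        shared_buffers = 1GB          # memory PostgreSQL allocates itself for its shared buffer cache
        effective_cache_size = 57GB   # planner hint: how much data the OS page cache is assumed to hold
        work_mem = 64MB               # per-sort / per-hash memory for each operation

    effective_cache_size allocates nothing; it only influences the planner's cost estimates, while shared_buffers is the actual shared memory segment.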


  • OS X bash grep - finding search terms in a large file with a single line

    - by unsynchronized
    Is there a simple Unix command line I can enter which lets me isolate, say, 512 bytes either side of a search term, even if there is only one "line" in a very large text file? OK, this should be easy. Famous last words. I'm not that familiar with grep, but it seems it is mainly used to filter out lines in the input that contain search terms. I have a very large JSON file that I downloaded and want to search for a particular term. Before you click the link - it's over 244 MB, so be warned - it is from the Internet Wayback Machine and contains lists of zip files of archived photos; I am trying to find mine. Their web interface is broken, so I found the JSON file that they make public here - it's the last one on the list. When I grep for my username, it finds it, but then proceeds to dump that line to the console. The problem is that the line is 244 MB long, and it's the only line in the file. I tried using less, but could not get that to do much - it's very slow, and seems to have the same issue. So again: is there a simple Unix command line I can enter which lets me isolate, say, 512 bytes either side of a search term?
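    A sketch of one way to do this with grep alone, assuming the username is literally myusername: use -o to print only the matching part and pad the pattern with up to 512 characters of context on each side:

        grep -oE ".{0,512}myusername.{0,512}" large_file.json

    Both GNU grep and the BSD grep shipped with OS X accept -o and -E; grep still has to read the whole 244 MB line, but only the surrounding snippet is printed.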


  • IIS7: large worker process request queue creates request blocking; aspnet.config & machine.config amended (bottleneck)

    - by scott_lotus
    ASP.NET 2.0 app, .NET 2.0 framework, IIS7. I am seeing a large queue of "requests" appear under the "worker process" option. The states recorded are mostly Authenticate Request and Execute Request Handler. I have amended aspnet.config in C:\Windows\Microsoft.NET\Framework64\v2.0.50727 (32-bit and 64-bit paths) to include:

        maxConcurrentRequestsPerCPU="50000" maxConcurrentThreadsPerCPU="0" requestQueueLimit="50000"

    I have amended machine.config in C:\Windows\Microsoft.NET\Framework64\v2.0.50727\CONFIG (32-bit and 64-bit paths) to include:

        autoConfig="true" maxIoThreads="100" maxWorkerThreads="100" minIoThreads="50" minWorkerThreads="50" minFreeThreads="176" minLocalRequestFreeThreads="152"

    Still I get the issue. It manifests itself as a large number of requests in the worker process queue. The number of current connections to the website shows 500 when this occurs; I don't think I have seen concurrent connections over 500 without the issue occurring. The web application slows as the requests block. Recycling the app pool resolves it for a while (as expected) as the load is spread between the two pools. The application pool in question has its fixed-request recycling set to 50000. Thank you for any help. Scott

    Quick edit: my developers are telling me the project was built against the .NET 3.5 framework. Looking at C:\Windows\Microsoft.NET\Framework64\v3.5 there does not appear to be an aspnet.config or a machine.config. Is there a 3.5 equivalent? After a little searching, apparently 3.5 uses the 2.0 framework files for the ones 3.5 is missing. So back to the original question: where is my bottleneck?


  • Load sharing for large websites

    - by JHarley1
    Hello, I have a question regarding load sharing for large websites.

    My understanding: if you have a website that gets millions of hits a day, you will need an architecture that can support that sort of pressure. You can do one of two things:

    - Invest in a single large server with huge amounts of processing power, memory and storage (such as Microsoft's TerraServer).
    - Spread the load of your website across a number of machines.

    Let me tackle the second approach: you have a collection of machines, all running web server software and all having access to identical copies of the website's pages. You can spread the load across these machines either using a cyclic (round-robin) pattern in DNS or using a load-balancing switch. The advantages of this approach are:

    - Redundancy - servers can fail and the others "pick up the slack".
    - Incremental growth - the ability to easily add new machines to the set-up.

    My questions:

    - Is there a virtual approach to this issue of load balancing now?
    - If the website runs from a database, is there still only a single copy of the database?
    - If a user has a session on one server (e.g. they went to www.example.org and were assigned to Server 2, where they created a session), and they refresh the website and are allocated Server 3, would they still have their session?
    - What are the other disadvantages associated with load balancing?

    Many thanks, J


  • How do I import a large SQL file into a local LAMP (XAMPP) environment?

    - by mraslton
    I have used Linux to import a large MySQL dump file (into a new database), but am new to how the process works in a local LAMP environment using XAMPP, as XAMPP does not support SSH. I've downloaded large_dump_file.sql from the Linux server to my local system. I'm using Windows XP and have used XAMPP to set up LAMP. I am able to access local_database via phpMyAdmin, but the dump file is too large to import using that app. I'm trying to import the file via the command prompt, but so far with no success. At the prompt:

        cd ..
        cd ..
        cd xampp
        cd mysql
        cd bin

    I've found that mysqlimport is used to import .csv and .txt files, and mysql is used to import .sql files, but I can't find documentation on whether or not to use the -u and -p options, so I've tried many variations of the command with no luck. What would be the proper command? I've modified the hosts file, the virtual-hosts conf, and the Apache config files. Do I need to change any other config files on my local system? Thanks.
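    A minimal sketch of the usual command, run from the xampp\mysql\bin directory - the path is an assumption, and it assumes the default XAMPP root account and that local_database already exists:

        mysql -u root -p local_database < C:\path\to\large_dump_file.sql

    mysql prompts for the password (blank by default on a fresh XAMPP install) and then streams the whole dump in, which avoids phpMyAdmin's upload size limit.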


  • Logging into Local Statusnet instance on Apache causes browser to download a file

    - by DilbertDave
    I've installed StatusNet 0.9.1 on a Windows Server via the WAMP stack and on the whole it seems to be fine. However, when logging in using IE7 or Chrome, the browsers invoke a file download, i.e. the File Download dialog is displayed. In IE7 the file is called "notice" with the content below (some parts starred out):

        <?xml version="1.0" encoding="UTF-8"?>
        <OpenSearchDescription xmlns="http://a9.com/-/spec/opensearch/1.1/">
          <ShortName>Mumble Notice Search</ShortName>
          <Contact>david.carson@*****.com</Contact>
          <Url type="text/html" method="get" template="http://voice.*****.com/mumble/search/notice?q={searchTerms}"></Url>
          <Image height="16" width="16" type="image/vnd.microsoft.icon">http://voice.*****.com/mumble/favicon.ico</Image>
          <Image height="50" width="50" type="image/png">http://voice.******.com/mumble/theme/cloudy/logo.png</Image>
          <AdultContent>false</AdultContent>
          <Language>en_GB</Language>
          <OutputEncoding>UTF-8</OutputEncoding>
          <InputEncoding>UTF-8</InputEncoding>
        </OpenSearchDescription>

    In Chrome (Linux and Windows!) the file is called "people" and contains similar XML. This is not an issue when logging in using Firefox. This is obviously a configuration issue, but I'm not having much luck tracking it down. I tested the previous version of StatusNet on an Ubuntu Server VM on our network and it worked fine for months. Thanks in advance.


  • Tool to search, watch and download YouTube videos on Ubuntu

    - by Mike
    I am looking for a tool like MacTubes for Ubuntu. MacTubes is a mac app that can search, watch and download youtube videos. The great advantage of this app is that it searches youtube and shows all videos available there and allows you to select all and download everything fast and easy. MacTubes is so awesome that it also converts the video to MP4 and downloads the HD version of a video when available. I use this on my mac, but my sister uses Linux and I am looking for something like that for her. I have tried Miro, but Miro's search feature is bad as hell. I search for something using MacTubes and it shows me 1600 results. The same search under Miro shows me 40 results. Miro never shows more than one page of results. I prefer it to be an application with a GUI, instead of command line, because my sister's proficiency in Linux is not that good. Any suggestions? Thanks.


  • Shrinking a large transaction log on a full drive

    - by Sam
    Someone fired off an UPDATE statement as part of some maintenance which did a cross-join update on two tables with 200,000 records each. That's 40 billion row combinations, which would explain part of how the log grew to 200 GB. I also did not have the log file capped, which is another problem I will be taking care of server-wide - we have almost 200 databases residing there. The 'solution' I used was to back up the database, back up the log with TRUNCATE_ONLY, and then back up the database again. I then shrank the log file and set a cap on the log. Seeing as there were other databases using the log drive, I was in a bit of a rush to clean it out. I might have been able to back the log file up to our backup drive, hoping that no other databases needed to grow their log files. Paul Randal, from http://technet.microsoft.com/en-us/magazine/2009.02.logging.aspx:

        Under no circumstances should you delete the transaction log, try to rebuild it using undocumented commands, or simply truncate it using the NO_LOG or TRUNCATE_ONLY options of BACKUP LOG (which have been removed in SQL Server 2008). These options will either cause transactional inconsistency (and more than likely corruption) or remove the possibility of being able to properly recover the database.

    Were there any other options I'm not aware of?
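    For reference, a minimal T-SQL sketch of the usually recommended sequence when log space has to be reclaimed without breaking recoverability - the database, file and path names are assumptions:

        -- take a real log backup first so the log chain stays intact
        BACKUP LOG MyDb TO DISK = N'E:\Backups\MyDb_log.trn';

        -- find the logical name of the log file, then shrink it (target size in MB)
        USE MyDb;
        SELECT name FROM sys.database_files WHERE type_desc = 'LOG';
        DBCC SHRINKFILE (N'MyDb_log', 1024);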


  • ffmpeg - h264 to xvid creates large file

    - by fatnic
    I'm trying to use ffmpeg to convert an h264/aac video file to an xvid/mp3 file so I can play it in my ultra-cheap media player. At the moment the converted video file is TWICE the size of the original mp4. Is there any way to get a smaller file size without losing too much quality? Even a drop to -qmin 1 is pretty awful! The command I'm using is:

        ffmpeg -i input.mp4 -vcodec libxvid -sameq -acodec libmp3lame -ab 128k -ac 2 output.avi

    And the ffmpeg output is:

        Input #0, mov,mp4,m4a,3gp,3g2,mj2, from 'input.mp4'
          Metadata:
            major_brand     : isom
            minor_version   : 1
            compatible_brands: isomavc1
          Duration: 01:34:27.69, start: 0.000000, bitrate: 1520 kb/s
            Stream #0.0(und): Video: h264, yuv420p, 720x304 [PAR 1:1 DAR 45:19], 1387 kb/s, 25 fps, 25 tbr, 25k tbn, 50 tbc
            Stream #0.1(und): Audio: aac, 48000 Hz, stereo, s16, 128 kb/s
        Output #0, avi, to 'output.avi':
          Metadata:
            ISFT            : Lavf52.64.2
            Stream #0.0(und): Video: mpeg4, yuv420p, 720x304 [PAR 1:1 DAR 45:19], q=2-31, 200 kb/s, 25 tbn, 25 tbc
            Stream #0.1(und): Audio: libmp3lame, 48000 Hz, stereo, s16, 128 kb/s
        Stream mapping:
          Stream #0.0 -> #0.0
          Stream #0.1 -> #0.1
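    -sameq keeps the same quantizer scale rather than the "same quality", and across different codecs it tends to produce very large files. A hedged alternative is to drop -sameq and target a video bitrate close to the source (about 1387 kb/s here), for example:

        ffmpeg -i input.mp4 -vcodec libxvid -b 1200k -acodec libmp3lame -ab 128k -ac 2 output.avi

    With ffmpeg builds of this era, -b sets the video bitrate; the output should then come out roughly the size of the original instead of twice it.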


  • How to configure a large MTU (Linux)

    - by Somejan
    I have a gigabit ethernet connection from my laptop to my router, and a working ipv6 connection to the internet. I can receive very large packets from sites on the internet, with sizes up to at least 10000 bytes (according to wireshark). (edit: turns out to be linux's 'generic receive offload') However, when trying to send anything, my local computer fragments at just below 1500 bytes for ipv6. (On ipv4, I can send tcp packets to the internet of at least 1514 bytes, I can ping with packets up to the configured mtu of 6128 but they are blackholed.) I'm on ubuntu 12.04. I have configured an mtu for my eth0 of 6128 (the maximum it accepts), both using ip link set dev eth0 mtu 6128 and in the NetworkManager applet gui, and restarted the connection. ip link show eth0 shows the 6128 mtu is indeed set. ip -6 route shows that none of the paths the kernel knows about have an mtu set. I can ping over ipv4 with packets up to 6128 bytes (though I don't get responses), but when I do ping6 myrouter -c3 -s1500 -Mdo I get error replies from my own computer saying that the packets are too large and the mtu is 1480. I have confirmed with Wireshark that nothing is put on the wire, and the replies are indeed generated by my own computer. So, how do I get my computer to use the larger mtu?


  • Export-Mailbox - fails with large folders

    - by grojo
    I am trying to move messages from a rather large mailbox to an archive mailbox, but I run into errors all the time. The command I am executing is:

        Export-Mailbox -Identity MAILBOX_FROM -TargetMailbox ARCHIVE -TargetFolder ARCHIVE_FOLDER -StartDate 2009-02-01 -EndDate 2009-02-28 -DeleteContent -Confirm:$false

    I can copy/move some messages, but run into frequent "an unknown error has occurred" failures (status code -1056749164). I run the console as an administrative user, and all permissions are set right, as far as I can tell. I've restricted the start and end dates in case the number of messages moved/deleted creates problems. Is there anything I am missing in my setup? Corrupted messages? Over-limit message sizes?

    Update: what I've learnt so far is that folders with more than approximately 3000 messages will generate errors. If mail retention is set (default 30 days), Export-Mailbox will scan all messages whether or not they were deleted in previous runs, and restricting the dates to limit the number of messages will not work. To avoid errors, I've switched off deleted message retention for the mailbox, moved the messages from the one large folder into multiple folders, and moved those one by one...


  • Large, high performance object or key/value store for HTTP serving on Linux

    - by Tommy
    I have a service that serves images to end users at a very high rate using plain HTTP. The images vary between 4 and 64 kbytes, and there are 1,300,000,000 of them in total. The dataset is about 30 TiB in size, and changes (new objects, updates, deletes) make up less than 1% of the requests. The number of requests per second varies from 240 to 9000 and is dispersed pretty much all over, with few objects being especially "hot". As of now, these images are files on an ext3 filesystem distributed read-only across a large number of mid-range servers. This poses several problems:

    - Using a filesystem is very inefficient, since the metadata size is large, the inode/dentry cache is volatile on Linux, and some daemons tend to stat()/readdir() their way through the directory structure, which in my case becomes very expensive.
    - Updating the dataset is very time-consuming and requires remounting between set A and B.
    - The only reasonable handling is operating on the block device for backup, copying, etc.

    What I would like is a daemon that:

    - speaks HTTP (GET, PUT, DELETE and perhaps UPDATE)
    - stores data in an efficient structure; the index should remain in memory, and considering the number of objects, the overhead must be small
    - can handle massive numbers of connections with slow (if any) time needed to ramp up
    - reads the index into memory at startup
    - provides statistics (nice, but not mandatory)

    I have experimented a bit with Riak, Redis, MongoDB, Kyoto and Varnish with persistent storage, but I haven't had the chance to dig in really deep yet.


  • File corruption (bad checksums) in large files copied to VMware guest

    - by AllanA
    In setting up a development lab, I've got a desktop system running ESXi 4.1.0 (free license) on SATA RAID 0 (already purchased and configured when I started this job; I'm open to hardware input as it pertains to my problem). Its guests so far include two Win2008 Server R2 64-bit VMs and one Ubuntu 10.04 64-bit VM. I'm installing onto the Windows servers. We've been copying over some fairly large files (over a gigabyte) for an installation, hoping to install more quickly from a (virtual) hard drive than from the network or from BD-ROM. The problem is that they keep coming up with different checksums from the originals. The file sizes are the same, but md5sum reports different numbers (and so does the installer, as it refuses to continue when the checksums don't match). I've tried copying directly from the BD-ROM (attaching the OS drive to the host system's physical drive). I've tried copying the large files onto a co-worker's Windows machine from his Blu-ray drive; when I do that, the checksums match. But when I copy from his machine to the VM guest over a network share, the checksums no longer match. Thinking this meant a corrupt destination drive, I deleted it in vSphere and added another freshly created drive. The problem persists. I'm not sure what to try next.


  • Passive mode FTP file download hangs from specific machine

    - by chiptuned
    I have a server, an AWS instance, that just cannot download files from one specific FTP server. I can connect to the FTP server fine and run some commands, but when I request a file it just hangs. Here is the debug output of the basic Linux ftp client after login:

        ---> SYST
        215 UNIX Type: Apache FtpServer
        Remote system type is UNIX.
        ftp> get outgoing/catalog.gz catalog.gz
        local: catalog.gz remote: outgoing/catalog.gz
        ---> PASV
        227 Entering Passive Mode (64,156,167,125,135,191)
        ---> RETR outgoing/catalog.gz
        150 File status okay; about to open data connection.

    That's it. Then it just sits there and nothing transfers. I have verified that a data connection is made, but the client gets no data:

        $ ss -nt dst 64.156.167.125
        State      Recv-Q Send-Q   Local Address:Port     Peer Address:Port
        ESTAB      0      0       10.185.147.150:41190   64.156.167.125:21
        ESTAB      0      0       10.185.147.150:48871   64.156.167.125:48557

    The FTP server is not under my control, and downloads from other FTP servers in passive mode have worked. Active mode does not work as the system is behind a firewall. Every FTP client I've tried has the same problem. The download works from other systems, even from other AWS instances I have with the same security group (not necessarily the same distro or config, though). I understand it may be some issue on the server side, but I want to know what it is about my particular machine that makes the transfer hang, when it works on every other machine I can get my hands on. Please let me know what the culprit on the client side could be, or ideas on what else to look at.


  • Copy-paste speed very slow for a large number of files on Windows [closed]

    - by Arno2501
    I've run the following test. I created a folder containing 15,000 files of 400 bytes each using this batch file:

        @ECHO off
        SET times=15000
        FOR /L %%i IN (1,1,%times%) DO (
            fsutil file createnew filename%%i.txt 400
        )

    Then I copy-paste it on my Windows computer using this command:

        robocopy LargeNumberOfFiles\ LargeNumberOfFiles2\

    After it has completed, I can see that the transfer rate was 915810 bytes/sec, which is less than 1 MB/s. It took several seconds to copy 7 MBytes. Please note that this is very slow. I've tried the same with a folder containing a single file of 50 MBytes and the transfer rate was 1219512195 bytes/sec (yes, GB/s) - instantaneous. Why does copying a large number of files take so much time and so many resources on a Windows filesystem? Note that I've tried the same thing on a Linux system running on the same computer in a virtual machine (VMware Player) with an ext3 filesystem; using the cp command, the copy is instantaneous! Please also note the following:

    - no antivirus
    - I've tested that behaviour on multiple Windows computers (always NTFS) and I always get comparable results (transfer rate under 1 MB/s, on average 7-8 seconds to copy 7 MBytes)
    - I've tested on multiple Linux ext3 systems and the copy is always instantaneous for that amount (15,000 files of 400 bytes)

    The question is about understanding what makes a Windows filesystem so slow at copying a large number of files compared to a Linux one, for instance.


  • Using sed to introduce a newline after each > in a 1+ gigabyte one-line text file

    - by wasatz
    I have a giant text file (about 1.5 gigabytes) with XML data in it. All text in the file is on a single line, and attempting to open it in any text editor (even the ones mentioned in this thread: http://stackoverflow.com/questions/159521/text-editor-to-open-big-giant-huge-large-text-files) either fails horribly or is totally unusable because the editor hangs when attempting to scroll. I was hoping to introduce newlines into the file by using the following sed command:

        sed 's/>/>\n/g' data.xml > data_with_newlines.xml

    Sadly, this caused sed to give me a segmentation fault. From what I understand, sed reads the file line by line, which in this case means it attempts to read the entire 1.5 GB file as one line, which would most certainly explain the segfault. However, the problem remains: how do I introduce newlines after each > in the XML file? Do I have to resort to writing a small program that does this for me by reading the file character by character?
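    A minimal Python sketch of the "small program" route, streaming the file in fixed-size chunks so the 1.5 GB line never has to fit in memory as a single string; since ">" is a single character, a chunk boundary can never split it:

        CHUNK = 1 << 20  # 1 MiB per read; an arbitrary choice

        with open("data.xml", "r", encoding="utf-8") as src, \
             open("data_with_newlines.xml", "w", encoding="utf-8") as dst:
            while True:
                chunk = src.read(CHUNK)
                if not chunk:
                    break
                dst.write(chunk.replace(">", ">\n"))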


  • Binary search in a sorted (memory-mapped ?) file in Java

    - by sds
    I am struggling to port a Perl program to Java, and learning Java as I go. A central component of the original program is a Perl module that does string prefix lookups in a +500 GB sorted text file using binary search (essentially, "seek" to a byte offset in the middle of the file, backtrack to nearest newline, compare line prefix with the search string, "seek" to half/double that byte offset, repeat until found...) I have experimented with several database solutions but found that nothing beats this in sheer lookup speed with data sets of this size. Do you know of any existing Java library that implements such functionality? Failing that, could you point me to some idiomatic example code that does random access reads in text files? Alternatively, I am not familiar with the new (?) Java I/O libraries but would it be an option to memory-map the 500 GB text file (I'm on a 64-bit machine with memory to spare) and do binary search on the memory-mapped byte array? I would be very interested to hear any experiences you have to share about this and similar problems.
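    A minimal Java sketch of the seek-and-bisect idea described above, assuming the file is sorted by line and stored in a single-byte-per-character encoding (RandomAccessFile.readLine() does a byte-to-char conversion). It bisects on byte offsets to shrink the window, then finishes with a short forward scan:

        import java.io.IOException;
        import java.io.RandomAccessFile;

        public class SortedFilePrefixSearch {

            // First complete line that starts strictly after 'pos', or null at EOF.
            private static String lineAfter(RandomAccessFile f, long pos) throws IOException {
                f.seek(pos);
                f.readLine();              // discard the (possibly partial) line we landed in
                return f.readLine();
            }

            public static String findByPrefix(RandomAccessFile f, String prefix) throws IOException {
                long lo = 0, hi = f.length();
                // Shrink the window: whenever the first line after 'mid' still sorts
                // before 'prefix', the match (if any) must lie further right.
                while (hi - lo > 1) {
                    long mid = lo + (hi - lo) / 2;
                    String line = lineAfter(f, mid);
                    if (line != null && line.compareTo(prefix) < 0) lo = mid; else hi = mid;
                }
                // The window is now tiny; finish with a short forward scan from 'lo'.
                f.seek(lo);
                if (lo > 0) f.readLine();  // drop the partial line at the seek position
                for (String line = f.readLine(); line != null; line = f.readLine()) {
                    if (line.startsWith(prefix)) return line;
                    if (line.compareTo(prefix) > 0) return null; // passed the insertion point
                }
                return null;
            }
        }

    Usage would be something like findByPrefix(new RandomAccessFile("huge_sorted.txt", "r"), "someprefix"); the same bisection could be done over a MappedByteBuffer, but a 500 GB file would need several mappings since each buffer is limited to 2 GB.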


  • RewriteRule and php download counter

    - by rcourtna
    (1) I have a site that serves up MP3 files: http://domain/files/1234567890.mp3

    (2) I have a PHP script that tracks file download counts: http://domain/modules/download_counter.php?file=/files/1234567890.mp3
    After download_counter.php records the download, it redirects to the original file: Header("Location: $FQDN_url");

    (3) I'd like all my public links to be presented as the direct file URLs from (1). I'm trying to use Apache to redirect the requests to download_counter.php:

        RewriteRule ^files/(.+\.mp3)$ /modules/download_counter.php?file=/files/$1 [L]

    I'm currently stuck on (3), as it results in a redirect loop, since download_counter.php simply redirects the request back to the original file (rather than streaming the file contents). I'd also like to use download_counter.php as-is (without modifying its redirect behaviour), because the script is part of a larger CMS module and I'd like to avoid complicating my upgrade path. Perhaps there is no solution to my problem (other than modifying the download_counter script). WDYT?


  • Find a specific couple of lines of code from large git repo

    - by mustISignUp
    So I remember that I once did something in another project (and later removed it) that could be useful now. Thanks to some other SO post I managed to search for a half-remembered string:

        git grep halfRemeberedNameOfFunction $(git log -g --pretty=format:%h)

    and yay! I got some results:

        2d0bcde:path/to/project/file.c: result = halfRemeberedNameOfFunction( data );
        65fc672:path/to/project/file.c: result = halfRemeberedNameOfFunction( data );
        24f2858:path/to/project/file.c: result = halfRemeberedNameOfFunction( data );
        252e3a5:path/to/project/file.c: result = halfRemeberedNameOfFunction( data, args );
        b58bc0b:path/to/project/file.c: result = _halfRemeberedNameOfFunction( data, options );
        dce8d9d:path/to/project/file.c: result = halfRemeberedNameOfFunction( data, moreData );

    But how do I get that file at one of those revisions? Many thanks.
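    A likely next step, using a hash and path taken straight from the output above: git show prints a file as it existed at a given commit, and the output can be redirected to a new file:

        git show 2d0bcde:path/to/project/file.c
        git show 2d0bcde:path/to/project/file.c > file_at_2d0bcde.c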


  • File path for J2ME FileConnection?

    - by Kilnr
    Hi, I'm writing a MIDlet which needs to write a file. I'm using FileConnection from JSR-75 to accomplish this. The intention is to have this MIDlet running on as many devices as possible (all MIDP 2.0 devices with JSR-75 support, ideally). On several emulators and an HTC Touch Pro2, I can use the following code to get the root of the filesystem:

        Enumeration drives = FileSystemRegistry.listRoots();
        String root = (String) drives.nextElement();
        String path = "file:///" + root;

    However, on a Nokia S60 5th edition emulator, trying to open a FileConnection to this path throws a java.lang.SecurityException. Apparently S60 devices do not allow connections to the root of the filesystem. I realise I can use something like System.getProperty("fileconn.dir.photos"), but that isn't supported on all devices either. So, my actual question: what is the best approach to get a path to create a FileConnection with, that allows for maximum portability? Thanks.

    Edit: I suppose I could iterate over all the roots in the Enumeration and check for a writable one, but that's hardly optimal for two reasons. First, there aren't necessarily any writable roots. Second, the writable root could be either the phone memory or a memory card, so the storage location wouldn't be consistent across devices, which is rather ugly.
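    For what it's worth, a minimal sketch of the iterate-the-roots fallback mentioned in the edit, falling back to the photos directory property when no root is writable - purely illustrative, with the class name and fallback choice being assumptions:

        import java.util.Enumeration;
        import javax.microedition.io.Connector;
        import javax.microedition.io.file.FileConnection;
        import javax.microedition.io.file.FileSystemRegistry;

        public class WritableRootFinder {
            public static String findWritablePath() {
                Enumeration roots = FileSystemRegistry.listRoots();
                while (roots.hasMoreElements()) {
                    String url = "file:///" + (String) roots.nextElement();
                    try {
                        FileConnection fc = (FileConnection) Connector.open(url, Connector.READ_WRITE);
                        boolean writable = fc.canWrite();
                        fc.close();
                        if (writable) {
                            return url;
                        }
                    } catch (Exception e) {
                        // SecurityException or IOException: this root is not usable, try the next
                    }
                }
                return System.getProperty("fileconn.dir.photos"); // may itself be null on some devices
            }
        }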

