Search Results

Search found 546 results on 22 pages for 'dat chu'.

  • Optimize grep, awk and sed shell stuff

    - by kockiren
    I am trying to sum the traffic on different ports in the logfiles from IPCop, so I wrote a command for my shell, but I think it is possible to optimize it. First, a line from my logfile:

        01/00:03:16 kernel INPUT IN=eth1 OUT= MAC=xxx SRC=xxx DST=xxx LEN=40 TOS=0x00 PREC=0x00 TTL=98 ID=256 PROTO=TCP SPT=47438 DPT=1433 WINDOW=16384 RES=0x00 SYN URGP=0

    Now I sum the lengths of all lines that contain port 1433 with the following command:

        grep 1433 log.dat | awk '{for(i=1;i<=10;i++)if($i ~ /LEN/)print $i};' | sed 's/LEN=//g;' | awk '{sum+=$1}END{print sum}'

    I need the for loop because the LEN column is not always in the same position. Any suggestions for optimizing this command? Regards, Rene
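    The whole pipeline can collapse into a single awk invocation, avoiding the extra grep and sed processes; a minimal sketch that matches lines the same way the original grep 1433 does:

        awk '/1433/ { for (i = 1; i <= NF; i++)
                          if ($i ~ /^LEN=/) { sub(/LEN=/, "", $i); sum += $i } }
             END { print sum }' log.dat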


  • User controls do not properly fit on the screen

    - by Royson
    Hi, my application has several controls. For example, one screen has a TreeView on the left side, a GridView with paging in the middle, and 4 buttons on the right side. The controls appear properly when the form is maximized, but if I minimize it the controls no longer fit properly on the screen. I tried different tricks, such as a table layout in which I added a panel, etc., but I could not solve the problem. How can I create screens like this that fit regardless of the size of my window? Thanks
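    A minimal WinForms sketch of the usual approach, assuming a form with the three areas described; the control and panel names are illustrative. A TableLayoutPanel with percentage-sized columns, plus Dock = Fill on the children, resizes with the form:

        // In the form's constructor, after InitializeComponent()
        var layout = new TableLayoutPanel();
        layout.Dock = DockStyle.Fill;
        layout.ColumnCount = 3;
        layout.ColumnStyles.Add(new ColumnStyle(SizeType.Percent, 25F)); // tree
        layout.ColumnStyles.Add(new ColumnStyle(SizeType.Percent, 55F)); // grid
        layout.ColumnStyles.Add(new ColumnStyle(SizeType.Percent, 20F)); // buttons

        treeView1.Dock = DockStyle.Fill;
        dataGridView1.Dock = DockStyle.Fill;
        buttonPanel.Dock = DockStyle.Fill;   // a FlowLayoutPanel holding the 4 buttons

        layout.Controls.Add(treeView1, 0, 0);
        layout.Controls.Add(dataGridView1, 1, 0);
        layout.Controls.Add(buttonPanel, 2, 0);
        this.Controls.Add(layout);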

  • How to tell the database type checking the file

    - by Click Ok
    My friend has a system to manage customers. The program itself is terrible, and my friend lost contact with the developers. The program is locked to the machine (so the developers say), so when it was moved to another PC he lost access to the program and its data. My mission is to try to recover the database, migrate it to another database, and write a better program for my friend. Now I need to discover which database was used by the developers. I know the program was made in Visual Basic, because MSVBVM60.DLL is required. Is there a program that reads the metadata in the .dat files and discovers which database was used?
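    Many database engines put a recognizable signature in the first bytes of their files (for example, Access/Jet files contain the string "Standard Jet DB" near the start). A minimal C# sketch to dump a file header for inspection; the file name is illustrative:

        using System;
        using System.IO;

        class HeaderDump
        {
            static void Main()
            {
                byte[] header = new byte[32];
                using (FileStream fs = File.OpenRead("customers.dat"))
                    fs.Read(header, 0, header.Length);

                // Print hex and ASCII so a text signature is easy to spot
                Console.WriteLine(BitConverter.ToString(header));
                foreach (byte b in header)
                    Console.Write(b >= 32 && b < 127 ? (char)b : '.');
            }
        }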

  • Python character count

    - by user74283
    I have been going over Python tutorials in this resource. Everything is pretty clear in the code below, which counts the number of characters. The only section I don't understand is the one where a list is assigned to counts and multiplied by 128. Can anyone explain the purpose of this in plain English please?

        def display(i):
            if i == 10: return 'LF'
            if i == 13: return 'CR'
            if i == 32: return 'SPACE'
            return chr(i)

        infile = open('alice_in_wonderland.txt', 'r')
        text = infile.read()
        infile.close()

        counts = 128 * [0]
        for letter in text:
            counts[ord(letter)] += 1

        outfile = open('alice_counts.dat', 'w')
        outfile.write("%-12s%s\n" % ("Character", "Count"))
        outfile.write("=================\n")
        for i in range(len(counts)):
            if counts[i]:
                outfile.write("%-12s%d\n" % (display(i), counts[i]))
        outfile.close()
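    For reference: multiplying a list by an integer repeats it, so 128 * [0] builds a list of 128 zeros, one counter per ASCII code; counts[ord(letter)] += 1 then increments the slot for that character. A quick interactive check with a smaller number:

        >>> counts = 5 * [0]
        >>> counts
        [0, 0, 0, 0, 0]
        >>> ord('A')
        65
        >>> # with 128 slots, counts[65] holds the number of 'A's seen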

  • HTTP Data chunks over multiple packets?

    - by myforwik
    What is the correct way for an HTTP server to send data over multiple packets? For example, I want to transfer a file; the first packet I send is:

        HTTP/1.1 200 OK
        Content-type: application/force-download
        Content-Type: application/download
        Content-Type: application/octet-stream
        Content-Description: File Transfer
        Content-disposition: attachment; filename=test.dat
        Content-Transfer-Encoding: chunked

        400
        <first 1024 bytes here>
        400
        <next 1024 bytes here>
        400
        <next 1024 bytes here>

    Now I need to make a new packet, but if I just send:

        400
        <next 1024 bytes here>

    all the clients close their connections on me and the files are cut short. What headers do I put in a second packet to continue the data stream?
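    For reference, HTTP/1.1 chunked framing is a property of the message body, not of packets, so no extra headers are needed mid-stream: the header must be Transfer-Encoding: chunked (not Content-Transfer-Encoding), each chunk is a hex size line followed by exactly that many bytes, both terminated by CRLF (shown as \r\n below), and the body ends with a zero-size chunk. A sketch of a correctly framed response:

        HTTP/1.1 200 OK
        Content-Type: application/octet-stream
        Content-Disposition: attachment; filename=test.dat
        Transfer-Encoding: chunked

        400\r\n
        <1024 bytes of data>\r\n
        400\r\n
        <1024 bytes of data>\r\n
        0\r\n
        \r\n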

  • Android: Displaying Video on VideoView

    - by AndroidDev93
    I'm trying to display a video from my sdcard in a VideoView. Here is my code:

        String name = Environment.getExternalStorageDirectory() + "/test.mp4";
        final VideoView videoView = (VideoView) findViewById(R.id.videoView1);
        videoView.setOnPreparedListener(new MediaPlayer.OnPreparedListener() {
            public void onPrepared(MediaPlayer arg0) {
                videoView.start();
            }
        });
        videoView.setVideoPath(name);

    The file I am trying to open is called test.mp4 and it is located in the sdcard folder. I get an error saying the application has unfortunately stopped. I would appreciate it if someone could help me. Thanks.

    EDIT: I used the debugger and found out that I get an InvocationTargetException. The detailed message says:

        Failure delivering result ResultInfo{who=null, request=1001, result=-1, data=Intent { dat=file:///mnt/sdcard/test.mp4 }} to activity : java.lang.NullPointerException

    EDIT: I looked at the logcat again and it seems to give the error at

        videoView.setOnPreparedListener(new MediaPlayer.OnPreparedListener() {

    I'm guessing either videoView or MediaPlayer is null.
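    A NullPointerException on that line usually means findViewById returned null, which happens when setContentView has not been called yet or names a layout that does not contain videoView1. A minimal sketch of the expected order, assuming the layout file is activity_main.xml (an illustrative name):

        @Override
        protected void onCreate(Bundle savedInstanceState) {
            super.onCreate(savedInstanceState);
            setContentView(R.layout.activity_main); // must come before findViewById

            final VideoView videoView = (VideoView) findViewById(R.id.videoView1);
            if (videoView == null) {
                Log.e("VideoDemo", "videoView1 not found in activity_main.xml");
                return;
            }
            videoView.setVideoPath(
                    Environment.getExternalStorageDirectory() + "/test.mp4");
            videoView.setOnPreparedListener(new MediaPlayer.OnPreparedListener() {
                public void onPrepared(MediaPlayer mp) {
                    videoView.start();
                }
            });
        }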

  • Best way to provide software settings?

    - by claws
    I'm using C# .NET. My software provides a settings dialog through which the user can set application settings, which I want to save to a file. Requirements (typical):

    - Every class I define uses some part of these settings, so they should be global to all classes.
    - They should be loaded when the software starts.
    - Whenever the user changes settings and clicks 'save'/'apply', the current settings should change.

    I am wondering what the best way to do this is. Also, what is the best way to save these settings to disk? Should I create a Settings class object and serialize it to 'settings.dat', or use a structured file like XML/JSON?
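    One common pattern is a lazily loaded singleton serialized with XmlSerializer; a minimal sketch (class name, properties, and file name are all illustrative):

        using System.IO;
        using System.Xml.Serialization;

        public class AppSettings
        {
            public string Server { get; set; }   // example settings
            public int Timeout { get; set; }

            private const string Path = "settings.xml";
            private static AppSettings current;

            // Global access point: AppSettings.Current.Timeout, etc.
            public static AppSettings Current
            {
                get { return current ?? (current = Load()); }
            }

            public static AppSettings Load()
            {
                if (!File.Exists(Path)) return new AppSettings();
                using (var fs = File.OpenRead(Path))
                    return (AppSettings)new XmlSerializer(typeof(AppSettings)).Deserialize(fs);
            }

            public void Save()   // call from the dialog's Save/Apply handler
            {
                using (var fs = File.Create(Path))
                    new XmlSerializer(typeof(AppSettings)).Serialize(fs, this);
            }
        }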

  • Exception in NHunspell

    - by Nikhil K
    I have used the following code for spell checking. While running it I am getting an exception. The code is:

        using (Hunspell hunspell = new Hunspell("en_us.aff", "en_us.dic"))
        {
            bool correct = hunspell.Spell("Recommendation");
            var suggestions = hunspell.Suggest("Recommendatio");
            foreach (string suggestion in suggestions)
            {
                Console.WriteLine("Suggestion is: " + suggestion);
            }
        }

        // Hyphen
        using (Hyphen hyphen = new Hyphen("hyph_en_us.dic"))
        {
            var hyphenated = hyphen.Hyphenate("Recommendation");
        }

        using (MyThes thes = new MyThes("th_en_us_new.idx", "th_en_us_new.dat"))
        {
            using (Hunspell hunspell = new Hunspell("en_us.aff", "en_us.dic"))
            {
                ThesResult tr = thes.Lookup("cars", hunspell);
                foreach (ThesMeaning meaning in tr.Meanings)
                {
                    Console.WriteLine("  Meaning: " + meaning.Description);
                    foreach (string synonym in meaning.Synonyms)
                    {
                        Console.WriteLine("    Synonym: " + synonym);
                    }
                }
            }
        }

  • Export GridView to TXT, then upload file to server

    Basically what I want to do is export an array (or GridView) to a file called "getpathin.dat". That is easy, but the method I am using downloads the file to my computer, which is not what I want. I want to write the array either to a pre-existing file on the server or to a new file created in a folder on the server, and this file will contain either the array or the GridView (which I have stored in the array). I'll be doing this in Visual Studio 2008/SQL Server using C#.
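    In ASP.NET, writing to a path under the site (rather than to the HTTP response, which is what triggers a download) keeps the file on the server. A minimal sketch, assuming the array holds strings and an App_Data folder exists:

        using System.IO;

        // In the page's code-behind; 'rows' holds the exported GridView data
        string[] rows = { "line 1", "line 2" };            // illustrative data
        string path = Server.MapPath("~/App_Data/getpathin.dat");
        File.WriteAllLines(path, rows);                    // creates or overwrites

    The account the site runs under needs write permission on the target folder.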

  • The property 'IsDataSource' was not found in type 'ViewModelLocator

    - by dieter-preconsult-be
    Hello, I have the following code:

        <UserControl x:Class="TestApp.View.ViewAlarmLog"
            xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
            xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
            xmlns:d="http://schemas.microsoft.com/expression/blend/2008"
            xmlns:Custom="http://schemas.microsoft.com/wpf/2008/toolkit"
            xmlns:mvvm="clr-namespace:Test.ViewModel">
            <UserControl.Resources>
                <ResourceDictionary>
                    <ResourceDictionary.MergedDictionaries>
                    </ResourceDictionary.MergedDictionaries>
                    <mvvm:ViewModelLocator x:Key="Locator" d:IsDataSource="True"/>
                </ResourceDictionary>
            </UserControl.Resources>

    The problem is that I always get an error: The property 'IsDataSource' was not found in type 'ViewModelLocator'. What could be the problem here? Regards, Dieter
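    d:IsDataSource is a Blend design-time attribute; it is only ignored at runtime when the markup-compatibility namespace is declared and the d prefix is marked ignorable, which the snippet above lacks. A sketch of the usual declarations (only the relevant attributes shown):

        <UserControl x:Class="TestApp.View.ViewAlarmLog"
            xmlns:d="http://schemas.microsoft.com/expression/blend/2008"
            xmlns:mc="http://schemas.openxmlformats.org/markup-compatibility/2006"
            mc:Ignorable="d">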

  • I need data structure for effective handling with dates

    - by ante.sabo
    What I need is something like a Hashtable which I will fill with the prices that were in effect on given days. For example, I will enter two prices: January 1st: 100 USD, March 5th: 89 USD. If I query it for a price with hashtable.get(February 14th), I need it to give me back the price entered on January 1st, because that is the last price in effect. A normal hashtable implementation won't give me back anything, since nothing was put in on that date. Is there an implementation that can quickly find an object based on a range of dates?
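    java.util.TreeMap does exactly this via floorEntry, which returns the entry with the greatest key less than or equal to the lookup key. A minimal sketch:

        import java.time.LocalDate;
        import java.util.TreeMap;

        public class PriceLookup {
            public static void main(String[] args) {
                TreeMap<LocalDate, Integer> prices = new TreeMap<>();
                prices.put(LocalDate.of(2010, 1, 1), 100); // 100 USD from Jan 1st
                prices.put(LocalDate.of(2010, 3, 5), 89);  // 89 USD from Mar 5th

                // floorEntry: greatest key <= the argument, so Feb 14 maps to Jan 1
                int price = prices.floorEntry(LocalDate.of(2010, 2, 14)).getValue();
                System.out.println(price); // prints 100
            }
        }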

  • Retrieving Windows Mobile browser history

    - by kurige
    How can I retrieve a list of urls a user has visited on a Windows Mobile phone? I've written a program that successfully retrieves the visited urls in a user's cache, using FindFirstUrlCacheEntry and FindNextUrlCacheEntry - but as I understand it this is not the same as the user's actual web history. In any case it does not seem to give correct results. Edit: I believe the file I'm looking for is index.dat. But it's certainly not in the same place it is on a desktop machine, if it exists at all. And I'm not sure how to parse it. Any experience in this area would be greatly appreciated.

  • MySQL import in phpmyadmin (CSV) chokes on quotes

    - by Andrew Swift
    I am trying to import a .csv file into a MySQL table via phpMyAdmin. The .csv file is separated by pipes, formatted like this:

        data|d'ata|d'a"ta|dat"a|
        data|"da"ta|data|da't'a|
        dat'a|data|da"ta"|da'ta|

    The data contains quotes. I have no control over the format in which I receive the data; it is generated by a third party. The problem comes when there is a | followed by a double quote: I always get an "invalid field count in CSV input on line N" error. I am uploading the file from the import page, using Latin1, CSV, fields terminated by |, enclosed by ". I would like to just change the "enclosed by" character, but I keep getting "Invalid parameter for CSV import: Fields enclosed by". I have tried various characters with no success. How can I tell MySQL to accept this format in phpMyAdmin? Setting up these tables is the first step in writing a program that will use uploaded gzipped .csv files to maintain the catalog of an e-commerce site.
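    If phpMyAdmin's import options refuse an empty "enclosed by" value, a plain LOAD DATA statement (runnable from phpMyAdmin's SQL tab, provided the server allows LOCAL) skips enclosure handling entirely, so quotes are treated as ordinary data. A sketch with illustrative table and file names:

        LOAD DATA LOCAL INFILE 'catalog.csv'
        INTO TABLE catalog
        FIELDS TERMINATED BY '|'
        LINES TERMINATED BY '\n';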

  • Writing booleans to file

    - by Sara
    Hello, I have a piece of code that gives a runtime error. Can anyone help find out why?

        vector<int> intData;
        vector<bool> boolData;

        for (int i = 0; i < 19000; i++)
            boolData.push_back(false);

        string ofile = "tree.dat";
        ofstream fout(ofile.c_str(), ios::out | ios::binary);
        if (!boolData.empty())
            fout.write((char *)&boolData[0], sizeof(bool) * boolData.size());
        fout.close();

    It gives the error when it tries to write the file (fout.write).
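    The likely culprit: std::vector<bool> is a bit-packed specialization, so &boolData[0] does not point to a contiguous array of bool, and writing sizeof(bool) * size() bytes through it reads memory that isn't there. A sketch of a fix using vector<char> (one byte per element) instead:

        #include <fstream>
        #include <vector>
        using namespace std;

        int main()
        {
            // vector<char> stores one byte per element, so its buffer
            // can be written directly, unlike the vector<bool> specialization
            vector<char> boolData(19000, 0); // 0 = false, 1 = true

            ofstream fout("tree.dat", ios::out | ios::binary);
            if (!boolData.empty())
                fout.write(&boolData[0], boolData.size());
            fout.close();
        }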

  • File Encrypt/Decrypt under load?

    - by chopps
    I found an interesting article about encrypting and decrypting files, but since it uses a file.dat to store the key, this will run into problems when there are a lot of users on the site dealing with a lot of files. http://www.codeproject.com/KB/security/VernamEncryption.aspx?display=Print Should a new file be created every time a file needs decrypting, or would there be a better way to do this?

    UPDATE: Here is what I'm using to avoid the locking problems:

        using (Mutex FileLock = new Mutex(true, System.Guid.NewGuid().ToString()))
        {
            try
            {
                FileLock.WaitOne();
                using (FileStream fs = new FileStream(keyFile, FileMode.Open))
                {
                    keyBytes = new byte[fs.Length];
                    fs.Read(keyBytes, 0, keyBytes.Length);
                }
            }
            catch (Exception ex)
            {
                EventLog.LogEvent(ex);
            }
            finally
            {
                FileLock.ReleaseMutex();
            }
        }

    I tested it on 1000 TIFFs doing both encryption and decryption without any errors.
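    One caveat worth flagging: a mutex named with a freshly generated Guid is never shared, because every caller creates its own distinct mutex and acquires it immediately, so the code above does not actually serialize access. A sketch with a fixed name so all threads and processes contend on the same lock (the name is illustrative):

        using (Mutex fileLock = new Mutex(false, @"Global\KeyFileLock")) // fixed, shared name
        {
            fileLock.WaitOne();
            try
            {
                using (FileStream fs = new FileStream(keyFile, FileMode.Open,
                                                      FileAccess.Read, FileShare.Read))
                {
                    keyBytes = new byte[fs.Length];
                    fs.Read(keyBytes, 0, keyBytes.Length);
                }
            }
            finally
            {
                fileLock.ReleaseMutex();
            }
        }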

  • Why is Varnish not caching?

    - by Justin
    I am troubleshooting the setup of Varnish 3.x on my Ubuntu server. I'm running Drupal 7 on two sites set up on the box, via name-based vhosts. Before trying to get Varnish to play nice with Drupal, I'm trying to just get Varnish to serve a PNG from cache. Here are the headers I get from a curl -I request of the PNG file:

        HTTP/1.1 200 OK
        Server: Apache/2.2.22 (Ubuntu)
        Last-Modified: Sun, 07 Oct 2012 21:18:59 GMT
        ETag: "a57c2-3850-4cb7ea73db6c0"
        Accept-Ranges: bytes
        Content-Length: 14416
        Cache-Control: max-age=1209600
        Expires: Thu, 25 Oct 2012 22:55:14 GMT
        Content-Type: image/png
        Accept-Ranges: bytes
        Date: Thu, 11 Oct 2012 22:55:14 GMT
        X-Varnish: 1766703058
        Age: 0
        Via: 1.1 varnish
        Connection: keep-alive
        X-Varnish-Cache: MISS

    Here is the Varnish VCL file I'm using (it's a default VCL configuration designed for Drupal):

        # Default backend definition. Set this to point to your content
        # server.
        #
        backend default {
          .host = "127.0.0.1";
          .port = "8080";
        }

        # Respond to incoming requests.
        sub vcl_recv {
          # Use anonymous, cached pages if all backends are down.
          if (!req.backend.healthy) {
            unset req.http.Cookie;
          }

          # Allow the backend to serve up stale content if it is responding slowly.
          set req.grace = 6h;

          # Pipe these paths directly to Apache for streaming.
          #if (req.url ~ "^/admin/content/backup_migrate/export") {
          #  return (pipe);
          #}

          # Do not cache these paths.
          if (req.url ~ "^/status\.php$" ||
              req.url ~ "^/update\.php$" ||
              req.url ~ "^/admin$" ||
              req.url ~ "^/admin/.*$" ||
              req.url ~ "^/flag/.*$" ||
              req.url ~ "^.*/ajax/.*$" ||
              req.url ~ "^.*/ahah/.*$") {
            return (pass);
          }

          # Do not allow outside access to cron.php or install.php.
          #if (req.url ~ "^/(cron|install)\.php$" && !client.ip ~ internal) {
            # Have Varnish throw the error directly.
          #  error 404 "Page not found.";
            # Use a custom error page that you've defined in Drupal at the path "404".
          #  set req.url = "/404";
          #}

          # Always cache the following file types for all users. This list of extensions
          # appears twice, once here and again in vcl_fetch so make sure you edit both
          # and keep them equal.
          if (req.url ~ "(?i)\.(pdf|asc|dat|txt|doc|xls|ppt|tgz|csv|png|gif|jpeg|jpg|ico|swf|css|js)(\?.*)?$") {
            unset req.http.Cookie;
          }

          # Remove all cookies that Drupal doesn't need to know about. We explicitly
          # list the ones that Drupal does need, the SESS and NO_CACHE. If, after
          # running this code we find that either of these two cookies remains, we
          # will pass as the page cannot be cached.
          if (req.http.Cookie) {
            # 1. Append a semi-colon to the front of the cookie string.
            # 2. Remove all spaces that appear after semi-colons.
            # 3. Match the cookies we want to keep, adding the space we removed
            #    previously back. (\1) is first matching group in the regsuball.
            # 4. Remove all other cookies, identifying them by the fact that they have
            #    no space after the preceding semi-colon.
            # 5. Remove all spaces and semi-colons from the beginning and end of the
            #    cookie string.
            set req.http.Cookie = ";" + req.http.Cookie;
            set req.http.Cookie = regsuball(req.http.Cookie, "; +", ";");
            set req.http.Cookie = regsuball(req.http.Cookie, ";(SESS[a-z0-9]+|SSESS[a-z0-9]+|NO_CACHE)=", "; \1=");
            set req.http.Cookie = regsuball(req.http.Cookie, ";[^ ][^;]*", "");
            set req.http.Cookie = regsuball(req.http.Cookie, "^[; ]+|[; ]+$", "");

            if (req.http.Cookie == "") {
              # If there are no remaining cookies, remove the cookie header. If there
              # aren't any cookie headers, Varnish's default behavior will be to cache
              # the page.
              unset req.http.Cookie;
            }
            else {
              # If there is any cookies left (a session or NO_CACHE cookie), do not
              # cache the page. Pass it on to Apache directly.
              return (pass);
            }
          }
        }

        # Set a header to track a cache HIT/MISS.
        sub vcl_deliver {
          if (obj.hits > 0) {
            set resp.http.X-Varnish-Cache = "HIT";
          }
          else {
            set resp.http.X-Varnish-Cache = "MISS";
          }
        }

        # Code determining what to do when serving items from the Apache servers.
        # beresp == Back-end response from the web server.
        sub vcl_fetch {
          # We need this to cache 404s, 301s, 500s. Otherwise, depending on backend but
          # definitely in Drupal's case these responses are not cacheable by default.
          if (beresp.status == 404 || beresp.status == 301 || beresp.status == 500) {
            set beresp.ttl = 10m;
          }

          # Don't allow static files to set cookies.
          # (?i) denotes case insensitive in PCRE (perl compatible regular expressions).
          # This list of extensions appears twice, once here and again in vcl_recv so
          # make sure you edit both and keep them equal.
          if (req.url ~ "(?i)\.(pdf|asc|dat|txt|doc|xls|ppt|tgz|csv|png|gif|jpeg|jpg|ico|swf|css|js)(\?.*)?$") {
            unset beresp.http.set-cookie;
          }

          # Allow items to be stale if needed.
          set beresp.grace = 6h;
        }

        # In the event of an error, show friendlier messages.
        sub vcl_error {
          # Redirect to some other URL in the case of a homepage failure.
          #if (req.url ~ "^/?$") {
          #  set obj.status = 302;
          #  set obj.http.Location = "http://backup.example.com/";
          #}

          # Otherwise redirect to the homepage, which will likely be in the cache.
          set obj.http.Content-Type = "text/html; charset=utf-8";
          synthetic {"
        <html>
        <head>
          <title>Page Unavailable</title>
          <style>
            body { background: #303030; text-align: center; color: white; }
            #page { border: 1px solid #CCC; width: 500px; margin: 100px auto 0; padding: 30px; background: #323232; }
            a, a:link, a:visited { color: #CCC; }
            .error { color: #222; }
          </style>
        </head>
        <body onload="setTimeout(function() { window.location = '/' }, 5000)">
          <div id="page">
            <h1 class="title">Page Unavailable</h1>
            <p>The page you requested is temporarily unavailable.</p>
            <p>We're redirecting you to the <a href="/">homepage</a> in 5 seconds.</p>
            <div class="error">(Error "} + obj.status + " " + obj.response + {")</div>
          </div>
        </body>
        </html>
        "};
          return (deliver);
        }

    I'm getting a MISS and Age: 0 every time. If I'm understanding correctly, this means the file isn't being returned from Varnish's cache. Is there a problem with my Varnish config?
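    One thing worth checking before digging into the VCL: the very first request for any object is necessarily a MISS, so the real test is whether a second request for the same URL returns HIT with a growing Age. A quick check (the URL is illustrative):

        # first request warms the cache; the second should show HIT and Age > 0
        curl -sI http://site1.example/test.png | egrep 'X-Varnish-Cache|Age'
        curl -sI http://site1.example/test.png | egrep 'X-Varnish-Cache|Age'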

  • Adobe Air: Read and Write MP3 or JPG from local directory and switch bytes

    - by Max
    I would like to make my local jpg and mp3 files kind of unreadable (without encoding) by just moving the first 100 bytes of the file to the end and then saving the file with a .dat extension. I know I have to use a byte array, but I can't get it to work. I would also need a small function to read that file and put the 100 bytes back at the front so that I can play/display the file. It would be great if you could post the whole function, because I am quite new to Air, so that I fully understand. Thank you!
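    A minimal AIR sketch of the scramble step, assuming the source file is photo.jpg in the documents directory (names are illustrative); the unscramble step is the same with head and rest written back in the opposite order:

        import flash.filesystem.File;
        import flash.filesystem.FileMode;
        import flash.filesystem.FileStream;
        import flash.utils.ByteArray;

        var src:File = File.documentsDirectory.resolvePath("photo.jpg");
        var bytes:ByteArray = new ByteArray();
        var fs:FileStream = new FileStream();
        fs.open(src, FileMode.READ);
        fs.readBytes(bytes);                  // read the whole file
        fs.close();

        var head:ByteArray = new ByteArray(); // first 100 bytes
        var rest:ByteArray = new ByteArray(); // everything after them
        bytes.position = 0;
        bytes.readBytes(head, 0, 100);
        bytes.readBytes(rest);

        var dst:File = File.documentsDirectory.resolvePath("photo.dat");
        fs.open(dst, FileMode.WRITE);
        fs.writeBytes(rest);                  // body first ...
        fs.writeBytes(head);                  // ... first 100 bytes moved to the end
        fs.close();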

  • PowerShell copy fails without warning

    - by boink
    Howdy, I am trying to copy a file from the IE cache to somewhere else. This works on Windows 7, but not on Vista Ultimate. In short:

        copy-item $f -Destination "$targetDir" -force

    (I also tried $f.FullName.) The full script:

        $targetDir = "C:\temp"
        $ieCache = (get-itemproperty "hkcu:\Software\Microsoft\Windows\CurrentVersion\Explorer\User Shell Folders").cache
        $minSize = 5mb
        Write-Host "minSize:" $minSize
        Add-Content -Encoding Unicode -Path $targetDir"\log.txt" -Value (get-Date)
        Set-Location $ieCache
        #\Low\Content.IE5 for protected mode
        #\content.ie5 for unprotected
        $a = Get-Location
        foreach ($f in (get-childitem -Recurse -Force -Exclude *.dat, *.tmp | where {$_.length -gt $minSize})) {
            Write-Host (get-Date) $f.Name $f.length
            Add-Content -Encoding Unicode -Path $targetDir"\log.txt" -Value $f.name, $f.length
            copy-item $f -Destination "$targetDir" -force
        }

    End of wisdom. Please help!
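    Two things worth trying, sketched below as assumptions rather than a confirmed diagnosis: passing $f.FullName via -LiteralPath avoids the positional FileInfo argument being re-resolved against the current location inside the cache's oddly named subfolders, and -ErrorAction Stop turns a silent failure into a visible one. On Vista, protected-mode IE also keeps its cache under \Low\Content.IE5, so Set-Location may need to point there:

        Set-Location (Join-Path $ieCache "Low\Content.IE5")   # Vista protected mode
        foreach ($f in (Get-ChildItem -Recurse -Force -Exclude *.dat, *.tmp |
                        Where-Object { $_.Length -gt $minSize })) {
            Copy-Item -LiteralPath $f.FullName -Destination $targetDir -Force -ErrorAction Stop
        }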

  • How to process <input type="file" name="test1[f]" /> in PHP?

    - by user198729
    var_dump($_FILES) gives the following:

        array
          'test1' =>
            array
              'name' =>
                array
                  'f' => string 'ntuser.dat.LOG' (length=14)
              'type' =>
                array
                  'f' => string 'application/octet-stream' (length=24)
              'tmp_name' =>
                array
                  'f' => string 'D:\wamp\tmp\php223.tmp' (length=22)
              'error' =>
                array
                  'f' => int 0
              'size' =>
                array
                  'f' => int 0

    Why is HTTP designed this way? How can I get the file by $_FILES['test1']['f']?
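    With a field named test1[f], PHP groups the upload metadata by property first, exactly as the var_dump shows: the per-file values live at $_FILES['test1']['tmp_name']['f'], not at $_FILES['test1']['f']['tmp_name']. A sketch of handling it (the destination path is illustrative):

        <?php
        if ($_FILES['test1']['error']['f'] === UPLOAD_ERR_OK) {
            $tmp  = $_FILES['test1']['tmp_name']['f'];
            $name = basename($_FILES['test1']['name']['f']);
            move_uploaded_file($tmp, "/var/uploads/" . $name);
        }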

  • Optimizing python code performance when importing zipped csv to a mongo collection

    - by mark
    I need to import a zipped csv into a mongo collection, but there is a catch: every record contains a timestamp in Pacific Time, which must be converted to the local time corresponding to the (longitude, latitude) pair found in the same record. The code looks like so:

        def read_csv_zip(path, timezones):
            with ZipFile(path) as z, z.open(z.namelist()[0]) as input:
                csv_rows = csv.reader(input)
                header = csv_rows.next()
                check, converters = get_aux_stuff(header)
                for csv_row in csv_rows:
                    if check(csv_row):
                        row = {
                            converter[0]: converter[1](value)
                            for converter, value in zip(converters, csv_row)
                            if allow_field(converter)
                        }
                        ts = row['ts']
                        lng, lat = row['loc']
                        found_tz_entry = timezones.find_one(SON({'loc': {'$within': {'$box': [
                            [lng - tz_lookup_radius, lat - tz_lookup_radius],
                            [lng + tz_lookup_radius, lat + tz_lookup_radius]]}}}))
                        if found_tz_entry:
                            tz_name = found_tz_entry['tz']
                            local_ts = ts.astimezone(timezone(tz_name)).replace(tzinfo=None)
                            row['tz'] = tz_name
                        else:
                            local_ts = (ts.astimezone(utc) + timedelta(hours=int(lng / 15))).replace(tzinfo=None)
                        row['local_ts'] = local_ts
                        yield row

        def insert_documents(collection, source, batch_size):
            while True:
                items = list(itertools.islice(source, batch_size))
                if len(items) == 0:
                    break
                try:
                    collection.insert(items)
                except:
                    for item in items:
                        try:
                            collection.insert(item)
                        except Exception as exc:
                            print("Failed to insert record {0} - {1}".format(item['_id'], exc))

        def main(zip_path):
            with Connection() as connection:
                data = connection.mydb.data
                timezones = connection.timezones.data
                insert_documents(data, read_csv_zip(zip_path, timezones), 1000)

    The code proceeds as follows:

    - Every record read from the csv is checked and converted to a dictionary, where some fields may be skipped, some titles renamed (from those appearing in the csv header), and some values converted (to datetime, to integers, to floats, etc.).
    - For each record read from the csv, a lookup is made into the timezones collection to map the record location to the respective time zone.
    - If the mapping is successful, that timezone is used to convert the record timestamp (Pacific Time) to the respective local timestamp. If no mapping is found, a rough approximation is calculated.

    The timezones collection is appropriately indexed, of course; calling explain() confirms it. The process is slow. Naturally, having to query the timezones collection for every record kills the performance. I am looking for advice on how to improve it. Thanks.

    EDIT: The timezones collection contains 8176040 records, each containing four values:

        > db.data.findOne()
        { "_id" : 3038814, "loc" : [ 1.48333, 42.5 ], "tz" : "Europe/Andorra" }

    EDIT2: OK, I have compiled a release build of http://toblerity.github.com/rtree/ and configured the rtree package. Then I created an rtree dat/idx pair of files corresponding to my timezones collection. So, instead of calling collection.find_one I call index.intersection. Surprisingly, not only is there no improvement, but it works even more slowly now! Maybe rtree could be fine-tuned to load the entire dat/idx pair into RAM (704M), but I do not know how to do it. Until then, it is not an alternative. In general, I think the solution should involve parallelization of the task.
    EDIT3: Profile output when using collection.find_one:

        >>> p.sort_stats('cumulative').print_stats(10)
        Tue Apr 10 14:28:39 2012    ImportDataIntoMongo.profile

        64549590 function calls (64549180 primitive calls) in 1231.257 seconds

        Ordered by: cumulative time
        List reduced from 730 to 10 due to restriction <10>

        ncalls  tottime  percall  cumtime  percall filename:lineno(function)
             1    0.012    0.012 1231.257 1231.257 ImportDataIntoMongo.py:1(<module>)
             1    0.001    0.001 1230.959 1230.959 ImportDataIntoMongo.py:187(main)
             1  853.558  853.558  853.558  853.558 {raw_input}
             1    0.598    0.598  370.510  370.510 ImportDataIntoMongo.py:165(insert_documents)
        343407    9.965    0.000  359.034    0.001 ImportDataIntoMongo.py:137(read_csv_zip)
        343408    2.927    0.000  287.035    0.001 c:\python27\lib\site-packages\pymongo\collection.py:489(find_one)
        343408    1.842    0.000  274.803    0.001 c:\python27\lib\site-packages\pymongo\cursor.py:699(next)
        343408    2.542    0.000  271.212    0.001 c:\python27\lib\site-packages\pymongo\cursor.py:644(_refresh)
        343408    4.512    0.000  253.673    0.001 c:\python27\lib\site-packages\pymongo\cursor.py:605(__send_message)
        343408    0.971    0.000  242.078    0.001 c:\python27\lib\site-packages\pymongo\connection.py:871(_send_message_with_response)

    Profile output when using index.intersection:

        >>> p.sort_stats('cumulative').print_stats(10)
        Wed Apr 11 16:21:31 2012    ImportDataIntoMongo.profile

        41542960 function calls (41542536 primitive calls) in 2889.164 seconds

        Ordered by: cumulative time
        List reduced from 778 to 10 due to restriction <10>

        ncalls  tottime  percall  cumtime  percall filename:lineno(function)
             1    0.028    0.028 2889.164 2889.164 ImportDataIntoMongo.py:1(<module>)
             1    0.017    0.017 2888.679 2888.679 ImportDataIntoMongo.py:202(main)
             1 2365.526 2365.526 2365.526 2365.526 {raw_input}
             1    0.766    0.766  502.817  502.817 ImportDataIntoMongo.py:180(insert_documents)
        343407    9.147    0.000  491.433    0.001 ImportDataIntoMongo.py:152(read_csv_zip)
        343406    0.571    0.000  391.394    0.001 c:\python27\lib\site-packages\rtree-0.7.0-py2.7.egg\rtree\index.py:384(intersection)
        343406  379.957    0.001  390.824    0.001 c:\python27\lib\site-packages\rtree-0.7.0-py2.7.egg\rtree\index.py:435(_intersection_obj)
        686513   22.616    0.000   38.705    0.000 c:\python27\lib\site-packages\rtree-0.7.0-py2.7.egg\rtree\index.py:451(_get_objects)
        343406    6.134    0.000   33.326    0.000 ImportDataIntoMongo.py:162(<dictcomp>)
           346    0.396    0.001   30.665    0.089 c:\python27\lib\site-packages\pymongo\collection.py:240(insert)

    EDIT4: I have parallelized the code, but the results are still not very encouraging. I am convinced it could be done better. See my own answer to this question for details.
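    Since consecutive records tend to come from nearby locations, memoizing the timezone lookup on a coarse coordinate grid could cut most of the find_one calls; a hedged sketch, where the 0.1-degree grid resolution is an assumption to tune:

        tz_cache = {}

        def lookup_tz(timezones, lng, lat):
            # round to a 0.1-degree grid so nearby points share one cache entry
            key = (round(lng, 1), round(lat, 1))
            if key not in tz_cache:
                tz_cache[key] = timezones.find_one(SON({'loc': {'$within': {'$box': [
                    [lng - tz_lookup_radius, lat - tz_lookup_radius],
                    [lng + tz_lookup_radius, lat + tz_lookup_radius]]}}}))
            return tz_cache[key]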

  • Python, Matplotlib, subplot: How to set the axis range?

    - by someone
    How can I set the y axis range of the second subplot to e.g. [0, 1000]? The FFT plot of my data (a column in a text file) results in an (infinite?) spike so that the actual data is not visible. pylab.ylim([0,1000]) has no effect, unfortunately. This is the whole script:

        # based on http://www.swharden.com/blog/2009-01-21-signal-filtering-with-python/
        import numpy, scipy, pylab, random

        xs = []
        rawsignal = []
        with open("test.dat", 'r') as f:
            for line in f:
                if line[0] != '#' and len(line) > 0:
                    xs.append(int(line.split()[0]))
                    rawsignal.append(int(line.split()[1]))

        h, w = 3, 1
        pylab.figure(figsize=(12,9))
        pylab.subplots_adjust(hspace=.7)

        pylab.subplot(h,w,1)
        pylab.title("Signal")
        pylab.plot(xs, rawsignal)

        pylab.subplot(h,w,2)
        pylab.title("FFT")
        fft = scipy.fft(rawsignal)
        #~ pylab.axis([None,None,0,1000])
        pylab.ylim([0,1000])
        pylab.plot(abs(fft))

        pylab.savefig("SIG.png", dpi=200)
        pylab.show()

    Other improvements are also appreciated!
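    The likely reason ylim has no effect here: plot() autoscales the axes, overriding limits set before it. Setting the limits after the plot call should work; a minimal sketch of the reordered second subplot:

        pylab.subplot(h, w, 2)
        pylab.title("FFT")
        fft = scipy.fft(rawsignal)
        pylab.plot(abs(fft))
        pylab.ylim([0, 1000])   # after plot(), so autoscaling does not override it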

  • Where do Java Applets live?

    - by Wendy Peters
    I'm trying to figure out where Java applets that I run from the browser get downloaded to. I'm using Firefox 3.0 on Windows XP with Java 1.6, if that makes any difference. From the Java Control Panel on the toolbar, I can access "Temporary Internet Files - Settings" to find the Java cache. From there I can show the resources and see a file called "dws2010066.dat". Does this resource correspond to a file on disk? I did a search in the Java cache (and my whole computer) but came up empty-handed.

  • Async stream writing in a thread

    - by blez
    I have a thread in which I write to 2 streams. The problem is that the thread is blocked until the first one finishes writing (until all data is transferred to the other side of the pipe), and I don't want that. Is there a way to make it asynchronous? chunkOutput is a Dictionary filled with data from multiple threads, so the faster the check for existing keys is, the faster the pipe will write.

        void ConsumerMethod(object totalChunks) {
            while (true) {
                if (chunkOutput.ContainsKey(curChunk)) {
                    if (outputStream != null && chunkOutput[curChunk].Length > 0) {
                        outputStream.Write(chunkOutput[curChunk]); // <-- here it stops
                    }
                    ChunkDownloader.AppendData("outfile.dat",
                        chunkOutput[curChunk], chunkOutput[curChunk].Length);
                    curChunk++;
                    if (curChunk >= (int) totalChunks) return;
                }
                Thread.Sleep(10);
            }
        }
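    Stream exposes BeginWrite/EndWrite for exactly this; a hedged sketch, assuming the dictionary values are byte arrays (the Write call above takes a single argument, so the real type may differ):

        byte[] buf = chunkOutput[curChunk];
        outputStream.BeginWrite(buf, 0, buf.Length, ar =>
        {
            outputStream.EndWrite(ar);   // completes on a thread-pool thread
        }, null);
        // the loop continues immediately instead of blocking on the pipe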

  • How do I clear the contents of a file using c?

    - by Eddy
    I'm writing code where each iteration of a for loop runs a function that writes data to a file, like this:

        int main()
        {
            int i;

            /* Write data to file 100 times */
            for (i = 0; i < 100; i++)
                writedata();

            return 0;
        }

        void writedata()
        {
            /* Create file for displaying output */
            FILE *data;
            data = fopen("output.dat", "a");

            /* do other stuff */
            ...
        }

    How do I get it so that when I run the program it deletes the file contents at the beginning, but after that appends data to the file? I know that using the "w" mode in fopen() will open a new, empty file, but I want to be able to append data to the file each time it goes through the writedata() function, hence the use of the "a" mode.
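    One way, sketched under the assumption that the truncation can happen in main: open the file once with "w" at startup (which truncates it) and close it immediately; every later fopen with "a" then appends to the now-empty file:

        #include <stdio.h>

        void writedata(void);

        int main(void)
        {
            /* Truncate output.dat once at program start */
            FILE *data = fopen("output.dat", "w");
            if (data != NULL)
                fclose(data);

            /* Write data to file 100 times; writedata() keeps using "a" */
            for (int i = 0; i < 100; i++)
                writedata();

            return 0;
        }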
