Search Results

Search found 478 results on 20 pages for 'winmail dat'.

Page 14/20 | < Previous Page | 10 11 12 13 14 15 16 17 18 19 20  | Next Page >

  • The property 'IsDataSource' was not found in type 'ViewModelLocator

    - by dieter-preconsult-be
    Hello, I have the following code: <UserControl x:Class="TestApp.View.ViewAlarmLog" xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation" xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml" xmlns:d="http://schemas.microsoft.com/expression/blend/2008" xmlns:Custom="http://schemas.microsoft.com/wpf/2008/toolkit" xmlns:mvvm="clr-namespace:Test.ViewModel"> <UserControl.Resources> <ResourceDictionary > <ResourceDictionary.MergedDictionaries> </ResourceDictionary.MergedDictionaries> <mvvm:ViewModelLocator x:Key="Locator" d:IsDataSource="True"/> </ResourceDictionary> </UserControl.Resources> The problem is that I always get the error: "The property 'IsDataSource' was not found in type 'ViewModelLocator'." What could be the problem here? Regards, Dieter

    Read the article

  • I need a data structure for efficient handling of dates

    - by ante.sabo
    What I need is something like a Hashtable which I will fill with prices that were in effect on given days. For example: I will put in two prices: January 1st: 100USD, March 5th: 89USD. If I search my hashtable for a price, e.g. hashtable.get(February 14th), I need it to give me back the price that was entered on Jan. 1st, because that is the last price in effect. A normal hashtable implementation won't give me back anything, since nothing was put on that date. I need to know whether there is an implementation that can quickly find an object based on a range of dates.
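
    A minimal sketch (in Python, not from the original post) of the usual approach: keep the keys sorted and do a "floor" lookup for the latest date that is not after the requested one. In Java the same behaviour comes from a sorted map such as TreeMap with floorEntry(); the class and values below are hypothetical.

        import bisect
        from datetime import date

        class PriceHistory:
            def __init__(self):
                self._dates = []    # sorted list of dates with a known price
                self._prices = {}   # date -> price

            def put(self, day, price):
                if day not in self._prices:
                    bisect.insort(self._dates, day)
                self._prices[day] = price

            def get(self, day):
                i = bisect.bisect_right(self._dates, day) - 1   # index of last date <= day
                return self._prices[self._dates[i]] if i >= 0 else None

        history = PriceHistory()
        history.put(date(2010, 1, 1), 100)      # January 1st: 100 USD
        history.put(date(2010, 3, 5), 89)       # March 5th: 89 USD
        print(history.get(date(2010, 2, 14)))   # -> 100, the Jan 1st price still applies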

    Read the article

  • Retrieving Windows Mobile browser history

    - by kurige
    How can I retrieve a list of urls a user has visited on a Windows Mobile phone? I've written a program that successfully retrieves the visited urls in a user's cache, using FindFirstUrlCacheEntry and FindNextUrlCacheEntry - but as I understand it this is not the same as the user's actual web history. In any case it does not seem to give correct results. Edit: I believe the file I'm looking for is index.dat. But it's certainly not in the same place it is on a desktop machine, if it exists at all. And I'm not sure how to parse it. Any experience in this area would be greatly appreciated.

    Read the article

  • Writing booleans to file

    - by Sara
    Hello, I have a piece of code that gives a runtime error. Can anyone help find out why? vector<int> intData; vector<bool> boolData; for(int i=0;i<19000;i++) boolData.push_back(false); string ofile = "tree.dat"; ofstream fout(ofile.c_str(),ios::out | ios::binary); if (!boolData.empty()) fout.write((char *)&boolData[0], sizeof(bool)*boolData.size()); fout.close(); It gives the error when it tries to write the file (fout.write).

    Read the article

  • MySQL import in phpmyadmin (CSV) chokes on quotes

    - by Andrew Swift
    I am trying to import a .csv file into a MySQL table via phpMyAdmin. The .csv file is separated by pipes, formatted like this: data|d'ata|d'a"ta|dat"a| data|"da"ta|data|da't'a| dat'a|data|da"ta"|da'ta| The data contains quotes. I have no control over the format in which I receive the data -- it is generated by a third party. The problem comes when there is a | followed by a double quote. I always get an "invalid field count in CSV input on line N" error. I am uploading the file from the import page, using Latin1, CSV, terminated by |, separated by ". I would like to just change the "enclosed by" character, but I keep getting "Invalid parameter for CSV import: Fields enclosed by". I have tried various characters with no success. How can I tell MySQL to accept this format in phpMyAdmin? Setting up these tables is the first step in writing a program that will use uploaded gzipped .csv files to maintain the catalog of an e-commerce site.
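
    One possible workaround (an assumption, not the phpMyAdmin setting the question asks about) is to pre-process the file with a small script so that every field ends up enclosed and embedded quotes are doubled in the conventional CSV way, which import tools generally accept. A sketch in Python, with hypothetical file names:

        import csv

        # Read the pipe-separated file with quote handling disabled, so the quote
        # characters in the data are treated as ordinary characters, then rewrite
        # it with every field enclosed in double quotes (embedded quotes doubled).
        with open("export.txt", newline="") as src, open("clean.csv", "w", newline="") as dst:
            reader = csv.reader(src, delimiter="|", quoting=csv.QUOTE_NONE)
            writer = csv.writer(dst, delimiter="|", quoting=csv.QUOTE_ALL)
            for row in reader:
                writer.writerow(row)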

    Read the article

  • File Encrypt/Decrypt under load?

    - by chopps
    I found an interesting article about encrypting and decrypting files, but since it uses a file.dat to store the key, this will run into problems when there are a lot of users on the site dealing with a lot of files. http://www.codeproject.com/KB/security/VernamEncryption.aspx?display=Print Should a new file be created every time a file needs decrypting or would there be a better way to do this? UPDATE: Here is what I'm using to avoid the locking problems. using (Mutex FileLock = new Mutex(true, System.Guid.NewGuid().ToString())) { try { FileLock.WaitOne(); using (FileStream fs = new FileStream(keyFile, FileMode.Open)) { keyBytes = new byte[fs.Length]; fs.Read(keyBytes, 0, keyBytes.Length); } } catch (Exception ex) { EventLog.LogEvent(ex); } finally { FileLock.ReleaseMutex(); } } I tested it on 1000 TIFFs doing both encryption and decryption without any errors.

    Read the article

  • Why is Varnish not caching?

    - by Justin
    I am troubleshooting the setup of Varnish 3.x on my Ubuntu server. I'm running Drupal 7 on two sites set up on the box, via named-based vhosts. Before trying to get Varnish to play nice with Drupal I'm trying to just get Varnish to a PNG from cache. Here are the headers I get from a curl -I request of the PNG file: HTTP/1.1 200 OK Server: Apache/2.2.22 (Ubuntu) Last-Modified: Sun, 07 Oct 2012 21:18:59 GMT ETag: "a57c2-3850-4cb7ea73db6c0" Accept-Ranges: bytes Content-Length: 14416 Cache-Control: max-age=1209600 Expires: Thu, 25 Oct 2012 22:55:14 GMT Content-Type: image/png Accept-Ranges: bytes Date: Thu, 11 Oct 2012 22:55:14 GMT X-Varnish: 1766703058 Age: 0 Via: 1.1 varnish Connection: keep-alive X-Varnish-Cache: MISS Here is the Varnish VCL file I'm using (It's a default VCL configuration designed for Drupal): # Default backend definition. Set this to point to your content # server. # backend default { .host = "127.0.0.1"; .port = "8080"; } # Respond to incoming requests. sub vcl_recv { # Use anonymous, cached pages if all backends are down. if (!req.backend.healthy) { unset req.http.Cookie; } # Allow the backend to serve up stale content if it is responding slowly. set req.grace = 6h; # Pipe these paths directly to Apache for streaming. #if (req.url ~ "^/admin/content/backup_migrate/export") { # return (pipe); #} # Do not cache these paths. if (req.url ~ "^/status\.php$" || req.url ~ "^/update\.php$" || req.url ~ "^/admin$" || req.url ~ "^/admin/.*$" || req.url ~ "^/flag/.*$" || req.url ~ "^.*/ajax/.*$" || req.url ~ "^.*/ahah/.*$") { return (pass); } # Do not allow outside access to cron.php or install.php. #if (req.url ~ "^/(cron|install)\.php$" && !client.ip ~ internal) { # Have Varnish throw the error directly. # error 404 "Page not found."; # Use a custom error page that you've defined in Drupal at the path "404". # set req.url = "/404"; #} # Always cache the following file types for all users. This list of extensions # appears twice, once here and again in vcl_fetch so make sure you edit both # and keep them equal. if (req.url ~ "(?i)\.(pdf|asc|dat|txt|doc|xls|ppt|tgz|csv|png|gif|jpeg|jpg|ico|swf|css|js)(\?.*)?$") { unset req.http.Cookie; } # Remove all cookies that Drupal doesn't need to know about. We explicitly # list the ones that Drupal does need, the SESS and NO_CACHE. If, after # running this code we find that either of these two cookies remains, we # will pass as the page cannot be cached. if (req.http.Cookie) { # 1. Append a semi-colon to the front of the cookie string. # 2. Remove all spaces that appear after semi-colons. # 3. Match the cookies we want to keep, adding the space we removed # previously back. (\1) is first matching group in the regsuball. # 4. Remove all other cookies, identifying them by the fact that they have # no space after the preceding semi-colon. # 5. Remove all spaces and semi-colons from the beginning and end of the # cookie string. set req.http.Cookie = ";" + req.http.Cookie; set req.http.Cookie = regsuball(req.http.Cookie, "; +", ";"); set req.http.Cookie = regsuball(req.http.Cookie, ";(SESS[a-z0-9]+|SSESS[a-z0-9]+|NO_CACHE)=", "; \1="); set req.http.Cookie = regsuball(req.http.Cookie, ";[^ ][^;]*", ""); set req.http.Cookie = regsuball(req.http.Cookie, "^[; ]+|[; ]+$", ""); if (req.http.Cookie == "") { # If there are no remaining cookies, remove the cookie header. If there # aren't any cookie headers, Varnish's default behavior will be to cache # the page. 
unset req.http.Cookie; } else { # If there is any cookies left (a session or NO_CACHE cookie), do not # cache the page. Pass it on to Apache directly. return (pass); } } } # Set a header to track a cache HIT/MISS. sub vcl_deliver { if (obj.hits > 0) { set resp.http.X-Varnish-Cache = "HIT"; } else { set resp.http.X-Varnish-Cache = "MISS"; } } # Code determining what to do when serving items from the Apache servers. # beresp == Back-end response from the web server. sub vcl_fetch { # We need this to cache 404s, 301s, 500s. Otherwise, depending on backend but # definitely in Drupal's case these responses are not cacheable by default. if (beresp.status == 404 || beresp.status == 301 || beresp.status == 500) { set beresp.ttl = 10m; } # Don't allow static files to set cookies. # (?i) denotes case insensitive in PCRE (perl compatible regular expressions). # This list of extensions appears twice, once here and again in vcl_recv so # make sure you edit both and keep them equal. if (req.url ~ "(?i)\.(pdf|asc|dat|txt|doc|xls|ppt|tgz|csv|png|gif|jpeg|jpg|ico|swf|css|js)(\?.*)?$") { unset beresp.http.set-cookie; } # Allow items to be stale if needed. set beresp.grace = 6h; } # In the event of an error, show friendlier messages. sub vcl_error { # Redirect to some other URL in the case of a homepage failure. #if (req.url ~ "^/?$") { # set obj.status = 302; # set obj.http.Location = "http://backup.example.com/"; #} # Otherwise redirect to the homepage, which will likely be in the cache. set obj.http.Content-Type = "text/html; charset=utf-8"; synthetic {" <html> <head> <title>Page Unavailable</title> <style> body { background: #303030; text-align: center; color: white; } #page { border: 1px solid #CCC; width: 500px; margin: 100px auto 0; padding: 30px; background: #323232; } a, a:link, a:visited { color: #CCC; } .error { color: #222; } </style> </head> <body onload="setTimeout(function() { window.location = '/' }, 5000)"> <div id="page"> <h1 class="title">Page Unavailable</h1> <p>The page you requested is temporarily unavailable.</p> <p>We're redirecting you to the <a href="/">homepage</a> in 5 seconds.</p> <div class="error">(Error "} + obj.status + " " + obj.response + {")</div> </div> </body> </html> "}; return (deliver); } I'm getting a MISS and age 0 every time. If I'm understanding correctly, this means the file isn't being returned from Varnish's cache. Is there a problem with my Varnish config?

    Read the article

  • PowerShell copy fails without warning

    - by boink
    Howdy, I am trying to copy a file from the IE cache to somewhere else. This works on Windows 7, but not on Vista Ultimate. In short: copy-item $f -Destination "$targetDir" -force (I also tried $f.fullname) The full script: $targetDir = "C:\temp" $ieCache=(get-itemproperty "hkcu:\Software\Microsoft\Windows\CurrentVersion\Explorer\User Shell Folders").cache $minSize = 5mb Write-Host "minSize:" $minSize Add-Content -Encoding Unicode -Path $targetDir"\log.txt" -Value (get-Date) Set-Location $ieCache #\Low\Content.IE5 for protected mode #\content.ie5 for unprotected $a = Get-Location foreach ($f in (get-childitem -Recurse -Force -Exclude *.dat, *.tmp | where {$_.length -gt $minSize}) ) { Write-Host (get-Date) $f.Name $f.length Add-Content -Encoding Unicode -Path $targetDir"\log.txt" -Value $f.name, $f.length copy-item $f -Destination "$targetDir" -force } End of wisdom. Please help!

    Read the article

  • Adobe Air: Read and Write MP3 or JPG from local directory and switch bytes

    - by Max
    I would like to make my local jpg and mp3 files kind of unreadable (without encoding) by just moving the first 100 bytes from the beginning of the file to the end and then saving the file with a .dat extension. I know I have to use a byte array but can't get it to work. I would then also need a small function to read that file and put the 100 bytes back at the front so that I can play/display the file. It would be great if you could post the whole function, because I am quite new to Air, so that I fully understand. Thank You!!!!

    Read the article

  • How to process <input type="file" name="test1[f]" /> in PHP?

    - by user198729
    var_dump($_FILES) gives the following: array 'test1' => array 'name' => array 'f' => string 'ntuser.dat.LOG' (length=14) 'type' => array 'f' => string 'application/octet-stream' (length=24) 'tmp_name' => array 'f' => string 'D:\wamp\tmp\php223.tmp' (length=22) 'error' => array 'f' => int 0 'size' => array 'f' => int 0 Why is HTTP designed this way? How can I get the file via $_FILES['test1']['f']?

    Read the article

  • Optimizing python code performance when importing zipped csv to a mongo collection

    - by mark
    I need to import a zipped csv into a mongo collection, but there is a catch - every record contains a timestamp in Pacific Time, which must be converted to the local time corresponding to the (longitude,latitude) pair found in the same record. The code looks like so: def read_csv_zip(path, timezones): with ZipFile(path) as z, z.open(z.namelist()[0]) as input: csv_rows = csv.reader(input) header = csv_rows.next() check,converters = get_aux_stuff(header) for csv_row in csv_rows: if check(csv_row): row = { converter[0]:converter[1](value) for converter, value in zip(converters, csv_row) if allow_field(converter) } ts = row['ts'] lng, lat = row['loc'] found_tz_entry = timezones.find_one(SON({'loc': {'$within': {'$box': [[lng-tz_lookup_radius, lat-tz_lookup_radius],[lng+tz_lookup_radius, lat+tz_lookup_radius]]}}})) if found_tz_entry: tz_name = found_tz_entry['tz'] local_ts = ts.astimezone(timezone(tz_name)).replace(tzinfo=None) row['tz'] = tz_name else: local_ts = (ts.astimezone(utc) + timedelta(hours = int(lng/15))).replace(tzinfo = None) row['local_ts'] = local_ts yield row def insert_documents(collection, source, batch_size): while True: items = list(itertools.islice(source, batch_size)) if len(items) == 0: break; try: collection.insert(items) except: for item in items: try: collection.insert(item) except Exception as exc: print("Failed to insert record {0} - {1}".format(item['_id'], exc)) def main(zip_path): with Connection() as connection: data = connection.mydb.data timezones = connection.timezones.data insert_documents(data, read_csv_zip(zip_path, timezones), 1000) The code proceeds as follows: Every record read from the csv is checked and converted to a dictionary, where some fields may be skipped, some titles be renamed (from those appearing in the csv header), some values may be converted (to datetime, to integers, to floats. etc ...) For each record read from the csv, a lookup is made into the timezones collection to map the record location to the respective time zone. If the mapping is successful - that timezone is used to convert the record timestamp (pacific time) to the respective local timestamp. If no mapping is found - a rough approximation is calculated. The timezones collection is appropriately indexed, of course - calling explain() confirms it. The process is slow. Naturally, having to query the timezones collection for every record kills the performance. I am looking for advises on how to improve it. Thanks. EDIT The timezones collection contains 8176040 records, each containing four values: > db.data.findOne() { "_id" : 3038814, "loc" : [ 1.48333, 42.5 ], "tz" : "Europe/Andorra" } EDIT2 OK, I have compiled a release build of http://toblerity.github.com/rtree/ and configured the rtree package. Then I have created an rtree dat/idx pair of files corresponding to my timezones collection. So, instead of calling collection.find_one I call index.intersection. Surprisingly, not only there is no improvement, but it works even more slowly now! May be rtree could be fine tuned to load the entire dat/idx pair into RAM (704M), but I do not know how to do it. Until then, it is not an alternative. In general, I think the solution should involve parallelization of the task. 
EDIT3 Profile output when using collection.find_one: >>> p.sort_stats('cumulative').print_stats(10) Tue Apr 10 14:28:39 2012 ImportDataIntoMongo.profile 64549590 function calls (64549180 primitive calls) in 1231.257 seconds Ordered by: cumulative time List reduced from 730 to 10 due to restriction <10> ncalls tottime percall cumtime percall filename:lineno(function) 1 0.012 0.012 1231.257 1231.257 ImportDataIntoMongo.py:1(<module>) 1 0.001 0.001 1230.959 1230.959 ImportDataIntoMongo.py:187(main) 1 853.558 853.558 853.558 853.558 {raw_input} 1 0.598 0.598 370.510 370.510 ImportDataIntoMongo.py:165(insert_documents) 343407 9.965 0.000 359.034 0.001 ImportDataIntoMongo.py:137(read_csv_zip) 343408 2.927 0.000 287.035 0.001 c:\python27\lib\site-packages\pymongo\collection.py:489(find_one) 343408 1.842 0.000 274.803 0.001 c:\python27\lib\site-packages\pymongo\cursor.py:699(next) 343408 2.542 0.000 271.212 0.001 c:\python27\lib\site-packages\pymongo\cursor.py:644(_refresh) 343408 4.512 0.000 253.673 0.001 c:\python27\lib\site-packages\pymongo\cursor.py:605(__send_message) 343408 0.971 0.000 242.078 0.001 c:\python27\lib\site-packages\pymongo\connection.py:871(_send_message_with_response) Profile output when using index.intersection: >>> p.sort_stats('cumulative').print_stats(10) Wed Apr 11 16:21:31 2012 ImportDataIntoMongo.profile 41542960 function calls (41542536 primitive calls) in 2889.164 seconds Ordered by: cumulative time List reduced from 778 to 10 due to restriction <10> ncalls tottime percall cumtime percall filename:lineno(function) 1 0.028 0.028 2889.164 2889.164 ImportDataIntoMongo.py:1(<module>) 1 0.017 0.017 2888.679 2888.679 ImportDataIntoMongo.py:202(main) 1 2365.526 2365.526 2365.526 2365.526 {raw_input} 1 0.766 0.766 502.817 502.817 ImportDataIntoMongo.py:180(insert_documents) 343407 9.147 0.000 491.433 0.001 ImportDataIntoMongo.py:152(read_csv_zip) 343406 0.571 0.000 391.394 0.001 c:\python27\lib\site-packages\rtree-0.7.0-py2.7.egg\rtree\index.py:384(intersection) 343406 379.957 0.001 390.824 0.001 c:\python27\lib\site-packages\rtree-0.7.0-py2.7.egg\rtree\index.py:435(_intersection_obj) 686513 22.616 0.000 38.705 0.000 c:\python27\lib\site-packages\rtree-0.7.0-py2.7.egg\rtree\index.py:451(_get_objects) 343406 6.134 0.000 33.326 0.000 ImportDataIntoMongo.py:162(<dictcomp>) 346 0.396 0.001 30.665 0.089 c:\python27\lib\site-packages\pymongo\collection.py:240(insert) EDIT4 I have parallelized the code, but the results are still not very encouraging. I am convinced it could be done better. See my own answer to this question for details.
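
    A sketch (not from the original post) of one way to cut the per-record queries, assuming many records fall close together so the same lookup repeats: snap the coordinates to a coarse grid and memoise the result of the timezones.find_one() query used above, one entry per grid cell.

        from bson.son import SON   # SON as used in the question; import path may vary by pymongo version

        tz_lookup_radius = 0.5     # assumed value; the original constant is not shown
        _tz_cache = {}

        def lookup_tz(timezones, lng, lat):
            # Round to roughly 0.1 degree so nearby points share one cached lookup.
            key = (round(lng, 1), round(lat, 1))
            if key not in _tz_cache:
                _tz_cache[key] = timezones.find_one(SON({'loc': {'$within': {'$box': [
                    [lng - tz_lookup_radius, lat - tz_lookup_radius],
                    [lng + tz_lookup_radius, lat + tz_lookup_radius]]}}}))
            return _tz_cache[key]

    read_csv_zip() would then call lookup_tz(timezones, lng, lat) instead of querying the collection directly.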

    Read the article

  • Python, Matplotlib, subplot: How to set the axis range?

    - by someone
    How can I set the y axis range of the second subplot to e.g. [0,1000] ? The FFT plot of my data (a column in a text file) results in a (inf.?) spike so that the actual data is not visible. pylab.ylim([0,1000]) has no effect, unfortunately. This is the whole script: # based on http://www.swharden.com/blog/2009-01-21-signal-filtering-with-python/ import numpy, scipy, pylab, random xs = [] rawsignal = [] with open("test.dat", 'r') as f: for line in f: if line[0] != '#' and len(line) > 0: xs.append( int( line.split()[0] ) ) rawsignal.append( int( line.split()[1] ) ) h, w = 3, 1 pylab.figure(figsize=(12,9)) pylab.subplots_adjust(hspace=.7) pylab.subplot(h,w,1) pylab.title("Signal") pylab.plot(xs,rawsignal) pylab.subplot(h,w,2) pylab.title("FFT") fft = scipy.fft(rawsignal) #~ pylab.axis([None,None,0,1000]) pylab.ylim([0,1000]) pylab.plot(abs(fft)) pylab.savefig("SIG.png",dpi=200) pylab.show() Other improvements are also appreciated!
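
    A guess rather than a certainty: limits set before a later plot() call can be overridden by autoscaling, so try setting the range after the subplot's data has been plotted. A trimmed sketch with placeholder data (numpy.fft.fft stands in here for the scipy.fft call above):

        import numpy, pylab

        rawsignal = numpy.random.randn(1024)    # placeholder for the data read from test.dat
        pylab.figure(figsize=(12, 9))

        pylab.subplot(2, 1, 1)
        pylab.title("Signal")
        pylab.plot(rawsignal)

        pylab.subplot(2, 1, 2)
        pylab.title("FFT")
        fft = numpy.fft.fft(rawsignal)
        pylab.plot(abs(fft))
        pylab.ylim(0, 1000)   # set the y range *after* plotting this subplot
        pylab.show()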

    Read the article

  • Where do Java Applets live?

    - by Wendy Peters
    I'm trying to figure out where java Applets that I run from the browser get downloaded to. I'm using Firefox 3.0 on Windows XP with Java 1.6 if that makes any difference. From the Java Control Panel on the toolbar, I can access "Temporary Internet Files - Settings" to find the Java cache. From there I can show the resources and see a file called "dws2010066.dat". Does this resource correspond to a file on disk? I did a search in the Java cache (and my whole computer) but came up empty handed.

    Read the article

  • Async stream writing in a thread

    - by blez
    I have a thread in which I write to 2 streams. The problem is that the thread is blocked until the first one finishes writing (until all data is transferred on the other side of the pipe), and I don't want that. Is there a way to make it asynchronous? chunkOutput is a Dictionary filled with data from multiple threads, so the faster checking for existing keys is, the faster the pipe will write. void ConsumerMethod(object totalChunks) { while(true) { if (chunkOutput.ContainsKey(curChunk)) { if (outputStream != null && chunkOutput[curChunk].Length > 0) { outputStream.Write(chunkOutput[curChunk]); // <-- here it stops } ChunkDownloader.AppendData("outfile.dat", chunkOutput[curChunk], chunkOutput[curChunk].Length); curChunk++; if (curChunk >= (int) totalChunks) return; } Thread.Sleep(10); } }

    Read the article

  • How do I read from a file consisting of city names, coordinates, and populations, and create functions to get the coordinates and population?

    - by Braybray
    I'm using Python, and I have a file which has city names and information such as the coordinates and population of each city: Youngstown, OH[4110,8065]115436 Yankton, SD[4288,9739]12011 966 Yakima, WA[4660,12051]49826 1513 2410 Worcester, MA[4227,7180]161799 2964 1520 604 Wisconsin Dells, WI[4363,8977]2521 1149 1817 481 595 How can I create a function to take the city name and return a list containing the latitude and longitude of the given city? fin = open ("miles.dat","r") def getCoordinates cities = [] for line in fin: cities.append(line.rstrip()) for word in line: print line.split() That's what I have tried so far; how could I get the coordinates of a city by its name, and how can I return the words of each line rather than individual letters? Any help will be much appreciated, thanks all.
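
    A sketch of one way to parse it, assuming the format shown above ("Name, ST[lat,lon]population", with the extra trailing numbers being distances that can be ignored); the file name miles.dat is taken from the question:

        import re

        LINE = re.compile(r'^(?P<name>[^[]+)\[(?P<lat>\d+),(?P<lon>\d+)\](?P<pop>\d+)')

        def load_cities(path="miles.dat"):
            cities = {}
            with open(path) as fin:
                for line in fin:
                    m = LINE.match(line.strip())
                    if m:   # lines that contain only distance numbers are skipped
                        cities[m.group("name").strip()] = {
                            "coords": [int(m.group("lat")), int(m.group("lon"))],
                            "population": int(m.group("pop")),
                        }
            return cities

        def get_coordinates(cities, name):
            return cities[name]["coords"]

        cities = load_cities()
        print(get_coordinates(cities, "Youngstown, OH"))   # -> [4110, 8065]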

    Read the article

  • How do I clear the contents of a file using C?

    - by Eddy
    I'm writing some code so that at each iteration of a for loop it runs a functions which writes data into a file, like this: int main() { int i; /* Write data to file 100 times */ for(i = 0; i < 100; i++) writedata(); return 0; } void writedata() { /* Create file for displaying output */ FILE *data; data = fopen("output.dat", "a"); /* do other stuff */ ... } How do I get it so that when I run the program it will delete the file contents at the beginning of the program, but after that it will append data to the file? I know that using the "w" identifier in fopen() will open a new file that's empty, but I want to be able to 'append' data to the file each time it goes through the 'writedata()' function, hence the use of the "a" identifier.

    Read the article

  • How to get the domain name from a URL with PHP?

    - by ilhan
    For example http://www.google.com/ -> google.com http://www.ma219.metu.edu.tr -> metu.edu.tr https://www.nic.tr/ -> nic.tr http://www.plasticsurgery.a.bg/ -> plasticsurgery.a.bg www.abv.bg -> abv.bg abv.bg -> abv.bg The output should not have the subdomain. Edit: It would be great if we were able to read http://mxr.mozilla.org/mozilla/source/netwerk/dns/src/effective_tld_names.dat?raw=1

    Read the article

  • Improve my Python program to fetch the desired rows using an if condition

    - by user2560507
    The unique.txt file contains 2 columns separated by tabs. The total.txt file contains 3 columns, each separated by tabs. I take each row from the unique.txt file and look for it in the total.txt file. If present, I extract the entire row from total.txt and save it in a new output file. ###Total.txt column a column b column c interaction1 mitochondria_205000_225000 mitochondria_195000_215000 interaction2 mitochondria_345000_365000 mitochondria_335000_355000 interaction3 mitochondria_345000_365000 mitochondria_5000_25000 interaction4 chloroplast_115000_128207 chloroplast_35000_55000 interaction5 chloroplast_115000_128207 chloroplast_15000_35000 interaction15 2_10515000_10535000 2_10505000_10525000 ###Unique.txt column a column b mitochondria_205000_225000 mitochondria_195000_215000 mitochondria_345000_365000 mitochondria_335000_355000 mitochondria_345000_365000 mitochondria_5000_25000 chloroplast_115000_128207 chloroplast_35000_55000 chloroplast_115000_128207 chloroplast_15000_35000 mitochondria_185000_205000 mitochondria_25000_45000 2_16595000_16615000 2_16585000_16605000 4_2785000_2805000 4_2775000_2795000 4_11395000_11415000 4_11385000_11405000 4_2875000_2895000 4_2865000_2885000 4_13745000_13765000 4_13735000_13755000 My program: file=open('total.txt') file2 = open('unique.txt') all_content=file.readlines() all_content2=file2.readlines() store_id_lines = [] ff = open('match.dat', 'w') for i in range(len(all_content)): line=all_content[i].split('\t') seq=line[1]+'\t'+line[2] for j in range(len(all_content2)): if all_content2[j]==seq: ff.write(seq) break Problem: it does not give the desired output (the values of the 1st column for the rows that fulfil the if condition). I need something like: if the jth row of unique.txt equals the ith row of total.txt, then write the ith row of total.txt into the new file.
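
    A sketch of one way to get the full total.txt row, assuming both files are tab-separated as described: index total.txt by its last two columns in a dictionary, then write the whole matching row, first column included, for every pair found in unique.txt.

        pairs = {}
        with open('total.txt') as total:
            for line in total:
                fields = line.rstrip('\n').split('\t')
                if len(fields) >= 3:
                    pairs[(fields[1], fields[2])] = line    # keep the complete row

        with open('unique.txt') as unique, open('match.dat', 'w') as out:
            for line in unique:
                fields = line.rstrip('\n').split('\t')
                if len(fields) >= 2 and (fields[0], fields[1]) in pairs:
                    out.write(pairs[(fields[0], fields[1])])

    This also avoids the nested loop, so each unique.txt row costs one dictionary lookup instead of a scan over total.txt.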

    Read the article

  • Read Text File line by line using Command Prompt/Batch

    - by user353305
    Hello All, First of all I am very thankful to the owner of this website. I have learned and implemented various technologies with the help of solutions provided by the readers. I know the question I am asking has been posted many times in this forum, and I have tried all of the solutions available, but no luck. In my case I am trying to read a .dat file which is basically a msg/feed file having more than 22,000 characters. Every line may or may not be of the same length. My requirement is to convert the file to a fixed-line-length character file. I have logic that works well using VBScript, however it's pretty slow. I have checked with for /f but no luck. The only delimiter I have is EOT, which I can see in TextPad but not in Notepad. I have tried with \n, token=. Please help me in resolving the issue. Regards, Rajiv [email protected]

    Read the article

  • How can a FILE* (pointer to a struct) be tested (if == NULL)?

    - by m4design
    I was playing around with C; anyway, I was wondering how a file pointer (which points to a struct type) can be tested for NULL, for instance: FILE *cfPtr; if ( ( cfPtr = fopen( "file.dat", "w" ) ) == NULL ) I tried to do that myself, but an error occurs. struct foo{ int x; }; struct foo bar = {0}; if (bar == NULL) puts("Yay\n"); else puts("Nay"); error C2088: '==' : illegal for struct Here's the FILE declaration in the stdio.h file: struct _iobuf { char *_ptr; int _cnt; char *_base; int _flag; int _file; int _charbuf; int _bufsiz; char *_tmpfname; }; typedef struct _iobuf FILE;

    Read the article

  • Starting a browser from a textbox via Intent - Is http:// required?

    - by VitalyB
    Hi, I have the following code: /** Open a browser on the URL specified in the text box */ private void openBrowser() { Uri uri = Uri.parse(urlText.getText().toString()); Intent intent = new Intent(Intent.ACTION_VIEW, uri); startActivity(intent); } When I input "http://www.google.com" into the textbox, it works fine. However, when I try something like "www.google.com" it crashes with: No Activity found to handle Intent { act=android.intent.action.VIEW dat=www.google.com } Am I using Uri wrong? Is there a way to extract the full address from it? Or am I supposed to write code that adds http:// manually? E.g., if it does not start with http://, add it. Thanks!

    Read the article

  • Help with dates in Android

    - by SamRowley
    Hi guys, Looking for a bit of help with taking a date from a DatePicker widget and storing it in an SQLite database within the app. I have the following code: java.util.Date utilDate = null; String y = Integer.toString(date.getDayOfMonth()) + "/" + (date.getMonth()+1) + "/" + Integer.toString(date.getYear()); SimpleDateFormat formatter = new SimpleDateFormat("dd/MM/yyy"); utilDate = formatter.parse(y); java.sql.Date z = new java.sql.Date(utilDate.getDate()); x = z.toString(); Log.v("The date: ", x); } Where date is the DatePicker widget. If I output the utilDate variable (i.e. the Java version of the date) using LogCat, it seems to work fine and gives me a format like Tue Jan 04 00:00:00 GMT 2011, which I am expecting; but using the code above to get the SQL version of the date, it always gives me the date 1970-01-01. I'm pretty sure the solution is very simple but I just can't see it.

    Read the article

  • Python open does not create a file if it doesn't exist

    - by Toddeman
    I am using Python. What is the best way to open a file in read/write mode if it exists, or if it does not, create it and open it in read/write mode? From what I read, file = open('myfile.dat', 'rw') should do this, no? It is not working for me (Python 2.6.2) and I'm wondering if it is a version problem, or not supposed to work like that, or what. The bottom line is, I just need a solution for the problem. I am curious about the other stuff, but all I need is a nice way to do the opening part. Thanks in advance.
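
    A minimal sketch of the usual workaround: 'rw' is not one of the documented mode strings, but 'a+' creates the file when it is missing and allows both reading and writing.

        f = open('myfile.dat', 'a+')   # created if it does not exist yet
        f.seek(0)                      # 'a+' starts positioned at the end, so rewind before reading
        existing = f.read()
        f.write('new data\n')          # in 'a+' mode writes always go to the end of the file
        f.close()

    If wiping an existing file is acceptable, 'w+' also creates it but truncates the previous contents.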

    Read the article

  • How to input test data using the DecisionTree module in Python?

    - by lifera1n
    On the Python DecisionTree module homepage (DecisionTree-1.6.1), they give a piece of example code. Here it is: dt = DecisionTree( training_datafile = "training.dat", debug1 = 1 ) dt.get_training_data() dt.show_training_data() root_node = dt.construct_decision_tree_classifier() root_node.display_decision_tree(" ") test_sample = ['exercising=>never', 'smoking=>heavy', 'fatIntake=>heavy', 'videoAddiction=>heavy'] classification = dt.classify(root_node, test_sample) print "Classification: ", classification My question is: How can I specify sample data (test_sample here) from variables? On the project homepage, it says: "You classify new data by first constructing a new data vector:" I have searched around but have been unable to find out what a data vector is or the answer to my question. Any help would be appreciated!
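
    Not from the module's documentation, but judging from the example above, the "data vector" is just a list of "feature=>value" strings, so it can be built from ordinary variables. A sketch that continues the example (the feature values here are hypothetical):

        features = {
            'exercising':     'never',
            'smoking':        'heavy',
            'fatIntake':      'heavy',
            'videoAddiction': 'heavy',
        }
        test_sample = ["%s=>%s" % (name, value) for name, value in features.items()]
        classification = dt.classify(root_node, test_sample)   # dt and root_node from the example above
        print("Classification: %s" % classification)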

    Read the article

  • Batch & log files

    - by Mat
    Hi All, Please help!!! ;) I have a problem with this code in a shell script (Linux): Mil=`date +"%Y%m%d%H%M%S"` batch=`echo "${DatMusic}"` TabimportEnteteMusic="importentetemusic.dat" { grep '^ENTETE' ${IMPORT}/${DatMusic} > ${IMPORT}/$TabimportEnteteMusic mysql -u basemine --password="basemine" -D basemine -e "delete from importmusic;" mysql -u basemine --password="basemine" -D basemine -e "delete from importentetemusic;" } >> $TRACES/batch/$Mil.$batch.log 2>&1 When I run this script, the output is: /home/mmoine/sgbd_mysql/batch/importMusic.sh: line 51: /batch/20100319160018.afce01aa.cr.log: Aucun fichier ou répertoire de ce type (in English: "No such file or directory") So, please, how can I put all generated messages in this log file? Thanks for your answers. Sorry for my English ;)

    Read the article
