Search Results

Search found 21350 results on 854 pages for 'url parsing'.

  • Android - Parsing XML with XPath

    - by Ruben Deig Ramos
    First of all, thanks to everyone who is going to spend a little time on this question. Second, sorry for my English (not my first language! :D). Well, here is my problem. I'm learning Android and I'm making an app which uses an XML file to store some info. I have no problem creating the file, but when trying to read the XML tags with XPath (DOM, XMLPullParser, etc. only gave me problems) I've only been able to read the first one. Let's see the code. Here is the XML file the app generates:

        <dispositivo>
          <id>111</id>
          <nombre>Name</nombre>
          <intervalo>300</intervalo>
        </dispositivo>

    And here is the function which reads the XML file:

        private void leerXML() {
            try {
                XPathFactory factory = XPathFactory.newInstance();
                XPath xPath = factory.newXPath();
                // Load the XML into memory
                File xmlDocument = new File("/data/data/com.example.gps/files/devloc_cfg.xml");
                InputSource inputSource = new InputSource(new FileInputStream(xmlDocument));
                // Define the expressions used to find each value
                XPathExpression tag_id = xPath.compile("/dispositivo/id");
                String valor_id = tag_id.evaluate(inputSource);
                id = valor_id;
                XPathExpression tag_nombre = xPath.compile("/dispositivo/nombre");
                String valor_nombre = tag_nombre.evaluate(inputSource);
                nombre = valor_nombre;
            } catch (Exception e) {
                e.printStackTrace();
            }
        }

    The app correctly gets the id value and shows it on the screen (the "id" and "nombre" variables are each assigned to a TextView), but "nombre" is not working. What should I change? :) Thanks for all your time and help. This site is quite helpful! PS: I've been searching for an answer on the whole site but didn't find any.

  • Scala: XML Attribute parsing

    - by Chris
    I'm trying to parse an RSS feed that looks like this for the attribute "date":

        <rss version="2.0">
          <channel>
            <item>
              <y:c date="AA"></y:c>
            </item>
          </channel>
        </rss>

    I tried several different versions of this (rssFeed contains the RSS data):

        println(((rssFeed \\ "channel" \\ "item" \ "y:c" \ "date").toString))

    But nothing seems to work. What am I missing? Any help would really be appreciated!

  • Ignoring characters in a file while parsing

    - by sfactor
    I need to parse through a text file and process the data. The valid data is usually denoted by either a timestamp, TS followed by 10 digits (TS1040501134), or a value, a letter of the alphabet followed by nine digits (A098098098), so it will look like:

        TS1040501134A111111111B222222222...TS1020304050A000000000...

    However, there are cases where there will be filler 0s when there is no data. Such a case might be:

        00000000000000000000TS1040501134A111111111B2222222220000000000TS1020304050A000000000...

    Now, as we can see, I need to ignore these zeros. How might I do this? I am using GNU C.

  • Regex for xml parsing

    - by ogmios
    What is your opinion of the following regexes? Are they correct?

    To find an element with a specific and required attribute:

        "<(" + elem_name + ")(\s+(?:[^<]?\s+)" + attr_name + "\s*=\s*(['\"])((?:(?!\3).))\3[^<])(.*?)"

    To find an element with a specific but optional attribute:

        "<(" + elem_name + ")(\s*|\s+(?:[^<]?\s+)(?:" + attr_name + "\s*=\s*(['\"])((?:(?!\3).))\3)?[^<])(.*?)"

    Please, no "use an existing XML parser" answers. The question is: are the regexes correct or not? This is a specific situation: C language on an embedded system, and the XML is not well-formed (this cannot be fixed; it does not depend on me). The XML has a specified schema and there are no problems with namespaces etc.

  • Simple, Custom Parsing with c++

    - by bradkovach
    Hi! I have been reading SO for some time now, but I truly cannot find any help for my problem. I have a C++ assignment to create an IAS simulator. Here is some sample code...

        0 1 a 1 2 b 2 c 3 1 10 begin 11 . load a, subtract b and offset by -1 for jump+ 11 load M(0) 12 sub M(1) 13 sub M(3) 14 halt

    Using C++, I need to be able to read these lines and store them in a "memory register" class that I have already constructed... For example, the first line would need to store "1 a" in register zero. How can I parse out the number at the beginning of the line and then store the rest as a string? I have set up storage using a class that is called with mem.set(int, string), where int is the memory location at the beginning of the line and string is the stored instruction.

  • Parsing String to TreeNode

    - by Krusu70
    Does anyone have a good algorithm for parsing a String into a TreeNode in Java? Let's say we have a string s which says how to build a TreeNode. A(B,C) means that A is the name (String) of the TreeNode, B is the child of A (TreeNode), and C is the sibling of A (TreeNode). So if I call the function with the string A(B(D,E(F,G)),C) (just an example), then I get a TreeNode equal to:

        level A (String: name), B - child (TreeNode), C - sibling (TreeNode)
        level B (String: name), D - child of B (TreeNode), E - sibling of B (TreeNode)
        level E (String: name), F - child of E (TreeNode), G - sibling of E (TreeNode)

    The name may not be just one letter; it could be a real name (many letters).

  • Parsing timestamp with Python2.4

    - by jellybean
    I want to parse a timestamp from a log file that has been written via datetime.datetime.now().strftime('%Y%m%d%H%M%S') and then compute the number of seconds that have passed since this timestamp. I know I could do it with datetime.datetime.strptime to get back a datetime object and then compute a timedelta. The problem is, the strptime function was only introduced in Python 2.5 and I'm using Python 2.4.4 (an upgrade is not possible in my context). Is there an easy way to do this?
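
    For what it's worth, a minimal sketch of one possible workaround, not taken from the question: time.strptime already exists in Python 2.4 (only datetime.datetime.strptime is new in 2.5), and its struct_time result can be unpacked into a datetime. The sample timestamp below is made up.

        import time
        import datetime

        def seconds_since(stamp):
            # time.strptime is available in 2.4; unpack year..second into a datetime
            parsed = time.strptime(stamp, '%Y%m%d%H%M%S')
            then = datetime.datetime(*parsed[:6])
            delta = datetime.datetime.now() - then
            # timedelta.total_seconds() only appears in 2.7, so add it up by hand
            return delta.days * 86400 + delta.seconds

        print(seconds_since('20100429153000'))

    Since the format is fixed, slicing the string into int(stamp[0:4]), int(stamp[4:6]) and so on would work just as well without any parsing function.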

  • Parsing a string to date gives 01/01/0001 00:00:00

    - by kawtousse
    Here is my code:

        String dateimput = request.getParameter("datepicker");
        System.out.println("datepicker:" + dateimput);
        DateFormat df = new SimpleDateFormat("MM/dd/yyyy");
        Date dt = null;
        try {
            dt = df.parse(dateimput);
            System.out.println("date imput is:" + dt);
        } catch (ParseException e) {
            e.printStackTrace();
        }

    * datepicker:04/29/2010 (the value I currently selected from the datepicker).
    * The field in the database is of type date.
    * date imput is: Thu Apr 29 00:00:00 CEST 2010, yet at the database level it is inserted as 01/01/0001 00:00:00.

  • parsing command option with default values and range constrains in C

    - by agramfort
    Hi, I need to parse command line arguments in C. My arguments are basically int or float with default values and range constraints. I've started to implement something that looks like this:

        option_float(float* out, int argc, char* argv, char* name, char* description,
                     float default_val, int is_optional, float min_value, float max_value)

    which I call, for example, with:

        float* pct;
        option_float(pct, argc, argv, "pct", "My super percentage option", 50, 1, FALSE, 0, 100)

    However, I don't want to reinvent the wheel! My objective is to have error checking of the range constraints, to throw an error when an option is not optional and is not set, and to generate the help message usually given by a usage() function. The usage text would look like this:

        --pct My super percentage option (default : 50). Should be in [0, 100]

    I've started with getopt but it is too limited for what I want to do, and I feel it still requires me to write too much code for a simple use case like this. Thanks!

  • parsing python to csv

    - by user185955
    I'm trying to download some game stats to do some analysis; the only problem is that each season the data there isn't 100% consistent. I grab the JSON file from the site, then wish to save it to a CSV with the first line of the CSV containing the heading for each column, so the heading would essentially be the key from the Python data type.

        #!/usr/bin/env python
        import requests
        import json
        import csv

        base_url = 'http://www.afl.com.au/api/cfs/afl/'
        token_url = base_url + 'WMCTok'
        player_url = base_url + 'matchItems/round'

        def printPretty(data):
            print(json.dumps(data, sort_keys=True, indent=2, separators=(',', ': ')))

        session = requests.Session()  # session makes it simple to use the token across the requests
        token = session.post(token_url).json()['token']  # get the token
        session.headers.update({'X-media-mis-token': token})  # set the token

        Season = 2014
        Roundno = 4
        if Roundno < 10:
            strRoundno = '0' + str(Roundno)
        else:
            strRoundno = str(Roundno)

        # get some data (could easily be a for loop, might want to put in a delay using Sleep so that you don't get IP blocked)
        data = session.get(player_url + '/CD_R' + str(Season) + '014' + strRoundno)

        # print everything
        printPretty(data.json())

        with open('stats_game_test.csv', 'w', newline='') as csvfile:
            spamwriter = csv.writer(csvfile, delimiter="'", quotechar='|', quoting=csv.QUOTE_ALL)
            for profile in data.json()['items']:
                spamwriter.writerow(['%s' % (profile)])

        #for key in data.json().keys():
        #    print("key: %s , value: %s" % (key, data.json()[key]))

    The above code grabs the JSON and writes it to a CSV, but it puts the key in each individual cell next to the value (e.g. 'venueId': 'CD_V190'); the keys need to go just across the first row as headings. It gives me a CSV file with data in the cells like this:

        Column A                    B
        'tempInCelsius': 17.0       'totalScore': 32
        'tempInCelsius': 16.0       'totalScore': 28

    What I want is the data like this:

        tempInCelsius    totalScore
        17               32
        16               28

    As I mentioned up the top, the data isn't always consistent, so if I define which fields to grab with spamwriter.writerow([profile['tempInCelsius'], profile['totalScore']]) then it will error out on certain data grabs. This is why I'm now trying the above method, so it just grabs everything regardless of what data is there.
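
    A minimal sketch of the usual stdlib approach for this kind of ragged data (not from the question): collect the union of keys across all items first, then let csv.DictWriter write the header row once and fill any missing fields with blanks. The items list below is a made-up stand-in for data.json()['items'].

        import csv

        # stand-in for data.json()['items']; real items may carry different key sets
        items = [
            {'tempInCelsius': 17.0, 'totalScore': 32, 'venueId': 'CD_V190'},
            {'tempInCelsius': 16.0, 'totalScore': 28},
        ]

        # union of every key seen, so inconsistent seasons still fit one header row
        fieldnames = sorted(set(key for item in items for key in item))

        with open('stats_game_test.csv', 'w', newline='') as csvfile:
            writer = csv.DictWriter(csvfile, fieldnames=fieldnames, restval='')
            writer.writeheader()      # keys go across the first row, once
            writer.writerows(items)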

  • parsing position files in ruby

    - by john
    I have a sample position file like the one below:

        789754654 COLORA SOMETHING1 19370119FYY076 2342423234SS323423
        742784897 COLORB SOMETHING2 20060722FYY076 2342342342SDFSD3423

    I am interested in positions 54-61 (the 4th column). I want to change the date to a different format, so the final outcome will be:

        789754654 COLORA SOMETHING1 01191937FYY076 2342423234SS323423
        742784897 COLORB SOMETHING2 07222006FYY076 2342342342SDFSD3423

    The columns are separated by spaces, not tabs, and the final file should have the exact number of spaces as the original file; the only thing changing should be the date format. How can I do this? I wrote a script, but it loses the original spaces and the positioning gets messed up:

        file.each_line do |line|
          dob = line.split(" ")
          puts dob[3]  # got the date. change its format
          5.times { puts "**" }
        end

    Can anyone suggest a better strategy so that the positioning in the original file remains the same?

  • Postback of delimited text from javascript and parsing on server side

    - by Alt_Doru
    In my ASP.NET page, I have a JavaScript object, like this:

        var args = new Object();
        args.Data1 = document.getElementById("Data1").value;
        args.Data2 = document.getElementById("Data2").value;
        args.Data3 = document.getElementById("Data3").value;

    The object is populated on the client side using user input data. I am passing the data to a C# method through an Ajax request:

        someObj.AjaxRequest(argsData1 + "|" + argsData2 + "|" + argsData3)

    Finally, I need to obtain the data in my C# code:

        string data1 = [JS args.Data1]
        string data2 = [JS args.Data2]
        string data3 = [JS args.Data3]

    My question is: what's the best solution for this? As I am concatenating bits of user input, I don't think it's best to use "|" as a delimiter. Also, it's not clear to me how to actually parse the data in my C# code to populate the three variables with the original data.

  • Parsing data without HTML tags

    - by user296507
    I need to extract the actual phone number from the HTML listed below, but I'm not really sure how to do it with Nokogiri's CSS selectors, since there are no HTML tags around it. When I use at_css('.phonetitle') it only parses "Phone" and not the number.

        <div class="detail">
          <span class="address">Corner of Toorak Road and Chapel Street, South Yarra</span><br>
          <span class="phonetitle">Phone</span> 95435 34341
          <br><br>
        </div>

  • parsing ssid with iwconfig in c

    - by user1781595
    I am building a bar for DWM (Ubuntu Linux) showing WiFi details such as the SSID. This is my code:

        #include <stdio.h>
        #include <stdlib.h>

        int main( int argc, char *argv[] )
        {
            FILE *fp;
            int status;
            char path[1035];

            /* Open the command for reading. */
            fp = popen("iwconfig", "r");
            if (fp == NULL) {
                printf("Failed to run command\n" );
                exit;
            }

            char s[500];
            /* Read the output a line at a time - output it. */
            while (fgets(path, sizeof(path)-1, fp) != NULL) {
                sprintf(s,"%s%s",s, path);
            }
            //printf("%s",s);

            /* close */
            pclose(fp);

            char delimiter[1] = "s";
            char *ptr;
            ptr = strtok(s, delimiter);

            printf("SSID: %s\n", ptr);

            return 0;
        }

    I am getting overflow errors and don't know what to do. I don't think that's a good way to get the SSID either... :/ Suggestions?

  • Parsing a Multi-Index Excel File in Pandas

    - by rhaskett
    I have a time series Excel file with a tri-level column MultiIndex that I would like to parse successfully, if possible. There are some results on Stack Overflow on how to do this for an index, but not for the columns, and the parse function's header argument does not seem to take a list of rows. The ExcelFile looks like the following:

        Column A is all the time series dates starting on A4
        Column B has top_level1 (B1), mid_level1 (B2), low_level1 (B3), data (B4-B100+)
        Column C has null (C1), null (C2), low_level2 (C3), data (C4-C100+)
        Column D has null (D1), mid_level2 (D2), low_level1 (D3), data (D4-D100+)
        Column E has null (E1), null (E2), low_level2 (E3), data (E4-E100+)
        ...

    So there are two low_level values, many mid_level values and a few top_level values, but the trick is that the top and mid level values are null and are assumed to be the values to the left. So, for instance, all the columns above would have top_level1 as the top multi-index value. My best idea so far is to use transpose, but then it fills in Unnamed: # everywhere and doesn't seem to work. In pandas 0.13, read_csv seems to have a header parameter that can take a list, but this doesn't seem to work with parse.
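
    For reference, a hedged sketch of how more recent pandas versions read this kind of layout directly (not tested against the file in the question; the file name and sheet are assumptions): read_excel accepts a list of header rows and builds a column MultiIndex from them, with the blank upper-level cells typically filled from the value to their left.

        import pandas as pd

        # header=[0, 1, 2] -> rows 1-3 become a three-level column MultiIndex;
        # index_col=0 keeps column A (the time series dates) as the index.
        df = pd.read_excel('timeseries.xlsx', sheet_name=0,
                           header=[0, 1, 2], index_col=0)

        # select everything under top_level1 / mid_level1
        print(df['top_level1']['mid_level1'].head())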

  • IIS redirect url for virtual directory

    - by Jouni
    Hello, how can I set a redirect URL for a virtual directory in IIS 7.0? I have installed the latest URL Rewrite module 2. Let me explain my problem with an example. I have a website on my IIS 7.0 server: www.mysite.com. I decided to create a virtual directory, sales, under my site which points to the website root directory. Now I need to create a redirect URL for the vdir; the vdir points to the same virtual root directory as my site root does. The big idea is that I can type www.mysite/sales in the browser and be automatically redirected to the URL www.mysite.com?productid=200. I tried to create the redirect with a rewrite URL for the vdir (not the website), but I always get this error message: "cannot add duplicate collection entry of type 'rule' with unique key attribute 'name' set to 'test'". This happens when I point to the virtual vdir and try to add a rule. I can add rules at the website level, but those rules don't work; I mean the URL www.mysite/sales gives me the following error. I know that the key is unique, I checked it in web.config. This kind of feature was really easy to use in IIS 6.0: just point at the vdir with your mouse and set Properties, then a redirect to a URL. Please can someone explain the right way to do it in IIS 7.0?

  • Parsing complicated query parameters

    - by Will
    My Python server receives jobs that contain a list of the items to act against, rather like a search query term; an example input:

        (Customer:24 OR Customer:24 OR (Group:NW NOT Customer:26))

    To complicate matters, customers can join and leave groups at any time, and the job should be updated live when this happens. What is the best way to parse, apply and store (in my RDBMS) this kind of list of constraints?
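
    A rough sketch of one way to handle the parsing half (the live-update half is really a storage/query question): tokenize the expression and run a small recursive-descent parser, keeping the resulting tree around so it can be re-evaluated whenever group membership changes. The token format and the treatment of OR/NOT as left-associative binary operators are assumptions made for illustration.

        import re

        TOKEN = re.compile(r'\(|\)|OR\b|NOT\b|\w+:\w+')

        def tokenize(query):
            return TOKEN.findall(query)

        def parse(tokens):
            # expr := atom ((OR | NOT) atom)*
            node = parse_atom(tokens)
            while tokens and tokens[0] in ('OR', 'NOT'):
                op = tokens.pop(0)
                node = (op, node, parse_atom(tokens))
            return node

        def parse_atom(tokens):
            # atom := '(' expr ')' | field:value
            tok = tokens.pop(0)
            if tok == '(':
                node = parse(tokens)
                tokens.pop(0)          # consume the closing ')'
                return node
            field, value = tok.split(':', 1)
            return ('match', field, value)

        tree = parse(tokenize('(Customer:24 OR Customer:24 OR (Group:NW NOT Customer:26))'))
        print(tree)
        # ('OR', ('OR', ('match', 'Customer', '24'), ('match', 'Customer', '24')),
        #        ('NOT', ('match', 'Group', 'NW'), ('match', 'Customer', '26')))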

  • Manually extracting portions of strings contained in a list (parsing)

    - by user1652011
    I'm aware that there are modules that fully simplify this task, but given that I am running from a base install of Python (standard modules only), how would I extract the following? I have a list; this list is the contents, line by line, of a webpage. Here is a mock-up (unformatted) of two of those lines for informative purposes:

        <script> link = "/scripts/playlists/1/" + a.id + "/0-5417069212.asx"; <script>
        "<a href="/apps/audio/?feedId=11065"><span class="px13">Eastern Metro Area Fire</span>"

    From the above strings I need the following extracted: the feedId (11065), which is incidentally a.id in the code above, "/scripts/playlists/1/", and "/0-5417069212.asx". Remembering that each of these lines is just the contents of objects in a list, how would I go about extracting that data? Here is the full list:

        contents = urllib2.urlopen("http://www.radioreference.com/apps/audio/?ctid=5586")

    Pseudo-code:

        from urllib2 import urlopen as getpage
        page_contents = getpage("http://www.radioreference.com/apps/audio/?ctid=5586")
        feedID = % in (page_contents.search() for "/apps/audio/?feedId=%")
        titleID = % in (page_contents.search() for "<span class="px13">%</span>")
        playlistID = % in (page_contents.search() for "link = "%" + a.id + "*.asx";")
        asxID = * in (page_contents.search() for "link = "*" + a.id + "%.asx";")
        streamURL = "http://www.radioreference.com/" + playlistID + feedID + asxID + ".asx"

    I plan to format it so that streamURL ends up as: http://www.radioreference.com/scripts/playlists/1/11065/0-5417067072.asx
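
    A rough sketch of how the base install can handle this: the re module is in the standard library, so each piece can be pulled out with a capture group. The two mock-up lines from the question are used as input here, and the patterns are assumptions based on those snippets rather than on the live page.

        import re

        # mock-up page fragments from the question; in practice this would be the
        # text read from urllib2.urlopen(...)
        page = '''
        link = "/scripts/playlists/1/" + a.id + "/0-5417069212.asx";
        <a href="/apps/audio/?feedId=11065"><span class="px13">Eastern Metro Area Fire</span>
        '''

        feed_id = re.search(r'feedId=(\d+)', page).group(1)
        title = re.search(r'<span class="px13">([^<]+)</span>', page).group(1)
        playlist, asx = re.search(r'link = "([^"]+)" \+ a\.id \+ "([^"]+)\.asx"', page).groups()

        stream_url = "http://www.radioreference.com" + playlist + feed_id + asx + ".asx"
        print(stream_url)
        # -> http://www.radioreference.com/scripts/playlists/1/11065/0-5417069212.asx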

  • Parsing string based on initial format

    - by Kayla
    I'm trying to parse a set of lines and extract certain parts of the string based on an initial format (reading a configuration file). A little more explanation: the format can contain up to 4 parts to be formatted. In this case, %S will skip the part, %a-%c will extract the part and treat it as a string, and %d as an int. What I am trying to do now is come up with some clever way to parse it. So far I came up with the following prototype; however, my pointer arithmetic still needs some work to skip/extract the parts. Ultimately each part will be stored in an array of structs, such as:

        struct st_temp {
            char *parta;
            char *partb;
            char *partc;
            char *partd;
        };

    ...

        #include <stdio.h>
        #include <string.h>

        #define DIM(x) (sizeof(x)/sizeof(*(x)))

        void process (const char *fmt, const char *line)
        {
            char c;
            const char *src = fmt;
            while ((c = *src++) != '\0') {
                if (c == 'S');           // skip part
                else if (c == 'a');      // extract %a
                else if (c == 'b');      // extract %b
                else if (c == 'c');      // extract %c
                else if (c == 'd');      // extract %d (int)
                else {
                    printf("Unknown format\n");
                    exit(1);
                }
            }
        }

        static const char *input[] = {
            "bar 200.1 / / (zaz) - \"bon 10\"",
            "foo 100.1 / / (baz) - \"apt 20\"",
        };

        int main (void)
        {
            const char *fmt = "%S %a / / (%b) - \"%c %d\"";
            size_t i;
            for(i = 0; i < DIM (input); i++) {
                process (fmt, input[i]);
            }
            return (0);
        }

  • Parsing ip addresses in php

    - by user2938780
    I am trying to get the number of active connections (real time) from a log file, counting by connected IP with a "play" status, but instead it's giving me the total number of IPs that have ever had a play status. The number doesn't decrease at all; it keeps increasing as soon as a new IP is added. How can I fix it? Here is my code:

        $stringToParse = file_get_contents('wowzamediaserver_access.log');
        preg_match_all('/\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3}/', $stringToParse, $matchOP);
        echo "Number of connections: ".sizeof(array_unique($matchOP[0]));

    Here is the log:

        2013-10-30 14:54:36 CET stop stream INFO 200 account1 - _defaultVHost_ account1 _definst_ 149.21 streamURL 1935 fullStreamURL IP_ADDRESS_1 http (cupertino) -
        2013-10-30 14:56:12 CET play stream INFO 200 account2 - _defaultVHost_ account1 _definst_ 149.21 streamURL 1935 fullStreamURL IP_ADDRESS_2 rtmp (cupertino) -
        2013-10-30 14:58:23 CET stop stream INFO 200 account2 - _defaultVHost_ account1 _definst_ 149.21 streamURL 1935 fullStreamURL IP_ADDRESS_2 rtmp (cupertino) -
        2013-10-30 14:58:39 CET play stream INFO 200 account1 - _defaultVHost_ account1 _definst_ 149.21 streamURL 1935 fullStreamURL IP_ADDRESS_1 http (cupertino) -
        2013-10-30 14:59:12 CET play stream INFO 200 account2 - _defaultVHost_ account1 _definst_ 149.21 streamURL 1935 fullStreamURL IP_ADDRESS_2 rtmp (cupertino) -

    I want to be able to count an IP whenever it has a "play" status and not count it whenever it has a "stop" status:

        2013-10-30 14:59:00 CET play stream INFO 200 tv2vielive - _defaultVHost_ tv2vielive _definst_ 0.315 [any] 1935 rtmp://tv2vie.zion3cloud.com:1935/tv2vielive 78.247.255.186 rtmp http://www.tv2vie.org/swf/flowplayer-3.2.16.swf WIN 11,7,700,202 92565864 3576 3455 1 0 0 0 tv2vielive - - - - - rtmp://tv2vie.zion3cloud.com:1935/tv2vielive/tv2vielive rtmp://tv2vie.zion3cloud.com:1935/tv2vielive/tv2vielive - rtmp://tv2vie.zion3cloud.com:1935/tv2vielive -
        2013-10-30 14:59:04 CET stop stream INFO 200 tv2vielive - _defaultVHost_ tv2vielive _definst_ 4.75 [any] 1935 rtmp://tv2vie.zion3cloud.com:1935/tv2vielive 78.247.255.186 rtmp http://www.tv2vie.org/swf/flowplayer-3.2.16.swf WIN 11,7,700,202 92565864 3576 512571 1 7222 0 503766 tv2vielive - - - - - rtmp://tv2vie.zion3cloud.com:1935/tv2vielive/tv2vielive rtmp://tv2vie.zion3cloud.com:1935/tv2vielive/tv2vielive - rtmp://tv2vie.zion3cloud.com:1935/tv2vielive -

    Are there any solutions? I have even tried the first answer's solution but I am getting "0" play connections:

        $stringToParse = file_get_contents('wowzamediaserver_access.log');
        $pattern = '~^.* play.* ( ([0-9]{1,3}+\.){3,3}[0-9]{1,3}).*$~m';
        preg_match_all($pattern, $stringToParse, $matches);
        echo count($matches[1]) . ' play actions';

    Whereas whenever I use my code, I am getting "Number of connections: xxxxx" (the actual count of IPs). My concern is that I only need the count of IPs with a "play" status; if an IP changes to "stop" then it shouldn't be counted.

  • Parsing a string for dates in PHP

    - by nickf
    Given an arbitrary string, for example ("I'm going to play croquet next Friday" or "Gadzooks, is it 17th June already?"), how would you go about extracting the dates from there? If this is looking like a good candidate for the too-hard basket, perhaps you could suggest an alternative. I want to be able to parse Twitter messages for dates. The tweets I'd be looking at would be ones which users are directing at this service, so they could be coached into using an easier format, however I'd like it to be as transparent as possible. Is there a good middle ground you could think of?

  • Drupal with clean urls turned on is putting question marks in URL

    - by aussiegeek
    I have a Drupal site with clean URLs; the pages load correctly, but then the URL is rewritten with a question mark in it, which I don't want the user to see. My .htaccess is:

        <IfModule mod_rewrite.c>
          RewriteEngine on

          # If your site can be accessed both with and without the 'www.' prefix, you
          # can use one of the following settings to redirect users to your preferred
          # URL, either WITH or WITHOUT the 'www.' prefix. Choose ONLY one option:
          #
          # To redirect all users to access the site WITH the 'www.' prefix,
          # (http://example.com/... will be redirected to http://www.example.com/...)
          # adapt and uncomment the following:
          # RewriteCond %{HTTP_HOST} ^example\.com$ [NC]
          # RewriteRule ^(.*)$ http://www.example.com/$1 [L,R=301]
          #
          # To redirect all users to access the site WITHOUT the 'www.' prefix,
          # (http://www.example.com/... will be redirected to http://example.com/...)
          # uncomment and adapt the following:
          # RewriteCond %{HTTP_HOST} ^www\.example\.com$ [NC]
          # RewriteRule ^(.*)$ http://example.com/$1 [L,R=301]

          # Modify the RewriteBase if you are using Drupal in a subdirectory or in a
          # VirtualDocumentRoot and the rewrite rules are not working properly.
          # For example if your site is at http://example.com/drupal uncomment and
          # modify the following line:
          # RewriteBase /drupal
          #
          # If your site is running in a VirtualDocumentRoot at http://example.com/,
          # uncomment the following line:
          RewriteBase /

          # Rewrite URLs of the form 'x' to the form 'index.php?q=x'.
          RewriteCond %{REQUEST_URI} !(connect|administration)
          RewriteCond %{REQUEST_FILENAME} !-f
          RewriteCond %{REQUEST_FILENAME} !-d
          RewriteCond %{REQUEST_URI} !=/favicon.ico
          RewriteRule ^(.*)$ index.php?q=$1 [L,QSA]
        </IfModule>
