Search Results

Search found 541 results on 22 pages for 'tokens'.

  • Uploadify and rails 3 authenticity tokens

    - by Ceilingfish
    Hi chaps, I'm trying to get a file upload progress bar working in a Rails 3 app using Uploadify
    (http://www.uploadify.com), and I'm stuck at authenticity tokens. My current Uploadify config looks like:

        <script type="text/javascript" charset="utf-8">
          $(document).ready(function() {
            $("#zip_input").uploadify({
              'uploader': '/flash/uploadify.swf',
              'script': $("#upload").attr('action'),
              'scriptData': {
                'format': 'json',
                'authenticity_token': encodeURIComponent('<%= form_authenticity_token if protect_against_forgery? %>')
              },
              'fileDataName': "world[zip]",
              //'scriptAccess': 'always', // Uncomment this if for some reason it doesn't work
              'auto': true,
              'fileDesc': 'Zip files only',
              'fileExt': '*.zip',
              'width': 120,
              'height': 24,
              'cancelImg': '/images/cancel.png',
              // We assume that we can refresh the list by doing a js get on the current page
              'onComplete': function(event, data) { $.getScript(location.href) },
              'displayData': 'speed'
            });
          });
        </script>

    But I am getting this response from Rails:

        Started POST "/worlds" for 127.0.0.1 at 2010-04-22 12:39:44
        ActionController::InvalidAuthenticityToken (ActionController::InvalidAuthenticityToken):
        Rendered /opt/local/lib/ruby/gems/1.8/gems/actionpack-3.0.0.beta3/lib/action_dispatch/middleware/templates/rescues/_trace.erb (1.0ms)
        Rendered /opt/local/lib/ruby/gems/1.8/gems/actionpack-3.0.0.beta3/lib/action_dispatch/middleware/templates/rescues/_request_and_response.erb (6.6ms)
        Rendered /opt/local/lib/ruby/gems/1.8/gems/actionpack-3.0.0.beta3/lib/action_dispatch/middleware/templates/rescues/diagnostics.erb within rescues/layout (12.2ms)

    This appears to be because I'm not sending the authentication cookie along with the request. Does
    anyone know how I can get the values I should be sending there, and how I can make Rails read the
    token from the HTTP POST rather than trying to find it as a cookie?
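
    For what it's worth, the reason the cookie matters: protect_from_forgery ties the form's
    authenticity token to the user's session, and the session arrives in a cookie; the Flash uploader
    makes its own POST without the browser's cookies, so Rails has nothing to compare the posted token
    against. A toy illustration of that cookie-plus-form-token scheme in Python (not Rails internals;
    the usual Rails-side workaround is to pass the session id in scriptData and restore the session
    from params in a small middleware):

        # Toy illustration of cookie-backed CSRF checking -- not Rails internals.
        # The session comes from a cookie; the token comes from the form body.
        # If the client (here, a Flash uploader) omits the cookie, validation
        # fails no matter how correct the posted token is.

        def verify_csrf(cookies: dict, form: dict) -> bool:
            session_token = cookies.get("session", {}).get("csrf_token")
            posted_token = form.get("authenticity_token")
            return session_token is not None and session_token == posted_token

        browser_post = {"cookies": {"session": {"csrf_token": "abc123"}},
                        "form": {"authenticity_token": "abc123"}}
        flash_post = {"cookies": {},  # swf requests don't carry the browser's cookies
                      "form": {"authenticity_token": "abc123"}}

        print(verify_csrf(**browser_post))  # True
        print(verify_csrf(**flash_post))    # False -> InvalidAuthenticityToken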

  • Vimeo Desktop App OAuth

    - by Barry
    Hi guys, I'm currently having massive trouble with Vimeo's OAuth implementation and my desktop
    app. My program does the following correctly:

    1. Requests an Unauthorized Request Token with my key and secret, and gets back a token and a
       token secret.
    2. Generates a URL for the user to go to using the token, which shows our application's name and
       allows the user to authorize us to use his/her account. It then shows a verifier, which the
       user copies back into our app.

    The problem is the third step, actually exchanging the tokens for the access tokens. Basically,
    every time we try to get them we get "Invalid / expired token - The oauth_token passed was either
    not valid or has expired". I looked at the documentation, and when deployed as a web app there's
    supposed to be a callback to a server which gives the user an "authorized token", but as I'm
    developing a desktop app we can't do this. So I assume the token retrieved in step 1 is valid for
    this step (actually it seems it is: http://vimeo.com/forums/topic:22605). So I'm wondering now,
    am I missing something here on my actual Vimeo application account? Is it treating it as a
    web-hosted app with callbacks? All the elements are there for this to work, and I've used this
    same component to create a Twitter OAuth login in exactly the same way and it was fine. Thanks in
    advance, Barry
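
    For comparison, here is the complete out-of-band (desktop) OAuth 1.0a dance sketched with
    Python's requests-oauthlib. This is a hedged sketch: the endpoint URLs are from memory and the
    key/secret are placeholders. The detail that usually triggers "invalid / expired token" in step 3
    is signing the access-token request with anything other than the step-1 request token, its
    secret, and the verifier together:

        from requests_oauthlib import OAuth1Session  # pip install requests-oauthlib

        CLIENT_KEY = "your-consumer-key"        # placeholders
        CLIENT_SECRET = "your-consumer-secret"

        # Endpoint paths from memory -- check them against Vimeo's OAuth docs.
        REQUEST_TOKEN_URL = "https://vimeo.com/oauth/request_token"
        AUTHORIZE_URL = "https://vimeo.com/oauth/authorize"
        ACCESS_TOKEN_URL = "https://vimeo.com/oauth/access_token"

        # Step 1: unauthorized request token; "oob" marks a desktop (no-callback) client.
        oauth = OAuth1Session(CLIENT_KEY, client_secret=CLIENT_SECRET, callback_uri="oob")
        request_token = oauth.fetch_request_token(REQUEST_TOKEN_URL)

        # Step 2: the user authorizes in a browser and reads the verifier off the page.
        print("Visit:", oauth.authorization_url(AUTHORIZE_URL))
        verifier = input("Verifier: ").strip()

        # Step 3: the access-token request must be signed with the *request* token,
        # its secret, and the verifier from step 2.
        oauth = OAuth1Session(CLIENT_KEY, client_secret=CLIENT_SECRET,
                              resource_owner_key=request_token["oauth_token"],
                              resource_owner_secret=request_token["oauth_token_secret"],
                              verifier=verifier)
        access_token = oauth.fetch_access_token(ACCESS_TOKEN_URL)
        print(access_token["oauth_token"], access_token["oauth_token_secret"])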

  • JavaCC: How can I specify which token(s) are expected in a certain context?

    - by java.is.for.desktop
    Hello, everyone! I need to make JavaCC aware of a context (the current parent token) and,
    depending on that context, expect different token(s) to occur. Consider the following
    pseudo-code:

        TOKEN <abc>  { "abc*" }  // recognizes "abc", "abcd", "abcde", ...
        TOKEN <abcd> { "abcd*" } // recognizes "abcd", "abcde", "abcdef", ...

        TOKEN <element1> { "element1" "[" expectOnly(<abc>) "]" }
        TOKEN <element2> { "element2" "[" expectOnly(<abcd>) "]" }

    So when the generated parser is "inside" a token named "element1" and it encounters "abcdef", it
    recognizes it as <abc>, but when it's "inside" a token named "element2" it recognizes the same
    string as <abcd>:

        element1 [ abcdef ] // aha! it can only be <abc>
        element2 [ abcdef ] // aha! it can only be <abcd>

    If I'm not wrong, this would behave similarly to the more complex DTD definitions of an XML file.
    So, how can one specify in which "context" which token(s) are valid/expected?

    NOTE: It would not be enough for my real case to define a kind of "hierarchy" of tokens, so that
    "abcdef" is always first matched against <abcd> and then <abc>. I really need context-aware
    tokens.
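
    JavaCC's usual tool for this is lexical states: tokens are declared per state, and matching a
    token can switch the lexer into another state, so the same characters lex differently depending
    on context. A minimal Python sketch of that idea (a tokenizer whose active token set depends on
    the state pushed by the enclosing element; the token names mirror the pseudo-code above):

        import re

        # Token sets per lexical state; which patterns apply depends on context.
        STATES = {
            "DEFAULT":  [("element1", r"element1"), ("element2", r"element2")],
            "IN_ELEM1": [("lbracket", r"\["), ("abc",  r"abc\w*"), ("rbracket", r"\]")],
            "IN_ELEM2": [("lbracket", r"\["), ("abcd", r"abcd\w*"), ("rbracket", r"\]")],
        }
        # Matching these tokens switches the state (JavaCC: a ": STATE" suffix / SwitchTo()).
        SWITCH = {"element1": "IN_ELEM1", "element2": "IN_ELEM2", "rbracket": "DEFAULT"}

        def tokenize(text):
            state, pos = "DEFAULT", 0
            while pos < len(text):
                if text[pos].isspace():
                    pos += 1
                    continue
                for name, pattern in STATES[state]:
                    m = re.match(pattern, text[pos:])
                    if m:
                        yield name, m.group()
                        pos += m.end()
                        state = SWITCH.get(name, state)
                        break
                else:
                    raise SyntaxError(f"no token matches {text[pos:pos+10]!r} in state {state}")

        print(list(tokenize("element1 [ abcdef ] element2 [ abcdef ]")))
        # abcdef lexes as an 'abc' token inside element1, but as 'abcd' inside element2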

  • Can't get tokens when using OAuth with LinkedIn API

    - by Angela
    Hi, I was wondering if anyone can help me get this basic OAuth implementation to work using the
    LinkedIn API. The output for the indexes oauth_token and oauth_token_secret is blank. The file I
    refer to, OAuth.php, is a set of classes that help generate the token requests and tokens. That
    file is here: http://www.easy-share.com/1909603316/OAuth.php

        <?php
        session_start();
        require_once("OAuth.php");

        $app_token = "YOUR APP TOKEN GOES HERE";
        $app_key = "YOUR APP KEY GOES HERE";
        $domain = "https://api.linkedin.com/uas/oauth";
        $sig_method = new OAuthSignatureMethod_HMAC_SHA1();
        $test_consumer = new OAuthConsumer($app_token, $app_key, NULL);
        $callback = "http://".$_SERVER['HTTP_HOST'].$_SERVER['PHP_SELF']."?action=getaccesstoken";

        # First time through, get a request token from LinkedIn.
        if (!isset($_GET['action'])) {
            $req_req = OAuthRequest::from_consumer_and_token($test_consumer, NULL, "POST",
                                                             $domain . "/requestToken");
            $req_req->set_parameter("oauth_callback", $callback); # part of OAuth 1.0a - callback now in requestToken
            $req_req->sign_request($sig_method, $test_consumer, NULL);

            $ch = curl_init();
            // make sure we submit this as a post
            curl_setopt($ch, CURLOPT_POSTFIELDS, ''); // New line
            curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1);
            curl_setopt($ch, CURLOPT_SSL_VERIFYPEER, 0);
            curl_setopt($ch, CURLOPT_HTTPHEADER, array( $req_req->to_header() ));
            curl_setopt($ch, CURLOPT_URL, $domain . "/requestToken");
            curl_setopt($ch, CURLOPT_POST, 1);
            $output = curl_exec($ch);
            curl_close($ch);

            print_r($req_req);  // <---- add this line
            print("$output\n"); // <---- add this line

            parse_str($output, $oauth);

            # pop these in the session for now - there's probably a more secure way of doing this!
            # We'll need them when the callback is called.
            $_SESSION['oauth_token'] = $oauth['oauth_token'];
            $_SESSION['oauth_token_secret'] = $oauth['oauth_token_secret'];

            echo("token: " . $oauth['oauth_token']);
            echo("secret: " . $oauth['oauth_token_secret']);
            exit;
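
    When those indexes come back blank, it almost always means the requestToken call itself failed,
    so $output holds an error body rather than a form-encoded token pair; printing $output (as the
    added lines do) is the right instinct. For illustration, the equivalent parse-and-check step in
    Python, which fails loudly instead of silently yielding empty strings:

        from urllib.parse import parse_qs

        def extract_tokens(body: str) -> tuple:
            """Parse an OAuth 1.0 token response like
            'oauth_token=abc&oauth_token_secret=def&...'."""
            fields = parse_qs(body)
            if "oauth_token" not in fields or "oauth_token_secret" not in fields:
                # An HTML/XML error page parses to nothing useful -- surface it.
                raise ValueError(f"no tokens in response; body was: {body[:200]!r}")
            return fields["oauth_token"][0], fields["oauth_token_secret"][0]

        print(extract_tokens("oauth_token=abc&oauth_token_secret=def"))  # ('abc', 'def')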

  • Security Token/Cross Domain Cookie in Classic ASP?

    - by jlrolin
    I have an interesting conundrum. There is a site on a completely separate domain, say
    http://www.x.com, and our own site, http://www.y.com. The y.com site is actually a Classic ASP
    site, and we aren't converting it to .NET at this time. The problem is that there is a link on
    x.com that redirects to y.com from a members area. We want to "authenticate" the user to make
    sure they are a member of the other site. If they are, they are directed to a members area on
    y.com. If not, they have to provide login information on y.com. Cookies obviously don't work due
    to cross-domain security, but is there a way around this? I've also looked at a service for
    tokens, but I'm not sure exactly how that works in Classic ASP. Any ideas or suggestions?
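
    One cross-domain pattern that works without cookies (a sketch, with hypothetical parameter
    names): x.com appends a short-lived signed token to the redirect, e.g.
    ?user=...&ts=...&sig=HMAC(secret, user + ts), and y.com recomputes the signature with the same
    shared secret before letting the visitor into the members area. The idea in Python; the same
    HMAC-SHA256 arithmetic can be reproduced in Classic ASP/VBScript or delegated to a small web
    service:

        import hashlib, hmac, time

        SECRET = b"shared-secret-known-to-x.com-and-y.com"  # hypothetical

        def make_token(user_id: str) -> str:
            ts = str(int(time.time()))
            sig = hmac.new(SECRET, f"{user_id}|{ts}".encode(), hashlib.sha256).hexdigest()
            return f"user={user_id}&ts={ts}&sig={sig}"   # append to the redirect URL

        def verify_token(user_id: str, ts: str, sig: str, max_age: int = 300) -> bool:
            expected = hmac.new(SECRET, f"{user_id}|{ts}".encode(), hashlib.sha256).hexdigest()
            fresh = int(time.time()) - int(ts) <= max_age   # reject replayed/stale links
            return fresh and hmac.compare_digest(expected, sig)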

  • SQL: convert tokens in a string or elements of an array into rows of a table

    - by slowpoison
    Is there a simple way in SQL to convert a string or an array to rows of a table? For example,
    let's say the string is 'a,b,c,d,e,f,g'. I'd prefer an SQL statement that takes that string,
    splits it at commas, and inserts the resulting strings into a table. In PostgreSQL I can use
    regexp_split_to_array() and split the string into an array. So, if you know a way to insert an
    array's elements as rows into a table, that would work too.
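
    For the PostgreSQL case specifically, regexp_split_to_table('a,b,c,d,e,f,g', ',') returns one
    row per element, and unnest(some_array) does the same for an array, so an INSERT ... SELECT
    covers it in pure SQL. If the split has to happen client-side instead, the split-then-bulk-insert
    shape looks like this; a sketch using Python's stdlib sqlite3 as a stand-in for any DB driver:

        import sqlite3

        conn = sqlite3.connect(":memory:")
        conn.execute("CREATE TABLE items (value TEXT)")

        s = "a,b,c,d,e,f,g"
        # One parameterized INSERT per element, executed as a single batch.
        conn.executemany("INSERT INTO items (value) VALUES (?)",
                         [(tok,) for tok in s.split(",")])

        print(conn.execute("SELECT value FROM items").fetchall())
        # [('a',), ('b',), ('c',), ('d',), ('e',), ('f',), ('g',)]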

  • ANTLR lexer mismatches tokens

    - by Barry Brown
    I have a simple ANTLR grammar, which I have stripped down to its bare essentials to demonstrate
    this problem I'm having. I am using ANTLRworks 1.3.1.

        grammar sample;

        assignment : IDENT ':=' NUM ';' ;

        IDENT : ('a'..'z')+ ;
        NUM : ('0'..'9')+ ;
        WS : (' '|'\n'|'\t'|'\r')+ {$channel=HIDDEN;} ;

    Obviously, this statement is accepted by the grammar:

        x := 99;

    But this one also is:

        x := @!$()()%99***;

    Output from the ANTLRworks Interpreter: (screenshot omitted). What am I doing wrong? Even other
    sample grammars that come with ANTLR (such as the CMinus grammar) exhibit this behavior.
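
    If I remember ANTLR 3's lexer behaviour correctly, characters that match no lexer rule are
    reported as lexical errors and then skipped rather than aborting the parse, so the parser still
    receives IDENT ':=' NUM ';' for the second statement and happily accepts it; the interpreter
    just doesn't make the lexer errors very visible. A small Python sketch of why "report and skip"
    makes both statements look identical to the parser:

        import re

        TOKEN_RE = re.compile(r"(?P<IDENT>[a-z]+)|(?P<NUM>[0-9]+)|(?P<ASSIGN>:=)"
                              r"|(?P<SEMI>;)|(?P<WS>\s+)")

        def lex(text, fail_fast=False):
            pos, out = 0, []
            while pos < len(text):
                m = TOKEN_RE.match(text, pos)
                if m is None:
                    if fail_fast:
                        raise SyntaxError(f"stray character {text[pos]!r} at {pos}")
                    pos += 1          # report-and-skip: the parser never sees the junk
                    continue
                if m.lastgroup != "WS":
                    out.append((m.lastgroup, m.group()))
                pos = m.end()
            return out

        print(lex("x := @!$()()%99***;"))
        # [('IDENT', 'x'), ('ASSIGN', ':='), ('NUM', '99'), ('SEMI', ';')]
        # -> exactly the token stream of "x := 99;", so the parser accepts it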

  • Bison: Optional tokens in a single rule.

    - by Simone Margaritelli
    Hi there. I'm using GNU Bison 2.4.2 to write a grammar for a new language I'm working on, and I
    have a question. When I specify a rule, let's say:

        statement : T_CLASS T_IDENT '{' T_CLASS_MEMBERS '}' {
                        // create a node for the statement ...
                    }

    If I have a variation on the rule, for instance:

        statement : T_CLASS T_IDENT T_EXTENDS T_IDENT_LIST '{' T_CLASS_MEMBERS '}' {
                        // create a node for the statement ...
                    }

    where (from the flex scanner rules):

        "class"                     return T_CLASS;
        "extends"                   return T_EXTENDS;
        [a-zA-Z\_][a-zA-Z0-9\_]*    return T_IDENT;

    (and T_IDENT_LIST is a rule for comma-separated identifiers), is there any way to specify all of
    this in only one rule, somehow marking the "T_EXTENDS T_IDENT_LIST" part as optional? I've
    already tried:

        T_CLASS T_IDENT (T_EXTENDS T_IDENT_LIST)? '{' T_CLASS_MEMBERS '}' {
            // create a node for the statement ...
        }

    but Bison gave me an error. Thanks
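
    Bison's input language has no ()? operator; the stock idiom is a helper nonterminal with an
    empty alternative (e.g. an extends_opt rule deriving either nothing or T_EXTENDS T_IDENT_LIST),
    which collapses both variations into one statement rule. Here is that idiom as a runnable sketch
    in PLY, Python's Bison workalike, since the pattern carries over almost token for token (the
    class-members part is elided to keep it short):

        import ply.lex as lex
        import ply.yacc as yacc

        tokens = ("T_CLASS", "T_EXTENDS", "T_IDENT")
        literals = "{},"

        def t_T_IDENT(t):
            r"[a-zA-Z_][a-zA-Z0-9_]*"
            t.type = {"class": "T_CLASS", "extends": "T_EXTENDS"}.get(t.value, "T_IDENT")
            return t

        t_ignore = " \t\n"
        def t_error(t):
            t.lexer.skip(1)

        def p_statement(p):
            """statement : T_CLASS T_IDENT extends_opt '{' '}'"""
            p[0] = ("class", p[2], p[3])

        def p_extends_opt(p):
            """extends_opt : T_EXTENDS ident_list
                           | empty"""
            p[0] = p[2] if len(p) == 3 else []   # empty alternative -> no superclasses

        def p_ident_list(p):
            """ident_list : ident_list ',' T_IDENT
                          | T_IDENT"""
            p[0] = [p[1]] if len(p) == 2 else p[1] + [p[3]]

        def p_empty(p):
            "empty :"

        def p_error(p):
            raise SyntaxError(p)

        lex.lex()
        parser = yacc.yacc()
        print(parser.parse("class Foo { }"))                   # ('class', 'Foo', [])
        print(parser.parse("class Foo extends Bar, Baz { }"))  # ('class', 'Foo', ['Bar', 'Baz'])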

  • java regex: capture multiline sequence between tokens

    - by Guillaume
    I'm struggling with a regex for splitting log files into log sequences in order to match
    patterns inside those sequences. The log format is:

        timestamp fieldA fieldB fieldn log message1
        timestamp fieldA fieldB fieldn log message2
        log message2bis
        timestamp fieldA fieldB fieldn log message3

    The timestamp regex is known. I want to extract every log sequence (potentially multiline)
    between timestamps, and I want to keep the timestamp. At the same time I want to keep an exact
    count of lines. What I need is how to decorate the timestamp pattern to make it split my log
    file into log sequences. I cannot split the whole file as a String, since the file content is
    provided in a CharBuffer. Here is a sample method that will be using this log sequence matcher:

        private void matches(File f, CharBuffer cb) {
            Matcher sequenceBreak = sequencePattern.matcher(cb); // sequence matcher
            int lines = 1;
            int sequences = 0;
            while (sequenceBreak.find()) {
                sequences++;
                String sequence = sequenceBreak.group();
                if (filter.accept(sequence)) {
                    System.out.println(f + ":" + lines + ":" + sequence);
                }
                // count lines
                Matcher lineBreak = LINE_PATTERN.matcher(sequence);
                while (lineBreak.find()) {
                    lines++;
                }
                if (sequenceBreak.end() == cb.limit()) {
                    break;
                }
            }
        }
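
    One way to decorate the pattern: anchor it with MULTILINE, let DOTALL carry it across lines, and
    end it with a lookahead for the next timestamp (or end of input), i.e. something like
    (?ms)^TS.*?(?=^TS|\z) in java.util.regex terms; since find() works on any CharSequence, this
    also suits a CharBuffer. A sketch with the same regex semantics in Python (TS below is a
    hypothetical stand-in for the known timestamp pattern):

        import re

        TS = r"\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}"  # stand-in for the known timestamp regex

        log = """2010-04-22 12:00:01 A B n log message1
        2010-04-22 12:00:02 A B n log message2
        log message2bis
        2010-04-22 12:00:03 A B n log message3
        """.replace("        ", "")  # strip the indentation of this snippet

        # DOTALL lets .*? cross newlines; MULTILINE makes ^ match at line starts;
        # the lookahead stops each match just before the next timestamp (or end of input).
        seq_re = re.compile(rf"(?ms)^{TS}.*?(?=^{TS}|\Z)")

        line_no = 1
        for m in seq_re.finditer(log):
            seq = m.group()
            print(f"line {line_no}: {seq!r}")    # the timestamp stays in the match
            line_no += seq.count("\n")           # keep the exact line count as we go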

  • Accessing JSON raw tokens in C#?

    - by user318332
    My JSON string looks like { abc: 123, def: 442, ghi=444 } (say, a stock list). I don't know what
    keys are coming in, i.e. I don't know what abc, def, etc. are; I need to get these tokens
    dynamically. Any pointers would be of great help! BTW, this has to run in Silverlight.
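
    Whatever the library, the technique is to enumerate the document's key/value pairs instead of
    binding to fixed property names (in JSON.NET terms, iterating a JObject's properties). The shape
    of it with Python's stdlib json, for illustration; note the sample has to be valid JSON first,
    so ghi=444 would need to become "ghi": 444 with quoted keys:

        import json

        raw = '{"abc": 123, "def": 442, "ghi": 444}'  # the sample, made valid JSON

        for key, value in json.loads(raw).items():
            print(key, value)   # keys discovered at runtime, no schema needed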

  • Cache Auth Tokens (or Caching HTTP headers in General) - Best Practices

    - by viatropos
    I'm using the Ruby GData library to access Google Docs, and I recently got the
    GData::Client::CaptchaError because I was re-logging in with every request. The post I was
    reading recommends not logging in with every request, but caching the authentication token
    instead. How do I go about doing that correctly? Google says the token expires every 24 hours,
    and it doesn't seem like I should store it in the session, so what should I do? I'm using Ruby
    on Rails with all this. Thanks so much.
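
    A common pattern is a small server-side cache keyed by account: store the token with an expiry a
    bit shorter than Google's 24 hours, and re-authenticate only on a miss or expiry (in Rails,
    Rails.cache.fetch with :expires_in gives you this shape directly). The idea, sketched in Python:

        import time

        class TokenCache:
            """Cache auth tokens server-side; re-login only when one expires."""

            def __init__(self, login, ttl=23 * 3600):   # refresh before the 24h cutoff
                self._login = login       # expensive call that performs the real login
                self._ttl = ttl
                self._entries = {}        # account -> (token, expiry_timestamp)

            def token_for(self, account):
                token, expires = self._entries.get(account, (None, 0))
                if time.time() >= expires:               # miss or expired: log in once
                    token = self._login(account)
                    self._entries[account] = (token, time.time() + self._ttl)
                return token

        cache = TokenCache(login=lambda acct: f"tok-{acct}-{int(time.time())}")
        assert cache.token_for("docs") == cache.token_for("docs")  # second call is cached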

  • Defining tokens at runtime

    - by Peter Crenshaw
    I want to write a parser for EDIFACT messages with JavaCC. My problem is that I cannot define
    all terminal symbols before parsing a message, because at the beginning of each message there is
    a so-called "Advice Segment" (the "UNA" segment) which defines things like the element separator
    symbol, escape symbol, segment terminator symbol and decimal notation (e.g. '.' or ','). So I
    think/guess the production rules need some kind of variables which must be set at runtime during
    parsing. Can this be done with JavaCC, and if so, how? Or is there another way I am missing?
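
    One workable approach is to treat UNA as configuration rather than as a token: UNA is always
    followed by exactly six service characters (component separator, element separator, decimal
    mark, release/escape character, a reserved character, and the segment terminator), so a small
    pre-pass can read them and split the message into segments, with the grammar proper applied to
    the already-split pieces. JavaCC can approximate the same thing with lexical states, but the
    pre-pass keeps the grammar simple. A sketch of the two-phase idea in Python:

        DEFAULTS = {"component": ":", "element": "+", "decimal": ".",
                    "release": "?", "terminator": "'"}

        def read_una(message):
            """UNA is 'UNA' followed by exactly six service characters."""
            if not message.startswith("UNA"):
                return DEFAULTS, message
            c = message[3:9]  # component, element, decimal, release, reserved, terminator
            service = {"component": c[0], "element": c[1], "decimal": c[2],
                       "release": c[3], "terminator": c[5]}
            return service, message[9:]

        def split_segments(body, service):
            """Split on the terminator, honouring the release (escape) character."""
            segments, current, escaped = [], "", False
            for ch in body:
                if escaped:
                    current += ch
                    escaped = False
                elif ch == service["release"]:
                    escaped = True
                elif ch == service["terminator"]:
                    segments.append(current)
                    current = ""
                else:
                    current += ch
            return segments

        service, body = read_una("UNA:+.? 'UNH+1+ORDERS:D:96A:UN'BGM+220+128576+9'")
        print(split_segments(body, service))
        # ['UNH+1+ORDERS:D:96A:UN', 'BGM+220+128576+9']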

  • String tokens in .NET

    - by julio
    I am writing an app in .NET which will generate random text based on some input. So if I have
    text like "I love your {lovely|nice|great} dress", I want to choose randomly from
    lovely/nice/great and use that in the text. Any suggestions in C# or VB.NET are welcome.
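
    This {a|b|c} convention is sometimes called spintax, and a single regex replace with a callback
    handles it: match each {...} group, split its contents on |, and substitute a random pick. A
    sketch in Python; in C# the direct equivalent is Regex.Replace with a MatchEvaluator delegate:

        import random
        import re

        def spin(template: str) -> str:
            # Replace each {option1|option2|...} with one randomly chosen option.
            return re.sub(r"\{([^{}]*)\}",
                          lambda m: random.choice(m.group(1).split("|")),
                          template)

        print(spin("I love your {lovely|nice|great} dress"))
        # e.g. "I love your nice dress"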

  • Optimizing a lot of Scanner.findWithinHorizon(pattern, 0) calls

    - by darvids0n
    I'm building a process which extracts data from 6 csv-style files and two poorly laid out .txt reports and builds output CSVs, and I'm fully aware that there's going to be some overhead searching through all that whitespace thousands of times, but I never anticipated converting about about 50,000 records would take 12 hours. Excerpt of my manual matching code (I know it's horrible that I use lists of tokens like that, but it was the best thing I could think of): public static String lookup(List<String> tokensBefore, List<String> tokensAfter) { String result = null; while(_match(tokensBefore)) { // block until all input is read if(id.hasNext()) { result = id.next(); // capture the next token that matches if(_matchImmediate(tokensAfter)) // try to match tokensAfter to this result return result; } else return null; // end of file; no match } return null; // no matches } private static boolean _match(List<String> tokens) { return _match(tokens, true); } private static boolean _match(List<String> tokens, boolean block) { if(tokens != null && !tokens.isEmpty()) { if(id.findWithinHorizon(tokens.get(0), 0) == null) return false; for(int i = 1; i <= tokens.size(); i++) { if (i == tokens.size()) { // matches all tokens return true; } else if(id.hasNext() && !id.next().matches(tokens.get(i))) { break; // break to blocking behaviour } } } else { return true; // empty list always matches } if(block) return _match(tokens); // loop until we find something or nothing else return false; // return after just one attempted match } private static boolean _matchImmediate(List<String> tokens) { if(tokens != null) { for(int i = 0; i <= tokens.size(); i++) { if (i == tokens.size()) { // matches all tokens return true; } else if(!id.hasNext() || !id.next().matches(tokens.get(i))) { return false; // doesn't match, or end of file } } return false; // we have some serious problems if this ever gets called } else { return true; // empty list always matches } } Basically wondering how I would work in an efficient string search (Boyer-Moore or similar). My Scanner id is scanning a java.util.String, figured buffering it to memory would reduce I/O since the search here is being performed thousands of times on a relatively small file. The performance increase compared to scanning a BufferedReader(FileReader(File)) was probably less than 1%, the process still looks to be taking a LONG time. I've also traced execution and the slowness of my overall conversion process is definitely between the first and last like of the lookup method. In fact, so much so that I ran a shortcut process to count the number of occurrences of various identifiers in the .csv-style files (I use 2 lookup methods, this is just one of them) and the process completed indexing approx 4 different identifiers for 50,000 records in less than a minute. Compared to 12 hours, that's instant. Some notes (updated): I don't necessarily need the pattern-matching behaviour, I only get the first field of a line of text so I need to match line breaks or use Scanner.nextLine(). All ID numbers I need start at position 0 of a line and run through til the first block of whitespace, after which is the name of the corresponding object. I would ideally want to return a String, not an int locating the line number or start position of the result, but if it's faster then it will still work just fine. 
If an int is being returned, however, then I would now have to seek to that line again just to get the ID; storing the ID of every line that is searched sounds like a way around that. Anything to help me out, even if it saves 1ms per search, will help, so all input is appreciated. Thankyou! Usage scenario 1: I have a list of objects in file A, who in the old-style system have an id number which is not in file A. It is, however, POSSIBLY in another csv-style file (file B) or possibly still in a .txt report (file C) which each also contain a bunch of other information which is not useful here, and so file B needs to be searched through for the object's full name (1 token since it would reside within the second column of any given line), and then the first column should be the ID number. If that doesn't work, we then have to split the search token by whitespace into separate tokens before doing a search of file C for those tokens as well. Generalised code: String field; for (/* each record in file A */) { /* construct the rest of this object from file A info */ // now to find the ID, if we can List<String> objectName = new ArrayList<String>(1); objectName.add(Pattern.quote(thisObject.fullName)); field = lookup(objectSearchToken, objectName); // search file B if(field == null) // not found in file B { lookupReset(false); // initialise scanner to check file C objectName.clear(); // not using the full name String[] tokens = thisObject.fullName.split(id.delimiter().pattern()); for(String s : tokens) objectName.add(Pattern.quote(s)); field = lookup(objectSearchToken, objectName); // search file C lookupReset(true); // back to file B } else { /* found it, file B specific processing here */ } if(field != null) // found it in B or C thisObject.ID = field; } The objectName tokens are all uppercase words with possible hyphens or apostrophes in them, separated by spaces. Much like a person's name. As per a comment, I will pre-compile the regex for my objectSearchToken, which is just [\r\n]+. What's ending up happening in file C is, every single line is being checked, even the 95% of lines which don't contain an ID number and object name at the start. Would it be quicker to use ^[\r\n]+.*(objectname) instead of two separate regexes? It may reduce the number of _match executions. The more general case of that would be, concatenate all tokensBefore with all tokensAfter, and put a .* in the middle. It would need to be matching backwards through the file though, otherwise it would match the correct line but with a huge .* block in the middle with lots of lines. The above situation could be resolved if I could get java.util.Scanner to return the token previous to the current one after a call to findWithinHorizon. I have another usage scenario. Will put it up asap.
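
    Since every ID sits at position 0 of its line and runs to the first whitespace, the 12-hour
    behaviour (re-scanning file B or C once per record, 50,000 times over) can be replaced by a
    single pass per file that builds a name-to-ID map, turning each record's lookup into an O(1)
    dictionary hit. The idea in Python, with a hypothetical field layout of ID, whitespace, then the
    full name on each useful line:

        def build_index(lines):
            """One pass: map an object's full name -> the ID at the start of its line."""
            index = {}
            for line in lines:
                parts = line.split(None, 1)   # ID, then the rest of the line
                if len(parts) == 2:
                    index[parts[1].strip()] = parts[0]
            return index

        file_b = ["1001 JOHN O'GRADY", "1002 MARY-ANNE SMITH", "junk line"]
        index = build_index(file_b)

        for full_name in ["MARY-ANNE SMITH", "NOBODY HERE"]:
            print(full_name, "->", index.get(full_name))  # O(1) per record, not O(file)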

  • SharePoint and Visual Studio - Replaceable parameters

    - by Sahil Malik
    SharePoint 2010 Training: more information

    What is a replaceable parameter? Sometimes you may see something like
    $SharePoint.Project.AssemblyFullName$ in your code. Visual Studio is doing some magic to replace
    it during compile/build time with the full assembly signature. The following apply to these
    tokens or replaceable parameters:

    - Tokens can be specified anywhere in a line.
    - Tokens cannot span multiple lines.
    - The same token may be specified multiple times on the same line and in the same file.
    - Different tokens may be specified on the same line.
    - Tokens that do not follow these rules are ignored without providing a warning or error.
    - The replacement of tokens by string values is done immediately after manifest transformation,
      thus allowing manifest templates edited by a user to use tokens.

    Visual Studio supports the following replaceable parameters - Name ...

    Read full article ....
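
    To make the rules concrete, a substitution pass like this is roughly what is going on: one regex
    applied per line, which is why tokens work anywhere in a line but can never span lines (a sketch
    with a hypothetical token table, not Visual Studio's implementation):

        import re

        TOKENS = {  # hypothetical values for illustration
            "SharePoint.Project.AssemblyFullName":
                "MyProject, Version=1.0.0.0, Culture=neutral, PublicKeyToken=abc123",
        }

        def expand(line: str) -> str:
            # $Name$ anywhere in the line; unknown tokens are left alone (ignored).
            return re.sub(r"\$([\w.]+)\$",
                          lambda m: TOKENS.get(m.group(1), m.group(0)),
                          line)

        print(expand('<Assembly Name="$SharePoint.Project.AssemblyFullName$" />'))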

  • What is the difference: LoadUserProfile -vs- RegOpenCurrentUser

    - by Will5801
    These two APIs are very similar, but it is unclear what the differences are and when each should
    be used (except that LoadUserProfile is specified for use with CreateProcessAsUser, which I am
    not using; I am simply impersonating for hive access).

    - LoadUserProfile: http://msdn.microsoft.com/en-us/library/bb762281(VS.85).aspx
    - RegOpenCurrentUser: http://msdn.microsoft.com/en-us/library/ms724894(VS.85).aspx

    According to the "Services and the Registry" article
    (http://msdn.microsoft.com/en-us/library/ms685145(VS.85).aspx) we should use RegOpenCurrentUser
    when impersonating. But what does/should RegOpenCurrentUser do if the user profile is roaming:
    should it load it? As far as I can tell from these docs, both APIs provide a handle to
    HKEY_CURRENT_USER for the user the thread is impersonating. Therefore, they both "load" the
    hive, i.e. lock it as a database file and give a handle to it for registry APIs. It might seem
    that LoadUserProfile loads the user profile in the same way as the user does when he/she logs
    on, whereas RegOpenCurrentUser does not; is this correct? What is the fundamental difference (if
    any) in how these two APIs mount the hive? And what are the implications and differences (if
    any) when: a user logs on or logs off while each of these impersonated handles is already in
    use; a user is already logged on when each matching close function (RegCloseKey and
    UnloadUserProfile) is called?

  • How to make the tokenizer detect empty spaces while using strtok()

    - by Shadi Al Mahallawy
    I am designing a c++ program, somewhere in the program i need to detect if there is a blank(empty token) next to the token used know eg. if(token1==start) { token2=strtok(NULL," "); if(token2==NULL) {LCCTR=0;} else {LCCTR=atoi(token2);} so in the previous peice token1 is pointing to start , and i want to check if there is anumber next to the start , so I used token2=strtok(NULL," ") to point to the next token but unfortunattly the strtok function cannot detect empty spaces so it gives me an error at run time"INVALID NULL POINTER" how can i fix it or is there another function to use to detect empty spaces #include <iostream> #include<string> #include<map> #include<iomanip> #include<fstream> #include<ctype.h> using namespace std; const int MAX=300; int LCCTR; int START(char* token1); char* PASS1(char*token1); void tokinizer() { ifstream in; ofstream out; char oneline[MAX]; in.open("infile.txt"); out.open("outfile.txt"); if(in.is_open()) { char *token1; in.getline(oneline,MAX); token1 = strtok(oneline," \t"); START (token1); //cout<<'\t'; while(token1!=NULL) { //PASS1(token1); //cout<<token1<<" "; token1=strtok(NULL," \t"); if(NULL==token1) {//cout<<endl; //cout<<LCCTR<<'\t'; in.getline(oneline,MAX); token1 = strtok(oneline," \t"); } } } in.close(); out.close(); } int START(char* token1) { string start("START"); char*token2; if(token1 != start) {LCCTR=0;} else if(token1==start) { token2=strchr(token1+2,' '); cout<<token2; if(token2==NULL) {LCCTR=0;} else {LCCTR=atoi(token2); if(atoi(token2)>9999||atoi(token2)<0){cout<<"IVALID STARTING ADDRESS"<<endl;exit(1);} } } return LCCTR; } char* PASS1 (char*token1) { map<string,int> operations; map<string,int>symtable; map<string,int>::iterator it; pair<map<string,int>::iterator,bool> ret; char*token3=NULL; char*token2=NULL; string test; string comp(" "); string start("START"); string word("WORD"); string byte("BYTE"); string resb("RESB"); string resw("RESW"); string end("END"); operations["ADD"] = 18; operations["AND"] = 40; operations["COMP"] = 28; operations["DIV"] = 24; operations["J"] = 0X3c; operations["JEQ"] =30; operations["JGT"] =34; operations["JLT"] =38; operations["JSUB"] =48; operations["LDA"] =00; operations["LDCH"] =50; operations["LDL"] =55; operations["LDX"] =04; operations["MUL"] =20; operations["OR"] =44; operations["RD"] =0xd8; operations["RSUB"] =0x4c; operations["STA"] =0x0c; operations["STCH"] =54; operations["STL"] =14; operations["STSW"] =0xe8; operations["STX"] =10; operations["SUB"] =0x1c; operations["TD"] =0xe0; operations["TIX"] =0x2c; operations["WD"] =0xdc; if(operations.find("ADD")->first==token1) { token2=strtok(NULL," "); //test=token2; cout<<token2; //if(test.empty()){cout<<"MISSING OPERAND"<<endl;exit(1);} //else{LCCTR=LCCTR+3;} } /*else if(operations.find("AND")->first==token1) { token2=strtok(NULL," "); test=token2; if(test.empty()){cout<<"MISSING OPERAND"<<endl;exit(1);} else{LCCTR=LCCTR+3;} } else if(operations.find("COMP")->first==token1) { token2=token1+5; test=token2; if(test.empty()){cout<<"MISSING OPERAND"<<endl;exit(1);} else{LCCTR=LCCTR+3;} } else if(operations.find("DIV")->first==token1) { token2=token1+4; test=token2; if(test.empty()){cout<<"MISSING OPERAND"<<endl;exit(1);} else{LCCTR=LCCTR+3;} } else if(operations.find("J")->first==token1) { token2=token1+2; test=token2; if(test.empty()){cout<<"MISSING OPERAND"<<endl;exit(1);} else{LCCTR=LCCTR+3;} } else if(operations.find("JEQ")->first==token1) { token2=token1+5; test=token2; if(test.empty()){cout<<"MISSING OPERAND"<<endl;exit(1);} 
else{LCCTR=LCCTR+3;} } else if(operations.find("JGT")->first==token1) { token2=strtok(NULL," "); test=token2; if(test.empty()){cout<<"MISSING OPERAND"<<endl;exit(1);} else{LCCTR=LCCTR+3;} } else if(operations.find("JLT")->first==token1) { token2=token1+6; test=token2; if(test.empty()){cout<<"MISSING OPERAND"<<endl;exit(1);} else{LCCTR=LCCTR+3;} } else if(operations.find("JSUB")->first==token1) { token2=token1+6; test=token2; if(test.empty()){cout<<"MISSING OPERAND"<<endl;exit(1);} else{LCCTR=LCCTR+3;} } else if(operations.find("LDA")->first==token1) { token2=token1+6; test=token2; if(test.empty()){cout<<"MISSING OPERAND"<<endl;exit(1);} else{LCCTR=LCCTR+3;} } else if(operations.find("LDCH")->first==token1) { token2=token1+6; test=token2; if(test.empty()){cout<<"MISSING OPERAND"<<endl;exit(1);} else{LCCTR=LCCTR+3;} } else if(operations.find("LDL")->first==token1) { token2=token1+6; test=token2; if(test.empty()){cout<<"MISSING OPERAND"<<endl;exit(1);} else{LCCTR=LCCTR+3;} } else if(operations.find("LDX")->first==token1) { token2=token1+6; test=token2; if(test.empty()){cout<<"MISSING OPERAND"<<endl;exit(1);} else{LCCTR=LCCTR+3;} } else if(operations.find("MUL")->first==token1) { token2=token1+6; test=token2; if(test.empty()){cout<<"MISSING OPERAND"<<endl;exit(1);} else{LCCTR=LCCTR+3;} } else if(operations.find("OR")->first==token1) { token2=token1+6; test=token2; if(test.empty()){cout<<"MISSING OPERAND"<<endl;exit(1);} else{LCCTR=LCCTR+3;} } else if(operations.find("RD")->first==token1) { token2=token1+6; test=token2; if(test.empty()){cout<<"MISSING OPERAND"<<endl;exit(1);} else{LCCTR=LCCTR+3;} } else if(operations.find("RSUB")->first==token1) { token2=token1+6; test=token2; if(test.empty()){cout<<"MISSING OPERAND"<<endl;exit(1);} else{LCCTR=LCCTR+3;} } else if(operations.find("STA")->first==token1) { token2=token1+6; test=token2; if(test.empty()){cout<<"MISSING OPERAND"<<endl;exit(1);} else{LCCTR=LCCTR+3;} } else if(operations.find("STCH")->first==token1) { token2=token1+6; test=token2; if(test.empty()){cout<<"MISSING OPERAND"<<endl;exit(1);} else{LCCTR=LCCTR+3;} } else if(operations.find("STL")->first==token1) { token2=token1+6; test=token2; if(test.empty()){cout<<"MISSING OPERAND"<<endl;exit(1);} else{LCCTR=LCCTR+3;} } else if(operations.find("STSW")->first==token1) { token2=token1+6; test=token2; if(test.empty()){cout<<"MISSING OPERAND"<<endl;exit(1);} else{LCCTR=LCCTR+3;} } else if(operations.find("STX")->first==token1) { token2=token1+6; test=token2; if(test.empty()){cout<<"MISSING OPERAND"<<endl;exit(1);} else{LCCTR=LCCTR+3;} } else if(operations.find("SUB")->first==token1) { token2=token1+6; test=token2; if(test.empty()){cout<<"MISSING OPERAND"<<endl;exit(1);} else{LCCTR=LCCTR+3;} } else if(operations.find("TD")->first==token1) { token2=token1+6; test=token2; if(test.empty()){cout<<"MISSING OPERAND"<<endl;exit(1);} else{LCCTR=LCCTR+3;} } else if(operations.find("TIX")->first==token1) { token2=token1+6; test=token2; if(test.empty()){cout<<"MISSING OPERAND"<<endl;exit(1);} else{LCCTR=LCCTR+3;} } else if(operations.find("WD")->first==token1) { token2=token1+6; test=token2; if(test.empty()){cout<<"MISSING OPERAND"<<endl;exit(1);} else{LCCTR=LCCTR+3;} }*/ //else if( if(word==token1) {LCCTR=LCCTR+3;} else if(byte==token1) {string test; token2=token1+7; test=token2; if(test[0]=='C') {token3=token1+10; test=token3; if(test.length()>15) {cout<<"ERROR"<<endl; exit(1);} } else if(test[0]=='X') {token3=token1+10; test=token3; if(test.length()>14) {cout<<"ERROR"<<endl; exit(1);} } 
LCCTR=LCCTR+test.length(); } else if(resb==token1) {token3=token1+5; LCCTR=LCCTR+atoi(token3);} else if(resw==token1) {token3=token1+5; LCCTR=LCCTR+3*atoi(token3);} else if(end==token1) {exit(1);} /*else { test=token1; int last=test.length(); if(token1==start||test[0]=='C'||test[0]=='X'||ispunct(test[last])||isdigit(test[0])||isdigit(test[1])||isdigit(test[2])||isdigit(test[3])){} else { token2=strtok(NULL," "); //test=token2; cout<<token2; if(token2!=NULL) { symtable.insert( pair<string,int>(token1,LCCTR)); for(it=symtable.begin() ;it!=symtable.end() ;++it) {/*cout<<"symbol: "<<it->first<<" LCCTR: "<<it->second<<endl;} } else{} } }*/ return token3; } int main() { tokinizer(); return 0; }
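
    strtok's contract is the catch here: it treats a run of adjacent delimiters as a single one, so
    it can never return an empty token, and the "INVALID NULL POINTER" assertion most likely comes
    from handing strtok's NULL return to something that dereferences it (assigning a NULL char* to a
    std::string does exactly that in MSVC debug builds), so the NULL check must happen before any
    use. C alternatives that do preserve empty fields include strsep and a hand-rolled scan. The two
    splitting behaviours, contrasted with Python's two split flavours:

        line = "start  100"      # two spaces: an empty field sits between them

        # str.split(" ") preserves empty fields, like C's strsep()
        print(line.split(" "))   # ['start', '', '100']

        # str.split() with no argument collapses whitespace runs, like strtok()
        print(line.split())      # ['start', '100']

        # So "is there a number after 'start'?" is a length check on the collapsed form:
        tokens = line.split()
        lcctr = int(tokens[1]) if len(tokens) > 1 else 0
        print(lcctr)             # 100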

  • Python: How best to parse a simple grammar?

    - by Rosarch
    Ok, so I've asked a bunch of smaller questions about this project, but I still don't have much
    confidence in the designs I'm coming up with, so I'm going to ask a question on a broader scale.

    I am parsing pre-requisite descriptions for a course catalog. The descriptions almost always
    follow a certain form, which makes me think I can parse most of them. From the text, I would
    like to generate a graph of course pre-requisite relationships. (That part will be easy, after I
    have parsed the data.) Some sample inputs and outputs:

        "CS 2110" => ("CS", 2110)                                       # 0
        "CS 2110 and INFO 3300" => [("CS", 2110), ("INFO", 3300)]       # 1
        "CS 2110, INFO 3300" => [("CS", 2110), ("INFO", 3300)]          # 1
        "CS 2110, 3300, 3140" => [("CS", 2110), ("CS", 3300), ("CS", 3140)]  # 1
        "CS 2110 or INFO 3300" => [[("CS", 2110)], [("INFO", 3300)]]    # 2
        "MATH 2210, 2230, 2310, or 2940" => [[("MATH", 2210), ("MATH", 2230), ("MATH", 2310)], [("MATH", 2940)]]  # 3

    0. If the entire description is just a course, it is output directly.
    1. If the courses are conjoined ("and"), they are all output in the same list.
    2. If the courses are disjoined ("or"), they are in separate lists.
    3. Here, we have both "and" and "or".

    One caveat that makes it easier: it appears that the nesting of "and"/"or" phrases is never
    greater than as shown in example 3.

    What is the best way to do this? I started with PLY, but I couldn't figure out how to resolve
    the reduce/reduce conflicts. The advantage of PLY is that it's easy to manipulate what each
    parse rule generates:

        def p_course(p):
            'course : DEPT_CODE COURSE_NUMBER'
            p[0] = (p[1], int(p[2]))

    With pyparsing, it's less clear how to modify the output of parseString(). I was considering
    building upon @Alex Martelli's idea of keeping state in an object and building up the output
    from that, but I'm not sure exactly how that is best done.

        def addCourse(self, str, location, tokens):
            self.result.append((tokens[0][0], tokens[0][1]))

        def makeCourseList(self, str, location, tokens):
            dept = tokens[0][0]
            new_tokens = [(dept, tokens[0][1])]
            new_tokens.extend((dept, tok) for tok in tokens[1:])
            self.result.append(new_tokens)

    For instance, to handle "or" cases:

        def __init__(self):
            self.result = []
            # ...
            self.statement = (course_data
                              + Optional(OR_CONJ + course_data)).setParseAction(self.disjunctionCourses)

        def disjunctionCourses(self, str, location, tokens):
            if len(tokens) == 1:
                return tokens
            print "disjunction tokens: %s" % tokens

    How does disjunctionCourses() know which smaller phrases to disjoin? All it gets is tokens, but
    what's been parsed so far is stored in result, so how can the function tell which data in result
    corresponds to which elements of token? I guess I could search through the tokens, then find an
    element of result with the same data, but that feels convoluted. What's a better way to approach
    this problem?
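
    Given the caveat that nesting never goes deeper than example 3, a full grammar generator may be
    more machinery than the problem needs: split on "or" first (the outermost level), then scan
    courses within each branch, carrying the last-seen department forward for bare numbers. A
    hand-rolled sketch covering all six samples (COURSE_RE and the helper are hypothetical names):

        import re

        COURSE_RE = re.compile(r"(?:([A-Z]+)\s+)?(\d{4})")  # optional dept code, 4-digit number

        def parse_prereqs(text):
            dept = None                      # bare numbers inherit the last department seen

            def parse_conjunction(chunk):
                nonlocal dept
                courses = []
                for m in COURSE_RE.finditer(chunk):
                    dept = m.group(1) or dept
                    courses.append((dept, int(m.group(2))))
                return courses

            branches = re.split(r"\bor\b", text)   # disjunction is the outermost level
            if len(branches) == 1:
                return parse_conjunction(text)     # a single course or an "and" list
            return [parse_conjunction(b) for b in branches]

        print(parse_prereqs("CS 2110"))              # [('CS', 2110)]  (unwrap for the bare case)
        print(parse_prereqs("CS 2110, 3300, 3140"))  # [('CS', 2110), ('CS', 3300), ('CS', 3140)]
        print(parse_prereqs("MATH 2210, 2230, 2310, or 2940"))
        # [[('MATH', 2210), ('MATH', 2230), ('MATH', 2310)], [('MATH', 2940)]]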

  • C++ vector and segmentation faults

    - by Headspin
    I am working on a simple mathematical parser, something that just reads:

        number = 1 + 2;

    I have a vector containing these tokens. They store a type and the string value of the
    character. I am trying to step through the vector to build an AST of these tokens, and I keep
    getting segmentation faults, even when I am under the impression my code should prevent this
    from happening. Here is the bit of code that builds the AST:

        struct ASTGen
        {
            const vector<Token> &Tokens;
            unsigned int size, pointer;

            ASTGen(const vector<Token> &t) : Tokens(t), pointer(0)
            {
                size = Tokens.size() - 1;
            }

            unsigned int next() { return pointer + 1; }

            Node* Statement()
            {
                if(next() <= size)
                {
                    switch(Tokens[next()].type)
                    {
                        case EQUALS : Node* n = Assignment_Expr();
                                      return n;
                    }
                }
                advance();
            }

            void advance() { if(next() <= size) ++pointer; }

            Node* Assignment_Expr()
            {
                Node* lnode = new Node(Tokens[pointer], NULL, NULL);
                advance();
                Node* n = new Node(Tokens[pointer], lnode, Expression());
                return n;
            }

            Node* Expression()
            {
                if(next() <= size)
                {
                    advance();
                    if(Tokens[next()].type == SEMICOLON)
                    {
                        Node* n = new Node(Tokens[pointer], NULL, NULL);
                        return n;
                    }
                    if(Tokens[next()].type == PLUS)
                    {
                        Node* lnode = new Node(Tokens[pointer], NULL, NULL);
                        advance();
                        Node* n = new Node(Tokens[pointer], lnode, Expression());
                        return n;
                    }
                }
            }
        };

        ...

        ASTGen AST(Tokens);
        Node* Tree = AST.Statement();
        cout << Tree->Right->Data.svalue << endl;

    I can access Tree->Data.svalue and get the = node's token info, so I know that node is getting
    spawned, and I can also get Tree->Left->Data.svalue and get the variable to the left of the =.
    I have re-written it many times, trying out different methods for stepping through the vector,
    but I always get a segmentation fault when I try to access the = right node (which should be
    the + node). Any help would be greatly appreciated.

  • tokens in visual studio: HACK, TODO... any other?

    - by b0x0rz
    What tokens do you find useful in Visual Studio? (Visual Studio 2010 -> Environment -> Task
    List -> Tokens.) Currently I have only:

        HACK   - low
        REVIEW - high
        TODO   - normal
        WTF    - high

    (Only these; I deleted some of the default ones.) Are you using any others? Are you covering any
    other important thing with comment tokens? Any best practices? Thnx
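
    These tokens are just plain strings the IDE greps for in comments, which means a home-grown
    scanner can enforce the same list outside the IDE, e.g. in CI. A quick sketch (the token table
    mirrors the list above and is otherwise hypothetical):

        import re
        from pathlib import Path

        TOKENS = {"HACK": "low", "REVIEW": "high", "TODO": "normal", "WTF": "high"}
        token_re = re.compile(r"//\s*(%s)\b(.*)" % "|".join(TOKENS))

        def scan(root="."):
            for path in Path(root).rglob("*.cs"):
                for lineno, line in enumerate(path.read_text(errors="ignore").splitlines(), 1):
                    m = token_re.search(line)
                    if m:
                        token, rest = m.group(1), m.group(2).strip(" :")
                        print(f"{path}:{lineno}: [{TOKENS[token]}] {token} {rest}")

        scan()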

  • c - strncpy issue

    - by Joe
    Hi there, I am getting a segmentation fault when using strncpy and (pointer-to-struct) ->
    (member) notation. I have simplified my code: I initialise a struct and set all of its tokens,
    then I declare a pointer to a struct and assign the address of the struct to it. I pass the
    pointer to a function. I can print out the contents of the struct at the beginning of the
    function, but if I try to use tp -> mnem in a strncpy call, I get a seg fault. Can anyone tell
    me what I am doing wrong?

        typedef struct tok
        {
            char* label;
            char* mnem;
            char* operand;
        } Tokens;

        Tokens* tokenise(Tokens* tp, char* line)
        {
            // This prints fine
            printf("Print this - %s\n", tp -> mnem);

            // This function gives me segmentation fault
            strncpy(tp -> mnem, line, 4);

            return tp;
        }

        int main()
        {
            char* line = "This is a line";

            Tokens tokens;
            tokens.label = "";
            tokens.mnem = "load";
            tokens.operand = "";

            Tokens* tp = &tokens;
            tp = tokenise(tp, line);

            return 0;
        }

    I have used printf statements to confirm that the code definitely stops executing at the strncpy
    function. Can anyone tell me where I am going wrong? Many thanks, Joe

  • Doctrine Build-All Task fails in NetBeans - Class not found! Fatal Error: call to evictAll()

    - by Prasad
    When I build my model with the symfony doctrine:build --all --and-load command I have made no major changes to the model/schema, this is something new. I also tried sub-commands like build-model, build-tables, but they all hang.. I'm trying this in net beans. Any clue what this is? This command will remove all data in the following "dev" connection(s): - doctrine Are you sure you want to proceed? (y/N) y >> doctrine Dropping "doctrine" database >> doctrine Creating "dev" environment "doctrine" database >> doctrine generating model classes >> file+ C:\Documents and Settings\Gupte...\Temp/doctrine_schema_69845.yml >> tokens D:/projects/cim/lib/model/doctrine/base/BaseAffiliate.class.php >> tokens D:/projects/cim/lib/model/doctrine/base/BaseContact.class.php >> tokens D:/projects/cim/lib/model/doctr...e/BaseContactLocation.class.php >> tokens D:/projects/cim/lib/model/doctr...se/BaseGroupAffiliate.class.php >> tokens D:/projects/cim/lib/model/doctrine/base/BaseGrouping.class.php >> tokens D:/projects/cim/lib/model/doctrine/base/BaseLocation.class.php >> tokens D:/projects/cim/lib/model/doctr.../base/BasePhonenumber.class.php >> tokens D:/projects/cim/lib/model/doctrine/base/BaseTenant.class.php >> tokens D:/projects/cim/lib/model/doctr...base/BasesfGuardGroup.class.php >> tokens D:/projects/cim/lib/model/doctr...fGuardGroupPermission.class.php >> tokens D:/projects/cim/lib/model/doctr...BasesfGuardPermission.class.php >> tokens D:/projects/cim/lib/model/doctr...asesfGuardRememberKey.class.php >> tokens D:/projects/cim/lib/model/doctr.../base/BasesfGuardUser.class.php >> tokens D:/projects/cim/lib/model/doctr.../BasesfGuardUserGroup.class.php >> tokens D:/projects/cim/lib/model/doctr...sfGuardUserPermission.class.php >> autoload Resetting application autoloaders >> file- D:/projects/cim/cache/frontend/.../config/config_autoload.yml.php >> file- D:/projects/cim/cache/backend/dev/config/config_autoload.yml.php >> doctrine generating form classes [?php /** * Contact form base class. * * @method Contact getObject() Returns the current form's model object * * @package ##PROJECT_NAME## * @subpackage form * @author ##AUTHOR_NAME## * @version SVN: $Id: sfDoctrineFormGeneratedTemplate.php 24171 2009-11-19 16:37:50Z Kris.Wallsmith $ */ abstract class BaseContactForm extends BaseFormDoctrine { public function setup() { $this->setWidgets(array( 'id' Fatal error: Call to a member function evictAll() on a non-object in D:\projects\cim\lib\vendor\symfony\lib\plugins\sfDoctrinePlugin\lib\vendor\doctrine\Doctrine\Connection.php on line 1239 Call Stack: 0.9552 322760 1. {main}() D:\projects\cim\symfony:0 0.9594 587208 2. include('D:\projects\cim\lib\vendor\symfony\lib\command\cli.php') D:\projects\cim\symfony:14 11.9775 17118936 3. sfDatabaseManager->shutdown() D:\projects\cim\lib\vendor\symfony\lib\database\sfDatabaseManager.class.php:0 11.9775 17118936 4. sfDoctrineDatabase->shutdown() D:\projects\cim\lib\vendor\symfony\lib\database\sfDatabaseManager.class.php:134 11.9775 17118936 5. Doctrine_Manager->closeConnection() D:\projects\cim\lib\vendor\symfony\lib\plugins\sfDoctrinePlugin\lib\database\sfDoctrineDatabase.class.php:165 11.9775 17118936 6. Doctrine_Connection->close() D:\projects\cim\lib\vendor\symfony\lib\plugins\sfDoctrinePlugin\lib\vendor\doctrine\Doctrine\Manager.php:579 11.9776 17120160 7. 
Doctrine_Connection->clear() D:\projects\cim\lib\vendor\symfony\lib\plugins\sfDoctrinePlugin\lib\vendor\doctrine\Doctrine\Connection.php:1268 ... Couldn't find class. A similar thing is mentioned here: http://osdir.com/ml/symfony-users/2010-01/msg00642.html
