Search Results

Search found 3241 results on 130 pages for 'extract'.


  • LINQ Normalizing data

    - by Brennan Mann
    I am using an OMS that stores up to three line items per record in the database. Below is an example of an order containing five line items:

        Order Header
            Order Detail: Prod 1, Prod 2, Prod 3
            Order Detail: Prod 4, Prod 5

    One order header record and two detail records. My goal is to have a one-to-one relation for detail records (i.e., one detail record per line item). In the past, I used a UNION ALL SQL statement to extract the data. Is there a better approach to this problem using LINQ? Below is my first attempt at using LINQ. Any feedback, suggestions or recommendations would be greatly appreciated. From what I have read, a UNION statement can tax the process.

        var orderdetail =
            (from o in context.ORDERSUBHEADs
             select new { edpNo = o.EDPNOS_001, price = o.EXTPRICES_001, qty = o.ITEMQTYS_001 })
            .Union(from o in context.ORDERSUBHEADs
                   select new { edpNo = o.EDPNOS_002, price = o.EXTPRICES_002, qty = o.ITEMQTYS_002 })
            .Union(from o in context.ORDERSUBHEADs
                   select new { edpNo = o.EDPNOS_003, price = o.EXTPRICES_003, qty = o.ITEMQTYS_003 });
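
    A possible alternative to the triple UNION is to unpivot each wide row in application code. A minimal Python sketch of that idea, assuming each record arrives as a dict keyed by the column names from the question (the data access itself is hypothetical):

        # Hedged sketch: turn each wide record (three numbered column groups)
        # into one dict per line item, skipping empty slots.
        def unpivot(record):
            for n in ("001", "002", "003"):
                item = {
                    "edpNo": record["EDPNOS_" + n],
                    "price": record["EXTPRICES_" + n],
                    "qty":   record["ITEMQTYS_" + n],
                }
                if item["edpNo"]:
                    yield item

        # order_sub_heads would be the fetched ORDERSUBHEAD records
        details = [item for rec in order_sub_heads for item in unpivot(rec)]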

  • Convert 4 bytes to int

    - by Oscar Reyes
    I'm reading a binary file like this:

        InputStream in = new FileInputStream(file);
        byte[] buffer = new byte[1024];
        while (in.read(buffer) > -1) {
            int a = // ???
        }

    What I want to do is read up to 4 bytes and create an int value from those, but I don't know how to do it. I kind of feel like I have to grab 4 bytes at a time and perform some "byte" operations (like << and & 0xFF and stuff like that) to create the new int. What's the idiom for this?

    EDIT: Oops, this turned out to be a bit more complex (to explain). What I'm trying to do is read a file (it may be ASCII, binary, it doesn't matter) and extract the integers it may have. For instance, suppose the binary content (in base 2):

        00000000 00000000 00000000 00000001
        00000000 00000000 00000000 00000010

    The integer representation should be 1, 2, right? 1 for the first 32 bits, and 2 for the remaining 32 bits. Likewise,

        11111111 11111111 11111111 11111111

    would be -1, and

        01111111 11111111 11111111 11111111

    would be Integer.MAX_VALUE (2147483647).
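
    In Java the usual idiom is to shift each byte into place while masking with 0xFF so sign extension does not leak in, or to lean on DataInputStream.readInt() / ByteBuffer.getInt(), which do exactly that for big-endian input. A Python sketch of the same decoding, assuming big-endian signed 32-bit words as in the examples above:

        # Hedged sketch: decode a file as a stream of big-endian signed
        # 32-bit integers; ">i" is struct-module notation for that layout.
        import struct

        def read_ints(path):
            with open(path, "rb") as f:
                while True:
                    chunk = f.read(4)
                    if len(chunk) < 4:
                        break              # ignore a trailing partial word
                    yield struct.unpack(">i", chunk)[0]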

  • Is there a way to get YQL to return HTML?

    - by Joe Shaw
    I am trying to use YQL to extract a portion of HTML from a series of web pages. The pages themselves have slightly different structure (so a Yahoo Pipes "Fetch Page" with its "Cut content" feature does not work well) but the fragment I am interested in always has the same class attribute. If I have an HTML page like this:

        <html>
          <body>
            <div class="foo">
              <p>Wolf</p>
              <ul>
                <li>Dog</li>
                <li>Cat</li>
              </ul>
            </div>
          </body>
        </html>

    and use a YQL expression like this:

        SELECT * FROM html
        WHERE url="http://example.com/containing-the-fragment-above"
          AND xpath="//div[@class='foo']"

    what I get back are the (apparently unordered?) DOM elements, where what I want is the HTML content itself. I've tried SELECT content as well, but that only selects textual content. I want HTML. Is this possible?
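
    If YQL will not hand back markup, one fallback is to fetch the page and serialize the matching fragment client-side. A hedged sketch with Python and lxml (the URL is the placeholder from the question):

        # Hedged sketch: same XPath, but the fragment is serialized back to
        # HTML with tostring() instead of being flattened to text.
        import lxml.html
        from lxml import etree

        doc = lxml.html.parse("http://example.com/containing-the-fragment-above")
        for div in doc.xpath("//div[@class='foo']"):
            print(etree.tostring(div, pretty_print=True))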

  • Simple, fast SQL queries for flat files.

    - by plinehan
    Does anyone know of any tools that provide simple, fast queries of flat files using a SQL-like declarative query language? I'd rather not pay the overhead of loading the file into a DB since the input data is typically thrown out almost immediately after the query is run. Consider the data file animals.txt:

        dog 15
        cat 20
        dog 10
        cat 30
        dog 5
        cat 40

    Suppose I want to extract the highest value for each unique animal. I would like to write something like:

        cat animals.txt | foo "select $1, max(convert($2 using decimal)) group by $1"

    I can get nearly the same result using sort:

        cat animals.txt | sort -t " " -k1,1 -k2,2nr

    And I can always drop into awk from there, but this all feels a bit awkward (couldn't resist) when a SQL-like language would seem to solve the problem so cleanly. I've considered writing a wrapper for SQLite that would automatically create a table based on the input data, and I've looked into using Hive in single-processor mode, but I can't help but feel this problem has been solved before. Am I missing something? Is this functionality already implemented by another standard tool? Halp!
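
    The SQLite-wrapper idea is only a few lines if an in-memory database is acceptable. A hedged sketch of that approach in Python, using the animals.txt layout above:

        # Hedged sketch: load the flat file into an in-memory SQLite table,
        # then query it with ordinary SQL.
        import sqlite3

        con = sqlite3.connect(":memory:")
        con.execute("CREATE TABLE animals (name TEXT, value INTEGER)")
        with open("animals.txt") as f:
            rows = (line.split() for line in f if line.strip())
            con.executemany("INSERT INTO animals VALUES (?, ?)", rows)
        for name, top in con.execute(
                "SELECT name, MAX(value) FROM animals GROUP BY name"):
            print(name, top)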

  • Python web scraping involving HTML tags with attributes

    - by rohanbk
    I'm trying to make a web scraper that will parse a web page of publications and extract the authors. The skeletal structure of the web page is the following:

        <html>
          <body>
            <div id="container">
              <div id="contents">
                <table>
                  <tbody>
                    <tr>
                      <td class="author">####I want whatever is located here ###</td>
                    </tr>
                  </tbody>
                </table>
              </div>
            </div>
          </body>
        </html>

    I've been trying to use BeautifulSoup and lxml thus far to accomplish this task, but I'm not sure how to handle the two div tags and the td tag because they have attributes. In addition to this, I'm not sure whether I should rely more on BeautifulSoup or lxml, or a combination of both. What should I do? At the moment, my code looks like this:

        import re
        import urllib2, sys
        import lxml
        from lxml import etree
        from lxml.html.soupparser import fromstring
        from lxml.etree import tostring
        from lxml.cssselect import CSSSelector
        from BeautifulSoup import BeautifulSoup, NavigableString

        address = 'http://www.example.com/'
        html = urllib2.urlopen(address).read()
        soup = BeautifulSoup(html)
        html = soup.prettify()
        html = html.replace('&nbsp', '&#160')
        html = html.replace('&iacute', '&#237')
        root = fromstring(html)

    I realize that a lot of the import statements may be redundant, but I just copied whatever I currently had in my source file.

    EDIT: I suppose that I didn't make this quite clear, but I have multiple such tags in the page that I want to scrape.
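
    Attributes are not a problem for either library; in XPath they are just predicates, so lxml alone can do this without the BeautifulSoup pre-pass. A hedged sketch against the skeleton above (the URL is the question's placeholder):

        # Hedged sketch: select every author cell directly with XPath;
        # iterating over the result copes with multiple matching tags.
        import lxml.html

        root = lxml.html.parse('http://www.example.com/').getroot()
        for author in root.xpath('//div[@id="contents"]//td[@class="author"]/text()'):
            print(author)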

  • How do I get the URI of an HTTP packet with WinPcap?

    - by Gtker
    Based on this article I can get all incoming packets:

        /* Callback function invoked by libpcap for every incoming packet */
        void packet_handler(u_char *param, const struct pcap_pkthdr *header, const u_char *pkt_data)
        {
            struct tm *ltime;
            char timestr[16];
            ip_header *ih;
            udp_header *uh;
            u_int ip_len;
            u_short sport, dport;
            time_t local_tv_sec;

            /* convert the timestamp to readable format */
            local_tv_sec = header->ts.tv_sec;
            ltime = localtime(&local_tv_sec);
            strftime(timestr, sizeof timestr, "%H:%M:%S", ltime);

            /* print timestamp and length of the packet */
            printf("%s.%.6d len:%d ", timestr, header->ts.tv_usec, header->len);

            /* retrieve the position of the ip header */
            ih = (ip_header *) (pkt_data + 14);   /* length of ethernet header */

            /* retrieve the position of the udp header */
            ip_len = (ih->ver_ihl & 0xf) * 4;
            uh = (udp_header *) ((u_char *) ih + ip_len);

            /* convert from network byte order to host byte order */
            sport = ntohs(uh->sport);
            dport = ntohs(uh->dport);

            /* print ip addresses and udp ports */
            printf("%d.%d.%d.%d.%d -> %d.%d.%d.%d.%d\n",
                   ih->saddr.byte1, ih->saddr.byte2, ih->saddr.byte3, ih->saddr.byte4, sport,
                   ih->daddr.byte1, ih->daddr.byte2, ih->daddr.byte3, ih->daddr.byte4, dport);
        }

    But how do I extract the URI information in packet_handler?
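
    The URI is not in any pcap-level header: it is the second token of the HTTP request line inside the TCP payload (note that the handler above parses a UDP header, while HTTP runs over TCP). After skipping the Ethernet, IP and TCP headers, the remaining bytes can be parsed as text. A hedged Python sketch of that last step, with `payload` standing in for the already-extracted TCP payload bytes:

        # Hedged sketch: pull the URI out of a raw HTTP request payload.
        def extract_uri(payload):
            try:
                request_line = payload.split(b"\r\n", 1)[0].decode("ascii")
                method, uri, version = request_line.split(" ", 2)
            except (UnicodeDecodeError, ValueError):
                return None                    # not an HTTP request line
            if not version.startswith("HTTP/"):
                return None
            return uri

        print(extract_uri(b"GET /index.html HTTP/1.1\r\nHost: example.com\r\n"))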

  • Script to find "deny" ACEs in ACLs and remove them

    - by Tom
    On my 100TB cluster, I need to find dirs and files that have a "deny" ACE within their ACL, then remove that ACE on each instance. I'm using the following:

        # find . -print0 | xargs -0 ls -led | grep deny -B4

    and get this output (partial, for example only):

        -r--rw----    1 chris  GroupOne      4096 Mar  6 18:12 ./directoryA/fileX.txt
         OWNER: user:chris
         GROUP: group:GroupOne
         0: user:chris allow file_gen_read,std_write_dac,file_write_attr
         1: user:chris deny file_write,append,file_write_ext_attr,execute
        --
        -r--rwxrwx    1 chris  GroupOne  14728221 Mar  6 18:12 ./directoryA/subdirA/fileZ.txt
         OWNER: user:chris
         GROUP: group:GroupOne
         0: user:chris allow file_gen_read,std_write_dac,file_write_attr
         1: user:chris deny file_write,append,file_write_ext_attr,execute
        --
         OWNER: user:bob
         GROUP: group:GroupTwo
         0: user:bob allow dir_gen_read,dir_gen_write,dir_gen_execute,std_write_dac,delete_child,object_inherit,container_inherit
         1: group:GroupTwo allow std_read_dac,std_write_dac,std_synchronize,dir_read_attr,dir_write_attr,object_inherit,container_inherit
         2: group:GroupTwo deny list,add_file,add_subdir,dir_read_ext_attr,dir_write_ext_attr,traverse,delete_child,object_inherit,container_inherit
        --

    As you can see, depending on where the "deny" ACE is, I can see/not-see the path. I could increase the -B value (I've seen up to 8 ACEs on a file), but then I would get more output to distill. What I need to do next is extract $ACENUMBER and $PATHTOFILE so that I can execute this command:

        chmod -a# $ACENUMBER $PATHTOFILE

    An additional issue is that the find command above gives a relative path, whereas I need the full path; I guess that would need to be edited somehow. Any guidance on how to accomplish this?
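
    Rather than fighting grep's -B window, it may be simpler to run ls -led per entry and parse the whole ACL. A hedged Python sketch, assuming the output shape shown above and the chmod -a# syntax from the question; the root path is a placeholder, and walking from an absolute root also solves the relative-path problem:

        # Hedged sketch: emit a "chmod -a# <ace> <path>" line per deny ACE.
        import os
        import subprocess

        ROOT = "/ifs/data"                     # hypothetical absolute root
        for dirpath, dirnames, filenames in os.walk(ROOT):
            for name in dirnames + filenames:
                path = os.path.join(dirpath, name)
                out = subprocess.check_output(["ls", "-led", path], text=True)
                for line in out.splitlines():
                    parts = line.split()
                    # ACE lines look like "1: user:chris deny file_write,..."
                    if len(parts) >= 3 and parts[0].endswith(":") \
                            and parts[0][:-1].isdigit() and parts[2] == "deny":
                        # if a file has several deny ACEs, removing the highest
                        # index first likely avoids renumbering surprises
                        print("chmod", "-a#", parts[0][:-1], path)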

  • Passing a string with an (accidental) escape character loses the character even though it's a raw string

    - by Steen
    I have a function with a Python doctest that fails because one of the test input strings has a backslash that's treated like an escape character, even though I've encoded the string as a raw string. My doctest looks like this:

        >>> infile = [ "Todo: fix me", "/** todo: fix", "* me", "*/",
        ...            r"""//\todo stuff to fix""", "TODO fix me too", "toDo bug 4663" ]
        >>> find_todos( infile )
        ['fix me', 'fix', 'stuff to fix', 'fix me too', 'bug 4663']

    And the function, which is intended to extract the todo texts from a single line following some variation of a todo specification, looks like this:

        todos = list()
        for line in infile:
            print line
            if todo_match_obj.search( line ):
                todos.append( todo_match_obj.search( line ).group( 'todo' ) )

    And the regular expression called todo_match_obj is:

        r"""(?:/{0,2}\**\s?todo):?\s*(?P<todo>.+)"""

    A quick conversation with my ipython shell gives me:

        In [35]: print "//\todo"
        //      odo

        In [36]: print r"""//\todo"""
        //\todo

    And, just in case the doctest implementation uses stdout (I haven't checked, sorry):

        In [37]: sys.stdout.write( r"""//\todo""" )
        //\todo

    My regex-foo is not high by any standards, and I realize that I could be missing something here.

    EDIT: Following Alex Martelli's answer, I would like suggestions on what regular expression would actually match the blasted r"""//\todo fix me""". I know that I did not originally ask for someone to do my homework, and I will accept Alex's answer as it really did answer my question (or confirm my fears). But I promise to upvote any good solutions to my problem here :) I'm using Python 2.6.4 (r264:75706, Dec 7 2009, 18:45:15). Thank you for reading this far (if you skipped directly down here, I understand).
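
    For what it's worth, the usual trap with backslashes in doctests is not the test string but the docstring that contains it: unless the enclosing docstring is itself raw, Python rewrites the \t inside it as a tab before doctest ever parses the line. A minimal sketch of the fix, assuming the doctest lives in the function's docstring:

        # Hedged sketch: note the r-prefix on the *docstring*, which keeps the
        # backslash in the embedded test input intact.
        import doctest
        import re

        todo_match_obj = re.compile(r"(?:/{0,2}\**\s?todo):?\s*(?P<todo>.+)",
                                    re.IGNORECASE)

        def find_todos(infile):
            r"""
            >>> find_todos([r'//\todo stuff to fix'])
            ['stuff to fix']
            """
            return [m.group('todo')
                    for m in map(todo_match_obj.search, infile) if m]

        doctest.testmod()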

  • Extracting [] elements from a form collection - MVC - should I use ICollection with a mix of types?

    - by bergin
    Hi there. I have looked at Phil Haack's project on books at http://haacked.com/archive/2008/10/23/model-binding-to-a-list.aspx, which has been useful, but I have a mix of data types. I use a view model so that I can have a mix of objects, in this case: Order (i.e. order.id, order.date etc.), Customer, SoilSamplingOrder and a list of SoilSamplingSubJobs, which is like this: [0].id, [0].field, [1].id, [1].field etc. Perhaps I should be using ICollection instead of List? I had problems getting UpdateModel to work, so I used an extract-from-collection method. The first four method calls, orderRepository.FindOrder(id) etc., give the model the original to be edited, but after this point I'm a little lost in how to update the subjobs. I hope I have delineated enough to make sense of the problem.

        [HttpPost]
        public ActionResult Edit(int id, FormCollection collection)
        {
            Order order = orderRepository.FindOrder(id);
            Customer cust = orderRepository.FindCustomer(order.customer_id);
            IList<SoilSamplingSubJob> sssj = orderRepository.FindSubOrders(id);
            SoilSamplingOrder sso = orderRepository.FindSoilSampleOrder(id);
            try
            {
                UpdateModel(order, collection.ToValueProvider());
                UpdateModel(cust, collection.ToValueProvider());
                UpdateModel(sso, collection.ToValueProvider());
                IList<SoilSamplingSubJob> sssjs = orderRepository.extractSSSJ(collection);
                foreach (var sj in sssjs)
                    UpdateModel(sso, collection.ToValueProvider());
                orderRepository.Save();
                return RedirectToAction("Details", new { id = order.order_id });
            }
            catch
            {
                return View();
            }
        }

  • Google Reader API with Objective-C - Problem getting token

    - by JustinXXVII
    I am able to successfully get the SID (session ID) for my Google Reader account. In order to obtain the feed and do other operations inside Google Reader, you have to obtain an authorization token. I'm having trouble doing this. Can someone shed some light?

        // Create a cookie to append to the GET request
        NSDictionary *cookieDictionary = [NSDictionary dictionaryWithObjectsAndKeys:
            @"SID", @"NSHTTPCookieName",
            self.sessionID, @"NSHTTPCookieValue",
            @".google.com", @"NSHTTPCookieDomain",
            @"/", @"NSHTTPCookiePath",
            @"NSHTTPCookieExpires", @"160000000000", nil];
        NSHTTPCookie *authCookie = [NSHTTPCookie cookieWithProperties:cookieDictionary];

        // The URL to obtain the token from
        NSURL *tokenURL = [NSURL URLWithString:@"http://www.google.com/reader/api/0/token"];
        NSMutableURLRequest *tokenRequest = [NSMutableURLRequest requestWithURL:tokenURL];

        // Not sure if this is right: add cookie to an array, extract the headers
        // from the cookie inside the array...?
        [tokenRequest setAllHTTPHeaderFields:[NSHTTPCookie requestHeaderFieldsWithCookies:
            [NSArray arrayWithObjects:authCookie, nil]]];

        // This gives me an Error 403 Forbidden in didReceiveResponse of the delegate
        [NSURLConnection connectionWithRequest:tokenRequest delegate:self];

    I get a 403 Forbidden error as the response from Google. I'm probably not doing it right. I set the dictionary values according to the documentation for NSHTTPCookie.

  • C# LINQ to XML missing space character.

    - by Fossaw
    I write an XML file "by hand" (i.e. not with LINQ to XML), which sometimes includes an open/close tag pair containing a single space character. Upon viewing the resulting file, all appears correct. Example below:

        <Item>
          <ItemNumber>3</ItemNumber>
          <English> </English>
          <Translation>Ignore this one. Do not remove.</Translation>
        </Item>

    The reasons for doing this are various and irrelevant; it is done. Later, I use a C# program with LINQ to XML to read the file back and extract the record...

        XElement X_EnglishE = null;
        // This is CRAZY
        foreach (XElement i in Records)
        {
            X_EnglishE = i.Element("English");   // There is only one damned record!
        }
        string X_English = X_EnglishE.ToString();

    ...and test to make sure it is unchanged from the database record. I detect a change when processing Items where the field had the single space character:

        +E+ Text[3] English source has been altered:
        Was: >>> <<<
        Now: >>><<<

    The >>> and <<< parts I added to see what was happening (space characters are hard to see). I have fiddled around with this but can't see why this is so. It is not absolutely critical, as the field is not used (yet), but I cannot trust C# or LINQ or whatever is doing this if I do not understand why it is so. So what is doing that, and why?

  • Parse text using Scanner useDelimiter

    - by Brian
    Looking to parse the following text file. Sample text file:

        <2008-10-07text entered by user<2008-11-26additional text entered by user

    I would like to parse the above text so that I can have three variables:

        v1 = 2008-10-07
        v2 = text entered by user
        v3 = Ted Parlor

        v1 = 2008-11-26
        v2 = additional text entered by user
        v3 = Ted Parlor

    I attempted to use Scanner and useDelimiter; however, I'm having issues with how to set this up to produce the results stated above. Here's my first attempt:

        import java.io.*;
        import java.util.Scanner;

        public class ScanNotes {
            public static void main(String[] args) throws IOException {
                Scanner s = null;
                try {
                    //String regex = "(?<=\<)([^\*)(?=\)";
                    s = new Scanner(new BufferedReader(new FileReader("cur_notes.txt")));
                    s.useDelimiter("[<]+");
                    while (s.hasNext()) {
                        String v1 = s.next();
                        String v2 = s.next();
                        System.out.println("v1= " + v1 + " v2=" + v2);
                    }
                } finally {
                    if (s != null) {
                        s.close();
                    }
                }
            }
        }

    The result is as follows:

        v1= 2008-10-07text entered by user v2=Ted Parlor

    What I desire is:

        v1= 2008-10-07 v2=text entered by user v3=Ted Parlor
        v1= 2008-11-26 v2=additional text entered by user v3=Ted Parlor

    Any help that would allow me to extract all three strings separately would be greatly appreciated.
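
    The trouble with a bare "[<]+" delimiter is that the date and the following text end up in one token. A hedged Python sketch of an alternative: treat '<' as the record marker and peel the leading date off each record with a regex (a third field would be captured the same way, wherever it sits in the real file layout):

        # Hedged sketch: one match per '<' record; group 1 is the date,
        # group 2 is the free text that follows it.
        import re

        line = "<2008-10-07text entered by user<2008-11-26additional text entered by user"
        for m in re.finditer(r"<(\d{4}-\d{2}-\d{2})([^<]*)", line):
            print("v1=", m.group(1), "v2=", m.group(2))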

  • Project architecture for enhancing a legacy project

    - by vijay.shad
    I am working on a legacy project. Now the situation demands that the project be divided into parts. What strategy should I follow to do this?

    Description: the legacy project (A) is a fully functional web application with fairly well-defined layers. But now I need to extend the project with a further enhancement. The project uses Maven as its build tool, but only for dependency management (the project is exported to a WAR from inside Eclipse). The new enhancement needs me to add a new data table and new UI (JSP, CSS, JS and images). What should be my strategy to enhance the application?

    My proposed design: I am planning to create two new projects.

    Project B: the main enhancement work will be done in this project. It will have all layers (service layer, DAO layer and UI layer) in itself, and will be a web application in itself.

    Project C: extract some common model and service code from project A and create this project. This project will be added as a dependency to both of the other projects.

    If this approach is okay, then I presume there will be a problem in deployment. The two projects will demand to be deployed separately (currently Tomcat is used), but I must deploy them as one WAR. So I need a plan to change the web.xml entries to hold configurations for both projects, and this brings some more complexity. What should be my design for the project? Does my plan sound good?

  • MySQL data -> PHP arrays -> JavaScript (or jQuery): how can I pass the data with JSON?

    - by Alex
    I am starting with a new site (it's my first one) and I am having big trouble! I wrote this code:

        <?php
        include("misc.inc");
        $cxn = mysqli_connect($host, $user, $password, $database)
            or die("couldn't connect to server");
        $query = "SELECT DISTINCT country FROM stamps";
        $result = mysqli_query($cxn, $query) or die("couldn't execute query");
        $numberOfRows = mysqli_num_rows($result);
        for ($i = 0; $i < $numberOfRows; $i++) {
            $row = mysqli_fetch_assoc($result);
            extract($row);
            $a = json_encode($row);
            $a = $a . ",";
            echo $a;
        }
        ?>

    and the output is as follows:

        {"country":"liechtenstein"},{"country":"romania"},{"country":"jugoslavia"},{"country":"polonia"},

    which should be correct JSON output... How can I get it now in jQuery? I tried $.getJSON but I am not able to use it properly. I don't want to pass the data to a DIV or something similar in HTML yet. Alex
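
    One point worth checking first: the echoed output above is not actually valid JSON, because the objects are bare fragments with a trailing comma and no surrounding array, so $.getJSON will reject it. The usual fix is to collect the rows into one array and encode it with a single json_encode call. A Python sketch of the shape the browser expects (row values taken from the output above):

        # Hedged sketch: accumulate first, encode once -> one valid JSON array.
        import json

        rows = [{"country": "liechtenstein"}, {"country": "romania"},
                {"country": "jugoslavia"}, {"country": "polonia"}]
        print(json.dumps(rows))
        # [{"country": "liechtenstein"}, {"country": "romania"}, ...]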

  • Download-from-PyPI-and-install script

    - by zubin71
    Hello, I have written a script which fetches a distribution, given the URL. After downloading the distribution, it compares the md5 hashes to verify that the file has been downloaded properly. This is how I do it:

        def download(package_name, url):
            import urllib2
            downloader = urllib2.urlopen(url)
            package = downloader.read()
            package_file_path = os.path.join('/tmp', package_name)
            package_file = open(package_file_path, "w")
            package_file.write(package)
            package_file.close()

    I wonder if there is any better (more Pythonic) way to do what I have done in the above code snippet. Also, once the package is downloaded, this is what is done:

        def install_package(package_name):
            if package_name.endswith('.tar'):
                import tarfile
                tarfile.open('/tmp/' + package_name)
                tarfile.extract('/tmp')
            import shlex
            import subprocess
            installation_cmd = 'python %ssetup.py install' % ('/tmp/' + package_name)
            subprocess.Popen(shlex.split(installation_cmd))

    As there are a number of imports for the install_package method, I wonder if there is a better way to do this. I'd love to have some constructive criticism and suggestions for improvement. Also, I have only implemented the install_package method for .tar files; would there be a better manner by which I could install .tar.gz and .zip files too, without having to write separate methods for each of these?
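
    A hedged sketch of one possible cleanup: stream the download with shutil.copyfileobj, open the file in binary mode, keep the object that tarfile.open() returns (the snippet above discards it), and let tarfile/zipfile cover .tar, .tar.gz and .zip together. The unpacked directory name is a guess and is labelled as such:

        # Hedged sketch, not a drop-in replacement for the code above.
        import os
        import shutil
        import subprocess
        import tarfile
        import urllib2
        import zipfile

        def download(package_name, url):
            path = os.path.join('/tmp', package_name)
            response = urllib2.urlopen(url)
            with open(path, 'wb') as out:          # 'wb': archives are binary
                shutil.copyfileobj(response, out)
            return path

        def install_package(path):
            if path.endswith(('.tar', '.tar.gz', '.tgz')):
                archive = tarfile.open(path)       # open() *returns* the archive
                archive.extractall('/tmp')
            elif path.endswith('.zip'):
                zipfile.ZipFile(path).extractall('/tmp')
            # hypothetical: assumes the archive unpacks into a directory named
            # after the file minus its extension
            source_dir = path
            for ext in ('.tar.gz', '.tgz', '.tar', '.zip'):
                if source_dir.endswith(ext):
                    source_dir = source_dir[:-len(ext)]
                    break
            subprocess.check_call(['python', 'setup.py', 'install'], cwd=source_dir)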

  • Disassembling with Python - no easy solution?

    - by Abc4599
    Hi, I'm trying to create a Python script that will disassemble a binary (a Windows exe, to be precise) and analyze its code. I need the ability to take a certain buffer and extract some sort of struct containing information about the instructions in it. I've worked with libdisasm in C before, and I found its interface quite intuitive and comfortable. The problem is, its Python interface is available only through SWIG, and I can't get it to compile properly under Windows. On the availability front, diStorm provides a nice out-of-the-box interface, but it provides only the mnemonic of each instruction, and not a binary struct with enumerations defining instruction type and whatnot. This is quite uncomfortable for my purpose, and will require a lot of what I see as wasted time wrapping the interface to make it fit my needs. I've also looked at BeaEngine, which does in fact provide the output I need, a struct with binary info concerning each instruction, but its interface is really odd and counter-intuitive, and it crashes pretty much instantly when provided with wrong arguments: the ctypes sort of ultimate-death-to-your-Python crashes. So, I'd be happy to hear about other solutions which are a little less time-consuming than messing around with djgcc or mingw to make SWIGed libdisasm, or writing an OOP wrapper for diStorm. If anyone has some guidance as to how to compile SWIGed libdisasm, or better yet, a compiled binary (pyd or dll+py), I'd love to hear/have it. :) Thanks ahead.
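
    One more option worth naming: the Capstone disassembler ships a Python binding that returns exactly this kind of per-instruction structure (mnemonic, operand string, and, with details enabled, group metadata). A minimal sketch:

        # Hedged sketch using Capstone's Python binding (pip install capstone).
        from capstone import Cs, CS_ARCH_X86, CS_MODE_32

        md = Cs(CS_ARCH_X86, CS_MODE_32)
        md.detail = True                     # populate per-instruction metadata
        code = b"\x55\x89\xe5\x5d\xc3"       # push ebp; mov ebp,esp; pop ebp; ret
        for insn in md.disasm(code, 0x1000):
            print("0x%x: %-6s %-10s groups=%s"
                  % (insn.address, insn.mnemonic, insn.op_str, insn.groups))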

  • Parsing a tab-delimited file with double quotes in Perl

    - by sfactor
    I have a data set that is tab-delimited, with the user-agent strings in double quotes. I need to parse each of these columns, and based on the answer to my other post I used the Text::CSV module.

        94410634    0   GET   "Mozilla/4.0 (compatible; MSIE 8.0; Windows NT 5.1; Trident/4.0; GTB6.6; .NET CLR 2.0.50727; .NET CLR 3.0.04506.648; .NET CLR 3.5.21022; .NET CLR 3.0.4506.2152; .NET CLR 3.5.30729; AskTB5.5)"   1

    The code is a simple one:

        #!/usr/bin/perl
        use strict;
        use warnings;
        use Text::CSV;

        my $csv = Text::CSV->new(sep_char => "\t");

        while (<>) {
            if ($csv->parse($_)) {
                my @columns = $csv->fields();
                print "@columns\n";
            } else {
                my $err = $csv->error_input;
                print "Failed to parse line: $err";
            }
        }

    But I get the "Failed to parse line:" error when I try it on this data set. What am I doing wrong? I need to extract the fourth column containing the user-agent strings for further processing.
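
    Two hedged notes: first, Text::CSV's constructor takes its attributes as a hash reference, i.e. Text::CSV->new({ sep_char => "\t" }), which is worth checking before anything else; second, the same parse is available out of the box in Python's csv module, sketched here against the line layout above:

        # Hedged sketch: tab-delimited input with double-quoted fields.
        import csv
        import sys

        for row in csv.reader(sys.stdin, delimiter="\t", quotechar='"'):
            if len(row) >= 4:
                print(row[3])      # the user-agent column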

  • Making one of a group of similar form fields required in CakePHP

    - by Pickledegg
    I have a bunch of name/email fields in my form, like this:

        data[Friend][0][name]
        data[Friend][1][name]
        data[Friend][2][name]
        ...

    and

        data[Friend][0][email]
        data[Friend][1][email]
        data[Friend][2][email]
        ...

    I have a custom validation rule on each one that checks to see if the corresponding field is filled in, i.e. if data[Friend][2][name] is set, then data[Friend][2][email] MUST be filled in. FYI, here's what one of the two rules looks like (I have an email validation rule too, but that's irrelevant here). My form validation rule:

        'name' => array(
            'checkEmail' => array(
                'rule' => 'hasEmail',
                'message' => 'You must fill in the name field',
                'last' => true
            )
        )

    My custom rule code:

        function hasEmail($data) {
            $name = array_values($data);
            $name = $name[0];
            if (strlen($name) == 0) {
                return empty($this->data['Friend']['email']);
            }
            return true;
        }

    I need to make it so that at least one of the pairs is filled in as a minimum; it can be any of them, as long as the indexes correspond. I can't figure out a way, because if I set the form rule to be required or allowEmpty false, it fails on ALL empty fields. How can I check for the existence of one complete pair and, if present, carry on? Also, I need to strip out all of the remaining empty [Friend] fields so my saveAll() doesn't save a load of empty rows, but I think I can handle that part using extract in my controller. The main problem is this validation. Thanks.

  • How to write a regex that searches for a dynamic number of pairs?

    - by citronas
    Let's say I have a string such as this one:

        string txt = "Lore ipsum {{abc|prop1=\"asd\";prop2=\"bcd\";}} asd lore ipsum";

    The information I want to extract is "abc" and pairs like ("prop1", "asd"), ("prop2", "bcd"), where each pair uses a ; as its delimiter.

    EDIT 1 (based on MikeB's code): Ah, getting close. I found out how to parse the following:

        string txt = "Lore ipsum {{abc|prop1=\"asd\";prop2=\"http:///www.foo.com?foo=asd\";prop3=\"asd\";prop4=\"asd\";prop5=\"asd\";prop6=\"asd\";}} asd";

        Regex r = new Regex("{{(?<single>([a-z0-9]*))\\|((?<pair>([a-z0-9]*=\"[a-z0-9.:/?=]*\";))*)}}",
                            RegexOptions.Singleline | RegexOptions.IgnoreCase);
        Match m = r.Match(txt);
        if (m.Success)
        {
            Console.WriteLine(m.Groups["single"].Value);
            foreach (Capture cap in m.Groups["pair"].Captures)
            {
                Console.WriteLine(cap.Value);
            }
        }

    Question 1: How must I adjust the regex to say "each value of a pair is delimited by \" only"? I added chars like '.', ':' etc. to the character class, but I can't enumerate every char I might need to permit. The other way around would be much nicer.

    Question 2: How must I adjust this regex to work with this string?

        string txt = "Lore ipsum {{abc|prop1=\"asd\";prop2=\"http:///www.foo.com?foo=asd\";prop3=\"asd\";prop4=\"asd\";prop5=\"asd\";prop6=\"asd\";}} asd lore ipsum {{aabc|prop1=\"asd\";prop2=\"http:///www.foo.com?foo=asd\";prop3=\"asd\";prop4=\"asd\";prop5=\"asd\";prop6=\"asd\";}}";

    Therefore I'd probably try to get groups of {{...}} and use the other regex?
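
    Both questions have the same hedged answer in regex terms: a negated character class ([^"]*) expresses "anything but the closing quote" without enumerating permitted characters, and iterating over all matches handles any number of {{...}} blocks. A Python sketch of that shape:

        # Hedged sketch: the outer pass finds each {{name|...}} block, the inner
        # pass pulls out key="value"; pairs with a negated class for the value.
        import re

        txt = ('Lore ipsum {{abc|prop1="asd";prop2="http:///www.foo.com?foo=asd";}} '
               'asd lore ipsum {{aabc|prop1="bcd";}}')
        for block in re.finditer(r'\{\{(\w+)\|(.*?)\}\}', txt):
            name, body = block.group(1), block.group(2)
            pairs = re.findall(r'(\w+)="([^"]*)";', body)
            print(name, pairs)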

  • CSRF Protection in AJAX Requests using MVC2

    - by mnemosyn
    The page I'm building depends heavily on AJAX. Basically, there is just one "page" and every data transfer is handled via AJAX. Since overoptimistic caching on the browser side leads to strange problems (data not reloaded), I have to perform all requests (also reads) using POST, as that forces a reload. Now I want to protect the page against CSRF. With form submission, using Html.AntiForgeryToken() works neatly, but in AJAX requests, I guess I will have to append the token manually? Is there anything out-of-the-box available? My current attempt looks like this; I'd love to reuse the existing magic. However, HtmlHelper.GetAntiForgeryTokenAndSetCookie is private and I don't want to hack around in MVC. The other option is to write an extension like:

        public static string PlainAntiForgeryToken(this HtmlHelper helper)
        {
            // extract the actual field value from the hidden input
            return helper.AntiForgeryToken().DoSomeHackyStringActions();
        }

    which is somewhat hacky and leaves the bigger problem unsolved: how to verify that token? The default verification implementation is internal and hard-coded against using form fields. I tried to write a slightly modified ValidateAntiForgeryTokenAttribute, but it uses an AntiForgeryDataSerializer which is private, and I really didn't want to copy that, too. At this point it seems easier to come up with a homegrown solution, but that is really duplicate code. Any suggestions how to do this the smart way? Am I missing something completely obvious?

  • Parsing / Extracting Text from String in Rails?

    - by user641116
    I have a string in Rails, e.g. "This is a Twitter message. #books War & Peace by Leo Tolstoy. I love this book!", and I want to parse the text and extract only certain phrases, like "War & Peace by Leo Tolstoy". Is this a matter of using a regex and lifting the text between "#books" and the next "."? What if there's no structure to the message, like:

        "This is a Twitter message #books War & Peace by Leo Tolstoy I love this book!"

    or:

        "This is a Twitter message. I love the book War & Peace by Leo Tolstoy #books"

    How can I reliably pull the phrase "War & Peace by Leo Tolstoy" without knowing the phrase ex ante? Are there any gems, methods, etc. that can help me do this? At the very least, what would you call what I'm trying to do? It will help me search for a solution on Google. I've tried a few searches on "parsing" with no luck.
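
    The general term to search for is probably "information extraction" or "entity extraction". The first two forms (tag, then phrase) can be handled by anchoring on the hashtag and stopping at the next sentence boundary; the last form (phrase before the tag) genuinely needs different handling, which is part of why this is hard without structure. A hedged Python sketch of the anchored heuristic:

        # Hedged sketch: grab what follows #books up to the next sentence-ending
        # punctuation; with no punctuation at all, the end boundary is a guess.
        import re

        def book_phrase(msg):
            m = re.search(r'#books\s+(.+?)(?=\s*[.!?]|$)', msg)
            return m and m.group(1)

        print(book_phrase("This is a Twitter message. #books War & Peace by Leo Tolstoy. I love this book!"))
        # -> War & Peace by Leo Tolstoy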

  • Find the first item inside angle brackets after an occurrence of another item, using regex, in C#

    - by Mihaela
    I have XML-like text in which I would like to find the item that occurs in the first occurrence of a certain pattern, typically:

        ...<PropertyGroup><name>true</name></PropertyGroup><PropertyGroup>...

    It could also be:

        ...
        <PropertyGroup>
          <name>
          true</name>
        </PropertyGroup>
        ...
        <PropertyGroup>
        ...

    In the above, I need to extract "name". My initial assumption was that all occurrences were to be on one line, and I wrote my code using string methods, but it is very difficult to take every possibility into consideration, and only a regex can save me. I just don't know how to write it... I have started with something like this:

        Regex regex = new Regex("(?<=<PropertyGroup>#)<+");
        Match matches = regex.Matches(Text)[0];
        MessageBox.Show(matches.ToString());

    I think this finds the first item after a <PropertyGroup>, but I don't know how to make it get the item within the angle brackets (which may be after one or more newlines and/or spaces). I know that there are utilities for parsing XML, but I am looking for something simple to insert in a C# program. Can someone please help me? Thank you very much.
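
    The missing piece is letting whitespace (including newlines) sit between the two tags and capturing the inner tag's name rather than matching the bracket characters themselves. A hedged Python sketch of that pattern, checked against both layouts above:

        # Hedged sketch: \s* absorbs any spaces/newlines between the tags, and
        # the group captures the first tag name inside <PropertyGroup>.
        import re

        text = "<PropertyGroup>\n  <name>\n true</name>\n</PropertyGroup>"
        m = re.search(r'<PropertyGroup>\s*<([^>\s/]+)>', text)
        if m:
            print(m.group(1))      # -> name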

  • How do I match complete XML objects in a string?

    - by cyclotis04
    I'm attempting to find complete XML objects in a string. They have been placed in the string by an XmlSerializer, but may or may not be complete. I've toyed with the idea of using a regular expression, because it seems like the kind of thing they were built for, except for the fact that I'm trying to parse XML. I'm trying to find complete objects in the form:

        <?xml version="1.0"?>
        <type>
          <field>value</field>
          ...
        </type>

    My thought was a regex to find <?xml version="1.0"?><type> and </type>, but if a field has the same name as type, it obviously won't work. There's plenty of documentation on XML parsers, but they all seem to need a complete, fully formed document to parse. My XML objects can be in a string surrounded by pretty much anything else (including other complete objects):

        hw<e>reR@lot$0fr@ndm&nchrs%<?xml version="1.0"?><type><field>...</field>...</type>@ndH#r$omOre!!>nuT6erjc?y!<?xml version="1.0"?><type><field>...</field>...</type>ty!=]

    A regex would be able to match a string while excluding the random characters, but not find a complete XML object. I'd like some way to extract an object, parse it with a serializer, then repeat until the string contains no more valid objects.
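
    A hedged approach that sidesteps the nesting problem: use the XML prolog only to locate candidate start positions, then let a real parser decide where each document ends, by shrinking the candidate until it parses cleanly. A Python sketch of that idea:

        # Hedged sketch: the regex finds candidates, the parser decides
        # completeness.  Quadratic in the worst case; fine for short strings.
        import re
        import xml.etree.ElementTree as ET

        def extract_documents(s):
            prolog = '<?xml version="1.0"?>'
            starts = [m.start() for m in re.finditer(re.escape(prolog), s)]
            starts.append(len(s))
            docs = []
            for start, nxt in zip(starts, starts[1:]):
                chunk = s[start:nxt]
                for cut in range(len(chunk), 0, -1):
                    if chunk[cut - 1] != '>':
                        continue       # a document can only end at a '>'
                    try:
                        docs.append(ET.fromstring(chunk[:cut]))
                        break
                    except ET.ParseError:
                        pass
            return docs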

  • Getter and Setter vs. Builder strategy

    - by Extrakun
    I was reading a JavaWorld article on getters and setters, whose basic premise is that getters expose the internal content of an object, hence tightening coupling, and which goes on to provide examples using builder objects. I was rather leery of abolishing getters/setters, but on a second reading of the article I came to quite like the idea. However, sometimes I just need one crucial element of an entity class, such as the user's ID, and writing one whole class just to extract that crucial element seems like overkill. It also implies that for each different view, a different type of importer/exporter must be implemented (or the whole data of the class must be exported, thus resulting in waste). Usually I tend towards filtering the result of a getter; for example, if I need to output the price of a product in a different currency, I would code it as:

        return CurrencyOutput::convertTo($product->price(), 'USD');

    This is with the understanding that the raw output of a getter is not necessarily the final result to be pushed onto a screen or a database. Is getter/setter really as bad as it is portrayed to be? When should one adopt a builder strategy, or a "get the result and filter it" approach? How do you avoid having a class needing to know about every other object if you are not using getters/setters?

  • I cannot grok MVC, what it is, and what it is not?

    - by Hao
    I cannot grok what MVC is. What mindset or programming model should I acquire so that the MVC stuff can instantly "lightbulb" in my head? If not instantly, what simple programs/projects should I try first so I can apply the neat things MVC brings to programming? OOP is intuitive and easier; objects are all around us, and the benefits of code reuse using the OOP paradigm instantly click with anyone. You can probably talk to anybody about OOP in a few minutes, lecture with some examples, and they will get it. While OOP somehow raises the intuitiveness aspect of programming, MVC seems to do the opposite. I'm getting negative thoughts that some future employers (or even clients) will look down on me for not using MVC technology. Though I probably get the skinnable aspect of MVC, when I try to apply it to my own project I don't know where to start. Also, some programmers have diverging views on how to accomplish MVC properly. Take, for instance, this from Jeff's post about MVC:

        The view is simply how you lay the data out, how it is displayed. If you
        want a subset of some data, for example, my opinion is that is a
        responsibility of the model.

    So maybe some programmers use MVC, but somehow inadvertently use the view or the controller to extract a subset of data. Why can't we have a definitive definition of what MVC is and how to accomplish it properly? And also, when I search for MVC .NET programs, most of what I find applies to web programs, not desktop apps, which intrigues me further. My guess is that MVC is most advantageous for web apps; there's not much problem with intermixed view (HTML) and controller (program code) in desktop apps.
