Search Results

Search found 277 results on 12 pages for 'parsers'.

  • XAML, not WPF

    - by xan
    I am interested in using XAML with Expression Blend for creating user interfaces in an application. However, because of the limitations of the target architecture, I cannot use WPF or C#. So what I am interested in is any examples, existing projects, or advice from anyone who has experience with this technology on the use of XAML in its "pure" form, as a specification language not tied to WPF. Specific questions:
    1) Is it possible to use Blend + XAML without the WPF elements, or without C# backing classes?
    2) Are there any other implementations of XAML parsers etc. which use different architectures, and can they work with Blend or similar tools?
    3) Are there alternative editor/designer tools which can help in this situation?
    I am aware of the MyXaml and MycroXaml projects, and have found a lot of resources on the web about XAML, but 99% of it relates directly to WPF. This is fine for understanding the concepts of XAML, but doesn't help with the implementation I need. Many thanks!

  • Pros and Cons of Java HTML to XML cleaners

    - by cjavapro
    I am looking to allow HTML emails (and other HTML uploads) without letting in scripts and the like. I plan to have a whitelist of safe tags and attributes, as well as a whitelist of CSS properties and value regexes (to prevent automatic return receipts). I asked a question: Parse a badly formatted XML document (like an HTML file). I found there are many, many ways to do this. Some systems have built-in sanitizers (which I don't care so much about). I will post some answers and mark them Community Wiki. Please post any other options you like and mark them Community Wiki so they can be voted on. Also, any comments or wiki edits on what part of a certain product is better and what is not would be greatly appreciated. This page is a very nice listing, but I get kind of lost in it: http://java-source.net/open-source/html-parsers
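
    One library that takes exactly this whitelist approach is jsoup; a minimal sketch of its cleaner (the whitelist choice here is illustrative, and newer jsoup versions rename Whitelist to Safelist):

        import org.jsoup.Jsoup;
        import org.jsoup.safety.Whitelist;

        public class HtmlCleaner {
            public static void main(String[] args) {
                String dirty = "<p onclick=\"evil()\">Hello <script>steal()</script><b>world</b></p>";
                // Whitelist.basic() keeps simple text-formatting tags (a, b, em, i, p, ...)
                // and strips everything else, including scripts and event handlers.
                System.out.println(Jsoup.clean(dirty, Whitelist.basic()));
                // -> <p>Hello <b>world</b></p>
            }
        }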

  • Designing a service for consumption on multiple mobile platforms

    - by Nate Bross
    I am building and designing a (mostly) read-only interface to some data. I'll be using ASP.NET MVC to build a pseudo-RESTful API. I'm wondering if anyone can provide some resources for building full client applications for various mobile platforms: iPhone, Android, Blackberry, Windows Mobile, etc. I'm thinking that serving up XML data is going to be the most simple and universal, but parsing XML in Objective-C, for example, doesn't sound like fun to me; maybe there are some good libraries out there to help ease this task? In other words, what format will be the quickest to implement on the client side? Are there any JSON parsers for iPhone or Android? I know there are .NET JSON parsers, but I'm not sure about other platforms. Is there another format that might work better? Or should I stick with pure XML and deal with it on each platform differently?
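
    For what it's worth, client-side JSON parsers do exist: Android bundles the org.json classes, and iPhone-side Objective-C libraries such as SBJson (json-framework) and TouchJSON fill the same role. A minimal Android-flavored sketch (the field names are invented for illustration):

        import org.json.JSONException;
        import org.json.JSONObject;

        public class PayloadParser {
            public static void main(String[] args) throws JSONException {
                // Hypothetical response body; the field names are assumptions.
                String body = "{\"id\": 42, \"title\": \"Sample record\"}";
                JSONObject record = new JSONObject(body);
                System.out.println(record.getInt("id"));       // 42
                System.out.println(record.getString("title")); // Sample record
            }
        }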

  • Right recursive grammar or left recursive?

    - by user2485710
    I have little to no knowledge of what I'm about to ask, so I would like a suggestion based on the level of skill required to implement a parser for the given grammar (since I'm a beginner in this kind of formal approach to parsers and languages). Going back a couple of years, this situation reminds me a little of the Pascal grammar vs. the C/C++ grammar, this left-vs-right stuff. But I'm not going to do any of that; my purpose is to implement a simple parser for a markup language for documents, like Markdown. So, considering that I'm starting with a markup language in mind and I want to keep things simple: which of the two options is easier to handle, and why? Would another kind of grammar be an easier option for me? If yes, which one do you suggest?
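
    For context on why the left/right distinction matters to a hand-written parser: a left-recursive rule such as expr -> expr '+' term makes a naive recursive-descent parser call itself before consuming any input, so it recurses forever. The standard fix is to rewrite the rule right-recursively or, equivalently, as a loop. A minimal sketch of the looped form (grammar and names invented for illustration):

        // Grammar, after left recursion is removed:
        //   expr -> term ('+' term)*
        //   term -> a single digit
        public class TinyParser {
            private final String input;
            private int pos = 0;

            TinyParser(String input) { this.input = input; }

            int expr() {
                int value = term();
                while (pos < input.length() && input.charAt(pos) == '+') {
                    pos++;              // consume '+'
                    value += term();
                }
                return value;
            }

            int term() {
                return input.charAt(pos++) - '0';   // consume one digit
            }

            public static void main(String[] args) {
                System.out.println(new TinyParser("1+2+3").expr()); // prints 6
            }
        }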

  • What is "=C2=A0" in MIME encoded, quoted-printable text?

    - by TheSoftwareJedi
    This is an example raw email I am trying to parse:

        MIME-version: 1.0
        Content-type: text/html; charset=UTF-8
        Content-transfer-encoding: quoted-printable
        X-Mailer: Verizon Webmail
        X-Originating-IP: [x.x.x.x]

        =C2=A0test testing testing 123

    What is =C2=A0? I have tried a half dozen quoted-printable parsers, but none handle this correctly. Honestly, for now, I'm coding:

        //TODO WTF
        encoded = encoded.Replace("=C2=A0", "");

    because I can't figure out why that text is there, seemingly at random, within the MIME content when it isn't supposed to render into anything. By just removing it, I'm getting the desired effect, but WHY?!
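
    For the record, =C2=A0 is the quoted-printable encoding of the two raw bytes 0xC2 0xA0, which in turn are the UTF-8 encoding of U+00A0, a non-breaking space (this matches the charset=UTF-8 header above). A quick sketch confirming the decoding:

        import java.nio.charset.StandardCharsets;

        public class DecodeNbsp {
            public static void main(String[] args) {
                // Quoted-printable =C2=A0 decodes to the raw bytes 0xC2 0xA0 ...
                byte[] raw = {(byte) 0xC2, (byte) 0xA0};
                // ... which, read as UTF-8 per the charset header, is one character:
                String decoded = new String(raw, StandardCharsets.UTF_8);
                System.out.println((int) decoded.charAt(0)); // 160, i.e. U+00A0 NO-BREAK SPACE
            }
        }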

  • Maven: Multiple classes with the same path provided by different jars

    - by Phuong Nguyen de ManCity fan
    I'm running into trouble with multiple classes that have the same path (i.e., the same name in the same package!!!). For some reason, gwt-dev comes with its own version of org.apache.xerces.jaxp.DocumentBuilderFactoryImpl and javax.xml.parsers.DocumentBuilderFactory. At the same time, Spring also depends on these classes, but from a different jar. I don't know what it should be, but it looks like xalan and xml-apis are the two dependencies that Spring relies on (these dependencies are optional). The funny thing is that Eclipse can run the same code (it's a unit test) without problems, but Surefire cannot. So I guess the problem is due to the way each runner prioritizes the jars. Now to the question: how can I set up my POM to make sure that, whenever any code runs inside my app, the class from one jar is selected over the class from the other jar? Thanks.
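
    If the duplicate classes arrive via a transitive dependency (rather than being repackaged inside the gwt-dev jar itself), the usual Maven lever is an exclusion, which removes the unwanted jar from the classpath entirely instead of relying on ordering. A sketch, with placeholder version and artifact coordinates:

        <dependency>
          <groupId>com.google.gwt</groupId>
          <artifactId>gwt-dev</artifactId>
          <version>${gwt.version}</version>
          <exclusions>
            <!-- Drop the bundled XML parser so the Spring-side xalan/xml-apis win. -->
            <exclusion>
              <groupId>xerces</groupId>
              <artifactId>xercesImpl</artifactId>
            </exclusion>
          </exclusions>
        </dependency>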

  • How to translate Haskell into Scalaz?

    - by TOB
    One of my high school students and I are going to try to port Haskell's Parsec parser combinator library to Scala. (It has the advantage over Scala's built-in parsing library that you can pass state around fairly easily, because all the parsers are monads.) The first hitch I've come across is trying to figure out how Functor works in Scalaz. Can someone explain how to convert this Haskell code:

        data Reply s u a = Ok a !(State s u) ParseError
                         | Error ParseError

        instance Functor (Reply s u) where
            fmap f (Ok x s e) = Ok (f x) s e
            fmap _ (Error e)  = Error e -- XXX

    into Scala (using Scalaz, I assume)? I got as far as:

        sealed abstract class Reply[S, U, A]
        case class Ok[S, U, A](a: A, state: State[S, U], error: ParseError) extends Reply[S, U, A]
        case class Error[S, U, A](error: ParseError) extends Reply[S, U, A]

    and I know that I should make Reply extend the scalaz.Functor trait, but I can't figure out how to do that. (Mostly I'm having trouble figuring out what the F[_] parameter does.) Any help appreciated! Thanks, Todd

  • Processing JSON objects in JSP

    - by user98534
    I have a JSON object sent from the browser to a JSP page. How do I receive that object and process it in JSP? Do I need any specific parsers? Essentially I should read the contents of the object and print them in the JSP. I have used something like the following piece of code:

        <%@ page language="java" import="org.json.JSONArray" %>
        <%
            // request.getParameter returns a String; it has to be parsed first,
            // and an index-based loop calls for a JSON array, not an object.
            JSONArray inp = new JSONArray(request.getParameter("param1"));
            for (int i = 0; i < inp.length(); i++) {
        %>
            <%= inp.getString(i) %>
        <%
            }
        %>

  • Which RDFa parser for Java supports currently used RDFa attributes?

    - by lennyks
    I am building an app in Java using Jena for semantic information scraping. I am looking for an RDFa parser that would allow me to correctly extract all the RDFa statements: specifically, one that extracts info about the namespaces used and, presuming the RDFa tags in the page are correct, produces correct triples, ones that distinguish between object and data properties. I went through all the RDFa parsers for Java from the site http://rdfa.info/wiki/Consume. They all struggle to extract any RDFa statements, and if they don't crash outright, the Jena RDFa parser shows plenty of errors and then dies a terrible death; the data is of little use, as it is incorrectly processed and generally mixed up. I am a newbie in this area, so please be gentle :) I was also thinking of using a library written in a different language, but then again I don't really know how to plug it into Java code. Any suggestions?

  • Are there .NET Framework methods to parse an email (MIME)?

    - by Neil C. Obremski
    Is there a class or set of functions built into the .NET Framework (3.5+) to parse raw emails (MIME documents)? I am not looking for anything fancy or a separate library; it needs to be built in. I'm going to be using this in some unit tests and only need to grab the main headers of interest (To, From, Subject) along with the body (which in this case will always be text, and therefore no MIME trees or boundaries). I've written several MIME parsers in the past, and if there isn't anything readily available, I'll just put together something from regular expressions. It would be great to be able to do something like: MailMessage msg = MailMessage.Parse(text); Thoughts?

  • Generating 8000 text files from XML files

    - by Ray
    Hi all, I need to generate the same number of text files as the XML files I have. Within the text files, I need the title and maybe some other tags. I can generate text files with the elements I want, but text files are not generated for all of the XML files; only some of them. Something might be wrong with my parser, so please help out. This is my code; please have a look and give me suggestions. Thanks in advance.

        import java.io.BufferedWriter;
        import java.io.File;
        import java.io.FileWriter;
        import java.io.Writer;
        import javax.xml.parsers.DocumentBuilder;
        import javax.xml.parsers.DocumentBuilderFactory;
        import org.w3c.dom.Document;
        import org.w3c.dom.Element;
        import org.w3c.dom.Node;
        import org.w3c.dom.NodeList;

        public class AccessingXmlFile1 {
            public static void main(String[] argv) {
                try {
                    // The directory holding the XML files.
                    File aDirectory = new File("C:/Documents and Settings/I2R/Desktop/test");
                    String[] filesInDir = aDirectory.list();
                    String newLine = System.getProperty("line.separator");
                    System.out.println(filesInDir.length);

                    for (String xmlFile : filesInDir) {
                        if (!xmlFile.toLowerCase().endsWith(".xml")) continue; // skip non-XML files

                        // Resolve against the directory; a bare file name would be looked
                        // up in the current working directory and fail for most files.
                        File file = new File(aDirectory, xmlFile);
                        DocumentBuilder db = DocumentBuilderFactory.newInstance().newDocumentBuilder();
                        Document document = db.parse(file);
                        document.getDocumentElement().normalize();

                        System.out.println("Information of XML file");
                        System.out.println(xmlFile.substring(0, xmlFile.length() - 4));

                        NodeList node = document.getElementsByTagName("metadata");
                        String titleStoreText = "";
                        String collectionStoreText = "";
                        String descriptionStoreText = "";
                        String textToWrite = "";

                        for (int i = 0; i < node.getLength(); i++) {
                            Node firstNode = node.item(i);
                            if (firstNode.getNodeType() != Node.ELEMENT_NODE) continue;
                            Element element = (Element) firstNode;

                            // Test for a missing element *before* dereferencing it;
                            // otherwise a file lacking the tag dies with an NPE.
                            Element titleElement = (Element) element.getElementsByTagName("title").item(0);
                            if (titleElement == null)
                                titleStoreText = "There is no title for this file." + newLine;
                            else
                                titleStoreText += titleElement.getChildNodes().item(0).getNodeValue() + newLine;
                            System.out.println("Title : " + titleStoreText);

                            Element collectionElement = (Element) element.getElementsByTagName("collection").item(0);
                            if (collectionElement == null)
                                collectionStoreText = "There is no collection for this file." + newLine;
                            else
                                collectionStoreText += collectionElement.getChildNodes().item(0).getNodeValue() + newLine;
                            System.out.println("Collection : " + collectionStoreText);

                            Element descriptionElement = (Element) element.getElementsByTagName("description").item(0);
                            if (descriptionElement == null)
                                descriptionStoreText = "There is no description for this file." + newLine;
                            else
                                descriptionStoreText += descriptionElement.getChildNodes().item(0).getNodeValue() + newLine;
                            System.out.println("Description : " + descriptionStoreText);

                            textToWrite = "=====Title=====" + newLine + titleStoreText + newLine
                                    + "=====Collection=====" + newLine + collectionStoreText + newLine
                                    + "=====Description=====" + newLine + descriptionStoreText;
                        }

                        // Write the extracted fields to a .txt named after the XML file.
                        Writer output = new BufferedWriter(new FileWriter(
                                new File(xmlFile.substring(0, xmlFile.length() - 4) + ".txt")));
                        output.write(textToWrite);
                        output.close();
                        System.out.println("Your file has been written");
                    }
                } catch (Exception e) {
                    e.printStackTrace();
                }
            }
        }

  • Custom parser for jQuery Tablesorter

    - by Tim
    I'm using the jQuery Tablesorter and have an issue with the order in which parsers are applied to table columns. I'm adding a custom parser to handle currency of the form $-3.33:

        $.tablesorter.addParser({
            id: "fancyCurrency",
            is: function(s) {
                return /^\$[\-]?[0-9,\.]*$/.test(s);
            },
            format: function(s) {
                s = s.replace(/[$,]/g, '');
                return $.tablesorter.formatFloat(s);
            },
            type: "numeric"
        });

    The problem seems to be that the built-in currency parser takes precedence over my custom parser. I could put my parser in the Tablesorter code itself (before the currency parser), and then it works properly, but this isn't very maintainable. I also can't specify the sorter manually using something like:

        headers: {
            3:  { sorter: "fancyNumber" },
            11: { sorter: "fancyCurrency" }
        }

    since the table columns are generated dynamically from user inputs. I guess one option would be to specify the sorter as a CSS class and use some jQuery to explicitly pick a sorter, as this question suggests, but I'd prefer to stick with dynamic detection if possible.

  • How do I make BeautifulSoup parse the contents of textarea tags as HTML?

    - by brofield
    Before 3.0.5, BeautifulSoup used to treat the contents of <textarea> as HTML. It now treats it as text. The document I am parsing has HTML inside the textarea tags, and I am trying to process it. I've tried:

        for textarea in soup.findAll('textarea'):
            contents = BeautifulSoup.BeautifulSoup(textarea.contents)
            textarea.replaceWith(contents.html(text=True))

    but I'm getting errors. I can't find this in the documentation, and the alternative parsers aren't helping. Does anyone know how I can parse the textareas as HTML?

  • Scala's lazy arguments: How do they work?

    - by python dude
    In the file Parsers.scala (Scala 2.9.1) from the parser combinators library, I seem to have come across a lesser-known Scala feature called "lazy arguments". Here's an example:

        def ~ [U](q: => Parser[U]): Parser[~[T, U]] = {
          lazy val p = q // lazy argument
          (for(a <- this; b <- p) yield new ~(a,b)).named("~")
        }

    Apparently, there's something going on here with the assignment of the call-by-name argument q to the lazy val p. So far I have not been able to work out what this does and why it's useful. Can anyone help?
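
    As a rough analogy for readers more comfortable in Java-land: the by-name parameter q behaves like a Supplier that would be re-evaluated on every use, and lazy val p = q memoizes it so it is evaluated at most once, and only on first use. A sketch of that caching behavior (an analogy only, not full Scala semantics):

        import java.util.function.Supplier;

        public class LazyArg {
            // ~ `lazy val p = q`, where q is a by-name argument
            static <T> Supplier<T> lazily(Supplier<T> byName) {
                return new Supplier<T>() {
                    private T cached;                              // memoized result
                    public T get() {
                        if (cached == null) cached = byName.get(); // evaluate at most once
                        return cached;
                    }
                };
            }

            public static void main(String[] args) {
                Supplier<Integer> p = lazily(() -> {
                    System.out.println("evaluating q");
                    return 42;
                });
                System.out.println(p.get()); // prints "evaluating q", then 42
                System.out.println(p.get()); // cached: prints just 42
            }
        }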

  • How to get an ordered list parsed by an XML parser?

    - by Matthias Doringer
    I am using an XSD schema file in which I specified an ordered list type. When parsing an XML node of the kind

        <myOrderedList>
            "element_1" "element_2" "element_3"
        </myOrderedList>

    (which is valid XML syntax), all XML parsers I know parse this as a single text node. Is there a way to get the XML parser to parse this list for me (return it as a list or an array or whatever), or do I always have to parse it myself?
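
    With a plain DOM parser, at least, the usual answer is to split the text content yourself; xs:list values are whitespace-separated, so a one-line split does it. A minimal sketch (element name taken from the question):

        import java.io.ByteArrayInputStream;
        import java.util.Arrays;
        import javax.xml.parsers.DocumentBuilderFactory;
        import org.w3c.dom.Document;

        public class ListSplitter {
            public static void main(String[] args) throws Exception {
                String xml = "<myOrderedList> \"element_1\" \"element_2\" \"element_3\" </myOrderedList>";
                Document doc = DocumentBuilderFactory.newInstance().newDocumentBuilder()
                        .parse(new ByteArrayInputStream(xml.getBytes("UTF-8")));
                // xs:list items are separated by whitespace, so split recovers them in order.
                String[] items = doc.getDocumentElement().getTextContent().trim().split("\\s+");
                System.out.println(Arrays.toString(items)); // ["element_1", "element_2", "element_3"]
            }
        }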

  • Should I learn Haskell or F# if I already know OCaml?

    - by Unknown
    I am wondering if I should continue to learn OCaml or switch to F# or Haskell. Here are the criteria I am most interested in:

    Longevity: Which language will last longer? I don't want to learn something that might be abandoned in a couple of years by users and developers. Will INRIA, Microsoft, and the University of Glasgow continue to support their respective compilers for the long run?

    Practicality: Articles like this make me afraid to use Haskell. A hash table is the best structure for fast retrieval, yet Haskell proponents there suggest using Data.Map, which is a binary tree. And I don't like being tied to a bulky .NET framework unless the benefits are large. I want to be able to develop more than just parsers and math programs.

    Well designed: I like my languages to be consistent.

    Please support your opinion with logical arguments and citations from articles. Thank you.

  • JS Framework that doesn't use CSS selectors?

    - by RoToRa
    A thing that I noticed about most JavaScript frameworks is that the most common way to find/access DOM elements is to use CSS selectors. However, this usually requires the framework to include a CSS selector parser, because it needs to support selectors that the browser doesn't handle natively, foremost the framework's own proprietary extensions. I would think that these parsers are large and slow. Wouldn't it be more efficient to have something that doesn't require a parser, such as chained method calls? Something like:

        id("example").children().class("test").hasAttribute("href")

    instead of:

        $("#example > .test[href]")

    Are there any frameworks around that do something like this? And how do they compare with jQuery and friends in regard to performance and size? EDIT: You can consider this a theoretical discussion topic. I don't plan to use anything other than jQuery in any practical projects in the near future. I was just wondering why there aren't any other, possibly better approaches.

  • Parsing HTML to find specific links (without keywords)

    - by Brett Powell
    I posted about this sort of thing earlier, but I am not sure how to post back to my original question, as I can only comment on or answer my own question. Anyway, I need to get 4 links from a website within my C++ application: the latest stable build links for Windows and Linux, and the latest development build links for Windows and Linux (4 links total). I can download the page (http://www.sourcemod.net/snapshots.php) with libcurl, which is already implemented in the project, but after that I am not sure. I was looking at parsers, but I can't think of how I am going to tell one link from another. Obviously, using a parser I could get the first link from each table, but this does not seem efficient and would only provide me with the links to the Windows builds. It looks like the links I need will be the fourth in both tables, but I am just not very familiar with a good way to go about this, so any help would be appreciated.

  • Are browsers permitted to render <li>s in any order?

    - by ctford
    As I understand the XML spec, the significance of the order of child elements is not guaranteed. XML parsers tend to keep child elements in the same order as they occur in the XML document, but they are under no obligation to do so. If that's so, then are browsers free to render the <li>s in an <ol> or an <ul> in a different order than they occur in the XHTML? Or is it specified in the XHTML spec somewhere that order has to be preserved? I realise that all major browsers will respect the order of my <li>s. I'm just interested in the academic question of whether or not they are technically obliged to.

  • Building an XML service parsing library

    - by DanInDC
    This is more of a design question, I suppose. My company offers a web service to our clients that spits data out in a custom XML format. I'd like to build a Java library we can offer so our customers can just feed it the URL, and we will turn the response into a set of POJOs. I can obviously just create a library that does some simple XML parsing and builds the POJOs, but I'm looking to build something a bit more robust. My brain is pulling me in a million directions, wondering if anyone has some pointers or some code to poke at. I was thinking about adding an Abdera extension, but it's not really a syndication format that fits the Abdera model. And most of the popular service libraries (Twitter, Facebook) all rely on standard-format parsers, which our format isn't.
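
    If the custom format maps cleanly onto classes, one robust route is JAXB: annotate the POJOs once and let the runtime do the binding, so the library's public surface stays small. A minimal sketch (the element and field names are invented for illustration):

        import java.io.StringReader;
        import javax.xml.bind.JAXBContext;
        import javax.xml.bind.annotation.XmlRootElement;

        @XmlRootElement(name = "record")
        class Record {
            public int id;        // bound from <id>...</id>
            public String title;  // bound from <title>...</title>
        }

        public class XmlBinding {
            public static void main(String[] args) throws Exception {
                String xml = "<record><id>7</id><title>Example</title></record>";
                Record r = (Record) JAXBContext.newInstance(Record.class)
                        .createUnmarshaller().unmarshal(new StringReader(xml));
                System.out.println(r.id + " " + r.title); // 7 Example
            }
        }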

  • How to build a sentence parser using only the C++ standard library?

    - by CiM
    Hello everyone, I am designing a text-based game similar to Zork, and I would like it to be able to parse a sentence and draw out keywords such as TAKE, DROP, etc. The thing is, I would like to do this all through the standard C++ library... I have heard of external libraries (such as flex/bison) that effectively accomplish this; however, I don't want to mess with those just yet. What I am thinking of implementing is a token-based system that has a list of words the parser can recognize even when they occur inside a sentence, so that for input such as "take sword and kill monster", the parser's grammar rules recognize TAKE, SWORD, KILL, and MONSTER as tokens and produce output like "Monster killed" or something to that effect. I have heard there is a function in the C++ standard library called strtok that does this; however, I have also heard it's "unsafe". So if anyone here could lend a helping hand, I would greatly appreciate it.
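
    The token-table design described here is language-agnostic; as a minimal sketch (in Java, with an invented vocabulary), splitting on whitespace plays the role that strtok would play in C/C++:

        import java.util.Arrays;
        import java.util.LinkedHashSet;
        import java.util.Set;

        public class KeywordScanner {
            // The recognizable vocabulary; the contents are invented for illustration.
            private static final Set<String> KEYWORDS =
                    new LinkedHashSet<>(Arrays.asList("TAKE", "DROP", "KILL", "SWORD", "MONSTER"));

            // Returns the known tokens in the order they appear in the sentence.
            static Set<String> tokens(String sentence) {
                Set<String> found = new LinkedHashSet<>();
                for (String word : sentence.toUpperCase().split("\\s+")) {
                    if (KEYWORDS.contains(word)) found.add(word);
                }
                return found;
            }

            public static void main(String[] args) {
                System.out.println(tokens("take sword and kill monster"));
                // prints [TAKE, SWORD, KILL, MONSTER]
            }
        }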

  • Parsing C# code to evaluate expressions (basically, implementing IntelliSense)

    - by halivingston
    I'm trying to evaluate C# code as it gets typed; think of it as if I'm trying to write an IDE. So when a person types code, I want to find out what they just wrote: String x = ""; I want to register that x is of type String, and from then on, every time the user types x again, show him all the things he can do with x, basically like Visual Studio IntelliSense. Will I need some lexers or parsers for this? Will that make it easier? I've heard VS 2010 has some features around this that Microsoft has released. Any ideas?

  • What’s New in The Second Edition of Regular Expressions Cookbook

    - by Jan Goyvaerts
    The second edition of Regular Expressions Cookbook is a completely revised edition, not just a minor update. All of the content from the first edition has been updated for the latest versions of the regular expression flavors and programming languages we discuss. We've corrected all errors that we could find and rewritten many sections that were either unclear or lacking in detail. And lack of detail was not something the first edition was accused of. Expect the second edition to really dot all i's and cross all t's.

    A few sections were removed. In particular, we removed much talk about browser inconsistencies, as modern browsers are much more compatible with the official JavaScript standard.

    There is plenty of new content. The second edition has 101 more pages, bringing the total to 612. It's almost 20% bigger than the first edition. We've added XRegExp as an additional regex flavor to all recipes throughout the book where XRegExp provides a better solution than standard JavaScript. We did keep the standard JavaScript solutions, so you can decide which is better for your needs.

    The new edition adds 21 recipes, bringing the total to 146. 14 of the new recipes are in the new Source Code and Log Files chapter. These recipes demonstrate techniques that are very useful for manipulating source code in a text editor and for dealing with log files using a grep tool. Chapter 3, which has recipes for programming with regular expressions, gets only one new recipe, but it's a doozy: if anyone has ever flamed you for using a regular expression instead of a parser, you'll now be able to tell them how you can create your own parser by mixing regular expressions with procedural code. Combined with the recipes from the new Source Code and Log Files chapter, you can create parsers for whatever custom language or file format you like.

    If you have any interest in regular expressions at all, whether you're a beginner or already consider yourself an expert, you definitely need a copy of the second edition of Regular Expressions Cookbook if you didn't already buy the first. If you did buy the first edition, and you often find yourself referring back to it, then the second edition is a very worthwhile upgrade. You can buy the second edition of Regular Expressions Cookbook from Amazon or wherever technical books are sold. Ask for ISBN 1449319432.
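
    In the spirit of that regex-plus-procedural-code recipe (this sketch is not the book's code, and the token set is invented), a hand-rolled tokenizer can anchor a match at the current position and dispatch on which named alternative matched:

        import java.util.regex.Matcher;
        import java.util.regex.Pattern;

        public class RegexTokenizer {
            // One alternative per token type; the group name tells us which one matched.
            private static final Pattern TOKEN =
                    Pattern.compile("(?<num>\\d+)|(?<op>[-+*/])|(?<ws>\\s+)");

            public static void main(String[] args) {
                String input = "12 + 34 * 5";
                Matcher m = TOKEN.matcher(input);
                int pos = 0;                        // procedural state driving the regex
                while (pos < input.length()) {
                    m.region(pos, input.length());  // anchor the next match attempt at pos
                    if (!m.lookingAt())
                        throw new IllegalArgumentException("unexpected input at " + pos);
                    if (m.group("num") != null)     System.out.println("NUMBER   " + m.group());
                    else if (m.group("op") != null) System.out.println("OPERATOR " + m.group());
                    // whitespace (the "ws" alternative) is consumed silently
                    pos = m.end();
                }
            }
        }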

  • Toorcon 15 (2013)

    - by danx
    The Toorcon gang (senior staff): h1kari (founder), nfiltr8, and Geo

    Talks summarized below:
    - Introduction to Toorcon 15 (2013)
    - A Tale of One Software Bypass of MS Windows 8 Secure Boot
    - Breaching SSL, One Byte at a Time
    - Running at 99%: Surviving an Application DoS
    - Security Response in the Age of Mass Customized Attacks
    - x86 Rewriting: Defeating RoP and other Shinanighans
    - Clowntown Express: interesting bugs and running a bug bounty program
    - Active Fingerprinting of Encrypted VPNs
    - Making Attacks Go Backwards
    - Application of Artificial Intelligence in Ad-Hoc Static Code Analysis
    - Mask Your Checksums—The Gorry Details
    - Adventures with weird machines thirty years after "Reflections on Trusting Trust"

    Introduction to Toorcon 15 (2013)

    Toorcon 15 is the 15th annual security conference held in San Diego. I've attended about a third of them and blogged about previous conferences I attended here, starting in 2003. As always, I've only summarized the talks I attended and that interested me enough to write about them. Be aware that I may have misrepresented the speakers' remarks, and that they are not my remarks or opinion, or those of my employer, so don't quote me or them. Those seeking further details may contact the speakers directly or use The Google. For some talks, I have a URL for further information.

    A Tale of One Software Bypass of MS Windows 8 Secure Boot

    Yuri Bulygin, Oleksandr ("Alex") Bazhaniuk, and (not present) Andrew Furtak

    Yuri and Alex talked about UEFI and bootkits and bypassing MS Windows 8 Secure Boot, with vendor recommendations. They previously gave this talk at the BlackHat 2013 conference.

    MS Windows 8 Secure Boot overview: UEFI (Unified Extensible Firmware Interface) is the interface between hardware and OS. UEFI is processor- and architecture-independent. Malware can replace the bootloader (bootx64.efi, bootmgfw.efi); once replaced, it can modify the kernel. It is trivial to replace the bootloader, and today there are many legacy bootkits; UEFI replaces most of them.

    MS Windows 8 Secure Boot verifies everything you load, either through signatures or hashes. UEFI firmware relies on secure update (with signed updates). You would think Secure Boot would rely on ROM (such as is used for phones), but you can't do that for PCs; PCs use writable memory with signatures. The DXE core verifies the UEFI boot loader(s), and the OS loader (winload.efi, winresume.efi) verifies the OS kernel. A chain of trust is established with a root key (the Platform Key, PK), which is a cert belonging to the platform vendor. Key Exchange Keys (KEKs) verify an "authorized" database (db) and a "forbidden" database (dbx), holding X.509 certs with SHA-1/SHA-256 hashes. Keys are stored in non-volatile (NV) flash-based NVRAM. Boot Services (BS) allow adding and deleting keys; they can't be accessed once the OS starts, which uses Run-Time (RT) services instead. The root cert uses RSA-2048 public keys and PKCS#7-format signatures. The relevant variables are:
    - SecureBoot: enable/disable image signature checks
    - SetupMode: update keys, self-signed keys, and secure boot variables
    - CustomMode: allows updating keys

    Secure Boot policy settings are: always execute, never execute, allow execute on security violation, defer execute on security violation, deny execute on security violation, query user on security violation.

    Attacking MS Windows 8 Secure Boot: Secure Boot does NOT protect from physical access; it can be disabled from the console. Each BIOS vendor implements Secure Boot differently, and there are several platform and BIOS vendors, so it becomes a "zoo" of implementations, which can be taken advantage of. Secure Boot is secure only when all vendors implement it correctly, which means each vendor must:
    - allow only signed UEFI firmware updates
    - protect UEFI firmware from direct modification in flash memory
    - protect firmware update components
    - program the SPI controller securely
    - protect secure boot policy settings in NVRAM
    - protect the runtime API
    - disable the compatibility support module, which allows unsigned legacy code

    An attacker can corrupt the Platform Key (PK) EFI root certificate variable in SPI flash; if the PK is not found, the firmware enters setup mode with secure boot turned off. The TPM can be exploited in a similar manner. One is not supposed to be able to directly modify the PK in SPI flash from the OS, though. But they found a bug that can be exploited from user mode (undisclosed) and demoed the exploit: it loaded and ran their own bootkit. The exploit requires a reboot. Multiple vendors are vulnerable, and they will disclose this exploit to vendors in the future.

    Recommendations: allow only signed updates, protect UEFI firmware in ROM, and protect the EFI variable store in ROM.

    Breaching SSL, One Byte at a Time

    Angelo Prado and Yoel Gluck, Salesforce.com

    CRIME is software that performs a "compression oracle attack." This is possible because the SSL protocol doesn't hide length, and because SSL compresses the header. CRIME sends requests with every possible character and measures the ciphertext length; it looks for the plaintext that compresses the most and recovers the cookie one byte at a time. SSL compression uses LZ77 to reduce redundancy, and Huffman coding replaces common byte sequences with shorter codes. US-CERT thought the SSL compression problem was fixed, but it wasn't; the speakers convinced CERT that it wasn't fixed, and a CVE was issued.

    BREACH (breachattack.com) exploits compression of the SSL response body instead (Accept-Encoding, Content-Encoding), taking advantage of the fact that the HTTP response body is still compressed even when TLS-level compression is off. BREACH uses gzip and needs fairly "stable" pages that are static for ~30 seconds, plus attacker-supplied content (say, from a web form or added to a URL parameter). BREACH listens to a session's requests and responses, then inserts extra requests and responses; eventually it guesses a session's secret. Compression is used to guess contents one byte at a time: for example, "Supersecret SupersecreX" (a wrong guess) compresses away 10 bytes, while "Supersecret Supersecret" (a correct guess) compresses away 11, so each character can be found by guessing every character. To start the guessing, BREACH needs at least three known initial characters in the response sequence; compression length then "leaks" information.

    Some roadblocks include no winners (all guesses wrong) or too many winners (multiple possibilities that compress the same). The solutions include:
    - lookahead (guess 2 or 3 characters at a time instead of 1); expensive
    - rollback to the last known conflict
    - checking the compression ratio
    - brute-forcing the first 3 "bootstrap" characters, if needed (expensive)
    - for block ciphers, which hide the exact plaintext length, aligning the response to the block size in advance

    Mitigations:
    - length: use variable padding
    - secrets: use dynamic CSRF tokens per request, and change secrets over time
    - move secrets to separate, input-less servlets

    Future work: either understand DEFLATE/GZIP better, or HTTPS extensions.
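
    As a toy demonstration of the compression-oracle principle behind CRIME and BREACH (a sketch only; a real attack needs the alignment and padding tricks above), note how a guess that matches existing plaintext deflates smaller:

        import java.io.ByteArrayOutputStream;
        import java.util.zip.Deflater;

        public class CompressionOracle {
            // DEFLATE-compressed size of the given text.
            static int compressedSize(String text) {
                Deflater d = new Deflater();
                d.setInput(text.getBytes());
                d.finish();
                ByteArrayOutputStream out = new ByteArrayOutputStream();
                byte[] buf = new byte[256];
                while (!d.finished()) out.write(buf, 0, d.deflate(buf));
                return out.size();
            }

            public static void main(String[] args) {
                String page = "secret=hunter2; other page content";  // hidden from the attacker
                for (char guess = 'a'; guess <= 'z'; guess++) {
                    // The attacker injects "secret=<guess>" and only observes the length;
                    // the correct first character ('h') tends to compress smallest.
                    System.out.println(guess + " -> " + compressedSize(page + " secret=" + guess));
                }
            }
        }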

    Running at 99%: Surviving an Application DoS

    Ryan Huber, Risk I/O

    Ryan first discussed various ways to mount a denial of service (DoS) attack against web services. One usual method is to find a slow web page and do several wgets, or to download large files. Apache is not well suited to handling a large number of connections, but one can put something in front of it, or use an Apache alternative such as nginx.

    How to identify malicious hosts:
    - short, sudden web requests
    - an obvious user-agent (curl, python)
    - the same URL requested repeatedly
    - no referer (not normal)
    - hidden links: hide a link and see if a bot follows it
    - restricted access when the geo IP isn't yours (unless the website is global)
    - missing common headers in the request
    - regular timing
    - an IP first seen at the beginning of the attack
    - the count of requests per host (usually a very large number)

    Use of a captcha can mitigate attacks, but you'll lose a lot of genuine users.

    Bouncer (goo.gl/c2vyEc and www.github.com/rawdigits/Bouncer) is software Ryan wrote that works from netflow data. Bouncer has a small, unobtrusive footprint and detects DoS attempts. It closes blacklisted sockets immediately (it's not nice about it: no proper connection close). An aggregator collects requests and controls your web proxies. You need NTP on the front-end web servers to get clean data for Bouncer. Bouncer is also useful for popularity storms ("Slashdotting") and scraper storms. Future features: gzipped collection data, documentation, a consumer library, multitasking, and logging destroyed connections.

    Takeaways: DoS mitigation is easier with a complete picture. Bouncer is designed to make it easier to detect and defend against DoS; it is not a complete cure.

    Security Response in the Age of Mass Customized Attacks

    Peleus Uhley and Karthik Raman, Adobe ASSET (blogs.adobe.com/asset/)

    Peleus and Karthik talked about response to mass-customized exploits. Attackers behave much like a business. "Mass customization" refers to a concept discussed in the book Future Perfect by Stan Davis of Harvard Business School: differentiating a product for an individual customer at a mass-production price. For example, the same individual with a debit card receives basically the same customized ATM experience around the world; or consider designing your own PC from commodity parts.

    Exploit kits are another example of mass customization. The kits support multiple browsers and plugins and allow new modules; they are cheap and customizable, and organized gangs use them. A group at Berkeley looked at 77,000 malicious websites (Grier et al., "Manufacturing Compromise: The Emergence of Exploit-as-a-Service", 2012). They found 10,000 distinct binaries among them, but derived from only a dozen or so exploit kits.

    Characteristics of mass malware: potent, resilient, relatively low cost. Technical characteristics: multiple OSes, multiple payloads, multiple scenarios, multiple languages, obfuscation.

    Response time for 0-day exploits has gone down from ~40 days five years ago to ~10 days now, so the drive with malware is towards mass-customized exploits, to avoid detection. There's plenty of evidence that exploit development has project-manager bureaucracy; from the malware, they infer edicts to:
    - support all versions of Reader
    - support all versions of Windows
    - support all versions of Flash
    - support all browsers
    - write large, complex, difficult-to-maintain code (8,750 lines of JavaScript, for example)

    Exploits have "loose coupling" of multiple versions of software (Adobe), OS, and browser. This allows specific attacks against specific versions of multiple pieces of software, including more obscure software, OSes, browsers, and versions. They gave examples of exploits that exploited 2, 3, 6, or 14 separate bugs.
    However, these complete exploits are more likely to be buggy or fragile in themselves and easier to defeat. Future research includes normalizing malware and JavaScript. Conclusion: the coming trend is that mass malware with mass zero-day attacks will result in mass customization of attacks.

    x86 Rewriting: Defeating RoP and other Shinanighans

    Richard Wartell

    The attack vector being addressed: first, some malware causes a buffer overflow. The malware has no program access, but it has input access and overflows code onto the stack. The stack later became non-executable; the workaround malware used was to write a bogus return address to the stack, jumping into the malware. Later came ASLR (Address Space Layout Randomization), which randomizes the memory layout and makes addresses non-deterministic; the workaround malware used was to jump to existing code segments in the program that can be used in bad ways.

    "RoP" is Return-oriented Programming. RoP attacks use your own code: they write return addresses on the stack that point to (existing) exploitable code found in the program ("gadgets"). Pinkie Pie was paid $60K last year for a RoP attack. One solution is anti-RoP compilers that compile source code with NO return instructions. ASLR randomizes only base addresses, not the "gadgets" within a module. IPR/ILR ("Instruction Location Randomization") randomizes each instruction with a virtual machine.

    Richard's goal was to randomize a binary with no source code access. He created "STIR" (Self-Transforming Instruction Relocation). STIR disassembles the binary and operates on "basic blocks" of code; the disassembler is conservative about what it disassembles. STIR writes new code sections with copies of the basic blocks in randomized memory locations, and the old code is rewritten with jumps to the new code; the original code sections in the file are marked non-executable. STIR has better entropy than ASLR in the location of code, which makes brute-force attacks much harder. STIR runs on MS Windows (PE) and Linux (ELF). It eliminated 99.96% or more of the "gadgets" (i.e., moved their addresses). Overhead is usually 5-10% on MS Windows and about 1.5-4% on Linux (and some code actually runs faster!). The unique thing about STIR is that it requires no source access and the modified binary fully works! Current work is rewriting code to enforce security policies: for example, don't create a *.{exe,msi,bat} file, or don't connect to the network after reading from the disk.

    Clowntown Express: interesting bugs and running a bug bounty program

    Collin Greene, Facebook

    Collin talked about Facebook's bug bounty program. Background at FB: FB has good security frameworks, such as security teams, external audits, and cc'ing security on diffs. But there are lots of "deep, dark, forgotten" parts of legacy FB code. Collin gave several examples of bountied bugs. Some bounty submissions were against software purchased from a third party (but bounty claimers don't know and don't care). FB uses security questions, as does everyone else, but they are basically insecure (often easily discoverable).

    Collin didn't expect many bugs from the bounty program, but they ended up getting 20+ good bugs in the first 24 hours, and good submissions continue to come in. Bug bounties bring in people with different perspectives, and they are paid only for success. A bug bounty is a better use of a fixed amount of time and money than code review or static code analysis alone. The bounty program started in July 2011 and has paid out $1.5 million to date.
    14% of the submissions have been high-priority problems that needed to be fixed immediately. The best bugs come from a small percentage of submitters (as with everything else); the top paid submitters earn six figures a year. Spammers like to backstab competitors. The youngest submitter was 13, and some submitters have been hired. Bug bounties also surface bugs that were missed by tools or reviews, allowing the process to improve. Bug bounties might not work for traditional software companies where the product has a release cycle or is not on the Internet.

    Active Fingerprinting of Encrypted VPNs

    Anna Shubina, Dartmouth Institute for Security, Technology, and Society

    (I missed the start of her talk because another track went overtime, but I have the DVD of the talk, so I'll expand this later.) IPsec leaves fingerprints. Using netcat, one can easily visually distinguish various crypto chaining modes just from packet timing on a chart (for example, DES-CBC versus AES-CBC). One can tell a lot about VPNs just from ping round trips (such as which router is used). Delayed packets are not informative about a network, especially from far away. More needs to be explored about how TCP behaves in real life with respect to timing.

    Making Attacks Go Backwards

    FuzzyNop, Mandiant

    This talk is not about threat attribution (finding who), product solutions, politics, or sales pitches. But who is making these malware threats? It's not a single person or group; they have diverse skill levels, and there are a lot of fat-fingered fumblers out there. Always look for low-hanging fruit first:
    - malware "hiding" in the temp, recycle, or root directories
    - creation of unnamed scheduled tasks
    - obvious names of files and syscalls ("ClearEventLog")
    - uncleared event logs; clearing the event log in itself, and the time of clearing, is a red flag and a good first clue to look for on a suspect system

    Reverse engineering is hard; disassembler use takes practice and skill. A popular tool is IDA Pro, but it takes multiple interactive iterations to get a clean disassembly. Key loggers are used a lot in targeted attacks; they are typically custom code or built into a backdoor. A big tip-off is that non-printable characters need to be printed out (such as "[Ctrl]" or "[RightShift]"), as do timestamp printf strings; look for these in files. Presence is not proof they are used, and absence is not proof they are not used. For Java exploits, one can parse the jar file with idxparser.py and decompile the Java; Java is typically used to target tech companies. Backdoors are the main persistence mechanism (provided externally) for malware, and malware typically also needs command and control.

    Application of Artificial Intelligence in Ad-Hoc Static Code Analysis

    John Ashaman, Security Innovation

    Initially John tried to analyze open source files with open source static analysis tools, but these showed thousands of false positives. He also tried grep, but it fails to find anything even mildly complex. So John decided to write his own tool. His approach was to first generate a call graph, then analyze the graph. The problem is that making a call graph is really hard; one obstacle is "evil" coding techniques, such as passing function pointers. First the tool generates an Abstract Syntax Tree (AST), with nodes created from method declarations and edges created from method use. Then the tool generates a control flow graph, with the goal of finding a path through the AST (a maze) from source to sink.
    The algorithm is to look at adjacent nodes to see if any are "scary" (a vulnerability), using heuristics for the search order. The tool, called "Scat" (Static Code Analysis Tool), currently looks for C# vulnerabilities and some simple PHP. Later, he plans to add more PHP, then JSP and Java. For more information, see his posts on the Security Innovation blog and NRefactory on GitHub.

    Mask Your Checksums—The Gorry Details

    Eric (XlogicX) Davisson

    Sometimes, when emailing or posting TCP/IP packets to analyze problems, you may want to mask the IP address. But to do this correctly, you need to mask the checksum too, or you'll leak information about the IP. Problem reports found on stackoverflow.com, sans.org, and pastebin.org are usually not masked, though a few companies do care. If only the IP is masked, the IP may be guessed from the checksum (that is, it leaks data), and other parts of the packet may leak more data about the IP. The TCP and IP checksums both cover the same data, so using both yields more bits of information than using one checksum alone. Also, one can usually determine the OS from the TTL field and the ports in a packet header.

    If there are hundreds of possible results (16 possibilities for each masked nibble that is unknown), one can do other things to narrow them down, such as looking at the packet contents for domain or geo information. With hundreds of results, one can import them as CSV into a spreadsheet, correlate with geo data, and see where each possibility is located. Eric then demoed a real email report with a masked IP packet attached; he was able to find the exact IP address, given the geo data and university of the sender. The point: if you're going to mask a packet, do it right. Eric wouldn't usually bother, but do it correctly if at all, so as not to create a false impression of security.
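
    To make the leak concrete: the Internet checksum (RFC 1071) is just the one's-complement sum of the header's 16-bit words, so a published checksum constrains what the masked address words can be. A small sketch (the sample header words are invented):

        public class InternetChecksum {
            // RFC 1071: one's-complement sum of 16-bit words, then complemented.
            static int checksum(int[] words) {
                int sum = 0;
                for (int w : words) {
                    sum += w;
                    if (sum > 0xFFFF) sum = (sum & 0xFFFF) + 1;  // fold the carry back in
                }
                return ~sum & 0xFFFF;
            }

            public static void main(String[] args) {
                // Toy IPv4 header words with the checksum field zeroed; values invented.
                int[] header = {0x4500, 0x0034, 0x1C46, 0x4000, 0x4006, 0x0000,
                                0xC0A8, 0x0001,   // source IP 192.168.0.1
                                0xC0A8, 0x00C7};  // destination IP 192.168.0.199
                System.out.printf("checksum = 0x%04X%n", checksum(header));
                // Mask the IP words but keep this checksum, and a brute force over the
                // masked nibbles finds only the few addresses consistent with it.
            }
        }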

    Adventures with weird machines thirty years after "Reflections on Trusting Trust"

    Sergey Bratus, Dartmouth College (with Julian Bangert and Rebecca Shapiro, not present)

    "Reflections on Trusting Trust" refers to Ken Thompson's classic 1984 paper: "You can't trust code that you did not totally create yourself." There are invisible links in the chain of trust, such as "well-installed microcode bugs", bugs in the compiler, and other planted bugs. Thompson showed how a compiler can introduce and propagate bugs in unmodified source. But suppose there are no bugs and you trust the author; can you trust the code? Hell no! There are too many factors; it's Babylonian in nature. Why not?
    - Input is not well defined or recognized: the code's assumptions about "checked" input will be violated (a bug/vulnerability). For example, HTML is recursive, but regex checking is not.
    - Input can be well formed but so complex that there's no telling what it does. For example, ELF file parsing is complex, with multiple ways of parsing.
    - Input is seen differently by different pieces of the program or toolchain.

    Any input is a program: input "executes" on input handlers and drives state changes and transitions. Only a well-defined execution model can be trusted (regex/DFA, PDA, CFG). An input handler is either a "recognizer" for the inputs as a well-defined language (see langsec.org) or a "virtual machine" for inputs to drive into pwn-age.

    Case study, the ELF ABI (the UNIX/Linux executable file format): problems can arise, without any planted bugs, in the compiler, linker, loader, ld.so/rtld relocator, DWARF (debugger info), and exceptions. The problem is that you can't really automatically analyze code (it's the "halting problem", undecidable). The only solution is to freeze code and sign it. But you can't freeze everything! You can't freeze ASLR or loading; you must have tables and metadata. Any sufficiently complex input data is the same as VM byte code. For example, ELF relocation entries plus dynamic symbols form a Turing-complete machine: @bxsays created a Turing machine in Linux from relocation data (not code) in an ELF file. For more information, see Rebecca "bx" Shapiro's presentation from last year's Toorcon, "Programming Weird Machines with ELF Metadata". @bxsays did the same thing with Mach-O bytecode. Likewise, DWARF exception handling data (.eh_frame) plus glibc form a Turing machine, and the x86 MMU (IDT, GDT, TSS) can be abused the same way: the page-fault handler reads and writes memory driven by a page table, so the page table serves as Turing machine byte code. There is an example on GitHub that uses this machine to fly a glider across the screen.

    Next, Sergey talked about "parser differentials": one input format with two parsers creates confusion and opportunity for exploitation. For example, CSRs are parsed during creation by the cert requestor and again by another parser at the CA. Another example is ELF: there are several parsers in the OS toolchain, and they are all different. You can have two different Program Headers (PHDRs) because ld.so parses multiple PHDRs, and the second PHDR can completely transform the executable. This is described in a paper in the first issue of the International Journal of PoC.

    Conclusions: trusting computers is not only about bugs! Bugs are part of the problem, but by no means all of it. Complex data formats mean bugs. There is no "chain of trust" in Babylon (that is, with parser differentials). We need to squeeze complexity out of data until data stops being "code equivalent". For further information, see langsec.org, and see USENIX WOOT 2013 (Workshop on Offensive Technologies) for "weird machines" papers and videos.
