Search Results

Search found 2356 results on 95 pages for 'andrew mock'.

Page 84 of 95

  • How to create a Windows GUI with a file explorer window, allowing users to choose files?

    - by Badri
    Here's what I want to do. I want to present a file explorer, allow the user to select files, and list the selected files below. (I then want to process those files, but that's the next part.) For example, the way CD burning software works. I have created a mock-up here: http://dl.dropbox.com/u/113967/Mockup.png As you can see, the left frame has a directory structure, the right frame has a file selected, and the bottom frame shows the selected file. What framework can I use to create this? I am familiar with command-line C++, but I haven't ventured into any GUI programming, and figured this idea would be a good place to start. Any suggestions on where to start?
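
    Purely as an illustration of the select-files-then-list-them flow in the mock-up, here is a minimal sketch in Java Swing; the asker's C++ background points more naturally at toolkits such as Qt or wxWidgets, which offer the same building blocks (a file dialog or directory tree plus a list widget). The class name FilePickerDemo is made up for this sketch.

    import java.awt.BorderLayout;
    import java.io.File;
    import javax.swing.*;

    public class FilePickerDemo {
        public static void main(String[] args) {
            SwingUtilities.invokeLater(() -> {
                DefaultListModel<File> selected = new DefaultListModel<>();
                JList<File> selectedList = new JList<>(selected);    // the "bottom frame" of the mock-up

                JButton addButton = new JButton("Add files...");
                addButton.addActionListener(e -> {
                    JFileChooser chooser = new JFileChooser();
                    chooser.setMultiSelectionEnabled(true);          // pick several files at once
                    if (chooser.showOpenDialog(null) == JFileChooser.APPROVE_OPTION) {
                        for (File f : chooser.getSelectedFiles()) {
                            selected.addElement(f);                  // list each chosen file
                        }
                    }
                });

                JFrame frame = new JFrame("File picker mock-up");
                frame.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
                frame.add(addButton, BorderLayout.NORTH);
                frame.add(new JScrollPane(selectedList), BorderLayout.CENTER);
                frame.setSize(500, 400);
                frame.setVisible(true);
            });
        }
    }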

    Read the article

  • Get/save parameters to an expected JMock method call?

    - by Tayeb
    Hi, I want to test an "Adapter" object: when it receives an XML message, it digests it into a Message object, adds a message ID and a correlation ID (both with timestamps), and forwards it to a Client object. A message can be correlated to a previous one (e.g. m2.correlationID = m1.ID). I mock the Client and check that the Adapter successfully calls "client.forwardMessage(m)" twice: the first message with a null correlationID, and the second with a non-null correlationID. However, I would like to test precisely that the correlation IDs are set correctly, by grabbing the IDs (e.g. m1.ID), but I couldn't find any way to do so. There is a JIRA issue about adding the feature, but no one commented and it is unassigned. Is this really unimplemented? I read about the alternative of redesigning the Adapter to use an IdGenerator object, which I can stub, but I think there would be too many objects. Don't you think it adds unnecessary complexity to split objects into such fine granularity? Thanks, and I appreciate any comments :-) Tayeb
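
    A common workaround in JMock 2 (sketched below, not an official feature) is to pass a custom Hamcrest matcher to with(...) that accepts anything but records every argument it sees, so the test can assert on the captured messages afterwards. The Adapter, Client and Message types and their methods (onXml, getID, getCorrelationID, forwardMessage) are placeholders inferred from the question, not a real API.

    import static org.junit.Assert.assertEquals;
    import static org.junit.Assert.assertNull;

    import java.util.ArrayList;
    import java.util.List;
    import org.hamcrest.BaseMatcher;
    import org.hamcrest.Description;
    import org.jmock.Expectations;
    import org.jmock.Mockery;
    import org.junit.Test;

    public class AdapterCorrelationTest {

        /** Matches any argument, but remembers each one for later assertions. */
        static class Capturing<T> extends BaseMatcher<T> {
            final List<Object> captured = new ArrayList<Object>();
            public boolean matches(Object item) { captured.add(item); return true; }
            public void describeTo(Description d) { d.appendText("any value (captured)"); }
        }

        @Test
        public void secondMessageIsCorrelatedToFirst() {
            Mockery context = new Mockery();
            final Client client = context.mock(Client.class);
            final Capturing<Message> forwarded = new Capturing<Message>();

            context.checking(new Expectations() {{
                exactly(2).of(client).forwardMessage(with(forwarded));
            }});

            Adapter adapter = new Adapter(client);              // placeholder constructor
            adapter.onXml("<msg id='1'/>");                     // placeholder trigger method
            adapter.onXml("<msg id='2' correlatesTo='1'/>");

            context.assertIsSatisfied();
            Message first = (Message) forwarded.captured.get(0);
            Message second = (Message) forwarded.captured.get(1);
            assertNull(first.getCorrelationID());
            assertEquals(first.getID(), second.getCorrelationID());
        }
    }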

    Read the article

  • Argument constraints in RhinoMock methods

    - by Khash
    I am mocking a repository that should have one entity in it for the test scenario. The repository has to return this entity for a known id and return nothing when other ids are passed in. I have tried doing something like this: _myRepository.Expect(item => item.Find(knownId)).Return(knownEntity); _myRepository.Expect(item => item.Find(Arg<Guid>.Is.Anything)).Return(null); However, it seems the second line overrides the first and the repository always returns null. I don't want to set up expectations for every possible ID (there could be hundreds) when the test scenario is only concerned with the value of one id.

    Read the article

  • do.call(rbind, list) for an uneven number of columns

    - by h.l.m
    I have a list, with each element being a named character vector of differing length. I would like to bind the data as rows, so that the column names 'line up': if there is extra data, new columns are created, and if there is missing data, NAs are filled in. Below is a mock example of the data I am working with: x <- list() x[[1]] <- letters[seq(2,20,by=2)] names(x[[1]]) <- LETTERS[c(1:length(x[[1]]))] x[[2]] <- letters[seq(3,20, by=3)] names(x[[2]]) <- LETTERS[seq(3,20, by=3)] x[[3]] <- letters[seq(4,20, by=4)] names(x[[3]]) <- LETTERS[seq(4,20, by=4)] The line below would normally be what I would do if I were sure the format of each element was the same... do.call(rbind,x) I was hoping someone had come up with a nice little solution that matches up the column names, fills in blanks with NAs, and adds new columns if new columns are found during the binding process...

    Read the article

  • How to create a gesture controlled rotating image for a UI

    - by ocdtrekkie
    I'm trying to figure out the best way to make an image rotate along with a user's finger dragging it left or right. I want to match the rate the user's finger is moving with the rate the image is rotating. I've got the basic setup for my application going, with the menus and whatnot I want to have, and that's all running great on the emulator; I'm just not sure how to approach this part. I can code all the logic I need for my app, but I'm not doing too well designing the UI. I have a picture in mind, and I've actually made a couple of mock images of it; I just can't figure out how to get it going in Android. Any help would be appreciated.
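
    A minimal sketch of one way to do this (the class name and the R.drawable.dial resource are placeholders): a custom View computes the finger's angle around its centre with Math.atan2 in onTouchEvent, adds the change in angle to the current rotation so the image tracks the drag one-to-one, and rotates the canvas in onDraw.

    import android.content.Context;
    import android.graphics.Bitmap;
    import android.graphics.BitmapFactory;
    import android.graphics.Canvas;
    import android.util.AttributeSet;
    import android.view.MotionEvent;
    import android.view.View;

    public class RotatableImageView extends View {
        private final Bitmap image;
        private float rotationDegrees;   // current rotation of the image
        private float lastTouchAngle;    // finger angle at the previous event

        public RotatableImageView(Context context, AttributeSet attrs) {
            super(context, attrs);
            image = BitmapFactory.decodeResource(getResources(), R.drawable.dial);  // placeholder drawable
        }

        // Angle of (x, y) around the view's centre, in degrees.
        private float angleAt(float x, float y) {
            double dx = x - getWidth() / 2.0;
            double dy = y - getHeight() / 2.0;
            return (float) Math.toDegrees(Math.atan2(dy, dx));
        }

        @Override
        public boolean onTouchEvent(MotionEvent event) {
            switch (event.getAction()) {
                case MotionEvent.ACTION_DOWN:
                    lastTouchAngle = angleAt(event.getX(), event.getY());
                    return true;
                case MotionEvent.ACTION_MOVE:
                    float current = angleAt(event.getX(), event.getY());
                    float delta = current - lastTouchAngle;
                    if (delta > 180) delta -= 360;        // handle atan2 wrap-around at +/-180
                    else if (delta < -180) delta += 360;
                    rotationDegrees += delta;             // rotate by exactly what the finger swept
                    lastTouchAngle = current;
                    invalidate();                         // redraw at the new angle
                    return true;
            }
            return super.onTouchEvent(event);
        }

        @Override
        protected void onDraw(Canvas canvas) {
            canvas.save();
            canvas.rotate(rotationDegrees, getWidth() / 2f, getHeight() / 2f);
            canvas.drawBitmap(image,
                    (getWidth() - image.getWidth()) / 2f,
                    (getHeight() - image.getHeight()) / 2f, null);
            canvas.restore();
        }
    }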

    Read the article

  • How to prevent swallowing exceptions caused by unset expectations for a mocked object?

    - by Schultz9999
    I am looking for a way to modify a catch block depending on whether it is executed during a unit test run or not. The purpose is basically to detect/set up mock expectations that are otherwise swallowed because the catch doesn't rethrow. I am using MSTest. One obvious approach is using the preprocessor, but I don't think it works, especially with the DEBUG define. There should be an easy way to detect that, shouldn't there? I must have been searching for the wrong thing, because I couldn't find much info on it. try {...} catch(Exception) { Log(...); #if DEBUG throw; #endif }

    Read the article

  • Unit Test this - Simple method but I don't know what to test!

    - by user309705
    A very simple method, but I don't know what to test! I'd like to test this method in the business logic layer, and _dataAccess apparently comes from the data layer. public DataSet GetLinksByAnalysisId(int analysisId) { DataSet result = new DataSet(); result = _dataAccess.SelectAnalysisLinksOverviewByAnalysisId(analysisId); return result; } All I'm really testing is that _dataAccess.SelectAnalysisLinksOverviewByAnalysisId() gets called! Here's my test code (using Rhino Mocks): [TestMethod] public void Test() { var _dataAccess = MockRepository.GenerateMock<IDataAccess>(); _dataAccess.Expect(x => x.SelectAnalysisLinksOverviewByAnalysisId(_settings.UserName, 0, out dateExecuted)); var analysisBusinessLogic = new AnalysisLinksBusinessLogic(_dataAccess); analysisBusinessLogic.GetLinksByAnalysisId(_settings, 0); _dataAccess.VerifyAllExpectations(); } If you were writing the test for this method, what would you test against? Many thanks!

    Read the article

  • Behavior of nested finally in Exceptions

    - by kuriouscoder
    Hello: Today at work, I had to review a code snippet that looks similar to this mock example. package test; import java.io.IOException; import org.apache.log4j.Logger; public class ExceptionTester { public static Logger logger = Logger.getLogger(ExceptionTester.class); public void test() throws IOException { new IOException(); } public static void main(String[] args) { ExceptionTester comparator = new ExceptionTester(); try { try { comparator.test(); } finally { System.out.println("Finally 1"); } } catch(IOException ex) { logger.error("Exception happened", ex); // also close opened resources } System.out.println("Exiting out of the program"); } } It's printing the following output. I expected a compile error, since the inner try does not have a catch block. Finally 1 Exiting out of the program I do not understand why the IOException is caught by the outer catch block. I would appreciate it if anyone could explain this, especially by citing the stack unwinding process.
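
    Two things are worth noting about the snippet. A try with only a finally (and no catch) is perfectly legal Java, and the finally block runs before the exception continues to propagate outward to the nearest enclosing catch. Also, in the posted code the IOException is only constructed (new IOException();), never thrown, so the outer catch and the logger call never execute, which is why only "Finally 1" and the exit message appear. A minimal standalone sketch with the exception actually thrown:

    import java.io.IOException;

    public class NestedFinallyDemo {

        static void test() throws IOException {
            throw new IOException("boom");   // note: the exception must be thrown, not just constructed
        }

        public static void main(String[] args) {
            try {
                try {
                    test();
                } finally {
                    System.out.println("Finally 1");   // always runs, exception or not
                }
            } catch (IOException ex) {
                System.out.println("Caught in outer catch: " + ex.getMessage());
            }
            System.out.println("Exiting out of the program");
            // Output:
            // Finally 1
            // Caught in outer catch: boom
            // Exiting out of the program
        }
    }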

    Read the article

  • rspec mocking object property assignment

    - by charlielee
    I have an RSpec mocked object, and a value is assigned to its property. I am struggling to have that expectation met in my RSpec test; I'm just wondering what the syntax is. The code: def create @new_campaign = AdCampaign.new(params[:new_campaign]) @new_campaign.creationDate = "#{Time.now.year}/#{Time.now.mon}/#{Time.now.day}" if @new_campaign.save flash[:status] = "Success" else flash[:status] = "Failed" end end The test: it "should able to create new campaign when form is submitted" do campaign_model = mock_model(AdCampaign) AdCampaign.should_receive(:new).with(params[:new_campaign]).and_return(campaign_model) campaign_model.should_receive(:creationDate).with("#{Time.now.year}/#{Time.now.mon}/#{Time.now.day}") campaign_model.should_receive(:save).and_return(true) post :create flash[:status].should == 'Success' response.should render_template('create') end The problem is that I am getting this error: Spec::Mocks::MockExpectationError in 'CampaignController new campaigns should able to create new campaign when form is submitted' Mock "AdCampaign_1002" received unexpected message :creationDate= with ("2010/5/7") So how do I set an expectation for an object property assignment? Thanks

    Read the article

  • What is the best way to automate (integration) testing in Java with OpenID4Java against a real OpenID provider?

    - by mP
    I would like to do a bit more than manually test my OpenID glue code, which happens to use the openid4java library. My goal would be to run it within my IDE with a bunch of tests using JUnit or similar. Selenium & Tomcat: I was thinking of using Selenium and a Tomcat, but that's not exactly a nice approach, as it is a bit heavy and not really lightweight. HttpUnit: a solution with HttpUnit is incomplete because it doesn't really handle the redirect to my OpenID provider, authentication, and the redirect back. Perhaps I am wrong, but this looks like it could get quite involved just to make sure this works. Mocks: my last option is to mock everything and assume that it's accurate and works. If Google or Yahoo ever change in some way, then I'll have to verify manually. The approach is simple, but with a major flaw.

    Read the article

  • How to stub Restful-authentication's current_user method?

    - by Thiago
    Hi there, I'm trying to run the following spec: describe UsersController, "GET friends" do it "should call current_user.friends" do user = mock_model(User) user.should_receive(:friends) UsersController.stub!(:current_user).and_return(user) get :friends end end My controller looks like this: def friends @friends = current_user.friends respond_to do |format| format.html end end The problem is that I cannot stub the current_user method; when I run the test, I get: Spec::Mocks::MockExpectationError in 'UsersController GET friends should call current_user.friends' Mock "User_1001" expected :friends with (any args) once, but received it 0 times ./spec/controllers/users_controller_spec.rb:44: current_user is a method from Restful-authentication, which is included in this controller. How am I supposed to test this controller? Thanks in advance

    Read the article

  • How can I use a pre-populated Core Data DB on my device?

    - by KingAndrew
    Hi all, I have developed my app using Core Data. It works fine in the simulator, but when I deploy it to the device the DB is empty. It is 49k where it should be 484k; basically, it is not populated. Since I don't write to the DB when the app is running, I need to provide a populated DB to the app. So I copied the populated DB from the simulator into resources and then deployed. Still no luck: the populated DB is in MyApp.app and the AppDelegate is reading from the Documents directory. How do I either get it into the Documents directory or get the app delegate to look in the app bundle? Thanks in advance, Andrew

    Read the article

  • prevent form from automatically re-submitting on page load

    - by user323774
    I am using jQuery to point a form's target to an iframe on .submit(). This is to upload a file. It works fine, but when the page reloads and the iframe is appended to the DOM, the iframe automatically resubmits the form, causing the same file to be sent to the server on each page load. If I do not include the iframe in the HTML markup, or do not append it to the DOM, this doesn't happen, but of course I need the iframe. So my question is, how can I prevent this? :) Andrew

    Read the article

  • How do I print the Images?

    - by user1477539
    I want to print the images of the 30 NBA teams drafting in the first round. However, when I tell it to print, it prints out the link instead of the image. How do I get it to print out the image instead of giving me the image link? Here's my code: import urllib2 from BeautifulSoup import BeautifulSoup # or if you're using BeautifulSoup4: # from bs4 import BeautifulSoup soup = BeautifulSoup(urllib2.urlopen('http://www.cbssports.com/nba/draft/mock-draft').read()) rows = soup.findAll("table", attrs = {'class': 'data borderTop'})[0].tbody.findAll("tr")[2:] for row in rows: fields = row.findAll("td") if len(fields) >= 3: anchor = row.findAll("td")[1].find("a") if anchor: print anchor

    Read the article

  • How to instantiate a Singleton multiple times?

    - by Sebi
    I need a singleton in my code. I implemented it in Java and it works well. The reason I did it is to ensure that, in a multi-device environment, there is only one instance of this class. But now I want to test my singleton object locally with a unit test. For this reason I need to simulate another instance of this singleton (the object that would come from another device). So is there a possibility to instantiate a singleton a second time for testing purposes, or do I have to mock it? I'm not sure, but I think it could be possible by using a different class loader?
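
    One option, sketched below for a conventional private-constructor singleton (the Foo class here is a stand-in, not the asker's code): a test can use reflection to invoke the private constructor and obtain a second, independent instance without changing production code. Loading the class in a separate class loader, or mocking an interface the singleton implements, are the usual alternatives.

    import java.lang.reflect.Constructor;

    public class SecondInstanceDemo {

        static final class Foo {
            private static final Foo INSTANCE = new Foo();
            private Foo() {}
            static Foo getInstance() { return INSTANCE; }
        }

        public static void main(String[] args) throws Exception {
            Foo first = Foo.getInstance();

            // Reflection bypasses the private constructor, giving the test a
            // second instance that can play the role of "another device".
            Constructor<Foo> ctor = Foo.class.getDeclaredConstructor();
            ctor.setAccessible(true);
            Foo second = ctor.newInstance();

            System.out.println(first == second);   // false: two distinct instances
        }
    }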

    Read the article

  • Background position image overlay (Works in IE, not in Mozilla/Chrome/Safari)

    - by amm229
    Hi all, I am having an issue positioning a background image using the following jQuery background-position command in Firefox, Google Chrome, and Safari. The code works correctly in IE 8. $('#element').css({ backgroundPosition: 'xpx ypx' }); The x position of the image is calculated dynamically based on window size and the y position is static. The CSS appears to be modified correctly; however, the background image I am attempting to overlay is absent. See the JavaScript code below: $(window).resize(function () { // image positioning variables var windowwidth = $(window).width(); var imgwidth = $('#imgFluid').width(); var offset = $('#divFluidBlur').offset(); // calculate and implement position blurPositionLeft = (windowwidth - imgwidth) - offset.left; $('#divFluidBlur').css({ backgroundPosition: blurPositionLeft + 'px' + ' 30px' }); // debug: display actual css Background Position of element to text box $("#txtActualBackgroundpos").val(document.getElementById("divFluidBlur").style.backgroundPosition); Thanks in advance for your help, Andrew

    Read the article

  • C# Unit Testing: How do I set a Lazy<T>.ValueCreated to false?

    - by michael paul
    Basically, I have a unit test that gets a singleton instance of a class. Some of my tests required me to mock this singleton, so when I do Foo.Instance I get a different type of instance. The problem is that my checks are passing individually, but failing overall because one test is interfering with another. I tried to do a TestCleanup where I set: Foo_Accessor._instance = null; but that didn't work. What I really need is Foo_Accessor._instance.IsValueCreated = false; (_instance is a Lazy). Any way to unset the Lazy object that I didn't think of?

    Read the article

  • Ruby Metaprogramming

    - by Veerendra Manikonda
    I have a method that returns the price of a given symbol, and I am writing a test for that method. This is my test: def setup @asset = NetAssetValue.new end def test_retrieve_price_for_symbol_YHOO assert_equal(33.987, @asset.retrieve_price_for_a_symbol('YHOO')) end def test_retrive_price_for_YHOO def self.retrieve_price_for_a_symbol(symbol) 33.77 end assert_equal(33.97, @asset.retrieve_price_for_a_symbol('YHOO')) end This is my method: def retrieve_price_for_a_symbol(symbol) symbol_price = { "YHOO" => 33.987, "UPS" => 35.345, "T" => 80.90 } raise Exception if(symbol_price[symbol].nil?) symbol_price[symbol] end I am trying to mock the retrieve_price_for_a_symbol method by writing the same method in the test class, but when I call it, the call goes to the method in the main class, not the one in the test class. How do I add that method to the metaclass from the test, and how do I call it? Please help.

    Read the article

  • How to test that action uses argument?

    - by Caster Troy
    I am supposed to be using test-driven development, but in this particular case, as I was having trouble, I implemented the action method first. It looks like this: public ViewResult Index(int pageNumber = 1) { var posts = repository.All(); var model = new PagedList<Post>(posts, pageNumber, PageSize); return View(model); } Both the repository and the PagedList<> have been tested already. Now I want to verify that when the action is given a page number, the page number is actually used. private Mock<IPostsRepository> repository; private HomeController controller; [Test] public void Index_Doohickey() { var actual = controller.Index(2); // .. How do I test that the controller actually uses the page number here? }

    Read the article

  • Managing test data for JUnit tests.

    - by nobody
    Hi, we are facing a problem in managing test data (XML files which are used to create mock objects). The data we currently have has evolved over a long period of time: each time we add new functionality or a test case, we add new data to test that functionality. Now, the problem is that when a business requirement changes the format (like the length or format of a variable), or makes any other change the test data doesn't support, we need to change the entire test data set, which is hundreds of MBs in size. Could anyone suggest a better method or process to overcome this problem? Any suggestion would be appreciated.
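
    One remedy that is often suggested for this situation (a sketch under invented element names, not your actual schema): generate the XML fixtures from a test data builder instead of storing them verbatim, so each test states only the fields it cares about and knowledge of the format lives in exactly one class. When the business format changes, only the builder changes, not hundreds of megabytes of files.

    /** The one place that knows the current wire format of an "order" fixture. */
    public class OrderXmlBuilder {
        private String id = "42";          // sensible defaults for every field
        private String currency = "USD";
        private String amount = "10.00";

        public OrderXmlBuilder withId(String id) { this.id = id; return this; }
        public OrderXmlBuilder withCurrency(String currency) { this.currency = currency; return this; }
        public OrderXmlBuilder withAmount(String amount) { this.amount = amount; return this; }

        public String build() {
            return "<order>"
                 + "<id>" + id + "</id>"
                 + "<currency>" + currency + "</currency>"
                 + "<amount>" + amount + "</amount>"
                 + "</order>";
        }
    }

    // In a test, only the relevant field is spelled out; the XML fed to whatever
    // creates the mock objects comes from the builder:
    //   String xml = new OrderXmlBuilder().withAmount("99.95").build();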

    Read the article

  • Anyone plotting SO via code_swarm?

    - by Tim Post
    Is anyone working on something to render individual questions, or SO as a whole, with codeswarm? If so, can you post a link to your work that transforms SO questions into revisions that codeswarm can understand (i.e. svn)? It would be really, really cool to see SO played (as a whole) via codeswarm, so I hope not only to ask if anyone is working on it, but to see if anyone is interested in trying to accomplish it. Augmenting that, will database dumps be made available? EDIT: Database dumps have since been made available :) Enough with user voice: is anyone doing it? If so, what VCS did you mock?

    Read the article

  • Toorcon 15 (2013)

    - by danx
    The Toorcon gang (senior staff): h1kari (founder), nfiltr8, and Geo Introduction to Toorcon 15 (2013) A Tale of One Software Bypass of MS Windows 8 Secure Boot Breaching SSL, One Byte at a Time Running at 99%: Surviving an Application DoS Security Response in the Age of Mass Customized Attacks x86 Rewriting: Defeating RoP and other Shinanighans Clowntown Express: interesting bugs and running a bug bounty program Active Fingerprinting of Encrypted VPNs Making Attacks Go Backwards Mask Your Checksums—The Gorry Details Adventures with weird machines thirty years after "Reflections on Trusting Trust" Introduction to Toorcon 15 (2013) Toorcon 15 is the 15th annual security conference held in San Diego. I've attended about a third of them and blogged about previous conferences I attended here starting in 2003. As always, I've only summarized the talks I attended and that interested me enough to write about them. Be aware that I may have misrepresented the speaker's remarks and that they are not my remarks or opinion, or those of my employer, so don't quote me or them. Those seeking further details may contact the speakers directly or use The Google. For some talks, I have a URL for further information. A Tale of One Software Bypass of MS Windows 8 Secure Boot Andrew Furtak and Oleksandr Bazhaniuk Yuri Bulygin, Oleksandr ("Alex") Bazhaniuk, and (not present) Andrew Furtak Yuri and Alex talked about UEFI and bootkits and bypassing MS Windows 8 Secure Boot, with vendor recommendations. They previously gave this talk at the BlackHat 2013 conference. MS Windows 8 Secure Boot Overview UEFI (Unified Extensible Firmware Interface) is the interface between hardware and OS. UEFI is processor and architecture independent. Malware can replace the bootloader (bootx64.efi, bootmgfw.efi). Once replaced, it can modify the kernel. It is trivial to replace the bootloader. Today there are many legacy bootkits—UEFI replaces most of them. MS Windows 8 Secure Boot verifies everything you load, either through signatures or hashes. UEFI firmware relies on secure update (with signed updates). You would think Secure Boot would rely on ROM (such as used for phones), but you can't do that for PCs—PCs use writable memory with signatures. The DXE core verifies the UEFI boot loader(s). The OS Loader (winload.efi, winresume.efi) verifies the OS kernel. A chain of trust is established with a root key (Platform Key, PK), which is a cert belonging to the platform vendor. Key Exchange Keys (KEKs) verify an "authorized" database (db), and a "forbidden" database (dbx). X.509 certs with SHA-1/SHA-256 hashes. Keys are stored in non-volatile (NV) flash-based NVRAM. Boot Services (BS) allow adding/deleting keys (they can't be accessed once the OS starts—which uses Run-Time (RT)). The root cert uses RSA-2048 public keys and PKCS#7 format signatures. SecureBoot — enable/disable image signature checks. SetupMode — update keys, self-signed keys, and secure boot variables. CustomMode — allows updating keys. Secure Boot policy settings are: always execute, never execute, allow execute on security violation, defer execute on security violation, deny execute on security violation, query user on security violation. Attacking MS Windows 8 Secure Boot Secure Boot does NOT protect from physical access. It can be disabled from the console. Each BIOS vendor implements Secure Boot differently. There are several platform and BIOS vendors. It becomes a "zoo" of implementations—which can be taken advantage of. Secure Boot is secure only when all vendors implement it correctly.
Allow only UEFI firmware signed updates; protect UEFI firmware from direct modification in flash memory; protect FW update components; program the SPI controller securely; protect secure boot policy settings in NVRAM; protect the runtime API; disable the compatibility support module, which allows unsigned legacy boot. One can corrupt the Platform Key (PK) EFI root certificate variable in SPI flash. If the PK is not found, the FW enters setup mode with secure boot turned off. One can also exploit the TPM in a similar manner. One is not supposed to be able to directly modify the PK in SPI flash from the OS, though. But they found a bug that they can exploit from User Mode (undisclosed) and demoed the exploit. It loaded and ran their own bootkit. The exploit requires a reboot. Multiple vendors are vulnerable. They will disclose this exploit to vendors in the future. Recommendations: allow only signed updates; protect UEFI FW in ROM; protect the EFI variable store in ROM. Breaching SSL, One Byte at a Time Yoel Gluck and Angelo Prado Angelo Prado and Yoel Gluck, Salesforce.com CRIME is software that performs a "compression oracle attack." This is possible because the SSL protocol doesn't hide length, and because SSL compresses the header. CRIME requests with every possible character and measures the ciphertext length. It looks for the plaintext which compresses the most and finds the cookie one byte at a time. SSL compression uses LZ77 to reduce redundancy. Huffman coding replaces common byte sequences with shorter codes. US CERT thinks the SSL compression problem is fixed, but it isn't. They convinced CERT that it wasn't fixed and they issued a CVE. BREACH, breachattrack.com BREACH exploits the SSL response body (Accept-Encoding response, Content-Encoding). It takes advantage of the fact that the response is not compressed. BREACH uses gzip and needs fairly "stable" pages that are static for ~30 seconds. It needs attacker-supplied content (say from a web form or added to a URL parameter). BREACH listens to a session's requests and responses, then inserts extra requests and responses. Eventually, BREACH guesses a session's secret key. It can use compression to guess contents one byte at a time. For example, "Supersecret SupersecreX" (a wrong guess) compresses 10 bytes, and "Supersecret Supersecret" (a correct guess) compresses 11 bytes, so it can find each character by guessing every character. To start the guess, BREACH needs at least three known initial characters in the response sequence. Compression length then "leaks" information. Some roadblocks include no winners (all guesses wrong) or too many winners (multiple possibilities that compress the same). The solutions include: lookahead (guess 2 or 3 characters at a time instead of 1 character); expensive rollback to the last known conflict; checking the compression ratio; brute-forcing the first 3 "bootstrap" characters, if needed (expensive). Block ciphers hide the exact plaintext length; the solution is to align the response in advance to the block size. Mitigations: length: use variable padding; secrets: dynamic CSRF tokens per request; secret: change over time; separate secret to input-less servlets. Future work: either understand DEFLATE/GZIP or HTTPS extensions. Running at 99%: Surviving an Application DoS Ryan Huber Ryan Huber, Risk I/O Ryan first discussed various ways to do a denial of service (DoS) attack against web services. One usual method is to find a slow web page and do several wgets. Or download large files.
Apache is not well suited to handling a large number of connections, but one can put something in front of it. One can also use Apache alternatives, such as nginx. How to identify malicious hosts: short, sudden web requests; user-agent is obvious (curl, python); same URL requested repeatedly; no web page referer (not normal); hidden links (hide a link and see if a bot gets it); restricted access if not your geo IP (unless the website is global); missing common headers in the request; regular timing; first-seen IP at the beginning of the attack; count requests per host (usually a very large number). Use of a captcha can mitigate attacks, but you'll lose a lot of genuine users. Bouncer, goo.gl/c2vyEc and www.github.com/rawdigits/Bouncer Bouncer is software written by Ryan in netflow. Bouncer has a small, unobtrusive footprint and detects DoS attempts. It closes blacklisted sockets immediately (not nice about it, no proper close connection). An aggregator collects requests and controls your web proxies. You need NTP on the front-end web servers for clean data for use by Bouncer. Bouncer is also useful for a popularity storm ("Slashdotting") and scraper storms. Future features: gzip collection data, documentation, consumer library, multitask, logging destroyed connections. Takeaways: DoS mitigation is easier with a complete picture; Bouncer is designed to make it easier to detect and defend against DoS—not a complete cure. Security Response in the Age of Mass Customized Attacks Peleus Uhley and Karthik Raman Peleus Uhley and Karthik Raman, Adobe ASSET, blogs.adobe.com/asset/ Peleus and Karthik talked about response to mass-customized exploits. Attackers behave much like a business. "Mass customization" refers to a concept discussed in the book Future Perfect by Stan Davis of Harvard Business School. Mass customization is differentiating a product for an individual customer, but at a mass production price. For example, the same individual with a debit card receives basically the same customized ATM experience around the world. Or designing your own PC from commodity parts. Exploit kits are another example of mass customization. The kits support multiple browsers and plugins, and allow new modules. Exploit kits are cheap and customizable. Organized gangs use exploit kits. A group at Berkeley looked at 77,000 malicious websites (Grier et al., "Manufacturing Compromise: The Emergence of Exploit-as-a-Service", 2012). They found 10,000 distinct binaries among them, but derived from only a dozen or so exploit kits. Characteristics of mass malware: potent, resilient, relatively low cost. Technical characteristics: multiple OSes, multiple payloads, multiple scenarios, multiple languages, obfuscation. Response time for 0-day exploits has gone down from ~40 days 5 years ago to about ~10 days now. So the drive with malware is towards mass-customized exploits, to avoid detection. There's plenty of evidence that exploit development has Project Manager bureaucracy. They infer from the malware edicts to: support all versions of Reader; support all versions of Windows; support all versions of Flash; support all browsers; write large, complex, difficult-to-maintain code (8,750 lines of JavaScript, for example). Exploits have "loose coupling" of multiple versions of software (Adobe), OS, and browser. This allows specific attacks against specific versions of multiple pieces of software. It also allows exploits of more obscure software/OS/browsers and obscure versions. They gave examples of exploits that exploited 2, 3, 6, or 14 separate bugs.
However, these complete exploits are more likely to be buggy or fragile in themselves and easier to defeat. Future research includes normalizing malware and JavaScript. Conclusion: the coming trend is that mass malware with mass zero-day attacks will result in mass customization of attacks. x86 Rewriting: Defeating RoP and other Shinanighans Richard Wartell Richard Wartell The attack vector we are addressing here is: first, some malware causes a buffer overflow. The malware has no program access, but has input access, and buffer-overflows code onto the stack. Later the stack became non-executable. The workaround malware used was to write a bogus return address to the stack, jumping to the malware. Later came ASLR (Address Space Layout Randomization) to randomize the memory layout and make addresses non-deterministic. The workaround malware used was to jump to existing code segments in the program that can be used in bad ways. "RoP" is Return-oriented Programming attacks. RoP attacks use your own code and write a return address on the stack to (existing) exploitable code found in the program ("gadgets"). Pinkie Pie was paid $60K last year for a RoP attack. One solution is using anti-RoP compilers that compile source code with NO return instructions. ASLR does not randomize the address space, just "gadgets". IPR/ILR ("Instruction Location Randomization") randomizes each instruction with a virtual machine. Richard's goal was to randomize a binary with no source code access. He created "STIR" (Self-Transforming Instruction Relocation). STIR disassembles the binary and operates on "basic blocks" of code. The STIR disassembler is conservative in what to disassemble. Each basic block is moved to a random location in memory. Next, STIR writes new code sections with copies of the "basic blocks" of code in randomized locations. The old code is copied and rewritten with jumps to the new code. The original code sections in the file are marked non-executable. STIR has better entropy than ASLR in the location of code, which makes brute-force attacks much harder. STIR runs on MS Windows (PE) and Linux (ELF). It eliminated 99.96% or more of "gadgets" (i.e., moved the address). Overhead is usually 5-10% on MS Windows, about 1.5-4% on Linux (but some code actually runs faster!). The unique thing about STIR is it requires no source access and the modified binary fully works! Current work is to rewrite code to enforce security policies. For example, don't create a *.{exe,msi,bat} file. Or don't connect to the network after reading from the disk. Clowntown Express: interesting bugs and running a bug bounty program Collin Greene Collin Greene, Facebook Collin talked about Facebook's bug bounty program. Background at FB: FB has good security frameworks, such as security teams, external audits, and cc'ing on diffs. But there are lots of "deep, dark, forgotten" parts of legacy FB code. Collin gave several examples of bountied bugs. Some bounty submissions were on software purchased from a third party (but bounty claimers don't know and don't care). We use security questions, as does everyone else, but they are basically insecure (often easily discoverable). Collin didn't expect many bugs from the bounty program, but they ended up getting 20+ good bugs in the first 24 hours, and good submissions continue to come in. Bug bounties bring people in with different perspectives, and are paid only for success. A bug bounty is a better use of a fixed amount of time and money versus just code review or static code analysis. The bounty program started in July 2011 and has paid out $1.5 million to date.
14% of the submissions have been high-priority problems that needed to be fixed immediately. The best bugs come from a small % of submitters (as with everything else)—the top paid submitters are paid 6 figures a year. Spammers like to backstab competitors. The youngest submitter was 13. Some submitters have been hired. Bug bounties also allow seeing bugs that were missed by tools or reviews, allowing improvement in the process. Bug bounties might not work for traditional software companies where the product has a release cycle or is not on the Internet. Active Fingerprinting of Encrypted VPNs Anna Shubina Anna Shubina, Dartmouth Institute for Security, Technology, and Society (I missed the start of her talk because another track went overtime. But I have the DVD of the talk, so I'll expand later.) IPsec leaves fingerprints. Using netcat, one can easily visually distinguish various crypto chaining modes just from packet timing on a chart (for example, DES-CBC versus AES-CBC). One can tell a lot about VPNs just from ping round trips (such as what router is used). Delayed packets are not informative about a network, especially if far away from the network. More exploration is needed of how TCP works in real life with respect to timing. Making Attacks Go Backwards Fuzzynop FuzzyNop, Mandiant This talk is not about threat attribution (finding who), product solutions, politics, or sales pitches. But who is making these malware threats? It's not a single person or group—they have diverse skill levels. There are a lot of fat-fingered fumblers out there. Always look for low-hanging fruit first: "hiding" malware in the temp, recycle, or root directories; creation of unnamed scheduled tasks; obvious names of files and syscalls ("ClearEventLog"); uncleared event logs. Clearing the event log in itself, and the time of clearing, is a red flag and a good first clue to look for on a suspect system. Reverse engineering is hard. Disassembler use takes practice and skill. A popular tool is IDA Pro, but it takes multiple interactive iterations to get a clean disassembly. Key loggers are used a lot in targeted attacks. They are typically custom code or built into a backdoor. A big tip-off is that non-printable characters need to be printed out (such as "[Ctrl]" "[RightShift]") or timestamp printf strings. Look for these in files. Presence is not proof they are used; absence is not proof they are not used. Java exploits: one can parse the jar file with idxparser.py and decompile the Java file. Java is typically used to target tech companies. Backdoors are the main persistence mechanism (provided externally) for malware. Also, malware typically needs command and control. Application of Artificial Intelligence in Ad-Hoc Static Code Analysis John Ashaman John Ashaman, Security Innovation Initially John tried to analyze open source files with open source static analysis tools, but these showed thousands of false positives. He also tried using grep, but this fails to find anything even mildly complex. So next John decided to write his own tool. His approach was to first generate a call graph, then analyze the graph. However, the problem is that making a call graph is really hard. For example, one problem is "evil" coding techniques, such as passing function pointers. First the tool generated an Abstract Syntax Tree (AST) with the nodes created from method declarations and edges created from method use. Then the tool generated a control flow graph with the goal of finding a path through the AST (a maze) from source to sink.
The algorithm is to look at adjacent nodes to see if any are "scary" (a vulnerability), using heuristics for search order. The tool, called "Scat" (Static Code Analysis Tool), currently looks for C# vulnerabilities and some simple PHP. Later, he plans to add more PHP, then JSP and Java. For more information see his posts on the Security Innovation blog and NRefactory on GitHub. Mask Your Checksums—The Gorry Details Eric (XlogicX) Davisson Eric (XlogicX) Davisson Sometimes when emailing or posting TCP/IP packets to analyze problems, you may want to mask the IP address. But to do this correctly, you need to mask the checksum too, or you'll leak information about the IP. Problem reports found on stackoverflow.com, sans.org, and pastebin.org are usually not masked, but a few companies do care. If only the IP is masked, the IP may be guessed from the checksum (that is, it leaks data). Other parts of the packet may leak more data about the IP. TCP and IP checksums both refer to the same data, so one can get more bits of information out of using both checksums than just one checksum. Also, one can usually determine the OS from the TTL field and the ports in a packet header. If we get hundreds of possible results (16x for each masked nibble that is unknown), one can do other things to narrow the results, such as look at packet contents for domain or geo information. With hundreds of results, one can import them in CSV format into a spreadsheet, correlate with geo data, and see where each possibility is located. Eric then demoed a real email report with a masked IP packet attached. He was able to find the exact IP address, given the geo and university of the sender. The point is: if you're going to mask a packet, do it right. Eric wouldn't usually bother, but do it correctly if at all, so as not to create a false impression of security. Adventures with weird machines thirty years after "Reflections on Trusting Trust" Sergey Bratus Sergey Bratus, Dartmouth College (and Julian Bangert and Rebecca Shapiro, not present) "Reflections on Trusting Trust" refers to Ken Thompson's classic 1984 paper. "You can't trust code that you did not totally create yourself." There are invisible links in the chain of trust, such as "well-installed microcode bugs" or in the compiler, and other planted bugs. Thompson showed how a compiler can introduce and propagate bugs in unmodified source. But suppose there are no bugs and you trust the author, can you trust the code? Hell no! There are too many factors—it's Babylonian in nature. Why not? Well, input is not well-defined/recognized (code's assumptions about "checked" input will be violated: bug/vulnerability). For example, HTML is recursive, but regex checking is not recursive. Input can be well-formed but so complex there's no telling what it does. For example, ELF file parsing is complex and has multiple ways of parsing. Input is seen differently by different pieces of the program or toolchain. Any input is a program: input executes on input handlers (it drives state changes & transitions); only a well-defined execution model can be trusted (regex/DFA, PDA, CFG). An input handler either is a "recognizer" for the inputs as a well-defined language (see langsec.org) or it's a "virtual machine" for inputs to drive into pwn-age. ELF ABI (UNIX/Linux executable file format) case study: problems can arise from these steps (without planting bugs): compiler, linker, loader, ld.so/rtld, relocator, DWARF (debugger info), exceptions. The problem is you can't really automatically analyze code (it's the "halting problem" and undecidable).
The only solution is to freeze code and sign it. But you can't freeze everything! You can't freeze ASLR or loading—you must have tables and metadata. Any sufficiently complex input data is the same as VM byte code. For example, ELF relocation entries + dynamic symbols == a Turing-complete machine (TM). @bxsays created a Turing machine in Linux from relocation data (not code) in an ELF file. For more information, see Rebecca "bx" Shapiro's presentation from last year's Toorcon, "Programming Weird Machines with ELF Metadata". @bxsays did the same thing with Mach-O bytecode. Or DWARF exception handling data: .eh_frame + glibc == Turing Machine. X86 MMU (IDT, GDT, TSS): used address translation to create a Turing Machine. The page handler reads and writes memory (on page fault). It uses a page table, which can be used as Turing Machine byte code. There is an example on GitHub using this TM that will fly a glider across the screen. Next Sergey talked about "Parser Differentials": having one input format, but two parsers, will create confusion and opportunity for exploitation. For example, CSRs are parsed during creation by the cert requestor and again by another parser at the CA. Another example is ELF—several parsers in the OS tool chain, which are all different. One can have two different Program Headers (PHDRs) because ld.so parses multiple PHDRs. The second PHDR can completely transform the executable. This is described in a paper in the first issue of the International Journal of PoC. Conclusions: trusting computers is not only about bugs! Bugs are part of the problem, but by no means all of it; complex data formats mean bugs; there is no "chain of trust" in Babylon! (that is, with parser differentials); we need to squeeze complexity out of data until data stops being "code equivalent". Further information: see langsec.org and USENIX WOOT 2013 (Workshop on Offensive Technologies) for "weird machines" papers and videos.

    Read the article

  • Hyperlinked, externalized source code documentation

    - by Dave Jarvis
    Why do we still embed natural language descriptions of source code (i.e., the reason why a line of code was written) within the source code, rather than as a separate document? Given the expansive real-estate afforded to modern development environments (high-resolution monitors, dual-monitors, etc.), an IDE could provide semi-lock-step panels wherein source code is visually separated from -- but intrinsically linked to -- its corresponding comments. For example, developers could write source code comments in a hyper-linked markup language (linking to additional software requirements), which would simultaneously prevent documentation from cluttering the source code. What shortcomings would inhibit such a software development mechanism? A mock-up to help clarify the question: When the cursor is at a particular line in the source code (shown with a blue background, above), the documentation that corresponds to the line at the cursor is highlighted (i.e., distinguished from the other details). As noted in the question, the documentation would stay in lock-step with the source code as the cursor jumps through the source code. A hot-key could switch between "documentation mode" and "development mode". Potential advantages include: More source code and more documentation on the screen(s) at once Ability to edit documentation independently of source code (regardless of language?) Write documentation and source code in parallel without merge conflicts Real-time hyperlinked documentation with superior text formatting Quasi-real-time machine translation into different natural languages Every line of code can be clearly linked to a task, business requirement, etc. Documentation could automatically timestamp when each line of code was written (metrics) Dynamic inclusion of architecture diagrams, images to explain relations, etc. Single-source documentation (e.g., tag code snippets for user manual inclusion). Note: The documentation window can be collapsed Workflow for viewing or comparing source files would not be affected How the implementation happens is a detail; the documentation could be: kept at the end of the source file; split into two files by convention (filename.c, filename.c.doc); or fully database-driven By hyperlinked documentation, I mean linking to external sources (such as StackOverflow or Wikipedia) and internal documents (i.e., a wiki on a subdomain that could cross-reference business requirements documentation) and other source files (similar to JavaDocs). Related thread: What's with the aversion to documentation in the industry?

    Read the article

  • Java Spotlight Episode 58: Peter Korn and Ofir Leitner on ME Accessibility

    - by Roger Brinkley
    Interview with Peter Korn and Ofir Leitner on Mobile and Embedded Accessibility. Joining us this week on the Java All Star Developer Panel are Dalibor Topic, Java Free and Open Source Software Ambassador and Alexis Moussine-Pouchkine, Java EE Developer Advocate. Right-click or Control-click to download this MP3 file. You can also subscribe to the Java Spotlight Podcast Feed to get the latest podcast automatically. If you use iTunes you can open iTunes and subscribe with this link: Java Spotlight Podcast in iTunes. Show Notes News Announcing Oracle WebLogic 12c Geronimo 3 beta - Another Apache project now compatible with Java EE 6 NetBeans 7.1 RC1 is out JavaFX links of the week JavaFX videos on Parleys: Nicolas Lorain's Introduction to JavaFX 2.0 from JavaOne 2011 & Richard Bair on JavaFX Architecture and Programming Model Events Dec 4, SOUJava Geek Bike Ride 2011, Sao Paulo Dec 5-7, UKOUG, Birmingham, UK Dec 6-8, Java One Brazil, Sao Paulo Dec 9 UAIJUG, Uberlandia Dec 9 CEJUG, Fortaleza/CE Dec 10 GUJAVA, Florianopolis Dec 10 ALJUG, Maceio/AL Dec 11 Javaneiros, Campo Grande/MS Dec 12 GOJAVA, Goiania/GO Dec 13 RioJUG, Rio de Janeiro Feature interview Peter Korn is Oracle's Accessibility Principal – their senior individual contributor on accessibility. He is also Technical Manager of the AEGIS project, leading an EC-funded €12.6m investment building accessibility into future mainstream ICT (FP7-ICT224348). Mr. Korn co-developed and co-implemented the Java Accessibility API, and developed the Java Access Bridge for Windows. He helped design the open source GNOME Accessibility architecture found on most modern UNIX and GNU/Linux systems, and consulted on accessibility support for OpenOffice.org, Firefox, Thunderbird, and other applications. Prior to Sun/Oracle, Peter co-developed the outSPOKEN for Windows screen reader. Mr. Korn represented Sun/Oracle on TEITAC for the Section 508/255 refresh, co-led the OASIS ODF Accessibility subcommittee, and sits on INCITS V2 where he is contributing to ISO 13066: defining AT-IT interoperability standards including specifically the Java Accessibility API. Ofir Leitner is the architect of one of LWUIT's key features - the HTMLComponent, which allows rendering HTML within LWUIT applications and embedding web-flows inside apps. Ofir is also responsible for LWUIT's bidirectional and RTL support and for the accessibility work that is being done these days in LWUIT. Mail Bag What's Cool Devoxx 2011 (Alexis) Eclipsecon Europe Talk by Andrew Overholt: IcedTea & IcedTea-Web Geek bike ride & Rio 500 Twitter followers @JavaSpotlight Show Transcripts Transcript for this show is available here when available.

    Read the article

  • Taking AIIM at Social

    - by Christie Flanagan
    Today we are pleased to have a guest post from Christian Finn (@cfinn).  Christian is Senior Director of Product Management for Oracle WebCenter and heads up the WebCenter evangelist team.Last week I had the privilege of speaking at AIIM’s new conference in San Francisco.  AIIM, for those of you not familiar with it, is a global community of information professionals and got its start with ECM and imaging long ago. With 65,000+ members, AIIM has now set about broadening its scope to focus more on the intersection between systems of record (think traditional ECM) and systems of engagement (think social solutions).  So AIIM’s conference is a natural place to be for WebCenter types like me, who have a foot in both of those worlds.AIIM used to have their name on a very large tradeshow, but have changed direction now to run a small, intimate conference.  The lineup of keynotes was terrific, including David Pogue of The New York Times, Clay Shirky, author of Here Comes Everybody, and Ted Schadler, author of Empowered among many thought-provoking and engaging speakers. (Note: Ted will soon be featured in our Social Business webcast series. Stay tuned.)John Mancini and his team at AIIM did a fabulous job running the event and the engagement from the 450 attendees was sustained over the two and a half days.  Our proudest moment was having three finalists up for AIIM awards including: San Joaquin County, CA, for a justice case management system using WebCenter Content and Oracle BPM; Medtronic and Fishbowl Solutions for their innovative iPad solutions on WebCenter Content, and the government of Louisville, Kentucky/Jefferson County for their accounts payable solution using WebCenter Content’s Image & Process Management.  The highlight of the awards night was San Joaquin winning the small organization award against some tough competition.In addition to the conversations sparked at the show, AIIM promoted the whitepapers their industry task forces have produced on the impact and opportunities created by systems of engagement and systems of record. The task forces were led by: Geoffrey Moore, the renowned high tech marketing guru and author of Crossing The Chasm; and Andrew McAfee, who coined the term and wrote the book, Enterprise 2.0. (Note: Andy will also be featured soon on the Social Business webcast series.)  These free papers make short, excellent reading and you can download them on the AIIM website: Moore highlights the changes to Enterprise IT that the social revolution will engender, and McAfee covers where and how organizations are finding value in using social techniques to foster innovation, to scale Q&A across the organization, and to connect sales and marketing for greater efficiency and effectiveness. Moore’s whitepaper is here and McAfee’s whitepapers are available here. For the benefit of those who did not get a chance to attend the AIIM conference, I’ll be posting the topics of my AIIM presentation, “Three Principles for Fixing Your Broken Organization,” here on the WebCenter blog over the rest of this week and next in a series of posts.  

    Read the article
