Search Results

Search found 15099 results on 604 pages for 'stop loading'.


  • Why does Photoshop CS5's Photomerge result immediately disappear?

    - by koiyu
    I have a bunch of JPG files which I want to stitch together with Photoshop's Photomerge function. I choose File → Automate → Photomerge… and browse for the files. Photoshop opens the files and starts analyzing. I see the progress bar filling and different phases mentioned on it. Nothing weird there. When the merging is done (and if I don't blink), I can see the Layers palette populated with the chosen files and, judging quickly from the layer thumbnails, they're properly aligned. Sometimes the image window itself can be seen, but not always. The problem is that the layers and the image disappear in a flash. There is no error message. Everything is as it was before starting Photomerge: no file has been changed, and I can continue to use Photoshop normally. This is what I've tried so far:
    - Loaded a folder with 38 JPG images, 4272 × 2848 and ~5 megabytes per file
    - Loaded the same files, but chose Use Files instead of Use Folder in the Photomerge window
    - Loaded 19 JPG images, 4272 × 2848 and ~5 megabytes per file
    - Loaded 10 JPG images (see above)
    - Loaded 5 JPG images (see above)
    - Loaded 3 JPG images (see above)
    - Scaled the images down to 2256 × 1504 and under ~1 megabyte per file, then loaded them in sets of 38, 19, 10, 5 and 3
    The following steps were tested with these smaller files and with a set of 5 images:
    - Read Adobe's forums and gradually reduced the amount of RAM Photoshop uses from ~80% to 50% (though I didn't understand the logic behind this)
    - Would have reduced the cache tile size to 128K, but it was already set to that
    - Disabled OpenGL
    - Scaled the images to 800 × 533 and ~100 kilobytes per file, and loaded a set of 5
    - Read more unanswered threads around the internet
    In between each test I closed and reopened Photoshop. This is the first time I've even tried using Photomerge. Am I doing something wrong? How can I locate the problem? How do I fix this? Photoshop is the 64-bit Extended CS5 version. I'm on a mid-2010 quad-core (i5) iMac with up-to-date Mac OS X 10.6.6.
    Edit: Weird. First loading the images into one file via File → Scripts → Load Files into Stack… and then using Edit → Auto-Align Layers…, which is effectively the same as Photomerge (even the dialog looks much the same), works! Even with the original JPGs, without any issues. This doesn't fix Photomerge, though.


  • A Windows Update Prevented Live Discs From Working?

    - by user88311
    First off, I'll state my system specs:
    - Acer Aspire M1100
    - Windows Vista Home Basic 32-bit OEM
    - 2 GB of DDR2 RAM
    - 160 GB hard drive
    - 2.7 GHz AMD Athlon 64 processor
    - ATI Radeon X1250 graphics card
    A few days ago my computer ran automatic updates and updated Windows Defender to KB915597 (Definition 1.135.415.0). After that, when shutting down and starting up, I would receive a BSOD with the information BUGCODE_USB_DRIVER and 0x000000FE (0x00000008, 0x00000006, 0x00000006, 0x877330000), after which my computer would not start up with any USB devices plugged in, and it always required me to run Startup Repair before it started. When I first started it up and was able to fully boot Windows, I had no use of the mouse, so I was unable to install the fix that the Windows Solutions Center brought up on my screen. I restarted again and installed the fix, hoping it would cease the problem; it did not. Upon starting up after installing the fix and restarting, I was confronted with the BSOD 0x000000FE (0x00000008, 0x00000006, 0x00000006, 0x83291000), at which point I found that Startup Repair could not fix the problem, and I restored, as I most likely should have done in the first place. After going through that, I read that simply installing the latest Defender version from the Microsoft site had fixed this problem for others, so I did that, only to find I still received the BSODs. In an attempt to find a fix, I went to the Microsoft Answers site, where I was told to simply disable Defender and reboot to see if that fixed the problem. Upon doing this, my computer would no longer even start up: when I boot normally, it gets to just when the loading screen finishes and then my computer restarts, and when I run Startup Repair, it runs for about 15 seconds and then my computer restarts as well. I have tried running Ubuntu live discs in order to simply access the drive and copy the two-month-old physical backup I have of my C drive back to the C drive, but whenever I run the live OS, the computer again restarts just as the load screen ends and it is about to boot. Yet if I put in a GParted disc, I am able to boot it fully, although it does not give me access to the file system, just partition managing, and when I attempt to access the internet through it, the computer once again restarts. So my question is: how could the update, and whatever has happened since, prevent me from running the live OSes properly?


  • Developer Dashboard in SharePoint 2010

    - by jcortez
    Introducing the Developer Dashboard
    As a SharePoint developer (or IT professional), how many times have you had the pleasure of figuring out why a particular page on your site is taking too long to render? I'm sure one of the techniques you have employed in troubleshooting is the process of elimination: removing individual web parts from the page, hoping to identify which web part is misbehaving. One of the new features of SharePoint 2010 is the Developer Dashboard. This dashboard provides tracing and performance information that can be useful when you are trying to troubleshoot pages that load too slowly. The Developer Dashboard is turned off by default, and I'll go over three different ways to display it. Here is a screenshot of what the Developer Dashboard looks like when displayed at the bottom of the page:
    On the left side you can see the different events that fired during the page processing pipeline and how long those events took. This is where you will see individual web parts being processed and how long each took to complete (obviously the kind of processing depends on what the web part does). On the right side you see the different database calls issued through the SharePoint Object Model to process the page. You will notice that each of these database queries is actually a hyperlink, and clicking one displays a pop-up window that shows the actual SQL query text, the call stack that triggered the database call, and the IO statistics of that query.
    Enabling the Developer Dashboard
    Option 1: Managed Code (see the C# sketch after this section)
    The Developer Dashboard is a farm-wide setting, and this code won't work if it is used within a web part hosted on any non-Central Admin site. The SPDeveloperDashboardLevel enum has three possible values: On, Off, and OnDemand. Setting it to On will always display the Developer Dashboard at the bottom of the page. Setting it to Off will hide the Developer Dashboard. Setting it to OnDemand will add an icon at the top right corner of the page where a Site Collection Admin can toggle the display of the Developer Dashboard for a particular site collection. In my opinion, OnDemand is the best setting when troubleshooting a page or during development, since a Site Collection Admin can turn it on or off, and for a particular site only. The first cool thing about this is that the Site Collection Admin who turned it on will be the only one to see the Developer Dashboard output. Everyday users won't see the Developer Dashboard output even if it was turned on by a Site Collection Admin. If you need more flexibility over who gets to see the Developer Dashboard output, you can set SPDeveloperDashboardSettings.RequiredPermissions to control which group of users will have permission to see the output.
    Option 2: Using stsadm
    Using stsadm, you can run the following command to configure the Developer Dashboard:
    STSADM -o setproperty -pn developer-dashboard -pv OnDemand
    To successfully execute this command, be sure that you are running as a Farm Admin.
    Option 3: Using PowerShell (see the PowerShell sketch after this section)
    For all scripts in SharePoint 2010, I prefer writing them as PowerShell scripts. Though the stsadm command is less verbose, the PowerShell equivalent is pretty straightforward and uses the SharePoint Object Model. You can of course parameterize the value that gets assigned to the DisplayLevel property so you can turn it On, Off, or OnDemand depending on the parameter.
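    The managed-code snippet from the original post did not survive extraction; the following is a minimal sketch of what Option 1 looks like, assuming the standard SharePoint 2010 server object model (Microsoft.SharePoint.Administration):

```csharp
using Microsoft.SharePoint.Administration;

public static class DeveloperDashboardToggle
{
    // Sets the farm-wide Developer Dashboard display level.
    // Must run with farm-admin rights (e.g. from Central Admin),
    // not from a web part on an ordinary site.
    public static void EnableOnDemand()
    {
        SPDeveloperDashboardSettings settings =
            SPWebService.ContentService.DeveloperDashboardSettings;
        settings.DisplayLevel = SPDeveloperDashboardLevel.OnDemand; // On | Off | OnDemand
        settings.Update();
    }
}
```

    The PowerShell script referenced in Option 3 is likewise missing; a hedged equivalent, run from the SharePoint 2010 Management Shell as a farm administrator, would be:

```powershell
# Same object model as the C# version, driven from PowerShell.
$svc = [Microsoft.SharePoint.Administration.SPWebService]::ContentService
$settings = $svc.DeveloperDashboardSettings
$settings.DisplayLevel = [Microsoft.SharePoint.Administration.SPDeveloperDashboardLevel]::OnDemand
$settings.Update()
```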
Events and the Developer Dashboard
    Now, don't assume that all the code inside your web part or page will show up in the Developer Dashboard complete with all the great troubleshooting information. Only a finite set of events is monitored by default (for a web part, these are the events in the base web part class). Let's say you have a click event that could take some time, for example a web service call, and you want to include troubleshooting information for this event in the Developer Dashboard. Enter SPMonitoredScope, which is also a new feature in SharePoint 2010. In SharePoint 2010, everything is executed within a "monitored scope", and each scope has a set of "monitors" that measure and count calls and timings, which appear in the Developer Dashboard. The sketch after this paragraph shows how to get your custom code included in the Developer Dashboard by wrapping it inside a new monitored scope. It would include your new scope, "My long web service call", in the Developer Dashboard and log the time it took to complete processing. In my opinion, wrapping your custom code in an SPMonitoredScope is a SharePoint development best practice, since it gives you visibility and a better understanding of the performance of your components.
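    The original post's SPMonitoredScope example was also lost in extraction; here is a minimal sketch (the web service call is a hypothetical placeholder):

```csharp
using Microsoft.SharePoint.Utilities;

// Wrapping slow custom code in an SPMonitoredScope makes its timing show up
// as a named entry in the Developer Dashboard.
using (new SPMonitoredScope("My long web service call"))
{
    // Hypothetical slow operation, e.g. an external web service call.
    CallSomeWebService();
}
```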


  • Dec 5th Links: ASP.NET, ASP.NET MVC, jQuery, Silverlight, Visual Studio

    - by ScottGu
    Here is the latest in my link-listing series. Also check out my VS 2010 and .NET 4 series for another ongoing blog series I'm working on. [In addition to blogging, I am also now using Twitter for quick updates and to share links. Follow me at: twitter.com/scottgu]
    ASP.NET
    - ASP.NET Code Samples Collection: J.D. Meier has a great post that provides a detailed round-up of ASP.NET code samples and tutorials from a wide variety of sources. Lots of useful pointers.
    - Slash your ASP.NET compile/load time without any hard work: Nice article that details a bunch of optimizations you can make to speed up ASP.NET project load and compile times. You might also want to read my previous blog post on this topic here.
    - 10 Essential Tools for Building ASP.NET Websites: Great article by Stephen Walther on 10 great (and free) tools that enable you to more easily build great ASP.NET websites. Highly recommended reading.
    - Optimize Images using the ASP.NET Sprite and Image Optimization Framework: A nice article by 4GuysFromRolla that discusses how to use the open-source ASP.NET Sprite and Image Optimization Framework (one of the tools recommended by Stephen in the previous article). You can use this to significantly improve the load time of your pages on the client.
    - Formatting Dates, Times and Numbers in ASP.NET: Scott Mitchell has a great article that discusses formatting dates, times and numbers in ASP.NET. A very useful link to bookmark. Also check out James Michael Hare's DateTime is Packed with Goodies blog post for other DateTime tips.
    - Examining ASP.NET's Membership, Roles and Profile APIs (Part 18): Everything you could possibly want to know about ASP.NET's built-in Membership, Roles and Profile APIs must surely be in this tutorial series. Part 18 covers how to store additional user info with Membership.
    ASP.NET with jQuery
    - An Introduction to jQuery Templates: Stephen Walther has written an outstanding introduction and tutorial on the new jQuery Template plugin that the ASP.NET team has contributed to the jQuery project.
    - Composition with jQuery Templates and jQuery Templates, Composite Rendering, and Remote Loading: Dave Ward has written two nice posts that talk about composition scenarios with jQuery Templates and some cool scenarios you can enable with them.
    - Using jQuery and ASP.NET to Build a News Ticker: Scott Mitchell has a nice tutorial that demonstrates how to build a dynamically updated "news ticker" style UI with ASP.NET and jQuery.
    - Checking All Checkboxes in a GridView using jQuery: Scott Mitchell has a nice post that covers how to use jQuery to enable a checkbox within a GridView's header to automatically check/uncheck all checkboxes contained within its rows.
    - Using jQuery to POST Form Data to an ASP.NET AJAX Web Service: Rick Strahl has a nice post that discusses how to capture form variables and post them to an ASP.NET AJAX Web Service (.asmx).
    ASP.NET MVC
    - ASP.NET MVC Diagnostics Using NuGet: Phil Haack has a nice post that demonstrates how to easily install a diagnostics page (using NuGet) that can help identify and diagnose common configuration issues within your apps.
    - ASP.NET MVC 3 JsonValueProviderFactory: James Hughes has a nice post that discusses how to take advantage of the new JsonValueProviderFactory support built into ASP.NET MVC 3. This makes it easy to post JSON payloads to MVC action methods.
    - Practical jQuery Mobile with ASP.NET MVC: James Hughes has another nice post that discusses how to use the new jQuery Mobile library with ASP.NET MVC to build great mobile web applications.
    - Credit Card Validator for ASP.NET MVC 3: Benjii Me has a nice post that demonstrates how to build a [CreditCard] validator attribute that can be used to easily validate that credit card numbers are in the correct format with ASP.NET MVC.
    Silverlight
    - Silverlight FireStarter Keynote and Sessions: A great blog post from John Papa that contains pointers and descriptions of all the great Silverlight content we published last week at the Silverlight FireStarter. You can watch all of the talks online. More details on my keynote and Silverlight 5 announcements can be found here.
    - 31 Days of Windows Phone 7: 31 great tutorials on how to build Windows Phone 7 applications (using Silverlight).
    - Silverlight for Windows Phone Toolkit Update: David Anson has a nice post that discusses some of the additional controls provided with the Silverlight for Windows Phone Toolkit.
    Visual Studio
    - JavaScript Editor Extensions: A nice (and free) Visual Studio plugin built by the web tools team that significantly improves the JavaScript IntelliSense support within Visual Studio.
    - HTML5 Intellisense for Visual Studio: Gil has a blog post that discusses a new extension my team has posted to the Visual Studio Extension Gallery that adds HTML5 schema support to Visual Studio 2008 and 2010.
    - Team Build + Web Deployment + Web Deploy + VS 2010 = Goodness: Vishal Joshi blogs about how to enable a continuous deployment system with VS 2010, TFS 2010 and the Microsoft Web Deploy framework.
    - Visual Studio 2010 Emacs Emulation Extension and VIM Emulation Extension: Check out these two extensions if you are fond of Emacs and VIM key bindings and want to enable them within Visual Studio 2010.
    Hope this helps, Scott


  • XNA Notes 006

    - by George Clingerman
    If you used to think the XNA community was small and inactive, hopefully these XNA Notes are opening your eyes. And I honestly feel like I'm still only catching the tail end of everything that's going on. It's a large and active community, and you can be so mired down in one part of it that you miss all sorts of cool stuff another part is doing. XNA is many things to a lot of people, and that makes for a lot of really awesome things going on. So here's what I saw going on this last week!
    Time Critical XNA News:
    - XNA Team: Peer Review now closes for XNA 3.1 games http://blogs.msdn.com/b/xna/archive/2011/02/08/peer-review-pipeline-closed-for-new-xna-gs-3-1-games-or-updates-on-app-hub.aspx http://twitter.com/XNACommunity/statuses/34649816529256448
    - The XNA Team posts about a meet-up with Microsoft for Creators, going to be at GDC, March 3rd at the Lobby Bar http://on.fb.me/fZungJ
    XNA Team:
    - @mklucher is busy playing Bubblegum on WP7, made by a member of the XNA team (although reportedly made in Silverlight? Crazy! ;) ) http://twitter.com/mklucher/statuses/34645662737895426 http://bubblegum.me
    - Shawn Hargreaves posts multiple posts (is this a sign that something new is coming from the XNA team? Usually when Shawn has time to post, something has just wrapped up…): Random Shuffle http://blogs.msdn.com/b/shawnhar/archive/2011/02/09/random-shuffle.aspx and Doing the right thing: resume, rewind or skip ahead http://blogs.msdn.com/b/shawnhar/archive/2011/02/10/doing-the-right-thing-resume-rewind-or-skip-ahead.aspx
    XNA Developers:
    - Andrew Russell was on .NET Rocks recently, talking with Carl and Richard about developing games for Xbox, iPhone and Android http://www.dotnetrocks.com/default.aspx?ShowNum=635
    - Eric W. releases the Fishing Girl source code into the wild http://ericw.ca/blog/posts/fishing-girl-now-open-source/ http://forums.create.msdn.com/forums/p/74642/454512.aspx#454512
    - BinaryTweedDeej reminds the XNA community that Indie City wants you involved http://twitter.com/BinaryTweedDeej/statuses/34596114028044288 http://www.indiecity.com
    - Mike McLaughlin (@mikebmcl) releases his first two XNA articles on the TechNet wiki http://social.technet.microsoft.com/wiki/contents/articles/xna-framework-overview.aspx http://social.technet.microsoft.com/wiki/contents/articles/content-pipeline-overview.aspx
    - John Watte plays around with the Content Pipeline and music visualization, exploring just what can be done http://www.enchantedage.com/xna-content-pipeline-fft-song-analysis http://www.enchantedage.com/fft-in-xna-content-pipeline-for-beat-detection-for-the-win
    - Simon Stevens writes up his talk on vector collision physics http://www.simonpstevens.com/News/VectorCollisionPhysics
    - @domipheus puts together an XNA task manager http://www.flickr.com/photos/domipheus/5405603197/
    - MadNinjaSkillz releases his fork of Nick's Easy Storage component on CodePlex http://twitter.com/MadNinjaSkillz/statuses/34739039068229634 http://ezstorage.codeplex.com
    - @ActiveNick was interviewed by Rob Cameron and discusses Windows Phone 7, Bing Maps and XNA http://twitter.com/ActiveNick/statuses/35348548526546944 http://msdn.microsoft.com/en-us/cc537546
    - Radiangames (Luke Schneider) posts about converting his games from XNA to Unity http://radiangames.com/?p=592
    - UberMonkey (@ElementCy) posts about a new project in the works, CubeTest, a Minecraft-style terrain http://www.ubergamermonkey.com/personal-projects/new-project-in-the-works/?utm_source=feedburner&utm_medium=feed&utm_campaign=Feed%3A+Ubergamermonkey+%28UberGamerMonkey%29
    Xbox LIVE Indie Games (XBLIG):
    - VideoGamer Rob reviews Bonded Realities http://videogamerrob.wordpress.com/2011/02/05/xblig-review-bonded-realities/
    - XBLIG Round Up on Gamergeddon http://www.gamergeddon.com/2011/02/06/xbox-indie-game-round-up-february-6th/
    - Are gamers still rating Indie Games after the Xbox Dashboard update? http://www.gamemarx.com/news/2011/02/06/are-gamers-still-rating-indie-games-after-the-xbox-dashboard-update.aspx
    - Joystiq: Xbox Live Indie Gems: Corrupted http://www.joystiq.com/2011/02/04/xbox-live-indie-gems-corrupted/
    - Raymond Matthews of DarkStarMatryx reviews (Almost) Total Mayhem and Aban Hawkins & the 1000 Spikes http://www.darkstarmatryx.com/?p=225 http://www.darkstarmatryx.com/?p=229
    - 8 Bit Horse reviews Aban Hawkins & the 1000 Spikes http://8bithorse.blogspot.com/2011/01/aban-hawkins-1000-spikes-xbl-indie.html
    - 2010 wrap-up for FunInfused Games http://www.krissteele.net/blogdetails.aspx?id=245
    - NeoGAF roundup of January's XBLIGs http://www.neogaf.com/forum/showthread.php?t=420528
    - Armless Octopus interviews Michael Ventnor, creator of Bonded Realities http://www.armlessoctopus.com/2011/02/07/interview-michael-ventnor-of-red-crest-studios/
    - @recharge_media posts about the new city music for Woodvale in Sin Rising http://rechargemedia.com/2011/02/08/new-city-theme-woodvale/
    - @DrMisty posts some footage of YoYoYo in action http://www.mstargames.co.uk/mistryblogmain/54-yoyoyoblogs/184-video-update.html
    - Xona Games: Decimation X3 on Reviews on the Run http://video.citytv.com/video/detail/782443063001.000000/reviews-on-the-run--february-8-2011/g4/
    - @benkane gives an early peek at his action RPG coming to XBLIG http://www.youtube.com/watch?v=bDF_PrvtwU8
    - Rock, Paper Shotgun talks to Zeboyd Games about bringing Cthulhu Saves the World to PC http://www.rockpapershotgun.com/2011/02/11/summoning-cthulhu-natter-with-zeboyd/
    - Xbox LIVE Indieverse interviews the creator of Bonded Realities http://xbl-indieverse.blogspot.com/2011/02/xbl-indieverse-interview-red-crest.html
    XNA Game Development:
    - Dream-In-Code posts about an upcoming XNA challenge/coding contest http://www.dreamincode.net/forums/blog/1385/entry-3192-xna-challengecontest/
    - Sgt.Conker covers Fishing Girl and the IndieFreaks Game Framework release http://www.sgtconker.com/2011/02/fishing-girl-did-not-sell-a-single-copy/ http://www.sgtconker.com/2011/02/indiefreaks-game-framework-v0-2-0-0/
    - @slyprid releases Transmute v0.40a with lots of new features and fixes http://twitter.com/slyprid/statuses/34125423067533312 http://twitter.com/slyprid/statuses/35326876243337216 http://forgottenstarstudios.com/
    - Jeff Brown writes an XNA 4.0 tutorial on saving/loading on the Xbox 360 http://www.robotfootgames.com/xna-tutorials/92-xna-tutorial-savingloading-on-xbox-360-40
    - XNA for Silverlight Developers: Part 3 - Animation http://www.silverlightshow.net/items/XNA-for-Silverlight-developers-Part-3-Animation-transforms.aspx?utm_source=feedburner&utm_medium=feed&utm_campaign=Feed%3A+xna-connection-twitter-specific-stream+%28XNA+Connection%27s+Twitter+specific+stream%29
    - The news from Nokia is definitely something XNA developers will want to keep their eye on http://blogs.forum.nokia.com/blog/nokia-developer-news/2011/02/11/letter-to-developers?sf1066337=1


  • Session Report: What’s New in JSF: A Complete Tour of JSF 2.2

    - by Janice J. Heiss
    On Wednesday, Ed Burns, Consulting Staff Member at Oracle, presented session CON3870, "What's New in JSF: A Complete Tour of JSF 2.2," in which he provided an update on recent developments in JavaServer Faces 2.2. He began by emphasizing that "JavaServer Faces 2.2 continues the evolution of the Java EE standard user interface technology. Like previous releases, this iteration is very community-driven and transparent." He pointed out that since JSF was introduced at the 2001 JavaOne keynote, it has had a long and successful run and has found a home in applications where the model and UI logic reside entirely on the server. In such cases, the browser performs fairly simple functions. However, developers can take advantage of the power of browsers, something that Project Avatar is focused on, by authoring their applications so the UI logic runs on the client and communicates with the back end via RESTful web services. "Most importantly," remarked Burns, "JSF 2.2 offers a really good migration path, because even in the scope of one application you could have an app written with JSF that has its UI logic on the server and, on a gradual basis, you could migrate parts of the app over to use client-side technologies. This can be done at any level of granularity, per page or per collection of pages. It all depends on what you want to do." His presentation, which focused on the basic new features of JSF 2.2, began by restating the scope of JSF, and he encouraged attendees to check out Roger Kitain's session CON5133, "Techniques for Responsive Real-Time Web UIs." Burns explained that JSF has endured because "we still need web apps that are maintainable, localizable, quick to build, accessible, secure, look great and are fun to use." It is used on every continent; the curious can go here to check out where its unofficial usage is tracked. He emphasized the significance of the UI logic being substantially on the server. This:
    - Separates component semantics from rendering
    - Allows components to "own" their little patch of the UI (encode/decode)
    - Offers a well-defined lifecycle: inversion of control
    Burns reminded attendees that JSR-344, the spec for JSF 2.2, is now on Java Community Process 2.8, a revised version of the JCP that allows for more openness and transparency. He then offered some tools for community access to JSF 2.2:
    - Public java.net projects: spec http://jsf-spec.java.net/ and impl http://jsf.java.net/ (open source: GPL + Classpath Exception)
    - Mailing lists: [email protected] (publicly readable archive; JSPA-signed members read/write) and [email protected] (publicly readable archive; any java.net member read/write). All mail sent to jsr344-experts is also sent to users.
    - Issue trackers: spec http://jsf-spec.java.net/issues/ and impl http://jsf.java.net/issues/
    JSF 2.2, which is JSR 344, has a Public Review Draft planned for December 2012, with no need for a Renewal Ballot. The Early Draft Review of JSR 344 was published on December 8, 2011. Interested developers are encouraged to offer their input.
Six Big Ticket Features of JSF 2.2
    Burns summarized the six big ticket features of JSF 2.2:
    - HTML5 Friendly Markup Support (pass-through attributes and elements)
    - Faces Flows
    - Cross Site Request Forgery Protection
    - Loading Facelets via ResourceHandler
    - File Upload Component
    - Multi-Templating
    He explained that he calls it "HTML5 friendly" because there is really nothing HTML5-specific about it (it could be HTML4), but it enables developers to use new elements that are present in HTML5 without having a JSF component library written to take advantage of them specifically. It gives the page author the ability to use plain HTML5 to write their page, while still taking advantage of the server side available in JSF; a minimal sketch follows below. He presented a demo showing JSF 2.2's ability to leverage the expressiveness of HTML5. Burns then explained the significance of Faces Flows, which offer function points and quantify how much work has taken place, something of great value to JSF users. He went on to talk about JSF 2.2's cross-site request forgery protection (CSRF) and offered details about how it protects applications against attack. Then he talked about JSF 2.2's File Upload Component and explained that the final specification will have Ajax and non-Ajax support; the current milestone has non-Ajax support implemented. He then went on to explain its capacity to load Facelets through ResourceHandler: previously, JSF 2.0 added Facelets and ResourceHandler as disparate units; now, in JSF 2.2, the two concepts are unified. Finally, he explained the concept of multi-templating in JSF 2.2 and went on to discuss more medium-level features of the release. For an easy, low-maintenance way of staying in touch with JSF developments, go to JSF's Twitter page, where important updates are offered every month or so.
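    To make the HTML5-friendly markup feature concrete, here is a minimal Facelets page sketch. It is hedged: it assumes the JSF 2.2 namespace as finalized (http://xmlns.jcp.org/jsf), and userBean is a hypothetical backing bean used only for illustration.

```xml
<!DOCTYPE html>
<html xmlns="http://www.w3.org/1999/xhtml"
      xmlns:jsf="http://xmlns.jcp.org/jsf">
<body>
  <!-- A plain HTML5 form: the jsf:* attributes promote each element to a
       server-side JSF component, while HTML5 attributes such as
       type="email" pass through to the rendered markup unchanged. -->
  <form jsf:id="signup">
    <input type="email" jsf:id="email" jsf:value="#{userBean.email}"/>
    <input type="submit" jsf:action="#{userBean.save}" value="Save"/>
  </form>
</body>
</html>
```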


  • Gradle + Robolectric: Where do I put the file org.robolectric.Config.properties?

    - by Rob Hawkins
    I'm trying to set up a test using Robolectric to click on a menu button in this repository. Basic Robolectric tests will run, but I'm not able to run any project-specific test that uses resources, because it says it can't find my AndroidManifest.xml. After running ../gradlew clean check, here's the standard output from the Robolectric HTML report: WARNING: No manifest file found at ./AndroidManifest.xml. Falling back to the Android OS resources only. To remove this warning, annotate your test class with @Config(manifest=Config.NONE). I found these instructions which indicate I should create an org.robolectric.Config.properties file, but I'm not sure where to put it. I've tried everywhere, pretty much, and despite moving the file, the path in the error message is always the same as above (./AndroidManifest.xml). This makes me think the build process has never picked up the settings in the org.robolectric.Config.properties file. I also tried the @Config(manifest="") directive, but this gave me a "cannot find symbol" error. If I move the AndroidManifest.xml into my project directory, then I get an error about it not being able to find the path ./res/values, and I wasn't able to resolve that either. Any ideas?
    Update 1: Thanks Eugen, I'm now using @RunWith(RobolectricGradleTestRunner.class) instead of @RunWith(RobolectricTestRunner.class). Now I get a different error, still occurring on the same line of my BasicTest.java: KeywordList keywordList = Robolectric.buildActivity(KeywordList.class).create().get(); Below are the results from the standard error, standard output, and "failed tests" tab in the Robolectric test report. Note: I also tried substituting in a jar built from the latest Robolectric updates, robolectric-2.2-SNAPSHOT.jar, but still got an error.
    Standard Error
    WARNING: no system properties value for ro.build.date.utc
    Standard Output
    DEBUG: Loading resources for net.frontlinesms.android from ~/workspace-studio/frontlinesms-for-android/FrontlineSMS/build/res/all/debug...
    DEBUG: Loading resources for android from jar:~/.m2/repository/org/robolectric/android-res/4.1.2_r1_rc/android-res-4.1.2_r1_rc-real.jar!/res...
    INFO: no id mapping found for android:drawable/scrollbar_handle_horizontal; assigning ID #0x1140002
    INFO: no id mapping found for android:drawable/scrollbar_handle_vertical; assigning ID #0x1140003
    INFO: no id mapping found for android:color/highlighted_text_dark; assigning ID #0x1140004
    INFO: no id mapping found for android:color/hint_foreground_dark; assigning ID #0x1140005
    INFO: no id mapping found for android:color/link_text_dark; assigning ID #0x1140006
    INFO: no id mapping found for android:color/dim_foreground_dark_disabled; assigning ID #0x1140007
    INFO: no id mapping found for android:color/dim_foreground_dark; assigning ID #0x1140008
    INFO: no id mapping found for android:color/dim_foreground_dark_inverse_disabled; assigning ID #0x1140009
    INFO: no id mapping found for android:color/dim_foreground_dark_inverse; assigning ID #0x114000a
    INFO: no id mapping found for android:color/bright_foreground_dark_inverse; assigning ID #0x114000b
    INFO: no id mapping found for android:layout/text_edit_paste_window; assigning ID #0x114000c
    INFO: no id mapping found for android:layout/text_edit_no_paste_window; assigning ID #0x114000d
    INFO: no id mapping found for android:layout/text_edit_side_paste_window; assigning ID #0x114000e
    INFO: no id mapping found for android:layout/text_edit_side_no_paste_window; assigning ID #0x114000f
    INFO: no id mapping found for android:layout/text_edit_suggestion_item; assigning ID #0x1140010
    Failed Tests
    android.view.InflateException: XML file ~/workspace-studio/frontlinesms-for-android/FrontlineSMS/build/res/all/debug/layout/rule_list.xml line #-1 (sorry, not yet implemented): Error inflating class net.frontlinesms.android.ui.view.ActionBar at android.view.LayoutInflater.createView(LayoutInflater.java:613) at android.view.LayoutInflater.createViewFromTag(LayoutInflater.java:687) at android.view.LayoutInflater.rInflate(LayoutInflater.java:746) at android.view.LayoutInflater.inflate(LayoutInflater.java:489) at android.view.LayoutInflater.inflate(LayoutInflater.java:396) at android.view.LayoutInflater.inflate(LayoutInflater.java:352) at org.robolectric.tester.android.view.RoboWindow.setContentView(RoboWindow.java:82) at org.robolectric.shadows.ShadowActivity.setContentView(ShadowActivity.java:272) at android.app.Activity.setContentView(Activity.java) at net.frontlinesms.android.activity.KeywordList.onCreate(KeywordList.java:70) at android.app.Activity.performCreate(Activity.java:5008) at org.fest.reflect.method.Invoker.invoke(Invoker.java:112) at org.robolectric.util.ActivityController$1.run(ActivityController.java:119) at org.robolectric.shadows.ShadowLooper.runPaused(ShadowLooper.java:256) at org.robolectric.util.ActivityController.create(ActivityController.java:114) at org.robolectric.util.ActivityController.create(ActivityController.java:126) at net.frontlinesms.android.BasicTest.setUp(BasicTest.java:30) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.robolectric.RobolectricTestRunner$2.evaluate(RobolectricTestRunner.java:241) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:271) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:70) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:50) at 
org.junit.runners.ParentRunner$3.run(ParentRunner.java:238) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:63) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:236) at org.junit.runners.ParentRunner.access$000(ParentRunner.java:53) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:229) at org.robolectric.RobolectricTestRunner$1.evaluate(RobolectricTestRunner.java:177) at org.junit.runners.ParentRunner.run(ParentRunner.java:309) at org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecuter.runTestClass(JUnitTestClassExecuter.java:80) at org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecuter.execute(JUnitTestClassExecuter.java:47) at org.gradle.api.internal.tasks.testing.junit.JUnitTestClassProcessor.processTestClass(JUnitTestClassProcessor.java:69) at org.gradle.api.internal.tasks.testing.SuiteTestClassProcessor.processTestClass(SuiteTestClassProcessor.java:49) at org.gradle.messaging.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:35) at org.gradle.messaging.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:24) at org.gradle.messaging.dispatch.ContextClassLoaderDispatch.dispatch(ContextClassLoaderDispatch.java:32) at org.gradle.messaging.dispatch.ProxyDispatchAdapter$DispatchingInvocationHandler.invoke(ProxyDispatchAdapter.java:93) at com.sun.proxy.$Proxy2.processTestClass(Unknown Source) at org.gradle.api.internal.tasks.testing.worker.TestWorker.processTestClass(TestWorker.java:103) at org.gradle.messaging.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:35) at org.gradle.messaging.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:24) at org.gradle.messaging.remote.internal.hub.MessageHub$Handler.run(MessageHub.java:355) at org.gradle.internal.concurrent.DefaultExecutorFactory$StoppableExecutorImpl$1.run(DefaultExecutorFactory.java:66) at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918) at java.lang.Thread.run(Thread.java:680) Caused by: java.lang.reflect.InvocationTargetException at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:39) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:27) at java.lang.reflect.Constructor.newInstance(Constructor.java:513) at android.view.LayoutInflater.$$robo$$LayoutInflater_1d1f_createView(LayoutInflater.java:587) at android.view.LayoutInflater.createView(LayoutInflater.java) at android.view.LayoutInflater.$$robo$$LayoutInflater_1d1f_createViewFromTag(LayoutInflater.java:687) at android.view.LayoutInflater.createViewFromTag(LayoutInflater.java) at android.view.LayoutInflater.$$robo$$LayoutInflater_1d1f_rInflate(LayoutInflater.java:746) at android.view.LayoutInflater.rInflate(LayoutInflater.java) at android.view.LayoutInflater.$$robo$$LayoutInflater_1d1f_inflate(LayoutInflater.java:489) at android.view.LayoutInflater.inflate(LayoutInflater.java) at android.view.LayoutInflater.$$robo$$LayoutInflater_1d1f_inflate(LayoutInflater.java:396) at android.view.LayoutInflater.inflate(LayoutInflater.java) at android.view.LayoutInflater.$$robo$$LayoutInflater_1d1f_inflate(LayoutInflater.java:352) at android.view.LayoutInflater.inflate(LayoutInflater.java) at org.robolectric.tester.android.view.RoboWindow.setContentView(RoboWindow.java:82) at 
org.robolectric.shadows.ShadowActivity.setContentView(ShadowActivity.java:272) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25) at java.lang.reflect.Method.invoke(Method.java:597) at org.robolectric.bytecode.ShadowWrangler$ShadowMethodPlan.run(ShadowWrangler.java:455) at android.app.Activity.setContentView(Activity.java) at net.frontlinesms.android.activity.KeywordList.onCreate(KeywordList.java:70) at android.app.Activity.$$robo$$Activity_c57b_performCreate(Activity.java:5008) at android.app.Activity.performCreate(Activity.java) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25) at java.lang.reflect.Method.invoke(Method.java:597) at org.fest.reflect.method.Invoker.invoke(Invoker.java:112) at org.robolectric.util.ActivityController$1.run(ActivityController.java:119) at org.robolectric.shadows.ShadowLooper.runPaused(ShadowLooper.java:256) at org.robolectric.util.ActivityController.create(ActivityController.java:114) at org.robolectric.util.ActivityController.create(ActivityController.java:126) at net.frontlinesms.android.BasicTest.setUp(BasicTest.java:30) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25) at java.lang.reflect.Method.invoke(Method.java:597) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.robolectric.RobolectricTestRunner$2.evaluate(RobolectricTestRunner.java:241) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:271) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:70) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:50) at org.junit.runners.ParentRunner$3.run(ParentRunner.java:238) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:63) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:236) at org.junit.runners.ParentRunner.access$000(ParentRunner.java:53) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:229) at org.robolectric.RobolectricTestRunner$1.evaluate(RobolectricTestRunner.java:177) at org.junit.runners.ParentRunner.run(ParentRunner.java:309) at org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecuter.runTestClass(JUnitTestClassExecuter.java:80) at org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecuter.execute(JUnitTestClassExecuter.java:47) at org.gradle.api.internal.tasks.testing.junit.JUnitTestClassProcessor.processTestClass(JUnitTestClassProcessor.java:69) at org.gradle.api.internal.tasks.testing.SuiteTestClassProcessor.processTestClass(SuiteTestClassProcessor.java:49) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25) 
at java.lang.reflect.Method.invoke(Method.java:597) at org.gradle.messaging.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:35) at org.gradle.messaging.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:24) at org.gradle.messaging.dispatch.ContextClassLoaderDispatch.dispatch(ContextClassLoaderDispatch.java:32) at org.gradle.messaging.dispatch.ProxyDispatchAdapter$DispatchingInvocationHandler.invoke(ProxyDispatchAdapter.java:93) at com.sun.proxy.$Proxy2.processTestClass(Unknown Source) at org.gradle.api.internal.tasks.testing.worker.TestWorker.processTestClass(TestWorker.java:103) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25) at java.lang.reflect.Method.invoke(Method.java:597) ... 7 more Caused by: android.view.InflateException: XML file ~/workspace-studio/frontlinesms-for-android/FrontlineSMS/build/res/all/debug/layout/actionbar.xml line #-1 (sorry, not yet implemented): Error inflating class android.widget.ProgressBar at android.view.LayoutInflater.createView(LayoutInflater.java:613) at org.robolectric.shadows.RoboLayoutInflater.onCreateView(RoboLayoutInflater.java:38) at android.view.LayoutInflater.onCreateView(LayoutInflater.java:660) at android.view.LayoutInflater.createViewFromTag(LayoutInflater.java:685) at android.view.LayoutInflater.rInflate(LayoutInflater.java:746) at android.view.LayoutInflater.rInflate(LayoutInflater.java:749) at android.view.LayoutInflater.inflate(LayoutInflater.java:489) at android.view.LayoutInflater.inflate(LayoutInflater.java:396) at net.frontlinesms.android.ui.view.ActionBar.<init>(ActionBar.java:65) at android.view.LayoutInflater.createView(LayoutInflater.java:587) at android.view.LayoutInflater.createViewFromTag(LayoutInflater.java:687) at android.view.LayoutInflater.rInflate(LayoutInflater.java:746) at android.view.LayoutInflater.inflate(LayoutInflater.java:489) at android.view.LayoutInflater.inflate(LayoutInflater.java:396) at android.view.LayoutInflater.inflate(LayoutInflater.java:352) at org.robolectric.tester.android.view.RoboWindow.setContentView(RoboWindow.java:82) at org.robolectric.shadows.ShadowActivity.setContentView(ShadowActivity.java:272) at android.app.Activity.setContentView(Activity.java) at net.frontlinesms.android.activity.KeywordList.onCreate(KeywordList.java:70) at android.app.Activity.performCreate(Activity.java:5008) at org.fest.reflect.method.Invoker.invoke(Invoker.java:112) at org.robolectric.util.ActivityController$1.run(ActivityController.java:119) at org.robolectric.shadows.ShadowLooper.runPaused(ShadowLooper.java:256) at org.robolectric.util.ActivityController.create(ActivityController.java:114) at org.robolectric.util.ActivityController.create(ActivityController.java:126) at net.frontlinesms.android.BasicTest.setUp(BasicTest.java:30) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.robolectric.RobolectricTestRunner$2.evaluate(RobolectricTestRunner.java:241) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:271) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:70) at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:50) at org.junit.runners.ParentRunner$3.run(ParentRunner.java:238) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:63) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:236) at org.junit.runners.ParentRunner.access$000(ParentRunner.java:53) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:229) at org.robolectric.RobolectricTestRunner$1.evaluate(RobolectricTestRunner.java:177) at org.junit.runners.ParentRunner.run(ParentRunner.java:309) at org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecuter.runTestClass(JUnitTestClassExecuter.java:80) at org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecuter.execute(JUnitTestClassExecuter.java:47) at org.gradle.api.internal.tasks.testing.junit.JUnitTestClassProcessor.processTestClass(JUnitTestClassProcessor.java:69) at org.gradle.api.internal.tasks.testing.SuiteTestClassProcessor.processTestClass(SuiteTestClassProcessor.java:49) at org.gradle.messaging.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:35) at org.gradle.messaging.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:24) at org.gradle.messaging.dispatch.ContextClassLoaderDispatch.dispatch(ContextClassLoaderDispatch.java:32) at org.gradle.messaging.dispatch.ProxyDispatchAdapter$DispatchingInvocationHandler.invoke(ProxyDispatchAdapter.java:93) at com.sun.proxy.$Proxy2.processTestClass(Unknown Source) at org.gradle.api.internal.tasks.testing.worker.TestWorker.processTestClass(TestWorker.java:103) ... 7 more Caused by: java.lang.reflect.InvocationTargetException at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:39) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:27) at java.lang.reflect.Constructor.newInstance(Constructor.java:513) at android.view.LayoutInflater.$$robo$$LayoutInflater_1d1f_createView(LayoutInflater.java:587) at android.view.LayoutInflater.createView(LayoutInflater.java) at org.robolectric.shadows.RoboLayoutInflater.onCreateView(RoboLayoutInflater.java:38) at android.view.LayoutInflater.$$robo$$LayoutInflater_1d1f_onCreateView(LayoutInflater.java:660) at android.view.LayoutInflater.onCreateView(LayoutInflater.java) at android.view.LayoutInflater.$$robo$$LayoutInflater_1d1f_createViewFromTag(LayoutInflater.java:685) at android.view.LayoutInflater.createViewFromTag(LayoutInflater.java) at android.view.LayoutInflater.$$robo$$LayoutInflater_1d1f_rInflate(LayoutInflater.java:746) at android.view.LayoutInflater.rInflate(LayoutInflater.java) at android.view.LayoutInflater.$$robo$$LayoutInflater_1d1f_rInflate(LayoutInflater.java:749) at android.view.LayoutInflater.rInflate(LayoutInflater.java) at android.view.LayoutInflater.$$robo$$LayoutInflater_1d1f_inflate(LayoutInflater.java:489) at android.view.LayoutInflater.inflate(LayoutInflater.java) at android.view.LayoutInflater.$$robo$$LayoutInflater_1d1f_inflate(LayoutInflater.java:396) at android.view.LayoutInflater.inflate(LayoutInflater.java) at net.frontlinesms.android.ui.view.ActionBar.<init>(ActionBar.java:65) at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:39) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:27) at 
java.lang.reflect.Constructor.newInstance(Constructor.java:513) at android.view.LayoutInflater.$$robo$$LayoutInflater_1d1f_createView(LayoutInflater.java:587) at android.view.LayoutInflater.createView(LayoutInflater.java) at android.view.LayoutInflater.$$robo$$LayoutInflater_1d1f_createViewFromTag(LayoutInflater.java:687) at android.view.LayoutInflater.createViewFromTag(LayoutInflater.java) at android.view.LayoutInflater.$$robo$$LayoutInflater_1d1f_rInflate(LayoutInflater.java:746) at android.view.LayoutInflater.rInflate(LayoutInflater.java) at android.view.LayoutInflater.$$robo$$LayoutInflater_1d1f_inflate(LayoutInflater.java:489) at android.view.LayoutInflater.inflate(LayoutInflater.java) at android.view.LayoutInflater.$$robo$$LayoutInflater_1d1f_inflate(LayoutInflater.java:396) at android.view.LayoutInflater.inflate(LayoutInflater.java) at android.view.LayoutInflater.$$robo$$LayoutInflater_1d1f_inflate(LayoutInflater.java:352) at android.view.LayoutInflater.inflate(LayoutInflater.java) at org.robolectric.tester.android.view.RoboWindow.setContentView(RoboWindow.java:82) at org.robolectric.shadows.ShadowActivity.setContentView(ShadowActivity.java:272) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25) at java.lang.reflect.Method.invoke(Method.java:597) at org.robolectric.bytecode.ShadowWrangler$ShadowMethodPlan.run(ShadowWrangler.java:455) at android.app.Activity.setContentView(Activity.java) at net.frontlinesms.android.activity.KeywordList.onCreate(KeywordList.java:70) at android.app.Activity.$$robo$$Activity_c57b_performCreate(Activity.java:5008) at android.app.Activity.performCreate(Activity.java) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25) at java.lang.reflect.Method.invoke(Method.java:597) at org.fest.reflect.method.Invoker.invoke(Invoker.java:112) at org.robolectric.util.ActivityController$1.run(ActivityController.java:119) at org.robolectric.shadows.ShadowLooper.runPaused(ShadowLooper.java:256) at org.robolectric.util.ActivityController.create(ActivityController.java:114) at org.robolectric.util.ActivityController.create(ActivityController.java:126) at net.frontlinesms.android.BasicTest.setUp(BasicTest.java:30) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25) at java.lang.reflect.Method.invoke(Method.java:597) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.robolectric.RobolectricTestRunner$2.evaluate(RobolectricTestRunner.java:241) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:271) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:70) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:50) at org.junit.runners.ParentRunner$3.run(ParentRunner.java:238) at 
org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:63) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:236) at org.junit.runners.ParentRunner.access$000(ParentRunner.java:53) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:229) at org.robolectric.RobolectricTestRunner$1.evaluate(RobolectricTestRunner.java:177) at org.junit.runners.ParentRunner.run(ParentRunner.java:309) at org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecuter.runTestClass(JUnitTestClassExecuter.java:80) at org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecuter.execute(JUnitTestClassExecuter.java:47) at org.gradle.api.internal.tasks.testing.junit.JUnitTestClassProcessor.processTestClass(JUnitTestClassProcessor.java:69) at org.gradle.api.internal.tasks.testing.SuiteTestClassProcessor.processTestClass(SuiteTestClassProcessor.java:49) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25) at java.lang.reflect.Method.invoke(Method.java:597) at org.gradle.messaging.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:35) at org.gradle.messaging.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:24) at org.gradle.messaging.dispatch.ContextClassLoaderDispatch.dispatch(ContextClassLoaderDispatch.java:32) at org.gradle.messaging.dispatch.ProxyDispatchAdapter$DispatchingInvocationHandler.invoke(ProxyDispatchAdapter.java:93) at com.sun.proxy.$Proxy2.processTestClass(Unknown Source) at org.gradle.api.internal.tasks.testing.worker.TestWorker.processTestClass(TestWorker.java:103) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25) at java.lang.reflect.Method.invoke(Method.java:597) ... 
7 more Caused by: java.lang.ClassCastException: org.robolectric.res.AttrData cannot be cast to org.robolectric.res.StyleData at org.robolectric.shadows.ShadowAssetManager$StyleResolver.getParent(ShadowAssetManager.java:353) at org.robolectric.shadows.ShadowAssetManager$StyleResolver.getAttrValue(ShadowAssetManager.java:336) at org.robolectric.shadows.ShadowResources.findAttributeValue(ShadowResources.java:259) at org.robolectric.shadows.ShadowResources.attrsToTypedArray(ShadowResources.java:188) at org.robolectric.shadows.ShadowResources.access$000(ShadowResources.java:51) at org.robolectric.shadows.ShadowResources$ShadowTheme.obtainStyledAttributes(ShadowResources.java:460) at android.content.res.Resources$Theme.obtainStyledAttributes(Resources.java) at android.content.Context.obtainStyledAttributes(Context.java:374) at android.view.View.__constructor__(View.java:3297) at org.fest.reflect.method.Invoker.invoke(Invoker.java:112) at org.robolectric.shadows.ShadowView.__constructor__(ShadowView.java:68) at android.view.View.<init>(View.java:3295) at android.widget.ProgressBar.<init>(ProgressBar.java:253) at android.widget.ProgressBar.<init>(ProgressBar.java:246) at android.widget.ProgressBar.<init>(ProgressBar.java:242) at android.view.LayoutInflater.createView(LayoutInflater.java:587) at org.robolectric.shadows.RoboLayoutInflater.onCreateView(RoboLayoutInflater.java:38) at android.view.LayoutInflater.onCreateView(LayoutInflater.java:660) at android.view.LayoutInflater.createViewFromTag(LayoutInflater.java:685) at android.view.LayoutInflater.rInflate(LayoutInflater.java:746) at android.view.LayoutInflater.rInflate(LayoutInflater.java:749) at android.view.LayoutInflater.inflate(LayoutInflater.java:489) at android.view.LayoutInflater.inflate(LayoutInflater.java:396) at net.frontlinesms.android.ui.view.ActionBar.<init>(ActionBar.java:65) at android.view.LayoutInflater.createView(LayoutInflater.java:587) at android.view.LayoutInflater.createViewFromTag(LayoutInflater.java:687) at android.view.LayoutInflater.rInflate(LayoutInflater.java:746) at android.view.LayoutInflater.inflate(LayoutInflater.java:489) at android.view.LayoutInflater.inflate(LayoutInflater.java:396) at android.view.LayoutInflater.inflate(LayoutInflater.java:352) at org.robolectric.tester.android.view.RoboWindow.setContentView(RoboWindow.java:82) [truncated, hit stack overflow character limit...]
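    For reference, the @Config mechanism discussed above looks like the following minimal sketch. It is hedged: it assumes Robolectric 2.x, the manifest path and test body are illustrative, and it does not by itself resolve the question of where org.robolectric.Config.properties is picked up from (that file, when found on the test classpath, supplies the same keys such as manifest= as the annotation).

```java
import org.junit.Test;
import org.junit.runner.RunWith;
import org.robolectric.Robolectric;
import org.robolectric.RobolectricTestRunner;
import org.robolectric.annotation.Config;

import net.frontlinesms.android.activity.KeywordList;

import static org.junit.Assert.assertNotNull;

@RunWith(RobolectricTestRunner.class)
// Point Robolectric at an explicit manifest instead of the default
// ./AndroidManifest.xml; the path here is illustrative.
@Config(manifest = "src/main/AndroidManifest.xml")
public class KeywordListTest {

    @Test
    public void activityShouldInflate() {
        // The same call that fails in BasicTest.setUp() above.
        KeywordList activity =
                Robolectric.buildActivity(KeywordList.class).create().get();
        assertNotNull(activity);
    }
}
```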


  • nvidia driver problem after update

    - by baltasar
I know there are a lot of posts about the nvidia driver, but I have not been able to solve this. I updated my configuration yesterday, September 27th, and it seems that a new nvidia driver was involved. The update completed with an error and invited me to send a bug report automatically. I sent it, but the tool finally said that there was no place to send the bug report to. Today I updated again, and a new kernel was there. Then I rebooted. I found a desktop in 640x480 with a horrible, unreadable font. I ran jockey and tried to go back to another version of the nvidia driver, but it seems that suddenly none of the drivers listed there is valid. I have the x-swat repository enabled, because I remember that I had a similar issue in the past that could only be solved with the newest driver. jockey log:
2012-09-28 11:22:24,747 DEBUG: Selecting previously unselected package nvidia-173.
(Reading database ... 536311 files and directories currently installed.)
Unpacking nvidia-173 (from .../nvidia-173_173.14.35-0ubuntu0.2_i386.deb) ...
Processing triggers for man-db ...
Setting up nvidia-173 (173.14.35-0ubuntu0.2) ...
Loading new nvidia-173-173.14.35 DKMS files...
Building only for 3.2.0-32-generic-pae
Building for architecture i686
Building initial module for 3.2.0-32-generic-pae
Error! Bad return status for module build on kernel: 3.2.0-32-generic-pae (i686)
Consult /var/lib/dkms/nvidia-173/173.14.35/build/make.log for more information.
Processing triggers for bamfdaemon ...
Rebuilding /usr/share/applications/bamf.index...
2012-09-28 11:22:25,143 WARNING: modinfo for module nvidia_173 failed: ERROR: modinfo: could not find module nvidia_173
2012-09-28 11:22:25,143 ERROR: XorgDriverHandler.enable(): package or module not installed, aborting
2012-09-28 11:22:53,613 DEBUG: NVidia(nvidia_173).enabled(): target_alt /usr/lib/nvidia-173/ld.so.conf current_alt /usr/lib/nvidia-173/ld.so.conf other target alt /usr/lib/nvidia-173/alt_ld.so.conf other current alt /usr/lib/nvidia-173/alt_ld.so.conf
2012-09-28 11:22:53,613 DEBUG: KMH enabled: False
2012-09-28 11:22:53,629 DEBUG: NVidia(nvidia_173).enabled(): target_alt /usr/lib/nvidia-173/ld.so.conf current_alt /usr/lib/nvidia-173/ld.so.conf other target alt /usr/lib/nvidia-173/alt_ld.so.conf other current alt /usr/lib/nvidia-173/alt_ld.so.conf
2012-09-28 11:22:53,629 DEBUG: KMH enabled: False
2012-09-28 11:23:01,943 DEBUG: NVidia(nvidia_173).enabled(): target_alt /usr/lib/nvidia-173/ld.so.conf current_alt /usr/lib/nvidia-173/ld.so.conf other target alt /usr/lib/nvidia-173/alt_ld.so.conf other current alt /usr/lib/nvidia-173/alt_ld.so.conf
2012-09-28 11:23:01,943 DEBUG: KMH enabled: False
2012-09-28 11:23:01,962 DEBUG: NVidia(nvidia_173).enabled(): target_alt /usr/lib/nvidia-173/ld.so.conf current_alt /usr/lib/nvidia-173/ld.so.conf other target alt /usr/lib/nvidia-173/alt_ld.so.conf other current alt /usr/lib/nvidia-173/alt_ld.so.conf
2012-09-28 11:23:01,963 DEBUG: KMH enabled: False
2012-09-28 11:23:01,998 DEBUG: NVidia(nvidia_173_updates).enabled(): target_alt None current_alt /usr/lib/nvidia-173/ld.so.conf other target alt None other current alt /usr/lib/nvidia-173/alt_ld.so.conf
2012-09-28 11:23:01,998 DEBUG: nvidia_173_updates is not the alternative in use
2012-09-28 11:23:02,044 DEBUG: NVidia(nvidia_current).enabled(): target_alt /usr/lib/nvidia-current/ld.so.conf current_alt /usr/lib/nvidia-173/ld.so.conf other target alt /usr/lib/nvidia-current/alt_ld.so.conf other current alt /usr/lib/nvidia-173/alt_ld.so.conf
2012-09-28 11:23:02,044 DEBUG: nvidia_current is not the alternative in use
2012-09-28 11:23:02,066 DEBUG: NVidia(nvidia_current).enabled(): target_alt /usr/lib/nvidia-current/ld.so.conf current_alt /usr/lib/nvidia-173/ld.so.conf other target alt /usr/lib/nvidia-current/alt_ld.so.conf other current alt /usr/lib/nvidia-173/alt_ld.so.conf
2012-09-28 11:23:02,066 DEBUG: nvidia_current is not the alternative in use
2012-09-28 11:23:02,106 DEBUG: NVidia(nvidia_current_updates).enabled(): target_alt None current_alt /usr/lib/nvidia-173/ld.so.conf other target alt None other current alt /usr/lib/nvidia-173/alt_ld.so.conf
2012-09-28 11:23:02,106 DEBUG: nvidia_current_updates is not the alternative in use
2012-09-28 11:23:02,157 DEBUG: NVidia(nvidia_173).enabled(): target_alt /usr/lib/nvidia-173/ld.so.conf current_alt /usr/lib/nvidia-173/ld.so.conf other target alt /usr/lib/nvidia-173/alt_ld.so.conf other current alt /usr/lib/nvidia-173/alt_ld.so.conf
2012-09-28 11:23:02,157 DEBUG: KMH enabled: False
2012-09-28 11:23:02,177 DEBUG: NVidia(nvidia_173).enabled(): target_alt /usr/lib/nvidia-173/ld.so.conf current_alt /usr/lib/nvidia-173/ld.so.conf other target alt /usr/lib/nvidia-173/alt_ld.so.conf other current alt /usr/lib/nvidia-173/alt_ld.so.conf
2012-09-28 11:23:02,178 DEBUG: KMH enabled: False
2012-09-28 11:23:02,245 DEBUG: NVidia(nvidia_173).enabled(): target_alt /usr/lib/nvidia-173/ld.so.conf current_alt /usr/lib/nvidia-173/ld.so.conf other target alt /usr/lib/nvidia-173/alt_ld.so.conf other current alt /usr/lib/nvidia-173/alt_ld.so.conf
2012-09-28 11:23:02,245 DEBUG: KMH enabled: False
2012-09-28 11:23:02,272 DEBUG: NVidia(nvidia_173).enabled(): target_alt /usr/lib/nvidia-173/ld.so.conf current_alt /usr/lib/nvidia-173/ld.so.conf other target alt /usr/lib/nvidia-173/alt_ld.so.conf other current alt /usr/lib/nvidia-173/alt_ld.so.conf
2012-09-28 11:23:02,273 DEBUG: KMH enabled: False
2012-09-28 11:23:02,303 DEBUG: NVidia(nvidia_173_updates).enabled(): target_alt None current_alt /usr/lib/nvidia-173/ld.so.conf other target alt None other current alt /usr/lib/nvidia-173/alt_ld.so.conf
2012-09-28 11:23:02,303 DEBUG: nvidia_173_updates is not the alternative in use
2012-09-28 11:23:02,330 DEBUG: NVidia(nvidia_current).enabled(): target_alt /usr/lib/nvidia-current/ld.so.conf current_alt /usr/lib/nvidia-173/ld.so.conf other target alt /usr/lib/nvidia-current/alt_ld.so.conf other current alt /usr/lib/nvidia-173/alt_ld.so.conf
2012-09-28 11:23:02,410 DEBUG: nvidia_current is not the alternative in use
2012-09-28 11:23:02,427 DEBUG: NVidia(nvidia_current).enabled(): target_alt /usr/lib/nvidia-current/ld.so.conf current_alt /usr/lib/nvidia-173/ld.so.conf other target alt /usr/lib/nvidia-current/alt_ld.so.conf other current alt /usr/lib/nvidia-173/alt_ld.so.conf
2012-09-28 11:23:02,428 DEBUG: nvidia_current is not the alternative in use
2012-09-28 11:23:02,457 DEBUG: NVidia(nvidia_current_updates).enabled(): target_alt None current_alt /usr/lib/nvidia-173/ld.so.conf other target alt None other current alt /usr/lib/nvidia-173/alt_ld.so.conf
2012-09-28 11:23:02,458 DEBUG: nvidia_current_updates is not the alternative in use
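The "Error! Bad return status for module build" line above is the real failure: DKMS could not compile the nvidia-173 module against the new 3.2.0-32 kernel, which is why jockey then rejects every driver it lists. A minimal sketch for inspecting and retrying the build by hand, assuming the module version and kernel name taken from the log above (not verified on this machine):
# Read the compiler errors DKMS left behind (path taken from the log)
cat /var/lib/dkms/nvidia-173/173.14.35/build/make.log
# Retry the module build against the running kernel by hand
sudo dkms build -m nvidia-173 -v 173.14.35 -k 3.2.0-32-generic-pae
If make.log shows compile errors, the legacy 173 driver likely does not support this kernel and a newer driver package is the way out.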

    Read the article

  • Can't correctly install Lazarus

    - by user206316
I have a little problem with installing and running Lazarus. I just upgraded Ubuntu from 13.04 to 13.10. When I had 13.04 I could install Lazarus without any problems, but in 13.10 Lazarus magically disappeared, and when I tried to install it from the Ubuntu Software Center it said something like lazarus-ide-0.9.30.4 does not exist in my software sources. After some research on the net I tried deleting all files from earlier installations, downloading the deb packages from SourceForge and installing them, but when I want to install fpc-src, an error shows up with this output:
(Reading database ... 100%
(Reading database ... 239063 files and directories currently installed.)
Unpacking fpc-src (from .../Stiahnut/Lazarus/fpc-src.deb) ...
dpkg: error processing /home/richi/Stiahnut/Lazarus/fpc-src.deb (--install):
trying to overwrite '/usr/share/fpcsrc/2.6.2/rtl/nativent/tthread.inc', which is also in package fpc-source-2.6.2 2.6.2-5
dpkg-deb (subprocess): decompressing archive member: internal gzip write error: Broken pipe
dpkg-deb: error: subprocess <decompress> returned error exit status 2
dpkg-deb (subprocess): cannot copy archive member from '/home/richi/Stiahnut/Lazarus/fpc-src.deb' to decompressor pipe: failed to write (Broken pipe)
When I start Lazarus, it of course tells me that it can't find the FPC compiler and the FPC sources. So, please, I really need this program for school and I don't want to reinstall the OS anymore or anything like that :( (Ubuntu 13.10 64bit) P.S.: I'm not skilled in Linux, so if you know some commands to fix it, just write them ready for copy and paste :) P.P.S.: Sorry for the bad English, I'm Slovak xD P.P.P.S.: Thanks so much for any answers. Update: output from sudo dpkg -l | grep "^rc": richi@Richi-Ubuntu:~/lazarus1.0.12$ sudo dpkg -l | grep "^rc" rc account-plugin-generic-oauth 0.10bzr13.03.26-0ubuntu1.1 amd64 GNOME Control Center account plugin for single signon - generic OAuth rc appmenu-gtk:amd64 12.10.3daily13.04.03-0ubuntu1 amd64 Export GTK menus over DBus rc appmenu-gtk3:amd64 12.10.3daily13.04.03-0ubuntu1 amd64 Export GTK menus over DBus rc fp-compiler-2.6.0 2.6.0-9 amd64 Free Pascal - compiler rc fp-utils-2.6.0 2.6.0-9 amd64 Free Pascal - utilities rc lazarus-ide-0.9.30.4 0.9.30.4-4 amd64 IDE for Free Pascal - common IDE files rc lazarus-ide-1.0.10 1.0.10+dfsg-1 amd64 IDE for Free Pascal - common IDE files rc lcl-utils-0.9.30.4 0.9.30.4-4 amd64 Lazarus Components Library - command line build tools rc lcl-utils-1.0.10 1.0.10+dfsg-1 amd64 Lazarus Components Library - command line build tools rc libbamf3-1:amd64 0.4.0daily13.06.19~13.04-0ubuntu1 amd64 Window matching library - shared library rc libboost-filesystem1.49.0 1.49.0-4 amd64 filesystem operations (portable paths, iteration over directories, etc) in C++ rc libboost-signals1.49.0 1.49.0-4 amd64 managed signals and slots library for C++ rc libboost-system1.49.0 1.49.0-4 amd64 Operating system (e.g.
diagnostics support) library rc libboost-thread1.49.0 1.49.0-4 amd64 portable C++ multi-threading rc libbrlapi0.5:amd64 4.4-8ubuntu4 amd64 braille display access via BRLTTY - shared library rc libcamel-1.2-40 3.6.4-0ubuntu1.1 amd64 Evolution MIME message handling library rc libcolumbus0-0 0.4.0daily13.04.16~13.04-0ubuntu1 amd64 error tolerant matching engine - shared library rc libdns95 1:9.9.2.dfsg.P1-2ubuntu2.1 amd64 DNS Shared Library used by BIND rc libdvbpsi7 0.2.2-1 amd64 library for MPEG TS and DVB PSI tables decoding and generating rc libebackend-1.2-5 3.6.4-0ubuntu1.1 amd64 Utility library for evolution data servers rc libedata-book-1.2-15 3.6.4-0ubuntu1.1 amd64 Backend library for evolution address books rc libedata-cal-1.2-18 3.6.4-0ubuntu1.1 amd64 Backend library for evolution calendars rc libgc1c3:amd64 1:7.2d-0ubuntu5 amd64 conservative garbage collector for C and C++ rc libgd2-xpm:amd64 2.0.36~rc1~dfsg-6.1ubuntu1 amd64 GD Graphics Library version 2 rc libgd2-xpm:i386 2.0.36~rc1~dfsg-6.1ubuntu1 i386 GD Graphics Library version 2 rc libgnome-desktop-3-4 3.6.3-0ubuntu1 amd64 Utility library for loading .desktop files - runtime files rc libgphoto2-2:amd64 2.4.14-2 amd64 gphoto2 digital camera library rc libgphoto2-2:i386 2.4.14-2 i386 gphoto2 digital camera library rc libgphoto2-port0:amd64 2.4.14-2 amd64 gphoto2 digital camera port library rc libgphoto2-port0:i386 2.4.14-2 i386 gphoto2 digital camera port library rc libgtksourceview-3.0-0:amd64 3.6.3-0ubuntu1 amd64 shared libraries for the GTK+ syntax highlighting widget rc libgweather-3-1 3.6.2-0ubuntu1 amd64 GWeather shared library rc libharfbuzz0:amd64 0.9.13-1 amd64 OpenType text shaping engine rc libibus-1.0-0:amd64 1.4.2-0ubuntu2 amd64 Intelligent Input Bus - shared library rc libical0 0.48-2 amd64 iCalendar library implementation in C (runtime) rc libimobiledevice3 1.1.4-1ubuntu6.2 amd64 Library for communicating with the iPhone and iPod Touch rc libisc92 1:9.9.2.dfsg.P1-2ubuntu2.1 amd64 ISC Shared Library used by BIND rc libkms1:amd64 2.4.46-1 amd64 Userspace interface to kernel DRM buffer management rc libllvm3.2:i386 1:3.2repack-7ubuntu1 i386 Low-Level Virtual Machine (LLVM), runtime library rc libmikmod2:amd64 3.1.12-5 amd64 Portable sound library rc libpackagekit-glib2-14:amd64 0.7.6-3ubuntu1 amd64 Library for accessing PackageKit using GLib rc libpoppler28:amd64 0.20.5-1ubuntu3 amd64 PDF rendering library rc libraw5:amd64 0.14.7-0ubuntu1.13.04.2 amd64 raw image decoder library rc librhythmbox-core6 2.98-0ubuntu5 amd64 support library for the rhythmbox music player rc libsdl-mixer1.2:amd64 1.2.12-7ubuntu1 amd64 Mixer library for Simple DirectMedia Layer 1.2, libraries rc libsnmp15 5.4.3~dfsg-2.7ubuntu1 amd64 SNMP (Simple Network Management Protocol) library rc libsyncdaemon-1.0-1 4.2.0-0ubuntu1 amd64 Ubuntu One synchronization daemon library rc libunity-core-6.0-5 7.0.0daily13.06.19~13.04-0ubuntu1 amd64 Core library for the Unity interface. 
rc libusb-0.1-4:i386 2:0.1.12-23.2ubuntu1 i386 userspace USB programming library rc libwayland0:amd64 1.0.5-0ubuntu1 amd64 wayland compositor infrastructure - shared libraries rc linux-image-3.8.0-19-generic 3.8.0-19.30 amd64 Linux kernel image for version 3.8.0 on 64 bit x86 SMP rc linux-image-3.8.0-31-generic 3.8.0-31.46 amd64 Linux kernel image for version 3.8.0 on 64 bit x86 SMP rc linux-image-extra-3.8.0-19-generic 3.8.0-19.30 amd64 Linux kernel image for version 3.8.0 on 64 bit x86 SMP rc linux-image-extra-3.8.0-31-generic 3.8.0-31.46 amd64 Linux kernel image for version 3.8.0 on 64 bit x86 SMP rc screen-resolution-extra 0.15ubuntu1 all Extension for the GNOME screen resolution applet rc unity-common 7.0.0daily13.06.19~13.04-0ubuntu1 all Common files for the Unity interface.
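For what it's worth, the dpkg output points at a plain file conflict: the fpc-src.deb from SourceForge ships /usr/share/fpcsrc/2.6.2/... files that the Ubuntu package fpc-source-2.6.2 already owns. A hedged sketch of one common way out, removing the distro package before installing the SourceForge one (package name and path taken from the error above, not verified on this machine):
sudo apt-get remove fpc-source-2.6.2
sudo dpkg -i /home/richi/Stiahnut/Lazarus/fpc-src.deb
Alternatively, dpkg -i --force-overwrite <deb> lets the new package overwrite the old files in place, at the cost of muddying dpkg's bookkeeping for the older package.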

    Read the article

  • .NET 3.5 Installation Problems in Windows 8

    - by Rick Strahl
Windows 8 installs with .NET 4.5. A default installation of Windows 8 doesn't seem to include .NET 3.0 or 3.5, although .NET 2.0 does seem to be available by default (presumably because Windows has app dependencies on it). I ran into some pretty nasty compatibility issues regarding .NET 3.5, which I'll describe in this post. I'll preface this by saying that depending on how you install Windows 8 you may not run into these issues. In fact, it's probably a special case, but one that might be common with the developer folks reading my blog. Specifically, it's the install order that screwed things up for me - installing Visual Studio before explicitly installing .NET 3.5 from Windows Features. If you install Visual Studio 2010, I highly recommend you install .NET 3.5 from Windows Features BEFORE you install Visual Studio 2010 and save yourself the trouble I went through. So when I installed Windows 8 and then looked at the features to install after the fact in the Windows Features dialog, I thought: .NET 3.5 - who needs it. I'd be happy not to have to install .NET 3.5, but unfortunately I found out quite a while after the initial installation that one of my applications/tools (DevExpress's awesome CodeRush) depends on it and won't install without it.
Enabling .NET 3.5 in Windows 8
If you want to run .NET 3.5 on Windows 8, don't download an installer - those installers don't work on Windows 8, and you don't need to do this because you can use the Windows Features dialog to enable .NET 3.5. And that *should* do the trick. If you do this before you install other apps that require .NET 3.5 and would otherwise install a non-SP1 version of it, you are going to have no problems. Unfortunately for me, even after I had installed the above, when I ran the CodeRush installer I still got a lovely dialog demanding .NET 3.5 SP1. Now I double-checked whether .NET 3.5 is installed - it is, both for 32 bit and 64 bit. I went as far as creating a small .NET console app and running it to verify that it actually runs. And it does… So naturally I thought the CodeRush installer was a little whacky. After some back and forth, Alex Skorkin on Twitter pointed me in the right direction: he asked me to look in the registry for exact info on which version of .NET 3.5 is installed, here: HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\NET Framework Setup\NDP, where I found that .NET 3.5 SP1 was installed. This is the 64 bit key, which looks all correct. However, when I looked under the 32 bit node I found: HKEY_LOCAL_MACHINE\SOFTWARE\Wow6432Node\Microsoft\NET Framework Setup\NDP\v3.5. Notice that the service pack number is set to 0, rather than 1 (which it was for the 64 bit install), which is what the installer requires. So to summarize: the 64 bit version is installed with SP1, the 32 bit version is not. Uhm, OK… thanks for that! Easy to fix, you say - just install SP1. Nope, not so easy, because the standalone installer doesn't work on Windows 8. I can't get either the .NET 3.5 installer or the SP1 installer to even launch. They simply start and hang (or exit immediately) without messages. I also tried to get Windows to update .NET 3.5 by checking for Windows Updates, which should pick up on the dated version of .NET 3.5 and pull down SP1, but that's also a no go. Check for Updates doesn't bring down any updates for me yet. I'm sure at some random point in the future Windows will deem it necessary to update .NET 3.5 to SP1, but at this point it's not letting me coerce it to do it explicitly.
How did this happen
I'm not sure exactly about cause and effect here, but I suspect the story goes like this:
- Installed Windows 8 without support for .NET 3.5
- Installed Visual Studio 2010, which installs .NET 3.5 (no SP)
I now had .NET 3.5 installed, but without SP1. I then:
- Tried to install CodeRush - Error: .NET 3.5 SP1 required
- Enabled .NET 3.5 in Windows Features
I figured enabling the .NET 3.5 Windows feature would do the trick. But still no go. Now I suspect Visual Studio installed the 32 bit version of .NET 3.5 on my machine, and Windows Features detected the previous install and didn't reinstall it. This left at least the 32 bit install with no SP1.
How to Fix it
My final solution was to completely uninstall .NET 3.5 *and* to reboot:
- Go to Windows Features
- Uncheck the .NET Framework 3.5
- Restart Windows
- Go to Windows Features
- Check .NET Framework 3.5
And voila, I now have a proper installation of .NET 3.5. I tried this before, but without the reboot step in between, which did not work. Make sure you reboot between uninstalling and reinstalling .NET 3.5!
More Problems
The above fixed me right up, but while looking for a solution it seems that a lot of people are also having problems with .NET 3.5 installing properly from the Windows Features dialog. The problem there is that the feature wasn't properly loading from the installer disks or wasn't downloading the proper components for updates. It turns out you can explicitly install Windows features using the DISM tool in Windows:
dism.exe /online /enable-feature /featurename:NetFX3 /Source:f:\sources\sxs
You can try this without the /Source flag first, which uses the hidden Windows installer files if you kept those. Otherwise insert the DVD or ISO and point at the \sources\sxs path where the installer lives. This also gives you a little more information if something does go wrong.
© Rick Strahl, West Wind Technologies, 2005-2012. Posted in Windows, .NET
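If you want to check for the 32 bit/64 bit SP mismatch described above without hunting through regedit, a quick sketch using the stock reg.exe tool (registry paths as given in the post) is:
reg query "HKLM\SOFTWARE\Microsoft\NET Framework Setup\NDP\v3.5" /v SP
reg query "HKLM\SOFTWARE\Wow6432Node\Microsoft\NET Framework Setup\NDP\v3.5" /v SP
Both queries should report SP as 0x1 on a healthy install; a 0x0 in the second one reproduces exactly the condition that trips the CodeRush installer.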

    Read the article

  • Installing gtk-config and/or fsv in Ubuntu 10.10

    - by Wayne Werner
    Hi, I'm trying to install the File System Visualizer (think "It's a UNIX System! I know this!" from Jurassic Park) on Ubuntu 10.10. I've got the .tar.gz downloaded, and extracted. However, when I ./configure, I get this output: loading cache ./config.cache checking for a BSD compatible install... /usr/bin/install -c checking whether build environment is sane... yes checking whether make sets ${MAKE}... yes checking for working aclocal... found checking for working autoconf... found checking for working automake... found checking for working autoheader... found checking for working makeinfo... missing checking for gcc... gcc checking whether the C compiler (gcc ) works... yes checking whether the C compiler (gcc ) is a cross-compiler... no checking whether we are using GNU C... yes checking whether gcc accepts -g... yes checking how to run the C preprocessor... gcc -E checking for ranlib... ranlib checking for POSIXized ISC... no checking for dirent.h that defines DIR... yes checking for opendir in -ldir... no checking for ANSI C header files... yes checking whether time.h and sys/time.h may both be included... yes checking for strings.h... yes checking for sys/time.h... yes checking for unistd.h... yes checking for working const... yes checking for mode_t... yes checking for uid_t in sys/types.h... yes checking for pid_t... yes checking for size_t... yes checking for comparison_fn_t... yes checking for st_blocks in struct stat... yes checking whether struct tm is in sys/time.h or time.h... time.h checking for working alloca.h... yes checking for alloca... yes checking for working fnmatch... yes checking for strftime... yes checking for getcwd... yes checking for gettimeofday... yes checking for mktime... yes checking for strcspn... yes checking for strdup... yes checking for strspn... yes checking for strtod... yes checking for strtoul... yes checking for scandir... yes checking for inline... inline checking for off_t... yes checking for unistd.h... (cached) yes checking for getpagesize... yes checking for working mmap... yes checking for argz.h... yes checking for limits.h... yes checking for locale.h... yes checking for nl_types.h... yes checking for malloc.h... yes checking for string.h... yes checking for unistd.h... (cached) yes checking for sys/param.h... yes checking for getcwd... (cached) yes checking for munmap... yes checking for putenv... yes checking for setenv... yes checking for setlocale... yes checking for strchr... yes checking for strcasecmp... yes checking for strdup... (cached) yes checking for __argz_count... yes checking for __argz_stringify... yes checking for __argz_next... yes checking for stpcpy... yes checking for LC_MESSAGES... yes checking whether NLS is requested... yes checking whether included gettext is requested... no checking for libintl.h... yes checking for gettext in libc... yes checking for msgfmt... /usr/bin/msgfmt checking for dcgettext... yes checking for gmsgfmt... /usr/bin/msgfmt checking for xgettext... /usr/bin/xgettext checking for gtk-config... no checking for GTK - version >= 1.2.1... no *** The gtk-config script installed by GTK could not be found *** If GTK was installed in PREFIX, make sure PREFIX/bin is in *** your path, or set the GTK_CONFIG environment variable to the *** full path to gtk-config. configure: error: Cannot find proper GTK+ version Obviously it's looking for gtk-config. However, apparently it doesn't exist in the repos anymore. Then this post mentioned that gtkglarea solved their problem, as mentioned in this file. 
Of course that poster neatly forgets to mention exactly what and how gtkglarea solved their problem, and Google is mostly devoid of information on the problem. So I come here asking for help! I would like to install fsv, but it tells me gtk-config doesn't exist. How can I fix this problem in Ubuntu 10.10? Thanks!
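A note that may help here: gtk-config shipped with GTK+ 1.x and was dropped when GTK+ 2 moved to pkg-config, which is why Ubuntu 10.10 no longer carries it. If the source can be built against GTK+ 2 (there appears to be a GTK+ 2 port of fsv, fsv2), the equivalent compiler and linker flags come from pkg-config; a sketch:
pkg-config --cflags --libs gtk+-2.0
The configure output above also says the GTK_CONFIG environment variable is honored, so pointing GTK_CONFIG at a small wrapper script around the pkg-config call can occasionally satisfy older configure scripts; this is untested for fsv specifically.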

    Read the article

  • NoSQL Memcached API for MySQL: Latest Updates

    - by Mat Keep
    With data volumes exploding, it is vital to be able to ingest and query data at high speed. For this reason, MySQL has implemented NoSQL interfaces directly to the InnoDB and MySQL Cluster (NDB) storage engines, which bypass the SQL layer completely. Without SQL parsing and optimization, Key-Value data can be written directly to MySQL tables up to 9x faster, while maintaining ACID guarantees. In addition, users can continue to run complex queries with SQL across the same data set, providing real-time analytics to the business or anonymizing sensitive data before loading to big data platforms such as Hadoop, while still maintaining all of the advantages of their existing relational database infrastructure. This and more is discussed in the latest Guide to MySQL and NoSQL where you can learn more about using the APIs to scale new generations of web, cloud, mobile and social applications on the world's most widely deployed open source database The native Memcached API is part of the MySQL 5.6 Release Candidate, and is already available in the GA release of MySQL Cluster. By using the ubiquitous Memcached API for writing and reading data, developers can preserve their investments in Memcached infrastructure by re-using existing Memcached clients, while also eliminating the need for application changes. Speed, when combined with flexibility, is essential in the world of growing data volumes and variability. Complementing NoSQL access, support for on-line DDL (Data Definition Language) operations in MySQL 5.6 and MySQL Cluster enables DevOps teams to dynamically update their database schema to accommodate rapidly changing requirements, such as the need to capture additional data generated by their applications. These changes can be made without database downtime. Using the Memcached interface, developers do not need to define a schema at all when using MySQL Cluster. Lets look a little more closely at the Memcached implementations for both InnoDB and MySQL Cluster. Memcached Implementation for InnoDB The Memcached API for InnoDB is previewed as part of the MySQL 5.6 Release Candidate. As illustrated in the following figure, Memcached for InnoDB is implemented via a Memcached daemon plug-in to the mysqld process, with the Memcached protocol mapped to the native InnoDB API. Figure 1: Memcached API Implementation for InnoDB With the Memcached daemon running in the same process space, users get very low latency access to their data while also leveraging the scalability enhancements delivered with InnoDB and a simple deployment and management model. Multiple web / application servers can remotely access the Memcached / InnoDB server to get direct access to a shared data set. With simultaneous SQL access, users can maintain all the advanced functionality offered by InnoDB including support for Foreign Keys, XA transactions and complex JOIN operations. Benchmarks demonstrate that the NoSQL Memcached API for InnoDB delivers up to 9x higher performance than the SQL interface when inserting new key/value pairs, with a single low-end commodity server supporting nearly 70,000 Transactions per Second. Figure 2: Over 9x Faster INSERT Operations The delivered performance demonstrates MySQL with the native Memcached NoSQL interface is well suited for high-speed inserts with the added assurance of transactional guarantees. 
You can check out the latest Memcached / InnoDB developments and benchmarks here You can learn how to configure the Memcached API for InnoDB here Memcached Implementation for MySQL Cluster Memcached API support for MySQL Cluster was introduced with General Availability (GA) of the 7.2 release, and joins an extensive range of NoSQL interfaces that are already available for MySQL Cluster Like Memcached, MySQL Cluster provides a distributed hash table with in-memory performance. MySQL Cluster extends Memcached functionality by adding support for write-intensive workloads, a full relational model with ACID compliance (including persistence), rich query support, auto-sharding and 99.999% availability, with extensive management and monitoring capabilities. All writes are committed directly to MySQL Cluster, eliminating cache invalidation and the overhead of data consistency checking to ensure complete synchronization between the database and cache. Figure 3: Memcached API Implementation with MySQL Cluster Implementation is simple: 1. The application sends reads and writes to the Memcached process (using the standard Memcached API). 2. This invokes the Memcached Driver for NDB (which is part of the same process) 3. The NDB API is called, providing for very quick access to the data held in MySQL Cluster’s data nodes. The solution has been designed to be very flexible, allowing the application architect to find a configuration that best fits their needs. It is possible to co-locate the Memcached API in either the data nodes or application nodes, or alternatively within a dedicated Memcached layer. The benefit of this flexible approach to deployment is that users can configure behavior on a per-key-prefix basis (through tables in MySQL Cluster) and the application doesn’t have to care – it just uses the Memcached API and relies on the software to store data in the right place(s) and to keep everything synchronized. Using Memcached for Schema-less Data By default, every Key / Value is written to the same table with each Key / Value pair stored in a single row – thus allowing schema-less data storage. Alternatively, the developer can define a key-prefix so that each value is linked to a pre-defined column in a specific table. Of course if the application needs to access the same data through SQL then developers can map key prefixes to existing table columns, enabling Memcached access to schema-structured data already stored in MySQL Cluster. Conclusion Download the Guide to MySQL and NoSQL to learn more about NoSQL APIs and how you can use them to scale new generations of web, cloud, mobile and social applications on the world's most widely deployed open source database See how to build a social app with MySQL Cluster and the Memcached API from our on-demand webinar or take a look at the docs Don't hesitate to use the comments section below for any questions you may have 
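Because the plugin speaks the standard memcached protocol, the quickest smoke test needs no client library at all; this sketch assumes the Memcached plugin is listening on the default port 11211 on localhost, and the key/value names are made up for illustration:
telnet localhost 11211
set mykey 0 0 5
hello
STORED
get mykey
VALUE mykey 0 5
hello
END
The same key/value pair is then visible through SQL in the backing InnoDB (or NDB) table, which is the dual-interface behaviour described above.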

    Read the article

  • SQL SERVER – Powershell – Importing CSV File Into Database – Video

    - by pinaldave
Laerte Junior is my very dear friend and PowerShell expert. On my request he has agreed to share his PowerShell knowledge with us. Laerte Junior is a SQL Server MVP and, through his technology blog and Simple-Talk articles, an active member of the Microsoft community in Brazil. He is a skilled Principal Database Architect, Developer, and Administrator, specializing in SQL Server and PowerShell programming, with over 8 years of hands-on experience. He holds a degree in Computer Science, has been awarded a number of certifications (including MCDBA), and is an expert in SQL Server 2000 / SQL Server 2005 / SQL Server 2008 technologies. Let us read the blog post in his own words. I was reading an excellent post from my great friend Pinal about loading data from CSV files, SQL SERVER – Importing CSV File Into Database – SQL in Sixty Seconds #018 – Video, to SQL Server, and was honored to write another guest post on SQL Authority about the magic of PowerShell. The biggest stuff at TechEd NA this year was PowerShell. Fellows, if you still don't know about it, it is better to run. Remember that the Core Servers to SQL Server are the future, and consequently the Shell. You don't want to be out of this, right? Let's see some PowerShell magic now. To start our tour, first we need to download these two functions from PowerShell and SQL Server Master Jedi Chad Miller: Out-DataTable and Write-DataTable. Save them in a module and add it to your profile. In my case, the module is called functions.psm1. To have some data to play with, I created 10 csv files with the same content. I just put the SQL Server errorlog into a csv file and created 10 copies of it.
# Just create a CSV with data to import, using the SQL Server errorlog
[reflection.assembly]::LoadWithPartialName("Microsoft.SqlServer.Smo")
$ServerInstance = New-Object ("Microsoft.SqlServer.Management.Smo.Server") $Env:Computername
$ServerInstance.ReadErrorLog() | Export-Csv -Path "c:\SQLAuthority\ErrorLog.csv" -NoTypeInformation
for ($Count = 1; $Count -le 10; $Count++) {
    Copy-Item "c:\SQLAuthority\Errorlog.csv" "c:\SQLAuthority\ErrorLog$($Count).csv"
}
Now in my path c:\sqlauthority, I have 10 csv files. Now it is time to create a table. In my case, the SQL Server is called R2D2, the database is SQLServerRepository and the table is CSV_SQLAuthority.
CREATE TABLE [dbo].[CSV_SQLAuthority](
    [LogDate] [datetime] NULL,
    [Processinfo] [varchar](20) NULL,
    [Text] [varchar](MAX) NULL
)
Let's play a little bit. I want to import all csv files from the path into the table synchronously:
# Importing synchronously
$DataImport = Import-Csv -Path (Get-ChildItem "c:\SQLAuthority\*.csv")
$DataTable = Out-DataTable -InputObject $DataImport
Write-DataTable -ServerInstance R2D2 -Database SQLServerRepository -TableName CSV_SQLAuthority -Data $DataTable
Very cool, right? Let's do it asynchronously and in the background using PowerShell jobs:
# If you want to do it all asynchronously
Start-Job -Name 'ImportingAsynchronously' `
    -InitializationScript { Ipmo Functions -Force -DisableNameChecking } `
    -ScriptBlock {
        $DataImport = Import-Csv -Path (Get-ChildItem "c:\SQLAuthority\*.csv")
        $DataTable = Out-DataTable -InputObject $DataImport
        Write-DataTable -ServerInstance "R2D2" `
                        -Database "SQLServerRepository" `
                        -TableName "CSV_SQLAuthority" `
                        -Data $DataTable
    }
Oh, but what if I have csv files that are large in size and I want to import each one asynchronously? In this case, this is what should be done:
Get-ChildItem "c:\SQLAuthority\*.csv" | % {
    Start-Job -Name "$($_)" `
        -InitializationScript { Ipmo Functions -Force -DisableNameChecking } `
        -ScriptBlock {
            $DataImport = Import-Csv -Path $args[0]
            $DataTable = Out-DataTable -InputObject $DataImport
            Write-DataTable -ServerInstance "R2D2" `
                            -Database "SQLServerRepository" `
                            -TableName "CSV_SQLAuthority" `
                            -Data $DataTable
        } -ArgumentList $_.fullname
}
How cool is that? Let's do the funny stuff now. Let's schedule it in a SQL Server Agent job. If you are using SQL Server 2012, you can use the PowerShell job step. Otherwise you need to use a CMDExec job step calling PowerShell.exe. We will use the second option. First, create a ps1 file called ImportCSV.ps1 with the script above and save it in a path. In my case, it is c:\temp\automation. Just add these two lines at the end, after the Get-ChildItem loop:
Get-Job | Wait-Job | Out-Null
Remove-Job -State Completed
Why? See my post Dooh PowerShell Trick–Running Scripts That has Posh Jobs on a SQL Agent Job. Remember, this trick is for ALL scripts that use PowerShell jobs with any kind of scheduling tool (SQL Server Agent, Windows Scheduler). Create a job called ImportCSV and a step called Step_ImportCSV and choose CMDExec. Then you just need to schedule or run it. I did a short video (with matching good background music) as well. That's it, guys. C'mon, join me in the #PowerShellLifeStyle. You will love it. If you want to check what we can do with PowerShell and SQL Server, don't miss Laerte Junior's LiveMeeting on July 18. You can find more information in: LiveMeeting VC PowerShell PASS–Troubleshooting SQL Server With PowerShell–English. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, SQL Utility, T SQL, Technology, Video Tagged: Powershell

    Read the article

  • FireFox 6 Super Slow? Cache Settings Corruption

    - by Rick Strahl
For those of you that follow me on Twitter, you've probably seen some of my tweets regarding major performance problems I've seen with the install of FireFox 6.0. FireFox 6.0 was released a couple of weeks ago and is treated as a 'force feed' update for FireFox 5.0. I'm not sure what the deal is with this braindead versioning that Mozilla is doing, with major version releases coming out what, now, every other month? Seriously, that's absurd, especially given the limited number of new features these releases bring and the upgrade pain for plug-ins that a major version release causes. Anyway, after the FireFox updater bugged me long enough, I finally gave in last week and updated to FireFox 6. Immediately after the install I noticed terrible performance. Everything was running at a snail's pace, with Web pages loading slowly and most content actually slowly 'painting' the page - a typical sign of content downloading slowly. However, these are pages that should be mostly cached on my system, and even repeated accesses ran just as slow. Just for a reality check I ran the same sites in Chrome (blazing fast) and IE (fast enough :-)) but FireFox - dog on a stick.
Why so slow, Boss?
While I was complaining, lots of people recommended ditching FireFox - use Chrome, yada yada yada. Yeah, Chrome is fast and getting better, but I have a number of plug-ins that I use in FF that I can't easily give up. So I suffered and started looking more closely at what was happening. The first thing I noticed when accessing pages was that I continually saw accesses to the Google CDN downloading jQuery and jQuery UI. UI especially is pretty heavy in size, and currently I'm in a location with a fairly slow IP connection where large files are a bit of an issue. However, seeing the CDN urls pop up repeatedly raised a flag with me. That stuff should be caching, and it looked like each and every hit was reloading these scripts and various images over and over again. I fired up FireBug and sure enough saw something like this on a repeated hit to my blog: the two highlights are jQuery and the main CSS file for the site, and both are being loaded fully and taking a while to load. However, since this page had been loaded before, these items should be cached and show 304 requests instead of full HTTP requests returning 200 result codes. In short, it looked like FireFox was not caching ANY content at all and was constantly reloading all page resources. No wonder things were running dog slow. Once I realized what the problem was, I took a look in the about:config settings and, lo and behold, a bunch of the cache settings were set to not cache. In my case ALL the main cache flags were set to false for some reason that I can't figure out. It appears that after the FireFox 6 update these flags somehow mysteriously changed, and performance took a nose dive. Switching the .enable flags back to true and resetting all the cache settings to default reverted performance back to the way it's supposed to be: reasonably fast and snappy as soon as content is cached and accessed again from cache. I try not to muck with the about:config settings much (other than turning off the IPV6 option), but when there are problems, access to these features can be really nice. However, I treat this as a last resort, so it took me quite some time before I started looking through ALL the settings, which takes a while when you don't know exactly what you are looking for. If Web load performance is slow, it's a good idea to check the cache settings.
I have no idea what hosed these settings for me - I certainly didn't explicitly set them in about:config, and in FireFox's Options dialog I didn't see any option that would affect global caching like this, so this remains a mystery to me. Anyway, I hope that this is helpful to some, in case some of you end up running into a similar issue.
© Rick Strahl, West Wind Technologies, 2005-2011. Posted in FireFox
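If the about:config edits ever revert again, the same flags can be pinned from a user.js file in the profile directory; a sketch, where the pref names below are the stock FireFox cache switches (a broken profile may have others flipped as well):
user_pref("browser.cache.disk.enable", true);
user_pref("browser.cache.memory.enable", true);
user_pref("browser.cache.offline.enable", true);
FireFox reads user.js at every startup and reapplies these values over whatever prefs.js currently holds, so the settings survive whatever reset the update performed.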

    Read the article

  • CLR via C# 3rd Edition is out

    - by Abhijeet Patel
Time for a book news update. CLR via C#, 3rd Edition seems to have been out for a little while now. The book was released in early Feb this year, and needless to say my copy is on its way. I can barely wait to dig in and chew on the goodies that one of the best technical authors and software professionals I respect has in store. The 2nd edition of the book was an absolute treat, and this edition promises to be no less. Here is a brief description of what's new and updated from the 2nd edition.
Part I – CLR Basics
Chapter 1 - The CLR's Execution Model: Added discussion about C#'s /optimize and /debug switches and how they relate to each other.
Chapter 2 - Building, Packaging, Deploying, and Administering Applications and Types: Improved discussion about Win32 manifest information and version resource information.
Chapter 3 - Shared Assemblies and Strongly Named Assemblies: Added discussion of TypeForwardedToAttribute and TypeForwardedFromAttribute.
Part II – Designing Types
Chapter 4 - Type Fundamentals: No new topics.
Chapter 5 - Primitive, Reference, and Value Types: Enhanced discussion of checked and unchecked code and added discussion of the new BigInteger type. Also added discussion of C# 4.0's dynamic primitive type.
Chapter 6 - Type and Member Basics: No new topics.
Chapter 7 - Constants and Fields: No new topics.
Chapter 8 - Methods: Added discussion of extension methods and partial methods.
Chapter 9 - Parameters: Added discussion of optional/named parameters and implicitly-typed local variables.
Chapter 10 - Properties: Added discussion of automatically-implemented properties, properties and the Visual Studio debugger, object and collection initializers, anonymous types, the System.Tuple type and the ExpandoObject type.
Chapter 11 - Events: Added discussion of events and thread-safety, as well as a cool extension method to simplify the raising of an event.
Chapter 12 - Generics: Added discussion of delegate and interface generic type argument variance.
Chapter 13 - Interfaces: No new topics.
Part III – Essential Types
Chapter 14 - Chars, Strings, and Working with Text: No new topics.
Chapter 15 - Enums: Added coverage of new Enum and Type methods to access enumerated type instances.
Chapter 16 - Arrays: Added new section on initializing array elements.
Chapter 17 - Delegates: Added discussion of using generic delegates to avoid defining new delegate types. Also added discussion of lambda expressions.
Chapter 18 - Attributes: No new topics.
Chapter 19 - Nullable Value Types: Added discussion on performance.
Part IV – CLR Facilities
Chapter 20 - Exception Handling and State Management: This chapter has been completely rewritten. It is now about exception handling and state management. It includes discussions of code contracts and constrained execution regions (CERs). It also includes a new section on trade-offs between writing productive code and reliable code.
Chapter 21 - Automatic Memory Management: Added discussion of C#'s fixed statement and how it works to pin objects in the heap. Rewrote the code for weak delegates so you can use them with any class that exposes an event (the class doesn't have to support weak delegates itself). Added discussion of the new ConditionalWeakTable class, GC collection modes, full GC notifications, garbage collection modes and latency modes. I also include a new sample showing how your application can receive notifications whenever generation 0 or 2 collections occur.
Chapter 22 - CLR Hosting and AppDomains: Added discussion of side-by-side support allowing multiple CLRs to be loaded in a single process. Added a section on the performance of using MarshalByRefObject-derived types. Substantially rewrote the section on cross-AppDomain communication. Added sections on AppDomain monitoring and first-chance exception notifications. Updated the section on the AppDomainManager class.
Chapter 23 - Assembly Loading and Reflection: Added a section on how to deploy a single file with dependent assemblies embedded inside it. Added a section comparing reflection invoke vs bind/invoke vs bind/create delegate/invoke vs C#'s dynamic type.
Chapter 24 - Runtime Serialization: This is a whole new chapter that was not in the 2nd Edition.
Part V – Threading
Chapter 25 - Threading Basics: Whole new chapter motivating why Windows supports threads, and covering thread overhead, CPU trends, NUMA architectures, the relationship between CLR threads and Windows threads, the Thread class, reasons to use threads, thread scheduling and priorities, and foreground threads vs background threads.
Chapter 26 - Performing Compute-Bound Asynchronous Operations: Whole new chapter explaining the CLR's thread pool. This chapter covers all the new .NET 4.0 constructs, including cooperative cancellation, Tasks, the Parallel class, Parallel Language-Integrated Query, timers, how the thread pool manages its threads, cache lines and false sharing.
Chapter 27 - Performing I/O-Bound Asynchronous Operations: Whole new chapter explaining how Windows performs synchronous and asynchronous I/O operations. Then I go into the CLR's Asynchronous Programming Model, my AsyncEnumerator class, the APM and exceptions, applications and their threading models, implementing a service asynchronously, the APM and compute-bound operations, APM considerations, I/O request priorities, converting the APM to a Task, the event-based Asynchronous Pattern, and programming model soup.
Chapter 28 - Primitive Thread Synchronization Constructs: Whole new chapter discussing class libraries and thread safety, primitive user-mode and kernel-mode constructs, and data alignment.
Chapter 29 - Hybrid Thread Synchronization Constructs: Whole new chapter discussing various hybrid constructs such as ManualResetEventSlim, SemaphoreSlim, CountdownEvent, Barrier, ReaderWriterLock(Slim), OneManyResourceLock, Monitor, three ways to solve the double-check locking technique, .NET 4.0's Lazy and LazyInitializer classes, the condition variable pattern, .NET 4.0's concurrent collection classes, and the ReaderWriterGate and SyncGate classes.
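As a taste of the material Chapter 29 covers, here is a minimal sketch (my own, not from the book) of the .NET 4.0 Lazy<T> class replacing a hand-rolled double-check locking implementation:
using System;

public sealed class Configuration
{
    // Lazy<T> (new in .NET 4.0) performs the double-check locking pattern
    // internally: with the default LazyThreadSafetyMode.ExecutionAndPublication,
    // the factory delegate runs at most once, even under contention.
    private static readonly Lazy<Configuration> instance =
        new Lazy<Configuration>(() => new Configuration());

    public static Configuration Instance
    {
        get { return instance.Value; }
    }

    private Configuration()
    {
        // expensive one-time initialization would go here
    }
}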

    Read the article

  • Entity Framework Batch Update and Future Queries

    - by pwelter34
Entity Framework Extended Library: a library that extends the functionality of Entity Framework.
Features:
- Batch Update and Delete
- Future Queries
- Audit Log
Project Package and Source
NuGet Package: PM> Install-Package EntityFramework.Extended
NuGet: http://nuget.org/List/Packages/EntityFramework.Extended
Source: http://github.com/loresoft/EntityFramework.Extended
Batch Update and Delete
A current limitation of the Entity Framework is that in order to update or delete an entity you have to first retrieve it into memory. In most scenarios this is just fine. There are, however, some scenarios where performance would suffer. Also, for single deletes, the object must be retrieved before it can be deleted, requiring two calls to the database. Batch update and delete eliminates the need to retrieve and load an entity before modifying it.
Deleting:
//delete all users where FirstName matches
context.Users.Delete(u => u.FirstName == "firstname");
Update:
//update all tasks with status of 1 to status of 2
context.Tasks.Update(
    t => t.StatusId == 1,
    t => new Task { StatusId = 2 });
//example of using an IQueryable as the filter for the update
var users = context.Users
    .Where(u => u.FirstName == "firstname");
context.Users.Update(
    users,
    u => new User { FirstName = "newfirstname" });
Future Queries
Build up a list of queries for the data that you need and, the first time any of the results are accessed, all the data will be retrieved in one round trip to the database server. Reducing the number of trips to the database is great. Using this feature is as simple as appending .Future() to the end of your queries. To use Future Queries, make sure to import the EntityFramework.Extensions namespace. Future queries are created with the following extension methods:
- Future()
- FutureFirstOrDefault()
- FutureCount()
Sample:
// build up queries
var q1 = db.Users
    .Where(t => t.EmailAddress == "[email protected]")
    .Future();
var q2 = db.Tasks
    .Where(t => t.Summary == "Test")
    .Future();
// this triggers the loading of all the future queries
var users = q1.ToList();
In the example above, there are 2 queries built up; as soon as one of the queries is enumerated, it triggers the batch load of both queries.
// base query
var q = db.Tasks.Where(t => t.Priority == 2);
// get total count
var q1 = q.FutureCount();
// get page
var q2 = q.Skip(pageIndex).Take(pageSize).Future();
// triggers execute as a batch
int total = q1.Value;
var tasks = q2.ToList();
In this example, we have a common scenario where you want to page a list of tasks. In order for the GUI to set up the paging control, you need a total count. With Future, we can batch together the queries to get all the data in one database call. Future queries work by creating the appropriate IFutureQuery object that keeps the IQueryable. The IFutureQuery object is then stored in the IFutureContext.FutureQueries list. Then, when one of the IFutureQuery objects is enumerated, it calls back to IFutureContext.ExecuteFutureQueries() via the LoadAction delegate. ExecuteFutureQueries builds a batch query from all the stored IFutureQuery objects. Finally, all the IFutureQuery objects are updated with the results from the query.
Audit Log
The Audit Log feature will capture the changes to entities anytime they are submitted to the database. The Audit Log captures only the entities that are changed and only the properties on those entities that were changed. The before and after values are recorded.
AuditLogger.LastAudit is where this information is held, and there is a ToXml() method that makes it easy to turn the AuditLog into xml for easy storage. The AuditLog can be customized via attributes on the entities or via a Fluent Configuration API.
Fluent Configuration:
// config audit when your application is starting up...
var auditConfiguration = AuditConfiguration.Default;
auditConfiguration.IncludeRelationships = true;
auditConfiguration.LoadRelationships = true;
auditConfiguration.DefaultAuditable = true;
// customize the audit for Task entity
auditConfiguration.IsAuditable<Task>()
    .NotAudited(t => t.TaskExtended)
    .FormatWith(t => t.Status, v => FormatStatus(v));
// set the display member when status is a foreign key
auditConfiguration.IsAuditable<Status>()
    .DisplayMember(t => t.Name);
Create an Audit Log:
var db = new TrackerContext();
var audit = db.BeginAudit();
// make some updates ...
db.SaveChanges();
var log = audit.LastLog;
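To make the batch-delete claim concrete: because no entities are materialized, a call like context.Users.Delete(u => u.FirstName == "firstname") is translated into a single round trip that executes, roughly, a statement of this shape (a sketch, not the library's exact generated SQL):
DELETE FROM [Users] WHERE [FirstName] = 'firstname'
One statement, no SELECT beforehand and nothing loaded into the change tracker, which is exactly the overhead the extension is designed to remove.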

    Read the article

  • Slow boot on Ubuntu 12.04

    - by Hailwood
My Ubuntu is booting really slowly (Windows is booting faster...). I am using a Dell Inspiron 1545 (Pentium(R) Dual-Core CPU T4300 @ 2.10GHz, 4GB RAM, 500GB HDD) running Ubuntu 12.04 with gnome-shell 3.4.1. After running dmesg, the culprit seems to be this section, in particular the last three lines:
[26.557659] ADDRCONF(NETDEV_UP): eth0: link is not ready
[26.565414] ADDRCONF(NETDEV_UP): eth0: link is not ready
[27.355355] Console: switching to colour frame buffer device 170x48
[27.362346] fb0: radeondrmfb frame buffer device
[27.362347] drm: registered panic notifier
[27.362357] [drm] Initialized radeon 2.12.0 20080528 for 0000:01:00.0 on minor 0
[27.617435] init: udev-fallback-graphics main process (1049) terminated with status 1
[30.064481] init: plymouth-stop pre-start process (1500) terminated with status 1
[51.708241] CE: hpet increased min_delta_ns to 20113 nsec
[59.448029] eth2: no IPv6 routers present
But I have no idea how to start debugging this.
$ sudo lshw -C video
*-display
description: VGA compatible controller
product: RV710 [Mobility Radeon HD 4300 Series]
vendor: Hynix Semiconductor (Hyundai Electronics)
physical id: 0
bus info: pci@0000:01:00.0
version: 00
width: 32 bits
clock: 33MHz
capabilities: pm pciexpress msi vga_controller bus_master cap_list rom
configuration: driver=fglrx_pci latency=0
resources: irq:48 memory:e0000000-efffffff ioport:de00(size=256) memory:f6df0000-f6dfffff memory:f6d00000-f6d1ffff
After loading the proprietary driver, my new dmesg log is below (starting from the first major time gap):
[2.983741] EXT4-fs (sda6): mounted filesystem with ordered data mode. Opts: (null)
[25.094327] ADDRCONF(NETDEV_UP): eth0: link is not ready
[25.119737] udevd[520]: starting version 175
[25.167086] lp: driver loaded but no devices found
[25.215341] fglrx: module license 'Proprietary. (C) 2002 - ATI Technologies, Starnberg, GERMANY' taints kernel.
[25.215345] Disabling lock debugging due to kernel taint
[25.231924] wmi: Mapper loaded
[25.318414] lib80211: common routines for IEEE802.11 drivers
[25.318418] lib80211_crypt: registered algorithm 'NULL'
[25.331631] [fglrx] Maximum main memory to use for locked dma buffers: 3789 MBytes.
[25.332095] [fglrx] vendor: 1002 device: 9552 count: 1
[25.334206] [fglrx] ioport: bar 1, base 0xde00, size: 0x100
[25.334229] pci 0000:01:00.0: PCI INT A -> GSI 16 (level, low) -> IRQ 16
[25.334235] pci 0000:01:00.0: setting latency timer to 64
[25.337109] [fglrx] Kernel PAT support is enabled
[25.337140] [fglrx] module loaded - fglrx 8.96.4 [Mar 12 2012] with 1 minors
[25.342803] Adding 4189180k swap on /dev/sda7.
Priority:-1 extents:1 across:4189180k [25.364031] type=1400 audit(1338241723.027:2): apparmor="STATUS" operation="profile_load" name="/sbin/dhclient" pid=606 comm="apparmor_parser" [25.364491] type=1400 audit(1338241723.031:3): apparmor="STATUS" operation="profile_load" name="/usr/lib/NetworkManager/nm-dhcp-client.action" pid=606 comm="apparmor_parser" [25.364760] type=1400 audit(1338241723.031:4): apparmor="STATUS" operation="profile_load" name="/usr/lib/connman/scripts/dhclient-script" pid=606 comm="apparmor_parser" [25.394328] wl 0000:0c:00.0: PCI INT A -> GSI 17 (level, low) -> IRQ 17 [25.394343] wl 0000:0c:00.0: setting latency timer to 64 [25.415531] acpi device:36: registered as cooling_device2 [25.416688] input: Video Bus as /devices/LNXSYSTM:00/device:00/PNP0A03:00/device:34/LNXVIDEO:00/input/input6 [25.416795] ACPI: Video Device [VID] (multi-head: yes rom: no post: no) [25.416865] [Firmware Bug]: Duplicate ACPI video bus devices for the same VGA controller, please try module parameter "video.allow_duplicates=1"if the current driver doesn't work. [25.425133] lib80211_crypt: registered algorithm 'TKIP' [25.448058] snd_hda_intel 0000:00:1b.0: PCI INT A -> GSI 21 (level, low) -> IRQ 21 [25.448321] snd_hda_intel 0000:00:1b.0: irq 47 for MSI/MSI-X [25.448353] snd_hda_intel 0000:00:1b.0: setting latency timer to 64 [25.738867] eth1: Broadcom BCM4315 802.11 Hybrid Wireless Controller 5.100.82.38 [25.761213] input: HDA Intel Mic as /devices/pci0000:00/0000:00:1b.0/sound/card0/input7 [25.761406] input: HDA Intel Headphone as /devices/pci0000:00/0000:00:1b.0/sound/card0/input8 [25.783432] dcdbas dcdbas: Dell Systems Management Base Driver (version 5.6.0-3.2) [25.908318] EXT4-fs (sda6): re-mounted. Opts: errors=remount-ro [25.928155] input: Dell WMI hotkeys as /devices/virtual/input/input9 [25.960561] udevd[543]: renamed network interface eth1 to eth2 [26.285688] init: failsafe main process (835) killed by TERM signal [26.396426] input: PS/2 Mouse as /devices/platform/i8042/serio2/input/input10 [26.423108] input: AlpsPS/2 ALPS GlidePoint as /devices/platform/i8042/serio2/input/input11 [26.511297] Bluetooth: Core ver 2.16 [26.511383] NET: Registered protocol family 31 [26.511385] Bluetooth: HCI device and connection manager initialized [26.511388] Bluetooth: HCI socket layer initialized [26.511391] Bluetooth: L2CAP socket layer initialized [26.512079] Bluetooth: SCO socket layer initialized [26.530164] Bluetooth: BNEP (Ethernet Emulation) ver 1.3 [26.530168] Bluetooth: BNEP filters: protocol multicast [26.553893] type=1400 audit(1338241724.219:5): apparmor="STATUS" operation="profile_replace" name="/sbin/dhclient" pid=928 comm="apparmor_parser" [26.554860] Bluetooth: RFCOMM TTY layer initialized [26.554866] Bluetooth: RFCOMM socket layer initialized [26.554868] Bluetooth: RFCOMM ver 1.11 [26.557910] type=1400 audit(1338241724.223:6): apparmor="STATUS" operation="profile_load" name="/usr/lib/lightdm/lightdm/lightdm-guest-session-wrapper" pid=927 comm="apparmor_parser" [26.559166] type=1400 audit(1338241724.223:7): apparmor="STATUS" operation="profile_replace" name="/usr/lib/NetworkManager/nm-dhcp-client.action" pid=928 comm="apparmor_parser" [26.559574] type=1400 audit(1338241724.223:8): apparmor="STATUS" operation="profile_replace" name="/usr/lib/connman/scripts/dhclient-script" pid=928 comm="apparmor_parser" [26.575519] type=1400 audit(1338241724.239:9): apparmor="STATUS" operation="profile_load" name="/usr/lib/telepathy/mission-control-5" pid=931 comm="apparmor_parser" [26.581100] type=1400 
audit(1338241724.247:10): apparmor="STATUS" operation="profile_load" name="/usr/lib/telepathy/telepathy-*" pid=931 comm="apparmor_parser" [26.582794] type=1400 audit(1338241724.247:11): apparmor="STATUS" operation="profile_load" name="/usr/bin/evince" pid=929 comm="apparmor_parser" [26.605672] ppdev: user-space parallel port driver [27.592475] sky2 0000:09:00.0: eth0: enabling interface [27.604329] ADDRCONF(NETDEV_UP): eth0: link is not ready [27.606962] ADDRCONF(NETDEV_UP): eth0: link is not ready [27.852509] vesafb: mode is 1024x768x32, linelength=4096, pages=0 [27.852513] vesafb: scrolling: redraw [27.852515] vesafb: Truecolor: size=0:8:8:8, shift=0:16:8:0 [27.852523] mtrr: type mismatch for e0000000,400000 old: write-back new: write-combining [27.852527] mtrr: type mismatch for e0000000,200000 old: write-back new: write-combining [27.852531] mtrr: type mismatch for e0000000,100000 old: write-back new: write-combining [27.852534] mtrr: type mismatch for e0000000,80000 old: write-back new: write-combining [27.852538] mtrr: type mismatch for e0000000,40000 old: write-back new: write-combining [27.852541] mtrr: type mismatch for e0000000,20000 old: write-back new: write-combining [27.852544] mtrr: type mismatch for e0000000,10000 old: write-back new: write-combining [27.852548] mtrr: type mismatch for e0000000,8000 old: write-back new: write-combining [27.852551] mtrr: type mismatch for e0000000,4000 old: write-back new: write-combining [27.852554] mtrr: type mismatch for e0000000,2000 old: write-back new: write-combining [27.852558] mtrr: type mismatch for e0000000,1000 old: write-back new: write-combining [27.853154] vesafb: framebuffer at 0xe0000000, mapped to 0xffffc90005580000, using 3072k, total 3072k [27.853405] Console: switching to colour frame buffer device 128x48 [27.853426] fb0: VESA VGA frame buffer device [28.539800] fglrx_pci 0000:01:00.0: irq 48 for MSI/MSI-X [28.540552] [fglrx] Firegl kernel thread PID: 1168 [28.540679] [fglrx] Firegl kernel thread PID: 1169 [28.540789] [fglrx] Firegl kernel thread PID: 1170 [28.540932] [fglrx] IRQ 48 Enabled [29.845620] [fglrx] Gart USWC size:1236 M. [29.845624] [fglrx] Gart cacheable size:489 M. [29.845629] [fglrx] Reserved FB block: Shared offset:0, size:1000000 [29.845632] [fglrx] Reserved FB block: Unshared offset:fc21000, size:3df000 [29.845635] [fglrx] Reserved FB block: Unshared offset:1fffb000, size:5000 [59.700023] eth2: no IPv6 routers present
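Since dmesg only covers the kernel side and the big gap above sits before userspace even starts, one way to see where the remaining seconds go is to profile the whole boot. A sketch for 12.04, which uses Upstart (so systemd-analyze is not available); this assumes the bootchart and pybootchartgui packages are still in the archive for this release:
sudo apt-get install bootchart pybootchartgui
sudo reboot
After the reboot, bootchart writes a timing chart under /var/log/bootchart/ showing which process is blocking the boot.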

    Read the article

  • org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'transactionManager'

    - by BilalFromParis
When I add this code to my Spring configuration file beans-hibernate.xml:
<bean id="transactionManager" class="org.springframework.orm.hibernate3.HibernateTransactionManager">
    <property name="sessionFactory" ref="sessionFactory" />
</bean>
it doesn't work and I don't know why. Can someone help me, please? My DAO class is:
public class CourseDaoImpl implements CourseDao {

    private SessionFactory sessionFactory;

    public void setSessionFactory(SessionFactory sessionFactory) {
        this.sessionFactory = sessionFactory;
    }

    @Transactional
    public void store(Course course) {
        sessionFactory.getCurrentSession().saveOrUpdate(course);
    }

    @Transactional
    public void delete(Long courseId) {
        Course course = (Course) sessionFactory.getCurrentSession().get(Course.class, courseId);
        sessionFactory.getCurrentSession().delete(course);
    }

    @Transactional(readOnly = true)
    public Course findById(Long courseId) {
        return (Course) sessionFactory.getCurrentSession().get(Course.class, courseId);
    }

    @Transactional
    public List<Course> findAll() {
        Query query = sessionFactory.getCurrentSession().createQuery("FROM Course");
        return (List<Course>) query.list();
    }
}
but: juil. 04, 2012 3:38:18 AM org.springframework.context.support.AbstractApplicationContext prepareRefresh Infos: Refreshing org.springframework.context.support.ClassPathXmlApplicationContext@6ba8fb1b: startup date [Wed Jul 04 03:38:18 CEST 2012]; root of context hierarchy juil. 04, 2012 3:38:18 AM org.springframework.beans.factory.xml.XmlBeanDefinitionReader loadBeanDefinitions Infos: Loading XML bean definitions from class path resource [beans-hibernate.xml] juil. 04, 2012 3:38:19 AM org.springframework.beans.factory.support.DefaultListableBeanFactory preInstantiateSingletons Infos: Pre-instantiating singletons in org.springframework.beans.factory.support.DefaultListableBeanFactory@5a7fed46: defining beans [org.springframework.aop.config.internalAutoProxyCreator,org.springframework.transaction.annotation.AnnotationTransactionAttributeSource#0,org.springframework.transaction.interceptor.TransactionInterceptor#0,org.springframework.transaction.config.internalTransactionAdvisor,sessionFactory,transactionManager,courseDao]; root of factory hierarchy juil. 04, 2012 3:38:19 AM org.hibernate.annotations.common.Version INFO: HCANN000001: Hibernate Commons Annotations {4.0.1.Final} juil. 04, 2012 3:38:19 AM org.hibernate.Version logVersion INFO: HHH000412: Hibernate Core {4.1.3.Final} juil. 04, 2012 3:38:19 AM org.hibernate.cfg.Environment INFO: HHH000206: hibernate.properties not found juil. 04, 2012 3:38:19 AM org.hibernate.cfg.Environment buildBytecodeProvider INFO: HHH000021: Bytecode provider name : javassist juil. 04, 2012 3:38:19 AM org.hibernate.service.jdbc.connections.internal.DriverManagerConnectionProviderImpl configure INFO: HHH000402: Using Hibernate built-in connection pool (not for production use!) juil. 04, 2012 3:38:19 AM org.hibernate.service.jdbc.connections.internal.DriverManagerConnectionProviderImpl configure INFO: HHH000115: Hibernate connection pool size: 20 juil. 04, 2012 3:38:19 AM org.hibernate.service.jdbc.connections.internal.DriverManagerConnectionProviderImpl configure INFO: HHH000006: Autocommit mode: false juil. 04, 2012 3:38:19 AM org.hibernate.service.jdbc.connections.internal.DriverManagerConnectionProviderImpl configure INFO: HHH000401: using driver [org.hibernate.dialect.PostgreSQLDialect] at URL [jdbc:postgresql://localhost:5432/spring] juil.
04, 2012 3:38:19 AM org.hibernate.service.jdbc.connections.internal.DriverManagerConnectionProviderImpl configure INFO: HHH000046: Connection properties: {user=Bilal, password=**} juil. 04, 2012 3:38:19 AM org.hibernate.dialect.Dialect INFO: HHH000400: Using dialect: org.hibernate.dialect.PostgreSQLDialect juil. 04, 2012 3:38:19 AM org.hibernate.engine.jdbc.internal.LobCreatorBuilder useContextualLobCreation INFO: HHH000423: Disabling contextual LOB creation as JDBC driver reported JDBC version [3] less than 4 juil. 04, 2012 3:38:19 AM org.hibernate.engine.transaction.internal.TransactionFactoryInitiator initiateService INFO: HHH000399: Using default transaction strategy (direct JDBC transactions) juil. 04, 2012 3:38:19 AM org.hibernate.hql.internal.ast.ASTQueryTranslatorFactory INFO: HHH000397: Using ASTQueryTranslatorFactory juil. 04, 2012 3:38:19 AM org.hibernate.tool.hbm2ddl.SchemaUpdate execute INFO: HHH000228: Running hbm2ddl schema update juil. 04, 2012 3:38:19 AM org.hibernate.tool.hbm2ddl.SchemaUpdate execute INFO: HHH000102: Fetching database metadata juil. 04, 2012 3:38:19 AM org.hibernate.tool.hbm2ddl.SchemaUpdate execute INFO: HHH000396: Updating schema juil. 04, 2012 3:38:19 AM org.hibernate.tool.hbm2ddl.TableMetadata INFO: HHH000261: Table found: public.course juil. 04, 2012 3:38:19 AM org.hibernate.tool.hbm2ddl.TableMetadata INFO: HHH000037: Columns: [fee, id, title, end_date, begin_date] juil. 04, 2012 3:38:19 AM org.hibernate.tool.hbm2ddl.TableMetadata INFO: HHH000108: Foreign keys: [] juil. 04, 2012 3:38:19 AM org.hibernate.tool.hbm2ddl.TableMetadata INFO: HHH000126: Indexes: [course_pkey] juil. 04, 2012 3:38:19 AM org.hibernate.tool.hbm2ddl.SchemaUpdate execute INFO: HHH000232: Schema update complete juil. 04, 2012 3:38:19 AM org.springframework.beans.factory.support.DefaultSingletonBeanRegistry destroySingletons Infos: Destroying singletons in org.springframework.beans.factory.support.DefaultListableBeanFactory@5a7fed46: defining beans [org.springframework.aop.config.internalAutoProxyCreator,org.springframework.transaction.annotation.AnnotationTransactionAttributeSource#0,org.springframework.transaction.interceptor.TransactionInterceptor#0,org.springframework.transaction.config.internalTransactionAdvisor,sessionFactory,transactionManager,courseDao]; root of factory hierarchy juil. 
04, 2012 3:38:19 AM org.hibernate.service.jdbc.connections.internal.DriverManagerConnectionProviderImpl stop INFO: HHH000030: Cleaning up connection pool [jdbc:postgresql://localhost:5432/spring] Exception in thread "main" org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'transactionManager' defined in class path resource [beans-hibernate.xml]: Invocation of init method failed; nested exception is java.lang.NoClassDefFoundError: org/hibernate/engine/SessionFactoryImplementor at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.initializeBean(AbstractAutowireCapableBeanFactory.java:1455) at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.doCreateBean(AbstractAutowireCapableBeanFactory.java:519) at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.createBean(AbstractAutowireCapableBeanFactory.java:456) at org.springframework.beans.factory.support.AbstractBeanFactory$1.getObject(AbstractBeanFactory.java:294) at org.springframework.beans.factory.support.DefaultSingletonBeanRegistry.getSingleton(DefaultSingletonBeanRegistry.java:225) at org.springframework.beans.factory.support.AbstractBeanFactory.doGetBean(AbstractBeanFactory.java:291) at org.springframework.beans.factory.support.AbstractBeanFactory.getBean(AbstractBeanFactory.java:193) at org.springframework.beans.factory.support.DefaultListableBeanFactory.preInstantiateSingletons(DefaultListableBeanFactory.java:585) at org.springframework.context.support.AbstractApplicationContext.finishBeanFactoryInitialization(AbstractApplicationContext.java:913) at org.springframework.context.support.AbstractApplicationContext.refresh(AbstractApplicationContext.java:464) at org.springframework.context.support.ClassPathXmlApplicationContext.(ClassPathXmlApplicationContext.java:139) at org.springframework.context.support.ClassPathXmlApplicationContext.(ClassPathXmlApplicationContext.java:83) at com.boutaya.bill.main.Main.main(Main.java:14) Caused by: java.lang.NoClassDefFoundError: org/hibernate/engine/SessionFactoryImplementor at org.springframework.orm.hibernate3.SessionFactoryUtils.getDataSource(SessionFactoryUtils.java:123) at org.springframework.orm.hibernate3.HibernateTransactionManager.afterPropertiesSet(HibernateTransactionManager.java:411) at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.invokeInitMethods(AbstractAutowireCapableBeanFactory.java:1514) at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.initializeBean(AbstractAutowireCapableBeanFactory.java:1452) ... 12 more Caused by: java.lang.ClassNotFoundException: org.hibernate.engine.SessionFactoryImplementor at java.net.URLClassLoader$1.run(Unknown Source) at java.net.URLClassLoader$1.run(Unknown Source) at java.security.AccessController.doPrivileged(Native Method) at java.net.URLClassLoader.findClass(Unknown Source) at java.lang.ClassLoader.loadClass(Unknown Source) at sun.misc.Launcher$AppClassLoader.loadClass(Unknown Source) at java.lang.ClassLoader.loadClass(Unknown Source) ... 16 more I think the problem is when I use the Class : org.springframework.orm.hibernate3.HibernateTransactionManager ???
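    The last "Caused by" in the stack trace is the key: java.lang.ClassNotFoundException: org.hibernate.engine.SessionFactoryImplementor. In Hibernate 4 (the log shows Hibernate Core 4.1.3.Final) that interface moved to the org.hibernate.engine.spi package, so Spring's hibernate3 support classes can no longer find it. A likely fix, assuming Spring 3.1 or later is on the classpath (a sketch, not a tested configuration), is to switch the bean to the hibernate4 package:

    <bean id="transactionManager" class="org.springframework.orm.hibernate4.HibernateTransactionManager">
        <property name="sessionFactory" ref="sessionFactory" />
    </bean>

    The sessionFactory bean would then also need to be defined via org.springframework.orm.hibernate4.LocalSessionFactoryBean rather than its hibernate3 counterpart.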

    Read the article

  • 256 Windows Azure Worker Roles, Windows Kinect and a 90's Text-Based Ray-Tracer

    - by Alan Smith
    For a couple of years I have been demoing a simple render farm hosted in Windows Azure using worker roles and the Azure Storage service. At the start of the presentation I deploy an Azure application that uses 16 worker roles to render a 1,500 frame 3D ray-traced animation. At the end of the presentation, when the animation was complete, I would play the animation and delete the Azure deployment. The standing joke with the audience was that it was a “$2 demo”, as the compute charges for running the 16 instances for an hour came to $1.92; factor in the bandwidth charges and it’s a couple of dollars. The point of the demo is that it highlights one of the great benefits of cloud computing: you pay for what you use, and if you need massive compute power for a short period of time, Windows Azure can work out very cost effective. The “$2 demo” was great for presenting at user groups and conferences in that it could be deployed to Azure, used to render an animation, and then removed in a one hour session. I have always had the idea of doing something a bit more impressive with the demo, and scaling it from a “$2 demo” to a “$30 demo”. The challenge was to create a visually appealing animation in high definition format and keep the demo time down to one hour. This article will take a run through how I achieved this.

Ray Tracing

Ray tracing, a technique for generating high quality photorealistic images, gained popularity in the 90’s with companies like Pixar creating feature length computer animations, and with the emergence of shareware text-based ray tracers that could run on a home PC. In order to render a ray traced image, the ray of light that would pass from the viewpoint through each pixel must be traced until it intersects with an object. At the intersection, the color, reflectiveness, transparency, and refractive index of the object are used to calculate whether the ray will be reflected or refracted. Each pixel may require thousands of calculations to determine what color it will be in the rendered image.

Pin-Board Toys

Having very little artistic talent and a basic understanding of maths, I decided to focus on an animation that could be modeled fairly easily and would look visually impressive. I’ve always liked the pin-board desktop toys that became popular in the 80’s, and when I was working as a 3D animator back in the 90’s I always had the idea of creating a 3D ray-traced animation of a pin-board, but never found the energy to do it. Even if I had had a go at it, the render time to produce an animation that would look respectable on a 486 would have been measured in months.

PolyRay

Back in 1995 I landed my first real job, after spending three years being a beach-ski-climbing-paragliding-bum, and was employed to create 3D ray-traced animations for a CD-ROM that school kids would use to learn physics. I had got into the strange and wonderful world of text-based ray tracing, and was using a shareware ray-tracer called PolyRay. PolyRay takes a text file describing a scene as input and, after a few hours of processing on a 486, produces a high quality ray-traced image. The following is an example of a basic PolyRay scene file.
background Midnight_Blue

static define matte surface { ambient 0.1 diffuse 0.7 }
define matte_white texture { matte { color white } }
define matte_black texture { matte { color dark_slate_gray } }
define position_cylindrical 3
define lookup_sawtooth 1
define light_wood <0.6, 0.24, 0.1>
define median_wood <0.3, 0.12, 0.03>
define dark_wood <0.05, 0.01, 0.005>

define wooden texture { noise surface { ambient 0.2 diffuse 0.7 specular white, 0.5 microfacet Reitz 10 position_fn position_cylindrical position_scale 1 lookup_fn lookup_sawtooth octaves 1 turbulence 1 color_map( [0.0, 0.2, light_wood, light_wood] [0.2, 0.3, light_wood, median_wood] [0.3, 0.4, median_wood, light_wood] [0.4, 0.7, light_wood, light_wood] [0.7, 0.8, light_wood, median_wood] [0.8, 0.9, median_wood, light_wood] [0.9, 1.0, light_wood, dark_wood]) } }
define glass texture { surface { ambient 0 diffuse 0 specular 0.2 reflection white, 0.1 transmission white, 1, 1.5 } }
define shiny surface { ambient 0.1 diffuse 0.6 specular white, 0.6 microfacet Phong 7 }
define steely_blue texture { shiny { color black } }
define chrome texture { surface { color white ambient 0.0 diffuse 0.2 specular 0.4 microfacet Phong 10 reflection 0.8 } }

viewpoint {
    from <4.000, -1.000, 1.000> at <0.000, 0.000, 0.000> up <0, 1, 0> angle 60
    resolution 640, 480 aspect 1.6 image_format 0
}

light <-10, 30, 20>
light <-10, 30, -20>

object { disc <0, -2, 0>, <0, 1, 0>, 30 wooden }
object { sphere <0.000, 0.000, 0.000>, 1.00 chrome }
object { cylinder <0.000, 0.000, 0.000>, <0.000, 0.000, -4.000>, 0.50 chrome }

After setting up the background and defining colors and textures, the viewpoint is specified. The “camera” is located at a point in 3D space, and it looks towards another point. The angle, image resolution, and aspect ratio are specified. Two lights are present in the image at defined coordinates. The three objects in the image are a wooden disc to represent a table top, and a sphere and cylinder that intersect to form a pin that will be used for the pin board toy in the final animation. When the image is rendered, the following image is produced. The pins are modeled with a chrome surface, so they reflect the environment around them. Note that the scale of the pin shaft is not correct; this will be fixed later.

Modeling the Pin Board

The frame of the pin-board is made up of three boxes and six cylinders; the front box is modeled using a clear, slightly reflective solid with the same refractive index as glass. The other shapes are modeled as metal.

object { box <-5.5, -1.5, 1>, <5.5, 5.5, 1.2> glass }
object { box <-5.5, -1.5, -0.04>, <5.5, 5.5, -0.09> steely_blue }
object { box <-5.5, -1.5, -0.52>, <5.5, 5.5, -0.59> steely_blue }
object { cylinder <-5.2, -1.2, 1.4>, <-5.2, -1.2, -0.74>, 0.2 steely_blue }
object { cylinder <5.2, -1.2, 1.4>, <5.2, -1.2, -0.74>, 0.2 steely_blue }
object { cylinder <-5.2, 5.2, 1.4>, <-5.2, 5.2, -0.74>, 0.2 steely_blue }
object { cylinder <5.2, 5.2, 1.4>, <5.2, 5.2, -0.74>, 0.2 steely_blue }
object { cylinder <0, -1.2, 1.4>, <0, -1.2, -0.74>, 0.2 steely_blue }
object { cylinder <0, 5.2, 1.4>, <0, 5.2, -0.74>, 0.2 steely_blue }

In order to create the matrix of pins that make up the pin board I used a basic console application with a few nested loops to create two intersecting matrices of pins, which models the layout used in the pin boards (a hedged sketch of such a generator follows below). The resulting image is shown below. The pin board contains 11,481 pins, with the scene file containing 23,709 lines of code.
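The console application itself is not shown in the post; the following is a sketch of what such a generator might look like (the grid size, pin pitch, and radii here are illustrative, not the actual values used):

// Hypothetical sketch: writes two interleaved grids of PolyRay pin
// definitions (a sphere head plus a cylinder shaft per pin) to a scene file.
using System.IO;

class PinBoardGenerator
{
    static void Main()
    {
        using (StreamWriter writer = new StreamWriter(@"C:\temp\pins.pi"))
        {
            const double spacing = 0.1;      // illustrative pin pitch
            for (int x = 0; x < 76; x++)
            {
                for (int y = 0; y < 76; y++)
                {
                    WritePin(writer, x * spacing, y * spacing);
                    // second grid, offset by half a pitch to interleave
                    WritePin(writer, (x + 0.5) * spacing, (y + 0.5) * spacing);
                }
            }
        }
    }

    // z is the slide-out distance; 0.0 gives a flat board, and for the
    // animation frames it would come from the depth image pixel values
    static void WritePin(TextWriter w, double x, double y, double z = 0.0)
    {
        w.WriteLine("object {{ sphere <{0:F3}, {1:F3}, {2:F3}>, 0.04 chrome }}", x, y, z);
        w.WriteLine("object {{ cylinder <{0:F3}, {1:F3}, {2:F3}>, <{0:F3}, {1:F3}, {3:F3}>, 0.02 chrome }}",
            x, y, z, z - 0.8);
    }
}

Since each pin contributes two object lines, an 11,481-pin board generated this way accounts for the bulk of the 23,709-line scene file.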
For the complete animation 2,000 scene files will be created, which is over 47 million lines of code. Each pin in the pin-board will slide out a specific distance when an object is pressed into the back of the board. This is easily modeled by setting the Z coordinate of the pin to a specific value. In order to set all of the pins in the pin-board to the correct position, a bitmap image can be used. The position of each pin can be set based on the color of the pixel at the corresponding position in the image. When the Windows Azure logo is used to set the Z coordinate of the pins, the following image is generated. The challenge now was to make a cool animation. The Azure Logo is fine, but it is static. Using a normal video to animate the pins would not work; the colors in the video would not be the same as the depth of the objects from the camera. In order to simulate the pin board accurately, a series of frames from a depth camera could be used.

Windows Kinect

The Kinect controllers for the X-Box 360 and Windows feature a depth camera. The Kinect SDK for Windows provides a programming interface for Kinect, providing easy access for .NET developers to the Kinect sensors. The Kinect Explorer provided with the Kinect SDK is a great starting point for exploring Kinect from a developer’s perspective. Both the X-Box 360 Kinect and the Windows Kinect will work with the Kinect SDK; the Windows Kinect is required for commercial applications, but the X-Box Kinect can be used for hobby projects. The Windows Kinect has the advantage of providing a mode to allow depth capture with objects closer to the camera, which makes for a more accurate depth image for setting the pin positions.

Creating a Depth Field Animation

The depth field animation used to set the positions of the pins in the pin board was created using a modified version of the Kinect Explorer sample application. In order to simulate the pin board accurately, a small section of the depth range from the depth sensor will be used. Any part of the object in front of the depth range will result in a white pixel; anything behind the depth range will be black. Within the depth range the pixels in the image will be set to RGB values from 0,0,0 to 255,255,255. A screen shot of the modified Kinect Explorer application is shown below. The Kinect Explorer sample application was modified to include slider controls that are used to set the depth range that forms the image from the depth stream. This allows the fine tuning of the depth image that is required for simulating the position of the pins in the pin board. The Kinect Explorer was also modified to record a series of images from the depth camera and save them as a sequence of JPEG files that will be used to animate the pins in the animation; the Start and Stop buttons are used to start and stop the image recording. An example of one of the depth images is shown below. Once a series of 2,000 depth images has been captured, the task of creating the animation can begin.

Rendering a Test Frame

In order to test the creation of frames and get an approximation of the time required to render each frame, a test frame was rendered on-premise using PolyRay. The output of the rendering process is shown below. The test frame contained 23,629 primitive shapes, most of which are the spheres and cylinders that are used for the 11,800 or so pins in the pin board.
The 1280x720 image contains 921,600 pixels, but as anti-aliasing was used the number of rays that were calculated was 4,235,777, with 3,478,754,073 object boundaries checked. The test frame of the pin board with the depth field image applied is shown below. The tracing time for the test frame was 4 minutes 27 seconds, which means rendering the 2,000 frames in the animation would take over 148 hours, or a little over 6 days. Although this is much faster than an old 486, waiting almost a week to see the results of an animation would make it challenging for animators to create, view, and refine their animations. It would be much better if the animation could be rendered in less than one hour.

Windows Azure Worker Roles

The cost of creating an on-premise render farm to render animations increases in proportion to the number of servers. The table below shows the cost of servers for creating a render farm, assuming a cost of $500 per server.

Number of Servers    Cost
1                    $500
16                   $8,000
256                  $128,000

As well as the cost of the servers, there would be additional costs for networking, racks etc. Hosting an environment of 256 servers on-premise would require a server room with cooling, and some pretty hefty power cabling. The Windows Azure compute services provide worker roles, which are ideal for performing processor intensive compute tasks. With the scalability available in Windows Azure, a job that takes 256 hours to complete could be performed using different numbers of worker roles. The time and cost of using 1, 16 or 256 worker roles is shown below.

Number of Worker Roles    Render Time    Cost
1                         256 hours      $30.72
16                        16 hours       $30.72
256                       1 hour         $30.72

Using worker roles in Windows Azure provides the same cost for the 256 hour job, irrespective of the number of worker roles used: the $30.72 is simply 256 compute-hours at the small instance rate of $0.12 per hour, the same rate behind the “$2 demo” figure. Provided the compute task can be broken down into many small units, and the worker role compute power can be used effectively, it makes sense to scale the application so that the task is completed quickly, making the results available in a timely fashion. The task of rendering 2,000 frames in an animation is one that can easily be broken down into 2,000 individual pieces, which can be performed by a number of worker roles.

Creating a Render Farm in Windows Azure

The architecture of the render farm is shown in the following diagram. The render farm is a hybrid application with the following components:

·         On-Premise
o   Windows Kinect – Used in combination with the Kinect Explorer to create a stream of depth images.
o   Animation Creator – This application uses the depth images from the Kinect sensor to create scene description files for PolyRay. These files are then uploaded to the jobs blob container, and job messages added to the jobs queue.
o   Process Monitor – This application queries the role instance lifecycle table and displays statistics about the render farm environment and render process.
o   Image Downloader – This application polls the image queue and downloads the rendered animation files once they are complete.
·         Windows Azure
o   Azure Storage – Queues and blobs are used for the scene description files and completed frames. A table is used to store the statistics about the rendering environment.

The architecture of each worker role is shown below.

The worker role is configured to use local storage, which provides file storage on the worker role instance that can be used by the applications to render the image and transform the format of the image.
The service definition for the worker role with the local storage configuration highlighted is shown below.

<?xml version="1.0" encoding="utf-8"?>
<ServiceDefinition name="CloudRay" >
  <WorkerRole name="CloudRayWorkerRole" vmsize="Small">
    <Imports>
    </Imports>
    <ConfigurationSettings>
      <Setting name="DataConnectionString" />
    </ConfigurationSettings>
    <LocalResources>
      <LocalStorage name="RayFolder" cleanOnRoleRecycle="true" />
    </LocalResources>
  </WorkerRole>
</ServiceDefinition>

The two executable programs, PolyRay.exe and DTA.exe, are included in the Azure project with Copy Always set as the property. PolyRay will take the scene description file and render it to a Truevision TGA file. As the TGA format has not seen much use since the mid 90’s, it is converted to a JPG image using Dave's Targa Animator, another shareware application from the 90’s. Each worker role will use the following process to render the animation frames.

1.       The worker process polls the job queue; if a job is available the scene description file is downloaded from blob storage to local storage.
2.       PolyRay.exe is started in a process with the appropriate command line arguments to render the image as a TGA file.
3.       DTA.exe is started in a process with the appropriate command line arguments to convert the TGA file to a JPG file.
4.       The JPG file is uploaded from local storage to the images blob container.
5.       A message is placed on the images queue to indicate a new image is available for download.
6.       The job message is deleted from the job queue.
7.       The role instance lifecycle table is updated with statistics on the number of frames rendered by the worker role instance, and the CPU time used.

The code for this is shown below.

public override void Run()
{
    // Set environment variables
    string polyRayPath = Path.Combine(Environment.GetEnvironmentVariable("RoleRoot"), PolyRayLocation);
    string dtaPath = Path.Combine(Environment.GetEnvironmentVariable("RoleRoot"), DTALocation);

    LocalResource rayStorage = RoleEnvironment.GetLocalResource("RayFolder");
    string localStorageRootPath = rayStorage.RootPath;

    JobQueue jobQueue = new JobQueue("renderjobs");
    JobQueue downloadQueue = new JobQueue("renderimagedownloadjobs");
    CloudRayBlob sceneBlob = new CloudRayBlob("scenes");
    CloudRayBlob imageBlob = new CloudRayBlob("images");
    RoleLifecycleDataSource roleLifecycleDataSource = new RoleLifecycleDataSource();

    Frames = 0;

    while (true)
    {
        // Get the render job from the queue
        CloudQueueMessage jobMsg = jobQueue.Get();

        if (jobMsg != null)
        {
            // Get the file details
            string sceneFile = jobMsg.AsString;
            string tgaFile = sceneFile.Replace(".pi", ".tga");
            string jpgFile = sceneFile.Replace(".pi", ".jpg");

            string sceneFilePath = Path.Combine(localStorageRootPath, sceneFile);
            string tgaFilePath = Path.Combine(localStorageRootPath, tgaFile);
            string jpgFilePath = Path.Combine(localStorageRootPath, jpgFile);

            // Copy the scene file to local storage
            sceneBlob.DownloadFile(sceneFilePath);

            // Run the ray tracer.
            string polyrayArguments =
                string.Format("\"{0}\" -o \"{1}\" -a 2", sceneFilePath, tgaFilePath);
            Process polyRayProcess = new Process();
            polyRayProcess.StartInfo.FileName =
                Path.Combine(Environment.GetEnvironmentVariable("RoleRoot"), polyRayPath);
            polyRayProcess.StartInfo.Arguments = polyrayArguments;
            polyRayProcess.Start();
            polyRayProcess.WaitForExit();

            // Convert the image
            string dtaArguments =
                string.Format(" {0} /FJ /P{1}", tgaFilePath, Path.GetDirectoryName(jpgFilePath));
            Process dtaProcess = new Process();
            dtaProcess.StartInfo.FileName =
                Path.Combine(Environment.GetEnvironmentVariable("RoleRoot"), dtaPath);
            dtaProcess.StartInfo.Arguments = dtaArguments;
            dtaProcess.Start();
            dtaProcess.WaitForExit();

            // Upload the image to blob storage
            imageBlob.UploadFile(jpgFilePath);

            // Add a download job.
            downloadQueue.Add(jpgFile);

            // Delete the render job message
            jobQueue.Delete(jobMsg);

            Frames++;
        }
        else
        {
            Thread.Sleep(1000);
        }

        // Log the worker role activity.
        roleLifecycleDataSource.Alive
            ("CloudRayWorker", RoleLifecycleDataSource.RoleLifecycleId, Frames);
    }
}

Monitoring Worker Role Instance Lifecycle

In order to get more accurate statistics about the lifecycle of the worker role instances used to render the animation, data was tracked in an Azure storage table. The following class was used to track the worker role lifecycles in Azure storage.

public class RoleLifecycle : TableServiceEntity
{
    public string ServerName { get; set; }
    public string Status { get; set; }
    public DateTime StartTime { get; set; }
    public DateTime EndTime { get; set; }
    public long SecondsRunning { get; set; }
    public DateTime LastActiveTime { get; set; }
    public int Frames { get; set; }
    public string Comment { get; set; }

    public RoleLifecycle()
    {
    }

    public RoleLifecycle(string roleName)
    {
        PartitionKey = roleName;
        RowKey = Utils.GetAscendingRowKey();
        Status = "Started";
        StartTime = DateTime.UtcNow;
        LastActiveTime = StartTime;
        EndTime = StartTime;
        SecondsRunning = 0;
        Frames = 0;
    }
}

A new instance of this class is created and added to the storage table when the role starts. It is then updated each time the worker renders a frame to record the total number of frames rendered and the total processing time. These statistics are used by the monitoring application to determine the effectiveness of use of resources in the render farm.

Rendering the Animation

The Azure solution was deployed to Windows Azure with the service configuration set to 16 worker role instances. This allows for the application to be tested in the cloud environment, and the performance of the application determined. When I demo the application at conferences and user groups I often start with 16 instances, and then scale up the application to the full 256 instances. The configuration to run 16 instances is shown below.
<?xml version="1.0" encoding="utf-8"?>
<ServiceConfiguration serviceName="CloudRay" xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceConfiguration" osFamily="1" osVersion="*">
  <Role name="CloudRayWorkerRole">
    <Instances count="16" />
    <ConfigurationSettings>
      <Setting name="DataConnectionString"
        value="DefaultEndpointsProtocol=https;AccountName=cloudraydata;AccountKey=..." />
    </ConfigurationSettings>
  </Role>
</ServiceConfiguration>

About six minutes after deploying the application, the first worker roles become active and start to render the first frames of the animation. The CloudRay Monitor application displays an icon for each worker role instance, with a number indicating the number of frames that the worker role has rendered. The statistics on the left show the number of active worker roles and statistics about the render process. The render time is the time since the first worker role became active; the CPU time is the total amount of processing time used by all worker role instances to render the frames. Five minutes after the first worker role became active, the last of the 16 worker roles activated. By this time the first seven worker roles had each rendered one frame of the animation. With 16 worker roles up and running, it can be seen that one hour and 45 minutes of CPU time has been used to render 32 frames, with a render time of just under 10 minutes. At this rate it would take over 10 hours to render the 2,000 frames of the full animation. In order to complete the animation in under an hour, more processing power will be required. Scaling the render farm from 16 instances to 256 instances is easy using the new management portal. The slider is set to 256 instances, and the configuration saved. We do not need to re-deploy the application, and the 16 instances that are up and running will not be affected. Alternatively, the configuration file for the Azure service could be modified to specify 256 instances.

<?xml version="1.0" encoding="utf-8"?>
<ServiceConfiguration serviceName="CloudRay" xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceConfiguration" osFamily="1" osVersion="*">
  <Role name="CloudRayWorkerRole">
    <Instances count="256" />
    <ConfigurationSettings>
      <Setting name="DataConnectionString"
        value="DefaultEndpointsProtocol=https;AccountName=cloudraydata;AccountKey=..." />
    </ConfigurationSettings>
  </Role>
</ServiceConfiguration>

Six minutes after the new configuration has been applied, 75 new worker roles have activated and are processing their first frames. Five minutes later the full configuration of 256 worker roles is up and running. We can see that the average rate of frame rendering has increased from 3 to 12 frames per minute, and that over 17 hours of CPU time has been utilized in 23 minutes. In this test the time to provision 140 worker roles was about 11 minutes, which works out at about one every five seconds. We are now half way through the rendering, with 1,000 frames complete. This has utilized just under three days of CPU time in a little over 35 minutes. The animation is now complete, with 2,000 frames rendered in a little over 52 minutes. The CPU time used by the 256 worker roles is 6 days, 7 hours and 22 minutes, with an average frame rate of 38 frames per minute. The rendering of the last 1,000 frames took 16 minutes 27 seconds, which works out at a rendering rate of 60 frames per minute.
The frame counts in the server instances indicate that the use of a queue to distribute the workload has been very effective in distributing the load across the 256 worker role instances. The 16 instances that were deployed first have rendered between 11 and 13 frames each, whilst the 240 instances that were added when the application was scaled have rendered between 6 and 9 frames each.

Completed Animation

I’ve uploaded the completed animation to YouTube; a low resolution preview is shown below. Pin Board Animation Created using Windows Kinect and 256 Windows Azure Worker Roles. The animation can be viewed in 1280x720 resolution at the following link: http://www.youtube.com/watch?v=n5jy6bvSxWc

Effective Use of Resources

According to the CloudRay monitor statistics the animation took 6 days, 7 hours and 22 minutes of CPU time to render; this works out at 152 hours of compute time, rounded up to the nearest hour. As the usage for the worker role instances is billed for the full hour, it may have been possible to render the animation using fewer than 256 worker roles. When deciding the optimal usage of resources, the time required to provision and start the worker roles must also be considered. In the demo I started with 16 worker roles, and then scaled the application to 256 worker roles. It would have been more optimal to start the application with maybe 200 worker roles, and utilize the full hour that I was being billed for. This would, however, have prevented showing the ease of scalability of the application. The new management portal displays the CPU usage across the worker roles in the deployment. The average CPU usage across all instances is 93.27%, with over 99% used when all the instances are up and running. This shows that the worker role resources are being used very effectively.

Grid Computing Scenarios

Although I am using this scenario for a hobby project, there are many scenarios where a large amount of compute power is required for a short period of time. Windows Azure provides a great platform for developing these types of grid computing applications, and can work out very cost effective.

·         Windows Azure can provide massive compute power, on demand, in a matter of minutes.
·         The use of queues to manage the load balancing of jobs between role instances is a simple and effective solution.
·         Using a cloud-computing platform like Windows Azure allows proof-of-concept scenarios to be tested and evaluated on a very low budget.
·         No charges for inbound data transfer make the uploading of large data sets to Windows Azure Storage services cost effective. (Transaction charges still apply.)

Tips for using Windows Azure for Grid Computing Scenarios

I found the implementation of a render farm using Windows Azure a fairly simple scenario to implement. I was impressed by the ease of scalability that Azure provides, and by the short time that the application took to scale from 16 to 256 worker role instances. In this case it was around 13 minutes; in other tests it took between 10 and 20 minutes. The following tips may be useful when implementing a grid computing project in Windows Azure.

·         Using an Azure Storage queue to load-balance the units of work across multiple worker roles is simple and very effective. The design I have used in this scenario could easily scale to many thousands of worker role instances.
·         Windows Azure accounts are typically limited to 20 cores.
If you need to use more than this, a call to support and a credit card check will be required.
·         Be aware of how the billing model works. You will be charged for worker role instances for the full clock hour in which the instance is deployed. Schedule the workload to start just after the clock hour has started.
·         Monitor the utilization of the resources you are provisioning, and ensure that you are not paying for worker roles that are idle.
·         If you are deploying third party applications to worker roles, you may well run into licensing issues. Purchasing software licenses on a per-processor basis when using hundreds of processors for a short time period would not be cost effective.
·         Third party software may also require installation onto the worker roles, which can be accomplished using start-up tasks. Bear in mind that adding a startup task and a possible re-boot will add to the time required for the worker role instance to start and activate. An alternative may be to use a prepared VM and use VM roles.
·         Consider using the Windows Azure Autoscaling Application Block (WASABi) to autoscale the worker roles in your application. When using a large number of worker roles, the utilization must be carefully monitored; if the scaling algorithms are not optimal it could get very expensive!

    Read the article

  • .NET Security Part 2

    - by Simon Cooper
    So, how do you create partial-trust appdomains? Where do you come across them? There are two main situations in which your assembly runs as partially-trusted using the Microsoft .NET stack: Creating a CLR assembly in SQL Server with anything other than the UNSAFE permission set. The permissions available in each permission set are given here. Loading an assembly in ASP.NET in any trust level other than Full. Information on ASP.NET trust levels can be found here. You can configure the specific permissions available to assemblies using ASP.NET policy files. Alternatively, you can create your own partially-trusted appdomain in code and directly control the permissions and the full-trust API available to the assemblies you load into the appdomain. This is the scenario I’ll be concentrating on in this post. Creating a partially-trusted appdomain There is a single overload of AppDomain.CreateDomain that allows you to specify the permissions granted to assemblies in that appdomain – this one. This is the only call that allows you to specify a PermissionSet for the domain. All the other calls simply use the permissions of the calling code. If the permissions are restricted, then the resulting appdomain is referred to as a sandboxed domain. There are three things you need to create a sandboxed domain: The specific permissions granted to all assemblies in the domain. The application base (aka working directory) of the domain. The list of assemblies that have full-trust if they are loaded into the sandboxed domain. The third item is what allows us to have a fully-trusted API that is callable by partially-trusted code. I’ll be looking at the details of this in a later post. Granting permissions to the appdomain Firstly, the permissions granted to the appdomain. This is encapsulated in a PermissionSet object, initialized either with no permissions or full-trust permissions. For sandboxed appdomains, the PermissionSet is initialized with no permissions, then you add permissions you want assemblies loaded into that appdomain to have by default: PermissionSet restrictedPerms = new PermissionSet(PermissionState.None); // all assemblies need Execution permission to run at all restrictedPerms.AddPermission( new SecurityPermission(SecurityPermissionFlag.Execution)); // grant general read access to C:\config.xml restrictedPerms.AddPermission( new FileIOPermission(FileIOPermissionAccess.Read, @"C:\config.xml")); // grant permission to perform DNS lookups restrictedPerms.AddPermission( new DnsPermission(PermissionState.Unrestricted)); It’s important to point out that the permissions granted to an appdomain, and so to all assemblies loaded into that appdomain, are usable without needing to go through any SafeCritical code (see my last post if you’re unsure what SafeCritical code is). That is, partially-trusted code loaded into an appdomain with the above permissions (and so running under the Transparent security level) is able to create and manipulate a FileStream object to read from C:\config.xml directly. It is only for operations requiring permissions that are not granted to the appdomain that partially-trusted code is required to call a SafeCritical method that then asserts the missing permissions and performs the operation safely on behalf of the partially-trusted code. 
The application base of the domain

This is simply set as a property on an AppDomainSetup object, and is used as the default directory assemblies are loaded from: AppDomainSetup appDomainSetup = new AppDomainSetup { ApplicationBase = @"C:\temp\sandbox", }; If you’ve read the documentation around sandboxed appdomains, you’ll notice that it mentions a security hole if this parameter is set incorrectly. I’ll be looking at this, and other pitfalls that will break the sandbox when using sandboxed appdomains, in a later post.

Full-trust assemblies in the appdomain

Finally, we need the strong names of the assemblies that, when loaded into the appdomain, will be run as full-trust, regardless of the permissions specified on the appdomain. These assemblies will contain methods and classes decorated with SafeCritical and Critical attributes. I’ll be covering the details of creating full-trust APIs for partial-trust appdomains in a later post. This is how you get the strongnames of an assembly to be executed as full-trust in the sandbox: // get the Assembly object for the assembly Assembly assemblyWithApi = ... // get the StrongName from the assembly's collection of evidence StrongName apiStrongName = assemblyWithApi.Evidence.GetHostEvidence<StrongName>();

Creating the sandboxed appdomain

So, putting these three together, you create the appdomain like so: AppDomain sandbox = AppDomain.CreateDomain( "Sandbox", null, appDomainSetup, restrictedPerms, apiStrongName); You can then load and execute assemblies in this appdomain like any other. For example, to load an assembly into the appdomain and get an instance of the Sandboxed.Entrypoint class, implementing IEntrypoint, you do this: IEntrypoint o = (IEntrypoint)sandbox.CreateInstanceFromAndUnwrap( @"C:\temp\sandbox\SandboxedAssembly.dll", "Sandboxed.Entrypoint"); // call the Execute method on this object within the sandbox o.Execute(); The second parameter to CreateDomain is for security evidence used in the appdomain. This was a feature of the .NET 2 security model, and has been (mostly) obsoleted in the .NET 4 model. Unless the evidence is needed elsewhere (eg. isolated storage), you can pass in null for this parameter.

Conclusion

That’s the basics of sandboxed appdomains. The most important object is the PermissionSet that defines the permissions available to assemblies running in the appdomain; it is this object that defines the appdomain as full or partial-trust. The appdomain also needs a default directory used for assembly lookups as the ApplicationBase parameter, and you can specify an optional list of the strongnames of assemblies that will be given full-trust permissions if they are loaded into the sandboxed appdomain. Next time, I’ll be looking closer at full-trust assemblies running in a sandboxed appdomain, and what you need to do to make an API available to partial-trust code.

    Read the article

  • ct.sym steals the ASM class

    - by Geertjan
Some mild consternation on the Twittersphere yesterday. Marcus Lagergren not being able to find the ASM classes in JDK 8 in NetBeans IDE: And there's no such problem in Eclipse (and apparently in IntelliJ IDEA). Help, does NetBeans (despite being incredibly awesome) suck, after all? The truth of the matter is that there's something called "ct.sym" in the JDK. When javac is compiling code, it doesn't link against rt.jar. Instead, it uses a special symbol file lib/ct.sym with class stubs. Internal JDK classes are not put in that symbol file, since those are internal classes. You shouldn't want to use them, at all. However, what if you're Marcus Lagergren who DOES need these classes? I.e., he's working on the internal JDK classes and hence needs to have access to them. Fair enough that the general Java population can't access those classes, since they're internal implementation classes that could be changed anytime and one wouldn't want all unknown clients of those classes to start breaking once changes are made to the implementation, i.e., this is the rt.jar's internal class protection mechanism. But, again, we're now Marcus Lagergren and not the general Java population. For the solution, read Jan Lahoda, NetBeans Java Editor guru, here: https://netbeans.org/bugzilla/show_bug.cgi?id=186120 In particular, take note of this: AFAIK, the ct.sym is new in JDK6. It contains stubs for all classes that existed in JDK5 (for compatibility with existing programs that would use private JDK classes), but does not contain implementation classes that were introduced in JDK6 (only API classes). This is to prevent application developers to accidentally use JDK's private classes (as such applications would be unportable and may not run on future versions of JDK). Note that this is not really a NB thing - this is the behavior of javac from the JDK. I do not know about any way to disable this except deleting ct.sym or the option mentioned above. Regarding loading the classes: JVM uses two classpath's: classpath and bootclasspath. rt.jar is on the bootclasspath and has precedence over anything on the "custom" classpath, which is used by the application. The usual way to override classes on bootclasspath is to start the JVM with "-Xbootclasspath/p:" option, which prepends the given jars (and presumably also directories) to bootclasspath. Hence, let's take the first option, the simpler one, and simply delete the "ct.sym" file. Again, only because we need to work with those internal classes as developers of the JDK, not because we want to hack our way around "ct.sym", which would mean you'd not have portable code at the end of the day. Go to the JDK 8 lib folder and you'll find the file: Delete it. Start NetBeans IDE again, either on JDK 7 or JDK 8 (it doesn't make a difference for these purposes), create a new Java application (or use an existing one), make sure you have set the JDK above as the JDK of the application, and hey presto: The above obviously assumes you have a build of JDK 8 that actually includes the ASM package. And below you can see that not only are the classes found but my build succeeded, even though I'm using internal JDK classes. The yellow markings in the sidebar mean that the classes are imported but not used in the code, where normally, if I hadn't removed "ct.sym", I would have seen red error markings instead, and the code wouldn't have compiled. Note: I've tried setting "-XDignore.symbol.file" in "netbeans.conf" and in other places, but so far haven't got that to work.
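As an editorial aside (not part of the original post): the flag does work when javac is invoked directly from the command line, so the symbol file can be bypassed per compilation instead of being deleted. A quick check, with a hypothetical class that imports an internal API:

javac -XDignore.symbol.file UsesInternalApi.java

With that flag, javac resolves classes against rt.jar rather than lib/ct.sym, so internal classes compile; the portability caveats below still apply.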
Simply deleting the "ct.sym" file (or backing it up somewhere and putting it back when needed) is quite clearly the most straightforward solution. Ultimately, if you want to be able to use those internal classes while still having portable code, do you know what you need to do? You need to create a JDK bug report stating that you need an internal class to be added to "ct.sym". Probably you'll get a motivation back stating WHY that internal class isn't supposed to be used externally. There must be a reason why those classes aren't available for external usage, otherwise they would have been added to "ct.sym". So, now the only remaining question is why the Eclipse compiler doesn't hide the internal JDK classes. Apparently the Eclipse compiler ignores the "ct.sym" file. In other words, at the end of the day, far from being a bug in NetBeans... we have now found a (pretty enormous, I reckon) bug in Eclipse. The Eclipse compiler does not protect you from using internal JDK classes, and the code that you create in Eclipse may not work with future releases of the JDK, since the JDK team is simply going to be changing those classes that are not found in the "ct.sym" file while assuming (correctly, thanks to the presence of the "ct.sym" mechanism) that no code in the world, other than JDK code, is tied to those classes.

    Read the article

  • Tuning Red Gate: #4 of Some

    - by Grant Fritchey
First time connecting to these servers directly (keys to the kingdom, bwa-ha-ha-ha. oh, excuse me), so I'm going to take a look at the server properties, just to see if there are any issues there. Max memory is set, cool, first possible silly mistake clear. In fact, these look to be nicely set up. Oh, I'd like to see the ANSI Standards set by default, but it's not a big deal. The default location for database data is the F:\ drive, where I saw all the activity last time. Cool, the people maintaining the servers in our company listen: parallelism threshold is set to 35 and optimize for ad hoc is enabled. No shocks, no surprises. The basic setup is appropriate. On to the problem database. Nothing wrong in the properties. The database is in SIMPLE recovery, but I think it's a reporting system, so no worries there. Again, I'd prefer to see the ANSI settings for connections, but that's the worst thing I can see. Time to look at the queries, tables, indexes and statistics, because all the information I've collected over the last several days suggests that we're not looking at a systemic problem (except possibly not enough memory), but at the traditional tuning issues. I just want to note that I started looking at the system, not the queries. So should you when tuning your environment. I know, from the data collected through SQL Monitor, what my top poor performing queries are, and the most frequently called, etc. I'm starting with the most frequently called. I'm going to get the execution plan for this thing out of the cache (although, with the cache dumping constantly, I might not get it). And it's not there. Called 1.3 million times over the last 3 days, but it's not in cache. Wow. OK. I'll see what's in cache for this database:

SELECT  deqs.creation_time,
        deqs.execution_count,
        deqs.max_logical_reads,
        deqs.max_elapsed_time,
        deqs.total_logical_reads,
        deqs.total_elapsed_time,
        deqp.query_plan,
        SUBSTRING(dest.text, (deqs.statement_start_offset / 2) + 1,
                  (deqs.statement_end_offset - deqs.statement_start_offset) / 2
                  + 1) AS QueryStatement
FROM    sys.dm_exec_query_stats AS deqs
        CROSS APPLY sys.dm_exec_sql_text(deqs.sql_handle) AS dest
        CROSS APPLY sys.dm_exec_query_plan(deqs.plan_handle) AS deqp
WHERE   dest.dbid = DB_ID('Warehouse')
        AND deqs.statement_end_offset > 0
        AND deqs.statement_start_offset > 0
ORDER BY deqs.max_logical_reads DESC ;

And looking at the most expensive operation, we have our first bad boy: multiple table scans against very large sets of data and a sort operation. A sort operation? It's an insert. Oh, I see, the table is a heap, so it's doing an insert, then sorting the data and then inserting into the primary key. First question: why isn't this a clustered index? Let's look at some more of the queries. The next one is deceiving. Here's the query plan: You're thinking to yourself, what's the big deal? Well, what if I told you that this thing had 8,036,318 reads? I know, you're looking at skinny little pipes. Know why? Table variable. Estimated number of rows = 1. Actual number of rows? Well, I'm betting several more than one, considering it's read 8 MILLION pages off the disk in a single execution. We have a serious and real tuning candidate. Oh, and I missed this: it's loading the table variable from a user defined function. Let me check, let me check. YES! A multi-statement table valued user defined function. And another tuning opportunity.
This one's a beauty, seriously. Did I also mention that they're doing a hash against all the columns in the physical table? I'm sure that won't lead to scans of a 500,000 row table, no, not at all. OK. I lied. Of course it is. At least it's on the top part of the Loop, which means the scan is only executed once. I just did a cursory check on the next several poor performers: all calling the UDF. I think I found a big tuning opportunity. At this point, I'm typing up internal emails for the company. Someone just had their baby called ugly. In addition to a series of suggested changes that we need to implement, I'm also apologizing for being such an unkind monster as to question whether that third eye & those flippers belong on such an otherwise lovely child.
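As an editorial footnote (not from the original post): the standard rewrite for the multi-statement table valued function flagged above is to convert it to an inline table valued function, so the optimizer expands it into the calling query and uses real statistics instead of the one-row table variable estimate. A sketch with hypothetical object names:

-- Multi-statement form: results pass through a table variable,
-- which the optimizer estimates at one row regardless of reality.
CREATE FUNCTION dbo.GetOrderLines (@OrderId INT)
RETURNS @Result TABLE (LineId INT, Amount MONEY)
AS
BEGIN
    INSERT INTO @Result
    SELECT LineId, Amount
    FROM dbo.OrderLines
    WHERE OrderId = @OrderId;
    RETURN;
END
GO

-- Inline form: expanded into the caller like a view, with full statistics.
CREATE FUNCTION dbo.GetOrderLines (@OrderId INT)
RETURNS TABLE
AS
RETURN
    SELECT LineId, Amount
    FROM dbo.OrderLines
    WHERE OrderId = @OrderId;
GO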

    Read the article

  • ASSIMP in my program is much slower to import than ASSIMP view program

    - by Marco
    The problem is really simple: if I try to load a big model (around 200,000 vertices) in my software with the function aiImportFileExWithProperties, it takes more than one minute. If I try to load the very same model with ASSIMP view, it takes 2 seconds. For this comparison, both my software and Assimp view are using the 64-bit dll version of the library, compiled by myself (Assimp64.dll). This is the relevant piece of code in my software:

// default pp steps
unsigned int ppsteps =
    aiProcess_CalcTangentSpace | // calculate tangents and bitangents if possible
    aiProcess_JoinIdenticalVertices | // join identical vertices/ optimize indexing
    aiProcess_ValidateDataStructure | // perform a full validation of the loader's output
    aiProcess_ImproveCacheLocality | // improve the cache locality of the output vertices
    aiProcess_RemoveRedundantMaterials | // remove redundant materials
    aiProcess_FindDegenerates | // remove degenerated polygons from the import
    aiProcess_FindInvalidData | // detect invalid model data, such as invalid normal vectors
    aiProcess_GenUVCoords | // convert spherical, cylindrical, box and planar mapping to proper UVs
    aiProcess_TransformUVCoords | // preprocess UV transformations (scaling, translation ...)
    aiProcess_FindInstances | // search for instanced meshes and remove them by references to one master
    aiProcess_LimitBoneWeights | // limit bone weights to 4 per vertex
    aiProcess_OptimizeMeshes | // join small meshes, if possible;
    aiProcess_SplitByBoneCount | // split meshes with too many bones. Necessary for our (limited) hardware skinning shader
    0;

cout << "Loading " << pFile << "... ";

aiPropertyStore* props = aiCreatePropertyStore();
aiSetImportPropertyInteger(props,AI_CONFIG_IMPORT_TER_MAKE_UVS,1);
aiSetImportPropertyFloat(props,AI_CONFIG_PP_GSN_MAX_SMOOTHING_ANGLE,80.f);
aiSetImportPropertyInteger(props,AI_CONFIG_PP_SBP_REMOVE, aiPrimitiveType_LINE | aiPrimitiveType_POINT);
aiSetImportPropertyInteger(props,AI_CONFIG_GLOB_MEASURE_TIME,1);
//aiSetImportPropertyInteger(props,AI_CONFIG_PP_PTV_KEEP_HIERARCHY,1);

// Call ASSIMPs C-API to load the file
scene = (aiScene*)aiImportFileExWithProperties(pFile.c_str(),
    ppsteps | /* default pp steps */
    aiProcess_GenSmoothNormals | // generate smooth normal vectors if not existing
    aiProcess_SplitLargeMeshes | // split large, unrenderable meshes into submeshes
    aiProcess_Triangulate | // triangulate polygons with more than 3 edges
    //aiProcess_ConvertToLeftHanded | // convert everything to D3D left handed space
    aiProcess_SortByPType | // make 'clean' meshes which consist of a single type of primitives
    0,
    NULL,
    props);
aiReleasePropertyStore(props);

if(!scene){
    cout << aiGetErrorString() << endl;
    return 0;
}

This is the relevant piece of code in assimp view:

// default pp steps
unsigned int ppsteps =
    aiProcess_CalcTangentSpace | // calculate tangents and bitangents if possible
    aiProcess_JoinIdenticalVertices | // join identical vertices/ optimize indexing
    aiProcess_ValidateDataStructure | // perform a full validation of the loader's output
    aiProcess_ImproveCacheLocality | // improve the cache locality of the output vertices
    aiProcess_RemoveRedundantMaterials | // remove redundant materials
    aiProcess_FindDegenerates | // remove degenerated polygons from the import
    aiProcess_FindInvalidData | // detect invalid model data, such as invalid normal vectors
    aiProcess_GenUVCoords | // convert spherical, cylindrical, box and planar mapping to proper UVs
    aiProcess_TransformUVCoords | // preprocess UV transformations (scaling, translation ...)
    aiProcess_FindInstances | // search for instanced meshes and remove them by references to one master
    aiProcess_LimitBoneWeights | // limit bone weights to 4 per vertex
    aiProcess_OptimizeMeshes | // join small meshes, if possible;
    aiProcess_SplitByBoneCount | // split meshes with too many bones. Necessary for our (limited) hardware skinning shader
    0;

aiPropertyStore* props = aiCreatePropertyStore();
aiSetImportPropertyInteger(props,AI_CONFIG_IMPORT_TER_MAKE_UVS,1);
aiSetImportPropertyFloat(props,AI_CONFIG_PP_GSN_MAX_SMOOTHING_ANGLE,g_smoothAngle);
aiSetImportPropertyInteger(props,AI_CONFIG_PP_SBP_REMOVE,nopointslines ? aiPrimitiveType_LINE | aiPrimitiveType_POINT : 0 );
aiSetImportPropertyInteger(props,AI_CONFIG_GLOB_MEASURE_TIME,1);
//aiSetImportPropertyInteger(props,AI_CONFIG_PP_PTV_KEEP_HIERARCHY,1);

// Call ASSIMPs C-API to load the file
g_pcAsset->pcScene = (aiScene*)aiImportFileExWithProperties(g_szFileName,
    ppsteps | /* configurable pp steps */
    aiProcess_GenSmoothNormals | // generate smooth normal vectors if not existing
    aiProcess_SplitLargeMeshes | // split large, unrenderable meshes into submeshes
    aiProcess_Triangulate | // triangulate polygons with more than 3 edges
    aiProcess_ConvertToLeftHanded | // convert everything to D3D left handed space
    aiProcess_SortByPType | // make 'clean' meshes which consist of a single type of primitives
    0,
    NULL,
    props);
aiReleasePropertyStore(props);

As you can see, the code is nearly identical because I copied it from assimp view. What could be the reason for such a difference in performance? Both programs are using the same dll, Assimp64.dll (compiled on my computer with vc++ 2010 express), and the same function aiImportFileExWithProperties to load the model, so I assume that the actual code employed is the same. How is it possible that the function aiImportFileExWithProperties is 100 times slower when called by my software than when called by assimp view? What am I missing? I am not good with dlls, dynamic and static libraries, so I might be missing something obvious.

------------------------------ UPDATE

I found out the reason why the code is going slower. Basically I was running my software with "Start debugging" in VC++ 2010 Express. If I run the code outside VC++ 2010 I get the same performance as assimp view. However now I have a new question. Why does the dll perform slower under VC++ debugging? I compiled it in release mode without debugging information. Is there any way to have the dll go fast in debug mode, i.e. not debugging the dll? Because I am interested in debugging only my own code, not the dll that I assume is already working fine. I do not want to wait 2 minutes every time I want to load my software to debug. Does this request make sense?
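A plausible answer, added here as an editorial note rather than part of the original question: when a process is launched under the Visual Studio debugger, Windows enables the debug heap for the whole process, including release-built DLLs loaded into it, and allocation-heavy work such as importing a 200,000-vertex model can slow down by orders of magnitude. A commonly suggested workaround is to disable the debug heap by adding an environment variable in the project's debugging settings (Project Properties > Debugging > Environment):

_NO_DEBUG_HEAP=1

This keeps the debugger attached to your own code while letting allocations inside the DLL run at normal speed.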

    Read the article

  • NoSQL Java API for MySQL Cluster: Questions & Answers

    - by Mat Keep
    The MySQL Cluster engineering team recently ran a live webinar, available now on-demand, demonstrating the ClusterJ and ClusterJPA NoSQL APIs for MySQL Cluster, and how these can be used in building real-time, high scale Java-based services that require continuous availability. Attendees asked a number of great questions during the webinar, and I thought it would be useful to share those here, so others are also able to learn more about the Java NoSQL APIs. First, a little bit about why we developed these APIs and why they are interesting to Java developers.

ClusterJ and ClusterJPA

ClusterJ is a Java interface to MySQL Cluster that provides either a static or dynamic domain object model, similar to the data model used by JDO, JPA, and Hibernate. A simple API gives users extremely high performance for common operations: insert, delete, update, and query. ClusterJPA works with ClusterJ to extend functionality, including:
- Persistent classes
- Relationships
- Joins in queries
- Lazy loading
- Table and index creation from object model

By eliminating data transformations via SQL, users get lower data access latency and higher throughput. In addition, Java developers have a more natural programming method to directly manage their data, with a complete, feature-rich solution for Object/Relational Mapping. As a result, the development of Java applications is simplified, with faster development cycles resulting in accelerated time to market for new services. MySQL Cluster offers multiple NoSQL APIs alongside Java:
- Memcached for a persistent, high performance, write-scalable Key/Value store
- HTTP/REST via an Apache module
- C++ via the NDB API for the lowest absolute latency

Developers can use SQL as well as NoSQL APIs for access to the same data set via multiple query patterns – from simple Primary Key lookups or inserts to complex cross-shard JOINs using Adaptive Query Localization. Marrying NoSQL and SQL access to an ACID-compliant database offers developers a number of benefits. MySQL Cluster’s distributed, shared-nothing architecture with auto-sharding and real time performance makes it a great fit for workloads requiring high volume OLTP. Users also get the added flexibility of being able to run real-time analytics across the same OLTP data set for real-time business insight. OK – hopefully you now have a better idea of why ClusterJ and JPA are available. Now, for the Q&A.

Q & A

Q. Why would I use Connector/J vs. ClusterJ?
A. Partly it's a question of whether you prefer to work with SQL (Connector/J) or objects (ClusterJ). Performance of ClusterJ will be better as there is no need to pass through the MySQL Server. A ClusterJ operation can only act on a single table (e.g. no joins) - ClusterJPA extends that capability.

Q. Can I mix different APIs (ie ClusterJ, Connector/J) in our application for different query types?
A. Yes. You can mix and match all of the API types: SQL, JDBC, ODBC, ClusterJ, Memcached, REST, C++. They all access the exact same data in the data nodes. Update through one API and new data is instantly visible to all of the others.

Q. How many TCP connections would a SessionFactory instance create for a cluster of 8 data nodes?
A. SessionFactory has a connection to the mgmd (management node) but otherwise is just a vehicle to create Sessions. Without using connection pooling, a SessionFactory will have one connection open with each data node. Using optional connection pooling allows multiple connections from the SessionFactory to increase throughput.

Q.
OK – hopefully you now have a better idea of why ClusterJ and ClusterJPA are available. Now, for the Q&A.

Q & A

Q. Why would I use Connector/J vs. ClusterJ?
A. Partly it's a question of whether you prefer to work with SQL (Connector/J) or objects (ClusterJ). Performance of ClusterJ will be better, as there is no need to pass through the MySQL Server. A ClusterJ operation can only act on a single table (e.g. no joins) – ClusterJPA extends that capability.

Q. Can I mix different APIs (i.e. ClusterJ, Connector/J) in our application for different query types?
A. Yes. You can mix and match all of the API types: SQL, JDBC, ODBC, ClusterJ, Memcached, REST, C++. They all access the exact same data in the data nodes. Update through one API and the new data is instantly visible to all of the others.

Q. How many TCP connections would a SessionFactory instance create for a cluster of 8 data nodes?
A. SessionFactory has a connection to the mgmd (management node) but otherwise is just a vehicle to create Sessions. Without connection pooling, a SessionFactory will have one connection open with each data node. Using optional connection pooling allows multiple connections from the SessionFactory to increase throughput.

Q. Can you give details of how ClusterJ optimizes sharding to enhance performance of distributed query processing?
A. Each data node in a cluster runs a Transaction Coordinator (TC), which begins and ends the transaction, but also serves as a resource to operate on the result rows. While an API node (such as a ClusterJ process) can send queries to any TC/data node, there are performance gains if the TC is where most of the result data is stored. ClusterJ computes the shard (partition) key to choose the data node where the row resides as the TC.

Q. What happens if we perform two primary key lookups within the same transaction? Are they sent to the data node in one transaction?
A. ClusterJ will send identical PK lookups to the same data node.

Q. How is distributed query processing handled by MySQL Cluster?
A. If the data is split between data nodes, then all of the information will be transparently combined and passed back to the application. The session will connect to a data node – typically by hashing the primary key – which then interacts with its neighboring nodes to collect the data needed to fulfil the query.

Q. Can I use Foreign Keys with MySQL Cluster?
A. Support for Foreign Keys is included in the MySQL Cluster 7.3 Early Access release.

Summary

The NoSQL Java APIs are packaged with MySQL Cluster, available for download here, so feel free to take them for a spin today!

Key Resources
- MySQL Cluster on-line demo
- MySQL ClusterJ and JPA on-demand webinar
- MySQL ClusterJ and JPA documentation
- MySQL ClusterJ and JPA whitepaper and tutorial

    Read the article

  • About Me

    - by Jeffrey West
I’m new to blogging. This is the second blog post that I have written, and before I go too much further I wanted the readers of my blog to know a bit more about me…

Kid’s Stuff

By trade, I am a programmer (or coder, developer, engineer, architect, etc.). I started programming when I was 12 years old. When I was 7, we got our first ‘family’ computer – an Apple IIc. It was great to play games on, and of course, what else was a 7-year-old going to do with it? I did have one problem with it, though. When I put in my 5.25” floppy to play a game, sometimes, instead of loading my game, I would get a mysterious ‘]’ on the screen with a flashing cursor. This, of course, was not my game. Much like the standard ‘Microsoft fix’ of rebooting, back then you would take the floppy out, shake it, restart the computer, and pray for a different result.

One day, I learned at school that I could topple my nemesis – the ‘]’ and flashing cursor – by typing ‘load’ and pressing Enter. Most of the time, this would load my game and then I would get to play. Problem solved. However, I began to wonder – what else can I make it do?

When I was in 5th grade, my dad got a bright idea to buy me a Tandy 1000HX. He didn’t know what I was going to do with it, and neither did I. Worst of all, my mom wasn’t happy about buying a 5th grader a $1,000 computer. Nonetheless, over time I learned how to write simple BASIC programs out of the back of my math book:

    10 x=5
    20 y=6
    30 PRINT x+y

That was fun for all of about 5 minutes. I needed more – more challenges, more things that I could make the computer do. In order to quench this thirst, my parents sent me to National Computer Camps in Connecticut. It was one of the best experiences of my childhood, and I spent 3 weeks each summer after that learning BASIC, Pascal, Turbo C and some C++. There weren’t many kids at the time who knew anything about computers, and let’s just say my knowledge of and interest in computers didn’t score me many ‘cool’ points. My experiences at NCC set me on the path that I find myself on now, and I am very thankful for the experience.

Real Life

I have held various positions in the past at different levels within the IT layer cake. I started out as a Software Developer for a startup in the Dallas, TX area, building software for statistical process control and sampling in semiconductor testing. I was the second Java developer that was hired, and the ninth employee overall, so I got a great deal of experience developing software. Since there weren’t that many people in the organization, I also got a lot of field experience, which meant that if I screwed up the code, I got yelled at (figuratively) by both my boss AND the customer. Fun Times! What made it better was that I got to help run pilot programs in Taiwan, Singapore, Malaysia and Malta. Getting yelled at in Taiwan is slightly less annoying than getting yelled at in Dallas…

I spent the next 5 years at Accenture doing systems integration in the ‘SOA’ group. I joined as a Consultant and left as a Senior Manager. I started out writing code in WebLogic Integration, and left after I wrapped up a project where I led a team of 25 to develop the next generation of a digital media platform to deliver HD content in a digital format. At Accenture, I had the pleasure of working with some truly amazing people – mentoring some and learning from many others – and on some incredible real-world IT projects.
Given my background with the BEA stack of products, I was often called in to troubleshoot and tune WebLogic, ALBPM and ALSB installations, and I have logged many hours digging through thread dumps, running performance tests with SoapUI, and decompiling Java classes we didn’t have the source for, so I could see what was going on in the code.

I am now a Senior Principal Product Manager at Oracle in the Application Grid practice. The term ‘Application Grid’ refers to a collection of software and hardware products within Oracle that enables customers to build horizontally scalable systems. This collection of products includes WebLogic, GlassFish, Coherence, Tuxedo and the JRockit/HotSpot JVMs (HotSprocket, maybe?). Now, with the introduction of Exalogic, it has grown to include hardware as well.

Wrapping it up…

I love technology and have a diverse background, ranging from software development to hardware and network architecture and tuning. I have held certifications including Oracle Certified DBA, MCSE and Cisco Certified Network Professional (CCNP), among others, and I have put those to great use over my career. I am excited about programming and technology, and I enjoy helping people learn and be successful. If you are having challenges with WebLogic, BPM or Service Bus, feel free to reach out to me and I’ll be happy to help as I have time.

Thanks for stopping by!

--Jeff

    Read the article
