Search Results

Search found 6730 results on 270 pages for 'loaded'.


  • Silverlight Cream for May 04, 2010 -- #855

    - by Dave Campbell
    In this Issue: John Papa, Adam Kinney, Mike Taulty, Kirupa, Gunnar Peipman, Mike Snow(-2-, -3-), Jesse Liberty, and Lee.

    Shoutout: Jeff Wilcox announced Silverlight Unit Test Framework: New version in the April 2010 Silverlight Toolkit.

    From SilverlightCream.com:

    Silverlight TV 23: MVP Q&A with WWW (Wildermuth, Wahlin and Ward)
    John Papa has Silverlight TV 23 up, a panel discussion between Shawn Wildermuth, Dan Wahlin, Ward Bell and John... wow... what a crew!

    Design-time Resources in Expression Blend 4 RC
    Adam Kinney reports on the new feature of the Expression Blend 4 RC that loads resources at design time. Adam also has a project available to demonstrate the concepts he's explaining.

    Silverlight and WCF RIA Services (1 - Overview)
    Mike Taulty is starting a series on WCF RIA Services. This first one is an overview, and it looks to be a good series, as expected.

    Introduction to Sample Data - Page 1
    Kirupa has a great 5-part post up about sample data in Expression Blend.

    Windows Phone 7 development: Using WebBrowser control
    Gunnar Peipman posted about using the web browser control in WP7 to display RSS data. Good stuff, and all the code too.

    Silverlight Tip of the Day #10 – Converting Client IP to Geographical Location
    Mike Snow's Tip #10 is about taking an IP address and getting a geographical location from it. Combine this with his Tip #9, which retrieves the IP address.

    Silverlight Tip of the Day #11 – Deploying Silverlight Applications with WCF web services
    Mike Snow's Tip #11 is much bigger than most... it's almost an end-to-end solution for creating and deploying a WCF service, including resolving problems.

    Silverlight Tip of the Day #12 – Getting an Image's Source File Name
    Mike Snow also has Tip #12 up, and it's a quick one on getting the original source file name for an image you've loaded.

    Screen Scraping – When All You Have Is A Hammer…
    Jesse Liberty posted his solution to a self-imposed problem and ended up writing a 'mini tutorial on using Silverlight for creating desk-top utilities'... all with source.

    RIA services and combobox lookups
    Lee has a post up about RIA Services and setting up comboboxes for lookups. Lots of source in the post and a full project download.

    Stay in the 'Light!
    Twitter SilverlightNews | Twitter WynApse | WynApse.com | Tagged Posts | SilverlightCream
    Join me @ SilverlightCream | Phoenix Silverlight User Group
    Technorati Tags: Silverlight    Silverlight 3    Silverlight 4    Windows Phone MIX10


  • Certificate Revocation checking affecting system performance [migrated]

    - by Colm Clarke
    I have a .NET 3.5 desktop application that had been showing periodic slowdowns whenever the test machine it was on was out of the office. I managed to replicate the error on a machine in the office without an internet connection, but it was only when I used ANTS Performance Profiler that I got a clearer picture of what was going on. In ANTS I saw a "Waiting for synchronization" taking up to 16 seconds that corresponded to the delay I could see in the application when NHibernate tried to load the System.Data.SqlServerCE.dll assembly. If I tried the action again immediately it would work with no delay, but if I left it for 5 minutes it would be slow to load again the next time I tried it.

    From my research so far it appears to be because the SqlServerCE dll is signed, so the system is trying to fetch the certificate revocation lists and timing out. Disabling the "Automatically detect settings" setting in the Internet Options LAN settings makes the problem go away, as does disabling "Check for publisher's certificate revocation". But the admins where this application will be deployed are not going to be happy with the idea of disabling certificate checking on a per-machine or per-user basis, so I really need to get the application-level disabling of the CRL check working.

    There is a well-documented bug in .NET 2.0 which describes this behaviour and offers a possible fix with a config file element:

        <?xml version="1.0" encoding="utf-8"?>
        <configuration>
          <runtime>
            <generatePublisherEvidence enabled="false"/>
          </runtime>
        </configuration>

    This is NOT working for me, however, even though I am using .NET 3.5. The SqlServerCE dll is being loaded dynamically by NHibernate, and I wonder if the fact that it's dynamic could somehow be why the setting isn't working, but I don't know how I could check that. Can anyone offer suggestions as to why the config setting might not work? Or is there another way I could disable the check at the application level, perhaps a CAS policy setting that I can use to set an exception for the application when it's installed? Or is there something I can change in the application to raise the trust level or something like that?

    I have also tried the following, to no avail:

        ServicePointManager.CheckCertificateRevocationList = false;

    http://rusanu.com/2009/07/24/fix-slow-application-startup-due-to-code-sign-validation/

    I have also tried those registry settings, and unfortunately they didn't help. The dlls that appear to be the cause of the hold-up are native SQL Server CE dlls, and looking at the stack traces in ProcMon, mscorwks.dll doesn't appear to be involved even though the checks on crypto and cert registry keys are being done under the .NET application. It's definitely still something to do with publisher certificate checking, because unticking "Check for publisher's certificate revocation" still works, but something odd is going on.
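    For reference, a minimal sketch of where the application-level setting mentioned above would go. Note that this property governs revocation checking for network (SSL/TLS) certificate validation, not the Authenticode publisher-evidence check performed at assembly load, which may be why it had no effect here:

        using System.Net;

        static class Startup
        {
            static void Main()
            {
                // Disables CRL lookups for ServicePoint-based certificate
                // validation only; it does not affect the publisher-evidence
                // check that <generatePublisherEvidence enabled="false"/> targets.
                ServicePointManager.CheckCertificateRevocationList = false;

                // ...start the application as usual.
            }
        }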


  • Setting up Visual Studio 2010 to step into Microsoft .NET Source Code

    - by rajbk
    Using the Microsoft Symbol Server to obtain symbol debugging information is now much easier in VS 2010. Microsoft gives you access to their internet symbol server that contains symbol files for most of the .NET Framework, including the recently announced availability of MVC 2 symbols.

    SETUP
    In VS 2010 RTM, go to Tools –> Options –> Debugging –> General and check "Enable .NET Framework source stepping". We get a dialog box noting that this automatically disables "Enable Just My Code". Go to Debugging –> Symbols and check "Microsoft Symbol Servers". You can selectively exclude modules if you want to. You will get a warning dialog; hitting OK will start the download process. The setup is complete. You are now ready to start debugging!

    DEBUGGING
    Add a breakpoint to your application and run the application in debug mode (F5 shortcut for me). Go to your call stack when you hit the breakpoint. Right click on a frame that is grayed out and select "Load Symbols from" –> "Microsoft Symbol Servers". VS will begin a one-time download of that assembly. This assembly is cached locally, so you don't have to wait for the download the next time you debug the app. We get a one-time license agreement dialog box. You might see an error regarding different encoding (hopefully this will be fixed). Assemblies for which the symbols have been loaded are no longer grayed out. Double clicking on any entry in the call stack should now take you directly to the source code for that assembly. AFAIK, not all symbols are available on the MS symbol server. In cases like that you will see a tab giving you the option to "Show Disassembly". Enjoy!

    Newsreel Announcer: Humiliated, Muntz vows a return to Paradise Falls and promises to capture the beast alive! Charles Muntz: [speaking to a large audience outside in the newsreel] I promise to capture the beast alive, and I will not come back until I do!


  • Incremental Statistics Maintenance – what statistics will be gathered after DML occurs on the table?

    - by Maria Colgan
    Incremental statistics maintenance was introduced in Oracle Database 11g to improve the performance of gathering statistics on large partitioned tables. When incremental statistics maintenance is enabled for a partitioned table, Oracle accurately generates global-level statistics by aggregating partition-level statistics. As more people begin to adopt this functionality, we have gotten more questions about how incremental statistics are expected to behave in a given scenario. For example, last week we got a question about which partitions should have statistics gathered on them after DML has occurred on the table. The person who asked the question assumed that statistics would only be gathered on partitions that had stale statistics (10% of the rows in the partition had changed). However, what they actually saw when they did a DBMS_STATS.GATHER_TABLE_STATS was that all of the partitions that had been affected by the DML had statistics re-gathered on them.

    This is the expected behavior: incremental statistics maintenance is supposed to yield the same statistics as gathering table statistics from scratch, just faster. This means incremental statistics maintenance needs to gather statistics on any partition that will change the global or table-level statistics. For instance, the min or max value for a column could change after just one row is inserted or updated in the table.

    It might be easier to demonstrate this using an example. Let's take the ORDERS2 table, which is partitioned by month on order_date. We will begin by enabling incremental statistics for the table and gathering statistics on the table. After the statistics gather, the last_analyzed date for the table and all of the partitions shows 13-Mar-12, and we have a baseline set of column statistics for the ORDERS2 table. We can also confirm that we really did use incremental statistics by querying the dictionary table sys.HIST_HEAD$, which should have an entry for each column in the ORDERS2 table.

    So, now that we have established a good baseline, let's move on to the DML. Information is loaded into the latest partition of the ORDERS2 table once a month. Existing orders may also be updated to reflect changes in their status. Let's assume the following transactions take place on the ORDERS2 table this month. After these transactions have occurred, we need to re-gather statistics, since the partition ORDERS_MAR_2012 now has rows in it and the number of distinct values and the maximum value for the STATUS column have also changed. Now if we look at the last_analyzed date for the table and the partitions, we will see that the global statistics and the statistics on the partitions where rows have changed due to the update (ORDERS_FEB_2012) and the data load (ORDERS_MAR_2012) have been updated. The column statistics also reflect the changes, with the number of distinct values in the STATUS column increasing to reflect the update.

    So, incremental statistics maintenance will gather statistics on any partition whose data has changed, where that change will impact the global-level statistics.
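    For readers who want to reproduce the setup, here is a minimal sketch of enabling incremental statistics and gathering them; the SH schema name is illustrative, while ORDERS2 is the table from the example above:

        BEGIN
          -- Enable incremental statistics maintenance for the partitioned table.
          DBMS_STATS.SET_TABLE_PREFS(
            ownname => 'SH',        -- schema name is an assumption
            tabname => 'ORDERS2',
            pname   => 'INCREMENTAL',
            pvalue  => 'TRUE');
          -- Gather statistics; only changed partitions are re-scanned.
          DBMS_STATS.GATHER_TABLE_STATS(
            ownname => 'SH',
            tabname => 'ORDERS2');
        END;
        /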


  • How To Fix YouTube Re-Buffering On Full Screen Issue

    - by Gopinath
    YouTube has an annoying bug: videos start re-buffering when we switch to full screen mode from normal mode. On a high speed broadband connection the re-buffering issue may not be very annoying, but on a slow broadband connection it annoys the hell out of us. When users reported this problem to YouTube, the engineers at YouTube dubbed it a feature rather than a bug! That is sick, and this behaviour shows that they have started ignoring users and their problems. Anyway, we have solutions to get around this annoying issue.

    Root Cause Of The Issue
    The root cause of the bug is YouTube's resolution switching mechanism. When the video is loaded in normal mode it is buffered and played at 360p, but when full screen mode is activated the YouTube player switches to 480p and starts re-buffering the video.

    How To Fix The Issue on Google Chrome Browser
    Fixing this issue on Google Chrome is very simple. All we need to do is install this Greasemonkey script, and it fixes everything for you.

    How To Fix The Issue on Firefox Browser
    Fixing this issue on Firefox involves an extra step compared to Chrome.
    Step 1: Install the Greasemonkey add-on for Firefox.
    Step 2: Install this script from userscripts.org.
    Done. Firefox will handle the full screen switching smoothly.

    How To Fix The Issue on Internet Explorer
    Hufff!! Internet Explorer users are poor users, not because they are dumb but because they are using a stone-age browser. No offense, but IE is a pathetic browser and there is no support for Greasemonkey scripts. Anyway, let's look at the solution for fixing the YouTube issue on IE. To fix the YouTube bug you need to follow the official solution provided by Google, and it's not a friendly one.
    Step 1: Log in to your YouTube account and select the option "I have a slow connection. Never play higher-quality video".
    Step 2 – Repeat Always: Make sure that you are always logged into your YouTube account, as YouTube needs to know your settings before switching the resolution. (Now you know why I called IE a poor browser.)

    Related: Set the start time of a YouTube video

    This article titled, How To Fix YouTube Re-Buffering On Full Screen Issue, was originally published at Tech Dreams. Grab our rss feed or fan us on Facebook to get updates from us.


  • JavaFX, Google Maps, and NetBeans Platform

    - by Geertjan
    Thanks to a great new article by Rob Terpilowski, and other work and research he describes in that article, it's now trivial to introduce a map component to a NetBeans Platform application. Making use of the GMapsFX library, as described in Rob's article, which provides a JavaFX API for Google Maps, you can very quickly knock this application together. Click to enlarge the image. Here's all the code (from Rob's article):

        @TopComponent.Description(
                preferredID = "MapTopComponent",
                persistenceType = TopComponent.PERSISTENCE_ALWAYS
        )
        @TopComponent.Registration(mode = "editor", openAtStartup = true)
        @ActionID(category = "Window", id = "org.map.MapTopComponent")
        @ActionReference(path = "Menu/Window" /*, position = 333 */)
        @TopComponent.OpenActionRegistration(
                displayName = "#CTL_MapWindowAction",
                preferredID = "MapTopComponent"
        )
        @NbBundle.Messages({
            "CTL_MapWindowAction=Map",
            "CTL_MapTopComponent=Map Window",
            "HINT_MapTopComponent=This is a Map window"
        })
        public class MapWindow extends TopComponent implements MapComponentInitializedListener {

            protected GoogleMapView mapComponent;
            protected GoogleMap map;

            private static final double latitude = 52.3667;
            private static final double longitude = 4.9000;

            public MapWindow() {
                setName(Bundle.CTL_MapTopComponent());
                setToolTipText(Bundle.HINT_MapTopComponent());
                setLayout(new BorderLayout());
                JFXPanel panel = new JFXPanel();
                Platform.setImplicitExit(false);
                Platform.runLater(() -> {
                    mapComponent = new GoogleMapView();
                    mapComponent.addMapInializedListener(this);
                    BorderPane root = new BorderPane(mapComponent);
                    Scene scene = new Scene(root);
                    panel.setScene(scene);
                });
                add(panel, BorderLayout.CENTER);
            }

            @Override
            public void mapInitialized() {
                //Once the map has been loaded by the Webview, initialize the map details.
                LatLong center = new LatLong(latitude, longitude);

                MapOptions options = new MapOptions();
                options.center(center)
                        .mapMarker(true)
                        .zoom(9)
                        .overviewMapControl(false)
                        .panControl(false)
                        .rotateControl(false)
                        .scaleControl(false)
                        .streetViewControl(false)
                        .zoomControl(false)
                        .mapType(MapTypeIdEnum.ROADMAP);
                map = mapComponent.createMap(options);

                //Add a couple of markers to the map.
                MarkerOptions markerOptions = new MarkerOptions();
                LatLong markerLatLong = new LatLong(latitude, longitude);
                markerOptions.position(markerLatLong)
                        .title("My new Marker")
                        .animation(Animation.DROP)
                        .visible(true);
                Marker myMarker = new Marker(markerOptions);

                MarkerOptions markerOptions2 = new MarkerOptions();
                LatLong markerLatLong2 = new LatLong(latitude, longitude);
                markerOptions2.position(markerLatLong2)
                        .title("My new Marker")
                        .visible(true);
                Marker myMarker2 = new Marker(markerOptions2);

                map.addMarker(myMarker);
                map.addMarker(myMarker2);

                //Add an info window to the Map.
                InfoWindowOptions infoOptions = new InfoWindowOptions();
                infoOptions.content("<h2>Center of the Universe</h2>")
                        .position(center);
                InfoWindow window = new InfoWindow(infoOptions);
                window.open(map, myMarker);
            }
        }

    Awesome work Rob, will be useful for many developers out there.


  • Sqlite &amp; Entity Framework 4

    - by Dane Morgridge
    I have been working on a few client app projects in my spare time that need to persist small amounts of data, and I have been looking for an easy-to-use embedded database. I really like db4o, but I'm not wanting to open source this particular project, so it was not an option. Then I remembered that there was an ADO.NET provider for SQLite. Being a fan of SQLite in general, I downloaded it and gave it an install. The installer added tooling support for both Visual Studio 2008 & 2010, which is nice because I am working almost exclusively in 2010 at the moment. I noticed that the provider also had support for Entity Framework, but not specifically v4. I created a database using the tools that get installed with Visual Studio and all seemed to work fine. I went on to create an Entity Framework context and selected the SQLite database, and to my surprise it worked without any problems. The model showed up just like it would for any database, and so I started to write a little code to test and then... BAM!... Exception:

    "Mixed mode assembly is built against version 'v2.0.50727' of the runtime and cannot be loaded in the 4.0 runtime without additional configuration information."

    A quick bit of searching on Bing found the answer. To get it working, you need to include the following in your config file (app.config for a client app):

        <startup useLegacyV2RuntimeActivationPolicy="true">
          <supportedRuntime version="v4.0" />
        </startup>

    And then everything magically works. Entity Framework 4 features worked, like lazy loading, and even the POCO templates worked. The only thing that didn't work was model-first development. The SQL generated was for SQL Server and of course wouldn't run on SQLite without some modifications.

    The only other oddity I found was that in order to have an auto-incrementing id, you have to use the full integer data type for SQLite; a regular int won't do the trick. This translates to an Int64, or a long, when working with it in Entity Framework. Not a big deal, but something you need to be aware of.

    All in all, I am quite impressed with the Entity Framework support I found with SQLite. I wasn't really expecting much at all, and I was pleasantly surprised. I downloaded the ADO.NET SQLite provider from http://sqlite.phxsoftware.com/. If you want to use an embedded database with Entity Framework, give it a look. It will be well worth your time.
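    As a hedged illustration of the auto-increment caveat above (the entity and property names are made up for the example): since SQLite's integer type surfaces as Int64, the key property should be declared as long rather than int:

        public class Note
        {
            // Maps to a SQLite "integer" auto-increment key, i.e. Int64.
            public long Id { get; set; }
            public string Text { get; set; }
        }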


  • OracleServiceBus+SOA in same server

    - by Manoj Neelapu
    Oracle Service Bus 11gR1 (11.1.1.3) supports running in the same JVM as SOA. This tutorial covers how to create a domain with SOA and OSB combined, running in a single JVM. For this tutorial we will use a flavor of the WebLogic installer bundled with both the OEPE and coherence components (e.g. oepe111150_wls1033_win32.exe). The WebLogic installer bundled with the coherence and OEPE components can be seen in the screen shot. Oracle Service Bus 11gR1 (11.1.1.3) has built-in caching support for Business Services using coherence. Because of this we will have to install coherence before installing OSB. To get SOA and OSB running in the same domain, we have to install SOA and OSB on the above ORACLE_HOME. After installation we should see both the SOA and OSB homes, as highlighted in red. We can also see the coherence components, which are mandatory for OSB, and the optional OEPE, also installed.

    Now we will execute RCU (ofm_rcu_win_11.1.1.3.0_disk1_1of1) to install the schemas for SOA and OSB. The new RCU contains OSB tables (WLI_QS_REPORT_DATA, WLI_QS_REPORT_ATTRIBUTE) that get loaded as part of the SOAINFRA schema. After this step we will have to create the SOA+OSB domain using the config wizard. It is located under $WEBLOGIC_HOME\common\bin\config.* (.cmd or .sh as per your platform). While creating the domain we will select the options for SOA Suite and Oracle Service Bus Extension - All Domain Topologies. We can also bundle Enterprise Manager in the same installation or in a different server; here we will use Enterprise Manager in the same domain, so we select the Enterprise Manager component as well. There is another option for OSB, Oracle Service Bus Extension - Single Server Domain Topology. This topology is for users who want to use OSB in a single server configuration. Currently SOA doesn't support the single server topology, so it cannot be used with a SOA domain and can only be used for stand-alone OSB installations.

    We can continue with the domain configuration until we reach the screen below. The following steps are mandatory if we want to have SOA and OSB run in the same JVM: we should select Managed Server, Clusters and Machines as shown below. After this selection you should see a screen with two servers, one managed server for OSB and one for SOA. Since we would like to have both in one managed server (one JVM), we will have to do one important step here: delete either of the servers and rename the other server with the deleted server's name (e.g. delete osb_server1 and rename soa_server1 to osb_server1, or delete soa_server1 and rename osb_server1 to soa_server1).


  • Overview of getting and setting the URL and parts of the URL using angularjs and/or Javascript

    - by Sandy Good
    Getting and setting the URL, and different parts of the URL, is a basic part of application design, used for:

    Page navigation
    Deep linking
    Providing a link to the user
    Querying data
    Passing information to other pages

    Both angularjs and JavaScript provide ways to get/set the URL and parts of the URL. I'm looking for the following information:

    Situation: Show a simple URL in the browser address bar to the user, but provide a more detailed URL with string parameters to the page that the user will not see. In other words, two different URLs will be used: a simple one that the user sees in the browser, and a more detailed one available to the page on load. Get URL info with PHP when the page initially loads, but don't reload the PHP page when the user needs more detailed info that is already loaded but not displayed yet. Set the URL with a more detailed URL for deep linking as the user drills down to more specific information. Get URL info in a controller or JavaScript when angularjs detects a change in the URL with routing.

    Hash or Query String or Both? Should I use a hash # in the URL, a query string ?=, or both?

    Here is what I currently know and what I want: a query string (http://www.name.com?mykey=itemID) will prevent angularjs from reloading the page. So, I can change the URL by adding/changing the string at the end, thereby providing new info to the page, and keep the page from reloading. I can change the URL and force a page reload with:

        window.location.href = "#Store/" + argUserPubId + "?itemID=home";

    If home is the itemID string, I want the code to simply load the page and not display more detailed information. If there is a real itemID in the URL query string, I want the code to display the more detailed information. Code from angularjs will run either from the controller specified in the routing, or a controller specified in the HTML, or both. The angularjs code specified in the routing seems to run first, before the code specified in the HTML. A different URL for the page can be used in the angularjs templateUrl: than the URL that was sent to the browser address bar:

        when('/Store/:StoreId', {
            templateUrl: function(params){ return 'Client_Pages/Stores.php?storeID=' + params.StoreId; },
            controller: 'storeParseData'
        }).

    The above code detects http://www.name.com/Store/StoreID in the browser, but SENDS http://www.name.com/Client_Pages/Stores.php?storeID=StoreID to the page. In the above code, a function is used for the angularjs routing templateUrl: to dynamically set the templateUrl.

    So, when the user clicks something to see details of an item, how should I configure the URL? Should I use angularjs $location or window.location.href? Should I use a longer URL with more parameters, a hash bang, or a query string? Should I use:

        http://www.name.com/Store/StoreID/ItemID or
        http://www.name.com/Store/StoreID#ItemID or
        http://www.name.com/Store/StoreID?ItemID or
        http://www.name.com/Store#StoreID?ItemID or

    something else?
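    For concreteness, a minimal sketch (AngularJS 1.x; the module variable app is an assumption) of reading the route and query-string values inside the storeParseData controller named in the routing above:

        app.controller('storeParseData', function ($scope, $routeParams, $location) {
            // /Store/:StoreId -> path parameter from the route definition
            $scope.storeId = $routeParams.StoreId;

            // ?itemID=... -> query parameter; $location.search() returns an object
            $scope.itemId = $location.search().itemID || 'home';

            // Updating only the query string does not reload the route:
            $scope.showItem = function (id) {
                $location.search('itemID', id);
            };
        });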


  • Performance problems loading XML with SSIS, an alternative way!

    - by AtulThakor
    I recently needed to load several thousand XML files into a SQL database, so I created an SSIS package built as follows: using a foreach container to loop through a directory and load each file path into a variable, the "Import XML" dataflow would then load each XML file into a SQL table.

    Running this, it took approximately 1 second to load each file, which seemed a massive amount of time to parse the XML and load the data. Speaking to my colleague Martin Croft, he suggested the use of T-SQL BULK INSERT and OPENROWSET, so we adjusted the package as follows: the same foreach container was used, but instead the following SQL command was executed (this is an expression):

        "INSERT INTO MyTable(FileDate)
         SELECT CAST(bulkcolumn AS XML)
         FROM OPENROWSET(
             BULK '" + @[User::CurrentFile] + "',
             SINGLE_BLOB
         ) AS x"

    Using this method we managed to load approximately 20 records per second, much faster... for data loading! For what we wanted to achieve this was perfect, but I'll leave you with the following points when making your own decision on which solution to choose.

    OPENROWSET method:
    - Much faster to get the data into SQL
    - You'll need to parse or create a view over the XML data to make it more usable (another post on this!)
    - Not able to apply validation/transformation against the data when loading it
    - The SQL Server service account will need permission to the file
    - No schema validation when loading files

    SSIS:
    - Slower (in our case)
    - Schema validation
    - Allows you to apply transformations/joins to the data
    - Permissions should be less of a problem
    - Data can be loaded into its final form through the package
    - When using a schema, validation errors can fail the package (I'll do another post on this)
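    As a hedged sketch of the "parse or create a view" follow-up step mentioned in the list above (the element names here are invented for illustration), the loaded XML column can be shredded into rows with the nodes()/value() methods:

        SELECT x.n.value('(OrderId)[1]', 'int')  AS OrderId,
               x.n.value('(Amount)[1]', 'money') AS Amount
        FROM MyTable
        CROSS APPLY FileDate.nodes('/Orders/Order') AS x(n);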


  • Easy Scaling in XAML (WPF)

    - by Robert May
    Ran into a problem that needed solving that was kind of fun. I'm not a XAML guru, and I'm sure there are better solutions, but I thought I'd share mine.

    The problem was this: our designer had, appropriately, designed the system for a 1920 x 1080 screen resolution. This is for a full screen, touch screen device (think Kiosk), which has that resolution, but we also wanted to demo the device on a tablet (currently using the AWESOME Samsung tablet given out at Microsoft Build). When you'd run it on that tablet, things were ugly because it was at a lower resolution than the target device.

    Enter scaling. I did some research and found out that I probably just need to monkey with the LayoutTransform of some grid somewhere. This project is using MVVM and has a navigation container that we built that lives on a single root view. User controls are then loaded into that view as navigation occurs. In the parent grid of the root view, I added the following XAML:

        <Grid.LayoutTransform>
            <ScaleTransform ScaleX="{Binding ScaleWidth}" ScaleY="{Binding ScaleHeight}" />
        </Grid.LayoutTransform>

    And then in the root View Model, I added the following code:

        /// <summary>
        /// The required design width
        /// </summary>
        private const double RequiredWidth = 1920;

        /// <summary>
        /// The required design height
        /// </summary>
        private const double RequiredHeight = 1080;

        /// <summary>Gets the ActualHeight</summary>
        public double ActualHeight
        {
            get { return this.View.ActualHeight; }
        }

        /// <summary>Gets the ActualWidth</summary>
        public double ActualWidth
        {
            get { return this.View.ActualWidth; }
        }

        /// <summary>
        /// Gets the scale for the height.
        /// </summary>
        public double ScaleHeight
        {
            get { return this.ActualHeight / RequiredHeight; }
        }

        /// <summary>
        /// Gets the scale for the width.
        /// </summary>
        public double ScaleWidth
        {
            get { return this.ActualWidth / RequiredWidth; }
        }

    Note that View.ActualWidth and View.ActualHeight are just pointing directly at FrameworkElement.ActualWidth and FrameworkElement.ActualHeight. That's it. Just calculate the ratio and bind the scale transform to it. Hopefully you'll find this useful.

    Technorati Tags: WPF,XAML


  • How do you keep code with continuations/callbacks readable?

    - by Heinzi
    Summary: Are there some well-established best-practice patterns that I can follow to keep my code readable in spite of using asynchronous code and callbacks?

    I'm using a JavaScript library that does a lot of stuff asynchronously and heavily relies on callbacks. It seems that writing a simple "load A, load B, ..." method becomes quite complicated and hard to follow using this pattern. Let me give a (contrived) example. Let's say I want to load a bunch of images (asynchronously) from a remote web server. In C#/async, I'd write something like this:

        disableStartButton();
        foreach (myData in myRepository)
        {
            var result = await LoadImageAsync("http://my/server/GetImage?" + myData.Id);
            if (result.Success)
            {
                myData.Image = result.Data;
            }
            else
            {
                write("error loading Image " + myData.Id);
                return;
            }
        }
        write("success");
        enableStartButton();

    The code layout follows the "flow of events": first, the start button is disabled, then the images are loaded (await ensures that the UI stays responsive) and then the start button is enabled again.

    In JavaScript, using callbacks, I came up with this:

        disableStartButton();

        var count = myRepository.length;

        function loadImage(i) {
            if (i >= count) {
                write("success");
                enableStartButton();
                return;
            }

            myData = myRepository[i];

            LoadImageAsync("http://my/server/GetImage?" + myData.Id,
                function(success, data) {
                    if (success) {
                        myData.Image = data;
                    } else {
                        write("error loading image " + myData.Id);
                        return;
                    }

                    loadImage(i+1);
                }
            );
        }

        loadImage(0);

    I think the drawbacks are obvious: I had to rework the loop into a recursive call, the code that's supposed to be executed in the end is somewhere in the middle of the function, the code starting the download (loadImage(0)) is at the very bottom, and it's generally much harder to read and follow. It's ugly and I don't like it.

    I'm sure that I'm not the first one to encounter this problem, so my question is: Are there some well-established best-practice patterns that I can follow to keep my code readable in spite of using asynchronous code and callbacks?


  • SOA+OSB in same JVM

    - by Manoj Neelapu
    Oracle Service Bus 11gR1 (11.1.1.3) supports running in the same JVM as SOA. This tutorial covers how to create a domain with SOA and OSB combined, running in a single JVM. For this tutorial we will use a flavor of the WebLogic installer bundled with both the OEPE and coherence components (e.g. oepe111150_wls1033_win32.exe). The WebLogic installer bundled with the coherence and OEPE components can be seen in the screen shot. Oracle Service Bus 11gR1 (11.1.1.3) has built-in caching support for Business Services using coherence. Because of this we will have to install coherence before installing OSB. To get SOA and OSB running in the same domain, we have to install SOA and OSB on the above ORACLE_HOME. After installation we should see both the SOA and OSB homes, as highlighted in red. We can also see the coherence components, which are mandatory for OSB, and the optional OEPE, also installed.

    Now we will execute RCU (ofm_rcu_win_11.1.1.3.0_disk1_1of1) to install the schemas for SOA and OSB. The new RCU contains OSB tables (WLI_QS_REPORT_DATA, WLI_QS_REPORT_ATTRIBUTE) that get loaded as part of the SOAINFRA schema. After this step we will have to create the SOA+OSB domain using the config wizard. It is located under $WEBLOGIC_HOME\common\bin\config.* (.cmd or .sh as per your platform). While creating the domain we will select the options for SOA Suite and Oracle Service Bus Extension - All Domain Topologies. There is another option for OSB, Oracle Service Bus Extension - Single Server Domain Topology. This topology is for users who want to use OSB in a single server configuration. Currently SOA doesn't support the single server topology, so it cannot be used with a SOA domain and can only be used for stand-alone OSB installations.

    We can continue with the domain configuration until we reach the screen below. The following steps are mandatory if we want to have SOA and OSB run in the same JVM: we should select Managed Server, Clusters and Machines as shown below. After this selection you should see a screen with two servers, one managed server for OSB and one for SOA. Since we would like to have both servers in one managed server (one JVM), we will have to do one important step here: we have to delete either of the servers and rename the other server with the deleted server's name, e.g. delete osb_server1 and rename soa_server1 to osb_server1, or delete soa_server1 and rename osb_server1 to soa_server1. After this step, proceed as usual. If we observe the created domain, we see only one managed server which contains components for both SOA and OSB ($DOMAIN_HOME/startManagedWebLogic_readme.txt).


  • Workaround: XNA 4 importing only part of 3d model from FBX

    - by Vitus
    Recently I found a problem with importing 3D models from FBX files: models were sometimes imported only partially. That is, when you draw a 3D model loaded from an FBX file processed by the content pipeline, you get only some of the meshes. "Sometimes" means that you get this error only for some files. The results of my research are below.

    For example, I have a 10Mb binary FBX file with a model. When I load it, the resulting Model instance contains only part of the meshes. Because models from other files import normally, I think it's a "bad format" file.

    When you add an FBX file to your XNA Content project and build it, the imported file is processed by the XNA Fbx Importer & Processor. On MSDN I found that FbxImporter is designed to work with the 2006.11 version of the FBX format. My file is in FBX 2012 format. OK, so I need to convert it to the 2006 format, which can be done using Autodesk FBX Converter 2012.1. I also tried converting it to other versions of the FBX format, but without success. I then tried importing my FBX file into 3D MAX, and it imported correctly. I exported the model using 3D MAX, which generated another FBX, and added that to my XNA project. After that I got the full model, and it rendered well! So the internal data structure of an FBX file is more important for correct XNA import than its version!

    Unfortunately, Autodesk FBX is not an open file format. If you want to work with FBX, you should use the Autodesk FBX SDK. That way you can manually read the content of an FBX file and use it in any way you like. I also tried converting my source FBX file to DAE Collada, and the resulting DAE file back to FBX, using FBX Converter (FBX -> DAE -> FBX). The resulting FBX file imports normally.

    Conclusion: correct XNA FbxImporter operation doesn't depend on the version (2006, 2011, etc.) or form (binary, ascii) of the FBX file. The internal FBX data structure is much more important. To make an FBX "readable" for the XNA importer you can use a double conversion like FBX -> Collada -> FBX. You can also use the FBX SDK to manually load data from FBX.

    P.S. Autodesk FBX Converter 2012 is more than a simple converter. It provides tools like: FBX Explorer, which shows the structure of an FBX file; FBX Viewer, which renders the content of an FBX and provides basic interaction like model move and zoom; and FBX Take Manager, which allows working with embedded animations.


  • InvokeRequired not reliable?

    - by marocanu2001
    InvalidOperationException: Cross-thread operation not valid: Control 'progressBar' accessed from a thread other than the thread it was created on...

    Now this is not a nice way to start a day! Not even if it's a Monday! So you have seen this and already thought: come on, this is an old one, just check InvokeRequired before touching the control, something like this:

        if (progressBar.InvokeRequired)
        {
            progressBar.Invoke(new MethodInvoker(delegate { progressBar.Value = 50; }));
        }
        else
        {
            progressBar.Value = 50;
        }

    Unfortunately this was not working the way I would have expected: I got the error, and although during debugging InvokeRequired had become true, the error was thrown on the branch that did not require Invoke. So there was a scenario where InvokeRequired returned false and accessing the control was still not on the right thread...

    The problem was that I kept showing and hiding the little form containing the progress bar. The progress bar was updating on an event, ProgressChanged, and I could not guarantee the little form was loaded by the time the event was raised. So, the problem spotted: if none of the parents of the control you try to modify is created at the time of the call, InvokeRequired returns false! That causes your code to execute on the wrong thread. Of course, updating the UI before the window was created is not a legitimate action either; still, I would have expected a different error.

    MSDN: "If the control's handle does not yet exist, InvokeRequired searches up the control's parent chain until it finds a control or form that does have a window handle. If no appropriate handle can be found, the InvokeRequired method returns false. This means that InvokeRequired can return false if Invoke is not required (the call occurs on the same thread), or if the control was created on a different thread but the control's handle has not yet been created."

    Have a look at InvokeRequired's implementation:

        public bool InvokeRequired
        {
            get
            {
                HandleRef hWnd;
                int lpdwProcessId;
                if (this.IsHandleCreated)
                {
                    hWnd = new HandleRef(this, this.Handle);
                }
                else
                {
                    Control wrapper = this.FindMarshallingControl();
                    if (!wrapper.IsHandleCreated)
                    {
                        return false; // <==========
                    }
                    hWnd = new HandleRef(wrapper, wrapper.Handle);
                }
                int windowThreadProcessId = SafeNativeMethods.GetWindowThreadProcessId(hWnd, out lpdwProcessId);
                int currentThreadId = SafeNativeMethods.GetCurrentThreadId();
                return (windowThreadProcessId != currentThreadId);
            }
        }

    Here's a good article about this and a workaround: http://www.ikriv.com/en/prog/info/dotnet/MysteriousHang.html
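    A hedged sketch of a defensive helper implied by the above (names are illustrative): don't trust InvokeRequired until the control's handle actually exists:

        static void SafeInvoke(Control control, Action action)
        {
            // InvokeRequired lies (returns false) before the handle exists,
            // so bail out here, or defer until the HandleCreated event fires.
            if (!control.IsHandleCreated)
                return;

            if (control.InvokeRequired)
                control.Invoke(action);
            else
                action();
        }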


  • permanently load module

    - by Radu
    I have a Compaq Presario CQ-61 320SQ. I am using Ubuntu 10.04 because after updating to 10.10 my mouse and touchpad wouldn't work, the network wouldn't work, sound wouldn't work... (I managed to fix most of them after almost a month of googling, but not all; my 2 desktops have no problem with 10.10), so I decided to switch back to 10.04, where I have a problem: my broadband speed is very low because of the kernel module r8169. I downloaded the good module, r8101, and have an rc.local entry to fix this every time the computer boots.

    Question: Can I load the module permanently from a specific location? I heard about /etc/modules, but there I need the module name, whereas I have to load it from a specific path (where is the default path for that)? Thank you.

    So I studied the script: it creates the file r8101.ko in /lib/modules/`uname -r`/kernel/drivers/net, so I think as long as nobody deletes that file, and I don't update the kernel, maybe adding r8101 to /etc/modules will work, plus adding r8169 to the blacklist... I will give it a try.

    EDIT2: So I added r8101 to /etc/modules and blacklist r8169 to /etc/modprobe.d/blacklist.conf. It still uses the old module; lsmod prints:

        radu@adu:~$ lsmod | grep r8
        r8101                  67626  0
        r8169                  34108  0
        mii                     4381  1 r8169

    EDIT: the module is loaded using this script that came with it:

        #!/bin/sh
        # invoke insmod with all arguments we got
        # and use a pathname, as insmod doesn't look in . by default
        TARGET_PATH=/lib/modules/`uname -r`/kernel/drivers/net
        echo
        echo "Check old driver and unload it."
        check=`lsmod | grep r8169`
        if [ "$check" != "" ]; then
            echo "rmmod r8169"
            /sbin/rmmod r8169
        fi
        check=`lsmod | grep r8101`
        if [ "$check" != "" ]; then
            echo "rmmod r8101"
            /sbin/rmmod r8101
        fi
        echo "Build the module and install"
        echo "-------------------------------" >> log.txt
        date 1>>log.txt
        make all 1>>log.txt || exit 1
        module=`ls src/*.ko`
        module=${module#src/}
        module=${module%.ko}
        if [ "$module" = "" ]; then
            echo "No driver exists!!!"
            exit 1
        elif [ "$module" != "r8169" ]; then
            if test -e $TARGET_PATH/r8169.ko ; then
                echo "Backup r8169.ko"
                if test -e $TARGET_PATH/r8169.bak ; then
                    i=0
                    while test -e $TARGET_PATH/r8169.bak$i
                    do
                        i=$(($i+1))
                    done
                    echo "rename r8169.ko to r8169.bak$i"
                    mv $TARGET_PATH/r8169.ko $TARGET_PATH/r8169.bak$i
                else
                    echo "rename r8169.ko to r8169.bak"
                    mv $TARGET_PATH/r8169.ko $TARGET_PATH/r8169.bak
                fi
            fi
        fi
        echo "Depending module. Please wait."
        depmod -a
        echo "load module $module"
        modprobe $module
        echo "Completed."
        exit 0
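    A hedged sketch of making the setup survive reboots without rc.local, assuming the vendor script above has already copied r8101.ko into the running kernel's module tree:

        sudo depmod -a
        echo "r8101" | sudo tee -a /etc/modules
        echo "blacklist r8169" | sudo tee /etc/modprobe.d/blacklist-r8169.conf
        sudo update-initramfs -u   # keep the initramfs from loading r8169 early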


  • How to install Awesome WM without root access?

    - by ssice
    I want to install the Awesome window manager. In the environment where I want to configure it I don't have root access. I do have a machine where I can be root (I use a virtual machine on my laptop for this). I have tried the following:

        $ sudo apt-get install awesome
        The following packages are about to be installed:
        awesome libev3 libid3tag0 libimlib2 liblua5.1-0 libxcb-icccm1 libxcb-image0
        libxcb-keysyms1 libxcb-property1 libxcb-randr0 libxcb-xinerama0 libxcb-xtest0
        libxdg-basedir1 menu rlwrap
        Do you want to continue [Y/n]? n

    I now have the list of dependencies for awesome, so I downloaded them all. For that, I did the following:

        $ pkgs="awesome libev3 libid3tag0 libimlib2 liblua5.1-0 libxcb-icccm1 libxcb-image0 libxcb-keysyms1 libxcb-property1 libxcb-randr0 libxcb-xinerama0 libxcb-xtest0 libxdg-basedir1 menu rlwrap"  # just to avoid retyping it ;)
        $ sudo apt-get install --download-only $pkgs
        ....
        $ mkdir -p /tmp/x_debs
        $ for pkg in $pkgs; do cp /var/cache/apt/archives/$pkg* /tmp/x_debs/; done  # copies all the *.deb files to /tmp/x_debs

    Now, I want to install the dependencies. For that, I set up a fake dpkg install in my home folder:

        $ mkdir $HOME/root
        $ mkdir -p $HOME/root/var/lib/dpkg/{triggers,updates}
        $ touch $HOME/root/var/lib/dpkg/{available,status}

    Now I tried to install with dpkg, but I could not:

        $ dpkg --force-not-root --root=$HOME/root --recursive -i /tmp/x_debs

    It failed while trying to set permissions for the packages and running chroot. As I do have root access on this machine, I ran it with privileges:

        $ sudo dpkg --root=$HOME/root --recursive -i /tmp/x_debs

    Then I had a lot of stuff (i.e., everything: the dependencies and the WM itself) installed inside $HOME/root. In particular, the xcb-* libraries were installed in $HOME/root/usr/lib and the awesome binary in $HOME/root/usr/bin/awesome.

    If I try to execute awesome as-is, I get an error saying the libraries could not be loaded. That's normal, as they are not in /usr/lib nor in /lib. So I ran

        export LD_LIBRARY_PATH=$HOME/root/usr/lib:$HOME/root/lib:${LD_LIBRARY_PATH}

    and awesome would try to load. However, I could not get gdm to run awesome within GNOME or to replace it.

    I did it this way so I can copy everything in my $HOME/root folder, paste it on the other machine and have it running. Is there any other way (with less wasted space, maybe) to do this? How can I tell gdm to exec awesome without root access?
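    One hedged option for the gdm question, assuming gdm is configured to offer a user/Xsession-style session: a ~/.xsession that exports the library path before exec'ing awesome, with paths following the $HOME/root prefix used above:

        #!/bin/sh
        export LD_LIBRARY_PATH="$HOME/root/usr/lib:$HOME/root/lib:$LD_LIBRARY_PATH"
        export PATH="$HOME/root/usr/bin:$PATH"
        exec "$HOME/root/usr/bin/awesome"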


  • Game of Phones

    - by Carlos Chang
    There's an excellent DZone article titled 2014 Guide to Mobile Development. It's loaded with excellent information, including some results from a mobile-related survey of more than 1000 IT professionals. Without giving away too much, these highlights should convince you to read the entire article.

    Web and hybrid apps are gaining tons of traction, particularly in the enterprise. If you want to better understand the differences between web, native and hybrid, this article has you covered. Enterprise developers are increasingly interested in cross-platform tools. Makes sense, right? I mean, unless you have infinite resources (e.g. Facebook) and can afford to write native apps for every platform, finding something that can meet your needs for iOS and Android makes sense. And toss in the possibility of Windows Phone... and, just to be current, the addition of Apple's new mobile language, Swift, alongside Objective-C... and oh boy. Why not check out cross-platform tools? BTW, don't forget testing on each platform, and maintenance, and the next versions of the app. It's not one and done. If you're successful, you're never done.

    Various mobile vendors are represented and many provide some great information. Oracle's own Suhas Uliyar, VP of Mobile Strategy, contributed some great insights into the challenges of mobile back-end integration (SOA, mBaaS, etc.) and moving from a "mobile first" to a "mobile plus" world. BTW, Suhas was recently named one of the Top 100 Wireless Technology Experts for 2014 by Today's Wireless World magazine.

    And if you're not yet convinced, DZone did a very nice job with their mobile infographic, stylized after the insanely popular series Game of Thrones. Even though no dragons are illustrated, it's worth the price of admission just for that. Check it out here.


  • How to shoot a triangle out of an asteroid which floats all of the way up to the screen?

    - by Holland
    I currently have an asteroid texture loaded as my "test player" for the game I'm writing. What I'm trying to figure out is how to get a triangle to shoot from the center of the asteroid and keep going until it hits the top of the screen. What happens in my case (as you'll see from the code I've posted) is that the triangle will show, however it will either be a long line, or it will just be a single triangle which stays in the same location as the asteroid moving around (and disappears when I stop pressing the space bar), or it simply won't appear at all. I've tried many different methods, but I could use a formula here. All I'm trying to do is write a Space Invaders clone for my final in C#. I know how to code fairly well; my formulas just need work. So far, this is what I have.

    Main logic code:

        protected override void Draw(GameTime gameTime)
        {
            GraphicsDevice.Clear(ClearOptions.Target, Color.Black, 1, 1);

            mAsteroid.Draw(mSpriteBatch);

            if (mIsFired)
            {
                mPositions.Add(mAsteroid.LastPosition);
                mRay.Fire(mPositions);
                mIsFired = false;
                mRay.Bullets.Clear();
                mPositions.Clear();
            }

            base.Draw(gameTime);
        }

    Draw code:

        public void Draw()
        {
            VertexPositionColor[] vertices = new VertexPositionColor[3];
            int stopDrawing = mGraphicsDevice.Viewport.Width / mGraphicsDevice.Viewport.Height;

            for (int i = 0; i < mRayPos.Length(); ++i)
            {
                vertices[0].Position = new Vector3(mRayPos.X, mRayPos.Y + 5f, 10);
                vertices[0].Color = Color.Blue;
                vertices[1].Position = new Vector3(mRayPos.X - 5f, mRayPos.Y - 5f, 10);
                vertices[1].Color = Color.White;
                vertices[2].Position = new Vector3(mRayPos.X + 5f, mRayPos.Y - 5f, 10);
                vertices[2].Color = Color.Red;

                mShader.CurrentTechnique.Passes[0].Apply();
                mGraphicsDevice.DrawUserPrimitives<VertexPositionColor>(PrimitiveType.TriangleStrip, vertices, 0, 1);

                mRayPos += new Vector2(0, 1f);
                mGraphicsDevice.ReferenceStencil = 1;
            }
        }


  • How do I draw a scene with 2 nested frames

    - by Guido Granobles
    I have been trying for a long time to figure this out: I have loaded a model from a DirectX file (I am using OpenGL and Java). The model has a hierarchical system of nested reference frames (there are no bones). There are just 2 frames: one of them is called x3ds_Torso and it has a child frame called x3ds_Arm_01. Each one of them has a mesh. The thing is that I can't draw the arm connected to the body. Sometimes the body is in the center of the screen and the arm is at the top. Sometimes they are both in the center. I know that I have to multiply the transformation matrix of every frame by its parent frame's matrix, starting from the top and going down, and after that I have to multiply every vertex of every mesh by its final transformation matrix. So I have this:

        public void calculeFinalMatrixPosition(Bone boneParent, Bone bone) {
            System.out.println("-->" + bone.name);
            if (boneParent != null) {
                bone.matrixCombined = bone.matrixTransform.multiply(boneParent.matrixCombined);
            } else {
                bone.matrixCombined = bone.matrixTransform;
            }
            bone.matrixFinal = bone.matrixCombined;
            for (Bone childBone : bone.boneChilds) {
                calculeFinalMatrixPosition(bone, childBone);
            }
        }

    Then I have to multiply every vertex of the mesh:

        public void transformVertex(Bone bone) {
            for (Iterator<Mesh> iterator = meshes.iterator(); iterator.hasNext();) {
                Mesh mesh = iterator.next();
                if (mesh.boneName.equals(bone.name)) {
                    float[] vertex = new float[4];
                    double[] newVertex = new double[3];
                    if (mesh.skinnedVertexBuffer == null) {
                        mesh.skinnedVertexBuffer = new FloatDataBuffer(mesh.numVertices, 3);
                    }
                    mesh.vertexBuffer.buffer.rewind();
                    while (mesh.vertexBuffer.buffer.hasRemaining()) {
                        vertex[0] = mesh.vertexBuffer.buffer.get();
                        vertex[1] = mesh.vertexBuffer.buffer.get();
                        vertex[2] = mesh.vertexBuffer.buffer.get();
                        vertex[3] = 1;
                        newVertex = bone.matrixFinal.transpose().multiply(vertex);
                        mesh.skinnedVertexBuffer.buffer.put(((float) newVertex[0]));
                        mesh.skinnedVertexBuffer.buffer.put(((float) newVertex[1]));
                        mesh.skinnedVertexBuffer.buffer.put(((float) newVertex[2]));
                    }
                    mesh.vertexBuffer = new FloatDataBuffer(mesh.numVertices, 3);
                    mesh.skinnedVertexBuffer.buffer.rewind();
                    mesh.vertexBuffer.buffer.put(mesh.skinnedVertexBuffer.buffer);
                }
            }
            for (Bone childBone : bone.boneChilds) {
                transformVertex(childBone);
            }
        }

    I know this is not the most efficient code, but for now I just want to understand exactly how a hierarchical model is organized and how I can draw it on the screen. Thanks in advance for your help.


  • Lubuntu Desktop messed up for logged in user, but not for guest

    - by RPi Awesomeness
    I recently upgraded my laptop from Lubuntu 12.04 to 14.04.1 and the upgrade process seemed to go fine. However, when I went to log in as my normal user, I encountered an issue. The background loaded up, but none of LXDE or LXPanel showed up, leaving me with an empty desktop and nothing else except two errors. I thought that this was weird, so I just figured something had been messed up and would be fixed by a reboot. But it wasn't. I then tried logging in as guest, and it's just fine. I checked the ~/.xsession-errors file (for my main user, not guest; did it via TTY1) and this is what I got:

        Script for ibus started at run_im.
        Script for auto started at run_im.
        Script for default started at run_im.
        init: Unable to register as subreaper: Invalid argument
        init: lxsession main process (1649) killed by TERM signal
        init: Disconnected from notified D-Bus bus
        init: job dbus failed to stop
        init: job upstart-dbus-session-bridge failed to stop
        init: job upstart-dbus-system-bridge failed to stop
        init: job upstart-file-bridge failed to stop

    I also read that sometimes removing the ~/.Xauthority file can help, if the ownership is messed up. ls -l /home/MYUSER/.Xauthority tells me:

        -rw------- 1 MYUSER MYUSER 60 Aug 16 09:57 /home/MYUSER/.Xauthority

    Should that be root or something else? Or should I try deleting that and ~/.profile? Here's what ~/.profile looks like:

        # ~/.profile: executed by the command interpreter for login shells.
        # This file is not read by bash(1), if ~/.bash_profile or ~/.bash_login
        # exists.
        # see /usr/share/doc/bash/examples/startup-files for examples.
        # the files are located in the bash-doc package.

        # the default umask is set in /etc/profile; for setting the umask
        # for ssh logins, install and configure the libpam-umask package.
        #umask 022

        # if running bash
        if [ -n "$BASH_VERSION" ]; then
            # include .bashrc if it exists
            if [ -f "$HOME/.bashrc" ]; then
                . "$HOME/.bashrc"
            fi
        fi

        # set PATH so it includes user's private bin if it exists
        if [ -d "$HOME/bin" ] ; then
            PATH="$HOME/bin:$PATH"
        fi

    Should I post the output of dmesg? I'll try to get a screenshot, but does anyone have any idea what could be causing the desktop (LXDE/LXPanel) not to display?

    EDIT: I attempted removing the ~/.Xauthority file, but that didn't seem to do anything.


  • 13.04 Logitech bluetooth speaker adapter pairing but no mixer output

    - by user1455622
    I had to change the following in /etc/bluetooth/audio.conf to get it to pair:

        [General]
        Enable = Socket

    But now that the speakers are paired, I don't get an output in pavucontrol.

        D: [pulseaudio] bluetooth-util.c: Registering /MediaEndpoint/HFPAG on adapter /org/bluez/3855/hci0.
        D: [pulseaudio] bluetooth-util.c: Registering /MediaEndpoint/HFPHS on adapter /org/bluez/3855/hci0.
        D: [pulseaudio] bluetooth-util.c: Registering /MediaEndpoint/A2DPSource on adapter /org/bluez/3855/hci0.
        D: [pulseaudio] bluetooth-util.c: Registering /MediaEndpoint/A2DPSink on adapter /org/bluez/3855/hci0.
        E: [pulseaudio] bluetooth-util.c: org.bluez.Media.RegisterEndpoint() failed: org.bluez.Error.AlreadyExists: Already Exists
        E: [pulseaudio] bluetooth-util.c: org.bluez.Media.RegisterEndpoint() failed: org.bluez.Error.AlreadyExists: Already Exists
        E: [pulseaudio] bluetooth-util.c: org.bluez.Media.RegisterEndpoint() failed: org.bluez.Error.AlreadyExists: Already Exists
        E: [pulseaudio] bluetooth-util.c: org.bluez.Media.RegisterEndpoint() failed: org.bluez.Error.AlreadyExists: Already Exists
        D: [pulseaudio] bluetooth-util.c: dbus: property 'State' changed to value 'disconnected'
        D: [pulseaudio] bluetooth-util.c: dbus: property 'State' changed to value 'disconnected'
        D: [pulseaudio] bluetooth-util.c: dbus: property 'State' changed to value 'disconnected'
        D: [pulseaudio] bluetooth-util.c: dbus: property 'State' changed to value 'disconnected'
        D: [pulseaudio] bluetooth-util.c: dbus: property 'State' changed to value 'connected'
        D: [pulseaudio] bluetooth-util.c: dbus: property 'State' changed to value 'connected'
        D: [pulseaudio] bluetooth-util.c: Unknown Bluetooth minor device class 0
        D: [pulseaudio] module-card-restore.c: Not restoring profile for card bluez_card.C8_84_47_15_B7_34, because already set.
        I: [pulseaudio] module-card-restore.c: Restoring port latency offsets for card bluez_card.C8_84_47_15_B7_34.
        I: [pulseaudio] card.c: Created 2 "bluez_card.C8_84_47_15_B7_34"
        W: [pulseaudio] module-bluetooth-device.c: Profile has no transport
        D: [pulseaudio] core-subscribe.c: Dropped redundant event due to change event.
        I: [pulseaudio] card.c: Changed profile of card 2 "bluez_card.C8_84_47_15_B7_34" to off
        I: [pulseaudio] module.c: Loaded "module-bluetooth-device" (index: #22; argument: "address=C8:84:47:15:B7:34 profile=a2dp").
        I: [alsa-source] alsa-source.c: Scheduling delay of 10,06ms, you might want to investigate this to improve latency...
        I: [alsa-source] ratelimit.c: 5 events suppressed
        I: [alsa-source] alsa-source.c: Overrun!
        I: [alsa-source] alsa-source.c: Increasing minimal latency to 2,00 ms
        D: [alsa-source] alsa-source.c: latency set to 20,00ms
        D: [alsa-source] alsa-source.c: hwbuf_unused=62008
        D: [alsa-source] alsa-source.c: setting avail_min=442

    What can I do to get it working? Regards,


  • Debugging .NET code called from X++ code in AX 2012

    - by ssmantha
    A very intriguing issue came to me: debugging .NET code called from X++ code in AX 2012. This was indeed a challenge to nail down. Luckily the tools and some concepts helped me achieve this task. Here it goes...

    We need to do seamless debugging from the AX debugger to Visual Studio and back. To enable this, we first need to check whether the dll to be debugged is present in the GAC; if so, we might need to uninstall it from there because of the order in which .NET loads assemblies: assemblies are first loaded from the GAC, and only then does the runtime check the public and private assembly paths. Since the assembly in the GAC is always compiled with runtime optimizations, it is difficult to debug. We need to unhook this assembly from the GAC and then rely on the standard .NET assembly loading order.

    Step 1: Remove the target assembly to debug from the GAC. Before that, stop all the AOS servers and close all instances of programs which rely on the AOT, e.g. all clients and even Visual Studio.

    Step 2: Build your sample code which is present in the AOT in debug mode and get the dll file along with the PDB files.

    Step 3: Place these files in the Server\..\Bin and Client\bin directories of the AX installation.

    Step 4: Configure Visual Studio.
    Step 4.1: Configure debugging options. In Visual Studio go to Debug -> Options and Settings -> Debug node -> General sub node and disable "Enable Just My Code (managed)".
    Step 4.2: Specify the symbol loading directory options. Specify the locations for the Client bin and Server bin directories of the installation; remember to specify the correct Server bin directory corresponding to your AOS.
    Step 4.3: Configure the project for debugging.

    Step 5: Ready to go. Place your breakpoints in X++ and in .NET wherever necessary before this process... Run the Visual Studio project and it will invoke the AX client, hitting your breakpoint in X++ code. When you do a step-in using F11, the Visual Studio debugger becomes active, and from there onwards you will be able to debug the complete flow.

    Debugging in a seamless manner across debuggers is a really good and mostly underutilized feature, but by doing so we get improved troubleshooting, and it saves a hell of a lot of time. Stay tuned for more in Advanced Debugging...
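    A minimal sketch of Step 1, assuming the gacutil tool from the Windows SDK is available on the machine; the assembly name below is illustrative, not the actual name of any AX component:

        REM List GAC contents to find the exact assembly name, then uninstall it.
        gacutil /l MyCompany.AxExtensions
        gacutil /u MyCompany.AxExtensions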


  • Installing Ubuntu 12.04 on NUC intel i3 DC3217IYE

    - by Kieron
    System: NUC i3, 2 hdmi ports, ethernet, no wireless. UEFI boot 2x2gb ram 30gb mSATA internal drive Ubuntu 64bit 12.04.03 Hello i am having much trouble loading an OS on my NUC. I started out attempting another OS with various loaders (beast/hack) without much success,(various panics on boot or endless reboot loops, or graphics failures) after many tries i decided to attempt Ubuntu. Many years ago i loaded Ubuntu on an e-machine without an issue so i figured it would go smoothly, nothing could be further from the truth. The Live USB stick loads, but when i am installing the OS it always fails to load grub 2. Obviously it wont boot from SSD without grub2. I searched and found that Boot-repair should fix it...so i created a live usb with boot repair...it refuses to repair the grub because the install never finished and the needed partions are not fully created and flagged. It also demands an internet connection which i am unable to provide. I then ran gparted in an attempt to create the needed partitions manually, then re-ran boot repair turning off the "check internet" option. and disabling the re-install grub hoping it would create the missing directories and or fix the flags. it appeared to run successfully but upon return to the Ubuntu live USB it still fails at the grub2 install. also gparted doesnt have the same choices that Ubuntu install has when creating partitions, causing it to not recognize that i already had a root, or an EFI directory or it sometimes couldnt tell what the format of the partition was...all very annoying the reason i cant connect to internet (and cant upload the error logs) is the nuc only has Ethernet and the location i have to set up is too far away from modem. i can not move the monitor closer to modem as it is a 50inch LCD. I just want to do a basic install with one user acct and remote desktop (vnc) turned on so i can move the NUC to the modem connect via ethernet and then finish setting it up via Remote desktop/VNC chicken from my mac. While i await any assistance you maybe able to provide i am going to attempt to switch to the 32bit version and legacy boot to see if that can load grub. thnx again to anyone that can come up with a possible solution. i would love to hit "erase and install ubuntu" if anyone can figure out what is stopping that simple answer from working. Also disks (CD/DVD) are not an option as neither my Mac mini or my NUC have optical drives, and i have no desire to buy one for one task


  • Fast programmatic compare of "timetable" data

    - by Brendan Green
    Consider train timetable data, where each service (or "run") has a data structure like this:

        public class TimeTable
        {
            public int Id { get; set; }
            public List<Run> Runs { get; set; }
        }

        public class Run
        {
            public List<Stop> Stops { get; set; }
            public int RunId { get; set; }
        }

        public class Stop
        {
            public int StationId { get; set; }
            public TimeSpan? StopTime { get; set; }
            public bool IsStop { get; set; }
        }

    We have a list of runs that operate against a particular line (the TimeTable class). Further, while we have a set collection of stations on a line, not all runs stop at all stations (that is, IsStop would be false and StopTime would be null).

    Now, imagine that we have received the initial timetable, processed it, and loaded it into the above data structure. Once the initial load is complete, it is persisted into a database; the data structure is used only to load the timetable from its source and to persist it to the database.

    We are now receiving an updated timetable. The updated timetable may or may not have any changes in it; we don't know, and are not told, whether any changes are present. What I would like to do is perform a compare for each run in an efficient manner. I don't want to simply replace each run. Instead, I want a background task that runs periodically, downloads the updated timetable dataset, and then compares it to the current timetable. If differences are found, some action (not relevant to the question) will take place.

    I was initially thinking of some sort of checksum process where I could, for example, load both runs (that is, the one from the newly received timetable and the one persisted to the database) into the data structure, then add up all the hour components of the StopTime and all the minute components of the StopTime and compare the results (i.e. both the sum of hours and the sum of minutes would be the same, with differences introduced if a stop time is changed, a stop deleted or a new stop added).

    Would that be a valid way to check for differences, or is there a better way to approach this problem? I can see a problem where, for example, one stop is changed to be 2 minutes earlier and another changed to be 2 minutes later, giving a net zero change. Or am I overthinking this, and would it just be simpler to brute-force check all stops to ensure that:

    1. The updated run stops at the same stations; and
    2. Each stop is at the same time
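    For concreteness, a minimal sketch of the checksum idea described above, against the Run/Stop types from the question (illustrative only; it exhibits exactly the collision noted, where -2 minutes on one stop and +2 on another cancel out):

        static void Checksum(Run run, out int hourSum, out int minuteSum)
        {
            hourSum = 0;
            minuteSum = 0;
            foreach (var stop in run.Stops)
            {
                // Skip stations the run passes through without stopping.
                if (stop.IsStop && stop.StopTime.HasValue)
                {
                    hourSum += stop.StopTime.Value.Hours;
                    minuteSum += stop.StopTime.Value.Minutes;
                }
            }
        }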

