Search Results

Search found 845 results on 34 pages for 'tricky'.

  • Pass a single boolean from an Android App to a libgdx game

    - by Doug Henning
    I'm writing an Android application that needs to pass a single boolean into an Android game that I am also writing. The idea is that the user does something in the app which will affect how the game operates. This is tricky with LIBGDX, since I need to get the bool value into the Java files of the game, but of course you can't call Android-specific things from within LIBGDX's main Java files. I tried using an intent, but of course the same problem persists: I can get the boolean into the MainActivity.java of the Android output of the game, but can't pass it along any further, since the Android output and the main Java files don't know about each other.

    I have seen a few tutorials that explain how to set up an interface in the LIBGDX Java files that can call Android things. This seems like wild overkill for what I want to do. I've been trying to use Android's SharedPreferences with LIBGDX's Gdx.app.getPreferences, but I can't make it work. Any help would be MUCH appreciated.

    I've set up two hello-world applications. One is a standard Android app with a single button that is supposed to write "true" into the shared preferences. The other is a standard LIBGDX hello world that is supposed to do nothing but check that bool when launched: if true, display one image on the screen; if false, display a different one.

    Here's the relevant bit of the Android code:

        import android.preference.PreferenceManager;

        public void onClick(View view) {
            if (view == this.boolButton) {
                final String PREF_FILE_NAME = "myBool";
                SharedPreferences preferences = getSharedPreferences(PREF_FILE_NAME, MODE_WORLD_WRITEABLE);
                SharedPreferences.Editor editor = preferences.edit();
                editor.putBoolean("myBool", true);
                editor.commit();
            }
        }

    And here's the relevant bit of the code from the LIBGDX main file:

        Preferences prefs = Gdx.app.getPreferences("myBool");
        boolean switcher = prefs.getBoolean("myBool");
        if (switcher) {
            texture = new Texture(Gdx.files.internal("data/worked512.png"));
            prefs.putBoolean("myBool", false);
        } else {
            texture = new Texture(Gdx.files.internal("data/libgdx.png"));
        }

    Everything compiles fine, it just doesn't work. I've spent HOURS googling, trying to find a way to pass this single boolean from Android into a LIBGDX main, and I'm totally stumped. Thanks for your help.
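
    A likely culprit, for what it's worth: SharedPreferences files belong to the application that created them, and on Android, Gdx.app.getPreferences opens a preferences file private to the game's own package, so a flag written by a second, separate app is never visible to it. If the button and the game can ship in the same application, the flag can travel through the launch Intent into the core game's constructor, keeping the platform-independent project free of Android imports. A minimal sketch, not the author's code - GameLauncherActivity and MyGdxGame are hypothetical names:

        // Android side: launch the game Activity with the flag as an Intent extra.
        Intent intent = new Intent(this, GameLauncherActivity.class);
        intent.putExtra("myBool", true);
        startActivity(intent);

        // The launcher extends libGDX's AndroidApplication and hands the flag over.
        public class GameLauncherActivity extends AndroidApplication {
            @Override
            protected void onCreate(Bundle savedInstanceState) {
                super.onCreate(savedInstanceState);
                boolean flag = getIntent().getBooleanExtra("myBool", false);
                initialize(new MyGdxGame(flag), new AndroidApplicationConfiguration());
            }
        }

        // Core project: plain Java, so a constructor argument crosses the
        // Android/libGDX boundary without any Android-specific imports.
        public class MyGdxGame extends ApplicationAdapter {
            private final boolean flag;
            private Texture texture;

            public MyGdxGame(boolean flag) { this.flag = flag; }

            @Override
            public void create() {
                texture = new Texture(Gdx.files.internal(
                        flag ? "data/worked512.png" : "data/libgdx.png"));
            }
        }

    The interface approach from the tutorials solves the reverse direction (the game calling back into Android at runtime); for a single read-at-launch flag, a constructor argument is enough.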

    Read the article

  • Now It’s Personal (Although It Should Always Be): Campus Recruitment

    - by user769227
    One of the things that I think is important, and that I want our Campus Recruitment Team here at Oracle to be known for, is outstanding customer service. When I say customer service, I mean both students and hiring managers should feel they have had a great experience in our campus hiring process. I think one of the keys to providing outstanding customer service is being able to provide, as best we can, a personalised experience, where the students who are interviewing with us feel like individuals in our process and not just part of a 'campus drive'. In the campus world this can be challenging at times, especially in countries where there is high-volume hiring. It can be tricky to create a personal experience when you are hiring for a large number of open graduate roles at one time.

    I think Campus Recruitment is one of the areas in the recruitment industry that is just waiting for a change. We have all seen the proliferation of Social Media in Recruitment over the past 4-6 years. Every Recruiter has a LinkedIn account or uses Twitter or G+ or FB, etc., and some individuals and organisations do it really well. Even in Campus Hiring there are great Social Media initiatives where companies reach out to students and talk to them. However, one thing that has not really changed (and this is a generalisation) is the campus hiring interview process. Do these words inspire enthusiasm in you: "Group Interview, Assessment Centre, On-Campus Drive, Off-Campus Drive"? I don't know about you, but to me these words don't sound very personal or individual to students. They almost conjure up images of a factory production line, or those long queues where the person behind the counter says 'take a number'.

    Campus Recruitment has come a long way, don't get me wrong - companies can share data with and talk to students in so many different ways now that it really has become a much more transparent and open process. There are some places, such as the IITs in India, where it really is a bit old school, with students running from company to company interviewing on campus over the course of a few days, but I want students talking to Oracle to have as great an experience as possible (the outcome of getting a job or not is separate from the customer experience).

    As students, what are your thoughts? Do you feel like 'just a number' when you are interviewing, or are there ways that companies can make the process more personalised? Let us know your thoughts. If you are interviewing with Oracle and have questions, want to talk to us, or want to know what it is like working here - email us and we will help where we can. If you can't reach your local Recruiter in your region, email me at [email protected] and I will put you in touch with the appropriate person.

    Read the article

  • Should I implement Backbone.js into my ASP.NET WebForms applications?

    - by Walter Stabosz
    Background

    I'm trying to improve my group's current web app development pattern. Our current pattern is something we came up with while trying to build rich web apps on top of ASP.NET WebForms (none of us knew ASP.NET MVC). This is the current pattern:

    - Our application is built on the ASP.NET WebForms framework. Our ASPX pages are essentially just HTML; we use almost no WebControls.
    - We use JavaScript/jQuery to perform all of our UI events and AJAX calls. For a single ASPX page, we have a single .js file.
    - All of our AJAX calls are POSTs (not RESTful at all).
    - Our AJAX calls contact WebMethods which we have defined in a series of ASMX files, one ASMX file per business object.

    Why Change?

    I want to revise our pattern a bit for a couple of reasons. We're starting to find that our JavaScript files are getting a bit unwieldy. We're using a hodgepodge of methods for keeping our local data and DOM updates in sync. We seem to spend too much time writing code to keep things in sync, and it can get tricky to debug. I've been reading Developing Backbone.js Applications, and I like a lot of what Backbone has to offer in terms of code organization and separation of concerns. However, when I got to the chapter on RESTful apps, I started to feel some hesitation about using Backbone.

    The Problem

    The problem is that our WebMethods do not really fit into the RESTful pattern, which seems to be the way Backbone wants to consume them. For now, I'd only like to address our issue of disorganized client-side code. I'd like to avoid major rewrites to our WebMethods.

    My Questions

    Is it possible to use Backbone (or a similar library) to clean up our client code, while not majorly impacting our data access WebMethods? Or would trying to use Backbone in this manner be a bastardization of its intended use? Anyone have any suggestions for improving our pattern in the area of code organization and spending less time writing DOM and data sync code?

    Read the article

  • SQLAuthority News – Social Media Series – Facebook and Google+

    - by pinaldave
    Pinal on Facebook and Google+

    Unless you have been living under a rock for the last few years, you know that Facebook is the first and last word in social networking. Everyone has a Facebook account, from your local store to the 10-year-old school child. Because of this ability to be completely connected to everyone in your entire life, keeping a Facebook page for a professional business can be tricky.

    For the most part, I use Facebook strictly for personal matters. I am friends only with friends I know in the "real" world (as opposed to my "virtual" online friends) and with family, of course. I chat with friends on Facebook and upload personal photos to share with family who are far away. I hope this doesn't make readers from my professional life feel left out. You can follow me on Facebook at www.facebook.com/SQLAuth, but you should know that Twitter is probably the better place to find updates about SQL Server and my blog (you can follow me on Twitter at www.twitter.com/pinaldave).

    There are definitely businesses who keep in touch with their clients using Facebook, but I felt the need to keep my personal and professional life separate. That's why I was so excited to find out Google was coming out with their own social media site, Google+. On Google+ I post some personal things as well, and there is a lot of overlap between what I put on Facebook and what I put on Google+. But since Google+ has become so popular amongst the "techie" crowd, I have found that it's a good place to follow some of the stars of the Microsoft world, like Scott Hanselman and Buck Woody.

    If you are also a member of Google+, I am looking to expand my circle there. You can find me at https://plus.google.com/104990425207662620918/posts. Google+ is the newest face in the social media world, and it still hasn't found a good footing between personal and professional. That's why I felt it would be a good idea to jump on the site early and help them determine which way to go. Maybe someday it will be a place where business and personal can mix.

    Reference: Pinal Dave (http://blog.sqlauthority.com)

    Read the article

  • Deploying an ADF Secure Application using WLS Console

    - by juan.ruiz
    Last week I worked on a requirement from a customer that wanted to understand how to deploy an application with ADF Security to WLS without using JDeveloper. The main question was what steps were needed in order to set up Enterprise Roles, Security Policies, and Application Credentials. In this entry I will explain the steps taken using JDeveloper 11.1.1.2.0.

    Requirements: Instead of building a sample application from scratch, we can use Andrejus's sample application, which contains all the security pieces that we need. Open and migrate the project, and make sure you adjust the database settings accordingly.

    Creating the EAR file

    Review the security settings of the application by going into the Application -> Secure menu, and see that there are two enterprise roles as well as the ADF Policies enforcing security on the main page. Make sure the Application Module uses a Data Source instead of a JDBC URL for its connection type, and take note of the data source name - in my case I have: java:comp/env/jdbc/HrDS. To facilitate access to this application once we deploy it, go to your ViewController project properties, select the Java EE Application category, and give a meaningful name to the context root as well as to the Application Name. Then go to the ADFSecurityWL Application properties -> Deployment and create a new EAR deployment profile. Uncheck "Auto generate and Synchronize weblogic-jdbc.xml Descriptors During Deployment", and deploy the application as an EAR file.

    Deploying the Application to WLS using the WLS Console

    On the WLS console, create a JNDI data source. This is the part of the whole exercise that I found trickiest, given that the name should match the AM's data source name; the naming convention that worked for me was jdbc.HrDS. Now deploy the application manually by selecting Deployments -> Install, look for the EAR, and follow the default steps. If this is the first time you deploy the application, once the deployment finishes you will be asked to Activate Changes on the domain; these changes contain all the security policies and application roles inserted into the WLS instance.

    Creating Roles and User Groups for the Application

    To finish the after-deployment setup, we need to create the groups that are the equivalent of the Enterprise Roles of ADF Security. For our sample we have two Enterprise Roles: employeesApplication and managersApplication. After that, we create the application users and assign them into their respective groups. Now we can run the application and test the security constraints.

    Read the article

  • ASP.NET Localization: Enabling resource expressions with an external resource assembly

    - by Brian Schroer
    I have several related projects that need the same localized text, so my global resources files are in a shared assembly that's referenced by each of those projects. It took an embarrassingly long time to figure out how to have my .resx files generate "public" properties instead of "internal" so I could have a shared resources assembly (apparently it was pretty tricky pre-VS2008, and my "googling" bogged me down in some out-of-date instructions). It's easy though - just change the "Custom Tool" to "PublicResXFileCodeGenerator", which can be done via the "Access Modifier" dropdown of the resource file designer window.

    A reference to my shared resources DLL gives me the ability to use the resources in code, but by default, the ASP.NET resource expression syntax:

        <asp:Button ID="BeerButton" runat="server" Text="<%$ Resources:MyResources, Beer %>" />

    ...assumes that your resources are in your web site project. To make resource expressions work with my shared resources assembly, I added two classes to the resources assembly:

    1) a custom IResourceProvider implementation:

        using System;
        using System.Web.Compilation;
        using System.Globalization;

        namespace DuffBeer
        {
            public class CustomResourceProvider : IResourceProvider
            {
                public object GetObject(string resourceKey, CultureInfo culture)
                {
                    return MyResources.ResourceManager.GetObject(resourceKey, culture);
                }

                public System.Resources.IResourceReader ResourceReader
                {
                    get { throw new NotSupportedException(); }
                }
            }
        }

    2) and a custom factory class inheriting from the ResourceProviderFactory base class:

        using System;
        using System.Web.Compilation;

        namespace DuffBeer
        {
            public class CustomResourceProviderFactory : ResourceProviderFactory
            {
                public override IResourceProvider CreateGlobalResourceProvider(string classKey)
                {
                    return new CustomResourceProvider();
                }

                public override IResourceProvider CreateLocalResourceProvider(string virtualPath)
                {
                    throw new NotSupportedException(String.Format(
                        "{0} does not support local resources.",
                        this.GetType().Name));
                }
            }
        }

    In the "system.web / globalization" section of my web.config file, I point the "resourceProviderFactoryType" property to my custom factory:

        <system.web>
            <globalization culture="auto:en-US" uiCulture="auto:en-US"
                resourceProviderFactoryType="DuffBeer.CustomResourceProviderFactory, DuffBeer" />

    This simple approach met my needs for these projects, but if you want to create reusable resource provider and factory classes that allow you to specify the assembly in the resource expression, the instructions are here.

    Read the article

  • Separating physics and game logic from UI code

    - by futlib
    I'm working on a simple block-based puzzle game. The gameplay consists pretty much of moving blocks around in the game area, so it's a trivial physics simulation. My implementation, however, is in my opinion far from ideal, and I'm wondering if you can give me any pointers on how to do it better.

    I've split the code up into two areas, game logic and UI, as I did with a lot of puzzle games:

    - The game logic is responsible for the general rules of the game (e.g. the formal rule system in chess).
    - The UI displays the game area and pieces (e.g. chess board and pieces) and is responsible for animations (e.g. animated movement of chess pieces).

    The game logic represents the game state as a logical grid, where each unit is one cell's width/height on the grid. So for a grid of width 6, you can move a block of width 2 four times until it collides with the boundary. The UI takes this grid and draws it by converting logical sizes into pixel sizes (that is, multiplies them by a constant). However, since the game has hardly any game logic, my game logic layer [1] doesn't have much to do except collision detection. Here's how it works:

    - Player starts to drag a piece.
    - UI asks game logic for the legal movement area of that piece and lets the player drag it within that area.
    - Player lets go of a piece.
    - UI snaps the piece to the grid (so that it is at a valid logical position).
    - UI tells game logic the new logical position (via mutator methods, which I'd rather avoid).

    I'm not quite happy with that. I'm writing unit tests for my game logic layer, but not the UI, and it turned out all the tricky code is in the UI: stopping the piece from colliding with others or the boundary, and snapping it to the grid. I don't like the fact that the UI tells the game logic about the new state; I would rather have it call a movePieceLeft() method or something like that, as in my other games, but I didn't get far with that approach, because the game logic knows nothing about the dragging and snapping that's possible in the UI.

    I think the best thing to do would be to get rid of my game logic layer and implement a physics layer instead. I've got a few questions regarding that:

    - Is such a physics layer common, or is it more typical to have the game logic layer do this?
    - Would the snapping-to-grid and piece-dragging code belong to the UI or the physics layer?
    - Would such a physics layer typically work with pixel sizes or with some kind of logical unit, like my game logic layer?
    - I've seen event-based collision detection in a game's code base once: the player would just drag the piece, the UI would render that obediently and notify the physics system, and the physics system would call an onCollision() method on the piece once a collision is detected. What is more common? This approach, or asking for the legal movement area first?

    [1] Layer is probably not the right word for what I mean, but subsystem sounds overblown and class is misguiding, because each layer can consist of several classes.
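
    To make the last question concrete, here is one rough sketch (all names invented for illustration, not a prescribed design) of the event-based variant: the UI converts pixels to grid units, the physics layer clamps the drag against occupied cells, and collisions arrive as callbacks rather than return-value checks.

        /** Listener the UI implements; the physics layer pushes events to it. */
        interface PhysicsListener {
            void onBlocked(int pieceId);                   // drag ran into a wall or piece
            void onSettled(int pieceId, int col, int row); // piece snapped to the grid
        }

        /** Physics layer working purely in logical grid units. */
        class GridPhysics {
            private final boolean[][] occupied; // true = cell taken
            private final PhysicsListener listener;

            GridPhysics(int cols, int rows, PhysicsListener listener) {
                this.occupied = new boolean[cols][rows];
                this.listener = listener;
            }

            /** Clamp a horizontal drag, one cell at a time; fires onBlocked on a hit. */
            int dragToColumn(int pieceId, int fromCol, int toCol, int row) {
                int step = Integer.signum(toCol - fromCol);
                int col = fromCol;
                while (col != toCol) {
                    int next = col + step;
                    if (next < 0 || next >= occupied.length || occupied[next][row]) {
                        listener.onBlocked(pieceId); // event, not a legal-area query
                        break;
                    }
                    col = next;
                }
                return col; // snapped-to column the UI should render
            }

            /** Called when the player releases the piece. */
            void release(int pieceId, int col, int row) {
                occupied[col][row] = true;
                listener.onSettled(pieceId, col, row);
            }
        }

    With this split, the clamping and snapping logic - the "tricky code" currently stuck in the UI - lives in the physics layer where it can be unit tested, and the UI is left with pixel conversion and animation.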

    Read the article

  • SQL SERVER – Difference Between CURRENT_TIMESTAMP and GETDATE() – CURRENT_TIMESTAMP Equivalent in SQL Server

    - by pinaldave
    A common question I often get from Oracle/MySQL professionals is: "What is the equivalent to CURRENT_TIMESTAMP in SQL Server?" And here is a common question I often get from SQL Server professionals: "What is the difference between CURRENT_TIMESTAMP and GETDATE()?" Very simple questions, but they have shown up so frequently that I feel like writing about them.

    Well, in SQL Server, GETDATE() is equivalent to CURRENT_TIMESTAMP. If you use CURRENT_TIMESTAMP in your select statement, it will work fine and return the same value as GETDATE().

    Now let us go to the next question, regarding the difference between GETDATE and CURRENT_TIMESTAMP. As a matter of fact, there is no difference between them in SQL Server (Reference Link). CURRENT_TIMESTAMP is an ANSI SQL function, whereas GETDATE is the T-SQL implementation of the same function. Both of them derive their value from the operating system of the computer on which the SQL Server instance is running.

    The above discussion prompts another question: in this case, which should one use, GETDATE or CURRENT_TIMESTAMP? Well, this is indeed a tricky and interesting question. I am very comfortable using GETDATE(), so I will continue to use it, but as a matter of fact there is no right or wrong answer. If you want to follow the ancient saying "When in Rome, do as the Romans do", I suggest using GETDATE(), or continue using CURRENT_TIMESTAMP.

    With that said, there is one very important property we all need to keep in mind: if you use CURRENT_TIMESTAMP while creating an object, it is automatically converted to GETDATE() and stored internally. To illustrate what I am suggesting, create a table using the following script:

        CREATE TABLE [dbo].[TestTable](
            [Cold2] [datetime] NULL
        ) ON [PRIMARY]
        GO
        ALTER TABLE [dbo].[TestTable] ADD DEFAULT (CURRENT_TIMESTAMP) FOR [Cold2]
        GO

    Now go to SSMS and generate the script for the table, and you will notice the following syntax:

        CREATE TABLE [dbo].[TestTable](
            [Cold2] [datetime] NULL
        ) ON [PRIMARY]
        GO
        ALTER TABLE [dbo].[TestTable] ADD DEFAULT (GETDATE()) FOR [Cold2]
        GO

    You can see that SQL Server has automatically converted CURRENT_TIMESTAMP to GETDATE(). I guess this gives us an idea of how they behave. Now go ahead and make your choice! Do let me know which one you will use, CURRENT_TIMESTAMP or GETDATE(), in the comments area.

    Reference: Pinal Dave (http://blog.sqlauthority.com)

    Read the article

  • Oracle User Productivity Kit Translation

    - by ultan o'broin
    Oracle's customers just love the User Productivity Kit (UPK). I hear only great things about it from our international customers at the Oracle Usability Advisory Board meetings too. The UPK is the perfect solution for enterprise applications training needs (I previously reviewed a fine book about UPK, btw). One question I am often asked is how source content created using the UPK can be translated into another language. I spoke with Peter Maravelias, Principal Product Strategy Manager for UPK, about this recently. UPK is already optimized for easy source-target translation, and there is even a solution for re-recording demos. Here's what you can do to get your source content into another language:

    - Use UPK's ability to automatically translate events and actions. UPK comes with XML templates that allow you to accomplish this in 21 languages with a simple publishing action switch. These templates even deal with the tricky business of using gender-based translations (sample localization templates are available for languages such as Spanish and Japanese).
    - Use the Import and Export localization features to export additional custom content in a format like XLIFF, easily handled by translation tools. You could also export and import in Word format.
    - Re-record the sound (audio) files that go with the recordings, one per screen. UPK's granular approach to the sound files means that timing isn't an issue; retiming demos isn't required. A tip here with sound files and XLIFF-exported custom content is to facilitate translation context by avoiding explicit references to actions going on in the screen recordings. A text-based storyboard with screenshots accompanying the sound files should also be provided to the translators, along with a glossary of terms.
    - Use the re-record option in UPK to record any demo from a translated application. This will allow all the translated UI labels to be automatically captured. You may be required to resize some action events here due to text expansion issues. Of course, you will need translated data in the translated application too, so plan for this in advance. However, source-target language skills aren't required for the re-recording.

    The UPK Player itself, of course, is also available from Oracle, along with content and documentation, in 21 languages. The Developer and Setup are also translated into a smaller number of languages. Check the Oracle UPK website for the latest details. UPK is a super solution for global enterprise applications training deployments, allowing source content to be translated into multiple languages easily. See this post on the UPK blog for more insight too!

    Read the article

  • Best pathfinding for a 2D world made by CPU Perlin Noise, with random start- and destinationpoints?

    - by Mathias Lykkegaard Lorenzen
    I have a world made by Perlin Noise. It's created on the CPU for consistency between several devices (yes, I know it takes time - I have my techniques that make it fast enough). Now, in my game you play as a fighter-ship-thingy-blob or whatever it's going to be. What matters is that this "thing" that you play as is placed in the middle of the screen and moves along with the camera. The white stuff in my world are walls; the black stuff is freely movable space.

    Now, as the player moves around, he will constantly see "monsters" spawning around him in a circle (a circle that's larger than the screen, though). These monsters move inwards and try to collide with the player. This is the part that's tricky: I want these monsters to constantly spawn, moving towards the player, but avoiding walls entirely. I've added a screenshot below that makes it easier to understand (excuse my bad drawing - I was using Paint for this). In the image, the following rules apply:

    - The red dot in the middle is the player itself.
    - The light-green rectangle is the boundary of the screen (in other words, what the player sees). This boundary moves with the player.
    - The blue circle is the spawning circle. At the circumference of this circle, monsters will spawn constantly. This spawn circle moves with the player and the boundaries of the screen.
    - Each monster spawned (shown as a yellow triangle) wants to collide with the player.
    - The pink lines show the paths that I want the monsters to move along (or something similar). What matters is that they reach the player without colliding with the walls.

    The map itself (the one that is Perlin Noise generated on the CPU) is saved in memory as a two-dimensional bit-array. A 1 means a wall, and a 0 means an open, walkable space. The current tile size is pretty small; I could easily make it a lot larger for increased performance. I've done some path algorithms before, such as A*. I don't think that's entirely optimal here, though.
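
    Not the author's code, but a sketch of the spawn step under the constraints given (32x32 tiles, a 2D walls bit-array, and some aStar(...) helper assumed to exist): pick a random angle on the spawn circle, convert the pixel position to tile coordinates, and request a path to the player's tile.

        // Assumed available: boolean[][] walls (true = wall), the player's pixel
        // position, the spawn radius in pixels, and an aStar(...) helper - all
        // hypothetical names standing in for the questioner's own code.
        Random rng = new Random();
        double angle = rng.nextDouble() * 2 * Math.PI;
        float spawnX = playerX + spawnRadius * (float) Math.cos(angle);
        float spawnY = playerY + spawnRadius * (float) Math.sin(angle);

        int startCol = (int) (spawnX / 32), startRow = (int) (spawnY / 32);
        int goalCol  = (int) (playerX / 32), goalRow  = (int) (playerY / 32);

        if (!walls[startCol][startRow]) { // don't spawn inside a wall
            List<int[]> path = aStar(walls, startCol, startRow, goalCol, goalRow);
            // hand 'path' to the new monster; re-plan when the player changes tile
        }

    On the "A* isn't optimal here" hunch: since every monster shares one goal, a single Dijkstra/breadth-first flood outward from the player's tile (recomputed only when the player crosses a tile boundary) gives all monsters their next step at once, which generally beats running A* per monster.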

    Read the article

  • Creating metadata value relationships

    - by kyle.hatlestad
    I was recently asked a question about an interesting use case. A customer wanted content to be submitted into UCM with a particular ID in a custom metadata field, but they wanted that ID to be translated into an employee name in another metadata field upon submission.

    My initial thought was that this could be done with a dependent choice list (DCL) - one option list field driving the choices in another. But this didn't work in this case for a couple of reasons. First, the number of IDs could potentially be very large, so making that into a drop-down list would not be practical; the preference was for that field to simply be a text field to type in the ID. Secondly, data could be submitted through methods other than the web-based check-in form, and without an interface to select the DCL choices, the system needed a way to determine and populate the name field.

    So instead I took the approach of having the value of the ID field drive the value of the Name field, using the derived field approach in my rule. It was easy to simply copy the value of the ID field into the Name field, but having it look up and translate the value proved to be the tricky part. So here is the approach I took...

    First, I created my two metadata fields as standard text fields in the Configuration Manager applet. Next, I created a table that stores the relationship between the IDs and Names. I then created a View into that table and set the column to the EmployeeID.

    I now create a new Application Field and set it as an option list using the View I created in the previous step. The reason I create it as an Application field is because I don't need to display the field or store a value in it; I simply need to make use of the option list in the next step.

    Finally, I create a Rule in which I select the Employee Name field and turn on the 'Is derived field' checkbox. I edit the derived value and add a new condition. Because the option list is an Application field and not an Information field, I can't use the Compute button. Instead, I insert this line directly in the Value field:

        @getFieldViewValue("EmployeeMapping", #active.xEmployeeID, "EmployeeName")

    The "EmployeeMapping" parameter designates that the value should be pulled from the EmployeeMapping Application field that I created in the previous step. #active.xEmployeeID is the ID value that should be pulled from what the user entered. "EmployeeName" is the column name in the table which holds the value that corresponds to the ID. The extracted name then becomes the value within our Employee Name field.

    That's it. You can then add additional Rules to make the Name field read-only/hidden on the check-in page and such.

    Read the article

  • Convert Chrome Bookmark Toolbar Folders to Icons

    - by Asian Angel
    So you have your regular bookmarks reduced to icons, but what about the folders? With our little hack and a few minutes of your time, you can turn those folders into icons too.

    Condensing the Folders

    Reducing bookmark folders to icons is a little trickier than regular bookmarks, but not hard to do. Right click on the folder and select "Rename…". The folder's name should already be highlighted/selected. Delete the text, and notice that the "OK Button" has become unusable for the moment. Now what you will need to do is:

    - Hold down the "Alt Key"
    - Type in "0160" (without the quotes) using the numbers keypad on the right side of your keyboard
    - Release the "Alt Key" after you have finished typing in the number above

    Once you have released the "Alt Key" you will notice two things: the "cursor" has moved further into the text area, and you can now click on the "OK Button" again. The folder works just as well as before, but without taking up so much room.

    What if you want to reduce multiple folders to icons? Perform the same exact steps shown above for each folder and pack your "Bookmarks Toolbar" full of folder goodness! The folders will have a little more space between them in comparison with singular bookmarks, due to the "blank name" for each folder.

    Note: If you export your bookmarks, all bookmarks contained in multiple blank-name folders will be combined into a single folder.

    Conclusion

    With just a little bit of work you can pack a lot of goodness into your "Bookmarks Toolbar". No more wasted space…

    Read the article

  • Overload Avoidance

    - by mikef
    A little under a year ago, Matt Simmons wrote a rather reflective article about his terrifying brush with stress-induced ill health. SysAdmins and DBAs have always been prime victims of work-related stress, but I wonder if that predilection is perhaps getting worse, despite the best efforts of Matt and his trusty side-kick, HR. The constant pressure from share-holders and CFOs to 'streamline' the workforce is partially to blame, but the more recent culprit is technology itself.

    I can't deny that the rise of technologies like virtualization, PowerCLI, PowerShell, and a host of others has been a tremendous boon. As a result, individual IT professionals are now able to handle more and more tasks and manage increasingly large and complex environments. But, without a doubt, this is a two-edged sword; the reward for competence is invariably more work.

    Unfortunately, SysAdmins play such a pivotal role in modern business that it's easy to see how they can very quickly become swamped in conflicting demands coming from different directions. However, that doesn't justify the ridiculous hours many are asked (or volunteer) to devote to their work. Admirable though their commitment is, it isn't healthy for them, it sets a dangerous expectation, and eventually something will snap. There are times when everyone needs to step up to the plate outside of 'normal' work hours, but that time isn't all the time.

    Naturally, with all that lovely technology, you can automate more and more of those tricky tasks to keep on top of the workload, but you are still only human. Clever though you may be, there is a very real limit to how far technology can take you. I'm not suggesting that you avoid these technologies, or deliberately aim for mediocrity; I'm just saying that you need to be more than just technically skilled (and Wesley Nonapeptide riffs on and around this topic in his excellent 'Telepathic Robot Drones' blog post). You need to be able to manage expectations, not just Exchange. Specifically, that means your own expectations of what you are capable of, because those come before everyone else's. After all, how can you keep your work-life balance under control if you're the one setting the bar way too high?

    Talking to your manager, or discussing issues with your users, is only going to be productive if you have some facts to work with. "Know Thyself" is the first law of managing work overload, and this is obviously a skill which people develop over time; the fact that veteran SysAdmins exist at all is testament to this. I'd just love to know how you get to that point. Personally, I'm using RescueTime to keep myself honest, but I'm open to recommendations for better methods. Do you track your own time, do you have an intuitive sense of what is possible, or do you just rely on someone else to handle that all for you?

    Cheers, Michael

    Read the article

  • BPM ADF Task forms. Checking whether the current user is in a BPM Swimlane

    - by Christopher Karl Chan
    Focus

    This blog entry will focus on BPM swimlane roles and users from an ADF context. We have an ADF Task Details Form, and we are in the process of making it richer and more dynamic in functionality. A common requirement could be to dynamically show different areas based on the user logged into the workspace. Perhaps we even want to know what swimlane role the user belongs to. It is a little bit harder to achieve than one thinks, unless you know the trick.

    The Challenge

    The tricky part here is that the ADF Task Details Form is in fact part of a separate J2EE application to the main workspace. So if you try to use Java or Expression Language to get the logged-in user, you will only find anonymous, and none of the BPM Roles you will be expecting. So what to do?

    The Magic

    First, add the BC4J Security library to your view project, then restart JDeveloper. Now find the web.xml file in the view project of your ADF Task Details Application, look for the JpsFilter section, and add in the following section:

        <init-param>
            <param-name>application.name</param-name>
            <param-value>OracleBPMProcessRolesApp</param-value>
        </init-param>

    This will link your application to that of the BPM workspace. In the dynamic part of your ADF form you can now check whether the user logged into the BPM Workspace belongs to a BPM swimlane in any BPM process. The best way to do this is by using expression language in the JSF page itself. Here I am simply changing the rendered flag to either true or false, and thereby hiding or showing a section. Perhaps you are re-using the same form for a task in an approver swimlane and an ordinary user swimlane, and only want the approver to see this field. So call the built-in function to check if the user is a member of the BPM swimlane role. The name of the role must be of the syntax BPMProject.RoleName:

        <af:outputText value="This will only be rendered when the user is part of the BPM Swimlane Role"
                       rendered="#{securityContext.userInRole['BPMProjectName.Rolename']}"/>

    Now you must redeploy your ADF Task Form project. The text will then only get rendered in the Task Details Form if the user logged into the workspace is a member of the swimlane Unsecure of the BPM project SimpleTask.
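
    If the same check is needed in Java rather than Expression Language - say, inside a managed bean backing the task form - the ADF security context exposes an equivalent call. A small sketch, assuming the web.xml linking above is already in place; the bean name and role name are placeholders:

        import oracle.adf.share.ADFContext;
        import oracle.adf.share.security.SecurityContext;

        public class TaskFormBean {
            // True when the workspace user belongs to the named swimlane role.
            public boolean isApprover() {
                SecurityContext sec = ADFContext.getCurrent().getSecurityContext();
                return sec.isUserInRole("BPMProjectName.Rolename");
            }
        }

    The JSF component could then use rendered="#{taskFormBean.approver}" instead of the inline securityContext expression.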

    Read the article

  • Part 1 Basic Webtrends REST Examples

    - by GeekAgilistMercenary
    In this entry I just want to cover some examples of how to connect to Webtrends DX Web Services. The DX Web Services use REST as the architecture, providing simple URI-based end points to connect to. With the Webtrends SDK you can connect to these services with your account information. Here are the basic steps to retrieve a profile list, the reports from one of those profiles, and then the report you want from that report list.

    The first step is to create a Webtrends User:

        WebTrends.Sdk.Account.User webtrendsUser = new Account.User();
        webtrendsUser.UserName = username;
        webtrendsUser.Password = password;
        webtrendsUser.AccountName = account;

    After you create the Webtrends User, simply request a profile list by getting a list of ProfileDefinition objects:

        List<WebTrends.Sdk.Profile.ProfileDefinition> profiles =
            WebTrends.Sdk.Factory.NavigationFactory.BuildListing(webtrendsUser);

    Next you will want to grab a report list based on the profile you are in and your credentials:

        List<WebTrends.Sdk.Report.ReportDefinition> reports =
            WebTrends.Sdk.Factory.NavigationFactory.BuildListing(profiles[i], webtrendsUser);

    In the code above, i would equate to the specific profile you want from the retrieved list of profiles. The common scenario is that one has pulled the profiles into a drop-down, combo, or list box that the user can select from. When the user selects the specific profile, that profile object can then be used to pull the list of ReportDefinitions.

    Once we have the report definitions, all sorts of criteria can be added together to query for a specific report. This is also where things can get a little tricky. For instance, take a look at the code below:

        WebTrends.Sdk.Factory.ReportFactory.CreateDimensionalReport(
            report.ID.ToString(),
            profiles[i].ID.ToString(),
            "2010m01",
            webtrendsUser);

    The CreateDimensionalReport takes 4 parameters for this particular overload: the report ID, the profile ID, the Webtrends date format, and the Webtrends User object. There are a number of other overloads available within this factory method that allow for passing the specific REST URI and other criteria to retrieve the report of your choice. In the near future we will be adding some more to this method, which will provide more flexibility without needing to use the full REST URI.

    I will have more on this, so for all you coders out there using Webtrends DX Services, I hope this is helpful! Enjoy.

    Original Entry

    Read the article

  • Bay Area Coherence Special Interest Group Next Meeting July 21, 2011

    - by csoto
    Date: Thursday, July 21, 2011
    Time: 4:30pm - 8:15pm ET (note that parking at 475 Sansome closes at 8:30pm)
    Where: Oracle Office, 475 Sansome Street, San Francisco, CA (Google Map)

    We will be providing snacks and beverages. Register! Registration is required for building security.

    Presentation Line Up:

    5:10pm - Batch Processing Using Coherence in Oracle Group Policy Administration - Paul Cleary, Oracle

    Oracle Insurance Policy Administration (OIPA) is a flexible, rules-based policy administration solution that provides full record keeping for all policy lifecycle transactions. One component of OIPA is Cycle processing, which is the batch processing of pending insurance transactions. This presentation introduces OIPA and Cycle processing, describing the unique challenges of processing a high volume of transactions within strict time windows. It then reviews how OIPA uses Oracle Coherence and the Processing Pattern to meet these challenges, describing implementation specifics that highlight the simplicity and robustness of the Processing Pattern.

    6:10pm - Secure, Optimize, and Load Balance Coherence with F5 - Chris Akker, F5

    F5 Networks, Inc., the global leader in Application Delivery Networking, helps the world's largest enterprises and service providers realize the full value of virtualization, cloud computing, and on-demand IT. Recently, F5 and Oracle partnered to deliver a novel solution that integrates Oracle Coherence 3.7 with F5 BIG-IP Local Traffic Manager (LTM). This session will introduce F5 and how you can leverage BIG-IP LTM to secure, optimize, and load balance application traffic generated from Coherence*Extend clients across any number of servers in a cluster, and to hardware-accelerate CPU-intensive SSL encryption.

    7:10pm - Using Oracle Coherence to Enable Database Partitioning and DC Level Fault Tolerance - Alexei Ragozin, Independent Consultant, and Brian Oliver, Oracle

    Partitioning is a very powerful technique for scaling database-centric applications. One tricky part of a partitioned architecture is routing requests to the right database. The routing layer (routing table) should know the right database instance for each attribute which may be used for routing (e.g. account id, login, email, etc.): it should be fast, it should be fault tolerant, and it should scale. All of the above makes Oracle Coherence a natural choice for implementing such routing tables in partitioned architectures. This presentation will cover synchronization of the grid with multiple databases, conflict resolution, cross-cluster replication, and other aspects related to implementing a robust partitioned architecture.

    Additional Info:

    - Download Past Presentations: The presentations from the previous meetings of the BACSIG are available for download here. Click on the presentation titles to download the PDF files.
    - Join the Coherence online community on our Oracle Coherence Users Group on LinkedIn.
    - Contact BACSIG with any comments, questions, presentation proposals, and content suggestions.

    Read the article

  • Unit testing in Django

    - by acjohnson55
    I'm really struggling to write effective unit tests for a large Django project. I have reasonably good test coverage, but I've come to realize that the tests I've been writing are definitely integration/acceptance tests, not unit tests at all, and I have critical portions of my application that are not being tested effectively. I want to fix this ASAP.

    Here's my problem. My schema is deeply relational and heavily time-oriented, giving my model objects high internal coupling and lots of state. Many of my model methods query based on time intervals, and I've got a lot of auto_now_add going on in timestamped fields. So take a method that looks like this, for example:

        def summary(self, startTime=None, endTime=None):
            # ... logic to assign a proper start and end time
            # if none was provided, probably using datetime.now()
            objects = self.related_model_set.manager_method.filter(...)
            return sum(object.key_method(startTime, endTime) for object in objects)

    How does one approach testing something like this? Here's where I am so far. It occurs to me that the unit-testing objective should be: given some mocked behavior from key_method on its arguments, does summary correctly filter/aggregate to produce a correct result? Mocking datetime.now() is straightforward enough, but how can I mock out the rest of the behavior?

    I could use fixtures, but I've heard pros and cons of using fixtures for building my data (poor maintainability being a con that hits home for me). I could also set up my data through the ORM, but that can be limiting, because then I have to create related objects as well - and the ORM doesn't let you mess with auto_now_add fields manually. Mocking the ORM is another option, but not only is it tricky to mock deeply nested ORM methods, the logic in the ORM code gets mocked out of the test, and mocking seems to make the test really dependent on the internals and dependencies of the function-under-test.

    The toughest nuts to crack seem to be functions like this, which sit on a few layers of models and lower-level functions and are very dependent on the time, even though these functions may not be super complicated. My overall problem is that no matter how I seem to slice it, my tests are looking way more complex than the functions they are testing.

    Read the article

  • Subterranean IL: Volatile

    - by Simon Cooper
    This time, we'll be having a look at the volatile. prefix instruction, and one of the differences between volatile in IL and C#.

    The volatile. prefix

    volatile is a tricky one, as there's varying levels of documentation on it. From what I can see, it has two effects:

    - It prevents caching of the load or store value; rather than reading or writing to a cached version of the memory location (say, the processor register or cache), it forces the value to be loaded or stored at the 'actual' memory location, so it is then immediately visible to other threads.
    - It forces a memory barrier at the prefixed instruction. This ensures instructions don't get re-ordered around the volatile instruction. This is slightly more complicated than it first seems, and only seems to matter on certain architectures. For more details, Joe Duffy has a blog post going into the details.

    For this post, I'll be concentrating on the first aspect of volatile.

    Caching field accesses

    To demonstrate this, I created a simple multithreaded IL program. It boils down to the following code:

        .class public Holder {
            .field public static class Holder holder
            .field public bool stop

            .method public static specialname void .cctor() {
                newobj instance void Holder::.ctor()
                stsfld class Holder Holder::holder
                ret
            }
        }

        .method private static void Main() {
            .entrypoint
            // Thread t = new Thread(new ThreadStart(DoWork))
            // t.Start()
            // Thread.Sleep(2000)
            // Console.WriteLine("Stopping thread...")
            ldsfld class Holder Holder::holder
            ldc.i4.1
            stfld bool Holder::stop
            call instance void [mscorlib]System.Threading.Thread::Join()
            ret
        }

        .method private static void DoWork() {
            ldsfld class Holder Holder::holder
            // while (!Holder.holder.stop) {}
        DoWork:
            dup
            ldfld bool Holder::stop
            brfalse DoWork
            pop
            ret
        }

    If you compile and run this code, you'll find that the call to Thread.Join() never returns - the DoWork spinlock is reading a cached version of Holder.stop, which is never being updated with the new value set by the Main method. Adding volatile to the ldfld fixes this:

        dup
        volatile. ldfld bool Holder::stop
        brfalse DoWork

    The volatile ldfld forces the field access to read direct from heap memory, which is then updated by the main thread, rather than using a cached copy.

    volatile in C#

    This highlights one of the differences between IL and C#. In IL, volatile only applies to the prefixed instruction, whereas in C#, volatile is specified on a field to indicate that all accesses to that field should be volatile (interestingly, there's no mention of the 'no caching' aspect of volatile in the C# spec; it only focuses on the memory barrier aspect). Furthermore, this information needs to be stored within the assembly somehow, as such a field might be accessed directly from outside the assembly, but there's no concept of a 'volatile field' in IL! How this information is stored with the field will be the subject of my next post.

    Read the article

  • ArchBeat Link-o-Rama for 2012-09-21

    - by Bob Rhubart
    The Real Architects of Los Angeles: OTN Architect Day in LA - Oct 25
    No gossip. No drama. No hair pulling. Just a full day of technical sessions and peer interaction focused on using Oracle technologies in today's cloud and SOA architectures. The event is free, but seating is limited, so register now. Thursday, October 25, 2012. 8:00 a.m. - 5:00 p.m. Sofitel Los Angeles, 8555 Beverly Boulevard, Los Angeles, CA 90048.

    Why IT is a profession in 'flux' | ZDNet
    I usually don't post two items from the same person in one day, but this post from ZDNet blogger Joe McKendrick deals with some critical issues affecting those in IT. As McKendrick puts it: "IT professionals are under considerable pressure to deliver more value to the business, versus being good at coding and testing and deploying and integrating."

    Cloud, automation drive new growth in SOA governance market | ZDNet
    "SOA governance tools and processes learned over the past decade are now underpinning cloud projects as they scale across enterprises," reports Joe McKendrick. But there remains a lack of understanding about SOA governance. For a broader discussion of the importance of IT governance, check out the latest OTN ArchBeat Podcast: By Any Other Name: Governance and Architecture.

    How I Simplified the Installation of Oracle Database on Oracle Linux 6
    Michele Casey's update of Ginny Henningsen's original article. This version shows how to simplify the installation of Oracle Database 11g on Oracle Linux 6 by installing the oracle-rdbms-server-11gR2-preinstall RPM package.

    Fault Handling Slides and Q&A | Ronald van Luttikhuizen
    Oracle ACE Director Ronald van Luttikhuizen shares the slides and a Q&A transcript from a presentation he and fellow ACE Director Guido Schmutz gave at the recent Oracle OpenWorld and JavaOne preview event organized by AMIS Technology.

    BPM ADF Task forms. Checking whether the current user is in a BPM Swimlane | Christopher Karl Chan
    "The tricky part here is that the ADF Task Details Form is in fact part of a separate J2EE application to the main workspace," says Oracle Fusion Middleware A-Team member Christopher Karl Chan. "So if you try to use Java or Expression Language to get the logged in user you will only find anonymous and none of the BPM Roles you will be expecting. So what to do?" Don't guess - read the post and find out.

    Thought for the Day
    "The speed of change makes you wonder what will become of architecture." - Tadao Ando
    Source: BrainyQuote

    Read the article

  • Coordinate based travel through multi-line path over elapsed time

    - by Chris
    I have implemented A* path finding to decide the course of a sprite through multiple waypoints. I have done this for point-A-to-point-B movement, but am having trouble with multiple waypoints: on slower devices, when the FPS drops and the sprite travels PAST a waypoint, I am lost as to the math to switch directions at the proper place.

    EDIT: To clarify, my path-finding code is separate, in a game thread; this onUpdate method lives in a sprite-like class, and sprite updating happens in the UI thread. To be even more clear, the path is only updated when objects block the map. At any given point the current path could change, but that should not affect the design of the algorithm, if I am not mistaken. I do believe all components involved are well designed and accurate, aside from this piece :-)

    Here is the scenario:

        public void onUpdate(float pSecondsElapsed) {
            // this could be 4x speed, so on slow devices the travel moved between
            // frames could be very large. What happens with my original algorithm
            // is it will start actually doing circles around the next waypoint..
            pSecondsElapsed *= SomeSpeedModificationValue;

            final int spriteCurrentX = this.getX();
            final int spriteCurrentY = this.getY();

            // getCoords contains a large array of the coordinates to each waypoint.
            // A waypoint is a destination on the map, defined by tile column/row. The
            // path finder converts these waypoints to X,Y coords.
            //
            // I.E:
            // Given a set of waypoints of 0,0 to 12,23 to 23,0 on a 23x23 tile map,
            // each tile being 32x32 pixels, this would translate in the path finder to:
            // -> 0,0 to 12,23
            // Coord : x=16 y=16
            // Coord : x=16 y=48
            // Coord : x=16 y=80
            // ...
            // Coord : x=336 y=688
            // Coord : x=336 y=720
            // Coord : x=368 y=720
            //
            // -> 12,23 to 23,0 - NOTE: this direction change gives me trouble specifically
            // Coord : x=400 y=752
            // Coord : x=400 y=720
            // Coord : x=400 y=688
            // ...
            // Coord : x=688 y=16
            // Coord : x=688 y=0
            // Coord : x=720 y=0
            //
            // The current update index; the index specifies the coordinate above.
            // I.E. final int[] coords = getCoords( 2 ); -> x=16 y=80
            final int[] coords = getCoords( ... );

            // now I have the coords, how do I detect where to set the position? The tricky part
            // for me is when a direction changes, how do I calculate based on the elapsed time
            // how far to go up the new direction... I just can't wrap my head around this.
            this.setPosition(newX, newY);
        }
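
    For what it's worth, one standard way to handle the overshoot, sketched against the names in the question (index, getCoords, a coordCount, and a pixels-per-second speed are assumed fields/helpers): spend the frame's travel budget segment by segment, snapping to each waypoint passed, so a direction change happens exactly at the corner no matter how large the elapsed time gets.

        float remaining = speed * pSecondsElapsed; // pixels to travel this frame
        float x = getX(), y = getY();
        while (remaining > 0 && index < coordCount) {
            final int[] wp = getCoords(index);
            float dx = wp[0] - x, dy = wp[1] - y;
            float dist = (float) Math.sqrt(dx * dx + dy * dy);
            if (dist <= remaining) {
                // Reached this waypoint: snap to it, spend the distance, and let
                // the loop continue into the next segment (the direction change).
                x = wp[0];
                y = wp[1];
                remaining -= dist;
                index++;
            } else {
                // Partway along the current segment: move proportionally.
                x += dx / dist * remaining;
                y += dy / dist * remaining;
                remaining = 0;
            }
        }
        setPosition(x, y);

    The while loop is the whole trick: a slow frame that would have orbited a waypoint instead consumes that waypoint and carries the leftover distance into the next segment.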

    Read the article

  • Robust line of sight test on the inside of a polygon with tolerance

    - by David Gouveia
    Foreword

    This is a followup to this question and the main problem I'm trying to solve. My current solution is a hack which involves inflating the polygon and doing most calculations on the inflated polygon instead. My goal is to remove this step completely and correctly solve the problem with calculations only.

    Problem

    Given a concave polygon, and treating all of its edges as if they were walls in a level, determine whether two points A and B are in line of sight of each other, while accounting for some degree of floating-point error. I'm currently basing my solution on a series of line-segment intersection tests. In other words:

    - If either of the end points is outside the polygon, they are not in line of sight.
    - If both end points are inside the polygon, and the line segment from A to B crosses any of the edges of the polygon, then they are not in line of sight.
    - If both end points are inside the polygon, and the line segment from A to B does not cross any of the edges of the polygon, then they are in line of sight.

    But the problem is dealing correctly with all the edge cases. In particular, it must be able to deal with all the situations depicted below, where red lines are examples that should be rejected and green lines are examples that should be accepted. I probably missed a few other situations, such as when the line segment from A to B is collinear with an edge, but one of the end points is outside the polygon.

    One point of particular interest is the difference between cases 1 and 9. In both cases, both end points are vertices of the polygon, and there are no edges being intersected, but 1 should be rejected while 9 should be accepted. How to distinguish these two? I could check some middle point within the segment to see if it falls inside or not, but it's easy to come up with situations in which it would fail. Case 7 was also pretty tricky, and I had to treat it as a special case which checks if the two points are adjacent vertices of the polygon directly. But there are also other chances of line segments being collinear with the edges of the polygon, and I'm still not entirely sure how I should handle those cases. Is there any well-known solution to this problem?
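
    Every branch above leans on one primitive: deciding whether a point lies inside the polygon. For reference, here is a sketch of the standard ray-casting (even-odd) test; note that it is deliberately not the robust part - points exactly on an edge can land on either side, which is precisely where the tolerance question bites, so an epsilon-based "on edge" check still has to be layered on top.

        /** Ray-casting point-in-polygon test; xs/ys hold the vertices in order.
         *  Boundary points are ambiguous and need a separate epsilon edge test. */
        static boolean contains(float[] xs, float[] ys, float px, float py) {
            boolean inside = false;
            for (int i = 0, j = xs.length - 1; i < xs.length; j = i++) {
                if ((ys[i] > py) != (ys[j] > py)
                        && px < (xs[j] - xs[i]) * (py - ys[i]) / (ys[j] - ys[i]) + xs[i]) {
                    inside = !inside;
                }
            }
            return inside;
        }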

    Read the article

  • How do we keep dependent data structures up to date?

    - by Geo
    Suppose you have a parse tree, an abstract syntax tree, and a control flow graph, each one logically derived from the one before. In principle it is easy to construct each graph given the parse tree, but how can we manage the complexity of updating the graphs when the parse tree is modified? We know exactly how the tree has been modified, but how can the change be propagated to the other trees in a way that doesn't become difficult to manage?

    Naturally the dependent graph can be updated by simply reconstructing it from scratch every time the first graph changes, but then there would be no way of knowing the details of the changes in the dependent graph. I currently have four ways to attempt to solve this problem, but each one has difficulties:

    1. Nodes of the dependent tree each observe the relevant nodes of the original tree, updating themselves and the observer lists of original-tree nodes as necessary. The conceptual complexity of this can become daunting.

    2. Each node of the original tree has a list of the dependent-tree nodes that specifically depend upon it, and when the node changes it sets a flag on the dependent nodes to mark them as dirty, including the parents of the dependent nodes all the way up to the root. After each change we run an algorithm that is much like the algorithm for constructing the dependent graph from scratch, but it skips over any clean node and reconstructs each dirty node, keeping track of whether the reconstructed node is actually different from the dirty node. This can also get tricky (a sketch of this approach follows below).

    3. We can represent the logical connection between the original graph and the dependent graph as a data structure, like a list of constraints, perhaps designed using a declarative language. When the original graph changes we need only scan the list to discover which constraints are violated and how the dependent tree needs to change to correct the violation, all encoded as data.

    4. We can reconstruct the dependent graph from scratch as though there were no existing dependent graph, and then compare the existing graph and the new graph to discover how it has changed. I'm sure this is the easiest way because I know there are algorithms available for detecting differences, but they are all quite computationally expensive and in principle it seems unnecessary, so I'm deliberately avoiding this option.

    What is the right way to deal with these sorts of problems? Surely there must be a design pattern that makes this whole thing almost easy. It would be nice to have a good solution for every problem of this general description. Does this class of problem have a name?
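    A minimal sketch of the dirty-flag scheme from option 2, assuming a generic node type; the class and method names are illustrative, and real node types would mirror the AST or CFG:

        import java.util.ArrayList;
        import java.util.List;

        // Hypothetical dependent-tree node.
        class DerivedNode {
            private boolean dirty;
            private DerivedNode parent;
            private final List<DerivedNode> children = new ArrayList<>();

            // Invoked by an observed original-tree node when it changes.
            void markDirty() {
                if (dirty) return;                       // this path to the root is already marked
                dirty = true;
                if (parent != null) parent.markDirty();  // ancestors must be revisited too
            }

            // Incremental update pass: clean subtrees are skipped wholesale,
            // since a dirty node always has a dirty path up to the root.
            void update() {
                if (!dirty) return;
                reconstruct();                           // rebuild this node from its originals
                dirty = false;
                for (DerivedNode child : children) {
                    child.update();                      // clean children return immediately
                }
            }

            // Rebuild this node from the original-tree nodes it depends on;
            // a real implementation would record whether the result changed.
            private void reconstruct() { /* derived-structure-specific */ }
        }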

    Read the article

  • Subterranean IL: Exception handling 1

    - by Simon Cooper
    Today, I'll be starting a look at the Structured Exception Handling mechanism within the CLR. Exception handling is quite a complicated business, and, as a result, the rules governing exception handling clauses in IL are quite strict; you need to be careful when writing exception clauses in IL.

    Exception handlers

    Exception handlers are specified using a .try clause within a method definition:

        .try <TryStartLabel> to <TryEndLabel> <HandlerType> handler <HandlerStartLabel> to <HandlerEndLabel>

    As an example, a basic try/catch block would be specified like so:

        TryBlockStart:
            // ...
            leave.s CatchBlockEnd
        TryBlockEnd:
        CatchBlockStart:
            // at the start of a catch block, the exception thrown is on the stack
            callvirt instance string [mscorlib]System.Object::ToString()
            call void [mscorlib]System.Console::WriteLine(string)
            leave.s CatchBlockEnd
        CatchBlockEnd:
        // method code continues...

        .try TryBlockStart to TryBlockEnd catch [mscorlib]System.Exception handler CatchBlockStart to CatchBlockEnd

    There are four different types of handler that can be specified:

    catch <TypeToken>: This is the standard exception catch clause; you specify the object type that you want to catch (for example, [mscorlib]System.ArgumentException). Any object can be thrown as an exception, although Microsoft recommend that only classes derived from System.Exception are thrown as exceptions.

    filter <FilterLabel>: A filter block allows you to provide custom logic to determine if a handler block should be run. This functionality is exposed in VB, but not in C#.

    finally: A finally block executes when the try block exits, regardless of whether an exception was thrown or not.

    fault: This is similar to a finally block, but a fault block executes only if an exception was thrown. This is not exposed in VB or C#.

    You can specify multiple catch or filter handling blocks in each .try, but fault and finally handlers must have their own .try clause. We'll look into why this is in later posts.

    Scoped exception handlers

    The .try syntax is quite tricky to use; it requires multiple labels, and you've got to be careful to keep the different exception handling sections separate. However, starting from .NET 2, IL allows you to use scope blocks to specify exception handlers instead. Using this syntax, the example above can be written like so:

        .try {
            // ...
            leave.s EndSEH
        }
        catch [mscorlib]System.Exception {
            callvirt instance string [mscorlib]System.Object::ToString()
            call void [mscorlib]System.Console::WriteLine(string)
            leave.s EndSEH
        }
        EndSEH:
        // method code continues...

    As you can see, this is much easier to write (and read!) than a stand-alone .try clause. Next time, I'll be looking at some of the restrictions imposed by SEH on control flow, and at how the C# compiler generates exception handling clauses.

    Read the article

  • Playing with F#

    - by mroberts
    Project Euler is an awesome site. When working with a new language, it can be tricky to find problems that need solving and that are more complex than "Hello World" but simpler than a full-blown application. Project Euler gives us just that: cool and thought-provoking problems that usually don't take days to solve. I've solved a number of questions with C# and some with Java. BTW, I used Java because it had BigInteger support before .Net.

    A couple weeks ago, back when winter had a firm grip on Columbus, OH, I began playing (researching) with F#. I began with Problem #1 from Project Euler, and started by looking at my solution in C#. Here is my solution in C#:

        using System;
        using System.Collections.Generic;

        namespace Problem001
        {
            class Program
            {
                static void Main(string[] args)
                {
                    List<int> values = new List<int>();

                    for (int i = 1; i < 1000; i++)
                    {
                        if (i % 3 == 0 || i % 5 == 0)
                            values.Add(i);
                    }
                    int total = 0;

                    values.ForEach(v => total += v);

                    Console.WriteLine(total);
                    Console.ReadKey();
                }
            }
        }

    Now, after much tweaking and learning, here is my solution in F#:

        open System

        let calc n =
            [1..n]
            |> List.map (fun x -> if (x % 3 = 0 || x % 5 = 0) then x else 0)
            |> List.sum

        let main() =
            calc 999
            |> printfn "result = %d"
            Console.ReadKey(true) |> ignore

        main()

    Just this little example highlights some cool things about F#:

    - Type inference. F# infers the type of a value. In the C# code above we declare a number of variables: the list and a couple of ints. F# does not require this; it infers that calc (a function) accepts an int and returns an int.
    - Great built-in functionality for lists: List.map, for example.

    BTW, I don't think I'm spilling the beans by giving away the code for Problem 1. It is by far the easiest question, IMHO, solved by 92,000+ people. Next I'll look into writing a class library with F#.

    Read the article
