Search Results

Search found 12107 results on 485 pages for 'pinned objects'.


  • Non-Dom Element Event Binding with jQuery

    Yesterday I had a short discussion with Dave Reed on Twitter regarding setting up fake events on objects that are hookable. jQuery makes it really easy to bind events on DOM elements, and with a little bit of extra work (that I didn't know about) you can also set up bindings for non-DOM element events. Assume for a second that you have a simple JavaScript object like this: var item = { sku: "wwhelp", foo: function() { alert('original foo function'); } }; and...

    Read the article

  • SQL SERVER – Disabled Index and Update Statistics

    - by pinaldave
    Have you ever come across a conversation that never gets over, continuing even though the original point of discussion has passed? I am facing the same situation in the case of the Disabled Index. Here are the links to the original conversations: SQL SERVER – Disable Clustered Index and Data Insert, where a reader had an issue with a disabled index, and SQL SERVER – Understanding ALTER INDEX ALL REBUILD with Disabled Clustered Index, where a reader asked about the effect of rebuilding indexes. The same reader asked me today: "I understood what the disabled indexes do; what is their effect on statistics? Is it true that even though indexes are disabled, they continue updating the statistics?" The answer is very interesting:

    - If the clustered index is disabled, you will not be able to update the statistics at all for any index.
    - If the clustered index is enabled and a nonclustered index is disabled, updating the statistics of the table automatically updates the statistics of ALL the indexes on the table (disabled and enabled both).

    If you are not satisfied with the answer, let us go over a simple example. I have written the necessary comments in the code itself to give a clear idea.

        USE tempdb
        GO
        -- Drop table if it exists
        IF EXISTS (SELECT * FROM sys.objects WHERE OBJECT_ID = OBJECT_ID(N'[dbo].[TableName]') AND type IN (N'U'))
        DROP TABLE [dbo].[TableName]
        GO
        -- Create table
        CREATE TABLE [dbo].[TableName](
            [ID] [int] NOT NULL,
            [FirstCol] [varchar](50) NULL
        )
        GO
        -- Insert some data
        INSERT INTO TableName
        SELECT 1, 'First'
        UNION ALL
        SELECT 2, 'Second'
        UNION ALL
        SELECT 3, 'Third'
        UNION ALL
        SELECT 4, 'Fourth'
        UNION ALL
        SELECT 5, 'Five'
        GO
        -- Create clustered index
        ALTER TABLE [TableName] ADD CONSTRAINT [PK_TableName] PRIMARY KEY CLUSTERED ([ID] ASC)
        GO
        -- Create nonclustered index
        CREATE UNIQUE NONCLUSTERED INDEX [IX_NonClustered_TableName] ON [dbo].[TableName] ([FirstCol] ASC)
        GO
        -- Check that all the indexes are enabled
        SELECT OBJECT_NAME(OBJECT_ID), Name, type_desc, is_disabled
        FROM sys.indexes
        WHERE OBJECT_NAME(OBJECT_ID) = 'TableName'
        GO

    Now let us update the statistics of the table and check the statistics update date.

        -- Update the stats of the table
        UPDATE STATISTICS TableName WITH FULLSCAN
        GO
        -- Check statistics last updated datetime
        SELECT name AS index_name, STATS_DATE(OBJECT_ID, index_id) AS StatsUpdated
        FROM sys.indexes
        WHERE OBJECT_ID = OBJECT_ID('TableName')
        GO

    Now let us disable the indexes and check in sys.indexes that they are disabled.

        -- Disable nonclustered index
        ALTER INDEX [IX_NonClustered_TableName] ON [dbo].[TableName] DISABLE
        GO
        -- Disable clustered index
        ALTER INDEX [PK_TableName] ON [dbo].[TableName] DISABLE
        GO
        -- Check that all the indexes are disabled
        SELECT OBJECT_NAME(OBJECT_ID), Name, type_desc, is_disabled
        FROM sys.indexes
        WHERE OBJECT_NAME(OBJECT_ID) = 'TableName'
        GO

    Let us try to update the statistics of the table.

        -- Update the stats of the table
        UPDATE STATISTICS TableName WITH FULLSCAN
        GO
        /*
        -- The above operation should throw the following error:
        Msg 1974, Level 16, State 1, Line 1
        Cannot perform the specified operation on table 'TableName'
        because its clustered index 'PK_TableName' is disabled.
        */

    When we try to update the statistics, it throws an error because the clustered index is disabled. Now let us rebuild the clustered index only and attempt to update the statistics of the table right after that.

        -- Rebuild the clustered index only
        ALTER INDEX [PK_TableName] ON [dbo].[TableName] REBUILD
        GO
        -- Check the status of all the indexes
        SELECT OBJECT_NAME(OBJECT_ID), Name, type_desc, is_disabled
        FROM sys.indexes
        WHERE OBJECT_NAME(OBJECT_ID) = 'TableName'
        GO
        -- Check statistics last updated datetime
        SELECT name AS index_name, STATS_DATE(OBJECT_ID, index_id) AS StatsUpdated
        FROM sys.indexes
        WHERE OBJECT_ID = OBJECT_ID('TableName')
        GO
        -- Update the stats of the table
        UPDATE STATISTICS TableName WITH FULLSCAN
        GO
        -- Check statistics last updated datetime
        SELECT name AS index_name, STATS_DATE(OBJECT_ID, index_id) AS StatsUpdated
        FROM sys.indexes
        WHERE OBJECT_ID = OBJECT_ID('TableName')
        GO

    We can clearly see that even though the nonclustered index is disabled, its statistics are also updated. If you do not need a nonclustered index, I suggest you drop it, as keeping disabled indexes is an overhead on your system: every time the statistics are updated for the table, the statistics for the disabled indexes are also updated.

        -- Clean up
        DROP TABLE [TableName]
        GO

    The complete script is given below for easy reference.

        USE tempdb
        GO
        -- Drop table if it exists
        IF EXISTS (SELECT * FROM sys.objects WHERE OBJECT_ID = OBJECT_ID(N'[dbo].[TableName]') AND type IN (N'U'))
        DROP TABLE [dbo].[TableName]
        GO
        -- Create table
        CREATE TABLE [dbo].[TableName](
            [ID] [int] NOT NULL,
            [FirstCol] [varchar](50) NULL
        )
        GO
        -- Insert some data
        INSERT INTO TableName
        SELECT 1, 'First'
        UNION ALL
        SELECT 2, 'Second'
        UNION ALL
        SELECT 3, 'Third'
        UNION ALL
        SELECT 4, 'Fourth'
        UNION ALL
        SELECT 5, 'Five'
        GO
        -- Create clustered index
        ALTER TABLE [TableName] ADD CONSTRAINT [PK_TableName] PRIMARY KEY CLUSTERED ([ID] ASC)
        GO
        -- Create nonclustered index
        CREATE UNIQUE NONCLUSTERED INDEX [IX_NonClustered_TableName] ON [dbo].[TableName] ([FirstCol] ASC)
        GO
        -- Check that all the indexes are enabled
        SELECT OBJECT_NAME(OBJECT_ID), Name, type_desc, is_disabled
        FROM sys.indexes
        WHERE OBJECT_NAME(OBJECT_ID) = 'TableName'
        GO
        -- Update the stats of the table
        UPDATE STATISTICS TableName WITH FULLSCAN
        GO
        -- Check statistics last updated datetime
        SELECT name AS index_name, STATS_DATE(OBJECT_ID, index_id) AS StatsUpdated
        FROM sys.indexes
        WHERE OBJECT_ID = OBJECT_ID('TableName')
        GO
        -- Disable nonclustered index
        ALTER INDEX [IX_NonClustered_TableName] ON [dbo].[TableName] DISABLE
        GO
        -- Disable clustered index
        ALTER INDEX [PK_TableName] ON [dbo].[TableName] DISABLE
        GO
        -- Check that all the indexes are disabled
        SELECT OBJECT_NAME(OBJECT_ID), Name, type_desc, is_disabled
        FROM sys.indexes
        WHERE OBJECT_NAME(OBJECT_ID) = 'TableName'
        GO
        -- Update the stats of the table (this throws error Msg 1974)
        UPDATE STATISTICS TableName WITH FULLSCAN
        GO
        -- Rebuild the clustered index only
        ALTER INDEX [PK_TableName] ON [dbo].[TableName] REBUILD
        GO
        -- Check the status of all the indexes
        SELECT OBJECT_NAME(OBJECT_ID), Name, type_desc, is_disabled
        FROM sys.indexes
        WHERE OBJECT_NAME(OBJECT_ID) = 'TableName'
        GO
        -- Check statistics last updated datetime
        SELECT name AS index_name, STATS_DATE(OBJECT_ID, index_id) AS StatsUpdated
        FROM sys.indexes
        WHERE OBJECT_ID = OBJECT_ID('TableName')
        GO
        -- Update the stats of the table
        UPDATE STATISTICS TableName WITH FULLSCAN
        GO
        -- Check statistics last updated datetime
        SELECT name AS index_name, STATS_DATE(OBJECT_ID, index_id) AS StatsUpdated
        FROM sys.indexes
        WHERE OBJECT_ID = OBJECT_ID('TableName')
        GO
        -- Clean up
        DROP TABLE [TableName]
        GO

    Reference: Pinal Dave (http://blog.SQLAuthority.com)

    Read the article

  • Creating ground in a 2D runner game

    - by user739711
    It may be a repetitive question, but I could not find any specific answer to my query: how do you create slanted/curved ground in a 2D runner game? The user will see a side view, like the old game "Mario". If I use a tile-based map I can have only rectangular objects. What is the best way to create tilted ground? Should I use a tile-based map, or should I define corner points in the map and create the ground programmatically (one sketch of this approach follows below)? And what are the difficulties in creating curved ground?
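    One hedged sketch of the corner-point idea, assuming nothing about any particular engine (the GroundStrip type and HeightAt method are invented for illustration): store a surface height at each tile-column edge and interpolate between neighbouring samples. Linear interpolation gives slanted ground; swapping in a spline gives curves.

        using System;

        // Sketch: sloped ground from per-tile-edge height samples.
        // GroundStrip/HeightAt are illustrative names, not engine API.
        public class GroundStrip
        {
            private readonly float[] edgeHeights; // one height per tile edge
            private readonly float tileWidth;

            public GroundStrip(float[] edgeHeights, float tileWidth)
            {
                this.edgeHeights = edgeHeights;
                this.tileWidth = tileWidth;
            }

            // Linear interpolation between the two edges bracketing x
            // produces a slope; replace with a spline for curved ground.
            public float HeightAt(float x)
            {
                int i = (int)(x / tileWidth);
                i = Math.Max(0, Math.Min(i, edgeHeights.Length - 2));
                float t = (x - i * tileWidth) / tileWidth;
                return edgeHeights[i] + (edgeHeights[i + 1] - edgeHeights[i]) * t;
            }
        }

    The renderer can still draw rectangular tiles; only collision and character placement need to consult HeightAt.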

    Read the article

  • Multitask Like a Pro with AquaSnap

    - by Matthew Guay
    Are you tired of shuffling back and forth between windows?  Here's a handy app that can help you keep all of your windows organized and accessible. AquaSnap is a great free utility that helps you use multiple windows at the same time easily and efficiently.  One of Windows 7's greatest new features is Aero Snap, which lets you easily view windows side by side by simply dragging them to the side of your screen.  After using Windows 7 for the past year, Aero Snap is one of the features we really miss when using older versions of Windows. With AquaSnap, you now have all of the features of Aero Snap and more in Windows 2000, XP, Vista, and of course Windows 7.  Not only does it give you Aero Snap features, but AquaSnap also gives you more control over your windows to make you more productive. Getting Started AquaSnap is a free download for Windows 2000, XP, Vista, and 7.  Download the small installer (link below) and install it with the default settings. AquaSnap automatically runs as soon as it is installed, and you will notice a new icon in your system tray. Now you can go ahead and put it to use.  Drag a window to any edge or corner of your desktop, and you will see an icon showing what part of the screen the window will cover. Dragging it to the side of the screen expands the window to fill the right half of the screen, just like the default Aero Snap in Windows 7.  You can drag the window away to restore it to its former size. AquaSnap works on any corner of the screen too, so you can have 4 windows side-by-side.  We already have 3 windows snapped to the corners, and notice that we're dragging a fourth window to the bottom right corner. You can also snap windows to the bottom and top of the screen.  Here we have Word snapped to the bottom half of the screen, and we're dragging Chrome to the top. You can even snap internal windows in Multiple Document Interface (MDI) programs such as Excel.  Here we are snapping a workbook in Excel to the left to view 2 workbooks side-by-side.   Additionally, AquaSnap lets you keep any window always on top.  Simply shake any window, and it will turn semi-transparent and stay on top of all other windows.  Notice the transparent calculator here on top of Excel. All of AquaSnap's features work great in Windows 2000, XP, and Vista too.  Here we are snapping IE6 to the left of the screen in XP. Here are 3 windows snapped to the sides in XP.  You can mix the snap modes and have, for instance, two windows on the right side and one window on the left.  This is a great way to maximize productivity if you need more space in one of the windows. Even AquaShake works to keep a window transparent and on top in XP. Settings AquaSnap has a detailed settings dialog where you can tweak it to work exactly like you want.  Simply right-click on its icon in the taskbar, and select Settings. From the first screen, you can choose if you want AquaSnap to start with Windows, and if you want it to show an icon in the system tray.  If you turn off the system tray icon, you can access the AquaSnap settings from Start > All Programs > AquaSnap > Configuration (or simply search for Configuration in Vista or Windows 7). The second tab in settings lets you choose what you want each snapping region to do.  You can also choose two other presets, including AeroSnap (which works just like the default Aero Snap in Windows 7) and AquaSnap simple (which only snaps at the edges of the screen, not the corners).
The third tab lets you increase or decrease the opacity of pinned windows when using AquaShake, and also lets you increase or decrease the shaking sensitivity.  Additionally, if you prefer the standard AeroShake functionality, which minimizes all other open windows when you shake a window, you can choose that too. The fourth tab lets you activate an optional feature, AquaGlass.  If you activate this, it will make windows turn transparent when you drag them across the screen.   Finally, the last tab lets you change the color and opacity of the preview rectangle, or simply turn it off. Or, if you want to temporarily turn AquaSnap off, simply right-click on its icon and select Off.  In Windows 7, turning off AquaSnap will restore your standard Windows Aero Snap functionality, and in other versions of Windows it will stop letting you snap windows at all.  You can then repeat the steps and select On when you want to use AquaSnap again. Conclusion AquaSnap is a handy tool to make you more productive at your computer.  With a wide variety of useful features, there's something here for everyone.  Download AquaSnap

    Read the article

  • Top 10 Tips & Tricks for Oracle SQL Developer

    - by thatjeffsmith
    Being a short week due to the holiday, and with everyone enjoying their Summer vacations (apologies Southern Hemispherians), I reckoned it was a great time to do one of those lazy recap-Top 10-Reader’s Digest type posts. I’ve been sharing 1-3 tips or ‘tricks’ a week since I started blogging about SQL Developer, and I have more than enough content to write a book. But since I’m lazy, I’m just going to compile a list of my favorite ‘must know’ tips instead. I always have to leave out a few tips when I do my presentations, so now I can refer back to this list to make sure I’m not forgetting anything. So without further ado… 1. Configure Your Preferences Yes, there are a LOT of options. But you don’t need to worry about all of them just yet. I do recommend you take a quick look at these ones in particular. Whether you’re new to the tool or have been using it for 5 years, don’t overlook these settings! 2. Disable Extensions You Aren’t Using If you’re not using Data Miner, or if you’re not working on a Migration – disable those extensions! SQL Developer will run leaner & meaner, plus the user interface will be a bit more simplified making the tool easier to navigate as well. 3. SQL Recall via Keyboard Access your history via the keyboard! Cycle through your recent SQL statements just using these magic key strokes! Ctrl+Up or Ctrl+Down. 4. Format Your Query Output Directly to CSV, XML, HTML, etc Have the query results pre-formatted in the format of your choice! Too lazy to run the Export wizard for your query result sets? Just add the SQL Developer output hints to your statement and have the output auto-magically formatted to the style of your choice! 5. Drag & Drop Multiple Tables to the Worksheet SQL Developer will auto-join the related objects. You can then toggle over to the Query Builder to toggle off the columns you don’t want to query. I guarantee this tip will save you time if you’re joining 3 or more tables! 6. Drag & Drop Multiple Tables to a Relational Model A pretty picture is worth a few dozen DDL scripts? SQL Developer does data modeling! If you ctrl-drag a table to a model, it will take that table and any related tables and reverse engineer them to a relational model! You can then print it out or export it to HTML, PDF, etc. 7. View Your PL/SQL Execution Output Automatically Function returns a refcursor? Procedure had 3 out parameters? When you run these programs via the Procedure Editor, we automatically capture the output and place them into one or more data grids for you to browse. 8. Disable Automatic Code Insight and Use It On-Demand Code Editor – Completion Insight – Enable Completion Auto-Popup (Keyword being Auto) Some folks really don’t like it when their IDEs or word-processors try to do ‘too much’ for them. Thankfully SQL Developer allows you to either increase the delay before it attempts to auto-complete your text OR to disable the automatic bit. Instead, you can invoke it on-demand. 9. Interactive Debugging – Change Your Variable Values as You Step Through Your PLSQL Watches aren’t just for watching. You can actually interact with your programs and ‘see what happens’ when X = 256 instead of 1. 10. Ditch the Tree View for the Schema Browser There’s nothing wrong with the Connection tree for browsing your database objects. But some folks just can’t seem to get comfortable with it. So, we built them a Schema Browser that uses a drop down control instead for changing up your schema and object types. Already Know This Stuff, Want More? 
Just check out my SQL Developer resource page; it's one of the main links at the top of this page. Or if you can't find something, just drop me a note in the form of a comment on this page and I'll do my best to find it or write it for you.

    Read the article

  • Pygame surface rotation, rect rotation or sprite rotation?

    - by Alan
    I seem to have a conceptual misunderstanding of the surface and rect objects in pygame. I currently view these objects this way:
    - Surface: just the loaded image.
    - rect: the 'hard' representation of the in-game object (sprite), used for simplifying object movement and collision detection.
    - sprite: rect and surface grouped together.
    What I want to do is rotate my sprite. The only available method I found for rotation is pygame.transform.rotate. How do I rotate the rectangle, or even better, the whole sprite? Below is the image of how I visualize this problem.

    Read the article

  • Lessons from a SAN Failure

    - by Bill Graziano
    At 1:10AM Sunday morning the main SAN at one of my clients suffered a “partial” failure.  Partial means that the SAN was still online and functioning but the LUNs attached to our two main SQL Servers “failed”.  Failed means that SQL Server wouldn’t start and the MDF and LDF files mostly showed a zero file size.  But they were online and responding and most other LUNs were available.  I’m not sure how SANs know to fail at 1AM on a Saturday night but they seem to.  From a personal standpoint this worked out poorly: I was out with friends and after more than a few drinks.  From a work standpoint this was about the best time to fail you could imagine.  Everything was running well before Monday morning.  But it was a long, long Sunday.  I started tipsy, got tired and ended up hung over later in the day. Note to self: Try not to go out drinking right before the SAN fails. This caught us at an interesting time.  We’re in the process of migrating to an entirely new set of servers so some things were partially moved.  This made it difficult to follow our procedures as cleanly as we’d like.  The benefit was that we had much better documentation of everything on the server.  I would encourage everyone to really think through the process of implementing your DR plan and document as much as possible.  Following a checklist is much easier than trying to remember at night under pressure in a hurry after a few drinks. I had a series of estimates on how long things would take.  They were accurate for any single server failure.  They weren’t accurate for a SAN failure that took two servers down.  This wasn’t bad but we should have communicated better. Don’t forget how many things are outside the database.  Logins, linked servers, DTS packages (yikes!), jobs, service broker, DTC (especially DTC), database triggers and any objects in the master database are all things you need backed up.  We’d done a decent job on this and didn’t find significant problems here.  That said this still took a lot of time.  There were many annoyances as a result of this.  Small settings like a login’s default database had a big impact on whether an application could run.  This is probably the single biggest area of concern when looking to recreate a server.  I’d encourage everyone to go through every single node of SSMS and look for user created objects or settings outside the database. Script out your logins with the proper SID and already encrypted passwords and keep it updated.  This makes life so much easier.  I used an approach based on KB246133 that worked well.  I’ll get my scripts posted over the next few days. The disaster can cause your DR process to fail in unexpected ways.  We have a job that scripts out all logins and role memberships and writes it to a file.  This runs on the DR server and pulls from the production server.  Upon opening the file I found that the contents were a “server not found” error.  Fortunately we had other copies and didn’t need to try and restore the master database.  This now runs on the production server and pushes the script to the DR site.  Soon we’ll get it pushed to our version control software. One of the biggest challenges is keeping your DR resources up to date.  Any server change (new linked server, new SQL Server Agent job, etc.) means that your DR plan (and scripts) is out of date.  It helps to automate the generation of these resources if possible. Take time now to test your database restore process.  We test ours quarterly.  
If you have a large database I'd also encourage you to invest in a compressed backup solution.  Restoring backups was the single largest consumer of time during our recovery. And yes, there's a database mirroring solution planned in our new architecture. I didn't have much involvement in things outside SQL Server but this caused many, many things to change in our environment.  Many applications today aren't just executables or web sites.  They are a combination of those plus network infrastructure, reports, network ports, IP addresses, DTS and SSIS packages, batch systems and many other things.  These all needed a little bit of attention to make sure they were functioning properly. Profiler turned out to be a handy tool.  I started a trace for failed logins and kept that running.  That let me fix a number of problems before people were able to report them.  I also ran traces to capture exceptions.  This helped identify problems with linked servers. Overall the thing that gave me the most problems was linked servers.  In order for a linked server to function properly you need to be pointed to the right server, have the proper login information, have the network routes available and have MSDTC configured properly.  We have a lot of linked servers and this created many failure points.  Some of the older linked servers used IP addresses and not DNS names.  This meant we had to go in and touch all those linked servers when the servers moved.

    Read the article

  • Approximating walking physics via simpler sliding physics

    - by Dave
    I am modeling walking insects. I implement them as cuboids and use forces (including friction and drag) to control motion. However, the movement characteristics of this 'sliding box' physics don't match those of a legged creature. For example, legged creatures near-instantly accelerate to their top speed, whereas applying a force to a box takes time to accelerate it. The applied force can be increased along with the counteracting drag, giving much quicker acceleration (via force) to a max speed (via drag). However, this also means the force that creatures can exert when pushing on other objects is increased. Does anyone know of any techniques using a physics engine to cheaply model walking creatures?
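    One cheap technique (a sketch under my own assumptions, not something from the question) is to steer the body's velocity directly each step while clamping the implied impulse to a maximum "leg force", so the creature still cannot push other objects arbitrarily hard:

        using System;

        // Sketch: near-instant acceleration without an oversized force.
        // Each step, compute the impulse that would reach the desired
        // velocity immediately, then clamp it to what the legs could exert.
        public static class LeggedMotion
        {
            public static float StepVelocity(float current, float desired,
                                             float mass, float maxLegForce, float dt)
            {
                float neededImpulse = (desired - current) * mass;
                float maxImpulse = maxLegForce * dt;
                float impulse = Math.Max(-maxImpulse, Math.Min(neededImpulse, maxImpulse));
                return current + impulse / mass;
            }
        }

    With a generous maxLegForce the creature reaches top speed almost immediately, while contacts resolved by the engine still see only a bounded force.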

    Read the article

  • VSDB to SSDT Part 2 : SQL Server 2008 Server Project … with SSDT

    - by Etienne Giust
    With Visual Studio 2012 and the use of SSDT technology, there is only one type of database project: the SQL Server Database Project. With Visual Studio 2010, we used to have the SQL Server 2008 Server Project, which we used to define server-level objects, mostly logins and linked servers. A convenient wizard allowed for the creation of this type of project. It does not exist anymore. Here is how to create an equivalent of the SQL Server 2008 Server Project with Visual Studio 2012:
    1. Create a new SQL Server Database Project: it will be created empty.
    2. Create a new SQL Schema Compare (SQL menu item > Schema Compare > New Schema Comparison).
    3. As a source, select any database on the SQL server you want to mimic.
    4. Set the target to be your newly created Database Project.
    5. In the Schema Compare options (cog-like icon), Object Types pane, set the options as below. You might want to tweak those and select only the object types you want.
    6. Then run the comparison, review and select your changes, and apply them to the project.

    Read the article

  • 3D Huge mesh rendering

    - by Keyhan Asghari
    I am writing a program that takes as input a huge 3D mesh (with mostly structured, cubic-shaped elements), and I want to render it in real time, though not as real-time as a game; still, rendering speed is somewhat important. The most important point is that I don't need any special lighting or shadows. Also, the objects to render are static and do not move. I've read about ray tracing methods, but I don't know if there are any good libraries for this purpose, or whether I have to implement everything myself. Thanks a lot.

    Read the article

  • Enforce SSIS naming conventions using BI-xPress

    - by jamiet
    A long, long, long time ago (in 2006 in fact) I published a blog post entitled Suggested Best Practises and naming conventions, in which I suggested a bunch of acronyms that folks could use to prefix object names in their SSIS packages, thus allowing easier identification of those objects in log records. If you have adopted these naming conventions (and I am led to believe that a bunch of people have) then you might like to know that you can now check for adherence to these conventions using a tool called BI-xPress from Pragmatic Works. BI-xPress includes a feature called the Best Practices Analyzer that scans your packages and assesses them according to rules that you specify. In addition, Pragmatic Works have made available a collection of these rules that adheres to the naming conventions I specified in 2006. You can download this collection; however, I recommend you first read the accompanying article that demonstrates the capabilities of the Best Practices Analyzer. Pretty cool stuff. @Jamiet

    Read the article

  • To ORM or Not to ORM. That is the question…

    - by Patrick Liekhus
    UPDATE: Thanks for the feedback and comments. I have adjusted my table below with your recommendations. I had missed a point or two. I wanted to do a series on creating an entire project using the EDMX XAF code generation and the SpecFlow BDD Easy Test tools discussed in my earlier posts, but I thought it would be appropriate to start with a simple comparison and the reasoning behind why I chose to use these tools. Let's start by defining the term ORM, or Object-Relational Mapping. According to Wikipedia it is defined as the following: Object-relational mapping (ORM, O/RM, and O/R mapping) in computer software is a programming technique for converting data between incompatible type systems in object-oriented programming languages. This creates, in effect, a "virtual object database" that can be used from within the programming language. Why should you care? Basically it allows you to map your business objects in code to their persistence layer behind them. And better yet, why would you want to do this? Let me outline it in the following points:
    - Development speed. No more need to map repetitive query results to object members. Once the map is created the code is rendered for you.
    - Persistence portability. The ORM knows how to map SQL syntax specific to the persistence engine you choose. It does not matter if it is SQL Server, Oracle or another database of your choosing.
    - Standard/boilerplate code is simplified. The basic CRUD operations are consistent and use database metadata for basic operations.
    So how does this help? Well, let's compare some of the ORM tools that I have used and/or researched. I have been interested in ORM for some time now. My ORM of choice for a long time was NHibernate, and I still believe it has a strong case in some business situations. However, you have to take business considerations into account along with the law of diminishing returns. Because of these two factors, my recent activity and experience has been around DevExpress eXpress Persistent Objects (XPO). The primary reason for this is that they have the DevExpress eXpress Application Framework (XAF) that sits on top of XPO. With this added value, the data model can be created (either database first or code first) and the Web and Windows clients can be created from these maps. While out of the box they provide some simple list and detail screens, you can very easily extend and modify these to your liking. DevExpress has done a tremendous job of providing enough framework while also staying out of the way when you need to extend it. This sounds worse than it really is. What I mean by this is that if you choose to follow the DevExpress coding style and recommendations, the hooks and extension points provided allow you to do some pretty heavy lifting while not worrying about the basics. I have put together a list of the top features that I have used to compare the limited list of ORMs that I have exposure to. Again, the biggest selling point in my opinion is that XPO is just as solid as any of the other ORMs, but with the added layer of XAF they become unstoppable. And then couple that with the EDMX modeling tools and code generation, and it becomes a no-brainer.
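    As a concrete reference point for the "code first" and "attribute driven" rows in the comparison table below, here is a minimal, hedged sketch of the code-first style using Entity Framework's DbContext API; the entity and context names are invented for illustration:

        using System.Data.Entity; // EF code-first (EntityFramework package)

        // Illustrative entity, not from the article.
        public class Customer
        {
            public int Id { get; set; }     // mapped as primary key by convention
            public string Name { get; set; }
        }

        public class ShopContext : DbContext
        {
            public DbSet<Customer> Customers { get; set; }
        }

        // Usage: the ORM generates and executes the SQL for us.
        // using (var db = new ShopContext())
        // {
        //     db.Customers.Add(new Customer { Name = "Contoso" });
        //     db.SaveChanges();
        // }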
    | Designer Features                       | Entity Framework | NHibernate | Fluent w/ NHibernate | Telerik OpenAccess | DevExpress XPO | DevExpress XPO/XAF plus Liekhus Tools |
    |-----------------------------------------|------------------|------------|----------------------|--------------------|----------------|---------------------------------------|
    | Uses XML to map relationships           | -                | Yes        | -                    | -                  | -              | -                                     |
    | Visual class designer interface         | Yes              | -          | -                    | -                  | -              | Yes                                   |
    | Management integrated w/ Visual Studio  | Yes              | -          | -                    | Yes                | -              | Yes                                   |
    | Supports schema first approach          | Yes              | -          | -                    | Yes                | -              | Yes                                   |
    | Supports model first approach           | Yes              | -          | -                    | Yes                | Yes            | Yes                                   |
    | Supports code first approach            | Yes              | Yes        | Yes                  | Yes                | Yes            | Yes                                   |
    | Attribute driven coding style           | Yes              | -          | Yes                  | -                  | Yes            | Yes                                   |

    I have a very small team and limited resources with a lot of responsibilities. In order to keep up with our customers, we must rely on tools like these. We use the EDMX tool so that we can create a visual representation of the applications with our customers. Second, we rely on the code generation so that we can focus on the business problems at hand and not on whether a field is mapped correctly. This keeps us from requiring as many junior-level developers on our team. I have also worked on multiple teams where they believed in writing their own "framework". In my experience and opinion this is not the route to take unless you have a team dedicated to supporting just the framework. Each time that I have worked on custom frameworks, the framework eventually becomes old, outdated and full of "performance" enhancements specific to one or two requirements. With an ORM, there are a lot of people smarter than me working on the bigger issues of persistence and performance. Again, my recommendation would be to use an available framework and get to work on your business domain problems. If your coding is not making money for you, why are you working on it? Do you really need to be writing query-to-object-member code again and again? Thanks

    Read the article

  • Editing Routes in ASP.NET MVC

    - by imran_ku07
    Introduction: Phil Haack has written two great articles about editable routes: Editable Routes and Editable Routes Using App_Code. These articles are great, but if you don't need to unit test your routes and don't care about Application Domain restarts while editing them, then the Global.asax file is the fastest and easiest way to achieve the same thing. In this article I will use the Global.asax file instead of Global.asax.cs for defining routes, and you will also see how this whole process works.

    Description: You just need to cut (or copy) the code inside Global.asax.cs and paste it into Global.asax inside a script tag with runat="server". You can do this simply by cutting the code from Global.asax.cs and pasting it inside Global.asax. Easy and quick: now you can change Global.asax without recompiling the application.

    How this works: I think it is worth seeing what is happening here. ASP.NET uses the Global.asax file to create a class named global_asax within the ASP namespace, and places all the code in Global.asax inside that runtime-generated global_asax class:

        namespace ASP
        {
            public class global_asax : NerdDinner.MvcApplication
            {
                // Any definitions defined in Global.asax,
                // such as the Application_Start method
            }
        }

    which inherits from the class defined in the Application directive:

        <%@ Application Codebehind="Global.asax.cs" Inherits="NerdDinner.MvcApplication" Language="C#" %>

    ASP.NET creates a pool of application objects of this class, varying from 1 to 100 instances, and every incoming request is served by one of these application objects. After obtaining an application object, ASP.NET calls the application-specific events on it: Application_Start (for the first request only), Application_BeginRequest (for every request), and so on. Therefore if these methods are defined in the global_asax class, ASP.NET will call them on global_asax; if not, it will use the base-class methods that may be defined in Global.asax.cs (the concept known as shadowing or hiding).

    Summary: In this article, I showed how easily and quickly you can make your routes editable. But also note that any change in Global.asax results in an Application Domain restart, and this technique also makes your routes harder to unit test.
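    For concreteness, here is a hedged sketch of what a complete, editable Global.asax might look like for a standard ASP.NET MVC application; the route name, pattern, and defaults are illustrative, and the Inherits attribute assumes the NerdDinner-style application class from the example above:

        <%@ Application Inherits="NerdDinner.MvcApplication" Language="C#" %>
        <%@ Import Namespace="System.Web.Mvc" %>
        <%@ Import Namespace="System.Web.Routing" %>
        <script runat="server">
            // Editing this file needs no recompilation, but saving it
            // restarts the Application Domain, which re-runs Application_Start.
            protected void Application_Start()
            {
                RouteTable.Routes.MapRoute(
                    "Default",                       // route name (illustrative)
                    "{controller}/{action}/{id}",    // URL pattern
                    new { controller = "Home", action = "Index", id = "" });
            }
        </script>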

    Read the article

  • Extending ASP.NET Output Caching

    One of the most sure-fire ways to improve a web application's performance is to employ caching. Caching takes some expensive operation and stores its results in a quickly accessible location. Since its inception, ASP.NET has offered two flavors of caching:
    - Output Caching: caches the entire rendered markup of an ASP.NET page or User Control (http://www.asp101.com/lessons/usercontrols.asp) for a specified duration.
    - Data Caching: an API for caching objects. Using the data cache you can write code to add, remove, and retrieve items from the cache.
    Until recently, the underlying functionality of these two caching mechanisms was fixed - both cached data
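    As a quick illustration of the two flavors described above, here is a hedged sketch using the classic ASP.NET APIs; the page class and helper method are invented for illustration:

        using System;
        using System.Web;
        using System.Web.Caching;
        using System.Web.UI;

        // Output caching caches the rendered markup declaratively:
        //   <%@ OutputCache Duration="60" VaryByParam="None" %>
        // Data caching stores arbitrary objects programmatically:
        public partial class ProductsPage : Page   // illustrative page class
        {
            protected void Page_Load(object sender, EventArgs e)
            {
                var products = HttpContext.Current.Cache["products"] as string[];
                if (products == null)
                {
                    products = LoadProductsFromDatabase(); // the expensive operation
                    HttpContext.Current.Cache.Insert(
                        "products", products, null,
                        DateTime.UtcNow.AddMinutes(10), Cache.NoSlidingExpiration);
                }
            }

            private string[] LoadProductsFromDatabase()
                => new[] { "Widget", "Gadget" };   // stand-in for a real query
        }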

    Read the article

  • Parallelism in .NET – Part 8, PLINQ’s ForAll Method

    - by Reed
    Parallel LINQ extends LINQ to Objects, and is typically very similar.  However, as I previously discussed, there are some differences.  Although the standard way to handle simple data parallelism is via Parallel.ForEach, it's possible to do the same thing via PLINQ. PLINQ adds a new method unavailable in standard LINQ which provides new functionality…

    LINQ is designed to provide a much simpler way of handling querying, including filtering, ordering, grouping, and many other benefits.  Reading the description of LINQ to Objects on MSDN, it becomes clear that the thinking behind LINQ deals with retrieval of data.  LINQ works by adding a functional programming style on top of .NET, allowing us to express filters in terms of predicate functions, for example.

    PLINQ is, generally, very similar.  Typically, when using PLINQ, we write declarative statements to filter a dataset or perform an aggregation.  However, PLINQ adds one new method which serves a very different purpose: ForAll.

    The ForAll method is defined on ParallelEnumerable, and will work upon any ParallelQuery<T>.  Unlike the sequence operators in LINQ and PLINQ, ForAll is intended to cause side effects.  It does not filter a collection, but rather invokes an action on each element of the collection.

    At first glance, this seems like a bad idea.  For example, Eric Lippert clearly explained two philosophical objections to providing an IEnumerable<T>.ForEach extension method, one of which still applies when parallelized.  The sole purpose of this method is to cause side effects, and as such, I agree that the ForAll method "violates the functional programming principles that all the other sequence operators are based upon", in exactly the same manner an IEnumerable<T>.ForEach extension method would violate these principles.  Eric Lippert's second reason for disliking a ForEach extension method does not necessarily apply to ForAll: replacing ForAll with a call to Parallel.ForEach has the same closure semantics, so there is no loss there.

    Although ForAll may have philosophical issues, there is a pragmatic reason to include this method.  Without ForAll, we would take a fairly serious performance hit in many situations.  Often, we need to perform some filtering or grouping, then perform an action using the results of our filter.  Using a standard foreach statement to perform our action would avoid this philosophical issue:

        // Filter our collection
        var filteredItems = collection.AsParallel().Where( i => i.SomePredicate() );

        // Now perform an action
        foreach (var item in filteredItems)
        {
            // These will now run serially
            item.DoSomething();
        }

    This would cause a loss in performance, since we lose any parallelism in place, and cause all of our actions to be run serially.
    We could easily use a Parallel.ForEach instead, which adds parallelism to the actions:

        // Filter our collection
        var filteredItems = collection.AsParallel().Where( i => i.SomePredicate() );

        // Now perform an action once the filter completes
        Parallel.ForEach(filteredItems, item =>
        {
            // These will now run in parallel
            item.DoSomething();
        });

    This is a noticeable improvement, since both our filtering and our actions run parallelized.  However, there is still a large bottleneck in place here.  The problem lies with my comment "perform an action once the filter completes".  Here, we're parallelizing the filter, then collecting all of the results, blocking until the filter completes.  Once the filtering of every element is completed, we then repartition the results of the filter, reschedule into multiple threads, and perform the action on each element.  By moving this into two separate statements, we potentially double our parallelization overhead, since we're forcing the work to be partitioned and scheduled twice as many times.

    This is where the pragmatism comes into play.  By violating our functional principles, we gain the ability to avoid the overhead and cost of rescheduling the work:

        // Perform an action on the results of our filter
        collection
            .AsParallel()
            .Where( i => i.SomePredicate() )
            .ForAll( i => i.DoSomething() );

    The ability to avoid the scheduling overhead is a compelling reason to use ForAll.  This really goes back to one of the key points I discussed in data parallelism: partition your problem in a way that places the most work possible into each task.  Here, this means leaving the statement attached to the expression, even though it causes side effects and is not standard usage for LINQ.

    This leads to my one guideline for using ForAll: The ForAll extension method should only be used to process the results of a parallel query, as returned by a PLINQ expression. Any other usage scenario should use Parallel.ForEach instead.

    Read the article

  • Building a Distributed Commerce Infrastructure in the Cloud using Azure and Commerce Server

    - by Lewis Benge
    One of the biggest questions I routinely get asked is how scalable Commerce Server is. Of course the textbook answer is that the product has been around for 10 years and powers some of the largest e-Commerce websites in the world, so it scales horizontally extremely well. One counterargument, however, is: what if you can't predict the growth in demand on your Commerce Platform, or you need the ability to scale up during busy seasons (such as Christmas in a retail environment) but are hesitant to maintain that infrastructure on a year-round basis? The obvious answer is to utilise the many elastic cloud infrastructure providers that are establishing themselves in the ever-growing market; the problem, however, is that Commerce Server is still a product with a legacy, tightly coupled dependency on Windows and IIS components.

    Commerce Server 2009 codename "R2", however, introduced the concept of an n-tier deployment of Microsoft Commerce Server, meaning you are no longer tied to the core objects API but instead have serializable Commerce Entity objects and business logic, allowing Commerce Server to be built into a WCF-based SOA architecture. Presentation layers no longer need to remain on the same physical machine as the application server, meaning you can now build the user experience in multiple technologies and host it in multiple places, leveraging the transport benefits that a WCF service may bring, such as message queuing, security, and multiple endpoints.

    All of this logic will still need to remain in your internal infrastructure, for two reasons. Firstly, cloud-based computing infrastructure does not support PCI security requirements, and secondly, even though many of the legacy Commerce Server dependencies have been abstracted away in this version of the application, it is still not fully supported to be deployed exclusively into the cloud. If you do wish to benefit from the scalability of the cloud, however, you can still achieve a great Commerce Server and Azure setup by utilising both the Azure AppFabric, in terms of the service bus and authentication services, and Windows Azure to host any online presence you may require. The architecture would be something similar to this:

    This setup would allow you to construct your Commerce Services as part of your on-site infrastructure. These services would contain all of the channel's custom business logic and provide the overall interface back into the underlying Commerce Server components. It is recommended that services are constructed around the specific business domains of the application, which based on your business model would usually consist of separate services around Catalogue, Orders, Search, Profiles, and Marketing. The AppFabric service bus is then used to abstract and aggregate the services further, making them available to the cloud and subsequently secured by AppFabric's authentication services. These services are now available for consumption by any client, using any supported technology, not just .NET. This means you are now able to construct apps for iPhone, integrate with Java-based POS devices, and cover many other potential uses. This aggregation is useful, and forms the basis of the further strategy around diversifying and enhancing the e-Commerce experience, but it also provides the foundation for the scalability we want to gain from utilising a cloud-based application platform.
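    To make the domain-oriented services idea concrete, here is a minimal, hedged sketch of one such contract in WCF; the service, operation, and DTO names are invented for illustration and are not Commerce Server API:

        using System.Runtime.Serialization;
        using System.ServiceModel;

        // One domain-oriented service from the proposed architecture,
        // exposed through the AppFabric service bus to any client stack.
        [ServiceContract]
        public interface ICatalogService
        {
            [OperationContract]
            ProductSummary GetProduct(string sku);

            [OperationContract]
            ProductSummary[] Search(string keywords);
        }

        // Serializable DTO in the spirit of the Commerce Entity objects
        // (illustrative shape, not an actual Commerce Server type).
        [DataContract]
        public class ProductSummary
        {
            [DataMember] public string Sku { get; set; }
            [DataMember] public string Name { get; set; }
            [DataMember] public decimal ListPrice { get; set; }
        }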
    The Windows Azure application platform is Microsoft's solution for benefiting from the true economies of scale and elasticity of the cloud. Just before the launch of the Azure platform, Domino's Pizza actually managed to run their whole Super Bowl operation on the scalability of Windows Azure, simply switching back to their traditional operation the next day with no residual infrastructure costs. The platform can also natively subscribe to services and messages exposed within the AppFabric service bus, making it an ideal solution for building and deploying a presentation layer that needs a scalable infrastructure, such as a high-demand, public-facing e-Commerce portal or the promotional element of a brand.

    Windows Azure has excellent support for ASP.NET, including its own caching providers, meaning expensive operations such as catalogue queries can persist in memory on the application server, reducing the demand on internal infrastructure and prioritising it for more business-critical operations such as receiving orders and processing payments. Windows Azure also supports other languages, meaning that with this approach you could technically build a Commerce Server presentation layer in Java, PHP, or Ruby, or equally in ASP.NET or Silverlight, without having to change any of the underlying business or Commerce Server implementation.

    This SOA-style architecture is one of the primary differentiators for Commerce Server as a product in the e-Commerce market, and now with the introduction of WCF capability in Commerce Server 2009/2009 R2 the opportunities for extensibility of both the user experience and integration with third parties are drastically increased, all with no effect on the underlying channel logic. So if you are looking at deployment options for your e-Commerce application to help support demand in a cost-effective way, I would highly recommend you consider looking at Windows Azure, and if you have any questions in particular about this style of deployment, please feel free to get in touch!

    Read the article

  • Static vs Singleton in C# (Difference between Singleton and Static)

    - by Jalpesh P. Vadgama
    Recently I came across a question: what is the difference between Static and Singleton classes? So I thought it would be a good idea to share a blog post about it. Difference between Static and Singleton classes:
    - A singleton class allows the creation of only a single instance of a particular class. That instance can be treated as a normal object: you can pass it to a method as a parameter, or call the class's methods on it. A static class can have only static methods, and you cannot pass a static class as a parameter.
    - We can implement interfaces with a singleton class, while we cannot implement interfaces with static classes.
    - We can clone the object of a singleton class; we cannot clone a static class.
    - Singleton objects are stored on the heap, while static classes are stored on the stack.
    more at my personal blog: dotnetjalps.com
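    A minimal sketch contrasting the two in standard C# (not code from the original post; the Logger names are invented for illustration):

        using System;

        // Singleton: a normal object, so it can implement interfaces,
        // be passed as a parameter, and be cloned if you allow it.
        public sealed class Logger : ICloneable
        {
            private static readonly Logger instance = new Logger();
            private Logger() { }                      // no outside construction
            public static Logger Instance => instance;
            public void Write(string message) { /* ... */ }
            public object Clone() => MemberwiseClone();
        }

        // Static class: no instances, only static members; it cannot
        // implement interfaces or travel as an object.
        public static class StaticLogger
        {
            public static void Write(string message) { /* ... */ }
        }

        // Usage: only the singleton can be passed as a dependency.
        // void Process(Logger log) { log.Write("working"); }
        // Process(Logger.Instance);   // fine
        // Process(StaticLogger);      // does not compile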

    Read the article

  • Searching for tasks with code – Executables and Event Handlers

    Searching packages, or just enumerating through all tasks, is not quite as straightforward as it may first appear, mainly because of the way you can nest tasks within other containers. You can see this illustrated in the sample package below, where I have used several sequence containers and loops. To complicate this further, all container types, including packages and tasks, can have event handlers, which can then support the full range of nested containers again. Towards the lower right, the task called SQL In FEL also has an event handler (not shown), within which is another Execute SQL Task, so that makes a total of 6 Execute SQL Tasks spread across the package.

    In my previous post about adding a property expression, I kept it simple and just looked at tasks at the package level, but what if you wanted to find any or all tasks in a package? For this post I've written a console program that will search a package looking at all tasks no matter how deeply nested, and check to see if the name starts with "SQL". When it finds a matching task it writes out the hierarchy by name for that task, starting with the package and working down to the task itself. The output for our sample package is shown below; note it has found all 6 tasks, including the one on the OnPreExecute event of the SQL In FEL task.

        TaskSearch v1.0.0.0 (1.0.0.0)
        Copyright (C) 2009 Konesans Ltd

        Processing File - C:\Projects\Alpha\Packages\MyPackage.dtsx

        MyPackage\FOR Counter Loop\SQL In Counter Loop
        MyPackage\SEQ For Each Loop Wrapper\FEL Simple Loop\SQL In FEL
        MyPackage\SEQ For Each Loop Wrapper\FEL Simple Loop\SQL In FEL\OnPreExecute\SQL On Pre Execute for FEL SQL Task
        MyPackage\SEQ Top Level\SEQ Nested Lvl 1\SEQ Nested Lvl 2\SQL In Nested Lvl 2
        MyPackage\SEQ Top Level\SEQ Nested Lvl 1\SQL In Nested Lvl 1 #1
        MyPackage\SEQ Top Level\SEQ Nested Lvl 1\SQL In Nested Lvl 1 #2

        6 matching tasks found in package.

    The full project and code is available for download below, but first we can walk through the project to highlight the most important sections of code. This code has been abbreviated for this description, but is complete in the download. First of all we load the package, and then start by looking at the Executables for the package.

        // Load the package file
        Application application = new Application();
        using (Package package = application.LoadPackage(filename, null))
        {
            int matchCount = 0;

            // Look in the package's executables
            ProcessExecutables(package.Executables, ref matchCount);

            ...

            // Write out final count
            Console.WriteLine("{0} matching tasks found in package.", matchCount);
        }

    The ProcessExecutables method is a key method, as an executable could be described as the highest level of working functionality or container. There are several types of executables, such as tasks, or sequence containers and loops. To know what to do next we need to work out what type of executable we are dealing with, as the abbreviated version of the method shows below.

        private static void ProcessExecutables(Executables executables, ref int matchCount)
        {
            foreach (Executable executable in executables)
            {
                TaskHost taskHost = executable as TaskHost;
                if (taskHost != null)
                {
                    ProcessTaskHost(taskHost, ref matchCount);
                    ProcessEventHandlers(taskHost.EventHandlers, ref matchCount);
                    continue;
                }

                ...

                ForEachLoop forEachLoop = executable as ForEachLoop;
                if (forEachLoop != null)
                {
                    ProcessExecutables(forEachLoop.Executables, ref matchCount);
                    ProcessEventHandlers(forEachLoop.EventHandlers, ref matchCount);
                    continue;
                }
            }
        }

    As you can see, if the executable we find is a task we then call out to our ProcessTaskHost method. As with all of our executables, a task can have event handlers which themselves contain more executables, such as tasks and loops, so we also call out to our ProcessEventHandlers method. The other types of executables, such as loops, can also have event handlers as well as executables. As shown with the example for the ForEachLoop, we call the same ProcessExecutables and ProcessEventHandlers methods again to drill down into the hierarchy of objects that the package may contain. This code needs to explicitly check for each type of executable (TaskHost, Sequence, ForLoop and ForEachLoop) because whilst they all have an Executables property, this is not from a common base class or interface.

    This example just finds a task by its name, so ProcessTaskHost really does just that. We also get the hierarchy of objects so we can write it out for information; obviously you can adapt this method to do something more interesting, such as adding a property expression.

        private static void ProcessTaskHost(TaskHost taskHost, ref int matchCount)
        {
            if (taskHost == null)
            {
                return;
            }

            // Check if the task matches our match name
            if (taskHost.Name.StartsWith(TaskNameFilter, StringComparison.OrdinalIgnoreCase))
            {
                // Build up the full object hierarchy of the task
                // so we can write it out for information
                StringBuilder path = new StringBuilder();
                DtsContainer container = taskHost;
                while (container != null)
                {
                    path.Insert(0, container.Name);
                    container = container.Parent;
                    if (container != null)
                    {
                        path.Insert(0, "\\");
                    }
                }

                // Write the task path
                // e.g. Package\Container\Event\Task
                Console.WriteLine(path);
                Console.WriteLine();

                // Increment match counter for info
                matchCount++;
            }
        }

    Just for completeness, the other processing method we covered above is for event handlers, but really that just calls back to the executables. This same method is called in our main package method, but it was omitted for brevity here.

        private static void ProcessEventHandlers(DtsEventHandlers eventHandlers, ref int matchCount)
        {
            foreach (DtsEventHandler eventHandler in eventHandlers)
            {
                ProcessExecutables(eventHandler.Executables, ref matchCount);
            }
        }

    As hopefully the code demonstrates, executables (Microsoft.SqlServer.Dts.Runtime.Executable) are the workers, but within them you can nest more executables (except for tasks). Executables themselves can have event handlers, which can in turn hold more executables. I have tried to illustrate these relationships in the following diagram.

    Download: Sample code project TaskSearch.zip (11KB)

    Read the article

  • Tom Kyte seminar in Budapest, April 19-20, 2010

    - by Fekete Zoltán
    You can still register for Tom Kyte's 2010 Budapest seminar through the links found here: a classroom seminar in Budapest, April 19-20, 2010. Topics: The top 10 - no, 11 - new features of Oracle Database 11g; All about binding; Materialized Views, Caching; Effective Indexing; Storage Techniques; Reorganizing objects - when and how. ASK TOM!

    Read the article

  • SQL – Quick Start with Explorer Sections of NuoDB – Query NuoDB Database

    - by Pinal Dave
    This is the third post in the series of blog posts I am writing about NuoDB. NuoDB is a very innovative and easy-to-use product. I can clearly see how one can scale out NuoDB with so much ease and confidence. In my very first blog post we discussed how we can install NuoDB (link), and in my second post I discussed how we can manage the NuoDB database transaction engines and storage managers with a few clicks (link). Note: You can download NuoDB from here. In this post, we will learn how we can use the Explorer feature of NuoDB to do various SQL operations. NuoDB has a browser-based Explorer, which is very powerful and has many of the features any IDE would normally have. Let us see how it works in the following step-by-step tutorial. Let us go to the NuoDB Console by typing the following URL in your browser: http://localhost:8080/ It will bring you to the QuickStart screen. Make sure that you have created the sample database. If you have not created the sample database, click on Create Database and create it successfully. Now go to the NuoDB Explorer by clicking on the main tab; it will ask you for your domain username and password. Enter domain as the username and bird as the password. Alternatively, you can enter quickstart as both the username and the password. Once you enter the password you will be able to see the databases. In our example we have installed the sample database, hence you will see the Test database in the Database Hierarchy screen. When you click on a database it will ask for the database login. Note that the database login is different from the domain login, and you will have to enter your database login here. In our case the database username is dba and the password is goalie. Once you enter a valid username and password it will display your database. Further expand your database and you will notice various objects in it. Once you have explored the various objects, select any table and click on Open. When you click on Execute, it will display the SQL script to select the data from the table. The autogenerated script displays the entire result set from the database. The NuoDB Explorer is very powerful and makes the life of developers very easy. If you click on List SQL Statements it will list all the available SQL statements right away in the Query Editor. You can see the popup window in the following image. Here is the cool thing for geeks: you can even click on Query Plan and it will display the text-based query plan as well. In the case of a SELECT, the query plan will be much simpler; however, when we write complex queries it will be very interesting. We can use the Query Plan tab for performance tuning of the database. Here is another feature: when we click on List Tables in NuoDB Explorer, it lists all the available tables in the query editor. This is very helpful when we are writing a long, complex query. Here is a relatively complex example I have built using INNER JOIN syntax. Right below it I have displayed the Query Plan. The query plan displays all the little details related to the query. Well, we just wrote a multi-table query and executed it against the NuoDB database. You can use the NuoDB Admin section and do various analyses of the query and its performance. NuoDB is a distributed database built on a patented emergent architecture with full support for SQL and ACID guarantees.  It allows you to add Transaction Engine processes to a running system to improve the performance of your system.
    NuoDB is a distributed database built on a patented emergent architecture with full support for SQL and ACID guarantees. It allows you to add Transaction Engine processes to a running system to improve its performance, and you can also add a second Storage Manager to a running system for redundancy. Conversely, you can shut down processes when you don't need the extra database resources. NuoDB also provides developers and administrators with a single intuitive interface for centrally monitoring deployments. If you have read my blog posts and have not yet tried out NuoDB, I strongly suggest that you download it today and catch up on the learning with me. Trust me: though the product is very powerful, it is extremely easy to learn and use. Reference: Pinal Dave (http://blog.sqlauthority.com)   Filed under: Big Data, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology Tagged: NuoDB

    Read the article

  • Code excavations, wishful invocations, perimeters and domain specific unit test frameworks

    One of the talks I did at QCon London was about a subject that I've come across fairly recently, when I was building SilverUnit - a pure unit test framework for Silverlight objects that depend on the Silverlight runtime to run. It is the concept of "cogs in the machine": when your piece of code needs to run inside a host framework or runtime that you have little or no control over for testability-related matters. Examples of such cogs and machines can be: your custom control running inside Silverlight...

    Read the article

  • Working with Backing Beans in JDeveloper - The Right Way

    - by shay.shmeltzer
    One nice feature that has been in JDeveloper for a long time is the ability to automatically expose every component on your JSF page in a backing bean. While this is a nice "work-saving" feature, you shouldn't use it in most cases. The reason is that it creates objects in your backing bean code for a lot of items you don't actually need to manipulate, making your code bigger and more complex to maintain. The right way of working is to expose only the components you need in your backing bean - and JDeveloper makes this just as easy through the Binding property in the Property Inspector and its Edit option, as sketched in the example below. Here is a quick video showing you how to do that:
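    For illustration, here is a minimal sketch of what such a hand-trimmed backing bean might look like; the class and property names are assumptions for the example, not code generated by JDeveloper.

        // A hedged sketch: a backing bean exposing only the one component we
        // actually manipulate, bound from the page via
        // binding="#{myPageBean.nameField}". Names are illustrative assumptions.
        import oracle.adf.view.rich.component.rich.input.RichInputText;

        public class MyPageBean {
            private RichInputText nameField;

            public void setNameField(RichInputText nameField) {
                this.nameField = nameField;
            }

            public RichInputText getNameField() {
                return nameField;
            }

            // Example use: programmatically clear and repaint just this field.
            public void reset() {
                nameField.setValue(null);
                oracle.adf.view.rich.context.AdfFacesContext.getCurrentInstance()
                    .addPartialTarget(nameField);
            }
        }

    Everything else on the page stays out of the bean, which keeps the class small and focused on the components the code actually touches.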

    Read the article

  • Never Call Me at Work [Humorous Star Wars Video]

    - by Asian Angel
    Have you ever had one of those days when someone close to you calls at the worst possible time? See what happens when this stormtrooper’s wife calls him while he is at work above Tatooine! Needless to say, Darth Vader is in a “less than forgiving” mood… Never Call Me At Work [YouTube]

    Read the article

  • Ray picking - get direction from pitch and yaw

    - by Isaac Waller
    I am attempting to cast a ray from the center of the screen and check for collisions with objects. When rendering, I use these calls to set up the camera:

        GL11.glRotated(mPitch, 1, 0, 0);
        GL11.glRotated(mYaw, 0, 1, 0);
        GL11.glTranslated(mPositionX, mPositionY, mPositionZ);

    I am having trouble creating the ray, however. This is the code I have so far:

        ray.origin = new Vector(mPositionX, mPositionY, mPositionZ);
        ray.direction = new Vector(?, ?, ?);

    My question is: what should I put in the question mark spots? That is, how can I create the ray direction from the pitch and yaw? Any help would be much appreciated!
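    As a hedged sketch rather than the poster's accepted answer: with the glRotated order shown above, the camera looks down -Z before any rotation, and one common convention for recovering the world-space view direction is the following. The axis signs depend on your coordinate conventions, so verify against your scene.

        // A sketch under the assumption that the camera looks down -Z before
        // rotation, matching glRotated(mPitch, 1,0,0) then glRotated(mYaw, 0,1,0).
        // Vector is assumed to be the same class used for ray.origin above.
        double pitch = Math.toRadians(mPitch);
        double yaw = Math.toRadians(mYaw);
        double dx = Math.sin(yaw) * Math.cos(pitch);
        double dy = -Math.sin(pitch);
        double dz = -Math.cos(yaw) * Math.cos(pitch);
        // The result is a rotated unit vector, so no normalization is needed.
        ray.direction = new Vector(dx, dy, dz);
        // Note: if mPositionX/Y/Z are the raw values passed to glTranslated,
        // the camera's world-space position is their negation; if so, negate
        // them when building ray.origin as well.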

    Read the article

  • 2d movement solution

    - by Phil
    Hi! I'm making a simple top-down tank game on the iPad where the user controls the movement of the tank with the left "joystick" and the rotation of the turret with the right one. I've spent several hours just trying to get it to work decently, but now I turn to the pros :) I have two reference objects, one for the movement and one for the rotation. The reference objects always stay at most two units away from the tank, and I use them to tell the tank in what direction to move. I chose this approach to decouple the movement and rotation behaviour from the raw input of the joysticks; I believe this will make it simpler to implement whatever behaviour I want for the tank.

    My first problem is that the turret rotates the long way to the target. By this I mean that the target can be -5 degrees away in rotation, and still it rotates 355 degrees instead of -5 degrees. I can't figure out why. The other problem is with the movement: it just doesn't feel right to have the tank turn while moving. I'd like a solution that works as well for the AI as for the player - a black-box function for the movement where the caller only specifies in what direction the tank should move, and it moves there under the constraints imposed on it. I am using the standard Joystick class found in the Unity iPhone package.

    This is the code I'm using for the movement:

        public class TankFollow : MonoBehaviour
        {
            // Check angle difference and turn accordingly
            public GameObject followPoint;
            public float speed;
            public float turningSpeed;

            void Update()
            {
                transform.position = Vector3.Slerp(transform.position, followPoint.transform.position, speed * Time.deltaTime);

                // Calculate the angle difference to the follow point
                var forwardA = transform.forward;
                var forwardB = (followPoint.transform.position - transform.position);
                var angleA = Mathf.Atan2(forwardA.x, forwardA.z) * Mathf.Rad2Deg;
                var angleB = Mathf.Atan2(forwardB.x, forwardB.z) * Mathf.Rad2Deg;
                var angleDiff = Mathf.DeltaAngle(angleA, angleB);

                if (angleDiff > 5)
                {
                    transform.Rotate(new Vector3(0, (-turningSpeed * Time.deltaTime), 0));
                }
                else if (angleDiff < 5)
                {
                    transform.Rotate(new Vector3(0, (turningSpeed * Time.deltaTime), 0));
                }

                transform.position = new Vector3(transform.position.x, 0, transform.position.z);
            }
        }

    And this is the code I'm using to rotate the turret:

        void LookAt()
        {
            var forwardA = -transform.right;
            var forwardB = (toLookAt.transform.position - transform.position);
            var angleA = Mathf.Atan2(forwardA.x, forwardA.z) * Mathf.Rad2Deg;
            var angleB = Mathf.Atan2(forwardB.x, forwardB.z) * Mathf.Rad2Deg;
            var angleDiff = Mathf.DeltaAngle(angleA, angleB);

            if (angleDiff - 180 > 1)
            {
                transform.Rotate(new Vector3(0, (turretSpeed * Time.deltaTime), 0));
            }
            else if (angleDiff - 180 < -1)
            {
                transform.Rotate(new Vector3(0, (-turretSpeed * Time.deltaTime), 0));
            }
        }

    Since I want the turret reference point to turn in relation to the tank (when you rotate the body, the turret should follow and not stay locked on, since that makes it impossible to control when you've got only two thumbs to work with), I've made the TurretFollowPoint a child of the Turret object, which in turn is a child of the body. I'm thinking that I'm making it too difficult for myself with the reference points, but I imagine it's a good idea - please be honest about this point. I'll be grateful for any help I can get! I'm using Unity3D iPhone. Thanks!
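    One common way to get a shortest-path rotation in Unity is to let Mathf.MoveTowardsAngle handle the wrap-around instead of comparing the Mathf.DeltaAngle result against 180. The following is a minimal sketch under the same toLookAt and turretSpeed names as above, not the poster's final solution.

        // A hedged sketch of shortest-path turret rotation. Mathf.DeltaAngle
        // already returns the signed shortest difference in [-180, 180], so
        // comparing it against 180 sends the turret the long way around;
        // Mathf.MoveTowardsAngle steps along the short way and never overshoots.
        // This assumes the turret's forward axis is +Z; the -transform.right in
        // the snippet above suggests your model may need a 90-degree offset.
        void LookAtShortestWay()
        {
            Vector3 toTarget = toLookAt.transform.position - transform.position;
            float targetYaw = Mathf.Atan2(toTarget.x, toTarget.z) * Mathf.Rad2Deg;
            float newYaw = Mathf.MoveTowardsAngle(transform.eulerAngles.y, targetYaw, turretSpeed * Time.deltaTime);
            transform.rotation = Quaternion.Euler(0f, newYaw, 0f);
        }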

    Read the article

< Previous Page | 210 211 212 213 214 215 216 217 218 219 220 221  | Next Page >