Search Results

Search found 22257 results on 891 pages for 'let binding'.


  • VLOOKUP in Excel, part 2: Using VLOOKUP without a database

    - by Mark Virtue
    In a recent article, we introduced the Excel function called VLOOKUP and explained how it could be used to retrieve information from a database into a cell in a local worksheet.  In that article we mentioned that there were two uses for VLOOKUP, and only one of them dealt with querying databases.  In this article, the second and final in the VLOOKUP series, we examine this other, lesser known use for the VLOOKUP function. If you haven’t already done so, please read the first VLOOKUP article – this article will assume that many of the concepts explained in that article are already known to the reader. When working with databases, VLOOKUP is passed a “unique identifier” that serves to identify which data record we wish to find in the database (e.g. a product code or customer ID).  This unique identifier must exist in the database, otherwise VLOOKUP returns us an error.  In this article, we will examine a way of using VLOOKUP where the identifier doesn’t need to exist in the database at all.  It’s almost as if VLOOKUP can adopt a “near enough is good enough” approach to returning the data we’re looking for.  In certain circumstances, this is exactly what we need. We will illustrate this article with a real-world example – that of calculating the commissions that are generated on a set of sales figures.  We will start with a very simple scenario, and then progressively make it more complex, until the only rational solution to the problem is to use VLOOKUP.  The initial scenario in our fictitious company works like this:  If a salesperson creates more than $30,000 worth of sales in a given year, the commission they earn on those sales is 30%.  Otherwise their commission is only 20%.  So far this is a pretty simple worksheet: To use this worksheet, the salesperson enters their sales figures in cell B1, and the formula in cell B2 calculates the correct commission rate they are entitled to receive, which is used in cell B3 to calculate the total commission that the salesperson is owed (which is a simple multiplication of B1 and B2). The cell B2 contains the only interesting part of this worksheet – the formula for deciding which commission rate to use: the one below the threshold of $30,000, or the one above the threshold.  This formula makes use of the Excel function called IF.  For those readers that are not familiar with IF, it works like this: IF(condition,value if true,value if false) Where the condition is an expression that evaluates to either true or false.  In the example above, the condition is the expression B1<B5, which can be read as “Is B1 less than B5?”, or, put another way, “Are the total sales less than the threshold”.  If the answer to this question is “yes” (true), then we use the value if true parameter of the function, namely B6 in this case – the commission rate if the sales total was below the threshold.  If the answer to the question is “no” (false), then we use the value if false parameter of the function, namely B7 in this case – the commission rate if the sales total was above the threshold. As you can see, using a sales total of $20,000 gives us a commission rate of 20% in cell B2.  If we enter a value of $40,000, we get a different commission rate: So our spreadsheet is working. Let’s make it more complex.  Let’s introduce a second threshold:  If the salesperson earns more than $40,000, then their commission rate increases to 40%: Easy enough to understand in the real world, but in cell B2 our formula is getting more complex.  
If you look closely at the formula, you’ll see that the third parameter of the original IF function (the value if false) is now an entire IF function in its own right.  This is called a nested function (a function within a function).  It’s perfectly valid in Excel (it even works!), but it’s harder to read and understand. We’re not going to go into the nuts and bolts of how and why this works, nor will we examine the nuances of nested functions.  This is a tutorial on VLOOKUP, not on Excel in general. Anyway, it gets worse!  What about when we decide that if they earn more than $50,000 then they’re entitled to 50% commission, and if they earn more than $60,000 then they’re entitled to 60% commission? Now the formula in cell B2, while correct, has become virtually unreadable.  No-one should have to write formulae where the functions are nested four levels deep!  Surely there must be a simpler way? There certainly is.  VLOOKUP to the rescue! Let’s redesign the worksheet a bit.  We’ll keep all the same figures, but organize it in a new way, a more tabular way: Take a moment and verify for yourself that the new Rate Table works exactly the same as the series of thresholds above. Conceptually, what we’re about to do is use VLOOKUP to look up the salesperson’s sales total (from B1) in the rate table and return to us the corresponding commission rate.  Note that the salesperson may have indeed created sales that are not one of the five values in the rate table ($0, $30,000, $40,000, $50,000 or $60,000).  They may have created sales of $34,988.  It’s important to note that $34,988 does not appear in the rate table.  Let’s see if VLOOKUP can solve our problem anyway… We select cell B2 (the location we want to put our formula), and then insert the VLOOKUP function from the Formulas tab: The Function Arguments box for VLOOKUP appears.  We fill in the arguments (parameters) one by one, starting with the Lookup_value, which is, in this case, the sales total from cell B1.  We place the cursor in the Lookup_value field and then click once on cell B1: Next we need to specify to VLOOKUP what table to lookup this data in.  In this example, it’s the rate table, of course.  We place the cursor in the Table_array field, and then highlight the entire rate table – excluding the headings: Next we must specify which column in the table contains the information we want our formula to return to us.  In this case we want the commission rate, which is found in the second column in the table, so we therefore enter a 2 into the Col_index_num field: Finally we enter a value in the Range_lookup field. Important:  It is the use of this field that differentiates the two ways of using VLOOKUP.  To use VLOOKUP with a database, this final parameter, Range_lookup, must always be set to FALSE, but with this other use of VLOOKUP, we must either leave it blank or enter a value of TRUE.  When using VLOOKUP, it is vital that you make the correct choice for this final parameter. To be explicit, we will enter a value of true in the Range_lookup field.  It would also be fine to leave it blank, as this is the default value: We have completed all the parameters.  We now click the OK button, and Excel builds our VLOOKUP formula for us: If we experiment with a few different sales total amounts, we can satisfy ourselves that the formula is working. Conclusion In the “database” version of VLOOKUP, where the Range_lookup parameter is FALSE, the value passed in the first parameter (Lookup_value) must be present in the database.  
In other words, we're looking for an exact match. But in this other use of VLOOKUP, we are not necessarily looking for an exact match.  In this case, "near enough is good enough".  But what do we mean by "near enough"?  Let's use an example:  When searching for a commission rate on a sales total of $34,988, our VLOOKUP formula will return us a value of 30%, which is the correct answer.  Why did it choose the row in the table containing 30%?  What, in fact, does "near enough" mean in this case?  Let's be precise: When Range_lookup is set to TRUE (or omitted), VLOOKUP will look in column 1 and match the highest value that is not greater than the Lookup_value parameter. It's also important to note that for this system to work, the table must be sorted in ascending order on column 1! If you would like to practice with VLOOKUP, the sample file illustrated in this article can be downloaded from here.
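
    To make the two approaches concrete, here is a rough sketch of the formulas involved. The cell layout is an assumption for illustration only and may not match the screenshots: suppose the rate table occupies A5:B9, with the thresholds ($0, $30,000, $40,000, $50,000, $60,000) in A5:A9 and the rates (20% to 60%) in B5:B9, and the sales total is in B1.

    Nested IF approach (four levels deep):

        =IF(B1<A6, B5, IF(B1<A7, B6, IF(B1<A8, B7, IF(B1<A9, B8, B9))))

    VLOOKUP approach with an approximate match (Range_lookup TRUE or omitted; the table sorted ascending on column 1):

        =VLOOKUP(B1, $A$5:$B$9, 2, TRUE)

    Both formulas return 30% for a sales total of $34,988; the VLOOKUP version simply picks the largest threshold that is not greater than B1.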

    Read the article

  • Windows FAT/NTFS Low-Level Disk Viewer (Norton DiskEdit alternative)

    - by Synetech inc.
    Hi, One of my most valuable software tools has always been Norton DiskEdit from Norton Utilities/Symantec SystemWorks. I have used it for years for so many things, including, but not limited to, learning file systems. I am now in search of a similar tool that can let me view FAT and NTFS disks at a low and high level under Windows. I have seen Runtime Software's DiskExplorer, but unfortunately it is limited in a number of ways; particularly annoying is that it does not really let you view the disk as structured (it does not let you see directory entries from a fragmented directory, as DiskEdit can, without doing an exhaustive scan - which doesn't even work for FAT partitions that have cluster sizes <4KB). Does anyone know of a Windows alternative to Norton DiskEdit? Thanks a lot.

    Read the article

  • ArcGIS–Getting the Legend Labels out

    - by Avner Kashtan
    Working with ESRI's ArcGIS package, especially the WPF API, can be confusing. There's the REST API, the SOAP APIs, and the WPF classes themselves, which expose some web service calls and information, but not everything. With all that, it can be hard to find specific features between the different options. Some functionality is handed to you on a silver platter, while some is maddeningly hard to implement. Today, for instance, I was working on adding a Legend control to my map-based WPF application, to explain the different symbols that can appear on the map. In ESRI's own map-editing tools the legend shows the fields that make up the symbology, but the Legend control supplied out of the box by ESRI, while very pretty, is unfortunately missing the option to display the names of those fields. Luckily, the WPF controls have a lot of templating/extensibility points that allow you to specify the layout of each field:

        <esri:Legend>
            <esri:Legend.MapLayerTemplate>
                <DataTemplate>
                    <TextBlock Text="{Binding Layer.ID}"/>
                </DataTemplate>
            </esri:Legend.MapLayerTemplate>
        </esri:Legend>

    but that only replicates the same built-in behavior. I could now add any additional fields I liked, but unfortunately I couldn't find them as part of the Layer, GraphicsLayer or FeatureLayer definitions. This is where ESRI's lack of organization is noticeable, since I can see this data easily when accessing the ArcGIS Server's web interface, but I had no idea how to find it as part of the built-in classes. Is it a part of Layer? Of LayerInfo? Of the LayerDefinition class that exists only in the SOAP service? As it turns out, neither. Since these fields are used by the symbol renderer to determine which symbol to draw, they're actually a part of the layer's Renderer. Since I already had a MyFeatureLayer class derived from FeatureLayer that added extra functionality, I could just add this property to it:

        public string LegendFields
        {
            get
            {
                if (this.Renderer is UniqueValueRenderer)
                {
                    return (this.Renderer as UniqueValueRenderer).Field;
                }
                else if (this.Renderer is UniqueValueMultipleFieldsRenderer)
                {
                    var renderer = this.Renderer as UniqueValueMultipleFieldsRenderer;
                    return string.Join(renderer.FieldDelimiter, renderer.Fields);
                }
                else return null;
            }
        }

    For my scenario, all of my layers used symbology derived from a single field or, as in the examples above, from several of them. The renderer even kindly supplied me with the comma to separate the fields with. Now it was a simple matter to get the Legend control in line – assuming that it was bound to a collection of MyFeatureLayer:

        <esri:Legend>
            <esri:Legend.MapLayerTemplate>
                <DataTemplate>
                    <StackPanel>
                        <TextBlock Text="{Binding Layer.ID}"/>
                        <TextBlock Text="{Binding Layer.LegendFields}" Margin="10,0,0,0" FontStyle="Italic"/>
                    </StackPanel>
                </DataTemplate>
            </esri:Legend.MapLayerTemplate>
        </esri:Legend>

    and get the look I wanted – the list of fields below the layer name, indented.

    Read the article

  • Adventures in MVVM – ViewModel Location and Creation

    - by Brian Genisio's House Of Bilz
    More Adventures in MVVM

    In this post, I am going to explore how I prefer to attach ViewModels to my Views.  I have published the code to my ViewModelSupport project on CodePlex in case you'd like to see how it works, along with some examples.

    Some History

    My approach to View-First ViewModel creation has evolved over time.  I have constructed ViewModels in code-behind.  I have instantiated ViewModels in the resources section of the view.  I have used Prism to resolve ViewModels via Dependency Injection.  I have created attached properties that use Dependency Injection containers underneath.  With all of these approaches, I continue to find issues in either composability, blendability or maintainability.  Laurent Bugnion came up with a pretty good approach in the MVVM Light Toolkit with his ViewModelLocator, but as John Papa points out, it has maintenance issues.  John paired up with Glenn Block to make the ViewModelLocator more generic by using MEF to compose ViewModels.  It is a great approach, but I don't like baking specific resolution technologies into the ViewModelSupport project. I bring these people up, not to name drop, but to give them credit for the place I finally landed in my journey to resolve ViewModels.  I have come up with my own version of the ViewModelLocator that is both generic and container agnostic.  The solution is blendable, configurable and simple to use.  Use any resolution mechanism you want: MEF, Unity, Ninject, Activator.CreateInstance, lookup tables, new, whatever.

    How to use the locator

    1. Create a class to contain your resolution configuration:

        public class YourViewModelResolver : IViewModelResolver
        {
            private YourFavoriteContainer container = new YourFavoriteContainer();

            public YourViewModelResolver()
            {
                // Configure your container
            }

            public object Resolve(string viewModelName)
            {
                return container.Resolve(viewModelName);
            }
        }

    Examples of doing this are on CodePlex for MEF, Unity and Activator.CreateInstance.

    2. Create your ViewModelLocator with your custom resolver in App.xaml:

        <VMS:ViewModelLocator x:Key="ViewModelLocator">
            <VMS:ViewModelLocator.Resolver>
                <local:YourViewModelResolver />
            </VMS:ViewModelLocator.Resolver>
        </VMS:ViewModelLocator>

    3. Hook up your data context whenever you want a ViewModel (WPF):

        <Border DataContext="{Binding YourViewModelName, Source={StaticResource ViewModelLocator}}">

    This example uses dynamic properties on the ViewModelLocator and passes the name to your resolver to figure out how to compose it.

    4. What about Silverlight?

    Good question.  You can't bind to dynamic properties in Silverlight 4 (crossing my fingers for Silverlight 5), but you CAN use string indexing:

        <Border DataContext="{Binding [YourViewModelName], Source={StaticResource ViewModelLocator}}">

    But, as John Papa points out in his article, there is a silly bug in Silverlight 4 (as of this writing) that will call into the indexer 6 times when it binds.  While this is little more than a nuisance when getting most properties, it can be much more of an issue when you are resolving ViewModels six times.  If this gets in your way, the solution (as pointed out by John) is to use an IndexConverter (instantiated in App.xaml and also included in the project):

        <Border DataContext="{Binding Source={StaticResource ViewModelLocator}, Converter={StaticResource IndexConverter}, ConverterParameter=YourViewModelName}">

    It is a bit uglier than the WPF version (this method will also work in WPF if you prefer), but it is still not all that bad.
    Conclusion

    This approach works really well (I suppose I am a bit biased).  It allows for composability from any mechanism you choose.  It is blendable (consider serving up different objects in Design Mode if you wish... or different constructors… whatever makes sense to you).  It works in Cider.  It is configurable.  It is flexible.  It is the best way I have found to manage View-First ViewModel hookups.  Thanks to the guys mentioned in this article for getting me to something I love using.  Enjoy.

    Read the article

  • Fibonacci numbers in F#

    - by BobPalmer
    As you may have gathered from some of my previous posts, I've been spending some quality time at Project Euler.  Normally I do my solutions in C#, but since I have also started learning F#, it only made sense to switch over to F# to get my math coding fix. This week's post is just a small snippet - specifically, a simple function to return a Fibonacci number given its place in the sequence.  One popular example uses recursion:

        let rec fib n =
            if n < 2 then 1
            else fib (n-2) + fib (n-1)

    While this is certainly elegant, the naive double recursion is absolutely brutal on performance.  So I decided to spend a little time and find an option that achieved the same functionality, but used a tail-recursive function instead.  And since this is F#, I wanted to make sure I did it without the use of any mutable variables. Here's the solution I came up with:

        let rec fib n1 n2 c =
            if c = 1 then
                n2
            else
                fib n2 (n1+n2) (c-1);;

        let GetFib num =
            (fib 1 1 num);;

        printfn "%A" (GetFib 1000);;

    Essentially, this function works through the sequence moving forward, passing the two most recent numbers and a counter to the recursive calls until it has achieved the desired number of iterations.  At that point, it returns the latest Fibonacci number. Enjoy!

    Read the article

  • Window borders missing - gtk-window-decorator segmentation fault

    - by Balakrshnan Ramakrishnan
    I have been using Ubuntu for about 1 year now and I got a problem just two days ago. Suddenly I started experiencing a problem with the window borders (the title bar with the close, maximize, and minimize buttons). The problem:

    1. The window borders disappear.
    2. I run "gtk-window-decorator --replace".
    3. For about 20 seconds everything is back to normal.
    4. Then the problem returns.

    I searched over the Internet and found that my problem is similar to what is specified in this bug report: https://bugs.launchpad.net/ubuntu/+source/compiz/+bug/814091 This bug report says the fix was released. I updated everything using the Update Manager, but the problem still remains. Can anyone let me know whether the problem is actually fixed? If yes, can you please let me know how to get the fix? I have already tried the normal replace/reset commands like:

        unity --reset
        unity --replace
        compiz --replace

    The "window borders" plugin is on in CCSM (CompizConfig Settings Manager) and it points to "gtk-window-decorator". I use Ubuntu 11.10 on an Intel Core2Duo T6500 with an AMD Mobility Radeon HD 4300 graphics card. If you need more information, please let me know.

    Read the article

  • How to start a task before networking?

    - by user1252434
    I've written an upstart task that modifies /etc/network/interfaces. (Actually a file sourced into it.) Which start on condition do I need to declare to let my task run before any networking jobs? I've tried start on starting networking, but that's apparently too late. When I log in after booting I can see that the changes were written, but obviously they are not used: the new config states a static IP, but the boot process waits for a non-existing DHCP server (old config) to time out. I've also tried start on starting network-interface INTERFACE=eth0, which didn't work either. IIRC there was an error in the log that the change couldn't be written. Background: I need a VM template that can be cloned and the clones configured through a script. Among other settings, I need to give them a static IP address to access them from the host. I use guestfish to write a config file to one of the virtual disks and let a script apply these settings to the system. I don't want that disk to contain an actual system settings file. I can't modify /etc directly, because that disk is shared (copy-on-write/diff) among the clones and guestfish apparently doesn't support that type of image. I could also let them use DHCP and setup a server that assigns IP by MAC, but I'm afraid of the complexity. I could also add just another virtual disk for configuration files, but if possible I'd prefer to store settings directly on the system disk image. Used software: Ubuntu Server 12.04, VirtualBox. The configuration modifier is a self written ruby script.
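
    For reference, the job file being described would look roughly like the sketch below. The file name, script path and the start on condition are illustrative only; choosing the right condition is exactly the open question here:

        # /etc/init/rewrite-interfaces.conf  (name is made up)
        description "apply cloned-VM settings to /etc/network/interfaces"
        task

        # conditions tried so far, both apparently too late or failing:
        #   start on starting networking
        #   start on starting network-interface INTERFACE=eth0
        start on starting networking

        exec /usr/local/sbin/apply-network-config.rb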

    Read the article

  • SQL SERVER – DMV sys.dm_exec_describe_first_result_set_for_object – Describes the First Result Metadata for the Module

    - by pinaldave
    Here is another interesting follow up blog post of SQL SERVER – sp_describe_first_result_set New System Stored Procedure in SQL Server 2012. While I was writing the earlier blog post I had come across the DMV sys.dm_exec_describe_first_result_set_for_object as well. I found that SQL Server 2012 provides many of these quick new features which we often miss learning about, and when someone later demonstrates them to us, we are surprised. DMV sys.dm_exec_describe_first_result_set_for_object returns a result set which describes the columns used in the stored procedure. Here is a quick example. Let us first create a stored procedure.

        USE [AdventureWorks]
        GO
        ALTER PROCEDURE [dbo].[CompSP]
        AS
        SELECT [DepartmentID] id
              ,[Name] n
              ,[GroupName] gn
        FROM [HumanResources].[Department]
        GO

    Now let us run the following two queries against the DMV, which give us the metadata description of the stored procedure passed as a parameter.

    Option 1: Pass the second parameter @include_browse_information as 0.

        SELECT * FROM sys.dm_exec_describe_first_result_set_for_object
            (OBJECT_ID('[dbo].[CompSP]'), 0) AS Table1
        GO

    Option 2: Pass the second parameter @include_browse_information as 1.

        SELECT * FROM sys.dm_exec_describe_first_result_set_for_object
            (OBJECT_ID('[dbo].[CompSP]'), 1) AS Table1
        GO

    Here is the result of Option 1 and Option 2. If you look at the results, there is absolutely no difference between them; both result sets return the column names as they are aliased in the stored procedure. Scroll to the right, however, and you will notice a clear difference in some columns: in the second result set, source_database, source_schema and a few other columns report the original table instead of NULL values. When @include_browse_information is set to 1, the DMV also provides the column details of the underlying table. I have just discovered this DMV; I have yet to use it in production code and find out where exactly I will use it. Do you have any idea? Does anything come to mind where this DMV could be helpful? Reference: Pinal Dave (http://blog.sqlauthority.com)
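
    One way this DMV might be put to work, sketched here only as an idea (source_database and source_schema are shown in the results above; source_table and source_column are further browse columns per the SQL Server 2012 documentation, and the filter is illustrative): with @include_browse_information set to 1 you can trace each column of a procedure's first result set back to its base table, and flag the ones that cannot be traced back, for example computed or constant columns:

        SELECT name, system_type_name, source_schema, source_table, source_column
        FROM sys.dm_exec_describe_first_result_set_for_object
            (OBJECT_ID('[dbo].[CompSP]'), 1)
        WHERE source_column IS NULL;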

    Read the article

  • SQL SERVER – Introduction to PERCENT_RANK() – Analytic Functions Introduced in SQL Server 2012

    - by pinaldave
    SQL Server 2012 introduces the new analytic function PERCENT_RANK(). This function returns the relative standing of a value within a query result set or partition. It is difficult to explain this in words alone, so I'd like to explain it through a brief example. Instead of creating a new table, I will be using the AdventureWorks sample database, as most developers use that for experimental purposes. Now let's have fun with the following query:

        USE AdventureWorks
        GO
        SELECT SalesOrderID, OrderQty,
               RANK() OVER (ORDER BY SalesOrderID) Rnk,
               PERCENT_RANK() OVER (ORDER BY SalesOrderID) AS PctDist
        FROM Sales.SalesOrderDetail
        WHERE SalesOrderID IN (43670, 43669, 43667, 43663)
        ORDER BY PctDist DESC
        GO

    The above query will give us the following result. Now let us understand the result set. You will notice that I have also included the RANK() function in this query. The reason is that PERCENT_RANK() in fact uses RANK() to find the relative standing within the query. The formula for PERCENT_RANK() is as follows:

        PERCENT_RANK() = (RANK() – 1) / (Total Rows – 1)

    If you want to read more about this function, read here. Now let us attempt the same example with the PARTITION BY clause:

        USE AdventureWorks
        GO
        SELECT SalesOrderID, OrderQty, ProductID,
               RANK() OVER (PARTITION BY SalesOrderID ORDER BY ProductID) Rnk,
               PERCENT_RANK() OVER (PARTITION BY SalesOrderID ORDER BY ProductID) AS PctDist
        FROM Sales.SalesOrderDetail s
        WHERE SalesOrderID IN (43670, 43669, 43667, 43663)
        ORDER BY PctDist DESC
        GO

    You will notice that the same logic is followed in the resulting result set. I now have a quick question for you – how many of you knew the logic/formula of PERCENT_RANK() before this blog post? Reference: Pinal Dave (http://blog.SQLAuthority.com)
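
    As a quick worked illustration of that formula (made-up numbers, not the AdventureWorks output): in a partition of 4 rows with ranks 1, 2, 3 and 4,

        rank 1: (1 - 1) / (4 - 1) = 0
        rank 2: (2 - 1) / (4 - 1) = 0.333...
        rank 3: (3 - 1) / (4 - 1) = 0.666...
        rank 4: (4 - 1) / (4 - 1) = 1

    so PERCENT_RANK() is always 0 for the first row and 1 for the last row of a partition.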

    Read the article

  • Part 3: Customization Strategy or how long does it take

    - by volker.eckardt(at)oracle.com
    The previous part in this blog should have made us aware that many procedures are required to manage all these steps. To review your status, let me ask you a question:

    What is your Customization Strategy?

    Your answer might be something like: 'Customization strategy? Well, we have standards and we get requirement documents approved.'

    Let me ask you another question:

    How long does it take to redeploy all your customizations into a fresh installation?

    In 90% of all installations the answer to this question would be: we can't! Although no one would (hopefully) have to do it, just thinking about it makes us recognize that today we have too many manual steps involved, different procedures and sometimes (undocumented) manual steps to complete a customization installation. And ... in general too many customizations.

    Why is working with customizations often so complicated and time consuming? Here are the key reasons as I have identified them in my projects:

    - Customization standards defined, but not maintained
    - Different knowledge on the developer side (results get an individual developer touch)
    - No need to automate deployment (not forced by the client)
    - Different documentation styles, not easy to hand over to someone else
    - Different development concepts, difficult for maintenance
    - Just the minimum present for testing, often positive testing only
    - Deviations from naming conventions accepted, although defined
    - Complicated procedures, therefore sometimes partially ignored
    - And last but not least, hand-made version control (still)

    If you had to 'redeploy all your customizations' you would have to:

    - Follow all your own standards and best practice
    - Track deviations and define corrective tasks
    - Automate as much as possible, minimize manual tasks
    - Not allow any change coming in without version control
    - Utilize products to support you in deployment
    - Minimize hand-made scripts and extensive documentation
    - Review regularly used techniques to guarantee that all are in line with the current release and also easily maintainable
    - Create solution libraries and force the team to contribute and reuse
    - Define quality activities and execute them
    - Define a procedure to release customizations

    I know, it is easy to write down, but much harder to manage. I will provide some guidelines in my next blog. Volker

    Read the article

  • HTML5 game engine for a 2D or 2.5D RPG style "map walk"

    - by stargazer
    Please help me choose an HTML5 game engine or JavaScript libraries. I want to do the following in the game:

    - When the game starts, a part of a huge map (full size of the map: about 7 screens) is shown. The map itself is completely designed in the editor mapeditor.org (or in some comparable editor - if you know a good alternative to mapeditor.org, let me know) and loaded at runtime or at design time.
    - The game engine should support loading of isometric maps (in the worst case, orthogonal maps alone will be sufficient); both the "tile layer" and the "object layer" from mapeditor.org should be supported.
    - Scrolling/performance of this map should be fast enough.
    - The map and the game should be either in 2D (orthogonal map) or in 2.5D (isometric map).
    - The game engine should support movement of sprites with animation. Let's say I have a sprite for "human" with animation sequences showing "walking" in 8 directions - it should be possible to import it into the game engine and have it "walk" on the map without writing a lot of JavaScript code.
    - Automatic scrolling of the map when the "human" nears the screen border.
    - Collision detection, "solid" objects. mapeditor.org supports properties on tiles. Let's say I assign a "solid" property to some tiles in the editor. It should be easy to check this "solid" property in the game engine and implement a kind of "solid" behavior, so the animated sprites do not walk through walls.
    - Collision detection - it should be easy to implement some custom functionality like "when sprite A is close to sprite B - call this function".
    - Showing "dialogs" or popup windows on top of the map should be easy to implement.
    - Cross-browser audio support (it is implemented quite well in Construct 2 from Scirra, so I'm looking for comparable audio quality).

    The game itself is a kind of RPG but without fighting scenes and without a huge "inventory". The main character just walks on the map, discovers some things, and there are dialogs and sounds. The functionality of this example from sprite.js http://batiste.dosimple.ch/sprite.js/tests/mapeditor/map_reader.html is very close to what I'm developing. But I'm not a Javascript guru (and a very lazy guy) and would like to write even less Javascript code than in the example...

    Read the article

  • SEM & Adwords: How many clicks without a sale before I should pause a keyword

    - by Thomas Jönsson
    I wonder how many clicks I should optimally let pass through every new keyword I try in Adwords before I conclude that it's not making a profit and should be paused. It's actually four questions:

    1. At which likelihood percentile should I pause a word?
    2. How many clicks should I let through before I pause a word, for those words which do not generate any lead?
    3. How many clicks should I let through after one sale before considering the word not to be profitable?
    4. Does the likelihood of the word becoming profitable affect the above?

    Conditions:
    - The clicks are normally distributed. (correct?)
    - A CR of 1% is break even, everything above is profit (1 sale/100 clicks = break even)
    - Cost per click (CPC) = $4
    - Margin (profit per sale) = $400
    - Payback time = 1 year
    - Average clicks per word = 0.333 per day (121 2/3 per year)

    Example: After 1 click and no sale the keyword still has a high probability of being profitable. After 500 clicks and no sale it has almost no likelihood of being profitable and should probably be paused. Thanks in advance!
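
    As an illustration only (not an answer to the four questions, and treating each click as an independent trial rather than using the normal distribution stated above): if a keyword's true conversion rate were exactly the 1% break-even rate, the chance of still having no sale after N clicks is 0.99^N, so roughly

        after 100 clicks: 0.99^100 ≈ 0.37
        after 300 clicks: 0.99^300 ≈ 0.05
        after 500 clicks: 0.99^500 ≈ 0.007

    which is the kind of calculation questions 1 and 2 are asking to formalise.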

    Read the article

  • SQL SERVER – Answer – Value of Identity Column after TRUNCATE command

    - by pinaldave
    Earlier I had a conversation with a reader which almost gave me a headache. I suggest you all read it before continuing with this blog post: SQL SERVER – Reseting Identity Values for All Tables. I believe he faced this situation because he did not understand the difference between SQL SERVER – DELETE, TRUNCATE and RESEED Identity. I wrote a follow up blog post explaining the difference between them. I asked a small question in the second blog post and I received many interesting comments. Let us go over the question and its answer here one more time. Here is the scenario to set up the puzzle:

    1. Create a table with identity seed = 11
    2. Insert a value and check the identity (it will be 11)
    3. Reseed it to 1
    4. Insert a value and check the identity (it will be 2)
    5. TRUNCATE the table
    6. Insert a value and check the identity (it will be 11)

    Let us see the T-SQL script for the same.

        USE [TempDB]
        GO
        -- Create Table
        CREATE TABLE [dbo].[TestTable](
            [ID] [int] IDENTITY(11,1) NOT NULL,
            [var] [nchar](10) NULL
        ) ON [PRIMARY]
        GO
        -- Build sample data
        INSERT INTO [TestTable] VALUES ('val')
        GO
        -- Select Data
        SELECT * FROM [TestTable]
        GO
        -- Reseed to 1
        DBCC CHECKIDENT ('TestTable', RESEED, 1)
        GO
        -- Build sample data
        INSERT INTO [TestTable] VALUES ('val')
        GO
        -- Select Data
        SELECT * FROM [TestTable]
        GO
        -- Truncate table
        TRUNCATE TABLE [TestTable]
        GO
        -- Build sample data
        INSERT INTO [TestTable] VALUES ('val')
        GO
        -- Select Data
        SELECT * FROM [TestTable]
        GO
        -- Question for you Here
        -- Clean up
        DROP TABLE [TestTable]
        GO

    Now let us see the output of the three SELECT statements:

    1) First SELECT, after creating the table
    2) Second SELECT, after reseeding the table
    3) Third SELECT, after truncating the table

    The reason is simple: if the table contains an identity column, the counter for that column is reset to the seed value defined for the column. Reference: Pinal Dave (http://blog.sqlauthority.com)
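
    If you want to watch the counter move between the steps above, one small addition (not part of the original script) is to check the current identity value after each insert:

        -- last identity value generated for the table
        SELECT IDENT_CURRENT('TestTable');
        -- or report it without changing it
        DBCC CHECKIDENT ('TestTable', NORESEED);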

    Read the article

  • Euclidean space and vector magnitude

    - by Starkers
    Below we have distances from the origin calculated in different ways, giving the Euclidean distance, the Manhattan distance and the Chebyshev distance. Euclidean distance is what we use to calculate the magnitude of vectors in 2D/3D games, and that makes sense to me: let's say we have a vector that gives us the range a spaceship with limited fuel can travel. If we calculated this with the Manhattan metric, our ship could travel a distance of X if it were travelling horizontally or vertically, however the second it attempted to travel diagonally it could only travel X/2! So like I say, Euclidean distance does make sense. However, I still don't quite get how we calculate 'real' distances from the vector's magnitude. Here are two points, purple at (2,2) and green at (3,3). We can subtract one point from the other to derive a vector. Let's create a vector to describe the magnitude and direction of purple from green:

        |d| = purple - green
        |d| = (purple.x, purple.y) - (green.x, green.y)
        |d| = (2, 2) - (3, 3)
        |d| = <-1, -1>

    Let's derive the magnitude of the vector via Pythagoras to get a Euclidean measurement:

        euc_magnitude = sqrt((x*x)+(y*y))
        euc_magnitude = sqrt((-1*-1)+(-1*-1))
        euc_magnitude = sqrt((1)+(1))
        euc_magnitude = sqrt(2)
        euc_magnitude = 1.41

    Now, if the answer had been 1, that would make sense to me, because 1 unit (in the direction described by the vector) from the green is bang on the purple. But it's not. It's 1.41. And 1.41 units in the direction described, to me at least, makes us overshoot the purple by almost half a unit. So what do we do to the magnitude to allow us to calculate real distances on our point graph? Worth noting I'm a beginner just working my way through theory. Haven't programmed a game in my life!
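
    Restating the arithmetic above in one line for the two points in question, and putting the three metrics mentioned at the start side by side:

        Euclidean distance = sqrt((2-3)^2 + (2-3)^2) = sqrt(2) ≈ 1.41
        Manhattan distance = |2-3| + |2-3| = 2
        Chebyshev distance = max(|2-3|, |2-3|) = 1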

    Read the article

  • Exadata X3 In-Memory Database Machine: To be or not to be

    - by Luis Moreno Campos
    Since Larry Ellison announced Oracle Exadata X3 as the new generation of the Database Machine, he has positioned the product in the In-Memory Database arena. And that annoyed some people. We all know that In-Memory Databases are the ones that *only* execute in memory and use the other layers of storage for persistency (mainly disk). Oracle Database has always been a technology that uses memory as a caching mechanism, and that hasn't changed, nor will it change with Oracle Database 12c. So this is the central point of fuss when it comes to announcing an Engineered System as an In-Memory Database, when in fact it still runs Oracle Database - not vanilla, but still the same product. Let me tell you, purist people out there: when you find no new ground-breaking point to get all excited about, you decide to bash it and go against its claims. It's not like a car manufacturer that launches a mini-van in the market and calls it a Sports Car; we are talking about a fundamental change in the ILM stack: level 2 of caching is now self-sufficient. It's not DRAM? Who cares - it still lets you put amounts of data in flash that were not possible up until now, so I guess Oracle can name it whatever Larry wants, because in the end it's something never done before. Now let's imagine that you hop on the pure In-Memory Database bandwagon. You would be stuck with a database technology that lags hundreds of light years behind the Oracle Database in man-hours of innovation and features. Do you really want to travel back in time? Remember, the first rule about time travelling is that "Security is not Guaranteed". Your choice. LMC

    Read the article

  • Green (Screen) Computing

    - by onefloridacoder
    I recently was given an assignment to create a UX where a user could use the up and down arrow keys, as well as the tab and enter keys, to move through a Silverlight DataGrid that is going to be used as part of a high-throughput data entry UI. And to be honest, I've not trapped key codes since I coded JavaScript a few years ago.  Although the frameworks I'm using made it easy, it wasn't without some trial and error.  The other thing that bothered me was that the customer tossed this into the use case as they were delivering the use case.  Fine.  I'll take a whack at anything, beat myself up and beg (I'm not beyond begging for help) the community for help to get something done if I have to. It wasn't as bad as I thought, and I figured I would hopefully save someone a few keystrokes if you wanted to build a green screen for your customer.

    Here's the ValueConverter to handle changing the strings to decimals and then back again.  The value is a nullable value type, so there are a few extra steps to take.  Usually the ConvertBack() method doesn't get addressed, but in this case we have two-way binding and the converter needs to ensure that if the user doesn't enter a value it will remain null when the value is reapplied to the model object's setter.

        using System;
        using System.Windows.Data;
        using System.Globalization;

        public class NullableDecimalToStringConverter : IValueConverter
        {
            public object Convert(object value, Type targetType, object parameter, System.Globalization.CultureInfo culture)
            {
                if (!(((decimal?)value).HasValue))
                {
                    return (decimal?)null;
                }
                if (!(value is decimal))
                {
                    throw new ArgumentException("The value must be of type decimal");
                }

                NumberFormatInfo nfi = culture.NumberFormat;
                nfi.NumberDecimalDigits = 4;

                return ((decimal)value).ToString("N", nfi);
            }

            public object ConvertBack(object value, Type targetType, object parameter, System.Globalization.CultureInfo culture)
            {
                decimal nullableDecimal;
                decimal.TryParse(value.ToString(), out nullableDecimal);

                return nullableDecimal == 0 ? null : nullableDecimal.ToString();
            }
        }

    The ConvertBack() method uses TryParse to create a value from the incoming string, so if the parse fails we get a null value back, which is what we would expect.  But while I was testing I realized that if the user types something like "2..4" instead of "2.4", TryParse will fail and still return a null.  The user is getting "puuu-lenty" of eye-candy to ensure they know how many values are affected in this particular view. Here's the XAML code.  This is the simple part: we just have a DataGrid with one column here that's bound to the appropriate ViewModel property, with the converter referenced as well.

        <data:DataGridTextColumn
            Header="On-Hand"
            Binding="{Binding Quantity,
                      Mode=TwoWay,
                      Converter={StaticResource DecimalToStringConverter}}"
            IsReadOnly="False" />

    Nothing too magical here.  Just some XAML to hook things up.  Here's the code-behind that's handling the DataGrid KeyUp event.  These are wired to a local/private method, but could be converted to something the ViewModel could use; I just need to get this working for now.

        // Wire up happens in the constructor
        this.PicDataGrid.KeyUp += (s, e) => this.HandleKeyUp(e);

        // DataGrid.BeginEdit fires when DataGrid.KeyUp fires.
        private void HandleKeyUp(KeyEventArgs args)
        {
            if (args.Key == Key.Down ||
                args.Key == Key.Up ||
                args.Key == Key.Tab ||
                args.Key == Key.Enter)
            {
                this.PicDataGrid.BeginEdit();
            }
        }

    And that's it.  The ValueConverter was the biggest problem starting out, because I was using an existing converter that didn't take nullable value types into account.  Once the converter was passing back the appropriate value (null, "#.####"), the grid cell(s) and the model objects started working as I needed them to. HTH.
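
    The Binding above assumes the converter has been registered as an application resource under the key DecimalToStringConverter; a minimal sketch of that declaration (the local xmlns prefix is an assumption) would be:

        <Application.Resources>
            <local:NullableDecimalToStringConverter x:Key="DecimalToStringConverter" />
        </Application.Resources>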

    Read the article

  • Is there a process-oriented IDE?

    - by Raveline
    My problem is simple : when I'm programming in an OO paradigm, I'm often having part of a main business process divided in many classes. Which means, if I want to examine the whole functional chain that leads to the output, for debugging or for optimization research, it can be a bit painful. So I was wondering : is there an IDE that let you put a "process tag" on functions coming from different objects, and give you a view of all those functions having the same tag ? edit : To give an example (that I'm making up completely, sorry if it doesn't sound very realistic). Let's say we have the following business process for a HR application : receive a holiday-request by an employee, check the validity of the request, then give an alert to his boss ("one of those lazy programmer wants another day off"); at the same time, let's say the boss will want to have a table of all employee's timetable during the time the employee wants his vacations; then handle the answer of the boss, send a nice little mail to the employee ("No way, lazy bones"). Even if we get rid of everything not purely business-related (mail sending process, db handling to get the useful info, GUI functionalities, and so on), we still have something that doesn't really fit in "one class". I'd like to have an IDE that would give me the opportunity to embrace quickly, at the very least : The function handling the validation of the request by the employee; The function preparing the "timetable" for the boss; The function handling the validation of the request by the boss; I wouldn't put all those functions in the same class (but perhaps that's my mistake ?). This is where my dreamed IDE could be helpful.

    Read the article

  • External table and preprocessor for loading LOBs

    - by David Allan
    I was using the COLUMN TRANSFORMS syntax to load LOBs into Oracle using the Oracle external table, which is a handy way of doing several things - from loading LOBs from the filesystem to having constants as fields. In OWB you can use unbound external tables to define an external table using your own arbitrary access parameters - I blogged a while back on this for doing preprocessing before it was added into OWB 11gR2. For loading LOBs using the COLUMN TRANSFORMS syntax, have a read through this post on loading CLOB, BLOB or any LOB: the files to load can be specified in a field that holds the filename, and the content of each file becomes the LOB data. So using the example from the linked post, you can define the columns, then define the access parameters - if you go the unbound external table route you can put whatever you want in here (your external table get-out-of-jail-free card). This will let you read the LOB files from the filesystem and use the external table in a mapping. Pushing the envelope a little further, I then thought about marrying the preprocessor together with COLUMN TRANSFORMS; this would have let me have, for example, a shell script as the preprocessor which listed the contents of a directory and let me read those files as LOBs via an external table. Unfortunately that doesn't quite work - there is now a bug/enhancement logged, so one day maybe. So I'm afraid my blog title was a little bit of a teaser....
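
    For readers who don't want to chase the link, the shape of such a definition is roughly the sketch below. Every name here is made up, and the access-parameter details should be taken from the linked post and the Oracle documentation rather than from this sketch; the interesting part is the COLUMN TRANSFORMS clause reading the LOB content from the file named in the fname field:

        CREATE TABLE docs_ext (
          doc_id NUMBER,
          fname  VARCHAR2(255),
          doc    CLOB
        )
        ORGANIZATION EXTERNAL (
          TYPE ORACLE_LOADER
          DEFAULT DIRECTORY data_dir
          ACCESS PARAMETERS (
            RECORDS DELIMITED BY NEWLINE
            FIELDS TERMINATED BY ','
            (
              doc_id CHAR(10),
              fname  CHAR(255)
            )
            COLUMN TRANSFORMS (doc FROM LOBFILE (fname) FROM (data_dir) CLOB)
          )
          LOCATION ('docs.csv')
        )
        REJECT LIMIT UNLIMITED;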

    Read the article

  • Setting up a Carousel Component in ADF Mobile

    - by Shay Shmeltzer
    The Carousel component is one of the slickier ways of showing collections of data, and on a mobile device it works really great with the finger swipe gesture. Using the Carousel component in ADF Mobile is similar to using it in regular web ADF applications, with one major change - right now you can't drag a collection from the data control palette and drop it as a carousel. So here is a quick work around for that, and details about setting up carousels in your application. First thing you'll need is a data control that returns an array of records. In my demo I'm using the Emps collection that you can get from following this tutorial. Then you drag the emps and drop it in your amx page as an ADF mobile iterator. We are doing this as a short cut to getting the right binding needed for a carousel in our page. If you look now in your page's binding you'll see something like this: You can now remark the whole iterator code in your page's source. Next let's add the carousel From the component palette drag the carousel (from the data view category) to the page. Next drag a carousel item and drop it in the nodestamp facet of the carousel. Now we'll hook up the carousel to the binding we got from the iterator - this is quite simple just copy the var and value attributes from the iterator tag to the carousel tag: var="row" value="#{bindings.emps.collectionModel}" Next drop a panelForm, or another layout panel in to the carousel item. Into that panelForm you can now drop items and bind their value property to row.attributeNames - basically copying the way it is in the fields in the iterator for example: value="#{row.hireDate}". By the way you can also copy other attributes like the label. And that's it. Your code should end up looking something like this:     <amx:carousel id="c1" var="row" value="#{bindings.emps.collectionModel}">      <amx:facet name="nodeStamp">        <amx:carouselItem id="ci1">          <amx:panelFormLayout id="pfl1">            <amx:inputText label="#{bindings.emps.hints.salary.label}" value="#{row.salary}" id="it1"/>            <amx:inputText label="#{bindings.emps.hints.name.label}" value="#{row.name}" id="it2"/>          </amx:panelFormLayout>        </amx:carouselItem>      </amx:facet>    </amx:carousel> And when you run your application it will look like this:

    Read the article

  • Alternate method to dependent, nested if statements to check multiple states

    - by octopusgrabbus
    Is there an easier way to process multiple true/false states than using nested if statements? I think there is, and it would be to create a sequence of states, and then use a function like when to determine if all states were true, and drop out if not. I am asking the question to make sure there is not a preferred Clojure way to do this.

    Here is the background of my problem: I have an application that depends on quite a few input files. The application depends on .csv data reports; column headers for each report (.csv files also), so each sequence in the sequence of sequences can be zipped together with its columns for the purposes of creating a smaller sequence; and column files for output data. I use the following functions to find out if a file is present:

        (defn kind [filename]
          (let [f (File. filename)]
            (cond
              (.isFile f) "file"
              (.isDirectory f) "directory"
              (.exists f) "other"
              :else "(cannot be found)")))

        (defn look-for [filename expected-type]
          (let [find-status (kind-stat filename expected-type)]
            find-status))

    And here are the first few lines of a multiple if which looks ugly and is hard to maintain:

        (defn extract-re-values
          "Plain old-fashioned sub-routine to process real-estate values / 3rd Q re bills extract."
          [opts]
          (if (= (utl/look-for (:ifm1 opts) "f") 0)       ; got re columns?
            (if (= (utl/look-for (:ifn1 opts) "f") 0)     ; got re data?
              (if (= (utl/look-for (:ifm3 opts) "f") 0)   ; got re values output columns?
                (if (= (utl/look-for (:ifm4 opts) "f") 0) ; got re_mixed_use_ratio columns?
                  (let [re-in-col-nams (first (utl/fetch-csv-data (:ifm1 opts)))
                        re-in-data (utl/fetch-csv-data (:ifn1 opts))
                        re-val-cols-out (first (utl/fetch-csv-data (:ifm3 opts)))
                        mu-val-cols-out (first (utl/fetch-csv-data (:ifm4 opts)))
                        chk-results (utl/chk-seq-len re-in-col-nams (first re-in-data) re-rec-count)]

    I am not looking for a discussion of the best way, but what is in Clojure that facilitates solving a problem like this.
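
    That idea, sketched as a minimal, untested example against the functions above (the helper name is made up and the body only hints at the rest of the processing):

        ;; gather the file checks into one place and gate the whole extract on
        ;; all of them succeeding, instead of nesting ifs
        (defn all-files-present? [filenames]
          (every? #(zero? (utl/look-for % "f")) filenames))

        (defn extract-re-values
          [opts]
          (when (all-files-present? [(:ifm1 opts) (:ifn1 opts) (:ifm3 opts) (:ifm4 opts)])
            (let [re-in-col-nams (first (utl/fetch-csv-data (:ifm1 opts)))
                  re-in-data     (utl/fetch-csv-data (:ifn1 opts))]
              ;; ... rest of the processing as in the original
              )))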

    Read the article

  • iOS chat application design, sending/relaying the message over to the end user

    - by AyBayBay
    I have a design question. Let us say you were tasked with building a chat application, specifically for iOS (iOS Chat Application). For simplicity let us say you can only chat with one person at a time (no group chat functionality). How then can you achieve sending a message directly to an end user from phone A to phone B? Obviously there is a web service layer with some API calls. One of the API calls available will be startChat(). After starting a chat, when you send a message, you make another async call, let us call it sendMessage() and pass in a string with your message. Once it goes to the web service layer, the message gets stored in a database. Here is where I am currently stuck. After the message gets sent to the web service layer, how do we then achieve sending/relaying the message over to the end user? Should the web server send out a message to the end user and notify them, or should each client call a receiveMessage() method periodically, and if the server side has some info for them it can then respond with that info? Finally, how can we handle the case in which the user you are trying to send a message to is offline? How can we make sure the end user gets the packet when he moves back to an area with signal?

    Read the article

  • Forwarding ports with ssh on Linux

    - by Patrick Klingemann
    I have a database server, let's call it: dbserver I have a web server with access to my dbserver, let's call it: webserver I have a development machine that I'd like to use to access a database on dbserver, let's call it: dev dbserver has a firewall rule set to allow TCP requests from webserver to dbserver:1433 I'd like to set up a tunnel from dev:1433 to dbserver:1433, so all requests to 1433 on dev are passed along to dbserver:1433 My sshd_config on webserver has the following rules set: AllowTcpForwarding yes GatewayPorts yes This is what I've tried: ssh -v -L localhost:1433:dbserver:1433 webserver In another terminal: telnet localhost 1433 Results in: Trying ::1... Connected to localhost. Escape character is '^]'. Connection closed by foreign host. Any idea what I'm doing wrong here? Thanks in advance!

    Read the article

  • Building Tag Cloud Declarative ADF Component

    - by Arunkumar Ramamoorthy
    When building a website, there could a requirement to add a tag cloud to let the users know the popular tags (or terms) used in the site. In this blog, we would build a simple declarative component to be used as tag cloud in the page. To start with, we would first create the declarative component, which could display the tag cloud. We will do that by creating a new custom application from the new gallery. Give a name for the app and the project and from the new gallery, let us create a new ADF Declarative Component We need to specify the name for the declarative component, attributes in it etc. as follows For displaying the tags as cloud, we need to pass the content to this component. So, we will create an attribute to hold the values for the tag. Let us name it as "value" and make it as java.lang.String  type. Once after this, to hold the component, we need to create a tag library. This can be done by clicking on the Add Tag Library button. Clicking on OK buttons in all the open dialogs would create a declarative component for us. Now, we need to display the tag cloud based on the value passed to the component. To do that, we assume that the value is a Tree Binding and has two attributes in it, say "Name" and "Weight". To make a tag cloud, we would put together the "Name" in a loop and set it's font size based on the "Weight". After putting our logic to work, here is how the source look Attributes added to the declarative components can be retrieved by using #{attrs.<attribute_name>}. Now, we need to deploy this project as ADF Library Jar file, so that this can be distributed to the consuming applications. We'll select ADF Library Jar as type and create the profile. We would be getting the jar file after deployment. To test the functionality, we could create a simple Fusion Web Application. To add our custom component to the consuming application, we can create a file system connection pointing to the location where the jar file is and add it or, add through the project properties of the ViewController project. Now, our custom component has been added to the consuming application. We could test that by creating a VO in the model project with a query like, select 'Faces' as Name,25 as Weight from dual union all select 'ADF', 15 from dual  union all select 'ADFdi', 30 from dual union all select 'BC4J', 20 from dual union all select 'EJB', 40 from dual union all select 'WS', 35 from dual Add this VO to the AppModule, so that it would be exposed to the data control. Then, we could create a jspx page, and add a tree binding to the VO created. We can now see our Tag Cloud declarative component is available in the component palette.  It can be inserted from the component palette to our page and set it's value property to CollectionModel of the tree binding created. Now that we've created the Declarative component and added that to our page successfully, we can run the page to see how it looks. As per the query, the Tags are displayed in different fonts, based on their weight.

    Read the article

  • Client-Server MMOG & data structures sync when joining / playing

    - by plang
    After reading a few articles on MMOG architecture, there is still one point on which I cannot find much information: it has to do with how you keep in sync server data on the client, when you join, and while you play. A pretty vague question, I agree. Let me refine it: Let's say we have an MMOG virtual world subdivided into geographical cells. A player in a cell is mostly interested in what happens in the cell itself, and all the surrounding cells, not more. When joining the game for the first time, the only thing we can do is send some sort of "database dump" of the interesting cells to the client. When playing, I guess it would be very inefficient to do the same thing regularly. I imagine the best thing to do is to send "deltas" to the client, which would allow keeping the local database in sync. Now let's say the player moves, and arrives in another cell. Surrounding cells change, and for all the new cells the player subscribes, the same technique as used when joining the game has to be used: some sort of "database dump". This mechanic of joining/moving in a cell-based MMOG virtual world interests me, and I was wondering if there were tried and tested techniques in this domain. Thanks!

    Read the article

  • Avoiding spam filters on my CentOS 5.5 64bit server?

    - by Andrew Fashion
    I run a social network on my web server, with about 15,000 members right now. My administration section lets me mass email all my users. Currently it uses the built-in PHP mail function. What is the best way to configure my server so these messages don't get caught by spam filters? Can I install anything on the server? Or should I just make the social network use SMTP? The admin panel lets me choose SMTP or the built-in mail function. I'm not too familiar with mailing from servers, as I usually use Aweber for my mailing, but I cannot use Aweber for this as they will not let me just import 15,000 emails. Let me know, thanks.

    Read the article
