Search Results

Search found 18096 results on 724 pages for 'let me be'.

Page 22/724 | < Previous Page | 18 19 20 21 22 23 24 25 26 27 28 29  | Next Page >

  • What Keeps You from Changing Your Public IP Address and Wreaking Havoc on the Internet?

    - by Jason Fitzpatrick
    What exactly is preventing you (or anyone else) from changing their IP address and causing all sorts of headaches for ISPs and other Internet users? Today’s Question & Answer session comes to us courtesy of SuperUser—a subdivision of Stack Exchange, a community-driven grouping of Q&A web sites. The Question SuperUser reader Whitemage is curious about what’s preventing him from wantonly changing his IP address and causing trouble: An interesting question was asked of me and I did not know what to answer. So I’ll ask here. Let’s say I subscribed to an ISP and I’m using cable internet access. The ISP gives me a public IP address of 60.61.62.63. What keeps me from changing this IP address to, let’s say, 60.61.62.75, and messing with another consumer’s internet access? For the sake of this argument, let’s say that this other IP address is also owned by the same ISP. Also, let’s assume that it’s possible for me to go into the cable modem settings and manually change the IP address. Under a business contract where you are allocated static addresses, you are also assigned a default gateway, a network address and a broadcast address. So that’s 3 addresses the ISP “loses” to you. That seems very wasteful for dynamically assigned IP addresses, which the majority of customers are. Could they simply be using static arps? ACLs? Other simple mechanisms? Two things to investigate here, why can’t we just go around changing our addresses, and is the assignment process as wasteful as it seems? The Answer SuperUser contributor Moses offers some insight: Cable modems aren’t like your home router (ie. they don’t have a web interface with simple point-and-click buttons that any kid can “hack” into). Cable modems are “looked up” and located by their MAC address by the ISP, and are typically accessed by technicians using proprietary software that only they have access to, that only runs on their servers, and therefore can’t really be stolen. Cable modems also authenticate and cross-check settings with the ISPs servers. The server has to tell the modem whether it’s settings (and location on the cable network) are valid, and simply sets it to what the ISP has it set it for (bandwidth, DHCP allocations, etc). For instance, when you tell your ISP “I would like a static IP, please.”, they allocate one to the modem through their servers, and the modem allows you to use that IP. Same with bandwidth changes, for instance. To do what you are suggesting, you would likely have to break into the servers at the ISP and change what it has set up for your modem. Could they simply be using static arps? ACLs? Other simple mechanisms? Every ISP is different, both in practice and how close they are with the larger network that is providing service to them. Depending on those factors, they could be using a combination of ACL and static ARP. It also depends on the technology in the cable network itself. The ISP I worked for used some form of ACL, but that knowledge was a little beyond my paygrade. I only got to work with the technician’s interface and do routine maintenance and service changes. What keeps me from changing this IP address to, let’s say, 60.61.62.75 and mess with another consumer’s internet access? Given the above, what keeps you from changing your IP to one that your ISP hasn’t specifically given to you is a server that is instructing your modem what it can and can’t do. 
Even if you somehow broke into the modem, if 60.61.62.75 is already allocated to another customer, then the server will simply tell your modem that it can’t have it. David Schwartz offers some additional insight with a link to a white paper for the really curious: Most modern ISPs (last 13 years or so) will not accept traffic from a customer connection with a source IP address they would not route to that customer were it the destination IP address. This is called “reverse path forwarding”. See BCP 38. Have something to add to the explanation? Sound off in the the comments. Want to read more answers from other tech-savvy Stack Exchange users? Check out the full discussion thread here.     

    Read the article

  • Yippy &ndash; the F# MVVM Pattern

    - by MarkPearl
    I did a recent post on implementing WPF with F#. Today I would like to expand on this posting to give a simple implementation of the MVVM pattern in F#. A good read about this topic can also be found on Dean Chalk’s blog although my example of the pattern is possibly simpler. With the MVVM pattern one typically has 3 segments, the view, viewmodel and model. With the beauty of WPF binding one is able to link the state based viewmodel to the view. In my implementation I have kept the same principles. I have a view (MainView.xaml), and and a ViewModel (MainViewModel.fs).     What I would really like to illustrate in this posting is the binding between the View and the ViewModel so I am going to jump to that… In Program.fs I have the following code… module Program open System open System.Windows open System.Windows.Controls open System.Windows.Markup open myViewModels // Create the View and bind it to the View Model let myView = Application.LoadComponent(new System.Uri("/FSharpWPF;component/MainView.xaml", System.UriKind.Relative)) :?> Window myView.DataContext <- new MainViewModel() :> obj // Application Entry point [<STAThread>] [<EntryPoint>] let main(_) = (new Application()).Run(myView) You can see that I have simply created the view (myView) and then created an instance of my viewmodel (MainViewModel) and then bound it to the data context with the code… myView.DataContext <- new MainViewModel() :> obj If I have a look at my viewmodel (MainViewModel) it looks like this… module myViewModels open System open System.Windows open System.Windows.Input open System.ComponentModel open ViewModelBase type MainViewModel() = // private variables let mutable _title = "Bound Data to Textbox" // public properties member x.Title with get() = _title and set(v) = _title <- v // public commands member x.MyCommand = new FuncCommand ( (fun d -> true), (fun e -> x.ShowMessage) ) // public methods member public x.ShowMessage = let msg = MessageBox.Show(x.Title) () I have exposed a few things, namely a property called Title that is mutable, a command and a method called ShowMessage that simply pops up a message box when called. If I then look at my view which I have created in xaml (MainView.xaml) it looks as follows… <Window xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation" xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml" Title="F# WPF MVVM" Height="350" Width="525"> <Grid> <Grid.RowDefinitions> <RowDefinition Height="Auto"/> <RowDefinition Height="Auto"/> <RowDefinition Height="*"/> </Grid.RowDefinitions> <TextBox Text="{Binding Path=Title, Mode=TwoWay}" Grid.Row="0"/> <Button Command="{Binding MyCommand}" Grid.Row="1"> <TextBlock Text="Click Me"/> </Button> </Grid> </Window>   It is also very simple. It has a button that’s command is bound to the MyCommand and a textbox that has its text bound to the Title property. One other module that I have created is my ViewModelBase. Right now it is used to store my commanding function but I would look to expand on it at a later stage to implement other commonly used functions… module ViewModelBase open System open System.Windows open System.Windows.Input open System.ComponentModel type FuncCommand (canExec:(obj -> bool),doExec:(obj -> unit)) = let cecEvent = new DelegateEvent<EventHandler>() interface ICommand with [<CLIEvent>] member x.CanExecuteChanged = cecEvent.Publish member x.CanExecute arg = canExec(arg) member x.Execute arg = doExec(arg) Put this all together and you have a basic project that implements the MVVM pattern in F#. 
For me this is quite exciting as it turned out to be a lot simpler to do than I originally thought possible. Also because I have my view in XAML I can use the XAML designer to design forms in F# which I believe is a much cleaner way to go rather than implementing it all in code. Finally if I look at my viewmodel code, it is actually quite clean and compact…

    Read the article

  • Upgrading to Code Based Migrations EF 4.3.1 with Connector/Net 6.6

    - by GABMARTINEZ
    Entity Framework 4.3.1 includes a new feature called code first migrations.  We are adding support for this feature in our upcoming 6.6 release of Connector/Net.  In this walk-through we'll see the workflow of code-based migrations when you have an existing application and you would like to upgrade to this EF 4.3.1 version and use this approach, so you can keep track of the changes that you do to your database.   The first thing we need to do is add the new Entity Framework 4.3.1 package to our application. This should via the NuGet package manager.  You can read more about why EF is not part of the .NET framework here. Adding EF 4.3.1 to our existing application  Inside VS 2010 go to Tools -> Library Package Manager -> Package Manager Console, this will open the Power Shell Host Window where we can work with all the EF commands. In order to install this library to your existing application you should type Install-Package EntityFramework This will make some changes to your application. So Let's check them. In your .config file you'll see a  <configSections> which contains the version you have from EntityFramework and also was added the <entityFramework> section as shown below. This section is by default configured to use SQL Express which won't be necesary for this case. So you can comment it out or leave it empty. Also please make sure you're using the Connector/Net 6.6.x version which is the one that has this support as is shown in the previous image. At this point we face one issue; in order to be able to work with Migrations we need the __MigrationHistory table that we don't have yet since our Database was created with an older version. This table is used to keep track of the changes in our model. So we need to get it in our existing Database. Getting a Migration-History table into an existing database First thing we need to do to enable migrations in our existing application is to create our configuration class which will set up the MySqlClient Provider as our SQL Generator. So we have to add it with the following code: using System.Data.Entity.Migrations;     //add this at the top of your cs file public class Configuration : DbMigrationsConfiguration<NameOfYourDbContext>  //Make sure to use the name of your existing DBContext { public Configuration() { this.AutomaticMigrationsEnabled = false; //Set Automatic migrations to false since we'll be applying the migrations manually for this case. SetSqlGenerator("MySql.Data.MySqlClient", new MySql.Data.Entity.MySqlMigrationSqlGenerator());     }   }  This code will set up our configuration that we'll be using when executing all the migrations for our application. Once we have done this we can Build our application so we can check that everything is fine. Creating our Initial Migration Now let's add our Initial Migration. In Package Manager Console, execute "add-migration InitialCreate", you can use any other name but I like to set this as our initial create for future reference. After we run this command, some changes were done in our application: A new Migrations Folder was created. A new class migration call InitialCreate which in most of the cases should have empty Up and Down methods as long as your database is up to date with your Model. Since all your entities already exists, delete all duplicated code to create any entity which exists already in your Database if there is any. I found this easier when you don't have any pending updates to do to your database. 
Now we have our empty migration that will make no changes in our database and represents how are all the things at the begining of our migrations.  Finally, let's create our MigrationsHistory table. Optionally you can add SQL code to delete the edmdata table which is not needed anymore. public override void Up() { // Just make sure that you used 4.1 or later version         Sql("DROP TABLE EdmMetadata"); } From our Package Manager Console let's type: Update-database; If you like to see the operations made on each Update-database command you can use the flag -verbose after the Update-database. This will make two important changes.  It will execute the Up method in the initial migration which has no changes in the database. And second, and very important,  it will create the __MigrationHistory table necessary to keep track of your changes. And next time you make a change to your database it will compare the current model to the one stored in the Model Column of this table. Conclusion The important thing of this walk through is that we must create our initial migration before we start doing any changes to our model. This way we'll be adding the necessary __MigrationsHistory table to our existing database, so we can keep our database up to date with all the changes we do in our context model using migrations. Hope you have found this information useful. Please let us know if you have any questions or comments, also please check our forums here where we keep answering questions in general for the community.  Happy MySQL/Net Coding!

    Read the article

  • A Patent for Workload Management Based on Service Level Objectives

    - by jsavit
    I'm very pleased to announce that after a tiny :-) wait of about 5 years, my patent application for a workload manager was finally approved. Background Many operating systems have a resource manager which lets you control machine resources. For example, Solaris provides controls for CPU with several options: shares for proportional CPU allocation. If you have twice as many shares as me, and we are competing for CPU, you'll get about twice as many CPU cycles), dedicated CPU allocation in which a number of CPUs are exclusively dedicated to an application's use. You can say that a zone or project "owns" 8 CPUs on a 32 CPU machine, for example. And, capped CPU in which you specify the upper bound, or cap, of how much CPU an application gets. For example, you can throttle an application to 0.125 of a CPU. (This isn't meant to be an exhaustive list of Solaris RM controls.) Workload management Useful as that is (and tragic that some other operating systems have little resource management and isolation, and frighten people into running only 1 app per OS instance - and wastefully size every server for the peak workload it might experience) that's not really workload management. With resource management one controls the resources, and hope that's enough to meet application service objectives. In fact, we hold resource distribution constant, see if that was good enough, and adjust resource distribution if that didn't meet service level objectives. Here's an example of what happens today: Let's try 30% dedicated CPU. Not enough? Let's try 80% Oh, that's too much, and we're achieving much better response time than the objective, but other workloads are starving. Let's back that off and try again. It's not the process I object to - it's that we to often do this manually. Worse, we sometimes identify and adjust the wrong resource and fiddle with that to no useful result. Back in my days as a customer managing large systems, one of my users would call me up to beg for a "CPU boost": Me: "it won't make any difference - there's plenty of spare CPU to be had, and your application is completely I/O bound." User: "Please do it anyway." Me: "oh, all right, but it won't do you any good." (I did, because he was a friend, but it didn't help.) Prior art There are some operating environments that take a stab about workload management (rather than resource management) but I find them lacking. I know of one that uses synthetic "service units" composed of the sum of CPU, I/O and memory allocations multiplied by weighting factors. A workload is set to make a target rate of service units consumed per second. But this seems to be missing a key point: what is the relationship between artificial 'service units' and actually meeting a throughput or response time objective? What if I get plenty of one of the components (so am getting enough service units), but not enough of the resource whose needed to remove the bottleneck? Actual workload management That's not really the answer either. What is needed is to specify a workload's service levels in terms of externally visible metrics that are meaningful to a business, such as response times or transactions per second, and have the workload manager figure out which resources are not being adequately provided, and then adjust it as needed. If an application is not meeting its service level objectives and the reason is that it's not getting enough CPU cycles, adjust its CPU resource accordingly. 
If the reason is that the application isn't getting enough RAM to keep its working set in memory, then adjust its RAM assignment appropriately so it stops swapping. Simple idea, but that's a task we keep dumping on system administrators. In other words - don't hold the number of CPU shares constant and watch the achievement of service level vary. Instead, hold the service level constant, and dynamically adjust the number of CPU shares (or amount of other resources like RAM or I/O bandwidth) in order to meet the objective. Instrumenting non-instrumented applications There's one little problem here: how do I measure application performance in a way relating to a service level. I don't want to do it based on internal resources like number of CPU seconds it received per minute - We need to make resource decisions based on externally visible and meaningful measures of performance, not synthetic items or internal resource counters. If I have a way of marking the beginning and end of a transaction, I can then measure whether or not the application is meeting an objective based on it. If I can observe the delay factors for an application, I can see which resource shortages are slowing an application enough to keep it from meeting its objectives. I can then adjust resource allocations to relieve those shortages. Fortunately, Solaris provides facilities for both marking application progress and determining what factors cause application latency. The Solaris DTrace facility let's me introspect on application behavior: in particular I can see events like "receive a web hit" and "respond to that web hit" so I can get transaction rate and response time. DTrace (and tools like prstat) let me see where latency is being added to an application, so I know which resource to adjust. Summary After a delay of a mere few years, I am the proud creator of a patent (advice to anyone interested in going through the process: don't hold your breath!). The fundamental idea is fairly simple: instead of holding resource constant and suffering variable levels of success meeting service level objectives, properly characterise the service level objective in meaningful terms, instrument the application to see if it's meeting the objective, and then have a workload manager change resource allocations to remove delays preventing service level attainment. I've done it by hand for a long time - I think that's what a computer should do for me.

    Read the article

  • SQL SERVER – sp_describe_first_result_set New System Stored Procedure in SQL Server 2012

    - by pinaldave
    I might have said this earlier many times but I will say it again – SQL Server never stops to amaze me. Here is the example of it sp_describe_first_result_set. I stumbled upon it when I was looking for something else on BOL. This new system stored procedure did attract me to experiment with it. This SP does exactly what its names suggests – describes the first result set. Let us see very simple example of the same. Please note that this will work on only SQL Server 2012. EXEC sp_describe_first_result_set N'SELECT * FROM AdventureWorks.Sales.SalesOrderDetail', NULL, 1 GO Here is the partial resultset. Now let us take this simple example to next level and learn one more interesting detail about this function. First I will be creating a view and then we will use the same procedure over the view. USE AdventureWorks GO CREATE VIEW dbo.MyView AS SELECT [SalesOrderID] soi_v ,[SalesOrderDetailID] sodi_v ,[CarrierTrackingNumber] stn_v FROM [Sales].[SalesOrderDetail] GO Now let us execute above stored procedure with various options. You can notice I am changing the very last parameter which I am passing to the stored procedure.This option is known as for browse_information_mode. EXEC sp_describe_first_result_set N'SELECT soi_v soi, sodi_v sodi, stn_v stn FROM MyView', NULL, 0; GO EXEC sp_describe_first_result_set N'SELECT soi_v soi, sodi_v sodi, stn_v stn FROM MyView', NULL, 1; GO EXEC sp_describe_first_result_set N'SELECT soi_v soi, sodi_v sodi, stn_v stn FROM MyView', NULL, 2; GO Here is result of all the three queries together in single image for easier understanding regarding their difference. You can see that when BrowseMode is set to 1 the resultset describes the details of the original source database, schema as well source table. When BrowseMode is set to 2 the resulset describes the details of the view as the source database. I found it really really interesting that there exists system stored procedure which now describes the resultset of the output. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Stored Procedure, SQL Tips and Tricks, T SQL, Technology

    Read the article

  • SQL SERVER – Guest Posts – Feodor Georgiev – The Context of Our Database Environment – Going Beyond the Internal SQL Server Waits – Wait Type – Day 21 of 28

    - by pinaldave
    This guest post is submitted by Feodor. Feodor Georgiev is a SQL Server database specialist with extensive experience of thinking both within and outside the box. He has wide experience of different systems and solutions in the fields of architecture, scalability, performance, etc. Feodor has experience with SQL Server 2000 and later versions, and is certified in SQL Server 2008. In this article Feodor explains the server-client-server process, and concentrated on the mutual waits between client and SQL Server. This is essential in grasping the concept of waits in a ‘global’ application plan. Recently I was asked to write a blog post about the wait statistics in SQL Server and since I had been thinking about writing it for quite some time now, here it is. It is a wide-spread idea that the wait statistics in SQL Server will tell you everything about your performance. Well, almost. Or should I say – barely. The reason for this is that SQL Server is always a part of a bigger system – there are always other players in the game: whether it is a client application, web service, any other kind of data import/export process and so on. In short, the SQL Server surroundings look like this: This means that SQL Server, aside from its internal waits, also depends on external waits and settings. As we can see in the picture above, SQL Server needs to have an interface in order to communicate with the surrounding clients over the network. For this communication, SQL Server uses protocol interfaces. I will not go into detail about which protocols are best, but you can read this article. Also, review the information about the TDS (Tabular data stream). As we all know, our system is only as fast as its slowest component. This means that when we look at our environment as a whole, the SQL Server might be a victim of external pressure, no matter how well we have tuned our database server performance. Let’s dive into an example: let’s say that we have a web server, hosting a web application which is using data from our SQL Server, hosted on another server. The network card of the web server for some reason is malfunctioning (think of a hardware failure, driver failure, or just improper setup) and does not send/receive data faster than 10Mbs. On the other end, our SQL Server will not be able to send/receive data at a faster rate either. This means that the application users will notify the support team and will say: “My data is coming very slow.” Now, let’s move on to a bit more exciting example: imagine that there is a similar setup as the example above – one web server and one database server, and the application is not using any stored procedure calls, but instead for every user request the application is sending 80kb query over the network to the SQL Server. (I really thought this does not happen in real life until I saw it one day.) So, what happens in this case? To make things worse, let’s say that the 80kb query text is submitted from the application to the SQL Server at least 100 times per minute, and as often as 300 times per minute in peak times. Here is what happens: in order for this query to reach the SQL Server, it will have to be broken into a of number network packets (according to the packet size settings) – and will travel over the network. 
On the other side, our SQL Server network card will receive the packets, will pass them to our network layer, the packets will get assembled, and eventually SQL Server will start processing the query – parsing, allegorizing, generating the query execution plan and so on. So far, we have already had a serious network overhead by waiting for the packets to reach our Database Engine. There will certainly be some processing overhead – until the database engine deals with the 80kb query and its 20 subqueries. The waits you see in the DMVs are actually collected from the point the query reaches the SQL Server and the packets are assembled. Let’s say that our query is processed and it finally returns 15000 rows. These rows have a certain size as well, depending on the data types returned. This means that the data will have converted to packages (depending on the network size package settings) and will have to reach the application server. There will also be waits, however, this time you will be able to see a wait type in the DMVs called ASYNC_NETWORK_IO. What this wait type indicates is that the client is not consuming the data fast enough and the network buffers are filling up. Recently Pinal Dave posted a blog on Client Statistics. What Client Statistics does is captures the physical flow characteristics of the query between the client(Management Studio, in this case) and the server and back to the client. As you see in the image, there are three categories: Query Profile Statistics, Network Statistics and Time Statistics. Number of server roundtrips–a roundtrip consists of a request sent to the server and a reply from the server to the client. For example, if your query has three select statements, and they are separated by ‘GO’ command, then there will be three different roundtrips. TDS Packets sent from the client – TDS (tabular data stream) is the language which SQL Server speaks, and in order for applications to communicate with SQL Server, they need to pack the requests in TDS packets. TDS Packets sent from the client is the number of packets sent from the client; in case the request is large, then it may need more buffers, and eventually might even need more server roundtrips. TDS packets received from server –is the TDS packets sent by the server to the client during the query execution. Bytes sent from client – is the volume of the data set to our SQL Server, measured in bytes; i.e. how big of a query we have sent to the SQL Server. This is why it is best to use stored procedures, since the reusable code (which already exists as an object in the SQL Server) will only be called as a name of procedure + parameters, and this will minimize the network pressure. Bytes received from server – is the amount of data the SQL Server has sent to the client, measured in bytes. Depending on the number of rows and the datatypes involved, this number will vary. But still, think about the network load when you request data from SQL Server. Client processing time – is the amount of time spent in milliseconds between the first received response packet and the last received response packet by the client. Wait time on server replies – is the time in milliseconds between the last request packet which left the client and the first response packet which came back from the server to the client. 
Total execution time – is the sum of client processing time and wait time on server replies (the SQL Server internal processing time) Here is an illustration of the Client-server communication model which should help you understand the mutual waits in a client-server environment. Keep in mind that a query with a large ‘wait time on server replies’ means the server took a long time to produce the very first row. This is usual on queries that have operators that need the entire sub-query to evaluate before they proceed (for example, sort and top operators). However, a query with a very short ‘wait time on server replies’ means that the query was able to return the first row fast. However a long ‘client processing time’ does not necessarily imply the client spent a lot of time processing and the server was blocked waiting on the client. It can simply mean that the server continued to return rows from the result and this is how long it took until the very last row was returned. The bottom line is that developers and DBAs should work together and think carefully of the resource utilization in the client-server environment. From experience I can say that so far I have seen only cases when the application developers and the Database developers are on their own and do not ask questions about the other party’s world. I would recommend using the Client Statistics tool during new development to track the performance of the queries, and also to find a synchronous way of utilizing resources between the client – server – client. Here is another example: think about similar setup as above, but add another server to the game. Let’s say that we keep our media on a separate server, and together with the data from our SQL Server we need to display some images on the webpage requested by our user. No matter how simple or complicated the logic to get the images is, if the images are 500kb each our users will get the page slowly and they will still think that there is something wrong with our data. Anyway, I don’t mean to get carried away too far from SQL Server. Instead, what I would like to say is that DBAs should also be aware of ‘the big picture’. I wrote a blog post a while back on this topic, and if you are interested, you can read it here about the big picture. And finally, here are some guidelines for monitoring the network performance and improving it: Run a trace and outline all queries that return more than 1000 rows (in Profiler you can actually filter and sort the captured trace by number of returned rows). This is not a set number; it is more of a guideline. The general thought is that no application user can consume that many rows at once. Ask yourself and your fellow-developers: ‘why?’. Monitor your network counters in Perfmon: Network Interface:Output queue length, Redirector:Network errors/sec, TCPv4: Segments retransmitted/sec and so on. Make sure to establish a good friendship with your network administrator (buy them coffee, for example J ) and get into a conversation about the network settings. Have them explain to you how the network cards are setup – are they standalone, are they ‘teamed’, what are the settings – full duplex and so on. Find some time to read a bit about networking. In this short blog post I hope I have turned your attention to ‘the big picture’ and the fact that there are other factors affecting our SQL Server, aside from its internal workings. 
As a further reading I would still highly recommend the Wait Stats series on this blog, also I would recommend you have the coffee break conversation with your network admin as soon as possible. This guest post is written by Feodor Georgiev. Read all the post in the Wait Types and Queue series. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Pinal Dave, PostADay, Readers Contribution, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, SQL Wait Stats, SQL Wait Types, T SQL

    Read the article

  • ASP.NET 4.0- Html Encoded Expressions

    - by Jalpesh P. Vadgama
    We all know <%=expression%> features in asp.net. We can print any string on page from there. Mostly we are using them in asp.net mvc. Now we have one new features with asp.net 4.0 that we have HTML Encoded Expressions and this prevent Cross scripting attack as we are html encoding them. ASP.NET 4.0 introduces a new expression syntax <%: expression %> which automatically convert string into html encoded. Let’s take an example for that. I have just created an hello word protected method which will return a simple string which contains characters that needed to be HTML Encoded. Below is code for that. protected static string HelloWorld() { return "Hello World!!! returns from function()!!!>>>>>>>>>>>>>>>>>"; } Now let’s use the that hello world in our page html like below. I am going to use both expression to give you exact difference. <form id="form1" runat="server"> <div> <strong><%: HelloWorld()%></strong> </div> <div> <strong><%= HelloWorld()%></strong> </div> </form> Now let’s run the application and you can see in browser both look similar. But when look into page source html in browser like below you can clearly see one is HTML Encoded and another one is not. That’s it.. It’s cool.. Stay tuned for more.. Happy Programming Technorati Tags: ASP.NET 4.0,HTMLEncode,C#4.0

    Read the article

  • Should i scrap my own leader board and go for the Facebook built in one?

    - by Magnus Johansson
    Currently I'm rolling my own score and leader board functionality in my FB canvas game. In my game, users can see what score they have, in addition I have a public leader board where everybody can see all scores from all other users.(I also have possibility for each user to set themselves as anonymous in the leader board, if desired) But now I started thinking; why do I have my own leader board system? Facebook has this scores API and I started play around with it. It, of course, integrates well with Facebook, scores and achievement showing up in the ticker and what not. But it seems that I can't let each user see a public leader board in much the same way I currently have it. But it do let the users see their friends score. Let's face it, this is all what FB is all about, right? Friends. So the question is; should i scrap my own leader board and go for the Facebook built in one (and skip the public part of it)? My gut feeling says yes, but I wanted to hear what other thinks.

    Read the article

  • SQL SERVER – Take the Quiz for a chance to win a Quadcopter Drone – Brain Teasers

    - by Pinal Dave
    It has been a long time since we ran quiz. So let us get ready for a quiz. The quiz has two parts. You have to get both the parts correct to win Quadcopter with Camera (we will call it drone). We will be giving away a total of 2 Quadcopters. The quiz is extremely easy and I will ship the Drone anywhere in the world where Amazon will ship it. Let us jump directly to the quiz. Please complete all the three questions of the contest.  Contest Part 1: Brain Teasers There are two questions for you in this part of the contest. Question: There are two 7s. How will you write select statement with a single operator that returns single 7? Hint: SELECT 7(Answer)7 Question: Write down the shortest code that produces 1 without using any numbers in the select statement? Hint: SELECT (Answer) Contest Part 2: Download and Activate Rapid SQL Question: Download and Activate Rapid SQL. Hint: You have to download and activate Rapid SQL. If you do not activate Rapid SQL, you will be disqualified for the contest. Why take risk, let us start! That’s it! Just answer above questions in the following comments area, in following format. Remember: Use comments area right below the blog to take participation in the contest Answer before June 5, 2014 midnight GMT. The winner will be announced on June 8. The winner will be selected randomly from all the valid answers. All the valid answers will be kept hidden till June 5, 2014. There will be a total of two winners. The contest is open for any country of the world where Amazon ships products. Reference: Pinal Dave (http://blog.sqlauthority.com)Filed under: PostADay, SQL, SQL Authority, SQL Puzzle, SQL Query, SQL Server, SQL Tips and Tricks, T SQL

    Read the article

  • VLOOKUP in Excel, part 2: Using VLOOKUP without a database

    - by Mark Virtue
    In a recent article, we introduced the Excel function called VLOOKUP and explained how it could be used to retrieve information from a database into a cell in a local worksheet.  In that article we mentioned that there were two uses for VLOOKUP, and only one of them dealt with querying databases.  In this article, the second and final in the VLOOKUP series, we examine this other, lesser known use for the VLOOKUP function. If you haven’t already done so, please read the first VLOOKUP article – this article will assume that many of the concepts explained in that article are already known to the reader. When working with databases, VLOOKUP is passed a “unique identifier” that serves to identify which data record we wish to find in the database (e.g. a product code or customer ID).  This unique identifier must exist in the database, otherwise VLOOKUP returns us an error.  In this article, we will examine a way of using VLOOKUP where the identifier doesn’t need to exist in the database at all.  It’s almost as if VLOOKUP can adopt a “near enough is good enough” approach to returning the data we’re looking for.  In certain circumstances, this is exactly what we need. We will illustrate this article with a real-world example – that of calculating the commissions that are generated on a set of sales figures.  We will start with a very simple scenario, and then progressively make it more complex, until the only rational solution to the problem is to use VLOOKUP.  The initial scenario in our fictitious company works like this:  If a salesperson creates more than $30,000 worth of sales in a given year, the commission they earn on those sales is 30%.  Otherwise their commission is only 20%.  So far this is a pretty simple worksheet: To use this worksheet, the salesperson enters their sales figures in cell B1, and the formula in cell B2 calculates the correct commission rate they are entitled to receive, which is used in cell B3 to calculate the total commission that the salesperson is owed (which is a simple multiplication of B1 and B2). The cell B2 contains the only interesting part of this worksheet – the formula for deciding which commission rate to use: the one below the threshold of $30,000, or the one above the threshold.  This formula makes use of the Excel function called IF.  For those readers that are not familiar with IF, it works like this: IF(condition,value if true,value if false) Where the condition is an expression that evaluates to either true or false.  In the example above, the condition is the expression B1<B5, which can be read as “Is B1 less than B5?”, or, put another way, “Are the total sales less than the threshold”.  If the answer to this question is “yes” (true), then we use the value if true parameter of the function, namely B6 in this case – the commission rate if the sales total was below the threshold.  If the answer to the question is “no” (false), then we use the value if false parameter of the function, namely B7 in this case – the commission rate if the sales total was above the threshold. As you can see, using a sales total of $20,000 gives us a commission rate of 20% in cell B2.  If we enter a value of $40,000, we get a different commission rate: So our spreadsheet is working. Let’s make it more complex.  Let’s introduce a second threshold:  If the salesperson earns more than $40,000, then their commission rate increases to 40%: Easy enough to understand in the real world, but in cell B2 our formula is getting more complex.  
If you look closely at the formula, you’ll see that the third parameter of the original IF function (the value if false) is now an entire IF function in its own right.  This is called a nested function (a function within a function).  It’s perfectly valid in Excel (it even works!), but it’s harder to read and understand. We’re not going to go into the nuts and bolts of how and why this works, nor will we examine the nuances of nested functions.  This is a tutorial on VLOOKUP, not on Excel in general. Anyway, it gets worse!  What about when we decide that if they earn more than $50,000 then they’re entitled to 50% commission, and if they earn more than $60,000 then they’re entitled to 60% commission? Now the formula in cell B2, while correct, has become virtually unreadable.  No-one should have to write formulae where the functions are nested four levels deep!  Surely there must be a simpler way? There certainly is.  VLOOKUP to the rescue! Let’s redesign the worksheet a bit.  We’ll keep all the same figures, but organize it in a new way, a more tabular way: Take a moment and verify for yourself that the new Rate Table works exactly the same as the series of thresholds above. Conceptually, what we’re about to do is use VLOOKUP to look up the salesperson’s sales total (from B1) in the rate table and return to us the corresponding commission rate.  Note that the salesperson may have indeed created sales that are not one of the five values in the rate table ($0, $30,000, $40,000, $50,000 or $60,000).  They may have created sales of $34,988.  It’s important to note that $34,988 does not appear in the rate table.  Let’s see if VLOOKUP can solve our problem anyway… We select cell B2 (the location we want to put our formula), and then insert the VLOOKUP function from the Formulas tab: The Function Arguments box for VLOOKUP appears.  We fill in the arguments (parameters) one by one, starting with the Lookup_value, which is, in this case, the sales total from cell B1.  We place the cursor in the Lookup_value field and then click once on cell B1: Next we need to specify to VLOOKUP what table to lookup this data in.  In this example, it’s the rate table, of course.  We place the cursor in the Table_array field, and then highlight the entire rate table – excluding the headings: Next we must specify which column in the table contains the information we want our formula to return to us.  In this case we want the commission rate, which is found in the second column in the table, so we therefore enter a 2 into the Col_index_num field: Finally we enter a value in the Range_lookup field. Important:  It is the use of this field that differentiates the two ways of using VLOOKUP.  To use VLOOKUP with a database, this final parameter, Range_lookup, must always be set to FALSE, but with this other use of VLOOKUP, we must either leave it blank or enter a value of TRUE.  When using VLOOKUP, it is vital that you make the correct choice for this final parameter. To be explicit, we will enter a value of true in the Range_lookup field.  It would also be fine to leave it blank, as this is the default value: We have completed all the parameters.  We now click the OK button, and Excel builds our VLOOKUP formula for us: If we experiment with a few different sales total amounts, we can satisfy ourselves that the formula is working. Conclusion In the “database” version of VLOOKUP, where the Range_lookup parameter is FALSE, the value passed in the first parameter (Lookup_value) must be present in the database.  
In other words, we’re looking for an exact match. But in this other use of VLOOKUP, we are not necessarily looking for an exact match.  In this case, “near enough is good enough”.  But what do we mean by “near enough”?  Let’s use an example:  When searching for a commission rate on a sales total of $34,988, our VLOOKUP formula will return us a value of 30%, which is the correct answer.  Why did it choose the row in the table containing 30% ?  What, in fact, does “near enough” mean in this case?  Let’s be precise: When Range_lookup is set to TRUE (or omitted), VLOOKUP will look in column 1 and match the highest value that is not greater than the Lookup_value parameter. It’s also important to note that for this system to work, the table must be sorted in ascending order on column 1! If you would like to practice with VLOOKUP, the sample file illustrated in this article can be downloaded from here. Similar Articles Productive Geek Tips Using VLOOKUP in ExcelImport Microsoft Access Data Into ExcelImport an Access Database into ExcelCopy a Group of Cells in Excel 2007 to the Clipboard as an ImageShare Access Data with Excel in Office 2010 TouchFreeze Alternative in AutoHotkey The Icy Undertow Desktop Windows Home Server – Backup to LAN The Clear & Clean Desktop Use This Bookmarklet to Easily Get Albums Use AutoHotkey to Assign a Hotkey to a Specific Window Latest Software Reviews Tinyhacker Random Tips DVDFab 6 Revo Uninstaller Pro Registry Mechanic 9 for Windows PC Tools Internet Security Suite 2010 Quickly Schedule Meetings With NeedtoMeet Share Flickr Photos On Facebook Automatically Are You Blocked On Gtalk? Find out Discover Latest Android Apps On AppBrain The Ultimate Guide For YouTube Lovers Will it Blend? iPad Edition

    Read the article

  • Windows FAT/NTFS Low-Level Disk Viewer (Norton DiskEdit alternative)

    - by Synetech inc.
    Hi, One of my most valuable software tools has always been Norton DiskEdit from Norton Utilities/Symantec Systemworks. I have used it for years for so many things, including, but not limited to learning file-systems. I am now in search of a similar tool that can let me view FAT and NTFS disks at a low and high level under Windows. I have seen Runtime Software’s DiskExplorer, but unfortunately, it is limited in a number of ways, particularly annoying is that it does not really let you view the disk as structured (it does not let you see directory entries from a directory that is fragmented like DiskEdit can without doing a too-exhaustive scan which doesn’t even work for FAT partitions that have cluster sizes <4KB). Does anyone know of a Windows alternative of a Norton DiskEditor? Thanks a lot.

    Read the article

  • Fibonacci numbers in F#

    - by BobPalmer
    As you may have gathered from some of my previous posts, I've been spending some quality time at Project Euler.  Normally I do my solutions in C#, but since I have also started learning F#, it only made sense to switch over to F# to get my math coding fix. This week's post is just a small snippet - spefically, a simple function to return a fibonacci number given it's place in the sequence.  One popular example uses recursion: let rec fib n = if n < 2 then 1 else fib (n-2) + fib(n-1) While this is certainly elegant, the recursion is absolutely brutal on performance.  So I decided to spend a little time, and find an option that achieved the same functionality, but used a recursive function.  And since this is F#, I wanted to make sure I did it without the use of any mutable variables. Here's the solution I came up with: let rec fib n1 n2 c =    if c = 1 then        n2    else        fib n2 (n1+n2) (c-1);;let GetFib num =    (fib 1 1 num);;printfn "%A" (GetFib 1000);; Essentially, this function works through the sequence moving forward, passing the two most recent numbers and a counter to the recursive calls until it has achieved the desired number of iterations.  At that point, it returns the latest fibonacci number. Enjoy!

    Read the article

  • Window borders missing - gtk-window-decorator segmentation fault

    - by Balakrshnan Ramakrishnan
    I have been using Ubuntu for about 1 year now and I got a problem just two days ago. Suddenly I started experiencing a problem with the window borders (title bar with close, maximize, and minimize buttons). The problem : The window borders disappear I run "gtk-window-decorator --replace" For like 20 seconds everything is back to normal But it again returns to the problem. I searched over the Internet and found that my problem is similar to what is specified in this bug report: https://bugs.launchpad.net/ubuntu/+source/compiz/+bug/814091 This bug report says that the "fix released". I updated everything using the Update Manager, but still the problem remains. Can anyone let me know whether the problem is fixed? If yes, can you please let me know how to do it? I have already tried normal replace/reset commands like unity --reset unity --replace compiz --replace The "window borders" plugin is on in CCSM (CompizConfig Settings Manager) and it points to "gtk-window-decorator". I use Ubuntu 11.10 on an Intel Core2Duo T6500 with AMD Mobility Radeon HD 4300 graphics card. If you need more information, please let me know.

    Read the article

  • Enum driving a Visual State change via the ViewModel

    - by Chris Skardon
    Exciting title eh? So, here’s the problem, I want to use my ViewModel to drive my Visual State, I’ve used the ‘DataStateBehavior’ before, but the trouble with it is that it only works for bool values, and the minute you jump to more than 2 Visual States, you’re kind of screwed. A quick search has shown up a couple of points of interest, first, the DataStateSwitchBehavior, which is part of the Expression Samples (on Codeplex), and also available via Pete Blois’ blog. The second interest is to use a DataTrigger with GoToStateAction (from the Silverlight forums). So, onwards… first let’s create a basic switch Visual State, so, a DataObj with one property: IsAce… public class DataObj : NotifyPropertyChanger { private bool _isAce; public bool IsAce { get { return _isAce; } set { _isAce = value; RaisePropertyChanged("IsAce"); } } } The ‘NotifyPropertyChanger’ is literally a base class with RaisePropertyChanged, implementing INotifyPropertyChanged. OK, so we then create a ViewModel: public class MainPageViewModel : NotifyPropertyChanger { private DataObj _dataObj; public MainPageViewModel() { DataObj = new DataObj {IsAce = true}; ChangeAcenessCommand = new RelayCommand(() => DataObj.IsAce = !DataObj.IsAce); } public ICommand ChangeAcenessCommand { get; private set; } public DataObj DataObj { get { return _dataObj; } set { _dataObj = value; RaisePropertyChanged("DataObj"); } } } Aaaand finally – hook it all up to the XAML, which is a very simple UI: A Rectangle, a TextBlock and a Button. The Button is hooked up to ChangeAcenessCommand, the TextBlock is bound to the ‘DataObj.IsAce’ property and the Rectangle has 2 visual states: IsAce and NotAce. To make the Rectangle change it’s visual state I’ve used a DataStateBehavior inside the Layout Root Grid: <i:Interaction.Behaviors> <ei:DataStateBehavior Binding="{Binding DataObj.IsAce}" Value="true" TrueState="IsAce" FalseState="NotAce"/> </i:Interaction.Behaviors> So now we have the button changing the ‘IsAce’ property and giving us the other visual state: Great! So – the next stage is to get that to work inside a DataTemplate… Which (thankfully) is easy money. All we do is add a ListBox to the View and an ObservableCollection to the ViewModel. Well – ok, a little bit more than that. Once we’ve got the ListBox with it’s ItemsSource property set, it’s time to add the DataTemplate itself. Again, this isn’t exactly taxing, and is purely going to be a Grid with a Textblock and a Rectangle (again, I’m nothing if not consistent). Though, to be a little jazzy I’ve swapped the rectangle to the other side (living the dream). So, all that’s left is to add some States to the template.. (Yes – you can do that), these can be the same names as the others, or indeed, something else, I have chosen to stick with the same names and take the extra confusion hit right on the nose. Once again, I add the DataStateBehavior to the root Grid element: <i:Interaction.Behaviors> <ei:DataStateBehavior Binding="{Binding IsAce}" Value="true" TrueState="IsAce" FalseState="NotAce"/> </i:Interaction.Behaviors> The key difference here is the ‘Binding’ attribute, where I’m now binding to the IsAce property directly, and boom! It’s all gravy!   So far, so good. We can use boolean values to change the visual states, and (crucially) it works in a DataTemplate, bingo! Now. Onwards to the Enum part of this (finally!). Obviously we can’t use the DataStateBehavior, it' only gives us true/false options. So, let’s give the GoToStateAction a go. 
Now, I warn you, things get a bit complex from here, instead of a bool with 2 values, I’m gonna max it out and bring in an Enum with 3 (count ‘em) 3 values: Red, Amber and Green (those of you with exceptionally sharp minds will be reminded of traffic lights). We’re gonna have a rectangle which also has 3 visual states – cunningly called ‘Red’, ‘Amber’ and ‘Green’. A new class called DataObj2: public class DataObj2 : NotifyPropertyChanger { private Status _statusValue; public DataObj2(Status status) { StatusValue = status; } public Status StatusValue { get { return _statusValue; } set { _statusValue = value; RaisePropertyChanged("StatusValue"); } } } Where ‘Status’ is my enum. Good times are here! Ok, so let’s get to the beefy stuff. So, we’ll start off in the same manner as the last time, we will have a single DataObj2 instance available to the Page and bind to that. Let’s add some Triggers (these are in the LayoutRoot again). <i:Interaction.Triggers> <ei:DataTrigger Binding="{Binding DataObject2.StatusValue}" Value="Amber"> <ei:GoToStateAction StateName="Amber" UseTransitions="False" /> </ei:DataTrigger> <ei:DataTrigger Binding="{Binding DataObject2.StatusValue}" Value="Green"> <ei:GoToStateAction StateName="Green" UseTransitions="False" /> </ei:DataTrigger> <ei:DataTrigger Binding="{Binding DataObject2.StatusValue}" Value="Red"> <ei:GoToStateAction StateName="Red" UseTransitions="False" /> </ei:DataTrigger> </i:Interaction.Triggers> So what we’re saying here is that when the DataObject2.StatusValue is equal to ‘Red’ then we’ll go to the ‘Red’ state. Same deal for Green and Amber (but you knew that already). Hook it all up and start teh project. Hmm. Just grey. Not what I wanted. Ok, let’s add a ‘ChangeStatusCommand’, hook that up to a button and give it a whirl: Right, so the DataTrigger isn’t picking up the data on load. On the plus side, changing the status is making the visual states change. So. We’ll cross the ‘Grey’ hurdle in a bit, what about doing the same in the DataTemplate? <Codey Codey/> Grey again, but if we press the button: (I should mention, pressing the button sets the StatusValue property on the DataObj2 being represented to the next colour). Right. Let’s look at this ‘Grey’ issue. First ‘fix’ (and I use the term ‘fix’ in a very loose way): The Dispatcher Fix This involves using the Dispatcher on the View to call something like ‘RefreshProperties’ on the ViewModel, which will in turn raise all the appropriate ‘PropertyChanged’ events on the data objects being represented. So, here goes, into turdcode-ville – population – me: First, add the ‘RefreshProperties’ method to the DataObj2: internal void RefreshProperties() { RaisePropertyChanged("StatusValue"); } (shudder) Now, add it to the hosting ViewModel: public void RefreshProperties() { DataObject2.RefreshProperties(); if (DataObjects != null && DataObjects.Count > 0) { foreach (DataObj2 dataObject in DataObjects) dataObject.RefreshProperties(); } } (double shudder) and now for the cream on the cake, adding the following line to the code behind of the View: Dispatcher.BeginInvoke(() => ((MoreVisualStatesViewModel)DataContext).RefreshProperties()); So, what does this *ahem* code give us: Awesome, it makes the single bound data object show the colour, but frankly ignores the DataTemplate items. This (by the way) is the same output you get from: Dispatcher.BeginInvoke(() => ((MoreVisualStatesViewModel)DataContext).ChangeStatusCommand.Execute(null)); So… Where does that leave me? 
What about adding a button to the Page to refresh the properties – maybe it’s a timer thing? Yes, that works. Right, what about using the Loaded event then eh? Loaded += (s, e) => ((MoreVisualStatesViewModel) DataContext).RefreshProperties(); Ahhh No. What about converting the DataTemplate into a UserControl? Anything is worth a shot.. Though – I still suspect I’m going to have to ‘RefreshProperties’ if I want the rectangles to update. Still. No. This DataTemplate DataTrigger binding is becoming a bit of a pain… I can’t add a ‘refresh’ button to the actual code base, it’s not exactly user friendly. I’m going to end this one now, and put some investigating into the use of the DataStateSwitchBehavior (all the ones I’ve found, well, all 2 of them are working in SL3, but not 4…)

    Read the article

  • How to start a task before networking?

    - by user1252434
    I've written an upstart task that modifies /etc/network/interfaces. (Actually a file sourced into it.) Which start on condition do I need to declare to let my task run before any networking jobs? I've tried start on starting networking, but that's apparently too late. When I log in after booting I can see that the changes were written, but obviously they are not used: the new config states a static IP, but the boot process waits for a non-existent DHCP server (old config) to time out. I've also tried start on starting network-interface INTERFACE=eth0, which didn't work either. IIRC there was an error in the log that the change couldn't be written. Background: I need a VM template that can be cloned and the clones configured through a script. Among other settings, I need to give them a static IP address to access them from the host. I use guestfish to write a config file to one of the virtual disks and let a script apply these settings to the system. I don't want that disk to contain an actual system settings file. I can't modify /etc directly, because that disk is shared (copy-on-write/diff) among the clones and guestfish apparently doesn't support that type of image. I could also let them use DHCP and set up a server that assigns IPs by MAC, but I'm afraid of the complexity. I could also add just another virtual disk for configuration files, but if possible I'd prefer to store settings directly on the system disk image. Used software: Ubuntu Server 12.04, VirtualBox. The configuration modifier is a self-written Ruby script.

    Read the article

  • SQL SERVER – DMV sys.dm_exec_describe_first_result_set_for_object – Describes the First Result Metadata for the Module

    - by pinaldave
    Here is another interesting follow-up to the blog post SQL SERVER – sp_describe_first_result_set New System Stored Procedure in SQL Server 2012. While writing the earlier blog post I came across the DMV sys.dm_exec_describe_first_result_set_for_object as well. I have found that SQL Server 2012 provides many of these quick new features which we often miss learning, and when someone later demonstrates them to us, we express our surprise on the subject. The DMV sys.dm_exec_describe_first_result_set_for_object returns a result set describing the columns returned by a stored procedure. Here is a quick example. Let us first create the stored procedure. USE [AdventureWorks] GO CREATE PROCEDURE [dbo].[CompSP] AS SELECT [DepartmentID] id ,[Name] n ,[GroupName] gn FROM [HumanResources].[Department] GO Now let us run the following two queries against the DMV, which give us the metadata description of the stored procedure passed as a parameter. Option 1: pass the second parameter @include_browse_information as 0. SELECT * FROM sys.dm_exec_describe_first_result_set_for_object ( OBJECT_ID('[dbo].[CompSP]'),0) AS Table1 GO Option 2: pass the second parameter @include_browse_information as 1. SELECT * FROM sys.dm_exec_describe_first_result_set_for_object ( OBJECT_ID('[dbo].[CompSP]'),1) AS Table1 GO Here is the result of Option 1 and Option 2. At first glance there is absolutely no difference between the results: both result sets return the column names as they are aliased in the stored procedure. Scroll to the right, however, and you will notice a clear difference in some columns. In the second result set, source_database, source_schema and a few other columns report the original table instead of NULL values. When @include_browse_information is set to 1, the DMV provides the column details of the underlying table. I have just discovered this DMV and have yet to use it in production code, so I am still finding out where exactly it will be useful. Do you have any ideas? Does anything come to mind where this DMV could be helpful? Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL DMV, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology
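    A quick way to see the difference the second parameter makes is to select just the lineage columns rather than everything. This is a sketch, not from the original post; it assumes the standard column names the DMV exposes (name, system_type_name, source_database, source_schema, source_table, source_column):

      SELECT name, system_type_name, source_database, source_schema, source_table, source_column
      FROM sys.dm_exec_describe_first_result_set_for_object(OBJECT_ID('[dbo].[CompSP]'), 1);
      GO

    With the second argument set to 0 the source_* columns should come back as NULL; with 1 they should point at [HumanResources].[Department].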

    Read the article

  • SQL SERVER – Introduction to PERCENT_RANK() – Analytic Functions Introduced in SQL Server 2012

    - by pinaldave
    SQL Server 2012 introduces the new analytic function PERCENT_RANK(). This function returns the relative standing of a value within a query result set or partition. It is difficult to explain in words alone, so I'd like to explain it through a brief example. Instead of creating a new table, I will be using the AdventureWorks sample database, as most developers use that for experimentation. Now let's have fun with the following query: USE AdventureWorks GO SELECT SalesOrderID, OrderQty, RANK() OVER(ORDER BY SalesOrderID) Rnk, PERCENT_RANK() OVER(ORDER BY SalesOrderID) AS PctDist FROM Sales.SalesOrderDetail WHERE SalesOrderID IN (43670, 43669, 43667, 43663) ORDER BY PctDist DESC GO The above query will give us the following result: Now let us understand the result set. You will notice that I have also included the RANK() function in this query. The reason is that PERCENT_RANK() in fact uses RANK internally to find the relative standing of each row. The formula for PERCENT_RANK() is as follows: PERCENT_RANK() = (RANK() - 1) / (Total Rows - 1) If you want to read more about this function, read here. Now let us attempt the same example with the PARTITION BY clause: USE AdventureWorks GO SELECT SalesOrderID, OrderQty, ProductID, RANK() OVER(PARTITION BY SalesOrderID ORDER BY ProductID ) Rnk, PERCENT_RANK() OVER(PARTITION BY SalesOrderID ORDER BY ProductID ) AS PctDist FROM Sales.SalesOrderDetail s WHERE SalesOrderID IN (43670, 43669, 43667, 43663) ORDER BY PctDist DESC GO You will notice that the same logic is followed within each partition of the result set. I now have a quick question for you – how many of you knew the logic/formula of PERCENT_RANK() before this blog post? Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Pinal Dave, PostADay, SQL, SQL Authority, SQL Function, SQL Query, SQL Scripts, SQL Server, SQL Tips and Tricks, T SQL, Technology
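    To see the formula at work, here is a small sketch (not from the original post) that computes the same value by hand next to PERCENT_RANK(); if the formula above is right, the two computed columns should match row for row:

      USE AdventureWorks
      GO
      SELECT SalesOrderID, OrderQty,
             PERCENT_RANK() OVER (ORDER BY SalesOrderID) AS PctRank,
             -- (RANK() - 1) / (Total Rows - 1), with * 1.0 to force decimal division
             (RANK() OVER (ORDER BY SalesOrderID) - 1) * 1.0
               / (COUNT(*) OVER () - 1) AS PctRankByHand
      FROM Sales.SalesOrderDetail
      WHERE SalesOrderID IN (43670, 43669, 43667, 43663)
      ORDER BY PctRank DESC
      GO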

    Read the article

  • Part 3: Customization Strategy or how long does it take

    - by volker.eckardt(at)oracle.com
    The previous part of this blog should have made us aware that many procedures are required to manage all these steps. To review your status, let me ask you a question: what is your customization strategy? Your answer might be something like, 'customization strategy, well, we have standards and we get requirement documents approved'. Let me ask you another question: how long does it take to redeploy all your customizations into a fresh installation? In 90% of all installations the answer to this question would be: we can't! Although no one would (hopefully) ever have to do it, just thinking about it makes us recognize that today we have too many manual steps involved, different procedures, sometimes undocumented manual steps to complete a customization installation, and, in general, too many customizations. Why is working with customizations often so complicated and time consuming? Here are the key reasons as I have identified them in my projects: customization standards defined but not maintained; different levels of knowledge on the developer side (results get an individual developer's touch); no need to automate deployment (not forced by the client); different documentation styles, not easy to hand over to someone else; different development concepts, difficult for maintenance; just the minimum present for testing, often positive testing only; deviations from naming conventions accepted, although defined; complicated procedures, therefore sometimes partially ignored; and, last but not least, hand-made version control (still). If you had to 'redeploy all your customizations' you would have to: follow all your own standards and best practices; track deviations and define corrective tasks; automate as much as possible and minimize manual tasks; not allow any change in without version control; utilize products to support you in deployment; minimize hand-made scripts and extensive documentation; regularly review the techniques used to guarantee that all are in line with the current release and remain easily maintainable; create solution libraries and force the team to contribute and reuse; define quality activities and execute them; and define a procedure to release customizations. I know, it is easy to write down, but much harder to manage. I will provide some guidelines in my next blog. Volker

    Read the article

  • HTML5 game engine for a 2D or 2.5D RPG style "map walk"

    - by stargazer
    Please help me choose an HTML5 game engine or JavaScript libraries. I want to do the following in the game: when the game starts, a part of a huge map (full size of the map: about 7 screens) is shown. The map itself is completely designed in the editor mapeditor.org (or in some comparable editor - if you know a good alternative to mapeditor.org, let me know) and loaded at runtime or at design time. The game engine should support loading of isometric maps (in the worst case, orthogonal maps only will be sufficient); both the "tile layer" and the "object layer" from mapeditor.org should be supported. Scrolling/performance of this map should be fast enough. The map and the game should be either in 2D (orthogonal map) or in 2.5D (isometric map). The game engine should support movement of sprites with animation. Let's say I have a sprite for a "human" with animation sequences showing "walking" in 8 directions - it should be possible to import it into the game engine and have it "walk" on the map without writing a lot of JavaScript code. Automatic scrolling of the map when the "human" nears the screen border. Collision detection, "solid" objects: mapeditor.org supports properties on tiles. Let's say I assign a "solid" property to some tiles in the editor. It should be easy to check this "solid" property in the game engine and implement a kind of "solid" behaviour, so the animated sprites do not walk through walls. Collision detection - it should be easy to implement some custom functionality like "when sprite A is close to sprite B, call this function". Showing "dialogs" or popup windows on top of the map should be easy to implement. Cross-browser audio support (it is implemented quite well in Construct 2 from Scirra, so I'm looking for comparable audio quality). The game itself is a kind of RPG, but without fighting scenes and without a huge "inventory". The main character just walks on the map, discovers some things, and there are dialogs and sounds. The functionality of this example from sprite.js http://batiste.dosimple.ch/sprite.js/tests/mapeditor/map_reader.html is very close to what I'm developing. But I'm not a JavaScript guru (and a very lazy guy) and would like to write even less JavaScript code than in the example...

    Read the article

  • SQL SERVER – Answer – Value of Identity Column after TRUNCATE command

    - by pinaldave
    Earlier I had a conversation with a reader that almost gave me a headache. I suggest you all read it before continuing with this blog post: SQL SERVER – Reseting Identity Values for All Tables. I believe he faced this situation because he did not understand the difference covered in SQL SERVER – DELETE, TRUNCATE and RESEED Identity, so I wrote a follow-up blog post explaining the difference. I asked a small question in that second blog post and received many interesting comments. Let us go over the question and its answer here one more time. Here is the scenario to set up the puzzle: create a table with identity seed = 11; insert a value and check the seed (it will be 11); reseed it to 1; insert a value and check the seed (it will be 2); TRUNCATE the table; insert a value and check the seed (it will be 11). Let us see the T-SQL script for the same. USE [TempDB] GO -- Create Table CREATE TABLE [dbo].[TestTable]( [ID] [int] IDENTITY(11,1) NOT NULL, [var] [nchar](10) NULL ) ON [PRIMARY] GO -- Build sample data INSERT INTO [TestTable] VALUES ('val') GO -- Select Data SELECT * FROM [TestTable] GO -- Reseed to 1 DBCC CHECKIDENT ('TestTable', RESEED, 1) GO -- Build sample data INSERT INTO [TestTable] VALUES ('val') GO -- Select Data SELECT * FROM [TestTable] GO -- Truncate table TRUNCATE TABLE [TestTable] GO -- Build sample data INSERT INTO [TestTable] VALUES ('val') GO -- Select Data SELECT * FROM [TestTable] GO -- Question for you Here -- Clean up DROP TABLE [TestTable] GO Now let us see the output of the three select statements. 1) First select, after creating the table 2) Second select, after reseeding the table 3) Third select, after truncating the table The reason is simple: if the table contains an identity column, the counter for that column is reset to the seed value defined for the column. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, SQLServer, T SQL, Technology
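    If you want to watch the counter move between the steps above, a check along these lines can be slotted in after each insert. This is a sketch using standard SQL Server commands; it is not part of the original script:

      -- Report the current identity value without changing it
      DBCC CHECKIDENT ('TestTable', NORESEED);
      -- Or query the current value and the defined seed directly
      SELECT IDENT_CURRENT('TestTable') AS CurrentIdentity,
             IDENT_SEED('TestTable') AS SeedValue;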

    Read the article

  • SEM & Adwords: How many clicks without a sale before I should pause a keyword

    - by Thomas Jönsson
    I wonder how many clicks I should optimally let pass through every new keyword I try in Adwords before I conclude that it's not making a profit and should be paused. It's actually four questions. 1: At which likelihood percentile should I pause a word? 2: How many clicks should I let through before I pause a word that does not generate any leads? 3: How many clicks should I let through after one sale before I consider the word not to be profitable? 4: Does the likelihood of the word becoming profitable affect the above? Conditions: the clicks are normally distributed (correct?); a CR of 1% is break-even, everything above is profit (1 sale/100 clicks = break-even); cost per click (CPC) = $4; margin (profit per sale) = $400; payback time = 1 year; average clicks per word = 0.333 per day (121 and 2/3 per year). Example: after 1 click and no sale the keyword still has a high probability of being profitable. After 500 clicks and no sale it has almost no likelihood of being profitable and should probably be paused. Thanks in advance!
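    One way to put numbers on question 2, as a sketch under the stated conditions and the added assumption that each click converts independently at the 1% break-even rate: the probability of seeing no sale at all in n clicks from a keyword that really does convert at 1% is

      P(\text{no sale in } n \text{ clicks} \mid \mathrm{CR} = 1\%) = 0.99^{\,n}, \qquad 0.99^{300} \approx 0.049, \qquad 0.99^{500} \approx 0.0066

    so if a keyword were converting at break-even or better, there would be less than a 5% chance of seeing 300 clicks with no sale, and well under a 1% chance of seeing 500.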

    Read the article

  • Euclidean space and vector magnitude

    - by Starkers
    Below we have distances from the origin calculated in two different ways, giving the Euclidean distance, the Manhattan distance and the Chebyshev distance. Euclidean distance is what we use to calculate the magnitude of vectors in 2D/3D games, and that makes sense to me: let's say we have a vector that gives us the range a spaceship with limited fuel can travel. If we calculated this with the Manhattan metric, our ship could travel a distance of X if it were travelling horizontally or vertically, however the second it attempted to travel diagonally it could only travel X/2! So like I say, Euclidean distance does make sense. However, I still don't quite get how we calculate 'real' distances from the vector's magnitude. Here are two points, purple at (2,2) and green at (3,3). We can subtract one point from the other to derive a vector. Let's create a vector to describe the magnitude and direction of purple from green: |d| = purple - green |d| = (purple.x, purple.y) - (green.x, green.y) |d| = (2, 2) - (3, 3) |d| = <-1,-1> Let's derive the magnitude of the vector via Pythagoras to get a Euclidean measurement: euc_magnitude = sqrt((x*x)+(y*y)) euc_magnitude = sqrt((-1*-1)+(-1*-1)) euc_magnitude = sqrt((1)+(1)) euc_magnitude = sqrt(2) euc_magnitude = 1.41 Now, if the answer had been 1, that would make sense to me, because 1 unit (in the direction described by the vector) from the green is bang on the purple. But it's not. It's 1.41. 1.41 units in the direction described, to me at least, makes us overshoot the purple by almost half a unit: So what do we do to the magnitude to allow us to calculate real distances on our point graph? Worth noting I'm a beginner just working my way through theory. Haven't programmed a game in my life!
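    The step missing from the working above is normalisation: sqrt(2) only looks like an overshoot if it is paired with a vector that already has length sqrt(2). As a quick sketch of the arithmetic, scaling the unit (normalised) direction by the magnitude lands exactly on the purple point:

      \mathbf{d} = \text{purple} - \text{green} = (-1,\,-1), \qquad \lVert\mathbf{d}\rVert = \sqrt{(-1)^2 + (-1)^2} = \sqrt{2} \approx 1.41

      \hat{\mathbf{d}} = \frac{\mathbf{d}}{\lVert\mathbf{d}\rVert} = \left(-\tfrac{1}{\sqrt{2}},\ -\tfrac{1}{\sqrt{2}}\right), \qquad \text{green} + \lVert\mathbf{d}\rVert\,\hat{\mathbf{d}} = (3,3) + \sqrt{2}\left(-\tfrac{1}{\sqrt{2}},\ -\tfrac{1}{\sqrt{2}}\right) = (2,2)

    so 1.41 units along the normalised direction is exactly the distance from green to purple, with no overshoot.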

    Read the article

  • Exadata X3 In-Memory Database Machine: To be or not to be

    - by Luis Moreno Campos
    Since Larry Ellison announced Oracle Exadata X3 as the new generation of the Database Machine, he has positioned the product in the In-Memory Database arena. And that annoyed some people. We all know that In-Memory Databases are the ones that *only* execute in memory and use the other layers of storage for persistence (mainly disk). Oracle Database has always been a technology that uses memory as a caching mechanism, and that hasn't changed, nor will it change with Oracle Database 12c. So this is the central point of fuss when it comes to announcing an Engineered System as an In-Memory Database, when in fact it still runs Oracle Database; not vanilla, but still the same product. Let me tell you, purist people out there: when you find no new ground-breaking point to get all excited about, you decide to bash it and go against its claims. This is not a car manufacturer launching a mini-van and calling it a sports car; we are talking about a fundamental change in the ILM stack: level 2 of caching is now self-sufficient. It's not DRAM? Who cares, it still lets you put into flash amounts of data not possible until now, so I guess Oracle can name it whatever Larry wants, because in the end it's something never done before. Now let's imagine that you hop on the pure In-Memory Database bandwagon. You would be stuck with a database technology that lags hundreds of light years behind Oracle Database in man-hours of innovation and features. Do you really want to travel back in time? Remember, the first rule about time travelling is that "Security is not Guaranteed". Your choice. LMC

    Read the article

  • Is there a process-oriented IDE?

    - by Raveline
    My problem is simple: when I'm programming in an OO paradigm, I often have part of a main business process divided across many classes. This means that if I want to examine the whole functional chain that leads to the output, for debugging or for optimization research, it can be a bit painful. So I was wondering: is there an IDE that lets you put a "process tag" on functions coming from different objects, and gives you a view of all the functions having the same tag? Edit: to give an example (one that I'm making up completely, sorry if it doesn't sound very realistic), let's say we have the following business process for an HR application: receive a holiday request from an employee, check the validity of the request, then give an alert to his boss ("one of those lazy programmers wants another day off"); at the same time, let's say the boss will want a table of all employees' timetables during the time the employee wants his vacation; then handle the boss's answer and send a nice little mail to the employee ("No way, lazy bones"). Even if we get rid of everything not purely business-related (the mail sending process, db handling to get the useful info, GUI functionality, and so on), we still have something that doesn't really fit in "one class". I'd like an IDE that would let me take in quickly, at the very least: the function handling the validation of the request by the employee; the function preparing the "timetable" for the boss; the function handling the validation of the request by the boss. I wouldn't put all those functions in the same class (but perhaps that's my mistake?). This is where my dreamed-of IDE could be helpful.

    Read the article

  • External table and preprocessor for loading LOBs

    - by David Allan
    I was using the COLUMN TRANSFORMS syntax to load LOBs into Oracle using the Oracle external table, which is a handy way of doing several things - from loading LOBs from the filesystem to having constants as fields. In OWB you can use unbound external tables to define an external table using your own arbitrary access parameters - I blogged a while back on using this for preprocessing before it was added into OWB 11gR2. For loading LOBs using the COLUMN TRANSFORMS syntax, have a read through this post on loading CLOB, BLOB or any LOB: the files to load can be specified in a filename field, and the content of each file will be the LOB data. So, using the example from the linked post, you can define the columns, then define the access parameters - if you go the unbound external table route you can put whatever you want in here (your external table get-out-of-jail-free card). This will let you read the LOB files from the filesystem and use the external table in a mapping. Pushing the envelope a little further, I then thought about marrying the preprocessor together with COLUMN TRANSFORMS; this would have let me use, for example, a shell script as the preprocessor which listed the contents of a directory, and then read those files as LOBs via an external table. Unfortunately that doesn't quite work - there is now a bug/enhancement logged, so one day maybe. So I'm afraid my blog title was a little bit of a teaser....
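    To make the idea concrete, here is a minimal sketch of the kind of external table definition involved. The directory object (data_dir), file names and column names are hypothetical, and the exact access parameters should be checked against the linked post and the Oracle Utilities documentation:

      -- doc_file holds a filename per record; COLUMN TRANSFORMS reads that file's
      -- contents in as the doc_body LOB value
      CREATE TABLE documents_ext (
        doc_id   NUMBER,
        doc_body CLOB
      )
      ORGANIZATION EXTERNAL (
        TYPE ORACLE_LOADER
        DEFAULT DIRECTORY data_dir
        ACCESS PARAMETERS (
          RECORDS DELIMITED BY NEWLINE
          FIELDS TERMINATED BY ','
          ( doc_id   CHAR(10),
            doc_file CHAR(200) )
          COLUMN TRANSFORMS ( doc_body FROM LOBFILE (doc_file) FROM (data_dir) CLOB )
        )
        LOCATION ('doc_list.csv')
      );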

    Read the article
