Search Results

Search found 31207 results on 1249 pages for 'atg best practice in industries'.


  • Best approach for unit enemy "awareness" in RTS?

    - by Phil
    I'm using Unity3d to develop an RTS/TD hybrid prototype game. What is the best approach to have "awareness" between units and their enemies? Is it sane to have every unit check the distance to every enemy and engage if within range? The approach I'm going for right now is to have a trigger sphere on every unit. If an enemy enters the trigger, the unit becomes aware of the enemy and starts distance checking. I'm imagining that this would save some unnecessary checks. What's the best practice here (if there's such a thing)? Thanks for reading.

    Read the article

  • Database design and performance impact

    - by Craige
    I have a database design issue that I'm not quite sure how to approach, nor whether the benefits outweigh the costs. I'm hoping some P.SE members can give some feedback on my suggested design, as well as any similar experiences they may have come across. As it goes, I am building an application that has large reporting demands. Speed is an important issue, as there will be peak usages throughout the year. This application/database has a multiple-level, many-to-many relationship, e.g.:

      object a, object b, object c, object d
      object b has a relationship to object a
      object c has a relationship to objects b, a
      object d has a relationship to objects c, b, a

    Theoretically, this could go on for unlimited levels, though logic dictates it could only go so far. My idea here, to speed up reporting, would be to create a syndicate table that acts as a global many-to-many join table. In this table (with the given example), one might see:

      +----------+-----------+---------+
      | child_id | parent_id | type_id |
      +----------+-----------+---------+
      | b        | a         | 1       |
      | c        | b         | 2       |
      | c        | a         | 3       |
      | d        | c         | 4       |
      | d        | b         | 5       |
      | d        | a         | 6       |
      +----------+-----------+---------+

    where a, b, c and d would translate to their respective IDs in their respective tables. So, for ease of reporting all of a which exist on object d, one could query

      SELECT * FROM `syndicates` ... JOINS TO child and parent tables ... WHERE parent_id=a and type_id=6;

    rather than having a query with a join to each level up the chain. The Problem: this table grows exponentially, and in a given year could easily grow past 20,000 records for one client. Given multiple clients over multiple years, this table will VERY quickly explode to millions of records and beyond. Now, the database will, in time, be partitioned across multiple servers, but I would like (as most would) to keep the number of servers as low as possible while still offering flexibility. Also, writes and updates would be exponentially longer (though possibly not noticeable to the end user), as there would be multiple inserts/updates/scans on this table to keep it in sync. Am I going in the right direction here, or am I way off track? What would you do in a similar situation? This solution seems overly complex, but it allows the greatest flexibility and the fastest read operations. Sidenote 1 - This structure allows me to add new levels to the tree easily. Sidenote 2 - The database querying for this database is done through an ORM framework.
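
    To make the syndicate idea concrete, here is a minimal SQL sketch of the join table and the flattened lookup described above; the integer keys, primary key choice and literal IDs are assumptions for illustration, not details from the original post.

      -- Hypothetical closure-style join table: one row per ancestor/descendant pair.
      CREATE TABLE syndicates (
          child_id  INT NOT NULL,   -- ID of the lower-level object (e.g. d)
          parent_id INT NOT NULL,   -- ID of the higher-level object (e.g. a)
          type_id   INT NOT NULL,   -- identifies which relationship this row records
          PRIMARY KEY (child_id, parent_id)
      );

      -- Reporting "all of a which exist on object d" becomes a single indexed lookup,
      -- rather than a query with a join to each level up the chain.
      SELECT s.child_id, s.parent_id, s.type_id
      FROM syndicates AS s
      WHERE s.parent_id = 1   -- hypothetical ID of object a
        AND s.type_id = 6;

    An index on (parent_id, type_id) would support this lookup; the trade-off, as noted above, is the extra inserts and updates needed to keep the table in sync.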

    Read the article

  • First time application where to start?

    - by Nazariy
    After many years of searching and copy-pasting, I'm still looking for a simple solution that can transliterate text input on the fly from one key set to another. There are quite a few online services that provide this feature, but it is still quite annoying to have to go online all the time. Unfortunately there are not many applications left that are capable of doing this, and none of them is supported to this day. I decided to make my own, and at the same time learn something new for myself. The idea is quite simple: the application should sit in the system tray and wait until the input language is changed, for example to Russian. If the Russian language is activated, the application should start listening for the user's keystroke combinations and replace them based on a custom dictionary, for example R = Р, SH = Ш, etc. I should be able to bind the application to any installed language (Russian, Ukrainian, Bulgarian, Belarusian, etc.) and customise the dictionary for any of them. So my questions are: Which language should I choose for this task: C++, C#, or maybe something hardcore like Assembler? The application should work natively with Windows XP/Vista/7 or possibly Mac (cross-platform support is good, but my main target is Windows). Due to the nature of the application's behaviour, how can I tell anti-virus software that it is not a "key logger" and basically not a virus? Where should I start and what should I be aware of? P.S. My current programming knowledge is quite basic: PHP and JavaScript with an object-oriented approach.

    Read the article

  • Friendly URLs: is there a max length for search engines?

    - by Olivier Pons
    People from Stack Overflow have been working closely with the Google team to help them make the Panda algorithm more efficient, so I guess they've learned a lot from the Google team. Thus they may have designed very clever friendly URLs to maximize page rank. I've seen very long URLs on Stack Overflow from time to time (can't find where), but after a certain number of characters there were only numbers, kind of "OK, past this length SEOs will ignore the rest, so let's put only numbers". I've done a huge amount of work on my framework to make very friendly URLs, and my website can come up with URLs like:
    http://www.mysite.fr/recherche/region/provence-alpes-cote-d-azur/departement/bouches-du-rhone/categorie-de-metiers/paramedical/
    It's very long and I'm wondering if the previous URL won't be confused with, say, this one:
    http://www.mysite.fr/recherche/region/provence-alpes-cote-d-azur/departement/bouches-du-rhone/categorie-de-metiers/art/

    Read the article

  • Come Aboard. We're Expecting You...

    - by KKline
    Those of us over a certain age (read - old as dirt) can remember the theme songs to certain TV shows better than we can the National Anthem. Try these lines out and see if you don't immediately remember the tune that goes along with them: Come and knock on our door | We've been waiting for you ... Makin' your way in the world today | Takes everything you've got ... Just some good ol' boys | Never meaning no harm ... Thank you for being a friend | Travel down the road and back again ... So when I...(read more)

    Read the article

  • Technology/Programming mailing lists: how do you manage them?

    - by AdityaGameProgrammer
    Email alerts, blog/forum updates, discussion subscriptions - the general programming/technology update emails that we often subscribe to. Do you actually read them, or do you go straight to the source when you find time? Often a programmer's mailbox is filled with loads of unread subscription mail from technology they were previously following or working on, or from things they wish to follow. Some, or a majority, of these mails just keep piling up. I personally have a few updates that I wish I read but constantly avoid, keep putting off for later, and finally delete in an effort to keep the inbox clean. A few questions come to mind regarding this: Do you keep such mail in separate accounts? Do you read all the mail you have subscribed to? Do you ever unsubscribe from any such email if you aren't reading it? How much do you really value these emails? Lastly, do you keep your inbox clean? I wish to deal with this in a better way.

    Read the article

  • The 50 Best How-To Geek Windows Articles of 2010

    - by The Geek
    Even though we cover plenty of other topics, Windows has always been a primary focus around here, and we’ve got one of the largest collections of Windows-related how-to articles anywhere. Here’s the fifty best Windows articles that we wrote in 2010. Want even more? You should make sure to check out our top 20 How-To Geek Explains topics of 2010, or the 50 Windows Registry hacks that make Windows better.

    Read the article

  • How to concentrate on one project at a time. Divide and Conquer doesn't work for me [closed]

    - by refhat
    Possible Duplicate: Tips for staying focused and motivated on a project I have serious issues concentrating on one project at a time. I can't even follow the divide-and-conquer approach. Once I start a project, I try to get things done as neatly as possible, but very soon I end up messing up so many of its components. I try to divide and conquer, but my approach doesn't work smoothly, and then I wander here and there among other projects. Sometimes I spend many hours on trivial issues, which in fact are not even issues. How do I avoid this and become a smooth developer with a nice workflow around my projects? I tend to lose my concentration on the current project and wander off into another project.

    Read the article

  • Is there a "golden ratio" in coding?

    - by badallen
    My coworkers and I often come up with silly ideas such as adding entries to Urban Dictionary that are inappropriate but completely make sense if you are a developer. Or making rap songs about delegates, reflection or closures in JS... Anyhow, here is what I brought up this afternoon, which was immediately dismissed as a stupid idea. So I want to see if I can get redemption here. My idea is coming up with a Golden Ratio (or something in that neighborhood) between the number of classes per project versus the number of methods/functions per class versus the number of lines per method/function. I know this is silly and borderline, if not completely, useless, but just think of all the legacy methods or classes you have encountered that are absolutely horrid - like methods with 10000 lines or classes with 10000 methods. So Golden Ratio, anyone? :)

    Read the article

  • MSBuild publishing vs. Visual Studio IDE publishing

    - by reggie
    I am currently working with MSBuild to publish my WinForms application based on the environment selected (Dev or Prod). I am using the MSBuild Community Tasks and referencing this article to achieve this purpose. I have a few theoretical doubts about publishing applications. 1) Is there any difference between publishing through the Visual Studio IDE and through MSBuild? 2) What do most developers prefer to use, and why? 3) What are the advantages of using MSBuild to publish an application as compared to publishing through the Visual Studio IDE? 4) Which is faster? I am using a .NET 3.5 WinForms application developed in C#, and my question pertains to ClickOnce Windows applications only. Please help me clear up these doubts.

    Read the article

  • Laptops or Notebooks in a meeting? [closed]

    - by greengit
    Is taking the laptop to the meeting a good idea? Of course, the project leader needs to have one -- but the programmers -- especially those who only need to get straight instructions on what to do next on the project -- do they need to take laptops? I feel it takes longer to save notes in software -- and it's a lot easier to just jot down "things to do" in a simple notebook. That way you can keep up with the discussion and not lose track of what someone else is saying by spending too much time entering text into the machine.

    Read the article

  • Managing .NET External Dependencies

    - by Ben Griswold
    Noah and I continue our screencast series by sharing our approach to managing external dependencies referenced within a .NET solution. This is another introductory episode, but you might find a hidden gem in the short 4-minute clip. ELMAH (Error Logging Modules and Handlers) is the external dependency we focus on in the presentation. If you are not familiar with ELMAH, this episode may be worth your time. YouTube - Managing .NET External Dependencies This is one of our first screencasts. If you have feedback, I’d love to hear it.

    Read the article

  • Announcing: Oracle Enterprise Manager 12c Delivers Advanced Self-Service Automation for Oracle Database 12c Multitenant

    - by Scott McNeil
    New Self-Service Driven Provisioning of Pluggable Databases

    Today Oracle announced new capabilities that support managing the full lifecycle of pluggable database as a service in Oracle Enterprise Manager 12c Release 3 (12.1.0.3). This latest release builds on the existing capabilities to provide advanced automation for deploying database as a service using the Oracle Database 12c Multitenant option. It takes it one step further by offering pluggable database as a service through the Oracle Enterprise Manager 12c self-service portal, providing customers with fast provisioning of database cloud services with minimal time and effort. This is a significant addition to Oracle Enterprise Manager 12c’s existing portfolio of cloud services that includes infrastructure as a service, database as a service, testing as a service, and Java platform as a service. The solution provides a self-service mechanism to provision pluggable databases, allowing users to request and access database(s) on demand. The self-service operations are also enabled through REST APIs, allowing customers to integrate with third-party automation systems or their custom enterprise portals.

    Benefits:
    - Self-service provisioning allows rapid access to pluggable database as a service for hosting or certifying applications on Oracle Database 12c
    - Self-service driven migration to pluggable database as a service in order to migrate a pre-Oracle Database 12c database to a pluggable database as a service model and test the consolidation strategy
    - Single service catalog for all approved pluggable database as a service configurations, which helps customers achieve standardization while catering to all applications and users in the enterprise
    - Resource guarantee via database resource manager (and IORM on Oracle Exadata) that enables deployment of mixed workloads in a shared environment
    - Quota, role based access, and policy based management that enforces governance and reduces administrative overhead
    - Chargeback or showback, which improves metering and accountability for services consumed by each pluggable database
    - Comprehensive REST APIs that support integration with ticketing or change management systems, and/or with other self-service portals
    - Minimal administrative and maintenance overhead through self-managing automation that allows for intelligent placement of pluggable databases

    To understand how pluggable database as a service works, watch this quick demo.

    Read the article

  • MVC Pattern, ViewModels, Location of conversion.

    - by Pino
    I've been working with ASP.Net MVC for around a year now and have created my applications in the following way.
    X.Web - MVC application; contains Controllers and Views
    X.Lib - contains Data Access, Repositories and Services. This allows us to drop the .Lib into any application that requires it.
    At the moment we are using Entity Framework; the conversion from EntityO to a more specific model is done in the controller. This set-up means that if a service method returns an EntityO, the controller will do a conversion before the data is passed to a view. I'm interested to know if I should move the conversion to the Service so that the app doesn't have Entity Objects being passed around.

    Read the article

  • Third Party Applications and Other Acts of Violence Against Your SQL Server

    - by KKline
    I just got finished reading a great blog post from my buddy, Thomas LaRock ( t | b ), in which he describes a useful personal policy he used to track changes made to his SQL Servers when installing third-party products. Note that I'm talking about line-of-business applications here - your inventory management systems and help desk ticketing apps. I'm not talking about monitoring and tuning applications since they, by their very nature, need a different sort of access to your back-end server resources....(read more)

    Read the article

  • Defensive Programming Techniques.

    - by Pemdas
    I was attempting to identify an element of software engineering that I think is overlooked, not emphasized, or not taught in typical undergraduate coursework for CS or SE. What I came up with is the concept of defensive programming. I would like to hear the community's opinions on defensive programming and/or specific techniques that you use on a regular basis. Also, I would like to know if there are any language-specific techniques.

    Read the article

  • Convert ddply {plyr} to Oracle R Enterprise, or use with Embedded R Execution

    - by Mark Hornick
    The plyr package contains a set of tools for partitioning a problem into smaller sub-problems that can be more easily processed. One function within {plyr} is ddply, which allows you to specify subsets of a data.frame and then apply a function to each subset. The result is gathered into a single data.frame. Such a capability is very convenient. The function ddply also has a parallel option that, if TRUE, will apply the function in parallel, using the backend provided by foreach. This type of functionality is available through Oracle R Enterprise using the ore.groupApply function. In this blog post, we show a few examples from Sean Anderson's "A quick introduction to plyr" to illustrate the corresponding functionality using ore.groupApply. To get started, we'll create a demo data set and load the plyr package.

      set.seed(1)
      d <- data.frame(year = rep(2000:2014, each = 3),
                      count = round(runif(45, 0, 20)))
      dim(d)
      library(plyr)

    This first example takes the data frame, partitions it by year, and calculates the coefficient of variation of the count, returning a data frame.

      # Example 1
      res <- ddply(d, "year", function(x) {
        mean.count <- mean(x$count)
        sd.count <- sd(x$count)
        cv <- sd.count/mean.count
        data.frame(cv.count = cv)
      })

    To illustrate the equivalent functionality in Oracle R Enterprise, using embedded R execution, we use the ore.groupApply function on the same data, but pushed to the database, creating an ore.frame. The function ore.push creates a temporary table in the database, returning a proxy object, the ore.frame.

      D <- ore.push(d)
      res <- ore.groupApply(D, D$year, function(x) {
        mean.count <- mean(x$count)
        sd.count <- sd(x$count)
        cv <- sd.count/mean.count
        data.frame(year = x$year[1], cv.count = cv)
      }, FUN.VALUE = data.frame(year = 1, cv.count = 1))

    You'll notice the similarities in the first three arguments. With ore.groupApply, we augment the function to return the specific data.frame we want. We also specify the argument FUN.VALUE, which describes the resulting data.frame. From our previous blog posts, you may recall that by default, ore.groupApply returns an ore.list containing the results of each function invocation. To get a data.frame, we specify the structure of the result. The results in both cases are the same; however, the ore.groupApply result is an ore.frame. In this case the data stays in the database until it's actually required. This can result in significant memory and time savings when the data is large.

      R> class(res)
      [1] "ore.frame"
      attr(,"package")
      [1] "OREbase"
      R> head(res)
         year  cv.count
      1  2000 0.3984848
      2  2001 0.6062178
      3  2002 0.2309401
      4  2003 0.5773503
      5  2004 0.3069680
      6  2005 0.3431743

    To make the ore.groupApply execute in parallel, you can specify the argument parallel with either TRUE, to use default database parallelism, or a specific number, which serves as a hint to the database as to how many parallel R engines should be used. The next ddply example uses the summarise function, which creates a new data.frame. In ore.groupApply, the year column is passed in with the data. Since no automatic creation of columns takes place, we explicitly set the year column in the data.frame result to the value of the first row, since all rows received by the function have the same year.

      # Example 2
      ddply(d, "year", summarise, mean.count = mean(count))

      res <- ore.groupApply(D, D$year, function(x) {
        mean.count <- mean(x$count)
        data.frame(year = x$year[1], mean.count = mean.count)
      }, FUN.VALUE = data.frame(year = 1, mean.count = 1))

      R> head(res)
         year mean.count
      1  2000   7.666667
      2  2001  13.333333
      3  2002  15.000000
      4  2003   3.000000
      5  2004  12.333333
      6  2005  14.666667

    Example 3 uses the transform function with ddply, which modifies the existing data.frame. With ore.groupApply, we again construct the data.frame explicitly, which is returned as an ore.frame.

      # Example 3
      ddply(d, "year", transform, total.count = sum(count))

      res <- ore.groupApply(D, D$year, function(x) {
        total.count <- sum(x$count)
        data.frame(year = x$year[1], count = x$count, total.count = total.count)
      }, FUN.VALUE = data.frame(year = 1, count = 1, total.count = 1))

      > head(res)
         year count total.count
      1  2000     5          23
      2  2000     7          23
      3  2000    11          23
      4  2001    18          40
      5  2001     4          40
      6  2001    18          40

    In Example 4, the mutate function with ddply enables you to define new columns that build on columns just defined. Since the construction of the data.frame using ore.groupApply is explicit, you always have complete control over when and how to use columns.

      # Example 4
      ddply(d, "year", mutate, mu = mean(count), sigma = sd(count),
            cv = sigma/mu)

      res <- ore.groupApply(D, D$year, function(x) {
        mu <- mean(x$count)
        sigma <- sd(x$count)
        cv <- sigma/mu
        data.frame(year = x$year[1], count = x$count, mu = mu, sigma = sigma, cv = cv)
      }, FUN.VALUE = data.frame(year = 1, count = 1, mu = 1, sigma = 1, cv = 1))

      R> head(res)
         year count        mu    sigma        cv
      1  2000     5  7.666667 3.055050 0.3984848
      2  2000     7  7.666667 3.055050 0.3984848
      3  2000    11  7.666667 3.055050 0.3984848
      4  2001    18 13.333333 8.082904 0.6062178
      5  2001     4 13.333333 8.082904 0.6062178
      6  2001    18 13.333333 8.082904 0.6062178

    In Example 5, ddply is used to partition data on multiple columns before constructing the result. Realizing this with ore.groupApply involves creating an index column out of the concatenation of the columns used for partitioning. This example also allows us to illustrate using the ORE transparency layer to subset the data.

      # Example 5
      baseball.dat <- subset(baseball, year > 2000)   # data from the plyr package
      x <- ddply(baseball.dat, c("year", "team"), summarize,
                 homeruns = sum(hr))

    We first push the data set to the database to get an ore.frame. We then add the composite column and perform the subset, using the transparency layer. Since the results from database execution are unordered, we will explicitly sort these results and view the first 6 rows.

      BB.DAT <- ore.push(baseball)
      BB.DAT$index <- with(BB.DAT, paste(year, team, sep = "+"))
      BB.DAT2 <- subset(BB.DAT, year > 2000)
      X <- ore.groupApply(BB.DAT2, BB.DAT2$index, function(x) {
        data.frame(year = x$year[1], team = x$team[1], homeruns = sum(x$hr))
      }, FUN.VALUE = data.frame(year = 1, team = "A", homeruns = 1), parallel = FALSE)
      res <- ore.sort(X, by = c("year", "team"))

      R> head(res)
         year team homeruns
      1  2001  ANA        4
      2  2001  ARI      155
      3  2001  ATL       63
      4  2001  BAL       58
      5  2001  BOS       77
      6  2001  CHA       63

    Our next example is derived from the ggplot function documentation. This illustrates the use of ddply together with the ggplot2 package. We first create a data.frame with demo data and use ddply to create some statistics for each group (gp). We then use ggplot to produce the graph. We can take this same code, push the data.frame df to the database, and invoke this on the database server. The graph will be returned to the client window, as depicted below.

      # Example 6 with ggplot2
      library(ggplot2)
      df <- data.frame(gp = factor(rep(letters[1:3], each = 10)),
                       y = rnorm(30))
      # Compute sample mean and standard deviation in each group
      library(plyr)
      ds <- ddply(df, .(gp), summarise, mean = mean(y), sd = sd(y))
      # Set up a skeleton ggplot object and add layers:
      ggplot() +
        geom_point(data = df, aes(x = gp, y = y)) +
        geom_point(data = ds, aes(x = gp, y = mean),
                   colour = 'red', size = 3) +
        geom_errorbar(data = ds, aes(x = gp, y = mean,
                                     ymin = mean - sd, ymax = mean + sd),
                      colour = 'red', width = 0.4)

      DF <- ore.push(df)
      ore.tableApply(DF, function(df) {
        library(ggplot2)
        library(plyr)
        ds <- ddply(df, .(gp), summarise, mean = mean(y), sd = sd(y))
        ggplot() +
          geom_point(data = df, aes(x = gp, y = y)) +
          geom_point(data = ds, aes(x = gp, y = mean),
                     colour = 'red', size = 3) +
          geom_errorbar(data = ds, aes(x = gp, y = mean,
                                       ymin = mean - sd, ymax = mean + sd),
                        colour = 'red', width = 0.4)
      })

    But let's take this one step further. Suppose we wanted to produce multiple graphs, partitioned on some index column. We replicate the data three times and add some noise to the y values, just to make the graphs a little different. We also create an index column to form our three partitions. Note that we've also specified that this should be executed in parallel, allowing Oracle Database to control and manage the server-side R engines. The result of ore.groupApply is an ore.list that contains the three graphs. Each graph can be viewed by printing the list element.

      df2 <- rbind(df, df, df)
      df2$y <- df2$y + rnorm(nrow(df2))
      df2$index <- c(rep(1, 30), rep(2, 30), rep(3, 30))   # one index value per row of the 90-row df2
      DF2 <- ore.push(df2)
      res <- ore.groupApply(DF2, DF2$index, function(df) {
        df <- df[, 1:2]
        library(ggplot2)
        library(plyr)
        ds <- ddply(df, .(gp), summarise, mean = mean(y), sd = sd(y))
        ggplot() +
          geom_point(data = df, aes(x = gp, y = y)) +
          geom_point(data = ds, aes(x = gp, y = mean),
                     colour = 'red', size = 3) +
          geom_errorbar(data = ds, aes(x = gp, y = mean,
                                       ymin = mean - sd, ymax = mean + sd),
                        colour = 'red', width = 0.4)
      }, parallel = TRUE)
      res[[1]]
      res[[2]]
      res[[3]]

    To recap, we've illustrated how various uses of ddply from the plyr package can be realized in ore.groupApply, which affords the user explicit control over the contents of the data.frame result in a straightforward manner. We've also highlighted how ddply can be used within an ore.groupApply call.

    Read the article

  • How to document/verify consistent layering?

    - by Morten
    I have recently moved to the dark side: I am now a CUSTOMER of software development -- mainly websites. With this new role come new concerns. As a programmer I know how solid an application becomes when it is properly layered, and I want to use this knowledge in my new job. I don't want business logic in my presentation layer, and certainly not presentation stuff in my data layer. Thus, I want to be able to demand from my supplier that they document the level of layering, and how neat and consistent the layering is. The big question is: how is the level of layering documented to me as a customer, and is that a reasonable demand for me to make, so that I don't have to look in the code (I'm not supposed to do that anymore)?

    Read the article

  • Better Programming By Programming Better?

    - by ahmed
    I am not convinced by the idea that developers are either born with it or they are not. Where’s the empirical evidence to support these types of claims? Can a programmer move from, say, the 50th to the 90th percentile? However, most developers are not in the 99th or even 90th percentile (by definition), and thus still have room for improvement in programming ability, along with other important skills. The belief in innate talent is “lacking in hard evidence to substantiate it” as well. So how do I reconcile these seemingly contradictory statements? I think the lesson for software developers who wish to keep on top of their game and become experts is to keep exercising the mind via effortful studying. I read a lot of technical books, but many of them aren’t making me better as a developer.

    Read the article

  • Back in Atlanta! Wed, Feb 9 2011

    - by KKline
    I always enjoy spending time with my friends from Atlanta, as well as meeting folks and making new friends. If you live in the Atlanta area, I hope you'll join me on the evening of Wednesday, February 9th, 2011. Details are at the Atlanta SQL Server user group website. It's common knowledge that I have a terrible memory for many things. However, one of the few things that my memory is usually really good at is remembering names & faces (and remembering stories, but that is another story as well)....(read more)

    Read the article

  • Windows Azure Database (SQL Azure) Development Tip

    - by BuckWoody
    When you create something in the cloud, it's real, and you're charged for it. There are free offerings, and you even get free resources with your Microsoft Developer Network (MSDN) subscription, but there are limits within those. Creating a 1 GB database - even with nothing in it - is a 1 GB Database. If you create it, drop it, and create it again 2 minutes later, that's 2 GB of space you've used for the month. Wait - how do I develop in this kind of situation? With Windows Azure, you can simply install the free Software Development Kit (SDK) and develop your entire application for free - you need never even log in to Windows Azure to code. Once you're done, you simply deploy the app and you start making money from the application as you're paying for it. Windows Azure Databases (The Artist Formerly Known As SQL Azure) is a bit different. It's not emulated in the SDK - because it doesn't have to be. It's just SQL Server, with some differences in feature set. To develop in this environment, you can use SQL Server, any edition. Be aware of the feature differences, of course, but just develop away - even in the free "Express" or LocalDB flavors - and then right-click in SQL Server Management Studio to script objects. Script the database, but change the "Advanced" selection to the Engine Type of "SQL Azure". Bing. Although most all T-SQL ports directly, one thing to keep in mind is that you need a Clustered Index on every table. Often the Primary Key (PK) is a good choice for that.
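
    As a quick illustration of the clustered index point above, here is a minimal T-SQL sketch; the table and column names are invented for the example and are not from the article.

      -- Hypothetical table: SQL Azure requires a clustered index on every table before
      -- rows can be inserted, and the Primary Key is often a good choice for it.
      CREATE TABLE dbo.Orders
      (
          OrderId   INT           NOT NULL,
          Customer  NVARCHAR(50)  NOT NULL,
          OrderDate DATETIME2     NOT NULL,
          CONSTRAINT PK_Orders PRIMARY KEY CLUSTERED (OrderId)
      );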

    Read the article

  • In defense of SELECT * in production code, in some limited cases?

    - by Alexander Kuznetsov
    It is well known that SELECT * is not acceptable in production code, with the exception of this pattern: IF EXISTS( SELECT * We all know that whenever we see code like this:

    Listing 1. "Bad" SQL

      SELECT Column1, Column2
      FROM ( SELECT c.*,
                    ROW_NUMBER() OVER ( PARTITION BY Column1 ORDER BY Column2 ) AS rn
             FROM data.SomeTable AS c
           ) AS c
      WHERE rn < 5

    we are supposed to automatically replace * with an explicit list of columns, as follows:

    Listing 2. "Good" SQL

      SELECT Column1, Column2
      FROM...(read more)
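
    For readers who want the acceptable case spelled out, here is a minimal T-SQL sketch of the IF EXISTS pattern the post refers to; the WHERE clause and the action taken are hypothetical and only illustrate the shape of the check.

      -- Inside EXISTS, SELECT * only tests for the presence of a row;
      -- no columns are returned, so the usual objection to * does not apply.
      IF EXISTS ( SELECT * FROM data.SomeTable AS c WHERE c.Column1 = 1 )
      BEGIN
          PRINT 'At least one matching row exists';
      END;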

    Read the article

  • Serial plans: Threshold / Parallel_degree_limit = 1

    - by jean-pierre.dijcks
    As a very short follow-up on the previous post, here is some more on getting a serial plan and why that happens. Another reason to get a serial plan - besides auto DOP not being on, as we looked at in the earlier post, and often a more prevalent one - is that the plan simply does not take long enough to consider a parallel path. The resulting plan and note look like this (note that this is a serial plan!):

      explain plan for select count(1) from sales;
      SELECT PLAN_TABLE_OUTPUT FROM TABLE(DBMS_XPLAN.DISPLAY());

      Plan hash value: 672559287

      --------------------------------------------------------------------------------------
      | Id  | Operation            | Name  | Rows  | Cost (%CPU)| Time     | Pstart| Pstop |
      --------------------------------------------------------------------------------------
      |   0 | SELECT STATEMENT     |       |     1 |     5   (0)| 00:00:01 |       |       |
      |   1 |  SORT AGGREGATE      |       |     1 |            |          |       |       |
      |   2 |   PARTITION RANGE ALL|       |   960 |     5   (0)| 00:00:01 |     1 |    16 |
      |   3 |    TABLE ACCESS FULL | SALES |   960 |     5   (0)| 00:00:01 |     1 |    16 |
      --------------------------------------------------------------------------------------

      Note
      -----
         - automatic DOP: Computed Degree of Parallelism is 1 because of parallel threshold

      14 rows selected.

    The parallel threshold refers to parallel_min_time_threshold, and since I did not change the default (10s) the plan is not being considered for a parallel degree computation and therefore stays with serial execution. Now we go into the land of crazy: assume I do want this DOP=1 to happen. I could set the parameter in the init.ora, but to highlight it in this case I changed it on the session:

      alter session set parallel_degree_limit = 1;

    The result I get is:

      ERROR:
      ORA-02097: parameter cannot be modified because specified value is invalid
      ORA-00096: invalid value 1 for parameter parallel_degree_limit, must be from among
                 CPU IO AUTO INTEGER>=2

    Which of course makes perfect sense...
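
    If the goal were the opposite - getting such a short-running statement considered for parallelism - one option is to lower the threshold mentioned above. The following is a minimal sketch, assuming automatic DOP is actually enabled via PARALLEL_DEGREE_POLICY; treat it as an illustration rather than a recommendation.

      -- Lower the time threshold (default AUTO, i.e. about 10 seconds) so that even
      -- quick statements are considered for a parallel degree computation.
      alter session set parallel_min_time_threshold = 1;

      explain plan for select count(1) from sales;
      SELECT PLAN_TABLE_OUTPUT FROM TABLE(DBMS_XPLAN.DISPLAY());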

    Read the article

  • Are short identifiers bad?

    - by Daniel C. Sobral
    Are short identifiers bad? How does identifier length correlate with code comprehension? What other factors (besides code comprehension) might be worth considering when it comes to naming identifiers? Just to try to keep the quality of the answers up, please note that there is some research on the subject already! Edit: Curious that everyone either doesn't think length is relevant or tends to prefer longer identifiers, when both links I provided indicate long identifiers are harmful!

    Read the article
