Search Results

Search found 12287 results on 492 pages for 'column oriented'.

Page 159 of 492

  • A new mission statement for my school's algorithms class

    - by Eric Fode
    The teacher now running the algorithms course at Eastern Washington University is new to Eastern, and as a result the course has changed drastically, mostly in the right direction. That being said, I feel that the class could use a more specific, industry-oriented direction (since that is where most students will go, though suggestions for an academia-oriented class are also welcome). Having only worked in industry for two years, I would like the community's opinion (a wider, much more collectively experienced, and in the end plausibly more credible one) on the quality of the following statement of the purpose of an algorithms class, and, if I am completely off target, your suggestion for the purpose of a required junior-level algorithms class that is standalone (so no other classes focusing specifically on algorithms are required). The statement is as follows: The purpose of the algorithms class is to do three things. Primarily, to teach how to learn, do basic analysis of, and implement a given algorithm found outside of the class. Secondly, to teach the student how to model a problem in their mind so that they can find an existing algorithm or have a direction to start the development of a new one. Third, to overview a variety of algorithms that exist and to deeply understand and analyze one algorithm in each of the basic algorithmic design strategies: Divide and Conquer, Reduce and Conquer, Transform and Conquer, Greedy, Brute Force, Iterative Improvement and Dynamic Programming. The question, in short, is: do you agree with this statement of the purpose of an algorithms course, so that it would be useful in the real world, and if not, what would you suggest?

    Read the article

  • Building Queries Systematically

    - by Jeremy Smyth
    The SQL language is a bit like a toolkit for data. It consists of lots of little fiddly bits of syntax that, taken together, allow you to build complex edifices and return powerful results. For the uninitiated, the many tools can be quite confusing, and it's sometimes difficult to decide how to go about the process of building non-trivial queries, that is, queries that are more than a simple SELECT a, b FROM c;

    A System for Building Queries

    When you're building queries, you could use a system like the following:

    1. Decide which fields contain the values you want to use in your output, and how you wish to alias those fields:
       - Values you want to see in your output.
       - Values you want to use in calculations. For example, to calculate margin on a product, you could calculate price - cost and give it the alias margin.
       - Values you want to filter with. For example, you might only want to see products that weigh more than 2Kg or that are blue. The weight or colour columns could contain that information.
       - Values you want to order by. For example, you might want the most expensive products first and the least expensive last. You could use the price column in descending order to achieve that.
    2. Assuming the fields you've picked in point 1 are in multiple tables, find the connections between those tables:
       - Look for relationships between tables and identify the columns that implement those relationships. For example, the Orders table could have a CustomerID field referencing the same column in the Customers table.
       - Sometimes the problem doesn't use relationships but rests on a different field; sometimes the query is looking for a coincidence of fact rather than a foreign key constraint. For example, you might have sales representatives who live in the same state as a customer; this information is normally not used in relationships, but if your query is for organizing events where sales representatives meet customers, it's useful in that query. In such a case you would record the names of columns at either end of such a connection.
       - Sometimes relationships require a bridge, a junction table that wasn't identified in point 1 above but is needed to connect tables you need; these are used in "many-to-many relationships". In these cases you need to record the columns in each table that connect to similar columns in other tables.
    3. Construct a join or series of joins using the fields and tables identified in point 2 above. This becomes your FROM clause.
    4. Filter using some of the fields in point 1 above. This becomes your WHERE clause.
    5. Construct an ORDER BY clause using values from point 1 above that are relevant to the desired order of the output rows.
    6. Project the result using the remainder of the fields in point 1 above. This becomes your SELECT clause.

    A Worked Example

    Let's say you want to query the world database to find a list of countries (with their capitals) and the change in GNP, using the difference between the GNP and GNPOld columns, and that you only want to see results for countries with a population greater than 100,000,000. Using the system described above, we could do the following:

    The Country.Name and City.Name columns contain the name of the country and city respectively. The change in GNP comes from the calculation GNP - GNPOld. Both those columns are in the Country table. This calculation is also used to order the output, in descending order. To see only countries with a population greater than 100,000,000, you need the Population field of the Country table. There is also a Population field in the City table, so you'll need to specify the table name to disambiguate. You can also represent a number like 100 million as 100e6 instead of 100000000 to make it easier to read.

    Because the fields come from the Country and City tables, you'll need to join them. There are two relationships between these tables: each city is hosted within a country, and the city's CountryCode column identifies that country; also, each country has a capital city, whose ID is contained within the country's Capital column. This latter relationship is the one to use, so the relevant columns and the condition that uses them are represented by the following FROM clause:

        FROM Country JOIN City ON Country.Capital = City.ID

    The statement should only return countries with a population greater than 100,000,000. Country.Population is the relevant column, so the WHERE clause becomes:

        WHERE Country.Population > 100e6

    To sort the result set in reverse order of difference in GNP, you could use either the calculation, or the position in the output (it's the third column):

        ORDER BY GNP - GNPOld   or   ORDER BY 3

    Finally, project the columns you wish to see by constructing the SELECT clause:

        SELECT Country.Name AS Country, City.Name AS Capital,
               GNP - GNPOld AS `Difference in GNP`

    The whole statement ends up looking like this:

        mysql> SELECT Country.Name AS Country, City.Name AS Capital,
            -> GNP - GNPOld AS `Difference in GNP`
            -> FROM Country JOIN City ON Country.Capital = City.ID
            -> WHERE Country.Population > 100e6
            -> ORDER BY 3 DESC;
        +--------------------+------------+-------------------+
        | Country            | Capital    | Difference in GNP |
        +--------------------+------------+-------------------+
        | United States      | Washington |         399800.00 |
        | China              | Peking     |          64549.00 |
        | India              | New Delhi  |          16542.00 |
        | Nigeria            | Abuja      |           7084.00 |
        | Pakistan           | Islamabad  |           2740.00 |
        | Bangladesh         | Dhaka      |            886.00 |
        | Brazil             | Brasília   |          -27369.00 |
        | Indonesia          | Jakarta    |         -130020.00 |
        | Russian Federation | Moscow     |         -166381.00 |
        | Japan              | Tokyo      |         -405596.00 |
        +--------------------+------------+-------------------+
        10 rows in set (0.00 sec)

    Queries with Aggregates and GROUP BY

    While this system might work well for many queries, it doesn't cater for situations where you have complex summaries and aggregation. For aggregation, you'd start with choosing which columns to view in the output, but this time you'd construct them as aggregate expressions. For example, you could look at the average population, or the count of distinct regions. You could also perform more complex aggregations, such as the average of GNP per head of population calculated as AVG(GNP/Population). Having chosen the values to appear in the output, you must choose how to aggregate those values. A useful way to think about this is that every aggregate query is of the form X, Y per Z. The SELECT clause contains the expressions for X and Y, as already described, and Z becomes your GROUP BY clause. Ordinarily you would also include Z in the query so you see how you are grouping, so the output becomes Z, X, Y per Z.

    As an example, consider the following, which shows a count of countries and the average population per continent:

        mysql> SELECT Continent, COUNT(Name), AVG(Population)
            -> FROM Country
            -> GROUP BY Continent;
        +---------------+-------------+-----------------+
        | Continent     | COUNT(Name) | AVG(Population) |
        +---------------+-------------+-----------------+
        | Asia          |          51 |   72647562.7451 |
        | Europe        |          46 |   15871186.9565 |
        | North America |          37 |   13053864.8649 |
        | Africa        |          58 |   13525431.0345 |
        | Oceania       |          28 |    1085755.3571 |
        | Antarctica    |           5 |          0.0000 |
        | South America |          14 |   24698571.4286 |
        +---------------+-------------+-----------------+
        7 rows in set (0.00 sec)

    In this case, X is the number of countries, Y is the average population, and Z is the continent. Of course, you could have more fields in the SELECT clause, and more fields in the GROUP BY clause as you require. You would also normally alias columns to make the output more suited to your requirements.

    More Complex Queries

    Queries can get considerably more interesting than this. You could also add joins and other expressions to your aggregate query, as in the earlier part of this post. You could have more complex conditions in the WHERE clause. Similarly, you could use queries such as these in subqueries of yet more complex super-queries. Each technique becomes another tool in your toolbox, until before you know it you're writing queries across 15 tables that take two pages to write out. But that's for another day...

    Read the article

  • Graduate expectations versus reality

    - by Bobby Tables
    When choosing what we want to study, and do with our careers and lives, we all have some expectations of what it is going to be like. Now that I've been in the industry for almost a decade, I've been reflecting a bit on what I thought (back when I was studying Computer Science) working life as a programmer was going to be like, and how it's actually turning out to be. My two biggest shocks (or should I say, broken expectations) by far are the sheer amount of maintenance work involved in software, and the overall lack of professionalism:
    Maintenance: At uni, we were all told that the majority of software work is maintenance of existing systems. So I knew to expect this in the abstract. But I never imagined exactly how overwhelming this would turn out to be. Perhaps it's something I mentally glazed over, and hoped I'd be building cool new stuff from scratch a lot more. But it really is the case that most jobs are overwhelmingly maintenance, bug fixing, and support oriented.
    Lack of professionalism: At uni, I always had the impression that commercial software work is very process-oriented and stringently engineered. I had images of ISO processes, reams of technical documentation, every feature and bug being strictly documented, and a generally professional environment. It came as a huge shock to realise that most software companies operate no differently to a team of students working on a large semester-long project. And I've worked in both the small agile hack shop, and the medium sized corporate enterprise. While I wouldn't say that it's always been outright "unprofessional", it definitely feels like the software industry (on the whole) is far from the strong engineering discipline that I expected it to be.
    Has anyone else had similar experiences to this? What are the ways in which your expectations of what our profession would be like were different to the reality?

    Read the article

  • SOA Forcing A Shift In IT Governance

    As more and more companies adopt a service oriented approach to developing and maintaining existing enterprise systems, IT governance also needs to shift its philosophies to fit the emerging development paradigm. When I first started programming, companies placed an emphasis on a "Code and Go" style of software development. They only developed for current problems and did not really look at how the company could leverage some of the code we were developing across the entire enterprise system. The concept of Service Oriented Architecture (SOA) has dramatically shifted how we develop enterprise software by emphasizing software processes as company assets. This has driven some to start developing new components as processes strictly for the possibility of future integration of existing and new systems. I personally like this new paradigm because it truly promotes code reusability. However, most enterprise-level IT governance policies were created prior to the introduction of SOA in their respective organizations. This can create a sense of the Wild West for developers working on projects related to SOA, because a lot of the standards and policies implemented by enterprise IT governing boards were initially written for developing under the "Code and Go" paradigm and do not take into account the idiosyncrasies found in SOA/integration-based development. As IT governance moves forward, its focus should aim more for a "Develop to Integrate" philosophy versus a "Code and Go" one. Examples of the "Develop to Integrate" philosophy:
    - Defining preferred data transfer methodologies (XML vs. JSON), and when to use them
    - Updating security best practices for exposing public services based on existing standard security policies
    - Defining when to create a new SOA project vs. implementing localized components that could be reused elsewhere in the enterprise

    Read the article

  • Today's Links (6/24/2011)

    - by Bob Rhubart
    Fusion Applications - How we look at the near future | Domien Bolmers
    Bolmers recaps a Logica pow-wow around Fusion Applications.
    Who invented e-mail? | Nicholas Carr
    IT apparently does matter to Nicholas Carr as he shares links to Errol Morris's 5-part NYT series about the origins of email.
    David Sprott's Blog: Service Oriented Cloud (SOC)
    "Whilst all the really good Cloud environments are Service Oriented," says Sprott, "it's very much the minority of consumer SaaS that is today."
    Fast, Faster, JRockit | René van Wijk
    Oracle ACE René van Wijk tells you "everything you ever wanted to know about the JRockit JVM, well quite a lot anyway."
    Creating an XML document based on my POJO domain model – how will JAXB help me? | Lucas Jellema
    "I thought that adding a few JAXB annotations to my existing POJO model would do the trick," says Jellema, "but no such luck."
    Announcing Oracle Environmental Accounting and Reporting | Theresa Hickman
    Oracle Environmental Accounting and Reporting is designed to help companies track and report greenhouse emissions.
    Yoga framework for REST-like partial resource access | William Vambenepe
    Vambenepe says: "A tweet by Stefan Tilkov brought Yoga to my attention, 'a framework for supporting REST-like URI requests with field selectors.'"
    InfoQ: Pragmatic Software Architecture and the Role of the Architect
    "Joe Wirtley introduces software architecture and the role of the architect in software development along with techniques, tips and resources to help one get started thinking as an architect."

    Read the article

  • Silverlight 4 &ndash; Coded UI Framework Video Tutorial

    - by mbcrump
    With the release of Visual Studio 2010 Feature Pack 2, Microsoft included the Coded UI Test framework. With this release it is possible to create automated tests with just a few mouse clicks. This is a very powerful feature that all Silverlight developers need to learn. Instead of my normal blog post, I have created a video tutorial that walks you through it starting from "File" –> New Project. I hope you enjoy and please leave feedback. Video Tutorial (short 9 minute video): Slides from the demo (only 3): Silverlight 4 – Coded UI Testing. Below is the code for the MainPage.xaml that was used in the demo. For the sake of time, I did not go into the AutomationProperties.Name that I used for the TextBox or Button; I added that for each element.

        <Grid x:Name="LayoutRoot" Background="White" Height="100" Width="350">
            <Grid.ColumnDefinitions>
                <ColumnDefinition/>
                <ColumnDefinition/>
            </Grid.ColumnDefinitions>
            <Grid.RowDefinitions>
                <RowDefinition/>
                <RowDefinition/>
            </Grid.RowDefinitions>
            <TextBlock Padding="15" Grid.Column="0" TextAlignment="Right">Name</TextBlock>
            <TextBox AutomationProperties.Name="txtAP" Grid.Column="1" Height="25" TextAlignment="Right" Name="txtName" />
            <Button AutomationProperties.Name="btnAP" Grid.Row="1" Grid.Column="1" Content="Click for Name" x:Name="btnMessage" Click="btnMessage_Click" />
        </Grid>

    Subscribe to my feed

    Read the article

  • Maximum Length Of IP Address: 15 (IPv4) & 39(IPv6)

    - by Gopinath
    Problem: You are designing a database table for a web application that needs to store the IP address of users who visit the site. The IP address must be stored as character data in the table. To define the size of the character column you need to know the maximum length of an IP address. So, what is the maximum length of an IP address?
    Solution: An IPv4 address in its longest form looks like this:
        255.255.255.255
    To store an IPv4 address you require 15 characters. An IPv6 address is written as groups of 4 hex digits separated by colons, like the one below:
        2001:0db8:85a3:0000:0000:8a2e:0370:7334
    To store an IPv6 address you require a column 39 characters long.
    Conclusion: As both IPv4 and IPv6 are in common use, it is better to define a column 39 characters long so that addresses in either format can be saved to the table without any issues. This article titled, Maximum Length Of IP Address: 15 (IPv4) & 39 (IPv6), was originally published at Tech Dreams.
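    A quick way to sanity-check those two numbers is to measure the longest textual form of each address family. The snippet below is purely illustrative (it is not part of the original article):

        # Illustrative sketch: confirm the maximum textual lengths of IPv4 and
        # IPv6 addresses before sizing a character column.
        longest_ipv4 = "255.255.255.255"
        longest_ipv6 = "2001:0db8:85a3:0000:0000:8a2e:0370:7334"  # fully expanded, no "::" compression

        assert len(longest_ipv4) == 15   # 4 groups of 3 digits + 3 dots
        assert len(longest_ipv6) == 39   # 8 groups of 4 hex digits + 7 colons
        print(len(longest_ipv4), len(longest_ipv6))  # 15 39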

    Read the article

  • What's the best practice to do SOA exception handling?

    - by sun1991
    Here's an interesting debate going on between me and my colleague about how to handle SOA exceptions: On one side, I support what Juval Lowy said in Programming WCF Services, 3rd Edition: As stated at the beginning of this chapter, it is a common illusion that clients care about errors or have anything meaningful to do when they occur. Any attempt to bake such capabilities into the client creates an inordinate degree of coupling between the client and the object, raising serious design questions. How could the client possibly know more about the error than the service, unless it is tightly coupled to it? What if the error originated several layers below the service—should the client be coupled to those low-level layers? Should the client try the call again? How often and how frequently? Should the client inform the user of the error? Is there a user? By having all service exceptions be indistinguishable from one another, WCF decouples the client from the service. The less the client knows about what happened on the service side, the more decoupled the interaction will be. On the other side, here's what my colleague suggests: I believe it's simply incorrect, as it does not align with best practices in building a service oriented architecture and it ignores the general idea that there are problems that users are able to recover from, such as not keying a value correctly. If we considered only systems exceptions, perhaps this idea holds, but systems exceptions are only part of the exception domain. User recoverable exceptions are the other part of the domain and are likely to happen on a regular basis. I believe the correct way to build a service oriented architecture is to map user recoverable situations to checked exceptions, then to marshall each checked exception back to the client as a unique exception that client application programmers are able to handle appropriately. Marshall all runtime exceptions back to the client as a system exception, along with the stack trace so that it is easy to troubleshoot the root cause. I'd like to know what you think about this. Thank you.
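    As a concrete illustration of the colleague's position, here is a minimal sketch (Python; all class and function names are my own invention, not anything from WCF or the post) of a service boundary that lets user-recoverable faults pass through as distinct types while collapsing everything else into an opaque system fault:

        class UserRecoverableFault(Exception):
            """Base class for faults the caller can reasonably act on."""

        class InvalidQuantityFault(UserRecoverableFault):
            pass

        class SystemFault(Exception):
            """Opaque fault for unexpected server-side errors."""

        def service_boundary(handler):
            def wrapper(*args, **kwargs):
                try:
                    return handler(*args, **kwargs)
                except UserRecoverableFault:
                    raise                                          # marshal as-is: the client can recover
                except Exception as exc:
                    raise SystemFault("internal error") from exc   # hide server-side details
            return wrapper

        @service_boundary
        def place_order(quantity):
            if quantity <= 0:
                raise InvalidQuantityFault("quantity must be positive")
            return "ok"

        try:
            place_order(0)
        except UserRecoverableFault as fault:
            print("client can recover:", fault)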

    Read the article

  • Using Python to traverse a parent-child data set

    - by user132748
    I have a dataset of two columns in a csv file. The purpose of this dataset is to provide a link between two different IDs if they belong to the same person, e.g. 2, 3 and 5 belong to 1:
        COLA COLB
        1    2
        1    3
        1    5
        2    6
        3    7
        9    10
    In the above example 1 is linked to 2, 3 and 5, 2 is linked to 6, and 3 is linked to 7. What I am trying to achieve is to identify all records which are linked to 1 directly (2, 3, 5) or indirectly (6, 7), be able to say that these IDs in column B belong to the same person as 1 in column A, and then either dedupe or add a new column to the output file which will have 1 populated for all rows that link to 1. Example of expected output:
        COLA COLB GroupField
        1    2    1
        1    3    1
        1    5    1
        2    6    1
        3    7    1
        9    10   9
        10   11   9
    I am a newbie and so am not sure how to approach this problem. Appreciate any inputs you can provide.
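    One straightforward way to do this in Python is to treat each (COLA, COLB) pair as a parent-to-child edge and walk out from the root IDs. The sketch below is only illustrative; the file name, the presence of a header row, and the exact column layout are assumptions:

        import csv
        from collections import defaultdict

        children = defaultdict(list)
        child_ids = set()
        rows = []

        with open("links.csv", newline="") as f:              # assumed file name
            for record in csv.reader(f):
                if not record or record[0] == "COLA":          # skip blanks and a header row
                    continue
                a, b = record[0].strip(), record[1].strip()
                rows.append((a, b))
                children[a].append(b)
                child_ids.add(b)

        # Roots are IDs that appear in column A but never in column B.
        roots = {a for a, _ in rows} - child_ids

        # Walk from each root and label everything reachable with that root's ID.
        group = {}
        for root in roots:
            stack = [root]
            while stack:
                node = stack.pop()
                if node in group:
                    continue
                group[node] = root
                stack.extend(children.get(node, []))

        for a, b in rows:
            print(a, b, group.get(a, a))                       # COLA, COLB, GroupField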

    Read the article

  • Displaying a Paged Grid of Data in ASP.NET MVC

    This article demonstrates how to display a paged grid of data in an ASP.NET MVC application and builds upon the work done in two earlier articles: Displaying a Grid of Data in ASP.NET MVC and Sorting a Grid of Data in ASP.NET MVC. Displaying a Grid of Data in ASP.NET MVC started with creating a new ASP.NET MVC application in Visual Studio, then added the Northwind database to the project and showed how to use Microsoft's Linq-to-SQL tool to access data from the database. The article then looked at creating a Controller and View for displaying a list of product information (the Model). Sorting a Grid of Data in ASP.NET MVC enhanced the application by adding a view-specific Model (ProductGridModel) that provided the View with the sorted collection of products to display along with sort-related information, such as the name of the database column the products were sorted by and whether the products were sorted in ascending or descending order. The Sorting a Grid of Data in ASP.NET MVC article also walked through creating a partial view to render the grid's header row so that each column header was a link that, when clicked, sorted the grid by that column. In this article we enhance the view-specific Model (ProductGridModel) to include paging-related information: the current page being viewed, how many records to show per page, and how many total records are being paged through. Next, we create an action in the Controller that efficiently retrieves the appropriate subset of records to display, and then complete the exercise by building a View that displays that subset and includes a paging interface that allows the user to step to the next or previous page, or to jump to a particular page number. To render the numeric paging interface, we create and use a partial view. As with its predecessors, this article offers step-by-step instructions and includes a complete, working demo available for download at the end of the article. Read on to learn more!
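    The paging bookkeeping the article describes is language-agnostic. As a rough sketch (in Python, not the article's C#/LINQ code; all names here are mine), the numbers a pager needs can be derived like this:

        import math

        def page_slice(items, page_index, page_size):
            """Return one page of records plus the numbers a paging interface needs."""
            total_records = len(items)
            page_count = max(1, math.ceil(total_records / page_size))
            page_index = min(max(page_index, 1), page_count)    # clamp to a valid page
            start = (page_index - 1) * page_size
            return {
                "items": items[start:start + page_size],
                "page_index": page_index,
                "page_count": page_count,
                "total_records": total_records,
                "has_previous": page_index > 1,
                "has_next": page_index < page_count,
            }

        products = [f"Product {n}" for n in range(1, 78)]       # 77 fake records
        print(page_slice(products, page_index=3, page_size=10)["items"])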

    Read the article

  • Can higher-order functions in FP be interpreted as some kind of dependency injection?

    - by Giorgio
    According to this article, in object-oriented programming / design, dependency injection involves a dependent consumer, a declaration of a component's dependencies defined as interface contracts, and an injector that creates instances of classes that implement a given dependency interface on request. Let us now consider a higher-order function in a functional programming language, e.g. the Haskell function filter :: (a -> Bool) -> [a] -> [a] from Data.List. This function transforms a list into another list and, in order to perform its job, it uses (consumes) an external predicate function that must be provided by its caller; e.g. the expression filter (\x -> (mod x 2) == 0) [1, 2, 3, 4, 5, 6, 7, 8, 9, 10] selects all even numbers from the input list. But isn't this construction very similar to the pattern illustrated above, where the filter function is the dependent consumer, the signature (a -> Bool) of the function argument is the interface contract, and the expression that uses the higher-order function is the injector that, in this particular case, injects the implementation (\x -> (mod x 2) == 0) of the contract? More in general, can one relate higher-order functions and their usage pattern in functional programming to the dependency injection pattern in object-oriented languages? Or in the inverse direction, can dependency injection be compared to using some kind of higher-order function?
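    The same shape is easy to see outside Haskell. Below is a small illustrative sketch in Python (my own example, not from the post): the consumer depends only on the contract of the injected callable, and the caller supplies the concrete implementation.

        from typing import Callable, Iterable, List, TypeVar

        T = TypeVar("T")

        def keep(predicate: Callable[[T], bool], items: Iterable[T]) -> List[T]:
            """Dependent consumer: knows nothing about how the predicate is implemented."""
            return [item for item in items if predicate(item)]

        is_even = lambda x: x % 2 == 0            # the injected implementation of the contract
        print(keep(is_even, range(1, 11)))        # [2, 4, 6, 8, 10]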

    Read the article

  • Why is my query soooooo slow?

    - by geekrutherford
    A stored procedure used in our production environment recently became so slow that it caused the calling web service to begin timing out. When running the stored procedure in Query Analyzer it took nearly 3 minutes to complete. The stored procedure itself does little more than create a small bit of dynamic SQL which calls a view with a where clause at the end. At first the thought was that the query used within the view needed to be optimized. The query is quite long and it is therefore easy to jump to this conclusion. Fortunately, after bringing the issue to the attention of a coworker, they asked "is there a where clause, and if so, is there an index on the column(s) in it?" I had no idea and quickly said as much. A quick check on the table/column utilized in the where clause indicated that indeed there was no index. Before adding the index, and after admitting I am no SQL wiz, I checked the internet for info on the difference between clustered and non-clustered indexes. I found the following site quite helpful: OdeToCode. After adding the non-clustered index on the column, the query that used to take nearly 3 minutes now takes 10 seconds! Ah, if only I'd thought to do this ahead of time!

    Read the article

  • What's a way to implement a flexible buff/debuff system?

    - by gkimsey
    Overview: Lots of games with RPG-like statistics allow for character "buffs", ranging from simple "Deal 25% extra damage" to more complicated things like "Deal 15 damage back to attackers when hit." The specifics of each type of buff aren't really relevant. I'm looking for a (presumably object-oriented) way to handle arbitrary buffs. Details: In my particular case, I have multiple characters in a turn-based battle environment, so I envisioned buffs being tied to events like "OnTurnStart", "OnReceiveDamage", etc. Perhaps each buff is a subclass of a main Buff abstract class, where only the relevant events are overloaded. Then each character could have a vector of buffs currently applied. Does this solution make sense? I can certainly see dozens of event types being necessary, it feels like making a new subclass for each buff is overkill, and it doesn't seem to allow for any buff "interactions". That is, I might want to implement a cap on damage boosts so that even if you had 10 different buffs which all give 25% extra damage, you only do 100% extra instead of 250% extra. And there are more complicated situations that ideally I could control. I'm sure everyone can come up with examples of how more sophisticated buffs can potentially interact with each other in a way that as a game developer I may not want. As a relatively inexperienced C++ programmer (I generally have used C in embedded systems), I feel like my solution is simplistic and probably doesn't take full advantage of the object-oriented language. Thoughts? Has anyone here designed a fairly robust buff system before?
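    For what it's worth, here is a rough sketch (in Python rather than C++, and entirely my own naming) of the event-hook idea, including one way a cap on stacked damage boosts could be applied when the character folds its buffs together:

        class Buff:
            def damage_dealt_bonus(self):                 # fraction of extra damage
                return 0.0
            def on_receive_damage(self, attacker, amount):
                pass                                      # default hooks do nothing

        class DamageBoost(Buff):
            def __init__(self, bonus):
                self.bonus = bonus
            def damage_dealt_bonus(self):
                return self.bonus

        class Thorns(Buff):
            def __init__(self, reflect):
                self.reflect = reflect
            def on_receive_damage(self, attacker, amount):
                attacker.hp -= self.reflect               # deal damage back to the attacker

        class Character:
            DAMAGE_BONUS_CAP = 1.0                        # never more than +100%

            def __init__(self, hp):
                self.hp = hp
                self.buffs = []

            def attack_damage(self, base):
                bonus = sum(b.damage_dealt_bonus() for b in self.buffs)
                return base * (1 + min(bonus, self.DAMAGE_BONUS_CAP))

            def receive_damage(self, attacker, amount):
                self.hp -= amount
                for b in self.buffs:
                    b.on_receive_damage(attacker, amount)

        hero, ogre = Character(hp=100), Character(hp=80)
        hero.buffs += [DamageBoost(0.25) for _ in range(10)] + [Thorns(15)]
        print(hero.attack_damage(40))       # 80.0: ten +25% buffs capped at +100%
        hero.receive_damage(ogre, 10)       # hero takes 10, Thorns deals 15 back
        print(hero.hp, ogre.hp)             # 90 65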

    Read the article

  • Example of DOD design

    - by Jeffrey
    I can't seem to find a nice explanation of Data Oriented Design for a generic zombie game (it's just an example, and a pretty common one). Could you make an example of Data Oriented Design applied to creating a generic zombie class? Is the following good?

    Zombie list class:

        class ZombieList {
            GLuint vbo;                        // generic zombie vertex model
            std::vector<color> colors;         // objects' default colors
            std::vector<texture> textures;     // objects' textures
            std::vector<vector3D> positions;   // objects' positions
        public:
            unsigned int create(); // return object id
            void move(unsigned int objId, vector3D offset);
            void rotate(unsigned int objId, float angle);
            void setColor(unsigned int objId, color c);
            void setPosition(unsigned int objId, color c);
            void setTexture(unsigned int, unsigned int);
            ...
            void update(Player*); // move towards player, attack if near
        }

    Example:

        Player p;
        Zombielist zl;
        unsigned int first = zl.create();
        zl.setPosition(first, vector3D(50, 50));
        zl.setTexture(first, texture("zombie1.png"));
        ...
        while (running) { // main loop
            ...
            zl.update(&p);
            zl.draw(); // draw every zombie
        }

    Or would creating a generic World container be a good example? One that contains every action from bite(zombieId, playerId) to moveTo(playerId, vector) to createPlayer() to shoot(playerId, vector) to face(radians)/face(vector), and contains:

        std::vector<zombie>
        std::vector<player>
        ...
        std::vector<mapchunk>
        ...
        std::vector<vbobufferid> player_run_animation;
        ...

    What's the proper way to organize a game with DOD?

    Read the article

  • T-SQL select where and group by date

    - by bconlon
    T-SQL has never been my favorite language, but I need to use it on a fairly regular basis and every time I seem to Google the same things. So if I add it here, it might help others with the same issues, but it will also save me time later as I will know where to look for the answers!!

    1. How do I SELECT FROM WHERE to filter on a DateTime column? As it happens this is easy but I always forget. You just put the DATE value in single quotes and in standard format:

        SELECT StartDate
        FROM Customer
        WHERE StartDate >= '2011-01-01'
        ORDER BY StartDate

    2. How do I then GROUP BY and get a count by StartDate? Bit trickier, but you can use the built in DATEADD and DATEDIFF to set the TIME part to midnight, allowing the GROUP BY to have a consistent value to work on:

        SELECT DATEADD(d, DATEDIFF(d, 0, StartDate), 0) [Customer Creation Date],
               COUNT(*) [Number Of New Customers]
        FROM Customer
        WHERE StartDate >= '2011-01-01'
        GROUP BY DATEADD(d, DATEDIFF(d, 0, StartDate), 0)
        ORDER BY [Customer Creation Date]

    Note: the [Customer Creation Date] and [Number Of New Customers] column aliases just provide more readable column headers.

    3. Finally, how can you format the DATETIME to only show the DATE part (after all, the TIME part is now always midnight)? The built in CONVERT function allows you to convert the DATETIME to a CHAR array using a specific format. The format is a bit arbitrary and needs looking up, but 101 is the U.S. standard mm/dd/yyyy, and 103 is the U.K. standard dd/mm/yyyy.

        SELECT CONVERT(CHAR(10), DATEADD(d, DATEDIFF(d, 0, StartDate), 0), 103) [Customer Creation Date],
               COUNT(*) [Number Of New Customers]
        FROM Customer
        WHERE StartDate >= '2011-01-01'
        GROUP BY DATEADD(d, DATEDIFF(d, 0, StartDate), 0)
        ORDER BY [Customer Creation Date]

    Read the article

  • OBIA on Teradata - Part 3 Stats

    - by Mohan Ramanuja
    Statements to run table stats on W_PARTY_PER_DS and W_PARTY_PER_D:

        COLLECT STATISTICS ON W_PARTY_PER_DS COLUMN ("DEPARTMENT_NAME");
        COLLECT STATISTICS ON W_PARTY_PER_DS COLUMN ("CONTACT_ID");
        COLLECT STATISTICS ON W_PARTY_PER_DS COLUMN ("CITY");
        COLLECT STATISTICS ON W_PARTY_PER_D COLUMN ("ACCNT_FLG");
        COLLECT STATISTICS ON W_PARTY_PER_D COLUMN ("SUPPLIER_FLG");

        help statistics w_party_per_d;

        Date        Time      Unique Values    Column Names
        10/06/02    15:37:47  5,002,185        ROW_WID
        10/06/21    14:02:55  0                VIS_PR_POS_ID
        10/06/02    15:37:48  2                CREATED_BY_WID
        10/06/02    15:37:49  2                CHANGED_BY_WID
        10/06/02    15:37:50  2                SRC_EFF_FROM_DT
        10/06/02    15:37:51  1                SRC_EFF_TO_DT
        10/06/02    15:37:52  2                EFFECTIVE_FROM_DT
        10/06/02    15:37:53  2                EFFECTIVE_TO_DT
        10/06/02    15:37:57  1                DELETE_FLG
        10/06/21    14:02:54  0                CURRENT_FLG
        10/06/02    15:37:59  2                DATASOURCE_NUM_ID
        10/06/02    15:38:02  1                ETL_PROC_WID
        10/06/10    18:27:21  1,000            INTEGRATION_ID

        select top 10 * from DBC.TableSize;

        Vproc  DataBaseName            AccountName                    TableName                 CurrentPerm  PeakPerm
        0      T21_ETL_TEMP_ENT        IM IT/IM IT Enterprise region  RZ_PENDD_FCLTY_CLM_STG    1024         0
        0      SSB_RDS                 IM IT/IM IT ENTERPRISE REGION  RDS_RESP_997_TLR          1024         0
        0      T17_EDL                 IM IT/IM IT Enterprise region  SPCMN_ACTN                1024         0
        0      T20_ETL_CAPTR_DATA_ENT  IM IT/IM IT Enterprise region  HZ_CS90_VSGPNTE_S9MGNT14  2048         0
        0      T5_ETL_DATA_PBM         IM IT/IM IT Enterprise region  PRCG_OVRD_BY_RX_NM        1536         0
        0      PIP_DB                  $H&D&H                         PIPTRGENTSRC              1024         0
        0      STest5_ADW0             sysadmin                       PROV_RGSTRTN              59904        0
        0      AEDWSTG1                NEIM/NEIM                      MEMBERSHIP_LKUP_ETL       1024         0
        0      AEDWTST5                dbc                            cptn_agrmt_xwlk           1024         0
        0      VAL_LAG_TEMP            $H1$&D&HDBA                    clm_lag_stg               347136       0

        select vproc, CurrentPerm from DBC.TableSize
        where databasename = 'PRJ_CRM_STGC' and tablename = 'w_party_per_d'
        ORDER BY 2 DESC;

        Vproc  DataBaseName  AccountName  TableName      CurrentPerm  PeakPerm
        0      PRJ_CRM_STGC  DBA/DBA      W_PARTY_PER_D  8704.00      841728.00
        3      PRJ_CRM_STGC  DBA/DBA      W_PARTY_PER_D  8704.00      782848.00

    Read the article

  • Design Pattern for Skipping Steps in a Wizard

    - by Eric J.
    I'm designing a flexible Wizard system that presents a number of screens to complete a task. Some screens may need to be skipped based on answers to prompts on one or more previous screens. The conditions to skip a given screen need to be editable by a non-technical user via a UI. Multiple conditions need only be combined with AND. I have an initial design in mind, but it feels inelegant. I wonder if there's a better way to approach this class of problem.

    Initial Design

    UI: a grid of condition rows where the first column allows the user to select a question from a previous screen, the second column allows the user to select an operator applicable to the type of question asked, and the third column allows the user to enter one or more values depending on the selected operator.

    Object Model:

        public enum Operations { ... }

        public class Condition
        {
            int QuestionId { get; set; }
            Operations Operation { get; set; }
            List<object> Parameters { get; private set; }
        }

        List<Condition> pageSkipConditions;

    Controller Logic:

        bool allConditionsTrue = pageSkipConditions.Count > 0;
        foreach (Condition c in pageSkipConditions)
        {
            allConditionsTrue &= Evaluate(previousAnswers, c);
        }
        // ...

        private bool Evaluate(List<Answers> previousAnswers, Condition c)
        {
            switch (c.Operation)
            {
                case Operations.StartsWith:
                    // logic for this operation
                    // etc.
            }
        }
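    For comparison, the evaluation logic itself is small in any language. Here is a minimal sketch in Python (the operator names and data shapes are my own assumptions, not the original design): a screen is skipped only if every configured condition on earlier answers is true.

        OPERATORS = {
            "equals":      lambda answer, params: answer == params[0],
            "starts_with": lambda answer, params: str(answer).startswith(params[0]),
            "in_list":     lambda answer, params: answer in params,
        }

        def should_skip(page_conditions, previous_answers):
            """page_conditions: list of (question_id, operator_name, params) tuples."""
            if not page_conditions:
                return False
            return all(
                OPERATORS[op](previous_answers.get(question_id), params)
                for question_id, op, params in page_conditions
            )

        answers = {1: "California", 2: "No"}
        conditions = [(1, "starts_with", ["Cal"]), (2, "equals", ["No"])]
        print(should_skip(conditions, answers))   # True -> skip this screen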

    Read the article

  • Choosing an open source license such that maximum value is added to a startup

    - by echo-flow
    There are many companies that produce open source software products, and many business models that these companies can use. I'm particularly interested in companies like 280 North, the company behind the Objective-J and Cappuccino frameworks. My understanding of this organization's business model is that they:
    - worked to develop a tool which added significant value to developers,
    - released the tool under an open source license,
    - built a community around the tool (which was helped by the project's open source licensing),
    - created interesting demos illustrating the project's value.
    All of these things added value to the project, and to the company that owned it. Finally, 280 North was sold to Motorola. My question has to do with the role of software licensing in this particular business model. 280 North licensed their software projects under the LGPL, which gave them some proprietary control over how the project could be used. I believe the LGPL is what's known as a "weak copyleft" license, meaning that the project can be linked to without the linking code also being licensed under the LGPL, but software derived directly from the project would need to be licensed under the LGPL. For web-oriented libraries in particular, weak copyleft or non-copyleft licensing seems to be quite common; I can't think of a single example of a popular or well-known web-oriented library that is licensed under the GPL (or AGPL). The question, then, is: how much value would a weak copyleft license like the LGPL add to a software venture like 280 North, versus a non-copyleft license such as the BSD license or the Apache Software License? I'd really appreciate any insight anyone can offer into this, but I'd be most interested in answers that can cite other companies as case studies or examples.

    Read the article

  • Any empirical evidence on the efficacy of CMMI?

    - by mehaase
    I am wondering if there are any studies that examine the efficacy of software projects in CMMI-oriented organizations. For example, are CMMI organizations more likely to finish projects on time and/or on budget than non-CMMI organizations? Edit for clarification: CMMI stands for "Capability Maturity Model Integration". It's developed by the Software Engineering Institute at Carnegie-Mellon University (SEI-CMU). It's not a certification, but there are various companies that will "appraise" your organization to various levels of CMMI, such as level 2 and level 3. (I believe CMMI level 1 is an animalistic, Hobbesian free-for-all that nobody aspires to. In other words, everybody is at least CMMI level 1, even if you've never heard of CMMI before.) I'm definitely not an expert, but I believe that an organization can be appraised for CMMI levels within different scopes of work: i.e. service delivery, software development, foobaring, etc. My question is focused on the software development appraisal: is an organization that has been appraised to CMMI Level X for software projects more likely to finish a software project on time and on budget than another organization that has not been appraised to CMMI Level X? However, in the absence of hard data about software-oriented CMMI, I'd be interested in the effect that CMMI appraisals have on other activities as well. I originally asked the question because I've seen various studies conducted on software (e.g. the essays in The Mythical Man Month refer to numerous empirical studies, as does McConnell's Code Complete), so I know that there are organizations performing empirical studies of software development.

    Read the article

  • How to Structure a Trinary state in DB and Application

    - by ABMagil
    How should I structure, in the DB especially but also in the application, a trinary state? For instance, I have user feedback records which need to be reviewed before they are presented to the general public. This means a feedback reviewer must see the unreviewed feedback, then approve or reject it. I can think of a few ways to represent this:
    - Two boolean flags: Seen/Unseen and Approved/Rejected. This is the simplest and probably the smallest database solution (presumably boolean fields are simple bits). The downside is that there are really only three states I care about (unseen/approved/rejected) and this creates four states, including one I don't care about (a record which is seen but not approved or rejected is essentially unseen).
    - String column in the DB with constants/enum in the application. Using Rating::APPROVED_STATE within the application and letting it equal whatever it wants in the DB. This is a larger column in the DB and I'm concerned about doing string comparisons whenever I need these records. Perhaps mitigatable with an index?
    - Single boolean column, but allow nulls. A true is approved, a false is rejected. A null is unseen. Not sure of the pros/cons of this solution.
    What are the rules I should use to guide my choice? I'm already thinking in terms of DB size and the cost of finding records based on state, as well as the readability of code that ends up using this structure.
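    For the second option, a minimal sketch of the application side could look like the following (Python; the state names are mine, not the poster's): a three-valued enum in code mapped to short strings in the DB column.

        from enum import Enum

        class ReviewState(Enum):
            UNSEEN = "unseen"
            APPROVED = "approved"
            REJECTED = "rejected"

        def visible_to_public(state: ReviewState) -> bool:
            return state is ReviewState.APPROVED

        def needs_review(state: ReviewState) -> bool:
            return state is ReviewState.UNSEEN

        # Round-trip to the string stored in the database column.
        stored = ReviewState.APPROVED.value            # "approved"
        print(visible_to_public(ReviewState(stored)))  # True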

    Read the article

  • Designing a system with different business rules for different customers

    - by user1595846
    My company is rewriting our proprietary business application. The current architecture is poorly done and inflexible. It is coded in a procedural style rather than an object-oriented one, and it has become difficult to maintain. Our system is a web application written in .NET Web Forms. I am considering ASP.NET MVC for the rewrite. We intend to rewrite it with a good, solid architecture with the goals of maintainability and reusable classes for some of our other systems and services. We would also like the system to be customizable for different customers in the event that we market the system. I am considering redesigning the system based on the layered architecture (Presentation, Business, Data Access layers) described in the Microsoft Patterns and Practices Application Architecture Guide: http://msdn.microsoft.com/en-us/library/ff650706.aspx Hopefully this isn't too open ended, but how would you recommend allowing for different business logic/rules for different customers? I'm aware of Windows Workflow Foundation, but from what I've read about it, it seems many business rules could be too complicated to handle there. Also, can anyone point me to where I can download an example of a .NET solution that is based on the Application Architecture Guide? I have already downloaded the Layered Architecture Solution Guidance and the Expense Sample on CodePlex. I was looking for something a bit larger and more robust that I could step through the code and see how it works. If you feel there are better architectures to base our redesign on, please feel free to share. I appreciate your help!
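    One common answer to the per-customer business rules question is to put each rule set behind a small strategy interface and choose the implementation per customer at the edge of the business layer. The sketch below (Python, purely illustrative; the customer and rule names are invented) shows the shape of the idea:

        class DiscountRules:                      # the contract the business layer depends on
            def discount(self, order_total: float) -> float:
                return 0.0

        class DefaultRules(DiscountRules):
            pass

        class AcmeRules(DiscountRules):           # hypothetical customer-specific override
            def discount(self, order_total: float) -> float:
                return 0.10 * order_total if order_total > 1000 else 0.0

        RULES_BY_CUSTOMER = {"acme": AcmeRules()}

        def rules_for(customer_id: str) -> DiscountRules:
            return RULES_BY_CUSTOMER.get(customer_id, DefaultRules())

        print(rules_for("acme").discount(2000))     # 200.0
        print(rules_for("other").discount(2000))    # 0.0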

    Read the article

  • Example: Cross Cutting Concerns of an Application

    A little while ago I was given an opportunity to design and implement a new system that sent data via an HTTP POST method and then processed the results that were returned so that they could be inserted into a database. My system had eight core concerns that it needed to fulfill.
    Eight Core Concerns: Database Access, Data Entities, Worker, Result Processing, Process Flow Manager, Email/Notification, Error Handling, Logging.
    Of these eight, five were actually cross cutting concerns.
    5 Cross Cutting Concerns: Database Access, Data Entities, Email/Notification, Error Handling, Logging.
    These five cross cutting concerns were determined after I created an aspect oriented model to help identify the system components that could be factored out into separate components. These separated components would then be included in the system so that they could be used by various other components. These five components allow all of the other components to access the database, store data, send notifications, handle errors, and log all system events. Thus, these components are used to share unique aspects of the system via their implementation. The use of aspect oriented architecture greatly helped me define what components I needed to create and what each of those components could do. It also showed how all of the other aspects depended on each other so that each component did not have to re-implement code that was already created in the existing system.
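    To make the idea concrete, here is a small sketch (my own illustration in Python, not code from the system described) of factoring one cross-cutting concern, logging together with error handling, into a single reusable component that any worker can apply:

        import functools
        import logging

        logging.basicConfig(level=logging.INFO)

        def logged(func):
            """Shared logging/error-handling aspect applied to any component's entry point."""
            @functools.wraps(func)
            def wrapper(*args, **kwargs):
                logging.info("start %s", func.__name__)
                try:
                    return func(*args, **kwargs)
                except Exception:
                    logging.exception("error in %s", func.__name__)   # shared error handling
                    raise
                finally:
                    logging.info("end %s", func.__name__)
            return wrapper

        @logged
        def post_and_process(payload):
            # worker logic would post the payload and process the response here
            return {"status": "ok", "echo": payload}

        print(post_and_process({"id": 42}))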

    Read the article

  • SOA: Simplifying Cloud, Mobile, and On-premise Integration–Webcast October 24th 2013

    - by JuergenKress
    Proliferation of mobile devices, data explosion, and cloud enablement have caused a dramatic shift in IT. Organizations need to rethink their application infrastructures to accommodate increased processing speeds and heightened security and availability concerns for their applications, all while meeting a lowered total cost of ownership. Traditional infrastructures may not be sufficient to accommodate the diversity and complexity of integrations in this new era. Many of today's IT organizations rely on a Service Oriented Architecture (SOA) backbone to keep their businesses running. SOA adoption and acceptance across industries have led to platform maturity at the application layer level. However, we are at the start of an era with a new modus operandi for organizations to thrive and deliver continuously on competitive differentiation. This change is a result of market globalization, explosion in the number of mobile devices, unparalleled growth in voluminous data, and innovation that crosses organizational boundaries. Social, mobile, and cloud are revolutionizing the way organizations operate. Oracle SOA Suite is a hot-pluggable software suite to build, deploy and manage Service-Oriented Architectures (SOA). Oracle SOA transforms complex application integration into agile and reusable service-based connectivity by mediating, routing, and managing interactions between services and applications in the enterprise and in the cloud. Oracle SOA Suite's hot-pluggable architecture helps businesses lower upfront costs by allowing maximum re-use of existing IT investments and assets. Join us on this webcast to find out how you can optimize the use of Oracle SOA Suite, simplifying integration, and what the next generation of SOA has to offer you.
    Agenda: What's new in Oracle SOA; Simplifying integration; Application integration and SOA; Cloud integration with SOA; Mobile integration leveraging Oracle SOA Suite; Oracle delivers on next generation SOA; Customer examples; Summary and Q&A.
    Webcast: Thursday October 24th, 2013, 10am CET (8am UTC / 11am EEST). Details at the Registration Page.
    SOA & BPM Partner Community: For regular information on Oracle SOA Suite become a member of the SOA & BPM Partner Community; for registration please visit www.oracle.com/goto/emea/soa (OPN account required). If you need support with your account please contact the Oracle Partner Business Center. Blog Twitter LinkedIn Facebook Wiki Mix Forum
    Technorati Tags: cloud integration,mobile integration,training,webcast middeware,SOA Community,Oracle SOA,Oracle BPM,Community,OPN,Jürgen Kress

    Read the article

  • Matrices: Arrays or separate member variables?

    - by bjz
    I'm teaching myself 3D maths and in the process building my own rudimentary engine (of sorts). I was wondering what would be the best way to structure my matrix class. There are a few options:

    Separate member variables:

        struct Mat4 {
            float m11, m12, m13, m14,
                  m21, m22, m23, m24,
                  m31, m32, m33, m34,
                  m41, m42, m43, m44;
            // methods
        };

    A multi-dimensional array:

        struct Mat4 {
            float m[4][4];
            // methods
        };

    An array of vectors:

        struct Mat4 {
            Vec4 m[4];
            // methods
        };

    I'm guessing there would be positives and negatives to each. From 3D Math Primer for Graphics and Game Development, 2nd Edition, p. 155:

        Matrices use 1-based indices, so the first row and column are numbered 1. For example, a12 (read "a one two," not "a twelve") is the element in the first row, second column. Notice that this is different from programming languages such as C++ and Java, which use 0-based array indices. A matrix does not have a column 0 or row 0. This difference in indexing can cause some confusion if matrices are stored using an actual array data type. For this reason, it's common for classes that store small, fixed size matrices of the type used for geometric purposes to give each element its own named member variable, such as float a11, instead of using the language's native array support with something like float elem[3][3].

    So that's one vote for method one. Is this really the accepted way to do things? It seems rather unwieldy if the only benefit would be sticking with the conventional math notation.

    Read the article

  • A Better Way to Plan, Execute and Manage Enterprise Architecture

    - by JuergenKress
    IT Strategies from Oracle is an authorized library of guidelines and reference architectures that will help you better plan, execute, and manage your enterprise architecture and IT initiatives. The IT Strategies from Oracle library offers two types of best practice documents: practitioner guides containing pragmatic advice and approaches, and reference architectures containing the proven technology patterns to jumpstart your initiative. The IT Strategies from Oracle library can help you establish a reliable set of principles and standards to guide your use of Oracle technology. We will expand this library over time across all of Oracle's technologies. Today, you can access:
    - Overview documents providing an introduction to all the resources available in the library and best practices maturity models
    - Oracle Reference Architectures covering the application infrastructure foundation, management and monitoring, security, software engineering, service-oriented integration, service orientation, user interaction, engineered systems, and a master glossary
    - Enterprise Technology Strategies for Service-Oriented Architecture offering practitioner guides on creating a SOA roadmap, frameworks for governance, determining ROI, identifying services, software engineering, and white papers
    - Enterprise Technology Strategies for Event-Driven Architecture offering practitioner guides on creating an EDA roadmap and reference architectures on an EDA foundation and EDA infrastructure
    - Enterprise Technology Strategies for Business Process Management including practitioner guides on creating a BPM roadmap, business process engineering, governance, and reference architectures on a BPM foundation and BPM infrastructure
    - Enterprise Technology Strategies for Cloud Computing including reference architectures on a Cloud foundation and Cloud infrastructure
    - Enterprise Technology Strategies for Business Analytics including a practitioner guide for creating a BA roadmap, and reference architectures for a BA foundation and BA infrastructure
    Get the Oracle Enterprise Architecture content here.
    SOA & BPM Partner Community: For regular information on Oracle SOA Suite become a member of the SOA & BPM Partner Community; for registration please visit www.oracle.com/goto/emea/soa (OPN account required). If you need support with your account please contact the Oracle Partner Business Center. Blog Twitter LinkedIn Facebook Wiki Mix Forum
    Technorati Tags: Architecture,SOA Community,Oracle SOA,Oracle BPM,Community,OPN,Jürgen Kress

    Read the article
