Search Results

Search found 14548 results on 582 pages for 'const reference'.


  • Why no more macro languages?

    - by Muhammad Alkarouri
    In this answer to a previous question of mine about the suitability of scripting languages as shells, DigitalRoss identifies the difference between macro languages and "parsed typed" languages in terms of string treatment as the main reason that scripting languages are not suitable for shell purposes. Macro languages include nroff and m4, for example. What are the design decisions (or compromises) needed to create a macro programming language? And why are most of the mainstream languages parsed rather than macro-based? This very similar question (and the accepted answer) covers fairly well why parsed typed languages, take C for example, suffer from the use of macros. I believe my question here covers different ground: macro languages, or those working on a textual level, are not wholly failures. Arguably, they include bash, Tcl and other shell languages, and they work well in a specific niche such as shells, as explained in my links above. Even m4 had a fairly long run of success, and some of the web template languages can be regarded as macro languages. It is quite possible that macros and parsed typing do not go well together, and that is why macros "break" common languages. In a language designed as a macro language, a macro like #define TWO 1+1 would be covered by the common rules of that language rather than conflicting with those of a host language. And issues like "macros are not typed" and "code doesn't compile" are not relevant in the context of a language designed as untyped and interpreted, with little concern for efficiency. The question about the design decisions needed to create a macro language pertains to a hobby project I am currently working on: designing a new shell. Taking the previous question in context should clarify the difference between adding macros to a parsed language and my objective. I hope the clarification shows that the linked question doesn't cover this question, which has two parts: If I want to create a macro language (for a shell or a web template, for example), what limitations and compromises (and guidelines, if any exist) apply? (Probably answerable by a link or reference.) Why have no macro languages succeeded in becoming mainstream except in particular niches? What makes typed languages successful in large-scale programming, while "stringly-typed" languages succeed in shells and one-liner environments?
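    To make the "macros break parsed languages" point concrete, here is a minimal C illustration (mine, not from the original post) of the classic pitfall behind #define TWO 1+1: the macro is substituted textually, so the host grammar's precedence rules apply to the expanded text rather than to the value the author had in mind.

        #include <stdio.h>

        #define TWO 1+1   /* textual substitution, not a typed value */

        int main(void) {
            /* TWO * 3 expands to 1+1*3, which evaluates to 4, not 6 */
            printf("%d\n", TWO * 3);
            return 0;
        }

    In a language that is macro-based from the ground up, textual substitution is the semantics, so there is no host grammar for the expansion to conflict with.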

    Read the article

  • Brightness Crash and Fan issues in 12.04.1

    - by S.A. McIntosh
    I would just like to state beforehand that I am a total novice at using Ubuntu when it comes to the more complex issues. So I thought it would be best to finally come here and ask for help before being re-directed or closed out without a solution. I have already looked high and low on this board, but nothing came up for my particular case, so I might as well take a shot asking for the first time here. This is what I have at the moment: -Dell Inspiron 1764 w/ 64-bit Intel i5 Core -Dual Boot: Windows 7/Ubuntu 12.04.1 32-bit (from 12.04 install) -Unity shell -Linux kernel: 3.2.0-32 generic-pae ...and this is my fglrxinfo: OpenGL vendor string: Advanced Micro Devices, Inc. OpenGL renderer string: ATI Mobility Radeon HD 5000 Series OpenGL version string: 4.2.11627 Compatibility Profile Context The one issue I have with using Ubuntu is brightness. With the driver installed, every time I use the slider in the Brightness and Lock settings or use the keyboard function, the screen freezes, goes black and comes up with a page of scrambled colors like in the video. So I have looked all over this board and the web for a solution. This is what I have done so far to fix this: -First Solution: Looking around, I found this small fix using the terminal: sudo gedit /etc/rc.local followed by adding this into "rc.local": echo # > /sys/class/backlight/acpi_video0/brightness This rarely works with the graphics driver still installed; I sometimes get lucky, say during a restart, but a reboot just snaps the brightness back to maximum. -Second Solution: Simply remove the graphics driver while leaving the first fix in place. This solves the issue but results in the monitor flickering and flashing at startup, which in itself is not a problem to me but may not be so good for monitor health. It also causes the fan to speed up throughout the session and renders any program that needs the driver useless. -Third Solution: This is the most obvious one: just use the brightness control in the AMD Catalyst Control Center software that came with the driver, and I can say that its brightness control is HORRIBLE compared to the actual settings. Which leads up to where I am now: back to the driver to stop the fan speed-up, where the only workaround for the brightness crash is to use the keyboard-controlled brightness at the login screen, NOT the desktop, if I want the desired effect - and it will just snap back to maximum brightness again if I restart. The fan-speed problem is dealt with, but now I run the risk of crashing my computer if I so much as touch the brightness settings. Speaking of which, I found this on Launchpad and it seems that the issue has been around since June of 2012. Any help, redirect link or reference would be greatly appreciated. Thank you.

    Read the article

  • SQLAuthority News – Featuring in Call Me Maybe The Developer Way – Pluralsight Video

    - by pinaldave
    Is SQL boring? Not at all. SQL is fun – one just has to know how to maximize the fun while working with SQL Server. Earlier I was invited to participate in a video for Pluralsight. I am sure all of you know that I have authored 3 SQL Server learning courses with Pluralsight – 1) SQL Server Q and A, 2) SQL Server Performance Tuning and 3) SQL Server Indexing. Before I say anything, I suggest all of you watch the following video. Make sure that you pay special attention after 0 minutes and 36 seconds. What can I say about this? I am just fortunate to be part of history in the making. There are more than 53 super cool celebrities in this video. In this just-over-3-minute video there are so many story lines. I must congratulate director Megan and creative assistant Mari for excellent work. There are so many fun moments in this small video. Let me list my top five moments. @John_Papa's dance at second 14 @julielerman playing with a cute doggy The RACE between @josepheames and @bruthafish – the end is hilarious The black-belt moment by @boedie @stwool relaxing on something strange! Well, this is indeed a great short film. This video demonstrates how cool the culture of Pluralsight is and how fun-loving they are. A good organization provides an environment for its employees and partners to have maximum fun while they all become part of the success story. Hats off to Aaron Skonnard for producing this fun-loving video. Well, after listening to this song multiple times, I decided to give a call to Pluralsight. If you want, you can call them at +1 (801) 784-9032 or send an email to james-cole at pluralsight.com. What are your top five favorite moments? List them in the comments and you may win a Pluralsight subscription. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: About Me, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology

    Read the article

  • SQL SERVER – Storing 64-bit Unsigned Integer Value in Database

    - by Pinal Dave
    Here is a very interesting question I received in an email just the other day. Some questions are just so good that it makes me wonder how come I have not faced them first hand. Anyway, here is the question - "Pinal, I am migrating my database from MySQL to SQL Server and I have faced a unique situation. I have been using an unsigned 64-bit integer in MySQL, but when I try to migrate that column to SQL Server, I am facing an issue as there is no datatype which I find appropriate for my column. It is now too late to change the datatype and I need an immediate solution. One chain of thought was to change the data type of the column from unsigned 64-bit (BIGINT) to VARCHAR(n), but that would just swap the data type in a way that will cause quite a lot of performance-related issues in the future. In SQL Server we also have the BIGINT data type, but that is a signed 64-bit datatype. The BIGINT datatype in SQL Server has a range of -2^63 (-9,223,372,036,854,775,808) to 2^63-1 (9,223,372,036,854,775,807). However, my number is much larger than this. Is there any way I can store my big 64-bit unsigned integer without losing much performance, as I would by converting it to VARCHAR?" Very interesting question. For the sake of argument, we could ask the user whether there should really be a need for such a big number, and if it is an identity column, I really doubt the table will ever grow beyond that range. The real question, which I found interesting, was how to store a 64-bit unsigned integer value in SQL Server without converting it to a string data type. After thinking a bit, I found a fairly simple answer: I can use the NUMERIC data type. I can use the NUMERIC(20) datatype for a 64-bit unsigned integer value, NUMERIC(10) for a 32-bit unsigned integer value and NUMERIC(5) for a 16-bit unsigned integer value. The NUMERIC datatype supports a maximum precision of 38. Now here is another thing to keep in mind: using the NUMERIC datatype will indeed accept the 64-bit unsigned integer, but if in the future you try to enter a negative value, it will allow that as well. Hence, you will need to put an additional constraint on the column so that it accepts only positive integers. Here is another big concern: SQL Server will store the number as NUMERIC and will treat it as a positive integer for all practical purposes. You will have to write logic in your application to interpret it as a 64-bit unsigned integer. On the other hand, if you are using unsigned integers in your application, there is a good chance that you already have logic taking care of that. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology Tagged: SQL Datatype
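    A minimal T-SQL sketch of the approach described above (table and constraint names are illustrative only): NUMERIC(20, 0) is wide enough for the full unsigned 64-bit range, and a CHECK constraint closes the negative-value loophole.

        -- 2^64 - 1 = 18,446,744,073,709,551,615 (20 digits), so NUMERIC(20, 0) fits
        CREATE TABLE MigratedData
        (
            ID NUMERIC(20, 0) NOT NULL,
            -- reject negatives and anything beyond the unsigned 64-bit range
            CONSTRAINT CK_MigratedData_UInt64
                CHECK (ID >= 0 AND ID <= 18446744073709551615)
        )
        GO
        INSERT INTO MigratedData (ID) VALUES (18446744073709551615) -- succeeds
        GO
        INSERT INTO MigratedData (ID) VALUES (-1) -- fails the CHECK constraint
        GO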

    Read the article

  • SQL – Contest to Get The Date – Win USD 50 Amazon Gift Cards and Cool Gift

    - by Pinal Dave
    If you are a regular reader of this blog, you will have no issue at all resolving this puzzle. This contest is based on my experience with NuoDB. If you are not familiar with NuoDB, here are a few pointers for you. Step by Step Guide to Download and Install NuoDB – Getting Started with NuoDB Quick Start with Admin Sections of NuoDB – Manage NuoDB Database Quick Start with Explorer Sections of NuoDB – Query NuoDB Database In today's contest you have to answer the following questions: Q 1: Precision of NOW() What is the precision of NuoDB's NOW() function, which returns the current date-time? Hint: Run the following script in the NuoDB Console Explorer section: SELECT NOW() AS CurrentTime FROM dual; Here is the image. I have masked the area where the time precision is displayed. Q 2: Executing Date and Time Script When I execute the following script - SELECT 'today' AS Today, 'tomorrow' AS Tomorrow, 'yesterday' AS Yesterday FROM dual; I will get the following result:   NOW – What will be the answer when we execute the following script, and WHY? SELECT CAST('today' AS DATE) AS Today, CAST('tomorrow' AS DATE) AS Tomorrow, CAST('yesterday' AS DATE) AS Yesterday FROM dual; HINT: Install NuoDB (it takes 90 seconds). Prizes: 2 Amazon Gift Cards 2 Limited Edition Hoodies (US residents only)   Rules: Please leave an answer in the comments section below. You must answer both questions together in a single comment. US residents who want to qualify to win NuoDB apparel, please mention your country in the comment. You can resubmit your answer multiple times; the latest entry will be considered valid. The last day to participate in the puzzle is June 24, 2013. All valid answers will be kept hidden till June 24, 2013. The winners will be announced on June 25, 2013. Two winners will each get a USD 25 Amazon Gift Card. (Total Value = 25 x 2 = 50 USD) The winners will be selected using a random algorithm from all the valid answers. Anybody with a valid email address can take part in the contest. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: Big Data, PostADay, SQL, SQL Authority, SQL Puzzle, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology Tagged: NuoDB

    Read the article

  • What are the software design essentials? [closed]

    - by Craig Schwarze
    I've decided to create a 1-page "cheat sheet" of essential software design principles for my programmers. It doesn't explain the principles in any great depth, but is simply there as a reference and a reminder. Here's what I've come up with – I would welcome your comments. What have I left out? What have I explained poorly? What is there that shouldn't be? (A short illustrative sketch follows the list below.)

    Basic Design Principles
    - The Principle of Least Surprise – your solution should be obvious, predictable and consistent.
    - Keep It Simple Stupid (KISS) – the simplest solution is usually the best one.
    - You Ain't Gonna Need It (YAGNI) – create a solution for the current problem rather than for what might happen in the future.
    - Don't Repeat Yourself (DRY) – rigorously remove duplication from your design and code.

    Advanced Design Principles
    - Program to an interface, not an implementation – Don't declare variables to be of a particular concrete class. Rather, declare them to an interface, and instantiate them using a creational pattern.
    - Favour composition over inheritance – Don't overuse inheritance. In most cases, rich behaviour is best added by instantiating objects rather than by inheriting from classes.
    - Strive for loosely coupled designs – Minimise the interdependencies between objects. They should be able to interact with minimal knowledge of each other via small, tightly defined interfaces.
    - Principle of Least Knowledge – Also called the "Law of Demeter", and colloquially summarised as "Only talk to your friends". Specifically, a method in an object should only invoke methods on the object itself, objects passed as a parameter to the method, any object the method creates, and any components of the object.

    SOLID Design Principles
    - Single Responsibility Principle – Each class should have one well-defined purpose, and only one reason to change. This reduces the fragility of your code, and makes it much more maintainable.
    - Open/Closed Principle – A class should be open to extension, but closed to modification. In practice, this means extracting the code that is most likely to change to another class, and then injecting it as required via an appropriate pattern.
    - Liskov Substitution Principle – Subtypes must be substitutable for their base types. Essentially, get your inheritance right. In the classic example, type square should not inherit from type rectangle, as they have different properties (you can independently set the sides of a rectangle). Instead, both should inherit from type shape.
    - Interface Segregation Principle – Clients should not be forced to depend upon methods they do not use. Don't have fat interfaces; rather, split them up into smaller, behaviour-centric interfaces.
    - Dependency Inversion Principle – There are two parts to this principle: High-level modules should not depend on low-level modules; both should depend on abstractions. Abstractions should not depend on details; details should depend on abstractions. In modern development, this is often handled by an IoC (Inversion of Control) container.
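    As a concrete companion to "program to an interface, not an implementation" (and, implicitly, the Dependency Inversion Principle), here is a tiny C# sketch - all names are hypothetical, chosen purely for illustration:

        // Declare against the abstraction...
        public interface IMessageStore
        {
            void Save(string message);
        }

        // ...and keep concrete implementations behind it.
        public class FileMessageStore : IMessageStore
        {
            public void Save(string message) { /* write to disk */ }
        }

        public class Processor
        {
            private readonly IMessageStore store;

            // The concrete type is chosen by the caller (or an IoC container),
            // so Processor never depends on FileMessageStore directly.
            public Processor(IMessageStore store) { this.store = store; }

            public void Handle(string message) { store.Save(message); }
        }

    Swapping FileMessageStore for, say, a database-backed store then requires no change to Processor at all.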

    Read the article

  • problem with frustum AABB culling in DirectX

    - by Matthew Poole
    Hi, I am currently working on a project with a few friends, and I am trying to get frustum culling working. Every single tutorial or article I go to suggests that my math is correct and that this should be working. I thought that by posting here, somebody might catch something I could not. Thank you. Here are the important code snippets:

        //create the projection matrix
        void CD3DCamera::SetLens(float fov, float aspect, float nearZ, float farZ)
        {
            D3DXMatrixPerspectiveFovLH(&projMat, D3DXToRadian(fov), aspect, nearZ, farZ);
        }

        //build the view matrix after changes have been made to camera
        void CD3DCamera::BuildView()
        {
            //keep axes orthogonal
            D3DXVec3Normalize(&look, &look);
            //up
            D3DXVec3Cross(&up, &look, &right);
            D3DXVec3Normalize(&up, &up);
            //right
            D3DXVec3Cross(&right, &up, &look);
            D3DXVec3Normalize(&right, &right);
            //fill view matrix
            float x = -D3DXVec3Dot(&position, &right);
            float y = -D3DXVec3Dot(&position, &up);
            float z = -D3DXVec3Dot(&position, &look);
            viewMat(0,0) = right.x; viewMat(1,0) = right.y; viewMat(2,0) = right.z; viewMat(3,0) = x;
            viewMat(0,1) = up.x;    viewMat(1,1) = up.y;    viewMat(2,1) = up.z;    viewMat(3,1) = y;
            viewMat(0,2) = look.x;  viewMat(1,2) = look.y;  viewMat(2,2) = look.z;  viewMat(3,2) = z;
            viewMat(0,3) = 0.0f;    viewMat(1,3) = 0.0f;    viewMat(2,3) = 0.0f;    viewMat(3,3) = 1.0f;
        }

        void CD3DCamera::BuildFrustum()
        {
            D3DXMATRIX VP;
            D3DXMatrixMultiply(&VP, &viewMat, &projMat);
            D3DXVECTOR4 col0(VP(0,0), VP(1,0), VP(2,0), VP(3,0));
            D3DXVECTOR4 col1(VP(0,1), VP(1,1), VP(2,1), VP(3,1));
            D3DXVECTOR4 col2(VP(0,2), VP(1,2), VP(2,2), VP(3,2));
            D3DXVECTOR4 col3(VP(0,3), VP(1,3), VP(2,3), VP(3,3));
            // Planes face inward
            frustum[0] = (D3DXPLANE)(col2);        // near
            frustum[1] = (D3DXPLANE)(col3 - col2); // far
            frustum[2] = (D3DXPLANE)(col3 + col0); // left
            frustum[3] = (D3DXPLANE)(col3 - col0); // right
            frustum[4] = (D3DXPLANE)(col3 - col1); // top
            frustum[5] = (D3DXPLANE)(col3 + col1); // bottom
            // Normalize the frustum
            for( int i = 0; i < 6; ++i )
                D3DXPlaneNormalize( &frustum[i], &frustum[i] );
        }

        bool FrustumCheck(D3DXVECTOR3 max, D3DXVECTOR3 min, const D3DXPLANE* frustum)
        {
            // Test assumes frustum planes face inward.
            D3DXVECTOR3 P;
            D3DXVECTOR3 Q;
            bool ret = false;
            for(int i = 0; i < 6; ++i)
            {
                // For each coordinate axis x, y, z...
                for(int j = 0; j < 3; ++j)
                {
                    // Make PQ point in the same direction as the plane normal on this axis.
                    if( frustum[i][j] > 0.0f )
                    {
                        P[j] = min[j];
                        Q[j] = max[j];
                    }
                    else
                    {
                        P[j] = max[j];
                        Q[j] = min[j];
                    }
                }
                if(D3DXPlaneDotCoord(&frustum[i], &Q) < 0.0f )
                    ret = false;
            }
            return true;
        }
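    Editor's note for readers skimming the snippet above: the posted FrustumCheck sets ret = false when a plane test fails but then unconditionally returns true, so nothing is ever culled. A corrected sketch of the same positive-vertex test (assuming inward-facing planes, as the comment states, and leaving the rest of the setup as posted) would be:

        bool FrustumCheck(D3DXVECTOR3 max, D3DXVECTOR3 min, const D3DXPLANE* frustum)
        {
            D3DXVECTOR3 Q;
            for (int i = 0; i < 6; ++i)
            {
                // Pick the AABB corner furthest along this plane's normal.
                for (int j = 0; j < 3; ++j)
                    Q[j] = (frustum[i][j] > 0.0f) ? max[j] : min[j];

                // If even that corner is behind the plane, the box is fully outside.
                if (D3DXPlaneDotCoord(&frustum[i], &Q) < 0.0f)
                    return false;
            }
            return true; // inside or intersecting every plane
        }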

    Read the article

  • SQL SERVER – What is SSAS Tabular Data model and Why to use it – Part 2

    - by Pinal Dave
    In my last article, I talked about the basics of the tabular data model and why to use it. Then I demonstrated the step-by-step creation of a basic tabular model project. In this part I'm going to throw some light on how to create measures and analyze them in Excel. If you look at the tabular project closely, you will notice that we have not defined any measure yet. So, in the first step we will define the measure.  Open the solution and select the column you want to define as a measure. Then, click on the summation icon on the toolbar. You will see the aggregated results at the bottom of that column. You also have other choices as well, like average, min, max, count and distinct count. After creating the required measures, we need to analyze our data in Excel. To do this, click on the Excel icon in the upper left corner of the toolbar. This will open your analysis in Excel. Notice the pivot table field list here. I have highlighted the measures that we created in the earlier step. Now, we can use these measures in our analysis. Next, we have to put the required fields in their respective places as column labels, row labels, values and report filter for analysis. See the snapshot below for details; it shows region-wise sales on a yearly basis. You can even apply filters on the above analysis by placing the slicer field in the report filter. In our example, we will take the English product name as a filter. You can use the filter as depicted in the snapshot below. Optionally, you can also use the slicer to filter data more interactively. To further improve our analysis, we can insert pivot charts. That's all for this time; in my next post I'm going to show in detail how to create hierarchies, perspectives, KPIs and many more features. Author: Namita Sharma, Senior Corporate Trainer at Koenig Solutions. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL Tagged: SSAS

    Read the article

  • Patterns for Handling Changing Property Sets in C++

    - by Bhargav Bhat
    I have a bunch of "Property Sets" (which are simple structs containing POD members). I'd like to modify these property sets (e.g. add a new member) at run time, so that the definition of the property sets can be externalized and the code itself can be re-used with multiple versions/types of property sets with minimal or no changes. For example, a property set could look like this:

        struct PropSetA
        {
            bool activeFlag;
            int processingCount;
            /* snip few other such fields */
        };

    But instead of setting its definition in stone at compile time, I'd like to create it dynamically at run time. Something like:

        class PropSet propSetA;
        propSetA("activeFlag", true);      //overloading the function call operator
        propSetA("processingCount", 0);

    And the code dependent on the property sets (possibly in some other library) will use the data like so:

        bool actvFlag = propSet["activeFlag"];
        if(actvFlag == true)
        {
            //Do Stuff
        }

    The current implementation behind all of this is as follows:

        class PropValue
        {
        public:
            // Variant-like class for holding multiple data-types
            // overloaded conversion operator, e.g.:
            operator bool()
            {
                return (baseType == BOOLEAN) ? this->ToBoolean() : false;
            }
            // And a method to create PropValues from various base datatypes
            static PropValue FromBool(bool baseValue);
        };

        class PropSet
        {
        public:
            // overloaded () operator for adding properties
            void operator()(std::string propName, bool propVal)
            {
                propMap.insert(std::make_pair(propName, PropValue::FromBool(propVal)));
            }
        protected:
            // the property map
            std::map<std::string, PropValue> propMap;
        };

    This problem is similar to this question on SO, and the current approach (described above) is based on this answer. But as noted over at SO, this is more of a hack than a proper solution. The fundamental issues that I have with this approach are as follows: Extending this to support new types will require significant code change; at the bare minimum, the overloaded operators need to be extended to support the new type. Supporting complex properties (e.g. a struct containing a struct) is tricky. Supporting a reference mechanism (needed for an optimization of not duplicating identical property sets) is tricky. This also applies to supporting pointers and multi-dimensional arrays in general. Are there any known patterns for dealing with this scenario? Essentially, I'm looking for the equivalent of the visitor pattern, but for extending class properties rather than methods. Edit: Modified the problem statement for clarity and added some more code from the current implementation.
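    As one data point for the discussion (an editorial sketch, not the poster's code): in post-C++17 codebases, std::variant over a map gives the same dynamic property bag with type-safe access, and extending the supported types means touching only the variant's type list rather than a family of overloaded operators.

        #include <iostream>
        #include <map>
        #include <string>
        #include <utility>
        #include <variant>

        using PropValue = std::variant<bool, int, double, std::string>;

        class PropSet {
        public:
            void operator()(const std::string& name, PropValue value) {
                propMap[name] = std::move(value);
            }
            template <typename T>
            T get(const std::string& name) const {
                // throws std::out_of_range if absent, std::bad_variant_access if mistyped
                return std::get<T>(propMap.at(name));
            }
        private:
            std::map<std::string, PropValue> propMap;
        };

        int main() {
            PropSet propSetA;
            propSetA("activeFlag", true);
            propSetA("processingCount", 0);
            if (propSetA.get<bool>("activeFlag")) {
                std::cout << "Do stuff\n";
            }
        }

    Nested property sets (struct in struct) can then be modelled by adding a recursive alternative to the variant, though that requires indirection (e.g. a shared_ptr<PropSet> alternative).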

    Read the article

  • Suggestions for connecting .NET WPF GUI with Java SE Server

    - by Sam Goldberg
    BACKGROUND We are building a Java (SE) trading application which will be monitoring market data and sending trade messages based on the market data, and also on user-defined configuration parameters. We are planning to provide the user with a thin client, built in .NET (WPF), for managing the parameters, controlling the server behavior, and viewing the current state of the trading. The client doesn't need real-time updates; it will instead update the view once every few seconds (or whatever interval is configured by the user). The client has about 6 different operations it needs to perform with the server, for example: CRUD with configuration parameters; query a subset of the data; receive updates of current positions from the server. It is possible that most of the different operations (except for receiving data) are just different flavors of managing the configuration parameters, but it's too early in our analysis for us to be sure. To connect the client with the server, we have been considering using: a SOAP Web Service; a RESTful service; building a custom TCP/IP based API (text or XML) (least preferred - but we use this approach with other applications we have). As best as I understand them, the pros and cons of the different web service flavors are: SOAP - pro: totally automated in .NET (and Java); modifying the server-side interface requires no code changes in the communication layer, just running refresh on the Web Service reference to regenerate the classes. con: more overhead in the communication layer, sending more text, etc. We're not using a J2EE container, so maybe it doesn't work so well with J2SE. REST - pro: lighter weight, less data. Has good .NET and Java support. (I don't have any real experience with this, so I don't know what other benefits it has.) con: the client will not be automatically aware if there are any new operations or properties added (?), so the communication layer needs to be updated by the developer if the server interface changes. con: (both approaches) the server cannot really push updates to the client at regular intervals (?). (However, we won't mind if the client polls the server to get updates.) QUESTION What are your opinions on the above options, or suggestions for other ways to connect the 2 parts? (Ideally, we don't want to put much work into the communication layer, because it's not the significant part of the application, so the more off-the-shelf and automated the better.)
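    Since the client is happy with polling every few seconds, a minimal WPF-side sketch might look like the following - purely illustrative: the endpoint URL, the "positions" resource and the UpdateView helper are all hypothetical, and HttpClient assumes .NET 4.5 or later.

        using System;
        using System.Net.Http;
        using System.Windows.Threading;

        public partial class MainWindow
        {
            private readonly HttpClient client =
                new HttpClient { BaseAddress = new Uri("http://tradingserver:8080/api/") };

            private void StartPolling()
            {
                var timer = new DispatcherTimer { Interval = TimeSpan.FromSeconds(5) };
                timer.Tick += async (s, e) =>
                {
                    // e.g. "query subset of the data" / "receive updates of current positions"
                    string json = await client.GetStringAsync("positions");
                    UpdateView(json); // hypothetical: deserialize and bind to the view model
                };
                timer.Start();
            }
        }

    The same loop works against a SOAP endpoint; the trade-off discussed above is mostly about how much of the client proxy is generated for you.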

    Read the article

  • SQL SERVER – Fix: Error: 10920 Cannot drop user-defined function. It is being used as a resource governor classifier

    - by pinaldave
    If you have not read my SQL SERVER – Simple Example to Configure Resource Governor – Introduction to Resource Governor, yesterday's detailed primer on Resource Governor, I suggest you go ahead and read it before continuing with this article. After the article went out, the very first email I received was as follows: "Pinal, I configured Resource Governor on my development server and it worked fine with the tests I ran. After doing some tests, I decided to remove Resource Governor, and as a first step I disabled it; however, I was not able to drop the classification function during the clean-up. It kept giving me the following error: Msg 10920, Level 16, State 1, Line 1 Cannot drop user-defined function myudfname. It is being used as a resource governor classifier. Would you please give me a solution?" The original email was really this short and there is no other information. I am glad he ran his experiments on a development server and not on the production server. A production server must not be the playground for experiments. I think I have covered the answer to this error in an earlier blog post. Even if the user disables Resource Governor, it is still not possible to drop the function, because Resource Governor can be enabled again and, when enabled, it can still use the same function. The classifier function can't be dropped because it is associated with the resource governor. Here is the simple resolution for dropping the classifier function (do this only if you are not going to use the function): create a new classifier function for your resource governor, or just assign NULL as described in the following T-SQL script, and you will be able to drop the function without error.

        ALTER RESOURCE GOVERNOR WITH (CLASSIFIER_FUNCTION = NULL)
        GO
        ALTER RESOURCE GOVERNOR DISABLE
        GO
        DROP FUNCTION dbo.UDFClassifier
        GO

    I am glad that the user asked me the question instead of doing something radically different, which could leave the server in an unusable state. This is the only method I am aware of to avoid this error. Is there any better way to achieve the same? Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Error Messages, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology

    Read the article

  • SQL SERVER – Use ROLL UP Clause instead of COMPUTE BY

    - by pinaldave
    Note: This upgrade test was performed on a development server using bits of SQL Server 2012 RC0 (which was publicly available when the test was performed). However, SQL Server RTM (GA on April 1) is expected to behave similarly. I recently observed an upgrade from SQL Server 2005 to SQL Server 2012 with the compatibility level kept at SQL Server 2012 (110). After upgrading the system and testing the various modules of the application, we quickly observed that a few of the reports were not working; they were throwing errors. When I looked carefully, I noticed that they were using the COMPUTE BY clause, which is deprecated in SQL Server 2012. The COMPUTE BY clause is replaced by the ROLLUP clause in SQL Server 2012. However, there is no direct replacement for the code; users have to re-write quite a few things when using ROLLUP instead of COMPUTE BY. The primary reason is how each of them returns results: the original COMPUTE BY code produced many result sets, whereas ROLLUP returns a single one. Here is an example of similar code using ROLLUP and COMPUTE BY. I personally find ROLLUP much easier than COMPUTE BY, as it returns all the results in a single resultset, unlike the other one. Here is the quick script I wrote to demonstrate the said behavior:

        CREATE TABLE tblPopulation
        (
            Country VARCHAR(100),
            [State] VARCHAR(100),
            City VARCHAR(100),
            [Population (in Millions)] INT
        )
        GO
        INSERT INTO tblPopulation VALUES('India', 'Delhi','East Delhi',9 )
        INSERT INTO tblPopulation VALUES('India', 'Delhi','South Delhi',8 )
        INSERT INTO tblPopulation VALUES('India', 'Delhi','North Delhi',5.5)
        INSERT INTO tblPopulation VALUES('India', 'Delhi','West Delhi',7.5)
        INSERT INTO tblPopulation VALUES('India', 'Karnataka','Bangalore',9.5)
        INSERT INTO tblPopulation VALUES('India', 'Karnataka','Belur',2.5)
        INSERT INTO tblPopulation VALUES('India', 'Karnataka','Manipal',1.5)
        INSERT INTO tblPopulation VALUES('India', 'Maharastra','Mumbai',30)
        INSERT INTO tblPopulation VALUES('India', 'Maharastra','Pune',20)
        INSERT INTO tblPopulation VALUES('India', 'Maharastra','Nagpur',11 )
        INSERT INTO tblPopulation VALUES('India', 'Maharastra','Nashik',6.5)
        GO
        SELECT Country,[State],City,
        SUM ([Population (in Millions)]) AS [Population (in Millions)]
        FROM tblPopulation
        GROUP BY Country,[State],City WITH ROLLUP
        GO
        SELECT Country,[State],City, [Population (in Millions)]
        FROM tblPopulation
        ORDER BY Country,[State],City
        COMPUTE SUM([Population (in Millions)]) BY Country,[State]--,City
        GO

    After writing this blog post I continue to feel that there should be some better way to do the same task. Is there any easier way to replace COMPUTE BY? Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology

    Read the article

  • SQL SERVER – A Brief Note on SET TEXTSIZE

    - by pinaldave
    Here is a small conversation I received. Though it is an old topic, it is indeed thought-provoking. Question: "Is there any difference between the LEFT function and SET TEXTSIZE?" I really like this small but interesting question. The question does not specify whether the difference is in usage or in performance. Anyway, we will quickly take a look at how TEXTSIZE works. You can run the following script to see how LEFT and SET TEXTSIZE work:

        USE TempDB
        GO
        -- Create TestTable
        CREATE TABLE MyTable (ID INT, MyText VARCHAR(MAX))
        GO
        INSERT MyTable (ID, MyText) VALUES(1, REPLICATE('1234567890', 100))
        GO
        -- Select Data
        SELECT ID, MyText
        FROM MyTable
        GO
        -- Using Left
        SELECT ID, LEFT(MyText, 10) MyText
        FROM MyTable
        GO
        -- Set TextSize
        SET TEXTSIZE 10;
        SELECT ID, MyText
        FROM MyTable;
        SET TEXTSIZE 2147483647
        GO
        -- Clean up
        DROP TABLE MyTable
        GO

    Now let us see the results we receive from both of the examples. If you are going to ask which one you should use – I really do not know, but I can tell you where I would use each of them. LEFT seems to be easier to use, but if you like to do the extra work related to SET TEXTSIZE, go for it. Here is how I would use SET TEXTSIZE: if I am selecting data in SSMS for testing, or for any other non-production-related work, from a large table which has lots of columns with varchar data, I will consider using this statement to reduce the amount of data retrieved in the result set. In simple words, I would use it for testing purposes. On a production server, there should be a specific reason to use it. Here is my candid opinion: I do not think they are directly comparable, even though both of them give the exact same result. LEFT applies only to the column of the single SELECT statement where it is used, but SET TEXTSIZE applies to all the columns in the SELECT and in the following SELECT statements, until SET TEXTSIZE is modified again in the session. Not comparable! I hope this sample example gives you an idea of how to use SET TEXTSIZE in your daily work. I would like to know your opinion about how and when you use this feature. Please leave a comment. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology

    Read the article

  • Default Parameters vs Method Overloading

    - by João Angelo
    With default parameters introduced in C# 4.0, one might be tempted to abandon the old approach of providing method overloads to simulate default parameters. However, you must take into consideration that the two techniques are not interchangeable, since they show different behaviors in certain scenarios. For me, the most relevant difference is that default parameters are a compile-time feature while method overloading is a runtime feature. To illustrate these concepts let's take a look at a complete, although a bit long, example. What you need to retain from the example is that the static method Foo uses method overloading while the static method Bar uses C# 4.0 default parameters.

        static void CreateCallerAssembly(string name)
        {
            // Caller class - Invokes Example.Foo() and Example.Bar()
            string callerCode = String.Concat(
                "using System;",
                "public class Caller",
                "{",
                "    public void Print()",
                "    {",
                "        Console.WriteLine(Example.Foo());",
                "        Console.WriteLine(Example.Bar());",
                "    }",
                "}");
            var parameters = new CompilerParameters(new[] { "system.dll", "Common.dll" }, name);
            new CSharpCodeProvider().CompileAssemblyFromSource(parameters, callerCode);
        }

        static void Main()
        {
            // Example class - Foo uses overloading while Bar uses C# 4.0 default parameters
            string exampleCode = String.Concat(
                "using System;",
                "public class Example",
                "{{",
                "    public static string Foo() {{ return Foo(\"{0}\"); }}",
                "    public static string Foo(string key) {{ return \"FOO-\" + key; }}",
                "    public static string Bar(string key = \"{0}\") {{ return \"BAR-\" + key; }}",
                "}}");
            var compiler = new CSharpCodeProvider();
            var parameters = new CompilerParameters(new[] { "system.dll" }, "Common.dll");
            // Build Common.dll with default value of "V1"
            compiler.CompileAssemblyFromSource(parameters, String.Format(exampleCode, "V1"));
            // Caller1 built against Common.dll that uses a default of "V1"
            CreateCallerAssembly("Caller1.dll");
            // Rebuild Common.dll with default value of "V2"
            compiler.CompileAssemblyFromSource(parameters, String.Format(exampleCode, "V2"));
            // Caller2 built against Common.dll that uses a default of "V2"
            CreateCallerAssembly("Caller2.dll");
            dynamic caller1 = Assembly.LoadFrom("Caller1.dll").CreateInstance("Caller");
            dynamic caller2 = Assembly.LoadFrom("Caller2.dll").CreateInstance("Caller");
            Console.WriteLine("Caller1.dll:");
            caller1.Print();
            Console.WriteLine("Caller2.dll:");
            caller2.Print();
        }

    And if you run this code you will get the following output:

        // Caller1.dll:
        // FOO-V2
        // BAR-V1
        // Caller2.dll:
        // FOO-V2
        // BAR-V2

    You see that even though Caller1.dll runs against the current Common.dll assembly, where method Bar defines a default value of "V2", the output shows us the default value defined at the time Caller1.dll was compiled against the first version of Common.dll. This happens because the compiler copies the current default value into each call site, much in the same way a constant value (const keyword) is copied into a calling assembly, where changes to its value will only be reflected if you rebuild the calling assembly again. The use of default parameters in public APIs is also discouraged by Microsoft, as stated in the code analysis rule (CA1026: Default parameters should not be used).

    Read the article

  • Help with "Cannot find ContentTypeReader BB.HeightMapInfoReader, BB, Version=1.0.0.0, Culture=neutral." needed

    - by rFactor
    Hi, I have this irritating problem in XNA that I have spent my Saturday with: Cannot find ContentTypeReader BB.HeightMapInfoReader, BB, Version=1.0.0.0, Culture=neutral. It throws that when I do (within the game assembly's Renderer.cs class):

        this.terrain = this.game.Content.Load<Model>("heightmap");

    There is a heightmap.bmp and I don't think there's anything wrong with it, because I used it in a previous version before I switched to this new, better system. So, I have a GeneratedGeometryPipeline assembly that has these classes: HeightMapInfoContent, HeightMapInfoWriter, TerrainProcessor. The GeneratedGeometryPipeline assembly does not reference any other assemblies under the solution. Then I have the game assembly, which does not reference any other solution assemblies either, and has these classes: HeightMapInfo, HeightMapInfoReader. All game assembly classes are under the namespace BB and the GeneratedGeometryPipeline classes are under the namespace GeneratedGeometryPipeline. I do not understand why it does not find it. Here's some code from GeneratedGeometryPipeline.HeightMapInfoWriter:

        /// <summary>
        /// A TypeWriter for HeightMapInfo, which tells the content pipeline how to save the
        /// data in HeightMapInfo. This class should match HeightMapInfoReader: whatever the
        /// writer writes, the reader should read.
        /// </summary>
        [ContentTypeWriter]
        public class HeightMapInfoWriter : ContentTypeWriter<HeightMapInfoContent>
        {
            protected override void Write(ContentWriter output, HeightMapInfoContent value)
            {
                output.Write(value.TerrainScale);
                output.Write(value.Height.GetLength(0));
                output.Write(value.Height.GetLength(1));
                foreach (float height in value.Height)
                {
                    output.Write(height);
                }
                foreach (Vector3 normal in value.Normals)
                {
                    output.Write(normal);
                }
            }

            /// <summary>
            /// Tells the content pipeline what CLR type the
            /// data will be loaded into at runtime.
            /// </summary>
            public override string GetRuntimeType(TargetPlatform targetPlatform)
            {
                return "BB.HeightMapInfo, BB, Version=1.0.0.0, Culture=neutral";
            }

            /// <summary>
            /// Tells the content pipeline what worker type
            /// will be used to load the data.
            /// </summary>
            public override string GetRuntimeReader(TargetPlatform targetPlatform)
            {
                return "BB.HeightMapInfoReader, BB, Version=1.0.0.0, Culture=neutral";
            }
        }

    Can someone help me out?

    Read the article

  • OpenGL not rendering my images to the screen

    - by Brendan Webster
    For some reason my game isn't showing the image I am rendering to the screen. My engine is state-based, and at the beginning I set the logo, but it isn't showing on the screen. Here is my method of doing so. First I create one image instance and assign some of its preset values:

        //create one image instance for the logo background
        O_File.v_Create_Images(1);
        //set the attributes of the background
        //first image
        O_File.sImage[0].nImageDepth = -30.0f;
        O_File.sImage[0].sImageLocation = "image.bmp";
        //load the images
        O_File.v_Load_Images();

    Then I load them with DevIL:

        void C_File_Manager::v_Load_Images()
        {
            ilGenImages(1, &image);
            ilBindImage(image);
            for(int i = 0; i < sImage.size(); i++)
            {
                success = ilLoadImage(sImage[i].sImageLocation.c_str());
                if (success)
                {
                    success = ilConvertImage(IL_RGBA, IL_UNSIGNED_BYTE);
                    glGenTextures(1, &image);
                    glBindTexture(GL_TEXTURE_2D, image);
                    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
                    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
                    glTexImage2D(GL_TEXTURE_2D, 0, 4,
                                 ilGetInteger(IL_IMAGE_WIDTH), ilGetInteger(IL_IMAGE_HEIGHT),
                                 0, ilGetInteger(IL_IMAGE_FORMAT), GL_UNSIGNED_BYTE, ilGetData());
                    //assign values to the width and height of the image if they are not already assigned
                    if(sImage[i].nImageHeight == 0)
                        sImage[i].nImageHeight = ilGetInteger(IL_IMAGE_HEIGHT);
                    if(sImage[i].nImageWidth == 0)
                        sImage[i].nImageWidth = ilGetInteger(IL_IMAGE_WIDTH);
                    std::cout << sImage[i].nImageHeight << std::endl;
                    const std::string word = sImage[i].sImageLocation.c_str();
                    std::cout << sImage[i].sImageLocation.c_str() << std::endl;
                    ilLoadImage(word.c_str());
                    ilDeleteImages(1, &image);
                }
            }
        }

    and then I apply them to the screen:

        void C_File_Manager::v_Apply_Images()
        {
            glMatrixMode(GL_MODELVIEW);
            glLoadIdentity();
            for(int i = 0; i < sImage.size(); i++)
            {
                //move the image to where it should be on the screen
                glTranslatef(sImage[i].nImageX, sImage[i].nImageY, sImage[i].nImageDepth);
                //rotate image around the 3 axes
                glRotatef(sImage[i].fImageAngleX,1,0,0);
                glRotatef(sImage[i].fImageAngleY,0,1,0);
                glRotatef(sImage[i].fImageAngleZ,0,0,1);
                //scale the image
                glScalef(1,1,1);
                //center the image
                glTranslatef((sImage[i].nImageWidth/2),(sImage[i].nImageHeight/2),0);
                //draw the box that will encase the loaded image
                glBegin(GL_QUADS);
                //change the color of the loaded image
                glColor4f(1,1,1,1);
                //top left corner of image
                glNormal3f(0.0,0,0.0);
                glTexCoord2f(1.0, 0.0);
                glVertex3f(0,0,sImage[i].nImageDepth);
                //top right corner of image
                glNormal3f(1.0,0,0.0);
                glTexCoord2f(1.0, 1.0);
                glVertex3f(0,sImage[i].nImageHeight,sImage[i].nImageDepth);
                //bottom right corner of image
                glNormal3f(-1.0,0,0.0);
                glTexCoord2f(0.0, 1.0);
                glVertex3f(sImage[i].nImageWidth,sImage[i].nImageHeight,sImage[i].nImageDepth);
                //bottom left corner of image
                glNormal3f(-1.0,0,0.0);
                glTexCoord2f(0.0, 0.0);
                glVertex3f(sImage[i].nImageWidth,0,sImage[i].nImageDepth);
                glEnd();
            }
        }

    When I debug there are no errors at all, and yet the images don't show up on the screen. I have positioned the camera at (0,0,-1) and that is where the images should show up. The clipping plane is set 1 to 1000. There is probably some random problem with the code, but I just can't catch it.

    Read the article

  • Problems implementing a screen space shadow ray tracing shader

    - by Grieverheart
    Here I previously asked about the possibility of ray-tracing shadows in screen space in a deferred shader. Several problems were pointed out. One of the most important is that only visible objects can cast shadows, and objects between the camera and the shadow caster can interfere. Still, I thought it'd be a fun experiment. The idea is to calculate the view coordinates of pixels and cast a ray to the light. The ray is then traced pixel by pixel to the light and its depth is compared with the depth at the pixel. If a pixel is in front of the ray, a shadow is cast at the original pixel. At first I thought that I could use the DDA algorithm in 2D to calculate the distance 't' (in p = o + t d, where o is the origin and d the direction) to the next pixel and use it in the 3D ray equation to find the ray's z coordinate at that pixel's position. For the 2D ray, I would use the projected and biased 3D ray direction and origin. The idea was that 't' would be the same in both the 2D and 3D equations. Unfortunately, this is not the case, since the projection matrix is 4D. Thus, some tweak needs to be done to make this work. I would like to ask if someone knows of a way to do what I described above, i.e. to get from a 2D ray in texture-coordinate space to the 3D ray in screen space. I did implement a simple version of the idea, which you can see in the following video: video here Shadows may seem a bit pixelated, but that's mostly because of the size of the step in 't' I chose. And here is the shader:

        #version 330 core

        uniform sampler2D DepthMap;
        uniform vec2 projAB;
        uniform mat4 projectionMatrix;

        const vec3 light_p = vec3(-30.0, 30.0, -10.0);

        noperspective in vec2 pass_TexCoord;
        smooth in vec3 viewRay;

        layout(location = 0) out float out_AO;

        vec3 CalcPosition(void){
            float depth = texture(DepthMap, pass_TexCoord).r;
            float linearDepth = projAB.y / (depth - projAB.x);
            vec3 ray = normalize(viewRay);
            ray = ray / ray.z;
            return linearDepth * ray;
        }

        void main(void){
            vec3 origin = CalcPosition();
            if(origin.z < -60) discard;
            vec2 pixOrigin = pass_TexCoord; //tex coords
            vec3 dir = normalize(light_p - origin);
            vec2 texel_size = vec2(1.0 / 600.0);
            float t = 0.1;
            ivec2 pixIndex = ivec2(pixOrigin / texel_size);
            out_AO = 1.0;
            while(true){
                vec3 ray = origin + t * dir;
                vec4 temp = projectionMatrix * vec4(ray, 1.0);
                vec2 texCoord = (temp.xy / temp.w) * 0.5 + 0.5;
                ivec2 newIndex = ivec2(texCoord / texel_size);
                if(newIndex != pixIndex){
                    float depth = texture(DepthMap, texCoord).r;
                    float linearDepth = projAB.y / (depth - projAB.x);
                    if(linearDepth > ray.z + 0.1){
                        out_AO = 0.2;
                        break;
                    }
                    pixIndex = newIndex;
                }
                t += 0.5;
                if(texCoord.x < 0 || texCoord.x > 1.0 || texCoord.y < 0 || texCoord.y > 1.0)
                    break;
            }
        }

    As you can see, here I just increment 't' by some arbitrary factor, calculate the 3D ray and project it to get the pixel coordinates, which is not really optimal. Hopefully, I would like to optimize the code as much as possible and compare it with shadow mapping and how it scales with the number of lights. PS: Keep in mind that I reconstruct position from depth by interpolating rays through a full-screen quad.

    Read the article

  • AWS .NET SDK v2: setting up queues and topics

    - by Elton Stoneman
    Originally posted on: http://geekswithblogs.net/EltonStoneman/archive/2013/10/13/aws-.net-sdk-v2-setting-up-queues-and-topics.aspx Following on from my last post, reading from SQS queues with the new SDK is easy stuff, but linking a Simple Notification Service topic to an SQS queue is a bit more involved. The AWS model for topics and subscriptions is a bit more advanced than in Azure Service Bus. SNS lets you have subscribers on multiple different channels, so you can send a message which gets relayed to email addresses, mobile apps and SQS queues all in one go. As the topic owner, when you request a subscription on any channel, the owner needs to confirm they're happy for you to send them messages. With email subscriptions, the user gets a confirmation request from Amazon which they need to reply to before they start getting messages. With SQS, you need to grant the topic permission to write to the queue. If you own both the topic and the queue, you can do it all in code with the .NET SDK. Let's say you want to create a new topic, a new queue as a topic subscriber, and link the two together. Creating the topic is easy with the SNS client (which has an expanded name, AmazonSimpleNotificationServiceClient; compare that to the SQS class, which is just called QueueClient):

        var request = new CreateTopicRequest();
        request.Name = TopicName;
        var response = _snsClient.CreateTopic(request);
        TopicArn = response.TopicArn;

    In the response from AWS (which I'm assuming is successful), you get an ARN – Amazon Resource Name – which is the unique identifier for the topic. We create the queue using the same code from my last post, AWS .NET SDK v2: the message-pump pattern, and then we need to subscribe the queue to the topic. The topic creates the subscription request:

        var response = _snsClient.Subscribe(new SubscribeRequest
        {
            TopicArn = TopicArn,
            Protocol = "sqs",
            Endpoint = _queueClient.QueueArn
        });

    That response will give you an ARN for the subscription, which you'll need if you want to set attributes like RawMessageDelivery. Then the SQS client needs to confirm the subscription by allowing the topic to send messages to it. The SDK doesn't give you a nice mechanism for doing that, so I've extended my AWS wrapper with a method that encapsulates it:

        internal void AllowSnsToSendMessages(TopicClient topicClient)
        {
            var policy = Policies.AllowSendFormat
                .Replace("%QueueArn%", QueueArn)
                .Replace("%TopicArn%", topicClient.TopicArn);
            var request = new SetQueueAttributesRequest();
            request.Attributes.Add("Policy", policy);
            request.QueueUrl = QueueUrl;
            var response = _sqsClient.SetQueueAttributes(request);
        }

    That builds up a policy statement, which gets added to the queue as an attribute, and specifies that the topic is allowed to send messages to the queue. The statement itself is a JSON block which contains the ARN of the queue, the ARN of the topic, and an Allow effect for the sqs:SendMessage action:

        public const string AllowSendFormat = @"{
          ""Statement"": [
            {
              ""Sid"": ""MySQSPolicy001"",
              ""Effect"": ""Allow"",
              ""Principal"": { ""AWS"": ""*"" },
              ""Action"": ""sqs:SendMessage"",
              ""Resource"": ""%QueueArn%"",
              ""Condition"": {
                ""ArnEquals"": { ""aws:SourceArn"": ""%TopicArn%"" }
              }
            }
          ]
        }";

    There's a new gist with an updated QueueClient and a new TopicClient here: Wrappers for the SQS and SNS clients in the AWS SDK for .NET v2. Both clients have an Ensure() method which creates the resource, so if you want to create a topic and a subscription you can use:

        var topicClient = new TopicClient("BigNews", "ImListening");

    And the topic client has a Subscribe() method, which calls into the message pump on the queue client:

        topicClient.Subscribe(x => Log.Debug(x.Body));
        var message = {}; //etc.
        topicClient.Publish(message);

    So you can isolate all the fiddly bits and use SQS and SNS with a similar interface to the Azure SDK.

    Read the article

  • WebLogic Partner Community Newsletter June 2013

    - by JuergenKress
    Dear WebLogic Partner Community member, This month's newsletter contains tons of Java material with the availability of Java EE 7, including many articles in the May/June Java Magazine and the Duke Award nomination! Thanks for all your efforts to become certified and Specialized. All the experts who achieved the WebLogic Server 12c Certified Implementation Specialist or ADF 11g Certified Implementation Specialist certification can download a logo for their blog or business card at the Competence Center, and all the companies who achieved a WebLogic and ADF Specialization can request a nice plaque for their office. Summer time is training time – make sure you attend our Fusion Middleware Summer Camps or one of our upcoming Exalogic Implementation Workshops by PTS in Spain, Turkey and the Middle East. For those who cannot make it, we offer plenty of online courses, like the new ADF Academy, or the webcast The Value of Engineered Systems for SAP. At our WebLogic Community Workspace (WebLogic Community membership required) you can find the Engineered Systems Strategy presentation. Thanks to the community for all the WebLogic best-practice papers and tools, like Automated provision of Oracle WebLogic Server Platform & Frank's Coherence videos on YouTube & Create and deploy applications on the Oracle Java Cloud & WebLogic Application redeployment using shared libraries - without downtime & Oracle Traffic Director. The first test deployments of WebLogic on Oracle Database Appliance are under way; thanks to all who shared their experience: Sizing & Configuration and Traffic Director. Virtualised Oracle Database Appliance Proof of Concept - #1 Planning from Simon Haslam. It would be great if you could also share your experience via twitter @wlscommunity! In the ADF section of the newsletter you will find Tuning Application Module Pools and Connection Pools & List View - Cool Looking ADF PS6 Component for Collections & Insider Essentials & User Interface & Skinning & an Oracle Forms to ADF Modernization reference. Have a great summer! Jürgen Kress Oracle WebLogic Partner Adoption EMEA To read the newsletter please visit http://tinyurl.com/WebLogicnewsJune2013 (OPN Account required) To become a member of the WebLogic Partner Community please register at http://www.oracle.com/partners/goto/wls-emea (OPN account required). If you need support with your account please contact the Oracle Partner Business Center. Blog Twitter LinkedIn Mix Forum Wiki Technorati Tags: WebLogic Community newsletter,newsletter,WebLogic,WebLogic Community,Oracle,OPN,Jürgen Kress

    Read the article

  • Only draw visible objects to the camera in 2D

    - by Deukalion
    I have a Map; each map has an array of Ground, and each Ground consists of an array of VertexPositionTexture plus a texture-name reference, so it renders a texture at those points (as a shape, through triangulation). Now when I render my map I only want to get a list of the objects that are visible to the camera (so I won't loop through more than I have to). Structs:

        public struct Map
        {
            public Ground[] Ground { get; set; }
        }

        public struct Ground
        {
            public int[] Indexes { get; set; }
            public VertexPositionNormalTexture[] Points { get; set; }
            public Vector3 TopLeft { get; set; }
            public Vector3 TopRight { get; set; }
            public Vector3 BottomLeft { get; set; }
            public Vector3 BottomRight { get; set; }
        }

        public struct RenderBoundaries<T>
        {
            public BoundingBox Box;
            public T Items;
        }

    When I load a map:

        foreach (Ground ground in CurrentMap.Ground)
        {
            Boundaries.Add(new RenderBoundaries<Ground>()
            {
                Box = BoundingBox.CreateFromPoints(new Vector3[]
                {
                    ground.TopLeft, ground.TopRight, ground.BottomLeft, ground.BottomRight
                }),
                Items = ground
            });
        }

    TopLeft, TopRight, BottomLeft and BottomRight are simply the locations of each corner that the shape makes - a rectangle. When I try to loop through only the objects that are visible, I do this in my Draw method:

        public int Draw(GraphicsDevice device, ICamera camera)
        {
            BoundingFrustum frustum = new BoundingFrustum(camera.View * camera.Projection);
            // Visible count
            int count = 0;
            EffectTexture.World = camera.World;
            EffectTexture.View = camera.View;
            EffectTexture.Projection = camera.Projection;
            foreach (EffectPass pass in EffectTexture.CurrentTechnique.Passes)
            {
                pass.Apply();
                foreach (RenderBoundaries<Ground> render in
                         Boundaries.Where(m => frustum.Contains(m.Box) != ContainmentType.Disjoint))
                {
                    // Draw ground
                    count++;
                }
            }
            return count;
        }

    When I try adding just one ground and then moving the camera so the ground is out of frame, it still returns 1, which means it still gets drawn even though it's not within the camera's view. Am I doing something wrong, or could it be because of my camera? Any ideas why it doesn't work?

    Read the article

  • Oracle SOA Partner Community Forum Lisbon, Portugal – April 21st 2010

    - by Jürgen Kress
    We would like to invite you to attend our SOA Partner Community Forum that will be held in Lisbon, April 21, 2010. The Oracle SOA Partner Community Forum is a wonderful opportunity to: Meet with Oracle SOA and BPM Product Management Exchange thoughts and knowledge with SOA and BPM experts Learn from successful SOA implementations Network within the Oracle SOA Partner Community During this highly informative event you can learn about partner success stories, participate in an array of breakout sessions, exchange information with other partners and enjoy a vibrant panel discussion. Places are limited, so register today. Registration only takes a few minutes and it is free of charge. By registering you confirm that you will attend the event. The seminar is free. In the event that you cancel your registration after April 16th 2010, Oracle may request that you pay a late cancellation fee of €150. Please visit our website for further information. Alternatively, if you require assistance or have any queries please contact Jürgen Kress.

    Agenda:
    10:00 Welcome & Introduction
    10:15 SOA Cloud presentation
    11:15 SOA Partner Sales Campaign
    12:30 Lunch break
    13:15 Partner Reference Case
    14:15 BPMN 2.0
    15:00 Cocktail reception

    Location: Lagoas Park Hotel, 2740-245 Porto Salvo, Oeiras. For partners with BPM 11g opportunities we will offer an advanced workshop on Thursday April 22nd and Friday April 23rd, hosted by Clemens Utschig-Utschig. If you are interested please contact Jürgen Kress. Quotes from previous SOA Partner Community Forums: "The SOA Partner Community Forum was a first-rate event that provided a balanced agenda of vendor-specific and vendor-neutral content pertaining to modern-day service-oriented computing technologies and practices. I enjoyed the opportunity to provide an objective voice on the topics I consider most important for today's IT practitioners to fully leverage the many patterns, principles, and service technology innovations that comprise the next-generation SOA platform." Thomas Erl, SOA Systems Inc., SOASchool.com "The Community is an excellent forum for Partners to hear about each other's success stories on SOA, especially BPEL and ODI." Jørn F. Schurink, Competence Expert Oracle Technologies, Logica "The Community is the best source for information around Oracle SOA – a wonderful platform with many interesting contacts and discussions." Torsten Winterberg, Opitz Consulting "The regular meetings of the SOA Partner Community are a perfectly organized platform for learning the latest in Oracle SOA tooling from extraordinary speakers and for vivid discussions with practitioners about SOA challenges and design solutions. This is the best opportunity to build and deepen a network with the brightest and most passionate protagonists in the Oracle SOA world in EMEA." Hajo Normann, HP Services Technorati Tags: soa partner community forum,soa,event

    Read the article

  • SQL SERVER – QUOTED_IDENTIFIER ON/OFF Explanation and Example – Question on Real World Usage

    - by Pinal Dave
    This is a follow-up to my earlier post, SQL SERVER – QUOTED_IDENTIFIER ON/OFF and ANSI_NULL ON/OFF Explanation. I wrote that post six years ago with the intention of following it up, and today, while going over my to-do list, I was surprised to find that the item was still there after all this time. The earlier post explained the Quoted Identifier and ANSI Null settings; in this post we will see a quick example of the Quoted Identifier setting. Before we continue, let us refresh what QUOTED_IDENTIFIER does.

    QUOTED_IDENTIFIER ON/OFF

    This option specifies how double quotes are treated. When it is ON, anything wrapped in double quotation marks is treated as a SQL Server identifier (an object name). This can be useful when identifiers are also SQL Server reserved words. For example:

        -- The following will work
        SET QUOTED_IDENTIFIER ON
        GO
        CREATE DATABASE "Test1"
        GO

        -- The following will throw an error: Incorrect syntax near 'Test2'.
        SET QUOTED_IDENTIFIER OFF
        GO
        CREATE DATABASE "Test2"
        GO

    This setting is particularly helpful when working with reserved keywords. If you had to create a database named VARCHAR, INT, or DATABASE, you could put double quotes around the name and turn quoted identifiers on. Personally, I do not think anybody will ever name a database after a reserved keyword intentionally, as it would just lead to confusion.

    Here is another example, which gives further clarity on how the setting interacts with a SELECT statement:

        -- The following will throw an error: Invalid column name 'Column'.
        SET QUOTED_IDENTIFIER ON
        GO
        SELECT "Column"
        GO

        -- The following will work
        SET QUOTED_IDENTIFIER OFF
        GO
        SELECT "Column"
        GO

    Personally, I always use the following method to create a database, as it works regardless of the quoted identifier setting and always creates the object with the name I intend:

        CREATE DATABASE [Test3]

    Where I believe this setting matters in the real world is when we have a script generated from another database where QUOTED_IDENTIFIER was ON and we now have to execute that same script in our own environment.

    Question to you: as mentioned earlier, I personally have never used this feature, and I believe it exists to support scripts generated in or for another SQL Server database. Do you have a real-world scenario where we need to turn Quoted Identifiers on or off?

    Click to Download Scripts

    Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology

    Read the article

  • SQL SERVER – Get 2 of My Books FREE at Koenig Tech Day – Where Technologies Converge!

    - by pinaldave
    As a regular reader of my blog, you must be aware that I love to write books and talk about the subjects they cover. The founders of Koenig Solutions are very old friends of mine; I have known them for many years, and they have been among the biggest supporters of my books. This coming weekend they are holding a technology event at their Bangalore location. Every attendee of the event will get a set of two books worth Rs. 450: 'SQL Server Interview Questions And Answers' and 'SQL Wait Stats Joes 2 Pros'. I am going to present as well, covering a couple of topics from the books, and I am very confident that every attendee will have a great time.

    I will be covering the following subject:

    SQL Server Tricks and Tips for Blazing Fast Performance

    Slow-running queries are the most common problem developers face while working with SQL Server. While it is easy to blame SQL Server for unsatisfactory performance, the issue more often lies in the way queries have been written and in how SQL Server has been set up. The session will focus on ways of identifying the problems that slow down SQL Server, and tricks to fix them. Developers will walk out with scripts and knowledge that can be applied to their servers immediately after the session. Once the session is over, I will point out the exact locations in the books where you can continue learning.

    I am pretty excited; this is more like a book reading, but in an entirely different format. The one-day event will cover four technologies in four separate interactive sessions:

    - Microsoft SQL Server
    - Security
    - VMware/Virtualization
    - ASP.NET MVC

    Date of the event: Dec 15, 2012, 9 AM to 6 PM.

    Location of the event: Koenig Solutions Ltd., #47, 4th Block, 100 Feet Road, 3rd Floor, Opp. Shanthi Sagar, Koramangala, Bangalore 560034. Mobile: 09008096122, Office: 080-41127140.

    The organizers have informed me that seats for this event are very limited and that the technical session based on my books starts sharp at 9 AM. If you show up late, there is a chance you will not get a seat. Registration for the event is a MUST. Please visit this link for further information.

    Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, SQLAuthority Author Visit, SQLAuthority News, T SQL, Technology

    Read the article

  • Business Strategy - Google Case Study

    Business strategy, as defined by SMBTN.com, is a term used in business planning that implies a careful selection and application of resources to obtain a competitive advantage in anticipation of future events or trends. In more general terms, business strategy is positioning a company so that it has the greatest possible competitive advantage over others in the markets and industries in which it participates. This process involves making corporate decisions about which markets to serve with goods and services, pricing, acceptable quality levels, and how to interact with others in the marketplace. The primary objective of business strategy is to create and increase value for all of a company's shareholders and stakeholders through the creation of customer value.

    According to InformationWeek.com, Google has a distinctive technology advantage over competitors like Microsoft, eBay, Amazon, and Yahoo. Google utilizes custom high-performance systems that are cost-efficient because they can scale to extreme workloads, and this hardware gives the company a huge cost advantage over its competitors. In addition, InformationWeek.com interviewed Stephen Arnold, who stated that Google's programmers are 50%-100% more productive than programmers working for its competitors; he based this estimate on Google's competitors having to spend up to four times as much just to keep up.

    Beyond its technological advantage, Google has also developed a decentralized management scheme in which employees report directly to multiple managers and team project leaders. This spreads responsibility for the technology department across multiple senior-level engineers and removes the need for a single department head to oversee the department's activities. This is a departure from the standard management style, in which a department head such as a CIO or CTO oversees the department's global initiatives and business functionality, with direction passed down through middle management and implemented by programmers, business analysts, network administrators, and database administrators.

    It goes without saying that an IT professional's responsibilities at Google would be shaped by this technological advantage and management strategy: working within the department means designing, developing, and supporting the high-performance systems while reporting to multiple managers and project leaders on a regular basis. And since Google was established on, and is driven by, new and emerging technology, all other departments are directly impacted by the technology department; in fact, they must cater to it, since it is a huge driving force in Google's success.

    References:
    http://www.smbtn.com/smallbusinessdictionary/#b
    http://www.informationweek.com/news/software/linux/showArticle.jhtml?articleID=192300292&pgno=1&queryText=&isPrev=

    Read the article

  • What are the tradeoffs for using 'partial view models'?

    - by Kenny Evitt
    I've become aware of an itch due to some non-DRY code pertaining to view model classes in an (ASP.NET) MVC web application, and I'm thinking of scratching it by organizing code in various 'partial view model' classes. By partial-view-model, I mean a class that relates to a view model class the way a partial view relates to a view: a way to encapsulate common info and behavior. To strengthen the analogy, and to help visually organize the code in my IDE, I was thinking of naming the partial-view-model classes with a _ prefix, e.g. _ParentItemViewModel.

    As a slightly more concrete example of why I'm thinking along these lines, imagine I have a domain-model-entity class ParentItem, and the user-friendly descriptive text that identifies these items to users is complex enough that I'd like to encapsulate that code in a method of a _ParentItemViewModel class. I could then include an object, or a collection of objects, of that class in every view model class for every view that needs to reference a parent item. For example, ChildItemViewModel could have a ParentItem property of type _ParentItemViewModel, so that in my ChildItemView view I can use @Model.ParentItem.UserFriendlyDescription wherever it's needed: breadcrumbs, links, etc.

    Edited 2014-02-06 09:56 -05

    As a second example, imagine I have entity classes SomeKindOfBatch, SomeKindOfBatchDetail, and SomeKindOfBatchDetailEvent, plus a view model class and at least one view for each of those entities. The application covers a lot more than just some-kind-of-batches, so it wouldn't be useful or sensible to include info about a specific some-kind-of-batch in all of the project's view model classes. But, as in the example above, I have some code, say for generating a string that identifies a some-kind-of-batch in a user-friendly way, and I'd like to be able to use it in several views, say as breadcrumb text or text for a link.

    As a third example, I'll describe a pattern I'm currently using. I have a Contact entity class, but it's a fat class, with dozens of properties and at least a dozen references to other fat classes. Many view model classes need properties for referencing a specific contact, and most of those also need collections of contacts, e.g. possible contacts to be referenced for some kind of relationship. Most of these view model classes only need a small fraction of the available contact info, basically just an ID and some kind of user-friendly description (i.e. a friendly name). It seems pretty useful to have a 'partial view model' class for contacts that all of these other view model classes can use.

    Maybe I'm just misunderstanding 'view model class'. I understand a view model class as always corresponding to a view, but maybe I'm assuming too much.
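
    To make the idea concrete, here is a minimal sketch of the pattern described above, reusing the names from the question. The specific properties and the formatting logic are hypothetical; the point is only that the user-friendly description is written once and composed into any view model that needs it.

        // A sketch of the 'partial view model' idea; properties are illustrative.
        public class _ParentItemViewModel
        {
            public int Id { get; set; }
            public string Name { get; set; }
            public string Code { get; set; }

            // The complex descriptive text is encapsulated in one place...
            public string UserFriendlyDescription
            {
                get { return string.Format("{0} ({1})", Name, Code); }
            }
        }

        public class ChildItemViewModel
        {
            public int Id { get; set; }
            public string Title { get; set; }

            // ...and composed rather than duplicated, so ChildItemView can render
            // @Model.ParentItem.UserFriendlyDescription for breadcrumbs, links, etc.
            public _ParentItemViewModel ParentItem { get; set; }
        }

    The same shape would cover the Contact example: a slim partial view model (say, a hypothetical _ContactViewModel) exposing just an ID and a friendly name, referenced by the many view models that only need that much.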

    Read the article
