Search Results

Search found 4593 results on 184 pages for 'operator equal'.


  • More SQL Smells

    - by Nick Harrison
    Let's continue exploring some of the SQL Smells from the list Phil has been putting together.

    Datatype mismatches in predicates that rely on implicit conversion (Plamen Ratchev)
    This is a great example poking holes in the whole theory of "if it works, it's not broken." Queries like this will generally work and give the correct response. In fact, without careful analysis, you may be completely oblivious that there is even a problem. This subtle little problem will needlessly complicate queries and slow them down regardless of the indexes applied. Consider this example:

        CREATE TABLE [dbo].[Page](
            [PageId] [int] IDENTITY(1,1) NOT NULL,
            [Title] [varchar](75) NOT NULL,
            [Sequence] [int] NOT NULL,
            [ThemeId] [int] NOT NULL,
            [CustomCss] [text] NOT NULL,
            [CustomScript] [text] NOT NULL,
            [PageGroupId] [int] NOT NULL);

        CREATE PROCEDURE PageSelectBySequence
        (
            @sequenceMin smallint,
            @sequenceMax smallint
        )
        AS
        BEGIN
            SELECT [PageId], [Title], [Sequence], [ThemeId], [CustomCss], [CustomScript], [PageGroupId]
            FROM [CMS].[dbo].[Page]
            WHERE Sequence BETWEEN @sequenceMin AND @sequenceMax
        END

    Note that the Sequence column is defined as int while the sequence parameters are defined as smallint. The problem is that the database may have to do a lot of type conversions to evaluate the query. In some cases, this may even negate the indexes that you have in place.

    Using correlated subqueries instead of a join (Dave_Levy / Plamen Ratchev)
    There are two main problems here. The first is a little subjective: since this is a non-standard way of expressing the query, it is harder to understand. The other problem is much more objective and potentially problematic: you are taking much of the control away from the optimizer. Written properly, such a query may well outperform a corresponding query written with traditional joins, but more likely than not, performance will degrade. Whenever you assume that you know better than the optimizer, you will most likely be wrong. This is the fundamental problem with any hint. Consider a query like this:

        SELECT Page.Title, Page.Sequence, Page.ThemeId, Page.CustomCss, Page.CustomScript,
            PageEffectParam.Name, PageEffectParam.Value,
            (SELECT EffectName FROM dbo.Effect WHERE EffectId = PageEffect.EffectId) AS EffectName
        FROM Page
        INNER JOIN PageEffect ON Page.PageId = PageEffect.PageId
        INNER JOIN PageEffectParam ON PageEffect.PageEffectId = PageEffectParam.PageEffectId

    This can and should be written as:

        SELECT Page.Title, Page.Sequence, Page.ThemeId, Page.CustomCss, Page.CustomScript,
            PageEffectParam.Name, PageEffectParam.Value, EffectName
        FROM Page
        INNER JOIN PageEffect ON Page.PageId = PageEffect.PageId
        INNER JOIN PageEffectParam ON PageEffect.PageEffectId = PageEffectParam.PageEffectId
        INNER JOIN dbo.Effect ON dbo.Effect.EffectId = PageEffect.EffectId

    The correlated query may just as easily show up in the WHERE clause; it's not a good idea in the SELECT clause or the WHERE clause.

    Few or no comments
    This one is a bit more complicated and controversial. All comments are not created equal. Some comments are helpful and need to be included; others are not necessary and may indicate a problem. I tend to follow the rule of thumb that comments that explain why are good, while comments that explain how are bad. Many people may be shocked to hear the idea of a bad comment, but hear me out. If a comment is needed to explain what is going on or how it works, the logic is too complex and needs to be simplified. Comments that explain why the SQL is needed are good. Comments that explain where the SQL is used are good. Comments that explain how tables are related should not be needed if the SQL is well written; if they are needed, you need to consider reworking the SQL or simplifying your data model.

    Use of functions in a WHERE clause (Anil Das)
    Calling a function in the WHERE clause will often negate the indexing strategy. The function will be called for every record considered, which will often force a full table scan on the tables affected. Calling a function does not guarantee a full table scan, but there is a good chance that it will cause one. If you find that you often need to write queries using a particular function, you may need to add a column to the table that has the function already applied.
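    As an aside, one common cure for that last smell is worth sketching. The example below is my own illustration, not from Phil's list, and the table and column names are hypothetical: SQL Server can index a persisted computed column, so the function is evaluated once per write instead of once per row per query.

        -- Persist the function result as a column, then index it
        ALTER TABLE dbo.Customer
            ADD EmailDomain AS RIGHT(Email, CHARINDEX('@', REVERSE(Email)) - 1) PERSISTED;

        CREATE INDEX IX_Customer_EmailDomain ON dbo.Customer (EmailDomain);

        -- This predicate can now seek the index instead of calling the function per row
        SELECT CustomerId
        FROM dbo.Customer
        WHERE EmailDomain = 'example.com';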


  • Unable to build my c++ code with g++ 4.6.3

    - by Mriganka
    I am facing multiple issues with building my C++ code on Ubuntu 12.04. This code was building and running fine on RH Enterprise. I am using g++ 4.6.3. Here's the output of g++ -v:

        g++ -v
        Using built-in specs.
        COLLECT_GCC=g++
        COLLECT_LTO_WRAPPER=/usr/lib/gcc/i686-linux-gnu/4.6/lto-wrapper
        Target: i686-linux-gnu
        Configured with: ../src/configure -v --with-pkgversion='Ubuntu/Linaro 4.6.3-1ubuntu5' --with-bugurl=file:///usr/share/doc/gcc-4.6/README.Bugs --enable-languages=c,c++,fortran,objc,obj-c++ --prefix=/usr --program-suffix=-4.6 --enable-shared --enable-linker-build-id --with-system-zlib --libexecdir=/usr/lib --without-included-gettext --enable-threads=posix --with-gxx-include-dir=/usr/include/c++/4.6 --libdir=/usr/lib --enable-nls --with-sysroot=/ --enable-clocale=gnu --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-gnu-unique-object --enable-plugin --enable-objc-gc --enable-targets=all --disable-werror --with-arch-32=i686 --with-tune=generic --enable-checking=release --build=i686-linux-gnu --host=i686-linux-gnu --target=i686-linux-gnu
        Thread model: posix
        gcc version 4.6.3 (Ubuntu/Linaro 4.6.3-1ubuntu5)

    Here's a sample of my code:

        #include "Word.h"
        #include <string>

        using namespace std;

        pthread_mutex_t Word::_lock = PTHREAD_MUTEX_INITIALIZER;

        Word::Word(): _occurrences(1)
        {
            memset(_buf, 0, 25);
        }

        Word::Word(char *str): _occurrences(1)
        {
            memset(_buf, 0, 25);
            if (str != NULL)
            {
                strncpy(_buf, str, strlen(str));
            }
        }

    Neither g++ -c -ansi, g++ -c -std=c++98 nor g++ -c -std=c++03 is able to build the code correctly. I get the following compilation errors:

        mriganka@ubuntu:~/WordCount$ make
        g++ -c -g -ansi Word.cpp -o Word.o
        Word.cpp: In constructor ‘Word::Word()’:
        Word.cpp:10:21: error: ‘memset’ was not declared in this scope
        Word.cpp: In constructor ‘Word::Word(char*)’:
        Word.cpp:16:21: error: ‘memset’ was not declared in this scope
        Word.cpp:19:34: error: ‘strlen’ was not declared in this scope
        Word.cpp:19:35: error: ‘strncpy’ was not declared in this scope
        Word.cpp: In member function ‘void Word::operator=(const Word&)’:
        Word.cpp:37:42: error: ‘strlen’ was not declared in this scope
        Word.cpp:37:43: error: ‘strncpy’ was not declared in this scope
        Word.cpp: In copy constructor ‘Word::Word(const Word&)’:
        Word.cpp:44:21: error: ‘memset’ was not declared in this scope
        Word.cpp:45:52: error: ‘strlen’ was not declared in this scope
        Word.cpp:45:53: error: ‘strncpy’ was not declared in this scope

    So basically g++ 4.6.3 on Ubuntu 12.04 is not able to recognize the standard C++ headers, and I am not finding a way out of this situation.

    Second problem: in order to make progress, I included <string.h> instead of <string>. But now I am facing linking errors with my message queue and pthread library functions. Here's the error that I am getting:

        mriganka@ubuntu:~/WordCount$ make
        g++ -c -g -ansi Word.cpp -o Word.o
        g++ -lrt -I/usr/lib/i386-linux-gnu Word.o HashMap.o main.o -o word_count
        main.o: In function `main':
        /home/mriganka/WordCount/main.cpp:75: undefined reference to `pthread_create'
        /home/mriganka/WordCount/main.cpp:90: undefined reference to `mq_open'
        /home/mriganka/WordCount/main.cpp:93: undefined reference to `mq_getattr'
        /home/mriganka/WordCount/main.cpp:113: undefined reference to `mq_send'
        /home/mriganka/WordCount/main.cpp:123: undefined reference to `pthread_join'
        /home/mriganka/WordCount/main.cpp:129: undefined reference to `mq_close'
        /home/mriganka/WordCount/main.cpp:130: undefined reference to `mq_unlink'
        main.o: In function `count_words(void*)':
        /home/mriganka/WordCount/main.cpp:151: undefined reference to `mq_open'
        /home/mriganka/WordCount/main.cpp:154: undefined reference to `mq_getattr'
        /home/mriganka/WordCount/main.cpp:162: undefined reference to `mq_timedreceive'
        collect2: ld returned 1 exit status

    Here's my makefile:

        CC=g++
        CFLAGS=-c -g -ansi
        LDFLAGS=-lrt
        INC=-I/usr/lib/i386-linux-gnu
        SOURCES=Word.cpp HashMap.cpp main.cpp
        OBJECTS=$(SOURCES:.cpp=.o)
        EXECUTABLE=word_count

        all: $(SOURCES) $(EXECUTABLE)

        $(EXECUTABLE): $(OBJECTS)
        	$(CC) $(LDFLAGS) $(INC) -pthread $(OBJECTS) -o $@

        .cpp.o:
        	$(CC) $(CFLAGS) $< -o $@

        clean:
        	rm -f *.o word_count

    Please help me to resolve both issues. I have searched online relentlessly for a solution to these problems, but no one seems to have encountered them.
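    For what it's worth, both symptoms have well-known causes: since GCC 4.3 the libstdc++ headers no longer pull in <cstring> transitively, so memset, strlen and strncpy must be declared explicitly; and with the GNU linker, libraries must appear after the object files that reference them. A minimal sketch of the conventional fix, reusing the file names from the post (everything else is an assumption):

        // Word.cpp -- declare the C string functions explicitly
        #include "Word.h"
        #include <cstring>   // memset, strlen, strncpy; no longer included transitively since GCC 4.3
        #include <string>

        # Makefile -- put the libraries after the objects that use them
        $(EXECUTABLE): $(OBJECTS)
        	$(CC) $(INC) $(OBJECTS) -o $@ -pthread -lrt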


  • A Closer Look at the HiddenInput Attribute in MVC 2

    - by Steve Michelotti
    MVC 2 includes an attribute for model metadata called the HiddenInput attribute. The typical usage of the attribute looks like this (line #3 below):

        1: public class PersonViewModel
        2: {
        3:     [HiddenInput(DisplayValue = false)]
        4:     public int? Id { get; set; }
        5:     public string FirstName { get; set; }
        6:     public string LastName { get; set; }
        7: }

    So if you displayed your PersonViewModel with Html.EditorForModel() or Html.EditorFor(m => m.Id), the framework would detect the [HiddenInput] attribute metadata and produce HTML like this:

        1: <input id="Id" name="Id" type="hidden" value="21" />

    This is pretty straightforward and allows an elegant way to keep the technical key for your model (e.g., a primary key from the database) in the HTML so that everything is wired up correctly when the form is posted to the server, while of course not displaying this value visually to the end user. However, when I was giving a recent presentation, a member of the audience asked me (quite reasonably), “When would you ever set DisplayValue equal to true when using a HiddenInput?” To which I responded, “Well, it’s an edge case. There are sometimes when…er…um…people might want to…um…display this value to the user.” It was quickly apparent to me (and I’m sure everyone else in the room) what a terrible answer this was. I realized I needed to have a much better answer here. First off, let’s look at what is produced if we change our view model to use “true” (which is equivalent to specifying [HiddenInput], since “true” is the default) on line #3:

        1: public class PersonViewModel
        2: {
        3:     [HiddenInput(DisplayValue = true)]
        4:     public int? Id { get; set; }
        5:     public string FirstName { get; set; }
        6:     public string LastName { get; set; }
        7: }

    This will produce the following HTML if rendered from Html.EditorForModel() in your view:

        1: <div class="editor-label">
        2:     <label for="Id">Id</label>
        3: </div>
        4: <div class="editor-field">
        5:     21<input id="Id" name="Id" type="hidden" value="21" />
        6:     <span class="field-validation-valid" id="Id_validationMessage"></span>
        7: </div>

    The key is line #5. We get the text of “21” (which happened to be my DB Id in this instance) and also a hidden input element (again with “21”). So the question is, why would one want to use this? The best answer I’ve found is contained in this MVC 2 whitepaper:

        When a view lets users edit the ID of an object and it is necessary to display the value as well as to provide a hidden input element that contains the old ID so that it can be passed back to the controller.

    Well, that actually makes sense. Yes, it seems like something that would happen *rarely*, but for those instances it would enable the scenario easily. It’s effectively equivalent to doing this in your view:

        1: <%: Html.LabelFor(m => m.Id) %>
        2: <%: Model.Id %>
        3: <%: Html.HiddenFor(m => m.Id) %>

    But it allows you to specify it in metadata on your view model (and thereby take advantage of templated helpers like Html.EditorForModel() and Html.EditorFor()) rather than having to explicitly specify everything in your view.


  • Using Lambdas for return values in Rhino.Mocks

    - by PSteele
    In a recent StackOverflow question, someone showed some sample code they’d like to be able to use. The particular syntax they used isn’t supported by Rhino.Mocks, but it was an interesting idea that I thought could be easily implemented with an extension method.

    Background: when stubbing a method return value, Rhino.Mocks supports the following syntax:

        dependency.Stub(s => s.GetSomething()).Return(new Order());

    The method signature is generic and therefore you get compile-time type checking that the object you’re returning matches the return value defined by the “GetSomething” method. You could also have Rhino.Mocks execute arbitrary code using the “Do” method:

        dependency.Stub(s => s.GetSomething()).Do((Func<Order>) (() => new Order()));

    This requires the cast though. It works, but isn’t as clean as the original poster wanted. They showed a simple example of something they’d like to see:

        dependency.Stub(s => s.GetSomething()).Return(() => new Order());

    Very clean, simple and no casting required. While Rhino.Mocks doesn’t support this syntax, it’s easy to add it via an extension method. The Rhino.Mocks “Stub” method returns an IMethodOptions<T>. We just need to accept a Func<T> and use that as the return value. At first, this would seem straightforward:

        public static IMethodOptions<T> Return<T>(this IMethodOptions<T> opts, Func<T> factory)
        {
            opts.Return(factory());
            return opts;
        }

    And this would work and would provide the syntax the user was looking for. But the problem with this is that you lose the late-bound semantics of a lambda. The Func<T> is executed immediately and stored as the return value. At the point you’re setting up your mocks and stubs (the “Arrange” part of “Arrange, Act, Assert”), you may not want the lambda executing – you probably want it delayed until the method is actually executed and Rhino.Mocks plugs in your return value. So let’s make a few small tweaks:

        public static IMethodOptions<T> Return<T>(this IMethodOptions<T> opts, Func<T> factory)
        {
            opts.Return(default(T)); // required for Rhino.Mocks on non-void methods
            opts.WhenCalled(mi => mi.ReturnValue = factory());
            return opts;
        }

    As you can see, we still need to set up some kind of return value or Rhino.Mocks will complain as soon as it intercepts a call to our stubbed method. We use the “WhenCalled” method to set the return value equal to the execution of our lambda. This gives us the delayed execution we’re looking for and a nice syntax for lambda-based return values in Rhino.Mocks.
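    To round this out, a hedged usage sketch (IOrderService is a hypothetical interface; GenerateStub is the standard Rhino.Mocks stub factory): with the extension method in scope, the factory lambda now runs when the stubbed method is actually called, not when the stub is arranged.

        var dependency = MockRepository.GenerateStub<IOrderService>();
        dependency.Stub(s => s.GetSomething()).Return(() => new Order());

        var order = dependency.GetSomething(); // the factory executes here, at call time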


  • WEB203 – Jump into Silverlight!… and Become Effective Immediately with Tim Huckaby, Fou

    - by Robert Burger
    Getting ready for the good stuff. Definitely wish there were more Silverlight and WCF RIA sessions, but this is a start. Was lucky to get a coveted power-enabled seat. Luckily, thanks to my trusty but slow Verizon data card, I can get these notes out amidst a total Internet outage here. This is the second breakout session of the day, and is by far standing-room only. I stepped out before the session started to get a cool Diet COKE and wouldn’t have gotten back in if I didn’t already have a seat. Tim says this is an intro session and that he’s been begging for intro sessions at TechEd for years; looking at this audience, he thinks the demand is there. Admittedly, I didn’t know this was an intro session, or I might have gone elsewhere. But it was the very first Silverlight session, so I had to be here. Tim says he will be providing a very good, comprehensive reference application at the end of the presentation. He has just demoed it, and it is a full CRUD-based Sales Manager application based on… AdventureWorks!

    Session Agenda
    What it is / How to get started
    Declarative Programming
    Layout and Controls, Events and Commands
    Working with Data
    Adding Style to Your Application

    Silverlight… “WPF Light”
    Why is the download 4.2MB? Because the direct competitor is a 4.2MB download. There is no technical reason it is not the entire framework. It is purely to “be competitive”.

    Getting Started
    Get all of the following downloads from www.silverlight.net/getstarted
    Install VS2010 or Visual Web Developer Express 2010
    Install Silverlight 4 Tools for VS2010
    Install Expression Blend 4
    Install the Silverlight 4 Toolkit

    Reference Application Features
    Uses the MVVM pattern – a way to take data access code that would normally be inline within the UI and place it in nice data access libraries
    Images loaded dynamically from the database, converting GIF to PNG because Silverlight does not support GIF
    LINQ to SQL is the data access model
    WCF is the data provider and is using binary message encoding

    Declarative Programming
    XAML replaces code for UI representation
    Attributes control Layout and Style
    Event handlers wired up in XAML
    Declarative Data Binding

    Layout Overview
    Content rendering flows inside of the parent
    Fixed positioning (Canvas) is seldom used
    Panels are used to house content
    Margins and Padding over fixed size

    Panels
    StackPanel – Arranges child elements into a single line oriented horizontally or vertically
    Grid – A flexible grid area that consists of rows and columns
    Canvas – An area where positions are specifically fixed
    WrapPanel (in Toolkit) – Positions child elements in sequential position left to right and top to bottom
    DockPanel (in Toolkit) – Positions child controls within a dockable area

    Positioning
    Horizontal and Vertical Alignment
    Margin – Separates an element from neighboring elements
    Padding – Enlarges the effective size of an element by a thickness

    Controls Overview
    Not all controls are created equal
    Silverlight is a subset of WPF, so many WPF controls do not exist in the core Silverlight release
    The Silverlight Toolkit continues to add controls, but they are released in different quality bands
    Plenty of good 3rd party controls to fill the gaps
    Windows Phone 7 is to have 95% of controls available in Silverlight Core and Toolkit

    Events and Commands
    Standard .NET Events
    Routed Events
    Commands – based on the ICommand interface – a logical action that can be invoked in several ways

    Adding Style to Your Application
    Resource Dictionaries – Contain a hash table of key/value pairs. Silverlight can only use Static Resources whereas WPF can also use Dynamic Resources
    Visual State Manager
    Silverlight 4 supports implicit styles
    ResourceDictionary.MergedDictionaries combines many different file-based resources

    Downloads
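    Since those last two styling bullets are terse, here is a hedged XAML sketch of what they describe (the file name is hypothetical): a file-based resource dictionary merged via ResourceDictionary.MergedDictionaries, plus a Silverlight 4 implicit style, which applies to every matching control in scope because it has no x:Key.

        <!-- App.xaml: merge a file-based dictionary and declare an implicit style -->
        <Application.Resources>
            <ResourceDictionary>
                <ResourceDictionary.MergedDictionaries>
                    <ResourceDictionary Source="Styles/CommonStyles.xaml" />
                </ResourceDictionary.MergedDictionaries>
                <!-- Implicit style: no x:Key, so it applies to every Button in scope -->
                <Style TargetType="Button">
                    <Setter Property="Margin" Value="4" />
                </Style>
            </ResourceDictionary>
        </Application.Resources>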


  • Enterprise 2.0 - Connecting People, Processes & Content

    - by kellsey.ruppel(at)oracle.com
    With recent technological advances, the Internet is changing. When users head to the web, they are no longer just looking for information from a simple text and picture based website. Users want a more interactive experience - they want to participate, to share their views and get the feedback of others. And this is precisely what Web 2.0 technology addresses. Web 2.0 is about web applications that facilitate interactive information sharing, user-centered design and collaboration on the World Wide Web. Web 2.0 technology is everywhere on the Internet and is radically changing the speed and medium in which we interact and communicate. There are thousands of examples in the consumer world of Web 2.0 applications, technologies and solutions at work. You might be familiar with some of them... blogs, wikis (Wikipedia), Twitter, Facebook, LinkedIn - these are all examples of Web 2.0. And these technologies are transforming our world into a real-time, participation-oriented, user-driven, content-centric world. With all of these Web 2.0 solutions, it's about the user, the consumer and all the content they are generating. It's a world full of online communities where people share and participate. We're not talking about disseminating information top-down, nor is it a bottom-up fight. Everyone has an equal opportunity to participate and share. The more you participate, the more you share, the more valued you are in the community. The web is not just a collection of documents online. It is the social web. For the active users in the community, staying connected becomes critically important so they can participate at any time and from anywhere. And because feedback and interaction are so critical, time is of the essence. When everyone is providing immediate responses, you feel the urge to do the same. Hence everything needs to be done right now, together... and collaboratively. With all the content being generated online by users, there is complete information overload out there. (That's a good thing for Google.) But... it's no longer just about search. Sometimes you want the information to just come to you. Recommendations and discovery engines will deliver you more applicable results than a non-contextual search. How many of you have heard about a news headline on Facebook as part of your feed before you read the paper or see it on TV? This is how the new generation of workers live their daily lives... and as they enter the workforce, these trends and technologies are showing up in the enterprise too. A lot of the Web 2.0 technologies and solutions in the consumer world are geared for just that... consumers. But the core concepts that put them into the Web 2.0 category can be applied to the enterprise as well. And that is what we mean when we talk about Enterprise 2.0. Enterprise 2.0 is the use of Web 2.0 tools and technologies in the workplace. It provides a modern user experience by connecting the people, content and business processes inside and outside the enterprise. Enterprise 2.0 empowers users to collaborate more effectively, find and share information in the proper context, and improves the overall business processes in which they participate. As we head into 2011, is your organization using Enterprise 2.0 capabilities to the fullest? Are you connecting your people, processes and content together to provide a modern user experience?


  • Using 3G/UMTS in Mauritius

    After some conversations, threads in online forums and mailing lists, I thought about writing this article on how to set up, configure and use 3G/UMTS connections on Linux here in Mauritius. Personally, I can only share my experience with Emtel Ltd., but I'll try to give some clues about how to configure Orange as well.

    Emtel 3G/UMTS surf stick
    Emtel provides different surf sticks from Huawei. Back in 2007, I started with an E220 that wouldn't run on Windows Vista either. Nowadays, you just plug in the surf stick (i.e. an E169) and usually the Network Manager will detect the new broadband modem. Nothing to worry about. The Linux Network Manager even provides a connection profile for Emtel here in Mauritius, and establishing the Internet connection is done in less than 2 minutes... even quicker.

    Using wvdial
    Old-fashioned Linux users might not take Network Manager into consideration but feel comfortable with wvdial. Although wvdial is primarily used with serial port attached modems, it can operate on USB ports as well. Following is my configuration from /etc/wvdial.conf:

        [Dialer Defaults]
        Phone = *99#
        Username = emtel
        Password = emtel
        New PPPD = yes
        Stupid Mode = 1
        Dial Command = ATDT

        [Dialer emtel]
        Modem = /dev/ttyUSB0
        Baud = 3774000
        Init2 = ATZ
        Init3 = ATQ0 V1 E1 S0=0 &C1 &D2 +FCLASS=0
        Init4 = AT+cgdcont=1,"ip","web"
        ISDN = 0
        Modem Type = Analog Modem

    The values of user name and password are optional and can be configured as you like. In case your SIM card is protected by a PIN - which is highly advised - you might add another dialer section to your configuration file like so:

        [Dialer pin]
        Modem = /dev/ttyUSB0
        Init1 = AT+CPIN=0000

    This way you can "daisy-chain" your commands to establish your Internet connection like so:

        wvdial pin emtel

    And it works auto-magically. Depending on your group assignments (dialout), you might have to sudo the wvdial statement like so:

        sudo wvdial pin emtel

    Orange parameters
    As far as I could figure out without really testing it myself, it is also necessary to set the Access Point (AP) manually with Orange. Well, although it is pretty obvious, a lot of people seem to struggle. The AP value is "orange".

        [Dialer orange]
        Modem = /dev/ttyUSB0
        Baud = 3774000
        Init2 = ATZ
        Init3 = ATQ0 V1 E1 S0=0 &C1 &D2 +FCLASS=0
        Init4 = AT+cgdcont=1,"ip","orange"
        ISDN = 0
        Modem Type = Analog Modem

    And you are done.

    Official Linux support from providers
    It's quite simple: forget it! The people at the Emtel call center are completely focused on the hardware and the Mobile Connect software application provided by Huawei, and are totally lost if you confront them with other constellations. For example, my wife's netbook has an integrated 3G/UMTS modem from Ericsson. Therefore, there is no need to use the Huawei surf stick at all, and of course we use the existing software named Wireless Manager instead. Now, imagine mentioning at the help desk: "Ehm, sorry, but what's Mobile Connect?" And Linux after all might give the call operator sleepless nights... Who knows? Anyway, I hope that my article and configuration can give you a helping hand and that you will be able to connect your Linux box with 3G/UMTS surf sticks here in Mauritius.


  • Implementing Database Settings Using Policy Based Management

    - by Ashish Kumar Mehta
    Introduction
    Database administrators have always had a tough time ensuring that all the SQL Servers they administer are configured according to the policies and standards of the organization. Using SQL Server's Policy Based Management feature, DBAs can now manage one or more instances of SQL Server 2008 and check for policy compliance issues. In this article we will utilize the Policy Based Management (aka Declarative Management Framework, or DMF) feature of SQL Server to implement and verify database settings on all production databases.

    It is best practice to enforce the settings below on each production database. However, it can be tedious to go through each database and check whether these settings are in place. In this article I will explain how to utilize the Policy Based Management feature of SQL Server 2008 to create a policy that verifies these settings on all databases and, in cases of non-compliance, how to bring them back into compliance.

    Database settings to enforce on each user database:
    Auto Close and Auto Shrink properties of the database set to False
    Auto Create Statistics and Auto Update Statistics set to True
    Compatibility Level of all user databases set to 100
    Page Verify set to CHECKSUM
    Recovery Model of all user databases set to Full
    Restrict Access set to MULTI_USER

    Configure a Policy to Verify Database Settings
    1. Connect to a SQL Server 2008 instance using SQL Server Management Studio.
    2. In the Object Explorer, click on Management > Policy Management and you will see Policies, Conditions & Facets as child nodes.
    3. Right-click Policies and then select New Policy… from the drop-down list, as shown in the snippet below, to open the Create New Policy popup window.
    4. In the Create New Policy popup window, provide the name of the policy as "Implementing and Verify Database Settings for Production Databases" and then click the drop-down list under Check Condition. As highlighted in the snippet below, click on the New Condition… option to open up the Create New Condition window.
    5. In the Create New Condition popup window, provide the name of the condition as "Verify and Change Database Settings". In the Facet drop-down list, choose the facet Database Options, as shown in the snippet below. Under Expression, select the field @AutoClose, then choose the operator '=' and finally choose the value False. Now that you have successfully added the first field, you can go ahead and add the rest of the fields as shown in the snippet below. Once you have successfully added all the fields of the Database Options facet shown above, click OK to save the changes and to return to the parent Create New Policy – Implementing and Verify Database Settings for Production Databases window, where you will see that the newly created condition "Verify and Change Database Settings" is selected by default. Continues…
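    Before wiring up the policy, it can help to see where the databases stand today. A quick hedged sketch using the sys.databases catalog view (my own addition, not part of the article's PBM walkthrough):

        SELECT name,
               is_auto_close_on,          -- should be 0
               is_auto_shrink_on,         -- should be 0
               is_auto_create_stats_on,   -- should be 1
               is_auto_update_stats_on,   -- should be 1
               compatibility_level,       -- should be 100
               page_verify_option_desc,   -- should be CHECKSUM
               recovery_model_desc,       -- should be FULL
               user_access_desc           -- should be MULTI_USER
        FROM sys.databases
        WHERE database_id > 4;            -- skip the system databases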


  • SQLAuthority News – Monthly list of Puzzles and Solutions on SQLAuthority.com

    - by pinaldave
    This has been a very interesting month for SQLAuthority.com: we had multiple and varied puzzles in which everybody participated, and lots of interesting conversations were shared. Let us start with the latest puzzles and continue going down. A few answers were also posted on Facebook.

    SQL SERVER – Puzzle Involving NULL – Resolve – Error – Operand data type void type is invalid for sum operator
    This puzzle involves NULL and throws an error. The challenge is to resolve the error, and there are multiple ways to do so. Readers have contributed various methods; a few of them even explained why this error shows up. NULLs are a very important part of the database, and if one of the columns holds NULL the result can be totally different from the one expected.

    SQL SERVER – T-SQL Scripts to Find Maximum between Two Numbers
    I modified a script provided by a friend to find the greatest of two numbers. My script had a small bug in it; however, lots of readers have suggested better scripts. Madhivanan has written a blog post on the subject over here.

    SQL SERVER – BI Quiz Hint – Performance Tuning Cubes – Hints
    This quiz is hosted on my friend Jacob's site. I have written many hints on how one can tune cubes. One can take part here and win exciting prizes.

    SQL SERVER – Solution – Generating Zero Without using Any Numbers in T-SQL
    Madhivanan asked a very interesting question on his blog about how to generate zero without using any numbers in T-SQL, and he demonstrated various methods of doing so. I asked the same question on the blog and got many interesting answers, which I have shared.

    SQL SERVER – Solution – Puzzle – Statistics are not Updated but are Created Once
    I have to accept that this was the most difficult puzzle. In this puzzle I asked why, even though the settings are correct, the statistics of the tables are not getting updated. The puzzle tests various concepts: 1) indexes, 2) statistics, 3) database settings, etc. There are multiple ways of solving it. It was interesting: many took interest, but only a few got it right.

    SQL SERVER – Question to You – When to use Function and When to use Stored Procedure
    This is a rather straightforward question and not the typical puzzle. The answers from readers are great; however, there is still room for more detailed answers.

    SQL SERVER – Selecting Domain from Email Address
    I wrote on selecting domains from email addresses. Madhivanan made a puzzle out of this simple question and wrote a follow-up post over here, in which he shows various ways to find the domains from a list of email addresses.

    Well, this last one is not a puzzle but an amazing guest post by Feodor Georgiev, who has written on the subject of Job Interviewing the Right Way (and for the Right Reasons) - an article which everyone should read.

    Reference: Pinal Dave (http://blog.SQLAuthority.com)
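    For a flavor of the "maximum between two numbers" puzzle, here is a hedged sketch of the kind of script readers proposed (my own illustration, not the script from the post):

        DECLARE @a INT = 17, @b INT = 42;

        -- Straightforward CASE expression
        SELECT CASE WHEN @a >= @b THEN @a ELSE @b END AS Greater;

        -- Arithmetic-only variant: half of the sum plus the absolute difference
        SELECT (@a + @b + ABS(@a - @b)) / 2 AS Greater;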


  • Help finding time of collision

    - by WannaBe
    I am making a simple game right now and am struggling with collision response. My goal is to someday be able to turn it into a 2D platformer, but I have a long way to go. I am currently making this in JavaScript and using the canvas element, so (0,0) is in the top left, positive X is to the right and positive Y is down. I read a helpful post on StackExchange that got me started on this, but I can't seem to get the algorithm 100% correct (please look at it for background information on this problem): How to deal with corner collisions in 2D? I can detect the collision fine, but I can't seem to get the response right. The goal is to detect which side the player hit first, since minimum displacement doesn't always work. The X response seems to work fine, but the Y only works when I am far from the corners. Here is a picture showing what happens. Here is the code:

        var bx = box.x;
        var by = box.y;
        var bw = box.width;
        var bh = box.height;
        var boxCenterX = bx + (bw/2);
        var boxCenterY = by + (bh/2);
        var playerCenterX = player.x + player.xvel + (player.width/2);
        var playerCenterY = player.y + player.yvel + (player.height/2);

        //left = negative and right = positive, 0 = middle
        var distanceXin = playerCenterX - boxCenterX;
        var distanceYin = playerCenterY - boxCenterY;
        var distanceWidth = Math.abs(distanceXin);
        var distanceHeight = Math.abs(distanceYin);
        var halfWidths = (bw/2) + (player.width/2);
        var halfHeights = (bh/2) + (player.height/2);

        if(distanceWidth < halfWidths){ //xcollision
            if(distanceHeight < halfHeights){ //ycollision
                if(player.xvel == 0){
                    //adjust y
                    if(distanceYin > 0){ //bottom
                        player.y = by + bh;
                        player.yvel = 0;
                    }else{
                        player.y = by - player.height;
                        player.yvel = 0;
                    }
                }else if(player.yvel == 0){
                    //adjust x
                    if(distanceXin > 0){ //right
                        player.x = bx + bw;
                        player.xvel = 0;
                    }else{ //left
                        player.x = bx - player.width;
                        player.xvel = 0;
                    }
                }else{
                    var yTime = distanceYin / player.yvel;
                    var xTime = distanceXin / player.xvel;
                    if(xTime < yTime){
                        //adjust the x, it collided first
                        if(distanceXin > 0){ //right
                            player.x = bx + bw;
                            player.xvel = 0;
                        }else{ //left
                            player.x = bx - player.width;
                            player.xvel = 0;
                        }
                    }else{
                        //adjust the y, it collided first
                        if(distanceYin > 0){ //bottom
                            player.y = by + bh;
                            player.yvel = 0;
                        }else{
                            player.y = by - player.height;
                            player.yvel = 0;
                        }
                    }
                }
            }
        }

    And here is a JSFiddle if you would like to see the problem yourself: http://jsfiddle.net/dMumU/

    To recreate this, move the player to the position shown and press up and left at the same time. The player will jump to the right for some reason. Any advice? I know I am close, but I can't seem to get xTime and yTime to equal what I want every time.
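    One common way to pick the first-hit axis, sketched below under the assumption that xvel/yvel are the per-frame displacements, is to ask how many frames ago each axis began overlapping (penetration depth divided by speed); the axis that began overlapping most recently is the side that was hit. Variable names follow the question's code where possible; the rest is hypothetical, not a verified drop-in fix:

        var overlapX = halfWidths - Math.abs(distanceXin);   // penetration depth on X
        var overlapY = halfHeights - Math.abs(distanceYin);  // penetration depth on Y

        // Frames since each axis began to overlap; Infinity when there is no motion
        // on an axis, so a stationary axis can never be chosen as the hit side.
        var timeX = player.xvel !== 0 ? overlapX / Math.abs(player.xvel) : Infinity;
        var timeY = player.yvel !== 0 ? overlapY / Math.abs(player.yvel) : Infinity;

        if (timeX < timeY) {
            // X overlap started last, so the player hit a vertical side first
            player.x += (distanceXin > 0 ? overlapX : -overlapX);
            player.xvel = 0;
        } else {
            // Otherwise the player hit a horizontal side (top or bottom) first
            player.y += (distanceYin > 0 ? overlapY : -overlapY);
            player.yvel = 0;
        }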


  • How to make the constructor for the following exercise in c++?

    - by user40630
    This is the exercise I'm trying to solve. It's from the book C++ How to Program by Deitel and it's my homework.

    (Card Shuffling and Dealing) Create a program to shuffle and deal a deck of cards. The program should consist of class Card, class DeckOfCards and a driver program. Class Card should provide:
    a) Data members face and suit of type int.
    b) A constructor that receives two ints representing the face and suit and uses them to initialize the data members.
    c) Two static arrays of strings representing the faces and suits.
    d) A toString function that returns the Card as a string in the form "face of suit." You can use the + operator to concatenate strings.
    Class DeckOfCards should contain:
    a) A vector of Cards named deck to store the Cards.
    b) An integer currentCard representing the next card to deal.
    c) A default constructor that initializes the Cards in the deck. The constructor should use vector function push_back to add each Card to the end of the vector after the Card is created and initialized. This should be done for each of the 52 Cards in the deck.
    d) A shuffle function that shuffles the Cards in the deck. The shuffle algorithm should iterate through the vector of Cards. For each Card, randomly select another Card in the deck and swap the two Cards.
    e) A dealCard function that returns the next Card object from the deck.
    f) A moreCards function that returns a bool value indicating whether there are more Cards to deal.
    The driver program should create a DeckOfCards object, shuffle the cards, then deal the 52 cards.

    The problem I'm facing is that I don't know exactly how to make the constructor for the second class. See the description commented in the code below.

        #include <iostream>
        #include <vector>
        using namespace std;

        //Class card. No problems here.
        class Card
        {
        public:
            Card(int, int);
            string toString();
        private:
            int suit, face;
            static string faceNames[13];
            static string suitNames[4];
        };

        string Card::faceNames[13] = {"Ace","Two","Three","Four","Five","Six","Seven","Eight","Nine","Ten","Queen","Jack","King"};
        string Card::suitNames[4] = {"Diamonds","Clubs","Hearts","Spades"};

        string Card::toString()
        {
            return faceNames[face]+" of "+suitNames[suit];
        }

        Card::Card(int f, int s)
            :face(f), suit(s)
        {
        }

        /*The problem begins here. This class should create (when an object of it is created) a copy of the vector deck, right? But how exactly are these vector cards initialized? I'll explain better in the constructor definition below.*/
        class DeckOfCards
        {
        public:
            DeckOfCards();
            void shuffleCards();
            Card dealCard();
            bool moreCards();
        private:
            vector<Card> deck(52);
            int currentCard;
        };

        int main(int argc, char** argv)
        {
            return 0;
        }

        DeckOfCards::DeckOfCards()
        {
            //This is where I'm stuck. I can't figure out how to set each of the 52 cards of the vector deck to have a specific suit and face, using only the constructor of the Card class.
            //What you see below was one of my attempts to solve this problem, but I got blocked pretty soon in the middle of it.
            for(int i=0; i<deck.size(); i++)
            {
                deck[i]//....There is no function to set them. They must be set when initialized. But how??
            }
        }

    For easier reading: http://pastebin.com/pJeXMH0f
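    For reference, the usual way out of this - a hedged sketch that keeps the names from the post - is to declare the vector empty in the class body (C++03 does not allow vector<Card> deck(52); there) and fill it in the constructor with a nested suit/face loop, exactly as point c) of the exercise suggests:

        class DeckOfCards
        {
        public:
            DeckOfCards();
            void shuffleCards();
            Card dealCard();
            bool moreCards();
        private:
            vector<Card> deck;   // declared empty; filled by the constructor
            int currentCard;
        };

        DeckOfCards::DeckOfCards()
            : currentCard(0)
        {
            // 4 suits x 13 faces = 52 cards, each built through Card's two-int constructor
            for (int suit = 0; suit < 4; suit++)
            {
                for (int face = 0; face < 13; face++)
                {
                    deck.push_back(Card(face, suit));
                }
            }
        }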


  • How Can I Improve This Card-Game AI?

    - by James Burgess
    Let me get this out there before anything else: this is a learning exercise for me. I am not a game developer by trade or hobby (at least, not seriously) and am purely delving into some AI- and 3D-related topics to broaden my horizons a bit. As part of the learning experience, I thought I'd have a go at developing a basic card game AI. I selected Pit as the card game I was going to attempt to emulate (specifically, the 'bull and bear' variation of the game as mentioned in the link above). Unfortunately, the rule-set that I'm used to playing with (an older version of the game) isn't described. The basics of it are:

    The number of commodities played with is equal to the number of players.
    The bull and bear cards are included.
    All but two players receive 8 cards, two receive 9 cards.
    A player can win the round with 7 + bull, 8, or 8 + bull (receiving double points).
    The bear is a penalty card.
    You can trade up to a maximum of 4 cards at a time. They must all be of the same type, but can optionally include the bull or bear (so, you could trade A, A, A, Bull - but not A, B, A, Bull).

    For those who have played the card game, it will probably have been as obvious to you as it was to me that given the nature of the game, gameplay would seem to resemble a greedy algorithm. With this in mind, I thought it might simplify my AI experience somewhat. So, here's what I've come up with for a basic AI player to play Pit... and I'd really just like any form of suggestion (from improvements to reading materials) relating to it. Here it is in something vaguely pseudo-code-ish ;)

    While AI does not hold 7 similar + bull, 8 similar, or 8 similar + bull, do:
    1. Establish 'target' hand, by seeing which card AI holds the most of.
    2. Prepare to trade next-most-numerous card type in a trade (max. held, or 4, whichever is fewer)
    3. If holding the bear, add to (if trading <=3 cards) or replace in (if trading 4 cards) hand.
    4. Offer cards for trade.
    5. If cards are accepted for trade within X turns, continue (clearing 'failed card types'). Otherwise:
       a. If only one card remains in the trade, go to #6. Otherwise:
          i. Remove one non-penalty card from the trade.
          ii. Return to #5.
    6. Add card type to temporary list of failed card types.
    7. Repeat from #2 (excluding 'failed card types').

    I'm aware this is likely to be a sub-optimal way of solving the problem, but that's why I'm posting this question. Are there any AI- or algorithm-related concepts that I've missed and should be incorporating to make a better AI? Additionally, what are the flaws with my AI at present (I'm well aware it's probably far from complete)? Thanks in advance!
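    To make step 1 of the pseudocode concrete, here is a hedged C++ sketch of the greedy target selection (all names are hypothetical; cards are modeled as plain strings for brevity):

        #include <map>
        #include <string>
        #include <vector>

        // Pick the most-held commodity as the 'target' hand (steps 1-2 above).
        std::string pickTargetCommodity(const std::vector<std::string>& hand)
        {
            std::map<std::string, int> counts;
            for (const std::string& card : hand)
                if (card != "Bull" && card != "Bear")  // special cards are never the target
                    ++counts[card];

            std::string best;
            int bestCount = 0;
            for (const auto& entry : counts)
                if (entry.second > bestCount) { best = entry.first; bestCount = entry.second; }
            return best;
        }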


  • JustMock is here !!

    - by mehfuzh
    As announced earlier by Hristo Kosev at the Telerik blogs, we have started giving out JustMock builds today. This is the first of the early builds before the official Q2 release, and we are pretty excited to get your feedback. It's pretty early to say much about it; it actually depends on your feedback. To add a few words: with JustMock we tried to build a mocking tool with as simple and intuitive a syntax as possible, excluding more and more noise and avoiding any smell it might add to your code [we are still trying every day], and we want to make the tool even better with your help. JustMock can be used to mock virtually anything. Moreover, we left an option open so that the features can be reduced or elevated with a single click. We tried to make a strong API and make things as fluent and guided as possible, so that you never have the chance to get derailed. Our syntax is AAA (Arrange – Act – Assert); we don't believe in the Record – Replay model, which some of the smarter mocking tools are planning to remove from their coming releases or don't even have [it's always fun to learn from each other]. Overall, more signals equals more complexity - reminds me of 37signals :-).

    Currently, here are the things you can do with JustMock (I will cover them more in depth in the coming days):

    Proxied mode
    Mock interfaces and classes with virtuals
    Mock properties, including indexers
    Set raise event for specific calls
    Use matchers to control mock arguments
    Assert specific occurrence of mocked calls
    Assert using matchers
    Do recursive mocks
    Do sequential mocking (the same method with the same arguments returns different values or performs different tasks)
    Do strict mocking (by default - and I prefer loose, so that I can use it as stubs)

    Elevated mode
    Mock static calls
    Mock final classes
    Mock sealed classes
    Mock extension methods
    Partially mock a class member directly using Mock.Arrange
    Mock MsCorlib (we will support more and more members in the coming days); currently we support FileInfo, File and DateTime

    These are a few; you need to take a look at the test project that is provided with the build to find more [along with the document]. Also, one of the features that I will be using for my next OS projects is the ability to run it separately in proxied mode, which makes it easy to redistribute and do some personal development in a more DI model, with my option to elevate as I go.

    I've surely forgotten tons of other features to mention that I will cover in time, but don't forget the URL: www.telerik.com/justmock

    Finally, a little mock code:

        var lvMock = Mock.Create<ILoveJustMock>();

        // set your goal
        Mock.Arrange(() => lvMock.Echo(Arg.Any<string>())).Returns((string result) => result);

        // perform
        string ret = lvMock.Echo("Yes");

        Assert.Equal(ret, "Yes");

        // make sure everything is fine
        Mock.Assert(() => lvMock.Echo("Yes"), Occurs.Once());

    Hope that helps to get started; I will cover more if not :-).


  • SQL SERVER – Basic Calculation and PEMDAS Order of Operation

    - by pinaldave
    After thinking about it for a long time, I decided to write this blog post. I had no plan to create a post on this subject, but the amount of conversation it has created on my Facebook page changed my mind, so I decided to bring up a few of the questions and concerns discussed there. There are more than 10,000 comments so far, and there is a big debate going on about what the answer should be. For educational purposes you should go ahead and read some of the comments; they are very interesting and for sure teach some new stuff. Even though some of the comments are clearly wrong, they make some good points, and I believe the thread for sure develops some logic.

    Here is my take on this subject. I believe the answer is 9, as I follow the PEMDAS order of operations. PEMDAS stands for parentheses, exponents, multiplication, division, addition, subtraction. PEMDAS is commonly known as BODMAS in India. BODMAS stands for Brackets, Orders (i.e. powers and square roots, etc.), Division, Multiplication, Addition and Subtraction. PEMDAS and BODMAS are almost the same, and both apply operations of equal precedence from LEFT to RIGHT.

    Let us try to simplify the above statement using PEMDAS or BODMAS (whichever you prefer to call it):

    Step 1: 6 ÷ 2 (1+2) (parentheses first)
    Step 2: = 6 ÷ 2 * (1+2) (adding the multiplication sign for further clarification)
    Step 3: = 6 ÷ 2 * (3) (single digit in parentheses – simplify using the operator)
    Step 4: = 6 ÷ 2 * 3 (remember, the next operation should go LEFT to RIGHT)
    Step 5: = 3 * 3 (because 6 ÷ 2 = 3; remember, LEFT to RIGHT)
    Step 6: = 9 (final answer)

    Some find Step 4 confusing and end up multiplying 2 and 3 first, turning Step 5 into 6 ÷ 6. This is incorrect because in that case we did not follow the order of operations from LEFT to RIGHT; not following it leads to the answer 1, which is incorrect.

    Let us see what SQL Server returns as a result. I executed the following statement in SQL Server Management Studio:

        SELECT 6/2*(1+2)

    It is clear that SQL Server also thinks that the answer should be 9. Let us go ahead and ask Google. I searched for the following term: 6/2(1+2). The result also says the answer should be 9. If you want further references, here is a great video which describes why the answer should be 9 and not 1, and here is a fantastic conversation on Google Groups.

    Well, now what is your take on this subject? You are welcome to share constructive feedback, and your answer may be different from my answer.

    NOTE: A healthy conversation about this subject is indeed encouraged, but if there is a single bad word or a comment is flaming, it will be deleted without any notification (it does not matter how much valuable information it contains).

    Reference: Pinal Dave (http://blog.SQLAuthority.com)


  • links for 2010-12-08

    - by Bob Rhubart
    What's a data architect? A comic dialog by one who knows: Oracle ACE Director Lewis Cunningham.

    Webcast: Oracle Business Intelligence Forum - December 15, 2010 at 9:00 am PT
    "The Oracle Business Intelligence Online Forum is a half-day virtual event that offers you a unique opportunity to see, in one place, the full portfolio of Oracle's Business Intelligence (BI) offerings, and to learn what sets Oracle apart from the rest. Hear Oracle executives and industry analyst Howard Dresner present the current state of Business Intelligence, along with a series of customers who will share their case studies of putting analytics in action."

    Oracle Rolls Out Private Cloud Architecture And World-Record Transaction Performance | Forrester Blogs
    "Exadata has been dealt with extensively in other venues, both inside Forrester and externally, and appears to deliver the goods for I&O groups who require efficient consolidation and maximum performance from an Oracle database environment." -- Richard Fichera, Forrester

    Seven ways to get things started: Java EE Startup Classes with GlassFish and WebLogic
    "This is a blog about a topic that I really don't like. But it comes across my way over and over again and there's no doubt that you need it from time to time. Enough reason for me to collect some information about it and publish it for your reference. I am talking about startup/shutdown classes with Java EE applications or servers." -- Oracle ACE Director Markus "@myfear" Eisele

    Monitoring Undelivered Messages in BPEL in SOA 10g (Antony Reynolds' Blog)
    "I am currently working with a client that wants to know how many undelivered messages they have, and if it reaches a certain threshold then they want to alert the operator. To do this they plan on using the Enterprise Manager alert functions, but first they need to know how many undelivered instances are out there." -- SOA author Antony Reynolds

    VirtualBox Appliances for Developers
    "Developers can simply download a few files, assemble them with a script, and then import and run the resulting pre-built VM in VirtualBox. This makes starting with these technologies even easier. Each appliance contains some Hands-On-Labs to start learning." -- Peter Paul van de Beek

    Oracle UCM 11g Remote Intradoc Client (RIDC) Integration with Oracle ADF 11g
    "It's great we have out-of-the-box WebCenter ADF task flows for document management in UCM. However, for complete business scenario implementations, usually it's not enough and we need to manage the Content Repository programmatically. This can be achieved through the Remote Intradoc Client (RIDC) API. It's quite hard to find any practical information about this API, but I managed to get code for UCM folder creation/removal and folder information." -- Oracle ACE Director Andrejus Baranovskis

    Interview with Java Champion Matjaz B. Juric on Cloud Computing, SOA, and Java EE 6
    "Matjaz Juric of Slovenia, head of the Cloud Computing and SOA Competence Centre at the University of Maribor, and professor at the University of Ljubljana, shares insights about cloud computing, SOA and Java EE 6."

    White Paper: Oracle Complex Event Processing High Availability
    "This whitepaper describes the high availability (HA) solutions available in Oracle CEP 11g Release 1 Patch Set 2 and presents the results of a benchmark study demonstrating the performance of the Oracle CEP HA solutions."


  • Oracle celebrates a successful Oracle CloudWorld in Bogotá

    - by yaldahhakim
    written by: Diana Tamayo Tovar

    Oracle CloudWorld Bogotá began with scattered showers, rain and strong winds, inviting Colombians to spend a whole day in the social, mobile and complete world of Oracle Cloud. The event took place on November 6th with 807 attendees, 15 media representatives and 65 partners, who gathered to share the business value of Cloud along with Oracle executives and Colombian market leaders. Line-of-business leaders in sales and marketing, customer service and support, HR and talent management, and finance and operations shared their ideas with Colombian customers, giving them a chance to learn, discover and engage with the tools, trends and concepts of Cloud. The highlights of the event included keynote speakers such as Bob Evans, Chief Communications Officer, and a customer testimonial session with top business leaders from the Colombian insurance, finance, retail, communications and health industries, who shared their innovation experiences and success stories on workforce empowerment, talent management, cloud security, social engagement and productivity, providing best-case scenarios of how Oracle has helped them transform their business with technologies like cloud, social collaboration and mobile applications. The keynote session was preceded by a customer success story from one of the largest virtual network operators in the country, providing an interesting case study of mobile banking innovation and a great customer testimonial of the importance of cross-industry strategies and cloud technology. The event provided five different tracks on the main trends in how companies communicate and engage with different audiences, providing a different perspective on the importance of empowering brands through their customers and of trusting and investing in technology for growth, while Oracle University shared their knowledge with "Oracle Cloud Fundamentals", a training lesson covering Java Cloud, Database Cloud and other Oracle Cloud product technologies and solutions. The rainy-day scenario included sideshows of aerial acrobatics and speed-painting performances to recreate the environment of modern and flexible cloud solutions in a colorful and creative way. Oracle CloudWorld Bogotá was a great opportunity to dispel invalid cloud myths and address the main concerns of Colombian customers towards cloud, considering IDC Latin America studies stating that 93% of Colombian business leaders are interested in cloud but only 47% understand its business value. Spending a day in the cloud with 6 demoground stations, conference sessions, interesting case studies and customer testimonials will surely widen the endless market opportunities for Colombian customers, leaving them amazed at how Oracle Cloud works towards integration with other environments, non-Oracle applications, social media and mobile devices with bulletproof security infrastructure.


  • Flash Actionscript 3.0 Game Projectile Creation

    - by Christian Basar
    I have been creating a side-scrolling ActionScript 3.0 game. In this game I want the Player to be able to shoot blow darts as weapons. I had some trouble getting the darts to be created in the right place (in front of the player), but eventually got it working with some help from this page (please look at it for background information on this problem): http://stackoverflow.com/questions/8031553/flash-actionscript-3-0-game-projectile-creation

    I got the darts to be created in the right place (near the player) and a 'movePlayerDarts()' function moves them. But I actually have a new problem. When the player moves after firing a dart, the dart tries to follow him! If the player jumps, the dart rises up. If the player moves to the left, the dart moves slightly to the left. Obviously, there is some code somewhere which is telling the darts to follow the player. I do not see how, unless the 'playerDartContainer' has something to do with that. But the container is always at position (0,0) and it does not move. Also, as a test I traced a dart's 'y' coordinate within the constantly-running 'movePlayerDarts()' function. As you can see, that function constantly moves the dart down the y axis by increasing its y-coordinate value. But when I jump, the 'y' coordinate being traced is never reduced, even though the dart clearly looks like it's rising! If anybody has any suggestions, I'd appreciate them!

    Here is the code I use to create the darts:

        // This function creates a dart
        public function createDart():void
        {
            if (playerDartContainer.numChildren <= 4)
            {
                // Play dart shooting sound
                sndDartShootIns.play();
                // Create a new 'PlayerDart' object
                playerDart = new PlayerDart();

                // Set the new dart's initial position and direction depending on the player's direction
                // Player's facing right
                if (player.scaleX == 1)
                {
                    // Create dart in front of player's dart gun
                    playerDart.x = player.x + 12;
                    playerDart.y = player.y - 85;
                    // Dart faces right, too
                    playerDart.scaleX = 1;
                }
                // Player's facing left
                else if (player.scaleX == -1)
                {
                    // Create dart in front of player's dart gun
                    playerDart.x = player.x - 12;
                    playerDart.y = player.y - 85;
                    // Dart faces left, too
                    playerDart.scaleX = -1;
                }
                playerDartContainer.addChild(playerDart);
            }
        } // End of 'createDart()' function

    This code is the EnterFrameHandler for the player darts:

        // In every frame, call 'movePlayerDarts()' to move the darts within the 'playerDartContainer'
        public function playerDartEnterFrameHandler(event:Event):void
        {
            // Only move the Player's darts if their container has at least one dart within
            if (playerDartContainer.numChildren > 0)
            {
                movePlayerDarts();
            }
        }

    And finally, this is the code that actually moves all of the player's darts:

        // Move all of the Player's darts
        public function movePlayerDarts():void
        {
            for (var pdIns:int = 0; pdIns < playerDartContainer.numChildren; pdIns++)
            {
                // Set the Player Dart 'instance' variable to equal the current PlayerDart
                playerDartIns = PlayerDart(playerDartContainer.getChildAt(pdIns));

                // Move the current dart in the direction it was shot. The dart's 'x-scale'
                // factor is multiplied by its speed to move the dart in its correct
                // direction. If the 'x-scale' factor is -1, the dart is pointing left (as
                // seen in the 'createDart()' function), so the dart moves to the left;
                // the opposite is true for the right-ward facing darts
                playerDartIns.x += playerDartIns.scaleX * 1;

                // Make gravity work on the dart
                playerDartIns.y += 0.7;
                //playerDartIns.y += 1;

                // What if the dart hits the ground?
                if (HitTest.intersects(playerDartIns, floor, this))
                {
                    playerDartContainer.removeChild(playerDartIns);
                }

                //trace("Dart x: " + playerDartIns.x);
                trace("Dart y: " + playerDartIns.y);
            }
        }

    Read the article

  • How to use a list of values in Excel as a filter in a query

    - by Luca Zavarella
It often happens that a customer provides us with a list of items for which to extract certain information. Imagine, for example, that a client wants the header information of the sales orders only for certain orders. Most likely he will give us the list as a column of items in Excel or, less probably, as a simple text file with the identification codes. As long as the given values are a dozen at most, it costs us nothing to copy and paste those values into SSMS and place them in a WHERE clause, using the IN operator, making sure to include the quotes in the case of alphanumeric elements (the sample database is AdventureWorks2008R2):

SELECT *
FROM Sales.SalesOrderHeader AS SOH
WHERE SOH.SalesOrderNumber IN (
    'SO43667'
    ,'SO43709'
    ,'SO43726'
    ,'SO43746'
    ,'SO43782'
    ,'SO43796')

Clearly, the need to add commas and quotes becomes a hassle when dealing with hundreds of items (which of course has happened to us!). It would be convenient to do a simple copy and paste, leaving the items exactly as they are pasted, and still have the query work fine. We can get this convenience via a User Defined Function that returns the items in a table. We simply provide the function with an input string parameter containing the pasted items. Here is the T-SQL code, with comments to clarify what it does:

CREATE FUNCTION [dbo].[SplitCRLFList] (@List VARCHAR(MAX))
RETURNS @ParsedList TABLE
    (
    --< Set the item length as your needs
    Item VARCHAR(255)
    )
AS
BEGIN
    DECLARE
        --< Set the item length as your needs
        @Item VARCHAR(255)
        ,@Pos BIGINT

    --< Trim TABs due to indentations
    SET @List = REPLACE(@List, CHAR(9), '')

    --< Trim leading and trailing spaces, then add a CR/LF at the end of the list
    SET @List = LTRIM(RTRIM(@List)) + CHAR(13) + CHAR(10)

    --< Set the position at the first CR/LF in the list
    SET @Pos = CHARINDEX(CHAR(13) + CHAR(10), @List, 1)

    --< If chars other than CR/LFs exist in the list then...
    IF REPLACE(@List, CHAR(13) + CHAR(10), '') <> ''
    BEGIN
        --< Loop while CR/LFs remain (not found = CHARINDEX returns 0)
        WHILE @Pos > 0
        BEGIN
            --< Get the heading list chars from the first char to the first CR/LF and trim spaces
            SET @Item = LTRIM(RTRIM(LEFT(@List, @Pos - 1)))

            --< If the so-calculated item is not empty...
            IF @Item <> ''
            BEGIN
                --< ...insert it into the @ParsedList table variable
                INSERT INTO @ParsedList (Item)
                VALUES (@Item) --(CAST(@Item AS int)) --< Use the appropriate conversion if needed
            END

            --< Remove the first item from the list...
            SET @List = RIGHT(@List, LEN(@List) - @Pos - 1)

            --< ...and set the position to the next CR/LF
            SET @Pos = CHARINDEX(CHAR(13) + CHAR(10), @List, 1)

            --< This block repeats while the loop condition above still holds
        END
    END

    RETURN
END

At this point, having created the UDF, our query is trivially transformed into:

SELECT *
FROM Sales.SalesOrderHeader AS SOH
WHERE SOH.SalesOrderNumber IN (
    SELECT Item
    FROM SplitCRLFList('SO43667
SO43709
SO43726
SO43746
SO43782
SO43796') AS SCL)

Convenient, isn’t it? You can find the script DBA_SplitCRLFList.sql here. Bye!!
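A side note for readers on newer versions (my addition, not part of the original post): SQL Server 2016 introduced the built-in STRING_SPLIT function, which can stand in for a hand-rolled splitter like the one above. Here is a minimal sketch, assuming SQL Server 2016 or later; since STRING_SPLIT accepts only a single-character separator, the CR is stripped first and the split is done on the LF:

DECLARE @List VARCHAR(MAX) = 'SO43667
SO43709
SO43726';

SELECT *
FROM Sales.SalesOrderHeader AS SOH
WHERE SOH.SalesOrderNumber IN (
    -- Trim each fragment and skip empty lines, as the UDF above does
    SELECT LTRIM(RTRIM(value))
    FROM STRING_SPLIT(REPLACE(@List, CHAR(13), ''), CHAR(10))
    WHERE LTRIM(RTRIM(value)) <> '');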

    Read the article

  • IPgallery banks on Solaris SPARC

    - by Frederic Pariente
IPgallery is a global supplier of converged legacy and Next Generation Networks (NGN) products and solutions, including core network components and cloud-based Value Added Services (VAS) for voice, video and data sessions. IPgallery enables network operators and service providers to offer advanced converged voice, chat, video/content services and rich unified social communications in combined legacy (fixed/mobile), Over-the-Top (OTT) and Social Community (SC) environments for home and business customers. Technically speaking, this offer is a scalable and robust telco solution enabling operators to offer new services while controlling operating expenses (OPEX). In its solutions, IPgallery leverages the following Oracle components: Oracle Solaris, Netra T4 and SPARC T4, in order to provide a competitive and scalable solution without the price tag often associated with high-end systems.

Oracle Solaris Binary Application Guarantee

A unique feature of Oracle Solaris is the guaranteed binary compatibility between releases of the Solaris OS. That means that if a binary application runs on Solaris 2.6 or later, it will run on the latest release of Oracle Solaris. IPgallery developed their application on Solaris 9 and Solaris 10 and now runs it on Solaris 11, without any code modification or rebuild. The Solaris Binary Application Guarantee helps IPgallery protect their long-term investment in the development, training and maintenance of their applications.

Oracle Solaris Image Packaging System (IPS)

IPS is a repository-based package management system that comes with Oracle Solaris 11. It provides a framework for complete software life-cycle management, such as installation, upgrade and removal of software packages. IPgallery leverages this new packaging system in order to speed up and simplify software installation for the R&D and production environments. Notably, they use IPS to deliver Solaris Studio 12.3 packages as part of the rapid installation process of R&D environments, and during the production software deployment phase they ensure software package integrity using the built-in verification feature. Solaris IPS thus improves IPgallery's time-to-market with faster, more reliable software installation and deployment in production environments.

Extreme Network Performance

IPgallery saw a huge improvement in application performance, both in CPU and I/O, when running on the SPARC T4 architecture compared to UltraSPARC T2 servers. The same application (with the same activation environment) consumes 40%-50% of the CPU on T2, while it consumes only 10% of the CPU on T4. The testing environment comprised a Softswitch (call management), TappS (Telecom Application Server) and a Billing Server running on the same machine and initiating various services at a capacity of 1000 CAPS (Call Attempts Per Second). In addition, tests showed a huge improvement in the performance of the TCP/IP stack, which reduces network-layer processing and, in the end, call-attempt latency. Finally, there is a huge improvement in file system and disk I/O operations; all tests ran with maximum logging enabled, and it did not influence any benchmark values.

"Due to the huge improvements in performance and capacity using the T4-1 architecture, IPgallery has engineered the solution with less hardware. This means that instead of deploying the solution on six T2-based machines, we will deploy on two redundant machines while utilizing Oracle Solaris Zones and Oracle VM for higher availability and virtualization." – Shimon Lichter, VP R&D, IPgallery

In conclusion, using the unique combination of Oracle Solaris and SPARC technologies, IPgallery is able to offer solutions with much lower TCO, while providing a higher level of service capacity, scalability and resiliency. This low-OPEX solution enables the operator (IPgallery's end-customer) to deliver a high-quality service while maintaining high profitability.

    Read the article

  • Threading Overview

    - by ACShorten
One of the major features of the batch framework is the ability to support multi-threading. The multi-threading support allows a site to increase throughput on an individual batch job by splitting the total workload across multiple individual threads. This means each thread has fine-grained control over a segment of the total data volume at any time. The idea behind the threading is based upon the notion that "many hands make light work". Each thread takes a segment of data in parallel and operates on that smaller set. The object identifier allocation algorithm built into the product randomly assigns keys to help ensure an even distribution of records across the threads and to minimize resource and lock contention. The best way to visualize the concept of threading is to use a "pie" analogy. Imagine the total workset for a batch job is a "pie". If you split that pie into equal-sized segments, each segment would represent an individual thread. The concept of threading has advantages and disadvantages:

- Smaller elapsed runtimes – Jobs that are multi-threaded finish earlier than jobs that are single-threaded. With smaller amounts of work to do, threads finish earlier. Note: The elapsed runtime of the threads is rarely proportional to the number of threads executed. Even though contention is minimized, some contention does exist for resources, which can adversely affect runtime.
- Threads can be managed individually – Each thread can be started individually and can also be restarted individually in case of failure. If you need to rerun thread X, then that is the only thread that needs to be resubmitted.
- Threading can be somewhat dynamic – The number of threads that are run on any instance can be varied, as the thread number and thread limit are parameters passed to the job at runtime. They can also be configured using the configuration files outlined in this document and the relevant manuals. Note: Threading is not dynamic after the job has been submitted.
- Failure risk due to data issues with threading is reduced – As mentioned earlier, individual threads can be restarted in case of failure. This limits the risk to the total job if there is a data issue with a particular thread or a group of threads.
- Number of threads is not infinite – As with any resource, there is a theoretical limit. While the thread limit can be up to 1000 threads, the number of threads you can physically execute will be limited by the CPU and I/O resources available to the job at execution time.

Theoretically, with the object identifiers evenly spread across the threads, the elapsed runtimes for the threads should all be the same. In other words, when executing in multiple threads, theoretically all the threads should finish at the same time. Whilst this is possible, it is also possible that individual threads may take longer than other threads, for the following reasons:

- Workloads within the threads are not always the same – Whilst each thread operates on roughly the same number of objects, the amount of processing for each object is not always the same. For example, an account may have a more complex rate which requires more processing, or a meter may have a complex amount of configuration to process. If a thread has a higher proportion of objects with complex processing, it will take longer than a thread with simple processing. The amount of processing depends on the configuration of the individual data for the job.
- Data may be skewed – Even though the object identifier generation algorithm attempts to spread the object identifiers across threads, some jobs use additional factors to select records for processing. If any of those factors exhibit data skew, then certain threads may finish later. For example, if more accounts are allocated to a particular part of a schedule, then threads in that schedule may finish later than other threads.

Threading is important to the success of individual jobs. For more guidelines and techniques for optimizing threading, refer to Multi-Threading Guidelines in the Batch Best Practices for Oracle Utilities Application Framework based products whitepaper (Doc Id: 836362.1), available from My Oracle Support.
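To make the "pie" analogy concrete, here is a minimal illustrative sketch, in T-SQL, of how a worker thread could claim its slice of a workload by mapping object identifiers to thread numbers. This is an illustration under stated assumptions only – the table and variable names are hypothetical, and it is not the product's actual allocation algorithm:

DECLARE @ThreadCount INT = 4;   -- total number of threads for the job (hypothetical)
DECLARE @ThreadNumber INT = 2;  -- this thread's number, from 1 to @ThreadCount

-- Each thread processes only the rows whose numeric object id falls in its slice;
-- with evenly distributed ids, each thread gets roughly 1/@ThreadCount of the pie.
SELECT ObjectId
FROM dbo.BatchWorkSet   -- hypothetical work table
WHERE ObjectId % @ThreadCount = @ThreadNumber - 1;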

    Read the article

  • What C++ coding standard do you use?

    - by gablin
For some time now, I've been unable to settle on a coding standard and apply it consistently across projects. When starting a new project, I tend to change some things around (add a space here, remove a space there, add a line break there, an extra indent there, change naming conventions, etc.). So I figured that I might provide a piece of sample code, in C++, and ask you to rewrite it to fit your standard of coding. Inspiration is always good, I say. ^^ So here goes:

#ifndef DERIVED_CLASS_H
#define DERIVED_CLASS_H

/**
 * This is an example file used for sampling code layout.
 *
 * @author Firstname Surname
 */

#include <iostream>
#include <string>
#include <list>

#include "BaseClass.h"
#include "Stuff.h"

/**
 * The DerivedClass is completely useless. It represents uselessness in all its
 * entirety.
 */
class DerivedClass : public BaseClass
{
    ////////////////////////////////////////////////////////////
    // CONSTRUCTORS / DESTRUCTORS
    ////////////////////////////////////////////////////////////

public:
    /**
     * Constructs a useless object with default settings.
     *
     * @param value
     *        Is never used.
     * @throws Exception
     *         If something goes awry.
     */
    DerivedClass (const int value)
        : uselessSize_ (0)
    {}

    /**
     * Constructs a copy of a given useless object.
     *
     * @param object
     *        Object to copy.
     * @throws OutOfMemoryException
     *         If necessary data cannot be allocated.
     */
    DerivedClass (const DerivedClass& object) {}

    /**
     * Destroys this useless object.
     */
    ~DerivedClass ();

    ////////////////////////////////////////////////////////////
    // PUBLIC METHODS
    ////////////////////////////////////////////////////////////

public:
    /**
     * Clones a given useless object.
     *
     * @param object
     *        Object to copy.
     * @return This useless object.
     */
    DerivedClass& operator= (const DerivedClass& object)
    {
        stuff_ = object.stuff_;
        uselessSize_ = object.uselessSize_;
        return *this;
    }

    /**
     * Does absolutely nothing.
     *
     * @param useless
     *        Pointer to useless data.
     */
    void doNothing (const int* useless)
    {
        if (useless == NULL)
        {
            return;
        }
        else
        {
            int womba = *useless;
            switch (womba)
            {
                case 0:
                    std::cout << "This is output 0";
                    break;
                case 1:
                    std::cout << "This is output 1";
                    break;
                case 2:
                    std::cout << "This is output 2";
                    break;
                default:
                    std::cout << "This is default output";
                    break;
            }
        }
    }

    /**
     * Does even less.
     */
    void doEvenLess ()
    {
        int mySecret = getSecret ();
        int gather = 0;
        for (int i = 0; i < mySecret; i++)
        {
            gather += 2;
        }
    }

    ////////////////////////////////////////////////////////////
    // PRIVATE METHODS
    ////////////////////////////////////////////////////////////

private:
    /**
     * Gets the secret value of this useless object.
     *
     * @return A secret value.
     */
    int getSecret () const
    {
        if ((RANDOM == 42) && (stuff_.size() > 0) || (1000000000000000000 > 0) && true)
        {
            return 420;
        }
        else if (RANDOM == -1)
        {
            return ((5 * 2) + (4 - 1)) / 2;
        }
        int timer = 100;
        bool stopThisMadness = false;
        while (!stopThisMadness)
        {
            do
            {
                timer--;
            } while (timer > 0);
            stopThisMadness = true;
        }
        return timer;
    }

    ////////////////////////////////////////////////////////////
    // FIELDS
    ////////////////////////////////////////////////////////////

private:
    /**
     * Don't know what this is used for.
     */
    static const int RANDOM = 42;

    /**
     * List of lists of stuff.
     */
    std::list <Stuff> stuff_;

    /**
     * Specifies the size of this object's uselessness.
     */
    size_t uselessSize_;
};

#endif

    Read the article

  • SQL SERVER – A Puzzle Part 4 – Fun with SEQUENCE in SQL Server 2012 – Guess the Next Value

    - by pinaldave
It seems like every weekend I get a new puzzle in my mind. Before continuing, I suggest you read my previous posts, where I have shared the earlier puzzles:

A Puzzle – Fun with SEQUENCE in SQL Server 2012 – Guess the Next Value
A Puzzle Part 2 – Fun with SEQUENCE in SQL Server 2012 – Guess the Next Value
A Puzzle Part 3 – Fun with SEQUENCE in SQL Server 2012 – Guess the Next Value

After reading the above three posts, I am very confident that you will all be ready for the next puzzle. First, execute the script below. Now guess what the next value will be, as requested in the query.

USE TempDB
GO
-- Create sequence
CREATE SEQUENCE dbo.SequenceID
    AS DECIMAL(3,0)
    START WITH 1
    INCREMENT BY -1
    MINVALUE 1
    MAXVALUE 3
    CYCLE
    NO CACHE;
GO
SELECT next value FOR dbo.SequenceID;
-- Guess the number
SELECT next value FOR dbo.SequenceID;
-- Clean up
DROP SEQUENCE dbo.SequenceID;
GO

Please note that the starting value is 1, the increment is the negative value -1, the minimum value is 1 and the maximum value is 3. Now let us first work out how this should behave. In our example, the sequence's starting value is 1 and the decrement is -1; this means the value should decrement from 1 to 0. However, the minimum value is 1. This means the value cannot decrement any further at all. What will happen here? The natural assumption is that it should throw an error. How many of you assumed the query would throw an ERROR? Well, you are WRONG! Do not blame yourself; it is my fault, as I have told you only half of the story. Now, if you voted for an error, let us continue running the above code in SQL Server Management Studio. The above script will give the following output:

Isn’t it interesting that instead of erroring out it gives us the result value 3? To understand the answer, carefully observe the original syntax for creating the SEQUENCE – there is a keyword CYCLE. This keyword cycles the values between the minimum and maximum values, and when one end of the range is exhausted it cycles the values from the other end. As we have a negative increment, when the query reaches the minimum value, or lower end, it cycles from the maximum value. Here the maximum value is 3, so the next logical value is 3. If your business requirement is such that when the sequence reaches the maximum or minimum value it should throw an error, you should not use the keyword CYCLE, and it will behave as just discussed. I hope you are enjoying the puzzles as much as I am. If you have any interesting puzzle to share, please do share it with me and I will post it on the blog with due credit to you. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Puzzle, SQL Query, SQL Server, SQL Tips and Tricks, SQLServer, T SQL, Technology
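For completeness, here is a quick sketch of the opposite behavior (my addition, not part of the original puzzle): recreate the sequence with NO CYCLE, and exhausting the range raises an error instead of wrapping around. The exact error text may vary by version; the expectation is simply that SQL Server reports the sequence has reached its minimum or maximum value:

USE TempDB
GO
CREATE SEQUENCE dbo.SequenceID
    AS DECIMAL(3,0)
    START WITH 1
    INCREMENT BY -1
    MINVALUE 1
    MAXVALUE 3
    NO CYCLE
    NO CACHE;
GO
SELECT next value FOR dbo.SequenceID; -- returns 1
SELECT next value FOR dbo.SequenceID; -- fails: the sequence cannot go below its MINVALUE
-- Clean up
DROP SEQUENCE dbo.SequenceID;
GO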

    Read the article

  • Have Your Cake and Eat it Too: Industry Best Practices + Flexibility

    - by Oracle Accelerate for Midsize Companies
By Richard Garraputa, VP of Sales & Marketing, brij

Richard joined brij in 1996 after graduating from the University of North Carolina at Greensboro with degrees in Information Systems and Accounting. He directs brij’s overall strategies for both the business development and marketing departments.

Companies looking for new ERP systems spend so much time comparing the features and functions of software products but too often shortchange the value of their own processes. Company managers I meet often claim that they are implementing a new ERP system so they can perform better and faster. When asked how, the answer is often “by implementing best practices”. But the term ‘best practices’ is frequently used to mean ‘doing things the way everyone else does them’ rather than as a starting point or benchmark to build upon by adding your own value.

Of course, implementing standardized processes across an enterprise is an important step in improving operational efficiencies. But not all companies are alike. Do you ever tell your customers “We are just like our competition and have no competitive differentiation”? Probably not. So why should the implementation of your business processes be just like your competitor’s? Even within the same industry, companies differentiate themselves by leveraging their unique expertise and approach to business. These unique aspects – the competitive differentiators that companies use to thrive in a crowded marketplace – can and should be supported by the implementation of business systems like ERP.

Modern ERP systems like Oracle’s JD Edwards EnterpriseOne have a broad and deep functional footprint designed to integrate a company’s core operations. But how can a company take advantage of this footprint without blowing up its implementation budget? Some ERP vendors claim to solve this challenge by stating that their systems come pre-configured with ‘best practices’. Too often, what they are really saying is that you will have to abandon your key operational differentiators to fit the vendor’s template for your business – or extend your implementation and postpone the realization of any benefits.

Thankfully for midsize companies, there is an alternative to the undesirable options of extended implementation projects or abandoned competitive differentiators. Oracle Accelerate Solutions shorten the time it takes to implement the JD Edwards EnterpriseOne solution based on your unique business characteristics, getting your new ERP system up and running faster without forcing your business to fit a cookie-cutter solution. We’ve been a JD Edwards implementation partner since 1986, and we now leverage Oracle Business Accelerators – cloud-based rapid implementation tools built and maintained by Oracle. Oracle Business Accelerators deliver the benefits of embedded industry best practices without forcing every customer into one set of processes, like many template or “clone and go” approaches do. You retain the ability to reconfigure your applications – without customization – as your business changes. Wielded by Oracle partners with industry-specific domain expertise, Oracle Accelerate Solution implementations powered by Oracle Business Accelerators help automate the application configuration to fit your business better, faster. For example, on a recent project at a manufacturing company, the project manager told me that Oracle Business Accelerators helped get them to Conference Room Pilot 20% faster than a traditional approach would have. Time savings equal cost savings.

And if ‘better and faster’ is your goal for your business performance, shouldn’t it be the goal for your ERP implementation as well?

Established in 1986, brij has been dedicated solely to helping its customers implement Oracle’s JD Edwards solutions and maximize the value of those customers’ IT investments. They are a Gold-level member of Oracle PartnerNetwork and an Oracle Accelerate Solution provider.

    Read the article

  • Tuning Default WorkManager - Advantages and Disadvantages

    - by Murali Veligeti
Before discussing tuning the Default WorkManager, let's have a brief introduction to what the Default WorkManager is.

Before the WebLogic Server 9.0 release, we had the concept of Execute Queues. In WebLogic Server (before WLS 9.0), processing was performed in multiple execute queues. Different classes of work were executed in different queues, based on priority and ordering requirements, and to avoid deadlocks. In addition to the default execute queue, weblogic.kernel.default, there were pre-configured queues dedicated to internal administrative traffic, such as weblogic.admin.HTTP and weblogic.admin.RMI. Users could control thread usage by altering the number of threads in the default queue, or configure custom execute queues to ensure that particular applications had access to a fixed number of execute threads, regardless of overall system load.

From the WLS 9.0 release onwards, WebLogic Server uses a single thread pool (called the Default WorkManager), in which all types of work are executed. WebLogic Server prioritizes work based on rules you define and on run-time metrics, including the actual time it takes to execute a request and the rate at which requests are entering and leaving the pool. The common thread pool changes its size automatically to maximize throughput. The queue monitors throughput over time and, based on history, determines whether to adjust the thread count. For example, if historical throughput statistics indicate that a higher thread count increased throughput, WebLogic increases the thread count. Similarly, if statistics indicate that fewer threads did not reduce throughput, WebLogic decreases the thread count. This strategy makes it easier for administrators to allocate processing resources and manage performance, avoiding the effort and complexity involved in configuring, monitoring, and tuning custom execute queues.

The Default WorkManager is used to handle thread management and perform self-tuning. This Work Manager is used by an application when no other Work Managers are specified in the application’s deployment descriptors. In many situations, the Default WorkManager may be sufficient for most application requirements. WebLogic Server’s thread-handling algorithms assign each application its own fair share by default. Applications are given equal priority for threads and are prevented from monopolizing them.

The Default WorkManager, as its name suggests, is the Work Manager used by default; all applications deployed on WLS will use it unless told otherwise. But sometimes, when your application is already in production, it is obvious that you cannot take your EAR/WAR, update the deployment descriptor(s) and redeploy it. The Default WorkManager belongs to a thread pool whose initial size is only five threads, and that's not much. If your application has to face a large number of hits, you may want to start with more than that. That is quite easy; you have two options to do so.

1) Modify the config.xml

Just add the following lines to your server definition:

<server>
    <name>AdminServer</name>
    <self-tuning-thread-pool-size-min>100</self-tuning-thread-pool-size-min>
    <self-tuning-thread-pool-size-max>200</self-tuning-thread-pool-size-max>
    [...]
</server>

2) Add some JVM parameters

Add the following system properties in setDomainEnv.sh/setDomainEnv.cmd or startWebLogic.sh/startWebLogic.cmd:

-Dweblogic.threadpool.MinPoolSize=100 -Dweblogic.threadpool.MaxPoolSize=200

Reboot WLS and verify that the options have been taken into account.

Disadvantage:

So far, so good. But there is a disadvantage to tuning the Default WorkManager. Internally, WebLogic Server has many Work Managers configured for different types of work. If we run out of threads in the self-tuning pool (because of the system property -Dweblogic.threadpool.MaxPoolSize) due to it being undersized, important work that WLS might need to do could be starved. So, while capping the self-tuning pool limits the Default WorkManager, it also limits all the other internal Work Managers that WLS uses. The best alternative is therefore to override the Default WorkManager: create a Work Manager for the application and assign it to the application, instead of tuning the Default WorkManager. A sketch of this follows below.
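As an illustration of that recommendation, here is a hedged sketch (my addition, not from the original post; the Work Manager and constraint names are hypothetical, and the descriptor elements should be verified against your WebLogic version's schema) of a dedicated Work Manager declared in a web application's weblogic.xml and assigned via the dispatch policy:

<weblogic-web-app xmlns="http://xmlns.oracle.com/weblogic/weblogic-web-app">
    <!-- Hypothetical Work Manager dedicated to this application -->
    <work-manager>
        <name>MyAppWorkManager</name>
        <!-- Cap the number of threads this application may consume -->
        <max-threads-constraint>
            <name>MyAppMaxThreads</name>
            <count>50</count>
        </max-threads-constraint>
    </work-manager>
    <!-- Route this application's requests to the Work Manager above -->
    <wl-dispatch-policy>MyAppWorkManager</wl-dispatch-policy>
</weblogic-web-app>

With something like this in place, only this application is throttled, while WebLogic's internal Work Managers and other applications keep their fair share of the self-tuning pool.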

    Read the article

  • Is Cloud Security Holding Back Social SaaS?

    - by Mike Stiles
The true promise of social data co-mingling with enterprise data to influence and inform social marketing (all marketing, really) lives in cloud computing. The cloud brings processing power, services, speed and cost savings the likes of which few organizations could ever put into action on their own. So why wouldn’t anyone jump into SaaS (Software as a Service) with both feet? Cloud security. Being concerned about security is proper and healthy. That just means you’re a responsible operator. Whether it’s protecting your customers’ data or trying to stay off the radar of regulatory agencies, you have plenty of reasons to make sure you’re as protected from hacking, theft and loss as you can possibly be. But you also have plenty of reasons not to let security concerns freeze you in your tracks, preventing you from innovating, moving the socially-enabled enterprise forward, and keeping up with competitors who may not be as skittish regarding SaaS technology adoption. Over half of organizations are transferring sensitive or confidential data to the cloud, an increase of 10% over last year. With the roles and responsibilities of CMOs, CIOs and the rest of the C-suite changing, the first thing you should probably determine is who should take point on analyzing cloud software options, providers, and policies. An oft-quoted Ponemon Institute study found that 36% of businesses don’t have a cloud security policy at all. So that’s as good a place to start as any. What applications and data are you comfortable housing in the cloud? Do you have a classification system for data that clearly spells out where data types can go and how they can be used? Who, both internally and at the cloud provider, will function as admins? What are the different levels of admin clearance? Will your security policies and procedures sync up with those of your cloud provider? The key is verifiable trust. Trust in cloud security is actually going up. One-third of organizations polled say it’s the cloud provider who should be responsible for data protection. And when you look specifically at SaaS providers, that expectation goes up to 60%. 57% “strongly agree” or “agree” that there’s more confidence in cloud providers’ ability to protect data. In fact, some businesses bypass the “verifiable” part of verifiable trust: just over half have no idea what their cloud provider does to protect data. And yet, according to the “Private Cloud Vision vs. Reality” InformationWeek Report, 82% of organizations say security and data privacy are among the main reasons they’re still holding the public cloud at arm’s length. That’s going to be a tough position to maintain, because just as social is rapidly changing the face of marketing, big data is rapidly changing the face of enterprise IT. Netflix, which is particularly big on the benefits of the cloud, says, "We're systematically disassembling the corporate IT components." An enterprise can never realize the full power of big data, nor get the full potential value out of it, if it’s unwilling to enable the integrations and dataset connections necessary in the cloud. Because integration is called for to reduce fragmentation, a standardized platform makes a lot of sense. With multiple components crafted to work together, you’re maximizing scalability, optimization, cost effectiveness, and, yes, security and identity-management benefits. You can see how the incentive is there for cloud companies to develop and add ever-improving security features, eventually making cloud computing a far safer bet than traditional IT.
@mikestiles
Photo: stock.xchng

    Read the article

< Previous Page | 136 137 138 139 140 141 142 143 144 145 146 147  | Next Page >