Search Results

Search found 2601 results on 105 pages for 'if condition'.

Page 20/105

  • Visual Studio - "{ }" settings

    - by Xorty
    Seriously, I don't know what to google. Here's the thing, I like this Java-like code writing style:

        if (condition == true) { doSomeStuff(); }

    But Visual Studio "helps" me with its own "style", which I don't like and am unable to change (after a long time of desperately checking all the settings :/):

        if (condition == true)
        {
            DoStuff();
        }

    I obviously want the "{" char to be on the same line as the condition ... I am using MS Visual Studio 2010 Professional. Any help appreciated!

    Read the article

  • Am I right in thinking there is no way to put an if statement and an else statement on one line in Python?

    - by Louise
    Am I right in thinking I can't put an if-statement and the corresponding else-statement on one line in Python? NB: variable = value1 if condition else value2 is NOT two statements. It's one statement which can take the value of one of two expressions. I want to do something like: if condition a=value else b=value. Am I right in thinking this requires a full if-else in Python? Like:

        if condition:
            a = value
        else:
            b = value

    Thanks, Louise

    Read the article

  • #include in C# (conditional compilation)

    - by HeavyWave
    Is it possible in C# to set up a condition so that if the condition is true, one file is compiled, and if it is false, another file is compiled? Sort of like:

        #ifdef DEBUG
        #include Class1.cs
        #else
        #include Class2.cs
        #endif

    Or possibly set it up in the project properties.
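
    C# itself has no #include, so the closest equivalents are either to keep both implementations in the compilation and choose between them with #if, or to include/exclude the .cs files in the project file with an MSBuild Condition on the <Compile> items. Below is a minimal sketch of the #if form; the Worker class and its members are placeholders, not from the question.

        // Worker.cs - selects one of two implementations via the DEBUG symbol
        public static class Worker
        {
        #if DEBUG
            // Debug-only implementation, standing in for Class1.cs
            public static string Describe() { return "debug implementation"; }
        #else
            // Release implementation, standing in for Class2.cs
            public static string Describe() { return "release implementation"; }
        #endif
        }

    The project-file route keeps each implementation in its own file, which is closer to what the question asks for, at the cost of editing the .csproj by hand.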

    Read the article

  • Sorting and Pagination does not work after I build a custom keyword search that is build using relat

    - by Roland
    I recently started to build a custom keyword search using Yii 1.1.x. The search works 100%. But as soon as I sort the columns and use pagination in the admin view, the search gets lost and all results are shown. In other words, it's not filtering so that only the search results show; somehow it resets. In my controller my code looks as follows: $builder=Messages::model()->getCommandBuilder(); //Table1 Columns $columns1=array('0'=>'id','1'=>'to','2'=>'from','3'=>'message','4'=>'error_code','5'=>'date_send'); //Table 2 Columns $columns2=array('0'=>'username'); //building the Keywords $keywords = explode(' ',$_REQUEST['search']); $count=0; foreach($keywords as $key){ $kw[$count]=$key; ++$count; } $keywords=$kw; $condition1=$builder->createSearchCondition(Messages::model()->tableName(),$columns1,$keywords,$prefix='t.'); $condition2=$builder->createSearchCondition(Users::model()->tableName(),$columns2,$keywords); $condition = substr($condition1,0,-1) . " OR ".substr($condition2,1); $condition = str_replace('AND','OR',$condition); $dataProvider=new CActiveDataProvider('Messages', array( 'pagination'=>array( 'pageSize'=>self::PAGE_SIZE, ), 'criteria'=>array( 'with'=>'users', 'together'=>true, 'joinType'=>'LEFT JOIN', 'condition'=>$condition, ), 'sort'=>$sort, )); $this->render('admin',array( 'dataProvider'=>$dataProvider,'keywords'=>implode(' ',$keywords),'sort'=>$sort )); and my view looks like this: $this->widget('zii.widgets.grid.CGridView', array( 'dataProvider'=>$dataProvider, 'columns'=>array( 'id', array( 'name'=>'user_id', 'value'=>'CHtml::encode(Users::model()->getReseller($data->user_id))', 'visible'=>Yii::app()->user->checkAccess('poweradministrator') ), 'to', 'from', 'message', /* 'date_send', */ array( 'name'=>'error_code', 'value'=>'CHtml::encode($data->status($data->error_code))', ), array( 'class'=>'CButtonColumn', 'template'=>'{view} {delete}', ), ), )); I really do not know what to do anymore since I'm terribly lost; any help will be highly appreciated.

    Read the article

  • Java ternary operator and boxing Integer/int?

    - by Markus
    I tripped across a really strange NullPointerException the other day, caused by an unexpected type-cast in the ternary operator. Given this (useless exemplary) function: Integer getNumber() { return null; } I was expecting the following two code segments to be exactly identical after compilation: Integer number; if (condition) { number = getNumber(); } else { number = 0; } vs. Integer number = (condition) ? getNumber() : 0; Turns out, if condition is true, the if-statement works fine, while the ternary operation in the second code segment throws a NullPointerException. It seems as though the ternary operation has decided to type-cast both choices to int before auto-boxing the result back into an Integer!?! In fact, if I explicitly cast the 0 to Integer, the exception goes away. In other words: Integer number = (condition) ? getNumber() : 0; is not the same as: Integer number = (condition) ? getNumber() : (Integer) 0; So, it seems that there is a byte-code difference between the ternary operator and an equivalent if-else-statement (something I didn't expect). Which raises three questions: Why is there a difference? Is this a bug in the ternary implementation or is there a reason for the type cast? Given there is a difference, is the ternary operation more or less performant than an equivalent if-statement (I know, the difference can't be huge, but still)?

    Read the article

  • Parameterise Service start option in WiX installer

    - by Jamiec
    I have a ServiceInstall component in a WiX installer where I have a requirement to start either auto or demand depending on parameters passed into the MSI. So the XML element in question is <ServiceInstall Vital="yes" Name="My Windows Service" Type="ownProcess" Account="[SERVICEUSERDOMAIN]\[SERVICEUSERNAME]" DisplayName="My Service" Password="[SERVICEUSERPASSWORD]" Start="demand" Interactive="no" Description="Something interesting here" Id="Service" ErrorControl="ignore"></ServiceInstall> WiX will not allow using a Parameter for the Start attribute, so I'm stuck with completely duplicating the component with a condition, e.g. <Component Id="ServiceDemand" Guid="{E204A71D-B0EB-4af0-96DB-9823605050C7}" > <Condition>SERVICESTART="demand"</Condition> ... and completely duplicating the whole component, with a different setting for Start and a different Condition. Anyone know of a more elegant solution? One where I don't have to maintain two Components which do exactly the same thing except the attribute for Start?

    Read the article

  • MSI install sequence - run DB scripts before services start

    - by marc_s
    Folks, we're running into some sequencing troubles with our MSI install. As part of our app, we install a bunch of services and allow the user to pick whether to start them right away or later. When they start right away, they seem to start too early in the install sequence - before our database manager has had a chance to update the database. Right now, our custom action to run the database updater looks like this - it's being run after "InstallFinalize" - very late in the process. <InstallExecuteSequence> <RemoveExistingProducts After='InstallInitialize' /> <Custom Action='RunDbUpdateManagerAction' After='InstallFinalize'>&DbUpdateManager=3</Custom> </InstallExecuteSequence> What would be the more appropriate step to run after or before, to make sure the DB scripts are executed before any of the installed services start up? Is there a "BeforeServiceStart" step? EDIT: Just defining the "Before='StartServices'" attribute on the tag didn't solve my problem. I am assuming the issue is this: the custom action has an "inner text", which represents a condition, and this condition is: "&DbUpdateManager=3". From what I can deduce from trial & error, this probably means "the DbUpdateManager feature must be published". Now, trouble is: "PublishFeature" comes way at the end in the install sequence, just before "InstallFinalize", and definitely AFTER InstallServices / StartServices. So when I specify the "Before=StartServices" requirement, the condition "DbUpdateManager feature must be published" isn't true yet, so the DbUpdateManager doesn't get executed :-( I tried removing the condition - in that case, my DbUpdateManager sometimes doesn't execute at all, sometimes more than once - no real clear pattern as to what happens when. Any more ideas? Is there a way I could check for a condition "the DbUpdateManager feature is installed" which would be true after the "InstallFiles" step? Marc

    Read the article

  • convert string to double

    - by James123
    I have a string value that I need to convert to a double in VB.NET. The conditions are like below: string = "12345.00232232" If condition is 3 (2 digits after the decimal, with a comma): display = 12,345.00 If condition is 5 (5 digits after the decimal, with a comma): display = 12,345.00232 If condition is 7 (5 digits after the decimal, no comma): display = 12345.00232 How can I do that in VB.NET?
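
    The three displays map onto standard .NET numeric format strings, which behave the same from VB.NET and C#; here is a minimal C# sketch (the Display helper and the condition values 3, 5 and 7 mirror the question, everything else is illustrative):

        using System;
        using System.Globalization;

        static class FormatSketch
        {
            static string Display(string input, int condition)
            {
                // Parse with the invariant culture so '.' is always read as the decimal separator.
                double value = double.Parse(input, CultureInfo.InvariantCulture);

                switch (condition)
                {
                    case 3: return value.ToString("N2"); // 2 decimals, thousands separator -> 12,345.00
                    case 5: return value.ToString("N5"); // 5 decimals, thousands separator -> 12,345.00232
                    case 7: return value.ToString("F5"); // 5 decimals, no separator        -> 12345.00232
                    default: throw new ArgumentOutOfRangeException("condition");
                }
            }

            static void Main()
            {
                Console.WriteLine(Display("12345.00232232", 3));
                Console.WriteLine(Display("12345.00232232", 5));
                Console.WriteLine(Display("12345.00232232", 7));
            }
        }

    The "N" outputs shown in the comments assume a culture that uses ',' for grouping and '.' for decimals; the VB.NET equivalent is simply value.ToString("N2") and so on.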

    Read the article

  • UrlRewriteFilter Direct to https

    - by adi
    I am using UrlRewriteFilter to redirect to SSL. I am running GlassFish v2. My rule looks something like this now; it's in my urlrewrite.xml in WEB-INF of my war folder. Is there any other GlassFish setting that needs to be set? <rule> <condition name="host" operator="notequal">https://abc.def.com</condition> <condition name="host" operator="notequal">^$</condition> <from>^/(.*)</from> <to type="permanent-redirect" last="true">https://abc.def.com/ghi/$1</to> </rule> But Firefox keeps saying that the URL is being redirected in a way that will never complete. I am not exactly sure what's happening here. Any ideas?

    Read the article

  • move data from one table to another, postgresql edition

    - by IggShaman
    Hi All, I'd like to move some data from one table to another (with a possibly different schema). The straightforward solution that comes to mind is: start a transaction with serializable isolation level; INSERT INTO dest_table SELECT data FROM orig_table,other-tables WHERE <condition>; DELETE FROM orig_table USING other-tables WHERE <condition>; COMMIT; Now what if the amount of data is rather big, and the <condition> is expensive to compute? In PostgreSQL, a RULE or a stored procedure can be used to delete data on the fly, evaluating the condition only once. Which solution is better? Are there other options?

    Read the article

  • Get the sum by comparing between two tables

    - by Ismail Gunes
    I have two tables, ProdBiscuit As tb and StockData As sd, and I have to get the sum of the quantity in StockData (quantite) with the condition (sd.status > 0 AND sd.prodid = tb.id AND sd.matcuisine = 3). Here is my SQL query: SELECT tb.id, tb.nom, tb.proddate, tb.qty, tb.stockrecno FROM ProdBiscuit AS tb JOIN (SELECT id, prodid, matcuisine, status, SUM(quantite) AS rq FROM StockData) AS sd ON (tb.id = sd.prodid AND sd.status > 0 AND sd.matcuisine = 3) LIMIT 25 OFFSET @Myid This gives me no rows at all. There are only 3 rows in ProdBiscuit and 11 rows in StockData, and only 2 rows in StockData satisfy the condition. As shown in the picture, only two rows meet the condition. What is wrong with my query? PS: The green lines on the image show the condition in my query.

    Read the article

  • Initialise a wix CheckBox's check state based on a property?

    - by MauriceL
    How does one initialise a WiX checkbox based on the value of a property? So far, I've done the following: <Control Id="Checkbox" Type="CheckBox" X="0" Y="0" Width="100" Height="15" Property="CHECKBOX_SELECTION" Text="I want this feature" CheckBoxValue="1" TabSkip="no"> <Condition Action="hide">HIDE_CHECKBOX</Condition> <Condition Action="show">NOT HIDE_CHECKBOX</Condition> </Control> Currently I have two custom actions to set HIDE_CHECKBOX and CHECKBOX_SELECTION. The CHECKBOX_SELECTION custom action occurs immediately after the HIDE_CHECKBOX action. What I'm seeing is that HIDE_CHECKBOX is behaving correctly (i.e. the checkbox is hidden), which suggests that I've got the ordering of custom actions correct, but CHECKBOX_SELECTION is not changing the check state of the check box. Is this a safe assumption? Also, I've confirmed that SELECTION is being set to '1' in the logs.

    Read the article

  • How do I pass a conditional expression as a parameter in Ruby?

    - by srayhan
    For example, this is what I am trying to do: def method_a(condition, params={}, &block) if condition method_b(params, &block) else yield end end and I am trying to call the method like this: method_a(#{@date > Date.today}, {:param1 => 'value1', :param2 => 'value2'}) do end The result is that the condition always evaluates to true. How do I make it work?

    Read the article

  • So…is it a Seek or a Scan?

    - by Paul White
    You’re probably most familiar with the terms ‘Seek’ and ‘Scan’ from the graphical plans produced by SQL Server Management Studio (SSMS).  The image to the left shows the most common ones, with the three types of scan at the top, followed by four types of seek.  You might look to the SSMS tool-tip descriptions to explain the differences between them: Not hugely helpful are they?  Both mention scans and ranges (nothing about seeks) and the Index Seek description implies that it will not scan the index entirely (which isn’t necessarily true). Recall also yesterday’s post where we saw two Clustered Index Seek operations doing very different things.  The first Seek performed 63 single-row seeking operations; and the second performed a ‘Range Scan’ (more on those later in this post).  I hope you agree that those were two very different operations, and perhaps you are wondering why there aren’t different graphical plan icons for Range Scans and Seeks?  I have often wondered about that, and the first person to mention it after yesterday’s post was Erin Stellato (twitter | blog): Before we go on to make sense of all this, let’s look at another example of how SQL Server confusingly mixes the terms ‘Scan’ and ‘Seek’ in different contexts.  The diagram below shows a very simple heap table with two columns, one of which is the non-clustered Primary Key, and the other has a non-unique non-clustered index defined on it.  The right hand side of the diagram shows a simple query, it’s associated query plan, and a couple of extracts from the SSMS tool-tip and Properties windows. Notice the ‘scan direction’ entry in the Properties window snippet.  Is this a seek or a scan?  The different references to Scans and Seeks are even more pronounced in the XML plan output that the graphical plan is based on.  This fragment is what lies behind the single Index Seek icon shown above: You’ll find the same confusing references to Seeks and Scans throughout the product and its documentation. Making Sense of Seeks Let’s forget all about scans for a moment, and think purely about seeks.  Loosely speaking, a seek is the process of navigating an index B-tree to find a particular index record, most often at the leaf level.  A seek starts at the root and navigates down through the levels of the index to find the point of interest: Singleton Lookups The simplest sort of seek predicate performs this traversal to find (at most) a single record.  This is the case when we search for a single value using a unique index and an equality predicate.  It should be readily apparent that this type of search will either find one record, or none at all.  This operation is known as a singleton lookup.  Given the example table from before, the following query is an example of a singleton lookup seek: Sadly, there’s nothing in the graphical plan or XML output to show that this is a singleton lookup – you have to infer it from the fact that this is a single-value equality seek on a unique index.  The other common examples of a singleton lookup are bookmark lookups – both the RID and Key Lookup forms are singleton lookups (an RID lookup finds a single record in a heap from the unique row locator, and a Key Lookup does much the same thing on a clustered table).  If you happen to run your query with STATISTICS IO ON, you will notice that ‘Scan Count’ is always zero for a singleton lookup. Range Scans The other type of seek predicate is a ‘seek plus range scan’, which I will refer to simply as a range scan.  
The seek operation makes an initial descent into the index structure to find the first leaf row that qualifies, and then performs a range scan (either backwards or forwards in the index) until it reaches the end of the scan range. The ability of a range scan to proceed in either direction comes about because index pages at the same level are connected by a doubly-linked list – each page has a pointer to the previous page (in logical key order) as well as a pointer to the following page.  The doubly-linked list is represented by the green and red dotted arrows in the index diagram presented earlier.  One subtle (but important) point is that the notion of a ‘forward’ or ‘backward’ scan applies to the logical key order defined when the index was built.  In the present case, the non-clustered primary key index was created as follows: CREATE TABLE dbo.Example ( key_col INTEGER NOT NULL, data INTEGER NOT NULL, CONSTRAINT [PK dbo.Example key_col] PRIMARY KEY NONCLUSTERED (key_col ASC) ) ; Notice that the primary key index specifies an ascending sort order for the single key column.  This means that a forward scan of the index will retrieve keys in ascending order, while a backward scan would retrieve keys in descending key order.  If the index had been created instead on key_col DESC, a forward scan would retrieve keys in descending order, and a backward scan would return keys in ascending order. A range scan seek predicate may have a Start condition, an End condition, or both.  Where one is missing, the scan starts (or ends) at one extreme end of the index, depending on the scan direction.  Some examples might help clarify that: the following diagram shows four queries, each of which performs a single seek against a column holding every integer from 1 to 100 inclusive.  The results from each query are shown in the blue columns, and relevant attributes from the Properties window appear on the right: Query 1 specifies that all key_col values less than 5 should be returned in ascending order.  The query plan achieves this by seeking to the start of the index leaf (there is no explicit starting value) and scanning forward until the End condition (key_col < 5) is no longer satisfied (SQL Server knows it can stop looking as soon as it finds a key_col value that isn’t less than 5 because all later index entries are guaranteed to sort higher). Query 2 asks for key_col values greater than 95, in descending order.  SQL Server returns these results by seeking to the end of the index, and scanning backwards (in descending key order) until it comes across a row that isn’t greater than 95.  Sharp-eyed readers may notice that the end-of-scan condition is shown as a Start range value.  This is a bug in the XML show plan which bubbles up to the Properties window – when a backward scan is performed, the roles of the Start and End values are reversed, but the plan does not reflect that.  Oh well. Query 3 looks for key_col values that are greater than or equal to 10, and less than 15, in ascending order.  This time, SQL Server seeks to the first index record that matches the Start condition (key_col >= 10) and then scans forward through the leaf pages until the End condition (key_col < 15) is no longer met. Query 4 performs much the same sort of operation as Query 3, but requests the output in descending order.  
Again, we have to mentally reverse the Start and End conditions because of the bug, but otherwise the process is the same as always: SQL Server finds the highest-sorting record that meets the condition ‘key_col < 25’ and scans backward until ‘key_col >= 20’ is no longer true. One final point to note: seek operations always have the Ordered: True attribute.  This means that the operator always produces rows in a sorted order, either ascending or descending depending on how the index was defined, and whether the scan part of the operation is forward or backward.  You cannot rely on this sort order in your queries of course (you must always specify an ORDER BY clause if order is important) but SQL Server can make use of the sort order internally.  In the four queries above, the query optimizer was able to avoid an explicit Sort operator to honour the ORDER BY clause, for example. Multiple Seek Predicates As we saw yesterday, a single index seek plan operator can contain one or more seek predicates.  These seek predicates can either be all singleton seeks or all range scans – SQL Server does not mix them.  For example, you might expect the following query to contain two seek predicates, a singleton seek to find the single record in the unique index where key_col = 10, and a range scan to find the key_col values between 15 and 20: SELECT key_col FROM dbo.Example WHERE key_col = 10 OR key_col BETWEEN 15 AND 20 ORDER BY key_col ASC ; In fact, SQL Server transforms the singleton seek (key_col = 10) to the equivalent range scan, Start:[key_col >= 10], End:[key_col <= 10].  This allows both range scans to be evaluated by a single seek operator.  To be clear, this query results in two range scans: one from 10 to 10, and one from 15 to 20. Final Thoughts That’s it for today – tomorrow we’ll look at monitoring singleton lookups and range scans, and I’ll show you a seek on a heap table. Yes, a seek.  On a heap.  Not an index! If you would like to run the queries in this post for yourself, there’s a script below.  Thanks for reading! 
IF OBJECT_ID(N'dbo.Example', N'U') IS NOT NULL
BEGIN
    DROP TABLE dbo.Example;
END
;
-- Test table is a heap
-- Non-clustered primary key on 'key_col'
CREATE TABLE dbo.Example
(
    key_col INTEGER NOT NULL,
    data INTEGER NOT NULL,
    CONSTRAINT [PK dbo.Example key_col]
    PRIMARY KEY NONCLUSTERED (key_col)
)
;
-- Non-unique non-clustered index on the 'data' column
CREATE NONCLUSTERED INDEX [IX dbo.Example data]
ON dbo.Example (data)
;
-- Add 100 rows
INSERT dbo.Example WITH (TABLOCKX)
(
    key_col,
    data
)
SELECT
    key_col = V.number,
    data = V.number
FROM master.dbo.spt_values AS V
WHERE V.[type] = N'P'
AND V.number BETWEEN 1 AND 100
;
-- ================
-- Singleton lookup
-- ================
;
-- Single value equality seek in a unique index
-- Scan count = 0 when STATISTICS IO is ON
-- Check the XML SHOWPLAN
SELECT E.key_col
FROM dbo.Example AS E
WHERE E.key_col = 32
;
-- ===========
-- Range Scans
-- ===========
;
-- Query 1
SELECT E.key_col
FROM dbo.Example AS E
WHERE E.key_col <= 5
ORDER BY E.key_col ASC
;
-- Query 2
SELECT E.key_col
FROM dbo.Example AS E
WHERE E.key_col > 95
ORDER BY E.key_col DESC
;
-- Query 3
SELECT E.key_col
FROM dbo.Example AS E
WHERE E.key_col >= 10
AND E.key_col < 15
ORDER BY E.key_col ASC
;
-- Query 4
SELECT E.key_col
FROM dbo.Example AS E
WHERE E.key_col >= 20
AND E.key_col < 25
ORDER BY E.key_col DESC
;
-- Final query (singleton + range = 2 range scans)
SELECT E.key_col
FROM dbo.Example AS E
WHERE E.key_col = 10
OR E.key_col BETWEEN 15 AND 20
ORDER BY E.key_col ASC
;
-- === TIDY UP ===
DROP TABLE dbo.Example;

© 2011 Paul White
email: [email protected]
twitter: @SQL_Kiwi

    Read the article

  • while(true) and loop-breaking - anti-pattern?

    - by KeithS
    Consider the following code: public void doSomething(int input) { while(true) { TransformInSomeWay(input); if(ProcessingComplete(input)) break; DoSomethingElseTo(input); } } Assume that this process involves a finite but input-dependent number of steps; the loop is designed to terminate on its own as a result of the algorithm, and is not designed to run indefinitely (until cancelled by an outside event). Because the test to see if the loop should end is in the middle of a logical set of steps, the while loop itself currently doesn't check anything meaningful; the check is instead performed at the "proper" place within the conceptual algorithm. I was told that this is bad code, because it is more bug-prone due to the ending condition not being checked by the loop structure. It's more difficult to figure out how you'd exit the loop, and could invite bugs as the breaking condition might be bypassed or omitted accidentally given future changes. Now, the code could be structured as follows: public void doSomething(int input) { TransformInSomeWay(input); while(!ProcessingComplete(input)) { DoSomethingElseTo(input); TransformInSomeWay(input); } } However, this duplicates a call to a method in code, violating DRY; if TransformInSomeWay were later replaced with some other method, both calls would have to be found and changed (and the fact that there are two may be less obvious in a more complex piece of code). You could also write it like: public void doSomething(int input) { var complete = false; while(!complete) { TransformInSomeWay(input); complete = ProcessingComplete(input); if(!complete) { DoSomethingElseTo(input); } } } ... but you now have a variable whose only purpose is to shift the condition-checking to the loop structure, and also has to be checked multiple times to provide the same behavior as the original logic. For my part, I say that given the algorithm this code implements in the real world, the original code is the most readable. If you were going through it yourself, this is the way you'd think about it, and so it would be intuitive to people familiar with the algorithm. So, which is "better"? is it better to give the responsibility of condition checking to the while loop by structuring the logic around the loop? Or is it better to structure the logic in a "natural" way as indicated by requirements or a conceptual description of the algorithm, even though that may mean bypassing the loop's built-in capabilities?
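
    One further restructuring, not shown above, keeps the check in the while header without repeating the TransformInSomeWay call, at the cost of an extra helper method. A sketch in C#, assuming the same three methods from the post:

        public void DoSomething(int input)
        {
            // The helper performs the transform and reports whether processing is complete,
            // so the loop condition lives in the while header and no call is duplicated.
            while (!TransformAndCheck(input))
            {
                DoSomethingElseTo(input);
            }
        }

        private bool TransformAndCheck(int input)
        {
            TransformInSomeWay(input);
            return ProcessingComplete(input);
        }

    Whether hiding the transform inside a predicate-like helper reads better than the explicit break is, of course, the same judgement call the question is asking about.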

    Read the article

  • How to Modify Data Security in Fusion Applications

    - by Elie Wazen
    The reference implementation in Fusion Applications is designed with built-in data security on business objects that implement the most common business practices.  For example, the “Sales Representative” job has the following two data security rules implemented on an “Opportunity” to restrict the list of Opportunities that are visible to an Sales Representative: Can view all the Opportunities where they are a member of the Opportunity Team Can view all the Opportunities where they are a resource of a territory in the Opportunity territory team While the above conditions may represent the most common access requirements of an Opportunity, some customers may have additional access constraints. This blog post explains: How to discover the data security implemented in Fusion Applications. How to customize data security Illustrative example. a.) How to discover seeded data security definitions The Security Reference Manuals explain the Function and Data Security implemented on each job role.  Security Reference Manuals are available on Oracle Enterprise Repository for Oracle Fusion Applications. The following is a snap shot of the security documented for the “Sales Representative” Job. The two data security policies define the list of Opportunities a Sales Representative can view. Here is a sample of data security policies on an Opportunity. Business Object Policy Description Policy Store Implementation Opportunity A Sales Representative can view opportunity where they are a territory resource in the opportunity territory team Role: Opportunity Territory Resource Duty Privilege: View Opportunity (Data) Resource: Opportunity A Sales Representative can view opportunity where they are an opportunity sales team member with view, edit, or full access Role: Opportunity Sales Representative Duty Privilege: View Opportunity (Data) Resource: Opportunity Description of Columns Column Name Description Policy Description Explains the data filters that are implemented as a SQL Where Clause in a Data Security Grant Policy Store Implementation Provides the implementation details of the Data Security Grant for this policy. In this example the Opportunities listed for a “Sales Representative” job role are derived from a combination of two grants defined on two separate duty roles at are inherited by the Sales Representative job role. b.) How to customize data security Requirement 1: Opportunities should be viewed only by members of the opportunity team and not by all the members of all the territories on the opportunity. Solution: Remove the role “Opportunity Territory Resource Duty” from the hierarchy of the “Sales Representative” job role. Best Practice: Do not modify the seeded role hierarchy. Create a custom “Sales Representative” job role and build the role hierarchy with the seeded duty roles. Requirement 2: Opportunities must be more restrictive based on a custom attribute that identifies if a Opportunity is confidential or not. Confidential Opportunities must be visible only the owner of the Opportunity. Solution: Modify the (2) data security policy in the above example as follows: A Sales Representative can view opportunity where they are a territory resource in the opportunity territory team and the opportunity is not confidential. Implementation of this policy is more invasive. The seeded SQL where clause of the data security grant on “Opportunity Territory Resource Duty” has to be modified and the condition that checks for the confidential flag must be added. 
Best Practice: Do not modify the seeded grant. Create a new grant with the modified condition. End Date the seeded grant. c.) Illustrative Example (Implementing Requirement 2) A data security policy contains the following components: Role Object Instance Set Action Of the above four components, the Role and Instance Set are the only components that are customizable. Object and Actions for that object are seed data and cannot be modified. To customize a seeded policy, “A Sales Representative can view opportunity where they are a territory resource in the opportunity territory team”, Find the seeded policy Identify the Role, Object, Instance Set and Action components of the policy Create a new custom instance set based on the seeded instance set. End Date the seeded policies Create a new data security policy with custom instance set c-1: Find the seeded policy Step 1: 1. Find the Role 2. Open 3. Find Policies Step 2: Click on the Data Security Tab Sort by “Resource Name” Find all the policies with the “Condition” as “where they are a territory resource in the opportunity territory team” In this example, we can see there are 5 policies for “Opportunity Territory Resource Duty” on Opportunity object. Step 3: Now that we know the policy details, we need to create new instance set with the custom condition. All instance sets are linked to the object. Find the object using global search option. Open it and click on “condition” tab Sort by Display name Find the Instance set Edit the instance set and copy the “SQL Predicate” to a notepad. Create a new instance set with the modified SQL Predicate from above by clicking on the icon as shown below. Step 4: End date the seeded data security policies on the duty role and create new policies with your custom instance set. Repeat the navigation in step Edit each of the 5 policies and end date them 3. Create new custom policies with the same information as the seeded policies in the “General Information”, “Roles” and “Action” tabs. 4. In the “Rules” tab, please pick the new instance set that was created in Step 3.

    Read the article

  • Delphi 2009 MS Build headaches

    - by X-Ray
    does anyone know of any good description of delphi's build system? (i know it's using MS Build.) i'm using delphi 2009. i wanted to set up a variation of the Debug build configuration that (often) has different defines (d2009 seems to call them "preprocessor symbols"). the problem i'm having is that--even though i turned off "inherit" for "Base" and "Debug"--have only very limited control. for example, i can't get rid of FastMM_. <PropertyGroup> <ProjectGuid>{D7FE7347-8E2C-438C-A275-38B8DA9244B0}</ProjectGuid> <ProjectVersion>12.0</ProjectVersion> <MainSource>oca.dpr</MainSource> <Config Condition="'$(Config)'==''">Debug</Config> <DCC_DCCCompiler>DCC32</DCC_DCCCompiler> </PropertyGroup> <PropertyGroup Condition="'$(Config)'=='Base' or '$(Base)'!=''"> <Base>true</Base> </PropertyGroup> <PropertyGroup Condition="'$(Config)'=='Release' or '$(Cfg_1)'!=''"> <Cfg_1>true</Cfg_1> <CfgParent>Base</CfgParent> <Base>true</Base> </PropertyGroup> <PropertyGroup Condition="'$(Config)'=='Debug' or '$(Cfg_2)'!=''"> <Cfg_2>true</Cfg_2> <CfgParent>Base</CfgParent> <Base>true</Base> </PropertyGroup> <PropertyGroup Condition="'$(Base)'!=''"> <DCC_StringChecks>off</DCC_StringChecks> <DCC_MinimumEnumSize>4</DCC_MinimumEnumSize> <DCC_RangeChecking>true</DCC_RangeChecking> <DCC_IntegerOverflowCheck>true</DCC_IntegerOverflowCheck> <DCC_UNIT_PLATFORM>false</DCC_UNIT_PLATFORM> <DCC_SYMBOL_PLATFORM>false</DCC_SYMBOL_PLATFORM> <DCC_DcuOutput>.\dcu</DCC_DcuOutput> <DCC_UnitSearchPath>C:\Prj\Lib\AutoQADocking\Delphi2009.Win32\Lib;$(BDS)\Source\DUnit\src;$(DCC_UnitSearchPath)</DCC_UnitSearchPath> <DCC_Optimize>false</DCC_Optimize> <DCC_DependencyCheckOutputName>oca.exe</DCC_DependencyCheckOutputName> <DCC_ImageBase>00400000</DCC_ImageBase> <DCC_UnitAlias>WinTypes=Windows;WinProcs=Windows;DbiTypes=BDE;DbiProcs=BDE;DbiErrs=BDE;$(DCC_UnitAlias)</DCC_UnitAlias> <DCC_Platform>x86</DCC_Platform> <DCC_E>false</DCC_E> <DCC_N>false</DCC_N> <DCC_S>false</DCC_S> <DCC_F>false</DCC_F> <DCC_K>false</DCC_K> </PropertyGroup> <PropertyGroup Condition="'$(Cfg_1)'!=''"> <DCC_PentiumSafeDivide>true</DCC_PentiumSafeDivide> <DCC_Optimize>true</DCC_Optimize> <DCC_IntegerOverflowCheck>false</DCC_IntegerOverflowCheck> <BRCC_Defines>MadExcept;FastMM;$(BRCC_Defines)</BRCC_Defines> <DCC_AssertionsAtRuntime>false</DCC_AssertionsAtRuntime> <DCC_LocalDebugSymbols>false</DCC_LocalDebugSymbols> <DCC_Define>RELEASE;$(DCC_Define)</DCC_Define> <DCC_SymbolReferenceInfo>0</DCC_SymbolReferenceInfo> <DCC_DebugInformation>false</DCC_DebugInformation> </PropertyGroup> <PropertyGroup Condition="'$(Cfg_2)'!=''"> <DCC_DebugInfoInExe>true</DCC_DebugInfoInExe> <BRCC_Defines>FastMM</BRCC_Defines> <DCC_DebugDCUs>true</DCC_DebugDCUs> <DCC_MapFile>3</DCC_MapFile> <DCC_Define>DEBUG;FastMM_;madExcept;$(DCC_Define)</DCC_Define> </PropertyGroup> i even had to edit it today with notepad to get rid of a DCC define that the delphi UI doesn't seem to give access to. (it said "From Delphi Compiler" for the item i couldn't remove.) does anyone know a good primer on the use of this feature in delphi? thank you!

    Read the article

  • Reference Data Management

    - by rahulkamath
    Oracle Data Relationship Management (DRM) has always been extremely powerful as an Enterprise MDM solution that can help manage changes to master data in a way that influences enterprise structure, whether it be mastering chart of accounts to
enable financial transformation, or revamping organization structures to drive business transformation and operational efficiencies, or mastering sales territories in light of rapid fire acquisitions that require frequent sales territory refinement, equitable distribution of leads and accounts to salespersons, and alignment of budget/forecast with results to optimize sales coverage. Increasingly, DRM is also being utilized by Oracle customers for reference data management, an emerging solution space that deserves some explanation. What is reference data? Reference data is a close cousin of master data. While master data may be more rapidly changing, requires consensus building across stakeholders and lends structure to business transactions, reference data is simpler, more slowly changing, but has semantic content that is used to categorize or group other information assets – including master data – and give them contextual value. The following table contains an illustrative list of examples of reference data by type. Reference data types may include types and codes, business taxonomies, complex relationships & cross-domain mappings or standards.

Types & Codes | Taxonomies | Relationships / Mappings | Standards
Transaction Codes | Industry Classification Categories and Codes, e.g., North America Industry Classification System (NAICS) | Product / Segment; Product / Geo | Calendars (e.g., Gregorian, Fiscal, Manufacturing, Retail, ISO8601)
Lookup Tables (e.g., Gender, Marital Status, etc.) | Product Categories | City → State → Postal Codes | Currency Codes (e.g., ISO)
Status Codes | Sales Territories (e.g., Geo, Industry Verticals, Named Accounts, Federal/State/Local/Defense) | Customer / Market Segment; Business Unit / Channel | Country Codes (e.g., ISO 3166, UN)
Role Codes | Market Segments | Country Codes / Currency Codes / Financial Accounts | Date/Time, Time Zones (e.g., ISO 8601)
Domain Values | Universal Standard Products and Services Classification (UNSPSC), eCl@ss | International Classification of Diseases (ICD), e.g., ICD9 → ICD10 mappings | Tax Rates

Why manage reference data? Reference data carries contextual value and meaning and therefore its use can drive business logic that helps execute a business process, create a desired application behavior or provide meaningful segmentation to analyze transaction data. Further, mapping reference data often requires human judgment. Sample Use Cases of Reference Data Management Healthcare: Diagnostic Codes The reference data challenges in the healthcare industry offer a case in point. Part of being HIPAA compliant requires medical practitioners to transition diagnosis codes from ICD-9 to ICD-10, a medical coding scheme used to classify diseases, signs and symptoms, causes, etc. The transition to ICD-10 has a significant impact on business processes, procedures, contracts, and IT systems. Since both code sets ICD-9 and ICD-10 offer diagnosis codes of very different levels of granularity, human judgment is required to map ICD-9 codes to ICD-10. The process requires collaboration and consensus building among stakeholders much in the same way as does master data management. Moreover, to build reports to understand utilization, frequency and quality of diagnoses, medical practitioners may need to “cross-walk” mappings -- either forward to ICD-10 or backwards to ICD-9 depending upon the reporting time horizon.
Spend Management: Product, Service & Supplier Codes Similarly, as an enterprise looks to rationalize suppliers and leverage their spend, conforming supplier codes, as well as product and service codes requires supporting multiple classification schemes that may include industry standards (e.g., UNSPSC, eCl@ss) or enterprise taxonomies. Aberdeen Group estimates that 90% of companies rely on spreadsheets and manual reviews to aggregate, classify and analyze spend data, and that data management activities account for 12-15% of the sourcing cycle and consume 30-50% of a commodity manager’s time. Creating a common map across the extended enterprise to rationalize codes across procurement, accounts payable, general ledger, credit card, procurement card (P-card) as well as ACH and bank systems can cut sourcing costs, improve compliance, lower inventory stock, and free up talent to focus on value added tasks. Specialty Finance: Point of Sales Transaction Codes and Product Codes In the specialty finance industry, enterprises are confronted with usury laws – governed at the state and local level – that regulate financial product innovation as it relates to consumer loans, check cashing and pawn lending. To comply, it is important to demonstrate that transactions booked at the point of sale are posted against valid product codes that were on offer at the time of booking the sale. Since new products are being released at a steady stream, it is important to ensure timely and accurate mapping of point-of-sale transaction codes with the appropriate product and GL codes to comply with the changing regulations. Multi-National Companies: Industry Classification Schemes As companies grow and expand across geographies, a typical challenge they encounter with reference data represents reconciling various versions of industry classification schemes in use across nations. While the United States, Mexico and Canada conform to the North American Industry Classification System (NAICS) standard, European Union countries choose different variants of the NACE industry classification scheme. Multi-national companies must manage the individual national NACE schemes and reconcile the differences across countries. Enterprises must invest in a reference data change management application to address the challenge of distributing reference data changes to downstream applications and assess which applications were impacted by a given change.

    Read the article

  • How to avoid throwing vexing exceptions?

    - by Mike
    Reading Eric Lippert's article on exceptions was definitely an eye opener on how I should approach exceptions, both as the producer and as the consumer. However, I'm still struggling to define a guideline regarding how to avoid throwing vexing exceptions. Specifically: Suppose you have a Save method that can fail because a) Somebody else modified the record before you, or b) The value you're trying to create already exists. These conditions are to be expected and not exceptional, so instead of throwing an exception you decide to create a Try version of your method, TrySave, which returns a boolean indicating if the save succeeded. But if it fails, how will the consumer know what was the problem? Or would it be best to return an enum indicating the result, kind of Ok/RecordAlreadyModified/ValueAlreadyExists? With integer.TryParse this problem doesn't exist, since there's only one reason the method can fail. Is the previous example really a vexing situation? Or would throwing an exception in this case be the preferred way? I know that's how it's done in most libraries and frameworks, including the Entity framework. How do you decide when to create a Try version of your method vs. providing some way to test beforehand if the method will work or not? I'm currently following these guidelines: If there is the chance of a race condition, then create a Try version. This prevents the need for the consumer to catch an exogenous exception. For example, in the Save method described before. If the method to test the condition pretty much would do all that the original method does, then create a Try version. For example, integer.TryParse(). In any other case, create a method to test the condition.
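
    A minimal sketch of the enum-returning shape described above; SaveResult, Record and the store internals are illustrative assumptions, not taken from Lippert's article or any particular framework:

        public enum SaveResult
        {
            Ok,
            RecordAlreadyModified,
            ValueAlreadyExists
        }

        public class Record { }

        public class RecordStore
        {
            // Expected failure modes are reported through the result instead of
            // being thrown; truly unexpected errors would still surface as exceptions.
            public SaveResult TrySave(Record record)
            {
                if (HasBeenModifiedByAnotherUser(record))
                    return SaveResult.RecordAlreadyModified;

                if (ValueAlreadyExists(record))
                    return SaveResult.ValueAlreadyExists;

                Persist(record);
                return SaveResult.Ok;
            }

            // Placeholders for whatever concurrency check, uniqueness check and
            // persistence the real store would perform.
            private bool HasBeenModifiedByAnotherUser(Record record) { return false; }
            private bool ValueAlreadyExists(Record record) { return false; }
            private void Persist(Record record) { }
        }

    The caller can then switch on the returned SaveResult and handle each expected outcome without a try/catch, which is the race-condition case the first guideline above is about.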

    Read the article

  • My new favourite traceflag

    - by Dave Ballantyne
    As we are all aware, there are a number of traceflags. Some documented, some semi-documented and some completely undocumented. Here is one that is undocumented that Paul White(b|t) mentioned almost as an aside in one of his excellent blog posts. Much has been written about residual predicates and how a predicate can be pushed into a seek/scan operation. This is a good thing to happen, as it saves a lot of processing from having to be done. For the uninitiated though: given a simple SELECT statement such as the one in the original post (it looks for people named Smith whose MiddleName is not null), the process that SQL Server goes through to resolve it is: the index IX_Person_LastName_FirstName_MiddleName is navigated to find the first “Smith”, and for each “Smith” the middle name is checked for being null. Two operations, and the execution plan doesn't fully represent all the work that is being undertaken. As you can see there is only a single seek operation; the work undertaken to resolve the condition “MiddleName is not null” has been pushed into it. This can be seen in the properties. “Seek predicate” is how the index has been navigated, and “Predicate” is the condition run over every row, a scan inside a seek! So the question is: how many rows have been resolved by the seek and how many by the scan? How many rows did the filter remove? Wouldn’t it be nice if this operation could be split? That is exactly what traceflag 9130 does. Executing the query with the traceflag enabled changes the plan rather dramatically, and should change how we think about the index seek itself. The Filter operator has been added and, unsurprisingly, the condition in it is “MiddleName is not null”. So it is now evident that the seek operation found 103 Smiths and 60 of those Smiths had a non-null MiddleName. This traceflag has no place on a production system, don't even think about it.

    Read the article

  • Context switches much slower in new linux kernels

    - by Michael Goldshteyn
    We are looking to upgrade the OS on our servers from Ubuntu 10.04 LTS to Ubuntu 12.04 LTS. Unfortunately, it seems that the latency to run a thread that has become runnable has significantly increased from the 2.6 kernel to the 3.2 kernel. In fact, the latency numbers we are getting are hard to believe. Let me be more specific about the test. We have a program that has two threads. The first thread gets the current time (in ticks using RDTSC) and then signals a condition variable once a second. The second thread waits on the condition variable and wakes up when it is signaled. It then gets the current time (in ticks using RDTSC). The difference between the time in the second thread and the time in the first thread is computed and displayed on the console. After this the second thread waits on the condition variable once more. So, we get a thread-to-thread signaling latency measurement once a second as a result. In Linux 2.6.32, this latency is somewhere on the order of 2.8-3.5 us, which is reasonable. In Linux 3.2.0, this latency is somewhere on the order of 40-100 us. I have excluded any differences in hardware between the two hosts. They run on identical hardware (dual socket X5687 {Westmere-EP} processors running at 3.6 GHz with hyperthreading, speedstep and all C states turned off). We are changing the affinity to run both threads on physical cores of the same socket (i.e., the first thread is run on Core 0 and the second thread is run on Core 1), so there is no bouncing of threads on cores or bouncing/communication between sockets. The only difference between the two hosts is that one is running Ubuntu 10.04 LTS with kernel 2.6.32-28 (the fast context switch box) and the other is running the latest Ubuntu 12.04 LTS with kernel 3.2.0-23 (the slow context switch box). Have there been any changes in the kernel that could account for this ridiculous slowdown in how long it takes for a thread to be scheduled to run?

    Read the article

  • Why does the Ternary\Conditional operator seem significantly faster

    - by Jodrell
    Following on from this question, which I have partially answered. I compile this console app in x64 Release Mode, with optimizations on, and run it from the command line without a debugger attached. using System; using System.Diagnostics; class Program { static void Main() { var stopwatch = new Stopwatch(); var ternary = Looper(10, Ternary); var normal = Looper(10, Normal); if (ternary != normal) { throw new Exception(); } stopwatch.Start(); ternary = Looper(10000000, Ternary); stopwatch.Stop(); Console.WriteLine( "Ternary took {0}ms", stopwatch.ElapsedMilliseconds); stopwatch.Start(); normal = Looper(10000000, Normal); stopwatch.Stop(); Console.WriteLine( "Normal took {0}ms", stopwatch.ElapsedMilliseconds); if (ternary != normal) { throw new Exception(); } Console.ReadKey(); } static int Looper(int iterations, Func<bool, int, int> operation) { var result = 0; for (int i = 0; i < iterations; i++) { var condition = result % 11 == 4; var value = ((i * 11) / 3) % 5; result = operation(condition, value); } return result; } static int Ternary(bool condition, int value) { return value + (condition ? 2 : 1); } static int Normal(bool condition, int value) { if (condition) { return 2 + value; } return 1 + value; } } I don't get any exceptions and the output to the console is something close to: Ternary took 107ms Normal took 230ms When I break down the CIL for the two logical functions I get this, ... Ternary ... { : ldarg.1 // push second arg : ldarg.0 // push first arg : brtrue.s T // if first arg is true jump to T : ldc.i4.1 // push int32(1) : br.s F // jump to F T: ldc.i4.2 // push int32(2) F: add // add either 1 or 2 to second arg : ret // return result } ... Normal ... { : ldarg.0 // push first arg : brfalse.s F // if first arg is false jump to F : ldc.i4.2 // push int32(2) : ldarg.1 // push second arg : add // add second arg to 2 : ret // return result F: ldc.i4.1 // push int32(1) : ldarg.1 // push second arg : add // add second arg to 1 : ret // return result } Whilst the Ternary CIL is a little shorter, it seems to me that the execution path through the CIL for either function takes 3 loads and 1 or 2 jumps and a return. Why does the Ternary function appear to be twice as fast? I understand that, in practice, they are both very quick and indeed quick enough, but I would like to understand the discrepancy.
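
    One measurement detail worth noting: the code above starts the same Stopwatch a second time without resetting it, so the second ElapsedMilliseconds reading also includes the first interval. A sketch of the timing section with independent measurements (Looper, Ternary and Normal as defined above):

        var stopwatch = Stopwatch.StartNew();
        var ternary = Looper(10000000, Ternary);
        stopwatch.Stop();
        Console.WriteLine("Ternary took {0}ms", stopwatch.ElapsedMilliseconds);

        stopwatch.Restart(); // reset and start again so the second figure stands on its own
        var normal = Looper(10000000, Normal);
        stopwatch.Stop();
        Console.WriteLine("Normal took {0}ms", stopwatch.ElapsedMilliseconds);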

    Read the article
