Search Results

Search found 29638 results on 1186 pages for 'phone number'.

Page 512/1186

  • How to program for constraints/rules

    - by Gaurav
    First the background: during interviews in the past, I have many times been asked to design some variation of a card game as a programming puzzle, and I have tried to design it in an OO way, but I have never been satisfied with my solutions. It was not until recently that I realized I had been approaching the problem from the wrong direction. Specifically, I was trying to solve the problem by modeling each individual card as an object. The problem with this is that individual cards don't have any non-trivial intrinsic behavior and therefore are not suitable (or primary) candidates for objects. What is interesting and important about cards are the rules and constraints, such as there being only four suits, or only thirteen cards in each suit. And of course there are any number of rules for games. So my questions are: 1. Are there any idioms/constructs/patterns for programming rules & constraints? 2. How many of those in (1) can be applied in conjunction with the OO paradigm?
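    One way to act on that realization is to make the rules and constraints first-class objects and keep the cards as plain data. Below is a minimal C# sketch of that idea (all type and member names are illustrative, not taken from the question):

        using System.Collections.Generic;
        using System.Linq;

        // A card is plain data; the interesting behavior lives in the rules.
        public record Card(int Suit, int Rank);

        // A rule/constraint is a first-class object that can check a collection of cards.
        public interface IRule
        {
            string Description { get; }
            bool IsSatisfied(IReadOnlyCollection<Card> cards);
        }

        // Example constraint: a standard deck contains exactly four suits.
        public sealed class FourSuitsRule : IRule
        {
            public string Description => "A deck contains exactly four suits";
            public bool IsSatisfied(IReadOnlyCollection<Card> cards) =>
                cards.Select(c => c.Suit).Distinct().Count() == 4;
        }

        // Example constraint: each suit contains exactly thirteen ranks.
        public sealed class ThirteenRanksPerSuitRule : IRule
        {
            public string Description => "Each suit contains thirteen ranks";
            public bool IsSatisfied(IReadOnlyCollection<Card> cards) =>
                cards.GroupBy(c => c.Suit)
                     .All(g => g.Select(c => c.Rank).Distinct().Count() == 13);
        }

        // A game variant is a different composition of rules,
        // not a different hierarchy of card classes.
        public sealed class RuleSet
        {
            private readonly List<IRule> _rules = new List<IRule>();
            public RuleSet Add(IRule rule) { _rules.Add(rule); return this; }
            public IEnumerable<string> Violations(IReadOnlyCollection<Card> cards) =>
                _rules.Where(r => !r.IsSatisfied(cards)).Select(r => r.Description);
        }

    Whether this plays well with OO (question 2) then comes down to composition: the OO part models the game and its state, while each rule is a small object you can add, remove, or combine per game variant.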

    Read the article

  • Transitioning to asynchronous programming model

    - by Simone
    Our team is maintaining and developing a .NET web service written in C#. We have stress tested the web service's farm and we have evidence that the current architecture doesn't scale well as the number of requests keeps increasing. We analyzed Martin Fowler's conclusion in this article, and our team feels that migrating to an asynchronous programming model such as the one described could be the right direction for our service too. My question is: do you think this "switch" requires a complete rewrite of the application? Has any of you been able to adopt APM without rewriting everything, and do you have some insight to share? Thank you in advance.
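    For what it's worth, a switch like this can often be made one endpoint at a time rather than as a big-bang rewrite: keep the synchronous code path and add an asynchronous counterpart wherever the service spends its time waiting on I/O. A rough C# sketch of the before/after shape (the URL and names are placeholders, and this uses the task-based async/await style as one possible target rather than the classic Begin/End APM pattern):

        using System.Net;
        using System.Net.Http;
        using System.Threading.Tasks;

        public class QuoteClient
        {
            private static readonly HttpClient Http = new HttpClient();

            // Blocking style: the worker thread is tied up for the whole
            // duration of the downstream call, which limits scalability under load.
            public string GetQuote(string symbol)
            {
                using (var web = new WebClient())
                {
                    return web.DownloadString("https://example.com/quotes/" + symbol);
                }
            }

            // Asynchronous counterpart: the thread is returned to the pool while
            // the downstream call is in flight; callers can migrate one at a time.
            public async Task<string> GetQuoteAsync(string symbol)
            {
                return await Http.GetStringAsync("https://example.com/quotes/" + symbol);
            }
        }

    The callers higher up the stack (handlers, controllers) then need to become asynchronous as well, which is usually the bulk of the work, but it can be done incrementally rather than all at once.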

    Read the article

  • Google adwords API - credit card safety question

    - by anon
    Hi, Google is asking me to fax a photocopy of my credit card in order to activate the AdWords API in MCC. This really sucks IMO... they could have offered a prepaid option. In any case, my questions: 1) Are there alternatives to this - is there a 3rd-party provider who will give me this service without me sending them the credit card info? 2) How secure is it to send my credit card fax via some online fax service? 3) Do you think they will reject the application if I hide my CVV number in the fax? Any other thoughts appreciated :) thanks

    Read the article

  • What units does the ntp drift file use?

    - by arielf
    When the ntpd daemon is running, the file /var/lib/ntp/ntp.drift gets updated periodically. Example: 17:20 hostname 118 ~> ls -l /var/lib/ntp/ntp.drift -rw-r--r-- 1 ntp ntp 7 May 20 16:46 /var/lib/ntp/ntp.drift # So it looks like it was last updated ~34 minutes ago. The file has one number in it; for example, looking at 4 virtual hosts, I find these values, respectively: -22.086 -10.214 -13.669 6.045 I assume these are seconds per day(?), but I'm not sure. man ntpd mentions a different drift file, /etc/ntp.drift, which doesn't seem to exist. The man page doesn't explain what units are used for the drift. Questions: Is /etc/ntp.drift actually /var/lib/ntp/ntp.drift on Ubuntu? What units is the drift expressed in? Thanks!
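    If the drift value is a frequency offset in parts per million (PPM), which is how ntpd's documentation describes the drift file, then converting it to seconds of clock error per day is simple arithmetic. A small C# sketch using the values from the question (the PPM interpretation is an assumption here, not something stated in the post):

        using System;

        class DriftToSecondsPerDay
        {
            // Assumes the drift file value is a frequency offset in parts per million (PPM).
            static double PpmToSecondsPerDay(double ppm) => ppm * 86400.0 / 1e6;

            static void Main()
            {
                foreach (var ppm in new[] { -22.086, -10.214, -13.669, 6.045 })
                {
                    Console.WriteLine($"{ppm,8:F3} PPM ~ {PpmToSecondsPerDay(ppm),6:F2} s/day");
                }
            }
        }

    Under that assumption, -22.086 would be roughly -1.9 seconds of drift per day if left uncorrected, which is a plausible range for commodity hardware clocks.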

    Read the article

  • How could RDBMSes be considered a fad?

    - by StuperUser
    Having completed my Computing A-level in 2003, gotten a degree in Computing in 2007, and learned my trade in a company with a lot of SQL usage, I was brought up on the idea of relational databases being used for storage. So, despite being relatively new to development, I was taken aback to read a comment (on "Is LinqPad site quote 'Tired of querying in antiquated SQL?' accurate?") that said: [Some devs] despise [SQL] and think that it and RDBMS are a fad. Obviously, a competent dev will use the right tool for the right job and won't create a relational database when e.g. a flat file or another storage solution is appropriate, but RDBMSes are useful in a massive number of circumstances, so how could they be considered a fad?

    Read the article

  • Web Application Tasks Estimation

    - by Ali
    I know the answer depends on the exact project and its requirements, but I am asking about the average percentage of total time that goes into the tasks of web application development, statistically, from your own experience. While developing a (database-driven) web application, what percentage of time does each of the following activities usually take: -- database creation & all related stored procedures -- server-side development -- client-side development -- layout settings and designing. I know there are lots of great web application developers around here, and each of you has done a fair amount of web development, so there could be an almost fixed percentage of time going to each of the above activities for standard projects. Update: I am not expecting someone to tell me a number of hours; I am asking about the average percentage of time that goes to each of the activities as per your experience, e.g. server-side dev 50%, client-side development 20%, and so on. I repeat, there will be lots of cases that differ from the standard depending on the exact requirements of each web application project, but here I am asking about the average for a standard (no special requirements) web project.

    Read the article

  • Wow Twitter!!! Ten billions and counting

    - by samsudeen
    Twitter, the micro-blogging site, crossed the ten billion tweet milestone on the 4th of this month, as per the report by GigaTweet (a site which tracks the number of tweets posted on Twitter). The person who sent the 10 billionth tweet is still unknown, as their profile is protected. But the 9,999,999,999th tweet was sent by one Rafaela Marques from Brazil. As you can see, GigaTweet expects just another 196 days to reach the 20 billion mark if tweeting continues at the current pace. Some interesting statistics about the rate at which people tweeted each year: 2007 – 5,000 tweets per day; 2008 – 300,000 tweets per day; 2009 – 2.5 million per day. It reached an average of 35 million tweets per day by the end of 2009. Today, believe it or not, the tweet rate is 50 million tweets per day, and that’s why we say Wow Twitter!!!

    Read the article

  • How can architects work with self-organizing Scrum teams?

    - by Martin Wickman
    An organization with a number of agile Scrum teams also has a small group of people appointed as "enterprise architects". The EA group acts as control and gatekeeper for quality and adherence to decisions. This leads to overlaps between team decisions and EA decisions. For instance, the team might want to use library X or want to use REST instead of SOAP, but the EA does not approve of that. Now, this can lead to frustration when team decisions are overruled. Taken far enough, it can potentially lead to a situation where the EA people "grab" all power and the team ends up feeling demotivated and not very agile at all. The Scrum Guide has this to say about it: Self-organizing: No one (not even the Scrum Master) tells the Development Team how to turn Product Backlog into Increments of potentially releasable functionality. Is that reasonable? Should the EA team be disbanded? Should the teams refuse or simply comply?

    Read the article

  • Compiler Dependencies [closed]

    - by asghar ashgari
    I'm a newbie researcher whose passion is programming languages (Web era). I'm wondering why all the Web frameworks and Web-based general-purpose languages have a huge number of dependencies when you want to install and then use (e.g., extend, alter, etc.) their compilers. For example, Ruby on Rails or Scala. If I want to download their source code and try to build it again, it feels, to me at least, like a can of worms. I have a Mac, so I need to install MacPorts, then update my Xcode, then get the compiler source code that has a bunch of other dependencies, and then it's hard to set things up; just to see that the installed open-source compiler works fine.

    Read the article

  • Web Site Performance and Assembly Versioning – Part 2 Versioning Combined Files Using Subversion

    - by capgpilk
    Ok so it took a while to post this second part. Many apologies, we had a big roll-out of a new platform at work and many things had to get sidelined. So this is the second part in a short series on website performance and using versioning to help improve it: Minification and Concatenation of JavaScript and CSS Files; Versioning Combined Files Using Subversion – this post; Versioning Combined Files Using Mercurial – published shortly. In the previous post we used AjaxMin to shrink js and css files, then concatenated them into one file each with the file names site-script.combined.min.js and site-style.combined.min.css. These file names are fine, but you can configure IIS 7 to cache these static files and so lower the amount of data transferred between server and client. This is done by editing the response headers in IIS. 1. In IIS7 Manager, choose the directory where these files are located and select HTTP Response Headers. 2. Check Expire Web Content and set a time period well into the future. 3. When refreshing the web page, the server will respond with HTTP 304, forcing the browser to retrieve the file from its cache. 4. As can be seen in FireBug, the Cache-Control header has a max age of 31536000 seconds, which equates to 365 days. The server will always send this HTTP 304 message unless the file changes, forcing it to send new content. To help force this we can change the file name based on the latest build, using the SVN revision number in the filename. So we have lowered data transfer on content that hasn’t changed, but forced it to be sent when you have made a change to the css or js files. Now to get the SVN revision number into the file name. 1. Import the MSBuildCommunityTasks targets, which can be downloaded from here. <Import Project="$(MSBuildExtensionsPath)\MSBuildCommunityTasks\MSBuild.Community.Tasks.Targets" /> 2. Edit the BeforeBuild target to call out to svn and get the latest revision. <SvnVersion LocalPath="$(MSBuildProjectDirectory)" ToolPath="$(ProgramFiles)\VisualSVN Server\bin"> <Output TaskParameter="Revision" PropertyName="Revision" /> </SvnVersion> 3. Set it to update the project AssemblyInfo.cs file for the svn revision. <FileUpdate Files="Properties\AssemblyInfo.cs" Regex="(\d+)\.(\d+)\.(\d+)\.(\d+)" ReplacementText="$1.$2.$3.$(Revision)" /> 4. Now edit the AfterBuild target to get the full dll version. You could combine these two steps and just get the version from svn; I am working on one project that updates the AssemblyInfo file and another project that allows manual editing of the file but needs that version within the file name, so I just combined the two for this post. <MSBuild.ExtensionPack.Framework.Assembly TaskAction="GetInfo" NetAssembly="$(OutputPath)\mydll.dll"> <Output TaskParameter="OutputItems" ItemName="Info" /> </MSBuild.ExtensionPack.Framework.Assembly> <Message Text="Version: %(Info.AssemblyVersion)" Importance="High" /> 5. Use this Info.AssemblyVersion to write out the combined css and js files as described in the last post. <WriteLinesToFile File="Scripts\site-%(Info.AssemblyVersion).combined.min.js" Lines="@(JSLinesSite)" Overwrite="true" /> In the next post I will cover doing the same, but for a Mercurial repository.

    Read the article

  • DBA job ranked #7 for Best Job in America 2010

    - by Tara Kizer
    I started my IT career as a student worker in the database team at the County of San Diego.  Although I worked on many different things in that group, it launched my career as a Database Administrator.  You can get an overview of my career here. It seems I picked the right career for a good job in America.  According to CNNMoney and PayScale, a database administrator job is ranked number 7 for the best job in America in 2010.  Check it out here. There are quite a few IT jobs on the list.  You can check out the full list here. Do you agree with this ranking?

    Read the article

  • Why AdMob’s reported iPhone and Android market shares are inflated

    AdMob, the mobile advertiser that was bought by Google some months ago, has released its latest market share figures for the mobile browsers. Their main findings have already been discussed extensively: Smartphones are on the rise; 48% versus 35% last month. Feature phones are falling quickly; 58% to 35%. Still, the absolute number of feature phones rose by 31%, which means that the market as a whole is growing rapidly. The AdMob report, however, is not about browser market share but about ad impressions...

    Read the article

  • External display, windows are moving to external display

    - by hextler
    I use an external display with a Lenovo W520. I am running it with the following script:
    xrandr --output LVDS1 --auto --primary --output VIRTUAL --mode 1920x1200 --left-of LVDS1
    optirun screenclone -d :8 -x 1
    xrandr --output VIRTUAL --off
    but I have a number of problems: All windows are moving to the external display. If I interrupt this script I cannot run it a second time without restarting Xorg. yakuake changes height and does not show its tabs. I cannot make the external display primary; in that case I cannot see windows. If you know how to solve these issues please let me know. Thanks in advance!

    Read the article

  • Tuning Red Gate: #4 of Some

    - by Grant Fritchey
    First time connecting to these servers directly (keys to the kingdom, bwa-ha-ha-ha. oh, excuse me), so I'm going to take a look at the server properties, just to see if there are any issues there. Max memory is set, cool, first possible silly mistake clear. In fact, these look to be nicely set up. Oh, I'd like to see the ANSI Standards set by default, but it's not a big deal. The default location for database data is the F:\ drive, where I saw all the activity last time. Cool, the people maintaining the servers in our company listen, parallelism threshold is set to 35 and optimize for ad hoc is enabled. No shocks, no surprises. The basic setup is appropriate. On to the problem database. Nothing wrong in the properties. The database is in SIMPLE recovery, but I think it's a reporting system, so no worries there. Again, I'd prefer to see the ANSI settings for connections, but that's the worst thing I can see. Time to look at the queries, tables, indexes and statistics because all the information I've collected over the last several days suggests that we're not looking at a systemic problem (except possibly not enough memory), but at the traditional tuning issues. I just want to note that, I started looking at the system, not the queries. So should you when tuning your environment. I know, from the data collected through SQL Monitor, what my top poor performing queries are, and the most frequently called, etc. I'm starting with the most frequently called. I'm going to get the execution plan for this thing out of the cache (although, with the cache dumping constantly, I might not get it). And it's not there. Called 1.3 million times over the last 3 days, but it's not in cache. Wow. OK. I'll see what's in cache for this database: SELECT  deqs.creation_time,         deqs.execution_count,         deqs.max_logical_reads,         deqs.max_elapsed_time,         deqs.total_logical_reads,         deqs.total_elapsed_time,         deqp.query_plan,         SUBSTRING(dest.text, (deqs.statement_start_offset / 2) + 1,                   (deqs.statement_end_offset - deqs.statement_start_offset) / 2                   + 1) AS QueryStatement FROM    sys.dm_exec_query_stats AS deqs         CROSS APPLY sys.dm_exec_sql_text(deqs.sql_handle) AS dest         CROSS APPLY sys.dm_exec_query_plan(deqs.plan_handle) AS deqp WHERE   dest.dbid = DB_ID('Warehouse') AND deqs.statement_end_offset > 0 AND deqs.statement_start_offset > 0 ORDER BY deqs.max_logical_reads DESC ; And looking at the most expensive operation, we have our first bad boy: Multiple table scans against very large sets of data and a sort operation. a sort operation? It's an insert. Oh, I see, the table is a heap, so it's doing an insert, then sorting the data and then inserting into the primary key. First question, why isn't this a clustered index? Let's look at some more of the queries. The next one is deceiving. Here's the query plan: You're thinking to yourself, what's the big deal? Well, what if I told you that this thing had 8036318 reads? I know, you're looking at skinny little pipes. Know why? Table variable. Estimated number of rows = 1. Actual number of rows. well, I'm betting several more than one considering it's read 8 MILLION pages off the disk in a single execution. We have a serious and real tuning candidate. Oh, and I missed this, it's loading the table variable from a user defined function. Let me check, let me check. YES! A multi-statement table valued user defined function. And another tuning opportunity. 
This one's a beauty, seriously. Did I also mention that they're doing a hash against all the columns in the physical table. I'm sure that won't lead to scans of a 500,000 row table, no, not at all. OK. I lied. Of course it is. At least it's on the top part of the Loop which means the scan is only executed once. I just did a cursory check on the next several poor performers. all calling the UDF. I think I found a big tuning opportunity. At this point, I'm typing up internal emails for the company. Someone just had their baby called ugly. In addition to a series of suggested changes that we need to implement, I'm also apologizing for being such an unkind monster as to question whether that third eye & those flippers belong on such an otherwise lovely child.

    Read the article

  • Do most companies not know how to write software?

    - by SnOrfus
    If you're an active reader here, try to think about how many times you've heard (and even agreed) when someone here has told someone else to start looking for a new job. Personally, I've seen it a lot more than I expected: it's almost starting to sound cliché. I get that there are bound to be a number of companies that are bad at developing software or managing a software project, but it almost seems like it's getting worse and more frequent; then again, maybe we're just hearing from them and not from all the places that have decent work atmospheres/conditions. So I ask: in your experience, and through your developer friends, do you find that it is common for companies to have bad development environments, and if so: Why do you think it's common? What do you think could be done to fix it, as a developer, as a manager, as an industry? Do you think it's improving?

    Read the article

  • Did "sudo dconf reset -f /org/compiz". Now ccsm settings ignored

    - by bshanks
    I executed: sudo dconf reset -f /org/compiz Now changing settings in CompizConfig Settings Manager (ccsm) has no effect. For example, changing the number of desktops has no effect. I tried purging and reinstalling ccsm but it didn't help. Incidentally, where is the documentation for this sort of thing (which documentation specifies where Unity and Compiz store config settings?)? And where does dconf store things? And where is dconf's documentation? http://dag.wieers.com/home-made/dconf/dconf.1.html says nothing about the 'reset' command, and it says it stores things in /var/log/dconf, but nothing was there. Are there two things named 'dconf'? I would actually like to just put things back to where they were before I executed sudo dconf reset. I have a backup of my hard drive available, I just need to know which files to rollback. I tried rolling back the .config, .gconf, and .cache directories, to no avail. I'm using 12.04.

    Read the article

  • User Productivity Kit - Powerful Packages (Part 2)

    - by [email protected]
    In my first post on packages I described what a package is and how it can be used. I also started explaining some of the considerations that should be taken into account when determining how to arrange your packages. The first is when the files are interrelated and depend on one another, such as an HTML file and its graphics. A second consideration is how the files are used in your outlines. Let's say you're using a dozen Word doc files. You could place them all in a single package or put each Word doc file in a separate package, but what's the right thing to do? There are several factors that will influence your decision. To understand the first, let me explain a function of UPK publishing. Take an outline in UPK that has an attachment (concept, frame link, or hyperlink) that points to a file in a package. When you publish this outline, the publishing engine will determine that there is a link to a file in the package and copy the contents of the package to the publishing destination directory. This is done to ensure that any interrelated files are kept together. For the situation where you have an HTML file with links to a number of graphics files, this is a good thing. If, however, the package has a dozen unrelated Word doc files and you link to only one of them, all dozen Word documents will be copied to the publishing destination directory. Whether or not this is a good thing is dependent on two things. First, are all of the files in the package used in the outline that you're publishing? Take an outline that includes links to all of the Word documents in that dozen document package I described earlier. For this situation, you may choose to keep all the files in a single package for convenience. A second consideration is how your organization leverages reuse in UPK. In this context, I'm referring to the link style of reuse such as when you link to the same topic from multiple UPK outlines and changes to the topic appear in both places. Take an example where you have the earlier mentioned dozen Word document package and an outline with a dozen topics in it. Each topic has an attachment pointing to one of the Word documents in the package (frame link, concept, etc.). If you're only publishing this outline, the single package probably works fine, but what if you're reusing one of these topics in another outline? As I explained earlier, linking to one file in the package will result in all files in the package being copied to your published output. In this example, linking to one topic in the first outline will result in all dozen Word documents being copied to the published output. This may result in files in the output that you don't want there for business or size reasons. This is a situation in which you should consider placing each of the Word documents in its own separate package. With each document in its own package, that link to a single document will result in only that single package and single Word document being copied to the published output. In my last post I described that packages are documents in the UPK library. When using the multi-user version of the UPK Developer you can leverage standard library capabilities for managing the files in these packages during the development process - capabilities such as check in / check out, history, etc. When structuring your packages take into consideration how the authors are going to be adding, modifying and deleting files from the packages. A single package is a single document in the UPK library. 
Like any other document in the library, only a single user at a time can check out and edit the package. If you have a large number of files in a single package and these must be modified by many users, you need to consider whether this will cause problems as multiple users compete to update the same package. If the files don't depend on each other, consider placing the files in separate packages to reduce contention. I hope you've enjoyed these two posts on how you can leverage the power of packages in your content. In summary, consider the following when structuring your packages: Is the asset a single, standalone file or a set of files that depend on each other? Will all the files always be used together in a single outline or may only some of the files be needed based on how the content is reused across multiple outlines? Will multiple developers need to update the files in a single package or should you break it into multiple packages to reduce contention when checking out the document? We'd like to hear from you on how you're using packages in your content. Please add your comments below! Thank you and I hope these two posts have given you additional insights into how to use packages in your content and structure them for efficient use. John Zaums Senior Director, Product Development Oracle User Productivity Kit

    Read the article

  • Looking for enterprise web application design inspiration [closed]

    - by Farshid
    I've checked many websites for inspiration on what the look and feel of a serious enterprise web application should be. But all of the ones I saw were designed for single users, not serious corporate users. The designs I saw were mostly happy-colored and looked like they were developed by a small team of eager, young, passionate developers. What I'm looking for are showcases of serious web app designs (e.g. web apps developed by large corporations) that are built to be used by a large number of corporate users. Can you suggest sources for this kind of inspiration?

    Read the article

  • Strategy to find bottleneck in a network

    - by Simone
    Our enterprise is having some problems when the number of incoming requests goes beyond a certain amount. To make things simpler, we have N websites that use, amongst other things, a local web service. This service is hosted by IIS, and it's a .NET 4.0 (C#) application executed in a farm. It's REST-oriented, built around OpenRasta. As already mentioned, by stress testing it with JMeter, we've found that beyond a certain amount of requests the service's performance drops. Anyway, this service is, amongst other things, itself a client of 3 other distinct web services and also a client of a DB server, so it's not very clear what the real culprit of this abrupt decay is. In turn, these 3 other web services are installed in our farm too, and are clients of other DB servers (and services, possibly, that are out of my team's control). What strategy do you suggest to try to locate where the bottleneck(s) are? Do you have any high-level suggestions?
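    One low-tech tactic that complements profiling here: wrap every downstream dependency call (the 3 web services and the DB) with timing so that, under the JMeter load, the slow hop stands out in the logs. A rough C# sketch that runs on .NET 4.0 (the class name and the logging sink are placeholders, not part of the original service):

        using System;
        using System.Diagnostics;

        public static class DependencyTimer
        {
            // Wraps a downstream call (web service, DB, cache) and records its duration,
            // so the slowest dependency stands out when the farm is under stress.
            public static T Measure<T>(string dependencyName, Func<T> call)
            {
                var stopwatch = Stopwatch.StartNew();
                try
                {
                    return call();
                }
                finally
                {
                    stopwatch.Stop();
                    // Swap in your real logging or metrics sink (log4net, perf counters, etc.).
                    Trace.WriteLine(dependencyName + " took " + stopwatch.ElapsedMilliseconds + " ms");
                }
            }
        }

        // Usage (names are hypothetical):
        // var invoice = DependencyTimer.Measure("BillingService", () => billingClient.GetInvoice(id));

    Correlating those timings with the JMeter ramp-up usually narrows the search to one dependency (or to the service's own thread pool and queueing) before any deeper tooling is needed.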

    Read the article

  • HTG Explains: What The Windows Event Viewer Is and How You Can Use It

    - by Chris Hoffman
    The Windows Event Viewer shows a log of application and system messages – errors, information messages, and warnings. Scammers have used the Event Viewer to deceive people – even a properly functioning system will have error messages here. In one infamous scam, a person claiming to be from Microsoft phones someone up and instructs them to open the Event Viewer. The person is sure to see error messages here, and the scammer will ask for the person’s credit card number to fix them. As a rule of thumb, you can generally ignore all of the errors and warnings that appear in the Event Viewer – assuming your computer is working properly.

    Read the article

  • How to Secure a Data Role by Multiple Business Units

    - by Elie Wazen
    In this post we will see how a Role can be data secured by multiple Business Units (BUs). Separate Data Roles are generally created for each BU if a corresponding data template generates roles on the basis of the BU dimension. The advantage of creating a policy with a rule that includes multiple BUs is that while mapping these roles in HCM Role Provisioning Rules, fewer entries need to be made. This could facilitate maintenance for enterprises with a large number of Business Units. Note: The example below applies as well if the securing entity is Inventory Organization. Let us take for example the case of a user provisioned with the "Accounts Payable Manager - Vision Operations" Data Role in Fusion Applications. This user will be able to access Invoices in Vision Operations but will not be able to see Invoices in Vision Germany. Figure 1. A User with a Data Role restricting them to Data from BU: Vision Operations. With the role granted above, this is what the user will see when they attempt to select Business Units while searching for AP Invoices. Figure 2. The List of Values of Business Units is limited to a single one. This is the effect of the Data Role granted to that user, as can be seen in Figure 1. In order to create a data role that secures by multiple BUs, we need to start by creating a condition that groups those Business Units we want to include in that data role. This is accomplished by creating a new condition against the BU View. That Condition will later be used to create a data policy for our newly created Role. The BU View is a Database resource and is accessed from APM as seen in the search below. Figure 3. Viewing a Database Resource in APM. The next step is to create a new condition, in which we define a SQL predicate that includes 2 BUs (the IDs below refer to Vision Operations and Vision Germany). At this point we have simply created a standalone condition. We have not used this condition yet, and security is therefore not affected. Figure 4. Custom Role that inherits the Purchase Order Overview Duty. We are now ready to create our Data Policy. In APM, we search for our newly created Role and navigate to “Find Global Policies”. We query the Role we want to secure and navigate to view its global policies. Figure 5. The Job Role we plan on securing. We can see that the role was not defined with a Data Policy, so we will create one that uses the condition we created earlier. Figure 6. Creating a New Data Policy. In the General Information tab, we have to specify the DB Resource that the Security Policy applies to: in our case this is the BU View. Figure 7. Data Policy Definition - Selection of the DB Resource we will secure by. In the Rules tab, we make the rule applicable to multiple values of the DB Resource we selected in the previous tab. This is where we associate the condition we created against the BU view with this data policy, by entering the Condition name in the Condition field. Figure 8. Data Policy Rule. The last step of defining the Data Policy consists of explicitly selecting the Actions that are governed by this Data Policy. In this case, for example, we select the Actions displayed below in the right pane. Once the record is saved, we are ready to use our newly secured Data Role. Figure 9. Data Policy Actions. We can now see a new Data Policy associated with our Role. Figure 10. The Role is now secured by a Data Policy. We now assign that new Role to the User.
Of course, this does not have to be done in OIM and can be done using a Provisioning Rule in HCM. Figure 11. Role assigned to the User who previously was granted the Vision Ops secured role. Once that user accesses the Invoices Workarea, this is what they see: in the image below the LOV of Business Unit returns the two values defined in our data policy, namely Vision Operations and Vision Germany. Figure 12. The List of Values of Business Units now includes the two we included in our data policy. This is the effect of the data role granted to that user, as can be seen in Figure 11.

    Read the article

  • Getting started with Oracle Database In-Memory Part III - Querying The IM Column Store

    - by Maria Colgan
    In my previous blog posts, I described how to install, enable, and populate the In-Memory column store (IM column store). This week's post focuses on how data is accessed within the IM column store. Let’s take a simple query: “What is the most expensive air-mail order we have received to date?” SELECT Max(lo_ordtotalprice) most_expensive_order FROM lineorder WHERE lo_shipmode = 5; The LINEORDER table has been populated into the IM column store and since we have no alternative access paths (indexes or views) the execution plan for this query is a full table scan of the LINEORDER table. You will notice that the execution plan has a new set of keywords “IN MEMORY" in the access method description in the Operation column. These keywords indicate that the LINEORDER table has been marked for INMEMORY and we may use the IM column store in this query. What do I mean by “may use”? There are a small number of cases where we won’t use the IM column store even though the object has been marked INMEMORY. This is similar to how the keyword STORAGE is used on Exadata environments. You can confirm that the IM column store was actually used by examining the session level statistics, but more on that later. For now let's focus on how the data is accessed in the IM column store and why it’s faster to access the data in the new column format, for analytical queries, rather than the buffer cache. There are four main reasons why accessing the data in the IM column store is more efficient. 1. Access only the column data needed The IM column store only has to scan two columns – lo_shipmode and lo_ordtotalprice – to execute this query while the traditional row store or buffer cache has to scan all of the columns in each row of the LINEORDER table until it reaches both the lo_shipmode and the lo_ordtotalprice columns. 2. Scan and filter data in its compressed format When data is populated into the IM column store it is automatically compressed using a new set of compression algorithms that allow WHERE clause predicates to be applied against the compressed formats. This means the volume of data scanned in the IM column store for our query will be far less than the same query in the buffer cache where it will scan the data in its uncompressed form, which could be 20X larger. 3. Prune out any unnecessary data within each column The fastest read you can execute is the read you don’t do. In the IM column store a further reduction in the amount of data accessed is possible due to the In-Memory Storage Indexes (IM storage indexes) that are automatically created and maintained on each of the columns in the IM column store. IM storage indexes allow data pruning to occur based on the filter predicates supplied in a SQL statement. An IM storage index keeps track of minimum and maximum values for each column in each In-Memory Compression Unit (IMCU). In our query the WHERE clause predicate is on the lo_shipmode column. The IM storage index on the lo_shipmode column is examined to determine if our specified column value 5 exists in any IMCU by comparing the value 5 to the minimum and maximum values maintained in the Storage Index. If the value 5 is outside the minimum and maximum range for an IMCU, the scan of that IMCU is avoided. For the IMCUs where the value 5 does fall within the min, max range, an additional level of data pruning is possible via the metadata dictionary created when dictionary-based compression is used on an IMCU. The dictionary contains a list of the unique column values within the IMCU. 
Since we have an equality predicate, we can easily determine if 5 is one of the distinct column values or not. The combination of the IM storage index and dictionary-based pruning enables us to scan only the necessary IMCUs. 4. Use SIMD to apply filter predicates For the IMCUs that need to be scanned, Oracle takes advantage of SIMD vector processing (Single Instruction processing Multiple Data values). Instead of evaluating each entry in the column one at a time, SIMD vector processing allows a set of column values to be evaluated together in a single CPU instruction. The column format used in the IM column store has been specifically designed to maximize the number of column entries that can be loaded into the vector registers on the CPU and evaluated in a single CPU instruction. SIMD vector processing enables Oracle Database In-Memory to scan billions of rows per second per core versus the millions of rows per second per core scan rate that can be achieved in the buffer cache. I mentioned earlier in this post that in order to confirm the IM column store was used, we need to examine the session level statistics. You can monitor the session level statistics by querying the performance views v$mystat and v$statname. All of the statistics related to the In-Memory Column Store begin with IM. You can see the full list of these statistics by typing: column display_name format a30 SELECT display_name FROM v$statname WHERE display_name LIKE 'IM%'; If we check the session statistics after we execute our query, the results would be as follows: SELECT Max(lo_ordtotalprice) most_expensive_order FROM lineorder WHERE lo_shipmode = 5; SELECT display_name FROM v$statname WHERE display_name IN ('IM scan CUs columns accessed', 'IM scan segments minmax eligible', 'IM scan CUs pruned'); As you can see, only 2 IMCUs were accessed during the scan, as the majority of the IMCUs (44) in the LINEORDER table were pruned out thanks to the storage index on the lo_shipmode column. In next week's post I will describe how you can control which queries use the IM column store and which don't. +Maria Colgan

    Read the article

  • Google I/O 2010 - Geospatial apps for desktop and mobile

    Google I/O 2010 - Geospatial apps for desktop and mobile Google I/O 2010 - Map once, map anywhere: Developing geospatial applications for both desktop and mobile Geo 201 Mano Marks As the number of desktop and mobile platforms proliferates, the cost of developing and maintaining multiple versions of an application continues to increase. This session illustrates how the JS Maps API can be used to simplify cross-platform geospatial application development by enabling a single implementation to be shared across multiple platforms, while maintaining a native look and feel. For all I/O 2010 sessions, please go to code.google.com From: GoogleDevelopers Views: 8 | 0 ratings | Time: 01:00:58 | More in Science & Technology

    Read the article

  • T-SQL Tuesday #005: On Technical Reporting

    - by Adam Machanic
    Reports. They're supposed to look nice. They're supposed to be a method by which people can get vital information into their heads. And that's obvious, right? So obvious that you're undoubtedly getting ready to close this tab and go find something better to do with your life. "Why is Adam wasting my time with this garbage?" Because apparently, it's not obvious. In the world of reporting we have a number of different types of reports: business reports, status reports, analytical reports, dashboards,...(read more)

    Read the article

  • SQLAuthority News – What’s New in SQL Server “Denali”

    - by pinaldave
    I was doing SQL Server Advanced Training in Bangalore today, and a few attendees asked me if I could give them a review of SQL Server Denali. I had not downloaded Denali onto my work computer, so I could not demonstrate it. However, I promised to blog about it with additional details the very next day. Denali is also known as SQL 11, and its compatibility mode number is 110. Here are a few details about it. What is new in SQL Server “Denali” Download CTP1 Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: Pinal Dave, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, SQLAuthority News, T SQL, Technology Tagged: Denali

    Read the article
