Search Results

Search found 2214 results on 89 pages for 'significant figures'.


  • Why does googling keycaptcha give results about reCAPTCHA? [closed]

    - by vgv8
    EDIT: I'd like to change this title to: How do I STOP Google's manipulation of the search results it presents to the general public? I google frequently, and more and more often, when I search for one software product, I am given results about Google's own products instead. For example, if I google the keyword keycaptcha for the "Past 24 hours" (after clicking "Show search tools" -- "Past 24 hours" in the browser's left sidebar), the search results show only results about reCAPTCHA. Image uploaded later. However, if I enclose keycaptcha in quotes, the results are "correct" (well, kind of, since they are still distorted in comparison with other search engines). I checked this over a few months from different domains, at different ISPs, on different operating systems, and from a dozen browsers. The results are the same. Why is this, and how can it possibly be corrected? My related posts: "How Gmail spam filter works?" and "IP addresses blacklisting".

    Update: It is impossible for me to use google.com directly, because I am always redirected from google.com to google.ru by Google's "auto-detect location by IP address" convenience. Google's help says that it is impossible to switch off location auto-detection because it is a very helpful feature. There is a work-around, google.com/ncr, to prevent redirection from google.com (does anybody know what that stands for?), but all results are exactly the same. OK, I can search with a quoted "keycaptcha"; I am already accustomed to these Google quirks. But the question arises: why the heck burn time promoting someone's product if GOOGLE uses other product brands to show its own interests/brands (reCAPTCHA) instead, and what can be done about it? The general user will not understand that he was cheated and will just pick up the first (wrong) results.

    Update 2: Note that this behaviour is independent of whether I am logged into (or out of) a Google account, which account, which browser (I tried Opera, Chrome, Firefox, IE of different versions, Safari), which OS, or even which domain. There are many such cases, but I targeted one concrete, restricted example specifically to prevent wandering among unrelated details and peculiarities. @Michael, first, that is not true, and this text contains 2 links to real and significant results. I also wrote that this is just one concrete example out of many, based on many months of experience. These distortions happen upon clicking Past 24 hours, Past week, Past month, or Past year, with many other keywords and in other search configurations. Second, the absence of results is itself a result, and there is no point in sneakily substituting another, unsolicited one for it; that is the definition of spam and scam. Third, the question is not about workarounds such as how to write search queries or which other search engines to use. The question is how to straighten out Google's results so they stop disorienting the general public.

    Update: I could not understand: can nobody reproduce the behaviour I described (i.e., when I click the "Past 24 hours" link in a Google search for keycaptcha, the results presented are only about reCAPTCHA)?

    Update: And for the "Past week":

    Read the article

  • BI&EPM in Focus November 2012

    - by Mike.Hallett(at)Oracle-BI&EPM
    Customers
    ·       San Diego Unified School District Harnesses Attendance, Procurement, and Operational Data with Oracle Exalytics, Generating $4.4 Million in Savings
    ·       NilsonGroup chooses Oracle Exalytics In-Memory Machine as their solution to access critical data to keep its stores competitive with real-time Mobile BI: Video
    ·       Nykredit, in the Danish Financial Sector, describes their experiences from testing the Exalytics Business Intelligence Machine: Video
    ·       Sodexo chose Oracle Exalytics as their business analytics platform: Video
    ·       AstraZeneca (US, Canada, MedImmune) Improves Insight, Analytics, and Reporting, Enterprisewide with Unified Planning on a Single Platform
    ·       Experian Consolidates Reporting Systems for One, Global View of Financial Data and Improves Planning for Continued Growth
    ·       Munchkin Gives its Line of Children’s Products Plenty of Room to Grow in an Upgraded Enterprise Application Environment
    ·       Top 20 EPM Customer Snapshots, in One Handy Booklet (link)
    ·       Customer and Partner Successes: Link to Complete Archive
    Enterprise Performance Management
    ·       Nov 15: Is Hope and Email the Core of your Reconciliation Process? (link)
    ·       Replay: Integrated Business Planning, Featuring Leggett & Platt (link)
    ·       Whitepaper: The New Competitive Advantage - Strategic CIO's Embrace the Cloud (link)
    ·       Press: Oracle Enterprise Performance Management Driving Significant Improvements in Budget Management and Reporting for Public Sector Organizations (link)
    ·       Enterprise Performance Management Video Feature Overviews, Now Available on YouTube (link)
    ·       NEW Solution Brief - Oracle Hyperion Planning on the Oracle Exalytics In-Memory Machine (link)
    ·       For Insurance sector: Datasheet for new release V2.0 - Oracle Quantitative Management and Reporting for Solvency II (link)
    ·       Whitepaper FSN 2012: Managing Risk and Uncertainty, an Executive's Guide to Integrated Business Planning (link)
    ·       NEW Datasheet for Oracle Planning and Budgeting Cloud Service (link)
    ·       Blog: Planning in the Cloud - For Real
    Business Intelligence
    ·       Press: Latest Release of Oracle Exalytics In-Memory Machine Software Enables Customers to View and Analyze Data at the Speed of Business (link)
    ·       Press: New Release (11.1.1.6.2BP1) of Oracle Business Intelligence Enables Users to Quickly Access and Analyze Key Business Information, Anytime, Anywhere (link)
    ·       Mark Hurd Interviewed on USA Today about Big Data & Analytics (link)
    ·       Whitepaper: Mastering Big Data - CFO Strategies to Transform Insight into Opportunity (link)
    ·       Nov 15: Improve Asset Utilization. Achieve Greater Profitability: Oracle Enterprise Asset Management Analytics (link)
    ·       Replay: Oracle Enterprise Asset Mgmt Analytics and Oracle Manufacturing Analytics (link)
    ·       Overload to Impact: An Industry Scorecard on Big Data Business Challenges (link)
    ·       Webcast Replay: Overview of Oracle Endeca Informational Discovery (link)
    ·       OBIEE 11g: Required and Recommended Patches and Patch Sets (link)

    Read the article

  • New Version 3.1 Endeca Information Discovery Now Available

    - by Mike.Hallett(at)Oracle-BI&EPM
    Business User Self-Service Data Mash-up Analysis and Discovery integrated with OBI11g and Hadoop

    Oracle Endeca Information Discovery 3.1 (OEID) is a major release that incorporates significant new self-service discovery capabilities for business users, including agile data mashup, extended support for unstructured analytics, and an even tighter integration with Oracle BI.

    · Self-Service Data Mashup and Discovery Dashboards: business users can combine information from multiple sources, including their own uploaded spreadsheets, to conduct analysis on the complete set. Creating discovery dashboards has been made even easier by intuitive drag-and-drop layouts and wizard-based configuration. Business users can now build new discovery applications in minutes, without depending on IT.
    · Enhanced Integration with Oracle BI: OEID 3.1 enhances its native integration with Oracle Business Intelligence Foundation. Business users can now incorporate information from trusted BI warehouses, leveraging dimensions and attributes defined in Oracle’s Common Enterprise Information Model, but evolve them based on the varying day-to-day demands and requirements that they personally manage.
    · Deep Unstructured Analysis: business users can gain new insights from a wide variety of enterprise and public sources, helping companies to build an actionable Big Data strategy. With OEID’s long-standing differentiation in correlating unstructured information with structured data, business users can now perform their own text mining to identify hidden concepts, without having to request support from IT. They can augment these insights with best-in-class keyword search and pattern matching, all in the context of rich, interactive visualizations and analytic summaries.
    · Enterprise-Class Self-Service Discovery: OEID 3.1 enables IT to provide a powerful self-service platform to the business as part of a broader Business Analytics strategy, preserving the value of existing investments in data quality, governance, and security. Business users can take advantage of IT-curated information to drive discovery across high volumes and varieties of data, and share insights with colleagues at a moment's notice.
    · Harvest Content from the Web with the Endeca Web Acquisition Toolkit: Oracle now provides best-of-breed data access to website content through the Oracle Endeca Web Acquisition Toolkit. This provides an agile, graphical interface for developers to rapidly access and integrate any information exposed through a web front-end. Organizations can now cost-effectively include content from consumer sites, industry forums, government or supplier portals, cloud applications, and myriad other web sources as part of their overall strategy for data discovery and unstructured analytics.

    For more information:
    · OEID 3.1 OTN Software and Documentation Download
    · Endeca available for download on Software Delivery Cloud (eDelivery)
    · New OEID 3.1 Videos on YouTube
    · Oracle.com Endeca Site

    Read the article

  • concurrency::extent<N> from amp.h

    - by Daniel Moth
    Overview

    We saw in a previous post how index<N> represents a point in N-dimensional space, and in this post we'll see how to define the N-dimensional space itself. With C++ AMP, an N-dimensional space can be specified with the template class extent<N>, where you define the size of each dimension. From a look and feel perspective, you'd expect the programmatic interface of a point type and a size type to be similar (even though the concepts are different). Indeed, exactly like index<N>, extent<N> is essentially a coordinate vector of N integers ordered from most- to least-significant, BUT each integer represents the size for that dimension (and hence cannot be negative).

    So, if you read the description of index, you won't be surprised by the description of extent<N> below. There is the rank field returning the value of N you passed as the template parameter. You can construct one extent from another (via the copy constructor or the assignment operator), you can construct it by passing an integer array, or via convenience constructor overloads for 1-, 2-, and 3-dimension extents. Note that the parameterless constructor creates an extent of the specified rank with all bounds initialized to 0. You can access the components of the extent through the subscript operator (passing it an integer). You can perform some arithmetic operations between extent objects through operator overloading, i.e. ==, !=, +=, -=, +, -. There are operator overloads so that you can perform operations between an extent and an integer: -- (pre- and post-decrement), ++ (pre- and post-increment), %=, *=, /=, +=, -=. Finally, there are additional overloads for plus and minus (+, -) between extent<N> and index<N> objects, returning a new extent object as the result. In addition to the usual suspects, extent offers a contains function that tests if an index is within the bounds of the extent (assuming an origin of zero). It also has a size function that returns the total linear size of this extent<N> in units of elements.

    Example code

        extent<2> e(3, 4);
        _ASSERT(e.rank == 2);
        _ASSERT(e.size() == 3 * 4);
        e += 3;
        e[1] += 6;
        e = e + index<2>(3,-4);
        _ASSERT(e == extent<2>(9, 9));
        _ASSERT( e.contains(index<2>(8, 8)));
        _ASSERT(!e.contains(index<2>(8, 9)));

    grid<N>

    Our upcoming pre-release bits also have a type similar to extent, grid<N>. The way you create a grid is by passing it an extent, e.g.

        extent<3> e(4,2,6);
        grid<3> g(e);

    I am not going to dive deeper into grid; suffice it for now to think of grid<N> simply as an alias for the extent<N> object, one that you create when you encounter a function that expects a grid object instead of an extent object.

    Usage

    The extent class on its own simply defines the size of the N-dimensional space. We'll see in future posts that when you create containers (arrays) and wrappers (array_views) for your data, it is an extent<N> object that you'll need to use to create those (and an index<N> object to index into them). We'll also see that it is a grid<N> object that you pass to the new parallel_for_each function that I'll cover in the next post.

    Comments about this post by Daniel Moth are welcome at the original blog.

    Read the article

  • Faster Trip to Innovation with Simplified Data Integration: Sabre Holdings Case Study

    - by Tanu Sood
    Author: Irem Radzik, Director of Product Marketing, Data Integration, Oracle

    In today’s fast-paced, competitive environment, IT teams are under pressure to deliver technology solutions for many critical business initiatives as fast as possible. When the focus is on speed, it can be easy to continue to use old-style, point-to-point custom scripts that grow organically to the point where they are unmanageable and too costly to maintain. As data volumes, data sources, and end users grow, uncoordinated data integration efforts create significant inefficiencies for both IT and business users. In addition to losing IT productivity due to maintaining a spaghetti architecture, data integrity becomes a concern as well. Errors caused by inconsistent data and manual data entry can prove very costly for companies and disrupt business activities. Many industry leaders now recognize that data should be moved in an automated and reliable manner across all platforms to have one version of the truth. By simplifying their data integration architecture and standardizing on a centralized approach, IT teams can accelerate time to market. In particular, using a centralized, shared-service approach brings agility, increases IT productivity, and frees up resources for innovation.

    One such industry leader that simplified its data integration architecture is Sabre Holdings. Sabre Holdings provides distribution and technology solutions for the travel industry, and is a winner of Oracle Excellence Awards for Fusion Middleware in 2011 in the data integration category. I had the pleasure to host Sabre Holdings on a public webcast and discuss their data integration best practices for data warehousing. In this webcast, Sabre’s Amjad Saeed presented how the company reduced complexity by consolidating systems and standardizing development on Oracle Data Integrator and Oracle GoldenGate for its global data warehouse development team. With Oracle’s complete real-time data integration solution, Sabre also streamlined support and maintenance operations, achieved a real-time view of the execution of the integration processes, and can manage the data warehouse and business intelligence solution performance on demand. By reducing complexity and leveraging timely market insights, the company was able to decrease time to market by 40%.

    You can now listen to the webcast on demand: Sabre Holdings Case Study: Accelerating Innovation using Oracle Data Integration. I invite you to hear directly from Sabre how to use advanced data integration capabilities to enable accelerated innovation. To learn more about Oracle’s data integration offering you can download our free resources.

    Read the article

  • Get More From Your Service Request

    - by Get Proactive Customer Adoption Team
    Leveraging Service Request Best Practices

    Use best practices to get there faster. In the daily conversations I have with customers, they sometimes express frustration over their Service Requests. They often feel powerless to make needed changes, so their sense of frustration grows. To help you avoid some of the frustration you might feel in dealing with your Service Requests (SR), here are a few pointers that come from our best practice discussions.

    Be proactive. If you can anticipate some of the questions that Support will ask, or the information they may need, try to provide this up front, when you log the SR. This could be output from the Remote Diagnostic Agent (RDA), if this is a database issue, or the output from another diagnostic tool, if you’re an EBS customer. Any information you can supply that helps us understand the situation better, helps us resolve the issue sooner. As you use some of these tools proactively, you might even find the solution to the problem before you log an SR!

    Be right. Make sure you have the correct severity level. Since you select the initial severity level, it’s easy to accept the default without considering how significant this may be. Business impact is the driving factor, so make sure you take a moment to select the severity level that is appropriate to the situation. Also, make sure you ask us to change the severity level, should the situation dictate.

    Be responsive! If this is an important issue to you, quickly follow up on any action plan submitted to you by Oracle Support. The support engineer assigned to your Service Request will be able to move the issue forward more aggressively when they have the needed information. This is crucial in resolving your issues in a timely manner.

    Be thorough. If there are five questions in the action plan, make sure you provide an answer for all five questions in one response, rather than trickling them in one at a time. This will allow the engineer to look at all of the information as a whole and to avoid multiple trips to your SR, saving valuable time and getting you a resolution sooner.

    Be your own advocate! You know your situation best; make sure Oracle Support understands both how and why this issue is important to you and your company. Use the escalation process if you're concerned that your SR isn't going the right direction, the right pace, or through the right person. Don't wait until you're frustrated and angry. An escalation is as simple as a quick conversation on the phone and can be amazingly effective in getting your issues back on track. The support manager you speak with is empowered to make any needed changes.

    Be our partner. You can make your support experience better. When your SR has been resolved, you may receive a survey request. This is intended to get your feedback about how your SR went and what we can do to improve your overall support experience.

    Oracle Support is here to help you. Our goal with any Service Request is to provide the best possible solution as quickly as possible. With your help, we’ll be able to do this with your Service Request too.

    Read the article

  • When things go awry

    - by Phil Factor
    The moment the Entrepreneur opened his mouth on prime-time national TV, spelled out the URL and waxed big on how exciting ‘his’ new website was, I knew I was in for a busy night. I’d designed and built it. All at once, half a million people tried to log into the website. Although all my stress-testing paid off, I have to admit that the network locked up tight long before there was any danger of a database or website problem. Soon afterwards, the Entrepreneur and the Big Boss were there in the autopsy meeting. We picked through all our systems in detail to see how they’d borne the unexpected strain. Mercifully, in view of the sour mood of the Big Boss, it turned out that the only thing we could have done better was buy a bigger pipe to and from the internet. We’d specified that ‘big pipe’ when designing the system. The Big Boss had then railed at the cost and so we’d subsequently compromised. I felt that my design decisions were vindicated. The Big Boss brooded for a while. Then he made the significant comment: “What really ****** me off is the fact that, for ten minutes, we couldn’t take people’s money.” At that point I stopped feeling smug. Had the internet connection been better, the system would have reached its limit and failed rather precipitously, and that wasn’t what he wanted. Then it occurred to me that what had gummed up the connection was all those images on the site, that had made it so impressive for the visitors. If there had been a way to automatically pare down the site to the bare essentials under stress… Hmm. I began to consider disaster-recovery in the broadest sense – maintaining a service in spite of unusual or unexpected events. What he said makes a lot of sense: sacrifice whatever isn’t essential to keep the core service running when we approach the capacity limits. Maybe in IT we should borrow (or revive) the business concept of the ‘Skeleton service’, maintaining only the priority parts under stress, using a process that is well-prepared and carefully rehearsed. How might this work? Whatever the event we have to prepare for, it is all about understanding the priorities; knowing what one can dispense with when the going gets tough. In the event of database disaster, it’s much faster to deploy a skeletal system with only the essential data than to restore the entire system, though there would have to be a reconciliation process to update the revived database retrospectively, once the emergency was over. It isn’t just the database that could be designed for resilience. One could prepare for unusually high traffic in a website by designing a system that degraded gradually to a ‘skeletal’ site, one that maintained the commercial essentials without fat images, JavaScript libraries and razzmatazz. This is all what the Big Boss scathingly called ‘a mere technicality’. It seems to me that what is needed first is a culture of application and database design which acknowledges that we live in a very imperfect world, and react accordingly when things go awry.

    Read the article

  • D'Arcy's Book Club - The New Strategic Selling

    - by D'Arcy Lussier
    The New Strategic Selling Miller and Heiman Amazon.ca Amazon.com Chapters Everybody is a salesman. Every day, without knowing it, we sell something to someone. Now, the typical vision people think of when they hear the word “sales” is the sleazy used car salesperson who does whatever they can to get you to buy the clunker on their lot. But selling is not an action tied to money and products. Selling is about convincing people to see your point of view and act on it. If you want your company to cover a trip to a conference, you may have to sell the idea to your boss. If you want to buy that new big screen TV, you have to sell the idea to your significant other. If you want to go on a weekend fishing trip with the boys you might be called in to help sell the idea to your buddy's wife. We all sell, but we don’t all sell very well. So enter The New Strategic Selling, a book based on the sales course put on by the Miller-Heiman group. In fact, this isn’t really a “New” strategy to selling as it's been around for a number of years. But the concepts they present, the ideas about selling, these are still very radical based on what most of us have experienced. Gone are the high pressure, win-at-all-costs, Glengarry Glen Ross style of sales…instead the book presents a framework to switch to need-based selling. It’s the idea that instead of going in raving about a product or service, you build a relationship where the buyer expresses what their needs are and your response is to present a solution that best fits that need. Instead of focussing on the amount of money you can squeeze out of a client, you focus on whether everyone wins, that they receive win-results from the engagement, that repeat business is developed over time delivering value over and over again. The great thing about the book is that what it teaches…things like how to identify different buying influencers, how to prepare for meetings, techniques to solicit information about what the buyer is really thinking/feeling…these things are entirely applicable in *any* situation that you need to sell to someone…and remember: selling is convincing people to see your point of view and act on it. So that new big screen TV you want to buy but need to convince your wife on? This book can help you. That training opportunity you want your company to send you on? This book can help you. The upgrade to your community park that you want to lobby the local civic authorities for? This book can help you. The book is a bit wordy. I found that the length could have been reduced and the points still have gotten across. That’s really the only knock that I have though; the insight that it provides is so worthwhile that having to chew through extra words is well worth it. You definitely don’t have to be a professional salesperson to benefit from this book. Rating: 4/5

    Read the article

  • Consolidation in a Database Cloud

    - by B R Clouse
    Consolidation of multiple databases onto a shared infrastructure is the next step after Standardization. The potential consolidation density is a function of the extent to which the infrastructure is shared. The three models provide increasing degrees of sharing:

    · Server: each database is deployed in a dedicated VM. Hardware is shared, but most of the software infrastructure is not. Standardization is often applied incompletely since operating environments can be moved as-is onto the shared platform. The potential for VM sprawl is an additional downside.
    · Database: multiple database instances are deployed on a shared software / hardware infrastructure. This model is very efficient and easily implemented with the features in the Oracle Database and supporting products. Many customers have moved to this model and achieved significant, measurable benefits.
    · Schema: multiple schemas are deployed within a single database instance. The most efficient model, it places constraints on the environment. Usually this model will be implemented only by customers deploying their own applications. (Note that a single deployment can combine Database and Schema consolidations.)

    Customer value: lower costs, better system utilization
    In this phase of the maturity model, under-utilized hardware can be used to host more workloads, or retired and those workloads migrated to consolidation platforms. Customers benefit from higher utilization of the hardware resources, resulting in reduced data center floor space, and lower power and cooling costs. And, the OpEx savings from Standardization are multiplied, since there are fewer physical components (both hardware and software) to manage.

    Customer value: higher productivity
    The OpEx benefits from Standardization are compounded since not only are there fewer types of things to manage, now there are fewer entities to manage. In this phase, customers discover that their IT staff has time to move away from "day-to-day" tasks and start investing in higher value activities. Database users benefit from consolidating onto shared infrastructures by relieving themselves of the requirement to maintain their own dedicated servers. Also, if the shared infrastructure offers capabilities such as High Availability / Disaster Recovery, which are often beyond the budget and skillset of a standalone database environment, then moving to the consolidation platform can provide access to those capabilities, resulting in less downtime.

    Capabilities / Characteristics
    In this phase, customers will typically deploy fixed-size clusters and consolidate on a cluster until that cluster is deemed "full," at which point a new cluster is built. Customers will define one or a few cluster architectures that are used wherever possible; occasionally there may be deployments which must be handled as exceptions. The "full" policy may be based on the number of databases deployed on the cluster, or observed peak workload, etc. IT will own the provisioning of new databases on a cluster, making the decision of when and where to place new workloads. Resources may be managed dynamically, e.g., as a priority workload increases, it may be given more CPU and memory to handle the spike. Users will be charged at a fixed, relatively coarse level; or in some cases, no charging will be applied.

    Activities / Tasks
    Oracle offers several tools to plan a successful consolidation. Real Application Testing (RAT) has a feature to help plan and validate database consolidations. Enterprise Manager 12c's Cloud Management Pack for Database includes a planning module. Looking ahead, customers should start planning for the Services phase by defining the Service Catalog that will be made available for database services.

    Read the article

  • Rethinking Oracle Optimizer Statistics for P6 Part 2

    - by Brian Diehl
    In the previous post (Part 1), I tried to draw some key insights about the relationship between P6 and Oracle Optimizer Statistics. The first is that average cardinality has the greatest impact on query optimization and that the particular queries generated by P6 are more likely to use this average during calculations. The second is that these are statistics that are unlikely to change greatly over the life of the application. Ultimately, our goal is to get the best query optimization possible. Or is it?

    Stability
    No application administrator wants to get the call at 9am that their application users cannot get their work done because everything is running slow. This is a possibility with a regularly scheduled nightly collection of statistics. It may not just be slow performance, but a complete loss of service because one or more queries are optimized poorly. Ideally, this should not be the case. The database optimizer should make better decisions with more up-to-date data. Better statistics may give an incremental performance benefit. However, this benefit must be balanced against the potential cost of system downtime. It is stability that we ultimately desire, not absolute optimal performance. We do want the benefit from more accurate statistics and better query plans, but not at the risk of an unusable system. As a result, I've developed the following methodology around managing database statistics for the P6 database.

    1. No Automatic Re-Gathering - The daily, weekly, or other interval of statistic gathering is unlikely to be beneficial. Quite the opposite: it is more likely to cause problems.
    2. Smart Re-Gathering - The time to collect statistics is when things have changed significantly. For a new installation of P6, this happens more often because the data is growing from a few rows to thousands and more. But for a mature system, the data is not changing significantly from week to week. There are times to collect statistics: new releases of the application; changes in the underlying hardware or software versions (e.g., a new Oracle RDBMS version); when additional user groups are added, since the new groups may use the software in significantly different ways; and after significant changes in the data, which may be monthly, quarterly or yearly.
    3. Always Test - If you take away one thing from this post, it would be to always have a plan to test after changing statistics. In reality, statistics can be collected as often as you desire, provided there are tests in place to verify that performance is the same or better. These might be automated tests or simply a manual script of application functions.
    4. Have a Way Out - Never change the statistics without a way to return to the previous set. Think of the statistics as one part of the overall application code that also includes the source code--both application and RDBMS. It would be foolish to change to the new code without a way to get back to the previous version.

    In the final post, I will talk about the actual script I created for P6 PMDB and possible future direction for managing query performance.

    Read the article

  • ORA-4031 Troubleshooting Tool

    - by Takeyoshi Sasaki
    ORA-4031 is raised when a request for memory from the shared pool (or another SGA component) cannot be satisfied, and diagnosing the root cause from scratch can take considerable time. The ORA-4031 Troubleshooting Tool is a web-based tool provided through My Oracle Support that helps with exactly this; you can find it listed in the Diagnostic Tools Catalog on My Oracle Support.

    To use the tool, you supply diagnostic data from the database that hit the error, such as the alert log, an ADR incident package, or an AWR report (a STATSPACK report for 10g and earlier). The tool analyzes the input, points out likely causes of the ORA-4031, and suggests remedies; the same material is useful to attach if you go on to open an SR.

    As an example, for one ORA-4031 occurrence the tool reported two findings:

    1) "High Session_Cached_Cursor Setting Causing Excessive Consumption of Shared Pool" - a SESSION_CACHED_CURSORS value that is set too high can consume a large part of the shared pool.
    2) "Insufficient SGA Free Memory at Startup" - "This issue could occur if in the init.ora parameters of your Alert log, (shared_pool_size + large_pool_size + java_pool_size + db_keep_cache_size + streams_pool_size + db_cache_size) / sga_target is greater than 90%." That is, even when sga_target (or memory_target) is in use, if the minimum sizes of the individual SGA components add up to more than 90% of sga_target, the memory manager has little room left to grow the shared pool, and ORA-4031 becomes more likely.

    The corresponding diagnostic checks were:

    1) "In your Alert log, Look for parameters under 'System parameters with non-default values:'. If session_cached_cursor * 2000 / shared_pool_size is greater than 10%, then session_cached_cursors are consuming significant shared_pool_size."
    2) "In your Alert log, SGA Utilization (Sum of shared_pool_size, large_pool_size, java_pool_size, db_keep_cache_size, streams_pool_size and db_cache_size over sga_target) is 99%, which might be too high."

    And the suggested remedies:

    1) "Decrease the parameter SESSION_CACHED_CURSORS"
    2) "Reduce the minimum values for the dynamic SGA components to allow memory manager to make changes as needed"

    If you run into ORA-4031, try running the ORA-4031 Troubleshooting Tool against your diagnostic data before, or while, working the issue through an SR.
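    As a rough illustration of the two checks quoted above, here is a small Python sketch of the arithmetic involved. The parameter values are hypothetical stand-ins for whatever appears under "System parameters with non-default values:" in your own alert log; the 10% and 90% thresholds are the figures the tool quotes.

        # Hypothetical init.ora values (in bytes); substitute your own alert log values.
        params = {
            "shared_pool_size":   2 * 1024**3,
            "large_pool_size":    256 * 1024**2,
            "java_pool_size":     256 * 1024**2,
            "db_keep_cache_size": 0,
            "streams_pool_size":  128 * 1024**2,
            "db_cache_size":      4 * 1024**3,
        }
        sga_target = 8 * 1024**3
        session_cached_cursors = 1000

        # Check 1: session_cached_cursors * 2000 / shared_pool_size greater than 10%?
        cursor_ratio = session_cached_cursors * 2000 / params["shared_pool_size"]
        print(f"cursor cache consumption: {cursor_ratio:.1%}",
              "-> significant" if cursor_ratio > 0.10 else "-> ok")

        # Check 2: sum of the SGA component minimums / sga_target greater than 90%?
        sga_utilization = sum(params.values()) / sga_target
        print(f"SGA utilization: {sga_utilization:.1%}",
              "-> too high" if sga_utilization > 0.90 else "-> ok")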

    Read the article

  • Eclipse Helios on OS X Snow Leopard crashes frequently when editing certain PHP files

    - by William
    I use Eclipse Helios (Eclipse Platform: 3.6.0.I20100608-0911, Eclipse IDE for PHP Developers: 1.3.0.20100617-0520) all the time on OS X (Snow Leopard), and it seems I only run into trouble whenever I'm editing a PHP file that's part of the WordPress blogging framework. When I move my cursor to a variable or function name, that often triggers the beach ball of death. I suspect Eclipse is trying to look up that variable/function and for some reason that causes an endless loop. Sometimes it's not just variables or functions. Just today I was trying to replace all occurrences of a quoted string. Every time I clicked "Replace All", the program would freeze immediately after the string was replaced and the text cursor was moved to the replaced position. I think the moving of the text cursor is important, because I got the same result when I searched for the string (thus moving the cursor), but NOT when I searched for a nonexistent string. I tried disabling everything in my preferences related to marked occurrences, hovering, code assistance, etc. Nothing helps. I use Eclipse for all my projects, and I find that it's only WordPress projects where this happens. Here's my eclipse.ini file:

        -startup ../../../plugins/org.eclipse.equinox.launcher_1.1.0.v20100507.jar
        --launcher.library ../../../plugins/org.eclipse.equinox.launcher.cocoa.macosx_1.1.0.v20100503
        -product org.eclipse.epp.package.php.product
        --launcher.defaultAction openFile
        -showsplash org.eclipse.platform
        --launcher.XXMaxPermSize 512m
        --launcher.defaultAction openFile
        -vmargs
        -Dosgi.requiredJavaVersion=1.5
        -XstartOnFirstThread
        -Dorg.eclipse.swt.internal.carbon.smallFonts
        -XX:PermSize=128m
        -XX:MaxPermSize=128m
        -XX:MaxGCPauseMillis=10
        -XX:MaxHeapFreeRatio=70
        -XX:+UseConcMarkSweepGC
        -XX:+CMSIncrementalMode
        -XX:+CMSIncrementalPacing
        -XX:CompileThreshold=5
        -Xms128m
        -Xmx512m
        -Xss2m
        -Xdock:icon=../Resources/Eclipse.icns
        -XstartOnFirstThread
        -Dorg.eclipse.swt.internal.carbon.smallFonts
        -framework ../../../plugins/org.eclipse.osgi.services_3.2.100.v20100503.jar

    I have 4GB of RAM, so I don't know if the problem is I'm underutilizing my resources. Here's what I see over and over in the error log:

        !ENTRY org.eclipse.jface 2 0 2011-01-16 16:26:21.533
        !MESSAGE Keybinding conflicts occurred. They may interfere with normal accelerator operation.
        !SUBENTRY 1 org.eclipse.jface 2 0 2011-01-16 16:26:21.533
        !MESSAGE A conflict occurred for ALT+COMMAND+Q P:
        Binding(ALT+COMMAND+Q P, ParameterizedCommand(Command(org.eclipse.ui.views.showView,Show View, Shows a particular view, Category(org.eclipse.ui.category.views,Views,Commands for opening views,true), org.eclipse.ui.handlers.ShowViewHandler@2a46d1, [Lorg.eclipse.ui.internal.commands.Parameter;@18f50c2,,true), [Lorg.eclipse.core.commands.Parameterization;@1ff1855), org.eclipse.ui.defaultAcceleratorConfiguration, org.eclipse.ui.contexts.window,,cocoa,system)
        Binding(ALT+COMMAND+Q P, ParameterizedCommand(Command(org.eclipse.ui.views.showView,Show View, Shows a particular view, Category(org.eclipse.ui.category.views,Views,Commands for opening views,true), org.eclipse.ui.handlers.ShowViewHandler@2a46d1, [Lorg.eclipse.ui.internal.commands.Parameter;@18f50c2,,true), [Lorg.eclipse.core.commands.Parameterization;@96b40c), org.eclipse.ui.defaultAcceleratorConfiguration, org.eclipse.ui.contexts.window,,cocoa,system)

        !ENTRY org.eclipse.core.net 1 0 2011-01-16 16:26:22.217
        !MESSAGE System property http.proxyHost has been set to 127.0.0.1 by an external source. This value will be overwritten using the values from the preferences

        !ENTRY org.eclipse.core.net 1 0 2011-01-16 16:26:22.217
        !MESSAGE System property http.proxyPort has been set to 8888 by an external source. This value will be overwritten using the values from the preferences

        !ENTRY org.eclipse.core.net 1 0 2011-01-16 16:26:22.218
        !MESSAGE System property https.proxyHost has been set to 127.0.0.1 by an external source. This value will be overwritten using the values from the preferences

        !ENTRY org.eclipse.core.net 1 0 2011-01-16 16:26:22.219
        !MESSAGE System property https.proxyPort has been set to 8888 by an external source. This value will be overwritten using the values from the preferences

    I did some experimenting with the particular script that's giving me trouble. It's a hybrid of HTML and PHP, so Eclipse has to do both HTML and PHP validation. I wondered if the HTML validation had something to do with it, so I created a new file, copied the contents over, and messed with the doctype element. I found that if I replaced the well-formed XHTML 1.0 Strict doctype element with a generic doctype (as such: <!DOCTYPE html>), then I did not crash the program just by moving the cursor around. I set all HTML validation rules to "Ignore", but it still didn't solve my problems. For now, I'm just going to echo the doctype using PHP instead of entering it literally. That seems to prevent crashes.

    I notice that when I move the cursor around the document, Eclipse displays the "xpath" to my current location at the bottom of the screen. Sometimes there's a delay while it figures out my current path. Perhaps when it's validating against the Strict doctype, it has problems quickly calculating the xpath as I move the cursor around? Maybe it has a stack overflow that causes it to crash.

    Read the article

  • Obtaining MFC Feature Pack GUI elements in .NET WinForms

    - by Cody Gray
    The MFC Feature Pack (and VS 2010) adds out-of-the-box support for several "modern" GUI elements (such as MDI with tabbed documents, the ribbon, and a Visual Studio-style interface with docking panels). These are a boon to those of us that have to support legacy MFC-based applications and want to update their look-and-feel, and a sign that Microsoft has not completely abandoned unmanaged C++ development. However, with the push so strongly in favor of .NET, WinForms, and managed code (and for plenty of good reasons), there seems little reason to develop new applications in unmanaged C++/MFC. The question then becomes: how does one obtain these GUI elements in a WinForms application? Almost all of the add-ons and libraries I have found so far cost money, and introduce additional dependencies. I don't have a budget to buy third-party libraries, and the controls provided by Microsoft in MFC for free seem sufficient for our needs. But I still have reservations about learning MFC to develop a new application. Not only does the investment in time seem significant (by all accounts, MFC seems particularly difficult to learn, even for experienced .NET developers--although I am willing to try), but the question of MFC's lifespan is raised as well. Certainly, given the millions of lines of code and existing apps written in native C++, it will be around for some time, but the handwriting seems to be on the wall, so to speak, that it's no longer Microsoft's touted development platform. It seems like these features should be available by now in WinForms without the need for third-party add-ons, or devoting a lot of time and resources to custom-drawing EVERYTHING. Am I just missing something? I find very little online that compares these new features of MFC to what is available in WinForms, mainly because most everything written on MFC pre-dated its most recent update, before which it looked admittedly "dated," and with its other flaws, was hardly an appealing platform for new development. With the very recent release of VS 2010, we have a while to wait before WinForms gets updated again. What routes are you guys taking for applications whose customers demand a modern-looking UI on a budget?

    Read the article

  • Installing The ruby-gmail rubygem on Mac OS Snow Leopard

    - by johnnygoodman
    I'm working off these instructions: http://github.com/dcparker/ruby-gmail

    From the home directory I do a standard install and good stuff happens:

        Johnny-Goodmans-MacBook-Pro:gmail johnnygoodman$ sudo gem install ruby-gmail
        Successfully installed ruby-gmail-0.2.1
        1 gem installed
        Installing ri documentation for ruby-gmail-0.2.1...
        Installing RDoc documentation for ruby-gmail-0.2.1...

    I head over to my ~/www dir where I run scripts that include other rubygems successfully and create a gmail directory. I create a script that includes rubygems and gmail, but does nothing else:

        Johnny-Goodmans-MacBook-Pro:gmail johnnygoodman$ pwd
        /Users/johnnygoodman/www/gmail
        Johnny-Goodmans-MacBook-Pro:gmail johnnygoodman$ ls
        test-send.rb
        Johnny-Goodmans-MacBook-Pro:gmail johnnygoodman$ cat test-send.rb
        require 'rubygems'
        require 'gmail'

    I run this script and the errors begin:

        Johnny-Goodmans-MacBook-Pro:gmail johnnygoodman$ ruby test-send.rb
        /Library/Ruby/Site/1.8/rubygems/custom_require.rb:31:in `gem_original_require': no such file to load -- mime/message (LoadError)
            from /Library/Ruby/Site/1.8/rubygems/custom_require.rb:31:in `require'
            from /Library/Ruby/Gems/1.8/gems/ruby-gmail-0.2.1/lib/gmail/message.rb:1
            from /Library/Ruby/Site/1.8/rubygems/custom_require.rb:31:in `gem_original_require'
            from /Library/Ruby/Site/1.8/rubygems/custom_require.rb:31:in `require'
            from /Library/Ruby/Gems/1.8/gems/ruby-gmail-0.2.1/lib/gmail.rb:168
            from /Library/Ruby/Site/1.8/rubygems/custom_require.rb:36:in `gem_original_require'
            from /Library/Ruby/Site/1.8/rubygems/custom_require.rb:36:in `require'
            from test-send.rb:2
        Johnny-Goodmans-MacBook-Pro:gmail johnnygoodman$

    Here's my gem env:

        Johnny-Goodmans-MacBook-Pro:gmail johnnygoodman$ gem environment
        RubyGems Environment:
          - RUBYGEMS VERSION: 1.3.7
          - RUBY VERSION: 1.8.7 (2009-06-08 patchlevel 173) [universal-darwin10.0]
          - INSTALLATION DIRECTORY: /Library/Ruby/Gems/1.8
          - RUBY EXECUTABLE: /System/Library/Frameworks/Ruby.framework/Versions/1.8/usr/bin/ruby
          - EXECUTABLE DIRECTORY: /usr/bin
          - RUBYGEMS PLATFORMS:
            - ruby
            - universal-darwin-10
          - GEM PATHS:
            - /Library/Ruby/Gems/1.8
            - /Users/johnnygoodman/.gem/ruby/1.8
            - /System/Library/Frameworks/Ruby.framework/Versions/1.8/usr/lib/ruby/gems/1.8
          - GEM CONFIGURATION:
            - :update_sources => true
            - :verbose => true
            - :benchmark => false
            - :backtrace => false
            - :bulk_threshold => 1000
            - :sources => ["http://rubygems.org/", "http://gems.github.com"]
          - REMOTE SOURCES:
            - http://rubygems.org/
            - http://gems.github.com

    The path that the errors give when I run the script is not the same as the GEM PATHS given in the env output. However, I don't know how to make them match or if that's the significant thing here.

    Read the article

  • WCF Reliable Session Timeout

    - by RemotecUk
    Hi, when do reliable sessions time out? My session class is defined as follows:

        [ServiceBehavior(InstanceContextMode = InstanceContextMode.PerSession, ConcurrencyMode = ConcurrencyMode.Multiple)]

    and in my app.config:

        <bindings>
          <netTcpBinding>
            <binding name="FTS_netTcpBinding">
              <reliableSession enabled="true" inactivityTimeout="00:00:30"/>
            </binding>
          </netTcpBinding>
        </bindings>

    I have put a timer in the constructor of my session class that simply outputs a count (1..2..3...) to the console for every second the session is active. I have tested it so far by faulting my channel. I would have imagined that the session class would have died after ~30 seconds (as specified in my inactivityTimeout parameter) and hence the timer would have died. However, it was still going after a minute. Each session in my app will hold significant resources, so I need to make sure that they are cleaned up when something goes wrong. Thanks.

    Read the article

  • How to get Processor and Motherboard Id?

    - by Frank
    I use the code from http://www.rgagnon.com/javadetails/java-0580.html to get the Motherboard Id, but the result is "null".

    1. How can that be?
    2. Also, I modified the code a bit to look like this to get the Processor Id:

        "Set objWMIService = GetObject(\"winmgmts:\\\\.\\root\\cimv2\")\n"+
        "Set colItems = objWMIService.ExecQuery _ \n"+
        "    (\"Select * from Win32_Processor\") \n"+
        "For Each objItem in colItems \n"+
        "    Wscript.Echo objItem.ProcessorId \n"+
        "    exit for ' do the first cpu only! \n"+
        "Next \n";

    The result is something like:

        ProcessorId = BFEBFBFF00010676

    On http://msdn.microsoft.com/en-us/library/aa389273%28VS.85%29.aspx it says:

    ProcessorId: Processor information that describes the processor features. For an x86 class CPU, the field format depends on the processor support of the CPUID instruction. If the instruction is supported, the property contains 2 (two) DWORD formatted values. The first is an offset of 08h-0Bh, which is the EAX value that a CPUID instruction returns with input EAX set to 1. The second is an offset of 0Ch-0Fh, which is the EDX value that the instruction returns. Only the first two bytes of the property are significant and contain the contents of the DX register at CPU reset—all others are set to 0 (zero), and the contents are in DWORD format.

    I don't quite understand it. In plain English, is it unique, or is it just a number for this class of processors — for instance, will all Intel Core 2 Duo P8400s have this number?

    Frank
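    To make sure I'm reading the quoted format correctly, here is a small Python sketch (not the WMI code above) of how I understand the value: it splits into two DWORDs, one being the CPUID leaf-1 signature and the other the feature-flag word. Treating 0x00010676 as the signature half and 0xBFEBFBFF as the feature half is my assumption (it is the half that decodes to a sensible family/model/stepping), not something the documentation spells out:

        # Rough decode of the Win32_Processor.ProcessorId value shown above.
        processor_id = "BFEBFBFF00010676"

        features  = int(processor_id[:8], 16)   # 0xBFEBFBFF -- assumed CPUID leaf-1 EDX feature flags
        signature = int(processor_id[8:], 16)   # 0x00010676 -- assumed CPUID leaf-1 EAX signature

        stepping   = signature & 0xF
        model      = (signature >> 4) & 0xF
        family     = (signature >> 8) & 0xF
        ext_model  = (signature >> 16) & 0xF
        ext_family = (signature >> 20) & 0xFF

        # Intel convention: displayed family/model combine the extended fields.
        display_model  = (ext_model << 4) + model if family in (0x6, 0xF) else model
        display_family = family + ext_family if family == 0xF else family

        print(f"family {display_family}, model {display_model}, stepping {stepping}")
        # -> family 6, model 23, stepping 6

    If that reading is right, the value describes the processor family/model/stepping plus its feature bits, so every CPU of the same model would report the same ProcessorId rather than a per-chip serial number — is that correct?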

    Read the article

  • eclipse tomcat debug mode slow - pegs cpu

    - by andersonbd1
    Running Tomcat through eclipse works fine in non-debug mode, but not in debug mode. When I try to start the Tomcat server in debug mode, the console output looks fine for a while, but then starts slowing down and eventually just stops, pegging the cpu at 100%. I don't think it's relevant, but just in case - here's the console output right about when it starts slowing down and eventually stopping (by stopping I mean no more console output, but still 100% cpu). 2009-09-02 14:35:30,859 INFO NONE org.springframework.context.weaving.DefaultContextLoadTimeWeaver:72 - Found Spring's JVM agent for instrumentation 2009-09-02 14:35:49,562 INFO NONE org.springframework.beans.factory.support.DefaultListableBeanFactory:414 - Pre-instantiating singletons in org.springframework.beans.factory.support.DefaultListableBeanFactory@ed889d: defining beans [... 2009-09-02 14:37:31,031 INFO NONE org.springframework.orm.jpa.LocalContainerEntityManagerFactoryBean:221 - Building JPA container EntityManagerFactory for persistence unit ... I tried everything I could think of to fix it: cleanesd tomcat working directory restarted eclipse restarted Windows refreshed/cleaned all projects I first had this problem last week using eclipse ganymede. I had been running fine in debug-mode for several months prior to this issue. I didn't make any significant changes to our project that would cause this. Eventually, I upgraded to eclipse galileo which solved my problem. Now 2 days later, I'm having the same problem in galileo. Like I said it works fine in non-debug mode. Any help is much appreciated. I should add that other things work in debug mode - for instance junit tests, so it is something specific to tomcat.

    Read the article

  • Open source GIS tools

    - by TRV_SQL
    Hi I’ve been looking at Open Source GIS tools. In particular MapServer and GeoServer. The problem I’m seeing is that to actually deploy these to the public you can’t use a regular $5/ month (or free) hosting service because you have to install these services on the server in ways that are not accessible in the average hosting scheme. So you either have to use a host that has MapServer installed (many of which look unreliable) or have a dedicated server or VPS. All of these options have a significant cost barrier ($30 - $200/month). I’m just doing this for fun. Are there any free or inexpensive ways to have your GIS services hosted? Or are there any products that install in a way that you don’t need to access the root structure of the server? I have tried OpenLayers and GeoExt but I don’t think a client side option will work for me because of the size of the datasets I am using. My base data will be vector data not WMS data (or something similar). I haven’t tried Google maps yet, but I will be looking into it. Also, any thought on using SVG for GIS purposes? Thanks

    Read the article

  • How did the Lunar Lander example make the image backgrounds transparent?

    - by user279112
    Hello. I'm trying to make a GUI program with the Android SDK, using their Lunar Lander example as a significant self-teaching tool in the process. I've noticed their sprites' images' backgrounds, which were at least usually pure white, did not show up in their program. I want to ask how they did that, since their site doesn't explain simple things very well. I've managed to pull that off before on another GUI SDK, wherein all I had to do was to call a function and pass it a few floats to define a certain color, and until my code told it to do otherwise, that function would make sure that that particular color in my sprites' images was totally transparent. However I've wrestled with the Lunar Lander example and getting my own program to show some custom graphics for a week or two now, and I haven't noticed any such function call in the Lunar Lander example. I tried to look for it, but I did not find anything. I've tried to Google some tutorial or other reference material, but what I've found so far is just straying off into unrelated areas and totally dodging this EXTREMELY important lesson on the SDK's basics. Any ideas? Thanks!

    Read the article

  • Text mining on large database (data mining)

    - by yox
    Hello, I have a large database of resumes (CVs), and a table skills grouping all users' skills. Inside that table there's a field skill_text that describes the skill in full text. I'm looking for an algorithm/software/method to extract significant terms/phrases from that table in order to build a new table with standardized skills. Here are some example skills extracted from the DB:

    Sectoral and competitive analysis
    Business Development (incl. in international settings)
    Specific structure and road design software - Microstation, Macao, AutoCAD (basic knowledge)
    Creative work (Photoshop, In-Design, Illustrator)
    checking and reporting back on campaign progress
    organising and attending events and exhibitions
    Development : Aptana Studio, PHP, HTML, CSS, JavaScript, SQL, AJAX
    Discipline: One to one marketing, E-marketing (SEO & SEA, display, emailing, affiliate program), Mix marketing, Viral Marketing, Social network marketing.

    The output should be something like:

    Sectoral and competitive analysis
    Business Development
    Specific structure and road design software
    Macao
    AutoCAD
    Photoshop
    In-Design
    Illustrator
    organising events
    Development
    Aptana Studio
    PHP
    HTML
    CSS
    JavaScript
    SQL
    AJAX
    Mix marketing
    Viral Marketing
    Social network marketing
    emailing
    SEO
    One to one marketing

    As you see, only the skills remain, with no other filler text. I know this is possible using text mining techniques, but how do I do it? The database is really large, which is a good thing, because we can calculate term frequency and decide whether something is a real skill or just meaningless text... The big problem is: how do we determine that "blablabla" is a skill? Thanks
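    To show the kind of frequency-based filtering I have in mind, here is a minimal Python sketch (standard library only). The few rows are made-up stand-ins for skill_text values; on the real table the document-frequency threshold would be much higher and a proper stop-word list and phrase chunking would be needed:

        import re
        from collections import Counter

        # Hypothetical stand-in for "SELECT skill_text FROM skills"
        skill_rows = [
            "Development : Aptana Studio, PHP, HTML, CSS, JavaScript, SQL, AJAX",
            "Creative work (Photoshop, In-Design, Illustrator)",
            "Web development with PHP, HTML, CSS and JavaScript",
            "Photoshop and Illustrator for creative work",
        ]

        STOP = {"and", "with", "for", "of", "in", "the", "on"}

        def candidate_phrases(text, max_len=3):
            """Split a row on punctuation, then emit word n-grams up to max_len."""
            for chunk in re.split(r"[,:;()/]| - ", text):
                words = chunk.split()
                for n in range(1, max_len + 1):
                    for i in range(len(words) - n + 1):
                        yield " ".join(words[i:i + n]).strip().lower()

        # Count in how many distinct rows each phrase occurs (document frequency).
        df = Counter()
        for row in skill_rows:
            df.update(set(candidate_phrases(row)))

        # Keep phrases that recur in at least 2 rows and contain no stop words;
        # recurring phrases are more likely to be real skills than one-off text.
        skills = [p for p, count in df.most_common()
                  if count >= 2 and not any(w in STOP for w in p.split())]
        print(skills)   # e.g. ['php', 'html', 'css', 'javascript', 'photoshop', ...]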

    Read the article

  • Smoothing touch-based animation in iPhone OpenGL?

    - by quixoto
    I know this is vague, but looking for general tips/help on this, as it's not an area of significant expertise for me. I have some iPhone code that's basically an EAGL view handling a single touch. The app draws (using GL) a circle via triangle fan at the touch point, and moves it when the user moves the touch point, and re-renders the view then. When dragging a finger slowly, the circle keeps up and consistent with the finger as it moves. If I scribble my finger quickly back and forth across the screen, the rendering doesn't keep up with the touch motion, so you see an optical illusion of "multiple" discrete circles on the screen "at once". (Normal persistence of vision illusion). This optical illusion is jarring. How can I make this look more natural? Can I blur the motion of the circle somehow? Is this result the evidence of some bad frame rate issue? I see this artifact even when nothing else is being rendered, so I think this might just be as fast as we can go. Any hints or suggestions? Much appreciated. Thanks.

    Read the article

  • Resources related to data-mining and gaming on social networks

    - by darren
    Hi all, I'm interested in the problem of pattern mining among players of social networking games, for example detecting cheaters in a game given a company's user database. So far I have been following the usual recipe for a data mining project:
    - construct a data warehouse that aggregates significant information
    - select a classifier and train it with a subsection of records from the warehouse
    - validate the classifier with another test set
    - lather, rinse, repeat
    Surprisingly, I've found very little in this area regarding literature, best practices, etc., so I'm hoping to crowdsource the information gathering here. Specifically, I'm looking for:
    - What classifiers have worked well for this type of pattern mining? The data seems highly temporal: users playing games, users receiving rewards, users transferring prizes, etc.
    - Are there any widely agreed-upon attributes specific to social networking / gaming data?
    - What is a practical amount of information to consider? One problem I've run into is data overload, where queries and data cleansing may take days to complete.
    - Related to the point above, what hardware resources are required to produce results? I've found it difficult to estimate the computing power I will need for production use. It has become apparent that a white box in the corner does not have enough horsepower for such a project. Are companies generally resorting to cloud solutions? Are they buying clusters?
    Basically, any resources (theoretical, academic, or practical) about implementing a social networking / gaming pattern-mining program would be very much appreciated. Thanks.
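    As one concrete way to run the train/validate loop described above, here is a minimal sketch using Weka, a Java machine-learning library; the file name and the 80/20 split are assumptions for illustration, and the J48 decision tree is only a starting point, not a recommendation specific to cheat detection:

        import java.util.Random;
        import weka.classifiers.Classifier;
        import weka.classifiers.Evaluation;
        import weka.classifiers.trees.J48;
        import weka.core.Instances;
        import weka.core.converters.ConverterUtils.DataSource;

        public final class CheaterModelSketch {
            public static void main(String[] args) throws Exception {
                // Feature vectors exported from the warehouse; ARFF is Weka's native format.
                Instances data = new DataSource("player_features.arff").getDataSet();
                data.setClassIndex(data.numAttributes() - 1); // last attribute = cheater yes/no
                data.randomize(new Random(42));

                // Simple 80/20 split; cross-validation would be the more thorough option.
                int trainSize = (int) Math.round(data.numInstances() * 0.8);
                Instances train = new Instances(data, 0, trainSize);
                Instances test = new Instances(data, trainSize, data.numInstances() - trainSize);

                Classifier cls = new J48(); // a decision tree; any Weka classifier can be swapped in
                cls.buildClassifier(train);

                Evaluation eval = new Evaluation(train);
                eval.evaluateModel(cls, test);
                System.out.println(eval.toSummaryString());
            }
        }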

    Read the article

  • PDFView printWithInfo:autoRotate: fails

    - by Brian Postow
    I'm trying to print a PDFDocument that I construct from a series of images. In case it matters, I'm doing all of this from within a Mozilla plugin. I create the PDFDocument, put it into a PDFView, then call [printView printWithInfo: [NSPrintInfo sharedPrintInfo] autoRotate: YES]; The print dialog comes up (as a separate window instead of a panel; I assume that comes from being inside a Mozilla window, so I wasn't too worried about it). The dialog shows my document, I can page through it correctly, and everything looks good. However, when I hit "Print", the dropdown with "Layout" etc. becomes empty, and the view under it becomes empty too. The window doesn't disappear and the document doesn't print; hitting Cancel does exactly the same thing. The only thing I can do then is force-quit Mozilla. I based the program on PDFKitLinker2 from the Apple dev site, and that program works, but I can't see any significant differences between it and my version. Any suggestions on where to look? Thanks. EDIT: Yes, I know this is pretty much an exact duplicate of http://stackoverflow.com/questions/2058501/printing-off-screen-pdfviews but that never got a sufficient answer... (And I didn't notice it until just now...)

    Read the article

  • How to explain to users the advantages of dumb primary key?

    - by Hao
    Primary key attractiveness. I have a boss (and also users) who want the primary key to be a sophisticated/smart/attractive control number (sort of like a Social Security number or credit card number format). I had already padded the primary key (in views) with zeroes to appease their desire for a sophisticated, smart, attractive control number, but what they actually wanted was: the first 2 characters as the client code, then 4 digits for the year, then 4 digits for that client's transaction number within the year, with the transaction number resetting to 1 when the next year begins. Each client's transactions start at 1, e.g. WM20090001, WM20090002, BB20090001, WM20100001, BB20100001. Wanting to keep things as simple as possible, I chose not to embed their suggested smartness in the primary key; the key just auto-increments regardless of client and year. To keep it from looking dull (they really are adamant about a smart control number), I made the primary key appear smart in the view query: I put the client code and the four-digit year in front of the eight-zero-padded auto-increment key, i.e. WM200900000001, in effect slug-like information layered on an auto-incremented primary key. By keeping the primary key a plain auto-increment, independent of any other information, we avoid side effects when a record is edited. For example, if a transaction is mistakenly entered under WM and the client code is later corrected to BB, a smart primary key would leave gaps in WM's control numbers; worse, instead of accepting the gaps, users might ask that subsequent records be shifted up to fill them, with their primary keys re-adjusted (decremented). How do you deal with these user requests (reasonable or otherwise)? Do you yield to them? Or do you continue using a dumb primary key, explain the repercussions of a very smart/sophisticated primary key, and educate them about the significant advantages of a dumb one? P.S. Quotable quote (http://articles.techrepublic.com.com/5100-10878_11-1044961.html): "If you hold your tongue the first time users ask what is for them a reasonable request, things will work a lot better in the end."
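    A minimal sketch of the approach described above, in Java (the class and method names are hypothetical): the stored key stays a plain auto-increment, and the "smart" control number is derived only at display time, so nothing breaks when a client code or year is later corrected.

        public final class ControlNumber {
            // Formats e.g. client "WM", year 2009, surrogate id 1 as "WM200900000001".
            public static String display(String clientCode, int year, long surrogateId) {
                return String.format("%s%d%08d", clientCode, year, surrogateId);
            }

            public static void main(String[] args) {
                System.out.println(display("WM", 2009, 1));  // WM200900000001
                System.out.println(display("BB", 2010, 73)); // BB201000000073
            }
        }

    If the users later insist on per-client, per-year sequences, that numbering can still live in a separate display column or view while the dumb surrogate key keeps its role as the stable row identifier.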

    Read the article

  • Silverlight DataForm Memory Leak

    - by Andrew Garrison
    Some background: I have noticed that setting the EditTemplate of a DataForm (from the Silverlight Toolkit) can prevent the DataForm from being garbage collected. Consequently, the parent control of the DataForm cannot be garbage collected either, causing a very significant memory leak. Here's some XAML that demonstrates the case:
    <toolkit:DataForm HorizontalAlignment="Stretch" Margin="10" VerticalAlignment="Stretch">
        <toolkit:DataForm.EditTemplate>
            <DataTemplate>
                <toolkit:DataField Label="Dummy Binding:">
                    <TextBox Text="{Binding DummyBinding, Mode=TwoWay}" />
                </toolkit:DataField>
            </DataTemplate>
        </toolkit:DataForm.EditTemplate>
    </toolkit:DataForm>
    I have opened an issue on CodePlex; the issue has an attached project that demonstrates the case. So, my question is: has anyone else encountered this? More importantly, does anyone know of any workarounds? How can I force this DataForm to be garbage collected?

    Read the article
