Search Results

Search found 4291 results on 172 pages for 'cluster analysis'.

Page 90 of 172

  • Where is a postgresql 9.1 database stored in ubuntu 12.04?

    - by celenius
    I installed PostgreSQL on Ubuntu and then created a database using the following command: sudo su postgres createdb mydatabase However, I can't figure out where the database was initialized. I would like to be able to edit the pg_hba.conf and postgresql.conf files. When I view the database using pgAdmin I see the following information: CREATE DATABASE mydatabase WITH OWNER = postgres ENCODING = 'UTF8' TABLESPACE = pg_default LC_COLLATE = 'en_US.UTF-8' LC_CTYPE = 'en_US.UTF-8' CONNECTION LIMIT = -1; Any thoughts on how I can find the database cluster location?
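    One way to answer this, sketched below, is simply to ask the server itself where its files live (this assumes the psycopg2 driver is installed and that you can connect as the postgres user; on a stock Ubuntu 12.04 package install these typically come back as /var/lib/postgresql/9.1/main for the data directory and /etc/postgresql/9.1/main/ for the config files):

        import psycopg2

        # Connect to the database created above; the connection details are assumptions.
        conn = psycopg2.connect(dbname="mydatabase", user="postgres")
        cur = conn.cursor()
        for setting in ("data_directory", "config_file", "hba_file"):
            cur.execute("SHOW " + setting)   # SHOW reports the server's own file locations
            print(setting, "=", cur.fetchone()[0])
        conn.close()

    The same SHOW commands work directly from psql as well.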

    Read the article

  • Power View Infrastructure Configuration and Installation: Step-by-Step and Scripts

    This document contains step-by-step instructions for installing and testing the Microsoft Business Intelligence infrastructure based on SQL Server 2012 and SharePoint 2010, focused on SQL Server 2012 Reporting Services with Power View. It describes how to install the following scenarios end to end: a standalone instance of SharePoint and Power View with all required components; a new SharePoint farm with the Power View infrastructure; a server with the Power View infrastructure joined to an existing SharePoint farm; installation of client tools on a separate computer; installation of a tabular instance of Analysis Services on a separate server; and configuration of single sign-on access for double-hop scenarios with and without Kerberos. Scripts are provided for most scenarios.

    Read the article

  • Moving OVD from Test to Production

    - by mark.wilcox
    A customer asked support how to move a test OVD server to production. There are a couple of ways to do this. One way is to clone the environment: http://download.oracle.com/docs/cd/E15523_01/core.1111/e10105/testprod.htm#CH... Another way, which is particularly useful if you want to push configuration from a parent OVD server to children in a cluster: http://download.oracle.com/docs/cd/E14571_01/oid.1111/e10046/basic_server_set... Note that if you use the second option and you have any data in a Local Store Adapter, you may also need to use the oidcmprec tool to synchronize that data: http://download.oracle.com/docs/cd/E14571_01/oid.1111/e10029/replic_mng_mon.h...

    Read the article

  • LinuxCon North America 2014

    - by Chris Kawalek
    As the first day of LinuxCon North America 2014 draws to a close, we want to thank all the folks who stopped by our booth today! If you're at the show, please stop by booth #204 and have a chat with our experts. And you won't want to miss these Linux and virtualization related sessions coming up tomorrow and Friday:
    Thursday, Aug 21 - Static Analysis in the Linux Kernel Using Smatch - Dan Carpenter, Oracle
    Friday, Aug 22 - The Proper Care and Feeding of MySQL Database for Linux Admins Who Also Have DBA Duties - Morgan Tocker, Oracle
    Friday, Aug 22 - Why Use Xen for Large Scale Enterprise Deployments - Konrad Rzeszutek Wilk, Oracle
    We look forward to meeting with you and we hope you enjoy the rest of LinuxCon North America 2014!

    Read the article

  • How to avoid “The web server process that was being debugged has been terminated by IIS”

    - by ybbest
    Problem: When debugging ASP.NET by attaching to the w3wp.exe process, you will often encounter the following error message: "The web server process that was being debugged has been terminated by IIS." Analysis: While the debugger holds the worker process paused, the Internet Information Services (IIS) health-monitoring ping assumes that the worker process has stopped responding, so IIS terminates the worker process. Solution: 1. Open IIS Manager. 2. Click Application Pools, select the application pool associated with the site, and click Advanced Settings. 3. In Advanced Settings, change the Ping Enabled property from True to False. Now reattach the process from Visual Studio; you should no longer get the error message. References: msdn

    Read the article

  • Roll Your Own Solaris Blogroll

    - by Larry Wake
    Something handy I just ran across: There are lots of people here who blog about Solaris, either as their main topic, or as the occasional tangent. If the blogger has tagged their post appropriately, here's a quick way to find them:
    Articles tagged Solaris
    Articles tagged ZFS
    Articles tagged IPS
    Articles tagged DTrace
    Articles tagged Zones
    Articles tagged Studio
    Articles tagged Cluster
    Note that this is a little different from using the "word cloud" you can find in the right-hand column on this page, since that only finds articles tagged in this blog. The above links will find all tagged blogs.oracle.com posts. Some topics are a little trickier to nail down, because there may not be a standardized tag for the topic, so building a more conventional "blogroll" is on my to-do list. In the meantime, you can also refer to the post Markus Weber made of interesting Solaris 11 launch-related posts.

    Read the article

  • Is acoustic fingerprinting too broad for one audio file only?

    - by IBG
    We were looking for some topics related to audio analysis and found acoustic fingerprinting. As it stands, its most famous application seems to be the identification of music. Enter our manager, who asked us to research and possibly find an algorithm or existing code that we can use for this very simple approach (as if it were easy; source code doesn't just show up like mushrooms):
    Always-on app for listening
    Compare the audio patterns to a single audio file (assume the sound is a simple beep)
    If the beep is detected, send a notification to the server
    With a flow this simple, do you think acoustic fingerprinting is too broad an approach to use? Should we stop and take another approach? Where would be the best place to start? We haven't started anything yet (on the development side), so I want to get other opinions on whether this pursuit is worth it or moot.
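    If the beep really is a fixed, known tone, full acoustic fingerprinting may be overkill; a single-frequency detector such as the Goertzel algorithm might be enough. A rough sketch follows (the sample rate, target frequency, and threshold are made-up values you would tune; this is an illustration, not production code):

        import math

        def goertzel_power(samples, sample_rate, target_hz):
            """Return the spectral power of `samples` at `target_hz` (Goertzel algorithm)."""
            n = len(samples)
            k = int(0.5 + n * target_hz / sample_rate)
            omega = 2.0 * math.pi * k / n
            coeff = 2.0 * math.cos(omega)
            s_prev = s_prev2 = 0.0
            for x in samples:
                s = x + coeff * s_prev - s_prev2
                s_prev2, s_prev = s_prev, s
            return s_prev2 ** 2 + s_prev ** 2 - coeff * s_prev * s_prev2

        def beep_detected(samples, sample_rate=8000, beep_hz=1000.0, threshold=1e6):
            # Assumed values: an 8 kHz capture, a 1 kHz beep, and an arbitrary threshold.
            return goertzel_power(samples, sample_rate, beep_hz) > threshold

    Feeding it short blocks of microphone samples (say 0.1-second windows) and comparing the returned power against a calibrated threshold would cover the always-on flow without any fingerprinting library.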

    Read the article

  • Coherence Query Performance in Large Clusters

    - by jpurdy
    Large clusters (measured in terms of the number of storage-enabled members participating in the largest cache services) may introduce challenges when issuing queries. There is no particular cluster size threshold for this, rather a gradually increasing tendency for issues to arise. The most obvious challenges are that a client's perceived query latency will be determined by the slowest responder (more likely to be a factor in larger clusters) as well as the fact that adding additional cache servers will not increase query throughput if the query processing is not compute-bound (which would generally be the case for most indexed queries). If the data set can take advantage of the partition affinity features of Coherence, then the application can use a PartitionedFilter to target a query to a single server (using partition affinity to ensure that all data is in a single partition). If this can not be done, then avoiding an excessive number of cache server JVMs will help, as will ensuring that each cache server has sufficient CPU resources available and is also properly configured to minimize GC pauses (the most common cause of a slow-responding cache server).

    Read the article

  • SQL Server Data Tools–BI for Visual Studio 2013 Re-released

    - by Greg Low
    Customers used to complain that the tooling for creating BI projects (Analysis Services multidimensional and tabular, Reporting Services, and Integration Services) was based on earlier versions of Visual Studio than the ones they were using for their other work in Visual Studio (such as C#, VB, and ASP.NET projects). To alleviate that problem, the shipment of those tools has been decoupled from the shipment of the SQL Server product. In SQL Server 2014, the BI tooling isn’t even included in the released version of SQL Server. This allows the team to keep up to date with the releases of Visual Studio. A little while back, I was really pleased to see that the Visual Studio 2013 update for SSDT-BI (SQL Server Data Tools for Business Intelligence) had been released. Unfortunately, it then had to be withdrawn. The good news is that it’s back, and you can get the latest version from here: http://www.microsoft.com/en-us/download/details.aspx?id=42313

    Read the article

  • Technical Computing

      Today, Microsoft announced our Technical Computing initiative. Through the Technical Computing initiative, we will enable scientists, engineers and analysts to more easily model the world at much greater fidelity. The Technical Computing initiative will address a wide range of users. One of the most critical elements is to help developers create applications that can take advantage of parallelism on their desktop, in a cluster, and in public and private clouds.

    Read the article

  • AMD releases a 12-core processor: does it outpace Intel on every level?

    AMD releases a 12-core processor: does it outpace Intel on every level? For several years now, the most common processors have been those with two or four cores. Others exist, notably with six cores (or more), but they are fairly expensive and aimed mostly at professional use in servers that have a large number of workloads to process in parallel. The more cores there are, the faster things go, provided the programs running on them can take full advantage of that power. Generally, servers using this type of chip tie them together into one big cluster. AMD is now bringing to market an even more powerful processor, with 12 cores.

    Read the article

  • What's an acceptable "Avg. Page Load Time"?

    - by hawbsl
    Is there any industry rule of thumb for what's considered an unacceptable load time vs. an OK one vs. a blistering fast one? We're just reviewing some Google Analytics data and are seeing an Avg. Page Load Time of 0.74 seconds reported. I guess that's OK. However, it would be good if some meatier comparison data were available, or a blog post, or somewhere with some analysis of what speeds are generally being achieved by various kinds of sites. Any useful links to help someone interpret these speeds? If you Google it, you just get a lot of results about how to improve your speed. We're not at that stage yet.

    Read the article

  • Latest EMEA Partner Success Stories (21st November)

    - by swalker
    Be Recognised Through Partner Success Stories You can showcase your capabilities in Oracle products and industries through partner success stories that are published on Oracle.com, and benefit from the traffic on our portal. To participate, you are required to complete a form and inform us of a successfully implemented project. If your story is selected, we will contact you for an interview. Click here to access the form and submit your success story. The latest customer success snapshots with partners being uploaded onto Oracle.com are: Robur S.p.A., Telenor, LinkPlus A.S., Urals Power Engineering Company, Bochemie Group, Mediaset S.p.A., Ladbrokes, Coeclerici S.p.A., IDS GmbH - Analysis and Reporting Services, Teatre Nacional de Catalunya, LinkPlus A.S., Scottish Water, LH Dienstbekleidungs GmbH, Champion Europe SpA, Metropolitan Housing Partnership, McKesson, and Robur. Learn more about our partner and customer successes by browsing the many EMEA Success Stories across all industries here.

    Read the article

  • Is a yobibit really a meaningful unit? [closed]

    - by Joe
    Wikipedia helpfully explains: The yobibit is a multiple of the bit, a unit of digital information storage, prefixed by the standards-based multiplier yobi (symbol Yi), a binary prefix meaning 2^80. The unit symbol of the yobibit is Yibit or Yib.[1][2] 1 yobibit = 2^80 bits = 1,208,925,819,614,629,174,706,176 bits = 1024 zebibits.[3] The zebi and yobi prefixes were originally not part of the system of binary prefixes, but were added by the International Electrotechnical Commission in August 2005.[4] Now, what in the world actually takes up 1,208,925,819,614,629,174,706,176 bits? The information content of the known universe? I guess this is forward thinking -- maybe astrophysics or nanotech, or even DNA analysis really will require these orders of magnitude. How far off do you think all this is? Are these really meaningful units?

    Read the article

  • Acquiring the Skill of SEO

    Many people might not be aware of the topic or what this article is in fact focusing on. Basically, the term in the title relates to internet tactics. It refers to search engine optimization, which is a very well-known skill nowadays. Moving on to the definition: it is the process of improving the ranking of a particular website on a search engine. It is known as an internet marketing strategy in which, first of all, an analysis of how search engines work is carried out.

    Read the article

  • Implementing a Custom Coherence PartitionAssignmentStrategy

    - by jpurdy
    A recent A-Team engagement required the development of a custom PartitionAssignmentStrategy (PAS). By way of background, a PAS is an implementation of a Java interface that controls how a Coherence partitioned cache service assigns partitions (primary and backup copies) across the available set of storage-enabled members. While seemingly straightforward, this is actually a very difficult problem to solve. Traditionally, Coherence used a distributed algorithm spread across the cache servers (and as of Coherence 3.7, this is still the default implementation). With the introduction of the PAS interface, the model of operation was changed so that the logic would run solely in the cache service senior member. Obviously, this makes the development of a custom PAS vastly less complex, and in practice does not introduce a significant single point of failure/bottleneck. Note that Coherence ships with a default PAS implementation but it is not used by default. Further, custom PAS implementations are uncommon (this engagement was the first custom implementation that we know of). The particular implementation mentioned above also faced challenges related to managing multiple backup copies, but that won't be discussed here. There were a few challenges that arose during design and implementation:
    Naive algorithms had an unreasonable upper bound of computational cost.
    There was significant complexity associated with configurations where the member count varied significantly between physical machines.
    Most of the complexity of a PAS is related to rebalancing, not initial assignment (which is usually fairly simple).
    A custom PAS may need to solve several problems simultaneously, such as:
    Ensuring that each member has a similar number of primary and backup partitions (e.g. each member has the same number of primary and backup partitions)
    Ensuring that each member carries similar responsibility (e.g. the most heavily loaded member has no more than one partition more than the least loaded)
    Ensuring that each partition is on the same member as a corresponding local resource (e.g. for applications that use partitioning across message queues, to ensure that each partition is collocated with its corresponding message queue)
    Ensuring that a given member holds no more than a given number of partitions (e.g. no member has more than 10 partitions)
    Ensuring that backups are placed far enough away from the primaries (e.g. on a different physical machine or a different blade enclosure)
    Achieving the above goals while ensuring that partition movement is minimized.
    These objectives can be even more complicated when the topology of the cluster is irregular. For example, if multiple cluster members may exist on each physical machine, then clearly the possibility exists that at certain points (e.g. following a member failure), the number of members on each machine may vary, in certain cases significantly so. Consider the case where there are three physical machines, with 3, 3 and 9 members respectively. This introduces complexity since the backups for the 9 members on the largest machine must be spread across the other 6 members (to ensure placement on different physical machines), preventing an even distribution. For any given problem like this, there are usually reasonable compromises available, but the key point is that objectives may conflict under extreme (but not at all unlikely) circumstances.
The most obvious general purpose partition assignment algorithm (possibly the only general purpose one) is to define a scoring function for a given mapping of partitions to members, and then apply that function to each possible permutation, selecting the most optimal permutation. This would result in N! (factorial) evaluations of the scoring function. This is clearly impractical for all but the smallest values of N (e.g. a partition count in the single digits). It's difficult to prove that more efficient general purpose algorithms don't exist, but the key take away from this is that algorithms will tend to either have exorbitant worst case performance or may fail to find optimal solutions (or both) -- it is very important to be able to show that worst case performance is acceptable. This quickly leads to the conclusion that the problem must be further constrained, perhaps by limiting functionality or by using domain-specific optimizations. Unfortunately, it can be very difficult to design these more focused algorithms. In the specific case mentioned, we constrained the solution space to very small clusters (in terms of machine count) with small partition counts and supported exactly two backup copies, and accepted the fact that partition movement could potentially be significant (preferring to solve that issue through brute force). We then used the out-of-the-box PAS implementation as a fallback, delegating to it for configurations that were not supported by our algorithm. Our experience was that the PAS interface is quite usable, but there are intrinsic challenges to designing PAS implementations that should be very carefully evaluated before committing to that approach.
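    As a toy illustration of the balancing problem described above (this is only a Python sketch of a greedy heuristic, not an implementation of the Java PartitionAssignmentStrategy interface; the member and machine names are invented):

        from collections import defaultdict

        def assign(partition_count, members):
            """Greedy toy assignment. `members` maps member id -> machine id."""
            load = defaultdict(int)      # partitions (primary or backup) held per member
            plan = {}                    # partition -> (primary member, backup member)
            for p in range(partition_count):
                # Primary goes to the least-loaded member overall.
                primary = min(members, key=lambda m: load[m])
                load[primary] += 1
                # Backup goes to the least-loaded member on a *different* machine.
                candidates = [m for m in members if members[m] != members[primary]]
                backup = min(candidates, key=lambda m: load[m]) if candidates else None
                if backup is not None:
                    load[backup] += 1
                plan[p] = (primary, backup)
            return plan

        # The irregular topology from the example above: machines with 3, 3 and 9 members.
        members = {"m%d" % i: ("A" if i < 3 else "B" if i < 6 else "C") for i in range(15)}
        plan = assign(257, members)

    Even this trivial heuristic exposes the conflict described above: with 9 of the 15 members on one machine, their backups can only land on the 6 members of the other two machines, so a perfectly even distribution is impossible, and a real PAS additionally has to minimize partition movement when rebalancing.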

    Read the article

  • Consolidating SQL Server Error Logs from Multiple Instances Using SSIS

    SQL Server hides a lot of very useful information in its error log files. Unfortunately, hunting through all these logs, file by file and server by server, can be a problem. Rodney Landrum offers a solution that will allow you to pull error log records from multiple servers into a central database, for analysis and reporting with T-SQL.
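    To give a rough idea of the consolidation step (this is not the SSIS package the article builds, just a Python sketch assuming the pyodbc driver and the sp_readerrorlog procedure; the server names and the central table are hypothetical):

        import pyodbc

        servers = ["SQL01", "SQL02"]    # hypothetical source instances
        central = pyodbc.connect(
            "DRIVER={SQL Server};SERVER=CENTRAL;DATABASE=DBA;Trusted_Connection=yes")

        for server in servers:
            src = pyodbc.connect(
                "DRIVER={SQL Server};SERVER=%s;Trusted_Connection=yes" % server)
            # Read the current error log (log 0) from the source instance.
            rows = src.cursor().execute("EXEC sp_readerrorlog 0").fetchall()
            cur = central.cursor()
            for log_date, process_info, text in rows:
                cur.execute(
                    "INSERT INTO dbo.ConsolidatedErrorLog "
                    "(ServerName, LogDate, ProcessInfo, LogText) VALUES (?, ?, ?, ?)",
                    server, log_date, process_info, text)
            central.commit()
            src.close()

    The SSIS approach in the article does the same thing with looped connection managers and data flows, which scales better once the server list grows.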

    Read the article

  • If I intend to use Hadoop is there a difference in 12.04 LTS 64 Desktop and Server?

    - by Charles Daringer
    Sorry for such a newbie question, but I'm looking at installing the M3 edition of MapR; the requirements are at this link: http://www.mapr.com/doc/display/MapR/Requirements+for+Installation And my question is this: is the 64-bit Desktop kernel for 12.04 LTS adequate, or the "same" as the Server version of the product? If I'm setting up a lab to attempt to install a home cluster environment, should I start with the Server edition, or dual-boot that distribution? My assumption is that the two are the same, and that I can add any additional software to the 64-bit install as needed. Can anyone elaborate on this? Have I missed something obvious?

    Read the article

  • Find Waldo with Mathematica

    - by Jason Fitzpatrick
    If you’re looking for a geeky (and speedy) way to find Waldo, of Where’s Waldo? fame, this series of Mathematica scripts makes it a snap. Over at Stack Overflow, programmer Arnoud Buzing shares a clever bit of Mathematica-based coding that analyzes a Where’s Waldo? drawing and finds the elusive Waldo. Hit up the link below to see the distinct steps of analysis with accompanying photos. How Do I Find Waldo with Mathematica? [via Make]

    Read the article

  • Do you use third party companies to review your company's code?

    - by CodeToGlory
    I am looking to get the following:
    Basic code review to make sure they follow the guidelines imposed.
    Security code analysis to make sure there are no loopholes.
    No performance bottlenecks, verified by doing a load test, etc.
    We have a lot of code coming in from third parties and it is becoming laborious to manage code reviews, hence I am looking to see if others employ such practices. I understand that it may be a concern for some and would raise the question "Well, who is going to make sure the agency is doing their job right?" But basically I am just looking for a third party who can hold all vendor code to the same standards.

    Read the article

  • Is OpenStack suitable as a fault tolerant DB host?

    - by Jit B
    I am trying to design a fault tolerant DB cluster (the schema does not matter) that would not require much maintenance. After looking at almost everything from MySQL to MongoDB to HBase, I still find that no DB is easily scalable - Cassandra comes close but it has its own set of problems. So I was thinking: what if I run something like MySQL or OrientDB on top of a large OpenStack VM? The VM would be fault tolerant by itself, so I don't need to do it at the DB level. Is it viable? Has it been done before? If not, then what are the possible problems with this approach?

    Read the article

  • New Exadata public references

    - by Javier Puerta
    The following customers are now public references for Exadata. Show your customers how other companies in their industries are leveraging Exadata to achieve their business objectives.
    MIGROS BANK - Financial Services - Switzerland - Oracle Exadata Database Machine + OBIEE 11g: Migros Bank AG Makes Systems More Available and Improves Operational Insight and Analytics with a Scalable, Integrated Data Warehouse. Success Story (English), Success Story (German)
    TECH ACCESS - Professional Services - United Arab Emirates - Oracle Exadata Database Machine: Tech Access Drives Compelling Proof-of-Concept Evaluations for Hardware Sales in Region's Largest Solutions Center. Success Story
    BALUBAID GROUP - Wholesale Distribution - Saudi Arabia - Oracle Exadata Database Machine + OBIEE 11g: Balubaid Group of Companies Reduces Help-Desk Complaints by 75%, Improves Business Continuity and System Response. Success Story
    ETISALAT - Communications - Nigeria - Oracle Exadata Database Machine: Etisalat Accelerates Data Retrieval and Analysis by 99 Percent with Oracle Communications Data Model Running on Oracle Exadata Database Machine. Oracle Press Release

    Read the article

  • Solaris 11.1 changes building of code past the point of __NORETURN

    - by alanc
    While Solaris 11.1 was under development, we started seeing some errors in the builds of the upstream X.Org git master sources, such as: "Display.c", line 65: Function has no return statement : x_io_error_handler "hostx.c", line 341: Function has no return statement : x_io_error_handler from functions that were defined to match a specific callback definition that declared them as returning an int if they did return, but these were calling exit() instead of returning so hadn't listed a return value. These had been generating warnings for years which we'd been ignoring, but X.Org has made enough progress in cleaning up code for compiler warnings and static analysis issues lately, that the community turned up the default error levels, including the gcc flag -Werror=return-type and the equivalent Solaris Studio cc flags -v -errwarn=E_FUNC_HAS_NO_RETURN_STMT, so now these became errors that stopped the build. Yet on Solaris, gcc built this code fine, while Studio errored out. Investigation showed this was due to the Solaris headers, which during Solaris 10 development added a number of annotations to the headers when gcc was being used for the amd64 kernel bringup before the Studio amd64 port was ready. Since Studio did not support the inline form of these annotations at the time, but instead used #pragma for them, the definitions were only present for gcc. To resolve this, I fixed both sides of the problem, so that it would work for building new X.Org sources on older Solaris releases or with older Studio compilers, as well as fixing the general problem before it broke more software building on Solaris. To the X.Org sources, I added the traditional Studio #pragma does_not_return to recognize that functions like exit() don't ever return, in patches such as this Xserver patch. Adding a dummy return statement was ruled out as that introduced unreachable code errors from compilers and analyzers that correctly realized you couldn't reach that code after a return statement. And on the Solaris 11.1 side, I updated the annotation definitions in <sys/ccompile.h> to enable for Studio 12.0 and later compilers the annotations already existing in a number of system headers for functions like exit() and abort(). If you look in that file you'll see the annotations we currently use, though the forms there haven't gone through review to become a Committed interface, so may change in the future. Actually getting this integrated into Solaris though took a bit more work than just editing one header file. Our ELF binary build comparison tool, wsdiff, actually showed a large number of differences in the resulting binaries due to the compiler using this information for branch prediction, code path analysis, and other possible optimizations, so after comparing enough of the disassembly output to be comfortable with the changes, we also made sure to get this in early enough in the release cycle so that it would get plenty of test exposure before the release. It also required updating quite a bit of code to avoid introducing new lint or compiler warnings or errors, and people building applications on top of Solaris 11.1 and later may need to make similar changes if they want to keep their build logs similarly clean. Previously, if you had a function that was declared with a non-void return type, lint and cc would warn if you didn't return a value, even if you called a function like exit() or panic() that ended execution. 
    For instance:

        #include <stdlib.h>

        int callback(int status)
        {
            if (status == 0)
                return status;
            exit(status);
        }

    would previously require a never-executed return 0; after the exit() to avoid the lint warning "function falls off bottom without returning value". Now the compiler & lint will both issue "statement not reached" warnings for a return 0; after the final exit(), allowing (or in some cases, requiring) it to be removed. However, if there is no return statement anywhere in the function, lint will warn that you've declared a function returning a value that never does so, suggesting you can declare it as void. Unfortunately, if your function signature is required to match a certain form, such as in a callback, you may not be able to do so, and will need to add a /* LINTED */ to the end of the function. If you need your code to build on both a newer and an older release, then you will either need to #ifdef these unreachable statements, or, to keep your sources common across releases, add to your sources the corresponding #pragma recognized by both current and older compiler versions, such as:

        #pragma does_not_return(exit)
        #pragma does_not_return(panic)

    Hopefully this little extra work is paid for by the compilers & code analyzers being able to better understand your code paths, giving you better optimizations and more accurate errors & warning messages.

    Read the article

  • Grouping IP Addresses based on ranges [on hold]

    - by mustard
    Say I have 5 different categories based on IP address ranges for monitoring the user base. What is the best way to categorize a list of input IP addresses into one of the 5 categories, depending on which range each falls into? Would sorting using a segment tree structure be efficient? Specifically, I'm looking to see if there are more efficient ways to sort IP addresses into groups or ranges than using a segment tree. Example: I have a list of IP address ranges per country from http://dev.maxmind.com/geoip/legacy/geolite/ I am trying to group incoming user requests on a website per country for demographic analysis. My current approach is to use a segment tree structure for the IP address ranges and use lookups based on the structure to identify which range a given IP address belongs to. I would like to know if there is a better way of accomplishing this.
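    Since country ranges like the GeoLite list are non-overlapping once sorted, one common alternative is a plain binary search over the range start addresses, which avoids building a tree at all. A minimal sketch (the two ranges shown are made-up stand-ins for the real list):

        import bisect
        import socket
        import struct

        def ip_to_int(ip):
            # Convert dotted-quad IPv4 text to an unsigned 32-bit integer.
            return struct.unpack("!I", socket.inet_aton(ip))[0]

        # (start, end, label) tuples, sorted by start and non-overlapping.
        ranges = sorted([
            (ip_to_int("1.0.0.0"), ip_to_int("1.0.0.255"), "AU"),
            (ip_to_int("8.8.8.0"), ip_to_int("8.8.8.255"), "US"),
        ])
        starts = [r[0] for r in ranges]

        def lookup(ip):
            # Find the last range that starts at or before this address.
            n = ip_to_int(ip)
            i = bisect.bisect_right(starts, n) - 1
            if i >= 0 and ranges[i][0] <= n <= ranges[i][1]:
                return ranges[i][2]
            return None

        print(lookup("8.8.8.8"))   # -> US

    Lookups are O(log n) either way, so a segment tree is not wrong; the binary search simply has less to build and keep in memory for what is effectively a static, disjoint set of ranges.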

    Read the article

  • FxCop ... Where have you been all my life?

    - by PhilSando
    I was recently introduced to Microsoft's tool for analyzing managed code assemblies, called FxCop. It points out possible design, localization, performance, and security improvements against a predefined set of rules (and also accepts custom rules). At first I was unsure how to go about using it, as it seems to be aimed at software developers shipping .exe and .dll files. It's easy to get around this with the following steps:
    1) Create a new folder (i.e. C:\Code Analysis)
    2) Publish your web application into the new folder
    3) Open FxCop and add all the DLL files from the newly created bin folder to be scrutinized.
    Lots more info and docs are available on MSDN, and you can also download FxCop for free.

    Read the article
