Search Results

Search found 12499 results on 500 pages for 'business analysis'.

Page 65/500 | < Previous Page | 61 62 63 64 65 66 67 68 69 70 71 72  | Next Page >

  • Syntactical analysis with Flex/Bison part 2

    - by Imran
    Hello, I need help with Lex/Yacc programming. I wrote a compiler that performs syntactic analysis on inputs of many statements, and it gives the right output: which statement is used, a constant operator, or a JMP instruction to a given label. Now I have a specific problem with if statements. First the command before the else must be emitted; then, because the command after the else is not needed when the if condition holds, a JMP to the end must be emitted; only after this JMP is the second command emitted. An example may show what I mean:

        Input       adr.  Output
        if(x==0)    10    if(x==0)
        Wait 5      20    WAIT 5
        else        30    JMP 50
        Wait 1      40    WAIT 1
        end         50    END

    My idea is to handle this with a special if rule, something like IF exp jmp_stmt_end stmt_seq END: when an if statement appears in the input, the compiler has to recognize the end of the statement and, like the jmp_stmt already in my compiler (you can download the files from http://bitbucket.org/matrix/changed-tiny), jump only to the end. I hope you understand my problem. Thanks.
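    A minimal sketch of the backpatching idea described above, in Python rather than in Lex/Yacc action code, purely to illustrate the mechanism: emit the JMP with a placeholder target, remember its slot, and patch it once the END address is known. The emit/address helpers and the 10-per-instruction numbering are assumptions taken from the example listing, not part of the poster's grammar.

        # Sketch of backpatching a forward jump for if/else code generation.
        # Assumption: instructions are numbered 10, 20, 30, ... as in the example.
        code = []

        def emit(instr):
            code.append(instr)
            return len(code) - 1          # index of the emitted instruction

        def address(index):
            return (index + 1) * 10       # 10, 20, 30, ... as in the listing

        # if (x == 0) WAIT 5 else WAIT 1 end
        emit("if(x==0)")
        emit("WAIT 5")
        jmp_slot = emit("JMP ???")        # target unknown yet: backpatch later
        emit("WAIT 1")
        end_index = emit("END")
        code[jmp_slot] = f"JMP {address(end_index)}"   # patch the forward jump

        for i, instr in enumerate(code):
            print(address(i), instr)      # 10 if(x==0) ... 30 JMP 50 ... 50 END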

    Read the article

  • visual analysis of web pages in ruby

    - by Clint Miller
    I'm looking to write some code that does visual analysis of web pages, preferably using Ruby. My code will need to be able to determine the top, left, width, height, background color, color, and font size for all the elements in the DOM. Of course, these values can only be calculated once all CSS is applied. So, I don't think that Nokogiri is up for the job. Ultimately, I'm trying to use this data in a VIPS-like (Vision-Based Page Segmentation) algorithm in an attempt to find the main content in downloaded news articles. I've considered using Watir to drive Chrome or Firefox and then extract the data. The problem is that browsers can't be run headless through Watir (I think). Ultimately, this code will be running on an array of Linux servers in a data center. So, the code won't have easy access to an X Server for displaying the browser. I suppose one solution is to use Watir and run a headless X Server on the Linux servers. That's a bit of a pain, but it looks like my best option right now. Does anyone have any better ideas?
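    One way to sketch the extraction step, assuming a headless-capable browser is acceptable: run getComputedStyle inside the page, after all CSS has been applied, and pull the geometry and style values back out. The sketch below uses Python with Selenium purely to keep the illustration self-contained; the poster's Ruby/Watir setup would be analogous, and headless Chrome postdates the original question, so treat this as one possible route rather than what was available then.

        from selenium import webdriver
        from selenium.webdriver.chrome.options import Options

        opts = Options()
        opts.add_argument("--headless")   # no X server needed
        driver = webdriver.Chrome(options=opts)
        driver.get("https://example.com/article")   # hypothetical URL

        # getComputedStyle runs after CSS is applied, which is what
        # HTML-only parsers like Nokogiri cannot see.
        js = """
        return Array.from(document.querySelectorAll('body *')).slice(0, 200).map(el => {
          const r = el.getBoundingClientRect();
          const s = getComputedStyle(el);
          return {tag: el.tagName, top: r.top, left: r.left, width: r.width,
                  height: r.height, color: s.color,
                  background: s.backgroundColor, fontSize: s.fontSize};
        });
        """
        for box in driver.execute_script(js):
            print(box)
        driver.quit()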

    Read the article

  • Performance analysis strategies

    - by Bernd
    I have been assigned a performance-tuning/debugging/troubleshooting task. Scenario: a multi-application environment running on several networked machines using databases. The OS is Unix, the DB is Oracle. Business logic is implemented across applications using synchronous/asynchronous communication. The applications are multi-user, with several hundred call-center users at peak time. User interfaces are web-based. The applications are third-party; I can get access to the developers and the source code. I only have the production system and a functional test environment, no load-test environment. Problem: bad performance! I need fast results; management is going crazy. I have symptom examples like these: user interface actions taking minutes to complete; searching for a customer usually takes 6 seconds, but an immediate subsequent search with the same parameters may take 6 minutes. What would be your strategy for finding the root causes?

    Read the article

  • Using MySQL as data source in Microsoft SQL Server Analysis Services

    - by coldilocks
    Hi, I have installed the latest .NET connector (http://www.mysql.com/downloads/connector/net/); I can add MySQL databases as Data Sources, and I can even browse through the data from Business Intelligence Studio. The problem is that I CANNOT create a data source view, or if I do create one without tables, trying to add them after the fact gives me the same error. Specifically, it looks like the data source view wizard tries to submit queries against the MySQL database using square brackets/braces, and the query bombs. I get an error message like: You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near '[my_db].[cheatType]' at line 2. So, in summary, has anyone been able to create a data source view using MySQL tables and, if so, can they please show me how this can be done? Thanks for any help!

    Read the article

  • Cepstral Analysis for pitch detection

    - by Ohmu
    Hi! I'm looking to extract pitches from a sound signal. Someone on IRC just explained to me how taking a double FFT achieves this. Specifically: (1) take the FFT; (2) take the log of the square of the absolute value (can be done with a lookup table); (3) take another FFT; (4) take the absolute value. I am attempting this using vDSP, and I can't understand how I didn't come across this technique earlier; I did a lot of hunting and asking questions, several weeks' worth. More to the point, I can't understand why I didn't think of it. The vDSP library looks as though it has functions to handle all of these tasks. However, I'm wondering about the accuracy of the final result. I have previously used a technique which scours the frequency bins of a single FFT for local maxima; when it encounters one, it uses a cunning technique (the change in phase since the last FFT) to place the actual peak within the bin more accurately. I am worried that this precision will be lost with the technique I'm presenting here. I guess the technique could be used after the second FFT to get the fundamental accurately, but it kind of looks like the information is lost in step 2. As this is a potentially tricky process, could someone with some experience just look over what I'm doing and check it for sanity? Also, I've heard there is an alternative technique involving fitting a quadratic over neighbouring bins. Is this of comparable accuracy? If so, I would favour it, as it doesn't involve remembering bin phases. So, questions: Does this approach make sense? Can it be improved? I'm a bit worried about the log-square component; there seems to be a vDSP function to do exactly that, vDSP_vdbcon; however, there is no indication that it precalculates a log table. I assume it doesn't, as the FFT function requires an explicit pre-calculation function to be called and passed into it, and this function doesn't. Is there some danger of harmonics being picked up? Is there any cunning way of making vDSP pull out the maxima, biggest first? Can anyone point me towards some research or literature on this technique? And the main question: is it accurate enough? Can the accuracy be improved? I have just been told by an expert that the accuracy is indeed not sufficient. Is this the end of the line? Pi. PS: I get so annoyed when I want to create tags but cannot; we need tags for vDSP, the Accelerate framework, and cepstral analysis.
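    For readers who want to experiment with the recipe above, here is a hedged sketch in NumPy (not vDSP) of the four steps plus a peak search over the quefrency range of interest. The window choice, the epsilon guard, and the 50-500 Hz search range are assumptions, and the peak handling is deliberately minimal.

        # Cepstral pitch detection sketch. Assumes a mono signal `x` at rate
        # `fs` with a single strong pitch; illustrative only.
        import numpy as np

        def cepstral_pitch(x, fs, fmin=50.0, fmax=500.0):
            spectrum = np.fft.rfft(x * np.hanning(len(x)))      # step 1: FFT
            log_power = np.log(np.abs(spectrum) ** 2 + 1e-12)   # step 2: log |X|^2
            cepstrum = np.abs(np.fft.irfft(log_power))          # steps 3-4
            # Quefrency is in samples; the pitch period shows up as a peak.
            qmin, qmax = int(fs / fmax), int(fs / fmin)
            peak = qmin + np.argmax(cepstrum[qmin:qmax])
            return fs / peak                                    # Hz

        fs = 44100
        t = np.arange(4096) / fs
        x = np.sin(2 * np.pi * 220 * t) + 0.5 * np.sin(2 * np.pi * 440 * t)
        print(round(cepstral_pitch(x, fs), 1))                  # roughly 220 Hz

    Note that the peak index is an integer number of samples, which is exactly the quantization the poster worries about; quadratic interpolation over the neighbouring cepstral bins is the usual refinement.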

    Read the article

  • Enterprise Process Maps: A Process Picture worth a Million Words

    - by raul.goycoolea
    Getting Started with Business Transformations

    A well-known proverb states that "A picture is worth a thousand words." In relation to Business Process Management (BPM), a credible analyst might have a few questions. What if the picture was taken from some particular angle, like directly overhead? What if it was taken from only an inch away, or a mile away? What if the photographer did not focus the camera correctly? Does the value of the picture depend on who is looking at it? Enterprise Process Maps are analogous in this sense of relative value. Every BPM project (holistic BPM kick-off, enterprise system implementation, Service-Oriented Architecture, business process transformation, corporate performance management, etc.) should begin with a clear understanding of the business environment, from the biggest-picture representations down to the lowest level required or desired for the particular project type, scope and objectives. The Enterprise Process Map serves as an entry point for the process architecture and is defined as the single highest level of process mapping for an organization. It is constructed and evaluated during the Strategy Phase of the Business Process Management Lifecycle (see Figure 1).

    Fig. 1: Business Process Management Lifecycle

    Many organizations view such maps as visual abstractions, constructed for the single purpose of process categorization. This, in turn, results in a lesser focus on the inherent intricacies of the Enterprise Process view, which are explored in the course of this paper. With the main focus of a large-scale process documentation effort usually underlying an ERP or other system implementation, it is common for the work to be driven by the desire to "get to the details," and to the type of modeling that will derive near-term tangible results. For instance, a project at American Pharmaceutical Company X is driven by the Director of IT. With 120+ systems in place and a lack of standardized processes across the United States, he and the VP of IT have decided to embark on a long-term ERP implementation. At the forefront for both are questions such as: How does my application architecture map to the business? What are each application's functionalities, and where do the business processes utilize them? Where can we retire legacy systems? Well-developed BPM methodologies prescribe numerous model types to capture such information and allow for thorough analysis in these areas. Process-to-application maps, Event-Driven Process Chains, etc. provide this level of detail and facilitate the completion of such project-specific questions. These models and such analysis are appropriately carried out at a relatively low level of process detail (see Figure 2).

    Fig. 2: The Level Concept, Generic Process Hierarchy

    Some of the questions remaining are ones of documentation longevity, the continuation of BPM practice in the organization, process governance and ownership, and process transparency and clarity in business process objectives and strategy.

    The Level Concept in Brief

    Figure 2 shows a generic, four-level process hierarchy depicting the breakdown of a "Process Area" into progressively more detailed process classifications. The number of levels and the names of these levels are flexible, and can be fitted to the standards of the organization's chosen terminology or any other chosen reference model that makes logical sense for both short- and long-term process description. It is at Level 1 (in this case the Process Area level) that the Enterprise Process Map is created. This map and its contained objects become the foundation for a top-down approach to subsequent mapping, object relationship development, and analysis of the organization's processes and its supporting infrastructure. Additionally, this picture serves as a communication device, at an executive level, describing the design of the business in its service to a customer. It seems, then, imperative that the process development effort, and this map, start off on the right foot. Figuring out just what that right foot is, however, is critical and trend-setting in an evolving organization.

    Key Considerations

    Enterprise Process Maps are usually not as living and breathing as other process maps. Just as it would be an extremely difficult task to change the foundation of the Sears Tower or a city plan for the entire city of Chicago, the Enterprise Process view of an organization usually remains unchanged once developed (unless, of course, an organization is at a stage where it is capable of true, high-level process innovation). Regardless, the Enterprise Process Map is a key first step, and one that must be taken in a precise way. What makes this groundwork solid depends not only on the materials used to construct it (process areas), but also on the layout plan and knowledge base of what will be built (the entire process architecture). It seems reasonable that care and consideration are required to create this critical high-level map... but what are the important factors? Does the process modeler need to worry about how many process areas there are? About who is looking at it? Should he only use the color pink because it's his boss's favorite color? Interestingly, and perhaps surprisingly, these are all valid considerations that may just require a bit of structure. Below are three key factors to consider when building an Enterprise Process Map:

    1. Company Strategic Focus
    2. Process Categorization: Customer is Core
    3. End-to-end versus Functional Processes

    Company Strategic Focus

    As mentioned above, the Enterprise Process Map is created during the Strategy Phase of the Business Process Management Lifecycle. From the Oracle Business Process Management methodology for business transformation, it is apparent that business processes exist for the purpose of achieving the strategic objectives of an organization. In a prescribed, top-down approach to process development, it must be ensured that each process fulfills its objectives and, in an aggregated manner, drives fulfillment of the strategic objectives of the company, whether for particular business segments or in a broader sense. This is a crucial point, as the strategic messages of the company must therefore resound in its process maps, in particular one that spans the processes of the complete business: the Enterprise Process Map. One simple example from Company X is shown below (see Figure 3).

    Fig. 3: Company X Enterprise Process Map

    In reviewing Company X's Enterprise Process Map, one can immediately begin to understand the general strategic mindset of the organization. It shows that Company X is focused on its customers, defining 10 of its process areas as belonging to customer-focused categories. Additionally, the organization views these end-customer-oriented process areas as part of customer-fulfilling value chains, while support process areas do not provide as much contiguous value. However, by including both support and strategic process categorizations, it becomes apparent that all processes are considered vital to the success of the customer-oriented focus processes. Below is an example from Company Y (see Figure 4).

    Fig. 4: Company Y Enterprise Process Map

    Company Y, although also a customer-oriented company, sends a differently focused message with its depiction of the Enterprise Process Map. Along the top of the map is the company's product tree, overarching the process areas which, when executed, deliver the products themselves. This indicates one strategic objective of excellence in product quality. Additionally, the view represents a less linear value chain, with strong overlaps of the various process areas. Marketing and quality management are seen as key support processes, as they span the process lifecycle. Often, companies may incorporate graphics, logos and symbols representing customers and suppliers, and other objects to truly send the strategic message to the business. Other times, Enterprise Process Maps may show high-level assignment of responsibility to organizational units, or the application types that support the process areas. Hundreds of formats and focuses can be applied to an Enterprise Process Map. What is of vital importance, however, is that the formats and focuses chosen truly represent the direction of the company, and serve as a driver for focusing the business on the strategic objectives it has set forth.

    Process Categorization: Customer is Core

    In the previous two examples, processes were grouped using differing categories and techniques. Company X showed one support and three customer process categorizations using encompassing chevron objects; Company Y achieved a less distinct categorization using a gradual color scheme. Either way, and in general, modeling of the process areas becomes even more valuable and easily understood within the context of business categorization, be it strategic or otherwise. But how one categorizes processes is typically more complex than simply choosing object shapes and colors. Previously, it was stated that the ideal is a prescribed top-down approach to developing processes, to make certain linkages all the way back up to corporate strategy. But what about external influences? What forces push and pull corporate strategy? Industry maturity, product lifecycle, market profitability, competition, etc. can all drive the critical success factors of a particular business segment, or the company as a whole, in addition to previous corporate strategy. This may seem to be turning into a discussion of theory, but that is far from the case. In fact, in years of recent study and evolution of the way businesses operate, across industries and across the globe, one invariable has surfaced with such strength as to make it undeniable in the game plan of any strategy fit for survival. That constant is the customer. Many of a company's critical success factors, in any business segment, relate to the customer: customer retention, satisfaction, loyalty, etc. Businesses serve customers, and so do a business's processes, mapped or unmapped. The most effective way to categorize processes is in a manner that visualizes convergence to what is core for a company. It is the value chain, beginning with the customer in mind and ending with the fulfillment of that customer, that becomes the core or the centerpiece of the Enterprise Process Map (see Figure 5).

    Fig. 5: Company Z Enterprise Process Map

    Company Z has what may be viewed as several different perspectives or "cuts" baked into its Enterprise Process Map. It has divided its processes into three main categories (top, middle, and bottom): Management Processes, the Core Value Chain, and Supporting Processes. The Core category begins with Corporate Marketing (which contains the activities of beginning to engage customers) and ends with Customer Service Management. Within the value chain, this company has divided into the focus areas of its two primary business lines, Foods and Beverages. Does this mean that areas such as Strategy, Information Management or Project Management are not as important as those in the Core category? No! In some cases, though, depending on the organization's understanding of high-level BPM concepts, use of category names such as "Core," "Management" or "Support" can be a touchy subject. What is important to understand is that no matter the nomenclature chosen, the Core processes are those that drive directly to customer value, Support processes are those which make the Core processes possible to execute, and Management processes are those which steer and influence the Core. Some common terms for these three basic categorizations are Core, Customer Fulfillment, Customer Relationship Management, Governing, Controlling, Enabling, Support, etc.

    End-to-end versus Functional Processes

    Every level of process, high or low (function, task, activity, process/work step, whatever an organization calls it), should add value to the flow of business in an organization. Suppose that within the process "Deliver Package" there is a documented task titled "Stop for ice cream." It doesn't take a process expert to deduce the room for improvement. Though stopping for ice cream may create gain for the one person performing it, it likely benefits neither the organization nor, more importantly, the customer. In most cases, "Stop for ice cream" wouldn't make it past the first pass of To-Be process development. What would make the cut, however, would be a flow of tasks that, each having its own value add, build up to greater and greater levels of process objective. In this case, those tasks would combine to achieve a status of "package delivered." Figure 6 shows a simple example: just as the package can only be delivered (the outcome of the process) after first being retrieved, loaded, and the travel destination reached (the outcomes of the process steps), some higher level of process such as "Play Practical Joke" (e.g., a main process or process area) cannot be completed until a package is delivered. It seems that isolated or functionally separated processes, such as "Deliver Package" (shown in Figure 6), are necessary, but are always part of a bigger value chain. Each of these individual processes must be analyzed within the context of that value chain in order to ensure successful end-to-end process performance. For example, this company's "Create Joke Package" process could be operating flawlessly and efficiently, but if a joke is never developed, the package cannot be created, and so the end-to-end process breaks.

    Fig. 6: End-to-End Process Construction

    That being recognized, it is clear that processes must be viewed as end-to-end, customer-to-customer, and in the context of company strategy. But as can also be seen from the previous example, these vital end-to-end processes cannot be built without the functionally oriented building blocks. Without one, the other cannot be had, or at least not in a complete and organized fashion. As it turns out, though not discussed in depth here, the process modeling effort, BPM organizational development, and comprehensive coverage cannot be fully realized without a semi-functional, process-oriented approach. An Enterprise Process Map, then, should be concerned with both views: the building blocks, and access points to the business-critical end-to-end processes which they construct. Without the functional building blocks, all streams of work needed for any business transformation would be a lost mess of process disorganization. End-to-end views are essential for optimization in context, understanding customer impacts, baselining all project phases, and aligning objectives. Including both views on an Enterprise Process Map allows management to understand the functional orientation of the company's processes, while still providing access to the end-to-end processes, which are most valuable to them (see Figures 7 and 8).

    Fig. 7: Simplified Enterprise Process Map with End-to-End Access Point

    The above examples show two unique ways to achieve a successful Enterprise Process Map. The first example is a simple map that shows a high-level set of process areas and a separate section with the end-to-end processes of concern for the organization. This particular map is filtered to show just one vital end-to-end process for a project-specific focus.

    Fig. 8: Detailed Enterprise Process Map Showing Connected Functional Processes

    The second example shows a more complex arrangement and categorization of functional processes (the names of each process area have been removed). The end-to-end perspective is achieved at this level through the connections (interfaces at lower levels) between these functional process areas. An important point to note is that the organization of these two views of the Enterprise Process Map depends, in large part, on the orientation of its audience and the complexity of the landscape at the highest level. If both are not apparent, the Enterprise Process Map is missing an opportunity to serve as a holistic, high-level view.

    Conclusion

    In the world of BPM, and specifically regarding Enterprise Process Maps, a picture can be worth as many words as the thought and effort that is put into it. Enterprise Process Maps alone cannot change an organization, but they serve more purposes than initially meet the eye, and therefore must be designed in a way that enables a BPM mindset, business process understanding and business transformation efforts. Every Enterprise Process Map will, and should, be different across organizations. Its design will be driven by company strategy, a level of customer focus, and functional versus end-to-end orientations. This high-level description of the considerations of Enterprise Process Maps is not a prescriptive "how to" guide; a company attempting to create one may not have the practical BPM experience to truly explore its options or its impacts on the coming work of business process transformation. The biggest takeaway is that process modeling, at all levels, is a science and an art, and art is open to interpretation. It is critical that the modeler of the highest level of process mapping be cognizant of the message he is delivering and the factors at hand. Without sufficient focus on the design of the Enterprise Process Map, an entire BPM effort may suffer. For additional information please check: Oracle Business Process Management.

    Read the article

  • SQL SERVER – Step by Step Guide to Beginning Data Quality Services in SQL Server 2012 – Introduction to DQS

    - by pinaldave
    Data Quality Services is a very important concept in SQL Server. I have recently started to explore it and I am really learning some good concepts. Here are two very important blog posts which one should go over before continuing with this post: Installing Data Quality Services (DQS) on SQL Server 2012, and Connecting Error to Data Quality Services (DQS) on SQL Server 2012. This article is an introduction to Data Quality Services for beginners; we will be using an Excel file. In the first article we learned to install DQS. In this article we will see how we can build a Knowledge Base and use it to help us identify the quality of the data, as well as help correct bad-quality data. Here are the two very important steps we will be learning in this tutorial: Building a New Knowledge Base, and Creating a New Data Quality Project.

    Let us start building the Knowledge Base. Click on New Knowledge Base. In our project we will be using Excel as a knowledge base. Here is the Excel file which we will be using. There are two columns: one is Colors and the other is Shade. They are independent columns and not related to each other. The point I am trying to show is that Column A contains unique data and Column B contains duplicate records. Clicking on New Knowledge Base will bring up the following screen. Enter the name of the new knowledge base. Clicking NEXT will bring up the following screen, where it will allow you to select the Excel file and will also let you select the source columns. I have selected both Colors and Shade as source columns. Creating a domain is very important. Here you can create a unique domain, or a domain which is compositely built from Colors and Shade. As this is the first example, I will create unique domains: for Colors I will create the domain Colors, and for Shade I will create the domain Shade. Here is the screen which shows how it will look after creating the domains. Clicking NEXT will bring you to the following screen, where you can do the data discovery. Clicking on START will start the processing of the source data provided. The pre-processed data will show various information related to the source data. In our case it shows that the Colors column has unique data whereas Shade has non-unique data, with only two unique data rows. In the next screen you can add more rows as well as see the frequency of the data, as the values are listed uniquely. Clicking NEXT will publish the knowledge base which was just created.

    Now the knowledge base is created. We will take some random data and attempt a DQS implementation over it. I am using another Excel sheet here for simplicity's sake; in reality you can easily use a SQL Server table for the same purpose. Click on New Data Quality Project to start a DQS project. The next screen will ask which knowledge base to use; we will be using the Colors knowledge base which we just created. In the Colors knowledge base we had two columns, 1) Colors and 2) Shade, and in our case we will be using both of the mappings. Users can select one or multiple column mappings here. Now the most important phase of the complete project: click on Start and it will run the cleansing process and show various results. In our case there were two columns to be processed, and it completed the task with the necessary information. It shows that in the Colors column it has not corrected any values by itself, but for a Shade value it does have a suggestion. We can train DQS to correct values, but let us keep that subject for future blog posts. Now click Next and keep the domain Colors selected on the left side. It will show that there are two incorrect values which need to be corrected. This is the place where, once corrected, a value will be auto-corrected in the future. I manually corrected the values here and clicked on the Approve radio buttons. As soon as I clicked the Approve buttons, the rows disappeared from this tab and moved to the Corrected tab. If I had clicked Reject, it would have moved the rows to the Invalid tab instead. In this screen you can see how the two corrected rows are shown. You can click on the Corrected tab and see the six previously validated rows which passed the DQS process. Now let us click on the Shade domain on the left side of the screen. This domain shows very interesting details, as the DQS system guessed the correct answer, Dark, with a confidence level of 77%. That is quite a high confidence level, and manual observation also shows that Dark is the correct answer. I clicked on Approve and the row moved to the Corrected tab. On the next screen DQS shows the summary of all the activities; it also shows how the correction of the quality of the data was performed. The user can export their data to a SQL Server table, CSV file or Excel, and also has the option to export either the data with all the associated cleansing info, or the data only. I will select Data only for demonstration purposes. Clicking Export will generate the file. Let us open the generated file: it looks complete and corrected. Well, we have successfully completed the DQS process. The process is indeed very easy; I suggest you try it out yourself and you will find it very easy to learn. In future posts we will go over advanced concepts. Are you using this feature on your production server? If yes, would you please leave a comment with your environment and business need? It will be interesting to see where it is implemented. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Business Intelligence, Data Warehousing, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology Tagged: Data Quality Services, DQS

    Read the article

  • Collaborate 2010: Spotlight on Oracle Content Management

    - by [email protected]
    Excitement is building for the Collaborate conference, April 18th through the 22nd. Outside of the event being in Las Vegas, which for me often seems to add to the excitement, there will be a great lineup of Oracle Content Management focused sessions. In fact, there are currently over 30 content management sessions scheduled, and attendees will get to hear from customers and partners as well as Oracle experts. Attendees should expect to hear a lot about Oracle Content Management 11g at Collaborate 2010. Roel Stalman and Andy MacMillan will kick off these discussions on Monday, April 19th as they present Oracle Content Management's product strategy and roadmap (10:45 - 11:45). Monday's lineup also includes sessions on Oracle Imaging and Process Management (I/PM) 11g and Oracle Forms Recognition (2:30 - 3:30), which were both released in January. For those customers using older versions of I/PM or Stellent IBPM, be sure not to miss the "migrating to I/PM 11g" session on Monday as well (1:15 - 2:15), as this should give you some insight into the migration process. Check out the entire list of Oracle Content Management sessions here. Another focus at Collaborate this year is to discuss the benefits of using Oracle Content Management with Oracle Applications (Oracle E-Business Suite, PeopleSoft, and Siebel), so be sure to check out these sessions too:

        Accelerating Accounts Payable Processes with Integrated Document Imaging (Monday, April 19th, 3:45 - 4:45)
        Supercharge Your Siebel Sales and Marketing with Integrated Document Management (Tuesday, April 20th, 2:00 - 3:00)
        Oracle Enterprise 2.0 for Oracle Applications: The Value of an Integrated E2.0 Platform (Tuesday, April 20th, 3:15 - 4:15)
        Comprehensive Human Resources Automation with Oracle Content Management (Wednesday, April 21st, 1:00 - 2:00)

    Collaborate is also the perfect opportunity to meet Oracle executives and product experts. Attendees can sign up for 1-on-1 meetings at the event, and there will be someone representing each Oracle Content Management product. These meetings are probably the best way to get your product questions answered face to face. It seems more and more to me that Oracle Content Management customers are viewing Collaborate as "the" conference to attend each year. I hope you have plans to attend, and I will see you there.

    Read the article

  • SQLAuthority News – Fast Track Data Warehouse 3.0 Reference Guide

    - by pinaldave
    http://msdn.microsoft.com/en-us/library/gg605238.aspx I am very excited that the Fast Track Data Warehouse 3.0 reference guide has been announced. As a consultant I have always enjoyed working on Fast Track Data Warehouse projects, as they truly express the potential of the SQL Server engine. Here are a few details of the enhancements in the Fast Track Data Warehouse 3.0 reference architecture. The SQL Server Fast Track Data Warehouse initiative provides a basic methodology and concrete examples for the deployment of a balanced hardware and database configuration for a data warehousing workload. Balance is measured across the key components of a SQL Server installation: storage, server, application settings, and configuration settings for each component are evaluated.

        FTDW 3.0 Architecture: basic component architecture for FT 3.0 based systems.
        New Memory Guidelines: minimum and maximum tested memory configurations by server socket count.
        Additional Startup Options: notes for T-834 and the setting for Lock Pages in Memory.
        Storage Configuration: RAID1+0 now standard (RAID1 was used in FT 2.0).
        Evaluating Fragmentation: query provided for evaluating logical fragmentation.
        Loading Data: additional options for CI table loads.
        MCR: additional detail and explanation of the FTDW MCR Rating.

    Read the white paper on Fast Track data warehousing. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Business Intelligence, Data Warehousing, PostADay, SQL, SQL Authority, SQL Documentation, SQL Download, SQL Query, SQL Server, SQL Tips and Tricks, SQL White Papers, SQLAuthority News, T SQL, Technology

    Read the article

  • Bing brings Twitter aggregation to search results

    - by jamiet
    I read with interest today a post on the Bing blog, Get the Latest on Twitter with Bing Social Search, which describes how tweets are soon going to show up in Bing search results. On the surface that isn't very interesting, as Google has been doing this for a while, but of particular interest to me was the following screenshot: we can see at the bottom of a search result for "TMZ" that Bing is showing us the most popular TMZ stories as determined by the number of tweets that contain links to them. This is great. Bing is applying a principle that those of us in the Business Intelligence (BI) trade have known for ages: a piece of data in isolation is not very interesting, but when you aggregate a lot of that data you find the trends that actually matter, and when you surface that data in a meaningful way people can derive real value from it. That sounds obvious, but this new Bing feature is the first time I have seen the principle applied in a useful way to tweets, and I applaud them for that; it's certainly a lot more useful than the pointless constant tweet scroll that you see on Google. What a shame it's going to be, yet again from Bing, a US-only feature. @Jamiet

    Read the article

  • The Oracle business intelligence suite runs on Windows Server 2008 too, and the client can run on the Vista OS

    - by Fekete Zoltán
    Yesterday at the Oracle BI Hands On event the question came up whether Oracle Business Intelligence Enterprise Edition runs on Windows Server 2008. The answer: YES, Oracle BI EE runs on Windows Server 2008. The answer to the other question is also YES: the client can be Windows Vista. Since Oracle BI is server software, which users reach through a browser to perform analysis and query/report-building tasks, Oracle BI is certified only on server operating systems: Linux, Solaris, HP-UX, AIX and Windows platforms. The currently supported operating systems: Microsoft Windows 2000/2003 Server; Microsoft Windows Server 2008 Enterprise Edition x86 32 bit - Red Hat Enterprise Linux AS 4.x; Red Hat Enterprise Linux Server/Advanced Platform 5 - Novell SUSE 9.x - Oracle Enterprise Linux 4; Oracle Enterprise Linux 5 - Sun Solaris 9 SPARC 32 bit; Sun Solaris 9 SPARC 64 bit; Sun Solaris 10 SPARC 32 bit; Sun Solaris 10 SPARC 64 bit - AIX 5.2 PowerPC 32 bit; AIX 5.2 PowerPC 64 bit; AIX 5.3 PowerPC 32 bit; AIX 5.3 PowerPC 64 bit; AIX 6.1 PowerPC 32 bit; AIX 6.1 PowerPC 64 bit - HP-UX 11.11 PA-RISC 64 bit; HP-UX 11.23 PA-RISC 64 bit; HP-UX 11.23 Itanium 64 bit; HP-UX 11.31 Itanium 64 bit. Operating systems supported for browser access to the dashboards and for interactive analysis work: Windows, Vista, Linux, Solaris, Apple Mac OS 10.x.

    Read the article

  • Highly scalable and dynamic "rule-based" applications?

    - by Prof Plum
    For a large enterprise app, everyone knows that being able to adjust to change is one of the most important aspects of design. I use a rule-based approach a lot of the time to deal with changing business logic, with each rule being stored in a DB. This allows easy changes to be made without diving into nasty details. Now, since C# cannot Eval("foo(bar);"), this is accomplished by using formatted strings stored in rows that are then processed in JavaScript at runtime. This works fine; however, it is less than elegant, and would not be the most enjoyable thing for anyone else to pick up once it becomes legacy. Is there a more elegant solution to this? When you get into thousands of rules that change fairly frequently it becomes a real bear, but this cannot be such an uncommon problem that no one has thought of a better way to do it. Any suggestions? Is the current method defensible? What are the alternatives? Edit: just to clarify, this is a large enterprise app, so whichever solution works, there will be plenty of people (around 10) constantly maintaining its rules and data. Also, the data changes frequently enough to say that some sort of centralized server system is basically a must.
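    One alternative worth sketching: store rules as structured data (field, operator, value) rather than as code strings, and interpret them against a whitelisted operator table, so nothing is ever eval'd. The sketch below is in Python purely for illustration; the rule schema and names are assumptions, and in the poster's C# world the analogous routes would be expression trees or an embedded rules engine rather than this exact code.

        # Hedged sketch: rules stored as data (e.g., rows in a DB) and
        # interpreted via a whitelisted operator table -- no eval of strings.
        import operator

        OPS = {"==": operator.eq, "!=": operator.ne, "<": operator.lt,
               ">": operator.gt, "<=": operator.le, ">=": operator.ge}

        def matches(cond, record):
            # cond example: {"field": "order_total", "op": ">", "value": 1000}
            return OPS[cond["op"]](record[cond["field"]], cond["value"])

        def first_matching_action(rules, record):
            for rule in rules:            # rules would come back from the DB
                if all(matches(cond, record) for cond in rule["conditions"]):
                    return rule["action"]
            return None

        rules = [
            {"conditions": [{"field": "order_total", "op": ">", "value": 1000},
                            {"field": "region", "op": "==", "value": "EU"}],
             "action": "require_manual_review"},
            {"conditions": [], "action": "auto_approve"},   # default rule
        ]
        print(first_matching_action(rules, {"order_total": 1500, "region": "EU"}))
        # -> require_manual_review

    Because the rules are data rather than code, they can be validated on save, versioned, and edited through a UI, which addresses the maintainability worry in the question.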

    Read the article

  • Which language meets my needs? [closed]

    - by Gerald Goward
    I am a junior C# developer and have been working for half a year now. In my company I am working on some enterprise projects, and after doing this for quite some time I realized that I don't like enterprise projects. I have my own browser game written in PHP+MySQL with some simple HTML+CSS, and I currently have 300 active players (those who entered the game at least once per 5 days) :) After thinking about it for quite some time, I realized that I am interested in: (1) web development, and (2) standalone programs (but not enterprise ones); (3) development for mobile platforms is also nice, Android/iOS. The 1st and 2nd categories are what I want the most; Android/iOS is good too. I am NOT interested in big systems which are hard to integrate, and I am not interested in enterprise systems. In the future I would like to start my own business/projects; I would like to create my own projects and/or build a small programmers' company to create and release our own products. Please tell me what programming language(s)/technologies you would advise for this. Thanks a lot! UPD: This is NOT a "which language is better" topic or any flame/holy-war-generating topic, since I am asking for the language that suits my EXACT needs best. I believe C++ is better for low-level coding, while PHP is good for web development and Objective-C is made for iOS. I am still a newbie at programming, so don't hate me, please.

    Read the article

  • Why isn't there a culture of paying for frameworks?

    - by Marty Pitt
    One of the side effects of the recent trend of "lean" startups, and the app store era, is that consumers are more acclimatised to paying small prices for small games/products. E.g.: online SaaS that charges ~$5/month (the Basecamp style of product); games which are short, fun, and cheap ($0.99 from the app store). This market has been defined by "doing one thing well, and charging people for it." DHH of Rails/37signals fame argues that if your website isn't going to make money, don't bother making it. Why doesn't the same rule apply to frameworks? There are lots of software framework projects out there, many of which are mature and feature-rich and offer developers significant value, yet there doesn't seem to be a market or culture of paying for them. It seems that the projects which do charge money are often things like UI component toolsets, and are often marginalized in favour of free alternatives. Why is this? Surely programmers and businesses see the value in contributing back to projects such as Ruby, Rails, Hibernate, Spring, Ant, Groovy, Gradle (the list goes on). I'm not suggesting that these frameworks should start charging anyone who wants to use them, but there must be a meaningful business model that would allow the developers to earn money from the time they invest developing the framework. Any thoughts as to why this model hasn't emerged or succeeded?

    Read the article

  • Offshoring: does it ever work?

    - by DanSingerman
    I know there has been a fair amount of discussion on here about outsourcing/offshoring, and the general opinion seems to be that at best it is difficult, and at worst it fails. I have direct experience of offshoring myself; a previous company where I was a dev manager wanted to send some development offshore, and we ran a pilot scheme to see how well it would work. Of course it was a complete failure, although it is not completely clear to me whether this was down to the offshore devs being less talented, the process, or other factors (no doubt it was really a combination). I can see as a business how offshoring looks attractive (much lower day rate), but as far as I can see, the only way it could possibly work is if you do exceptionally detailed design up front, with incredibly detailed specifications; and by the time you have invested in producing that, you have probably spent nearly as much as if you had written the actual code locally (which I think is an instance of No Silver Bullet). So, what I want to know is, does anyone here have any experience of offshoring actually working, ever? Especially if there are any success stories of it working in a semi-agile way? I know there are developers here from all over the world; has anyone worked on an offshore project they consider successful?

    Read the article

  • PASS Summit Preconference and Sessions

    - by Davide Mauri
    I'm very pleased to announce that I'll be delivering a pre-conference session at PASS Summit 2012. I'll speak about Business Intelligence again (as I did in 2010), but this time I'll focus only on the Data Warehouse, since it's a big topic even on its own. I'll discuss not only what a Data Warehouse is and how it can be modeled and built, but also how its development can be approached using an Agile approach, bringing in the experience I have gathered in this field. Building the Agile Data Warehouse with SQL Server 2012: http://www.sqlpass.org/summit/2012/Sessions/SessionDetails.aspx?sid=2821 I'm sure you'll like it, especially if you're starting to create a BI solution and you're wondering what a Data Warehouse is, whether it is still useful nowadays when everyone talks about Self-Service BI and in-memory databases, and what's the correct path to follow in order to have a successful project up and running. Besides this preconference, I'll also deliver a regular session, this time related to database administration, monitoring and tuning: DMVs: Power in Your Hands: http://www.sqlpass.org/summit/2012/Sessions/SessionDetails.aspx?sid=3204 Here we'll dive into the most useful DMVs, so that you'll see how they can help in everyday management in order to discover, understand and optimize your SQL Server installation, from the server itself down to the single query. See you there!

    Read the article

  • What to do if you find a vulnerability in a competitor's site?

    - by user17610
    While working on a project for my company, I needed to build functionality that allows users to import/export data to/from our competitor's site. While doing this, I discovered a very serious security exploit that could, in short, perform any script on the competitor's website. My natural feeling is to report the issue to them in the spirit of good-will. Exploiting the issue to gain advantage crossed my mind, but I don't want to go down that path. So my question is, would you report a serious vulnerability to your direct competition, in order to help them? Or would you keep your mouth shut? Is there a better way of going about this, perhaps to gain at least some advantage from the fact that I'm helping them by reporting the issue? Update (Clarification): Thanks for all your feedback so far, I appreciate it. Would your answers change if I were to add that the competition in question is a behemoth in the market (hundreds of employees in several continents), and my company only started a few weeks ago (three employees)? It goes without saying, they most definitely will not remember us, and if anything, only realize that their site needs work (which is why we entered this market in the first place). I confess this is one of those moral vs. business toss-ups, but I appreciate all the advice.

    Read the article

  • How come there is still so much programming work?

    - by jd_505
    First, I'd like to say that I am not sure this question meets the guidelines; I think it can go under the "Freelancing and business concerns" bullet, but I am not sure. Anyway, I will give it a shot. I wonder how programming jobs haven't yet "dried up" because of software evolution. For example, I am a developer myself, which means that I care about software (I mean I am not the type of guy who needs a computer mainly just to browse the Internet), and still I wouldn't mind if I never received any more updates on my Ubuntu machine. I find that it provides everything I need, and while the updates provide various bug fixes and improvements, I wouldn't mind using it in its current state for the rest of my life; in two years of Ubuntu usage I have never bumped into a serious bug or problem. Another example is Windows: almost half of its users still use XP, which is practically ancient, yet they find it satisfies all their needs (and I agree with them). I could give many more examples, but by now you understand my point and my question. While new "trends" appear all the time (like a new mobile OS) which run on new platforms and require some fresh development work, the majority of the software effort still goes into what I consider "completed projects", or at least a state of a project that is close enough to be considered complete. Do you have an explanation? I can't think of the right tags for this question; please edit it the way you find most appropriate. Thanks.

    Read the article

  • Types of semantic bugs, logic errors [closed]

    - by C-Otto
    I am a PhD student and currently focus on automatically finding instances of new types of bugs in (Java) programs that cannot be found by existing tools like FindBugs. The existing tool is currently used to prove/disprove termination of (Java) programs. I have some ideas (see below), but I could use more input from you (experienced programmers, potential users of my tool). What kind of bugs do you wish to find? What types of bugs exist and might be suitable for my analysis? One strength of the approach I use is detailed information about the heap. So, in contrast to FindBugs, I can work with knowledge of the form "variable x and variable y are disjoint on the heap" or "variable z is not cyclic". It is also possible to see if a method might have side effects (and if so, which variables may/may not be affected by it).

    Example 1: Vacuous call:

        Graph graphOne = createGraph();
        Graph graphTwo = createGraph();
        Node source = graphTwo.getRootNode();
        for (Node n : graphOne.getNodes()) {
            if (areConnected(source, n)) {
                graphTwo.addNode(n);
            }
        }

    Imagine createGraph() creates a fresh graph, so that graphOne and graphTwo are disjoint on the heap. Then, because source is taken from graphTwo instead of graphOne, the call to areConnected always returns false. In this situation I could find out that the call to areConnected is useless (because it does not have any side effect and the return value is always false), which helps find the real bug (taking source from the wrong graph). For this, the information that x and y are disjoint (because graphOne and graphTwo are disjoint) is crucial. This bug is related to calling x.equals(y) where x and y are objects of different classes. In this scenario, most implementations of equals() always return false, which most likely is not the intended result. FindBugs already finds this bug (hardcoded to equals(); the semantics of the implementation is not checked).

    Example 2: Useless code: someCode(); while (something()) { yetMoreSomething(); } moreCode(); In the case that the loop (that is, the code in something() and yetMoreSomething()) does not modify anything visible outside the loop, it does not make sense to run this code: the program has the same behaviour as someCode(); moreCode() (i.e., without the loop). To find this out, one needs detailed information about the side effects of the (possibly useless) code. If I can prove that the code does not have any side effect that can be observed afterwards (in the example: in moreCode() or later), then the code indeed is useless. Of course, here input/output of any form must be seen as a side effect, so that a System.out.println(...) is not considered useless.

    Example 3: Ignored return value: Instead of x = foo(); and making use of x, the method is called without storing the result: foo();. If the method does not have any side effect, its invocation is useless and can be dropped. Most likely, the bug here is that the returned value should have been used. Here, too, detailed information about side effects is needed.

    Can you think of similar types of bugs that might be detected (only) with detailed information about the heap, side effects, semantics of called methods, ...? Did you encounter bugs related to the ones shown above in "real life"? By the way, the tool is AProVE and Java-related publications can be found on my homepage. Thanks a lot, Carsten
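    As a rough illustration of Example 3, here is a hedged toy checker using Python's ast module (rather than Java bytecode, and without the heap/side-effect information the post argues is essential): it flags expression statements that discard a call's return value, a crude, purely syntactic stand-in for the real analysis.

        # Toy lint for "ignored return value": flag bare-call statements.
        # A real tool would also prove the callee is side-effect free;
        # this sketch has no such information. Requires Python 3.9+ (ast.unparse).
        import ast

        SOURCE = """
        def foo():
            return 42

        x = foo()
        foo()          # result ignored -- flagged
        print("hi")    # also a bare call; a real tool would whitelist I/O
        """

        tree = ast.parse(SOURCE)
        for node in ast.walk(tree):
            if isinstance(node, ast.Expr) and isinstance(node.value, ast.Call):
                name = ast.unparse(node.value.func)
                print(f"line {node.lineno}: return value of {name}(...) ignored?")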

    Read the article

  • PASS Summit 2012: keynote and Mobile BI announcements #sqlpass

    - by Marco Russo (SQLBI)
    Today at PASS Summit 2012 there were several announcements during the keynote. Moreover, some news was not highlighted in the keynote but is equally important, if not more so, for the BI community. Let's start with the big news in the keynote (other details on the SQL Server Blog):

        Hekaton: this is the codename for the in-memory OLTP technology that will appear (I suppose) in the next release of the SQL Server relational engine. The improvement in performance and scalability is impressive and it enables new scenarios. I'm curious to see whether it can also be used to improve ETL performance, and how it differs from using SSD technology.

        Updates on Columnstore: in the next major release of SQL Server, columnstore indexes will be updatable and it will be possible to create a clustered columnstore index. This is really great news for near-real-time reporting needs!

        Polybase: SQL Server 2012 Parallel Data Warehouse (PDW), which will debut in 2013, will include the Polybase technology. Using Polybase, a single T-SQL query will run across relational data and Hadoop data: a single query language for both. This sounds really interesting for using Big Data in a more integrated way with existing relational databases and, of course, for loading a data warehouse using Big Data, which is the ultimate goal that all of us BI pros have, right?

        SQL Server 2012 SP1: Service Pack 1 for SQL Server 2012 is available now, and it enables the use of PowerPivot for SharePoint and Power View on a SharePoint 2013 installation with Excel 2013.

        Power View works with Multidimensional cubes: the long-awaited feature of being able to use Power View with Multidimensional cubes was shown by Amir Netz in an amazing demonstration during the keynote. The interesting thing is that the data model behind it was based on a many-to-many relationship (something that is not fully supported by Power View with Tabular models). Another interesting aspect is that it is Analysis Services 2012 that supports DAX queries run on a Multidimensional model, enabling the use of any future tool generating DAX queries on top of a Multidimensional model. There is still no info about availability, but this is *not* included in SQL Server 2012 SP1.

    So what about Mobile BI? Well, even if not announced during the keynote, there is a dedicated session on this topic and there is very important news in this area:

        iOS, Android and Microsoft mobile platforms: the commitment is to get data exploration and visualization capabilities working by June 2013. This should impact at least Power View and SharePoint/Excel Services. This is the type of UI experience we are all waiting for, in order to satisfy the requests coming from users and customers. The important news here is that native applications will be available for both iOS and Windows 8, so it seems that Android will initially be supported only through the web. Unfortunately we haven't seen any demo, so it's not clear what the offline navigation experience will be (or whether there will be one). But at least we know that Microsoft is working on native applications in this area. I'm not too surprised that HTML5 is not the magic bullet for all the platforms. The next PASS Business Analytics conference in 2013 seems a good place to see this in action, even if I hope we don't have to wait another six months before seeing some demo of native BI applications on mobile platforms!

        Viewing Reporting Services reports on iPad is supported starting with SQL Server 2012 SP1, which was released today. This is another good reason to install SP1 on SQL Server 2012.

    If you are at PASS Summit 2012, come and join me, Alberto Ferrari and Chris Webb at our book-signing event tomorrow, Thursday, November 8, at the bookstore between 12:00pm and 12:30pm, or follow one of our sessions!

    Read the article

  • Filtering code elements when analyzing source code.

    - by Martin
    Hi everybody, I am currently conducting a survey about source code analysis, and the thing that puzzles me greatly is what it is that project managers and developers would like to filter out when analyzing source code (especially when applying OOP metrics; e.g., skipping insignificant methods and classes during analysis, or filtering context-based elements according to the type of project). If you have any suggestions based on your experience with code analysis, I will greatly appreciate it if you can share some ideas about filtering of elements. Thanks, Martin

    Read the article

  • What is the current status on Microsoft ProClarity?

    - by Ali_Abadani
    I don't really know how to compose this question. My company has been using Microsoft ProClarity for a few years and we have quite a few users working with it, publishing books and doing ad-hoc analysis. With the new Microsoft BI solutions, it seems like they are completely moving away from ProClarity and replacing the OLAP analysis with Excel. I understand that with SharePoint and integration with PerformancePoint and Reporting Services the dashboards and reports would be done in SharePoint, but what about the analysis? Any ideas?

    Read the article
