Search Results

Search found 3717 results on 149 pages for 'escape analysis'.

  • Webcast: Extreme Analytics Without Limits

    - by Rob Reynolds
    Event Date: November 8, 2012. Event Time: 1 p.m. PT / 4 p.m. ET. If you are a current Oracle Business Intelligence Applications customer, or thinking about implementing the BI Apps, please join us for this Webcast to learn how the combination of the Oracle Exalytics In-Memory Machine and Oracle's market-leading analytic applications enables you to go beyond the traditional boundaries of data analysis and get the insight you need from massive volumes of data – all at the speed of thought. See how you can benefit from running your analytic applications on Oracle Exalytics to: lower TCO; improve operational decision making and enhance competitive advantage; and deliver speed-of-thought analysis – anytime, anywhere. Click here to register.

  • Replace conditional with polymorphism refactoring or similar?

    - by Anders Svensson
    Hi, I have tried to ask a variant of this question before. I got some helpful answers, but still nothing that felt quite right to me. It seems to me this shouldn't really be that hard a nut to crack, but I'm not able to find an elegant, simple solution. (Here's my previous post, but please try to look at the problem stated here as procedural code first, so as not to be influenced by the earlier explanation, which seemed to lead to very complicated solutions: http://stackoverflow.com/questions/2772858/design-pattern-for-cost-calculator-app)

    Basically, the problem is to create a calculator for the hours needed for projects that can contain a number of services, in this case "writing" and "analysis". The hours are calculated differently for the different services. Writing is calculated by multiplying a "per product" hour rate by the number of products, and the more products are included in the project, the lower the hour rate is, but the total number of hours accumulates progressively (i.e. for a medium-sized project you take the small-range pricing first and then add the medium-range pricing up to the number of actual products). Analysis is much simpler: it is just a bulk rate for each size range.

    How would you refactor this into an elegant and preferably simple object-oriented version? (Please note that I would never write it like this in a purely procedural manner; this is just to show the problem succinctly.) I have been thinking in terms of the factory, strategy and decorator patterns, but can't get any of them to work well. (I read Head First Design Patterns a while back, and both the decorator and factory patterns described there have some similarities to this problem, but I have trouble seeing them as good solutions as stated. The decorator example seems very complicated for just adding condiments, but maybe it could work better here, I don't know. And the factory pattern example with the pizza factory... well, it just seems to create such a ridiculous explosion of classes, at least in their example. I have found good use for factory patterns before, but I can't see how I could use one here without getting a really complicated set of classes.)

    The main goal would be to have to change only one place (loose coupling etc.) if I were to add a new parameter, say another size (like XSMALL) and/or another service (like "administration"). Here's the procedural code example:

        public class Conditional
        {
            private int _numberOfManuals;
            private string _serviceType;
            private const int SMALL = 2;
            private const int MEDIUM = 8;

            public int GetHours()
            {
                if (_numberOfManuals <= SMALL)
                {
                    if (_serviceType == "writing") return 30 * _numberOfManuals;
                    if (_serviceType == "analysis") return 10;
                }
                else if (_numberOfManuals <= MEDIUM)
                {
                    if (_serviceType == "writing")
                        return (SMALL * 30) + 20 * (_numberOfManuals - SMALL);
                    if (_serviceType == "analysis") return 20;
                }
                else // i.e. LARGE
                {
                    if (_serviceType == "writing")
                        return (SMALL * 30) + (20 * (MEDIUM - SMALL)) + 10 * (_numberOfManuals - MEDIUM);
                    if (_serviceType == "analysis") return 30;
                }
                return 0; // Just a default fallback for this contrived example
            }
        }

    All replies are appreciated! I hope someone has a really elegant solution to this problem that I actually thought from the beginning would be really simple... Regards, Anders
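
    To make the refactoring concrete, here is a minimal strategy-pattern sketch of the "replace conditional with polymorphism" idea, transliterated into JavaScript; the class names and the band arithmetic are illustrative assumptions drawn from the description above, not code from the original post:

        // Each service is a strategy object with its own getHours(); adding a
        // service or a size band means adding/editing one class, not a conditional.
        const SMALL = 2;
        const MEDIUM = 8;

        class WritingService {
          // Progressive pricing: 30 h/manual up to SMALL, then 20 h/manual up to
          // MEDIUM, then 10 h/manual beyond that.
          getHours(numberOfManuals) {
            const small = Math.min(numberOfManuals, SMALL);
            const medium = Math.min(Math.max(numberOfManuals - SMALL, 0), MEDIUM - SMALL);
            const large = Math.max(numberOfManuals - MEDIUM, 0);
            return small * 30 + medium * 20 + large * 10;
          }
        }

        class AnalysisService {
          // Bulk rate per size range.
          getHours(numberOfManuals) {
            if (numberOfManuals <= SMALL) return 10;
            if (numberOfManuals <= MEDIUM) return 20;
            return 30;
          }
        }

        class Project {
          constructor(numberOfManuals, services) {
            this.numberOfManuals = numberOfManuals;
            this.services = services;
          }
          getHours() {
            return this.services.reduce(
                (total, s) => total + s.getHours(this.numberOfManuals), 0);
          }
        }

        const project = new Project(10, [new WritingService(), new AnalysisService()]);
        console.log(project.getHours()); // (2*30 + 6*20 + 2*10) + 30 = 230

    The same shape carries over to C# with an IService interface; the point is that a new size or service touches one strategy class rather than every branch of GetHours().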

  • Tooltips problem, making this JavaScript work with my Smarty foreach loop, help please!

    - by Kyle Sevenoaks
    I am using an example of tooltips from http://www.dynamicdrive.com/dynamicindex5/stickytooltip.htm on www.euroworker.no/order. I have this code here to work with, but it just doesn't seem to work correctly; I've tried everything I can think of (not a lot of things). Here's the code:

        {foreach from=$cart.cartItems item="item" name="cart"}
        <div class="{zebra loop="cart"}">
          <div id="sgproductview">
            <div id="cart2Varekode">
              <p>
                {if $product.sku}
                  <span class="param">{$item.product.sku}</span>
                {else}
                  <span>{img src=$item.Product.DefaultImage.paths.1 alt=$item.Product.name_lang|escape}</span>
                {/if}
              </p>
            </div>
            <div id="cart2Produkt">
              <p>
                {if $item.Product.ID}
                  <a href="{productUrl product=$item.Product}" data-tooltip="sticky{$smarty.foreach.cart.iteration}" target="_blank">{$item.Product.name_lang|truncate:20}</a>
                {else}
                  <span>{$item.Product.name_lang|truncate:20}</span> </a>
                {/if}
              </p>
              <p>
                {include file="order/itemVariations.tpl"}
                {include file="order/block/itemOptions.tpl"}
                {if $multi}
                  {include file="order/selectItemAddress.tpl" item=$item}
                {/if}
              </p>
            </div>
            {if $item.Product.DefaultImage.paths.3}
            <div id="mystickytooltip" class="stickytooltip">
              <div style="padding:5px;">
                <div id="sticky1" class="atip" style="width:200px;">
                  <img src="{$item.Product.DefaultImage.paths.3}" alt="{$item.Product.name_lang|escape}"><br>
                  {$item.Product.name_lang}
                </div>
                <div id="sticky2" class="atip" style="width:200px;">
                  <img src={$item.Product.DefaultImage.paths.3} alt="{$item.Product.name_lang|escape}"><br>
                  {$item.formattedPrice}
                </div>
                <div id="sticky3" class="atip" style="width:200px;">
                  <img src="{$item.Product.DefaultImage.paths.3}" alt="{$item.Product.name_lang|escape}"><br>
                  {$item.Product.name_lang}PRODUCT 3
                </div>
                <div id="sticky4" class="atip" style="width:200px;">
                  <img src="{$item.Product.DefaultImage.paths.3}" alt="{$item.Product.name_lang|escape}"><br>
                  {$item.Product.name_lang}
                </div>
              </div>
            </div>
            {/if}
            <div id="cart2Price">
              <p class="actualPrice">{$item.formattedPrice}</p>
            </div>
            <div id="salg"></div>
            <div id="cart2Salg"><p></p></div>
            <div id="antallbox">
              <p class="cartQuant">{textfield name="item_`$item.ID`" class="text"}</p>
            </div>
            <div id="cart2Total">
              <p>
                {if $item.count == 1}
                  <span class="basePrice">{$item.formattedBasePrice}</span><span class="actualPrice">{$item.formattedPrice}</span>
                {else}
                  {$item.formattedDisplaySubTotal}
                  <div class="subTotalCalc">
                    {$item.count} x <span class="basePrice">{$item.formattedBasePrice}</span><span class="actualPrice">{$item.formattedPrice}</span>
                  </div>
                {/if}
              </p>
            </div>
            <div id="delete">
              {if 'ENABLE_WISHLISTS'|config}
                <a href="{link controller=order action=moveToWishList id=$item.ID query="return=`$return`"}">{t _move_to_wishlist}</a>
              {/if}
              <a id="slett" href="{link controller=order action=delete id=$item.ID query="return=`$return`"}" title="Slett"><!--{t _remove}--></a>
            </div>
          </div>
        </div>
        {/foreach}

    Can anyone help? {html_image} doesn't work, by the way, and all the extensions are present and correct.

  • Spreadsheet RDBMS

    - by John Nilsson
    I'm looking for software (or a set of software) that will let me combine spreadsheet and database workflows: data entry in a spreadsheet to enable simple entry from the clipboard; analysis based on joins, unions and aggregates; and pivot/data pilot summaries. So far I've only found either spreadsheets OR db applications, but no good combination. OO Base with Calc for tables doesn't support aggregates, f.ex.; Google Spreadsheet + Visualization API doesn't support unions or joins; Zoho DB doesn't let me paste from the clipboard. Any hints on software that could be used?

    Basically I'm trying to do some analysis of my personal bank transactions.

    Problem 1, ETL: the data has to be moved from my bank to a database. My current solution is to manually copy and paste the data into one spreadsheet per account from my internet bank. Pains: not very scriptable; lots of scrolling to reach the point to paste; have to apply sorting and formatting to the pasted data each time.

    Problem 2, analysis: I then want to aggregate the different accounts in one sweep to track transfers per type of transfer over all accounts. The actual aggregation is still unsolved, because I can't find a UNION equivalent in the spreadsheets I've tried.
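
    For what it's worth, the union-plus-aggregate step being asked about is only a few lines in any scripting language once the per-account data is exported; a hedged JavaScript sketch (the transaction fields are invented for illustration):

        // UNION all accounts, then SUM(amount) GROUP BY type - the operation the
        // spreadsheets above are missing. Field names are illustrative assumptions.
        const accountA = [
          { type: 'groceries', amount: -120 },
          { type: 'salary',    amount: 2500 },
        ];
        const accountB = [
          { type: 'groceries', amount: -80 },
          { type: 'rent',      amount: -900 },
        ];

        const all = [...accountA, ...accountB];          // the UNION
        const byType = all.reduce((totals, tx) => {      // the aggregate
          totals[tx.type] = (totals[tx.type] || 0) + tx.amount;
          return totals;
        }, {});

        console.log(byType); // { groceries: -200, salary: 2500, rent: -900 }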

  • Suppress task switch keys (winkey, alt-tab, alt-esc, ctrl-esc) using low-level keyboard hook

    - by matt
    I'm trying to suppress task switch keys (such as winkey, alt-tab, alt-esc, ctrl-esc, etc.) by using a low-level keyboard hook. I'm using the following LowLevelKeyboardProc callback:

        IntPtr HookCallback(int nCode, IntPtr wParam, ref KBDLLHOOKSTRUCT lParam)
        {
            if (nCode >= 0)
            {
                bool suppress = false;

                // Suppress left and right windows keys.
                if (lParam.Key == VK_LWIN || lParam.Key == VK_RWIN) suppress = true;

                // Suppress alt-tab.
                if (lParam.Key == VK_TAB && HasAltModifier(lParam.Flags)) suppress = true;

                // Suppress alt-escape.
                if (lParam.Key == VK_ESCAPE && HasAltModifier(lParam.Flags)) suppress = true;

                // Suppress ctrl-escape.
                /* How do I hook CTRL-ESCAPE ? */

                // Suppress keys by returning 1.
                if (suppress) return new IntPtr(1);
            }
            return CallNextHookEx(HookID, nCode, wParam, ref lParam);
        }

        bool HasAltModifier(int flags)
        {
            return (flags & 0x20) == 0x20;
        }

    However, I'm at a loss as to how to suppress the CTRL-ESC combination. Any suggestions? Thanks.

  • Dot Matrix printing in C#?

    - by Dale
    I'm trying to print to dot matrix printers (various models) out of C#. Currently I'm using Win32 API calls (you can find a lot of examples online) to send escape codes directly to the printer from my C# application. This works great, but...

    My problem is that because I'm generating the escape codes and not relying on the Windows print system, the printouts can't be sent to any "normal" printers or to things like PDF print drivers. (This is now causing a problem, as we're trying to use the application on a 2008 Terminal Server using Easy Print [which is XPS based].)

    The question is: how can I print formatted documents (invoices on pre-printed stationery) to dot matrix printers (Epson, Oki and Panasonic... various models) out of C# without using direct printing, escape codes etc.?

    Just to clarify: I'm trying things like GDI+ (System.Drawing.Printing), but the problem is that it's very hard to get things to line up like the old code did. (The old code sent the characters direct to the printer, bypassing the Windows driver.) Any suggestions how things could be improved so that they could use GDI+ but still line up like the old code did?

  • Separating Javascript functions

    - by msharma
    I am wondering how JavaScript files get included in a JSP – can we put any code that the JSP will recognize, and not just JavaScript, in the .js file? I have some common JavaScript code which needs to get executed on different pages, so I decided to place it in its own separate .js file and include it on all JSPs which call that function. The js function now refers to a key from a properties file and some other non-JavaScript code:

        function openPrivacyStmntWindow(){
            var url = <h:outputText escape="false" value="\"#{urls.url_privacyStatement}\";" />
            newwindow = window.open(url,'Terms','height=600,width=800,left=300,top=100,scrollbars=1');
            newwindow.focus();
            return false;
        }

    This function worked just fine when it was included in the JSP itself. Now that I have separated it into its own file, it doesn't. Do I need to include the properties bundle in this file? The value="\"#{urls.url_privacyStatement}\";" is referring to a bundle called "urls" which has a key called "url_privacyStatement". Also, in line 1, var url = <h:outputText escape="false" value="\"#{urls.url_privacyStatement}\";" />, will the <h:outputText escape="false" ... /> cause any issues? Thanks.
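
    A note on the likely mechanics here: a static .js file is served to the browser verbatim and is never processed by the JSP/JSF engine, so the <h:outputText> tag cannot be evaluated there. A sketch of the usual pattern (the parameterized function signature is an assumption, not code from the post) keeps the .js file pure JavaScript and passes the bundle value in from the page:

        // common.js - served as-is, so it must not contain JSF tags;
        // the URL is passed in by the page that includes this file.
        function openPrivacyStmntWindow(url) {
          var newwindow = window.open(url, 'Terms',
              'height=600,width=800,left=300,top=100,scrollbars=1');
          newwindow.focus();
          return false;
        }

        // In the JSP itself (which *is* processed), the EL expression still
        // works inline, e.g. in an onclick:
        //   onclick="return openPrivacyStmntWindow(
        //       <h:outputText escape="false" value="'#{urls.url_privacyStatement}'"/>);"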

  • How to access method variables from within an anonymous function in JavaScript?

    - by Hussain
    I'm writing a small ajax class for personal use. In the class, I have a "post" method for sending POST requests. The post method has a callback parameter. In the onreadystatechange property, I need to call the callback method. Something like this:

        this.requestObject.onreadystatechange = function() {
            callback(this.responseText);
        }

    However, I can't access the callback variable from within the anonymous function. How can I bring the callback variable into the scope of the onreadystatechange anonymous function?

    Edit: here's the full code so far:

        function request() {
            this.initialize = function(errorHandeler) {
                try {
                    try {
                        this.requestObject = new XDomainRequest();
                    } catch(e) {
                        try {
                            this.requestObject = new XMLHttpRequest();
                        } catch (e) {
                            try {
                                this.requestObject = new ActiveXObject("Msxml2.XMLHTTP"); // newer versions of IE5+
                            } catch (e) {
                                this.requestObject = new ActiveXObject("Microsoft.XMLHTTP"); // older versions of IE5+
                            }
                        }
                    }
                } catch(e) {
                    errorHandeler();
                }
            }

            this.post = function(url, data) {
                var response;
                var escapedData = "";
                if (typeof data == 'object') {
                    for (i in data) {
                        escapedData += escape(i) + '=' + escape(data[i]) + '&';
                    }
                    escapedData = escapedData.substr(0, escapedData.length - 1);
                } else {
                    escapedData = escape(data);
                }
                this.requestObject.open('post', url, true);
                this.requestObject.setRequestHeader("Content-type", "application/x-www-form-urlencoded");
                this.requestObject.setRequestHeader("Content-length", data.length);
                this.requestObject.setRequestHeader("Connection", "close");
                this.requestObject.onreadystatechange = function() {
                    if (this.readyState == 4) {
                        // call callback function
                    }
                }
                this.requestObject.send(data);
            }
        }
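
    A note on scope here: by normal closure rules a callback variable would be visible inside the anonymous function; what usually breaks is `this`, which inside onreadystatechange refers to the XMLHttpRequest, not the surrounding object. A sketch of the standard capture pattern (the callback parameter is added here as an assumption, since post() above never declares one):

        this.post = function (url, data, callback) {
          var request = this.requestObject; // capture: 'this' is rebound in the handler

          request.open('post', url, true);
          request.setRequestHeader('Content-type',
              'application/x-www-form-urlencoded');

          request.onreadystatechange = function () {
            // 'callback' and 'request' are reachable here by closure.
            if (request.readyState == 4) {
              callback(request.responseText);
            }
          };
          request.send(data);
        };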

  • How to post non-latin1 data to a non-UTF8 site using Perl?

    - by ZyX
    I want to post Russian text to a CP1251 site using LWP::UserAgent, and get the following results:

        $text = "Русский текст";

        FIELD_NAME => $text
        # result: ??? ?'???'???'?????????????? ?'?'?????????'???'?'

        $text = Encode::decode_utf8($text);
        FIELD_NAME => $text
        # result: ? ???????????? ?'???????'

        FIELD_NAME => Encode::encode("cp1251", $text)
        # result: ?????+?+?????? ???????+??

        FIELD_NAME => URI::Escape::uri_escape_utf8($text)
        # result: %D0%a0%d1%83%d1%81%d1%81%d0%ba%d0%b8%d0%b9%20%d1%82%d0%b5%d0%ba%d1%81%d1%82

    How can I do this? The Content-Type must be x-www-form-urlencoded. You can find a similar form here; there you can just escape any non-latin character using the &#...; form, but trying to escape it that way in FIELD_NAME results in 10561091108910891 10901077108210891 (every &, # and ; is stripped out of the string).
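
    Whatever the Perl specifics, the underlying recipe is: get the text as characters, encode those characters as CP1251 bytes, then percent-escape each byte. A Node.js sketch of that pipeline (iconv-lite is an assumed third-party dependency; any windows-1251 codec works the same way):

        // decode -> encode to windows-1251 bytes -> percent-escape each byte
        const iconv = require('iconv-lite'); // assumed dependency

        function formEncodeCp1251(text) {
          const bytes = iconv.encode(text, 'windows-1251'); // Buffer of CP1251 bytes
          let out = '';
          for (const b of bytes) {
            if (b === 0x20) out += '+'; // form encoding writes space as '+'
            else out += '%' + b.toString(16).toUpperCase().padStart(2, '0');
          }
          return out;
        }

        console.log(formEncodeCp1251('Русский текст'));
        // -> %D0%F3%F1%F1%EA%E8%E9+%F2%E5%EA%F1%F2 (single-byte CP1251, not UTF-8)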

  • Javascript Cookie Function not working for Domain

    - by danit
    Here are the functions I'm using.

    Set cookie:

        function set_cookie ( name, value, exp_y, exp_m, exp_d, path, domain, secure )
        {
            var cookie_string = name + "=" + escape ( value );
            if ( exp_y )
            {
                var expires = new Date ( exp_y, exp_m, exp_d );
                cookie_string += "; expires=" + expires.toGMTString();
            }
            if ( path )
                cookie_string += "; path=" + escape ( path );
            if ( domain )
                cookie_string += "; domain=" + escape ( domain );
            if ( secure )
                cookie_string += "; secure";
            document.cookie = cookie_string;
        }

    Read cookie:

        function get_cookie ( cookie_name )
        {
            var results = document.cookie.match ( '(^|;) ?' + cookie_name + '=([^;]*)(;|$)' );
            if ( results )
                return ( unescape ( results[2] ) );
            else
                return null;
        }

    Delete cookie:

        function delete_cookie ( cookie_name )
        {
            var cookie_date = new Date ( ); // current date & time
            cookie_date.setTime ( cookie_date.getTime() - 1 );
            document.cookie = cookie_name += "=; expires=" + cookie_date.toGMTString();
        }

    The jQuery I use to construct the cookie:

        if (get_cookie('visible') == 'no') {
            $("#wrapper").hide();
            $(".help").hide();
            $("#slid .show").show();
            $("#slid .hide").hide();
        } else {
            $("#slid .show").hide();
            $("#slid .hide").show();
        }

        $("#slider").click(function() {
            if (get_cookie('visible') == null) {
                set_cookie('visible', 'no', 2020, 01, 01, '/', 'domain.com');
            } else {
                delete_cookie('visible');
            }
            $(".help").slideToggle();
            $("#wrapper").animate({ opacity: 1.0 }, 200).slideToggle(200, function() {
                $("#slid img").toggle();
            });
        });

    I'm trying to set the cookie for all pages that exist under domain.com with the path '/'. However, using these functions and jQuery it doesn't appear to be working. Can anyone give me an idea of where I'm going wrong?
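
    One likely culprit (a guess, since the post doesn't isolate it): a cookie is identified by its name *and* its path/domain, so a cookie set with path '/' and an explicit domain cannot be expired by the delete_cookie() above, which writes no path or domain at all. A sketch of a matching delete:

        // To expire a cookie, repeat the same path and domain it was set with.
        function delete_cookie(cookie_name, path, domain) {
          var cookie_date = new Date();
          cookie_date.setTime(cookie_date.getTime() - 1);
          var cookie_string = cookie_name + '=; expires=' + cookie_date.toGMTString();
          if (path)   cookie_string += '; path=' + path;
          if (domain) cookie_string += '; domain=' + domain;
          document.cookie = cookie_string;
        }

        // Matching the set_cookie call in the question:
        delete_cookie('visible', '/', 'domain.com');

    (Browsers of that era also expected a leading dot, '.domain.com', for a cookie meant to cover all subdomains.)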

  • OOP vs PP for algorithms

    - by Ilian
    Which paradigm is better for the design and analysis of algorithms? And which is faster? I ask because I have a subject called Design and Analysis of Algorithms at university, and there is a time limit for programs. Is OOP slower than procedural programming, or is the time difference not big?

  • First Party and Third Party Cookie

    - by ajithperuva
    I want to create an analytics project (just like Google Analytics) for getting conversion rates and tracking visitor counts. How can we create a first-party cookie and a third-party cookie using PHP? And how can we identify whether a given cookie is third-party or first-party? Do we need to follow any kind of standard to identify them? If anybody knows, please give me some idea about it... please.

  • Motion detection in compressed domain (JPEG/Mpeg4/H264)

    - by paft
    Hi everyone! I process video from IP cameras, and have written a motion detection algorithm based on decompressed video analysis. But I really need something faster. I've found several papers about compressed-domain analysis, but have failed to find any implementations. Can anyone recommend me some code? Found materials:
    http://www.ist-live.org/intranet/school-of-informatics-university-of-bradford001-7/41410206.pdf/view
    http://doc.rero.ch/lm.php?url=1000,43,4,20061128120121-NA/Bracamonte_Javier_-_A_Low_Complexity_Change_Detection_Algorithm_20061128.pdf
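
    For flavor, the core idea in the linked papers is to skip pixel decoding and threshold the motion vectors the encoder has already computed. A toy JavaScript sketch (all thresholds invented, vectors assumed already extracted per macroblock):

        // Compressed-domain motion detection: reuse the bitstream's motion
        // vectors instead of decoding and differencing full frames.
        function hasMotion(macroblockVectors, magnitudeThreshold = 2.0, ratioThreshold = 0.05) {
          let moving = 0;
          for (const { dx, dy } of macroblockVectors) {
            if (Math.hypot(dx, dy) > magnitudeThreshold) moving++;
          }
          // Report motion when enough macroblocks move.
          return moving / macroblockVectors.length > ratioThreshold;
        }

        // Example frame: 2 of 4 macroblocks moved.
        console.log(hasMotion([
          { dx: 0, dy: 0 }, { dx: 4, dy: 1 },
          { dx: 3, dy: -2 }, { dx: 0, dy: 0 },
        ])); // true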

  • How do you combine "Revision Control" with "WorkFlow" for R?

    - by Tal Galili
    Hello all,
    I remember coming across R users writing that they use "revision control" (e.g. "source control"), and I am curious to know: how do you combine revision control with your statistical analysis workflow? Two (very) interesting discussions talk about how to deal with the workflow, but neither of them refers to the revision control element:
    http://stackoverflow.com/questions/1266279/how-to-organize-large-r-programs
    http://stackoverflow.com/questions/1429907/workflow-for-statistical-analysis-and-report-writing

    A long update to the question: following some of the people's answers, and Dirk's question in the comment, I would like to direct my question a bit more. After reading the Wiki article about "revision control" (which I was previously not familiar with), it was clear to me that when using revision control, what one does is build a development structure of one's code. This structure leads either to a "final product" or to several branches.

    When building something like, let's say, a website, there is usually one end product you work towards (the website), with some prototypes along the way. But when doing a statistical analysis, the work (in my view) is different. Sometimes you know where you want to get to, but more often you explore: explore cleaning the dataset, explore different methods for statistical analysis, and ask various questions of your data (and I am writing this knowing how Frank Harrell and other experienced statisticians feel about data dredging).

    That is why the workflow question for statistical programming is (in my view) a serious and deep question, raising many issues. The simpler ones are technical:
    Which revision control software do you use (and why)?
    Which IDE do you use (and why)?
    The more interesting questions are about the work process:
    How do you structure your files? What do you keep as a separate file and what as a revision? Or, asking in a different way: what should be a "branch" and what should be a "sub-project" in your code? For example: when starting to explore your data, should a plot be created and then erased because it didn't lead anywhere (but kept as a revision), or should there be a backup file of that path? How you solve this tension was my initial curiosity.
    The second question is "what might I be missing?". What rules (of thumb) should one follow so as to avoid common pitfalls when doing statistical programming with version control?
    In my intuition, I feel that statistical programming is inherently different from software development (I am writing this without being a real expert in statistical programming, and even less so in software development). That's why I am unsure which of the lessons I have read here about version control would be applicable.
    Thanks a lot,
    Tal

  • Dynamically loaded jQuery with GreaseMonkey inconsistent on pages (refreshing seems to fix it)... do

    - by uprightnetizen
    Hi, I want a custom page analysis footer on every site I visit... so I've used a method to attach jQuery to unsafeWindow. I then create a floating footer on the page. I want to be able to call commands in a menu, do some processing, then put the results in the footer. Unfortunately it sometimes works and sometimes it doesn't. At least two alerts should happen in the printOutput function; sometimes it only fires one, then it (crashes?) without error. On other pages, both alerts fire and it finds the element, but it doesn't add the extra text (e.g. www.linode.com). Refreshing the page, then running the printOutput command again, seems to always work. Does anyone know what's going on??? The userscript can be installed at http://www.captionwizard.com/test/page_analysis.user.js

        // ==UserScript==
        // @name page_analysis
        // @namespace markspace
        // @description Page Analysis
        // @include http://*/*
        // ==/UserScript==

        (function() {
            // Add jQuery
            var GM_JQ = document.createElement('script');
            GM_JQ.src = 'http://code.jquery.com/jquery-1.4.2.min.js';
            GM_JQ.type = 'text/javascript';
            document.getElementsByTagName('head')[0].appendChild(GM_JQ);

            var jqueryActive = false;

            // Check if jQuery's loaded
            function GM_wait() {
                if (typeof unsafeWindow.jQuery == 'undefined') {
                    window.setTimeout(GM_wait, 100);
                } else {
                    $ = unsafeWindow.jQuery;
                    letsJQuery();
                }
            }
            GM_wait();

            function letsJQuery() {
                jqueryActive = true;
                setupOutputFooter();
            }

            /*******************************
             Analysis FOOTER functions
            *******************************/
            function printOutput(someText) {
                alert('printing output');
                if ($('div.analysis_footer').length) {
                    alert('is here - appending');
                    $('div.analysis_footer').append('<br>' + someText);
                } else {
                    alert('not here - trying again');
                    setupOutputFooter();
                    $('div.analysis_footer').append('<br>' + someText);
                }
            }

            GM_registerMenuCommand("Test Output", testOutput, "k", "control", "k");

            function testOutput() {
                printOutput('testing this');
            }

            function setupOutputFooter() {
                $('<div class="analysis_footer">Page Analysis Footer:</div>').appendTo('body');
                $('div.analysis_footer').css('position', 'fixed').css('bottom', '0px').css('background-color', '#F8F8F8');
                $('div.analysis_footer').css('width', '100%').css('color', '#3B3B3B').css('font-size', '0.8em');
                $('div.analysis_footer').css('font-family', '"Myriad",Verdana,Arial,Helvetica,sans-serif').css('padding', '5px');
                $('div.analysis_footer').css('border-top', '1px solid black').css('text-align', 'left');
            }
        }());
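
    One plausible explanation (not confirmed in the post): GM_registerMenuCommand is live immediately, while jQuery arrives asynchronously, so a menu command issued before GM_wait() completes will fail on the first $(...) call, which would also fit a refresh "fixing" it. A defensive sketch reusing the names from the script above:

        // Queue output until jQuery has actually loaded.
        var pendingOutput = [];

        function printOutput(someText) {
          if (!jqueryActive) {              // jQuery not ready - park the message
            pendingOutput.push(someText);
            return;
          }
          if (!$('div.analysis_footer').length) setupOutputFooter();
          $('div.analysis_footer').append('<br>' + someText);
        }

        function letsJQuery() {
          jqueryActive = true;
          setupOutputFooter();
          pendingOutput.forEach(printOutput); // flush anything queued early
          pendingOutput = [];
        }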

  • What does the symbol ::: mean in R

    - by Milktrader
    I came across this in the following context, from B. Pfaff's "Analysis of Integrated and Cointegrated Time Series with R":

        ## Impulse response analysis of SVAR A-type model
        args(vars:::irf.svarest)
        irf.svara <- irf(svar.A, impulse = "y1",
                         response = "y2", boot = FALSE)
        args(vars:::plot.varirf)
        plot(irf.svara)

  • SQL Server (2012 Enterprise) Browser service failing

    - by Watki02
    I have a problem as described below. I have an instance of SQL Server 2012 Enterprise (thanks to MSDN) for local development on my PC. I try to start the SQL Server Browser service from SQL Server Configuration Manager, and it takes a long time to fail, then fails with:

        The request failed or the service did not respond in a timely fashion. Consult the event log or other applicable error logs for details.

    I checked the event logs and found these errors, in this order (all within the same 1-second time frame):
    - The SQL Server Browser service port is unavailable for listening, or invalid.
    - The SQL Server Browser service was unable to establish SQL instance and connectivity discovery.
    - The SQL Server Browser is enabling SQL instance and connectivity discovery support.
    - The SQL Server Browser service was unable to establish Analysis Services discovery.
    - The SQL Server Browser service has started.
    - The SQL Server Browser service has shutdown.

    I checked firewall rules, and both port 1433 (TCP) and 1434 (UDP) are wide open; just as well, the programs and service binary had been "allowed through Windows firewall". I started the "Analysis Services" service by hand and it works fine. Browser still won't start.

    Some history:
    - Installed SQL 2008 R2 Express Advanced
    - Installed SQL 2012 Express Advanced
    - Uninstalled SQL 2008 R2 Express Advanced
    - Installed 2012 SSDT and lots of features with the Express install
    - Installed a unique instance of SQL 2012 Enterprise with all features
    - Uninstalled SSDT and reinstalled SSDT with Enterprise (solved a different problem)
    - Uninstalled SQL 2012 Express
    - Uninstalled SQL 2012 Enterprise
    - Removed anything with "SQL" in the name from Control Panel "Programs and Features"
    - Installed SQL 2012 Enterprise without Analysis Services (this is where I noticed the SQL Browser service was failing to start even during the install)
    - Added the feature of Analysis Services (and everything else) via the installer (Browser continued to fail to start during the install)

    Other interesting facts: opening a command window as administrator and trying to run sqlbrowser.exe manually yielded:

        Microsoft Windows [Version 6.1.7601]
        Copyright (c) 2009 Microsoft Corporation. All rights reserved.

        C:\Windows\system32> cd C:\Program Files (x86)\Microsoft SQL Server\90\Shared

        C:\Program Files (x86)\Microsoft SQL Server\90\Shared> sqlbrowser.exe -c
        SQLBrowser: starting up in console mode
        SQLBrowser: starting up SSRP redirection service
        SQLBrowser: failed starting SSRP redirection services -- shutting down.
        SQLBrowser: starting up OLAP redirection service
        SQLBrowser: Stopping the OLAP redirector

    As I try to repair the install, it errors out saying:

        The following error has occurred: Service 'SQLBrowser' start request failed. Click 'Retry' to retry the failed action, or click 'Cancel' to cancel this action and continue setup.

    For help, click: http://go.microsoft.com/fwlink?LinkID=20476&ProdName=Microsoft%20SQL%20Server&EvtSrc=setup.rll&EvtID=50000&ProdVer=11.0.2100.60&EvtType=0x4F9BEA51%25400xD3BEBD98%25401211%25401

    Clicking Retry fails every time. When clicking Cancel I get:

        The following error has occurred: SQL Server Browser configuration for feature 'SQL_Browser_Redist_SqlBrowser_Cpu32' was cancelled by user after a previous installation failure. The last attempted step: Starting the SQL Server Browser service 'SQLBrowser', and waiting for up to '900' seconds for the process to complete.

    For help, click: http://go.microsoft.com/fwlink?LinkID=20476&ProdName=Microsoft%20SQL%20Server&EvtSrc=setup.rll&EvtID=50000&ProdVer=11.0.2100.60&EvtType=0x4F9BEA51%25400xD3BEBD98%25401211%25401

    When I go to uninstall the SQL Browser from "Programs and Features", it complains: Error opening installation log file. Verify that the specified log file location exists and is writable.

    Is there any way I can fix this, short of re-imaging my computer and reinstalling from scratch? A possible approach would be to somehow really uninstall everything and delete all files related to SQL... is that a good idea, and how do I do that?

  • Windows 7 inbuilt and 3rd party (de)fragmentation related queries

    - by Karan
    I have a pretty good idea of how files end up getting fragmented. That said, I just copied ~3,200 files of varying sizes (from a few KB to ~20GB) from an external USB HDD to an internal, freshly formatted (under Windows 7 x64), NTFS, 2TB, 5400RPM, WD, SATA, non-system (i.e. secondary) drive, filling it up 57%. Since it should have been very much possible for each file to have been stored in one contiguous block, I expected the drive to be fragmented not more than 1-2% at most after this rather lengthy exercise (unfortunately this older machine doesn't support USB 3.0). Windows 7's inbuilt defrag utility told me after a quick analysis that the drive was fragmented only 1% or so, which dovetailed neatly with my expectations. However, just out of curiosity I downloaded and ran the latest portable x64 version of Piriform's Defraggler, and was shocked to see the drive being reported as being ~85% fragmented! The portable version of Auslogics Disk Defrag also agreed with Defraggler, and both clearly expected to grind away for ~10 hours to completely defragment the drive.

    1) How in blazes could the inbuilt and 3rd party defrag utils disagree so badly? I mean, 10-20% variance is probably understandable, but 1% and 85% are miles apart! This Engineering Windows 7 blog post states:

        In Windows XP, any file that is split into more than one piece is considered fragmented. Not so in Windows Vista if the fragments are large enough – the defragmentation algorithm was changed (from Windows XP) to ignore pieces of a file that are larger than 64MB. As a result, defrag in XP and defrag in Vista will report different amounts of fragmentation on a volume. ...

    [Please read the entire post so the quote is not taken out of context.] Could it simply be that the 3rd party defrag utils ignore this post-XP change and continue to use analysis algos similar to those XP used?

    2) Assuming that the 3rd party utils aren't lying about the real extent of fragmentation (which Windows is downplaying post-XP), how could the files have even got fragmented so badly, given they were just copied over afresh to an empty drive?

    3) If vastly differing analysis algos explain the yawning gap, which do I believe? I'm no defrag fanatic for sure, but 85% is enough to make me seriously consider spending 10 hours defragging this drive. On the other hand, 1% reported by Windows' own defragger clearly implies that there is no cause for concern and defragging would actually have negative consequences (as per the post). Is Windows' assumption valid and should I just let it be, or will there be any noticeable performance gains after running one of the 3rd party utils for 10 hours straight?

    4) I see that out of the box Windows 7 defrag is scheduled to run weekly. Does anyone know whether it defrags every single time, or only if its analysis reveals a fragmentation percentage over a set threshold? If the latter, what is this threshold, and can it be changed, maybe via a Registry edit?

    Thanks for reading through (my first query on this wonderful site!) and for any helpful replies. Also, if you're answering question #3, please keep in mind that any speed increases post defragging with 3rd party utils vis-à-vis Windows' inbuilt program should not include pre-Vista (preferably pre-Win7) examples. Further, examples of programs that made your system boot faster won't help in this case, since this is a non-system drive (although one that'll still be used daily).

  • Oracle BI Server Modeling, Part 1- Designing a Query Factory

    - by bob.ertl(at)oracle.com
    Welcome to Oracle BI Development's BI Foundation blog, focused on helping you get the most value from your Oracle Business Intelligence Enterprise Edition (BI EE) platform deployments. In my first series of posts, I plan to show developers the concepts and best practices for modeling in the Common Enterprise Information Model (CEIM), the semantic layer of Oracle BI EE. In this segment, I will lay the groundwork for the modeling concepts. First, I will cover the big picture of how the BI Server fits into the system, and how the CEIM controls the query processing.

    Oracle BI EE Query Cycle

    The purpose of the Oracle BI Server is to bridge the gap between the presentation services and the data sources. There are typically a variety of data sources in a variety of technologies: relational, normalized transaction systems; relational star-schema data warehouses and marts; multidimensional analytic cubes and financial applications; flat files, Excel files, XML files, and so on. Business datasets can reside in a single type of source, or, most of the time, are spread across various types of sources.

    Presentation services users are generally business people who need to be able to query that set of sources without any knowledge of technologies, schemas, or how sources are organized in their company. They think of business analysis in terms of measures with specific calculations, hierarchical dimensions for breaking those measures down, and detailed reports of the business transactions themselves. Most of them create queries without knowing it, by picking a dashboard page and some filters. Others create their own analysis by selecting metrics and dimensional attributes, and possibly creating additional calculations.

    The BI Server bridges that gap from simple business terms to technical physical queries by exposing just the business-focused measures and dimensional attributes that business people can use in their analyses and dashboards. After they make their selections and start the analysis, the BI Server plans the best way to query the data sources, writes the optimized sequence of physical queries to those sources, post-processes the results, and presents them to the client as a single result set suitable for tables, pivots and charts.

    The CEIM is a model that controls the processing of the BI Server. It provides the subject areas that presentation services exposes for business users to select simplified metrics and dimensional attributes for their analysis. It models the mappings to the physical data access, the calculations and logical transformations, and the data access security rules. The CEIM consists of metadata stored in the repository, authored by developers using the Administration Tool client.

    Presentation services and other query clients create their queries in BI EE's SQL-92 language, called Logical SQL or LSQL. The API simply uses ODBC or JDBC to pass the query to the BI Server. Presentation services writes the LSQL query in terms of the simplified objects presented to the users. The BI Server creates a query plan, and rewrites the LSQL into fully-detailed SQL or other languages suitable for querying the physical sources. For example, the Logical SQL below was rewritten into the physical SQL for an Oracle 11g database that follows it.
    Logical SQL:

        SELECT "D0 Time"."T02 Per Name Month" saw_0,
               "D4 Product"."P01  Product" saw_1,
               "F2 Units"."2-01  Billed Qty  (Sum All)" saw_2
        FROM "Sample Sales"
        ORDER BY saw_0, saw_1

    Physical SQL:

        WITH SAWITH0 AS (
            select T986.Per_Name_Month as c1,
                   T879.Prod_Dsc as c2,
                   sum(T835.Units) as c3,
                   T879.Prod_Key as c4
            from Product T879 /* A05 Product */,
                 Time_Mth T986 /* A08 Time Mth */,
                 FactsRev T835 /* A11 Revenue (Billed Time Join) */
            where (T835.Prod_Key = T879.Prod_Key and T835.Bill_Mth = T986.Row_Wid)
            group by T879.Prod_Dsc, T879.Prod_Key, T986.Per_Name_Month
        )
        select SAWITH0.c1 as c1, SAWITH0.c2 as c2, SAWITH0.c3 as c3
        from SAWITH0
        order by c1, c2

    Probably everybody reading this blog can write SQL or MDX. However, the trick in designing the CEIM is that you are modeling a query-generation factory. Rather than hand-crafting individual queries, you model behavior and relationships, thus configuring the BI Server machinery to manufacture millions of different queries in response to random user requests. This mass production requires a different mindset and approach than when you are designing individual SQL statements in tools such as Oracle SQL Developer, Oracle Hyperion Interactive Reporting (formerly Brio), or Oracle BI Publisher.

    The Structure of the Common Enterprise Information Model (CEIM)

    The CEIM has a unique structure specifically for modeling the relationships and behaviors that fill the gap from logical user requests to physical data source queries and back to the result. The model divides the functionality into three specialized layers, called Presentation, Business Model and Mapping, and Physical, as shown below.

    Presentation services clients can generally only see the presentation layer, and the objects in the presentation layer are normally the only ones used in the LSQL request. When a request comes into the BI Server from presentation services or another client, the relationships and objects in the model allow the BI Server to select the appropriate data sources, create a query plan, and generate the physical queries. That's the left-to-right flow in the diagram below. When the results come back from the data source queries, the right-to-left relationships in the model show how to transform the results and perform any final calculations and functions that could not be pushed down to the databases.

    Business Model

    Think of the business model as the heart of the CEIM you are designing. This is where you define the analytic behavior seen by the users, and the superset library of metric and dimension objects available to the user community as a whole. It also provides the baseline business-friendly names and user-readable dictionary. For these reasons, it is often called the "logical" model: it is a virtual database schema that persists no data, but can be queried as if it is a database. The business model always has a dimensional shape (more on this in future posts), and its simple shape and terminology hides the complexity of the source data models.

    Besides hiding complexity and normalizing terminology, this layer adds most of the analytic value as well. This is where you define the rich, dimensional behavior of the metrics and complex business calculations, as well as the conformed dimensions and hierarchies.
    It contributes to the ease of use for business users, since the dimensional metric definitions apply in any context of filters and drill-downs, and the conformed dimensions enable dashboard-wide filters and guided analysis links that bring context along from one page to the next. The conformed dimensions also provide a key to hiding the complexity of many sources, including federation of different databases, behind the simple business model.

    Note that the expression language in this layer is LSQL, so that any expression can be rewritten into any data source's query language at run time. This is important for federation, where a given logical object can map to several different physical objects in different databases. It is also important to portability of the CEIM to different database brands, which is a key requirement for Oracle's BI Applications products.

    Your requirements process with your user community will mostly affect the business model. This is where you will define most of the things they specifically ask for, such as metric definitions. For this reason, many of the best-practice methodologies of our consulting partners start with the high-level definition of this layer.

    Physical Model

    The physical model connects the business model that meets your users' requirements to the reality of the data sources you have available. In the query factory analogy, think of the physical layer as the bill of materials for generating physical queries. Every schema, table, column, join, cube, hierarchy, etc., that will appear in any physical query manufactured at run time must be modeled here at design time.

    Each physical data source will have its own physical model, or "database" object in the CEIM. The shape of each physical model matches the shape of its physical source. In other words, if the source is normalized relational, the physical model will mimic that normalized shape. If it is a hypercube, the physical model will have a hypercube shape. If it is a flat file, it will have a denormalized tabular shape.

    To aid in query optimization, the physical layer also tracks the specifics of the database brand and release. This allows the BI Server to make the most of each physical source's distinct capabilities, writing queries in its syntax and using its specific functions. This allows the BI Server to push processing work as deep as possible into the physical source, which minimizes data movement and takes full advantage of the database's own optimizer. For most data sources, native APIs are used to further optimize performance and functionality.

    The value of having a distinct separation between the logical (business) and physical models is encapsulation of the physical characteristics. This encapsulation is another enabler of packaged BI applications and federation. It is also key to hiding the complex shapes and relationships in the physical sources from the end users. Consider a routine drill-down in the business model: physically, it can require a drill-through where the first query is MDX to a multidimensional cube, followed by the drill-down query in SQL to a normalized relational database. The only difference from the user's point of view is that the 2nd query added a more detailed dimension level column - everything else was the same.

    Mappings

    Within the Business Model and Mapping layer, the mappings provide the binding from each logical column and join in the dimensional business model to each of the objects that can provide its data in the physical layer.
    When there is more than one option for a physical source, rules in the mappings are applied to the query context to determine which of the data sources should be hit, and how to combine their results if more than one is used. These rules specify aggregate navigation, vertical partitioning (fragmentation), and horizontal partitioning, any of which can be federated across multiple, heterogeneous sources. These mappings are usually the most sophisticated part of the CEIM.

    Presentation

    You might think of the presentation layer as a set of very simple relational-like views into the business model. Over ODBC/JDBC, they present a relational catalog consisting of databases, tables and columns. For business users, presentation services interprets these as subject areas, folders and columns, respectively. (Note that in 10g, subject areas were called presentation catalogs in the CEIM. In this blog, I will stick to 11g terminology.) Generally speaking, presentation services and other clients can query only these objects (there are exceptions for certain clients such as BI Publisher and Essbase Studio).

    The purpose of the presentation layer is to specialize the business model for different categories of users. Based on a user's role, they will be restricted to specific subject areas, tables and columns for security. The breakdown of the model into multiple subject areas organizes the content for users, and subjects superfluous to a particular business role can be hidden from that set of users. Customized names and descriptions can be used to override the business model names for a specific audience. Variables in the object names can be used for localization.

    For these reasons, you are better off thinking of the tables in the presentation layer as folders than as strict relational tables. The real semantics of tables and how they function is in the business model, and any grouping of columns can be included in any table in the presentation layer. In 11g, an LSQL query can also span multiple presentation subject areas, as long as they map to the same business model.

    Other Model Objects

    There are some objects that apply to multiple layers. These include security-related objects, such as application roles, users, data filters, and query limits (governors). There are also variables you can use in parameters and expressions, and initialization blocks for loading their initial values on a static or user-session basis. Finally, there are Multi-User Development (MUD) projects for developers to check out units of work, and objects for the marketing feature used by our packaged customer relationship management (CRM) software.

    The Query Factory

    At this point, you should have a grasp on the query factory concept. When developing the CEIM model, you are configuring the BI Server to automatically manufacture millions of queries in response to random user requests. You do this by defining the analytic behavior in the business model, mapping that to the physical data sources, and exposing it through the presentation layer's role-based subject areas. While configuring mass production requires a different mindset than when you hand-craft individual SQL or MDX statements, it builds on the modeling and query concepts you already understand. The following posts in this series will walk through the CEIM modeling concepts and best practices in detail.
We will initially review dimensional concepts so you can understand the business model, and then present a pattern-based approach to learning the mappings from a variety of physical schema shapes and deployments to the dimensional model.  Along the way, we will also present the dimensional calculation template, and learn how to configure the many additivity patterns.

  • Properly escaping forward slash in bash script for usage with sed

    - by user331839
    I'm trying to determine the size of the files that would be newly copied when syncing two folders, by running rsync in dry-run mode and then summing up the sizes of the files listed in the output of rsync. Currently I'm stuck at prefixing the files with their parent folder. I found out how to prefix lines using sed and how to escape using sed, but I'm having trouble combining those two. This is how far I got:

        source="/my/source/folder/"
        target="/my/target/folder/"
        escaped=`echo "$source" | sed -e 's/[\/&]/\\//g'`
        du `rsync -ahnv $source $target | tail -n +2 | head -n -3 | sed "s/^/$escaped/"` | awk '{i+=$1} END {print i}'

    This is the output I get from bash -x myscript.sh:

        + source=/my/source/folder/
        + target=/my/target/folder
        ++ echo /my/source/folder/
        ++ sed -e 's/[\/&]/\//g'
        + escaped=/my/source/folder/
        + awk '{i+=$1} END {print i}'
        ++ rsync -ahnv /my/source/folder/ /my/target/folder/
        ++ sed 's/^//my/source/folder//'
        ++ head -n -3
        ++ tail -n +2
        sed: -e expression #1, char 8: unknown option to `s'
        + du 80268

    Any ideas on how to properly escape would be highly appreciated.

  • Odd behavior on Shift-{Esc,Fx}

    - by ??????? ???????????
    Sometimes, when changing between the modes in Vim, I forget to take my finger off the Shift key. This innocent mistake is probably part of the luggage carried over from other terminals, but I have never seen my input treated this way. After changing from command mode to input mode, if I hit the Esc key while the Shift key is down, Vim will display <9b> (the Control Sequence Introducer) instead of switching to command mode. At least two work-arounds to this intended behavior are available on the mintty site (faq, issue):

        " Avoiding escape timeout issues in vim
        :let &t_ti.="\e[?7727h"
        :let &t_te.="\e[?7727l"
        :noremap <Esc>O[ <Esc>
        :noremap! <Esc>O[ <Esc>

        " Remap escape
        :imap <special> <CSI> <ESC>

    My question is about the syntax and the meaning of the first solution. From the looks of it, it seems like t_ti is being assigned a literal value, but I'm not sure why the "&" (address-of-style) operator is required. I'm also not sure why there are two noremap statements.

  • Closing telnet connection gracefully from session mode itself without going to telnet prompt.

    - by Kumar Alok
    A normal telnet connection goes like this:

        $ telnet localhost 22
        Trying 127.0.0.1...
        Connected to localhost.
        Escape character is '^]'.
        SSH-2.0-OpenSSH_4.2
        ^]
        telnet> close
        Connection closed.

    I want to close it from the telnet session itself, without dropping to the telnet prompt. My requirement is that pressing some control character from within the session itself, like CTRL+A, should leave the session and close it automatically. Something like this:

        $ telnet localhost 22
        Trying 127.0.0.1...
        Connected to localhost.
        Escape character is '^]'.
        SSH-2.0-OpenSSH_4.2
        ^A
        Connection closed.
        $

    I tried all the options given on the man page, and tried some $HOME/.telnetrc tests, but couldn't achieve it, as telnetrc executes all the commands written in it for the given host whenever a telnet to that host is done. Can anyone help me with this, i.e. how it can be achieved?
