Search Results

Search found 23262 results on 931 pages for 'content analysis'.


  • Tracking scroll depth to reveal content engagement in Google Analytics

    This article investigates a way to track content engagement on your site. By monitoring how far down the page a visitor travels and recording the data in Google Analytics, you can discover how many of your visitors are reading your content all the way to the end. Introduction: When browsing through your Google Analytics reports you will find a massive selection of data at your fingertips. You can find how many people visited a page, how long they were there, what country they were...
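
    The excerpt does not include the article's code, but a minimal JavaScript sketch of the idea looks like the following (assuming Universal Analytics' ga() function is already loaded on the page; the bucket sizes and event names are illustrative):

        // Report the deepest scroll depth reached, in 25% buckets, as GA events.
        var maxBucket = 0;
        window.addEventListener('scroll', function () {
            var scrolled = window.scrollY + window.innerHeight;
            var total = document.documentElement.scrollHeight;
            var bucket = Math.floor((scrolled / total) * 4) * 25;   // 0, 25, 50, 75, 100
            if (bucket > maxBucket) {
                maxBucket = bucket;
                ga('send', 'event', 'Scroll Depth', 'Percentage', String(bucket));
            }
        });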

    Read the article

  • How to send Content-Disposition headers in Apache for ALL files?

    - by user37900
    I have seen similar questions, but the answers currently posted do not work for me. I am trying to get all files to prompt the user to download, but .html and .jpg files are still being displayed. Here is what I have in my httpd.conf: <Location /testfiles> SetEnvIf Request_URI "^.*/([^/]*)$" FILENAME=$1 Header set "Content-disposition" "attachment; filename=%{FILENAME}e" UnsetEnv FILENAME </Location> What am I doing wrong? Many thanks
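
    Not a verified fix for the configuration above, but a commonly suggested alternative is to drop the per-file filename logic and force a download for everything under the location (requires mod_headers); a minimal sketch:

        <Location /testfiles>
            # Serve everything as a generic binary stream and ask the browser to download it.
            ForceType application/octet-stream
            Header set Content-Disposition "attachment"
        </Location>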

    Read the article

  • Problems installing Adobe Premiere Elements 8 Content features

    - by Walt Maken
    I recently purchased Adobe Photoshop & Premiere Elements 8. Photoshop Elements 8 installed fine, along with the additional material available via internet download. Premiere Elements 8 installed OK from the DVD. During the Premiere Elements 8 startup process the following message appears: "A reduced set of content (Instant Movie Themes, Title and Menu Templates, etc.) has been installed. To install the full content set, please insert your Content DVD and run Setup.exe. If you do not have a Content DVD please visit http://www.adobe.com/go/pre_additional_downloads to download the content installer." Since I didn't have the Content DVD, I did the download, which took nearly 12 hours. The extract appeared to complete at 100%, but then immediately gave the error message "A problem occurred while extracting some files. Check available space on your computer and the write privileges on the destination folder." Why would it show this error message if it had completed the extract process 100%? What step(s) do I take now to have the Content installed? Do I need to go through the 12-hour download again or, hopefully, is there something I can do that will make it unnecessary to download again?

    Read the article

  • Importing Analysis Services 2008 KPIs into a PerformancePoint scorecard

    - by Colin
    I am trying to import a KPI from Analysis Services into a PerformancePoint Scorecard, and when I do, The Dashboard Designer throws an error: An unknown error has occurred. If the problem persists contact an administrator. There may be additional information in the server application event log. When I examine the event log, I find the following exception: System.IO.FileNotFoundException: Could not load file or assembly 'Microsoft.AnalysisServices, Version=9.0.242.0, Culture=neutral, PublicKeyToken=89845dcd8080cc91' or one of its dependencies. The system cannot find the file specified. File name: 'Microsoft.AnalysisServices, Version=9.0.242.0, Culture=neutral, PublicKeyToken=89845dcd8080cc91' at Microsoft.PerformancePoint.Scorecards.Server.ImportExportHelper.GetImportableAsKpis(IBpm pmService, DataSource asDataSource) at Microsoft.PerformancePoint.Scorecards.Server.PmServer.GetImportableAsKpis(DataSource dataSource) I have found this thread which recommends reinstalling Microsoft ADOMD.NET but the installer for that won't run because the server already has a newer version of the product (The server is running SQL Server Analysis Services 2008 which includes Microsoft.AnalysisServices.AdomdClient.dll version 9.0.3042.0) Anyone have any ideas (short of finding the DLL myself and manually installing it to the GAC)?
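
    One workaround sometimes suggested for this kind of AMO version mismatch is an assembly binding redirect in the configuration file of the application hosting the PerformancePoint service, pointing the old 9.0.242.0 reference at the installed assembly. A sketch; the target version below (10.0.0.0) is an assumption and must match whatever Microsoft.AnalysisServices version is actually in the GAC:

        <configuration>
          <runtime>
            <assemblyBinding xmlns="urn:schemas-microsoft-com:asm.v1">
              <dependentAssembly>
                <!-- Redirect requests for the SSAS 2005 AMO assembly to the installed 2008 version (assumed 10.0.0.0). -->
                <assemblyIdentity name="Microsoft.AnalysisServices"
                                  publicKeyToken="89845dcd8080cc91" culture="neutral" />
                <bindingRedirect oldVersion="9.0.242.0" newVersion="10.0.0.0" />
              </dependentAssembly>
            </assemblyBinding>
          </runtime>
        </configuration>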

    Read the article

  • Java library for HTML analysis

    - by Raj
    Hi, (I've seen similar questions, but I think none of them cater to my specific needs, hence...) I would like to know if there is a Java library for analysis of real-world (read: incomplete, ill-formed) HTML. By analysis, I mean things like:
    - figuring out the most prominent color in an HTML chunk
    - changing that color to some other color (hence, it has to support modification of the HTML as well)
    - pruning out unwanted tags
    - fixing up the HTML to result in a well-formed HTML snippet
    Parts of the last two are done by libraries such as Jericho and jTidy. 'Plugins' on top of these would be great. Thanks in advance!
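
    For what it's worth, jsoup is another library that covers the last two points (pruning and producing well-formed output). A minimal sketch, assuming the jsoup jar is on the classpath (Whitelist was renamed Safelist in newer releases):

        import org.jsoup.Jsoup;
        import org.jsoup.nodes.Document;
        import org.jsoup.safety.Whitelist;

        public class HtmlCleanup {
            public static void main(String[] args) {
                String html = "<html><body><p>Hello <script>evil()</script><b>world";
                // jsoup parses ill-formed HTML into a well-formed DOM that can be modified.
                Document doc = Jsoup.parse(html);
                doc.select("script").remove();       // prune unwanted tags
                String pruned = doc.body().html();   // modified, well-formed snippet
                // Alternatively, clean against a whitelist of allowed tags.
                String clean = Jsoup.clean(html, Whitelist.basic());
                System.out.println(pruned);
                System.out.println(clean);
            }
        }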

    Read the article

  • Exclude complete namespace from FxCop code analysis?

    - by hangy
    Is it possible to exclude a complete namespace from all FxCop analysis while still analyzing the rest of the assembly using the SuppressMessageAttribute? In my current case, I have a bunch of classes generated by LINQ to SQL which cause a lot of FxCop issues, and obviously, I will not modify all of those to match FxCop standards, as a lot of those modifications would be gone if I re-generated the classes. I know that FxCop has a project option to suppress analysis on generated code, but it does not seem to recognize the entity and context classes created by LINQ to SQL as generated code.
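
    One workaround that is often suggested (not a true namespace-wide exclusion, but close): LINQ to SQL emits partial classes, so a second partial declaration can carry GeneratedCodeAttribute, and the "suppress results from generated code" option then skips those types. A sketch with illustrative namespace and type names:

        // In a hand-written file that survives re-generation.
        // Attributes on any partial declaration apply to the whole type.
        using System.CodeDom.Compiler;

        namespace MyApp.DataAccess
        {
            [GeneratedCode("System.Data.Linq", "4.0.0.0")]
            public partial class Customer { }

            [GeneratedCode("System.Data.Linq", "4.0.0.0")]
            public partial class MyDataContext { }
        }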

    Read the article

  • PInvokeStackImbalance was detected when manually running the XNA Content pipeline

    - by Miau
    So I'm running this code:

        static readonly string[] PipelineAssemblies =
        {
            "Microsoft.Xna.Framework.Content.Pipeline.FBXImporter" + XnaVersion,
            "Microsoft.Xna.Framework.Content.Pipeline.XImporter" + XnaVersion,
            "Microsoft.Xna.Framework.Content.Pipeline.TextureImporter" + XnaVersion,
            "Microsoft.Xna.Framework.Content.Pipeline.EffectImporter" + XnaVersion,
            "Microsoft.Xna.Framework.Content.Pipeline.XImporter" + XnaVersion,
            "Microsoft.Xna.Framework.Content.Pipeline.AudioImporters" + XnaVersion,
            "Microsoft.Xna.Framework.Content.Pipeline.VideoImporters" + XnaVersion,
        };

        // more code in between .....

        // Register any custom importers or processors.
        foreach (string pipelineAssembly in PipelineAssemblies)
        {
            _buildProject.AddItem("Reference", pipelineAssembly);
        }

        // more code in between .....

        var execute = Task.Factory.StartNew(() => submission.ExecuteAsync(null, null), cancellationTokenSource.Token);
        var endBuild = execute.ContinueWith(ant => BuildManager.DefaultBuildManager.EndBuild());
        endBuild.Wait();

    Basically I am trying to build the content project from code. This works well if you remove XImporter, AudioImporters and VideoImporters, but with those I get the following error:

        PInvokeStackImbalance was detected
        Message: A call to PInvoke function 'Microsoft.Xna.Framework.Content.Pipeline!Microsoft.Xna.Framework.Content.Pipeline.UnsafeNativeMethods+AudioHelper::GetFormatSize' has unbalanced the stack. This is likely because the managed PInvoke signature does not match the unmanaged target signature. Check that the calling convention and parameters of the PInvoke signature match the target unmanaged signature.

    Things I've tried:
    - turning unsafe code on
    - adding a lot of logging (the dll is found, in case you are wondering)
    - different wav and mp3 files (just in case)
    Will update...
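
    Not an answer to the stack imbalance itself, but since the audio and video importers wrap 32-bit native code, one commonly suggested mitigation is to make sure the process driving the content build targets x86. A sketch of the relevant project setting (an assumption, not a confirmed fix for this case):

        <!-- In the .csproj of the tool that drives the content build -->
        <PropertyGroup>
          <PlatformTarget>x86</PlatformTarget>
        </PropertyGroup>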

    Read the article

  • SQL Server - Schema/Code Analysis Rules - What would your rules include?

    - by Randy Minder
    We're using Visual Studio Database Edition (DBPro) to manage our schema. This is a great tool that, among the many things it can do, can analyse our schema and T-SQL code based on rules (much like what FxCop does with C# code), and flag certain things as warnings and errors. Some example rules might be that every table must have a primary key, no underscores in column names, every stored procedure must have comments, etc. The number of rules built into DBPro is fairly small, and a bit odd. Fortunately DBPro has an API that allows the developer to create their own. I'm curious as to the types of rules you and your DB team would create (both schema rules and T-SQL rules). Looking at some of your rules might help us decide what we should consider. Thanks - Randy
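
    As a concrete illustration of the kind of schema rule meant here, a sketch of a catalog query that flags tables without a primary key (the sort of check a custom DBPro rule might perform):

        -- Tables in the current database that have no primary key constraint.
        SELECT s.name AS schema_name, t.name AS table_name
        FROM sys.tables AS t
        JOIN sys.schemas AS s ON s.schema_id = t.schema_id
        WHERE NOT EXISTS (
            SELECT 1
            FROM sys.key_constraints AS k
            WHERE k.parent_object_id = t.object_id
              AND k.type = 'PK'
        );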

    Read the article

  • Thoughts on Static Code Analysis Warning CA1806 for TryParse calls

    - by Tim
    I was wondering what people's thoughts were on the CA1806 (DoNotIgnoreMethodResults) Static Code Analysis warning when using FxCop. I have several cases where I use Int32.TryParse to pull in internal configuration information that was saved in a file. I end up with a lot of code that looks like: Int32.TryParse(someString, NumberStyles.Integer, CultureInfo.InvariantCulture, out intResult); MSDN says the default result of intResult is zero if something fails, which is exactly what I want. Unfortunately, this code will trigger CA1806 when performing static code analysis. It seems like a lot of redundant/useless code to fix the errors with something like the following: bool success = Int32.TryParse(someString, NumberStyles.Integer, CultureInfo.InvariantCulture, out intResult); if (!success) { intResult= 0; } Should I suppress this message or bite the bullet and add all this redundant error checking? Or maybe someone has a better idea for handling a case like this? Thanks!
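
    One alternative to suppressing CA1806 at every call site is to centralize the call in a small helper whose return value is always consumed; a sketch (the helper name and default are illustrative):

        using System.Globalization;

        static class SafeParse
        {
            // Returns the parsed value, or the fallback (default 0) when parsing fails,
            // mirroring the behaviour the original code relied on.
            public static int ToInt32(string text, int fallback = 0)
            {
                int result;
                return int.TryParse(text, NumberStyles.Integer,
                                    CultureInfo.InvariantCulture, out result)
                    ? result
                    : fallback;
            }
        }

        // usage: int intResult = SafeParse.ToInt32(someString);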

    Read the article

  • Add title to meta analysis forest plot

    - by Timothy Alston
    I am meta-analysing some studies and drawing a forest plot for my results. However, I can't seem to get the forest plot to display the title. An example of my code is:

        require(meta)
        parameter1 <- metaprop(sm = "PLOGIT",
                               event = c(4, 16, 3, 2, 10, 1, 0, 2),
                               n = c(90, 402, 89, 29, 153, 86, 21, 48),
                               level = 0.95,
                               studlab = c("study 1", "study 2", "study 3", "study 4",
                                           "study 5", "study 6", "study 7", "study 8"),
                               title = "meta analysis 1")
        forest(parameter1)

    When it produces the forest plot, the title "meta analysis 1" is missing. How can I add this in? Thanks in advance, Timothy
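
    Since forest() in the meta package draws with grid graphics, a workaround often suggested is to add the title afterwards with grid.text(); the coordinates and styling below are illustrative:

        forest(parameter1)
        library(grid)
        grid.text("meta analysis 1", x = 0.5, y = 0.97,
                  gp = gpar(cex = 1.3, fontface = "bold"))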

    Read the article

  • MS Analysis Services OLAP API for Python

    - by Kaloyan Todorov
    I am looking for a way to connect to a MS Analysis Services OLAP cube, run MDX queries, and pull the results into Python. In other words, exactly what Excel does. Is there a solution in Python that would let me do that? Someone asking a similar question was pointed to Django's ORM. As much as I like the framework, this is not what I am looking for. I am also not looking for a way to pull rows and aggregate them -- that's what Analysis Services is for in the first place. Ideas? Thanks.

    Read the article

  • Static content not displayed with Zend FW

    - by shin
    I am trying to display static content with the Zend Framework. When I go to http://square.localhost/content/services, I get an error message. Could anyone tell me how to fix this please? Thanks in advance.

        application.ini
        ....
        resources.layout.layoutPath = APPLICATION_PATH "/layouts"
        resources.layout.layout = "master"
        resources.router.routes.home.route = /home
        resources.router.routes.home.defaults.module = default
        resources.router.routes.home.defaults.controller = index
        resources.router.routes.home.defaults.action = index
        resources.router.routes.static-content.route = /content/:page
        resources.router.routes.static-content.defaults.module = default
        resources.router.routes.static-content.defaults.controller = static-content
        resources.router.routes.static-content.defaults.action = display

        application/modules/default/controllers/StaticContentController.php
        class StaticContentController extends Zend_Controller_Action
        {
            public function init()
            {
            }

            // display static views
            public function displayAction()
            {
                $page = $this->getRequest()->getParam('page');
                if (file_exists($this->view->getScriptPath(null) . "/" . $this->getRequest()->getControllerName() . "/$page." . $this->viewSuffix)) {
                    $this->render($page);
                } else {
                    throw new Zend_Controller_Action_Exception('Page not found', 404);
                }
            }
        }

        application/modules/default/views/scripts/static-content/services.phtml
        some html ... ...

        Error message:
        An error occurred
        Page not found
        Exception information:
        Message: Page not found
        Stack trace:
        #0 /var/www/square/library/Zend/Controller/Action.php(513): StaticContentController->displayAction()
        #1 /var/www/square/library/Zend/Controller/Dispatcher/Standard.php(295): Zend_Controller_Action->dispatch('displayAction')
        #2 /var/www/square/library/Zend/Controller/Front.php(954): Zend_Controller_Dispatcher_Standard->dispatch(Object(Zend_Controller_Request_Http), Object(Zend_Controller_Response_Http))
        #3 /var/www/square/library/Zend/Application/Bootstrap/Bootstrap.php(97): Zend_Controller_Front->dispatch()
        #4 /var/www/square/library/Zend/Application.php(366): Zend_Application_Bootstrap_Bootstrap->run()
        #5 /var/www/square/public/index.php(26): Zend_Application->run()
        #6 {main}
        Request Parameters:
        array (
          'page' => 'services',
          'module' => 'default',
          'controller' => 'static-content',
          'action' => 'display',
        )

    Read the article

  • asp.net webpages content block and helper differences

    - by metanaito
    In the ASP.NET Web Pages framework, what is the difference between using a content block versus a helper? They both seem to be used to output HTML to multiple pages. They both can contain code and both can pass parameters. Are there other differences? When should you use a helper versus a content block? More info: With content blocks we create a .cshtml file (for example _MakeNote.cshtml) to hold the content we want to insert into a page. Then we use: @RenderPage("/Shared/_MakeNote.cshtml") to insert the content into a page. We can pass parameters to the content block like this: @RenderPage("/Shared/_MakeNote.cshtml", new { content = "hello from content block" }) It's somewhat like an include file, but I think it does not share scope with the parent page. With helpers we create a .cshtml page in the App_Code folder (for example MyHelpers.cshtml) and place methods in that page which we want to call. The method looks something like this: @helper MakeNote(string content) { <div>@content</div> } The helper is called by using: @MyHelpers.MakeNote("Hello from helper")

    Read the article

  • jQuery: return element HTML instead of content

    - by jim smith
    I have something like this <ul> <li class="aclass" id="a">content</li> <li class="aclass" id="b">content</li> <li class="aclass" id="c">content</li> <li class="aclass" id="d">content</li> <li class="aclass" id="e">content</li> <li class="aclass" id="f">content</li> </ul> I have code like $(".aclass").live("mousedown",function() { alert($(this).html()); }); This will alert the content; what I would like to do is alert the entire element, like <li class="aclass" id="f">content</li> I've tried $(this).parent(), but that returns the whole UL
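
    For reference, two common ways to get the element's own markup (the second relies on the DOM's native outerHTML property rather than jQuery):

        $(".aclass").live("mousedown", function () {
            // Clone the element, wrap it, and read the wrapper's inner HTML...
            alert($(this).clone().wrap('<div/>').parent().html());
            // ...or simply use the native DOM property:
            alert(this.outerHTML);
        });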

    Read the article

  • jQuery code on different content pages

    - by Ockonal
    Hello, I have a site with a menu which reloads content. The content is loaded dynamically using Ajax and a PHP script. Different content pages need different jQuery plugins. But if I include the needed plugin directly in a content page, I get big lags during loading. So for now I'm including all plugins in the main page... and that's not the best idea... Any suggestions? UPD: I have a div with content, which is shown with effects (rolling up/down). When part of the menu is clicked, I send an Ajax request to the PHP script, which reads the needed text from the database and returns it to the main page that sent the request. Then I paste that content into the div and roll it down. If I include the jQuery plugins in the content pages themselves, the rolling animation won't work properly.

    Read the article

  • Data structure for pattern matching.

    - by alvonellos
    Let's say you have an input file with many entries like these: date, ticker, open, high, low, close, <and some other values>. And you want to execute a pattern-matching routine on the entries (rows) in that file, using a candlestick pattern, for example (see Doji). And that pattern can appear on any uniform time interval (let t = 1s, 5s, 10s, 1d, 7d, 2w, 2y, and so on...). Say a pattern-matching routine can take an arbitrary number of rows to perform an analysis and contain an arbitrary number of subpatterns. In other words, some patterns may require 4 entries to operate on. Say also that the routine may later have to find and classify extrema (local and global maxima and minima as well as inflection points) for the ticker over a closed interval; for example, you could say that a cubic function (x^3) has its extrema on the interval [-1, 1] (see link). What would be the most natural choice in terms of a data structure? What about an interface that maps a Ticker object containing one row of data onto a collection of Ticker objects, so that an arbitrary pattern can be applied to the data? What's the first thing that comes to mind? I chose a doubly-linked circular linked list with the following methods: push_front(), push_back(), pop_front(), pop_back(), and an overloaded [] that accepts negative indices. But that data structure seems very clumsy: with so much pushing and popping going on, I have to make a deep copy of the data structure before running an analysis on it. So, I don't know if I made my question very clear -- but the main points are: What kind of data structures should be considered when analyzing sequential data points to conform to a pattern that does NOT require random access? What kind of data structures should be considered when classifying extrema of a set of data points?
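
    One possible shape for this, sketched in C# (all names are illustrative): a pattern only needs forward, read-only access to a fixed-size window of rows, so a plain indexed sequence plus a start offset avoids the pushing, popping and deep copies entirely:

        using System;
        using System.Collections.Generic;

        public sealed class Ticker
        {
            public DateTime Date;
            public string Symbol;
            public decimal Open, High, Low, Close;
        }

        public interface ICandlePattern
        {
            int RowsNeeded { get; }                                  // e.g. 4 for a four-candle pattern
            bool Matches(IReadOnlyList<Ticker> series, int start);   // inspects rows [start, start + RowsNeeded)
        }

        public static class PatternScanner
        {
            // Slide over the series and yield the start index of every match.
            // Only sequential access is needed, so a plain array or List<T> suffices; nothing is copied.
            public static IEnumerable<int> Scan(IReadOnlyList<Ticker> series, ICandlePattern pattern)
            {
                for (int i = 0; i + pattern.RowsNeeded <= series.Count; i++)
                    if (pattern.Matches(series, i))
                        yield return i;
            }
        }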

    Read the article

  • Need explanation of amortization in algorithm analysis

    - by Pradeep
    I am learning algorithm analysis and came across an analysis tool for understanding the running time of an algorithm with widely varying performance, which is called amortization. The author writes: "An array with an upper bound of n elements, with a fixed bound N on its size. The operation clear takes O(n) time, since we should dereference all the elements in the array in order to really empty it." The above statement is clear and valid. Now consider the next passage: "Now consider a series of n operations on an initially empty array. If we take the worst-case viewpoint, the running time is O(n^2), since the worst case of a single clear operation in the series is O(n) and there may be as many as O(n) clear operations in the series." From the above statement, how is the time complexity O(n^2)? I did not understand the logic behind it. If 'n' operations are performed, how is it O(n^2)? Please explain what the author is trying to convey.
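
    To make the counting explicit (this just restates the quoted worst-case argument and then the amortized view, in LaTeX-style notation):

        % worst-case (per-operation) view: at most n operations, each costing at most O(n)
        \sum_{i=1}^{n} c_i \le n \cdot O(n) = O(n^2)

        % aggregate (amortized) view: each element is inserted by one operation and
        % dereferenced by at most one later clear, so the whole sequence costs O(n)
        \sum_{i=1}^{n} c_i = O(n) \quad\Rightarrow\quad O(1) \text{ amortized per operation}

    So the O(n^2) bound simply multiplies the worst cost of a single operation by the number of operations; it is correct but pessimistic, which is exactly what amortized analysis tightens.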

    Read the article

  • Common request: export #Tabular model and data to #PowerPivot

    - by Marco Russo (SQLBI)
    I received this request in many courses, messages and also forum discussions: having an Analysis Services Tabular model, it would be nice to be able to extract a corresponding PowerPivot data model. In order of priority, here are the specific features people (including me) would like to see:
    - Create an empty PowerPivot workbook with the same data model as a Tabular model
    - Change the connections of the tables in the PowerPivot workbook, extracting data from the Tabular data model
      - Every table should have an EVALUATE 'TableName' query in DAX
    - Apply a filter to data extracted from every table
      - For example, you might want to extract all data for a single country or year or customer group
      - Using the same technique of applying filters used for role-based security would be nice
    - Expose an API to automate the process of creating a PowerPivot workbook
      - Use case: prepare one workbook for every employee containing only his data, which he can use offline
      - Common request for salespeople who want a mini-BI tool to use in front of the customer/lead/supplier, regardless of whether a connection is available
    This feature would increase the adoption of PowerPivot and Tabular (and, therefore, Business Intelligence licenses instead of Standard), and would probably raise the sales of Office 2013 / Office 365 driven by ISVs, who are the companies requesting this feature the most. If Microsoft did this, it would be acceptable for it to work only on Office 2013. But if a third party does it, it will make sense (for their revenues) to cover both Excel 2010 and Excel 2013. Another important reason for this feature is that the "Offline cube" feature that you have in Excel is not available when your PivotTable is connected to a Tabular model; it can only be used when you connect to Analysis Services Multidimensional. If you think this is an important feature, you can vote for this Connect item.
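
    A minimal sketch of the per-table DAX extraction query and the filtered variant described in the list above (table, column and value are illustrative):

        EVALUATE 'Sales'

        -- filtered extraction, e.g. one salesperson's data only
        EVALUATE FILTER ( 'Sales', 'Sales'[SalesPersonId] = 17 )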

    Read the article

  • Parallelize incremental processing in Tabular #ssas #tabular

    - by Marco Russo (SQLBI)
    I recently ran into a problem trying to improve the parallelism of Tabular processing. As you know, multiple tables can be processed in parallel, whereas the processing of several partitions within the same table cannot be parallelized. When you perform an incremental update by adding only new rows to an existing table, what you really do is add rows to a partition, so adding rows to many tables means adding rows to several partitions. The particular condition you have in this case is that every partition in which you add rows belongs to a different table. Adding rows implies using the ProcessAdd command; its QueryBinding parameter specifies a SQL syntax to read new rows, otherwise the original query specified for the partition will be used, and it could generate duplicated data if you don't have a dynamic behavior on the SQL side. If you create the required XMLA code manually, you will find that the QueryBinding node that should be part of the ProcessAdd command has to be moved out from ProcessAdd in case you are using a Batch command with more than one Process command (which is the reason why you want to use a single batch: run multiple process operations in parallel!). If you use AMO (Analysis Management Objects) you will find that this combination is not supported, even if you don't have a syntax error compiling the code, but you might obtain this error at execution time: The syntax for the 'Process' command is incorrect. The 'Bindings' keyword cannot appear under a 'Process' command if the 'Process' command is a part of a 'Batch' command and there are more than one 'Process' commands in the 'Batch' or the 'Batch' command contains any out of line related information. In this case, the 'Bindings' keyword should be a part of the 'Batch' command only. If this is happening to you, the best solution I've found is manipulating the XMLA code generated by AMO, moving the Binding nodes to the right place. A more detailed description of the issue and the code required to send a correct XMLA batch to Analysis Services is available in my article Parallelize ProcessAdd with AMO. By the way, the same technique (and code) can also be used if you have the same problem in a Multidimensional model.
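
    For context, a sketch of the usual AMO pattern for sending several process commands as one parallel batch is shown below: capture the XMLA instead of executing immediately, then execute the capture log. The fix described in the article then amounts to editing the captured XMLA (moving the Bindings nodes) before sending it, rather than calling ExecuteCaptureLog directly. Object and table names are illustrative:

        using Microsoft.AnalysisServices;

        var server = new Server();
        server.Connect(@"Data Source=localhost\TABULAR");
        Database db = server.Databases.GetByName("Sales");
        Cube model = db.Cubes[0];

        // Capture the XMLA instead of executing each command immediately.
        server.CaptureXml = true;
        foreach (var tableName in new[] { "Orders", "OrderLines" })
        {
            // In a Tabular model each table surfaces as a measure group with (at least) one partition.
            Partition partition = model.MeasureGroups.GetByName(tableName).Partitions[0];
            partition.Process(ProcessType.ProcessAdd);
        }

        // server.CaptureLog now holds the captured XMLA statements; this call sends them
        // as a single transactional, parallel batch.
        server.ExecuteCaptureLog(true, true);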

    Read the article

  • SharePoint Content and Site Editing Tips

    - by Bil Simser
    A few content management and site editing tips for power users on this bacon flavoured unicorn morning. The theme here is keep it clean!

    Write "friendly" email addresses
    Remember it's human beings reading your content. So seeing something like "If you have questions please send an email to [email protected]" breaks up the readability. Instead just do the simple steps of writing the content in plain English and going back, highlighting the name and inserting a link (note: you might have to prefix the link with mailto:[email protected]). It makes for a friendlier looking page and hides the ugliness that is sometimes in email addresses.

    Use friendly column and list names
    This is a big pet peeve of mine. When you first create a column or list with spaces the internal name is changed. The display name might be "My Amazing List of Animals with Large Testicles" but the internal (and link) name becomes "My_x00x20_Amazing_x00x20_List_x00x20_of_x00x20_Animals_x00x20_with_x00x20_Large_x00x20_Testicles". What's worse is if you create a publishing page named "This Website is Fueled By a Dolphin's Spleen". Not only is it incorrect grammar, but the apostrophe wreaks havoc on both the internal name for the list (with lots of crazy hex codes) as well as the hyperlink (where everything is uuencoded). Instead create the list with a distinct and compact name then go back and change it to whatever you want. The end result is a better formed name that you can both script and access in code more easily.

    Keep your Views Clean
    When you add a column to a list or create a new list the default is to add it to the default view. Do everyone a favour and don't check this box! The default view of a list should be something similar to the Title field and nothing else. Keep it clean. If you want to set a default view that's different, go back and create one with all the fields and filtering and sorting columns you want and set it as default. It's a good idea to keep the original AllItems.aspx (note the lack of space in the filename!) easy and unfiltered. It's also a good idea to keep your column count down in views. Don't let every column be added by default and don't add every column just because you can. Create separate views for distinct responsibilities and try to keep the number of columns down to a single screen to prevent horizontal scrolling.

    Simple Navigation
    The Quick Launch is a great tool for navigating around your site but don't use the default of adding all lists to it. Uncheck that box and keep navigation simple. Create custom groupings that make sense: if "Documents and Lists" doesn't fit your site but "Reports and Notices" makes more sense, then do it. Also hide internal lists from the Quick Launch. For example, if most users don't need to see all the lookup tables you might have on a site, don't show them. You can use audience filtering on the Quick Launch if you want to hide admin items from non-admin users, so consider that as an option.

    Enjoy!

    Read the article

  • Amazon S3 not sending Content-Type header

    - by Luke____
    I have an application that downloads content from various sources. It relies on the "Content-Type" header being set on images. The majority of web servers do this correctly, but it appears the Amazon S3 server is not setting the Content-Type. I assume Amazon's servers are configured correctly, so what could be the problem? Are these images not uploaded correctly? Or should I not be relying on the content type being set? Example Thanks
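
    For reference, the Content-Type that S3 serves is just per-object metadata fixed at upload time, so the usual cause is uploads that never set it. A sketch with the AWS SDK for .NET (bucket, key and paths are illustrative):

        using Amazon.S3;
        using Amazon.S3.Model;

        var client = new AmazonS3Client();   // credentials/region taken from config or environment

        var request = new PutObjectRequest
        {
            BucketName  = "my-bucket",
            Key         = "images/photo.jpg",
            FilePath    = @"C:\images\photo.jpg",
            ContentType = "image/jpeg"       // omit this and S3 may store a generic or missing type
        };
        client.PutObject(request);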

    Read the article
