Search Results

Search found 1603 results on 65 pages for 'analytics'.

Page 60/65 | < Previous Page | 56 57 58 59 60 61 62 63 64 65  | Next Page >

  • CFOs: Do You Have a Playbook for Growth?

    - by Oracle Accelerate for Midsize Companies
    by Jim Lein, Oracle Midsize Programs In most global markets, CFOs are optimistic about their company's growth opportunities. Deloitte's CFO Signals report, "Time to Accelerate," found that: In the U.K., business optimism is at its highest level in three and a half years. Optimism in North America rose from a strong +42% last quarter (Q2 to Q3 2013) to an even stronger +54%. In the inaugural Southeast Asia survey, 44% of CFOs reported a positive outlook despite worries over the Chinese economy and political uncertainty. Sustainable and profitable business growth doesn't usually happen by accident. Companies need a playbook for growth that's owned by the CFO. And today, that playbook must leverage the six enabling technologies--Social, Big Data, Mobile, Cloud, Analytics, and The Internet of Things (or, as Oracle president Mark Hurd explains, "The Internet of the People"). On Monday June 9 at 2:00 pm Eastern, CFO.com is hosting a webcast, "The CFO Playbook on Growth: How CFOs Can Boost Efficiency and Performance with Automation". "Investing in technology begins with a business-metric-driven business case with clear, tangible business results expected," says John Lieblang, Affiliate Partner with Waterstone Management Group. "The progressive CFO has learned how to forge a partnership with the CIO to align everyone in the 'result value chain' to be accountable for the business results, not just for functional technology." Click HERE to register. Looking for more news and information about Oracle Solutions for Midsize Companies? Read the latest Oracle for Midsize Companies Newsletter. Sign up to receive the latest communications from Oracle's industry leaders and experts. Jim Lein I evangelize Oracle's enterprise solutions for growing midsize companies. I recently celebrated 15 years with Oracle, having joined JD Edwards in 1999. I'm based in Evergreen, Colorado and love relating stories about creativity and innovation whether they be about software, live music, or the mountains. The views expressed here are my own, and not necessarily those of Oracle.

    Read the article

  • OBI10g will be Non-Qualifying for OPN Specialisation in September 2013

    - by Mike.Hallett(at)Oracle-BI&EPM
    Just to alert you that OPN Specialisation is software version-specific: as we bring out new product versions, the specialisation criteria and exams need to be updated to the new version. Specifically for our analytics partners, be aware that your "OBI10g Specialisation" credentials will be Non-Qualifying for OPN Specialisation in September 2013. Therefore, to keep your profile up to date, please start now to take the latest OBI11g exams and apply to OPN to upgrade to "OBI11g Specialisation". Find out more about the new version OBI11g exams to update your OPN Specialisation @ OPN Exam for OBI Suite 11g is Now LIVE, and more generally the Update on BI & EPM Specializations and Knowledge Zones. And to help you pass, see the Oracle Business Intelligence Foundation Suite 11 Essentials Exam Study Guide. Also check the documents on OPN for Frequently Asked Questions for Product Version Specialization and the latest Specializations Catalog.

    Read the article

  • College Ratings via the Federal Government

    - by user9147039
    A few weeks back you might remember news about a higher education rating system proposal from the Obama administration. As I've discussed previously, political and stakeholder pressures to improve outcomes and increase transparency are stronger than ever before. The executive branch proposal is intended to make progress in this area. Quoting from the proposal itself, "The ratings will be based upon such measures as: Access, such as percentage of students receiving Pell grants; Affordability, such as average tuition, scholarships, and loan debt; and Outcomes, such as graduation and transfer rates, graduate earnings, and advanced degrees of college graduates." This is going to be quite complex, to say the least. Most notably, higher ed is not monolithic. From community and other 2-year colleges, to small private 4-year colleges, to professional schools, to large public research institutions…the many walks of higher ed life are, well, many. Designing a ratings system that doesn't wind up with lots of unintended consequences and collateral damage will be difficult. At best you would end up potentially tarnishing the reputation of certain institutions that were actually performing well against the metrics and outcome measures that make sense in their "context" of education. At worst you could spend a lot of time and resources designing a system that would lose credibility with its "customers". A lot of institutions I work with already have in place systems like the one described above. They are tracking completion rates, completion timeframes, transfers to other institutions, job placement, and salary information. As I talk to these institutions there are several constants worth noting: • Deciding on which metrics to measure is complicated. While employment and salary data are relatively easy to track, qualitative measures are more difficult. How do you quantify the benefit to someone who studies in a field that may not compensate him or her as well as another field but that provides huge personal fulfillment and reward? • The data is available, but the systems to transform the data into actual information that can be used in meaningful ways are not. Too often in higher ed, information is siloed. As such, much of the data that needs to be part of a comprehensive system sits in multiple organizations, oftentimes outside the reach of core IT. • Politics and culture are big barriers. One of the areas that my team and I spend a lot of time talking about with higher ed institutions all over the world is the imperative to optimize for student success. This, like the tracking of students' achievement after graduation, requires a level of organizational capacity that does not currently exist. The primary barrier is the culture of "data islands" in higher ed, and the need for leadership to drive out the divisions between departments, schools, colleges, etc. and institute academy-wide analytics and data stewardship initiatives that will enable student success. • Data quality is a very big issue. So many disparate systems exist (some on premise, some "in the cloud") that keep data about "persons" using different means to identify them. Establishing a single source of truth about an individual and his or her data is difficult without some type of data quality policy and tools. Good tools actually exist but are seldom leveraged. Don't misunderstand - I think it's a great idea to drive additional transparency and accountability into the system of higher education.
And not just at home, but globally. Students and parents need access to key data to make informed, responsible choices. The tools exist to not only enable this kind of information to be shared but to capture the very metrics stakeholders care most about and in a way that makes sense in the context of a given institution's "place" in the overall higher ed panoply.

    Read the article

  • SEO/Google: How should I handle multiple countries and domains?

    - by Valorized
    Hello. I'm the webmaster of an online shop based in Austria (Europe). Therefore we registered "example.at". We also own other domain names like "example-shop.com" and "example.info". Currently all those domains are redirected (301) to the .at one. Still available: "example.net" and "example.org" (and .ws/.cc); unfortunately not available: .de/.eu. The .com is currently owned by one of our partners; the contract ends in 2012, but until then we have no chance to get that one. Recently I read more about geo-targeting and I noticed ONE big deal. The TLD ".at" is hardly recognised in Germany (google.de) whereas it is excellently listed in Austria (google.at). As a result of the .at I cannot set the target location manually (or to unlisted). More info: https://www.google.com/support/webmasters/bin/answer.py?answer=62399&hl=en This is a big problem. I looked at Google Analytics and - although Germany is 10x as big as Austria - there are more visits from Austria. So, how should I configure the domain in order to get the best results in both Germany and Austria? I thought of some solutions: First, I could stop redirecting the .info. Then there would be a duplicate of the .at one. Moreover, in Webmaster Tools, I could set the target location of the .info to Germany. As the .at still targets Austria, both would be targeted - however I don't know if Google punishes one of them because of the duplicate content. Same as 1. but with .net or .org (I think .info is not a "nice" domain and moreover I think search engines prefer .com, .net or .org to .info). Same as 1. (or 2.) but with a rel="canonical" on the new one (pointing to the .at). Con: I don't think this will improve the situation, because it still tells Google that the .at one is more important, like: "if .info points to .at, the target may still be Austria". rel="canonical" on the .at pointing to the new one (.info or .net or .org). However I fear that this will have a negative impact on the listing on google.at because: "Hey, the well-known .at is not important anymore, so let's focus on the .info which is not well-known." - Therefore: bad position in search results. Redirect .at to the new one (.info or .net or .org) with a 301 redirect. Con: might be worse than 4; we might lose PageRank (or "the value of the page", because Google says that PageRank is not important anymore). Moreover this might be even more confusing for the customers. In 3. or 4. customers don't get redirected; they do not see the canonical meta tag. So, dear experts, please tell me what the best option would be! Thank you very much for your advice in advance and please excuse the long question. I really appreciate this network! Please note: It's exactly the same content AND language. In Austria we speak German.

    Read the article

  • Using Microsoft.Reporting.WebForms.ReportViewer in a custom SharePoint WebPart

    - by iHeartDucks
    I have a requirement where I have to display some data (from a custom db) and let the user export it to an Excel file. I decided to use the ReportViewer control present in Microsoft.Reporting.WebForms.ReportViewer. I added the required assembly to the project references and I added the following code to test it out: protected override void CreateChildControls() { base.CreateChildControls(); objRV = new Microsoft.Reporting.WebForms.ReportViewer(); objRV.ID = "objRV"; this.Controls.Add(objRV); } The first error asked me to add this line in the web.config, which I did, and the next error says The type 'Microsoft.SharePoint.Portal.Analytics.UI.ReportViewerMessages, Microsoft.SharePoint.Portal, Version=12.0.0.0, Culture=neutral, PublicKeyToken=71e9bce111e9429c' does not implement IReportViewerMessages Is it possible to use ReportViewer in my custom Web Part? I'd rather not bind a repeater and write my own export-to-Excel code; I want to use something which is already built by Microsoft. Any ideas on what I can reuse? Edit I commented out the following line <add key="ReportViewerMessages"... and now my code looks like this after I added a data source to it: protected override void CreateChildControls() { base.CreateChildControls(); objRV = new Microsoft.Reporting.WebForms.ReportViewer(); objRV.ID = "objRV"; objRV.Visible = true; Microsoft.Reporting.WebForms.ReportDataSource datasource = new Microsoft.Reporting.WebForms.ReportDataSource("test", GroupM.Common.DB.GetAllClientCodes()); objRV.LocalReport.DataSources.Clear(); objRV.LocalReport.DataSources.Add(datasource); objRV.LocalReport.Refresh(); this.Controls.Add(objRV); } but now I do not see any data on the page. I did check my db call and it does return a data table with 15 rows. Any ideas why I don't see anything on the page?

    Read the article

  • pyscripter Rpyc error

    - by jf328
    pyscripter 2.5.3.0 x64, python 2.7.7 anaconda 2.0.1, windows 7 I was using PyScripter and EPD Python happily in 32-bit, no problem. I just changed to the 64-bit Anaconda version and re-installed everything, but now PyScripter cannot import rpyc -- the failure happens when it runs with the internal engine (not Anaconda), and there is no such error in pure Python. Thanks very much! btw, there is a similar SO post from a few years ago, but the answer there does not work. *** Python 2.7.3 (default, Apr 10 2012, 23:24:47) [MSC v.1500 64 bit (AMD64)] on win32. *** Internal Python engine is active *** *** Internal Python engine is active *** >>> import rpyc Traceback (most recent call last): File "<interactive input>", line 1, in <module> File "C:\Anaconda\lib\site-packages\rpyc\__init__.py", line 44, in <module> from rpyc.core import (SocketStream, TunneledSocketStream, PipeStream, Channel, File "C:\Anaconda\lib\site-packages\rpyc\core\__init__.py", line 1, in <module> from rpyc.core.stream import SocketStream, TunneledSocketStream, PipeStream File "C:\Anaconda\lib\site-packages\rpyc\core\stream.py", line 7, in <module> import socket File "C:\Anaconda\Lib\socket.py", line 47, in <module> import _socket ImportError: DLL load failed: The specified procedure could not be found. >>> C:\research>python Python 2.7.7 |Anaconda 2.0.1 (64-bit)| (default, Jun 11 2014, 10:40:02) [MSC v.1500 64bit (AMD64)] on win32 Type "help", "copyright", "credits" or "license" for more information. Anaconda is brought to you by Continuum Analytics. Please check out: http://continuum.io/thanks and https://binstar.org >>> import rpyc >>>
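
    A quick diagnostic (not a fix) that can be run both inside PyScripter and from an Anaconda prompt: the two console banners above already show the internal engine is Python 2.7.3 while Anaconda is 2.7.7, so printing which interpreter and which DLL search path are actually in use usually pinpoints the mismatch. Paths and output below are illustrative.

        # Troubleshooting sketch: run in both environments and compare the output.
        # The import failure usually comes down to which interpreter and which
        # DLL search path the embedded engine is actually using.
        from __future__ import print_function
        import os
        import sys

        print("interpreter  :", sys.executable)
        print("version      :", sys.version)
        print("sys.path[:3] :", sys.path[:3])
        print("PATH has Anaconda?",
              any("anaconda" in p.lower()
                  for p in os.environ.get("PATH", "").split(os.pathsep)))

        try:
            import _socket
            print("_socket loaded from:", getattr(_socket, "__file__", "<builtin>"))
        except ImportError as exc:
            print("_socket failed:", exc)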

    Read the article

  • Getting Indy Error "Could not bind socket. Address and port are already in use"

    - by M Schenkel
    I have developed a Delphi web server application (TWebModule). It runs as an ISAPI DLL on Apache under Windows. The application in turn makes frequent https calls to other web sites using the Indy TIdHTTP component. Periodically I get this error when using the TIdHTTP.Get method: Could not bind socket. Address and port are already in use Here is the code: IdSSLIOHandlerSocket1 := TIdSSLIOHandlerSocketOpenSSL.create(nil); IdHTTP := TIdHTTP.create(nil); idhttp.handleredirects := True; idhttp.OnRedirect := DoRedirect; with IdSSLIOHandlerSocket1 do begin SSLOptions.Method := sslvSSLv3; SSLOptions.Mode := sslmUnassigned; SSLOptions.VerifyMode := []; SSLOptions.VerifyDepth := 2; end; with IdHTTP do begin IOHandler := IdSSLIOHandlerSocket1; ProxyParams.BasicAuthentication := False; Request.UserAgent := 'Test Google Analytics Interface'; Request.ContentType := 'text/html'; request.connection := 'keep-alive'; Request.Accept := 'text/html, */*'; end; try idhttp.get('http://www.mysite.com......'); except ....... end; IdHTTP.free; IdSSLIOHandlerSocket1.free; I have read about the ReuseSocket property, which can be set on both the TIdHTTP and TIdSSLIOHandlerSocketOpenSSL objects. Will setting this to rsTrue solve my problems? I ask because I have not been able to replicate this error; it just happens periodically. Other considerations: I know multiple instances of TWebModule are being spawned. Would wrapping all calls to TIdHTTP.Get within a TCriticalSection solve the problem?

    Read the article

  • SEM/SEO tasks doubts

    - by Josemalive
    Hello. Actually I think that I have a strong knowledge of SEO, but I'm having some doubts about the following: I will have to increase the position in Google of certain product pages of a company over the next months. I suppose that the following tasks alone will not be sufficient: Improve usability of those pages. Change the page titles. Add meta description and keywords. URLs in a REST style. 301 HTTP headers so as not to lose PageRank for the new URLs. Optimizing content for Google. Configure links of the website (follow and nofollow attributes). Get more inbound links (link building tasks). Create RSS. Put the main website on Twitter (using twitter feed) using the RSS. Put the main website on Facebook. Create a YouTube channel. Invest in AdWords. Invest in other online advertising companies. Use sitemap.xml and Google Webmaster Tools. Use Google Trends to analyze the volume of searches of certain keywords. Use Google Analytics to analyze weak points and good points of your site, and find new opportunities in keywords. Use tools to find new keywords related to your content. Do you have some internet links, or knowledge about all the tasks that an SEO expert should do? Could you share some knowledge about what kind of business could be done with other companies (B2B) to increase the search engine position of those product pages? Do you know more techniques for getting more inbound links? (I only know link exchange.) Thanks in advance. Best Regards. Jose.

    Read the article

  • Browser security when calling HTTP assets via a SWF on a HTTPS site

    - by Mark Ursino
    We have a site that runs on HTTPS and needs to pull in various JS assets to run a video player on the page. We get a browser security warning on this page because the JS files we are externally calling are being accessed via HTTP, not HTTPS. E.g. // HTTP reference on a HTTPS site <script src="http://the-cdn.tld/player.js"></script> Simply accessing this one JS asset via HTTP and not HTTPS will cause the browser security warning, which we need to get rid of. The provider of the JS file does not support an HTTPS equivalent (like Google Analytics does). We would ideally love to just do the following, but the provider does not offer it: // HTTPS reference on a HTTPS site <script src="https://the-cdn.tld/player.js"></script> One option we had was to just download a copy of the JS file and serve it on the HTTPS site; however, we have concerns with this as it is not recommended by the provider and will not include updates from them. Assuming we cannot do that, we were thinking another possible option would be to use a SWF file as a proxy. We were thinking that we could have one of our Flash guys create a SWF that loads the HTTP-served JS file into the page. We were wondering whether, if this SWF makes the request, that would prevent the browser from showing the security warning or not. I assumed that we would still see the warning since the SWF is still making the request through the browser, but I wanted to see what the hive mind thinks.

    Read the article

  • Application log aggregation, management and notifications...

    - by Matthew Savage
    I'm wondering what everyone is using for logging, log management and log aggregation on their systems. I am working in a company which uses .NET for all its applications, and all systems are Windows-based. Currently each application looks after its own logging and notifications of failures (e.g. if app A fails it will send out its own 'call for help' to an admin). While this current practice works, it's a bit hacky and hard to manage. I've been trying to find some options for making this work better and I've come up with the following: log4net & Chainsaw (ah, if it works). Logging via log4net or another framework into a central database & rolling our own management tool. Logging to the Windows event log and using MOM or System Center Operations Manager to aggregate and manage each of these servers & their apps. A hand-rolled solution to suck all the log files into one point and work some magic across them. Essentially what we are after is something which can pull log entries all together and allow for some analytics to be run across them, plus use a kind of event-based system to, for example, send out a warning email when there have been 30+ warning-level logs for an application in the last x minutes. So is there anything I've missed, or something someone else can suggest?
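
    For the 30-warnings-in-x-minutes idea, a rough sketch of the threshold-alert logic in Python (illustrative only, not .NET; the log share, line format, addresses and mail relay are placeholder assumptions):

        # Illustrative sketch: scan aggregated log files and alert when an app logs
        # too many warnings in a short window. Paths, log format, and SMTP settings
        # are hypothetical placeholders.
        import glob
        import re
        import smtplib
        from collections import defaultdict
        from datetime import datetime, timedelta
        from email.message import EmailMessage

        LOG_GLOB = r"\\logserver\logs\*.log"          # assumed central share
        LINE_RE = re.compile(r"^(?P<ts>\S+ \S+) \[(?P<app>[^\]]+)\] (?P<level>\w+) ")
        WINDOW = timedelta(minutes=10)
        THRESHOLD = 30

        def recent_warning_counts(now=None):
            """Count WARN-level entries per application inside the time window."""
            now = now or datetime.now()
            counts = defaultdict(int)
            for path in glob.glob(LOG_GLOB):
                with open(path, encoding="utf-8", errors="replace") as fh:
                    for line in fh:
                        m = LINE_RE.match(line)
                        if not m or m.group("level") != "WARN":
                            continue
                        ts = datetime.strptime(m.group("ts"), "%Y-%m-%d %H:%M:%S")
                        if now - ts <= WINDOW:
                            counts[m.group("app")] += 1
            return counts

        def alert(app, count):
            msg = EmailMessage()
            msg["Subject"] = f"{app}: {count} warnings in the last {WINDOW}"
            msg["From"] = "logwatch@example.com"
            msg["To"] = "admin@example.com"
            msg.set_content(f"{app} logged {count} WARN entries; please investigate.")
            with smtplib.SMTP("mail.example.com") as smtp:   # assumed relay host
                smtp.send_message(msg)

        if __name__ == "__main__":
            for app, count in recent_warning_counts().items():
                if count >= THRESHOLD:
                    alert(app, count)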

    Read the article

  • Downloading javascript Without Blocking

    - by doug
    The context: my question relates to improving web-page loading performance, and in particular the effect that javascript has on page loading (resources/elements below the script are blocked from downloading/rendering). This problem is usually avoided/mitigated by placing the scripts at the bottom (eg, just before the closing </body> tag). The code I am looking at is for web analytics. Placing it at the bottom reduces its accuracy; and because this script has no effect on the page's content, ie, it's not re-writing any part of the page--I want to move it inside the head. Just how to do that without ruining page-loading performance is the crux. From my research, I've found six techniques (w/ support among all or most of the major browsers) for downloading scripts so that they don't block down-page content from loading/rendering: (i) XHR + eval(); (ii) XHR + 'inject'; (iii) download the HTML-wrapped script in an iFrame; (iv) setting the script tag's 'async' flag to 'TRUE' (HTML 5 only); (v) setting the script tag's 'defer' attribute; and (vi) 'Script DOM Element'. It's the last of these I don't understand. The javascript to implement pattern (vi) is: (function() { var q1 = document.createElement('script'); q1.src = 'http://www.my_site.com/q1.js'; document.documentElement.firstChild.appendChild(q1); })(); Seems simple enough: inside this anonymous function, a script element is created, its 'src' attribute is set to the script's location, then the script element is added to the DOM. But while each line is clear, it's still not clear to me how exactly this pattern allows script loading without blocking down-page elements/resources from rendering/loading?

    Read the article

  • Silverlight WinDbg Memory Release Issue

    - by Chris Newton
    Hi, I have used WinDbg successfully on a number of occasions to track down and fix memory leaks (or more accurately the CLR's inability to garbage collect a released object), but am stuck with one particular control. The control is displayed within a child window and when the window is closed a reference to the control remains and cannot be garbage collected. I have resolved what I believe to be the majority of the issues that could have caused the leak, but the !gcroot of the affected object is not clear (to me at least) as to what is still holding on to this object. The output is always the same regardless of the content being presented in the child window: DOMAIN(03FB7238):HANDLE(Pinned):79b12f8:Root: 06704260(System.Object[])-> 05719f00(System.Collections.Generic.Dictionary`2[[System.IntPtr, mscorlib],[System.Object, mscorlib]])-> 067c1310(System.Collections.Generic.Dictionary`2+Entry[[System.IntPtr, mscorlib],[System.Object, mscorlib]][])-> 064d42b0(System.Windows.Controls.Grid)-> 064d4314(System.Collections.Generic.Dictionary`2[[MS.Internal.IManagedPeerBase, System.Windows],[System.Object, mscorlib]])-> 064d4360(System.Collections.Generic.Dictionary`2+Entry[[MS.Internal.IManagedPeerBase, System.Windows],[System.Object, mscorlib]][])-> 064d3860(System.Windows.Controls.Border)-> 064d4218(System.Collections.Generic.Dictionary`2[[MS.Internal.IManagedPeerBase, System.Windows],[System.Object, mscorlib]])-> 064d4264(System.Collections.Generic.Dictionary`2+Entry[[MS.Internal.IManagedPeerBase, System.Windows],[System.Object, mscorlib]][])-> 064d3bfc(System.Windows.Controls.ContentPresenter)-> 064d3d64(System.Collections.Generic.Dictionary`2[[MS.Internal.IManagedPeerBase, System.Windows],[System.Object, mscorlib]])-> 064d3db0(System.Collections.Generic.Dictionary`2+Entry[[MS.Internal.IManagedPeerBase, System.Windows],[System.Object, mscorlib]][])-> 064d3dec(System.Collections.Generic.Dictionary`2[[System.UInt32, mscorlib],[System.Windows.DependencyObject, System.Windows]])-> 064d3e38(System.Collections.Generic.Dictionary`2+Entry[[System.UInt32, mscorlib],[System.Windows.DependencyObject, System.Windows]][])-> 06490b04(Insurer.Analytics.SharedResources.Controls.HistoricalKPIViewerControl) If anyone has any ideas about what could potentially be the problem, or if you require more information, please let me know. Kind Regards, Chris

    Read the article

  • looking to streamline my RSS feed mashup

    - by Mark Cejas
    Hello crafty developers, I have aggregated RSS feeds from various sources with RSSOwl, fetching directly from the Social Mention API. The RSS feeds are categorized into the following major categories: blogs, news, twitter, Q&A and social networking sites. Each major category is nested with a common group of RSS feeds that represent a particular client/brand ontology. Merging these feeds into the RSSOwl reader application allows me to conduct and save refined search queries (from the aggregated data) into a single file - that I can then tag and further segment for analysis. This scheme is utilized for my own research needs and has helped me considerably. However, I find this RSS mashup scheme kinda clumsy; it requires quite a bit of time to initially organize all of the feeds, and I would like to be able to do further natural language processing on the data as well as eventually be able to rank the collected list of URLs into some order of media prominence - right now I don't want to pay the ridiculous Radian6 web analytics fees, when my intuition is telling me that with a bit of 'elbow grease' I can maybe leverage some available resources online to develop a functional low-scale web mining application and get some good intelligence from it. I am now starting to learn a little about computer science - my background is in physical science/statistics - so is my thinking on the right track? So, I guess I am imagining an application that allows me to query in a refined manner. A manner that allows me to search for keyword combinations, applying AND/OR operators, selectively focus my queries on particular sources - like a collection of blogs or twitter, or social networking communities - then save the results of my queries into a structured format that can then be manipulated and explored. Am I dreaming? I just had to get all of this out. Any bit of advice and insight would be hugely appreciated. my best, Mark
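
    As a starting point for the 'elbow grease' route, a rough Python sketch of the aggregate-then-query idea using the feedparser package; the feed URLs, categories and keywords are placeholders, and the output is just a CSV that can be tagged and segmented later:

        # Rough sketch of the aggregation-and-query idea with feedparser.
        # Feed URLs and keywords below are placeholders, not real project settings.
        import csv
        import feedparser

        FEEDS = {
            "blogs":   ["http://example.com/brand-blogs.rss"],
            "news":    ["http://example.com/brand-news.rss"],
            "twitter": ["http://example.com/brand-twitter.rss"],
        }

        def fetch_entries(categories=None):
            """Yield (category, title, link, summary) for every entry in the chosen feeds."""
            for category, urls in FEEDS.items():
                if categories and category not in categories:
                    continue
                for url in urls:
                    parsed = feedparser.parse(url)
                    for entry in parsed.entries:
                        yield (category,
                               entry.get("title", ""),
                               entry.get("link", ""),
                               entry.get("summary", ""))

        def matches(text, all_of=(), any_of=()):
            """Simple AND/OR keyword filter over lower-cased text."""
            text = text.lower()
            return (all(k.lower() in text for k in all_of) and
                    (not any_of or any(k.lower() in text for k in any_of)))

        def query_to_csv(path, categories=None, all_of=(), any_of=()):
            with open(path, "w", newline="", encoding="utf-8") as fh:
                writer = csv.writer(fh)
                writer.writerow(["category", "title", "link", "summary"])
                for cat, title, link, summary in fetch_entries(categories):
                    if matches(title + " " + summary, all_of, any_of):
                        writer.writerow([cat, title, link, summary])

        # Example: blog and news mentions containing "acme" AND ("recall" OR "lawsuit").
        query_to_csv("acme_mentions.csv",
                     categories={"blogs", "news"},
                     all_of=["acme"],
                     any_of=["recall", "lawsuit"])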

    Read the article

  • How do I call an Obj-C method from Javascript?

    - by gnfti
    Hi all, I'm developing a native iPhone app using Phonegap, so everything is done in HTML and JS. I am using the Flurry SDK for analytics and want to use the [FlurryAPI logEvent:@"EVENT_NAME"]; method to track events. Is there a way to do this in Javascript? So when tracking a link I would imagine using something like <a onClick="flurryTrackEvent("Click_Rainbows")" href="#Rainbows">Rainbows</a> <a onClick="flurryTrackEvent("Click_Unicorns")" href="#Unicorns">Unicorns</a> "FlurryAPI.h" has the following: @interface FlurryAPI : NSObject { } + (void)startSession:(NSString *)apiKey; + (void)logEvent:(NSString *)eventName; + (void)logEvent:(NSString *)eventName withParameters:(NSDictionary *)parameters; + (void)logError:(NSString *)errorID message:(NSString *)message exception:(NSException *)exception; + (void)setUserID:(NSString *)userID; + (void)setEventLoggingEnabled:(BOOL)value; + (void)setServerURL:(NSString *)url; + (void)setSessionReportsOnCloseEnabled:(BOOL)sendSessionReportsOnClose; @end I'm only interested in the logEvent method(s). If it's not clear by now, I'm comfortable with JS but a recovering Obj-C noob. I've read the Apple docs but the examples described there are all for newly declared methods and I imagine this could be simpler to implement because the Obj-C method(s) are already defined. Thank you in advance for any input.

    Read the article

  • Where to store site settings: DB? XML? CONFIG? CLASS FILES?

    - by Emin
    I am re-building a news portal which already has a large number of visits every day. One of the major concerns when re-building this site was to maximize performance and speed. Having said this, we have done many things, from caching to all sorts of other measures, to ensure speed. Now towards the end of the project, I am having a dilemma of where to store my site settings so as to least affect performance. The site settings will include things such as: Domain, DefaultImgPath, Google Analytics code, default emails of editors, as well as more dynamic design/display feature settings such as the background color of specific DIVs and default color for links etc.. As far as I know, I have 4 choices for storing all this info. Database: storing general settings in the DB and caching them may be a solution; however, I want to limit access to the database to only necessary and essential functions of the project, which generally are insert/update/delete of news items, author articles etc.. XML: I can store these settings in an XML file, but I have not done this sort of thing before so I don't know what kind of problems - if any - I might face in the future. CONFIG: I can also store these settings in web.config. CLASS FILE: I can hard-code all these settings in a SiteSettings class, but since the site admin himself will be able to edit these settings, it may not be the best solution. Currently, I am closer to choosing web.config, but letting people fiddle with it too often is something I do not want. E.g. if somehow I miss a validation for something and it breaks the web.config, the whole site will go down. My concern basically is that I cannot foresee the possible consequences of using any of the methods above (or is there any other?), so I was hoping to get this question over to more experienced people out here who can hopefully help me make my decision.
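
    For comparison, a minimal sketch (in Python rather than ASP.NET) of the "settings file plus in-memory cache" pattern: the file is re-read only when its modification time changes, so ordinary requests never touch the disk or the database. The file name and keys are hypothetical. The same idea maps to ASP.NET through a file-based cache dependency, which also avoids the application restart that editing web.config triggers.

        # Minimal sketch of a cached settings store backed by an external file.
        # "settings.json" and the keys used below are hypothetical.
        import json
        import os
        import threading

        class SiteSettings:
            def __init__(self, path="settings.json"):
                self._path = path
                self._lock = threading.Lock()
                self._mtime = None
                self._data = {}

            def _reload_if_changed(self):
                mtime = os.path.getmtime(self._path)
                if mtime != self._mtime:                 # re-read only on change
                    with open(self._path, encoding="utf-8") as fh:
                        self._data = json.load(fh)
                    self._mtime = mtime

            def get(self, key, default=None):
                with self._lock:
                    self._reload_if_changed()
                    return self._data.get(key, default)

        settings = SiteSettings()
        # e.g. settings.get("GoogleAnalyticsCode"), settings.get("DefaultImgPath")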

    Read the article

  • PayPal sandbox anomalies

    - by Christian
    When testing some donations on my local machine, I set various key=value pairs to do various things (return to a specific thank-you page, get POST data from PayPal and not GET data, and others). I also built my code around the response from the PayPal sandbox. BUT, when my code goes to the production server and we switch on live payments and test with real accounts and money, a few strange things happen: We get a GET response from PayPal - the URL is filled with crap. We get no transaction details. This is the biggie: no name, no txn_id, no dates, nothing. We get a handful of keys etc; it's not totally empty and the payment has gone through, but nowhere near the verbosity of the sandbox. Curious about why this might be? It doesn't really make sense to have a sandbox (or dev environment) that is substantially different from the production environment. Or am I missing something? EDIT: Still no response to my question in the PayPal Developer Forums. I don't even get a donation amount back from PayPal. Is this a setting maybe? EDIT #2: Two of you have suggested checking PDT and Auto-Return. The data analytics guy for the project only 2 hrs ago suggested the same. I have asked the client to confirm this. I can't see a setting for it in the Sandbox so I assume that it is enabled by default?

    Read the article

  • Aggregate path counts using HierarchyID

    - by austincav
    Business problem - understand process fallout using analytics data. Here is what we have done so far: Build a dictionary table with every possible process step. Find each process "start". Find the last step for each start. Join the dictionary table to the last step to find the path to the final step. In the final report output we end up with a list of paths for each start to each final step:

    User  Fallout Step HierarchyID.ToString()
    A     1/1/1
    B     1/1/1/1/1
    C     1/1/1/1
    D     1/1/1
    E     1/1

    What this means is that five users (A-E) started the process. Assume only User B finished; the other four did not. Since this is a simple example (without branching), we want the output to look as follows:

    Step  Unique Users
    1     5
    2     5
    3     4
    4     2
    5     1

    The easiest solution I could think of is to take each HierarchyID.ToString(), parse that out into a set of subpaths, JOIN back to the dictionary table, and output using GROUP BY. Given the volume of data, I'd like to use the built-in HierarchyID functions, e.g. IsAncestorOf. Any ideas or thoughts on how I could write this? Maybe a recursive CTE?
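
    The "parse each path into its subpaths and count" approach can be checked outside the database first; a small Python sketch that reproduces the expected numbers from the example above (in T-SQL the same expansion could be driven by GetLevel()/GetAncestor() against a numbers table, or by a recursive CTE):

        # Sketch of the fallout computation: expand each user's final path into all
        # of its step prefixes, then count distinct users per step depth.
        from collections import defaultdict

        final_paths = {        # user -> HierarchyID.ToString() of the last step reached
            "A": "1/1/1",
            "B": "1/1/1/1/1",
            "C": "1/1/1/1",
            "D": "1/1/1",
            "E": "1/1",
        }

        def step_depths(path):
            """'1/1/1' -> range(1, 4): the user reached steps at depths 1 through 3."""
            depth = len([p for p in path.strip("/").split("/") if p])
            return range(1, depth + 1)

        users_per_step = defaultdict(set)
        for user, path in final_paths.items():
            for depth in step_depths(path):
                users_per_step[depth].add(user)

        for step in sorted(users_per_step):
            print(step, len(users_per_step[step]))
        # Output for the example above:
        # 1 5
        # 2 5
        # 3 4
        # 4 2
        # 5 1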

    Read the article

  • What is the best way to include Javascript?

    - by Paul Tarjan
    Many of the big players recommend slightly different techniques, mostly on the placement of the new <script>. Google Analytics: (function() { var ga = document.createElement('script'); ga.type = 'text/javascript'; ga.async = true; ga.src = ('https:' == document.location.protocol ? 'https://ssl' : 'http://www') + '.google-analytics.com/ga.js'; var s = document.getElementsByTagName('script')[0]; s.parentNode.insertBefore(ga, s); })(); Facebook: (function() { var e = document.createElement('script'); e.async = true; e.src = document.location.protocol + '//connect.facebook.net/en_US/all.js'; document.getElementById('fb-root').appendChild(e); }()); Disqus: (function() { var dsq = document.createElement('script'); dsq.type = 'text/javascript'; dsq.async = true; dsq.src = 'http://' + disqus_shortname + '.disqus.com/embed.js'; (document.getElementsByTagName('head')[0] || document.getElementsByTagName('body')[0]).appendChild(dsq); })(); (post others and I'll add them) Is there any rhyme or reason for these choices or does it not matter at all?

    Read the article

  • Process data BEFORE a 301 Redirect?

    - by Jesse
    So, I've been working on a PHP link shortener (I know, just what the world needs). Basically when the page loads, PHP determines where it needs to go and sends a 301 header to redirect the browser, like so... header("HTTP/1.1 301 Moved Permanently"); header("Location: http://newsite.com"); Now, I'm trying to add some tracking to my redirects and insert some custom analytics data into a MySQL table before the redirect happens. It works perfectly if I don't specify the redirect type and just use: header("Location: http://newsite.com"); But, of course, as soon as you add in the 301 header, nothing else gets processed. Actually, on the first request, it sends the data to MySQL, but on any subsequent requests there's no communication with the database. I assume it's a browser caching issue: once it's seen the 301, it decides there's no reason to parse anything on future requests. But does anyone know if there's any way to get around this? I'd really like to keep it as a 301 for SEO purposes (I believe if you don't specify it sends a 404 by default?). I thought about using .htaccess to prepend a file to the page that will do the MySQL work, but with the 301, wouldn't that just get ignored as well? Anyway, I'm not sure if there's any solution other than using a different type of redirect, but I'm not ready to give up just yet. So, any suggestions would be much appreciated. Thanks!
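
    The same pattern sketched in Python/Flask rather than PHP (table layout and URLs are made up): log the click before emitting the redirect, and note that browsers cache a bare 301 permanently, which is exactly why only the first request reaches the database. A 302, or a 301 sent with caching disabled, keeps every click recordable at the cost of some redirect caching.

        # Sketch: record the analytics row, then send the redirect. Table and
        # column names are invented for the example.
        import sqlite3
        from flask import Flask, redirect, request

        app = Flask(__name__)

        def log_click(slug):
            con = sqlite3.connect("clicks.db")
            con.execute("CREATE TABLE IF NOT EXISTS clicks (slug TEXT, ua TEXT, ref TEXT)")
            con.execute("INSERT INTO clicks VALUES (?, ?, ?)",
                        (slug,
                         request.headers.get("User-Agent", ""),
                         request.referrer or ""))
            con.commit()
            con.close()

        @app.route("/<slug>")
        def follow(slug):
            log_click(slug)                       # runs before any redirect is sent
            target = "http://newsite.com"         # would be a lookup by slug in a real app
            resp = redirect(target, code=301)
            resp.headers["Cache-Control"] = "no-store"   # otherwise browsers cache the 301
            return resp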

    Read the article

  • How to retrieve the parameters of document.write to detect the creation of dynamic tags

    - by user1335906
    In my project I am supposed to identify dynamically created tags, which in scripts can be created through document.write("<script src='jquery.js'></script>") For this I used regular expressions, and my code is as follows: function find_tag_docwrite(text) { var attrib=new Object; var pat_tag=/<((\S+)\s(.*))>/g; while(t=pat_tag.exec(text)) { var tag=RegExp.$1; for(i=0;i<tags.length;i++) { var pat=/(\S+)=((['"]*)(\S+)(['"]*)\3)/g; while(p=pat.exec(f)) { attr=RegExp.$1;val=RegExp.$4; attrib[attr]=val; } } } } In the above function, text is the parameters of document.write. Through this code I am getting the tag names and all the attributes of the tags. But for the example below the above code is not working: var gaJsHost = (("https:" == document.location.protocol) ? "https://ssl." : "http://www."); document.write(unescape("%3Cscript src='" + gaJsHost + "google-analytics.com/ga.js' type='text/javascript'%3E%3C/script%3E")); In such cases regular expressions do not work, so after searching for some time I found hooks on DOM methods. Using this, I thought of creating a hook for the document.write method, but I am not able to understand how it is done. I included the following code in my program but it is not working: function someFunction(text) { console.log(text); } document.write = someFunction; where text is the parameters of document.write. Another problem is that after monitoring all the document.write calls using hooks, I again have to use regex to find tag creations. Is there any alternative?

    Read the article

  • Detecting well behaved / well known bots

    - by Simon_Weaver
    I found this question very interesting: Programmatic Bot Detection. I have a very similar question, but I'm not bothered about 'badly behaved bots'. I am tracking (in addition to Google Analytics) the following per visit: entry URL, referer, user agent, AdWords (by means of query string), whether or not the user made a purchase, etc. The problem is that to calculate any kind of conversion rate I'm ending up with lots of 'bot' visits that are greatly skewing my results. I'd like to ignore as many bot visits as possible, but I want a solution that I don't need to monitor too closely, that won't in itself be a performance hog, and that will preferably still work if someone has javascript disabled. Are there good published lists of the top 100 bots or so? I did find a list at http://www.user-agents.org/ but that appears to contain hundreds if not thousands of bots. I don't want to check every referer against thousands of links. Here is the current Googlebot user agent. How often does it change? Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)
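
    A minimal server-side filter against a handful of well-known crawler tokens, sketched in Python; the pattern list is a small illustrative subset rather than a maintained registry, and stricter checks (such as reverse DNS for Googlebot) can be layered on top:

        # Minimal sketch: most major search-engine and social crawlers identify
        # themselves with one of these User-Agent substrings. The list is a small
        # illustrative subset, not a maintained registry.
        import re

        KNOWN_BOT_PATTERN = re.compile(
            r"googlebot|bingbot|msnbot|slurp|duckduckbot|baiduspider|yandexbot"
            r"|facebookexternalhit|twitterbot|applebot|ahrefsbot|semrushbot",
            re.IGNORECASE,
        )

        def looks_like_known_bot(user_agent):
            """Return True when the User-Agent matches a well-known crawler token."""
            return bool(user_agent) and bool(KNOWN_BOT_PATTERN.search(user_agent))

        visits = [
            "Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)",
            "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36",
        ]
        human_visits = [ua for ua in visits if not looks_like_known_bot(ua)]
        print(len(human_visits))   # 1 -- only the second visit counts toward conversion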

    Read the article

  • Best way to build an application based on R?

    - by Prasad Chalasani
    I'm looking for suggestions on how to go about building an application that uses R for analytics, table generation, and plotting. What I have in mind is an application that: displays various data tables in different tabs, somewhat like in Excel, with columns sortable by clicking; takes user input parameters in some dialog windows; displays plots dynamically (i.e. user-input-dependent) either in a tab or in a new pop-up window/frame. Note that I am not talking about a general-purpose front-end/GUI for exploring data with R (like, say, Rattle), but a specific application. Some questions I'd like to see addressed are: Is an entirely R-based approach even possible (on Windows)? The following passage from the Rattle article in The R Journal intrigues me: "It is interesting to note that the first implementation of Rattle actually used Python for implementing the callbacks and R for the statistics, using rpy. The release of RGtk2 allowed the interface elements of Rattle to be written directly in R so that Rattle is a fully R-based application." If it's better to use another language for the GUI part, which language is best suited for this? I'm looking for a language where it's relatively "painless" to build the GUI, and that also integrates very well with R. From the StackOverflow question "How should I do rapid GUI development for R and Octave methods (possibly with Python)?" I see that Python + PyQt4 + QtDesigner + RPy2 seems to be the best combo. Is that the consensus? Anyone have pointers to specific (open source) applications of the type I describe, as examples that I can learn from?
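
    A hedged sketch of the Python-GUI-plus-rpy2 route mentioned above: R computes the statistics and Python displays them in a table widget (plain Tkinter here to stay in the standard library, but a PyQt table widget works the same way). Assumes R and the rpy2 package are installed:

        # R does the statistics, Python displays the result in a simple table.
        import tkinter as tk
        from tkinter import ttk

        import rpy2.robjects as robjects

        # Let R generate and summarise some data in its global environment.
        robjects.r("set.seed(1); x <- rnorm(200, mean = 10, sd = 2)")
        summary_vec = robjects.r("summary(x)")          # named numeric vector

        root = tk.Tk()
        root.title("R summary via rpy2")

        tree = ttk.Treeview(root, columns=("stat", "value"), show="headings")
        tree.heading("stat", text="Statistic")
        tree.heading("value", text="Value")
        for name, value in zip(summary_vec.names, summary_vec):
            tree.insert("", "end", values=(name, round(float(value), 3)))
        tree.pack(fill="both", expand=True)

        root.mainloop()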

    Read the article

  • sending large data .getJSON or proxy ?

    - by numerical25
    Hey guys. I was told that the only trick to sending data to an external server (i.e. cross-domain) is to use getJSON. Well, my problem is that the data I am sending exceeds the getJSON data limit. I am tracking mouse movements on a screen for analytics. Another option is that I could send a little data at a time, probably every time the mouse moves, but that seems as if it would slow things down. I could set up a proxy server. My question is which would be better: setting up a proxy server, or just sending bits of information via JavaScript or jQuery? What do the professionals use (Google and other companies that build mash-ups that send a lot of data to cross-domain sites)? I need to know the best practices. Thanx!! Also, the data is put into JSON.

    Read the article

  • Rails 3 MySQL 2 reports an error in what looks to be valid SQL syntax

    - by John Judd
    I am trying to use the following bit of code to help in seeding my database. I need to add data continually over development and do not want to have to completely reseed data every time I add something new to the seeds.rb file. So I added the following function to insert the data if it doesn't already exist. def AddSetting(group, name, value, desc) Admin::Setting.create({group: group, name: name, value: value, description: desc}) unless Admin::Setting.find_by_sql("SELECT * FROM admin_settings WHERE group = '#{group}' AND name = '#{name}';").exists? end AddSetting('google', 'analytics_id', '', 'The ID of your Google Analytics account.') AddSetting('general', 'page_title', '', '') AddSetting('general', 'tag_line', '', '') This function is included in the db/seeds.rb file. Is this the right way to do this? However I am getting the following error when I try to run it through rake. rake aborted! Mysql2::Error: You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near 'group = 'google' AND name = 'analytics_id'' at line 1: SELECT * FROM admin_settings WHERE group = 'google' AND name = 'analytics_id'; Tasks: TOP => db:seed (See full trace by running task with --trace) Process finished with exit code 1 What is confusing me is that I am generating correct SQL as far as I can tell. In fact my code generates the SQL and I pass that to the find_by_sql function for the model, Rails itself can't be changing the SQL, or is it? SELECT * FROM admin_settings WHERE group = 'google' AND name = 'analytics_id'; I've written a lot of SQL over the years and I've looked through similar questions here. Maybe I've missed something, but I cannot see it.

    Read the article

  • Pushing or serving real-time data to an excel spreadsheet

    - by evan_irl
    I am running some test automation on a networked computer resource (remote). The remote computer running the test automation generates some output, which I can customize however I wish - probably a text or Excel file. I would like to create an Excel spreadsheet which, from my local machine, monitors this output and provides real-time analytics. Later I would make the networked computer visible to more people, and they could use the same spreadsheet to monitor this output. My problem is that this networked computer is located on the other side of the earth, and so using any kind of polling in Excel VBA to PULL the data from the networked computer results in a very long wait with the pinwheel spinning, making the sheet clumsy and less useful. The same thing happens when I use Excel's built-in function for linking to "external resources". Is there any way to PUSH data to the Excel spreadsheet from the networked computer? Something that is easy to set up would be ideal; the latency does not have to be low, so long as there is no awkward "busy wait" while the sheet updates. If that is not possible, is there any way of using PULL from the Excel sheet that avoids the same busy wait?
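
    One way to turn the PULL into a PUSH, sketched with Python and openpyxl under the assumption that both machines can see a common file share: the remote automation appends each result to a workbook on that share as it is produced, so the local analysis sheet only ever opens a nearby, already-written file (or links to it with a plain data connection). Paths and columns are hypothetical:

        # Sketch: the remote test machine appends each result row to a workbook on
        # a shared path as soon as it is produced. The share path and column layout
        # are hypothetical.
        from datetime import datetime
        from pathlib import Path

        from openpyxl import Workbook, load_workbook

        RESULTS_PATH = Path(r"\\fileshare\automation\results.xlsx")   # assumed share

        def push_result(test_name, passed, duration_s):
            """Append one test result row; create the workbook on first use."""
            if RESULTS_PATH.exists():
                wb = load_workbook(RESULTS_PATH)
                ws = wb.active
            else:
                wb = Workbook()
                ws = wb.active
                ws.append(["timestamp", "test", "passed", "duration_s"])
            ws.append([datetime.now().isoformat(timespec="seconds"),
                       test_name, passed, duration_s])
            wb.save(RESULTS_PATH)

        # Called by the automation as each test finishes:
        push_result("login_smoke_test", True, 12.4)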

    Read the article

< Previous Page | 56 57 58 59 60 61 62 63 64 65  | Next Page >