Search Results

Search found 4072 results on 163 pages for 'social analytics'.


  • Using Microsoft.Reporting.WebForms.ReportViewer in a custom SharePoint WebPart

    - by iHeartDucks
    I have a requirement where I have to display some data (from a custom DB) and let the user export it to an Excel file. I decided to use the ReportViewer control in Microsoft.Reporting.WebForms, added the required assembly to the project references, and added the following code to test it out:

        protected override void CreateChildControls()
        {
            base.CreateChildControls();
            objRV = new Microsoft.Reporting.WebForms.ReportViewer();
            objRV.ID = "objRV";
            this.Controls.Add(objRV);
        }

    The first error asked me to add a line to the web.config, which I did, and the next error says:

        The type 'Microsoft.SharePoint.Portal.Analytics.UI.ReportViewerMessages, Microsoft.SharePoint.Portal, Version=12.0.0.0, Culture=neutral, PublicKeyToken=71e9bce111e9429c' does not implement IReportViewerMessages

    Is it possible to use ReportViewer in my custom Web Part? I'd rather not bind a repeater and write my own export-to-Excel code; I want to reuse something already built by Microsoft. Any ideas on what I can reuse?

    Edit: I commented out the following line <add key="ReportViewerMessages"... and my code now looks like this after I added a data source to it:

        protected override void CreateChildControls()
        {
            base.CreateChildControls();
            objRV = new Microsoft.Reporting.WebForms.ReportViewer();
            objRV.ID = "objRV";
            objRV.Visible = true;
            Microsoft.Reporting.WebForms.ReportDataSource datasource =
                new Microsoft.Reporting.WebForms.ReportDataSource("test", GroupM.Common.DB.GetAllClientCodes());
            objRV.LocalReport.DataSources.Clear();
            objRV.LocalReport.DataSources.Add(datasource);
            objRV.LocalReport.Refresh();
            this.Controls.Add(objRV);
        }

    But now I do not see any data on the page. I did check my DB call and it does return a DataTable with 15 rows. Any ideas why I don't see anything on the page?

    Read the article

  • PyScripter rpyc error

    - by jf328
    PyScripter 2.5.3.0 x64, Python 2.7.7, Anaconda 2.0.1, Windows 7.

    I was using PyScripter and EPD Python happily in 32-bit with no problems. I just changed to the 64-bit Anaconda version and re-installed everything, but now PyScripter cannot import rpyc -- the failure happens when it runs with the internal engine (not Anaconda), while there is no such error in plain Python. Thanks very much! By the way, there is a similar SO post from a few years ago, but the answer there does not work.

        *** Python 2.7.3 (default, Apr 10 2012, 23:24:47) [MSC v.1500 64 bit (AMD64)] on win32. ***
        *** Internal Python engine is active ***
        >>> import rpyc
        Traceback (most recent call last):
          File "<interactive input>", line 1, in <module>
          File "C:\Anaconda\lib\site-packages\rpyc\__init__.py", line 44, in <module>
            from rpyc.core import (SocketStream, TunneledSocketStream, PipeStream, Channel,
          File "C:\Anaconda\lib\site-packages\rpyc\core\__init__.py", line 1, in <module>
            from rpyc.core.stream import SocketStream, TunneledSocketStream, PipeStream
          File "C:\Anaconda\lib\site-packages\rpyc\core\stream.py", line 7, in <module>
            import socket
          File "C:\Anaconda\Lib\socket.py", line 47, in <module>
            import _socket
        ImportError: DLL load failed: The specified procedure could not be found.
        >>>

        C:\research>python
        Python 2.7.7 |Anaconda 2.0.1 (64-bit)| (default, Jun 11 2014, 10:40:02) [MSC v.1500 64 bit (AMD64)] on win32
        Type "help", "copyright", "credits" or "license" for more information.
        Anaconda is brought to you by Continuum Analytics.
        Please check out: http://continuum.io/thanks and https://binstar.org
        >>> import rpyc
        >>>

    Read the article

  • Getting Indy Error "Could not bind socket. Address and port are already in use"

    - by M Schenkel
    I have developed a Delphi web server application (TWebModule). It runs as an ISAPI DLL on Apache under Windows. The application in turn makes frequent HTTPS calls to other web sites using the Indy TIdHTTP component. Periodically I get this error when using the TIdHTTP.Get method:

        Could not bind socket. Address and port are already in use

    Here is the code:

        IdSSLIOHandlerSocket1 := TIdSSLIOHandlerSocketOpenSSL.Create(nil);
        IdHTTP := TIdHTTP.Create(nil);
        IdHTTP.HandleRedirects := True;
        IdHTTP.OnRedirect := DoRedirect;
        with IdSSLIOHandlerSocket1 do
        begin
          SSLOptions.Method := sslvSSLv3;
          SSLOptions.Mode := sslmUnassigned;
          SSLOptions.VerifyMode := [];
          SSLOptions.VerifyDepth := 2;
        end;
        with IdHTTP do
        begin
          IOHandler := IdSSLIOHandlerSocket1;
          ProxyParams.BasicAuthentication := False;
          Request.UserAgent := 'Test Google Analytics Interface';
          Request.ContentType := 'text/html';
          Request.Connection := 'keep-alive';
          Request.Accept := 'text/html, */*';
        end;
        try
          IdHTTP.Get('http://www.mysite.com......');
        except
          .......
        end;
        IdHTTP.Free;
        IdSSLIOHandlerSocket1.Free;

    I have read about the ReuseSocket property, which can be set on both the TIdHTTP and TIdSSLIOHandlerSocketOpenSSL objects. Will setting this to rsTrue solve my problem? I ask because I have not been able to replicate this error; it just happens periodically. Other considerations: I know multiple instances of TWebModule are being spawned. Would wrapping all calls to TIdHTTP.Get in a TCriticalSection solve the problem?

    Read the article

  • SEM/SEO tasks doubts

    - by Josemalive
    Hello. I think I have fairly strong SEO knowledge, but I have some doubts about the following: over the next few months I will have to improve the Google ranking of certain product pages for a company. I assume the following tasks will be needed, though perhaps not sufficient on their own:

    - Improve the usability of those pages.
    - Change the page titles.
    - Add meta description and keywords.
    - URLs in a REST style.
    - 301 HTTP headers so the new URLs don't lose PageRank.
    - Optimize content for Google.
    - Configure links on the website (follow and nofollow attributes).
    - Get more inbound links (link-building tasks).
    - Create an RSS feed.
    - Put the main website on Twitter (using twitter feed) via the RSS feed.
    - Put the main website on Facebook.
    - Create a YouTube channel.
    - Invest in AdWords.
    - Invest in other online advertising companies.
    - Use sitemap.xml and Google Webmaster Tools.
    - Use Google Trends to analyze the search volume of certain keywords.
    - Use Google Analytics to analyze weak and strong points of the site, and to find new opportunities in keywords.
    - Use tools to find new keywords related to your content.

    Do you have some links, or knowledge, about all the tasks an SEO expert should do? Could you share some knowledge about what kind of deals could be done with other companies (B2B) to improve the search engine position of those product pages? Do you know more techniques for getting inbound links? (I only know about link exchanges.) Thanks in advance. Best regards, Jose.

    Read the article

  • Browser security when calling HTTP assets via a SWF on a HTTPS site

    - by Mark Ursino
    We have a site that runs on HTTPS and needs to pull in various JS assets to run a video player on the page. We get a browser security warning on this page because the JS files we are externally calling are accessed via HTTP, not HTTPS. E.g.:

        // HTTP reference on an HTTPS site
        <script src="http://the-cdn.tld/player.js"></script>

    Simply accessing this one JS asset via HTTP instead of HTTPS causes the browser security warning, which we need to get rid of. The provider of the JS file does not support an HTTPS equivalent (like Google Analytics does). We would ideally love to just do the following, but the provider does not offer it:

        // HTTPS reference on an HTTPS site
        <script src="https://the-cdn.tld/player.js"></script>

    One option was to just download a copy of the JS file and serve it from the HTTPS site, but we have concerns with this: it is not recommended by the provider and would not pick up their updates. Assuming we cannot do that, we were thinking a possible other option would be to use a SWF file as a proxy. We could have one of our Flash guys create a SWF that loads the HTTP-served JS file into the page. If this SWF makes the request, would that prevent the browser from showing the security warning or not? I assumed we would still see the warning, since the SWF is still making the request through the browser, but I wanted to see what the hive mind thinks.
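
    (For reference, the "HTTPS equivalent" that Google Analytics offers is usually consumed with a scheme-matching loader like the sketch below; the host names are illustrative. This pattern only helps when the provider actually answers over both schemes, which is exactly what the CDN here is missing:)

        // pick the scheme that matches the page, so an HTTPS page never makes
        // a plain-HTTP request (and never trips the mixed-content warning)
        (function() {
          var src = ('https:' == document.location.protocol
                       ? 'https://secure.the-cdn.tld'   // hypothetical HTTPS host
                       : 'http://the-cdn.tld') + '/player.js';
          var s = document.createElement('script');
          s.src = src;
          document.getElementsByTagName('head')[0].appendChild(s);
        })();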

    Read the article

  • Application log aggregation, management and notifications...

    - by Matthew Savage
    I'm wondering what everyone is using for logging, log management, and log aggregation on their systems. I work at a company that uses .NET for all its applications, and all systems are Windows-based. Currently each application looks after its own logging and failure notifications (e.g. if app A fails, it sends out its own 'call for help' to an admin). While this current practice works, it's a bit hacky and hard to manage. I've been trying to find options for making this work better, and I've come up with the following:

    - log4net & Chainsaw (ah, if it works).
    - Logging via log4net or another framework into a central database and rolling our own management tool.
    - Logging to the Windows event log and using MOM or System Center Operations Manager to aggregate and manage each of these servers and their apps.
    - A hand-rolled solution that sucks all the log files into one point and works some magic across them.

    Essentially what we are after is something that can pull log entries together and allow some analytics to be run across them, plus an event-based system that can, for example, send out a warning email when there have been 30+ warning-level logs for an application in the last x minutes. So is there anything I've missed, or something someone else can suggest?

    Read the article

  • Downloading javascript Without Blocking

    - by doug
    The context: my question relates to improving web-page loading performance, and in particular the effect that JavaScript has on page loading (resources/elements below the script are blocked from downloading/rendering). This problem is usually avoided/mitigated by placing the scripts at the bottom (e.g., just before the closing </body> tag). The code I am looking at is for web analytics. Placing it at the bottom reduces its accuracy, and because this script has no effect on the page's content -- i.e., it's not re-writing any part of the page -- I want to move it inside the head. Just how to do that without ruining page-loading performance is the crux.

    From my research, I've found six techniques (with support among all or most of the major browsers) for downloading scripts so that they don't block down-page content from loading/rendering:

    (i) XHR + eval()
    (ii) XHR + 'inject'
    (iii) downloading the HTML-wrapped script in an iFrame
    (iv) setting the script tag's 'async' flag to 'TRUE' (HTML5 only)
    (v) setting the script tag's 'defer' attribute
    (vi) the 'Script DOM Element'

    It's the last of these I don't understand. The JavaScript to implement pattern (vi) is:

        (function() {
          var q1 = document.createElement('script');
          q1.src = 'http://www.my_site.com/q1.js';
          document.documentElement.firstChild.appendChild(q1);
        })();

    Seems simple enough: inside this anonymous function, a script element is created, its 'src' attribute is set to the script's location, then the script element is added to the DOM. But while each line is clear, it's still not clear to me how exactly this pattern allows script loading without blocking down-page elements/resources from rendering/loading.
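
    (For anyone experimenting with pattern (vi), a small variation makes the asynchronous behavior observable; the URL and callback body are illustrative. Unlike a literal <script src> in the markup, a script element injected through the DOM does not pause the HTML parser, so the only way to know the script has arrived is a load event:)

        // same Script DOM Element pattern, plus an onload hook to show that
        // the download happened in parallel with page rendering
        (function() {
          var q1 = document.createElement('script');
          q1.src = 'http://www.my_site.com/q1.js';
          q1.onload = function() {
            // runs after q1.js has downloaded and executed;
            // down-page content has been rendering in the meantime
          };
          document.documentElement.firstChild.appendChild(q1);
        })();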

    Read the article

  • Silverlight WinDbg Memory Release Issue

    - by Chris Newton
    Hi, I have used WinDbg successfully on a number of occasions to track down and fix memory leaks (or, more accurately, the CLR's inability to garbage collect a released object), but am stuck with one particular control. The control is displayed within a child window, and when the window is closed a reference to the control remains and cannot be garbage collected. I have resolved what I believe to be the majority of the issues that could have caused the leak, but the !gcroot of the affected object is not clear (to me at least) about what is still holding on to this object. The output is always the same regardless of the content being presented in the child window:

        DOMAIN(03FB7238):HANDLE(Pinned):79b12f8:Root: 06704260(System.Object[])->
        05719f00(System.Collections.Generic.Dictionary`2[[System.IntPtr, mscorlib],[System.Object, mscorlib]])->
        067c1310(System.Collections.Generic.Dictionary`2+Entry[[System.IntPtr, mscorlib],[System.Object, mscorlib]][])->
        064d42b0(System.Windows.Controls.Grid)->
        064d4314(System.Collections.Generic.Dictionary`2[[MS.Internal.IManagedPeerBase, System.Windows],[System.Object, mscorlib]])->
        064d4360(System.Collections.Generic.Dictionary`2+Entry[[MS.Internal.IManagedPeerBase, System.Windows],[System.Object, mscorlib]][])->
        064d3860(System.Windows.Controls.Border)->
        064d4218(System.Collections.Generic.Dictionary`2[[MS.Internal.IManagedPeerBase, System.Windows],[System.Object, mscorlib]])->
        064d4264(System.Collections.Generic.Dictionary`2+Entry[[MS.Internal.IManagedPeerBase, System.Windows],[System.Object, mscorlib]][])->
        064d3bfc(System.Windows.Controls.ContentPresenter)->
        064d3d64(System.Collections.Generic.Dictionary`2[[MS.Internal.IManagedPeerBase, System.Windows],[System.Object, mscorlib]])->
        064d3db0(System.Collections.Generic.Dictionary`2+Entry[[MS.Internal.IManagedPeerBase, System.Windows],[System.Object, mscorlib]][])->
        064d3dec(System.Collections.Generic.Dictionary`2[[System.UInt32, mscorlib],[System.Windows.DependencyObject, System.Windows]])->
        064d3e38(System.Collections.Generic.Dictionary`2+Entry[[System.UInt32, mscorlib],[System.Windows.DependencyObject, System.Windows]][])->
        06490b04(Insurer.Analytics.SharedResources.Controls.HistoricalKPIViewerControl)

    If anyone has any ideas about what could potentially be the problem, or if you require more information, please let me know. Kind regards, Chris

    Read the article

  • How do I call an Obj-C method from JavaScript?

    - by gnfti
    Hi all, I'm developing a native iPhone app using PhoneGap, so everything is done in HTML and JS. I am using the Flurry SDK for analytics and want to use the [FlurryAPI logEvent:@"EVENT_NAME"] method to track events. Is there a way to do this from JavaScript? For tracking a link, I imagine using something like:

        <a onClick="flurryTrackEvent('Click_Rainbows')" href="#Rainbows">Rainbows</a>
        <a onClick="flurryTrackEvent('Click_Unicorns')" href="#Unicorns">Unicorns</a>

    "FlurryAPI.h" has the following:

        @interface FlurryAPI : NSObject {
        }

        + (void)startSession:(NSString *)apiKey;
        + (void)logEvent:(NSString *)eventName;
        + (void)logEvent:(NSString *)eventName withParameters:(NSDictionary *)parameters;
        + (void)logError:(NSString *)errorID message:(NSString *)message exception:(NSException *)exception;
        + (void)setUserID:(NSString *)userID;
        + (void)setEventLoggingEnabled:(BOOL)value;
        + (void)setServerURL:(NSString *)url;
        + (void)setSessionReportsOnCloseEnabled:(BOOL)sendSessionReportsOnClose;

        @end

    I'm only interested in the logEvent method(s). If it's not clear by now, I'm comfortable with JS but a recovering Obj-C noob. I've read the Apple docs, but the examples described there are all for newly declared methods, and I imagine this could be simpler to implement because the Obj-C methods are already defined. Thank you in advance for any input.
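
    (A sketch of the JS half of the usual PhoneGap-era bridge, assuming a hypothetical flurry:// URL scheme: JavaScript cannot call Obj-C directly, so the page navigates to a made-up URL, and the native UIWebView delegate intercepts it in shouldStartLoadWithRequest:, parses out the event name, and calls [FlurryAPI logEvent:] itself.)

        // JS half of the bridge; the scheme and path layout here are
        // assumptions the Obj-C delegate would have to agree on
        function flurryTrackEvent(eventName) {
          window.location = 'flurry://logEvent/' + encodeURIComponent(eventName);
        }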

    Read the article

  • Aggregate path counts using HierarchyID

    - by austincav
    Business problem: understand process fallout using analytics data. Here is what we have done so far:

    1. Build a dictionary table with every possible process step.
    2. Find each process "start".
    3. Find the last step for each start.
    4. Join the dictionary table to the last step to find the path to the final step.

    In the final report output we end up with a list of paths from each start to each final step:

        User   Fallout Step (HierarchyID.ToString())
        A      1/1/1
        B      1/1/1/1/1
        C      1/1/1/1
        D      1/1/1
        E      1/1

    This means five users (A-E) started the process. Assume only user B finished; the other four did not. Since this is a simple example (without branching), we want the output to look as follows:

        Step   Unique Users
        1      5
        2      5
        3      4
        4      2
        5      1

    The easiest solution I could think of is to take each HierarchyID.ToString(), parse it into a set of subpaths, JOIN back to the dictionary table, and output using GROUP BY. Given the volume of data, I'd like to use the built-in HierarchyID functions instead, e.g. IsAncestorOf. Any ideas or thoughts on how I could write this? Maybe a recursive CTE?

    Read the article

  • Where to store site settings: DB? XML? CONFIG? CLASS FILES?

    - by Emin
    I am re-building a news portal that already has a large number of visits every day. One of the major concerns when re-building this site was to maximize performance and speed, and to that end we have done many things, from caching to all sorts of other measures. Now, towards the end of the project, I have a dilemma: where to store the site settings so as to least affect performance. The site settings include things such as: domain, DefaultImgPath, Google Analytics code, default emails of editors, as well as more dynamic design/display settings such as the background color of specific DIVs and the default color for links.

    As far as I know, I have four choices for storing all this info:

    - Database: storing general settings in the DB and caching them may be a solution; however, I want to limit database access to only the necessary and essential functions of the project, which generally are insert/update/delete of news items, author articles, etc.
    - XML: I could store these settings in an XML file, but I have not done this sort of thing before, so I don't know what kind of problems (if any) I might face in the future.
    - CONFIG: I could store these settings in web.config.
    - CLASS FILE: I could hard-code all these settings in a SiteSettings class, but since the site admin himself will be able to edit these settings, it may not be the best solution.

    Currently I am closest to choosing web.config, but letting people fiddle with it too often is something I do not want. E.g., if somehow I miss a validation for something and it breaks the web.config, the whole site will go down. My concern, basically, is that I cannot foresee the possible consequences of any of the methods above (or is there any other?), and I was hoping to put this question to more experienced people who can help me make my decision.

    Read the article

  • PayPal sandbox anomalies

    - by Christian
    When testing some donations on my local machine, I set various key=value pairs to do various things (return to a specific thank-you page, get POST data from PayPal rather than GET data, and others). I also built my code around the response from the PayPal sandbox. BUT, when my code goes to the production server and we switch on live payments and test with real accounts and money, a few strange things happen:

    - We get a GET response from PayPal - the URL is filled with crap.
    - We get no transaction details. This is the biggie: no name, no txn_id, no dates, nothing. We get a handful of keys etc., so it's not totally empty, and the payment has gone through, but it's nowhere near the verbosity of the sandbox.

    Curious about why this might be? It doesn't really make sense to have a sandbox (or dev environment) that is substantially different from the production environment. Or am I missing something?

    EDIT: Still no response to my question in the PayPal Developer Forums. I don't even get a donation amount back from PayPal. Is this a setting, maybe?

    EDIT #2: Two of you have suggested checking PDT and Auto-Return. The data analytics guy for the project suggested the same only two hours ago. I have asked the client to confirm this. I can't see a setting for it in the sandbox, so can I assume it is enabled by default?

    Read the article

  • What is the best way to include Javascript?

    - by Paul Tarjan
    Many of the big players recommend slightly different techniques, mostly in the placement of the new <script>.

    Google Analytics:

        (function() {
          var ga = document.createElement('script');
          ga.type = 'text/javascript';
          ga.async = true;
          ga.src = ('https:' == document.location.protocol ? 'https://ssl' : 'http://www') + '.google-analytics.com/ga.js';
          var s = document.getElementsByTagName('script')[0];
          s.parentNode.insertBefore(ga, s);
        })();

    Facebook:

        (function() {
          var e = document.createElement('script');
          e.async = true;
          e.src = document.location.protocol + '//connect.facebook.net/en_US/all.js';
          document.getElementById('fb-root').appendChild(e);
        }());

    Disqus:

        (function() {
          var dsq = document.createElement('script');
          dsq.type = 'text/javascript';
          dsq.async = true;
          dsq.src = 'http://' + disqus_shortname + '.disqus.com/embed.js';
          (document.getElementsByTagName('head')[0] || document.getElementsByTagName('body')[0]).appendChild(dsq);
        })();

    (Post others and I'll add them.) Is there any rhyme or reason for these choices, or does it not matter at all?

    Read the article

  • Process data BEFORE a 301 Redirect?

    - by Jesse
    So, I've been working on a PHP link shortener (I know, just what the world needs). Basically, when the page loads, PHP determines where it needs to go and sends a 301 header to redirect the browser, like so:

        header("HTTP/1.1 301 Moved Permanently");
        header("Location: http://newsite.com");

    Now, I'm trying to add some tracking to my redirects and insert some custom analytics data into a MySQL table before the redirect happens. It works perfectly if I don't specify a redirect type and just use:

        header("Location: http://newsite.com");

    But, of course, as soon as you add in the 301 header, nothing else gets processed. Actually, on the first request it sends the data to MySQL, but on any subsequent requests there's no communication with the database. I assume it's a browser caching issue: once the browser has seen the 301, it decides there's no reason to request the URL again. But does anyone know if there's any way to get around this? I'd really like to keep it as a 301 for SEO purposes (I believe if you don't specify, it sends a 404 by default?). I thought about using .htaccess to prepend a file to the page that would do the MySQL work, but with the 301, wouldn't that just get ignored as well? Anyway, I'm not sure if there's any solution other than using a different type of redirect, but I'm not ready to give up just yet. So, any suggestions would be much appreciated. Thanks!

    Read the article

  • How to retrieve the parameters of document.write to detect the creation of dynamic tags

    - by user1335906
    In my project I am supposed to identify dynamically created tags, which scripts can produce through calls like:

        document.write("<script src='jquery.js'></script>")

    For this I used regular expressions; my code is as follows:

        function find_tag_docwrite(text) {
          var attrib = new Object;
          var pat_tag = /<((\S+)\s(.*))>/g;
          while (t = pat_tag.exec(text)) {
            var tag = RegExp.$1;
            for (i = 0; i < tags.length; i++) {
              var pat = /(\S+)=((['"]*)(\S+)(['"]*)\3)/g;
              while (p = pat.exec(f)) {
                attr = RegExp.$1;
                val = RegExp.$4;
                attrib[attr] = val;
              }
            }
          }
        }

    In the above function, text is the parameter of document.write. Through this code I get the tag names and all the attributes of the tags. But for the example below the code does not work:

        var gaJsHost = (("https:" == document.location.protocol) ? "https://ssl." : "http://www.");
        document.write(unescape("%3Cscript src='" + gaJsHost + "google-analytics.com/ga.js' type='text/javascript'%3E%3C/script%3E"));

    Since regular expressions do not work in such cases, after searching for some time I found hooks on DOM methods. I thought of creating a hook for the document.write method, but I am not able to understand how it is done. I included the following code in my program, but it is not working:

        function someFunction(text) {
          console.log(text);
        }
        document.write = someFunction;

    where text is the parameter of document.write. Another problem: after monitoring all the document.write calls using hooks, I again have to use regexes to find tag creations. Is there any alternative?
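
    (A minimal sketch of a document.write hook that keeps the page working: the snippet above replaces document.write outright, so the intercepted markup is never actually written to the page. The usual fix is to save the original function and delegate to it; find_tag_docwrite is the function from the question.)

        // wrap document.write instead of replacing it
        var originalWrite = document.write;
        document.write = function (text) {
          find_tag_docwrite(text);            // inspect the markup first
          originalWrite.call(document, text); // then let the page render as before
        };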

    Read the article

  • Detecting well behaved / well known bots

    - by Simon_Weaver
    I found this question very interesting: Programmatic Bot Detection. I have a very similar question, but I'm not bothered about 'badly behaved bots'. I am tracking (in addition to Google Analytics) the following per visit:

    - Entry URL
    - Referer
    - UserAgent
    - Adwords (by means of the query string)
    - Whether or not the user made a purchase
    - etc.

    The problem is that to calculate any kind of conversion rate, I end up with lots of 'bot' visits that greatly skew my results. I'd like to ignore as many bot visits as possible, but I want a solution that I don't need to monitor too closely, that won't itself be a performance hog, and that preferably still works if someone has JavaScript disabled. Are there good published lists of the top 100 or so bots? I did find a list at http://www.user-agents.org/ but that appears to contain hundreds if not thousands of bots, and I don't want to check every referer against thousands of entries. Here is the current Googlebot UserAgent. How often does it change?

        Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)
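
    (A sketch of the cheap filter this implies: because well-behaved crawlers identify themselves honestly in the UserAgent, a short hand-maintained substring list covers the bulk of the traffic. The names below are real crawler tokens, but the list is illustrative, not exhaustive:)

        // flag the handful of high-volume, well-behaved crawlers by UA substring
        var KNOWN_BOTS = ['Googlebot', 'bingbot', 'Slurp', 'Baiduspider', 'YandexBot'];
        function isKnownBot(userAgent) {
          for (var i = 0; i < KNOWN_BOTS.length; i++) {
            if (userAgent.indexOf(KNOWN_BOTS[i]) !== -1) return true;
          }
          return false;
        }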

    Read the article

  • Best way to build an application based on R?

    - by Prasad Chalasani
    I'm looking for suggestions on how to go about building an application that uses R for analytics, table generation, and plotting. What I have in mind is an application that:

    - displays various data tables in different tabs, somewhat like Excel, with columns sortable by clicking;
    - takes user input parameters in dialog windows;
    - displays plots dynamically (i.e. user-input-dependent), either in a tab or in a new pop-up window/frame.

    Note that I am not talking about a general-purpose front-end/GUI for exploring data with R (like, say, Rattle), but a specific application. Some questions I'd like to see addressed:

    1. Is an entirely R-based approach even possible (on Windows)? The following passage from the Rattle article in The R Journal intrigues me: "It is interesting to note that the first implementation of Rattle actually used Python for implementing the callbacks and R for the statistics, using rpy. The release of RGtk2 allowed the interface elements of Rattle to be written directly in R so that Rattle is a fully R-based application."

    2. If it's better to use another language for the GUI part, which language is best suited for this? I'm looking for a language where it's relatively "painless" to build the GUI and that also integrates very well with R. From the StackOverflow question "How should I do rapid GUI development for R and Octave methods (possibly with Python)?" it seems that Python + PyQt4 + QtDesigner + RPy2 is the best combo. Is that the consensus?

    3. Does anyone have pointers to specific (open source) applications of the type I describe, as examples I can learn from?

    Read the article

  • Sending large data: getJSON or proxy?

    - by numerical25
    Hey guys. I was told that the only trick to sending data to an external (i.e. cross-domain) server is to use getJSON. Well, my problem is that the data I am sending exceeds the getJSON data limit. I am tracking mouse movements on a screen for analytics. One option is to send a little data at a time, probably every time the mouse moves, but that seems as if it would slow things down. Another option is to set up a proxy server. My question is: which would be better? Setting up a proxy server, or just sending bits of information via JavaScript/jQuery? What do the professionals (Google and other companies that build mash-ups sending a lot of data to cross-domain sites) use? I need to know the best practices. Thanks!! Also, the data is put into JSON.
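
    (A sketch of the buffer-and-batch approach with a same-origin proxy; the /track-proxy endpoint is hypothetical. Buffering keeps the mousemove handler cheap, and POSTing to your own domain sidesteps the URL-length ceiling that getJSON inherits from GET/JSONP; the proxy then forwards each batch to the analytics host server-side:)

        // buffer mouse positions, flush them to a same-origin proxy every 5s
        var buffer = [];
        document.onmousemove = function (e) {
          buffer.push([e.clientX, e.clientY, new Date().getTime()]);
        };
        setInterval(function () {
          if (!buffer.length) return;
          var batch = buffer.splice(0, buffer.length); // take everything buffered so far
          var xhr = new XMLHttpRequest();
          xhr.open('POST', '/track-proxy', true);
          xhr.setRequestHeader('Content-Type', 'application/json');
          xhr.send(JSON.stringify(batch));
        }, 5000);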

    Read the article

  • Rails 3 MySQL 2 reports an error in what looks to be valid SQL syntax

    - by John Judd
    I am trying to use the following bit of code to help seed my database. I need to add data continually during development and do not want to have to completely re-seed every time I add something new to the seeds.rb file. So I added the following function to insert the data only if it doesn't already exist:

        def AddSetting(group, name, value, desc)
          Admin::Setting.create({group: group, name: name, value: value, description: desc}) unless
            Admin::Setting.find_by_sql("SELECT * FROM admin_settings WHERE group = '#{group}' AND name = '#{name}';").exists?
        end

        AddSetting('google', 'analytics_id', '', 'The ID of your Google Analytics account.')
        AddSetting('general', 'page_title', '', '')
        AddSetting('general', 'tag_line', '', '')

    This function is included in the db/seeds.rb file. Is this the right way to do this? However, I am getting the following error when I try to run it through rake:

        rake aborted!
        Mysql2::Error: You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near 'group = 'google' AND name = 'analytics_id'' at line 1: SELECT * FROM admin_settings WHERE group = 'google' AND name = 'analytics_id';
        Tasks: TOP => db:seed
        (See full trace by running task with --trace)
        Process finished with exit code 1

    What is confusing me is that I am generating correct SQL as far as I can tell. In fact, my code generates the SQL and passes it to the model's find_by_sql function; Rails itself can't be changing the SQL, or is it?

        SELECT * FROM admin_settings WHERE group = 'google' AND name = 'analytics_id';

    I've written a lot of SQL over the years, and I've looked through similar questions here. Maybe I've missed something, but I cannot see it.

    Read the article

  • Pushing or serving real-time data to an Excel spreadsheet

    - by evan_irl
    I am running some test automation on a networked computer resource (remote). The remote computer running the test automation generates some output, which I can customize however I wish - probably a text or Excel file. I would like to create an Excel spreadsheet which, from my local machine, monitors this output and provides real-time analytics. Later I would make the networked computer visible to more people, and they could use the same spreadsheet to monitor the output.

    My problem is that this networked computer is located on the other side of the earth, so using any kind of polling in Excel VBA to PULL the data from it results in a very long wait with the pinwheel spinning, making the sheet clumsy and less useful. The same thing happens when I use Excel's built-in feature for linking to "external resources".

    Is there any way to PUSH data to the Excel spreadsheet from the networked computer? Something that is easy to set up would be ideal; the latency does not have to be low, so long as there is no awkward "busy wait" while the sheet updates. If that is not possible, is there any way of using PULL from the Excel sheet that avoids the same busy wait?

    Read the article

  • Stopping WordPress From Appending /index.html to Everything

    - by user439796
    I have a spaghetti-code theme I inherited from someone, and for whatever reason Google Analytics shows that I keep getting hits to a variety of URLs on the site where the URLs are all appended with /index.html. An example would be:

        http://www.mysite.com/category/storyname/index.html

    It appears to be doing this to almost everything (despite my permalinks being set to be "tidy"). So... what in the hell could possibly be causing this? How do I fix it? When I visit all those pages I get 404 errors, which means my visitors are not getting what they want. I have the Redirection plugin and have been manually trying to update some of these, but it is ridiculous. I'm sure there's a way to do it with .htaccess, but I know next to nothing about that. Here's what my .htaccess currently has (the default):

        # BEGIN WordPress
        <IfModule mod_rewrite.c>
        RewriteEngine On
        RewriteBase /
        RewriteRule ^index\.php$ - [L]
        RewriteCond %{REQUEST_FILENAME} !-f
        RewriteCond %{REQUEST_FILENAME} !-d
        RewriteRule . /index.php [L]
        </IfModule>
        # END WordPress
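
    (For illustration, a hedged .htaccess sketch of the kind of blanket rule the Redirection plugin is being used to emulate one URL at a time: 301 anything ending in /index.html back to the clean permalink. It would go above the standard WordPress block; test it first, since rule order in .htaccess matters.)

        # send /category/storyname/index.html -> /category/storyname/ with a 301
        <IfModule mod_rewrite.c>
        RewriteEngine On
        RewriteRule ^(.*)/index\.html$ /$1/ [R=301,L]
        </IfModule>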

    Read the article

  • How can I measure my (SAMP) server's bandwidth usage?

    - by enkrates
    I'm running a Solaris server that serves PHP through Apache. What tools can I use to measure the bandwidth my server is currently using? I use Google Analytics to measure traffic, but as far as I know it ignores file size. I have a rough idea of the average size of the pages I serve and can do a back-of-the-envelope calculation of my bandwidth usage by multiplying page views (from Google) by average page size, but I'm looking for a solution that is more rigorous and exact.

    Also, I'm not trying to throttle anything or implement usage caps or anything like that; I'd just like to measure the bandwidth usage so I know what it is. An example of what I'm after is the usage meter Slicehost provides in their admin website: for another site I run, they tell me how much bandwidth I've used each month and also split the usage between uploading and downloading. So it seems this data can be measured, and I'd like to be able to do it myself. To put it simply: what is the conventional method for measuring the bandwidth usage of my server?

    Read the article

  • Nginx and PHP-FPM running out of connections

    - by E3pO
    I keep running into errors like these:

        [02-Jun-2012 01:52:04] WARNING: [pool www] seems busy (you may need to increase pm.start_servers, or pm.min/max_spare_servers), spawning 8 children, there are 19 idle, and 49 total children
        [02-Jun-2012 01:52:05] WARNING: [pool www] seems busy (you may need to increase pm.start_servers, or pm.min/max_spare_servers), spawning 16 children, there are 19 idle, and 50 total children
        [02-Jun-2012 01:52:06] WARNING: [pool www] seems busy (you may need to increase pm.start_servers, or pm.min/max_spare_servers), spawning 32 children, there are 19 idle, and 51 total children
        [02-Jun-2012 03:10:51] WARNING: [pool www] seems busy (you may need to increase pm.start_servers, or pm.min/max_spare_servers), spawning 8 children, there are 18 idle, and 91 total children

    I changed my PHP-FPM settings to these (it was at 100, I got a max_children-reached warning, and upped it to 150):

        pm.max_children = 150
        pm.start_servers = 75
        pm.min_spare_servers = 20
        pm.max_spare_servers = 150

    Resulting in:

        [02-Jun-2012 01:39:19] WARNING: [pool www] server reached pm.max_children setting (150), consider raising it

    I've just launched a new website that is getting a considerable amount of traffic. This traffic is legitimate, and users are getting 504 gateway timeouts when the limit is reached. I have limited connections to my server with iptables, I'm running fail2ban, and I keep track of the nginx access logs. The traffic is all legitimate; I'm just running out of room for users. I'm currently running on a dual-core box with Ubuntu 10.04 64-bit.

        free
                     total       used       free     shared    buffers     cached
        Mem:       6114284    5726984     387300          0     141612    4985384
        -/+ buffers/cache:     599988    5514296
        Swap:       524284       5804     518480

    My php.ini has:

        max_input_time = 60

    My nginx config is:

        worker_processes 4;
        pid /var/run/nginx.pid;
        events {
            worker_connections 19000;
            # multi_accept on;
        }
        worker_rlimit_nofile 20000; # each connection needs a filehandle (or 2 if you are proxying)
        client_max_body_size 30M;
        client_body_timeout 10;
        client_header_timeout 10;
        keepalive_timeout 5 5;
        send_timeout 10;

        location ~ \.php$ {
            try_files $uri /er/error.php;
            fastcgi_split_path_info ^(.+\.php)(/.+)$;
            fastcgi_connect_timeout 60;
            fastcgi_send_timeout 180;
            fastcgi_read_timeout 180;
            fastcgi_buffer_size 128k;
            fastcgi_buffers 256 16k;
            fastcgi_busy_buffers_size 256k;
            fastcgi_temp_file_write_size 256k;
            fastcgi_max_temp_file_size 0;
            fastcgi_intercept_errors on;
            fastcgi_pass unix:/tmp/php5-fpm.sock;
            fastcgi_index index.php;
            include fastcgi_params;
        }

    What can I do to stop running out of connections? Why does this keep occurring? I'm monitoring my traffic in Google Analytics real-time, and when the user count gets above about 120 my php-fpm.log fills with these warnings.
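
    (Given the ~6 GB of RAM shown above, one commonly suggested direction - offered here as a hedged sketch, not a tuned recommendation - is a static pool sized to what memory actually allows, so PHP-FPM stops spending time spawning children under bursts. The child count and recycling value below are illustrative and should be sized from real per-child memory usage:)

        ; pool config sketch, e.g. /etc/php5/fpm/pool.d/www.conf (illustrative values)
        pm = static
        pm.max_children = 150     ; size this as free_RAM / avg_per_child_RSS
        pm.max_requests = 500     ; recycle children periodically to cap leaks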

    Read the article

  • Server 2008, 2 NICs, 2 fixed IPs - big delays using internet

    - by user46055
    Hi geniuses. I have an all-in-one Windows 2008 server, configured with AD/DHCP/DNS/RRAS - all set up with wizards and no specific tweaking. The server has two network adapters: one ("MyWAN") is plugged into our office's internet connection; the other ("MyLAN") is plugged into a local switch, which is also where all our desktops are connected. So this one server is doing everything.

    When first set up, MyLAN had a fixed IP of 192.168.2.1 and served the desktops with DHCP scope 192.168.2.50-99. It also told them to use 192.168.2.1 as DNS and gateway. MyWAN was set up to take its IP from DHCP, handled by the building's router and ADSL modem. All desktops were set up to use DHCP. This all worked perfectly fine until I recently gave MyWAN a static IP (I wanted to access the server from home and needed a static IP to port-map in the building's router).

    Things still work, but there is now a long delay when accessing the internet. The actual download speed is as before, but there is a pause of 3-6 seconds when connecting to new hosts (for example, if I browse to Slashdot from either a desktop or the server itself, it will hang on connecting to slashdot.org, hang again on connecting to *.fsdn, *.google-analytics.com and all the other hosts referenced from the main page). If I ping slashdot.org from the server, I get the following:

        Pinging slashdot.org [216.34.181.45] with 32 bytes of data:
        Reply from 192.168.2.1: Destination host unreachable.
        Reply from 216.34.181.45: bytes=32 time=99ms TTL=239
        Reply from 216.34.181.45: bytes=32 time=100ms TTL=239
        Reply from 216.34.181.45: bytes=32 time=101ms TTL=239

    Pinging anywhere external always seems to hit 192.168.2.1 first, which doesn't seem right. Trying tracert from the server gives the following:

        Tracing route to slashdot.org [216.34.181.45] over a maximum of 30 hops:
        1  MYSERVER01.intranet [192.168.2.1] reports: Destination host unreachable

    Trying tracert from a desktop gives the following:

        Tracing route to slashdot.org [216.34.181.45] over a maximum of 30 hops:
        1   <1 ms    *       <1 ms  MYSERVER [192.168.2.1]
        2    *       *        *     Request timed out.
        3    6 ms    6 ms     6 ms  dsl-gw1.ge.mer.uk.webtapestry.net [217.151.111.17]
        4   38 ms  239 ms   251 ms  gw-router.ge.mer.uk.webtapestry.net [217.151.111.13]

    ...and then all is fine after that. I think DNS is working fine, because domain names get translated to the correct IPs immediately. DHCP seems to be okay too. So perhaps something is up with my RRAS setup, although I can't see any option in the setup wizard that I would have filled in differently. I've also tried changing the binding order of the two network connections to prioritise MyWAN, but that doesn't seem to have done anything. Any idea what's up? Many thanks - Rob

    Read the article

  • Tips for maximizing Nginx requests/sec?

    - by linkedlinked
    I'm building an analytics package, and project requirements state that I need to support 1 billion hits per day. Yep, "billion". In other words, no less than 12,000 sustained hits per second, and preferably some room to burst. I know I'll need multiple servers for this, but I'm trying to get maximum performance out of each node before "throwing more hardware at it".

    Right now, I have the hit-tracking portion completed and well optimized. I pretty much just save the requests straight into Redis (for later processing with Hadoop). The application is Python/Django with gunicorn as the gateway. My 2GB Ubuntu 10.04 Rackspace server (not a production machine) can serve about 1,200 static files per second (benchmarked using ApacheBench against a single static asset). For comparison, if I swap the static file for my tracking link, I still get about 600 requests per second -- I think this means my tracker is well optimized, because it's only a factor of 2 slower than serving static assets.

    However, when I benchmark with millions of hits, I notice a few things:

    - No disk usage -- this is expected, because I've turned off all nginx logs, and my custom code doesn't do anything but save the request details into Redis.
    - Non-constant memory usage -- presumably due to Redis's memory management, my memory usage gradually climbs up and then drops back down, but it has never once been my bottleneck.
    - System load hovers around 2-4; the system is still responsive during even my heaviest benchmarks, and I can still manually view http://mysite.com/tracking/pixel with little visible delay while my (other) server performs 600 requests per second.
    - If I run a short test, say 50,000 hits (takes about 2m), I get a steady, reliable 600 requests per second. If I run a longer test (tried up to 3.5m so far), my r/s degrades to about 250.

    My questions:

    a. Does it look like I'm maxing out this server yet? Is 1,200/s static-file nginx performance comparable to what others have experienced?

    b. Are there common nginx tunings for such high-volume applications? I have worker threads set to 64, and gunicorn worker threads set to 8, but tweaking these values doesn't seem to help or harm much.

    c. Are there any Linux-level settings that could be limiting my incoming connections?

    d. What could cause my performance to degrade to 250 r/s on long-running tests? Again, memory is not maxing out during these tests, and HDD use is nil.

    Thanks in advance, all :)
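
    (On question (c), a hedged sketch of the kernel knobs most commonly checked first when long benchmark runs degrade - ephemeral-port exhaustion from accumulating TIME_WAIT sockets is a classic cause of exactly this "fine for two minutes, slower after" shape. These are real sysctls, but the values are illustrative, not recommendations:)

        # /etc/sysctl.conf (illustrative values; apply with `sysctl -p`)
        net.core.somaxconn = 4096                    # listen backlog ceiling
        net.ipv4.ip_local_port_range = 1024 65535    # more ephemeral ports
        net.ipv4.tcp_tw_reuse = 1                    # reuse TIME_WAIT sockets for new outbound connections
        fs.file-max = 200000                         # system-wide file handle cap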

    Read the article
