Search Results

Search found 1631 results on 66 pages for 'statistics'.


  • jQuery - .ready() fires on a div tag not in the page

    - by Anders Juul
    Dear all, I have implemented the function below in a separate .js script file which I load in my main page (ASP.NET MVC). The callback to the server is rather costly, but that is acceptable if it is specifically requested on the sub-page / content page where my div tag needs to be filled. My problem is that the code runs on every page, including pages where the div tag is not present. How can I remedy my code? Any comments welcome. Thanks, Anders, Denmark

        jQuery("divStatWrapper").ready(function() {
            jQuery.getJSON("/Statistics/GetPointsDevelopment", function(json) {
                jQuery("#divStatWrapper #divStatLoading").fadeOut(1000);
                var jsonPoints = new Array();
                jQuery.each(json.Points, function(i, item) {
                    jsonPoints.push(item.PlayerPoints);
                });
                jsonPoints.reverse();
                var api = new jGCharts.Api();
                jQuery('<img>')
                    .attr('src', api.make({ data: jsonPoints, type: "lc", size: "600x250" }))
                    .appendTo("#divStatContents");
                jQuery("#divStatWrapper #divStatContents").fadeIn(1000);
            });
        });
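
    A likely cause, for what it's worth: jQuery's .ready() is bound to the document, not to the selector it is called on, so the handler above fires on every page regardless of whether the div exists. A minimal sketch of one common remedy (assuming the wrapper div's id is divStatWrapper) is to test for the element before issuing the costly call:

        jQuery(document).ready(function() {
            // Skip the expensive server call on pages without the target div
            if (jQuery("#divStatWrapper").length === 0) {
                return;
            }
            jQuery.getJSON("/Statistics/GetPointsDevelopment", function(json) {
                // ...populate #divStatContents as in the original handler...
            });
        });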

  • monitoring iPhone 3G/Wi-Fi bandwidth usage

    - by user262325
    Hello everyone, I hope to add a function that monitors the iPhone's 3G/Wi-Fi bandwidth usage. I know I can do this within my own app, but I wonder if there is a way to monitor all bandwidth usage of the iPhone? I noticed that there is an app: http://itunes.apple.com/ca/app/download-meter-for-wi-fi-gprs/id327227530?mt=8. It claims: "Works with any carrier (iPods are supported too!). Requires no login/password to your carrier's site. Does not require an Internet connection to get traffic usage statistics (works even in Airplane mode) because it gets information on traffic from the device, not the carrier; this means it won't cost you anything to view current traffic consumed, even while roaming in a location where traffic is very expensive. Allows the user to adjust consumed traffic for today or the current month to match traffic amounts reported by your carrier. Stats on the screen are updated every second. Measures traffic in real time with up to byte precision. Separates traffic into incoming and outgoing, Wi-Fi traffic and mobile Internet traffic." It looks like it gets the bandwidth info from the carrier's site, but they also say it gets the info from the device. I am really confused. Welcome any comments. Thanks, interdev

  • Excel export displaying '#####...'

    - by Cypher
    I'm trying to export an Excel database into .txt (tab-delimited), but some of my cells are quite large. When I export into a txt, some of the cells are exported as '#######....', which is surprisingly useless. Has this happened to anyone else? Do you know an easy fix? Data from one cell of my column: Accounting, African Studies, Agricultural/Bioresource Engineering, Agricultural Economics, Agricultural Science, Anatomy/Cell Biology, Animal Biology, Animal Science, Anthropology, Applied Zoology, Architecture, Art History, Atmospheric/Oceanic Science, Biochemistry, Biology, Botanical Sciences, Canadian Studies, Chemical Engineering, Chemistry/Bio-Organic/Environmental/Materials, Church Music Performance, Civil Engineering/Applied Mechanics, Classics, Composition, Computer Engineering, Computer Science, Contemporary German Studies, Dietetics, Early Music Performance, Earth/Planetary Sciences, East Asian Studies, Economics, Electrical Engineering, English Literature/Drama/Theatre/Cultural Studies, Entrepreneurship, Environment, Environmental Biology, Finance, Food Science, Foundations of Computing, French Language/Linguistics/Literature/Translation, Geography, Geography/Urban Systems, German, German Language/Literature/Culture, Hispanic Languages/Literature/Culture, History, Humanistic Studies, Industrial Relations, Information Systems, International Business, International Development Studies, Italian Studies/Medieval/Renaissance, Jazz Performance, Jewish Studies, Keyboard Studies, Kindergarten/Elementary Education, Kindergarten/Elementary Education/Jewish Studies, Kinesiology, Labor/Management Relations, Latin American/Caribbean Studies, Linguistics, Literature/Translation, Management Science, Marketing, Materials Engineering, Mathematics, Mathematics/Statistics, Mechanical Engineering, Microbiology, Microbiology/Immunology, Middle Eastern Studies, Mining Engineering, Music, Music Education, Music History, Music Technology, Music Theory, North American Studies, Nutrition, Operations Management, Organizational Behavior/Human Resources Management, Performing Arts, Philosophy, Physical Education, Physics, Physiology, Plant Sciences, Political Science, Psychology, Quebec Studies, Religious Studies/Scriptures/Interpretations/World Religions, Resource Conservation, Russian, Science for Teachers, Secondary Education, Secondary Education/Music, Secondary Education/Science, Social Work, Sociology, Software Engineering, Soil Science, Strategic Management, Teaching of French/English as a Second Language, Theology, Wildlife Biology, Wildlife Resources, Women’s Studies.

  • How to create an offline OLAP cube in C#?

    - by jimmyjoe
    I have a problem creating an offline OLAP cube from C# using the following code:

        using (var connection = new OleDbConnection())
        {
            connection.ConnectionString =
                "Provider=MSOLAP; Initial Catalog=[OCWCube]; Data Source=C:\\temp\\test.cub; " +
                "CreateCube=CREATE CUBE [OCWCube] ( DIMENSION [NAME], LEVEL [Wszystkie] TYPE ALL, " +
                "LEVEL [NAME], MEASURE [Liczba DESCRIPTIO] FUNCTION COUNT ); " +
                "InsertInto=INSERT INTO OCWCube([Liczba DESCRIPTIO], [NAME].[NAME]) OPTIONS ATTEMPT_ANALYSIS " +
                "SELECT Planners.DESCRIPTIO, Planners.NAME FROM Planners Planners; " +
                "Source_DSN=\"CollatingSequence=ASCII;DefaultDir=c:\\temp;Deleted=1;" +
                "Driver={Microsoft dBase Driver (*.dbf)};DriverId=277;FIL=dBase IV;MaxBufferSize=2048;" +
                "MaxScanRows=8;PageTimeout=600;SafeTransactions=0;Statistics=0;Threads=3;UserCommitSync=Yes;\";" +
                "Mode=Write;UseExistingFile=True";
            try
            {
                connection.Open();
            }
            catch (OleDbException e)
            {
                Console.WriteLine(e);
            }
        }

    I keep getting the following exception: "Multiple-step operation generated errors. Check each OLE database status value. No action was taken." I took the connection string literally from the OQY file generated by Excel. I had to add the "Mode=Write" section; otherwise I was getting another exception ("file may be in use"). What is wrong with the connection string? How do I diagnose the error? Somebody please guide me...
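
    When this "multiple-step operation" error appears, the real provider error is often buried in the OLE DB error collection rather than the top-level message. A small diagnostic sketch (standard System.Data.OleDb API; it extends the catch block above):

        catch (OleDbException e)
        {
            // Each OleDbError carries the provider's own message and status codes
            foreach (OleDbError err in e.Errors)
            {
                Console.WriteLine("Message: {0} / Native: {1} / SQLState: {2} / Source: {3}",
                    err.Message, err.NativeError, err.SQLState, err.Source);
            }
        }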

  • Can I make a "TCP packet modifier" using tun/tap and raw sockets?

    - by benhoyt
    I have a Linux application that talks TCP, and to help with analysis and statistics I'd like to modify the data in some of the TCP packets that it sends out. I'd prefer to do this without hacking the Linux TCP stack. The idea I have so far is to make a bridge which acts as a "TCP packet modifier": connect to the application via a tun/tap device on one side of the bridge, and to the network card via raw sockets on the other side. My concern is that when you open a raw socket it still sends packets up to Linux's TCP stack, so I couldn't modify them and send them on even if I wanted to. Is this correct? A pseudo-C-code sketch of the bridge looks like:

        tap_fd = open_tap_device("/dev/net/tun");
        raw_fd = open_raw_socket();

        for (;;) {
            select(fds = [tap_fd, raw_fd]);
            if (FD_ISSET(tap_fd, &fds)) {
                read_packet(tap_fd);
                modify_packet_if_needed();
                write_packet(raw_fd);
            }
            if (FD_ISSET(raw_fd, &fds)) {
                read_packet(raw_fd);
                modify_packet_if_needed();
                write_packet(tap_fd);
            }
        }

    Does this look possible, or are there other, better ways of achieving the same thing? (TCP packet bridging and modification.)
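
    For reference, a minimal sketch of what open_tap_device could look like on Linux (the standard TUNSETIFF ioctl; the interface name and error handling here are illustrative):

        #include <fcntl.h>
        #include <string.h>
        #include <unistd.h>
        #include <sys/ioctl.h>
        #include <linux/if.h>
        #include <linux/if_tun.h>

        int open_tap_device(const char *clonedev)
        {
            struct ifreq ifr;
            int fd = open(clonedev, O_RDWR);        /* e.g. "/dev/net/tun" */
            if (fd < 0)
                return fd;
            memset(&ifr, 0, sizeof(ifr));
            /* IFF_TAP: layer-2 frames; IFF_NO_PI: no extra packet-info header */
            ifr.ifr_flags = IFF_TAP | IFF_NO_PI;
            strncpy(ifr.ifr_name, "tap0", IFNAMSIZ);
            if (ioctl(fd, TUNSETIFF, &ifr) < 0) {
                close(fd);
                return -1;
            }
            return fd;
        }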

  • SQL Server 2008 JOIN hints

    - by Nai
    Hi all, recently I was trying to optimise this query:

        UPDATE Analytics
        SET UserID = x.UserID
        FROM Analytics z
        INNER JOIN UserDetail x ON x.UserGUID = z.UserGUID

    The estimated execution plan showed 57% on the Table Update and 40% on a Hash Match (Aggregate). I did some snooping around and came across the topic of JOIN hints, so I added a LOOP hint to my inner join and WA-ZHAM! The new execution plan shows 38% on the Table Update and 58% on an Index Seek. I was about to start applying LOOP hints to all my queries until prudence got the better of me. After some googling, I realised that JOIN hints are not very well covered in BOL. Therefore:

    1. Can someone please tell me why applying LOOP hints to all my queries is a bad idea? I read somewhere that a LOOP JOIN is the default JOIN method for the query optimiser, but I couldn't verify the validity of that statement.
    2. When are JOIN hints used? When the sh*t hits the fan and Ghostbusters ain't in town?
    3. What's the difference between the LOOP, HASH and MERGE hints? BOL states that MERGE seems to be the slowest, but what is the application of each hint?

    Thanks for your time and help, people! I'm running SQL Server 2008, BTW. The statistics mentioned above are from ESTIMATED execution plans.
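
    For concreteness, a sketch of the two places a join hint can go in T-SQL, using the query above (both forms are documented syntax; which is preferable depends on how much of the plan you want to constrain):

        -- As a join hint on one specific join:
        UPDATE Analytics
        SET UserID = x.UserID
        FROM Analytics z
        INNER LOOP JOIN UserDetail x ON x.UserGUID = z.UserGUID

        -- As a query hint constraining every join in the statement:
        UPDATE Analytics
        SET UserID = x.UserID
        FROM Analytics z
        INNER JOIN UserDetail x ON x.UserGUID = z.UserGUID
        OPTION (LOOP JOIN)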

  • How is jQuery so fast?

    - by ClarkeyBoy
    Hey, I have a rather large application which, on the admin frontend, takes a few seconds to load a page because of all the pageviews it has to load into objects before displaying anything. It's a bit complex to explain how the system works, but a few of my other questions explain it in great detail. The main difference between what they describe and the current system is that the customer frontend no longer loads all the pageviews into objects when a customer first views the page: it simply adds the pageview to the database and creates an object in an unsynchronised list. To put it simply, when a customer views a page it no longer loads all the pageviews into objects, but the admin frontend still does.

    I have been working on some admin tools on the customer frontend recently, so if an administrator clicks the description of an item in the catalogue, the right-hand column displays statistics and available actions for the selected item. To do this, the page which gets loaded (through $('action-container').load(bla bla bla);) into the right-hand column has to loop through ALL the pageviews, which ultimately means that ALL the pageviews are loaded into objects if they haven't been already. For some reason this loads really REALLY fast. The difference in speed is only about a second on my dev site, but the live site has thousands of pageviews, so the difference is quite big.

    So my question is: why does the admin frontend load so slowly while using $(bla).load(bla); is so fast? I mean, whatever method jQuery uses, can't browsers use the same method and load pages super-fast? Obviously not, as someone would've done that by now, but I am interested to know just why the difference is so big. Is it just my system, or is there a major difference in speed between the browser getting a page and jQuery getting a page? Do other people experience the same kind of differences? Thanks in advance, Regards, Richard

  • Perf4j Not Logging to Separate File

    - by Jehud
    I set up some StopWatch calls in my code to measure some code blocks, and all the messages are going into my primary log instead of the timing log. The perfStats.log file gets created just fine, but all the messages go to the root log, which I didn't think was supposed to happen according to the docs I've read. Is there something obvious I'm missing here? Example log4j.xml:

        <!-- This file appender is used to output aggregated performance statistics -->
        <appender name="fileAppender" class="org.apache.log4j.FileAppender">
            <param name="File" value="perfStats.log"/>
            <layout class="org.apache.log4j.PatternLayout">
                <param name="ConversionPattern" value="%m%n"/>
            </layout>
        </appender>

        <!-- Loggers -->
        <!-- The Perf4J logger. Note that org.perf4j.TimingLogger is the value of the
             org.perf4j.StopWatch.DEFAULT_LOGGER_NAME constant. Also, note that additivity
             is set to false, which is usually what is desired - this means that timing
             statements will only be sent to this logger and NOT to upstream loggers. -->
        <logger name="org.perf4j.TimingLogger" additivity="false">
            <level value="INFO"/>
            <appender-ref ref="CoalescingStatistics"/>
        </logger>

        <root>
            <priority value="info"/>
            <appender-ref ref="STDOUT-DEBUG"/>
        </root>
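
    One detail that stands out in the config as shown (assuming no other appenders are defined elsewhere): the Perf4J logger references an appender named CoalescingStatistics, but the only appender defined is fileAppender. If the referenced appender does not exist, the timing logger ends up with no working appender at all, so whether or not this fully explains the root-logger behaviour, the reference should match a defined appender. A minimal sketch of that fix:

        <logger name="org.perf4j.TimingLogger" additivity="false">
            <level value="INFO"/>
            <!-- must match the name attribute of a defined appender -->
            <appender-ref ref="fileAppender"/>
        </logger>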

  • Flexible Decorator Pattern?

    - by Omar Kooheji
    I was looking for a pattern to model something I'm thinking of doing in a personal project, and I was wondering if a modified version of the decorator pattern would work. Basically, I'm thinking of creating a game where the characters' attributes are modified by what items they have equipped. The way the decorator stacks its modifications is perfect for this; however, I've never seen a decorator that allows you to drop intermediate decorators, which is what would happen when items are unequipped. Does anyone have experience using the decorator pattern in this way? Or am I barking up the wrong tree?

    Clarification: To explain "intermediate decorators": if, for example, my base class is coffee, which is decorated with milk, which is decorated with sugar (using the example in Head First Design Patterns), then milk would be an intermediate decorator, as it decorates the base coffee and is decorated by the sugar.

    Yet more clarification :) The idea is that items change stats. I'd agree that I am shoehorning the decorator into this; I'll look into the state bag. Essentially I want a single point of call for the statistics, and for them to go up/down when items are equipped/unequipped. I could just apply the modifiers to the character's stats on equipping and roll them back when unequipping, or, whenever a stat is asked for, iterate through all the items and calculate the stat (as sketched below). I'm just looking for feedback here; I'm aware that I might be using a chainsaw where scissors would be more appropriate...
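
    A minimal sketch of that "iterate over equipped items" option (all names are hypothetical; Java, in keeping with the Head First example):

        import java.util.ArrayList;
        import java.util.List;

        class Item {
            final String name;
            final int strengthBonus;
            Item(String name, int strengthBonus) {
                this.name = name;
                this.strengthBonus = strengthBonus;
            }
        }

        class GameCharacter {
            private final int baseStrength;
            private final List<Item> equipped = new ArrayList<Item>();

            GameCharacter(int baseStrength) { this.baseStrength = baseStrength; }

            void equip(Item item)   { equipped.add(item); }
            void unequip(Item item) { equipped.remove(item); } // drops an "intermediate" modifier cleanly

            // Single point of call: the stat is recomputed from base + items on demand
            int getStrength() {
                int total = baseStrength;
                for (Item item : equipped) {
                    total += item.strengthBonus;
                }
                return total;
            }
        }

    Unequipping any item, in any order, simply drops its contribution, which is exactly the part a classic decorator chain makes awkward.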

  • How can I load a file into a web app at regular intervals?

    - by Elena
    Hi all! I have the following task: I need to load the same file into my web app several times, for example twice a day. Suppose that file contains information that changes, and I need to load it into my app to update statistics, for example. How can I load the file several times (twice an hour, or twice a day)? What should I use, and is there an algorithm to do that? I am not allowed to use external libraries like Quartz Scheduler, so I need to do it with Thread and/or Timer. Can anybody give me an example or algorithm of how to do it? Where should I create the entry point to my Thread: can I do it in a managed bean, or do I need some sort of filter/listener/servlet? I work with JSF and RichFaces; maybe these technologies offer some way to solve my problem. Any ideas? Thanks very much for the help!
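
    Given the no-external-libraries constraint, a minimal sketch using java.util.Timer started from a ServletContextListener (the listener class and the reload helper are hypothetical names; the listener would be registered with a <listener> element in web.xml):

        import java.util.Timer;
        import java.util.TimerTask;
        import javax.servlet.ServletContextEvent;
        import javax.servlet.ServletContextListener;

        public class FileReloadListener implements ServletContextListener {
            private Timer timer;

            public void contextInitialized(ServletContextEvent sce) {
                timer = new Timer(true); // daemon thread, won't block container shutdown
                timer.scheduleAtFixedRate(new TimerTask() {
                    public void run() {
                        // hypothetical helper: re-read the file and refresh the stats
                        StatisticsHolder.reloadFromFile("/path/to/data.file");
                    }
                }, 0, 12 * 60 * 60 * 1000L); // every 12 hours = twice a day
            }

            public void contextDestroyed(ServletContextEvent sce) {
                timer.cancel();
            }
        }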

  • Seeking reporting or templating tool to generate large formatted PDF reports from dataset

    - by Mr. Tacos
    Say I have some data in MySQL or a big ole CSV file. I also have a report: a PDF, call it 100 pages long. I need to generate variations on this PDF for slices of the data. A more specific example: I have a CSV file with each StackOverflow user in a row, and each column contains various statistics about that user. I have a report called "Your StackOverflow Performance". It's got lots of text, always the same, but each section contains something like "You vs. the Average StackOverflow Poster on this metric". I want a table there with the average data, which is the same in every run of the PDF, in one column. In the second column, I want your data, which is different for each PDF/row in the CSV file/user of StackOverflow. I'm pretty sure people use things like Crystal for this? Is there something in MS SQL Server that's good for this? An open-source template language? I'm not even really sure if what I need is called a 'reporting' tool (since I don't really need to do any crunching; the data in this case is crunched by a series of scripts and SPSS, and I don't need bands and subbands and so on) or 'templating'. Is there even such a thing as templating PDFs? Natch, I'd be fine with something that generates output easily scriptable to PDF, like EPS, but not something like HTML. The report formatting is fussy, done, externally determined, and handed down from on high. It's print-oriented, not webby. Thanks in advance.

  • jQuery Internet Explorer 8 compatibility issue: does not load data unless history is deleted

    - by Scarface
    Hey guys, I have a weird problem. I have an update system that refreshes data on a time interval. It works well in all browsers except Internet Explorer 8. The problem is that once it loads the data, it does not matter if the data updates further; it will not update the data visually until the internet history is cleared. I am not using any cookies server-side... has anyone ever encountered something like this? Here is my JavaScript; thanks for any assistance in advance.

        function prepare(response) {
            var d = new Date();
            count++;
            d.setTime(response.time * 1000);
            var mytime = d.getHours() + ':' + d.getMinutes() + ':' + d.getSeconds();
            var string = '<li class="shoutbox-list" id="list-' + count + '">'
                + '<span class="shoutbox-list-nick"><a href="statistics.php?user=' + response.user + '">' + response.user + '</a></span>'
                + ' <span class="date">' + mytime + '</span><br>'
                + '<span class="msg">' + response.message + '</span>'
                + '</li>';
            return string;
        }

        function refresh() {
            $.getJSON(files + "shoutbox.php?action=view&time=" + lastTime + "&topic_id=" + topic_id, function(json) {
                if (json.length) {
                    for (i = 0; i < json.length; i++) {
                        $('#daddy-shoutbox-list').prepend(prepare(json[i]));
                        $('#list-' + count).fadeIn(1500);
                    }
                    var j = i - 1;
                    lastTime = json[j].time;
                }
                //alert(lastTime);
            });
            timeoutID = setTimeout(refresh, 3000);
        }

        $(document).ready(function() {
            var options = {
                dataType: 'json',
                beforeSubmit: validate,
                success: function(response, status) {
                    if (response.error == 'success') {
                        success(response, status);
                    } else {
                        $.prompt(response.error);
                    }
                }
            };
            $('#daddy-shoutbox-form').ajaxForm(options);
            timeoutID = setTimeout(refresh, 100);
        });
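
    A hedged guess that fits the symptom: IE8 caches GET responses for identical URLs far more aggressively than other browsers, so getJSON can keep returning a stale cached response until the cache/history is cleared. Two common workarounds, sketched:

        // Option 1: have jQuery append a cache-busting timestamp to all AJAX requests
        $.ajaxSetup({ cache: false });

        // Option 2: add the cache-buster to this one request by hand
        $.getJSON(files + "shoutbox.php?action=view&time=" + lastTime +
                  "&topic_id=" + topic_id + "&_=" + new Date().getTime(),
                  function(json) { /* ...as above... */ });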

  • Programmatically talking to a Serial Port in OS X or Linux

    - by deadprogrammer
    I have a Prolite LED sign that I'd like to set up to show scrolling search queries from Apache logs and other fun statistics. The problem is, my G5 does not have a serial port, so I have to use a USB-to-serial dongle. It shows up as /dev/cu.usbserial and /dev/tty.usbserial. When I do this, everything seems to be hunky-dory:

        stty -f /dev/cu.usbserial
        speed 9600 baud;
        lflags: -icanon -isig -iexten -echo
        iflags: -icrnl -ixon -ixany -imaxbel -brkint
        oflags: -opost -onlcr -oxtabs
        cflags: cs8 -parenb

    Everything also works when I use the serial port tool to talk to it. If I run this piece of code while the above-mentioned serial port tool is open, everything also works. But as soon as I disconnect the tool, the connection gets lost.

        #!/usr/bin/python
        import serial
        ser = serial.Serial('/dev/cu.usbserial', 9600, timeout=10)
        ser.write("<ID01><PA> \r\n")
        read_chars = ser.read(20)
        print read_chars
        ser.close()

    So the question is: what magicks do I need to perform to start talking to the serial port without the serial port tool? Is it a permissions problem? Also, what's the difference between /dev/cu.usbserial and /dev/tty.usbserial?
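
    One thing worth testing, offered as a guess: a serial terminal program typically asserts the DTR/RTS modem-control lines while it has the port open, and some devices only respond while those lines are up, which would explain why the script works only while the tool is connected. pyserial can assert them explicitly; a minimal sketch:

        import serial

        ser = serial.Serial('/dev/cu.usbserial', 9600, timeout=10)
        # Raise the modem-control lines a terminal program would normally assert
        ser.setDTR(True)
        ser.setRTS(True)
        ser.write("<ID01><PA> \r\n")
        print ser.read(20)
        ser.close()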

  • Strange execution times in T-SQL

    - by TonyP
    Hi all, I have two stored procedures; the first one calls the second. If I execute the second one alone, it takes over 5 minutes to complete. But when executed within the first one, it takes little over 1 minute. What is the reason? Here is the first one:

        ALTER procedure [dbo].[schRefreshPriceListItemGroups]
        as
        begin tran

        delete from PriceListItemGroups
        if @@error != 0 goto rolback

        Insert PriceListItemGroups (comno, t$cuno, t$cpls, t$cpgs, t$dsca, t$cpru)
        SELECT distinct c.comno, c.t$cuno, c.t$cpls, I.t$cpgs, g.t$dsca, g.t$cpru
        FROM TTCCOM010nnn C
        JOIN TTDSLS032nnn PL ON PL.comno = c.Comno and PL.t$cpls = c.t$cpls
        JOIN TTIITM001nnn I ON I.t$item = pl.t$item AND I.comno = pl.comNo
        JOIN TTCMCS024nnn G ON g.T$cprg = I.t$cpgs AND g.comno = I.Comno
        WHERE c.t$cpls != ''
        order by comno desc, t$cuno, t$cpgs
        if @@error != 0 goto rolback

        -----------------------------------------------------
        Exec scrRefreshCustomersCatalogs
        -----------------------------------------------------

        commit tran
        return

        rolback:
        Rollback tran

    And the second one:

        Alter proc scrRefreshCustomersCatalogs
        as
        declare @baanIds table (id int identity(1,1), baanId varchar(12))
        declare @baanId varchar(12), @i int, @n int

        Insert @baanIds(BaanId) select baanId from ftElBaanIds()
        SELECT @i = 1, @n = max(id) from @baanIds
        select @i, @n

        Begin tran
        if @@error != 0 goto xRollBack
        WHILE @i <= @n
        Begin
            select @baanId = baanId from @baanIds where id = @i
            if @@error != 0 goto xRollBack

            Delete from customersCatalogs where comno + '-' + t$cuno = @baanId
            print Convert(varchar, @i) + ' baanId=' + @baanId

            Insert customersCatalogs exec customersCatalog @baanId
            if @@error != 0 goto xRollBack

            set @i = @i + 1
        end
        Commit Tran

        Update statistics customersCatalogs with fullscan
        Return

        xRollBack:
        Print '*****Rolling back*************'
        Rollback tran
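
    When timings differ this much between contexts, it helps to see where the time actually goes before guessing. A standard measuring sketch (plain SQL Server session options, nothing specific to these procedures):

        SET STATISTICS TIME ON;  -- parse/compile and execution CPU/elapsed times
        SET STATISTICS IO ON;    -- logical/physical reads per table

        EXEC scrRefreshCustomersCatalogs;

        SET STATISTICS TIME OFF;
        SET STATISTICS IO OFF;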

  • LINQ query help needed (C#/.NET)

    - by Paul Matthews
    I'm very new to LINQ and struggling to find the answers. I have a simple SQL query:

        SELECT ID, COUNT(ID) AS Selections, OptionName, SUM(Units) AS Units
        FROM tbl_Results
        GROUP BY ID, OptionName

    The results I got were:

        1  4  Approved     40
        2  1  Rejected     19
        3  2  Not Decided  12

    Because I have to encrypt all my data in the database, I'm unable to do sums there. Therefore I now bring back the data and decrypt it in the application layer. The results would be:

        1  Approved     10
        3  Not Decided   6
        2  Rejected     19
        1  Approved     15
        1  Approved      5
        3  Not Decided   6
        1  Approved     10

    Using a simple class, I have called back the above results and put them in a list:

        public class Results
        {
            public int ID { get; set; }
            public string OptionName { get; set; }
            public int Unit { get; set; }
        }

    I almost have the LINQ query to bring back results like the SQL query at the top:

        var q = from r in results
                group r.Unit by r.ID into g
                select new { ID = g.Key, Selections = g.Count(), Units = g.Sum() };

    How do I ensure my LINQ query also gives me the OptionName? Also, if I created a class called Statistics to hold my results, how would I modify the LINQ query to give me a list result set?

        public class Statistics
        {
            public int ID { get; set; }
            public int NumberOfSelections { get; set; }
            public string OptionName { get; set; }
            public int UnitTotal { get; set; }
        }
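
    A hedged sketch of the grouping that mirrors the SQL: group on a composite key so OptionName travels with each group, then project into the Statistics class (this assumes the source list is named results, as above):

        var q = from r in results
                group r by new { r.ID, r.OptionName } into g
                select new Statistics
                {
                    ID = g.Key.ID,
                    OptionName = g.Key.OptionName,
                    NumberOfSelections = g.Count(),
                    UnitTotal = g.Sum(x => x.Unit)
                };

        var statsList = q.ToList(); // materialise as List<Statistics>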

  • Mathematics for Computer Science Students

    - by Ender
    To cut a long story short, I am a CS student who has received no formal post-16 maths education for years. Right now even my algebra is extremely rusty, and I have a couple of months to shape up my skills. I've got a couple of video lectures in my bookmarks, consisting of: Pre-Calculus, Algebra, Calculus, Probability, Introduction to Statistics, Differential Equations, and Linear Algebra.

    My aim as of today is to be able to read the CLRS book Introduction to Algorithms and follow the mathematical notation in it, as well as being able to confidently read and back up any arguments written in mathematical notation. Aside from these video lectures, can anyone recommend any good books to help someone wishing to go from a low foundation level to a more advanced level of mathematics? Just as a note, I've taken a first-year module in Analytical Modelling, so I understand some of the basic concepts of discrete mathematics.

    EDIT: Just a note to those looking to learn Linear Algebra using the video lectures I have posted: Peteris Krumins' blog contains a run-through of these lectures along with his own commentary and lecture notes, an invaluable resource for those looking to follow the lectures too.

  • Kohana project structure

    - by user176217
    Hello guys, I'm investigating using Kohana for my next project. The site will consist of user registration (and hence user profiles) where users will have certain privileges. The site will also have an admin section where administrators can go to, say, block a user, delete a post, or look at usage statistics. A good comparison site would be a multi-user blog, where each blogger, depending on her/his permissions, can post/edit/delete blogs, just as an example.

    Firstly, I'm not sure how to set up the controller/view structure in order to separate the admin section from the front-facing site. I'm using Kohana 3, so I was thinking of a controller structure like so: application/classes/controller/front (front-facing) and application/classes/controller/admin (for the administrative section). Or, I notice you may be able to use the Route class to set up routes, so I could set up an "admin" route; for example, www.example.com/admin would lead to the admin logon screen and www.example.com to the front controller. As well, can I somehow separate the admin views and controllers from the front-facing views and controllers, like dividing them up based on folder structure? Any help is very much appreciated. Thank you.
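
    A sketch of the route-based approach in Kohana 3 (the defaults shown are assumptions, not part of the question): the 'directory' parameter maps /admin URLs onto controllers under classes/controller/admin, keeping them physically separate from the front-facing ones.

        // in bootstrap.php
        Route::set('admin', 'admin(/<controller>(/<action>(/<id>)))')
            ->defaults(array(
                'directory'  => 'admin',      // classes/controller/admin/...
                'controller' => 'dashboard',  // hypothetical default controller
                'action'     => 'index',
            ));

        // everything else falls through to the front-facing controllers
        Route::set('default', '(<controller>(/<action>(/<id>)))')
            ->defaults(array(
                'controller' => 'home',
                'action'     => 'index',
            ));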

  • Sybase - fails to use index unless string is hard-coded

    - by Garrett
    I'm using Sybase 12.5.3 (ASE); I'm new to Sybase, though I've worked with MSSQL pretty extensively. I'm running into a scenario where a stored procedure is really very slow. I've traced the issue to a single SELECT statement for a relatively large table. Modifying that statement dramatically improves the performance of the procedure (and reverting it drastically slows it down; i.e., the SELECT statement is definitely the culprit).

        -- Sybase optimizes and uses the multi-column index... fast!
        SELECT ID, status, dateTime
        FROM myTable
        WHERE status in ('NEW', 'SENT')
        ORDER BY ID

        -- Sybase does not use the index and does a very slow table scan
        SELECT ID, status, dateTime
        FROM myTable
        WHERE status in (select status from allowableStatusValues)
        ORDER BY ID

    The code above is an adapted/simplified version of the actual code. Note that I've already tried recompiling the procedure, updating statistics, etc. I have no idea why Sybase ASE would choose an index only when strings are hard-coded and choose a table scan when choosing from another table. Someone please give me a clue, and thank you in advance.
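
    One rewrite worth testing, offered as a guess about the optimiser rather than a guaranteed fix: expressing the subquery as an explicit join sometimes lets the engine use the same index it picks for the hard-coded list (this assumes status values are unique in allowableStatusValues; otherwise add DISTINCT):

        SELECT t.ID, t.status, t.dateTime
        FROM myTable t
        JOIN allowableStatusValues v ON v.status = t.status
        ORDER BY t.ID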

  • Reverse-ordered list for jQuery-submitted comments

    - by g-man
    Hey guys, I have one more question, lol. I am using a script that allows users to submit comments through jQuery AJAX. However, when a comment is submitted it appears at the bottom of the existing comments, which are sorted in descending order (newest on top) when the page first loads (due to the MySQL query). Is there a way to make a new comment appear on top, through some sort of sorting JavaScript function?

        function prepare(response) {
            var d = new Date();
            count++;
            d.setTime(response.time * 1000);
            var mytime = d.getHours() + ':' + d.getMinutes() + ':' + d.getSeconds();
            var string = '<li class="shoutbox-list" id="list-' + count + '">'
                + '<span class="date">' + mytime + '</span>'
                + '<span class="shoutbox-list-nick"><a href="statistics.php?user=' + response.user + '">' + response.user + '</a>:</span>'
                + '<span class="msg">' + response.message + '</span>'
                + '</li>';
            return string;
        }

        function success(response, status) {
            if (status == 'success') {
                lastTime = response.time;
                $('#daddy-shoutbox-list').append(prepare(response));
                $('input[name=message]').attr('value', '').focus();
                $('#list-' + count).fadeIn('slow');
                timeoutID = setTimeout(refresh, 3000);
            }
        }

        <div id="daddy-shoutbox">
            <ol id="daddy-shoutbox-list"></ol>
        </div>
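
    Assuming new comments should appear on top, matching the descending order of the initial page load, the fix can be as small as prepending instead of appending in success():

        // was: $('#daddy-shoutbox-list').append(prepare(response));
        $('#daddy-shoutbox-list').prepend(prepare(response));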

  • Can't seem to load a webview from xml, why?

    - by Bilthon
    Well, I'm trying to follow the tutorial from http://rapidandroid.org/wiki/Graphing, but I found a problem in the very first part of it. I'll describe the problem here; I just cannot understand how this could be wrong, it's pretty simple stuff. What am I doing wrong? First I created an xml file called statistics.xml, and in it, among other things, I put this code:

        <WebView android:id="@+id/webview"
            android:layout_height="fill_parent"
            android:layout_width="fill_parent"/>

    Now, in the activity that is supposed to display this WebView I'm doing this:

        public class DisplayStatistics extends Activity {
            public void onCreate(Bundle savedInstanceState) {
                super.onCreate(savedInstanceState);
                setContentView(R.layout.main);
                WebView wv = (WebView) findViewById(R.id.webview);

    The problem arises here: I get null for wv whenever I test it, which means of course that findViewById(R.id.webview) couldn't find the view. But again, what am I doing wrong? Of course I know I could also instantiate the WebView directly from code without specifying it in the xml, but I was just wondering what was wrong with this way of doing it. Just in case, I also added the following line to my Android manifest file:

        <uses-permission android:name="android.permission.INTERNET" />

    I'm not really sure if it's necessary for displaying the WebView (I think not), but I just put it there to see if that was the problem, and it turned out it isn't. Thanks, Nelson
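
    A hedged reading of the snippet above: the activity inflates R.layout.main, but the WebView is declared in statistics.xml, so findViewById searches a layout that does not contain it and returns null. Assuming the WebView should come from statistics.xml, the fix would be:

        setContentView(R.layout.statistics);                 // inflate the layout that declares the WebView
        WebView wv = (WebView) findViewById(R.id.webview);   // now resolvable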

  • Web-based game in Python + Django and client browser polling

    - by ty
    I am creating a text-based game that implements a basic model in which multiple (10+) players interact with data and one moderator watches them and sets certain environmental statistics that affect gameplay.

    Recently I have begun to familiarize myself with Django. It seems to me that it would be an excellent tool for creating a game quickly, particularly because the nature of my game depends largely on sets of data (which lends itself quite well to a database). I am wondering how to "push" changes made by the game moderator to the players (for example, the moderator can decide to display an image to all players). The game is turn-based, not real-time, but certain messages need to be pushed out in roughly real time.

    My thoughts: I could have each player's browser poll a status periodically (say, every 30 seconds) to see if there is a message from a moderator. But this forces a lag, and different players might receive the message at different times. And reducing this interval to under 10 seconds seems like a bad idea for the server. Is there a better way to inform clients of changes? Would you suggest something other than a web framework like Django? Thanks!

  • MySQL efficiency issue - how to find the right balance of normalization?

    - by Foo
    I'm fairly new to working with relational databases, but have read a few books and know the basics of good design. I'm facing a design decision and I'm not sure how to continue. Here's a very oversimplified version of what I'm building: people can rate photos 1-5, and I need to display the average votes on the picture while keeping track of the individual votes. For example, 12 people voted 1, 7 people voted 2, etc.

    The normalization freak in me initially designed the table structure like this:

        Table pictures:  id* | picture | userID
        Table ratings:   id* | pictureID | userID | rating

    with all the foreign key constraints and everything set as they should be. Every time someone rates a picture, I just insert a new record into ratings and be done with it. To find the average rating of a picture, I'd just run something like this:

        SELECT AVG(rating)
        FROM ratings
        WHERE pictureID = '5'
        GROUP BY pictureID

    Having it set up this way lets me run my fancy statistics too: I can easily find who rated a certain picture a 3, and so on.

    Now I'm thinking that if there's a crapload of ratings (which is very possible in what I'm really designing), finding the average will become very expensive and painful. Using a non-normalized version seems more efficient, e.g.:

        Table picture:  id | picture | userID | ratingOne | ratingTwo | ratingThree | ratingFour | ratingFive

    To calculate the average, I'd just have to select a single row. It seems so much more efficient, but so much uglier. Can someone point me in the right direction of what to do? My initial research says I have to "find the right balance", but how do I go about finding that balance? Any articles or additional reading would be appreciated as well. Thanks.
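
    A common middle ground, sketched here as one option rather than the answer: keep the normalized ratings table as the source of truth, and maintain cached aggregate columns on the picture row so the average becomes a single-row read (column names are illustrative):

        -- cached aggregates on the pictures table
        ALTER TABLE pictures
            ADD COLUMN rating_count INT NOT NULL DEFAULT 0,
            ADD COLUMN rating_sum   INT NOT NULL DEFAULT 0;

        -- on each vote, record it and refresh the cache in the same transaction
        INSERT INTO ratings (pictureID, userID, rating) VALUES (5, 42, 4);
        UPDATE pictures
        SET rating_count = rating_count + 1,
            rating_sum   = rating_sum + 4
        WHERE id = 5;

        -- the average is then rating_sum / rating_count, with no scan of ratings needed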

  • Subtotals in columns using reshape2 in R

    - by user1043144
    I have spent some time now learning reshape2 and plyr, but I still do not get it. This time I have a problem with (a) subtotals and (b) passing different aggregate functions. Here is an example using data from the excellent tutorial on the blog of mrdwab: http://news.mrdwab.com/

        # libraries
        library(plyr)
        library(reshape2)

        # get data and add a few more variables
        book.sales = read.csv("http://news.mrdwab.com/data-booksales")
        book.sales$Stock = book.sales$Quantity + 10
        book.sales$SubjCat[(book.sales$Subject == 'Economics') |
                           (book.sales$Subject == 'Management')] <- '1_EconSciences'
        book.sales$SubjCat[book.sales$Subject %in%
                           c('Anthropology', 'Politics', 'Sociology')] <- '2_SocSciences'
        book.sales$SubjCat[book.sales$Subject %in%
                           c('Communication', 'Fiction', 'History', 'Research', 'Statistics')] <- '3_other'

        # to get to my starting data frame (close to the project I am working on)
        book.sales1 <- ddply(book.sales,
                             c('Region', 'Representative', 'SubjCat', 'Subject', 'Publisher'),
                             summarize,
                             Stock = sum(Stock),
                             Sold = sum(Quantity),
                             Ratio = round((100 * sum(Quantity) / sum(Stock)), digits = 1))

        # melt it
        m.book.sales = melt(data = book.sales1,
                            id.vars = c('Region', 'Representative', 'SubjCat', 'Subject', 'Publisher'),
                            measured.vars = c('Stock', 'Sold', 'Ratio'))

        # cast it
        Tab1 <- dcast(data = m.book.sales,
                      formula = Region + Representative ~ Publisher + variable,
                      fun.aggregate = sum,
                      margins = c('Region', 'Representative'))

    Now my questions:

    1. I have been able to add the subtotals in rows. But is it possible to also add margins in the columns? Say, for example, totals of Sold for all publishers?
    2. There is a problem with the columns with "Ratio". How can I get the mean instead of the sum for this variable?

    P.S.: I have seen some examples using reshape. Would you recommend using it instead of reshape2 (which seems not to include the functionality of those two functions)?
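
    On question 2, a hedged workaround: a ratio of sums is not the sum (or mean) of ratios, so one option is to cast only the additive measures and recompute Ratio from the aggregated columns afterwards. A sketch using the objects above (the cast column names are illustrative):

        m2 <- subset(m.book.sales, variable %in% c('Stock', 'Sold'))
        Tab2 <- dcast(data = m2,
                      formula = Region + Representative ~ Publisher + variable,
                      fun.aggregate = sum,
                      margins = c('Region', 'Representative'))

        # recompute the ratio per publisher from the summed columns, e.g.:
        Tab2$Pub1_Ratio <- round(100 * Tab2$Pub1_Sold / Tab2$Pub1_Stock, 1)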

  • Getting started with massive data

    - by Max
    I'm a math guy and occasionally do some statistics/machine learning analysis consulting projects on the side. The data I have access to are usually on the smaller side, at most a couple hundred megabytes (and almost always far less), but I want to learn more about handling and analyzing data on the gigabyte/terabyte scale. What do I need to know, and what are some good resources to learn from?

    1. Hadoop/MapReduce is one obvious start. Is there a particular programming language I should pick up? (I primarily work now in Python, Ruby, R, and occasionally Java, but it seems like C and Clojure are often used for large-scale data analysis?)
    2. I'm not really familiar with the whole NoSQL movement, except that it's associated with big data. What's a good place to learn about it, and is there a particular implementation (Cassandra, CouchDB, etc.) I should get familiar with?
    3. Where can I learn about applying machine learning algorithms to huge amounts of data? My math background is mostly on the theory side, definitely not on the numerical or approximation side, and I'm guessing most of the standard ML algorithms don't really scale.
    4. Any other suggestions on things to learn would be great!

  • Poor execution plans when using a filter and CONTAINSTABLE in a query

    - by Paul McLoughlin
    We have an interesting problem that I was hoping someone could help shed some light on. At a high level, the problem is as follows. This query executes quickly (1 second):

        SELECT SA.*
        FROM cg.SEARCHSERVER_ACTYS AS SA
        JOIN CONTAINSTABLE(CG.SEARCHSERVER_ACTYS, NOTE, 'reports') AS T1
            ON T1.[Key] = SA.UNIQUE_ID

    but if we add a filter to the query, then it takes approximately 2 minutes to return:

        SELECT SA.*
        FROM cg.SEARCHSERVER_ACTYS AS SA
        JOIN CONTAINSTABLE(CG.SEARCHSERVER_ACTYS, NOTE, 'reports') AS T1
            ON T1.[Key] = SA.UNIQUE_ID
        WHERE SA.CHG_DATE > '19 Feb 2010'

    Looking at the execution plans for the two queries, I can see that in the second case there are two places with huge differences between the actual and estimated number of rows, these being:

    1. The FulltextMatch table-valued function, where the estimate is approx 22,000 rows and the actual is 29 million rows (which are then filtered down to 1670 rows before the join), and
    2. The index seek on the full-text index, where the estimate is 1 row and the actual is 13,000 rows.

    As a result of the estimates, the optimiser chooses a nested loops join (since it assumes a small number of rows), hence the plan is inefficient. We can work around the problem by either (a) parameterising the query and adding OPTION (OPTIMIZE FOR UNKNOWN) to it, or (b) forcing a HASH JOIN. In both cases the query returns in under 1 second and the estimates appear reasonable.

    My question really is: why are the estimates used in the poorly performing case so wildly inaccurate, and what can be done to improve them? Statistics are up to date on the indexes of the indexed view being used here. Any help greatly appreciated.
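
    For reference, a sketch of the hash-join workaround described above as a query-level hint (standard T-SQL syntax; the parameterised OPTIMIZE FOR UNKNOWN variant would replace the literal date with a parameter):

        SELECT SA.*
        FROM cg.SEARCHSERVER_ACTYS AS SA
        JOIN CONTAINSTABLE(CG.SEARCHSERVER_ACTYS, NOTE, 'reports') AS T1
            ON T1.[Key] = SA.UNIQUE_ID
        WHERE SA.CHG_DATE > '19 Feb 2010'
        OPTION (HASH JOIN)  -- forces hash joins for the whole statement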
