Search Results

Search found 1631 results on 66 pages for 'statistics'.


  • AppFabric caching's local cache isn't working for us... What are we doing wrong?

    - by Olly
    We are using AppFabric as the second-level cache for an NHibernate ASP.NET application comprising a customer-facing website and an admin website. They are both connected to the same cache, so when admin updates something, the customer-facing site is updated. It seems to be working OK - we have a cache cluster on a separate server and all is well - but we want to enable the local cache to get better performance; however, it doesn't seem to be working. We have enabled it like this... bool UseLocalCache = int LocalCacheObjectCount = int.MaxValue; TimeSpan LocalCacheDefaultTimeout = TimeSpan.FromMinutes(3); DataCacheLocalCacheInvalidationPolicy LocalCacheInvalidationPolicy = DataCacheLocalCacheInvalidationPolicy.TimeoutBased; if (UseLocalCache) { configuration.LocalCacheProperties = new DataCacheLocalCacheProperties( LocalCacheObjectCount, LocalCacheDefaultTimeout, LocalCacheInvalidationPolicy ); // configuration.NotificationProperties = new DataCacheNotificationProperties(500, TimeSpan.FromSeconds(300)); } Initially we tried using a timeout invalidation policy (3 mins) and our app felt like it was running faster. HOWEVER, we noticed that if we changed something in the admin site, it was immediately updated in the live site. As we are using timeouts, not notifications, this demonstrates that the local cache isn't being queried (or is, but is always missing). cache.GetType().Name returns "LocalCache" - so the factory has made a local cache. Running "Get-Cache-Statistics MyCache" in PowerShell on my dev environment (ASP.NET app running locally from VS2008, cache cluster running on a separate W2K8 machine) shows a handful of Request Counts. However, on the production environment, the Request Count increases dramatically. We tried following the method here to see the cache client-server traffic... http://blogs.msdn.com/b/appfabriccat/archive/2010/09/20/appfabric-cache-peeking-into-client-amp-server-wcf-communication.aspx but the log file had nothing but the initial header in it - i.e. no logging either. I can't find anything on SO or Google. Have we done something wrong? Have we got a screwy install of AppFabric - we installed it via Web Platform Installer, I think? (Note: the IIS box running ASP.NET isn't in the cluster - it is just the client.) Any insights gratefully received!

    Read the article

  • Can this problem be solved with a single SQL query?

    - by PierrOz
    I have the two following tables (with some sample data):

      LOGS:
      ID | SETID | DATE
      ========================
      1  | 1     | 2010-02-25
      2  | 2     | 2010-02-25
      3  | 1     | 2010-02-26
      4  | 2     | 2010-02-26
      5  | 1     | 2010-02-27
      6  | 2     | 2010-02-27
      7  | 1     | 2010-02-28
      8  | 2     | 2010-02-28
      9  | 1     | 2010-03-01

      STATS:
      ID | OBJECTID | FREQUENCY | STARTID | ENDID
      =============================================
      1  | 1        | 0.5       | 1       | 5
      2  | 2        | 0.6       | 1       | 5
      3  | 3        | 0.02      | 1       | 5
      4  | 4        | 0.6       | 2       | 6
      5  | 5        | 0.6       | 2       | 6
      6  | 6        | 0.4       | 2       | 6
      7  | 1        | 0.35      | 3       | 7
      8  | 2        | 0.6       | 3       | 7
      9  | 3        | 0.03      | 3       | 7
      10 | 4        | 0.6       | 4       | 8
      11 | 5        | 0.6       | 4       | 8
      7  | 1        | 0.45      | 5       | 9
      8  | 2        | 0.6       | 5       | 9
      9  | 3        | 0.02      | 5       | 9

    Every day new logs are analyzed on different sets of objects and stored in table LOGS. Among other processes, some statistics are computed on the objects contained in these sets and the results are stored in table STATS. These statistics are computed across several logs (identified by the STARTID and ENDID columns). So, what could be the SQL query that would give me the latest computed stats for all the objects, with the corresponding log dates? In the given example, the result rows would be:

      OBJECTID | SETID | FREQUENCY | STARTDATE  | ENDDATE
      ======================================================
      1        | 1     | 0.45      | 2010-02-27 | 2010-03-01
      2        | 1     | 0.6       | 2010-02-27 | 2010-03-01
      3        | 1     | 0.02      | 2010-02-27 | 2010-03-01
      4        | 2     | 0.6       | 2010-02-26 | 2010-02-28
      5        | 2     | 0.6       | 2010-02-26 | 2010-02-28

    So, the most recent stats for set 1 are computed with logs from Feb 27 to March 1, whereas the stats for set 2 are computed from Feb 26 to Feb 28. Object 6 is not in the result rows as there is no stat on it within the last period of time. Last thing: I use MySQL. Any idea?

    Read the article

  • FileConnection Blackberry memory usage

    - by Dean
    Hello, I'm writing a blackberry application that reads ints and strings out of a database. This is my first time dealing with reading/writing on the blackberry, so forgive me if this is a dumb question. The database file I'm reading is only about 4kB I open the file with the following code fconn = (FileConnection) Connector.open("file_path_here", Connector.READ); if(fconn.exists()==false){fconn.close();return;} is = fconn.openDataInputStream(); while(!eof){ //etc... } is.close(); fconn.close(); The problem is, this code appears to be eating a LOT of memory. Using breakpoints and the "Memory Statistics" view, I determined the following: calling "Connector.open" creates 71 objects and changes "RAM Bytes in use" by 5376 calling "fconn.openDataInputStream();" increases RAM usage by a whopping 75920 Is this normal? Or am I doing something wrong? And how can I fix this? 75MB of RAM is a LOT of memory to waste on a handheld device, especially when the file I'm reading is only 4kB and I haven't even begun reading any data! How is this even possible?

    Read the article

  • Cannot connect to Github?

    - by user2973438
    so I tried to push some updates onto my repo on github via terminal on Mac OSX 10.8.4 and it doesn't work. I've been getting the same error many times: Lillys-MacBook-Air:Yuewei Lilly$ git push origin master error: Failed connect to github.com:443; Operation timed out while accessing https://github.com/lillybeans/Yuewei.git/info/refs?service=git-receive-pack fatal: HTTP request failed Some background: I've pushed many projects onto github before using terminal (when I was in Canada). I am currently in Shanghai, China, could it be the GFW? But when I was in Beijing, I was able to push projects onto github still. when I do ping github.com: Lillys-MacBook-Air:Yuewei Lilly$ ping github.com PING github.com (192.30.252.131): 56 data bytes Request timeout for icmp_seq 0 Request timeout for icmp_seq 1 ping: sendto: No route to host Request timeout for icmp_seq 2 ping: sendto: Host is down Request timeout for icmp_seq 3 ping: sendto: Host is down Request timeout for icmp_seq 4 ping: sendto: Host is down Request timeout for icmp_seq 5 ping: sendto: Host is down Request timeout for icmp_seq 6 ping: sendto: Host is down Request timeout for icmp_seq 7 ^C --- github.com ping statistics --- 9 packets transmitted, 0 packets received, 100.0% packet loss Lillys-MacBook-Air:Yuewei Lilly$ I have ShadowSocks (proxy) turned on. Without it I can't access github.com via browser, with it, I can. also when I do "git remote -v" I see both my pull and push remote repos correctly listed. Thank you in advance!

    Read the article

  • What about parallelism across a network using multiple PCs?

    - by MainMa
    Parallel computing is used more and more, and new framework features and shortcuts make it easier to use (for example the Parallel Extensions, which are directly available in .NET 4). Now what about parallelism across a network? I mean, an abstraction of everything related to communications, creation of processes on remote machines, etc. Something like, in C#: NetworkParallel.ForEach(myEnumerable, () => { // Computing and/or access to web resources or a local network database here }); I understand that it is very different from multi-core parallelism. The two most obvious differences would probably be: The fact that such a parallel task will be limited to computing, without being able, for example, to use files stored locally (but why not a database?), or even to use local variables, because it would really be two distinct applications rather than two threads of the same application; The very specific implementation, requiring not just a separate thread (which is quite easy), but spawning a process on different machines, then communicating with them over the local network. Despite those differences, such parallelism is quite possible, even without speaking about distributed architecture. Do you think it will be implemented in a few years? Do you agree that it would enable developers to easily develop extremely powerful stuff with much less pain? Example: Think about a business application which extracts data from the database, transforms it, and displays statistics. Let's say this application takes ten seconds to load data, twenty seconds to transform data and ten seconds to build charts on a single machine in a company, using all the CPU, whereas ten other machines are used at 5% of CPU most of the time. In such a case, every action may be done in parallel, resulting in probably six to ten seconds for the overall process instead of forty.

    Read the article

  • Subquery making query slow

    - by Muhammad Kashif Nadeem
    Please copy and paste the following script. DECLARE @MainTable TABLE(MainTablePkId int) INSERT INTO @MainTable SELECT 1 INSERT INTO @MainTable SELECT 2 DECLARE @SomeTable TABLE(SomeIdPk int, MainTablePkId int, ViewedTime1 datetime) INSERT INTO @SomeTable SELECT 1, 1, DATEADD(dd, -10, getdate()) INSERT INTO @SomeTable SELECT 2, 1, DATEADD(dd, -9, getdate()) INSERT INTO @SomeTable SELECT 3, 2, DATEADD(dd, -6, getdate()) DECLARE @SomeTableDetail TABLE(DetailIdPk int, SomeIdPk int, Viewed INT, ViewedTimeDetail datetime) INSERT INTO @SomeTableDetail SELECT 1, 1, 1, DATEADD(dd, -7, getdate()) INSERT INTO @SomeTableDetail SELECT 2, 2, NULL, DATEADD(dd, -6, getdate()) INSERT INTO @SomeTableDetail SELECT 3, 2, 2, DATEADD(dd, -8, getdate()) INSERT INTO @SomeTableDetail SELECT 4, 3, 1, DATEADD(dd, -6, getdate()) SELECT m.MainTablePkId, (SELECT COUNT(Viewed) FROM @SomeTableDetail), (SELECT TOP 1 s2.ViewedTimeDetail FROM @SomeTableDetail s2 INNER JOIN @SomeTable s1 ON s2.SomeIdPk = s1.SomeIdPk WHERE s1.MainTablePkId = m.MainTablePkId) FROM @MainTable m The script given above is just a sample. I have a long list of columns in the SELECT and around 12+ columns in the subquery. In my FROM clause there are around 8 tables. To fetch 2000 records the full query takes 21 seconds, and if I remove the subqueries it takes just 4 seconds. I have tried to optimize the query using the Database Engine Tuning Advisor and added the advised indexes and statistics, but these changes made the query time even worse. Note: as I have mentioned, this is test data to explain my question; the real data has a lot of tables, joins and columns, but without the subqueries the performance is fine. Any help is appreciated, thanks.

    Read the article

  • Periodic GPU performance problem

    - by Peter Lillevold
    Hi folks! I have a WinForms application that uses XNA to animate 3D models in a control. The app has been doing just fine for months, but recently I've started to experience periodic pauses in the animation. Setting out to investigate what is going on, I have established these facts: It (currently) happens on my machine only; Removing everything from my render loop does not improve the problem. In 2. I didn't actually remove everything: I limited my loop to setting the viewport on my GraphicsDevice and then doing a GraphicsDevice.Present. Trying to dig further, I fired up PIX to capture some statistics. Screenshots of two PIX runs can be viewed here (Run6) and here (Run14). Run6 is using my original render loop and Run14 is using the bare-bones Present loop. PIX tells me that the GPU is periodically doing something, and I assume this is causing the pauses. What could be the cause of this? Or how do I go about finding out what the GPU is actually doing? Note: I'm using XNA 3.1 on a Windows 7 x64 dual-core machine with 8GB RAM. Note 2: I also posted this question on the XNA Creators forums here.

    Read the article

  • MySQL SELECT and GROUP BY values

    - by Foo
    I'd like to count and group rows by specific values. This seems fairly simple, but I can't seem to do it. I have a table set up similar to this:

      Table: Ratings
      id  pID  uID  rating
      1   1    2    7
      2   1    7    7
      3   1    5    4
      4   1    1    1

    id is the primary key, pID and uID are foreign keys. rating contains values between 1 and 10, and only between 1 and 10. I want to run some statistics and count the number of ratings with a certain value. In the example above, two have left a rating of 7. So I wrote the following query:

      SELECT COUNT(*) AS `count`, `rating`
      FROM `ratings`
      WHERE pID = '1'
      GROUP BY `rating`
      ORDER BY `rating`

    Which yields the nice result:

      count  rating
      1      1
      1      4
      2      7

    I'd like to get the MySQL query to include the values between 1 and 10 that have no ratings as well. For example:

      Desired result
      count  rating
      1      1
      0      2
      0      3
      1      4
      0      5
      0      6
      2      7
      0      8
      0      9
      0      10

    Unfortunately, I'm relatively new to SQL and I've been reading through everything I could get my hands on for the past hour, but I can't get it to work. I've been leaning along the lines of some type of JOIN. If anyone can point me in the right direction, it'd be appreciated. Thanks.
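
    If a pure-SQL answer proves awkward, one fallback is to fill the missing rating buckets in application code once the GROUP BY result comes back. A minimal Python sketch of that idea; the query_rows list below just stands in for whatever the cursor would return and is not from the original post:

      # Rows as they might come back from the GROUP BY query above:
      # (rating, count) pairs, with some ratings missing entirely.
      query_rows = [(1, 1), (4, 1), (7, 2)]

      # Seed every rating bucket 1..10 with zero, then overlay the query results.
      counts = {rating: 0 for rating in range(1, 11)}
      for rating, count in query_rows:
          counts[rating] = count

      for rating in sorted(counts):
          print(counts[rating], rating)   # mirrors the "count, rating" columns above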

    Read the article

  • MSSQL 2008 - Bit Param Evaluation alters Execution Plan

    - by Nathanial Woolls
    I have been working on migrating some of our data from Microsoft SQL Server 2000 to 2008. Among the usual hiccups and whatnot, I’ve run across something strange. Linked below is a SQL query that returns very quickly under 2000, but takes 20 minutes under 2008. I have read quite a bit on upgrading SQL server and went down the usual paths of checking indexes, statistics, etc. before coming to the conclusion that the following statement, found in the WHERE clause, causes the execution plan for the steps that follow this statement to change dramatically: And ( @bOnlyUnmatched = 0 -- offending line Or Not Exists( The SQL statements and execution plans are linked below. A coworker was able to rewrite a portion of the WHERE clause using a CASE statement, which seems to “trick” the optimizer into using a better execution plan. The version with the CASE statement is also contained in the linked archive. I’d like to see if someone has an explanation as to why this is happening and if there may be a more elegant solution than using a CASE statement. While we can work around this specific issue, I’d like to have a broader understanding of what is happening to ensure the rest of the migration is as painless as possible. Zip file with SQL statements and XML execution plans Thanks in advance!

    Read the article

  • Best way to build an application based on R?

    - by Prasad Chalasani
    I'm looking for suggestions on how to go about building an application that uses R for analytics, table generation, and plotting. What I have in mind is an application that: displays various data tables in different tabs, somewhat like in Excel, with the columns sortable by clicking; takes user input parameters in some dialog windows; displays plots dynamically (i.e. user-input-dependent) either in a tab or in a new pop-up window/frame. Note that I am not talking about a general-purpose front-end/GUI for exploring data with R (like, say, Rattle), but a specific application. Some questions I'd like to see addressed are: Is an entirely R-based approach even possible (on Windows)? The following passage from the Rattle article in The R Journal intrigues me: "It is interesting to note that the first implementation of Rattle actually used Python for implementing the callbacks and R for the statistics, using rpy. The release of RGtk2 allowed the interface elements of Rattle to be written directly in R so that Rattle is a fully R-based application." If it's better to use another language for the GUI part, which language is best suited for this? I'm looking for a language where it's relatively "painless" to build the GUI, and that also integrates very well with R. From the StackOverflow question "How should I do rapid GUI development for R and Octave methods (possibly with Python)?" I see that Python + PyQt4 + QtDesigner + RPy2 seems to be the best combo. Is that the consensus? Anyone have pointers to specific (open source) applications of the type I describe, as examples that I can learn from?

    Read the article

  • Better code for accessing fields in a MATLAB structure array?

    - by John
    I have a MATLAB structure array Models1 of size (1x180) that has fields a, b, c, ..., z. I want to understand how many distinct values there are in each of the fields, i.e. max(grp2idx([foo(:).a])) The above works if the field a is a double. {foo(:).a} needs to be used in the case where the field a is a string/char. Here's my current code for doing this. I hate having to use eval, and what is essentially a switch statement. Is there a better way? names = fieldnames(Models1); for ix = 1 : numel(names) className = eval(['class(Models1(1).',names{ix},')']); if strcmp('double', className) || strcmp('logical',className) eval([' values = [Models1(:).',names{ix},'];']); elseif strcmp('char', className) eval([' values = {Models1(:).',names{ix},'};']); else disp(['Unrecognized class: ', className]); end % this line requires the statistics toolbox. [g, gn, gl] = grp2idx(values); fprintf('%30s : %4d\n',names{ix},max(g)); end

    Read the article

  • Logic: Best way to sample & count bytes of a 100MB+ file

    - by Jami
    Let's say I have this 170MB file (roughly 180 million bytes). What I need to do is to create a table that lists: all 4096-byte combinations found [column 'bytes'], and the number of times each byte combination appeared in it [column 'occurrences']. Assume two things: I can save data very fast, but I can update my saved data only very slowly. How should I sample the file and save the needed information? Here are some suggestions that are (extremely) slow: Go through each 4096-byte combination in the file and save each one, but search the table first for existing combinations and update its values - this is unbelievably slow. Go through each 4096-byte combination in the file, save until 1 million rows of data are in a temporary table, go through that table and fix the entries (combine repeating byte combinations), then copy to the big table; repeat with another 1 million rows of data and so on - this is faster by a bit, but still unbelievably slow. This is kind of like taking the statistics of the file. NOTE: I know that sampling the file can generate tons of data (around 22GB from experience), and I know that any solution posted would take a bit of time to finish. I need the most efficient saving process.
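
    For the counting step itself, a hash map keyed by the byte string keeps the whole tally in memory and defers all saving to a single pass at the end, which sidesteps the slow per-row updates described above. A minimal Python sketch, assuming non-overlapping 4096-byte chunks and a made-up file name (if overlapping windows are what is meant, only the slicing changes):

      from collections import Counter

      CHUNK = 4096
      counts = Counter()

      with open("bigfile.bin", "rb") as f:          # hypothetical file name
          while True:
              chunk = f.read(CHUNK)
              if len(chunk) < CHUNK:                # ignore a short trailing chunk
                  break
              counts[chunk] += 1                    # hashing the bytes replaces the table search

      # Persist once at the end instead of updating per combination.
      for combo, occurrences in counts.most_common(10):
          print(combo[:8].hex(), "...", occurrences)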

    Read the article

  • Need help optimizing this Django aggregate query

    - by Chris Lawlor
    I have the following model: class Plugin(models.Model): name = models.CharField(max_length=50) # more fields which represents a plugin that can be downloaded from my site. To track downloads, I have: class Download(models.Model): plugin = models.ForeignKey(Plugin) timestamp = models.DateTimeField(auto_now=True) So to build a view showing plugins sorted by downloads, I have the following query: # pbd is plugins by download - commented here to prevent scrolling pbd = Plugin.objects.annotate(dl_total=Count('download')).order_by('-dl_total') Which works, but is very slow. With only 1,000 plugins, the avg. response is 3.6 - 3.9 seconds (devserver with local PostgreSQL db), whereas a similar view with a much simpler query (sorting by plugin release date) takes 160 ms or so. I'm looking for suggestions on how to optimize this query. I'd really prefer that the query return Plugin objects (as opposed to using values) since I'm sharing the same template for the other views (Plugins by rating, Plugins by release date, etc.), so the template is expecting Plugin objects - plus I'm not sure how I would get things like the absolute_url without a reference to the plugin object. Or, is my whole approach doomed to failure? Is there a better way to track downloads? I ultimately want to provide users some nice download statistics for the plugins they've uploaded - like downloads per day/week/month. Will I have to calculate and cache Downloads at some point? EDIT: In my test dataset, there are somewhere between 10-20 Download instances per Plugin - in production I expect this number would be much higher for many of the plugins.
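
    One direction raised at the end of the question - calculating and caching downloads - is a denormalised counter on Plugin that is bumped atomically whenever a Download row is written, so the list view can simply order by that column. A rough sketch under that assumption (the download_count field and record_download helper are illustrative, not from the project; on_delete is required in newer Django):

      from django.db import models
      from django.db.models import F

      class Plugin(models.Model):
          name = models.CharField(max_length=50)
          # Cached total, so list views can simply order_by('-download_count').
          download_count = models.PositiveIntegerField(default=0)

      class Download(models.Model):
          plugin = models.ForeignKey(Plugin, on_delete=models.CASCADE)
          timestamp = models.DateTimeField(auto_now=True)

      def record_download(plugin):
          Download.objects.create(plugin=plugin)
          # F() keeps the increment inside the database, avoiding read-modify-write races.
          Plugin.objects.filter(pk=plugin.pk).update(download_count=F('download_count') + 1)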

    Read the article

  • Determining Best Table Structure for MySQL Performance

    - by Joe Majewski
    I'm working on a browser-based RPG for one of my websites, and right now I'm trying to determine the best way to organize my SQL tables for performance and maintenance. Here's my question: does the number of columns in an SQL table affect the speed with which it can be queried? I am not a newbie when it comes to PHP or MySQL. I used to develop things with the common goal of getting them to work, but I've recently advanced to the stage where a functional program is not good enough unless it's fast and reliable. Anyway, right now I have a members table that has around 15 columns. It contains information such as the player's username, password, email, logins, page views, etcetera. It doesn't contain any information on the player's progress in the game, however. If I added columns for things such as army size, gold, turns, and whatnot, then it could easily rise to around 40 or 50 total columns. Oh, and my database structure IS normalized. Will a table with 50 columns that gets constantly queried be a bad idea? Should I split it into two tables: one for the user's general information and one for the user's game statistics? I know I could check the query time myself, but I haven't actually created the tables yet and I think I'd be better off with some professional advice on this important decision for my game. Thank you for your time! :)

    Read the article

  • Looking for an appropriate design pattern

    - by user1066015
    I have a game that tracks user stats after every match, such as how far they travelled, how many times they attacked, how far they fell, etc., and my current implementation looks somewhat as follows (simplified version): Class Player{ int id; public Player(){ int id = Math.random()*100000; PlayerData.players.put(id,new PlayerData()); } public void jump(){ //Logic to make the user jump //... //call the playerManager PlayerManager.jump(this); } public void attack(Player target){ //logic to attack the player //... //call the player manager PlayerManager.attack(this,target); } } Class PlayerData{ public static HashMap<int, PlayerData> players = new HashMap<int,PlayerData>(); int id; int timesJumped; int timesAttacked; } public void incrementJumped(){ timesJumped++; } public void incrementAttacked(){ timesAttacked++; } } Class PlayerManager{ public static void jump(Player player){ players.get(player.getId()).incrementJumped(); } public void incrementAttacked(Player player, Player target){ players.get(player.getId()).incrementAttacked(); } } So I have a PlayerData class which holds all of the statistics, and brings it out of the player class because it isn't part of the player logic. Then I have PlayerManager, which would be on the server, and that controls the interactions between players (a lot of the logic that does that is excluded so I could keep this simple). I put the calls to the PlayerData class in the Manager class because sometimes you have to do certain checks between players; for instance, if the attack actually hits, then you increment "attackHits". The main problem (in my opinion, correct me if I'm wrong) is that this is not very extensible. I will have to touch the PlayerData class if I want to keep track of a new stat, by adding methods and fields, and then I have to potentially add more methods to my PlayerManager, so it isn't very modular. If there is an improvement to this that you would recommend, I would be very appreciative. Thanks.
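
    One generic direction for the extensibility worry is to track counters by name instead of giving every statistic its own field and method, so adding a new stat is just adding a new key. The game code above is Java-like; this tiny Python sketch is only meant to illustrate the shape of that idea, not a drop-in design:

      from collections import defaultdict

      class StatsTracker:
          """Per-player named counters; new stats need no new fields or methods."""

          def __init__(self):
              self._stats = defaultdict(lambda: defaultdict(int))  # player_id -> stat -> count

          def increment(self, player_id, stat, amount=1):
              self._stats[player_id][stat] += amount

          def get(self, player_id, stat):
              return self._stats[player_id][stat]

      tracker = StatsTracker()
      tracker.increment(42, "jumps")
      tracker.increment(42, "attacks")
      tracker.increment(42, "attack_hits")
      print(tracker.get(42, "jumps"))   # -> 1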

    Read the article

  • Does anyone know of any good tutorials for using the APIs from Amazon Web Services, namely CloudWatch?

    - by undefined
    Hi, I have been wrestling with Amazon's CloudWatch API with limited success. Does anyone know of any good resources (other than Amazon's API docs) for using the APIs? I have tried to run them using the PHP library for CloudWatch but get nothing but error codes. I am configuring the GetMetricStatisticsSample.php file as follows: $request = array(); $endTime = date("Y-m-d G:i:s"); $yesterday = mktime (date("H"), date("i"), date("s"), date("m"), date("d")-1, date("Y")); $startTime = date("Y-m-d 00:00:00", $yesterday); $request["Statistics.member.1"] = "Average"; $request["EndTime"] = $endTime; $request["StartTime"] = $startTime; $request["MeasureName"] = "CPUUtilization"; $request["Unit"] = "Percent"; invokeGetMetricStatistics($service, $request); But this returns "Caught Exception: Internal Error Response Status Code: 400 Error Code: Error Type: Request ID: XML:" I have also tried from the command line as follows: set JAVA_HOME=C:\Program Files\Java\jre1.6.0_05 set AWS_CLOUDWATCH_HOME=C:\AmazonWebServices\API_tools\CloudWatch-1.0.0.24 set PATH=%AWS_CLOUDWATCH_HOME%\bin mon-list-metrics but get "'C:\Program' is not recognized as an internal or external command"... any suggestions? Cheers
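
    For comparison only (it does not explain the 400 response from the old PHP library), this is roughly what the same GetMetricStatistics call looks like with the current Python SDK, boto3; the region, namespace and instance dimension are assumptions:

      import datetime
      import boto3

      cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")  # region is an assumption

      end = datetime.datetime.utcnow()
      start = end - datetime.timedelta(days=1)

      resp = cloudwatch.get_metric_statistics(
          Namespace="AWS/EC2",
          MetricName="CPUUtilization",
          Dimensions=[{"Name": "InstanceId", "Value": "i-0123456789abcdef0"}],  # hypothetical instance
          StartTime=start,
          EndTime=end,
          Period=300,
          Statistics=["Average"],
          Unit="Percent",
      )
      for point in sorted(resp["Datapoints"], key=lambda p: p["Timestamp"]):
          print(point["Timestamp"], point["Average"])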

    Read the article

  • How do I call Matlab in a script on Windows?

    - by Benjamin Oakes
    I'm working on a project that uses several languages: SQL for querying a database Perl/Ruby for quick-and-dirty processing of the data from the database and some other bookkeeping Matlab for matrix-oriented computations Various statistics languages (SAS/R/SPSS) for processing the Matlab output Each language fits its niche well and we already have a fair amount of code in each. Right now, there's a lot of manual work to run all these steps that would be much better scripted. I've already done this on Linux, and it works relatively well. On Linux: matlab -nosplash -nodesktop -r "command" or echo "command" | matlab -nosplash -nodesktop ...opens Matlab in a "command line" mode. (That is, no windows are created -- it just reads from STDIN, executes, and outputs to STDOUT/STDERR.) My problem is that on Windows (XP and 7), this same code opens up a window and doesn't read from / write to the command line. It just stares me blankly in the face, totally ignoring STDIN and STDOUT. How can I script running Matlab commands on Windows? I basically want something that will do: ruby database_query.rb perl legacy_code.pl ruby other_stuff.rb matlab processing_step_1.m matlab processing_step_2.m # etc, etc. I've found out that Matlab has an -automation flag on Windows to start an "automation server". That sounds like overkill for my purposes, and I'd like something that works on both platforms. What options do I have for automating Matlab in this workflow?
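
    If the surrounding glue ends up in Python anyway, one workable pattern on Windows is to launch MATLAB per step with -r and -wait so the call blocks, writing output to a log file rather than relying on STDOUT. A sketch assuming matlab.exe is on the PATH and that the -wait/-logfile startup switches are available in the installed version:

      import subprocess

      def run_matlab_step(script_name):
          """Run one MATLAB script non-interactively and wait for it to finish."""
          command = f"try, {script_name}; catch err, disp(getReport(err)); end; exit"
          subprocess.run(
              [
                  "matlab",
                  "-nosplash",
                  "-nodesktop",
                  "-wait",                           # block until MATLAB exits (Windows)
                  "-logfile", f"{script_name}.log",  # capture output, since STDOUT is ignored
                  "-r", command,
              ],
              check=True,
          )

      run_matlab_step("processing_step_1")
      run_matlab_step("processing_step_2")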

    Read the article

  • How to store and collect data for mining such information as most viewed for the last 24 hours, last 7 days

    - by Kirzilla
    Hello, let's imagine that we have a high-traffic project (a tube site) which should provide sorting using these options (NOT IN REAL TIME). The number of videos is about 200K and all information about the videos is stored in MySQL. The number of daily video views is about 1.5KK. As instruments we have a hard disk drive (text files), MySQL, and Redis. Views: top viewed, top viewed last 24 hours, top viewed last 7 days, top viewed last 30 days, top rated last 365 days. How should I store such information? The first idea is to log all visits to text files (a single file per hour, for example visits_20080101_00.log). At the beginning of each hour, calculate the views per video for the previous hour and insert this information into MySQL. Then recalculate the totals (for the last 24 hours) and update the statistics in the tables. At the beginning of every day we have to do the same, but recalculate for the last 7 days, last 30 days, and last 365 days. This method seems very poor to me because we have to store information about the last 365 days for each video to make correct calculations. Is there any other good method? Probably we have to choose other instruments for this? Thank you.
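
    Since Redis is already on the list of instruments, one common shape for this is a sorted set per day plus ZUNIONSTORE to roll the last N days up into each window, so only the daily sets need to be kept around. A rough Python sketch with redis-py; the key names are made up:

      import datetime
      import redis

      r = redis.Redis()

      def record_view(video_id):
          day_key = "views:%s" % datetime.date.today().strftime("%Y%m%d")
          r.zincrby(day_key, 1, video_id)       # per-day counter
          r.zincrby("views:all", 1, video_id)   # all-time counter

      def rebuild_window(days, dest):
          """Merge the last `days` daily sets into one ranking, e.g. dest='views:last7'."""
          keys = [
              "views:%s" % (datetime.date.today() - datetime.timedelta(d)).strftime("%Y%m%d")
              for d in range(days)
          ]
          r.zunionstore(dest, keys)             # scores are summed by default

      def top(dest, n=10):
          return r.zrevrange(dest, 0, n - 1, withscores=True)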

    Read the article

  • Shouldn't we ignore IE6 and IE7 users?

    - by Sebi
    Lately I created two webpages with a simple CMS for a small club and a private person (http://foto.roser.li and http://www.ovlu.li/cms). I don't really understand much of HTML/PHP/CSS and all this web stuff, but it was enough to adapt the stylesheets and add some javascript functions and so on to make the pages look more or less nice in Firefox and IE8. Nevertheless, if you open the pages with IE7 or IE6, particularly the second home page really looks terrible. So what should I do now? I don't have the know-how to adapt the stylesheets so that they look good in such browsers. However, if I check the statistics of the pages, I notice that almost half of the visitors use these kinds of browsers. I could put a lot of effort into adapting the stylesheets to look good in all browsers, but while thinking about this I'm asking myself if this is really the best way. If all developers put effort into providing solutions for really old browsers (like e.g. IE6), then people who use these browsers don't have any incentive to update them. Naturally, I as a provider of information should present the content in a readable form to my users, but where should I draw the line? I also saw that some users are visiting the pages with even IE5 or IE4... but I really can't think of a way to adapt the content into a suitable form for these very old browsers. I would appreciate any hints on how to handle this balancing act between putting a lot of effort into developing a version for very old browsers and the necessity of providing content to all users who want to visit my homepages. I'm also interested in your thoughts about the idea that putting effort into adapting the content for old browsers prevents users from updating their browsers because they don't see any need.

    Read the article

  • C++ Bubble Sorting for Singly Linked List [closed]

    - by user1119900
    I have implemented a simple word frequency program in C++. Everything but the sorting is OK; the sorting in the following code does not work. Any urgent help would be great. #include <stdio.h> #include <string.h> #include <stdlib.h> #include <ctype.h> #include <iostream> #include <fstream> #include <cstdio> using namespace std; #include "ProcessLines.h" struct WordCounter { char *word; int word_count; struct WordCounter *pNext; // pointer to the next word counter in the list }; /* pointer to first word counter in the list */ struct WordCounter *pStart = NULL; /* pointer to a word counter */ struct WordCounter *pCounter = NULL; /* Print statistics and words */ void PrintWords() { ... pCounter = pStart; bubbleSort(pCounter); ... } //end-PrintWords void bubbleSort(struct WordCounter *ptr) { WordCounter *temp = ptr; WordCounter *curr; for (bool didSwap = true; didSwap;) { didSwap = false; for (curr = ptr; curr->pNext != NULL; curr = curr->pNext) { if (curr->word > curr->pNext->word) { temp->word = curr->word; curr->word = curr->pNext->word; curr->pNext->word = temp->word; didSwap = true; } } } }
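
    For reference, the value swap the inner loop seems to be aiming for needs a genuine temporary (and C strings want strcmp rather than >). The same payload-swapping bubble sort, sketched in Python purely to keep the algorithm separate from the C++ specifics:

      class Node:
          def __init__(self, word, next=None):
              self.word = word
              self.next = next

      def bubble_sort(head):
          """Sort a singly linked list in place by swapping node payloads."""
          swapped = True
          while swapped:
              swapped = False
              curr = head
              while curr is not None and curr.next is not None:
                  if curr.word > curr.next.word:   # compares string values, not pointers
                      curr.word, curr.next.word = curr.next.word, curr.word
                      swapped = True
                  curr = curr.next

      head = Node("the", Node("quick", Node("brown", Node("fox"))))
      bubble_sort(head)
      node = head
      while node:
          print(node.word)
          node = node.next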

    Read the article

  • What is the best Networking implementation for my application?

    - by CaptainPhil
    I am in the planning phases of a project for myself: it is to be a single- and multi-player card game. I would like to track statistics for each person, such as world rankings, etc. My problem is I do not know the best approach for the client-server architecture and programming. My original goal was to program everything in C#, as I want to get proficient in that language. My original idea was to have a back-end database and a back-end server run on some sort of hosting on the internet; however, that seems costly for such a small project that may or may not make any money. I have tried looking into cloud services, however I am unfamiliar with the technology, and I am not sure I can make them suit my needs, especially since most, like Google's cloud, want you to use their coding architecture from what I understand. Finally, my last problem is that I would like an architecture that can be used from different languages so that I can port it from PC to iPhone, Xbox, etc. So does anyone have any advice on the best architecture and language to do this in? Am I worrying about architecture and back-end costs too much, and should I just concentrate on getting the game running any which way?

    Read the article

  • PHP download script for "processes running limited" hosting (e.g. HostGator)

    - by Joe
    I am currently with HostGator on a shared hosting plan. I have a new website I'm trying to set up with a download.php script. The issue I am having is that, while someone is "downloading" a file through the download.php script, it counts as a "process", and my hosting plan limits the processes that can be running at the same time to 25 at present. My question is, what options do I have? a) Move to new web hosting that doesn't limit the processes running. b) Change the way files are downloaded. I would like to choose option b); however, it occurs to me that I need to have the file accessed through PHP in order to restrict the number of downloads and to track download statistics, as well as to protect against hotlinking. If there were a way to have the PHP script send them the file so that the process doesn't need to be running the whole time the file is being downloaded, I would eliminate the problem; however, to my knowledge that isn't possible. Should I make the move to a new hosting company? I really enjoy HostGator as they have provided the best hosting experience for me thus far, except for this one issue of course, so I don't want to go on the hunt for another decent shared hosting company that doesn't limit processes running, only to find out there is another restriction or "catch" to the shared hosting deal.

    Read the article

  • Optimal (Time paradigm) solution to check variable within boundary

    - by kumar_m_kiran
    Hi all, sorry if the question is very naive. I will have to check the condition below in my code: 0 < x < y, i.e. code similar to if(x > 0 && x < y) The basic problem at the system level is that, currently, for every call (telecom domain terminology), my existing code is hit (many times). So performance is very, very critical. Now, I need to add a boundary check (at many locations - but with a different boundary comparison at each location). At a normal level of coding, the above comparison would look trivial, without any issue. However, when added all over my statistics module (which is dipped into many times), performance will go down. So I would like to know the best possible way to handle the above scenario (a kind of optimal technique for limits checking). For example, does a bitwise comparison work better than a normal comparison, or can both comparisons be evaluated in a shorter time span? Other info: x is an unsigned integer (which must be checked to be greater than 0 and less than y). y is an unsigned integer. y is non-const and varies for every comparison. Here time is the constraint compared to space. Language: C++. Now, if I later need to change the type of y to a float/double, would there be another way to optimize the check (i.e. will the suggested optimal technique for integers become non-optimal when y is changed to a float/double)? Thanks in advance for any input. PS: The OS used is SUSE 10 64-bit x86_64, AIX 5.3 64-bit, HP-UX 11.1 A 64.

    Read the article

  • Using ddply() to Get Frequency of Certain IDs, by Appearance in Multiple Rows (in R)

    - by EconomiCurtis
    Goal: If the following description is hard to follow, please see the example "before" and "after" for a straightforward example. I have bartering data, with unique trade IDs, and two sides of the trade. Side1 and Side2 are baskets, lists of item IDs that represent both sides of the barter transaction. I'd like to count the frequency with which each ITEM appears in TRADES. E.g., if item "001" appeared in 3 trades, I'd have a count of 3 (ignoring how many times the item appeared in each trade). Further, I'd like to do this with the plyr ddply function. (If you're interested in my motivation: I am working over many hundreds of thousands of transactions and am already using ddply to calculate several other summary statistics. I'd like to add this to the ddply I'm already using, rather than calculate it afterwards and merge it into the ddply output... sorry if that was difficult to follow.) In terms of pseudo code, I'm working off of: merge each row of Side1 and Side2; by row, get unique() appearances of each item id; apply the table() function; transpose and relabel the output from table. Example of the structure of my data, and the output I desire. Data example (before): df <- data.frame(TradeID = c("01","02","03","04")) df$Side1 = list(c("001","001","002"), c("002","002","003"), c("001","004"), c("001","002","003","004")) df$Side2 = list(c("001"),c("007"),c("009"),c()) Desired output (after): df.ItemRelFreq_byTradeID <- data.frame(ItemID = c("001","002","003","004","007","009"), RelFreq_byTrade = c(3,3,2,2,1,1)) One method to do this without ddply: I've worked out one way to do this below. My problem is that I can't quite seem to get ddply to do this for me. temp <- table(unlist(sapply(mapply(c,df$Side1,df$Side2), unique))) df.ItemRelFreq_byTradeID <- data.frame(ItemID = names(temp), RelFreq_byTrade = temp[]) Thanks for any help you can offer! Curtis
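
    The pseudo code above (merge the two sides, take unique items per trade, then tabulate) is language-independent; purely to make the steps concrete, here is the same computation in a few lines of Python, reproducing the desired output from the question's sample data:

      from collections import Counter

      trades = {
          "01": (["001", "001", "002"], ["001"]),
          "02": (["002", "002", "003"], ["007"]),
          "03": (["001", "004"], ["009"]),
          "04": (["001", "002", "003", "004"], []),
      }

      freq = Counter()
      for side1, side2 in trades.values():
          # Count each item at most once per trade, no matter how often it appears.
          freq.update(set(side1) | set(side2))

      for item_id, rel_freq in sorted(freq.items()):
          print(item_id, rel_freq)   # 001 3, 002 3, 003 2, 004 2, 007 1, 009 1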

    Read the article

  • Is there a unique computer identifier that can be used reliably even in a virtual machine?

    - by SaUce
    I'm writing a small client program to be run on a terminal server. I'm looking for a way to make sure that it will only run on this server, and in case it is removed from the server it will not function. I understand that there is no perfect way of securing it to make it impossible to run on other platforms, but I want to make it hard enough to deter 95% of people from trying anything. The other 5% who can hack it are not my concern. I was looking at different unique identifiers like the Processor ID, Windows Product ID, Computer GUID and other UIDs. Because the terminal server is a virtual machine, I cannot locate anything that is completely unique to this machine. Any ideas on what I should look into to make this 95% secure? I do not have the time or the need to make it as secure as possible, because that would defeat the purpose of the application itself. I do not want to use the MAC address. Even though it is unique to each machine, it can be easily spoofed. As for the Microsoft Product ID, because our system team clones VM servers and we use a corporate volume key, I have already found two servers that I have access to that have the same Product ID. I have no idea how many others out there have the same Product ID. By 95% and 5% I simply wanted to illustrate how far I want to go with securing this software. I do not have precise statistics on how many people can do what. I believe I might need to change my approach, and instead of trying to identify the machine, I will be better off identifying the user and creating group-based permissions for access to this software.

    Read the article
