Search Results

Search found 24037 results on 962 pages for 'every'.

Page 70 of 962

  • What's the best way to cache a growing database table for html generation?

    - by McLeopold
    I've got a database table which will grow by about 5,000 rows an hour. For any given key I'd query by, the result set grows by about one row every hour. I would like a web page to show the latest rows for a key, 50 at a time (this is configurable). I would like to use memcache to keep database activity low for reads. If I run a query and create a cache entry for each page of 50 results, that works until a new row is added. At that point the page of latest results gains a new row and the oldest one drops off, and this cascades down the list of cached pages, forcing me to update every cached page. It seems like a poor design. I could build the cache pages backwards; then for each page requested I would fetch the latest two pages and truncate to the proper length of 50. I'm not sure whether this is good or bad. Ideally, the mechanism I use to insert a new row would also know how to invalidate the proper cache entries. Has someone already solved this problem in a widely accepted way? What's the best method of doing this? EDIT: If my understanding of the MySQL query cache is correct, it has table-level granularity for invalidation. Given that there are about 5,000 inserts before a query on any one key actually needs to be invalidated, the database query cache would effectively never be used. MS SQL caches execution plans and frequently accessed data pages, so it may do better in this scenario. My query is not a simple TOP N against a single table: one version joins several tables and another has sub-selects. Also, since I want to cache the generated HTML table, I'm wondering whether a cache at the web server level would be appropriate. Is there really no benefit to any type of caching? Is the best advice really to just let every page request go through all the layers and hit the database?
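
    One pattern the question is circling, sketched here in Java against a hypothetical Cache interface (no client library is named in the question, so the interface is an invented stand-in for memcached or similar): keep a per-key version counter in the cache, bump it on every insert, and build page keys from the current version. Stale pages are then never read again and simply age out, so nothing cascades.

        // Minimal sketch; Cache is an invented stand-in for a memcached-style client.
        interface Cache {
            String get(String key);
            void set(String key, String value);
        }

        class PageCache {
            private final Cache cache;
            PageCache(Cache cache) { this.cache = cache; }

            // Called from the row-insert path: one cheap write invalidates all pages.
            void invalidate(String dataKey) {
                String v = cache.get("ver:" + dataKey);
                long next = (v == null ? 0L : Long.parseLong(v)) + 1;
                cache.set("ver:" + dataKey, Long.toString(next));
            }

            String htmlForPage(String dataKey, int page) {
                String v = cache.get("ver:" + dataKey);
                String key = "page:" + dataKey + ":" + v + ":" + page;
                String html = cache.get(key);
                if (html == null) {
                    html = renderFromDatabase(dataKey, page); // hits the DB only on a miss
                    cache.set(key, html);
                }
                return html;
            }

            private String renderFromDatabase(String dataKey, int page) {
                return ""; // placeholder: run the real query and render the HTML fragment
            }
        }

    The version read adds one cache round trip per request, which is usually far cheaper than the cascading page rewrites described above.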

  • Why do we need private variables?

    - by rak
    Why do we need private variables in classes? Every programming book I've read says what a private variable is and how to define one, but stops there. The wording of these explanations always seemed to me like we really have a crisis of trust in our profession. The explanations always sounded as if other programmers were out to mess up our code. Yet there are many programming languages that do not have private variables. What do private variables help prevent? How do you decide whether a particular property should be private or not? If every field should be private by default, why are there public data members in a class at all? Under what circumstances should a variable be made public?
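
    The standard answer, as one small Java illustration (the BankAccount class is invented for the example): a private field lets the class enforce an invariant at a single choke point, so no caller can put the object into a nonsense state, maliciously or by accident.

        public class BankAccount {
            // Private: the only way to change it is through methods that
            // preserve the invariant "balance never goes negative".
            private long balanceCents;

            public void withdraw(long cents) {
                if (cents < 0 || cents > balanceCents) {
                    throw new IllegalArgumentException("invalid withdrawal: " + cents);
                }
                balanceCents -= cents;
            }

            public void deposit(long cents) {
                if (cents < 0) throw new IllegalArgumentException("negative deposit");
                balanceCents += cents;
            }

            public long balance() { return balanceCents; }
        }

    If balanceCents were public, every caller could bypass the checks, and the "never negative" guarantee would rest on convention rather than on the compiler.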

  • How do I duplicate a Box2d simulation, mid-simulation?

    - by Whyte
    I want to serialize the state mid-game, send it over the network to an identical computer (same CPU, same OS, same binary), load it there, and have the two games run in tandem doing the exact same simulation, without one of them drifting off and going haywire. In short: I want pop-in, pop-out networking support in my highly physics-intensive game, where sending object coordinates every few seconds is impossible because there are thousands of objects and many clients. I tried this with Box2D, and saving an object's location, velocity, etc. wasn't enough: there's internal state that's not accessible through any public method. My current workaround is to force EVERY client to save its entire world state and reload it from scratch whenever a new player connects, but this is obviously bad practice, because it hangs the game for everyone whenever someone new connects. However, it works, with zero desynchronization. So, does anyone know of any other techniques that can help me? Or should I just kiss my project goodbye?
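
    For reference, the kind of snapshot described above looks roughly like this (a sketch using JBox2D-style names from memory; treat the exact API as an assumption). It captures everything the public API exposes and still drifts, because solver-internal data such as accumulated contact impulses and sleep timers is unreachable this way:

        import java.util.ArrayList;
        import java.util.List;
        import org.jbox2d.common.Vec2;
        import org.jbox2d.dynamics.Body;
        import org.jbox2d.dynamics.World;

        class BodySnapshot {
            Vec2 position;
            float angle;
            Vec2 linearVelocity;
            float angularVelocity;
        }

        class WorldSnapshot {
            List<BodySnapshot> bodies = new ArrayList<BodySnapshot>();

            static WorldSnapshot capture(World world) {
                WorldSnapshot snap = new WorldSnapshot();
                for (Body b = world.getBodyList(); b != null; b = b.getNext()) {
                    BodySnapshot s = new BodySnapshot();
                    s.position = b.getPosition().clone();
                    s.angle = b.getAngle();
                    s.linearVelocity = b.getLinearVelocity().clone();
                    s.angularVelocity = b.getAngularVelocity();
                    snap.bodies.add(s);
                    // NOT captured: contact manifolds, accumulated impulses,
                    // island/sleep timers -- exactly the hidden state at issue.
                }
                return snap;
            }
        }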

  • How can I design a multi-threaded application for a large user base

    - by rokonoid
    Here's my scenario: I need to develop a scalable application. My user base may be over 100K, and every user has 10 or more small tasks. Each task checks a third-party application every minute through web services; any new data retrieved is written to the database and then forwarded to its original destination. So here's the processing of a small task (presumably with a one-minute pause between iterations):

        while (true) {
            boolean isNewInformationAvailable = checkWhetherInformationIsAvailableOrNot();
            if (isNewInformationAvailable) {
                fetchTheData();
                writeToDatabase();
                findTheDestination();
                deliverTheData();
            }
        }

    As the application is large, how should I approach designing this? I'm going to use Java to write it. Should I use concurrency, and how would you manage the concurrency?
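
    A minimal sketch of one common Java approach (all names invented): share one ScheduledExecutorService across all tasks, so 100K+ users with 10 tasks each map onto a bounded thread pool instead of one thread per task.

        import java.util.concurrent.Executors;
        import java.util.concurrent.ScheduledExecutorService;
        import java.util.concurrent.TimeUnit;

        public class PollingScheduler {
            // Size the pool to the host, not to the task count: each poll is
            // short and I/O-bound, so a modest number of threads can serve a
            // very large number of logical tasks.
            private final ScheduledExecutorService pool =
                    Executors.newScheduledThreadPool(
                            Runtime.getRuntime().availableProcessors() * 8);

            public void schedule(Runnable pollTask) {
                // Fixed delay, not fixed rate: if a poll overruns its minute,
                // the next run waits instead of piling up behind it.
                pool.scheduleWithFixedDelay(pollTask, 0, 60, TimeUnit.SECONDS);
            }
        }

    At this scale the pool is only half the story: 100K users times 10 tasks at one call per minute is roughly 17,000 outbound web service calls per second, so the polls themselves need to be batched or made non-blocking.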

  • Failed Vista/Ubuntu 12.04 attempt at dual boot. Can only boot Ubuntu now

    - by mathematician1975
    I recently tried installing Ubuntu 12.04 alongside my Windows Vista install. I can boot into Ubuntu fine every time, but whenever I choose to boot into Windows, the process fails completely and my laptop restarts (or shuts down). Is there anything I can do to recover from this? I am very happy with Ubuntu and do not want to overwrite it by restoring my old Vista installation, but I do need to be able to use both OSes. Is the situation recoverable, or should I start from scratch?

  • Multiple network installs with cached downloads

    - by popey
    I tend to do lots of Ubuntu installs, and I'd like them to be up to date from the moment I do them, rather than having to install a bazillion updates as soon as the install finishes. I'd also like to cache those packages on a machine on my LAN, so I don't re-download all of them every time. How easy is this to achieve? Is there a guide somewhere to making this happen? Bonus points if I don't have to burn any CDs or use USB sticks! Edit: I have noticed a lot of guides for PXE booting and would be tempted to set this up, but every guide I find seems to be outdated and broken. My home server runs 10.04 LTS.
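
    For the caching half, one widely used arrangement (a sketch; the server address 192.168.1.10 is a placeholder): run apt-cacher-ng on the home server and point every client's apt at it, so each package is fetched from the internet only once.

        # On the 10.04 home server:
        sudo apt-get install apt-cacher-ng

        # On each client, create /etc/apt/apt.conf.d/01proxy containing:
        Acquire::http::Proxy "http://192.168.1.10:3142";

    3142 is apt-cacher-ng's default port; every subsequent install or update on any client then pulls packages from the LAN cache.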

  • Notes from AT&T ARO Session at Oredev 2013

    - by Geertjan
    The mobile internet is 12 times bigger than the internet was 12 years ago: explosive growth, faster networks, and more powerful devices. 85% of users prefer mobile apps, while 56% have problems with them. Almost 60% want mobile app startup to take less than 2 seconds. An app with a poor mobile experience results in users not buying things, going to a competitor, and not liking your company. Battery life matters. A bad mobile app is worse than no app at all, because it turns people away from the brand. Apps didn't exist 10 years ago; they were a 72-billion-dollar-a-year business in 2013, projected to reach 151 billion in 2017.

    Testing performance. Mobile is different from a regular app; you need to fix issues before customers discover them. ARO is a free and open source AT&T tool for identifying mobile app performance problems. Mobile data is different because of the radio resource control (RRC) state machine: the radio goes from idle to continuous reception, which drains the battery while packets are coming through; after the packets stop, the radio stays on for a "tail time", and only after about 10 seconds with no data does it switch off. YouTube, for example, holds the radio for 10 to 15 seconds after every connection, which can be a huge battery drain; it is app traffic that drives the RRC state. The goal: balance fast network connectivity against battery usage. ARO tests any platform and has won awards.

    How do I test my app? Capture the network with pcap or tcpdump, or use the native collector for Android and iOS (a rooted Android device is needed). Test the app on the phone, including background data and idle time, to catch ads and analytics. Traces are graded against 25 best practices. You can see all the processes, all network traffic mapped to processes, and statistics about the trace; you can look at just your app and exclude Facebook, etc. Many tests were shown, e.g., file download and HTML (wrapped applications, e.g., Cordova).

    Best practices:

    Make stuff smaller. GZIP: smaller files download faster, and compression pays off for files larger than about 800 bytes. Minification: remove tabs and comments; the browser doesn't need them, so give the processor only what it needs and separate the wheat from the chaff. Images: make them smaller; one app used a 1024x1024 image for a checkmark. Squish it and it's 33% smaller; ARO records the screen, and it could probably be 9 times smaller.

    Download less stuff. 17% of HTTP content on mobile is duplicate data because caching is not used properly. Reloading from cache is 75% to 99% faster than downloading again, a possible saving of 75%, which also means the app starts up faster, and everyone wants the app starting up in 2 seconds.

    Make fewer HTTP requests. Inline and combine CSS and JS when possible to reduce the number of requests, and sprite images that are used often.

    Fewer connections. Faster, and less battery. For example, instead of downloading an image every 60 seconds, downloading an ad every 60 seconds, and sending analytics every 60 seconds, use a transaction manager and download everything at once; that can reduce the time connected to the network by 40%. Also, 80% of applications do NOT close connections when they are finished: download a picture, and 10 seconds later the radio turns off. If you do not explicitly close, eventually the server closes the connection, which means 38% more tail time; closing the connection right away uses 40% less energy. Background data traffic is 27% of data and 55% of network time, and this kills the battery.

    Look at redirection. Each redirect adds 200 to 600 ms to a connection. The waterfall diagram visualizes all the requests and packets, e.g., xyz.com redirects to www.xyz.com, which redirects to xyz.mobi, which redirects back to www.xyz.com. Minimize redirects, though redirects as such are fine.

    HTML best practices. Order matters, and so does hidden code: JS downloads block rendering, so always put CSS before JS or load JS asynchronously; CSS 'display:none' hides images from the user, but the browser downloads them anyway, which adds latency. Some apps turn on GPS for no reason. Tell the network when you are done, but remember that some other app may be using the radio at the same time.

    It's all about knowing the best practices; everyone wins with ARO: carriers (e.g., AT&T), developers, and customers get faster apps, better battery usage, less network traffic, better app reviews, and happier customers. The MBTA app was referenced as an example. ARO is free and open source and can test all platforms.
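
    To make the "fewer, shorter connections" advice concrete, here is a hedged Java sketch (the URL handling is generic, not from the talk): request gzip so the payload is smaller, and disconnect as soon as the body is read so the radio can drop out of its high-power state sooner.

        import java.io.InputStream;
        import java.net.HttpURLConnection;
        import java.net.URL;
        import java.util.zip.GZIPInputStream;

        public class RadioFriendlyFetch {
            static byte[] fetch(String url) throws Exception {
                HttpURLConnection conn =
                        (HttpURLConnection) new URL(url).openConnection();
                try {
                    conn.setRequestProperty("Accept-Encoding", "gzip"); // smaller payload
                    InputStream in = conn.getInputStream();
                    if ("gzip".equalsIgnoreCase(conn.getContentEncoding())) {
                        in = new GZIPInputStream(in);
                    }
                    return in.readAllBytes(); // Java 9+
                } finally {
                    // Do not leave the connection lingering: tail time keeps the
                    // radio in a high-power state long after the data arrived.
                    conn.disconnect();
                }
            }
        }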

  • (Phaser) Preload Future States in Create?

    - by Brian
    I'm a first-time user of Phaser, trying to make a simple point-and-click game. I'm trying to keep things very modular, so I'm defining a list of levels (states) in a JSON file, and every level has its own JSON file containing the objects within that level. However, I'm encountering an issue: when changing states, I get a black flash while the assets for the next state load (this happens whether I iterate through the JSON list or define everything manually). From what I've read, all sprites should be loaded in the preload stage; however, doing this causes that tiny but noticeable black pause. I know one way would be to simply load every asset at the start of the game, but that seems incredibly inefficient (wouldn't that fill up memory immensely?). I would rather load a state's assets from the "parent" state. However, in my quick test (which I may have done wrong) it seems that game.load doesn't work properly when done in the create stage? What is the best approach to doing this?

  • Triggering State Changes with Health Counter

    - by Hairgami_Master
    I'm developing a game where the player changes states as their health decreases. Below 50, it should trigger animation1. Below 30, it should trigger animation2. The problem is, I only want to trigger animation1 once, but my game timer is checking every "frame", so it's triggering animation1 every cycle below 50. I only want it to trigger once, then not again until health has gone back over 50 and then naturally decreased below 50 again. Are there any tried-and-true strategies for triggering state changes as a counter goes down (without the over-triggering problem)? I thought I could write: if (health == 50) animation1.play(); but health sometimes never equals exactly 50, so execution skips right past that statement.
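
    A common fix, sketched in Java with invented names: remember the previous frame's health and fire only on the downward crossing. The trigger is then edge-based rather than level-based, and climbing back above the threshold re-arms it automatically.

        public class HealthTriggers {
            private static final int THRESHOLD_1 = 50;
            private static final int THRESHOLD_2 = 30;
            private int previousHealth = 100;

            // Call once per frame with the current health value.
            public void update(int health) {
                // Fires exactly once per downward crossing, even if health
                // jumps from 53 to 47 and never equals 50 exactly.
                if (previousHealth >= THRESHOLD_1 && health < THRESHOLD_1) {
                    playAnimation1();
                }
                if (previousHealth >= THRESHOLD_2 && health < THRESHOLD_2) {
                    playAnimation2();
                }
                previousHealth = health;
            }

            private void playAnimation1() { /* hook up to the real animation */ }
            private void playAnimation2() { /* hook up to the real animation */ }
        }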

  • nVidia GT 220 not working properly with Ubuntu 12.10

    - by Glaedr
    On every previous Ubuntu release I used to enable the proprietary nVidia drivers to get the card working properly (otherwise I was stuck with a very low resolution and no graphics acceleration), and everything worked fine then. In particular, I noticed (I still don't know why) that on every OS the GPU fan gets noisy until the video drivers are loaded (runlevel 5, it seems, in Linux), and then slows down to a normal speed. Today I installed 12.10. Running the Live CD, surprisingly, everything worked fine: full resolution, acceleration, silent fan, and so on. The running driver was nvidia-current (GT 216). After installing and booting I found that the fan was running at full speed; the installed driver was nouveau. I tried installing nvidia-current and every other proprietary driver, even installing the kernel headers and source first and then the drivers (as suggested here), but all I get with the proprietary drivers is, ironically, low resolution, a noisy fan and no acceleration (with Unity and Compiz obviously refusing to start). Does anybody know a way out?

  • Happy Birthday LearnVisualStudio.NET?

    - by TATWORTH
    Back in 2003, I made the changeover to .NET with the help of LearnVisualStudio.NET. They provide an excellent learning resource, and I commend membership to you. This week only, you can get started for as little as $48.97 for a 1 Year Subscription! Save 30% at LearnVisualStudio.NET http://www.learnvisualstudio.net?awt_l=BN5TZ&awt_m=JaSOlFqKSr1QwB You can also get a Lifetime membership for only $139.97! That's over $59 in savings! A lifetime membership will grant you access to every video on the site and every video we ever create for LearnVisualStudio.NET without giving us another dime! This is a great chance to access over 900 tutorials to help you learn C#, VB, ASP.NET and more. Get started today! http://www.learnvisualstudio.net?awt_l=BN5TZ&awt_m=JaSOlFqKSr1QwB

  • Testing complex compositions

    - by phlipsy
    I have a rather large collection of classes which check and mutate a given data structure. They can be composed, via the composite pattern, into arbitrarily complex tree-like structures, and the final product contains a lot of these composed structures. My question now is: how can I test them? Although it is easy to test every single unit of these compositions, it is rather expensive to test the whole compositions, in the following sense:

    - Testing the correct layout of the composition tree results in a huge number of test cases.
    - Changes in the compositions result in a very laborious review of every single test case.

    What is the general guideline here?
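
    One way to keep such tests cheap (a sketch in Java with JUnit 4 and Mockito; the Check interface and AllOf composite are invented stand-ins for the classes described above): unit-test each leaf in isolation, then test each composite type only against stubbed children, so the layout of deep production trees never has to be re-verified node by node.

        import static org.junit.Assert.assertFalse;
        import static org.mockito.Mockito.*;
        import java.util.Arrays;
        import java.util.List;
        import org.junit.Test;

        interface Check { boolean validate(Object data); }

        class AllOf implements Check {
            private final List<Check> children;
            AllOf(Check... children) { this.children = Arrays.asList(children); }
            public boolean validate(Object data) {
                for (Check c : children) if (!c.validate(data)) return false;
                return true;
            }
        }

        public class AllOfTest {
            @Test
            public void failsWhenAnyChildFails() {
                Check pass = mock(Check.class);
                Check fail = mock(Check.class);
                when(pass.validate(any())).thenReturn(true);
                when(fail.validate(any())).thenReturn(false);

                // The composite's own contract is tested; the children are
                // stubs, so no real subtree has to be built or reviewed here.
                assertFalse(new AllOf(pass, fail).validate(new Object()));
            }
        }

    A concrete production tree then needs at most one smoke test, because the aggregation logic of every node type is already covered.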

  • Driver inversion

    - by Val
    I have a GUI game that is driven by the user's mouse clicks. Every time the user clicks a square on the board, the board state is updated (we recompute the score, the player to move next, and the legal moves that player can make) and repainted. Mouse clicks, state recomputation and painting are all handled in the GUI thread. Now suppose I want to train an AI to play without the GUI. That is, the game engine should obtain the next move by simply calling the AI's makeMove function in a single thread. This would allow millions of games per second to be played automatically. The GUI would just display snapshots of arbitrary states from time to time. How do you switch to this kind of design?
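
    A sketch of the usual inversion in Java (all names invented): give the engine a Player interface and its own loop. The GUI becomes one Player implementation that blocks until a click arrives, and the AI another that answers immediately, so headless training needs no repainting at all.

        interface Player {
            Move makeMove(BoardState state);
        }

        class Move { /* which square was played, etc. */ }

        class BoardState {
            private int movesPlayed;
            boolean isGameOver() { return movesPlayed >= 100; } // placeholder rule
            void apply(Move m) {
                movesPlayed++;
                // recompute score, next player, legal moves here
            }
        }

        class GameEngine {
            // The engine drives the players, not the other way around: a GUI
            // player blocks until a click arrives; an AI player returns at once.
            static BoardState play(BoardState state, Player a, Player b) {
                Player current = a;
                while (!state.isGameOver()) {
                    state.apply(current.makeMove(state));
                    current = (current == a) ? b : a;
                }
                return state;
            }
        }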

  • sound cuts out at random times

    - by jayb151
    Since I updated to 14.04, I have been having a problem with my sound. Every once in a while it "skips a beat", and when I say every once in a while, I mean 2 to 4 times a minute. The sound literally goes away for about half a second. It doesn't pick up where it left off, but rather continues from where it should be, if that makes sense. So, if it cuts out mid-word, I never hear the end of the word. I didn't have this problem before I updated from 12.04. There is no popping or static; the sound just cuts out for a fraction of a second. Any thoughts? Thanks!

  • Why am I getting a warning that Windows is logging on with a temporary profile to run a Task Scheduler task?

    - by Dan C
    I am having a strange problem with the Windows Server 2008 Task Scheduler. I have to run a small command-line application every few minutes. This application just executes a quick web service call on localhost and adds an entry to a log file, so it should not need anything special in terms of permissions. First, I created a new user account "my_scheduler" just for the task. This account is a member of the Users group (I'm not sure what other settings I should turn on or off), and its password is set to never expire. I then created a task to run the application every few minutes. I set it to "Run whether user is logged on or not" and turned on "Do not store password. The task will only have access to local resources" (I did this since it's not hitting anything on the network). I did not turn on "Run with highest privileges", since it does not seem to need them. I set the schedule to "After triggered, repeat every 30 minutes for a duration of 1 day" and "Allow task to be run on demand" (no other settings enabled). However, in the Event Log I see a bunch of these warnings whenever the task is run: "Windows cannot find the local profile and is logging you on with a temporary profile. Changes you make to this profile will be lost when you log off." Even though I get the warning, the task is executing (I see the log entries appearing). Another (possibly related) issue is that multiple copies of the task are started within a few seconds of each other, even though it should only start one. This is also a big problem. Any idea how I can fix this? Thanks in advance, Dan

  • PHP 5.3 on IIS gives 404 error in CGI mode

    - by reinier
    Slowly losing my mind here. I had PHP 5.2 working fine under IIS (as ISAPI), but for a certain extension I needed 5.3. So, no worries, I installed it, but it turns out the ISAPI module is not supplied anymore. I followed the install tutorials for FastCGI and ended up with a 500 Internal Server Error for every PHP page served. So my current situation is: I have FastCGI removed. In my websites I have added PHP (HEAD, GET, POST) and routed it to c:\php\php-cgi.exe. Result: every PHP page I try (even ones with just text) gives a 404 Not Found error. Any HTML file I put in the same folder serves without a hitch. Who can help me, please? How hard can something like this be, right? For me, apparently, very hard. Extra information: I ran the installer as suggested below and set it to use FastCGI. My fcgiext.ini file looks like this now:

        [types]
        php=c:\php\php-cgi.exe

        [c:\php\php-cgi.exe]
        exepath=c:\php\php-cgi.exe

    Further observations:

    - From the command line, a 3-line PHP file with just phpinfo(); works fine.
    - From the server, the same PHP file with just phpinfo(); results in the 500 Internal Server Error.
    - From the server, a PHP file with just text works fine.
    - Changing the document types in the IIS management console to point the PHP extension directly to c:\php\php-cgi.exe results in 404 for every PHP file.
    - The php.ini is the php.ini-production file which came with the distribution; no edits were made.
    - Setting the IIS PHP handler directly to c:\php\php-cgi.exe (not via FastCGI) gives the following: a PHP page with only text works fine; a page with only phpinfo(); results in 404 Not Found.

  • Can't connect two PCs to a Network Switch at the same time (Windows 7)

    - by puk
    I have two computers connected to a network switch, and every once in a while one of the computers will lose its internet connection. It's almost always the same computer. However, if I play around with the Control Panel, I can swap the problem so that the other computer is the one without a connection. Restarting either of the computers does not help. In Windows, the world's greatest troubleshooter tells me that a network cable is unplugged and that I should try plugging it in. Disabling and re-enabling my NIC does not fix the problem, and neither does swapping cables around. When rebooting, the BIOS complains that the Ethernet cable is not plugged in. In case it matters, my setup at the office is like so: Modem - Router - Network Switch 1 - Network Switch 2. I have tried turning off the energy-saving option for my NIC, and I tried manually setting the link speed to 100 Mbps Full Duplex, without any luck. Also, both computers have a Realtek PCIe GBE Family controller. Does anyone have any idea why this is happening every 5-10 days? EDIT: I have also tried a completely different network switch, and the problem persists as before.

  • Logging hurts MySQL performance - but why?

    - by jimbo
    I'm quite surprised that I can't see an answer to this anywhere on the site already, nor in the MySQL documentation (section 5.2 seems to have logging otherwise well covered!) If I enable binlogs, I see a small performance hit (subjectively), which is to be expected with a little extra IO -- but when I enable a general query log, I see an enormous performance hit (double the time to run queries, or worse), way in excess of what I see with binlogs. Of course I'm now logging every SELECT as well as every UPDATE/INSERT, but, other daemons record their every request (Apache, Exim) without grinding to a halt. Am I just seeing the effects of being close to a performance "tipping point" when it comes to IO, or is there something fundamentally difficult about logging queries that causes this to happen? I'd love to be able to log all queries to make development easier, but I can't justify the kind of hardware it feels like we'd need to get performance back up with general query logging on. I do, of course, log slow queries, and there's negligible improvement in general usage if I disable this. (All of this is on Ubuntu 10.04 LTS, MySQLd 5.1.49, but research suggests this is a fairly universal issue)

  • How to disable auto insert notification in Windows 7?

    - by White Phoenix
    Alright, here's the problem. The hard drive activity light on my custom-built PC is blinking exactly once every second. Microsoft has this to say on the issue: http://support.microsoft.com/kb/138598 There was discussion of this issue several months ago: Why does my hard drive LED light blink every second? The problem seems to stem primarily from Windows 7 polling the CD-ROM/DVD drive every second to see if something is inserted. The Windows 7 users in the thread linked in the Super User question, https://social.technet.microsoft.com/Forums/fi-FI/w7itprohardware/thread/4f6f63b3-4b58-4154-9298-1566100f9d00, have confirmed that this IS a known issue with Windows 7. Some people point at motherboard circuitry that links CD-ROM and SATA activity to the hard drive activity light, but whatever the case, the temporary solution seems to be to disable the CD/DVD-ROM drive in Device Manager. In fact, disabling the CD/DVD-ROM drive does stop the blinking, but of course this solution is counterproductive, because I shouldn't have to disable a device entirely to fix this problem. I've tried the following suggestions from that thread:

    - changing the autorun registry entry to 0
    - completely disabling autoplay in the AutoPlay control panel
    - disabling autoplay in the Local Group Policy Editor

    None of these stop the blinking; apparently these solutions work for both XP and Vista, but it seems to be different in Windows 7. So I'm wondering if anyone has found out how to completely disable the polling in Windows 7, or if this is just an issue we will have to deal with. There's no option to disable the auto insert notification when you go to the device in Device Manager (there was in XP), so I have no idea where this option is hidden, or whether there's a registry entry I could change to stop the polling. Anyone have any idea?

  • SpamAssassin Bayesian score discrepancies

    - by CaptSaltyJack
    This makes my brain hurt. For some reason, SpamAssassin is giving high scores to certain emails, but when I test them on the command line, they get a low score. This one particular email has this in the header:

        X-Spam-Flag: YES
        X-Spam-Score: 8.521
        X-Spam-Level: ********
        X-Spam-Status: Yes, score=8.521 tagged_above=-9999 required=5
            tests=[BAYES_99=3.5, BAYES_999=0.2, HTML_MESSAGE=0.001,
            NO_RECEIVED=-0.001, NO_RELAYS=-0.001, RAZOR2_CF_RANGE_51_100=0.5,
            RAZOR2_CF_RANGE_E8_51_100=1.886, RAZOR2_CHECK=0.922,
            URIBL_RHS_DOB=1.514] autolearn=no

    Yet when I dump the raw email into a file msg and run sudo su amavis -c 'spamassassin -t msg', I get this output:

        Content analysis details:   (3.8 points, 5.0 required)

         pts rule name                 description
        ---- ------------------------- ------------------------------------------
         1.5 URIBL_RHS_DOB             Contains an URI of a new domain (Day Old Bread)
                                       [URIs: cliobeads.com]
        -1.0 ALL_TRUSTED               Passed through trusted hosts only via SMTP
         0.0 HTML_MESSAGE              BODY: HTML included in message
        -0.0 BAYES_20                  BODY: Bayes spam probability is 5 to 20%
                                       [score: 0.1855]
         1.9 RAZOR2_CF_RANGE_E8_51_100 Razor2 gives engine 8 confidence level above 50%
                                       [cf: 100]
         0.5 RAZOR2_CF_RANGE_51_100    Razor2 gives confidence level above 50%
                                       [cf: 100]
         0.9 RAZOR2_CHECK              Listed in Razor2 (http://razor.sf.net/)

    I'm really confused as to why an email gets a completely different score when it comes in than when I run spamassassin -t on it. Is there some other way I should be testing emails? Also, my users have the ability to drag false positives into a folder called "False Positives", and every day a cron job fires off that runs this on every message in every user's folder:

        sa-learn --dbpath=/var/lib/amavis/.spamassassin --ham /tmp/*-*.eml >/dev/null

    I ran sudo locate bayes_toks and there's definitely only one Bayes DB on the system, in /var/lib/amavis/.spamassassin. I'm clueless; any help would be great and may help restore my sanity!

  • Set scheduled task last result to 0x0 manually

    - by Rogier
    Every night a task runs that checks whether any scheduled task has a Last Result not equal to 0x0. If a scheduled task has an error like 0x1, an e-mail is automatically sent to me. As some tasks run only weekly, and sometimes an error occurs that leaves a non-zero result, an e-mail with the error message is sent every night, because the Last Result column still shows the last result of 0x1. I would like to set the Last Result column to 0x0 manually once I have solved the problem, so I don't get the error e-mail every night. So, is it possible to set a scheduled task's Last Result to 0x0 manually (or by a script)?

    @harrymc: see the script below, which sends the e-mail. I could easily add a criterion to ignore result 0x1 (or another code), but this is not the solution, as most of the time this result is a real error and has to be e-mailed.

        set YourEmailAddress=[email protected]
        set SMTPServer=SMTPserver
        set PathToScript=c:\scripts
        set FromAddress=[email protected]

        for /F "delims=" %%a in ('schtasks /query /v /fo:list ^| findstr /i "Taskname Result"') do call :Sub %%a
        goto :eof

        :Sub
        set Line=%*
        set BOL=%Line:~0,4%
        set MOL=%Line:~38%
        if /i %BOL%==Task (
            set name=%MOL%
            goto :eof
        )
        set result=%MOL%
        echo Task Name=%name%, Task Result=%result%
        if not %result%==0 (
            echo Task %name% failed with result %result% > %PathToScript%\taskcheckerlog.txt
            bmail %PathToScript%\taskcheckerlog.txt -t %YourEmailAddress% -a "Warning! Failed %name% Scheduled Task on %computername%" -s %SMTPServer% -f %FromAddress% -b "Task %name% failed with result %result% on CorVu scheduler %computername%"
        )
