Search Results

Search found 4689 results on 188 pages for 'no average geek'.

Page 113/188 | < Previous Page | 109 110 111 112 113 114 115 116 117 118 119 120  | Next Page >

  • Ask the Readers: How Do You Share Your Photos?

    - by Jason Fitzpatrick
    It’s easy to snap away and fill up a memory card, but not quite as easy to share your best pics with your friends and family. How do you get your pics from your camera to your friends’ monitors? This week we’re interested in hearing about your favorite photo sharing tools and techniques. What’s your workflow for getting your photos from your digital camera to the virtual desktops of friends around the globe? Sound off in the comments with your favorite resources, applications, and photo sharing tricks. Make sure to check in on Friday for the What You Said roundup to see how your fellow readers get the job done.

    Read the article

  • Ask the Readers: How Do You Find Your Next Book?

    - by Jason Fitzpatrick
    It’s never been easier to find book reviews, recommendations, and comparisons; tools which are more necessary than ever thanks to the increasing number of new titles on the market. This week we want to hear all about your techniques for picking your next book. Whether you consult the New York Times best seller list, pore over Amazon book reviews, use a book suggestion engine, or just buy whatever the local book store has on the end-cap display that month, we want to hear about your system for finding new books. Sound off in the comments with your technique (bonus points for including links to any services or sites you use) and then check back on Friday for the What You Said roundup to see how your fellow readers fill their book bags.

    Read the article

  • HTG Explains: What are Shadow Copies and How Can I Use Them to Copy or Backup Locked Files?

    - by Jason Faulkner
    When trying to create simple file-copy backups in Windows, a common problem is locked files, which can trip up the operation. Whether the file is currently opened by the user or locked by the OS itself, certain files have to be completely unused in order to be copied. Thankfully, there is a simple solution: Shadow Copies. Using our simple tool, you can easily access shadow copies, which provide point-in-time copies of currently locked files as created by Windows Restore.

    Read the article

  • Is It Possible for My Router to Wear Out?

    - by Jason Fitzpatrick
    Day after day your humble and hardworking router holds your home network together and links it to the greater internet. Is it possible to work it to death? Today’s Question & Answer session comes to us courtesy of SuperUser—a subdivision of Stack Exchange, a community-driven grouping of Q&A web sites.

    Read the article

  • What’s the Difference Between Sleep and Hibernate in Windows?

    - by Lori Kaufman
    Windows 7 provides several options for conserving power when you are not using your PC. These options include Sleep, Hibernate, and Hybrid Sleep and are very useful if you are using a laptop. Here’s the difference between them. Note: this article is meant primarily for beginners. Obviously ubergeeky readers will already know the difference between power modes.

    Read the article

  • Ask the Readers: How Do You Stay Productive Working from Home?

    - by Jason Fitzpatrick
    Roughly 20% of the global workforce telecommutes on a permanent or part-time basis; if you’re one of the many laptop-toting, home-office-working telecommuters, we want to hear all about how you stay productive outside the walls of a traditional office. Whether you have a dedicated home office or an attaché that unfolds into a mobile workstation, we want to hear your tips, tricks, and productivity-focusing methods for getting things done when you’re working from home. Sound off in the comments with your tips and then check back in on Friday for the What You Said Roundup.

    Read the article

  • Generate Unique Abstract Backgrounds with Ablaze

    - by Jason Fitzpatrick
    If you want custom and unique backgrounds without having to code your own image-generating engine, Ablaze makes it simple (and fun) to create abstract images. You can customize a wide array of options in Ablaze including the base shape (ring, horizontal line, or random), number of particles, distance each particle travels, and the speed (if you increase the speed range you get more distinct lines and if you decrease it you get smoother smokier shapes). You can also seed the design with a color palette pulled from any image you provide (the sample above was seeded with a Wonder Woman comic panel). Tweak and reset the pattern generation as much as you want; when you create an abstract image worthy of your desktop just click the save button to grab a copy of it in PNG format. Ablaze [via Flowing Data]
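    As a toy illustration of the mechanics the post describes (particles wandering away from a base shape, with translucent trails accumulating into smoke-like forms), here is a rough sketch in Python with Pillow. It is not Ablaze's actual algorithm, and every constant below is invented for illustration:
      import math
      import random
      from PIL import Image, ImageDraw

      # Particles start on a ring and take a random walk, leaving faint
      # translucent trails that pile up into smoky shapes.
      W, H, PARTICLES, STEPS, RADIUS = 800, 600, 120, 400, 150

      img = Image.new("RGB", (W, H), "black")
      draw = ImageDraw.Draw(img, "RGBA")   # "RGBA" mode enables alpha blending

      for _ in range(PARTICLES):
          angle = random.uniform(0, 2 * math.pi)
          x = W / 2 + RADIUS * math.cos(angle)
          y = H / 2 + RADIUS * math.sin(angle)
          # Low alpha (25/255) is what makes overlapping trails glow.
          color = (random.randint(60, 255), random.randint(60, 255),
                   random.randint(60, 255), 25)
          for _ in range(STEPS):
              nx = x + random.uniform(-3, 3)   # a wider range gives smokier shapes
              ny = y + random.uniform(-3, 3)
              draw.line([(x, y), (nx, ny)], fill=color, width=1)
              x, y = nx, ny

      img.save("ablaze_sketch.png")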

    Read the article

  • DropVox Records Voice Memos Right to Your Dropbox Account

    - by Jason Fitzpatrick
    DropVox is a clever and highly specialized application that, quite effectively, turns your iOS device into a voice recorder with Dropbox-based storage. Install the app, launch it, hit the record button, and your recording is uploaded to your Dropbox account in .m4a format as soon as you’re finished creating it. You can also configure DropVox to start recording immediately after launch and to continue recording if the device is locked or other applications are in use. Hit up the link to grab a copy. DropVox is currently $0.99 (50% off for a limited time) and works on the iPhone, iPad, and iPod Touch with microphone attached. DropVox [via Download Squad]

    Read the article

  • From the Tips Box: Waterproof Boomboxes, Quick Access Laptop Stats, and Stockpiling Free Apps and Books

    - by Jason Fitzpatrick
    Once a week we round up some great reader tips and share them with everyone. This week we’re looking at building a waterproof boombox, quick access to laptop stats in Windows 7, and how to stockpile free apps and books at Amazon.

    Read the article

  • SQL SERVER – ASYNC_IO_COMPLETION – Wait Type – Day 11 of 28

    - by pinaldave
    For any good system, three things are vital: CPU, Memory and IO (disk). Among these three, IO is the most crucial factor for SQL Server. Looking at real-world cases, I do not see IT people upgrading CPU and Memory frequently, but the disk is often upgraded to improve space, speed or throughput. Today we will look at another IO-related wait type.
    From Book On-Line: Occurs when a task is waiting for I/Os to finish.
    ASYNC_IO_COMPLETION Explanation: a task is waiting for I/O to finish. If the application connected to SQL Server is processing the data very slowly, this type of wait can occur. Several long-running database operations like BACKUP, CREATE DATABASE or ALTER DATABASE can also create this wait type.
    Reducing ASYNC_IO_COMPLETION wait: when the issue is related to IO, one should check the following things associated with the IO subsystem:
    - Look at the programming and see if there is any application code that processes the data slowly (an inefficient loop, etc.); it should be rewritten to avoid this wait type.
    - Proper placement of the files is very important. Check the file system for proper placement of the files: LDF and MDF on separate drives, TempDB on another separate drive, hot-spot tables on a separate filegroup (and on a separate disk), etc.
    - Check the File Statistics and see if there are high IO Read and IO Write stalls: SQL SERVER – Get File Statistics Using fn_virtualfilestats.
    - Check the event log and error log for any errors or warnings related to IO.
    - If you are using a SAN (Storage Area Network), check the throughput of the SAN system as well as the configuration of the HBA Queue Depth. In one of my recent projects the SAN was performing really badly, but the SAN administrator would not accept that it was the problem. After some investigation he agreed to change the HBA Queue Depth on the development (test) environment, and as soon as we raised the HBA Queue Depth to a much higher value there was a sudden, big improvement in performance.
    - It is very likely that there are no proper indexes on the system and that there are lots of table scans and heap scans. Creating proper indexes can reduce the IO bandwidth considerably. If SQL Server can use an appropriate covering index instead of the clustered index, it can effectively reduce a lot of CPU, Memory and IO (since a covering index typically has fewer columns than the clustered index; it depends upon the situation). You can refer to the following two articles I wrote about how to optimize indexes: Create Missing Indexes and Drop Unused Indexes.
    Checking Memory Related Perfmon Counters:
    - SQLServer: Memory Manager\Memory Grants Pending (a value consistently higher than 0-2 is a concern)
    - SQLServer: Memory Manager\Memory Grants Outstanding (consistently high versus your benchmark)
    - SQLServer: Buffer Manager\Buffer Cache Hit Ratio (higher is better; greater than 90% for a usually smooth-running system)
    - SQLServer: Buffer Manager\Page Life Expectancy (a value consistently lower than 300 seconds is a concern)
    - Memory: Available MBytes (information only)
    - Memory: Page Faults/sec (benchmark only)
    - Memory: Pages/sec (benchmark only)
    Checking Disk Related Perfmon Counters:
    - Average Disk sec/Read (consistently higher than 4-8 milliseconds is not good)
    - Average Disk sec/Write (consistently higher than 4-8 milliseconds is not good)
    - Average Disk Read/Write Queue Length (consistently higher than the benchmark is not good)
    Read all the posts in the Wait Types and Queue series.
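    A hedged aside on the File Statistics step above: the same numbers are exposed by the sys.dm_io_virtual_file_stats DMV (the successor to fn_virtualfilestats), so the check is easy to script. A sketch from Python with pyodbc, where the connection string is a placeholder for your environment:
      import pyodbc

      conn = pyodbc.connect("DRIVER={ODBC Driver 17 for SQL Server};"
                            "SERVER=myserver;Trusted_Connection=yes")
      rows = conn.execute("""
          SELECT DB_NAME(database_id) AS database_name,
                 file_id,
                 num_of_reads, io_stall_read_ms,
                 num_of_writes, io_stall_write_ms
          FROM sys.dm_io_virtual_file_stats(NULL, NULL)
          ORDER BY io_stall_read_ms + io_stall_write_ms DESC
      """).fetchall()

      for r in rows[:10]:                      # the ten worst-stalled files
          print(r.database_name, r.file_id,
                "read stall ms:", r.io_stall_read_ms,
                "write stall ms:", r.io_stall_write_ms)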
    Note: The information presented here is from my experience, and I do not claim it to be completely accurate. I suggest reading Book OnLine for further clarification. All the discussions of Wait Stats in this blog are generic and vary from system to system. It is recommended that you test this on a development server before implementing it on a production server.
    Reference: Pinal Dave (http://blog.SQLAuthority.com)
    Filed under: Pinal Dave, PostADay, SQL, SQL Authority, SQL Query, SQL Scripts, SQL Server, SQL Tips and Tricks, SQL Wait Stats, SQL Wait Types, T SQL, Technology

    Read the article

  • SQL SERVER – IO_COMPLETION – Wait Type – Day 10 of 28

    - by pinaldave
    For any good system, three things are vital: CPU, Memory and IO (disk). Among these three, IO is the most crucial factor for SQL Server. Looking at real-world cases, I do not see IT people upgrading CPU and Memory frequently, but the disk is often upgraded to improve space, speed or throughput. Today we will look at an IO-related wait type.
    From Book On-Line: Occurs while waiting for I/O operations to complete. This wait type generally represents non-data page I/Os. Data page I/O completion waits appear as PAGEIOLATCH_* waits.
    IO_COMPLETION Explanation: a task is waiting for I/O to finish. This is a good indication that the IO subsystem needs to be looked at.
    Reducing IO_COMPLETION wait: when the issue concerns IO, one should look at the following things related to the IO subsystem:
    - Proper placement of the files is very important. Check the file system for proper placement of the files: LDF and MDF on separate drives, TempDB on another separate drive, hot-spot tables on a separate filegroup (and on a separate disk), etc.
    - Check the File Statistics and see if there are high IO Read and IO Write stalls: SQL SERVER – Get File Statistics Using fn_virtualfilestats.
    - Check the event log and error log for any errors or warnings related to IO.
    - If you are using a SAN (Storage Area Network), check the throughput of the SAN system as well as the configuration of the HBA Queue Depth. In one of my recent projects the SAN was performing really badly, but the SAN administrator would not accept that it was the problem. After some investigation he agreed to change the HBA Queue Depth on the development (test) environment, and as soon as we raised the HBA Queue Depth to a much higher value there was a sudden, big improvement in performance.
    - It is very possible that there are no proper indexes in the system and that there are lots of table scans and heap scans. Creating proper indexes can reduce the IO bandwidth considerably. If SQL Server can use an appropriate covering index instead of the clustered index, it can effectively reduce a lot of CPU, Memory and IO (since a covering index typically has fewer columns than the clustered index; it depends upon the situation). You can refer to the two articles I wrote about how to optimize indexes: Create Missing Indexes and Drop Unused Indexes.
    Checking Memory Related Perfmon Counters:
    - SQLServer: Memory Manager\Memory Grants Pending (a value consistently higher than 0-2 is a concern)
    - SQLServer: Memory Manager\Memory Grants Outstanding (consistently high versus your benchmark)
    - SQLServer: Buffer Manager\Buffer Cache Hit Ratio (higher is better; greater than 90% for a usually smooth-running system)
    - SQLServer: Buffer Manager\Page Life Expectancy (a value consistently lower than 300 seconds is a concern)
    - Memory: Available MBytes (information only)
    - Memory: Page Faults/sec (benchmark only)
    - Memory: Pages/sec (benchmark only)
    Checking Disk Related Perfmon Counters:
    - Average Disk sec/Read (consistently higher than 4-8 milliseconds is not good)
    - Average Disk sec/Write (consistently higher than 4-8 milliseconds is not good)
    - Average Disk Read/Write Queue Length (consistently higher than the benchmark is not good)
    Note: The information presented here is from my experience, and I do not claim it to be completely accurate. I suggest reading Book OnLine for further clarification. All the discussions of Wait Stats in this blog are generic and vary from system to system. It is recommended that you test this on a development server before implementing it on a production server.
    Reference: Pinal Dave (http://blog.SQLAuthority.com)
    Filed under: Pinal Dave, PostADay, SQL, SQL Authority, SQL Query, SQL Scripts, SQL Server, SQL Tips and Tricks, SQL Wait Types, SQL White Papers, T SQL, Technology

    Read the article

  • Merge sort versus quick sort performance

    - by Giorgio
    I have implemented merge sort and quick sort using C (GCC 4.4.3 on Ubuntu 10.04 running on a 4 GB RAM laptop with an Intel Duo CPU at 2 GHz) and I wanted to compare the performance of the two algorithms. The prototypes of the sorting functions are:
      void merge_sort(const char **lines, int start, int end);
      void quick_sort(const char **lines, int start, int end);
    i.e. both take an array of pointers to strings and sort the elements with index i such that start <= i <= end. I have produced some files containing random strings with an average length of 4.5 characters. The test files range from 100 lines to 10000000 lines. I was a bit surprised by the results because, even though I know that merge sort is O(n log(n)) while quick sort is O(n^2) in the worst case, I have often read that on average quick sort should be as fast as merge sort. However, my results are the following. Up to 10000 strings, both algorithms perform equally well. For 10000 strings, both require about 0.007 seconds. For 100000 strings, merge sort is slightly faster with 0.095 s against 0.121 s. For 1000000 strings merge sort takes 1.287 s against 5.233 s of quick sort. For 5000000 strings merge sort takes 7.582 s against 118.240 s of quick sort. For 10000000 strings merge sort takes 16.305 s against 1202.918 s of quick sort.
    So my question is: are my results as expected, meaning that quick sort is comparable in speed to merge sort for small inputs but, as the size of the input data grows, the fact that its worst-case complexity is quadratic will become evident?
    Here is a sketch of what I did. In the merge sort implementation, the partitioning consists of calling merge sort recursively, i.e.
      merge_sort(lines, start, (start + end) / 2);
      merge_sort(lines, 1 + (start + end) / 2, end);
    Merging of the two sorted sub-arrays is performed by reading the data from the array lines and writing it to a global temporary array of pointers (this global array is allocated only once). After each merge the pointers are copied back to the original array. So the strings are stored once, but I need twice as much memory for the pointers. For quick sort, the partition function chooses the last element of the array to sort as the pivot and scans the previous elements in one loop. After it has produced a partition of the type
      start ... {elements <= pivot} ... pivotIndex ... {elements > pivot} ... end
    it calls itself recursively:
      quick_sort(lines, start, pivotIndex - 1);
      quick_sort(lines, pivotIndex + 1, end);
    Note that this quick sort implementation sorts the array in place and does not require additional memory; therefore it is more memory-efficient than the merge sort implementation.
    So my question is: is there a better way to implement quick sort that is worthwhile trying out? If I improve the quick sort implementation and perform more tests on different data sets (computing the average of the running times on different data sets), can I expect better performance of quick sort with respect to merge sort?
    EDIT: Thank you for your answers. My implementation is in place and is based on the pseudo-code I found on Wikipedia in the section "In-place version": function partition(array, 'left', 'right', 'pivotIndex'), where I choose the last element in the range to be sorted as the pivot, i.e. pivotIndex := right. I have checked the code over and over again and it seems correct to me. In order to rule out the case that I am using the wrong implementation, I have uploaded the source code to github (in case you would like to take a look at it).
    Your answers seem to suggest that I am using the wrong test data. I will look into it and try out different test data sets. I will report as soon as I have some results.
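    One thing the numbers above strongly suggest (not stated in the question, but consistent with it): with an average length of 4.5 characters, a 10000000-line file of random strings necessarily contains many duplicate keys, and a last-element-pivot partition that sends every element equal to the pivot to one side degrades toward quadratic time on duplicates. A standard remedy is a randomized pivot combined with three-way partitioning. A sketch of that variant in Python (the question's code is C, but the partition logic carries over):
      import random

      def quick_sort_3way(a, lo=0, hi=None):
          """In-place quicksort with a random pivot and three-way
          (Dutch national flag) partitioning, so runs of duplicate
          keys land in the middle and are never recursed into."""
          if hi is None:
              hi = len(a) - 1
          while lo < hi:
              pivot = a[random.randint(lo, hi)]
              lt, i, gt = lo, lo, hi
              while i <= gt:
                  if a[i] < pivot:
                      a[lt], a[i] = a[i], a[lt]
                      lt += 1
                      i += 1
                  elif a[i] > pivot:
                      a[i], a[gt] = a[gt], a[i]
                      gt -= 1
                  else:
                      i += 1
              # Recurse into the smaller side and loop on the larger,
              # keeping the stack depth O(log n).
              if lt - lo < hi - gt:
                  quick_sort_3way(a, lo, lt - 1)
                  lo = gt + 1
              else:
                  quick_sort_3way(a, gt + 1, hi)
                  hi = lt - 1

      words = ["abcd", "qrs", "abcd", "zz", "qrs"]
      quick_sort_3way(words)
      print(words)   # ['abcd', 'abcd', 'qrs', 'qrs', 'zz']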

    Read the article

  • Fun tips with Analytics

    - by user12620172
    If you read this blog, I am assuming you are at least familiar with the Analytic functions in the ZFSSA. They are basically amazing, very powerful and deep. However, you may not be aware of some great, hidden functions inside the Analytic screen. Once you open a metric, the toolbar appears. Now, I’m not going over every tool, as we have done that before, and you can hover your mouse over them and they will tell you what they do. But... check this out. Open a metric (CPU Percent Utilization works fine), and click on the “Hour” button, which is the 2nd clock icon. That’s easy, you are now looking at the last hour of data. Now, hold down your ‘Shift’ key, and click it again. Now you are looking at 2 hours of data. Hold down Shift and click it again, and you are looking at 3 hours of data. Are you catching on yet? You can do this with not only the ‘Hour’ button, but also with the ‘Minute’, ‘Day’, ‘Week’, and the ‘Month’ buttons. Very cool. It also works with the ‘Show Minimum’ and ‘Show Maximum’ buttons, allowing you to go to the next iteration of either of those. One last button you can Shift-click is the handy ‘Drill’ button. This button usually drills down on one specific aspect of your metric. If you Shift-click it, it will display a “Rainbow Highlight” of the current metric. This works best if this metric has many ‘Range Average’ items in the left-hand window. Give it a shot. Also, one will sometimes click on a certain second of data in the graph. In one case, I clicked 4:57 and 21 seconds, and the ‘Range Average’ on the left went away and was replaced by the time stamp. It seems at this point to some people that you are now stuck and cannot get back to an average for the whole chart. However, you can click on the time stamp of “4:57:21” right above the chart. Even though your mouse does not change into the typical browser pointing finger that most links get, you can click it, and it will change your range back to the full metric. Another trick you may like is to save a certain view or look of a group of graphs. Most of you know you can save a worksheet, but did you know you could Sync them, Pause them, and then Save it? This will save the paused state, allowing you to view it forever the way you see it now. Heatmaps: some metrics display as heatmaps and some don’t. If you have one, and wish to zoom it vertically, try this. Open a heatmap metric (I believe every metric that deals with latency will show as a heatmap). Select one or two of the ranges on the left. Click the “Change Outlier Elimination” button. Click it again and check out what it does. Enjoy. Perhaps my next blog entry will be the best Analytic metrics to keep your eyes on, and how you can use the Alerts feature to watch them for you. Steve

    Read the article

  • XML generation with Java, trying to copy the whole node

    - by Pawel Mysior
    I've got an XML document filled with people (the parent node is "students", and there are 25+ "student" nodes). Each student looks like this:
      <student>
        <name></name>
        <surname></surname>
        <grades>
          <subject name="">
            <small_grades></small_grades>
            <final_grade></final_grade>
          </subject>
          <subject name="">
            <small_grades></small_grades>
            <final_grade></final_grade>
          </subject>
        </grades>
        <average></average>
      </student>
    Basically, what I want to do (what I've been asked to do) is to make a program that gets the 3 students with the best average. While parsing the document and getting the three best students isn't too difficult, the XML generation is a pain in the ass. Right now, what I'm doing is getting every single node from each student and recreating it in a new file. Is there a way to copy the whole student node with everything that's in it? Regards, Paul
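    The usual trick is to copy the node wholesale instead of rebuilding it: in Java DOM that is Document.importNode(studentNode, true) into the new document, or Node.cloneNode(true) within the same one. The same idea sketched in Python's ElementTree, with made-up file names:
      import copy
      import xml.etree.ElementTree as ET

      tree = ET.parse("students.xml")          # root element: <students>
      root = tree.getroot()

      # Pick the three <student> nodes with the highest <average>.
      best = sorted(root.findall("student"),
                    key=lambda s: float(s.findtext("average")),
                    reverse=True)[:3]

      # Copy each whole node instead of rebuilding it field by field.
      new_root = ET.Element("students")
      for student in best:
          new_root.append(copy.deepcopy(student))

      ET.ElementTree(new_root).write("top3.xml", encoding="utf-8")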

    Read the article

  • My OpenCL kernel is slower on faster hardware... but why?

    - by matdumsa
    Hi folks, as I was finishing coding my project for a multicore programming class, I came upon something really weird that I wanted to discuss with you. We were asked to create any program that would show significant improvement when programmed for a multi-core platform. I decided to try and code something on the GPU to try out OpenCL. I chose the matrix convolution problem since I’m quite familiar with it (I’ve parallelized it before with open_mpi with great speedup for large images).
    So here it is: I select a large GIF file (2.5 MB) [2816x2112], run the sequential version (original code), and get an average of 15.3 seconds. I then run the new OpenCL version I just wrote on my MBP’s integrated GeForce 9400M and I get timings of 1.26 s on average. So far so good, it’s a speedup of 12x!!
    But now I go into my energy saver panel to turn on “Graphic Performance Mode”. That mode turns off the GeForce 9400M and turns on the GeForce 9600M GT my system has. Apple says this card is twice as fast as the integrated one. Guess what: my timings using the kick-ass graphics card are 3.2 seconds on average. My 9600M GT seems to be more than two times slower than the 9400M.
    For those of you who are OpenCL-inclined, I copy all data to remote buffers before starting, so the actual computation doesn’t require round trips to main RAM. Also, I let OpenCL determine the optimal local work-size, as I’ve read they’ve done a pretty good implementation at figuring that parameter out. Anyone have a clue?
    edit: full source code with makefiles here http://www.mathieusavard.info/convolution.zip
      cd gimage
      make
      cd ../clconvolute
      make
    Put a large input.gif in clconvolute and run it to see the results.

    Read the article

  • Get percentiles of data-set with group by month

    - by Cylindric
    Hello, I have a SQL table with a whole load of records that look like this:
      | Date       | Score |
      +------------+-------+
      | 01/01/2010 |     4 |
      | 02/01/2010 |     6 |
      | 03/01/2010 |    10 |
      ...
      | 16/03/2010 |     2 |
    I'm plotting this on a chart, so I get a nice line across the graph indicating score-over-time. Lovely. Now, what I need to do is include the average score on the chart, so we can see how that changes over time, so I can simply add this to the mix:
      SELECT YEAR(SCOREDATE) 'Year', MONTH(SCOREDATE) 'Month',
             MIN(SCORE) MinScore,
             AVG(SCORE) AverageScore,
             MAX(SCORE) MaxScore
      FROM SCORES
      GROUP BY YEAR(SCOREDATE), MONTH(SCOREDATE)
      ORDER BY YEAR(SCOREDATE), MONTH(SCOREDATE)
    That's no problem so far. The problem is, how can I easily calculate the percentiles at each time-period? I'm not sure that's the correct phrase. What I need in total is:
    - A line on the chart for the score (easy)
    - A line on the chart for the average (easy)
    - A line on the chart showing the band that 95% of the scores occupy (stumped)
    It's the third one that I don't get. I need to calculate the percentile figures, which I can do singly:
      SELECT MAX(SubQ.SCORE) FROM
        (SELECT TOP 45 PERCENT SCORE FROM SCORES
         WHERE YEAR(SCOREDATE) = 2010 AND MONTH(SCOREDATE) = 1
         ORDER BY SCORE ASC) AS SubQ
      SELECT MIN(SubQ.SCORE) FROM
        (SELECT TOP 45 PERCENT SCORE FROM SCORES
         WHERE YEAR(SCOREDATE) = 2010 AND MONTH(SCOREDATE) = 1
         ORDER BY SCORE DESC) AS SubQ
    But I can't work out how to get a table of all the months.
      | Date       | Average | 45% | 55% |
      +------------+---------+-----+-----+
      | 01/01/2010 |      13 |  11 |  15 |
      | 02/01/2010 |      10 |   8 |  12 |
      | 03/01/2010 |       5 |   4 |  10 |
      ...
      | 16/03/2010 |       7 |   7 |   9 |
    At the moment I'm going to have to load this lot up into my app and calculate the figures myself, or run a larger number of individual queries and collate the results.
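    Since the fallback plan is to compute the figures in the app anyway, here is a sketch of that route in Python with pandas (frame and column names invented to mirror the SCORES table). quantile() gives any band edge: the 2.5%/97.5% pair below brackets 95% of the scores, and swapping in 0.45/0.55 reproduces the sample table:
      import pandas as pd

      # Hypothetical frame standing in for the SCORES table.
      scores = pd.DataFrame({
          "date": pd.to_datetime(["2010-01-01", "2010-01-02", "2010-01-03",
                                  "2010-02-01", "2010-02-02", "2010-03-16"]),
          "score": [4, 6, 10, 3, 7, 2],
      })

      def q_low(s):                      # lower edge of a 95% band
          return s.quantile(0.025)

      def q_high(s):                     # upper edge of a 95% band
          return s.quantile(0.975)

      monthly = scores.groupby(scores["date"].dt.to_period("M"))["score"]
      print(monthly.agg(average="mean", low=q_low, high=q_high))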

    Read the article

  • High Runtime for Dictionary.Add for a large number of items

    - by aaginor
    Hi folks, I have a C# application that stores data from a text file in a Dictionary object. The amount of data to be stored can be rather large, so it takes a lot of time inserting the entries. With many items in the Dictionary it gets even worse, because of the resizing of the internal array that stores the data for the Dictionary. So I initialized the Dictionary with the number of items that will be added, but this has no impact on speed. Here is my function:
      private Dictionary<IdPair, Edge> AddEdgesToExistingNodes(HashSet<NodeConnection> connections)
      {
        Dictionary<IdPair, Edge> resultSet = new Dictionary<IdPair, Edge>(connections.Count);
        foreach (NodeConnection con in connections)
        {
          ...
          resultSet.Add(nodeIdPair, newEdge);
        }
        return resultSet;
      }
    In my tests, I insert ~300k items. I checked the running time with ANTS Performance Profiler and found that the average time for resultSet.Add(...) doesn't change when I initialize the Dictionary with the needed size; it is the same as when I initialize it with new Dictionary() (about 0.256 ms on average for each Add). This is definitely caused by the amount of data in the Dictionary (although I initialized it with the desired size). For the first 20k items, the average time for Add is 0.03 ms for each item. Any idea how to make the Add operation faster? Thanks in advance, Frank
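    The snippet does not show IdPair, but a per-Add cost that keeps growing with item count even after pre-sizing is the classic signature of a key type whose GetHashCode collides heavily, since every colliding insert degenerates into a linear scan of one bucket. A quick Python analogue of the effect, with invented key classes:
      import time

      class BadKey:
          """Key whose hash collides for every instance, so each insert
          must linearly compare against everything already stored."""
          def __init__(self, a, b):
              self.a, self.b = a, b
          def __hash__(self):
              return 42                          # worst possible hash
          def __eq__(self, other):
              return (self.a, self.b) == (other.a, other.b)

      class GoodKey(BadKey):
          def __hash__(self):
              return hash((self.a, self.b))      # spreads keys across buckets

      for cls in (GoodKey, BadKey):
          d, t0 = {}, time.perf_counter()
          for i in range(2000):
              d[cls(i, i)] = i
          print(cls.__name__, round(time.perf_counter() - t0, 3), "s")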

    Read the article

  • MySQL Ratings From Two Tables

    - by DirtyBirdNJ
    I am using MySQL and PHP to build a data layer for a flash game. Retrieving lists of levels is pretty easy, but I've hit a roadblock in trying to fetch each level's average rating along with its pointer information. Here is an example data set:
    levels table:
      level_id | level_name
             1 | Some Level
             2 | Second Level
             3 | Third Level
    ratings table:
      rating_id | level_id | rating_value
              1 |        1 |            3
              2 |        1 |            4
              3 |        1 |            1
              4 |        2 |            3
              5 |        2 |            4
              6 |        2 |            1
              7 |        3 |            3
              8 |        3 |            4
              9 |        3 |            1
    I know this requires a join, but I cannot figure out how to get the average rating value based on the level_id when I request a list of levels. This is what I'm trying to do:
      SELECT levels.level_id,
             AVG(ratings.level_rating WHERE levels.level_id = ratings.level_id)
      FROM levels
    I know my SQL is flawed there, but I can't figure out how to get this concept across. The only thing I can get to work is returning a single average from the entire ratings table, which is not very useful. Ideal output from the above conceptually valid but syntactically awry query would be:
      level_id | level_rating
             1 | 3.34
             2 | 1.00
             3 | 4.54
    My main issue is I can't figure out how to use the level_id of each response row before the query has been returned. It's like I want to use a placeholder... or an alias... I really don't know and it's very frustrating. The solution I have in place now is an EPIC band-aid and will only cause me problems long term... please help!
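    A hedged sketch of the usual answer: join the two tables, then GROUP BY the level so AVG() is computed once per level rather than over the whole table. Shown here against an in-memory SQLite database via Python's standard library, but the same SELECT works in MySQL:
      import sqlite3

      con = sqlite3.connect(":memory:")        # stand-in for the MySQL database
      con.executescript("""
          CREATE TABLE levels  (level_id INTEGER, level_name TEXT);
          CREATE TABLE ratings (rating_id INTEGER, level_id INTEGER,
                                rating_value REAL);
          INSERT INTO levels  VALUES (1, 'Some Level'), (2, 'Second Level');
          INSERT INTO ratings VALUES (1, 1, 3), (2, 1, 4), (3, 1, 1),
                                     (4, 2, 3), (5, 2, 4), (6, 2, 1);
      """)

      # GROUP BY is the missing "placeholder": it gathers each level's
      # ratings so AVG() runs once per level, not once over the table.
      rows = con.execute("""
          SELECT l.level_id, l.level_name, AVG(r.rating_value)
          FROM levels l
          LEFT JOIN ratings r ON r.level_id = l.level_id
          GROUP BY l.level_id, l.level_name
      """).fetchall()
      print(rows)   # [(1, 'Some Level', 2.66...), (2, 'Second Level', 2.66...)]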

    Read the article

  • C: Can't figure out how to simplify this code with files manipulation

    - by Bon_chan
    Hey guys, I have been working on this code but I can't find out what is wrong. The program does compile and run, but it ends up hitting a fatal error. I have a file called myFile.txt with the following content:
      James------ 07.50
      Anthony--- 17.00
    And here is the code:
      int main()
      {
          int n = 2, valueTest = 0, count = 0;
          FILE* file = NULL;
          float temp = 00.00f, average = 00.00f, flTen = 10.00f;
          float *totalNote = (float*)malloc(n*sizeof(float));
          int position = 0;
          char selectionNote[5+1], nameBuffer[10+1], noteBuffer[5+1];

          file = fopen("c:\\myFile.txt","r");
          fseek(file, 10, SEEK_SET);
          while (valueTest < 2)
          {
              fscanf(file, "%5s", &selectionNote);
              temp = atof(selectionNote);
              totalNote[position] = temp;
              position++;
              valueTest++;
          }
          for (int counter = 0; counter < 2; counter++)
          {
              average += totalNote[counter];
          }
          printf("The total is : %f \n", average);
          rewind(file);
          printf("here is the one with less than 10.00 :\n");
          while (count < 2)
          {
              fscanf(file, "%10s", &nameBuffer);
              fseek(file, 10, SEEK_SET);
              fscanf(file, "%5s", &noteBuffer);
              temp = atof(noteBuffer);
              if (temp < flTen)
              {
                  printf("%s who has %f\n", nameBuffer, temp);
              }
              fseek(file, 1, SEEK_SET);
              count++;
          }
          fclose(file);
      }
    I am pretty new to C and find it more difficult than C# or Java, and I would like to get some suggestions to help me get better. I think this code could be simpler. Do you think the same?
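    For comparison only, since the question asks whether the code could be simpler: the same read-total-and-filter logic sketched in Python, assuming the two-column layout shown above (a dash-padded name, then a grade). Not a C answer, but it shows how little bookkeeping the task needs:
      grades = {}
      with open("myFile.txt") as fh:
          for line in fh:
              name, value = line.split()
              grades[name.rstrip("-")] = float(value)

      print("The total is:", sum(grades.values()))
      print("Here is the one with less than 10.00:")
      for name, value in grades.items():
          if value < 10.0:
              print(f"{name} who has {value:.2f}")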

    Read the article

  • C++ new memory allocation fragmentation

    - by tamulj
    I was trying to look at the behavior of the new allocator and why it doesn't place data contiguously. My code:
      struct ci
      {
          char c;
          int i;
      };

      template <typename T>
      void memTest()
      {
          T * pLast = new T();
          for(int i = 0; i < 20; ++i)
          {
              T * pNew = new T();
              cout << (pNew - pLast) << " ";
              pLast = pNew;
          }
      }
    So I ran this with char, int and ci. Most allocations were a fixed distance from the last; sometimes there were odd jumps from one available block to another.
      sizeof(char): 1, average jump: 64 bytes
      sizeof(int): 4, average jump: 16
      sizeof(ci): 8 (the int has to be placed on a 4-byte alignment), average jump: 9
    Can anyone explain why the allocator is fragmenting memory like this? Also, why is the jump for char so much larger than for ints and for a structure that contains both an int and a char?

    Read the article

  • Array loading with doubles in C

    - by user2892120
    I am trying to load a 3x8 array of doubles, but my code keeps outputting 0.00 for all of the values. The code should be outputting the array (same as the input) under the Read#1 Read#2 Read#3 lines, with the average under Average. Here is my code:
      #include <stdio.h>

      double getAvg(double num1, double num2, double num3);

      int main()
      {
          int numJ, month, day, year, i, j;
          double arr[3][8];
          scanf("%d %d %d %d", &numJ, &month, &day, &year);
          for (i = 0; i < 8; i++)
          {
              scanf("%f %f %f", &arr[i][0], &arr[i][1], &arr[i][2]);
          }
          printf("\nJob %d Date: %d/%d/%d", numJ, month, day, year);
          printf("\n\nLocation Read#1 Read#2 Read#3 Average");
          for (j = 0; j < 8; j++)
          {
              printf("\n %d %.2f %.2f %.2f %.2f", j+1, arr[j][0], arr[j][1],
                     arr[j][2], getAvg(arr[j][0], arr[j][1], arr[j][2]));
          }
          return 0;
      }

      double getAvg(double num1, double num2, double num3)
      {
          double avg = (num1 + num2 + num3) / 3;
          return avg;
      }
    Input example:
      157932 09 01 2013
      0.00 0.00 0.00
      0.36 0.27 0.23
      0.18 0.16 0.26
      0.27 0.00 0.34
      0.24 0.00 0.31
      0.16 0.33 0.36
      0.29 0.36 0.00
      0.21 0.36 0.00

    Read the article

  • The most efficient way to calculate an integral in a dataset range

    - by Annalisa
    I have an array of 10 rows by 20 columns. Each column corresponds to a data set that cannot be fitted with any sort of continuous mathematical function (it's a series of numbers derived experimentally). I would like to calculate the integral of each column between row 4 and row 8, then store the obtained result in a new array (20 rows x 1 column). I have tried using different scipy.integrate modules (e.g. quad, trapz, ...). The problem is that, from what I understand, scipy.integrate must be applied to functions, and I am not sure how to convert each column of my initial array into a function. As an alternative, I thought of calculating the average of each column between row 4 and row 8, then multiplying this number by 4 (i.e. 8-4=4, the x-interval) and storing it in my final 20x1 array. The problem is... ehm... that I don't know how to calculate the average over a given range. The questions I am asking are:
    - Which method is more efficient/straightforward?
    - Can integrals be calculated over a data set like the one that I have described?
    - How do I calculate the average over a range of rows?
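    For what it's worth, the sample-based routines do not need a function object: numpy.trapz (and scipy.integrate.trapz) integrate an array of samples directly along an axis. A minimal sketch, assuming unit spacing between rows:
      import numpy as np

      data = np.random.rand(10, 20)      # stand-in for the 10x20 array

      # Rows 4..8 (1-based) are indices 3..7, i.e. the slice 3:8.
      window = data[3:8, :]

      # Trapezoidal integration down each column, assuming unit row
      # spacing; one value per column -> shape (20,).
      integrals = np.trapz(window, axis=0)

      # The averaging alternative from the question: the mean over the
      # same rows times the x-interval (8 - 4 = 4).
      approx = window.mean(axis=0) * 4

      print(integrals.shape, approx.shape)      # (20,) (20,)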

    Read the article

  • How to adjust and combine multiple lower-quality photos into one better photo using FOSS?

    - by Vi
    I have multiple noisy photos (captured without a tripod) that need to be adjusted (moved/rotated) and averaged. What is the best way to do this on Linux with FOSS console-based programs? My current way is something like:
      mplayer mf://*.JPG -vo yuv4mpeg:file=qqq.yuv
      transcode -i qqq.yuv -y null -J stabilize=maxshift=500:fieldsize=100:fieldnum=6:stepsize=50:shakiness=10
      transcode -i qqq.yuv -J transform=smoothing=100000:sharpen=0:optzoom=0 -y raw -o www.yuv
      mplayer www.yuv -vo pnm
      gm convert -average 0*.ppm q.ppm
    That is:
    - Convert the photos to video.
    - Apply Transcode's "stabilize" filter.
    - Convert the video back to images.
    - Average the images.
    It works, but badly: the photos are still not perfectly aligned, and the whole sequence is very slow. What is a better way of doing it?
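    If a scriptable alternative to the gm convert step is useful: once the frames are aligned (assumed here to be done by the stabilize pass), the averaging itself is a one-liner on a NumPy stack, and averaging N aligned frames cuts random noise by roughly a factor of sqrt(N). A sketch in Python with Pillow and NumPy:
      import glob

      import numpy as np
      from PIL import Image

      # Replaces the final "gm convert -average" step; assumes the
      # 0*.ppm frames were already aligned by the transcode passes.
      frames = sorted(glob.glob("0*.ppm"))
      stack = np.stack([np.asarray(Image.open(f), dtype=np.float64)
                        for f in frames])

      mean = stack.mean(axis=0)                 # per-pixel average
      Image.fromarray(mean.astype(np.uint8)).save("averaged.png")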

    Read the article

  • Brightness keeps changing in Windows 8.1 (on Macbook Pro Retina)

    - by gzak
    Before anyone gets too excited, it's not the "Adaptive Brightness" feature of the OS. I've already turned that off. Also it seems to have nothing to do with ambient light. It actually seems to do with the average "color" of the display. If I'm working in dark-themed Visual Studio, the brightness "pops" brighter. When I switch to the browser, it "pops" darker. So it's kind of adaptive brightness based on average pixel color (or something like that). What makes it rather annoying is that the brightness pops, rather than transitioning gradually. What is this feature, and how do I disable it (or at least make it smoother)?

    Read the article

  • How to monitor current output/receive queue length in Linux

    - by IZhen
    I want to check the capacity and performance of my network. Besides checking txkB/s and rxkB/s via Sar, I'd also like to see the average queue length of the network interface (so that the average queueing time in the interface can be calculated). It seems that netstat can give a per-socket queue length; is it possible to get per-interface statistics (a bit like Network Interface\Output Queue Length in Windows)? A related, and kind of reverse, question is How do I view the TCP Send and Receive Queue sizes on Windows? Thanks
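    Not a complete answer, but on the send side Linux does expose a current per-interface queue: iproute2's tc reports a backlog for each egress qdisc, which can be sampled periodically and averaged. A hedged sketch in Python (the output format varies with qdisc and tc version, and receive queues are not covered by egress qdiscs):
      import re
      import subprocess

      def qdisc_backlog(dev="eth0"):
          """Sample the egress qdisc backlog for one interface by parsing
          `tc -s qdisc show dev <dev>` from iproute2. Returns
          (bytes, packets), or (0, 0) if no backlog line is reported."""
          out = subprocess.run(["tc", "-s", "qdisc", "show", "dev", dev],
                               capture_output=True, text=True).stdout
          m = re.search(r"backlog\s+(\d+)b\s+(\d+)p", out)
          return (int(m.group(1)), int(m.group(2))) if m else (0, 0)

      print(qdisc_backlog("eth0"))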

    Read the article

< Previous Page | 109 110 111 112 113 114 115 116 117 118 119 120  | Next Page >