Search Results

Search found 2042 results on 82 pages for 'average'.

Page 74/82 | < Previous Page | 70 71 72 73 74 75 76 77 78 79 80 81  | Next Page >

  • ! Extra }, or forgotten \endgroup. latex

    - by gzou
    Hey, I've run into this LaTeX formatting problem; can anyone offer some help? The .tex file: \begin{table}{} \renewcommand{\arraystretch}{1.1} \caption{Cambridge Flow feature definition and description} \label{cambridge-feature}} \centering \begin{tabular}{|c|c|} \hline\bfseries Abbreviation &\bfseries Description\\ \hline serv-port & Server port\\ \hline clnt-port & Client port\\ \hline push-pkts-serv & count of all packets with\\ & push bit set in TCP header (server to client)\\ \hline init-win-bytes-clnt & the total number of bytes \\ & sent in initial window (client to server)\\ \hline init-win-bytes-serv & the total number of bytes sent\\ & in initial window (server to client)\\ \hline avg-seg-size-clnt & average segment size: \\ & data bytes divided by number of packets\\ \hline IP-bytes-med-clnt & median of total bytes in IP packet\\ \hline act-data-pkt-serv & count of packets with at least one byte \\ & of TCP data payload (server to client)\\ \hline data-bytes-var-clnt & variance of total \\ & bytes in packets (client to server)\\ \hline min-seg-size-serv & minimum segment size \\ & observed (server to client)\\ \hline RTT-samples-serv & total number of RTT samples\\ & found (server to client),\\ & {\bf see also \cite{Moore05discriminators}}\\ \hline push-pkts-clnt & count of all packets with push bit set \\ & in TCP header (server to client)\\ \hline \end{tabular} \end{table} and the error message: ! Extra }, or forgotten \endgroup. \@endfloatbox ...pagefalse \outer@nobreak \egroup \color@endbox l.892 \end{table} I've deleted a group-closing symbol because it seems to be spurious, as in $x}$'. But perhaps the } is legitimate and you forgot something else, as in\hbox{$x}'. In such cases the way to recover is to insert both the forgotten and the deleted material, e.g., by typing `I$}'. There is no $ in my table, the { are matched with the }, and even after I comment out the citation the error remains. Can anyone offer help? I really appreciate all the comments! ! Extra }, or forgotten \endgroup.

    Read the article

  • sql: Group by x,y,z; return grouped by x,y with lowest f(z)

    - by Sai Emrys
    This is for http://cssfingerprint.com I collect timing stats about how fast the different methods I use perform on different browsers, etc., so that I can optimize the scraping speed. Separately, I have a report about what each method returns for a handful of URLs with known-correct values, so that I can tell which methods are bogus on which browsers. (Each is different, alas.) The related tables look like this: CREATE TABLE `browser_tests` ( `id` int(11) NOT NULL AUTO_INCREMENT, `bogus` tinyint(1) DEFAULT NULL, `result` tinyint(1) DEFAULT NULL, `method` varchar(255) DEFAULT NULL, `url` varchar(255) DEFAULT NULL, `os` varchar(255) DEFAULT NULL, `browser` varchar(255) DEFAULT NULL, `version` varchar(255) DEFAULT NULL, `created_at` datetime DEFAULT NULL, `updated_at` datetime DEFAULT NULL, `user_agent` varchar(255) DEFAULT NULL, PRIMARY KEY (`id`) ) ENGINE=InnoDB AUTO_INCREMENT=33784 DEFAULT CHARSET=latin1 CREATE TABLE `method_timings` ( `id` int(11) NOT NULL AUTO_INCREMENT, `method` varchar(255) DEFAULT NULL, `batch_size` int(11) DEFAULT NULL, `timing` int(11) DEFAULT NULL, `os` varchar(255) DEFAULT NULL, `browser` varchar(255) DEFAULT NULL, `version` varchar(255) DEFAULT NULL, `user_agent` varchar(255) DEFAULT NULL, `created_at` datetime DEFAULT NULL, `updated_at` datetime DEFAULT NULL, PRIMARY KEY (`id`) ) ENGINE=InnoDB AUTO_INCREMENT=28849 DEFAULT CHARSET=latin1 (user_agent is broken down pre-insert into browser, version, and os from a small list of recognized values using regex; I keep the original user-agent string just in case.) I have a query like this that tells me the average timing for every non-bogus browser / version / method tuple: select c, avg(bogus) as bog, timing, method, browser, version from browser_tests as b inner join ( select count(*) as c, round(avg(timing)) as timing, method, browser, version from method_timings group by browser, version, method having c > 10 order by browser, version, timing ) as t using (browser, version, method) group by browser, version, method having bog < 1 order by browser, version, timing; Which returns something like: c bog tim method browser version 88 0.8333 184 reuse_insert Chrome 4.0.249.89 18 0.0000 238 mass_insert_width Chrome 4.0.249.89 70 0.0400 246 mass_insert Chrome 4.0.249.89 70 0.0400 327 mass_noinsert Chrome 4.0.249.89 88 0.0556 367 reuse_reinsert Chrome 4.0.249.89 88 0.0556 383 jquery Chrome 4.0.249.89 88 0.0556 863 full_reinsert Chrome 4.0.249.89 187 0.0000 105 jquery Chrome 5.0.307.11 187 0.8806 109 reuse_insert Chrome 5.0.307.11 123 0.0000 110 mass_insert_width Chrome 5.0.307.11 176 0.0000 231 mass_noinsert Chrome 5.0.307.11 176 0.0000 237 mass_insert Chrome 5.0.307.11 187 0.0000 314 reuse_reinsert Chrome 5.0.307.11 187 0.0000 372 full_reinsert Chrome 5.0.307.11 12 0.7500 82 reuse_insert Chrome 5.0.335.0 12 0.2500 102 jquery Chrome 5.0.335.0 [...] I want to modify this query to return only the browser/version/method with the lowest timing - i.e. something like: 88 0.8333 184 reuse_insert Chrome 4.0.249.89 187 0.0000 105 jquery Chrome 5.0.307.11 12 0.7500 82 reuse_insert Chrome 5.0.335.0 [...] How can I do this, while still returning the method that goes with that lowest timing? I could filter it app-side, but I'd rather do this in mysql since it'd work better with my caching.
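
    The poster notes that filtering app-side is an option; purely to illustrate the desired result (keeping only the row with the lowest timing per browser/version group, together with its method), here is a minimal per-group-minimum sketch in Python with made-up row tuples, not the real schema:

        # Rows shaped like the query output above: (c, bog, timing, method, browser, version)
        rows = [
            (88, 0.8333, 184, "reuse_insert", "Chrome", "4.0.249.89"),
            (18, 0.0000, 238, "mass_insert_width", "Chrome", "4.0.249.89"),
            (187, 0.0000, 105, "jquery", "Chrome", "5.0.307.11"),
            (187, 0.8806, 109, "reuse_insert", "Chrome", "5.0.307.11"),
        ]

        best = {}  # (browser, version) -> row with the lowest timing seen so far
        for row in rows:
            key = (row[4], row[5])
            if key not in best or row[2] < best[key][2]:
                best[key] = row

        for row in sorted(best.values(), key=lambda r: (r[4], r[5])):
            print(row)  # one row per browser/version, still carrying its method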

    Read the article

  • Asp.net hosting equivalent of Dreamhost (pricing, features and support)

    - by Cherian
    Disclaimer: I have browsed http://stackoverflow.com/questions/tagged/asp.net+hosting and didn’t find anything quite similar in value to Dreamhost. One of the biggest impediments IMHO for developing web applications on asp.net is the cost of deployment. I am not talking about building sites like Stackoverflow.com or plentyoffish.com. This is about sites that are bigger than brochureware and smaller than ones that require dedicated servers. Let me give you an example. xmec.org is an asp.net site I maintain for my college alumni. On an average it’s slated to hit around 1000-1100 views per day. At present it’s hosted on godaddy. The service is so damn pathetic; I am using it only because of the lack of options. The site doesn’t scale (no, it’s not the code) and the web control panels are extremely slow. The money I pay doesn’t justify the service or the performance. Every deployment push is a visit to the infuriating web control panel to set the permissions and the root directories. Had I developed it in python, this would have been deployed on Dreamhost.com with $10/year hosting fees (they have offers running all throughout) 50 GB space 5 MySQL Databases Shell / FTP Users POP / SMTP Access Unlimited Domains hosting Unlimited Sub domains hosting Unlimited Domains Forwarded/Mirrored Custom DNS (These are the only ones I could think of. More at the feature page) With a dream host shell, I even have a svn checked-out version of wordpress for my blog. Now, that’s control! To my question: Is there any asp.net (preferably .net 3.5. Dreamhost keeps on updating versions every fortnight) hosting company providing remotely similar feature-sets and pricing like Dreamhost. My requirements are: Less than $15-25/ year Typical WISP minus PHP .net 3.5 SP1 Full Trust mode(I can live with medium trust, if not for the IL emitting libraries) Isolated Application Pool 5 – 10 MySQL db’s Unlimited domain hosting MsSql 2005 or 2008 FTP support At Least 5 GB space SMTP IIS 7 Log files Accessibility Moderately good control panel Scripting, shell support Nominal bandwidth Another case in point: Recently I’ve been contemplating building a tool-website to find duplicates and weird characters in my Google contacts and fix them. With asp.net, the best part is that I can do this with LINQ to XML in less than 100 lines of code. What’s bad is the hosting part. I don’t think I stand to make any money out of this and therefore can’t afford to host it on GoGrid or DiscountAsp.net. Godaddy is not an option either. If I do this in python, I can push to this my existing $10 Dreamhost account with another domain pointed. No extra cost. Svn exported with scripts (capability) to change the connection string! Looking at the problem holistically, I think I represent a large breed of programmers playing it cheap and experimenting different things on a regular basis, one of which will become the next twitter/digg.

    Read the article

  • Resizing an image using mouse dragging (C#)

    - by Gaax
    Hi all. I'm having some trouble resizing an image just by dragging the mouse. I found an average resize method and now am trying to modify it to use the mouse instead of given values. The way I'm doing it makes sense to me but maybe you guys can give me some better ideas. I'm basically using the distance between the current location of the mouse and the previous location of the mouse as the scaling factor. If the distance between the current mouse location and the center of the image is less than the distance between previous mouse location and the center of the image then the image gets smaller, and vice-versa. With the code below I'm getting an Argument Exception (invalid parameter) when creating the new bitmap with the new height and width and I really don't understand why... any ideas? private static Image resizeImage(Image imgToResize, System.Drawing.Point prevMouseLoc, System.Drawing.Point currentMouseLoc) { int sourceWidth = imgToResize.Width; int sourceHeight = imgToResize.Height; float dCurrCent = 0; //Distance between current mouse location and the center of the image float dPrevCent = 0; //Distance between previous mouse location and the center of the image float dCurrPrev = 0; //Distance between current mouse location and the previous mouse location int sign = 1; System.Drawing.Point imgCenter = new System.Drawing.Point(); float nPercent = 0; imgCenter.X = imgToResize.Width / 2; imgCenter.Y = imgToResize.Height / 2; // Calculating the distance between the current mouse location and the center of the image dCurrCent = (float)Math.Sqrt(Math.Pow(currentMouseLoc.X - imgCenter.X, 2) + Math.Pow(currentMouseLoc.Y - imgCenter.Y, 2)); // Calculating the distance between the previous mouse location and the center of the image dPrevCent = (float)Math.Sqrt(Math.Pow(prevMouseLoc.X - imgCenter.X, 2) + Math.Pow(prevMouseLoc.Y - imgCenter.Y, 2)); // Calculating the sign value if (dCurrCent >= dPrevCent) { sign = 1; } else { sign = -1; } nPercent = sign * (float)Math.Sqrt(Math.Pow(currentMouseLoc.X - prevMouseLoc.X, 2) + Math.Pow(currentMouseLoc.Y - prevMouseLoc.Y, 2)); int destWidth = (int)(sourceWidth * nPercent); int destHeight = (int)(sourceHeight * nPercent); Bitmap b = new Bitmap(destWidth, destHeight); // exception thrown here Graphics g = Graphics.FromImage((Image)b); g.InterpolationMode = InterpolationMode.HighQualityBicubic; g.DrawImage(imgToResize, 0, 0, destWidth, destHeight); g.Dispose(); return (Image)b; }
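
    A likely culprit for the ArgumentException is that nPercent is a signed pixel distance rather than a ratio, so destWidth/destHeight can come out as zero or negative. A small sketch of a ratio-based scale factor (Python for illustration only; the question's code is C#, and the names are hypothetical):

        import math

        def scale_factor(prev, curr, center, min_scale=0.05):
            # Ratio of the current and previous distances to the image centre,
            # clamped so the resulting bitmap dimensions stay positive.
            d_curr = math.hypot(curr[0] - center[0], curr[1] - center[1])
            d_prev = math.hypot(prev[0] - center[0], prev[1] - center[1])
            if d_prev == 0:  # starting exactly on the centre: leave the size alone
                return 1.0
            return max(d_curr / d_prev, min_scale)

        # A 640x480 image with the mouse moving away from the centre -> factor > 1
        w, h = 640, 480
        factor = scale_factor((350, 240), (400, 240), (w / 2, h / 2))
        print(int(w * factor), int(h * factor))  # larger than 640x480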

    Read the article

  • TeamCity Scheduled Build not getting all files from VSS

    - by Kate
    Within TeamCity if I trigger a build it all works correctly, however if the Scheduler triggers a build it does not seem to get all the files from VSS. I have clean checkout directory turned on, so I am not sure how it determines the patch for the VSS root. Does anyone have any suggestions on how I can get it to always get all files, and create a new patch each time? I have put the start of two build logs below, as you can see the first one has the correct 249mb, whereas the second only transfers 2MB. The files it doesn't get from VSS seem sporadic and not in relation to what has changed. Manual Trigger [23:57:49]: Checking for changes [00:09:04]: Clean build enabled: removing old files from C:\Builds\Ab 2.0 [00:09:04]: Clearing temporary directory: C:\TeamCity\buildAgent\temp\buildTmp [00:09:05]: Checkout directory: C:\Builds\Ab 2.0 [00:09:05]: Updating sources: server side checkout... (24m:53s) [00:09:05]: [Updating sources: server side checkout...] Will perform clean checkout [00:09:05]: [Updating sources: server side checkout...] Clean checkout reasons [00:09:05]: [Clean checkout reasons] Checkout directory is empty or doesn't exist [00:09:05]: [Clean checkout reasons] "Clean all files before build" turned on [00:09:05]: [Updating sources: server side checkout...] Transferring cached clean patch for VCS root: Ab 2.0 [00:09:42]: [Updating sources: server side checkout...] Building incremental patch over the cached patch [00:31:50]: [Updating sources: server side checkout...] Transferring repository sources: 124.0Mb so far... [00:32:18]: [Updating sources: server side checkout...] Repository sources transferred: 249.46Mb total [00:32:18]: [Updating sources: server side checkout...] Average transfer speed: 183.40Kb per second Triggered by the Scheduler [07:45:01]: Checking for changes [07:55:09]: Clean build enabled: removing old files from C:\Builds\Ab 2.0 [07:55:22]: Clearing temporary directory: C:\TeamCity\buildAgent\temp\buildTmp [07:55:22]: Checkout directory: C:\Builds\Ab 2.0 [07:55:22]: Updating sources: server side checkout... (24m:24s) [07:55:22]: [Updating sources: server side checkout...] Will perform clean checkout [07:55:22]: [Updating sources: server side checkout...] Clean checkout reasons [07:55:22]: [Clean checkout reasons] Checkout directory is empty or doesn't exist [07:55:22]: [Clean checkout reasons] "Clean all files before build" turned on [07:55:22]: [Updating sources: server side checkout...] Building clean patch for VCS root: Ab 2.0 [08:19:46]: [Updating sources: server side checkout...] Transferring cached clean patch for VCS root: Ab 2.0 [08:19:47]: [Updating sources: server side checkout...] Repository sources transferred: 2.01Mb total

    Read the article

  • Is my understanding of "select distinct" correct?

    - by paxdiablo
    We recently discovered a performance problem with one of our systems and I think I have the fix but I'm not certain my understanding is correct. In simplest form, we have a table blah into which we accumulate various values based on a key field. The basic form is: recdate date rectime time system varchar(20) count integer accum1 integer accum2 integer There are a lot more accumulators than that but they're all of the same form. The primary key is made up of recdate, rectime and system. As values are collected to the table, the count for a given recdate/rectime/system is incremented and the values for that key are added to the accumulators. That means the averages can be obtained by using accumN / count. Now we also have a view over that table specified as follows: create view blah_v ( recdate, rectime, system, count, accum1, accum2 ) as select distinct recdate, rectime, system, count, value (case when count > 0 then accum1 / count end, 0), value (case when count > 0 then accum2 / count end, 0) from blah; In other words, the view gives us the average value of the accumulators rather than the sums. It also makes sure we don't get a divide-by-zero in those cases where the count is zero (these records do exist and we are not allowed to remove them so don't bother telling me they're rubbish - you're preaching to the choir). We've noticed that the time difference between doing: select distinct recdate from XX varies greatly depending on whether we use the table or the view. I'm talking about the difference being 1 second for the table and 27 seconds for the view (with 100K rows). We actually tracked it back to the select distinct. What seems to be happening is that the DBMS is actually loading all the rows in and sorting them so as to remove duplicates. That's fair enough, it's what we stupidly told it to do. But I'm pretty sure the fact that the view includes every component of the primary key means that it's impossible to have duplicates anyway. We've validated the problem since, if we create another view without the distinct, it performs at the same speed as the underlying table. I just wanted to confirm my understanding that a select distinct can not have duplicates if it includes all the primary key components. If that's so, then we can simply change the view appropriately.

    Read the article

  • Learning to create beautiful /next-generation GUI

    - by ShaChris23
    I really want to create a stunning-looking GUI desktop application that looks like, for example: Mac OS X interface Picasa desktop client on windows IPhone apps Office 2007 I've mostly been programming GUI using Qt/Swing/WinForm and I'm tired of creating so plain looking GUI, lol. So I was thinking about diving into stuff like: jQuery WPF/C# iPhone SDK Silverlight Adobe Air/Flex Just to get some ideas on how to create really cool looking UI. Does that sound like a good list? Any developers here that could share what platform they use to create very cool looking desktop app? On a sidenote, I really wonder what developers at Apple / Microsoft use to develop their own cool-looking software. EDIT A lot of responses talk about the importance of usability over "cool-looking".. I totally agree that usability and simplicity are the most important aspects of user interface design. I've been doing GUI development for a while now ( 3 years), so that I understand. But using cool-looking UI also improves user experience + it could make big difference on whether or not your software sells. I mean, otherwise why would Microsoft/Apple try to make their OS UI look "cooler" everytime there's a new version? Why would websites like pragprog.com, or versionsapp.com. make their websites look like that? Basically you kill 2 birds with one stone: stunnning-looking UI + super usability (because it looks simple and intuitive). That is what I'm striving for. And as far as I know, I cannot achieve that using Qt/Winform. Most of the books I have read just show you how to make average-looking (read: 1990's) UI. I want to learn how to create cool-looking UI. And the only place I see cool-looking UIs these days are the technology I list above. I'm not enamored with any technology; but I just want to know how things are done in other technology to see if I could apply them to the technology I'm using, or see if I could use those technology in my line of work. An example: if I were to pick between this UI and this UI, I probably would pick the latter, if just based on looks alone. Functionally, they are just the same UI; they both allow you to keep track of your time. They both contain buttons and textboxes, etc. But the fact that they look different, also differentiate them in terms of attractiveness. So in all, I think the "ice on the cake" is very important. I would say it's the thing you strive for after you are certain you have a totally intuitive, usable UI.

    Read the article

  • Coming Up with a Good Algorithm for a Simple Idea

    - by mkoryak
    I need to come up with an algorithm that does the following: Lets say you have an array of positive numbers (e.g. [1,3,7,0,0,9]) and you know beforehand their sum is 20. You want to abstract some average amount from each number such that the new sum would be less by 7. To do so, you must follow these rules: you can only subtract integers the resulting array must not have any negative values you can not make any changes to the indices of the buckets. The more uniformly the subtraction is distributed over the array the better. Here is my attempt at an algorithm in JavaScript + underscore (which will probably make it n^2): function distributeSubtraction(array, goal){ var sum = _.reduce(arr, function(x, y) { return x + y; }, 0); if(goal < sum){ while(goal < sum && goal > 0){ var less = ~~(goal / _.filter(arr, _.identity).length); //length of array without 0s arr = _.map(arr, function(val){ if(less > 0){ return (less < val) ? val - less : val; //not ideal, im skipping some! } else { if(goal > 0){ //again not ideal. giving preference to start of array if(val > 0) { goal--; return val - 1; } } else { return val; } } }); if(goal > 0){ var newSum = _.reduce(arr, function(x, y) { return x + y; }, 0); goal -= sum - newSum; sum = newSum; } else { return arr; } } } else if(goal == sum) { return _.map(arr, function(){ return 0; }); } else { return arr; } } var goal = 7; var arr = [1,3,7,0,0,9]; var newArray = distributeSubtraction(arr, goal); //returned: [0, 1, 5, 0, 0, 7]; Well, that works but there must be a better way! I imagine the run time of this thing will be terrible with bigger arrays and bigger numbers. edit: I want to clarify that this question is purely academic. Think of it like an interview question where you whiteboard something and the interviewer asks you how your algorithm would behave on a different type of a dataset.
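
    For comparison, the same distribution can be done in one pass plus a sort (largest-remainder style). A Python sketch of that idea, assuming goal <= sum(values) as in the question:

        def distribute_subtraction(values, goal):
            # Subtract `goal` from the list roughly in proportion to each entry,
            # using integer cuts only and never driving an entry below zero.
            total = sum(values)
            if total == 0 or goal <= 0:
                return list(values)
            cuts = [goal * v // total for v in values]          # proportional floor cuts
            remainder = goal - sum(cuts)
            by_fraction = sorted(range(len(values)),            # largest fractional share first
                                 key=lambda i: goal * values[i] % total,
                                 reverse=True)
            for i in by_fraction:
                if remainder == 0:
                    break
                if cuts[i] < values[i]:                         # still has room above zero
                    cuts[i] += 1
                    remainder -= 1
            return [v - c for v, c in zip(values, cuts)]

        print(distribute_subtraction([1, 3, 7, 0, 0, 9], 7))    # -> [1, 2, 4, 0, 0, 6]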

    Read the article

  • Help me stabilize this jRun configuration (CF9/Win2k3/IIS6)

    - by jfrobishow
    Not sure if this would be better suited for ServerFault, but since I am not an admin but a developer I figured I would try SO. We've been struggling to keep our multi-server configuration stable for quite some time now. At the end of last month we were running under CF 7.0.2 on a two servers setup (one instance each). At that point we managed to get our uptime to around 1 week per instance before they would restart by themselves. Since the beginning of the month we upgraded to CF 9 and we're back to square one with multi-restart a day. Our current configuration is 2 Win2k3 servers, running a cluster of 4 instances, 2 instances per server. At this point we are pretty certain this is due to improper JVM settings. We've been toying with them and while some are more stable than others we never quite got it right. From the default: java.args=-server -Xmx512m -Dsun.io.useCanonCaches=false -XX:MaxPermSize=192m -XX:+UseParallelGC -Dcoldfusion.rootDir={application.home}/ To currently: java.args=-server -Xmx896m -Dsun.io.useCanonCaches=false -XX:MaxPermSize=512m -XX:SurvivorRatio=8 -XX:TargetSurvivorRatio=90 -XX:+UseParallelGC -Dcoldfusion.rootDir={application.home}/ -verbose:gc -Xloggc:c:/Jrun4/logs/gc/gcInstance1b.log We have determined that we do need more than the default 512MB simply by monitoring with FusionReactor, on average our amount of memory consumed is hovering in the mid 300MB and can go up to low 700MB under heavy load. Most of the crash will be logged in jrun4/bin/hs_err_pid*.log always an "Out of swap space" I've attached links to the hs_err and garbage collector log file from yesterday at the bottom of the post. The relevant part is (I think) this: Heap PSYoungGen total 89856K, used 19025K [0x55490000, 0x5b6f0000, 0x5b810000) eden space 79232K, 16% used [0x55490000,0x561a64c0,0x5a1f0000) from space 10624K, 52% used [0x5ac90000,0x5b20e2f8,0x5b6f0000) to space 10752K, 0% used [0x5a1f0000,0x5a1f0000,0x5ac70000) PSOldGen total 460416K, used 308422K [0x23810000, 0x3f9b0000, 0x55490000) object space 460416K, 66% used [0x23810000,0x36541bb8,0x3f9b0000) PSPermGen total 107520K, used 106079K [0x03810000, 0x0a110000, 0x23810000) object space 107520K, 98% used [0x03810000,0x09fa7e40,0x0a110000) From it, I gather that its the PSPermGen that is full (most logs will show the same before a crash), which is why we increased MaxPermSize but the total still show as 107520K!??! No one here is a jRun expert, so any help or even ideas on what to try next would be greatly appreciated!! The log files: Sorry I know sendspace isn't the friendliest of places - if you have other host suggestion for log files let me know and I'll update the post (SO doesn't like them inline, it blows up the format of the post). The hs_err log file: http://www.sendspace.com/file/fgak8l The gc log: http://www.sendspace.com/file/w0r2ct

    Read the article

  • How to best integrate generated code

    - by Arne
    I am evaluating the use of code generation for my flight simulation project. More specifically there is a requirement to allow "the average engineer" (no offense, I am one myself) to define the differential equations that describe the dynamic system in a more natural syntax than C++ provides. The idea is to devise an abstract descriptor language that can be easily understood and edited to generate C++ code from. This descriptor is supplied by the modeling engineer and used by the ones implementing and maintaining the simulation environment to generate code. I've got something like this in mind: model Aircraft has state x1, x2; state x3; input double : u; input bool : flag1, flag2; algebraic double : x1x2; model Engine : tw1, tw2; model Gear : gear; model ISA : isa; trim routine HorizontalFlight; trim routine OnGround, General; constant double : c1, c2; constant int : ci1; begin differential equations x1' = x1 + 2.*x2; x2' = x2 + x1x2; begin algebraic equations x1x2 = x1*x2 + x1'; end model It is important to retain the flexibility of the C language, thus the descriptor language is meant to only define certain parts of the definition and implementation of the model class. This way one engineer provides the model in the form of the descriptor language as exemplified above and the maintenance engineer will add all the code to read parameters from files, start/stop/pause the execution of the simulation and decide how a concrete object gets instantiated. My first thought is to generate two files from the descriptor file: one .h file containing declarations and one .cpp file containing the implementation of certain functions. These then need to be #included at appropriate places: [File Aircraft.h] class Aircraft { public: Aircraft(..); // hand-written constructor void ReadParameters(string &file_name); // hand-written private: /* more hand-written boiler-plate code */ /* generated declarations follow */ #include "Aircraft.generated.decl" }; [File Aircraft.cpp] Aircraft::Aircraft(..) { /* hand-written constructor implementation */ } /* more hand-written implementation code */ /* generated implementation code follows */ #include "Aircraft.generated.impl" Any thoughts or suggestions?
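
    To make the split concrete, here is a tiny, hypothetical generator sketch (Python; the descriptor parsing is skipped and names such as ComputeDerivatives are illustrative, not taken from the question) that writes the two generated fragments which the hand-written Aircraft.h / Aircraft.cpp then #include:

        # Assume the descriptor has already been parsed into a dict like this.
        model = {
            "name": "Aircraft",
            "states": ["x1", "x2", "x3"],
            "derivatives": {"x1": "x1 + 2.*x2", "x2": "x2 + x1x2"},
        }

        decl = [f"double {s}, {s}_dot;" for s in model["states"]]
        impl = [f"void {model['name']}::ComputeDerivatives() {{"]
        impl += [f"    {s}_dot = {expr};" for s, expr in model["derivatives"].items()]
        impl.append("}")

        # These land next to the hand-written files and are pulled in via #include.
        with open(f"{model['name']}.generated.decl", "w") as f:
            f.write("\n".join(decl) + "\n")
        with open(f"{model['name']}.generated.impl", "w") as f:
            f.write("\n".join(impl) + "\n")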

    Read the article

  • Question SpeechSynthesizer.SetOutputToAudioStream audio format problem

    - by Chris Kugler
    Hi, I'm currently working on an application which requires transmission of speech encoded to a specific audio format. System.Speech.AudioFormat.SpeechAudioFormatInfo synthFormat = new System.Speech.AudioFormat.SpeechAudioFormatInfo(System.Speech.AudioFormat.EncodingFormat.Pcm, 8000, 16, 1, 16000, 2, null); This states that the audio is in PCM format, 8000 samples per second, 16 bits per sample, mono, 16000 average bytes per second, block alignment of 2. When I attempt to execute the following code there is nothing written to my MemoryStream instance; however when I change from 8000 samples per second up to 11025 the audio data is written successfully. SpeechSynthesizer synthesizer = new SpeechSynthesizer(); waveStream = new MemoryStream(); PromptBuilder pbuilder = new PromptBuilder(); PromptStyle pStyle = new PromptStyle(); pStyle.Emphasis = PromptEmphasis.None; pStyle.Rate = PromptRate.Fast; pStyle.Volume = PromptVolume.ExtraLoud; pbuilder.StartStyle(pStyle); pbuilder.StartParagraph(); pbuilder.StartVoice(VoiceGender.Male, VoiceAge.Teen, 2); pbuilder.StartSentence(); pbuilder.AppendText("This is some text."); pbuilder.EndSentence(); pbuilder.EndVoice(); pbuilder.EndParagraph(); pbuilder.EndStyle(); synthesizer.SetOutputToAudioStream(waveStream, synthFormat); synthesizer.Speak(pbuilder); synthesizer.SetOutputToNull(); There are no exceptions or errors recorded when using a sample rate of 8000 and I couldn't find anything useful in the documentation regarding SetOutputToAudioStream and why it succeeds at 11025 samples per second and not 8000. I have a workaround involving a wav file that I generated and converted to the correct sample rate using some sound editing tools, but I would like to generate the audio from within the application if I can. One particular point of interest was that the SpeechRecognitionEngine accepts that audio format and successfully recognized the speech in my synthesized wave file... Update: Recently discovered that this audio format succeeds for certain installed voices, but fails for others. It fails specifically for LH Michael and LH Michelle, and failure varies for certain voice settings defined in the PromptBuilder.

    Read the article

  • Feeding PDF through IInternetSession to WebBrowser control - Error

    - by Codesleuth
    As related to my previous question, I have developed a temporary asynchronous pluggable protocol with the specific aim to be able to serve PDF documents directly to a WebBrowser control via a database. I need to do this because my limitations include not being able to access the disk other than IsolatedStorage; and a MemoryStream would be far better for serving up PDF documents that average around 31kb. Unfortunately the code doesn't work, and I'm getting an error from the WebBrowser control (i.e. IE): Unable to download . Unable to open this Internet site. The requested site is either unavailable or cannot be found. Please try again later. The line in my code where this occurs is within the following: pOIProtSink.ReportData(BSCF.BSCF_LASTDATANOTIFICATION, (uint)_stream.Length, (uint)_stream.Length); However, if you download the project and run it, you will be able to see the stream is successfully read and passed to the browser, so it seems like there's a problem somewhere to do with the end of reading the data: public uint Read(IntPtr pv, uint cb, out uint pcbRead) { var bytesToRead = Math.Min(cb, _streamBuffer.Length); pcbRead = (uint)_stream.Read(_streamBuffer, 0, (int)bytesToRead); Marshal.Copy(_streamBuffer, 0, pv, (int)pcbRead); return (pcbRead == 0 || pcbRead < cb) ? HRESULT.S_FALSE : HRESULT.S_OK; } Here is the entire sample project: InternetSessionSample.zip (VS2010) I will leave this up for as long as I can to help other people in the future If anyone has any ideas why I might be getting this message and can shed some light on the problem, I would be grateful for the assistance. EDIT: A friend suggested inserting a line that calls the IInternetProtocolSink.ReportProgress with BINDSTATUS_CACHEFILENAMEAVAILABLE pointing at the original file. This prevents it from failing now and shows the PDF in the Adobe Reader control, but means it defeats the purpose of this by having Adobe Reader simply load from the cache file (which I can't provide). See below: pOIProtSink.ReportProgress(BINDSTATUS.BINDSTATUS_CACHEFILENAMEAVAILABLE, @"D:\Visual Studio Solutions\Projects\InternetSessionSample\bin\Debug\sample.pdf"); pOIProtSink.ReportData(BSCF.BSCF_LASTDATANOTIFICATION, (uint)_stream.Length, (uint)_stream.Length); This is progress though, I guess.

    Read the article

  • Why do I get rows of zeros in my 2D fft?

    - by Nicholas Pringle
    I am trying to replicate the results from a paper. "Two-dimensional Fourier Transform (2D-FT) in space and time along sections of constant latitude (east-west) and longitude (north-south) were used to characterize the spectrum of the simulated flux variability south of 40degS." - Lenton et al(2006) The figures published show "the log of the variance of the 2D-FT". I have tried to create an array consisting of the seasonal cycle of similar data as well as the noise. I have defined the noise as the original array minus the signal array. Here is the code that I used to plot the 2D-FT of the signal array averaged in latitude: import numpy as np from numpy import ma from matplotlib import pyplot as plt from Scientific.IO.NetCDF import NetCDFFile ### input directory indir = '/home/nicholas/data/' ### get the flux data which is in ### [time(5day ave for 10 years),latitude,longitude] nc = NetCDFFile(indir + 'CFLX_2000_2009.nc','r') cflux_southern_ocean = nc.variables['Cflx'][:,10:50,:] cflux_southern_ocean = ma.masked_values(cflux_southern_ocean,1e+20) # mask land nc.close() cflux = cflux_southern_ocean*1e08 # change units of data from mmol/m^2/s ### create an array that consists of the seasonal signal fro each pixel year_stack = np.split(cflux, 10, axis=0) year_stack = np.array(year_stack) signal_array = np.tile(np.mean(year_stack, axis=0), (10, 1, 1)) signal_array = ma.masked_where(signal_array > 1e20, signal_array) # need to mask ### average the array over latitude(or longitude) signal_time_lon = ma.mean(signal_array, axis=1) ### do a 2D Fourier Transform of the time/space image ft = np.fft.fft2(signal_time_lon) mgft = np.abs(ft) ps = mgft**2 log_ps = np.log(mgft) log_mgft= np.log(mgft) Every second row of the ft consists completely of zeros. Why is this? Would it be acceptable to add a randomly small number to the signal to avoid this. signal_time_lon = signal_time_lon + np.random.randint(0,9,size=(730, 182))*1e-05 EDIT: Adding images and clarify meaning The output of rfft2 still appears to be a complex array. Using fftshift shifts the edges of the image to the centre; I still have a power spectrum regardless. I expect that the reason that I get rows of zeros is that I have re-created the timeseries for each pixel. The ft[0, 0] pixel contains the mean of the signal. So the ft[1, 0] corresponds to a sinusoid with one cycle over the entire signal in the rows of the starting image. Here are is the starting image using following code: plt.pcolormesh(signal_time_lon); plt.colorbar(); plt.axis('tight') Here is result using following code: ft = np.fft.rfft2(signal_time_lon) mgft = np.abs(ft) ps = mgft**2 log_ps = np.log1p(mgft) plt.pcolormesh(log_ps); plt.colorbar(); plt.axis('tight') It may not be clear in the image but it is only every second row that contains completely zeros. Every tenth pixel (log_ps[10, 0]) is a high value. The other pixels (log_ps[2, 0], log_ps[4, 0] etc) have very low values.
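
    One property worth checking here: the DFT of a signal that is exactly k repeated copies of one cycle is non-zero only at frequency indices that are multiples of k, so a time axis rebuilt by tiling the 10-year mean seasonal cycle produces mostly-zero rows by construction. A small numpy check of that property (illustrative data, not the flux fields above):

        import numpy as np

        rng = np.random.default_rng(0)
        cycle = rng.standard_normal(73)      # one "seasonal cycle" of 73 five-day means
        signal = np.tile(cycle, 10)          # ten identical years -> 730 samples

        spectrum = np.abs(np.fft.rfft(signal))
        nonzero = np.nonzero(spectrum > 1e-9)[0]
        print(nonzero[:8])                   # only multiples of 10: [ 0 10 20 30 40 50 60 70]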

    Read the article

  • Summarising (permanently) data in a SQL table

    - by Cylindric
    Geetings, Stackers. I have a huge number of data-points in a SQL table, and I want to summarise them in a way reminiscent of RRD. Assuming a table such as ID | ENTITY_ID | SCORE_DATE | SCORE | SOME_OTHER_DATA ----+-----------+------------+-------+----------------- 1 | A00000001 | 01/01/2010 | 100 | some data 2 | A00000002 | 01/01/2010 | 105 | more data 3 | A00000003 | 01/01/2010 | 104 | various text ... | ......... | .......... | ..... | ... ... | A00009999 | 01/01/2010 | 101 | ... | A00000001 | 02/01/2010 | 104 | ... | A00000002 | 02/01/2010 | 119 | ... | A00000003 | 02/01/2010 | 119 | ... | ......... | .......... | ..... | ... | A00009999 | 02/01/2010 | 101 | arbitrary data ... | ......... | .......... | ..... | ... ... | A00000001 | 01/02/2010 | 104 | ... | A00000002 | 01/02/2010 | 119 | ... | A00000003 | 01/01/2010 | 119 | I want to end up with one record per entity, per month: ID | ENTITY_ID | SCORE_DATE | SCORE | ----+-----------+------------+-------+ ... | A00000001 | 01/01/2010 | 100 | ... | A00000002 | 01/01/2010 | 105 | ... | A00000003 | 01/01/2010 | 104 | ... | A00000001 | 01/02/2010 | 100 | ... | A00000002 | 01/02/2010 | 105 | ... | A00000003 | 01/02/2010 | 104 | (I Don't care about the SOME_OTHER_DATA - I'll pick something - either the first or last record probably.) What's an easy way of doing this on a regular basis, so that anything in the last calendar month is summarised in this way? At the moment my plan is kind of: For each EntityID For each month Find average score for all records in given month Update first record with results of previous step Delete all records that aren't the first I can't think of a neat way of doing it though, that doesn't involve lots of updates and iteration. This can either be done in a SQL Stored Procedure, or it can be incorporated into the .Net app that's generating this data, so the solution doesn't really need to be "one big SQL script", but can be :) (SQL-2005)

    Read the article

  • Performance of SHA-1 Checksum from Android 2.2 to 2.3 and Higher

    - by sbrichards
    In testing the performance of: package com.srichards.sha; import android.app.Activity; import android.os.Bundle; import android.widget.TextView; import java.io.IOException; import java.io.InputStream; import java.security.MessageDigest; import java.security.NoSuchAlgorithmException; import java.util.zip.ZipEntry; import java.util.zip.ZipFile; import com.srichards.sha.R; public class SHAHashActivity extends Activity { /** Called when the activity is first created. */ @Override public void onCreate(Bundle savedInstanceState) { super.onCreate(savedInstanceState); setContentView(R.layout.main); TextView tv = new TextView(this); String shaVal = this.getString(R.string.sha); long systimeBefore = System.currentTimeMillis(); String result = shaCheck(shaVal); long systimeResult = System.currentTimeMillis() - systimeBefore; tv.setText("\nRunTime: " + systimeResult + "\nHas been modified? | Hash Value: " + result); setContentView(tv); } public String shaCheck(String shaVal){ try{ String resultant = "null"; MessageDigest digest = MessageDigest.getInstance("SHA1"); ZipFile zf = null; try { zf = new ZipFile("/data/app/com.blah.android-1.apk"); // /data/app/com.blah.android-2.apk } catch (IOException e) { // TODO Auto-generated catch block e.printStackTrace(); } ZipEntry ze = zf.getEntry("classes.dex"); InputStream file = zf.getInputStream(ze); byte[] dataBytes = new byte[32768]; //65536 32768 int nread = 0; while ((nread = file.read(dataBytes)) != -1) { digest.update(dataBytes, 0, nread); } byte [] rbytes = digest.digest(); StringBuffer sb = new StringBuffer(""); for (int i = 0; i< rbytes.length; i++) { sb.append(Integer.toString((rbytes[i] & 0xff) + 0x100, 16).substring(1)); } if (shaVal.equals(sb.toString())) { resultant = ("\nFalse : " + "\nFound:\n" + sb.toString() + "|" + "\nHave:\n" + shaVal); } else { resultant = ("\nTrue : " + "\nFound:\n" + sb.toString() + "|" + "\nHave:\n" + shaVal); } return resultant; } catch (IOException e) { e.printStackTrace(); } catch (NoSuchAlgorithmException e) { e.printStackTrace(); } return null; } } On a 2.2 Device I get average runtime of ~350ms, while on newer devices I get runtimes of 26-50ms which is substantially lower. I'm keeping in mind these devices are newer and have better hardware but am also wondering if the platform and the implementation affect performance much and if there is anything that could reduce runtimes on 2.2 devices. Note, the classes.dex of the .apk being accessed is roughly 4MB. Thanks!

    Read the article

  • Detecting abuse for post rating system

    - by Steven smethurst
    I am using a WordPress plugin called "GD Star Rating" to allow my users to vote on stories that I post to one of my websites, http://everydayfiction.com/ Recently we have been having a lot of abuse of the system: stories that have obviously been voted up artificially. "GD Star Rating" creates some detailed logs when a user votes on a story, including IP, time of vote, user_agent, etc. For example this story has 181 votes with an average of 5.7: http://www.everydayfiction.com/snowman-by-shaun-simon/ Most other stories only get around ~40 votes each day. At first I thought that the story got on to a social bookmarking site (Digg, StumbleUpon, etc.), but after checking the logs I found that this story is getting the same amount of traffic that a normal story gets (~2k-3k). I checked whether all the votes for this particular story were coming from the same IP address. I could see this happening if a user was at a school's computer lab using all their lab computers to vote up this story. Not one duplicate IP address in the log for this story. SELECT ip, COUNT(*) as count FROM wp_gdsr_votes_log WHERE id=3932 GROUP BY (ip ) ORDER BY count DESC Next I thought that a user might be using a proxy to vote up a story. I checked this by grouping all the browser user_agents together to see if there was a single browser voting in a particular way. At most 7 users were using a similar browser but voted sporadically (1-5), so no evidence of wrongdoing. SELECT user_agent, COUNT(*) as count FROM wp_gdsr_votes_log WHERE id=3932 GROUP BY ( user_agent) ORDER BY count DESC The next check was to see if all the votes came in at once. Maybe someone has a really interesting bot that can change the user_agent and use proxies, etc. At most 5 votes came within 2 minutes of each other. There doesn't seem to be any regularity in how people vote (i.e. a 5 vote does not come in once a minute). SELECT * FROM wp_gdsr_votes_log WHERE id =3932 AND vote=5 ORDER BY wp_gdsr_votes_log.voted DESC The obvious solution to this problem is to force people to log in before they are allowed to vote, but I would prefer not to go down that route unless it is absolutely necessary. I'm looking for suggestions on things to test for to detect the abuse.

    Read the article

  • Matlab cell length

    - by AP
    Ok I seem to have got the most of the problem solved, I just need an expert eye to pick my error as I am stuck. I have a file of length [125 X 27] and I want to convert it to a file of length [144 x 27]. Now, I want to replace the missing files (time stamps) rows of zeros. (ideally its a 10 min daily average thus should have file length of 144) Here is the code I am using: fid = fopen('test.csv', 'rt'); data = textscan(fid, ['%s' repmat('%f',1,27)], 'HeaderLines', 1, 'Delimiter', ','); fclose(fid); %//Make time a datenum of the first column time = datenum(data{1} , 'mm/dd/yyyy HH:MM') %//Find the difference in minutes from each row timeDiff = round(diff(datenum(time)*(24*60))) %//the rest of the data data = cell2mat(data(2:28)); newdata=zeros(144,27); for n=1:length(timeDiff) if timeDiff(n)==10 newdata(n,:)=data(n,:); newdata(n+1,:)=data(n+1,:); else p=timeDiff(n)/10 n=n+p; end end Can somebody please help me to find the error inside my for loop. My output file seems to miss few timestamped values. %*********************************************************************************************************** Can somebody help me to figure out the uiget to read the above file?? i am replacing fid = fopen('test.csv', 'rt'); data = textscan(fid, ['%s' repmat('%f',1,27)], 'HeaderLines', 1, 'Delimiter', ','); fclose(fid); With [c,pathc]=uigetfile({'*.txt'},'Select the file','C:\data'); file=[pathc c]; file= textscan(c, ['%s' repmat('%f',1,27)], 'HeaderLines', 1, 'Delimiter', ','); And its not working % NEW ADDITION to old question p = 1; %index into destination for n = 1:length(timeDiff) % if timeDiff(n) == 10 % newfile(p,:) = file(n,:); % newfile(p+1,:)=file(n+1,:); % p = p + 1; % else % p = p + (timeDiff(n)/10); % end q=cumsum(timeDiff(n)/10); if q==1 newfile(p,:)=file(n,:); p=p+1; else p = p + (timeDiff(n)/10); end end xlswrite('testnewws11.xls',newfile); even with the cumsum command this code fails when my file has 1,2 time stamps in middle of long missing ones example 8/16/2009 0:00 5.34 8/16/2009 0:10 3.23 8/16/2009 0:20 2.23 8/16/2009 0:30 1.23 8/16/2009 0:50 70 8/16/2009 2:00 5.23 8/16/2009 2:20 544 8/16/2009 2:30 42.23 8/16/2009 3:00 71.23 8/16/2009 3:10 3.23 My output looks like 5.34 3.23 2.23 0 0 0 0 0 0 0 0 0 5.23 544. 42.23 0 0 0 3.23 Any ideas?
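
    The bookkeeping gets simpler if each record is written straight into the slot index (hour*60 + minute)/10 of a pre-zeroed 144-row array instead of walking timeDiff; a sketch of that indexing idea in Python (the question itself is MATLAB, and the sample rows below are hypothetical):

        from datetime import datetime

        rows = [                              # timestamped values with gaps
            ("8/16/2009 0:00", 5.34),
            ("8/16/2009 0:10", 3.23),
            ("8/16/2009 0:50", 70.0),
            ("8/16/2009 2:00", 5.23),
        ]

        full_day = [0.0] * 144                # 144 ten-minute slots per day
        for stamp, value in rows:
            t = datetime.strptime(stamp, "%m/%d/%Y %H:%M")
            full_day[(t.hour * 60 + t.minute) // 10] = value

        print(full_day[:7])                   # [5.34, 3.23, 0.0, 0.0, 0.0, 70.0, 0.0]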

    Read the article

  • Calculating rotation and translation matrices between two odometry positions for monocular linear triangulation

    - by user1298891
    Recently I've been trying to implement a system to identify and triangulate the 3D position of an object in a robotic system. The general outline of the process goes as follows: Identify the object using SURF matching, from a set of "training" images to the actual live feed from the camera Move/rotate the robot a certain amount Identify the object using SURF again in this new view Now I have: a set of corresponding 2D points (same object from the two different views), two odometry locations (position + orientation), and camera intrinsics (focal length, principal point, etc.) since it's been calibrated beforehand, so I should be able to create the 2 projection matrices and triangulate using a basic linear triangulation method as in Hartley & Zissermann's book Multiple View Geometry, pg. 312. Solve the AX = 0 equation for each of the corresponding 2D points, then take the average In practice, the triangulation only works when there's almost no change in rotation; if the robot even rotates a slight bit while moving (due to e.g. wheel slippage) then the estimate is way off. This also applies for simulation. Since I can only post two hyperlinks, here's a link to a page with images from the simulation (on the map, the red square is simulated robot position and orientation, and the yellow square is estimated position of the object using linear triangulation.) So you can see that the estimate is thrown way off even by a little rotation, as in Position 2 on that page (that was 15 degrees; if I rotate it any more then the estimate is completely off the map), even in a simulated environment where a perfect calibration matrix is known. In a real environment when I actually move around with the robot, it's worse. There aren't any problems with obtaining point correspondences, nor with actually solving the AX = 0 equation once I compute the A matrix, so I figure it probably has to do with how I'm setting up the two camera projection matrices, specifically how I'm calculating the translation and rotation matrices from the position/orientation info I have relative to the world frame. How I'm doing that right now is: Rotation matrix is composed by creating a 1x3 matrix [0, (change in orientation angle), 0] and then converting that to a 3x3 one using OpenCV's Rodrigues function Translation matrix is composed by rotating the two points (start angle) degrees and then subtracting the final position from the initial position, in order to get the robot's straight and lateral movement relative to its starting orientation Which results in the first projection matrix being K [I | 0] and the second being K [R | T], with R and T calculated as described above. Is there anything I'm doing really wrong here? Or could it possibly be some other problem? Any help would be greatly appreciated.
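
    One convention that often causes exactly this symptom: with projection matrices written as x = K[R | t]X, the translation column is not the raw difference of the two odometry positions but t = -R·C, where C is the second camera centre expressed in the first camera's frame. A numpy sketch of that convention (the vertical-axis yaw and the intrinsics here are assumptions, not values from the question):

        import numpy as np

        def projection_from_pose(K, yaw_rad, C):
            # Camera at centre C (in the first camera's frame), rotated by yaw_rad
            # about the vertical axis; uses x = K [R | t] X with t = -R @ C.
            c, s = np.cos(yaw_rad), np.sin(yaw_rad)
            R = np.array([[  c, 0.0,   s],
                          [0.0, 1.0, 0.0],
                          [ -s, 0.0,   c]])
            t = -R @ np.asarray(C, dtype=float)
            return K @ np.hstack([R, t.reshape(3, 1)])

        K = np.array([[500.0,   0.0, 320.0],     # assumed intrinsics
                      [  0.0, 500.0, 240.0],
                      [  0.0,   0.0,   1.0]])
        P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
        P2 = projection_from_pose(K, np.deg2rad(15.0), [0.5, 0.0, 1.0])
        print(P2)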

    Read the article

  • How to stream semi-live audio over internet

    - by Thomas Tempelmann
    I want to write something like Skype, i.e. I have a constant audio stream on one computer and then recompress it in a format that's suitable for a latent internet connection, receive it on the other end and play it. Let's also assume that the internet connection is fairly modern and fast, i.e. DSL or alike, no slow connections over phone and such. The involved computers will also be rather modern (Dual Core Intel CPUs at 2GHz or more). I know how to handle the audio on the machines. What I don't know is how to transmit the audio in an efficient way. The challenges are: I'd like get good audio quality across the line. The stream should be received without drops. The stream may, however, be received with a little delay (a second delay is acceptable). I imagine that the transport software could first determine the average (and max) latency, then start the stream and tell the receiver to wait for that max latency before starting to play the audio. With that, if the latency doesn't get any higher, the entire stream will be playable on the other side without stutter or drops. If, due to unexpected IP latencies or blockages, the stream does get cut off, I want to be able to notice this so that I can take actions (e.g. abort the stream) and eventually start a new transmission. What are my options if I want do use ready-made software for the compression and tranmission? I have no intention to write my own audio compression engine, really. OTOH, I plan to sell the solution in a vertical market, meaning I can afford a few dollars of license fees per copy, but not $100s. I guess the simplest solution would be to just open a TCP stream, send a few packets back and forth to determine their running time (or even use UDP for that), then use the results as the guide for my max latency value, then simply fire the audio data in its raw form (uncompressed 16 bit stereo), along with a timing code over the TCP connection. The receiver reads the data and plays it with the pre-determined delay. That might just work with the type of fast connection I expect. I just wonder if there are better solutions to reach this goal, with better performance (lower latency) and less data (compressed). BTW, I first try to implement this on OS X, but might want to do it on Windows, too, if it proves successful.

    Read the article

  • How to get the top keys from a hash by value

    - by Kirs Kringle
    I have a hash that I sorted by values greatest to least. How would I go about getting the top 5? There was a post on here that talked about getting only one value. What is the easiest way to get a key with the highest value from a hash in Perl? I understand that so would lets say getting those values add them to an array and delete the element in the hash and then do the process again? Seems like there should be an easier way to do this then that though. My hash is called %words. use strict; use warnings; use Tk; #Learn to install here: http://factscruncher.blogspot.com/2012/01/easy-way-to-install-tk- on-strawberry.html #Reading in the text file my $file0 = Tk::MainWindow->new->Tk::getOpenFile; open( my $filehandle0, '<', $file0 ) || die "Could not open $file0\n"; my @words; while ( my $line = <$filehandle0> ) { chomp $line; my @word = split( /\s+/, lc($line)); push( @words, @word ); } for (@words) { s/[\,|\.|\!|\?|\:|\;|\"]//g; } #Counting words that repeat; put in hash my %words_count; $words_count{$_}++ for @words; #Reading in the stopwords file my $file1 = "stoplist.txt"; open( my $filehandle1, '<', $file1 ) or die "Could not open $file1\n"; my @stopwords; while ( my $line = <$filehandle1> ) { chomp $line; my @linearray = split( " ", $line ); push( @stopwords, @linearray ); } for my $w ( my @stopwords ) { s/\b\Q$w\E\B//ig; } #Comparing the array to Hash and deleteing stopwords my %words = %words_count; for my $stopwords ( @stopwords ) { delete $words{ $stopwords }; } #Sorting Hash Table my @keys = sort { $words{$b} <=> $words{$a} or "\L$a" cmp "\L$b" } keys %words; #Starting Statistical Work my $value_count = 0; my $key_count = 0; #Printing Hash Table $key_count = keys %words; foreach my $key (@keys) { $value_count = $words{$key} + $value_count; printf "%-20s %6d\n", $key, $words{$key}; } my $value_average = $value_count / $key_count; #my @topwords; #foreach my $key (@keys){ #if($words{$key} > $value_average){ # @topwords = keys %words; # } #} print "\n", "The number of values: ", $value_count, "\n"; print "The number of elements: ", $key_count, "\n"; print "The Average: ", $value_average, "\n\n";

    Read the article

  • CodeIgniter Third party class not loading

    - by Jatin Soni
    I am trying to implement Dashboard widget class (found here: http://harpanet.com/programming/php/codeigniter/dashboard/index#installation) but it is giving me error Unable to load the requested class I have tried to add this class in autoload as well as menually to my controller $this->load->library('dash') but this also giving the same error. I have checked dash.php and found below method private function __example__() but can't understand what the developer is saying in comment. class Dash { private function __example__() { /* * This function is purely to show an example of a dashboard method to place * within your own controller. */ // load third_party hArpanet dashboard library $this->load->add_package_path(APPPATH.'third_party/hArpanet/hDash/'); $dash =& $this->load->library('dash'); $this->load->remove_package_path(APPPATH.'third_party/hArpanet/hDash/'); // configure dashboard widgets - format: type, src, title, cols, alt (for images) $dash->widgets = array( array('type'=>'oop', 'src'=>'test_dash', 'title'=>'Test OOP Widget', 'cols'=>3), // if 'title' is set to FALSE, the title block is omitted entirely // note: this is an 'html' widget but is being fed content from a local method array('type'=>'html', 'src'=>self::test_method(), 'title'=>false, 'cols'=>3), array('type'=>'file', 'src'=>'saf_inv.htm', 'title'=>'Safety Investigation'), // multi-content widget - set widget title in outer array (also note use of CI anchor to create a link) array('title'=>anchor('tz', 'TARGET ZERO'), // sub-content follows same array format as single content widget // 'img' content can also have an 'alt' text array('type'=>'img', 'src'=>'saf_tzout.gif', 'alt'=>'Action Completed'), array('type'=>'file', 'src'=>'saf_tz.htm'), array('type'=>'file', 'src'=>'ave_close.htm', 'title'=>'Average Time to Close') ), array('type'=>'file', 'src'=>'saf_meet.htm', 'title'=>'Safety Meeting'), array('type'=>'file', 'src'=>'saf_acc.htm', 'title'=>'Accident Investigation'), array('type'=>'file', 'src'=>'saf_hazmat.htm', 'title'=>anchor('hazmat', 'HAZMAT')), array('type'=>'file', 'src'=>'saf_cont.htm', 'title'=>'Loss of Containment'), array('type'=>'file', 'src'=>'saf_worksinfo.htm', 'title'=>'Works Information'), // an action widget - 'clear' will generate a blank widget with a style of clear:both array('type'=>'clear'), // multi-content widget - width can be set using the 'cols' param in outer array array('title'=>'RAG Report', 'cols' => 2, array('type'=>'file', 'src'=>'saf_rag.htm'), array('type'=>'img', 'src'=>'ProcSaf.gif')), array('type'=>'file', 'src'=>'saf_chrom.htm', 'title'=>'Chrome checks'), ); // populate the view variable $widgets = $dash->build('safety'); // render the dashboard $this->load->view('layout_default', $widgets); } ................... } // end of Dash class Installation path is root/application/third_party/hArpanet/hDash/libraries/dash.php How can I load this class to my system and use widgets?

    Read the article

  • What are some things you'd like fresh college grads to know?

    - by bradhe
    So I proposed this to the Reddit community and I'd like to get SO's perspective on this. This is pretty much the copypasta of what I put there. I was thinking about this last night and thought it would be neat to compile a list. I'm still a pretty fresh college grad -- been in industry for 2 years -- but I think that I might have a few interesting things to lend. You don't know as much as you think you do. Somehow, college students think they know a lot more than they do (or maybe that was just me). Likewise, they think they can do more than they actually can. You should fairly assess your skills. QA people are not out to get you. Humans introduce bugs to code. It's not (nescessarily) a personal reflection on you and your skills if your code has a bug and it's caught by the QA/testing team. Listen to your senior (developers). They are not actually fuddy duddies who don't know about the new L337 hax in Ruby (okay, sometimes they are, but still...). They have a wealth of knowledge that you can learn from and it's in your best interest to do so. You will most likely not be doing what you want to for a while. This is mostly true in the corporate world -- startups are a different matter. Also, this is due to more than just the economy, man! Junior devs need to earn their keep, so to speak. Everyone wants to be lead dev on the next project and there are a lot of people in line ahead of you! For every elite developer there are 100 average developers. Joel Spolsky, I'm looking at you. Somehow this concept of ninja coders has really ingrained itself in our culture. While I encourage you to be the best you can be don't be disappointed if people aren't writing blog posts about you in the near future. Anyone else have anything they would see added to this list?

    Read the article

  • Database schema for simple stats project

    - by Bubnoff
    Backdrop: I have a file hierarchy of cvs files for multiple locations named by dates they cover ...by month specifically. Each cvs file in the folder is named after the location. eg', folder name: 2010-feb contains: location1.csv location2.csv Each CSV file holds records like this: 2010-06-28, 20:30:00 , 0 2010-06-29, 08:30:00 , 0 2010-06-29, 09:30:00 , 0 2010-06-29, 10:30:00 , 0 2010-06-29, 11:30:00 , 0 meaning of record columns ( column names ): Date, time, # of sessions I have a perl script that pulls the data from this mess and originally I was going to store it as json files, but am thinking a database might be more appropriate long term ...comparing year to year trends ...fun stuff like that. Pt 2 - My question/problem: So I now have a REST service that coughs up json with a test database. My question is [ I suck at db design ], how best to design a database backend for this? I am thinking the following tables would suffice and keep it simple: Location: (PK)location_code, name session: (PK)id, (FK)location_code, month, hour, num_sessions I need to be able to average sessions (plus min and max) for each hour across days of week in addition to days of week in a given month or months. I've been using perl hashes to do this and am trying to decide how best to implement this with a database. Do you think stored procedures should be used? As to the database, depending on info gathered here, it will be postgresql or sqlite. If there is no compelling reason for postgresql I'll stick with sqlite. How and where should I compare the data to hours of operation. I am storing the hours of operation in a yaml file. I currently 'match' the hour in the data to a hash from the yaml to do this. Would a database open simpler methods? I am thinking I would do this comparison as I do now then insert the data. Can be recalled with: SELECT hour, num_sessions FROM session WHERE location_code=LOC1 Since only hours of operation are present, I do not need to worry about it. Should I calculate all results as I do now then store as a stats table for different 'reports'? This, rather than processing on demand? How would this look? Anyway ...I ramble. Thanks for reading! Bubnoff
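
    If the hour (and, for the cross-day reports, the day of week) is stored as its own column, the averages fall out of a single grouped query rather than a stored procedure. A minimal sqlite3 sketch of that shape (hypothetical names; the dow column is an assumption added for the per-day-of-week reports):

        import sqlite3

        con = sqlite3.connect(":memory:")
        con.executescript("""
            CREATE TABLE location (location_code TEXT PRIMARY KEY, name TEXT);
            CREATE TABLE session (
                id INTEGER PRIMARY KEY,
                location_code TEXT REFERENCES location(location_code),
                month TEXT, dow INTEGER, hour INTEGER, num_sessions INTEGER);
        """)
        con.execute("INSERT INTO location VALUES ('LOC1', 'Main site')")
        con.executemany(
            "INSERT INTO session (location_code, month, dow, hour, num_sessions) VALUES (?,?,?,?,?)",
            [("LOC1", "2010-06", 1, 9, 4), ("LOC1", "2010-06", 2, 9, 6), ("LOC1", "2010-06", 1, 10, 0)])

        # Average / min / max sessions per hour across days of week for one location
        query = """SELECT hour, AVG(num_sessions), MIN(num_sessions), MAX(num_sessions)
                   FROM session WHERE location_code = 'LOC1' GROUP BY hour ORDER BY hour"""
        for row in con.execute(query):
            print(row)                        # e.g. (9, 5.0, 4, 6) and (10, 0.0, 0, 0)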

    Read the article

  • JFrame does not refresh after deleting an image

    - by dajackal
    Hi! I'm working for the first time with images in a JFrame, and I have some problems. I succeeded in putting an image on my JFrame, and now i want after 2 seconds to remove my image from the JFrame. But after 2 seconds, the image does not disappear, unless I resize the frame or i minimize and after that maximize the frame. Help me if you can. Thanks. Here is the code: File f = new File("2.jpg"); System.out.println("Picture " + f.getAbsolutePath()); BufferedImage image = ImageIO.read(f); MyBufferedImage img = new MyBufferedImage(image); img.resize(400, 300); img.setSize(400, 300); img.setLocation(50, 50); getContentPane().add(img); this.setSize(600, 400); this.setLocationRelativeTo(null); this.setVisible(true); Thread.sleep(2000); System.out.println("2 seconds over"); getContentPane().remove(img); Here is the MyBufferedImage class: public class MyBufferedImage extends JComponent{ private BufferedImage image; private int nPaint; private int avgTime; private long previousSecondsTime; public MyBufferedImage(BufferedImage b) { super(); this.image = b; this.nPaint = 0; this.avgTime = 0; this.previousSecondsTime = System.currentTimeMillis(); } @Override public void paintComponent(Graphics g) { Graphics2D g2D = (Graphics2D) g; g2D.setColor(Color.BLACK); g2D.fillRect(0, 0, this.getWidth(), this.getHeight()); long currentTimeA = System.currentTimeMillis(); //g2D.drawImage(this.image, 320, 0, 0, 240, 0, 0, 640, 480, null); g2D.drawImage(image, 0,0, null); long currentTimeB = System.currentTimeMillis(); this.avgTime += currentTimeB - currentTimeA; this.nPaint++; if (currentTimeB - this.previousSecondsTime > 1000) { System.out.format("Drawn FPS: %d\n", nPaint++); System.out.format("Average time of drawings in the last sec.: %.1f ms\n", (double) this.avgTime / this.nPaint++); this.previousSecondsTime = currentTimeB; this.avgTime = 0; this.nPaint = 0; } } }

    Read the article

  • Are there any modern GUI toolkits which implement a heirarchical menu buffer zone?

    - by scomar
    In Bruce Tognazzini's quiz on Fitt's Law, the question discussing the bottleneck in the hierarchical menu (as used in almost every modern desktop UI), talks about his design for the original Mac: The bottleneck is the passage between the first-level menu and the second-level menu. Users first slide the mouse pointer down to the category menu item. Then, they must carefully slide the mouse directly across (horizontally) in order to move the pointer into the secondary menu. The engineer who originally designed hierarchicals apparently had his forearm mounted on a track so that he could move it perfectly in a horizontal direction without any vertical component. Most of us, however, have our forarms mounted on a pivot we like to call our elbow. That means that moving our hand describes an arc, rather than a straight line. Demanding that pivoted people move a mouse pointer along in a straight line horizontally is just wrong. We are naturally going to slip downward even as we try to slide sideways. When we are not allowed to slip downward, the menu we're after is going to slam shut just before we get there. The Windows folks tried to overcome the pivot problem with a hack: If they see the user move down into range of the next item on the primary menu, they don't instantly close the second-level menu. Instead, they leave it open for around a half second, so, if users are really quick, they can be inaccurate but still get into the second-level menu before it slams shut. Unfortunately, people's reactions to heightened chance of error is to slow down, rather than speed up, a well-established phenomenon. Therefore, few users will ever figure out that moving faster could solve their problem. Microsoft's solution is exactly wrong. When I specified the Mac hierarchical menu algorthm in the mid-'80s, I called for a buffer zone shaped like a <, so that users could make an increasingly-greater error as they neared the hierarchical without fear of jumping to an unwanted menu. As long as the user's pointer was moving a few pixels over for every one down, on average, the menu stayed open, no matter how slow they moved. (Cancelling was still really easy; just deliberately move up or down.) This just blew me away! Such a simple idea which would result in a huge improvement in usability. I'm sure I'm not the only one who regularly has the next level of a menu slam shut because I don't move the mouse pointer in a perfectly horizontal line. So my question is: Are there any modern UI toolkits which implement this brilliant idea of a < shaped buffer zone in hierarchical menus? And if not, why not?!

    Read the article
