Search Results

Search found 14292 results on 572 pages for 'high integrity systems'.


  • Code golf: Word frequency chart

    - by ChristopheD
    The challenge: Build an ASCII chart of the most commonly used words in a given text.

    The rules:

    - Only accept a-z and A-Z (alphabetic characters) as part of a word.
    - Ignore casing (She == she for our purpose).
    - Ignore the following words (quite arbitrary, I know): the, and, of, to, a, i, it, in, or, is.
    - Clarification: considering don't: this would be taken as 2 different 'words' in the ranges a-z and A-Z: (don and t).
    - Optionally (it's too late to be formally changing the specifications now) you may choose to drop all single-letter 'words' (this could potentially make for a shortening of the ignore list too).

    Parse a given text (read a file specified via command line arguments or piped in; presume us-ascii) and build us a word frequency chart with the following characteristics:

    - Display the chart (also see the example below) for the 22 most common words (ordered by descending frequency).
    - The bar width represents the number of occurrences (frequency) of the word (proportionally). Append one space and print the word.
    - Make sure these bars (plus space-word-space) always fit: bar + [space] + word + [space] should always be <= 80 characters (make sure you account for possibly differing bar and word lengths: e.g. the second most common word could be a lot longer than the first while not differing as much in frequency).
    - Maximize bar width within these constraints and scale the bars appropriately (according to the frequencies they represent).

    An example: The text for the example can be found here (Alice's Adventures in Wonderland, by Lewis Carroll). This specific text would yield the following chart:

         _________________________________________________________________________
        |_________________________________________________________________________| she
        |_______________________________________________________________| you
        |____________________________________________________________| said
        |____________________________________________________| alice
        |______________________________________________| was
        |__________________________________________| that
        |___________________________________| as
        |_______________________________| her
        |____________________________| with
        |____________________________| at
        |___________________________| s
        |___________________________| t
        |_________________________| on
        |_________________________| all
        |______________________| this
        |______________________| for
        |______________________| had
        |_____________________| but
        |____________________| be
        |____________________| not
        |___________________| they
        |__________________| so

    For your information: these are the frequencies the above chart is built upon:

        [('she', 553), ('you', 481), ('said', 462), ('alice', 403), ('was', 358), ('that', 330), ('as', 274), ('her', 248), ('with', 227), ('at', 227), ('s', 219), ('t', 218), ('on', 204), ('all', 200), ('this', 181), ('for', 179), ('had', 178), ('but', 175), ('be', 167), ('not', 166), ('they', 155), ('so', 152)]

    A second example (to check if you implemented the complete spec): Replace every occurrence of you in the linked Alice in Wonderland file with superlongstringstring:

         ________________________________________________________________
        |________________________________________________________________| she
        |_______________________________________________________| superlongstringstring
        |_____________________________________________________| said
        |______________________________________________| alice
        |________________________________________| was
        |_____________________________________| that
        |______________________________| as
        |___________________________| her
        |_________________________| with
        |_________________________| at
        |________________________| s
        |________________________| t
        |______________________| on
        |_____________________| all
        |___________________| this
        |___________________| for
        |___________________| had
        |__________________| but
        |_________________| be
        |_________________| not
        |________________| they
        |________________| so

    The winner: Shortest solution (by character count, per language). Have fun!

    Edit: Table summarizing the results so far (2012-02-15) (originally added by user Nas Banov; single-column entries are placed where the original flat list put them):

        Language            Relaxed  Strict
        ==================  =======  ======
        GolfScript          130      143
        Perl                         185
        Windows PowerShell  148      199
        Mathematica                  199
        Ruby                185      205
        Unix Toolchain      194      228
        Python              183      243
        Clojure                      282
        Scala                        311
        Haskell                      333
        Awk                          336
        R                   298
        Javascript          304      354
        Groovy              321
        Matlab                       404
        C#                           422
        Smalltalk           386
        PHP                          450
        F#                           452
        TSQL                483      507

    The numbers represent the length of the shortest solution in a specific language. "Strict" refers to a solution that implements the spec completely (draws |____| bars, closes the first bar on top with a ____ line, accounts for the possibility of long words with high frequency, etc.). "Relaxed" means some liberties were taken to shorten the solution. Only solutions shorter than 500 characters are included. The list of languages is sorted by the length of the 'strict' solution. 'Unix Toolchain' is used to signify various solutions that use a traditional *nix shell plus a mix of tools (like grep, tr, sort, uniq, head, perl, awk).
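
    For readers who want the spec in runnable form before golfing it down, here is a minimal, non-golfed reference sketch in Python (my own sketch, not one of the submitted solutions; the stop-word list and the 80-column scaling follow the rules above):

        import collections
        import re
        import sys

        # Stop words from the spec.
        IGNORE = {"the", "and", "of", "to", "a", "i", "it", "in", "or", "is"}

        def chart(text, top=22, width=80):
            words = re.findall(r"[a-z]+", text.lower())
            counts = collections.Counter(w for w in words if w not in IGNORE)
            ranked = counts.most_common(top)
            # bar ("|" + underscores + "|") + space + word + space must fit in
            # `width`; the limiting entry is not necessarily the most frequent
            # word, so check every ranked word.
            scale = min((width - 4 - len(w)) / n for w, n in ranked)
            bars = [(int(n * scale), w) for w, n in ranked]
            lines = [" " + "_" * bars[0][0]]          # close the first bar on top
            lines += ["|" + "_" * b + "| " + w for b, w in bars]
            return "\n".join(lines)

        if __name__ == "__main__":
            data = open(sys.argv[1]).read() if len(sys.argv) > 1 else sys.stdin.read()
            print(chart(data))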


  • Why are there so many man-made edge cases in IT, and is there any hope for simplification / unification?

    - by Hamish Grubijan
    This question is meant to generate discussion, and so it is marked as community wiki. My observation is that the field of information technology grows so rapidly and randomly that for many people it takes a lot of time to learn the intricacies of tools that will be obsolete in just 3 short years. If you look at the questions asked on StackOverflow, at least half of them stem from the fact that some language / tool / API / protocol was poorly designed, is backwards, and has gotchas. There are so many things which distract developers from converting English into machine code; instead they spend their time configuring stuff and gluing together things that do not really fit.

    How many times do you pick up somebody else's project (or someone picks up yours :) ) and realize that this program does not need half of the dialogs that it has, and that the logic can be simplified a great deal? But it had to be made and sold here before a better thing was made and sold elsewhere, and hence all this rush. I often wish that things would just slow down. I do not want Microsoft Windows to run on my car's computer, my watch, my table, my toaster oven, and my toilet seat. I'd rather have Windows that DOES NOT HAVE A WINDOWS REGISTRY; I'd rather have Windows that allows two different programs to work on the same file at the same time, the way it works on Unix systems. Microsoft is just an example.

    I am looking forward to the day when I do not have to worry about Windows vs. Unix line breaks; when System32 actually means that the directory contains 32-bit binaries and not 64-bit ones; the day when dll hell and manifest hell are no longer an issue; the day when it takes me a lot less than 3 months on a new job to learn the system. I do not mean learning the entire code base of a product (depending on its size, that can take a long time). I mean remembering which build-assisting scripts are written in Perl (and in which version of it) and which ones are done through .bat files, and when I need to manually make every file in some directory writable before running a script, or else a critical step of a home-grown database maintenance tool will bomb and it will take 2 days to clean that up. Makes me wonder if humans enslaved computers, or if it is the other way around.

    The key is that improving those things will not bring extra revenue, and hence those taking the time to fix crap like that are not "business focused". However, these imperfections irritate me immensely, particularly because my memory is limited - I can hold only a small portion of that useless knowledge of a system in my head at any given point in time. I must not be alone. Did you also happen to notice that a programmer can waste a lot of time on things that should have been a lot more straightforward? Is there hope? Will things get better/simpler in the future, or will there be a lot more IT crap floating around? I suppose I see the diversity of tools, protocols, etc. as a bad thing. Thank you for participation.


  • Flash Media Server Streaming: Content Protection

    - by dbemerlin
    Hi, I have to implement Flash streaming for the relaunch of our video-on-demand system, but either because I haven't worked with Flash-related systems before or because I'm too stupid, I cannot get the system to work the way it has to. We need:

    - Per-file & per-user access control, with checks against a WebService every minute; if the lease time ran out mid-stream: cancelling the stream
    - RTMP streaming
    - Dynamic bandwidth checking
    - Video playback with Flowplayer (existing license)

    I've got the streaming and the bandwidth check working; I just can't seem to get the access control working. I have no idea how I know which file is being played back, or how I can play back a file depending on a key the user has entered.

    Server-side code (main.asc):

        application.onAppStart = function() {
            trace("Starting application");
            this.payload = new Array();
            for (var i = 0; i < 1200; i++) {
                this.payload[i] = Math.random(); // 16K approx
            }
        }

        application.onConnect = function(p_client, p_autoSenseBW) {
            p_client.writeAccess = "";
            trace("client at : " + p_client.uri);
            trace("client from : " + p_client.referrer);
            trace("client page: " + p_client.pageUrl);
            // try to get something from the query string: works
            var i = 0;
            for (i = 0; i < p_client.uri.length; ++i) {
                if (p_client.uri[i] == '?') { ++i; break; }
            }
            var loadVars = new LoadVars();
            loadVars.decode(p_client.uri.substr(i));
            trace(loadVars.toString());
            trace(loadVars['foo']);
            // And accept the connection
            this.acceptConnection(p_client);
            trace("accepted!");
            //this.rejectConnection(p_client);
            // A connection from a Flash 8 & 9 FLV Playback component based
            // client requires the following code.
            if (p_autoSenseBW) {
                p_client.checkBandwidth();
            } else {
                p_client.call("onBWDone");
            }
            trace("Done connecting");
        }

        application.onDisconnect = function(client) {
            trace("client disconnecting!");
        }

        Client.prototype.getStreamLength = function(p_streamName) {
            trace("getStreamLength:" + p_streamName);
            return Stream.length(p_streamName);
        }

        Client.prototype.checkBandwidth = function() {
            application.calculateClientBw(this);
        }

        application.calculateClientBw = function(p_client) {
            /* lots of lines copied from an adobe sample, appear to work */
        }

    Client-side code:

        <head>
            <script type="text/javascript" src="flowplayer-3.1.4.min.js"></script>
        </head>
        <body>
            <a class="rtmp" href="rtmp://xx.xx.xx.xx/vod_project/test_flv.flv"
               style="display: block; width: 520px; height: 330px" id="player"></a>
            <script>
                $f("player", "flowplayer-3.1.5.swf", {
                    clip: { provider: 'rtmp', autoPlay: false, url: 'test_flv' },
                    plugins: {
                        rtmp: {
                            url: 'flowplayer.rtmp-3.1.3.swf',
                            netConnectionUrl: 'rtmp://xx.xx.xx.xx/vod_project?foo=bar'
                        }
                    }
                });
            </script>
        </body>

    My first idea was to get a key from the query string, ask the web service which file and user that key is for, and play the file, but I can't seem to find out how to play a file from the server side. My second idea was to let Flowplayer play a file, pass the key as a query string, and reject the connection if the filename and key don't match, but I can't seem to find out which file it's currently playing. The only remaining idea I have is: create a list of all files the user is allowed to open and set allowReadAccess (or however it was called) to allow those files, but that would be clumsy given the current infrastructure. Any hints? Thanks.
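
    One direction that stays within the calls already shown above (onConnect, LoadVars.decode, acceptConnection/rejectConnection) would be to gate the connection on the key passed in the query string. A minimal sketch; validateKey() is a hypothetical stand-in for the WebService lookup, not an FMS API:

        application.onConnect = function(p_client, p_autoSenseBW) {
            // Parse "rtmp://host/vod_project?key=..." the same way as above.
            var qs = p_client.uri.substr(p_client.uri.indexOf('?') + 1);
            var loadVars = new LoadVars();
            loadVars.decode(qs);

            // validateKey() is hypothetical: it would ask the WebService which
            // file (if any) this key grants access to.
            var allowedFile = validateKey(loadVars['key']);
            if (allowedFile == null) {
                this.rejectConnection(p_client);
                return;
            }
            // Remember the entitlement on the client object so a periodic
            // lease re-check (or a Stream.play wrapper) can compare against it.
            p_client.allowedFile = allowedFile;
            this.acceptConnection(p_client);
        }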


  • C++/msvc6 application crashes due to heap corruption, any hints?

    - by David Alfonso
    Hello all. Let me say first that I'm writing this question after months of trying to find the root of a crash happening in our application. I'll try to detail as much as possible what I've already found out about it.

    About the application:

    - It runs on Windows XP Professional SP2.
    - It's built with Microsoft Visual C++ 6.0 with Service Pack 6.
    - It's MFC based.
    - It uses several external dlls (e.g. Xerces, ZLib or ACE).
    - It has high performance requirements. It does a lot of network and hard disk I/O, but it's also CPU intensive.
    - It has an exception handling mechanism which generates a minidump when an unhandled exception occurs.

    Facts about the crash:

    - It only happens on multiprocessor/multicore machines and under heavy load.
    - It happens at random (neither we nor our client have found a pattern yet). We cannot reproduce the crash in our testing lab. It only happens on some production systems (but always on multicore machines).
    - It always ends up crashing at the same point, although the complete stack is not always the same.

    Let me add the stack of the crashing thread (obtained using WinDbg; sorry, we don't have symbols):

        ChildEBP RetAddr  Args to Child
        WARNING: Stack unwind information not available. Following frames may be wrong.
        030af6c8 7c9206eb 77bfc3c9 01a80000 00224bc3 MyApplication+0x2a85b9
        030af960 7c91e9c0 7c92901b 00000ab4 00000000 ntdll!RtlAllocateHeap+0xeac (FPO: [Non-Fpo])
        030af98c 7c9205c8 00000001 00000000 00000000 ntdll!ZwWaitForSingleObject+0xc (FPO: [3,0,0])
        030af9c0 7c920551 01a80898 7c92056d 313adfb0 ntdll!RtlpFreeToHeapLookaside+0x22 (FPO: [2,0,4])
        030afa8c 4ba3ae96 000307da 00130005 00040012 ntdll!RtlFreeHeap+0x1e9 (FPO: [Non-Fpo])
        030afacc 77bfc2e3 0214e384 3087c8d8 02151030 0x4ba3ae96
        030afb00 7c91e306 7c80bfc1 00000948 00000001 msvcrt!free+0xc8 (FPO: [Non-Fpo])
        030afb20 0042965b 030afcc0 0214d780 02151218 ntdll!ZwReleaseSemaphore+0xc (FPO: [3,0,0])
        030afb7c 7c9206eb 02e6c471 02ea0000 00000008 MyApplication+0x2965b
        030afe60 7c9205c8 02151248 030aff38 7c920551 ntdll!RtlAllocateHeap+0xeac (FPO: [Non-Fpo])
        030afe74 7c92056d 0210bfb8 02151250 02151250 ntdll!RtlpFreeToHeapLookaside+0x22 (FPO: [2,0,4])
        030aff38 77bfc2de 01a80000 00000000 77bfc2e3 ntdll!RtlFreeHeap+0x647 (FPO: [Non-Fpo])
        7c92056d c5ffffff ce7c94be ff7c94be 00ffffff msvcrt!free+0xc3 (FPO: [Non-Fpo])
        7c920575 ff7c94be 00ffffff 12000000 907c94be 0xc5ffffff
        7c920579 00ffffff 12000000 907c94be 90909090 0xff7c94be
        *** WARNING: Unable to verify checksum for xerces-c_2_7.dll
        *** ERROR: Symbol file could not be found. Defaulted to export symbols for xerces-c_2_7.dll
        7c92057d 12000000 907c94be 90909090 8b55ff8b MyApplication+0xbfffff
        7c920581 907c94be 90909090 8b55ff8b 08458bec xerces_c_2_7
        7c920585 90909090 8b55ff8b 08458bec 04408b66 0x907c94be
        7c920589 8b55ff8b 08458bec 04408b66 0004c25d 0x90909090
        7c92058d 08458bec 04408b66 0004c25d 90909090 0x8b55ff8b

    The address MyApplication+0x2a85b9 corresponds to a call to erase() of a std::list.

    What I have tried so far:

    - Reviewing all the code related to the point where the crash happens.
    - Trying to enable pageheap in our testing lab, though nothing useful has been found so far.
    - We have substituted the std::list for a C array, and then it crashes in another part of the code (although it is related code, it's not in the code where the old list resided). Coincidentally, it now crashes in another erase(), though this time of a std::multiset. Let me copy the stack contained in the dump:

        ntdll.dll!_RtlpCoalesceFreeBlocks@16() + 0x124e bytes
        ntdll.dll!_RtlFreeHeap@12() + 0x91f bytes
        msvcrt.dll!_free() + 0xc3 bytes
        MyApplication.exe!006a4fda()
        [Frames below may be incorrect and/or missing, no symbols loaded for MyApplication.exe]
        MyApplication.exe!0069f305()
        ntdll.dll!_NtFreeVirtualMemory@16() + 0xc bytes
        ntdll.dll!_RtlpSecMemFreeVirtualMemory@16() + 0x1b bytes
        ntdll.dll!_ZwWaitForSingleObject@12() + 0xc bytes
        ntdll.dll!_RtlpFreeToHeapLookaside@8() + 0x26 bytes
        ntdll.dll!_RtlFreeHeap@12() + 0x114 bytes
        msvcrt.dll!_free() + 0xc3 bytes
        c5ffffff()

    Possible solutions (that I'm aware of) which cannot be applied:

    - "Migrate the application to a newer compiler": We are working on this, but it's not a solution at the moment.
    - "Enable pageheap (normal or full)": We can't enable pageheap on production machines, as this affects performance heavily.

    I think that's all I remember now; if I have forgotten something I'll add it asap. If you can give me some hint or propose a possible solution, don't hesitate to answer! Thank you in advance for your time and advice.
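
    Since full pageheap is off the table on the production machines, one lighter-weight option (a sketch I'd suggest, not something from the original post) is the CRT debug heap in a special test build: _CrtSetDbgFlag with _CRTDBG_CHECK_ALWAYS_DF validates the heap on every allocation and free, which often moves the crash much closer to the corrupting write. It is still very slow, so it is only practical for a test build under load, never the normal release binary:

        // Debug-build only: requires the debug CRT (/MDd or /MTd).
        #define _CRTDBG_MAP_ALLOC
        #include <crtdbg.h>

        void EnableHeapChecking()
        {
            int flags = _CrtSetDbgFlag(_CRTDBG_REPORT_FLAG);  // read current flags
            flags |= _CRTDBG_ALLOC_MEM_DF       // debug heap with guard bytes
                   | _CRTDBG_CHECK_ALWAYS_DF;   // validate whole heap on each alloc/free
            _CrtSetDbgFlag(flags);
        }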


  • Looking for a function that will split profits/loss equally between 2 business partners.

    - by Hamish Grubijan
    This is not homework, for I am not a student. This is for my general curiosity. I apologize if I am reinventing the wheel here. The function I seek can be defined as follows (language agnostic):

        int getPercentageOfA(double moneyA, double workA, double moneyB, double workB)
        {
            // Perhaps you may assume that workA == 0
            // Compute result
            return result;
        }

    Suppose Alice and Bob want to do business together... such as... selling used books. Alice is only interested in investing money in it and nothing else. Bob might invest some money, but he might have no $ available to invest. He will, however, put in the effort of finding a seller, a buyer, and doing maintenance. There are no tools, education, health insurance costs, or other expenses to consider. Both Alice and Bob wish to split the profits "equally" (a different weight, like 40/60, for advanced users). Both are entrepreneurs, so they deal with low ROI/wage and high income alike. There is no fixed wage, minimum wage, fixed ROI, or minimum ROI. They try to find the best deal possible, assume the risks, and go for it.

    Now, let's stick with the 50/50 model. If Alice invests $100, Bob invests work, and they end up with a profit (or loss) of $60, they will split it equally - either both get $30 for their efforts/investments, or Bob ends up owing $30 to Alice. A second possibility: both Alice and Bob invest $100, then Bob does all the work, and they end up splitting a $60 profit. It looks like Alice should get only $15, because $30 of that profit came from Bob's investment and Bob's effort, so Alice shall have none of it, and the other $30 is to be split 50/50.

    Both of the examples above are trivial, even when A and B want to split it 35/65 or what have you. Now it gets more complicated: what if Alice invests $70, and Bob invests $30 and does all of the work? It appears simple: (70,30) = (30,30) + (40,0)... but if only we knew how to weigh the two parts relative to each other. Another complicated (I think) example: what if Alice and Bob invest $70 and $30 respectively, and also put in an equal amount of work?

    I have a few data points (one simple candidate that fits them is sketched below):

    - When A and B put in the same amount of work and the same $: 50/50.
    - When A puts in 100% of the money and B does 100% of the work: 50/50.
    - When A does all of the work and puts in all of the money: 100 for A / 0 for B (and vice-versa).
    - When A puts in 50% of the money, and B puts in 50% of the money as well as does all of the work: 25 for A and 75 for B (and vice-versa).

    If I fix things such that always workA = 0% and workB = 100% of the total work, then getPercentageOfA becomes a function: a height z given x and y. The question is: how do you extrapolate this function between these several points? What is this function? If you can cover the cases when workA does not have to be 0% of the total work, and when investment vs. work is split as 85/15 or using some other model, then what would the new function be?
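
    For what it's worth, one simple function that reproduces every data point listed above is the average of a partner's money share and work share. This is only one possible interpolation between those points (my own sketch, not the only defensible model), written out in Java:

        public final class ProfitSplit {

            /** Percentage of the profit/loss that goes to A (0..100). */
            static int getPercentageOfA(double moneyA, double workA,
                                        double moneyB, double workB) {
                double moneyShareA = share(moneyA, moneyB);  // A's fraction of the money
                double workShareA  = share(workA, workB);    // A's fraction of the work
                // Weight money and work equally; change the 0.5 weights for a
                // 40/60-style split between capital and labour.
                return (int) Math.round(100 * (0.5 * moneyShareA + 0.5 * workShareA));
            }

            private static double share(double a, double b) {
                return (a + b == 0) ? 0.5 : a / (a + b);  // nobody contributed: call it even
            }

            public static void main(String[] args) {
                // The data points from the question:
                System.out.println(getPercentageOfA(100, 0, 0, 1)); // 50  (A all money, B all work)
                System.out.println(getPercentageOfA(100, 1, 0, 0)); // 100 (A everything)
                System.out.println(getPercentageOfA(50, 0, 50, 1)); // 25  (equal money, B all work)
                System.out.println(getPercentageOfA(70, 1, 30, 1)); // 60  (70/30 money, equal work)
            }
        }

    Under this model, the "$70 + $30 and B does all the work" case comes out as 35/65 for A/B, which is one consistent way to weigh the (30,30) and (40,0) parts against each other.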


  • My server app works strangely. What could be the reason(s)?

    - by Poni
    Hi! I've written a server app (two parts actually: a proxy server and a game server) using C++ (it's a board game). It uses IOCP as the sockets interface. For that app I've also written a "client simulator" (hereafter "client") app that spawns many client connections, where each of them plays at very high speed, getting the CPU to 100% utilization. So, here is the topology:

    - Game server - holds the game state. Real players do not connect to it directly, but through the proxy server. When a player joins a game, the proxy actually asks for it on behalf of that player, and the game server spawns a "player instance" for that player; from then on, every notification between the game server and the player is passed through the proxy.
    - Proxy server - holds the TCP connections with the real players. Players communicate with the game server through it only.
    - Client simulator - connects to the proxy only.

    When running the servers (again, it's actually two server apps) and the client locally, it all works just fine. I'm talking about 40k+ player instances, all of them active in a game. On the other hand, when running a server remotely with, say, 1000 clients who play, things get strange. For example, I run it as said above. Then with Task Manager I kill the client simulator app ("End Process Tree"). Then it seems like the buffer of the remote server got modified by another thread - in other words, memory corruption has occurred. The server crashes because it got an unknown message id (it's a custom protocol where each message has its own unique number).

    To make things clear, here is how I run the apps:

    - PC1 - game server and client simulator (because the clients will connect to the proxy).
    - PC2 - proxy server.

    The strangest thing is this: only the remote side gets "corrupted". "Remote" in the sense that it's not the PC I use to code the app (VC++ 2008). Let's call the PC I use to code the apps "PC1". Now, for example, if this time I run the game server on PC1 (meaning the proxy server is on PC2 and the client simulator on PC1), then the proxy server crashes with an "unknown message id" error. Another variation is when I run the proxy server on PC1 (again, the dev machine) and the game server and the client simulator on PC2; then the game server on PC2 crashes.

    As for the IOCP config: the servers' internal connections use the default receive/send buffer sizes. I tried even setting them to 1MB, but no luck. I have three PCs in total: 2 x Vista 64-bit (one of those is the dev machine; the other is connected through WiFi) and 1 x WinXP 32-bit. They're all connected in a "full duplex" manner.

    What could be the reason? I've tried about everything: stack tracing, recording some actions (like read/write logging)... I want to stress that only the PC I'm not using to code the apps crashes (actually the server app "role" running on it - sometimes the game server and sometimes the proxy server). At first I thought that maybe the wireless PC has problems (it's wireless...), but: TCP has its own mechanisms to make sure the packet is delivered properly, and a crash also happens when trying it with the two PCs that are physically connected (Vista vs. XP). Another option is that the Windows DLL versions might have problems, but then again, one of the tests is Vista vs. Vista, and the other is Vista vs. XP. Any idea?


  • C++ MySQL++ Delete query statement brain killer question

    - by shauny
    Hello all, I'm relatively new to the MySQL++ connector in C++, and I already have a really annoying issue with it! I've managed to get stored procedures working; however, I'm having issues with the delete statements. I've looked high and low and have found no documentation with examples. First I thought maybe the code needs to free the query/connection results after calling the stored procedure, but of course MySQL++ doesn't have a free_result method... or does it? Anyways, here's what I've got:

        #include <iostream>
        #include <stdio.h>
        #include <queue>
        #include <deque>
        #include <sys/stat.h>
        #include <mysql++/mysql++.h>
        #include <boost/thread/thread.hpp>
        #include "RepositoryQueue.h"

        using namespace boost;
        using namespace mysqlpp;

        class RepositoryChecker
        {
        private:
            bool _isRunning;
            Connection _con;

        public:
            RepositoryChecker()
            {
                try {
                    this->_con = Connection(false);
                    this->_con.set_option(new MultiStatementsOption(true));
                    this->_con.set_option(new ReconnectOption(true));
                    this->_con.connect("**", "***", "***", "***");
                    this->ChangeRunningState(true);
                }
                catch (const Exception& e) {
                    this->ChangeRunningState(false);
                }
            }

            /**
             * Thread method which runs and creates the repositories
             */
            void CheckRepositoryQueues()
            {
                //while(this->IsRunning())
                //{
                    std::queue<RepositoryQueue> queues = this->GetQueue();
                    if (queues.size() > 0) {
                        while (!queues.empty()) {
                            RepositoryQueue &q = queues.front();
                            char cmd[256];
                            sprintf(cmd, "svnadmin create /home/svn/%s/%s/%s",
                                    q.GetPublicStatus().c_str(),
                                    q.GetUsername().c_str(),
                                    q.GetRepositoryName().c_str());

                            if (this->DeleteQueuedRepository(q.GetQueueId())) {
                                printf("query deleted?\n");
                            }
                            printf("Repository created!\n");
                            queues.pop();
                        }
                    }
                    boost::this_thread::sleep(boost::posix_time::milliseconds(500));
                //}
            }

        protected:
            /**
             * Gets the latest queue of repositories from the database
             * and returns them inside a cool queue defined with the
             * RepositoryQueue class.
             */
            std::queue<RepositoryQueue> GetQueue()
            {
                std::queue<RepositoryQueue> queues;
                Query query = this->_con.query("CALL sp_GetRepositoryQueue();");
                StoreQueryResult result = query.store();
                RepositoryQueue rQ;

                if (result.num_rows() > 0) {
                    for (unsigned int i = 0; i < result.num_rows(); ++i) {
                        rQ = RepositoryQueue((unsigned int)result[i][0],
                                             (unsigned int)result[i][1],
                                             (String)result[i][2],
                                             (String)result[i][3],
                                             (String)result[i][4],
                                             (bool)result[i][5]);
                        queues.push(rQ);
                    }
                }
                return queues;
            }

            /**
             * Allows the thread to be shut off.
             */
            void ChangeRunningState(bool isRunning)
            {
                this->_isRunning = isRunning;
            }

            /**
             * Returns the running value of the active thread.
             */
            bool IsRunning()
            {
                return this->_isRunning;
            }

            /**
             * Deletes the repository from the mysql queue table. This is
             * only called once it has been created.
             */
            bool DeleteQueuedRepository(unsigned int id)
            {
                char cmd[256];
                sprintf(cmd, "DELETE FROM RepositoryQueue WHERE Id = %d LIMIT 1;", id);
                Query query = this->_con.query(cmd);
                return (query.exec());
            }
        };

    I've removed all the other methods as they're not needed... Basically it's the DeleteQueuedRepository method which isn't working; the GetQueue works fine. PS: This is on a Linux OS (Ubuntu server). Many thanks, Shaun
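
    As an aside, a sketch of the usual MySQL++ idiom (assuming the commonly documented API, not anything from the post): Query behaves like an output stream, so the DELETE can be built without sprintf (which also sidesteps the %d-versus-unsigned mismatch in the format string), and exec()'s failure reason can be inspected:

        // Sketch: stream-style query building; _con is a connected mysqlpp::Connection.
        bool DeleteQueuedRepository(unsigned int id)
        {
            mysqlpp::Query query = this->_con.query();
            query << "DELETE FROM RepositoryQueue WHERE Id = " << id << " LIMIT 1";
            if (!query.exec()) {
                // exec() returns false on failure; error() says why.
                std::cerr << "DELETE failed: " << query.error() << std::endl;
                return false;
            }
            return true;
        }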


  • surfaceDestroyed called out of turn

    - by Avasulthiris
    I'm currently developing against minimum SDK version 3 (Android 1.5, Cupcake) and I'm having a strange, unexplained issue that I have not been able to solve on my own. It is now becoming a rather urgent issue, as I've already missed one deadline... I'm writing a high-level library to make long-term Android development easier and quicker. One specific module has to capture images for an application... I've gotten everything right so far over the last couple of months, except this one little thing, and I don't know what to do any more.

    When I use the Camera object and implement a SurfaceHolder.Callback, the methods surfaceCreated() and surfaceChanged() are called one after the other. Then, when the activity finishes, surfaceDestroyed() is called. This is how it should be. But when I stick the exact same code in my library (a plain Java library that references the Android API - not in an activity), surfaceDestroyed() is called directly after created and changed. As a result, the camera object is closed before I can use it and the application force closes. What a pain. I can't do anything! This method call is controlled by the device. Why does the surface close for no reason? Even when I post it to run on the activity thread through my own invokeAndWait(Runnable) method, like I do for many other things. I have 5 different working examples of different ways and implementations of capturing images in Android, but I still get the same issue when I plug it into my library. I don't understand what the difference is. The code is pretty much the same - and I post all the related code to the UI thread, so it's not a thread handling issue or anything like that. I've rewritten it about 20 times in different ways - same issue every time.

    The only other way to approach it that I know of is creating a new Camera and setting it to the VideoView. The Android source (C++ native code), however, provides no Camera constructor, only an open() method which automatically forwards the camera's state to 'prepared', but I can only set the camera to the VideoView from the 'initialized' state. Pretty silly, I know, but there is no way around it unless I modify the Android library source code, haha. Not an option! The API does not allow for this method - you are expected to use it like my first example.

    So essentially, I just need to understand exactly why surfaceDestroyed() is called out of turn, and whether there is anything I can do to avoid the surface closing - if I can just understand the exact logic behind it and how it works! The documentation isn't much help. Secondly, does anyone know of any alternative ways to do it, as in my second example, but hopefully one which the API actually allows for? haha

    Thanks guys. I would post code, but it's fairly complicated - a couple thousand lines for this specific class - and it would probably take a couple of days to explain with all the threading and event listeners and what not. I just need help with this one single thing. Please let me know if you have any questions.
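
    For context, a minimal sketch of the lifecycle wiring being described (standard android.view/android.hardware API of that era, not the library code in question). The key behaviour is that a SurfaceView's surface only exists while the view is attached to a visible window, so a surface created outside a live Activity view hierarchy can legitimately be torn down immediately - surfaceDestroyed() fires on detach, not only at Activity finish:

        import android.hardware.Camera;
        import android.view.SurfaceHolder;
        import android.view.SurfaceView;

        public class PreviewLifecycle implements SurfaceHolder.Callback {
            private Camera camera;

            public PreviewLifecycle(SurfaceView view) {
                SurfaceHolder holder = view.getHolder();
                holder.addCallback(this);
                holder.setType(SurfaceHolder.SURFACE_TYPE_PUSH_BUFFERS); // required pre-3.0
            }

            public void surfaceCreated(SurfaceHolder holder) {
                camera = Camera.open();                // surface exists; safe to attach
            }

            public void surfaceChanged(SurfaceHolder holder, int format, int w, int h) {
                try {
                    camera.setPreviewDisplay(holder);  // (re)bind preview to the surface
                    camera.startPreview();
                } catch (java.io.IOException e) {
                    camera.release();
                }
            }

            public void surfaceDestroyed(SurfaceHolder holder) {
                // Fires whenever the SurfaceView is detached from a visible
                // window -- release the camera before this returns.
                camera.stopPreview();
                camera.release();
            }
        }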


  • DSP - Filtering in the frequency domain via FFT

    - by Trap
    I've been playing around a little with the Exocortex implementation of the FFT, but I'm having some problems. Whenever I modify the amplitudes of the frequency bins before calling the iFFT, the resulting signal contains some clicks and pops, especially when low frequencies are present in the signal (like drums or basses). However, this does not happen if I attenuate all the bins by the same factor. Let me show an example of the output buffer of a 4-sample FFT:

        // Bin 0 (DC)
        FFTOut[0] = 0.0000610351563
        FFTOut[1] = 0.0
        // Bin 1
        FFTOut[2] = 0.000331878662
        FFTOut[3] = 0.000629425049
        // Bin 2
        FFTOut[4] = -0.0000381469727
        FFTOut[5] = 0.0
        // Bin 3, this is the first and only negative frequency bin.
        FFTOut[6] = 0.000331878662
        FFTOut[7] = -0.000629425049

    The output is composed of pairs of floats, each representing the real and imaginary parts of a single bin. So bin 0 (array indexes 0, 1) would represent the real and imaginary parts of the DC frequency. As you can see, bins 1 and 3 both have the same values (except for the sign of the Im part), so I guess bin 3 is the first negative frequency, and finally indexes (4, 5) would be the last positive frequency bin.

    Then, to attenuate frequency bin 1, this is what I do:

        // Attenuate the 'positive' bin
        FFTOut[2] *= 0.5;
        FFTOut[3] *= 0.5;
        // Attenuate its corresponding negative bin.
        FFTOut[6] *= 0.5;
        FFTOut[7] *= 0.5;

    For the actual tests I'm using a 1024-length FFT and I always provide all the samples, so no 0-padding is needed.

        // Attenuate
        var halfSize = fftWindowLength / 2;
        float leftFreq = 0f;
        float rightFreq = 22050f;
        for (var c = 1; c < halfSize; c++)
        {
            var freq = c * (44100d / halfSize);

            // Calc. positive and negative frequency indexes.
            var k = c * 2;
            var nk = (fftWindowLength - c) * 2;

            // This kind of attenuation corresponds to a high-pass filter.
            // The attenuation at the transition band is linearly applied; could
            // this be the cause of the distortion of low frequencies?
            var attn = (freq < leftFreq) ? 0 :
                       (freq < rightFreq) ? ((freq - leftFreq) / (rightFreq - leftFreq)) : 1;

            // Attenuate positive and negative bins.
            mFFTOut[k] *= (float)attn;
            mFFTOut[k + 1] *= (float)attn;
            mFFTOut[nk] *= (float)attn;
            mFFTOut[nk + 1] *= (float)attn;
        }

    Obviously I'm doing something wrong, but I can't figure out what. I don't want to use the FFT output as a means to generate a set of FIR coefficients, since I'm trying to implement a very basic dynamic equalizer. What's the correct way to filter in the frequency domain? What am I missing? Thanks in advance.
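
    One detail worth double-checking against the loop above (a general DFT fact, not anything Exocortex-specific): for an N-point FFT at sample rate Fs, bin k is centred at k * Fs / N, i.e. the frequency step uses the full FFT length, not half of it. A small C# sketch of the mapping:

        // Bin-to-frequency mapping for an N-point FFT (general DFT identity).
        // The loop above computes c * (44100.0 / halfSize); with N = 1024 that is
        // c * 86.13 Hz instead of c * 43.07 Hz, so every bin is mapped to twice
        // its true frequency, which shifts where the transition band lands.
        static double BinFrequency(int k, double sampleRate, int fftLength)
        {
            return k * sampleRate / fftLength;  // k = 0 .. fftLength/2 for positive bins
        }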


  • Languages and VMs: Features that are hard to optimize and why

    - by mrjoltcola
    I'm doing a survey of features in preparation for a research project. Name a mainstream language or language feature that is hard to optimize, and why the feature is or isn't worth the price paid - or instead, just debunk my theories below with anecdotal evidence. Before anyone flags this as subjective: I am asking for specific examples of languages or features, and ideas for optimization of these features, or important features that I haven't considered. Also, any references to implementations that prove my theories right or wrong.

    Top of my list of hard-to-optimize features, and my theories (some of my theories are untested and based on thought experiments):

    1) Runtime method overloading (aka multi-method dispatch or signature-based dispatch). Is it hard to optimize when combined with features that allow runtime recompilation or method addition? Or is it just hard, anyway? Call-site caching is a common optimization for many runtime systems, but multi-methods add additional complexity as well as making it less practical to inline methods.

    2) Type morphing / variants (aka value-based typing, as opposed to variable-based). Traditional optimizations simply cannot be applied when you don't know if the type of something can change in a basic block. Combined with multi-methods, inlining must be done carefully, if at all, and probably only below a given threshold of callee size - i.e. it is easy to consider inlining simple property fetches (getters/setters), but inlining complex methods may result in code bloat. The other issue is that I cannot just assign a variant to a register and JIT it to native instructions, because I have to carry the type info around, or every variable needs 2 registers instead of 1. On IA-32 this is inconvenient, even if improved by x64's extra registers. This is probably my favorite feature of dynamic languages, as it simplifies so many things from the programmer's perspective.

    3) First-class continuations. There are multiple ways to implement them, and I have done so with both of the most common approaches: one being stack copying, and the other implementing the runtime to use continuation-passing style, cactus stacks, copy-on-write stack frames, and garbage collection. First-class continuations have resource management issues - i.e. we must save everything in case the continuation is resumed - and I'm not aware of any languages that support leaving a continuation with "intent" (i.e. "I am not coming back here, so you may discard this copy of the world"). Having programmed in both the threading model and the continuation model, I know both can accomplish the same thing, but continuations' elegance imposes considerable complexity on the runtime and may also affect cache efficiency (the locality of the stack changes more with the use of continuations and co-routines). The other issue is that they just don't map to hardware. Optimizing continuations is optimizing for the less-common case, and as we know, the common case should be fast and the less-common cases should be correct.

    4) Pointer arithmetic and the ability to mask pointers (storing them in integers, etc.). I had to throw this in, but I could actually live without it quite easily.

    My feeling is that many of the high-level features, particularly in dynamic languages, just don't map to hardware. Microprocessor implementations have billions of dollars of research behind the optimizations on the chip, yet the choice of language feature(s) may marginalize many of them (features like caching, aliasing the top of the stack to a register, instruction parallelism, return address buffers, loop buffers and branch prediction). Macro-applications of micro-features don't necessarily pan out like some developers like to think, and implementing many languages in a VM ends up mapping native ops into function calls (i.e. the more dynamic a language is, the more we must look up/cache at runtime; nothing can be assumed, so our instruction mix is made up of a higher percentage of non-local branching than traditional, statically compiled code), and the only thing we can really JIT well is expression evaluation of non-dynamic types and operations on constant or immediate types. It is my gut feeling that bytecode virtual machines and JIT cores are perhaps not always justified for certain languages because of this. I welcome your answers.


  • Why can't I get the frame of a UIView in order to move it? The view is defined.

    - by Jann
    I am creating a nav-based app with a view that floats at the bottom of the screen (alpha 0.7 most of the time). I create it like this...

        // stuff to create the tabbar/nav bar.
        // THIS ALL WORKS...
        // then add it to subview.
        [window addSubview:tabBarController.view];
        // need this last line to display the window (and tab bar controller)
        [window makeKeyAndVisible];

        // Okay, here is the code i am using to create a grey-ish strip exactly
        // zlocationAccuracyHeight pixels high, starting at
        // zlocationAccuracyVerticalStartPoint vertically.
        CGRect locationManagerAccuracyUIViewFrame = CGRectMake(0, zlocationAccuracyVerticalStartPoint, [[UIScreen mainScreen] bounds].size.width, zlocationAccuracyHeight);
        self.locationManagerAccuracyUIView = [[UIView alloc] initWithFrame:locationManagerAccuracyUIViewFrame];
        self.locationManagerAccuracyUIView.autoresizingMask = (UIViewAutoresizingFlexibleWidth);
        self.locationManagerAccuracyUIView.backgroundColor = [UIColor darkGrayColor];
        [self.locationManagerAccuracyUIView setAlpha:0];

        CGRect locationManagerAccuracyLabelFrame = CGRectMake(0, 0, [[UIScreen mainScreen] bounds].size.width, zlocationAccuracyHeight);
        locationManagerAccuracyLabel = [[UILabel alloc] initWithFrame:locationManagerAccuracyLabelFrame];
        if ([myGizmoClass useLocationServices] == 0) {
            locationManagerAccuracyLabel.text = @"GPS Accuracy: Using Manual Location";
        } else {
            locationManagerAccuracyLabel.text = @"GPS Accuracy: One Moment Please...";
        }
        locationManagerAccuracyLabel.font = [UIFont boldSystemFontOfSize:12];
        locationManagerAccuracyLabel.textAlignment = UITextAlignmentCenter;
        locationManagerAccuracyLabel.textColor = [UIColor whiteColor];
        locationManagerAccuracyLabel.backgroundColor = [UIColor clearColor];
        [locationManagerAccuracyLabel setAlpha:0];
        [self.locationManagerAccuracyUIView addSubview:locationManagerAccuracyLabel];
        [window addSubview:self.locationManagerAccuracyUIView];

    This all works. (I am not sure about the order I create the UIView in... meaning I am creating the frame, then the view, then creating the "accuracy text" and adding that to the view, then adding the UIView as a subview of the window. It works and seems correct in my logic.)

    So, here is the tough part. I have a timer that I am testing with. I am trying to float the UIView up by 30 pixels. Here is that code:

        [UIView beginAnimations:nil context:NULL];
        [UIView setAnimationDuration:0.3];
        CGRect rect = [self.locationManagerAccuracyUIView frame];
        NSLog(@"ORIGIN: %d x %d (%@)\n", rect.origin.x, rect.origin.y, rect);
        rect.origin.y -= 30;
        [UIView commitAnimations];

    The problem? rect looks nil; rect.origin.x and rect.origin.y are both zero. Can anyone tell me why? Here is how I set up self.locationManagerAccuracyUIView in my files:

        // Delegate.h
        UIView *locationManagerAccuracyUIView;
        UILabel *locationManagerAccuracyLabel;
        ...
        @property (nonatomic, retain) IBOutlet UIView *locationManagerAccuracyUIView;
        @property (nonatomic, retain) IBOutlet UILabel *locationManagerAccuracyLabel;

        // Delegate.m
        ...
        @synthesize locationManagerAccuracyUIView;
        @synthesize locationManagerAccuracyLabel;
        ...

    BTW: In another timer elsewhere I DO set the alpha to fade in and out, and THAT works! So locationManagerAccuracyUIView is valid and defined as a view... For instance:

        [UIView beginAnimations:nil context:NULL];
        [UIView setAnimationDuration:0.5];
        [locationManagerAccuracyLabel setAlpha:1];
        [UIView commitAnimations];

        [UIView beginAnimations:nil context:NULL];
        [UIView setAnimationDuration:0.5];
        [self.locationManagerAccuracyUIView setAlpha:.7];
        [UIView commitAnimations];

    ...and it DOES work. Can anyone help me? As an aside: I know that when typing this I used self.locationManagerAccuracyUIView and locationManagerAccuracyUIView interchangeably, to see if for some reason that was the issue. It is not. :) Thx
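
    Two things stand out in the timer snippet above that a sketch can make concrete (standard UIKit/Foundation behaviour, not specific to this app): CGRect fields are CGFloats, so logging them with %d prints 0 regardless of their value (and %@ on a plain struct is undefined), and a CGRect is a value type, so mutating the local copy does nothing until it is assigned back to the view's frame inside the animation block. A hedged sketch of both fixes:

        [UIView beginAnimations:nil context:NULL];
        [UIView setAnimationDuration:0.3];

        CGRect rect = self.locationManagerAccuracyUIView.frame;

        // CGRect fields are CGFloat: use %f, or NSStringFromCGRect for the
        // whole struct; %d reinterprets the float bits and typically shows 0.
        NSLog(@"ORIGIN: %f x %f (%@)", rect.origin.x, rect.origin.y,
              NSStringFromCGRect(rect));

        rect.origin.y -= 30;
        // A CGRect is a struct copy: assign it back, inside the animation
        // block, or nothing moves.
        self.locationManagerAccuracyUIView.frame = rect;

        [UIView commitAnimations];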


  • Insane SmartGWT + GWT situation... Error on instantiating ListGridRecord?

    - by Xandel
    Hi all, I am asking this here in the hope that someone has maybe come across this situation too... I have posted this on the SmartGWT forum.

    I am having an issue when trying to instantiate a ListGridRecord object on my server side. I am using the ListGrid on the client side, and I want to use GWT's RPC to pass back an array of ListGridRecord objects to populate the grid with. I know that SmartGWT is designed to link to a DataSource, but I want full control over when I populate the grid, and this shouldn't be as much of a nightmare as it is. I have searched high and low and cannot find anyone complaining about the same thing. The exception below has come up (in my search findings) as a possible memory error, where increasing the memory (the -Xmx512m argument) has apparently solved the problem. It did not, however, sort out mine. If anyone can shed any light on this I would greatly appreciate it!

    Here are my details: I'm developing using Eclipse Galileo on Ubuntu 9.04 (Jaunty) with GWT 2.0.3. I built the initial GWT project using the webAppCreator bundled with the GWT 2.0.3 release and imported the project into Eclipse as described on the GWT Getting Started page (using the GWT Eclipse plugin caused even more nightmares when trying to connect to a database - this is apparently due to the Google App Engine, and turning it off, as all the posts suggested, only causes ClassNotFound exceptions).

    The line that causes the error is literally:

        ListGridRecord a = new ListGridRecord();

    The error I get is the following:

        00:00:25.916 [WARN] Exception while dispatching incoming RPC call
        com.google.gwt.user.server.rpc.UnexpectedException: Service method 'public abstract java.lang.String za.co.company.product.client.service.EmployeeService.getAllEmployeeAsListGridRecord()' threw an unexpected exception: java.lang.UnsatisfiedLinkError: com.smartgwt.client.util.LogUtil.setJSNIErrorHandler()V
            at com.google.gwt.user.server.rpc.RPC.encodeResponseForFailure(RPC.java:378)
            at com.google.gwt.user.server.rpc.RPC.invokeAndEncodeResponse(RPC.java:581)
            at com.google.gwt.user.server.rpc.RemoteServiceServlet.processCall(RemoteServiceServlet.java:188)
            at com.google.gwt.user.server.rpc.RemoteServiceServlet.processPost(RemoteServiceServlet.java:224)
            at com.google.gwt.user.server.rpc.AbstractRemoteServiceServlet.doPost(AbstractRemoteServiceServlet.java:62)
            at javax.servlet.http.HttpServlet.service(HttpServlet.java:637)
            at javax.servlet.http.HttpServlet.service(HttpServlet.java:717)
            at org.mortbay.jetty.servlet.ServletHolder.handle(ServletHolder.java:487)
            at org.mortbay.jetty.servlet.ServletHandler.handle(ServletHandler.java:362)
            at org.mortbay.jetty.security.SecurityHandler.handle(SecurityHandler.java:216)
            at org.mortbay.jetty.servlet.SessionHandler.handle(SessionHandler.java:181)
            at org.mortbay.jetty.handler.ContextHandler.handle(ContextHandler.java:729)
            at org.mortbay.jetty.webapp.WebAppContext.handle(WebAppContext.java:405)
            at org.mortbay.jetty.handler.HandlerWrapper.handle(HandlerWrapper.java:152)
            at org.mortbay.jetty.handler.RequestLogHandler.handle(RequestLogHandler.java:49)
            at org.mortbay.jetty.handler.HandlerWrapper.handle(HandlerWrapper.java:152)
            at org.mortbay.jetty.Server.handle(Server.java:324)
            at org.mortbay.jetty.HttpConnection.handleRequest(HttpConnection.java:505)
            at org.mortbay.jetty.HttpConnection$RequestHandler.content(HttpConnection.java:843)
            at org.mortbay.jetty.HttpParser.parseNext(HttpParser.java:647)
            at org.mortbay.jetty.HttpParser.parseAvailable(HttpParser.java:211)
            at org.mortbay.jetty.HttpConnection.handle(HttpConnection.java:380)
            at org.mortbay.io.nio.SelectChannelEndPoint.run(SelectChannelEndPoint.java:395)
            at org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:488)
        Caused by: java.lang.UnsatisfiedLinkError: com.smartgwt.client.util.LogUtil.setJSNIErrorHandler()V
            at com.smartgwt.client.util.LogUtil.setJSNIErrorHandler(Native Method)
            at com.smartgwt.client.core.JsObject.<clinit>(JsObject.java:30)
            at za.co.company.product.server.service.EmployeeServiceImpl.getAllEmployeeAsListGridRecord(EmployeeServiceImpl.java:83)
            at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
            at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
            at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
            at java.lang.reflect.Method.invoke(Method.java:597)
            at com.google.gwt.user.server.rpc.RPC.invokeAndEncodeResponse(RPC.java:562)
            ... (followed by the same RemoteServiceServlet/Jetty frames as above)

    Thanks in advance! Xandel
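
    For anyone hitting the same wall, a common pattern (a sketch of the usual GWT-RPC approach, not an official SmartGWT recipe): SmartGWT widget classes such as ListGridRecord are backed by JSNI native methods, which only exist in the browser - that is what the UnsatisfiedLinkError on a native method is saying - so server code should stay free of them. Return plain serializable beans over RPC and convert them on the client:

        // Shared DTO: a plain java.io.Serializable bean, safe on both sides of GWT-RPC.
        public class EmployeeDTO implements java.io.Serializable {
            public int id;
            public String name;
        }

        // Client side only: convert DTOs into grid records after the RPC returns.
        // setAttribute(String, ...) is the standard ListGridRecord API.
        void populate(com.smartgwt.client.widgets.grid.ListGrid grid, EmployeeDTO[] employees) {
            com.smartgwt.client.widgets.grid.ListGridRecord[] records =
                    new com.smartgwt.client.widgets.grid.ListGridRecord[employees.length];
            for (int i = 0; i < employees.length; i++) {
                records[i] = new com.smartgwt.client.widgets.grid.ListGridRecord();
                records[i].setAttribute("id", employees[i].id);
                records[i].setAttribute("name", employees[i].name);
            }
            grid.setData(records);  // populate the grid without a DataSource
        }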


  • Adding a div layer on top of a jquery carousel. Tough one.

    - by wilwaldon
    Hey everyone, I have a tough one - well, it's tough for me because I'm kinda new to the whole jQuery carousel thing; I never built one before this project. Here's my problem. If you go to the TEST SITE you will see a scroller with a blue background about halfway down the page. If you mouse onto the "data analytics" slide you should see a black box fade in. Here is my dilemma: I want that black box to be a menu that's connected to the data analytics slide. I've done a mock-up for you so you can see what I'm talking about.

    Here is my scroller code. I'm using jCarousel.

        <div class="carousel">
            <ul>
                <li>
                    <div id="homeslide1">
                        testers sdfasdfasdfas asdftjhs iasndkad kasdnf
                        <a href="#" id="#homeslide1-toggle">Close this</a>
                    </div>
                    <a href="#" id="homeslide1-show"><img src="<?php bloginfo('template_url'); ?>/images/home_data_analytics.jpg" width="200" height="94" /></a>
                </li>
                <li><img src="<?php bloginfo('template_url'); ?>/images/home_oem_partnerships.jpg" width="200" height="94" /></li>
                <li><img src="<?php bloginfo('template_url'); ?>/images/home_reporting.jpg" width="200" height="92" /></li>
                <li><img src="<?php bloginfo('template_url'); ?>/images/home_returning_lost_customers.jpg" width="200" height="92" /></li>
                <li><img src="<?php bloginfo('template_url'); ?>/images/home_sales.jpg" width="200" height="92" /></li>
                <li><img src="<?php bloginfo('template_url'); ?>/images/home_service_retention.jpg" width="200" height="92" /></li>
            </ul>
        </div>

    Here is my scroller CSS:

        /* HOMEPAGE SCROLLER */
        .carousel {!important padding: 10px; width: 890px; margin: 0px 0px 0px 26px;}
        .carousel ul li element.style {height: 94px;}
        .carousel ul {width: 200px; padding: 5px;}
        .carouselitem {height: 94px;}
        .prev {background: url(images/home_left_scroll.png); height: 94px; width: 16px; text-indent: -999px; outline: none; cursor: pointer; float: left;}
        .next {background: url(images/home_right_scroll.png); height: 94px; width: 16px; text-indent: -999px; outline: none; cursor: pointer; float: right;}
        .carousel ul li {padding: 0px 3px 0px 3px; margin: 0px; height:!important 94px;}
        .home_right_arrow {width: 16px; float: right;}
        .home_left_arrow {width: 16px; float: left;}
        .homeslide1 {width: 200px; height: 94px;}

    I've tried all sorts of z-index tricks but can't seem to figure it out on my own. If you solve this riddle I'll buy you a beer if we ever meet up. I'll also give you a high five through the internet. Is there a simple way to do this via jQuery? If so, could you point me in the right direction? Thanks so much.
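
    For the overlay itself, a minimal sketch of the usual approach (hypothetical selectors based on the markup above, not tested against the live site): make the slide's li a positioning context and absolutely position the menu inside it, so it travels with the slide and stacks above the image without fighting the carousel's own z-indexes:

        <script>
        // Assumes the CSS makes the slide a positioning context and hides the menu:
        //   .carousel ul li { position: relative; }
        //   #homeslide1 { position: absolute; top: 0; left: 3px; width: 200px;
        //                 height: 94px; z-index: 10; display: none;
        //                 background: #000; color: #fff; }
        // Requires jQuery 1.4+ for .has().
        $(function () {
            // Fade the menu in/out on hover; because #homeslide1 lives inside
            // the same <li> as the image, jCarousel moves them together.
            $('.carousel ul li').has('#homeslide1').hover(
                function () { $('#homeslide1').stop(true, true).fadeIn(200); },
                function () { $('#homeslide1').stop(true, true).fadeOut(200); }
            );
        });
        </script>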


  • Please recommend intermediate-to-advanced Python books to buy.

    - by anonnoir
    I'm in the final year, final semester of my law degree, and will be graduating very soon (April, to be specific). But before I begin practice, I plan to take two months off, purely for serious programming study. So I'm currently looking for some Python-related books, gauged intermediate to advanced, which are interesting (because of the subject matter itself) and possibly useful to my future line of work. I've identified 2 possible purchases at the moment:

    - Natural Language Processing with Python. The law deals mostly with words, and I have quite a number of ideas as to where I might go with NLP: data extraction, summaries, client management systems linked with document templates, etc.
    - Programming Collective Intelligence. This book fascinates me, because I've always liked the idea of machine learning (and I'm currently studying it on the side too, for fun). I'd like to build/play around with Web 2.0 applications; and who knows if I can apply some of the things I learn to my legal work (e.g. playground experiments to determine how and under what circumstances judges might be biased, by forcing algorithms to pore through judgments and calculate similarities, etc.).

    Please feel free to criticize my current choices, but do at least offer or recommend other books that I should read in their place. My budget can handle 4 books, max. These books will be used heavily throughout the 2 months; I will be reading them back to back, absorbing the explanations given, and hacking away at their code. Also, the books themselves should satisfy 2 main criteria:

    - Application. The book must teach how to solve problems. I like reading theory, but I want to build things and solve problems first. Even playful applications are fine, because games and experiments always have real-world applications sooner or later.
    - Readability. I like reading technical books, no matter how difficult they are. I enjoy the effort and the feeling that you're learning something. But the book shouldn't contain code or explanations that are too cryptic or erratic. Even if it's difficult, the book's content should be accessible with focused reading.

    Note: I realize that I am somewhat of a beginner to the whole programming thing, so please don't put me down. But from experience, I think it's better to aim up and leave my comfort zone when learning new things, rather than to just remain stagnant the way I am. (At least the difficulty gives me focus - i.e. if a programmer can be that good, perhaps if I sustain my own efforts I too can be as good as him someday.) If anything, I'm also a very determined person, so two months of day-to-night intensive programming study with nothing else on my mind should, I think, give me a bit of a fighting chance to push my programming skills to a much higher level.


  • How to cope with developing against a poor 3rd party API/application?

    - by wsanville
    I'm a web developer, and my organization has recently started to use a proprietary ASP.NET CMS for our web sites. I was excited to get started using the CMS, thinking it would bring a lot of value to our end users and be fun to work with, since my skills are a good match for the types of projects we're using it for. That was about a year ago. Since then, we've run into all kinds of issues, from blatant bugs in the product, to nasty edge cases in the APIs, to extremely poor documentation for developers. On about a weekly basis, we are forced to pursue workarounds and rewrite some of the out-of-the-box functionality, and we even find some of the basic features unusable. In many cases, since this is a closed-source application (and obfuscated, of course), there's nothing we can do as developers to solve these issues.

    So my question is: how does one attempt to develop a good application in such a scenario? The application mostly works when using the exact out-of-the-box behavior, or one of the company's starter sites. However, my attempts to use the underlying APIs to implement slightly different, yet reasonable, behavior have proved extremely time consuming (not to mention just as buggy), given the lack of good information about the APIs. I've given this a lot of thought, and my conflicting viewpoints are the following:

    - Strongly advise against any customization of the CMS, as development time will rise exponentially, or even have an extremely high chance of failing. While this is accurate, I do not want to give the impression that I am not willing to code my own solutions to problems and take the initiative to implement something difficult or complex. I don't want to be perceived as someone who is unmotivated, lazy, or not knowledgeable enough to do anything complex, because this is simply not the case. I love coding my own solutions and trying new/difficult things; I just dislike the vendor app we're using.
    - Continue on the path I'm on now, which is hacking my way past all the issues I encounter and trying my best to deliver an application that meets the needs and specs exactly. My goals are to make it as seamless and easy to use as possible for the end user, even when integrating the CMS with our other applications internally. The problem I'm finding with this approach is that it is very time consuming. I open support cases with the vendor on a regular basis to solve issues and to gain knowledge of their APIs, but this is extremely time consuming, and in some cases it leads to dead ends. I post on the vendor's forums on a regular basis but have become frustrated, as most of my posts get 0 replies.

    So, what would you, a reasonable developer, do in this case? How can I make the best of the situation? And just for fun, here are some of the code smells and anti-patterns I've dealt with in the product (aside from their own code blatantly failing):

    - Use of StringBuilder to concatenate a giant string that is hard coded and does not change. They use it to concatenate their Javascript and write it out into the body tags of their pages.
    - Methods that accept object or Microsoft.VisualBasic.Collection as parameters. In the case of the VB Collection, the data is not a list of any kind; it's used instead of making a class.
    - Methods that return a Hashtable of VB Collections.
    - Method names of the form MethodName_v45, MethodName_v20, etc.
    - Multiple classes with the same name in different namespaces with different functionality/behavior.
    - Intellisense that reads "Note: this parameter is non functional".
    - Complete lack of coding standards; the API is filled with magic numbers and magic strings.
    - Properties with a getter of type object that accept totally different things, like enums or strings, and throw exceptions at runtime when you pass in something not supported.
    - And much, much more...


  • How can I provide maximum integration between a calendar-like webapp and desktop calendar applications?

    - by Joshua Carmody
    I've been assigned to upgrade/rewrite a webapp that my company uses to schedule conference calls. One of the goals of the upgrade is to improve integration between the application and our users' Outlook calendars (and ideally other calendar programs as well).

    At present, when a user is viewing the details of a scheduled conference call in the webapp, they can click an "Add to Outlook calendar" link, which points them to a dynamically generated .ical file. On most of our users' systems, Outlook opens the file by default, bringing up the "create calendar appointment" window with the con-call information pre-populated. This link creates a one-time appointment only and has to be clicked for each occurrence of the call. So if a call happened every Monday in June, you would have to click 4 links to add all the appointments to your calendar. This is the full extent of our current level of integration.

    Ideally, we will be able to upgrade the system so that users can "subscribe" to a con call, which would mean that not just the current call, but all calls in a recurring series, would appear in the user's calendar with a single click. If one call in a series was cancelled or rescheduled, that call's appointment would change in the users' calendars without the user having to do anything, and without upsetting the rest of the series' appointments. Also, any changes to the call's info (say, the phone number was changed) would automatically be updated in the Outlook calendars of anyone who subscribed, without them having to come back to the webapp to double-check that their information is up to date. Ideally this would also work with other popular calendar programs, as well as Google Calendar. I don't know if we'll be able to achieve that level of integration, but I'd like to get as close to it as we can.

    Additional details and challenges:

    - We aren't running Exchange on a public server, and I'm not likely to be able to get that changed.
    - Assume that our users are basically "the general internet public". Our users are not members of our office's network, nor can they be. We can't set up network logins or Exchange accounts for them.
    - Some of our users are not using Outlook but some other calendar program. Of the ones that are using Outlook, not all are using the same version.
    - We have users in more than 50 countries who are using this webapp.
    - Synchronization would be one-directional. Nobody can make changes in their own calendars and expect the server to reflect/replicate them to other users.
    - The current conference calling application is written in ColdFusion. The rewrite will probably be in ASP.NET, but I haven't confirmed that yet. Solutions that work with either or both technologies are appreciated.
    - I know that .ical files can theoretically contain more than one event, but in my own experiments I haven't had success in getting Outlook (2003) to add more than one event at a time using the .ical file method. Maybe someone knows how to set up a multi-event .ical file that Outlook will accept (see the sketch below)? Could a link to such an .ical file be "subscribed" to? Is there such a thing as a calendar RSS feed? Could I simulate running an Exchange server? Any other ideas?

    Thanks everyone!
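
    On the multi-event question: per the iCalendar spec (RFC 2445, later RFC 5545), a single .ics file may contain any number of VEVENT components, and a recurring series is usually better expressed as one VEVENT with an RRULE; clients match updates by UID and SEQUENCE, and a cancelled occurrence can be expressed with EXDATE. A sketch of a weekly series (hypothetical UIDs, times, and dial-in details - the webapp would generate these dynamically):

        BEGIN:VCALENDAR
        VERSION:2.0
        PRODID:-//Example Corp//ConCall Scheduler//EN
        METHOD:PUBLISH
        BEGIN:VEVENT
        UID:concall-1234@example.com
        SEQUENCE:0
        DTSTAMP:20100601T120000Z
        DTSTART:20100607T150000Z
        DTEND:20100607T160000Z
        RRULE:FREQ=WEEKLY;BYDAY=MO;UNTIL=20100628T160000Z
        SUMMARY:Weekly project conference call
        DESCRIPTION:Dial-in: +1-555-0100\, code 4321
        END:VEVENT
        END:VCALENDAR

    Serving a file like this at a stable URL is also what makes "subscribing" possible: calendar programs with internet-calendar support (Google Calendar, and newer Outlook versions than the 2003 one tested above) can poll the URL and pick up changes, provided updates bump SEQUENCE and keep the UID stable. How far each of your users' specific client versions supports this is something to verify per client.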

    Read the article

  • Possible iPhone animation timing/rendering bug?

    - by David
    Hi all, I have been working on an iPhone app for several weeks. Now I have encountered an animation problem that I can't figure out how to resolve. Maybe you can help. Here are the details (a little long, bear with me):

    Basically the effect I want to achieve is: when the user clicks a button, a loading view pops up, hiding the whole screen, and the app then does a lot of heavy computation, which takes a few seconds. Once the computation is done, some result views (something like checkers on a checker board) are rendered under the loading view. Once all result views are rendered, I use an animation to remove the loading view and show the result views to the user. Here is what I do when the user clicks the button:

        [UIView beginAnimations:nil context:nil];
        [UIView setAnimationDuration:1.0];
        [UIView setAnimationBeginsFromCurrentState:YES];
        [UIView setAnimationTransition:UIViewAnimationTransitionCurlDown forView:self.view cache:YES];
        [UIView setAnimationDelegate:self];
        [UIView setAnimationDidStopSelector:@selector(loadingViewInserted:finished:context:)];
        // use a really high index number so it will always be on top
        [self.view insertSubview:loadingViewController.view atIndex:1000];
        [UIView commitAnimations];

    In the "loadingViewInserted" callback, another function is called to do the heavy computation work. Once the computation is done, the result views (like checkers on a checker board) are rendered under the loading view:

        for (int colIndex = 1; colIndex <= result.columns; colIndex++) {
            for (int rowIndex = 1; rowIndex <= result.rows; rowIndex++) {
                ResultView *rv = [ResultView resultViewWithData:results[colIndex][rowIndex]];
                [self.view addSubview:rv];
            }
        }

    Once all result views are added, the following animation is invoked to remove the loading view:

        [UIView beginAnimations:nil context:nil];
        [UIView setAnimationDuration:1.0];
        [UIView setAnimationBeginsFromCurrentState:YES];
        [UIView setAnimationTransition:UIViewAnimationTransitionCurlUp forView:self.view cache:YES];
        [loadingViewController.view removeFromSuperview];
        [UIView commitAnimations];

    By doing this, most of the time (maybe 90%) it does exactly what I want. Sometimes, however, I see a weird result: the loading view shows up first as expected; then, before it disappears, some result views, which are supposed to be under the loading view, suddenly appear on top of it, and some of them are only partially rendered. Then the loading view curls up and everything looks normal again. The weird situation lasts for less than a second, but that is already bad enough to mess up the UI.

    I have tried all kinds of things to fix this (using another thread to remove the loading view, making the loading view non-transparent), but none of them works. The only thing that makes it a little better is to hide all the result views first and, in the callback after the last animation has finished, unhide them all. But that loses the nice effect of the results already being there when the loading view curls up. At this point, I really think this is a bug in iPhone OS (I compiled against OS 3.0). Or maybe you can point out what I have done wrong (or could do differently). (Thanks for finishing this long post. :-) )

    Read the article

  • Java code optimization leads to numerical inaccuracies and errors

    - by rano
    I'm trying to implement a version of the Fuzzy C-Means algorithm in Java, and I'm trying to do some optimization by computing just once everything that can be computed just once. This is an iterative algorithm, and regarding the update of one matrix, the clusters x pixels membership matrix U, this is the update rule I want to optimize:

        u_{ij} = \left[ \sum_{k=1}^{C} \left( \frac{\lVert x_i - v_j \rVert}{\lVert x_i - v_k \rVert} \right)^{2/(m-1)} \right]^{-1}

    where the x_i are rows of a matrix X (pixels x features), the v_j are rows of the matrix V (clusters x features), C is the number of clusters, and m is a parameter that ranges from 1.1 to infinity. The distance used is the Euclidean norm. If I had to implement this formula in a banal way I'd do:

        for (int i = 0; i < X.length; i++) {
            for (int j = 0; j < V.length; j++) {
                double num = D[i][j];
                double sumTerms = 0;
                for (int k = 0; k < V.length; k++) {
                    double thisDistance = D[i][k];
                    sumTerms += Math.pow(num / thisDistance, (1.0 / (m - 1.0)));
                }
                U[i][j] = (float) (1f / sumTerms);
            }
        }

    In this way some optimization is already done: I precomputed all the possible squared distances between X and V and stored them in a matrix D (since D holds squared distances, the exponent on each ratio becomes 1/(m-1)). But that is not enough, since I'm cycling through the elements of V two times, resulting in two nested loops. Looking at the formula, the numerator of the fraction is independent of the sum, so I can compute numerator and denominator independently, and the denominator can be computed just once for each pixel. So I came to a solution like this:

        int nClusters = V.length;
        double exp = (1.0 / (m - 1.0));
        for (int i = 0; i < X.length; i++) {
            for (int j = 0; j < nClusters; j++) {
                double distance = D[i][j];
                double denominator = D[i][nClusters];
                double numerator = Math.pow(distance, exp);
                U[i][j] = (float) (1f / (numerator * denominator));
            }
        }

    where I precomputed the denominator into an additional column of the matrix D while I was computing the distances:

        for (int i = 0; i < X.length; i++) {
            for (int j = 0; j < V.length; j++) {
                double sum = 0;
                for (int k = 0; k < nDims; k++) {
                    final double d = X[i][k] - V[j][k];
                    sum += d * d;
                }
                D[i][j] = sum;
                D[i][V.length] += Math.pow(1 / D[i][j], exp);
            }
        }

    By doing so I encounter numerical differences between the 'banal' computation and the second one, which lead to different numerical values in U (not in the first iterations, but soon enough). I guess the problem is that exponentiating very small numbers to high powers (the elements of U can range from 0.0 to 1.0, and exp, for m = 1.1, is 10) leads to very small values, whereas dividing the numerator by the denominator and THEN exponentiating the result seems to behave better numerically. The problem is that it involves many more operations. Am I doing something wrong? Is there a possible solution to get the code both optimized and numerically stable? Any suggestion or criticism will be appreciated.
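    One way to see why the two versions drift apart, and a possible compromise that keeps the cheap per-pixel precomputation, is to factor the sum around the smallest distance instead of around D_{ij}. A sketch of the algebra, in the notation above (D holds squared distances, e = 1/(m-1), and assuming no distance is exactly zero):

        \sum_k \left( \frac{D_{ij}}{D_{ik}} \right)^{e}
            = \left( \frac{D_{ij}}{D_i^{\min}} \right)^{e} T_i,
        \qquad
        T_i = \sum_k \left( \frac{D_i^{\min}}{D_{ik}} \right)^{e},
        \qquad
        D_i^{\min} = \min_k D_{ik}

    so that

        U_{ij} = \left[ \left( \frac{D_{ij}}{D_i^{\min}} \right)^{e} T_i \right]^{-1}

    T_i is still computed once per pixel, but everything raised to the power e is now a ratio of comparable distances: each term of T_i lies in (0, 1], so nothing underflows the way raw reciprocals like 1/D_{ik} can when m is close to 1. Coincident points (a zero distance) still need the usual special case of assigning full membership to the matching cluster.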

    Read the article

  • Help to argue why to develop software on a physical computer rather than via a remote desktop

    - by s5804
    Remote desktops are great, many times a blessing, and cost-effective (instead of leasing expensive cables). I am not arguing against remote desktops; my point is just that, given the choice between a remote desktop and a physical computer, I would choose the latter. Also note that I am not arguing for or against remote work practices; in my case I am required to be physically present in the office when developing software.

    Background: I work in a company whose main business is not developing software. The company IT policies are therefore mainly focused on security and on efficiently deploying and maintaining thousands of computers for users. Further, the typical employee runs typical Office applications, like a word processor. Because safety/stability is such a big priority, every non-production system or application shall be deployed into a physically different network, called the test network. Software development of course also belongs in the test network. To access the test network, the company has created a standard policy which dictates that access shall go only via a remote desktop client. In practice, from one's production computer one opens a remote desktop client to a virtual computer located in the test network. On the virtual computer's remote desktop one can access, run, and install all development tools, like the Eclipse IDE. The other solution is a dedicated physical computer that is physically connected only to the test network. Both solutions are available in the company.

    I have tested both approaches and found running Eclipse IDE and SQL Developer in the remote desktop client to be sluggish (keystrokes are delayed), commands like alt-tab take me out of the remote client, enjoying... Further, screen resolution and colors are different, just to mention a few issues. So there is nothing technically wrong with the remote client; it is just not optimal, and frankly de-motivating. Now, with the new policies put in place, the plan is to remove the physical computers connected to the test network.

    I am looking for help to argue why software developers should have a dedicated physical development computer, to be productive and cost-effective. Remember that we are physically in the office. Note also that we are talking about approx. 50 computers out of 2000 employees, so the extra budget is relatively small; this is more about policy than cost. Please note that there are lots of similar setups in other companies that work great thanks to perfectly tuned systems. In my case, however, it is sluggish, and it would cost more money to troubleshoot and fine-tune the performance than to keep a few physical computers. As a business case we have argued that productivity will go down by 25%, though my feeling is that the reality is probably closer to 50%. This business case isn't really accepted, and I find it very difficult to defend to managers who have never used a rich IDE in their lives, never mind developed software. Further, the test network and remote client have no guaranteed service level, so they are down for a few hours per month with the lowest priority on the fix list. Help is appreciated.

    Read the article

  • What version-control system is most trivial to set up and use for toy projects?

    - by Norman Ramsey
    I teach the third required intro course in a CS department. One of my homework assignments asks students to speed up code they have written for a previous assignment. Factor-of-ten speedups are routine; factors of 100 or 1000 are not unheard of. (For a factor-of-1000 speedup you have to have made rookie mistakes with malloc().) Programs are improved by a sequence of small changes, and I ask students to record and describe each change and the resulting improvement. While you're improving a program it is also possible to break it. Wouldn't it be nice to back out? You can see where I'm going with this: my students would benefit enormously from version control. But there are some caveats:

    - Our computing environment is locked down. Anything that depends on a central repository is suspect.
    - Our students are incredibly overloaded. Not just classes but jobs, sports, music, you name it. For them to use a new tool it has to be incredibly easy and have obvious benefits.
    - Our students do most work in pairs. Getting bits back and forth between accounts is problematic. Could this problem also be solved by distributed version control?
    - Complexity is the enemy. I know setting up a CVS repository is too baffling --- I myself still have trouble because I only do it once a year. I'm told SVN is even harder.

    Here are my comments on existing systems:

    - I think central version control (CVS or SVN) is ruled out because our students don't have the administrative privileges needed to make a repository that they can share with one other student. (We are stuck with Unix file permissions.) Also, setup on CVS or SVN is too hard.
    - darcs is very easy to set up, but it's not obvious how you share things. darcs send (to send patches by email) seems promising, but it's not clear how to set it up.
    - The introductory documentation for git is not for beginners. Like CVS setup, it's something I myself have trouble with.

    I'm soliciting suggestions for what source control to use with beginning students. I suspect we can find resources to put a thin veneer over an existing system and to simplify existing documentation; we probably don't have resources to write new documentation. So, what's really easy to set up, commit, revert, and share changes with a partner, but does not have to be easy to merge or to work at scale? A key constraint is that programming pairs have to be able to share work with each other and only each other, and pairs change every week. Our infrastructure is Linux, Solaris, and Windows with a NetApp filer. I doubt my IT staff wants to create a Unix group for each pair of students. Is there an easier solution I've overlooked?

    (Thanks for the accepted answer, which beats the others on account of its excellent reference to Git Magic as well as the helpful comments.)
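    For what it's worth, the smallest shared setup I know of is a bare git repository on the shared filesystem: no server, no daemon, and no administrative privileges beyond a directory both accounts can write to. A sketch with invented paths; Unix file permissions remain the weak point for keeping a pair's work private to that pair:

        # one-time setup, by either partner
        git init --bare /courses/cs3/pairs/pair07.git

        # each partner, from their own account
        git clone /courses/cs3/pairs/pair07.git speedup
        cd speedup

        # record one improvement at a time
        git add -A
        git commit -m "replace per-node malloc with an arena: 4.2s -> 0.9s"
        git push origin master

        # back out uncommitted experiments, or pick up the partner's work
        git checkout -- .
        git pull

    The whole student-facing workflow is clone, add, commit, push, pull; merging rarely comes up when a pair coordinates in person, which fits the "does not have to be easy to merge" constraint.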

    Read the article

  • Faulty to use memcache together with a php web-browser-game in this way?

    - by Crowye
    Background: We are currently working on a strategy web-browser game based on PHP, HTML and JavaScript. The plan is to have 10,000+ users playing within the same world. Currently we are using memcached to:

    - store static JSON data and language files
    - store changeable serialized PHP class objects (such as armies, inventories, unit containers, buildings, etc.)

    In the back we have a MySQL server running and holding all the game data as well. When an object is loaded through our ObjectLoader, it loads in this order:

    - check a static hashmap in the script for the object
    - check memcache, in case it has already been loaded into it
    - otherwise load from the database, and save it into memcache and the static temp hashmap

    We have built the whole game using a class-object-oriented approach where functionality always happens through interactions between objects. Because of this we think we have managed to get a nice structure, and with the help of memcached we have gotten good client-server request times when interacting with the game. I'm aware that memcache is not synchronized, and also is not commonly used for holding a full game in memory. In the beginning, after a server startup, load times will be high while objects are loaded into memcache for the first time, but after the server has been online for a while and most loads come from memcache, they will be much reduced.

    Currently we save changed objects into memcache and the database at the same time. Earlier we had an idea to save objects into the db only after a certain time or at intervals, but due to the risk of inconsistency if memcache or the server went down, we skipped it for now. Client requests to the server often return an object's status in simple JSON format without changing the object, which in turn is represented visually in the browser with images and JavaScript. But from time to time, depending on when an object was last updated, the server updates objects with new information (e.g. a build queue holding planned buildings has its time-progress increased, and/or the planned-queue-items array has changed).

    Questions:

    - Do you see how this could work, or are we walking in blindness here?
    - Do you expect us to have a lot of inconsistency issues if someone loads and updates a memcached object while someone else does the same?
    - Is it even doable the way we have done it? It seems to be working fine at the moment, but so far we have only had 4 people online at the same time...
    - Is some other cache program a better fit for this class-object approach than memcached?
    - Do you have any other tips for this situation?

    UPDATE: Since it is simply a "normal webpage" (no applet, flash, etc.), we are implementing the game so that the server is the only one holding the "real game state"; the state of the different JavaScript objects on the client is more like an approximate version of the server's game state. From time to time, and before certain important actions, the client's visual state is updated to the server's state (e.g. the client thinks it can afford a barracks and asks the server to build one; the server updates current resources according to its income data, then either builds the barracks or returns an error message, and sends the current server state of resources and buildings back to the client). It is not a fast-paced game like a real-time strategy game. It is more like a quite slow game with 3-4 months of playtime, where buildings can take from a minute up to several days to complete.

    Read the article

  • Are all <canvas> tag dimensions in pixels?

    - by Simon Omega
    Are all <canvas> tag dimensions in pixels? I am asking because I understood them to be, but either my math is broken or I am just not grasping something here. I have been doing Python mostly and just jumped back into JavaScript. If I am just doing something stupid, let me know.

    For a game I am writing, I wanted to have a blocky gradient. I have the following:

    HTML:

        <canvas id="heir"></canvas>

    CSS:

        @media screen {
            body { font-size: 12pt }
            /* Game Rendering Space */
            canvas {
                width: 640px;
                height: 480px;
                border-style: solid;
                border-width: 1px;
            }
        }

    JavaScript (shortened):

        function testDraw(thecontext) {
            var myblue = 255;
            thecontext.save(); // save all settings (before this function was called)
            for (var i = 0; i < 480; i = i + 10) {
                if (myblue.toString(16).length == 1) {
                    thecontext.fillStyle = "#00000" + myblue.toString(16);
                } else {
                    thecontext.fillStyle = "#0000" + myblue.toString(16);
                }
                thecontext.fillRect(0, i, 640, 10);
                myblue = myblue - 2;
            }
            thecontext.restore(); // restore settings to save point (removing styles, etc.)
        }

        function main() {
            var targetcontext = document.getElementById("heir").getContext("2d");
            testDraw(targetcontext);
        }

    To me this should produce a series of bars, each 640 wide by 10 high. In Google Chrome and Firefox I get 15 bars. To me that means (480 / 15) the bars are 32 pixels high. So I changed the code to:

        function testDraw(thecontext) {
            var myblue = 255;
            thecontext.save();
            for (var i = 0; i < 16; i++) {
                if (myblue.toString(16).length == 1) {
                    thecontext.fillStyle = "#00000" + myblue.toString(16);
                } else {
                    thecontext.fillStyle = "#0000" + myblue.toString(16);
                }
                thecontext.fillRect(0, (i * 10), 640, 10);
                myblue = myblue - 10;
            }
            thecontext.restore();
        }

    and got a true 32-pixel-height result for comparison. Other than the fact that the first snippet renders shades of blue in non-visible portions of the <canvas>, both measure 32 pixels per bar. Now back to the original code... If I inspect the <canvas> tag in Chrome it reports 640 x 480. If I inspect it in Firefox it reports 640 x 480. BUT! Firefox exports the original code to PNG at 300 x 150 (which is 15 rows of 10). Is it somehow being resized to 640 x 480 by the CSS instead of being set to a true 640 x 480? Why, how, what? O_o I'm confused...
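    The behavior described is consistent with the <canvas> element having two independent sizes: the drawing surface, set by its width and height attributes (defaulting to 300 x 150 when omitted), and the CSS box, which merely scales the finished bitmap the way it would scale an image. A sketch of the usual fix, sizing the surface itself through the attributes:

        <canvas id="heir" width="640" height="480"></canvas>

    This accounts for both observations above: on a 300 x 150 surface, only the first 15 of the 48 ten-pixel bars fall inside the visible area (the rest are drawn below it), and stretching those 150 surface pixels over a 480-pixel-tall CSS box makes each bar appear 480 / 15 = 32 pixels tall. It is also why Firefox exports a 300 x 150 PNG: that is the real bitmap.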

    Read the article

  • clock and date showing on a live site but not on localhost

    - by grumpypanda
    I've got clock.swf and date.swf working fine on a live site. Now I am using the same code to set up a local development environment. Everything is working well except that clock.swf and date.swf stopped working on localhost: I get the same yellow error twice, "You need to update your Flash plugin. Click here if you want to continue." But of course my Flash player is up to date, since the live site works fine. I'll post the code below that I think is causing the error. I've been searching online for the last couple of hours but no luck. Has anyone run into an issue like this before? What could the cause be? Any help is appreciated.

    This is on index.php (I can post more code here if needed):

        <?php embed_flash("swf/clock.swf", CLOCK_WIDTH, CLOCK_HEIGHT, "8", '', "flashcontent");?>
        <?php embed_flash("swf/date.swf", DATE_WIDTH, DATE_HEIGHT, "8", '', "flashcontent_date");?>

    configure.php:

        define('CLOCK_WIDTH', '450');
        define('CLOCK_HEIGHT', '');
        define('DATE_WIDTH', '440');
        define('DATE_HEIGHT', '');

    flash_function.php:

        <?php
        function embed_flash($name, $w, $h, $version, $bgcolor, $id) {
            $cacheBuster = rand();
            $padTop = $h / 3;
        ?>
        <style>
            a.noflash:link, a.noflash:visited, a.noflash:active {
                color: #1860C2; text-decoration: none; background: #FFFFFF;
            }
            a.noflash:hover { color: #000; text-decoration: none; background: #EEEEEE; }
            .message {
                width: <?=$w;?>px; font-size: 12px; font-weight: normal;
                margin-bottom: 10px; padding: 5px; color: #EEE; background: orange;
            }
        </style>
        <div id="<?=$id; ?>" align="center">
            <noscript>
                <div class="message">
                    Please enable <a href="https://www.google.com/support/adsense/bin/answer.py?answer=12654" target="_blank" class="noflash">&nbsp;JavaScript&nbsp;</a> to view this page properly.
                </div>
            </noscript>
            <div class="message">
                You need to update your Flash plugin. Click <a href="http://www.adobe.com/shockwave/download/download.cgi?P1_Prod_Version=ShockwaveFlash&promoid=BIOW" target="_blank" class="noflash">&nbsp;here&nbsp;</a> if you want to continue.
            </div>
        </div>
        <script type="text/javascript">
        // <![CDATA[
            var so = new SWFObject("<?=$name;?>", "", "<?=$w;?>", "<?=$h;?>", "<?=$version;?>", "<?=$bgcolor;?>");
            so.addParam("quality", "high");
            so.addParam("allowScriptAccess", "sameDomain");
            so.addParam("scale", "showall");
            so.addParam("loop", "false");
            so.addParam("wmode", "transparent");
            so.write("<?=$id;?>");
        // ]]>
        </script>
        <?php } ?>

    Read the article

  • C++ Optimize if/else condition

    - by Heye
    I have a single line of code that consumes 25% - 30% of the runtime of my application. It is a less-than comparator for an std::set (the set is implemented with a red-black tree). It is called about 180 million times within 52 seconds.

        struct Entry {
            const float _cost;
            const long _id;
            // some other vars
            Entry(float cost, long id) : _cost(cost), _id(id) { }
        };

        template<class T>
        struct lt_entry : public binary_function<T, T, bool> {
            bool operator()(const T &l, const T &r) const {
                // Most readable shape
                if (l._cost != r._cost) {
                    return r._cost < l._cost;
                } else {
                    return l._id < r._id;
                }
            }
        };

    The entries should be sorted by cost and, if the cost is the same, by their id. I have many insertions for each extraction of the minimum. I thought about using Fibonacci heaps, but I have been told that they are theoretically nice but suffer from high constants and are pretty complicated to implement. And since insert is in O(log(n)), the runtime increase is nearly constant with large n. So I think it's okay to stick to the set.

    To improve performance I tried to express it in different shapes:

        return l._cost < r._cost || r._cost > l._cost || l._id < r._id;
        return l._cost < r._cost || (l._cost == r._cost && l._id < r._id);

    Even this:

        typedef union {
            float _f;
            int _i;
        } flint;
        // ...
        flint diff;
        diff._f = (l._cost - r._cost);
        return (diff._i && diff._i >> 31) || l._id < r._id;

    But the compiler seems to be smart enough already, because I haven't been able to improve the runtime. I also thought about SSE, but this problem is really not very applicable to SSE... The assembly looks somewhat like this:

        movss  (%rbx),%xmm1
        mov    $0x1,%r8d
        movss  0x20(%rdx),%xmm0
        ucomiss %xmm1,%xmm0
        ja     0x410600 <_ZNSt8_Rb_tree[..]+96>
        ucomiss %xmm0,%xmm1
        jp     0x4105fd <_ZNSt8_Rb_[..]_+93>
        jne    0x4105fd <_ZNSt8_Rb_[..]_+93>
        mov    0x28(%rdx),%rax
        cmp    %rax,0x8(%rbx)
        jb     0x410600 <_ZNSt8_Rb_[..]_+96>
        xor    %r8d,%r8d

    I have a very tiny bit of experience with assembly language, but not really much. I thought this would be the best (only?) point to squeeze out some performance, but is it really worth the effort? Can you see any shortcuts that could save some cycles? The platform the code will run on is Ubuntu 12 with gcc 4.6 (-std=c++0x) on a many-core Intel machine. The only libraries available are boost, OpenMP and TBB. I am really stuck on this one; it seems so simple, but takes that much time. I have been crunching my head for days thinking about how I could improve this line... Can you give me a suggestion on how to improve this part, or is it already at its best?
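    One zero-cost experiment, for what it's worth: the same ordering can be written with std::tie, which gets the strict-weak-ordering plumbing right by construction. This is only a sketch of an equivalent shape (descending cost, then ascending id, matching the readable version above); with gcc 4.6 and -std=c++0x it will most likely compile to the same branches rather than save cycles:

        #include <tuple>

        template<class T>
        struct lt_entry {
            bool operator()(const T &l, const T &r) const {
                // lexicographic compare: higher cost wins, ties broken by lower id
                return std::tie(r._cost, l._id) < std::tie(l._cost, r._id);
            }
        };

    If the comparator really is this minimal, the 25% - 30% figure is likely dominated by the cache misses of walking red-black tree nodes rather than by the comparison itself; the profiler's cache-miss counters can confirm that before any further micro-tuning.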

    Read the article

  • Effective optimization strategies on modern C++ compilers

    - by user168715
    I'm working on scientific code that is very performance-critical. An initial version of the code has been written and tested, and now, with profiler in hand, it's time to start shaving cycles from the hot spots. It's well known that some optimizations, e.g. loop unrolling, are handled these days much more effectively by the compiler than by a programmer meddling by hand. Which techniques are still worthwhile? Obviously, I'll run everything I try through a profiler, but if there's conventional wisdom as to what tends to work and what doesn't, it would save me significant time. I know that optimization is very compiler- and architecture-dependent. I'm using Intel's C++ compiler targeting the Core 2 Duo, but I'm also interested in what works well for gcc, or for "any modern compiler." Here are some concrete ideas I'm considering:

    - Is there any benefit to replacing STL containers/algorithms with hand-rolled ones? In particular, my program includes a very large priority queue (currently a std::priority_queue) whose manipulation is taking a lot of total time. Is this something worth looking into, or is the STL implementation already likely the fastest possible?
    - Along similar lines, for std::vectors whose needed sizes are unknown but have a reasonably small upper bound, is it profitable to replace them with statically allocated arrays?
    - I've found that dynamic memory allocation is often a severe bottleneck, and that eliminating it can lead to significant speedups. As a consequence I'm interested in the performance tradeoffs of returning large temporary data structures by value vs. returning by pointer vs. passing the result in by reference. Is there a way to reliably determine whether or not the compiler will use RVO for a given method (assuming the caller doesn't need to modify the result, of course)?
    - How cache-aware do compilers tend to be? For example, is it worth looking into reordering nested loops?
    - Given the scientific nature of the program, floating-point numbers are used everywhere. A significant bottleneck in my code used to be conversions from floating point to integers: the compiler would emit code to save the current rounding mode, change it, perform the conversion, then restore the old rounding mode --- even though nothing in the program ever changed the rounding mode! Disabling this behavior significantly sped up my code. Are there any similar floating-point-related gotchas I should be aware of? (One related workaround is sketched after this post.)
    - One consequence of C++ being compiled and linked separately is that the compiler is unable to do what would seem to be very simple optimizations, such as moving method calls like strlen() out of the termination conditions of loops. Are there any optimizations like this one that I should look out for because they can't be done by the compiler and must be done by hand?
    - On the flip side, are there any techniques I should avoid because they are likely to interfere with the compiler's ability to automatically optimize code?

    Lastly, to nip certain kinds of answers in the bud: I understand that optimization has a cost in terms of complexity, reliability, and maintainability. For this particular application, increased performance is worth these costs. I understand that the best optimizations are often to improve the high-level algorithms, and this has already been done.
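    On the float-to-integer point above, one related workaround worth knowing about: the lrint / lrintf family (C99, std::lrint in C++11's <cmath>, and available as an extension in many earlier toolchains) converts using the current rounding mode, which permits a single conversion instruction instead of the save/change/convert/restore sequence that a truncating cast can force on x87. A sketch; instruction selection of course varies by compiler and flags:

        #include <cmath>
        #include <cstdio>

        int main() {
            const double samples[] = { 2.5, -1.25, 7.99 };
            const int n = sizeof(samples) / sizeof(samples[0]);
            for (int i = 0; i < n; ++i) {
                // Truncating cast: rounds toward zero, which is what may force
                // the compiler to switch the x87 rounding mode back and forth.
                long truncated = static_cast<long>(samples[i]);
                // lrint: rounds in the current mode (round-to-nearest by default),
                // typically a single cvtsd2si-style instruction with SSE2.
                long nearest = std::lrint(samples[i]);
                std::printf("%+.2f -> cast %ld, lrint %ld\n", samples[i], truncated, nearest);
            }
            return 0;
        }

    The semantic difference matters: the cast truncates (7.99 becomes 7) while lrint rounds to nearest (7.99 becomes 8), so it is only a drop-in replacement where round-to-nearest is acceptable.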

    Read the article

< Previous Page | 546 547 548 549 550 551 552 553 554 555 556 557  | Next Page >