Search Results

Search found 286 results on 12 pages for 'jared gross'.

  • Windows Messages Bizarreness

    - by jameszhao00
    Probably just a gross oversight of some sort, but I'm not receiving any WM_SIZE messages in the message loop. However, I do receive them in the WndProc. I thought the Windows message loop handed messages out to WndProc?

        LRESULT CALLBACK WndProc(HWND hWnd, UINT message, WPARAM wParam, LPARAM lParam)
        {
            switch (message)
            {
            // this message is read when the window is closed
            case WM_DESTROY:
                // close the application entirely
                PostQuitMessage(0);
                return 0;
            case WM_SIZE:
                return 0;
            }
            printf("wndproc - %i\n", message);
            // Handle any messages the switch statement didn't
            return DefWindowProc(hWnd, message, wParam, lParam);
        }

    ... and now the message loop...

        while (TRUE)
        {
            // Check to see if any messages are waiting in the queue
            if (PeekMessage(&msg, NULL, 0, 0, PM_REMOVE))
            {
                // translate keystroke messages into the right format
                TranslateMessage(&msg);
                // send the message to the WindowProc function
                DispatchMessage(&msg);
                // check to see if it's time to quit
                if (msg.message == WM_QUIT)
                    break;
                if (msg.message == WM_SIZING)
                    printf("loop - resizing...\n");
            }
            else
            {
                // do other stuff
            }
        }
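    The likely explanation, for what it's worth: WM_SIZE is a sent message, not a posted one. During an interactive resize, Windows runs a modal loop inside DefWindowProc and delivers WM_SIZE straight to the window procedure, and PeekMessage/GetMessage only ever see posted (queued) messages. So WndProc is the place to observe resizes, e.g.:

        case WM_SIZE:
            // sent directly to WndProc by the modal sizing loop; it never
            // passes through the PeekMessage/DispatchMessage queue
            printf("wndproc - WM_SIZE %d x %d\n", LOWORD(lParam), HIWORD(lParam));
            return 0;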

    Read the article

  • Retrieve Records from paypal csv file

    - by Pankaj Khurana
    Hi, I want to retrieve all the records from a PayPal csv file where Type='Payment Processed' and store them in a database table. Each record should be available in the format 'Heading':'Value'. The format of the csv is:

        "Transaction ID","Reference Transaction ID","Date","Type","Subject","Item Number","Item Name","Invoice ID","Name","Email","Shipping Name","Shipping Address Line 1","Shipping Address Line 2","Shipping Address City","Shipping State/Province","Shipping Zip/Postal Code","Shipping Address Country","Shipping Method","Address Status","Contact Phone Number","Gross Amount","Receipt ID","Custom Field","Option 1 Name","Option 1 Value","Option 2 Name","Option 2 Value","Note","Auction Site","Auction User ID","Item URL","Auction Closing Date","Insurance Amount","Currency","Fees","Net Amount","Shipping & Handling Amount","Sales Tax Amount","To Email","Time","Time Zone"
        "1T","",5/5/2010 2:10:44 PM,"Payment Processed","CFP Self Study Kit","1","CFP Self Study Kit","","User1","[email protected]","","","","","","","","","N","","68.18","R1","","","","","","","","","",,"","USD","-2.62","65.56","0","0","[email protected]","01:40","Asia/Calcutta"
        "2T","",5/19/2010 4:04:08 PM,"Payment Processed","CFP Self Study Kit","1","CFP Self Study Kit","","User2","[email protected]","","","","","","","","","N","","68.18","R2","","","","","","","","","",,"","USD","-2.62","65.56","0","0","[email protected]","03:34","Asia/Calcutta"
        "3T","1RT",5/19/2010 5:28:45 PM,"Currency Conversion Completed","","","",""," ","","","","","","","","","","N","","17492.6","","","","","","","","","","",,"","INR","0","17492.6","0","0","","04:58","Asia/Calcutta"
        "4T","2RT",5/19/2010 5:28:45 PM,"Currency Conversion Completed","","","",""," ","","","","","","","","","","N","","-393.36","","","","","","","","","","",,"","USD","0","-393.36","0","0","","04:58","Asia/Calcutta"
        "5T","",5/19/2010 5:28:45 PM,"Transfer to Bank Initiated","P1006","","P1006",""," ","","","","","","","","","","N","","-17492.6","","","","","","","","","","",,"","INR","0","-17492.6","0","0","","04:58","Asia/Calcutta"
        "6T","",5/20/2010 5:38:02 PM,"Transfer to Bank Completed","P1006","","P1006",""," ","","","","","","","","","","N","","-17492.6","","","","","","","","","","",,"","INR","0","-17492.6","0","0","","05:08","Asia/Calcutta"
        "7T","",5/21/2010 12:32:37 PM,"Payment Processed","FP - LVC Plus","","FP - LVC Plus","","User3","[email protected]","User3","NEW DELHI","BEHIND KARNATAKA BANK LD","SOUTH","NEW DELHI","110023","IN","","N","","283.96","","","","","","","","","","",,"","USD","-9.95","274.01","0","0","[email protected]","00:02","Asia/Calcutta"
        "8T","",5/25/2010 4:40:48 PM,"Transfer to Bank Initiated","P1006","","P1006",""," ","","","","","","","","","","N","","-12569.85","","","","","","","","","","",,"","INR","0","-12569.85","0","0","","04:10","Asia/Calcutta"
        "9T","3RT",5/25/2010 4:40:48 PM,"Currency Conversion Completed","","","",""," ","","","","","","","","","","N","","-274.01","","","","","","","","","","",,"","USD","0","-274.01","0","0","","04:10","Asia/Calcutta"
        "10T","4RT",5/25/2010 4:40:48 PM,"Currency Conversion Completed","","","",""," ","","","","","","","","","","N","","12569.85","","","","","","","","","","",,"","INR","0","12569.85","0","0","","04:10","Asia/Calcutta"
        "11T","",5/26/2010 4:57:39 PM,"Transfer to Bank Completed","P1006","","P1006",""," ","","","","","","","","","","N","","-12569.85","","","","","","","","","","",,"","INR","0","-12569.85","0","0","","04:27","Asia/Calcutta"
        "Total","-247.05 USD","-15.19","-262.24"
        "Total","0.00 INR","0.00","0.00"

    Please help me on this. Thanks

    Read the article

  • Javascript Rich Display Component/Methodology

    - by Laramie
    Quick back story -- I am working on an ASP.Net-based template editor that lets authors create text templates using Javascript-inserted placeholder tags that will be filled in with dynamic text when the templates are used to display the final results. For example, the author might create a template like "The word [%12#add] was generated dynamically." The application would eventually replace the tag with a dynamic word down the road (though it's not specifically relevant to this post): "The word foo was generated dynamically." Depending on the circumstances, the template may be created in a text input, a textarea, or a modified version of the Ajax Control Toolkit HTML Editor. There might be 40 or more of these editable elements on the page, so using lots of stripped-down or modified HTML editors would probably bog the page down too much. The problem is that tags such as [%12#add] are displayed inline with the user text, and the result is confusing and aesthetically gross. The goal is to parse the contents of the source element and, when a tag such as [%12#add] is encountered, display something prettier and less cryptic to the user, such as a stylable element or image. The application still needs the template text with the tags on postback. So the user might see "The word tag placeholder was generated dynamically." but the original template would still be the value of the text input box: "The word [%12#add] was generated dynamically." It seems HTML editors like the ACT version and FckEditor accomplish this by rendering their output in an IFrame, but rather than kill myself trying to roll a lighter specialized version, I thought I'd ask if anyone knows of an existing free component or approach that has already tackled this. With good reason, I don't think S.O. allows HTML formatting, but the "tag placeholder" above would ideally be something like a styled element.
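    One lightweight approach, sketched: leave the raw template in the input for postback and render a prettier copy into a separate read-only preview element. The element ids and the .tag-placeholder class here are invented for illustration:

        // Re-render the preview whenever the template text changes.
        function renderPreview(input, preview) {
            // escape the user's text so it can't inject markup
            var escaped = $('<div/>').text($(input).val()).html();
            // swap each [%12#add]-style tag for a stylable placeholder span
            var pretty = escaped.replace(/\[%\d+#\w+\]/g,
                '<span class="tag-placeholder">tag placeholder</span>');
            $(preview).html(pretty);
        }
        $('#template1').keyup(function () {
            renderPreview(this, '#template1-preview');
        });

    The input itself can then be visually de-emphasized or hidden; on postback the server still sees the original tagged template.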

    Read the article

  • Is it legal to extend the Class class?

    - by spiralganglion
    I've been writing a system that generates some templates, and then generates some objects based on those templates. I had the idea that the templates could be extensions of the Class class, but that resulted in some magnificent errors:

        VerifyError: Error #1107: The ABC data is corrupt, attempt to read out of bounds.

    What I'm wondering is if subclassing Class is even possible, and if there is perhaps some case where doing this would be appropriate and not just a gross misuse of OOP. I believe it should be possible, as ActionScript allows you to create variables of the Class type. This use is described in the LiveDocs entry for Class, yet I've seen no mention of subclassing Class. Here's a pseudocode example:

        class Foo extends Class

        var a:Foo = new Foo();
        trace(a is Class) // true, right?
        var b = new a();

    I have no idea what the type of b would be. In summary: can you subclass the Class class? If so, how can you do it without errors, and what type are the instances of the instances of the Class subclass?
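    A hedged alternative sketch in ActionScript 3: rather than subclassing Class (which the VerifyError above suggests the verifier rejects outright), hold a Class reference inside an ordinary template object and instantiate through it. "Template" is an invented name here, and the package/imports are omitted:

        public class Template {
            private var type:Class;
            public function Template(type:Class) {
                this.type = type;
            }
            public function create():Object {
                return new type(); // 'new' on a Class-typed variable is legal AS3
            }
        }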

    Read the article

  • MySQL Gurus: How to pull a complex grid of data from MySQL database with one query?

    - by iopener
    Hopefully this is less complex than I think. I have one table of companies, another table of jobs, and a third table that contains a single entry for each employee in each job from each company. NOTE: Some companies won't have employees in some jobs, and some companies will have more than one employee in some jobs. The company table has companyid and companyname fields, the job table has jobid and jobtitle fields, and the employee table has employeeid, companyid, jobid and employeename fields. I want to build a table like this:

              +-----------+-----------+-----------+
              | Company A | Company B | Company C |
        ------+-----------+-----------+-----------+
        Job A | Emp 1     | Emp 2     |           |
        ------+-----------+-----------+-----------+
        Job B | Emp 3     |           | Emp 4     |
              |           |           | Emp 5     |
        ------+-----------+-----------+-----------+
        Job C |           | Emp 6     |           |
              |           | Emp 7     |           |
              |           | Emp 8     |           |
        ------+-----------+-----------+-----------+

    I had previously been looping through a result set of jobs, and for each job, looping through a result set of companies, and for each company, looping through each employee and printing it in a table (gross, but performance was not supposed to be a consideration). The app has grown in popularity, and now we have 100 companies and hundreds of jobs, and the server is crapping out (all the id fields are indexed). Any suggestions on how to write a single query to get this data? I don't need the company names or job titles (obviously), but I do need some way to identify where each row from the result should be printed. I'm imagining a result set that just contained a long list of joined employees, and I could write a loop to use the companyid and employeeid values to tell me when to create a new cell or table row. This works as long as there aren't ZERO employees; I would need a NULL employee name for that, I think? Am I completely on the wrong track? Thanks in advance for any ideas!
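    A single-query sketch using the table and column names described above: cross join jobs with companies so that empty cells still come back (with a NULL employeename), then left join the employees. The rendering loop starts a new table row whenever jobid changes and a new cell whenever companyid changes.

        -- one flat, ordered result set for the rendering loop to walk
        SELECT j.jobid, c.companyid, e.employeename
        FROM job j
        CROSS JOIN company c
        LEFT JOIN employee e
               ON e.jobid = j.jobid
              AND e.companyid = c.companyid
        ORDER BY j.jobid, c.companyid, e.employeename;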

    Read the article

  • Tablet interface for the physically disabled?

    - by Glenn
    My sister has Cerebral Palsy, which in her case means she has only gross motor control and her speech is slurred. Implications should be obvious: traditional computer/phone/tablet interfaces won't work for her, and she can't speak clearly enough for speech recognition software to help her at all. She enjoys reading but has difficulty holding the book and/or turning a page. There are a few options for helping her use a computer, but nothing for tablets or eReaders. That's where you come in. I would like to make (or buy, if such a thing exists) a better interface to an Android tablet that would work for someone with little to no physical dexterity or speech ability. I'd also be interested in work on the Kindle or iPad, but I'm most familiar with Android so I'm starting there. I know Android has Bluetooth capability. Is it possible to interface a joystick to control the Android device? By "control", I mean the entire operating system - selecting an app, launching it, controlling the menus, etc. I want to give her control over the whole thing, not just a specific app. On a PC this can be accomplished by creating a generic USB HID interface and an arcade joystick to move the mouse over the screen and click on things. Is it possible to do something like that in Android? Any help you can offer would be greatly appreciated. Thanks!

    Read the article

  • Pre-populate iPhone Safari SQLite DB

    - by Matt Rogish
    I'm working with a PhoneGap app that uses Safari local storage (SQLite DB) via Javascript: http://developer.apple.com/safari/library/documentation/iPhone/Conceptual/SafariJSDatabaseGuide/UsingtheJavascriptDatabase/UsingtheJavascriptDatabase.html On first load, the app creates the database, tables, and populates the data via a series of INSERT statements. If the user closes the app while this processing is happening, then my app database is left in an inconsistent state. What I'd prefer to do is deploy the SQLite DB as part of my iTunes App packaging so nothing must be populated at app cold start. However, I'm not sure if that is possible -- all of the google hits for this topic that I can find are referring to the core-data provided SQLite, which is not what we're using... If it's not possible, could I wrap the entire thing in a transaction and keep re-trying it when the app is restarted? Failing that, I guess I can create a simple table with one boolean column "is_app_db_loaded?" and set it to true after I've processed all my inserts. But that's really gross... Ideas? Thanks!!
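    For what it's worth, the transaction idea is workable because the Javascript database API rolls the whole transaction back if any statement in it fails, so the flag and the seed data land atomically. A sketch with invented table names:

        var db = openDatabase('appdb', '1.0', 'App data', 5 * 1024 * 1024);
        db.transaction(function (tx) {
            tx.executeSql('CREATE TABLE IF NOT EXISTS app_state (loaded INTEGER)');
            tx.executeSql('SELECT loaded FROM app_state', [], function (t, rs) {
                if (rs.rows.length === 0) {
                    // seed everything here; an error anywhere rolls back the
                    // INSERTs *and* the flag, so a killed app simply re-runs
                    // this block on the next cold start
                    t.executeSql('CREATE TABLE IF NOT EXISTS items (id INTEGER, name TEXT)');
                    t.executeSql('INSERT INTO items VALUES (?, ?)', [1, 'example']);
                    t.executeSql('INSERT INTO app_state (loaded) VALUES (1)');
                }
            });
        });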

    Read the article

  • JS/CSS include section replacement, Debug vs Release

    - by Bayard Randel
    I'd be interested to hear how people handle conditional markup, specifically in their masterpages, between release and debug builds. The particular scenario this is applicable to is handling concatenated js and css files. I'm currently using the .Net port of the YUI Compressor to produce a single site.css and site.js from a large collection of separate files. One thought that occurred to me was to place the js and css include section in a user control or collection of panels and conditionally display the <link> and <script> markup based on the Debug or Release state of the assembly. Something along the lines of:

        #if DEBUG
            pnlDebugIncludes.Visible = true;
        #else
            pnlReleaseIncludes.Visible = true;
        #endif

    The panel is really not very nice semantically -- wrapping <script> tags in a <div> is a bit gross; there must be a better approach. I would also think that a block-level element like a <div> within <head> would be invalid html. Another idea was that this could possibly be handled using web.config section replacements, but I'm not sure how I would go about doing that.
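    One option that avoids both the <div> and the validity problem: asp:PlaceHolder emits no wrapper markup of its own, so it is safe inside <head>. A sketch only -- the control ids and file names are illustrative:

        <asp:PlaceHolder ID="phDebugIncludes" runat="server">
            <link rel="stylesheet" href="css/fileA.css" />
            <link rel="stylesheet" href="css/fileB.css" />
            <script src="js/fileA.js" type="text/javascript"></script>
        </asp:PlaceHolder>
        <asp:PlaceHolder ID="phReleaseIncludes" runat="server" Visible="false">
            <link rel="stylesheet" href="css/site.css" />
            <script src="js/site.js" type="text/javascript"></script>
        </asp:PlaceHolder>

    with the master page's code-behind toggling them:

        protected void Page_Load(object sender, EventArgs e)
        {
        #if DEBUG
            phDebugIncludes.Visible = true;
            phReleaseIncludes.Visible = false;
        #else
            phDebugIncludes.Visible = false;
            phReleaseIncludes.Visible = true;
        #endif
        }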

    Read the article

  • Avoiding mass propagation of properties and events for exposure to ViewModels.

    - by firoso
    I have an MVVM application I am developing that is to the point where I'm ready to start putting together a user interface (my client code is largely functional). I'm now running into the issue that I'm trying to get my application data to where I need it so that it can be consumed by the view model and then bound to the view. Unfortunately, it seems that I've either got a few structural oversights, or I'm just going to have to face the reality that I need to be propagating events and raising excessive numbers of them to notify view models that their properties have changed. Let me go into some examples of my issue: I have a class "Unit" contained in a class "Test", contained in a class "Session", contained in a class "TestManager", which is contained in "TestDataModel", which is utilized by "TestViewModel", which is databound to by my "TestView" .... WHOA. Now, consider that Unit (the bottom of the hierarchy) has a property called "Results" that is updated periodically. I want to expose that to my viewmodel and then databind it to my view. Trouble is, the only way I can really think to do this is to propagate events WAY up a chain that say "I've been updated!" and then request the new value... This seems like an awful way to do this. Alternatively, I could register a static event and raise it, and have the appropriate "Unit view model" grab the event and request the update. This SEEMS better... but... static events? Is that a taboo idea? Also, having an expression like:

        TestDataModel.TestManager.Session.Test.Unit.Results[i]

    seems REALLY gross to have on a View Model. I know this all reeks of a bad design issue, but I can't figure out what I did wrong? Should I be using more singleton/container-controlled-lifetime type objects? Register object instances with static helper containers? Obviously these are hard questions to answer without being intimate with the existing structure, but if you've run into situations like this, what did you do to refactor? Should I just live with this, add mass events, and propagate them?
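    The usual way out of bubbling events up such a chain is a central messenger that the bottom of the hierarchy publishes to and the view models subscribe to directly. A sketch only -- ResultMessenger and its signatures are invented for illustration, not existing types:

        using System;

        public static class ResultMessenger
        {
            // unit id plus its new result; subscribers filter on the id
            public static event Action<string, double> ResultUpdated;

            public static void Publish(string unitId, double result)
            {
                var handler = ResultUpdated;
                if (handler != null) handler(unitId, result);
            }
        }

        // In Unit, when Results changes:  ResultMessenger.Publish(Id, newResult);
        // In UnitViewModel: subscribe once, ignore other units' ids, and raise
        // PropertyChanged("Results") -- nothing in between forwards anything.

    Static events do mean view models must unsubscribe when they die, or they'll be kept alive; a weak-event or IoC-scoped messenger addresses that.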

    Read the article

  • Is it OK to write code after [super dealloc]? (Objective-C)

    - by Richard J. Ross III
    I have a situation in my code where I cannot clean up my class's objects without first calling [super dealloc]. It is something like this:

        // Baseclass.m
        @implementation Baseclass
        ...
        - (void)dealloc {
            [self _removeAllData];
            [aVariableThatBelongsToMe release];
            [anotherVariableThatBelongsToMe release];
            [super dealloc];
        }
        ...
        @end

    This works great. My problem is, when I went to subclass this huge and nasty class (over 2000 lines of gross code), I ran into a problem: when I released my objects before calling [super dealloc], I had zombies running through the code that were activated when I called the [self _removeAllData] method.

        // Subclass.m
        @implementation Subclass
        ...
        - (void)dealloc {
            [super dealloc];
            [someObjectUsedInTheRemoveAllDataMethod release];
        }
        ...
        @end

    This works great, and it didn't require me to refactor any code. My question is this: Is it safe for me to do this, or should I refactor my code? Or maybe autorelease the objects? I am programming for iPhone if that matters any.
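    A safer ordering, sketched, that avoids code after [super dealloc] entirely (once it returns, the object's memory may already be gone): release and nil the subclass ivars first, and let _removeAllData tolerate the nils. Names match the snippets above.

        // Subclass.m -- sketch
        - (void)dealloc {
            [someObjectUsedInTheRemoveAllDataMethod release];
            someObjectUsedInTheRemoveAllDataMethod = nil; // messages to nil are no-ops,
                                                          // so _removeAllData stays safe
            [super dealloc]; // keep this last
        }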

    Read the article

  • SQL Query to separate data into two fields

    - by Phillip
    I have data in one column that I want to separate into two columns. The data is separated by a comma if present. This field can have no data, only one set of data, or two sets of data separated by the comma. Currently I pull the data and save it as a comma-delimited file, then use FoxPro to load the data into a table, process the data as needed, and then re-insert the data into a different SQL table for my use. I would like to drop the FoxPro portion and have the SQL query separate the data for me. Below is a sample of what the data looks like:

        Store  Amount  Discount
        1      5.95
        1      5.95    PO^-479^2
        1      5.95    PO^-479^2
        2      5.95
        2      5.95    PO^-479^2
        2      5.95    +CA8A09^-240^4,CORDRC^-239^7
        3      5.95
        3      5.95    +CA8A09^-240^4,CORDRC^-239^7
        3      5.95    +CA8A09^-240^4,CORDRC^-239^7

    In the data above I want to sum the data in the amount field to get a gross amount. Then pull out the specific discount amount, which is located between the caret characters, and sum it to get the total discount amount. Then add the two together to get the total net amount. The query I want to write will separate the discount field as needed (see store 2 line 3 for two discounts being applied), then pull out the value between caret characters.
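    A sketch in T-SQL flavor -- adapt CHARINDEX/SUBSTRING to your dialect, and Sales is an illustrative table name -- that splits Discount at the comma into two fields:

        SELECT Store,
               Amount,
               CASE WHEN Discount LIKE '%,%'
                    THEN LEFT(Discount, CHARINDEX(',', Discount) - 1)
                    ELSE Discount
               END AS Discount1,
               CASE WHEN Discount LIKE '%,%'
                    THEN SUBSTRING(Discount, CHARINDEX(',', Discount) + 1, LEN(Discount))
                    ELSE NULL
               END AS Discount2
        FROM Sales;
        -- the amount between the two carets of each part can then be isolated
        -- the same way, with nested CHARINDEX calls feeding SUBSTRING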

    Read the article

  • Can Remote Desktop Services be deployed and administered by PowerShell alone, without a Domain, in Windows Server 2012 and 2012 R2?

    - by Warren P
    Windows Server 2008 R2 allowed deployment of Terminal Server (Remote Desktop Services) without a domain, and without any insistence on domains. This was very useful, especially for standalone virtual or cloud deployments of a server that is managed remotely for a remote client who has no need or desire for any ActiveDirectory or Domain features. This has become steadily more and more difficult as Microsoft restricts its technologies further and further in each Windows release. With Windows Server 2012, configuring licensing for Remote Desktop Services is more difficult when not on a domain, but still possible. With Windows Server 2012 R2 (at least in the preview) the barriers are now severe: the Add/Remove Roles and Features wizard in Windows Server 2012 R2 has a special RDS deployment mode with a rule that says if you aren't on a domain you can't deploy. It tells you to create or join a domain first. This of course comes in direct conflict with the fact that an Active Directory domain controller should not be the same machine as a terminal server machine. So Microsoft's technology is not so much a Cloud Operating System as a Cluster of Unwanted Nodes, needed to support the one machine I actually WANT to deploy. This is gross, and so I am trying to find a workaround. However, if you skip that wizard and just go check the checkboxes in the main Roles/Features wizard, you can deploy the features, but the UI is not there to configure them, and when you go back to the RDS configuration page on the roles wizard, you get a message saying you cannot administer your Remote Desktop Services system when you are logged in as a Local-Computer Administrator, because although you have all the admin privileges you could have (in your workgroup-based system), the RDS configuration UI will not accept those credentials and let you continue.

    My question in brief is: can I still somehow obtain the following end result?

    - I need to allow 10-20 users per system to have an RDS (TS) session.
    - I do not need any of the fancy-pants RDS options, unless Microsoft somehow depends on those features being present. I believe I need the "RDS Session Host", as this is the guts of "Terminal Server". Microsoft says it is the "full Windows desktop for Remote Desktop Services client".
    - I need to configure licensing so that the Grace Period does not expire leaving my RDS non-functional, so this probably means I need a way to configure TS CALs.

    If all of the above could technically be done with the judicious use of PowerShell, I am prepared to even consider developing all the PowerShell scripts I would need to do the above. I'm not asking someone to write that for me. What I'm asking is: does anyone know if there is a technical impediment to what I want to do above, other than the deliberate crippling of the 2012 R2 UI for Workgroup users? Would the underlying technologies all still work if I manipulate and control them from a PowerShell script? Obviously a one-word Yes or No answer isn't that useful to anyone, so the question is really: yes or no, and why? In the case the answer is Yes, then how.
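    For what it's worth, the workgroup route usually attempted is to skip the deployment wizard, add the Session Host role service directly, and drive licensing through WMI. Whether that fully substitutes for the wizard on 2012 R2 is exactly the open question here, so treat this as a sketch, not a recipe ("licserver" is a placeholder name):

        # run elevated on the prospective session host
        Import-Module ServerManager
        Add-WindowsFeature RDS-RD-Server -IncludeManagementTools

        # licensing mode and license server, via WMI instead of the blocked UI
        $ts = Get-WmiObject -Namespace "root\cimv2\terminalservices" `
                            -Class Win32_TerminalServiceSetting
        $ts.ChangeMode(4)                                # 2 = per device, 4 = per user
        $ts.SetSpecifiedLicenseServerList("licserver")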

    Read the article

  • On ESXi, guest machines hang for significant intervals compared to real machines. How can I fix this?

    - by Tarbox
    This is ESXi version 5.0.0. We plan on upgrading to 5.5 eventually. I have four code profiles: two taken on a real, unvirtualized machine, two taken on a virtual machine. Ordering the list of subroutines by time spent in each one, the two real profiles are practically identical. The two virtual profiles are different from each other and from the real profiles: a subset of subroutines are taking a lot more time on the virtual machines, and the subset is different for each run. The two virtual profiles take a similar amount of time, which is 3 times the amount of time the real profiles take. This gross "how long does it take?" result is consistent after hundreds of tests across three different virtual machines on two different host machines -- the virtual machine is just slower. I've only run the code profiling on the four, however. Here's the most guilty set of lines.

    This is the real machine:

        8µs     $text = '' unless defined $text;
        1.48ms  foreach ( split( "\n", $text ) ) {

    This is the first run on the virtual machine:

        20.1ms  $text = '' unless defined $text;
        1.49ms  foreach ( split( "\n", $text ) ) {

    This is the second run on the virtual machine:

        6µs     $text = '' unless defined $text;
        21.9ms  foreach ( split( "\n", $text ) ) {

    My WAG is that the VM is swapping out the thread and then swapping it back in, destroying some level of cache in the process, but these code profiles were taken when the VM in question was the only active VM on the host, so... what? What does that mean? The guest itself is under light load; this is a latency problem for my users rather than throughput. The host is also under a light load; if I knew what resources to assign where, I could do it without worrying about the cost. I've attempted to lock memory, reserve CPU, assign a restrictive affinity, and disable hyperthread sharing. They don't help; it still takes the VM 2-4x the amount of time to do the same thing as the real machine. The host the tests were run on is 6x2.50GHz, Intel Xeon E5-2640 w/ 16 gigs of ram. The guest exhibits the same performance under a wide combination of settings. The real machine is 4x2.13GHz, Xeon E5506 w/ 2 gigs of ram. Thank you for all advice.

    Read the article

  • Kaiden and the Arachnoid Cyst

    - by Martin Hinshelwood
    Some of you may remember when my son Kaiden was born I posted pictures of him and his sister. Kaiden is now 15 months old and is progressing perfectly in every area except one: we had been worried that he was not walking yet. We were only really concerned as his sister was walking at 8 months.

    Figure: Kai as his usual self

    Jadie and I were concerned over that, and that he had a rather large head (noggin), so we talked to various GPs and our health visitor, who immediately dismissed our concerns every time. That was until about two months ago, when we happened to get a GP whose daughter had Hyper Mobility and she recognised the symptoms immediately. We were referred to the Southbank clinic, who were lovely, and the paediatrician confirmed that he had Hyper Mobility after testing all of his faculties. This just means that his joints are overly mobile and he would need a little physiotherapy to help him out. At the end the paediatrician remarked offhand that he has a rather large head and wanted to measure it. Sure enough he was a good margin above the highest percentile mark for his height and weight. The paediatrician showed the measurements to a paediatric consultant who, as a precautionary measure, referred us for an MRI at Yorkhill Children's hospital. Now, Yorkhill has always been fantastic to us, and this was no exception. You know we have NEVER had a correct diagnosis for the kids (with the exception of the above) from a GP, and indeed twice have been prescribed incorrect medication that made the kids sicker! We now always go straight to Yorkhill to save them having to fix GP mistakes as well.

    Monday 24th May, 7pm

    The scan went fantastically, with Kaiden sleeping in the MRI machine for all but 5 minutes at the end, where he waited patiently for it to finish. We were not expecting anything to be wrong, as this was just a precautionary scan to make sure that nothing in his head was affecting his gross motor skills. After the scan we were told to expect a call towards the end of the week…

    Tuesday 25th May, 12pm

    The very next day we got a call from Southbank, who said that they had found an Arachnoid Cyst and could we come in the next day to see a Consultant, and that Kai would need an operation.

    Wednesday 26th May, 12:30pm

    We went into the Southbank clinic and spoke to the paediatric consultant, who assured us that it was operable but that it was taking up considerable space in Kai's head. Cerebrospinal fluid is building up as a cyst is blocking the channels it uses to drain. Thankfully they told us that prospects were good and that Kai would expect to make a full recovery, before showing us the MRI pictures.

    Figure: Normal brain MRI cross section. This normal scan shows the spaces in the middle of the brain that contain and produce the Cerebrospinal fluid.

    Figure: Normal Cerebrospinal Flow. This fluid is needed by the brain but is drained in the middle down the spinal column.

    Figure: Kai's cyst blocking the four channels. I do not think that I need to explain the difference between the healthy picture and Kai's picture. However you can see in this first picture the faint outline of the cyst in the middle that is blocking the four channels from draining.

    After seeing the scans a Neurosurgeon has decided that he is not acute, but needs an operation to unblock the flow.

    Figure: OMFG! You can see in the second picture the effect of the build up of fluid. If I was not horrified by the first picture, I was seriously horrified by this one.

    What next?
    Kai is not presenting the symptoms of vomiting or listlessness that would show an immediate problem, and as such we will get an appointment to see the Paediatric Neurosurgeon at the Southern General hospital in about 4 weeks. This timescale is based on the Neurosurgeon seeing the scans. After that Kai will need an operation to release the pressure and either remove the cyst completely or put in a permanent shunt (tube from brain to stomach) to bypass the blockage. We have updated his notes for the referral with additional recent information on top of the scan that the consultant thinks will help improve the timescales, but that is just a guess. All we can do now is wait and see, and be watchful for telltale signs of listlessness, eye problems and vomiting that would signify a worsening of his condition. Technorati Tags: Personal

    Read the article

  • Real-world SignalR example, ditching ghetto long polling

    - by Jeff
    One of the highlights of BUILD last week was the announcement that SignalR, a framework for real-time client to server (or cloud, if you will) communication, would be a real supported thing now with the weight of Microsoft behind it. Love the open source flava! If you aren't familiar with SignalR, watch this BUILD session with PM Damian Edwards and dev David Fowler. Go ahead, I'll wait. You'll be in a happy place within the first ten minutes. If you skip to the end, you'll see that they plan to ship this as a real first version by the end of the year. Insert slow clap here.

    Writing a few lines of code to move around a box from one browser to the next is a way cool demo, but how about something real-world? When learning new things, I find it difficult to be abstract, and I like real stuff. So I thought about what was in my tool box and decided to port my crappy long-polling "there are new posts" feature of POP Forums to use SignalR.

    A few versions back, I added a feature where a button would light up while you were pecking out a reply if someone else made a post in the interim. It kind of saves you from that awkward moment where someone else posts some snark before you. While I was proud of the feature, I hated the implementation. When you clicked the reply button, it started polling an MVC URL asking if the last post you had matched the last one on the server, and it did it every second and a half until you either replied or the server told you there was a new post, at which point it would display that button. The code was not glam:

        // in the reply setup
        PopForums.replyInterval = setInterval("PopForums.pollForNewPosts(" + topicID + ")", 1500);

        // called from the reply setup and the handler that fetches more posts
        PopForums.pollForNewPosts = function (topicID) {
            $.ajax({
                url: PopForums.areaPath + "/Forum/IsLastPostInTopic/" + topicID,
                type: "GET",
                dataType: "text",
                data: "lastPostID=" + PopForums.currentTopicState.lastVisiblePost,
                success: function (result) {
                    var lastPostLoaded = result.toLowerCase() == "true";
                    if (lastPostLoaded) {
                        $("#MorePostsBeforeReplyButton").css("visibility", "hidden");
                    } else {
                        $("#MorePostsBeforeReplyButton").css("visibility", "visible");
                        clearInterval(PopForums.replyInterval);
                    }
                },
                error: function () {
                }
            });
        };

    What's going on here is the creation of an interval timer to keep calling the server and bugging it about new posts, and setting the visibility of a button appropriately. It looks like this if you're monitoring requests in FireBug: Gross.

    The SignalR approach was to call a message broker when a reply was made, and have that broker call back to the listening clients, via a SignalR hub, to let them know about the new post. It seemed weird at first, but the server-side hub's only method is to add the caller to a group, so new post notifications only go to callers viewing the topic where a new post was made. Beyond that, it's important to remember that the hub is also the means to calling methods at the client end. Starting at the server side, here's the hub:

        using Microsoft.AspNet.SignalR.Hubs;

        namespace PopForums.Messaging
        {
            public class Topics : Hub
            {
                public void ListenTo(int topicID)
                {
                    Groups.Add(Context.ConnectionId, topicID.ToString());
                }
            }
        }

    Have I mentioned how awesomely not complicated this is? The hub acts as the channel between the server and the client, and you'll see how JavaScript calls the above method in a moment.
    Next, the broker class and its associated interface:

        using Microsoft.AspNet.SignalR;
        using Topic = PopForums.Models.Topic;

        namespace PopForums.Messaging
        {
            public interface IBroker
            {
                void NotifyNewPosts(Topic topic, int lastPostID);
            }

            public class Broker : IBroker
            {
                public void NotifyNewPosts(Topic topic, int lastPostID)
                {
                    var context = GlobalHost.ConnectionManager.GetHubContext<Topics>();
                    context.Clients.Group(topic.TopicID.ToString()).notifyNewPosts(lastPostID);
                }
            }
        }

    The NotifyNewPosts method uses the static GlobalHost.ConnectionManager.GetHubContext<Topics>() method to get a reference to the hub, and then makes a call to clients in the group matched by the topic ID. It's calling the notifyNewPosts method on the client. The TopicService class, which handles the reply data from the MVC controller, has an instance of the broker new'd up by dependency injection, so it took literally one line of code in the reply action method to get things moving:

        _broker.NotifyNewPosts(topic, post.PostID);

    The JavaScript side of things wasn't much harder. When you click the reply button (or quote button), the reply window opens up and fires up a connection to the hub:

        var hub = $.connection.topics;
        hub.client.notifyNewPosts = function (lastPostID) {
            PopForums.setReplyMorePosts(lastPostID);
        };
        $.connection.hub.start().done(function () {
            hub.server.listenTo(topicID);
        });

    The important part to look at here is the creation of the notifyNewPosts function. That's the method that is called from the server in the Broker class above. Conversely, once the connection is done, the script calls the listenTo method on the server, letting it know that this particular connection is listening for new posts on this specific topic ID. This whole experiment enables a lot of ideas that would make the forum more Facebook-like, letting you know when stuff is going on around you.

    Read the article

  • South Florida Code Camp 2010 – VI – 2010-02-27

    - by Dave Noderer
    Catching up after our sixth code camp here in the Ft Lauderdale, FL area. Website at: http://www.fladotnet.com/codecamp. For the 5th time, DeVry University hosted the event, which makes everything else really easy!

    Statistics from 2010 South Florida Code Camp:

    - 848 registered (we use Microsoft Group Events)
    - ~600 attended (516 took name badges)
    - 64 speakers (including speaker idol)
    - 72 sessions
    - 12 parallel tracks
    - Food: 400 waters, 600 sodas, 900 cups of coffee (it was cold!), 200 pounds of ice, 200 pizzas, 10 large salad trays, 900 mouse pads

    Photos on facebook:

    - Dave Noderer: http://www.facebook.com/home.php#!/album.php?aid=190812&id=693530361
    - Joe Healy: http://www.facebook.com/devfish?ref=mf#!/album.php?aid=202787&id=720054950
    - Will Strohl: http://www.facebook.com/home.php#!/album.php?aid=2045553&id=1046966128&ref=mf
    - Veronica Gonzalez: http://www.facebook.com/home.php#!/album.php?aid=150954&id=672439484

    Florida Speaker Idol

    One of the sessions at code camp was the South Florida regional speaker idol competition. After user group level competitions there are five competitors. I acted as MC and score keeper while Ed Hill, Bob O'Connell, John Dunagan and Shervin Shakibi were judges. This statewide competition is being run by Roy Lawsen in Lakeland, and the winner, Jeff Truman from Naples, will move on to the state finals to be held at the Orlando Code Camp on 3/27/2010: http://www.orlandocodecamp.com/. Each speaker has 10 minutes. The participants were:

    - Alex Koval
    - Jeff Truman
    - Jared Nielsen
    - Chris Catto
    - Venkat Narayanasamy

    They all did a great job and I'm working with each to make sure they don't stop there and start speaking at meetings. Thanks to everyone involved!

    Volunteers

    As always, events like this don't happen without a lot of help! The key people were:

    Ed Hill, Bob O'Connell – DeVry

    For the months leading up to the event, Ed collects all of the swag, books, etc and stores them. He holds meetings with various DeVry departments to coordinate the day, he works with the students in the days before code camp to stuff bags, print signs, arrange tables and visit BJ's for our supplies (I go and pay but have a small car!). And of course the day of the event he is there at 5:30 am!! We took two SUV's to BJ's; I was really worried that the 36 cases of water were going to break his rear axle! He also helps with the students and works very hard before and after the event.

    Rainer Haberman – Speakers and Volunteer of the Year

    Rainer has helped over the past couple of years, but this time he took full control of arranging the tracks. I did some preliminary work soliciting speakers, but he took over all communications after that. We have tried various ways of organizing speakers -- chair per track, central team -- but having someone paying attention to the details is definitely the way to go! This was the first year I did not have to jump in at the last minute and re-arrange everything. There were lots of kudos from the speakers too, saying they felt it was more organized than they have experienced in the past at any code camp. Thanks Rainer!

    Ray Alamonte – Book Swap

    We saw the idea of a book swap from the Alabama Code Camp and thought we would give it a try. Ray jumped in and took control. The idea was to get people to bring their old technical books to swap or for others to buy. You got a ticket for each book you brought that you could then turn in to buy another book. If you did not have a ticket you could buy a book for $1. Net proceeds were $153, which I rounded up and donated to the Red Cross. There is plenty going on in Haiti and Chile! I don't think we really got a count of how many books came in; in many cases the books barely hit the table before being picked up again. At the end we were left with a dozen books, which we donated to the DeVry library. A great success we will definitely do again!

    Jace Weiss / Ratchelen Hut – Coffee and Snacks

    Wow, this was an eye opener. In past years a few of us would struggle to give some attention to coffee, snacks, etc. But it was always tenuous and we always ended up running out of coffee. In the past we have tried buying Dunkin Donuts coffee, renting urns, borrowing urns, etc. This year I actually purchased 2 100-cup Westbend commercial brewers plus a couple of small urns (30 and 60 cup, which we used for decaf). We got them both started early (although I forgot to push the on button on one!) and primed them with 10 boxes of Joe from Dunkin. Then Jace and Rachelen took over: once a batch was brewed they would refill the boxes, keep the area clean, and at one point were filling cups. We never ran out of coffee and served a few hundred more than last year. We did look, but next year I'll get a large insulated (like Gatorade) dispensing container. It all went very smoothly, and having help focused on that one area was a big win. Thanks Jace and Rachelen!

    Ken & Shirley Golding / Roberta Barbosa – Registration

    Ken & Shirley showed up and took over registration. This year we printed small name tags for everyone registered, which was great because it is much easier to remember someone's name when they are labeled! In any case it went the smoothest it has ever gone. All three were actively pulling people through the registration, answering questions, and directing them to bags and information very quickly. I did not see that there was too big a line at any time. Thanks!!

    Scott Katarincic / Vishal Shukla – Website

    For the 3rd?? year in a row, Scott was in charge of the website, starting in August or September when I start on code camp. He handles all the requests and makes changes to the site and admin. I think two years ago he wrote all the backend administration, and he tunes it and the website a bit, but things are pretty stable. The only thing I do is put up the sponsors. It is a big pressure off of me!! Thanks Scott! Vishal jumped into the web end this year and created a new Silverlight agenda page to replace the old ajax page. We will continue to enhance this, but it is definitely a good step forward! Thanks!

    Alex Funkhouser – T-shirts/Mouse pads/tables/sponsors

    Alex helps in many areas. He helps me bring in sponsors and handles all the logistics for t-shirts, sponsor tables, and this year the mouse pads. He is also a key person to help promote the event, not to mention the after after party, which I did not attend and don't want to know much about!

    Students

    There were a number of student volunteers, but I don't have all of their names. Thanks to them: they stuffed bags, patrolled pizza, and helped with moving things around.

    Sponsors

    We had a bunch of great sponsors, which allowed us to feed people and give away a lot of great swag. Our major sponsors of DeVry, Microsoft (both DPE and UGSS), Infragistics, Telerik, SQL Share (End to End, SQL Saturdays), and Interclick are very much appreciated. The other sponsors -- Applied Innovations (who also supply code camp hosting), Ultimate Software (a great local SW company), Linxter (reliable cloud messaging we are lucky to have here!), Mediascend (a media startup), SoftwareFX (another local SW company we are happy to have back participating in CC), CozyRoc (if you do SSIS, check them out), Arrow Design (local DNN and Silverlight experts), Boxes and Arrows (a local SW consulting company) and Robert Half.

    One thing we did this year besides a t-shirt was a mouse pad. I like it because it will be around for a long time on many desks. After much investigation and years of using mouse pads, I've determined that the 1/8" fabric top is the best, and that is what we got!

    So now I get a break for a few months before starting again!

    Read the article

  • Dynamic DataGrid columns in WPF DataGrid based on the underlying set of data (and their type)

    - by StatsMan
    Hello everyone, I've got kind of a conceptual question. I am in the process of wrapping some statistics classes I wrote into WPF. For that I have two DataGrids (currently DataGridViews in WinForms). In one DataGrid, each row represents a column in the other. There I can set up different variables (as in mathematical/statistical variables) with fields like "Header", "DataType", "ValidationBehaviour", "DisplayType". There I can also set up how it should be displayed. Some Columns can automatically be set to ComboBoxColumns, some TextBoxColumns, and so on and so forth. So, now once I've set up these Columns I can go to the other grid and enter my data. I may, for instance, have generated (in grid 1) one Column called "Annual Gross Salary" with input of numerical values. Another Column called "Education" with "0=NoEducation", "1=College Level", "3=Universitary" etc. These labels are displayed as text in the combobox and my statistics engine behind then selects the respective value (0-3) for calculations (i.e. ordinal, nominal variables). Sooo. In WinForms I could basically generate all the columns by hand in code and then add my data in the respective cells/rows. Now in WPF I thought that must be easy to realise. However, yesterday I got started with ICustomPropertyDescriptor, which (maybe I was too thick) didn't give me the results I was looking for. Basically, I just need to be able to dynamically generate columns (and rows) with different layout, controls (ComboBox, simple input, DateTimes) based on the data that I have. But I don't really know how to go about it? So here in summary:

    DataGrid 1

    - Purpose is to display columns that have been specified in DataGrid 2
    - In rows, the user can add any kind of data below the columns that is allowed per the columns' specifications

    DataGrid 2

    - Each row in this grid represents a column in DataGrid 1
    - Contains fields like Name/Header, DataType, Validation Behaviour, Default Value, Data Formatting, etc.
    - Also contains a function to be able to set up how it should be displayed. The user can select from, for instance, ComboBoxColumn (and also add the available options), DateTime, normal TextBox, CheckBox etc.
    - After finishing adding a row, it will automatically appear as a new column in DataGrid 1

    I'd appreciate any kind of pointer in the right direction. Thanks very, very much in advance! :)
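    For what it's worth, the column generation itself is not the hard part if grid 1 is a WPF DataGrid with AutoGenerateColumns="False": walk the definitions entered in grid 2 and build a DataGridColumn per row. A sketch -- VariableDefinition stands in for whatever type backs a row of grid 2, and the data bindings are omitted:

        foreach (VariableDefinition def in definitions)
        {
            DataGridColumn col;
            switch (def.DisplayType)
            {
                case "ComboBox":
                    var combo = new DataGridComboBoxColumn();
                    combo.ItemsSource = def.Options; // "0=NoEducation", "1=College Level", ...
                    col = combo;
                    break;
                case "CheckBox":
                    col = new DataGridCheckBoxColumn();
                    break;
                default:
                    col = new DataGridTextColumn(); // plain input, DateTime as text, etc.
                    break;
            }
            col.Header = def.Header;
            dataGrid1.Columns.Add(col);
        }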

    Read the article

  • Update a list from another list

    - by Langali
    I have a list of users in the local store that I need to update from a remote list of users every once in a while. Basically:

    - If a remote user already exists locally, update its fields.
    - If a remote user doesn't already exist locally, add the user.
    - If a local user doesn't appear in the remote list, deactivate or delete.
    - If a local user also appears in the remote list, update its fields.

    Just a simple case of syncing the local list. Is there a better way to do this in pure Java than the following? I feel gross looking at my own code.

        public class User {
            Integer id;
            String email;
            boolean active;
            // Getters and Setters.......

            public User(Integer id, String email, boolean active) {
                this.id = id;
                this.email = email;
                this.active = active;
            }

            @Override
            public boolean equals(Object other) {
                boolean result = false;
                if (other instanceof User) {
                    User that = (User) other;
                    result = (this.getId() == that.getId()); // note: == compares Integer references, not values
                }
                return result;
            }
        }

        public static void main(String[] args) {
            // From 3rd party
            List<User> remoteUsers = getRemoteUsers();
            // From Local store
            List<User> localUsers = getLocalUsers();

            for (User remoteUser : remoteUsers) {
                boolean found = false;
                for (User localUser : localUsers) {
                    if (remoteUser.equals(localUser)) {
                        found = true;
                        localUser.setActive(remoteUser.isActive());
                        localUser.setEmail(remoteUser.getEmail()); // update
                    }
                    break; // note: this break runs on the first iteration regardless
                }
                if (!found) {
                    User user = new User(remoteUser.getId(), remoteUser.getEmail(), remoteUser.isActive());
                    // Save
                }
            }

            for (User localUser : localUsers) {
                boolean found = false;
                for (User remoteUser : remoteUsers) {
                    if (localUser.equals(remoteUser)) {
                        found = true;
                        localUser.setActive(remoteUser.isActive());
                        localUser.setEmail(remoteUser.getEmail()); // Update
                    }
                    break; // same misplaced break as above
                }
                if (!found) {
                    localUser.setActive(false); // Deactivate
                }
            }
        }
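    A sketch of a map-based version that removes the nested loops and sidesteps the two bugs flagged in the comments above. It assumes ids are unique and needs only java.util imports:

        Map<Integer, User> localById = new HashMap<Integer, User>();
        for (User u : localUsers) {
            localById.put(u.getId(), u);
        }

        for (User remote : remoteUsers) {
            User local = localById.remove(remote.getId());
            if (local == null) {
                // unknown locally: add (and save)
                localUsers.add(new User(remote.getId(), remote.getEmail(), remote.isActive()));
            } else {
                // known locally: update fields
                local.setEmail(remote.getEmail());
                local.setActive(remote.isActive());
            }
        }

        // whatever is left never appeared in the remote list: deactivate
        for (User leftover : localById.values()) {
            leftover.setActive(false);
        }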

    Read the article

  • Javascript Noob: How to emulate slideshow on front page by automatically cycling through existing hover effects?

    - by Zildjoms
    hey everyone, hope you can help me out. I am working on this website and I've finished all the hover effects I like -- they're exactly how I want them to be: http://s5ent.brinkster.net/beta3.asp -- try hovering over the four links and you'll see a very simple fade effect at work, which degrades into a regular css hover without javascript. What I plan to do is to make the page look like it has a fancy slideshow going on upon loading and while idle, and I want to achieve that by capitalizing on the existing hover styling/behavior of the main page links instead of using another script to create the effect from scratch. To do this I imagined I'll need a script that emulates a hover action on each link at regular time intervals once the page has loaded, starting from left to right (footcare, lawn & equipment, about us, contact us), looping through all 4 links indefinitely (footcare, lawn & equipment, about us, contact us, footcare, lawn & equipment, etc.) but pausing when any of them has actually been hovered over by a viewer, and resuming from wherever the user left off upon mouseout. Hope you get my drift... I also want to achieve this without unnecessarily disrupting the current html, so I guess everything will have to be done by scripting as much as possible. I'm very new to javascript and jquery. As you can see at s5ent.brinkster.net/beta3.1-autohover.asp, the following script I made works wrong: it hovers-on all of them at the same time and doesn't hover-out anymore. When you try to actually hover into and out of each link, the link just comes back on:

        <script type="text/javascript">
        $(document).ready(function () {
            var speed = 5000;
            var run = setInterval('rotate()', speed);
        });
        function rotate() {
            $('.lilevel1 a').each(function(i) {
                $(this).mouseover();
            });
        }
        </script>

    It's just gross. Aside from the fact that this last bit of script isn't even working in IE. Could you please help me make this thing happen? That'd be really sweet, guys. I know there are tonsa geniuses out there who could whip this up in no time. Or if you have a better way to go about it, by all means kindly lemme know. Thanks guys, hope you're all havin a blast.
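    A corrected sketch of the same idea: trigger the real hover handlers one link at a time, and pause while a visitor is genuinely hovering. The selector comes from the snippet above; treat it as a starting point, not a drop-in fix.

        <script type="text/javascript">
        $(document).ready(function () {
            var speed = 5000;
            var links = $('.lilevel1 a');
            var i = 0;
            var paused = false;
            // a real hover pauses the cycle; mouseout resumes it
            links.hover(function () { paused = true; },
                        function () { paused = false; });
            setInterval(function () {
                if (paused) return;
                links.mouseout();          // un-hover the previous link
                links.eq(i).mouseover();   // fake-hover the current one
                i = (i + 1) % links.length;
            }, speed);
        });
        </script>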

    Read the article

  • CSV File Content Display Issue

    - by Pankaj Khurana
    Hi, I want to retrieve the contents of a csv file. For that I am using the following code:

        <?php
        $fo = fopen("record.csv", "rb+");
        while(!feof($fo)) {
            $contents[] = fgetcsv($fo, 0, ';');
        }
        print_r($contents);
        fclose($fo);
        ?>

    But my records are displayed in the following format:

        ????††???????†††??†††††????"Search Transactions Results"
        ††††??†???????††††?††††††???
        ????????????
        ??????????????
        ??????????

    My csv file format:

        "Search Transactions Results"
        "Transaction ID","Reference Transaction ID","Date","Type","Subject","Item Number","Item Name","Invoice ID","Name","Email","Shipping Name","Shipping Address Line 1","Shipping Address Line 2","Shipping Address City","Shipping State/Province","Shipping Zip/Postal Code","Shipping Address Country","Shipping Method","Address Status","Contact Phone Number","Gross Amount","Receipt ID","Custom Field","Option 1 Name","Option 1 Value","Option 2 Name","Option 2 Value","Note","Auction Site","Auction User ID","Item URL","Auction Closing Date","Insurance Amount","Currency","Fees","Net Amount","Shipping & Handling Amount","Sales Tax Amount","To Email","Time","Time Zone"
        "1T","",5/5/2010 2:10:44 PM,"Payment Processed","CFP Self Study Kit","1","CFP Self Study Kit","","User1","[email protected]","","","","","","","","","N","","68.18","R1","","","","","","","","","",,"","USD","-2.62","65.56","0","0","[email protected]","01:40","Asia/Calcutta"
        "2T","",5/19/2010 4:04:08 PM,"Payment Processed","CFP Self Study Kit","1","CFP Self Study Kit","","User2","[email protected]","","","","","","","","","N","","68.18","R2","","","","","","","","","",,"","USD","-2.62","65.56","0","0","[email protected]","03:34","Asia/Calcutta"
        "3T","1RT",5/19/2010 5:28:45 PM,"Currency Conversion Completed","","","",""," ","","","","","","","","","","N","","17492.6","","","","","","","","","","",,"","INR","0","17492.6","0","0","","04:58","Asia/Calcutta"
        "4T","2RT",5/19/2010 5:28:45 PM,"Currency Conversion Completed","","","",""," ","","","","","","","","","","N","","-393.36","","","","","","","","","","",,"","USD","0","-393.36","0","0","","04:58","Asia/Calcutta"
        "5T","",5/19/2010 5:28:45 PM,"Transfer to Bank Initiated","P1006","","P1006",""," ","","","","","","","","","","N","","-17492.6","","","","","","","","","","",,"","INR","0","-17492.6","0","0","","04:58","Asia/Calcutta"
        "6T","",5/20/2010 5:38:02 PM,"Transfer to Bank Completed","P1006","","P1006",""," ","","","","","","","","","","N","","-17492.6","","","","","","","","","","",,"","INR","0","-17492.6","0","0","","05:08","Asia/Calcutta"
        "7T","",5/21/2010 12:32:37 PM,"Payment Processed","FP - LVC Plus","","FP - LVC Plus","","User3","[email protected]","User3","NEW DELHI","BEHIND KARNATAKA BANK LD","SOUTH","NEW DELHI","110023","IN","","N","","283.96","","","","","","","","","","",,"","USD","-9.95","274.01","0","0","[email protected]","00:02","Asia/Calcutta"
        "8T","",5/25/2010 4:40:48 PM,"Transfer to Bank Initiated","P1006","","P1006",""," ","","","","","","","","","","N","","-12569.85","","","","","","","","","","",,"","INR","0","-12569.85","0","0","","04:10","Asia/Calcutta"
        "9T","3RT",5/25/2010 4:40:48 PM,"Currency Conversion Completed","","","",""," ","","","","","","","","","","N","","-274.01","","","","","","","","","","",,"","USD","0","-274.01","0","0","","04:10","Asia/Calcutta"
        "10T","4RT",5/25/2010 4:40:48 PM,"Currency Conversion Completed","","","",""," ","","","","","","","","","","N","","12569.85","","","","","","","","","","",,"","INR","0","12569.85","0","0","","04:10","Asia/Calcutta"
        "11T","",5/26/2010 4:57:39 PM,"Transfer to Bank Completed","P1006","","P1006",""," ","","","","","","","","","","N","","-12569.85","","","","","","","","","","",,"","INR","0","-12569.85","0","0","","04:27","Asia/Calcutta"
        "Total","-247.05 USD","-15.19","-262.24"
        "Total","0.00 INR","0.00","0.00"

    I want to retrieve the records where "Type"="Payment Processed", and I want the content in a key-value format (e.g. 'Transaction ID': '1T'), as I have to store these values in a database. But the display is not proper, and I am unable to find out the reason for it. Please help me on this. Thanks

    Read the article

  • Thread placement policies on NUMA systems - update

    - by Dave
    In a prior blog entry I noted that Solaris used a "maximum dispersal" placement policy to assign nascent threads to their initial processors. The general idea is that threads should be placed as far away from each other as possible in the resource topology in order to reduce resource contention between concurrently running threads. This policy assumes that resource contention -- pipelines, memory channel contention, destructive interference in the shared caches, etc -- will likely outweigh (a) any potential communication benefits we might achieve by packing our threads more densely onto a subset of the NUMA nodes, and (b) benefits of NUMA affinity between memory allocated by one thread and accessed by other threads. We want our threads spread widely over the system and not packed together. Conceptually, when placing a new thread, the kernel picks the least loaded NUMA node (the node with the lowest aggregate load average), and then the least loaded core on that node, etc. Furthermore, the kernel places threads onto resources -- sockets, cores, pipelines, etc -- without regard to the thread's process membership. That is, initial placement is process-agnostic. Keep reading, though. This description is incorrect.

    On Solaris 10 on a SPARC T5440 with 4 x T2+ NUMA nodes, if the system is otherwise unloaded and we launch a process that creates 20 compute-bound concurrent threads, then typically we'll see a perfect balance with 5 threads on each node. We see similar behavior on an 8-node x86 x4800 system, where each node has 8 cores and each core is 2-way hyperthreaded. So far so good; this behavior seems in agreement with the policy I described in the 1st paragraph.

    I recently tried the same experiment on a 4-node T4-4 running Solaris 11. Both the T5440 and T4-4 are 4-node systems that expose 256 logical thread contexts. To my surprise, all 20 threads were placed onto just one NUMA node while the other 3 nodes remained completely idle. I checked the usual suspects such as processor sets inadvertently left around by colleagues, processors left offline, and power management policies, but the system was configured normally. I then launched multiple concurrent instances of the process, and, interestingly, all the threads from the 1st process landed on one node, all the threads from the 2nd process landed on another node, and so on. This happened even if I interleaved thread creation between the processes, so I was relatively sure the effect didn't relate to thread creation time, but rather that placement was a function of process membership.

    At this point I consulted the Solaris sources and talked with folks in the Solaris group. The new Solaris 11 behavior is intentional. The kernel is no longer using a simple maximum dispersal policy, and thread placement is process membership-aware. Now, even if other nodes are completely unloaded, the kernel will still try to pack new threads onto the home lgroup (socket) of the primordial thread until the load average of that node reaches 50%, after which it will pick the next least loaded node as the process's new favorite node for placement. On the T4-4 we have 64 logical thread contexts (strands) per socket (lgroup), so if we launch 48 concurrent threads we will find 32 placed on one node and 16 on some other node. If we launch 64 threads we'll find 32 and 32. That means we can end up with our threads clustered on a small subset of the nodes in a way that's quite different than what we've seen on Solaris 10.
    So we have a policy that allows process-aware packing but reverts to spreading threads onto other nodes if a node becomes too saturated. It turns out this policy was enabled in Solaris 10, but certain bugs suppressed the mixed packing/spreading behavior.

    There are configuration variables in /etc/system that allow us to dial the affinity between nascent threads and their primordial thread up and down: see lgrp_expand_proc_thresh, specifically. In the OpenSolaris source code the key routine is mpo_update_tunables(). This method reads the /etc/system variables and sets up some global variables that will subsequently be used by the dispatcher, which calls lgrp_choose() in lgrp.c to place nascent threads. lgrp_expand_proc_thresh controls how loaded an lgroup must be before we'll consider homing a process's threads to another lgroup. Tune this value lower to have the kernel spread your process's threads out more.

    To recap, the 'new' policy is as follows. Threads from the same process are packed onto a subset of the strands of a socket (50% for T-series). Once that socket reaches the 50% threshold, the kernel picks another preferred socket for that process. Threads from unrelated processes are spread across sockets. More precisely, different processes may have different preferred sockets (lgroups). Beware that I've simplified and elided details for the purposes of explication; the truth is in the code.

    Remarks: It's worth noting that initial thread placement is just that: initial. If there's a gross imbalance between the load on different nodes, the kernel will migrate threads to achieve a better and more even distribution over the set of available nodes. Once a thread runs and gains some affinity for a node, however, it becomes "stickier" under the assumption that the thread has residual cache residency on that node, and that memory allocated by that thread resides on that node given the default "first-touch" page-level NUMA allocation policy. Exactly how the various policies interact and which have precedence under what circumstances could be the topic of a future blog entry. The scheduler is work-conserving.

    The x4800 mentioned above is an interesting system. Each of the 8 sockets houses an Intel 7500-series processor. Each processor has 3 coherent QPI links, and the system is arranged as a glueless 8-socket twisted ladder "mobius" topology. Nodes are either 1 or 2 hops distant over the QPI links. As an aside, the mapping of logical CPUIDs to physical resources is rather interesting on Solaris/x4800. On SPARC/Solaris the CPUID layout is strictly geographic, with the highest-order bits identifying the socket, the next lower bits identifying the core within that socket, followed by the pipeline (if present) and finally the logical thread context ("strand") on the core. But on Solaris on the x4800 the CPUID layout is as follows: bit [6:6] identifies the hyperthread on a core; bits [5:3] identify the socket, or package in Intel terminology; bits [2:0] identify the core within a socket. Such low-level details should be of interest only if you're binding threads -- a bad idea, the kernel typically handles placement best -- or if you're writing NUMA-aware code that's aware of the ambient placement and makes decisions accordingly.
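    A tiny sketch to make that x4800 bit layout concrete (the CPU id value is a made-up example):

        /* Decode a Solaris/x4800 logical CPU id per the layout above:
         * bit 6 = hyperthread (strand), bits 5:3 = socket, bits 2:0 = core. */
        #include <stdio.h>

        int main(void) {
            int cpuid  = 42;                    /* hypothetical logical CPU id */
            int strand = (cpuid >> 6) & 0x1;    /* hyperthread within the core */
            int socket = (cpuid >> 3) & 0x7;    /* socket (Intel "package")    */
            int core   =  cpuid       & 0x7;    /* core within the socket      */
            printf("cpu %d -> socket %d, core %d, strand %d\n",
                   cpuid, socket, core, strand);
            return 0;
        }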
    Solaris introduced the so-called critical-threads mechanism, which is expressed by putting a thread into the FX scheduling class at priority 60. The critical-threads mechanism applies to placement on cores, not on sockets, however. That is, it's an intra-socket policy, not an inter-socket policy. Solaris 11 introduces the Power Aware Dispatcher (PAD), which packs threads instead of spreading them out in an attempt to keep sockets or cores at lower power levels. Maximum dispersal may be good for performance but is anathema to power management. PAD is off by default, but power management policies constitute yet another confounding factor with respect to scheduling and dispatching.

    If your threads communicate heavily -- one thread reads cache lines last written by some other thread -- then the new dense packing policy may improve performance by reducing traffic on the coherent interconnect. On the other hand, if the threads in your process communicate rarely, then it's possible the new packing policy will result in contention on shared computing resources. Unfortunately there's no simple litmus test that says whether packing or spreading is optimal in a given situation. The answer varies by system load, application, number of threads, and platform hardware characteristics. Currently we don't have the necessary tools and sensoria to decide at runtime, so we're reduced to an empirical approach where we run trials and try to decide on a placement policy. The situation is quite frustrating. Relatedly, it's often hard to determine just the right level of concurrency to optimize throughput.

    (Understanding constructive vs destructive interference in the shared caches would be a good start. We could augment the lines with a small tag field indicating which strand last installed or accessed a line. Given that, we could augment the CPU with performance counters for misses where a thread evicts a line it installed vs misses where a thread displaces a line installed by some other thread.)
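    As an aside, the critical-threads designation mentioned above is applied by moving a thread into the FX class at priority 60. A hedged example with priocntl(1), where the process id is a placeholder (check priocntl(1) on your Solaris release for the exact flags):

        # Put every thread of process 1234 into FX at priority 60
        # (user priority limit -m and user priority -p both set to 60).
        priocntl -s -c FX -m 60 -p 60 -i pid 1234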

    Read the article

  • Beyond Cloud Technology, Enabling A More Agile and Responsive Organization

    - by sxkumar
    This is the second part of the blog “Clouds, Clouds Everywhere But not a Drop of Rain”. In the first part, I described how a broad-based transformation makes cloud more than a technology initiative; in this section I will describe how it requires people (organizational) and process changes as well, and why these changes are as critical as the choice of the right tools and technology.

    People: Most IT organizations have a fairly complex organizational structure, with different groups managing different pieces of the puzzle, and yet they don't always work together. Provisioning a new application may therefore require a request to float endlessly through the worlds of system administrators, DBAs and middleware admins -- resulting in long delays and constant finger pointing. Cloud users expect end-to-end automation, which requires these silos to be greatly simplified, if not completely eliminated. Most customers I talk to acknowledge this problem but are quick to admit that such a transformation is hard. As hard as it may be, I am afraid that the status quo is no longer an option. Sticking to an organizational structure that was created ages ago will not only impede cloud adoption, it also risks making IT skills increasingly irrelevant in a world that is rapidly moving towards converged applications and infrastructure.

    Process: Most IT organizations today operate with a mindset that they must fully "control" access to any and all types of IT services. This in turn leads to people clinging to outdated manual approval processes. While requiring approvals for scarce resources makes sense, insisting that every single request be manually approved defeats the very purpose of cloud. Not only does this cause delays, thereby at least partially negating the agility benefits, it also results in gross inefficiency. In a cloud environment, self-service access should be governed by policies and quotas that the administrators define upfront. For a cloud initiative to be successful, IT organizations MUST be ready to empower users by giving them real control rather than insisting on brokering every single interaction between users and the cloud resources.

    Technology: From a technology perspective, cloud is about consolidation, standardization and automation. A consolidated and standardized infrastructure helps increase utilization and reduces cost. Additionally, it enables a much higher degree of automation, thereby providing users the required agility while minimizing operational costs. Obviously, automation is the key to cloud. Unfortunately it hasn't received as much attention within enterprises as it should have. Many organizations are just now waking up to the criticality of automation, and it still often gets relegated to the back burner in favor of other "high priority" projects. However, it is important to understand that without the right type and level of automation, cloud will remain a distant dream for most enterprises. This in turn makes the choice of cloud management software extremely critical. For a cloud management software to be effective in an enterprise environment, it must meet the following qualifications:

    Broad and Deep Solution: It should offer a broad and deep solution to enable the kind of broad-based transformation we are talking about. Its footprint must cover physical and virtual systems, as well as infrastructure, database and application tiers. Too many enterprises choose to equate cloud with virtualization.
    While virtualization is a critical component of a cloud solution, it is just a component and not the whole solution. Similarly, too many people tend to equate cloud with Infrastructure-as-a-Service (IaaS). While it is perfectly reasonable to treat IaaS as a starting point, it is important to realize that it is just the first stepping stone; on its own it can only provide limited business benefits. It is actually the higher-level services, such as (application) platform and business applications, that will bring about a more meaningful transformation to your enterprise.

    Run and Manage Your Mission-Critical Applications Efficiently: It should not only be able to run your mission-critical applications, it should do so better than before. For enterprises, applications and data are the critical business assets. As such, if you are building a cloud platform that cannot run your ERP application, it isn't truly an "enterprise cloud". Also, be wary of vendors who try to sell you the idea that your applications must be written in a certain way to be able to run on the cloud. That is nothing but a bogus, self-serving argument. For the cloud to be meaningful to enterprises, it should adapt to your applications, not the other way around.

    Automated, Integrated Set of Cloud Management Capabilities: At the root of many of the problems plaguing enterprise IT today is complexity. A complex maze of tools and technology, coupled with archaic processes, results in an environment which is inflexible, inefficient and simply too hard to manage. Management tool consolidation is therefore key to the success of your cloud, as tool proliferation adds to complexity, encourages compartmentalization and defeats the very purpose for which you are building the cloud. Decision makers ought to be extra cautious about vendors trying to sell them a "suite" of disparate and loosely integrated products as a cloud solution. An effective enterprise cloud management solution needs to provide a tightly integrated set of capabilities for all aspects of cloud lifecycle management. A simple question to ask: will your environment be more or less complex after you implement your cloud? More often than not, the answer will surprise you.

    At Oracle, we have understood these challenges and have been working hard to create cloud solutions that are relevant and meaningful for enterprises. And we have been doing it for much longer than you may think. Oracle was one of the very first enterprise software companies to make our products available on the Amazon Cloud. As far back as 2007, we created new cloud solutions such as Cloud Database Backup that are helping customers like Amazon save millions every year. Our cloud solution portfolio is also the broadest and deepest in the industry, covering public, private and hybrid clouds across the infrastructure, platform and application layers. It is no coincidence, therefore, that the Oracle Cloud today offers the most comprehensive set of public cloud services in the industry. To a large part, this has been made possible by our years of investment in creating cloud-enabling technologies. I will dedicate the third and final part of the blog “Clouds, Clouds Everywhere But not a Drop of Rain” to the Oracle Cloud Technologies Building Blocks and how they map into our vision of the Enterprise Cloud. Stay tuned.

    Read the article

  • Maven antrun with sequential ant-contrib fails to run

    - by codevour
    We have a special routine that explodes files in a subfolder into extensions, which are then copied and jarred into single extension files. For this I wanted to use the maven-antrun-plugin; for the sequential iteration and jar packaging over the dirset we need the ant-contrib library. The plugin configuration below fails with the error shown. What did I misconfigure? Thank you.

    Plugin configuration

        <plugin>
          <groupId>org.apache.maven.plugins</groupId>
          <artifactId>maven-antrun-plugin</artifactId>
          <version>1.6</version>
          <executions>
            <execution>
              <phase>validate</phase>
              <goals>
                <goal>run</goal>
              </goals>
              <configuration>
                <target>
                  <for param="extension">
                    <path>
                      <dirset dir="${basedir}/src/main/webapp/WEB-INF/resources/extensions/">
                        <include name="*" />
                      </dirset>
                    </path>
                    <sequential>
                      <basename property="extension.name" file="${extension}" />
                      <echo message="Creating JAR for extension '${extension.name}'." />
                      <jar destfile="${basedir}/target/extension-${extension.name}-1.0.0.jar">
                        <zipfileset dir="${extension}" prefix="WEB-INF/resources/extensions/${extension.name}/">
                          <include name="**/*" />
                        </zipfileset>
                      </jar>
                    </sequential>
                  </for>
                </target>
              </configuration>
            </execution>
          </executions>
          <dependencies>
            <dependency>
              <groupId>ant-contrib</groupId>
              <artifactId>ant-contrib</artifactId>
              <version>1.0b3</version>
              <exclusions>
                <exclusion>
                  <groupId>ant</groupId>
                  <artifactId>ant</artifactId>
                </exclusion>
              </exclusions>
            </dependency>
            <dependency>
              <groupId>org.apache.ant</groupId>
              <artifactId>ant-nodeps</artifactId>
              <version>1.8.1</version>
            </dependency>
          </dependencies>
        </plugin>

    Error

        [ERROR] Failed to execute goal org.apache.maven.plugins:maven-antrun-plugin:1.6:run (default) on project extension-platform: An Ant BuildException has occured: Problem: failed to create task or type for
        [ERROR] Cause: The name is undefined.
        [ERROR] Action: Check the spelling.
        [ERROR] Action: Check that any custom tasks/types have been declared.
        [ERROR] Action: Check that any <presetdef>/<macrodef> declarations have taken place.
        [ERROR] -> [Help 1]
        [ERROR]
        [ERROR] To see the full stack trace of the errors, re-run Maven with the -e switch.
        [ERROR] Re-run Maven using the -X switch to enable full debug logging.
        [ERROR]
        [ERROR] For more information about the errors and possible solutions, please read the following articles:
        [ERROR] [Help 1] http://cwiki.apache.org/confluence/display/MAVEN/MojoExecutionException
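    The error says Ant cannot create the task, which points at the <for> task not being declared: ant-contrib tasks are not built into Ant and must be registered with a taskdef before use. A likely fix, sketched below (maven.plugin.classpath is the classpath reference maven-antrun-plugin exposes for the plugin's own dependencies), is to declare them at the top of the target:

        <target>
          <!-- Declare the ant-contrib tasks (<for>, <if>, ...) before using them. -->
          <taskdef resource="net/sf/antcontrib/antlib.xml"
                   classpathref="maven.plugin.classpath" />
          <for param="extension">
            <!-- ... unchanged from the configuration above ... -->
          </for>
        </target>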

    Read the article

  • Issue Parsing File with YAML-CPP

    - by Andrew
    In the following code, I'm having some sort of issue getting my .yaml file parsed using parser.GetNextDocument(doc);. After much gross debugging, I've found that the (main) issue here is that my for loop is not running, because doc.size() == 0. What am I doing wrong?

        void BookView::load() {
            aBook.clear();
            QString fileName = QFileDialog::getOpenFileName(this,
                tr("Load Address Book"), "",
                tr("Address Book (*.yaml);;All Files (*)"));
            if (fileName.isEmpty()) {
                return;
            } else {
                try {
                    std::ifstream fin(fileName.toStdString().c_str());
                    YAML::Parser parser(fin);
                    YAML::Node doc;
                    std::map<std::string, std::string> entry;
                    parser.GetNextDocument(doc);
                    std::cout << doc.size();
                    for (YAML::Iterator it = doc.begin(); it != doc.end(); it++) {
                        *it >> entry;
                        aBook.push_back(entry);
                    }
                } catch (YAML::ParserException &e) {
                    std::cout << "YAML Exception caught: " << e.what() << std::endl;
                }
            }
            updateLayout(Navigating);
        }

    The .yaml file being read was generated using yaml-cpp, so I assume it is well-formed YAML, but just in case, here's the file anyway:

        ^@^@^@\230---
        - address: ******************
          comment: None.
          email: andrew(dot)levenson(at)gmail(dot)com
          name: Andrew Levenson
          phone: **********^@

    Edit: By request, the emitting code:

        void BookView::save() {
            QString fileName = QFileDialog::getSaveFileName(this,
                tr("Save Address Book"), "",
                tr("Address Book (*.yaml);;All Files (*)"));
            if (fileName.isEmpty()) {
                return;
            } else {
                QFile file(fileName);
                if (!file.open(QIODevice::WriteOnly)) {
                    QMessageBox::information(this, tr("Unable to open file"),
                                             file.errorString());
                    return;
                }
                std::vector<std::map<std::string, std::string> >::iterator itr;
                std::map<std::string, std::string>::iterator mItr;
                YAML::Emitter yaml;
                yaml << YAML::BeginSeq;
                for (itr = aBook.begin(); itr < aBook.end(); itr++) {
                    yaml << YAML::BeginMap;
                    for (mItr = (*itr).begin(); mItr != (*itr).end(); mItr++) {
                        yaml << YAML::Key << (*mItr).first
                             << YAML::Value << (*mItr).second;
                    }
                    yaml << YAML::EndMap;
                }
                yaml << YAML::EndSeq;
                QDataStream out(&file);
                out.setVersion(QDataStream::Qt_4_5);
                out << yaml.c_str();
            }
        }
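    The stray bytes at the front of the file are the likely culprit: QDataStream is a binary serialization format, and streaming a C string through it writes a 32-bit length prefix (the ^@^@^@\230 above) plus the terminating NUL, which the YAML parser then sees as garbage before and after the document. A minimal sketch of a fix, assuming the save path can use Qt's QTextStream instead:

        // In BookView::save(): write the emitter output as plain text.
        // QDataStream adds binary framing (length prefix + trailing NUL)
        // that corrupts the .yaml file for any text-based parser.
        QTextStream out(&file);
        out << yaml.c_str();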

    Read the article
