Search Results


  • Custom event loop and UIKit controls: what extra magic does Apple's event loop do?

    - by tequilatango
    Does anyone know, or have good links that explain, what the iPhone's event loop does under the hood? We are using a custom event loop in our OpenGL-based iPhone game framework. It calls our game rendering system, calls presentRenderbuffer, and pumps events using CFRunLoopRunInMode; see the code below for details. It works well when we are not using UIKit controls (as proof, try Facetap, our first released game). However, when using UIKit controls, everything almost works, but not quite. Specifically, scrolling of UIKit controls doesn't work properly. For example, consider the following scenario: we show a UIImagePickerController on top of our own view, so it covers our custom view. We also pause our own rendering, but keep using the custom event loop. As said, everything works except scrolling. Picking photos works. Drilling down into photo albums works, and transition animations are smooth. When trying to scroll a photo album view, the view follows your finger. Problem: scrolling stops immediately after you lift your finger. Normally it continues smoothly based on the speed of your movement, but not when we are using the custom event loop. It seems that the iPhone's event loop is doing some magic related to UIKit scrolling that we haven't implemented ourselves. Now, we can get UIKit controls to work just fine and dandy together with our own system by using Apple's event loop and calling our own rendering via NSTimer callbacks. However, I'd still like to understand what is possibly happening inside the iPhone's event loop that is not implemented in our custom event loop.

        - (void)customEventLoop {
            OBJC_METHOD;
            float excess = 0.0f;
            while (isRunning) {
                animationInterval = 1.0f / openGLapp->ticks_per_second();
                // Calculate the target time to be used in this run of the loop
                float wait = max(0.0, animationInterval - excess);
                Systemtime target = Systemtime::now().after_seconds(wait);
                Scope("event loop");
                NSAutoreleasePool* pool = [[NSAutoreleasePool alloc] init];
                // Call our own render system and present the render buffer
                [self drawView];
                // Pump system events
                [self handleSystemEvents:target];
                [pool release];
                excess = target.seconds_to_now();
            }
        }

        - (void)drawView {
            OBJC_METHOD;
            // Call our own custom rendering
            bool bind = openGLapp->app_render();
            // Bind the buffer to be THE renderbuffer and present its contents
            if (bind) {
                opengl::bind_renderbuffer(renderbuffer);
                [context presentRenderbuffer:GL_RENDERBUFFER_OES];
            }
        }

        - (void)handleSystemEvents:(Systemtime)target {
            OBJC_METHOD;
            SInt32 reason = 0;
            double time_left = target.seconds_since_now();
            if (time_left <= 0.0) {
                while ((reason = CFRunLoopRunInMode(kCFRunLoopDefaultMode, 0, TRUE)) == kCFRunLoopRunHandledSource) {}
            } else {
                float dt = time_left;
                while ((reason = CFRunLoopRunInMode(kCFRunLoopDefaultMode, dt, FALSE)) == kCFRunLoopRunHandledSource) {
                    time_left = target.seconds_since_now();
                    if (time_left <= 0.0) break;
                    dt = (float) time_left;
                }
            }
        }


  • Passing an unknown number of variables through a string, eval, and multiple function calls

    - by user300797
    I'm not sure how best to describe this problem... In short, I want to use an object literal to allow me to pass a random number of variables, in any order, to a function. Whilst this is no big deal in theory, in my code this object literal is passed to a second function call, on_change. on_change works by comparing an element's inner HTML to a string. If they are the same, it sets a timeout to call the function again (this is sort of/almost recursive, but the function does actually get to the end before it is called again). If the element's inner HTML is different from the string, then the third parameter is executed; this will be either a function or a string, and either way it will execute. I have tested this function plenty and used it for a while now. However, I cannot seem to get the object literal to flow through the function calls...

        var params = { xpos:'false'};
        on_change('window_3_cont_buffer','','
            if(Window_manager.windows[3].window_cont_buffer.getElementsByTagName(\'content\')[0].getElementsByTagName(\'p\')[0].innerHTML == \'ERROR\'){
                alert(Window_manager.windows[3].window_cont_buffer.getElementsByTagName(\'content\')[0].getElementsByTagName(\'p\')[1].innerHTML);
                return false;
            } else {
                Window_manager.windows[3].load_xml(\'location/view.php?location_ID=3\', \'\', ' + params + ' );
            } ');

    I call this as part of the form submission. After this line, I then call a function to load some content via AJAX, which works fine and triggers the code from the on_change function. I have tested the load_xml function; it is able to call alert(param.xpos) and get the correct response. I even added a check for undefined so that the rest of the time I call load_xml I don't get swamped with alerts. The load_xml function first sets up the on_change function, then calls the function to load the content into a hidden div. Once the AJAX request has updated that div, the on_change function should now call the parse_xml function, which pulls the information out of the XML file. However... the idea of this object literal param is that it can tell the parse_xml function to ignore certain things.

        on_change("window_" + this.id + "_cont_buffer", "", "Window_manager.windows[" + this.id + "].parse_xml('" + param + "')");

    This is part of load_xml. It works perfectly fine, even with the param bit in there, except that parse_xml does not seem to be able to use that parameter. I have been able to get to a point where parse_xml can alert(param) and give [object Object], which I would have thought meant that the object literal had been passed through, but when I try to call alert(param.xpos) I get undefined. I know this is a pig of a problem, and I could get around it by just having the function take a zillion boolean parameters, but it's just not practical or elegant. I'm sure you will need to ask me plenty more questions before I can solve this. I will post more complete code; I just cut it down to what is actually going on. Thanks


  • Sometimes big delay when using PHP mail()

    - by Robbert Dam
    I have a website which processes orders from a Windows application. This works as follows: the user clicks "Order now" in the Windows app, the app uploads a file with POST to a PHP script, and the script immediately calls the PHP mail() function (the order is not stored in a DB). This works fine most of the time. However, sometimes a big delay occurs (several days), and customers call to ask why the product has not yet been delivered. The e-mail headers of a delayed mail follow:

        Microsoft Mail Internet Headers Version 2.0
        Received: from barracuda.nkl.nl ([10.0.0.1]) by smtp.nkl.nl with Microsoft SMTPSVC(6.0.3790.3959); Wed, 26 May 2010 16:26:51 +0200
        X-ASG-Debug-ID: 1274883818-2f8800000000-X58hIK
        X-Barracuda-URL: http://10.0.0.1:8000/cgi-bin/mark.cgi
        Received: from server45.firstfind.nl (localhost [127.0.0.1]) by barracuda.nkl.nl (Spam & Virus Firewall) with ESMTP id ECAFD15776A for <[email protected]>; Wed, 26 May 2010 16:23:38 +0200 (CEST)
        Received: from server45.firstfind.nl (server45.firstfind.nl [93.94.226.76]) by barracuda.nkl.nl with ESMTP id 85bAT2AU58kkxjPb for <[email protected]>; Wed, 26 May 2010 16:23:38 +0200 (CEST)
        X-Barracuda-Envelope-From: [email protected]
        Received: from server45.firstfind.nl (localhost [127.0.0.1]) by server45.firstfind.nl (8.13.8/8.13.8/Debian-3+etch1) with ESMTP id o4QEM3Hb004301 for <[email protected]>; Wed, 26 May 2010 16:23:31 +0200
        Received: (from nklsemin@localhost) by server45.firstfind.nl (8.13.8/8.13.8/Submit) id o4J9lA7M031307; Wed, 19 May 2010 11:47:10 +0200
        Date: Wed, 19 May 2010 11:47:10 +0200
        Message-Id: <[email protected]>
        To: [email protected]
        X-ASG-Orig-Subj: easyfit - ref: Hoen3443
        Subject: easyfit - ref: Hoen3443
        X-PHP-Script: www.nklseminar.nl/emailer/upload.php for 77.61.220.217
        From: [email protected]
        Content-type: text/html
        X-Virus-Scanned: by amavisd-new
        X-Barracuda-Connect: server45.firstfind.nl[93.94.226.76]
        X-Barracuda-Start-Time: 1274883820
        X-Barracuda-Bayes: INNOCENT GLOBAL 0.0000 1.0000 -2.0210
        X-Barracuda-Virus-Scanned: by Barracuda Spam & Virus Firewall at nkl.nl
        X-Barracuda-Spam-Score: 0.92
        X-Barracuda-Spam-Status: No, SCORE=0.92 using global scores of TAG_LEVEL=2.0 QUARANTINE_LEVEL=1000.0 KILL_LEVEL=3.0 tests=DATE_IN_PAST_96_XX, DATE_IN_PAST_96_XX_2, HTML_MESSAGE, MIME_HEADER_CTYPE_ONLY, MIME_HTML_ONLY, NO_REAL_NAME
        X-Barracuda-Spam-Report: Code version 3.2, rules version 3.2.2.30817
          Rule breakdown below:
           pts rule name              description
          ---- ---------------------- --------------------------------------------------
          0.00 NO_REAL_NAME           From: does not include a real name
          0.01 DATE_IN_PAST_96_XX     Date: is 96 hours or more before Received: date
          0.00 MIME_HTML_ONLY         BODY: Message only has text/html MIME parts
          0.00 HTML_MESSAGE           BODY: HTML included in message
          0.86 MIME_HEADER_CTYPE_ONLY 'Content-Type' found without required MIME headers
          2.07 DATE_IN_PAST_96_XX_2   DATE_IN_PAST_96_XX_2
        Return-Path: [email protected]
        X-OriginalArrivalTime: 26 May 2010 14:26:51.0343 (UTC) FILETIME=[7B80DDF0:01CAFCDF]

    The delay seems to occur here:

        Received: from server45.firstfind.nl (localhost [127.0.0.1]) by server45.firstfind.nl (8.13.8/8.13.8/Debian-3+etch1) with ESMTP id o4QEM3Hb004301 for <[email protected]>; Wed, 26 May 2010 16:23:31 +0200
        Received: (from nklsemin@localhost) by server45.firstfind.nl (8.13.8/8.13.8/Submit) id o4J9lA7M031307; Wed, 19 May 2010 11:47:10 +0200

    I've reported this issue various times to the web hosting service that hosts my website. They say the delay does not occur in their network (impossible). But they do confirm that the e-mail is first seen on their mail server on May 26, which is 7 days after the mail was composed. The order is marked with the timestamp of the user's local PC, which also matches May 19 (so it's not a PC clock problem). It's also interesting to see that all delayed mails (from orders placed on different days) come in at once, so I suddenly receive 14 e-mails in my mailbox from various days. Any idea where this delay may be introduced? Could there be a bug in my PHP code that causes this? (I cannot believe I could introduce a delay of 7 days in my PHP code.)


  • MVC using ODP.NET getting ORA-01840

    - by sse
    I am writing a simple MVC application using ODP.NET. I am trying to call a PL/SQL proc that inserts a record. Here is the simple PL/SQL:

        procedure spAddCountry(pGisRecid in country.GISRECID%type,
                               pCountryCode in country.COUNTRYCODE%type,
                               pCountryName in country.COUNTRYNAME%type,
                               pCurrencyCode in country.CURRENCYCODE%type,
                               pEUTerritory in country.EUTERRITORY%type,
                               pFatCAStatus in country.FATCASTATUS%type,
                               pFATF in country.FATF%type,
                               pFSCountryCode in country.COUNTRYCODE%type,
                               pInsertedBy in country.INSERTEDBY%type,
                               pInsertedOn in country.INSERTEDON%type,
                               pLanguages in country.LANGUAGES%type,
                               pNCCT in country.NCCT%type) is
        PRAGMA AUTONOMOUS_TRANSACTION;
        begin
            INSERT INTO COUNTRY (GISRECID, COUNTRYCODE, COUNTRYNAME, CURRENCYCODE, EUTERRITORY, FATCASTATUS, FATF, FSCOUNTRYCODE, INSERTEDBY, INSERTEDON, LANGUAGES, NCCT)
            VALUES (pGISRECID, pCOUNTRYCODE, pCOUNTRYNAME, pCURRENCYCODE, pEUTERRITORY, pFATCASTATUS, pFATF, pFSCOUNTRYCODE, pINSERTEDBY, pINSERTEDON, pLANGUAGES, pNCCT);
            Commit;
        end;

    I am having difficulty passing the date parameter, pInsertedOn, to the stored proc. I have verified that the web form retrieves the form data successfully and calls the AddCountry method below, which in turn calls the stored proc, spAddCountry, after populating all of the parms. Here is a snippet of the MVC C# code. I get the following exception: "ORA-01840: input value not long enough for date format".

        public void AddCountry(Country aCountry) // because the country object field names match the form field names, they automatically get bound!!
        {
            string oradb = "Data Source=XYZ;User Id=XYZ;Password=xyz;";
            OracleConnection conn = new OracleConnection(oradb);
            OracleCommand cmd = conn.CreateCommand();
            cmd.CommandText = "tstpack.spAddCountry";
            cmd.CommandType = CommandType.StoredProcedure;
            ...
            OracleParameter paramInsertedBy = new OracleParameter();
            paramInsertedBy.ParameterName = "pInsertedBy";
            paramInsertedBy.Value = aCountry.InsertedBy;
            cmd.Parameters.Add(paramInsertedBy);

            // CultureInfo ci = new CultureInfo("en-US");
            OracleParameter paramInsertedOn = new OracleParameter();
            paramInsertedOn.ParameterName = "pInsertedOn";
            // paramInsertedOn.Value = DateTime.Now; // just testing to see if it's a WebForm issue
            // paramInsertedOn.Value = Convert.ToDateTime(DateTime.Now.ToString(), ci); // flail!
            paramInsertedOn.Value = aCountry.InsertedOn;
            cmd.Parameters.Add(paramInsertedOn);
            ...
            conn.Open();
            cmd.ExecuteNonQuery(); // CRASH! ORA-01840
            conn.Close();
        }

    Just to verify that the flow of the program is working, I tried removing the date parm "pInsertedOn" from the PL/SQL and from the parm list above, and everything worked fine. I know I am going off the rails with the date. Can someone tell me how to pass a date to Oracle from an MVC web form? Is there some sort of type cast needed? I would really appreciate an example too. Thanks so much!

    P.S. I did try changing the parm type to Varchar2 in the PL/SQL and doing some conversions myself in the PL/SQL, but the automatic MVC binder was getting in my way, forcing the paramInsertedOn.OracleType property to DateTime. I tried forcing it to Varchar2, but no luck there either...
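    A hedged sketch of one common fix for ORA-01840: bind the parameter explicitly as an Oracle DATE and hand ODP.NET a real DateTime, instead of letting the provider infer a type from the posted string. OracleDbType and the Oracle.DataAccess.Client namespace are standard ODP.NET (newer releases use Oracle.ManagedDataAccess.Client); the helper name and the "MM/dd/yyyy" input format are assumptions about how InsertedOn arrives from the form.

        using System;
        using System.Globalization;
        using Oracle.DataAccess.Client; // ODP.NET (assumed referenced by the project)

        static class DateParamHelper
        {
            // Hypothetical helper: adds pInsertedOn as a true DATE parameter.
            public static void AddInsertedOn(OracleCommand cmd, string insertedOnText)
            {
                // Parse the posted text with an explicit, known format instead of
                // relying on the session's NLS date settings (format is assumed).
                DateTime insertedOn = DateTime.ParseExact(
                    insertedOnText, "MM/dd/yyyy", CultureInfo.InvariantCulture);

                OracleParameter p = new OracleParameter();
                p.ParameterName = "pInsertedOn";
                p.OracleDbType = OracleDbType.Date; // force DATE, not a string
                p.Value = insertedOn;
                cmd.Parameters.Add(p);
            }
        }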


  • Suggestions on how to map from Domain (ORM) objects to Data Transfer Objects (DTO)

    - by FryHard
    The current system that I am working on makes use of Castle ActiveRecord to provide ORM (Object Relational Mapping) between the domain objects and the database. This is all well and good, and most of the time it actually works well! The problem comes about with Castle ActiveRecord's support for asynchronous execution; well, more specifically the SessionScope that manages the session that objects belong to. Long story short, bad stuff happens! We are therefore looking for a way to easily convert (think automagically) from the domain objects (which know that a DB exists and care) to the DTO objects (which know nothing about the DB and care not for sessions, mapping attributes or all things ORM). Does anyone have suggestions on doing this? For a start I am looking for a basic one-to-one mapping of objects: domain object Person would be mapped to, say, PersonDTO. I do not want to do this manually since it is a waste. Obviously reflection comes to mind, but I am hoping, with some of the better IT knowledge floating around this site, that something "cooler" will be suggested. Oh, I am working in C#, and the ORM objects, as said before, are mapped with Castle ActiveRecord. (A reflection-based sketch follows at the end of this post.) Example code: at @ajmastrean's request I have linked to an example that I have (badly) mocked together. The example has a capture form, capture form controller, domain objects, ActiveRecord repository and an async helper. It is slightly big (3MB) because I included the ActiveRecord DLLs needed to get it running. You will need to create a database called ActiveRecordAsync on your local machine or just change the .config file. Basic details of the example:

    The capture form: the capture form has a reference to the controller,

        private CompanyCaptureController MyController { get; set; }

    and when the form initialises, it calls MyController.Load():

        private void InitForm()
        {
            MyController = new CompanyCaptureController(this);
            MyController.Load();
        }

    This will return back to a method called LoadCompleted():

        public void LoadCompleted(Company loadCompany)
        {
            _context.Post(delegate
            {
                CurrentItem = loadCompany;
                bindingSource.DataSource = CurrentItem;
                bindingSource.ResetCurrentItem();
                //TODO: This line will throw the exception, since the session scope used to fetch loadCompany is now gone.
                grdEmployees.DataSource = loadCompany.Employees;
            }, null);
        }

    This is where the "bad stuff" occurs, since we are using the child list of Company that is set to lazy load.

    The controller: the controller has a Load method that was called from the form; it then calls the async helper to asynchronously call the LoadCompany method and then return to the capture form's LoadCompleted method.

        public void Load()
        {
            new AsyncListLoad<Company>().BeginLoad(LoadCompany, Form.LoadCompleted);
        }

    The LoadCompany() method simply makes use of the repository to find a known company:

        public Company LoadCompany()
        {
            return ActiveRecordRepository<Company>.Find(Setup.company.Identifier);
        }

    The rest of the example is rather generic: it has two domain classes which inherit from a base class, a setup file to insert some data, and the repository to provide the ActiveRecordMediator abilities.
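    For reference, here is a minimal reflection-based one-to-one mapper of the kind the question asks about; a sketch only, assuming matching readable/writable property names between the domain type and the DTO (general-purpose libraries such as AutoMapper cover the harder cases like nested objects and collections):

        using System;
        using System.Reflection;

        public static class SimpleMapper
        {
            // Copies every readable property on 'source' to a writable property
            // of the same name and a compatible type on a new TTarget instance.
            public static TTarget Map<TSource, TTarget>(TSource source)
                where TTarget : new()
            {
                TTarget target = new TTarget();
                foreach (PropertyInfo sourceProp in typeof(TSource).GetProperties())
                {
                    if (!sourceProp.CanRead) continue;
                    PropertyInfo targetProp = typeof(TTarget).GetProperty(sourceProp.Name);
                    if (targetProp != null && targetProp.CanWrite
                        && targetProp.PropertyType.IsAssignableFrom(sourceProp.PropertyType))
                    {
                        targetProp.SetValue(target, sourceProp.GetValue(source, null), null);
                    }
                }
                return target;
            }
        }

        // Usage: PersonDTO dto = SimpleMapper.Map<Person, PersonDTO>(person);

    Note the sketch copies lazy collections by reference, so it must run while the SessionScope is still alive; materializing child lists into plain DTO lists inside that scope is the part a real implementation would have to add.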


  • Drupal's not reading correct values from DB

    - by John
    Hey everyone, here is my current problem. I am working with the chat module, and I'm building a module that notifies users via AJAX that they have been invited to a chat. The current structure of the invites table looks like this:

        |-------------------------------------------------------------------------|
        | CCID  | NID   | INVITER_UID | INVITEE_UID | NOTIFIED   | ACCEPTED   |
        | int   | int   | int         | int         | (0 or 1)   | (0 or 1)   |
        |-------------------------------------------------------------------------|

    I'm using the Periodical Updater plug-in for jQuery to continually poll the server to check for invites. When an invite is found, I set NOTIFIED from 0 to 1. However, my problem is the periodical updater. When I first see that there is an invite, I notify the user and set NOTIFIED to 1. On the next select, though, I get the same results as before, as if the update didn't work. But when I go check the database, I can see that it worked just fine. It's as if the query is querying a cache, but I can't figure it out. My code for the periodical updater is as follows:

        window.onload = function() {
            var uid = $('a#chat_uid').html();
            $.PeriodicalUpdater(
                '/steelylib/sites/all/modules/_chat_whos_online/ajax/ajax.php', // url to service
                {
                    method: 'get',       // send data via...
                    data: {uid: uid},    // data to send
                    minTimeout: '1000',  // min time before server is polled (milli-sec.)
                    maxTimeout: '20000', // max time before server is polled (milli-sec.)
                    multiplyer: '1.5',   // multiply against current poll time every time constant data is returned
                    type: 'text',        // type of data received (response type)
                    maxCalls: 0,         // max calls to make (0=unlimited)
                    autoStop: 0          // max calls with constant data (0=unlimited/disabled)
                },
                function(data) // callback function
                {
                    alert( data ); // for now, until I get it working
                }
            );
        }

    And my code for the AJAX call is as follows:

        <?php
        # bootstrap Drupal, and call function, passing current user's uid.
        function _create_chat_node_check_invites($uid) {
            cache_clear_all('chatroom_chat_list', 'cache');
            $query = "SELECT * FROM {chatroom_chat_invite} WHERE notified=0 AND invitee_uid=%d and accepted=0";
            $query_results = db_query( $query, $uid );
            $json = '{"invites":[';
            while( $row = db_fetch_object($query_results) ) {
                var_dump($row);
                global $base_url;
                $url = $base_url . '/content/privatechat' . $uid .'-' . $row->inviter_uid;
                $inviter = db_fetch_object( db_query( "SELECT name FROM {users} WHERE uid = %d", $row->inviter_uid ) );
                $invitee = db_fetch_object( db_query( "SELECT name FROM {users} WHERE uid = %d", $row->invitee_uid ) );
                # reset table
                $query = "UPDATE {chatroom_chat_invite} "
                        ."SET notified=1 "
                        ."WHERE inviter_uid=%d AND invitee_uid=%d";
                db_query( $query, $row->inviter_uid, $row->invitee_uid );
                $json .= '[';
                $json .= '"' . $url . '",';
                $json .= '"' . ($inviter->name) . '",';
                $json .= '"' . ($invitee->name) . '"';
                $json .= '],';
            }
            $json = substr($json, 0, -1);
            $json .= ']}';
            return $json;
        }
        ?>

    I can't figure out what is going wrong; any help is greatly appreciated!


  • Strange performance behaviour for 64 bit modulo operation

    - by codymanix
    The last three of these method calls take approximately double the time of the first four. The only difference is that their arguments don't fit in an integer anymore. But should this matter? The parameter is declared to be long, so it should use long for the calculation anyway. Does the modulo operation use another algorithm for numbers > maxint? I am using an AMD Athlon64 3200+, WinXP SP3 and VS2008.

        Stopwatch sw = new Stopwatch();
        TestLong(sw, int.MaxValue - 3L);
        TestLong(sw, int.MaxValue - 2L);
        TestLong(sw, int.MaxValue - 1L);
        TestLong(sw, int.MaxValue);
        TestLong(sw, int.MaxValue + 1L);
        TestLong(sw, int.MaxValue + 2L);
        TestLong(sw, int.MaxValue + 3L);
        Console.ReadLine();

        static void TestLong(Stopwatch sw, long num)
        {
            long n = 0;
            sw.Reset();
            sw.Start();
            for (long i = 3; i < 20000000; i++)
            {
                n += num % i;
            }
            sw.Stop();
            Console.WriteLine(sw.Elapsed);
        }

    EDIT: I now tried the same with C, and the issue does not occur here; all modulo operations take the same time, in release and in debug mode, with and without optimizations turned on:

        #include "stdafx.h"
        #include "time.h"
        #include "limits.h"

        static void TestLong(long long num)
        {
            long long n = 0;
            clock_t t = clock();
            for (long long i = 3; i < 20000000LL*100; i++)
            {
                n += num % i;
            }
            printf("%d - %lld\n", clock()-t, n);
        }

        int main()
        {
            printf("%i %i %i %i\n\n", sizeof(int), sizeof(long), sizeof(long long), sizeof(void*));
            TestLong(3);
            TestLong(10);
            TestLong(131);
            TestLong(INT_MAX - 1L);
            TestLong(UINT_MAX + 1LL);
            TestLong(INT_MAX + 1LL);
            TestLong(LLONG_MAX - 1LL);
            getchar();
            return 0;
        }

    EDIT2: Thanks for the great suggestions. I found that neither .NET nor C (in debug as well as in release mode) uses a single CPU instruction to calculate the remainder; instead they call a function that does. In the C program I could get its name, which is "_allrem". It also displayed full source comments for this file, so I found the information that this algorithm special-cases 32-bit divisors instead of dividends, which was the case in the .NET application. I also found out that the performance of the C program really is only affected by the value of the divisor, not the dividend. Another test showed that the performance of the remainder function in the .NET program depends on both the dividend and the divisor. BTW: Even simple additions of long long values are calculated by consecutive add and adc instructions. So even if my processor calls itself 64-bit, it really isn't :(

    EDIT3: I now ran the C app on a Windows 7 x64 edition, compiled with Visual Studio 2010. The funny thing is, the performance behaviour stays the same, although now (I checked the assembly source) true 64-bit instructions are used.
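    A sketch of a probe that isolates the effect described in EDIT2: time the operation while only the dividend crosses the 32-bit boundary, then while only the divisor does. All names and loop bounds are illustrative; the small per-iteration perturbation (i & 7) keeps the operands on one side of the boundary while preventing the JIT from hoisting the loop-invariant modulo out of the loop.

        using System;
        using System.Diagnostics;

        class ModuloProbe
        {
            const long N = 20000000;

            // Varies the dividend slightly each iteration; divisor stays small.
            static void TimeDividend(string label, long dividendBase)
            {
                long sink = 0;
                var sw = Stopwatch.StartNew();
                for (long i = 0; i < N; i++)
                    sink += (dividendBase + (i & 7)) % 12345;
                sw.Stop();
                Console.WriteLine("{0}: {1} (checksum {2})", label, sw.Elapsed, sink);
            }

            // Same idea, with the divisor crossing the 32-bit boundary instead.
            static void TimeDivisor(string label, long divisorBase)
            {
                long sink = 0;
                var sw = Stopwatch.StartNew();
                for (long i = 0; i < N; i++)
                    sink += 12345 % (divisorBase + (i & 7));
                sw.Stop();
                Console.WriteLine("{0}: {1} (checksum {2})", label, sw.Elapsed, sink);
            }

            static void Main()
            {
                TimeDividend("dividend < 2^31", int.MaxValue - 8L);
                TimeDividend("dividend > 2^31", int.MaxValue + 1L);
                TimeDivisor("divisor < 2^31", int.MaxValue - 8L);
                TimeDivisor("divisor > 2^31", int.MaxValue + 1L);
            }
        }

    If the EDIT2 observation holds, the two TimeDividend runs should differ on .NET but the C runtime's _allrem would only show a difference between the two TimeDivisor runs.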


  • Qt QAbstractButton setDown interferes with grabMouse

    - by edA-qa mort-ora-y
    I have some weird behaviour in Qt that seems like a defect. I'd like to know if anybody has a good workaround. I have a popup widget that contains many buttons. The user activates the popup by pressing the mouse button down. The popup widget calls grabMouse when shown, and it gets all the mouse events. As the mouse rolls over a button it calls setDown(true) on the button. Now, however, when the mouse button is released the popup widget does not get the mouseReleaseEvent; that goes to the button. That is, calling setDown(true) on a button causes the button to steal mouse events, bypassing the grabMouse in the popup widget. I've looked at the source code for setDown but I can't see anything there that would do it directly. I also notice, however, that sometimes a button gets a hover event and sometimes not. I would assume it would never get those events when the mouse is grabbed.

        //g++ -o grab_lost grab_lost.cpp -lQtCore -lQtGui -I /usr/include/qt4/ -I /usr/include/qt4/QtCore -I /usr/include/qt4/QtGui
        /**
         Demonstrates the defect of losing the mouse. Run the program and:
           1. Press mouse anywhere
           2. Release in purple block (not on X)
           3. Release message written (GrabLost receives the mouseReleaseEvent)

         For the defect:
           1. Press mouse anywhere
           2. Release inside the X button
           3. Button is clicked, no release message (GrabLost does not get the mouseReleaseEvent)
        */
        #include <QWidget>
        #include <QPushButton>
        #include <QApplication>
        #include <QMouseEvent>
        #include <QPainter>

        class GrabLost : public QWidget
        {
            QPushButton * btn;
        public:
            GrabLost( QWidget * parent = 0) : QWidget( parent, Qt::Popup )
            {
                btn = new QPushButton( "X", this );
                setMouseTracking( true );
            }

        protected:
            void showEvent( QShowEvent * ev )
            {
                QWidget::showEvent( ev );
                grabMouse();
            }

            void closeEvent( QCloseEvent * ev )
            {
                releaseMouse();
                QWidget::closeEvent( ev );
            }

            void hideEvent( QHideEvent * ev )
            {
                releaseMouse();
                QWidget::hideEvent( ev );
            }

            void mouseReleaseEvent( QMouseEvent * ev )
            {
                qDebug( "mouseRelease" );
                close();
            }

            void mouseMoveEvent( QMouseEvent * ev )
            {
                QWidget * w = childAt( ev->pos() );
                bool ours = dynamic_cast<QPushButton*>( w ) == btn;
                btn->setDown( ours );
            }

            void paintEvent( QPaintEvent * ev )
            {
                // just to show where the widget is
                QPainter pt( this );
                pt.setPen( QColor( 0,0,0 ) );
                pt.setBrush( QColor( 128,0,128) );
                pt.drawRect( 0, 0, size().width(), size().height() );
            }
        };

        class GrabMe : public QWidget
        {
        protected:
            void mousePressEvent( QMouseEvent * ev )
            {
                GrabLost * gl = new GrabLost();
                gl->resize( 100, 100 );
                QPoint at( mapToGlobal( ev->pos() ) );
                gl->move( at.x() - 50, at.y() - 50 );
                gl->show();
            }
        };

        int main( int argc, char** argv )
        {
            QApplication app( argc, argv );
            GrabMe * gm = new GrabMe();
            gm->move( 100, 100 );
            gm->resize( 300, 300 );
            gm->show();
            app.exec();
            return 0;
        }


  • Confusion about C++ Python extensions: things like getting C++ values from Python objects

    - by Matthew Mitchell
    I want to convert some of my Python code to C++ for speed, but it's not as easy as simply making a C++ function and making a few function calls. I have no idea how to get a C++ integer from a Python integer object. I have an integer which is an attribute of an object that I want to use. I also have integers which are inside a list in the object which I need to use. I wanted to test making a C++ extension with this function:

        def setup_framebuffer(surface,flip=False):
            #Create texture if not done already
            if surface.texture is None:
                create_texture(surface)
            #Render child to parent
            if surface.frame_buffer is None:
                surface.frame_buffer = glGenFramebuffersEXT(1)
            glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, c_uint(int(surface.frame_buffer)))
            glFramebufferTexture2DEXT(GL_FRAMEBUFFER_EXT, GL_COLOR_ATTACHMENT0_EXT, GL_TEXTURE_2D, surface.texture, 0)
            glPushAttrib(GL_VIEWPORT_BIT)
            glViewport(0,0,surface._scale[0],surface._scale[1])
            glMatrixMode(GL_PROJECTION)
            glLoadIdentity() #Load the projection matrix
            if flip:
                gluOrtho2D(0,surface._scale[0],surface._scale[1],0)
            else:
                gluOrtho2D(0,surface._scale[0],0,surface._scale[1])

    That function calls create_texture, so I will have to pass that function to the C++ function, which I will do with the third argument. This is what I have so far, while trying to follow the information in the Python documentation:

        #include <Python.h>
        #include <GL/gl.h>

        static PyMethodDef SpamMethods[] = {
            ...
            {"setup_framebuffer", setup_framebuffer, METH_VARARGS, "Loads a texture from a Surface object to the OpenGL framebuffer."},
            ...
            {NULL, NULL, 0, NULL} /* Sentinel */
        };

        static PyObject * setup_framebuffer(PyObject *self, PyObject *args){
            bool flip;
            PyObject *surface, *create_texture, *arglist, *result, *pyflip, *frame_buffer_id;
            if (!PyArg_ParseTuple(args, "OOO", &surface, &pyflip, &create_texture)){
                return NULL;
            }
            if (PyObject_IsTrue(pyflip) == 1){
                flip = true;
            } else {
                flip = false;
            }
            Py_XINCREF(create_texture);
            //Create texture if not done already
            if(texture == NULL){
                arglist = Py_BuildValue("(O)", surface);
                result = PyEval_CallObject(create_texture, arglist);
                Py_DECREF(arglist);
                if (result == NULL){
                    return NULL;
                }
                Py_DECREF(result);
            }
            Py_XDECREF(create_texture);
            //Render child to parent
            frame_buffer_id = PyObject_GetAttr(surface, Py_BuildValue("s","frame_buffer"));
            if(surface.frame_buffer == NULL){
                glGenFramebuffersEXT(1, frame_buffer_id);
            }
            glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, surface.frame_buffer);
            glFramebufferTexture2DEXT(GL_FRAMEBUFFER_EXT, GL_COLOR_ATTACHMENT0_EXT, GL_TEXTURE_2D, surface.texture, 0);
            glPushAttrib(GL_VIEWPORT_BIT);
            glViewport(0, 0, surface._scale[0], surface._scale[1]);
            glMatrixMode(GL_PROJECTION);
            glLoadIdentity();
            //Load the projection matrix
            if (flip){
                gluOrtho2D(0, surface._scale[0], surface._scale[1], 0);
            } else {
                gluOrtho2D(0, surface._scale[0], 0, surface._scale[1]);
            }
            Py_INCREF(Py_None);
            return Py_None;
        }

        PyMODINIT_FUNC initcscalelib(void){
            PyObject *module;
            module = Py_InitModule("cscalelib", SpamMethods);
            if (module == NULL){
                return;
            }
        }

        int main(int argc, char *argv[]){
            /* Pass argv[0] to the Python interpreter */
            Py_SetProgramName(argv[0]);
            /* Initialize the Python interpreter. Required. */
            Py_Initialize();
            /* Add a static module */
            initcscalelib();
        }


  • Where are the function address literals in c++?

    - by academicRobot
    First of all, maybe "literals" is not the right term for this concept, but it's the closest I could think of (not literals in the sense of functions as first-class citizens).

    <UPDATE> After some reading, with help from the answer by Chris Dodd, what I'm looking for is literal function addresses as template parameters. Chris' answer indicates how to do this for standard functions, but how can the addresses of member functions be used as template parameters? Since the standard prohibits non-static member function addresses as template parameters (C++03 14.3.2.3), I suspect the workaround is quite complicated. Any ideas for a workaround? Below, the original form of the question is left as-is for context. </UPDATE>

    The idea is that when you make a conventional function call, it compiles to something like this:

        callq <immediate address>

    But if you make a function call using a function pointer, it compiles to something like this:

        mov <memory location>,%rax
        callq *%rax

    Which is all well and good. However, what if I'm writing a template library that requires a callback of some sort with a specified argument list, and the user of the library is expected to know what function they want to call at compile time? Then I would like to write my template to accept a function literal as a template parameter. So, similar to

        template <int int_literal> struct my_template {...};

    I'd like to write

        template <func_literal_t func_literal> struct my_template {...};

    and have calls to func_literal within my_template compile to callq <immediate address>. Is there a facility in C++ for this, or a workaround to achieve the same effect? If not, why not (e.g. some cataclysmic side effects)? How about C++0x or another language? Solutions that are not portable are fine. Solutions that include the use of member function pointers would be ideal. I'm not particularly interested in being told "You are a <socially unacceptable term for a person of low IQ>, just use function pointers/functors." This is a curiosity-based question, and it seems that it might be useful in some (albeit limited) applications. It seems like this should be possible, since function names are just placeholders for a (relative) memory address, so why not allow more liberal use (e.g. aliasing) of this placeholder?

    P.S. I use function pointers and function objects all the time and they are great. But this post got me thinking about the "don't pay for what you don't use" principle in relation to function calls, and it seems like forcing the use of function pointers or a similar facility when the function is known at compile time is a violation of this principle, though a small one.

    Edit: The intent of this question is not to implement delegates, but rather to identify a pattern that will embed a conventional function call (in immediate mode) directly into third-party code, possibly a template.


  • How can I provide maximum integration between a calendar-like webapp and desktop calendar applications?

    - by Joshua Carmody
    I've been assigned to upgrade/rewrite a webapp that my company uses to schedule conference calls. One of the goals of the upgrade is to improve integration between the application and our users' Outlook calendars (and ideally other calendar programs as well). At present, when a user is viewing the details of a scheduled conference call on the webapp, they can click an "Add to Outlook calendar" link, which points them to a dynamically generated .ical file. On most of our users' systems, Outlook opens the file by default, bringing up the "create calendar appointment" window with the concall information pre-populated. This link creates a one-time appointment only, and has to be clicked for each occurrence of the call. So if a call happened every Monday in June, you would have to click 4 links to add all the appointments to your calendar. This is the full extent of our current level of integration. Ideally, we will be able to upgrade the system so that users can "subscribe" to a concall, which would mean that not just the current call but all calls in a recurring series would appear in the user's calendar with a single click. If one call in a series was cancelled or rescheduled, that call's appointment would change in the users' calendars without the user having to do anything, and without upsetting the rest of the series' appointments. Also, any changes to the call's info (say, the phone number was changed) would automatically be updated in the Outlook calendars of anyone who subscribed, without them having to come back to the webapp to double-check that their information is up to date. Ideally this would also work with other popular calendar programs, as well as Google Calendar. I don't know if we'll be able to achieve that level of integration, but I'd like to get as close to it as we can.

    Additional details and challenges:
    - We aren't running Exchange on a public server, and I'm not likely to be able to get that changed.
    - Assume that our users are basically "the general internet public". Our users are not members of our office's network, nor can they be. We can't set up network logins or Exchange accounts for them.
    - Some of our users are not using Outlook but some other calendar program. Of the ones that are using Outlook, not all are using the same version.
    - We have users in more than 50 countries using this webapp.
    - Synchronization would be one-directional. Nobody can make changes in their own calendars and expect the server to reflect them or replicate them to other users.
    - The current conference calling application is written in ColdFusion. The rewrite will probably be in ASP.NET, but I haven't confirmed that yet. Solutions that work with either or both technologies are appreciated.

    I know that .ical files can theoretically contain more than one event, but in my own experiments I haven't had success in getting Outlook (2003) to add more than one event at a time using the .ical file method. Maybe someone knows how to set up a multi-event .ical file that Outlook will accept? Could a link to such an .ical file be "subscribed" to? Is there such a thing as a calendar RSS feed? Could I simulate running an Exchange server? Any other ideas? Thanks everyone!
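    On the multi-event file question: the iCalendar format (RFC 5545) does allow multiple VEVENT blocks inside one VCALENDAR, and a hedged C# sketch of emitting such a file is below. Whether a given Outlook version imports every event from the file, or will treat a URL to it as a subscribable internet calendar, varies by version, so treat this as an experiment rather than a guarantee; all names here are illustrative.

        using System;
        using System.Text;

        static class IcsBuilder
        {
            // Builds a single VCALENDAR containing one VEVENT per occurrence.
            // Dates must be UTC and formatted yyyyMMddTHHmmssZ per RFC 5545;
            // the spec also wants CRLF line endings, which AppendLine emits
            // on Windows.
            public static string BuildSeries(string summary, DateTime[] startsUtc, TimeSpan length)
            {
                var sb = new StringBuilder();
                sb.AppendLine("BEGIN:VCALENDAR");
                sb.AppendLine("VERSION:2.0");
                sb.AppendLine("PRODID:-//Example//ConferenceCalls//EN"); // illustrative
                foreach (DateTime start in startsUtc)
                {
                    sb.AppendLine("BEGIN:VEVENT");
                    sb.AppendLine("UID:" + Guid.NewGuid() + "@example.com"); // stable UIDs enable later updates
                    sb.AppendLine("DTSTAMP:" + DateTime.UtcNow.ToString("yyyyMMdd'T'HHmmss'Z'"));
                    sb.AppendLine("DTSTART:" + start.ToString("yyyyMMdd'T'HHmmss'Z'"));
                    sb.AppendLine("DTEND:" + start.Add(length).ToString("yyyyMMdd'T'HHmmss'Z'"));
                    sb.AppendLine("SUMMARY:" + summary);
                    sb.AppendLine("END:VEVENT");
                }
                sb.AppendLine("END:VCALENDAR");
                return sb.ToString();
            }
        }

    For the "subscribe and get updates" goal, reusing the same UID per occurrence on subsequent downloads is what lets a client recognize a changed event rather than duplicate it.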


  • GCC error with variadic templates: "Sorry, unimplemented: cannot expand 'Identifier...' into a fixed-length argument list"

    - by Dennis
    While doing variadic template programming in C++0x on GCC, once in a while I get an error that says "Sorry, unimplemented: cannot expand 'Identifier...' into a fixed-length argument list." If I remove the "..." in the code, then I get a different error: "error: parameter packs not expanded with '...'". So if I have the "..." in, GCC calls that an error, and if I take the "..." out, GCC calls that an error too. The only way I have been able to deal with this is to completely rewrite the template metaprogram from scratch using a different approach, and (with luck) I eventually come up with code that doesn't cause the error. But I would really like to know what I was doing wrong. Despite Googling for it and despite much experimentation, I can't pin down what it is that I'm doing differently between variadic template code that does produce this error and code that does not. The wording of the error message seems to imply that the code should work according to the C++0x standard, but that GCC doesn't support it yet. Or perhaps it is a compiler bug? Here's some code that produces the error. Note: I don't need you to write a correct implementation for me, but rather just to point out what it is about my code that is causing this specific error.

        // Used as a container for a set of types.
        template <typename... Types>
        struct TypePack
        {
            // Given a TypePack<T1, T2, T3> and T=T4, returns TypePack<T1, T2, T3, T4>
            template <typename T>
            struct Add
            {
                typedef TypePack<Types..., T> type;
            };
        };

        // Takes the set (First, Others...) and, while N > 0, adds (First) to TPack.
        // TPack is a TypePack containing between 0 and N-1 types.
        template <int N, typename TPack, typename First, typename... Others>
        struct TypePackFirstN
        {
            // sorry, unimplemented: cannot expand 'Others ...' into a fixed-length argument list
            typedef typename TypePackFirstN<N-1, typename TPack::template Add<First>::type, Others...>::type type;
        };

        // The stop condition for TypePackFirstN: when N is 0, return the TypePack that has been built up.
        template <typename TPack, typename... Others>
        struct TypePackFirstN<0, TPack, Others...> // sorry, unimplemented: cannot expand 'Others ...' into a fixed-length argument list
        {
            typedef TPack type;
        };

    EDIT: I've noticed that while a partial template specialization that looks like this does incur the error:

        template <typename... T>
        struct SomeStruct<1, 2, 3, T...> {};

    rewriting it as this does not produce an error:

        template <typename... T>
        struct SomeStruct<1, 2, 3, TypePack<T...>> {};

    It seems that you can declare parameters to partial specializations to be variadic; i.e. this line is OK:

        template <typename... T>

    But you cannot actually use those parameter packs in the specialization; i.e. this part is not OK:

        SomeStruct<1, 2, 3, T...

    The fact that you can make it work if you wrap the pack in some other type, i.e. like this:

        SomeStruct<1, 2, 3, TypePack<T...>>

    to me implies that the declaration of the variadic parameter to a partial template specialization was successful, and you just can't use it directly. Can anyone confirm this?


  • mysql_close doesn't kill locked sql requests

    - by Nikita
    I use mysqld Ver 5.1.37-2-log for debian-linux-gnu. I perform MySQL calls from C++ code with the function mysql_query. The problem occurs when mysql_query executes a procedure and the procedure blocks on a locked table, so mysql_query hangs. If we send a kill signal to the application, we can still see the lock until the table is unlocked. Create the following SQL table and procedure:

        CREATE TABLE IF NOT EXISTS `tabletolock` (
            `id` INT NOT NULL AUTO_INCREMENT,
            PRIMARY KEY (`id`)
        ) ENGINE = InnoDB;

        DELIMITER $$
        DROP PROCEDURE IF EXISTS `LOCK_PROCEDURE` $$
        CREATE PROCEDURE `LOCK_PROCEDURE`()
        BEGIN
            SELECT id INTO @id FROM tabletolock;
        END $$
        DELIMITER ;

    Here are the SQL commands to reproduce the problem:
    1. In one terminal, execute: lock tables tabletolock write;
    2. In another terminal, execute: call LOCK_PROCEDURE();
    3. In the first terminal, execute show processlist and see:

        | 2492 | root | localhost | syn_db | Query | 12 | Locked | SELECT id INTO @id FROM tabletolock |

    Then perform Ctrl-C in the second terminal to interrupt our procedure and look at the processlist again. It is not changed: we still see the locked SELECT request, and we can terminate it only by unlock tables or kill commands. The problem described occurs with the mysql command-line client. It also exists when we use the functions mysql_query and mysql_close. Example C code:

        #include <iostream>
        #include <mysql/mysql.h>
        #include <mysql/errmsg.h>
        #include <signal.h>

        // g++ -Wall -g -fPIC -lmysqlclient dbtest.cpp

        using namespace std;

        MYSQL * connection = NULL;

        void closeconnection() {
            if (connection != NULL) {
                cout << "close connection !\n";
                mysql_close(connection);
                mysql_thread_end();
                delete connection;
                mysql_library_end();
            }
        }

        void sigkill(int s) {
            closeconnection();
            signal(SIGINT, NULL);
            raise(s);
        }

        int main(int argc, char ** argv) {
            signal(SIGINT, sigkill);
            connection = new MYSQL;
            mysql_init(connection);
            mysql_options(connection, MYSQL_READ_DEFAULT_GROUP, "nnfc");
            if (!mysql_real_connect(connection, "127.0.0.1", "user", "password", "db", 3306, NULL, CLIENT_MULTI_RESULTS)) {
                delete connection;
                cout << "cannot connect\n";
                return -1;
            }
            cout << "before procedure call\n";
            mysql_query(connection, "CALL LOCK_PROCEDURE();");
            cout << "after procedure call\n";
            closeconnection();
            return 0;
        }

    Compile it, and perform the following actions:
    1. In the first terminal, execute: lock tables tabletolock write;
    2. Run the program: ./a.out
    3. Interrupt the program with Ctrl-C. On screen we see that the closeconnection function is called, so the connection is closed.
    4. In the first terminal, execute show processlist and see that the procedure was not interrupted.

    My question is: how do I terminate such locked calls from C code? Thank you in advance!


  • Magento Onepage Success Conversion Tracking Design Pattern

    - by user1734954
    My intent is to track conversions through multiple channels by inserting third-party JavaScript (for example Google Analytics, Optimizely, PriceGrabber, etc.) into the footer of onepage success. I've accomplished this by adding a block to the footer reference inside of the checkout success node within local.xml, and everything works appropriately. My questions are more about efficiency and extensibility. It occurred to me that it would be better to combine all of the blocks into a single block reference and then use various methods, acting on a single call to the related models, to provide the data needed for insertion into the JavaScript of each conversion tracking script. Some examples of the common data that conversion tracking may rely on (pseudo): Order ID, Order Total, Order.LineItem.Name (foreach), and so on. Currently, for each of the scripts I make a call to the appropriate model, passing the customer's last order ID as the load value, then call a get(), assign the return value to a variable, and iterate through the data to match the values to the expectations of the given third-party service. All of the data should be pulled once when checkout is complete; each third-party service may expect different data in different formats. Here is an example of one of the conversion tracking template files, which loads at the footer of checkout success (PriceGrabber is one of the simpler examples):

        <?php
        $order = Mage::getModel('sales/order')->loadByIncrementId(Mage::getSingleton('checkout/session')->getLastRealOrderId());
        $amount = number_format($order->getGrandTotal(), 2);
        $customer = Mage::helper('customer')->getCustomer()->getData();
        ?>
        <script type="text/javascript">
        popup_email = '<?php echo($customer['email']);?>';
        popup_order_number = '<?php echo $this->getOrderId() ?>';
        </script>
        <!-- PriceGrabber Merchant Evaluation Code -->
        <script type="text/javascript" charset="UTF-8" src="https://www.pricegrabber.com/rating_merchrevpopjs.php?retid=<something>"></script>
        <noscript><a href="http://www.pricegrabber.com/rating_merchrev.php?retid=<something>" target=_blank>
        <img src="https://images.pricegrabber.com/images/mr_noprize.jpg" border="0" width="272" height="238" alt="Merchant Evaluation"></a></noscript>
        <!-- End PriceGrabber Code -->

    Having just a single piece of code like this is not that big of a deal, but we are doing similar things with a number of different third-party services. A more sophisticated tracking service expects a comma-separated list of all of the product names, IDs, prices, categories, the order ID, etc. I would like to make it all more manageable, so my idea is to do the following:
    1. Combine all of the template files into a single file
    2. Develop a helper class or library to deliver the data to the conversion template
    Goals include extensibility, minimal model calls, and minimal method calls.
    The questions:
    1. Is a Mage helper the best route to take?
    2. Is there any design pattern you would recommend for the "helper" class?
    3. Why would the design pattern you've chosen be best for this instance?


  • Lines don't overlap when they should Java Swing

    - by Sven
    I'm drawing lines in a JFrame on a self-made gridPanel. The problem is, I draw the lines between 2 points. When I have a line between point 1 and point 2 and a line between point 2 and point 3, the lines should connect. This, however, isn't the case; there is a small gap in between, and I have no idea why. It isn't drawing all the way to the specified end point (the start point is correct). Here is the code of the JFrame:

        public void initialize() {
            this.setLayout(new BorderLayout());
            this.setPreferredSize(new Dimension(500, 400));
            gridPane = new GridPane();
            gridPane.setBackground(Color.WHITE);
            gridPane.setSize(this.getPreferredSize());
            gridPane.setLocation(0, 0);
            this.add(gridPane, BorderLayout.CENTER);
            //createSampleLabyrinth();
            drawWall(0, 5, 40, 5);   //These are the 2 lines that don't connect.
            drawWall(40, 5, 80, 5);
            this.pack();
        }

    drawWall calls a method that calls a method in GridPane. The relevant code in gridPane:

        /**
         * Draws a wall on this pane, with the starting point being x1, y1 and its end x2, y2.
         */
        public void drawWall(int x1, int y1, int x2, int y2) {
            Wall wall = new Wall(x1, y1, x2, y2, true);
            wall.drawGraphic();
            wall.setLocation(x1, y1);
            wall.setSize(10000, 10000);
            this.add(wall, JLayeredPane.DEFAULT_LAYER);
            this.repaint();
        }

    This method creates a wall and puts it in the JFrame. The relevant code of the wall:

        public class Wall extends JPanel {
            private int x1;
            private int x2;
            private int y1;
            private int y2;
            private boolean black;

            /**
             * x1, y1 is the start point of the wall (line); its end is x2, y2.
             */
            public Wall(int x1, int y1, int x2, int y2, boolean black) {
                this.x1 = x1;
                this.x2 = x2;
                this.y1 = y1;
                this.y2 = y2;
                this.black = black;
                setOpaque(false);
            }

            private static final long serialVersionUID = 1L;

            public void drawGraphic() {
                repaint();
            }

            public void paintComponent(Graphics g) {
                super.paintComponent(g);
                Graphics2D g2 = (Graphics2D) g;
                if (black) {
                    g2.setColor(Color.BLACK);
                    g2.setStroke(new BasicStroke(8));
                } else {
                    g2.setColor(Color.YELLOW);
                    g2.setStroke(new BasicStroke(3));
                }
                g2.drawLine(x1, y1, x2, y2);
            }
        }

    So, where am I going wrong? The true/false is only to determine whether the wall should be black or yellow; nothing to be concerned about.


  • REST WCF service locks thread when called using AJAX in an ASP.Net site

    - by Jupaol
    I have a WCF REST service consumed in an ASP.NET site, from a page, using AJAX. I want to be able to call methods from my service asynchronously, which means I will have callback handlers in my JavaScript code and, when the methods finish, the output will be updated. The methods should run in different threads, because each method will take a different time to complete its task. I have the code semi-working, but something strange is happening: the first time I execute the code after compiling, it works, running each call in a different thread, but subsequent calls block the service, in such a way that each method call has to wait until the last call ends before the next one executes. And they are running on the same thread. I had the same problem before when I was using page methods, and I solved it by disabling the session in the page, but I have not figured out how to do the same when consuming WCF REST services.

    Note: method completion times (running them async should take only 7 sec in total, and the result order should be Execute1 - Execute3 - Execute2):
    Execute1 -- 2 sec
    Execute2 -- 7 sec
    Execute3 -- 4 sec

    (Screenshots of the output after compiling and of the subsequent calls omitted; the subsequent calls are the problem.)

    I will post the code... I'll try to simplify it as much as I can.

    Service contract:

        [ServiceContract(SessionMode = SessionMode.NotAllowed)]
        public interface IMyService
        {
            // I have 3 other methods like these: Execute2 and Execute3
            [OperationContract]
            [WebInvoke(RequestFormat = WebMessageFormat.Json,
                       ResponseFormat = WebMessageFormat.Json,
                       UriTemplate = "/Execute1",
                       Method = "POST")]
            string Execute1(string param);
        }

        [AspNetCompatibilityRequirements(RequirementsMode = AspNetCompatibilityRequirementsMode.Allowed)]
        [ServiceBehavior(InstanceContextMode = InstanceContextMode.PerCall)]
        public class MyService : IMyService
        {
            // I have 3 other methods like these: Execute2 (7 sec) and Execute3 (4 sec)
            public string Execute1(string param)
            {
                var t = Observable.Start(() => Thread.Sleep(2000), Scheduler.NewThread);
                t.First();
                return string.Format("Execute1 on: {0} count: {1} at: {2} thread: {3}",
                    param, "0", DateTime.Now.ToString(), Thread.CurrentThread.ManagedThreadId.ToString());
            }
        }

    ASPX page:

        <%@ Page EnableSessionState="False" Title="Home Page" Language="C#" MasterPageFile="~/Site.master" AutoEventWireup="true" CodeBehind="Default.aspx.cs" Inherits="RestService._Default" %>
        <asp:Content ID="HeaderContent" runat="server" ContentPlaceHolderID="HeadContent">
        <script type="text/javascript">
            function callMethodAsync(url, data) {
                $("#message").append("<br/>" + new Date());
                $.ajax({
                    cache: false,
                    type: "POST",
                    async: true,
                    url: url,
                    data: '"de"',
                    contentType: "application/json",
                    dataType: "json",
                    success: function (msg) {
                        $("#message").append("<br/>&nbsp;&nbsp;&nbsp;" + msg);
                    },
                    error: function (xhr) {
                        alert(xhr.responseText);
                    }
                });
            }
            $(function () {
                $("#callMany").click(function () {
                    $("#message").html("");
                    callMethodAsync("/Execute1", "hello");
                    callMethodAsync("/Execute2", "crazy");
                    callMethodAsync("/Execute3", "world");
                });
            });
        </script>
        </asp:Content>
        <asp:Content ID="BodyContent" runat="server" ContentPlaceHolderID="MainContent">
            <input type="button" id="callMany" value="Post Many" />
            <div id="message">
            </div>
        </asp:Content>

    Web.config (relevant):

        <system.webServer>
            <modules runAllManagedModulesForAllRequests="true" />
        </system.webServer>
        <system.serviceModel>
            <serviceHostingEnvironment aspNetCompatibilityEnabled="true" multipleSiteBindingsEnabled="true" />
            <standardEndpoints>
                <webHttpEndpoint>
                    <standardEndpoint name="" helpEnabled="true" automaticFormatSelectionEnabled="true" />
                </webHttpEndpoint>
            </standardEndpoints>
        </system.serviceModel>

    Global.asax:

        void Application_Start(object sender, EventArgs e)
        {
            RouteTable.Routes.Ignore("{resource}.axd/{*pathInfo}");
            RouteTable.Routes.Add(new ServiceRoute("", new WebServiceHostFactory(), typeof(MyService)));
        }
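    A hedged hypothesis worth testing: with aspNetCompatibilityEnabled="true", every request runs through the ASP.NET pipeline, and ASP.NET serializes requests that share a read/write session, which would explain the blocking after the first run. A minimal sketch of opting the service out of compatibility mode (only valid if nothing in the service touches HttpContext; the attribute and enum value are the standard WCF ones):

        // Sketch: opt out of ASP.NET compatibility so REST calls bypass the
        // ASP.NET pipeline (and its session lock) entirely.
        [AspNetCompatibilityRequirements(RequirementsMode = AspNetCompatibilityRequirementsMode.NotAllowed)]
        [ServiceBehavior(InstanceContextMode = InstanceContextMode.PerCall)]
        public class MyService : IMyService
        {
            // ... methods unchanged ...
        }
        // ...and, to match, set aspNetCompatibilityEnabled="false" on the
        // <serviceHostingEnvironment> element in web.config.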


  • SOAP and NHibernate Session in C#

    - by Anonymous Coward
    In a set of SOAP web services, the user is authenticated with a custom SOAP header (username/password). Each time the user calls a WS, the following Auth method is called to authenticate and retrieve the User object from the NHibernate session:

        [...]
        public class Services : Base
        {
            private User user;
            [...]
            public string myWS(string username, string password)
            {
                if (Auth(username, password))
                {
                    [...]
                }
            }
        }

        public class Base : WebService
        {
            protected static ISessionFactory sesFactory;
            protected static ISession session;

            static Base()
            {
                Configuration conf = new Configuration();
                [...]
                sesFactory = conf.BuildSessionFactory();
            }

            private bool Auth(...)
            {
                session = sesFactory.OpenSession();
                MembershipUser luser = null;
                if (UserCredentials != null && Membership.ValidateUser(username, password))
                {
                    luser = Membership.GetUser(username);
                }
                ...
                try
                {
                    user = (User)session.Get(typeof(User), luser.ProviderUserKey.ToString());
                }
                catch
                {
                    user = null;
                    throw new [...]
                }
                return user != null;
            }
        }

    When the WS work is done, the session is cleaned up nicely and everything works: the WSs create, modify and change objects, and NHibernate saves them in the DB. The problems come when a user (same username/password) calls the same WS at the same time from different clients (machines); the state of the saved objects becomes inconsistent. How do I manage the session correctly to avoid this? I searched, and the documentation about session management in NHibernate is really vast. Should I lock over the user object? Should I set up a "session share" management between WS calls from the same user? Should I use transactions in some savvy way? Thanks

    Update 1: Yes, mSession is 'session'.

    Update 2: Even with a non-static session object, the data saved in the DB are inconsistent. The pattern I use to insert/save objects is the following:

        var return_value = [...];
        try
        {
            using (ITransaction tx = session.Transaction)
            {
                tx.Begin();
                MyType obj = new MyType();
                user.field = user.field - obj.field; // The field names are i.e., but this is actually what happens.
                session.Save(user);
                session.Save(obj);
                tx.Commit();
                return_value = obj.another_field;
            }
        }
        catch ([...])
        {
            // Handling exceptions...
        }
        finally
        {
            // Clean up
            session.Flush();
            session.Close();
        }
        return return_value;

    All new objects (MyType) are correctly saved, but the user.field status is not as I would expect. Even obj.another_field is correct (the field is an ID with a generated=on-save policy). It is as if 'user.field = user.field - obj.field;' is executed more times than necessary.
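    One way to restate the usual NHibernate advice as a sketch: the session factory is thread-safe and may be static, but an ISession never is, so sharing one in a static field across concurrent WS calls is itself enough to corrupt state. A minimal session-per-call shape, with the transaction owning the flush (names follow the question's code; the Configure() call assumes the usual hibernate.cfg.xml):

        using System;
        using NHibernate;
        using NHibernate.Cfg;

        public abstract class SafeBase : System.Web.Services.WebService
        {
            // The FACTORY is the only safe static.
            protected static readonly ISessionFactory SessionFactory =
                new Configuration().Configure().BuildSessionFactory();

            // Each call gets its own session and transaction; Commit flushes,
            // so no manual Flush() in a finally block is needed.
            protected static T WithSession<T>(Func<ISession, T> work)
            {
                using (ISession session = SessionFactory.OpenSession())
                using (ITransaction tx = session.BeginTransaction())
                {
                    T result = work(session);
                    tx.Commit();
                    return result;
                }
            }
        }

    Under this pattern the "executed more times than necessary" symptom on user.field would instead surface as a lost update between two concurrent transactions, which optimistic concurrency (a version column on User) is designed to catch.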


  • C++ - getline() keeps reading the same line over and over again for some reason

    - by Jammanuser
    I am wondering WTF my while loop which calls istream& getline ( istream& is, string& str ); keeps reading the same line again. I have the following while loop (nested down several levels of other while loops and if statements) which calls getline, but my output statement which is the first code line in the while loop's block of code tells me it is reading the same line over and over again, which explains why my output file doesn't contain the right data when my program is finished. while (getline(file_handle, buffer_str)) { cout<< buffer_str <<endl; cin.get(); if ((buffer_str.find(';', 0) != string::npos) && (buffer_str.find('\"', 0) != string::npos)) { //we're now at the end of the 'exc' initialiation statement buffer_str.erase(buffer_str.size() - 2, 1); buffer_str += '\n'; for (size_t i = 0; i < pos; i++) { buffer_str += ' '; } buffer_str += "throw(exc);\n"; for (size_t i = 0; i < (pos - 3); i++) { buffer_str += ' '; } buffer_str += '}'; } else if (buffer_str.find(search_str6, 0) != string::npos) { //we're now at the second problem line of the first case buffer_str += " {\n"; output_str += buffer_str; output_str += '\n'; getline(file_handle, buffer_str); //We're now at the beginning of the 'exc' initialiation statement output_str += buffer_str; output_str += '\n'; while (getline(file_handle, buffer_str)) { if ((buffer_str.find(';', 0) != string::npos) && (buffer_str.find('\"', 0) != string::npos)) { //we're now at the end of the 'exc' initialiation statement buffer_str.erase(buffer_str.size() - 2, 1); buffer_str += '\n'; for (size_t i = 0; i < pos; i++) { buffer_str += ' '; } buffer_str += "throw(exc);\n"; for (size_t i = 0; i < (pos - 3); i++) { buffer_str += ' '; } buffer_str += '}'; } output_str += buffer_str; output_str += '\n'; if (buffer_str.find("return", 0) != string::npos) { getline(file_handle, buffer_str); output_str += buffer_str; output_str += '\n'; about_to_break = true; break; //out of this while loop } } } if (about_to_break) { break; //out of the level 3 while loop (execution then goes back up to beginning of level 2 while loop) } output_str += buffer_str; output_str += '\n'; } Because of this problem, my if statement and then my else statement in my loop are not functioning as they should, and it doesn't break out of that loop when it should (though it eventually does break out of it, but I don't know exactly how yet). Anyone have any idea what could be causing this problem?? Thanks in advance.


  • Override `drop` for a custom sequence

    - by Bruno Reis
    In short: in Clojure, is there a way to redefine a function from the standard sequence API (which is not defined on any interface like ISeq, IndexedSeq, etc) on a custom sequence type I wrote? 1. Huge data files I have big files in the following format: A long (8 bytes) containing the number n of entries n entries, each one being composed of 3 longs (ie, 24 bytes) 2. Custom sequence I want to have a sequence on these entries. Since I cannot usually hold all the data in memory at once, and I want fast sequential access on it, I wrote a class similar to the following: (deftype DataSeq [id ^long cnt ^long i cached-seq] clojure.lang.IndexedSeq (index [_] i) (count [_] (- cnt i)) (seq [this] this) (first [_] (first cached-seq)) (more [this] (if-let [s (next this)] s '())) (next [_] (if (not= (inc i) cnt) (if (next cached-seq) (DataSeq. id cnt (inc i) (next cached-seq)) (DataSeq. id cnt (inc i) (with-open [f (open-data-file id)] ; open a memory mapped byte array on the file ; seek to the exact position to begin reading ; decide on an optimal amount of data to read ; eagerly read and return that amount of data )))))) The main idea is to read ahead a bunch of entries in a list and then consume from that list. Whenever the cache is completely consumed, if there are remaining entries, they are read from the file in a new cache list. Simple as that. To create an instance of such a sequence, I use a very simple function like: (defn ^DataSeq load-data [id] (next (DataSeq. id (count-entries id) -1 []))) ; count-entries is a trivial "open file and read a long" memoized As you can see, the format of the data allowed me to implement count in very simply and efficiently. 3. drop could be O(1) In the same spirit, I'd like to reimplement drop. The format of these data files allows me to reimplement drop in O(1) (instead of the standard O(n)), as follows: if dropping less then the remaining cached items, just drop the same amount from the cache and done; if dropping more than cnt, then just return the empty list. otherwise, just figure out the position in the data file, jump right into that position, and read data from there. My difficulty is that drop is not implemented in the same way as count, first, seq, etc. The latter functions call a similarly named static method in RT which, in turn, calls my implementation above, while the former, drop, does not check if the instance of the sequence it is being called on provides a custom implementation. Obviously, I could provide a function named anything but drop that does exactly what I want, but that would force other people (including my future self) to remember to use it instead of drop every single time, which sucks. So, the question is: is it possible to override the default behaviour of drop? 4. A workaround (I dislike) While writing this question, I've just figured out a possible workaround: make the reading even lazier. The custom sequence would just keep an index and postpone the reading operation, that would happen only when first was called. The problem is that I'd need some mutable state: the first call to first would cause some data to be read into a cache, all the subsequent calls would return data from this cache. There would be a similar logic on next: if there's a cache, just next it; otherwise, don't bother populating it -- it will be done when first is called again. This would avoid unnecessary disk reads. However, this is still less than optimal -- it is still O(n), and it could easily be O(1). 
Anyways, I don't like this workaround, and my question is still open. Any thoughts? Thanks.
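For readers of this digest, the O(1) drop described in section 3 boils down to index arithmetic. A minimal sketch of the idea in C# (the DataSeq type below is a hypothetical mirror of the Clojure deftype above, used purely for illustration; it is not Clojure API):

sealed class DataSeq
{
    readonly string id; // identifies the data file
    readonly long cnt;  // total number of entries in the file
    readonly long i;    // index of the current entry

    public DataSeq(string id, long cnt, long i)
    {
        this.id = id; this.cnt = cnt; this.i = i;
    }

    // Dropping n entries just advances the index: O(1). No data is read here;
    // the next read seeks straight to byte offset 8 + 24 * (i + n) in the file
    // (8-byte header, 24 bytes per entry), per the file format in section 1.
    public DataSeq Drop(long n)
    {
        if (i + n >= cnt)
            return null; // dropped past the end: the empty sequence
        return new DataSeq(id, cnt, i + n); // cache handling omitted for brevity
    }
}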

    Read the article

  • Coding With Windows Azure IaaS

    - by Hisham El-bereky
This post will focus on some advanced programming topics for IaaS (Infrastructure as a Service) as provided by Windows Azure Virtual Machines (with related resources such as virtual disks and virtual networks). Windows Azure started as a PaaS cloud platform, but because some business cases require full control over the virtual machine, Windows Azure has moved toward providing IaaS as well. Sometimes you will need to manage your cloud IaaS through code, for example to:
Work on a hyper-cloud system by providing a bursting connector to Windows Azure virtual machines
Provide a multi-tenant system which consumes Windows Azure virtual machines
Automate a process in your on-premises or cloud service which needs to utilize some virtual resources
We are going to implement the following basic operations using C# code:
List images
Create virtual machine
List virtual machines
Restart virtual machine
Delete virtual machine
Before implementing the above operations we need to prepare the client side and the Windows Azure subscription to communicate correctly by providing a management certificate (an X.509 v3 certificate), which permits client access to resources in your Windows Azure subscription; requests made using the Windows Azure Service Management REST API require authentication against a certificate that you provide to Windows Azure. More info about setting up the management certificate is located here. To install the .cer on another client machine you will need the .pfx file; if it does not exist, export the .cer as a .pfx.
Note: You will need to install .NET 4.5 on your machine to try the code.
So let's start. This post builds on Michael Washam's post "Advanced Windows Azure IaaS – Demo Code", so I'm here to clarify some points and to add operations which are not present in Michael's demo. The basic C# class used here as the client to the Azure REST API for the IaaS service is HttpClient (provides a base class for sending HTTP requests and receiving HTTP responses from a resource identified by a URI); this object must be initialized with the required data, such as the certificate, headers and, if required, content.
Also, note that the code is based on asynchronous programming for the calls to Azure, which improves performance and lets us compose complex operations that depend on more than one sub-call. The following code shows how to get the certificate and initialize the HttpClient object with the required data, such as headers:

HttpClient GetHttpClient()
{
    X509Store certificateStore = null;
    X509Certificate2 certificate = null;
    try
    {
        certificateStore = new X509Store(StoreName.My, StoreLocation.CurrentUser);
        certificateStore.Open(OpenFlags.ReadOnly);
        string thumbprint = ConfigurationManager.AppSettings["CertThumbprint"];
        var certificates = certificateStore.Certificates.Find(X509FindType.FindByThumbprint, thumbprint, false);
        if (certificates.Count > 0)
        {
            certificate = certificates[0];
        }
    }
    finally
    {
        if (certificateStore != null) certificateStore.Close();
    }

    WebRequestHandler handler = new WebRequestHandler();
    if (certificate != null)
    {
        handler.ClientCertificates.Add(certificate);
        HttpClient httpClient = new HttpClient(handler);
        // set the required headers like x-ms-version
        httpClient.DefaultRequestHeaders.Add("x-ms-version", "2012-03-01");
        httpClient.DefaultRequestHeaders.Accept.Add(new MediaTypeWithQualityHeaderValue("application/xml"));
        return httpClient;
    }
    return null;
}

Let us keep this httpClient object as the reference object used to call the Windows Azure REST API for the IaaS service. For each request operation we need to define:
Request URI
HTTP Method
Headers
Content body

(1) List images
The List OS Images operation retrieves a list of the OS images from the image repository.
Request URI: https://management.core.windows.net/<subscription-id>/services/images (replace <subscription-id> with your Windows Azure subscription ID)
HTTP Method: GET (HTTP 1.1)
Headers: x-ms-version: 2012-03-01
Body: None.
C# Code:

List<String> imageList = new List<String>();
// replace _subscriptionid with your Windows Azure subscription ID
String uri = String.Format("https://management.core.windows.net/{0}/services/images", _subscriptionid);
HttpClient http = GetHttpClient();
Stream responseStream = await http.GetStreamAsync(uri);
if (responseStream != null)
{
    XDocument xml = XDocument.Load(responseStream);
    var images = xml.Root.Descendants(ns + "OSImage").Where(i => i.Element(ns + "OS").Value == "Windows");
    foreach (var image in images)
    {
        string img = image.Element(ns + "Name").Value;
        imageList.Add(img);
    }
}

More information about the REST call (request/response) is located here: http://msdn.microsoft.com/en-us/library/windowsazure/jj157191.aspx
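The helper methods in the following sections call an ApplyNamespace utility that is never shown in the post. A minimal sketch of what it presumably does, recursively stamping the Service Management XML namespace onto an element tree; this sketch is an assumption, not code from the original post (ns is the XNamespace used throughout, i.e. http://schemas.microsoft.com/windowsazure):

static void ApplyNamespace(XElement parent, XNamespace ns)
{
    // apply the namespace to this element and every descendant so the
    // request body matches what the Service Management API expects
    parent.Name = ns + parent.Name.LocalName;
    foreach (XElement child in parent.Elements())
        ApplyNamespace(child, ns);
}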
(2) Create Virtual Machine
Creating a virtual machine requires a service and a deployment to be created first, so creating a VM is done in three steps in case the hosted service and deployment do not exist yet:
Create hosted service: a container for service deployments in Windows Azure. A subscription may have zero or more hosted services.
Create deployment: a service that is running on Windows Azure. A deployment may be running in either the staging or production deployment environment. It may be managed either by referencing its deployment ID, or by referencing the deployment environment in which it's running.
Create virtual machine: the info from the previous two steps is required in this step.
I suggest using the same name for the service, the deployment and the VM to make it easy to manage virtual machines. Note: the name for the hosted service must be unique within Windows Azure. This name is the DNS prefix name and can be used to access the hosted service. For example: http://ServiceName.cloudapp.net/

2.1 Create Service
Request URI: https://management.core.windows.net/<subscription-id>/services/hostedservices
HTTP Method: POST (HTTP 1.1)
Headers: x-ms-version: 2012-03-01, Content-Type: application/xml
Body: More details about the request body (and other information) are located here: http://msdn.microsoft.com/en-us/library/windowsazure/gg441304.aspx
C# code. The following method shows how to create a hosted service:

async public Task<String> NewAzureCloudService(String ServiceName, String Location, String AffinityGroup, String subscriptionid)
{
    String requestID = String.Empty;
    String uri = String.Format("https://management.core.windows.net/{0}/services/hostedservices", subscriptionid);
    HttpClient http = GetHttpClient();
    System.Text.ASCIIEncoding ae = new System.Text.ASCIIEncoding();
    byte[] svcNameBytes = ae.GetBytes(ServiceName);
    String locationEl = String.Empty;
    String locationVal = String.Empty;
    if (String.IsNullOrEmpty(Location) == false)
    {
        locationEl = "Location";
        locationVal = Location;
    }
    else
    {
        locationEl = "AffinityGroup";
        locationVal = AffinityGroup;
    }
    // ns1 is the XML Schema instance namespace (http://www.w3.org/2001/XMLSchema-instance)
    XElement srcTree = new XElement("CreateHostedService",
        new XAttribute(XNamespace.Xmlns + "i", ns1),
        new XElement("ServiceName", ServiceName),
        new XElement("Label", Convert.ToBase64String(svcNameBytes)),
        new XElement(locationEl, locationVal)
    );
    ApplyNamespace(srcTree, ns);
    XDocument CSXML = new XDocument(srcTree);
    HttpContent content = new StringContent(CSXML.ToString());
    content.Headers.ContentType = new System.Net.Http.Headers.MediaTypeHeaderValue("application/xml");
    HttpResponseMessage responseMsg = await http.PostAsync(uri, content);
    if (responseMsg != null)
    {
        requestID = responseMsg.Headers.GetValues("x-ms-request-id").FirstOrDefault();
    }
    return requestID;
}

2.2 Create Deployment
Request URI: https://management.core.windows.net/<subscription-id>/services/hostedservices/<service-name>/deploymentslots/<deployment-slot-name>
Replace <deployment-slot-name> with staging or production, depending on where you wish to deploy your service package; <service-name> is provided as input from the previous step.
HTTP Method: POST (HTTP 1.1)
Headers: x-ms-version: 2012-03-01, Content-Type: application/xml
Body: More details about the request body (and other information) are located here: http://msdn.microsoft.com/en-us/library/windowsazure/ee460813.aspx
C# code. The following method shows how to create a hosted service deployment:

async public Task<String> NewAzureVMDeployment(String ServiceName, String VMName, String VNETName, XDocument VMXML, XDocument DNSXML)
{
    String requestID = String.Empty;
    String uri = String.Format("https://management.core.windows.net/{0}/services/hostedservices/{1}/deployments", _subscriptionid, ServiceName);
    HttpClient http = GetHttpClient();
    XElement srcTree = new XElement("Deployment",
        new XAttribute(XNamespace.Xmlns + "i", ns1),
        new XElement("Name", ServiceName),
        new XElement("DeploymentSlot", "Production"),
        new XElement("Label", ServiceName),
        new XElement("RoleList", null)
    );
    if (String.IsNullOrEmpty(VNETName) == false)
    {
        srcTree.Add(new XElement("VirtualNetworkName", VNETName));
    }
    if (DNSXML != null)
    {
        srcTree.Add(new XElement("DNS", new XElement("DNSServers", DNSXML)));
    }
    XDocument deploymentXML = new XDocument(srcTree);
    ApplyNamespace(srcTree, ns);
    deploymentXML.Descendants(ns + "RoleList").FirstOrDefault().Add(VMXML.Root);
    String fixedXML = deploymentXML.ToString().Replace(" xmlns=\"\"", "");
    HttpContent content = new StringContent(fixedXML);
    content.Headers.ContentType = new System.Net.Http.Headers.MediaTypeHeaderValue("application/xml");
    HttpResponseMessage responseMsg = await http.PostAsync(uri, content);
    if (responseMsg != null)
    {
        requestID = responseMsg.Headers.GetValues("x-ms-request-id").FirstOrDefault();
    }
    return requestID;
}

2.3 Create Virtual Machine
Request URI: https://management.core.windows.net/<subscription-id>/services/hostedservices/<cloudservice-name>/deployments/<deployment-name>/roles
<cloudservice-name> and <deployment-name> are provided as input from the previous steps.
HTTP Method: POST (HTTP 1.1)
Headers: x-ms-version: 2012-03-01, Content-Type: application/xml
Body: More details about the request body (and other information) are located here: http://msdn.microsoft.com/en-us/library/windowsazure/jj157186.aspx
C# code:

async public Task<String> NewAzureVM(String ServiceName, String VMName, XDocument VMXML)
{
    String requestID = String.Empty;
    String deployment = await GetAzureDeploymentName(ServiceName);
    String uri = String.Format("https://management.core.windows.net/{0}/services/hostedservices/{1}/deployments/{2}/roles", _subscriptionid, ServiceName, deployment);
    HttpClient http = GetHttpClient();
    HttpContent content = new StringContent(VMXML.ToString());
    content.Headers.ContentType = new System.Net.Http.Headers.MediaTypeHeaderValue("application/xml");
    HttpResponseMessage responseMsg = await http.PostAsync(uri, content);
    if (responseMsg != null)
    {
        requestID = responseMsg.Headers.GetValues("x-ms-request-id").FirstOrDefault();
    }
    return requestID;
}
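The VMXML document passed to NewAzureVMDeployment and NewAzureVM above is never shown in the post. As a rough illustration only, a minimal PersistentVMRole payload could be built along the following lines; the element names follow the Create Virtual Machine Deployment REST documentation, but treat the exact names, required attributes (for example the i:type attribute on Role) and values as assumptions to verify against that documentation:

// Hedged sketch (not from the original post): build a minimal PersistentVMRole payload.
XDocument BuildVMRoleXml(string vmName, string adminPassword, string sourceImageName, string vhdMediaLink)
{
    XElement role = new XElement("Role",
        new XElement("RoleName", vmName),
        new XElement("RoleType", "PersistentVMRole"),
        new XElement("ConfigurationSets",
            new XElement("ConfigurationSet",
                new XElement("ConfigurationSetType", "WindowsProvisioningConfiguration"),
                new XElement("ComputerName", vmName),
                new XElement("AdminPassword", adminPassword))),
        new XElement("OSVirtualHardDisk",
            new XElement("MediaLink", vhdMediaLink),           // blob URL where the new OS disk VHD will live
            new XElement("SourceImageName", sourceImageName)), // an image name from section (1)
        new XElement("RoleSize", "Small"));
    ApplyNamespace(role, ns); // same helper sketched earlier
    return new XDocument(role);
}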
(3) List Virtual Machines
To list the virtual machines hosted in a Windows Azure subscription we have to loop over all hosted services and get the virtual machines hosted in each one. To do that we need to execute the following operations: list the hosted services, then list each hosted service's virtual machines.

3.1 Listing Hosted Services
Request URI: https://management.core.windows.net/<subscription-id>/services/hostedservices
HTTP Method: GET (HTTP 1.1)
Headers: x-ms-version: 2012-03-01
Body: None.
More info about this HTTP request is located here: http://msdn.microsoft.com/en-us/library/windowsazure/ee460781.aspx
C# Code:

async private Task<List<XDocument>> GetAzureServices(String subscriptionid)
{
    // note: the URI must not contain a trailing space
    String uri = String.Format("https://management.core.windows.net/{0}/services/hostedservices", subscriptionid);
    List<XDocument> services = new List<XDocument>();
    HttpClient http = GetHttpClient();
    Stream responseStream = await http.GetStreamAsync(uri);
    if (responseStream != null)
    {
        XDocument xml = XDocument.Load(responseStream);
        var svcs = xml.Root.Descendants(ns + "HostedService");
        foreach (XElement r in svcs)
        {
            XDocument vm = new XDocument(r);
            services.Add(vm);
        }
    }
    return services;
}

3.2 Listing Hosted Service Virtual Machines
Request URI: https://management.core.windows.net/<subscription-id>/services/hostedservices/<service-name>/deployments/<deployment-name>/roles/<role-name>
HTTP Method: GET (HTTP 1.1)
Headers: x-ms-version: 2012-03-01
Body: None.
More info about this HTTP request is located here: http://msdn.microsoft.com/en-us/library/windowsazure/jj157193.aspx
C# Code:

async public Task<XDocument> GetAzureVM(String ServiceName, String VMName, String subscriptionid)
{
    String deployment = await GetAzureDeploymentName(ServiceName);
    XDocument vmXML = new XDocument();
    String uri = String.Format("https://management.core.windows.net/{0}/services/hostedservices/{1}/deployments/{2}/roles/{3}", subscriptionid, ServiceName, deployment, VMName);
    HttpClient http = GetHttpClient();
    Stream responseStream = await http.GetStreamAsync(uri);
    if (responseStream != null)
    {
        vmXML = XDocument.Load(responseStream);
    }
    return vmXML;
}

So the final method which can be used to list all virtual machines is:

async public Task<XDocument> GetAzureVMs()
{
    List<XDocument> services = await GetAzureServices(_subscriptionid); // pass the subscription id
    XDocument vms = new XDocument();
    vms.Add(new XElement("VirtualMachines"));
    ApplyNamespace(vms.Root, ns);
    foreach (var svc in services)
    {
        string ServiceName = svc.Root.Element(ns + "ServiceName").Value;
        String uri = String.Format("https://management.core.windows.net/{0}/services/hostedservices/{1}/deploymentslots/{2}", _subscriptionid, ServiceName, "Production");
        try
        {
            HttpClient http = GetHttpClient();
            Stream responseStream = await http.GetStreamAsync(uri);
            if (responseStream != null)
            {
                XDocument xml = XDocument.Load(responseStream);
                var roles = xml.Root.Descendants(ns + "RoleInstance");
                foreach (XElement r in roles)
                {
                    XElement svcnameel = new XElement("ServiceName", ServiceName);
                    ApplyNamespace(svcnameel, ns);
                    r.Add(svcnameel); // not part of the RoleInstance; carried along for convenience
                    vms.Root.Add(r);
                }
            }
        }
        catch (HttpRequestException)
        {
            // this cloud service has no production deployment (no VMs)
        }
    }
    return vms;
}

(4) Restart Virtual Machine
Request URI: https://management.core.windows.net/<subscription-id>/services/hostedservices/<service-name>/deployments/<deployment-name>/roleinstances/<role-instance-name>/Operations (note that this operation targets a role instance, matching the code below)
HTTP Method: POST (HTTP 1.1)
Headers: x-ms-version: 2012-03-01, Content-Type: application/xml
Body:
<RestartRoleOperation xmlns:i="http://www.w3.org/2001/XMLSchema-instance">
  <OperationType>RestartRoleOperation</OperationType>
</RestartRoleOperation>
More details about this HTTP request are located here: http://msdn.microsoft.com/en-us/library/windowsazure/jj157197.aspx
C# Code:

async public Task<String> RebootVM(String ServiceName, String RoleName)
{
    String requestID = String.Empty;
    String deployment = await GetAzureDeploymentName(ServiceName);
    String uri = String.Format("https://management.core.windows.net/{0}/services/hostedservices/{1}/deployments/{2}/roleInstances/{3}/Operations", _subscriptionid, ServiceName, deployment, RoleName);
    HttpClient http = GetHttpClient();
    XElement srcTree = new XElement("RestartRoleOperation",
        new XAttribute(XNamespace.Xmlns + "i", ns1),
        new XElement("OperationType", "RestartRoleOperation")
    );
    ApplyNamespace(srcTree, ns);
    XDocument CSXML = new XDocument(srcTree);
    HttpContent content = new StringContent(CSXML.ToString());
    content.Headers.ContentType = new System.Net.Http.Headers.MediaTypeHeaderValue("application/xml");
    HttpResponseMessage responseMsg = await http.PostAsync(uri, content);
    if (responseMsg != null)
    {
        requestID = responseMsg.Headers.GetValues("x-ms-request-id").FirstOrDefault();
    }
    return requestID;
}

(5) Delete Virtual Machine
You can delete your hosted virtual machine by deleting its deployment, but I prefer to delete its hosted service as well, so you can easily manage your virtual machines from code.

5.1 Delete Deployment
Request URI: https://management.core.windows.net/<subscription-id>/services/hostedservices/<service-name>/deployments/<deployment-name>
HTTP Method: DELETE (HTTP 1.1)
Headers: x-ms-version: 2012-03-01
Body: None.
C# code:

async public Task<HttpResponseMessage> DeleteDeployment(string deploymentName)
{
    // by the unified-name convention of this post, the service name equals the deployment name
    String uri = String.Format("https://management.core.windows.net/{0}/services/hostedservices/{1}/deployments/{2}", _subscriptionid, deploymentName, deploymentName);
    HttpClient http = GetHttpClient();
    HttpResponseMessage responseMessage = await http.DeleteAsync(uri);
    return responseMessage;
}

5.2 Delete Hosted Service
Request URI: https://management.core.windows.net/<subscription-id>/services/hostedservices/<service-name>
HTTP Method: DELETE (HTTP 1.1)
Headers: x-ms-version: 2012-03-01
Body: None.
C# code:

async public Task<HttpResponseMessage> DeleteService(string serviceName)
{
    String uri = String.Format("https://management.core.windows.net/{0}/services/hostedservices/{1}", _subscriptionid, serviceName);
    Log.Info("Windows Azure URI (http DELETE verb): " + uri, typeof(VMManager));
    HttpClient http = GetHttpClient();
    HttpResponseMessage responseMessage = await http.DeleteAsync(uri);
    return responseMessage;
}

And the following is the method which can be used to delete both the deployment and the service:

async public Task<string> DeleteVM(string vmName)
{
    string responseString = string.Empty;
    // as a convention in this post, a unified name is used for the service, deployment
    // and VM instance to make it easy to manage VMs
    HttpResponseMessage responseMessage = await DeleteDeployment(vmName);
    if (responseMessage != null)
    {
        string requestID = responseMessage.Headers.GetValues("x-ms-request-id").FirstOrDefault();
        OperationResult result = await PollGetOperationStatus(requestID, 5, 120);
        if (result.Status == OperationStatus.Succeeded)
        {
            responseString = result.Message;
            HttpResponseMessage sResponseMessage = await DeleteService(vmName);
            if (sResponseMessage != null)
            {
                // poll on the request id of the service deletion, not the first request
                string sRequestID = sResponseMessage.Headers.GetValues("x-ms-request-id").FirstOrDefault();
                OperationResult sResult = await PollGetOperationStatus(sRequestID, 5, 120);
                responseString += sResult.Message;
            }
        }
        else
        {
            responseString = result.Message;
        }
    }
    return responseString;
}

(DeleteVM relies on PollGetOperationStatus and GetAzureDeploymentName helpers that the post does not show; sketches follow the references below.)
Note: This article is subject to updates. Hisham
References:
Advanced Windows Azure IaaS – Demo Code
Windows Azure Service Management REST API Reference
Introduction to the Azure Platform
Representational state transfer
Asynchronous Programming with Async and Await (C# and Visual Basic)
HttpClient Class
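As promised above, here are minimal sketches of the two helpers the post uses without showing. These are assumption-laden reconstructions, not the author's code: PollGetOperationStatus interprets its second and third arguments as the poll interval and timeout in seconds, shapes OperationResult after the call sites above, and uses the Service Management "Get Operation Status" operation (GET https://management.core.windows.net/<subscription-id>/operations/<request-id>) together with the _subscriptionid, ns and GetHttpClient members from the surrounding code:

public enum OperationStatus { InProgress, Succeeded, Failed }

public class OperationResult
{
    public OperationStatus Status { get; set; }
    public string Message { get; set; }
}

async public Task<OperationResult> PollGetOperationStatus(string requestId, int pollIntervalSeconds, int timeoutSeconds)
{
    var result = new OperationResult { Status = OperationStatus.InProgress };
    String uri = String.Format("https://management.core.windows.net/{0}/operations/{1}", _subscriptionid, requestId);
    DateTime deadline = DateTime.UtcNow.AddSeconds(timeoutSeconds);
    HttpClient http = GetHttpClient();
    while (DateTime.UtcNow < deadline)
    {
        Stream responseStream = await http.GetStreamAsync(uri);
        XDocument xml = XDocument.Load(responseStream);
        // response body: <Operation><ID>...</ID><Status>InProgress|Succeeded|Failed</Status>...</Operation>
        string status = xml.Root.Element(ns + "Status").Value;
        if (status != "InProgress")
        {
            result.Status = (status == "Succeeded") ? OperationStatus.Succeeded : OperationStatus.Failed;
            result.Message = "Operation " + requestId + " " + status;
            return result;
        }
        await Task.Delay(TimeSpan.FromSeconds(pollIntervalSeconds));
    }
    result.Message = "Timed out waiting for operation " + requestId;
    return result;
}

GetAzureDeploymentName can read the deployment name from the production slot (with the unified-name convention it would simply equal the service name):

async public Task<String> GetAzureDeploymentName(String serviceName)
{
    String uri = String.Format("https://management.core.windows.net/{0}/services/hostedservices/{1}/deploymentslots/{2}", _subscriptionid, serviceName, "Production");
    HttpClient http = GetHttpClient();
    Stream responseStream = await http.GetStreamAsync(uri);
    XDocument xml = XDocument.Load(responseStream);
    return xml.Root.Element(ns + "Name").Value; // <Deployment><Name>...</Name>...
}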

    Read the article

  • Windows Azure: General Availability of Web Sites + Mobile Services, New AutoScale + Alerts Support, No Credit Card Needed for MSDN

    - by ScottGu
    This morning we released a major set of updates to Windows Azure.  These updates included: Web Sites: General Availability Release of Windows Azure Web Sites with SLA Mobile Services: General Availability Release of Windows Azure Mobile Services with SLA Auto-Scale: New automatic scaling support for Web Sites, Cloud Services and Virtual Machines Alerts/Notifications: New email alerting support for all Compute Services (Web Sites, Mobile Services, Cloud Services, and Virtual Machines) MSDN: No more credit card requirement for sign-up All of these improvements are now available to use immediately (note: some are still in preview).  Below are more details about them. Web Sites: General Availability Release of Windows Azure Web Sites I’m incredibly excited to announce the General Availability release of Windows Azure Web Sites. The Windows Azure Web Sites service is perfect for hosting a web presence, building customer engagement solutions, and delivering business web apps.  Today’s General Availability release means we are taking off the “preview” tag from the Free and Standard (formerly called reserved) tiers of Windows Azure Web Sites.  This means we are providing: A 99.9% monthly SLA (Service Level Agreement) for the Standard tier Microsoft Support available on a 24x7 basis (with plans that range from developer plans to enterprise Premier support) The Free tier runs in a shared compute environment and supports up to 10 web sites. While the Free tier does not come with an SLA, it works great for rapid development and testing and enables you to quickly spike out ideas at no cost. The Standard tier, which was called “Reserved” during the preview, runs using dedicated per-customer VM instances for great performance, isolation and scalability, and enables you to host up to 500 different Web sites within them.  You can easily scale your Standard instances on-demand using the Windows Azure Management Portal.  You can adjust VM instance sizes from a Small instance size (1 core, 1.75GB of RAM), up to a Medium instance size (2 core, 3.5GB of RAM), or Large instance (4 cores and 7 GB RAM).  You can choose to run between 1 and 10 Standard instances, enabling you to easily scale up your web backend to 40 cores of CPU and 70GB of RAM: Today’s release also includes general availability support for custom domain SSL certificate bindings for web sites running using the Standard tier. Customers will be able to utilize certificates they purchase for their custom domains and use either SNI or IP based SSL encryption. SNI encryption is available for all modern browsers and does not require an IP address.  SSL certificates can be used for individual sites or wild-card mapped across multiple sites (we charge extra for the use of a SSL cert – but the fee is per-cert and not per site which means you pay once for it regardless of how many sites you use it with).  Today’s release also includes the following new features: Auto-Scale support Today’s Windows Azure release adds preview support for Auto-Scaling web sites.  This enables you to setup automatic scale rules based on the activity of your instances – allowing you to automatically scale down (and save money) when they are below a CPU threshold you define, and automatically scale up quickly when traffic increases.  See below for more details. 64-bit and 32-bit mode support You can now choose to run your standard tier instances in either 32-bit or 64-bit mode (previously they only ran in 32-bit mode).  
This enables you to address even more memory within individual web applications. Memory dumps Memory dumps can be very useful for diagnosing issues and debugging apps. Using a REST API, you can now get a memory dump of your sites, which you can then use for investigating issues in Visual Studio Debugger, WinDbg, and other tools. Scaling Sites Independently Prior to today's release, all sites scaled up/down together whenever you scaled any site in a sub-region. So you may have had to keep your proof-of-concept or testing sites in a separate sub-region if you wanted to keep them in the Free tier. This will no longer be necessary.  Windows Azure Web Sites can now mix different tier levels in the same geographic sub-region. This allows you, for example, to selectively move some of your sites in the West US sub-region up to Standard tier when they require the features, scalability, and SLA of the Standard tier. Full pricing details on Windows Azure Web Sites can be found here.  Note that the "Shared Tier" of Windows Azure Web Sites remains in preview mode (and continues to have discounted preview pricing).  Mobile Services: General Availability Release of Windows Azure Mobile Services I'm incredibly excited to announce the General Availability release of Windows Azure Mobile Services.  Mobile Services is perfect for building scalable cloud back-ends for Windows 8.x, Windows Phone, Apple iOS, Android, and HTML/JavaScript applications.  Customers We've seen tremendous adoption of Windows Azure Mobile Services since we first previewed it last September, and more than 20,000 customers are now running mobile back-ends in production using it.  These customers range from startups like Yatterbox, to university students using Mobile Services to complete apps like Sly Fox in their spare time, to media giants like Verdens Gang finding new ways to deliver content, and telcos like TalkTalk Business delivering the up-to-the-minute information their customers require.  In today's Build keynote, we demonstrated how TalkTalk Business is using Windows Azure Mobile Services to deliver service, outage and billing information to its customers, wherever they might be. Partners When we unveiled the source control and Custom API features I blogged about two weeks ago, we enabled a range of new scenarios, one of which is a more flexible way to work with third party services.  The following blogs, samples and tutorials from our partners cover great ways you can extend Mobile Services to help you build rich modern apps: New Relic allows developers to monitor and manage the end-to-end performance of iOS and Android applications connected to Mobile Services. SendGrid eliminates the complexity of sending email from Mobile Services, saving time and money, while providing reliable delivery to the inbox. Twilio provides a telephony infrastructure web service in the cloud that you can use with Mobile Services to integrate phone calls, text messages and IP voice communications into your mobile apps. Xamarin provides a Mobile Services add-on that makes it easy to build cross-platform connected mobile apps. Pusher lets you quickly and securely add scalable real-time messaging functionality to Mobile Services-based web and mobile apps. Visual Studio 2013 and Windows 8.1 This week during the //build/ keynote, we demonstrated how Visual Studio 2013, Mobile Services and Windows 8.1 make building connected apps easier than ever.
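The tooling described next generates the client-side plumbing for you, but the connection itself boils down to a couple of lines. A rough sketch (the service URL and application key below are placeholders, and the exact code Visual Studio generates may differ):

using Microsoft.WindowsAzure.MobileServices;

// hypothetical values: substitute your own mobile service URL and application key
public static MobileServiceClient MobileService = new MobileServiceClient(
    "https://your-service.azure-mobile.net/",
    "YOUR-APPLICATION-KEY");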
Developers building Windows 8 applications in Visual Studio can now connect them to Windows Azure Mobile Services by simply right clicking then choosing Add Connected Service. You can either create a new Mobile Service or choose an existing Mobile Service in the Add Connected Service dialog. Once completed, Visual Studio adds a reference to the Mobile Services SDK to your project and generates a Mobile Services client initialization snippet automatically. Add Push Notifications Push Notifications and Live Tiles are key to building engaging experiences. Visual Studio 2013 and Mobile Services make it super easy to add push notifications to your Windows 8.1 app, by clicking the Add a Push Notification item: The Add Push Notification wizard will then guide you through the registration with the Windows Store as well as connecting your app to a new or existing mobile service. Upon completion of the wizard, Visual Studio will configure your mobile service with the WNS credentials, as well as add sample logic to your client project and your mobile service that demonstrates how to send push notifications to your app. Server Explorer Integration In Visual Studio 2013 you can also now view your Mobile Services in the Server Explorer. You can add tables, edit, and save server-side scripts without ever leaving Visual Studio, as shown in the image below: Pricing With today's general availability release we are announcing that we will be offering Mobile Services in three tiers – Free, Standard, and Premium.  Each tier is metered using a simple pricing model based on the # of API calls (bandwidth is included at no extra charge), and the Standard and Premium tiers are backed by 99.9% monthly SLAs.  You can elastically scale up or down the number of instances you have of each tier to increase the # of API requests your service can support – allowing you to efficiently scale as your business grows. The following table summarizes the new pricing model (full pricing details here): Build Conference Talks The //BUILD/ conference will be packed with sessions covering every aspect of developing connected applications with Mobile Services. The best part is that, even if you can't be with us in San Francisco, every session is being streamed live. Be sure not to miss these talks: Mobile Services – Soup to Nuts — Josh Twist Building Cross-Platform Apps with Windows Azure Mobile Services — Chris Risner Connected Windows Phone Apps made Easy with Mobile Services — Yavor Georgiev Build Connected Windows 8.1 Apps with Mobile Services — Nick Harris Who's that user? Identity in Mobile Apps — Dinesh Kulkarni Building REST Services with JavaScript — Nathan Totten Going Live and Beyond with Windows Azure Mobile Services — Kirill Gavrylyuk, Paul Batum Protips for Windows Azure Mobile Services — Chris Risner AutoScale: Dynamically scale up/down your app based on real-world usage One of the key benefits of Windows Azure is that you can dynamically scale your application in response to changing demand. In the past, though, you have had to either manually change the scale of your application, or use additional tooling (such as WASABi or MetricsHub) to automatically scale your application. Today, we're announcing that AutoScale will be built into Windows Azure directly.  With today's release it is now enabled for Cloud Services, Virtual Machines and Web Sites (Mobile Services support will come soon).
Auto-scale enables you to configure Windows Azure to automatically scale your application dynamically on your behalf (without any manual intervention) so you can achieve the ideal performance and cost balance. Once configured it will regularly adjust the number of instances running in response to the load in your application. Currently, we support two different load metrics: CPU percentage Storage queue depth (Cloud Services and Virtual Machines only) We’ll enable automatic scaling on even more scale metrics in future updates. When to use Auto-Scale The following are good criteria for services/apps that will benefit from the use of auto-scale: The service/app can scale horizontally (e.g. it can be duplicated to multiple instances) The service/app load changes over time If your app meets these criteria, then you should look to leverage auto-scale. How to Enable Auto-Scale To enable auto-scale, simply navigate to the Scale tab in the Windows Azure Management Portal for the app/service you wish to enable.  Within the scale tab turn the Auto-Scale setting on to either CPU or Queue (for Cloud Services and VMs) to enable Auto-Scale.  Then change the instance count and target CPU settings to configure the Auto-Scale ranges you want to maintain. The image below demonstrates how to enable Auto-Scale on a Windows Azure Web-Site.  I’ve configured the web-site so that it will run using between 1 and 5 VM instances.  The exact # used will depend on the aggregate CPU of the VMs using the 40-70% range I’ve configured below.  If the aggregate CPU goes above 70%, then Windows Azure will automatically add new VMs to the pool (up to the maximum of 5 instances I’ve configured it to use).  If the aggregate CPU drops below 40% then Windows Azure will automatically start shutting down VMs to save me money: Once you’ve turned auto-scale on, you can return to the Scale tab at any point and select Off to manually set the number of instances. Using the Auto-Scale Preview With today’s update you can now, in just a few minutes, have Windows Azure automatically adjust the number of instances you have running  in your apps to keep your service performant at an even better cost. Auto-scale is being released today as a preview feature, and will be free until General Availability. During preview, each subscription is limited to 10 separate auto-scale rules across all of the resources they have (Web sites, Cloud services or Virtual Machines). If you hit the 10 limit, you can disable auto-scale for any resource to enable it for another. Alerts and Notifications Starting today we are now providing the ability to configure threshold based alerts on monitoring metrics. This feature is available for compute services (cloud services, VM, websites and mobiles services). Alerts provide you the ability to get proactively notified of active or impending issues within your application.  You can define alert rules for: Virtual machine monitoring metrics that are collected from the host operating system (CPU percentage, network in/out, disk read bytes/sec and disk write bytes/sec) and on monitoring metrics from monitoring web endpoint urls (response time and uptime) that you have configured. Cloud service monitoring metrics that are collected from the host operating system (same as VM), monitoring metrics from the guest VM (from performance counters within the VM) and on monitoring metrics from monitoring web endpoint urls (response time and uptime) that you have configured. 
For Web Sites and Mobile Services, alerting rules can be configured on monitoring metrics from monitoring endpoint urls (response time and uptime) that you have configured. Creating Alert Rules You can add an alert rule for a monitoring metric by navigating to the Settings -> Alerts tab in the Windows Azure Management Portal. Click on the Add Rule button to create an alert rule. Give the alert rule a name and optionally add a description. Then pick the service which you want to define the alert rule on: The next step in the alert creation wizard will then filter the monitoring metrics based on the service you selected: Once created, the rule will show up in your alerts list within the settings tab: The rule above is defined as "not activated" since it hasn't tripped over the CPU threshold we set.  If the CPU on the above machine goes over the limit, though, I'll get an email notifying me from a Windows Azure Alerts email address ([email protected]). And when I log into the portal and revisit the alerts tab I'll see it highlighted in red.  Clicking it will then enable me to see what is causing it to fail, as well as view the history of when it has happened in the past. Alert Notifications With today's initial preview you can now easily create alerting rules based on monitoring metrics and get notified of active or impending issues within your application that require attention. During preview, each subscription is limited to 10 alert rules across all of the services that support alert rules. No More Credit Card Requirement for MSDN Subscribers Earlier this month (during TechEd 2013), Windows Azure announced that MSDN users will get Windows Azure Credits every month that they can use for any Windows Azure services they want. You can read details about this in my previous Dev/Test blog post. Today we are making further updates to enable an easier Windows Azure signup for MSDN users. MSDN users will now not be required to provide payment information (e.g. no credit card) during sign-up, so long as they use the service within the included monetary credit for the billing period. For usage beyond the monetary credit, they can enable overages by providing the payment information and removing the spending limit. This enables a super easy, one page sign-up experience for MSDN users.  Simply sign-up for your Windows Azure trial using the same Microsoft ID that you use to manage your MSDN account, then complete the one page sign-up form below and you will be able to spend your free monthly MSDN credits (up to $150 each month) on any Windows Azure resource for dev/test: This makes it trivially easy for every MSDN customer to start using Windows Azure today.  If you haven't signed up yet, I definitely recommend checking it out. Summary Today's release includes a ton of great features that enable you to build even better cloud solutions.  If you don't already have a Windows Azure account, you can sign-up for a free trial and start using all of the above features today.  Then visit the Windows Azure Developer Center to learn more about how to build apps with it. Hope this helps, Scott P.S. In addition to blogging, I am also now using Twitter for quick updates and to share links. Follow me at: twitter.com/scottgu

    Read the article

  • MVVM in Task-It

    As I'm gearing up to write a post about dynamic XAP loading with MEF, I'd like to first talk a bit about MVVM, the Model-View-ViewModel pattern, as I will be leveraging this pattern in my future posts. Download Source Code Why MVVM? Your first question may be, "why do I need this pattern? I've been using a code-behind approach for years and it works fine." Well, you really don't have to make the switch to MVVM, but let me first explain some of the benefits I see for doing so. MVVM Benefits Testability - This is the one you'll probably hear the most about when it comes to MVVM. Moving most of the code from your code-behind to a separate view model class means you can now write unit tests against the view model without any knowledge of a view (UserControl). Multiple UIs - Let's just say that you've created a killer app, it's running in the browser, and maybe you've even made it run out-of-browser. Now what if your boss comes to you and says, "I heard about this new Windows Phone 7 device that is coming out later this year. Can you start porting the app to that device?". Well, now you have to create a new UI (UserControls, etc.) because you have a lot less screen real estate to work with. So what do you do, copy all of your existing UserControls, paste them, rename them, and then start changing the code? Hmm, that doesn't sound so good. But wait, if most of the code that makes your browser-based app tick lives in view model classes, now you can create new view (UserControls) for Windows Phone 7 that reference the same view model classes as your browser-based app. Page state - In Silverlight you're at some point going to be faced with the same issue you dealt with for years in ASP.NET, maintaining page state. Let's say a user hits your Products page, does some stuff (filters record, etc.), then leaves the page and comes back later. It would be best if the Products page was in the same state as when they left it right? Well, if you've thrown away your view (UserControl or Page) and moved off to another part of the UI, when you come back to Products you're probably going to re-instantiate your view...which will put it right back in the state it was when it started. Hmm, not good. Well, with a little help from MEF you can store the state in your view model class, MEF will keep that view model instance hanging around in memory, and then you simply rebind your view to the view model class. I made that sound easy, but it's actually a bit of work to properly store and restore the state. At least it can be done though, which will make your users a lot happier! I'll talk more about this in an upcoming blog post. No event handlers? Another nice thing about MVVM is that you can bind your UserControls to the view model, which may eliminate the need for event handlers in your code-behind. So instead of having a Click handler on a Button (or RadMenuItem), for example, you can now bind your control's Command property to a DelegateCommand in your view model (I'll talk more about Commands in an upcoming post). Instead of having a SelectionChanged event handler on your RadGridView you can now bind its SelectedItem property to a property in your view model, and each time the user clicks a row, the view model property's setter will be called. 
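Since Commands come up again below, here is roughly what a minimal DelegateCommand looks like. This is the common hand-rolled pattern (frameworks like Prism ship an equivalent); it is a sketch, not necessarily the exact class the Task-It sample uses:

using System;
using System.Windows.Input;

// Wraps a view model method so it can be bound to any control's Command property.
public class DelegateCommand : ICommand
{
    private readonly Action _execute;
    private readonly Func<bool> _canExecute;

    public DelegateCommand(Action execute, Func<bool> canExecute = null)
    {
        _execute = execute;
        _canExecute = canExecute;
    }

    public event EventHandler CanExecuteChanged;

    public bool CanExecute(object parameter)
    {
        return _canExecute == null || _canExecute();
    }

    public void Execute(object parameter)
    {
        _execute();
    }
}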
Now through the magic of binding we can eliminate the need for traditional code-behind based event handlers on our user interface controls, and the best thing is that the view model knows about everything that's going on...which means we can test things without a user interface. The brains of the operation So what we're seeing here is that the view is now just a dumb layer that binds to the view model, and that the view model is in control of just about everything, like what happens when a RadGridView row is selected, or when a RadComboBoxItem is selected, or when a RadMenuItem is clicked. It is also responsible for loading data when the page is hit, as well as kicking off data inserts, updates and deletions. Once again, all of this stuff can be tested without the need for a user interface. If the test works, then it'll work regardless of whether the user is hitting the browser-based version of your app, or the Windows Phone 7 version. Nice! The database Before running the code for this app you will need to create the database. First, create a database called MVVMProject in SQL Server, then run MVVMProject.sql in the MVVMProject/Database directory of your downloaded .zip file. This should give you a Task table with 3 records in it. When you fire up the solution you will also need to update the connection string in web.config to point to your database instead of IBM12\SQLSERVER2008. The code One note about this code is that it runs against the latest Silverlight 4 RC and WCF RIA Services code. Please see my first blog post about updating to the RC bits: Beta to RC - Part 1. At the top of this post is a link to a sample project that demonstrates a sample application with a Tasks page that uses the MVVM pattern. This is a simplified version of how I have implemented the Tasks page in the Task-It application. You'll notice that Tasks.xaml has very little code to it. Just a TextBlock that displays the page title and a ContentControl.

<StackPanel>
    <TextBlock Text="Tasks" Style="{StaticResource PageTitleStyle}"/>
    <Rectangle Style="{StaticResource StandardSpacerStyle}"/>
    <ContentControl x:Name="ContentControl1"/>
</StackPanel>

In List.xaml we have a RadGridView. Notice that the ItemsSource is bound to a property in the view model class called Tasks, SelectedItem is bound to a property in the view model called SelectedItem, and IsBusy is bound to a property in the view model called IsLoading.

<Grid>
    <telerikGridView:RadGridView ItemsSource="{Binding Tasks}" SelectedItem="{Binding SelectedItem, Mode=TwoWay}"
                                 IsBusy="{Binding IsLoading}" AutoGenerateColumns="False" IsReadOnly="True" RowIndicatorVisibility="Collapsed"
                                 IsFilteringAllowed="False" ShowGroupPanel="False">
        <telerikGridView:RadGridView.Columns>
            <telerikGridView:GridViewDataColumn Header="Name" DataMemberBinding="{Binding Name}" Width="3*"/>
            <telerikGridView:GridViewDataColumn Header="Due" DataMemberBinding="{Binding DueDate}" DataFormatString="{}{0:d}" Width="*"/>
        </telerikGridView:RadGridView.Columns>
    </telerikGridView:RadGridView>
</Grid>

In Details.xaml we have a Save button that is bound to a property called SaveCommand in our view model. We also have a simple form (I'm using a couple of controls here from Silverlight.FX for the form layout, FormPanel and Label, simply because they make for a clean XAML layout). Notice that the FormPanel is also bound to the SelectedItem in the view model (the same one that the RadGridView is).
The two form controls (the TextBox and RadDatePicker) are bound to the SelectedItem's Name and DueDate properties. These are properties of the Task object that WCF RIA Services creates.

<StackPanel>
    <Button Content="Save" Command="{Binding SaveCommand}" HorizontalAlignment="Left"/>
    <Rectangle Style="{StaticResource StandardSpacerStyle}"/>
    <fxui:FormPanel DataContext="{Binding SelectedItem}" Style="{StaticResource FormContainerStyle}">
        <fxui:Label Text="Name:"/>
        <TextBox Text="{Binding Name, Mode=TwoWay}"/>
        <fxui:Label Text="Due:"/>
        <telerikInput:RadDatePicker SelectedDate="{Binding DueDate, Mode=TwoWay}"/>
    </fxui:FormPanel>
</StackPanel>

In the code-behind of the Tasks control, Tasks.xaml.cs, I created an instance of the view model class (TasksViewModel) in the constructor and set it as the DataContext for the control. The Tasks page will load one of two child UserControls depending on whether you are viewing the list of tasks (List.xaml) or the form for editing a task (Details.xaml).

// Set the DataContext to an instance of the view model class
var viewModel = new TasksViewModel();
DataContext = viewModel;

// Child user controls (inherit DataContext from this user control)
List = new List();       // RadGridView
Details = new Details(); // Form

When the page first loads, the List is loaded into the ContentControl.

// Show the RadGridView first
ContentControl1.Content = List;

In the code-behind we also listen for a couple of the view model's events. The ItemSelected event will be fired when the user clicks on a record in the RadGridView in the List control. The SaveCompleted event will be fired when the user clicks Save in the Details control (the form). Here the view model is in control, and is letting the view know when something needs to change.

// Listeners for the view model's events
viewModel.ItemSelected += OnItemSelected;
viewModel.SaveCompleted += OnSaveCompleted;

The event handlers toggle the view between the RadGridView (List) and the form (Details).

void OnItemSelected(object sender, RoutedEventArgs e)
{
    // Show the form
    ContentControl1.Content = Details;
}

void OnSaveCompleted(object sender, RoutedEventArgs e)
{
    // Show the RadGridView
    ContentControl1.Content = List;
}

In TasksViewModel, we instantiate a DataContext object and a SaveCommand in the constructor. DataContext is a WCF RIA Services object that we'll use to retrieve the list of Tasks and to save any changes to a task. I'll talk more about this and Commands in a future post, but for now think of the SaveCommand as an event handler that is called when the Save button in the form is clicked.

DataContext = new DataContext();
SaveCommand = new DelegateCommand(OnSave);

When the TasksViewModel constructor is called we also make a call to LoadTasks. This sets IsLoading to true (which causes the RadGridView's busy indicator to appear) and retrieves the records via WCF RIA Services.

public LoadOperation<Task> LoadTasks()
{
    // Show the loading message
    IsLoading = true;
    // Get the data via WCF RIA Services. When the call has returned, call OnTasksLoaded.
    return DataContext.Load(DataContext.GetTasksQuery(), OnTasksLoaded, false);
}

When the data is returned, OnTasksLoaded is called. This sets IsLoading to false (which hides the RadGridView's busy indicator), and fires property changed notifications to the UI to let it know that the IsLoading and Tasks properties have changed.
This property changed notification basically tells the UI to rebind.

void OnTasksLoaded(LoadOperation<Task> lo)
{
    // Hide the loading message
    IsLoading = false;

    // Notify the UI that the Tasks and IsLoading properties have changed
    this.OnPropertyChanged(p => p.Tasks);
    this.OnPropertyChanged(p => p.IsLoading);
}

Next let's look at the view model's SelectedItem property. This is the one that's bound to both the RadGridView and the form. When the user clicks a record in the RadGridView its setter gets called (set a breakpoint and see what I mean). The other code in the setter lets the UI know that the SelectedItem has changed (so the form displays the correct data), and fires the event that notifies the UI that a selection has occurred (which tells the UI to switch from List to Details).

public Task SelectedItem
{
    get { return _selectedItem; }
    set
    {
        _selectedItem = value;

        // Let the UI know that the SelectedItem has changed (forces it to re-bind)
        this.OnPropertyChanged(p => p.SelectedItem);
        // Notify the UI, so it can switch to the Details (form) page
        NotifyItemSelected();
    }
}

One last thing: saving the data. When the Save button in the form is clicked it fires the SaveCommand, which calls the OnSave method in the view model (once again, set a breakpoint to see it in action).

public void OnSave()
{
    // Save the changes via WCF RIA Services. When the save is complete, call OnSaveCompleted.
    DataContext.SubmitChanges(OnSaveCompleted, null);
}

In OnSave, we tell WCF RIA Services to submit any changes, which there will be if you changed either the Name or the Due Date in the form. When the save is completed, it calls OnSaveCompleted. This method fires a notification back to the UI that the save is completed, which causes the RadGridView (List) to show again.

public virtual void OnSaveCompleted(SubmitOperation so)
{
    // Clear the item that is selected in the grid (in case we want to select it again)
    SelectedItem = null;
    // Notify the UI, so it can switch back to the List (RadGridView) page
    NotifySaveCompleted();
}

(A sketch of the lambda-based OnPropertyChanged helper used throughout this view model follows below.)
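The this.OnPropertyChanged(p => p.Tasks) calls above rely on a lambda-based INotifyPropertyChanged helper that the article does not show. One common way to implement it is as an extension method on a view model base class; a minimal sketch (the type names are assumptions, not the actual Task-It source):

using System;
using System.ComponentModel;
using System.Linq.Expressions;

public abstract class ViewModelBase : INotifyPropertyChanged
{
    public event PropertyChangedEventHandler PropertyChanged;

    public void RaisePropertyChanged(string propertyName)
    {
        var handler = PropertyChanged;
        if (handler != null)
            handler(this, new PropertyChangedEventArgs(propertyName));
    }
}

public static class ViewModelExtensions
{
    // enables strongly typed notifications: this.OnPropertyChanged(p => p.Tasks)
    public static void OnPropertyChanged<T>(this T viewModel, Expression<Func<T, object>> property)
        where T : ViewModelBase
    {
        var member = property.Body as MemberExpression;
        if (member == null)
        {
            // value-type properties (e.g. bool IsLoading) are boxed via a Convert node
            member = (MemberExpression)((UnaryExpression)property.Body).Operand;
        }
        viewModel.RaisePropertyChanged(member.Member.Name);
    }
}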

    Read the article

  • A jQuery Plug-in to monitor Html Element CSS Changes

    - by Rick Strahl
Here's a scenario I've run into on a few occasions: I need to be able to monitor certain CSS properties on an HTML element and know when those properties change. The need for this arose out of wanting to build generic components that could 'attach' themselves to other objects and monitor changes on the 'parent' object so the dependent object can adjust itself accordingly. What I wanted to create is a jQuery plug-in that allows me to specify a list of CSS properties to monitor and have a function fire in response to any change to any of those CSS properties. The results are the .watch() and .unwatch() jQuery plug-ins. Here's a simple example page of this plug-in that demonstrates tracking changes to an element being moved with draggable and closable behavior: http://www.west-wind.com/WestWindWebToolkit/samples/Ajax/jQueryPluginSamples/WatcherPlugin.htm Try it with different browsers – IE and FireFox use the DOM event handlers and Chrome, Safari and Opera use setInterval handlers to manage this behavior. It should work in all of them, but browsers other than IE and FireFox will show a bit of lag between the changes in the main element and the shadow. The relevant HTML for this example is this fragment of a main <div> (#notebox) and an element that is to mimic a shadow (#shadow).

<div class="containercontent">
    <div id="notebox" style="width: 200px; height: 150px; position: absolute; z-index: 20; padding: 20px; background-color: lightsteelblue;">
        Go ahead drag me around and close me!
    </div>
    <div id="shadow" style="background-color: Gray; z-index: 19; position: absolute; display: none;">
    </div>
</div>

The watcher plug-in is then applied to the main <div> and shadow in sync with the following plug-in code:

<script type="text/javascript">
    $(document).ready(function () {
        var counter = 0;
        $("#notebox").watch("top,left,height,width,display,opacity", function (data, i) {
            var el = $(this);
            var sh = $("#shadow");
            var propChanged = data.props[i];
            var valChanged = data.vals[i];
            counter++;
            showStatus("Prop: " + propChanged + " value: " + valChanged + " " + counter);
            var pos = el.position();
            var w = el.outerWidth();
            var h = el.outerHeight();
            sh.css({
                width: w,
                height: h,
                left: pos.left + 5,
                top: pos.top + 5,
                display: el.css("display"),
                opacity: el.css("opacity")
            });
        })
        .draggable()
        .closable()
        .css("left", 10);
    });
</script>

When you run this page, as you drag the #notebox element the #shadow element will stay pinned underneath the #notebox element, effectively keeping the shadow attached to the main element. Likewise, if you hide or fadeOut() the #notebox element the shadow will also go away – show the #notebox element and the shadow also re-appears, because we are assigning the display property from the parent on the shadow. Note we're attaching the .watch() plug-in to the #notebox element and have it fire whenever the top, left, height, width, opacity or display CSS properties are changed. The passed data element contains a props[] and a vals[] array that hold the properties monitored and their current values. An index passed as the second parm tells you which property has changed and what its current value is (propChanged/valChanged in the code above). The rest of the watcher handler code then deals with figuring out the main element's position and recalculating and setting the shadow's position using the jQuery .css() function. Note that this is just an example to demonstrate the watch() behavior here – this is not the best way to create a shadow.
If you’re interested in a more efficient and cleaner way to handle shadows with a plug-in check out the .shadow() plug-in in ww.jquery.js (code search for fn.shadow) which uses native CSS features when available but falls back to a tracked shadow element on browsers that don’t support it, which is how this watch() plug-in came about in the first place :-) How does it work? The plug-in works by letting the user specify a list of properties to monitor as a comma delimited string and a handler function: el.watch("top,left,height,width,display,opacity", function (data, i) {}, 100, id) You can also specify an interval (if no DOM event monitoring isn’t available in the browser) and an ID that identifies the event handler uniquely. The watch plug-in works by hooking up to DOMAttrModified in FireFox, to onPropertyChanged in Internet Explorer, or by using a timer with setInterval to handle the detection of changes for other browsers. Unfortunately WebKit doesn’t support DOMAttrModified consistently at the moment so Safari and Chrome currently have to use the slower setInterval mechanism. In response to a changed property (or a setInterval timer hit) a JavaScript handler is fired which then runs through all the properties monitored and determines if and which one has changed. The DOM events fire on all property/style changes so the intermediate plug-in handler filters only those hits we’re interested in. If one of our monitored properties has changed the specified event handler function is called along with a data object and an index that identifies the property that’s changed in the data.props/data.vals arrays. The jQuery plugin to implement this functionality looks like this: (function($){ $.fn.watch = function (props, func, interval, id) { /// <summary> /// Allows you to monitor changes in a specific /// CSS property of an element by polling the value. /// when the value changes a function is called. /// The function called is called in the context /// of the selected element (ie. this) /// </summary> /// <param name="prop" type="String">CSS Properties to watch sep. by commas</param> /// <param name="func" type="Function"> /// Function called when the value has changed. /// </param> /// <param name="interval" type="Number"> /// Optional interval for browsers that don't support DOMAttrModified or propertychange events. /// Determines the interval used for setInterval calls. /// </param> /// <param name="id" type="String">A unique ID that identifies this watch instance on this element</param> /// <returns type="jQuery" /> if (!interval) interval = 100; if (!id) id = "_watcher"; return this.each(function () { var _t = this; var el$ = $(this); var fnc = function () { __watcher.call(_t, id) }; var data = { id: id, props: props.split(","), vals: [props.split(",").length], func: func, fnc: fnc, origProps: props, interval: interval, intervalId: null }; // store initial props and values $.each(data.props, function (i) { data.vals[i] = el$.css(data.props[i]); }); el$.data(id, data); hookChange(el$, id, data); }); function hookChange(el$, id, data) { el$.each(function () { var el = $(this); if (typeof (el.get(0).onpropertychange) == "object") el.bind("propertychange." + id, data.fnc); else if ($.browser.mozilla) el.bind("DOMAttrModified." 
+ id, data.fnc); else data.intervalId = setInterval(data.fnc, interval); }); } function __watcher(id) { var el$ = $(this); var w = el$.data(id); if (!w) return; var _t = this; if (!w.func) return; // must unbind or else unwanted recursion may occur el$.unwatch(id); var changed = false; var i = 0; for (i; i < w.props.length; i++) { var newVal = el$.css(w.props[i]); if (w.vals[i] != newVal) { w.vals[i] = newVal; changed = true; break; } } if (changed) w.func.call(_t, w, i); // rebind event hookChange(el$, id, w); } } $.fn.unwatch = function (id) { this.each(function () { var el = $(this); var data = el.data(id); try { if (typeof (this.onpropertychange) == "object") el.unbind("propertychange." + id, data.fnc); else if ($.browser.mozilla) el.unbind("DOMAttrModified." + id, data.fnc); else clearInterval(data.intervalId); } // ignore if element was already unbound catch (e) { } }); return this; } })(jQuery); Note that there's a corresponding .unwatch() plug-in that can be used to stop monitoring properties. The id parameter is optional on both watch() and unwatch() – a standard name is used if you don't specify one, but it's a good idea to use unique names for each element watched to avoid overlap in event ids, especially if you're monitoring many elements. The syntax is: $.fn.watch = function(props, func, interval, id) props A comma-delimited list of CSS style properties that are to be watched for changes. If any of the specified properties changes, the function specified in the second parameter is fired. func The function fired in response to changed styles. Receives this as the element changed and an object parameter that represents the watched properties and their respective values. The first parameter is passed in this structure: { id: watcherId, props: [], vals: [], func: thisFunc, fnc: internalHandler, origProps: strPropertyListOnWatcher }; A second parameter is the index of the changed property, so data.props[i] or data.vals[i] gets the property and changed value. interval The interval for setInterval() for those browsers that don't support property watching in the DOM. In milliseconds. id An optional id that identifies this watcher. Required only if multiple watchers might be hooked up to the same element. The default is _watcher if not specified. It's been a Journey I started building this plug-in about two years ago and had to make many modifications to it in response to changes in jQuery and also in browser behaviors. I think the latest round of changes should make this plug-in fairly future proof going forward (although I hope there will be better cross-browser change event notifications in the future). One of the big problems I ran into had to do with recursive change notifications – it looks like starting with jQuery 1.4.4, jQuery internally modifies element properties on some calls to some .css() property retrievals and things like outerHeight/Width(). In IE this would cause nasty lock-up issues at times. In response to this I changed the code to unbind the events when the handler function is called and then rebind when it exits. This also makes user code less prone to stack overflow recursion, as you can actually change properties on the base element.
It also means, though, that if you change one of the monitored properties inside the handler, the watch() handler won't fire again in response - you need to resort to a setTimeout() call instead to force the code to run outside of the handler:

$("#notebox").watch("top,left,height,width,display,opacity", function (data, i) {
    var el = $(this);
    …
    // this makes el changes work
    setTimeout(function () { el.css("top", 10); }, 10);
});

Since I've built this component I've had a lot of good uses for it. The .shadow() fallback functionality is one of them.

Resources

The watch() plug-in is part of ww.jquery.js and the West Wind Web Toolkit. You're free to use this code here or the code from the toolkit.

West Wind Web Toolkit
Latest version of ww.jquery.js (search for fn.watch)
watch plug-in documentation

© Rick Strahl, West Wind Technologies, 2005-2011. Posted in ASP.NET, JavaScript, jQuery

    Read the article

  • CodePlex Daily Summary for Friday, November 19, 2010

CodePlex Daily Summary for Friday, November 19, 2010

Popular Releases

- SQL Server CLR Function for Address Correction and Geocoding: Release 2.0 - Release 2.0. New User Defined Function fields added.
- MiniTwitter: 1.58 - MiniTwitter 1.58.
- LateBindingApi.Excel: LateBindingApi.Excel Release 0.7d - Release+Samples V0.7: contains the runtime DLL and sample projects. Sample projects: COMAddinExample - demonstrates a version-independently bound COM add-in; Example01 - background colors and borders for cells; Example02 - font attributes and alignment for cells; Example03 - number formats; Example04 - shapes, WordArt, pictures, 3D effects; Example05 - charts; Example06 - dialogs in Excel; Example07 - adding VBA code to a workbook; Example08 - events; Example09 - creating custom GUI elements and ...
- Free Silverlight & WPF Chart Control - Visifire: Visifire SL and WPF Charts v3.6.4 Released - Today we are releasing Visifire 3.6.4 with a few bug fixes: multi-line labels were getting clipped while exploding the last DataPoint in funnel and pyramid charts; the ClosestPlotDistance property in Axis was not behaving as expected; in a DateTime axis, the chart threw an exception on mouse click over the PlotArea if no DataPoints were present; the ToolTip was not disappearing when changing the DataSource property of a DataSeries at runtime; chart threw exception ...
- Opalis Community Releases: Opalis Architecture and Workflow Deployment Docs - The Opalis Architecture & Workflow Deployment Process documentation includes two documents (for presentation purposes only, one is a DOCX with an embedded Visio diagram and the other is a PDF). The documentation is here as a "Best Practice Guide". The phrase "Best Practice Guide" is in quotes because this is an UNOFFICIAL example of an Opalis architecture as well as an UNOFFICIAL example of a workflow deployment process. The int...
- Sexy Select: sexyselect.0.2 - Review index.html inside the source code for a working demo.
- Microsoft SQL Server Product Samples: Database: AdventureWorks 2008R2 SR1 - Sample databases for Microsoft SQL Server 2008R2 (SR1). See Database Prerequisites for SQL Server 2008R2 for the feature configurations required for installing the sample databases. See Installing SQL Server 2008R2 Databases for step-by-step installation instructions. The SR1 release contains minor bug fixes to the installer used to create the sample databases. There are no changes to the databases themselves.
- VidCoder: 0.7.2 - Fixed duplicated subtitles when running multiple encodes off of the same title.
- Razor Templating Engine: Razor Template Engine v1.1 - Release 1.1 changes: ADDED: signed assemblies with strong names to allow them to be referenced by other strongly-named assemblies. FIX: filter out dynamic assemblies, which caused failures in template compilation. FIX: changed ASCII to UTF8 encoding to support UTF-8 encoded string templates. FIX: corrected the implementation of TemplateBase, adding the ITemplate interface.
- Prism Training Kit: Prism Training Kit - 1.1 - An updated version of the Prism Training Kit that targets Prism 4.0 and fixes the bugs reported in version 1.0. This release consists of a training kit with labs on the following topics: Modularity, Dependency Injection, Bootstrapper, UI Composition, Communication. Note: take into account that this is a beta version. If you find any bugs please report them in the Issue Tracker. Prerequisites: Visual Studio 2010, Microsoft Word 2007/2010, Microsoft Silverlight 4, Microsoft S...
- Craig's Utility Library: Craig's Utility Library Code 2.0 - This update contains a number of changes, added functionality, and bug fixes: added transaction support to SQLHelper; added linked/embedded resource ability to EmailSender; updated List to take into account new functions; added better support for MAC addresses in the WMI classes; fixed parsing in the Reflection class when dealing with subclasses; fixed a bug in SQLHelper when replacing a Command that is a select after doing a select; fixed an issue in the SQL Server helper with regard to generati...
- MFCMAPI: November 2010 Release - Build: 6.0.0.1023. Full release notes at SGriffin's blog. If you just want to run the tool, get the executable. If you want to debug it, get the symbol file and the source. The 64-bit build will only work on a machine with Outlook 2010 64-bit installed. All other machines should use the 32-bit build, regardless of the operating system. Facebook Badge
- DotNetNuke® Community Edition: 05.06.00 - Major highlights: added automatic portal alias creation for single-portal installs; updated the file manager upload page to allow users to upload multiple files without returning to the file manager page; fixed an issue with Event Log email notifications; fixed an issue where the Telerik HTML editor was unable to upload files to a secure or database folder; fixed an issue where the registration page was not set correctly during an upgrade; fixed an issue where Sendmail stripped HTML and links from emails...
- mVu Mobile Viewer: mVu Mobile Viewer 0.7.10.0 - Tube8 fix.
- EPPlus - Create advanced Excel 2007 spreadsheets on the server: EPPlus 2.8.0.1 - New features: improved chart support; different chart-type series on the same chart; support for a secondary axis and a lot of new properties; better styling; encryption and workbook protection; table support; import of csv files; array formulas ... and a lot of bug fixes.
- AutoLoL: AutoLoL v1.4.2 - Added support for more clients (French and Russian). Settings are now stored separately for each user on a computer. Auto Login is much faster now. Auto Login detects and handles caps lock state properly now.
- TailspinSpyworks - WebForms Sample Application: TailspinSpyworks-v0.9 - Contains a number of bug fixes and additional tutorial steps as well as complete database implementation details.
- ASP.NET MVC Project Awesome (rich jQuery AJAX helpers): 1.3 and demos - A library with MVC helpers and a demo project that demonstrates an awesome way of doing ASP.NET MVC. Tested on Mozilla, Safari, Chrome, Opera, IE 9b/8/7/6. New in 1.3: Autocomplete helper; Autocomplete and AjaxDropdown can have a parentId and be filled with data depending on the value of the parent; PopupForm, besides Content("ok") on success, can also return Json(data) and use 'data' in a client-side function; awesome demo improved (cruder, builder, added service layer).
- UltimateJB: UltimateJB 2.01 PL3 KakaRoto + PSNYes by EvilSperm - Here is a version many have been waiting for impatiently: the PSNYes version, which lets you play on PSN with a jailbroken PS3. For now, PSNYes is only available for PS3s on firmware 3.41. The KakaRoto PL3 version integrates his latest modifications and prepares for the integration of firmware 3.30. Conclusion: UltimateJB PSNYes validates use of PSN (only compatible with 3.41); UltimateJB DEFAULT - no PSN, but available for PS3s ...
- Fluent Ribbon Control Suite: Fluent Ribbon Control Suite 2.0 - Supports .NET 4.0 RTM and .NET 3.5. Includes: Fluent.dll (with .pdb and .xml), showcase application, samples (only for .NET 4.0). Foundation (tabs, groups, contextual tabs, Quick Access Toolbar, Backstage); resizing (ribbon reducing & enlarging principles); galleries (gallery in ContextMenu, InRibbonGallery); MVVM (shows how to use this library with the Model-View-ViewModel pattern); KeyTips; ScreenTips; toolbars; ColorGallery. NEW! Walkthrough (documenta...

New Projects

- ALARM - ALert Application for Resource Managers: ALert Application for Resource Managers.
- Amino: Amino coming soon. A very dynamic MVVM application environment. More details to follow.
- ASDF-CRM: A cleanly designed CRM system based on .NET 4 technologies with a Silverlight GUI.
- Auto Slideshow with description: Auto slideshow with image descriptions, targeted at webpage banners or website introductions. Developed in XAML and C# using storyboards and defined timelines. This is a Silverlight 4 application and can be resized depending on your requirements.
- Base De Datos: A database project.
- Besteam Developments Safe Driving: School project developed with Visual Studio 2010, C# 4.0 and .NET 4.0.
- BizTalk Mapper Extensions UtilityPack: A set of libraries with several useful functoids to include and use in a map, extending BizTalk Mapper's capabilities.
- BlogEngine Additions (Widgets, Extensions, Custom Code): Additions and custom code for BlogEngine: widgets, extensions, and custom code that can't be used in widgets or extensions.
- Equals Verifier .Net: A small library to verify that classes implement Equals according to MSDN guidelines.
- ERPSia: SIA Tec project.
- Falafel Solution Rename Script: The Falafel Software Solution Rename PowerShell Script makes it easy to reuse an existing solution/project by performing a global rename. If you have a solution named ExampleSolution and want to reuse it as WidgetSolution, this script will rename everything for you.
- fOrganiz: This application allows you to automatically organize your pictures by date into specific subdirectories.
- GEChecker: GEChecker makes it easier to view your RuneScape Grand Exchange offers while offline. It's developed in VB.Net.
- HBUIMIS: HBUIMIS.
- Hospital Management: 3-tier architecture.
- interpool: interpool project - PIS 2010.
- loud tweets: loud tweets.
- Luminous: The Luminous library consists of various .NET components, controls and classes which make programming easier: WPF and Windows Forms TaskDialog (previously VDialog), Simple Popup Control, Glass Button, Linq to CSS, Linq to XHTML, and various useful classes and extension methods.
- MailMonitr: MailMonitr helps improve email push notifications to your iPhone by using Prowl to deliver notifications instead of the default "ding." Prowl has the ability to set "quiet hours." Also, a summary of unread messages is displayed on the lock screen for each push notification.
- MemoryGames: TODO.
- Miko Ling's Open Source Projects: Open source.
- MSCRM - Duplicate Checker - Plugin: Plugin to handle real-time duplicate checking and constraining on any int attribute specified. I use this to prevent service calls that create entities from creating duplicates. The external systems making the service calls use int as the primary key.
- MyTestProject: Test .NET project.
- Nellen.dk: It can get much wilder....
- Orange Library: Orange Library.
- RateIt: Rating plugin for jQuery. RTL support, progressive enhancement, unobtrusive JavaScript (using HTML5 data-* attributes); supports as many stars as you'd like, and also any step size.
- RDPAddins .NET: With the RDPAddins .NET framework you can build RDP channel add-ins in your favourite .NET language.
- REMS - Real State Management System based on ASP.NET 4.0: Real estate management system based on ASP.NET 4.0.
- Repository of pan: My source code repository; I record my ideas and test code here.
- SharePoint Log Investigation Tool (SPLIT): SPLIT makes searching SharePoint logs easy.
- SQL Monitor: Monitor SQL Server processes and jobs, view the executing SQL query, kill a process / stop a job.
- TestCodePlexForMe: TestCodePlexForMe.
- weibo wp7 client: A project to create a Windows Phone 7 client for Sina Weibo (http://t.sina.com.cn).
- WM2Day: WM2Day is a Windows Mobile (both Smartphone and PDA) client for Me2day.net, the Korean micro-blogging service.
- X10Dispatcher: Interface for automating the X10 CM15A home automation powerline control unit. Requires a physical CM15A control unit connected to the computer running this program. Extends remote monitoring and automation of computer activities based on sensors and events.
- XNA 2D Particle Engine: A flexible, extensible particle engine written in XNA Game Studio 4.0. The engine can emit texture-based particles in almost any way you like and can easily be integrated as a (drawable) game component in your XNA Game Studio 4.0 projects.

    Read the article

  • Inside the Concurrent Collections: ConcurrentDictionary

    - by Simon Cooper
Using locks to implement a thread-safe collection is rather like using a sledgehammer - unsubtle, easy to understand, and tends to make any other tool redundant. Unlike the previous two collections I looked at, ConcurrentStack and ConcurrentQueue, ConcurrentDictionary uses locks quite heavily. However, it is careful to wield locks only where necessary to ensure that concurrency is maximised. This will, by necessity, be a higher-level look than my other posts in this series, as there is quite a lot of code and logic in ConcurrentDictionary. Therefore, I do recommend that you have ConcurrentDictionary open in a decompiler to have a look at all the details that I skip over.

The problem with locks

There are several things to bear in mind when using locks, as encapsulated by the lock keyword in C# and the System.Threading.Monitor class in .NET (if you're unsure as to what lock does in C#, I briefly covered it in my first post in the series):

Locks block threads. The most obvious problem is that threads waiting on a lock can't do any work at all. No preparatory work, no 'optimistic' work like in ConcurrentQueue and ConcurrentStack, nothing. A blocked thread just sits there, waiting to be unblocked. This is bad if you're trying to maximise concurrency.

Locks are slow. Whereas most of the methods on the Interlocked class can be compiled down to a single CPU instruction, ensuring atomicity at the hardware level, taking out a lock requires some heavy lifting by the CLR and the operating system. There's quite a bit of work required to take out a lock, block other threads, and wake them up again. If locks are used heavily, this impacts performance.

Deadlocks. When using locks there's always the possibility of a deadlock - two threads, each holding a lock, each trying to acquire the other's lock. Fortunately, this can be avoided with careful programming and structured lock-taking, as we'll see.

So, it's important to minimise where locks are used to maximise the concurrency and performance of the collection.

Implementation

As you might expect, ConcurrentDictionary is similar in basic implementation to the non-concurrent Dictionary, which I studied in a previous post. I'll be using some concepts introduced there, so I recommend you have a quick read of it.

So, if you were implementing a thread-safe dictionary, what would you do? The naive implementation is to simply have a single lock around all methods accessing the dictionary. This would work, but doesn't allow much concurrency.

Fortunately, the bucketing used by Dictionary allows a simple but effective improvement to this - one lock per bucket. This allows different threads modifying different buckets to do so in parallel. Any thread making changes to the contents of a bucket takes the lock for that bucket, ensuring those changes are thread-safe. The method that maps each bucket to a lock is the GetBucketAndLockNo method:

private void GetBucketAndLockNo(
    int hashcode, out int bucketNo, out int lockNo, int bucketCount)
{
    // the bucket number is the hashcode (without the initial sign bit)
    // modulo the number of buckets
    bucketNo = (hashcode & 0x7fffffff) % bucketCount;

    // and the lock number is the bucket number modulo the number of locks
    lockNo = bucketNo % m_locks.Length;
}

However, this does require some changes to how the buckets are implemented. The 'implicit' linked list within a single backing array used by the non-concurrent Dictionary adds a dependency between separate buckets, as every bucket uses the same backing array.
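To make the mapping concrete, here's a small worked example of my own (the sizes and the hashcode are illustrative, not taken from the article):

using System;

class BucketMappingDemo
{
    static void Main()
    {
        // Illustrative sizes: 16 buckets sharing 8 locks.
        int bucketCount = 16, lockCount = 8;
        int hashcode = 0x5f3a2a2a;   // a made-up hashcode

        // strip the sign bit, then modulo the bucket count
        int bucketNo = (hashcode & 0x7fffffff) % bucketCount;   // = 10

        // several buckets map onto each lock
        int lockNo = bucketNo % lockCount;                      // = 2

        Console.WriteLine($"bucket {bucketNo}, lock {lockNo}");
    }
}

With these sizes, buckets 2 and 10 both map to lock 2, so writers to those two buckets contend with each other, but not with writers to any other bucket.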
Instead of sharing a single backing array between buckets, ConcurrentDictionary uses a strict linked list on each bucket. This ensures that each bucket is entirely separate from all other buckets; adding or removing an item from a bucket is independent of any changes to other buckets.

Modifying the dictionary

All the operations on the dictionary follow the same basic pattern:

void AlterBucket(TKey key, ...)
{
    int bucketNo, lockNo;
1:  GetBucketAndLockNo(
        key.GetHashCode(), out bucketNo, out lockNo, m_buckets.Length);
2:  lock (m_locks[lockNo])
    {
3:      Node headNode = m_buckets[bucketNo];
4:      Mutate the node linked list as appropriate
    }
}

For example, when adding another entry to the dictionary, you would iterate through the linked list to check whether the key exists already, and add the new entry as the head node. When removing items, you would find the entry to remove (if it exists), and remove the node from the linked list. Adding, updating, and removing items all follow this pattern.

Performance issues

There is a problem we have to address at this point. If the number of buckets in the dictionary is fixed in the constructor, then the performance will degrade from O(1) to O(n) when a large number of items are added to the dictionary. As more and more items get added to the linked lists in each bucket, the lookup operations will spend most of their time traversing a linear linked list.

To fix this, the buckets array has to be resized once the number of items in each bucket has gone over a certain limit. (In ConcurrentDictionary this limit is when the size of the largest bucket is greater than the number of buckets for each lock. This check is done at the end of the TryAddInternal method.)

Resizing the bucket array and re-hashing everything affects every bucket in the collection. Therefore, this operation needs to take out every lock in the collection. Taking out multiple locks at once inevitably summons the spectre of the deadlock: two threads, each holding a lock, each trying to acquire the other's. How can we eliminate this? Simple - ensure that threads never try to 'swap' locks in this fashion. When taking out multiple locks, always take them out in the same order, and always take out all the locks you need before starting to release them.

In ConcurrentDictionary, this is controlled by the AcquireLocks, AcquireAllLocks and ReleaseLocks methods. Locks are always taken out and released in the order they are in the m_locks array, and locks are all released right at the end of the method in a finally block.

At this point, it's worth pointing out that the locks array is never re-assigned, even when the buckets array is increased in size. The number of locks is fixed in the constructor by the concurrencyLevel parameter. This simplifies programming the locks; you don't have to check if the locks array has changed or been re-assigned before taking out a lock object. And you can be sure that when a thread takes out a lock, another thread isn't going to re-assign the lock array. That would create a new series of lock objects, thus allowing another thread to ignore the existing locks (and any threads controlling them), breaking thread-safety.

Consequences of growing the array

Just because we're using locks doesn't mean that race conditions aren't a problem. We can see this by looking at the GrowTable method.
The operation of this method can be boiled down to:

private void GrowTable(Node[] buckets)
{
    try
    {
1:      Acquire first lock in the locks array

        // this causes any other thread trying to take out
        // all the locks to block, because the first lock in the array
        // is always the one taken out first

        // check if another thread has already resized the buckets array
        // while we were waiting to acquire the first lock
2:      if (buckets != m_buckets)
            return;

3:      Calculate the new size of the backing array
4:      Node[] array = new array[size];
5:      Acquire all the remaining locks
6:      Re-hash the contents of the existing buckets into array
7:      m_buckets = array;
    }
    finally
    {
8:      Release all locks
    }
}

As you can see, there's already a check for a race condition at step 2, for the case when the GrowTable method is called twice in quick succession on two separate threads. One will successfully resize the buckets array (blocking the second in the meantime); when the second thread is unblocked, it'll see that the array has already been resized and exit without doing anything.

There is another case we need to consider. Looking back at the AlterBucket method above, consider the following situation:

1. Thread 1 calls AlterBucket; step 1 is executed to get the bucket and lock numbers.
2. Thread 2 calls GrowTable and executes steps 1-5; thread 1 is blocked when it tries to take out the lock in step 2.
3. Thread 2 re-hashes everything, re-assigns the buckets array, and releases all the locks (steps 6-8).
4. Thread 1 is unblocked and continues executing, but the calculated bucket and lock numbers are no longer valid.

Between calculating the correct bucket and lock number and taking out the lock, another thread has changed where everything is. Not exactly thread-safe.

Well, a similar problem was solved in ConcurrentStack and ConcurrentQueue by storing a local copy of the state, doing the necessary calculations, then checking if that state is still valid. We can use a similar idea here:

void AlterBucket(TKey key, ...)
{
    while (true)
    {
        Node[] buckets = m_buckets;

        int bucketNo, lockNo;
        GetBucketAndLockNo(
            key.GetHashCode(), out bucketNo, out lockNo, buckets.Length);

        lock (m_locks[lockNo])
        {
            // if the state has changed, go back to the start
            if (buckets != m_buckets)
                continue;

            Node headNode = m_buckets[bucketNo];
            Mutate the node linked list as appropriate
        }
        break;
    }
}

TryGetValue and GetEnumerator

And so, finally, we get onto TryGetValue and GetEnumerator. I've left these to the end because, well, they don't actually use any locks. How can this be? Whenever you change a bucket, you need to take out the corresponding lock, yes? Indeed you do. However, it is important to note that TryGetValue and GetEnumerator don't actually change anything. Just as immutable objects are, by definition, thread-safe, read-only operations don't need to take out a lock because they don't change anything. All the lockless methods can happily iterate through the buckets and linked lists without worrying about locking anything.

However, this does put restrictions on how the other methods operate. Because there could be another thread in the middle of reading the dictionary at any time (even if a lock is taken out), the dictionary has to be in a valid state at all times. Every change to state has to be made visible to other threads in a single atomic operation (all relevant variables are marked volatile to help with this).
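As a hedged sketch of what "a single atomic operation" can look like in practice - simplified types of my own invention, not the actual ConcurrentDictionary source:

// A writer (holding the bucket's lock) builds the new node completely,
// then publishes it with one volatile reference write. A lockless reader
// sees either the old list or the new one, never a half-built node.
class Node
{
    public readonly string Key;
    public readonly int Value;
    public readonly Node Next;
    public Node(string key, int value, Node next)
    { Key = key; Value = value; Next = next; }
}

class Bucket
{
    private volatile Node head;

    // called while holding this bucket's lock
    public void AddFirst(string key, int value)
    {
        head = new Node(key, value, head);   // single atomic publish
    }

    // lockless read: walks a snapshot of the list
    public bool TryGetValue(string key, out int value)
    {
        for (Node n = head; n != null; n = n.Next)
        {
            if (n.Key == key) { value = n.Value; return true; }
        }
        value = 0;
        return false;
    }
}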
This restriction ensures that whatever the reading threads are doing, they never read the dictionary in an invalid state (e.g. items that should be in the collection temporarily removed from the linked list, or reading a node that has had its key & value removed before the node itself has been removed from the linked list).

Fortunately, all the operations needed to change the dictionary can be done in that way. Bucket resizes are made visible when the new array is assigned back to the m_buckets variable. Any additions or modifications to a node are done by creating a new node, then splicing it into the existing list using a single variable assignment. Node removals are simply done by re-assigning the node's m_next pointer.

Because the dictionary can be changed by another thread during execution of the lockless methods, the GetEnumerator method is liable to return dirty reads - changes made to the dictionary after GetEnumerator was called, but before the enumeration got to that point in the dictionary.

It's worth listing at this point which methods are lockless, and which take out all the locks in the dictionary to ensure they get a consistent view of the dictionary:

Lockless:
- TryGetValue
- GetEnumerator
- The indexer getter
- ContainsKey

Takes out every lock (lockfull?):
- Count
- IsEmpty
- Keys
- Values
- CopyTo
- ToArray

Concurrent principles

That covers the overall implementation of ConcurrentDictionary. I haven't even begun to scratch the surface of this sophisticated collection. That I leave to you. However, we've looked at enough to be able to extract some useful principles for concurrent programming:

Partitioning: when using locks, the work is partitioned into independent chunks, each with its own lock. Each partition can then be modified concurrently to other partitions.

Ordered lock-taking: when a method does need to control the entire collection, locks are taken and released in a fixed order to prevent deadlocks.

Lockless reads: read operations that don't care about dirty reads don't take out any lock; the rest of the collection is implemented so that any reading thread always has a consistent view of the collection.

That leads us to the final collection in this little series - ConcurrentBag. Lacking a non-concurrent analogy, it is quite different to any other collection in the class libraries. Prepare your thinking hats!
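Before moving on, here's a minimal sketch of the "ordered lock-taking" principle above - my own simplification with assumed names, not the real ConcurrentDictionary internals:

using System.Threading;

class StripedLocks
{
    private readonly object[] m_locks;

    public StripedLocks(int concurrencyLevel)
    {
        m_locks = new object[concurrencyLevel];
        for (int i = 0; i < m_locks.Length; i++)
            m_locks[i] = new object();
    }

    // Locks are always acquired in ascending array order, so two threads
    // can never hold locks in opposite orders and deadlock each other.
    private void AcquireAllLocks(ref int locksAcquired)
    {
        for (int i = 0; i < m_locks.Length; i++)
        {
            Monitor.Enter(m_locks[i]);
            locksAcquired++;   // counted so the finally block releases exactly what we hold
        }
    }

    private void ReleaseLocks(int locksAcquired)
    {
        for (int i = 0; i < locksAcquired; i++)
            Monitor.Exit(m_locks[i]);
    }

    // the pattern a whole-collection operation (resize, Count, ToArray) would follow
    public void GlobalOperation()
    {
        int locksAcquired = 0;
        try
        {
            AcquireAllLocks(ref locksAcquired);
            // ... it is now safe to touch every partition ...
        }
        finally
        {
            ReleaseLocks(locksAcquired);
        }
    }
}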

    Read the article
