Search Results

Search found 5644 results on 226 pages for 'somewhat confused'.

Page 214 of 226

  • Revision control for writing programming lessons

    - by Dietrich Epp
    I'd like to write a series of programming lessons that guide programmers to build a certain kind of program. After each lesson, I'd like to provide sample code that implements what that lesson covered, and the next lesson would use that code as a starting point. Right now I'm using Git to keep track of the code from lesson to lesson. Each lesson has its own branch:

        lesson1: A--B--C
                        \
        lesson2:         D--E--F
                                \
        lesson3:                 G--H--I

    However, suppose that now I want to make it easier on the Windows programmers using my lessons, so I add a Visual Studio project to lesson 1 and then merge it into lessons 2 and 3:

        lesson1: A--B--C--------------J
                        \              \
        lesson2:         D--E--F--------K
                                \        \
        lesson3:                 G--H--I--L

    And then someone points out a bug in lesson 2 that causes crashes on certain systems. (This diagram is where I am right now, and I'm having doubts about continuing along this path.)

        lesson1: A--B--C--------------J
                        \              \
        lesson2:         D--E--F--------K--M
                                \        \  \
        lesson3:                 G--H--I--L--N

    Here are the problems I imagine having:

    - If I had many lessons, and I fix something in lesson 1, am I going to have to spend fifteen minutes or more just merging that one simple change? I know I'll probably have to test all of those lessons again, but I can put that off.
    - When I make a bunch of changes to various lessons on one computer, how do I pull all of the branches at the same time?
    - If I decide to publish these lessons, I'd like a way to tag all of the branches to correspond with what I publish. I figure I'll just need to tag each branch separately, but it would be nice if there were a better way.
    - When I look at the history, I imagine becoming terribly confused about what I've done.

    Compare the diagram above to a hypothetical diagram below, where I use rebase instead of merge (and rebase has its own problems):

        lesson1: A--B--C--J
                           \
        lesson2:            D2--E2--F2--M
                                         \
        lesson3:                          G2--H2--I2

    Do any of you have experience working with a project like this? Should I consider using a different VCS, such as Darcs? (Note: it would be a real pain to use a centralized VCS, so don't suggest one of those unless the benefits are clear.) Should I consider writing plugins or extra tools for a VCS (such as a "meta tag" which tags several branches)?
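    For illustration only, one way to approximate the "meta tag" idea is a small script that applies the same release label to every lesson branch. This is a sketch, assuming the branch names above and that plain git is on the PATH; the release label "v1.0" is hypothetical:

        import subprocess

        branches = ["lesson1", "lesson2", "lesson3"]   # branch names from the question
        release = "v1.0"                               # hypothetical release label

        for branch in branches:
            # Tag the current tip of each lesson branch with a release-specific
            # tag (v1.0-lesson1, v1.0-lesson2, ...), so one published set can be
            # reconstructed later even though Git has no multi-branch tag.
            subprocess.run(["git", "tag", release + "-" + branch, branch], check=True)

    A similar loop over the branches is essentially what propagating a lesson-1 fix costs: one merge per downstream lesson, so the work grows linearly with the number of lessons.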

    Read the article

  • Python Expand Tabs Length Calculation

    - by Mithrill
    I'm confused by how the length of a string is calculated when expandtabs is used. I thought expandtabs replaces tabs with the appropriate number of spaces (with the default number of spaces per tab being 8). However, when I ran the commands using strings of varying lengths and varying numbers of tabs, the length calculation was different than I thought it would be (i.e., each tab didn't always result in the string length being increased by 8 for each instance of "\t"). Below is a detailed script output with comments explaining what I thought the result of each command should be. Would someone please explain how the length is calculated when expandtabs is used?

        IDLE 2.6.5
        >>> s = '\t'
        >>> print len(s)
        1
        >>> #the length of the string without expandtabs was one (1 tab counted as a single space), as expected.
        >>> print len(s.expandtabs())
        8
        >>> #the length of the string with expandtabs was eight (1 tab counted as eight spaces).
        >>> s = '\t\t'
        >>> print len(s)
        2
        >>> #the length of the string without expandtabs was 2 (2 tabs, each counted as a single space).
        >>> print len(s.expandtabs())
        16
        >>> #the length of the string with expandtabs was 16 (2 tabs counted as 8 spaces each).
        >>> s = 'abc\tabc'
        >>> print len(s)
        7
        >>> #the length of the string without expandtabs was seven (6 characters and 1 tab counted as a single space).
        >>> print len(s.expandtabs())
        11
        >>> #the length of the string with expandtabs was NOT 14 (6 characters and one 8-space tab).
        >>> s = 'abc\tabc\tabc'
        >>> print len(s)
        11
        >>> #the length of the string without expandtabs was 11 (9 characters and 2 tabs counted as a single space).
        >>> print len(s.expandtabs())
        19
        >>> #the length of the string with expandtabs was NOT 25 (9 characters and two 8-space tabs).
        >>>
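    A short note on the rule expandtabs appears to follow: each '\t' pads out to the next multiple of the tab size (8 by default), so the padding depends on the current column rather than always adding 8 characters. The helper below is only an illustrative sketch (it ignores the newline handling that expandtabs also does):

        def expanded_length(s, tabsize=8):
            # Each '\t' advances the column to the next multiple of tabsize;
            # every other character advances it by one.
            col = 0
            for ch in s:
                if ch == '\t':
                    col += tabsize - (col % tabsize)
                else:
                    col += 1
            return col

        for s in ['\t', '\t\t', 'abc\tabc', 'abc\tabc\tabc']:
            # Matches the IDLE session above: 8, 16, 11, 19.
            assert expanded_length(s) == len(s.expandtabs())
            print(repr(s), expanded_length(s))

    That is why 'abc\tabc' comes out at 11 rather than 14: the tab after the first 'abc' only needs 5 spaces to reach column 8.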

    Read the article

  • Help with jQuery Traversal

    - by Jack Webb-Heller
    Hi guys, I'm struggling a bit with traversing in jQuery. Here's the relevant markup:

        <ul id="sortable" class="ui-sortable">
          <li>
            <img src="http://ecx.images-amazon.com/images/I/51Vza76tCxL._SL160_.jpg">
            <div class="controls">
              <span class="move">?</span>
              <span class="delete">?</span>
            </div>
            <div class="data">
              <h1>War and Peace (Oxford World's Classics)</h1>
              <textarea>Published to coincide with the centenary of Tolstoy's death, here is an exciting new edition of one of the great literary works of world literature.</textarea>
            </div>
          </li>
          <li>
            <img src="http://ecx.images-amazon.com/images/I/51boZxm2seL._SL160_.jpg">
            <div class="controls">
              <span class="move">?</span>
              <span class="delete">?</span>
            </div>
            <div class="data">
              <h1>A Christmas Carol and Other Christmas Writings (Penguin Classics)</h1>
              <span>Optionally, write your own description in the box below.</span>
              <textarea>Dicken's Christmas writings-in a new, sumptuous, and delightful clothbound edition.</textarea>
            </div>
          </li>
        </ul>

    This is code for a jQuery UI 'Sortable' element. Here's what I want to happen. When the Delete thing is clicked ($('.delete')), I want the <li> item it's contained within to be removed. I've tried using

        $('.delete').parent().parent().remove();

    but in the case of having two items, that seems to delete both of them. I'm a bit confused by this. I also tried using closest() to find the closest li, but that didn't seem to work either. How should I best traverse the DOM in this case? Thanks! Jack

    Read the article

  • Best practice: How to use (repeat) CSS style attributes correctly?

    - by ellie
    Hi guys! As a CSS newbie I'm wondering if it's recommended by professionals to repeat specific style attributes and their not inherited but default properties for every relevant selector. For example, should I rather use

        body {background:transparent none no-repeat; border:0 none transparent; margin:0; padding:0;}
        img {background:transparent none no-repeat; border:0 none transparent; margin:0; outline:transparent none 0; padding:0;}
        div#someID {background:transparent none no-repeat; border:0 none; margin:0 auto; padding:0; text-align:left; width:720px; ...}

    or

        body {background:transparent; border:0; margin:0; padding:0;}
        img {background:transparent; border:0; margin:0; outline:0; padding:0;}
        div#someID {background:transparent; border:0; margin:0 auto; padding:0; text-align:left; width:720px; ...}

    or just what (I think) I really need:

        body {background:transparent; margin:0; padding:0;}
        img {border:0; outline:0;}
        div#someID {margin:0 auto; width:720px; ...}

    If it's best practice to go with the first or second one, what do you think about defining a class like

        .foo {background:transparent; border:0; margin:0; padding:0;}

    and then applying it to every relevant selector:

        <div id="someID" class="foo">...</div>

    Yep, now I'm totally confused... so please advise! Thanks!

    Read the article

  • What is the relationship between WebProxy & IWebProxy with respect to WebClient?

    - by Streamline
    I am creating an app (.NET 2.0) that uses WebClient to connect (downloaddata, etc) to/from a http web service. I am adding a form now to handle allowing proxy information to either be stored or set to use the defaults. I am a little confused about some things. First, some of the methods & properties available in either WebProxy or IWebProxy are not in both. What is the difference here with respect to setting up how WebClient will be have when it is called? Secondly, do I have to tell WebClient to use the proxy information if I set it using either WebProxy or IWebProxy class elsewhere? Or is it automatically inherited? Thirdly, when giving the option for the user to use the default proxy (whatever is set in IE) and using the default credentials (I assume also whatever is set in IE) are these two mutually exclusive? Or you only use default credentials when you have also used default proxy? This gets me to the whole difference between WebProxy and IWebProxy. WebRequest.DefaultProxy is a IWebPRoxy class but UseDefaultCredentials is not a method on the IWebProxy class, rather it is only on WebProxy and in turn, How to set the proxy to the WebRequest.DefautlProxy if they are two different classes? Here is my current method to read the stored form settings by the user - but I am not sure if this is correct, not enough, overkill, or just wrong because of the mix of WebProxy and IWebProxy: private WebProxy _proxyInfo = new WebProxy(); private WebProxy SetProxyInfo() { if (UseProxy) { if (UseIEProxy) { // is doing this enough to set this as default for WebClient? IWebProxy iProxy = WebRequest.DefaultWebProxy; if (UseIEProxyCredentials) { _proxyInfo.UseDefaultCredentials = true; } else { // is doing this enough to set this as default credentials for WebClient? WebRequest.DefaultWebProxy.Credentials = new NetworkCredential(ProxyUsername, ProxyPassword); } } else { // is doing this enough to set this as default for WebClient? WebRequest.DefaultWebProxy = new WebProxy(ProxyAddress, ParseLib.StringToInt(ProxyPort)); if (UseIEProxyCredentials) { _proxyInfo.UseDefaultCredentials = true; } else { WebRequest.DefaultWebProxy.Credentials = new NetworkCredential(ProxyUsername, ProxyPassword); } } } // Do I need to WebClient to absorb this returned proxy info if I didn't set or use defaults? return _proxyInfo; } Is there any reason to not just scrap storing app specific proxy information and only allow the app the ability to use the default proxy information & credentials for the logged in user? Will this ever not be enough if using HTTP? Part 2 Question: How can I test that the WebClient instance is using the proxy information or not?

    Read the article

  • Insertions into Zipper trees on XML files in Clojure

    - by ivar
    I'm confused as how to idiomatically change a xml tree accessed through clojure.contrib's zip-filter.xml. Should be trying to do this at all, or is there a better way? Say that I have some dummy xml file "itemdb.xml" like this: <itemlist> <item id="1"> <name>John</name> <desc>Works near here.</desc> </item> <item id="2"> <name>Sally</name> <desc>Owner of pet store.</desc> </item> </itemlist> And I have some code: (require '[clojure.zip :as zip] '[clojure.contrib.duck-streams :as ds] '[clojure.contrib.lazy-xml :as lxml] '[clojure.contrib.zip-filter.xml :as zf]) (def db (ref (zip/xml-zip (lxml/parse-trim (java.io.File. "itemdb.xml"))))) ;; Test that we can traverse and parse. (doall (map #(print (format "%10s: %s\n" (apply str (zf/xml-> % :name zf/text)) (apply str (zf/xml-> % :desc zf/text)))) (zf/xml-> @db :item))) ;; I assume something like this is needed to make the xml tags (defn create-item [name desc] {:tag :item :attrs {:id "3"} :contents (list {:tag :name :attrs {} :contents (list name)} {:tag :desc :attrs {} :contents (list desc)})}) (def fred-item (create-item "Fred" "Green-haired astrophysicist.")) ;; This disturbs the structure somehow (defn append-item [xmldb item] (zip/insert-right (-> xmldb zip/down zip/rightmost) item)) ;; I want to do something more like this (defn append-item2 [xmldb item] (zip/insert-right (zip/rightmost (zf/xml-> xmldb :item)) item)) (dosync (alter db append-item2 fred-item)) ;; Save this simple xml file with some added stuff. (ds/spit "appended-itemdb.xml" (with-out-str (lxml/emit (zip/root @db) :pad true))) I am unclear about how to use the clojure.zip functions appropriately in this case, and how that interacts with zip-filter. If you spot anything particularly weird in this small example, please point it out.

    Read the article

  • Flex 4: Traversing the Stage More Easily

    - by Steve
    The following is a MXML Module I am producing in Flex 4: <?xml version="1.0" encoding="utf-8"?> <mx:Module xmlns:fx="http://ns.adobe.com/mxml/2009" xmlns:s="library://ns.adobe.com/flex/spark" xmlns:mx="library://ns.adobe.com/flex/mx" creationComplete="init()" layout="absolute" width="100%" height="100%"> <fx:Declarations> <!-- Place non-visual elements (e.g., services, value objects) here --> </fx:Declarations> <fx:Style source="BMChartModule.css" /> <s:Panel id="panel" title="Benchmark Results" height="100%" width="100%" dropShadowVisible="false"> <mx:TabNavigator id="tn" height="100%" width="100%" /> </s:Panel> <fx:Script> <![CDATA[ import flash.events.Event; import mx.charts.ColumnChart; import mx.charts.effects.SeriesInterpolate; import mx.controls.Alert; import spark.components.BorderContainer; import spark.components.Button; import spark.components.Label; import spark.components.NavigatorContent; import spark.components.RadioButton; import spark.components.TextInput; import spark.layouts.*; private var xml:XML; private function init():void { var seriesInterpolate:SeriesInterpolate = new SeriesInterpolate(); seriesInterpolate.duration = 1000; xml = parentApplication.model.xml; var sectorList:XMLList = xml.SECTOR; for each(var i:XML in sectorList) { var ncLayout:HorizontalLayout = new HorizontalLayout(); var nc:NavigatorContent = new NavigatorContent(); nc.label = i.@NAME; nc.name = "NC_" + nc.label; nc.layout = ncLayout; tn.addElement(nc); var cC:ColumnChart = new ColumnChart(); cC.percentWidth = 70; cC.name = "CC"; nc.addElement(cC); var bClayout:VerticalLayout = new VerticalLayout(); var bC:BorderContainer = new BorderContainer(); bC.percentWidth = 30; bC.layout = bClayout; nc.addElement(bC); var bClabel:Label = new Label(); bClabel.percentWidth = 100; bClabel.text = "Select a graph to view it in the column chart:"; var dpList:XMLList = sectorList.(@NAME == i.@NAME).DATAPOINT; for each(var j:XML in dpList) { var rB:RadioButton = new RadioButton(); rB.groupName = "dp"; rB.label = j.@NAME; rB.addEventListener(MouseEvent.CLICK, rBclick); bC.addElement(rB); } } } private function rBclick(e:MouseEvent):void { var selectedTab:NavigatorContent = this.tn.selectedChild as NavigatorContent; var colChart:ColumnChart = selectedTab.getChildByName("CC") as ColumnChart; trace(selectedTab.getChildAt(0)); } ]]> </fx:Script> </mx:Module> I'm writing this function rBclick to redraw the column chart when a radio button is clicked. In order to do this I need to find the column chart on the stage using actionscript. I've currently got 3 lines of code in here to do this: var selectedTab:NavigatorContent = this.tn.selectedChild as NavigatorContent; var colChart:ColumnChart = selectedTab.getChildByName("CC") as ColumnChart; trace(selectedTab.getChildAt(0)); Getting to the active tab in the tabnavigator is easy enough, but then selectedTab.getChildAt(0) - which I was expecting to be the chart - is a "spark.skin.spark.SkinnableContainerSkin"...anyway, I can continue to traverse the tree using this somewhat annoying code, but I'm hoping there is an easier way. So in short, at run time I want to, with as little code as possible, identify the column chart in the active tab so I can redraw it. Any advice would be greatly appreciated.

    Read the article

  • Problem inserting JavaScript into a CKEditor Instance running inside a jQuery-UI Dialog Box

    - by PlasmaFlux
    Hello Overflowers! Here's what's going on: I'm building an app (using PHP, jQuery, etc), part of which lets users edit various bits of a web page (Header, Footer, Main Content, Sidebar) using CKEditor. It's set up so that each editable bit has an "Edit Content" button in the upper right that, on click, launches an instance of CKEditor inside a jQuery-UI Dialog box. After the user is done editing, they can click an 'Update Changes' button that passes the edited content back off to the main page and closes the Dialog/CKeditor. It's all working magnificently, save for one thing. If someone inserts any JavaScript code, wrapped in 'script' tags, using either the Insert HTML Plugin for CKEditor or by going to 'Source' in CKEditor and placing the code in the source, everything seems okay until they click the 'Update Changes' button. The JavaScript appears to be inserting properly, but when 'Update Changes' is clicked, instead of the Dialog closing and passing the script back into the div where it belongs, what I get instead is an all-white screen with just the output of the JavaScript. For example, a simple 'Hello World' script results in a white screen with the string 'Hello World' in the upper left corner; for more intricate scripts, like an API call to, say Aweber, that generates a newsletter signup form, I get an all-white screen with the resulting form from the Aweber API call perfectly rendered in the middle of the screen. One of the most confusing parts is that, on these pages, if I click 'View Source', I get absolutely nothing. Blank emptiness. Here's all my code that handles the launching of the CKEditor instance inside the jQuery-UI Dialog, and the passing of the edited data back into its associated div on click of the 'Update Changes' button: $(function() { $('.foobar_edit_button') .button() .click(function() { var beingEdited = $(this).nextAll('.foobar_editable').attr('id'); var content = $(this).nextAll('.foobar_editable').html(); $('#foobar_editor').html(content); $('#foobar_editor').dialog( { open:function() { $(this).ckeditor(function() { CKFinder.SetupCKEditor( this, '<?php echo '/foobar/lib/editor/ckfinder/'; ?>' ); }); }, close:function() { $(this).ckeditorGet().destroy(); }, autoOpen: false, width: 840, modal: true, buttons: { 'Update Changes': function() { // TODO: submit changes to server via ajax once its completed: for ( instance in CKEDITOR.instances ) CKEDITOR.instances[instance].updateElement(); var edited_content = $('#foobar_editor').html(); $('#' + beingEdited).html(edited_content); $(this).dialog('close'); } } }); $('#foobar_editor').dialog('open'); }); }); I'm all sorts of confused. If anyone can point me in the right direction, it will be greatly appreciated. Thanks!

    Read the article

  • Structuring Win32 GUI code

    - by kraf
    I wish to improve my code and file structure in larger Win32 projects with plenty of windows and controls. Currently, I tend to have one header and one source file for the entire implementation of a window or dialog. This works fine for small projects, but now it has come to the point where these implementations are starting to reach 1000-2000 lines, which is tedious to browse. A typical source file of mine looks like this: static LRESULT CALLBACK on_create(const HWND hwnd, WPARAM wp, LPARAM lp) { setup_menu(hwnd); setup_list(hwnd); setup_context_menu(hwnd); /* clip */ return 0; } static LRESULT CALLBACK on_notify(HWND hwnd, UINT msg, WPARAM wp, LPARAM lp) { const NMHDR* header = (const NMHDR*)lp; /* At this point I feel that the control's event handlers doesn't * necessarily belong in the same source file. Perhaps I could move * each control's creation code and event handlers into a separate * source file? Good practice or cause of confusion? */ switch (header->idFrom) { case IDC_WINDOW_LIST: switch (header->code) { case NM_RCLICK: return on_window_list_right_click(hwnd, wp, lp); /* clip */ } } } static LRESULT CALLBACK wndmain_proc(HWND hwnd, UINT msg, WPARAM wp, LPARAM lp) { switch (msg) { case WM_CREATE: return on_create(hwnd, wp, lp); case WM_CLOSE: return on_close(hwnd, wp, lp); case WM_NOTIFY: return on_notify(hwnd, wp, lp); /* It doesn't matter much how the window proc looks as it just forwards * events to the appropriate handler. */ /* clip */ default: return DefWindowProc(hwnd, msg, wp, lp); } } But now as the window has a lot more controls, and these controls in turn have their own message handlers, and then there's the menu click handlers, and so on... I'm getting lost, and I really need advice on how to structure this mess up in a good and sensible way. I have tried to find good open source examples of structuring Win32 code, but I just get more confused since there are hundreds of files, and within each of these files that seem GUI related, the Win32 GUI code seems so far encapsulated away. And when I finally find a CreateWindowEx statement, the window proc is nowhere to be found. Any advice on how to structure all the code while remaining sane would be greatly appreciated. Thanks! I don't wish to use any libraries or frameworks as I find the Win32 API interesting and valuable for learning. Any insight into how you structure your own GUI code could perhaps serve as inspiration.

    Read the article

  • Setting UITableViewCell button image values during scrolling

    - by Rob
    I am having an issue where I am trying to save whether a repeat image will show selected or not (it is created as a UIButton on a UITableViewCell created in IB). The issue is that since the cell gets re-used randomly, the image gets reset or set somewhere else after you start scrolling. I've looked all over and the advice was to setup an NSMutableArray to store the button's selection state and I am doing that in an array called checkRepeatState My question is: where do I put the if-then-else statement in the code to where it will actually change the button image based on if the checkRepeatState is set to 0 or 1 for the given cell that is coming back into view? Right now, the if-then-else statement I am using has absolutely no effect on the button image when it comes into view. I'm very confused at this point. Thank you ahead of time for any insight you can give!!! My code is as follows: - (UITableViewCell *)tableView:(UITableView *)tableView cellForRowAtIndexPath:(NSIndexPath *)indexPath { // set up the cell static NSString *CellIdentifier = @"PlayerCell"; PlayerCell *cell = (PlayerCell *)[tableView dequeueReusableCellWithIdentifier:CellIdentifier]; if (!cell) { [[NSBundle mainBundle] loadNibNamed:@"PlayerNibCells" owner:self options:nil]; cell = tmpCell; self.tmpCell = nil; NSLog(@"Creating a new cell"); } // Display dark and light background in alternate rows -- see tableView:willDisplayCell:forRowAtIndexPath:. cell.useDarkBackground = (indexPath.row % 2 == 0); // Configure the data for the cell. NSDictionary *dataItem = [soundCategories objectAtIndex:indexPath.row]; UILabel *label; label = (UILabel *)[cell viewWithTag:1]; label.text = [dataItem objectForKey:@"AnimalName"]; label = (UILabel *)[cell viewWithTag:2]; label.text = [dataItem objectForKey:@"Description"]; UIImageView *img; img = (UIImageView *)[cell viewWithTag:3]; img.image = [UIImage imageNamed:[dataItem objectForKey:@"Icon"]]; NSInteger row = indexPath.row; NSNumber *checkValue = [checkRepeatState objectAtIndex:row]; NSLog(@"checkValue is %d", [checkValue intValue]); // Reusing cell; make sure it has correct image based on checkRepeatState value UIButton *repeatbutton = (UIButton *)[cell viewWithTag:4]; if ([checkValue intValue] == 1) { NSLog(@"repeatbutton is selected"); [repeatbutton setImage:[UIImage imageNamed:@"repeatselected.png"] forState:UIControlStateSelected]; [repeatbutton setNeedsDisplay]; } else { NSLog(@"repeatbutton is NOT selected"); [repeatbutton setImage:[UIImage imageNamed:@"repeat.png"] forState:UIControlStateNormal]; [repeatbutton setNeedsDisplay]; } cell.accessoryType = UITableViewCellAccessoryNone; return cell; }

    Read the article

  • What happens when several Java servlet apps run on the same port?

    - by Frank
    Something strange happened to my servlets and I think I've figured out why, yet I'm more confused. I used NetBeans 6.7 to develop a PayPal IPN (Instant Payment Notification) message servlet; it listens on port 8080 by default for PayPal IPN messages. I used some sample Java code from its web site, but when it ran, only about 1 in 10 messages came through, and they looked correct. But why 1 in 10? Not 100% or none?

    So I asked some questions here and got some advice; one answer in particular pointed me to Google's App Engine, so I downloaded it and ran the demo guestbook while my IPN servlet was still running on NetBeans. The strange thing happened after I entered "appengine-java-sdk-1.3.2\bin\dev_appserver.cmd appengine-java-sdk-1.3.2\demos\guestbook\war" at the command prompt. I went to http://localhost:8080/ in my browser, expecting to see the Google demo guestbook page. No: what I saw was another servlet I developed two years ago, "Web Academy", an online course registration app. How can that happen? I never started it, and I haven't touched that project for years.

    I guess it happened because Web Academy is also listening on port 8080, so now I understand why the IPN messages only came through 1 in 10 times: another servlet was also listening on that port and could have got the messages intended for IPN, or somehow those two servlets' processes mixed up and therefore couldn't respond to PayPal properly, and failed. To verify some of my guesses, I turned off NetBeans and ran the Google guestbook again at the prompt; this time http://localhost:8080/ pointed to the demo guestbook page. My URLs look like this:

        [A] PayPal IPN : http://localhost:8080/PayPal_App/PayPal_Servlet
        [B] Web Academy : http://localhost:8080/

    So now, my questions are:

    <1> Why was the "Web Academy" servlet auto-started when I ran the PayPal servlet?
    <2> If I change the IPN listening port to 8083, would that mean I can run both of them on my PC at the same time without them affecting each other?
    <3> I still don't understand: [A] and [B] look different. If a page for [A] is refreshed, it should show the PayPal content, and another page looking at [B] should show the Web Academy content, and that's exactly what happens when I start NetBeans to run the PayPal servlet; both pages show their respective content correctly side by side without interfering with each other. How come the IPN messages couldn't get through 100% of the time?
    <4> In NetBeans, how do I assign port 8080 to servlet [A] and port 8083 to servlet [B]?
    <5> How do I turn off the auto-start of Web Academy by NetBeans?

    Read the article

  • Can a conforming C implementation #define NULL to be something wacky

    - by janks
    I'm asking because of the discussion that's been provoked in this thread: http://stackoverflow.com/questions/2597142/when-was-the-null-macro-not-0/2597232 Trying to have a serious back-and-forth discussion using comments under other people's replies is not easy or fun. So I'd like to hear what our C experts think without being restricted to 500 characters at a time. The C standard has precious few words to say about NULL and null pointer constants. There's only two relevant sections that I can find. First: 3.2.2.3 Pointers An integral constant expression with the value 0, or such an expression cast to type void * , is called a null pointer constant. If a null pointer constant is assigned to or compared for equality to a pointer, the constant is converted to a pointer of that type. Such a pointer, called a null pointer, is guaranteed to compare unequal to a pointer to any object or function. and second: 4.1.5 Common definitions <stddef.h> The macros are NULL which expands to an implementation-defined null pointer constant; The question is, can NULL expand to an implementation-defined null pointer constant that is different from the ones enumerated in 3.2.2.3? In particular, could it be defined as: #define NULL __builtin_magic_null_pointer Or even: #define NULL ((void*)-1) My reading of 3.2.2.3 is that it specifies that an integral constant expression of 0, and an integral constant expression of 0 cast to type void* must be among the forms of null pointer constant that the implementation recognizes, but that it isn't meant to be an exhaustive list. I believe that the implementation is free to recognize other source constructs as null pointer constants, so long as no other rules are broken. So for example, it is provable that #define NULL (-1) is not a legal definition, because in if (NULL) do_stuff(); do_stuff() must not be called, whereas with if (-1) do_stuff(); do_stuff() must be called; since they are equivalent, this cannot be a legal definition of NULL. But the standard says that integer-to-pointer conversions (and vice-versa) are implementation-defined, therefore it could define the conversion of -1 to a pointer as a conversion that produces a null pointer. In which case if ((void*)-1) would evaluate to false, and all would be well. So what do other people think? I'd ask for everybody to especially keep in mind the "as-if" rule described in 2.1.2.3 Program execution. It's huge and somewhat roundabout, so I won't paste it here, but it essentially says that an implementation merely has to produce the same observable side-effects as are required of the abstract machine described by the standard. It says that any optimizations, transformations, or whatever else the compiler wants to do to your program are perfectly legal so long as the observable side-effects of the program aren't changed by them. So if you are looking to prove that a particular definition of NULL cannot be legal, you'll need to come up with a program that can prove it. Either one like mine that blatantly breaks other clauses in the standard, or one that can legally detect whatever magic the compiler has to do to make the strange NULL definition work. 
    Steve Jessop found an example of a way for a program to detect that NULL isn't defined to be one of the two forms of null pointer constants in 3.2.2.3, which is to stringize the constant:

        #define stringize_helper(x) #x
        #define stringize(x) stringize_helper(x)

    Using these macros, one could puts(stringize(NULL)); and "detect" that NULL does not expand to one of the forms in 3.2.2.3. Is that enough to render other definitions illegal? I just don't know. Thanks!

    Read the article

  • What do I need to write a small game on Linux?

    - by Michas
    I want to make a simple game: 2D, single-player, without tons of animations and special effects. I am not interested in ready-to-use game engines; I want to learn to write some code in a quite universal language. I am using Linux (AMD64) and looking for something easy with a nice library for games. I do not want to mix a few languages; most of them are in fact fast enough themselves for my needs. Cross-platform would be an advantage; however, all I need is good Linux support. I have been considering a few solutions.

    Ruby
    + The language looks very nice.
    + I am going to learn Ruby.
    - I am afraid I can have problems with additional libraries.
    - This thread about game libraries for Ruby could be longer.

    SDL + C
    + It is used for games.
    + It is very easy to set up.
    + There are a lot of additional libraries.
    + It is cross-platform.
    - The solution is quite low level.
    - The language is sometimes quite hard to read.

    QT + C++
    + It is very easy to set up.
    + The standard QT libraries support everything I can possibly need.
    + It is cross-platform.
    + The documentation is good.
    - The compilation is slow.
    - The language looks horrible.
    - The size of the standard QT libraries is too big to comprehend.

    Environment of a web browser
    + I am going to learn something more about this environment.
    + It is somewhat used for games.
    + It is quite cross-platform.
    - It would be too experimental.

    Java
    + It is used for games.
    + The standard Java libraries support everything I can possibly need.
    + It is cross-platform.
    - It is quite hard to set up.
    - The size of the standard Java libraries is too big to comprehend.
    - The source code in Java could look better.
    - I think I do not want to learn Java.

    Google Go
    + I am going to learn Google Go.
    - There is a big problem with libraries.
    - The solution would be quite low level.

    Python
    + It looks like some people do games in Python, according to this thread.
    + It looks like there are probably more libraries than for Ruby.
    - The Ruby language looks better.
    - I think I do not want to learn Python.

    C++ + something else
    + It is used for games.
    + It would probably be cross-platform.
    + There are a lot of libraries.
    - I do not need C++ extensions over C.
    - Compilation could be slow; there are fast compilers for C, not for C++.

    Haskell
    + I am going to learn Haskell.
    - Many things about programming computer games look too imperative.
    - It looks like I can have some problems with libraries.
    - Compilation (GHC) looks slow.

    There is probably something more to consider. Does anyone have experience in making small games for Linux with non-mainstream solutions? Does anyone have advice for me?
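    Since Python and Ruby are on the list largely for library support, it may help to know that the usual Python answer here is pygame (an SDL wrapper). Purely as an illustration, and assuming pygame is installed, a minimal 2D game loop looks like this:

        import pygame

        pygame.init()
        screen = pygame.display.set_mode((640, 480))   # simple 2D window
        clock = pygame.time.Clock()

        running = True
        while running:
            for event in pygame.event.get():           # handle input events
                if event.type == pygame.QUIT:
                    running = False
            screen.fill((30, 30, 30))                  # draw the frame
            pygame.display.flip()
            clock.tick(60)                             # cap at roughly 60 FPS

        pygame.quit()

    Ruby's game libraries (Gosu, Rubygame) wrap the same kind of loop over SDL, so the decision mostly comes down to which language you actually want to learn.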

    Read the article

  • Linq2SQL vs NHibernate performance (have I gone mad?)

    - by HeavyWave
    I have written the following tests to compare performance of Linq2SQL and NHibernate and I find results to be somewhat strange. Mappings are straight forward and identical for both. Both are running against a live DB. Although I'm not deleting Campaigns in case of Linq, but that shouldn't affect performance by more than 10 ms. Linq: [Test] public void Test1000ReadsWritesToAgentStateLinqPrecompiled() { Stopwatch sw = new Stopwatch(); Stopwatch swIn = new Stopwatch(); sw.Start(); for (int i = 0; i < 1000; i++) { swIn.Reset(); swIn.Start(); ReadWriteAndDeleteAgentStateWithLinqPrecompiled(); swIn.Stop(); Console.WriteLine("Run ReadWriteAndDeleteAgentState: " + swIn.ElapsedMilliseconds + " ms"); } sw.Stop(); Console.WriteLine("Total Time: " + sw.ElapsedMilliseconds + " ms"); Console.WriteLine("Average time to execute queries: " + sw.ElapsedMilliseconds / 1000 + " ms"); } private static readonly Func<AgentDesktop3DataContext, int, EntityModel.CampaignDetail> GetCampaignById = CompiledQuery.Compile<AgentDesktop3DataContext, int, EntityModel.CampaignDetail>( (ctx, sessionId) => (from cd in ctx.CampaignDetails join a in ctx.AgentCampaigns on cd.CampaignDetailId equals a.CampaignDetailId where a.AgentStateId == sessionId select cd).FirstOrDefault()); private void ReadWriteAndDeleteAgentStateWithLinqPrecompiled() { int id = 0; using (var ctx = new AgentDesktop3DataContext()) { EntityModel.AgentState agentState = new EntityModel.AgentState(); var campaign = new EntityModel.CampaignDetail { CampaignName = "Test" }; var campaignDisposition = new EntityModel.CampaignDisposition { Code = "123" }; campaignDisposition.Description = "abc"; campaign.CampaignDispositions.Add(campaignDisposition); agentState.CallState = 3; campaign.AgentCampaigns.Add(new AgentCampaign { AgentState = agentState }); ctx.CampaignDetails.InsertOnSubmit(campaign); ctx.AgentStates.InsertOnSubmit(agentState); ctx.SubmitChanges(); id = agentState.AgentStateId; } using (var ctx = new AgentDesktop3DataContext()) { var dbAgentState = ctx.GetAgentStateById(id); Assert.IsNotNull(dbAgentState); Assert.AreEqual(dbAgentState.CallState, 3); var campaignDetails = GetCampaignById(ctx, id); Assert.AreEqual(campaignDetails.CampaignDispositions[0].Description, "abc"); } using (var ctx = new AgentDesktop3DataContext()) { ctx.DeleteSessionById(id); } } NHibernate (the loop is the same): private void ReadWriteAndDeleteAgentState() { var id = WriteAgentState().Id; StartNewTransaction(); var dbAgentState = agentStateRepository.Get(id); Assert.IsNotNull(dbAgentState); Assert.AreEqual(dbAgentState.CallState, 3); Assert.AreEqual(dbAgentState.Campaigns[0].Dispositions[0].Description, "abc"); var campaignId = dbAgentState.Campaigns[0].Id; agentStateRepository.Delete(dbAgentState); NHibernateSession.Current.Transaction.Commit(); Cleanup(campaignId); NHibernateSession.Current.BeginTransaction(); } Results: NHibernate: Total Time: 9469 ms Average time to execute 13 queries: 9 ms Linq: Total Time: 127200 ms Average time to execute 13 queries: 127 ms Linq lost by 13.5 times! Event with precompiled queries (both read queries are precompiled). This can't be right, although I expected NHibernate to be faster, this is just too big of a difference, considering mappings are identical and NHibernate actually executes more queries against the DB.

    Read the article

  • jcuda library usage problem

    - by user513164
    Hi, I'm very new to Java and Linux. I have some code which is taken from the jcuda examples. The code is the following:

        import jcuda.CUDA;
        import jcuda.driver.CUdevprop;
        import jcuda.driver.types.CUdevice;

        public class EnumDevices {
            public static void main(String args[]) {
                //Init CUDA Driver
                CUDA cuda = new CUDA(true);
                int count = cuda.getDeviceCount();
                System.out.println("Total number of devices: " + count);
                for (int i = 0; i < count; i++) {
                    CUdevice dev = cuda.getDevice(i);
                    String name = cuda.getDeviceName(dev);
                    System.out.println("Name: " + name);
                    int version[] = cuda.getDeviceComputeCapability(dev);
                    System.out.println("Version: " + String.format("%d.%d", version[0], version[1]));
                    CUdevprop prop = cuda.getDeviceProperties(dev);
                    System.out.println("Clock rate: " + prop.clockRate + " MHz");
                    System.out.println("Threads per block: " + prop.maxThreadsPerBlock);
                }
            }
        }

    I'm using Ubuntu as my operating system. I compiled it with the following command:

        javac -cp /home/manish.yadav/Desktop/JCuda-All-0.3.2-bin-linux-x86_64 EnumDevices

    I got the following error:

        error: Class names, 'EnumDevices', are only accepted if annotation processing is explicitly requested
        1 error

    I don't know what this error means or what I should do to compile the program, so I changed the compile command to:

        javac -cp /home/manish.yadav/Desktop/JCuda-All-0.3.2-bin-linux-x86_64 EnumDevices.java

    Then I got the following errors:

        EnumDevices.java:36: clockRate is not public in jcuda.driver.CUdevprop; cannot be accessed from outside package
        System.out.println("Clock rate: " + prop.clockRate + " MHz");
                                                ^
        EnumDevices.java:37: maxThreadsPerBlock is not public in jcuda.driver.CUdevprop; cannot be accessed from outside package
        System.out.println("Threads per block: " + prop.maxThreadsPerBlock);
                                                       ^
        2 errors

    Now I'm completely confused and I don't know what to do. How do I compile this program? How do I install the jcuda package, or how do I use it? How do I use a package which has only jar files and .so files, when the jar files don't have a manifest file? Please help me.

    Read the article

  • Is it possible to extract certain strings based on a predefined white-space count?

    - by s2xi
    So after several Advils I think I need help. I am trying to make a script that lets the user upload a .txt file. As an example, the file will look like this:

        EXT. DUNKIN' DONUTS - DAY

        Police vehicles remain in the parking lot. The determined female reporter from the courthouse steps, MELINDA FUENTES (32), interviews Comandante Chitt, who holds a napkin to his jaw, like he cut himself shaving.

                                 MELINDA
                  < Comandante Chitt, how does it feel to get shot in the face? >

                            COMANDANTE CHITT
                  < Not too different than getting shot in the arm or leg. >

                                 MELINDA
                  < Tell us what happened. >

                            COMANDANTE CHITT
                  < I parked my car. (indicates assault vehicle in donut shop) He aimed his weapon at my head. I fired seven shots. He stopped aiming his weapon at my head. >

        Melinda waits for more, but Chitt turns and walks away into the roped-off crime scene. Melinda is confused for a second, then resumes smiling.

                                 MELINDA
                  < And there you have it... A man of few words. >

    OK, so based on this, what I want to do is this: the PHP script looks at the file and counts 35 white spaces. Since all files will have the same layout and never differ in white spaces, I chose this as the best way to go. For every 35 white spaces, extract character 36 until the end of the line, then tally up $character++, so in the end the output would look like:

        -----------------------------------
        It looks like you have 2 characters in your script
        Melinda
        Commandante Chitt
        -----------------------------------

    I'm using PHP to select distinct names, strtolower() to lower-case the strings, and ucfirst() to make the first letter upper-case. That's my project. I'm at the stage where I'm going crazy trying to figure out how to count white-spaces, given that everything after that white space until the first white-space after the word IS a character name.
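    The question is about PHP, but the parsing idea is language-agnostic. Here is a rough sketch of the "fixed indent marks a character cue" approach written in Python; the 35-space indent is taken from the description above, and the file name is hypothetical:

        from collections import Counter

        CUE_INDENT = 35   # assumed fixed number of leading spaces before a name

        def character_names(path, indent=CUE_INDENT):
            names = Counter()
            with open(path) as f:
                for line in f:
                    rest = line[indent:]
                    # A cue line has exactly `indent` leading spaces followed
                    # by a non-space character: the character's name.
                    if line.startswith(' ' * indent) and rest[:1].strip():
                        names[rest.strip().title()] += 1
            return names

        counts = character_names('screenplay.txt')
        print("It looks like you have %d characters in your script" % len(counts))
        for name in counts:
            print(name)

    In PHP the equivalent test would compare the length of the line's leading run of spaces (something like strspn($line, ' ')) against the fixed indent before taking the rest of the line as the name.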

    Read the article

  • AD-DirectoryServices: .NET2.0 - Speaking architecture, approach and best practices... Suggestions?

    - by Will Marcouiller
    I've been mandated to write an application to migrate the Active Directory access models to another environment. Here's the context: I'm stuck with VB.NET 2005 and .NET Framework 2.0; The application must use the Windows authenticated user to manage AD; The objects I have to handle are Groups, Users and OrganizationalUnits; I intend to use the Façade design pattern to provider ease of use and a fully reusable code; I plan to write a factory for each of the objects managed (group, ou, user); The use of Attributes should be useful here, I guess; As everything is about the DirectoryEntry class when accessing the AD, it seems a good candidate for generic types. Obligatory features: User creates new OUs manually; User creates new group manually; User creates new user (these users are services accounts) manually; Application reads an XML file which contains the OUs, groups and users to create; Application informs the user about the OUs, groups and users that shall be created; User specifies the domain environment where to migrate the XML input file designated objects; User makes changes if needed, and launches the task operations; Application performs required by the XML input file operations against the underlying AD as specified by the user; Application informs the user upon completion. Linear features: User fetches OUs, groups, users; User changes OUs, groups, users; User deletes OUs, groups, users; The application logs AD entries and operations performed, plus errors and exceptions; Nice-to-have features: Application rollbacks operations on error or exception. I've been working for weeks now to get acquainted with the AD and the System.DirectoryServices assembly. But I don't seem to find a way to be fully satisfied with what I'm doing and always looking for better. I have studied Bret de Smet's Linq to AD on CodePlex, but then again, I can't use it as I'm stuck with .NET 2.0, so no Linq! But I've learned about Attributes, and seen that he's working with generic types as he codes a DirectorySource class to perform the operations for OUs, groups and users. I have been able to add groups to the AD; I have been able to add users to the AD; The created user is automatically disabled? I seem to get confused with the use of a LDAP path to add objects. For instance, one needs two instances of a System.DirectoryServices.DirectoryEntry class to add a group, for instance. Why this? Any suggestions? Thanks for any help, code sample, ideas, architural solution, everything!

    Read the article

  • ActiveRecord Logic Challenge - Smart Ways to Use AR Timestamp

    - by keruilin
    My question is somewhat specific to my app's issue, but the answer should be instructive in terms of use cases for association logic and the record timestamp. I have an NBA pick 'em game where I want to award badges for picking x number of games in a row correctly -- 10, 20, 30. Here are the models, attributes, and associations in play:

        User
          id

        Pick
          id
          result      # values can be 'W', 'L', 'T', or nil. nil means it hasn't resolved yet.
          resolved    # values can be true, false, or nil.
          game_time
          created_at
          # Note: there are cases where a pick's result field and resolved field
          # will always be nil. Perhaps the game was cancelled.

        Badge
          id

        Award
          id
          user_id
          badge_id
          created_at

        User has many awards. User has many picks. Pick belongs to user.
        Badge has many awards. Award belongs to user. Award belongs to badge.

    One of the important rules here to capture in the code is that while a user can be awarded multiple streak badges (e.g., a user can win multiple 10-streak badges), the user CAN'T be awarded another badge for consecutive winning picks that were previously granted an award badge. One way to think of this is that all the dates of the winning picks must come after the date that the streak badge was awarded.

    For example, let's pretend that a user made 13 winning picks from May 5 to May 8, with the 10th winning pick occurring on May 7 and the last 3 on May 8. The user would be awarded a 10-streak badge on May 7. Now if the user makes another winning pick on May 9, the code must recognize that the user only has a streak of 4 winning picks, not 14, because the user already received an award for the first 10. Now let's assume that the user makes 6 more winning picks. In this case, the code must recognize that all winning picks since May 5 are eligible for a 20-streak badge award, and make the award.

    Another important rule is that when looking at a winning streak, we don't care about the game time, but rather when the pick was made (created_at). For example, let's say that Team A plays Team B on Sat. And Team C plays Team D on Sun. If the user picks Team C to beat Team D on Thurs, and Team A to beat Team C on Fri, and Team A wins on Sat, but Team C loses on Sun, then the user has a losing streak of 1.

    So when must the streak-check kick in? As soon as a pick is a win. If it's a loss or tie, there's no point in checking. One more note: if the pick is not resolved (false) and the result is nil, that means the game was postponed and must be factored out.

    With all that said, what is the most efficient, effective and lean way to determine whether a user has a 10-, 20- or 30-win streak?
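    For illustration, the core check can be reduced to a single pass over the picks made since the last streak award. This is only a language-agnostic sketch (written in Python rather than Ruby/ActiveRecord), implementing the "winning picks must post-date the last award" rule described above; already_awarded and award_badge are hypothetical helpers:

        def current_streak(picks, last_award_at=None):
            # picks: (created_at, result) tuples ordered by created_at, where
            # result is 'W', 'L', 'T', or None (unresolved or postponed).
            streak = 0
            for created_at, result in picks:
                if last_award_at is not None and created_at <= last_award_at:
                    continue          # wins already "spent" on an earlier badge
                if result is None:
                    continue          # postponed or unresolved: factored out
                if result == 'W':
                    streak += 1
                else:
                    streak = 0        # a loss or tie resets the run
            return streak

        # Award check, run only when a pick resolves as a win:
        # for threshold in (10, 20, 30):
        #     if current_streak(picks, last_award_at) >= threshold and not already_awarded(threshold):
        #         award_badge(threshold)

    In ActiveRecord terms the input would be the user's resolved picks ordered by created_at, scoped to those created after the user's most recent streak award.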

    Read the article

  • Postgresql: Implicit lock acquisition from foreign-key constraint evaluation

    - by fennec
    So, I'm being confused about foreign key constraint handling in Postgresql. (version 8.4.4, for what it's worth). We've got a couple of tables, mildly anonymized below: device: (id, blah, blah, blah, blah, blah x 50)… primary key on id whooooole bunch of other junk device_foo: (id, device_id, left, right) Foreign key (device_id) references device(id) on delete cascade; primary key on id btree index on 'left' and 'right' So I set out with two database windows to run some queries. db1> begin; lock table device in exclusive mode; db2> begin; update device_foo set left = left + 1; The db2 connection blocks. It seems odd to me that an update of the 'left' column on device_stuff should be affected by activity on the device table. But it is. In fact, if I go back to db1: db1> select * from device_stuff for update; *** deadlock occurs *** The pgsql log has the following: blah blah blah deadlock blah. CONTEXT: SQL statement "SELECT 1 FROM ONLY "public"."device" x WHERE "id" OPERATOR(pg_catalog.=) $1 FOR SHARE OF X: update device_foo set left = left + 1; I suppose I've got two issues: the first is that I don't understand the precise mechanism by which this sort of locking occurs. I have got a couple of useful queries to query pg_locks to see what sort of locks a statement invokes, but I haven't been able to observe this particular sort of locking when I run the update device_foo command in isolation. (Perhaps I'm doing something wrong, though.) I also can't find any documentation on the lock acquisition behavior of foreign-key constraint checks. All I have is a log message. Am I to infer from this that any change to a row will acquire an update lock on all the tables which it's foreign-keyed against? The second issue is that I'd like to find some way to make it not happen like that. I'm ending up with occasional deadlocks in the actual application. I'd like to be able to run big update statements that impact all rows on device_foo without acquiring a big lock on the device table. (There's a lot of access going on in the device table, and it's kind of an expensive lock to get.)

    Read the article

  • Another C datatypes question

    - by b-gen-jack-o-neill
    Hello. Well, I completely get the most basic datatypes of C, like short, int, long and float; to be exact, all the numerical types. These types need to be known in order to perform the right operations with the right numbers, for example to use the FPU to add two float numbers. So the compiler must know what the type is. But when it comes to characters I am a little bit off. I know that the basic C datatype char is there for ASCII character coding. But what I don't know is why you even need another datatype for characters. Why could you not just use a 1-byte integer value to store an ASCII character? If you call printf, you specify the datatype in the call, so you could tell printf that the integer represents an ASCII character. I don't know how cout resolves the datatype, but I guess you could just specify it somehow.

    Another thing is, when you want to use Unicode, you must use the datatype wchar. But what if I would like to use some other coding, for example ISO or a Windows code page, instead of UTF? Because wchar encodes characters as UTF-16 or UTF-32 (I read it's compiler specific). And what if I wanted to use, for example, some imaginary new 8-byte text coding? What datatype should I use for it? I am actually pretty confused by this, because I always expected that if I want to use UTF-32 instead of ASCII, I just tell the compiler "get the UTF-32 value of the character I typed and save it into a 4-char field." I thought that text coding was to be dealt with at the end, by the print function for example; that I just need to specify the coding for the compiler to use, since Windows doesn't use ASCII in Win32 apps. I guess the C compiler must convert the char I typed to ASCII from whatever the type is that Windows sends to the C editor.

    And the last thing is, what if I want to use, for example, a 25-byte integer for some high-precision math operations? C has no define-it-yourself datatype. Yes, I know that this would be difficult, since all the math operations would need to be changed because the CPU cannot add 25-byte numbers together. But is there a way to do it? Or is there some math library for it? What if I want to compute Pi to 1000000000000000 digits? :)

    I know my question is pretty long, but I just wanted to explain my thoughts the best I can in English; since it's not my native language, it is difficult. And I believe there is a simple answer to my question(s), something I missed that explains everything. I have read a lot about text coding and C tutorials, but nothing about this. Thank you for your time.

    Read the article

  • Silverlight? WPF? or Windows Form?

    - by Amit
    After Silverlight 4.0 has been released with new WPF, I am kind of confused with these technologies: Silverlight? WPF? Windows Form? The main motive that we want to achieve for BIG business project is following: Performance Security And platform independent** If I consider all above three points then only Silverlight is the option as I don’t want people buying emulator on MacOS for WPF or Windows Form. Now how good the Silverlight is for Business applications, I was completely against when Silverlight 2.0 was in the market but now it is Silverlight 4.0 and they have provided many new features (but still basics) that is required in any challenging business applications. Comparing Silverlight and WPF -* Silverlight and WPF are very new technology and if I'd to compare from these two then I'd prefer WPF because it can be considered stable and mature. But it is not same as Windows Form. -* If I go with Silverlight then I am sure about keep updating to the latest version of Silverlight. I remembered when we were developing software for version 2.0 then we'd to create our own framework with dynamic loading DLL, and then Navigation concept. But everything was got changed once Silverlight 3.0 came. I don't want this to be happening with this new product. -* If we go with WPF then we don't get the platform independence. Now, why not we just focus on making WPF and then move to Silverlght. As someone (Tim?) from Microsoft has said that the idea is to make Silverlight as close as WPF. But if that is the case then why XAML structure is different; I will not be convinced with by saying that .Net framework for SL is too small.. well the difference is coming from the namespace ? I was searching on this subject and found "Microsoft WPF-Silverlight Comparison Whitepaper v1.1.pdf". This guide is very good that gives you ins-outs about how can we build common apps that runs on both. But again, it is comparing Silverlight 2 and not 4. I am sure many architect/ developers/ project managers must be facing similar kind of questions in their premises and wants to initiate this discussion, if it has not been :). We've still got 2 weeks to make this decision, so I'm expecting everyone to participate, gurus?

    Read the article

  • Why does extend() engage in bizarre behaviour when passed the same list twice?

    - by intuited
    I'm pretty confused by one of the subtleties of the vimscript extend() function. If you use it to extend a list with another list, it does pretty much what you'd expect, which is to insert the second list into the first list at the index given by the third parameter:

        let list1 = [1,2,3,4,5,6] | echo extend(list1,[1,2,3,4,5,6],5)
        " [1, 2, 3, 4, 5, 1, 2, 3, 4, 5, 6, 6]

    However if you give it the same list twice it starts tripping out a bit.

        let list1 = [1,2,3,4,5,6] | echo extend(list1,list1,0)
        " [1, 2, 3, 4, 5, 6, 1, 2, 3, 4, 5, 6]
        let list1 = [1,2,3,4,5,6] | echo extend(list1,list1,1)
        " [1, 1, 1, 1, 1, 1, 1, 2, 3, 4, 5, 6]
        let list1 = [1,2,3,4,5,6] | echo extend(list1,list1,2)
        " [1, 2, 1, 2, 1, 2, 1, 2, 3, 4, 5, 6]
        let list1 = [1,2,3,4,5,6] | echo extend(list1,list1,3)
        " [1, 2, 3, 1, 2, 3, 1, 2, 3, 4, 5, 6]
        let list1 = [1,2,3,4,5,6] | echo extend(list1,list1,4)
        " [1, 2, 3, 4, 1, 2, 3, 4, 1, 2, 5, 6]
        let list1 = [1,2,3,4,5,6] | echo extend(list1,list1,5)
        " [1, 2, 3, 4, 5, 1, 2, 3, 4, 5, 1, 6]
        let list1 = [1,2,3,4,5,6] | echo extend(list1,list1,6)
        " [1, 2, 3, 4, 5, 6, 1, 2, 3, 4, 5, 6]

    Extra-confusingly, this behaviour applies when the list is referenced with two different variables:

        let list1 = [1,2,3,4,5,6] | let list2 = list1 | echo extend(list1,list2,4)
        " [1, 2, 3, 4, 1, 2, 3, 4, 1, 2, 5, 6]

    This is totally bizarre to me. I can't fathom a use for this functionality, and it seems like it would be really easy to invoke it by accident when you just wanted to insert one list into another and didn't realize that the variables were referencing the same array. The documentation says the following:

        If they are |Lists|: Append {expr2} to {expr1}.
        If {expr3} is given insert the items of {expr2} before item {expr3}
        in {expr1}. When {expr3} is zero insert before the first item.
        When {expr3} is equal to len({expr1}) then {expr2} is appended.
        Examples:
            :echo sort(extend(mylist, [7, 5]))
            :call extend(mylist, [2, 3], 1)
        When {expr1} is the same List as {expr2} then the number of items
        copied is equal to the original length of the List. E.g., when
        {expr3} is 1 you get N new copies of the first item (where N is
        the original length of the List).

    Does this make sense in a way that I'm not getting, or is it just an eccentricity?
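    One plausible mental model, offered only as an illustration (written in Python rather than vimscript), is that extend() copies the original-length number of items one at a time into the destination; when source and destination are the same list, the items inserted earlier shift what the later reads pick up, which reproduces the outputs above:

        def extend_like_vim(lst, other, idx):
            # Copy len(other)-at-call-time items, one by one, inserting them
            # into lst starting at idx. When `other` is lst itself, each insert
            # shifts the not-yet-copied items, producing the repeated prefix.
            n = len(other)
            for i in range(n):
                lst.insert(idx + i, other[i])   # other[i] is re-read after earlier inserts
            return lst

        a = [1, 2, 3, 4, 5, 6]
        print(extend_like_vim(a, a, 1))
        # [1, 1, 1, 1, 1, 1, 1, 2, 3, 4, 5, 6]  -- matches extend(list1, list1, 1)

        b = [1, 2, 3, 4, 5, 6]
        print(extend_like_vim(b, b, 2))
        # [1, 2, 1, 2, 1, 2, 1, 2, 3, 4, 5, 6]  -- matches extend(list1, list1, 2)

    The practical workaround is to pass a copy, e.g. extend(list1, copy(list1), 1) in vimscript, which behaves like the first, unsurprising example.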

    Read the article

  • How to initiate chatting between two clients and two clients only, using applets and servlets?

    - by mithun1538
    Hello everyone, I first need to apologize for my earlier questions. (You can check my profile for them)They seemed to ask more questions than give answers. Hence, I am laying down the actual question that started all them absurd questions. I am trying to design a chat applet. Till now, I have coded the applet, servlet and communication between the applet and the servlet. The code in the servlet side is such that I was able to establish chatting between clients using the applets, but the code was more like a broadcast all feature, i.e. all clients would be chatting with each other. That was my first objective when I started designing the chat applet. The second step is chatting between only two specific users, much like any other chat application we have. So this was my idea for it: I create an instance of the servlet that has the 'broadcast-all' code. I then pass the address of this instance to the respective clients. 2 client applets use the address to then chat. Technically the code is 'broadcast-all', but since only 2 clients are connected to it, it gives the chatting between two clients feature. Thus, groups of 2 clients have different instances of the same servlet, and each instance handles chatting between two clients at a max. However, as predicted, the idea didn't materialize! I tried to create an instance of the servlet but the only solution for that was using sessions on the servlet side, and I don't know how to use this session for later communications. I then tried to modify my broadcast-all code. In that code, I was using classes that implemented Observer and Observable interfaces. So the next idea that I got was: Create a new object of the Observable class(say class_1). This object be common to 2 clients. 2 clients that wish to chat will use same object of the class_1. 2 other clients will use a different object of class_1. But the problem here lies with the class that implements the Observer interface(say class_2). Since this has observers monitoring the same type of class, namely class_1, how do I establish an observer monitoring one object of class_1 and another observer monitoring another object of the same class class_1 (Because notifyObservers() would notify all the observers and I can't assign a particular observer to a particular object)? I first decided to ask individual problems, like how to create instances of servlets, using objects of observable and observer and so on in stackoverflow... but I got confused even more. Can anyone give me an idea how to establish chatting between two clients only?(I am using Http and not sockets or RMI). Regards, Mithun. P.S. Thanks to all who replied to my previous (absurd) queries. I should have stated the purpose earlier so that you guys could help me better.
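    At the conceptual level the fix is to key each conversation by the pair of users instead of broadcasting to every observer. Purely as an illustration (sketched in Python rather than Java servlets, with all names hypothetical), the server-side state could look like this:

        from collections import defaultdict, deque

        # One message queue per unordered pair of users: the "one conversation
        # object per pair" idea, instead of a single broadcast-to-all Observable.
        conversations = defaultdict(deque)

        def room_key(user_a, user_b):
            return tuple(sorted((user_a, user_b)))

        def send(sender, recipient, text):
            conversations[room_key(sender, recipient)].append((sender, text))

        def poll(user_a, user_b):
            # What an HTTP polling request from either client would return.
            queue = conversations[room_key(user_a, user_b)]
            messages = list(queue)
            queue.clear()
            return messages

        send("mithun", "alice", "hello")
        print(poll("mithun", "alice"))   # [('mithun', 'hello')]

    In the servlet version, the same idea would be a Map from a user-pair key to its own conversation object, so notifyObservers() only ever reaches the two observers registered on that object rather than every connected client.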

    Read the article

  • Troubles with list "dropdowns" and which list item gets the dropdown

    - by Andrew
    I'm working on a project for an MMO "guild" that gives members of the guild randomly generated tasks for the game. They can "block" three tasks from being assigned. The lists will look something like this: <ul> <li class="blocked">Task that is blocked</li> <li class="blocked-open">Click to block a task</li> <li class="blocked-open">Click to block a task</li> </ul> The blocked-open class means they haven't chosen a task to block yet. The blocked task means they've already blocked a task. When they click the list item, I want this to appear: <ul class="tasks-dropdown no-display"> <li><h1>Click a Task to Block</h1></li> <ul class="task-dropdown-inner"> <?php //output all tasks foreach($tasks as $task) { echo '<li class="blocked-option"><span id="'.$task.'">'.$task.'</span></li>'; } ?> <br class="clear" /> </ul> </ul> I don't quite know how, when the user clicks the .blocked-open line-item, to show that dropdown under only the one they clicked. My jQuery looked like this before I became confused. $("li.blocked-open").click(function() { $("ul.no-display").slideToggle("900"); }); $(".blocked-option span").click(function() { var task = $(this).attr('id'); alert("You have blocked: " + task); location.reload(true); }); I tested it by putting the dropdown under a line item in the code, and it worked fine, but when I have more than one dropdown in the code, clicking on one line item toggles all the dropdowns. I'm not sure what to do. :-p.

    Read the article

  • Is there a path of least resistance that a newcomer to graphics-technology-adoption can take at this point in the .NET graphics world?

    - by Rao
    For the past 5 months or so, I've spent time learning C# using Andrew Troelsen's book and getting familiar with stuff in the .NET 4 stack... bits of ADO.NET, EF4 and a pinch of WCF to taste. I'm really interested in graphics development (not for games though), which is why I chose to go the .NET route when I decided choose from either Java or .NET to learn... since I heard about WPF and saw some sexy screenshots and all. I'm even almost done with the 4 WPF chapters in Troelsen's book. Now, all of a sudden I saw some post on a forum about how "WPF was dead" in the face of something called Silverlight. I searched more and saw all the confusion going on at present... even stuff like "Silverlight is dead too!" wrt HTML5. From what I gather, we are in a delicate period of time that will eventually decide which technology will stabilize, right? Even so, as someone new moving into UI & graphics development via .NET, I wish I could get some guidance from people more experienced people. Maybe I'm reading too much? Maybe I have missed some pieces of information? Maybe a path exists that minimizes tears of blood? In any case, here is a sample vomiting of my thoughts on which I'd appreciate some clarification or assurance or spanking: My present interest lies in desktop development. But on graduating from college, I wish to market myself as a .NET developer. The industry seems to be drooling for web stuff. Can Silverlight do both equally well? (I see on searches that SL works "out of browser"). I have two fair-sized hobby projects planned that will have hawt UIs with lots of drag n drop, sliding animations etc. These are intended to be desktop apps that will use reflection, database stuff using EF4, networking over LAN, reading-writing of files... does this affect which graphics technology can be used? At some laaaater point, if I become interested in doing a bit of 3D stuff in .NET, will that affect which technologies can be used? Or what if I look up to the heavens, stick out my middle finger, and do something crazy like go learn HTML5 even though my knowledge of it can be encapsulated in 2 sentences? Sorry I seem confused so much, I just want to know if there's a path of least resistance that a newcomer to graphics-technology-adoption can take at this point in the graphics world.

    Read the article
