Search Results

Search found 14780 results on 592 pages for 'low level'.

Page 546/592

  • Loading external pngs into an AS2 swf that is loaded into an AS3 swf wrapper

    - by James Fassett
    I have a wrapper SWF that loads a series of AS2 movies, and each AS2 movie loads a series of .png files:

        AS3_wrapper.swf
        |-> AS2_1.swf
        |   |-> image_1.png
        |   |-> image_2.png
        |-> AS2_2.swf
            |-> image_1.png
            |-> image_2.png

    Inside the AS2 I listen for the load of the PNGs using onLoadInit and update my UI. This works fine for the first AS2 SWF, but when I load the second AS2 SWF, onLoadInit isn't triggered for the PNGs. My guess was that the images were sitting in a cache or something like that, so I put a random string on the end of the request to try to avoid the cache, but that doesn't seem to work. The code in the AS2 looks roughly like this:

        var flagLoader:MovieClipLoader = new MovieClipLoader();
        var listener:Object = new Object();
        listener.onLoadInit = Delegate.create(this, handleImageLoad);
        flagLoader.addListener(listener);

        var row:MovieClip = frame1["row" + (numLoaded + 1)];
        flagLoader.loadClip(predictionData[numLoaded].flag + "?r=" + Math.random(), row.flag);

    I'm making sure to load only one image at a time (I've read anecdotal evidence that loading more than one thing at a time can confuse the MovieClipLoader). For the first AS2 file everything works great; when I load the second AS2 file, handleImageLoad never gets called.

    Update: Even more perplexing, if I reload the first AS2 movie (after the second AS2 movie fails to load its images), the first movie loads its images fine again.

    Update 2: After trying to change from a MovieClipLoader to polling (as was helpfully suggested), I have found some more relevant evidence. When I load the first AS2 file and trace from the top-level clip, it prints _root. The second AS2 file, when loaded, traces the same _root. This led me to check whether they clash on names, and they do: both have a child called frame. The first one traces as _root.frame, as expected; the second AS2 file traces as _level0.instance3.instance118.instance111.frame. I'm guessing this is related to the problem. Flash keeps the _root of the two files the same, but it changes the locations of their children (for subsequently loaded files that have children with the same names). So either the onLoad is going to the wrong clip, or the events about it loading are.
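
    Given that the trace output points at a _root collision between the two loaded SWFs, one workaround worth trying is AS2's _lockroot flag. This is a hedged suggestion rather than anything from the original post:

        // First frame of each loaded AS2 SWF (untested sketch):
        // with _lockroot set, _root inside this SWF resolves to its own
        // main timeline instead of the shared host root, so same-named
        // children in sibling SWFs no longer shadow each other.
        this._lockroot = true;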


  • WCF Duplex Interaction with Web Server

    - by Mark Struzinski
    Here is my scenario, and it is causing us a considerable amount of grief at the moment. We have a vendor web service that provides base-level telephony functionality. This service has a SOAP API, which we are leveraging to build a custom UI that is integrated into our in-house web apps. The API functions on two levels: you make standard client calls into the service to initiate actions, such as Login, Place Call, Hang Up, etc., and on a different thread the service sends events back to the client to alert the user of things occurring on the system (agent successfully logged in, call was disconnected, etc.).

    I implemented a WCF service to sit between the web server and the vendor service. This WCF service operates in duplex mode, establishing a two-way connection with the web server. The web server makes outbound calls to the WCF service, which routes them to the vendor's web service. Events received back at the WCF service are passed on to the web server via a callback channel on the WCF client.

    As events are received on the web server, they are placed into a hash table keyed by user name, with a .NET queue as the value holding that user's events; each event is enqueued to the agent who owns it. On a 2-second interval, the web page polls the web server via an AJAX request to get new events for the logged-in user. It hits the hash table for the user key, dequeues any events present, and serializes them back to the web page. From there, they are processed in order and appropriate messages are displayed to the user.

    This implementation performs well in a single-user scenario. The second I put more than one user on the system, I start getting frequent timeouts with the following CommunicationException:

        A connection attempt failed because the connected party did not
        properly respond after a period of time, or established connection
        failed because connected host has failed to respond

    We are running Windows Server 2008 R2 on both servers. Both the web app and the WCF service run on .NET 3.5. The WCF service runs under the net.tcp protocol in duplex mode, and the web app is ASP.NET MVC 2. Has anyone dealt with anything like this scenario? Is there a more efficient way (or a widely accepted pattern) to implement this?
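
    Two things tend to cause exactly this symptom in duplex net.tcp setups and are worth ruling out first (hedged suggestions with illustrative names, not the actual contracts): a callback operation that is not one-way blocks the service until the client finishes processing, and the default ConcurrencyMode can deadlock a channel that is called back into mid-call.

        // Hedged sketch: one-way callbacks return immediately instead of
        // holding the channel open while the web server handles the event.
        public interface IAgentEvents
        {
            [OperationContract(IsOneWay = true)]
            void OnAgentEvent(string userName, string eventData);
        }

        // Reentrant (or Multiple, with explicit locking) avoids the classic
        // duplex deadlock when a callback arrives during an outbound call.
        [ServiceBehavior(ConcurrencyMode = ConcurrencyMode.Reentrant)]
        public class TelephonyRelayService : ITelephonyRelay { /* ... */ }

    Separately, the shared hash table of per-user queues needs a lock in .NET 3.5 (there is no ConcurrentDictionary yet), since two users now touch it concurrently.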


  • Binary Search Help

    - by aloh
    Hi, for a project I need to implement a binary search that allows duplicates: I have to get all the index values that match my target. I've thought about doing it this way if a duplicate turns out to be in the middle. Say the target is G and there is the following sorted array:

        B, D, E, F, G, G, G, G, G, G, Q, R, S, S, Z

    I get the mid, which is 7. Since there are target matches on both sides, and I need all the target matches, I thought a good way to get them all would be to check whether mid + 1 has the same value. If it does, keep moving mid to the right until it doesn't, so it would turn out like this:

        B, D, E, F, G, G, G, G, G, G (MID), Q, R, S, S, Z

    Then I would count from 0 to mid, counting up the target matches and storing their indexes into an array to return. That is how I was thinking of doing it if the mid is a match the first time and the duplicates happen to sit on both sides of it. Now, what if it isn't a match the first time? For example:

        B, D, E, F, G, G, J, K, L, O, Q, R, S, S, Z

    As normal, it would grab the mid, then call binary search from first to mid - 1:

        B, D, E, F, G, G, J

    Since G is greater than F, call binary search from mid + 1 to last:

        G, G, J

    The mid is a match. Since it is a match, search from mid + 1 to last with a for loop, count up the number of matches, store the match indexes into an array, and return it. Is this a good way for the binary search to grab all duplicates? Please let me know if you see problems in my algorithm, and give hints/suggestions if you have any. The only problem I see is that if all the elements were my target, I would basically be searching the whole array; but then again, in that case I still would need to get all the duplicates. Thank you. BTW, my instructor said we cannot use Vectors, Hash or anything else; he wants us to stay at the array level and get used to using and manipulating arrays.
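
    For comparison, a commonly taught alternative that stays strictly at the array level (a hedged sketch in Java; the char element type is just for illustration): run two bounded binary searches, one biased left and one biased right, and every index between the two results is a match, in O(log n) overall.

        static int firstIndex(char[] a, char target) {
            int lo = 0, hi = a.length - 1, first = -1;
            while (lo <= hi) {
                int mid = lo + (hi - lo) / 2;
                if (a[mid] == target) first = mid;   // remember it, keep looking left
                if (a[mid] >= target) hi = mid - 1;
                else                  lo = mid + 1;
            }
            return first;
        }

        static int lastIndex(char[] a, char target) {
            int lo = 0, hi = a.length - 1, last = -1;
            while (lo <= hi) {
                int mid = lo + (hi - lo) / 2;
                if (a[mid] == target) last = mid;    // remember it, keep looking right
                if (a[mid] <= target) lo = mid + 1;
                else                  hi = mid - 1;
            }
            return last;
        }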


  • Avoiding Redundancies in XML documents

    - by MarceloRamires
    I was working with a certain XML where there were no redundancies:

        <person>
            <eye>
                <eye_info>
                    <eye_color>blue</eye_color>
                </eye_info>
            </eye>
            <hair>
                <hair_info>
                    <hair_color>blue</hair_color>
                </hair_info>
            </hair>
        </person>

    As you can see, the sub-tag eye_color references eye in its name, so there were no ambiguities to worry about; I could get the eye color in a single line after loading the XML into a dataset:

        dataset.ReadXml(path);
        value = dataset.Tables("eye_info").Rows(0)("eye_color");

    I do realise it's not the smartest way of doing it, and the situation I'm having now wasn't unforeseen. Now, let's say I have to read XMLs in this format:

        <person>
            <eye>
                <info>
                    <color>blue</color>
                </info>
            </eye>
            <hair>
                <info>
                    <color>blue</color>
                </info>
            </hair>
        </person>

    So if I try to call it like this:

        dataset.ReadXml(path);
        value = dataset.Tables("info").Rows(0)("color");

    there will be a redundancy, because with my previous method I could only go one level up to identify a single field, and here the 'disambiguator' is three levels above. Is there a practical way to reach a single field unambiguously, given all (or at least a few) of the levels above it?
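
    One practical option (a hedged sketch, VB.NET-flavoured to match the snippets above) is to skip the DataSet table inference for this lookup and address the field by its full path with XPath, which carries the disambiguating ancestors for free:

        ' Hedged sketch: the full path names the eye/hair ancestor explicitly,
        ' so identically named info/color nodes can no longer collide.
        Dim doc As New System.Xml.XmlDocument()
        doc.Load(path)
        Dim value As String = doc.SelectSingleNode("/person/eye/info/color").InnerText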


  • How do I use IImgCtx to load an image with an alpha channel?

    - by fret
    I have some working code that uses IImgCtx to load images, but I can't work out how to get at the alpha channel. Images like .gifs and .pngs have transparent pixels, but using anything other than a 24-bit bitmap as a drawing surface doesn't work. For reference on the interface: http://www.codeproject.com/KB/graphics/JianImgCtxDecoder.aspx

    My code looks like this:

        IImgCtx *Ctx = 0;
        HRESULT hr = CoCreateInstance(CLSID_IImgCtx, NULL, CLSCTX_INPROC_SERVER, IID_IImgCtx, (LPVOID*)&Ctx);
        if (SUCCEEDED(hr))
        {
            GVariant Fn = Name;
            hr = Ctx->Load(Fn.WStr(), 0);
            if (SUCCEEDED(hr))
            {
                SIZE Size = { -1, -1 };
                ULONG State = 0;
                while (true)
                {
                    hr = Ctx->GetStateInfo(&State, &Size, false);
                    if (SUCCEEDED(hr))
                    {
                        if ((State & IMGLOAD_COMPLETE) || (State & IMGLOAD_STOPPED) || (State & IMGLOAD_ERROR))
                            break;
                        else
                            LgiSleep(1);
                    }
                    else break;
                }
                if (Size.cx > 0 && Size.cy > 0 && pDC.Reset(new GMemDC))
                {
                    if (pDC->Create(Size.cx, Size.cy, 32))
                    {
                        HDC hDC = pDC->StartDC();
                        if (hDC)
                        {
                            RECT rc = { 0, 0, pDC->X(), pDC->Y() };
                            Ctx->Draw(hDC, &rc);
                            pDC->EndDC();
                        }
                    }
                    else pDC.Reset();
                }
            }
            Ctx->Release();
        }

    Here "StartDC" basically wraps CreateCompatibleDC(NULL) and "EndDC" wraps DeleteDC, with appropriate SelectObjects for the HBITMAPs etc., and pDC->Create(x, y, bit_depth) calls CreateDIBSection(...DIB_RGB_COLORS...). So it works if I create a 24 bits/pixel bitmap, but there is no alpha to speak of, and it leaves a 32 bits/pixel bitmap blank. Now, this interface apparently is used by Internet Explorer to load images, and obviously THAT supports transparency, so I believe it's possible to get some level of alpha out of the interface. The question is how? (I also have fallback code that will call libpng/libjpeg/my .gif loader etc.)
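
    Since IImgCtx only seems to compose onto an opaque DC, one approach that can recover alpha without any private knowledge of the interface is to draw the image twice, once over a black-filled surface and once over a white-filled one, and derive the alpha from the per-pixel difference. This is a hedged sketch of that technique; blackBits/whiteBits/outBits are hypothetical 32bpp BGRA pixel buffers from the CreateDIBSection surfaces after Ctx->Draw():

        // For a pixel with alpha a (0..255) composed over background b:
        //   shown = a*src/255 + (255 - a)*b/255
        // so overWhite - overBlack == 255 - a in each channel.
        for (int i = 0; i < width * height; i++)
        {
            BYTE overBlack = blackBits[i * 4 + 2];      // red, drawn over black
            BYTE overWhite = whiteBits[i * 4 + 2];      // red, drawn over white
            BYTE alpha = (BYTE)(255 - (overWhite - overBlack));
            outBits[i * 4 + 0] = blackBits[i * 4 + 0];  // premultiplied blue
            outBits[i * 4 + 1] = blackBits[i * 4 + 1];  // premultiplied green
            outBits[i * 4 + 2] = blackBits[i * 4 + 2];  // premultiplied red
            outBits[i * 4 + 3] = alpha;
        }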


  • Writing reports with Perl

    - by georgemp
    Hi, I am trying to write out multiple report files using Perl. Each file has the same structure, but with different data. My basic code looks something like this (a high-level sketch of the real thing):

        #begin code
        our $log_fh;
        open $log_fh, ">" . $logfile;

        our $rep;

        if (multipleReports) {
            while (@reports) {
                printReport($report[0]);
            }
        }

        sub printReport {
            open $rep, ">" . $_[0];
            printHeader();
            printBody();
            close $rep;
        }

        sub printHeader {
            format HDR =
        @>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
        $generatedLine
        .
            format HDR_TOP =
        .
            $rep->format_name("HDR");
            $rep->format_top_name("HDR_TOP");
            $generatedLine = "test";
            write($rep);
            $generatedLine = "next item";
            write($rep);
            $generatedLine = "last header item";
            write($rep);
        }

        sub printBody {
            # There are multiple such sections in my code; for simplicity,
            # only one is shown here. Each declares its own header and header
            # top, sets the report to use them, and prints items to $rep.
        }
        #end code

    The above is just a high-level view of the code I am using, and I hope I have captured all the salient points. For some reason the first report file comes out correctly, but in the second file the first section, instead of reading

        test
        next item
        last item

    reads

        last item
        last item
        last item

    I have tried a whole lot of options, primarily around autoflush, but for the life of me I can't figure out why it is doing this. I am using Perl 5.8.2. Any help/pointers much appreciated. Thanks, George
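
    One hedged avenue to explore, rather than a confirmed diagnosis: format_name, format_top_name, the page counter, and write() all hang off per-filehandle state, which is easy to leave stale when the same handle variable is re-opened for each report. Building the lines explicitly with formline and the $^A accumulator sidesteps that state entirely:

        use strict;
        use warnings;

        # Hedged sketch: formline() fills $^A immediately from the current
        # value, so each line lands in the file the moment it is formatted.
        sub print_header_line {
            my ($fh, $text) = @_;
            local $^A = '';
            formline '@' . '>' x 88, $text;   # right-justified picture field
            print {$fh} $^A, "\n";
        }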


  • PHP behaves weird, mixing up HTML markup.

    - by adardesign
    Thanks for looking at this problem. I have a page that is totally valid, with a PHP loop that emits an <li> for each row of the table. When I check this page locally it looks 100% OK, but when viewing the page online, the left sidebar (which this markup creates) is broken, randomly mixing <div>s and <li>s, and I have no clue what the problem is (the problem is on the left side of the page). The PHP code:

        <?php do { ?>
        <li class="clear-block" id="<?php echo $row_Recordset1['penSKU']; ?>">
            <a title="Click to view the <?php echo $row_Recordset1['penName']; ?> collection" rel="<?php echo $row_Recordset1['penSKU']; ?>">
                <img src="prodImages/small/<?php echo $row_Recordset1['penSKU']; ?>.png" alt="" />
                <div class="prodInfoCntnr">
                    <div class="basicInfo">
                        <span class="prodName"><?php echo $row_Recordset1['penName']; ?></span>
                        <span class="prodSku"><?php echo $row_Recordset1['penSKU']; ?></span>
                    </div>
                    <div class="secondaryInfo">
                        <span>As low as .<?php echo $row_Recordset1['price25000']; ?>¢ <!--<em>(R)</em>--></span>
                        <div class="colorPlacholder" rel="<?php echo $row_Recordset1['penColors']; ?>"></div>
                    </div>
                </div>
                <div class="additPenInfo">
                    <div class="imprintInfo"><span>Imprint area: </span><?php echo $row_Recordset1['imprintArea']; ?></div>
                    <div class="colorInfo"><span>Available in: </span><?php echo $row_Recordset1['penColors']; ?></div>
                    <table border="0" cellspacing="0" cellpadding="0">
                        <tr>
                            <th>Amount</th>
                            <th>500</th>
                            <th>1,000</th>
                            <th>2,500</th>
                            <th>5,000</th>
                            <th>10,000</th>
                            <th>20,000</th>
                        </tr>
                        <tr>
                            <td>Price <span>(R)</span></td>
                            <td><?php echo $row_Recordset1['price500']; ?>¢</td>
                            <td><?php echo $row_Recordset1['price1000']; ?>¢</td>
                            <td><?php echo $row_Recordset1['price2500']; ?>¢</td>
                            <td><?php echo $row_Recordset1['price5000']; ?>¢</td>
                            <td>Please Contact</td>
                            <td>Please Contact</td>
                        </tr>
                    </table>
                </div>
            </a>
        </li>
        <?php } while ($row_Recordset1 = mysql_fetch_assoc($Recordset1)); ?>
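
    One thing that stands out (a hedged observation, not a certain diagnosis): <div> and <table> are block-level elements and are not valid inside <a> in HTML 4/XHTML, so browsers repair the tree with error recovery, and that recovery can differ between a fast local load and a chunked network response, which would explain the local-vs-online difference. A sketch of a valid restructuring, keeping only inline content inside the anchor:

        <li class="clear-block" id="...">
            <a title="..." rel="...">
                <img src="..." alt="" />
                <span class="prodName">...</span>
            </a>
            <div class="prodInfoCntnr">...</div>
            <div class="additPenInfo">...</div>
        </li>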


  • jquery ajax error cannot find url outside of debug mode

    - by John Orlandella Jr.
    I inherited some code two weeks ago that uses the jQuery.ajax method to connect to a .NET web service. Here is the piece of code giving me trouble:

        if (MSCTour.AppSettings.OFFLINE !== 'TRUE') {
            $.ajax({
                url: url,
                data: json,
                type: "POST",
                contentType: "application/json",
                timeout: 10000,
                dataType: "json", // not "json" we'll parse
                success: function (res) {
                    if (!callback) { return; }
                    // *** Use json library so we can fix up MS AJAX dates
                    var result = "";
                    if (res !== "") {
                        try {
                            result = $.evalJSON(res);
                        } catch (e) {
                            result = {};
                            bare = true;
                        }
                    }
                    // *** Bare message IS result
                    if (bare) {
                        callback(result);
                        return;
                    }
                    // *** Wrapped message contains top level object node
                    // *** strip it off
                    for (var property in result) {
                        callback(result[property]);
                        break;
                    }
                },
                error: function (xhr, status, error) {
                    if (status === 'parsererror') {} else { return error; }
                },
                complete: function (res, status) {
                    if (callback) {
                        if ((status != 'success' && status != 'error') || status === 'parsererror' || (status === 'timeout' && res !== '')) {
                            try {
                                result = $.secureEvalJSON(res);
                            } catch (e) {
                                result = {};
                                bare = true;
                            }
                            callback(res);
                        }
                    }
                    return;
                }
            });
        }

    The url variable at this point equals /testsite/service.svc/GetItems. Now here is where my problem lies: when running this site in debug mode through Visual Studio, I have no problem connecting to the database through the web service and seeing all my data, both for viewing and updating. When I go through the normal web server for the same site, on the same page, no data shows up. When I put a break on the error portion of the code above, Firebug shows what appears to be a 404 error; but when I look on the server, all of the files are in the right place. Coupled with the fact that it works in debug mode, I think I am slowly going crazy staring at these same lines of code trying to find the needle in the haystack. Any help, or just a direction to look in, would be greatly appreciated.
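
    A .svc endpoint that 404s only under IIS (while working under the Visual Studio development server) often points at the server configuration rather than the JavaScript: IIS needs WCF's .svc handler mappings registered, and the /testsite/ application path must match how IIS hosts the site. Hedged suggestions: first request the service address directly in a browser (e.g. http://yourserver/testsite/service.svc, a hypothetical URL), and if that 404s too, re-register the handler mappings:

        REM Hedged sketch: registers WCF's .svc mappings with IIS;
        REM adjust the framework directory for your install.
        "%windir%\Microsoft.NET\Framework\v3.0\Windows Communication Foundation\ServiceModelReg.exe" -i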


  • Configuring Hibernate logging using Log4j XML config file?

    - by James McMahon
    I haven't been able to find any documentation on how to configure Hibernate's logging using the XML-style configuration file for log4j. Is this even possible, or do I have to use a properties-style configuration file to control Hibernate's logging? If anyone has any information or links to documentation, it would be appreciated.

    EDIT: Just to clarify, I am looking for an example of the actual XML syntax to control Hibernate.

    EDIT 2: Here is what I have in my XML config file:

        <?xml version="1.0" encoding="UTF-8" ?>
        <!DOCTYPE log4j:configuration SYSTEM "log4j.dtd">
        <log4j:configuration xmlns:log4j="http://jakarta.apache.org/log4j/">
            <appender name="console" class="org.apache.log4j.ConsoleAppender">
                <param name="Threshold" value="info"/>
                <param name="Target" value="System.out"/>
                <layout class="org.apache.log4j.PatternLayout">
                    <param name="ConversionPattern" value="%d{ABSOLUTE} [%t] %-5p %c{1} - %m%n"/>
                </layout>
            </appender>
            <appender name="rolling-file" class="org.apache.log4j.RollingFileAppender">
                <param name="file" value="Program-Name.log"/>
                <param name="MaxFileSize" value="1000KB"/>
                <!-- Keep one backup file -->
                <param name="MaxBackupIndex" value="4"/>
                <layout class="org.apache.log4j.PatternLayout">
                    <param name="ConversionPattern" value="%d [%t] %-5p %l - %m%n"/>
                </layout>
            </appender>
            <root>
                <priority value="debug" />
                <appender-ref ref="console" />
                <appender-ref ref="rolling-file" />
            </root>
        </log4j:configuration>

    Logging works fine, but I am looking for a way to step down and control the Hibernate logging separately from my application-level logging, as it is currently flooding my logs. I have found examples that do this with the properties file; I was just wondering how I can do it in an XML file.
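
    For what it's worth, a sketch of the usual XML answer (hedged only in that the level and package should be adapted to taste): log4j's XML format accepts named <logger> elements, placed after the appenders and before <root>, and a logger scoped to org.hibernate overrides the root level for Hibernate alone.

        <!-- Quiet Hibernate to warn and above, independent of the root level -->
        <logger name="org.hibernate">
            <level value="warn"/>
        </logger>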


  • How do I prevent programmatically the "Program Compatibility Assistant" in Vista (and Windows 7) from appearing?

    - by Asaf
    I develop a C++ program which might use Adobe Flash, although Flash is not essential. I use CoCreateInstance to create the Flash object, and if that fails I know Flash is not installed, so I don't use it. However, in Vista (and I think Windows 7 as well), when Flash is not installed, the "Program Compatibility Assistant" pops up a message after leaving the application, saying that "This program requires a missing Windows component" and naming flash.ocx. Is there a way to prevent this message from appearing? I don't want to force any user to install Flash (especially since it's the IE ActiveX control, and Firefox users might not have it installed), and my application operates well without Flash; plus, this message is really annoying when it appears after every run. I don't mean disabling the PCA on the user's machine, of course, but programmatically suppressing this specific appearance on all machines. Any thoughts? Thanks.

    EDIT: I followed Shay's lead (thanks) and did some more digging of my own. I added the following XML to the application's manifest:

        <trustInfo xmlns="urn:schemas-microsoft-com:asm.v2">
            <security>
                <requestedPrivileges>
                    <requestedExecutionLevel level="asInvoker" uiAccess="false">
                    </requestedExecutionLevel>
                </requestedPrivileges>
            </security>
        </trustInfo>

    (see also: msdn.microsoft.com/en-us/library/bb756929.aspx). This solved the problem on Vista 64. To solve the same problem on Windows 7, I added the following:

        <compatibility xmlns="urn:schemas-microsoft-com:compatibility.v1">
            <application>
                <!-- The ID below indicates application support for Windows Vista -->
                <supportedOS Id="{e2011457-1546-43c5-a5fe-008deee3d3f0}"/>
                <!-- The ID below indicates application support for Windows 7 -->
                <supportedOS Id="{35138b9a-5d96-4fbd-8e2d-a2440225f93a}"/>
            </application>
        </compatibility>

    (See also: blogs.msdn.com/yvesdolc/archive/2009/09/22/the-new-compatibility-section-in-the-application-manifest.aspx.) That solved Windows 7, but for some reason it still happens on Vista 32. I also tried editing the manifest of the specific DLL causing the problem, but it had no effect; only the executable's own manifest affected the problem. So... Vista 32?
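
    For reference, here is how the two quoted sections sit together in one complete manifest (a hedged sketch: the assembly wrapper is the standard one, and for the remaining Vista 32 case it is worth verifying that this manifest is actually embedded as a resource in the 32-bit executable rather than shipped as an external .manifest file):

        <?xml version="1.0" encoding="UTF-8" standalone="yes"?>
        <assembly xmlns="urn:schemas-microsoft-com:asm.v1" manifestVersion="1.0">
            <trustInfo xmlns="urn:schemas-microsoft-com:asm.v2">
                <!-- requestedExecutionLevel section exactly as above -->
            </trustInfo>
            <compatibility xmlns="urn:schemas-microsoft-com:compatibility.v1">
                <!-- supportedOS section exactly as above -->
            </compatibility>
        </assembly>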


  • Why is it still so hard to write software?

    - by nornagon
    Writing software, I find, is composed of two parts: the Idea and the Implementation.

    The Idea is about thinking: "I have this problem; how do I solve it?" and further, "how do I solve it elegantly?" The answers to these questions are obtainable by thinking about algorithms and architecture; the ideas come partially through analysis and partially through insight and intuition. The Idea is usually the easy part. You talk to your friends and co-workers and you nut it out in a meeting or over coffee. It takes an hour or two, plus revisions as you implement and find new problems.

    The Implementation phase of software development is so difficult that we joke about it. "Oh," we say, "the rest is a Simple Matter of Code." Because it should be simple, but it never is. We used to write our code on punch cards, and that was hard: mistakes were very difficult to spot, so we had to spend extra effort making sure every line was perfect. Then we had serial terminals: we could see all our code at once, search through it, organise it hierarchically and create things abstracted from raw machine code. First we had assemblers, one level up from machine code; mnemonics freed us from remembering the machine code. Then we had compilers, which freed us from remembering the instructions. We had virtual machines, which let us step away from machine-specific details. And now we have advanced tools like Eclipse and Xcode that perform analysis on our code to help us write code faster and avoid common pitfalls.

    But writing code is still hard. Writing code is about understanding large, complex systems, and the tools we have today simply don't go very far towards helping us with that. When I click "find all references" in Eclipse, I get a list of them at the bottom of the window. I click on one, and I'm torn away from what I was looking at, forced to context switch. Java architecture is usually several levels deep, so I have to switch and switch and switch until I find what I'm really looking for, by which time I've forgotten where I came from. And I do that all day until I've understood a system. It's taxing mentally, and Eclipse doesn't do much that couldn't be done in 1985 with grep, except eat hundreds of megs of RAM.

    Writing code has barely changed since we were staring at amber on black. We have the theoretical groundwork for much more advanced tools, tools that actually work to help us comprehend and extend the complex systems we work with every day. So why is writing code still so hard?


  • compiler warning at C++ template base class

    - by eike
    I get a compiler warning that I don't understand in this context when I compile Child.cpp from the following code. (Don't wonder: I stripped my class declarations down to the bare minimum, so the content will not make much sense, but you will see the problem quicker.) I get the warning with VS2003 and VS2008 on the highest warning level.

    The code, AbstractClass.h:

        #include <iostream>

        template<typename T>
        class AbstractClass
        {
        public:
            virtual void Cancel(); // { std::cout << "Abstract Cancel" << std::endl; };
            virtual void Process() = 0;
        };

        // Outside definition. If I comment this out and use the inline
        // definition above (currently commented out), I don't get a
        // compiler warning.
        template<typename T>
        void AbstractClass<T>::Cancel()
        {
            std::cout << "Abstract Cancel" << std::endl;
        }

    Child.h:

        #include "AbstractClass.h"

        class Child : public AbstractClass<int>
        {
        public:
            virtual void Process();
        };

    Child.cpp:

        #include "Child.h"
        #include <iostream>

        void Child::Process()
        {
            std::cout << "Process" << std::endl;
        }

    The warning: Child is derived from AbstractClass, and AbstractClass has the public method AbstractClass::Cancel(). If I define that method outside of the class body (as in the code above), I get the following compiler warning when I compile Child.cpp:

        AbstractClass.h(7) : warning C4505: 'AbstractClass<T>::Cancel' :
            unreferenced local function has been removed
            with [ T=int ]

    I do not understand this, because Cancel() is a public function and the compiler can't know whether I will reference this method later. And, in the end, I do reference it, because I call it in main.cpp, and despite this compiler warning the method works if I compile and link all files and execute the program:

        // main.cpp
        #include <iostream>
        #include "Child.h"

        int main()
        {
            Child child;
            child.Cancel(); // works, despite the warning
        }

    If I define the Cancel() function inline (the commented-out version in AbstractClass.h), I don't get the compiler warning. Of course my program works, but I want to understand this warning. Or is it just a compiler bug? Furthermore, if I do not implement AbstractClass as a template class (just for test purposes in this case), I also don't get the compiler warning...?
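
    A hedged note on what is likely happening, plus a workaround sketch: Cancel() is a member of a class template, so it is only compiled where it is instantiated. In Child.cpp the compiler instantiates it implicitly for AbstractClass<int> (being virtual, it must exist for the vtable), finds no local reference in that translation unit, discards it again, and reports C4505. One way to give it a permanent out-of-line home, which typically silences the warning, is an explicit instantiation in a single .cpp file:

        // Hedged sketch: force one full instantiation of the template.
        #include "AbstractClass.h"
        template class AbstractClass<int>;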


  • Running out of memory with UIImage creation on an offscreen Bitmap Context by NSOperation

    - by sigsegv
    I have an app with multiple UIView subclasses that act as pages for a UIScrollView; UIViews are moved back and forth to provide a seamless experience to the user. Since the content of the views is rather slow to draw, it is rendered into a single shared CGBitmapContext (guarded by locks) by NSOperation subclasses, executed one at a time in an NSOperationQueue, wrapped up in a UIImage, and then used by the main thread to update the content of the views.

        -(void)main
        {
            NSAutoreleasePool * pool = [[NSAutoreleasePool alloc] init];

            if ([self isCancelled]) {
                return;
            }
            if (nil == data) {
                return;
            }

            // Buffer is the shared instance of a CGBitmapContext wrapper
            // class; data is a dictionary
            CGImageRef img = [buffer imageCreateWithData:data];
            UIImage * image = [[UIImage alloc] initWithCGImage:img];
            CGImageRelease(img);

            if ([self isCancelled]) {
                [image release];
                return;
            }

            NSDictionary * result = [[NSDictionary alloc] initWithObjectsAndKeys:image, @"image", id, @"id", nil];

            // target is the instance of the UIView subclass that will use
            // the image
            [target performSelectorOnMainThread:@selector(updateContentWithData:)
                                     withObject:result
                                  waitUntilDone:NO];
            [result release];
            [image release];
            [pool release];
        }

    The updateContentWithData: of the UIView subclass, performed on the main thread, is just as simple:

        -(void)updateContentWithData:(NSDictionary *)someData
        {
            NSDictionary * data = [someData retain];
            if ([[data valueForKey:@"id"] isEqualToString:[self pendingRequestId]]) {
                UIImage * image = [data valueForKey:@"image"];
                [self setCurrentImage:image];
                [self setNeedsDisplay];
            }
            // If the image has not been retained, it should be released
            // together with the dictionary retaining it
            [data release];
        }

    The drawLayer:inContext: method of the subclass just gets the CGImage from the UIImage and uses it to update the backing layer or part of it; no retain or release is involved in that process. The problem is that after a while I run out of memory. The number of UIViews is static. The CGImageRef and UIImage are created, retained and released correctly (or so it seems to me). Instruments does not show any leaks, just the available free memory dipping constantly, rising a few times, then dipping even lower until the application is terminated. The app cycles through about 200-300 of the aforementioned pages before that, but I would expect memory usage to reach a more or less stable level after a bunch of pages have been skimmed at speed, or, since the images are up to 3 MB in size, to be depleted much earlier. Any suggestion will be greatly appreciated.
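
    One detail worth a second look (a hedged observation, not a confirmed diagnosis): the early return paths in -main above exit without ever releasing the NSAutoreleasePool, so on every cancelled or data-less operation everything autoreleased on that worker thread is left undrained. Restructuring so that every path falls through to the release would rule this out:

        -(void)main
        {
            NSAutoreleasePool *pool = [[NSAutoreleasePool alloc] init];

            if (![self isCancelled] && nil != data) {
                // ... create the CGImageRef/UIImage and post the result to
                // the main thread, exactly as in the original -main ...
            }

            [pool release]; // reached on every path, including cancellation
        }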


  • Cannot Logout of Facebook with Facebook C# SDK

    - by Ryan Smyth
    I think I've read just about everything out there on the topic of logging out of Facebook inside a desktop application; nothing so far works. Specifically, I would like to log the user out so that they can switch identities. For example, people sharing a computer at home could then use the software with their own Facebook accounts, but with no way to switch accounts it's quite messy. (I have not yet tested switching Windows user accounts, as that is simply far too much to ask of the end user and should not be necessary.)

    Now, I should say that I have set the application to use these permissions:

        string[] permissions = new string[] { "user_photos", "publish_stream", "offline_access" };

    So "offline_access" is included there. I do not know whether this does or should affect logging out. Again, my purpose for logging out is merely to switch users; if there's a better approach, please let me know.

    The purported solutions seem to be:

    1. Use the JavaScript SDK (FB.logout())
    2. Use "m.facebook.com" instead
    3. Create your own URL (and possibly use m.facebook.com)
    4. Create your own URL and use the session variable (in ASP.NET)

    The first is kind of silly. Why resort to JavaScript when you're using C#? It's a step backwards and has a lot of additional overhead in a desktop application. (I have not tried it, as it's simply disgustingly messy to do in a desktop application; if anyone can confirm that it is the only working method, please do so. I'm desperately trying to avoid it.)

    The second doesn't work. Perhaps it worked in the past, but my umpteen attempts to get it to work have all failed. The third doesn't work either; I've tried umpteen dozen variations with zero success. The last option doesn't work for a desktop application because it's not ASP.NET and there is no session variable to work with.

    The Facebook C# SDK logout also no longer works, i.e.:

        public FacebookLoginDialog(string appId, string[] extendedPermissions, bool logout)
        {
            IDictionary<string, object> loginParameters = new Dictionary<string, object>
            {
                { "response_type", "token" },
                { "display", "popup" }
            };
            _navigateUri = FacebookOAuthClient.GetLoginUrl(appId, null, extendedPermissions, logout, loginParameters);
            InitializeComponent();
        }

    I remember it working in the past, but it no longer works now (which truly puzzles me). It instead directs the user to the Facebook mobile page, where the user must manually log out. Now, I could use browser automation to click the logout link for the user automatically, but this is prone to breaking whenever Facebook updates the mobile UI; it is also messy, and possibly a worse solution than trying to use the JavaScript SDK FB.logout() method (though not by much).

    I have searched for some kind of documentation, but I cannot find anything in the Facebook developer documentation that illustrates how to log an application out. Has anyone solved this problem, or seen any documentation that can be ported to work with the Facebook C# SDK? I am certainly open to using a WebClient or HttpClient/Response if anyone can point to some documentation that could work with it; I simply have not been able to find any low-level documentation that shows how this approach could work. Thank you in advance for any advice, pointers, or links.
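
    One pattern that has historically worked for embedded-browser flows is navigating the dialog's browser control to the logout endpoint with the current access token. This is a hedged sketch: Facebook has changed these endpoints repeatedly, so treat the URL format and parameters as assumptions to verify rather than a documented API.

        // Hedged sketch: log out by navigation, then reopen the login dialog
        // so a different user can sign in. accessToken is the current
        // session's token; the next URL is illustrative.
        string logoutUrl = string.Format(
            "https://www.facebook.com/logout.php?next={0}&access_token={1}",
            Uri.EscapeDataString("https://www.facebook.com/connect/login_success.html"),
            accessToken);
        webBrowser.Navigate(logoutUrl);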


  • Problem with combination boost::exception and boost::variant

    - by Rick
    Hello all, I have a strange problem with a two-level variant struct when boost::exception is included. I have the following code snippet:

        #include <boost/variant.hpp>
        #include <boost/exception/all.hpp>

        typedef boost::variant< int > StoredValue;
        typedef boost::variant< StoredValue > ExpressionItem;

        inline std::ostream& operator << ( std::ostream& os, const StoredValue& stvalue )
        { return os; }

        inline std::ostream& operator << ( std::ostream& os, const ExpressionItem& stvalue )
        { return os; }

    When I try to compile it, I get the following error:

        boost/exception/detail/is_output_streamable.hpp(45): error C2593: 'operator <<' is ambiguous
            test.cpp(11): could be 'std::ostream &operator <<(std::ostream &,const ExpressionItem &)' [found using argument-dependent lookup]
            test.cpp(8): or 'std::ostream &operator <<(std::ostream &,const StoredValue &)' [found using argument-dependent lookup]
        while trying to match the argument list '(std::basic_ostream<_Elem,_Traits>, const boost::error_info<Tag,T>)'
        with
        [
            _Elem=char,
            _Traits=std::char_traits<char>
        ]
        and
        [
            Tag=boost::tag_original_exception_type,
            T=const type_info *
        ]

    The code snippet is simplified as much as possible; in the real code the structures are much more complicated and each variant has five sub-types. When I remove the #include of boost/exception and try the following test snippet, the program compiles correctly:

        void TestVariant()
        {
            ExpressionItem test;
            std::stringstream str;
            str << test;
        }

    Could someone please advise me how to define the operators << so that they work even when using boost::exception? Thanks and regards, Rick
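
    A hedged reading of the error: boost::variant's converting constructor in this era of Boost is an unconstrained template, so during boost::exception's is_output_streamable probe both operators look viable for a completely unrelated type (the error_info in the message), and overload resolution reports an ambiguity before any conversion is actually attempted. If that is the cause, one workaround sketch is to give each alias its own concrete type, whose constructors are properly constrained:

        // Hedged sketch (illustrative): thin wrappers instead of bare typedefs.
        struct StoredValue    { boost::variant<int> v; };
        struct ExpressionItem { boost::variant<StoredValue> v; };

        inline std::ostream& operator << ( std::ostream& os, const StoredValue& )
        { return os; }

        inline std::ostream& operator << ( std::ostream& os, const ExpressionItem& )
        { return os; }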


  • Can ASM method-visitors be used with interfaces?

    - by Olaf Mertens
    I need to write a tool that lists the classes that call methods of specified interfaces. It will be used as part of the build process of a large Java application consisting of many modules; the goal is to automatically document the dependencies between certain Java modules. I found several tools for dependency analysis, but they don't work at the method level, just for packages or JARs. Finally I found ASM, which seems to do what I need. The following code prints the method dependencies of all class files in a given directory:

        import java.io.*;
        import java.util.*;
        import org.objectweb.asm.ClassReader;

        public class Test {
            public static void main(String[] args) throws Exception {
                File dir = new File(args[0]);
                List<File> classFiles = new LinkedList<File>();
                findClassFiles(classFiles, dir);
                for (File classFile : classFiles) {
                    InputStream input = new FileInputStream(classFile);
                    new ClassReader(input).accept(new MyClassVisitor(), 0);
                    input.close();
                }
            }

            private static void findClassFiles(List<File> list, File dir) {
                for (File file : dir.listFiles()) {
                    if (file.isDirectory()) {
                        findClassFiles(list, file);
                    } else if (file.getName().endsWith(".class")) {
                        list.add(file);
                    }
                }
            }
        }

        import org.objectweb.asm.MethodVisitor;
        import org.objectweb.asm.commons.EmptyVisitor;

        public class MyClassVisitor extends EmptyVisitor {
            private String className;

            @Override
            public void visit(int version, int access, String name, String signature,
                              String superName, String[] interfaces) {
                this.className = name;
            }

            @Override
            public MethodVisitor visitMethod(int access, String name, String desc,
                                             String signature, String[] exceptions) {
                System.out.println(className + "." + name);
                return new MyMethodVisitor();
            }
        }

        import org.objectweb.asm.commons.EmptyVisitor;

        public class MyMethodVisitor extends EmptyVisitor {
            @Override
            public void visitMethodInsn(int opcode, String owner, String name, String desc) {
                String key = owner + "." + name;
                System.out.println("  " + key);
            }
        }

    The problem: the code works for regular classes only! If the class file contains an interface, visitMethod is called, but not visitMethodInsn, and I don't get any info about the callers of interface methods. Any ideas?
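
    This behaviour has a straightforward explanation: methods declared in an interface are abstract and carry no Code attribute in the class file, so there are no instructions for visitMethodInsn to fire on. An interface never contains call sites; the callers of its methods only show up while visiting concrete classes whose method bodies contain INVOKEINTERFACE instructions. A hedged sketch that makes this explicit in the class visitor (requires org.objectweb.asm.Opcodes):

        @Override
        public MethodVisitor visitMethod(int access, String name, String desc,
                                         String signature, String[] exceptions) {
            if ((access & Opcodes.ACC_ABSTRACT) != 0) {
                return null; // no body, nothing for a MethodVisitor to see
            }
            System.out.println(className + "." + name);
            return new MyMethodVisitor();
        }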


  • Problem with return 2 libc method

    - by jth
    Hi, I'm trying to understand the return-to-libc method, using Ubuntu Linux 9.10, 32-bit, with ASLR disabled. In theory it sounds quite easy: overwrite the saved eip with the address of system() (or whatever function you want), then put the address to which system() should return, and after that the parameter for system(), the "/bin/bash" string. But what happens is that my exploit keeps segfaulting the vulnerable program. I assume something with the system() address went wrong. This is what I did so far.

    I determined the address of system():

        (gdb) print system
        $1 = {<text variable, no debug info>} 0x167020 <system>
        (gdb) x/x system
        0x167020 <system>: 0x890cec83

    I used the subsequent x/x system because those 3 bytes returned by print system look like an index into some sort of jump table (PLT?), so I assumed 0x890cec83 is the right address to use to overwrite the saved eip. After that I determined the address of the /bin/bash string in memory, using a small C program that basically consists of this line:

        printf("Address of string /bin/bash: %p\n", getenv("SHELL"));

    Then I looked around a little in memory and found /bin/bash:

        (gdb) x/s 0xbffff6ca
        0xbffff6ca:     "/bin/bash"

    After I gathered this information, I filled the buffer:

        (gdb) b 9
        Breakpoint 1 at 0x8048407: file victim.c, line 9.
        (gdb) r `perl -e 'print "A"x9 . "\x83\xec\x0c\x89FAKE\xca\xf6\xff\xbf";'`
        Breakpoint 1, main (argc=1111638594, argv=0xc360cca) at victim.c:10
        10          return 0;
        (gdb) x/s 0xbffff6ca
        0xbffff6ca:     "/bin/bash"

    The stack frame looks like this:

        (gdb) i f
        Stack level 0, frame at 0xbffff440:
         eip = 0x8048407 in main (victim.c:10); saved eip 0x890cec83
         source language c.
         Arglist at 0xbffff438, args: argc=1111638594, argv=0xc360cca
         Locals at 0xbffff438, Previous frame's sp is 0xbffff440
         Saved registers:
          ebp at 0xbffff438, eip at 0xbffff43c

    This all seems right to me: the saved eip was overwritten with the (hopefully) correct system() address, the return address for system() was set to "FAKE" (which shouldn't matter), and the address of /bin/bash also seems correct. But when I continue execution, victim segfaults at some strange address, certainly not at 0x890cec83:

        (gdb) cont
        Continuing.

        Program received signal SIGSEGV, Segmentation fault.
        0x0804840d in main (argc=Cannot access memory at address 0x41414149) at victim.c:11
        11      }

    Has anyone an explanation or a hint as to what happens here and why execution isn't redirected to 0x890cec83? Thanks in advance; any hint, even a vague one, would be appreciated. I have no idea why this doesn't work.
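
    A hedged pointer at the likely mix-up: print system already gives the address of system(), namely 0x00167020; x/x system then prints the first word stored at that address, and 0x890cec83 is just system's prologue bytes (83 ec 0c = sub esp, 0xc, followed by the start of a mov), not an address at all. So the payload should carry 0x00167020 in little-endian byte order, sketched here:

        # Hedged sketch of the corrected payload:
        print "A" x 9
            . "\x20\x70\x16\x00"   # &system, little-endian
            . "FAKE"               # fake return address for system()
            . "\xca\xf6\xff\xbf";  # address of "/bin/bash"

    Note the high byte of &system is 0x00, a NUL: a payload delivered through argv or a strcpy-style overflow stops copying at that byte, so the two fields after it would never land on the stack. That is the usual second hurdle with return-to-libc at addresses this low, typically solved with multiple overwrites or by picking NUL-free addresses.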


  • How to modify a given class to use const operators

    - by zsero
    I am trying to solve my earlier question about using push_back at more than one level. From the comments/answers it is clear that I have to:

    1. Create a copy constructor that takes a const argument
    2. Make all my operators const

    But because this header file was given to me, there is one operator I cannot make const. It is simply:

        float & operator [] (int i) { return _item[i]; }

    In the given program, this operator is used both to get and to set data. My problem is that because I need to keep this operator in the header file, I cannot make all the other operators const, which means I cannot add a copy constructor. How can I make all my operators const while preserving the functionality of the already written program? Here is the full declaration of the class:

        class Vector3f
        {
            float _item[3];

        public:
            float & operator [] (int i) { return _item[i]; }

            Vector3f(float x, float y, float z)
            { _item[0] = x; _item[1] = y; _item[2] = z; }

            Vector3f() {}

            Vector3f & operator = ( const Vector3f& obj )
            {
                _item[0] = obj[0]; _item[1] = obj[1]; _item[2] = obj[2];
                return *this;
            }

            Vector3f & operator += ( const Vector3f & obj )
            {
                _item[0] += obj[0]; _item[1] += obj[1]; _item[2] += obj[2];
                return *this;
            }

            bool operator == ( const Vector3f & obj )
            {
                bool x = (_item[0] == obj[0]) && (_item[1] == obj[1]) && (_item[2] == obj[2]);
                return x;
            }

            // my copy operator
            Vector3f(const Vector3f& obj)
            {
                _item[0] += obj[0];
                _item[1] += obj[1];
                _item[2] += obj[2];
                return this;
            }
        };
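
    A hedged sketch of the standard resolution: C++ allows operator[] to be overloaded on constness, so the mutable version can stay for the existing program while a const version satisfies const arguments. The copy constructor shown above also needs two fixes: plain assignment rather than +=, and no return statement, since constructors don't return values.

        // Both overloads can coexist; const objects pick the const one.
        float &      operator [] (int i)       { return _item[i]; }
        const float& operator [] (int i) const { return _item[i]; }

        // operator== can then be const as well:
        bool operator == ( const Vector3f & obj ) const
        {
            return _item[0] == obj[0] && _item[1] == obj[1] && _item[2] == obj[2];
        }

        // Copy constructor: copy, don't accumulate, and return nothing.
        Vector3f(const Vector3f& obj)
        {
            _item[0] = obj[0]; _item[1] = obj[1]; _item[2] = obj[2];
        }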


  • Mapping two tables 0..n in Hibernate

    - by simon
    I have a table Users:

        CREATE TABLE "USERS" (
            "ID" NUMBER NOT NULL,
            "LOGINNAME" VARCHAR2 (150) NOT NULL
        )

    and I have a second table SpecialUsers. No user id can occur twice in the SpecialUsers table, and only a small subset of the ids in the Users table is contained in SpecialUsers:

        CREATE TABLE "SPECIALUSERS" (
            "USERID" NUMBER NOT NULL,
            CONSTRAINT "PK_SPECIALUSERS" PRIMARY KEY ("USERID")
        )

        ALTER TABLE "SPECIALUSERS"
            ADD CONSTRAINT "FK_SPECIALUSERS_USERID"
            FOREIGN KEY ("USERID") REFERENCES "USERS" ("ID")

    Mapping the Users table in Hibernate works fine:

        <hibernate-mapping package="com.initech.domain">
            <class name="com.initech.User" table="USERS">
                <id name="id" column="ID" type="java.lang.Long">
                    <meta attribute="use-in-tostring">true</meta>
                    <generator class="sequence">
                        <param name="sequence">SEQ_USERS_ID</param>
                    </generator>
                </id>
                <property name="loginName" column="LOGINNAME" type="java.lang.String" not-null="true">
                    <meta attribute="use-in-tostring">true</meta>
                </property>
            </class>
        </hibernate-mapping>

    But I'm struggling to create the mapping for the SpecialUsers table. All the examples I found (e.g. in the Hibernate documentation) lack this type of self-reference. I tried a mapping like this:

        <hibernate-mapping package="com.initech.domain">
            <class name="com.initech.User" table="SPECIALUSERS">
                <id name="id" column="USERID">
                    <meta attribute="use-in-tostring">true</meta>
                    <generator class="foreign">
                        <param name="property">user</param>
                    </generator>
                </id>
                <one-to-one name="user" class="User"/>
            </class>
        </hibernate-mapping>

    but got the error:

        Invocation of init method failed; nested exception is
        org.hibernate.DuplicateMappingException:
        Duplicate class/entity mapping com.initech.User

    How should I map the SpecialUsers table? What I need at the application level is a list of the User objects contained in the SpecialUsers table.
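
    The exception happens because the same entity class is mapped twice. A hedged sketch of one way out (class and property names are illustrative): introduce a small SpecialUser class of its own, joined one-to-one to User through a shared primary key, and query it for the list of users:

        <!-- Hedged sketch: a distinct entity for the SPECIALUSERS row -->
        <hibernate-mapping package="com.initech.domain">
            <class name="com.initech.SpecialUser" table="SPECIALUSERS">
                <id name="id" column="USERID">
                    <generator class="foreign">
                        <param name="property">user</param>
                    </generator>
                </id>
                <one-to-one name="user" class="User" constrained="true"/>
            </class>
        </hibernate-mapping>

    An HQL query such as "select su.user from SpecialUser su" would then return the list of User objects directly.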


  • SBT run differences between scala and java?

    - by Eric Cartner
    I'm trying to follow the log4j2 configuration tutorials in an SBT 0.12.1 project. Here is my build.sbt:

        name := "Logging Test"

        version := "0.0"

        scalaVersion := "2.9.2"

        libraryDependencies ++= Seq(
          "org.apache.logging.log4j" % "log4j-api" % "2.0-beta3",
          "org.apache.logging.log4j" % "log4j-core" % "2.0-beta3"
        )

    When I run the main() defined in src/main/scala/logtest/Foo.scala:

        package logtest

        import org.apache.logging.log4j.{Logger, LogManager}

        object Foo {
          private val logger = LogManager.getLogger(getClass())

          def main(args: Array[String]) {
            logger.trace("Entering application.")
            val bar = new Bar()
            if (!bar.doIt())
              logger.error("Didn't do it.")
            logger.trace("Exiting application.")
          }
        }

    I get the output I was expecting, given that src/main/resources/log4j2.xml sets the root logging level to trace:

        [info] Running logtest.Foo
        08:39:55.627 [run-main] TRACE logtest.Foo$ - Entering application.
        08:39:55.630 [run-main] TRACE logtest.Bar - entry
        08:39:55.630 [run-main] ERROR logtest.Bar - Did it again!
        08:39:55.630 [run-main] TRACE logtest.Bar - exit with (false)
        08:39:55.630 [run-main] ERROR logtest.Foo$ - Didn't do it.
        08:39:55.630 [run-main] TRACE logtest.Foo$ - Exiting application.

    However, when I run the main() defined in src/main/java/logtest/LoggerTest.java:

        package logtest;

        import org.apache.logging.log4j.Logger;
        import org.apache.logging.log4j.LogManager;

        public class LoggerTest {

            private static Logger logger = LogManager.getLogger(LoggerTest.class.getName());

            public static void main(String[] args) {
                logger.trace("Entering application.");
                Bar bar = new Bar();
                if (!bar.doIt())
                    logger.error("Didn't do it.");
                logger.trace("Exiting application.");
            }
        }

    I get the output:

        [info] Running logtest.LoggerTest
        ERROR StatusLogger Unable to locate a logging implementation, using SimpleLogger
        ERROR Bar Did it again!
        ERROR LoggerTest Didn't do it.

    From what I can tell, "ERROR StatusLogger Unable to ..." is usually a sign that log4j-core is not on my classpath, and the lack of TRACE messages seems to indicate that my log4j2.xml settings aren't on the classpath either. Why should there be any difference in classpath between running Foo.main and LoggerTest.main? Or is there something else causing this behavior?

    Update: I used SBT Assembly to build a fat JAR of this project and specified logtest.LoggerTest as the main class. Running it from the command line produced correct results:

        Eric-Cartners-iMac:target ecartner$ java -jar "Logging Test-assembly-0.0.jar"
        10:52:23.220 [main] TRACE logtest.LoggerTest - Entering application.
        10:52:23.221 [main] TRACE logtest.Bar - entry
        10:52:23.221 [main] ERROR logtest.Bar - Did it again!
        10:52:23.221 [main] TRACE logtest.Bar - exit with (false)
        10:52:23.221 [main] ERROR logtest.LoggerTest - Didn't do it.
        10:52:23.221 [main] TRACE logtest.LoggerTest - Exiting application.
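
    The fat-JAR result is consistent with a classloader problem rather than a missing dependency: sbt's run executes inside sbt's own JVM with a layered classloader, and log4j2's provider discovery (this beta in particular) can be picky about how it locates log4j-core and log4j2.xml. A hedged, commonly suggested experiment is to fork the run into a fresh JVM:

        // Hedged sketch for build.sbt: run in a separate JVM with a plain
        // classpath so log4j2's lookup behaves as it would in production.
        fork in run := true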


  • gcc/g++: error when compiling large file

    - by Alexander
    Hi, I have an auto-generated C++ source file, around 40 MB in size. It largely consists of push_back commands for some vectors and the string constants that are pushed. When I try to compile this file, g++ exits, saying that it couldn't reserve enough virtual memory (around 3 GB). Googling the problem, I found that the command-line switches

        --param ggc-min-expand=0 --param ggc-min-heapsize=4096

    may solve it; they, however, only seem to work when optimization is turned on.

    1. Is this really the solution I am looking for?
    2. Or is there a faster, better way to do this? (Compiling takes ages with these options activated.)

    Best wishes, Alexander

    Update: Thanks for all the good ideas; I tried most of them. Using an array instead of several push_back() operations reduced memory usage, but as the file I was trying to compile was so big, it still crashed, only later. In a way this behaviour is really interesting, as there is not much to optimize in such a setting: what does GCC do behind the scenes that costs so much memory? (I compiled with all optimizations deactivated as well and got the same results.)

    The solution I have switched to now is reading the original data from a binary object file that I created from the original file using objcopy. This is what I originally did not want to do, because creating the data structures in a higher-level language (in this case Perl) was more convenient than doing it in C++. However, getting this to run under Win32 was more complicated than expected. objcopy seems to generate files in the ELF format, and some of my problems disappeared when I manually set the output format to pe-i386. The symbols in the object file are by default named after the file name; e.g. converting the file inbuilt_training_data.bin results in these two symbols: binary_inbuilt_training_data_bin_start and binary_inbuilt_training_data_bin_end. I found some tutorials on the web claiming that these symbols should be declared as

        extern char _binary_inbuilt_training_data_bin_start;

    but this does not seem to be right; only

        extern char binary_inbuilt_training_data_bin_start;

    worked for me.
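
    To round the objcopy approach off, this is roughly how the embedded blob is consumed, in a hedged sketch based on the symbol names quoted above (pe-i386 naming; ELF targets typically carry the leading underscore):

        // The "symbols" are really labels at the start and end of the blob;
        // taking their addresses yields the data range.
        extern char binary_inbuilt_training_data_bin_start;
        extern char binary_inbuilt_training_data_bin_end;

        const char* data = &binary_inbuilt_training_data_bin_start;
        size_t size = &binary_inbuilt_training_data_bin_end
                    - &binary_inbuilt_training_data_bin_start;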


  • Can I create a custom class that inherits from a strongly typed DataRow?

    - by Calvin Fisher
    I'm working on a huge, old project with a lot of brittle code, some of which has been around since the .NET 1.0 era, and it has been and will be worked on by other people, so I'd like to change as little as possible. One project in my solution contains DataSet.xsd and compiles to a separate assembly (Data.dll). The database schema includes several tables arranged more or less hierarchically, but the only way the tables are actually linked together is through joins. From the autogenerated code I can get, e.g., DepartmentRow and EmployeeRow objects; an EmployeeRow contains information from the employee's corresponding DepartmentRow through a join.

    I'm making a new report to view multiple departments and all their employees. If I use the existing data access scheme, all I can get is a spreadsheet-like output where each employee is represented on one line, with the department information repeated over and over in its columns:

        Department1...Employee1...
        Department1...Employee2...
        Department2...Employee3...

    But what the customer would like is each department rendered like a heading, with a list of employees beneath it:

        - Department1...
            Employee1...
            Employee2...
        + Department2...

    I'm trying to do this by inheriting hierarchical objects from the autogenerated Row objects, e.g.:

        public class Department : DataSet.DepartmentRow
        {
            public List<Employee> Employees;
        }

    That way I could nest the data in the report by using a collection of Department objects as the DataSource, each of which puts its list of Employees into a subreport. The problem is that this gives me the error:

        The type Data.DataSet.DepartmentRow has no constructors defined

    And when I try to write a constructor, e.g.:

        public class Department : DataSet.DepartmentRow
        {
            private Department() { }
            public List<Employee> Employees;
        }

    I get, in addition to the first error:

        'Data.DataSet.DepartmentRow(System.Data.DataRowBuilder)' is
        inaccessible due to its protection level

    Is there a way to accomplish what I'm trying to do, or is there something else I should be trying entirely?
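
    The second error is the telling one: the generated typed-DataRow constructor takes a DataRowBuilder and is internal to Data.dll, so it cannot be chained to from another assembly (and rows are only supposed to be created by their table anyway). A hedged sketch of the usual workaround, composition instead of inheritance (names are illustrative):

        public class Department
        {
            public DataSet.DepartmentRow Row { get; private set; }
            public List<Employee> Employees { get; private set; }

            public Department(DataSet.DepartmentRow row)
            {
                Row = row;
                Employees = new List<Employee>();
            }
        }

    The report then binds to properties exposed on the wrapper (forwarding to Row) instead of to the row itself.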


  • Is this scatter-brained workflow realizable in Git?

    - by Luke Maurer
    This is what I'd like my workflow to look like at a conceptual level:

    1. I hack on my new feature for a while.
    2. I notice a typo in a comment; I change it.
    3. Since the typo is completely unrelated to anything else, I put that change in a pile of comment fixes.
    4. I keep working on the code.
    5. I realize I need to flesh out a few utility functions; I do so.
    6. I put that change in its own pile.
    7. Steps 2-6 repeat throughout the day.
    8. I finish the new feature and put the changes for that feature in a pile.
    9. I push nice patches upstream: one with the new feature, a few for the other tweaks, and one with a bunch of comment fixes if enough have accumulated.

    Since I'm both lazy and a perfectionist, I want to be able to do some things out of order. I might correct a typo but forget to put it in the comment-fix pile; when I prepare the upstream patches (I'm using git-svn, so I need to be pretty deliberate about these), I'll pull the comment fixes out at that point. I might forget to separate things altogether until the very end. But I might also have committed some of the piles along the way (sorry, the metaphor is breaking down...).

    This is all rather like using Eclipse changesets with SVN, except that I can have different changes to the same file in different piles (having to disentangle changes into different commits is what motivated me to move to git-svn, in fact), and with Git I can keep my full discombobulated change history, experimental branches and all, yet still make a nice, neat patch.

    I've just recently started with Git after having wanted to for a good while, and I'm quite happy so far. The biggest way in which the above workflow doesn't really map onto Git, though, is that a "bin" can't really be just a local branch, since the working tree only ever reflects the state of a single branch. Or maybe the Git index is a "pile", and what I want is to (effectively) have more than one somehow. I can think of a few ways to approximate what I want (maybe creative use of stash? intricate stash-checkout-merge dances?), but my grasp of Git isn't solid enough to be sure how best to put all the pieces together. It's said that Git is more a toolkit than a VCS, so I guess the question comes down to: how do I build this thing with these tools?
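
    For what it's worth, the toolkit pieces usually assembled into exactly this workflow are hunk-level staging and interactive rebase: git add -p lets one working tree feed several tidy commits regardless of the order the edits happened in, and git rebase -i regroups commits made along the way. A hedged sketch:

        # Stage only the comment-fix hunks out of mixed edits, commit, repeat:
        git add -p
        git commit -m "Fix comment typos"
        git add -p
        git commit -m "Flesh out utility helpers"

        # Reorder/squash the day's commits into tidy patches before pushing
        # upstream via git-svn (<upstream> is your git-svn tracking ref):
        git rebase -i <upstream>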


  • Rails / ActiveRecord Modeling Help

    - by JM
    I'm trying to model a relationship in ActiveRecord, and I think it's a little beyond my skill level. Here's the background: this is a horse racing project, and I'm trying to model a horse's Connections over time. Connections are defined as the horse's current owner, trainer and jockey. Over time, a horse's connections can change for a lot of different reasons:

    - The owner sells the horse in a private sale
    - The horse is claimed (purchased in a public sale)
    - The trainer switches jockeys
    - The owner switches trainers

    In my first attempt at modeling this, I created the tables Horses, Owners, Trainers, Jockeys and Connections. Essentially, the Connections table was the has-many-through join table, structured as follows:

        Connections Table 1
        id
        horse_id
        owner_id
        trainer_id
        jockey_id
        status_code
        status_date
        change_code

    The Horse, Owner, Trainer and Jockey foreign keys are self-explanatory. The status code is 1 or 0 (1 active, 0 inactive), the status date is the date the status changed, and change_code is an integer or string value representing the reason for the change (private sale, claim, jockey change, etc.).

    The key benefit of this approach is that the connection is represented as one record in the Connections table. The downside is that I need tables for Owner (1), Trainer (2) and Jockey (3) when one table would do. So in my second attempt I created the tables Horses, Connections and Entities, with Entities structured as follows:

        Entities Table
        id
        first_name
        last_name
        role

    where role represents whether the entity is an owner, trainer or jockey. Under this approach, my Connections table has the following structure:

        Connections Table 2
        id  horse_id  entity_id  role  status_code  status_date  change_code
        1   1         1          1     1            1/1/2010
        2   1         4          2     1            1/1/2010
        3   1         10         3     1            1/1/2010

    This approach has the benefit of eliminating two tables, but on the other hand the connection is now spread across three records instead of one. What I believe I'm looking for is an approach that captures the connection in one record, but also uses an Entities table with roles instead of separate Owner, Trainer and Jockey tables. I'm new to ActiveRecord and Rails, so any and all input would be greatly appreciated; perhaps there are other ways that would be even better. Thanks!
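
    A hedged sketch of a hybrid (column names are illustrative): keep the single Entities table, but give the Connections table one role-specific foreign key per role, all pointing at Entities. That preserves the one-record-per-connection property of the first design while still using one people table:

        # Hedged sketch, Rails 2.x-era syntax:
        class Connection < ActiveRecord::Base
          belongs_to :horse
          belongs_to :owner,   :class_name => 'Entity', :foreign_key => 'owner_id'
          belongs_to :trainer, :class_name => 'Entity', :foreign_key => 'trainer_id'
          belongs_to :jockey,  :class_name => 'Entity', :foreign_key => 'jockey_id'
        end

        class Entity < ActiveRecord::Base
          has_many :connections_as_owner,   :class_name => 'Connection', :foreign_key => 'owner_id'
          has_many :connections_as_trainer, :class_name => 'Connection', :foreign_key => 'trainer_id'
          has_many :connections_as_jockey,  :class_name => 'Connection', :foreign_key => 'jockey_id'
        end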


  • Normalization of database for timesheet tool and ensure data integrity

    - by fireeyedboy
    I'm creating a timesheet application with the following entities (amongst others):

    - Company
    - Employee: an employee associated with a company
    - Client: a client associated with a company

    So far I have the following (abbreviated) database setup:

        Company
        - id
        - name

        Employee
        - id
        - companyId (FK to Company.id)
        - name

        Client
        - id
        - companyId (FK to Company.id)
        - name

    Now, I want an employee to be associated with a client, but only if that client is associated with the company the employee works for. How would you guarantee this data integrity at the database level? Or should I just depend on the application to guarantee it? I thought about creating a many-to-many table like this:

        EmployeeClient
        - employeeId (FK to Employee.id)
        - companyId \ (combined FK to Client.companyId, Client.id)
        - clientId  /

    Thus, when I insert a client for an employee along with the employee's company id, the database should prevent this when the client is not associated with the employee's company id. Does this make sense? Because this still doesn't guarantee that the employee is associated with the company. How do you deal with these things?

    UPDATE: The scenario is as follows. A company has multiple employees, and each employee is linked to only one company. A company also has multiple clients, and each client is linked to only one company (the company is a sandbox, so to speak). An employee of a company can be linked to a client of its company, but only if the client is part of the company's clientele.

    In other words: the application will allow a company to create/add employees and create/add clients (hence the companyId FK in the Employee and Client tables). Next, the company will be allowed to assign certain clients to certain of its employees (the EmployeeClient table). Imagine an employee working on projects for a few clients, for which they can write billable hours; the employee must not be allowed to write billable hours for clients they are not assigned to by their employer (the company). So employees will not automatically have access to all their company's clients, only to those the company has selected for them.

    Hopefully this has shed some more light on the matter.
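
    The EmployeeClient idea does work at the database level once both sides carry the company: give Employee and Client a composite unique key over (id, companyId), then reference those composites from the join table so both foreign keys must agree on the same companyId. A hedged sketch in generic SQL (constraint names and types are illustrative):

        -- Composite targets for the FKs (the single-column PKs stay as-is):
        ALTER TABLE Employee ADD CONSTRAINT uq_employee_company UNIQUE (id, companyId);
        ALTER TABLE Client   ADD CONSTRAINT uq_client_company   UNIQUE (id, companyId);

        CREATE TABLE EmployeeClient (
            employeeId INT NOT NULL,
            clientId   INT NOT NULL,
            companyId  INT NOT NULL,
            PRIMARY KEY (employeeId, clientId),
            FOREIGN KEY (employeeId, companyId) REFERENCES Employee (id, companyId),
            FOREIGN KEY (clientId,   companyId) REFERENCES Client   (id, companyId)
        );

    Since the single companyId column participates in both foreign keys, a row can only exist when the employee and the client belong to the same company, which also covers the employee-to-company link that worried you.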

