Search Results

Search found 24498 results on 980 pages for 'lock pages in memory'.

Page 234/980

  • How do I pass arguments to pages in a WPF application?

    - by Rod
    I'm working on upgrading a really old VB6 app to a WPF application. This will be a page-based app, but not an XBAP. The old VB6 app had a start form where a user would enter search criteria. They would then get results in a grid, select a row in the grid and click on one of 3 buttons. I am thinking that what I'll do is use hyperlink controls in the WPF app. No matter which button the user clicked in the old VB6 app, it would go to a second form; what it did on that second form depended on which button was clicked on the first form. So, I want the first page in my WPF app to do the same thing, but which hyperlink they click on will dictate what happens on the second page. They will either (a) go to the second page to edit the details, as well as a lot more information related to what they selected on the first page, or (b) enter a new record and all associated data (a new client, in this case), or (c) create a new case for the same client selected on the first page. The hard thing for me is that I don't know how to pass that information along to the second page. Is there something in WPF like the query string in HTML? How do you get information from the first page to the second page in WPF? I'm working in VS 2008.
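
    One common approach (a sketch, not from the original question; CaseDetailsPage, SearchPage, Client and EditMode below are hypothetical stand-ins) is to skip URI-style navigation: WPF's NavigationService can navigate to an object instance, so the first page can construct the second page and pass the selected row and an edit mode through its constructor.

        using System.Windows;
        using System.Windows.Controls;

        // Hypothetical stand-ins for the asker's domain objects.
        public enum EditMode { EditExisting, NewClient, NewCaseForClient }
        public class Client { public string Name { get; set; } }

        // The second page receives its data through the constructor rather than a query string.
        public class CaseDetailsPage : Page
        {
            public CaseDetailsPage(Client selectedClient, EditMode mode)
            {
                DataContext = selectedClient;   // the page can also branch on 'mode'
            }
        }

        public class SearchPage : Page
        {
            // Hyperlink click handler on the first page: navigate to an instance, not a URI.
            private void EditDetails_Click(object sender, RoutedEventArgs e)
            {
                var client = new Client { Name = "Example" };   // normally the grid's selected row
                NavigationService.Navigate(new CaseDetailsPage(client, EditMode.EditExisting));
            }
        }

    NavigationService.Navigate(Uri, object) together with the LoadCompleted event is an alternative if URI navigation is preferred, but passing a constructed page keeps the data strongly typed.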

    Read the article

  • How can I print WPF TreeView items over multiple pages?

    - by RAJKISHOR
    Hello friends, I want to print the tree structure shown in a WPF TreeView control across multiple pages. I tried PrintVisual(), but it only prints the visible part. Then I tried a FlowDocument and wrote AddNodes(), but it does not show the same result as the TreeView does. Please help me with the code.

        public void AddNodes(int uid, ListItem tSubNode)
        {
            string query = "select fullname, id from members where refCode=" + uid + ";";
            String memValue;
            MySqlCommand cmd = new MySqlCommand(query, db.conn);
            MySqlDataAdapter _DA = new MySqlDataAdapter(cmd);
            DataTable _DT = new DataTable();
            _DA.Fill(_DT);
            ListOffset += 20;
            foreach (DataRow _dr in _DT.Rows)
            {
                ListItem tNode = new ListItem();
                tNode.Margin = new Thickness(ListOffset, 0, 0, 0);
                memValue = _dr["fullname"].ToString() + " (" + _dr["id"].ToString() + ")";
                tNode.Blocks.Add(new Paragraph(new Run(hyp + memValue)));
                myList.ListItems.Add(tNode);
                flowDoc.Blocks.Add(myList);
                _fdrMembers.Document = flowDoc;
                if (db.HasMembers(Convert.ToInt32(_dr["id"].ToString())))
                {
                    AddNodes(Convert.ToInt32(_dr["id"]), tNode);
                }
            }
            ListOffset = 20;
        }

        private void button_Click(object sender, RoutedEventArgs e)
        {
            ListOffset = 0;
            myList.ListItems.Clear();
            tSuper.Blocks.Clear();
            if (db.GetNameByUID(100001) != null)
            {
                tSuper.Blocks.Add(new Paragraph(new Run(db.GetNameByUID(100001))));
                myList.ListItems.Add(tSuper);
                AddNodes(100001, tSuper);
            }
            MessageBox.Show("Member by ID - " + does.ToString() + ", " + dosnt.ToString(),
                "Error!", MessageBoxButton.OK, MessageBoxImage.Information);
        }
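
    Not part of the original post, but once a FlowDocument like the one above is built, one way to get true multi-page output is to hand its DocumentPaginator to PrintDialog.PrintDocument, which splits the content into printer-sized pages. A rough sketch, assuming flowDoc is already populated:

        using System.Windows;
        using System.Windows.Controls;
        using System.Windows.Documents;

        static void PrintFlowDocument(FlowDocument flowDoc)
        {
            var printDialog = new PrintDialog();
            if (printDialog.ShowDialog() == true)
            {
                // Size pages to the printable area, then let the paginator split the content.
                DocumentPaginator paginator = ((IDocumentPaginatorSource)flowDoc).DocumentPaginator;
                paginator.PageSize = new Size(printDialog.PrintableAreaWidth, printDialog.PrintableAreaHeight);
                printDialog.PrintDocument(paginator, "Member tree");
            }
        }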

    Read the article

  • How should I handle pages that move to a new url with regards to search engines?

    - by Anders Juul
    Hi all, I have done some refactoring on an ASP.NET MVC application already deployed to a live web site. Part of the refactoring was moving functionality to a new controller, causing some URLs to change. Shortly afterwards the various search engine robots started hammering the old URLs. What is the right way to handle this in general? Ignore it? In time the search engines should find out that they get nothing but 400 responses from the old URLs. Block the old URLs with robots.txt? Continue to catch the old URLs and redirect to the new ones? Users navigating the site would never get the redirection, as the URLs are updated throughout the new version of the site. I see it as garbage code, unless it could be handled by some fancy routing? Other? As always, all comments welcome... Thanks, Anders, Denmark
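
    On the "catch the old URLs and redirect" option above, a rough sketch of keeping that logic in one place rather than scattering it through the app (the controller, action and route names are hypothetical, and the custom result is only needed because early MVC versions have no built-in permanent-redirect helper):

        using System.Web.Mvc;

        // Minimal 301 result for MVC versions without a RedirectPermanent helper.
        public class PermanentRedirectResult : ActionResult
        {
            private readonly string _url;
            public PermanentRedirectResult(string url) { _url = url; }

            public override void ExecuteResult(ControllerContext context)
            {
                context.HttpContext.Response.StatusCode = 301;
                context.HttpContext.Response.RedirectLocation = _url;
                context.HttpContext.Response.End();
            }
        }

        // Hypothetical controller kept only to forward old URLs to their new home.
        public class LegacyController : Controller
        {
            public ActionResult OldDetails(int id)
            {
                return new PermanentRedirectResult(Url.Action("Details", "Items", new { id }));
            }
        }

    With the old routes mapped to a controller like this, search engines receive 301 responses and update their index, while the rest of the code base references only the new URLs.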

    Read the article

  • How can I group and display Facebook comments made with the FB Comments plugin on different pages (the same page in different languages)?

    - by user1061544
    I use the Facebook comments plugin on my multilingual website. My website's URL contains the page language, so when someone comments on example.com/en/page.html the comment is not visible to a user viewing the same page in French at example.com/fr/page.html. I want to display all comments made on the page across its different languages (different URLs in this case). How can I do that? This is the comment code as described here:

        <div class="fb-comments" data-href="example.com/fr/page.html" data-num-posts="2" data-width="630"></div>

    Read the article

  • Google App Engine, Python: How to get parameters from a log-in page?

    - by brilliant
    Here is a quote from here: "So in short ... you need to look into login page, see what params it uses e.g login=xxx, password=yyy, post it to that page and you will have to manage the cookies too, that is where library like twill etc come into picture." How could I do this using Python and Google App Engine? Can anybody please give me a clue? I have already asked a question about authenticated requests, but here the matter seems different, as I am advised to look into the login page and get its parameters, and also to deal with cookies.

    Read the article

  • .htaccess file. Can I block access to a directory without blocking access to the files within it?

    - by steph
    I'm building a website, but I'm not entirely sure what to do with the .htaccess file. Say, for example, I have a folder called pages which holds all my pages; can I deny access to someone if they type in www.website.com/pages, so that they can't see the directory listing? I've tried putting the .htaccess file in the pages folder with the "deny from all" line, and although it denies access, it also denies access to the actual pages. Is there a way to do this without denying access to the pages on the website, just denying access to the directory? Sorry if this doesn't make much sense, I'm so confused. Thanks for any help.

    Read the article

  • Which metadata should I save when downloading web pages?

    - by Vojtech R.
    Hi, I'm going to download (for future language-processing purposes) some thousands of web pages. Now I'm wondering which metadata I should save. I have explored this, but I do not want to neglect something important. So far I have:

        <title>
        <link>
        <publish_date>
        <date_downloaded>
        <source>            // to this page
        <keyword>           // for Solr indexing
        <text>              // cleaned body of page

    Is there something important that I could be missing for the future?

    Read the article

  • How to set QNetworkReply properties to get correct NCBI pages?

    - by Claire Huang
    I am trying to fetch the following URL using the downloadURL function: http://www.ncbi.nlm.nih.gov/nuccore/27884304 but the data is not what we see through the browser. I know this is because I need to supply the correct information, such as the browser identification; how can I find out what information I need to set, and how do I set it (with the setHeader function?)? In VC++ we can use the CInternetSession and CHttpConnection objects to get the correct content without setting any other details; is there a similar way in Qt or another cross-platform C++ network library? (Yes, I need it to be cross-platform.)

        QNetworkReply::NetworkError downloadURL(const QUrl &url, QByteArray &data)
        {
            QNetworkAccessManager manager;
            QNetworkRequest request(url);
            request.setHeader(QNetworkRequest::ContentTypeHeader,
                "Mozilla/5.0 (Windows; U; Windows NT 6.0; en-US; rv:1.9.1.7) Gecko/20091221 Firefox/3.5.7 (.NET CLR 3.5.30729)");
            QNetworkReply *reply = manager.get(request);
            QEventLoop loop;
            QObject::connect(reply, SIGNAL(finished()), &loop, SLOT(quit()));
            loop.exec();
            QVariant statusCodeV = reply->attribute(QNetworkRequest::RedirectionTargetAttribute);
            QUrl redirectTo = statusCodeV.toUrl();
            if (!redirectTo.isEmpty()) {
                if (redirectTo.host().isEmpty()) {
                    const QByteArray newaddr = ("http://" + url.host() + redirectTo.encodedPath()).toAscii();
                    redirectTo.setEncodedUrl(newaddr);
                    redirectTo.setHost(url.host());
                }
                return (downloadURL(redirectTo, data));
            }
            if (reply->error() != QNetworkReply::NoError) {
                return reply->error();
            }
            data = reply->readAll();
            delete reply;
            return QNetworkReply::NoError;
        }

    Read the article

  • Joomla SEF shows different links on the homepage than on inner pages

    - by user200297
    I'm enabling Joomla SEF and get the following result when I link to an article from a homepage (frontpage) article: anywebsite.com/component/content/article/26/141-Z1-Z2-Z3-Z4 But when linking from other articles I get the result I want, which is: anywebsite.com/Categor/141-Z1-Z2-Z3-Z4 In both cases the underlying link is the same: index.php?option=com_content&view=article&id=141:Z1-Z2-Z3-Z4&catid=26 Any idea? Edit: Is manually linking with this SEF link a good idea, instead of waiting for Joomla to convert it, at least as a last resort?

    Read the article

  • PHP $_SERVER['HTTP_HOST'] vs. $_SERVER['SERVER_NAME'], am I understanding the man pages correctly?

    - by Jeff
    I did a lot of searching and also read the PHP $_SERVER man page. Do I have this right regarding which to use for my PHP scripts for simple link definitions used throughout my site? $_SERVER['SERVER_NAME'] is based on your web servers' config file (Apache2 in my case), and varies depending on a few directives: (1) VirtualHost, (2) ServerName, (3) UseCanonicalName, etc. $_SERVER['HTTP_HOST'] is based on the request from the client. Therefore, it would seem to me that the proper one to use in order to make my scripts as compatible as possible would be $_SERVER['HTTP_HOST']. Is this assumption correct? Followup comments: I guess I got a little paranoid after reading this article and noting that someone said "they wouldn't trust any of the $_SERVER vars": http://markjaquith.wordpress.com/2009/09/21/php-server-vars-not-safe-in-forms-or-links/ and also: http://www.php.net/manual/en/reserved.variables.server.php (comment: Vladimir Kornea 14-Mar-2009 01:06) Apparently the discussion is mainly about $_SERVER['PHP_SELF'] and why you shouldn't use it in the form action attribute without proper escaping to prevent XSS attacks. My conclusion about my original question above is that it is "safe" to use $_SERVER['HTTP_HOST'] for all links on a site without having to worry about XSS attacks, even when used in forms. Please correct me if I'm wrong.

    Read the article

  • How can I make WWW::Mechanize not fetch pages twice?

    - by planetp
    I have a web scraping application written in OO Perl. There is a single WWW::Mechanize object used in the app. How can I make it not fetch the same URL twice, i.e. make a second get() with the same URL a no-op?

        my $mech = WWW::Mechanize->new();
        my $url  = 'http://google.com';
        $mech->get( $url );   # first time, fetch
        $mech->get( $url );   # same url, do nothing

    Read the article

  • % style macros not supported in some C++/CLI project property pages under VS2010?

    - by Dave Foster
    We're currently evaluating VS2010 and have upgraded our VS2008 C++/CLI project to the new .vcxproj format. I've noticed that a certain property we had set in the project settings did not get translated properly. Under Configuration Properties - Managed Resources - Resource Logical Name, we used to have (in VS2008) the setting: $(IntDir)\$(RootNamespace).$(InputName).resources which indicated that all .resx files were to compile into OurLib.SomeForm.resources inside of the assembly. (the Debug portion is dropped when assembled) According to MSDN, the $(InputName) macro no longer exists and should be replaced with %(Filename). However, when translating the above line to swap those macros, it does not seem to ever expand. The second .resx file it tries to compile, I get a "LINK : fatal error LNK1316: duplicate managed resource name 'Debug\OurLib.%(Filename).resources". This indicates to me that the % style macros are not being expanded here, at least in this specific property. If we don't set anything in that property, the default behavior seems to be to add the subdirectory as a prefix, such as: OurLib.Forms.SomeForm.resources where Forms is the subdir of our project that the .resx file lives. This only occurs when the .resx file is in an immediate subdirectory of the project being built. If a .resx file exists somewhere else on disk (aka ..\OtherLib\Forms\SomeForm2.resx) this prefix is NOT added. This is causing an issue with loading form resources, as it does not account for this possible prefix, even though we are using the standard Forms Designer method of getting at resources: System::ComponentModel::ComponentResourceManager^ resources = (gcnew System::ComponentModel::ComponentResourceManager(SomeForm::typeid)); and do not specify the .resources file by name. The issue I've just described may not be the same as the original question, but if I were to fix the Resource Logical Name issue I think this would all go away. Does anyone have any information about these % macros and where they are allowed to be used?

    Read the article

  • Iterating anchors in jQuery doesn't seem to work

    - by bala3569
    Hi, I am generating page numbers based on currentPage and lastPage using jQuery. Here is my function; as I am a newbie, I don't know how this can be done:

        function generatePages(currentPage, LastPage) {
            if (LastPage <= 5) {
                var pages = '';
                for (var i = 1; i <= 5; i++) {
                    pages += "<a class='page-numbers' href='#'>" + i + "</a>"
                }
                $("#PagerDiv").append(pages);
            }
            if (LastPage > 5) {
                var pages = '';
                for (var i = 1; i <= 5; i++) {
                    pages += "<a class='page-numbers' href='#'>" + i + "</a>"
                }
                $("#PagerDiv").append(pages);
            }
        }

    I want the result to look like the examples in the attached images for the first page, a page in the middle, and the last page. I have the lastPage and currentPage values; please help me get this working.

    Read the article

  • Java map / nio / NFS issue causing a VM fault: "a fault occurred in a recent unsafe memory access operation in compiled Java code"

    - by Matthew Bloch
    I have written a parser class for a particular binary format (nfdump if anyone is interested) which uses java.nio's MappedByteBuffer to read through files of a few GB each. The binary format is just a series of headers and mostly fixed-size binary records, which are fed out to the caller by calling nextRecord(), which pushes on the state machine, returning null when it's done. It performs well. It works on a development machine. On my production host, it can run for a few minutes or hours, but always seems to throw "java.lang.InternalError: a fault occurred in a recent unsafe memory access operation in compiled Java code", fingering one of the Map.getInt, getShort methods, i.e. a read operation in the map. The uncontroversial (?) code that sets up the map is this:

        /** Set up the map from the given filename and position */
        protected void open() throws IOException {
            // Set up buffer, is this all the flexibility we'll need?
            channel = new FileInputStream(file).getChannel();
            MappedByteBuffer map1 = channel.map(FileChannel.MapMode.READ_ONLY, 0, channel.size());
            map1.load(); // we want the whole thing, plus seems to reduce frequency of crashes?
            map = map1;
            // assumes the host writing the files is little-endian (x86), ought to be configurable
            map.order(java.nio.ByteOrder.LITTLE_ENDIAN);
            map.position(position);
        }

    and then I use the various map.get* methods to read shorts, ints, longs and other sequences of bytes, before hitting the end of the file and closing the map. I've never seen the exception thrown on my development host. But the significant point of difference between my production host and development is that on the former, I am reading sequences of these files over NFS (probably 6-8TB eventually, still growing). On my dev machine, I have a smaller selection of these files locally (60GB), but when it blows up on the production host it's usually well before it gets to 60GB of data. Both machines are running Java 1.6.0_20-b02, though the production host is running Debian/lenny and the dev host Ubuntu/karmic. I'm not convinced that will make any difference. Both machines have 16GB RAM and are running with the same Java heap settings. I take the view that if there is a bug in my code, there is enough of a bug in the JVM not to throw me a proper exception! But I think it is just a particular JVM implementation bug due to interactions between NFS and mmap, possibly a recurrence of 6244515, which is officially fixed. I already tried adding in a "load" call to force the MappedByteBuffer to load its contents into RAM - this seemed to delay the error in the one test run I've done, but not prevent it. Or it could be coincidence that that was the longest it had gone before crashing! If you've read this far and have done this kind of thing with java.nio before, what would your instinct be? Right now mine is to rewrite it without nio :)

    Read the article

  • Java - Error Message Help

    - by Brian
    In the code, mem is of class Memory, and getMDR and getMAR return ints. When I try to compile the code I get the following errors; how can I fix this?

        Computer.java:25: write(int,int) in Memory cannot be applied to (int)
            Input.getInt(mem.write(cpu.getMDR()));
                                  ^
        Computer.java:28: write(int,int) in Memory cannot be applied to (int)
            mem.write(cpu.getMAR());

    Here is the code for Computer:

        class Computer {
            private Cpu cpu;
            private Input in;
            private OutPut out;
            private Memory mem;

            public Computer() {
                Memory mem = new Memory(100);
                Input in = new Input();
                OutPut out = new OutPut();
                Cpu cpu = new Cpu();
                System.out.println(in.getInt());
            }

            public void run() {
                cpu.reset();
                cpu.setMDR(mem.read(cpu.getMAR()));
                cpu.fetch2();
                while (!cpu.stop()) {
                    cpu.decode();
                    if (cpu.OutFlag())
                        OutPut.display(mem.read(cpu.getMAR()));
                    if (cpu.InFlag())
                        Input.getInt(mem.write(cpu.getMDR()));
                    if (cpu.StoreFlag()) {
                        mem.write(cpu.getMAR());
                        cpu.getMDR();
                    } else {
                        cpu.setMDR(mem.read(cpu.getMAR()));
                        cpu.execute();
                        cpu.fetch();
                        cpu.setMDR(mem.read(cpu.getMAR()));
                        cpu.fetch2();
                    }
                }
            }
        }

    Here is the code for Memory:

        class Memory {
            private MemEl[] memArray;
            private int size;

            public Memory(int s) {
                size = s;
                memArray = new MemEl[s];
                for (int i = 0; i < s; i++)
                    memArray[i] = new MemEl();
            }

            public void write(int loc, int val) {
                if (loc >= 0 && loc < size)
                    memArray[loc].write(val);
                else
                    System.out.println("Index Not in Domain");
            }

            public int read(int loc) {
                return memArray[loc].read();
            }

            public void dump() {
                for (int i = 0; i < size; i++)
                    if (i % 1 == 0)
                        System.out.println(memArray[i].read());
                    else
                        System.out.print(memArray[i].read());
            }
        }

    Here is the code for getMAR and getMDR:

        public int getMAR() {
            return ir.getOpcode();
        }

        public int getMDR() {
            return mdr.read();
        }

    Read the article

  • Development deployment: how to achieve edit-and-reload with JSP pages?

    - by doublep
    Our project uses WebLogic as the web server and mostly JSP for the user interface. With the standard setup it is possible to copy edited JSP files into the exploded deployment directory, and WebLogic will automatically pick them up, recompile them and serve the new content over HTTP. However, is it possible to avoid copying at all, so that I just save a file in my editor and it is immediately (well, after a couple of seconds for recompilation) visible? The project uses Apache Ant as its build tool. I would imagine what I want would be possible with symlinks (since this is for development deployment only, I don't care about cross-platform portability), but then I don't see how it is possible to symlink lots of files at once with Ant. So, how do I achieve save-JSP-hit-F5-in-browser functionality, either with some setting in WebLogic, or by symlinking JSPs using Apache Ant (instead of copying them as is done now), or something else completely?

    Read the article

  • Dynamically loading CSS and JavaScript using Prototype

    - by Salman A
    I have a classic ASP application that I've been constantly trying to modularize. Currently, almost all pages are divided in to two pages: an outer page that contains the layout, header, sidebar, footer an inner page that contains ASP code The outer pages use dreamweaver templates so updating layout and replicating changes is easy. The inner pages are managed by me. Now here is the problem: I had to add a lightbox to one page, I chose Lightbox 2 which requires Prototype. I ended up adding Prototype on every page, assuming that sooner or later I'll upgrade all pages, forms, ajax requests and other javascript to use Prototype. I've now added two other plugins -- Modalbox and Protofade; each with a pair of .JS and .CSS files. Since I'll be using these three plugins on specific set of pages I am wondering if I can load the required CSS and JS files dynamically. I do not want to access the document head and add include files there, I'll have to do this from inside a DIV where all ASP code is supposed to go.

    Read the article

  • How can I track an ASP.NET page's size without tracing?

    - by Middletone
    I want to be able to track the amount of data that is transferred from my web site to each user that accesses the site. I can do this for file downloads and such, but what about the pure HTML content itself? How can I track the output size of a page (or the data that's transferred via an AJAX call) to the client and log it against a particular user's session? Also, how would this differ when GZip is used in IIS 6.0?
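
    Not from the original question, but one way to measure the rendered HTML (and AJAX responses) is an HttpModule that wraps Response.Filter in a byte-counting stream; a rough sketch, with the logging sink left as an assumption. The filter sees the bytes ASP.NET writes, so if GZip is applied by IIS 6.0 afterwards this would typically be the uncompressed size.

        using System;
        using System.IO;
        using System.Web;

        // Counts every byte written to the response and exposes the total at end of request.
        public class ResponseSizeModule : IHttpModule
        {
            public void Init(HttpApplication app)
            {
                app.BeginRequest += (s, e) => app.Response.Filter = new CountingStream(app.Response.Filter);
                app.EndRequest += (s, e) =>
                {
                    var counter = app.Response.Filter as CountingStream;
                    if (counter != null)
                    {
                        long bytes = counter.BytesWritten;
                        // Hypothetical sink: store 'bytes' against the current session/user here.
                    }
                };
            }

            public void Dispose() { }

            private class CountingStream : Stream
            {
                private readonly Stream _inner;
                public long BytesWritten { get; private set; }
                public CountingStream(Stream inner) { _inner = inner; }

                public override void Write(byte[] buffer, int offset, int count)
                {
                    BytesWritten += count;            // count, then pass through unchanged
                    _inner.Write(buffer, offset, count);
                }

                public override void Flush() { _inner.Flush(); }
                public override bool CanRead { get { return false; } }
                public override bool CanSeek { get { return false; } }
                public override bool CanWrite { get { return true; } }
                public override long Length { get { throw new NotSupportedException(); } }
                public override long Position { get { throw new NotSupportedException(); } set { throw new NotSupportedException(); } }
                public override int Read(byte[] buffer, int offset, int count) { throw new NotSupportedException(); }
                public override long Seek(long offset, SeekOrigin origin) { throw new NotSupportedException(); }
                public override void SetLength(long value) { throw new NotSupportedException(); }
            }
        }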

    Read the article

  • How do I load .ascx pages faster in Visual Studio 2008?

    - by diadem
    One of my (and my team's) biggest peeves with VS2008 is the slow speed at which .ascx files load. It can take up to a couple of minutes to do something as simple as a text or style change, simply because of the time it takes to load an .ascx page into the Visual Studio text editor. Half the time I'm tempted to check out the file, edit it in Notepad, then check it back in. Is there any trick to speeding this up?

    Read the article

  • RDLC item width is dynamic and causing extra pages to be generated (image included)?

    - by Paul Mendoza
    I'm trying to format an RDLC report file in Visual Studio 2008 and I am having a formatting issue. I have a list at the bottom that contains a matrix that expands horizontally to the right. That pink box is just to visualize the problem I'm having. When the report is rendered the matrix expands and instead of filling the pink box with the matrix is pushes the space in the pink box to the right resulting in an extra page when printing the reports. One solution would be to shrink the pink box to be the size of the matrix which I've done. But then when the matrix grows the fields at the top of the report get pushed to the right by the same amount as the growth of the matrix. Can someone please let me know what they think the solution would be? Thank you!

    Read the article

  • Will creating a background thread in a WCF service during a call take up a thread in the ASP.NET thread pool?

    - by Nate Pinchot
    The following code is part of a WCF service. Will eventWatcher take up a thread in the ASP.NET thread pool, even if it is set IsBackground = true?

        /// <summary>
        /// Provides methods to work with the PhoneSystem web services SDK.
        /// This is a singleton since we need to keep track of what lines (extensions) are open.
        /// </summary>
        public sealed class PhoneSystemWebServiceFactory : IDisposable
        {
            // singleton instance reference
            private static readonly PhoneSystemWebServiceFactory instance = new PhoneSystemWebServiceFactory();
            private static readonly object l = new object();
            private static volatile Hashtable monitoredExtensions = new Hashtable();
            private static readonly PhoneSystemWebServiceClient webServiceClient = CreateWebServiceClient();
            private static volatile bool isClientRegistered;
            private static volatile string clientHandle;
            private static readonly Thread eventWatcherThread = new Thread(EventPoller) { IsBackground = true };

            #region Constructor
            // these constructors are hacks to make the C# compiler not mark beforefieldinit
            // more info: http://www.yoda.arachsys.com/csharp/singleton.html
            static PhoneSystemWebServiceFactory() { }
            PhoneSystemWebServiceFactory() { }
            #endregion

            #region Properties
            /// <summary>
            /// Gets a thread safe instance of PhoneSystemWebServiceFactory
            /// </summary>
            public static PhoneSystemWebServiceFactory Instance
            {
                get { return instance; }
            }
            #endregion

            #region Private methods
            /// <summary>
            /// Create and configure a PhoneSystemWebServiceClient with basic http binding and endpoint from app settings.
            /// </summary>
            /// <returns>PhoneSystemWebServiceClient</returns>
            private static PhoneSystemWebServiceClient CreateWebServiceClient()
            {
                string url = ConfigurationManager.AppSettings["PhoneSystemWebService_Url"];
                if (string.IsNullOrEmpty(url))
                {
                    throw new ConfigurationErrorsException(
                        "The AppSetting \"PhoneSystemWebService_Url\" could not be found. Check the application configuration and ensure that the element exists. Example: <appSettings><add key=\"PhoneSystemWebService_Url\" value=\"http://xyz\" /></appSettings>");
                }
                return new PhoneSystemWebServiceClient(new BasicHttpBinding(), new EndpointAddress(url));
            }
            #endregion

            #region Event poller
            public static void EventPoller()
            {
                while (true)
                {
                    if (Thread.CurrentThread.ThreadState == ThreadState.Aborted ||
                        Thread.CurrentThread.ThreadState == ThreadState.AbortRequested ||
                        Thread.CurrentThread.ThreadState == ThreadState.Stopped ||
                        Thread.CurrentThread.ThreadState == ThreadState.StopRequested)
                        break;
                    // get events
                    //webServiceClient.GetEvents(clientHandle, 30, 100);
                }
                Thread.Sleep(5000);
            }
            #endregion

            #region Client registration methods
            private static void RegisterClientIfNeeded()
            {
                if (isClientRegistered) { return; }
                lock (l)
                {
                    // double lock check
                    if (isClientRegistered) { return; }
                    //clientHandle = webServiceClient.RegisterClient("PhoneSystemWebServiceFactoryInternal", null);
                    isClientRegistered = true;
                }
            }

            private static void UnregisterClient()
            {
                if (!isClientRegistered) { return; }
                lock (l)
                {
                    // double lock check
                    if (!isClientRegistered) { return; }
                    //webServiceClient.UnegisterClient(clientHandle);
                }
            }
            #endregion

            #region Phone extension methods
            public bool SubscribeToEventsForExtension(string extension)
            {
                if (monitoredExtensions.Contains(extension)) { return false; }
                lock (monitoredExtensions.SyncRoot)
                {
                    // double lock check
                    if (monitoredExtensions.Contains(extension)) { return false; }
                    RegisterClientIfNeeded();

                    // open line so we receive events for extension
                    LineInfo lineInfo;
                    try
                    {
                        //lineInfo = webServiceClient.OpenLine(clientHandle, extension);
                    }
                    catch (FaultException<PhoneSystemWebSDKErrorDetail>)
                    {
                        // TODO: log error
                        return false;
                    }

                    // add extension to list of monitored extensions
                    //monitoredExtensions.Add(extension, lineInfo.lineID);
                    monitoredExtensions.Add(extension, 1);

                    // start event poller thread if not already started
                    if (eventWatcherThread.ThreadState == ThreadState.Stopped ||
                        eventWatcherThread.ThreadState == ThreadState.Unstarted)
                    {
                        eventWatcherThread.Start();
                    }
                    return true;
                }
            }

            public bool UnsubscribeFromEventsForExtension(string extension)
            {
                if (!monitoredExtensions.Contains(extension)) { return false; }
                lock (monitoredExtensions.SyncRoot)
                {
                    if (!monitoredExtensions.Contains(extension)) { return false; }

                    // close line
                    try
                    {
                        //webServiceClient.CloseLine(clientHandle, (int) monitoredExtensions[extension]);
                    }
                    catch (FaultException<PhoneSystemWebSDKErrorDetail>)
                    {
                        // TODO: log error
                        return false;
                    }

                    // remove extension from list of monitored extensions
                    monitoredExtensions.Remove(extension);

                    // if we are not monitoring anything else, stop the poller and unregister the client
                    if (monitoredExtensions.Count == 0)
                    {
                        eventWatcherThread.Abort();
                        UnregisterClient();
                    }
                    return true;
                }
            }

            public bool IsExtensionMonitored(string extension)
            {
                lock (monitoredExtensions.SyncRoot)
                {
                    return monitoredExtensions.Contains(extension);
                }
            }
            #endregion

            #region Dispose
            public void Dispose()
            {
                lock (l)
                {
                    // close any open lines
                    var extensions = monitoredExtensions.Keys.Cast<string>().ToList();
                    while (extensions.Count > 0)
                    {
                        UnsubscribeFromEventsForExtension(extensions[0]);
                        extensions.RemoveAt(0);
                    }
                    if (!isClientRegistered) { return; }

                    // unregister web service client
                    UnregisterClient();
                }
            }
            #endregion
        }
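
    Not part of the original post, but the distinction the question turns on can be checked directly: a manually constructed Thread never comes from the ThreadPool, whether or not IsBackground is set, while queued work items do. A small sketch:

        using System;
        using System.Threading;

        class ThreadOriginDemo
        {
            static void Main()
            {
                // A manually created thread: not a pool thread, background or not.
                var manual = new Thread(() =>
                    Console.WriteLine("manual thread, pool = " + Thread.CurrentThread.IsThreadPoolThread));
                manual.IsBackground = true;
                manual.Start();
                manual.Join();

                // A work item queued to the ThreadPool runs on a pool thread.
                ThreadPool.QueueUserWorkItem(_ =>
                    Console.WriteLine("queued work item, pool = " + Thread.CurrentThread.IsThreadPoolThread));

                Thread.Sleep(500);   // give the pool callback time to run
            }
        }

    IsBackground only controls whether the thread keeps the process alive; it does not move the thread into or out of the pool.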

    Read the article

  • JavaScript simple object creation test: Opera leaks?

    - by joe
    Hi, I am trying to figure out certain memory leak conditions in javascript on a few browsers. Currently I'm only testing FF 3.6, Opera 10.10, and Safari 4.0.3. I've started with a fairly simple test, and can confirm no memory leaks in Firefox and Safari. But Opera just takes memory and never gives it back. What gives? Here's the test: <html> <head> <script type="text/javascript"> window.onload = init; //window.onunload = cleanup; var a=[]; function init() { var d = document.createElement('div'); d.innerHTML = "page loading..."; document.body.appendChild(d); for (var i=0; i<400000; i++) { a[i] = new Obj("xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx"); } d.innerHTML = "PAGE LOADED"; } function cleanup() { for (var i=0; i<400000; i++) { a[i] = null; } } function Obj(msg) { this.msg=msg; } </script> </head> <body> </body> </html> I shouldn't need the cleanup() call on window.unload, but tried that also. No luck. As you can see this is simple JS, no circular DOM links, no closures. I monitor the memory usage using 'top' on Mac 10.4.11. Memory usage spikes up on page load, as expected. In FF and Safari reloading the page does not use any further memory, and all memory is returned when the window (tab) is closed. In Opera, memory spikes on load, and seems to also spike further on each reload (but not always...). But regardless of reload, memory never goes back down below the initial load spike. I had hoped this was a no-brainer test that all browsers would pass, so I could move on to more "interesting" conditions. Am I doing something wrong here? Or is this a known Opera issue? Thanks! -joe

    Read the article
