Search Results

Search found 4677 results on 188 pages for 'alternative'.

Page 161/188 | < Previous Page | 157 158 159 160 161 162 163 164 165 166 167 168  | Next Page >

  • JSR-299 (CDI) configuration at runtime

    - by nsn
    I need to configure different @Alternatives, @Decorators and @Injectors for different runtime environments (think testing, staging and production servers). Right now I use Maven to create three WARs, and the only difference between those WARs is in the beans.xml files. Is there a better way to do this?

    I do have @Alternative @Stereotypes for the different environments, but even then I need to alter beans.xml, and they don't work for @Decorators (or do they?). Is it somehow possible to instruct CDI to ignore the values in beans.xml and use a custom configuration source? Then I could, for example, read a system property or another environment variable. The application exclusively runs in containers that use Weld, so a Weld-specific solution would be OK.

    I already tried to Google this but can't seem to find good search terms, and I asked on the Weld users forum, but to no avail. Someone over there suggested writing my own custom extension, but I can't find any API to actually change the container configuration at runtime.

    I think it would be possible to have some sort of @ApplicationScoped configuration bean and inject it into all @Decorators, which could then decide themselves whether they should be active or not, and then, to configure @Alternatives, write @Produces methods for every interface with multiple implementations and inject the config bean there too. But this seems like a lot of unnecessary work to essentially duplicate functionality already present in CDI.

    Read the article

  • Browser Issue: Charts are not rendered on IE8

    - by Rachel
    We have an in-house library which uses canvas for displaying charts in my application, with Dojo as the scripting language. Everything is fine, but my charts are not appearing in IE8. I googled this and found that there is some VML issue in IE8. I found this:

        var printChart = function(time, freq) {
            if (!document.namespaces['g_vml_']) {
                document.namespaces.add('g_vml_', 'urn:schemas-microsoft-com:vml', '#default#VML');
            }
            if (!document.namespaces['g_o_']) {
                document.namespaces.add('g_o_', 'urn:schemas-microsoft-com:office:office', '#default#VML');
            }
            if (freq === undefined) {
                this.freq = "1mi";
            }
            if (time === undefined) {
                this.time = "1dy";
            }
            self.reload();
        }

    Now I am trying to add this to my Dojo code and that is creating a problem: when I access document.namespaces I get the Firebug error 'document.namespaces is undefined'.

    Q: How can we fix this, and are there any better alternative approaches? The basic problem is browser-related: charts are rendered properly in other browsers but not in IE8. Any suggestions?

    Update: What are good ways to deal with such cross-browser issues?

    Read the article

  • Can't post with Perl's Net::Blogger

    - by Ovid
    I'm trying to automatically post to Blogger using Perl's Net::Blogger, but it keeps returning false and not posting. The main portion of my code looks like this:

        use Net::Blogger;

        my $blogger = Net::Blogger->new({
            debug    => 1,
            appkey   => '0123456789ABCDEF', # doesn't matter?
            blogid   => $blogid,
            username => $username,
            password => $password,
        });
        say 'got to here';

        my $result = $blogger->newPost({
            postbody => \'<p>This is text</p><hr/><p><strong>Whee!</strong></p>',
            publish  => 1,
        });
        say 'done posting';

        use Data::Dumper;
        print Dumper($result);

    Sure enough, $result is 0 and, checking the blog, nothing has been posted. The error I'm getting when I enable debugging is:

        Element '' can't be allowed in valid XML message. Died. at /Library/Perl/5.10.1/SOAP/Lite.pm line 1410.

    What am I doing wrong? If you can suggest an alternative to Net::Blogger, that would be fine.

    Read the article

  • Efficient update of SQLite table with many records

    - by blackrim
    I am trying to use SQLite (sqlite3) for a project to store hundreds of thousands of records (I'd like SQLite so users of the program don't have to run a [my]SQL server). I sometimes have to update hundreds of thousands of records to enter left/right values (they are hierarchical), but have found the standard

        update table set left_value = 4, right_value = 5 where id = 12340;

    to be very slow. I have tried surrounding every thousand or so updates with

        begin;
        ....
        update...
        update table set left_value = 4, right_value = 5 where id = 12340;
        update...
        ....
        commit;

    but again, very slow. Odd, because when I populate it with a few hundred thousand records (with inserts), it finishes in seconds. I am currently trying to test the speed in Python (the slowness shows up both at the command line and from Python) before I move it to the C++ implementation, but right now this is way too slow and I need to find a new solution unless I am doing something wrong. Thoughts? (I would take an open-source alternative to SQLite that is portable as well.)

    Read the article

  • Looking for an email template engine for end-users ...

    - by RizwanK
    We have a number of customers that we have to send monthly invoices to. Right now, I'm managing a codebase that does SQL queries against our customer database and billing database, places that data into emails, and sends them. I grow weary of maintaining this every time we want to include a new promotion or change our customer service phone numbers. So, I'm looking for a replacement to move more of this into the hands of those requesting the changes. In my ideal world, I need:

    - A WYSIWYG (man, does anyone even say that anymore?) email editor that generates templates based upon the output from a database query.
    - The ability to drag and drop various fields from the database query into the email template.
    - Display of sample email results with the database query.
    - Web application, preferably not requiring IIS.
    - Involves as little code as possible for the end user, but allows basic functionality (i.e. arrays/for loops).
    - Either comes with its own email delivery engine, or writes output in a way that I can easily write a Python script to deliver the email.
    - Support for generic database connectors (I need MSSQL and MySQL).
    - F/OSS

    So ... can anyone suggest a project like this, or some tools that'd be useful for rolling my own? (My current alternative idea is using something like ERB or Tenjin, having them write the code, but not having live preview for the editor would suck...)

    Read the article

  • How to add additional condition to the JOIN generated by include?

    - by KandadaBoggu
    I want to add additional criteria to the LEFT OUTER JOIN generated by the :include option in an ActiveRecord finder.

        class Post
          has_many :comments
        end

        class Comment
          belongs_to :post
          has_many :comment_votes
        end

        class CommentVote
          belongs_to :comment
        end

    Now let's say I want to find the last 10 posts with their associated comments and the up comment votes.

        Post.find.all(:limit => 10, :order => "created_at DESC", :include => [{:comments => :comment_votes])

    I can't add the condition to check for up votes, as it will ignore the posts without up votes. So the condition has to go in the ON clause of the JOIN generated for comment_votes. I am wishing for a syntax such as:

        Post.find.all(:limit => 10, :order => "created_at DESC", :include => [{:comments => [:comment_votes, :on => "comment_votes.vote > 0"])

    Have you faced such problems before? Did you manage to solve the problem using the current finder? I hope to hear some interesting ideas from the community.

    PS: I can write a join SQL to get the expected result and stitch the results together. I want to make sure there is no other alternative before going down that path.

    Read the article

  • efficient thread-safe singleton in C++

    - by user168715
    The usual pattern for a singleton class is something like

        static Foo &getInst()
        {
            static Foo *inst = NULL;
            if(inst == NULL)
                inst = new Foo(...);
            return *inst;
        }

    However, it's my understanding that this solution is not thread-safe, since 1) Foo's constructor might be called more than once (which may or may not matter) and 2) inst may not be fully constructed before it is returned to a different thread.

    One solution is to wrap a mutex around the whole method, but then I'm paying for synchronization overhead long after I actually need it. An alternative is something like

        static Foo &getInst()
        {
            static Foo *inst = NULL;
            if(inst == NULL)
            {
                pthread_mutex_lock(&mutex);
                if(inst == NULL)
                    inst = new Foo(...);
                pthread_mutex_unlock(&mutex);
            }
            return *inst;
        }

    Is this the right way to do it, or are there any pitfalls I should be aware of? For instance, are there any static initialization order problems that might occur, i.e. is inst always guaranteed to be NULL the first time getInst is called?

    Read the article

  • How to retrieve the parameters of document.write to detect the creation of dynamic tags

    - by user1335906
    In my project I am supposed to identify dynamically created tags, which can be done in scripts through

        document.write("<script src='jquery.js'></script>")

    For this I used regular expressions, and my code is as follows:

        function find_tag_docwrite(text) {
            var attrib = new Object;
            var pat_tag = /<((\S+)\s(.*))>/g;
            while (t = pat_tag.exec(text)) {
                var tag = RegExp.$1;
                for (i = 0; i < tags.length; i++) {
                    var pat = /(\S+)=((['"]*)(\S+)(['"]*)\3)/g;
                    while (p = pat.exec(f)) {
                        attr = RegExp.$1; val = RegExp.$4;
                        attrib[attr] = val;
                    }
                }
            }
        }

    In the above function, text is the parameter of document.write. Through this code I am getting the tag names and all the attributes of the tags. But for the example below the code is not working:

        var gaJsHost = (("https:" == document.location.protocol) ? "https://ssl." : "http://www.");
        document.write(unescape("%3Cscript src='" + gaJsHost + "google-analytics.com/ga.js' type='text/javascript'%3E%3C/script%3E"));

    In such cases regular expressions do not work, so after searching for some time I found hooks on DOM methods. Using this, I thought of creating a hook for the document.write method, but I am not sure how it is done. I included the following code in my program but it is not working:

        function someFunction(text) {
            console.log(text);
        }
        document.write = someFunction;

    where text is the parameter of document.write. Another problem is that after monitoring all the document.write calls using hooks, I again have to use regexes to find tag creation. Is there any alternative?

    Read the article

  • F# ref-mutable vars vs object fields

    - by rwallace
    I'm writing a parser in F#, and it needs to be as fast as possible (I'm hoping to parse a 100 MB file in less than a minute). As normal, it uses mutable variables to store the next available character and the next available token (i.e. both the lexer and the parser proper use one unit of lookahead). My current partial implementation uses local variables for these. Since closure variables can't be mutable (anyone know the reason for this?) I've declared them as ref:

        let rec read file includepath =
            let c = ref ' '
            let k = ref NONE
            let sb = new StringBuilder()
            use stream = File.OpenText file
            let readc() =
                c := stream.Read() |> char
            // etc

    I assume this has some overhead (not much, I know, but I'm trying for maximum speed here), and it's a little inelegant. The most obvious alternative would be to create a parser class object and have the mutable variables be fields in it. Does anyone know which is likely to be faster? Is there any consensus on which is considered better/more idiomatic style? Is there another option I'm missing?

    Read the article

  • Separating and counting CSV entries from database (Access/ASP Classic)

    - by Katherine Perotta
    Hey, I could really use some help with this one. I have an FAQ with multiple "tags" and I would like to separate and count them. They are currently in the database as follows:

        ID    TITLE            CONTENT            TAGS
        1     sampletitle 1    samplecontent      tag1,tag2,tag3
        2     sampletitle 2    moresamplestuff    tag3,tag4,tag5

    How could I go about counting the number of times each tag is used? In the end, would it be easier to just create a separate table called TAGS, with a single tag corresponding to a single ID in FAQ? The only reason I don't prefer doing something like that is that I have so much data already it would take quite a while. However, if there's no alternative, or if it's easier than doing string parsing like that, I'm willing to do it.

    The goal is to display each unique tag and the number of times it is used. Would it be better to do the heavy lifting in the database or in ASP? I have gotten as far as getting a list of all tags and displaying them in an array (with each tag separated). So at this point what I need to do is count each value and then remove the duplicates (while preserving the count number somewhere). This is in ASP Classic using an Access database. Thanks!

    Read the article

  • Compact XSLT code to drop N tags if all are null

    - by infant programmer
    This is my input XML:

        <root>
            <node1/>
            <node2/>
            <node3/>
            <node4/>
            <othertags/>
        </root>

    The output must be:

        <root>
            <othertags/>
        </root>

    If any of the 4 nodes isn't null, then none of the tags must be dropped. Example:

        <root>
            <node1/>
            <node2/>
            <node3/>
            <node4>sample_text</node4>
            <othertags/>
        </root>

    Then the output must be the same as the input XML:

        <root>
            <node1/>
            <node2/>
            <node3/>
            <node4>sample_text</node4>
            <othertags/>
        </root>

    This is the XSL code I have designed:

        <xsl:template match="@*|node()">
            <xsl:copy>
                <xsl:apply-templates select="@*|node()"/>
            </xsl:copy>
        </xsl:template>

        <xsl:template match="/root/node1[.='' and ../node2/.='' and ../node3/.='' and ../node4/.='']
            |/root/node2[.='' and ../node1/.='' and ../node3/.='' and ../node4/.='']
            |/root/node3[.='' and ../node1/.='' and ../node2/.='' and ../node4/.='']
            |/root/node4[.='' and ../node1/.='' and ../node2/.='' and ../node3/.='']"/>

    As you can see, the code requires more effort and becomes more bulky as the number of nodes increases. Is there any alternative way to overcome this bottleneck?

    Read the article

  • ASP.net attachment then refresh page

    - by Russell
    I am working on a C# project using ASP.NET. I have a list of reports with a hyperlink for each, which calls the web server, retrieves a PDF and then returns the PDF for the user to save or open.

    ASPX page:

        <table>
            <tr>
                <td>
                    <a href="#" onclick="SubmitFormToOpenReport();">Open Report 1</a>
                </td>
            </tr>
            ...
        </table>

    ASP.NET:

        context.Response.Clear();
        context.Response.AddHeader("content-disposition", "attachment;filename=report.pdf");
        context.Response.Charset = "";
        context.Response.ContentType = "application/pdf";
        context.Response.BinaryWrite(myReport);
        context.Response.Flush();

    This works as expected; however, I would like it to also refresh the page with an updated list. I am having trouble because the single request/response is returning the report. Is there a way to refresh the page as well? While there is a correct response, feel free to include answers which detail alternative solutions/ideas for doing this.
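    One common workaround, sketched below under assumptions: serve the PDF from a dedicated generic handler so the download and the list refresh are two separate requests. The handler name and the ReportStore.GetPdf lookup are hypothetical placeholders, not part of the original code.

        // ReportDownload.ashx.cs - a minimal sketch of a handler that only streams the PDF.
        using System.Web;

        public class ReportDownload : IHttpHandler
        {
            public void ProcessRequest(HttpContext context)
            {
                // Assumed helper that fetches the report bytes for the requested id.
                byte[] myReport = ReportStore.GetPdf(context.Request.QueryString["id"]);

                context.Response.Clear();
                context.Response.AddHeader("content-disposition", "attachment;filename=report.pdf");
                context.Response.ContentType = "application/pdf";
                context.Response.BinaryWrite(myReport);
                context.Response.Flush();
            }

            public bool IsReusable
            {
                get { return false; }
            }
        }

    The link's onclick can then open the handler (for example via window.open('ReportDownload.ashx?id=1')) and immediately trigger a postback or location.reload() on the main page, so the attachment response never has to double as the page refresh.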

    Read the article

  • Implementing Tagging System with PHP and mySQL. Caching help!!!

    - by Hamid Sarfraz
    With reference to this post: http://stackoverflow.com/questions/2122546/how-to-implement-tag-counting

    I have implemented the suggested 3-table tagging system completely. To count the number of articles per tag, I am using another column named tagArticleCount in the tag definition table (the other columns are tagId, tagText, tagUrl, tagArticleCount).

    If I implement real-time editing of this table, then whenever a user adds another tag to an article or deletes an existing tag, the tag_definition_table is updated to adjust the counter of the added/removed tag. This will cost an extra query each time any modification is made. (At the same time, the related link entry for tag and article is deleted from the tagLinkTable.) An alternative is not allowing any real-time editing of the counter, and instead using crons to update the counter of each tag after a specified time period.

    Here comes the problem that I want to discuss. This can be seen as caching the article count in the database. Can you please help me find a way to present the articles in a list when a tag is explored and the article counter for that tag is not up to date? For example:

    1. The counter shows 50 articles, but there are in fact 55 entries in the tag link table (that links tags and articles).
    2. The counter shows 50 articles, but there are in fact 45 entries in the tag link table.

    How would you handle these 2 scenarios? I am going to use APC to keep a cache of these counters; consider it too in your solution. Also discuss the performance of real-time vs. cronned counter updates.

    Read the article

  • Can the Visual Studio (2010) Command Window handle "external tools" with project/solution relative paths?

    - by ee
    I have been playing with the Command Window in Visual Studio (View - Other Windows - Command Window). It is great for several mouse-free scenarios. (The autocompleting file "Open" command rocks in a non-trivial solution.) That success got me thinking and experimenting:

    - Possibility 1.1: You can use the Alias commands to create custom commands.
    - Possibility 1.2: You can use the Shell command to run arbitrary executables and specify parameters (and pipe the result to the output or command windows).
    - Possibility 2: A previously set up external tool definition (with project-relative path variables) could be run from the command window.

    What I am stuck on is:

    - There doesn't appear to be a way to send parameters to an aliased command (and thus the underlying Shell call).
    - There doesn't appear to be a way to use project/solution-relative paths ($SolutionDir/$ProjectDir) in a Shell call.
    - Using absolute paths in Shell works, but is fragile and high-maintenance (one alias for each needed use case). Typically you want the command to run against a file relative to your project/solution.
    - It seems you can't run the traditional external tools (Tools - External Tools...) in the command window.

    Ultimately I want the external tool functionality in the command window in some way. Can anyone see a way to do this? Or am I barking up the wrong tree? So my questions: Can an "external tool" of some sort (using relative project/solution path parameters) be used in the Command Window? If yes, how? If no, what might be a suitable alternative?

    Read the article

  • The proper way to script periodically pulling a page from an https site

    - by DarthShader
    I want to create a command-line script for Cygwin/Bash that logs into a site, navigates to a specific page and compares it with the results of the last run. So far, I have it working with Lynx like so:

        ---- snipped, just setting variables ----
        echo "# Command logfile created by Lynx 2.8.5rel.5 (29 Oct 2005)
        ------- snipped the recorded keystrokes -------
        key Right Arrow
        key p
        key Right Arrow
        key ^U" >> $tmp1
        # p, right arrow initiate the page saving
        # "type" the filename inside the "where to save" dialog
        for i in $(seq 0 $((${#tmp2} - 1)))
        do
            echo "key ${tmp2:$i:1}" >> $tmp1
        done
        # hit enter and quit
        echo "key ^J
        key y
        key q
        key y
        " >> $tmp1
        lynx -accept_all_cookies -cmd_script=$tmp1 https://thewebpage.com/login
        diff $tmp2 $oldComp
        mv $tmp2 $oldComp

    It definitely does not feel "right": the cmd_script consists of relative user actions instead of specifying the exact link names and actions. So, if anything on the site ever changes, switches places, or a new link is added, I will have to re-create the actions. Also, I can't check for any errors, so I can't abort the script if something goes wrong (login failed, etc.).

    Another alternative I have been looking at is Mechanize with Ruby (as a note, I have 0 experience with Ruby). What would be the best way to improve or rewrite this?

    Read the article

  • When is a bool not a bool (compiler warning C4800)

    - by omatai
    Consider this being compiled in MS Visual Studio 2005 (and probably others):

        CPoint point1( 1, 2 );
        CPoint point2( 3, 4 );
        const bool point1And2Identical( point1 == point2 );           // C4800 warning
        const bool point1And2TheSame( ( point1 == point2 ) == TRUE ); // no warning

    What the...? Is the MSVC compiler brain-dead? As far as I can tell, TRUE is #defined as 1, without any type information. So by what magic is there any difference between these two lines? Surely the type of the expression inside the brackets is the same in both cases? [This part of the question now satisfactorily answered in the comments just below.]

    Personally, I think that avoiding the warning by using the == TRUE option is ugly (though less ugly than the != 0 alternative, despite that being more strictly correct), and it is better to use #pragma warning( disable:4800 ) to imply "my code is good, the compiler is an ass". Agree?

    Note: I have seen all manner of discussion on C4800 talking about assigning ints to bools, or casting a burger combo with large fries (hold the onions) to a bool, and wondering why there are strange results. I can't find a clear answer on what seems like a much simpler question... one that might just shine light on C4800 in general.

    Read the article

  • How to limit TCP writes to a particular size and then block until the data is read

    - by ustulation
    {Qt 4.7.0, VS 2010} I have a server written in Qt and a 3rd-party client executable. The Qt-based server uses the QTcpServer and QTcpSocket facilities (non-blocking). Going through the articles on TCP, I understand the following: the original specification of TCP defined the negotiable window size as a 16-bit value, thus a maximum of 65535 bytes. But implementations often use the RFC window-scale extension, which allows the sliding window size to be scaled by bit-shifting to yield a maximum of 1 gigabyte. This is implementation-defined.

    This could have resulted in majorly different window sizes on the receiver and sender ends, as the server uses Qt facilities without hardcoding any window size limit. The client first asks for all the information it can based on the previous messages from the server before handling the new (accumulating) incoming messages. So at some point the server receives a lot of messages, each asking for data of several MBs. The server processes these and puts the results into the sender buffer. The client, however, is unable to handle the messages at the same pace, and it seems the client's receive buffer is far smaller (65535 bytes maybe) than the sender's transmit window size. The messages thus accumulate at the sender's end until the sender's buffer is full too, after which TCP writes on the sender would block. This however does not happen, as the sender buffer is much larger. Hence this manifests as an increase in memory consumption on the sender's end.

    To prevent this from happening, I used the Qt socket's waitForBytesWritten() with the timeout set to -1 for an infinite waiting period. As I see from the behaviour, this blocks the thread writing TCP data until the data has actually been accepted by the receiver's window (which will happen when earlier messages have been processed by the client at the application level). This has made memory consumption at the server end almost negligible.

    Is there a better alternative to this (in Qt) if I want to restrict the memory consumption at the server end to, say, x MB? Also please point out if any of my understandings are incorrect.

    Read the article

  • Editable, growable DataGrid that retains values on postback and updates underlying DataTable

    - by jlstrecker
    I'm trying to create an ASP.NET/C# page that allows the user to edit a table of data, add rows to it, and save the data back to the database. For the table of data, I'm using a DataGrid whose cells contain TextBoxes or CheckBoxes. I have a button for adding rows (which works) and a button for saving the data. However, I'm quite stuck on two things:

    1. The TextBoxes and CheckBoxes should retain their values on postback. So if the user edits a TextBox and clicks the button to add more rows, the edits should be retained when the page reloads. However, the edits should not be saved to the database at this point.
    2. When the user clicks the save button, or anytime before, the DataTable underlying the DataGrid needs to be updated with the values of the TextBoxes and CheckBoxes so that the DataTable can be sent to the database. I have a method that does this, but I can't figure out when to call it.

    Any help getting this to work, or suggestions of alternative user interfaces that would behave similarly, would be appreciated.
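    A minimal sketch of the usual WebForms pattern, assuming a grid named myGrid with template columns containing controls with IDs txtName/chkActive; the helper names and DataTable columns are assumptions, not the poster's actual code. Binding only when !IsPostBack lets view state rebuild the grid on postback, so user edits survive; the capture method is then called once from the save button's handler.

        using System;
        using System.Data;
        using System.Web.UI;
        using System.Web.UI.WebControls;

        public partial class EditableGridPage : Page
        {
            protected DataGrid myGrid;   // assumed grid declared in the markup

            protected void Page_Load(object sender, EventArgs e)
            {
                // Bind only on the first request; on postback the grid is rebuilt
                // from view state, so the TextBox/CheckBox edits are preserved.
                if (!IsPostBack)
                    BindGrid();
            }

            private void BindGrid()
            {
                myGrid.DataSource = LoadTable();   // assumed data-access helper
                myGrid.DataBind();
            }

            private void CaptureEdits(DataTable table)
            {
                // Copy the current control values back into the underlying DataTable;
                // call this from the save button's click handler before persisting.
                foreach (DataGridItem item in myGrid.Items)
                {
                    var name = (TextBox)item.FindControl("txtName");     // assumed control IDs
                    var active = (CheckBox)item.FindControl("chkActive");
                    table.Rows[item.ItemIndex]["Name"] = name.Text;
                    table.Rows[item.ItemIndex]["Active"] = active.Checked;
                }
            }

            private DataTable LoadTable()
            {
                // Placeholder for the real database query.
                return new DataTable();
            }
        }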

    Read the article

  • Need advice on C++ coding pattern

    - by Kotti
    Hi! I have a working prototype of a game engine and right now I'm doing some refactoring. What I'm asking for is your opinion on the usage of the following C++ coding pattern.

    I have implemented some trivial algorithms for collision detection and they are implemented the following way (not shown here: the class constructor is made private, and using the algorithms looks like Algorithm::HandleInnerCollision(...)):

        struct Algorithm {
            // Private routines
            static bool is_inside(Point& p, Object& object) {
                // (...)
            }

        public:
            /**
             * Handle collision where the moving object should be always
             * located inside the static object
             *
             * @param MovingObject & mobject
             * @param const StaticObject & sobject
             * @return void
             * @see
             */
            static void HandleInnerCollision(MovingObject& mobject, const StaticObject& sobject) {
                // (...)
            }
        };

    So, my question is: somebody advised me to do it "the C++ way", so that all functions are wrapped in a namespace, but not in a class. Is there some good way to preserve privacy if I wrap them into a namespace as advised? What I want is a simple interface and the ability to call functions as Algorithm::HandleInnerCollision(...) while not polluting the namespace with other functions such as is_inside(...).

    Or, if you can advise an alternative design pattern for this kind of logic, I would really appreciate that...

    Read the article

  • C#: How to get all public (both get and set) string properties of a type

    - by Svish
    I am trying to make a method that will go through a list of generic objects and replace all their properties of type string which are either null or empty with a replacement. What is a good way to do this? I have this kind of... shell... so far:

        public static void ReplaceEmptyStrings<T>(List<T> list, string replacement)
        {
            var properties = typeof(T).GetProperties( -- What BindingFlags? -- );

            foreach(var p in properties)
            {
                foreach(var item in list)
                {
                    if(string.IsNullOrEmpty((string) p.GetValue(item, null)))
                        p.SetValue(item, replacement, null);
                }
            }
        }

    So, how do I find all the properties of a type that are:

    - of type string,
    - have a public get,
    - have a public set?

    I made this test class:

        class TestSubject
        {
            public string Public;
            private string Private;

            public string PublicPublic { get; set; }
            public string PublicPrivate { get; private set; }
            public string PrivatePublic { private get; set; }
            private string PrivatePrivate { get; set; }
        }

    The following does not work:

        var properties = typeof(TestSubject)
            .GetProperties(BindingFlags.Instance|BindingFlags.Public)
            .Where(ø => ø.CanRead && ø.CanWrite)
            .Where(ø => ø.PropertyType == typeof(string));

    If I print out the Name of the properties I get there, I get:

        PublicPublic
        PublicPrivate
        PrivatePublic

    In other words, I get two properties too many.

    Note: This could probably be done in a better way... using nested foreach and reflection and all here... but if you have any great alternative ideas, please let me know, because I want to learn!
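    A possible tweak, as a sketch: CanRead/CanWrite only report whether an accessor exists at all, regardless of its visibility. Asking for the accessor methods without the nonPublic flag filters the non-public ones out, since those calls return null when the accessor is missing or not public.

        var properties = typeof(TestSubject)
            .GetProperties(BindingFlags.Instance | BindingFlags.Public)
            .Where(p => p.PropertyType == typeof(string))
            // GetGetMethod()/GetSetMethod() return null for missing or non-public
            // accessors, which drops PublicPrivate and PrivatePublic from the result.
            .Where(p => p.GetGetMethod() != null && p.GetSetMethod() != null);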

    Read the article

  • Help: List contents of iPhone application bundle subpath??

    - by bluepill
    Hi, I am trying to get a listing of the directories contained within a subpath of my application bundle. I've done some searching and this is what I have come up with:

        - (void) contents {
            NSArray *contents = [[NSBundle mainBundle] pathsForResourcesOfType:nil inDirectory:@"DataDir"];
            if (contents = nil) {
                NSLog(@"Failed: path doesn't exist or error!");
            } else {
                NSString *bundlePathName = [[NSBundle mainBundle] bundlePath];
                NSString *dataPathName = [bundlePathName stringByAppendingPathComponent: @"DataDir"];
                NSFileManager *fileManager = [NSFileManager defaultManager];
                NSMutableArray *directories = [[NSMutableArray alloc] init];
                for (NSString *entityName in contents) {
                    NSString *fullEntityName = [dataPathName stringByAppendingPathComponent:entityName];
                    NSLog(@"entity = %@", fullEntityName);
                    BOOL isDir = NO;
                    [fileManager fileExistsAtPath:fullEntityName isDirectory:(&isDir)];
                    if (isDir) {
                        [directories addObject:fullEntityName];
                        NSLog(@" is a directory");
                    } else {
                        NSLog(@" is not a directory");
                    }
                }
                NSLog(@"Directories = %@", directories);
                [directories release];
            }
        }

    As you can see, I am trying to get a listing of directories in the app bundle's DataDir subpath. The problem is that I get no strings in my contents NSArray.

    Notes:
    - I am using the simulator.
    - When I manually look in the .app file I can see DataDir and the contents therein.
    - The contents of DataDir are PNG files and directories that contain PNG files.
    - The application logic needs to discover the contents of DataDir at runtime.
    - I have also tried using NSArray *contents = [fileManager contentsOfDirectoryAtPath:DataDirPathName error:nil]; and I still get no entries in my contents array.

    Any suggestions/alternative approaches? Thanks.

    Read the article

  • Unknown ListView Behavior

    - by st0le
    I'm currently making an SMS application in Android. The following is a code snippet from the inbox ListActivity: I have requested a cursor from the content resolver and used a custom adapter to add custom views into the list. In the custom view I've got 2 TextViews (tvFullBody, tvBody)... tvFullBody contains the full SMS text while tvBody contains a short preview (35 characters). The tvFullBody visibility is by default set to GONE.

    My idea is, when the user clicks on a list item, tvBody should disappear (GONE) and tvFullBody should become visible (VISIBLE). On clicking again, it should revert back to its original state.

        // isExpanded is a BitSet of size = no. of list items...
        // keeps track of which items are expanded and which are not
        @Override
        protected void onListItemClick(ListView l, View v, int position, long id) {
            if(isExpanded.get(position)) {
                v.findViewById(R.id.tvFullBody).setVisibility(View.GONE);
                v.findViewById(R.id.tvBody).setVisibility(View.VISIBLE);
            } else {
                v.findViewById(R.id.tvFullBody).setVisibility(View.VISIBLE);
                v.findViewById(R.id.tvBody).setVisibility(View.GONE);
            }
            isExpanded.flip(position);
            super.onListItemClick(l, v, position, id);
        }

    The code works as it is supposed to :) except for an undesired side effect: every 10th (or so) list item also gets "toggled". E.g. if I expand the 1st item, the 11th and 21st list items are also expanded... Although they remain off-screen, on scrolling you get to see the undesired "expansion".

    By my novice analysis, I'm guessing ListView keeps track of the ~10 list items that are currently visible, and upon scrolling it "reuses" those same views, which is causing this problem... (I didn't check the Android source code yet.)

    I'd be grateful for any suggestion on how I should tackle this! :) I'm open to alternative methods as well... Thanks in advance! :)

    Read the article

  • MEF: Satisfy part on an Export using an Export from the composed part.

    - by Pop Catalin
    Hi, I have the following scenario in Silverlight 4. I have a notifications service:

        [InheritedExport]
        public interface INotificationsService : IObservable<ReceivedNotification>
        {
            void IssueNotifications(IEnumerable<ClientIssuedNotification> notifications);
        }

    and an implementation of this service:

        [PartCreationPolicy(CreationPolicy.NonShared)]
        public class ClientNotificationService : INotificationsService
        {
            [Import]
            IPlugin Plugin { get; set; }
            ...
        }

    How can I tell MEF that the Plugin property of the ClientNotificationService must be provided by the importing class that imports the INotificationsService? For example:

        public class Client
        {
            [Export]
            IPlugin Current { get; set; }

            [Import]
            INotificationService NotificationService;
        }

    How can I say that I want MEF to satisfy the ClientNotificationService.Plugin part with the IPlugin exported by the Client class? Basically I want the NotificationService to receive a unique ID provided by the importing class whenever it is created and composed into a new class. If there's an alternative method, like using metadata, to do this, I'd appreciate any insights. I've been struggling with this for a while. Thanks.
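    For the simple single-Client case, one hedged sketch (the bootstrap class and catalog choice are assumptions, and this is not a per-importer pairing mechanism) is to compose the Client instance itself, so its IPlugin export becomes available to satisfy the service's import in the same composition:

        // A minimal MEF bootstrap sketch; assumes the parts above live in this assembly.
        using System.ComponentModel.Composition;
        using System.ComponentModel.Composition.Hosting;

        public static class CompositionBootstrap
        {
            public static Client BuildClient()
            {
                var catalog = new AssemblyCatalog(typeof(Client).Assembly);
                var container = new CompositionContainer(catalog);

                var client = new Client();
                // ComposeParts adds client's [Export] IPlugin to the container and
                // satisfies its [Import] INotificationsService in the same operation.
                container.ComposeParts(client);
                return client;
            }
        }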

    Read the article

  • Any techniques to interrupt, kill, or otherwise unwind (releasing synchronization locks) a single deadlocked thread?

    - by gojomo
    I have a long-running process where, due to a bug, a trivial/expendable thread is deadlocked with a thread which I would like to continue, so that it can perform some final reporting that would be hard to reproduce in another way. Of course, fixing the bug for future runs is the proper ultimate resolution. Of course, any such forced interrupt/kill/stop of any thread is inherently unsafe and likely to cause other unpredictable inconsistencies. (I'm familiar with all the standard warnings and the reasons for them.) But still, since the only alternative is to kill the JVM process and go through a more lengthy procedure which would result in a less complete final report, messy/deprecated/dangerous/risky/one-time techniques are exactly what I'd like to try.

    The JVM is Sun's 1.6.0_16 64-bit on Ubuntu, and the expendable thread is waiting to lock an object monitor.

    - Can an OS signal directed at the exact thread create an InterruptedException in the expendable thread?
    - Could attaching with gdb and directly tampering with JVM data or calling JVM procedures allow a forced release of the object monitor held by the expendable thread?
    - Would a Thread.interrupt() from another thread generate an InterruptedException from the waiting-to-lock frame? (With some effort, I can inject an arbitrary BeanShell script into the running system.)
    - Can the deprecated Thread.stop() be sent via JMX or any other remote-injection method?

    Any ideas appreciated, the more 'dangerous' the better! And if your suggestion has worked in personal experience in a similar situation, all the better!

    Read the article

  • Design for fastest page download

    - by mexxican
    I have a file with millions of URLs/IPs and have to write a program to download the pages really fast. The connection rate should be at least 6000/s, and the file download rate at least 2000/s, with an average file size of 15 KB. The network bandwidth is 1 Gbps.

    My approach so far has been: create 600 socket threads, each with 60 sockets, and use WSAEventSelect to wait for data to read. As soon as a file download completes, add the memory address of the downloaded file to a pipeline (a simple vector) and fire another request. When the total downloaded data across all socket threads exceeds 50 MB, write all the downloaded files to disk and free the memory.

    So far, this approach has not been very successful: the connection rate does not shoot beyond 2900/s, and the downloaded data rate is even less. Can somebody suggest an alternative approach which could give me better stats? Also, I am working on a Windows Server 2008 machine with 8 GB of memory. Do we need to hack the kernel so that we can use more threads and memory? Currently I can create a maximum of 1500 threads, and memory usage does not go beyond 2 GB [which technically should be much more, as this is a 64-bit machine]. And IOCP is out of the question, as I have no experience with it so far and have to fix this application today. Thanks guys!
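    The core alternative being ruled out is asynchronous I/O with a bounded number of in-flight requests instead of hundreds of threads. As an illustration only, here is a hedged sketch of that idea in modern C#, not the poster's WinSock/C++ stack; the file name and concurrency cap are assumptions.

        using System;
        using System.Linq;
        using System.Net.Http;
        using System.Threading;
        using System.Threading.Tasks;

        class BoundedDownloader
        {
            static readonly HttpClient http = new HttpClient();

            static async Task Main()
            {
                var urls = System.IO.File.ReadLines("urls.txt");   // assumed input file
                var gate = new SemaphoreSlim(600);                  // cap on concurrent requests

                var tasks = urls.Select(async url =>
                {
                    await gate.WaitAsync();
                    try { return await http.GetByteArrayAsync(url); }
                    catch (HttpRequestException) { return (byte[])null; } // skip failed downloads
                    finally { gate.Release(); }
                }).ToList();

                var pages = await Task.WhenAll(tasks);
                Console.WriteLine("downloaded {0} pages", pages.Count(p => p != null));
            }
        }

    The point of the sketch is the shape of the design: a fixed concurrency budget with completion-driven continuations, rather than one OS thread per group of sockets.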

    Read the article
