Search Results

Search found 10328 results on 414 pages for 'behavior tree'.

  • EXEC_BAD_ACCESS error in my iPhone application

    - by Jaimin
    When I print a dictionary I get this error; my dictTf is an NSMutableDictionary. On the home page I select a few fields and click Find, and a new view appears with the result. Then I go back and click Find again without changing anything, and the result still comes up properly. But at this point, when I go back, printing the dictionary shows the following and the EXEC_BAD_ACCESS error occurs:

        Printing description of dictTf:
        The program being debugged was signaled while in a function called from GDB.
        GDB has restored the context to what it was before the call.
        To change this behavior use "set unwindonsignal off"
        Evaluation of the expression containing the function (CFShow) will be abandoned.
        The program being debugged was signaled while in a function called from GDB.
        GDB has restored the context to what it was before the call.
        To change this behavior use "set unwindonsignal off"
        Evaluation of the expression containing the function (CFShow) will be abandoned.
        (gdb)

  • How can I force users to create objects of classes derived from mine with "new" only?

    - by sharptooth
    To implement reference counting we use an IUnknown-like interface and a smart pointer template class. The interface implements all the reference-count methods, including Release():

        void IUnknownLike::Release() {
            if( --refCount == 0 ) {
                delete this;
            }
        }

    The smart pointer template class has a copy constructor and an assignment operator, both accepting raw pointers. So users can do the following:

        class Class : public IUnknownLike { };
        void someFunction( CSmartPointer<Class> object ); // whatever function
        Class object;
        someFunction( &object );

    and the program runs into undefined behavior: the object is created with reference count zero, the smart pointer is constructed and bumps it to one, then the function returns, the smart pointer is destroyed and calls Release(), which leads to a delete of a stack-allocated variable. Users can just as well do the following:

        struct COuter {
            // whatever else;
            Class inner; // IUnknownLike descendant
        };
        COuter object;
        someFunction( &object.inner );

    and again an object not created with new is deleted. Undefined behavior at its best. Is there any way to change the IUnknownLike interface so that the user is forced to use new when creating all objects derived from IUnknownLike - both directly and indirectly derived (with classes in between the most derived and the base)?

  • Why does the JSF action tag handler in JSP invoke rendering immediately after creation?

    - by Pentius
    Dear fellows, I read the article "Improving JSF by Dumping JSP" by Hans Bergsten. There I read the following:

        The JSP container processes the page and invokes the JSF action tag handlers as they are encountered. A JSF tag handler looks for the JSF component it represents in the component tree. If it can't find the component, it creates it and adds it to the component tree. It then asks the component to render itself.

    and furthermore:

        On the first request, the action creates its component and asks it to render itself.

    I understand that the immediate rendering after the creation of the component is the problem here (the reference to the input component can't be resolved in the example). That's one reason why JSF doesn't fit well with JSP. But it reads as if the action tag handler itself asks the component to render. Or is it JSP that triggers the rendering directly after the action tag handler has created the component? If it is the action tag handler, I don't understand why this is the fault of JSP. What happens differently here from what JSF intends? Thanks for your help, I need this for my thesis.

  • MFC (C++) CDialog DoModal() not working as expected

    - by krebstar
    Hi, I have a plugin that is loaded by this application. This plugin opens some dialog boxes with DoModal(). I expect these dialog boxes to behave like this: if I click on the application window behind the dialog box, the dialog box flashes and does not allow the application to take focus. However, with one of the other dialog boxes, also shown with DoModal(), if I click on the application window it doesn't do the flashing thing, and after a while the application's close/minimize buttons become active (well, just the color). They're not really active, and the window turns somewhat white and the title bar says (Not Responding)... What could possibly be wrong and how do I fix it? I've tried setting the dialog box's properties to System Modal: True and Set Foreground: True, but it doesn't seem to work.. :( Thanks.. EDIT: I'd like to note that in the Windows taskbar there is only one entry for the application with the correct behavior, but when the dialog box with the incorrect behavior is launched, another "window" entry appears. So it looks like (Application)(Dialog box title).. The effect I'm trying to achieve is just (Application)..

  • PEAR:DB connection parameters

    - by Markus Ossi
    I just finished my first PHP site and now I have a security-related question. I used PEAR:DB for the database connection and made a separate parameter file for it. How should I hide this parameter file? I found a guide (http://www.kitebird.com/articles/peardb.html) that says:

        Another way to specify connection parameters is to put them in a separate file that you reference from your main script. ... It also enables you to move the parameter file outside of the web server's document tree, which prevents its contents from being displayed literally if the server becomes misconfigured and starts serving PHP scripts as plain text.

    I have now put my file in a directory like this: /include/db_parameters.inc. However, if I go to this URL, the web server shows me the contents of the file, including my database username and password. From what I've understood, I should protect this file so that even if PHP were served as plain text, nobody could read it. What does "outside of the web server's document tree" mean here? Should I move the PHP file out of the public_html directory altogether, deeper into the server file system? Some chmod?

  • Is `List<Dog>` a subclass of `List<Animal>`? Why aren't Java's generics implicitly polymorphic?

    - by froadie
    I'm a bit confused about how Java generics handle inheritance / polymorphism. Assume the following hierarchy:

        Animal (parent)
            Dog (child)
            Cat (child)

    So suppose I have a method doSomething(List<Animal> animals). By all the rules of inheritance and polymorphism, I would assume that a List<Dog> is a List<Animal> and a List<Cat> is a List<Animal> - and so either one could be passed to this method. Not so. If I want to achieve this behavior, I have to explicitly tell the method to accept a list of any subset of Animal by saying doSomething(List<? extends Animal> animals). I understand that this is Java's behavior. My question is why? Why is polymorphism generally implicit, but when it comes to generics it must be specified?
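
    For reference, a minimal sketch of why the compiler behaves this way (the Animal/Dog/Cat names come from the question; everything else is illustrative): if a List<Dog> could be treated as a List<Animal>, that supertype view would have to allow inserting a Cat, so the conversion is rejected, and read-only access goes through the bounded wildcard instead.

        import java.util.ArrayList;
        import java.util.List;

        class Animal {}
        class Dog extends Animal {}
        class Cat extends Animal {}

        public class Variance {
            // Accepts List<Animal>, List<Dog>, or List<Cat>, but only for reading as Animal.
            static void doSomething(List<? extends Animal> animals) {
                for (Animal a : animals) {
                    System.out.println(a);
                }
                // animals.add(new Cat());      // does not compile: the list might really be a List<Dog>
            }

            public static void main(String[] args) {
                List<Dog> dogs = new ArrayList<>();
                dogs.add(new Dog());
                doSomething(dogs);              // fine with the bounded wildcard

                // List<Animal> animals = dogs; // does not compile: would allow animals.add(new Cat())
            }
        }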

  • Searching in graphs/trees with depth-first/breadth-first/A* algorithms

    - by devoured elysium
    I have a couple of questions about searching in graphs/trees. Let's assume I have an empty chess board and I want to move a pawn around from point A to B.

    A. When using depth-first search or breadth-first search, must we use open and closed lists? That is, a list with all the elements still to check, and another with all the elements that were already checked? Is it even possible to do it without those lists? What about A*, does it need them?

    B. When using lists, after having found a solution, how can you get the sequence of states from A to B? I assume that when you have items in the open and closed lists, instead of just having the (x, y) states, you have an "extended state" of the form (x, y, parent_of_this_node)?

    C. State A has 4 possible moves (right, left, up, down). If my first move is left, should I allow the next state to come back to the original state, that is, to make the "right" move? If not, must I traverse the search tree every time to check which states I've already been to?

    D. When I see a state in the tree where I've already been, should I just ignore it, as I know it's a dead end? I guess to do this I'd have to always keep the list of visited states, right?

    E. Is there any difference between search trees and graphs? Are they just different ways to look at the same thing?
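
    A minimal breadth-first-search sketch for the board example, keeping an open queue, a closed (visited) set, and parent links so the path from A to B can be reconstructed; the Cell record, board size, and 4-way moves are illustrative assumptions, not taken from the question.

        import java.util.*;

        public class BoardSearch {
            record Cell(int x, int y) {}

            static List<Cell> bfs(Cell start, Cell goal, int size) {
                Deque<Cell> open = new ArrayDeque<>();    // frontier still to expand
                Set<Cell> closed = new HashSet<>();       // states already seen
                Map<Cell, Cell> parent = new HashMap<>(); // child -> predecessor, for path reconstruction

                open.add(start);
                closed.add(start);
                int[][] moves = {{1, 0}, {-1, 0}, {0, 1}, {0, -1}};

                while (!open.isEmpty()) {
                    Cell cur = open.poll();
                    if (cur.equals(goal)) {
                        List<Cell> path = new ArrayList<>();
                        for (Cell c = cur; c != null; c = parent.get(c)) {
                            path.add(c);                  // walk the parent links back to the start
                        }
                        Collections.reverse(path);
                        return path;
                    }
                    for (int[] m : moves) {
                        Cell next = new Cell(cur.x() + m[0], cur.y() + m[1]);
                        boolean onBoard = next.x() >= 0 && next.x() < size && next.y() >= 0 && next.y() < size;
                        if (onBoard && closed.add(next)) { // add() is false if already visited, so we never go back
                            parent.put(next, cur);
                            open.add(next);
                        }
                    }
                }
                return List.of();                          // no path found
            }

            public static void main(String[] args) {
                System.out.println(bfs(new Cell(0, 0), new Cell(3, 4), 8));
            }
        }

    The closed set is what keeps the search from stepping back into states it has already expanded (questions C and D), and the parent map plays the role of the "extended state" from question B.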

  • Creating a Linq->HQL provider

    - by Mike Q
    Hi all, I have a client application that connects to a server. The server uses Hibernate for persistence and querying, so it has a set of annotated Hibernate objects. The client sends HQL queries to the server and gets responses back. The client has an auto-generated set of objects that match the server's Hibernate objects, used for query results and basic persistence. I would like to support using Linq to query as well as HQL, as it makes the queries typesafe and quicker to build (no more typos in HQL string queries). I've looked at the following, but I can't see how to get them to fit with what I have:

    - NHibernate's Linq provider - requires using NHibernate's ISession and ISessionFactory, which I don't have.
    - LinqExtender - requires a lot of annotations on the objects and extending a base type; too invasive.

    What I really want is something that will give me a nice, easy-to-process structure to build the HQL queries from. I've read most of a 15-page article written by one of the C# developers on how to create custom providers, and it's pretty fraught, mainly because of the complexity of the expression tree. Can anyone suggest an approach for implementing Linq -> HQL translation? Perhaps a library that will clean up the expression tree into something more SQL/HQL-ish. I would like to support select/from/where/group by/order by/joins. Not too worried about subqueries.

  • Searching Natural Language Sentence Structure

    - by Cerin
    What's the best way to store and search a database of natural language sentence structure trees? Using OpenNLP's English Treebank Parser, I can get fairly reliable sentence structure parses for arbitrary sentences. What I'd like to do is create a tool that can extract all the docstrings from my source code, generate these trees for all sentences in the docstrings, store these trees and their associated function name in a database, and then allow a user to search the database using natural language queries. So, given the sentence "This uploads files to a remote machine." for the function upload_files(), I'd have the tree:

        (TOP (S (NP (DT This)) (VP (VBZ uploads) (NP (NNS files)) (PP (TO to) (NP (DT a) (JJ remote) (NN machine)))) (. .)))

    If someone entered the query "How can I upload files?", equating to the tree:

        (TOP (SBARQ (WHADVP (WRB How)) (SQ (MD can) (NP (PRP I)) (VP (VB upload) (NP (NNS files)))) (. ?)))

    how would I store and query these trees in a SQL database? I've written a simple proof-of-concept script that can perform this search using a mix of regular expressions and network graph parsing, but I'm not sure how I'd implement this in a scalable way. And yes, I realize my example would be trivial to retrieve using a simple keyword search. The idea I'm trying to test is how I might take advantage of grammatical structure, so I can weed out entries with similar keywords but a different sentence structure. For example, with the above query, I wouldn't want to retrieve the entry associated with the sentence "Checks a remote machine to find a user that uploads files.", which has similar keywords but is obviously describing a completely different behavior.

  • Delegate Methods Not Firing on Search Results Table

    - by Slinky
    All, I have a UITableView with a detail view and a search bar, created in code. It works fine for selected items, except it doesn't work when an item is clicked in the search results: no delegate methods fire. I've never seen this type of behavior before, so I don't know where the issue lies. I get the same behavior with standard table cells as well as custom table cells. I'd appreciate any guidance on this, thanks. It just hangs here:

        //ViewController.h
        @interface SongsViewController : UITableViewController <UISearchBarDelegate, UISearchDisplayDelegate>
        {
            NSMutableArray *searchData;
            UISearchBar *searchBar;
            UISearchDisplayController *searchDisplayController;
            UITableViewCell *customSongCell;
        }

        //ViewController.m
        -(void)viewDidLoad
        {
            [super viewDidLoad];

            CGRect screenRect = [[UIScreen mainScreen] bounds];
            CGFloat screenWidth = screenRect.size.width;

            searchBar = [[UISearchBar alloc] initWithFrame:CGRectMake(0, 0, screenWidth, 88)];
            [searchBar setShowsScopeBar:YES];
            [searchBar setScopeButtonTitles:[[NSArray alloc] initWithObjects:@"Title", @"Style", nil]];

            searchDisplayController = [[UISearchDisplayController alloc] initWithSearchBar:searchBar contentsController:self];
            searchDisplayController.delegate = self;
            searchDisplayController.searchResultsDataSource = self;

            self.tableView.tableHeaderView = searchBar;

            //Load custom table cell
            UINib *songCellNib = [UINib nibWithNibName:@"SongItem" bundle:nil];
            //Register this nib, which contains the cell
            [[self tableView] registerNib:songCellNib forCellReuseIdentifier:@"SongItemCell"];

            // create a filtered list that will contain products for the search results table.
            SongStore *ps = [SongStore defaultStore];
            NSArray *songObjects = [ps allSongs];
            self.filteredListContent = [NSMutableArray arrayWithCapacity:[songObjects count]];
            self.masterSongArr = [[NSMutableArray alloc] initWithArray:[ps allSongs]];

            [self refreshData];
            [self.tableView reloadData];
            self.tableView.scrollEnabled = YES;
        }

  • When is the reintegrate option really necessary?

    - by Tor Hovland
    If you always sync a feature branch before you merge it back, why do you really have to use the --reintegrate option? The Subversion book says:

        When merging your branch back to the trunk, however, the underlying mathematics is quite different. Your feature branch is now a mishmosh of both duplicated trunk changes and private branch changes, so there's no simple contiguous range of revisions to copy over. By specifying the --reintegrate option, you're asking Subversion to carefully replicate only those changes unique to your branch. (And in fact, it does this by comparing the latest trunk tree with the latest branch tree: the resulting difference is exactly your branch changes!)

    So the --reintegrate option only merges the changes that are unique to the feature branch. But if you always sync before merge (which is a recommended practice, in order to deal with any conflicts on the feature branch), then the only changes between the branches are the changes that are unique to the feature branch, right? And if Subversion tries to merge code that is already on the target branch, it will just do nothing, right? In this blog post (http://blogs.open.collab.net/svn/2008/07/subversion-merg.html), Mark Phippard writes:

        If we include those synched revisions, then we merge back changes that already exist in trunk. This yields unnecessary and confusing conflicts.

    Can somebody give me an example of when dropping reintegrate gives me unnecessary conflicts?

  • Adding a button bar at the bottom of a ListView upon checkbox select (as in the Gmail app)?

    - by elto
    I have a ListView with a custom adapter. In each row there is a checkbox and a couple of TextViews. I want to give the user the option to delete the checked items, so as soon as the user clicks one of the checkboxes, I want a button bar to slide in from the bottom and stay at the bottom regardless of ListView scrolling. This is something like the behavior of the email app on the Motorola Cliq, and to some extent the Gmail app itself. I have tried adding a RelativeLayout (containing the buttons) below the ListView, with visibility initially set to gone; as soon as the user checks an item, the visibility changes to visible. I have added a slide-in animation to it too. It works, but the problem is that it overlaps the last element of the ListView, which the user cannot check once the button bar has become visible. So I tried setting the bottom margin of the ListView equal to the height of the button bar when I change the button bar's visibility, which solves the overlap problem, but now the checkbox behavior has gone weird. Clicking one checkbox checks another checkbox in the list for some weird reason. I noticed that this happens because as soon as I change the ListView margin, the list redraws itself, and during this new call to the adapter's getView() method things get mixed up. I wanted to ask if anyone has done something like this. What is the best way to add such a button bar below the list while keeping the slide-in animation intact? Also, what is the footer view of a ListView, and could it solve my problem?
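
    One possible approach, sketched below with illustrative names (an assumption, not the only way): instead of changing the bottom margin, give the ListView bottom padding equal to the bar's height and turn off clipToPadding, so the last row can still scroll up above the bar without forcing the layout change that confuses getView().

        import android.view.View;
        import android.widget.ListView;

        // Hypothetical helper: reserve space for the bar instead of letting it cover the last row.
        public final class ButtonBarHelper {
            public static void show(ListView listView, View buttonBar) {
                buttonBar.setVisibility(View.VISIBLE);
                listView.setClipToPadding(false);                    // rows may scroll through the padded area
                listView.setPadding(0, 0, 0, buttonBar.getHeight()); // assumes the bar has already been measured
            }

            public static void hide(ListView listView, View buttonBar) {
                buttonBar.setVisibility(View.GONE);
                listView.setPadding(0, 0, 0, 0);
            }
        }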

  • Where are the network boundaries in the Java Connector Architecture (JCA)?

    - by Laird Nelson
    I am writing a JCA resource adapter. I'm also, as I go, trying to fully understand the connection management portion of the JCA specification. As a thought experiment, pretend that the only client of this adapter will be a Swing Java Application Client located on a different machine. Also assume that the resource adapter will communicate with its "enterprise information system" (EIS) over the network as well. As I understand the JCA specification, the .rar file is deployed to the application server. The application server creates the .rar file's implementation of the ManagedConnectionFactory interface. It then asks it to produce a connection factory, which is the opaque object that is deployed to JNDI for the user to use to obtain a connection to the resource. (In the case of JDBC, the connection factory is a javax.sql.DataSource.) It is a requirement that the connection factory retain a reference to the application-server-supplied ConnectionManager, which, in turn, is required to be Serializable. This makes sense--in order for the connection factory to be stored in JNDI, it must be serializable, and in order for it to keep a reference to the ConnectionManager, the ConnectionManager must also be serializable. So fine, this little object graph gets installed in the application client's JNDI tree. This is where I start to get queasy. Is the ConnectionManager--the piece supplied by the application server that is supposed to handle connection management, sharing, pooling, etc.--wholly present on the client at this point? One of its jobs is to create ManagedConnection instances, and a ManagedConnection is not required to be Serializable, and the user connection handles it vends are also not required to be Serializable. That suggests to me that the whole connection pooling machinery is shipped wholesale to the application client and stuffed into its JNDI tree. Does this all mean that JCA interactions from the client side bypass the server-side componentry of the application server? Where are the network boundaries in the JCA API?

  • How to deal with a flaw in System.Data.DataTableExtensions.CopyToDataTable()

    - by andy
    Hey guys, so I've come across something which is perhaps a flaw in the extension method .CopyToDataTable. This method is used by importing (in VB.NET) System.Data.DataTableExtensions and then calling the method against an IEnumerable. You would do this if you want to filter a DataTable using LINQ and then restore the DataTable at the end, i.e.:

        Imports System.Data.DataRowExtensions
        Imports System.Data.DataTableExtensions

        Public Class SomeClass
            Private Shared Function GetData() As DataTable
                Dim Data As DataTable
                Data = LegacyADO.NETDBCall
                Data = Data.AsEnumerable.Where(Function(dr) dr.Field(Of Integer)("SomeField") = 5).CopyToDataTable()
                Return Data
            End Function
        End Class

    In the example above, the "Where" filtering might return no results. If this happens, CopyToDataTable throws an exception because there are no DataRows. Why? The correct behavior should be to return a DataTable with Rows.Count = 0. Can anyone think of a clean workaround, such that whoever calls CopyToDataTable doesn't have to be aware of this issue? System.Data.DataTableExtensions is a static class, so I can't override the behavior... any ideas? Have I missed something? cheers UPDATE: I have submitted this as an issue to Connect. I would still like some suggestions, but if you agree with me, you could vote up the issue at Connect via the link above. cheers

  • Python: How to close a UDP socket while it is waiting for data in recv?

    - by alexroat
    Hello, let's consider this code in Python:

        import socket
        import threading
        import sys
        import select

        class UDPServer:
            def __init__(self):
                self.s = None
                self.t = None

            def start(self, port=8888):
                if not self.s:
                    self.s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
                    self.s.bind(("", port))
                    self.t = threading.Thread(target=self.run)
                    self.t.start()

            def stop(self):
                if self.s:
                    self.s.close()
                    self.t.join()
                    self.t = None

            def run(self):
                while True:
                    try:
                        # receive data
                        data, addr = self.s.recvfrom(1024)
                        self.onPacket(addr, data)
                    except:
                        break
                self.s = None

            def onPacket(self, addr, data):
                print addr, data

        us = UDPServer()
        while True:
            sys.stdout.write("UDP server> ")
            cmd = sys.stdin.readline()
            if cmd == "start\n":
                print "starting server..."
                us.start(8888)
                print "done"
            elif cmd == "stop\n":
                print "stopping server..."
                us.stop()
                print "done"
            elif cmd == "quit\n":
                print "Quitting ..."
                us.stop()
                break
        print "bye bye"

    It runs an interactive shell with which I can start and stop a UDP server. The server is implemented through a class which launches a thread containing an infinite recv/onPacket loop inside a try/except block, which should detect the error and exit the loop. What I expect is that when I type "stop" in the shell, the socket is closed and an exception is raised by recvfrom because the file descriptor has been invalidated. Instead, it seems that recvfrom still blocks the thread waiting for data even after the close call. Why this strange behavior? I've always used this pattern to implement UDP servers in C++ and Java and it has always worked. I've also tried select, passing a list containing the socket as the xread argument, in order to get a file-descriptor-disruption event from select instead of from recvfrom, but select seems to be insensible to the close too. I need a single piece of code which behaves the same on Linux and Windows with Python 2.5 - 2.6. Thanks.

  • Using the Queue class in Python 2.6

    - by voipme
    Let's assume I'm stuck using Python 2.6, and can't upgrade (even if that would help). I've written a program that uses the Queue class. My producer is a simple directory listing. My consumer threads pull a file from the queue, and do stuff with it. If the file has already been processed, I skip it. The processed list is generated before all of the threads are started, so it isn't empty. Here's some pseudo-code.

        import Queue, sys, threading

        processed = []

        def consumer():
            while True:
                file = dirlist.get(block=True)
                if file in processed:
                    print "Ignoring %s" % file
                else:
                    # do stuff here
                    dirlist.task_done()

        dirlist = Queue.Queue()
        for f in os.listdir("/some/dir"):
            dirlist.put(f)

        max_threads = 8
        for i in range(max_threads):
            thr = Thread(target=consumer)
            thr.start()

        dirlist.join()

    The strange behavior I'm getting is that if a thread encounters a file that's already been processed, the thread stalls out and waits until the entire program ends. I've done a little bit of testing, and the first 7 threads (assuming 8 is the max) stop, while the 8th thread keeps processing, one file at a time. But, by doing that, I'm losing the entire reason for threading the application. Am I doing something wrong, or is this the expected behavior of the Queue/threading classes in Python 2.6?

  • Snapshot agent obliterates conflicts

    - by mwolfe02
    We are using merge replication in SQL Server 2000. We have a snapshot agent that runs every night that updates the publication snapshot. About six months ago we updated from SQL Server 7.0 to 2000 (that's not a typo). We noticed a sharp decline in conflicts at that time but could not track down the reason. We finally found that the daily snapshot agent is recreating the conflict tables every night. This seems to be a change in functionality from SQL Server 7.0. We were running the snapshot agent before and the conflicts would accumulate. Is there some way to prevent the data in the conflict tables from being lost when the snapshot runs? Can anyone confirm a change in behavior between 7.0 and 2000? Our current plan is to simply stop automatically updating the publication snapshot. Is that a reasonable workaround? Here is the line from the script that is adding the snapshot:

        exec sp_addpublication_snapshot
            @publication = N'MyPub',
            @frequency_type = 4,
            @frequency_interval = 1,
            @frequency_relative_interval = 1,
            @frequency_recurrence_factor = 0,
            @frequency_subday = 1,
            @frequency_subday_interval = 5,
            @active_start_date = 0,
            @active_end_date = 0,
            @active_start_time_of_day = 500,
            @active_end_time_of_day = 235959

    Here is the step that runs in the agent job:

        Step Name: Run agent.
        Type: Replication Snapshot
        Command: -Publisher [WCDBS02] -PublisherDB [TaxDB] -Distributor [WCDBS02]
                 -Publication [TaxDB] -ReplicationType 2 -DistributorSecurityMode 1

    This appears to be running the Replication Snapshot Agent Utility. There is no mention on that link about dropping and recreating system conflict tables, nor is there any flag that can be set to alter this behavior.

  • git push says everything up to date when it definitely is not

    - by Wolf
    I have a public repository. No one else has forked, pulled, or done anything else to it. I made some minor changes to one file, successfully committed them, and tried to push. It says 'Everything up-to-date'. There are no branches. I'm very, very new to git and I don't understand what on earth is going on. git remote show origin tells me:

        HEAD branch: master
        Remote branch: master tracked
        Local ref configured for 'git push': master pushes to master (up to date)

    Any ideas what I can do to make this understand that it's NOT up to date? Thanks

    Updates: git status:

        # On branch master
        # Untracked files:
        #   (use "git add ..." to include in what will be committed)
        #
        #   histmarkup.el
        #   vendor/yasnippet-0.6.1c/snippets/
        no changes added to commit (use "git add" and/or "git commit -a")

    git branch -a:

        * master
          remotes/origin/master

    git fsck:

        dangling tree 105cb101ca1a4d2cbe1b5c73eb4a238e22cb4998
        dangling tree 85bd0461f0fcb1618d46c8a80d3a4a7932de34bb

    Update 2: I re-opened the modified file, and the modifications I KNOW I had made were gone. So I added them again, went through the rigamarole of git status, git add filename, git commit -m "(message)", and git push origin master, and all of a sudden it works the way it's supposed to.

  • CREATE VIEW called multiple times not creating all views

    - by theninepoundhammer
    Noticing strange behavior in SQL 2005, both Express and Enterprise Edition: In my code I need to loop through a series of values (about five in a row), and for each value, I need to insert the value into a table and dynamically create a new view using that value as part of the where clause and the name of the view. The code runs pretty quickly, but what I'm noticing is that all the values are inserted into the table correctly but only the LAST view is being created. Every time. For example, if the values I'm using are X1, X2, X3, X4, and X5, I'll run the process, open up Mgmt Studio, and see five rows in the table with the correct five values, but only one view named MyView_x5 that has the correct WHERE clause. At first, I had this loop in an SSIS package as part of a larger data flow. When I started noticing this behavior, I created a stored proc that would create the CREATE VIEW statement dynamically after the insert and called EXECUTE to create the view. Same result. Finally, I created some C# code using the Enterprise Library DAAB, and did the insert and CREATE VIEW statements from my DLL. Same result every time. Most recently, I turned on Profiler while running against the Enterprise Edition and was able to verify that the Batch Started and Batch Completed events were being fired off for each instance of the view. However, like I said, only the last view is actually being created. Does anyone have any idea why this might be happening? Or any suggestions about what else to check or profile? I've profiled for error messages, exceptions, etc. but don't see any in my trace file. My express edition is 9.00.1399.06. Not sure about the Enterprise edition but think it is SP2.

  • Testing When Correctness is Poorly Defined?

    - by dsimcha
    I generally try to use unit tests for any code that has easily defined correct behavior given some reasonably small, well-defined set of inputs. This works quite well for catching bugs, and I do it all the time in my personal library of generic functions. However, a lot of the code I write is data mining code that basically looks for significant patterns in large datasets. Correct behavior in this case is often not well defined and depends on a lot of different inputs in ways that are not easy for a human to predict (i.e. the math can't reasonably be done by hand, which is why I'm using a computer to solve the problem in the first place). These inputs can be very complex, to the point where coming up with a reasonable test case is near impossible. Identifying the edge cases that are worth testing is extremely difficult. Sometimes the algorithm isn't even deterministic. Usually, I do the best I can by using asserts for sanity checks and creating a small toy test case with a known pattern and informally seeing if the answer at least "looks reasonable", without it necessarily being objectively correct. Is there any better way to test these kinds of cases?
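
    A hedged sketch of one way to go beyond "looks reasonable": generate inputs from a fixed seed and assert only the invariants that any correct output must satisfy, rather than comparing against a hand-computed answer. The findClusters routine and the specific properties below are hypothetical stand-ins, not the poster's code.

        import java.util.ArrayList;
        import java.util.List;
        import java.util.Random;

        public class InvariantTest {
            public static void main(String[] args) {
                Random rnd = new Random(42);                       // fixed seed keeps the run reproducible
                List<double[]> points = new ArrayList<>();
                for (int i = 0; i < 1000; i++) {
                    points.add(new double[] { rnd.nextGaussian(), rnd.nextGaussian() });
                }

                List<List<double[]>> clusters = findClusters(points);

                // properties any correct result must satisfy, regardless of the exact clustering
                int total = clusters.stream().mapToInt(List::size).sum();
                check(total == points.size(), "all points accounted for");
                check(clusters.stream().noneMatch(List::isEmpty), "no empty clusters");
            }

            static void check(boolean cond, String what) {
                if (!cond) throw new AssertionError("invariant violated: " + what);
            }

            // placeholder so the sketch compiles; the real routine is the code under test
            static List<List<double[]>> findClusters(List<double[]> points) {
                return List.of(points);
            }
        }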

  • Slider with buttons. How to improve?

    - by Kalinin
    I need to make a slider. I have content (which should shift horizontally) and two buttons - "right" and "left". If you press a button and hold it, the content starts to move (in the appropriate direction). If you release the button, the movement stops. This copies the behavior of a normal window scrollbar. I wrote this code:

        var animateTime = 1,
            offsetStep = 5;

        // event handling for buttons "left", "right"
        $('.bttR, .bttL')
            .mousedown(function() {
                scrollContent.data('loop', true).loopingAnimation($(this));
            })
            .bind("mouseup mouseout", function() {
                scrollContent.data('loop', false);
            });

        $.fn.loopingAnimation = function(el) {
            if (this.data('loop') == true) {
                var leftOffsetStr;
                leftOffsetInt = parseInt(this.css('marginLeft').slice(0, -2));
                if (el.attr('class') == 'bttR')
                    leftOffsetStr = (leftOffsetInt - offsetStep).toString() + 'px';
                else if (el.attr('class') == 'bttL')
                    leftOffsetStr = (leftOffsetInt + offsetStep).toString() + 'px';
                this.animate({ marginLeft: leftOffsetStr }, animateTime, function() {
                    $(this).loopingAnimation(el);
                });
            }
            return this;
        }

    But it has a few things that I do not like: the animation function (loopingAnimation) is called constantly - I think this is extra load (not good) - and when moving, the content twitches and trembles (it's not pretty). How can I solve this problem more elegantly and without the drawbacks of my code?

  • Longer execution through Java shell than console?

    - by czuk
    I have a script in Python which does some computations. When I run this script in a console it takes about 7 minutes to complete, but when I run it through the Java shell it takes three times longer. I use the following code to execute the script in Java:

        this.p = Runtime.getRuntime().exec("script.py --batch", envp);
        this.input = new BufferedReader(new InputStreamReader(p.getInputStream()));
        this.output = new BufferedWriter(new OutputStreamWriter(p.getOutputStream()));
        this.error = new BufferedReader(new InputStreamReader(p.getErrorStream()));

    Do you have any suggestion why the Python script runs three times longer in Java than in a console?

    update: The computation goes as follows:

    1. Java sends data to Python.
    2. Python reads the data.
    3. Python generates a decision tree - this is a long operation.
    4. Python sends a confirmation that the tree is ready.
    5. Java receives the confirmation.

    Later there is a series of communications between Java and Python, but it takes only a few seconds.
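
    For context, one frequent cause of slowdowns when a child process is run from Java is that nothing drains its stdout/stderr, so the child stalls whenever a pipe buffer fills up. Whether that explains the 3x difference here is not established; below is a minimal sketch of draining the output on a background thread, assuming the same script.py --batch command.

        import java.io.BufferedReader;
        import java.io.IOException;
        import java.io.InputStreamReader;

        public class RunScript {
            public static void main(String[] args) throws IOException, InterruptedException {
                ProcessBuilder pb = new ProcessBuilder("script.py", "--batch");
                pb.redirectErrorStream(true);         // fold stderr into stdout: one stream to drain
                Process p = pb.start();

                Thread drain = new Thread(() -> {     // keep reading so the child never blocks on a full pipe
                    try (BufferedReader r = new BufferedReader(new InputStreamReader(p.getInputStream()))) {
                        String line;
                        while ((line = r.readLine()) != null) {
                            System.out.println(line);
                        }
                    } catch (IOException ignored) {
                    }
                });
                drain.start();

                int exit = p.waitFor();
                drain.join();
                System.out.println("script exited with " + exit);
            }
        }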

  • JavaScript window object element properties

    - by Timothy
    A coworker showed me the following code and asked me why it worked.

        <span id="myspan">Do you like my hat?</span>
        <script type="text/javascript">
            var spanElement = document.getElementById("myspan");
            alert("Here I am! " + spanElement.innerHTML + "\n" + myspan.innerHTML);
        </script>

    I explained that a property is attached to the window object with the name of the element's id when the browser parses the document, and that it contains a reference to the appropriate DOM node. It's sort of as if window.myspan = document.getElementById("myspan") were called behind the scenes as the page is being rendered. The ensuing discussion we had raised a few questions:

    - The window object and most of the DOM are not part of the official JavaScript/ECMA standards, but is the above behavior documented in any other official literature, perhaps browser-related?
    - The above works in a browser (at least the main contenders) because there is a window object, but fails in something like Rhino. Is writing code that relies on this considered bad practice because it makes too many assumptions about the execution environment?
    - Are there any browsers in which the above would fail, or is this considered standard behavior across the board?

    Does anyone here know the answers to those questions and would be willing to enlighten me? I tried a quick internet search, but I admit I'm not sure how to even properly phrase the query. Pointers to references and documentation are welcome.

  • How do I set the transaction isolation level in EJB?

    - by Nitesh Panchal
    Hello, I am not able to find a way to set the transaction isolation level in EJB. Can anybody tell me how I can set it? I am using persistence. I have looked at the following classes: EntityManager, EntityManagerFactory, UserTransaction. None of them seems to have any method like setTransactionIsolation or similar. Do we need to change persistence.xml? I just read a book named Mastering EJB 3.0, 4th edition. It gives a full 10 pages of theory about isolation levels - that this problem occurs and that one occurs and so on - but at the end it gives this paragraph:

        "As we now know, the EJB standard does not deal with isolation levels directly, and rightly so. EJB is a component specification. It defines the behavior and contracts of a business component with clients and middleware infrastructure (containers) such that the component can be rendered as various middleware services properly. EJBs therefore are transactional components that interact with resource managers, such as the JDBC resource manager or JMS resource manager, via JTS, as part of a transaction. They are not, hence, resource components in themselves. Since isolation levels are very specific to the behavior and capabilities of the underlying resources, they should therefore be specified at the resource API levels."

    What exactly does it mean? What is meant by resource-level APIs? Please help me. If persistence has no way to set the isolation level, then why do they give so much theory in an EJB book and make it unnecessarily heavy :(
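
    To make "specified at the resource API levels" concrete, here is a hedged sketch of setting isolation on the underlying JDBC resource rather than through the EJB API. The DataSource name is an assumption, many servers also expose isolation as connection-pool configuration, and whether the pool honors a per-connection change is server-specific.

        import java.sql.Connection;
        import javax.annotation.Resource;
        import javax.sql.DataSource;

        public class IsolationSketch {
            @Resource(name = "jdbc/MyDS")   // hypothetical DataSource resource reference
            private DataSource ds;

            public void doWork() throws Exception {
                Connection con = ds.getConnection();
                try {
                    // Isolation is a property of the resource (the JDBC connection), not of the EJB.
                    con.setTransactionIsolation(Connection.TRANSACTION_SERIALIZABLE);
                    // ... JDBC work here runs with the requested isolation ...
                } finally {
                    con.close();
                }
            }
        }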

  • Does deleting elements from an STL set while iterating through it invalidate the iterators?

    - by pedromanoel
    I need to go through a set and remove elements that meet a predefined criterion. This is the test code I wrote:

        #include <iostream>
        #include <set>
        #include <algorithm>

        void printElement(int value) {
            std::cout << value << " ";
        }

        int main() {
            int initNum[] = { 0, 1, 2, 3, 4, 5, 6, 7, 8, 9 };
            std::set<int> numbers(initNum, initNum + 10);

            // print '0 1 2 3 4 5 6 7 8 9'
            std::for_each(numbers.begin(), numbers.end(), printElement);

            std::set<int>::iterator it = numbers.begin();
            // iterate through the set and erase all even numbers
            for (; it != numbers.end(); ++it) {
                int n = *it;
                if (n % 2 == 0) {
                    // wouldn't this invalidate the iterator?
                    numbers.erase(it);
                }
            }

            // print '1 3 5 7 9'
            std::for_each(numbers.begin(), numbers.end(), printElement);

            return 0;
        }

    At first, I thought that erasing an element from the set while iterating through it would invalidate the iterator, and that the increment in the for loop would have undefined behavior. Even so, I executed this test code and everything went well, and I can't explain why. My question: is this defined behavior for std sets, or is it implementation specific? I am using gcc 4.3.3 on Ubuntu 10.04 (32-bit version), by the way. Thanks!
