Search Results

Search found 3071 results on 123 pages for 'executing'.

  • WPF: issue updating UI from background thread

    - by Ted Shaffer
    My code launches a background thread. The background thread makes changes and wants the UI in the main thread to update. The code that launches the thread and then waits for it looks something like this:

        Thread fThread = new Thread(new ThreadStart(PerformSync));
        fThread.IsBackground = true;
        fThread.Start();
        fThread.Join();
        MessageBox.Show("Synchronization complete");

    When the background thread wants to update the UI, it sets a StatusMessage and calls the code below:

        static StatusMessage _statusMessage;
        public delegate void AddStatusDelegate();

        private void AddStatus()
        {
            AddStatusDelegate methodForUIThread = delegate
            {
                _statusMessageList.Add(_statusMessage);
            };
            this.Dispatcher.BeginInvoke(methodForUIThread, System.Windows.Threading.DispatcherPriority.Send);
        }

    _statusMessageList is an ObservableCollection that is the source for a ListBox. The AddStatus method is called, but the code on the main thread never executes - that is, _statusMessage is not added to _statusMessageList while the thread is executing. However, once the thread is complete (fThread.Join() returns), all the stacked-up calls on the main thread are executed. But if I display a message box between the calls to fThread.Start() and fThread.Join(), then the status messages are updated properly. What do I need to change so that the code in the main thread executes (UI updates) while waiting for the thread to terminate? Thanks.

  • Parallelizing L2S Entity Retrieval

    - by MarkB
    Assuming a typical domain entity approach with SQL Server and a dbml/L2S DAL with a logic layer on top of that: in situations where lazy loading is not an option, I have settled on a convention where getting a list of entities does not also get each item's child entities (no loading), but getting a single entity does (eager loading). Since getting a single entity also gets children, it causes a cascading effect in which each child then gets its children too. This sounds bad, but as long as the model is not too deep, I usually don't see performance problems that outweigh the benefits of the ease of use.

    So if I want to get a list in which each of the items is fully hydrated with children, I combine the GetList and GetItem methods: I'll get a list and then loop through it, getting each item with the full cascade. Even this is generally acceptable in many of the projects I've worked on - but I have recently encountered situations with larger models and/or more data in which it needs to be more efficient. I've found that partitioning the loop and executing it on multiple threads yields excellent results. In my first experiment with a list of 50 items from one particular project, I did 5 threads of 10 items each and got a 3X improvement in time. Of course, the mileage will vary depending on the project, but all else being equal this is clearly a big opportunity.

    However, before I go further, I was wondering what others who have already been through this have done. What are some good approaches to parallelizing this type of thing?
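
    The pattern in question - get the list of ids, then hydrate each item with its children on a worker thread - is not specific to LINQ to SQL. The sketch below illustrates it with a fixed-size thread pool (written in Java purely as an illustration; loadItemWithChildren() is a made-up stand-in for the eager-loading GetItem call, and the thread count would be tuned per project):

        import java.util.ArrayList;
        import java.util.List;
        import java.util.concurrent.Callable;
        import java.util.concurrent.ExecutionException;
        import java.util.concurrent.ExecutorService;
        import java.util.concurrent.Executors;
        import java.util.concurrent.Future;

        // Partition-and-hydrate sketch: submit one eager load per id to a bounded pool,
        // then collect the fully loaded items in their original order.
        public abstract class ParallelHydration<T> {

            public List<T> loadAll(List<Long> ids, int threads)
                    throws InterruptedException, ExecutionException {
                ExecutorService pool = Executors.newFixedThreadPool(threads);
                try {
                    List<Future<T>> futures = new ArrayList<>();
                    for (Long id : ids) {
                        Callable<T> task = () -> loadItemWithChildren(id);  // one item, full cascade
                        futures.add(pool.submit(task));
                    }
                    List<T> result = new ArrayList<>();
                    for (Future<T> f : futures) {
                        result.add(f.get());   // blocks until that item is hydrated
                    }
                    return result;
                } finally {
                    pool.shutdown();
                }
            }

            // Stand-in for the per-item eager load; each call should use its own
            // connection/DataContext so the worker threads don't share one.
            protected abstract T loadItemWithChildren(Long id);
        }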

  • .NET 4 ... Parallel.ForEach() question

    - by CirrusFlyer
    I understand that the new TPL (Task Parallel Library) has implemented Parallel.ForEach() such that it works with "expressed parallelism." Meaning, it does not guarantee that your delegates will run in multiple threads; rather, it checks to see if the host platform has multiple cores, and if true, only then does it distribute the work across the cores (essentially 1 thread per core). If the host system does not have multiple cores (getting harder and harder to find such a computer), then it will run your code sequentially, like a "regular" foreach loop would. Pretty cool stuff, frankly.

    Normally I would do something like the following to place my long-running operation on a background thread from the ThreadPool:

        ThreadPool.QueueUserWorkItem( new WaitCallback(targetMethod), new Object2PassIn() );

    In a situation whereby the host computer only has a single core, does the TPL's Parallel.ForEach() automatically place the invocation on a background thread? Or should I manually invoke any TPL calls from a background thread, so that if I am executing on a single-core computer, at least that logic will be off of the GUI's dispatching thread? My concern is that if I leave the TPL in charge of all this, I want to ensure that if it determines it's a single-core box, it still marshals the code that's inside the Parallel.ForEach() loop onto a background thread like I would have done, so as to not block my GUI. Thanks for any thoughts or advice you may have ...

  • Performance difference between functions and pattern matching in Mathematica

    - by Samsdram
    So Mathematica is different from other dialects of Lisp because it blurs the lines between functions and macros. In Mathematica, if a user wanted to write a mathematical function they would likely use pattern matching, like f[x_]:= x*x, instead of f=Function[{x},x*x], though both would return the same result when called with f[x]. My understanding is that the first approach is something equivalent to a Lisp macro, and in my experience it is favored because of the more concise syntax.

    So I have two questions: is there a performance difference between executing functions versus the pattern matching/macro approach? Though part of me wouldn't be surprised if functions were actually transformed into some version of macros to allow features like Listable to be implemented.

    The reason I care about this question is because of the recent set of questions (1) (2) about trying to catch Mathematica errors in large programs. If most of the computations were defined in terms of Functions, it seems to me that keeping track of the order of evaluation and where the error originated would be easier than trying to catch the error after the input has been rewritten by the successive application of macros/patterns.

  • Execute external program from Java

    - by Saurabh Lalwani
    Hi, I am trying to execute a program from the Java code. Here is my code:

        public static void main(String argv[]) {
            try {
                String line;
                Process p = Runtime.getRuntime().exec(
                        "/bin/bash -c ls > OutputFileNames.txt");
                BufferedReader input = new BufferedReader(
                        new InputStreamReader(p.getInputStream()));
                while ((line = input.readLine()) != null) {
                    System.out.println(line);
                }
                input.close();
            } catch (Exception err) {
                err.printStackTrace();
            }
        }

    My OS is Mac OS X 10.6. If I remove the "> OutputFileNames.txt" from the getRuntime().exec() method, all the file names are printed on the console just fine. But I need it to be printed to a file. Also, if I change the command to:

        Process p = Runtime.getRuntime().exec(
                "cmd \c dir > OutputFileNames.txt");

    and run it on Windows, it runs and prints the results in the file perfectly fine too. I have read the other posts for executing another application from Java but none seemed to relate to my problem. I would really appreciate any help I can get. Thanks,
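
    For reference, a common reason this pattern fails is that Runtime.exec() does not hand the string to a shell - it only tokenizes it, so the ">" is never interpreted as redirection. The sketch below (not from the question; file and command names are only examples) shows the two usual ways around that on a Unix-like system:

        import java.io.File;
        import java.io.IOException;

        public class ExecRedirectSketch {
            public static void main(String[] args) throws IOException, InterruptedException {
                // Option 1: pass the command as a String[], so the whole pipeline reaches
                // bash as a single -c argument and bash performs the redirection.
                Process p1 = Runtime.getRuntime().exec(
                        new String[] { "/bin/bash", "-c", "ls > OutputFileNames.txt" });
                p1.waitFor();

                // Option 2 (Java 7+): skip the shell and let ProcessBuilder redirect
                // the child's stdout to a file directly.
                Process p2 = new ProcessBuilder("ls")
                        .redirectOutput(new File("OutputFileNames2.txt"))
                        .start();
                p2.waitFor();
            }
        }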

  • make target is never determined up to date

    - by Michael
    Cygwin make always reprocesses the $(chrome_jar_file) target, even after the first successful build, so I never get an "up to date" message and I always see the commands for $(chrome_jar_file) executing. However, it happens only on Windows 7; on Windows XP, once it is built it stays intact and there are no more rebuilds. I narrowed the issue down to one prerequisite - $(jar_target_dir). Here is the relevant part of the makefile:

        # The location where the JAR file will be created.
        jar_target_dir := $(build_dir)/chrome

        # The main chrome JAR file.
        chrome_jar_file := $(jar_target_dir)/$(extension_name).jar

        # The root of the JAR sources.
        jar_source_root := chrome

        # The sources for the JAR file.
        jar_sources := bla # ... some files, doesn't matter

        jar_sources_no_dir := $(subst $(jar_source_root)/,,$(jar_sources))

        $(chrome_jar_file): $(jar_sources) $(jar_target_dir)
                @echo "Creating chrome JAR file."
                @cd $(jar_source_root); $(ZIP) ../$(chrome_jar_file) $(jar_sources_no_dir)
                @echo "Creating chrome JAR file. Done!"

        $(jar_target_dir): $(build_dir)
                echo "Creating jar target dir..."
                if [ ! -x $(jar_target_dir) ]; \
                then \
                        mkdir $(jar_target_dir); \
                fi

        $(build_dir):
                @if [ ! -x $(build_dir) ]; \
                then \
                        mkdir $(build_dir); \
                fi

    So if I just remove $(jar_target_dir) from the $(chrome_jar_file) rule, it works fine.

  • Call 32-bit or 64-bit program from bootloader

    - by user1002358
    There seems to be quite a lot of identical information on the Internet about writing the following 3 bootloaders: an infinite loop (jmp $), printing a single character, and printing "Hello World". This is fantastic, and I've gone through these 3 variations with very little trouble.

    I'd like to write some 32- or 64-bit code in C and compile it, and call that code from the bootloader... basically a bootloader that, for example, sets the computer up to run some simple numerical simulation. I'll start by listing primes, for example, and then maybe some input/output from the user to compute a Fourier transform. I don't know.

    I haven't found any information on how to do this, but I can already foresee some problems before I even begin. First of all, compiling a C program produces one of several different file formats, depending on the target. For Windows, it's a PE file. For Linux, it's an a.out file. These files are both quite different. In my instance, the target isn't Windows or Linux, it's just whatever I have written in the bootloader.

    Secondly, where would the actual code reside? The bootloader is exactly 512 bytes, but the program I write in C will certainly compile to something much larger. It will need to sit on my (virtual) hard disk, probably in some sort of file system (which I haven't even defined!), and I will need to load the information from this file into memory before I can even think about executing it. But from my understanding, all this is many, many orders of magnitude more complex than a 12-line "Hello World" bootloader.

    So my question is: how do I call a large 32- or 64-bit program (written in C/C++) from my 16-bit bootloader?

  • Is there any better way to capture the screen than PIL.ImageGrab.grab()?

    - by user1474837
    I am making a screen capture program with Python. My current problem is that PIL.ImageGrab.grab() gives me the same output as it does 2 seconds later. For instance (I think I am not being clear): in the following session, almost all the images are the same and have the same Image.tostring() value, even though I was moving my screen around during the time the PIL.ImageGrab.grab loop was executing.

        >>> from PIL.ImageGrab import grab
        >>> l = []
        >>> import time
        >>> for a in l:
                l.append(grab())
                time.sleep(0.01)

        >>> for a in range(0, 30):
                l.append(grab())
                time.sleep(0.01)

        >>> b = []
        >>> for a in l:
                b.append(a.tostring())

        >>> len(b)
        30
        >>> del l
        >>> last = []
        >>> a = 0
        >>> a = -1
        >>> last = ""
        >>> same = -1
        >>> for pic in b:
                if b == last:
                        same = same + 1
                last = b

        >>> same
        28
        >>>

    This is a problem, as all the images but 1 are the same: only 1 out of 30 is different. That would make for an absolutely horrible quality video. Please tell me if there is any better quality alternative to PIL.ImageGrab.grab(). I need to capture the whole screen. Thanks!

  • Error while closing SQL Connection

    - by Wickedman84
    I have a problem with closing the SQL connection in my application. My application is in VB.NET. I have a reference in my application to a class with code to open and close the database connection and to execute all SQL scripts. The error occurs when I close my application. In the FormClosing event of my main form I call a function that closes all the connections. But just before I close the connections I perform an SQL query to delete a row from a table, with the function below.

        Public Function DeleteFunction(ByVal mySQLQuery As String, ByVal cmd As SqlCommand) As Boolean
            Try
                cmd.Connection = myConnection
                cmd.CommandText = mySQLQuery
                cmd.ExecuteNonQuery()
                Return True
            Catch ex As Exception
                WriteErrorMessage("DeleteFunction", ex, Logpath, "SQL Error: " & mySQLQuery)
                Return False
            End Try
        End Function

    In my application I check the result of the Boolean. If it returns True, then I call the function to close the database connection. The returned Boolean is True and the requested row is deleted in my database. This means I can close my connection, which I do with the function below.

        Public Sub DatabaseConnClose()
            myCommand.CommandText = ""
            myConnection.Close()
            myCommand = Nothing
            myConnection = Nothing
        End Sub

    After executing this code I receive an error in my logfile from the DeleteFunction. It says: "Connection property has not been initialized." It seems very strange to receive an error from a function that was completely executed - or am I wrong to think that? Can anyone tell me why I receive this error and how I can solve the problem?

  • Can a second stored procedure doing the same thing finish before first one?

    - by evanmortland
    Hello, I have an audit record table that I am writing to. I am connecting to MyDb, which has a stored procedure called 'CreateAudit', which is a passthrough stored procedure to another database on the same machine, called 'CreatedAudit' as well. I call the CreateAudit stored procedure from my application, using SubSonic as the DAL. The first time I call it, I call it with the following (pseudocode):

        Result = CreateAudit(recordId, "Opened")

    Right after that, I call:

        Result2 = CreateAudit(recordId, "Closed")

    In my second stored procedure call, it is supposed to mark the record that was created by CreateAudit(recordId, "Opened") with a status of closed. It works great if I run them independently of one another, but when they run in sequence in the application, the record is not marked as "Closed". When I run SQL Profiler I see that both queries ran, and if I copy the queries out and run them from Query Analyzer the record gets marked as closed 100% of the time! When I run it from the application, about once every 20 times or so the record is successfully marked closed - the other 19 times nothing happens, but I do not get an error!

    Is it possible for the .NET app to skip over the output from the first stored procedure and start executing the second stored procedure before the record in the first is created? When I add a "WAITFOR DELAY '00:00:00:003'" to the top of my stored procedure, the record is also closed 100% of the time. My head is spinning - any ideas why this is happening? Thanks for any responses; I'm very interested in hearing how this can happen.

  • Making firefox refresh images faster

    - by Earlz
    I have a thing I'm doing where I need a webpage to stream a series of images from the local client computer. I have a very simple run here: http://jsbin.com/idowi/34 The code is extremely simple:

        setTimeout ( "refreshImage()", 100 );

        function refreshImage(){
            var date = new Date()
            var ticks = date.getTime()
            $('#image').attr('src','http://127.0.0.1:2723/signature?'+ticks.toString());
            setTimeout ("refreshImage()", 100 );
        }

    Basically I have a signature pad being used on the client machine. We want the signature to show up in the web page and for them to see themselves signing it within the web page (the pad does not have an LCD to show it to them right there). So I set up a simple local HTTP server which grabs an image of what the current state of the signature pad looks like, and it gets sent to the browser. This has no problems in any browser (tested in IE7, IE8, and Chrome) but Firefox, where it is extremely laggy and jumpy and doesn't keep up with the 10 FPS rate. Does anyone have any ideas on how to fix this? I've tried creating very simple double buffering in javascript but that just made things worse.

    Also, for a bit more information, it seems that Firefox is executing the javascript at the correct framerate, as the requests are coming in to the server at a constant speed. But the images are only refreshed inconsistently, ranging from 5 times per second all the way down to 0 times per second (taking 2 seconds to do a refresh). Also, I have tried using different image formats, all with the same results. The formats I've tried include bitmaps, PNGs, and GIFs (GIFs caused a minor problem in Chrome with flicker though).

    Could it be possible that Firefox is somehow caching my images, causing a slight lag? I send these headers though:

        Pragma-directive: no-cache
        Cache-directive: no-cache
        Cache-control: no-cache
        Pragma: no-cache
        Expires: 0

  • "Address of" (&) an array / address of being ignored be gcc?

    - by dbarbosa
    Hi, I am a teaching assistant for an introductory programming course, and some students made this type of error:

        char name[20];
        scanf("%s", &name);

    which is not surprising as they are learning... What is surprising is that, besides the gcc warning, the code works (at least this part). I have been trying to understand why, and I wrote the following code:

        void foo(int *str1, int *str2)
        {
            if (str1 == str2)
                printf("Both pointers are the same\n");
            else
                printf("They are not the same\n");
        }

        int main()
        {
            int test[50];
            foo(&test, test);
            if (&test == test)
                printf("Both pointers are the same\n");
            else
                printf("They are not the same\n");
        }

    Compiling and executing:

        $ gcc test.c -g
        test.c: In function ‘main’:
        test.c:12: warning: passing argument 1 of ‘foo’ from incompatible pointer type
        test.c:13: warning: comparison of distinct pointer types lacks a cast
        $ ./a.out
        Both pointers are the same
        Both pointers are the same

    Can anyone explain why they are not different? I suspect it is because I cannot get the address of an array (as I cannot have & &x), but in this case the code should not compile.

  • wget not completely processing the http call

    - by user578458
    Here is a wget command that executes an HTML/PHP stack report suite that is hosted by a third party - we don't have control over the PHP or HTML page:

        wget --no-check-certificate --http-user=/myacc --http-password=mypass -O /tmp/myoutput.csv "https://myserver.mydomain.com/mymodule.php?myrepcode=9999&action=exportcsv&admin=myappuserid&password=myappuserpass&startdate=2011-01-16&enddate=2011-01-16&reportby=mypreferredview"

    All the elements are working perfectly:

    --http-user / --http-password: as offered by a browser's standard username and password prompt
    -O /tmp/myoutput.csv: the output file of interest
    https://myserver.mydomain.com/mymodule.php?...: the file generated on the fly by the parameters
    myrepcode=9999: a reference to the report in question
    action=exportcsv: internally written in the function
    admin=myappuserid / password=myappuserpass: the third party operates SSL to access the site, then an internal username and password stored in a database to access the functions of the site
    startdate=2011-01-16 / enddate=2011-01-16: parameters specific to report 9999
    reportby=mypreferredview: an option in the report that facilitates different levels of detail or aggregation

    The problem is that the reportby parameter is a radio button selection in a list of 5 selections (sure enough, the default is the highest level of aggregation; I want the last one, which is the most detailed). Here is a sample of the HTML page code for the options of reportby; the option labels are: View by - The Default, My Least Preferred, My Second Least Preferred, My Third Least Preferred, My Preferred. No matter which of the reportby items I select in the wget statement, the default is always executed.

    Questions:
    1) Has anyone come across this notation in HTML (id=inputname[inputelement])? I spoke to a senior web developer and he has never seen this notation for inputs, and w3schools do not appear familiar with it either, based on an extensive search.
    2) Can a wget command select a non-default radio item when executing the command?

    This probably will be initially received with a "Use cURL" response - however, the wget approach works very well in the limited environment I am operating in, particularly as I need to download 10000 of these items. Thanks ahead of response.

  • C# Set combo item with selectedValue

    - by Martijn
    I am dynamically creating a combobox like this:

        public Control GenerateList(Question question)
        {
            // Get a list with answer possibilities
            List<QuestionAnswer> answers = question.GetAnswers();

            // Get a collection of given answers
            Collection<QuestionnaireAnswer> givenAnswers = question.GetFilledAnswers();

            ComboBox cmb = new ComboBox();
            cmb.Name = "cmb";
            cmb.DataSource = answers;
            cmb.DisplayMember = "Answer";
            cmb.ValueMember = "Id";

            // Check an answer is given to the question
            if (givenAnswers != null && givenAnswers.Count > 0)
            {
                cmb.SelectedValue = givenAnswers[0].AnswerId;
            }

            cmb.DropDownStyle = ComboBoxStyle.DropDownList;
            cmb.SelectedIndexChanged += new EventHandler(cmb_SelectedIndexChanged);
            cmb.Leave += new EventHandler(cmb_Leave);
            return cmb;
        }

    The problem is, when executing cmb.SelectedValue = givenAnswers[0].AnswerId;, cmb.SelectedValue is always null. When debugging, if I explore answers (the DataSource) I see that Id (ValueMember) is exactly the same as AnswerId (in the if statement). Both have the same type (long) and the same value, but SelectedValue stays null. Is there something I don't see?

  • How do I daemonize an arbitrary script in unix?

    - by dreeves
    I'd like a daemonizer that can turn an arbitrary, generic script or command into a daemon. There are two common cases I'd like to deal with:

    1. I have a script that should run forever. If it ever dies (or on reboot), restart it. Don't let there ever be two copies running at once (detect if a copy is already running and don't launch it in that case).

    2. I have a simple script or command line command that I'd like to keep executing repeatedly forever (with a short pause between runs). Again, don't allow two copies of the script to ever be running at once.

    Of course it's trivial to write a "while(true)" loop around the script in case 2 and then apply a solution for case 1, but a more general solution will just solve case 2 directly, since that applies to the script in case 1 as well (you may just want a shorter or no pause if the script is not intended to ever die; of course, if the script really does never die then the pause doesn't actually matter). Note that the solution should not involve, say, adding file-locking code or PID recording to the existing scripts.

    More specifically, I'd like a program "daemonize" that I can run like

        % daemonize myscript arg1 arg2

    or, for example,

        % daemonize 'echo `date` >> /tmp/times.txt'

    which would keep a growing list of dates appended to times.txt. (Note that if the argument(s) to daemonize is a script that runs forever as in case 1 above, then daemonize will still do the right thing, restarting it when necessary.) I could then put a command like the above in my .login and/or cron it hourly or minutely (depending on how worried I was about it dying unexpectedly).

    NB: The daemonize script will need to remember the command string it is daemonizing so that if the same command string is daemonized again it does not launch a second copy. Also, the solution should ideally work on both OS X and Linux, but solutions for one or the other are welcome. (If I'm thinking of this all wrong or there are quick-and-dirty partial solutions, I'd love to hear that too.)

  • Delay PHP execution until JavaScript cookie set?

    - by Adam184
    I am trying to delay PHP execution until a cookie is set through JavaScript. The code is below (I trimmed the createCookie JavaScript function for simplicity; I've tested the function itself and it works).

        <?php
        if(!isset($_COOKIE["test"])) {
        ?>
            <script type="text/javascript">
                $(function() {
                    // createCookie script
                    createCookie("test", 1, 3600);
                });
            </script>
        <?php
            // Reload the page to ensure cookie was set
            if(!isset($_COOKIE["test"])) {
                header("Location: http://localhost/asdf.php/");
            }
        }
        ?>

    At first I had no idea why this didn't work; however, after using microtime() I figured out that the PHP after the <script> block was executing before the jQuery ready function. I reduced my code significantly to show a simple version that is answerable. I am well aware that I am able to use setcookie() in PHP; the requirements for the cookie are client-side. I understand mixing PHP and JavaScript like this is incorrect, but any help on how to make this work would be greatly appreciated. (Is there a PHP delay? I tried sleep(); it didn't work, and I didn't think it would, since the scripts would be delayed as well.)

  • Get parameter values from method at run time

    - by Landin Martens
    I have the current method example:

        public void MethodName(string param1, int param2)
        {
            object[] obj = new object[] { (object) param1, (object) param2 };
            // Code that uses this array to invoke dynamic methods
        }

    Is there a dynamic way (I am guessing using reflection) to get the currently executing method's parameter values and place them in an object array? I have read that you can get parameter information using MethodBase and MethodInfo, but those only have information about the parameter and not the value itself, which is what I need. So, for example, if I pass "test" and 1 as method parameters, can I get an object array with two indexes { "test", 1 } without coding for the specific parameters?

    I would really like to not have to use a third-party API, but if it has source code for that API then I will accept that as an answer, as long as it's not a huge API and there is no simple way to do it without it. I am sure there must be a way - maybe using the stack, who knows. You guys are the experts and that is why I come here. Thank you in advance, I can't wait to see how this is done.

    EDIT: It may not be clear, so here is some extra information. This code example is just that, an example to show what I want. It would be too bloated and big to show the actual code where it is needed, but the question is how to get the array without manually creating one. I need to somehow get the values and place them in an array without coding the specific parameters.

  • How to catch this low level MySQL (?) error in PHP/Magento

    - by andnil
    When I'm executing the following statement in Magento with a really large $sku, the execution terminates without any errors thrown whatsoever. There are no errors in either Magento's, Apache's or PHP's error logs.

        Mage::getModel('catalog/product')->loadByAttribute('sku', $sku);

    Question: How do I catch the error? I've tried to set custom error handlers, and for testing purposes I've also managed to trigger error situations where each of the error handler functions is invoked. But when running the previously mentioned Magento code with a large $sku, none of the error handling functions are executed.

        error_reporting( -1 );
        set_error_handler( array( 'Error', 'captureNormal' ) );
        set_exception_handler( array( 'Error', 'captureException' ) );
        register_shutdown_function( array( 'Error', 'captureShutdown' ) );

    For completeness, this is the $sku I'm passing to loadByAttribute(). (The sku is invalid, but that is not the issue.)

        1- 9685 0102046|1- 9685 1212100|1- 9685 1212092|1- 9685 1212096|1- 9685 1102100|1- 9685 1102108|1- 9685 1102112|1- 9685 1102092|1- 9685 0102048|1- 9685 0102054|1- 9685 0102056|1- 9685 0102058|1- 9685 1212104|1- 9685 1212108|1- 9685 0212058|1- 9685 0104050|1- 9685 0212050|1- 9685 0212056|1- 9685 0212044|1- 9685 0212048|1- 9685 0212052|1- 9685 0212054|1- 9685 1102104|1- 9685 1102124

    Any insight into this matter is much appreciated!

    Update: Upon further investigation, this is the exact point in the code where execution terminates. When the foreach is executed, I guess Magento goes into MySQL world and starts loading up data from the database.

        // \Mage\Catalog\Model\Abstract.php
        public function loadByAttribute($attribute, $value, $additionalAttributes = '*')
        {
            $collection = $this->getResourceCollection()
                ->addAttributeToSelect($additionalAttributes)
                ->addAttributeToFilter($attribute, $value)
                ->setPage(1,1);

            foreach ($collection as $object) {   // <--------------- HERE
                return $object;
            }

            return false;
        }

    Note, I'm ONLY interested in finding out how to properly CATCH these kinds of errors, not "fix" the logic. This is so that I can present a proper error message to the user. The example above with the malformed sku is contrived, and I have no desire to make my Magento app work with those erroneous skus.

  • Create ImageButton programatically depending on local database records

    - by user2507920
    As described in the title, I have a local DB in SQLite. I want to create ImageButtons programmatically - as many as there are records returned by the local database loop below. Please see the code:

        RelativeLayout outerRelativeLayout = (RelativeLayout) findViewById(R.id.relativeLayout2_making_dynamically);

        db.open();
        Cursor c = db.getAllLocal_Job_Data();
        if (c != null) {
            if (c.moveToFirst()) {
                do {
                    RelativeLayout innerRelativeLayout = new RelativeLayout(CalendarActivity.this);
                    innerRelativeLayout.setGravity(Gravity.CENTER);

                    ImageButton imgBtn = new ImageButton(CalendarActivity.this);
                    imgBtn.setBackgroundResource(R.drawable.color_001);

                    RelativeLayout.LayoutParams imageViewParams = new RelativeLayout.LayoutParams(
                            ViewGroup.LayoutParams.WRAP_CONTENT, ViewGroup.LayoutParams.WRAP_CONTENT);
                    imgBtn.setLayoutParams(imageViewParams);

                    // Adding the textView to the inner RelativeLayout as a child
                    innerRelativeLayout.addView(imgBtn, new RelativeLayout.LayoutParams(
                            ViewGroup.LayoutParams.WRAP_CONTENT, ViewGroup.LayoutParams.WRAP_CONTENT));

                    outerRelativeLayout.addView(innerRelativeLayout, new RelativeLayout.LayoutParams(
                            ViewGroup.LayoutParams.WRAP_CONTENT, ViewGroup.LayoutParams.WRAP_CONTENT));
                } while (c.moveToNext());
            }
        }
        db.close();

    But when I run the project, I can see only one button created there. I think there are many buttons, but each new ImageButton is being created on top of the last one. I think I should use android:layout_toRightOf with the previously created button, but I can't find how to apply that here. I have tried some ideas but nothing changed anything. So if anybody has any idea how to solve my problem, please share it with me.
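
    One way to get the buttons to sit next to each other instead of stacking at the default position is the programmatic equivalent of android:layout_toRightOf: remember the id of the previously added button and add a RIGHT_OF rule against it. The sketch below is only an illustration (the method name and id base are made up); a horizontal LinearLayout with addView() would achieve the same effect with less code.

        // Inside the Activity; uses android.widget.ImageButton, android.widget.RelativeLayout,
        // android.view.ViewGroup. Lays out 'count' buttons left-to-right in one RelativeLayout.
        private void addButtonsSideBySide(RelativeLayout container, int count) {
            int previousId = 0;                    // 0 = no previous button yet
            for (int i = 0; i < count; i++) {
                ImageButton imgBtn = new ImageButton(this);
                imgBtn.setBackgroundResource(R.drawable.color_001);
                imgBtn.setId(1000 + i);            // any unique positive id works (1000 is arbitrary)

                RelativeLayout.LayoutParams lp = new RelativeLayout.LayoutParams(
                        ViewGroup.LayoutParams.WRAP_CONTENT, ViewGroup.LayoutParams.WRAP_CONTENT);
                if (previousId != 0) {
                    lp.addRule(RelativeLayout.RIGHT_OF, previousId);  // like layout_toRightOf
                }
                container.addView(imgBtn, lp);

                previousId = imgBtn.getId();
            }
        }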

  • Can a Java HashMap's size() be out of sync with its actual entries' size ?

    - by trix
    I have a Java HashMap called statusCountMap. Calling size() results in 30, but if I count the entries manually, it's 31. This is in one of my TestNG unit tests. The results below are from Eclipse's Display window (type code - highlight - hit Display Result of Evaluating Selected Text).

        statusCountMap.size()
             (int) 30
        statusCountMap.keySet().size()
             (int) 30
        statusCountMap.values().size()
             (int) 30
        statusCountMap
             (java.util.HashMap) {40534-INACTIVE=2, 40526-INACTIVE=1, 40528-INACTIVE=1, 40492-INACTIVE=3, 40492-TOTAL=4, 40513-TOTAL=6, 40532-DRAFT=4, 40524-TOTAL=7, 40526-DRAFT=2, 40528-ACTIVE=1, 40524-DRAFT=2, 40515-ACTIVE=1, 40513-DRAFT=4, 40534-DRAFT=1, 40514-TOTAL=3, 40529-DRAFT=4, 40515-TOTAL=3, 40492-ACTIVE=1, 40528-TOTAL=4, 40514-DRAFT=2, 40526-TOTAL=3, 40524-INACTIVE=2, 40515-DRAFT=2, 40514-ACTIVE=1, 40534-TOTAL=3, 40513-ACTIVE=2, 40528-DRAFT=2, 40532-TOTAL=4, 40524-ACTIVE=3, 40529-ACTIVE=1, 40529-TOTAL=5}
        statusCountMap.entrySet().size()
             (int) 30

    What gives? Has anyone experienced this? I'm pretty sure statusCountMap is not being modified at this point. There are 2 methods (let's call them methodA and methodB) that modify statusCountMap concurrently, by repeatedly calling incrementCountInMap:

        private void incrementCountInMap(Map map, Long id, String qualifier) {
            String key = id + "-" + qualifier;
            if (map.get(key) == null) {
                map.put(key, 0);
            }
            synchronized (map) {
                map.put(key, map.get(key).intValue() + 1);
            }
        }

    methodD is where I'm getting the issue. methodD has a TestNG @dependsOnMethods = { "methodA", "methodB" }, so when methodD is executing, statusCountMap is pretty much static already. I'm mentioning this because it might be a bug in TestNG. I'm using Sun JDK 1.6.0_24. TestNG is testng-5.9-jdk15.jar.

    Hmmm ... after rereading my post, could it be the concurrent execution of the outside-of-synchronized-block map.get(key) == null and map.put(key, 0) that's causing this issue?
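
    For what it's worth, the unsynchronized check-then-put at the top of incrementCountInMap can race with puts from the other thread, and concurrent structural modification of a plain HashMap can leave it in exactly this kind of inconsistent state. Below is a sketch of one way to make the whole increment atomic, using ConcurrentHashMap.merge (available from Java 8; the key values are just examples, not from the post):

        import java.util.Map;
        import java.util.concurrent.ConcurrentHashMap;

        public class StatusCounter {
            private final Map<String, Integer> statusCountMap = new ConcurrentHashMap<>();

            public void incrementCountInMap(Long id, String qualifier) {
                String key = id + "-" + qualifier;
                // "insert 1, or add 1 to the existing value" happens as one atomic step
                statusCountMap.merge(key, 1, Integer::sum);
            }

            public static void main(String[] args) throws InterruptedException {
                StatusCounter c = new StatusCounter();
                Runnable writer = () -> {
                    for (int i = 0; i < 1000; i++) {
                        c.incrementCountInMap(40534L, "INACTIVE");
                    }
                };
                Thread a = new Thread(writer);
                Thread b = new Thread(writer);
                a.start(); b.start();
                a.join(); b.join();
                System.out.println(c.statusCountMap);  // {40534-INACTIVE=2000} every run
            }
        }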

  • How can I compare the performance of log() and fp division in C++?

    - by Ventzi Zhechev
    Hi, I’m using a log-based class in C++ to store very small floating-point values (as the values otherwise go beyond the scope of double). As I’m performing a large number of multiplications, this has the added benefit of converting the multiplications to sums.

    However, at a certain point in my algorithm, I need to divide a standard double value by an integer value and then do a *= to a log-based value. I have overloaded the *= operator for my log-based class, and the right-hand side value is first converted to a log-based value by running log() and then added to the left-hand side value. Thus the operations actually performed are floating-point division, log(), and floating-point summation.

    My question is whether it would be faster to first convert the denominator to a log-based value, which would replace the floating-point division with floating-point subtraction, yielding the following chain of operations: twice log(), floating-point subtraction, floating-point summation. In the end, this boils down to whether floating-point division is faster or slower than log(). I suspect that a common answer would be that this is compiler and architecture dependent, so I’ll say that I use gcc 4.2 from Apple on darwin 10.3.0. Still, I hope to get an answer with a general remark on the speed of these two operators and/or an idea on how to measure the difference myself, as there might be more going on here, e.g. executing the constructors that do the type conversion etc. Cheers!
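
    On the "measure it myself" part: the usual approach is to time a tight loop of each operation while accumulating the results into a variable that is used afterwards, so the compiler cannot optimize the work away, and then compare the totals. The harness below sketches that idea; it is written in Java only to illustrate the methodology (a C++ version would time std::log and division the same way, and its absolute numbers under gcc will of course differ).

        public class LogVsDivide {
            static final int N = 50_000_000;

            public static void main(String[] args) {
                double sink = 0;
                // warm-up pass so runtime compilation doesn't distort the timed loops
                for (int i = 1; i <= N; i++) sink += 1.0 / i + Math.log(i);

                long t0 = System.nanoTime();
                for (int i = 1; i <= N; i++) sink += 1.0 / i;        // division
                long t1 = System.nanoTime();
                for (int i = 1; i <= N; i++) sink += Math.log(i);    // log()
                long t2 = System.nanoTime();

                // printing sink keeps both loops "live" for the optimizer
                System.out.printf("division: %.1f ms, log(): %.1f ms (sink=%f)%n",
                        (t1 - t0) / 1e6, (t2 - t1) / 1e6, sink);
            }
        }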

  • Why isn't this json/jquery function returning 10 results and why does it return 0 sometimes?

    - by Scarface
    Hey guys, quick question. I have a JSON function returning results from a PHP query, but it is not working out, because it seems to keep executing the query until all results are returned, and it sometimes returns 0 results (if I keep refreshing the page, it returns 0 results sometimes). I want to limit the results returned to 10, so I was wondering if anyone could point out the error in my code. Would really appreciate it. PS: I am positive the query is returning the correct results, and only 10, on its own. It just seems to be this one function that is not working out.

        SELECT time, user, comment FROM comments WHERE topic_id='$topic_id' ORDER BY time DESC LIMIT 10

        var count=0;

        function prepare(response) {
            count++;
            var string = '<li class="list" id="list-'+count+'">'
                       +'</li>';
            return string;
        }

        $.getJSON(files+"comments.php?action=view&load=initial&topic_id="+topic_id+"&t=" + (new Date()), function(json) {
            if(json.length) {
                for(i=0; i < json.length; i++) {
                    $('#shoutbox-list').prepend(prepare(json[i]));
                    $('#list-' + count).fadeIn(1500);
                }
                var j = i-1;
                lastTime = json[j].time;
            }
        });

  • Combining two exe files

    - by Sophia
    Here's some background to my problem: I have a project in Visual C++ 2006 and a project in Visual C++ 2010 Express. Each compiles to an exe file. I cannot convert my 2006 project to 2010 because I get a lot of "unable to load project" errors. I also cannot port my 2010 project code to 2006 (I always get errors no matter what I try, something to do with libraries). My final solution requires me to have only ONE executable. Is there anything I can do to achieve that?

    I've done a quick search on Google and found that there are exe joiners, but I've also heard that those things are often used to make malware. For reference, I am working with "dummy" clients and therefore want to simplify things on their end as much as possible. Thus, having them execute one exe is better than having them execute two. Also, I do not wish for their antivirus to go haywire because I used some program to join two exes together. What should I do?

    Edit: The two project files do different things. For example, the project in VS2006 sets up a server, and the project in VS2010 grabs info on the user's OS. The code for the "server", I think, has a lot of dependencies and for some reason cannot be converted to Visual C++ 2010. The code for "grabbing" requires some newer libraries and compiler options, and would not work if I ported it to 2006.

  • What is the meaning of @ModelAttribute annotation at method argument level?

    - by beemaster
    The Spring 3 reference teaches us:

        When you place it on a method parameter, @ModelAttribute maps a model attribute to the specific, annotated method parameter

    I don't understand this magic spell, because I am sure that the model object's alias (the key value, if using ModelMap as the return type) is passed to the View after the request handler method has executed. Therefore, while the request handler method is executing, the model object's name can't be mapped to the method parameter.

    To solve this contradiction I went to Stack Overflow and found this detailed example. The author of the example said:

        // The "personAttribute" model has been passed to the controller from the JSP

    It seems he is charmed by the Spring reference... To dispel the charms I deployed his sample app in my environment and cruelly cut the @ModelAttribute annotation from the method MainController.saveEdit. As a result, the application works without any changes!

    So I conclude: the @ModelAttribute annotation is not needed to pass the web form's field values to the argument's fields. Which leaves me stuck on the question: what is the meaning of the @ModelAttribute annotation? If its only purpose is to set an alias for the model object in the View, then why is this way better than explicitly adding the object to the ModelMap?
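
    For context, a minimal sketch of what the parameter-level annotation does under the usual Spring MVC conventions (class, field and mapping names here are invented, not taken from the linked example):

        import org.springframework.stereotype.Controller;
        import org.springframework.ui.Model;
        import org.springframework.web.bind.annotation.ModelAttribute;
        import org.springframework.web.bind.annotation.RequestMapping;

        @Controller
        public class PersonController {

            public static class Person {                 // form-backing bean (made up)
                private String name;
                public String getName() { return name; }
                public void setName(String name) { this.name = name; }
            }

            // Before the body runs, Spring creates (or finds) a Person, binds request
            // parameters such as "name=..." onto it, and also stores it in the model
            // under the key "person" so the view can render it back.
            @RequestMapping("/save")
            public String save(@ModelAttribute("person") Person person) {
                // person.getName() already holds the submitted form value here
                return "personView";
            }

            // Without the annotation, binding still happens for a non-simple parameter
            // type (which is why the sample app kept working); the attribute is then
            // exposed under the default name - the decapitalized class name - unless
            // you add it to the model explicitly.
            @RequestMapping("/saveDefault")
            public String saveDefault(Person person, Model model) {
                model.addAttribute("editedPerson", person);   // the explicit alternative
                return "personView";
            }
        }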

  • What happens when you create an instance of an object containing no state in C#?

    - by liquorice
    I am, I think, OK at algorithmic programming, if that is the right term. I used to play with Turbo Pascal and 8086 assembly language back in the 1980s as a hobby, but only on very small projects, and I haven't really done any programming in the 20-ish years since then. So I am struggling for understanding like a drowning swimmer. Maybe this is a very naive question, or I'm just making no sense at all, but say I have an object kind of like this:

        class Something : IDoer
        {
            void Do(ISomethingElse x)
            {
                x.DoWhatEverYouWant(42);
            }
        }

    And then I do:

        var Thing1 = new Something();
        var Thing2 = new Something();
        Thing1.Do(blah);
        Thing2.Do(blah);

    Does Thing1 = Thing2? Does "new Something()" create anything? Or is it not much different from having a static class, except that I can pass it around and swap it out etc.? Is the "Do" procedure in the same location in memory for both the Thing1(blah) and Thing2(blah) objects? I mean, when executing it, does it mean there are two Something.Do procedures or just one?
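
    As an illustration of how most managed runtimes handle this, here is a small sketch (in Java rather than C#, but the object model is the same on this point): each "new" allocates a distinct, if tiny, object, while the method code exists once per class and is shared by every instance - the instance is only passed in as the hidden "this" argument.

        interface Doer {
            void doIt(int value);
        }

        class Something implements Doer {
            @Override
            public void doIt(int value) {
                System.out.println("doing " + value);
            }
        }

        public class StatelessDemo {
            public static void main(String[] args) {
                Something thing1 = new Something();
                Something thing2 = new Something();

                System.out.println(thing1 == thing2);                        // false: two separate objects
                System.out.println(thing1.getClass() == thing2.getClass());  // true: one class, one copy of doIt()

                thing1.doIt(42);   // the same compiled method runs in both calls;
                thing2.doIt(42);   // only the receiver ("this") differs
            }
        }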
