Search Results

Search found 6346 results on 254 pages for 'turn a'.


  • UITableView, having problems changing accessory when selected

    - by zpasternack
    I'm using a UITableView to allow selection of one (of many) items, similar to the UI for selecting a ringtone: I want the selected item to be checked, and the others not. I'd like the cell to highlight when touched, then animate back to the normal color (again, like the ringtone selection UI). A UIViewController subclass is my table's delegate and datasource (not UITableViewController, because I also have a toolbar in there). I'm setting the accessoryType of the cells in cellForRowAtIndexPath:, and updating my model when cells are selected in didSelectRowAtIndexPath:. The only way I can think of to set the selected cell's checkmark (and clear the previous one) is to call [tableView reloadData] in didSelectRowAtIndexPath:. However, when I do this, the animation of the cell deselection is weird (a white box appears where the cell's label should be). If I don't call reloadData, of course, the accessoryType won't change, so the checkmarks won't appear. I suppose I could turn the animation off, but that seems lame. I also toyed with getting and altering the cells in didSelectRowAtIndexPath:, but that's a big pain. Any ideas? Abbreviated code follows...

        - (UITableViewCell *)tableView:(UITableView *)tableView cellForRowAtIndexPath:(NSIndexPath *)indexPath {
            UITableViewCell *aCell = [tableView dequeueReusableCellWithIdentifier:kImageCell];
            if (aCell == nil) {
                aCell = [[UITableViewCell alloc] initWithFrame:CGRectZero reuseIdentifier:kImageCell];
            }
            aCell.text = [imageNames objectAtIndex:[indexPath row]];
            if ([indexPath row] == selectedImage) {
                aCell.accessoryType = UITableViewCellAccessoryCheckmark;
            } else {
                aCell.accessoryType = UITableViewCellAccessoryNone;
            }
            return aCell;
        }

        - (void)tableView:(UITableView *)tableView didSelectRowAtIndexPath:(NSIndexPath *)indexPath {
            [tableView deselectRowAtIndexPath:indexPath animated:YES];
            selectedImage = [indexPath row];
            [tableView reloadData];
        }


  • Calling QAxWidget method outside of the GUI thread

    - by user304361
    I'm beginning to wonder if this is impossible, but I thought I'd ask in case there's a clever way to get around the problems I'm having. I have a Qt application that uses an ActiveX control. The control is held by a QAxWidget, and the QAxWidget itself is contained within another QWidget (I needed to add additional signals/slots to the widget, and I couldn't just subclass QAxWidget because the class doesn't permit that). When I need to interact with the ActiveX control, I call a method of the QWidget, which in turn calls the dynamicCall method of the QAxWidget in order to invoke the appropriate method of the ActiveX control. All of that is working fine. However, one method of the ActiveX control takes several seconds to return. When I call this method, my entire GUI locks up for a few seconds until the method returns. This is undesirable. I'd like the ActiveX control to go off and do its processing by itself and come back to me when it's done, without locking up the Qt GUI. I've tried a few things without success:

    - Creating a new QThread and calling QAxWidget::dynamicCall from the new thread
    - Connecting a signal to the appropriate slot method of the QAxWidget and calling the method using signals/slots instead of using dynamicCall
    - Calling QAxWidget::dynamicCall using QtConcurrent::run

    Nothing seems to affect the behavior. No matter how or where I use dynamicCall (or trigger the appropriate slot of the QAxWidget), the GUI locks until the ActiveX control completes its operation. Is there any way to detach this ActiveX processing from the Qt GUI thread so that the GUI doesn't lock up while the ActiveX control is running a method? Is there something clever I can do with QAxBase or QAxObject to get my desired results?


  • Using a Javascript Variable & Sending to JSON

    - by D Franks
    Hello all! I'm trying to take a URL's hash value, send it through a function, turn that value into an object, and ultimately send the value as JSON. I have the following setup:

        function content(cur) {
            var mycur = $H(cur);
            var pars = "p=" + mycur.toJSON();
            new Ajax.Updater('my_box', 'test.php', { parameters: pars });
        }

        function update() {
            if (window.location.hash.length > 0) {
                content(window.location.hash.substr(1)); // Everything after the '#'
            }
        }

        var curHashVal = window.location.hash;

        window.onload = function() {
            setInterval(function() {
                if (curHashVal != window.location.hash) {
                    update();
                    curHashVal = window.location.hash;
                }
            }, 1);
        }

    But for some reason, I can't seem to get the right JSON output. It will either return as a very large object (1:"{",2:"k") or not return at all. I doubt that it is impossible to accomplish, but I've exhausted most of the ways I can think of. Other ways I've tried were "{" + cur + "}" as well as cur.toObject(); however, none seemed to get the job done. Thanks for the help!


  • VB.NET vs. C#.NET?

    - by Onion-Knight
    Hello everyone, The company I work for has all of its legacy ("legacy" being used rather liberally in this context) code in VB.NET. They have about 6000+ lines of VB.NET code, so all of the developers are comfortable with it. We have started to develop a new product, and are finding that some modules are easier to complete in C# than in VB.NET, such as interprocess communication via WCF. The things our product will eventually need to do are as follows:

    - Communicate via IPC between Windows Services, Silverlight, and WinForms
    - Handle parallelization, and all the complexity that comes along with it
    - Windows Service and WinForms development
    - ASP.NET, AJAX, and Silverlight development
    - Database (SQL) access
    - Lots of event handling (sync and async events)

    My question is: Given the type of work we will be doing to complete our product, are there features of one language that will make life easier that the other does not have? And if so, is it worth asking the developers to switch to a language they are less comfortable with? I was hoping to keep this as objective as possible by listing exactly what type of work we will be doing with the product. Please don't turn this into a VB/C# holy war. Thanks, Onion-Knight


  • Eclipse CDT: cannot debug or terminate application

    - by Paul Lammertsma
    I have Eclipse set up fairly nicely to run the G++ compiler through Cygwin. Even the character encoding is set up correctly! There still seems to be something wrong with my configuration: I can't debug. The pause button in the debug view is simply disabled, and no threads appear in my application tree. It seems that gdb is simply not communicating with Eclipse. Presently, I have the debug settings as follows:

    - Debugger: "Cygwin gdb Debugger"
    - GDB debugger: gdb
    - GDB command file: .gdbinit
    - Protocol: Default

    I should mention here that I have no idea what .gdbinit does; in my project it is merely an empty file. What is wrong with my configuration?

    Debugging: When attempting to terminate the application in debug mode, Eclipse displays the following error: "Target request failed: failed to interrupt." I can't kill the process, either; I have to kill its parent gdb.exe, which in turn kills my application.

    Running: When running it normally, a bunch of kill.exes are called, doing nothing, while Eclipse displays the following error: "Terminate failed." I can kill FaceDetector.exe from the task manager.

    Process Explorer: This is what it looks like in Process Explorer (debugging left, running right). [screenshot omitted]


  • How close can I get C# to the performance of C++ for small intensive tasks?

    - by SLC
    I was thinking about the speed difference between C++ and C#, which comes mostly from C# compiling to byte-code that is taken in by the JIT compiler (is that correct?) and all the checks C# does. I notice that it is possible to turn a lot of these checks off, both in the compile options and possibly through using the unsafe keyword, as unsafe code is not verifiable by the common language runtime. Therefore, if you were to write a simple console application in both languages that flipped an imaginary coin an infinite number of times and displayed the results to the screen every 10,000 or so iterations, how much speed difference would there be? I chose this because it's a very simple program. I'd like to test this, but I don't know C++ or have the tools to compile it. This is my C# version though:

        static void Main(string[] args)
        {
            unsafe
            {
                Random rnd = new Random();
                int heads = 0, tails = 0;
                while (true)
                {
                    if (rnd.NextDouble() > 0.5)
                        heads++;
                    else
                        tails++;
                    if ((heads + tails) % 1000000 == 0)
                        Console.WriteLine("Heads: {0} Tails: {1}", heads, tails);
                }
            }
        }

    Is the difference enough to warrant deliberately compiling sections of code as "unsafe" or into DLLs that do not have some of the compile options like overflow checking enabled? Or does it go the other way, where it would be beneficial to compile sections in C++? I'm sure interop speed comes into play too then. To avoid subjectivity, I reiterate the specific parts of this question as:

    - Does C# get a performance boost from using unsafe code?
    - Do compile options such as disabling overflow checking boost performance, and do they affect unsafe code?
    - Would the program above be faster in C++ or negligibly different?
    - Is it worth compiling long, intensive number-crunching tasks in a language such as C++, or using /unsafe for a bonus?
    - Less subjectively, could I complete an intensive operation faster by doing this?


  • Fastest way to generate delimited string from 1d numpy array

    - by Abiel
    I have a program which needs to turn many large one-dimensional numpy arrays of floats into delimited strings. I am finding this operation quite slow relative to the mathematical operations in my program and am wondering if there is a way to speed it up. For example, consider the following loop, which takes 100,000 random numbers in a numpy array and joins the array into a comma-delimited string 100 times.

        import numpy as np
        x = np.random.randn(100000)
        for i in range(100):
            ",".join(map(str, x))

    This loop takes about 20 seconds to complete (total, not per cycle). In contrast, 100 cycles of something like elementwise multiplication (x*x) would take less than 1/10 of a second to complete. Clearly the string join operation creates a large performance bottleneck; in my actual application it will dominate total runtime. This makes me wonder, is there a faster way than ",".join(map(str, x))? Since map() is where almost all the processing time occurs, this comes down to the question of whether there is a faster way to convert a very large number of numbers to strings.
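
    A rough sketch of two alternatives worth benchmarking (illustrative suggestions, not from the original post; the savetxt route assumes a NumPy version whose savetxt accepts an in-memory text buffer, and both may format the floats slightly differently than str()):

        import io
        import numpy as np

        x = np.random.randn(100000)

        # Baseline from the question: one Python-level str() call per element.
        baseline = ",".join(map(str, x))

        # Candidate 1: let NumPy produce the string elements in bulk first.
        candidate1 = ",".join(x.astype(str))

        # Candidate 2: have savetxt format the whole row into a text buffer
        # (assumes savetxt accepts a StringIO target in the installed version).
        buf = io.StringIO()
        np.savetxt(buf, x.reshape(1, -1), fmt="%.18g", delimiter=",")
        candidate2 = buf.getvalue().rstrip("\n")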


  • Add ability to provide list items to composite control with DropDownList

    - by Kyle
    I'm creating a composite control for a DropDownList (one that also includes a Label). The idea is that I can use my control like a dropdown list, but also have it toss a Label onto the page in front of the DDL. I have this working perfectly for TextBoxes, but am struggling with the DDL because of the Collection (or DataSource) component needed to populate the DDL. Basically I want to be able to do something like this:

        <ecc:MyDropDownList ID="AnimalType" runat="server" LabelText="this is what will be in the label">
            <asp:ListItem Text="dog" Value="dog" />
            <asp:ListItem Text="cat" Value="cat" />
        </ecc:MyDropDownList>

    The problem is, I'm not extending the DropDownList class for my control, so I can't simply work that magic. I need some pointers to figure out how I can turn my control (MyDropDownList), which is currently just a System.Web.UI.UserControl, into something that will accept ListItems within the tag; ideally, I'd also like to be able to plug it into a datasource (the same functions that the regular DDL offers). I tried just extending the regular DDL, but with no luck: I couldn't get the Label component to fly with it.


  • Unexpected performance curve from CPython merge sort

    - by vkazanov
    I have implemented a naive merge sorting algorithm in Python. The algorithm and test code are below:

        import time
        import random
        import matplotlib.pyplot as plt
        import math
        from collections import deque

        def sort(unsorted):
            if len(unsorted) <= 1:
                return unsorted
            to_merge = deque(deque([elem]) for elem in unsorted)
            while len(to_merge) > 1:
                left = to_merge.popleft()
                right = to_merge.popleft()
                to_merge.append(merge(left, right))
            return to_merge.pop()

        def merge(left, right):
            result = deque()
            while left or right:
                if left and right:
                    elem = left.popleft() if left[0] > right[0] else right.popleft()
                elif not left and right:
                    elem = right.popleft()
                elif not right and left:
                    elem = left.popleft()
                result.append(elem)
            return result

        LOOP_COUNT = 100
        START_N = 1
        END_N = 1000

        def test(fun, test_data):
            start = time.clock()
            for _ in xrange(LOOP_COUNT):
                fun(test_data)
            return time.clock() - start

        def run_test():
            timings, elem_nums = [], []
            test_data = random.sample(xrange(100000), END_N)
            for i in xrange(START_N, END_N):
                loop_test_data = test_data[:i]
                elapsed = test(sort, loop_test_data)
                timings.append(elapsed)
                elem_nums.append(len(loop_test_data))
                print "%f s --- %d elems" % (elapsed, len(loop_test_data))
            plt.plot(elem_nums, timings)
            plt.show()

        run_test()

    As far as I can see everything is OK, and I should get a nice N*logN curve as a result. But the picture differs a bit (the plot, omitted here, shows irregular spikes rather than a smooth curve). Things I've tried to investigate the issue:

    - PyPy. The curve is OK.
    - Disabled the GC using the gc module. Wrong guess. Debug output showed that it doesn't even run until the end of the test.
    - Memory profiling using meliae - nothing special or suspicious.
    - I had another implementation (a recursive one using the same merge function), and it acts in a similar way.

    The more full test cycles I create, the more "jumps" there are in the curve. So how can this behaviour be explained and - hopefully - fixed?

    UPD: changed lists to collections.deque
    UPD2: added the full test code
    UPD3: I use Python 2.7.1 on Ubuntu 11.04, on a quad-core 2 GHz notebook. I tried to turn off most of the other processes: the number of spikes went down, but at least one of them was still there.
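
    One hedged, illustrative way to make the measurement itself more robust (not from the original post): repeat each timing several times and keep the minimum, which tends to filter out spikes caused by other processes. The names and counts below are arbitrary.

        import timeit

        def min_time(fun, data, loops=100, repeats=5):
            """Best-of-N timing: the minimum is least affected by OS scheduling noise."""
            t = timeit.Timer(lambda: fun(data))
            return min(t.repeat(repeat=repeats, number=loops))

        # Example usage against the sort() from the question:
        # data = random.sample(range(100000), 500)
        # print(min_time(sort, data))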


  • Database access through collections

    - by Mike
    Hi All, I have a 3-tiered application where I need to get database results and populate the UI. I have a MessagesCollection class that deals with messages. I load my user from the database. On the instantiation of a user (i.e. new User()), a MessageCollection Messages = new MessageCollection(this) is performed. MessageCollection accepts a user as a parameter.

        User user = user.LoadUser("bob");

    I want to get the messages for Bob:

        user.Messages.GetUnreadMessages();

    GetUnreadMessages calls my business data provider, which in turn calls the data access layer. The business data provider returns a List. My question is: I am not sure what the best practice is here. If I have a collection of messages in an array inside the MessagesCollection class, I could implement ICollection to provide GetEnumerator() and the ability to traverse the messages. But what happens if the messages change and the user has old messages loaded? What about big message collections? What if my user had 10,000 unread messages? I don't think accessing the database and returning 10,000 Message objects would be efficient.


  • How can I apply a PSSM efficiently?

    - by flies
    I am fitting position-specific scoring matrices (PSSMs, aka position-specific weight matrices). The fit I'm using is like simulated annealing, where I perturb the PSSM, compare the prediction to experiment, and accept the change if it improves agreement. This means I apply the PSSM millions of times per fit; performance is critical. In my particular problem, I'm applying a PSSM for an object of length L (~8 bp) at every position of a DNA sequence of length M (~30 bp) (so there are M-L+1 valid positions). I need an efficient algorithm to apply a PSSM. Can anyone help improve performance? My best idea is to convert the DNA into some kind of matrix so that applying the PSSM is matrix multiplication. There are efficient linear algebra libraries out there (e.g. BLAS), but I'm not sure how best to turn an M-length DNA sequence into an M x 4 matrix and then apply the PSSM at each position. The solution needs to work for higher order/dinucleotide terms in the PSSM - presumably this means representing the sequence matrix separately for mononucleotides and for dinucleotides. My current solution iterates over each position m, then over each letter in the word from m to m+L-1, adding the corresponding term in the matrix. I'm storing the matrix as a multi-dimensional STL vector, and profiling has revealed that a lot of the computation time is just accessing the elements of the PSSM (with similar performance bottlenecks accessing the DNA sequence). If someone has an idea besides matrix multiplication, I'm all ears.
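
    Purely as an illustration of the one-hot-matrix idea sketched above (the original code is C++/STL; the base ordering, shapes, and function name here are assumptions, and the higher-order/dinucleotide terms are ignored):

        import numpy as np

        BASE_INDEX = {"A": 0, "C": 1, "G": 2, "T": 3}

        def score_all_offsets(seq, pssm):
            """Score an (L x 4) mononucleotide PSSM at each of the M-L+1 offsets of seq."""
            L = pssm.shape[0]
            idx = np.array([BASE_INDEX[b] for b in seq])          # length M
            starts = np.arange(len(idx) - L + 1)
            # windows[m, j] = index of the base at offset m + j
            windows = starts[:, None] + np.arange(L)[None, :]
            # pick pssm[j, base] for every window position, then sum over j
            return pssm[np.arange(L)[None, :], idx[windows]].sum(axis=1)

        # Example: a random length-8 PSSM scored against a 30 bp sequence
        rng = np.random.default_rng(0)
        pssm = rng.normal(size=(8, 4))
        seq = "".join(rng.choice(list("ACGT"), size=30))
        print(score_all_offsets(seq, pssm))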


  • Phonegap bluetooth plugin not working

    - by user2907333
    First time poster here, so I'm sorry if this question is asked incorrectly or if there are any issues. I'm working on a PhoneGap app for which I need to be able to set up a Bluetooth connection between a tablet (Android in this case) and a Win CE PC. Currently I'm trying to get the Bluetooth part to work on the tablet, and I'm using the plugin you can find at https://github.com/tanelih/phonegap-bluetooth-plugin. For the moment I'm just trying to enable and disable Bluetooth on the device. In my bluetoothpage.js file I'm using the following method:

        window.bluetooth.prototype.enable(bluetoothTestSucces(),bluetoothTestFail());

    The bluetoothTestSucces and bluetoothTestFail functions just show an alert, nothing else. If I understand the working of PhoneGap plugins correctly, this uses the following code in my bluetooth.js file:

        Bluetooth.prototype.enable = function(onSuccess, onError) {
            exec(onSuccess, onError, "Bluetooth", "enable", []);
        }

    which calls

        private void enable(JSONArray args, CallbackContext callbackCtx)
        {
            try
            {
                _bluetooth.enable();
                callbackCtx.success();
            }
            catch(Exception e)
            {
                this.error(callbackCtx, e.getMessage(), BluetoothError.ERR_UNKNOWN);
            }
        }

    in my BluetoothPlugin.java file. If the Java file returns success, bluetoothTestSucces() is used, and if the Java file returns an error, bluetoothTestFail() is used. But for some reason it runs both and does not turn on Bluetooth on my device. I'm almost certain I've forgotten a link to a file or have linked it wrong somewhere, but I've followed the instructions that were included with the plugin:

    - I've included the Bluetooth permission in my AndroidManifest file, which is located in the root directory of my app.
    - I've included the plugin in my config.xml file, which is located in res/xml.
    - I've required the plugin after the deviceready event as follows:

        document.addEventListener("deviceready", onDeviceReady, false);
        function onDeviceReady() {
            window.bluetooth = cordova.require("cordova/plugin/bluetooth");
        }

    Could anyone tell me how to fix this or what I've done wrong? Thanks, Martijn. PS: I'm sorry for any language errors, English isn't my native language. Edit: forgot to include some code.


  • .NET out of memory troubleshooting

    - by bushman
    After reading a few enlightening articles about memory in the .NET technology ("Out of Memory does not refer to physical memory", 597499), I thought I understood why a C# app would throw an out-of-memory exception, until I started experimenting with two servers: both have 2.5 GB of RAM, run Windows Server 2003, and run identical programs. The only significant difference between the two is that one has 7% hard drive storage left and the other more than 50%. The server with 7% storage space left is consistently throwing an out-of-memory exception while the other is performing consistently well. My app is a C# web application that processes hundreds of MB of String objects. Why would this difference happen, seeing that the most likely reason for the out-of-memory issue is running out of contiguous virtual address space? What solutions do you guys propose, and what do you say about the following:

    1. Turn on the 3GB switch to increase the virtual address space.
    2. Instead of using one giant string object, break it up into smaller pieces and collect it in a jagged array (here I have to find a way to return to the caller in some other way, as right now the return type is a string).

    Thanks, SO


  • Saving a record in Authlogic table

    - by denniss
    I am using Authlogic to do my authentication. The current model that serves as the authentication model is the User model. I want to add a "belongs to" relationship to User, which means that I need a foreign key in the users table. Say the foreign key is called car_id in the User model. However, for some reason, when I do

        u = User.find(1)
        u.car_id = 1
        u.save!

    I get

        ActiveRecord::RecordInvalid: Validation failed: Password can't be blank

    My guess is that this has something to do with Authlogic. I do not have a validation on password in the User model. This is the migration for the users table:

        def self.up
          create_table :users do |t|
            t.string :email
            t.string :first_name
            t.string :last_name
            t.string :crypted_password
            t.string :password_salt
            t.string :persistence_token
            t.string :single_access_token
            t.string :perishable_token
            t.integer :login_count, :null => false, :default => 0          # optional, see Authlogic::Session::MagicColumns
            t.integer :failed_login_count, :null => false, :default => 0   # optional, see Authlogic::Session::MagicColumns
            t.datetime :last_request_at                                    # optional, see Authlogic::Session::MagicColumns
            t.datetime :current_login_at                                   # optional, see Authlogic::Session::MagicColumns
            t.datetime :last_login_at                                      # optional, see Authlogic::Session::MagicColumns
            t.string :current_login_ip                                     # optional, see Authlogic::Session::MagicColumns
            t.string :last_login_ip                                        # optional, see Authlogic::Session::MagicColumns
            t.timestamps
          end
        end

    And later I added the car_id column to it:

        def self.up
          add_column :users, :user_id, :integer
        end

    Is there any way for me to turn off this validation?


  • How can I speed up Subversion checkins? (Using ANKH, latest, Visual Studio 2010)

    - by Timothy Khouri
    I've started working on a new web project with some friends... we are using the latest Subversion server (installed last week) and the latest version of Ankh. My web project is a whopping 1.5 megabytes (that's with all images, css files, dll's after compiling, pdb files... etc). Checking in even super small changes (literally adding the letter "x" to a few files for testing) takes FOREVER! (about 10 seconds - I almost killed myself). The Ankh client is measuring in BYTES PER SECOND ... BYTES? per second... I must be doing something wrong. Does anyone know what config file has a joke totallyMessWithPeople=true so that I can turn that off or something? Oh, also, changing one "big" file of a super 10k gains speed up to nearly the speed of light (which is apparently 857 bytes per second). Help me Obi-Wan Kenobi, you're my only hope! EDIT: As a note... my real work project that uses Visual SourceSafe 2005 (I know, ouch) uploads files at about 200-500 kbps from this very same computer/internet connection.


  • How do I efficiently write a "toggle database value" function in AJAX?

    - by AmbroseChapel
    Say I have a website which shows the user ten images and asks them to categorise each image by clicking on buttons: a button for "funny", a button for "scary", a button for "pretty" and so on. These buttons aren't exclusive; a picture can be both funny and scary. The user clicks the "funny" button. An AJAX request is sent off to the database to mark that image as funny, and the "funny" button lights up, by assigning a class in the DOM to mark it as "on". But the user made a mistake. They meant to hit the next button over. They should click "funny" again to turn it off, right? At this point I'm not sure what's the most efficient way to proceed. The database knows that the "funny" flag is set, but it's inefficient to query the database every time a button is clicked to ask whether this flag is set or not, and then go on with a second database call to toggle it. Should I infer the state of the database flag from the DOM, i.e. if that button has the class "on" then the flag must be set, and branch at that point? Or would it be better to have a data structure in Javascript in the page which duplicates the state of each image in the database, so that every time I set the database flag to true, I also set the value in the Javascript data to true, and so on?


  • Unable to find reference to std library math function inside library

    - by Alex Marshall
    Hello, I've got several programs that use shared libraries. Those shared libraries in turn use various standard C libraries, i.e. Program A and Program B both use Shared Library S, and Shared Library S uses standard C math. I want to be able to statically link Shared Library S against the standard library, and then statically link Programs A and B against S, so that I don't have to be dragging around the library files, because these programs are going to be running on an embedded system running BusyBox 0.61. However, when I try to statically link the programs against Shared Library S, I get an error message from GCC stating:

        ../lib/libgainscalecalc.a(gainscalecalc.): In function 'float2gs':
        [path to my C file].c:73: undefined reference to 'log'

    Can somebody please help me out? The make commands I'm using are below:

        CFLAGS += -Wall -g -W
        INCFLAGS = -I$(CROSS_INCLUDE)/usr/include
        LIBFLAGS += -L$(CROSS_LIB)/usr/lib -lm

        gainscalecalc_static.o: gainscalecalc.c
            $(CC) $(CFLAGS) -c $< -I. $(INCFLAGS) -o $@

        gainscalecalc_dynamic.o: gainscalecalc.c
            $(CC) $(CFLAGS) -fPIC -c $< -o $@

        all: staticlib dynamiclib static_driver dynamic_driver

        clean:
            $(RM) *.o *.a *.so *~ driver core $(OBJDIR)

        static_driver: driver.c staticlib
            $(CC) $(CFLAGS) -static driver.c $(INCFLAGS) $(LIBFLAGS) -I. -L. -lgainscalecalc -o $@

        dynamic_driver: driver.c dynamiclib
            $(CC) $(CFLAGS) driver.c -o $@ -L. -lgainscalecalc

        staticlib: gainscalecalc_static.o
            $(AR) $(ARFLAGS) libgainscalecalc.a gainscalecalc_static.o
            $(RANLIB) libgainscalecalc.a
            chmod 777 libgainscalecalc.a

        dynamiclib: gainscalecalc_dynamic.o
            $(CC) -shared -o libgainscalecalc.so gainscalecalc_dynamic.o
            chmod 777 libgainscalecalc.so

    Edit: Linking against the shared libraries compiles fine, I just haven't tested them out yet.


  • Drawing a relative line in C#

    - by icemanind
    Guys, I know this is going to turn out to be a simple answer, but I can't seem to figure it out. I have a C# WinForms application that I am trying to build. I am trying to draw a white line 60 pixels above the bottom of the form. I am using this code:

        private void MainForm_Paint(object sender, PaintEventArgs e)
        {
            e.Graphics.DrawLine(Pens.White, 10, this.Height - 60, 505, this.Height - 60);
        }

    Simple enough; however, no line is drawn. After some debugging, I figured out that it IS drawing the line, but it is drawing it outside my form. If I change the -60 to -175, then I can see it at the bottom of my form. This would solve my problem, except that as my form's height changes, the line draws closer and closer to the bottom of my form until eventually it's off the form again. What am I doing wrong? Am I using the wrong graphics unit? Or is there a more complex calculation I need to do to determine 60 pixels from the bottom of my form?


  • Can Haskell's Parsec library be used to implement a recursive descent parser with backup?

    - by Thor Thurn
    I've been considering using Haskell's Parsec parsing library to parse a subset of Java as a recursive descent parser, as an alternative to more traditional parser-generator solutions like Happy. Parsec seems very easy to use, and parse speed is definitely not a factor for me. I'm wondering, though, if it's possible to implement "backup" with Parsec, a technique which finds the correct production to use by trying each one in turn. For a simple example, consider the very start of the JLS Java grammar:

        Literal:
            IntegerLiteral
            FloatingPointLiteral

    I'd like a way to not have to figure out how I should order these two rules to get the parse to succeed. As it stands, a naive implementation like this will not work:

        literal = do { x <- try (do { v <- integer; return (IntLiteral v)})
                           <|> (do { v <- float; return (FPLiteral v)});
                       return (Literal x) }

    Inputs like "15.2" will cause the integer parser to succeed first, and then the whole thing will choke on the "." symbol. In this case, of course, it's obvious that you can solve the problem by re-ordering the two productions. In the general case, though, finding things like this is going to be a nightmare, and it's very likely that I'll miss some cases. Ideally, I'd like a way to have Parsec figure out stuff like this for me. Is this possible, or am I simply trying to do too much with the library? The Parsec documentation claims that it can "parse context-sensitive, infinite look-ahead grammars", so it seems like I should be able to do something here.


  • How to use threading with ConcurrentQueue<T>

    - by dboarman
    I am trying to figure out the best way of working with a queue. I have a process that returns a DataTable. Each DataTable, in turn, is merged with the previous DataTable. There is one problem: too many records to hold until the final BulkCopy (OutOfMemory). So, I have determined that I should process each incoming DataTable immediately. I'm thinking about the ConcurrentQueue<T>... but I don't see how the WriteQueuedData() method would know to dequeue a table and write it to the database. For instance:

        public class TableTransporter
        {
            private ConcurrentQueue<DataTable> tableQueue = new ConcurrentQueue<DataTable>();

            public TableTransporter()
            {
                tableQueue.OnItemQueued += new EventHandler(WriteQueuedData);   // no events available
            }

            public void ExtractData()
            {
                DataTable table;
                // perform data extraction
                tableQueue.Enqueue(table);
            }

            private void WriteQueuedData(object sender, EventArgs e)
            {
                BulkCopy(e.Table);
            }
        }

    My first question is, aside from the fact that I don't actually have any events to subscribe to, if I call ExtractData() asynchronously will this be all that I need? Second, is there something I'm missing about the way ConcurrentQueue<T> functions, such as needing some form of trigger to work asynchronously with the queued objects?


  • Dependency between operations in scala actors

    - by paradigmatic
    I am trying to parallelise some code using Scala actors. This is my first real code with actors, but I have some experience with Java multithreading and MPI in C. However, I am completely lost. The workflow I want to realise is a circular pipeline and can be described as follows:

    - Each worker actor has a reference to another one, thus forming a circle.
    - There is a coordinator actor which can trigger a computation by sending a StartWork() message.
    - When a worker receives a StartWork() message, it processes some stuff locally and sends a DoWork(...) message to its neighbour in the circle.
    - The neighbour does some other stuff and sends in turn a DoWork(...) message to its own neighbour. This continues until the initial worker receives a DoWork() message.
    - The coordinator can send a GetResult() message to the initial worker and wait for a reply.

    The point is that the coordinator should only receive a result when the data is ready. How can a worker wait until the job has returned to it before answering the GetResult() message? To speed up computation, any worker can receive a StartWork() at any time. Here is my first-try pseudo-implementation of the worker:

        class Worker( neighbor: Worker, numWorkers: Int ) {
          var ready = Foo()

          def act() {
            case StartWork() => {
              val someData = doStuff()
              neighbor ! DoWork( someData, numWorkers-1 )
            }
            case DoWork( resultData, remaining ) =>
              if( remaining == 0 ) {
                ready = resultData
              } else {
                val someOtherData = doOtherStuff( resultData )
                neighbor ! DoWork( someOtherData, remaining-1 )
              }
            case GetResult() => reply( ready )
          }
        }

    On the coordinator side:

        worker ! StartWork()
        val result = worker !? GetResult()   // should wait


  • Is there a good way of automatically generating javascript client code from server side python

    - by tat.wright
    I basically want to be able to:

    - Write a few functions in Python (with the minimum amount of extra metadata)
    - Turn these functions into a web service (with the minimum of effort / boilerplate)
    - Automatically generate some JavaScript functions / objects for RPC (this should prevent me from doing as many stupid things as possible, like mistyping method names, forgetting the names of methods, or passing the wrong number of arguments)

    Example Python:

        def hello_world():
            return "Hello world"

    JavaScript:

        ...
        <!-- This file is automatically generated (either dynamically or statically) -->
        <script src="http://myurl.com/webservice/client_side_javascript"> </script>
        ...
        <script>
        $('#button').click(function () {
            hello_world(function (data){ $('#label').text(data))) }
        </script>

    A bit of research has shown me some approaches that come close to this:

    - Automatic generation of JSON-RPC services from functions, with a little boilerplate code in Python, and then using jQuery and JSON to do the calls (still easy to make mistakes with method names; you still need to be aware of URLs when calling; very irritating to write these calls yourself in the Firebug shell)
    - Using a library like soaplib to generate WSDL from Python (by adding copious type information), and then somehow converting this into JavaScript (not sure if there is even a library to do this)

    But are there any approaches closer to what I want?
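
    For illustration, a rough sketch of the kind of thing being asked for, assuming a plain JSON-over-POST convention; the registry decorator, endpoint URL, and generated stub layout are invented for this example, and a real deployment would still need a web framework around dispatch():

        import inspect
        import json

        EXPOSED = {}   # hypothetical registry of plain Python functions to expose

        def expose(fn):
            EXPOSED[fn.__name__] = fn
            return fn

        @expose
        def hello_world():
            return "Hello world"

        def client_side_javascript(endpoint="/webservice/call"):
            """Generate one jQuery-based JS stub per exposed function."""
            stubs = []
            for name, fn in EXPOSED.items():
                params = list(inspect.signature(fn).parameters)
                js_args = ", ".join(params + ["callback"])
                js_params = ", ".join(params)
                stubs.append(
                    "function {0}({1}) {{\n"
                    "  $.post('{2}', JSON.stringify({{method: '{0}', params: [{3}]}}), callback, 'json');\n"
                    "}}".format(name, js_args, endpoint, js_params)
                )
            return "\n".join(stubs)

        def dispatch(request_body):
            """Server side: look up the requested method in the registry and call it."""
            call = json.loads(request_body)
            return json.dumps(EXPOSED[call["method"]](*call.get("params", [])))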


  • Ext JS 4: Show all columns in Ext.grid.Panel as custom option

    - by MacGyver
    Is there a function that can be called on an Ext.grid.Panel in ExtJS that will make all columns visible, if some of them are hidden by default? Whenever an end-user needs to show the hidden columns, they need to click each column. Below, I have some code to add a custom menu option when you select a field header; I'd like to execute this function so all columns show. In the example below, I have 'Project ID' and 'User Created' hidden by default. Choosing 'Select All Columns' would turn those columns on, so they show in the list view.

        listeners: {
            ...
        },
        afterrender: function() {
            var menu = this.headerCt.getMenu();
            menu.add([{
                text: 'Select All Columns',
                handler: function() {
                    var columnDataIndex = menu.activeHeader.dataIndex;
                    alert('custom item for column "'+columnDataIndex+'" was pressed');
                }
            }]);
        }
        }
        });

    =========================== Answer (with code): Here's what I decided to do based on Eric's code below, since hiding all columns was silly.

        afterrender: function () {
            var menu = this.headerCt.getMenu();
            menu.add([{
                text: 'Show All Columns',
                handler: function () {
                    var columnDataIndex = menu.activeHeader.dataIndex;
                    Ext.each(grid.headerCt.getGridColumns(), function (column) {
                        column.show();
                    });
                }
            }]);
            menu.add([{
                text: 'Hide All Columns Except This',
                handler: function () {
                    var columnDataIndex = menu.activeHeader.dataIndex;
                    alert(columnDataIndex);
                    Ext.each(grid.headerCt.getGridColumns(), function (column) {
                        if (column.dataIndex != columnDataIndex) {
                            column.hide();
                        }
                    });
                }
            }]);
        }


  • Difficulty with projectile's tracking code

    - by RCIX
    I wrote some code for a projectile class in my game that makes it track targets if it can:

        if (_target != null && !_target.IsDead)
        {
            Vector2 currentDirectionVector = this.Body.LinearVelocity;
            currentDirectionVector.Normalize();
            float currentDirection = (float)Math.Atan2(currentDirectionVector.Y, currentDirectionVector.X);

            Vector2 targetDirectionVector = this._target.Position - this.Position;
            targetDirectionVector.Normalize();
            float targetDirection = (float)Math.Atan2(targetDirectionVector.Y, targetDirectionVector.X);

            float targetDirectionDelta = targetDirection - currentDirection;

            if (MathFunctions.IsInRange(targetDirectionDelta, -(Info.TrackingRate * deltaTime), Info.TrackingRate * deltaTime))
            {
                Body.LinearVelocity = targetDirectionVector * Info.FiringVelocity;
            }
            else if (targetDirectionDelta > 0)
            {
                float newDirection = currentDirection + Info.TrackingRate * deltaTime;
                Body.LinearVelocity = new Vector2(
                    (float)Math.Cos(newDirection),
                    (float)Math.Sin(newDirection)) * Info.FiringVelocity;
            }
            else if (targetDirectionDelta < 0)
            {
                float newDirection = currentDirection - Info.TrackingRate * deltaTime;
                Body.LinearVelocity = new Vector2(
                    (float)Math.Cos(newDirection),
                    (float)Math.Sin(newDirection)) * Info.FiringVelocity;
            }
        }

    This works sometimes, but depending on the relative angle to the target, projectiles turn away from the target instead. I'm stumped; can someone point out the flaw in my code?


  • Is there a library that can decompile a method into an Expression tree, with support for CLR 4.0?

    - by Daniel Earwicker
    Previous questions have asked whether it is possible to turn compiled delegates into expression trees, for example: http://stackoverflow.com/questions/767733/converting-a-net-funct-to-a-net-expressionfunct The sane answers at the time were: it's possible, but very hard, there's no standard library solution, and you should use Reflector! But fortunately there are some greatly-insane/insanely-great people out there who like reverse engineering things, and they make difficult things easy for the rest of us. Clearly it is possible to decompile IL to C#, as Reflector does it, and so you could in principle instead target CLR 4.0 expression trees with support for all statement types. This is interesting because it wouldn't matter if the compiler's built-in special support for Expression<> lambdas is never extended to support building statement expression trees in the compiler; a library solution could fill the gap. We would then have a high-level starting point for writing aspect-like manipulations of code without having to mess with raw IL. As noted in the answers to the question linked above, there are some promising signs, but my searching hasn't turned up much progress since then. So has anyone finished this job, or got very far with it? Note: CLR 4.0 is now released. Time for another look-see.

