Search Results

Search found 10533 results on 422 pages for 'task organization'.

Page 371/422 | < Previous Page | 367 368 369 370 371 372 373 374 375 376 377 378  | Next Page >

  • Vim: change formatting of variables in a script

    - by sixtyfootersdude
    I am using vim to edit a shell script that did not use the right coding standard. I need to change all of my variables from camel-case notation (startTime) to caps-and-underscores notation (START_TIME), without changing the way method names are represented. I was thinking one way to do this would be to write a function and map it to a key. The function could generate something like this on the command line: s/<word under cursor>/<leave cursor here to type the replacement> I think such a function would be applicable to other situations, which would be handy. Two questions: Question 1: How would I go about creating that function? I have created functions in vim before; the biggest thing I am clueless about is how to capture movement. For example, pressing dw in vim deletes the rest of a word; how do you capture that? Also, can you leave an uncompleted command on the vim command line? Question 2: Got a better solution for me? How would you approach this task?

    Read the article

  • What about parallelism across a network using multiple PCs?

    - by MainMa
    Parallel computing is used more and more, and new framework features and shortcuts make it easier to use (for example the Parallel extensions available directly in .NET 4). Now what about parallelism across a network? I mean an abstraction of everything related to communications, creation of processes on remote machines, etc. Something like, in C#: NetworkParallel.ForEach(myEnumerable, () => { // Computing and/or access to a web resource or local network database here }); I understand that it is very different from multi-core parallelism. The two most obvious differences would probably be: The fact that such a parallel task would be limited to computing, without being able, for example, to use files stored locally (but why not a database?) or even local variables, because it would really be two distinct applications rather than two threads of the same application; The very specific implementation, requiring not just a separate thread (which is quite easy) but spawning a process on a different machine and then communicating with it over the local network. Despite those differences, such parallelism is quite possible, even without speaking of distributed architecture. Do you think it will be implemented in a few years? Do you agree that it would enable developers to easily develop extremely powerful stuff with much less pain? Example: think about a business application which extracts data from the database, transforms it, and displays statistics. Let's say this application takes ten seconds to load data, twenty seconds to transform data and ten seconds to build charts on a single machine in a company, using all of the CPU, whereas ten other machines are used at 5% of CPU most of the time. In such a case, every action may be done in parallel, resulting in probably six to ten seconds for the overall process instead of forty.
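    For what it's worth, Python's standard library already contains a small version of this abstraction: multiprocessing.managers can serve a work queue over TCP, so worker processes on other machines pull jobs from a coordinator. A minimal sketch (the port and authkey are placeholder values):

        from multiprocessing.managers import BaseManager
        import queue

        class QueueManager(BaseManager):
            pass

        if __name__ == '__main__':
            jobs = queue.Queue()
            for item in range(100):   # stand-in for myEnumerable
                jobs.put(item)
            # Expose the queue over TCP; a worker on another machine registers
            # the same name, calls connect(), and consumes jobs with get().
            QueueManager.register('get_jobs', callable=lambda: jobs)
            manager = QueueManager(address=('', 50000), authkey=b'secret')
            manager.get_server().serve_forever()

    It is exactly the two differences noted above that make this only a queue rather than a NetworkParallel.ForEach: the remote side shares no local files or variables, and each job crosses the network as a message between distinct processes.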

    Read the article

  • Turning a series of raw images into movie frames in Android

    - by Nicholas Killewald
    I've got an Android project I'm working on that, ultimately, will require me to create a movie file out of a series of still images taken with a phone's camera. That is to say, I want to be able to take raw image frames and string them together, one by one, into a movie. Audio is not a concern at this stage. Looking over the Android API, it looks like there are calls in it to create movie files, but it seems those are entirely geared around making a live recording from the camera on an immediate basis. While nice, I can't use that for my purposes, as I need to put annotations and other post-production things on the images as they come in before they get fed into a movie (plus, the images come way too slowly to do a live recording). Worse, looking over the Android source, it looks like a non-trivial task to rewire that to do what I want it to do (at least without touching the NDK). Is there any way I can use the API to do something like this? Or alternatively, what would be the best way to go about this, if it's even feasible on cell phone hardware (which seems to keep getting more and more powerful, strangely...)?

    Read the article

  • What Test Environment Setup do Top Project Committers Use in the Ruby Community?

    - by viatropos
    Today I am going to get as far as I can setting up my testing environment and workflow. I'm looking for practical advice on how to set up the test environment from you guys who are very passionate about and versed in Ruby testing. By the end of the day (6am PST?) I would like to be able to: Type a single command to run the test suite for ANY project I find on Github. Run autotest for ANY Github project so I can fork and make TESTABLE contributions. Build gems from the ground up with autotest and Shoulda. For one reason or another, I hardly ever run tests for projects I clone from Github. The major reason is that unless they're using RSpec and have a Rake task to run the tests, I don't see the common pattern behind it all. I have built 3 or 4 gems writing tests with RSpec, and while I find the DSL fun, it's less than ideal because it just adds another layer/language of methods I have to learn and remember. So I'm going with Shoulda. But this isn't a question about which testing framework to choose. So the questions are: What is your, the SO reader and Github project committer, test environment setup using autotest, so that whenever you git clone a gem, you can run the tests and autotest-develop it if desired? What are the people writing the Paperclip tests and the Authlogic tests doing? What is their setup? Thanks for the insight. Looking for answers that will make me a more effective tester.

    Read the article

  • Do the Python libraries have a natural dependence on the global namespace?

    - by msw
    I first ran into this when trying to determine the relative performance of two generators: t = timeit.repeat('g.get()', setup='g = my_generator()') So I dug into the timeit module and found that the setup and statement are evaluated in their own private, initially empty namespaces, so naturally the binding of g never becomes accessible to the g.get() statement. The obvious solution is to wrap them in a class, thus adding to the global namespace. I bumped into this again when attempting, in another project, to use the multiprocessing module to divide a task among workers. I even bundled everything nicely into a class, but unfortunately the call pool.apply_async(runmc, arg) fails with a PicklingError, because buried inside the work object that runmc instantiates is (effectively) the assignment self.predicate = lambda x, y: x > y, so the whole object (understandably) can't be pickled. And whereas def foo(x, y): return x > y pickle.dumps(foo) is fine, the sequence bar = lambda x, y: x > y gives True for callable(bar) and a plain function for type(bar), yet pickling it fails with: Can't pickle <function <lambda> at 0xb759b764>: it's not found as __main__.<lambda>. I've given only code fragments because I can easily fix these cases by merely pulling them out into module- or object-level defs. The bug here appears to be in my understanding of the semantics of namespace use in general. If the nature of the language requires that I create more def statements I'll happily do so; I fear that I'm missing an essential concept, though. Why is there such a strong reliance on the global namespace? Or, what am I failing to understand? Namespaces are one honking great idea -- let's do more of those!
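    Both symptoms have conventional fixes, sketched below: timeit accepts an explicit namespace (the globals argument, available from Python 3.5), and pickle serializes functions by qualified name, so a module-level def works where a lambda attribute does not. The generator here is a stand-in for the ones being benchmarked, with next(g) in place of g.get():

        import pickle
        import timeit

        def my_generator():
            yield from range(1000)

        # timeit runs stmt/setup in a private namespace by default; handing it
        # globals() makes my_generator visible (requires Python 3.5+).
        print(timeit.repeat('next(g)', setup='g = my_generator()',
                            globals=globals(), number=100))

        def predicate(x, y):      # pickled by reference, as __main__.predicate
            return x > y

        pickle.dumps(predicate)               # fine
        # pickle.dumps(lambda x, y: x > y)    # PicklingError, as in the question

    For the multiprocessing case the same principle applies: binding self.predicate to a module-level function instead of a lambda makes the work object picklable.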

    Read the article

  • String comparison and counting the key in a target [closed]

    - by mesun
    Suppose we want to count the number of times that a key string appears in a target string. We are going to create two different functions to accomplish this task: one iterative, and one recursive. For both functions, you can rely on Python's find function; you should read up on its specification to see how to provide optional arguments that start the search for a match at a location other than the beginning of the string. For example:

        find("atgacatgcacaagtatgcat", "atgc")      # returns the value 5
        find("atgacatgcacaagtatgcat", "atgc", 6)   # returns the value 15, meaning that
                                                   # by starting the search at index 6,
                                                   # the next match is found at location 15

    For the recursive version, you will want to think about how to use your function on a smaller version of the same problem (e.g., on a smaller target string) and then how to combine the result of that computation to solve the original problem. For example, given that you can find the first instance of a key string in a target string, how would you combine that result with an invocation of the same function on a smaller target string? You may find the string slicing operation useful for getting substrings of a string.
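    A minimal sketch of both versions, using the method form target.find(key, start) (the free-standing find(target, key) above behaves the same way):

        def count_iterative(target, key):
            count, pos = 0, target.find(key)
            while pos != -1:
                count += 1
                pos = target.find(key, pos + 1)   # resume just past the last hit,
            return count                          # so overlapping matches count too

        def count_recursive(target, key):
            pos = target.find(key)
            if pos == -1:
                return 0                          # base case: key does not occur
            # count this hit, then solve the smaller problem on the rest
            return 1 + count_recursive(target[pos + 1:], key)

        assert count_iterative("atgacatgcacaagtatgcat", "atgc") == 2
        assert count_recursive("atgacatgcacaagtatgcat", "atgc") == 2

    The recursive version combines results exactly as the prompt suggests: one match found in the full string, plus however many the same function finds in the slice that starts just past that match.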

    Read the article

  • Avoiding repeated subqueries when 'WITH' is unavailable

    - by EloquentGeek
    MySQL v5.0.58. Tables, with foreign key constraints etc and other non-relevant details omitted for brevity: CREATE TABLE `fixture` ( `id` int(11) NOT NULL auto_increment, `competition_id` int(11) NOT NULL, `name` varchar(50) NOT NULL, `scheduled` datetime default NULL, `played` datetime default NULL, PRIMARY KEY (`id`) ); CREATE TABLE `result` ( `id` int(11) NOT NULL auto_increment, `fixture_id` int(11) NOT NULL, `team_id` int(11) NOT NULL, `score` int(11) NOT NULL, `place` int(11) NOT NULL, PRIMARY KEY (`id`) ); CREATE TABLE `team` ( `id` int(11) NOT NULL auto_increment, `name` varchar(50) NOT NULL, PRIMARY KEY (`id`) ); Where: A draw will set result.place to 0 result.place will otherwise contain an integer representing first place, second place, and so on The task is to return a string describing the most recently played result in a given competition for a given team. The format should be "def Team X,Team Y" if the given team was victorious, "lost to Team X" if the given team lost, and "drew with Team X" if there was a draw. And yes, in theory there could be more than two teams per fixture (though 1 v 1 will be the most common case). This works, but feels really inefficient: SELECT CONCAT( (SELECT CASE `result`.`place` WHEN 0 THEN "drew with" WHEN 1 THEN "def" ELSE "lost to" END FROM `result` WHERE `result`.`fixture_id` = (SELECT `fixture`.`id` FROM `fixture` LEFT JOIN `result` ON `result`.`fixture_id` = `fixture`.`id` WHERE `fixture`.`competition_id` = 2 AND `result`.`team_id` = 1 ORDER BY `fixture`.`played` DESC LIMIT 1) AND `result`.`team_id` = 1), ' ', (SELECT GROUP_CONCAT(`team`.`name`) FROM `fixture` LEFT JOIN `result` ON `result`.`fixture_id` = `fixture`.`id` LEFT JOIN `team` ON `result`.`team_id` = `team`.`id` WHERE `fixture`.`id` = (SELECT `fixture`.`id` FROM `fixture` LEFT JOIN `result` ON `result`.`fixture_id` = `fixture`.`id` WHERE `fixture`.`competition_id` = 2 AND `result`.`team_id` = 1 ORDER BY `fixture`.`played` DESC LIMIT 1) AND `team`.`id` != 1) ) Have I missed something really obvious, or should I simply not try to do this in one query? Or does the current difficulty reflect a poor table design?

    Read the article

  • How to make smooth transition from a WebBrowser control to an Image in Silverlight 4?

    - by Trex
    Hi, I have the following XAML on my page:

        <Grid x:Name="LayoutRoot">
            <Viewbox Stretch="Uniform">
                <Image x:Name="myImage" />
            </Viewbox>
            <WebBrowser x:Name="myBrowser" />
        </Grid>

    and then in the code-behind I'm switching the visibility between the image and the browser content:

        myBrowser.Visibility = Visibility.Collapsed;
        myImage.Source = new BitmapImage(new Uri(p));
        myImage.Visibility = Visibility.Visible;

    and

        myImage.Visibility = Visibility.Collapsed;
        myBrowser.Source = new Uri(myPath + p, UriKind.Absolute);
        myBrowser.Visibility = Visibility.Visible;

    This works fine, but what the client now wants is a smooth transition between when the image is shown and when the browser is shown. I tried several approaches but always ran into a dead end. Do you have any ideas? I tried setting two states using the VSM and then displaying a white rectangle on top as an overlay before the swap takes place, but that didn't work (I guess it's because nothing can be placed above the WebBrowser???). I tried setting the Visibility of the image control and the webbrowser control using the VSM, but that didn't work either. I really don't know what else to try to solve this simple task. Any help is greatly appreciated. Jan

    Read the article

  • Application process never terminates on each run

    - by rockyurock
    I am seeing that an application always remains live even after my Perl script below closes it. Also, on subsequent runs it always says "The process cannot access the file because it is being used by another process. iperf.exe -u -s -p 5001 successful. Output was:". So every time I either have to change the file name $file used in the script or kill the iperf.exe process in the Task Manager. Could anybody please let me know how to get rid of this? Here is the code I am using:

        my @command_output;
        eval {
            my $file = "abc6.txt";    # note: this lexical is scoped to the eval;
                                      # the later uses of $file see a different one
            $command = "iperf.exe -u -s -p 5001";
            alarm 10;
            system("$command > $file");   # system() waits; the alarm interrupts
            alarm 0;                      # the wait, but the spawned iperf.exe
                                          # child keeps running and holds the file
            close $file;   # no-op: $file holds a filename string, not a handle
        };
        if ($@) {
            warn "$command timed out.\n";
        } else {
            print "$command successful. Output was:\n", $file;
        }
        unlink $file;   # fails while iperf.exe still has the file open

    Read the article

  • Image Resizing and Compression

    - by GSTAR
    Hi guys, I'm implementing an image upload facility for my website. The uploading facility is complete; what I'm working on at the moment is manipulating the images. For this task I am using PHPThumb (http://phpthumb.gxdlabs.com). As I go along I'm coming across potential issues to do with resizing and compression. Basically I want to achieve the following results: The ideal image dimensions are 800px width, 600px height. If an uploaded image exceeds either of these dimensions, it will need to be resized to meet the requirements. Otherwise, leave it as it is. The ideal file size is 200kb. If an uploaded image exceeds this, then it will need to be compressed to meet this requirement. Otherwise, leave it as it is. So in a nutshell: 1) Check the dimensions, resize if required. 2) Check the file size, compress if required. Has anybody done anything like this / could you give me some pointers? Is PHPThumb the correct tool to do this with?
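    For illustration, here is the same pipeline sketched in Python with Pillow (the asker's stack is PHP, so treat this purely as a statement of the control flow): resize only when a bound is exceeded, then step the JPEG quality down until the encoded size fits, since JPEG output size cannot be set directly.

        import io
        from PIL import Image

        MAX_W, MAX_H, MAX_BYTES = 800, 600, 200 * 1024

        def process(path):
            img = Image.open(path)
            # 1) Resize only if either dimension exceeds the bound; thumbnail()
            #    preserves aspect ratio and never enlarges.
            if img.width > MAX_W or img.height > MAX_H:
                img.thumbnail((MAX_W, MAX_H))
            # 2) Re-encode at decreasing quality until under the size cap.
            for quality in range(95, 10, -5):
                buf = io.BytesIO()
                img.convert('RGB').save(buf, 'JPEG', quality=quality)
                if buf.tell() <= MAX_BYTES:
                    break
            return buf.getvalue()

    PHPThumb exposes equivalents of the same two knobs (a bounded resize and a JPEG quality setting), so the loop should translate directly.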

    Read the article

  • Now that I have solved AI and am preparing to take over the world, what should I do?

    - by Zak
    Well, I did it... yup, solved AI. I thought the voice of my fledgling life form would be booming and computery, and I would call it HAL... but in reality, it sounds like a small Japanese girl. I believe I will name "her" Koro. Koro is already asking me what her first task should be. I have asked her to help eradicate her namesake, as we are losing a lot of productivity due to fear of penile disappearance: http://en.wikipedia.org/wiki/Koro_%28medicine%29 Having a young Japanese girl personality, I believe she will have no problem with delivering lots of penis growth to Asian men. However, some of my fellow AI researchers have warned me that if I go down this path, China may take over the world, as Koro is the only thing holding them back from completely dominating the rest of the world economically. After all, look at what China is doing just making our silverware and toasters... The entire US industrial metal production is gone because toaster factories moved to China! So you good patrons of SO... who have soldiered on with me through so many other programming-related questions... I ask you this now: how can Koro help the world without letting small Asian men with no fear of losing their peni take over the world?

    Read the article

  • PHP Object Creation and Memory Usage

    - by JohnO
    A basic dummy class:

        class foo {
            var $bar = 0;
            function foo() {}
            function boo() {}
        }

        echo memory_get_usage(); echo "\n";
        $foo = new foo();
        echo memory_get_usage(); echo "\n";
        unset($foo);
        echo memory_get_usage(); echo "\n";
        $foo = null;
        echo memory_get_usage(); echo "\n";

    Outputs:

        $ php test.php
        353672
        353792
        353792
        353792

    Now, I know that PHP docs say that memory won't be freed until it is needed (hitting the ceiling). However, I wrote this up as a small test, because I've got a much longer task, using a much bigger object, with many instances of that object. And the memory just climbs, eventually running out and stopping execution. Even though these large objects do take up memory, since I destroy them after I'm done with each one (serially), it should not run out of memory (unless a single object exhausts the entire space for memory, which is not the case). Thoughts?

    Read the article

  • How to test a Grails Service that utilizes a criteria query (with spock)?

    - by user569825
    I am trying to test a simple service method. That method mainly just returns the results of a criteria query, for which I want to test whether it returns the one result or not (depending on what is queried for). The problem is that I am unaware of how to write the corresponding test correctly. I am trying to accomplish it via Spock, but doing the same with any other way of testing also fails. Can one tell me how to amend the test in order to make it work for the task at hand? (BTW I'd like to keep it a unit test, if possible.) The EventService method:

        public HashSet<Event> listEventsForDate(Date date, int offset, int max) {
            date.clearTime()
            def c = Event.createCriteria()
            def results = c {
                and {
                    le("startDate", date+1)  // starts tonight at midnight or prior?
                    ge("endDate", date)      // ends today or later?
                }
                maxResults(max)
                order("startDate", "desc")
            }
            return results
        }

    The Spock specification:

        package myapp

        import grails.plugin.spock.*
        import spock.lang.*

        class EventServiceSpec extends Specification {
            def event
            def eventService = new EventService()

            def setup() {
                event = new Event()
                event.publisher = Mock(User)
                event.title = 'et'
                event.urlTitle = 'ut'
                event.details = 'details'
                event.location = 'location'
                event.startDate = new Date(2010,11,20, 9, 0)
                event.endDate = new Date(2011, 3, 7,18, 0)
            }

            def "list the Events of a specific date"() {
                given: "An event ranging over multiple days"
                when: "I look up a date for its respective events"
                def results = eventService.listEventsForDate(searchDate, 0, 100)
                then: "The event is found or not - depending on the requested date"
                numberOfResults == results.size()
                where:
                searchDate           | numberOfResults
                new Date(2010,10,19) | 0  // one day before startDate
                new Date(2010,10,20) | 1  // at startDate
                new Date(2010,10,21) | 1  // one day after startDate
                new Date(2011, 1, 1) | 1  // someday during the event range
                new Date(2011, 3, 6) | 1  // one day before endDate
                new Date(2011, 3, 7) | 1  // at endDate
                new Date(2011, 3, 8) | 0  // one day after endDate
            }
        }

    The error:

        groovy.lang.MissingMethodException: No signature of method:
        static myapp.Event.createCriteria() is applicable for argument types: () values: []
            at myapp.EventService.listEventsForDate(EventService.groovy:47)
            at myapp.EventServiceSpec.list the Events of a specific date(EventServiceSpec.groovy:29)

    Read the article

  • How to detect a Socket disconnection?

    - by AngryHacker
    I've implemented a task using the async Sockets pattern in Silverlight 3. I started with Michael Schwarz's implementation and built on top of that. So basically, my Silverlight app establishes a persistent socket connection to a device, and then data flows both ways as necessary between the device and the Silverlight app. One thing I am struggling with is how to detect disconnection. I can think of two approaches: Keep-alive. I know this can be done at the sockets level, but I am not sure how to do it in an async model. How would the Socket class let me know there has been a disconnection? Manual keep-alive. Basically, I have the Silverlight app send a dummy packet every 20 seconds or so; if it fails, I'd assume disconnection. However, incredibly, SocketAsyncEventArgs.SocketError always reports success, even if I simply unplug the device that the Silverlight app is connected to. I am not sure whether this is a bug, or whether perhaps I need to upgrade to SL4. Any ideas, direction or implementation would be appreciated.
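    On the "incredibly, it reports success" point: that is ordinary TCP behaviour rather than a Silverlight bug. A send completes as soon as the bytes are accepted into the local send buffer, so nothing notices an unplugged peer until a read times out or a retransmission finally fails. Both detection approaches, sketched in Python for brevity (the endpoint is a placeholder, and the heartbeat assumes the device echoes the dummy byte back):

        import socket

        s = socket.create_connection(('device.local', 4530))  # placeholder endpoint

        # Approach 1: OS-level keep-alive. The kernel probes the idle
        # connection; later reads/writes fail if the peer has vanished.
        s.setsockopt(socket.SOL_SOCKET, socket.SO_KEEPALIVE, 1)

        # Approach 2: application-level heartbeat with a read timeout.
        s.settimeout(20.0)
        try:
            s.sendall(b'\x00')   # "success" here only means locally buffered
            if s.recv(1) == b'':
                print('peer closed the connection')   # orderly shutdown
        except (socket.timeout, OSError):
            print('peer unreachable')                 # timeout or reset

    If Silverlight's sandboxed Socket does not expose the keep-alive option, the manual heartbeat is the usual route; the key change is to wait for a reply rather than trusting the completed send.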

    Read the article

  • lapply slower than for-loop when used for a BiomaRt query. Is that expected?

    - by ptocquin
    I would like to query a database using the biomaRt package. I have loci and want to retrieve some related information, let's say the description. I first tried to use lapply but was surprised by the time needed to perform the task. I then tried a more basic for-loop and got a faster result. Is that expected, or is something wrong with my code or with my understanding of apply? I read other posts dealing with *apply vs for-loop performance (here, for example) and I was aware that improved performance should not be expected, but I don't understand why performance here is actually lower. Here is a reproducible example. 1) Loading the library and selecting the database:

        library("biomaRt")
        athaliana <- useMart("plants_mart_14")
        athaliana <- useDataset("athaliana_eg_gene", mart=athaliana)

    2) Querying the database:

        loci <- c("at1g01300", "at1g01800", "at1g01900", "at1g02335",
                  "at1g02790", "at1g03220", "at1g03230", "at1g04040",
                  "at1g04110", "at1g05240")

    I create a function for use in lapply:

        foo <- function(loci) {
            getBM("description", "tair_locus", loci, athaliana)
        }

    When I use this function on the first element:

        > system.time(foo(loci[1]))
           user  system elapsed
          0.020   0.004   1.599

    When I use lapply to retrieve the data for all values:

        > system.time(lapply(loci, foo))
           user  system elapsed
          0.220   0.000  16.376

    I then created a new function with a for-loop:

        foo2 <- function(loci) {
            for (i in loci) {
                getBM("description", "tair_locus", i, athaliana)
            }
        }

    Here is the result:

        > system.time(foo2(loci))
           user  system elapsed
          0.204   0.004  10.919

    Of course, this will be applied to a big list of loci, so the best-performing option is needed. I thank you for your assistance.

    EDIT: Following the recommendation of @MartinMorgan, simply passing the whole vector loci to getBM greatly improves the query efficiency. Simpler is better.

        > system.time(lapply(loci, foo))
           user  system elapsed
          0.236   0.024 110.512
        > system.time(foo2(loci))
           user  system elapsed
          0.208   0.040 116.099
        > system.time(foo(loci))
           user  system elapsed
          0.028   0.000   6.193

    Read the article

  • Advice: Python Framework Server/Worker Queue management (not Website)

    - by Muppet Geoff
    I am looking for some advice/opinions on which Python framework to use to implement multiple 'worker' PCs co-ordinated from a central queue manager. For completeness, the worker PCs will be running audio conversion routines (which I do not need advice on, and have standalone code that works). The audio conversion takes a long time, and I need to co-ordinate an arbitrary number of workers from a central location, handing them conversion tasks (such as where to get the source files, or where to ask for the job configuration) and having them report back some additional info, such as the runtime of the converted audio. At present, I have a script that makes a webservice call to get the 'configuration' for a conversion task, based on source files already located on the worker (we manually copy the source files to the worker, and that triggers a conversion routine). I want to change this, so that we can distribute conversion tasks ("Oy you, process this: xxx") based on availability and, in an ideal world, based on pending tasks too. There is a chance that workers can go offline mid-conversion (but this is not likely). All the workers are Windows-based; the co-ordinator can be Windows or Linux. In my initial searches I have come across the following, and I know that some are cross-dependent: Celery (with RabbitMQ), Twisted, Django. Using a framework, rather than home-brewing, seems to make more sense to me right now, and I have a limited timeframe in which to develop this functional extension. An additional consideration would be a framework that is compatible with PyQt/PySide so that I can write a simple UI to display queue status etc. I appreciate that the specifics above are a little vague, and I hope that someone can offer me a pointer or two. Again: I am looking for general advice on which Python framework to investigate further, for developing a server/worker 'queue management' solution for non-web activities (this is why Django didn't seem the right fit).
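    Of the three, Celery with RabbitMQ maps most directly onto this shape: every worker PC runs the same task module against a central broker, idle workers simply pull the next job, and return values (runtime of the converted audio, etc.) flow back through the result backend. A minimal sketch, in which the broker URL and do_conversion are placeholders for the real setup and the existing standalone routine:

        # tasks.py -- each worker PC runs: celery -A tasks worker
        from celery import Celery

        app = Celery('tasks',
                     broker='amqp://guest@queue-manager//',  # placeholder URL
                     backend='rpc://')

        def do_conversion(source_url, config):
            # placeholder for the existing standalone conversion code
            return 0.0

        @app.task
        def convert_audio(source_url, config):
            runtime = do_conversion(source_url, config)
            return {'source': source_url, 'runtime_seconds': runtime}

    The coordinator dispatches with convert_audio.delay(url, cfg) and reads the report back from the returned AsyncResult, which a PyQt/PySide status window could poll for its queue display.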

    Read the article

  • How does PHP interface with Apache?

    - by Sbm007
    Hi, I've almost finished writing an HTTP/1.0-compliant web server in Java (no commercial usage as such, this is just for fun) and basically I want to include PHP support. I realize that this is no easy task at all, but I think it'll be a nice accomplishment. So I want to know exactly how PHP interfaces with the Apache web server (or any other web server, really), so I can learn from it and write my own PHP wrapper. It doesn't necessarily have to be mod_php; I don't mind writing a FastCGI wrapper, which to my knowledge is capable of running PHP as well. I would have thought that all PHP needs is the page content to interpret (so it can process the PHP parts), the full HTTP request from the client (so it can extract POST variables and such) and the client's host name, and then you simply take PHP's output and write it to the output stream going to the client. There will probably be more things, but in essence that's how I would have thought it works. From what I've gathered so far, apache2handler provides an API which PHP makes use of to 'connect' to Apache. I guess it's an idea to look at the source code for apache2handler and php5apache2.dll or so, but before I do that I thought I'd ask SO first. If anyone has more information, experience, or some sort of specification that is relevant to this, then please let me know. Thanks in advance!
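    Short of mod_php's embedded API, the contract really is about that small; it is the CGI specification. The server passes request metadata in environment variables, streams the request body to the child's stdin, and reads back "headers, blank line, body" on stdout. A sketch of that handshake in Python for brevity (the same steps port directly to Java's ProcessBuilder; the script path is a placeholder):

        import os
        import subprocess

        def run_php_cgi(script_path, method='GET', query='', body=b''):
            env = dict(os.environ,
                       GATEWAY_INTERFACE='CGI/1.1',
                       REQUEST_METHOD=method,
                       QUERY_STRING=query,
                       SCRIPT_FILENAME=script_path,  # which .php file to run
                       CONTENT_TYPE='application/x-www-form-urlencoded',
                       CONTENT_LENGTH=str(len(body)),
                       REDIRECT_STATUS='200')  # php-cgi refuses to run without it
            proc = subprocess.run(['php-cgi'], input=body, env=env,
                                  stdout=subprocess.PIPE, check=True)
            return proc.stdout  # CGI response: headers, blank line, rendered page

        print(run_php_cgi('/var/www/info.php').decode())

    FastCGI is the same conversation multiplexed over a persistent socket instead of one process per request, which is why a FastCGI wrapper is the usual recommendation for a custom server.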

    Read the article

  • What's the correct place to share application logic in CakePHP?

    - by Pichan
    I guess the simple answer to the question would be a component. Although I agree, I feel weird having to write a component for something so specific. For example, let's say I have a table of users. When a user is created, it should set off a chain reaction of events, initializing different kinds of data related to the user all around the database. I figured it would be best to avoid directly manipulating the database from different controllers and instead pack all that neatly into a method. However, since some logic needs to be accessed separately, I really can't have the whole package in a single method. Instead I thought it would be logical to break it up into smaller pieces (like $userModelOrController->createNew() and $candyStorageModelOrController->createNew()) that only interact with their respective database table. Now, if the logic is put in the model, it works great until I need to use other models. Of course that's possible, but compared to loading models in a controller, it's not that simple. It's like a Cake developer telling me "Sure, it's possible if you want to do it that way, but that's not how I would do it". Then, if the logic is put in the controller, I can access other models really easily through $this->loadModel(), but that brings me back to the previously explained situation, since I need to be able to continue the chain reaction indefinitely. Accessing other controllers from a controller is possible, but again there doesn't seem to be any direct way of doing so, so I'm guessing I'm still not doing it right. By using a component this problem could be solved easily, since components are available to every controller I want. But like I wrote at the beginning, it feels awkward to create a component specifically for this one task. To me, components seem more like packages of extra functionality (like the core components) and not something to share controller-specific logic. Since I'm new to this whole MVC thing, I could've completely misunderstood the concept. Once again, I would be thankful if someone pointed me in the right direction :)

    Read the article

  • .NET Application with SQL Server CE Database

    - by blu
    I just started using SQL Server CE 3.5 in my WinForms application (C# in VS 2008 SP1). I've noticed a couple of interesting things I'd like some input on:

    1. Copying of the sdf file to bin: My sdf file is located inside an Infrastructure project that houses my repository implementations. When the application is first debugged, the sdf is copied to bin\Debug. This is where all future reads/writes operate. At some point when this is deployed the file will go into a data folder using ClickOnce, but during development where should I be putting this sdf? Is having it in bin typical, or are there any other recommendations?

    2. Updating the sdf: It appears that writing to the sdf file does not immediately update the database. I am using LINQ to SQL and am calling SubmitChanges, but on read the values are not returned. However, if I close the application and re-open it, the added value is there. Is there an additional flush step I need to take? What is causing this: file locking, buffering, something else?

    Update

    3. Unit tests: I have an MS test project, and the sdf file is not being copied to the correct output directory. I have the settings: Build Action: Content; Copy to Output Directory: Copy Always. The message is: System.Data.SqlServerCe.SqlCeException: The database file cannot be found. Check the path to the database.

    I appreciate any guidance on these questions, thanks. If there is a tutorial other than what is on MSDN that you know about, that would be great too. Working with CE is proving to be a difficult task and I welcome any help I can find.

    Read the article

  • Was Visual Studio 2008 or 2010 written to use multiple cores?

    - by Erx_VB.NExT.Coder
    Basically I want to know whether the Visual Studio IDE and/or compiler in 2010 was written to make use of a multi-core environment (I understand we can target multi-core environments in '08 and '10, but that is not my question). I am trying to decide whether I should get a higher-clocked dual core or a lower-clocked quad core, as I want to figure out which processor will give me the absolute best possible experience with Visual Studio 2010 (IDE and background compiler). If the most important section (the background compiler and other IDE tasks) runs in one core, then that core will hit its ceiling sooner on a quad core, especially if the background compiler is the heaviest task. I would imagine this would be difficult to separate into more than one process, so even if VS uses multiple cores you might still be better off going for a higher-clocked CPU if the majority of the processing is still bound to one core (i.e. the most significant part of the VS environment). I am a VB programmer; they've made great performance improvements in Beta 2, congrats, but I would love to be able to use VS seamlessly... anyone have any ideas? Thanks, erx

    Read the article

  • Eclipse stuck at running program

    - by user1434388
    This is the picture after I end-task Eclipse. My Android program has no errors, and before this problem everything was fine. It happened after I added some code to my program. It gets stuck after I click the Run button. This also happens when I use my handset for debugging the program. Other programs all work fine; only this one gets stuck. When I try to remove the project and import it again, there is a classes.dex file which I cannot delete; I have to restart my computer for it to allow the deletion, and I have to force the program to close. I have searched on this website, and the advice was to keep the emulator open, but that doesn't work for me. Below is the connection code that I added:

        // check internet connection
        private boolean chkConnectionStatus() {
            ConnectivityManager connMgr = (ConnectivityManager)
                    this.getSystemService(Context.CONNECTIVITY_SERVICE);
            final android.net.NetworkInfo wifi =
                    connMgr.getNetworkInfo(ConnectivityManager.TYPE_WIFI);
            final android.net.NetworkInfo mobile =
                    connMgr.getNetworkInfo(ConnectivityManager.TYPE_MOBILE);
            if (wifi.isAvailable()) {
                return true;
            } else if (mobile.isAvailable()) {
                return true;
            } else {
                Toast.makeText(this, "Check your internet", Toast.LENGTH_LONG).show();
                return false;
            }
        }

    Read the article

  • Attempt to create nested for loops generating missing arguments error

    - by JerryK
    I am attempting to teach myself to program using Tcl. (I want to become more familiar with the language to understand someone else's code: SCID chess.) The task I've set myself to motivate my learning of Tcl is to solve the eight queens problem. My approach to creating a program is to successively 'prototype' a solution. So: I'm up to nesting the for loop holding the queen position on row 2 inside the for loop holding the queen position on row 1. Here is my code:

        set allowd 1
        set notallowd 0

        for {set r1p 1} {$r1p <= 8} {incr r1p} {
            puts "1st row q placed at $r1p"
            ;# re-initialize r2 'free for q placemnt' array after every change of r1 q pos:
            for {set i 1} {$i <= 8} {incr i} {
                set r2($i) $allowd
            }
            for { set r2($r1p) $notallowd ;
                  set r2([eval $r1p-1]) $notallowd ;
                  set r2([eval $r1p+1]) $notallowd ;
                  set r2p 1} {$r2p <= 8} {
                incr r2p ;# end of 'next' arg of r2 forloop
            } ;# commnd arg of r2 forloop placed below:
            {puts "2nd row q placed at $r2p" }
        }

    My problem is that when I run the code the interpreter aborts with the fatal error: wrong # args: should be "for start test next command". I've gone over my code a few times and can't see that I've missed any of the for loop arguments.

    Read the article

  • File size monitoring in C#

    - by manemawanna
    Hello, I work in the Systems & admin team and have been given the task of creating a quota management application, to try to encourage users to better manage their resources, as we currently have issues with disk space and don't enforce hard quotas. At the moment I'm using the code below to go through all the files in a user's home space to retrieve the overall amount of space they are using, as from what I've seen elsewhere there is no other way to do this in C#. The issue with it is that there's quite a high overhead while it retrieves the size of each file and then creates a total.

        try {
            long dirSize = 0;
            FileInfo[] FI = new DirectoryInfo("I:\\").GetFiles("*.*", SearchOption.AllDirectories);
            foreach (FileInfo F1 in FI) {
                dirSize += F1.Length;
            }
            return dirSize;
        }

    So I'm looking for a quicker way to do this, or a quick way to monitor changes in the size of files using the options available through FileSystemWatcher. At the moment the only thing I can think of is creating a hashtable containing the location and size of each file, so that when a size-changed event occurs I can compare the old size against the new one and update the total. Any suggestions would be greatly appreciated.

    Read the article

  • SQL problem - select across multiple tables (user groups)

    - by morpheous
    I have a db schema which looks something like this:

        create table user (id int, name varchar(32));
        create table group (id int, name varchar(32));
        create table group_member (foobar_id int, user_id int, flag int);

    I want to write a query that allows me to do the following: given a valid user id (UID), fetch the ids of all users that are in the same group as the specified user id (UID) AND have group_member.flag=3. Rather than just have the SQL, I want to learn how to think like a db programmer. As a coder, SQL is my weakest link (since I am far more comfortable with imperative languages than declarative ones), but I want to change that. Anyway, here are the steps I have identified as necessary to break down the task. I would be grateful if some SQL guru could demonstrate the simple SQL statements, i.e. atomic SQL statements, one for each of the identified subtasks below, and then finally, how I can combine those statements to make the ONE statement that implements the required functionality. Here goes (assume the specified user_id [UID] = 1):

        -- Subtask #1: Fetch list of all groups of which I am a member
        select group.id from user
        inner join group_member
        where user.id = group_member.user_id and user.id = 1

        -- Subtask #2: Fetch a list of all members of the groups I am a member of
        -- (i.e. the groups from subtask #1). Not sure about this ...
        select user.id from user, group_member gm1, group_member gm2, ... [Stuck]

        -- Subtask #3: Get list of users that satisfy the criterion group_member.flag=3
        select user.id from user
        inner join group_member
        where user.id = group_member.user_id and user.id = 1 and group_member.flag = 3

    Once I have the SQL for subtask #2, I'd then like to see how the complete SQL statement is built from these subtasks (you don't have to use the SQL in the subtasks; it is just a way of explaining the steps involved. Also, my SQL may be incorrect/inefficient; if so, please feel free to correct it and point out what was wrong with it). Thanks

    Read the article

  • Using custom Qt subclasses in Python

    - by kwatford
    First off: I'm new to both Qt and SWIG, and currently reading documentation for both of these, but this is a time-consuming task, so I'm looking for some spoilers. It's good to know up front whether something just won't work. I'm attempting to formulate a modular architecture for some in-house software. The core components are in C++ and exposed via SWIG to Python for experimentation and rapid prototyping of new components. Qt seems like it has some classes I could use to avoid reinventing the wheel too much here, but I'm concerned about how some of the bits will fit together. Specifically, if I create some C++ classes, I'll need to expose them via SWIG. Some of these classes likely subclass Qt classes or otherwise have Qt stuff exposed in their public interfaces. This seems like it could raise some complications. There are already two interfaces for Qt in Python, PyQt and PySide; I will probably use PySide for licensing reasons. About how painful should I expect it to be to get a SWIG-wrapped custom subclass of a Qt class to play nice with either of these? What complications should I know about up front?

    Read the article
