Search Results

Search found 11409 results on 457 pages for 'large teams'.

  • Strange Ruby String Selection

    - by Daniel
    The string in question (read from a file):

        if (true) then { _this = createVehicle ["Land_hut10", [6226.8901, 986.091, 4.5776367e-005], [], 0, "CAN_COLLIDE"]; _vehicle_10 = _this; _this setDir -2.109278; };

    Retrieved from a large list of similar (all same file) strings via the following:

        get_stringR(string, "if", "};")

    And the function code:

        def get_stringR(a, b, c)
          b = a.index(b)
          b ||= 0
          c = a.rindex(c)
          c ||= b
          r = a[b, c]
          return r
        end

    So far this works fine, but what I wanted to do is select the array after "createVehicle", and the following (I thought) should work:

        newstring = get_string(myString, "\[", "\];")

    Note: get_string is the same as get_stringR, except it uses the first occurrence of the pattern both times, rather than the first and last occurrence. The output should have been:

        ["Land_hut10", [6226.8901, 986.091, 4.5776367e-005], [], 0, "CAN_COLLIDE"];

    Instead it was the below, given via puts:

        ["Land_hut10", [6226.8901, 986.091, 4.5776367e-005], [], 0, "CAN_COLLIDE"]; _vehicle_10 = _this; _this setDir

    Some 40 characters past the point it should have retrieved, which was very strange... Second note: using both get_string and get_stringR produced the exact same result with the parameters given. I then decided to add the following to my get_string code:

        b = a.index(b)
        b ||= 0
        c = a.index(c)
        c ||= b
        if c > 40 then c -= 40 end
        r = a[b, c]
        return r

    And it works as expected (for every 'block' in the file, even though the strings after that array are not identical in any way), but something obviously isn't right :).
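
    (For reference: Ruby's two-argument String#[] treats its second argument as a length, not an end index, so a[b, c] where c is a character position overshoots by exactly the start offset b. A minimal sketch of the distinction, on a throwaway string:

        s = "hello world"
        s[6, 5]    # => "world"  -- second argument is a LENGTH
        s[6..10]   # => "world"  -- a Range takes an end index instead

    Passing an index where a length is expected produces exactly the kind of fixed-offset overshoot described above.)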

  • array of structs in C

    - by Hristo
    I'm trying to create an array of structs and also a pointer to that array. I don't know how large the array is going to be, so it should be dynamic. My struct would look something like this:

        typedef struct _stats_t {
          int hours[24];
          int numPostsInHour;
          int days[7];
          int numPostsInDay;
          int weeks[20];
          int numPostsInWeek;
          int totNumLinesInPosts;
          int numPostsAnalyzed;
        } stats_t;

    ... and I need to have multiple of these structs for each file (unknown amount) that I will analyze. I'm not sure how to do this. I don't like the following approach because of the limit on the size of the array:

        #define MAX 10

        typedef struct _stats_t {
          int hours[24];
          int numPostsInHour;
          int days[7];
          int numPostsInDay;
          int weeks[20];
          int numPostsInWeek;
          int totNumLinesInPosts;
          int numPostsAnalyzed;
        } stats_t[MAX];

    So how would I create this array? Also, would a pointer to this array look something like this?

        stats_t stats[];
        stats_t *statsPtr = &stats[0];

    Thanks, Hristo
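
    (A minimal sketch of the usual malloc/realloc approach -- the starting capacity and growth policy are placeholders:

        #include <stdlib.h>

        size_t capacity = 10;
        size_t count = 0;
        stats_t *stats = malloc(capacity * sizeof *stats);  /* dynamic array; check for NULL in real code */

        /* when another file shows up and the array is full: */
        if (count == capacity) {
            stats_t *tmp = realloc(stats, 2 * capacity * sizeof *stats);
            if (tmp != NULL) { stats = tmp; capacity *= 2; } /* grow by doubling */
        }
        stats[count++] = (stats_t){0};  /* zero-initialized element (C99 compound literal) */

        free(stats);

    Here stats itself already acts as a pointer to the first element, so a separate statsPtr is not needed.)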

  • SQL Server Multi-statement UDF - way to store data temporarily required

    - by Kharlos Dominguez
    Hello,

    I have a relatively complex query, with several self joins, which works on a rather large table. For that query to perform faster, I need to work with only a subset of the data. Said subset can range between 12,000 and 120,000 rows depending on the parameters passed. More details can be found here: http://stackoverflow.com/questions/3054843/sql-server-cte-referred-in-self-joins-slow

    As you can see, I was using a CTE to return the data subset before, which caused some performance problems, as SQL Server was re-running the SELECT statement in the CTE for every join instead of simply running it once and reusing its data set. The alternative, using temporary tables, worked much faster (while testing the query in a separate window outside the UDF body). However, when I tried to implement this in a multi-statement UDF, I was harshly reminded by SQL Server that multi-statement UDFs do not support temporary tables for some reason...

    UDFs do allow table variables, however, so I tried that, but the performance is absolutely horrible: it takes 1m40 for my query to complete, whereas the CTE version only took 40 seconds. I believe the table variable is slow for the reasons listed in this thread: http://stackoverflow.com/questions/1643687/table-variable-poor-performance-on-insert-in-sql-server-stored-procedure

    The temporary table version takes around 1 second, but I can't make it into a function due to the SQL Server restrictions, and I have to return a table back to the caller.

    Considering that the CTE and table variables are both too slow, and that temporary tables are rejected in UDFs, what are my options for my UDF to perform quickly?

    Thanks a lot in advance.
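
    (One shape worth noting alongside the options above: an inline table-valued function, which is a single SELECT expanded into the calling query like a parameterized view, so it sidesteps both the temp-table restriction and the table-variable statistics problem. A minimal sketch with a hypothetical schema:

        CREATE FUNCTION dbo.GetSubset (@Param INT)
        RETURNS TABLE
        AS
        RETURN
        (
            SELECT Col1, Col2
            FROM dbo.BigTable
            WHERE FilterCol = @Param
        );

    Whether the complex self-joining logic can be folded into a single SELECT is the deciding factor.)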

  • Techniques for sharing a value among classes in a program

    - by Kenneth Cochran
    I'm using

        Environment.GetFolderPath(Environment.SpecialFolder.CommonApplicationData) + "\MyProgram"

    as the path to store several files used by my program, and I'd like to avoid pasting the same snippet of code all over my application. I need to ensure that:

    - The path cannot be accidentally changed once it's been set.
    - The classes that need it have access to it.

    I've considered:

    - Making it a singleton
    - Using constructor dependency injection
    - Using property dependency injection
    - Using AOP to create the path where it's needed

    Each has pros and cons. The singleton is everyone's favorite whipping boy. I'm not opposed to using one, but there are valid reasons to avoid it if possible. I'm already heavily using constructor injection through Castle Windsor, but this is a path string, and Windsor doesn't handle system-type dependencies very gracefully. I could always wrap it in a class, but that seems like overkill for something as simple as passing around a string value. In any case, this route would add yet another constructor argument to each class where it is used. The problem I see with property injection in this case is that there is a large amount of indirection from where the value is set to where it is needed; I would need a very long line of middlemen to reach all the places where it's used. AOP looks promising, and I'm planning on using AOP for logging anyway, so this at least sounds like a simple solution.

    Are there any other options I haven't considered? Am I off base with my evaluation of the options I have considered?
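
    (A minimal sketch of the least-machinery option, an immutable static holder -- names are hypothetical:

        public static class AppPaths
        {
            // readonly: assigned once here, cannot be accidentally reassigned later
            public static readonly string DataDirectory = System.IO.Path.Combine(
                System.Environment.GetFolderPath(
                    System.Environment.SpecialFolder.CommonApplicationData),
                "MyProgram");
        }

    This satisfies both constraints above at the cost of a hard-coded dependency, which is the usual argument against the singleton family.)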

  • learning to type - tips for programmers?

    - by OrbMan
    After hunting and pecking for about 35 years, I have decided to learn to type. I am learning QWERTY and have learned about 2/3 of the letters so far. While learning, I have noticed how asymmetrical the keyboard is, which really bothers me. (I will probably switch to a symmetrical keyboard eventually, but for now I am trying to do everything as standard and "correct" as possible.)

    Although I am not there yet in my lessons, it seems that many of the keys I am going to use as a C# web developer are supposed to be typed by the pinky of my right hand. Are there any typing patterns you have developed that are more ergonomic (or faster) when typing large volumes of code rife with braces, colons, semi-colons and quotes? Or should I just accept the fact that every other key is going to be hit with my right pinky? It is not that speed is such a huge concern, as much as that it seems so inefficient to rely on one finger so much...

    As an example, some of the conventions I use as a hunt-and-peck typist, like typing open and close braces right away with my index and middle finger and then hitting the left arrow key to fill in the inner content, don't seem to work as well with just a pinky.

    What are some typing patterns using a standard QWERTY keyboard that work really well for you as a programmer?

  • LINQ to SQL and DataPager

    - by Jonathan S.
    I'm using LINQ to SQL to search a fairly large database and am unsure of the best approach to perform paging with a DataPager. I am aware of the Skip() and Take() methods and have those working properly. However, I'm unable to use the count of the results for the DataPager, as it will always be the page size as determined in the Take() method. For example:

        var result = (from c in db.Customers
                      where c.FirstName == "JimBob"
                      select c).Skip(0).Take(10);

    This query will always return 10 or fewer results, even if there are 1000 JimBobs. As a result, the DataPager will always think there's a single page, and users aren't able to navigate across the entire result set. I've seen one online article where the author just wrote another query to get the total count and called that. Something like:

        int resultCount = (from c in db.Customers
                           where c.FirstName == "JimBob"
                           select c).Count();

    and used that value for the DataPager. But I'd really rather not have to copy and paste every query into a separate call where I want to page the results, for obvious reasons. Is there an easier way to do this that can be reused across multiple queries? Thanks.
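
    (A minimal sketch of the usual factoring: build the filtered query once, then derive both the count and the page from it -- startIndex and pageSize are hypothetical:

        IQueryable<Customer> query = db.Customers.Where(c => c.FirstName == "JimBob");

        int totalCount = query.Count();                              // feeds the DataPager
        var page = query.Skip(startIndex).Take(pageSize).ToList();   // current page of rows

    Deferred execution means the Where clause is written once, and SQL Server still sees two simple queries: one COUNT and one paged SELECT.)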

  • Svn repository split problem

    - by Tuminoid
    I want to split a directory from a large Subversion repository into a repository of its own, and keep the history of the files in that directory. I tried the regular way of doing it first:

        svnadmin dump /path/to/repo > largerepo.dump
        cat largerepo.dump | svndumpfilter include my/directory > mydir.dump

    but that does not work, since the directory has been moved and copied over the years, and files have been moved into and out of it to other parts of the repository. The result is a lot of these:

        svndumpfilter: Invalid copy source path '/some/old/path'

    The next thing I tried was to include those /some/old/path entries as they appear, and after a long, long list of included files and directories, the svndumpfilter completes, BUT importing the resulting dump doesn't produce the same files as the current directory has.

    So, how do I properly split the directory from that repository while keeping the history?

    EDIT: I specifically want trunk/myproj to be the trunk in a new repository, PLUS have the new repository include none of the other old stuff, i.e. there should be no possibility for anyone to update to an old revision from before the split and get/see the files. The svndumpfilter solution I tried would achieve exactly that; sadly it's not doable since the paths/files have been moved around. The solution by ng isn't acceptable since it's basically a clone plus removal of extras, which keeps ALL the history, not just the relevant myproj history.

    BUMP: C'mon, there must be someone who definitely knows if this is doable or not, and how!

  • Aggregating / Collecting AJAX requests

    - by Ganesh Shankar
    I have a situation where a user can manipulate a large set of data (presented in a table) by using a bunch of filters represented as checkboxes. The page is AJAXed up, so the user doesn't have to wait for a full page refresh every time they click a filter.

    The way it's currently implemented is by having an event handler watch all the checkboxes and request filtered data from the server when a click event is triggered. This works fine. However, there is a usability and performance issue with doing it this way. For example, if a user clicks 6 checkboxes, 6 AJAX requests are triggered and they all come back at various intervals, causing the page to be updated 6 times. This will most probably annoy the user and seems rather inefficient.

    I want to put some kind of timeout on the event handler to do something like this: "Wait for 1 second, and if there are no more filters clicked, trigger the AJAX request." However, at the moment I've only been able to delay all 6 requests by 1 second. I'm not sure how to aggregate / collect the filter info into 1 AJAX request. Any suggestions would be greatly appreciated!
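
    (A minimal debounce sketch in plain JavaScript -- the element lookup and request function are hypothetical:

        var filterTimer = null;

        function onFilterClick() {
            clearTimeout(filterTimer);           // a new click restarts the quiet period
            filterTimer = setTimeout(function () {
                var checked = [];
                var boxes = document.getElementsByName('filter');
                for (var i = 0; i < boxes.length; i++) {   // read ALL current filter state
                    if (boxes[i].checked) checked.push(boxes[i].value);
                }
                requestFilteredData(checked);    // one aggregated AJAX request
            }, 1000);
        }

    Because the timer is reset on every click, six quick clicks collapse into a single request fired one second after the last one.)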

  • Java: Reading images and displaying as an ImageIcon

    - by 11helen
    I'm writing an application which reads and displays images as ImageIcons (within a JLabel); the application needs to be able to support JPEGs and bitmaps.

    For JPEGs, I find that passing the filename directly to the ImageIcon constructor works fine (even for displaying two large JPEGs). However, if I use ImageIO.read to get the image and then pass the image to the ImageIcon constructor, I get an OutOfMemoryError (Java heap space) when the second image is read (using the same images as before).

    For bitmaps, if I try to read by passing the filename to ImageIcon, nothing is displayed, yet reading the image with ImageIO.read and then using this image in the ImageIcon constructor works fine.

    I understand from reading other forum posts that the two methods don't behave the same across formats because of Java's compatibility issues with bitmaps. However, is there a way around my problem so that I can use the same method for both bitmaps and JPEGs without an OutOfMemoryError? (I would like to avoid having to increase the heap size if possible!)

    The OutOfMemoryError is triggered by this line:

        img = getFileContentsAsImage(file);

    and the method definition is:

        public static BufferedImage getFileContentsAsImage(File file) throws FileNotFoundException {
            BufferedImage img = null;
            try {
                ImageIO.setUseCache(false);
                img = ImageIO.read(file);
                img.flush();
            } catch (IOException ex) {
                //log error
            }
            return img;
        }

    The stack trace is:

        Exception in thread "AWT-EventQueue-0" java.lang.OutOfMemoryError: Java heap space
            at java.awt.image.DataBufferByte.<init>(DataBufferByte.java:58)
            at java.awt.image.ComponentSampleModel.createDataBuffer(ComponentSampleModel.java:397)
            at java.awt.image.Raster.createWritableRaster(Raster.java:938)
            at javax.imageio.ImageTypeSpecifier.createBufferedImage(ImageTypeSpecifier.java:1056)
            at javax.imageio.ImageReader.getDestination(ImageReader.java:2879)
            at com.sun.imageio.plugins.jpeg.JPEGImageReader.readInternal(JPEGImageReader.java:925)
            at com.sun.imageio.plugins.jpeg.JPEGImageReader.read(JPEGImageReader.java:897)
            at javax.imageio.ImageIO.read(ImageIO.java:1422)
            at javax.imageio.ImageIO.read(ImageIO.java:1282)
            at framework.FileUtils.getFileContentsAsImage(FileUtils.java:33)
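
    (If decoding at full resolution is what exhausts the heap, one commonly suggested route is subsampling at read time, which works for any format ImageIO can decode. A hedged sketch using the standard ImageReader API -- the factor is a placeholder:

        import java.awt.image.BufferedImage;
        import java.io.File;
        import javax.imageio.*;
        import javax.imageio.stream.ImageInputStream;

        public static BufferedImage readSubsampled(File file, int factor) throws Exception {
            ImageInputStream in = ImageIO.createImageInputStream(file);
            ImageReader reader = ImageIO.getImageReaders(in).next();
            reader.setInput(in);
            ImageReadParam param = reader.getDefaultReadParam();
            param.setSourceSubsampling(factor, factor, 0, 0); // keep every Nth pixel in x and y
            BufferedImage img = reader.read(0, param);
            reader.dispose();
            in.close();
            return img;
        }

    A factor of 2 cuts the decoded raster to a quarter of the memory.)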

  • How would I make a mouse controlled physics object in Box2D / AS3?

    - by Marty Wallace
    I recently created this tennis game using my own basic physics: http://martywallace.com/sandbox/tennis/

    Basically, a tennis racquet sticks to your mouse and you can hit the tennis balls upward. The physics aren't that great, and I want to make a more interesting version of this game with milestones and levels in Flash. I am planning to use Box2D because I have moderate experience with it.

    I'm not sure how to go about creating the racquet. As far as I understand Box2D, the racquet needs a velocity to influence the velocities of the balls when you hit them (so that you can hit them harder or softer to keep them up). With that said, I'm assuming I can't just have a kinematic body whose position is set to the mouse, because it won't affect the velocities of the balls as expected. I've also thought about setting the velocity to the difference between the racquet position and the mouse each frame, but I am concerned that won't provide accurate positioning, and the velocity could end up really large if you move the mouse quickly.

    What is the correct way to have a physics object locked to the mouse, but also have its displacement over the last frame (from where it was to the mouse) affect the balls?
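
    (The standard Box2D answer to "locked to the mouse but still carrying velocity" is a mouse joint, which drags the body toward a target point with a real force each step. A hedged AS3 sketch -- property names vary slightly between Box2DFlash versions, and SCALE is the usual pixels-to-meters constant:

        var def:b2MouseJointDef = new b2MouseJointDef();
        def.bodyA = world.GetGroundBody();      // static anchor body
        def.bodyB = racquetBody;                // the racquet
        def.target.Set(mouseX / SCALE, mouseY / SCALE);
        def.maxForce = 1000 * racquetBody.GetMass();
        var joint:b2MouseJoint = world.CreateJoint(def) as b2MouseJoint;

        // each frame:
        joint.SetTarget(new b2Vec2(mouseX / SCALE, mouseY / SCALE));

    Because the body is moved by the solver rather than teleported, its velocity is physically meaningful when it strikes the balls.)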

  • Copy whole SQL Server database into JSON from Python

    - by Oli
    I'm facing an atypical conversion problem. About a decade ago I coded up a large site in ASP. Over the years this turned into ASP.NET, but it kept the same database. I've just redone the site in Django and I've copied all the core data, but before I cancel my account with the host, I need to make sure I've got a long-term backup of the data, so if it turns out I'm missing something, I can copy it from a local copy.

    To complicate matters, I no longer have Windows. I moved to Ubuntu on all my machines some time back. I could ask the host to send me a backup, but having no access to a machine with MSSQL, I wouldn't be able to use that if I needed to.

    So I'm looking for something that does:

        db = {}
        for table in database:
            db[table.name] = [row for row in table]

    And then I could serialize db off somewhere for later consumption... But how do I do the table iteration? Is there an easier way to do all of this? Can MSSQL do a cross-platform SQL dump (including data)?

    For previous MSSQL work I've used pymssql, but I don't know how to iterate the tables and copy rows (ideally with column headers so I can tell what the data is). I'm not looking for much code, but I need a poke in the right direction.
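
    (A minimal pymssql sketch of the table iteration -- the connection arguments are placeholders (the keyword is host or server depending on the pymssql version), and cursor.description supplies the column headers:

        import json
        import pymssql

        conn = pymssql.connect(host='host', user='user', password='pass', database='mydb')
        cur = conn.cursor()

        cur.execute("SELECT TABLE_NAME FROM INFORMATION_SCHEMA.TABLES "
                    "WHERE TABLE_TYPE = 'BASE TABLE'")
        tables = [row[0] for row in cur.fetchall()]

        db = {}
        for table in tables:
            cur.execute("SELECT * FROM [%s]" % table)
            columns = [d[0] for d in cur.description]   # column names for this table
            db[table] = [dict(zip(columns, row)) for row in cur.fetchall()]

        with open('backup.json', 'w') as f:
            json.dump(db, f, default=str)   # str() copes with dates and decimals

    INFORMATION_SCHEMA is standard SQL, so the table listing stays portable across MSSQL versions.)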

  • Dragging an UIView inside UIScrollView

    - by Sergey Mikhanov
    Hello community! I am trying to solve a basic problem with drag and drop on iPhone. Here's my setup:

    - I have a UIScrollView which has one large content subview (I'm able to scroll and zoom it).
    - The content subview has several small tiles as subviews that should be dragged around inside it.

    My UIScrollView subclass has this method:

        - (UIView *)hitTest:(CGPoint)point withEvent:(UIEvent *)event {
            UIView *tile = [contentView pointInsideTiles:[self convertPoint:point toView:contentView] withEvent:event];
            if (tile) {
                return tile;
            } else {
                return [super hitTest:point withEvent:event];
            }
        }

    The content subview has this method:

        - (UIView *)pointInsideTiles:(CGPoint)point withEvent:(UIEvent *)event {
            for (TileView *tile in tiles) {
                if ([tile pointInside:[self convertPoint:point toView:tile] withEvent:event])
                    return tile;
            }
            return nil;
        }

    And the tile view has this method:

        - (void)touchesMoved:(NSSet *)touches withEvent:(UIEvent *)event {
            UITouch *touch = [touches anyObject];
            CGPoint location = [touch locationInView:self.superview];
            self.center = location;
        }

    This works, but not fully correctly: the tile sometimes "falls down" during the drag. More precisely, it stops receiving touchesMoved: invocations, and the scroll view starts scrolling instead. I noticed that this depends on the drag speed: the faster I drag, the quicker the tile "falls". Any ideas on how to keep the tile glued to the dragging finger? Thanks in advance!
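
    (The symptom matches UIScrollView cancelling content touches once it decides a fast movement is a scroll. A hedged sketch of the documented knobs for this hand-off:

        // stop the scroll view from stealing moving touches that began on a tile
        scrollView.canCancelContentTouches = NO;

        // or, more selectively, in the UIScrollView subclass:
        - (BOOL)touchesShouldCancelInContentView:(UIView *)view {
            if ([view isKindOfClass:[TileView class]])
                return NO;   // never cancel a tile drag
            return [super touchesShouldCancelInContentView:view];
        }

    canCancelContentTouches and touchesShouldCancelInContentView: are the standard UIScrollView hooks for keeping a drag attached to a subview.)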

  • Subscription website architecture questions + SQL Server & .NET

    - by chopps
    Hey guys,

    I have a few questions about the architecture of a subscription service I am about to embark on, and I am looking for some feedback on how best to set it up. I won't have as many customers as Basecamp, maybe a few hundred, and was wondering what would be a solid architecture for setting up the customer sites. I'm running SQL Server and .NET on a dedicated machine.

    Should I create a new database for each customer, to have control and isolation of data, or keep them all in one database? I am also thinking of creating a sub-domain for each customer so modifications can be made to each site as needed. The customer URLs would look like this:

        https://customer1.foobar.com
        https://customer2.foobar.com

    I am going to have the ability to 'plug in' reports that will be uploaded to the site, so each customer can customize as needed. Off the top of my head, this necessitates having each sub-domain on its own code base for the uploading of these reports.

    So, on the main site the customer would sign up for their new subscription, and I would programmatically create a new directory for the customer from the main code base, then create a sub-domain pointing to the new directory, and finally their database. Does this sound about right? Am I on the right track? How do other such sites accomplish the same thing?

    Thanks for letting me bend your ear for a bit on this.

  • How would you protect a database of links from being scraped?

    - by Yegor
    I have a large database of links, which are all sorted in specific ways and are attached to other information which is valuable (to some people).

    Currently my setup (which seems to work) simply calls a PHP file like link.php?id=123; it logs the request with a timestamp into the DB, and before it spits out the link, it checks how many requests were made from that IP in the last 5 minutes. If it's greater than x, it redirects you to a captcha page.

    That all works fine and dandy, but the site has been getting really popular (as well as getting DDoSed for about 6 weeks), so PHP has been getting floored, and I'm trying to minimize the number of times I have to hit up PHP to do something. I wanted to show links in plain text instead of through link.php?id= and have an onclick function simply add 1 to the view count. I'm still hitting up PHP, but at least if it lags, it does so in the background, and the user can see the link they requested right away.

    Problem is, that makes the site REALLY scrapable. Is there anything I can do to prevent this, but still not rely on PHP to do the check before spitting out the link?

  • storing/retrieving data for graph with long continuous stretches

    - by james
    I have a large 2-dimensional data set which I would like to graph. The graph is displayed in a browser and the data is retrieved via AJAX. Long stretches of this graph will be continuous - e.g., for x=0 through x=1000, y=9; then for x=1001 through x=1100, y=80; etc.

    The approach I'm considering is to send (from the server) and store (in the browser) only the points where the data changes. So for the example above, I would say data[0] = 9, then data[1001] = 80. Then, given x=999 for example, retrieving data[999] would actually look up data[0].

    The problem that arises is finding a dictionary-like data structure which behaves like this. The approach I'm considering is to store the data in a traditional dictionary object, and also maintain a sorted array of the keys of that object. When given x=999, it would look at the midpoint of this array, determine whether the nearest lower key is left or right of that midpoint, then repeat with the correct subsection, etc.

    Does anyone have thoughts on this problem/approach?
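
    (The structure described -- a dictionary of change points plus a sorted key array searched by bisection -- is a standard approach. A minimal sketch in browser JavaScript, with the example data from above:

        var data = { 0: 9, 1001: 80 };   // y stored only where it changes
        var keys = [0, 1001];            // kept sorted

        // greatest change point <= x, by binary search
        function valueAt(x) {
            var lo = 0, hi = keys.length - 1, best = keys[0];
            while (lo <= hi) {
                var mid = (lo + hi) >> 1;
                if (keys[mid] <= x) { best = keys[mid]; lo = mid + 1; }
                else { hi = mid - 1; }
            }
            return data[best];
        }

        valueAt(999);   // => 9, found via data[0]

    Lookup is O(log n) in the number of change points rather than the width of the graph.)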

  • sybase - fails to use index unless string is hard-coded

    - by Garrett
    I'm using Sybase 12.5.3 (ASE); I'm new to Sybase, though I've worked with MSSQL pretty extensively. I'm running into a scenario where a stored procedure is really very slow. I've traced the issue to a single SELECT statement for a relatively large table. Modifying that statement dramatically improves the performance of the procedure (and reverting it drastically slows it down; i.e., the SELECT statement is definitely the culprit).

        -- Sybase optimizes and uses multi-column index... fast!
        SELECT ID, status, dateTime
        FROM myTable
        WHERE status IN ('NEW', 'SENT')
        ORDER BY ID

        -- Sybase does not use the index and does a very slow table scan
        SELECT ID, status, dateTime
        FROM myTable
        WHERE status IN (SELECT status FROM allowableStatusValues)
        ORDER BY ID

    The code above is an adapted/simplified version of the actual code. Note that I've already tried recompiling the procedure, updating statistics, etc. I have no idea why Sybase ASE would choose an index only when strings are hard-coded and choose a table scan when selecting from another table. Someone please give me a clue, and thank you in advance.
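
    (One rewrite that is often suggested when an IN (SELECT ...) defeats the optimizer is an explicit join, which gives ASE a join order to cost rather than a runtime subquery -- a sketch against the same tables:

        SELECT t.ID, t.status, t.dateTime
        FROM myTable t
        INNER JOIN allowableStatusValues s
                ON t.status = s.status
        ORDER BY t.ID

    Duplicate rows in allowableStatusValues would duplicate results, so the status column there should be unique.)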

  • Web Workers - Transferable Objects for JSON

    - by kclem06
    HTML5 Web Workers are very slow when using worker.postMessage on a large JSON object. I'm trying to figure out how to transfer a JSON object to a web worker using the 'Transferable Objects' types in Chrome, in order to increase the speed of this. Here is what I'm referring to, and it appears it should speed this up quite a bit: http://updates.html5rocks.com/2011/12/Transferable-Objects-Lightning-Fast

    I'm having trouble finding a good example of this (and I don't believe I want to use an ArrayBuffer). Any help would be appreciated. I'm imagining something like this:

        worker = new Worker('workers.js');

        var large_json = {};
        for (var i = 0; i < 20000; ++i) {
            large_json[i] = i;
            large_json["test" + i] = "string";
        }

        // How to make this call use Transferable Objects?
        // Takes approx 2 seconds to serialize this for me currently.
        worker.webkitPostMessage(large_json);
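
    (A caveat worth stating: the transfer-list form of postMessage only moves ArrayBuffer (and MessagePort) instances; a plain object like the one above is always structured-cloned. A sketch of the transfer syntax for a buffer, per the article linked above:

        var buf = new ArrayBuffer(32 * 1024 * 1024);
        // the second argument lists what to transfer instead of copy
        worker.webkitPostMessage({ payload: buf }, [buf]);
        // after the call, buf.byteLength === 0 here (ownership has moved to the worker)

    So to benefit, the JSON data would first have to be packed into an ArrayBuffer, e.g. as a serialized string of character codes.)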

  • What is the general feeling about reflection extensions in std::type_info?

    - by Evan Teran
    I've noticed that reflection is one feature that developers coming from other languages find very lacking in C++. For certain applications I can really see why: it is so much easier to write things like an IDE's auto-complete if you have reflection, and serialization APIs would certainly be a world easier if we had it.

    On the other side, one of the main tenets of C++ is "don't pay for what you don't use", which makes complete sense and is something I love about C++. But it occurred to me there could be a compromise. Why don't compilers add extensions to the std::type_info structure? There would be no runtime overhead. The binary could end up being larger, but this could be a simple compiler switch to enable/disable, and to be honest, if you are really concerned about the space savings, you'll likely disable exceptions and RTTI anyway.

    Some people cite issues with templates, but the compiler happily generates std::type_info structures for template types already. I can imagine a g++ switch like -fenable-typeinfo-reflection which could become very popular (and mainstream libs like Boost/Qt/etc. could easily check for it and generate code which uses it when present, in which case the end user would benefit at no more cost than flipping a switch). I don't find this unreasonable, since large portable libraries like these already depend on compiler extensions.

    So why isn't this more common? I imagine that I'm missing something; what are the technical issues with this?

  • Erlang OTP application design

    - by Toby Hede
    I am struggling a little coming to grips with the OTP development model as I convert some code into an OTP app. I am essentially making a web crawler, and I just don't quite know where to put the code that does the actual work.

    I have a supervisor which starts my worker:

        -behaviour(supervisor).

        -define(CHILD(I, Type), {I, {I, start_link, []}, permanent, 5000, Type, [I]}).

        init(_Args) ->
            Children = [
                ?CHILD(crawler, worker)
            ],
            RestartStrategy = {one_for_one, 0, 1},
            {ok, {RestartStrategy, Children}}.

    In this design, the crawler worker is then responsible for doing the actual work:

        -behaviour(gen_server).

        start_link() ->
            gen_server:start_link(?MODULE, [], []).

        init([]) ->
            inets:start(),
            httpc:set_options([{verbose_mode, true}]),
            % gen_server:cast(?MODULE, crawl),
            % ok = do_crawl(),
            {ok, #state{}}.

        do_crawl() ->
            % crawl!
            ok.

        handle_cast(crawl, State) ->
            ok = do_crawl(),
            {noreply, State};

    do_crawl spawns a fairly large number of processes and requests that handle the work of crawling via HTTP.

    The question, ultimately, is: where should the actual crawl happen? As can be seen above, I have been experimenting with different ways of triggering the actual work, but I'm still missing some concept essential for grokking the way things fit together.

    Note: some of the OTP plumbing is left out for brevity - the plumbing is all there and the system all hangs together.
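
    (One common pattern, shown as a minimal sketch: let init/1 return immediately so the supervisor isn't blocked, and kick the work off with a cast to self():

        init([]) ->
            inets:start(),
            gen_server:cast(self(), crawl),   %% returns at once; work starts after init
            {ok, #state{}}.

        handle_cast(crawl, State) ->
            ok = do_crawl(),                  %% the long-running work lives here
            {noreply, State}.

    Note that casting to ?MODULE from init only works if the server was registered under that name -- start_link/0 above doesn't register it -- so self() avoids that trap.)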

  • Fastest PNG decoder for .NET

    - by sboisse
    Our web server needs to process many compositions of large images together before sending the results to web clients. This process is performance-critical because the server can receive several thousand requests per hour.

    Right now our solution loads PNG files (around 1 MB each) from the HD and sends them to the video card, so the composition is done on the GPU. We first tried loading our images using the PNG decoder exposed by the XNA API. We saw that the performance was not too good. To understand whether the problem was loading from the HD or decoding the PNG, we modified that by loading the file into a memory stream, and then sending that memory stream to the .NET PNG decoder. The difference in performance between using XNA and using the System.Windows.Media.Imaging.PngBitmapDecoder class is not significant; we roughly get the same levels of performance.

    Our benchmarks show the following performance results:

        Load images from disk:                         37.76 ms    1%
        Decode PNGs:                                 2816.97 ms   77%
        Load images on video hardware:                196.67 ms    5%
        Composition:                                   87.80 ms    2%
        Get composition result from video hardware:   166.21 ms    5%
        Encode to PNG:                                318.13 ms    9%
        Store to disk:                                  3.96 ms    0%
        Clean up:                                      53.00 ms    1%
        Total:                                       3680.50 ms  100%

    From these results we see that the slowest part is decoding the PNGs. So we are wondering if there is a PNG decoder we could use that would allow us to reduce the PNG decoding time. We also considered keeping the images uncompressed on the hard disk, but then each image would be 10 MB in size instead of 1 MB, and since there are several tens of thousands of these images stored on the hard disk, it is not possible to store them all without compression.

  • PocketPC c++ windows message processing recursion problem

    - by user197350
    Hello, I am having a problem in a large-scale application that seems related to Windows messaging on the Pocket PC. What I have is a Pocket PC application written in C++. It has only one standard message loop:

        while (GetMessage(&msg, NULL, 0, 0))
        {
            TranslateMessage(&msg);
            DispatchMessage(&msg);
        }

    We also have standard dlgProcs. In the switch of the dlgProc, we call a proprietary third-party API. This API uses a socket connection to communicate with another process.

    The problem I am seeing is this: whenever two of the same messages come in quickly (from the user clicking the screen twice, faster than they should), it seems as though recursion is created. Windows begins processing the first message, gets the API into a thread-safe state, and then jumps to processing the next (identical UI) message. Since the second message also makes the API call, the call fails because it is locked. Because of the design of this legacy system, the API will be locked until the recursion comes back out (which is also triggered by the user, so it could be locked for the entire working day).

    I am struggling to figure out exactly why this is happening and what I can do about it. Is this because Windows recognizes that the socket communication will take time and preempts it? Is there a way I can force this API call to complete before preemption? Is there a way I can slow down the message processing or re-queue the message to ensure the first will execute (capturing it and doing a PostMessage back to itself didn't work)? We don't want to lock the UI down while the first call completes.

    Any insight is greatly appreciated! Thanks!!
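
    (If the API is pumping messages internally while it waits on its socket -- which would explain the apparent recursion -- a minimal re-entrancy guard in the dlgProc is the usual first defense; names here are hypothetical:

        static bool g_inApiCall = false;

        case WM_LBUTTONUP:   // or whichever message triggers the API
            if (g_inApiCall)
                break;              // swallow re-entrant clicks while the call runs
            g_inApiCall = true;
            CallProprietaryApi();
            g_inApiCall = false;
            break;

    This doesn't lock the UI; it just turns the second click into a no-op instead of a deadlock.)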

  • Android 2.1 fling gesture captured on textview but still a contextmenu opens

    - by hermo
    The following problem seems unique to 2.1; it happens both on an emulator and on a Nexus. The same example works fine on the other platforms I've tested (1.5, 1.6 and 2.0 emulators).

    I've created a gestureListener as described in this post. The difference is that I've added the listener on a TextView which also has a contextMenu registered, i.e. something like the following:

        onCreate(...) {
            ...
            // Layout contains a large TextView on which I want to add a context menu
            tv = findViewById(R.id.text_view);
            tv.registerForContextMenu(this);

            // create the gestureListener according to the above-mentioned post
            gestureListener = ...

            // set the listener on the text-view
            tv.setOnTouchListener(gestureListener);
            ...
        }

    When testing it, the correct gesture is recognized all right, but every other time it also causes the context menu to be opened. As the same example works on non-2.1 platforms, I've got a feeling it is not my code that is the problem...

    Thankful for any suggestions.

  • Writing fortran robust and "modern" code

    - by Blklight
    In some scientific environments, you often cannot go without FORTRAN, as most of the developers only know that idiom, and there is a lot of legacy code and related experience. And frankly, there are not many other cross-platform options for high-performance programming (C++ would do the task, but the syntax, zero-based arrays, and pointers are too much for most engineers ;-) ).

    I'm a C++ guy, but I'm stuck with some F90 projects. So, let's assume a new project must use FORTRAN (F90), but I want to build the most modern software architecture out of it, while staying compatible with most "recent" compilers (Intel ifort, but also Sun/HP/IBM's own compilers). So I'm thinking of imposing:

    - global variables forbidden, no gotos, no jump labels, implicit none, etc.
    - "object-oriented programming" (modules with datatypes + related subroutines; see the sketch after this list)
    - modular/reusable, well-documented functions and reusable libraries
    - assertions/preconditions/invariants (implemented using preprocessor statements)
    - unit tests for all (most) subroutines and "objects"
    - an intense "debug mode" (#ifdef DEBUG) with more checks and all possible Intel compiler checks enabled (array bounds, subroutine interfaces, etc.)
    - a uniform and enforced legible coding style, using code-processing tools
    - C stubs/wrappers for libpthread, libDL (and eventually GPU kernels, etc.)
    - C/C++ implementation of utility functions (strings, file operations, sockets, memory alloc/dealloc reference counting for debug mode, etc.)

    (This may all seem like "evident" modern programming practice, but in a legacy FORTRAN world, most of these are big changes to the typical programmer's workflow.)

    The goal with all that is to have trustworthy, maintainable and modular code, whereas in typical FORTRAN, modularity is often not a primary goal, and code is trustworthy only if the original developer was very clever and the code has not been changed since then! (I'm only half joking here.)

    I searched around for references on object-oriented FORTRAN and programming-by-contract (assertions/preconditions/etc.), and found only ugly and outdated documents, syntaxes and papers by people with no large-scale project involvement, and dead projects.

    Any good URLs, advice, or reference papers/books on the subject?
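
    (A minimal F90 sketch of the "module as object" convention referred to in the list above -- names are hypothetical:

        module counter_mod
          implicit none
          private

          type, public :: counter_t
             integer :: count
          end type counter_t

          public :: new_counter, increment

        contains

          subroutine new_counter(this)
            type(counter_t), intent(out) :: this
            this%count = 0
          end subroutine new_counter

          subroutine increment(this)
            type(counter_t), intent(inout) :: this
            this%count = this%count + 1
          end subroutine increment

        end module counter_mod

    The type and its operations travel together, and 'private' makes the module the encapsulation boundary, which is as close to a class as plain F90 gets.)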

  • Why is VB6 still so widely used?

    - by Uri
    Note that this question is not meant to start an argument; I am genuinely curious.

    Back in the 90s I used to work for a large CPU maker, and we were building some debuggers. All our core logic was in C++, but the GUI was made in VB6. We couldn't figure out MFC, and it was too much of a hassle. We glued VB6 and C++ together using COM, which we created via ATL.

    Fast forward to 2009: having been mostly in the Java world, I am looking at job boards and seeing more VB6 jobs than I expected. I genuinely thought that VB6 was extinct, especially since VB.NET supposedly solved a lot of the problems with VB6, which at the time felt more like a scripting language than a true OOP language.

    So what happened? Why is new code still written in it? Isn't there a better way to glue C++ and VB.NET? Note that I haven't used VB.NET; I just understand that it is a much "stricter" language than VB6 was, even with Option Explicit or whatever it was.

  • ASP.NET NamingContainer naming convention

    - by EOLeary
    The Background

    Hello! I'm working on a project in which the client has required a lot of things to happen on a single page, and this has resulted in a rather large blob of HTML being rendered out to the client browser. The main issue is with input tags (where the runat="server" attribute is set); these tend to cause a drastic increase in markup size due to validation, UpdatePanel triggers, viewstate, and the control markup itself.

    I've done what I can to reduce the number of triggers I'm using, I'm compressing the viewstate (to something like 8% of the original viewstate size), I've gotten rid of a lot of ASP.NET validators and rolled my own, and I've been using ClientIDMode to reduce the length of the ID attributes of many ASP.NET elements. All of these combined significantly reduce the amount of HTML being sent to the client (for example, going from 2 megabytes for a request down to 500-600 KB - these are HUGE pages, mind you).

    The Issue

    One area I've been having trouble reducing is simply the auto-generated 'name' attribute of input elements:

        <input name="ctl00$ctl00$ctl00$_main$_main$_bodyMatterPhase$_phaseTree$ctl00$_taskTree$ctl00$_taskDetails$_detailList$ctrl0$_row$_descriptionText" type="text" value="Investigation Week 1" maxlength="100" id="_taskTree_0__taskDetails_0__detailList_0__row_0__descriptionText_0" style="width:170px;">

    As you can see above, the name attribute is 139 out of 297 characters; that's almost 50% of the tag markup taken up by that HUGE name. Does anyone have any ideas on how to stick a hook in somewhere in ASP.NET where I can somehow translate these or generate them differently? Say, instead of ctl00$ctl00$ctl00$_main$_main$_bodyMatterPhase$_phaseTree$ctl00$_taskTree$ctl00$_taskDetails$_detailList$ctrl0$_row$_descriptionText, it could be a GUID like 0x0AEED4B6445A11E08F873606E0D72085, which is 105 characters shorter.

    Any help would be greatly appreciated!
