Search Results

Search found 14861 results on 595 pages for 'high speed computing'.

Page 490/595 | < Previous Page | 486 487 488 489 490 491 492 493 494 495 496 497  | Next Page >

  • Looking for a good C++/.net

    - by Michael Minerva
    I have recently started to feel that I need to greatly improve my C++ skills, especially in the realm of .NET. I graduated from a good four year university with a degree in computer science about 9 months ago and I have since been doing full time contract work for a small software company in my local area. Most of my work has been done using Java/Lisp/Cocoa/XML, and before that most of my programming in my senior year was in Java/C#. I did a decent amount of C++ in my sophomore year and in my free time before that, but I feel that my general knowledge of C++/.NET is very lacking for the opportunities that are now coming my way. Can anyone recommend a good book that could help me get up to speed? I feel I do not need a very basic introduction to C++, but something that covers the fundamentals of .NET would be good for me. So basically what I need is a book or books that would be good for a .NET novice and a C++ developer who is just beyond novice. Also, a book that would help me in an interview by giving me a conversational understanding of C++ would be great. Thanks a lot!

    Read the article

  • Efficient update of SQLite table with many records

    - by blackrim
    I am trying to use SQLite (sqlite3) for a project to store hundreds of thousands of records (I would like SQLite so users of the program don't have to run a [my]SQL server). I sometimes have to update hundreds of thousands of records to enter left/right values (they are hierarchical), but have found the standard update table set left_value = 4, right_value = 5 where id = 12340; to be very slow. I have tried wrapping every thousand or so in begin; ... update ... update table set left_value = 4, right_value = 5 where id = 12340; update ... ... commit; but again, very slow. Odd, because when I populate it with a few hundred thousand records (with inserts), it finishes in seconds. I am currently trying to test the speed in Python (the slowness shows up both at the command line and in Python) before I move it to the C++ implementation, but right now this is way too slow and I need to find a new solution unless I am doing something wrong. Thoughts? (I would take an open source alternative to SQLite that is portable as well.)

    Read the article

  • Computation overhead in C# - Using getters/setters vs. modifying arrays directly and casting speeds

    - by Jeffrey Kern
    I was going to write a long-winded post, but I'll boil it down here: I'm trying to emulate the graphical old-school style of the NES via XNA. However, my FPS is SLOW, trying to modify 65K pixels per frame. If I just loop through all 65K pixels and set them to some arbitrary color, I get 64 FPS. With the code I made to look up which colors should be placed where, I get 1 FPS. I think it is because of my object-oriented code. Right now, I have things divided into about six classes, with getters/setters. I'm guessing that I'm making at least 360K getter calls per frame, which I think is a lot of overhead. Each class contains 1D and/or 2D arrays holding custom enumerations, ints, bytes, Colors, or Vector2Ds. What if I combined all of the classes into just one, and accessed the contents of each array directly? The code would look a mess, and ditch the concepts of object-oriented coding, but the speed might be much faster. I'm also not concerned about access violations, as any attempts to get/set the data in the arrays will be done in blocks. E.g., all writing to arrays will take place before any data is accessed from them. As for casting, I stated that I'm using custom enumerations, ints, bytes, Colors, and Vector2Ds. Which data types are fastest to use and access in the .NET Framework, XNA, Xbox, and C#? I think that constant casting might be a cause of slowdown here. Also, instead of using math to figure out which indexes data should be placed in, I've used precomputed lookup tables so I don't have to do constant multiplication, addition, subtraction, and division per frame. :)
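
    Below is a minimal sketch of the flattening idea described above: one plain Color[] frame buffer indexed as y * width + x, written with direct array reads and writes instead of per-pixel getters/setters, plus a precomputed row-offset table. Every name here (FrameComposer, tileIndex, palette, and so on) is made up for illustration; it is not the poster's actual code.

        using Microsoft.Xna.Framework;
        using Microsoft.Xna.Framework.Graphics;

        // Sketch: flatten the six classes into plain arrays and write pixels directly.
        static class FrameComposer
        {
            const int W = 256, H = 240;                      // NES-style resolution

            static readonly Color[] frame = new Color[W * H];
            static readonly byte[,] tileIndex = new byte[W / 8, H / 8];  // example tile data
            static readonly Color[] palette = new Color[256];            // one entry per possible tile value
            static readonly int[] rowOffset = BuildRowOffsets();

            static int[] BuildRowOffsets()
            {
                var offsets = new int[H];
                for (int y = 0; y < H; y++) offsets[y] = y * W;   // precomputed, no per-pixel multiply
                return offsets;
            }

            public static void Compose(Texture2D target)
            {
                for (int y = 0; y < H; y++)
                {
                    int row = rowOffset[y];
                    for (int x = 0; x < W; x++)
                        frame[row + x] = palette[tileIndex[x >> 3, y >> 3]];  // direct array access, no getters or casts
                }
                target.SetData(frame);                        // single upload per frame
            }
        }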

    Read the article

  • Using Lucene to index private data, should I have a separate index for each user or a single index

    - by Nathan Bayles
    I am developing an Azure based website and I want to provide search capabilities using Lucene. (Structured JSON objects would be indexed and stored in Lucene, and other content such as Word documents would be indexed in Lucene but stored in blob storage.) I want the search to be secure, such that one user would never see a document belonging to another user. I want to allow ad-hoc searches as typed by the user. Lastly, I want to query programmatically to return predefined sets of data, such as "all notes for user X". I think I understand how to add properties to each document to achieve these 3 objectives. (I am listing them here so if anyone is kind enough to answer, they will have a better idea of what I am trying to do.) My questions revolve around performance and security. Can I improve document security by having a separate index for each user, or is including the user's ID as a parameter in each search sufficient? Can I improve indexing speed and total throughput of the system by having a separate index for each user? My thinking is that having separate indexes would allow me to scale the system by having multiple index writers (perhaps even on different server instances) working at the same time, each on their own index. Any insight would be greatly appreciated. Regards, Nate
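
    For the single-index option, a common pattern is to stamp every document with an owner field and AND every query with a mandatory term on that field, so one user's search can never match another user's documents. The sketch below assumes Lucene.Net 3.x (older releases spell the clause occurrence as BooleanClause.Occur); the field and variable names are illustrative, not taken from the question.

        using Lucene.Net.Documents;
        using Lucene.Net.Index;
        using Lucene.Net.Search;

        static class SecureSearch
        {
            // Index side: every document carries the owning user's id as an untokenized field.
            public static Document BuildDocument(string ownerId, string body)
            {
                var doc = new Document();
                doc.Add(new Field("ownerId", ownerId, Field.Store.YES, Field.Index.NOT_ANALYZED));
                doc.Add(new Field("body", body, Field.Store.NO, Field.Index.ANALYZED));
                return doc;
            }

            // Query side: whatever the user typed is combined with a mandatory ownerId term.
            public static Query Restrict(Query userQuery, string ownerId)
            {
                var combined = new BooleanQuery();
                combined.Add(userQuery, Occur.MUST);
                combined.Add(new TermQuery(new Term("ownerId", ownerId)), Occur.MUST);
                return combined;
            }
        }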

    Read the article

  • How to identify multiple identical pairs in two vectors

    - by Sacha Epskamp
    In my graph package (as in graph theory, nodes connected by edges) I have a vector from indicating for each edge the node of origin, a vector to indicating for each edge the node of destination, and a vector curve indicating the curve of each edge. By default I want edges to have a curve of 0 if there is only one edge between two nodes and a curve of 0.2 if there are two edges between two nodes. The code that I use now is a for loop, and it is kinda slow:

        curve <- rep(0,5)
        from <- c(1,2,3,3,2)
        to <- c(2,3,4,2,1)
        for (i in 1:length(from)) {
            if (any(from==to[i] & to==from[i])) {
                curve[i]=0.2
            }
        }

    So basically, for each edge (one index in from and one in to) I check whether there is any other pair in from and to that uses the same nodes (numbers). What I am looking for are two things: a way to identify whether there is any pair of nodes that has two edges between them (so I can omit the loop if not), and a way to speed up this loop.

    EDIT: To make this a bit clearer, another example:

        from <- c(4L, 6L, 7L, 8L, 1L, 9L, 5L, 1L, 2L, 1L, 10L, 2L, 6L, 7L, 10L, 4L, 9L)
        to <- c(1L, 1L, 1L, 2L, 3L, 3L, 4L, 5L, 6L, 7L, 7L, 8L, 8L, 8L, 8L, 10L, 10L)
        cbind(from,to)

              from to
         [1,]    4  1
         [2,]    6  1
         [3,]    7  1
         [4,]    8  2
         [5,]    1  3
         [6,]    9  3
         [7,]    5  4
         [8,]    1  5
         [9,]    2  6
        [10,]    1  7
        [11,]   10  7
        [12,]    2  8
        [13,]    6  8
        [14,]    7  8
        [15,]   10  8
        [16,]    4 10
        [17,]    9 10

    In these two vectors, pair 3 is identical to pair 10 (both 1 and 7 in different orders) and pairs 4 and 12 are identical (both 2 and 8). So I would want curve to become:

         [1,] 0.0
         [2,] 0.0
         [3,] 0.2
         [4,] 0.2
         [5,] 0.0
         [6,] 0.0
         [7,] 0.0
         [8,] 0.0
         [9,] 0.0
        [10,] 0.2
        [11,] 0.0
        [12,] 0.2
        [13,] 0.0
        [14,] 0.0
        [15,] 0.0
        [16,] 0.0
        [17,] 0.0

    (as a vector; I transposed twice to get row numbers).

    Read the article

  • Quickly accessing files in a 'project'

    - by bbbscarter
    Hi all. I'm looking for a way to quickly open files in my project's source tree. What I've been doing so far is adding files to the file-name-cache like so:

        (file-cache-add-directory-recursively (concat project-root "some/sub/folder") ".*\\.\\(py\\)$")

    after which I can use anything-for-files to access any file in the source tree with about 4 keystrokes. Unfortunately, this solution started falling over today. I added another folder to the cache and Emacs started running out of memory. What's weird is that this folder contains less than 25% of the files I'm adding, and yet Emacs memory use goes up from 20MB to 400MB on adding just this folder. The total number of files is around 2000, so this memory use seems very high. Presumably I'm abusing the file cache. Anyway, what do other people do for this? I like this solution for its simplicity and speed; I've looked at some of the many, many project management packages for Emacs and none of them really grabbed me... Thanks in advance! Simon

    Read the article

  • How to draw a part of a window into a memory device context?

    - by Nell
    I'm using simple statements to keep it, er, simple: The screen goes from 0, 0 to 1000, 1000 (screen coordinates). A window goes from 100, 100 to 900, 900 (screen coordinates). I have a memory device context that goes from 0, 0 to 200, 200 (logical coordinates). I need to send a WM_PRINT message to the window. I can pass the device context to the window via WM_PRINT, but I cannot pass which part of its window it should draw into the device context. Is there some way to alter the device context that will result in the window drawing a specific part of itself into the device context (say, its bottom right portion from 700, 700 to 900, 900)? (This is all under plain old GDI and in C or C++. Any solution must be too.) Please note: This problem is part of a larger solution in which the device context size is fixed and speed is crucial, so I cannot draw the window in full into a separate device context and blit the part I want from the resultant full bitmap into my device context.

    Read the article

  • gcc compilations (sometimes) result in cpu underload

    - by confusedCoder
    I have a large C++ program which starts out by reading thousands of small text files into memory and storing data in STL containers. This takes about a minute. Periodically, a compilation will exhibit behavior where that initial part of the program will run at about 22-23% CPU load. Once that step is over, it goes back to ~100% CPU. It is more likely to happen with the -O2 flag turned on, but not consistently. It happens even less often with the -p flag, which makes it almost impossible to profile. I did capture it once, but the gprof output wasn't helpful - everything runs with the same relative speed, just at low CPU usage. I am quite certain that this has nothing to do with multiple cores. I do have a quad-core CPU, and most of the code is multi-threaded, but I tested this issue running a single thread. Also, when I run the problematic step in multiple threads, each thread only runs at ~20% CPU. I apologize ahead of time for the vagueness of the question, but I have run out of ideas as to how to troubleshoot it further, so any hints might be helpful. UPDATE: Just to make sure it's clear, the problematic part of the code does sometimes (~30-40% of the compilations) run at 100% CPU, so it's hard to buy the (otherwise reasonable) argument that I/O is the bottleneck.

    Read the article

  • How does a java compiler resolve a non-imported name

    - by gexicide
    Consider I use a type X in my Java compilation unit in package foo.bar, and X is not defined in the compilation unit itself nor is it directly imported. How does a Java compiler resolve X efficiently? There are a few possibilities where X could reside:

    1. X might be imported via a star import a.b.*
    2. X might reside in the same package as the compilation unit
    3. X might be a language type, i.e. reside in java.lang

    The problem I see is especially possibility (2). Since X might be a package-private type, it is not even required that X resides in a compilation unit that is named X.java. Thus, the compiler must look into all entries of the class path and search for any classes in package foo.bar; it then must read every class that is in package foo.bar to check whether X is included. That sounds very expensive. Especially when I compile only a single file, the compiler has to read dozens of class files only to find a type X. If I use a lot of star imports, this procedure has to be repeated for a lot of types (although class files won't be read twice, of course). So is it advisable to also import types from the same package to speed up the compilation process? Or is there a faster method for resolving an unimported type X which I was not able to find?

    Read the article

  • Why doesn't this CompiledQuery give a performance improvement?

    - by Grammarian
    I am trying to speed up an often used query. Using a CompiledQuery seemed to be the answer. But when I tried the compiled version, there was no difference in performance between the compiled and non-compiled versions. Can someone please tell me why using Queries.FindTradeByTradeTagCompiled is not faster than using Queries.FindTradeByTradeTag?

        static class Queries
        {
            // Pre-compiled query, as per http://msdn.microsoft.com/en-us/library/bb896297
            private static readonly Func<MyEntities, int, IQueryable<Trade>> mCompiledFindTradeQuery =
                CompiledQuery.Compile<MyEntities, int, IQueryable<Trade>>(
                    (entities, tag) => from trade in entities.TradeSet
                                       where trade.trade_tag == tag
                                       select trade);

            public static Trade FindTradeByTradeTagCompiled(MyEntities entities, int tag)
            {
                IQueryable<Trade> tradeQuery = mCompiledFindTradeQuery(entities, tag);
                return tradeQuery.FirstOrDefault();
            }

            public static Trade FindTradeByTradeTag(MyEntities entities, int tag)
            {
                IQueryable<Trade> tradeQuery = from trade in entities.TradeSet
                                               where trade.trade_tag == tag
                                               select trade;
                return tradeQuery.FirstOrDefault();
            }
        }

    Read the article

  • R optimization: How can I avoid a for loop in this situation?

    - by chrisamiller
    I'm trying to do a simple genomic track intersection in R, and running into major performance problems, probably related to my use of for loops. In this situation, I have pre-defined windows at intervals of 100bp and I'm trying to calculate how much of each window is covered by the annotations in mylist. Graphically, it looks something like this:

                  0     100   200   300   400   500   600
        windows:  |-----|-----|-----|-----|-----|-----|
        mylist:   |-|    |-----------|

    So I wrote some code to do just that, but it's fairly slow and has become a bottleneck in my code:

        ## window for each 100-bp segment
        windows <- numeric(6)

        ## second track
        mylist = vector("list")
        mylist[[1]] = c(1,20)
        mylist[[2]] = c(120,320)

        ## do the intersection
        for(i in 1:length(mylist)){
            st <- floor(mylist[[i]][1]/100)+1
            sp <- floor(mylist[[i]][2]/100)+1
            for(j in st:sp){
                b <- max((j-1)*100, mylist[[i]][1])
                e <- min(j*100, mylist[[i]][2])
                windows[j] <- windows[j] + e - b + 1
            }
        }

        print(windows)
        [1]  20  81 101  21   0   0

    Naturally, this is being used on data sets that are much larger than the example I provide here. Through some profiling, I can see that the bottleneck is in the for loops, but my clumsy attempt to vectorize it using *apply functions resulted in code that runs an order of magnitude more slowly. I suppose I could write something in C, but I'd like to avoid that if possible. Can anyone suggest another approach that will speed this calculation up?

    Read the article

  • Is it possible to load an entire SQL Server CE database into RAM?

    - by DanM
    I'm using LinqToSql to query a small SQL Server CE database. I've noticed that any operations involving sub-properties are disappointingly slow. For example, if I have a Customer table that is referenced by an Order table via a foreign key, LinqToSql will automatically create an EntitySet<Order> property. This is a nice convenience, allowing me to do things like Customer.Order.Where(o => o.ProductName == "Stopwatch"), but for some reason, SQL Server CE hangs up pretty badly when I try to do stuff like this. One of my queries, which isn't really that complicated, takes 3-4 seconds to complete. I can get the speed up to acceptable, even fast, if I just grab the two tables individually and convert them to List<Customer> and List<Order>, then join them manually with my own query, but this is throwing out a lot of the appeal of LinqToSql. So, I'm wondering if I can somehow get the whole database into RAM and just query that way, then occasionally save it. Is this possible? How? If not, is there anything else I can do to boost the performance? Note: My database in its initial state is about 250K and I don't expect it to grow to more than 1-2 MB. So, loading the data into RAM certainly wouldn't be a problem from a memory point of view.
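
    A minimal sketch of the in-memory approach described above: materialize both tables once, index the orders with ToLookup, and run the join with LINQ to Objects. The context and property names (MyDataContext, Customers, Orders, CustomerId, ProductName) are assumptions, not the poster's actual schema.

        using System.Collections.Generic;
        using System.Linq;

        static class InMemoryQueries
        {
            public static List<Customer> FindStopwatchBuyers(MyDataContext db)
            {
                // Materialize both tables once (two round trips to SQL Server CE).
                List<Customer> customers = db.Customers.ToList();
                List<Order> orders = db.Orders.ToList();

                // Index orders by customer so the join does not rescan the order list per customer.
                ILookup<int, Order> ordersByCustomer = orders.ToLookup(o => o.CustomerId);

                return customers
                    .Where(c => ordersByCustomer[c.CustomerId].Any(o => o.ProductName == "Stopwatch"))
                    .ToList();
            }
        }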

    Read the article

  • What is the most efficient way of associating information with a Type in .Net?

    - by Miguel Angelo
    I want to associate custom data with a Type, and retrieve that data at run time, blazingly fast. This is just my imagination of my perfect world:

        var myInfo = typeof(MyClass).GetMyInformation();

    This would be very fast... of course this does not exist! If it did I would not be asking. hehe ;) This is the way using custom attributes:

        var myInfo = typeof(MyClass).GetCustomAttribute("MyInformation");

    This is slow because it requires a lookup of the string "MyInformation". This is a way using a Dictionary<Type, MyInformation>:

        var myInfo = myInformationDictionary[typeof(MyClass)];

    This is also slow because it is still a lookup of typeof(MyClass). I know that a dictionary is very fast, but this is not enough... it is not as fast as calling a method. It is not even the same order of speed. I am not saying I want it to be as fast as a method call. I want to associate information with a type and access it as fast as possible. I am asking whether there is a better way, or even a best way of doing it. Any ideas?? Thanks!
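
    One well-known alternative, offered here as a suggestion rather than anything from the question: when the type is known at compile time (as it is in typeof(MyClass)), per-type data can be cached in a static field of a generic class, so a lookup is just a static field read. The names below are illustrative.

        // Sketch: the CLR keeps one copy of the static field per closed generic type,
        // so TypeInfoCache<MyClass>.Info costs roughly as much as any static field access.
        // Note this only helps when the type is a compile-time generic argument;
        // resolving from a runtime Type object would still need a dictionary.
        public sealed class MyInformation
        {
            public string Description { get; set; }
        }

        public static class TypeInfoCache<T>
        {
            public static MyInformation Info;
        }

        // Registration, once at startup:
        //     TypeInfoCache<MyClass>.Info = new MyInformation { Description = "..." };
        // Lookup on a hot path:
        //     MyInformation info = TypeInfoCache<MyClass>.Info;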

    Read the article

  • Image creation performance / image caching

    - by Kilnr
    Hello, I'm writing an application that has a scrollable image (used to display a map). The map background consists of several tiles (premade from a big JPG file), that I draw on a Graphics object. I also use a cache (Hashtable), to prevent from having to create every image when I need it. I don't keep everything in memory, because that would be too much. The problem is that when I'm scrolling through the map, and I need an image that wasn't cached, it takes about 60-80 ms to create it. Depending on screen resolution, tile size and scroll direction, this can occur multiple times in one scroll operation (for different tiles). In my case, it often happens that this needs to be done 4 times, which introduces a delay of more than 300 ms, which is extremely noticeable. The easiest thing for me would be that there's some way to speed up the creation of Images, but I guess that's just wishful thinking... Besides that, I suppose the most obvious thing to do is to load the tiles predictively (e.g. when scrolling to the right, precache the tiles to the right), but then I'm faced with the rather difficult task of thinking up a halfway decent algorithm for this. My actual question then is: how can I best do this predictive loading? Maybe I could offload the creation of images to a separate thread? Other things to consider? Thanks in advance.

    Read the article

  • Deleting list items via ProcessBatchData()

    - by q-tuyen
    You can build a batch string to delete all of the items from a SharePoint list like this:

        // create new StringBuilder
        StringBuilder batchString = new StringBuilder();

        // add the main text to the stringbuilder
        batchString.Append("<?xml version=\"1.0\" encoding=\"UTF-8\"?><Batch>");

        // add each item to the batch string and give it a command Delete
        foreach (SPListItem item in itemCollection)
        {
            // create a new method section
            batchString.Append("<Method>");
            // insert the listid to know where to delete from
            batchString.Append("<SetList Scope=\"Request\">" + Convert.ToString(item.ParentList.ID) + "</SetList>");
            // the item to delete
            batchString.Append("<SetVar Name=\"ID\">" + Convert.ToString(item.ID) + "</SetVar>");
            // set the action you would like to perform
            batchString.Append("<SetVar Name=\"Cmd\">Delete</SetVar>");
            // close the method section
            batchString.Append("</Method>");
        }

        // close the batch section
        batchString.Append("</Batch>");

        // perform the batch
        SPContext.Current.Web.ProcessBatchData(batchString.ToString());

    The only disadvantage that I can think of right now is that all the items you delete will be put in the recycle bin. How can I prevent that? I also found a solution like the one below:

        // web is the SPWeb object your lists belong to

        // Before starting your deletion
        web.Site.WebApplication.RecycleBinEnabled = false;

        // When you are finished, re-enable it
        web.Site.WebApplication.RecycleBinEnabled = true;

    Ref: http://www.entwicklungsgedanken.de/2008/04/02/how-to-speed-up-the-deletion-of-large-amounts-of-list-items-within-sharepoint/

    But the disadvantage of that solution is that not only will future deletions not be sent to the Recycle Bin, it will also delete all the items already in the Recycle Bin, which users do not want. Any idea how to avoid deleting those existing items? Many thanks in advance, TQT

    Read the article

  • How to Check Authenticity of an AJAX Request

    - by Alex Reisner
    I am designing a web site in which users solve puzzles as quickly as they can. JavaScript is used to time each puzzle, and the number of milliseconds is sent to the server via AJAX when the puzzle is completed. How can I ensure that the time received by the server was not forged by the user? I don't think a session-based authenticity token (the kind used for forms in Rails) is sufficient because I need to authenticate the source of a value, not just the legitimacy of the request. Is there a way to cryptographically sign the request? I can't think of anything that couldn't be duplicated by a hacker. Is any JavaScript, by its exposed, client-side nature, subject to tampering? Am I going to have to use something that gets compiled, like Flash? (Yikes.) Or is there some way to hide a secret key? Or something else I haven't thought of? Update: To clarify, I don't want to penalize people with slow network connections (and network speed should be considered inconsistent), so the timing needs to be 100% client-side (the timer starts only when we know the user can see the puzzle). Also, there is money involved so no amount of "trusting the user" is acceptable.

    Read the article

  • PHP OO vs Procedure with AJAX

    - by vener
    I currently have an AJAX-heavy (almost everything) intranet webapp for a business. It is highly modularized (components and modules a la Joomla), with plenty of folders and files. There are ~80-100 different viewing pages (each very unique in its own sense) at last count, and the number will likely increase in the near future. I based the design around commands and screens: the client requests a command and sends the required data, and receives the data that is displayed via JavaScript on the screen. That said, there are generally two types of files: a display file with HTML, JavaScript, and a little PHP for templating, and a PHP backend file with a single switch statement with actions such as save, update and delete, and maybe other functions. There is very little code reuse. Recently, I have been adding a server-side undo function that requires me to reuse some code. So, I took the chance to try out OOP, but I notice that some functions are so simple that creating a class, retrieving all the data, then updating all the related rows on the database seems like overkill for a simple action, as speed is quite critical. Also I noticed there is only one class in an entire file. So, what if the entire PHP file is a class? So, between creating a class and methods, and using global variables and functions, which is faster?

    Read the article

  • Quickest way to compare a bunch of arrays or lists of values

    - by zapping
    Can you please let me know the quickest and most efficient way to compare a large set of values? It's like there is a list of parent codes (strings), and each code has a series of child values (strings). The child lists have to be compared with each other to find duplicates and count how many times they repeat.

        code1(code1_value1, code1_value2, code3_value3, ..., code1_valueN);
        code2(code2_value1, code1_value2, code2_value3, ..., code2_valueN);
        code3(code2_value1, code3_value2, code3_value3, ..., code3_valueN);
        .
        .
        .
        codeN(codeN_value1, codeN_value2, codeN_value3, ..., codeN_valueN);

    The lists are huge; say there are 100 parent codes and each has about 250 values in it. There will not be duplicates within a code list. I am doing it in Java, and the solution I could figure out is: store the values of the first code list as codeMap.put(codeValue, duplicateCount), with the count initialized to 0, then compare the rest of the values with this. If a value is in the map, increment the count; otherwise add it to the map. The downfall of this is that to get the duplicates, another iteration needs to be performed on a very large list. An alternative is to maintain another hashmap for duplicates, like duplicateCodeMap.put(codeValue, duplicateCount), and change the initial hashmap to codeMap.put(codeValue, codeValue). Speed is the requirement. Hope one of you can help me with it.

    Read the article

  • simple image animation with the keyboard

    - by conorhiggins
    Hi, the app I am currently working on requires a simple animation to be performed, which I am able to do, but timing is an issue. The app uses the slightly opaque keyboard and I have set up a black background to make it more visually pleasing. The app is a utility app, and as the user clicks on a settings button the app will flip, which looks great. However, there is a problem at the moment: while the app is flipping, the keyboard also closes, which itself looks great too, but the background remains. I was wondering if there is a way of animating the background image to follow the keyboard's animation pattern and speed? If I set up a discrete animation time, the chances are there will be some sort of overlap and either the image will show too long or the keyboard will show too long. Any suggestions? Another option is to make the keyboard show constantly, if anyone knows how to do that? I would rather animate it though, as it is quite a simple app and I would like to make it as aesthetically pleasing as possible. /Conor

    Read the article

  • Is it theoretically possible to emulate a human brain on a computer?

    - by JoelK
    Our brain consists of billions of neurons which basically work with all the incoming data from our senses, handle our consciousness, emotions and creativity, as well as our hormone system, etc. I'm completely new to this topic, but doesn't each neuron have a fixed function? E.g.: if a signal of strength x enters and the last signal was x ms ago, redirect it. From what I've learned in biology about our nervous system, which includes our brain because both consist of simple neurons, it seems to me as if our brain is one big, complicated computer. Maybe so complicated that things such as intelligence and cognition become possible? As the most complicated things about a neuron are pretty much the chemical aspects of generating an electric signal, keeping itself alive, and eventually segmenting itself, it should be pretty easy to emulate one on a computer, right? You won't have to worry about keeping your virtual neuron alive, will you? If you can emulate a single neuron on a computer, which shouldn't be too hard, could you theoretically emulate more than 1000 billion of them, recreating intelligence, cognition and maybe even creativity? In my question I'm leaving out the following aspects: the speed of our current (super)computers, and actually writing a program for emulating neurons. I don't know much about this topic, please tell me if I got anything wrong :) (My secret goal: make a copy of my brain, store it on some 10 million TB HDD and have someone start it up in the future.)
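
    To make "emulating a single neuron" a bit more concrete, here is a toy sketch of a leaky integrate-and-fire neuron, one of the simplest models used in computational neuroscience. It is only an illustration of the idea; the constants are arbitrary and the model ignores almost all of the chemistry mentioned above.

        // Toy leaky integrate-and-fire neuron: the membrane potential decays toward a
        // resting value, input current pushes it up, and crossing a threshold emits a
        // spike and resets it. Constants are illustrative, not biologically calibrated.
        public sealed class LifNeuron
        {
            const double RestingPotential = -65.0; // mV
            const double Threshold        = -50.0; // mV
            const double ResetPotential   = -70.0; // mV
            const double TimeConstant     = 10.0;  // ms
            const double Resistance       = 1.0;   // arbitrary units

            double membranePotential = RestingPotential;

            // Advance the simulation by dt milliseconds with the given input current.
            // Returns true if the neuron fired during this step.
            public bool Step(double inputCurrent, double dt)
            {
                double dV = (-(membranePotential - RestingPotential) + Resistance * inputCurrent)
                            * (dt / TimeConstant);
                membranePotential += dV;

                if (membranePotential < Threshold)
                    return false;

                membranePotential = ResetPotential; // spike, then reset
                return true;
            }
        }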

    Read the article

  • Fast find object by string property

    - by Andrew Kalashnikov
    Hello, colleagues. I've got a task to quickly find an object by its string property. The object:

        class DicDomain
        {
            public virtual string Id { get; set; }
            public virtual string Name { get; set; }
        }

    For storing my objects I use a List<T> (named dictionary in the code below), where T is DicDomain, for now. I've got 5-10 such lists, which each contain about 500-20000 items. The task is to find objects by their Name. I use the following code now:

        List<T> entities = dictionary.FindAll(s => s.Name.Equals(word, StringComparison.OrdinalIgnoreCase));

    I've got some questions:

    1. Is my search speed optimal? I think not.
    2. Data structure: is a List good for this task? What about a hashtable, sorted list, ...?
    3. The Find method: maybe I should use string interning?

    I haven't much experience with these tasks. Can you give me good advice to increase performance? Thanks
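
    A minimal sketch of the dictionary idea, assuming Name is the only lookup key: build a case-insensitive index once per list and reuse it, so each lookup is an average O(1) hash probe instead of an O(n) scan; rebuild or update the index whenever the underlying list changes. The class name is made up for illustration.

        using System;
        using System.Collections.Generic;
        using System.Linq;

        static class DicDomainIndex
        {
            // Group by Name so several DicDomain entries sharing a name are all returned.
            public static Dictionary<string, List<DicDomain>> Build(IEnumerable<DicDomain> items)
            {
                return items
                    .GroupBy(d => d.Name, StringComparer.OrdinalIgnoreCase)
                    .ToDictionary(g => g.Key, g => g.ToList(), StringComparer.OrdinalIgnoreCase);
            }

            public static List<DicDomain> Find(Dictionary<string, List<DicDomain>> index, string word)
            {
                List<DicDomain> hits;
                return index.TryGetValue(word, out hits) ? hits : new List<DicDomain>();
            }
        }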

    Read the article

  • What is the fastest way to insert 100 000 records from one database to another?

    - by Pentium10
    I have a mobile application. My client has a large data set, ~100,000 records. It's updated frequently. When we sync, we need to copy from one database to another. I have attached the second database to the main one, and run insert into table select * from sync.table. This is extremely slow; it takes about 10 minutes, I think. I noticed that the journal file grows step by step. How can I speed this up?

    EDITED 1: I have indexes off, and I have the journal off. Using insert into table select * from sync.table it still takes 10 minutes.

    EDITED 2: If I run a query like

        select id, invitem, invid, cost from inventory where itemtype = 1 order by invitem limit 50

    it takes 15-20 seconds. The table schema is:

        CREATE TABLE inventory (
            'id' INTEGER PRIMARY KEY AUTOINCREMENT NOT NULL,
            'serverid' INTEGER NOT NULL DEFAULT 0,
            'itemtype' INTEGER NOT NULL DEFAULT 0,
            'invitem' VARCHAR,
            'instock' FLOAT NOT NULL DEFAULT 0,
            'cost' FLOAT NOT NULL DEFAULT 0,
            'invid' VARCHAR,
            'categoryid' INTEGER DEFAULT 0,
            'pdacategoryid' INTEGER DEFAULT 0,
            'notes' VARCHAR,
            'threshold' INTEGER NOT NULL DEFAULT 0,
            'ordered' INTEGER NOT NULL DEFAULT 0,
            'supplier' VARCHAR,
            'markup' FLOAT NOT NULL DEFAULT 0,
            'taxfree' INTEGER NOT NULL DEFAULT 0,
            'dirty' INTEGER NOT NULL DEFAULT 1,
            'username' VARCHAR,
            'version' INTEGER NOT NULL DEFAULT 15
        )

    Indexes are created like:

        CREATE INDEX idx_inventory_categoryid ON inventory (pdacategoryid);
        CREATE INDEX idx_inventory_invitem ON inventory (invitem);
        CREATE INDEX idx_inventory_itemtype ON inventory (itemtype);

    I am wondering: isn't insert into ... select * from the fastest built-in way to do a massive data copy?

    EDITED 3: SQLite is serverless, so please stop voting for a particular answer, because I'm sure that is not the answer.

    Read the article

  • .net remoting - Better solution to wait for a service to initialize ?

    - by CitizenInsane
    Context: I have a client application (which I cannot modify, i.e. I only have the binary) that needs to run, from time to time, external commands that depend on a resource which is very long to initialize (about 20s). I thus decided to initialize this resource once and for all in a "CommandServer.exe" application (a single instance in the system tray) and let my client application call an intermediate "ExecuteCommand.exe" program that uses .NET remoting to perform the operation on the server. "ExecuteCommand.exe" is in charge of starting the server on the first call and then leaving it alive to speed up further commands.

    The service:

        public interface IMyService
        {
            void ExecuteCommand(string[] args);
        }

    The "CommandServer.exe" (using WindowsFormsApplicationBase for single-instance management + a user-friendly splash screen during resource initialization):

        private void onStartupFirstInstance(object sender, StartupEventArgs e)
        {
            // Register communication channel
            channel = new TcpServerChannel("CommandServerChannel", 8234);
            ChannelServices.RegisterChannel(channel, false);

            // Register service
            var resource = veryLongToInitialize();
            service = new MyServiceImpl(resource);
            RemotingServices.Marshal(service, "CommandServer");

            // Create icon in system tray
            notifyIcon = new NotifyIcon();
            ...
        }

    The intermediate "ExecuteCommand.exe":

        static void Main(string[] args)
        {
            startCommandServerIfRequired();

            var channel = new TcpClientChannel();
            ChannelServices.RegisterChannel(channel, false);

            var service = (IMyService)Activator.GetObject(typeof(IMyService), "tcp://localhost:8234/CommandServer");
            service.RunCommand(args);
        }

    Problem: As the server is very long to start (about 20s to initialize the required resources), "ExecuteCommand.exe" fails on the service.RunCommand(args) line because the server is not yet available.

    Question: Is there an elegant way to tune the delay before receiving "service not available" when calling service.RunCommand?

    NB1: Currently I'm working around the issue by adding a mutex in the server to signal complete initialization, and having "ExecuteCommand.exe" wait for this mutex before calling service.RunCommand.

    NB2: I have no background with .NET remoting, nor WCF, which is the recommended replacement. (I chose .NET remoting because it looked easier to set up for this one-shot issue of running external commands.)
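
    One sketch of an alternative to the mutex, assuming the ~20s startup is the only reason the first call fails: let ExecuteCommand.exe retry the remote call for a bounded time, treating connection-level exceptions as "not ready yet". The helper name, exception choice and timings are assumptions, not part of the original code.

        using System;
        using System.Net.Sockets;
        using System.Runtime.Remoting;
        using System.Threading;

        static class RemoteRetry
        {
            // Retry the remote call until the server finishes initializing or the deadline passes.
            public static void RunWithRetry(IMyService service, string[] args, TimeSpan timeout)
            {
                DateTime deadline = DateTime.UtcNow + timeout;
                while (true)
                {
                    try
                    {
                        service.ExecuteCommand(args); // the proxy connects lazily on the first call
                        return;
                    }
                    catch (Exception ex)
                    {
                        // A server that is not listening yet typically surfaces as a
                        // SocketException or RemotingException; anything else is a real error.
                        bool notReadyYet = ex is SocketException || ex is RemotingException;
                        if (!notReadyYet || DateTime.UtcNow >= deadline)
                            throw;
                        Thread.Sleep(500);
                    }
                }
            }
        }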

    Read the article

  • JQuery transition animation

    - by kk-dev11
    This program randomly selects two employees from a JSON Employees array; winnerPos is already defined. For a better user experience I programmed these functions to change pictures one by one. The animation stops when the randomly selected person is shown on the screen. The slideThrough function is triggered when the start button is pressed.

        function slideThrough() {
            counter = 0;
            start = true;
            clearInterval(picInterval);
            picInterval = setInterval(function () {
                changePicture();
            }, 500);
        }

        function changePicture() {
            if (start) {
                if (counter > winnerPos) {
                    setWinner();
                    start = false;
                    killInterval();
                } else {
                    var employee = Employees[counter];
                    winnerPic.fadeOut(200, function () {
                        this.src = 'img/' + employee.image;
                        winnerName.html(employee.name);
                        $(this).fadeIn(300);
                    });
                    counter++;
                }
            }
        }

    The problem is that the animation doesn't work smoothly. At first it works, but not perfectly. The second time, the transition happens in an irregular way, i.e. the speed differs and the fadeIn/fadeOut differs from picture to picture. Could anyone help me fine-tune the transition?

    Read the article

  • List all foreign key constraints that refer to a particular column in a specific table

    - by Sid
    I would like to see a list of all the tables and columns that refer (either directly or indirectly) to a specific column in the 'main' table via a foreign key constraint that has the ON DELETE=CASCADE setting missing. The tricky part is that there could be indirect relationships buried up to 5 levels deep (example: ... great-grandchild =FK3=> grandchild =FK2=> child =FK1=> main table). We need to dig up the leaf tables and columns, not just the very first level. The 'good' part about this is that execution speed isn't a concern; it'll be run on a backup copy of the production db to fix any relational issues for the future. I did SELECT * FROM sys.foreign_keys but that gives me the name of the constraint - not the names of the child and parent tables and the columns in the relationship (the juicy bits). Plus the previous designer used short, non-descriptive/random names for the FK constraints, unlike our practice below. The way we're adding constraints into SQL Server:

        ALTER TABLE [dbo].[UserEmailPrefs]
            WITH CHECK ADD CONSTRAINT [FK_UserEmailPrefs_UserMasterTable_UserId]
            FOREIGN KEY([UserId]) REFERENCES [dbo].[UserMasterTable] ([UserId])
            ON DELETE CASCADE
        GO

        ALTER TABLE [dbo].[UserEmailPrefs]
            CHECK CONSTRAINT [FK_UserEmailPrefs_UserMasterTable_UserId]
        GO

    The comments in this SO question inspire this question.

    Read the article
