Search Results

Search found 6199 results on 248 pages for 'fast enumeration'.

Page 18 of 248

  • Automatic profile load on boot

    - by AnthonyW
    Is there any way to configure a computer to log in automatically at boot and then immediately switch users? The purpose is to trigger the profile-loading process for the assumed user, so that when they go to log in their profile loads instantly. The immediate user switch means the login password is still required before any actual use. A few of the approaches I have tried require storing the password in plain text for the system to use; needless to say, that is undesirable. I have been looking for a solution for years, so if anyone knows of a better way to skin this cat, I am all ears. EDIT: the tsdiscon command will lock the workstation.

    Read the article

  • Is Objective-C fast enough for DSP/audio programming

    - by morgancodes
    I've been making some progress with audio programming for the iPhone. Now I'm doing some performance tuning, trying to see if I can squeeze more out of this little machine. Running Shark, I see that a significant part of my CPU power (16%) is getting eaten up by objc_msgSend. I understand I can speed this up somewhat by storing pointers to functions (IMPs) rather than calling them using [object message] notation. But if I'm going to go through all this trouble, I wonder if I might just be better off using C++. Any thoughts on this?

    Read the article

  • Fast, clean C timsort implementation?

    - by Drew Wagner
    Does anyone know of a clean implementation of timsort? The Python sources contain a description and code for the original timsort, but it is understandably full of Python-specific calls. I have a smoothly varying 2D array of double floats that I would like to sort as quickly as possible; it ought to contain a lot of monotonically increasing and decreasing runs. I'd like to try timsorting the rows individually and then merging the sorted rows (a merge sketch follows below this entry). If you know of a better sorting technique, I'm open to suggestions. Thanks!

    Read the article
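
    One building block of the approach described above, merging already-sorted rows, can be sketched independently of timsort itself. The snippet below is a rough illustration in C# rather than the C the question asks about; RowMergeSketch, MergeTwo and SortByRows are made-up names, and Array.Sort merely stands in for a run-aware sort such as timsort.

        using System;
        using System.Collections.Generic;

        static class RowMergeSketch
        {
            // Merge two already-sorted arrays into one sorted array.
            static double[] MergeTwo(double[] a, double[] b)
            {
                var result = new double[a.Length + b.Length];
                int i = 0, j = 0, k = 0;
                while (i < a.Length && j < b.Length)
                    result[k++] = (a[i] <= b[j]) ? a[i++] : b[j++];
                while (i < a.Length) result[k++] = a[i++];
                while (j < b.Length) result[k++] = b[j++];
                return result;
            }

            // Sort each row, then fold the sorted rows together pairwise.
            public static double[] SortByRows(double[][] rows)
            {
                var sorted = new List<double[]>();
                foreach (var row in rows)
                {
                    var copy = (double[])row.Clone();
                    Array.Sort(copy);   // stands in for a run-aware sort such as timsort
                    sorted.Add(copy);
                }
                double[] merged = new double[0];
                foreach (var row in sorted)
                    merged = MergeTwo(merged, row);
                return merged;
            }
        }

    A pairwise fold like this costs O(total elements x number of rows); a k-way merge driven by a small heap would reduce that if the row count is large.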

  • Robust and fast checksum algorithm?

    - by bene
    Which checksum algorithm can you recommend for the following use case? I want to generate checksums of small JPEG files (~8 kB each) to check whether the content has changed; using the filesystem's modification date is unfortunately not an option. The checksum need not be cryptographically strong, but it should robustly indicate changes of any size. The second criterion is speed, since it should be possible to process at least hundreds of images per second (on a modern CPU). The calculation will be done on a server with several clients; the clients send the images over gigabit TCP to the server, so there is no disk I/O as a bottleneck. (A small hashing sketch follows below.)

    Read the article
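
    A minimal sketch of one possible answer, not taken from the original thread: hash the in-memory JPEG bytes with MD5 from System.Security.Cryptography and compare digests. MD5 is not collision-resistant in the cryptographic sense, but it reliably flags content changes and comfortably handles hundreds of 8 kB buffers per second; whether a non-cryptographic hash such as CRC32 would be faster still is an assumption worth benchmarking. ImageChecksumSketch is an illustrative name.

        using System;
        using System.Security.Cryptography;

        static class ImageChecksumSketch
        {
            // Returns a short hex checksum of an in-memory JPEG buffer.
            public static string Checksum(byte[] imageBytes)
            {
                using (var md5 = MD5.Create())
                {
                    byte[] digest = md5.ComputeHash(imageBytes);
                    return BitConverter.ToString(digest);   // e.g. "9E-10-7D-..."
                }
            }

            // Change detection: compare against the checksum stored for the previous version.
            public static bool HasChanged(byte[] imageBytes, string previousChecksum)
            {
                return !string.Equals(Checksum(imageBytes), previousChecksum, StringComparison.Ordinal);
            }
        }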

  • SQL Server Reporting Services - Fast TimeDataRetrieval - Long TimeProcessing

    - by user197529
    An application that I support has recently begun experiencing extended execution times for reports in SQL Server Reporting Services. The reports being executed are not terribly complex: there are multiple stored procedures (between 5 and 8) which return anywhere from a handful to 8000 records in total, and reports generally run from 2 to 100 pages. One can argue (and I have) the benefit of a 100-page report, but the client is footing the bill. At any rate, the problem is that even a report returning 500 records (11 pages) takes 5 minutes to reach the browser. In the execution log, TimeDataRetrieval is 60 seconds but TimeProcessing is 235 seconds. It seems bizarre to me that my queries run so quickly, yet Reporting Services takes so long to process the data. Any suggestions are greatly appreciated. Kind Regards, Bernie

    Read the article

  • Fast string suffix checking in C# (.NET 4.0)?

    - by ilitirit
    What is the fastest method of checking string suffixes in C#? I need to check each string in a large list (anywhere from 5,000 to 100,000 items) for a particular term. The term is guaranteed never to be embedded within the string: if the string contains the term, it will be at the end of the string. The string is also guaranteed to be longer than the suffix, and cultural information is not important. These are the average times for the different methods against 100,000 strings (half of them have the suffix):

        1. Substring comparison - 13.60 ms
        2. String.Contains - 22.33 ms
        3. CompareInfo.IsSuffix - 24.60 ms
        4. String.EndsWith - 29.08 ms
        5. String.LastIndexOf - 30.68 ms

    [Edit] I forgot to mention that the strings also get put into separate lists, but this is not important; it does add to the running time, though. On my system, substring comparison (extracting the end of the string with String.Substring and comparing it to the suffix term) is consistently the fastest when tested against 100,000 strings. The problem with substring comparison is that garbage collection can slow it down considerably (more than the other methods) because String.Substring creates new strings. The effect is not as bad in .NET 4.0 as it was in 3.5 and below, but it is still noticeable: in my tests, String.Substring performed consistently slower on sets of 12,000-13,000 strings. This will obviously differ between systems and implementations. (An allocation-free alternative is sketched below this entry.) [EDIT] Benchmark code: http://pastebin.com/smEtYNYN

    Read the article
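
    A possible way to keep the speed of the substring approach without the String.Substring allocations (a sketch only, not the poster's benchmarked code): compare the tail of the string character by character using ordinal semantics. SuffixCheckSketch and EndsWithOrdinal are illustrative names.

        static class SuffixCheckSketch
        {
            // Ordinal, allocation-free check that s ends with suffix.
            // Assumes s.Length > suffix.Length, as the question guarantees.
            public static bool EndsWithOrdinal(string s, string suffix)
            {
                int offset = s.Length - suffix.Length;
                for (int i = suffix.Length - 1; i >= 0; i--)
                {
                    if (s[offset + i] != suffix[i])
                        return false;
                }
                return true;
            }
        }

    String.EndsWith(term, StringComparison.Ordinal) is also allocation-free and typically much faster than the culture-sensitive overload, so it is worth adding to the benchmark as well.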

  • Fast file search algorithm for IP addresses

    - by Dave Jarvis
    Question: What is the fastest way to find whether an IP address exists in a file that contains IP addresses sorted like this:

        219.93.88.62
        219.94.181.87
        219.94.193.96
        220.1.72.201
        220.110.162.50
        220.126.52.187
        220.126.52.247

    Constraints: no database (e.g., MySQL, PostgreSQL, Oracle, etc.); infrequent pre-processing is allowed (see the possible solutions section); it would be nice not to have to load the file on each query (131 KB); it must use under 5 megabytes of disk space. File details: one IP address per line, 9500+ lines. Possible solutions: create a directory hierarchy (radix tree?) and then use is_dir() (sadly, this uses 87 megabytes). (A binary-search sketch follows below.)

    Read the article
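
    The question is PHP-flavoured (is_dir()), but the underlying idea is language-agnostic: parse each address into a 32-bit integer once, keep the sorted array in memory, and binary-search it per query. A rough sketch in C#; IpLookupSketch and LoadSortedIps are illustrative names, and a 9500-entry array of 32-bit values easily fits the stated memory budget.

        using System;
        using System.IO;
        using System.Linq;

        static class IpLookupSketch
        {
            // Convert dotted-quad text to an unsigned 32-bit value.
            static uint ToUInt(string ip)
            {
                var parts = ip.Split('.');
                return (uint.Parse(parts[0]) << 24) | (uint.Parse(parts[1]) << 16)
                     | (uint.Parse(parts[2]) << 8)  |  uint.Parse(parts[3]);
            }

            // One-time (or infrequent) pre-processing; reuse the array across queries.
            public static uint[] LoadSortedIps(string path)
            {
                var ips = File.ReadAllLines(path)
                              .Select(line => line.Trim())
                              .Where(line => line.Length > 0)
                              .Select(ToUInt)
                              .ToArray();
                Array.Sort(ips);    // numeric order, not the file's text order
                return ips;
            }

            // O(log n) membership test per query.
            public static bool Contains(uint[] sortedIps, string ip)
            {
                return Array.BinarySearch(sortedIps, ToUInt(ip)) >= 0;
            }
        }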

  • fast similarity detection

    - by reinierpost
    I have a large collection of objects and I need to figure out the similarities between them. To be exact: given two objects, I can compute their dissimilarity as a number, a metric - higher values mean less similarity, and 0 means the objects have identical contents. The cost of computing this number is proportional to the size of the smaller object (each object has a given size). I need the ability to quickly find, given an object, the set of objects similar to it. To be exact: I need to produce a data structure that maps any object o to the set of objects no more dissimilar to o than d, for some dissimilarity value d, such that listing the objects in the set takes no more time than if they were in an array or linked list (and perhaps they actually are). Typically, the set will be much smaller than the total number of objects, so it is really worthwhile to perform this computation. It's good enough if the data structure assumes a fixed d, but if it works for an arbitrary d, even better. Have you seen this problem before, or something similar to it? What is a good solution? A straightforward solution involves computing the dissimilarities between all pairs of objects, but this is slow - O(n^2) where n is the number of objects. Is there a general solution with lower complexity? (A pivot-filtering sketch follows below.)

    Read the article
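
    Because the dissimilarity is described as a metric, the triangle inequality can prune most of the expensive comparisons. One simple scheme (a sketch of the general idea, not a full metric-tree implementation): pick a pivot object, precompute every object's distance to it, and for a query only evaluate the real metric on candidates whose pivot distance lies within d of the query's own pivot distance. Vantage-point trees and BK-trees generalize this. Dissimilarity and PivotIndexSketch are made-up names.

        using System;
        using System.Collections.Generic;

        // User-supplied metric; it must satisfy the triangle inequality.
        delegate double Dissimilarity<T>(T a, T b);

        class PivotIndexSketch<T>
        {
            readonly T pivot;
            readonly Dissimilarity<T> dist;
            readonly List<T> objects = new List<T>();
            readonly List<double> toPivot = new List<double>();   // precomputed dist(object, pivot)

            public PivotIndexSketch(IEnumerable<T> items, T pivot, Dissimilarity<T> dist)
            {
                this.pivot = pivot;
                this.dist = dist;
                foreach (T o in items)
                {
                    objects.Add(o);
                    toPivot.Add(dist(o, pivot));   // one metric evaluation per object, done once
                }
            }

            // All objects within dissimilarity d of query. The triangle inequality
            // |dist(o, pivot) - dist(q, pivot)| <= dist(o, q) lets us skip most candidates
            // without evaluating the expensive metric.
            public List<T> Within(T query, double d)
            {
                double qToPivot = dist(query, pivot);
                var result = new List<T>();
                for (int i = 0; i < objects.Count; i++)
                {
                    if (Math.Abs(toPivot[i] - qToPivot) > d)
                        continue;                  // cannot possibly be within d
                    if (dist(query, objects[i]) <= d)
                        result.Add(objects[i]);
                }
                return result;
            }
        }

    Using several pivots shrinks the surviving candidate set further, at the cost of more precomputation.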

  • Fast Remote PHP Technique To Detect Image 404

    - by Volomike
    What PHP script technique runs the fastest at detecting whether a remote image does not exist, before I include the image? I don't want to download all the bytes of the remote image - just enough to detect whether it exists. And while on the subject, with just a slight deviation: I'd also like to download just enough bytes to determine a JPEG's width and height. Speed is very important in the system design I'm working on. (A HEAD-request sketch follows below.)

    Read the article
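
    The usual trick is an HTTP HEAD request, which transfers only the response headers; in PHP this is typically done with get_headers() or cURL with CURLOPT_NOBODY. The same idea is sketched below in C# rather than PHP; RemoteImageCheckSketch is an illustrative name, and it is an assumption that the remote server answers HEAD requests correctly.

        using System;
        using System.Net;

        static class RemoteImageCheckSketch
        {
            // True if the URL answers a HEAD request with 2xx; only headers cross the wire.
            public static bool ImageExists(string url)
            {
                var request = (HttpWebRequest)WebRequest.Create(url);
                request.Method = "HEAD";
                request.Timeout = 3000;              // fail fast; tune as needed
                try
                {
                    using (var response = (HttpWebResponse)request.GetResponse())
                    {
                        int code = (int)response.StatusCode;
                        return code >= 200 && code < 300;
                    }
                }
                catch (WebException)
                {
                    return false;                    // 404, timeout, DNS failure, ...
                }
            }
        }

    Reading a JPEG's width and height without the whole file means fetching a small byte range and parsing the SOF marker, which is more involved and not sketched here.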

  • fast sphere-grid intersection

    - by Mat
    Hi! Given a 3D grid, a 3D point as sphere center, and a radius, I'd like to quickly calculate all cells contained in or intersected by the sphere. Currently I take the (grid-aligned) bounding box of the sphere and calculate the two cells for the min and max points of this bounding box. Then, for each cell between those two cells, I do a box-sphere intersection test. It would be great if there were something more efficient (see the sketch below this entry). Thanks!

    Read the article
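
    The per-cell box-sphere test the poster already performs can be kept very cheap: clamp the sphere centre to the cell's box and compare the squared distance against the squared radius, so there is no square root per cell. A sketch in C#, under the assumption of unit-sized cells at integer coordinates; SphereGridSketch is an illustrative name.

        using System;
        using System.Collections.Generic;

        static class SphereGridSketch
        {
            // Cells are axis-aligned unit cubes: cell (i, j, k) spans [i, i+1) x [j, j+1) x [k, k+1).
            // Returns the (i, j, k) indices of every cell touched by the sphere.
            public static List<int[]> CellsTouchedBySphere(double cx, double cy, double cz, double r)
            {
                var cells = new List<int[]>();
                int minI = (int)Math.Floor(cx - r), maxI = (int)Math.Floor(cx + r);
                int minJ = (int)Math.Floor(cy - r), maxJ = (int)Math.Floor(cy + r);
                int minK = (int)Math.Floor(cz - r), maxK = (int)Math.Floor(cz + r);
                double r2 = r * r;

                for (int i = minI; i <= maxI; i++)
                for (int j = minJ; j <= maxJ; j++)
                for (int k = minK; k <= maxK; k++)
                {
                    // Squared distance from the sphere centre to the closest point of this cell's box.
                    double dx = Math.Max(Math.Max(i - cx, 0.0), cx - (i + 1));
                    double dy = Math.Max(Math.Max(j - cy, 0.0), cy - (j + 1));
                    double dz = Math.Max(Math.Max(k - cz, 0.0), cz - (k + 1));
                    if (dx * dx + dy * dy + dz * dz <= r2)
                        cells.Add(new[] { i, j, k });
                }
                return cells;
            }
        }

    A further refinement (not shown) is to compute, for each (j, k) column of the bounding box, the exact range of i values analytically instead of testing every cell.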

  • Do any JS implementations currently support (or have support on the roadmap for) fast, vectorized operations?

    - by agnoster
    I'd like to do a bit of matrix/vector arithmetic in JavaScript, and was wondering if any browsers or other JS implementations actually have support for vectorized operations, for instance for quickly summing the entries of two Arrays (or whatever). Even if that currently doesn't mean it compiles down to vectorized operations, at least some language support would be nice for when it does get implemented - I'd take the existence of functions or syntax to support it as a step in the right direction. (Understandably, searches for "vectorization javascript" are pretty much all about graphics and SVG.)

    Read the article

  • Fast way to get a list of group members in Active Directory with C#

    - by Jeremy
    In a web app, we're looking to display a list of SAM account names for users that are members of a certain group. Groups could have 500 or more members in many cases, and we need the page to be responsive: with a group of about 500 members it takes 7-8 seconds to get the list of SAM accounts for all members. Are there faster ways? I know the Active Directory management console does it in under a second. I've tried a few methods (a further alternative is sketched below this entry):

        // 1)
        PrincipalContext pcRoot = new PrincipalContext(ContextType.Domain);
        GroupPrincipal grp = GroupPrincipal.FindByIdentity(pcRoot, "MyGroup");
        List<string> lst = grp.Members.Select(g => g.SamAccountName).ToList();

        // 2)
        PrincipalContext pcRoot = new PrincipalContext(ContextType.Domain);
        GroupPrincipal grp = GroupPrincipal.FindByIdentity(pcRoot, "MyGroup");
        PrincipalSearchResult<Principal> lstMembers = grp.GetMembers(true);
        List<string> lst = new List<string>();
        foreach (Principal member in lstMembers)
        {
            if (member.StructuralObjectClass.Equals("user"))
            {
                lst.Add(member.SamAccountName);
            }
        }

        // 3)
        PrincipalContext pcRoot = new PrincipalContext(ContextType.Domain);
        GroupPrincipal grp = GroupPrincipal.FindByIdentity(pcRoot, "MyGroup");
        System.DirectoryServices.DirectoryEntry de =
            (System.DirectoryServices.DirectoryEntry)grp.GetUnderlyingObject();
        List<string> lst = new List<string>();
        foreach (string sDN in de.Properties["member"])
        {
            System.DirectoryServices.DirectoryEntry deMember =
                new System.DirectoryServices.DirectoryEntry("LDAP://" + sDN);
            lst.Add(deMember.Properties["samAccountName"].Value.ToString());
        }

    Read the article
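
    One commonly suggested alternative, sketched under the assumption that direct (non-nested) members in a single domain are sufficient: issue one paged LDAP query for users whose memberOf contains the group's distinguished name, loading only sAMAccountName. GroupMemberSketch and groupDn are illustrative names.

        using System;
        using System.Collections.Generic;
        using System.DirectoryServices;

        static class GroupMemberSketch
        {
            // groupDn is the group's distinguishedName, e.g. "CN=MyGroup,OU=Groups,DC=example,DC=com".
            public static List<string> GetMemberSamAccounts(string groupDn)
            {
                var names = new List<string>();
                using (var searcher = new DirectorySearcher())   // defaults to the current domain
                {
                    searcher.Filter = "(&(objectCategory=person)(memberOf=" + groupDn + "))";
                    searcher.PageSize = 1000;                        // page through large groups
                    searcher.PropertiesToLoad.Add("sAMAccountName"); // only fetch what we display
                    using (SearchResultCollection results = searcher.FindAll())
                    {
                        foreach (SearchResult result in results)
                            names.Add(result.Properties["sAMAccountName"][0].ToString());
                    }
                }
                return names;
            }
        }

    This returns direct members only; nested membership needs either recursion or the LDAP_MATCHING_RULE_IN_CHAIN matching rule, and a distinguished name containing special characters must be escaped in the filter.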

  • Haskell: a data structure for storing ascending integers with a very fast lookup

    - by valya
    Hello! (This question is related to my previous question, or rather to my answer to it.) I want to store all cubes of natural numbers in a structure and look up specific integers to see if they are perfect cubes. For example:

        cubes = map (\x -> x*x*x) [1..]
        is_cube n = n == (head $ dropWhile (<n) cubes)

    This is much faster than calculating the cube root, but it has complexity O(n^(1/3)) (am I right?). I think using a more complex data structure would be better. For example, in C I could store the length of an already generated array (not a list, for faster indexing) and do a binary search; it would be O(log n) with a lower coefficient than in another answer to that question. The problem is, I can't express it in Haskell (and I don't think I should). Or I could use a hash function (like mod), but I think it would be much more memory-consuming to have several lists (or a list of lists), and it won't lower the complexity of the lookup (still O(n^(1/3))), only the coefficient. I thought about some kind of tree, but I have no clever ideas (sadly, I've never studied CS), and I suspect the fact that all the integers are ascending will make my tree ill-balanced for lookups. I'm pretty sure this ascending property can be a great advantage for lookups, but I don't know how to use it properly (see my first solution, which I can't express in Haskell). The sketch below this entry illustrates the array idea.

    Read the article
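
    The array-plus-binary-search idea the poster describes for C, sketched below in C# rather than Haskell; CubeTableSketch is an illustrative name, and maxN is an assumed upper bound on the values to be tested. In Haskell, the same idea maps naturally onto an unboxed Data.Vector with a binary search, or onto Data.IntSet membership tests.

        using System;

        class CubeTableSketch
        {
            readonly long[] cubes;

            // Precompute every cube up to (at least) maxN once; lookups are then O(log n).
            public CubeTableSketch(long maxN)
            {
                int count = (int)Math.Floor(Math.Pow(maxN, 1.0 / 3.0)) + 2;  // small safety margin
                cubes = new long[count];
                for (int i = 0; i < count; i++)
                    cubes[i] = (long)(i + 1) * (i + 1) * (i + 1);
            }

            public bool IsCube(long n)
            {
                return Array.BinarySearch(cubes, n) >= 0;
            }
        }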

  • MySQL fast insert dependent on flag from a separate table

    - by Stuart P
    Hi all. For work I'm dealing with a large database (160 million+ rows a year, 10 years of data) and have a quandary: a large percentage of the data we upload is null data, and I'd like to stop it from being uploaded. The data in question is spatial in nature, so I have one table like so:

        idLocations (auto-increment int, PK)
        X (float)
        Y (float)
        AlwaysIgnore (bool)

    which is used as a reference in a second table like so:

        idLocations (int, PK, "FK")
        idDates (int, PK, "FK")
        DATA1 (float)
        DATA2 (float)
        ...
        DATA7 (float)

    Ideally I'd like to find a method where I can do something like this (pseudo-SQL, not valid syntax):

        INSERT INTO tblData (idLocations, idDates, DATA1, ..., DATA7)
        VALUES (...), ..., (...)
        WHERE VALUES(idLocations) NOT IN (SELECT idLocations FROM tblLocation WHERE AlwaysIgnore = TRUE)
        ON DUPLICATE KEY UPDATE DATA1 = VALUES(DATA1)

    That is, for my large batch of input data (250 values in a block), skip the inserts whose idLocations matches an idLocations value flagged with AlwaysIgnore. Anyone have any suggestions? Cheers. -Stuart. Other details: running MySQL on a semi-dedicated machine, with the MyISAM engine for the tables.

    Read the article

  • jQuery fast image switching

    - by echedey lorenzo
    Hi, I have a PHP class that generates a map image from my DB data, and it is periodically updated through a setInterval loop. I am trying to update it with no flickering, but I just can't; I've tried different methods (preloader, imageswitcher) with no success.

        // first load
        function map() {
            $("#map").html("<img src=map.php?randval=" + Math.random() + ">");
        }

        // update it from setInterval calls
        function updatemap() {
            $("#map").fadeOut(function() {
                $(this).load(function() {
                    $(this).fadeIn();
                });
                $(this).attr("src", "map.php?randval=" + Math.random());
            });
        }

    Is there any way to update the image with no flickering at all? I would prefer an immediate swap instead of a fade. The problem I'm having is that after calling updatemap() the image just disappears. Maybe it is a problem with the src attribute I am setting? Thanks for your help.

    Read the article

  • Fast search in XML files in .NET (or how to index XML files)

    - by codymanix
    I have to implement a search feature which is able to quickly perform arbitrarily complex queries against XML data. If the user makes a query, all XML files must be searched for possible matches. The users will have lots of XML files (tens of thousands or more), typically a few kilobytes in size each, and all of them have almost the same structure. I have already benchmarked XPath; it is too slow for my needs. How can this be done most efficiently? Is it possible to create indexes for the contents of the XML files (preserving content semantics, not just plain full-text search)? Would it be useful to put the XML data into an (embedded) SQL database and run the queries with SQL? What other possibilities do I have? (A small indexing sketch follows below.)

    Read the article
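
    One possible direction (a sketch only, under the assumption that queries mostly filter on the values of known elements): stream each file once with XmlReader and build an inverted index from element-name/value pairs to the files that contain them; a complex query then intersects a few candidate sets before any file is re-parsed. XmlIndexSketch is an illustrative name, and the element tracking is deliberately simplistic for the flat, similarly structured documents described.

        using System;
        using System.Collections.Generic;
        using System.Xml;

        static class XmlIndexSketch
        {
            // Maps "elementName=value" to the set of files containing that pair.
            public static Dictionary<string, HashSet<string>> BuildIndex(IEnumerable<string> xmlFiles)
            {
                var index = new Dictionary<string, HashSet<string>>(StringComparer.Ordinal);
                foreach (string path in xmlFiles)
                {
                    using (var reader = XmlReader.Create(path))
                    {
                        string currentElement = null;
                        while (reader.Read())
                        {
                            if (reader.NodeType == XmlNodeType.Element)
                                currentElement = reader.LocalName;
                            else if (reader.NodeType == XmlNodeType.Text && currentElement != null)
                            {
                                string key = currentElement + "=" + reader.Value.Trim();
                                HashSet<string> files;
                                if (!index.TryGetValue(key, out files))
                                {
                                    files = new HashSet<string>();
                                    index[key] = files;
                                }
                                files.Add(path);
                            }
                        }
                    }
                }
                return index;
            }

            // Candidate files for a simple equality predicate; full query logic would
            // intersect several of these sets and then verify against the actual files.
            public static IEnumerable<string> Lookup(
                Dictionary<string, HashSet<string>> index, string element, string value)
            {
                HashSet<string> files;
                return index.TryGetValue(element + "=" + value, out files)
                     ? (IEnumerable<string>)files
                     : new string[0];
            }
        }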

  • Neural Network settings for fast training

    - by danpalmer
    I am creating a tool for predicting the time and cost of software projects based on past data. The tool uses a neural network to do this and so far the results are promising, but I think I can do a lot more optimisation just by changing the properties of the network. There don't seem to be any rules, or even many best practices, for these settings, so if anyone with experience could help me I would greatly appreciate it. The input data is made up of a series of integers that could go as high as the user wants, but most will be under 100,000 and some will be as low as 1. They are details like the number of people on a project and the cost of a project, as well as details about database entities and use cases. There are 10 inputs in total and 2 outputs (the time and cost). I am using resilient propagation to train the network. Currently it has 10 input nodes, 1 hidden layer with 5 nodes, and 2 output nodes, and I am training to get under a 5% error rate. The algorithm must run on a web server, so I have put in a measure to stop training when it looks like it isn't going anywhere; this is set to 10,000 training iterations. Currently, when I try to train it with data that is a bit varied, but well within the limits of what we expect users to put into it, it takes a long time to train, hitting the 10,000-iteration limit over and over again. This is the first time I have used a neural network and I don't really know what to expect. If you could give me some hints on what sort of settings I should be using for the network and for the iteration limit, I would greatly appreciate it. Thank you! (An input-scaling sketch follows below.)

    Read the article
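
    One factor that commonly causes exactly this kind of slow convergence, and it is general neural-network practice rather than anything stated in the question, is feeding raw inputs in the 1 to 100,000 range. Most trainers, resilient propagation included, converge much faster when each input is scaled to a small range such as [0, 1]. A minimal min-max scaling sketch in C#; InputScalingSketch is an illustrative name.

        using System;

        static class InputScalingSketch
        {
            // Per-feature min-max normalisation to [0, 1], using ranges taken from the training set.
            // rows are raw feature vectors; min[i] and max[i] describe feature i.
            public static double[][] Normalise(double[][] rows, double[] min, double[] max)
            {
                var scaled = new double[rows.Length][];
                for (int r = 0; r < rows.Length; r++)
                {
                    scaled[r] = new double[rows[r].Length];
                    for (int i = 0; i < rows[r].Length; i++)
                    {
                        double range = max[i] - min[i];
                        scaled[r][i] = range == 0 ? 0 : (rows[r][i] - min[i]) / range;
                    }
                }
                return scaled;
            }
        }

    The same per-feature minimum and maximum must be reused when scaling inputs at prediction time, and the two outputs usually need scaling (and later un-scaling) as well.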

  • Should I use Python or Assembly for a super fast copy program

    - by PyNEwbie
    As a maintenance issue I need to routinely (3-5 times per year) copy a repository that now has over 20 million files and exceeds 1.5 terabytes in total disk space. I am currently using RICHCOPY, but have tried others. RICHCOPY seems the fastest, but I do not believe I am getting close to the limits of the capabilities of my XP machine. I am toying around with using what I have read in The Art of Assembly Language to write a program to copy my files. My other thought is to start learning how to multi-thread in Python to do the copies. I am toying around with the idea of doing this in Assembly because it seems interesting, but while my time is not incredibly precious it is precious enough that I am trying to get a sense of whether or not I will see significant enough gains in copy speed. I am assuming that I would, but I only started really learning to program 18 months ago and it is still more or less a hobby. Thus I may be missing some fundamental concept of what happens with interpreted languages. Any observations or experiences would be appreciated. Note, I am not looking for any code. I have already written a basic copy program in Python 2.6 that is no slower than RICHCOPY. I am looking for observations on which approach will give me more speed. Right now it takes me over 50 hours to make a copy from a disk to a Drobo and then back from the Drobo to a disk. I have a LogicCube for when I am simply duplicating a disk, but sometimes I need to go from a disk to Drobo or the reverse. I am thinking that, given I can sector-copy a 3/4-full 2 terabyte drive using the LogicCube in under seven hours, I should be able to get close to that using Assembly, but I don't know enough to know if this is valid. (Yes, sometimes ignorance is bliss.) The reason I need to speed it up is that I have had two or three cycles where something has happened during the copy (fifty hours is a long time to expect the world to hold still) that has caused me to have to trash the copy and start over. For example, last week the water main broke under our building and shorted out the power.

    Read the article

  • Is C# fast enough for games

    - by Matt
    Will a game written in C# have any speed issues after long periods of play, like 24 hours at a time? I'm specifically talking about a 2D RPG similar to the old Final Fantasy or Dragon Quest games. I know that languages like Python will slow down too much; I'm curious how C# would hold up.

    Read the article

  • Delphi: Fast(er) widestring concatenation

    - by Ian Boyd
    I have a function whose job is to convert an ADO Recordset into HTML:

        class function RecordsetToHtml(const rs: _Recordset): WideString;

    The guts of the function involve a lot of WideString concatenation:

        while not rs.EOF do
        begin
          Result := Result + CRLF + '<TR>';
          for i := 0 to rs.Fields.Count-1 do
            Result := Result + '<TD>' + VarAsString(rs.Fields[i].Value) + '</TD>';
          Result := Result + '</TR>';
          rs.MoveNext;
        end;

    With a few thousand results, the function takes what any user would feel is too long to run. The Delphi Sampling Profiler shows that 99.3% of the time is spent in WideString concatenation (@WStrCatN and @WStrCat). Can anyone think of a way to improve WideString concatenation? I don't think Delphi 5 has any kind of string builder, and Format doesn't support Unicode. And to make sure nobody tries to weasel out, pretend you are implementing the interface:

        IRecordsetToHtml = interface(IUnknown)
          function RecordsetToHtml(const rs: _Recordset): WideString;
        end;

    Update One: I thought of using an IXMLDOMDocument to build up the HTML as XML, but then I realized that the final HTML would be XHTML and not HTML - a subtle, but important, difference. Update Two: Microsoft knowledge base article: How To Improve String Concatenation Performance

    Read the article

  • How fast can you make linear search?

    - by Mark Probst
    I'm looking to optimize this linear search:

        static int linear (const int *arr, int n, int key)
        {
            int i = 0;
            while (i < n) {
                if (arr[i] >= key)
                    break;
                ++i;
            }
            return i;
        }

    The array is sorted and the function is supposed to return the index of the first element that is greater than or equal to the key. The array is not large (below 200 elements) and will be prepared once for a large number of searches. Array elements after the n-th can, if necessary, be initialized to something appropriate if that speeds up the search. No, binary search is not allowed, only linear search.

    Read the article
