Search Results

Search found 3618 results on 145 pages for 'huge'.


  • Updating a DLL in a Production ASP.NET Web Site bin folder

    - by Josh Stodola
    I want to update a class library (a single DLL file) in a production web application. This web app is pre-compiled (published). I read an answer on Stack Overflow (sorry, I can't seem to find it anymore because the Search function does not work very well) that led me to believe I could just paste the new DLL into the bin folder and it would be picked up without problems (this causes the worker process to recycle, which is fine with me because we do not use InProc session state). However, when I tried this, my site blew up and gave a FileLoadException saying that the assembly manifest definition does not match the assembly reference. What in the world is this?! Updating the DLL in Visual Studio and re-deploying the entire site works just fine, but it is a huge pain in the rear. What is the point of having a separate DLL if you have to re-deploy the entire site to implement any changes? Here's the question: how can I update a DLL on a production web site without breaking the app and without re-deploying all of the files?
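    (A hedged aside on the usual culprit: the precompiled referencing assemblies record the library's full version in their manifests, so a rebuilt DLL with a bumped AssemblyVersion no longer matches. A minimal sketch of one common convention, assuming you control the library's AssemblyInfo.cs:)

        using System.Reflection;

        // Illustrative versioning scheme, not the asker's actual file.
        // AssemblyVersion is what referencing manifests record, so keeping it
        // fixed lets a drop-in DLL still satisfy the old references;
        // AssemblyFileVersion is informational and can change on every build.
        [assembly: AssemblyVersion("1.0.0.0")]        // keep stable across drops
        [assembly: AssemblyFileVersion("1.0.42.0")]   // bump freely per build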

    Read the article

  • What is the fastest, most efficient way to get up to speed on a new technology?

    - by SLC
    My current job involves working with a huge number of technologies, most of which are very niche and unheard of. In some cases I have to write something about the technology, or with the technology, such as some lessons, examples, or tutorials, on behalf of the developer of that technology or someone that is backing it. When I get told to learn about a new technology, my first port of call is to check our internal library, and then look on Amazon for a book on the subject. Failing that, or if the project is too small to warrant a purchase, I hit up Google and YouTube. However, the results of randomly googling what I want to learn are hit and miss. Some days, I can find everything I want to know in a series of lessons or videos, and it's no problem. Other times, I can find almost nothing, and I really have to piece things together from sites. The result is that there are various resources out there (videos, interactive lessons, tutorials, books, etc.), but when I need to learn something fast, I often don't know the best way to go about it. It's not about fun, because I don't always have the luxury of working my way through a 600-page textbook named "A Complete Guide To Technology X"; I have to deliver results quickly. One of the examples I'd like to use is ASP.NET MVC 2, which is something I have been told to learn. I grabbed a book on MVC 1 to refresh my knowledge, but googling it doesn't produce much useful information. I've seen a ton of ScottGu's tutorials on it, but they are mostly feature presentations, and some date back almost a year. The same applies to Channel 9, and there are no books out yet on Amazon. My question therefore has two parts. The first asks, "Where are the best places to look to get the information needed to learn a new technology?" and the second asks, "What is the most efficient way to use such resources to learn a new technology?"

    Read the article

  • efficiently determining if a polynomial has a root in the interval [0,T]

    - by user168715
    I have polynomials of nontrivial degree (4+) and need to robustly and efficiently determine whether or not they have a root in the interval [0,T]. The precise location or number of roots doesn't concern me; I just need to know if there is at least one. Right now I'm using interval arithmetic as a quick check to see if I can prove that no roots can exist. If I can't, I'm using Jenkins-Traub to solve for all of the polynomial roots. This is obviously inefficient, since it checks for all real roots and finds their exact positions, information I don't end up needing. Is there a standard algorithm I should be using? If not, are there any other efficient checks I could do before doing a full Jenkins-Traub solve for all roots? For example, one optimization I could do is to check if my polynomial f(t) has the same sign at 0 and T. If not, there is obviously a root in the interval. If so, I can solve for the roots of f'(t) and evaluate f at all roots of f' in the interval [0,T]. f(t) has no root in that interval if and only if all of these evaluations have the same sign as f(0) and f(T). This reduces the degree of the polynomial I have to root-find by one. Not a huge optimization, but perhaps better than nothing.
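    (A minimal sketch of the endpoint-sign optimization just described, assuming coefficients c[0] + c[1]*t + ... + c[n]*t^n and some existing real-root solver applied to the derivative, e.g. Jenkins-Traub one degree down; findRealRoots is a hypothetical stand-in:)

        using System;
        using System.Collections.Generic;

        static class PolyCheck
        {
            // Horner evaluation of c[0] + c[1]*t + ... + c[n]*t^n.
            static double Eval(double[] c, double t)
            {
                double v = 0;
                for (int i = c.Length - 1; i >= 0; i--) v = v * t + c[i];
                return v;
            }

            // If f(0) and f(T) share a sign, f can only reach zero at an
            // interior extremum, i.e. at a root of f' inside (0, T).
            public static bool HasRootIn(double[] c, double T,
                Func<double[], double, double, IEnumerable<double>> findRealRoots)
            {
                double f0 = Eval(c, 0), fT = Eval(c, T);
                if (f0 == 0 || fT == 0 || Math.Sign(f0) != Math.Sign(fT)) return true;

                var dc = new double[c.Length - 1];             // coefficients of f'
                for (int i = 1; i < c.Length; i++) dc[i - 1] = i * c[i];

                foreach (double r in findRealRoots(dc, 0, T))  // critical points
                    if (Math.Sign(Eval(c, r)) != Math.Sign(f0)) return true;
                return false;                                  // sign never changes
            }
        }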

    Read the article

  • Design suggestions for creating document management structure using hidden shares.

    - by focus.nz
    I need to add some document management functionality to my software. Documents will be grouped by company name and project name. The folders need to be accessed by the application using the ID numbers of clients/projects, but also easily browsed by the end user using Windows Explorer. Clients and projects will be stored in a database. I am thinking of having the software create the folders using the friendly name, and then using a hidden share with the ID number for the software to access the files. The folder structure would be something like this:

        Company 1 (Company-1234$)
            Project 101 (Project-101$)
            Project 102 (Project-102$)
            Project 103 (Project-103$)
        Company 2 (Company-5678$)
            Project 201 (Project-201$)
            Project 202 (Project-202$)
            Project 203 (Project-203$)

    So in the example above there would be a company called "Company 1" with an ID of "1234". When browsing the folders using Windows Explorer the user would see \\ServerName\Documents\Company1, and you could also access the same folder from \\ServerName\Documents\Company-1234$. By using the hidden share, if the company name changes or it is renamed for some reason, the link in the application doesn't break, because the application uses the hidden share based on the ID, which never changes. Will having hundreds (maybe thousands) of hidden shares on a server cause a huge performance hit? Does anyone have any suggestions or alternatives to provide this feature?
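    (For concreteness, a hedged sketch of creating one such folder/share pairing via WMI's Win32_Share class; the root path and naming scheme are illustrative, and this shows only the wiring, not an answer to the share-count question:)

        using System.IO;
        using System.Management;   // reference System.Management.dll

        static class ShareSetup
        {
            public static void CreateProjectShare(string root, string friendlyName, int projectId)
            {
                string path = Path.Combine(root, friendlyName);
                Directory.CreateDirectory(path);               // friendly, browsable folder

                var shares = new ManagementClass("Win32_Share");
                ManagementBaseObject args = shares.GetMethodParameters("Create");
                args["Path"] = path;
                args["Name"] = "Project-" + projectId + "$";   // trailing $ hides the share
                args["Type"] = 0;                              // 0 = disk drive share
                shares.InvokeMethod("Create", args, null);
            }
        }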

    Read the article

  • How to share data between SSRS Security and Data Processing extension?

    - by user2904681
    I've spent a lot of time trying to solve the issue named in the title and have not found a solution yet. I use MS SSRS 2012 with custom Security (based on Forms Authentication and ClaimsPrincipal) and Data Processing extensions. At the Data extension level I need to apply a filter programmatically, based on one of the claims, which I can access only at the Security extension level. Here is the problem: I don't know how to pass the claim from the Security to the Data Processing extension code. What I've tried:

        IAuthenticationExtension.LogonUser(string userName, string password, string authority)
        {
            ...
            ClaimsPrincipal claimsPrincipal = CreateClaimsPrincipal(...);
            Thread.CurrentPrincipal = claimsPrincipal;
            HttpContext.Current.User = claimsPrincipal;
            ...
        }

    But it doesn't work. It seems SSRS overrides it internally with either a GenericPrincipal or a FormsIdentity. The possible workarounds I'm thinking about (but haven't checked yet):

    1. Create an HttpModule which will create an HttpContext with all the required information (downside: getting the claims will be invoked on every request, a huge operation).
    2. Write the logged-on users' information that the Data extension requires to a custom SQL table, and then read it back.
    3. Try somehow to append the information to cookies during LogOn, then read it each time in IAuthenticationExtension.GetUserInfo and fill HttpContext.

    None of them seems to be a good solution. I would be grateful for any help/advice/comments.
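    (A hedged sketch of what workaround 2 might look like with plain ADO.NET at logon time; the table name, columns, connectionString and filterClaimValue are all made up for illustration, and the Data extension would read the row back by user name:)

        using System.Data.SqlClient;

        // Inside LogonUser, after the claims are available: persist the one
        // claim the Data Processing extension needs, keyed by user name.
        using (var cn = new SqlConnection(connectionString))
        using (var cmd = new SqlCommand(
            "UPDATE UserClaimCache SET ClaimValue = @claim WHERE UserName = @user; " +
            "IF @@ROWCOUNT = 0 INSERT INTO UserClaimCache (UserName, ClaimValue) " +
            "VALUES (@user, @claim);", cn))
        {
            cmd.Parameters.AddWithValue("@user", userName);
            cmd.Parameters.AddWithValue("@claim", filterClaimValue);  // from the principal
            cn.Open();
            cmd.ExecuteNonQuery();
        }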

    Read the article

  • multiple redirect??

    - by Ryan Pitts
    OK, I won't go into the full details (too much to explain), but here is what I am trying to do. I have a button on a webpage (we'll call it page-1) that links to a page (we'll call it page-2). This page opens in the same window. However, I need the page that opens (page-2) to open up a new window with another page (we'll call this one page-3) when it loads. So effectively, when you click the initial button (on page-1) it will go to a new page (page-2) AND a window will open as well with a different page (page-3). This is where it gets tricky. I need page-2 to automatically redirect back to page-1 after it launches page-3. Is this possible, and if so, how? Could it be a jQuery thing? Please just give some answers... I know this is way crazy and a huge workaround, but this is my last resort for this particular website. I have to do it this way because of the lack of control over some of the code.

    Read the article

  • VS2008 - Find and Replace - Searches too many files.

    - by Pam Bullock
    I've used VS2008 a lot and have never had this problem. However, I started a new job and am using a new machine. Ever since I've gotten here, the VS Find feature has been acting funny. I first noticed it when I did a Replace All for "All Open Files". The project wouldn't build, because the values had actually been replaced in other files within the solution that were not open and didn't even open after I pressed Replace All. I have found that I can never use Replace All on this machine, because I never know what it is going to do. Even if I just do a Find on "Current Document", once it's done with the document, when I should get the message that says "No more matches found", it actually OPENS another random file from my solution where there is a match and keeps on going. It never seems to make any difference which "Look in" option I've chosen. My coworker has an install off the same disk and claims not to be experiencing this. We're in the middle of a stressful, huge project with a close deadline, so I know my boss won't let me do a reinstall. Has anyone else ever had this happen? Anyone know a fix? Thanks, Pam

    Read the article

  • Need events to execute on timer events, metronome precision.

    - by user295734
    I set up a timer to call an event in my application. The problem is that the event execution is being skewed by other Windows operations, e.g. opening a window or loading a web page. I need the event to execute exactly on time, every time. When I first set up the app, I used a sound file, like a metronome, to listen to the event firing; in a steady state it fires right on, but as soon as I do something in the Windows environment, the sound fires slower, then sort of speeds up a bit to catch up. So I added a logging method to the event to catch the timer ticks. From that data, it appears that the timer is not being affected by the Windows activity, but my application event calls are. I figured this out by checking DateTime.Now in the event. If I set the timer to 250 milliseconds, which is 4 ticks per second, you get data something like below (sec:ms):

        1:000  1:250  1:500  1:750
        2:000  2:250  2:500  2:750
        3:000  3:250  3:500  3:750
        (let's say I execute some Windows event; the times skew)
        4:122  4:388  4:600  4:876
        (stop doing what I was doing in Windows; shortening the data for simplicity, my list was 30 sec long)
        5:124  5:268  5:500  5:750
        (you would see the times go back to the same milliseconds as at the beginning)
        6:000  6:250  6:500  6:750
        7:000  7:250  7:500  7:750

    So I'm thinking the timer continues to fire on the same millisecond every time, but it's the event that is being skewed to fire off time by other Windows operations. It's not a huge skew, but for what I need to accomplish it's unacceptable. Is there anything I can do in .NET (hoping to use a XAML/WPF application) that will correct the skewing of events? Thanks.
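    (A minimal sketch of one common mitigation, assuming the handler itself is cheap: schedule each tick from absolute elapsed time on a dedicated thread, so a tick that fires late does not push the following ticks later. Names are illustrative, and WPF would still need to marshal any UI work back via the Dispatcher.)

        using System;
        using System.Diagnostics;
        using System.Threading;

        static class Metronome
        {
            public static void Run(TimeSpan interval, Action onTick, CancellationToken ct)
            {
                var clock = Stopwatch.StartNew();
                long n = 1;
                while (!ct.IsCancellationRequested)
                {
                    // Due time is computed from the start, not from the last tick,
                    // so scheduling error never accumulates.
                    TimeSpan due = TimeSpan.FromTicks(interval.Ticks * n++);
                    TimeSpan wait = due - clock.Elapsed;
                    if (wait > TimeSpan.Zero) Thread.Sleep(wait);
                    onTick();
                }
            }
        }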

    Read the article

  • Loading a class file immediately AFTER startup

    - by Striker
    We have a few WAR files deployed inside an EAR file. Some of the WAR files have a class that caches static data from our PLM system in singletons. Since some of the classes take several minutes to load, we use load-on-startup in the web.xml to load them ahead of time. This all works fine until we attempt to re-deploy the application on our production servers (WebLogic 10.3). We get an exception from our PLM API about a DLL already being loaded. Our PLM vendor has confirmed that this is a problem and stated that they don't support using load-on-startup. This is also a huge problem on our development boxes, where we redeploy the app all the time. Most of us, when we're not working on one of the apps that uses a cache, have them commented out. Obviously we can't do that for the production servers. Right now we transfer the EAR to the production server, deploy it in the console, wait for it to crash, shut the app server instance down and then start it up again. We need to find a way around this... One suggestion was to create a servlet that we can call after the server boots that will load the various caches. While this would work, I'm looking for something a bit cleaner. Is there any way to detect when the server has started and then fire off the methods? Thanks.

    Read the article

  • Discover periodic patterns in a large data-set

    - by Miner
    I have a large sequence of tuples on disk in the form (t1, k1) (t2, k2) ... (tn, kn), where ti is a monotonically increasing timestamp and ki is a key (assume a fixed-length string if needed). Neither ti nor ki is guaranteed to be unique. However, the number of unique tis and kis is huge (millions). n itself is very large (100 million+), and the size of k (approx. 500 bytes) makes it impossible to store everything in memory. I would like to find periodic occurrences of keys in this sequence. For example, if I have the sequence

        (1, a) (2, b) (3, c) (4, b) (5, a) (6, b) (7, d) (8, b) (9, a) (10, b)

    the algorithm should emit (a, 4) and (b, 2). That is, a occurs with a period of 4 and b occurs with a period of 2. If I build a hash of all keys and store, for each key, the average of the differences between consecutive timestamps along with their standard deviation, I might be able to make a single pass and report only the keys with an acceptable standard deviation (ideally, 0). However, it requires one bucket per unique key, whereas in practice I might have very few really periodic patterns. Any better ways?
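    (A hedged sketch of the single pass just described, using Welford's running-variance update per key; the streaming source is a hypothetical reader, and the per-key struct is exactly the memory cost the question hopes to avoid:)

        using System;
        using System.Collections.Generic;

        static class PeriodFinder
        {
            sealed class GapStats
            {
                public long Last;            // last timestamp seen for this key
                public long N;               // number of gaps observed
                public double Mean, M2;      // running mean and sum of squared deviations

                public void Add(long gap)
                {
                    N++;
                    double d = gap - Mean;
                    Mean += d / N;
                    M2 += d * (gap - Mean);  // Welford: M2 / N is the variance
                }
            }

            public static void EmitPeriodicKeys(IEnumerable<(long T, string K)> tuples)
            {
                var stats = new Dictionary<string, GapStats>();
                foreach (var (t, k) in tuples)   // e.g. a streaming disk reader
                {
                    if (stats.TryGetValue(k, out var s)) { s.Add(t - s.Last); s.Last = t; }
                    else stats[k] = new GapStats { Last = t };
                }
                foreach (var kv in stats)        // near-zero variance => periodic
                    if (kv.Value.N >= 2 && kv.Value.M2 / kv.Value.N < 1e-9)
                        Console.WriteLine("(" + kv.Key + ", " + kv.Value.Mean + ")");
            }
        }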

    Read the article

  • Why won't VS2010 RC use my existing types when I add a service reference?

    - by Johan Driessen
    I have a huge problem getting service references in VS2010 RC to use existing assemblies. Even though I have a class library with all the data contracts (classes marked with DataContract and properties with DataMember) that is shared between the service project and the consuming project (which is a class library), when I add a service reference the data contracts are regenerated within the service reference instead of using the existing types. When I was using VS2010 Beta 2 this worked fine, and I have existing service references using the very same data contracts. But if I add a new service reference, or even update an old one, it won't use the existing types anymore. I have made a mini test solution, with one service, one data contract type and one console app as a consumer (all in the same solution), and there it seems to work, but that's no great comfort to me. Is there any way to see why it can't use the existing types? Edit to clarify: it works to generate the proxy classes with svcutil.exe, pointing to the data contracts DLL, like this:

        svcutil.exe http://localhost/MyService.svc /reference:[Path To DataContracts]\DataContracts.dll /n:*,MyProject.MyServiceReference /ct:System.Collections.Generic.List`1

    The question is: what possible reason could there be for Visual Studio to generate its own data contracts instead of using the existing ones, even though the "reuse" checkbox is checked and the data contracts assembly is referenced?

    Read the article

  • NHibernate IQueryable Collection as Property of Root

    - by Khalid Abuhakmeh
    Hello, and thank you for taking the time to read this. I have a root object that has a property that is a collection. For example: I have a Shelf object that has Books.

        // now
        public class Shelf
        {
            public ICollection<Book> Books { get; set; }
        }

        // want
        public class Shelf
        {
            public IQueryable<Book> Books { get; set; }
        }

    What I want to accomplish is to return a collection that is IQueryable, so that I can run paging and filtering off the collection directly from the parent:

        var shelf = shelfRepository.Get(1);
        var filtered = from book in shelf.Books
                       where book.Name == "The Great Gatsby"
                       select book;

    I want that query executed by NHibernate specifically, not a get-all that loads the whole collection and then filters it in memory (which is what currently happens when I use ICollection). The reasoning behind this is that my collection could be huge, tens of thousands of records, and a get-all query could bash my database. I would like to do this implicitly, so that when NHibernate sees an IQueryable on my class it knows what to do. I have looked at NHibernate's LINQ provider, and currently I am making the decision to take large collections and split them into their own repositories so that I can make explicit calls for filtering and paging. LINQ to SQL offers something similar to what I'm talking about.
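    (A hedged sketch of that repository split, assuming NHibernate 3's LINQ provider, where session.Query<T>() returns a deferred IQueryable that composes filtering and paging into the SQL, and assuming a Book.Shelf back-reference:)

        using System.Linq;
        using NHibernate;
        using NHibernate.Linq;

        public class BookRepository
        {
            private readonly ISession session;
            public BookRepository(ISession session) { this.session = session; }

            // Deferred: nothing hits the database until the query is enumerated.
            public IQueryable<Book> ForShelf(int shelfId)
            {
                return session.Query<Book>().Where(b => b.Shelf.Id == shelfId);
            }
        }

        // Usage (inside some method): filter and page in SQL, never loading
        // the whole collection.
        var page = repository.ForShelf(1)
                             .Where(b => b.Name == "The Great Gatsby")
                             .Skip(0).Take(25)
                             .ToList();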

    Read the article

  • FLV performance and garbage collection

    - by justinbach
    I'm building a large Flash site (AS3) that uses huge FLVs as transition videos from section to section. The FLVs are 1280x800 and are being scaled to 1680x1050 (much of which is not displayed to users with smaller screens), and are around 5-8 seconds apiece. I'm encoding the videos using On2's hi-def codec, VP6-S, and playback is pretty good with native FLV players, Perian-equipped QuickTime, and simple proof-of-concept FLV playback apps built in AS3. The problem I'm having is that in the context of the actual site, playback isn't as smooth; the framerate isn't quite as good as it should be, and more problematically, there's occasional jerkiness and dropped frames (sometimes pausing the video for as long as a quarter of a second or so). My guess is that this is being caused by garbage collection in the Flash player, which happens nondeterministically and is therefore hard to test and control for. I'm using a single instance of FLVPlayback to play the videos; I originally used NetStream objects and so forth directly, but switched to FLVPlayback for this reason. Has anyone experienced this sort of jerkiness with FLVPlayback (or more generally, with hi-def Flash video)? Am I right that GC is the culprit here, and if so, is there any way to prevent it during playback of these system-intensive transitions?

    Read the article

  • Commit is VERY slow in my NHibernate / SQLite project

    - by Tom Bushell
    I've just started doing some real-world performance testing on my Fluent NHibernate / SQLite project, and am experiencing some serious delays when I Commit to the database. By serious, I mean taking 20-30 seconds to Commit 30 K of data! This delay seems to get worse as the database grows. When the SQLite DB file is empty, commits happen almost instantly, but when it grows to 10 MB, I see these huge delays. The database has 16 tables, averaging 10 columns each. One possible problem is that I'm storing a dozen or so IList members, but they are typically only 200 elements long. This is a recent addition to Fluent NHibernate automapping, which stores each float in a single table row, so maybe that's a potential problem. Any suggestions on how to track this down? I suspect SQLite is the culprit, but maybe it's NHibernate? I don't have any experience with profilers, but am thinking of getting one. I'm aware of NHibernate Profiler; any recommendations for profilers that work well with SQLite? Here's the method that saves the data; it's just a SaveOrUpdate call and a Commit, if you ignore all the error handling and debug logging.

        public static void SaveMeasurement(object measurement)
        {
            Debug.WriteLine("\r\n---SaveMeasurement---");

            // Get the application's database session
            var session = GetSession();
            using (var transaction = session.BeginTransaction())
            {
                try
                {
                    session.SaveOrUpdate(measurement);
                }
                catch (Exception e)
                {
                    throw new ApplicationException(
                        "\r\n SaveMeasurement->SaveOrUpdate failed\r\n\r\n", e);
                }
                try
                {
                    Debug.WriteLine("\r\n---Commit---");
                    transaction.Commit();
                    Debug.WriteLine("\r\n---Commit Complete---");
                }
                catch (Exception e)
                {
                    throw new ApplicationException(
                        "\r\n SaveMeasurement->Commit failed\r\n\r\n", e);
                }
            }
        }
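    (Before reaching for a profiler, a hedged way to split the blame: flush explicitly first, so NHibernate's dirty checking and SQL generation are timed separately from SQLite's actual commit. A minimal sketch around the code above:)

        var sw = System.Diagnostics.Stopwatch.StartNew();
        session.Flush();                 // NHibernate work: dirty check + SQL execution
        Debug.WriteLine("Flush took " + sw.ElapsedMilliseconds + " ms");

        sw.Restart();
        transaction.Commit();            // what remains is essentially SQLite's commit
        Debug.WriteLine("Commit took " + sw.ElapsedMilliseconds + " ms");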

    Read the article

  • random data using php & mysql

    - by Prakash
    I have a MySQL table structured like below:

        CREATE TABLE test (
            id int(11) NOT NULL auto_increment,
            title text NULL,
            tags text NULL,
            PRIMARY KEY (id)
        );

    Data in the tags field is stored as comma-separated text, like html,php,mysql,website,html etc. Now I need to create an array that contains around 50 randomly selected tags from random records. Currently I am using RAND() to select 15 random records from the database, holding all the tags from those 15 records in an array, and then using array_rand() to randomize the array and select only 50 random tags:

        $query = mysql_query("select * from test order by id asc, RAND() limit 15");
        $tags = "";
        while ($eachData = mysql_fetch_array($query)) {
            $additionalTags = $eachData['tags'];
            if ($tags == "") {
                $tags .= $additionalTags;
            } else {
                $tags .= "," . $additionalTags;
            }
        }
        $tags = explode(",", $tags);
        $newTags = array();
        foreach ($tags as $tag) {
            $tag = trim($tag);
            if ($tag != "") {
                if (!in_array($tag, $newTags)) {
                    $newTags[] = $tag;
                }
            }
        }
        $random_newTags = array_rand($newTags, 50);

    Now I have a huge number of records in the database, and because of that RAND() is performing very slowly and sometimes doesn't work at all. Can anyone let me know how to handle this situation correctly so that my page will work normally?

    Read the article

  • Poor execution plans when using a filter and CONTAINSTABLE in a query

    - by Paul McLoughlin
    We have an interesting problem that I was hoping someone could help shed some light on. At a high level, the problem is as below. The following query executes quickly (1 second):

        SELECT SA.*
        FROM cg.SEARCHSERVER_ACTYS AS SA
        JOIN CONTAINSTABLE(CG.SEARCHSERVER_ACTYS, NOTE, 'reports') AS T1
            ON T1.[Key] = SA.UNIQUE_ID

    but if we add a filter to the query, it takes approximately 2 minutes to return:

        SELECT SA.*
        FROM cg.SEARCHSERVER_ACTYS AS SA
        JOIN CONTAINSTABLE(CG.SEARCHSERVER_ACTYS, NOTE, 'reports') AS T1
            ON T1.[Key] = SA.UNIQUE_ID
        WHERE SA.CHG_DATE > '19 Feb 2010'

    Looking at the execution plans for the two queries, I can see that in the second case there are two places with huge differences between the actual and estimated number of rows:

    1) the FulltextMatch table-valued function, where the estimate is approx. 22,000 rows and the actual is 29 million rows (which are then filtered down to 1,670 rows before the join), and
    2) the index seek on the full-text index, where the estimate is 1 row and the actual is 13,000 rows.

    As a result of the estimates, the optimiser chooses a nested loops join (since it assumes a small number of rows), hence the plan is inefficient. We can work around the problem either by (a) parameterising the query and adding OPTION (OPTIMIZE FOR UNKNOWN), or (b) forcing a HASH JOIN. In both cases the query returns in under 1 second and the estimates appear reasonable. My question really is: why are the estimates used in the poorly performing case so wildly inaccurate, and what can be done to improve them? Statistics are up to date on the indexes on the indexed view being used here. Any help greatly appreciated.

    Read the article

  • Self-describing file format for gigapixel images?

    - by Adam Goode
    In medical imaging, there appear to be two ways of storing huge gigapixel images:

    1. Use lots of JPEG images (either packed into files or individually) and cook up some bizarre index format to describe what goes where. Tack on some metadata in some other format.
    2. Use TIFF's tile and multi-image support to cleanly store the images as a single file, and provide downsampled versions for zooming speed. Then abuse various TIFF tags to store metadata in non-standard ways. Also, store tiles with overlapping boundaries that must be individually translated later.

    In both cases, the reader must understand the format well enough to know how to draw things and read the metadata. Is there a better way to store these images? Is TIFF (or BigTIFF) still the right format for this? Does XMP solve the metadata problem? The main issues are:

    - storing images in a way that allows for rapid random access (tiling)
    - storing downsampled images for rapid zooming (pyramid)
    - handling cases where tiles are overlapping or sparse (scanners often work by moving a camera over a slide in 2D and capturing only where there is something to image)
    - storing important metadata, including associated images like a slide's label and thumbnail
    - support for lossy storage

    What kind of (hopefully non-proprietary) formats do people use to store large aerial photographs or maps? Those images have similar properties.

    Read the article

  • Getting started with massive data

    - by Max
    I'm a math guy and occasionally do some statistics/machine learning analysis consulting projects on the side. The data I have access to are usually on the smaller side, at most a couple hundred megabytes (and almost always far less), but I want to learn more about handling and analyzing data on the gigabyte/terabyte scale. What do I need to know, and what are some good resources to learn from? Hadoop/MapReduce is one obvious start. Is there a particular programming language I should pick up? (I primarily work now in Python, Ruby, R, and occasionally Java, but it seems like C and Clojure are often used for large-scale data analysis?) I'm not really familiar with the whole NoSQL movement, except that it's associated with big data. What's a good place to learn about it, and is there a particular implementation (Cassandra, CouchDB, etc.) I should get familiar with? Where can I learn about applying machine learning algorithms to huge amounts of data? My math background is mostly on the theory side, definitely not on the numerical or approximation side, and I'm guessing most of the standard ML algorithms don't really scale. Any other suggestions on things to learn would be great!

    Read the article

  • C#+BDE+DBF problem

    - by Drabuna
    I have a huge problem: I have lots of .dbf files (~50,000) and I need to import them into an Oracle database. I open the connection like this:

        OleDbConnection oConn = new OleDbConnection();
        OleDbCommand oCmd = new OleDbCommand();
        oConn.ConnectionString = @"Provider=Microsoft.Jet.OLEDB.4.0;Data Source=" + directory + ";Extended Properties=dBASE IV;User ID=Admin;Password=";
        oCmd.Connection = oConn;
        oCmd.CommandText = @"SELECT * FROM " + tablename;
        try
        {
            oConn.Open();
            resultTable.Load(oCmd.ExecuteReader());
        }
        catch (Exception ex)
        {
            MessageBox.Show(ex.Message);
        }
        oConn.Close();
        oCmd.Dispose();
        oConn.Dispose();

    I read them in a loop and then insert into Oracle. Everything's fine. BUT: there are about 1,000 files that I can't open; they raise the exception "not a table". So I googled, and installed the Borland Database Engine. Now everything works fine... but no. Now, when I'm reading files, on the 1024th file an exception is raised: "System resource exceeded". But I have lots of free resources. When I remove BDE, everything's fine again, no "System resource exceeded" error, but I can't read all the files. Help please. PS: Tried using ODBC, but nothing changes.
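    (One hedged thing worth ruling out, sketched below: the data reader returned by ExecuteReader is never disposed above, and leaking one unmanaged handle per file would fit a hard ceiling like 1024. Wrapping everything in using blocks guarantees cleanup even when an exception path is taken:)

        using (var oConn = new OleDbConnection(
            @"Provider=Microsoft.Jet.OLEDB.4.0;Data Source=" + directory +
            ";Extended Properties=dBASE IV;User ID=Admin;Password="))
        using (var oCmd = new OleDbCommand(@"SELECT * FROM " + tablename, oConn))
        {
            oConn.Open();
            using (OleDbDataReader reader = oCmd.ExecuteReader())
            {
                resultTable.Load(reader);   // reader is disposed even if Load throws
            }
        }   // connection and command handles released after every file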

    Read the article

  • What is the basic pattern for using (N)Hibernate?

    - by Vilx-
    I'm creating a simple Windows Forms application with NHibernate and I'm a bit confused about how I'm supposed to use it. To quote the manual: ISession (NHibernate.ISession) A single-threaded, short-lived object representing a conversation between the application and the persistent store. Wraps an ADO.NET connection. Factory for ITransaction. Holds a mandatory (first-level) cache of persistent objects, used when navigating the object graph or looking up objects by identifier. Now, suppose I have the following scenario: I have a simple classifier which is a MSSQL table with two columns - ID (auto_increment) and Name (nvarchar). To edit this classifier I create a form which contains a single gridview and two buttons - OK and Cancel. The user can nearly directly edit the table in the gridview, and when he hits OK the changes he made are persisted to the DB (or if he hits cancel, nothing happens). Now, I have several questions about how to organize this: What should the lifetime of my ISession be? Should I create a single ISession for my whole application; an ISession for each of my forms (the application is single-threaded MDI); or an ISession for every DB operation/transaction? Does NHibernate offer some kind of built-in dirty tracking or must I do this myself? The manual mentions something like it here and there but does not go into details. How is this done? Is there not a huge overhead? Is it somehow tied with the cache(s) that NHibernate has? What are these caches for? Are they not specific to a single ISession? That is, if I use a seperate ISession for every transaction, won't it break the dirty tracking? How does the built-in dirty tracking detect deleted objects?

    Read the article

  • Delphi: Alternative to using Reset/ReadLn for text file reading

    - by Ian Boyd
    i want to process a text file line by line. In the olden days i loaded the file into a StringList: slFile := TStringList.Create(); slFile.LoadFromFile(filename); for i := 0 to slFile.Count-1 do begin oneLine := slFile.Strings[i]; //process the line end; Problem with that is once the file gets to be a few hundred megabytes, i have to allocate a huge chunk of memory; when really i only need enough memory to hold one line at a time. (Plus, you can't really indicate progress when you the system is locked up loading the file in step 1). The i tried using the native, and recommended, file I/O routines provided by Delphi: var f: TextFile; begin Reset(f, filename); while ReadLn(f, oneLine) do begin //process the line end; Problem withAssign is that there is no option to read the file without locking (i.e. fmShareDenyNone). The former stringlist example doesn't support no-lock either, unless you change it to LoadFromStream: slFile := TStringList.Create; stream := TFileStream.Create(filename, fmOpenRead or fmShareDenyNone); slFile.LoadFromStream(stream); stream.Free; for i := 0 to slFile.Count-1 do begin oneLine := slFile.Strings[i]; //process the line end; So now even though i've gained no locks being held, i'm back to loading the entire file into memory. Is there some alternative to Assign/ReadLn, where i can read a file line-by-line, without taking a sharing lock? i'd rather not get directly into Win32 CreateFile/ReadFile, and having to deal with allocating buffers and detecting CR, LF, CRLF's. i thought about memory mapped files, but there's the difficulty if the entire file doesn't fit (map) into virtual memory, and having to maps views (pieces) of the file at a time. Starts to get ugly. i just want Reset with fmShareDenyNone!

    Read the article

  • mysql: Average over multiple columns in one row, ignoring nulls

    - by Sai Emrys
    I have a large table (of sites) with several numeric columns, say a through f. (These are site rankings from different organizations, like Alexa, Google, Quantcast, etc. Each has a different range and format; they're straight dumps from the outside DBs.) For many of the records, one or more of these columns is null, because the outside DB doesn't have data for it. They all cover different subsets of my DB. I want column t to be their weighted average (each of a..f has a static weight which I assign), ignoring null values (which can occur in any of them), except being null if they're all null. I would prefer to do this with a simple SQL calculation, rather than doing it in app code or using some huge ugly nested IF block to handle every permutation of nulls. (Given that I have an increasing number of columns to average over as I add in more outside DB sources, that would be exponentially more ugly and bug-prone.) I'd use AVG, but that's only for GROUP BY, and this is within one record. The data is semantically nullable, and I don't want to average in some "average" value in place of the nulls; I want to count only the columns for which data is there. Is there a good way to do this? Ideally, what I want is something like

        UPDATE sites SET t = AVG(a*a_weight, b*b_weight, ...)

    where any null values are just ignored and no grouping is happening.
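    (A hedged sketch of one way this is often written: divide the null-coalesced weighted sum by the sum of the weights actually present. MySQL evaluates (a IS NOT NULL) as 1 or 0, and NULLIF turns an all-null row into NULL. Shown as the SQL string a C# caller might execute, with three columns and made-up weights for brevity; the same pattern extends to d..f:)

        // Illustrative only: weights 0.5 / 0.3 / 0.2 stand in for a_weight etc.
        const string updateSql = @"
            UPDATE sites SET t =
                  (COALESCE(a, 0) * 0.5 + COALESCE(b, 0) * 0.3 + COALESCE(c, 0) * 0.2)
                / NULLIF((a IS NOT NULL) * 0.5
                       + (b IS NOT NULL) * 0.3
                       + (c IS NOT NULL) * 0.2, 0);";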

    Read the article

  • Only last row in method execute (Objective-C)

    - by Emil
    First I want to apologize for my lack of knowledge of programming in general. I am in the process of learning, and have discovered that this site is a huge complement to that learning. I have created a program with three UITextFields (thePlace, theVerb, theNumber) and two UITextViews, where one of the text views (theOutput) takes the text from the other text view (theTemplate) and replaces some strings with the text entered in the text fields. When I click a button it triggers the method createStory, listed below. This works fine, with one exception: in the output, only the string 'number' is replaced with the text from its text field. However, if I change the order in the method to replace 'place', 'number', 'verb', only the verb is changed, not the number. I'm sure this is some sort of easy fix, but I can't find it. Can some of you help me with my troubleshooting?

        - (IBAction)createStory:(id)sender {
            theOutput.text = [theTemplate.text stringByReplacingOccurrencesOfString:@"<place>" withString:thePlace.text];
            theOutput.text = [theTemplate.text stringByReplacingOccurrencesOfString:@"<verb>" withString:theVerb.text];
            theOutput.text = [theTemplate.text stringByReplacingOccurrencesOfString:@"<number>" withString:theNumber.text];
        }

    Many thanks // Emil

    Read the article

  • Dynamically populated NSPopUpButtonCell menu in an NSOutlineView

    - by Mo
    I’m working with an NSOutlineView which has two columns. My dataSource supplies the outline view with a tree of items of a custom class which represents file types (that is, you initialise it with a UTI). The first column is the display name of the file type (e.g., “Source code”, “Interface Builder NIB document”, etc.). The second column is an NSPopUpButtonCell which is supposed to allow the user to pick a handler for the given document type (think of Xcode’s “File Types” preference pane, and you’re pretty much there). I can generate an NSMenu for a given item in the tree, populated with options based upon the Launch Services database entries for the UTI, complete with the relevant application icon and so on. In fact, the menu itself works wonderfully, populated by way of NSPopUpButtonCellWillPopUpNotification. The problem is, try as I might, the cell, when the menu isn’t popped up, always contains precisely one of two things: either an empty string or the default text for the cell; the former if the result of -handlerName on the item (the attribute assigned to the column) is non-nil, the latter otherwise. Moreover, I’m manually calling -selectItem: on the NSPopUpButtonCell instance, which just seems Wrong. In contrast, in the left-hand column, which is just an NSTextFieldCell, everything just works (although granted, all it’s got to do is read the value from the item and present it). (Disclaimer: I’m fairly new at Cocoa UI stuff; I know Objective-C, and lots of other programming languages, but I haven't a huge amount of experience of building Mac OS X UIs, so be gentle.)

    Read the article

  • database design suggestion needed

    - by JMSA
    I need to design a table for daily sales of pharmaceutical products. There are hundreds of types of products available {Name, Code}. Thousands of salespersons are employed to sell those products {Name, Code}. They collect products from different depots {Name, Code}. They work in different Areas - Zones - Markets - Outlets, etc. {all have names and codes}. Each product has various types of prices {Production Price, Trade Price, Business Price, Discount Price, etc.}, and salespersons are free to choose from those combinations to estimate the sales price. The problem is, daily sales requires a huge amount of data entry. Within a couple of years there may be gigabytes of data (if not terabytes). If I need to show daily, weekly, monthly, quarterly and yearly sales reports, there will be various types of SQL queries I shall need. This is my initial design:

        Product {ID, Code, Name, IsActive}
        ProductXYZPriceHistory {ID, ProductID, Date, EffectDate, Price, IsCurrent}
        SalesPerson {ID, Code, Name, JoinDate, and so on..., IsActive}
        SalesPersonSalesAreaHistory {ID, SalesPersonID, SalesAreaID, IsCurrent}
        Depot {ID, Code, Name, IsActive}
        Outlet {ID, Code, Name, AreaID, IsActive}
        AreaHierarchy {ID, Code, Name, ParentID, AreaLevel, IsActive}
        DailySales {ID, ProductID, SalesPersonID, OutletID, Date, PriceID, SalesPrice, Discount, etc...}

    Now, apart from indexing, how can I normalize my DailySales table to have a fine-grained design that I shall not need to change for years to come? Please show me a sample design of only the DailySales data-entry table (from which all types of reports would be queried) on the basis of the above information. I don't need detailed design advice; I just need advice regarding the DailySales table. Is there any way to break this particular table up to achieve granularity?

    Read the article
