Search Results

Search found 11409 results on 457 pages for 'large teams'.


  • PostgreSQL, Foreign Keys, Insert speed & Django

    - by Miles
    A few days ago, I ran into an unexpected performance problem with a pretty standard Django setup. For an upcoming feature, we have to regenerate a table hourly, containing about 100k rows of data, 9MB on disk, 10MB of indexes according to pgAdmin. The problem is that inserting the rows, by whatever method, literally takes ages, up to 3 minutes of 100% disk-busy time. That's not something you want on a production site. It doesn't matter whether the inserts are in a transaction or issued via plain INSERT, multi-row INSERT, COPY FROM, or even INSERT INTO t1 SELECT * FROM t2. After noticing this wasn't Django's fault, I followed a trial-and-error route, and hey, the problem disappeared after dropping all foreign keys! Instead of 3 minutes, the INSERT INTO ... SELECT FROM took less than a second to execute, which isn't too surprising for a table <= 20MB on disk. What is weird is that PostgreSQL manages to slow down inserts by 180x just by using 3 foreign keys.

    Oh, and disk activity was pure writing, as everything is cached in RAM; only writes go to the disks. It looks like PostgreSQL is working very hard to touch every row in the referenced tables, since 3MB/sec * 180s is way more data than the 20MB this new table takes on disk. There was no WAL overhead in the 180-second case; I was testing in psql directly (in Django, add ~50% overhead for WAL logging). I tried @commit_on_success, same slowness; I had even implemented multi-row insert and COPY FROM with psycopg2. That's another weird thing: how can 10MB worth of inserts generate 10 x 16MB log segments?

    Table layout: id serial primary key, a bunch of int32 columns, and 3 foreign keys to:

    - a small table: 198 rows, 16kB on disk
    - a large table: 1.2M rows, 59MB data + 89MB index on disk
    - a large table: 2.2M rows, 198MB data + 210MB index on disk

    So, am I doomed to either drop the foreign keys manually or use the table in a very un-Django way by saving bla_id x3 and skipping models.ForeignKey? I'd love to hear about some magical antidote / pg setting to fix this.
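    A workaround often suggested for an hourly full rebuild like this is to drop the foreign keys before the bulk load and re-create them afterwards: adding a constraint back validates it in one pass over the table, instead of firing a per-row lookup against the referenced tables during the load. A rough psycopg2 sketch of that idea; the table and constraint names are invented here for illustration, not taken from the post:

        import psycopg2

        conn = psycopg2.connect("dbname=mydb user=me")  # hypothetical connection string
        cur = conn.cursor()

        # Hypothetical constraint definitions; the real names live in pg_constraint.
        fks = [
            ("stats_small_id_fkey", "FOREIGN KEY (small_id) REFERENCES small_table (id)"),
            ("stats_big1_id_fkey", "FOREIGN KEY (big1_id) REFERENCES big_table_1 (id)"),
            ("stats_big2_id_fkey", "FOREIGN KEY (big2_id) REFERENCES big_table_2 (id)"),
        ]

        for name, definition in fks:
            cur.execute("ALTER TABLE hourly_stats DROP CONSTRAINT " + name)

        cur.execute("TRUNCATE hourly_stats")
        with open("hourly_stats.csv") as f:
            cur.copy_expert("COPY hourly_stats FROM STDIN WITH CSV", f)

        # Re-adding each foreign key validates it in a single sequential scan.
        for name, definition in fks:
            cur.execute("ALTER TABLE hourly_stats ADD CONSTRAINT %s %s" % (name, definition))

        conn.commit()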


  • Thoughts on GoGrid vs EC2

    - by Jason
    I am currently hosting my SaaS application at GoGrid (Microsoft stack). Here's what I have:

    - Database server: physical box, 12 GB RAM, 2 x quad-core CPU (2.13 GHz Xeon E5506)
    - 2 web/app servers: cloud servers, 2 GB RAM, 2 VCPUs each
    - 300 GB monthly bandwidth

    I am paying around $900/month for this. My web/app servers are bursting at the seams and need to be upgraded to 4 GB of RAM. I also need a firewall, and GoGrid just added this service for an additional $200. After the upgrade, I will be paying around $1,400. I started looking at Amazon EC2, specifically this config:

    - Database server: "High-Memory Double Extra Large Instance", 34 GB RAM, 13 EC2 compute units
    - 2 web/app servers: "Large Instance", 7.5 GB RAM, 4 EC2 compute units each

    If I go with 1-year reserved instances, my upfront cost would be $4,500 and my monthly cost would be $700, which comes to $1,075/month when amortized. Amazon also includes a firewall for free. Here are my questions:

    - Do any of you have experience running a database (especially SQL Server) on an EC2 instance? How did it perform compared to a dedicated machine? One of my major concerns is disk I/O.
    - Amazon's description of a compute unit is fairly vague. Any ideas on how the CPU performance on the database servers would compare?

    I am hoping that the Amazon solution will provide significantly better performance than my current or even an improved GoGrid setup. Having a virtual database server would also be nice in terms of availability; right now I would be in serious trouble if I had any hardware issues. Thanks for any insight...


  • WYSIWYG in Doxygen

    - by Adam Shiemke
    I'm working on a fairly large project written in C. The idea was to build a library of modular blocks that can be reused across several platforms. Each module is associated with a Word document in .docx format (a huge pain to diff and merge). In these docs, an interface section is specified, listing datatypes and publicly accessible functions. These were often inconsistent with the actual implementation in code, and wading through all this documentation was a pain. I've been working to switch to Doxygen to simplify document management. I haven't found a good way to embed the previously written documentation into the Doxygen output. I've copy-pasted it into sections and used modules to group the sources together, but the document sections look ugly in the comments (the output is pretty), and since Doxygen takes a while to parse through our code (about 30 minutes), validating formatting is a pain. Is there some way to WYSIWYG large blocks of documentation into Doxygen? I feel this would improve the number of people documenting their code and the quality of that documentation. I considered linking to HTML, but that splits out the documentation. I also considered putting it inline in HTML, but this also seems like a pain and would mean everyone needs a WYSIWYG HTML editor (or some HTML skills). Any ideas on how to make things easier and prettier? Thanks loads.


  • Ajax-heavy JS apps using excessive amounts of memory over time

    - by Shane Reustle
    I seem to have some pretty large memory leaks in an app that I am working on. The app itself is not very complex. Every 15 seconds, the page requests approx 40kB of JSON from the server and draws a table on the page using it. It is cheaper to redraw the table from scratch because the data is almost always new. I attach a few events to the table, approx 5 per line, 30 lines in the table. I use jQuery's .html() method to put the new HTML into the container and overwrite the existing content. I do this specifically so that jQuery's special cleanup functions go in and attempt to detach all events on the elements being overwritten. I then also delete the large HTML variables once they are sent to the DOM, using delete my_var. I have checked for circular references and attached events that are never cleared a few times, but never REALLY dug into it. I was wondering if someone could give me a few pointers on how to optimize a very heavy app like this. I just picked up "High Performance JavaScript" by Nicholas Zakas, but haven't had much time to get into it yet. To give an idea of how much memory this is using: after ~4 hours, it is using about 420,000kB in Chrome, and much more in Firefox or IE. Thanks!


  • Put together tiles in the Android SDK and use as background

    - by Jon
    In a feeble attempt to learn some Android development, I am stuck at graphics. My aim here is pretty simple:

    - Take n small images and build a random image out of them, larger than the screen, with the possibility to scroll around
    - Have an animated object move around on it

    I have looked at the SDK examples, Lunar Lander especially, but there are a few things I utterly fail to wrap my head around. I've got a bird's-eye plan (which in my head seems reasonably sane). How do I merge the tiles into one large image? The background is static, so I figure I should do it like this:

    - Make a 2D array with refs to the tiles
    - Make a large Drawable and draw the tiles on it
    - At init, draw this big image as the background
    - At each onDraw, redraw the background at the previous spot of the moving object, and the moving object at its new location

    The problem is the hands-on part. I load the small images with Bitmap img1 = BitmapFactory.decodeResource(res, R.drawable.img1), but then what? Should I make a canvas and draw the images on it with canvas.drawBitmap(img1, x, y, null)? If so, how do I get a Drawable/Bitmap out of that? I'm totally lost here, and would really appreciate some hands-on help (I would of course be grateful for general hints as well, but I'm primarily trying to understand the Graphics objects). To show you, dear reader, my level of confusion, here is my last desperate try:

        Drawable drawable;
        Canvas canvas = new Canvas();
        Bitmap img1 = BitmapFactory.decodeResource(res, R.drawable.img1); // 50 x 100 px image
        Bitmap img2 = BitmapFactory.decodeResource(res, R.drawable.img2); // 50 x 100 px image
        canvas.drawBitmap(img1, 0, 0, null);
        canvas.drawBitmap(img2, 50, 0, null);
        drawable.draw(canvas); // obviously wrong, as drawable == null
        this.setBackground(drawable);

    Thanks in advance


  • iPhone: Leak with UIWebView loading Office documents. Any ideas how to avoid it?

    - by Thomas Tempelmann
    While there are already quite a few posts about leaks around UIWebView, mine is a bit more special, I believe, and thus deserves its own post here. I see a reproducible large leak every time I load an Office document such as a Word or Excel file. For instance, every time I display a 180kB .doc file, I get a 100kB leak. And that happens on both the simulator and an actual device, running OS 3.1.3. The leak is not visible with the Leaks instrument, only by looking at the malloc instances via the ObjectAlloc instrument. (The original post included a screenshot of the Instruments trace.) I've also made a demo project, UIWebView-Leak.zip, so you can verify this yourself. To see the leak, use the ObjectAlloc instrument, switch to the view where you see individual allocation objects, and sort by size so that you see the large ones grouped together. Then view an Office document a few times and find the Malloc objects that keep staying "Live" even after the actual UIWebView has been freed. Is this a known bug? Or is there any way I can avoid these leaks? I.e., have you successfully shown Office documents on an iPhone without getting such leaks? Note: I've now reported this as a bug to Apple, too (ID 7950594). I am still waiting for someone (including Apple) to confirm this as a true leak, or to show why it isn't (i.e. that I am doing something wrong or making wrong assumptions).


  • UIImage - should it be loaded with [UIImage imageNamed:@""] or not?

    - by sagar
    Hello everyone. I have a number of images within my application (more than 50, approximately, and the count can grow according to the client's needs). Each image is very large: around 1024 x 768 at 150 dpi. Now, I have to add all of these images to a scroll view and display them. OK, my question is as follows. As I see it, there are two options for loading large images: imageNamed:, or loading asynchronously once viewDidLoad has been called. Which is preferable? Like this:

        imgModel.image = [UIImage imageNamed:[dMain valueForKey:@"imgVal"]];

    or like this:

        NSURL *ur = [[NSURL alloc] initFileURLWithPath:[[NSBundle mainBundle] pathForResource:lblModelName.text ofType:@"png"] isDirectory:NO];
        NSURLRequest *req = [NSURLRequest requestWithURL:ur cachePolicy:NSURLCacheStorageNotAllowed timeoutInterval:40];
        [ur release];
        NSURLConnection *con = [[NSURLConnection alloc] initWithRequest:req delegate:self];
        if (con) {
            myWebData = [[NSMutableData data] retain];
        }

        - (void)connection:(NSURLConnection *)connection didReceiveResponse:(NSURLResponse *)response {
            [myWebData setLength:0];
        }

        - (void)connection:(NSURLConnection *)connection didReceiveData:(NSData *)data {
            [myWebData appendData:data];
        }

        - (void)connection:(NSURLConnection *)connection didFailWithError:(NSError *)error {
            [connection release];
        }

        - (void)connectionDidFinishLoading:(NSURLConnection *)connection {
            NSLog(@"ImageView Ref From - %@", imgV); // my image view; set its image
            imgV.image = [UIImage imageWithData:myWebData];
            [connection release];
            connection = nil;
        }


  • Reducing load time, or making the user think the load time is less

    - by Malfist
    I've been working on a website, and we've managed to reduce the total content for a page load from 13.7MiB to 2.4MiB, but the page still takes forever to load. It's a Joomla site (ick), and it has a lot of redundant DOM elements (2000+ for the home page) and makes 60+ HTTP requests per page load, counting all the CSS, JS, and image requests. Unlike Drupal, Joomla won't merge them all on the fly, and they have to be kept separate or else the Joomla components will go nuts. What can I do to improve load time? Things I've done:

    - Added background colors to DOM elements that have large images as their background, so the color loads first, then the image
    - Reduced excessively large images to much smaller file sizes
    - Reduced DOM elements to ~2000, from ~5000
    - Loading CSS at the start of the page and JavaScript at the end (not totally possible: Joomla injects its own JavaScript and CSS, and it always does so in the header)
    - Minified most JavaScript
    - Set up caching and gzipping on the server

    The uncached size is 2.4MB and the cached size is ~300kB, but even with so many DOM elements, the page takes a good bit of time to render. What more can I do to improve the load time?


  • jQuery POST and GET requests differ on local intranet and live server

    - by nccsbim071
    Hi, I have been developing an ASP.NET MVC application where I need to make a large number of jQuery POST and GET requests to call controller methods and get back JSON results. Everything is working fine. The problem is that I had to write different jQuery request URLs on the local intranet (deployed by making a virtual directory) and on the live server. The current jQuery request URL looks like this:

        $.post("/ProjectsChat/GetMessages", { roomId: 24 }, ..........

    Now this format of URL works fine on the live server but not on the local intranet, since on the local intranet I have made a virtual directory. It only works when I prepend the name of the virtual directory, like this:

        $.post("MyProjectVirtualDirName/ProjectsChat...................

    I am sure most of you must have come across the same problem. Now I have finished the full project and a large number of jQuery requests are made. I want to test the application by deploying it on the local intranet and fix the bugs. Changing all the jQuery requests for the local intranet doesn't seem a feasible solution to me, so I am really in a big spot; I can't deploy the same project on the live server just like that and test it there, the client would kill me. I need some expert advice. Please help. Thanks


  • Creating nested arrays on the fly

    - by adardesign
    What I am trying to do is loop over this HTML and get a nested array of the values I want to grab. It might look complex at first, but it is a simple question... This script is just part of an object containing methods.

    The HTML:

        <div class="configureData">
            <div title="Large">
                <a href="yellow" title="true" rel="$55.00" name="sku22828"></a>
                <a href="green" title="true" rel="$55.00" name="sku224438"></a>
                <a href="Blue" title="true" rel="$55.00" name="sku22222"></a>
            </div>
            <div title="Medium">
                <a href="yellow" title="true" rel="$55.00" name="sku22828"></a>
                <a href="green" title="true" rel="$55.00" name="sku224438"></a>
                <a href="Blue" title="true" rel="$55.00" name="sku22222"></a>
            </div>
            <div title="Small">
                <a href="yellow" title="true" rel="$55.00" name="sku22828"></a>
                <a href="green" title="true" rel="$55.00" name="sku224438"></a>
                <a href="Blue" title="true" rel="$55.00" name="sku22222"></a>
            </div>
        </div>

    The JavaScript (part of a larger script):

        parseData: function(dH) {
            dH.find(".configureData div").each(function(indA, eleA) {
                colorNSize.tempSizeArray[indA] = [eleA.title, [], [], [], []];
                $(eleZ).find("a").each(function(indB, eleB) {
                    colorNSize.tempSizeArray[indA][indB + 1] = eleC.title;
                });
            });
        },

    I expect the end array to look like this:

        [
            ["large",
                ["yellow", "green", "blue"],
                ["true", "true", "true"],
                ["$55", "$55", "$55"]
            ],
            ["Medium",
                ["yellow", "green", "blue"],
                ["true", "true", "true"],
                ["$55", "$55", "$55"]
            ]
        ]
        // and so on....


  • How to extract a block of XML from a log file on Linux

    - by dragonmantank
    I have a log file that looks like the following:

        2010-05-12 12:23:45 Some sort of log entry
        2010-05-12 01:45:12 Request XML: <RootTag>
        <Element>Value</Element>
        <Element>Another Value</Element>
        </RootTag>
        2010-05-12 01:45:32 Response XML: <ResponseRoot>
        <Element>Value</Element>
        </ResponseRoot>
        2010-05-12 01:45:49 Another log entry

    What I want to do is extract the Request and Response XML (and ultimately dump each into its own file). I had a similar parser that used egrep, but there the XML was all on one line, not multiple lines like above. The log files are also somewhat large, hitting 500-600MB per log. Smaller logs I would read in via a PHP script and use regex matching, but the amount of memory required for such a large file would more than likely kill the script. Is there an easy way using the built-in tools on a Linux box (CentOS in this case) to extract multiple lines, or am I going to have to bite the bullet and use Perl or PHP to read in the entire file to extract it?
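    For what it's worth, the extraction can be done in a single streaming pass that never holds more than one XML block in memory, which sidesteps the 500-600MB problem entirely. A sketch in Python; the marker regexes and file names are guesses based on the sample lines above:

        import re

        entry = re.compile(r"^\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2} ")
        marker = re.compile(r"^\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2} (Request|Response) XML: ?(.*)$")

        count = 0
        block = None  # lines of the XML block currently being collected

        def flush(lines):
            global count
            count += 1
            with open("extracted_%d.xml" % count, "w") as out:
                out.writelines(lines)

        with open("app.log") as log:  # hypothetical log file name
            for line in log:
                m = marker.match(line)
                if m:
                    if block is not None:  # a Response can directly follow a Request
                        flush(block)
                    block = [m.group(2) + "\n"]  # the XML may start on the marker line itself
                elif entry.match(line):
                    if block is not None:  # an ordinary timestamped entry ends the XML block
                        flush(block)
                        block = None
                elif block is not None:
                    block.append(line)

        if block is not None:
            flush(block)  # the file ended inside an XML block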


  • How to detect changing directory size in Perl

    - by materiamage
    Hello, I am trying to find a way of monitoring directories in Perl, in particular the size of a directory, and upon detecting a change in directory size, performing a particular action. The issue I have is with large files that require a noticeable amount of time to copy into this directory, e.g. 100MB. What happens (in Windows, not Unix) is that the system reserves enough disk space for the entire file, even though the copy is still in progress. This causes problems for me, because my script will try to perform an action on a file that has not finished copying. I can easily detect directory size changes in Unix via 'du', but 'du' in Windows does not behave the same way. Are there any accurate methods of detecting directory size changes in Perl?

    Edit: Some points to clarify:

    - My Perl script is only monitoring a particular directory, and upon detecting a new file or a new directory, performs an action on it. It is not copying any files; users on the network will be copying files into the directory I am monitoring.
    - The problem occurs when a new file or directory appears (copied, not moved) that is significantly large (> 100MB, but usually a couple of GB) and my program fires before this copy completes.
    - In Unix I can easily 'du' to see that the file/directory in question is growing in size, and take the appropriate action.
    - In Windows the size is static, so I cannot detect this change.
    - opendir/readdir/closedir is not feasible, as some of the directories that appear may contain thousands of files, and I want to avoid that overhead.

    Ideally I would like my program to be triggered on change, but I am not sure how to do this. As of right now it busy-waits until it detects a change. The change in file/directory size is not in my control.
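    One workaround I've seen suggested for the Windows side, offered here as a sketch rather than a guarantee: while a copy is in progress, the destination file is normally still held open by the copying process, so an attempt to open it for write access fails until the copy finishes. Polling that open tells you when the file is complete, independent of the reported size. Shown in Python for brevity; the same test translates directly to Perl:

        import os
        import time

        def wait_until_copy_done(path, interval=2.0):
            """Poll until `path` can be opened for writing, which normally only
            succeeds once the process copying it has closed its handle."""
            while True:
                try:
                    fd = os.open(path, os.O_RDWR)
                    os.close(fd)
                    return
                except OSError:
                    time.sleep(interval)

        # Usage sketch: after noticing a new file in the monitored directory,
        # call wait_until_copy_done(new_file) before acting on it.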


  • Using a database and deploying the application

    - by evan
    I have a WPF application that stores a large amount of information in XML files, and as the user uses the application they add more information to those files. It's basically using the XML files as a database. Since over the life of the program the XML files have gotten quite large, and I've been thinking about putting the data on a website, I've been looking into how to move all the information into an SQL database. I've used SQL databases with web applications (PHP, Ruby, and ASP.NET) but never with a desktop application. Ideally I'd like to be able to keep all the information in one database file and distribute it along with the application, without requiring the user to connect to a remote database (so they don't need an internet connection; though eventually it would be nice if it could compare the local file's version with one online somewhere and update if necessary) and without making them install a local database server on their computer. Is this possible? I'd also like to use LINQ with any new database solution so switching to a database doesn't force too many changes (I read the XML with LINQ). I'm sure this question has been asked before and that there are already some good tutorials on the subject, but I just can't find them.


  • ASP.NET page runs slowly in production

    - by Brandi
    I have created an ASP.NET page that works flawlessly and quickly from Visual Studio. It does a very large database read from a database on our network to load a GridView inside of an UpdatePanel, and displays progress in an Ajax ModalPopupExtender. Of course I don't expect it to be instant, what with the large DB reads, but it takes on the order of seconds, not minutes. This was all working great until I put it up on the server: it is very, VERY slow when I access it via the internet, taking several minutes to load the database information into the GridView. I'm baffled why it would not perform exactly the same as it does from Visual Studio. (It is in release mode and I have turned off the debug flag.) I have since been trying things like eliminating unneeded UpdatePanels and throwing out the Ajax toolkit. Nothing has made it any faster in production. It is not the database as far as I know, since it has been consistently fast from my computer (from Visual Studio) and consistently slow from the server. I am wondering: where do I look next? Has anyone else had this problem before? Could this be caused by UpdatePanels or ModalPopupExtenders in different parts of the application? Why would the live behaviour differ so much from the localhost behaviour? Both the server with the ASP.NET page and the server with the database are on our network. I'm using Visual Studio 2008. Thank you in advance for any insight or advice.


  • Is Stream.Write thread-safe?

    - by Mike Spross
    I'm working on a client/server library for a legacy RPC implementation and was running into issues where the client would sometimes hang when waiting to receive a response message to an RPC request message. It turns out the real problem was in my message framing code (I wasn't handling message boundaries correctly when reading data off the underlying NetworkStream), but it also made me suspicious of the code I was using to send data across the network, specifically in the case where the RPC server sends a large amount of data to a client as the result of a client RPC request. My send code uses a BinaryWriter to write a complete "message" to the underlying NetworkStream. The RPC protocol also implements a heartbeat algorithm, where the RPC server sends out PING messages every 15 seconds. The pings are sent out by a separate thread, so, at least in theory, a ping can be sent while the server is in the middle of streaming a large response back to a client. Suppose I have a Send method as follows, where stream is a NetworkStream:

        public void Send(Message message)
        {
            // Write the message to a temporary stream so we can send it all at once.
            MemoryStream tempStream = new MemoryStream();
            message.WriteToStream(tempStream);

            // Write the serialized message to the stream.
            // The BinaryWriter is a little redundant in this
            // simplified example, but it's here because
            // the production code uses it.
            byte[] data = tempStream.ToArray();
            BinaryWriter bw = new BinaryWriter(stream);
            bw.Write(data, 0, data.Length);
            bw.Flush();
        }

    So the question I have is: is the call to bw.Write (and by implication the call to the underlying Stream's Write method) atomic? That is, if a lengthy Write is still in progress on the sending thread, and the heartbeat thread kicks in and sends a PING message, will that thread block until the original Write call finishes, or do I have to add explicit synchronization to the Send method to prevent the two Send calls from clobbering the stream?


  • What arguments to use to explain why a SQL DB is far better than a flat file

    - by jamone
    The higher-ups in my company were told by good friends that flat files are the way to go, and we should switch from MS SQL Server to them for everything we do. We have over 300 servers and hundreds of different databases. From just the few I'm involved with, we have 10 billion records in quite a few of them, with upwards of 100k new records a day and who knows how many updates... Me and a couple of others need to come up with a response saying why we shouldn't do this. Most of our stuff is ASP.NET with some legacy ASP. We thought of making a simple console app that tests/times the same interactions between a flat file (stored on the network) and SQL over the network, doing large inserts, searches, updates, etc., along with things like random network disconnects. This would show them how bad flat files can be, especially when you are dealing with millions of records. What things should I use in my response? What should I do with my demo code to illustrate this? My short list so far:

    - Security
    - Concurrent access
    - Performance with large amounts of data
    - Amount of time to do such a massive rewrite/switch
    - Lack of transactions
    - PITA to map relational data to flat files

    I fear that this will be a great post on The Daily WTF someday if I can't stop it now.
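    For the demo itself, the shape of the measurement matters more than the stack: the real console app would presumably be C# against SQL Server, but a minimal stand-in sketch in Python with the bundled sqlite3 module (row count and file names are arbitrary choices here) already contrasts an O(N) flat-file scan against an indexed B-tree lookup:

        import csv
        import sqlite3
        import time

        N = 1000000  # arbitrary row count; scale up to taste

        # Build the same data set both ways: a CSV flat file and an indexed SQLite table.
        with open("records.csv", "w", newline="") as f:
            csv.writer(f).writerows((i, "name%d" % i) for i in range(N))

        db = sqlite3.connect(":memory:")
        db.execute("CREATE TABLE r (id INTEGER PRIMARY KEY, name TEXT)")
        db.executemany("INSERT INTO r VALUES (?, ?)", ((i, "name%d" % i) for i in range(N)))
        db.commit()

        def flat_lookup(target):
            # A flat file has no index: every search is a full scan.
            with open("records.csv", newline="") as f:
                for row in csv.reader(f):
                    if int(row[0]) == target:
                        return row

        t0 = time.perf_counter()
        flat_lookup(N - 1)
        t1 = time.perf_counter()
        db.execute("SELECT * FROM r WHERE id = ?", (N - 1,)).fetchone()
        t2 = time.perf_counter()
        print("flat file scan: %.3fs   indexed lookup: %.6fs" % (t1 - t0, t2 - t1))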


  • SQLite Databases and Grid Hosting

    - by jocull
    I'm considering moving my site from a GoDaddy shared hosting account to a Media Temple grid hosting account in anticipation of traffic. However, I first have some concerns with the grid hosting setup. My site stores a large personal set of data on a per-user basis (possibly 3-4MB per user). At this rate I was worried about blowing past a 1GB MySQL limit in no time. To deal with this I created distributed SQLite databases, one per user, to store the large data objects. It has worked wonderfully so far; SQLite is super fast and simple. I know that reading from and writing to files is different in a grid hosting environment. I need to know if this setup is going to cause serious problems. These databases are not (and will not be) highly trafficked. They are personal to the user and will only be touched from maybe two locations at the same time (one updating the data hourly at most, and one or more reading on demand). I'd like to keep this setup, as getting additional space (beyond 4GB) on a MySQL database seems to be a real trouble point. Will grid hosting cause me serious problems? Thanks.


  • How do you keep text from wrapping in an NSTableView using NSAttributedString

    - by Justin
    I have an NSTableView that has 2 columns, one for an icon and the other for two lines of text. In the second column (the text column) I have some larger text that is the name of an item, then a newline and some smaller text that describes the state of the item. When the name becomes so large that it doesn't fit on one line, it wraps (the same happens when you shrink the window down so small that the names no longer fit on a single line):

        row1===============
        | image | some name |
        | image | idle      |
        row2================
        | image | some name really long name |  <- this wraps, pushing 'idle' out of the view
        | image | idle      |
        ===================

    My question is: how can I keep the text from wrapping and just have the NSTableView display a horizontal scroll bar once the name is too large to fit?


  • How do you get the viewport scale after pinch/zoom on an iPhone web app?

    - by Loktar
    Does anyone know how to get the size in pixels or the scale value of the viewport after a user has pinched or double-tapped to zoom in/out on a page in JavaScript? I've tried using window.innerWidth, but I've had mixed results. Sometimes it seems to accurately give the number of pixels the viewport is showing; however, if I zoom way in on a page and then do a large pinch to zoom back out, window.innerWidth will be around 600-700 even though it is only showing ~200px of the page. The page is only 400px wide, and it didn't show the checkered "you've gone too far" background you see when you zoom out beyond the page size. If I do small pinches to zoom in and out, window.innerWidth appears to work just fine. Unfortunately I can't rely on a user only making small pinch gestures :) I've also tried to use the scale property on the gesture event object, but I've found that unreliable because you don't always know the initial scale when you reload the page or use the back/forward buttons to navigate to it, even when using the meta tag to specify it. Ultimately, I'm trying to make an app that is aware of when a user is trying to zoom out beyond the maximum zoom level, so if there is another way to do this I'm interested in hearing about it :) Here's the code I'm using to get the innerWidth:

        document.body.addEventListener('gestureend', function (evt) {
            console.log(window.innerWidth); // inaccurate when doing large pinch gestures
        }, false);

    Thanks!


  • What is an elegant way to solve this max and min problem in Ruby or Python?

    - by ????
    The following can be done step by step in a somewhat clumsy way, but I wonder if there is an elegant method to do it. There is a page, http://www.mariowiki.com/Mario_Kart_Wii, where there are 2 tables... The first looks like this:

        Mario        -  6  2  2  3  -  -
        Luigi        2  6  -  -  -  -  -
        Diddy Kong   -  -  3  -  3  -  5
        [...]

    The names ("Mario", etc.) are the Mario Kart Wii character names. The numbers are bonus points for: Speed, Weight, Acceleration, Handling, Drift, Off-Road, Mini-Turbo. And then there is table 2:

        Standard Bike S            39  21  51  51  54  43  48  Out
        Bullet Bike                53  24  32  35  67  29  67  In
        Bubble Bike / Jet Bubble   48  27  40  40  45  35  37  In
        [...]

    These are the same characteristics for each bike or kart. I wonder what the most elegant solution is for finding all the maximum combinations of Speed, Weight, Acceleration, etc., and also the minimums, either by directly using the HTML on that page or by copying and pasting the numbers into a text file. Actually, in the character table, Mario to Bowser Jr. are all medium characters, Baby Mario to Dry Bones are small characters, and the rest are all large characters, except that the small, medium, or large Mii is just as the name says. Small characters can only ride small bikes or small karts, and so forth for medium and large.
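    As a sketch of what this could look like in Python once the numbers are in a text structure, enumerate every size-compatible character/vehicle pairing, sum the stats, and take the max and min per attribute. The rows below are stubs for illustration (a '-' in the wiki tables becomes 0, and the sizes and Wild Wing numbers are made up), not the full tables:

        from itertools import product

        STATS = ["Speed", "Weight", "Acceleration", "Handling", "Drift", "Off-Road", "Mini-Turbo"]

        characters = [
            ("Mario",      "medium", [0, 6, 2, 2, 3, 0, 0]),
            ("Luigi",      "medium", [2, 6, 0, 0, 0, 0, 0]),
            ("Diddy Kong", "small",  [0, 0, 3, 0, 3, 0, 5]),
        ]
        vehicles = [
            ("Standard Bike S", "small",  [39, 21, 51, 51, 54, 43, 48]),
            ("Bullet Bike",     "small",  [53, 24, 32, 35, 67, 29, 67]),
            ("Wild Wing",       "medium", [57, 35, 43, 45, 43, 24, 32]),  # made-up numbers
        ]

        combos = [
            (cname, vname, [c + v for c, v in zip(cstats, vstats)])
            for (cname, csize, cstats), (vname, vsize, vstats) in product(characters, vehicles)
            if csize == vsize  # characters only ride vehicles of their own size class
        ]

        for i, stat in enumerate(STATS):
            best = max(combos, key=lambda combo: combo[2][i])
            worst = min(combos, key=lambda combo: combo[2][i])
            print("%s: max = %s on %s (%d), min = %s on %s (%d)"
                  % (stat, best[0], best[1], best[2][i], worst[0], worst[1], worst[2][i]))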


  • How can you pre-define a variable that will contain an anonymous type?

    - by HitLikeAHammer
    In the simplified example below I want to define result before it is assigned. The LINQ queries below return lists of anonymous types. result will come out of the LINQ queries as an IEnumerable<'a>, but I can't define it that way at the top of the method. Is what I am trying to do possible (in .NET 4)?

        bool large = true;
        var result = new IEnumerable(); // This is wrong

        List<int> numbers = new int[] { 1, 2, 3, 4, 5, 6, 7, 8, 9, 10 }.ToList<int>();

        if (large)
        {
            result = from n in numbers where n > 5 select new { value = n };
        }
        else
        {
            result = from n in numbers where n < 5 select new { value = n };
        }

        foreach (var num in result)
        {
            Console.WriteLine(num.value);
        }

    EDIT: To be clear, I know that I do not need anonymous types in the example above. It is just to illustrate my question with a small, simple example.


  • What arguments to use to explain why SQL Server is far better than a flat file

    - by jamone
    The higher-ups in my company were told by good friends that flat files are the way to go, and we should switch from SQL Server to them for everything we do. We have over 300 servers and hundreds of different databases. From just the few I'm involved with, we have 10 billion records in quite a few of them, with upwards of 100k new records a day and who knows how many updates... Me and a couple of others need to come up with a response saying why we shouldn't do this. Most of our stuff is ASP.NET with some legacy ASP. We thought of making a simple console app that tests/times the same interactions between a flat file (stored on the network) and SQL over the network, doing large inserts, searches, updates, etc., along with things like random network disconnects. This would show them how bad flat files can be, especially when you are dealing with millions of records. What things should I use in my response? What should I do with my demo code to illustrate this? My short list so far:

    - Security
    - Concurrent access
    - Performance with large amounts of data
    - Amount of time to do such a massive rewrite/switch
    - Lack of transactions
    - PITA to map relational data to flat files
    - NTFS doesn't handle huge numbers of files in a directory well

    I fear that this will be a great post on The Daily WTF someday if I can't stop it now.


  • How should I organize complex SQL views in Rails?

    - by Benjamin Oakes
    I manage a research database with Ruby on Rails. The data that is entered is primarily used by scientists who prefer to have all the relevant information for a study in one single massive table for use in their statistics software of choice. I'm currently presenting it as CSV, as it's very straightforward to do and compatible with the tools people want to use. I've written many views (the SQL kind, not the Rails HTML/ERB kind) to make the output they expect a reality. Some of these views are quite large and have a fair amount of complexity behind them. I wrote them in SQL because there are many calculations and comparisons that are more easily done with SQL. They're currently loaded into the database straight from a file named views.sql. To get the requested data, I do a select * from my_view;. The views.sql file is getting quite large. Part of the problem is that we're still figuring out what the data we collect means, so there's a lot of changes being made to the views all the time, and a ton of them are being created. Many of them need to be repeatable. I've recently run into issues organizing and testing these views. Rails works great for user interface stuff and business logic, but I'm not aware of much existing structure for handling the reporting we require. Some options I've thought of:

    - Should I move them into the most relevant models somehow? Several of the views interact with each other, which makes this situation more complex than just doing a single find_by_sql, so I don't know if they should only be part of the model.
    - Perhaps they should be treated as a "view" in the MVC sense? (That is, they could be moved into app/views/ and live alongside the HTML, perhaps as files named something like my_view.csv.sql which return CSV.)

    How would you deal with a complex reporting problem like this?


  • What database systems should a startup company consider?

    - by Am
    Right now I'm developing the prototype of a web application that aggregates a large number of text entries from a large number of users. This data must be frequently displayed back and often updated. At the moment I store the content inside a MySQL database and use the NHibernate ORM layer to interact with the DB. I've got tables defined for users, roles, submissions, tags, notifications, and so on. I like this solution because it works well and my code looks nice and sane, but I'm also worried about how MySQL will perform once the size of our database reaches a significant number. I feel that it may struggle performing join operations fast enough. This has made me think about non-relational database systems such as MongoDB, CouchDB, Cassandra, or Hadoop. Unfortunately I have no experience with any of them. I've read some good reviews on MongoDB and it looks interesting. I'm happy to spend the time and learn if one turns out to be the way to go. I'd much appreciate anyone offering points or issues to consider when going with a non-relational DBMS.
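    To make the join concern concrete: in a document store like MongoDB, the rows that a relational schema would join at read time (a submission, its tags, its notification state) can be nested into a single document, so the frequent display path becomes one indexed lookup. A hypothetical pymongo sketch, with the database, collection, and field names invented here for illustration:

        from pymongo import MongoClient

        client = MongoClient()  # assumes a local mongod on the default port
        submissions = client.appdb.submissions  # hypothetical database/collection names

        # Tags and notification state live inside the submission document itself,
        # instead of in separate tables that would need a join.
        submissions.insert_one({
            "user": "alice",
            "text": "a large text entry...",
            "tags": ["draft", "public"],
            "notified": False,
        })
        submissions.create_index("tags")

        # One indexed query replaces a submissions/tags join.
        for doc in submissions.find({"tags": "draft"}):
            print(doc["user"], doc["text"][:40])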


  • Ad distribution problem: an optimal solution?

    - by Mokuchan
    I'm asked to find a 2-approximate solution to this problem: You're consulting for an e-commerce site that receives a large number of visitors each day. For each visitor i, where i ∈ {1, 2, ..., n}, the site has assigned a value v[i], representing the expected revenue that can be obtained from this customer. Each visitor i is shown one of m possible ads A1, A2, ..., Am as they enter the site. The site wants a selection of one ad for each customer so that each ad is seen, overall, by a set of customers of reasonably large total weight. Thus, given a selection of one ad for each customer, we define the spread of this selection to be the minimum, over j = 1, 2, ..., m, of the total weight of all customers who were shown ad Aj.

    Example: Suppose there are six customers with values 3, 4, 12, 2, 4, 6, and there are m = 3 ads. Then, in this instance, one could achieve a spread of 9 by showing ad A1 to customers 1, 2, 4, ad A2 to customer 3, and ad A3 to customers 5 and 6.

    The ultimate goal is to find a selection of an ad for each customer that maximizes the spread. Unfortunately, this optimization problem is NP-hard (you don't have to prove this). So instead, give a polynomial-time algorithm that approximates the maximum spread within a factor of 2.

    The solution I found is the following:

    - Order the visitor values in descending order
    - Assign the next visitor value (i.e. assign the visitor) to the ad with the current lowest total value
    - Repeat

    This solution actually seems to always find the optimal solution, or I simply can't find a counterexample. Can you find one? Is this a non-polynomial solution and I just can't see it?
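    A sketch of that greedy rule in Python, using a heap so that each assignment costs O(log m); it reproduces the spread of 9 on the example from the problem statement, and makes it easy to test candidate counterexamples:

        import heapq

        def greedy_spread(values, m):
            """Give each visitor, largest value first, to the ad with the smallest
            running total; return the resulting spread (the minimum ad total)."""
            totals = [0] * m  # one running total per ad; all zeros is already a valid min-heap
            for v in sorted(values, reverse=True):
                heapq.heappush(totals, heapq.heappop(totals) + v)
            return min(totals)

        # The example from the problem statement: a spread of 9 is achievable.
        print(greedy_spread([3, 4, 12, 2, 4, 6], 3))  # prints 9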

