Search Results

Search found 742 results on 30 pages for 'giant squid'.

Page 28/30 | < Previous Page | 24 25 26 27 28 29 30  | Next Page >

  • Datatable add new column and values speed issue

    - by Cine
    I am having a speed issue with my DataTables. In this particular case the DataTable is just a holder of data; it is never used in a GUI or any other scenario that exercises any of its fancy features. In my speed trace, this particular constructor showed up as a heavy user of time when my database is ~40k rows, and the main consumer was the DataTable's set_Item:

        protected myclass(DataTable dataTable, DataColumn idColumn)
        {
            this.dataTable = dataTable;
            IdColumn = idColumn ?? this.dataTable.Columns.Add(
                string.Format("SYS_{0}_SYS", Guid.NewGuid()), Type.GetType("System.Int32"));
            JobIdColumn = this.dataTable.Columns.Add(
                string.Format("SYS_{0}_SYS", Guid.NewGuid()), Type.GetType("System.Int32"));
            IsNewColumn = this.dataTable.Columns.Add(
                string.Format("SYS_{0}_SYS", Guid.NewGuid()), Type.GetType("System.Int32"));

            int id = 1;
            foreach (DataRow r in this.dataTable.Rows)
            {
                r[JobIdColumn] = id++;
                r[IsNewColumn] = (r[IdColumn] == null || r[IdColumn].ToString() == string.Empty) ? 1 : 0;
            }
        }

    Digging deeper into the trace, it turns out that set_Item calls EndEdit, which brings my thoughts to the transaction support of the DataTable, which I have no use for in my scenario. So my solution was to open editing on all of the rows and never close them again:

        _dt.BeginLoadData();
        foreach (DataRow row in _dt.Rows)
            row.BeginEdit();

    Is there a better solution? This feels too much like a big giant hack that will eventually come back and bite me. You might suggest that I don't use a DataTable at all, but I have already considered and rejected that because of the effort it would take to reimplement with a custom class. The main reason it is a DataTable is that this is ancient code (.NET 1.1 era) that I don't want to spend much time changing, and the original table comes out of a third-party component.
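
    For illustration, a minimal sketch of a middle ground: scope the edit to each row rather than leaving every row open forever. BeginLoadData suspends constraint and index maintenance, and BeginEdit/EndEdit makes each row commit once instead of once per assigned cell. The method and parameter names are assumptions mirroring the question.

        using System;
        using System.Data;

        static void AssignJobIds(DataTable table, DataColumn idColumn,
                                 DataColumn jobIdColumn, DataColumn isNewColumn)
        {
            table.BeginLoadData();      // suspend constraints, indexes, change events
            int id = 1;
            foreach (DataRow r in table.Rows)
            {
                r.BeginEdit();          // buffer assignments in the row's proposed version
                r[jobIdColumn] = id++;
                r[isNewColumn] = string.IsNullOrEmpty(r[idColumn].ToString()) ? 1 : 0;
                r.EndEdit();            // commit the row once, not once per cell
            }
            table.EndLoadData();
        }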

    Read the article

  • Table valued function only returns CLR error

    - by Anthony
    I have read-only access to a database that was set up for a third-party, closed-source app. One group of (hopefully) useful table functions only returns the error:

        Failed to initialize the Common Language Runtime (CLR) v2.0.50727 with
        HRESULT 0x80131522. You need to restart SQL Server to use CLR integration
        features. (severity 16)

    But in theory the third-party app should be able to use the function (either directly or indirectly), so I'm convinced I'm not setting things up right. I'm very new to SQL Server, so I could be missing something obvious -- or something really slight; I have no idea. Here is an example of a query that returns the above error:

        SELECT *
        FROM dbo.UncompressDataDateRange(4, 'Apr 24 2010 12:00AM', 'Apr 30 2010 12:00AM')

    The function takes three parameters:

        - the data set (int): the data has 6 classifications, and the giant table
          this should be pulling from has a column to indicate which is which
        - startDate (smalldatetime)
        - endDate (smalldatetime)

    There are other, similar functions that expand on the same idea, all returning the same error. Quick note: I'm not sure if this is relevant, but I was able to connect to the database via SQL Studio (though without the privileges to script the functions as code), and I checked the dependencies of the above sample function. It turns out that it depends on a view that I have gotten to work, and that view in turn depends on the larger, much hairier data table. This makes me think I should somehow be pointing the function at the results of the view, but I'm not seeing any documentation that shows how that is done.
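
    For illustration: that HRESULT commonly appears when CLR integration is disabled (or failed to start) at the server level, which only someone with sysadmin rights can change. A hedged sketch of the standard first step, shown from C# only for consistency with the other excerpts here; the connection string is a placeholder, and if the runtime is genuinely wedged a service restart may still be needed, as the message says.

        using System.Data.SqlClient;

        static void EnableClrIntegration(string connectionString)
        {
            using (var conn = new SqlConnection(connectionString))
            using (var cmd = new SqlCommand(
                "EXEC sp_configure 'clr enabled', 1; RECONFIGURE;", conn))
            {
                conn.Open();
                cmd.ExecuteNonQuery(); // requires ALTER SETTINGS / sysadmin permission
            }
        }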

    Read the article

  • Textually diffing JSON

    - by Richard Levasseur
    As part of my release process, I have to compare some JSON configuration data used by my application. As a first attempt, I just pretty-printed the JSON and diffed the files (using kdiff3 or just diff). As the data has grown, however, kdiff3 confuses different parts of the output, making additions look like giant modifies, odd deletions, etc. It makes it really hard to figure out what is different. I've tried other diff tools, too (meld, kompare, diff, a few others), but they all have the same problem. Despite my best efforts, I can't seem to format the JSON in a way that the diff tools can understand. Example data:

        [
          {
            "name": "date",
            "type": "date",
            "nullable": true,
            "state": "enabled"
          },
          {
            "name": "owner",
            "type": "string",
            "nullable": false,
            "state": "enabled"
          }
          ...lots more...
        ]

    The above probably wouldn't cause the problem (it only occurs once there are hundreds of lines), but that's the gist of what is being compared. That's just a sample; the full objects have 4-5 attributes, and some attributes have 4-5 attributes within them. The attribute names are pretty uniform, but their values are pretty varied. In general, it seems like all the diff tools confuse one object's closing "}" with the next object's closing "}", and I can't seem to break them of this habit. I've tried adding whitespace, changing indentation, and adding "BEGIN" and "END" strings before and after the respective objects, but the tools still get confused.
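
    For illustration, a minimal C# sketch of one approach that tends to stabilize such diffs: normalize both files before comparing, sorting object keys recursively and pretty-printing, so that unchanged objects serialize identically. This assumes Json.NET (Newtonsoft.Json) is available.

        using System.IO;
        using System.Linq;
        using Newtonsoft.Json;
        using Newtonsoft.Json.Linq;

        static JToken Normalize(JToken token)
        {
            var obj = token as JObject;
            if (obj != null)
            {
                var sorted = new JObject();
                foreach (var prop in obj.Properties().OrderBy(p => p.Name))
                    sorted.Add(prop.Name, Normalize(prop.Value));
                return sorted;
            }
            var arr = token as JArray;
            if (arr != null)
                return new JArray(arr.Select(Normalize));
            return token; // primitives pass through unchanged
        }

        // Usage: write normalized copies, then diff those instead.
        // File.WriteAllText("a.norm.json",
        //     Normalize(JToken.Parse(File.ReadAllText("a.json")))
        //         .ToString(Formatting.Indented));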

    Read the article

  • .NET out of memory troubleshooting

    - by bushman
    After reading a few enlightening articles about memory in .NET ("Out of Memory does not refer to physical memory", 597499), I thought I understood why a C# app would throw an out-of-memory exception -- until I started experimenting with two servers. Both have 2.5 GB of RAM, run Windows Server 2003, and run identical programs. The only significant difference between the two is that one has 7% hard drive storage left and the other more than 50%. The server with 7% storage left is consistently throwing out-of-memory exceptions while the other performs consistently well. My app is a C# web application that processes hundreds of MB of String objects. Why would this difference occur, given that the most likely reason for the out-of-memory condition is running out of contiguous virtual address space? What solutions do you propose? And what do you say about the following:

        1. Turn on the /3GB switch to increase the virtual address space.
        2. Instead of using one giant string object, break it up into smaller
           pieces and collect them in a jagged array (here I have to find a way to
           return the result to the caller in some other way, as right now the
           return type is a string).

    Thanks, SO
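
    For illustration, a minimal sketch of idea 2: process the text as a stream of modest chunks instead of one giant string, so no single allocation needs a huge contiguous block (or, above roughly 85 KB, the large object heap). The chunk size is an arbitrary assumption; the caller consumes an IEnumerable<string> instead of one string.

        using System.Collections.Generic;
        using System.IO;

        static IEnumerable<string> ReadInChunks(TextReader reader, int chunkSize)
        {
            var buffer = new char[chunkSize];
            int read;
            while ((read = reader.Read(buffer, 0, buffer.Length)) > 0)
                yield return new string(buffer, 0, read); // one small string per chunk
        }

        // Usage sketch:
        // foreach (string piece in ReadInChunks(new StreamReader("big.txt"), 32000))
        //     Process(piece);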

    Read the article

  • Writing a JavaScript zip code validation function

    - by mkoryak
    I would like to write a JavaScript function that validates a zip code by checking whether the zip code actually exists. Here is a list of all zip codes: http://www.census.gov/tiger/tms/gazetteer/zips.txt (I only care about the 2nd column). This is really a compression problem, and I would like to do this for fun. OK, now that that's out of the way, here is a list of optimizations over a straight hashtable that I can think of; feel free to add anything I have not thought of:

        - Break the zip code into 2 parts: the first 2 digits and the last 3
          digits. Make a giant if-else statement that first checks the first 2
          digits, then checks ranges within the last 3 digits. Or convert the zips
          into hex and see if I can do the same thing with smaller groups.
        - Find out whether, within the range of all valid zip codes, there are
          more valid zip codes than invalid ones, and write the above code
          targeting the smaller group.
        - Break the hash up into separate files and load them via Ajax as the user
          types in the zip code -- so perhaps break it into 2 parts: one for the
          first 2 digits, one for the last 3.

    Lastly, I plan to generate the JavaScript files using another program, not by hand. Edit: performance matters here. I do want to use this, if it doesn't suck. Performance = JavaScript code execution + download time. Edit 2: JavaScript-only solutions please. I don't have access to the application server; plus, that would make this into a whole other problem =)

    Read the article

  • Log4net RollingFileAppender doesn't roll over anymore after a couple of weeks

    - by Rocko
    Hello, I'm using log4net (v1.2.9.0) in a web project. Everything works like a charm, but after a couple of weeks the RollingFileAppender stops rolling over. Instead, every log message is appended to the same file, which has therefore grown to a giant size by now. Here is my log4net configuration:

        <?xml version="1.0" encoding="utf-8"?>
        <log4net>
          <appender name="RollingLogFileAppender" type="log4net.Appender.RollingFileAppender">
            <param name="File" value="C:\\Documents and Settings\\All Users\\Application Data\\CAPServer\\log\\CWSServer.log"/>
            <param name="AppendToFile" value="true"/>
            <param name="MaxSizeRollBackups" value="50"/>
            <param name="RollingStyle" value="Date"/>
            <param name="DatePattern" value="yyyyMMdd"/>
            <param name="StaticLogFileName" value="true"/>
            <layout type="log4net.Layout.PatternLayout">
              <param name="ConversionPattern" value="%d{dd.MM.yyyy HH:mm:ss} [%t] %-5p %c{1} - %m%n"/>
            </layout>
          </appender>
          <root>
            <level value="ALL"/>
            <appender-ref ref="RollingLogFileAppender"/>
          </root>
        </log4net>

    Can you help? Thanks in advance, Rocko

    Read the article

  • Rails' page caching vs. HTTP reverse proxy caches

    - by John Topley
    I've been catching up with the Scaling Rails screencasts. In episode 11, which covers advanced HTTP caching (using reverse proxy caches such as Varnish and Squid), they recommend considering a reverse proxy cache only once you've already exhausted the possibilities of page, action and fragment caching within your Rails application (as well as memcached etc., but that's not relevant to this question). What I can't quite understand is how an HTTP reverse proxy cache can provide a performance boost for an application that already uses page caching. To simplify matters, let's assume I'm talking about a single host. This is my understanding of how both techniques work (maybe I'm wrong):

        - With page caching, the Rails process is hit initially and generates a
          static HTML file that is served directly by the web server for
          subsequent requests, for as long as the cache for that request is valid.
          If the cache has expired, Rails is hit again and the static file is
          regenerated with the updated content, ready for the next request.
        - With an HTTP reverse proxy cache, the Rails process is hit when the
          proxy needs to determine whether the content is stale. This is done
          using HTTP headers such as ETag and Last-Modified. If the content is
          fresh, Rails responds to the proxy with an HTTP 304 Not Modified and the
          proxy serves its cached content to the browser -- or, even better,
          responds with its own HTTP 304. If the content is stale, Rails serves
          the updated content to the proxy, which caches it and then serves it to
          the browser.

    If my understanding is correct, doesn't page caching result in fewer hits to the Rails process? There isn't all that back and forth to determine whether the content is stale, meaning better performance than reverse proxy caching. Why might you use both techniques in conjunction?
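
    For illustration, a minimal sketch of the revalidation round trip described above, written in C# only for consistency with the other excerpts here; the URL and timestamp are placeholders. In .NET's classic HTTP stack, a 304 surfaces as a WebException.

        using System;
        using System.Net;

        static string FetchWithRevalidation(string url, DateTime lastFetched, string cachedBody)
        {
            var request = (HttpWebRequest)WebRequest.Create(url);
            request.IfModifiedSince = lastFetched;   // makes this a conditional GET
            try
            {
                using (var response = (HttpWebResponse)request.GetResponse())
                using (var reader = new System.IO.StreamReader(response.GetResponseStream()))
                    return reader.ReadToEnd();       // 200: content was stale, take the new copy
            }
            catch (WebException ex)
            {
                var response = ex.Response as HttpWebResponse;
                if (response != null && response.StatusCode == HttpStatusCode.NotModified)
                    return cachedBody;               // 304: the cached copy is still fresh
                throw;
            }
        }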

    Read the article

  • Stored Procedure: Variable passed from PHP alters second half of query string

    - by Stephanie
    Hello everyone. Basically, we have an ASP website right now that I'm converting to PHP. We're still using the MSSQL server for the DB -- it's not moving. In the current ASP site there's an include file with a giant SQL query that gets executed. This include sits on a lot of pages, and this is a simplified version of what happens: pages A, B and C all use the include file to return a listing. In ASP, page A passes variable A through to the include file, page B passes variable B, page C passes variable C, and so on. The include file builds the SQL query like this:

        sql = "SELECT * FROM table_one LEFT OUTER JOIN table_two ON table_one.id = table_two.id "

    and then adds to it (remember, this is ASP), based on the variable passed through from the parent page:

        Select Case sType
            Case "A"
                sql = sql & "WHERE LOWER(column_a) <> 'no' AND LTRIM(ISNULL(column_b, '')) <> '' ORDER BY column_a"
            Case "B"
                sql = sql & "WHERE LOWER(column_c) <> 'no' ORDER BY lastname, firstname"
            Case "C"
                sql = sql & "WHERE LOWER(column_f) <> 'no' OR LOWER(column_g) <> 'no' ORDER BY column_g"
        End Select

    As you'll notice, each string appended as the second half of the SQL query differs from the previous one in more than a single substitutable variable, which is what has me stumped. How do I translate this case/switch into the stored procedure, based on the varchar input that I pass to the stored procedure via PHP? This stored procedure will actually handle the query listing for about 20 pages, so it's a hefty one, and this is my first major complicated one. I'm getting there, though! I'm also just more used to MySQL. Not that they're that different. :P Thank you very much for your help in advance. Stephanie

    Read the article

  • Are functional programming languages good for practical tasks?

    - by Clueless
    It seems to me, from my experimenting with Haskell, Erlang and Scheme, that functional programming languages are a fantastic way to answer scientific questions -- for example, taking a small set of data and performing some extensive analysis on it to return a significant answer. They're great for working through some tough Project Euler questions or trying out the Google Code Jam in an original way. At the same time, it seems that by their very nature they are more suited to finding analytical solutions than to actually performing practical tasks. I noticed this most strongly in Haskell, where everything is evaluated lazily and your whole program boils down to one giant analytical solution for some given data that you either hard-code into the program or tack on messily through Haskell's limited IO capabilities. Basically, the tasks I would call "practical", such as "accept a request, find and process the requested data, and return it formatted as needed", seem to translate much more directly into procedural languages. The most luck I have had finding a functional language that works like this is Factor, which I would liken to a reverse-Polish-notation version of Python. So I am just curious whether I have missed something in these languages or am just way off the ball in how I ask this question. Does anyone have examples of functional languages that are great at performing practical tasks, or of practical tasks that are best performed by functional languages?

    Read the article

  • Filter large amounts of data from a HTML table w/ jQuery

    - by Bry4n
    I work for a transit agency and I have large amounts of data (mostly times), and I need a way to filter the data using two textboxes (To and From). I found jQuery quicksearch, but it seems to only work with one textbox. If anyone has any ideas via jQuery or some other client-side library, that would be fantastic. Ideal example:

        To: [Textbox]  From: [Textbox]

        <table>
          <tr>
            <td>69th street</td><td>5:00pm</td><td>5:06pm</td><td>5:10pm</td><td>5:20pm</td>
          </tr>
          <tr>
            <td>Millbourne</td><td>5:09pm</td><td>5:15pm</td><td>5:20pm</td><td>5:25pm</td>
          </tr>
          <tr>
            <td>Spring Garden</td><td>6:00pm</td><td>6:15pm</td><td>6:20pm</td><td>6:25pm</td>
          </tr>
        </table>

    I have an HTML page with a giant table on it listing the station names and each station's times. I want to be able to put my starting location in one box and my ending location in the other, and have every row that doesn't relate to either of the two typed-in locations disappear, leaving only the two rows that match what was typed (even if the user doesn't spell it right or type it all the way) -- similar to the jQuery quicksearch plugin.

    Read the article

  • Harvesting Dynamic HTTP Content to produce Replicating HTTP Static Content

    - by Neil Pitman
    I have a slowly evolving dynamic website served from J2EE. The response time and load capacity of the server are inadequate for client needs. Moreover, ad hoc requests can unexpectedly affect other services running on the same application server/database. I know the reasons and can't address them in the short term. I understand HTTP caching hints (expiry, ETags, ...) and, for the purpose of this question, please assume that I have maxed out the opportunities to reduce load. I am thinking of doing a brute-force traversal of all URLs in the system to prime a cache, and then copying the cache contents to geo-dispersed cache servers near the clients. I'm thinking of Squid or Apache httpd with mod_disk_cache. I want to prime one copy and (manually) replicate the cache contents. I don't need a federation or intelligence amongst the slaves. When the data changes, invalidating the cache, I will refresh my master cache and update the slave versions, probably once a night. Has anyone done this? Is it a good idea? Are there other technologies that I should investigate? I can program this, but I would prefer a solution that is a configuration of open-source technologies. Thanks
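
    For illustration, a minimal C# sketch of the priming pass: walk a list of URLs and fetch each one through the proxy so its disk cache is warm before replication. The urls.txt file and the proxy address are assumptions.

        using System;
        using System.IO;
        using System.Net;

        class CachePrimer
        {
            static void Main()
            {
                using (var client = new WebClient())
                {
                    client.Proxy = new WebProxy("http://cache-master:3128"); // fetch via the cache
                    foreach (string url in File.ReadAllLines("urls.txt"))
                    {
                        try
                        {
                            client.DownloadString(url);          // response now sits in the cache
                            Console.WriteLine("primed  " + url);
                        }
                        catch (WebException ex)
                        {
                            Console.Error.WriteLine("failed  " + url + ": " + ex.Message);
                        }
                    }
                }
            }
        }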

    Read the article

  • When to drop an IT job

    - by Nippysaurus
    In my career I have had two programming jobs, both in the field I am most familiar with (C#/MSSQL), and I have quit both for the same reasons: unmanageable code and a bad (loose) company structure. The jobs had something else in common: both were small companies (at one I was the only developer). Currently I am in the following position:

        - I'm given written instructions that are almost impossible to follow
          (somewhat of a fool's errand).
        - We are given short time constraints, but seldom asked how long work will
          take -- and when we are, the estimate is always too long and needs to be
          shorter (and when the work ends up taking longer than they need it to,
          it's always our fault).
        - There is no time for proper documentation, but we get blamed for not
          documenting (see the previous point).
        - Management is constantly screwing me around, saying I'm underperforming
          on a given task (which is not true) and switching me to a task which is
          much more confusing.

    So I must ask my fellow developers: how bad does a job need to be before you would consider jumping ship? And what should you look out for when considering taking a job? In future I will be asking about documented procedures, release control, bug management and adoption of new technologies. EDIT: Let me add some more fuel to the fire... I have been in my current job for just over a year, and the work I am doing almost never uses any of the knowledge I have gained from the other work I have done here. Everything is a giant learning curve. Because of this, about 30% of my time goes to learning what is going on with this new product (whose owner/original developer has left the company), 30% to finding the relevant documentation that helps the whole thing make sense, 30% to finding where to make the change, and 10% to actually making the change.

    Read the article

  • Stored Procedures In Source Control - Automate Build/Deployment Process

    - by Alex
    My company provides a large .NET service-oriented solution. The services layer interacts with a T-SQL back end consisting of hundreds of tables and stored procedures. Our C# code is in version control (SVN) but our stored procedures and schema are not. After much lobbying of expedient upper management, I was allowed to review our (non-existent) build/deployment process to accomplish the following goals:

        1. Place schema and stored procedures under source control.
        2. Automate the build/deployment process.

    I would like to proceed per the accepted answer's strategy in this post, but I have additional questions:

        - I would like to use Hudson as my build server. Is this a reasonable
          choice for a C#/SQL solution? What better alternatives should I explore?
        - Assuming I have all triggers, stored procedures, schema, etc. under
          source control, and that they are scripted to individual files, how do I
          generate a build script which will take into account
          dependencies/references between these items? (SQL Server does this
          automatically, but it generates one giant script.)
        - What does the workflow of performing an update at the client look like?
          I.e., I have to keep existing table data. How do I roll back schema
          changes?
        - I am the only programmer. Several other pseudo-technical staff like to
          make changes directly inside SQL Management Studio. Is it realistic to
          expect others to adhere to this solution -- and how can I enforce this?

    Thank you in advance for your help.
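
    For illustration, a hedged C# sketch of the "script everything to individual files" step using SQL Server Management Objects (SMO), which ships with SQL Server 2005; the server name, database name and output path are placeholders.

        using System.IO;
        using System.Linq;
        using Microsoft.SqlServer.Management.Smo;

        class ScriptProcedures
        {
            static void Main()
            {
                var server = new Server("localhost");
                var db = server.Databases["MyDatabase"];
                Directory.CreateDirectory("procs");
                foreach (StoredProcedure sp in db.StoredProcedures)
                {
                    if (sp.IsSystemObject) continue;   // skip built-ins
                    string path = Path.Combine("procs", sp.Schema + "." + sp.Name + ".sql");
                    // Script() returns the CREATE statement(s) for this one object
                    File.WriteAllLines(path, sp.Script().Cast<string>().ToArray());
                }
            }
        }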

    Read the article

  • How should I manage/declare dependencies between open source C# projects?

    - by munificent
    I've got a game (a roguelike, to be specific) in C# that I'm in the process of cleaning up to open source. One step I'd like to take is splitting it into three distinct pieces:

        1. A simple package of utility classes: things like 2D arrays, vectors,
           etc.
        2. A terminal UI package that gives you a curses-like display. It depends
           on 1.
        3. The actual game, which uses 1 and 2.

    Right now, these are all separate projects in the same solution, but I'd kind of like to make them completely separate projects (in the "open source project" sense, not the "Visual Studio project" sense of the term) with their own names and repos. I think that, at the very least, #1 is generally useful even if you aren't building a game, and I don't want someone to have to build an entire game just to get some handy functions. What I'm not sure about is how to handle the dependencies if I split up the solution. If someone decides they want to sync the game, how should I ensure they also get 1 and 2?

        - Include the built dependency .dlls in the game's repo?
        - Just document: "you need these other projects, and they must be in a
          path relative to the game like this"?
        - Just leave it all one giant solution and a single repo?
        - Something I'm not thinking of?

    Read the article

  • Cocoa NSTextField Drag & Drop Requires Subclass... Really?

    - by ipmcc
    Until today, I've never had occasion to use anything other than an NSWindow itself as an NSDraggingDestination. When using a window as a one-size-fits-all drag destination, the NSWindow will pass those messages on to its delegate, allowing you to handle drops without subclassing NSWindow. The docs say:

        Although NSDraggingDestination is declared as an informal protocol, the
        NSWindow and NSView subclasses you create to adopt the protocol need only
        implement those methods that are pertinent. (The NSWindow and NSView
        classes provide private implementations for all of the methods.) Either a
        window object or its delegate may implement these methods; however, the
        delegate's implementation takes precedence if there are implementations
        in both places.

    Today, I had a window with two NSTextFields on it, and I wanted them to have different drop behaviors, with drops allowed nowhere else in the window. The way I interpret the docs, it seems that I either have to subclass NSTextField, or write some giant spaghetti-conditional drop handler on the window's delegate that hit-checks the draggingLocation against each view in order to select the drop-area behavior for each field. The centralized window-delegate-based drop handler seems like it would be onerous in any case where you had more than a small handful of drop-destination views. Likewise, the subclassing approach seems onerous regardless of the case, because now the drop-handling code lives in a view class, so once you accept the drop you've got to come up with some way to marshal the dropped data back to the model. The bindings docs warn you off trying to drive bindings by setting the UI value programmatically, so now you're stuck working your way around that too. So my question is: "Really!? Are those the only readily available options? Or am I missing something straightforward here?" Thanks.

    Read the article

  • Compatibility jquery with IE - click function and fade

    - by Julien Fotnaine
    Here is my script: http://jsfiddle.net/3XwZv/153/

    HTML:

        <div id="box1" class="choice" style="background:blue;">
          <div class="selection ordinateur">
            <div class="choix1"><a class="link1" href="#"></a></div>
          </div>
        </div>
        <div id="box2" class="choice" style="display:none;background:red;">
          <div class="selection ordinateur">
            <div class="choix1"><a class="link2" href="#"></a></div>
          </div>
        </div>
        <div id="box3" class="choice" style="display:none;background:green;">
          <div class="selection ordinateur">
            <div class="choix1"><a href="#"></a></div>
          </div>
        </div>

    JS:

        $(".link1").click(function() {
            $('#box1').fadeOut("slow", function() {
                $('#box2').css("display", "block");
                $('#box2').replaceWith(div);
                $('#box1').fadeIn("slow");
            });
            $('.link1').fadeOut("slow");
            return false;
        });

        $(".link2").click(function() {
            $('#box2').fadeOut("slow", function() {
                $('#box3').css("display", "block");
                $('#box3').replaceWith(div);
                $('#box2').fadeIn("slow");
            });
            $('.link2').fadeOut("slow");
            return false;
        });

    The main goal is that when you click on the giant square, there are three different actions. However, in Internet Explorer I get stuck at the second one: the red square never gives way to the green square. I need your help, guys!

    Read the article

  • Sql Server 2005 multiple insert with c#

    - by bottlenecked
    Hello. I have a class named Entry declared like this:

        class Entry
        {
            string Id { get; set; }
            string Name { get; set; }
        }

    and then a method that will accept multiple such Entry objects for insertion into the database using ADO.NET:

        static void InsertEntries(IEnumerable<Entry> entries)
        {
            // build a SqlCommand object
            using (SqlCommand cmd = new SqlCommand())
            {
                ...
                const string refcmdText = "INSERT INTO Entries (id, name) VALUES (@id{0}, @name{0});";
                int count = 0;
                string query = string.Empty;
                // build a large query
                foreach (var entry in entries)
                {
                    query += string.Format(refcmdText, count);
                    cmd.Parameters.AddWithValue(string.Format("@id{0}", count), entry.Id);
                    cmd.Parameters.AddWithValue(string.Format("@name{0}", count), entry.Name);
                    count++;
                }
                cmd.CommandText = query;
                // and then execute the command
                ...
            }
        }

    And my question is this: should I keep using the above approach (build a giant string of insert statements and their parameters and send it over the network), or should I keep an open connection and send a single insert statement for each Entry, like this:

        using (SqlCommand cmd = new SqlCommand())
        {
            using (SqlConnection conn = new SqlConnection())
            {
                // assign connection string and open connection
                ...
                cmd.Connection = conn;
                foreach (var entry in entries)
                {
                    cmd.Parameters.Clear(); // reset the parameters from the previous row
                    cmd.CommandText = "INSERT INTO Entries (id, name) VALUES (@id, @name);";
                    cmd.Parameters.AddWithValue("@id", entry.Id);
                    cmd.Parameters.AddWithValue("@name", entry.Name);
                    cmd.ExecuteNonQuery();
                }
            }
        }

    What do you think? Will there be a performance difference in SQL Server between the two? Are there any other consequences I should be aware of? Thank you for your time!
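
    For what it's worth, a hedged third option for this shape of problem: stream the rows through SqlBulkCopy in a single round trip, avoiding both a giant concatenated SQL string and per-row round trips. A minimal sketch; the table and column names follow the question, the connection string is a placeholder, and the Entry properties are assumed to be accessible.

        using System.Collections.Generic;
        using System.Data;
        using System.Data.SqlClient;

        static void BulkInsertEntries(IEnumerable<Entry> entries, string connectionString)
        {
            // stage the rows in an in-memory table matching the target schema
            var table = new DataTable("Entries");
            table.Columns.Add("id", typeof(string));
            table.Columns.Add("name", typeof(string));
            foreach (var entry in entries)
                table.Rows.Add(entry.Id, entry.Name);

            using (var bulk = new SqlBulkCopy(connectionString))
            {
                bulk.DestinationTableName = "Entries";
                bulk.ColumnMappings.Add("id", "id");
                bulk.ColumnMappings.Add("name", "name");
                bulk.WriteToServer(table);   // one bulk-load round trip
            }
        }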

    Read the article

  • Organising UI code in .NET forms

    - by sb3700
    Hi. I'm someone who has taught myself programming and hasn't had any formal training in .NET. A while back I started C# in order to develop a GUI program to control sensors, and the project has blossomed. I was just wondering how best to organise the code, particularly the UI code, in my forms. My forms currently are a mess, or at least seem a mess to me:

        - I have a constructor which initialises all the parameters and wires up
          the events.
        - I have a giant State property, controlled by a States enum, which
          updates the Enabled state of all my form controls as users progress
          through the application (i.e. disconnected, connected, setup, scanning).
        - I have 3-10 private variables accessed through properties, some of which
          have side effects that change the values of form elements.
        - I have a lot of "UpdateXXX" functions, separated into regions, to handle
          UI elements that depend on other UI elements -- e.g. if the sensor
          changes, then change the baud-rate drop-down list.
        - I have a lot of events calling these Update functions.
        - I have a background worker which does all the scanning and analysis.

    My problem is that this seems like a mess, particularly the State property, and is getting unmaintainable. Also, my application logic and UI code are in the same file and, to some degree, intermingled, which seems wrong and means I need to do a lot of scrolling to find what I need. How do you structure your .NET forms? Thanks
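
    For illustration, a minimal sketch of one common refactoring: pull the state machine out of the form into a plain class that raises an event, so every control's Enabled rule lives in one handler and the form shrinks to wiring. All names here are assumptions.

        using System;

        public enum ScannerState { Disconnected, Connected, Setup, Scanning }

        public class ScannerController
        {
            public event EventHandler StateChanged;

            private ScannerState state;
            public ScannerState State
            {
                get { return state; }
                set
                {
                    if (state == value) return;
                    state = value;
                    EventHandler handler = StateChanged;   // raise once per transition
                    if (handler != null) handler(this, EventArgs.Empty);
                }
            }
        }

        // In the form, one handler replaces the scattered Enabled updates:
        // controller.StateChanged += delegate {
        //     connectButton.Enabled = controller.State == ScannerState.Disconnected;
        //     setupPanel.Enabled   = controller.State == ScannerState.Connected;
        //     scanButton.Enabled   = controller.State == ScannerState.Setup;
        // };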

    Read the article

  • Speed comparison - Template specialization vs. Virtual Function vs. If-Statement

    - by Person
    Just to get it out of the way: "premature optimization is the root of all evil", "make use of OOP", etc. -- I understand. I'm just looking for some advice regarding the speed of certain operations that I can store in my grey matter for future reference. Say you have an Animation class. An animation can be looped (plays over and over) or not looped (plays once); it may have unique frame times or not; and so on. Let's say there are 3 of these "either/or" attributes. Note that any method of the Animation class will check against at most one of them (i.e. this isn't a case of one giant branch of if-else-ifs). Here are some options:

        1. Give the class boolean members for the attributes given above, and use
           an if statement to check against them when playing the animation to
           perform the appropriate action. Problem: a conditional is checked every
           single time the animation is played.
        2. Make a base Animation class, and derive classes such as LoopedAnimation
           and AnimationUniqueFrames. Problem: a vtable lookup on every call to
           play the animation, given something like a vector<Animation*>. Also,
           making a separate class for every possible combination seems bloaty.
        3. Use template specialization, and specialize the functions that depend
           on those attributes, like template<bool looped, bool uniqueFrameTimes>
           class Animation. Problem: you couldn't then have a general container of
           Animations for something's animations. Could also be bloaty.

    I'm wondering what kind of speed each of these options offers. I'm particularly interested in the 1st and 2nd options, because the 3rd doesn't allow one to iterate through a general container of Animations. In short, which is faster: a vtable fetch or a conditional?

    Read the article

  • Handling large datasets with PHP/Drupal

    - by jo
    Hi all, I have a report page that deals with ~700k records from a database table. I can display this on a webpage, using paging to break up the results. However, my export-to-PDF/CSV functions rely on processing the entire data set at once, and I'm hitting my 256 MB memory limit at around 250k rows. I don't feel comfortable increasing the memory limit, and I don't have the ability to use MySQL's SELECT ... INTO OUTFILE to just serve a pre-generated CSV. However, I can't really see a way of serving up large data sets with Drupal using something like:

        $form = array();
        $table_headers = array();
        $table_rows = array();

        $data = db_query("a query to get the whole dataset");
        while ($row = db_fetch_object($data)) {
          $table_rows[] = $row->some_attribute;
        }

        $form['report'] = array('#value' => theme('table', $table_headers, $table_rows));
        return $form;

    Is there a way of getting around what is essentially appending to a giant array of arrays? At the moment I don't see how I can offer any meaningful report pages with Drupal because of this. Thanks

    Read the article

  • How can I make "month" columns in Sql?

    - by Beska
    I've got a set of data that looks something like this (VERY simplified):

        productId  Qty  dateOrdered
        ---------  ---  -----------
        1          2    10/10/2008
        1          1    11/10/2008
        1          2    10/10/2009
        2          3    10/12/2009
        1          1    10/15/2009
        2          2    11/15/2009

    Out of this, we're trying to create a query that returns something like:

        productId  Year  Jan  Feb  Mar  Apr  May  Jun  Jul  Aug  Sep  Oct  Nov  Dec
        ---------  ----  ---  ---  ---  ---  ---  ---  ---  ---  ---  ---  ---  ---
        1          2008  0    0    0    0    0    0    0    0    0    2    1    0
        1          2009  0    0    0    0    0    0    0    0    0    3    0    0
        2          2009  0    0    0    0    0    0    0    0    0    3    2    0

    The way I'm doing this now, I run 12 selects, one for each month, put those in temp tables, and then do a giant join. Everything works, but this guy is dog slow. I know this isn't much to go on, but knowing that I barely qualify as a tyro in the DB world, I'm wondering whether there is a better high-level approach I might try. (I'm guessing there is.) (I'm using MS SQL Server, so answers specific to that DB are fine. I'm also just starting to look at PIVOT as a possible help, but I don't know anything about it yet, so if someone wants to comment on that, it might be helpful as well.)
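
    For illustration, a hedged sketch of the PIVOT idea, wrapped in C# for consistency with the other excerpts here; the Orders table name and the connection string are assumptions based on the simplified sample. PIVOT requires SQL Server 2005 or later.

        using System.Data;
        using System.Data.SqlClient;

        static DataTable LoadMonthlyTotals(string connectionString)
        {
            const string pivotSql = @"
                SELECT productId, [Year],
                       ISNULL([1],0)  AS Jan, ISNULL([2],0)  AS Feb, ISNULL([3],0)  AS Mar,
                       ISNULL([4],0)  AS Apr, ISNULL([5],0)  AS May, ISNULL([6],0)  AS Jun,
                       ISNULL([7],0)  AS Jul, ISNULL([8],0)  AS Aug, ISNULL([9],0)  AS Sep,
                       ISNULL([10],0) AS Oct, ISNULL([11],0) AS Nov, ISNULL([12],0) AS [Dec]
                FROM (SELECT productId, YEAR(dateOrdered) AS [Year],
                             MONTH(dateOrdered) AS [Month], Qty
                      FROM Orders) AS src
                PIVOT (SUM(Qty) FOR [Month]
                       IN ([1],[2],[3],[4],[5],[6],[7],[8],[9],[10],[11],[12])) AS p;";

            var results = new DataTable();   // one row per product/year, a column per month
            using (var adapter = new SqlDataAdapter(pivotSql, connectionString))
                adapter.Fill(results);
            return results;
        }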

    Read the article

  • jQuery & IE crashing on $('#someDiv').hide();

    - by Jimmy
    Well, after a while of scratching my head and going "huh?" trying to figure out why IE would straight up crash when loading one of my pages loaded with jQuery goodness, I narrowed the culprit down to this line:

        $('div#questions').hide();

    And when I say IE crashes, I mean it fully crashes, attempting to do its webpage-recovery nonsense that fails. I am running jQuery 1.4.2 and using IE 8 (haven't tested with any other versions). My current workaround is this:

        if ($.browser.msie) {
            window.location = "http://www.mozilla.com/en-US/products/download.html";
        }

    For some reason I feel my IE users won't be very pleased with this solution, though. The div in question has a lot of content in it, including other divs that get hidden and displayed again, and all of that works just fine and dandy. It is only when the giant parent div is hidden that IE flips out and stabs itself. Has anyone encountered this, or does anyone have ideas about what is going wrong? EDIT: Everything is wrapped up in $(document).ready(function() { ... });, and my code is all internal, so I can't link it, unfortunately.

    Read the article

  • Git + SoA, one repo or many?

    - by parsenome
    Normally, when I start a new application, I create a new git repository for it. That's well accepted and plays nicely with GitHub when I want to share my code. At work, I'm working in a service-oriented architecture. One very common pattern is to add some code to two different applications at the same time -- perhaps adding a model with a RESTful interface to one and a web frontend for managing it to another. Using separate git repositories has some warts in this case:

        - I have to commit twice.
        - I can't correlate related commits very well.
        - There's no single place to go back and trace history -- I'd love to be
          able to bring up all my commits for the day in a single place.
        - Forgetting to pull one repo or another is a gotcha.

    On the other hand, I've used Perforce a lot, and its one-giant-repository model has lots of warts too. Perforce has features designed to help you with those; git doesn't. Has anyone else run into this situation? How did you handle it? What worked well, and what didn't?

    Read the article

  • What is good about php/what is php good for?

    - by Roman A. Taycher
    I have often seen PHP bashed around the web as a loosely typed language (loose typing as in a lot of type coercion, and/or objects that are easy -- and perhaps common -- to cast all over; not dynamic typing) without a great compiler/interpreter/VM, with even the standard library using a number of different naming conventions. A lot of people complain about Perl, but many (including a lot of the complainers) also give it a lot of credit for its regexes and general flexibility and power. Other than legacy code, giant web frameworks that can do tons (Drupal, etc.) and easy, cheap hosting, what is good about PHP? (Also, which criticisms are unfair, and how is the language evolving to overcome its problems?) Why would I want to learn it? Why would I want to do an independent project in it? The main thing I have heard is that PHP code's simplicity is sometimes easier to take than the over-engineered complexity you find in certain Java frameworks and applications. I'm not just trolling; I'm genuinely curious what makes PHP programmers use it. Try to convince me to put it on my languages-to-dabble-in and languages-to-learn-in-depth lists.

    Read the article

  • Update MySQl table onDrop?

    - by dougvt
    Hi all. I am writing a PHP/MySQL application (using CodeIgniter) that uses some jQuery functionality for dragging table rows. I have a table in which the user can drag rows into the desired order (kind of a queue, for which I need to preserve the rank of each row). I've been trying to figure out how (and whether) I should update the database each time the user drops a row, in order to simplify the UI and avoid a "Save" button. I have the jQuery working and can send a serialized list back to the server on drop, but is it good design practice to run an update query this often? The table will usually have 30-40 rows at most, but if the user drags row 1 far down the list, then potentially all the rows would need their rank field updated. I've been wondering whether to send one giant query to the server, to loop through the rows in PHP and update each row with its own UPDATE query, to send a small serialized list to a stored procedure and let the server do all the work, or perhaps a better method I haven't considered. I've read that stored procedures in MySQL are not very efficient and use a separate process for each call. Any advice as to the right solution here? Thanks very much for your help!

    Read the article

< Previous Page | 24 25 26 27 28 29 30  | Next Page >