Search Results

Search found 65999 results on 2640 pages for 'large data volumes'.

Page 149/2640 | < Previous Page | 145 146 147 148 149 150 151 152 153 154 155 156  | Next Page >

  • New Whitepaper: Evolution from the Traditional Data Center to Exalogic: An Operational Perspective

    - by Javier Puerta
    IT organizations are struggling to balance the day-to-day concerns of data center management against the business-level requirement to deliver long-term value. This balancing act has proven difficult and inefficient: systems and application management tools are resource-intensive, and traditional infrastructure management architectures have developed over time on a project-by-project basis. These traditional management systems consist of multiple tools that require administrators to waste time performing too many steps to handle routine administrative tasks. Operational efficiency and agility in your enterprise are directly linked to the capabilities provided by the management layer across the entire stack, from the application and middleware down to the operating system, compute, network, and storage. Only when this end-to-end capability is provided will we experience the full benefit of a scalable, efficient, responsive and secure data center.

    Managing Exalogic is substantially less complex and error-prone than managing traditional systems built from individually sourced, multi-vendor components, because Exalogic is designed to be administered and maintained as a single, integrated system (Figure 1). It is at the forefront of the industry-wide shift away from costly and inferior one-off platforms toward private clouds and Engineered Systems.

    Read the full whitepaper "Evolution from the Traditional Data Center to Exalogic: An Operational Perspective". The full document is available for download at the Exadata Partner Community Collaborative Workspace (for community members only - if you get an error message, please register for the Community first).

    Read the article

  • Large number of soft page faults when assigning a TJpegImage to a TBitmap

    - by Robert Oschler
    I have a Delphi 6 Pro application that processes incoming JPEG frames from a streaming video server. The code works, but I recently noticed that it generates a huge number of soft page faults over time. After doing some investigation, the page faults appear to be coming from one particular graphics operation. Note that the uncompressed bitmaps in question are 320 x 240, or about 300 KB in size, so it's not due to the handling of large images. The number of page faults being generated isn't tolerable: over an hour it can easily top 1,000,000 page faults. I created a stripped-down test case that executes the code included below on a timer, 10 times a second. The page faults appear to happen when I try to assign the TJpegImage to a TBitmap in the GetBitmap() method. I know this because I commented out that line and the page faults do not occur. The Assign() triggers a decompression operation on the part of TJpegImage as it pushes the decompressed bits into a newly created bitmap that GetBitmap() returns. When I run Microsoft's pfmon utility (page fault monitor), I get a huge number of soft page fault error lines concerning RtlFillMemoryUlong, so it appears to happen during a memory buffer fill operation. One puzzling note: the summary part of pfmon's report, where it shows which DLL caused what page fault, does not show any DLL names in the far left column. I tried this on another system and it happens there too. Can anyone suggest a fix or a workaround? Here's the code. Note that IReceiveBufferForClientSocket is a simple class object that holds bytes in an accumulating buffer.

        function GetBitmap(theJpegImage: TJpegImage): Graphics.TBitmap;
        begin
          Result := TBitmap.Create;
          Result.Assign(theJpegImage);
        end;
        // ---------------------------------------------------------------
        procedure processJpegFrame(intfReceiveBuffer: IReceiveBufferForClientSocket);
        var
          theBitmap: TBitmap;
          theJpegStream, theBitmapStream: TMemoryStream;
          theJpegImage: TJpegImage;
        begin
          theBitmap := nil;
          theJpegImage := TJPEGImage.Create;
          theJpegStream := TMemoryStream.Create;
          theBitmapStream := TMemoryStream.Create;
          try // 2
            // ************************ BEGIN JPEG FRAME PROCESSING
            // Load the JPEG image from the receive buffer.
            theJpegStream.Size := intfReceiveBuffer.numBytesInBuffer;
            Move(intfReceiveBuffer.bufPtr^, theJpegStream.Memory^, intfReceiveBuffer.numBytesInBuffer);
            theJpegImage.LoadFromStream(theJpegStream);
            // Convert to bitmap.
            theBitmap := GetBitmap(theJpegImage);
          finally
            // Free memory objects.
            if Assigned(theBitmap) then theBitmap.Free;
            if Assigned(theJpegImage) then theJpegImage.Free;
            if Assigned(theBitmapStream) then theBitmapStream.Free;
            if Assigned(theJpegStream) then theJpegStream.Free;
          end; // try()
        end;
        // ---------------------------------------------------------------
        procedure TForm1.Timer1Timer(Sender: TObject);
        begin
          processJpegFrame(FIntfReceiveBufferForClientSocket);
        end;
        // ---------------------------------------------------------------
        procedure TForm1.FormCreate(Sender: TObject);
        var
          S: string;
        begin
          FIntfReceiveBufferForClientSocket := TReceiveBufferForClientSocket.Create(1000000);
          S := loadStringFromFile('c:\test.jpg');
          FIntfReceiveBufferForClientSocket.assign(S);
        end;
        // ---------------------------------------------------------------

    Thanks, Robert

    Read the article

  • Categorising a large mysql result with php while?

    - by Ryan
    I'm using the following to grab my large result set from a MySQL db:

        $discresult = 'SELECT t.id, t.subject, t.topicimage, t.topictype, c.user_id, c.disc_id FROM topics AS t LEFT JOIN collections AS c ON t.id=c.disc_id WHERE c.user_id='.$user_id;
        $userdiscs = $db->query($discresult) or error('Error.', __FILE__, __LINE__, $db->error());

    This returns a list of all items that the user owns. I then need to categorise these items based on the value of the "topictype" column, which I'm currently doing like this:

        <h2>Category 1</h2>
        <?php
        while ($cur_img = mysql_fetch_array($userdiscs)) {
            if ($cur_img['topictype']=="cat-1") {
                if ($cur_img['topicimage']!="") {
                    echo "<div><a href=\"viewtopic.php?id=".$cur_img['id']."\" title=\"".$cur_img['subject']."\"><img src=\"".$cur_img['topicimage']."\" style=\"width:60px; height: 60px\" /></a><br /><a href=\"viewtopic.php?id=".$cur_img['id']."\" title=\"".$cur_img['subject']."\">".$cur_img['subject']."</a></div>";
                } else {
                    echo "<div><a href=\"viewtopic.php?id=".$cur_img['id']."\" title=\"".$cur_img['subject']."\"><img src=\"img/no-disc-art.jpg\" style=\"width:60px; height: 60px\" /></a><br /><a href=\"viewtopic.php?id=".$cur_img['id']."\" title=\"".$cur_img['subject']."\">".$cur_img['subject']."</a></div>";
                }
            }
        }
        mysql_data_seek($userdiscs, 0);
        ?>

        <h2>Category 2</h2>
        <?php
        while ($cur_img = mysql_fetch_array($userdiscs)) {
            if ($cur_img['topictype']=="cat-2") {
                if ($cur_img['topicimage']!="") {
                    echo "<div><a href=\"viewtopic.php?id=".$cur_img['id']."\" title=\"".$cur_img['subject']."\"><img src=\"".$cur_img['topicimage']."\" style=\"width:60px; height: 60px\" /></a><br /><a href=\"viewtopic.php?id=".$cur_img['id']."\" title=\"".$cur_img['subject']."\">".$cur_img['subject']."</a></div>";
                } else {
                    echo "<div><a href=\"viewtopic.php?id=".$cur_img['id']."\" title=\"".$cur_img['subject']."\"><img src=\"img/no-disc-art.jpg\" style=\"width:60px; height: 60px\" /></a><br /><a href=\"viewtopic.php?id=".$cur_img['id']."\" title=\"".$cur_img['subject']."\">".$cur_img['subject']."</a></div>";
                }
            }
        }
        mysql_data_seek($userdiscs, 0);
        ?>

    This works fine when I rinse and repeat the code, but as the site grows I expect I'll run into problems as the number of "topictype" options increases (I'm expecting around 30 categories). I don't want to have to make separate queries for each categorised group of discs either, as I'd eventually have 30 queries being run as categories increase, so I'm hoping to hear some suggestions or alternative approaches :) Thanks
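
    A minimal sketch of the single-pass alternative (shown in Python rather than PHP, with made-up sample rows): fetch the result set once, bucket each row by its "topictype", then render every category from its bucket instead of re-scanning the whole result set per category. In PHP the same idea is an associative array keyed by topictype, filled in a single while loop.

        # Group rows by topic type in one pass, then render per category.
        from collections import defaultdict

        rows = [  # stand-ins for the rows returned by the query above
            {"id": 1, "subject": "Disc A", "topicimage": "a.jpg", "topictype": "cat-1"},
            {"id": 2, "subject": "Disc B", "topicimage": "",      "topictype": "cat-2"},
            {"id": 3, "subject": "Disc C", "topicimage": "c.jpg", "topictype": "cat-2"},
        ]

        buckets = defaultdict(list)
        for row in rows:                      # one pass over the result set
            buckets[row["topictype"]].append(row)

        for category in sorted(buckets):      # one heading per category, however many there are
            print(f"== {category} ==")
            for disc in buckets[category]:
                image = disc["topicimage"] or "img/no-disc-art.jpg"
                print(f'{disc["subject"]} -> viewtopic.php?id={disc["id"]} ({image})')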

    Read the article

  • Cutting large XML file into smaller pieces in C#

    - by NDraskovic
    I have a problem that I've been working on for quite some time now. I have an XML file with over 50000 records (one record has 3 levels). This file is used by one of my applications to control document sending (the record holds, among other information, the type of document that has to be sent to a certain person). So in my application I load the XML file into an XmlDocument, and then by using the SelectNodes method I create an XmlNodeList from which I read the data I want. The process is like this: our worker takes the person's ID card (with a simple barcode) and reads it with a barcode reader. When the barcode value has been read, my application finds the person with that ID in the XML file and stores the type of the document in a string variable. Then the worker takes the document and reads its barcode, and if the value of the document's barcode and the value in the string variable match, the application makes a record that a document of type xxxxxxxx will be sent to the person with ID yyyyyyyyy. This is very simple code, it works perfectly for now, and this is how it looks.

    On the textBox1_TextChanged event (worker reads the person's ID):

        foreach (XmlNode node in NodeList)
        {
            if (String.Compare(node.Attributes.GetNamedItem("ID").Value.ToString(), textBox1.Text) == 0)
            {
                ControlString = node.ChildNode[3].FirstChild.Attributes.GetNamedItem("doctype").Value.ToString();
                break;
            }
        }
        textBox2.Focus();

    And on the textBox2_TextChanged event (worker reads the document's barcode):

        if (String.Compare(textBox2.Text, ControlString) == 0)
        {
            // Create a record and insert it into a SQL database
        }

    My question is: how will my application perform with larger XML files (I was told that the XML file might grow to up to 500,000 records), will this approach still be valid, or will I need to cut the file into smaller files? If I have to cut it, please give me an idea with some code samples. I've tried to do it like this, reading an entire record and storing it into a string:

        private void WriteXml(XmlNode record)
        {
            tempXML = record.InnerXml;
            temp = "<" + record.Name + " code=\"" + record.Attributes.GetNamedItem("code").Value + "\">" + Environment.NewLine;
            temp += tempXML + Environment.NewLine;
            temp += "</" + record.Name + ">";
            SmallerXMLDocument += temp + Environment.NewLine;
            temp = "";
            i++;
        }

    tempXML, temp and SmallerXMLDocument are all string variables. Then in the button_Click method I load the XML file into an XmlNodeList (again by using the XmlDocument.SelectNodes method) and I try to create one big string value that would hold all records, like this:

        foreach (XmlNode node in nodes)
        {
            if (String.Compare(node.ChildNode[3].FirstChild.Attributes.GetNamedItem("doctype").Value.ToString(), doctype1) == 0)
            {
                WriteXML(node);
            }
        }

    My idea was to create a string value (in this case called SmallerXmlDocument), and when I have passed through the entire XML file, to simply copy the value of that string into a new file. This works, but only for files that have up to 2000 records (and mine has way more than that). So, if I need to cut the file into smaller pieces, what would be the best way to do it (keeping in mind that there could be up to half a million records in an XML file)? Thanks
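
    Since the worry here is memory growth at 500,000 records, an alternative to cutting the file is to stream it: read one record at a time and discard it once checked, which is what .NET's XmlReader is designed for. Below is a sketch of that streaming idea using Python's ElementTree purely for illustration - the record/ID/doctype names are assumptions about the schema, not taken from the real file.

        # Stream a large XML file and stop at the record with the wanted ID,
        # clearing finished elements so memory stays flat.
        import xml.etree.ElementTree as ET

        def find_doctype(path, person_id):
            for event, elem in ET.iterparse(path, events=("end",)):
                if elem.tag == "record":                       # assumed element name
                    if elem.get("ID") == person_id:            # assumed attribute name
                        doc = elem.find(".//doc")              # assumed child holding doctype
                        doctype = doc.get("doctype") if doc is not None else None
                        elem.clear()
                        return doctype
                    elem.clear()                               # release the finished record
            return None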

    Read the article

  • Split a Large File In C++

    - by wdow88
    Hey all, I'm trying to write a program that takes a large file (of any type) and splits it into many smaller "chunks". I think I have the basic idea down, but for some reason I cannot create a chunk size over 12,000 bytes. I know there are a few solutions on Google, etc., but I am more interested in learning what the origin of this limitation is than in actually using the program to split files.

        // This file splits a larger file into smaller files of a user-inputted size.
        #include <iostream>
        #include <fstream>
        #include <string>
        #include <sstream>
        #include <direct.h>
        #include <stdlib.h>
        using namespace std;

        void GetCurrentPath(char* buffer)
        {
            _getcwd(buffer, _MAX_PATH);
        }

        int main()
        {
            // use the function to get the path
            char CurrentPath[_MAX_PATH];
            GetCurrentPath(CurrentPath); // Get the current directory (used for displaying output)

            fstream bigFile;
            string filename;
            int partsize;

            cout << "Enter a file name: ";
            cin >> filename;   // Receive target file
            cout << "Enter the number of bytes in each smaller file: ";
            cin >> partsize;   // Receive volume size

            bigFile.open(filename.c_str(), ios::in | ios::binary);
            bigFile.seekg(0, ios::end);  // position get-ptr 0 bytes from end
            int size = bigFile.tellg();  // get-ptr position is now same as file size
            bigFile.seekg(0, ios::beg);  // position get-ptr 0 bytes from beginning

            for (int i = 0; i <= (size / partsize); i++)
            {
                // Build file name
                string partname = filename;  // The original filename
                string charnum;              // archive number
                stringstream out;            // stringstream object out, used to build the archive name
                out << "." << i;
                charnum = out.str();
                partname.append(charnum);    // put the part name together

                // Write new file part
                fstream filePart;
                filePart.open(partname.c_str(), ios::out | ios::binary); // Open new file with the name built above

                // Check if near the end of file
                if (bigFile.tellg() < (size - (size % partsize)))
                {
                    filePart.write(reinterpret_cast<char *>(&bigFile), partsize); // Write the selected amount to the file
                    filePart.close();                  // close file
                    bigFile.seekg(partsize, ios::cur); // move pointer to next position to be written
                }
                // Changes the size of the last volume because it is the end of the file
                else
                {
                    filePart.write(reinterpret_cast<char *>(&bigFile), (size % partsize)); // Write the selected amount to the file
                    filePart.close();                  // close file
                }
                cout << "File " << CurrentPath << partname << " produced" << endl; // display the progress of the split
            }
            bigFile.close();
            cout << "Split Complete." << endl;
            return 0;
        }

    Any ideas? Thanks!
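
    One thing worth pointing out about the loop above: filePart.write(reinterpret_cast<char *>(&bigFile), partsize) writes the bytes of the fstream object itself, not the file's contents - nothing is ever read from bigFile into a buffer. A minimal sketch of the read-a-chunk-then-write-it pattern, in Python purely to show the buffer in the middle:

        # Split a file into fixed-size parts: read a chunk, write that chunk.
        def split_file(path, part_size):
            with open(path, "rb") as big:
                index = 0
                while True:
                    chunk = big.read(part_size)        # read up to part_size bytes
                    if not chunk:                      # end of file
                        break
                    with open(f"{path}.{index}", "wb") as part:
                        part.write(chunk)              # write exactly what was read
                    index += 1

        split_file("big.bin", 64 * 1024)               # example usage (hypothetical file name)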

    Read the article

  • Error: Expected specifier-qualifier-list before ... in XCode Core Data

    - by Doron Katz
    Hi guys, I keep getting the error "error: expected specifier-qualifier-list before ..." for the Core Data code I'm working with, in the app delegate. When I get this error, along with about 40 other errors relating to managedObjectContext etc., I thought maybe the framework needed to be imported. I haven't done this before, but I went to the Frameworks group, chose Add Existing Frameworks, and it added CoreData.framework. I re-built and it still came up with the error. Do I need to import anything in the headers explicitly, or is there some other step I need to do? Thanks

    Read the article

  • WCF- Large Data

    - by Pinu
    I have a WCF Web Service with basicHTTPBinding , the server side the transfer mode is streamedRequest as the client will be sending us large file in the form of memory stream. But on client side when i use transfer mode as streamedRequest its giving me a this errro "The remote server returned an error: (400) Bad Request" And when i look in to trace information , i see this as the error message Exception: There is a problem with the XML that was received from the network. See inner exception for more details. InnerException: The body of the message cannot be read because it is empty. I am able to send up to 5MB of data using trasfermode as buffered , but it will effect the performance of my web service in the long run , if there are many clients who are trying to access the service in buffered transfer mode. SmartConnect.Service1Client Serv = new SmartConnectClient.SmartConnect.Service1Client(); SmartConnect.OrderCertMailResponse OrderCert = new SmartConnectClient.SmartConnect.OrderCertMailResponse(); OrderCert.UserID = "abcd"; OrderCert.Password = "7a80f6623"; OrderCert.SoftwareKey = "90af1"; string applicationDirectory = @"\\inid\utty\Bran"; byte[] CertMail = File.ReadAllBytes(applicationDirectory + @"\5mb_test.zip"); MemoryStream str = new MemoryStream(CertMail); //OrderCert.Color = true; //OrderCert.Duplex = false; //OrderCert.FirstClass = true; //OrderCert.File = str; //OrderCert.ReturnAddress1 = "Test123"; //OrderCert.ReturnAddress2 = "Test123"; //OrderCert.ReturnAddress3 = "Test123"; //OrderCert.ReturnAddress4 = "Test123"; OrderCert.File = str; //string OrderNumber = ""; //string Password = OrderCert.Password; //int ReturnCode = 0; //string ReturnMessage = ""; //string SoftwareKey = OrderCert.SoftwareKey; //string UserID = OrderCert.UserID; //OrderCert.File = str; MemoryStream FileStr = str; Serv.OrderCertMail(OrderCert); // Serv.OrderCertMail(ref OrderNumber, ref Password, ref ReturnCode, ref ReturnMessage, ref SoftwareKey, ref UserID, ref FileStr ); lblON.Text = OrderCert.OrderNumber; Serv.Close(); // My Web Service - Service Contract [OperationContract] OrderCertMailResponse OrderCertMail(OrderCertMailResponse OrderCertMail); [MessageContract] public class OrderCertMailResponse { string userID = ""; string password = ""; string softwareID = ""; MemoryStream file = null; //MemoryStream str = null; [MessageHeader] //[DataMember] public string UserID { get { return userID; } set { userID = value; } } [MessageHeader] //[DataMember] public string Password { get { return password; } set { password = value; } } [MessageHeader] //[DataMember] public string SoftwareKey { get { return softwareID; } set { softwareID = value; } } [MessageBodyMember] // [DataMember] public MemoryStream File { get { return file; } set { file = value; } } [MessageHeader] //[DataMember] public string ReturnMessage; [MessageHeader] //[DataMember] public int ReturnCode; [MessageHeader] public string OrderNumber; //// Control file fields //[MessageHeader] ////[DataMember] //public string ReturnAddress1; //[MessageHeader] ////[DataMember] //public string ReturnAddress2; //[MessageHeader] ////[DataMember] //public string ReturnAddress3; //[MessageHeader] ////[DataMember] //public string ReturnAddress4; //[MessageHeader] ////[DataMember] //public bool FirstClass; //[MessageHeader] ////[DataMember] //public bool Color; //[MessageHeader] ////[DataMember] //public bool Duplex; } [ServiceBehavior(IncludeExceptionDetailInFaults = true)] public class Service1 : IService1 { public OrderCertMailResponse 
OrderCertMail(OrderCertMailResponse OrderCertMail) { OrderService CertOrder = new OrderService(); ClientUserInfo Info = new ClientUserInfo(); ControlFileInfo Control = new ControlFileInfo(); //Info.Password = "f2496623"; // hard coded password for development testing purposes //Info.SoftwareKey = "6dbb71"; // hard coded software key this is a developement software key //Info.UserName = "sdfs"; // hard coded UserID - for testing Info.UserName = OrderCertMail.UserID.ToString(); Info.Password = OrderCertMail.Password.ToString(); Info.SoftwareKey = OrderCertMail.SoftwareKey.ToString(); //Control.ReturnAddress1 = OrderCertMail.ReturnAddress1; //Control.ReturnAddress2 = OrderCertMail.ReturnAddress2; //Control.ReturnAddress3 = OrderCertMail.ReturnAddress3; //Control.ReturnAddress4 = OrderCertMail.ReturnAddress4; //Control.CertMailFirstClass = OrderCertMail.FirstClass; //Control.CertMailColor = OrderCertMail.Color; //Control.CertMailDuplex = OrderCertMail.Duplex; //byte[] CertFile = new byte[0]; //byte[] CertFile = null; //string applicationDirectory = @"\\intrepid\utility\Bryan"; // byte[] CertMailFile = File.ReadAllBytes(applicationDirectory + @"\3mb_test.zip"); //MemoryStream str = new MemoryStream(CertMailFile); OrderCertMailResponseClass OrderCertResponse = CertOrder.OrderCertmail(Info,Control,OrderCertMail.File); OrderCertMail.ReturnMessage = OrderCertResponse.ReturnMessage.ToString(); OrderCertMail.ReturnCode = Convert.ToInt32(OrderCertResponse.ReturnCode.ToString()); OrderCertMail.OrderNumber = OrderCertResponse.OrderNumber; return OrderCertMail; }

    Read the article

  • Slow select when inserting large amounts of data (MYSQL)

    - by siannopollo
    I have a process that imports a lot of data (950k rows) using inserts that insert 500 rows at a time. The process generally takes about 12 hours, which isn't too bad. Normally doing a query on the table is pretty quick (under 1 second) as I've put (what I think to be) the proper indexes in place. The problem I'm having is trying to run a query when the import process is running. It is making the query take almost 2 minutes! What can I do to make these two things not compete for resources (or whatever)? I've looked into "insert delayed" but not sure I want to change the table to MyISAM. Thanks for any help!

    Read the article

  • ASP.NET Web API and Simple Value Parameters from POSTed data

    - by Rick Strahl
    In testing out various features of Web API I've found a few oddities in the way that the serialization is handled. These are probably not super common but they may throw you for a loop. Here's what I found. Simple Parameters from Xml or JSON Content Web API makes it very easy to create action methods that accept parameters that are automatically parsed from XML or JSON request bodies. For example, you can send a JavaScript JSON object to the server and Web API happily deserializes it for you. This works just fine:public string ReturnAlbumInfo(Album album) { return album.AlbumName + " (" + album.YearReleased.ToString() + ")"; } However, if you have methods that accept simple parameter types like strings, dates, number etc., those methods don't receive their parameters from XML or JSON body by default and you may end up with failures. Take the following two very simple methods:public string ReturnString(string message) { return message; } public HttpResponseMessage ReturnDateTime(DateTime time) { return Request.CreateResponse<DateTime>(HttpStatusCode.OK, time); } The first one accepts a string and if called with a JSON string from the client like this:var client = new HttpClient(); var result = client.PostAsJsonAsync<string>(http://rasxps/AspNetWebApi/albums/rpc/ReturnString, "Hello World").Result; which results in a trace like this: POST http://rasxps/AspNetWebApi/albums/rpc/ReturnString HTTP/1.1Content-Type: application/json; charset=utf-8Host: rasxpsContent-Length: 13Expect: 100-continueConnection: Keep-Alive "Hello World" produces… wait for it: null. Sending a date in the same fashion:var client = new HttpClient(); var result = client.PostAsJsonAsync<DateTime>(http://rasxps/AspNetWebApi/albums/rpc/ReturnDateTime, new DateTime(2012, 1, 1)).Result; results in this trace: POST http://rasxps/AspNetWebApi/albums/rpc/ReturnDateTime HTTP/1.1Content-Type: application/json; charset=utf-8Host: rasxpsContent-Length: 30Expect: 100-continueConnection: Keep-Alive "\/Date(1325412000000-1000)\/" (yes still the ugly MS AJAX date, yuk! This will supposedly change by RTM with Json.net used for client serialization) produces an error response: The parameters dictionary contains a null entry for parameter 'time' of non-nullable type 'System.DateTime' for method 'System.Net.Http.HttpResponseMessage ReturnDateTime(System.DateTime)' in 'AspNetWebApi.Controllers.AlbumApiController'. An optional parameter must be a reference type, a nullable type, or be declared as an optional parameter. Basically any simple parameters are not parsed properly resulting in null being sent to the method. For the string the call doesn't fail, but for the non-nullable date it produces an error because the method can't handle a null value. This behavior is a bit unexpected to say the least, but there's a simple solution to make this work using an explicit [FromBody] attribute:public string ReturnString([FromBody] string message) andpublic HttpResponseMessage ReturnDateTime([FromBody] DateTime time) which explicitly instructs Web API to read the value from the body. UrlEncoded Form Variable Parsing Another similar issue I ran into is with POST Form Variable binding. Web API can retrieve parameters from the QueryString and Route Values but it doesn't explicitly map parameters from POST values either. 
Taking our same ReturnString function from earlier and posting a message POST variable like this:var formVars = new Dictionary<string,string>(); formVars.Add("message", "Some Value"); var content = new FormUrlEncodedContent(formVars); var client = new HttpClient(); var result = client.PostAsync(http://rasxps/AspNetWebApi/albums/rpc/ReturnString, content).Result; which produces this trace: POST http://rasxps/AspNetWebApi/albums/rpc/ReturnString HTTP/1.1Content-Type: application/x-www-form-urlencodedHost: rasxpsContent-Length: 18Expect: 100-continue message=Some+Value When calling ReturnString:public string ReturnString(string message) { return message; } unfortunately it does not map the message value to the message parameter. This sort of mapping unfortunately is not available in Web API. Web API does support binding to form variables but only as part of model binding, which binds object properties to the POST variables. Sending the same message as in the previous example you can use the following code to pick up POST variable data:public string ReturnMessageModel(MessageModel model) { return model.Message; } public class MessageModel { public string Message { get; set; }} Note that the model is bound and the message form variable is mapped to the Message property as would other variables to properties if there were more. This works but it's not very dynamic. There's no real easy way to retrieve form variables (or query string values for that matter) in Web API's Request object as far as I can discern. Well only if you consider this easy:public string ReturnString() { var formData = Request.Content.ReadAsAsync<FormDataCollection>().Result; return formData.Get("message"); } Oddly FormDataCollection does not allow for indexers to work so you have to use the .Get() method which is rather odd. If you're running under IIS/Cassini you can always resort to the old and trusty HttpContext access for request data:public string ReturnString() { return HttpContext.Current.Request.Form["message"]; } which works fine and is easier. It's kind of a bummer that HttpRequestMessage doesn't expose some sort of raw Request object that has access to dynamic data - given that it's meant to serve as a generic REST/HTTP API that seems like a crucial missing piece. I don't see any way to read query string values either. To me personally HttpContext works, since I don't see myself using self-hosted code much.© Rick Strahl, West Wind Technologies, 2005-2012Posted in Web Api   Tweet !function(d,s,id){var js,fjs=d.getElementsByTagName(s)[0];if(!d.getElementById(id)){js=d.createElement(s);js.id=id;js.src="//platform.twitter.com/widgets.js";fjs.parentNode.insertBefore(js,fjs);}}(document,"script","twitter-wjs"); (function() { var po = document.createElement('script'); po.type = 'text/javascript'; po.async = true; po.src = 'https://apis.google.com/js/plusone.js'; var s = document.getElementsByTagName('script')[0]; s.parentNode.insertBefore(po, s); })();

    Read the article

  • HTML table headers always visible at top of window when viewing a large table

    - by Craig McQueen
    I would like to be able to "tweak" an HTML table's presentation to add a single feature: when scrolling down through the page so that the table is on the screen but the header rows are off-screen, I would like the headers to remain visible at the top of the viewing area. This would be conceptually like the "freeze panes" feature in Excel. However, an HTML page might contain several tables in it and I only would want it to happen for the table that is currently in-view, only while it is in-view. Note: I've seen one solution where the table data area is made scrollable while the headers do not scroll. That's not the solution I'm looking for.

    Read the article

  • RichFaces rich:insert takes a long time to output large files

    - by Mark Lewis
    Hello, I'm using a RichFaces <rich:insert> like this:

        <rich:panel header="my head">
            <a4j:outputPanel ajaxRendered="true">
                <rich:insert src="#{MyBacking.myPath}" highlight="groovy" />
            </a4j:outputPanel>
        </rich:panel>

    If I have a 60k file to output, it takes 23 seconds. I've got a requirement to output the contents of some larger files than that, and obviously the larger the file, the longer the wait for content. The recommendation in the answer to another related question is to introduce paging. I will, but the question is: why does it take so long to output 60k of text using JSF/RichFaces? This is reading off a local disk on a Windows XP SP2 PC - I can see from the log that the data has already been written to disk from the network. Other scripting languages appear to be faster than this - is it something to do with the JSF lifecycle having to handle the text, maybe? Thanks
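
    A minimal sketch of the paging idea mentioned above (Python, illustrative only): hand the component one slice of the file per request instead of the whole contents, so render time stops growing with file size.

        # Return only the requested page of lines from a large text file.
        from itertools import islice

        def read_page(path, page, lines_per_page=200):
            start = page * lines_per_page
            with open(path, "r", encoding="utf-8", errors="replace") as f:
                return list(islice(f, start, start + lines_per_page))

        first_page = read_page("/var/log/big-output.txt", 0)   # hypothetical path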

    Read the article

  • Are some data structures more suitable for functional programming than others?

    - by Rob Lachlan
    In Real World Haskell, there is a section titled "Life without arrays or hash tables" where the authors suggest that lists and trees are preferred in functional programming, whereas an array or a hash table might be used instead in an imperative program. This makes sense, since it's much easier to reuse part of an (immutable) list or tree when creating a new one than to do so with an array. So my questions are: Are there really significantly different usage patterns for data structures between functional and imperative programming? If so, is this a problem? What if you really do need a hash table for some application? Do you simply swallow the extra expense incurred for modifications?
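
    A tiny sketch of the usage-pattern difference being asked about, with Python standing in for the functional structures: an immutable cons list lets a "modified" version share almost everything with the old one, which is why lists and trees fall out naturally in functional code, whereas "updating" an array means copying it.

        # A persistent (immutable) singly linked list built from cons cells.
        class Cons:
            __slots__ = ("head", "tail")
            def __init__(self, head, tail=None):
                self.head, self.tail = head, tail

        base = Cons(2, Cons(3, Cons(4)))    # the list 2 -> 3 -> 4
        v1 = Cons(1, base)                  # "prepend 1": 1 -> 2 -> 3 -> 4
        v2 = Cons(0, base)                  # "prepend 0": 0 -> 2 -> 3 -> 4
        assert v1.tail is v2.tail           # both versions share base, untouched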

    Read the article

  • How do You Get a Specific Value From a System.Data.DataTable Object?

    - by Giffyguy
    I'm a low-level algorithm programmer, and databases are not really my thing - so this'll be a n00b question if ever there was one. I'm running a simple SELECT query through our development team's DAO. The DAO returns a System.Data.DataTable object containing the results of the query. This is all working fine so far. The problem I have run into now: I need to pull a value out of one of the fields of the first row in the resulting DataTable - and I have no idea where to even start. Microsoft is so confusing about this! Arrrg! Any advice would be appreciated. I'm not providing any code samples, because I believe that context is unnecessary here. I'm assuming that all DataTable objects work the same way, no matter how you run your queries - and therefore any additional information would just make this more confusing for everyone.

    Read the article

  • What are the "Navigation Properties" in this data model for?

    - by d03boy
    I've been wondering how to properly set up many-to-many relationships in ASP.NET MVC 2 using Linq2Sql for quite some time now. I found this blog post that seems to have a similar model layout as mine. If you take a look at the first screenshot showing the data model you can see that each model has "Navigation Properties" at the bottom of it. What exactly is this and why don't my models have them? I have the proper foreign keys put in to place. Most specifically, I am looking at the relationship between the Article and Category models since that is the only many-to-many relationship that I see and that's what I'm trying to model. Obviously I use an intermediary joining table between these two tables but I am having trouble understanding the proper methodology for modeling that relationship and I'm not finding this information anywhere on The Google.

    Read the article

  • One large file or multiple small files?

    - by Dan
    I have an application (currently written in Python as we iron out the specifics, but eventually it will be written in C) that makes use of individual records stored in plain text files. We can't use a database, and new records will need to be manually added regularly. My question is this: would it be faster to have a single file (500k-1Mb) and have my application open it, loop through it to find the record, and close it, OR would it be faster to have the records separated and named using some appropriate convention so that the application could simply loop over filenames to find the data it needs? I know my question is quite general, so pointers to any good articles on the topic are appreciated as much as suggestions. Thanks very much in advance for your time, Dan
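
    One middle ground worth measuring (sketched in Python, with the record layout assumed to be one comma-separated record per line, key first): keep the single file, but scan it once at startup to build an in-memory index of file offsets, so each lookup seeks straight to its record instead of looping through everything or opening thousands of small files.

        # Build {key: byte offset} once, then seek directly to any record.
        def build_index(path):
            index = {}
            with open(path, "rb") as f:
                while True:
                    offset = f.tell()
                    line = f.readline()
                    if not line:
                        break
                    key = line.split(b",", 1)[0]    # assumed: key is the first field
                    index[key] = offset
            return index

        def read_record(path, index, key):
            with open(path, "rb") as f:
                f.seek(index[key])
                return f.readline()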

    Read the article

  • loading data from file into 2d array

    - by Chris
    I am just starting with Perl and would like some help with arrays, please. I am reading lines from a data file and splitting each line into fields:

        open (INFILE, $infile);
        do {
            my $linedata = <INFILE>;
            my @data = split ',', $linedata;
            ....
        } until eof;

    I then want to store the individual field values (in @data) in an array so that the array looks like the input data file, i.e., the first "row" of the array contains the first line of data from INFILE, etc. Each line of data from the infile contains 4 values: x, y, z and w. Once the data are all in the array, I have to pass the array into another program which reads the x, y, z, w and displays the w value on a screen at the point determined by the x, y, z value. I cannot pass the data to the other program on a row-by-row basis, as the program expects the data to be in a 2D matrix format. Any help greatly appreciated. Chris

    Read the article

  • How to retrieve JSON Data Array from ExtJS Store

    - by user291928
    Is there a method allowing me to return my stored data in an ExtJS Grid Panel exactly the way I loaded it using:

        var data = ["value1", "value2"];
        Store.loadData(data);

    I would like to have a user option to reload the grid, but changes to the store need to be taken into account. The user can make changes and the grid is dynamically updated, but if I reload the grid, the data that was originally loaded is shown even though the database has been updated with the new changes. I would prefer not to reload the page and instead just let them reload the grid data itself with the newly changed store. I guess I'm looking for something like this:

        var data = Store.getData(); // data = ["value1", "value2"] after it's all said and done

    Or is there a different way to refresh the grid with new data that I am not aware of? Even using the proxy still uses the "original" data, not the new store. Thanks in advance

    Read the article

  • How to migrate large amounts of data from old database to new

    - by adam0101
    I need to move a huge amount of data from a couple of tables in an old database to a couple of different tables in a new database. The databases are SQL Server 2005 and are on the same box and SQL Server instance. I was told that if I try to do it all in one shot, the transaction log will fill up. Is there a way to disable the transaction log per table? If not, what is a good method for doing this? Would a cursor do it? This is just a one-time conversion.
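
    For what it's worth, SQL Server has no per-table switch that turns the transaction log off; the usual workaround is to copy in key ranges and commit each range, so no single transaction (and no single chunk of active log) has to cover the whole table. A sketch of that loop, with Python and pyodbc used only to make it concrete - the DSN, table and column names are invented:

        # Copy rows in batches of 10,000 by primary-key range, committing each batch.
        import pyodbc

        conn = pyodbc.connect("DSN=MigrationBox")       # hypothetical connection
        cur = conn.cursor()

        cur.execute("SELECT MAX(Id) FROM OldDb.dbo.OldTable")
        max_id = cur.fetchone()[0] or 0

        batch, last_id = 10000, 0
        while last_id < max_id:
            cur.execute(
                "INSERT INTO NewDb.dbo.NewTable (Id, Payload) "
                "SELECT Id, Payload FROM OldDb.dbo.OldTable "
                "WHERE Id > ? AND Id <= ?",
                (last_id, last_id + batch))
            conn.commit()                               # keep each transaction small
            last_id += batch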

    Read the article

  • how to format date when i load data from google-app-engine..

    - by zjm1126
    I use remote_api to download data from Google App Engine:

        appcfg.py download_data --config_file=helloworld/GreetingLoad.py --filename=a.csv --kind=Greeting helloworld

    The setting is:

        class AlbumExporter(bulkloader.Exporter):
            def __init__(self):
                bulkloader.Exporter.__init__(self, 'Greeting',
                                             [('author', str, None),
                                              ('content', str, None),
                                              ('date', str, None),
                                             ])

        exporters = [AlbumExporter]

    In the downloaded a.csv the date is not readable, while the date shown in the appspot.com admin console is the full value. So how do I get the full date? Thanks. I changed this to:

        class AlbumExporter(bulkloader.Exporter):
            def __init__(self):
                bulkloader.Exporter.__init__(self, 'Greeting',
                                             [('author', str, None),
                                              ('content', str, None),
                                              ('date', lambda x: datetime.datetime.strptime(x, '%m/%d/%Y').date(), None),
                                             ])

        exporters = [AlbumExporter]

    but I get an error.
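
    One note on the second attempt: strptime parses a string into a datetime, which is the wrong direction for an export - the stored value is already a datetime and needs to be formatted, not parsed. A hedged sketch of the direction fix (this assumes the exporter applies the second tuple element to the stored property value when writing each row, as the use of str above implies):

        # Format a stored datetime into a readable string for the CSV export.
        import datetime

        def format_date(value):
            if isinstance(value, datetime.datetime):
                return value.strftime("%Y-%m-%d %H:%M:%S")
            return str(value)

        # ('date', format_date, None) would then take the place of ('date', str, None)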

    Read the article

  • How to handle concurrency control in ASP.NET Dynamic Data?

    - by Andrew
    I've been quite impressed with dynamic data and how easy and quick it is to get a simple site up and running. I'm planning on using it for a simple internal HR admin site for registering people's skills/degrees/etc. I've been watching the intro videos at www.asp.net/dynamicdata and one thing they never mention is how to handle concurrency control. It seems that DD does not handle it right out of the box (unless there is some setting I haven't seen) as I manually generated a change conflict exception and the app failed without any user friendly message. Anybody know if DD handles it out of the box? Or do you have to somehow build it into the site?

    Read the article

  • How does the iPhone SDK Core Data system store date types to sqlite?

    - by Andrew Arrow
    I used Core Data to do this:

        NSManagedObjectContext *m = [self managedObjectContext];
        Foo *f = (Foo *)[NSEntityDescription insertNewObjectForEntityForName:@"Foo" inManagedObjectContext:m];
        f.created_at = [NSDate date];
        [m insertObject:f];
        NSError *error;
        [m save:&error];

    where the created_at field is defined as type "Date" in the xcdatamodel. When I export the SQL from the SQLite database it created, created_at is defined as type "timestamp" and the values look like: 290902422.72624 - nine digits before the . and then some fraction. What is this format? It's not epoch time and it's not julianday format. Epoch would be: 1269280338.81213, and julianday would be: 2455278.236746875 (notice only 7 digits before the . not 9 like I have). How can I convert a number like 290902422.72624 to epoch time? Thanks!
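
    The stored number is an NSDate time interval: seconds since the Cocoa reference date, 2001-01-01 00:00:00 UTC, which is why it is neither epoch time nor julianday. Converting to epoch time is a fixed offset of 978,307,200 seconds (the gap between 1970-01-01 and 2001-01-01). A small sketch in Python:

        # Convert a Core Data / NSDate "timestamp" to a Unix epoch value.
        import datetime

        REFERENCE_OFFSET = 978307200                    # seconds from 1970-01-01 to 2001-01-01

        def core_data_to_unix(value):
            return value + REFERENCE_OFFSET

        unix = core_data_to_unix(290902422.72624)       # -> 1269209622.72624
        print(datetime.datetime.fromtimestamp(unix, tz=datetime.timezone.utc))  # a moment in March 2010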

    Read the article

  • The best way to display data in a table view?

    - by LawVS
    Hey guys, I'm trying to display nutritional information in a table view but am struggling about the best way to show the info. I don't want to create unique views for each page as that would take up a lot of space. My ideal avenue would be like a website having links to data pages instead of individual views for each page. How would I best implement that? For example, I would want the first page to be a list of restaurants, then as you press the restaurant it would list the different types of food...the next page would be a list of those foods and then the final page would be the nutrition info itself. Thanks

    Read the article

  • How do I display non-normalized data in a hierarchical structure?

    - by rofly
    My issue is that I want to display data in a hierarchical structure, like so:

        Democrat
            County Clerk
                Candidate 1
                Candidate 2
            Magistrate
                Candidate 1
                Candidate 2
                Candidate 3

    But I'm retrieving the dataset like this:

        Party    | Office       | Candidate
        Democrat | County Clerk | Candidate 1
        Democrat | County Clerk | Candidate 2
        Democrat | Magistrate   | Candidate 1
        Democrat | Magistrate   | Candidate 2
        Democrat | Magistrate   | Candidate 3

    I planned on using nested repeaters, but I need a distinct value of Party, and then distinct values of office name within that Party, in order to do it. Are there any .NET functions to easily do what I'm attempting? Would there be a better way of displaying the information other than repeaters? Thanks in advance!
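
    A sketch of the usual approach (Python shown for the grouping idea; the .NET counterpart would be LINQ's GroupBy, or distinct Party values feeding an outer repeater and distinct Office values feeding the inner one): group rows by Party, then by Office within each Party, and walk the nested structure to render the hierarchy.

        # Turn flat (Party, Office, Candidate) rows into a two-level hierarchy.
        from collections import defaultdict

        rows = [
            ("Democrat", "County Clerk", "Candidate 1"),
            ("Democrat", "County Clerk", "Candidate 2"),
            ("Democrat", "Magistrate",   "Candidate 1"),
            ("Democrat", "Magistrate",   "Candidate 2"),
            ("Democrat", "Magistrate",   "Candidate 3"),
        ]

        tree = defaultdict(lambda: defaultdict(list))
        for party, office, candidate in rows:
            tree[party][office].append(candidate)

        for party, offices in tree.items():
            print(party)
            for office, candidates in offices.items():
                print("  " + office)
                for candidate in candidates:
                    print("    " + candidate)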

    Read the article

  • What is the best data structure and algorithm for comparing a list of strings?

    - by Chiraag E Sehar
    I want to find the longest possible sequence of words that match the following rules: each word can be used at most once; all words are strings; and two strings sa and sb can be concatenated if the LAST two characters of sa match the first two characters of sb. In that case, the concatenation is performed by overlapping those characters. For example: sa = "torino", sb = "novara", sa concat sb = "torinovara".

    For example, I have the following input file, "input.txt":

        novara torino vercelli ravenna napoli liverno messania noviligure roma

    And the output of the above file according to the above rules should be:

        torino novara ravenna napoli livorno noviligure

    since the longest possible concatenation is: torinovaravennapolivornovilligure

    Can anyone please help me out with this? What would be the best data structure for this?
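
    For a word list this small, plain backtracking over the orderings is enough, and a plain list is all the data structure it needs. A sketch in Python, ranking chains by word count and using the words exactly as given above:

        # Find the longest chain of words where each word's first two letters
        # overlap the previous word's last two letters; each word used at most once.
        def longest_chain(words):
            best = []

            def extend(chain, remaining):
                nonlocal best
                if len(chain) > len(best):
                    best = chain[:]
                for i, w in enumerate(remaining):
                    if chain[-1][-2:] == w[:2]:
                        extend(chain + [w], remaining[:i] + remaining[i + 1:])

            for i, w in enumerate(words):
                extend([w], words[:i] + words[i + 1:])
            return best

        words = ["novara", "torino", "vercelli", "ravenna", "napoli",
                 "liverno", "messania", "noviligure", "roma"]
        print(longest_chain(words))   # a six-word chain starting with "torino"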

    Read the article

  • Cache for large read only database recommendation

    - by paddydub
    I am building a site with Spring, Hibernate and MySQL. The MySQL database contains information on coordinates, locations, etc.; it is never updated, only queried. The database contains 15000 rows of coordinates and 48000 rows of coordinate connections. Every time a request is processed, the application needs to read all these coordinates, which takes approx 3-4 seconds. I would like to set up a cache to allow quick access to the data. I'm researching memcached at the moment - can you please advise if this would be my best option?
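
    Since the data never changes, the simplest win is to load it once and keep it in memory for the lifetime of the application; memcached earns its keep mainly when several application instances need to share one cache. A sketch of the load-once idea (Python, with a stand-in for the real query; in a Spring/Hibernate stack the analogous options are an application-scoped bean holding the loaded data, or the Hibernate second-level/query cache such as Ehcache):

        # Load the coordinate data once; every later call returns the cached copy.
        from functools import lru_cache
        import time

        def fetch_all_coordinates_from_db():
            time.sleep(3)                                   # stand-in for the 3-4 second query
            return [(i, i * 2.0, i * 3.0) for i in range(15000)]

        @lru_cache(maxsize=1)
        def load_coordinates():
            return fetch_all_coordinates_from_db()          # runs once per process

        coords = load_coordinates()                         # slow the first time
        coords = load_coordinates()                         # instant afterwards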

    Read the article
