Search Results

Search found 59643 results on 2386 pages for 'data migration'.

  • saveAll not saving associated data

    - by junior29101
    I'm having a problem trying to save (update) some associated data. I've read about a million Google results, but nothing seems to be the solution. I'm at my wit's end and hope some kind soul here can help. I'm using 1.3.0-RC4, my database is InnoDB. Course has many course_tees; CourseTee belongs to Course. My controller function is pretty simple (I've made it as simple as possible): if (!empty($this->data)) $this->Course->saveAll($this->data); I've tried a lot of different variations of that ($this->data['Course'], save($this->data), etc.) without luck. It saves the Course info, but not the CourseTee stuff. I don't get an error message. Since I don't know how many tees any given course will have, I generate the form inputs dynamically in a loop: $form->input('CourseTee.'.$i.'.teeName', array('error' => false, 'label' => false, 'value' => $data['course_tees'][$i]['teeName'])) The course inputs are simpler: $form->input('Course.hcp'.$j, array('error' => false, 'label' => false, 'class' => 'form_small_w', 'value' => $data['Course']['hcp'.$j])) And this is how my data is formatted: Array ( [Course] => Array ( [id] => 1028476 ... ) [CourseTee] => Array ( [0] => Array ( [key] => 636 [courseid] => 1028476 ... ) [1] => Array ( [key] => 637 [courseid] => 1028476 ... ) ... ) )

    Read the article

  • JMS message. Model to include data or pointers to data?

    - by John
    I am trying to resolve a design difference of opinion where neither of us has experience with JMS. We want to use JMS to communicate between a j2ee application and the stand-alone application when a new event occurs. We would be using a single point-to-point queue. Both sides are Java-based. The question is whether to send the event data itself in the JMS message body or to send a pointer to the data so that the stand-alone program can retrieve it. Details below. I have a j2ee application that supports data entry of new and updated persons and related events. The person records and associated events are written to an Oracle database. There are also stand-alone, separate programs that contribute new person and event records to the database. When a new event occurs through any of 5-10 different application functions, I need to notify remote systems through an outbound interface using an industry-specific standard messaging protocol. The outbound interface has been designed as a stand-alone application to support scalability through asynchronous operation and by moving it to a separate server. The j2ee application currently has most of the data in memory at the time the event is entered. The data would consist of approximately 6 different objects; a person object and some with multiple instances for an average size in the range of 3000 to 20,000 bytes. Some special cases could be many times this amount. From a performance and reliability perspective, should I model the JMS message to pass all the data needed to create the interface message, or model the JMS message to contain record keys for the data and have the stand-alone Java application retrieve the data to create the interface message?
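
    To make the trade-off concrete, here is a minimal, hypothetical sketch of the two message shapes being weighed, written against the plain JMS 1.1 point-to-point API; the class name, method names, and key fields (personId, eventId) are illustrative assumptions, not taken from the question:

      import javax.jms.Connection;
      import javax.jms.ConnectionFactory;
      import javax.jms.MapMessage;
      import javax.jms.MessageProducer;
      import javax.jms.ObjectMessage;
      import javax.jms.Queue;
      import javax.jms.Session;

      public class EventPublisher {
          private final ConnectionFactory factory; // looked up from JNDI elsewhere
          private final Queue eventQueue;          // the single point-to-point queue

          public EventPublisher(ConnectionFactory factory, Queue eventQueue) {
              this.factory = factory;
              this.eventQueue = eventQueue;
          }

          // Option A: send the full event payload (a Serializable graph of the ~6 objects).
          public void sendFullPayload(java.io.Serializable eventGraph) throws Exception {
              Connection connection = factory.createConnection();
              try {
                  Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
                  MessageProducer producer = session.createProducer(eventQueue);
                  ObjectMessage message = session.createObjectMessage(eventGraph);
                  producer.send(message);
              } finally {
                  connection.close();
              }
          }

          // Option B: send only record keys; the consumer re-reads the rows from Oracle.
          public void sendKeysOnly(long personId, long eventId) throws Exception {
              Connection connection = factory.createConnection();
              try {
                  Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
                  MessageProducer producer = session.createProducer(eventQueue);
                  MapMessage message = session.createMapMessage();
                  message.setLong("personId", personId);
                  message.setLong("eventId", eventId);
                  producer.send(message);
              } finally {
                  connection.close();
              }
          }
      }

    Sending only keys keeps messages small and uniform in size but makes the stand-alone consumer re-read the rows from Oracle; sending the full graph avoids that extra read at the cost of occasionally very large messages and tighter coupling to the in-memory object model.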

    Read the article

  • My web server with 16GB of RAM shows all RAM as used, but is it really? (see the 'top' output)

    - by Alex
    I have some questions about my web server. Its a LAMP web server running centos 5.5 and php5, mysql5. The server gets hundreds (maybe thousand) of concurrent users during peak hours. I'm trying to optimize a little and understand "top". From what I can see: all 16GB of my ram have been used up? does that mean that my server needs more memory? My swap is only 2GB, should it be increased? usually during peak hours my server load average first number is about 2.5-3. What could I do to optimize the server so that the load average even during peak doesn't go above 1? In the past I was told a good working server should stay under 1 load, is this still true? Although even during load of 2.5-3, server pages and applications seem to load with pretty good speed. what should the memory size in php.ini be set to? top - 14:30:18 up 2 days, 12:41, 5 users, load average: 1.25, 1.74, 2.92 Tasks: 305 total, 2 running, 302 sleeping, 0 stopped, 1 zombie Cpu(s): 6.3%us, 0.9%sy, 0.0%ni, 92.5%id, 0.2%wa, 0.0%hi, 0.1%si, 0.0%st Mem: 16427200k total, 16111472k used, 315728k free, 3120316k buffers Swap: 2104496k total, 268k used, 2104228k free, 6216756k cached PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND 29080 apache 15 0 358m 36m 5192 S 20.2 0.2 2:08.40 httpd 29093 apache 18 0 357m 36m 5192 S 18.2 0.2 2:02.52 httpd 29079 apache 15 0 370m 49m 5832 S 10.0 0.3 2:32.14 httpd 1812 apache 15 0 370m 49m 5196 S 7.3 0.3 2:25.30 httpd 5204 apache 15 0 358m 36m 5168 S 5.3 0.2 0:59.28 httpd 29075 apache 15 0 370m 48m 5184 S 3.3 0.3 2:15.93 httpd 9712 apache 15 0 360m 38m 5180 S 3.0 0.2 0:54.81 httpd 29072 apache 16 0 358m 36m 5192 S 2.7 0.2 2:24.43 httpd 6310 apache 17 0 388m 67m 5180 S 2.3 0.4 0:58.85 httpd 8674 apache 15 0 343m 21m 4980 S 2.0 0.1 0:07.91 httpd 29085 apache 15 0 371m 49m 5224 S 2.0 0.3 2:16.86 httpd 29083 apache 15 0 370m 48m 5196 S 1.7 0.3 2:10.64 httpd 5575 apache 15 0 357m 36m 5228 S 1.3 0.2 0:53.78 httpd 29066 apache 15 0 379m 59m 5860 R 1.3 0.4 2:11.93 httpd 29078 apache 15 0 370m 48m 5188 S 1.3 0.3 2:14.52 httpd 29084 apache 15 0 370m 48m 5208 S 1.0 0.3 2:02.49 httpd 29089 apache 15 0 370m 48m 5188 S 1.0 0.3 2:27.61 httpd 29082 apache 15 0 390m 68m 5188 S 0.7 0.4 2:32.48 httpd 29984 apache 15 0 358m 36m 5228 S 0.7 0.2 2:08.32 httpd 3571 root 16 0 13400 1792 848 S 0.3 0.0 2:37.89 top 4419 mysql 15 0 668m 175m 7204 S 0.3 1.1 3:32.25 mysqld 28181 root 15 0 90460 3624 2680 S 0.3 0.0 0:17.60 sshd 29091 apache 15 0 390m 69m 5196 S 0.3 0.4 2:29.99 httpd 32476 root 15 0 12900 1320 848 R 0.3 0.0 0:06.46 top 1 root 15 0 10372 680 572 S 0.0 0.0 0:02.01 init 2 root RT -5 0 0 0 S 0.0 0.0 0:00.51 migration/0 3 root 34 19 0 0 0 S 0.0 0.0 0:00.07 ksoftirqd/0 4 root RT -5 0 0 0 S 0.0 0.0 0:00.00 watchdog/0 5 root RT -5 0 0 0 S 0.0 0.0 0:00.12 migration/1 6 root 34 19 0 0 0 S 0.0 0.0 0:00.03 ksoftirqd/1 7 root RT -5 0 0 0 S 0.0 0.0 0:00.00 watchdog/1 8 root RT -5 0 0 0 S 0.0 0.0 0:00.06 migration/2 9 root 34 19 0 0 0 S 0.0 0.0 0:00.03 ksoftirqd/2 10 root RT -5 0 0 0 S 0.0 0.0 0:00.00 watchdog/2 11 root RT -5 0 0 0 S 0.0 0.0 0:00.06 migration/3 12 root 34 19 0 0 0 S 0.0 0.0 0:00.04 ksoftirqd/3 13 root RT -5 0 0 0 S 0.0 0.0 0:00.00 watchdog/3 14 root RT -5 0 0 0 S 0.0 0.0 0:01.45 migration/4 15 root 34 19 0 0 0 S 0.0 0.0 0:00.01 ksoftirqd/4 16 root RT -5 0 0 0 S 0.0 0.0 0:00.00 watchdog/4 17 root RT -5 0 0 0 S 0.0 0.0 0:00.22 migration/5 18 root 34 19 0 0 0 S 0.0 0.0 0:00.01 ksoftirqd/5 19 root RT -5 0 0 0 S 0.0 0.0 0:00.00 watchdog/5 20 root RT -5 0 0 0 S 0.0 0.0 0:00.15 migration/6 21 root 34 19 0 0 0 S 
0.0 0.0 0:00.02 ksoftirqd/6 22 root RT -5 0 0 0 S 0.0 0.0 0:00.00 watchdog/6 23 root RT -5 0 0 0 S 0.0 0.0 0:00.15 migration/7 24 root 34 19 0 0 0 S 0.0 0.0 0:00.01 ksoftirqd/7 25 root RT -5 0 0 0 S 0.0 0.0 0:00.00 watchdog/7 26 root RT -5 0 0 0 S 0.0 0.0 0:00.19 migration/8 27 root 34 19 0 0 0 S 0.0 0.0 0:00.04 ksoftirqd/8 28 root RT -5 0 0 0 S 0.0 0.0 0:00.00 watchdog/8 29 root RT -5 0 0 0 S 0.0 0.0 0:00.34 migration/9 30 root 34 19 0 0 0 S 0.0 0.0 0:00.03 ksoftirqd/9 31 root RT -5 0 0 0 S 0.0 0.0 0:00.00 watchdog/9 32 root RT -5 0 0 0 S 0.0 0.0 0:00.16 migration/10 33 root 34 19 0 0 0 S 0.0 0.0 0:00.04 ksoftirqd/10 34 root RT -5 0 0 0 S 0.0 0.0 0:00.00 watchdog/10 35 root RT -5 0 0 0 S 0.0 0.0 0:00.12 migration/11 36 root 34 19 0 0 0 S 0.0 0.0 0:00.05 ksoftirqd/11 37 root RT -5 0 0 0 S 0.0 0.0 0:00.00 watchdog/11 38 root RT -5 0 0 0 S 0.0 0.0 0:00.35 migration/12

    Read the article

  • A failed disk (Pay for professional service or SpinRite?)(new edit)

    - by huggie
    EDIT: After much negotiating and begging and seeing through the promotional smoke screen, thanks to the nice representative who took my case, I now know that the engineer has already fixed my NTFS partition (I guess it might be a bad block in the partition table?). She told me that the problem was considered minor, and I should be able to boot normally and just copy stuff out. Whew.. I'm glad I didn't agree to the NTD $16,000 deal. New question (should this be in a new thread?): is it safer to use the Linux "dd" command, or is it better to boot normally into Windows XP and just copy stuff out? EDIT2: Thanks for all the help. I gave the best answer to Console as it's most directly related to my question, but many suggestions are helpful and informative. ---- ORIGINAL POST BELOW --- Hi, in my previous post (you don't need to read it, but it's at http://superuser.com/questions/48838/windows-xp-a-disk-read-error-occurred), I said that my hard disk was not booting and was showing "a disk read error occurred". I took it to a recovery professional. A representative who responded today told me that the NTFS partitions have an "NTFS partition system crash". I have no idea what that means. The engineer handling my drive will not be available for contact till tomorrow. Now the company charges me NTD (New Taiwan Dollar) $16,000 to recover lost data; that's kind of a lot considering that my graduate student monthly stipend is currently NTD $32,000 (the max allowed by regulation; it may be lower and may change depending on funding). Now I'm weighing the options. Option A: let the professionals recover it for half of my monthly stipend. If the files/directories I designated are not recovered, I don't pay a penny (other than the initial examination fee of NTD $1000, which I've already paid). Option B: try SpinRite myself; if that fails, back to Option A. I spoke to the representative at the company; they recommended that I not handle it on my own (yeah, of course that's what they all want to say, right?), and said that at this price tag the disk error is probably relatively minor and the data recoverable. But the representative really did not have detailed information about the disk failure, so I didn't take her recommendation readily. One thing I did heed was that she said what they would do is duplicate the disk before attempting recovery, so there would be no data loss (is this true? can't duplicating cause further data loss?). That sounds very good to me. Or maybe a third option: Option C: negotiate to pay them just to duplicate the disk, hopefully for a much smaller price tag, then try SpinRite; if that fails, back to Option A. This is a difficult decision. Ultimately I want my data back, but if a cheaper way is available to achieve the same thing... Can operating with SpinRite also corrupt data in some way? I've no idea what happened to my drive. I'll attempt to contact the engineer, hope to get it clarified, and make an edit here.

    Read the article

  • Implementing an async "read all currently available data from stream" operation

    - by Jon
    I recently provided an answer to this question: C# - Realtime console output redirection. As often happens, explaining stuff (here "stuff" was how I tackled a similar problem) leads you to greater understanding and/or, as is the case here, "oops" moments. I realized that my solution, as implemented, has a bug. The bug has little practical importance, but it has an extremely large importance to me as a developer: I can't rest easy knowing that my code has the potential to blow up. Squashing the bug is the purpose of this question. I apologize for the long intro, so let's get dirty. I wanted to build a class that allows me to receive input from a console's standard output Stream. Console output streams are of type FileStream; the implementation can cast to that, if needed. There is also an associated StreamReader already present to leverage. There is only one thing I need to implement in this class to achieve my desired functionality: an async "read all the data available this moment" operation. Reading to the end of the stream is not viable because the stream will not end unless the process closes the console output handle, and it will not do that because it is interactive and expecting input before continuing. I will be using that hypothetical async operation to implement event-based notification, which will be more convenient for my callers. The public interface of the class is this: public class ConsoleAutomator { public event EventHandler<ConsoleOutputReadEventArgs> StandardOutputRead; public void StartSendingEvents(); public void StopSendingEvents(); } StartSendingEvents and StopSendingEvents do what they advertise; for the purposes of this discussion, we can assume that events are always being sent without loss of generality. The class uses these two fields internally: protected readonly StringBuilder inputAccumulator = new StringBuilder(); protected readonly byte[] buffer = new byte[256]; The functionality of the class is implemented in the methods below. To get the ball rolling: public void StartSendingEvents(); { this.stopAutomation = false; this.BeginReadAsync(); } To read data out of the Stream without blocking, and also without requiring a carriage return char, BeginRead is called: protected void BeginReadAsync() { if (!this.stopAutomation) { this.StandardOutput.BaseStream.BeginRead( this.buffer, 0, this.buffer.Length, this.ReadHappened, null); } } The challenging part: BeginRead requires using a buffer. This means that when reading from the stream, it is possible that the bytes available to read ("incoming chunk") are larger than the buffer. Remember that the goal here is to read all of the chunk and call event subscribers exactly once for each chunk. To this end, if the buffer is full after EndRead, we don't send its contents to subscribers immediately but instead append them to a StringBuilder. The contents of the StringBuilder are only sent back whenever there is no more to read from the stream. 
private void ReadHappened(IAsyncResult asyncResult) { var bytesRead = this.StandardOutput.BaseStream.EndRead(asyncResult); if (bytesRead == 0) { this.OnAutomationStopped(); return; } var input = this.StandardOutput.CurrentEncoding.GetString( this.buffer, 0, bytesRead); this.inputAccumulator.Append(input); if (bytesRead < this.buffer.Length) { this.OnInputRead(); // only send back if we 're sure we got it all } this.BeginReadAsync(); // continue "looping" with BeginRead } After any read which is not enough to fill the buffer (in which case we know that there was no more data to be read during the last read operation), all accumulated data is sent to the subscribers: private void OnInputRead() { var handler = this.StandardOutputRead; if (handler == null) { return; } handler(this, new ConsoleOutputReadEventArgs(this.inputAccumulator.ToString())); this.inputAccumulator.Clear(); } (I know that as long as there are no subscribers the data gets accumulated forever. This is a deliberate decision). The good This scheme works almost perfectly: Async functionality without spawning any threads Very convenient to the calling code (just subscribe to an event) Never more than one event for each time data is available to be read Is almost agnostic to the buffer size The bad That last almost is a very big one. Consider what happens when there is an incoming chunk with length exactly equal to the size of the buffer. The chunk will be read and buffered, but the event will not be triggered. This will be followed up by a BeginRead that expects to find more data belonging to the current chunk in order to send it back all in one piece, but... there will be no more data in the stream. In fact, as long as data is put into the stream in chunks with length exactly equal to the buffer size, the data will be buffered and the event will never be triggered. This scenario may be highly unlikely to occur in practice, especially since we can pick any number for the buffer size, but the problem is there. Solution? Unfortunately, after checking the available methods on FileStream and StreamReader, I can't find anything which lets me peek into the stream while also allowing async methods to be used on it. One "solution" would be to have a thread wait on a ManualResetEvent after the "buffer filled" condition is detected. If the event is not signaled (by the async callback) in a small amount of time, then more data from the stream will not be forthcoming and the data accumulated so far should be sent to subscribers. However, this introduces the need for another thread, requires thread synchronization, and is plain inelegant. Specifying a timeout for BeginRead would also suffice (call back into my code every now and then so I can check if there's data to be sent back; most of the time there will not be anything to do, so I expect the performance hit to be negligible). But it looks like timeouts are not supported in FileStream. Since I imagine that async calls with timeouts are an option in bare Win32, another approach might be to PInvoke the hell out of the problem. But this is also undesirable as it will introduce complexity and simply be a pain to code. Is there an elegant way to get around the problem? Thanks for being patient enough to read all of this. Update: I definitely did not communicate the scenario well in my initial writeup. I have since revised the writeup quite a bit, but to be extra sure: The question is about how to implement an async "read all the data available this moment" operation. 
My apologies to the people who took the time to read and answer without me making my intent clear enough.

    Read the article

  • Sensible Way to Pass Web Data in XML to a SQL Server Database

    - by Emtucifor
    After exploring several different ways to pass web data to a database for update purposes, I'm wondering if XML might be a good strategy. The database is currently SQL 2000. In a few months it will move to SQL 2005 and I will be able to change things if needed, but I need a SQL 2000 solution now. First of all, the database in question uses the EAV model. I know that this kind of database is generally highly frowned on, so for the purposes of this question, please just accept that this is not going to change. The current update method has the web server inserting values (that have all been converted first to their correct underlying types, then to sql_variant) to a temp table. A stored procedure is then run which expects the temp table to exist and it takes care of updating, inserting, or deleting things as needed. So far, only a single element has needed to be updated at a time. But now, there is a requirement to be able to edit multiple elements at once, and also to support hierarchical elements, each of which can have its own list of attributes. Here's some example XML I hand-typed to demonstrate what I'm thinking of. Note that in this database the Entity is Element and an ID of 0 signifies "create" aka an insert of a new item. <Elements> <Element ID="1234"> <Attr ID="221">Value</Attr> <Attr ID="225">287</Attr> <Attr ID="234"> <Element ID="99825"> <Attr ID="7">Value1</Attr> <Attr ID="8">Value2</Attr> <Attr ID="9" Action="delete" /> </Element> <Element ID="99826" Action="delete" /> <Element ID="0" Type="24"> <Attr ID="7">Value4</Attr> <Attr ID="8">Value5</Attr> <Attr ID="9">Value6</Attr> </Element> <Element ID="0" Type="24"> <Attr ID="7">Value7</Attr> <Attr ID="8">Value8</Attr> <Attr ID="9">Value9</Attr> </Element> </Attr> <Rel ID="3827" Action="delete" /> <Rel ID="2284" Role="parent"> <Element ID="3827" /> <Element ID="3829" /> <Attr ID="665">1</Attr> </Rel> <Rel ID="0" Type="23" Role="child"> <Element ID="3830" /> <Attr ID="67" </Rel> </Element> <Element ID="0" Type="87"> <Attr ID="221">Value</Attr> <Attr ID="225">569</Attr> <Attr ID="234"> <Element ID="0" Type="24"> <Attr ID="7">Value10</Attr> <Attr ID="8">Value11</Attr> <Attr ID="9">Value12</Attr> </Element> </Attr> </Element> <Element ID="1235" Action="delete" /> </Elements> Some Attributes are straight value types, such as AttrID 221. But AttrID 234 is a special "multi-value" type that can have a list of elements underneath it, and each one can have one or more values. Types only need to be presented when a new item is created, since the ElementID fully implies the type if it already exists. I'll probably support only passing in changed items (as detected by javascript). And there may be an Action="Delete" on Attr elements as well, since NULLs are treated as "unselected"--sometimes it's very important to know if a Yes/No question has intentionally been answered No or if no one's bothered to say Yes yet. There is also a different kind of data, a Relationship. At this time, those are updated through individual AJAX calls as things are edited in the UI, but I'd like to include those so that changes to relationships can be canceled (right now, once you change it, it's done). So those are really elements, too, but they are called Rel instead of Element. Relationships are implemented as ElementID1 and ElementID2, so the RelID 2284 in the XML above is in the database as: ElementID 2284 ElementID1 1234 ElementID2 3827 Having multiple children in one relationship isn't currently supported, but it would be nice later. 
Does this strategy and the example XML make sense? Is there a more sensible way? I'm just looking for some broad critique to help save me from going down a bad path. Any aspect that you'd like to comment on would be helpful. The web language happens to be Classic ASP, but that could change to ASP.Net at some point. A persistence engine like Linq or nHibernate is probably not acceptable right now--I just want to get this already working application enhanced without a huge amount of development time. I'll choose the answer that shows experience and has a balance of good warnings about what not to do, confirmations of what I'm planning to do, and recommendations about something else to do. I'll make it as objective as possible. P.S. I'd like to handle unicode characters as well as very long strings (10k +). UPDATE I have had this working for some time and I used the ADO Recordset Save-To-Stream trick to make creating the XML really easy. The result seems to be fairly fast, though if speed ever becomes a problem I may revisit this. In the meantime, my code works to handle any number of elements and attributes on the page at once, including updating, deleting, and creating new items all in one go. I settled on a scheme like so for all my elements: Existing data elements Example: input name e12345_a678 (element 12345, attribute 678), the input value is the value of the attribute. New elements Javascript copies a hidden template of the set of HTML elements needed for the type into the correct location on the page, increments a counter to get a new ID for this item, and prepends the number to the names of the form items. var newid = 0; function metadataAdd(reference, nameid, value) { var t = document.createElement('input'); t.setAttribute('name', nameid); t.setAttribute('id', nameid); t.setAttribute('type', 'hidden'); t.setAttribute('value', value); reference.appendChild(t); } function multiAdd(target, parentelementid, attrid, elementtypeid) { var proto = document.getElementById('a' + attrid + '_proto'); var instance = document.createElement('p'); target.parentNode.parentNode.insertBefore(instance, target.parentNode); var thisid = ++newid; instance.innerHTML = proto.innerHTML.replace(/{prefix}/g, 'n' + thisid + '_'); instance.id = 'n' + thisid; instance.className += ' new'; metadataAdd(instance, 'n' + thisid + '_p', parentelementid); metadataAdd(instance, 'n' + thisid + '_c', attrid); metadataAdd(instance, 'n' + thisid + '_t', elementtypeid); return false; } Example: Template input name _a678 becomes n1_a678 (a new element, the first one on the page, attribute 678). all attributes of this new element are tagged with the same prefix of n1. The next new item will be n2, and so on. Some hidden form inputs are created: n1_t, value is the elementtype of the element to be created n1_p, value is the parent id of the element (if it is a relationship) n1_c, value is the child id of the element (if it is a relationship) Deleting elements A hidden input is created in the form e12345_t with value set to 0. The existing controls displaying that attribute's values are disabled so they are not included in the form post. So "set type to 0" is treated as delete. With this scheme, every item on the page has a unique name and can be distinguished properly, and every action can be represented properly. 
When the form is posted, here's a sample of building one of the two recordsets used (classic ASP code): Set Data = Server.CreateObject("ADODB.Recordset") Data.Fields.Append "ElementID", adInteger, 4, adFldKeyColumn Data.Fields.Append "AttrID", adInteger, 4, adFldKeyColumn Data.Fields.Append "Value", adLongVarWChar, 2147483647, adFldIsNullable Or adFldMayBeNull Data.CursorLocation = adUseClient Data.CursorType = adOpenDynamic Data.Open This is the recordset for values, the other is for the elements themselves. I step through the posted form and for the element recordset use a Scripting.Dictionary populated with instances of a custom Class that has the properties I need, so that I can add the values piecemeal, since they don't always come in order. New elements are added as negative to distinguish them from regular elements (rather than requiring a separate column to indicate if it is new or addresses an existing element). I use regular expression to tear apart the form keys: "^(e|n)([0-9]{1,10})_(a|p|t|c)([0-9]{0,10})$" Then, adding an attribute looks like this. Data.AddNew ElementID.Value = DataID AttrID.Value = Integerize(Matches(0).SubMatches(3)) AttrValue.Value = Request.Form(Key) Data.Update ElementID, AttrID, and AttrValue are references to the fields of the recordset. This method is hugely faster than using Data.Fields("ElementID").Value each time. I loop through the Dictionary of element updates and ignore any that don't have all the proper information, adding the good ones to the recordset. Then I call my data-updating stored procedure like so: Set Cmd = Server.CreateObject("ADODB.Command") With Cmd Set .ActiveConnection = MyDBConn .CommandType = adCmdStoredProc .CommandText = "DataPost" .Prepared = False .Parameters.Append .CreateParameter("@ElementMetadata", adLongVarWChar, adParamInput, 2147483647, XMLFromRecordset(Element)) .Parameters.Append .CreateParameter("@ElementData", adLongVarWChar, adParamInput, 2147483647, XMLFromRecordset(Data)) End With Result.Open Cmd ' previously created recordset object with options set Here's the function that does the xml conversion: Private Function XMLFromRecordset(Recordset) Dim Stream Set Stream = Server.CreateObject("ADODB.Stream") Stream.Open Recordset.Save Stream, adPersistXML Stream.Position = 0 XMLFromRecordset = Stream.ReadText End Function Just in case the web page needs to know, the SP returns a recordset of any new elements, showing their page value and their created value (so I can see that n1 is now e12346 for example). Here are some key snippets from the stored procedure. 
Note this is SQL 2000 for now, though I'll be able to switch to 2005 soon: CREATE PROCEDURE [dbo].[DataPost] @ElementMetaData ntext, @ElementData ntext AS DECLARE @hdoc int --- snip --- EXEC sp_xml_preparedocument @hdoc OUTPUT, @ElementMetaData, '<xml xmlns:s="uuid:BDC6E3F0-6DA3-11d1-A2A3-00AA00C14882" xmlns:dt="uuid:C2F41010-65B3-11d1-A29F-00AA00C14882" xmlns:rs="urn:schemas-microsoft-com:rowset" xmlns:z="#RowsetSchema" />' INSERT #ElementMetadata (ElementID, ElementTypeID, ElementID1, ElementID2) SELECT * FROM OPENXML(@hdoc, '/xml/rs:data/rs:insert/z:row', 0) WITH ( ElementID int, ElementTypeID int, ElementID1 int, ElementID2 int ) ORDER BY ElementID -- orders negative items (new elements) first so they begin counting at 1 for later ID calculation EXEC sp_xml_removedocument @hdoc --- snip --- UPDATE E SET E.ElementTypeID = M.ElementTypeID FROM Element E INNER JOIN #ElementMetadata M ON E.ElementID = M.ElementID WHERE E.ElementID >= 1 AND M.ElementTypeID >= 1 The following query does the correlation of the negative new element ids to the newly inserted ones: UPDATE #ElementMetadata -- Correlate the new ElementIDs with the input rows SET NewElementID = Scope_Identity() - @@RowCount + DataID WHERE ElementID < 0 Other set-based queries do all the other work of validating that the attributes are allowed, are the correct data type, and inserting, updating, and deleting elements and attributes. I hope this brief run-down is useful to others some day! Converting ADO Recordsets to an XML stream was a huge winner for me as it saved all sorts of time and had a namespace and schema already defined that made the results come out correctly. Using a flatter XML format with 2 inputs was also much easier than sticking to some ideal about having everything in a single XML stream.

    Read the article

  • uploading via http post (multipart/form-data) silently fails with big files

    - by matteo
    When uploading multipart/form-data forms via an HTTP POST request to my Apache web server, very big files (e.g. 30MB) are silently discarded. On the server side it looks as if the attached file was received with a size of 0 bytes. On the client side it looks like it had been uploaded successfully (it takes the expected long time to upload and the browser gives no error message). On the server, nothing is logged into the error log. An entry is logged into the access log as if everything was ok (a POST request and a 200 OK response). These uploads are being posted to a PHP script. In the PHP script, if I print_r($_FILES), I see the following information for the relevant file: [file5] => Array ( [name] => MOV023.3gp [type] => video/3gpp [tmp_name] => /tmp/phpgOdvYQ [error] => 0 [size] => 0 ) Note both [error] => 0 (which should mean no error) and [size] => 0 (as if the file was empty). My PHP script runs fine and receives all the rest of the data except these files. move_uploaded_file succeeds on these files and actually copies them as 0-byte files. I've already changed the php directives max_upload_size to 50M and post_max_size to 200M, so neither the single file nor the request exceeds any size limit. max_execution_time is not relevant, because the time to transfer the data does not count; and I've increased max_input_time to 1000 seconds, though this shouldn't be necessary since this is the time taken to parse the input data, not the time taken to upload it. Is there any Apache configuration, prior to PHP, that could be causing these files to be discarded even before PHP execution? Some limit on size or upload time? I've read about a default 300-second timeout limit, but this should apply to the time the connection is idle, not the time it takes while actually transferring data, right? Needless to say, uploads under exactly identical conditions (including file format, client, and everything else) except a smaller file size work seamlessly, so the issue is clearly related to the file or request size, or to the time it takes to send it.

    Read the article

  • What is the right way to process inconsistent data files?

    - by Tahabi
    I'm working at a company that uses Excel files to store product data, specifically, test results from products before they are shipped out. There are a few thousand spreadsheets with anywhere from 50-100 relevant data points per file. Over the years, the schema for the spreadsheets has changed significantly, but not unidirectionally - in the sense that, changes often get reverted and then re-added in the space of a few dozen to few hundred files. My project is to convert about 8000 of these spreadsheets into a database that can be queried. I'm using MongoDB to deal with the inconsistency in the data, and Python. My question is, what is the "right" or canonical way to deal with the huge variance in my source files? I've written a data structure which stores the data I want for the latest template, which will be the final template used going forward, but that only helps for a few hundred files historically. Brute-forcing a solution would mean writing similar data structures for each version/template - which means potentially writing hundreds of schemas with dozens of fields each. This seems very inefficient, especially when sometimes a change in the template is as little as moving a single line of data one row down or splitting what used to be one data field into two data fields. A slightly more elegant solution I have in mind would be writing schemas for all the variants I can find for pre-defined groups in the source files, and then writing a function to match a particular series of files with a series of variants that matches that set of files. This is because, more often that not, most of the file will remain consistent over a long period, only marred by one or two errant sections, but inside the period, which section is inconsistent, is inconsistent. For example, say a file has four sections with three data fields, which is represented by four Python dictionaries with three keys each. For files 7000-7250, sections 1-3 will be consistent, but section 4 will be shifted one row down. For files 7251-7500, 1-3 are consistent, section 4 is one row down, but a section five appears. For files 7501-7635, sections 1 and 3 will be consistent, but section 2 will have five data fields instead of three, section five disappears, and section 4 is still shifted down one row. For files 7636-7800, section 1 is consistent, section 4 gets shifted back up, section 2 returns to three cells, but section 3 is removed entirely. Files 7800-8000 have everything in order. The proposed function would take the file number and match it to a dictionary representing the data mappings for different variants of each section. For example, a section_four_variants dictionary might have two members, one for the shifted-down version, and one for the normal version, a section_two_variants might have three and five field members, etc. The script would then read the matchings, load the correct mapping, extract the data, and insert it into the database. Is this an accepted/right way to go about solving this problem? Should I structure things differently? I don't know what to search Google for either to see what other solutions might be, though I believe the problem lies in the domain of ETL processing. I also have no formal CS training aside from what I've taught myself over the years. If this is not the right forum for this question, please tell me where to move it, if at all. Any help is most appreciated. Thank you.

    Read the article

  • Moving from wordpress.com to self-hosted wordpress blog

    - by Sarfraz
    Hello, I have been writing articles on a wordpress.com blog, and now I am looking to move to a self-hosted WordPress blog, but I wonder: 1) Should I move all my articles to the new blog, or just post an article on my old blog saying that more articles will be posted on my new blog? 2) If I move all articles to my new blog, I am not sure how Google will react, because there are articles with a good number of visitors. Won't this be SEO-unfriendly, because Google will re-create the page reputation stuff, or will those articles keep the same popularity even if I move them elsewhere? 3) What are the implications and side effects of moving from a wordpress.com blog to a self-hosted WordPress blog? Thanks

    Read the article

  • jstree dynamic JSON data from django

    - by danspants
    I'm trying to set up jsTree to dynamically accept JSON data from Django. This is the test data I have Django returning to jsTree: result=[{ "data" : "A node", "children" : [ { "data" : "Only child", "state" : "closed" } ], "state" : "open" },"Ajax node"] response=HttpResponse(content=result,mimetype="application/json") This is the jsTree code I'm using: jQuery("#demo1").jstree({ "json_data" : { "ajax" : { "url" : "/dirlist", "data" : function (n) { return { id : n.attr ? n.attr("id") : 0 }; }, error: function(e){alert(e);} } }, "plugins" : [ "themes","json_data"] }); All I get is the AJAX loading symbol; the AJAX error response is also triggered and it alerts "undefined". I've also tried simplejson encoding in Django but with the same result. If I change the URL so that it is receiving a JSON file with identical data, it works as expected. Any ideas on what the issue might be?

    Read the article

  • TypeError: Error #1007: Instantiation attempted on a non-constructor. on port to Flex 4

    - by Josh Handel
    I have been porting an app from Flex 3.4.x to 4.0. I have successfully ported the app and its libraries to Flex 4.0, and I've also removed ALL the references to http://www.adobe.com/2006/flex/mx from my mxml files... In short, I "think" I have moved everything over to the new mx framework (2009). But I still get the following error (which never happened in 3.4 or 3.5 with this same app) when I try to run my Flex app. TypeError: Error #1007: Instantiation attempted on a non-constructor. at mx.preloaders::Preloader/initialize()[E:\dev\4.0.0\frameworks\projects\framework\src\mx\preloaders\Preloader.as:253] at mx.managers::SystemManager/http://www.adobe.com/2006/flex/mx/internal::initialize()[E:\dev\4.0.0\frameworks\projects\framework\src\mx\managers\SystemManager.as:1925] at mx.managers::SystemManager/initHandler()[E:\dev\4.0.0\frameworks\projects\framework\src\mx\managers\SystemManager.as:2419] At this point I am completely stumped... does anyone have any ideas? Thanks

    Read the article

  • TFS 2010, can I migrate source code only ?

    - by driis
    Our TFS 2008 installation is getting old, and we would like to migrate to TFS 2010. However, the structure of our team projects is not so well thought out, and there have been various experiments with process templates and editing of work items and so on. Therefore, I would like to migrate our source code to TFS 2010, with full history, and keep all other aspects of the new TFS 2010 installation as clean as possible. How can I do that?

    Read the article

  • How to filter the jqGrid data NOT using the built in search/filter box

    - by Jimbo
    I want users to be able to filter grid data without using the intrinsic search box. I have created two input fields for date (from and to) and now need to tell the grid to adopt this as its filter and then to request new data. Forging a server request for grid data (bypassing the grid) and setting the grid's data to be the response data won't work, because as soon as the user tries to re-order the results or change the page, etc., the grid will request new data from the server using a blank filter. I can't seem to find a grid API to achieve this - does anyone have any ideas? Thanks.

    Read the article

  • java 7 upgrade and hibernate annotation processor error

    - by Bill Turner
    I am getting the following warning, which seems to be triggering a subsequent warning and an error. I have been googling like mad, though have not found anything that makes it clear what it is I should do to resolve this. This issue occurs when I execute an Ant build. I am trying to migrate our project to Java 7. I have changed all the source='1.6' and target="1.6" to 1.7. I did find this related article: Forward compatible Java 6 annotation processor and SupportedSourceVersion It seems to indicate that I should build the Hibernate annotation processor jar myself, compiling it with with 1.7. It does not seem I should be required to do so. The latest version of the class in question (in hibernate-validator-annotation-processor-5.0.1.Final.jar) has been compiled with 1.6. Since the code in said class refers to SourceVersion.latestSupported(), and the 1.6 of that returns only RELEASE_6, there does not seem to be a generally available solution. Here is the warning: [javac] warning: Supported source version 'RELEASE_6' from annotation processor 'org.hibernate.validator.ap.ConstraintValidationProcessor' less than -source '1.7' And, here are the subsequent warnings/error. [javac] warning: No processor claimed any of these annotations: javax.persistence.PersistenceContext,javax.persistence.Column,org.codehaus.jackson.annotate.JsonIgnore,javax.persistence.Id,org.springframework.context.annotation.DependsOn,com.trgr.cobalt.infrastructure.datasource.Bucketed,org.codehaus.jackson.map.annotate.JsonDeserialize,javax.persistence.DiscriminatorColumn,com.trgr.cobalt.dataroom.authorization.secure.Secured,org.hibernate.annotations.GenericGenerator,javax.annotation.Resource,com.trgr.cobalt.infrastructure.spring.domain.DomainField,org.codehaus.jackson.annotate.JsonAutoDetect,javax.persistence.DiscriminatorValue,com.trgr.cobalt.dataroom.datasource.config.core.CoreTransactionMandatory,org.springframework.stereotype.Repository,javax.persistence.GeneratedValue,com.trgr.cobalt.dataroom.datasource.config.core.CoreTransactional,org.hibernate.annotations.Cascade,javax.persistence.Table,javax.persistence.Enumerated,org.hibernate.annotations.FilterDef,javax.persistence.OneToOne,com.trgr.cobalt.dataroom.datasource.config.core.CoreEntity,org.springframework.transaction.annotation.Transactional,com.trgr.cobalt.infrastructure.util.enums.EnumConversion,org.springframework.context.annotation.Configuration,com.trgr.cobalt.infrastructure.spring.domain.UpdatedFields,com.trgr.cobalt.infrastructure.spring.documentation.SampleValue,org.springframework.context.annotation.Bean,org.codehaus.jackson.annotate.JsonProperty,javax.persistence.Basic,org.codehaus.jackson.map.annotate.JsonSerialize,com.trgr.cobalt.infrastructure.spring.validation.Required,com.trgr.cobalt.dataroom.datasource.config.core.CoreTransactionNever,org.springframework.context.annotation.Profile,com.trgr.cobalt.infrastructure.spring.stereotype.Persistor,javax.persistence.Transient,com.trgr.cobalt.infrastructure.spring.validation.NotNull,javax.validation.constraints.Size,javax.persistence.Entity,javax.persistence.PrimaryKeyJoinColumn,org.hibernate.annotations.BatchSize,org.springframework.stereotype.Service,org.springframework.beans.factory.annotation.Value,javax.persistence.Inheritance [javac] error: warnings found and -Werror specified TIA!
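
    One possible workaround, sketched below under the assumption that the build uses a standard Ant <javac> task (the target name, paths, and classpath reference are placeholders, not taken from the actual build file), is to keep -source 1.7 but disable annotation processing for this compile step with javac's -proc:none, so the RELEASE_6-only processor never runs and its warning can no longer trip -Werror; relaxing or removing -Werror for this target would be the other obvious lever:

      <!-- Hypothetical Ant target: name, paths, and classpathref are placeholders. -->
      <target name="compile">
          <javac srcdir="src" destdir="build/classes" source="1.7" target="1.7"
                 includeantruntime="false" classpathref="compile.classpath">
              <!-- Skip annotation processing so the Hibernate Validator processor
                   (which only declares RELEASE_6 support) is never invoked. -->
              <compilerarg value="-proc:none"/>
          </javac>
      </target>

    If the constraint-validation checks performed by that processor are still wanted at build time, the alternative is a processor jar built for the newer SourceVersion, along the lines discussed in the linked question.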

    Read the article

  • Machine learning challenge: diagnosing program in java/groovy (datamining, machine learning)

    - by Registered User
    Hi all! I'm planning to develop a program in Java which will provide a diagnosis. The data set is divided into two parts, one for training and the other for testing. My program should learn to classify from the training data (which, BTW, contains answers to 30 questions, each in a new column, with each record on a new line; the last column is the diagnosis, 0 or 1; in the testing part of the data the diagnosis column is empty - the data set contains about 1000 records) and then make predictions on the testing part of the data :/ I've never done anything similar, so I'll appreciate any advice or information about solutions to similar problems. I was thinking about the Java Machine Learning Library or the Java Data Mining Package, but I'm not sure if that's the right direction...? and I'm still not sure how to tackle this challenge... Please advise. All the best!
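
    Since this is a standard supervised-classification setup (train on labelled records, then predict the missing diagnosis column), here is a minimal sketch of that flow using the Weka library as one possible choice; the file names train.arff and test.arff and the use of J48 are illustrative assumptions, and the column data would first need converting to ARFF or another format Weka's loaders accept:

      import weka.classifiers.Classifier;
      import weka.classifiers.trees.J48;
      import weka.core.Instances;
      import weka.core.converters.ConverterUtils.DataSource;

      public class DiagnosisClassifier {
          public static void main(String[] args) throws Exception {
              // Load the labelled training data (30 answer columns + diagnosis column).
              Instances train = new DataSource("train.arff").getDataSet();
              train.setClassIndex(train.numAttributes() - 1); // last column = diagnosis (0 or 1)

              // Build a decision-tree classifier; any other Weka Classifier could be swapped in.
              Classifier model = new J48();
              model.buildClassifier(train);

              // Load the unlabelled testing data and predict the missing diagnosis column.
              Instances test = new DataSource("test.arff").getDataSet();
              test.setClassIndex(test.numAttributes() - 1);
              for (int i = 0; i < test.numInstances(); i++) {
                  double predicted = model.classifyInstance(test.instance(i));
                  System.out.println("Record " + i + " -> " + test.classAttribute().value((int) predicted));
              }
          }
      }

    Libraries such as Java-ML follow a broadly similar load, train, and predict shape, so much of the work is getting the data into a form the chosen library can read.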

    Read the article

  • Qt qUncompress gzip data

    - by talei
    Hello, I stumbled upon a problem and can't find a solution. What I want to do is uncompress data in Qt, using qUncompress(QByteArray), sent from the web in gzip format. I used Wireshark to determine that this is a valid gzip stream, and also tested it with zip/rar, both of which can uncompress it. The code so far is like this: static const char dat[40] = { 0x1f, 0x8b, 0x08, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x03, 0xaa, 0x2e, 0x2e, 0x49, 0x2c, 0x29, 0x2d, 0xb6, 0x4a, 0x4b, 0xcc, 0x29, 0x4e, 0xad, 0x05, 0x00, 0x00, 0x00, 0xff, 0xff, 0x03, 0x00, 0x2a, 0x63, 0x18, 0xc5, 0x0e, 0x00, 0x00, 0x00 }; //this data contains the string {status:false}, in gzip format QByteArray data; data.append( dat, sizeof(dat) ); unsigned int size = 14; //expected uncompressed size, reconstructed big-endian //prepend expected uncompressed size, last 4 bytes in dat: 0x0e = 14 QByteArray dataPlusSize; dataPlusSize.append( (unsigned int)((size >> 24) & 0xFF)); dataPlusSize.append( (unsigned int)((size >> 16) & 0xFF)); dataPlusSize.append( (unsigned int)((size >> 8) & 0xFF)); dataPlusSize.append( (unsigned int)((size >> 0) & 0xFF)); QByteArray uncomp = qUncompress( dataPlusSize ); qDebug() << uncomp; And uncompression fails with: qUncompress: Z_DATA_ERROR: Input data is corrupted. AFAIK gzip consists of a 10-byte header, the DEFLATE payload, and an 8-byte trailer (4-byte CRC32 + 4-byte ISIZE, the uncompressed data size). Stripping the header and trailer should leave me with a DEFLATE data stream, but qUncompress yields the same error. I checked with a data string compressed in PHP, like this: $stringData = gzcompress( "{status:false}", 1); and qUncompress uncompresses that data. (I didn't see a gzip header though, i.e. ID1 = 0x1f, ID2 = 0x8b.) I stepped through the above code in the debugger, and the error occurs at: if ( #endif ((BITS(8) << 8) + (hold >> 8)) % 31) { //here is error, WHY? long unsigned int hold = 35615 strm->msg = (char *)"incorrect header check"; state->mode = BAD; break; } inflate.c line 610. I know that qUncompress is simply a wrapper around zlib, so I suppose it should handle gzip without any problem. Any comments are more than welcome. Best regards

    Read the article

  • How can I manipulate my Drupal blogs?

    - by Ralph
    Hi. I'm migrating a website from FrogCMS to Drupal. My questions are: How do you limit the list of recent blog entries (title and content) on the MAIN page and limit it (to at least five) on the other page (i.e. the List of Blogs page)? How do you place a 'read more' link for each blog post, so that when the post is viewed in full the 'read more' is not displayed? Is there a way to remove the pagination without hacking node.module? I tried the Nodequeue module but I am not sure how to use it. I read the documentation and tried outputting <?php print nodequeue_node_titles($subqueue_id); ?> but had no luck at all. Check it here I need an answer ASAP. I need to finish this by tomorrow. Thanks!

    Read the article

  • How should I proceed to upgrade from TFS2008 to TFS2010

    - by Stephane
    Hi, We have a TFS2008 server with multiple team projects (about 20) What is the best way to migrate to TFS2010 without losing the history. I believe there are 2 ways, correct me if I'm wrong: installing a fresh tfs2010 and importing the DB from TFS 2008 or in-place upgrade. What is your recommendation and why? Are there any issue I should be prepared to face? Any advice is welcome.

    Read the article

  • RDLC - Adding a Data Source in VS2010

    - by Kezzer
    Greetings. I have an RDLC file and want to add a data source to it, though without any luck so far. The data source is a custom class written by myself (just to add, we do this all the time). We recently converted over to the VS2010 RDLC format, which caused some problems, but we've made some changes to our implementation that work around the more major issues. So, back to the issue at hand: when I attempt to add my data source to the DummyDataSource list in the RDLC view in VS2010, it just does nothing. It does add the data source to the list of data sources, but you can't select it from the drop-down list in the RDLC view, which means I can't add the data source at all. Has anyone come across this problem? Is there anything I need to check? I've searched with fervour and had no luck.

    Read the article

  • weblogic 10.3 custom authenticator

    - by hbagchi
    We are migrating WebLogic from 8.1 to 10.3. We had a custom authenticator provider. Is there any standard way to migrate a custom authenticator from 8.1 to 10.3? In fact, I could not find the wlManagement.jar in 10.3; I did find a wlManagement.jar file, but it only contains .java files, no .class files. Please advise.

    Read the article

  • Database design for very large amount of data

    - by Hossein
    Hi, I am working on a project involving a large amount of data from the Delicious website. The data available in the files is "Date,UserId,Url,Tags" (for each bookmark). I normalized my database to 3NF, and because of the nature of the queries we wanted to use in combination, I came down to 6 tables... The design looks fine; however, now a large amount of data is in the database, and most of the queries need to "join" at least 2 tables together to get the answer, sometimes 3 or 4. At first, we didn't have any performance issues, because for testing purposes we hadn't added much data to the database. Now that we have a lot of data, simply joining extremely large tables takes a lot of time, and for our project, which has to be real-time, that is a disaster. I was wondering how big companies solve these issues. It looks like normalizing tables just adds complexity, but how do big companies handle large amounts of data in their databases? Don't they normalize? Thanks

    Read the article

  • Moving from Silverlight 3 to Silverlight 4

    - by Manoj
    Hi, I moved my app from Silverlight 3 with VS2010 to a system with Silverlight 4 and VS2010. When I open the solution I get the following error in MainPage.xaml. What is wrong? Could not resolve mscorlib for target framework 'Silverlight,Version=v3.0'. This can happen if the target framework is not installed or if the framework moniker is incorrectly formatted.

    Read the article

  • Best Practices - Data Annotations vs OnChanging in Entity Framework 4

    - by jptacek
    I was wondering what the general recommendation is for Entity Framework in terms of data validation. I am relatively new to EF, but it appears there are two main approaches to data validation. The first is to create a partial class for the model, and then perform data validations and update a rule violation collection of some sort. This is outlined at http://msdn.microsoft.com/en-us/library/cc716747.aspx The other is to use data annotations and then have the annotations perform data validation. Scott Guthrie explains this on his blog at http://weblogs.asp.net/scottgu/archive/2010/01/15/asp-net-mvc-2-model-validation.aspx. I was wondering what the benefits are of one over the other. It seems the data annotations would be the preferred mechanism, especially as you move to RIA Services, but I want to ensure I am not missing something. Of course, nothing precludes using both of them together. Thanks John

    Read the article
