Search Results

Search found 67192 results on 2688 pages for 'excel external data'.

  • HTML + Button with text and image on it.

    - by lucky
    Hello, I have a problem creating a button with both text and an image on it. <td> <button type="submit" name="report" value="Report" <?php if($tab == 'Excel') echo "id=\"tab_inactive\""; else echo "id=\"tab_active\"" ; ?>> <img src="images/report.gif" alt="Report"/>Report </button> <button type="submit" name="excel" value="Excel" <?php if($tab == 'Excel' ) echo "id=\"tab_active\""; else echo "id=\"tab_inactive\"" ; ?>> <img src="images/Excel.gif" alt="Excel" width="16" height="16" /> Excel </button> </td> Here $tab is $tab = strlen(trim($_POST[excel]))>0 ? $_POST[excel] : $_POST[report]; I tried this way, but it is behaving strangely. On clicking a button, the submit works properly in Firefox, but not in IE. Instead of submitting the value (in this example the values are 'Report' and 'Excel'), IE submits the label of the button. If I check the contents of the array with print_r($_POST), the value is Array ( [report] => (Icon that I used) Report [excel] => (Icon that I used) Excel [frm_analysis] => [to_analysis] => ). Since I have more than one button, all the labels are submitted even though only one of them was pressed, so I don't know how to capture which button was pressed. I even tried changing the button to type=button with onclick="document.formname.submit()", but the result is the same. Can you please help me solve this?

    Read the article

  • SQL query to select a range

    - by hansika attanayake
    I need to write an SQL query (in C#) to select Excel sheet data in column "C" starting from C19. But I can't specify the ending cell number because more data keeps getting added to the column, so I need to know how to specify the end of the column. Please help. I have included the query that I'm using; what should be entered in place of "C73"? OleDbCommand ccmd = new OleDbCommand(@"Select * From [SPAT$C19:C73]", conn);

    Read the article

  • A failed disk (Pay for professional service or SpinRite?) (new edit)

    - by huggie
    EDIT: After much negotiating and begging and seeing through the promotion smoke screen, thanks to the nice representative who took my case, I now know that the engineer has already fixed my NTFS partition (I guess it might have been a bad block in the partition table?). She told me that the problem was considered minor, and I should be able to boot normally and just copy stuff out. Whew... I'm glad I didn't agree to the NTD $16,000 deal. New question (should this be in a new thread?): is it safer to use the Linux "dd" command, or is it better to boot normally into Windows XP and just copy stuff out? EDIT2: Thanks for all the help. I gave the best answer to Console as it's most directly related to my question, but many suggestions were helpful and informative. ---- ORIGINAL POST BELOW --- Hi, in my previous post (you don't need to read it, but it's at http://superuser.com/questions/48838/windows-xp-a-disk-read-error-occurred), I said that my hard disk was not booting and was showing "a disk read error occurred". I took it to a recovery professional. A representative responded today and told me that the NTFS partitions have an "NTFS partition system crash". I have no idea what that means. The engineer handling my drive will not be available for contact till tomorrow. The company charges me NTD (New Taiwan Dollar) $16,000 to recover the lost data, which is kind of a lot considering that my graduate student monthly stipend is currently NTD $32,000 (the max allowed by regulation; it may be lower and may change depending on funding). Now I'm weighing the options. Option A: let the professionals recover it for half of my monthly stipend. If the files/directories I designated are not recovered, I don't pay a penny (other than the initial examination fee of NTD $1,000 which I've already paid). Option B: let me try SpinRite; if that fails, back to Option A. I spoke to the representative at the company; they recommended that I not handle it on my own (yeah, of course that's what they all say, right?) and said that at this price tag the disk error is probably relatively minor and the data recoverable. But the representative really did not have detailed information about the disk failure, so I didn't take her recommendation readily. One thing I did heed was that she said they would duplicate the disk before attempting recovery, so there would be no data loss (Is this true? Can't duplicating cause further data loss?). That sounds very good to me. Or maybe a third option. Option C: negotiate to pay them just to duplicate the disk, hopefully for a much smaller price tag, then let me try SpinRite; if that fails, back to Option A. This is a difficult decision. Ultimately I want my data back, but if a cheaper way is available to achieve the same thing... Can operating with SpinRite also corrupt data in some way? I've no idea what happened to my drive. I'll attempt to contact the engineer, hope to get it clarified, and make an edit here.

    Read the article

  • Implementing an async "read all currently available data from stream" operation

    - by Jon
    I recently provided an answer to this question: C# - Realtime console output redirection. As often happens, explaining stuff (here "stuff" was how I tackled a similar problem) leads you to greater understanding and/or, as is the case here, "oops" moments. I realized that my solution, as implemented, has a bug. The bug has little practical importance, but it has an extremely large importance to me as a developer: I can't rest easy knowing that my code has the potential to blow up. Squashing the bug is the purpose of this question. I apologize for the long intro, so let's get dirty. I wanted to build a class that allows me to receive input from a console's standard output Stream. Console output streams are of type FileStream; the implementation can cast to that, if needed. There is also an associated StreamReader already present to leverage. There is only one thing I need to implement in this class to achieve my desired functionality: an async "read all the data available this moment" operation. Reading to the end of the stream is not viable because the stream will not end unless the process closes the console output handle, and it will not do that because it is interactive and expecting input before continuing. I will be using that hypothetical async operation to implement event-based notification, which will be more convenient for my callers. The public interface of the class is this: public class ConsoleAutomator { public event EventHandler<ConsoleOutputReadEventArgs> StandardOutputRead; public void StartSendingEvents(); public void StopSendingEvents(); } StartSendingEvents and StopSendingEvents do what they advertise; for the purposes of this discussion, we can assume that events are always being sent without loss of generality. The class uses these two fields internally: protected readonly StringBuilder inputAccumulator = new StringBuilder(); protected readonly byte[] buffer = new byte[256]; The functionality of the class is implemented in the methods below. To get the ball rolling: public void StartSendingEvents(); { this.stopAutomation = false; this.BeginReadAsync(); } To read data out of the Stream without blocking, and also without requiring a carriage return char, BeginRead is called: protected void BeginReadAsync() { if (!this.stopAutomation) { this.StandardOutput.BaseStream.BeginRead( this.buffer, 0, this.buffer.Length, this.ReadHappened, null); } } The challenging part: BeginRead requires using a buffer. This means that when reading from the stream, it is possible that the bytes available to read ("incoming chunk") are larger than the buffer. Remember that the goal here is to read all of the chunk and call event subscribers exactly once for each chunk. To this end, if the buffer is full after EndRead, we don't send its contents to subscribers immediately but instead append them to a StringBuilder. The contents of the StringBuilder are only sent back whenever there is no more to read from the stream. 
private void ReadHappened(IAsyncResult asyncResult) { var bytesRead = this.StandardOutput.BaseStream.EndRead(asyncResult); if (bytesRead == 0) { this.OnAutomationStopped(); return; } var input = this.StandardOutput.CurrentEncoding.GetString( this.buffer, 0, bytesRead); this.inputAccumulator.Append(input); if (bytesRead < this.buffer.Length) { this.OnInputRead(); // only send back if we 're sure we got it all } this.BeginReadAsync(); // continue "looping" with BeginRead } After any read which is not enough to fill the buffer (in which case we know that there was no more data to be read during the last read operation), all accumulated data is sent to the subscribers: private void OnInputRead() { var handler = this.StandardOutputRead; if (handler == null) { return; } handler(this, new ConsoleOutputReadEventArgs(this.inputAccumulator.ToString())); this.inputAccumulator.Clear(); } (I know that as long as there are no subscribers the data gets accumulated forever. This is a deliberate decision). The good This scheme works almost perfectly: Async functionality without spawning any threads Very convenient to the calling code (just subscribe to an event) Never more than one event for each time data is available to be read Is almost agnostic to the buffer size The bad That last almost is a very big one. Consider what happens when there is an incoming chunk with length exactly equal to the size of the buffer. The chunk will be read and buffered, but the event will not be triggered. This will be followed up by a BeginRead that expects to find more data belonging to the current chunk in order to send it back all in one piece, but... there will be no more data in the stream. In fact, as long as data is put into the stream in chunks with length exactly equal to the buffer size, the data will be buffered and the event will never be triggered. This scenario may be highly unlikely to occur in practice, especially since we can pick any number for the buffer size, but the problem is there. Solution? Unfortunately, after checking the available methods on FileStream and StreamReader, I can't find anything which lets me peek into the stream while also allowing async methods to be used on it. One "solution" would be to have a thread wait on a ManualResetEvent after the "buffer filled" condition is detected. If the event is not signaled (by the async callback) in a small amount of time, then more data from the stream will not be forthcoming and the data accumulated so far should be sent to subscribers. However, this introduces the need for another thread, requires thread synchronization, and is plain inelegant. Specifying a timeout for BeginRead would also suffice (call back into my code every now and then so I can check if there's data to be sent back; most of the time there will not be anything to do, so I expect the performance hit to be negligible). But it looks like timeouts are not supported in FileStream. Since I imagine that async calls with timeouts are an option in bare Win32, another approach might be to PInvoke the hell out of the problem. But this is also undesirable as it will introduce complexity and simply be a pain to code. Is there an elegant way to get around the problem? Thanks for being patient enough to read all of this. Update: I definitely did not communicate the scenario well in my initial writeup. I have since revised the writeup quite a bit, but to be extra sure: The question is about how to implement an async "read all the data available this moment" operation. 
My apologies to the people who took the time to read and answer without me making my intent clear enough.

    Read the article

  • Sensible Way to Pass Web Data in XML to a SQL Server Database

    - by Emtucifor
    After exploring several different ways to pass web data to a database for update purposes, I'm wondering if XML might be a good strategy. The database is currently SQL 2000. In a few months it will move to SQL 2005 and I will be able to change things if needed, but I need a SQL 2000 solution now. First of all, the database in question uses the EAV model. I know that this kind of database is generally highly frowned on, so for the purposes of this question, please just accept that this is not going to change. The current update method has the web server inserting values (that have all been converted first to their correct underlying types, then to sql_variant) to a temp table. A stored procedure is then run which expects the temp table to exist and it takes care of updating, inserting, or deleting things as needed. So far, only a single element has needed to be updated at a time. But now, there is a requirement to be able to edit multiple elements at once, and also to support hierarchical elements, each of which can have its own list of attributes. Here's some example XML I hand-typed to demonstrate what I'm thinking of. Note that in this database the Entity is Element and an ID of 0 signifies "create" aka an insert of a new item. <Elements> <Element ID="1234"> <Attr ID="221">Value</Attr> <Attr ID="225">287</Attr> <Attr ID="234"> <Element ID="99825"> <Attr ID="7">Value1</Attr> <Attr ID="8">Value2</Attr> <Attr ID="9" Action="delete" /> </Element> <Element ID="99826" Action="delete" /> <Element ID="0" Type="24"> <Attr ID="7">Value4</Attr> <Attr ID="8">Value5</Attr> <Attr ID="9">Value6</Attr> </Element> <Element ID="0" Type="24"> <Attr ID="7">Value7</Attr> <Attr ID="8">Value8</Attr> <Attr ID="9">Value9</Attr> </Element> </Attr> <Rel ID="3827" Action="delete" /> <Rel ID="2284" Role="parent"> <Element ID="3827" /> <Element ID="3829" /> <Attr ID="665">1</Attr> </Rel> <Rel ID="0" Type="23" Role="child"> <Element ID="3830" /> <Attr ID="67" </Rel> </Element> <Element ID="0" Type="87"> <Attr ID="221">Value</Attr> <Attr ID="225">569</Attr> <Attr ID="234"> <Element ID="0" Type="24"> <Attr ID="7">Value10</Attr> <Attr ID="8">Value11</Attr> <Attr ID="9">Value12</Attr> </Element> </Attr> </Element> <Element ID="1235" Action="delete" /> </Elements> Some Attributes are straight value types, such as AttrID 221. But AttrID 234 is a special "multi-value" type that can have a list of elements underneath it, and each one can have one or more values. Types only need to be presented when a new item is created, since the ElementID fully implies the type if it already exists. I'll probably support only passing in changed items (as detected by javascript). And there may be an Action="Delete" on Attr elements as well, since NULLs are treated as "unselected"--sometimes it's very important to know if a Yes/No question has intentionally been answered No or if no one's bothered to say Yes yet. There is also a different kind of data, a Relationship. At this time, those are updated through individual AJAX calls as things are edited in the UI, but I'd like to include those so that changes to relationships can be canceled (right now, once you change it, it's done). So those are really elements, too, but they are called Rel instead of Element. Relationships are implemented as ElementID1 and ElementID2, so the RelID 2284 in the XML above is in the database as: ElementID 2284 ElementID1 1234 ElementID2 3827 Having multiple children in one relationship isn't currently supported, but it would be nice later. 
Does this strategy and the example XML make sense? Is there a more sensible way? I'm just looking for some broad critique to help save me from going down a bad path. Any aspect that you'd like to comment on would be helpful. The web language happens to be Classic ASP, but that could change to ASP.Net at some point. A persistence engine like Linq or nHibernate is probably not acceptable right now--I just want to get this already working application enhanced without a huge amount of development time. I'll choose the answer that shows experience and has a balance of good warnings about what not to do, confirmations of what I'm planning to do, and recommendations about something else to do. I'll make it as objective as possible. P.S. I'd like to handle unicode characters as well as very long strings (10k +). UPDATE I have had this working for some time and I used the ADO Recordset Save-To-Stream trick to make creating the XML really easy. The result seems to be fairly fast, though if speed ever becomes a problem I may revisit this. In the meantime, my code works to handle any number of elements and attributes on the page at once, including updating, deleting, and creating new items all in one go. I settled on a scheme like so for all my elements: Existing data elements Example: input name e12345_a678 (element 12345, attribute 678), the input value is the value of the attribute. New elements Javascript copies a hidden template of the set of HTML elements needed for the type into the correct location on the page, increments a counter to get a new ID for this item, and prepends the number to the names of the form items. var newid = 0; function metadataAdd(reference, nameid, value) { var t = document.createElement('input'); t.setAttribute('name', nameid); t.setAttribute('id', nameid); t.setAttribute('type', 'hidden'); t.setAttribute('value', value); reference.appendChild(t); } function multiAdd(target, parentelementid, attrid, elementtypeid) { var proto = document.getElementById('a' + attrid + '_proto'); var instance = document.createElement('p'); target.parentNode.parentNode.insertBefore(instance, target.parentNode); var thisid = ++newid; instance.innerHTML = proto.innerHTML.replace(/{prefix}/g, 'n' + thisid + '_'); instance.id = 'n' + thisid; instance.className += ' new'; metadataAdd(instance, 'n' + thisid + '_p', parentelementid); metadataAdd(instance, 'n' + thisid + '_c', attrid); metadataAdd(instance, 'n' + thisid + '_t', elementtypeid); return false; } Example: Template input name _a678 becomes n1_a678 (a new element, the first one on the page, attribute 678). all attributes of this new element are tagged with the same prefix of n1. The next new item will be n2, and so on. Some hidden form inputs are created: n1_t, value is the elementtype of the element to be created n1_p, value is the parent id of the element (if it is a relationship) n1_c, value is the child id of the element (if it is a relationship) Deleting elements A hidden input is created in the form e12345_t with value set to 0. The existing controls displaying that attribute's values are disabled so they are not included in the form post. So "set type to 0" is treated as delete. With this scheme, every item on the page has a unique name and can be distinguished properly, and every action can be represented properly. 
When the form is posted, here's a sample of building one of the two recordsets used (classic ASP code): Set Data = Server.CreateObject("ADODB.Recordset") Data.Fields.Append "ElementID", adInteger, 4, adFldKeyColumn Data.Fields.Append "AttrID", adInteger, 4, adFldKeyColumn Data.Fields.Append "Value", adLongVarWChar, 2147483647, adFldIsNullable Or adFldMayBeNull Data.CursorLocation = adUseClient Data.CursorType = adOpenDynamic Data.Open This is the recordset for values, the other is for the elements themselves. I step through the posted form and for the element recordset use a Scripting.Dictionary populated with instances of a custom Class that has the properties I need, so that I can add the values piecemeal, since they don't always come in order. New elements are added as negative to distinguish them from regular elements (rather than requiring a separate column to indicate if it is new or addresses an existing element). I use regular expression to tear apart the form keys: "^(e|n)([0-9]{1,10})_(a|p|t|c)([0-9]{0,10})$" Then, adding an attribute looks like this. Data.AddNew ElementID.Value = DataID AttrID.Value = Integerize(Matches(0).SubMatches(3)) AttrValue.Value = Request.Form(Key) Data.Update ElementID, AttrID, and AttrValue are references to the fields of the recordset. This method is hugely faster than using Data.Fields("ElementID").Value each time. I loop through the Dictionary of element updates and ignore any that don't have all the proper information, adding the good ones to the recordset. Then I call my data-updating stored procedure like so: Set Cmd = Server.CreateObject("ADODB.Command") With Cmd Set .ActiveConnection = MyDBConn .CommandType = adCmdStoredProc .CommandText = "DataPost" .Prepared = False .Parameters.Append .CreateParameter("@ElementMetadata", adLongVarWChar, adParamInput, 2147483647, XMLFromRecordset(Element)) .Parameters.Append .CreateParameter("@ElementData", adLongVarWChar, adParamInput, 2147483647, XMLFromRecordset(Data)) End With Result.Open Cmd ' previously created recordset object with options set Here's the function that does the xml conversion: Private Function XMLFromRecordset(Recordset) Dim Stream Set Stream = Server.CreateObject("ADODB.Stream") Stream.Open Recordset.Save Stream, adPersistXML Stream.Position = 0 XMLFromRecordset = Stream.ReadText End Function Just in case the web page needs to know, the SP returns a recordset of any new elements, showing their page value and their created value (so I can see that n1 is now e12346 for example). Here are some key snippets from the stored procedure. 
Note this is SQL 2000 for now, though I'll be able to switch to 2005 soon: CREATE PROCEDURE [dbo].[DataPost] @ElementMetaData ntext, @ElementData ntext AS DECLARE @hdoc int --- snip --- EXEC sp_xml_preparedocument @hdoc OUTPUT, @ElementMetaData, '<xml xmlns:s="uuid:BDC6E3F0-6DA3-11d1-A2A3-00AA00C14882" xmlns:dt="uuid:C2F41010-65B3-11d1-A29F-00AA00C14882" xmlns:rs="urn:schemas-microsoft-com:rowset" xmlns:z="#RowsetSchema" />' INSERT #ElementMetadata (ElementID, ElementTypeID, ElementID1, ElementID2) SELECT * FROM OPENXML(@hdoc, '/xml/rs:data/rs:insert/z:row', 0) WITH ( ElementID int, ElementTypeID int, ElementID1 int, ElementID2 int ) ORDER BY ElementID -- orders negative items (new elements) first so they begin counting at 1 for later ID calculation EXEC sp_xml_removedocument @hdoc --- snip --- UPDATE E SET E.ElementTypeID = M.ElementTypeID FROM Element E INNER JOIN #ElementMetadata M ON E.ElementID = M.ElementID WHERE E.ElementID >= 1 AND M.ElementTypeID >= 1 The following query does the correlation of the negative new element ids to the newly inserted ones: UPDATE #ElementMetadata -- Correlate the new ElementIDs with the input rows SET NewElementID = Scope_Identity() - @@RowCount + DataID WHERE ElementID < 0 Other set-based queries do all the other work of validating that the attributes are allowed, are the correct data type, and inserting, updating, and deleting elements and attributes. I hope this brief run-down is useful to others some day! Converting ADO Recordsets to an XML stream was a huge winner for me as it saved all sorts of time and had a namespace and schema already defined that made the results come out correctly. Using a flatter XML format with 2 inputs was also much easier than sticking to some ideal about having everything in a single XML stream.

    Read the article

  • xlwt data garbled

    - by zhangzhong
    I retrieve Chinese-character data from the DB and write it into Excel with xlwt, with code like this: ws0.write(unicode(cell, 'big5')) It works under Windows, but when I deployed it under Linux the data in the Excel file is garbled. Could you help me with it?
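
    One plausible fix, sketched below in Python, is to stop relying on the OS default encoding and hand xlwt explicit unicode values; the workbook encoding, sheet name, file name, and the assumption that the DB bytes really are Big5 are illustrative guesses, not details from the original post.

      # Hedged sketch: assumes the DB returns Big5-encoded byte strings.
      import xlwt

      def write_rows(rows, path="report.xls"):
          # An explicit workbook encoding avoids depending on the locale,
          # which is what differs between the Windows and Linux hosts.
          book = xlwt.Workbook(encoding="utf-8")
          sheet = book.add_sheet("data")
          for r, row in enumerate(rows):
              for c, cell in enumerate(row):
                  # Decode the raw bytes to text before writing the cell.
                  value = cell.decode("big5") if isinstance(cell, bytes) else cell
                  sheet.write(r, c, value)
          book.save(path)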

    Read the article

  • Cloud computing?

    - by Shawn H
    I'm an analyst and intermediate programmer working for a consulting company. Sometimes we do intensive computing in Excel, which can be frustrating because we have slow computers. My company does not have enough money to buy everyone new computers right now. Is there a cloud computing service that allows me to log in to a high-performance virtual computer over remote desktop? We are not that technical, so preferably the computer would run Windows and I could run Excel and other applications on it. Thanks

    Read the article

  • loading data into rails application

    - by ash34
    Hi, I have a list of rows in an Excel sheet which I need to load as historical transactions into a table in my Rails application. I saved the Excel file as a CSV file. I tried using CSV from the standard library but keep getting the exception CSV::IllegalFormatError, and I'm not sure how to even figure out where the problem lies. Any suggestions on how to do this? thanks, ash

    Read the article

  • Export unicode data from mysql

    - by Nayn
    Hi folks, I have some data in one of my MySQL tables stored as utf8. The data is some Japanese text. I need to export it to Excel. Could you tell me how to do it? Exporting with SELECT ... INTO OUTFILE produces a plain text file, and I'm not sure how to read it back into Excel so that the Japanese characters show properly. Thanks Nayn
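
    One common workaround, sketched in Python below, is to rewrite the dumped file as a CSV that starts with a UTF-8 byte-order mark, which is the cue Excel uses to detect UTF-8 text; the file names and the tab delimiter (the INTO OUTFILE default) are assumptions for illustration.

      # Hedged sketch: re-save the SELECT ... INTO OUTFILE dump as a BOM-prefixed CSV.
      import csv

      def outfile_to_excel_csv(src="dump.txt", dst="dump.csv"):
          # "utf-8-sig" writes the BOM that lets Excel recognise UTF-8.
          with open(src, encoding="utf-8") as fin, \
               open(dst, "w", encoding="utf-8-sig", newline="") as fout:
              writer = csv.writer(fout)
              for line in fin:
                  writer.writerow(line.rstrip("\n").split("\t"))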

    Read the article

  • Trouble calling a method from an external class

    - by Bradley Hobbs
    Here is my employee database program: import java.util.*; import java.io.*; import java.io.File; import java.io.FileReader; import java.util.ArrayList; public class P { //Instance Variables private static String empName; private static String wage; private static double wages; private static double salary; private static double numHours; private static double increase; // static ArrayList<String> ARempName = new ArrayList<String>(); // static ArrayList<Double> ARwages = new ArrayList<Double>(); // static ArrayList<Double> ARsalary = new ArrayList<Double>(); static ArrayList<Employee> emp = new ArrayList<Employee>(); public static void main(String[] args) throws Exception { clearScreen(); printMenu(); question(); exit(); } public static void printArrayList(ArrayList<Employee> emp) { for (int i = 0; i < emp.size(); i++){ System.out.println(emp.get(i)); } } public static void clearScreen() { System.out.println("\u001b[H\u001b[2J"); } private static void exit() { System.exit(0); } private static void printMenu() { System.out.println("\t------------------------------------"); System.out.println("\t|Commands: n - New employee |"); System.out.println("\t| c - Compute paychecks |"); System.out.println("\t| r - Raise wages |"); System.out.println("\t| p - Print records |"); System.out.println("\t| d - Download data |"); System.out.println("\t| u - Upload data |"); System.out.println("\t| q - Quit |"); System.out.println("\t------------------------------------"); System.out.println(""); } public static void question() { System.out.print("Enter command: "); Scanner q = new Scanner(System.in); String input = q.nextLine(); input.replaceAll("\\s","").toLowerCase(); boolean valid = (input.equals("n") || input.equals("c") || input.equals("r") || input.equals("p") || input.equals("d") || input.equals("u") || input.equals("q")); if (!valid){ System.out.println("Command was not recognized; please try again."); printMenu(); question(); } else if (input.equals("n")){ System.out.print("Enter the name of new employee: "); Scanner stdin = new Scanner(System.in); empName = stdin.nextLine(); System.out.print("Hourly (h) or salaried (s): "); Scanner stdin2 = new Scanner(System.in); wage = stdin2.nextLine(); wage.replaceAll("\\s","").toLowerCase(); if (!(wage.equals("h") || wage.equals("s"))){ System.out.println("Input was not h or s; please try again"); } else if (wage.equals("h")){ System.out.print("Enter hourly wage: "); Scanner stdin4 = new Scanner(System.in); wages = stdin4.nextDouble(); Employee emp1 = new HourlyEmployee(empName, wages); emp.add(emp1); printMenu(); question();} else if (wage.equals("s")){ System.out.print("Enter annual salary: "); Scanner stdin5 = new Scanner(System.in); salary = stdin5.nextDouble(); Employee emp1 = new SalariedEmployee(empName, salary); printMenu(); question();}} else if (input.equals("c")){ for (int i = 0; i < emp.size(); i++){ System.out.println("Enter number of hours worked by " + emp.get(i) + ":"); } Scanner stdin = new Scanner(System.in); numHours = stdin.nextInt(); System.out.println("Pay: " + emp1.computePay(numHours)); System.out.print("Enter number of hours worked by " + empName); Scanner stdin2 = new Scanner(System.in); numHours = stdin2.nextInt(); System.out.println("Pay: " + emp1.computePay(numHours)); printMenu(); question();} else if (input.equals("r")){ System.out.print("Enter percentage increase: "); Scanner stdin = new Scanner(System.in); increase = stdin.nextDouble(); System.out.println("\nNew Wages"); System.out.println("---------"); // 
    System.out.println(Employee.toString()); printMenu(); question(); } else if (input.equals("p")){ printArrayList(emp); printMenu(); question(); } else if (input.equals("q")){ exit(); } } } Here is one of the class files: public abstract class Employee { private String name; private double wage; protected Employee(String name, double wage){ this.name = name; this.wage = wage; } public String getName() { return name; } public double getWage() { return wage; } public void setName(String name) { this.name = name; } public void setWage(double wage) { this.wage = wage; } public void percent(double wage, double percent) { wage *= percent; } } And here are the errors: P.java:108: cannot find symbol symbol : variable emp1 location: class P System.out.println("Pay: " + emp1.computePay(numHours)); ^ P.java:112: cannot find symbol symbol : variable emp1 location: class P System.out.println("Pay: " + emp1.computePay(numHours)); ^ 2 errors I'm trying to get the paycheck to print out, but I'm having trouble with how to call the method. It should take the user-entered numHours, calculate the pay, and then print it on the paycheck for each employee. Thanks!

    Read the article

  • uploading via http post (multipart/form-data) silently fails with big files

    - by matteo
    When uploading multipart/form-data forms via a http post request to my apache web server, very big files (i.e. 30MB) are silently discarded. On the server side all looks as if the attached file was received with 0 bytes size. On the client side all looks like it had been uploaded succesfully (it takes the expected long time to upload and the browser gives no error message). On the server, nothing is logged into the error log. An entry is logged into the access log as if everything was ok (a post request and a 200 ok response). These uploads are being posted to a php script. In the php script, If I print_r $_FILES, I see the following information for the relevant file: [file5] => Array ( [name] => MOV023.3gp [type] => video/3gpp [tmp_name] => /tmp/phpgOdvYQ [error] => 0 [size] => 0 ) Note both [error] = 0 (which should mean no error) and [size] = 0 (as if the file was empty). My php script runs fine and receives all the rest of the data except these files. move_uploaded_file succeeds on these files and actually copies them as 0byte files. I've already changed the php directives max_upload_size to 50M and post_max_size to 200M, so neither the single file nor the request exceed any size limit. max_execution_time is not relevant, because the time to transfer the data does not count; and I've increased max_input_time to 1000 seconds, though this shouldn't be necessary since this is the time taken to parse the input data, not the time taken to upload it. Is there any apache configuration, prior to php, that could be causing these files to be discarded even prior to php execution? Some limit in size or in upload time? I've read about a default 300 seconds timeout limit, but this should apply to the time the connection is idle, not the time it takes while actually transferring data, right? Needless to say, uploads with all exactly identical conditions (including file format, client and everything) except smaller file size, work seamlessly, so the issue is clearly related to the file or request size, or to the time it takes to send it.

    Read the article

  • External data in TileList Flex

    - by Adam
    I'm working for the first time with a TileList and an itemRenderer, and I'm having a bit of trouble getting the information from my array collection to display, so I'm looking for some advice. Here's what I've got private function loadData():void{ var stmt:SQLStatement = new SQLStatement(); stmt.sqlConnection = sqlConn; stmt.text = "SELECT * FROM tbl_user ORDER BY user_id ASC"; stmt.execute(); var result:SQLResult = stmt.getResult(); acData = new ArrayCollection(result.data); } <mx:TileList id="userlist" labelField="user_id" dataProvider="{acData}" width="600" height="200" paddingTop="25" left="117.15" y="327.25"> <!--itemClick="showMessage(event)"--> <mx:itemRenderer> <mx:Component> <mx:VBox width="125" height="125" paddingRight="5" paddingLeft="5" horizontalAlign="center"> <mx:Label id="username" text="{}"/> <mx:Label id="userjob" text="{}"/> </mx:VBox> </mx:Component> </mx:itemRenderer> </mx:TileList> So I'm just not sure how to go about pulling the information from the array and putting it into labels like username, userjob, userbio, etc., inside the TileList and itemRenderer.

    Read the article

  • Loop a formula in excel VBA

    - by CEMG
    I am trying to loop a formula in Column "D" until Column "B" doesn't have any more data. The formula I am adding to Column "D" is : IF(ISNUMBER(C5),C5,IF(C5A5/3+OFFSET(C5,-1,0)) ,IF(C5<C6,((OFFSET(C5,1,0)-OFFSET(C5,-2,0))(A5/3)+OFFSET(C5,-2,0)),""))) So the result I want in Column "D" once the macro is run is this: A B C D 3 May-10 78.0000 78.00000 1 Jun-10 52.06667 2 Jul-10 26.13333 3 Aug-10 0.2000 0.20000 1 Sep-10 0.21393 2 Oct-10 0.22786 3 Nov-10 0.2418 0.24179 1 Dec-10 0.26640 2 Jan-11 0.29102 3 Feb-11 0.3156 0.31563 1 Mar-11 0.34821 2 Apr-11 0.38080 3 May-11 0.4134 0.41338 1 Jun-11 0.44992 2 Jul-11 0.48646 3 Aug-11 0.5230 0.52300 1 Sep-11 0.56440 2 Oct-11 0.60580 3 Nov-11 0.6472 0.64720 1 Dec-11 0.43147 If someone can help me at what I am doing wrong with the VBA codes I would greatly appreciated. My CODES are the following: Sub IsNumeric() // first logic: IF(ISNUMBER(C6),C6 // If Application.IsNumber(Range("c5").Value) Then Range("d5").Value = Range("C5").Value // second logic: IF(C6 ElseIf Range("c6").Value < Range("c5").Value Then Range("d6").Value = Range("c6").Offset(2, 0).Value - Range("c6").Offset(-1, 0).Value * (Range("a6").Value / 3) + Range("c6").Offset(-1, 0).Value // third logic: IF(C6<C7,((OFFSET(C6,1,0)-OFFSET(C6,-2,0))*(A6/3)+OFFSET(C6,-2,0)),""))) // ElseIf Range("c6").Value < Range("c7").Value Then Range("d6").Value = (Range("c6").Offset(1, 0).Select) - Range("c6").Offset(-2, 0).Select * (Range("a6").Select / 3) + Range("c6").Offset(-2, 0).Select Else Range("d6").Value = "" End If End Sub

    Read the article

  • Creating one row of information in excel using a unique value

    - by user1426513
    This is my first post. I am currently working on a project at work which requires that I work with several different worksheets in order to create one master mail worksheet, as it were, in order to do a mail merge. The worksheets contain information regarding different purchases, and each purchaser is identified by their own ID number. Below is an example of what my spreadsheet looks like now (however I do have more columns):
    ID Salutation Address | ID Name Donation | ID Name Tickets
    9 Mr. John Doe 123 | 12 Ms. Jane Smith 100.00 | 12 Ms. Jane Smith 300.00
    12 Ms. Jane Smith 456 | 22 Mr. Mike Man 500.00 | 84 Ms. Jo Smith 300.00
    What I would like to do is somehow sort my data so that everything with the same unique identifier (ID) lines up on the same row. For example, for ID 12 Jane Smith, all the information for her would show up in one row matched by her ID number, ID 22 would match up with 22, and so on. When I merged all of my spreadsheets together I sorted them all by ID number, but my problem is that not everyone who made a donation bought a ticket, and some people just bought tickets and nothing else, so sorting doesn't work. Hopefully this makes sense. Thanks in advance.
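
    If the individual worksheets can be exported to CSV, the matching step is straightforward to script outside Excel; the sketch below (the file names and the "ID" column header are assumptions) keys every row on the purchaser ID so that later files simply add their columns to the same person's row. The same effect can also be had inside Excel with lookups keyed on the ID.

      # Hedged sketch: consolidate one row per purchaser ID across several CSV exports.
      import csv
      from collections import defaultdict

      def merge_by_id(paths):
          merged = defaultdict(dict)                 # ID -> combined row
          for path in paths:
              with open(path, newline="") as f:
                  for row in csv.DictReader(f):
                      merged[row["ID"]].update(row)  # later files add their columns
          return merged

      rows = merge_by_id(["addresses.csv", "donations.csv", "tickets.csv"])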

    Read the article

  • post values to external page explicitly PHP

    - by JPro
    Hi, I want to post values to a page through a hyperlink on another page. Is it possible to do that? Let's say I have a page results.php which has the starting line, <?php if(isset($_POST['posted_value'])) { echo "expected value"; // do something with the data } else { echo "no the value expected"; } If from another page, say link.php, I place a hyperlink like this: <a href="results.php?posted_value=1"> , will this be accepted by the results page? If instead I replace the above starting line with if(isset($_REQUEST['posted_value'])), will this work? I believe the above hyperlink evaluates to a GET, and the only visible difference between GET and POST is that you can see the parameters in the address bar with GET. But is there any other way to place a hyperlink which can POST values to a page? Or can we use jQuery in place of the hyperlink to POST the values? Can anyone please suggest something on this? Thanks.

    Read the article

  • What is the right way to process inconsistent data files?

    - by Tahabi
    I'm working at a company that uses Excel files to store product data, specifically, test results from products before they are shipped out. There are a few thousand spreadsheets with anywhere from 50-100 relevant data points per file. Over the years, the schema for the spreadsheets has changed significantly, but not unidirectionally - in the sense that, changes often get reverted and then re-added in the space of a few dozen to few hundred files. My project is to convert about 8000 of these spreadsheets into a database that can be queried. I'm using MongoDB to deal with the inconsistency in the data, and Python. My question is, what is the "right" or canonical way to deal with the huge variance in my source files? I've written a data structure which stores the data I want for the latest template, which will be the final template used going forward, but that only helps for a few hundred files historically. Brute-forcing a solution would mean writing similar data structures for each version/template - which means potentially writing hundreds of schemas with dozens of fields each. This seems very inefficient, especially when sometimes a change in the template is as little as moving a single line of data one row down or splitting what used to be one data field into two data fields. A slightly more elegant solution I have in mind would be writing schemas for all the variants I can find for pre-defined groups in the source files, and then writing a function to match a particular series of files with a series of variants that matches that set of files. This is because, more often that not, most of the file will remain consistent over a long period, only marred by one or two errant sections, but inside the period, which section is inconsistent, is inconsistent. For example, say a file has four sections with three data fields, which is represented by four Python dictionaries with three keys each. For files 7000-7250, sections 1-3 will be consistent, but section 4 will be shifted one row down. For files 7251-7500, 1-3 are consistent, section 4 is one row down, but a section five appears. For files 7501-7635, sections 1 and 3 will be consistent, but section 2 will have five data fields instead of three, section five disappears, and section 4 is still shifted down one row. For files 7636-7800, section 1 is consistent, section 4 gets shifted back up, section 2 returns to three cells, but section 3 is removed entirely. Files 7800-8000 have everything in order. The proposed function would take the file number and match it to a dictionary representing the data mappings for different variants of each section. For example, a section_four_variants dictionary might have two members, one for the shifted-down version, and one for the normal version, a section_two_variants might have three and five field members, etc. The script would then read the matchings, load the correct mapping, extract the data, and insert it into the database. Is this an accepted/right way to go about solving this problem? Should I structure things differently? I don't know what to search Google for either to see what other solutions might be, though I believe the problem lies in the domain of ETL processing. I also have no formal CS training aside from what I've taught myself over the years. If this is not the right forum for this question, please tell me where to move it, if at all. Any help is most appreciated. Thank you.
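
    A minimal sketch of the "match a file number to its section variants" idea follows; the section names, file-number ranges, and cell coordinates are hypothetical and only meant to show the shape of the lookup.

      # Hedged sketch: pick the per-section schema that applies to a given file number.
      SECTION_VARIANTS = {
          "section_four": {
              "normal":  {"field_a": ("D", 10), "field_b": ("D", 11), "field_c": ("D", 12)},
              "shifted": {"field_a": ("D", 11), "field_b": ("D", 12), "field_c": ("D", 13)},
          },
      }

      # File-number ranges and the variant overrides that apply inside them.
      VARIANT_BY_RANGE = [
          (range(7000, 7636), {"section_four": "shifted"}),
          (range(7636, 8001), {"section_four": "normal"}),
      ]

      def mapping_for(file_number):
          chosen = {name: "normal" for name in SECTION_VARIANTS}   # default schema
          for file_range, overrides in VARIANT_BY_RANGE:
              if file_number in file_range:
                  chosen.update(overrides)
          return {name: SECTION_VARIANTS[name][variant]
                  for name, variant in chosen.items()}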

    Read the article

  • Dock with dual external DVI monitors with Intel + Nvidia Optimus?

    - by Ryan
    I have a Dell Latitude E6420 laptop plugged into a docking station, and the dock has 2 monitors (connected with DVI). Also note that I've installed Ubuntu alongside (dual-boot) Windows 7. I can't get the dual monitors to work both on Ubuntu (either 11.10 or 12.04) and Windows 7. When I run lspci | grep VGA, I get: 00:02.0 VGA compatible controller: Intel Corporation 2nd Generation Core Processor Family Integrated Graphics Controller (rev 09) 01:00.0 VGA compatible controller: nVidia Corporation GF108 [Quadro NVS 4200M] (rev a1) If I then reboot and uncheck Optimus setting in the BIOS during reboot, I'm able to get the dual monitors to work in Ubuntu 12.04 (but I need to configure them every boot in Nvidia Settings). When I run lspci | grep VGA, I get: 01:00.0 VGA compatible controller: NVIDIA Corporation GF119 [Quadro NVS 4200M] (rev a1) But then if I reboot into Windows (leaving the Optimus unchecked), Windows can't detect external monitors, and the resolution is unacceptably low. I've seen on many forum posts that this particular graphics card setup causes lots of headaches. I haven't been able to resolve my problem yet. How can I use my external display on my laptop with intel and nvidia video cards? How to use external displays with Intel driver on a NVidia/Intel hybrid system nVidia Optimus , Unity 3D and Dual Monitors "Just use VGA instead of DVI" isn't an option because my dock has only 1 VGA port (and 2 DVI). Switching the BIOS setting on every reboot and then reconfiguring the display settings every time is tedious, time-consuming, and impractical. Do you know how to make this work smoothly? Thanks for your help! P.S. see also: http://superuser.com/questions/434358/dell-latitude-e6420-dual-boot-ubuntu-windows-7-optimus-graphics-problems

    Read the article

  • Can T520+Bumblebee run an external monitor via DisplayPort?

    - by Fen
    Using 64-bit Ubuntu 11.10 and integrated (Intel) graphics, I can run the 1600x900 laptop display plus a 1600x1200 external monitor connected to the VGA adapter. But my external monitor is 1920x1200 so I have black stripes on each side. I believe the resolution is limited like this as the maximum resolution available from the Intel GPU is 2560x1600 = 4,096,000 pixels and I'm asking for a 3520x1200 = 4,224,000 display (with 1200x300 lost above the laptop screen). At 3200x1200 = 3,840,000 pixels, the Intel GPU seems happy. Under Windows, the same limit exists when using the VGA adapter, but if I turn Optimus on then I can connect the external monitor to the DisplayPort and get its full resolution and an extended desktop. I've seen that Bumblebee can run apps on the DisplayPort using the 'optirun' command. My question is: can Bumblebee run the DisplayPort in concert with the Intel card running the laptop screen creating a large virtual desktop (as on Windows)? If so, are there any pointers to how to do this? I tried once, failed, and dropped back to Integrated Graphics (and black stripes) as I could find no reports of this configuration working.

    Read the article

  • jstree dynamic JSON data from django

    - by danspants
    I'm trying to set up jsTree to dynamically accept JSON data from Django. This is the test data I have Django returning to jsTree: result=[{ "data" : "A node", "children" : [ { "data" : "Only child", "state" : "closed" } ], "state" : "open" },"Ajax node"] response=HttpResponse(content=result,mimetype="application/json") This is the jsTree code I'm using: jQuery("#demo1").jstree({ "json_data" : { "ajax" : { "url" : "/dirlist", "data" : function (n) { return { id : n.attr ? n.attr("id") : 0 }; }, error: function(e){alert(e);} } }, "plugins" : [ "themes","json_data"] }); All I get is the ajax loading symbol; the ajax error callback is also triggered and it alerts "undefined". I've also tried simplejson encoding in Django but with the same result. If I change the URL so that it receives a static JSON file with identical data, it works as expected. Any ideas on what the issue might be?
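
    One guess, sketched below, is that the view hands the Python list straight to HttpResponse, so what goes over the wire is Python's string representation of the structure rather than JSON, which jQuery then fails to parse; serialising explicitly is the usual fix. The view name and URL come from the question, everything else is assumption.

      # Hedged sketch of the Django side: serialise the structure before responding.
      import json
      from django.http import HttpResponse

      def dirlist(request):
          result = [
              {"data": "A node",
               "children": [{"data": "Only child", "state": "closed"}],
               "state": "open"},
              "Ajax node",
          ]
          # "mimetype" matches the asker's Django version; newer releases call it content_type.
          return HttpResponse(json.dumps(result), mimetype="application/json")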

    Read the article

  • How to filter the jqGrid data NOT using the built in search/filter box

    - by Jimbo
    I want users to be able to filter grid data without using the intrinsic search box. I have created two input fields for dates (from and to) and now need to tell the grid to adopt this as its filter and then request new data. Forging a server request for grid data (bypassing the grid) and setting the grid's data to the response data won't work, because as soon as the user tries to re-order the results or change the page, etc., the grid will request new data from the server using a blank filter. I can't seem to find a grid API to achieve this - does anyone have any ideas? Thanks.

    Read the article

  • Machine learning challenge: diagnosing program in java/groovy (datamining, machine learning)

    - by Registered User
    Hi all! I'm planning to develop a program in Java which will provide a diagnosis. The data set is divided into two parts, one for training and the other for testing. My program should learn to classify from the training data (which contains answers to 30 questions, each in its own column, with each record on a new line; the last column is the diagnosis, 0 or 1; in the testing part of the data the diagnosis column is empty - the data set contains about 1000 records) and then make predictions on the testing part of the data. I've never done anything similar, so I'd appreciate any advice or information about solutions to similar problems. I was thinking about the Java Machine Learning Library or the Java Data Mining Package, but I'm not sure if that's the right direction, and I'm still not sure how to tackle this challenge... Please advise. All the best!
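
    The asker is targeting Java (Weka or Java-ML would be natural choices there), but the train-then-predict workflow looks the same in any stack; the scikit-learn sketch below only illustrates that shape, with the file names and classifier choice invented for the example. The column layout (30 answer columns plus a final diagnosis column) follows the description above.

      # Hedged sketch: fit on the labelled rows, then predict on the unlabelled ones.
      import csv
      from sklearn.tree import DecisionTreeClassifier

      def load(path, labelled=True):
          X, y = [], []
          with open(path, newline="") as f:
              for row in csv.reader(f):
                  X.append([float(v) for v in row[:30]])   # the 30 answers
                  if labelled:
                      y.append(int(row[30]))               # diagnosis: 0 or 1
          return X, y

      X_train, y_train = load("training.csv")
      X_test, _ = load("testing.csv", labelled=False)

      model = DecisionTreeClassifier().fit(X_train, y_train)
      predictions = model.predict(X_test)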

    Read the article

  • Qt qUncompress gzip data

    - by talei
    Hello, I stumbled upon a problem and can't find a solution. What I want to do is uncompress data in Qt, using qUncompress(QByteArray), sent from the web in gzip format. I used Wireshark to determine that this is a valid gzip stream, and also tested with zip/rar, both of which can uncompress it. The code so far is like this: static const char dat[40] = { 0x1f, 0x8b, 0x08, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x03, 0xaa, 0x2e, 0x2e, 0x49, 0x2c, 0x29, 0x2d, 0xb6, 0x4a, 0x4b, 0xcc, 0x29, 0x4e, 0xad, 0x05, 0x00, 0x00, 0x00, 0xff, 0xff, 0x03, 0x00, 0x2a, 0x63, 0x18, 0xc5, 0x0e, 0x00, 0x00, 0x00 }; //this data contains the string {status:false}, in gzip format QByteArray data; data.append( dat, sizeof(dat) ); unsigned int size = 14; //expected uncompressed size, reconstructed big-endian //prepend the expected uncompressed size; the last 4 bytes in dat are 0x0e = 14 QByteArray dataPlusSize; dataPlusSize.append( (unsigned int)((size >> 24) & 0xFF)); dataPlusSize.append( (unsigned int)((size >> 16) & 0xFF)); dataPlusSize.append( (unsigned int)((size >> 8) & 0xFF)); dataPlusSize.append( (unsigned int)((size >> 0) & 0xFF)); QByteArray uncomp = qUncompress( dataPlusSize ); qDebug() << uncomp; And uncompression fails with: qUncompress: Z_DATA_ERROR: Input data is corrupted. AFAIK gzip consists of a 10-byte header, the DEFLATE payload, and an 8-byte trailer (4-byte CRC32 + 4-byte ISIZE, the uncompressed data size). Stripping the header and trailer should leave me with the DEFLATE data stream, but qUncompress yields the same error. I checked with a data string compressed in PHP, like this: $stringData = gzcompress( "{status:false}", 1); and qUncompress uncompresses that data (I didn't see any gzip header there though, i.e. ID1 = 0x1f, ID2 = 0x8b). I stepped through the above code with the debugger, and the error occurs at: if ( #endif ((BITS(8) << 8) + (hold >> 8)) % 31) { //here is the error, WHY? long unsigned int hold = 35615 strm->msg = (char *)"incorrect header check"; state->mode = BAD; break; } inflate.c line 610. I know that qUncompress is simply a wrapper around zlib, so I suppose it should handle gzip without any problem. Any comments are more than welcome. Best regards
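
    "Incorrect header check" is what zlib reports when it expects a zlib-wrapped stream but is handed gzip data: the two wrappers differ, and qUncompress only understands the zlib wrapper preceded by Qt's 4-byte expected-size field. The Python snippet below is not Qt; it is just a compact way to show the distinction. In Qt one would typically call zlib's inflate directly with windowBits set to 16+MAX_WBITS (or strip the gzip wrapper and use raw inflate), since qUncompress itself cannot be told to accept gzip.

      # Illustration in Python's zlib of why the gzip wrapper trips the header check.
      import gzip, zlib

      payload = b"{status:false}"
      gz = gzip.compress(payload)               # gzip-wrapped, like the captured stream

      # 16 + MAX_WBITS tells zlib to expect and strip the gzip header/trailer.
      assert zlib.decompress(gz, 16 + zlib.MAX_WBITS) == payload

      # The default mode expects a zlib wrapper and fails on gzip input with
      # "incorrect header check" -- the same check hit inside inflate.c above.
      try:
          zlib.decompress(gz)
      except zlib.error as exc:
          print(exc)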

    Read the article
