Search Results

Search found 58440 results on 2338 pages for 'data cleansing'.


  • jquery ajax form function(data)

    - by RussP
    Can someone please tell me where I have gone wrong? Whatever I do I get the answer "no". Here is the jQuery that sends the data to the PHP script: $j.post("logincheck.php",{ username:$j('#username').attr('value'), password:$j('#password').attr('value'), rand:Math.random() } , function(data) { if(data=='yes') {alert('yes');} else {alert('no');} } ); Here is the PHP script: if(isset($_POST['username'])): $username = $_POST['username']; $password = $_POST['password']; $posts = mysql_query("SELECT * FROM users WHERE username='$username'"); $no_rows = mysql_num_rows($posts ); while($row = mysql_fetch_array($posts)): print 'yes'; endwhile; else: print 'no'; //header('location: index.php'); endif; endif; Thanks in advance


  • Designing a table to store EXIF data

    - by rafale
    I'm looking to get the best performance out of querying a table containing EXIF data. The queries in question will only search the EXIF data for the specified strings and return the row index on a match. With that said, would it be better to store the EXIF data in a table with separate columns for each of the tags, or would storing all of the tags in a single column as one long delimited string suit me just as well? There are around 115 EXIF tags I'll be storing, and each record would be around 1500 to 2000 chars in length if concatenated into a single string.


  • What frameworks exist for data subscription and update?

    - by Timothy Pratley
    There is one server with multiple clients. The clients are viewing subsets of the server's entire data. If the data that a client is viewing changes, the client should be informed of the changes so that it displays the current data. Example: Two clients are viewing a list of users in an administration screen. One client adds a new user to the list and modifies the permissions of another user. The other client sees the changes propagated to their view. In the client side code I would like the users list to be updated by the framework itself, raising changed events such that it will be redrawn - similar to 'cells' or dataflow. I am looking specifically for a .NET or Java implementation.
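
    As a minimal illustration of the push-style change notification described above (not any particular framework; all type and member names here are made up), a plain .NET event can carry the change from the shared data source to each subscribed view:

        using System;
        using System.Collections.Generic;

        // Hypothetical names; a sketch of push-style change notification, not a specific framework.
        public class UserListChangedEventArgs : EventArgs
        {
            public IReadOnlyList<string> Users { get; }
            public UserListChangedEventArgs(IReadOnlyList<string> users) { Users = users; }
        }

        public class UserRepository
        {
            private readonly List<string> _users = new List<string>();

            // Views subscribe to this event to be told about changes.
            public event EventHandler<UserListChangedEventArgs> UsersChanged;

            public void AddUser(string name)
            {
                _users.Add(name);
                UsersChanged?.Invoke(this, new UserListChangedEventArgs(_users.AsReadOnly()));
            }
        }

        public class AdminScreen
        {
            public AdminScreen(UserRepository repository)
            {
                // Redraw whenever the subscribed data changes.
                repository.UsersChanged += (s, e) =>
                    Console.WriteLine("Redrawing user list: " + string.Join(", ", e.Users));
            }
        }

        public static class Demo
        {
            public static void Main()
            {
                var repo = new UserRepository();
                var screen = new AdminScreen(repo);
                repo.AddUser("alice"); // the screen prints the updated list
            }
        }

    In a real multi-client setup the event would also have to travel over the network; in the .NET world, libraries such as SignalR or WCF duplex callbacks cover that delivery part, while the client-side subscribe-and-redraw pattern stays the same.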


  • JavaScript: how to use data but hide it so that it cannot be reused

    - by loukote
    Hi all. I have some data that I'd like to publish on just one website, i.e. it should not be reused on other websites. The data is a set of numbers that changes every day; our journalists work hard to gather it. Is there any way to hide, encrypt, etc. the data so that it cannot be reused by others, but still show it in a graph at the same time? I found an ASCII to HEX tool that could be used for this (http://utenti.multimania.it/ascii2hex/). I wonder if you can suggest other ways. (Even if I have to completely change the strategy.) Many thanks!


  • Best data recovery tools?

    - by Nonick
    So due to a recent act of stupidity and bravado, I uttered the words "backups! who needs backups?!" and what followed was the tragic loss of 260 GB of data. This scenario in particular is requiring me to recover a repartitioned hard disk, but I was wondering what tools people here use in general to recover lost data. I'm sure everyone has been there: accidentally rewriting files, resaving an old version, a computer crash, hard disk death, deleting an important document, etc. So I was thinking it might be an interesting point of discussion as to what you use to recover lost data. I apologise if this is considered irrelevant, but considering there have been a few recovery questions, I think this might be interesting.


  • Storing data with a stand-alone C++ application

    - by Mike
    I work with Apache, PHP, and MySQL for web development and local applications. For the past couple of years I have slowly been learning C++ and want to build an application this summer. Specifically, I want to make a "library" application in which I can store information about the books, CDs, and records that I own. I know this type of app exists, but I want to learn C++ and this seems like a good way to go about it. Here are a few questions: Is it possible to create a stand-alone application that does not require a database for storing data? If the answer to #1 above is "yes", is it a good idea to do this for an application that could potentially need to manage a lot of data? What data-storage options would you recommend for use with a C++ application? Thanks!


  • Selecting data from mysql table and related data from another to join them

    - by knittledan
    I've looked at other questions and answers but still don't understand, which brings me here. I have one database with two tables, let's say table1 and table2. I'm looking to grab all the information from table1 and only one column from table2 that coincides with the correct row in table1. Example which I know is wrong: SELECT table1.*, table2.time_stamp FROM table1, table2 WHERE table1.ticket_id=$var AND table1.user_id = table2.user_id Basically, select data from table1, then use a value from the selected table to grab the related data from table2 and join them to output them as one mysql_query. I'm sure it's simple and has been asked before. Edit: I don't receive an error, SQL just returns nothing. The long form of this would be: $sqlResults = mysql_query("SELECT table1.* FROM table1 WHERE table1.ticket_id=$var") while($rowResult = mysql_fetch_array( $sqlResults )) { $userID = $rowResult['user_id']; $sqlResults2 = mysql_query("SELECT table2.time_stamp FROM table2 WHERE table2.user_id=$userID") } I want to combine that into one SQL statement so I don't have to hit table2 for every row table1 has


  • Ordered Data Structure that allows to efficiently remove duplicate items

    - by devoured elysium
    I need a data structure that: Must be ordered (adding elements a, b and c to an empty structure will make them be at positions 0, 1 and 2). Allows adding repeated items; that is, I can have a list with a, b, c, a, b. Allows removing all occurrences of a given item (if I do something like delete(1), it will delete all occurrences of 1 in the structure). I can't really pick what the best data structure could be here. I thought at first about something like a List (the problem is having an O(n) operation when removing items), but maybe I'm missing something? What about trees/heaps? Hashtables/maps? I'll have to assume I'll do as much adding as removing with this data structure. Thanks
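
    One sketch of a structure that meets all three requirements, written in C# purely as an illustration: keep the elements in a linked list to preserve insertion order and duplicates, and keep a side index from each value to its nodes so that removing every occurrence of a value only touches those nodes. (Positional access by index then becomes O(n), which is the trade-off.)

        using System;
        using System.Collections.Generic;

        // Illustrative sketch: ordered, allows duplicates, removes all occurrences of a value
        // without scanning the whole sequence.
        public class MultiSequence<T>
        {
            private readonly LinkedList<T> _items = new LinkedList<T>();
            private readonly Dictionary<T, List<LinkedListNode<T>>> _index =
                new Dictionary<T, List<LinkedListNode<T>>>();

            public void Add(T item)
            {
                LinkedListNode<T> node = _items.AddLast(item);    // preserves insertion order
                if (!_index.TryGetValue(item, out var nodes))
                    _index[item] = nodes = new List<LinkedListNode<T>>();
                nodes.Add(node);                                   // remember where the duplicates live
            }

            public void RemoveAll(T item)
            {
                if (!_index.TryGetValue(item, out var nodes)) return;
                foreach (var node in nodes)
                    _items.Remove(node);                           // O(1) per occurrence
                _index.Remove(item);
            }

            public IEnumerable<T> Items => _items;
        }

        public static class MultiSequenceDemo
        {
            public static void Main()
            {
                var seq = new MultiSequence<string>();
                foreach (var s in new[] { "a", "b", "c", "a", "b" }) seq.Add(s);
                seq.RemoveAll("a");
                Console.WriteLine(string.Join(", ", seq.Items)); // b, c, b
            }
        }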


  • Importing data from many excel workbooks and sheets into a single workbook/table

    - by Max Rusalen
    Hi, I have 54 Excel files with three sheets each. Each sheet has a different number of data entries, but they are set out in an identical format, and I need to import the data from those sheets into a single workbook using VBA. Is there any way I can program it so I can build the loops to import the data, but without having to write in each workbook name for each loop/sheet? I think I can use the Call function, but I don't know how to make the loop code independent of the workbook name it applies to. Thank you so much in advance, Millie


  • Data access layer design

    - by Sam
    I have a web app and a console application accessing a db. The db has two tables (A, B), one of which (A) is specific to the web app. When writing a data access layer, what is the best way to do it? Technically, a data access layer should provide access to all the data available. In doing so, methods to interact with A are exposed to the console application if we have a single access layer. Does creating two access layers for the two tables in the same database make any sense? What is a good way to do it?
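
    One common way to square this is a single data access layer whose public surface is split by interfaces, so the console application is only handed the contract it needs. A rough sketch, with all names invented for illustration:

        using System.Collections.Generic;

        // Hypothetical entities for table A (web-only) and table B (shared).
        public class RecordA { public int Id { get; set; } }
        public class RecordB { public int Id { get; set; } }

        // Contract for the data both applications may touch.
        public interface ISharedRepository
        {
            IEnumerable<RecordB> GetAllB();
        }

        // Contract for the web-only table; only the web app references this.
        public interface IWebRepository : ISharedRepository
        {
            IEnumerable<RecordA> GetAllA();
        }

        // One concrete DAL implements both; each client is handed only the
        // interface appropriate to it (e.g. via its composition root / DI container).
        public class SqlRepository : IWebRepository
        {
            public IEnumerable<RecordA> GetAllA() { /* query table A here */ yield break; }
            public IEnumerable<RecordB> GetAllB() { /* query table B here */ yield break; }
        }

    The console application is given only ISharedRepository (table B), so the table A methods never appear in its code even though both clients share one concrete DAL and connection string.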


  • How to insert data into a MySQL database table

    - by user1289538
    I am inserting data into a MySQL database table, but it does not insert the data into the fields. I am using the following code: $providernpi=$_POST['ProviderNPI']; $patienid=$_POST['PatientID']; $fileurl=$_POST['FileURL']; $filetype=$_POST['FileTYPE']; $datasynid=$_POST['DataSynID']; $appointmentlistingsid=$_POST ['AppointmentListingsID']; $query=("INSERT INTO AppointmentDataSync (ProviderNPI,PatientID, FileURL,FileType,DataSyncID,AppointmentListingsID) VALUES ('$providernpi', '$patientid','$fileurl','$filetype','$datasynid','$appointmentlistingid')"); mysql_query($query,$con); printf("Records inserted: %d\n", mysql_affected_rows()); echo($patienid) ?>


  • From Binary to Data Structures

    - by Cédric Menzi
    Table of Contents Introduction PE file format and COFF header COFF file header BaseCoffReader Byte4ByteCoffReader UnsafeCoffReader ManagedCoffReader Conclusion History This article is also available on CodeProject Introduction Sometimes, you want to parse well-formed binary data and bring it into your objects to do some dirty stuff with it. In the Windows world most data structures are stored in a special binary format. Either we call a WinApi function or we want to read from special files like images, spool files, executables or maybe the previously announced Outlook Personal Folders File. Most specifications for these files can be found on the MSDN Library: Open Specification In my example, we are going to get the COFF (Common Object File Format) file header from a PE (Portable Executable). The exact specification can be found here: PECOFF PE file format and COFF header Before we start we need to know how this file is formatted. The following figure shows an overview of the Microsoft PE executable format. Source: Microsoft Our goal is to get the PE header. As we can see, the image starts with an MS-DOS 2.0 header, which is not important for us. From the documentation we can read "...After the MS DOS stub, at the file offset specified at offset 0x3c, is a 4-byte...". With this information we know our reader has to jump to location 0x3c and read the offset to the signature. The signature is always 4 bytes and ensures that the image is a PE file. The signature is: PE\0\0. To prove this we first seek to the offset 0x3c and read whether the file contains the signature. So we need to declare some constants, because we do not want magic numbers.   private const int PeSignatureOffsetLocation = 0x3c; private const int PeSignatureSize = 4; private const string PeSignatureContent = "PE";   Then we need a method for moving the reader to the correct location to read the offset of the signature. With this method we always move the underlying Stream of the BinaryReader to the start location of the PE signature.   private void SeekToPeSignature(BinaryReader br) { // seek to the offset for the PE signature br.BaseStream.Seek(PeSignatureOffsetLocation, SeekOrigin.Begin); // read the offset int offsetToPeSig = br.ReadInt32(); // seek to the start of the PE signature br.BaseStream.Seek(offsetToPeSig, SeekOrigin.Begin); }   Now, we can check whether it is a valid PE image by reading whether the next 4 bytes contain the content PE.   private bool IsValidPeSignature(BinaryReader br) { // read 4 bytes to get the PE signature byte[] peSigBytes = br.ReadBytes(PeSignatureSize); // convert it to a string and trim \0 at the end of the content string peContent = Encoding.Default.GetString(peSigBytes).TrimEnd('\0'); // check if PE is in the content return peContent.Equals(PeSignatureContent); }   With this basic functionality we have a good base reader class to try the different methods of parsing the COFF file header. 
COFF file header The COFF header has the following structure (offset, size in bytes, field): 0, 2, Machine; 2, 2, NumberOfSections; 4, 4, TimeDateStamp; 8, 4, PointerToSymbolTable; 12, 4, NumberOfSymbols; 16, 2, SizeOfOptionalHeader; 18, 2, Characteristics. If we translate this table to code, we get something like this:   [StructLayout(LayoutKind.Sequential, CharSet = CharSet.Unicode)] public struct CoffHeader { public MachineType Machine; public ushort NumberOfSections; public uint TimeDateStamp; public uint PointerToSymbolTable; public uint NumberOfSymbols; public ushort SizeOfOptionalHeader; public Characteristic Characteristics; } BaseCoffReader All readers do the same thing, so we go to the patterns library in our head and see that the Strategy pattern or the Template Method pattern sticks out on the bookshelf. I have decided to use the Template Method pattern in this case, because Parse() should handle the IO for all implementations and the concrete parsing should be done in the derived classes.   public CoffHeader Parse() { using (var br = new BinaryReader(File.Open(_fileName, FileMode.Open, FileAccess.Read, FileShare.Read))) { SeekToPeSignature(br); if (!IsValidPeSignature(br)) { throw new BadImageFormatException(); } return ParseInternal(br); } } protected abstract CoffHeader ParseInternal(BinaryReader br);   First we open the BinaryReader, seek to the PE signature, then we check whether it contains a valid PE signature, and the rest is done by the derived implementations. Byte4ByteCoffReader The first solution uses the BinaryReader. It is the most common way to get the data; we only need to know the order, data type and size of each field. If we read byte for byte we could comment out the first line in the CoffHeader structure, because we have control over the order of the member assignments.   protected override CoffHeader ParseInternal(BinaryReader br) { CoffHeader coff = new CoffHeader(); coff.Machine = (MachineType)br.ReadInt16(); coff.NumberOfSections = (ushort)br.ReadInt16(); coff.TimeDateStamp = br.ReadUInt32(); coff.PointerToSymbolTable = br.ReadUInt32(); coff.NumberOfSymbols = br.ReadUInt32(); coff.SizeOfOptionalHeader = (ushort)br.ReadInt16(); coff.Characteristics = (Characteristic)br.ReadInt16(); return coff; }   If the structure is as short as the COFF header here and the specification will never change, there is probably no reason to change the strategy. But if a data type changes, a new member is added or the ordering of members changes, the maintenance costs of this method are very high. UnsafeCoffReader Another way to bring the data into this structure is using a "magical" unsafe trick. As above, we know the layout and order of the data structure. Now, we need the StructLayout attribute, because we have to ensure that the .NET runtime allocates the structure in the same order as it is specified in the source code. We also need to enable "Allow unsafe code (/unsafe)" in the project's build properties. Then we need to add the following constructor to the CoffHeader structure.   [StructLayout(LayoutKind.Sequential, CharSet = CharSet.Unicode)] public struct CoffHeader { public CoffHeader(byte[] data) { unsafe { fixed (byte* packet = &data[0]) { this = *(CoffHeader*)packet; } } } }   The "magic" trick is in the statement: this = *(CoffHeader*)packet;. What happens here? We have a fixed block of data somewhere in memory, and because a struct in C# is a value type, the assignment operator = copies the whole data of the structure and not only the reference. 
To fill the structure with data, we need to pass the data as bytes into the CoffHeader structure. This can be achieved by reading the exact size of the structure from the PE file.   protected override CoffHeader ParseInternal(BinaryReader br) { return new CoffHeader(br.ReadBytes(Marshal.SizeOf(typeof(CoffHeader)))); }   This solution is the fastest way to parse the data and bring it into the structure, but it is unsafe and it could introduce some security and stability risks. ManagedCoffReader In this solution we use the same structure-assignment approach as above, but we need to replace the unsafe part in the constructor with the following managed part:   [StructLayout(LayoutKind.Sequential, CharSet = CharSet.Unicode)] public struct CoffHeader { public CoffHeader(byte[] data) { IntPtr coffPtr = IntPtr.Zero; try { int size = Marshal.SizeOf(typeof(CoffHeader)); coffPtr = Marshal.AllocHGlobal(size); Marshal.Copy(data, 0, coffPtr, size); this = (CoffHeader)Marshal.PtrToStructure(coffPtr, typeof(CoffHeader)); } finally { Marshal.FreeHGlobal(coffPtr); } } }     Conclusion We saw that we can parse well-formed binary data into our data structures using different approaches. The first is probably the clearest way, because we know each member, its size and its ordering, and we have control over reading the data for each member. But if a member is added or the structure changes for some reason, we need to change the reader. The other two solutions use the structure-assignment approach. In the unsafe implementation we need to compile the project with the /unsafe option; we gain performance, but we take on some security risks.
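
    For completeness, a short usage sketch of the readers described above. The excerpt does not show the readers' constructors, so the assumption here is that each concrete reader takes the path of the PE file and stores it in the _fileName field used by Parse():

        using System;

        public static class CoffReaderDemo
        {
            public static void Main()
            {
                // Assumption: the concrete readers accept the PE file path in their constructor
                // and store it in the _fileName field used by Parse().
                BaseCoffReader reader = new ManagedCoffReader(@"C:\Windows\System32\notepad.exe");

                CoffHeader header = reader.Parse(); // seeks to the PE signature, validates it, then parses

                Console.WriteLine("Machine:            " + header.Machine);
                Console.WriteLine("Number of sections: " + header.NumberOfSections);
                Console.WriteLine("Timestamp:          " + header.TimeDateStamp);
            }
        }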


  • General Purpose ASP.NET Data Source Control

    - by Ricardo Peres
    OK, you already know about the ObjectDataSource control, so what’s wrong with it? Well, for once, it doesn’t pass any context to the SelectMethod, you only get the parameters supplied on the SelectParameters plus the desired ordering, starting page and maximum number of rows to display. Also, you must have two separate methods, one for actually retrieving the data, and the other for getting the total number of records (SelectCountMethod). Finally, you don’t get a chance to alter the supplied data before you bind it to the target control. I wanted something simple to use, and more similar to ASP.NET 4.5, where you can have the select method on the page itself, so I came up with CustomDataSource. Here’s how to use it (I chose a GridView, but it works equally well with any regular data-bound control): 1: <web:CustomDataSourceControl runat="server" ID="datasource" PageSize="10" OnData="OnData" /> 2: <asp:GridView runat="server" ID="grid" DataSourceID="datasource" DataKeyNames="Id" PageSize="10" AllowPaging="true" AllowSorting="true" /> The OnData event handler receives a DataEventArgs instance, which contains some properties that describe the desired paging location and size, and it’s where you return the data plus the total record count. Here’s a quick example: 1: protected void OnData(object sender, DataEventArgs e) 2: { 3: //just return some data 4: var data = Enumerable.Range(e.StartRowIndex, e.PageSize).Select(x => new { Id = x, Value = x.ToString(), IsPair = ((x % 2) == 0) }); 5: e.Data = data; 6: //the total number of records 7: e.TotalRowCount = 100; 8: } Here’s the code for the DataEventArgs: 1: [Serializable] 2: public class DataEventArgs : EventArgs 3: { 4: public DataEventArgs(Int32 pageSize, Int32 startRowIndex, String sortExpression, IOrderedDictionary parameters) 5: { 6: this.PageSize = pageSize; 7: this.StartRowIndex = startRowIndex; 8: this.SortExpression = sortExpression; 9: this.Parameters = parameters; 10: } 11:  12: public IEnumerable Data 13: { 14: get; 15: set; 16: } 17:  18: public IOrderedDictionary Parameters 19: { 20: get; 21: private set; 22: } 23:  24: public String SortExpression 25: { 26: get; 27: private set; 28: } 29:  30: public Int32 StartRowIndex 31: { 32: get; 33: private set; 34: } 35:  36: public Int32 PageSize 37: { 38: get; 39: private set; 40: } 41:  42: public Int32 TotalRowCount 43: { 44: get; 45: set; 46: } 47: } As you can guess, the StartRowIndex and PageSize receive the starting row and the desired page size, where the page size comes from the PageSize property on the markup. There’s also a SortExpression, which gets passed the sorted-by column and direction (if descending) and a dictionary containing all the values coming from the SelectParameters collection, if any. All of these are read only, and it is your responsibility to fill in the Data and TotalRowCount. 
The code for the CustomDataSource is very simple: 1: [NonVisualControl] 2: public class CustomDataSourceControl : DataSourceControl 3: { 4: public CustomDataSourceControl() 5: { 6: this.SelectParameters = new ParameterCollection(); 7: } 8:  9: protected override DataSourceView GetView(String viewName) 10: { 11: return (new CustomDataSourceView(this, viewName)); 12: } 13:  14: internal void GetData(DataEventArgs args) 15: { 16: this.OnData(args); 17: } 18:  19: protected virtual void OnData(DataEventArgs args) 20: { 21: EventHandler<DataEventArgs> data = this.Data; 22:  23: if (data != null) 24: { 25: data(this, args); 26: } 27: } 28:  29: [Browsable(false)] 30: [DesignerSerializationVisibility(DesignerSerializationVisibility.Visible)] 31: [PersistenceMode(PersistenceMode.InnerProperty)] 32: public ParameterCollection SelectParameters 33: { 34: get; 35: private set; 36: } 37:  38: public event EventHandler<DataEventArgs> Data; 39:  40: public Int32 PageSize 41: { 42: get; 43: set; 44: } 45: } Also, the code for the accompanying internal – as there is no need to use it from outside of its declaring assembly - data source view: 1: sealed class CustomDataSourceView : DataSourceView 2: { 3: private readonly CustomDataSourceControl dataSourceControl = null; 4:  5: public CustomDataSourceView(CustomDataSourceControl dataSourceControl, String viewName) : base(dataSourceControl, viewName) 6: { 7: this.dataSourceControl = dataSourceControl; 8: } 9:  10: public override Boolean CanPage 11: { 12: get 13: { 14: return (true); 15: } 16: } 17:  18: public override Boolean CanRetrieveTotalRowCount 19: { 20: get 21: { 22: return (true); 23: } 24: } 25:  26: public override Boolean CanSort 27: { 28: get 29: { 30: return (true); 31: } 32: } 33:  34: protected override IEnumerable ExecuteSelect(DataSourceSelectArguments arguments) 35: { 36: IOrderedDictionary parameters = this.dataSourceControl.SelectParameters.GetValues(HttpContext.Current, this.dataSourceControl); 37: DataEventArgs args = new DataEventArgs(this.dataSourceControl.PageSize, arguments.StartRowIndex, arguments.SortExpression, parameters); 38:  39: this.dataSourceControl.GetData(args); 40:  41: arguments.TotalRowCount = args.TotalRowCount; 42: arguments.MaximumRows = this.dataSourceControl.PageSize; 43: arguments.AddSupportedCapabilities(DataSourceCapabilities.Page | DataSourceCapabilities.Sort | DataSourceCapabilities.RetrieveTotalRowCount); 44: arguments.RetrieveTotalRowCount = true; 45:  46: if (!(args.Data is ICollection)) 47: { 48: return (args.Data.OfType<Object>().ToList()); 49: } 50: else 51: { 52: return (args.Data); 53: } 54: } 55: } As always, looking forward to hearing from you!


  • JAVA image transfer problem

    - by user579098
    Hi, I have a school assignment, to send a jpg image,split it into groups of 100 bytes, corrupt it, use a CRC check to locate the errors and re-transmit until it eventually is built back into its original form. It's practically ready, however when I check out the new images, they appear with errors.. I would really appreciate if someone could look at my code below and maybe locate this logical mistake as I can't understand what the problem is because everything looks ok :S For the file with all the data needed including photos and error patterns one could download it from this link:http://rapidshare.com/#!download|932tl2|443122762|Data.zip|739 Thanks in advance, Stefan p.s dont forget to change the paths in the code for the image and error files package networks; import java.io.*; // for file reader import java.util.zip.CRC32; // CRC32 IEEE (Ethernet) public class Main { /** * Reads a whole file into an array of bytes. * @param file The file in question. * @return Array of bytes containing file data. * @throws IOException Message contains why it failed. */ public static byte[] readFileArray(File file) throws IOException { InputStream is = new FileInputStream(file); byte[] data=new byte[(int)file.length()]; is.read(data); is.close(); return data; } /** * Writes (or overwrites if exists) a file with data from an array of bytes. * @param file The file in question. * @param data Array of bytes containing the new file data. * @throws IOException Message contains why it failed. */ public static void writeFileArray(File file, byte[] data) throws IOException { OutputStream os = new FileOutputStream(file,false); os.write(data); os.close(); } /** * Converts a long value to an array of bytes. * @param data The target variable. * @return Byte array conversion of data. * @see http://www.daniweb.com/code/snippet216874.html */ public static byte[] toByta(long data) { return new byte[] { (byte)((data >> 56) & 0xff), (byte)((data >> 48) & 0xff), (byte)((data >> 40) & 0xff), (byte)((data >> 32) & 0xff), (byte)((data >> 24) & 0xff), (byte)((data >> 16) & 0xff), (byte)((data >> 8) & 0xff), (byte)((data >> 0) & 0xff), }; } /** * Converts a an array of bytes to long value. * @param data The target variable. * @return Long value conversion of data. * @see http://www.daniweb.com/code/snippet216874.html */ public static long toLong(byte[] data) { if (data == null || data.length != 8) return 0x0; return (long)( // (Below) convert to longs before shift because digits // are lost with ints beyond the 32-bit limit (long)(0xff & data[0]) << 56 | (long)(0xff & data[1]) << 48 | (long)(0xff & data[2]) << 40 | (long)(0xff & data[3]) << 32 | (long)(0xff & data[4]) << 24 | (long)(0xff & data[5]) << 16 | (long)(0xff & data[6]) << 8 | (long)(0xff & data[7]) << 0 ); } public static byte[] nextNoise(){ byte[] result=new byte[100]; // copy a frame's worth of data (or remaining data if it is less than frame length) int read=Math.min(err_data.length-err_pstn, 100); System.arraycopy(err_data, err_pstn, result, 0, read); // if read data is less than frame length, reset position and add remaining data if(read<100){ err_pstn=100-read; System.arraycopy(err_data, 0, result, read, err_pstn); }else // otherwise, increase position err_pstn+=100; // return noise segment return result; } /** * Given some original data, it is purposefully corrupted according to a * second data array (which is read from a file). In pseudocode: * corrupt = original xor corruptor * @param data The original data. * @return The new (corrupted) data. 
*/ public static byte[] corruptData(byte[] data){ // get the next noise sequence byte[] noise = nextNoise(); // finally, xor data with noise and return result for(int i=0; i<100; i++)data[i]^=noise[i]; return data; } /** * Given an array of data, a packet is created. In pseudocode: * frame = corrupt(data) + crc(data) * @param data The original frame data. * @return The resulting frame data. */ public static byte[] buildFrame(byte[] data){ // pack = [data]+crc32([data]) byte[] hash = new byte[8]; // calculate crc32 of data and copy it to byte array CRC32 crc = new CRC32(); crc.update(data); hash=toByta(crc.getValue()); // create a byte array holding the final packet byte[] pack = new byte[data.length+hash.length]; // create the corrupted data byte[] crpt = new byte[data.length]; crpt = corruptData(data); // copy corrupted data into pack System.arraycopy(crpt, 0, pack, 0, crpt.length); // copy hash into pack System.arraycopy(hash, 0, pack, data.length, hash.length); // return pack return pack; } /** * Verifies frame contents. * @param frame The frame data (data+crc32). * @return True if frame is valid, false otherwise. */ public static boolean verifyFrame(byte[] frame){ // allocate hash and data variables byte[] hash=new byte[8]; byte[] data=new byte[frame.length-hash.length]; // read frame into hash and data variables System.arraycopy(frame, frame.length-hash.length, hash, 0, hash.length); System.arraycopy(frame, 0, data, 0, frame.length-hash.length); // get crc32 of data CRC32 crc = new CRC32(); crc.update(data); // compare crc32 of data with crc32 of frame return crc.getValue()==toLong(hash); } /** * Transfers a file through a channel in frames and reconstructs it into a new file. * @param jpg_file File name of target file to transfer. * @param err_file The channel noise file used to simulate corruption. * @param out_file The name of the newly-created file. 
* @throws IOException */ public static void transferFile(String jpg_file, String err_file, String out_file) throws IOException { // read file data into global variables jpg_data = readFileArray(new File(jpg_file)); err_data = readFileArray(new File(err_file)); err_pstn = 0; // variable that will hold the final (transfered) data byte[] out_data = new byte[jpg_data.length]; // holds the current frame data byte[] frame_orig = new byte[100]; byte[] frame_sent = new byte[100]; // send file in chunks (frames) of 100 bytes for(int i=0; i<Math.ceil(jpg_data.length/100); i++){ // copy jpg data into frame and init first-time switch System.arraycopy(jpg_data, i*100, frame_orig, 0, 100); boolean not_first=false; System.out.print("Packet #"+i+": "); // repeat getting same frame until frame crc matches with frame content do { if(not_first)System.out.print("F"); frame_sent=buildFrame(frame_orig); not_first=true; }while(!verifyFrame(frame_sent)); // usually, you'd constrain this by time to prevent infinite loops (in // case the channel is so wacked up it doesn't get a single packet right) // copy frame to image file System.out.println("S"); System.arraycopy(frame_sent, 0, out_data, i*100, 100); } System.out.println("\nDone."); writeFileArray(new File(out_file),out_data); } // global variables for file data and pointer public static byte[] jpg_data; public static byte[] err_data; public static int err_pstn=0; public static void main(String[] args) throws IOException { // list of jpg files String[] jpg_file={ "C:\\Users\\Stefan\\Desktop\\Data\\Images\\photo1.jpg", "C:\\Users\\Stefan\\Desktop\\Data\\Images\\photo2.jpg", "C:\\Users\\Stefan\\Desktop\\Data\\Images\\photo3.jpg", "C:\\Users\\Stefan\\Desktop\\Data\\Images\\photo4.jpg" }; // list of error patterns String[] err_file={ "C:\\Users\\Stefan\\Desktop\\Data\\Error Pattern\\Error Pattern 1.DAT", "C:\\Users\\Stefan\\Desktop\\Data\\Error Pattern\\Error Pattern 2.DAT", "C:\\Users\\Stefan\\Desktop\\Data\\Error Pattern\\Error Pattern 3.DAT", "C:\\Users\\Stefan\\Desktop\\Data\\Error Pattern\\Error Pattern 4.DAT" }; // loop through all jpg/channel combinations and run tests for(int x=0; x<jpg_file.length; x++){ for(int y=0; y<err_file.length; y++){ System.out.println("Transfering photo"+(x+1)+".jpg using Pattern "+(y+1)+"..."); transferFile(jpg_file[x],err_file[y],jpg_file[x].replace("photo","CH#"+y+"_photo")); } } } }


  • Looking for a .NET data access layer

    - by Luke101
    Hello, I am looking for a data access layer for ADO.NET. I am not interested in LINQ, EF, NHibernate or any other ORM. Currently, I am using the data access layer from Umbraco. The DAL is very good but they stopped developing it, so I am looking for a different one. Does anyone know where I can find a list of DALs that I can test?
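
    For comparison, this is roughly the kind of thin, ORM-free wrapper over raw ADO.NET that such libraries provide; the class and method names below are made up for illustration, not taken from any specific DAL:

        using System.Collections.Generic;
        using System.Data;
        using System.Data.SqlClient;

        // Made-up minimal helper: a thin layer over raw ADO.NET, no ORM involved.
        public class SqlHelper
        {
            private readonly string _connectionString;
            public SqlHelper(string connectionString) { _connectionString = connectionString; }

            public DataTable Query(string sql, IDictionary<string, object> parameters = null)
            {
                using (var connection = new SqlConnection(_connectionString))
                using (var command = new SqlCommand(sql, connection))
                {
                    if (parameters != null)
                        foreach (var p in parameters)
                            command.Parameters.AddWithValue(p.Key, p.Value);

                    var table = new DataTable();
                    using (var adapter = new SqlDataAdapter(command))
                        adapter.Fill(table); // the adapter opens and closes the connection itself
                    return table;
                }
            }

            public int Execute(string sql, IDictionary<string, object> parameters = null)
            {
                using (var connection = new SqlConnection(_connectionString))
                using (var command = new SqlCommand(sql, connection))
                {
                    if (parameters != null)
                        foreach (var p in parameters)
                            command.Parameters.AddWithValue(p.Key, p.Value);

                    connection.Open();
                    return command.ExecuteNonQuery();
                }
            }
        }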


  • iPhone: Save boolean into Core Data

    - by Nic Hubbard
    I have set up one of my Core Data attributes as a Boolean. Now I need to set it, but Xcode keeps telling me that it may not respond to setUseGPS. [ride setUseGPS: useGPS.on]; What is the method for setting a Boolean in Core Data? All my other attributes are set this way, and they work great. So I'm not sure why a Boolean cannot be set this way?


  • Stream data with Node.js

    - by Saif Bechan
    I want to know if it is possible to stream data from the server to the client with Node.js. From what I can gather all over the internet, this should be possible, yet I fail to find a correct example or solution. What I want is a single HTTP POST to Node.js with AJAX, then to leave the connection open and continuously stream data to the client. The client will receive this stream and update the page continuously.


  • Adding "System.Data.SQLite" as a reference

    - by subash
    Hi all, when I was building my project, done in ASP.NET and C#, it produced the error "The type or namespace name 'SQLite' does not exist in the namespace 'System.Data' (are you missing an assembly reference?)". So when I tried to add it as a reference, I was not able to find "System.Data.SQLite" in my library. How do I overcome this problem?
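
    Note that System.Data.SQLite is not part of the .NET Framework, so it never shows up in the default assembly list: the ADO.NET provider has to be downloaded separately (today typically as the System.Data.SQLite NuGet package) and its DLL added as a project reference. Once it is referenced, usage is ordinary ADO.NET; a hedged sketch:

        using System;
        using System.Data.SQLite; // available only after the System.Data.SQLite assembly is referenced

        public static class SqliteDemo
        {
            public static void Main()
            {
                // Creates the database file if it does not exist yet.
                using (var connection = new SQLiteConnection("Data Source=app.db"))
                {
                    connection.Open();

                    using (var command = new SQLiteCommand(
                        "CREATE TABLE IF NOT EXISTS notes (id INTEGER PRIMARY KEY, body TEXT)", connection))
                    {
                        command.ExecuteNonQuery();
                    }

                    using (var command = new SQLiteCommand(
                        "INSERT INTO notes (body) VALUES (@body)", connection))
                    {
                        command.Parameters.AddWithValue("@body", "hello");
                        command.ExecuteNonQuery();
                    }

                    using (var command = new SQLiteCommand("SELECT COUNT(*) FROM notes", connection))
                    {
                        Console.WriteLine("Rows: " + command.ExecuteScalar());
                    }
                }
            }
        }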


  • Selenium Data Driven Testing with C#

    - by Dinesh Kanojia
    Hi all, I am Dinesh Kanojia and I am new to automated testing. I want to perform data-driven testing in Selenium using ASP.NET (C#), AJAX and almost all the features of jQuery. Can anyone give me the steps for how to perform data-driven testing using C#, or some demo through which I can perform my testing? Thanks in advance, warm regards, Dinesh.K
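
    A hedged sketch of the usual data-driven pattern: keep the test inputs in an external file and feed each row into one Selenium test via NUnit's TestCaseSource. The URL, element IDs and CSV layout below are placeholders, not taken from any real application:

        using System.Collections.Generic;
        using System.IO;
        using System.Linq;
        using NUnit.Framework;
        using OpenQA.Selenium;
        using OpenQA.Selenium.Chrome;

        [TestFixture]
        public class LoginDataDrivenTests
        {
            // Each line of logins.csv: username,password,expectedTitle (path and format are assumptions).
            private static IEnumerable<object[]> LoginRows() =>
                File.ReadAllLines("logins.csv")
                    .Select(line => line.Split(','))
                    .Select(parts => new object[] { parts[0], parts[1], parts[2] });

            [TestCaseSource(nameof(LoginRows))]
            public void Login_ShowsExpectedTitle(string username, string password, string expectedTitle)
            {
                using (IWebDriver driver = new ChromeDriver())
                {
                    driver.Navigate().GoToUrl("http://localhost/MyApp/Login.aspx"); // placeholder URL

                    driver.FindElement(By.Id("username")).SendKeys(username); // placeholder element IDs
                    driver.FindElement(By.Id("password")).SendKeys(password);
                    driver.FindElement(By.Id("loginButton")).Click();

                    Assert.AreEqual(expectedTitle, driver.Title);
                }
            }
        }

    The same test method then runs once per CSV row, which is the essence of data-driven testing; the data source could equally be an Excel sheet or a database table read in LoginRows().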


  • Freely available dictionary data for Chinese, Japanese, CJK characters

    - by devio
    I am developing an online CJK character dictionary application, and have already found the following databases: Unicode Unihan Database, Jim Breen's JMDict and KanjiDic, CEDict, HanDeDict. As I am looking for more data, web searches often lead me to online dictionaries, but not the data itself, using the same sources over again. If you know of any CJK-relevant downloadable dictionaries, please add them.


  • SAS: rearrange field order in data step

    - by Dan
    In SAS 9, how can I rearrange the order of the fields in a simple data step? Data set2; /*Something probably goes here*/ set set1; run; So if set1 has the following fields: Name Title Salary A Chief 40000 B Chief 45000 Then I can change the field order of set2 to: Title Salary Name Chief 40000 A Chief 45000 B Thanks, Dan

