Search Results

Search found 85480 results on 3420 pages for 'change data capture'.


  • Is the jQuery function "change()" working in IE?

    - by Patrick
    Is the jQuery function "change()" working in IE? I usually use it to detect changes in forms (selecting/unselecting checkboxes) and submit them automatically, without having to click the submit button (which is hidden), i.e.:

        $("#views-exposed-form-Portfolio-page-1").change(function(){
            $("#views-exposed-form-Portfolio-page-1").submit();
        });

    But in IE it doesn't work. It seems I have to use "click" instead. Thanks
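
    A likely explanation: older versions of IE fire the native change event on checkboxes and radio buttons only when the element loses focus, so a jQuery change() handler appears broken until the user clicks elsewhere. A minimal sketch of the usual workaround, reusing the form id from the question - bind click for checkbox/radio inputs and keep change for everything else:

        var $form = $("#views-exposed-form-Portfolio-page-1");

        // Older IE only fires "change" on checkboxes/radios at blur time,
        // so submit on "click" for those inputs instead.
        $form.find("input:checkbox, input:radio").bind("click", function () {
            $form.submit();
        });
        $form.find("select, input:not(:checkbox):not(:radio)").bind("change", function () {
            $form.submit();
        });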

    Read the article

  • Permission to view, but not to change! - Django

    - by RadiantHex
    Hi folks, is it possible to give users the permission to view, but not to change or delete? Currently the only permissions I see are "add", "change" and "delete"... there is no "read/view" in there. I really need this, as some users should only be able to consult the admin panel, in order to see what has been added. Help would be amazing!
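
    The Django admin of this era has no built-in view permission, so the usual workaround is a custom permission plus a ModelAdmin that renders everything read-only. A minimal sketch, assuming Django 1.2+ (for get_readonly_fields); the Report model and "reports" app label are hypothetical names used for illustration:

        # models.py
        from django.db import models

        class Report(models.Model):
            title = models.CharField(max_length=200)

            class Meta:
                # Declare a custom "view" permission alongside add/change/delete.
                permissions = (("view_report", "Can view report"),)

        # admin.py
        from django.contrib import admin

        class ReportAdmin(admin.ModelAdmin):
            def has_change_permission(self, request, obj=None):
                # Let view-only users open the change pages at all.
                return (request.user.has_perm("reports.change_report") or
                        request.user.has_perm("reports.view_report"))

            def get_readonly_fields(self, request, obj=None):
                # Render every field read-only unless the user may really edit.
                if not request.user.has_perm("reports.change_report"):
                    return [f.name for f in self.model._meta.fields]
                return super(ReportAdmin, self).get_readonly_fields(request, obj)

        admin.site.register(Report, ReportAdmin)

    The save buttons remain visible with this approach, so a hardened version would also override save_model (or the change form template) to block writes.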

    Read the article

  • C++ Edit Box Text Change

    - by user1218395
    Are there any cases in C++, like these - case WM_COMMAND: switch(LOWORD(wParam)) - that fire when you change the text of an edit box? I need to call a function whenever I change the edit box and store its value in an integer. I tried:

        case EditCD: // ID of your edit
        {
            if (HIWORD(wParam) == EN_CHANGE)
                MessageBox(hwnd, "Text!", "Test!", MB_OK);
            return TRUE;
        }

    It doesn't work - did I do it wrong?
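
    For context: an edit control reports text changes by sending its parent WM_COMMAND with the control ID in LOWORD(wParam) and the EN_CHANGE notification code in HIWORD(wParam) - so the case above only fires if it sits inside the WM_COMMAND switch. A minimal sketch of the window procedure, using GetDlgItemInt to read the contents into an integer (EditCD is assumed to be the edit control's ID, as in the question):

        case WM_COMMAND:
            switch (LOWORD(wParam))               // control / menu ID
            {
            case EditCD:                          // the edit control's ID
                if (HIWORD(wParam) == EN_CHANGE)  // text was modified
                {
                    BOOL ok = FALSE;
                    // Read the edit box contents as a signed integer.
                    int value = (int)GetDlgItemInt(hwnd, EditCD, &ok, TRUE);
                    if (ok)
                    {
                        // 'value' now holds the number typed in the box.
                    }
                }
                break;
            }
            break;

    If the handler still never fires, it is worth checking that the edit control was created as a child of the window whose procedure this is, since EN_CHANGE goes to the parent window.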

    Read the article

  • Uploading via HTTP POST (multipart/form-data) silently fails with big files

    - by matteo
    When uploading multipart/form-data forms via an HTTP POST request to my Apache web server, very big files (i.e. 30MB) are silently discarded. On the server side it looks as if the attached file was received with a size of 0 bytes; on the client side it looks like it was uploaded successfully (it takes the expected long time to upload and the browser gives no error message). On the server, nothing is logged to the error log, and an entry is logged to the access log as if everything were OK (a POST request and a 200 OK response).

    These uploads are posted to a PHP script. In the PHP script, if I print_r $_FILES, I see the following information for the relevant file:

        [file5] => Array
            (
                [name] => MOV023.3gp
                [type] => video/3gpp
                [tmp_name] => /tmp/phpgOdvYQ
                [error] => 0
                [size] => 0
            )

    Note both [error] = 0 (which should mean no error) and [size] = 0 (as if the file were empty). My PHP script runs fine and receives all the rest of the data except these files. move_uploaded_file succeeds on these files and actually copies them as 0-byte files.

    I've already changed the PHP directives upload_max_filesize to 50M and post_max_size to 200M, so neither the single file nor the request exceeds any size limit. max_execution_time is not relevant, because the time taken to transfer the data does not count; and I've increased max_input_time to 1000 seconds, though this shouldn't be necessary, since that is the time taken to parse the input data, not the time taken to upload it.

    Is there any Apache configuration, prior to PHP, that could be causing these files to be discarded even before PHP executes? Some limit on size or upload time? I've read about a default 300-second timeout limit, but that should apply to the time the connection is idle, not the time spent actually transferring data, right?

    Needless to say, uploads under exactly identical conditions (including file format, client and everything) except a smaller file size work seamlessly, so the issue is clearly related to the file or request size, or to the time it takes to send it.
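
    Given that PHP reports [error] = 0 with [size] = 0 and Apache logs a 200, the truncation most likely happens in a layer in front of PHP. Two Apache-side limits known to interfere with large request bodies are LimitRequestBody and, where mod_security is loaded, its request-body limits. A sketch of the directives to check - the values shown are illustrative, not recommendations:

        # httpd.conf (or a <Directory>/.htaccess context).
        # 0 means unlimited; a smaller stale value cuts off big POST bodies.
        LimitRequestBody 0

        # Only relevant if mod_security is loaded.
        <IfModule mod_security2.c>
            # Maximum accepted request body: 50 MB (illustrative).
            SecRequestBodyLimit 52428800
            # Separate limit for the non-file form fields: 1 MB.
            SecRequestBodyNoFilesLimit 1048576
        </IfModule>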

    Read the article

  • Can't find Windows 2000 domain after PDC change

    - by Mark A Kruger
    This is a Windows 2000 domain issue. I had an old Win2000 PDC that was beginning to fail. So, trying to be pre-emptive, I installed a new BDC, then "demoted" the old PDC and took it off the network. Now it appears that no member server can "find" the domain anymore. No logins work (for services, or RDP, or anything).

    What I've tried (based on googling):

    - Verified sysvol is shared on all servers.
    - Used nslookup to verify that DCs are being found.
    - netdiag /fix
    - Metadata cleanup routines.
    - Verified no firewall issues (port 389 etc).
    - Seized all roles to the new PDC (I did that as part of the original promotion).
    - LMHOSTS file and NetBIOS settings.

    At the moment it seems like I can get the DCs returned but cannot contact them. I'm at a loss. My latest attempt was to remove a member server from the domain and try to re-add it. When I do that I get this message:

        The query was for the SRV record for _ldap._tcp.dc._msdcs.cfwebtools.com
        The following domain controllers were identified by the query:
        db-dev1.cfwebtools.com
        file-prod1.cfwebtools.com
        cfwt-pdc2.cfwebtools.com
        However no domain controllers could be contacted.

    It then goes on to ask if I've checked my A record and made sure they are running. Is there a way to force this domain to be seen? I also shared sysvol (or double-checked it) and restarted the dfsr service.

    More information: I got looking at sysvol and found it was not shared on 2 of these servers. Only one of them (db-dev1) has a "good", or at least populated, sysvol store. So I tried doing a "D2" recovery of my PDC against that good sysvol, but it never syncs - or at least it does not seem to sync. I'm guessing that if I could get sysvol and netlogon to kick in and replicate, that would fix my issue. I think these DCs aren't responding because they are waiting for replication, which is broken somehow. Would taking down all the DCs except for db-dev1 fix the issue, at least temporarily? I know I can't just copy the sysvol contents over to the other 2, can I?

    Read the article

  • FMw Diagnostic Framework: Automatic Capture of Diagnostic Data Upon First Failure!

    - by Daniel Mortimer
    Introduction

    There is nothing more frustrating than a problem that "cannot be reproduced". Logs and configuration files have been analysed, but there just isn't enough information to establish the root cause. The issue may be closed, but you are left with the feeling that the problem will raise its ugly head again in the future. Trouble is, to resolve such issues you need to capture diagnostic data at the exact time the incident occurs. Step forward Fusion Middleware Diagnostic Framework! Diagnostic Framework monitors WebLogic Managed Servers and delivers "automatic capture of diagnostic data upon first failure". To quote from Oracle Fusion Middleware Administrator's Guide 11g Release 1 (11.1.1), Chapter 13 "Diagnosing Problems":

    "When a critical error occurs ... the Diagnostic Framework automatically collects diagnostics, such as thread dumps, DMS metric dumps, and WebLogic Diagnostics Framework (WLDF) server image dumps ... The data is stored in a file-based repository and is accessible with command-line utilities."

    In other words, the data collected upon first failure - especially the thread and image dumps - provides a snapshot of the system as, or immediately after, the problem occurs. [Table: types of WebLogic Server issues within the scope of Diagnostic Framework]

    How to Configure Diagnostic Framework?

    Depending on your Fusion Middleware product choice you may not need to do anything! Diagnostic Framework is automatically installed, configured and initiated for any WebLogic Domain which has the Oracle Java Required Files (JRF) template applied. This template is applied by default whenever you configure WebLogic Managed Servers for products such as:

    - Portal / Forms / Reports / Discoverer
    - Identity Management (OID, OAM, OIM etc)
    - WebCenter
    - SOA

    Check your WebLogic Domain directory structure. If you have an "adr" subdirectory under DOMAIN_HOME/servers/<servername>/ then the JRF template has been applied and Diagnostic Framework will be in play. Should the "adr" subdirectory not exist, review the advice given in My Oracle Support article "How to Apply FMW (EM) Control and JRF to a WebLogic Domain and Managed Servers" [ID 947043.1].

    If you are working with a standalone WebLogic Server solution and applying Oracle JRF is not acceptable, consider using WLDF - the WebLogic Diagnostic Framework. (Fusion Middleware Diagnostic Framework makes use of WLDF under the covers.) A couple of useful links about WLDF:

    - Configuring and Using the Diagnostics Framework for Oracle WebLogic Server 11g
    - WebLogic Diagnostics Framework - A Very Useful Tool [a nice blog which describes a WLDF use case]

    How to Get Started With Diagnostic Framework

    To be frank, the Fusion Middleware Administrator's Guide is the best place to start your learning: Oracle Fusion Middleware Administrator's Guide 11g Release 1 (11.1.1), Chapter 13 "Diagnosing Problems". A lot of reading here, but if you are in a hurry and just want to get the right information to Oracle Support to help resolve your issue, check out the next section below.

    How to Upload Diagnostic Framework Incident Data to Oracle Support

    Some background information: there are three interfaces to the repository:

    - Enterprise Manager Cloud Control (Support Workbench)
    - WLST (command line)
    - ADRCI (command line)

    Enterprise Manager Cloud Control does provide a nice GUI to search, view and package Diagnostic Framework incidents. However, this software is not to be confused with Fusion Middleware (EM) Control. Cloud Control (formerly known as Grid Control) is part of the Enterprise Manager media package, and EM Cloud Control has its own install and configuration story. Therefore, for the benefit of those yet to install and play with Cloud Control, I am going to describe how to use the command line tools. Ideally you would need only one command line interface, but currently I suggest using both - mainly because ADRCI SHOW INCIDENTS does not reveal the description behind the Diagnostic Framework error code.

    Instructions follow. Note: WLST and ADRCI are case sensitive when it comes to handling parameter values. If you make a mistake, expect an unfriendly syntax error message.

    1) Find the incident

    Note: The managed server which you are troubleshooting must be up and running. If the managed server is down, ensure the domain's Admin Server is accessible. If you cannot connect to the Admin Server or the Managed Server, the example WLST commands will not work.

    a) Launch WLST. Use the WLST which resides in the "oracle_common" directory (not WL_HOME/common/bin):

        MW_HOME/oracle_common/common/bin/wlst.sh

    Otherwise you will get a syntax error like the one below:

        Traceback (innermost last):
          File "<console>", line 1, in ?
        NameError: listIncidents

    b) Connect to the managed server or the admin server, e.g.

        wls:/offline> connect('weblogic','welcome1','t3://localhost:7020')

    c) Run the command:

        wls:/MyDomain/serverConfig> listIncidents()

    This will list the incidents for the server to which you have connected. If you have connected to the Admin Server and want to list the incidents for a managed server within the domain, use:

        wls:/MyDomain/serverConfig> listIncidents(adrHome='diag\ofm\MyDomain\MyManagedServer', server='MyManagedServer')

    Example output:

        Incident Id    Problem Key                                  Incident Time
        1              DFW-99998 [java.lang.NullPointerException]
                       [oracle.error.simulator.ErrorSimulator.createNullPointerException]
                       [errorWebApp_1-0-0-0]                        Fri Nov 02 10:38:46 GMT 2012

    The problem key (the bracketed text) is the description you do not see when using the ADRCI SHOW INCIDENTS command. Make a note of the incident id; you are ready to move to step 2.

    2) Package the incident

    a) Set up the environment - example commands below are for Unix:

        cd <DOMAIN_HOME>/bin
        . ./setDomainEnv.sh

    If you want ADRCI to run a Remote Diagnostic Agent (RDA) collection at generate package time (recommended), point ORACLE_HOME at oracle_common:

        ORACLE_HOME=$MW_HOME/oracle_common; export ORACLE_HOME

    To prevent ADRCI from running RDA at generate package time, point ORACLE_HOME at the WL_HOME/server/adr directory:

        ORACLE_HOME=$WL_HOME/server/adr; export ORACLE_HOME

    b) Launch adrci:

        $WL_HOME/server/adr/adrci

    c) Set BASE and HOMEPATH:

        adrci> SET BASE /oracle/middleware/user_projects/domains/mydomain/servers/mymanagedserver/adr
        adrci> SET HOMEPATH diag/ofm/mydomain/mymanagedserver

    d) Optionally run SHOW INCIDENTS, e.g.

        adrci> SHOW INCIDENTS -MODE DETAIL

        ADR Home = /oracle/middleware/user_projects/domains/mydomain/servers/mymanagedserver/adr/diag/ofm/mydomain/mymanagedserver:
        *************************************************************
        INCIDENT INFO RECORD 1
        *************************************************************
        INCIDENT_ID                   1
        STATUS                        ready
        CREATE_TIME                   2012-11-02 10:38:46.468000 +00:00
        PROBLEM_ID                    1
        CLOSE_TIME                    <NULL>
        FLOOD_CONTROLLED              none
        ERROR_FACILITY                DFW
        ERROR_NUMBER                  99998
        ERROR_ARG1 ... ERROR_ARG12    <NULL>
        SIGNALLING_COMPONENT          <NULL>
        SIGNALLING_SUBCOMPONENT       <NULL>
        SUSPECT_COMPONENT             <NULL>
        SUSPECT_SUBCOMPONENT          <NULL>
        ECID                          5162744c6a2eea5e:155ff445:13ac0aae7cb:-8000-0000000000000325
        IMPACTS                       0
        1 rows fetched

    e) Create a logical package: IPS CREATE PACKAGE INCIDENT incident_number, e.g.

        adrci> IPS CREATE PACKAGE INCIDENT 1
        Created package 1 based on incident id 1, correlation level typical

    f) Generate the package: IPS GENERATE PACKAGE package_number IN path, e.g.

        adrci> IPS GENERATE PACKAGE 1 IN /tmp
        Generated package 1 in file /tmp/DFW99998j_20121102113633_COM_1.zip, mode complete

    Note: If the generate package command hangs, ADRCI may be experiencing an issue when running RDA. To avoid such trouble, exit ADRCI and point the ORACLE_HOME environment variable at WL_HOME/server/adr.

    3) Upload the package zip to Oracle Support via your Service Request

    a) Log into My Oracle Support and locate your Service Request.
    b) Click on "Add Attachments".
    c) Upload the zip file.

    Read the article

  • What is the right way to process inconsistent data files?

    - by Tahabi
    I'm working at a company that uses Excel files to store product data - specifically, test results from products before they are shipped out. There are a few thousand spreadsheets with anywhere from 50-100 relevant data points per file. Over the years, the schema for the spreadsheets has changed significantly, but not unidirectionally - in the sense that changes often get reverted and then re-added in the space of a few dozen to a few hundred files.

    My project is to convert about 8000 of these spreadsheets into a database that can be queried. I'm using MongoDB to deal with the inconsistency in the data, and Python.

    My question is: what is the "right" or canonical way to deal with the huge variance in my source files? I've written a data structure which stores the data I want for the latest template, which will be the final template used going forward, but that only helps for a few hundred files historically. Brute-forcing a solution would mean writing a similar data structure for each version/template - which means potentially writing hundreds of schemas with dozens of fields each. This seems very inefficient, especially when sometimes a change in the template is as little as moving a single line of data one row down, or splitting what used to be one data field into two.

    A slightly more elegant solution I have in mind would be writing schemas for all the variants I can find for pre-defined groups of sections in the source files, and then writing a function to match a particular series of files with the series of variants that fits that set of files. This is because, more often than not, most of the file remains consistent over a long period, marred only by one or two errant sections - but within that period, which section is the inconsistent one varies.

    For example, say a file has four sections with three data fields each, represented by four Python dictionaries with three keys each. For files 7000-7250, sections 1-3 are consistent, but section 4 is shifted one row down. For files 7251-7500, sections 1-3 are consistent, section 4 is one row down, but a section 5 appears. For files 7501-7635, sections 1 and 3 are consistent, but section 2 has five data fields instead of three, section 5 disappears, and section 4 is still shifted down one row. For files 7636-7800, section 1 is consistent, section 4 shifts back up, section 2 returns to three fields, but section 3 is removed entirely. Files 7800-8000 have everything in order.

    The proposed function would take the file number and match it to a dictionary representing the data mappings for the different variants of each section, as sketched below. For example, a section_four_variants dictionary might have two members, one for the shifted-down version and one for the normal version; a section_two_variants might have three- and five-field members; etc. The script would then read the matchings, load the correct mapping, extract the data, and insert it into the database.

    Is this an accepted/right way to go about solving this problem? Should I structure things differently? I don't know what to search Google for either, to see what other solutions might exist, though I believe the problem lies in the domain of ETL processing. I also have no formal CS training aside from what I've taught myself over the years. If this is not the right forum for this question, please tell me where to move it, if at all. Any help is most appreciated. Thank you.
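
    A minimal sketch of that dispatch idea, with hypothetical section names, field names and row/column positions - the real mappings would come from inspecting the spreadsheets:

        # Hypothetical variant schemas: field name -> (row, column) in a parsed sheet.
        section_four_variants = {
            "normal":  {"pressure": (10, 2), "voltage": (11, 2), "current": (12, 2)},
            "shifted": {"pressure": (11, 2), "voltage": (12, 2), "current": (13, 2)},
        }

        # File-number ranges mapped to the variant each section uses in that range;
        # sections not mentioned fall back to their "normal" variant.
        variant_ranges = [
            (range(7000, 7251), {"section_four": "shifted"}),
            (range(7251, 7501), {"section_four": "shifted", "section_five": "present"}),
            (range(7501, 7636), {"section_two": "five_fields", "section_four": "shifted"}),
            (range(7636, 8001), {}),
        ]

        def variants_for(file_number):
            """Return the section -> variant overrides that apply to one file."""
            for file_range, overrides in variant_ranges:
                if file_number in file_range:
                    return overrides
            return {}

        def extract_section_four(sheet, file_number):
            """Pull section-four fields from a sheet (a list of row lists)."""
            variant = variants_for(file_number).get("section_four", "normal")
            mapping = section_four_variants[variant]
            return {field: sheet[row][col] for field, (row, col) in mapping.items()}

    One nice property of this layout: adding a newly discovered variant is a data change (one more dictionary entry and one more range), not a code change.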

    Read the article

  • What language and tools can I use to create a simple game with a child-lock (capture all key presses) for Windows? [closed]

    - by scw
    I'm writing an open source program that changes colors and plays sounds when keys are pressed. I want it to run in full-screen mode and have a child-lock so kids can't exit accidentally. I want it to capture all keys, including Ctrl+Alt+Del. (So it's partially a game, but partially a Windows utility.) My target OS is Windows 7 (32 & 64 bit), keeping Windows 8 in mind.

    My options:

    - Visual Studio using .NET C# Windows Forms - the devil I know. But not a "game" platform, which is why I'm asking this question.
    - Visual Studio & XNA - I have never used XNA, and am not sure of its capabilities or future support.
    - Python - what flavor, what modules, what IDE? I've never done anything with Python, but I found a couple of similar open source projects in Python.
    - Something else that I don't know about?

    Any input is appreciated.
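
    Whatever the toolkit, one hard constraint is worth knowing up front: Ctrl+Alt+Del is the Windows Secure Attention Sequence and cannot be intercepted by a normal application, while a low-level keyboard hook can swallow most everything else (Alt+Tab, the Windows key, and so on). A minimal C# sketch of such a hook, offered as an illustration of the Windows Forms route; a real child-lock would whitelist a secret exit combination before swallowing keys:

        using System;
        using System.Diagnostics;
        using System.Runtime.InteropServices;

        static class ChildLock
        {
            const int WH_KEYBOARD_LL = 13;
            delegate IntPtr HookProc(int nCode, IntPtr wParam, IntPtr lParam);
            static readonly HookProc _proc = Callback; // keep a reference alive for the GC
            static IntPtr _hook = IntPtr.Zero;

            [DllImport("user32.dll")]
            static extern IntPtr SetWindowsHookEx(int id, HookProc proc, IntPtr hMod, uint threadId);
            [DllImport("user32.dll")]
            static extern bool UnhookWindowsHookEx(IntPtr hook);
            [DllImport("user32.dll")]
            static extern IntPtr CallNextHookEx(IntPtr hook, int nCode, IntPtr wParam, IntPtr lParam);
            [DllImport("kernel32.dll")]
            static extern IntPtr GetModuleHandle(string name);

            public static void Install()
            {
                string module = Process.GetCurrentProcess().MainModule.ModuleName;
                _hook = SetWindowsHookEx(WH_KEYBOARD_LL, _proc, GetModuleHandle(module), 0);
            }

            public static void Uninstall()
            {
                if (_hook != IntPtr.Zero) UnhookWindowsHookEx(_hook);
            }

            static IntPtr Callback(int nCode, IntPtr wParam, IntPtr lParam)
            {
                if (nCode >= 0)
                {
                    // First field of KBDLLHOOKSTRUCT is the virtual-key code.
                    int vk = Marshal.ReadInt32(lParam);
                    // TODO: flash a colour / play a sound for vk here.
                    return (IntPtr)1; // non-zero return swallows the keystroke
                }
                return CallNextHookEx(_hook, nCode, wParam, lParam);
            }
        }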

    Read the article

  • jsTree dynamic JSON data from Django

    - by danspants
    I'm trying to set up jsTree to dynamically accept JSON data from Django. This is the test data I have Django returning to jsTree:

        result = [{
            "data" : "A node",
            "children" : [ { "data" : "Only child", "state" : "closed" } ],
            "state" : "open"
        }, "Ajax node"]
        response = HttpResponse(content=result, mimetype="application/json")

    This is the jsTree code I'm using:

        jQuery("#demo1").jstree({
            "json_data" : {
                "ajax" : {
                    "url" : "/dirlist",
                    "data" : function (n) {
                        return { id : n.attr ? n.attr("id") : 0 };
                    },
                    error: function(e){ alert(e); }
                }
            },
            "plugins" : [ "themes", "json_data" ]
        });

    All I get is the ajax loading symbol; the ajax error handler is also triggered, and it alerts "undefined". I've also tried simplejson encoding in Django, but with the same result. If I change the url so that it receives a JSON file with identical data, it works as expected. Any ideas on what the issue might be?
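
    One detail worth ruling out: in the view above, HttpResponse(content=result, ...) is handed a Python list, and Django will iterate it and write the str() of each element rather than one JSON document - which would explain why the same data served from a static file works. A sketch of the explicit serialization (the dirlist view name is taken from the question's /dirlist url):

        import json  # or simplejson on older Python/Django versions

        from django.http import HttpResponse

        def dirlist(request):
            result = [
                {
                    "data": "A node",
                    "state": "open",
                    "children": [{"data": "Only child", "state": "closed"}],
                },
                "Ajax node",
            ]
            # json.dumps produces a single valid JSON document; the raw list
            # would be concatenated element-by-element, which is not JSON.
            return HttpResponse(json.dumps(result), mimetype="application/json")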

    Read the article

  • How to filter the jqGrid data NOT using the built-in search/filter box

    - by Jimbo
    I want users to be able to filter grid data without using the intrinsic search box. I have created two input fields for dates (from and to) and now need to tell the grid to adopt these as its filter and then request new data. Forging a server request for grid data (bypassing the grid) and setting the grid's data to the response data won't work, because as soon as the user tries to re-order the results or change the page etc., the grid will request new data from the server using a blank filter. I can't seem to find a grid API to achieve this - does anyone have any ideas? Thanks.
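
    jqGrid can do this without its built-in search box: merge the custom inputs into the grid's postData via setGridParam, then trigger reloadGrid, and every later paging or sorting request re-sends the same filter. A minimal sketch, assuming inputs #dateFrom/#dateTo, a grid #list and a button #applyFilter (hypothetical ids):

        $("#applyFilter").click(function () {
            $("#list").jqGrid("setGridParam", {
                postData: {
                    // Sent with every subsequent request,
                    // including page changes and re-sorts.
                    dateFrom: $("#dateFrom").val(),
                    dateTo: $("#dateTo").val()
                },
                page: 1 // jump back to the first page of the filtered set
            }).trigger("reloadGrid");
        });

    The server-side handler then reads dateFrom/dateTo alongside jqGrid's usual paging and sorting parameters.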

    Read the article

  • Machine learning challenge: diagnosing program in Java/Groovy (data mining, machine learning)

    - by Registered User
    Hi All! I'm planning to develop a program in Java which will provide a diagnosis. The data set is divided into two parts: one for training and the other for testing. My program should learn to classify from the training data (which, by the way, contains answers to 30 questions, each in its own column, with each record on a new line; the last column is the diagnosis, 0 or 1 - in the testing part of the data the diagnosis column is empty; the data set contains about 1000 records) and then make predictions for the testing part of the data. I've never done anything similar, so I'll appreciate any advice or information about solutions to similar problems. I was thinking about the Java Machine Learning Library or the Java Data Mining Package, but I'm not sure if that's the right direction... and I'm still not sure how to tackle this challenge... Please advise. All the best!
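
    Besides the two libraries mentioned, Weka is a widely used Java option that fits a flat 30-attribute, 0/1-label table well. A hedged sketch, assuming the data has been exported to ARFF (or CSV) and weka.jar is on the classpath - the file names are illustrative:

        import weka.classifiers.trees.J48;
        import weka.core.Instances;
        import weka.core.converters.ConverterUtils.DataSource;

        public class Diagnoser {
            public static void main(String[] args) throws Exception {
                // Load training data; the last column (0/1 diagnosis) is the class.
                Instances train = DataSource.read("train.arff");
                train.setClassIndex(train.numAttributes() - 1);

                // A decision-tree learner; any Weka classifier plugs in here.
                J48 tree = new J48();
                tree.buildClassifier(train);

                // Fill in the empty diagnosis column of the test set.
                Instances test = DataSource.read("test.arff");
                test.setClassIndex(test.numAttributes() - 1);
                for (int i = 0; i < test.numInstances(); i++) {
                    double label = tree.classifyInstance(test.instance(i));
                    System.out.println("record " + i + " -> "
                            + test.classAttribute().value((int) label));
                }
            }
        }

    Cross-validating on the training half before trusting the predictions would be the natural next step.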

    Read the article

  • Qt qUncompress gzip data

    - by talei
    Hello, I stumbled upon a problem and can't find a solution. What I want to do is uncompress data in Qt, using qUncompress(QByteArray), sent from the web in gzip format. I used Wireshark to determine that this is a valid gzip stream, and also tested it with zip/rar, both of which can uncompress it. The code so far is like this:

        static const char dat[40] = {
            0x1f, 0x8b, 0x08, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x03,
            0xaa, 0x2e, 0x2e, 0x49, 0x2c, 0x29, 0x2d, 0xb6, 0x4a, 0x4b,
            0xcc, 0x29, 0x4e, 0xad, 0x05, 0x00, 0x00, 0x00, 0xff, 0xff,
            0x03, 0x00, 0x2a, 0x63, 0x18, 0xc5, 0x0e, 0x00, 0x00, 0x00
        }; // this data contains the string {status:false}, in gzip format

        QByteArray data;
        data.append(dat, sizeof(dat));

        // prepend the expected uncompressed size, big-endian
        // (the last 4 bytes of dat hold 0x0e = 14)
        unsigned int size = 14;
        QByteArray dataPlusSize;
        dataPlusSize.append((char)((size >> 24) & 0xFF));
        dataPlusSize.append((char)((size >> 16) & 0xFF));
        dataPlusSize.append((char)((size >> 8) & 0xFF));
        dataPlusSize.append((char)((size >> 0) & 0xFF));
        dataPlusSize.append(data); // the compressed payload after the size prefix

        QByteArray uncomp = qUncompress(dataPlusSize);
        qDebug() << uncomp;

    And the uncompression fails with: "qUncompress: Z_DATA_ERROR: Input data is corrupted."

    AFAIK gzip consists of a 10-byte header, the DEFLATE payload, and an 8-byte trailer (4-byte CRC32 + 4-byte ISIZE, the uncompressed data size). Stripping the header and trailer should leave me with the DEFLATE data stream, but qUncompress yields the same error.

    I checked with a data string compressed in PHP, like this:

        $stringData = gzcompress("{status:false}", 1);

    and qUncompress does uncompress that data. (I didn't see a gzip header in it, though, i.e. ID1 = 0x1f, ID2 = 0x8b.)

    I stepped through with a debugger, and the error occurs at inflate.c line 610, with hold == 35615:

        if (((BITS(8) << 8) + (hold >> 8)) % 31) { // here is the error, WHY?
            strm->msg = (char *)"incorrect header check";
            state->mode = BAD;
            break;
        }

    I know that qUncompress is simply a wrapper around zlib, so I suppose it should handle gzip without any problem. Any comments are more than welcome. Best regards
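
    The clue is in the failing check itself: hold == 35615 is 0x8B1F, the gzip magic bytes read as a little-endian word, and zlib's zlib-header check (the % 31 test) rightly rejects it - qUncompress understands only the zlib format (which is what PHP's gzcompress emits), not gzip. A sketch of decoding gzip in Qt by calling zlib directly; inflateInit2 with 16 + MAX_WBITS enables gzip header processing:

        #include <QByteArray>
        #include <cstring>
        #include <zlib.h>

        // Decompress a gzip-format buffer with zlib; returns an empty
        // QByteArray on error. A sketch, not production error handling.
        QByteArray gzipUncompress(const QByteArray &input)
        {
            z_stream strm;
            std::memset(&strm, 0, sizeof(strm));
            strm.next_in = (Bytef *)input.data();
            strm.avail_in = input.size();

            // 16 + MAX_WBITS tells zlib to expect a gzip header and trailer.
            if (inflateInit2(&strm, 16 + MAX_WBITS) != Z_OK)
                return QByteArray();

            QByteArray output;
            char buffer[4096];
            int ret;
            do {
                strm.next_out = (Bytef *)buffer;
                strm.avail_out = sizeof(buffer);
                ret = inflate(&strm, Z_NO_FLUSH);
                if (ret != Z_OK && ret != Z_STREAM_END) {
                    inflateEnd(&strm);
                    return QByteArray();
                }
                output.append(buffer, sizeof(buffer) - strm.avail_out);
            } while (ret != Z_STREAM_END);

            inflateEnd(&strm);
            return output; // "{status:false}" for the data in the question
        }

    With this route the four-byte size prefix is unnecessary; gzip's own trailer carries the uncompressed size.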

    Read the article

  • RDLC - Adding a Data Source in VS2010

    - by Kezzer
    Greetings. I have an RDLC file and want to add a data source to it, so far without any luck. The data source is a custom class written by myself (just to add: we do this all the time). We recently converted over to the VS2010 RDLC format, which caused some problems, but we've made some changes to our implementation that work around the more major issues. So, back to the issue at hand: when I attempt to add my data source to the DummyDataSource list in the RDLC view in VS2010, it just does nothing. It does add the data source to the list of data sources, but you can't select it from the drop-down list in the RDLC view, which means I can't add the data source at all. Has anyone come across this problem? Is there anything I need to check? I've searched with fervour and had no luck.

    Read the article

  • UIView layout and orientation changes

    - by Raja Marimuthu
    Hi, I am supporting all orientations in my iPad app. I adjust my view with autoresizingMask on orientation changes (main view and tab bar). But the subviews in the main view flow out of the main view in landscape mode, so I forced a setNeedsLayout on the main view, making the subviews fit inside the main view's boundary. The issue is that subviews added in the lower part of the main view do not respond to touches in landscape mode, but work fine in portrait mode. Any F1?

    Read the article

  • Database design for very large amounts of data

    - by Hossein
    Hi, I am working on a project involving a large amount of data from the delicious website. The available data files are "Date, UserId, Url, Tags" (one line per bookmark). I normalized my database to 3NF, and because of the nature of the queries that we wanted to use in combination, I came down to 6 tables... The design looks fine; however, now that a large amount of data is in the database, most of the queries need to join at least 2 tables together to get the answer, sometimes 3 or 4. At first, we didn't have any performance issues, because for testing purposes we hadn't added much data to the database. Now that we have a lot of data, simply joining extremely large tables takes a lot of time, and for our project, which has to be real-time, this is a disaster. I was wondering how big companies solve these issues. It looks like normalizing tables just adds complexity - how do big companies handle large amounts of data in their databases? Don't they do normalization? Thanks

    Read the article

  • Best Practices - Data Annotations vs OnChanging in Entity Framework 4

    - by jptacek
    I was wondering what the general recommendation is for Entity Framework in terms of data validation. I am relatively new to EF, but it appears there are two main approaches to data validation. The first is to create a partial class for the model and then perform data validations, updating a rule-violation collection of some sort. This is outlined at http://msdn.microsoft.com/en-us/library/cc716747.aspx. The other is to use data annotations and have the annotations perform the data validation. Scott Guthrie explains this on his blog at http://weblogs.asp.net/scottgu/archive/2010/01/15/asp-net-mvc-2-model-validation.aspx. I was wondering what the benefits are of one over the other. It seems data annotations would be the preferred mechanism, especially as you move to RIA Services, but I want to ensure I am not missing something. Of course, nothing precludes using both of them together. Thanks, John
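
    For context on the EF4 side: because the designer regenerates the entity classes, annotations usually go on a "buddy" metadata class attached with MetadataTypeAttribute rather than on the generated code. A minimal sketch with a hypothetical Customer entity:

        using System.ComponentModel.DataAnnotations;

        // The generated entity is extended through a partial class
        // (it must sit in the same namespace as the generated one)...
        [MetadataType(typeof(CustomerMetadata))]
        public partial class Customer { }

        // ...and the annotations live on the buddy class, so they
        // survive regeneration of the EF model.
        internal sealed class CustomerMetadata
        {
            [Required(ErrorMessage = "Name is required.")]
            [StringLength(50)]
            public string Name { get; set; }

            [Range(0, 120)]
            public int Age { get; set; }
        }

    The partial-class approach from the first link can still coexist with this: OnChanging hooks for cross-field rules, annotations for per-property constraints.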

    Read the article

  • What's an efficient way of calculating the nearest point?

    - by Griffo
    I have objects with location data stored in Core Data, and I would like to fetch and display just the nearest point to the current location. I'm aware there are formulas which calculate the distance from the current lat/long to a stored lat/long, but I'm curious about the best way to perform this for a set of 1000+ points stored in Core Data. I know I could just return the points from Core Data into an array and then loop through it looking for the minimum distance, but I'd imagine there's a more efficient method, possibly leveraging Core Data in some way. Any insight would be appreciated. EDIT: I don't know how I missed this in my initial search, but this SO question suggests just iterating through an array of Core Data objects, while limiting the array size with a bounding box based on the current location. Is this the best I can do?
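
    The bounding-box approach pushes most of the filtering into the Core Data fetch itself, so only a handful of candidates reach the exact distance computation. A minimal Swift sketch, assuming an entity named "Point" with latitude/longitude Double attributes (hypothetical names) and an illustrative 0.1-degree box:

        import CoreData
        import CoreLocation

        func nearestPoint(to location: CLLocation,
                          in context: NSManagedObjectContext) throws -> NSManagedObject? {
            // ~1 degree of latitude is ~111 km; widen the longitude span
            // by the latitude's cosine so the box stays roughly square.
            let lat = location.coordinate.latitude
            let lon = location.coordinate.longitude
            let dLat = 0.1
            let dLon = 0.1 / cos(lat * .pi / 180)

            let request = NSFetchRequest<NSManagedObject>(entityName: "Point")
            request.predicate = NSPredicate(
                format: "latitude >= %f AND latitude <= %f AND longitude >= %f AND longitude <= %f",
                lat - dLat, lat + dLat, lon - dLon, lon + dLon)

            // Exact distance only for the few candidates inside the box.
            let candidates = try context.fetch(request)
            return candidates.min { distance($0, location) < distance($1, location) }
        }

        private func distance(_ object: NSManagedObject, _ location: CLLocation) -> CLLocationDistance {
            let point = CLLocation(latitude: object.value(forKey: "latitude") as! Double,
                                   longitude: object.value(forKey: "longitude") as! Double)
            return point.distance(from: location)
        }

    If no candidate falls inside the box, the function returns nil; a caller would typically retry with a larger box.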

    Read the article

  • Need help receiving ByteArray data

    - by k80sg
    Hi folks, I am trying to receive byte-array data from a machine. It sends out 3 different types of data structure, each with a different number of fields, consisting mostly of ints and a few floats: the first is 320 bytes, the second 420 bytes, and the third 560 bytes. When the sending program is launched, it fires all 3 types of data continuously at 1-second intervals.

    Example sending order:

        Pack1 - 320 bytes
        (1 sec later) Pack2 - 420 bytes
        (1 sec later) Pack3 - 560 bytes
        (1 sec later) Pack1 - 320 bytes
        ...

    How do I check the incoming byte size before passing it to:

        byte[] handsize = new byte[bytesize];

    The data I receive is all out of order. For instance, using the following to read all ints:

        System.out.println("Reading data in int format: " + datainput.readInt());

    I get many different sets of values whenever I run my program - some valid field data, but all over the place. I am not too sure how exactly I should do it, but I tried the following, and apparently my data fields are not received in the correct sequence:

        BufferedInputStream bais = new BufferedInputStream(requestSocket.getInputStream());
        DataInputStream datainput = new DataInputStream(bais);
        byte[] handsize = new byte[560];
        datainput.readFully(handsize);
        int n = 0;
        int intByte[] = new int[140];
        for (int i = 0; i < 140; i++) {
            System.out.println("Reading data in int format: " + datainput.readInt());
            intByte[n] = datainput.readInt();
            n = n + 1;
        }
        System.out.println("The value in array is: " + intByte[0]);
        System.out.println("The value in array is: " + intByte[1]);
        System.out.println("The value in array is: " + intByte[2]);
        System.out.println("The value in array is: " + intByte[3]);

    Also, from the above code, the order of the values printed inside the loop and the values printed from the array afterwards is different. Any help will be appreciated. Thanks
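
    Two things in the snippet would scramble the sequence on their own: readFully() already consumes 560 bytes that the loop never parses, and each loop iteration calls readInt() twice (once inside the println, once for the array), so every second int is skipped. A hedged sketch of one approach - read each packet fully into a buffer, then parse the buffer - assuming the packets always arrive in the fixed 320/420/560 order the question describes:

        import java.io.DataInputStream;
        import java.io.IOException;
        import java.io.InputStream;
        import java.nio.ByteBuffer;

        public class PacketReader {
            private final DataInputStream in;

            public PacketReader(InputStream stream) {
                this.in = new DataInputStream(stream);
            }

            /** Read one whole packet of a known size into a buffer. */
            public ByteBuffer readPacket(int size) throws IOException {
                byte[] raw = new byte[size];
                in.readFully(raw);           // blocks until all 'size' bytes arrive
                return ByteBuffer.wrap(raw); // big-endian, same as DataInputStream
            }

            public static void main(String[] args) throws IOException {
                // Stand-in stream; in the real program this would be
                // requestSocket.getInputStream().
                PacketReader reader = new PacketReader(System.in);
                ByteBuffer pack1 = reader.readPacket(320);
                int[] fields = new int[320 / 4];
                for (int i = 0; i < fields.length; i++) {
                    fields[i] = pack1.getInt(); // read each value exactly once
                }
                System.out.println("first field: " + fields[0]);
                // readPacket(420) and readPacket(560) would follow in order.
            }
        }

    Parsing from the buffer also makes mixed int/float layouts easy: ByteBuffer offers getFloat() at any point in the sequence.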

    Read the article

  • C# Winforms ADO.NET - DataGridView INSERT starting with null data

    - by Geo Ego
    I have a C# Winforms app that connects to a SQL Server 2005 database. The form I am currently working on allows a user to enter a large amount of different types of data into various textboxes, comboboxes, and a DataGridView, to be inserted into the database. It all represents one particular type of machine, and the data is spread out over about nine tables. The problem is that my DataGridView represents a type of data that may or may not be added to the database. Thus, when the DataGridView is created, it is empty and not data-bound, so data cannot be entered. My question is: should I create the table with hard-coded field names representing the way the data looks in the database, or is there a way to simply have the column names populate with no data, so that users can enter it if they like? I don't like the idea of hard-coding them in case there is a change in the database schema, but I'm not sure how else to deal with this problem.
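
    One way to avoid hard-coding the columns: pull just the schema from SQL Server at runtime and bind the resulting empty DataTable, so the grid shows the right columns with zero rows and follows schema changes automatically. A minimal sketch, assuming a connection string and a hypothetical MachineOptions table:

        using System.Data;
        using System.Data.SqlClient;

        // Build an empty, correctly typed DataTable straight from the schema.
        DataTable optionsTable = new DataTable();
        using (SqlConnection conn = new SqlConnection(connectionString))
        using (SqlDataAdapter adapter = new SqlDataAdapter(
                   "SELECT * FROM MachineOptions", conn)) // hypothetical table
        {
            // FillSchema copies column names, types and keys, but no rows.
            adapter.FillSchema(optionsTable, SchemaType.Source);
        }

        // The grid now shows real column names; users may add rows or not.
        dataGridView1.DataSource = optionsTable;

    If the user leaves the grid empty, there is simply nothing to insert for that optional table.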

    Read the article
