Search Results

Search found 3203 results on 129 pages for 'transfer'.

Page 29/129 | < Previous Page | 25 26 27 28 29 30 31 32 33 34 35 36  | Next Page >

  • Is there a way to transfer windows license to a different machine?

    - by Alex Khvatov
    I purchased a license and used the Windows 7 Home Premium 32-bit OS for a while. Recently I bought a 64-bit version to take advantage of the machine's larger RAM, so I reinstalled the OS and activated a new license for the 64-bit version. Now I need to install the 32-bit version on another machine. How do I go about reactivating that license on another machine? (Again, the license is not currently in use.) Am I going to have issues with Microsoft not letting me reactivate it on a different machine? Thank you.

    Read the article

  • Win32: What is the status of chunked encoding support in WinHttpReadData?

    - by Cheeso
    The documentation for WinHttpReadData says, regarding HTTP's chunked transfer coding: "Starting in Windows Vista and Windows Server 2008, WinHttp enables applications to perform chunked transfer encoding on data sent to the server. When the Transfer-Encoding header is present on the WinHttp response, WinHttpReadData strips the chunking information before giving the data to the application." Can anyone decipher this?

    Q1: First, this text is on the page for WinHttpReadData, which is used to ... read data within an HTTP client application, specifically the response data. So what does it mean when it says "Starting in Windows Vista and Windows Server 2008, WinHttp enables applications to perform chunked transfer encoding on data sent to the server"? The WinHttpReadData function isn't used with data being sent to the server; it is used when reading data from the server. Consulting the doc for the WinHttpWriteData function, which is used to send data to the server as part of an HTTP request, there is no mention of the chunked transfer capability.

    Q2: Supposing I figure out just what the newish chunked transfer support amounts to, how do I get that support? The doc says it is new in Vista and WS2008. What happens if I write an app that runs on WS2003 and uses WinHttpReadData and it encounters a chunked response, or uses WinHttpWriteData and wants to send a chunked request? Between the lines, is this documentation saying that I need to link against the Vista-era Windows SDK, or later, in order to get the capability to do chunked encoding? Or is it really impossible on WS2003? In other words, must an app doing chunked transfer with this library run on the OS specified? This might read like a rant, but it's not. I truly want to know.
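    For reference, a minimal sketch of the client-side read loop the quoted paragraph is describing (it assumes the request has already been sent and completed with WinHttpSendRequest and WinHttpReceiveResponse, and that winhttp.lib is linked): on Vista/WS2008 the chunk framing is stripped before the bytes are handed back, so the loop below only ever sees entity-body data.

        #include <windows.h>
        #include <winhttp.h>
        #include <stdio.h>

        /* Drain the response body. WinHttp removes any Transfer-Encoding: chunked
           framing before returning data, so 'buf' receives only payload bytes. */
        static void ReadWholeResponse(HINTERNET hRequest)
        {
            char  buf[8192];
            DWORD avail = 0, toRead = 0, got = 0;

            for (;;) {
                if (!WinHttpQueryDataAvailable(hRequest, &avail) || avail == 0)
                    break;                              /* error or end of response */
                toRead = (avail < sizeof(buf)) ? avail : (DWORD)sizeof(buf);
                if (!WinHttpReadData(hRequest, buf, toRead, &got) || got == 0)
                    break;
                fwrite(buf, 1, got, stdout);            /* consume the already-de-chunked bytes */
            }
        }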

    Read the article

  • C Language: Why can't I transfer a file from the server to the client?

    - by user275753
    I want to ask: why can't I transfer a file from the server to the client? When I start sending the file from the server, the client-side program runs into a problem. I spent some time checking the code, but I still cannot find the problem. Can anyone point it out for me? Thanks a lot!

    [client side code]

        #include <winsock2.h>
        #include <stdio.h>
        #include <stdlib.h>
        #include <string.h>
        #include <stdarg.h>

        #define SA struct sockaddr
        #define S_PORT 5678
        #define MAXLEN 1000
        #define true 1

        void errexit(const char *format, ...)
        {
            va_list args;
            va_start(args, format);
            vfprintf(stderr, format, args);
            va_end(args);
            WSACleanup();
            exit(1);
        }

        int main(int argc, char *argv[])
        {
            WSADATA wsadata;
            SOCKET sockfd;
            int number, message;
            char outbuff[MAXLEN], inbuff[MAXLEN];
            char PWD_buffer[_MAX_PATH];
            struct sockaddr_in servaddr;
            FILE *fp;
            int numbytes;
            char buf[2048];

            if (WSAStartup(MAKEWORD(2,2), &wsadata) != 0)
                errexit("WSAStartup failed\n");
            if (argc != 2)
                errexit("client IPaddress");
            if ((sockfd = socket(AF_INET, SOCK_STREAM, 0)) == INVALID_SOCKET)
                errexit("socket error: error number %d\n", WSAGetLastError());

            memset(&servaddr, 0, sizeof(servaddr));
            servaddr.sin_family = AF_INET;
            servaddr.sin_port = htons(S_PORT);
            if ((servaddr.sin_addr.s_addr = inet_addr(argv[1])) == INADDR_NONE)
                errexit("inet_addr error: error number %d\n", WSAGetLastError());
            if (connect(sockfd, (SA *) &servaddr, sizeof(servaddr)) == SOCKET_ERROR)
                errexit("connect error: error number %d\n", WSAGetLastError());

            if ((fp = fopen("C:\\users\\pc\\desktop\\COPY.c", "wb")) == NULL) {
                perror("fopen");
                exit(1);
            }
            printf("Still NO PROBLEM!\n");

            // Receive file from server
            while (1) {
                numbytes = read(sockfd, buf, sizeof(buf));
                printf("read %d bytes, ", numbytes);
                if (numbytes == 0) {
                    printf("\n");
                    break;
                }
                numbytes = fwrite(buf, sizeof(char), numbytes, fp);
                printf("fwrite %d bytes\n", numbytes);
            }
            fclose(fp);
            close(sockfd);
            return 0;
        }

    [server side code]

        #include <winsock2.h>
        #include <stdio.h>
        #include <stdlib.h>
        #include <string.h>
        #include <stdarg.h>

        #define SA struct sockaddr
        #define S_PORT 5678
        #define MAXLEN 1000

        void errexit(const char *format, ...)
        {
            va_list args;
            va_start(args, format);
            vfprintf(stderr, format, args);
            va_end(args);
            WSACleanup();
            exit(1);
        }

        int main(int argc, char *argv[])
        {
            WSADATA wsadata;
            SOCKET listenfd, connfd;
            int number, message, numbytes;
            int h, i, j, alen;
            int nread;
            struct sockaddr_in servaddr, cliaddr;
            FILE *in_file, *out_file, *fp;
            char buf[4096];

            if (WSAStartup(MAKEWORD(2,2), &wsadata) != 0)
                errexit("WSAStartup failed\n");

            listenfd = socket(AF_INET, SOCK_STREAM, 0);
            if (listenfd == INVALID_SOCKET)
                errexit("cannot create socket: error number %d\n", WSAGetLastError());

            memset(&servaddr, 0, sizeof(servaddr));
            servaddr.sin_family = AF_INET;
            servaddr.sin_addr.s_addr = htonl(INADDR_ANY);
            servaddr.sin_port = htons(S_PORT);
            if (bind(listenfd, (SA *) &servaddr, sizeof(servaddr)) == SOCKET_ERROR)
                errexit("can't bind to port %d: error number %d\n", S_PORT, WSAGetLastError());
            if (listen(listenfd, 5) == SOCKET_ERROR)
                errexit("can't listen on port %d: error number %d\n", S_PORT, WSAGetLastError());

            alen = sizeof(SA);
            connfd = accept(listenfd, (SA *) &cliaddr, &alen);
            if (connfd == INVALID_SOCKET)
                errexit("accept failed: error number %d\n", WSAGetLastError());
            printf("accept one client from %s!\n", inet_ntoa(cliaddr.sin_addr));

            fp = fopen("client.c", "rb");   // open file stored in server
            if (fp == NULL) {
                printf("\nfile NOT exist");
            }

            // Sending file
            while (!feof(fp)) {
                numbytes = fread(buf, sizeof(char), sizeof(buf), fp);
                printf("fread %d bytes, ", numbytes);
                numbytes = write(connfd, buf, numbytes);
                printf("Sending %d bytes\n", numbytes);
            }
            fclose(fp);
            closesocket(listenfd);
            closesocket(connfd);
            return 0;
        }
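    A likely culprit, offered as a guess from reading the code rather than from running it: the programs call the POSIX-style read(), write() and close() on Winsock SOCKET handles. Under Winsock a SOCKET is not a C runtime file descriptor, so those calls do not move data over the socket; the socket-specific calls are recv(), send() and closesocket(). A minimal sketch of the two loops rewritten that way (variable names reused from the code above):

        // Client: receive until the peer closes the connection.
        while (1) {
            numbytes = recv(sockfd, buf, sizeof(buf), 0);
            if (numbytes <= 0)           /* 0 = orderly close, SOCKET_ERROR = failure */
                break;
            fwrite(buf, sizeof(char), numbytes, fp);
        }
        closesocket(sockfd);

        // Server: send exactly the number of bytes fread() returned.
        while ((numbytes = fread(buf, sizeof(char), sizeof(buf), fp)) > 0) {
            send(connfd, buf, numbytes, 0);   /* for large files, also check for partial sends */
        }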

    Read the article

  • CentOS tftp server is broken

    - by Mike Pennington
    I'm trying to run tftpd from xinetd on CentOS 6; however, I can only tftp from localhost. I have a file in /opt/tftpboot/fw.test.conf that I can retrieve if I tftp to localhost: [mpenning@localhost ~]$ tftp localhost tftp> get fw.test.conf tftp> quit [mpenning@localhost ~]$ ls fw.test.conf [mpenning@localhost ~]$ However, I cannot receive this file if I tftp to eth1 on this server (the address on eth1 is 172.16.1.4). [mpenning@localhost ~]$ sudo tshark -i eth1 udp and host 172.16.1.5 Running as user "root" and group "root". This could be dangerous. Capturing on eth1 0.000000 172.16.1.5 -> 172.16.1.4 TFTP Read Request, File: fw.test.conf\000, Transfer type: netascii\000 5.000133 172.16.1.5 -> 172.16.1.4 TFTP Read Request, File: fw.test.conf\000, Transfer type: netascii\000 10.000184 172.16.1.5 -> 172.16.1.4 TFTP Read Request, File: fw.test.conf\000, Transfer type: netascii\000 15.000297 172.16.1.5 -> 172.16.1.4 TFTP Read Request, File: fw.test.conf\000, Transfer type: netascii\000 20.000331 172.16.1.5 -> 172.16.1.4 TFTP Read Request, File: fw.test.conf\000, Transfer type: netascii\000 ^C5 packets captured [mpenning@localhost ~]$ I have the following xinetd configuration: [root@localhost mpenning]# cat /etc/xinetd.d/tftp # default: off # description: The tftp server serves files using the trivial file transfer \ # protocol. The tftp protocol is often used to boot diskless \ # workstations, download configuration files to network-aware printers, \ # and to start the installation process for some operating systems. service tftp { socket_type = dgram protocol = udp wait = yes user = root server = /usr/sbin/in.tftpd server_args = -s /opt/tftpboot disable = no per_source = 11 cps = 100 2 flags = IPv4 } [root@localhost mpenning]#
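    Since xinetd clearly answers on localhost and the read requests do arrive on eth1 untouched, the usual suspects on CentOS 6 are the host firewall and SELinux rather than the xinetd config itself. A quick check (commands assumed to be run as root; the ACCEPT rule is only a temporary test) might be:

        iptables -nL INPUT            # look for a rule accepting udp dpt:69
        getenforce                    # Enforcing means SELinux may be blocking in.tftpd from /opt/tftpboot
        iptables -I INPUT -p udp --dport 69 -j ACCEPT   # temporary: allow TFTP through the firewall to test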

    Read the article

  • Network access lags for Win7 when server network utilization is high

    - by Jeff Miles
    We have a Dell PE2950 file server running Windows 2008, hosting a DFS namespace of ~1.2 TB. This server has two Broadcom 1Gbps NICs teamed together. When there is high traffic going to the server across the network (greater than 200 Mbps), any Windows 7 client accessing a DFS share at the time experiences severe performance problems. For example: Computer A has an AutoCAD drawing opened directly from the DFS share. Performance is normal, not causing any issues. Computer B begins a file transfer, putting a 11GB file onto a different DFS namespace, on the same server Computer A immediately notices lag while using AutoCAD. The cursor momentarily freezes within AutoCAD every 10 seconds or so, and any browsing of the DFS share is extremely slow. Computer B completes file transfer, and performance resumes to normal for Computer A. This is only affecting Windows 7 clients, using a variety of hardware (desktop + laptop). All of our Windows XP clients see no performance impact during the file transfer. Things I have tried with no change: Had Computer A work from an entirely different RAID array from the file transfer destination Updated NIC drivers on clients and server Enabled TCP offload and receive side scaling on the server NIC (previously disabled when the issue began) Antivirus disabled during file transfer I am currently having a user test applications other than AutoCAD when the file transfer occurs, and will update the question with that result. Does anyone have any recommendations for resolution or additional troubleshooting steps?

    Read the article

  • Problems with Net::FTP slowing down

    - by c0bra
    I'm running into an issue with using Net::FTP (latest version 2.77) to transfer files to a remote host where a process is waiting to take the file and feed into some other system. The remote process does this every 5 minutes but ignores files that were modified in the last 0.2 seconds (that's right, 1/5th of a second). The problem is that for some reason the transfer seems to halt or slow down a bit and no data is transferred for several seconds and during this time the process picks up the incomplete file and removes it. The weird thing is that manually using the ftp binary the file seems to transfer fine. I've tried messing with all of the Net::FTP switches (Active/Passive, different Blocksizes) and nothing seems to help. What's also weird is that the file seems to transfer fairly quickly at first and on occasion a bit into the transfer. Like 300-500k will go up immediately, but then it slows down to where the file size is only increasing by 2,896 bytes every several seconds. It doesn't seem to happen when I try sending the file to a different remote host, but since a regular manual ftp transfer works with this host I don't know what to think. Some combination of Net::FTP and possibly a slow or wonky connection?
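    For reference, a minimal sketch of the kind of transfer being described (host, credentials and paths are placeholders), with Debug turned on so the protocol conversation can be compared against the working manual ftp session, and an explicit binary-mode put with a large blocksize:

        use strict;
        use warnings;
        use Net::FTP;

        # Debug => 1 prints the FTP exchange, which helps show exactly where the stall happens.
        my $ftp = Net::FTP->new('ftp.example.com', Debug => 1, Passive => 1, BlockSize => 65536)
            or die "connect: $@";
        $ftp->login('user', 'password') or die "login: ", $ftp->message;
        $ftp->binary;                          # avoid ASCII-mode newline translation
        $ftp->put('local.dat', 'remote.dat') or die "put: ", $ftp->message;
        $ftp->quit;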

    Read the article

  • how to properly transfer user input from index to invoice?

    - by Romel
    I got a dillema, I'm trying to find a solution for my code. How do I make it so that when the user inputs a given quantity in the text box from the index.php it will transfer that input quantity to the invoice.php. I've tried doing the post method but it seems like it's not working :/ As always, any help or suggestions will be greatly appreciated! I hope it's something simple:( Here's my code: index.php <html> <style> body{ background-image: url('URL HERE'); font-family: "Helvetica"; font-size:15px; } h1{ color:black; text-align:center; } p{ font-size:15px; } </style> <h1> STORE TITLE HERE </h1> <body> <form action="login.php" method="post"> <?php //Include products info.inc (Which holds all our product arrays and info) //Credit: Tracy & Mark (THank you!) include 'products_info.inc'; /*The following code centers my table on the page, makes the table background white, makes the table 50% of the browser window, gives it a border of 1 px, gives a padding of 2 px between the cell border and content, and gives 1 px of spacing between cells. */ echo "<table align=center bgcolor='FFFFFF' width=50% border=1 cellpadding=1 cellspacing=2>"; //Credit: Tracy & Mark (THank you!) echo '<th>Product</th> <th>Description</th> <th>Price</th> <th>Quantity</th>'; //The following code loops through the whole table body and then prints each row. for($i=0; $i<count($allfood); $i++) { //Credit: Tracy & Mark (THank you!) echo "<tr align=center>"; echo "<td>{$allfood[$i]['Product']}</td>"; echo "<td>{$allfood[$i]['Description']}</td>"; echo "<td>{$allfood[$i]['Price']}</td>"; echo "<td>{$allfood[$i]['Quantity']}</td>"; echo "</tr>"; } //This code ends the table. echo "</table>"; echo "<br>"; ?> <br><center><input type='submit' name='purchase' value='Purchase'></center> </form> </body> </html> And here's my invoice.php <html> <style> body{ background-image: url('URL HERE'); font-family: "Helvetica"; font-size:15px; } h1{ color:black; text-align:center; } p{ font-size:15px; } </style> <h1> Invoice </h1> </html> <?php //Include products info.inc (Which holds all our product arrays and info) //Credit: Tracy & Mark (Thank you!) include 'products_info.inc'; //Display the invoice & 'WELCOME USER. THANK YOU FOR USING THIS DAMN THING msg' /*The following code centers my invoice table on the page, makes the table background white, makes the table 50% of the browser window, gives it a border of 1 px, gives a padding of 2 px between the cell border and content, and gives 1 px of spacing between cells. */ echo "<table align=center bgcolor='FFFFFF' width=50% border=1 cellpadding=1cellspacing=2>"; echo "<tr>"; echo "<td align=center><b>Product</b></td>"; echo "<td align=center><b>Quantity</b></td>"; echo "<td align=center><b>Price</></td>"; echo "<td align=center><b>Extended Price</b></td>"; echo "</tr>"; for($i=0; $i<count($allfood); $i++) { //Credit: Tracy & Mark (Thank you!) $qty= @$_POST['Quantity']['$i']; // This calculates the price if the user orders more than 1 item. 
$extendedprice = $qty*$allfood[$i]['Price']; echo "<tr>"; echo "<td align=center>{$allfood[$i]['Product']}</td>"; echo "<td align=center>$extendedprice</td>"; echo "<td align=center>{$allfood[$i]['Price']}</td>"; echo "</tr>"; } // The goal here was to make it so that if the user selected a certain quantity from index.php, it would carry over and display on the invoice.php if ($qty = 0) { echo "please choose 1"; } elseif ($qty > 0) { echo $qty; } /*echo "<tr>"; echo "<td align=center>Subtotal</b></td>"; echo "<td align=center></td>"; echo "<td align=center></td>"; echo "<tr>"; echo "<td align=center>Tax at 5.75%</td>"; echo "<td align=center></td>"; echo "<td align=center></td>"; echo "<tr>"; echo "<td align=center><b>Grand total</b></td>"; echo "<td align=center></td>"; echo "<td align=center></td>"; */ echo "</table>"; ?> <br> <center> <form action="index.php" method="post"> <input type="submit" name="home" value="Back to homepage"> </form> </center>
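    One thing that stands out (a guess, since products_info.inc isn't shown): index.php prints the quantity values as plain table text and posts the form to login.php, so nothing named Quantity ever reaches invoice.php; and $_POST['Quantity']['$i'] uses a single-quoted '$i' that is never interpolated, while if ($qty = 0) assigns instead of compares. A sketch of the idea, with the form posting to invoice.php and each quantity rendered as a real input field:

        <!-- index.php: post to invoice.php and emit an input named Quantity[] keyed by row -->
        <form action="invoice.php" method="post">
        <?php
        for ($i = 0; $i < count($allfood); $i++) {
            echo "<tr align=center>";
            echo "<td>{$allfood[$i]['Product']}</td>";
            echo "<td><input type='text' name='Quantity[$i]' value='0' size='3'></td>";
            echo "</tr>";
        }
        ?>
        <input type='submit' name='purchase' value='Purchase'>
        </form>

        <?php
        // invoice.php: read the posted quantity back inside the loop, defaulting to 0
        $qty = isset($_POST['Quantity'][$i]) ? (int) $_POST['Quantity'][$i] : 0;
        $extendedprice = $qty * $allfood[$i]['Price'];
        if ($qty == 0) {          // == compares; = would overwrite $qty
            echo "please choose 1";
        }
        ?>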

    Read the article

  • Page Refresh Returns to previous page

    - by Rajesh Rolen- DotNet Developer
    I am using Server.Transfer to move from one page to another. Let's say when I click button1 on page1, it transfers to page2 using Server.Transfer; but when I refresh page2, it posts back and returns me to page1 again. Please tell me where I am going wrong. I have tried both of the following, but the result is the same: server.Transfer("~/admin/mypage.aspx?msg=A",False ) server.Transfer("~/admin/mypage.aspx?msg=A",True )
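    This is the expected behaviour of Server.Transfer: the transfer happens entirely on the server, so the browser's address bar still points at page1, and refreshing re-submits page1's request. A common way around it (a sketch, not the only option) is to issue a real redirect instead, so the browser itself navigates to page2 and a refresh re-requests page2:

        ' In the button1 click handler, instead of Server.Transfer:
        Response.Redirect("~/admin/mypage.aspx?msg=A", False)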

    Read the article

  • External File Upload Optimizations for Windows Azure

    - by rgillen
    [Cross posted from here: http://rob.gillenfamily.net/post/External-File-Upload-Optimizations-for-Windows-Azure.aspx] I’m wrapping up a bit of the work we’ve been doing on data movement optimizations for cloud computing and the latest set of data yielded some interesting points I thought I’d share. The work done here is not really rocket science but may, in some ways, be slightly counter-intuitive and therefore seemed worthy of posting. Summary: for those who don’t like to read detailed posts or don’t have time, the synopsis is that if you are uploading data to Azure, block your data (even down to 1MB) and upload in parallel. Set your block size based on your source file size, but if you must choose a fixed value, use 1MB. Following the above will result in significant performance gains… upwards of 10x-24x and a reduction in overall file transfer time of upwards of 90% (eg, uploading a 1GB file averaged 46.37 minutes prior to optimizations and averaged 1.86 minutes afterwards). Detail: For those of you who want more detail, or think that the claims at the end of the preceding paragraph are over-reaching, what follows is information and code supporting these claims. As the title would indicate, these tests were run from our research facility pointing to the Azure cloud (specifically US North Central as it is physically closest to us) and do not represent intra-cloud results… we have performed intra-cloud tests and the overall results are similar in notion but the data rates are significantly different as well as the tipping points for the various block sizes… this will be detailed separately). We started by building a very simple console application that would loop through a directory and upload each file to Azure storage. This application used the shipping storage client library from the 1.1 version of the azure tools. The only real variation from the client library is that we added code to collect and record the duration (in ms) and size (in bytes) for each file transferred. The code is available here. We then created a directory that had a collection of files for the following sizes: 2KB, 32KB, 64KB, 128KB, 512KB, 1MB, 5MB, 10MB, 25MB, 50MB, 100MB, 250MB, 500MB, 750MB, and 1GB (50 files for each size listed). These files contained randomly-generated binary data and do not benefit from compression (a separate discussion topic). Our file generation tool is available here. The baseline was established by running the application described above against the directory containing all of the data files. This application uploads the files in a random order so as to avoid transferring all of the files of a given size sequentially and thereby spreading the affects of periodic Internet delays across the collection of results.  We then ran some scripts to split the resulting data and generate some reports. The raw data collected for our non-optimized tests is available via the links in the Related Resources section at the bottom of this post. For each file size, we calculated the average upload time (and standard deviation) and the average transfer rate (and standard deviation). As you likely are aware, transferring data across the Internet is susceptible to many transient delays which can cause anomalies in the resulting data. It is for this reason that we randomized the order of source file processing as well as executed the tests 50x for each file size. We expect that these steps will yield a sufficiently balanced set of results. 
Once the baseline was collected and analyzed, we updated the test harness application with some methods to split the source file into user-defined block sizes and then to upload those blocks in parallel (using the PutBlock() method of Azure storage). The parallelization was handled by simply relying on the Parallel Extensions to .NET to provide a Parallel.For loop (see linked source for specific implementation details in Program.cs, line 173 and following… less than 100 lines total). Once all of the blocks were uploaded, we called PutBlockList() to assemble/commit the file in Azure storage. For each block transferred, the MD5 was calculated and sent ensuring that the bits that arrived matched was was intended. The timer for the blocked/parallelized transfer method wraps the entire process (source file splitting, block transfer, MD5 validation, file committal). A diagram of the process is as follows: We then tested the affects of blocking & parallelizing the transfers by running the updated application against the same source set and did a parameter sweep on the block size including 256KB, 512KB, 1MB, 2MB, and 4MB (our assumption was that anything lower than 256KB wasn’t worth the trouble and 4MB is the maximum size of a block supported by Azure). The raw data for the parallel tests is available via the links in the Related Resources section at the bottom of this post. This data was processed and then compared against the single-threaded / non-optimized transfer numbers and the results were encouraging. The Excel version of the results is available here. Two semi-obvious points need to be made prior to reviewing the data. The first is that if the block size is larger than the source file size you will end up with a “negative optimization” due to the overhead of attempting to block and parallelize. The second is that as the files get smaller, the clock-time cost of blocking and parallelizing (overhead) is more apparent and can tend towards negative optimizations. For this reason (and is supported in the raw data provided in the linked worksheet) the charts and dialog below ignore source file sizes less than 1MB. (click chart for full size image) The chart above illustrates some interesting points about the results: When the block size is smaller than the source file, performance increases but as the block size approaches and then passes the source file size, you see decreasing benefit to the point of negative gains (see the values for the 1MB file size) For some of the moderately-sized source files, small blocks (256KB) are best As the size of the source file gets larger (see values for 50MB and up), the smallest block size is not the most efficient (presumably due, at least in part, to the increased number of blocks, increased number of individual transfer requests, and reassembly/committal costs). Once you pass the 250MB source file size, the difference in rate for 1MB to 4MB blocks is more-or-less constant The 1MB block size gives the best average improvement (~16x) but the optimal approach would be to vary the block size based on the size of the source file.    (click chart for full size image) The above is another view of the same data as the prior chart just with the axis changed (x-axis represents file size and plotted data shows improvement by block size). It again highlights the fact that the 1MB block size is probably the best overall size but highlights the benefits of some of the other block sizes at different source file sizes. 
This last chart shows the change in total duration of the file uploads based on different block sizes for the source file sizes. Nothing really new here other than this view of the data highlights the negative affects of poorly choosing a block size for smaller files.   Summary What we have found so far is that blocking your file uploads and uploading them in parallel results in significant performance improvements. Further, utilizing extension methods and the Task Parallel Library (.NET 4.0) make short work of altering the shipping client library to provide this functionality while minimizing the amount of change to existing applications that might be using the client library for other interactions.   Related Resources Source code for upload test application Source code for random file generator ODatas feed of raw data from non-optimized transfer tests Experiment Metadata Experiment Datasets 2KB Uploads 32KB Uploads 64KB Uploads 128KB Uploads 256KB Uploads 512KB Uploads 1MB Uploads 5MB Uploads 10MB Uploads 25MB Uploads 50MB Uploads 100MB Uploads 250MB Uploads 500MB Uploads 750MB Uploads 1GB Uploads Raw Data OData feeds of raw data from blocked/parallelized transfer tests Experiment Metadata Experiment Datasets Raw Data 256KB Blocks 512KB Blocks 1MB Blocks 2MB Blocks 4MB Blocks Excel worksheet showing summarizations and comparisons
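    For readers who want the shape of the blocked/parallel upload without opening the linked source, here is a condensed sketch of the approach described above. Names and error handling are simplified, container is assumed to be an already-authenticated CloudBlobContainer from the 1.1 storage client library, and the file is read into memory for brevity; the real implementation lives in the linked Program.cs.

        using System;
        using System.IO;
        using System.Security.Cryptography;
        using System.Threading.Tasks;
        using Microsoft.WindowsAzure.StorageClient;

        static void UploadBlocked(CloudBlobContainer container, string path)
        {
            // Split the source file into fixed-size blocks, PutBlock() them in parallel,
            // then commit the block list - the pattern described in the post.
            byte[] data = File.ReadAllBytes(path);
            int blockSize = 1 * 1024 * 1024;                     // 1MB, the best average performer
            int blockCount = (int)Math.Ceiling((double)data.Length / blockSize);
            string[] blockIds = new string[blockCount];

            CloudBlockBlob blob = container.GetBlockBlobReference(Path.GetFileName(path));

            Parallel.For(0, blockCount, i =>
            {
                int offset = i * blockSize;
                int length = Math.Min(blockSize, data.Length - offset);
                string blockId = Convert.ToBase64String(BitConverter.GetBytes(i));   // equal-length ids

                using (MD5 md5 = MD5.Create())
                using (MemoryStream block = new MemoryStream(data, offset, length))
                {
                    string hash = Convert.ToBase64String(md5.ComputeHash(data, offset, length));
                    blob.PutBlock(blockId, block, hash);         // the service verifies the MD5 per block
                }
                blockIds[i] = blockId;
            });

            blob.PutBlockList(blockIds);                         // assemble/commit the blob
        }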

    Read the article

  • How can I transfer output that appears on the console and format it so that it appears on a web page

    - by lojayna
    package collabsoft.backlog_reports.c4; import java.sql.CallableStatement; import java.sql.Connection; import java.sql.DriverManager; import java.sql.ResultSet; import java.sql.ResultSetMetaData; import java.sql.Statement; //import collabsoft.backlog_reports.c4.Report; public class Report { private Connection con; public Report(){ connectUsingJDBC(); } public static void main(String args[]){ Report dc = new Report(); dc.reviewMeeting(6, 8, 10); dc.createReport("dede",100); //dc.viewReport(100); // dc.custRent(3344,123,22,11-11-2009); } /** the following method is used to connect to the database **/ public void connectUsingJDBC() { // This is the name of the ODBC data source String dataSourceName = "Simple_DB"; try { // loading the driver in the memory Class.forName("sun.jdbc.odbc.JdbcOdbcDriver"); // This is the connection URL String dbURL = "jdbc:odbc:" + dataSourceName; con = DriverManager.getConnection("jdbc:mysql://localhost:3306/Collabsoft","root",""); // This line is used to print the name of the driver and it would throw an exception if a problem occured System.out.println("User connected using driver: " + con.getMetaData().getDriverName()); //Addcustomer(con,1111,"aaa","aaa","aa","aam","111","2222","111"); //rentedMovies(con); //executePreparedStatement(con); //executeCallableStatement(con); //executeBatch(con); } catch (Exception e) { e.printStackTrace(); } } /** *this code is to link the SQL code with the java for the task *as an admin I should be able to create a report of a review meeting including notes, tasks and users *i will take the task id and user id and note id that will be needed to be added in the review *meeting report and i will display the information related to these ida **/ public void reviewMeeting(int taskID, int userID, int noteID)// law el proc bt return table { try{ CallableStatement callableStatement = con.prepareCall("{CALL report_review_meeting(?,?,?)}"); callableStatement.setInt(1,taskID); callableStatement.setInt(2,userID); callableStatement.setInt(3,noteID); ResultSet resultSet = callableStatement.executeQuery(); // or executeupdate() or updateQuery ResultSetMetaData rsm = resultSet.getMetaData(); int numOfColumns = rsm.getColumnCount(); System.out.println("lojayna"); while (resultSet.next()) { System.out.println("New Row:"); for (int i = 1; i <= numOfColumns; i++) System.out.print(rsm.getColumnName(i) + ": " + resultSet.getObject(i) + " "); System.out.println(); } } catch(Exception e) { System.out.println("E"); } } ////////////////////////////////// ///////////////////////////////// public void allproject(int projID)// law el proc bt return table { try{ CallableStatement callableStatement = con.prepareCall("{CALL all_project(?)}"); callableStatement.setInt(1,projID); //callableStatement.setInt(2,userID); //callableStatement.setInt(3,noteID); ResultSet resultSet = callableStatement.executeQuery(); // or executeupdate() or updateQuery ResultSetMetaData rsm = resultSet.getMetaData(); int numOfColumns = rsm.getColumnCount(); System.out.println("lojayna"); while (resultSet.next()) { System.out.println("New Row:"); for (int i = 1; i <= numOfColumns; i++) System.out.print(rsm.getColumnName(i) + ": " + resultSet.getObject(i) + " "); System.out.println(); } } catch(Exception e) { System.out.println("E"); } } /////////////////////////////// /** * here i take the event id and i take a string report and then * i relate the report with the event **/ public void createReport(String report,int E_ID )// law el proc bt return table { try{ Statement st = 
con.createStatement(); st.executeUpdate("UPDATE e_vent SET e_vent.report=report WHERE e_vent.E_ID= E_ID;"); /* CallableStatement callableStatement = con.prepareCall("{CALL Create_report(?,?)}"); callableStatement.setString(1,report); callableStatement.setInt(2,E_ID); ResultSet resultSet = callableStatement.executeQuery(); // or executeupdate() or updateQuery ResultSetMetaData rsm = resultSet.getMetaData(); int numOfColumns = rsm.getColumnCount(); System.out.println("lojayna"); while (resultSet.next()) { System.out.println("New Row:"); for (int i = 1; i <= numOfColumns; i++) System.out.print(rsm.getColumnName(i) + ": " + resultSet.getObject(i) + " "); System.out.println(); }*/ } catch(Exception e) { System.out.println("E"); System.out.println(e); } } /** *in the following method i view the report of the event having the ID eventID **/ public void viewReport(int eventID)// law el proc bt return table { try{ CallableStatement callableStatement = con.prepareCall("{CALL view_report(?)}"); callableStatement.setInt(1,eventID); ResultSet resultSet = callableStatement.executeQuery(); // or executeupdate() or updateQuery ResultSetMetaData rsm = resultSet.getMetaData(); int numOfColumns = rsm.getColumnCount(); System.out.println("lojayna"); while (resultSet.next()) { System.out.println("New Row:"); for (int i = 1; i <= numOfColumns; i++) System.out.print(rsm.getColumnName(i) + ": " + resultSet.getObject(i) + " "); System.out.println(); } } catch(Exception e) { System.out.println("E"); } } } // the result of these methods is being showed on the console , i am using WIcket and i want it 2 be showed on the web how is that done ?!
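    The usual shape of the fix (a sketch; component ids, markup and the Wicket 1.4-style generics are assumptions, not taken from the project): stop printing inside the JDBC loop and instead return the rows as data, then let a Wicket component render that data in the page's HTML.

        // In Report: collect the rows instead of System.out.println (error handling kept in the original style)
        public List<String> reviewMeetingRows(int taskID, int userID, int noteID) {
            List<String> rows = new ArrayList<String>();
            try {
                CallableStatement cs = con.prepareCall("{CALL report_review_meeting(?,?,?)}");
                cs.setInt(1, taskID);
                cs.setInt(2, userID);
                cs.setInt(3, noteID);
                ResultSet rs = cs.executeQuery();
                ResultSetMetaData md = rs.getMetaData();
                while (rs.next()) {
                    StringBuilder row = new StringBuilder();
                    for (int i = 1; i <= md.getColumnCount(); i++) {
                        row.append(md.getColumnName(i)).append(": ").append(rs.getObject(i)).append(" ");
                    }
                    rows.add(row.toString());
                }
            } catch (Exception e) {
                e.printStackTrace();
            }
            return rows;
        }

        // In a Wicket page constructor (imports: org.apache.wicket.markup.html.basic.Label,
        // org.apache.wicket.markup.html.list.*; markup needs <span wicket:id="rows"><span wicket:id="row"/></span>)
        add(new ListView<String>("rows", new Report().reviewMeetingRows(6, 8, 10)) {
            @Override
            protected void populateItem(ListItem<String> item) {
                item.add(new Label("row", item.getModelObject()));
            }
        });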

    Read the article

  • How to transfer large amount of data using WCF?

    - by JSprang
    We are currently trying to move large amounts of data to a Silverlight 3 client using WCF with PollingDuplex. I have read about the MultiplerMessagesPerPoll in Silverlight 4 and it appears to be quite a bit faster. Are there any examples out there for me to reference (using MultipleMessagesPerPoll)? Or maybe some good references on using Net.TCP? Maybe I should be taking a completely different approach? Any ideas or suggestions would be greatly appreciated. Thanks!
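    A rough, unverified sketch of the Silverlight 4 client-side setup being asked about, based on the SDK's polling duplex binding; the proxy class name MyDuplexServiceClient and the service URL are placeholders, and the size limits would also need raising on the server side:

        // Assumes the Silverlight 4 System.ServiceModel.PollingDuplex assembly is referenced.
        var binding = new PollingDuplexHttpBinding(PollingDuplexMode.MultipleMessagesPerPoll);
        binding.MaxReceivedMessageSize = int.MaxValue;   // large payloads also need the message size limits raised
        var client = new MyDuplexServiceClient(binding,
            new EndpointAddress("http://localhost/MyDuplexService.svc"));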

    Read the article

  • ASP.NET MVC: How to transfer more than one object to View method?

    - by ile
    I finished NerdDinner tutorial and now I'm playing a bit with project. Index page shows all upcoming dinners: public ActionResult Index() { var dinners = dinnerRepository.FindUpComingDinners().ToList(); return View(dinners); } In DinnerRepository class I have method FindAllDinners and I would like to add to above Index method number of all dinners, something like this: public ActionResult Index() { var dinners = dinnerRepository.FindUpComingDinners().ToList(); var numberOfAllDinners = dinnerRepository.FindAllDinners().Count(); return View(dinners, numberOfAllDinners); } Of course, this doesn't work. As I'm pretty new to OOP I would need help with this one. Thanks, Ile
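    The usual MVC answer is to wrap everything the view needs in a single view-model class and pass that one object to View(). A minimal sketch (the class name is made up for illustration):

        // A view-model holding both pieces of data the Index view needs.
        public class DinnerIndexViewModel
        {
            public List<Dinner> Dinners { get; set; }
            public int NumberOfAllDinners { get; set; }
        }

        public ActionResult Index()
        {
            var model = new DinnerIndexViewModel
            {
                Dinners = dinnerRepository.FindUpComingDinners().ToList(),
                NumberOfAllDinners = dinnerRepository.FindAllDinners().Count()
            };
            return View(model);   // the view is then strongly typed to DinnerIndexViewModel
        }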

    Read the article

  • How do I transfer this unmanaged code from ASP to ASP.NET 2/MVC?

    - by melaos
    Hi guys, I'm a newbie to ASP.NET interop features. What I have here is an unmanaged DLL that I need to call from my ASP.NET MVC app. The DLL name is CTSerialNumChecksum.dll, and the classic ASP code that uses it is:

        set CheckSumObj = Server.CreateObject("CTSerialNumChecksum.CRC32API")
        validSno = CheckSumObj.ValidateSerialNumber(no)

    I know it's unmanaged because when I try to add a reference to the DLL it doesn't work. I tried following some tutorials on interop and marshalling, but so far I haven't been able to get the code to work. I'm trying to wrap the object in another static class and just let the rest of the app call that:

        using System;
        using System.Runtime.InteropServices;

        namespace OnlineRegisteration.Models
        {
            public static class SerialNumberChecksum
            {
                [DllImport("CTSerialNumChecksum")]
                public static extern int ValidateSerialNumber(string serialNo);
            }
        }

    Questions: How do I write the class? What tool can I use to identify what type of DLL a particular file is (i.e. unmanaged C++, etc.)? Also, I intend to use jQuery to make an AJAX call later so I can use this to validate my form pre-submission. Is there a better way to handle this?
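    One point worth noting: Server.CreateObject with a ProgID means the DLL is a COM component, not a flat Win32 export, so DllImport (which binds to exported C functions) is unlikely to find ValidateSerialNumber. A hedged sketch of calling it as COM via late binding instead (the ProgID is taken from the ASP code; everything else is illustrative):

        using System;
        using System.Reflection;

        public static class SerialNumberChecksum
        {
            public static int ValidateSerialNumber(string serialNo)
            {
                // Create the registered COM object by its ProgID and invoke the method late-bound.
                Type comType = Type.GetTypeFromProgID("CTSerialNumChecksum.CRC32API");
                object com = Activator.CreateInstance(comType);
                object result = comType.InvokeMember("ValidateSerialNumber",
                    BindingFlags.InvokeMethod, null, com, new object[] { serialNo });
                return Convert.ToInt32(result);
            }
        }

    For the second question, dumpbin /exports or the OLE/COM Object Viewer will usually tell you whether a DLL exposes flat C exports or COM classes; if it is COM, adding a COM reference (or running tlbimp) to generate a typed interop assembly is the tidier alternative to late binding.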

    Read the article

  • How do I transfer this logic into a LINQ statement?

    - by Geoffrey
    I can't get this bit of logic converted into a Linq statement and it is driving me nuts. I have a list of items that have a category and a createdondate field. I want to group by the category and only return items that have the max date for their category. So for example, the list contains items with categories 1 and 2. The first day (1/1) I post two items to both categories 1 and 2. The second day (1/2) I post three items to category 1. The list should return the second day postings to category 1 and the first day postings to category 2. Right now I have it grouping by the category then running through a foreach loop to compare each item in the group with the max date of the group, if the date is less than the max date it removes the item. There's got to be a way to take the loop out, but I haven't figured it out!
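    A sketch of the loop-free version (the property names Category and CreatedOnDate are assumed from the description): group by category, find each group's latest date, and keep only the items that match it.

        var latestPerCategory = items
            .GroupBy(i => i.Category)
            .SelectMany(g =>
            {
                var maxDate = g.Max(i => i.CreatedOnDate);      // latest post date in this category
                return g.Where(i => i.CreatedOnDate == maxDate); // keep every item posted on that date
            })
            .ToList();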

    Read the article

  • How to transfer binary data through multiple http server?

    - by solotim
    Well, the question is not meant to be that big. Let me explain the scenario: I have two HTTP servers. Server A is accessible to the end user through a web browser, while server B is an internal server that can only be accessed by server A. If server B generates some big JPEG image on its local disk, obviously we can't just hand the path to that image to server A and eventually to the end user. So how do I let the end user see those images without first storing the image data on server A temporarily? I run PHP on server A and Perl on server B, but this shouldn't matter. I need a general pattern for implementing this.
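    The general pattern is to have server A act as a streaming proxy: it opens the image on server B over HTTP and copies the bytes straight through to the client as it reads them, so nothing is written to A's disk. A minimal PHP sketch (the internal URL is a placeholder; allow_url_fopen must be enabled, otherwise cURL with a write callback is the equivalent):

        <?php
        // proxy_image.php on server A: stream the internal image through to the browser.
        $src = fopen('http://server-b.internal/images/big.jpg', 'rb');
        if ($src === false) {
            header('HTTP/1.1 502 Bad Gateway');
            exit;
        }
        header('Content-Type: image/jpeg');
        fpassthru($src);   // copies the internal stream to the output in chunks, no temp file
        fclose($src);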

    Read the article

  • How to safely transfer reference to object across window?

    - by Morgan Cheng
    I'm debugging a web application. JavaScript in one window creates an object and passes it as an argument to a global method in another window. Pseudo-code is like below: var obj = new Foo(); anotherWin.bar(obj); In anotherWin, the argument is stored in a global variable: var g_obj; function bar(obj) { g_obj = obj; ... } When another function tries to reference g_obj.Id, it throws the exception "Cannot evaluate expression". This happens in IE 8.0.7600.16385 on Windows 7. In the Visual Studio debugger, when this exception happens, g_obj shows as {...} and it looks like all its properties are lost. Perhaps the root reason is that the object is created in one window but only referenced in another window, and the object might be garbage-collected at any time. Is there any way to work around this?
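    A common workaround (assuming the receiving window only needs the data, not live methods) is to pass a serialized copy across the window boundary instead of a live object reference, so nothing in anotherWin keeps pointing at the creating window's script engine. Native JSON is available in IE8 standards mode; otherwise json2.js provides the same API.

        // In the sending window: hand over a plain string, not the live object.
        var obj = new Foo();
        anotherWin.bar(JSON.stringify(obj));

        // In anotherWin: rebuild a local copy owned entirely by this window.
        var g_obj;
        function bar(serialized) {
            g_obj = JSON.parse(serialized);   // g_obj.Id now survives the other window going away
        }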

    Read the article

  • Is there a way to transform a list of key/value pairs into a data transfer object

    - by weevie
    ...apart from the obvious looping through the list and a dirty great case statement! I've turned over a few LINQ queries in my head but nothing seems to get anywhere close. Here's an example DTO if it helps:

        class ClientCompany
        {
            public string Title { get; private set; }
            public string Forenames { get; private set; }
            public string Surname { get; private set; }
            public string EmailAddress { get; private set; }
            public string TelephoneNumber { get; private set; }
            public string AlternativeTelephoneNumber { get; private set; }
            public string Address1 { get; private set; }
            public string Address2 { get; private set; }
            public string TownOrDistrict { get; private set; }
            public string CountyOrState { get; private set; }
            public string PostCode { get; private set; }
        }

    We have no control over the fact that we're getting the data in as KV pairs, I'm afraid.
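    One way to avoid the case statement (a sketch; it assumes the pair keys match the property names exactly) is to let reflection do the mapping. The setters are private, so the set method has to be fetched with nonPublic set to true.

        using System;
        using System.Collections.Generic;
        using System.Reflection;

        static ClientCompany ToClientCompany(IEnumerable<KeyValuePair<string, string>> pairs)
        {
            var dto = new ClientCompany();
            foreach (var pair in pairs)
            {
                PropertyInfo prop = typeof(ClientCompany).GetProperty(pair.Key);
                if (prop == null) continue;                     // unknown key: ignore (or log)
                MethodInfo setter = prop.GetSetMethod(true);    // true = include the private setter
                if (setter != null)
                    setter.Invoke(dto, new object[] { pair.Value });
            }
            return dto;
        }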

    Read the article

  • What's the best way to transfer a large dataset over a .NET web service?

    - by Malvineous
    I've inherited a C# .NET application which talks to a web service, and the web service talks to an Oracle database. I need to add an export function to the UI, to produce an Excel spreadsheet of some of the data. I have created a web service function to run a database query, load the data into a DataTable and then return it, which works fine for a small number of rows. However there is enough data in the full run that the client application locks up for a few minutes and then returns a timeout error. Obviously this isn't the best way to retrieve such a large dataset. Before I go ahead and come up with some dodgy way of splitting the call, I'm wondering if there is already something in place that can handle this. At the moment I'm thinking of a startExport function then repeatedly calling a next50Rows function until there is no data left, but because web services are stateless this means I'm going to have to keep some sort of ID number around and deal with the associated permissions. It would mean that I don't have to load the entire data set into the web server's memory though, which is one good thing. So if anyone knows a better way to retrieve a large amount of data (in a table format) over a .NET web service, please let me know!
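    As a sketch of the page-at-a-time idea described above (the method name, connectionString, view name and ORDER BY column are all made up; the Oracle paging query is one of several possible forms), the service can expose a paged call and the client can loop, appending rows to the spreadsheet until a page comes back with fewer than pageSize rows:

        [WebMethod]
        public DataTable GetExportRows(int pageIndex, int pageSize)
        {
            // ROW_NUMBER() keeps each call small enough to stay under the client timeout.
            string sql = @"SELECT * FROM (
                               SELECT t.*, ROW_NUMBER() OVER (ORDER BY t.ID) AS rn
                               FROM export_view t)
                           WHERE rn BETWEEN :lo AND :hi";
            using (var conn = new OracleConnection(connectionString))
            using (var da = new OracleDataAdapter(sql, conn))
            {
                da.SelectCommand.Parameters.Add(new OracleParameter(":lo", pageIndex * pageSize + 1));
                da.SelectCommand.Parameters.Add(new OracleParameter(":hi", (pageIndex + 1) * pageSize));
                var table = new DataTable("ExportPage");
                da.Fill(table);
                return table;
            }
        }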

    Read the article

  • How to transfer a post request in curl into a ruby script?

    - by 0x90
    I have this post request: curl -i -X POST \ -H "Accept:application/json" \ -H "content-type:application/x-www-form-urlencoded" \ -d "disambiguator=Document&confidence=-1&support=-1&text=President%20Obama%20called%20Wednesday%20on%20Congress%20to%20extend%20a%20tax%20break%20for%20students%20included%20in%20last%20year%27s%20economic%20stimulus%20package" \ http://spotlight.dbpedia.org/dev/rest/annotate/ How can I write it in ruby? I tried this as Kyle told me: require 'rubygems' require 'net/http' require 'uri' uri = URI.parse('http://spotlight.dbpedia.org/rest/annotate') http = Net::HTTP.new(uri.host, uri.port) request = Net::HTTP::Post.new(uri.request_uri) request.set_form_data({ "disambiguator" => "Document", "confidence" => "0.3", "support" => "0", "text" => "President Obama called Wednesday on Congress to extend a tax break for students included in last year's economic stimulus package" }) request.add_field("Accept", "application/json") request.add_field("Content-Type", "application/x-www-form-urlencoded") response = http.request(request) puts response.inspect but got this error: #<Net::HTTPInternalServerError 500 Internal Error readbody=true>
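    Worth checking before anything else: the Ruby version is not hitting the same endpoint as the curl command (the curl URL is /dev/rest/annotate/, the Ruby URI is /rest/annotate), and it sends different confidence/support values, so the two requests are not strictly comparable. Printing the body of the failed response usually reveals the service's actual error message:

        puts response.body   # the error page or stack trace the service returned with the 500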

    Read the article
