Search Results

Search found 21759 results on 871 pages for 'int 0'.


  • OO - inheritance vs. decoration problem

    - by Karel J
    Hi all, I have an OOP-related question. I have an interface, say: class MyInterface { public int getValue(); } In my project, this interface is implemented by 7 implementations: class MyImplementation1 implements MyInterface { ... } ... class MyImplementation7 implements MyInterface { ... } These implementations are used by several different modules. For some modules, the behaviour of MyInterface must be adjusted slightly. Let's say that it must return the value of the implementation + 1 (for the sake of example). I solved this by creating a little decorator: class MyDifferentInterface implements MyInterface { private MyInterface i; public MyDifferentInterface(MyInterface i) { this.i = i; } public int getValue() { return i.getValue() + 1; } } This does the job. Here is my problem: one of the modules doesn't accept a MyInterface parameter, but takes MyImplementation4 directly. The reason for this is that the module needs specific behaviour of MyImplementation4 which is not covered by the MyInterface interface itself. But, and here comes the difficulty, this module must also work on the modified version of MyImplementation4. That is, getValue() must return the value + 1. What is the best way to solve this? I fail to come up with a solution that does not involve lots of code duplication. Please note that although the example above is pretty small and simple, the interface and the decorator are quite large and complicated. Thanks a lot all.

    Read the article

  • Strange inheritance behaviour in Objective-C

    - by Smikey
    Hi all, I've created a class called SelectableObject like so: #define kNumberKey @"Object" #define kNameKey @"Name" #define kThumbStringKey @"Thumb" #define kMainStringKey @"Main" #import <Foundation/Foundation.h> @interface SelectableObject : NSObject <NSCoding> { int number; NSString *name; NSString *thumbString; NSString *mainString; } @property (nonatomic, assign) int number; @property (nonatomic, retain) NSString *name; @property (nonatomic, retain) NSString *thumbString; @property (nonatomic, retain) NSString *mainString; @end So far so good. And the implementation section conforms to the NSCoding protocol as expected. HOWEVER, when I add a new class which inherits from this class, i.e. #import <Foundation/Foundation.h> #import "SelectableObject.h" @interface Pet : SelectableObject <NSCoding> { } @end I suddenly get the following compiler error in the Selectable object class! SelectableObject.h:16: error: expected '=', ',', ';', 'asm' or '__attribute__' before 'interface' This makes no sense to me. Why is the interface declaration for the SelectableObject class suddenly broken? I also import it in a couple of other classes I've written... Any help would be very much appreciated. Thanks! Michael

    Read the article

  • How to create a list/structure in Java?

    - by lox
    I have to create a list of, let's say, 50 people (in Java) and display the list, and I don't really know how to do that. This is what I have tried so far; please correct and complete some of my code. public class Person { String name; String stuff; } public class CreatePerson { public static void ang() { ArrayList<Person> thing=new ArrayList<Person>(); Scanner diskScanner = new Scanner(in); for(int i=0; i<50; i++){ Person pers = new Person(); out.print("name: "); pers.name=diskScanner.nextLine(); out.print("stuff: "); pers.stuff=diskScanner.nextLine(); thing.add(pers); break; } // Display people for (int i=0; i<50; i++) { out.println(??);{ } } }}

    Read the article

  • Injecting a runtime exception into a pthread sometimes fails. How can I fix that?

    - by lionbest
    I am trying to inject an exception into a thread using signals, but sometimes the exception does not get caught. For example, the following code: void _sigthrow(int sig) { throw runtime_error(strsignal(sig)); } struct sigaction sigthrow = {{&_sigthrow}}; void* thread1(void*) { sigaction(SIGINT,&sigthrow,NULL); try { while(1) usleep(1); } catch(exception &e) { cerr << "Thread1 catched " << e.what() << endl; } }; void* thread2(void*) { sigaction(SIGINT,&sigthrow,NULL); try { while(1); } catch(exception &e) { cerr << "Thread2 catched " << e.what() << endl; //never goes here } }; If I try to execute it like this: int main() { pthread_t p1,p2; pthread_create( &p1, NULL, &thread1, NULL ); pthread_create( &p2, NULL, &thread2, NULL ); sleep(1); pthread_kill( p1, SIGINT); pthread_kill( p2, SIGINT); sleep(1); return EXIT_SUCCESS; } I get the following output: Thread1 catched Interrupt terminate called after throwing an instance of 'std::runtime_error' what(): Interrupt Aborted How can I make the second thread catch the exception? Is there a better way to inject exceptions?
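    A quick aside on this one: throwing a C++ exception out of a signal handler is not guaranteed to work portably, which is consistent with the crash shown above. A commonly suggested alternative is to set a flag in the handler and throw from ordinary code; the sketch below is my own minimal example of that pattern (Linux, compile with g++ -pthread), not the poster's code.

        #include <csignal>
        #include <cstring>
        #include <iostream>
        #include <stdexcept>
        #include <pthread.h>
        #include <unistd.h>

        static volatile sig_atomic_t got_signal = 0;

        extern "C" void flag_handler(int) { got_signal = 1; }   // async-signal-safe: just sets a flag

        void* worker(void*) {
            struct sigaction sa;
            std::memset(&sa, 0, sizeof sa);
            sa.sa_handler = flag_handler;
            sigaction(SIGINT, &sa, NULL);
            try {
                while (true) {
                    if (got_signal) throw std::runtime_error("interrupted");  // throw from normal code
                    usleep(1000);
                }
            } catch (const std::exception& e) {
                std::cerr << "worker caught " << e.what() << std::endl;
            }
            return NULL;
        }

        int main() {
            pthread_t t;
            pthread_create(&t, NULL, &worker, NULL);
            sleep(1);
            pthread_kill(t, SIGINT);
            pthread_join(t, NULL);
            return 0;
        }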

    Read the article

  • What would be different in Java if Enum declaration didn't have the recursive part

    - by atamur
    Please see http://stackoverflow.com/questions/211143/java-enum-definition and http://stackoverflow.com/questions/3061759/why-in-java-enum-is-declared-as-enume-extends-enume for general discussion. Here I would like to learn what exactly would be broken (not typesafe anymore, or requiring additional casts etc) if Enum class was defined as public class Enum<E extends Enum> I'm using this code for testing my ideas: interface MyComparable<T> { int myCompare(T o); } class MyEnum<E extends MyEnum> implements MyComparable<E> { public int myCompare(E o) { return -1; } } class FirstEnum extends MyEnum<FirstEnum> {} class SecondEnum extends MyEnum<SecondEnum> {} With it I wasn't able to find any benefits in this exact case. PS. the fact that I'm not allowed to do class ThirdEnum extends MyEnum<SecondEnum> {} when MyEnum is defined with recursion is a) not relevant, because with real enums you are not allowed to do that just because you can't extend enum yourself b) not true - pls try it in a compiler and see that it in fact is able to compile w/o any errors PPS. I'm more and more inclined to believe that the correct answer here would be "nothing would change if you remove the recursive part" - but I just can't believe that.

    Read the article

  • Socket.Receive Failing When Multithreaded

    - by Qua
    The following piece of code runs fine when parallelized to 4-5 threads, but starts to fail as the number of threads increases somewhere beyond 10 concurrent threads: int totalRecieved = 0; int recieved; StringBuilder contentSB = new StringBuilder(4000); while ((recieved = socket.Receive(buffer, SocketFlags.None)) > 0) { contentSB.Append(Encoding.ASCII.GetString(buffer, 0, recieved)); totalRecieved += recieved; } The Receive method returns with zero bytes read, and if I continue calling the Receive method then I eventually get an 'An established connection was aborted by the software in your host machine' exception. So I'm assuming that the host actually sent data and then closed the connection, but for some reason I never received it. I'm curious as to why this problem arises when there are a lot of threads. I'm thinking it must have something to do with the fact that each thread doesn't get as much execution time and therefore there is some idle time for the threads, which causes this error. I just can't figure out why idle time would cause the socket not to receive any data.

    Read the article

  • A Few Basic Questions on Overriding

    - by Dahlia
    I have a few problems with my basics and would be thankful if someone could clear this up. What does it mean when I say base *b = new derived;? Why would one go for this? We can very well create objects for class base and class derived separately and then call the functions accordingly. I know that this base *b = new derived; is called Object Slicing, but why and when would one go for this? I know why it is not advisable to convert a base class object to a derived class object (because the base class is not aware of the derived class members and methods). I even read in other Stack Overflow threads that if this is going to be the case then we have to change/revisit our design. I understand all that; however, I am just curious: is there any way to do this? class base { public: void f(){cout << "In Base";} }; class derived:public base { public: void f(){cout << "In Derived";} }; int _tmain(int argc, _TCHAR* argv[]) { base b1, b2; derived d1, d2; b2 = d1; d2 = reinterpret_cast<derived*>(b1); //gives error C2440 b1.f(); // Prints In Base d1.f(); // Prints In Derived b2.f(); // Prints In Base d1.base::f(); //Prints In Base d2.f(); getch(); return 0; } In the case of my example above, is there any way I could call the base class f() using a derived class object? I used d1.base::f(); I just want to know if there is any way without using the scope resolution operator. Thanks a lot for your time in helping me out!
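    One small illustration that may help with the last question: since f() is non-virtual in this example, the static type at the call site decides which f() runs, so calling through a base pointer or reference reaches base::f() without writing the scope resolution operator at the call site. The snippet below is my own minimal sketch, not the poster's code.

        #include <iostream>
        using namespace std;

        class base {
        public:
            void f() { cout << "In Base" << endl; }      // non-virtual on purpose
        };

        class derived : public base {
        public:
            void f() { cout << "In Derived" << endl; }   // hides base::f
        };

        int main() {
            derived d1;
            d1.f();                        // Prints In Derived
            static_cast<base&>(d1).f();    // Prints In Base, via a base reference
            base* b = &d1;
            b->f();                        // Prints In Base, via a base pointer
            return 0;
        }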

    Read the article

  • Linux raw socket programming question

    - by user194420
    Hi all, I am trying to create a raw socket which sends and receives messages with IP/TCP headers under Linux. I can successfully bind to a port and receive a TCP message (i.e. a SYN). However, the message seems to be handled by the OS, not by my program; I am just a reader of it (like Wireshark). My raw socket binds to port 8888, and then I try to telnet to that port. In Wireshark, it shows that port 8888 replies with a RST/ACK when it receives the SYN request. In my program, it shows that it receives a new message, and it does not reply with any message. Is there any way to actually bind to that port (and prevent the OS from handling it)? Here is part of my code; I have cut the error checking for easier reading. sockfd = socket(AF_INET, SOCK_RAW, IPPROTO_TCP); int tmp = 1; const int *val = &tmp; setsockopt (sockfd, IPPROTO_IP, IP_HDRINCL, val, sizeof (tmp)); servaddr.sin_family = AF_INET; servaddr.sin_addr.s_addr = htonl(INADDR_ANY); servaddr.sin_port = htons(8888); bind(sockfd, (struct sockaddr*)&servaddr, sizeof(servaddr)); //call recv in loop

    Read the article

  • Factorial in Prolog and C++

    - by Joshua Green
    I would like to work out a number's factorial. My factorial rule is in a Prolog file and I am connecting it to a C++ code. Can someone tell me what is wrong with my C++ interface please? % factorial.pl factorial( 1, 1 ):- !. factorial( X, Fac ):- X > 1, Y is X - 1, factorial( Y, New_Fac ), Fac is X * New_Fac. // factorial.cpp # headerfiles term_t t1; term_t t2; term_t goal_term; functor_t goal_functor; int main( int argc, char** argv ) { argc = 4; argv[0] = "libpl.dll"; argv[1] = "-G32m"; argv[2] = "-L32m"; argv[3] = "-T32m"; PL_initialise(argc, argv); if ( !PL_initialise(argc, argv) ) PL_halt(1); PlCall( "consult(swi('plwin.rc'))" ); PlCall( "consult('factorial.pl')" ); cout << "Enter your factorial number: "; long n; cin >> n; PL_put_integer( t1, n ); t1 = PL_new_term_ref(); t2 = PL_new_term_ref(); goal_term = PL_new_term_ref(); goal_functor = PL_new_functor( PL_new_atom("factorial"), 2 ); PL_put_atom( t1, t2 ); PL_cons_functor( goal_term, goal_functor, t1, t2 ); PL_halt( PL_toplevel() ? 0 : 1 ); }

    Read the article

  • Why does setMinimumSize set the minimum height but not the width?

    - by Roman
    Here is my code: import javax.swing.*; import java.awt.*; public class PanelModel { public static void main(String[] args) { JFrame frame = new JFrame("Colored Trails"); frame.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE); JPanel mainPanel = new JPanel(); mainPanel.setLayout(new BoxLayout(mainPanel, BoxLayout.Y_AXIS)); JPanel firstPanel = new JPanel(); firstPanel.setLayout(new GridLayout(4, 4)); firstPanel.setMaximumSize(new Dimension(4*100, 4*100)); firstPanel.setMinimumSize(new Dimension(4*100, 4*100)); JButton btn; for (int i=1; i<=4; i++) { for (int j=1; j<=4; j++) { btn = new JButton(); btn.setPreferredSize(new Dimension(100, 100)); firstPanel.add(btn); } } mainPanel.add(firstPanel); frame.add(mainPanel); frame.setSize(520,600); //frame.setMinimumSize(new Dimension(520,600)); frame.setVisible(true); } } When I increase the size of the window (with the mouse) I see that my panel does not increase its size. That is the expected behavior (because I set the maximum size of the panel). However, when I decrease the size of the window, I see that the width of the panel decreases too (while the height stays constant). So setMinimumSize only works partially. Why is that?

    Read the article

  • Create a board for a game with event support in WPF

    - by netmajor
    How can I create a board for a simple game, similar to a chess board, but where the user can dynamically change the number of columns and rows? In the cells I want to insert a pawn symbol, like a small image or just an ellipse or rectangle with a fill. The board should support adding and removing pawns from cells and moving a pawn from one cell to another. My first idea was a Grid. I build it in code-behind, but it is painful to implement events or create the whole board at runtime :/ int size = 12; Grid board = new Grid(); board.ShowGridLines = true; for (int i = 0; i < size;i++ ) { board.ColumnDefinitions.Add(new ColumnDefinition()); board.RowDefinitions.Add(new RowDefinition()); } // computer Rectangle ai = new Rectangle(); ai.Height = 20; ai.Width = 20; ai.AllowDrop = true; ai.Fill = Brushes.Orange; Grid.SetRow(ai, 0); Grid.SetColumn(ai,0); // human Rectangle hum = new Rectangle(); hum.Height = 20; hum.Width = 20; hum.AllowDrop = true; hum.Fill = Brushes.Green; Grid.SetRow(hum,size); Grid.SetColumn(hum,size); board.Children.Add(ai); board.Children.Add(hum); this.Content = board; Is there a way to do the dynamic column and row changes in XAML? Is there a better way to create the board and to implement the events for moving a pawn from one cell to another?

    Read the article

  • How to produce 64 bit masks?

    - by egiakoum1984
    Based on the following simple program, the bitwise left shift operator seems to work only on 32 bits. Is that true? #include <iostream> #include <stdlib.h> using namespace std; int main(void) { long long currentTrafficTypeValueDec; int input; cout << "Enter input:" << endl; cin >> input; currentTrafficTypeValueDec = 1 << (input - 1); cout << currentTrafficTypeValueDec << endl; cout << (1 << (input - 1)) << endl; return 0; } The output of the program: Enter input: 30 536870912 536870912 Enter input: 62 536870912 536870912 How could I produce 64-bit masks?
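    For what it's worth, the shift itself is done in int arithmetic because the literal 1 is an int; making the left operand 64 bits wide before shifting is the usual fix. A minimal sketch of my own (not the poster's program):

        #include <cstdint>
        #include <iostream>

        int main() {
            int input = 62;                                  // example value instead of cin
            std::uint64_t mask = 1ULL << (input - 1);        // shift performed in 64-bit arithmetic
            std::cout << mask << std::endl;                  // prints 2305843009213693952 (2^61)
            return 0;
        }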

    Read the article

  • Get an array of structures from a native DLL to a C# application

    - by PaulH
    I have a C# .NET 2.0 CF project where I need to invoke a method in a native C++ DLL. This native method returns an array of type TableEntry. At the time the native method is called, I do not know how large the array will be. How can I get the table from the native DLL to the C# project? Below is effectively what I have now. // in C# .NET 2.0 CF project [StructLayout(LayoutKind.Sequential)] public struct TableEntry { [MarshalAs(UnmanagedType.LPWStr)] public string description; public int item; public int another_item; public IntPtr some_data; } [DllImport("MyDll.dll", CallingConvention = CallingConvention.Winapi, CharSet = CharSet.Auto)] public static extern bool GetTable(ref TableEntry[] table); SomeFunction() { TableEntry[] table = null; bool success = GetTable( ref table ); // at this point, the table is empty } // In Native C++ DLL std::vector< TABLE_ENTRY > global_dll_table; extern "C" __declspec(dllexport) bool GetTable( TABLE_ENTRY* table ) { table = &global_dll_table.front(); return true; } Thanks, PaulH

    Read the article

  • Mapping an instance of IList in NHibernate

    - by Martin Kirsche
    I'm trying to map a parent-child relationship using NHibernate (2.1.2), MySql.Data (6.2.2) and MySQL Server (5.1). I figured out that this must be done using a <bag> in the mapping file. I built a test app which runs without yielding any errors and does an insert for each entry, but somehow the foreign key inside the children table (ParentId) is always empty (null). Here are the important parts of my code... Parent public class Parent { public virtual int Id { get; set; } public virtual IList<Child> Children { get; set; } } <class name="Parent"> <id name="Id"> <generator class="native"/> </id> <bag name="Children" cascade="all"> <key column="ParentId"/> <one-to-many class="Child"/> </bag> </class> Child public class Child { public virtual int Id { get; set; } } <class name="Child"> <id name="Id"> <generator class="native"/> </id> </class> Program using (ISession session = sessionFactory.OpenSession()) { session.Save( new Parent() { Children = new List<Child>() { new Child(), new Child() } }); } Any ideas what I did wrong?

    Read the article

  • Bitwise OR of constants

    - by ryyst
    While reading some documentation here, I came across this: unsigned unitFlags = NSYearCalendarUnit | NSMonthCalendarUnit | NSDayCalendarUnit; NSDateComponents *comps = [gregorian components:unitFlags fromDate:date]; I have no idea how this works. I read up on the bitwise operators in C, but I do not understand how you can fit three (or more!) constants inside one int and later somehow extract them back from the int. Digging further down the documentation, I also found this, which is probably related: typedef enum { kCFCalendarUnitEra = (1 << 1), kCFCalendarUnitYear = (1 << 2), kCFCalendarUnitMonth = (1 << 3), kCFCalendarUnitDay = (1 << 4), kCFCalendarUnitHour = (1 << 5), kCFCalendarUnitMinute = (1 << 6), kCFCalendarUnitSecond = (1 << 7), kCFCalendarUnitWeek = (1 << 8), kCFCalendarUnitWeekday = (1 << 9), kCFCalendarUnitWeekdayOrdinal = (1 << 10), } CFCalendarUnit; How do the (1 << 3) statements / variables work? I'm sorry if this is trivial, but could someone please enlighten me by either explaining or maybe posting a link to a good explanation? Thanks! -- ry
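    The idea being asked about is the bit-flag idiom: each constant occupies its own bit, so OR-ing them packs several flags into one integer and AND-ing tests whether a given flag is present. A small self-contained sketch in plain C++ (mirroring the pattern, not the Cocoa API):

        #include <iostream>

        enum CalendarUnit {
            UnitYear  = (1 << 2),   // binary 000100
            UnitMonth = (1 << 3),   // binary 001000
            UnitDay   = (1 << 4)    // binary 010000
        };

        int main() {
            unsigned flags = UnitYear | UnitMonth | UnitDay;   // 000100 | 001000 | 010000 = 011100

            if (flags & UnitMonth)                             // test a single bit
                std::cout << "month flag is set\n";
            if (!(flags & (1 << 5)))                           // a bit that was never OR-ed in
                std::cout << "hour flag is not set\n";
            return 0;
        }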

    Read the article

  • Why can I call a non-const member function pointer from a const method?

    - by sdg
    A co-worker asked about some code like this that originally had templates in it. I have removed the templates, but the core question remains: why does this compile OK? #include <iostream> class X { public: void foo() { std::cout << "Here\n"; } }; typedef void (X::*XFUNC)() ; class CX { public: explicit CX(X& t, XFUNC xF) : object(t), F(xF) {} void execute() const { (object.*F)(); } private: X& object; XFUNC F; }; int main(int argc, char* argv[]) { X x; const CX cx(x,&X::foo); cx.execute(); return 0; } Given that CX is a const object, and its member function execute is const, therefore inside CX::execute the this pointer is const. But I am able to call a non-const member function through a member function pointer. Are member function pointers a documented hole in the const-ness of the world? What (presumably obvious to others) issue have we missed?
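    One way to see why this is allowed, without member-function pointers at all: inside a const member function the const applies to the members themselves, and a reference member is already unmodifiable, so the X it refers to is not made const. A minimal sketch of my own illustrating that point (assumes C++11 for the in-class initializer):

        #include <iostream>

        class X {
        public:
            void foo() { ++calls; std::cout << "called " << calls << " time(s)\n"; }  // non-const
            int calls = 0;
        };

        class CX {
        public:
            explicit CX(X& t) : object(t) {}
            void execute() const { object.foo(); }   // fine: object has type X&, not const X&
        private:
            X& object;
        };

        int main() {
            X x;
            const CX cx(x);
            cx.execute();
            cx.execute();        // the referenced x really is modified through the const CX
            return 0;
        }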

    Read the article

  • Find Adjacent Nodes in A* Path-Finding (C++)

    - by Infinity James
    Is there a better way to handle my FindAdjacent() function for my A* algorithm? It's awfully messy, and it doesn't set the parent node correctly. When it tries to find the path, it loops infinitely because a node's parent ends up having that node as its own parent, so the two parents always point at each other. Any help would be amazing. This is my function: void AStarImpl::FindAdjacent(Node* pNode) { for (int i = -1; i <= 1; i++) { for (int j = -1; j <= 1; j++) { if (pNode->mX != Map::GetInstance()->mMap[pNode->mX + i][pNode->mY + j].mX || pNode->mY != Map::GetInstance()->mMap[pNode->mX + i][pNode->mY + j].mY) { if (pNode->mX + i <= 14 && pNode->mY + j <= 14) { if (pNode->mX + i >= 0 && pNode->mY + j >= 0) { if (Map::GetInstance()->mMap[pNode->mX + i][pNode->mY + j].mTypeID != NODE_TYPE_SOLID) { if (find(mOpenList.begin(), mOpenList.end(), &Map::GetInstance()->mMap[pNode->mX + i][pNode->mY + j]) == mOpenList.end()) { Map::GetInstance()->mMap[pNode->mX+i][pNode->mY+j].mParent = &Map::GetInstance()->mMap[pNode->mX][pNode->mY]; mOpenList.push_back(&Map::GetInstance()->mMap[pNode->mX+i][pNode->mY+j]); } } } } } } } mClosedList.push_back(&Map::GetInstance()->mMap[pNode->mX][pNode->mY]); } If you'd like any more code, just ask and I can post it.
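    A side note rather than a definitive fix: the loop above only checks the open list before re-parenting a neighbour, so a node already on the closed list can be given a new parent, which is one common way two nodes end up pointing at each other. Below is a small self-contained sketch of a neighbour loop over a hypothetical 15x15 grid (my own Node/grid types, not the poster's Map singleton) that checks bounds first and never re-parents a node that has already been seen:

        #include <vector>
        #include <algorithm>

        const int kWidth = 15, kHeight = 15;
        const int NODE_TYPE_SOLID = 1;

        struct Node {
            int x, y, type;
            Node* parent;
        };

        Node grid[kWidth][kHeight];                      // stands in for Map::mMap

        void FindAdjacent(Node* n, std::vector<Node*>& open, std::vector<Node*>& closed) {
            for (int dx = -1; dx <= 1; ++dx) {
                for (int dy = -1; dy <= 1; ++dy) {
                    if (dx == 0 && dy == 0) continue;                  // skip the node itself
                    int nx = n->x + dx, ny = n->y + dy;
                    if (nx < 0 || ny < 0 || nx >= kWidth || ny >= kHeight) continue;
                    Node* nb = &grid[nx][ny];
                    if (nb->type == NODE_TYPE_SOLID) continue;
                    bool onOpen   = std::find(open.begin(),   open.end(),   nb) != open.end();
                    bool onClosed = std::find(closed.begin(), closed.end(), nb) != closed.end();
                    if (onOpen || onClosed) continue;                  // never re-parent a visited node
                    nb->parent = n;
                    open.push_back(nb);
                }
            }
            closed.push_back(n);
        }

        int main() {
            for (int x = 0; x < kWidth; ++x)
                for (int y = 0; y < kHeight; ++y) {
                    grid[x][y].x = x; grid[x][y].y = y;
                    grid[x][y].type = 0; grid[x][y].parent = 0;
                }
            std::vector<Node*> open, closed;
            FindAdjacent(&grid[7][7], open, closed);      // fills open with the 8 neighbours of (7,7)
            return 0;
        }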

    Read the article

  • Mutual class instances in C++

    - by SepiDev
    Hi guys. What is the issue with this code? Here we have two files: classA.h and classB.h classA.h: #ifndef _class_a_h_ #define _class_a_h_ #include "classB.h" class B; //???? class A { public: A() { ptr_b = new B(); //???? } virtual ~A() { if(ptr_b) delete ptr_b; //???? num_a = 0; } int num_a; B* ptr_b; //???? }; #endif //_class_a_h_ classB.h: #ifndef _class_b_h_ #define _class_b_h_ #include "classA.h" class A; //???? class B { public: B() { ptr_a = new A(); //???? num_b = 0; } virtual ~B() { if(ptr_a) delete ptr_a; //???? } int num_b; A* ptr_a; //???? }; #endif //_class_b_h_ when I try to compile it, the compiler (g++) says: classB.h: In constructor ‘B::B()’: classB.h:12: error: invalid use of incomplete type ‘struct A’ classB.h:6: error: forward declaration of ‘struct A’ classB.h: In destructor ‘virtual B::~B()’: classB.h:16: warning: possible problem detected in invocation of delete operator: classB.h:16: warning: invalid use of incomplete type ‘struct A’ classB.h:6: warning: forward declaration of ‘struct A’ classB.h:16: note: neither the destructor nor the class-specific operator delete will be called, even if they are declared when the class is defined.
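    For reference, the pattern usually suggested for this kind of mutual dependency is: keep only a forward declaration of the other class in each header (no #include of the other header), declare the constructor and destructor in the header, and define them in a .cpp file where both classes are complete types. A sketch of that layout, collapsed into one file with comments marking where the header/source boundaries would be (note it also avoids the A-constructs-B-constructs-A recursion of the original):

        #include <cstddef>

        // --- classA.h -----------------------------------------------------
        class B;                       // forward declaration only
        class A {
        public:
            A();
            virtual ~A();
            int num_a;
            B* ptr_b;
        };

        // --- classB.h -----------------------------------------------------
        class A;                       // forward declaration only
        class B {
        public:
            B();
            virtual ~B();
            int num_b;
            A* ptr_a;
        };

        // --- classA.cpp (would #include "classA.h" and "classB.h") --------
        A::A() : num_a(0), ptr_b(NULL) {}   // not new B(): that would recurse with B::B()
        A::~A() { delete ptr_b; }           // B is a complete type here, so delete is safe

        // --- classB.cpp (would #include "classA.h" and "classB.h") --------
        B::B() : num_b(0), ptr_a(new A()) {}
        B::~B() { delete ptr_a; }

        int main() {
            B b;                            // owns one A; destructors clean up
            return 0;
        }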

    Read the article

  • Download dynamic file with GWT

    - by Maksim
    I have a GWT page where the user enters data (start date, end date, etc.), then this data goes to the server via an RPC call. On the server I want to generate an Excel report with POI and let the user save that file on their local machine. This is my test code to stream the file back to the client, but for some reason it does not work: public class ReportsServiceImpl extends RemoteServiceServlet implements ReportsService { public String myMethod(String s) { File f = new File("/excelTestFile.xls"); String filename = f.getName(); int length = 0; try { HttpServletResponse resp = getThreadLocalResponse(); ServletOutputStream op = resp.getOutputStream(); ServletContext context = getServletConfig().getServletContext(); resp.setContentType("application/octet-stream"); resp.setContentLength((int) f.length()); resp.setHeader("Content-Disposition", "attachment; filename*=\"utf-8''" + filename + ""); byte[] bbuf = new byte[1024]; DataInputStream in = new DataInputStream(new FileInputStream(f)); while ((in != null) && ((length = in.read(bbuf)) != -1)) { op.write(bbuf, 0, length); } in.close(); op.flush(); op.close(); } catch (Exception ex) { ex.printStackTrace(); } return "Server says: " + filename; } } I've read somewhere on the internet that you can't stream a file with RPC and that I have to use a servlet for that. Is there any example of how to use a servlet and how to call that servlet from ReportsServiceImpl?

    Read the article

  • Incorrect logic flow in a function that gets coordinates for a Sudoku game?

    - by igor
    This function of mine keeps failing an autograder, and I am trying to figure out whether there is a problem with its logic flow. Any thoughts? Basically, if the row is wrong, "Invalid row" should be printed, clearInput() called, and false returned. When y is wrong, "Invalid column" should be printed, clearInput() called, and false returned. When both are wrong, only "Invalid row" is to be printed (and still clearInput() called and false returned). Obviously, when row and y are correct, print no error and return true. My function gets through most of the test cases but fails towards the end, and I'm a little lost as to why. bool getCoords(int & x, int & y) { char row; bool noError=true; cin>>row>>y; row=toupper(row); if(row>='A' && row<='I' && isalpha(row) && y>=1 && y<=9) { x=row-'A'; y=y-1; return true; } else if(!(row>='A' && row<='I')) { cout<<"Invalid row"<<endl; noError=false; clearInput(); return false; } else { if(noError) { cout<<"Invalid column"<<endl; } clearInput(); return false; } }

    Read the article

  • C system calls open / read / write / close problem.

    - by Andrei Ciobanu
    Hello, given the following code (it's supposed to write "helloworld" in a "helloworld" file, and then read the text): #include <fcntl.h> #include <sys/types.h> #include <sys/stat.h> #define FNAME "helloworld" int main(){ int filedes, nbytes; char buf[128]; /* Creates a file */ if((filedes=open(FNAME, O_CREAT | O_EXCL | O_WRONLY | O_APPEND, S_IRUSR | S_IWUSR)) == -1){ write(2, "Error1\n", 7); } /* Writes hello world to file */ if(write(filedes, FNAME, 10) != 10) write(2, "Error2\n", 7); /* Close file */ close(filedes); if((filedes = open(FNAME, O_RDONLY))==-1) write(2, "Error3\n", 7); /* Prints file contents on screen */ if((nbytes=read(filedes, buf, 128)) == -1) write(2, "Error4\n", 7); if(write(1, buf, nbytes) != nbytes) write(2, "Error5\n", 7); /* Close file after read */ close(filedes); return (0); } The first time I run the program, the output is: helloworld After that, every time I run the program, the output is: Error1 Error2 helloworld I don't understand why the text isn't appended, as I've specified the O_APPEND flag. Is it because I've included O_CREAT? If the file is already created, shouldn't O_CREAT be ignored?
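    Not the poster's code, but a minimal sketch of the point in question: O_CREAT on its own is indeed ignored when the file exists, but O_CREAT | O_EXCL is specified to fail in that case, which is why the second run reports Error1 (and then Error2, since filedes is -1). Dropping O_EXCL keeps create-if-missing semantics and lets O_APPEND add to the file on every run:

        #include <fcntl.h>
        #include <unistd.h>
        #include <sys/stat.h>

        int main(void) {
            /* create if missing, append otherwise; no O_EXCL */
            int fd = open("helloworld", O_CREAT | O_WRONLY | O_APPEND, S_IRUSR | S_IWUSR);
            if (fd == -1) {
                write(2, "open failed\n", 12);
                return 1;
            }
            if (write(fd, "helloworld", 10) != 10)
                write(2, "write failed\n", 13);
            close(fd);
            return 0;
        }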

    Read the article

  • How to handle image/gif type response on client side using GWT

    - by user200340
    Hi all, I have a question about how to handle an image/gif response on the client side; any suggestion would be great. There is a service which is responsible for retrieving an image (only one at a time at the moment) from the database. The code is something like: JDBC Connection Construct MYSQL query. Execute query If has ResultSet, retrieve first one { //save image into Blob image, "img" is the only entity in the image table. image = rs.getBlob("img"); } response.setContentType("image/gif"); //set response type InputStream in = image.getBinaryStream(); //output Blob image to InputStream int bufferSize = 1024; //buffer size byte[] buffer = new byte[bufferSize]; //initial buffer int length =0; //read length data from inputstream and store into buffer while ((length = in.read(buffer)) != -1) { out.write(buffer, 0, length); //write into ServletOutputStream } in.close(); out.flush(); //write out The code on the client side: .... imgform.setAction(GWT.getModuleBaseURL() + "serviceexample/ImgRetrieve"); .... ClickListener { OnClick, then imgform.submit(); } formHandler { onSubmit, form validation onSubmitComplete ??????? //handle response, and display image Here is my question: I tried Image img = new Image(GWT.getHostPageBaseURL() +"serviceexample/ImgRetrieve"); img.setSize("300", "300"); imgpanel.add(img); but I only got a non-displaying image of size 300x300. } So how should I handle the response in this case? Thanks.

    Read the article

  • Sending files from server to client in Java

    - by Lee Jacobson
    Hi, I'm trying to find a way to send files of different file types from a server to a client. I have this code on the server to put the file into a byte array: File file = new File(resourceLocation); byte[] b = new byte[(int) file.length()]; FileInputStream fileInputStream; try { fileInputStream = new FileInputStream(file); try { fileInputStream.read(b); } catch (IOException ex) { System.out.println("Error, Can't read from file"); } for (int i = 0; i < b.length; i++) { fileData += (char)b[i]; } } catch (FileNotFoundException e) { System.out.println("Error, File Not Found."); } I then send fileData as a string to the client. This works fine for txt files but when it comes to images I find that although it creates the file fine with the data in, the image won't open. I'm not sure if I'm even going about this the right way. Thanks for the help.

    Read the article

  • Arduino variable going blank after the first pass

    - by user541597
    I have an Arduino sketch that takes a time, timet, and when that timet is equal to the current time it sets the new timet to timet + 2. For example: char* convert(char* x, String y) { int hour; int minute; sscanf(x, "%d:%d", &hour, &minute); char buf[6]; if (y == "6") { if (hour > 17) { hour = (hour+6)%24; snprintf(buf, 10, "%d:%d", hour, minute ); } else if (hour < 18) { //hour = hour + 6; minute = (minute + 2); snprintf(buf, 10, "%d:%d", hour, minute); } } if (y == "12") { if (hour > 11) { hour = (hour+12)%24; snprintf(buf, 10, "%d:%d", hour, minute ); } else if (hour < 12) { hour = hour + 12; snprintf(buf, 10, "%d:%d", hour, minute); } } if (y == "24") { hour = (hour+24)%24; snprintf(buf, 10, "%d:%d", hour, minute ); } return buf; } The sketch starts, for example, at 1:00am. timet is set to 1:02; at system time 1:02, timet is equal to the system time. My loop looks like this: if (timet == currenttime) { timet = convert(timet) } Whenever I check the value of timet it should equal 1:04; I get the correct value on the first run after convert executes, but every time after that my timet value is blank. I tried changing the code so that instead of using the if check I only run the convert function when I send, for example, t through the serial monitor. That works fine and outputs the correct timet after the convert function executes, so I figured the problem is in the if check... Any ideas?
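    Two things stand out in convert() that could explain a value going blank, offered as a hunch rather than a confirmed diagnosis: it returns a pointer to the local array buf, which no longer exists once the function returns, and it passes a size of 10 to snprintf for a 6-byte buffer. A hedged sketch (plain C++, hypothetical helper name, not Arduino-specific) of letting the caller own the buffer and passing its real size:

        #include <cstdio>
        #include <cstddef>

        // Adds delta minutes to an "H:MM" string and writes the result into a
        // caller-supplied buffer, so nothing dangles after the call returns.
        void add_minutes(const char* in, int delta, char* out, std::size_t outsize) {
            int hour = 0, minute = 0;
            std::sscanf(in, "%d:%d", &hour, &minute);
            minute += delta;
            hour = (hour + minute / 60) % 24;
            minute %= 60;
            std::snprintf(out, outsize, "%02d:%02d", hour, minute);
        }

        int main() {
            char next[6];                            // "HH:MM" plus the terminator
            add_minutes("1:02", 2, next, sizeof next);
            std::printf("%s\n", next);               // prints 01:04
            return 0;
        }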

    Read the article

  • Declaration, allocation and assignment of an array of pointers to function pointers

    - by manneorama
    Hello Stack Overflow! This is my first post, so please be gentle. I've been playing around with C from time to time in the past. Now I've gotten to the point where I've started a real project (a 2D graphics engine using SDL, but that's irrelevant for the question), to be able to say that I have some real C experience. Yesterday, while working on the event system, I ran into a problem which I couldn't solve. There's this typedef, //the void parameter is really an SDL_Event*. //but that is irrelevant for this question. typedef void (*event_callback)(void); which specifies the signature of a function to be called on engine events. I want to be able to support multiple event_callbacks, so an array of these callbacks would be an idea, but I do not want to limit the number of callbacks, so I need some sort of dynamic allocation. This is where the problem arose. My first attempt went like this: //initial size of callback vector static const int initial_vecsize = 32; //our event callback vector static event_callback* vec = 0; //size static unsigned int vecsize = 0; void register_event_callback(event_callback func) { if (!vec) __engine_allocate_vec(vec); vec[vecsize++] = func; //error here! } static void __engine_allocate_vec(engine_callback* vec) { vec = (engine_callback*) malloc(sizeof(engine_callback*) * initial_vecsize); } First of all, I have omitted some error checking as well as the code that reallocates the callback vector when the number of callbacks exceeds the vector size. However, when I run this code, the program crashes as described in the code. I'm guessing a segmentation fault, but I can't be sure since no output is given. I'm also guessing that the error comes from a somewhat flawed understanding of how to declare and allocate an array of pointers to function pointers. Please Stack Overflow, guide me.
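    One observation, offered as a sketch rather than a definitive fix: __engine_allocate_vec receives a copy of the pointer, so the assignment inside it never reaches the file-scope vec, which stays null when register_event_callback dereferences it. Below is a minimal self-contained version (my own example) where the allocating function takes the address of the pointer, so the caller's pointer is actually updated:

        #include <cstdio>
        #include <cstdlib>

        typedef void (*event_callback)(void);

        static event_callback* vec = NULL;     // growable array of callbacks
        static unsigned int vecsize = 0;       // number of registered callbacks
        static unsigned int veccap  = 0;       // current capacity

        // Takes the address of the caller's pointer so the new block is visible to it.
        static void grow_vec(event_callback** v, unsigned int* cap) {
            unsigned int newcap = (*cap == 0) ? 32 : (*cap * 2);
            *v = (event_callback*)std::realloc(*v, newcap * sizeof(event_callback));
            *cap = newcap;
        }

        void register_event_callback(event_callback func) {
            if (vecsize == veccap)
                grow_vec(&vec, &veccap);       // pass &vec, not vec
            vec[vecsize++] = func;
        }

        static void on_event(void) { std::puts("event!"); }

        int main() {
            register_event_callback(&on_event);
            register_event_callback(&on_event);
            for (unsigned int i = 0; i < vecsize; ++i)
                vec[i]();                      // prints "event!" twice
            std::free(vec);
            return 0;
        }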

    Read the article
