Search Results

Search found 8875 results on 355 pages for 'mime types'.

  • "Invalid assignment" error from == operator

    - by Tom
    I was trying to write a simple method:

        boolean validate(MyObject o) {
            // propertyA and propertyB are not primitive types.
            return o.getPropertyA() == null && o.getPropertyB() == null;
        }

    and got a strange error on the == null part: "Syntax error on token ==. Invalid assignment operator." Maybe my Java is rusty after a season in PL/SQL, so I tried a simpler example:

        Integer i = 4;
        i == null; // compile error: Syntax error on token ==. Invalid assignment operator.

        Integer i2 = 4;
        if (i == null); // No problem

    How can this be? I'm using jdk160_05. To clarify: I'm not trying to assign anything, just do an && operation between two boolean values. And I don't want to do this:

        if (o.propertyA() == null && o.propertyB() == null) {
            return true;
        } else {
            return false;
        }
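
    A hedged explanation: in Java a bare comparison is an expression, not a statement, so a line consisting only of "i == null;" is illegal; the parser assumes an assignment was intended, which produces the misleading "Invalid assignment operator" message. The comparison is fine anywhere its value is actually consumed, as in this minimal sketch:

        boolean validate(MyObject o) {
            // Legal: the comparison's value is consumed by return.
            return o.getPropertyA() == null && o.getPropertyB() == null;
        }

        // Illegal as a standalone statement: only assignments, increments,
        // method calls, and object creation may stand alone in Java.
        // i == null;

    If the validate method shown above still triggers the error on the return line, something earlier in the file (an unbalanced brace, for instance) may be making the parser see the comparison in statement position.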

  • Interesting Scala typing solution, doesn't work in 2.7.7?

    - by djc
    I'm trying to build some image algebra code that can work with images (basically a linear pixel buffer plus dimensions) that have different types for the pixel. To get this to work, I've defined a parametrized Pixel trait with a few methods that should be usable with any Pixel subclass. (For now, I'm only interested in operations that work on the same Pixel type.) Here it is:

        trait Pixel[T <: Pixel[T]] {
          def mul(v: Double): T
          def max(v: T): T
          def div(v: Double): T
          def div(v: T): T
        }

    Now I define a single Pixel type whose storage is based on three doubles (basically RGB 0.0-1.0), which I've called TripleDoublePixel:

        class TripleDoublePixel(v: Array[Double]) extends Pixel[TripleDoublePixel] {
          var data: Array[Double] = v

          def this() = this(Array(0.0, 0.0, 0.0))

          def toString(): String =
            "(" + data(0) + ", " + data(1) + ", " + data(2) + ")"

          def increment(v: TripleDoublePixel) {
            data(0) += v.data(0)
            data(1) += v.data(1)
            data(2) += v.data(2)
          }

          def mul(v: Double): TripleDoublePixel =
            new TripleDoublePixel(data.map(x => x * v))

          def div(v: Double): TripleDoublePixel =
            new TripleDoublePixel(data.map(x => x / v))

          def div(v: TripleDoublePixel): TripleDoublePixel = {
            var tmp = new Array[Double](3)
            tmp(0) = data(0) / v.data(0)
            tmp(1) = data(1) / v.data(1)
            tmp(2) = data(2) / v.data(2)
            new TripleDoublePixel(tmp)
          }

          def max(v: TripleDoublePixel): TripleDoublePixel = {
            val lv = data(0) * data(0) + data(1) * data(1) + data(2) * data(2)
            val vv = v.data(0) * v.data(0) + v.data(1) * v.data(1) + v.data(2) * v.data(2)
            if (lv > vv) this else v
          }
        }

    Now I want to write code that uses this without having to know what type the pixels are. For example:

        def idiv[T](a: Image[T], b: Image[T]) {
          for (i <- 0 until a.data.size) {
            a.data(i) = a.data(i).div(b.data(i))
          }
        }

    Unfortunately, this doesn't compile:

        (fragment of lindet-gen.scala):145: error: value div is not a member of T
            a.data(i) = a.data(i).div(b.data(i))

    I was told in #scala that this worked for someone else, but that was on 2.8. I've tried to get 2.8-rc1 going, but it doesn't compile for me. Is there any way to get this to work in 2.7.7?
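
    A hedged sketch of the usual fix: give idiv the same F-bound the trait itself uses, so the compiler knows T actually has div. This form relies on no 2.8 features and should work on 2.7.7 as well:

        // Assumes Image[T] exposes `data` as an indexed, mutable sequence.
        def idiv[T <: Pixel[T]](a: Image[T], b: Image[T]) {
          for (i <- 0 until a.data.size)
            a.data(i) = a.data(i).div(b.data(i))
        }

    The unconstrained [T] in the original erases everything known about T, which is exactly why div is "not a member of T".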

  • Do I need to use locking for integers in C++ threads?

    - by Shane MacLaughlin
    The title says it all, really. If I am accessing a single integer type (e.g. long, int, bool, etc.) from multiple threads, do I need to use a synchronisation mechanism such as a mutex to lock it? My understanding is that, being atomic types, I don't need to lock access to a single integer, but I see a lot of code out there that does use locking. Profiling such code shows that there is a significant performance hit for using locks, so I'd rather not. So if the item I'm accessing is a bus-width integer (e.g. 4 bytes on a 32-bit processor), do I need to lock access to it when it is being used across multiple threads? Put another way, if thread A is writing to integer variable X at the same time as thread B is reading from the same variable, is it possible that thread B could end up with a few bytes of the previous value mixed in with a few bytes of the value being written? Is this architecture dependent, e.g. OK for 4-byte integers on 32-bit systems but unsafe for 8-byte integers on 64-bit systems? Edit: Just saw this related post which helps a fair bit.
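
    A hedged sketch of the modern answer: aligned word-sized loads and stores don't tear on mainstream hardware, but without synchronisation the compiler is still free to cache the value in a register or reorder accesses, so visibility remains a problem even when tearing isn't. C++11's std::atomic gives tear-free, visible access without a mutex:

        #include <atomic>

        std::atomic<int> x(0);

        void writer() { x.store(42); }      // no mutex needed
        int  reader() { return x.load(); }  // never observes a half-written value

    Before C++11 (as when this was asked), the usual options were platform primitives such as InterlockedExchange on Win32 or the gcc __sync builtins.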

  • How to handle 30k files in a project which requires them?

    - by Jeremiah
    Visual Studio 2010 RC - Silverlight Application. We have a library of images that we need access to. They are given to us by a vendor (through an installer) and they are not in a database; they are files in a folder (a very large monster of a folder). We do not control when the images change, so the vendor needs to be able to override them individually. We get updates frequently enough from this vendor to say that these images change "randomly" and without our (programmers') knowledge.

    The problem: I don't want 30K images in SVN. Heck, I don't even want to imagine them in my Solution. However, our application requires them in order to run properly. So our build/staging servers need access to these images (we have two build servers).

    The question: how would you handle it when your application will not work as specified without access to each of 30K images, and you don't control when those images change? I do not want a crazy large SVN repository. Because I don't know when any of these images change, I really don't want them in my solution (and definitely do not want a large solution, either). I also don't want a bunch of manual steps to perform every time these images change. Our mantra, up to this point, has always been: any developer can download from SVN, compile, and run our app. These images are going to kill that mantra. I'm tempted to make a WCF service that will return images if they exist and a dummy image if they don't. That way all dev boxes will get a dummy image and our build/staging/production boxes will get real images (they are the machines that actually have the vendor's image installer run on them). This has to be a solved problem. What have other people done to handle these types of problems? I'm open to suggestions.
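
    A hedged sketch of the fallback-service idea floated in the question; the paths, names, and placeholder file are assumptions for illustration only:

        using System.IO;
        using System.ServiceModel;
        using System.ServiceModel.Web;

        [ServiceContract]
        public class ImageService
        {
            // Hypothetical locations; dev boxes would simply lack the vendor folder.
            private const string ImageRoot   = @"C:\VendorImages";
            private const string Placeholder = @"C:\App\placeholder.png";

            [OperationContract]
            [WebGet(UriTemplate = "images/{name}")]
            public Stream GetImage(string name)
            {
                WebOperationContext.Current.OutgoingResponse.ContentType = "image/png";
                var path = Path.Combine(ImageRoot, name);
                return File.OpenRead(File.Exists(path) ? path : Placeholder);
            }
        }

    With this shape, nothing image-related ever enters SVN; only the environments with the vendor installer serve real content.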

  • RIA Service - without database?

    - by Heko
    Hello! I need to write a RIA service to call Java web services from a Silverlight 3.0 app. I'm testing how things work, and in my Web app I have a MyData class with two properties (int ID, string Text):

        namespace SilverlightApplication1.Web
        {
            public class MyData
            {
                [Key]
                public int ID { get; set; }
                public string Text { get; set; }
            }
        }

    Then I wrote a simple DomainService:

        [EnableClientAccess()]
        public class MyService : DomainService
        {
            public IQueryable<MyData> GetMyData(string Url)
            {
                // here I will call my web service
                List<MyData> result = new List<MyData>();
                result.Add(new MyData { ID = 1, Text = Url });
                return result.AsQueryable();
            }
        }

    How can I get the data into my SL app? Right now I have this:

        namespace SilverlightApplication1
        {
            public partial class MainPage : UserControl
            {
                public MainPage()
                {
                    InitializeComponent();
                    MyContext context = new MyContext();
                }
            }
        }

    I called Load, but nothing works (exceptions, or nulls). I tried the Invoke annotation, but MyData is not a TEntity, and I can't use strings or other simple types either. I'm reading and reading posts and nothing works like it should. Any help would be really appreciated. Thank you!
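
    A hedged sketch of the client side: RIA Services generates a query method and an entity set on the client context (the names GetMyDataQuery and MyDatas below follow its conventions but should be checked against the generated code), and Load is asynchronous, so results must be read in the callback rather than right after the call:

        MyContext context = new MyContext();
        context.Load(context.GetMyDataQuery("http://example.com"), loadOp =>
        {
            if (!loadOp.HasError)
            {
                // The generated entity set; populated only once the load completes.
                foreach (var item in context.MyDatas)
                    System.Diagnostics.Debug.WriteLine(item.Text);
            }
        }, null);

    Reading context.MyDatas synchronously in the constructor would explain the nulls described above.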

  • Database source control with Oracle

    - by borjab
    I have been looking for hours for a way to check a database into source control. My first idea was a program for calculating database diffs and asking all the developers to implement their changes as new diff scripts. Now I find that if I can dump a database into a file, I could check it in and use it as just another type of file.

    The main conditions are:

      - Works for Oracle 9iR2.
      - Human readable, so we can use diff to see the differences (.dmp files don't seem readable).
      - All tables in a batch; we have more than 200 tables.
      - It stores BOTH STRUCTURE AND DATA.
      - It supports CLOB and RAW types.
      - It stores procedures, packages and their bodies, functions, tables, views, indexes, constraints, sequences, and synonyms.
      - It can be turned into an executable script to rebuild the database on a clean machine.
      - Not limited to really small databases (supports at least 200,000 rows).

    It is not easy. I have downloaded a lot of demos that fail in one way or another.

    EDIT: I wouldn't mind alternative approaches, provided they allow us to check a working system against our release DATABASE STRUCTURE AND OBJECTS + DATA in batch mode. By the way, our project has been developed for years; some approaches can be easily implemented when you make a fresh start but seem hard at this point.

    EDIT: To understand the problem better, let's say that some users can sometimes change the config data in the production environment, or developers might create a new field or alter a view without notice in the release branch. I need to be aware of these changes or it will be complicated to merge them into production.
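
    A hedged starting point for the structure half: Oracle 9iR2 ships DBMS_METADATA, which emits human-readable DDL that can be spooled into versionable files; data would still need a separate export (spooled INSERT statements per table, for example):

        -- Sketch: dump DDL for every object in the schema to one diffable file.
        -- REPLACE maps user_objects types like 'PACKAGE BODY' to GET_DDL's
        -- underscore form ('PACKAGE_BODY').
        SET LONG 1000000 PAGESIZE 0 HEADING OFF
        SPOOL schema_ddl.sql
        SELECT DBMS_METADATA.GET_DDL(REPLACE(object_type, ' ', '_'), object_name)
        FROM   user_objects
        WHERE  object_type IN ('TABLE', 'VIEW', 'INDEX', 'SEQUENCE', 'PROCEDURE',
                               'FUNCTION', 'PACKAGE', 'PACKAGE BODY', 'SYNONYM');
        SPOOL OFF

    Run nightly against production and the release branch, the two spool files can be diffed to catch the unannounced view or column changes described in the question.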

  • VS2008 Link Error Using SafeInt3.hpp in 64bit mode.

    - by photo_tom
    I have the below code, which links and runs fine in 32-bit mode:

        #include "safeint3.hpp"

        typedef SafeInt<SIZE_T> SAFE_SIZE_T;

        SAFE_SIZE_T sizeOfCache;
        SAFE_SIZE_T _allocateAmt;

    Here safeint3.hpp is the current version that can be found on CodePlex (SafeInt). For those who are unaware of it, SafeInt is a template class that makes working with different integer types and sizes "safe". To quote the Channel 9 video on it: "it writes the code that you should". Which is my case: I have a class that manages a large in-memory cache of objects (6 GB), and I am very concerned about overflow/underflow issues on my pointers/sizes/other integer variables. For this use, it solves many problems.

    My problem comes when moving from 32-bit dev mode to 64-bit production mode. When I build the app in this mode, I get the following linker warnings:

        1>cachecontrol.obj : warning LNK4006: "bool __cdecl IntrinsicMultiplyUint64(unsigned __int64 const &,unsigned __int64 const &,unsigned __int64 *)" (?IntrinsicMultiplyUint64@@YA_NAEB_K0PEA_K@Z) already defined in ImageInRamCache.obj; second definition ignored
        1>cachecontrol.obj : warning LNK4006: "bool __cdecl IntrinsicMultiplyInt64(__int64 const &,__int64 const &,__int64 *)" (?IntrinsicMultiplyInt64@@YA_NAEB_J0PEA_J@Z) already defined in ImageInRamCache.obj; second definition ignored

    While I understand I can ignore the warning, I would like to either (a) prevent it from occurring or (b) make it disappear so that my QA department doesn't flag it as a problem. After spending some time researching it, I cannot find a way to do either.
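
    A hedged reading of the warning: in the x64 build, those two helpers appear to be compiled as ordinary (non-inline) definitions in the header, so every translation unit that includes safeint3.hpp emits its own copy and the linker reports the duplicates. One sketch of a fix, assuming a local patch to the header is acceptable, is to mark the pair inline so the copies fold silently:

        // In safeint3.hpp (local patch; names taken from the LNK4006 text):
        inline bool IntrinsicMultiplyUint64( const unsigned __int64& a,
                                             const unsigned __int64& b,
                                             unsigned __int64* pRet )
        { /* existing body unchanged */ }

        inline bool IntrinsicMultiplyInt64( const __int64& a,
                                            const __int64& b,
                                            __int64* pRet )
        { /* existing body unchanged */ }

    If the header version in use offers a switch to disable the intrinsic path entirely, defining that macro before the include would be less invasive; the exact macro name varies by release, so check the top of safeint3.hpp.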

  • which one of these is an example of coercion

    - by user1890210
    I have been pondering a multiple-choice question on coercion. One of the four examples (a, b, c, or d) is an example of coercion. I narrowed it down to A or B, but I am having a problem choosing between the two. Can someone please explain why one is coercion and one isn't?

        A) string s = "tomat";
           char c = 'o';
           s = s + c;

    I thought A could be correct because we have two different types, character and string, being added, meaning that c is promoted to string, hence coercion.

        B) double x = 1.0;
           double y = 2.0;
           int i = (int)(x + y);

    I also thought B could be the correct answer because the double (x + y) is being turned into an int to be placed in i. But I thought this could be wrong because it's done actively through the use of (int) rather than passively, as in "int i = x + y".

    I'll list the other two options, even though I believe neither is the correct answer:

        C) char A = 0x20;
           A = A << 1 | 0x01;
           cout << A << endl;

        D) double x = 1.0;
           double y = x + 1;
           return 0;

    I'm not just looking for an answer but an explanation. I have read tons of things on coercion, and A and B both look like the right answer. So why is one correct and the other not?
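
    A hedged note on the distinction: coercion normally means a conversion the compiler performs implicitly, with nothing written in the source; a cast the programmer spells out is explicit conversion, not coercion. By that definition A qualifies and B does not, as this sketch illustrates:

        #include <iostream>
        #include <string>
        using namespace std;

        int main() {
            string s = "tomat";
            char c = 'o';
            s = s + c;            // no cast in the source: the char is absorbed
                                  // into the string addition (coercion)

            double x = 1.0, y = 2.0;
            int i = (int)(x + y); // the programmer requested the conversion
                                  // explicitly, so this is a cast, not coercion

            cout << s << " " << i << endl;  // prints: tomato 3
            return 0;
        }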

  • Unit test approach for generic classes/methods

    - by Greg
    Hi, what's the recommended way to cover unit testing of generic classes/methods? For example (referring to my sample code below), would it be a case of having two or three times the tests, to cover testing the methods with a few different types of TKey and TNode classes? Or is just one class enough?

        public class TopologyBase<TKey, TNode, TRelationship>
            where TNode : NodeBase<TKey>, new()
            where TRelationship : RelationshipBase<TKey>, new()
        {
            // Properties
            public Dictionary<TKey, NodeBase<TKey>> Nodes { get; private set; }
            public List<RelationshipBase<TKey>> Relationships { get; private set; }

            // Constructors
            protected TopologyBase()
            {
                Nodes = new Dictionary<TKey, NodeBase<TKey>>();
                Relationships = new List<RelationshipBase<TKey>>();
            }

            // Methods
            public TNode CreateNode(TKey key)
            {
                var node = new TNode { Key = key };
                Nodes.Add(node.Key, node);
                return node;
            }

            public void CreateRelationship(NodeBase<TKey> parent, NodeBase<TKey> child)
            {
                . . .
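
    A hedged sketch of one common approach: exercise the generic code through a single representative instantiation, and add further instantiations only where behaviour genuinely depends on the type arguments (equality semantics of TKey, for instance). The concrete types below are hypothetical stand-ins:

        using System;
        using NUnit.Framework;

        // Minimal concrete types, assumed for the test's sake.
        class IntNode : NodeBase<int> { }
        class IntRelationship : RelationshipBase<int> { }
        class IntTopology : TopologyBase<int, IntNode, IntRelationship> { }

        [TestFixture]
        public class TopologyBaseTests
        {
            [Test]
            public void CreateNode_StoresNodeUnderItsKey()
            {
                var topology = new IntTopology();
                var node = topology.CreateNode(42);
                Assert.AreSame(node, topology.Nodes[42]);
            }
        }

    Since the compiler already guarantees the same code runs for every T satisfying the constraints, duplicating every test per type parameter usually adds cost without adding coverage.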

  • What C# container is most resource-efficient for existence for only one operation?

    - by ccornet
    I often find myself in a situation where I need to perform an operation on a set of properties. The operation can be anything from checking whether a particular property matches anything in the set to a single iteration of actions. Sometimes the set is dynamically generated when the function is called, some are built with a simple LINQ statement, and other times it is a hard-coded set that will always remain the same. But one constant always exists: the set only exists for one single operation and has no use before or after it.

    My problem is, I have so many points in my application where this is necessary, but I appear to be very, very inconsistent in how I store these sets. Some of them are arrays, some are lists, and just now I've found a couple of linked lists. Now, none of the operations I'm specifically concerned about care about indices, container size, order, or any other functionality bestowed by any of the individual container types. I picked resource efficiency because it's a better criterion than flipping coins. I figured that, since an array's size is fixed up front and it's a very elementary container, it might be my best choice, but I thought it a better idea to ask around. Alternatively, if there's a better choice not out of resource efficiency but strictly as a better fit for this kind of situation, that would be nice as well.
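
    A hedged sketch of one answer: when a collection exists only to be consumed once, exposing it as IEnumerable<T> (via LINQ or an iterator) often beats choosing a container at all, because nothing is materialized beyond what the single pass needs:

        using System;
        using System.Collections.Generic;
        using System.Linq;

        class Demo
        {
            // Produced lazily; no array or list is ever allocated for the set.
            static IEnumerable<string> Properties()
            {
                yield return "Alpha";
                yield return "Beta";
            }

            static void Main()
            {
                bool found = Properties().Any(p => p == "Beta");
                Console.WriteLine(found);  // True
            }
        }

    When the set must actually be materialized (built up imperatively, say), a plain array or List<T> is hard to beat for one pass; the differences between them are negligible at these lifetimes.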

  • What new Unicode functions are there in C++0x?

    - by luiscubal
    It has been mentioned in several sources that C++0x will include better language-level support for Unicode (including types and literals). If the language is going to add these new features, it's only natural to assume that the standard library will as well. However, I am currently unable to find any references to the new standard library. I expected to find answers to these questions:

      - Does the new library provide standard methods to convert UTF-8 to UTF-16, etc.?
      - Does the new library allow writing UTF-8 to files and to the console (and reading it from files and the console)? If so, can we use cout or will we need something else?
      - Does the new library include "basic" functionality such as discovering the byte count and length of a UTF-8 string, or converting to upper/lower case (and does this consider the influence of locales?)

    Finally, are any of these functions available in any popular compilers such as GCC or Visual Studio? I have tried to look for information, but I can't seem to find anything. I am actually starting to think that maybe these things aren't even decided yet (I am aware that C++0x is a work in progress).
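
    For reference, a hedged sketch of what the C++0x drafts add at the language level: new character types with encoded literal prefixes. (On the library side, the drafts also carry codecvt facets for UTF conversions, though compiler support was thin at the time of the question.)

        // New in C++0x: distinct types and guaranteed encodings for literals.
        const char*     u8str  = u8"hello"; // UTF-8 encoded literal
        const char16_t* u16str = u"hello";  // UTF-16 encoded literal
        const char32_t* u32str = U"hello";  // UTF-32 encoded literal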

  • How can I avoid encoding mixups of strings in a C/C++ API?

    - by Frerich Raabe
    I'm working on implementing different APIs in C and C++, and I wondered what techniques are available to keep clients from getting the encoding wrong when receiving strings from the framework or passing them back. For instance, imagine a simple plugin API in C++ which customers can implement to influence translations. It might feature a function like this:

        const char *getTranslatedWord( const char *englishWord );

    Now, let's say that I'd like to enforce that all strings are passed as UTF-8. Of course I'd document this requirement, but I'd like the compiler to enforce the right encoding, maybe by using dedicated types. For instance, something like this:

        class Word {
        public:
            static Word fromUtf8( const char *data ) { return Word( data ); }
            const char *toUtf8() { return m_data; }
        private:
            Word( const char *data ) : m_data( data ) { }
            const char *m_data;
        };

    I could now use this specialized type in the API:

        Word getTranslatedWord( const Word &englishWord );

    Unfortunately, it's easy to make this very inefficient. The Word class lacks proper copy constructors, assignment operators, etc., and I'd like to avoid unnecessary copying of data as much as possible. Also, I see the danger that Word gets extended with more and more utility functions (like length or fromLatin1 or substr) and I'd rather not write Yet Another String Class. I just want a little container which avoids accidental encoding mixups. I wonder whether anybody else has experience with this and can share some useful techniques.

    EDIT: In my particular case, the API is used on Windows and Linux, using MSVC 6 - MSVC 10 on Windows and gcc 3 & 4 on Linux.
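
    A hedged observation on the efficiency worry: since Word holds nothing but a pointer, the compiler-generated copy constructor and assignment operator already copy just that pointer, so passing Word by value or const reference costs the same as passing the raw const char*. A minimal sketch that keeps the wrapper a pure tag (names are illustrative):

        // Zero-overhead, non-owning tag: the only way in is the named factory,
        // which makes the encoding explicit at every call site. Lifetime of the
        // pointed-to buffer remains the caller's responsibility, as with char*.
        class Utf8Tag {
        public:
            static Utf8Tag fromUtf8(const char* data) { return Utf8Tag(data); }
            const char* utf8() const { return m_data; }
        private:
            explicit Utf8Tag(const char* data) : m_data(data) {}
            const char* m_data;  // non-owning
        };

    Being C++98-only, this compiles on compilers as old as MSVC 6 and gcc 3, and having no storage of its own it resists growing into Yet Another String Class.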

  • getting string.substring(N) not to choke when N > string.length

    - by aape
    I'm writing some code that takes a report from the mainframe and converts it to a spreadsheet. They can't edit the code on the MF to give me a delimited file, so I'm stuck dealing with it as fixed width. It's working OK now, but I need to get it more stable before I release it for testing.

    My problem is that any given line of data might have, say, three columns of numbers, each five chars wide, at positions 10, 16, and 22. If on one particular row there's no data for the last two columns, it won't be padded with spaces; rather, the length of the string will be only 14. So I can't just blindly write

        Dim s As String = someStream.ReadLine()
        a = s.Substring(10, 5)
        b = s.Substring(16, 5)
        c = s.Substring(22, 5)

    because it'll choke when it substrings past the length of the string. I know I could test the length of the string before processing each row, and I have automated the filling of some of the variables using a counter and a loop, using counter * theWidthOfTheGivenVariable to jump around. But this project was a dog to start with (come on: turning a report into a spreadsheet?), there are many different types of rows (it's not just a grid), and the code's getting ugly fast. I'd like this to be clean, clear, and maintainable for the poor sucker who gets it after me. If it matters, here's my code so far (it's really crufty at the moment); you can see some of my/its idiocy in the processSection#data subs.

    So, I'm wondering: 1) is there a way baked into .NET to have String.Substring not error when reading past the end of a string, without wrapping it in a try...catch? And 2) would it be appropriate in this situation to write a new string class that inherits from String and has a friendlier substring function in it?

    ETA: Thanks for all the advice and knowledge, everyone. I'll go with the extension. Hopefully one of these years I'll get my chops up enough to pay someone back in kind. :)
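
    A hedged sketch of the extension-method route the poster settled on (String is sealed in .NET, which rules out inheriting from it): a SafeSubstring that clips at the end of the string and pads back out to the requested width, so fixed-width parsing stays uniform. Names are illustrative:

        Imports System.Runtime.CompilerServices

        Module StringExtensions
            ' Returns all spaces past the end, clips short fields, pads to width.
            <Extension()> _
            Public Function SafeSubstring(ByVal s As String, ByVal start As Integer,
                                          ByVal length As Integer) As String
                If s Is Nothing OrElse start >= s.Length Then
                    Return New String(" "c, length)
                End If
                Dim raw = s.Substring(start, Math.Min(length, s.Length - start))
                Return raw.PadRight(length)
            End Function
        End Module

        ' Usage sketch:
        '   a = s.SafeSubstring(10, 5)
        '   b = s.SafeSubstring(16, 5)
        '   c = s.SafeSubstring(22, 5)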

  • Why Is the sender type null when dealing with events

    - by ChloeRadshaw
    From CLR via C#:

        Note: A lot of people wonder why the event pattern requires the sender parameter to always be of type Object. After all, since MailManager will be the only type raising an event with a NewMailEventArgs object, it makes more sense for the callback method to be prototyped like this:

            void MethodName(MailManager sender, NewMailEventArgs e);

        The pattern requires the sender parameter to be of type Object mostly because of inheritance. What if MailManager were used as a base class for SmtpMailManager? In this case, the callback method should have the sender parameter prototyped as SmtpMailManager instead of MailManager, but this can't happen because SmtpMailManager just inherited the NewMail event. So the code that was expecting SmtpMailManager to raise the event must still cast the sender argument to SmtpMailManager. In other words, the cast is still required, so the sender parameter might as well be typed as Object. The next reason for typing the sender parameter as Object is just flexibility. It allows the delegate to be used by multiple types that offer an event that passes a NewMailEventArgs object. For example, a PopMailManager class could use the delegate even if this class were not derived from MailManager.

    I simply cannot understand why the sender is an Object. Why can it not be made generic, so that most of the time we do not need to do the casts?
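
    A short illustration of the book's flexibility argument, sketched from the types it names: because sender is Object, one handler can be wired to events raised by MailManager, SmtpMailManager, or an unrelated PopMailManager, recovering the concrete type only where it matters. A generic sender parameter would instead bind each handler to a single branch of the hierarchy.

        // Types are the book's; the handler is reusable precisely because
        // sender is typed as object.
        void OnNewMail(object sender, NewMailEventArgs e)
        {
            SmtpMailManager smtp = sender as SmtpMailManager;
            if (smtp != null)
            {
                // SMTP-specific handling; other senders fall through.
            }
        }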

  • What are the Limitations for Connecting to an Access Query in Excel

    - by thornomad
    I have an Access 2007 database that has a number of tables, some fairly large (100,000+ records). I have created a union query to pull some of the same types of data from multiple tables into one large query for pivot-table manipulation and reporting. For example:

        SELECT Language FROM Table1
        UNION ALL
        SELECT Language FROM Table2
        UNION ALL
        SELECT Language FROM Table3;

    This works. I quickly found, however, that a union query will not show up when connecting to the data source from Excel 2007. So I created a second query to reference the union query, like so:

        SELECT * FROM [The Above Union Query];

    This query works, and it was initially accessible from Excel. Time passed, and I've added more data. Suddenly, when I connect to my Access database from Excel, my query referencing the union has disappeared. MS Access shows no signs of an issue (data displays in Access), and my other non-union queries show up in Excel 2007, but not the one that references the union. What could be going on? Why did it disappear? I noticed that if I switch some of the referenced tables in the union query to a smaller table (with fewer rows), the query appears in Excel again. At least, I think that's what the difference is; I really can't put my finger on why some of the union queries won't show up and some will. I am stumped and need some guidance. Thanks.

  • Why null reference exception in SetMolePublicInstance?

    - by OldGrantonian
    I get a "null reference" exception in the following line: MoleRuntime.SetMolePublicInstance(stub, receiverType, objReceiver, name, null); The program builds and compiles correctly. There are no complaints about any of the parameters to the method. Here's the specification of SetMolePublicInstance, from the object browser: SetMolePublicInstance(System.Delegate _stub, System.Type receiverType, object _receiver, string name, params System.Type[] parameterTypes) Here are the parameter values for "Locals": + stub {Method = {System.String <StaticMethodUnitTestWithDeq>b__0()}} System.Func<string> + receiverType {Name = "OrigValue" FullName = "OrigValueP.OrigValue"} System.Type {System.RuntimeType} objReceiver {OrigValueP.OrigValue} object {OrigValueP.OrigValue} name "TestString" string parameterTypes null object[] I know that TestString() takes no parameters and returns string, so as a starter to try to get things working, I specified "null" for the final parameter to SetMolePublicInstance. As already mentioned, this compiles OK. Here's the stack trace: Unhandled Exception: System.NullReferenceException: Object reference not set to an instance of an object. at Microsoft.ExtendedReflection.Collections.Indexable.ConvertAllToArray[TInput,TOutput](TInput[] array, Converter`2 converter) at Microsoft.Moles.Framework.Moles.MoleRuntime.SetMole(Delegate _stub, Type receiverType, Object _receiver, String name, MoleBindingFlags flags, Type[] parameterTypes) at Microsoft.Moles.Framework.Moles.MoleRuntime.SetMolePublicInstance(Delegate _stub, Type receiverType, Object _receiver, String name, Type[] parameterTypes) at DeqP.Deq.Replace[T](Func`1 stub, Type receiverType, Object objReceiver, String name) in C:\0VisProjects\DecP_04\DecP\DeqC.cs:line 38 at DeqPTest.DecCTest.StaticMethodUnitTestWithDeq() in C:\0VisProjects\DecP_04\DecPTest\DeqCTest.cs:line 28 at Starter.Start.Main(String[] args) in C:\0VisProjects\DecP_04\Starter\Starter.cs:line 14 Press any key to continue . . . To avoid the null parameter, I changed the final "null" to "parameterTypes" as in the following line: MoleRuntime.SetMolePublicInstance(stub, receiverType, objReceiver, name, parameterTypes); I then tried each of the following (before the line): int[] parameterTypes = null; // if this is null, I don't think the type will matter int[] parameterTypes = new int[0]; object[] parameterTypes = new object[0]; // this would allow for various parameter types All three attempts produce a red squiggly line under the entire line for SetMolePublicInstance Mouseover showed the following message: The best overloaded method match for 'Microsoft.Moles.Framework.Moles.MoleRuntime.SetMolePublicInstance(System.Delegate, System.Type, object, string, params System.Type[])' has some invalid arguments. I'm assuming that the first four arguments are OK, and that the problem is with the params array.

  • Why does mmap() fail with ENOMEM on a 1TB sparse file?

    - by metadaddy
    I've been working with large sparse files on openSUSE 11.2 x86_64. When I try to mmap() a 1TB sparse file, it fails with ENOMEM. I would have thought that the 64-bit address space would be adequate to map in a terabyte, but it seems not. Experimenting further, a 1GB file works fine, but a 2GB file (and anything bigger) fails. I'm guessing there might be a setting somewhere to tweak, but an extensive search turns up nothing. Here's some sample code that shows the problem; any clues?

        #include <errno.h>
        #include <fcntl.h>
        #include <stdio.h>
        #include <stdlib.h>
        #include <string.h>
        #include <sys/mman.h>
        #include <sys/types.h>
        #include <unistd.h>

        int main(int argc, char *argv[])
        {
            char *filename = argv[1];
            int fd;
            off_t size = 1UL << 40; // 30 == 1GB, 40 == 1TB

            fd = open(filename, O_RDWR | O_CREAT | O_TRUNC, 0666);
            ftruncate(fd, size);
            printf("Created %ld byte sparse file\n", size);

            char *buffer = (char *)mmap(NULL, (size_t)size,
                                        PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
            if (buffer == MAP_FAILED) {
                perror("mmap");
                exit(1);
            }
            printf("Done mmap - returned 0x0%lx\n", (unsigned long)buffer);

            strcpy(buffer, "cafebabe");
            printf("Wrote to start\n");

            strcpy(buffer + (size - 9), "deadbeef");
            printf("Wrote to end\n");

            if (munmap(buffer, (size_t)size) < 0) {
                perror("munmap");
                exit(1);
            }
            close(fd);
            return 0;
        }
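
    A hedged place to look: a cliff at 2GB on an otherwise 64-bit box is the signature of a per-process virtual address space cap (RLIMIT_AS, set via ulimit -v) rather than the kernel running out of anything. A quick check, sketched as a fragment to drop into main before the mmap call:

        /* Report the address-space limit; compare rl.rlim_cur to size. */
        #include <sys/resource.h>

        struct rlimit rl;
        if (getrlimit(RLIMIT_AS, &rl) == 0)
            printf("RLIMIT_AS: cur=%ld max=%ld\n",
                   (long)rl.rlim_cur, (long)rl.rlim_max);

    From a shell, "ulimit -v" shows the same number (in kilobytes); raising it or setting it to unlimited is worth trying before hunting for kernel tunables.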

  • multiple models in Rails with a shared interface

    - by dfondente
    I'm not sure of the best structure for a particular situation in Rails. We have several types of workshops. The administration of the workshops is the same regardless of workshop type, so the data for the workshops is in a single model. We collect feedback from participants about the workshops, and the questionnaire is different for each type of workshop. I want to access the feedback about a workshop from the workshop model, but the class of the associated model will depend on the type of workshop. If I were doing this in something other than Rails, I would set up an abstract class WorkshopFeedback and then have subclasses for each type of workshop: WorkshopFeedbackOne, WorkshopFeedbackTwo, WorkshopFeedbackThree. I'm unsure how best to handle this with Rails. I currently have:

        class Workshop < ActiveRecord::Base
          has_many :workshop_feedbacks
        end

        class Feedback < ActiveRecord::Base
          belongs_to :workshop
          has_many :feedback_ones
          has_many :feedback_twos
          has_many :feedback_threes
        end

        class FeedbackOne < ActiveRecord::Base
          belongs_to :feedback
        end

        class FeedbackTwo < ActiveRecord::Base
          belongs_to :feedback
        end

        class FeedbackThree < ActiveRecord::Base
          belongs_to :feedback
        end

    This doesn't seem like the cleanest way to access the feedback from the workshop model, as accessing the correct feedback requires logic investigating the workshop type and then choosing, for instance, @workshop.feedback.feedback_one. Is there a better way to handle this situation? Would it be better to use a polymorphic association for feedback? Or maybe a Module or Mixin for the shared Feedback interface?

    Note: I am avoiding Single Table Inheritance here because the FeedbackOne, FeedbackTwo, and FeedbackThree models do not share much common data, so I would end up with a large, sparsely populated table with STI.
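
    A hedged sketch of the polymorphic route the question mentions: Feedback keeps the shared columns and points at whichever type-specific record applies, so each questionnaire keeps its own table (no STI sparseness) and the workshop code never branches on type:

        class Feedback < ActiveRecord::Base
          belongs_to :workshop
          # requires details_id and details_type columns on the feedbacks table
          belongs_to :details, :polymorphic => true
        end

        class FeedbackOne < ActiveRecord::Base
          has_one :feedback, :as => :details
        end

        # Access is then uniform regardless of workshop type (assuming
        # Workshop has_many :feedbacks):
        #   @workshop.feedbacks.each { |f| f.details }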

  • Trouble accessing fields of a serialized object in Java

    - by typoknig
    I have instantiated a class that implements Serializable, and I am trying to stream that object like this:

        try {
            Socket socket = new Socket("localhost", 8000);
            ObjectOutputStream toServer = new ObjectOutputStream(socket.getOutputStream());
            toServer.writeObject(myObject);
        } catch (IOException ex) {
            System.err.println(ex);
        }

    All good so far, right? Then I am trying to read the fields of that object like this:

        // This is an inner class
        class HandleClient implements Runnable {
            private ObjectInputStream fromClient;
            private Socket socket; // This socket was established earlier

            try {
                fromClient = new ObjectInputStream(socket.getInputStream());
                GetField inputObjectFields = fromClient.readFields();
                double myFirstVariable = inputObjectFields.get("myFirstVariable", 0);
                int mySecondVariable = inputObjectFields.get("mySecondVariable", 0);
                // do stuff
            } catch (IOException ex) {
                System.err.println(ex);
            } catch (ClassNotFoundException ex) {
                System.err.println(ex);
            } finally {
                try {
                    fromClient.close();
                } catch (Exception ex) {
                    ex.printStackTrace();
                }
            }
        }

    But I always get the error:

        java.io.NotActiveException: not in call to readObject

    This is my first time streaming objects instead of primitive data types. What am I doing wrong?

    BONUS: When I do get this working correctly, is the ENTIRE CLASS passed with the serialized object (i.e. will I have access to the methods of the object's class)? My reading suggests that the entire class is passed with the object, but I have been unable to use the object's methods thus far. How exactly do I call the object's methods? In addition to the code above, I also experimented with the readObject method, but I was probably using it wrong too, because I couldn't get it to work. Please enlighten me.
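
    A hedged sketch of the usual fix: readFields() is only legal inside a class's own private readObject(ObjectInputStream) method during deserialization, which is what NotActiveException is signalling. Outside that context, read the whole object back, cast it, and use its accessors (class and getter names below are assumed from the field names in the question):

        ObjectInputStream fromClient = new ObjectInputStream(socket.getInputStream());
        MyObject obj = (MyObject) fromClient.readObject();  // may throw ClassNotFoundException
        double first = obj.getMyFirstVariable();            // hypothetical getter

    On the bonus question: serialization carries the object's state, not its bytecode. The receiving JVM must already have the class on its classpath; once it does, every method of the deserialized instance works normally.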

  • casting char[][] to char** causes segfault?

    - by Earlz
    OK, my C is a bit rusty, but I figured I'd do my next (small) project in C so I could polish back up on it, and less than 20 lines in I already have a segfault. This is my complete code:

        #define ROWS 4
        #define COLS 4

        char main_map[ROWS][COLS + 1] = {
            "a.bb",
            "a.c.",
            "adc.",
            ".dc."};

        void print_map(char **map) {
            int i;
            for (i = 0; i < ROWS; i++) {
                puts(map[i]); // segfault here
            }
        }

        int main() {
            print_map(main_map); // if I comment out this line it will work
            puts(main_map[3]);
            return 0;
        }

    I am completely confused as to how this is causing a segfault. What is happening when casting from [][] to **? That is the only warning I get:

        rushhour.c:23:3: warning: passing argument 1 of 'print_map' from incompatible pointer type
        rushhour.c:13:7: note: expected 'char **' but argument is of type 'char (*)[5]'

    Are [][] and ** really not compatible pointer types? They seem like just syntax to me.
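
    They really are different: a 2D array is one contiguous block of chars that decays to char (*)[5] (pointer to its first row), not to an array of char* pointers. Reading map[i] through a char** therefore interprets raw character data as a pointer, hence the crash. A sketch of the conventional fix, declaring the parameter with the real row type:

        /* The column size must appear in the parameter type so the compiler
           can do the row-stride arithmetic. */
        void print_map(char map[][COLS + 1]) { /* same as char (*map)[COLS + 1] */
            int i;
            for (i = 0; i < ROWS; i++)
                puts(map[i]);
        }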

  • Unit Testing Interfaces in Python

    - by Nicholas Mancuso
    I am currently learning Python in preparation for a class over the summer, and I have gotten started by implementing different types of heaps and priority-based data structures. I began to write a unit test suite for the project but ran into difficulties creating a generic unit test that only tests the interface and is oblivious to the actual implementation. I am wondering if it is possible to do something like this:

        suite = HeapTestSuite(BinaryHeap())
        suite.run()

        suite = HeapTestSuite(BinomialHeap())
        suite.run()

    What I am currently doing just feels... wrong (multiple inheritance? ACK!):

        class TestHeap:
            def reset_heap(self):
                self.heap = None

            def test_insert(self):
                self.reset_heap()
                # test that insert doesn't throw an exception...
                for x in self.inseq:
                    self.heap.insert(x)

            def test_delete(self):
                # assert we get the first value we put in
                self.reset_heap()
                self.heap.insert(5)
                self.assertEquals(5, self.heap.delete_min())
                # harder test: put a sequence in and check that it comes out right
                self.reset_heap()
                for x in self.inseq:
                    self.heap.insert(x)
                for x in xrange(len(self.inseq)):
                    val = self.heap.delete_min()
                    self.assertEquals(val, x)

        class BinaryHeapTest(TestHeap, unittest.TestCase):
            def setUp(self):
                self.inseq = range(99, -1, -1)
                self.heap = BinaryHeap()

            def reset_heap(self):
                self.heap = BinaryHeap()

        class BinomialHeapTest(TestHeap, unittest.TestCase):
            def setUp(self):
                self.inseq = range(99, -1, -1)
                self.heap = BinomialHeap()

            def reset_heap(self):
                self.heap = BinomialHeap()

        if __name__ == '__main__':
            unittest.main()
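
    For what it's worth, the mixin-plus-TestCase pattern above is a standard unittest idiom rather than a hack. A hedged sketch of a slimmer version that removes the duplication by letting each subclass supply only the heap factory:

        import unittest

        class HeapTestMixin(object):
            heap_class = None  # each concrete TestCase sets this

            def setUp(self):
                self.inseq = range(99, -1, -1)
                self.heap = self.heap_class()

            def test_insert_then_delete_min_is_sorted(self):
                for x in self.inseq:
                    self.heap.insert(x)
                for expected in range(100):
                    self.assertEqual(expected, self.heap.delete_min())

        class BinaryHeapTest(HeapTestMixin, unittest.TestCase):
            heap_class = BinaryHeap

        class BinomialHeapTest(HeapTestMixin, unittest.TestCase):
            heap_class = BinomialHeap

    Because HeapTestMixin does not itself extend unittest.TestCase, the runner never tries to execute the shared tests without a concrete heap, which is what the reset_heap gymnastics were working around.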

  • Intelligent search and generation of Java code, preferrably using Python?

    - by Ipsquiggle
    Basically, I do lots of one-off code generation, large-scale refactorings, etc., in Java. My tool language of choice is Python, but I'll take whatever solutions you can offer. Here is a simplified illustration of what I would like, in pseudocode, generating an implementation for an interface:

        search within my project:
            for each Interface as iName:
                write class(name=iName+"Impl", implements=iName)
                search within the body of iName:
                    for each Method as mName:
                        write method(name=mName, body="// TODO implement this...")

    Basically, the tool I'm searching for would allow me to:

      - parse files according to their Java structure ("search for interfaces")
      - search for words contextualized by language elements and types ("variables of type SomeClass", "doStuff() method calls on SomeClass instances")
      - run searches with structural context ("within the body of the current result")
      - easily replace or generate code (with helpers to generate, as above, or functions for replacing: "rename the interface to Foo", "insert the line Blah.Blah()", etc.)

    The point is, I don't want to spend a lot of time writing these things, as they are usually throwaway. But sometimes I need something just a little smarter than what grep offers. It wouldn't be too hard to write a simplistic version of this, but if I'm going to use something like this at all, I'd expect it to be robust. Any suggestions for a tool/library that will help me accomplish this?
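
    A hedged sketch of how the pseudocode maps onto a real Java parser driven from Python. javalang is used here as a stand-in for any library exposing a walkable Java syntax tree; its exact API (and whether it covers your Java version) is an assumption worth checking:

        import javalang  # third-party: pip install javalang

        source = open("MyInterface.java").read()
        tree = javalang.parse.parse(source)

        # "for each Interface as iName" ...
        for _path, iface in tree.filter(javalang.tree.InterfaceDeclaration):
            print("public class %sImpl implements %s {" % (iface.name, iface.name))
            # ... "for each Method as mName"
            for method in iface.methods:
                ret = method.return_type.name if method.return_type else "void"
                print("    public %s %s() { /* TODO implement this... */ }"
                      % (ret, method.name))
            print("}")

    The structural searches in the list above all reduce to the same pattern: filter the tree by node type, then inspect the matched node's children.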

  • Creating serializable unique compile-time identifiers for arbitrary UDTs

    - by Endiannes
    I would like a generic way to create unique compile-time identifiers for any C++ user-defined type. For example:

        unique_id<my_type>::value == 0    // true
        unique_id<other_type>::value == 1 // true

    I've managed to implement something like this using preprocessor metaprogramming. The problem is that serialization is not consistent: if the class template unique_id is instantiated with other_type first, then any serialization from previous revisions of my program is invalidated. I've searched for solutions to this problem and found several ways to implement it, all with non-consistent serialization when the unique values are compile-time constants. If RTTI or similar mechanisms like boost::sp_typeinfo are used, then the unique values are obviously not compile-time constants, and extra overhead is present. An ad-hoc solution would be instantiating all of the unique_ids in a separate header in the correct order, but that causes additional maintenance and boilerplate code, which is no different from using an enum unique_id { my_type, other_type };. A good solution would be user-defined literals; unfortunately, as far as I know, no compiler supports them at this moment. The syntax would be 'my_type'_id; 'other_type'_id; with UDLs. I'm hoping somebody knows a trick that allows implementing serializable unique identifiers in C++ with the current standard (C++03/C++0x). I would be happy if it works with the latest stable MSVC and GNU g++ compilers, although I expect that if there is a solution, it's not portable.
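
    A hedged C++03 sketch of the usual compromise: make the identifier an explicit, human-chosen tag registered in exactly one place per type. What gets serialized is the stable string (or a fixed hash of it), so instantiation order never matters; the cost is one macro line per type rather than full automation:

        // Primary template intentionally left undefined: using an unregistered
        // type is a compile error, which keeps the registry honest.
        template <typename T> struct type_tag;

        #define REGISTER_TYPE_TAG(T, TAG) \
            template <> struct type_tag<T> { \
                static const char* name() { return TAG; } \
            }

        REGISTER_TYPE_TAG(my_type,    "my_type");
        REGISTER_TYPE_TAG(other_type, "other_type");

        // Serialize type_tag<T>::name() (or a hash of it computed once at
        // startup) instead of an order-dependent integer.

    Unlike the "one header in the correct order" approach, registrations here can live anywhere and in any order, since the serialized value carries its own identity.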

  • How should I map an abstract class with simple xml in Java?

    - by spderosso
    Hi, I want to achieve the following XML using the Simple XML framework (http://simple.sourceforge.net/):

        <events>
            <course-added date="01/01/2010">
                ...
            </course-added>
            <course-removed date="01/02/2010">
                ...
            </course-removed>
            <student-enrolled date="01/02/2010">
                ...
            </student-enrolled>
        </events>

    I have the following (but it doesn't achieve the desired XML):

        @Root(name="events")
        class XMLEvents {
            @ElementList(inline=true)
            ArrayList<XMLEvent> events = Lists.newArrayList();
            ...
        }

        abstract class XMLEvent {
            @Attribute(name="date")
            String dateOfEventFormatted;
            ...
        }

    and different types of XML nodes that carry different information (but are all different types of events):

        @Root(name="course-added")
        class XMLCourseAdded extends XMLEvent {
            @Element(name="course")
            XMLCourseLongFormat course;
            ...
        }

        @Root(name="course-removed")
        class XMLCourseRemoved extends XMLEvent {
            @Element(name="course-id")
            String courseId;
            ...
        }

    How should I do the mapping, or what should I change, in order to achieve the desired XML? Thanks!
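
    A hedged sketch using Simple's union annotations (available in later 2.x releases of the framework; check your version): each subclass is bound to its own element name inside the one inline list, which yields exactly the mixed children shown above. XMLStudentEnrolled is assumed here by symmetry with the desired XML:

        @Root(name = "events")
        class XMLEvents {
            @ElementListUnion({
                @ElementList(entry = "course-added",     inline = true, type = XMLCourseAdded.class),
                @ElementList(entry = "course-removed",   inline = true, type = XMLCourseRemoved.class),
                @ElementList(entry = "student-enrolled", inline = true, type = XMLStudentEnrolled.class)
            })
            List<XMLEvent> events = new ArrayList<XMLEvent>();
        }

    With the union in place, the @Root names on the subclasses no longer drive the list entries; the entry attributes do.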

  • Coldfusion 8 and HTTP PUT - is there a way to PUT an object?

    - by ciaranarcher
    Hi all. We are using EHCache with CF8 to cache stuff on a central server using a RESTful interface over HTTP. I am trying to cache a cfquery object on the cache server. I can get this to work if I call EHCache directly (i.e. store it in a local cache), but if I try to cache on a remote server over HTTP I run into problems. The code I am using is as follows:

        <cfhttp url="http://localhost:8080/myCache/myKey" method="put" result="r"
                timeout="2" throwonerror="true">
            <cfhttpparam type="body" value="#ARGUMENTS.item#" />
        </cfhttp>

    CF doesn't like the reference to #ARGUMENTS.item# and complains: "Complex object types cannot be converted to simple values." Can anyone give me an example of how to PUT an object over HTTP using CF? If this is not possible with CF, then a Java example would be the next best thing. Many thanks in advance!

    PS: I do not want to use serialization to text/JSON etc., as this approach has issues with data integrity and, most importantly, it's not fast enough.
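
    A hedged sketch of one workaround: an HTTP body can only carry bytes, so the query has to become a byte stream somehow; native Java serialization (CF query objects are Java objects under the hood) avoids a text/JSON round-trip. Whether CF8's query implementation is Serializable, and whether cfhttpparam accepts a binary body in CF8, are assumptions to verify:

        <!--- Serialize the query to bytes with java.io, then PUT the bytes. --->
        <cfset baos = createObject("java", "java.io.ByteArrayOutputStream").init()>
        <cfset oos = createObject("java", "java.io.ObjectOutputStream").init(baos)>
        <cfset oos.writeObject(ARGUMENTS.item)>
        <cfset oos.close()>
        <cfhttp url="http://localhost:8080/myCache/myKey" method="put" throwonerror="true">
            <cfhttpparam type="body" value="#baos.toByteArray()#">
        </cfhttp>

    The receiving side would then deserialize with a matching ObjectInputStream and would need the ColdFusion query classes on its classpath.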
