Search Results

Search found 25521 results on 1021 pages for 'static objects'.


  • not able to Deserialize object

    - by Ravisha
    I have the following piece of code, in which I am trying to serialize and deserialize an object of the StringResource class. Please note that Resource1.stringXml comes from a resource file. If I pass strelemet.OuterXml I get the object back from DeSerializeXmlNodeToObject, but if I pass Resource1.stringXml I get the following exception: {"<STRING xmlns=''> was not expected."} System.Exception {System.InvalidOperationException}

        class Program
        {
            static void Main(string[] args)
            {
                StringResource str = new StringResource();
                str.DELETE = "CanDelete";
                str.ID = "23342";
                XmlElement strelemet = SerializeObjectToXmlNode(str);
                StringResource strResourceObject = DeSerializeXmlNodeToObject<StringResource>(Resource1.stringXml);
                Console.ReadLine();
            }

            public static T DeSerializeXmlNodeToObject<T>(string objectNodeOuterXml)
            {
                try
                {
                    TextReader objStringsTextReader = new StringReader(objectNodeOuterXml);
                    XmlSerializer stringResourceSerializer = new XmlSerializer(typeof(T), string.Empty);
                    return (T)stringResourceSerializer.Deserialize(objStringsTextReader);
                }
                catch (Exception excep)
                {
                    return default(T);
                }
            }

            public static XmlElement SerializeObjectToXmlNode(object obj)
            {
                using (MemoryStream memoryStream = new MemoryStream())
                {
                    try
                    {
                        XmlSerializerNamespaces xmlNameSpace = new XmlSerializerNamespaces();
                        xmlNameSpace.Add(string.Empty, string.Empty);
                        XmlWriterSettings writerSettings = new XmlWriterSettings();
                        writerSettings.CloseOutput = false;
                        writerSettings.Encoding = System.Text.Encoding.UTF8;
                        writerSettings.Indent = false;
                        writerSettings.OmitXmlDeclaration = true;
                        XmlWriter writer = XmlWriter.Create(memoryStream, writerSettings);
                        XmlSerializer xmlserializer = new XmlSerializer(obj.GetType());
                        xmlserializer.Serialize(writer, obj, xmlNameSpace);
                        writer.Close();
                        memoryStream.Position = 0;
                        XmlDocument serializeObjectDoc = new XmlDocument();
                        serializeObjectDoc.Load(memoryStream);
                        return serializeObjectDoc.DocumentElement;
                    }
                    catch (Exception excep)
                    {
                        return null;
                    }
                }
            }
        }

        public class StringResource
        {
            [XmlAttribute] public string DELETE;
            [XmlAttribute] public string ID;
        }
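    A hedged aside on the exception text: "<STRING xmlns=''> was not expected" usually means the root element name of the XML being deserialized does not match the element name XmlSerializer expects for the type. Below is a minimal sketch of that idea, assuming the stored resource XML really uses <STRING> as its root (an assumption based only on the error message):

        // Hypothetical sketch: map the assumed <STRING> root onto StringResource.
        // Assumes the resource XML looks like: <STRING DELETE="CanDelete" ID="23342" />
        using System.Xml.Serialization;

        [XmlRoot("STRING")]
        public class StringResource
        {
            [XmlAttribute] public string DELETE;
            [XmlAttribute] public string ID;
        }

        // Alternatively, without touching the class:
        // var serializer = new XmlSerializer(typeof(StringResource), new XmlRootAttribute("STRING"));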


  • Is it possible to set properties on a Mock Object in Simpletest

    - by JW
    I normally use getter and setter methods on my objects and I am fine with testing them as mock objects in SimpleTest by manipulating them with code like:

        Mock::generate('MyObj');
        $MockMyObj->setReturnValue('getPropName', 'value');

    However, I have recently started to use magic interceptors (__set(), __get()) and access properties like so:

        $MyObj->propName = 'blah';

    But I am having difficulty making a mock object have a particular property accessed using that technique. So is there some special way of setting properties on mock objects? I have tried doing:

        $MockMyObj->propName = 'test Value';

    but this does not seem to work. I am not sure whether it is my test subject, the mock, the magic interceptors, or SimpleTest that is causing the property to be inaccessible. Any advice welcome.


  • Use of COM object in IIS 7

    - by Wouter d.A.
    Hi all, I am currently moving an ASP.NET web project from an IIS 6 to an IIS 7 hosting environment. Everything seems to be running OK, except my calls to a COM object. I can instantiate an object of the COM type perfectly well, but when I call one of its methods, IIS crashes. The event log reports error code "0xc0000374", which indicates a heap corruption. When I run the application inside the Visual Studio development server, everything goes well and the COM object code gets executed without any errors. This is also the case when the application is hosted on an IIS 6 machine. I have looked through all the settings of IIS 7 and have not found anything configurable for COM objects, like security or ... I have been struggling with this for a while and I'm out of ideas. Does anyone have any experience deploying COM objects on IIS 7? Your help would be very appreciated!


  • Using Mapped Memory Files in C# to store reference types

    - by Khash
    I need to store a dictionary to a file as fast as possible. Both key and value are objects and are not guaranteed to be marked as Serializable. I would also prefer a method faster than serializing thousands of objects, so I looked into the memory-mapped file support in .NET 4. However, it seems MemoryMappedViewAccessor only allows storage of structs and not reference types. Is there a way of storing the memory used by a reference type to a file and reconstructing the object from that blob of memory (without binary serialization)?
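    For reference, a minimal sketch of the value-type case the accessor does support; the Record layout below is invented for illustration. A reference type would first have to be flattened into a blittable struct (or written through a MemoryMappedViewStream) before it could be stored this way, which is essentially the limitation the question runs into.

        // Minimal .NET 4 sketch: MemoryMappedViewAccessor copies blittable value types only.
        using System.IO;
        using System.IO.MemoryMappedFiles;
        using System.Runtime.InteropServices;

        [StructLayout(LayoutKind.Sequential)]
        struct Record
        {
            public int Key;
            public double Value;
        }

        class MmfDemo
        {
            static void Main()
            {
                using (var mmf = MemoryMappedFile.CreateFromFile("cache.bin", FileMode.OpenOrCreate, null, 1024))
                using (var accessor = mmf.CreateViewAccessor())
                {
                    var rec = new Record { Key = 42, Value = 3.14 };
                    accessor.Write(0, ref rec);   // struct goes straight into the mapped page

                    Record back;
                    accessor.Read(0, out back);   // and comes back out without serialization
                }
            }
        }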


  • How to create custom get and set methods for Linq2SQL object

    - by optician
    I have some objects which have been created automatically by LINQ to SQL. I would like to write some code which runs whenever the properties on these objects are read or changed. Can I use a typical get { //code } and set { //code } in my partial class file to add this functionality? Currently I get an error about this member already being defined. This all makes sense. Is it correct that I will have to create a method to act as the entry point for this functionality, since I cannot redefine the get and set methods for this property? I was hoping to just update the get and set, as this would mean I wouldn't have to change all the reference points in my app. But I think I may just have to update it everywhere.
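    One avenue worth noting, as a sketch only: the LINQ to SQL designer emits partial methods such as On<Property>Changing and On<Property>Changed on each entity, so a partial class can hook property writes (though not reads) without redefining the property. The entity and property names below are hypothetical.

        // Hypothetical entity "Customer" with a designer-generated "Name" property.
        // The generated setter calls these partial methods, so implementing them
        // in the partial class runs code on every change.
        public partial class Customer
        {
            partial void OnNameChanging(string value)
            {
                // runs before the new value is assigned
                System.Diagnostics.Debug.WriteLine("Name changing to: " + value);
            }

            partial void OnNameChanged()
            {
                // runs after the new value is assigned
                System.Diagnostics.Debug.WriteLine("Name changed");
            }
        }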


  • Curious to Know what Eclipse 'Show Heap Status' does

    - by GustlyWind
    Hi All. In Eclipse (I am using 3.4 Ganymede) there is an option under Preferences > General > Show Heap Status which, when checked, shows something like "46M of 98M" near the bottom of the IDE, and if you move the mouse over the recycle bin icon it says "Run Garbage Collector". I am curious to know how this works. What will happen when "Run Garbage Collector" is clicked? My environment setup is something like this: JDK 6 is installed, the IDE is used for development, and the application runs in a Tomcat server. So my understanding is that all the objects which are run through Tomcat should be garbage collected. Is this correct? Is there a way to see which objects Eclipse identified as garbage? Cheers


  • Generic object to object mapping with parametrized constructor

    - by Rody van Sambeek
    I have a data access layer which returns an IDataRecord. I have a WCF service that serves DataContracts (DTOs). These DataContracts are initialized by a parametrized constructor taking the IDataRecord, as follows:

        [DataContract]
        public class DataContractItem
        {
            [DataMember] public int ID;
            [DataMember] public string Title;

            public DataContractItem(IDataRecord record)
            {
                this.ID = Convert.ToInt32(record["ID"]);
                this.Title = record["title"].ToString();
            }
        }

    Unfortunately I can't change the DAL, so I'm obliged to work with the IDataRecord as input, but in general this works very well. The mappings are pretty simple most of the time; sometimes they are a bit more complex, but no rocket science. However, now I'd like to be able to use generics to instantiate the different DataContracts, to simplify the WCF service methods. I want to be able to do something like:

        public T DoSomething<T>(IDataRecord record)
        {
            ...
            return new T(record);
        }

    So I tried the following solutions:

    - Use a generically typed interface with a constructor. Doesn't work: of course we can't define a constructor in an interface.
    - Use a static method to instantiate the DataContract and create a typed interface containing this static method. Doesn't work: of course we can't define a static method in an interface.
    - Use a generically typed interface containing the new() constraint. Doesn't work: the new() constraint cannot take a parameter (the IDataRecord).
    - Use a factory object to perform the mapping based on the DataContract type. Does work, but it's not very clean, because I now have a switch statement with all the mappings in one file.

    I can't find a really clean solution for this. Can somebody shed some light on this for me? The project is too small for any complex mapping techniques and too large for a "switch-based" factory implementation.
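    One lightweight pattern that fits this constraint (a sketch, not necessarily the cleanest possible answer): keep the (IDataRecord) constructors as they are and let reflection supply the missing new T(record), since Activator.CreateInstance can pass constructor arguments even though the new() constraint cannot.

        // Minimal sketch: instantiate any DTO that exposes an (IDataRecord) constructor.
        using System;
        using System.Data;

        static class DtoFactory
        {
            public static T Create<T>(IDataRecord record)
            {
                // Reflection-based; simple, but pays the reflection cost on every row.
                // A cached compiled delegate would be the next step if that matters.
                return (T)Activator.CreateInstance(typeof(T), record);
            }
        }

        // Usage inside a service method:
        // DataContractItem item = DtoFactory.Create<DataContractItem>(record);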


  • Grails UrlMappings with .html

    - by Glennn
    I'm developing a Grails web application (mainly as a learning exercise). I have previously written some standard Grails apps, but in this case I wanted to try creating a controller that would intercept all requests (including static HTML) of the form:

        <a href="/testApp/testJsp.jsp">test 1</a>
        <a href="/testApp/testGsp.gsp">test 2</a>
        <a href="/testApp/testHtm.htm">test 3</a>
        <a href="/testApp/testHtml.html">test 4</a>

    The intent is to do some simple business logic (auditing) each time a user clicks a link. I know I could do this using a Filter (or a range of other methods), however I thought this should work too and wanted to do this using the Grails framework. I set up the Grails UrlMappings.groovy file to map all URLs of that form (/$myPathParam?) to a single controller:

        class UrlMappings {
            static mappings = {
                "/$controller/$action?/$id?" {
                    constraints {
                    }
                }
                "/$path?" (controller: 'auditRecord', action: 'showPage')
                "500"(view:'/error')
            }
        }

    In that controller (in the appropriate "showPage" action) I've been printing out the path information, for example:

        def showPage = {
            println "params.path = " + params.path
            ...
            render(view: resultingView)
        }

    The results of the println in the showPage action for each of my four links are:

        testJsp.jsp
        testGsp.gsp
        testHtm.htm
        testHtml

    Why is the last one "testHtml", not "testHtml.html"? In a previous Stack Overflow question Olexandr encountered this issue and was advised to simply concatenate the value of request.format, which, indeed, does return "html". However, request.format also returns "html" for all four links. I'm interested in gaining an understanding of what Grails is doing and why. Is there some way to configure Grails so the params.path variable in the controller shows "testHtml.html" rather than stripping off the "html" extension? It doesn't seem to remove the extension for any other file type (including .htm). Is there a good reason it's doing this? I know that it is a bit unusual to use a controller for static HTML, but I would still like to understand what's going on.


  • How do I serialize a large graph of .NET objects into a SQL Server BLOB without creating a large buffer?

    - by Ian Ringrose
    We have code like:

        ms = New IO.MemoryStream
        bin = New System.Runtime.Serialization.Formatters.Binary.BinaryFormatter
        bin.Serialize(ms, largeGraphOfObjects)
        dataToSaveToDatabase = ms.ToArray()
        ' put dataToSaveToDatabase in a SQL Server BLOB

    But the memory stream allocates a large buffer from the large object heap, and that is giving us problems. So how can we stream the data without needing enough free memory to hold the serialized objects? I am looking for a way to get a Stream from SQL Server that can then be passed to bin.Serialize(), so avoiding keeping all the data in my process's memory. Likewise for reading the data back...

    Some more background: this is part of a complex numerical processing system that processes data in near real time looking for equipment problems etc. The serialization is done to allow a restart when there is a problem with data quality from a data feed etc. (We store the data feeds and can rerun them after the operator has edited out bad values.) Therefore we serialize the objects a lot more often than we de-serialize them. The objects we are serializing include very large arrays, mostly of doubles, as well as a lot of small "more normal" objects. We are pushing the memory limit on a 32-bit system and making the garbage collector work very hard. (Efforts are being made elsewhere in the system to improve this, e.g. reusing large arrays rather than creating new ones.) Often the serialization of the state is the last straw that causes an out-of-memory exception; our peak memory usage occurs while this serialization is being done. I think we get large object heap fragmentation when we de-serialize the object, and I expect there are also other problems with large object heap fragmentation given the size of the arrays. (This has not yet been investigated, as the person that first looked at this is a numerical processing expert, not a memory management expert.) Our customers use a mix of SQL Server 2000, 2005 and 2008 and we would rather not have different code paths for each version of SQL Server if possible. We can have many active models at a time (in different processes, across many machines); each model can have many saved states. Hence the saved state is stored in a database BLOB rather than a file. As the spread of saving the state is important, I would rather not serialize the object to a file and then put the file in a BLOB one block at a time. Other related questions I have asked: How to Stream data from/to SQL Server BLOB fields? Is there a SqlFileStream like class that works with Sql Server 2005?
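    One direction that avoids the single large buffer (a rough sketch under assumed table and column names, not the poster's solution): wrap SQL Server's varbinary(max) .WRITE clause in a write-only Stream and hand that stream to BinaryFormatter, so the graph is pushed to the database in chunks instead of being materialized in one array. The column is assumed to be pre-initialized (e.g. to 0x) so that .WRITE can append to it.

        // Sketch only: table "ModelState", column "SavedState", key "Id" are assumptions.
        using System;
        using System.Data.SqlClient;
        using System.IO;

        public class SqlBlobWriteStream : Stream
        {
            private readonly SqlConnection connection;
            private readonly int id;

            public SqlBlobWriteStream(SqlConnection connection, int id)
            {
                this.connection = connection;
                this.id = id;
            }

            public override void Write(byte[] buffer, int offset, int count)
            {
                using (SqlCommand cmd = new SqlCommand(
                    "UPDATE ModelState SET SavedState.WRITE(@chunk, NULL, NULL) WHERE Id = @id",
                    connection))
                {
                    byte[] chunk = new byte[count];
                    Array.Copy(buffer, offset, chunk, 0, count);
                    cmd.Parameters.AddWithValue("@chunk", chunk);
                    cmd.Parameters.AddWithValue("@id", id);
                    cmd.ExecuteNonQuery(); // NULL offset makes .WRITE append to the existing value
                }
            }

            // Remaining Stream members are stubs for a forward-only, write-only wrapper.
            public override bool CanRead { get { return false; } }
            public override bool CanSeek { get { return false; } }
            public override bool CanWrite { get { return true; } }
            public override long Length { get { throw new NotSupportedException(); } }
            public override long Position
            {
                get { throw new NotSupportedException(); }
                set { throw new NotSupportedException(); }
            }
            public override void Flush() { }
            public override int Read(byte[] buffer, int offset, int count) { throw new NotSupportedException(); }
            public override long Seek(long offset, SeekOrigin origin) { throw new NotSupportedException(); }
            public override void SetLength(long value) { throw new NotSupportedException(); }
        }

    Usage would be roughly bin.Serialize(New BufferedStream(New SqlBlobWriteStream(conn, id)), largeGraphOfObjects); the BufferedStream keeps the formatter's many small writes from becoming one UPDATE per write.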


  • How to reliably replace a library-defined error handler with my own?

    - by sharptooth
    On certain error cases ATL invokes AtlThrow(), which is implemented as ATL::AtlThrowImpl(), which in turn throws CAtlException. The latter is not very good - CAtlException is not even derived from std::exception, and we also use our own exception hierarchy, so now we would have to catch CAtlException separately here and there, which is a lot of extra code and error-prone. It looks like it is possible to replace ATL::AtlThrowImpl() with my own handler - define _ATL_CUSTOM_THROW and define AtlThrow() to be the custom handler before including atlbase.h - and ATL will call the custom handler. Not so easy. Some of the ATL code is not in sources - it comes compiled as a library, either static or dynamic. We use the static one - atls.lib. And... it is compiled in such a way that it has ATL::AtlThrowImpl() inside, along with some code calling it. I used a static analysis tool - it clearly shows that there are paths on which the old default handler is called. To make sure, I even tried to "reimplement" ATL::AtlThrowImpl() in my code. Now the linker says it sees two definitions of ATL::AtlThrowImpl(), which I suppose confirms that there is another implementation that can be called by some code. How can I handle this? How do I replace the default handler completely and ensure that the default handler is never called?


  • LINQ to SQL Web Application Best Practices

    - by derek
    In my experience building web applications, I've always used an n-tier approach: a DAL that gets data from the DB and populates the objects, a BLL that gets objects from the DAL and performs any business logic required on them, and the website that gets its display data from the BLL. I've recently started learning LINQ, and most of the examples show the queries occurring right in the web application code-behinds (it's possible that I've only seen overly simplified examples). In n-tier architectures, this was always seen as a big no-no. I'm a bit unsure of how to architect a new web application. I've been using the Server Explorer and the dbml designer in VS 2008 to create the dbml and object relationships. It seems a little unclear to me whether the dbml would be considered the DAL layer, or whether the website should call methods within a BLL, which would then do the LINQ queries, etc. What are some general architecture best practices, or approaches, for creating a web application solution using LINQ to SQL?
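    As a rough illustration of keeping the queries out of the code-behind (a sketch with invented names, not prescriptive guidance): wrap the generated DataContext in a repository-style DAL class and let the BLL expose the operations the pages call.

        // "ShopDataContext", "Product" and "IsActive" are assumed designer-generated names.
        using System.Collections.Generic;
        using System.Linq;

        public class ProductRepository
        {
            public IList<Product> GetActiveProducts()
            {
                using (var db = new ShopDataContext())
                {
                    return db.Products.Where(p => p.IsActive).ToList();   // the query stays in the DAL
                }
            }
        }

        public class ProductService
        {
            private readonly ProductRepository repository = new ProductRepository();

            public IList<Product> GetProductsForDisplay()
            {
                // business rules (pricing, filtering, authorization) would go here
                return repository.GetActiveProducts();
            }
        }

    The code-behind then only ever talks to ProductService, which keeps the pages free of query logic regardless of whether the dbml itself is treated as the DAL.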


  • How do I build mDNSResponder?

    - by Alex
    I have tried checking out the mDNSResponder source from Apple's SVN host, with the thought of compiling it and tweaking it. This failed miserably. Here are the last lines of the output of:

        cd trunk
        SRCROOT=. make

    I get the same error for several tags in the SVN tree, so I'm not sure if there is something wrong on my end?

        The following build commands failed:
        mDNSResponder:
            CompileC mDNSResponder.build/mDNSResponder.build/Objects-normal/i386/mDNSMacOSX.o /Users/myname/Desktop/mDNSResponder/trunk/mDNSMacOSX/mDNSMacOSX.c normal i386 c com.apple.compilers.gcc.4_2
            PhaseScriptExecution "Run Script" /Users/myname/Desktop/mDNSResponder/trunk/mDNSMacOSX/mDNSResponder.build/mDNSResponder.build/Script-D284BE6C0ADD80740027CCDF.sh
        mDNSResponder debug:
            CompileC "mDNSResponder.build/mDNSResponder debug.build/Objects-normal/i386/mDNSMacOSX.o" /Users/myname/Desktop/mDNSResponder/trunk/mDNSMacOSX/mDNSMacOSX.c normal i386 c com.apple.compilers.gcc.4_2
        Build Some:
            PhaseScriptExecution "Run Script" "/Users/myname/Desktop/mDNSResponder/trunk/mDNSMacOSX/mDNSResponder.build/Development/Build Some.build/Script-FF045B6A0C7E4AA600448140.sh"
        (4 failures)


  • Adding an Object to Vector loses Reference using Java?

    - by thechiman
    I have a Vector that holds a number of objects. My code uses a loop to add objects to the Vector depending on certain conditions. My question is: when I add the object to the Vector, is the original object reference added to the Vector, or does the Vector make a new instance of the object and add that? For example, in the following code:

        private Vector numbersToCalculate;
        StringBuffer temp = new StringBuffer();

        while(currentBuffer.length() > i) {
            //Some other code
            numbersToCalculate.add(temp);
            temp.setLength(0); //resets the temp StringBuffer
        }

    What I'm doing is adding the "temp" StringBuffer to the numbersToCalculate Vector. Should I be creating a new StringBuffer within the loop and adding that, or will this code work? Thanks for the help! Eric


  • Modifying records in my migration throws an authlogic error

    - by nfm
    I'm adding some columns to one of my database tables, and then populating those columns:

        def self.up
          add_column :contacts, :business_id, :integer
          add_column :contacts, :business_type, :string

          Contact.reset_column_information

          Contact.all.each do |contact|
            contact.update_attributes(:business_id => contact.client_id, :business_type => 'Client')
          end

          remove_column :contacts, :client_id
        end

    The line contact.update_attributes is causing the following Authlogic error:

        You must activate the Authlogic::Session::Base.controller with a controller object before creating objects

    I have no idea what is going on here - I'm not using a controller method to modify each row in the table, nor am I creating new objects. The error doesn't occur if the contacts table is empty. I've had a google and it seems like this error can occur when you run your controller tests, and is fixed by adding before_filter :activate_authlogic to them, but this doesn't seem relevant in my case. Any ideas? I'm stumped.


  • how to implement class with collection of string/object pairs so that an object can be returned with

    - by matti
    The values in a file are read as strings and can be double, string or int, or maybe even lists. An example file:

        DatabaseName=SomeBase
        Classes=11;12;13
        IntValue=3 //this is required!
        DoubleValue=4.0

    I was thinking something like this:

        public static T GetConfigValue<T>(string cfgName)
        {
            // here we just return, for example, the value, which could
            // be List<int> if parameter cfgName = 'Classes'
            // and LoadConfig was called with a Dictionary containing
            // the key/value pair 'Classes' / typeof(List<int>)
        }

        public static bool LoadConfig(Dictionary<string, Type> reqSettings, Dictionary<string, Type> optSettings)
        {
            foreach (KeyValuePair<string, Type> kvPair in reqSettings)
            {
                if (!ReadCheckAndStore(kvPair, true)) return false;
            }
            foreach (KeyValuePair<string, Type> kvPair in optSettings)
            {
                if (!ReadCheckAndStore(kvPair, false)) return false;
            }
            return true;
        }

        private static bool ReadCheckAndStore(KeyValuePair<string, Type> kvPair, bool isRequired)
        {
            if (!ReadValue(kvPair.Key, out confValue) && isRequired) // required IntValue not found
                return false;
            // here I also have to test whether the read value is the wanted type,
            // and if yes, store it in the collection.
        }

    Thanks a lot & BR! -Matti. PS: An additional issue is default values for optional settings. It's not elegant to pass them to LoadConfig in a separate Dictionary, but that is another issue...
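    A minimal sketch of one way to store the parsed values so a typed getter can hand them back (the names and parsing rules here are invented for illustration, not a finished design): keep everything in a Dictionary<string, object> and convert on the way out.

        // Hypothetical typed settings store. Parsing/validation of required keys
        // would happen while loading; GetConfigValue only casts or converts on read.
        using System;
        using System.Collections.Generic;
        using System.Linq;

        static class Config
        {
            static readonly Dictionary<string, object> values = new Dictionary<string, object>();

            public static void Set(string name, object value) { values[name] = value; }

            public static T GetConfigValue<T>(string name)
            {
                object raw = values[name];
                if (raw is T) return (T)raw;                    // already the right type (e.g. List<int>)
                return (T)Convert.ChangeType(raw, typeof(T));   // "3" -> int, "4.0" -> double, ...
            }
        }

        // Usage:
        // Config.Set("Classes", "11;12;13".Split(';').Select(int.Parse).ToList());
        // List<int> classes = Config.GetConfigValue<List<int>>("Classes");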


  • Is this a correct porting of java.util.Random in Objective-C?

    - by dipu
    I have ported the code inside the java.util.Random class to Objective-C. I want to have an identical random number generator so that it syncs with the server app running on Java. Now, is this a safe port, and if not, is there a way to mimic AtomicLong as it is found in Java? Please see my code below.

        static long long multiplier = 0x5DEECE66DL;
        static long addend = 0xBL;
        static long long mask = (0x1000000000000001L << 48) - 1;

        -(void) initWithSeed:(long long) seed1 {
            [self setRandomSeed: 0L]; // = new AtomicLong(0L);
            [self setSeed: seed1];
        }

        -(int) next:(int)bits {
            long long oldseed, nextseed;
            long long seed1 = [self.randomSeed longLongValue]; //AtomicLong
            //do {
                oldseed = seed1;
                nextseed = (oldseed * multiplier + addend) & mask;
            //} while (!seed.compareAndSet(oldseed, nextseed));
            [self setRandomSeed: [NSNumber numberWithLongLong:nextseed]];
            ///int ret = (int)(nextseed >>> (48 - bits));
            int ret = (unsigned int)(nextseed >> (48 - bits));
            return ret;
        }

        -(void) setSeed:(long long) seed1 {
            seed1 = (seed1 ^ multiplier) & mask;
            [self setRandomSeed: [NSNumber numberWithLongLong:seed1]];
        }


  • TransactionScope Prematurely Completed

    - by Chris
    I have a block of code that runs within a TransactionScope, and within this block of code I make several calls to the DB: selects, updates, creates, and deletes, the whole gamut. When I execute my delete I execute it using an extension method of the SqlCommand that will automatically resubmit the query if it deadlocks, as this query could potentially hit a deadlock. I believe the problem occurs when a deadlock is hit and the function tries to resubmit the query. This is the error I receive:

        The transaction associated with the current connection has completed but has not been disposed. The transaction must be disposed before the connection can be used to execute SQL statements.

    This is the simple code that executes the query (all of the code below executes within the using of the TransactionScope):

        using (sqlCommand.Connection = new SqlConnection(ConnectionStrings.App))
        {
            sqlCommand.Connection.Open();
            sqlCommand.ExecuteNonQueryWithDeadlockHandling();
        }

    Here is the extension method that resubmits the deadlocked query:

        public static class SqlCommandExtender
        {
            private const int DEADLOCK_ERROR = 1205;
            private const int MAXIMUM_DEADLOCK_RETRIES = 5;
            private const int SLEEP_INCREMENT = 100;

            public static void ExecuteNonQueryWithDeadlockHandling(this SqlCommand sqlCommand)
            {
                int count = 0;
                SqlException deadlockException = null;
                do
                {
                    if (count > 0) Thread.Sleep(count * SLEEP_INCREMENT);
                    deadlockException = ExecuteNonQuery(sqlCommand);
                    count++;
                }
                while (deadlockException != null && count < MAXIMUM_DEADLOCK_RETRIES);

                if (deadlockException != null) throw deadlockException;
            }

            private static SqlException ExecuteNonQuery(SqlCommand sqlCommand)
            {
                try
                {
                    sqlCommand.ExecuteNonQuery();
                }
                catch (SqlException exception)
                {
                    if (exception.Number == DEADLOCK_ERROR) return exception;
                    throw;
                }
                return null;
            }
        }

    The error occurs on the line that executes the nonquery:

        sqlCommand.ExecuteNonQuery();
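    One detail that often matters with this error (a sketch of the idea, not a diagnosis of this particular case): a TransactionScope has its own timeout, and once it expires the ambient transaction aborts on its own, which produces exactly this "completed but has not been disposed" message on the next command. Since the retry loop sleeps between attempts, extending the timeout when the scope is created is worth trying; the value below is arbitrary.

        // Hypothetical: give the ambient transaction more headroom than the default,
        // so the deadlock retry/back-off loop cannot outlive it.
        using System;
        using System.Transactions;

        class TxExample
        {
            static void Run()
            {
                var options = new TransactionOptions
                {
                    IsolationLevel = IsolationLevel.ReadCommitted,
                    Timeout = TimeSpan.FromMinutes(5)   // arbitrary illustrative value
                };

                using (var scope = new TransactionScope(TransactionScopeOption.Required, options))
                {
                    // ... selects, updates, deletes, including the retried delete ...
                    scope.Complete();
                }
            }
        }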


  • ASP.NET MVC authorization & permission to use model classes

    - by Tomek
    Hi, this is my first post here, so hello :) Okay, let's get to the point... I am writing my first app with the ASP.NET MVC framework and I have a problem with checking privileges to use instances of model classes (read, edit). Sample code looks like this:

        // Controller action
        [CustomAuthorize(Roles = "Editor, Admin")]
        public ActionResult Stats(int id)
        {
            User user = userRepository.GetUser(id);

            if (user == null || !user.Activated || user.Removed)
                return View("NotFound");
            else if (!user.IsCurrentSessionUserOwned)
                return View("NotAuthorized");

            return View(user);
        }

    So far the authorize attribute protects only controller actions, so my question is: how do I make a (custom) authorize attribute check not only user roles and usernames, but also the resources instantiated in action methods (above: the User class, but there are other ORM objects like News, Photos etc.)? All of these objects have unique IDs: a user has his own ID, and a News item has its own ID plus a UserID field referencing the Users table (these objects are LINQ2SQL classes). How should I resolve that problem?
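    One pattern for the resource side of the check (a sketch with an assumed interface, not a framework feature): give every ownable entity a common "owner" member via a partial class, and put the ownership test in one helper instead of repeating it per action.

        // Hypothetical ownership check: IOwnedByUser and its member are assumptions;
        // the LINQ2SQL partial classes (User, News, Photo, ...) would implement it.
        public interface IOwnedByUser
        {
            int OwnerUserId { get; }
        }

        public static class ResourceAuthorization
        {
            public static bool CanAccess(IOwnedByUser resource, int currentUserId, bool isAdmin)
            {
                if (resource == null) return false;
                return isAdmin || resource.OwnerUserId == currentUserId;
            }
        }

        // Inside an action:
        // if (!ResourceAuthorization.CanAccess(user, currentUserId, User.IsInRole("Admin")))
        //     return View("NotAuthorized");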


  • which collection should I use

    - by Masna
    Hello, I have a number of custom objects of type X. X has a number of parameters and must be unique in the collection. (I created my own Equals method based on the custom parameters to enforce this.) In each object of type X, I have a list of objects of type Y. I want to be able to add/remove/modify an object Y easily. For example, the add method would look something like add(objTypeX, objTypeY): I would check whether the collection already has an objTypeX. If so, I would add the objTypeY to the already existing objTypeX; else, I would create objTypeX and add objTypeY to it. To modify an objTypeY, it would be something like modify(objTypeX, objTypeY, newObjTypeY): I would get objTypeX out of the collection and change objTypeY to newObjTypeY. Which collection should I use? I tried a HashSet, but I can't get a specific object out of it without running down the list until I find that object. I am developing this in VB.NET 3.5.
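    A structure that usually fits this add/remove/modify pattern (sketched in C# for brevity; the same BCL types exist in .NET 3.5 and are usable from VB.NET): a Dictionary keyed on X, holding a List of Y per key, relying on X's Equals/GetHashCode overrides for the lookup.

        // "X" and "Y" stand in for the poster's types; X is assumed to override
        // Equals/GetHashCode on its identifying parameters, as described above.
        using System.Collections.Generic;

        class X { /* identifying parameters + Equals/GetHashCode overrides */ }
        class Y { /* payload */ }

        class XCollection
        {
            private readonly Dictionary<X, List<Y>> items = new Dictionary<X, List<Y>>();

            public void Add(X x, Y y)
            {
                List<Y> list;
                if (!items.TryGetValue(x, out list))   // existing X: reuse its list
                {
                    list = new List<Y>();
                    items.Add(x, list);                // new X: create its list
                }
                list.Add(y);
            }

            public void Modify(X x, Y oldY, Y newY)
            {
                List<Y> list = items[x];
                int index = list.IndexOf(oldY);
                if (index >= 0) list[index] = newY;
            }

            public bool Remove(X x, Y y)
            {
                List<Y> list;
                return items.TryGetValue(x, out list) && list.Remove(y);
            }
        }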


  • Arguments for moving from LINQtoSQL to Nhibernate?

    - by sah302
    Backstory: Hi all, I just spent a lot of time reading many of the LINQ vs NHibernate threads here and on other sites. I work in a small development team of 4 people and we don't even have any really experienced developers. We work for a small company that has a lot of technical needs but not enough developers to implement them (and hiring more is out of the question right now). Typically our projects (which individually are fairly small) have been coded separately and weren't really layered in any way: code wasn't reused, there were no class libraries, and we just use the LINQtoSQL .dbml files for our projects. We really don't even use objects but pass around values and such; the only time we use objects is when inserting into a database (heck, not even when querying, since you don't need to assign the result to a type and can just bind to a GridView). Despite all this, as I said, our company has a lot of technical needs; no one could come to us for a year and we would still have plenty of work implementing requested features. Well, I have decided to change that a bit, first by creating class libraries and actually adding layers to our applications. I am trying to meet these guys halfway by still using LINQtoSQL as the ORM and still using VB as the language. However, I am finding it a b***h of a time dealing with so many things in LINQtoSQL that I found easy in NHibernate (automatic handling of the session, criteria creation easier than expression trees, generic and dynamic querying easier, etc.). So... Question: How can I convince my lead developers and other senior programmers that switching to NHibernate is a good thing? That being in control of our domain objects is a good thing? That being able to implement interfaces is good? I've tried explaining the advantages of this before, but they aren't understood, because they've never programmed in a truly OO and layered way. Also, one of the counterarguments I can see is that SqlMetal generates those classes automatically and therefore saves a lot of time. I can't really counter that other than saying that spending more time on infrastructure to make it more scalable and flexible is good, but they can't see how. Again, I know the features and advantages of each (well enough, I believe), but I need arguments applicable to my context, hence why I provided the context. I just am not a very good arguer, I guess. (Caveat: For all the LINQtoSQL lovers, I may just not be super proficient at LINQ, but I find it very cumbersome that you are required to download an extra library for dynamic queries, which doesn't support GUID comparisons by default, and I also find the way of updating entities to be cumbersome in terms of data context management, so it could just be that I suck, hehe.)


  • How do I use texture-mapping in a simple ray tracer?

    - by fastrack20
    I am attempting to add features to a ray tracer in C++. Namely, I am trying to add texture mapping to the spheres. For simplicity, I am using an array to store the texture data. I obtained the texture data by using a hex editor and copying the correct byte values into an array in my code. This was just for my testing purposes. When the values of this array correspond to an image that is simply red, it appears to work close to what is expected, except there is no shading. (The bottom right of the image shows what a correct sphere should look like; that sphere is coloured using one set colour, not a texture map.) Another problem is that when the texture map is of something other than one solid colour of pixels, it turns white. My test image is a picture of water, and when it maps, it shows only one ring of bluish pixels surrounding the white colour. When this is done, it simply appears as this: Here are a few code snippets:

        Color getColor(const Object *object, const Ray *ray, float *t)
        {
            if (object->materialType == TEXTDIF || object->materialType == TEXTMATTE) {
                float distance = *t;
                Point pnt = ray->origin + ray->direction * distance;
                Point oc = object->center;
                Vector ve = Point(oc.x, oc.y, oc.z + 1) - oc;
                Normalize(&ve);
                Vector vn = Point(oc.x, oc.y + 1, oc.z) - oc;
                Normalize(&vn);
                Vector vp = pnt - oc;
                Normalize(&vp);

                double phi = acos(-vn.dot(vp));
                float v = phi / M_PI;
                float u;

                float num1 = (float)acos(vp.dot(ve));
                float num = (num1 / (float)sin(phi));
                float theta = num / (float)(2 * M_PI);
                if (theta < 0 || theta == NAN) { theta = 0; }
                if (vn.cross(ve).dot(vp) > 0) {
                    u = theta;
                } else {
                    u = 1 - theta;
                }

                int x = (u * IMAGE_WIDTH) - 1;
                int y = (v * IMAGE_WIDTH) - 1;
                int p = (y * IMAGE_WIDTH + x) * 3;
                return Color(TEXT_DATA[p + 2], TEXT_DATA[p + 1], TEXT_DATA[p]);
            } else {
                return object->color;
            }
        };

    I call the colour code here in Trace:

        if (object->materialType == MATTE)
            return getColor(object, ray, &t);

        Ray shadowRay;
        int isInShadow = 0;
        shadowRay.origin.x = pHit.x + nHit.x * bias;
        shadowRay.origin.y = pHit.y + nHit.y * bias;
        shadowRay.origin.z = pHit.z + nHit.z * bias;
        shadowRay.direction = light->object->center - pHit;
        float len = shadowRay.direction.length();
        Normalize(&shadowRay.direction);
        float LdotN = shadowRay.direction.dot(nHit);
        if (LdotN < 0)
            return 0;
        Color lightColor = light->object->color;
        for (int k = 0; k < numObjects; k++) {
            if (Intersect(objects[k], &shadowRay, &t) && !objects[k]->isLight) {
                if (objects[k]->materialType == GLASS)
                    lightColor *= getColor(objects[k], &shadowRay, &t); // attenuate light color by glass color
                else
                    isInShadow = 1;
                break;
            }
        }
        lightColor *= 1.f/(len*len);
        return (isInShadow) ? 0 : getColor(object, &shadowRay, &t) * lightColor * LdotN;
        }

    I left out the rest of the code so as not to bog down the post, but it can be seen here. Any help is greatly appreciated. The only portion not included in the code is where I define the texture data, which, as I said, is simply taken straight from a bitmap file of the above image. Thanks.


  • Best practice Unit testing abstract classes?

    - by Paul Whelan
    Hello, I was wondering what the best practice is for unit testing abstract classes and classes that extend abstract classes. Should I test the abstract class by extending it, stubbing out the abstract methods, and then testing all the concrete methods, and then only test the overridden methods and the abstract methods in the unit tests for objects that extend my abstract class? Or should I have an abstract test case that can be used to test the methods of the abstract class, and extend this class in my test case for objects that extend the abstract class? EDIT: My abstract class has some concrete methods. I would be interested to see what people are using. Thanks, Paul
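    For concreteness, a minimal sketch of the "abstract test case" variant mentioned above, written in C# with NUnit (the Widget types are invented; the same shape carries over to JUnit and similar frameworks): the shared tests live in the abstract fixture, and each concrete subclass only says how to build its subject.

        using NUnit.Framework;

        // Hypothetical abstract class with one concrete method under test.
        public abstract class Widget
        {
            public string Name { get; protected set; }
            public string Describe() { return "Widget: " + Name; }
        }

        public class RoundWidget : Widget
        {
            public RoundWidget(string name) { Name = name; }
        }

        // Shared tests for every Widget implementation.
        public abstract class WidgetTestsBase
        {
            protected abstract Widget CreateSubject();   // factory the subclasses fill in

            [Test]
            public void Describe_IncludesName()
            {
                Widget widget = CreateSubject();
                StringAssert.Contains(widget.Name, widget.Describe());
            }
        }

        [TestFixture]
        public class RoundWidgetTests : WidgetTestsBase
        {
            protected override Widget CreateSubject() { return new RoundWidget("disc"); }
            // subclass-specific tests for overridden/extra members would go here
        }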


  • Qt - problem appending to QList of QList

    - by bullettime
    I'm trying to append items to a QList at runtime, but I'm running into an error message. Basically what I'm trying to do is to make a QList of QLists and add a few customClass objects to each of the inner lists. Here's my code:

    widget.h:

        class Widget : public QWidget
        {
            Q_OBJECT

        public:
            Widget(QWidget *parent = 0);
            ~Widget();

        public slots:
            static QList<QList<customClass> > testlist(){
                QList<QList<customClass> > mylist;

                for(int w=0 ; w<5 ; w++){
                    mylist.append(QList<customClass>());
                }

                for(int z=0 ; z<mylist.size() ; z++){
                    for(int x=0 ; x<10 ; x++){
                        customClass co = customClass();
                        mylist.at(z).append(co);
                    }
                }

                return mylist;
            }
        };

    customclass.h:

        class customClass
        {
        public:
            customClass(){
                this->varInt = 1;
                this->varQString = "hello world";
            }

            int varInt;
            QString varQString;
        };

    main.cpp:

        int main(int argc, char *argv[])
        {
            QApplication a(argc, argv);
            Widget w;

            QList<QList<customClass> > list;
            list = w.testlist();

            w.show();
            return a.exec();
        }

    But when I try to run the application, it gives off this error:

        error: passing `const QList<customClass>' as `this' argument of `void List<T>::append(const T&) [with T = customClass]' discards qualifiers

    I also tried inserting the objects using foreach:

        foreach(QList<customClass> list, mylist){
            for(int x=0 ; x<10 ; x++){
                list.append(customClass());
            }
        }

    The error was gone, but the customClass objects weren't appended. I could verify that by using a debugging loop in main that showed the inner QLists' sizes as zero. What am I doing wrong?


  • FluentValidation + s#arp

    - by csetzkorn
    Hi, has anyone implemented something like this: http://www.jeremyskinner.co.uk/2010/02/22/using-fluentvalidation-with-an-ioc-container/ in S#arp? Thanks. Christian

    PS: I have made a start at using FluentValidation in S#arp. I have implemented a validator factory:

        public class ResolveType
        {
            private static IWindsorContainer _windsorContainer;

            public static void Initialize(IWindsorContainer windsorContainer)
            {
                _windsorContainer = windsorContainer;
            }

            public static object Of(Type type)
            {
                return _windsorContainer.Resolve(type);
            }
        }

        public class CastleWindsorValidatorFactory : ValidatorFactoryBase
        {
            public override IValidator CreateInstance(Type validatorType)
            {
                return ResolveType.Of(validatorType) as IValidator;
            }
        }

    I think I will use services which can be called by the controllers etc.:

        public class UserValidator : AbstractValidator<User>
        {
            private readonly IUserRepository UserRepository;

            public UserValidator(IUserRepository UserRepository)
            {
                Check.Require(UserRepository != null, "UserRepository may not be null");
                this.UserRepository = UserRepository;

                RuleFor(user => user.Email).NotEmpty();
                RuleFor(user => user.FirstName).NotEmpty();
                RuleFor(user => user.LastName).NotEmpty();
            }
        }

        public interface IUserService
        {
            User CreateUser(User User);
        }

        public class UserService : IUserService
        {
            private readonly IUserRepository UserRepository;
            private readonly UserValidator UserValidator;

            public UserService(IUserRepository UserRepository)
            {
                Check.Require(UserRepository != null, "UserRepository may not be null");
                this.UserRepository = UserRepository;
                this.UserValidator = new UserValidator(UserRepository);
            }

            public User CreateUser(User User)
            {
                UserValidator.Validate(User);
                ...
            }
        }

    Instead of putting concrete validators in the service, I would like to use the above factory somehow. Where and how do I register it in S#arp (Global.asax)? I believe S#arp is geared towards the NHibernate Validator. How do I deregister it? Thanks. Best wishes, Christian
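    A rough sketch of the registration side (the component names are assumptions and the exact S#arp bootstrap hook may differ from project to project): register each AbstractValidator<T> in the Windsor container, then hand the container to the factory from the question during application startup so ValidatorFactoryBase can resolve validators by type.

        // Hypothetical wiring: assumes access to the IWindsorContainer that
        // S#arp Architecture creates in Global.asax, and the types shown above.
        using Castle.MicroKernel.Registration;
        using Castle.Windsor;
        using FluentValidation;

        public static class ValidatorInstaller
        {
            public static void Install(IWindsorContainer container)
            {
                // register the concrete validator against both its own type
                // and IValidator<User>, so either can be resolved
                container.Register(
                    Component.For<IValidator<User>, UserValidator>()
                             .ImplementedBy<UserValidator>()
                             .LifeStyle.Transient);

                ResolveType.Initialize(container);   // the factory from the question
            }
        }

    The services would then resolve (or be injected with) IValidator<User> instead of constructing UserValidator themselves.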


  • Doubt about django model API

    - by Clash
    Hello guys! So, here is what I want to do. I have a model Staff that has a foreign key to the User model. I also have a model Match that has a foreign key to the User model. I want to select how many Matches every Staff has. I don't know how to do that; so far I have only got it working for the User model. From Staff, it will not allow me to annotate on Match. This is what is working right now:

        User.objects.annotate(ammount=Count("match")).filter(Q(ammount__gt=0)).order_by("ammount")

    And this is what I wanted to do:

        Staff.objects.annotate(ammount=Count("match")).filter(Q(ammount__gt=0)).order_by("ammount")

    And by the way, is there any way to filter the matches? I want to filter the matches by a certain column. Thanks a lot in advance!

