Search Results

Search found 5469 results on 219 pages for 'compare and swap'.

  • PyOpenGL - passing transformation matrix into shader

    - by M-V
    I am having trouble passing projection and modelview matrices into the GLSL shader from my PyOpenGL code. My understanding is that OpenGL matrices are column major, but when I pass in the projection and modelview matrices as shown, I don't see anything. I tried the transpose of the matrices; that worked for the modelview matrix, but the projection matrix doesn't work either way. Here is the code:

        import OpenGL
        from OpenGL.GL import *
        from OpenGL.GL.shaders import *
        from OpenGL.GLU import *
        from OpenGL.GLUT import *
        from OpenGL.GLUT.freeglut import *
        from OpenGL.arrays import vbo
        import numpy, math, sys

        strVS = """
        attribute vec3 aVert;
        uniform mat4 uMVMatrix;
        uniform mat4 uPMatrix;
        uniform vec4 uColor;
        varying vec4 vCol;
        void main() {
            // option #1 - fails
            gl_Position = uPMatrix * uMVMatrix * vec4(aVert, 1.0);
            // option #2 - works
            gl_Position = vec4(aVert, 1.0);
            // set color
            vCol = vec4(uColor.rgb, 1.0);
        }
        """
        strFS = """
        varying vec4 vCol;
        void main() {
            // use vertex color
            gl_FragColor = vCol;
        }
        """

        # scene class
        class Scene:
            # initialization
            def __init__(self):
                # create shader
                self.program = compileProgram(compileShader(strVS, GL_VERTEX_SHADER),
                                              compileShader(strFS, GL_FRAGMENT_SHADER))
                glUseProgram(self.program)
                self.pMatrixUniform = glGetUniformLocation(self.program, 'uPMatrix')
                self.mvMatrixUniform = glGetUniformLocation(self.program, "uMVMatrix")
                self.colorU = glGetUniformLocation(self.program, "uColor")
                # attributes
                self.vertIndex = glGetAttribLocation(self.program, "aVert")
                # color
                self.col0 = [1.0, 1.0, 0.0, 1.0]
                # define quad vertices
                s = 0.2
                quadV = [
                    -s, s, 0.0,
                    -s, -s, 0.0,
                    s, s, 0.0,
                    s, s, 0.0,
                    -s, -s, 0.0,
                    s, -s, 0.0
                ]
                # vertices
                self.vertexBuffer = glGenBuffers(1)
                glBindBuffer(GL_ARRAY_BUFFER, self.vertexBuffer)
                vertexData = numpy.array(quadV, numpy.float32)
                glBufferData(GL_ARRAY_BUFFER, 4*len(vertexData), vertexData, GL_STATIC_DRAW)

            # render
            def render(self, pMatrix, mvMatrix):
                # use shader
                glUseProgram(self.program)
                # set proj matrix
                glUniformMatrix4fv(self.pMatrixUniform, 1, GL_FALSE, pMatrix)
                # set modelview matrix
                glUniformMatrix4fv(self.mvMatrixUniform, 1, GL_FALSE, mvMatrix)
                # set color
                glUniform4fv(self.colorU, 1, self.col0)
                # enable arrays
                glEnableVertexAttribArray(self.vertIndex)
                # set buffers
                glBindBuffer(GL_ARRAY_BUFFER, self.vertexBuffer)
                glVertexAttribPointer(self.vertIndex, 3, GL_FLOAT, GL_FALSE, 0, None)
                # draw
                glDrawArrays(GL_TRIANGLES, 0, 6)
                # disable arrays
                glDisableVertexAttribArray(self.vertIndex)

        class Renderer:
            def __init__(self):
                pass

            def reshape(self, width, height):
                self.width = width
                self.height = height
                self.aspect = width/float(height)
                glViewport(0, 0, self.width, self.height)
                glEnable(GL_DEPTH_TEST)
                glDisable(GL_CULL_FACE)
                glClearColor(0.8, 0.8, 0.8, 1.0)
                glutPostRedisplay()

            def keyPressed(self, *args):
                sys.exit()

            def draw(self):
                glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT)
                # build projection matrix
                fov = math.radians(45.0)
                f = 1.0/math.tan(fov/2.0)
                zN, zF = (0.1, 100.0)
                a = self.aspect
                pMatrix = numpy.array([f/a, 0.0, 0.0, 0.0,
                                       0.0, f, 0.0, 0.0,
                                       0.0, 0.0, (zF+zN)/(zN-zF), -1.0,
                                       0.0, 0.0, 2.0*zF*zN/(zN-zF), 0.0], numpy.float32)
                # modelview matrix
                mvMatrix = numpy.array([1.0, 0.0, 0.0, 0.0,
                                        0.0, 1.0, 0.0, 0.0,
                                        0.0, 0.0, 1.0, 0.0,
                                        0.5, 0.0, -5.0, 1.0], numpy.float32)
                # render
                self.scene.render(pMatrix, mvMatrix)
                # swap buffers
                glutSwapBuffers()

            def run(self):
                glutInitDisplayMode(GLUT_RGBA)
                glutInitWindowSize(400, 400)
                self.window = glutCreateWindow("Minimal")
                glutReshapeFunc(self.reshape)
                glutDisplayFunc(self.draw)
                glutKeyboardFunc(self.keyPressed)  # Checks for key strokes
                self.scene = Scene()
                glutMainLoop()

        glutInit(sys.argv)
        prog = Renderer()
        prog.run()

    When I use option #2 in the shader, without either matrix, I get the following output (screenshot not reproduced here). What am I doing wrong?
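    A quick way to rule out row-major/column-major mix-ups is to build each matrix in its textbook row-major form and let OpenGL transpose it on upload. This is only a debugging sketch, not a confirmed fix for the code above; the perspective helper is my own invention, and loc stands in for a uniform location obtained as in the question.

        import math
        import numpy
        from OpenGL.GL import glUniformMatrix4fv, GL_TRUE, GL_FALSE

        def perspective(fov_deg, aspect, zN, zF):
            # Textbook (row-major) perspective matrix.
            f = 1.0 / math.tan(math.radians(fov_deg) / 2.0)
            return numpy.array([
                [f / aspect, 0.0, 0.0,                   0.0],
                [0.0,        f,   0.0,                   0.0],
                [0.0,        0.0, (zF + zN) / (zN - zF), 2.0 * zF * zN / (zN - zF)],
                [0.0,        0.0, -1.0,                  0.0]], numpy.float32)

        # With a GL context current and loc = glGetUniformLocation(...):
        # glUniformMatrix4fv(loc, 1, GL_TRUE, perspective(45.0, 1.0, 0.1, 100.0))     # GL transposes
        # glUniformMatrix4fv(loc, 1, GL_FALSE, perspective(45.0, 1.0, 0.1, 100.0).T)  # pre-transposed

    If both uniforms are uploaded with one consistent convention and the quad still disappears, the matrix contents themselves become the next suspect.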

  • Routed Command Question

    - by Andrew
    I'd like to implement a custom command to capture a Backspace key gesture inside of a textbox, but I don't know how. I wrote a test program in order to understand what's going on, but the behaviour of the program is rather confusing. Basically, I just need to be able to handle the Backspace key gesture via WPF commands while keyboard focus is in the textbox, and without disrupting the normal behaviour of the Backspace key within the textbox. Here's the XAML for the main window and the corresponding code-behind, too (note that I created a second command for the Enter key, just to compare its behaviour to that of the Backspace key):

        <Window x:Class="WpfApplication1.Window1"
            xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
            xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
            Title="Window1" Height="300" Width="300">
            <Grid>
                <TextBox Margin="44,54,44,128" Name="textBox1" />
            </Grid>
        </Window>

    And here's the corresponding code-behind:

        using System.Windows;
        using System.Windows.Input;

        namespace WpfApplication1
        {
            /// <summary>
            /// Interaction logic for EntryListView.xaml
            /// </summary>
            public partial class Window1 : Window
            {
                public static RoutedCommand EnterCommand = new RoutedCommand();
                public static RoutedCommand BackspaceCommand = new RoutedCommand();

                public Window1()
                {
                    InitializeComponent();
                    CommandBinding cb1 = new CommandBinding(EnterCommand, EnterExecuted, EnterCanExecute);
                    CommandBinding cb2 = new CommandBinding(BackspaceCommand, BackspaceExecuted, BackspaceCanExecute);
                    this.CommandBindings.Add(cb1);
                    this.CommandBindings.Add(cb2);
                    KeyGesture kg1 = new KeyGesture(Key.Enter);
                    KeyGesture kg2 = new KeyGesture(Key.Back);
                    InputBinding ib1 = new InputBinding(EnterCommand, kg1);
                    InputBinding ib2 = new InputBinding(BackspaceCommand, kg2);
                    this.InputBindings.Add(ib1);
                    this.InputBindings.Add(ib2);
                }

                #region Command Handlers
                private void EnterCanExecute(object sender, CanExecuteRoutedEventArgs e)
                {
                    MessageBox.Show("Inside EnterCanExecute Method.");
                    e.CanExecute = true;
                }
                private void EnterExecuted(object sender, ExecutedRoutedEventArgs e)
                {
                    MessageBox.Show("Inside EnterExecuted Method.");
                    e.Handled = true;
                }
                private void BackspaceCanExecute(object sender, CanExecuteRoutedEventArgs e)
                {
                    MessageBox.Show("Inside BackspaceCanExecute Method.");
                    e.Handled = true;
                }
                private void BackspaceExecuted(object sender, ExecutedRoutedEventArgs e)
                {
                    MessageBox.Show("Inside BackspaceExecuted Method.");
                    e.Handled = true;
                }
                #endregion Command Handlers
            }
        }

    Any help would be very much appreciated. Thanks! Andrew

  • Java: CSV file read & write.

    - by battousai622
    I'm reading in two CSV files: store_inventory and new_acquisitions. I want to be able to compare the store_inventory CSV file with new_acquisitions. 1) If the item names match, just update the quantity in store_inventory. 2) If new_acquisitions has a new item that does not exist in store_inventory, then add it to store_inventory. Here's what I have so far, but it's not very good. I added comments where I need to add tasks 1) and 2). Any advice or code would be great, thanks.

        File new_acq = new File("/src/test/new_acquisitions.csv");
        Scanner acq_scan = null;
        try {
            acq_scan = new Scanner(new_acq);
        } catch (FileNotFoundException ex) {
            Logger.getLogger(mainpage.class.getName()).log(Level.SEVERE, null, ex);
        }
        String itemName;
        int quantity;
        Double cost;
        Double price;

        File store_inv = new File("/src/test/store_inventory.csv");
        Scanner invscan = null;
        try {
            invscan = new Scanner(store_inv);
        } catch (FileNotFoundException ex) {
            Logger.getLogger(mainpage.class.getName()).log(Level.SEVERE, null, ex);
        }
        String itemNameInv;
        int quantityInv;
        Double costInv;
        Double priceInv;

        while (acq_scan.hasNext()) {
            String line = acq_scan.nextLine();
            if (line.charAt(0) == '#') {
                continue;
            }
            String[] split = line.split(",");
            itemName = split[0];
            quantity = Integer.parseInt(split[1]);
            cost = Double.parseDouble(split[2]);
            price = Double.parseDouble(split[3]);
            while (invscan.hasNext()) {
                String line2 = invscan.nextLine();
                if (line2.charAt(0) == '#') {
                    continue;
                }
                String[] split2 = line2.split(",");
                itemNameInv = split2[0];
                quantityInv = Integer.parseInt(split2[1]);
                costInv = Double.parseDouble(split2[2]);
                priceInv = Double.parseDouble(split2[3]);
                if (itemName == itemNameInv) {
                    //update quantity
                }
            }
            //add new entry into csv file
        }

    Thanks again for any help. =]
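    Two things are certain pitfalls in the code above: itemName == itemNameInv compares references in Java (use .equals), and the inner loop exhausts invscan after the first acquisition, so later items compare against nothing. The merge itself is simplest keyed by item name; a sketch of that shape in Python (not Java), assuming the column layout name,qty,cost,price from the question:

        import csv

        def merge_inventory(inventory_path, acquisitions_path):
            # Load current inventory keyed by item name.
            with open(inventory_path, newline="") as f:
                rows = {r[0]: r for r in csv.reader(f) if r and not r[0].startswith("#")}
            # Apply acquisitions: update quantity if known, otherwise add the row.
            with open(acquisitions_path, newline="") as f:
                for name, qty, cost, price in (r for r in csv.reader(f) if r and not r[0].startswith("#")):
                    if name in rows:
                        rows[name][1] = str(int(rows[name][1]) + int(qty))
                    else:
                        rows[name] = [name, qty, cost, price]
            # Write the merged inventory back out.
            with open(inventory_path, "w", newline="") as f:
                csv.writer(f).writerows(rows.values())

    The same shape in Java would be a HashMap<String, String[]> built from store_inventory, updated from new_acquisitions, then rewritten in one pass.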

  • What was "The Next Big Thing" when you were just starting out in programming?

    - by Andrew
    I'm at the beginning of my career and there are lots of things which are being touted as "The Next Big Thing". For example:

        - Dependency Injection (Spring, etc)
        - MVC (Struts, ASP.NET MVC)
        - ORMs (Linq To SQL, Hibernate)
        - Agile Software Development

    These things have probably been around for some time, but I've only just started out. And don't get me wrong, I think these things are great! So, what was "The Next Big Thing" when you were starting out? When was it? Were people sceptical of it at first? Why? Did you think it would catch on? Did it pan out and become widely accepted/used? If not, why not?

    EDIT: It's been nearly a week since I first posted this question and I can safely say that I did not expect such explosive interest. I asked the question so that I could gain a perspective on what kinds of innovations in programming people thought were most important when they were starting out. At the time of writing this I have read ~95% of all answers. To answer a few questions: the "Next Big Things" I listed are ones that I am currently really excited about and that I had not really been exposed to until I started working. I'm hoping to implement some or all of these in the near future at my current workplace. To many people they are probably old news. In regards to the "is this a real question" debate, I can see that obviously hasn't been settled yet. I feel bad whenever I read a comment saying that these kinds of questions take away from the real meaning of SO. I'm not wholly convinced that they don't. On the other hand, I have seen a lot of comments saying what a great question it is. Anyway, I have chosen "The Internet!" as my answer to this question. I don't think (in my very humble opinion, and, it seems, many SOers' opinions) that many things related to programming can compare. Nowadays every business and their dog has a website which can do anything from simply supplying information to purchasing goods halfway around the world to updating your blog. And of course, all these businesses need people like us. Thanks to everyone for all the great answers!

  • What is the best algorithm for this problem?

    - by mark
    What is the most efficient algorithm to solve the following problem? Given 6 arrays, D1, D2, D3, D4, D5 and D6, each containing 6 numbers, like:

        D1[0] = number    D2[0] = number    ...    D6[0] = number
        D1[1] = another number    D2[1] = another number    ...
        ...
        D1[5] = yet another number    ...

    Given a second array ST1, containing 1 number:

        ST1[0] = 6

    Given a third array ans, containing 6 numbers:

        ans[0] = 3, ans[1] = 4, ans[2] = 5, ... ans[5] = 8

    Using as index for the arrays D1..D6 the number that goes from 0 to the number stored in ST1[0] minus one (in this example 6, so from 0 to 6-1), compare each ans value against each D array. My algorithm so far is below; I tried to keep everything unlooped as much as possible.

        EML := ST1[0]    // number contained in ST1[0]
        EML1 := 0        // start index for the D arrays
        while EML1 < EML
            if D1[ELM1] = ans[0] goto two
            if D2[ELM1] = ans[0] goto two
            if D3[ELM1] = ans[0] goto two
            if D4[ELM1] = ans[0] goto two
            if D5[ELM1] = ans[0] goto two
            if D6[ELM1] = ans[0] goto two
            ELM1 = ELM1 + 1
        return 0    // bad row of numbers, if while ends

        two:
        EML1 := 0    // start index for the D arrays
        while EML1 < EML
            if D1[ELM1] = ans[1] goto two
            if D2[ELM1] = ans[1] goto two
            if D3[ELM1] = ans[1] goto two
            if D4[ELM1] = ans[1] goto two
            if D5[ELM1] = ans[1] goto two
            if D6[ELM1] = ans[1] goto two
            ELM1 = ELM1 + 1
        return 0

        (blocks three, four and five repeat the same six tests against ans[2],
        ans[3] and ans[4] respectively, each again jumping to "two" as written
        in the original)

        six:
        EML1 := 0    // start index for the D arrays
        while EML1 < EML
            if D1[ELM1] = ans[0] return 1    // good row of numbers
            if D2[ELM1] = ans[0] return 1
            if D3[ELM1] = ans[0] return 1
            if D4[ELM1] = ans[0] return 1
            if D5[ELM1] = ans[0] return 1
            if D6[ELM1] = ans[0] return 1
            ELM1 = ELM1 + 1
        return 0

    As language of choice, it would be pure C.
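    Since the test appears to be "does every ans value occur among the first ST1[0] entries of at least one D array?", a set lookup answers it in near-linear time instead of six comparisons per index per ans value. A sketch in Python (the question targets C, where a small lookup table plays the same role); the function name and calling convention are made up, and this assumes the intent just described:

        def good_row(D, ans, limit):
            # D is a list of the six arrays; limit is ST1[0].
            seen = set()
            for d in D:
                seen.update(d[:limit])
            return 1 if all(a in seen for a in ans) else 0

        # Example:
        D = [[3, 1, 9, 2, 7, 4]] * 6
        print(good_row(D, [3, 4, 5, 6, 7, 8], 6))  # 0 - 5, 6 and 8 are missing

    In C, with values in a known small range, the set would be a boolean array indexed by value, filled in one pass over the D arrays.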

  • Mercurial for Beginners: The Definitive Practical Guide

    - by Laz
    Inspired by Git for beginners: The definitive practical guide. This is a compilation of information on using Mercurial for beginners for practical use. Beginner - a programmer who has touched source control without understanding it very well. Practical - covering situations that the majority of users often encounter - creating a repository, branching, merging, pulling/pushing from/to a remote repository, etc.

    Notes:

        - Explain how to get something done rather than how something is implemented.
        - Deal with one question per answer.
        - Answer clearly and as concisely as possible.
        - Edit/extend an existing answer rather than create a new answer on the same topic.
        - Please provide a link to the Mercurial wiki or the HG Book for people who want to learn more.

    Questions:

    Installation/Setup
        - How to install Mercurial?
        - How to set up Mercurial?
        - How do you create a new project/repository?
        - How do you configure it to ignore files?

    Working with the code
        - How do you get the latest code?
        - How do you check out code?
        - How do you commit changes?
        - How do you see what's uncommitted, or the status of your current codebase?
        - How do you destroy unwanted commits?
        - How do you compare two revisions of a file, or your current file and a previous revision?
        - How do you see the history of revisions to a file?
        - How do you handle binary files (visio docs, for instance, or compiler environments)?
        - How do you merge files changed at the "same time"?

    Tagging, branching, releases, baselines
        - How do you 'mark', 'tag' or 'release' a particular set of revisions for a particular set of files so you can always pull that one later?
        - How do you pull a particular 'release'?
        - How do you branch?
        - How do you merge branches?
        - How do you merge parts of one branch into another branch?

    Other
        - Good GUI/IDE plugin for Mercurial? Advantages/disadvantages?
        - Any other common tasks a beginner should know?
        - How do I interface with Subversion?

    Other Mercurial references
        - Mercurial: The Definitive Guide
        - Mercurial Wiki
        - Meet Mercurial | Peepcode Screencast

  • Help me stabilize this jRun configuration (CF9/Win2k3/IIS6)

    - by jfrobishow
    Not sure if this would be better suited for ServerFault, but since I am not an admin but a developer I figured I would try SO. We've been struggling to keep our multi-server configuration stable for quite some time now. At the end of last month we were running under CF 7.0.2 on a two-server setup (one instance each). At that point we managed to get our uptime to around 1 week per instance before they would restart by themselves. Since the beginning of the month we upgraded to CF 9 and we're back to square one with multiple restarts a day. Our current configuration is 2 Win2k3 servers, running a cluster of 4 instances, 2 instances per server. At this point we are pretty certain this is due to improper JVM settings. We've been toying with them, and while some are more stable than others we never quite got it right. From the default:

        java.args=-server -Xmx512m -Dsun.io.useCanonCaches=false -XX:MaxPermSize=192m -XX:+UseParallelGC -Dcoldfusion.rootDir={application.home}/

    To currently:

        java.args=-server -Xmx896m -Dsun.io.useCanonCaches=false -XX:MaxPermSize=512m -XX:SurvivorRatio=8 -XX:TargetSurvivorRatio=90 -XX:+UseParallelGC -Dcoldfusion.rootDir={application.home}/ -verbose:gc -Xloggc:c:/Jrun4/logs/gc/gcInstance1b.log

    We have determined that we do need more than the default 512MB simply by monitoring with FusionReactor; on average our amount of memory consumed hovers in the mid 300MB range and can go up to the low 700MB range under heavy load. Most of the crashes will be logged in jrun4/bin/hs_err_pid*.log, always an "Out of swap space". I've attached links to the hs_err and garbage collector log files from yesterday at the bottom of the post. The relevant part is (I think) this:

        Heap
         PSYoungGen      total 89856K, used 19025K [0x55490000, 0x5b6f0000, 0x5b810000)
          eden space 79232K, 16% used [0x55490000,0x561a64c0,0x5a1f0000)
          from space 10624K, 52% used [0x5ac90000,0x5b20e2f8,0x5b6f0000)
          to   space 10752K,  0% used [0x5a1f0000,0x5a1f0000,0x5ac70000)
         PSOldGen        total 460416K, used 308422K [0x23810000, 0x3f9b0000, 0x55490000)
          object space 460416K, 66% used [0x23810000,0x36541bb8,0x3f9b0000)
         PSPermGen       total 107520K, used 106079K [0x03810000, 0x0a110000, 0x23810000)
          object space 107520K, 98% used [0x03810000,0x09fa7e40,0x0a110000)

    From it, I gather that it's the PSPermGen that is full (most logs will show the same before a crash), which is why we increased MaxPermSize, but the total still shows as 107520K!?!? No one here is a JRun expert, so any help or even ideas on what to try next would be greatly appreciated!! The log files: Sorry, I know sendspace isn't the friendliest of places - if you have other host suggestions for log files, let me know and I'll update the post (SO doesn't like them inline, it blows up the format of the post). The hs_err log file: http://www.sendspace.com/file/fgak8l The gc log: http://www.sendspace.com/file/w0r2ct

  • OpenGL Polygon Stipple Not Working On Different Machine

    - by FranticPedantic
    I have a situation where I am trying to draw a semi-transparent rectangle over a background that is not using OpenGL, so I cannot use blending. I decided to use polygon stippling for a 'screen door transparency' effect, as recommended by some. It works fine on my machine and some others, but on some machines with slightly old Intel graphics cards it fails to render the rectangle at all. If I turn off polygon stipple, it renders fine (but without the stipple). I have compared many of the state variables that I thought might affect it (see code) between machines, and they are all the same, and I get no errors.

        static const GLubyte stipplePatternChkr[128];  // definition omitted for clarity
                                                       // but works on my machine

        // stipple the box
        glPolygonMode(GL_FRONT_AND_BACK, GL_FILL);
        glColor4ubv(Color(COLORREF_PADGRAY));
        glEnable(GL_POLYGON_STIPPLE);
        glPolygonStipple(stipplePatternChkr);

        CRect rcStipple(dim);
        rcStipple.DeflateRect(padding - 1, padding - 1);
        glBegin(GL_QUADS);
        glVertex2i(rcStipple.left, rcStipple.bottom);
        glVertex2i(rcStipple.right, rcStipple.bottom);
        glVertex2i(rcStipple.right, rcStipple.top);
        glVertex2i(rcStipple.left, rcStipple.top);
        glEnd();
        glDisable(GL_POLYGON_STIPPLE);

        int err = glGetError();
        if (err != GL_NO_ERROR) {
            TRACE("glError(%s: %s)\n", s, gluErrorString(err));
        }

        float x;
        glGetFloatv(GL_UNPACK_ALIGNMENT, &x);
        TRACE("unpack alignment %f\n", x);
        glGetFloatv(GL_UNPACK_IMAGE_HEIGHT, &x);
        TRACE("unpack height %f\n", x);
        glGetFloatv(GL_UNPACK_LSB_FIRST, &x);
        TRACE("unpack lsb %f\n", x);
        glGetFloatv(GL_UNPACK_ROW_LENGTH, &x);
        TRACE("unpack length %f\n", x);
        glGetFloatv(GL_UNPACK_SKIP_PIXELS, &x);
        TRACE("unpack skip %f\n", x);
        glGetFloatv(GL_UNPACK_SWAP_BYTES, &x);
        TRACE("unpack swap %f\n", x);

  • Is ResourceBundle fallback resolution broken in Resin3x?

    - by LES2
    Given the following ResourceBundle properties files:

        messages.properties
        messages_en.properties
        messages_es.properties
        messages_{some locale}.properties

    Note: messages.properties contains all the messages for the default locale. messages_en.properties is really empty - it's just there for correctness. messages_en.properties will fall back to messages.properties! And given the following config params in web.xml:

        <context-param>
            <param-name>javax.servlet.jsp.jstl.fmt.localizationContext</param-name>
            <param-value>messages</param-value>
        </context-param>
        <context-param>
            <param-name>javax.servlet.jsp.jstl.fmt.fallbackLocale</param-name>
            <param-value>en</param-value>
        </context-param>

    I would expect that if the chosen locale is 'es', and a resource is not translated in 'es', then it would fall back to 'en', and finally to messages.properties (since messages_en.properties is empty). This is how things work in Jetty. I've also tested this on WebSphere.

    Resin is the problem. The problem is when I get to Resin (3.0.23). Fallback resolution does not work at all! In order to get any messages to display, I must do the following:

        1. Rename messages.properties to messages_en.properties (essentially, swap the contents of messages.properties and messages_en.properties).
        2. Make sure every key in messages_en.properties is also defined in messages_{every other locale}.properties (even if the exact same).

    If I don't do this, I get "???some.key???" in the JSPs. Please help! This is perplexing. -- LES

    SOLUTION: Add the following to pom.xml (if you're using maven):

        ...
        <properties>
            <taglibs.version>1.1.2</taglibs.version>
        </properties>
        ...
        <!-- Resin ships with a crappy JSTL implementation that doesn't work
             with fallback locales for resource bundles correctly; we therefore
             include our own JSTL implementation in the WAR, and avoid this
             problem. This can be removed if the target container is not resin. -->
        <dependency>
            <groupId>taglibs</groupId>
            <artifactId>standard</artifactId>
            <version>${taglibs.version}</version>
            <scope>compile</scope>
        </dependency>

  • Reducing Integer Fractions Algorithm - Solution Explanation?

    - by Andrew Tomazos - Fathomling
    This is a followup to this problem: Reducing Integer Fractions Algorithm. Following is a solution to the problem from a grandmaster:

        #include <cstdio>
        #include <algorithm>
        #include <functional>
        using namespace std;

        const int MAXN = 100100;
        const int MAXP = 10001000;
        int p[MAXP];

        void init() {
            for (int i = 2; i < MAXP; ++i) {
                if (p[i] == 0) {
                    for (int j = i; j < MAXP; j += i) {
                        p[j] = i;
                    }
                }
            }
        }

        void f(int n, vector<int>& a, vector<int>& x) {
            a.resize(n);
            vector<int>(MAXP, 0).swap(x);
            for (int i = 0; i < n; ++i) {
                scanf("%d", &a[i]);
                for (int j = a[i]; j > 1; j /= p[j]) {
                    ++x[p[j]];
                }
            }
        }

        void g(const vector<int>& v, vector<int> w) {
            for (int i: v) {
                for (int j = i; j > 1; j /= p[j]) {
                    if (w[p[j]] > 0) {
                        --w[p[j]];
                        i /= p[j];
                    }
                }
                printf("%d ", i);
            }
            puts("");
        }

        int main() {
            int n, m;
            vector<int> a, b, x, y, z;
            init();
            scanf("%d%d", &n, &m);
            f(n, a, x);
            f(m, b, y);
            printf("%d %d\n", n, m);
            transform(x.begin(), x.end(), y.begin(),
                      insert_iterator<vector<int> >(z, z.end()),
                      [](int a, int b) { return min(a, b); });
            g(a, z);
            g(b, z);
            return 0;
        }

    It isn't clear to me how it works. Can anyone explain it? The equivalence is as follows: a is the numerator vector of length n, b is the denominator vector of length m.
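    A reading of the code, restated in Python to make the moving parts visible (a sketch, not a replacement): init() fills p[j] with the largest prime factor of j, so repeatedly dividing by p[j] factorizes a number; f() counts the multiplicity of every prime across a whole vector, i.e. it factorizes the product of the vector; the transform takes the elementwise minimum of the two count vectors, which is the gcd of the two products expressed as a prime multiset; and g() divides those shared primes out of the individual elements, consuming each shared factor at most once.

        from collections import Counter

        def largest_prime_factor_table(limit):
            # p[j] = largest prime factor of j (later primes overwrite earlier ones).
            p = [0] * limit
            for i in range(2, limit):
                if p[i] == 0:                 # i is prime
                    for j in range(i, limit, i):
                        p[j] = i
            return p

        def factor_counts(nums, p):
            # Multiset of all prime factors across the whole list (= of its product).
            c = Counter()
            for v in nums:
                while v > 1:
                    c[p[v]] += 1
                    v //= p[v]
            return c

        def reduce_fractions(a, b, limit=1000):
            p = largest_prime_factor_table(limit)
            shared = factor_counts(a, p) & factor_counts(b, p)   # elementwise min
            def strip(nums):
                w = Counter(shared)           # fresh copy, like g() taking w by value
                out = []
                for v in nums:
                    r, u = v, v
                    while u > 1:
                        q = p[u]
                        if w[q] > 0:          # a shared factor is still available
                            w[q] -= 1
                            r //= q
                        u //= q
                    out.append(r)
                return out
            return strip(a), strip(b)

        print(reduce_fractions([4, 6], [8, 3]))  # ([1, 1], [1, 1]) - 24/24 fully reduced

    Each call to strip gets its own copy of the shared counts, mirroring how g() receives w by value in the C++ original.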

  • What is the Fastest Way to Check for a Keyword in a List of Keywords in Delphi?

    - by lkessler
    I have a small list of keywords. What I'd really like to do is akin to:

        case MyKeyword of
            'CHIL': (code for CHIL);
            'HUSB': (code for HUSB);
            'WIFE': (code for WIFE);
            'SEX':  (code for SEX);
            else    (code for everything else);
        end;

    Unfortunately the CASE statement can't be used like that for strings. I could use the straight IF THEN ELSE IF construct, e.g.:

        if MyKeyword = 'CHIL' then (code for CHIL)
        else if MyKeyword = 'HUSB' then (code for HUSB)
        else if MyKeyword = 'WIFE' then (code for WIFE)
        else if MyKeyword = 'SEX' then (code for SEX)
        else (code for everything else);

    but I've heard this is relatively inefficient. What I had been doing instead is:

        P := pos(' ' + MyKeyword + ' ', ' CHIL HUSB WIFE SEX ');
        case P of
            1:  (code for CHIL);
            6:  (code for HUSB);
            11: (code for WIFE);
            17: (code for SEX);
            else (code for everything else);
        end;

    This, of course, is not the best programming style, but it works fine for me and up to now didn't make a difference. So what is the best way to rewrite this in Delphi so that it is both simple and understandable, but also fast? (For reference, I am using Delphi 2009 with Unicode strings.)

    Followup: Toby recommended I simply use the IF THEN ELSE construct. Looking back at my examples that used a CASE statement, I can see how that is a viable answer. Unfortunately, my inclusion of the CASE inadvertently hid my real question. I actually don't care which keyword it is. That is just a bonus if the particular method can identify it like the POS method can. What I need is to know whether or not the keyword is in the set of keywords. So really I want to know if there is anything better than:

        if pos(' ' + MyKeyword + ' ', ' CHIL HUSB WIFE SEX ') > 0 then

    The IF THEN ELSE equivalent does not seem better in this case, being:

        if (MyKeyword = 'CHIL') or (MyKeyword = 'HUSB') or (MyKeyword = 'WIFE') or (MyKeyword = 'SEX') then

    In Barry's comment to Kornel's question, he mentions the TDictionary generic. I've not yet picked up on the new generic collections and it looks like I should delve into them. My question here would be whether they are built for efficiency, and how using TDictionary would compare in looks and in speed to the above two lines. In later profiling, I have found that the concatenation of strings as in (' ' + MyKeyword + ' ') is VERY expensive time-wise and should be avoided whenever possible. Almost any other solution is better than doing this.
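    The general answer behind TDictionary is hash-based membership: build the set once, then each lookup is O(1) on average and involves no string concatenation. The concept, sketched in Python rather than Delphi:

        # Membership via a hash set - the idea behind Delphi's TDictionary,
        # shown in Python. Build the set once and reuse it for every lookup.
        KEYWORDS = frozenset({"CHIL", "HUSB", "WIFE", "SEX"})

        def handle(keyword):
            if keyword in KEYWORDS:     # O(1) average, no concatenation
                return "known keyword"
            return "everything else"

        print(handle("WIFE"))   # known keyword
        print(handle("NOTE"))   # everything else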

  • Why does Python's math.factorial not play nice with threads?

    - by W1N9Zr0
    Why does math.factorial act so weird in a thread? Here is an example; it creates three threads:

        - a thread that just sleeps for a while
        - a thread that increments an int for a while
        - a thread that runs math.factorial on a large number

    It calls start on the threads, then join with a timeout. The sleep and spin threads work as expected: they return from start right away, and then sit in the join for the timeout. The factorial thread, on the other hand, does not return from start until it runs to the end!

        import sys
        from threading import Thread
        from time import sleep, time
        from math import factorial

        # Helper class that stores a start time to compare to
        class timed_thread(Thread):
            def __init__(self, time_start):
                Thread.__init__(self)
                self.time_start = time_start

        # Thread that just executes sleep()
        class sleep_thread(timed_thread):
            def run(self):
                sleep(15)
                print "st DONE:\t%f" % (time() - time_start)

        # Thread that increments a number for a while
        class spin_thread(timed_thread):
            def run(self):
                x = 1
                while x < 120000000:
                    x += 1
                print "sp DONE:\t%f" % (time() - time_start)

        # Thread that calls math.factorial with a large number
        class factorial_thread(timed_thread):
            def run(self):
                factorial(50000)
                print "ft DONE:\t%f" % (time() - time_start)

        # the tests
        print
        print "sleep_thread test"
        time_start = time()
        st = sleep_thread(time_start)
        st.start()
        print "st.start:\t%f" % (time() - time_start)
        st.join(2)
        print "st.join:\t%f" % (time() - time_start)
        print "sleep alive:\t%r" % st.isAlive()

        print
        print "spin_thread test"
        time_start = time()
        sp = spin_thread(time_start)
        sp.start()
        print "sp.start:\t%f" % (time() - time_start)
        sp.join(2)
        print "sp.join:\t%f" % (time() - time_start)
        print "sp alive:\t%r" % sp.isAlive()

        print
        print "factorial_thread test"
        time_start = time()
        ft = factorial_thread(time_start)
        ft.start()
        print "ft.start:\t%f" % (time() - time_start)
        ft.join(2)
        print "ft.join:\t%f" % (time() - time_start)
        print "ft alive:\t%r" % ft.isAlive()

    And here is the output on Python 2.6.5 on CentOS x64:

        sleep_thread test
        st.start:    0.000675
        st.join:     2.006963
        sleep alive: True

        spin_thread test
        sp.start:    0.000595
        sp.join:     2.010066
        sp alive:    True

        factorial_thread test
        ft DONE:     4.475453
        ft.start:    4.475589
        ft.join:     4.475615
        ft alive:    False

        st DONE:     10.994519
        sp DONE:     12.054668

    I've tried this on Python 2.6.5 on CentOS x64 and 2.7.2 on Windows x86, and the factorial thread does not return from start on either of them until the thread is done executing. I've also tried this with PyPy 1.8.0 on Windows x86, and there the result is slightly different. The start does return immediately, but then the join doesn't time out!

        sleep_thread test
        st.start:    0.001000
        st.join:     2.001000
        sleep alive: True

        spin_thread test
        sp.start:    0.000000
        sp DONE:     0.197000
        sp.join:     0.236000
        sp alive:    False

        factorial_thread test
        ft.start:    0.032000
        ft DONE:     9.011000
        ft.join:     9.012000
        ft alive:    False

        st DONE:     12.763000
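    A likely explanation, consistent with what the test shows: sleep() releases the GIL and the spin loop is ordinary bytecode, so the interpreter can switch threads during them; math.factorial(50000), by contrast, executes as one long C-level call in CPython, and thread switches only happen between bytecodes - so even the parent's start() can't complete until the call returns. PyPy presumably differs because its switch points differ. One workaround is to move the CPU-bound call into a separate process with its own GIL; a minimal sketch:

        from math import factorial
        from multiprocessing import Pool

        if __name__ == "__main__":
            pool = Pool(processes=1)
            result = pool.apply_async(factorial, (50000,))  # returns immediately
            print("main thread stays responsive")
            print(len(str(result.get())))                   # ~213,000 digits
            pool.close()
            pool.join()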

  • How to stream XML data using XOM?

    - by Jonik
    Say I want to output a huge set of search results, as XML, into a PrintWriter or an OutputStream, using XOM. The resulting XML would look like this:

        <?xml version="1.0" encoding="UTF-8"?>
        <resultset>
            <result>
                [child elements and data]
            </result>
            ...
            [1000s of result elements more]
        </resultset>

    Because the resulting XML document could be big (tens or hundreds of megabytes, perhaps), I want to output it in a streaming fashion (instead of creating the whole Document in memory and then writing that). The granularity of outputting one <result> at a time is fine, so I want to generate one <result> after another, and write it into the stream. Assume there's already a method that helps with iterating the results and generating Element objects:

        public nu.xom.Element getNextResult();

    So I'd simply like to do something like this pseudocode (automatic flushing enabled, so don't worry about that):

        open stream/writer
        write declaration
        write start tag for <resultset>
        while more results:
            write next <result> element
        write end tag for <resultset>
        close stream/writer

    I've been looking at Serializer, but the necessary methods, writeStartTag(Element), writeEndTag(Element), write(DocType), are protected, not public! Is there no other way than to subclass Serializer to be able to use those methods, or to manually write the start and end tags directly into the stream as Strings, bypassing XOM altogether? (The latter wouldn't be too bad in this simple example, but in the general case it would get quite ugly.) Am I missing something or is XOM just not made for this? With dom4j I could do this easily using XMLWriter - it has constructors that take a Writer or OutputStream, and methods writeOpen(Element), writeClose(Element), writeDocType(DocumentType), etc. Compare to XOM's Serializer, where the only public write method is the one that takes a whole Document. Please refrain from answering if you're not familiar with XOM! I specifically want to know if and how you can do this kind of streaming with that library. (This is related to my question about the best dom4j replacement, where XOM is a strong contender.)
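    For comparison only, the streaming shape the pseudocode describes looks like this in Python's standard library (xml.sax.saxutils, a different library entirely - shown just to illustrate the envelope-then-items pattern):

        import sys
        from xml.sax.saxutils import XMLGenerator

        out = XMLGenerator(sys.stdout, encoding="utf-8")
        out.startDocument()                  # writes the XML declaration
        out.startElement("resultset", {})
        for i in range(3):                   # stand-in for "while more results"
            out.startElement("result", {})
            out.characters("data %d" % i)
            out.endElement("result")
        out.endElement("resultset")
        out.endDocument()

    Nothing is buffered beyond the current element, so memory use stays flat no matter how many results are written.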

  • cannot retrieve effect.fx file

    - by numerical25
    I am having issues loading my effect.fx from DirectX. When I step into my application, my ID3D10Effect *m_pDefaultEffect; pointer remains empty; the address remains at 0x000000. Below is my code:

        #pragma once
        #include "stdafx.h"
        #include "resource.h"
        #include "d3d10.h"
        #include "d3dx10.h"
        #include "dinput.h"

        #define MAX_LOADSTRING 100

        class RenderEngine
        {
        protected:
            RECT m_screenRect;

            // Direct3D members
            ID3D10Device *m_pDevice;                      // The IDirect3DDevice10 interface
            ID3D10Texture2D *m_pBackBuffer;               // Pointer to the back buffer
            ID3D10RenderTargetView *m_pRenderTargetView;  // Pointer to render target view
            IDXGISwapChain *m_pSwapChain;                 // Pointer to the swap chain
            RECT m_rcScreenRect;                          // The dimensions of the screen
            ID3D10Texture2D *m_pDepthStencilBuffer;
            ID3D10DepthStencilState *m_pDepthStencilState;
            ID3D10DepthStencilView *m_pDepthStencilView;

            // transformation matrices
            D3DXMATRIX g_mtxWorld;
            D3DXMATRIX g_mtxView;
            D3DXMATRIX g_mtxProj;

            // effect members
            ID3D10Effect *m_pDefaultEffect;
            ID3D10EffectTechnique *m_pDefaultTechnique;
            ID3DX10Font *m_pFont;                         // The font used for rendering text
            ID3DX10Sprite *m_pFontSprite;                 // Sprites used to hold font characters

            ATOM RegisterEngineClass();
            void DoFrame(float);
            bool LoadEffects();

        public:
            static HINSTANCE m_hInst;
            HWND m_hWnd;
            int m_nCmdShow;
            TCHAR m_szTitle[MAX_LOADSTRING];              // The title bar text
            TCHAR m_szWindowClass[MAX_LOADSTRING];        // the main window class name

            void DrawTextString(int x, int y, D3DXCOLOR color, const TCHAR *strOutput);

            // static functions
            static LRESULT CALLBACK WndProc(HWND hWnd, UINT message, WPARAM wParam, LPARAM lParam);
            static INT_PTR CALLBACK About(HWND hDlg, UINT message, WPARAM wParam, LPARAM lParam);

            bool InitWindow();
            bool InitDirectX();
            bool InitInstance();
            int Run();
            void ShutDown();

            RenderEngine()
            {
                m_screenRect.right = 800;
                m_screenRect.bottom = 600;
            }
        };

    Below is the implementation:

        bool RenderEngine::LoadEffects()
        {
            HRESULT hr;
            ID3D10Blob *pErrors = 0;

            // Create the default rendering effect
            hr = D3DX10CreateEffectFromFile(L"effect.fx", NULL, NULL, "fx_4_0",
                                            D3D10_SHADER_DEBUG, 0, m_pDevice, NULL, NULL,
                                            &m_pDefaultEffect, &pErrors, NULL);
            if (pErrors)   // at this point, m_pDefaultEffect is still empty but pErrors
            {              // returns data, which means there are errors
                return false;   // ends here
            }
            //m_pDefaultTechnique = m_pDefaultEffect->GetTechniqueByName("DefaultTechnique");
            return true;
        }

    My DirectX device does work. My effect.fx file is in the same folder as my solution files (.cpp and header files).

  • SurfaceView for Camera Preview won't get destroyed when pressing Power-Button

    - by for3st
    I want to implement a camera preview. For that I have a custom view, CameraView extends ViewGroup, that in the constructor programmatically creates a SurfaceView. I have the following components (highly simplified for brevity):

    ScannerFragment.java

        public View onCreateView(..) {
            // inflate view and get cameraView
        }
        public void onResume() {
            // open camera -> set rotation -> startPreview (in a thread) ->
            // set preview callback -> start decoding worker
        }
        public void onPause() {
            // stop decoding worker -> stop preview -> release camera
        }

    CameraView.java (extends ViewGroup)

        public void setUpCalledInConstructor(Context context) {
            // create a surfaceView and add it to this viewgroup ->
            // get SurfaceHolder and set callback
        }

        /* SurfaceHolder.Callback */
        public void surfaceCreated(SurfaceHolder holder) {
            camera.setPreviewDisplay(holder);
        }
        public void surfaceDestroyed(SurfaceHolder holder) {
            // NOTHING is done here
        }
        public void surfaceChanged(SurfaceHolder holder, int format, int w, int h) {
            camera.getParameters().setPreviewSize(previewSize.width, previewSize.height);
        }

    fragment_scanner.xml

        <RelativeLayout xmlns:android="http://schemas.android.com/apk/res/android"
            xmlns:tools="http://schemas.android.com/tools"
            android:layout_width="match_parent"
            android:layout_height="match_parent">
            <com.myapp.camera.CameraView
                android:id="@+id/cameraPreview"
                android:layout_width="match_parent"
                android:layout_height="match_parent"/>
        </RelativeLayout>

    I think I have the lifecycle correct (acquiring resources in onResume(), releasing them in onPause(), roughly said) and the following work just fine:

        - pressing Home and returning
        - pressing the task switcher and returning
        - rotation

    But one thing doesn't work: pressing the power button on the device and then returning to the camera preview. The result is that the preview is stuck on the image that was last captured before the button was pressed. If I rotate, it works fine again, since that goes through the lifecycle. After some research I found out that this is probably because the SurfaceView won't get destroyed when the power button is pressed, i.e. SurfaceHolder.Callback.surfaceDestroyed(SurfaceHolder holder) won't be called. And in fact, when I compare the (very verbose) log output of the home-button case and the power-button case, they are the same except that surfaceDestroyed won't get called. So far I've found no solution whatsoever to work around it. I purposely avoid any resource-cleaning code in surfaceDestroyed(), but this does not help. My idea was to manually destroy the SurfaceView, like asked in this question, but this seems not possible. I also tested other applications with SurfaceViews/cameras and they don't seem to have this issue. So I would appreciate any hints or tips on that.

  • DirectShow EVR resizing window problem

    - by Daniel
    So I've been looking into the world of media playback for Windows and I've started making a C# media player using DirectShow. I started off using the VMR-7 windowed video renderer and it was brilliant, except it had a couple of small problems (multi monitors, fullscreen). But after some research I found that it's deprecated and I should be using VMR-9. So I changed it to use VMR-9 windowless, then found out that was an old post rofl _<, so finally I'm using the Vista/Win7 (or XP with .NET 3) Enhanced Video Renderer (EVR), which is apparently the most up-to-date Microsoft video renderer and has all the flashy performance/quality things added to it. (tbh I haven't noticed any difference, but maybe I need a Blu-ray or HQ video to notice it.)

    With EVR everything is working fine except resizing the video. It's really laggy/choppy/teary, probably something to do with its frame queueing mechanism. To demonstrate my problem, open up Media Player Classic:

        1. View -> Options -> Playback -> Output
        2. Choose the "EVR" DirectShow video renderer
        3. Restart Media Player Classic and play a video; while it's playing, click and drag a corner to resize it.

    You'll notice it's horribly laggy. This is the exact same problem I am having. But if you choose "EVR Custom Pres. *" or "EVR Sync *", resizing works beautifully! So I tried googling around for anything about EVR resizing issues and how to fix them, but I couldn't believe how little I could find. I'm guessing "Custom Pres." stands for "Custom Presenter", which sounds like they made their own. Also you'll notice on the right-hand side, when you swap between EVR and the other EVRs, the Resizer drop-down on the right greys out. So basically I want to know how I can fix this resizing problem, and is there any decent documentation out there? There is a fair bit for VMR7/9 but not much for EVR. I downloaded the DirectX SDK, which apparently has samples, but it was a waste of 500MB of bandwidth as it had nothing relevant. Perhaps there is some way to force it not to queue up frames, if that is the problem? If you want code, say the word and I'll paste some in. But it's really quite simple and nothing much happens; I'm convinced it's a problem with the EVR renderer. EDIT: Oh, and one other thing: what does VLC use? If you go into VLC options and change the renderer to anything but default, they all suck. So is it using VMR7? Or its own?

  • JNI - GetObjectField returns NULL

    - by Daniel
    I'm currently working on Mangler's Android implementation. I have a Java class that looks like so:

        public class VentriloEventData {
            public short type;

            public class _pcm {
                public int length;
                public short send_type;
                public int rate;
                public byte channels;
            };

            _pcm pcm;
        }

    The signature for my pcm object:

        $ javap -s -p VentriloEventData
        ...
        org.mangler.VentriloEventData$_pcm pcm;
          Signature: Lorg/mangler/VentriloEventData$_pcm;

    I am implementing a native JNI function called getevent, which will write to the fields in an instance of the VentriloEventData class. For what it's worth, it's defined and called in Java like so:

        public static native int getevent(VentriloEventData data);

        VentriloEventData data = new VentriloEventData();
        getevent(data);

    And my JNI implementation of getevent:

        JNIEXPORT jint JNICALL Java_org_mangler_VentriloInterface_getevent(JNIEnv* env, jobject obj, jobject eventdata) {
            v3_event *ev = v3_get_event(V3_BLOCK);
            if (ev != NULL) {
                jclass event_class = (*env)->GetObjectClass(env, eventdata);

                // Event type.
                jfieldID type_field = (*env)->GetFieldID(env, event_class, "type", "S");
                (*env)->SetShortField(env, eventdata, type_field, 1234);

                // Get PCM class.
                jfieldID pcm_field = (*env)->GetFieldID(env, event_class, "pcm", "Lorg/mangler/VentriloEventData$_pcm;");
                jobject pcm = (*env)->GetObjectField(env, eventdata, pcm_field);
                jclass pcm_class = (*env)->GetObjectClass(env, pcm);

                // Set PCM fields.
                jfieldID pcm_length_field = (*env)->GetFieldID(env, pcm_class, "length", "I");
                (*env)->SetIntField(env, pcm, pcm_length_field, 1337);

                free(ev);
            }
            return 0;
        }

    The code above works fine for writing into the type field (which is not wrapped by the _pcm class). Once getevent is called, data.type is verified to be 1234 on the Java side :) My problem is that the assertion "pcm != NULL" fails. Note that pcm_field != NULL, which probably indicates that the signature of that field is correct... so there must be something wrong with my call to GetObjectField. It looks fine, though, if I compare it to the official JNI docs. I've been bashing my head on this problem for the past 2 hours and I'm getting a little desperate... hoping a different perspective will help me out on this one.

  • Prevent SQL Injection in Dynamic column names

    - by Mr Shoubs
    I can't get away without writing some dynamic SQL conditions in a part of my system (using Postgres). My question is how best to avoid SQL injection with the method I am currently using.

    EDIT (Reasoning): There are many columns in a number of tables (a number which grows (only) and is maintained elsewhere). I need a method of allowing the user to decide which (predefined) column they want to query (and, if necessary, apply string functions to). The query itself is far too complex for the user to write themselves, nor do they have access to the db. There are 1000s of users with varying requirements and I need to remain as flexible as possible - I shouldn't have to revisit the code unless the main query needs to change. Also, there is no way of knowing what conditions the user will need to use.

    I have objects (received via web service) that generate a condition (the generation method is below - it isn't perfect yet) for some large SQL queries. The _FieldName is user-editable (the parameter name was too, but it didn't need to be) and I am worried it could be an attack vector. I put double quotes (see quoted identifier) around the field name in an attempt to sanitize the string; this way it can never be a keyword. I could also look up the field name against a list of fields, but it would be difficult to maintain on a timely basis. Unfortunately the user must enter the condition criteria. I am sure there must be more I can add to the Sanitize method? And does quoting the column name make it safe? (My limited testing seems to think so.) An example built condition would be:

        AND upper(brandloaded.make) like 'O%' and upper(brandloaded.make) not like 'OTHERBRAND'

    Any help or suggestions are appreciated.

        Public Function GetCondition() As String
            Dim sb As New Text.StringBuilder
            ' put quotes around the field name in an attempt to prevent some sql injection
            ' http://www.postgresql.org/docs/8.2/static/sql-syntax-lexical.html
            sb.AppendFormat(" {0} ""{1}"" ", _LogicOperator.ToString, _FieldName)
            Select Case _ConditionOperator
                Case ConditionOperatorOptions.Equals
                    sb.Append(" = ")
                ...
            End Select
            sb.AppendFormat(" {0} ", Me.UniqueParameterName) ' for parameter
            Return Me.Sanitize(sb)
        End Function

        Private Function Sanitize(ByVal sb As Text.StringBuilder) As String
            ' compare against a similar blacklist mentioned here: http://forums.asp.net/t/1254125.aspx
            sb.Replace(";", "")
            sb.Replace("'", "")
            sb.Replace("\", "")
            sb.Replace(Chr(8), "")
            Return sb.ToString
        End Function

        Public ReadOnly Property UniqueParameterName() As String
            Get
                Return String.Concat(":", _UniqueIdentifier)
            End Get
        End Property
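    Whatever the quoting rules, a whitelist is the sturdier defense: identifiers are looked up against a known map, never escaped, and values always travel as bind parameters. A sketch of that idea in Python (the question is VB.NET; the field map and column names are invented for illustration):

        # User-facing field name -> real column; unknown names are rejected outright.
        ALLOWED_FIELDS = {
            "make":  "brandloaded.make",
            "model": "brandloaded.model",
        }

        def build_condition(field, op, value, params):
            column = ALLOWED_FIELDS[field]            # KeyError = unknown field, reject
            if op not in ("=", "like", "not like"):
                raise ValueError("operator not allowed")
            params.append(value)                      # value is bound, never concatenated
            return "AND upper(%s) %s %%s" % (column, op)

        params = []
        cond = build_condition("make", "like", "O%", params)
        # cursor.execute("SELECT ... WHERE 1=1 " + cond, params)   # e.g. with psycopg2

    Because the column list "grows and is maintained elsewhere", the map could be loaded from that same source, so the whitelist never needs hand-editing.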

  • Silverlight? WPF? or Windows Form?

    - by Amit
    Now that Silverlight 4.0 has been released, alongside the new WPF, I am kind of confused by these technologies: Silverlight? WPF? Windows Forms? The main motives we want to satisfy for a BIG business project are the following:

        - Performance
        - Security
        - Platform independence

    If I consider all three points above, then only Silverlight is an option, as I don't want people buying an emulator on MacOS for WPF or Windows Forms. Now, how good is Silverlight for business applications? I was completely against it when Silverlight 2.0 was in the market, but now it is at 4.0 and they have provided many new features (still basics, though) that are required in any challenging business application.

    Comparing Silverlight and WPF:

        - Silverlight and WPF are very new technologies, and if I had to compare between these two, I'd prefer WPF because it can be considered stable and mature. But it is not the same as Windows Forms.
        - If I go with Silverlight, then I am committing to keep updating to the latest version of Silverlight. I remember when we were developing software for version 2.0, we had to create our own framework with dynamic DLL loading and a navigation concept. But everything changed once Silverlight 3.0 came out. I don't want that to happen with this new product.
        - If we go with WPF, then we don't get platform independence.

    Now, why not just focus on WPF and then move to Silverlight? Someone (Tim?) from Microsoft has said that the idea is to make Silverlight as close to WPF as possible. But if that is the case, then why is the XAML structure different? I will not be convinced by the argument that the .NET framework for SL is too small... is the difference really just coming from the namespaces? I was searching on this subject and found "Microsoft WPF-Silverlight Comparison Whitepaper v1.1.pdf". This guide is very good; it gives you the ins and outs of how we can build common apps that run on both. But again, it compares Silverlight 2, not 4. I am sure many architects/developers/project managers must be facing similar kinds of questions on their premises and want to initiate this discussion, if it has not been started already :). We've still got 2 weeks to make this decision, so I'm expecting everyone to participate, gurus?

  • How can I limit the cache used by copying so there is still memory available for other cache?

    - by Peter
    Basic situation: I am copying some NTFS disks in openSuSE. Each one is 2TB. When I do this, the system runs slow.

    My guesses: I believe it is likely due to caching. Linux decides to discard useful cache (e.g. kde4 bloat, virtual machine disks, LibreOffice binaries, Thunderbird binaries, etc.) and instead fill all available memory (24 GB total) with stuff from the disks being copied, which will be read only once, then written and never used again. So then any time I use these apps (or kde4), the disk needs to be read again, and reading the bloat off the disk again makes things freeze/hiccup. Due to the cache being gone and the fact that these bloated applications need lots of cache, this makes the system horribly slow. Since it is USB, the disk and disk controller are not the bottleneck, so using ionice does not make it faster. I believe it is the cache rather than just the motherboard going too slow, because if I stop everything copying, it still runs choppy for a while until it recaches everything. And if I restart the copying, it takes a minute before it is choppy again. But also, I can limit it to around 40 MB/s, and it runs faster again (not because it has the right things cached, but because the motherboard busses have lots of extra bandwidth for the system disks). I can fully accept a performance loss from my motherboard's IO capability being completely consumed (which is 100% used, meaning 0% wasted power, which makes me happy), but I can't accept that this caching mechanism performs so terribly in this specific use case.

        # free
                     total       used       free     shared    buffers     cached
        Mem:      24731556   24531876     199680          0    8834056   12998916
        -/+ buffers/cache:    2698904   22032652
        Swap:      4194300      24764    4169536

    I also tried the same thing on Ubuntu, which causes a total system hang instead. ;) And to clarify, I am not asking how to leave memory free for the "system", but for "cache". I know that cache memory is automatically given back to the system when needed, but my problem is that it is not reserved for caching of specific things.

    Question: Is there some way to tell these copy operations to limit memory usage so some important things remain cached, and therefore any slowdowns are a result of normal disk usage and not rereading the same commonly used files? For example, is there a setting of max memory per process/user/file system allowed to be used as cache/buffers?
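    If running the copy through a small script is acceptable, one approach is to drop the just-copied pages from the page cache as you go with posix_fadvise(POSIX_FADV_DONTNEED). This is a sketch, not a tuned tool (os.posix_fadvise needs Python 3.3+ on Linux; tools like dd with oflag=direct aim at the same effect):

        import os

        def copy_without_caching(src, dst, chunk=16 * 1024 * 1024):
            # Copy src to dst, advising the kernel to evict the pages we just used.
            with open(src, "rb") as fin, open(dst, "wb") as fout:
                offset = 0
                while True:
                    buf = fin.read(chunk)
                    if not buf:
                        break
                    fout.write(buf)
                    fout.flush()
                    os.fsync(fout.fileno())   # data must be on disk before dropping it
                    os.posix_fadvise(fin.fileno(), offset, len(buf), os.POSIX_FADV_DONTNEED)
                    os.posix_fadvise(fout.fileno(), offset, len(buf), os.POSIX_FADV_DONTNEED)
                    offset += len(buf)

    The fsync per chunk costs some throughput but keeps the advisory honest: DONTNEED can only evict pages that are already written back.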

  • Comparing Nested object properties using C#

    - by Kumar
    I have a method which compares two objects and returns a list of all the property names which are different.

        public static IList<string> GetDifferingProperties(object source, object target)
        {
            var sourceType = source.GetType();
            var sourceProperties = sourceType.GetProperties();
            var targetType = target.GetType();
            var targetProperties = targetType.GetProperties();
            var properties = (from s in sourceProperties
                              from t in targetProperties
                              where s.Name == t.Name &&
                                    s.PropertyType == t.PropertyType &&
                                    s.GetValue(source, null) != t.GetValue(target, null)
                              select s.Name).ToList();
            return properties;
        }

    For example, I have two classes as follows:

        public class Address
        {
            public string AddressLine1 { get; set; }
            public string AddressLine2 { get; set; }
            public string City { get; set; }
            public string State { get; set; }
            public string Zip { get; set; }
        }

        public class Employee
        {
            public string FirstName { get; set; }
            public string MiddleName { get; set; }
            public string LastName { get; set; }
            public Address EmployeeAddress { get; set; }
        }

    I am trying to compare the following two Employee instances:

        var emp1Address = new Address();
        emp1Address.AddressLine1 = "Microsoft Corporation";
        emp1Address.AddressLine2 = "One Microsoft Way";
        emp1Address.City = "Redmond";
        emp1Address.State = "WA";
        emp1Address.Zip = "98052-6399";

        var emp1 = new Employee();
        emp1.FirstName = "Bill";
        emp1.LastName = "Gates";
        emp1.EmployeeAddress = emp1Address;

        var emp2Address = new Address();
        emp2Address.AddressLine1 = "Gates Foundation";
        emp2Address.AddressLine2 = "One Microsoft Way";
        emp2Address.City = "Redmond";
        emp2Address.State = "WA";
        emp2Address.Zip = "98052-6399";

        var emp2 = new Employee();
        emp2.FirstName = "Melinda";
        emp2.LastName = "Gates";
        emp2.EmployeeAddress = emp2Address;

    So when I pass these two Employee objects to my GetDifferingProperties method, currently it returns FirstName and EmployeeAddress, but it does not tell me which exact property (which in this case is AddressLine1) in the EmployeeAddress has changed. How can I tweak this method to get something like EmployeeAddress.AddressLine1?
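    One common shape for the fix is recursion: when a property's value is itself an object, recurse into it and prefix the parent property's name, producing dotted paths like EmployeeAddress.AddressLine1. Sketched in Python rather than C# for brevity (the __dict__ test is a stand-in for a real "is this a nested object" check):

        def differing_properties(source, target, prefix=""):
            diffs = []
            for name, s_val in vars(source).items():
                t_val = getattr(target, name, None)
                if hasattr(s_val, "__dict__") and hasattr(t_val, "__dict__"):
                    # Nested object: recurse and extend the dotted path.
                    diffs += differing_properties(s_val, t_val, prefix + name + ".")
                elif s_val != t_val:
                    diffs.append(prefix + name)
            return diffs

    In the C# version the same decision point is the where clause: instead of comparing the two EmployeeAddress references, detect a non-primitive property type and call GetDifferingProperties on the pair of values with the accumulated prefix.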

  • Create class instance in assembly from string name

    - by Arcadian
    I'm not sure if this is possible, and I'm quite new to using assemblies in C#.NET. What I would like to do is to create an instance of a class when supplied the string name of that class. Something like this:

        using MyAssembly;

        namespace MyNameSpace
        {
            class MyClass
            {
                int MyValue1;
                int MyValue2;

                public MyClass(string myTypeName)
                {
                    foreach (Type type in MyAssembly)
                    {
                        if ((string)type == myTypeName)
                        {
                            //create a new instance of the type
                        }
                    }
                    AssignInitialValues(//the type created above)
                }

                //Here I use an abstract type which the type above inherits from
                private void AssignInitialValues(AbstractType myClass)
                {
                    this.value1 = myClass.value1;
                    this.value2 = myClass.value2;
                }
            }
        }

    Obviously you cannot compare strings to types, but it illustrates what I'm trying to do: create a type from a supplied string. Any thoughts?

    EDIT: After attempting:

        var myObject = (AbstractType) Activator.CreateInstance(null, myTypeName);
        AssignInitialValues(myObject);

    I get a number of errors:

        - Inconsistent accessibility: parameter type 'MyAssembly.AbstractType' is less accessible than method 'MyNameSpace.MyClass.AssignInitialValues(MyAssembly.AbstractType)'
        - 'MyAssembly.AbstractType' is inaccessible due to its protection level
        - The type or namespace name 'MyAssembly' could not be found (are you missing a using directive or an assembly reference?)
        - The type or namespace name 'AbstractType' could not be found (are you missing a using directive or an assembly reference?)

    Not exactly sure why it can't find the assembly; I've added a reference to the assembly and I use a using directive for the namespace in the assembly. As for the protection level, it's calling classes (or rather the constructors of classes) which can only be public. Any clues on where the problem is?

    UPDATE: After looking through several articles on SO I came across this: http://stackoverflow.com/a/1632609/360627. Making the AbstractType class public solved the issue of inconsistent accessibility. The new compiler error is this:

        Cannot convert type 'System.Runtime.Remoting.ObjectHandle' to 'MyAssembly.AbstractType'

    The line it references is this one:

        var myObject = (AbstractType) Activator.CreateInstance(null, myTypeName);

    Using .Unwrap() gets me past this error and I think it's the right way to do it (uncertain). However, when running the program I then get a TypeLoadException when this code is called:

        TypeLoadException: Could not load type 'AbstractType' from assembly 'MyNameSpace'...

    Right away I can spot that the type it's looking for is correct but the assembly it's looking in is wrong. Looking up the Activator.CreateInstance(String, String) method revealed that null as the first argument means that the method will look in the executing assembly. This is contrary to the required behavior described in the original post. I've tried using MyAssembly as the first argument, but this produces the error:

        'MyAssembly' is a 'namespace' but is used like a 'variable'

    Any thoughts on how to fix this?
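    For contrast, the same "instance from a string" pattern in Python (not C#) separates the two lookups the question is wrestling with - which assembly/module, then which type - explicitly; the module and class names below are hypothetical:

        import importlib

        def create_instance(module_name, class_name, *args):
            module = importlib.import_module(module_name)  # like loading the assembly
            cls = getattr(module, class_name)              # like resolving the Type by name
            return cls(*args)                              # like Activator.CreateInstance

        # obj = create_instance("myassembly", "ConcreteType", 1, 2)

    The C# analogue of the first step is getting an Assembly object (not the namespace identifier MyAssembly, which is why the compiler complains it "is used like a variable") and asking it for the type by its full, namespace-qualified name.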

  • Java - Is Set.contains() broken on OpenJDK 6?

    - by Peter
    Hey, I've come across a really strange problem. I have written a simple Deck class which represents a standard 52-card deck of playing cards. The class has a method missingCards() which returns the set of all cards which have been drawn from the deck. If I try to compare two identical sets of missing cards using .equals(), I'm told they are different, and if I check to see if a set contains an element that I know is there using .contains(), I am returned false. Here is my test code:

        public void testMissingCards() {
            Deck deck = new Deck(true);
            Set<Card> drawnCards = new HashSet<Card>();
            drawnCards.add(deck.draw());
            drawnCards.add(deck.draw());
            drawnCards.add(deck.draw());
            Set<Card> missingCards = deck.missingCards();

            System.out.println(drawnCards);
            System.out.println(missingCards);

            Card c1 = null;
            for (Card c : drawnCards) {
                c1 = c;
            }
            System.out.println("C1 is " + c1);
            for (Card c : missingCards) {
                System.out.println("C is " + c);
                System.out.println("Does c1.equal(c) " + c1.equals(c));
                System.out.println("Does c.equal(c1) " + c.equals(c1));
            }
            System.out.println("Is c1 in missingCards " + missingCards.contains(c1));
            assertEquals("Deck confirm missing cards", drawnCards, missingCards);
        }

    And here is an example of its output:

        [5C, 2C, 2H]
        [2C, 5C, 2H]
        C1 is 2H
        C is 2C
        Does c1.equal(c) false
        Does c.equal(c1) false
        C is 5C
        Does c1.equal(c) false
        Does c.equal(c1) false
        C is 2H
        Does c1.equal(c) true
        Does c.equal(c1) true
        Is c1 in missingCards false

    I am completely sure that the implementation of .equals on my Card class is correct and, as you can see from the output, it does work! What is going on here? Cheers, Pete
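    A classic cause of exactly these symptoms: HashSet.contains consults hashCode before equals, so a Card that overrides equals without a consistent hashCode can compare equal pairwise and still be "missing" from the set. The same trap, sketched in Python:

        # Hash-based containers look at the hash before equality. Here __hash__
        # is (badly) based on identity, so two equal cards land in different
        # buckets and membership fails - mirroring equals without hashCode.
        class Card:
            def __init__(self, rank, suit):
                self.rank, self.suit = rank, suit
            def __eq__(self, other):
                return (self.rank, self.suit) == (other.rank, other.suit)
            def __hash__(self):
                return id(self)          # BUG: inconsistent with __eq__

        deck = {Card(2, "H")}
        print(Card(2, "H") in deck)      # False - equal, but hashes differ

        # Fix: hash the same fields equality uses:
        #     def __hash__(self): return hash((self.rank, self.suit))

    In Java the contract is the same: whenever equals is overridden, hashCode must be overridden so that equal objects produce equal hash codes.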

  • Operator== in derived class never gets called.

    - by Robin Welch
    Can someone please put me out of my misery with this? I'm trying to figure out why a derived operator== never gets called in a loop. To simplify the example, here are my Base and Derived classes:

        class Base {
            // ... snipped
            bool operator==( const Base& other ) const {
                return name_ == other.name_;
            }
        };

        class Derived : public Base {
            // ... snipped
            bool operator==( const Derived& other ) const {
                return ( static_cast<const Base&>( *this ) == static_cast<const Base&>( other )
                             ? age_ == other.age_
                             : false );
            }
        };

    Now when I instantiate and compare like this ...

        Derived p1("Sarah", 42);
        Derived p2("Sarah", 42);
        bool z = ( p1 == p2 );

    ... all is fine. Here the operator== from Derived gets called. But when I loop over a list, comparing items in a list of pointers to Base objects ...

        list<Base*> coll;
        coll.push_back( new Base("fred") );
        coll.push_back( new Derived("sarah", 42) );
        // ... snipped

        // Get two items from the list.
        Base& obj1 = **itr;
        Base& obj2 = **itr2;
        cout << obj1.asString() << " "
             << ( ( obj1 == obj2 ) ? "==" : "!=" ) << " "
             << obj2.asString() << endl;

    Here asString() (which is virtual and not shown here for brevity) works fine, but obj1 == obj2 always calls the Base operator==, even if the two objects are Derived. I know I'm going to kick myself when I find out what's wrong, but if someone could let me down gently it would be much appreciated.
