Search Results

Search found 11051 results on 443 pages for 'bind variables'.


  • What the Hekaton?

    - by Tony Davis
    Hekaton, the power behind SQL Server 2014's In-Memory OLTP technology, is intended to make data operations run orders of magnitude faster on SQL Server. This works its magic partly by serving database workloads entirely from main memory, using memory-optimized table structures. It replaces the relational engine's standard locking model with an optimistic concurrency model based on time-stamped row versions. Deeper down, the Hekaton engine uses new, 'latch-free' data structures. So far, so good, but performance improvements on this scale require a compromise, and the compromise is that these aren't tables as we understand them. For the database developer, these differences are painful because they involve sacrificing some very important bits of the relational model. Most importantly, Hekaton tables don't currently support FOREIGN KEY constraints or CHECK constraints, and you can't put the checks in triggers because there aren't any DML triggers either. Constraints allow a relational designer to enforce relational integrity and data integrity. Without them, of course, 'bad data' can get into our Hekaton tables. There is no easy way of preventing it. For several classes of database and data, this is a show-stopper. One may regard all these restrictions regretfully, seeing limited opportunity to try out Hekaton with current databases, but perhaps there is also a sudden glow of recognition. Isn't this how we all originally imagined table variables were going to be, back in SQL 2005? And they have much the same restrictions. Maybe, instead of pretending that a currently-designed database can be 'Hekatonized' with a few mouse clicks, we should redesign databases for SQL 2014 to replace table variables with Hekaton tables, exploiting this technology for fast intermediate processing, and for the most part forget, for now, the idea of trying to convert our base relational tables into Hekaton tables. Few database developers would be averse to having their working tables running an order of magnitude faster, as long as it didn't compromise the integrity of the data in the base tables.

    Read the article

  • Tor Browser Failing to Load

    - by Ben
        Dec 12 22:32:25.313 [notice] Tor v0.2.3.22-rc (git-4a0c70a817797420) running on Linux.
        Dec 12 22:32:25.313 [notice] Tor can't help you if you use it wrong! Learn how to be safe at https://www.torproject.org/download/download#warning
        Dec 12 22:32:25.313 [notice] Read configuration file "/etc/tor/torrc".
        Dec 12 22:32:25.319 [notice] Initialized libevent version 2.0.19-stable using method epoll (with changelist). Good.
        Dec 12 22:32:25.319 [notice] Opening Socks listener on 127.0.0.1:9050
        Dec 12 22:32:25.319 [warn] Could not bind to 127.0.0.1:9050: Address already in use. Is Tor already running?
        Dec 12 22:32:25.319 [warn] Failed to parse/validate config: Failed to bind one of the listener ports.
        Dec 12 22:32:25.319 [err] Reading config failed--see warnings above.

    I've tried reinstalling it and I always get this error after powering off and back on, despite it working fine directly after the install...

    Read the article

  • Toon shader with Texture. Can this be optimized?

    - by Alex
    I am quite new to OpenGL. I have managed, after long trial and error, to integrate Nehe's cel-shading rendering with my model loaders, and to have them drawn using the toon shade and outline AND their original texture at the same time. The result is actually a very nice cel-shading effect of the model texture, but it is halving the speed of the program; it's very slow even with just 3 models on screen... Since the result was kind of hacked together, I am thinking that maybe I am performing some extra steps or extra rendering tasks that are not needed and are slowing down the game? Something unnecessary that maybe you guys could spot? Both the MD2 and 3DS loaders have an initToon() function, called upon creation, to load the shader:

        bool initToon() {
            int i;                                  // Looping variable
            char Line[255];                         // Storage for 255 characters
            float shaderData[32][3];                // Storage for the 96 shader values
            FILE *In = fopen("Shader.txt", "r");    // Open the shader file
            if (In) {                               // Check to see if the file opened
                for (i = 0; i < 32; i++) {          // Loop through the 32 greyscale values
                    if (feof(In))                   // Check for the end of the file
                        break;
                    fgets(Line, 255, In);           // Get the current line
                    shaderData[i][0] = shaderData[i][1] = shaderData[i][2] = float(atof(Line)); // Copy over the value
                }
                fclose(In);                         // Close the file
            } else {
                return false;                       // It went horribly, horribly wrong
            }
            glGenTextures(1, &shaderTexture[0]);            // Get a free texture ID
            glBindTexture(GL_TEXTURE_1D, shaderTexture[0]); // Bind this texture; from now on it will be 1D
            // For crying out loud, don't let OpenGL use bi/trilinear filtering!
            glTexParameteri(GL_TEXTURE_1D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
            glTexParameteri(GL_TEXTURE_1D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
            glTexImage1D(GL_TEXTURE_1D, 0, GL_RGB, 32, 0, GL_RGB, GL_FLOAT, shaderData); // Upload
            return true;
        }

    This is the drawing for the animated MD2 model:

        void MD2Model::drawToon() {
            float outlineWidth = 3.0f;                      // Width of the lines
            float outlineColor[3] = { 0.0f, 0.0f, 0.0f };   // Color of the lines

            // ORIGINAL PART OF THE FUNCTION
            // Figure out the two frames between which we are interpolating
            int frameIndex1 = (int)(time * (endFrame - startFrame + 1)) + startFrame;
            if (frameIndex1 > endFrame) {
                frameIndex1 = startFrame;
            }
            int frameIndex2;
            if (frameIndex1 < endFrame) {
                frameIndex2 = frameIndex1 + 1;
            } else {
                frameIndex2 = startFrame;
            }
            MD2Frame* frame1 = frames + frameIndex1;
            MD2Frame* frame2 = frames + frameIndex2;
            // Figure out the fraction that we are between the two frames
            float frac = (time - (float)(frameIndex1 - startFrame) / (float)(endFrame - startFrame + 1)) * (endFrame - startFrame + 1);

            // ADDED FROM NEHE'S TUTORIAL FOR THE FIRST PASS (TOON SHADE)
            glHint(GL_LINE_SMOOTH_HINT, GL_NICEST);         // Use the good calculations
            glEnable(GL_LINE_SMOOTH);
            // Cel-shading code
            glEnable(GL_TEXTURE_1D);                        // Enable 1D texturing
            glBindTexture(GL_TEXTURE_1D, shaderTexture[0]); // Bind our texture
            glColor3f(1.0f, 1.0f, 1.0f);                    // Set the color of the model

            // ORIGINAL DRAWING CODE
            // Draw the model as an interpolation between the two frames
            glBegin(GL_TRIANGLES);
            for (int i = 0; i < numTriangles; i++) {
                MD2Triangle* triangle = triangles + i;
                for (int j = 0; j < 3; j++) {
                    MD2Vertex* v1 = frame1->vertices + triangle->vertices[j];
                    MD2Vertex* v2 = frame2->vertices + triangle->vertices[j];
                    Vec3f pos = v1->pos * (1 - frac) + v2->pos * frac;
                    Vec3f normal = v1->normal * (1 - frac) + v2->normal * frac;
                    if (normal[0] == 0 && normal[1] == 0 && normal[2] == 0) {
                        normal = Vec3f(0, 0, 1);
                    }
                    glNormal3f(normal[0], normal[1], normal[2]);
                    MD2TexCoord* texCoord = texCoords + triangle->texCoords[j];
                    glTexCoord2f(texCoord->texCoordX, texCoord->texCoordY);
                    glVertex3f(pos[0], pos[1], pos[2]);
                }
            }
            glEnd();

            // ADDED FROM NEHE'S FOR THE SECOND PASS (OUTLINE)
            glDisable(GL_TEXTURE_1D);                           // Disable 1D textures
            glEnable(GL_BLEND);                                 // Enable blending
            glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);  // Set the blend mode
            glPolygonMode(GL_BACK, GL_LINE);                    // Draw backfacing polygons as wireframes
            glLineWidth(outlineWidth);                          // Set the line width
            glCullFace(GL_FRONT);                               // Don't draw any front-facing polygons
            glDepthFunc(GL_LEQUAL);                             // Change the depth mode
            glColor3fv(&outlineColor[0]);                       // Set the outline color

            // HERE I AM PARSING THE VERTICES AGAIN (NOT IN THE ORIGINAL FUNCTION) FOR THE OUTLINE, AS PER NEHE'S TUTORIAL
            glBegin(GL_TRIANGLES);
            for (int i = 0; i < numTriangles; i++) {
                MD2Triangle* triangle = triangles + i;
                for (int j = 0; j < 3; j++) {
                    MD2Vertex* v1 = frame1->vertices + triangle->vertices[j];
                    MD2Vertex* v2 = frame2->vertices + triangle->vertices[j];
                    Vec3f pos = v1->pos * (1 - frac) + v2->pos * frac;
                    Vec3f normal = v1->normal * (1 - frac) + v2->normal * frac;
                    if (normal[0] == 0 && normal[1] == 0 && normal[2] == 0) {
                        normal = Vec3f(0, 0, 1);
                    }
                    glNormal3f(normal[0], normal[1], normal[2]);
                    MD2TexCoord* texCoord = texCoords + triangle->texCoords[j];
                    glTexCoord2f(texCoord->texCoordX, texCoord->texCoordY);
                    glVertex3f(pos[0], pos[1], pos[2]);
                }
            }
            glEnd();

            glDepthFunc(GL_LESS);               // Reset the depth-testing mode
            glCullFace(GL_BACK);                // Reset the face to be culled
            glPolygonMode(GL_BACK, GL_FILL);    // Reset back-facing polygon drawing mode
            glDisable(GL_BLEND);
        }

    Whereas this is the drawToon function in the 3DS loader:

        void Model_3DS::drawToon() {
            float outlineWidth = 3.0f;                      // Width of the lines
            float outlineColor[3] = { 0.0f, 0.0f, 0.0f };   // Color of the lines

            // ORIGINAL CODE
            if (visible) {
                glPushMatrix();
                // Move the model
                glTranslatef(pos.x, pos.y, pos.z);
                // Rotate the model
                glRotatef(rot.x, 1.0f, 0.0f, 0.0f);
                glRotatef(rot.y, 0.0f, 1.0f, 0.0f);
                glRotatef(rot.z, 0.0f, 0.0f, 1.0f);
                glScalef(scale, scale, scale);
                // Loop through the objects
                for (int i = 0; i < numObjects; i++) {
                    // Enable texture coordinates, normals, and vertices arrays
                    if (Objects[i].textured) glEnableClientState(GL_TEXTURE_COORD_ARRAY);
                    if (lit) glEnableClientState(GL_NORMAL_ARRAY);
                    glEnableClientState(GL_VERTEX_ARRAY);
                    // Point them to the object's arrays
                    if (Objects[i].textured) glTexCoordPointer(2, GL_FLOAT, 0, Objects[i].TexCoords);
                    if (lit) glNormalPointer(GL_FLOAT, 0, Objects[i].Normals);
                    glVertexPointer(3, GL_FLOAT, 0, Objects[i].Vertexes);
                    // Loop through the faces as sorted by material and draw them
                    for (int j = 0; j < Objects[i].numMatFaces; j++) {
                        // Use the material's texture
                        Materials[Objects[i].MatFaces[j].MatIndex].tex.Use();
                        // AFTER THE TEXTURE IS APPLIED I INSERT THE TOON FUNCTIONS HERE (FIRST PASS)
                        glHint(GL_LINE_SMOOTH_HINT, GL_NICEST);         // Use the good calculations
                        glEnable(GL_LINE_SMOOTH);
                        // Cel-shading code
                        glEnable(GL_TEXTURE_1D);                        // Enable 1D texturing
                        glBindTexture(GL_TEXTURE_1D, shaderTexture[0]); // Bind our texture
                        glColor3f(1.0f, 1.0f, 1.0f);                    // Set the color of the model
                        glPushMatrix();
                        // Move the object
                        glTranslatef(Objects[i].pos.x, Objects[i].pos.y, Objects[i].pos.z);
                        // Rotate the object
                        glRotatef(Objects[i].rot.z, 0.0f, 0.0f, 1.0f);
                        glRotatef(Objects[i].rot.y, 0.0f, 1.0f, 0.0f);
                        glRotatef(Objects[i].rot.x, 1.0f, 0.0f, 0.0f);
                        // Draw the faces using an index to the vertex array
                        glDrawElements(GL_TRIANGLES, Objects[i].MatFaces[j].numSubFaces, GL_UNSIGNED_SHORT, Objects[i].MatFaces[j].subFaces);
                        glPopMatrix();
                    }
                    glDisable(GL_TEXTURE_1D);   // Disable 1D textures

                    // THIS IS AN ADDED SECOND PASS AT THE VERTICES FOR THE OUTLINE
                    glEnable(GL_BLEND);                                 // Enable blending
                    glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);  // Set the blend mode
                    glPolygonMode(GL_BACK, GL_LINE);                    // Draw backfacing polygons as wireframes
                    glLineWidth(outlineWidth);                          // Set the line width
                    glCullFace(GL_FRONT);                               // Don't draw any front-facing polygons
                    glDepthFunc(GL_LEQUAL);                             // Change the depth mode
                    glColor3fv(&outlineColor[0]);                       // Set the outline color
                    for (int j = 0; j < Objects[i].numMatFaces; j++) {
                        glPushMatrix();
                        glTranslatef(Objects[i].pos.x, Objects[i].pos.y, Objects[i].pos.z);
                        glRotatef(Objects[i].rot.z, 0.0f, 0.0f, 1.0f);
                        glRotatef(Objects[i].rot.y, 0.0f, 1.0f, 0.0f);
                        glRotatef(Objects[i].rot.x, 1.0f, 0.0f, 0.0f);
                        // Draw the faces using an index to the vertex array
                        glDrawElements(GL_TRIANGLES, Objects[i].MatFaces[j].numSubFaces, GL_UNSIGNED_SHORT, Objects[i].MatFaces[j].subFaces);
                        glPopMatrix();
                    }
                    glDepthFunc(GL_LESS);               // Reset the depth-testing mode
                    glCullFace(GL_BACK);                // Reset the face to be culled
                    glPolygonMode(GL_BACK, GL_FILL);    // Reset back-facing polygon drawing mode
                    glDisable(GL_BLEND);
                }
                glPopMatrix();
            }
        }

    Finally, this is the tex.Use() function that loads a BMP texture and somehow gets blended perfectly with the toon shading:

        void GLTexture::Use() {
            glEnable(GL_TEXTURE_2D);                    // Enable texture mapping
            glBindTexture(GL_TEXTURE_2D, texture[0]);   // Bind the texture as the current one
        }

    Read the article

  • Can't complete dropbox installation from behind proxy in Ubuntu 11.10

    - by Mark Jones
    Problem: My PC on campus sits behind a proxy (requiring authentication) and I can't set up Dropbox. I am convinced that this is a proxy issue, as I can't set up Ubuntu One either (but I don't use Ubuntu One, so that is not a problem). I have looked at the Ubuntu One fix, but it seems to modify settings explicitly related to Ubuntu One. I can install the nautilus-dropbox package (compiled from source, from the .deb package from the website, and from the Software Centre), but once I click OK in the "Dropbox Installation" dialog box (prompting me to download the proprietary daemon) the installation just freezes with the OK button pressed. When I look at its process in System Monitor, its waiting channel is inet_wait_for_connect. I have set the following proxy directives thus far (where ** is my password):

    - Added mj22:**@proxy.waikato.ac.nz:80 information to network proxy settings under Network in Settings.
    - Added http_host and http_port variables under gconf-editor > system > proxy.
    - Added 'host', 'authentication_password' and 'authentication_user', and ticked 'user authentication' and 'use_http_proxy', under gconf-editor > system > http_proxy.
    - Added export http_proxy="http://mj22:**@proxy.waikato.ac.nz:80/" to /etc/bash.bashrc.
    - Added Acquire::http::proxy "http://mj22:**@proxy.waikato.ac.nz:80/"; to /etc/apt/apt.conf (which is what I imagine is letting the Software Centre retrieve packages).

    I have also added the equivalent ftp and https lines for the above entries. I get the internet fine and the Software Centre can download packages, but that's it. Related issues: the Software Centre can't fetch reviews (but can download packages), and when trying to add an online account in GNOME 3 a dialog pops up with "Error getting a Request Token: Cannot connect to proxy (proxy.waikato.ac.nz)". Updates: after some time (10 mins or so) Dropbox shows an error dialog that reads: "Trouble connecting to Dropbox servers. Maybe your internet connection is down, or you need to set your http_proxy environment variable." Is there a way I can see what environment variables are currently set?

    Read the article

  • How to restore Windows7 after restore ubuntu bootloader?

    - by Mateusz Rogulski
    At first I will describe my situation in a few points: I had Windows 7 installed, and then installed Ubuntu 11.04 on my machine. Everything worked fine, and at startup I had the Linux screen where I could choose the system. Then I reinstalled Windows 7 and installed Windows 8 on another partition, after which I could choose between Win7 and Win8 at startup. Then I needed my Ubuntu back, so I wanted to restore the Ubuntu bootloader. I booted Ubuntu from USB and in the terminal ran:

        sudo fdisk -l

    which gave:

        /dev/sda1       1      13     104391  de  Dell Utility
        /dev/sda2      14    2805   22425601   5  Rozszerzona
        /dev/sda3 *  2805   41968  314572800   7  HPFS/NTFS
        /dev/sda4   41968   60802  151282688   7  HPFS/NTFS
        /dev/sda5      14    2445   19530752  83  Linux
        /dev/sda6    2445    2805    2893824  82  Linux swap / Solaris

    Next commands:

        sudo mount /dev/sda5 /mnt
        sudo mount --bind /dev /mnt/dev
        sudo mount --bind /proc /mnt/proc
        sudo chroot /mnt
        grub-install /dev/sda

    I got "Installation finished. No error reported." And when I start my machine I have the old Ubuntu start screen to choose the system. Ubuntu works well, but there is no Windows 8 option. My primary problem, though, is that when I choose Windows 7 I get:

        error: no such device ...
        error: no such disk

    so I have no idea what I can do. I really need both systems to work. Any help would be appreciated.

    Read the article

  • Documenting mathematical logic in code

    - by Kiril Raychev
    Sometimes, although not often, I have to include math logic in my code. The concepts used are mostly very simple, but the resulting code is not - a lot of variables with unclear purpose, and some operations with not so obvious intent. I don't mean that the code is unreadable or unmaintainable, just that it's waaaay harder to understand than the actual math problem. I try to comment the parts which are hardest to understand, but there is the same problem as in just coding them - text does not have the expressive power of math. I am looking for a more efficient and easy-to-understand way of explaining the logic behind some of the complex code, preferably in the code itself. I have considered TeX - writing the documentation and generating it separately from the code. But then I'd have to learn TeX, and the documentation would not be in the code itself. Another thing I thought of is taking a picture of the mathematical notation, equations and diagrams written on paper/whiteboard, and including it in the javadoc. Is there a simpler and clearer way? P.S. Giving descriptive names (timeOfFirstEvent instead of t1) to the variables actually makes the code more verbose and even harder to read.
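    To make the TeX option concrete, here is a minimal sketch of what such separately-generated documentation could look like (the equation and the relationship to the variables are invented purely for illustration, reusing the t1/timeOfFirstEvent example above):

        % math-notes.tex, kept next to the source and built separately from it
        The variable \texttt{timeOfFirstEvent} ($t_1$) is computed as
        \[ t_1 = t_0 + \frac{d}{v} \]
        where $t_0$ is the start time, $d$ the distance and $v$ the velocity.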

    Read the article

  • Inside the DLR – Invoking methods

    - by Simon Cooper
    So, we’ve looked at how a dynamic call is represented in a compiled assembly, and how the dynamic lookup is performed at runtime. The last piece of the puzzle is how the resolved method gets invoked, and that is the subject of this post.

    Invoking methods

    As discussed in my previous posts, doing a full lookup and bind at runtime each and every single time the callsite gets invoked would be far too slow to be usable. The results obtained from the callsite binder must be cached, along with a series of conditions to determine whether the cached result can be reused. So, firstly, how are the conditions represented? These conditions can be anything; they are determined entirely by the semantics of the language the binder is representing. The binder has to be able to return arbitrary code that is then executed to determine whether the conditions apply or not. Fortunately, .NET 4 has a neat way of representing arbitrary code that can be easily combined with other code - expression trees. All the callsite binder has to return is an expression (called a ‘restriction’) that evaluates to a boolean, returning true when the restriction passes (indicating the corresponding method invocation can be used) and false when it doesn’t. If the bind result is also represented in an expression tree, these can be combined easily like so:

        if ([restriction is true]) {
            [invoke cached method]
        }

    Take my example from my previous post:

        public class ClassA {
            public static void TestDynamic() {
                CallDynamic(new ClassA(), 10);
                CallDynamic(new ClassA(), "foo");
            }
            public static void CallDynamic(dynamic d, object o) {
                d.Method(o);
            }
            public void Method(int i) {}
            public void Method(string s) {}
        }

    When the Method(int) method is first bound, along with an expression representing the result of the bind lookup, the C# binder will return the restrictions under which that bind can be reused. In this case, it can be reused if the types of the parameters are the same:

        if (thisArg.GetType() == typeof(ClassA) && arg1.GetType() == typeof(int)) {
            thisClassA.Method(i);
        }

    Caching callsite results

    So, now, it’s up to the callsite to link these expressions returned from the binder together in such a way that it can determine which one from the many it has cached it should use. This caching logic is all located in the System.Dynamic.UpdateDelegates class. It’ll help if you’ve got this type open in a decompiler to have a look yourself. For each callsite, there are 3 layers of caching involved:

    1. The last method invoked on the callsite.
    2. All methods that have ever been invoked on the callsite.
    3. All methods that have ever been invoked on any callsite of the same type.

    We’ll cover each of these layers in order.

    Level 1 cache: the last method called on the callsite

    When a CallSite<T> object is first instantiated, the Target delegate field (containing the delegate that is called when the callsite is invoked) is set to one of the UpdateAndExecute generic methods in UpdateDelegates, corresponding to the number of parameters to the callsite, and the existence of any return value. These methods contain most of the caching, invoke, and binding logic for the callsite. The first time this method is invoked, the UpdateAndExecute method finds there aren’t any entries in the caches to reuse, and invokes the binder to resolve a new method. Once the callsite has the result from the binder, along with any restrictions, it stitches some extra expressions in, and replaces the Target field in the callsite with a compiled expression tree similar to this (in this example I’m assuming there’s no return value):

        if ([restriction is true]) {
            [invoke cached method]
            return;
        }
        if (callSite._match) {
            _match = false;
            return;
        } else {
            UpdateAndExecute(callSite, arg0, arg1, ...);
        }

    Woah. What’s going on here? Well, this resulting expression tree is actually the first level of caching. The Target field in the callsite, which contains the delegate to call when the callsite is invoked, is set to the above code compiled from the expression tree into IL, and then into native code by the JIT. This code checks whether the restrictions of the last method that was invoked on the callsite (the ‘primary’ method) match, and if so, executes that method straight away. This means that, the next time the callsite is invoked, the first code that executes is the restriction check, executing as native code! This makes this restriction check on the primary cached delegate very fast. But what if the restrictions don’t match? In that case, the second part of the stitched expression tree is executed. What this section should be doing is calling back into the UpdateAndExecute method again to resolve a new method. But it’s slightly more complicated than that. To understand why, we need to understand the second and third level caches.

    Level 2 cache: all methods that have ever been invoked on the callsite

    When a binder has returned the result of a lookup, as well as updating the Target field with a compiled expression tree, stitched together as above, the callsite puts the same compiled expression tree in an internal list of delegates, called the rules list. This list acts as the level 2 cache. Why use the same delegate? Stitching together expression trees is an expensive operation. You don’t want to do it every time the callsite is invoked. Ideally, you would create one expression tree from the binder’s result, compile it, and then use the resulting delegate everywhere in the callsite. But, if the same delegate is used to invoke the callsite in the first place, and in the caches, that means each delegate needs two modes of operation. An ‘invoke’ mode, for when the delegate is set as the value of the Target field, and a ‘match’ mode, used when UpdateAndExecute is searching for a method in the callsite’s cache. Only in the invoke mode would the delegate call back into UpdateAndExecute. In match mode, it would simply return without doing anything. This mode is controlled by the _match field in CallSite<T>. The first time the callsite is invoked, _match is false, and so the Target delegate is called in invoke mode. Then, if the initial restriction check fails, the Target delegate calls back into UpdateAndExecute. This method sets _match to true, then calls all the cached delegates in the rules list in match mode to try and find one that passes its restrictions, and invokes it. However, there needs to be some way for each cached delegate to inform UpdateAndExecute whether it passed its restrictions or not. To do this, as you can see above, it simply re-uses _match, and sets it to false if it did not pass the restrictions. This allows the code within each UpdateAndExecute method to check for cache matches like so:

        foreach (T cachedDelegate in Rules) {
            callSite._match = true;
            cachedDelegate(); // sets _match to false if restrictions do not pass
            if (callSite._match) {
                // passed restrictions, and the cached method was invoked
                // set this delegate as the primary target to invoke next time
                callSite.Target = cachedDelegate;
                return;
            }
            // no luck, try the next one...
        }

    Level 3 cache: all methods that have ever been invoked on any callsite with the same signature

    The reason for this cache should be clear - if a method has been invoked through a callsite in one place, then it is likely to be invoked on other callsites in the codebase with the same signature. Rather than living in the callsite, the ‘global’ cache for callsite delegates lives in the CallSiteBinder class, in the Cache field. This is a dictionary, typed on the callsite delegate signature, providing a RuleCache<T> instance for each delegate signature. This is accessed in the same way as the level 2 callsite cache, by the UpdateAndExecute methods. When a method is matched in the global cache, it is copied into the callsite and Target cache before being executed.

    Putting it all together

    So, how does this all fit together? (The original post has a diagram here that is not reproduced in this excerpt; I’ve also omitted some implementation and performance details.) That, in essence, is how the DLR performs its dynamic calls nearly as fast as statically compiled IL code. Extensive use of expression trees, compiled to IL and then into native code. Multiple levels of caching, the first of which executes immediately when the dynamic callsite is invoked. And a clever re-use of compiled expression trees that can be used in completely different contexts without being recompiled. All in all, a very fast and very clever reflection caching mechanism.
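    As a rough illustration of that stitching step (this is not the actual UpdateDelegates code; the names and the example restriction are invented), a restriction expression and a bound-invocation expression can be combined into a single compiled delegate like so:

        using System;
        using System.Linq.Expressions;

        class StitchingSketch
        {
            // Combine a bool-valued restriction with the bound invocation, so the
            // compiled delegate re-checks the restriction before reusing the cached bind.
            static Action<object> Stitch(Expression<Func<object, bool>> restriction,
                                         Expression<Action<object>> boundInvoke)
            {
                var arg = Expression.Parameter(typeof(object), "arg");
                var body = Expression.IfThen(
                    Expression.Invoke(restriction, arg),    // [restriction is true]?
                    Expression.Invoke(boundInvoke, arg));   // [invoke cached method]
                return Expression.Lambda<Action<object>>(body, arg).Compile();
            }

            static void Main()
            {
                // Restriction: this cached bind is only valid for ints.
                var del = Stitch(o => o is int, o => Console.WriteLine((int)o * 2));
                del(21);     // prints 42
                del("foo");  // restriction fails; the real code would fall back to UpdateAndExecute
            }
        }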

    Read the article

  • Is there an excuse for excessively short variable names?

    - by KChaloux
    This has become a large frustration with the codebase I'm currently working in; many of our variable names are short and undescriptive. I'm the only developer left on the project, and there isn't documentation as to what most of them do, so I have to spend extra time tracking down what they represent. For example, I was reading over some code that updates the definition of an optical surface. The variables set at the start were as follows:

        double dR, dCV, dK, dDin, dDout, dRin, dRout;
        dR = Convert.ToDouble(_tblAsphere.Rows[0].ItemArray.GetValue(1));
        dCV = Convert.ToDouble(_tblAsphere.Rows[1].ItemArray.GetValue(1));
        ... and so on

    Maybe it's just me, but that told me essentially nothing about what they represented, which made understanding the code further down difficult. All I knew was that each was a variable parsed out of a specific row of a specific table, somewhere. After some searching, I found out what they meant:

        dR = radius
        dCV = curvature
        dK = conic constant
        dDin = inner aperture
        dDout = outer aperture
        dRin = inner radius
        dRout = outer radius

    I renamed them to essentially what I have up there. It lengthens some lines, but I feel like that's a fair trade-off. This kind of naming scheme is used throughout a lot of the code, however. I'm not sure if it's an artifact from developers who learned by working with older systems, or if there's a deeper reason behind it. Is there a good reason to name variables this way, or am I justified in updating them to more descriptive names as I come across them?
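    For illustration, the start of that block after the renaming might read as follows (a sketch; it assumes the same row layout as above):

        double radius        = Convert.ToDouble(_tblAsphere.Rows[0].ItemArray.GetValue(1));
        double curvature     = Convert.ToDouble(_tblAsphere.Rows[1].ItemArray.GetValue(1));
        double conicConstant = Convert.ToDouble(_tblAsphere.Rows[2].ItemArray.GetValue(1));
        // ... innerAperture, outerAperture, innerRadius and outerRadius follow the same pattern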

    Read the article

  • What is the philosophy/reasoning behind C#'s Pascal-casing method names?

    - by Nocturne
    I'm just starting to learn C#. Coming from a background in Java, C++ and Objective-C, I find C#'s Pascal-casing of method names rather unique, and a tad difficult to get used to at first. What is the reasoning and philosophy behind it? I'm guessing it is because of C# properties. Unlike in Objective-C, where method names can be exactly the same as instance variables, this is not the case with C#. I would guess one of the goals with properties (as it is with most of the languages that support them) is to make properties truly indistinguishable from variables and methods. So, one can have an "int x" in C#, and the corresponding property becomes X. To ensure that properties and methods are indistinguishable, all method names, I'm guessing, are therefore also expected to start with an uppercase letter. (This is just my hypothesis based on what I know of C# so far - I'm still learning.) I'm very curious to know how this curious guideline came into being, given that it's not something one sees in most other languages, where method names are expected to start with a lowercase letter. (EDIT: By Pascal-casing, I mean PascalCase, which is basically camelCase but starting with a capital letter. Method names typically start with a lowercase letter in most languages.)
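    A minimal sketch of the convention the question describes (the class itself is hypothetical): a camelCase backing field, with the corresponding PascalCase property sitting alongside a PascalCase method, so the two are indistinguishable in style at the call site:

        public class Sensor
        {
            private int x;          // backing field: lowercase/camelCase

            public int X            // property: PascalCase...
            {
                get { return x; }
                set { x = value; }
            }

            public int ReadValue()  // ...and so are methods, making X and ReadValue() look alike
            {
                return X;
            }
        }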

    Read the article

  • Input/Output console window in XNA

    - by Will Bagley
    I am currently making a simple game in XNA but am at a point where testing various aspects gets a bit tricky, especially when you have to wait till you have 1000 score to see if your animation is playing correctly, etc. Of course I could just edit the starting variable in the code before I launched, but I have recently been interested in trying to implement a console-style window which can print out values and take input to alter public variables during run-time. I am aware that VS has the Immediate Window, which achieves a similar thing, but I would prefer mine to be an actual part of the game, with the intention that the user may have limited access to it in the future. Some of the key things I have yet to find an answer to after looking around for a while are: how I would support free text entry, how I would access variables during runtime, and how I would edit these variables. I have also read about using a property grid from Windows Forms apps (and partially reflection), which looked like it could simplify a lot of things, but I am not sure how I would get that running inside my XNA game window, or how I would get it to not look out of place (as the visual aspect of it seems to be aimed just at development-time viewing). All in all I'm quite open to any suggestions on how to approach this task, as currently I'm not sure where to start. Thanks in advance.
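    On the "access and edit variables during runtime" points, here is a minimal sketch using plain .NET reflection (no XNA specifics; the type and field names are invented) of applying a console command like "Score 1000" to a public field:

        using System;
        using System.Reflection;

        public class GameState
        {
            public int Score = 0;
        }

        public static class ConsoleBinding
        {
            // Look up a public instance field by name and assign it a value parsed from text.
            public static void SetField(object target, string name, string value)
            {
                FieldInfo field = target.GetType().GetField(name, BindingFlags.Public | BindingFlags.Instance);
                if (field == null)
                    throw new ArgumentException("No such field: " + name);
                object converted = Convert.ChangeType(value, field.FieldType);
                field.SetValue(target, converted);
            }

            public static void Main()
            {
                var state = new GameState();
                SetField(state, "Score", "1000");   // as typed into the in-game console
                Console.WriteLine(state.Score);     // 1000
            }
        }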

    Read the article

  • How to install Tor (Web Browser) in Ubuntu 12.10?

    - by Zignd
    I would like to install Tor, but I'm having some problems. I know that someone will say "This question is an exact duplicate of How to install tor?", but it's not, because the other question cannot be applied to Ubuntu 12.10, as the deb command is not available anymore. I did some research, and even at Tor's official website the available resources cannot be applied to Ubuntu 12.10. I tried to use the deb command (as the above question says: deb http://deb.torproject.org/torproject.org <DISTRIBUTION> main) and the Terminal says deb: command not found, and when I try to install it, it says E: Unable to locate package deb. I've also tried to use the ppa:ubun-tor PPA, but it's not compatible with Quantal Quetzal, because it's too old. I've also tried sudo apt-get install tor, but the browser icon doesn't show up after installation, and if I run the command tor in the Terminal I get the following error message:

        Nov 26 10:59:25.731 [notice] Tor v0.2.3.22-rc (git-4a0c70a817797420) running on Linux.
        Nov 26 10:59:25.731 [notice] Tor can't help you if you use it wrong! Learn how to be safe at https://www.torproject.org/download/download#warning
        Nov 26 10:59:25.731 [notice] Read configuration file "/etc/tor/torrc".
        Nov 26 10:59:25.737 [notice] Initialized libevent version 2.0.19-stable using method epoll (with changelist). Good.
        Nov 26 10:59:25.737 [notice] Opening Socks listener on 127.0.0.1:9050
        Nov 26 10:59:25.737 [warn] Could not bind to 127.0.0.1:9050: Address already in use. Is Tor already running?
        Nov 26 10:59:25.737 [warn] Failed to parse/validate config: Failed to bind one of the listener ports.
        Nov 26 10:59:25.737 [err] Reading config failed--see warnings above.

    Thanks in advance.

    Read the article

  • Keyring no longer prompts for password when SSH-ing

    - by Lie Ryan
    I remember that I used to be able to do ssh [email protected] and have a prompt ask me for a password to unlock the keyring for the whole GNOME session, so subsequent ssh calls wouldn't need the keyring password any longer (not quite sure if this was in Ubuntu or another distro). But nowadays doing ssh [email protected] asks me, in the terminal, for my keyring password every single time, which defeats the purpose of using SSH keys. I checked:

        $ cat /etc/pam.d/lightdm | grep keyring
        auth optional pam_gnome_keyring.so
        session optional pam_gnome_keyring.so auto_start

    which looks fine, and:

        $ pgrep keyring
        1784 gnome-keyring-d

    so the keyring daemon is alive. I finally found that the SSH_AUTH_SOCK variable (along with GNOME_KEYRING_CONTROL, GPG_AGENT_INFO and GNOME_KEYRING_PID) is not being set properly. What is the proper way to set these variables, and why aren't they being set in my environment (i.e. shouldn't they be set in a default install)? I guess I can set them in .bashrc, but then the variables would only be defined in bash sessions; while that is fine for ssh, I believe the other environment variables are necessary for GUI apps to use the keyring.

    Read the article

  • Languages with a clear distinction between subroutines that are purely functional, mutating, state-changing, etc?

    - by CPX
    Lately I've become more and more frustrated that in most modern programming languages I've worked with (C/C++, C#, F#, Ruby, Python, JS and more) there is very little, if any, language support for determining what a subroutine will actually do. Consider the following simple pseudo-code:

        var x = DoSomethingWith(y);

    How do I determine what the call to DoSomethingWith(y) will actually do? Will it mutate y, or will it return a copy of y? Does it depend on global or local state, or is it only dependent on y? Will it change the global or local state? How does closure affect the outcome of the call? In all languages I've encountered, almost none of these questions can be answered by merely looking at the signature of the subroutine, and there is almost never any compile-time or run-time support either. Usually, the only way is to put your trust in the author of the API, and hope that the documentation and/or naming conventions reveal what the subroutine will actually do. My question is this: do there exist any languages today that make symbolic distinctions between these types of scenarios, and place compile-time constraints on what code you can actually write? (There is of course some support for this in most modern languages, such as different levels of scope and closure, the separation between static and instance code, lambda functions, et cetera. But too often these seem to come into conflict with each other. For instance, a lambda function will usually either be purely functional, and simply return a value based on input parameters, or mutate the input parameters in some way. But it is usually possible to access static variables from a lambda function, which in turn can give you access to instance variables, and then it all breaks apart.)

    Read the article

  • Are long methods always bad?

    - by wobbily_col
    So looking around earlier I noticed some comments about long methods being bad practice. I am not sure I always agree that long methods are bad (and would like opinions from others). For example, I have some Django views that do a bit of processing of the objects before sending them to the view, a long method being 350 lines of code. I have my code written so that it deals with the parameters - sorting / filtering the queryset, then bit by bit does some processing on the objects my query has returned. So the processing is mainly conditional aggregation, which has complex enough rules that it can't easily be done in the database, so I have some variables declared outside the main loop which then get altered during the loop:

        variable_1 = 0
        variable_2 = 0
        for object in queryset:
            if object.condition_a and variable_2 > 0:
                variable_1 += 1
            # ... more conditions that alter the variables ...
        return queryset, context

    So according to the theory I should factor out all the code into smaller methods, so that the view method is a maximum of one page long. However, having worked on various code bases in the past, I sometimes find it makes the code less readable when you need to constantly jump from one method to the next, figuring out all the parts of it, while keeping the outermost method in your head. I find that with a long method that is well formatted, you can see the logic more easily, as it isn't getting hidden away in inner methods. I could factor out the code into smaller methods, but often there is an inner loop being used for two or three things, so it would result in more complex code, or methods that don't do one thing but two or three (alternatively I could repeat inner loops for each task, but then there will be a performance hit). So is there a case that long methods are not always bad? Is there always a case for writing methods when they will only be used in one place?

    Read the article

  • SSIS Catalog: How to use environment in every type of package execution

    - by Kevin Shyr
    Here is a good blog on how to create an SSIS Catalog and set up environments: http://sqlblog.com/blogs/jamie_thomson/archive/2010/11/13/ssis-server-catalogs-environments-environment-variables-in-ssis-in-denali.aspx. Here I will summarize the 3 ways I know of so far to execute a package while using variables set up in an SSIS Catalog environment.

    The first way: we have an SSIS project with a reference to an environment, and one of the project parameters uses a value set up in the environment called "Development". With this setup, you are limited to calling the packages by right-clicking on them in the SSIS Catalog list and selecting Execute, but you are free to choose an absolute or relative path for the environment. (The original post's screenshot, not reproduced here, shows the 2 available paths to your SSIS environments.) Personally, I use the absolute path because of option 3, just to keep everything simple for myself.

    The second option is to call the package through a SQL Agent job. This does require you to configure your project to reference an environment and use its variables. When a job step is set up, the configuration part requires you to select that reference again. This is more useful when you want to automate the same package that needs to be run in different environments.

    The third option is the most important to me, as I have an SSIS framework that calls hundreds of packages. The main part of the stored procedure is in this post (http://geekswithblogs.net/LifeLongTechie/archive/2012/11/14/time-to-stop-using-ldquoexecute-package-taskrdquondash-a-way-to.aspx), but the top part had to be modified to include the logic to use the environment reference:

        CREATE PROCEDURE [AUDIT].[LaunchPackageExecutionInSSISCatalog]
            @PackageName NVARCHAR(255)
            , @ProjectFolder NVARCHAR(255)
            , @ProjectName NVARCHAR(255)
            , @AuditKey INT
            , @DisableNotification BIT
            , @PackageExecutionLogID INT
            , @EnvironmentName NVARCHAR(128) = NULL
            , @Use32BitRunTime BIT = 0
        AS
        BEGIN TRY
            DECLARE @execution_id BIGINT = 0;

            -- Create a package execution
            IF @EnvironmentName IS NULL
            BEGIN
                EXEC [SSISDB].[catalog].[create_execution]
                    @package_name = @PackageName,
                    @execution_id = @execution_id OUTPUT,
                    @folder_name = @ProjectFolder,
                    @project_name = @ProjectName,
                    @use32bitruntime = @Use32BitRunTime;
            END
            ELSE
            BEGIN
                DECLARE @EnvironmentID AS INT;

                SELECT @EnvironmentID = [reference_id]
                FROM SSISDB.[internal].[environment_references] WITH(NOLOCK)
                WHERE [environment_name] = @EnvironmentName
                    AND [environment_folder_name] = @ProjectFolder;

                EXEC [SSISDB].[catalog].[create_execution]
                    @package_name = @PackageName,
                    @execution_id = @execution_id OUTPUT,
                    @folder_name = @ProjectFolder,
                    @project_name = @ProjectName,
                    @reference_id = @EnvironmentID,
                    @use32bitruntime = @Use32BitRunTime;
            END

    Read the article

  • 3d Picking under reticle

    - by Wolftousen
    I'm currently trying to work out some 3D picking code that I started years ago, but then lost interest in once the assignment was completed (this part wasn't actually part of the assignment). I am not using the mouse coords for picking, I'm just using the position in 3D space and a ray directly out from there. A small hitch, though, is that I want to use a cone and not a ray. Here are the variables I'm using:

        float iReticleSlope = 95.0f / 3000.0f; // inverse reticle slope (note: the integer expression 95/3000 would truncate to 0)
        float baseReticle = 1;                 // radius of the reticle at z = 0
        float maxRange = 3000;                 // max range to target
        Quaternion orientation;                // the camera's orientation
        Vector3d position;                     // the camera's position

    Then I loop through each object in the world:

        Vector3d transformed; // object position after transformations
        float d, r;           // holder variables
        for (i = 0; i < objects.length; i++) {
            transformed = position - objects[i].position; // transform the position relative to the camera
            orientation.multiply(transformed);            // orient the object relative to the camera
            if (transformed.z < 0) {
                d = sqrt(transformed[0] * transformed[0] + transformed[1] * transformed[1]);
                r = -transformed[2] * iReticleSlope + objects[i].radius;
                if (d < r && -transformed[2] - objects[i].radius <= maxRange) {
                    // the object is under the reticle
                } else {
                    // the object is not under the reticle
                }
            } else {
                // the object is not under the reticle
            }
        }

    Now this all works fine and dandy until the window ratio doesn't match the resolution ratio. Is there any simple way to account for that?

    Read the article

  • Laptop Display Not Working

    - by etrask
    Hello, I just purchased this laptop. It is working fine but I want to use Xubuntu on it. I managed to get 10.04 installed and running, but it did not recognize/use the wireless card. So I want to try 10.10. However, every time I try, the screen goes black and never comes back on. I have tried the Live CD and Alternate install CDs, for both Ubuntu and Xubuntu 10.10. After I select "Install Ubuntu" or "Try Ubuntu", the screen blacks out and never displays anything. So I deleted "quiet" and "splash" off the command line to see what messages would come up. A bunch of text flies by but it ends at:

        [time] TCP established hash table entries: 524288 (order: 11, 8388608 bytes)
        [time] TCP bind hash table entries: 65536 (order: 8, 1048576 bytes)
        [time] TCP: Hash tables configured (established 524288 bind 65536)
        [time] TCP reno registered
        [time] UDP hash table entries: 2048 (order: 4, 65536 bytes)
        [time] UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes)
        [time] NET: Registered protocol family 1

    and it freezes here forever. I have tried nomodeset, vga=771, and xforcevesa with no progress. Is my video card simply not going to work with this distro? It seems strange that 10.04 would work fine but not 10.10. Any help is appreciated, thank you.

    Read the article

  • Are these advanced/unfair interview questions regarding Java concurrency?

    - by sparc_spread
    Here are some questions I've recently asked interviewees who say they know Java concurrency:

    - Explain the hazard of "memory visibility" - the way the JVM can reorder certain operations on variables that are unprotected by a monitor and not declared volatile, such that one thread may not see the changes made by another thread. Usually I ask this one by showing code where this hazard is present (e.g. the NoVisibility example in Listing 3.1 from "Java Concurrency in Practice" by Goetz et al) and asking what is wrong.
    - Explain how volatile affects not just the actual variable declared volatile, but also any changes to variables made by a thread before it changes the volatile variable.
    - Why might you use volatile instead of synchronized?
    - Implement a condition variable with wait() and notifyAll(). Explain why you should use notifyAll(). Explain why the condition variable should be tested with a while loop.

    My question is - are these appropriate or too advanced to ask someone who says they know Java concurrency? And while we're at it, do you think that someone working in Java concurrency should be expected to have an above-average knowledge of Java garbage collection?

    Read the article

  • How should I start refactoring my mostly-procedural C++ application?

    - by oob
    We have a program written in C++ that is mostly procedural, but we do use some C++ containers from the standard library (vector, map, list, etc). We are constantly making changes to this code, so I wouldn't call it a stagnant piece of legacy code that we can just wrap up. There are a lot of issues with this code making it harder and harder for us to make changes, but I see the three biggest issues being:

    1. Many of the functions do more (way more) than one thing.
    2. We violate the DRY principle left and right.
    3. We have global variables and global state up the wazoo.

    I was thinking we should attack areas 1 and 2 first. Along the way, we can "de-globalize" our smaller functions from the bottom up, by passing in information that is currently global as parameters to the lower-level functions from the higher-level functions, and then concentrate on figuring out how to remove the need for global variables as much as possible. I just finished reading Code Complete 2 and The Pragmatic Programmer, and I learned a lot, but I am feeling overwhelmed. I would like to implement unit testing, change from a procedural to an OO approach, automate testing, use a better logging system, fully validate all input, implement better error handling and many other things, but I know if we start all this at once, we would screw ourselves. I am thinking the three I listed are the most important to start with. Any suggestions are welcome. We are a team of two programmers, mostly with experience with in-house scripting. It is going to be hard to justify taking the time to refactor, especially if we can't bill the time to a client. Believe it or not, this project has been successful enough to keep us busy full time and also keep several consultants busy using it for client work.

    Read the article

  • Binding in the view or the controller?

    - by da_b0uncer
    I've seen 2 different approaches with MVC on the web. One, like in ExtJS, is to bind the callbacks to the view via the controller: finding every element in the view and adding the functionality. The other, like in angular.js (and in the Lift framework server-side, too), is to bind in the views and just write the functionality in the controller. Which is better and cleaner? The ExtJS approach has dumb views and all the logic in the controller, which seems clean to me. I had problems with global IDs for GUI elements, and with relative navigation to GUI elements, in this approach. When I changed the view, the controller couldn't find the buttons anymore, or I had multiple instances of one button with the same ID in a single application, because of the global ID. But I solved this with IDs that are only global within a view and can appear in the application multiple times. So I could mess with the (dumb) view's layout and design and the functionality wouldn't break. The angular.js approach, with the bindings in the view, doesn't have the problem with global IDs. Also, the person who changes something in the view layout has to know the IDs anyway, so the controller can put the data at the right spot. So if I write <a ng-click="doThis()" /> for angular.js, or <a lid="buttonwhichdoesthis" /> for ExtJS and find the element with the local ID and add doThis() as a handler on the controller side, that seems to be not so different. The only thing is, the second one has one more layer of indirection, which seems cleaner. The first one seems somehow to cost less effort.

    Read the article

  • Are specific types still necessary?

    - by MKO
    One thing that occurred to me the other day: are specific types still necessary, or a legacy that is holding us back? What I mean is: do we really need short, int, long, bigint, etc.? I understand the reasoning: variables/objects are kept in memory, memory needs to be allocated, and therefore we need to know how big a variable can be. But really, shouldn't a modern programming language be able to handle "adaptive types", i.e. if something is only ever allocated in the shortint range it uses fewer bytes, and if something is suddenly assigned a very big number the memory is allocated accordingly for that particular instance? Float, real and doubles are a bit trickier, since the type depends on what precision you need. Strings should, however, be able to take up less memory in many instances (in .NET) where mostly ASCII is used, but strings always take up double the memory because of Unicode encoding. One argument for specific types might be that it's part of the specification, i.e. for example a variable should not be able to be bigger than a certain value, so we set it to shortint. But why not have type constraints instead? It would be much more flexible and powerful to be able to set permissible ranges and values on variables (and properties). I realize the immense problem in revamping the type architecture, since it's so tightly integrated with the underlying hardware, and things like serialization might become tricky indeed. But from a programming perspective it should be great, no?
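    As a sketch of the "type constraints" idea in today's C# (purely illustrative; the range check here happens at runtime, whereas the post is wishing for language-level, compile-time support):

        using System;

        // A value type that enforces a permissible range on construction.
        public readonly struct BoundedInt
        {
            public int Value { get; }

            public BoundedInt(int value, int min, int max)
            {
                if (value < min || value > max)
                    throw new ArgumentOutOfRangeException(nameof(value),
                        value + " is outside [" + min + ", " + max + "]");
                Value = value;
            }
        }

        class Demo
        {
            static void Main()
            {
                var percent = new BoundedInt(85, 0, 100);   // OK
                Console.WriteLine(percent.Value);
                var bad = new BoundedInt(150, 0, 100);      // throws at runtime
            }
        }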

    Read the article

  • How can I define custom keyboard mappings to resize, move, and manage windows?

    - by fumon
    I just returned to Ubuntu (13.04) after a year using OS X exclusively. I love the improvements that have come to Ubuntu and Unity, and I'm glad to be back. There's just one thing, though... Slate is a simple OS X tool that allows users to quickly create powerful keyboard macros and really take advantage of their screen space. I have to say I was spoiled by it. Even on a tiny laptop, my workflow was never interrupted by changing workspaces or leaving the keyboard to adjust a window, because perfect adjustment was a keystroke or two away. For example:

        bind h:ctrl;alt;cmd resize -10% +0  # this increases the window's left width by 10%
        bind h:shift;alt nudge -10% +0      # this moves the window left by 10%

    You make a big config file, and like vim, tmux, and everything else, it just becomes muscle memory. I can't seem to find a way to achieve anything close to this in Linux or Ubuntu. I've tried to make do with Compiz window settings and the built-in stuff Ubuntu offers, but it's not even in the same realm. Although to be fair, this level of tuning isn't something most people care about. Thanks, guys. :) Any feedback would be appreciated.

    Read the article

  • Nullable types and ?? operator C# [en-US]

    - by ruimachado
    Nullable types vs non-nullable types

    While developing our C# projects, we frequently compare against null to avoid null exceptions. This simple operation is mainly coded as an "x == null" check inside an if clause. However, not all types of variables are nullable, which means that setting a variable to null is not allowed in every case; it depends on what kind of type you are defining. But what if there was an extension to your non-nullable type that would convert your variable's type to nullable? This extension really exists. As I said before, in C# you have nullable types, which represent all the values of an underlying type plus an additional null value, and which can be declared easily using "T?", where T is the type of the variable. For example, the normal int type cannot be null, so it's a non-nullable type; however, if you define an "int?", your variable can be null. What you do is convert a non-nullable type to a nullable type. Example:

        int x = null;    // Not allowed
        int? x = null;   // Allowed

    While using nullable types, you can check if a variable is null the same way you do it with reference types. But what about setting a default value when a certain variable is null? In these cases, the C# language lets you set a default value when you assign a nullable type to a non-nullable type, using the ?? operator. If you don't use this operator, you can still catch the InvalidOperationException which is thrown in these cases. (The original post illustrates both versions with screenshots, not reproduced here; a sketch follows below.) Using the ?? operator, your code becomes cleaner and easier to read, and you get a bonus: you can set a default value for multiple variables by chaining ??.

    That's it,

    Thanks, Rui Machado rpmachado.wordpress.com
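    A sketch of the two versions described above (the original screenshots are not reproduced; the variable names are invented):

        using System;

        class NullCoalescingDemo
        {
            static void Main()
            {
                int? maybeCount = null;

                // Without ??: guard against the InvalidOperationException yourself.
                int count1;
                try
                {
                    count1 = (int)maybeCount;   // throws InvalidOperationException when null
                }
                catch (InvalidOperationException)
                {
                    count1 = 0;                 // fall back to a default
                }

                // With ??: the same intent in one expression.
                int count2 = maybeCount ?? 0;

                // The bonus: chained defaults across multiple nullable variables.
                int? a = null, b = null;
                int firstNonNull = a ?? b ?? -1;

                Console.WriteLine(count1 + " " + count2 + " " + firstNonNull);  // 0 0 -1
            }
        }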

    Read the article

  • Matlab: Why is '1' + 1 == 50? [migrated]

    - by phi
    Matlab has weak dynamic typing, which is what causes this weird behaviour. What I do not understand is what exactly happens, as this result really surprises me. Edit: To clarify, what I'm describing is clearly a result of Matlab storing chars in ASCII format, which was also mentioned in the comments. I'm more interested in the way Matlab handles its variables, and specifically, how and when it assigns a type/tag to the values. Thanks. '1' is a 1-by-1 matrix of chars in Matlab, and '123' is a 1-by-3 matrix of chars. As expected, 1 returns a 1-by-1 double. Now:

        '1' + 1      % returns 50 as a 1-by-1 double
        '123' + 1    % returns the 1-by-3 double [ 50 51 52 ]
        'a' + 1      % returns 98 as a 1-by-1 double

    I assume this has to do with how Matlab stores char variables in ASCII form, but how exactly is it handling these? Are the data actually uni-typed and tagged, or how does it work? Thanks.
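    For comparison only (C#, not Matlab): the same kind of implicit char-to-integer promotion exists in other languages, which may make the mechanism easier to see:

        using System;

        class CharPromotionDemo
        {
            static void Main()
            {
                // '1' has character code 49, so '1' + 1 evaluates to the integer 50,
                // mirroring Matlab's '1' + 1.
                Console.WriteLine('1' + 1);         // 50

                // 'a' is 97, so 'a' + 1 is 98.
                Console.WriteLine('a' + 1);         // 98

                // Matlab's '123' + 1 maps each character code: 50 51 52.
                foreach (char c in "123")
                    Console.Write((c + 1) + " ");   // 50 51 52
            }
        }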

    Read the article
