Search Results

Search found 19913 results on 797 pages for 'bit packing'.

  • recursive delete trigger and ON DELETE CASCADE constraints are not deleting everything

    - by bitbonk
    I have a very simple data model that represents a tree structure. The RootEntity is the root of such a tree; it can contain children of type ContainerEntity and of type AtomEntity. The type ContainerEntity can in turn contain children of type ContainerEntity and of type AtomEntity, but cannot contain children of type RootEntity. Children are referenced in a well-known order. The DB model for this is below. My problem is that when I delete a RootEntity, I want all children to be deleted recursively. I have created foreign keys with ON DELETE CASCADE and two delete triggers for this, but it is not deleting everything: it always leaves some rows behind in the ContainerEntity, AtomEntity, ContainerEntity_Children and AtomEntity_Children tables, seemingly beginning at recursion level 3.

        CREATE TABLE RootEntity (
            Id UNIQUEIDENTIFIER NOT NULL,
            Name VARCHAR(500) NOT NULL,
            CONSTRAINT PK_RootEntity PRIMARY KEY NONCLUSTERED (Id),
        );

        CREATE TABLE ContainerEntity (
            Id UNIQUEIDENTIFIER NOT NULL,
            Name VARCHAR(500) NOT NULL,
            CONSTRAINT PK_ContainerEntity PRIMARY KEY NONCLUSTERED (Id),
        );

        CREATE TABLE AtomEntity (
            Id UNIQUEIDENTIFIER NOT NULL,
            Name VARCHAR(500) NOT NULL,
            CONSTRAINT PK_AtomEntity PRIMARY KEY NONCLUSTERED (Id),
        );

        CREATE TABLE RootEntity_Children (
            ParentId UNIQUEIDENTIFIER NOT NULL,
            OrderIndex INT NOT NULL,
            ChildContainerEntityId UNIQUEIDENTIFIER NULL,
            ChildAtomEntityId UNIQUEIDENTIFIER NULL,
            ChildIsContainerEntity BIT NOT NULL,
            CONSTRAINT PK_RootEntity_Children PRIMARY KEY NONCLUSTERED (ParentId, OrderIndex),
            -- foreign key to parent RootEntity
            CONSTRAINT FK_RootEntiry_Children__RootEntity FOREIGN KEY (ParentId)
                REFERENCES RootEntity (Id) ON DELETE CASCADE,
            -- foreign key to referenced (child) ContainerEntity
            CONSTRAINT FK_RootEntiry_Children__ContainerEntity FOREIGN KEY (ChildContainerEntityId)
                REFERENCES ContainerEntity (Id) ON DELETE CASCADE,
            -- foreign key to referenced (child) AtomEntity
            CONSTRAINT FK_RootEntiry_Children__AtomEntity FOREIGN KEY (ChildAtomEntityId)
                REFERENCES AtomEntity (Id) ON DELETE CASCADE,
        );

        CREATE TABLE ContainerEntity_Children (
            ParentId UNIQUEIDENTIFIER NOT NULL,
            OrderIndex INT NOT NULL,
            ChildContainerEntityId UNIQUEIDENTIFIER NULL,
            ChildAtomEntityId UNIQUEIDENTIFIER NULL,
            ChildIsContainerEntity BIT NOT NULL,
            CONSTRAINT PK_ContainerEntity_Children PRIMARY KEY NONCLUSTERED (ParentId, OrderIndex),
            -- foreign key to parent ContainerEntity
            CONSTRAINT FK_ContainerEntity_Children__RootEntity FOREIGN KEY (ParentId)
                REFERENCES ContainerEntity (Id) ON DELETE CASCADE,
            -- foreign key to referenced (child) ContainerEntity
            CONSTRAINT FK_ContainerEntity_Children__ContainerEntity FOREIGN KEY (ChildContainerEntityId)
                REFERENCES ContainerEntity (Id) ON DELETE CASCADE,
            -- foreign key to referenced (child) AtomEntity
            CONSTRAINT FK_ContainerEntity_Children__AtomEntity FOREIGN KEY (ChildAtomEntityId)
                REFERENCES AtomEntity (Id) ON DELETE CASCADE,
        );

        CREATE TRIGGER Delete_RootEntity_Children ON RootEntity_Children FOR DELETE AS
            DELETE FROM ContainerEntity WHERE Id IN (SELECT ChildContainerEntityId FROM deleted)
            DELETE FROM AtomEntity WHERE Id IN (SELECT ChildAtomEntityId FROM deleted)
        GO

        CREATE TRIGGER Delete_ContainerEntiy_Children ON ContainerEntity_Children FOR DELETE AS
            DELETE FROM ContainerEntity WHERE Id IN (SELECT ChildContainerEntityId FROM deleted)
            DELETE FROM AtomEntity WHERE Id IN (SELECT ChildAtomEntityId FROM deleted)
        GO

  • VPython in Eclipse - thinks it has the wrong architecture type.

    - by Duncan Tait
    Evening. So I've recently installed VPython on my MacBook (OS X, Snow Leopard), and it works absolutely fine in IDLE and from the command line (interactive mode). However, Eclipse has issues. Firstly it couldn't find it (which is a bit of an issue with all these 'easy install' Python modules, when they don't tell you where they actually install to!), but I searched it out in the depths of Library/Frameworks... and added that to the System PYTHONPATH listbox in Eclipse. Now it can find it, but it says the following:

        Traceback (most recent call last):
          File "/Users/duncantait/dev/workspace/Network_Simulation/src/Basic/Net_Sim1.py", line 15, in <module>
            import visual
          File "/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/visual/__init__.py", line 59, in <module>
            import cvisual
        ImportError: dlopen(/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/visual/cvisual.so, 2): no suitable image found. Did find:
          /Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/visual/cvisual.so: mach-o, but wrong architecture

    I am guessing that VPython might not be built for a 64-bit (Intel) architecture, but the fact remains that it works in both IDLE and the command prompt... so there must be a way to configure Eclipse to run it right? (Wishful thinking.) Thanks for any help! Duncan

  • Can't operator == be applied to generic types in C#?

    - by Hosam Aly
    According to the documentation of the == operator in MSDN,

        For predefined value types, the equality operator (==) returns true if the values of its operands are equal, false otherwise. For reference types other than string, == returns true if its two operands refer to the same object. For the string type, == compares the values of the strings. User-defined value types can overload the == operator (see operator). So can user-defined reference types, although by default == behaves as described above for both predefined and user-defined reference types.

    So why does this code snippet fail to compile?

        bool Compare<T>(T x, T y) { return x == y; }

    I get the error Operator '==' cannot be applied to operands of type 'T' and 'T'. I wonder why, since as far as I understand the == operator is predefined for all types?

    Edit: Thanks, everybody. I didn't notice at first that the statement was about reference types only. I also thought that bit-by-bit comparison is provided for all value types, which I now know is not correct. But, in case I'm using a reference type, would the == operator use the predefined reference comparison, or would it use the overloaded version of the operator if a type defined one?

    Edit 2: Through trial and error, we learned that the == operator will use the predefined reference comparison when using an unrestricted generic type. Actually, the compiler will use the best method it can find for the restricted type argument, but will look no further. For example, the code below will always print true, even when Test.test<B>(new B(), new B()) is called:

        class A { public static bool operator ==(A x, A y) { return true; } }
        class B : A { public static bool operator ==(B x, B y) { return false; } }
        class Test { void test<T>(T a, T b) where T : A { Console.WriteLine(a == b); } }

  • rails + compass: advantages vs using haml + blueprint directly

    - by egarcia
    I've got some experience using haml (+sass) on Rails projects. I recently started using them with blueprintcss - the only thing I did was transform blueprint.css into a sass file and start coding from there. I even have a Rails generator that includes all this by default. It seems that Compass does what I do, plus other things. I'm trying to understand what those other things are, but the documentation/tutorials weren't very clear. These are my conclusions:

    - Compass comes with built-in sass mixins that implement common CSS idioms, such as links with icons or horizontal lists. My solution doesn't provide anything like that. (1 point for Compass.)
    - Compass has several command-line options: you can create a rails project, but you can also "install" it on an existing rails project. A rails generator could be personalized to do the same thing, I guess. (Tie.)
    - Compass has two modes of working with blueprint: "basic" and "semantic" usage. I'm not clear about the differences between those. With my rails generator I only have one mode, but it seems enough. (Tie.)
    - Apparently, Compass is prepared to use other frameworks besides blueprint (e.g. YUI). I could not find much documentation about this, and I'm not interested in it anyway - blueprint is ok for me. (Tie.)
    - Compass' learning curve seems a bit steep and the documentation seems sparse, so learning could be difficult. On the other hand, I know the ins and outs of my own system and can use it right away. (1 point for my system.)

    With this analysis, I'm hesitant to give Compass a try. Is my analysis correct? Am I missing any key points, or have I evaluated any of these points wrongly?

  • Linux 2.6.31 Scheduler and Multithreaded Jobs

    - by dsimcha
    I run massively parallel scientific computing jobs on a shared Linux computer with 24 cores. Most of the time my jobs are capable of scaling to 24 cores when nothing else is running on this computer. However, it seems that when even one single-threaded job that isn't mine is running, my 24-thread jobs (which I set to high nice values) only manage to get ~1800% CPU (using Linux notation). Meanwhile, about 500% of the CPU cycles (again, using Linux notation) sit idle. Can anyone explain this behavior and what I can do about it to get all 23 of the cores that aren't being used by someone else? Notes: In case it's relevant, I have observed this on slightly different kernel versions, though I can't remember which off the top of my head. The CPU architecture is x64. Is it at all possible that the fact that my 24-core jobs are 32-bit and the other jobs I'm competing with are 64-bit is relevant? Edit: One thing I just noticed is that going up to 30 threads seems to alleviate the problem to some degree. It gets me up to ~2100% CPU.

  • Design to distribute work when generating task-oriented input for a legacy DOS application?

    - by TheDeeno
    I'm attempting to automate a really old DOS application. I've decided the best way to do this is via input redirection. The legacy app (menu driven) has many tasks within tasks with branching logic. In order to easily understand and reuse the input for these tasks, I'd like to break them into bite-sized pieces. Since I'll need to start a fresh app on each run, repeating a context to consume a bit might be messy. I'd like to create an object model that:

    - allows me to concentrate on the task at hand
    - allows me to reuse common tasks from different start points
    - prevents me from calling a task from the wrong start point

    To be more explicit, given I have the following task hierarchy:

        START
          A
            A1
              A1a
              A1b
            A2
              A2a
          B
            B1
              B1a

    I'd like an object model that lets me generate an input file for task "A1b" by using building blocks like:

        START -> do_A, do_A1, do_A1b

    but prevents me from:

        START -> do_A1  // because I'm assuming a different call chain from above

    This will help me write "do_A1b" because I can always assume the same starting context, and will simplify writing "do_A1a" because it has THE SAME starting context. What patterns will help me out here? I'm using Ruby at the moment, so if dynamic language features can help, I'm game.

  • Bitap algorithm in Java [closed]

    - by davit-datuashvili
    The following is the bitap algorithm according to Wikipedia. Can someone translate this to Java?

        #include <string.h>
        #include <limits.h>

        const char *bitap_bitwise_search(const char *text, const char *pattern)
        {
            int m = strlen(pattern);
            unsigned long R;
            unsigned long pattern_mask[CHAR_MAX+1];
            int i;

            if (pattern[0] == '\0') return text;
            if (m > 31) return "The pattern is too long!";

            /* Initialize the bit array R */
            R = ~1;

            /* Initialize the pattern bitmasks */
            for (i=0; i <= CHAR_MAX; ++i)
                pattern_mask[i] = ~0;
            for (i=0; i < m; ++i)
                pattern_mask[pattern[i]] &= ~(1UL << i);

            for (i=0; text[i] != '\0'; ++i) {
                /* Update the bit array */
                R |= pattern_mask[text[i]];
                R <<= 1;

                if (0 == (R & (1UL << m)))
                    return (text + i - m) + 1;
            }

            return NULL;
        }
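    Not from the original post, but here is a minimal Java sketch of a direct port of the C version above, assuming 8-bit characters and patterns of at most 31 characters (the class and method names are illustrative):

        import java.util.Arrays;

        public class Bitap {
            /** Returns the index of the first occurrence of pattern in text, or -1 if none. */
            public static int search(String text, String pattern) {
                int m = pattern.length();
                if (m == 0) return 0;                 // empty pattern matches at the start
                if (m > 31) throw new IllegalArgumentException("The pattern is too long!");

                /* Initialize the pattern bitmasks (one cleared bit per pattern position) */
                int[] patternMask = new int[256];
                Arrays.fill(patternMask, ~0);
                for (int i = 0; i < m; ++i)
                    patternMask[pattern.charAt(i) & 0xFF] &= ~(1 << i);

                /* Initialize the bit array R */
                int r = ~1;
                for (int i = 0; i < text.length(); ++i) {
                    /* Update the bit array */
                    r |= patternMask[text.charAt(i) & 0xFF];
                    r <<= 1;
                    if ((r & (1 << m)) == 0)
                        return i - m + 1;             // match ends at position i
                }
                return -1;
            }
        }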

  • Switching from SourceSafe - What to look for in a product

    - by asp316
    We're looking to move off of SourceSafe and onto a more robust source control system for our .Net apps. We're also looking for scripted/automated deployments. I'm a .Net developer (web and winforms). However, most of our development staff is RPG for the IBM iSeries, and the devs use Aldon's LMI for source control and deployment. Our manager would prefer to stick with Aldon so all of our products are in the same system. However, I don't have experience with Aldon's products on the .Net side. I've used TFS and Subversion with Tortoise a bit, but not enough to recommend one or the other, especially in comparison to Aldon's product. Does anybody have experience with Aldon's products? If so, thoughts please? Also, other than the obvious things source control systems do, are there things I should avoid, or are there must-haves? I'm open to any system. A bit of background: I'm the only .Net dev in our company, but I let operations do the deployments. I do want the ability to support concurrent checkouts if we hire a new dev.

  • Investigating the root cause behind SharePoint's "request not found in the TrackedRequests"

    - by Muhimbi
    We have a long standing issue in our bug tracking system about the dreaded "ERROR: request not found in the TrackedRequests. We might be creating and closing webs on different threads." message in SharePoint's trace log. As we develop workflow software for the SharePoint market, we look into this issue from time to time to make sure it is not caused by our products. I have personally come to the conclusion that this is a problem in SharePoint, but perhaps someone else can prove me wrong. Here is what I know:

    - According to the hundreds of search results returned by Google on this topic, this issue appears to be mainly related to SharePoint workflows, both SharePoint Designer and Visual Studio based workflows.
    - Assuming ULS logging is set to Monitorable, the easiest way to reproduce this problem is to create a new SharePoint Designer workflow, attach it to a document library, set it to auto start on add/update, don't add any actions, save the workflow and upload a file to the document library.
    - The error is only visible in the SharePoint trace log; it does not appear to impact the execution of the workflow at hand.
    - I have verified that the problem occurs on 32 bit as well as 64 bit systems, Win2K3 and 2K8, WSS and MOSS, with SharePoint versions up to the December 2009 Cumulative Update (6524).
    - The problem does not occur when a workflow is started manually.
    - There are dozens of related posts on MSDN Forums, hundreds on Google, one on StackOverflow and none on SharePoint Overflow. There appears to be no answer.

    Does anyone have any idea about what is going on, what is causing this, and whether we should worry or file this under 'Red Herrings'?

  • HELP!!! Upgrading to Windows 2008 R2 server has caused major issue with screen scraping remote server

    - by bobsov534
    I have three servers. One is Windows 2003 (A), another is Windows 2008 (B), and the third is also Windows 2008 (C). All of them are web servers. A and B contain classic ASP pages and are 32 bit servers; C contains ASP.NET pages and is a 64 bit server. The ASP pages of A and B use screen scraping to render the ASP.NET pages from C. When A is run, the ASP.NET page is rendered fine, meaning there are no broken images or file-not-found errors. When B is run, the images appear to be broken because it is looking for those images on server B itself instead of server C. I believe this issue is caused by IIS 7 or 7.5, since IIS 6 has no problem scraping the remote server pages. Can you please help me with a solution to this problem? This is sort of urgent, since upgrading to Windows Server 2008 R2 has now been a major show stopper for us. Thanks in advance.

  • How to create a high quality icon for my Windows application?

    - by Patrick Klug
    If you are running Windows with a higher DPI setting you will notice that most application icons on the desktop look terrible. Even high profile application icons such as Google Chrome look terrible, while for example Firefox, Skype and MS Office icons look sharp: (example) I suspect that most icons look blurry because a lower resolution icon is scaled up rather than a higher resolution icon being used. I want to give my application a high quality icon and can't seem to convince Windows to use the higher resolution icon. I have created a multi-resolution icon with the free icon editor IcoFX. The icon is provided in 16x16, 24x24, 32x32, 48x48, 128x128 and 256x256 (!) sizes (all in 32 bit, including alpha channel), yet Windows seems to use the 128x128 version of the icon on the desktop and scale it up, which looks terrible. (I am using Windows 7 64 bit; the icon is placed by means of setting up a shortcut in the MSI (created via a Visual Studio 2008 Setup Project) and pointing it to the .ico file that contains the multi-resolution icon.) I have tried removing the 128x128 icon, but to no avail. Interestingly, in Windows Explorer the icon looks great, even when using the Extra Large Icon setting. How can I create a high quality desktop icon that looks great at higher DPI settings on Windows?

  • Getting the size of the window WITHOUT title/notification bars

    - by Anidamo
    Hi there, I've been playing around with Android development, and one of the things I'd like to be able to do is dynamically create a background image for my windows, similar to the one below. This is from my BlackBerry app. It consists of three separate parts: the bottom right logo, the top left watermark, and the bottom right name. It works independent of screen size because the BlackBerry app just gets all three parts and generates an appropriately sized bitmap using the screen width and height. Since Android has quite a few more possible screen resolutions, I need to be able to generate backgrounds on the fly like this. However, I have not found any way to get the height/width of the window in Android. I can get the screen resolution, but that includes the application title bar and the notification bar, which is unacceptable. I'd like to know how to get the size of my window, or the screen resolution minus the title and notification bars. I think this might be possible using the dimensions of my layout managers, but I cannot get their height/width in the onCreate method, so I'm at a little bit of a loss. Thanks.
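    Not part of the original question, but a minimal sketch of one common approach (assuming this runs inside an Activity; view dimensions are still zero in onCreate, so they are queried in onWindowFocusChanged instead):

        import android.app.Activity;
        import android.graphics.Rect;
        import android.view.View;
        import android.view.Window;

        public class BackgroundActivity extends Activity {
            @Override
            public void onWindowFocusChanged(boolean hasFocus) {
                super.onWindowFocusChanged(hasFocus);
                // Visible frame of the window, which excludes the notification (status) bar.
                Rect visible = new Rect();
                getWindow().getDecorView().getWindowVisibleDisplayFrame(visible);
                // The content view additionally excludes the title bar.
                View content = getWindow().findViewById(Window.ID_ANDROID_CONTENT);
                int usableWidth = content.getWidth();    // width without decorations
                int usableHeight = content.getHeight();  // height without title/status bars
                int statusBarHeight = visible.top;
                int titleBarHeight = content.getTop() - visible.top;
                // usableWidth x usableHeight is the area available for a generated background.
            }
        }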

  • Is this way of storing typed objects in memory good?

    - by Pindatjuh
    This is an "is this okay, or can it be done better" question. Topic: Storing typed objects in memory. Background information: I'm building a compiler for the x86-32 platform for my language. My goal includes typed objects. Idea: Every primitive is a semi-class (it can be used as if it was a normal class, but it's stored more compact). Every class is represented by primitives and some meta-data (containing class-properties, inheritance stuff, etc.). The meta-data is complex: it doesn't use fields but instead context-switches. For primitives, the meta-data is very small, compared to a "real" class, which is alot bigger. This enables another idea that "primitives are objects", in my language, which I found nessecairy. Example: If I have an array of 32 booleans, then the pure content of this array is exactly 4 byte (32 bits of booleans). The meta-data will contain flags that the type is an array of booleans, which contains 32 entries. The meta-data is very compacted, on bit-level: using a sort of "packing" mechanism, which is read by a FSM at runtime, when doing inspection of the type (like when passing the object to methods for checking, etc.) For instance (read from left to right, top to bottom, remember vertical possition when going to the right, and check nearest column header for meaning of switch): Primitive? Array? Type-Meta 1 Byte? || Size (1 byte) 1 1 [...] 1 [...] done 0 2 Bytes? || Size (2 bytes) 1 [...] done || Size (4 bytes) 0 [...] done Integer? 1 Byte? 2 Bytes? 0 1 0 1 done 1 done 0 done Boolean? Byte? 0 1 0 done 1 done More-Primitives 0 .... Class-Stuff (Huge) 0 ... (After reaching done the data is inserted. || = byte alignement. [...] is variable sized. ... is not described here, for simplicity. And let's call them cost-based-data-structures.) For an array of 32 booleans containing all true values, the memory for this type would be (read top-down): 1 Primitive 1 Array 1 ArrayType: Primitive 0 Not-Array 0 Not-Integer 1 Boolean 0 Not-Byte (thus bit) 1 Integer Size: 1 Byte 00100000 Array size 11111111 11111111 11111111 11111111 Data Thus, 8 bytes represent 32 booleans in an array: 11100101 00100000 11111111 11111111 11111111 11111111 Is this okay, or can it be done better?

  • Graph drawing for the Web 2.0

    - by tokel
    Hi. There are a lot of chart drawing libraries out there, but what I am looking for is an interactive(!) graph (nodes and edges) drawing library. Ideally some kind of AJAX, but I am also open to other technologies (Java, Flash). However, I would really prefer an AJAX implementation. Also, only Open Source framework suggestions please (I already know about yFiles). The thing I have in mind is a bit like GWTUML. That is quite nice already, but it misses an API for its graph drawing; it seems to be a custom implementation for their product. Also, some layout algorithms would be nice. Is there really no library around for that task? I wonder a bit, as there are so many chart libraries available. Let me emphasize again (this seems to be necessary because of the first two comments): I am NOT looking for another chart library! Regards, Kai P.S. Things I know about already: JUNG (Java), Prefuse (Java), GINY (Java), yFiles (AJAX, but not Open Source). Just another graph library I found (Javascript, not sure if it is still maintained): Graph Gear

  • Rails 3 with Ruby 1.9.1 on Heroku

    - by stephen murdoch
    I've decided that I am going to man up and start using Rails 3 from now on, but the following note found here puts me off a bit: "Note that Ruby 1.8.7 has marshaling bugs that crash both Rails 2.3.x and Rails 3.0.0. Ruby 1.9.1 outright segfaults on Rails 3.0.0, so if you want to use Rails 3 with 1.9.x, jump on 1.9.2 trunk for smooth sailing." I use Heroku for my deployment, and as far as I am aware they do not plan to add 1.9.2 to the stack until it's stable (which might be in August), so I was thinking of doing it with 1.9.1 and seeing what happens. I know that there is a third beta release now, but the comments on the blog imply that it's still a little bit buggy. Is DHH implying that you shouldn't touch Rails 3 at all if you are on 1.9.1? What are other Heroku-ists doing regarding Rails 3? Anyone using it for any production apps? I guess I'll only know once I've tried, but any advice would be nice.

  • JVM terminates when launching eclipse with J2SE 6.0 on mac os x (need J2SE 6.0 for Oracle Enterprise Pack for Eclipse)

    - by rooban bajwa
    I know my issue has partly been addressed at this link: http://stackoverflow.com/questions/245803/jvm-terminates-when-launching-eclipse-mat-on-mac-os-with-j2se-60. But that was a year+ ago, plus the link provided in there, http://landonf.bikemonkey.org/static/soylatte/, does not seem to be alive (I mean the download section on that link no longer provides the 32-bit port of J2SE 6.0 for Mac OS X 10.5). I am trying to run Eclipse 3.5 on Mac OS X 10.5. It works fine with J2SE 5.0, but then I installed the Oracle Enterprise Pack for Eclipse, and it requires Eclipse to be started with a J2SE 6.0 JVM, otherwise it gets disabled. Here's the exact message I get from it: "You are running Eclipse on Java VM version: 1.5.0_22. Oracle Enterprise Pack for Eclipse requires Java version 6 or higher. Click next to configure a compatible Java VM." It asks me to point to a J2SE 6.0 JVM. When I do that (i.e. point it to "/System/Library/Frameworks/JavaVM.framework/Versions/1.6.0/Home"), it asks to restart Eclipse, and when I do that, Eclipse just bombs with a "JVM terminated" error. So I need to start Eclipse with a J2SE 6.0 JVM, but Eclipse needs Carbon, which is only available in 32 bit, and hence I can't start Eclipse with the J2SE 6.0 JVM, which is only available in 64-bit mode from Mac. And the site providing the 32-bit port of J2SE 6.0 does not seem to be active anymore. Can someone help me with this issue? Thanks in advance.

  • Create a dynamic project template in VS 2010?

    - by jonhobbs
    This might sound a bit of an odd question, but I know what I want to achieve, I just don't know if it's possible. Firstly, I'd like to be able to create a Visual Studio project that the two developers who work with me can use as a basis for all new websites. I want to drop all the common files that we use in there, like jQuery, CMS files etc., so that every time they start a new project they don't have to worry about all of that stuff. I guess to do this I just set up a project and use "File > Export Template"? Now, here's the tricky bit... When you open up one of the default templates in VS, it asks you a few questions, such as whether you want to use a master page or whether you want to use code-behind. What I would like to do is set up something similar, so that when you use the project template it asks you which version of jQuery you want to use (so that it can import the right file), or, for example, whether you want to include certain user controls that the CMS contains. If you tick the box, the folder with the necessary user controls would be put in your new project for you. I know MS can do this, but can a user like me include functionality like that in my own project template? Hope that makes sense.

  • Prototype JS swallows errors in dom:loaded, and ajax callbacks?

    - by WishCow
    I can't figure out why Prototype suppresses the error messages in the dom:loaded event and in AJAX handlers. Given the following piece of HTML:

        <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.1//EN" "http://www.w3.org/TR/xhtml11/DTD/xhtml11.dtd">
        <html xmlns="http://www.w3.org/1999/xhtml">
        <head>
          <title>Conforming XHTML 1.1 Template</title>
          <script type="text/javascript" src="prototype.js"></script>
          <script type="text/javascript">
            document.observe('dom:loaded', function() {
              console.log('domready');
              console.log(idontexist);
            });
          </script>
        </head>
        <body>
        </body>
        </html>

    The domready event fires, and I see the log in the console, but there is no indication of any errors whatsoever. If you move the console.log(idontexist); line out of the handler, you get the idontexist is not defined error in the console. I find it a little weird that in other event handlers, like 'click', you do get the error message; it seems that it's only dom:loaded that has this problem. The same goes for AJAX handlers:

        new Ajax.Request('/', {
          method: 'get',
          onComplete: function(r) {
            console.log('xhr complete');
            alert(youwontseeme);
          }
        });

    You won't see any errors. This is with prototype.js 1.6.1, and I can't find any indication of this behavior in the docs, nor a way to enable error reporting in these handlers. I have tried stepping through the code with FireBug's debugger, and it seems to jump to a function on line 53 named K when it encounters the missing variable in the dom:loaded handler:

        K: function(x) { return x }

    But how? Why? When? I can't see any try/catch block there; how does the program flow end up there? I know that I can make the errors visible by packing my dom:ready handler(s) in try/catch blocks, but that's not a very comfortable option. The same goes for registering a global onException handler for the AJAX calls. Why does it even suppress the errors? Did someone encounter this before?

  • Working with IEEE format numbers in ARM

    - by Jake Sellers
    I'm trying to write an ARM program that will convert an IEEE number to a TNS format number. TNS is a format used by some supercomputers, and is similar to IEEE but different. I'm trying to use several masks to place the three different "parts" of the IEEE number in separate registers so I can move them around accordingly. Here is my unpack subroutine:

        UnpackIEEE
            LDR r1, SMASK       ;load the sign bit mask into r1
            LDR r2, EMASK       ;load the exponent mask into r2
            LDR r3, GMASK       ;load the significand mask into r3
            AND r4, r0, r1      ;apply sign mask to IEEE and save into r4
            AND r5, r0, r2      ;apply exponent mask to IEEE and save into r5
            AND r6, r0, r3      ;apply significand mask to IEEE and save into r6
            MOV pc, r14         ;return

    And here are the masks and number declarations so you can understand:

        IEEE  DCD 0x40300000    ;2.75 decimal or 01000000001100000000000000000000 binary
        SMASK DCD 0x80000000    ;Sign bit mask
        EMASK DCD 0x7F800000    ;Exponent mask
        GMASK DCD 0x007FFFFF    ;Significand mask

    When I step through with the debugger, the results I get are not what I expect after working through it on paper. EDIT: What I mean is that after the subroutine runs, registers 4, 5, and 6 all remain 0. I can't figure out why the masks are not working. I think I do not fully understand how the number is being stored in the register, or I am using the masks wrong. Any help appreciated. If you need more info, just ask. EDIT: entry point (very simple, just trying to get these subroutines working):

        ENTRY
            LDR r1, IEEE        ;load IEEE num into r1
            BL UnpackIEEE       ;call unpack sub
            SWI SWI_Exit        ;finish

  • Version Control and Coding Formatting

    - by Martin Giffy D'Souza
    Hi, I'm currently part of the team implementing a new version control system (Subversion) within my organization. There's been a bit of a debate on how to handle code formatting, and I'd like to get other people's opinions and experiences on this topic. We currently have ~10 developers, each using different tools (due to licensing and preference). Some of these tools have automatic code formatters and others don't. If we allow "blind" check-ins, the code will look drastically different on each check-in. This will make things such as diffs and merges complicated. I've talked to several people, and they've mentioned the following solutions:

    1. Use the same developer program with the same code formatter (not really an option due to licensing).
    2. Have a hook (either client or server side) which will automatically format the code before it goes into the repository.
    3. Manually format the code.

    Regarding the 3rd point, the concept is to never auto-format the code and to have some standards. Right now that seems to be what we're leaning towards. I'm a bit hesitant about that approach, as it could lead to developers spending a lot of time manually formatting code. If anyone can provide their thoughts and experience on this, that would be great. Thank you, Martin

  • Using MD5 to generate an encryption key from password?

    - by Charles
    I'm writing a simple program for file encryption. Mostly as an academic exercise but possibly for future serious use. All of the heavy lifting is done with third-party libraries, but putting the pieces together in a secure manner is still quite a challenge for the non-cryptographer. Basically, I've got just about everything working the way I think it should. I'm using 128-bit AES for the encryption with a 128-bit key length. I want users to be able to enter in variable-length passwords, so I decided to hash the password with MD5 and then use the hash as the key. I figured this was acceptable--the key is always supposed to be a secret, so there's no reason to worry about collision attacks. Now that I've implemented this, I ran across a couple articles indicating that this is a bad idea. My question is: why? If a good password is chosen, the cipher is supposed to be strong enough on its own to never reveal the key except via an extraordinary (read: currently infeasible) brute-force effort, right? Should I be using something like PBKDF2 to generate the key or is that just overkill for all but the most extreme cryptographic applications?
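    Not part of the original question, but for reference, a minimal sketch of deriving an AES key with PBKDF2 in Java (the PBKDF2WithHmacSHA1 algorithm name is standard in the JDK; the iteration count and salt handling here are illustrative assumptions):

        import java.security.SecureRandom;
        import javax.crypto.SecretKeyFactory;
        import javax.crypto.spec.PBEKeySpec;

        public class KeyDerivation {
            /** Derives a 128-bit key from a password and a per-file salt. */
            public static byte[] deriveKey(char[] password, byte[] salt) throws Exception {
                // 10000 iterations is an illustrative value; tune it to your threat model.
                PBEKeySpec spec = new PBEKeySpec(password, salt, 10000, 128);
                SecretKeyFactory factory = SecretKeyFactory.getInstance("PBKDF2WithHmacSHA1");
                return factory.generateSecret(spec).getEncoded();
            }

            /** Generates a random salt, to be stored in the clear next to the ciphertext. */
            public static byte[] newSalt() {
                byte[] salt = new byte[16];
                new SecureRandom().nextBytes(salt);
                return salt;
            }
        }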

  • Accessing Static Methods on a Generic class in c#

    - by mrlane
    Hello, I have the following situation in code, which I suspect may be a bit dodgy: I have a class:

        abstract class DataAccessBase<T> : IDataAccess where T : AnotherAbstractClass

    This class DataAccessBase also has a static factory method which creates instances of derived classes of itself, using an enum value in a switch statement to decide which derived type to create:

        static IDataAccess CreateInstance(TypeToCreateEnum)

    Now, the types derived from DataAccessBase<T> are themselves NOT generic; they specify a type for T:

        class PoLcZoneData : DataAccessBase<PoLcZone> // PoLcZone is derived from AnotherAbstractClass

    So far I am not sure if this is pushing the limits of good use of generics, but what I am really concerned about is how to access the static CreateInstance() method in the first place. The way I am doing this at the moment is to simply pass any type T where T : AnotherAbstractClass - in particular, I am passing AnotherAbstractClass itself. This compiles just fine, but it does seem to me that passing an arbitrary type to a generic class just to get at the statics is a bit dodgy. I have actually simplified the situation somewhat, as DataAccessBase<T> is the lowest level in the inheritance chain: the static factory methods exist in a middle tier, with classes such as PoLcZoneData being the most derived and the only level that is not generic. What are people's thoughts on this arrangement?

  • How to get a volume measurement of an iPhone recording in dB, with a limit of at least 120 dB

    - by Cyber
    Hello, I am trying to make a simple volume meter for the iPhone, and I want the volume displayed in dB. When using this tutorial, I am only getting measurements up to 78 dB. I've read that that is because the dBFS spectrum for 16 bit audio recordings is only 96 dB. I tried modifying this piece of code in the init function:

        dataFormat.mSampleRate = 44100.0f;
        dataFormat.mFormatID = kAudioFormatLinearPCM;
        dataFormat.mFramesPerPacket = 1;
        dataFormat.mChannelsPerFrame = 1;
        dataFormat.mBytesPerFrame = 2;
        dataFormat.mBytesPerPacket = 2;
        dataFormat.mBitsPerChannel = 16;
        dataFormat.mReserved = 0;

    I changed the value of mBitsPerChannel, hoping to increase the bit depth of the recording:

        dataFormat.mBitsPerChannel = 32;

    With that variable set to 32, the "mAveragePower" function returns only 0. So, how can I measure more decibels? All my code is practically the same as in the tutorial posted above. Thanks in advance, Thomas
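    Not part of the original question, but as a back-of-the-envelope check of that 96 dB figure: the dynamic range of n-bit linear PCM is roughly 20 * log10(2^n) dB. A tiny illustrative calculation (plain Java, just for the arithmetic):

        public class DynamicRange {
            public static void main(String[] args) {
                for (int bits : new int[] {16, 24, 32}) {
                    // 20 * log10(2^n) = dynamic range of n-bit linear PCM in dB
                    double db = 20.0 * Math.log10(Math.pow(2, bits));
                    System.out.printf("%d-bit PCM: ~%.1f dB%n", bits, db);
                }
            }
        }

    This prints roughly 96.3, 144.5 and 192.7 dB, which is why a higher bit depth is needed to raise the measurement ceiling.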

  • unable to place breakpoints in eclipse

    - by anjanb
    I am using Eclipse Europa (3.5) on Windows Vista Home Premium 64-bit, using JDK 1.6.0_18 (32 bit). Normally I am able to put breakpoints just fine. However, for a particular class which is NOT part of the project (this class is inside a .JAR file, and the .JAR file is part of the project), although I have attached a source directory to this .JAR file, I am unable to place a breakpoint in this class. If I double-click on the breakpoint pane (left border), I notice that a class breakpoint is placed. I was wondering if there was NO debug info; however, I found that this particular class was compiled using an ant/javac task with debug="true" and debuglevel="lines,vars,source". I even ran jad on this class to confirm that it indeed contained the debug info. So why is Eclipse preventing me from placing a breakpoint? EDIT: Just so everyone understands the context, this is a webapp running under Tomcat 6.0. I am remote debugging the application from Eclipse after having started Tomcat outside. The application is working just fine. I am trying to understand the behavior of the above class, which I'm unable to do since Eclipse is not letting me set a BP. P.S.: I saw a few threads here talking about BPs not being hit, but in my case I am unable to place the BP! P.P.S.: I tried JDK 1.6.0_16 before trying out 1.6.0_18. Thanks for any pointers.

  • Bootstrapper (setup.exe) says ".NET 3.5 not found" but launching .msi directly installs application

    - by Marek
    Our installer generates a bootstrapper (setup.exe) and an MSI file - a pretty common scenario. One of the production machines reports a strange problem during install: if the user launches the bootstrapper (setup.exe), it reports that .NET 3.5 is not installed. This happens with an account in the administrators group, and no matter whether they launch it as administrator or not - same behavior. The application installs fine when application.msi or OurInstallLauncher.exe (see below for explanation) is started directly, regardless of whether "run as administrator" is applied. We have checked that .NET is installed on the machine (both the 64-bit and 32-bit "versions": under both C:\Windows\Microsoft.NET\Framework64 and C:\Windows\Microsoft.NET\Framework there is a folder named v3.5). This happens on 64 bit Windows 7. I cannot reproduce it on my development 64 bit Windows 7. On Windows XP and Vista it has worked without any problem for a long time so far. The part of our build script that declares the GenerateBootstrapper task (nothing special):

        <ItemGroup>
          <BootstrapperFile Include="Microsoft.Windows.Installer.3.1">
            <ProductName>Microsoft Windows Installer 3.1</ProductName>
          </BootstrapperFile>
          <BootstrapperFile Include="Microsoft.Net.Framework.3.5">
            <ProductName>Microsoft .NET Framework 3.5</ProductName>
          </BootstrapperFile>
        </ItemGroup>

        <GenerateBootstrapper ApplicationFile=".\Files\OurInstallLauncher.exe"
                              ApplicationName="App name"
                              Culture="en"
                              ComponentsLocation="HomeSite"
                              CopyComponents="True"
                              Validate="True"
                              BootstrapperItems="@(BootstrapperFile)"
                              OutputPath="$(OutSubDir)"
                              Path="$(SdkBootstrapperPath)" />

    Note: OurInstallLauncher.exe is a language selector that applies a transform to the MSI based on the user's selection. This is not relevant to the question at all, because the installer never gets as far as launching this exe: it displays that .NET 3.5 is missing right after starting setup.exe. Has anyone seen this behavior before?
