Search Results

Search found 89880 results on 3596 pages for 'code sign'.


  • Pro/con of using Angular directives for complex form validation/ GUI manipulation

    - by tengen
    I am building a new SPA front end to replace an existing enterprise's legacy hodgepodge of systems that are outdated and in need of updating. I am new to angular, and wanted to see if the community could give me some perspective. I'll state my problem, and then ask my question. I have to generate several series of check boxes based on data from a .js include, with data like this: $scope.fieldMappings.investmentObjectiveMap = [ {'id':"CAPITAL PRESERVATION", 'name':"Capital Preservation"}, {'id':"STABLE", 'name':"Moderate"}, {'id':"BALANCED", 'name':"Moderate Growth"}, // etc {'id':"NONE", 'name':"None"} ]; The checkboxes are created using an ng-repeat, like this: <div ng-repeat="investmentObjective in fieldMappings.investmentObjectiveMap"> ... </div> However, I needed the values represented by the checkboxes to map to a different model (not just 2-way-bound to the fieldmappings object). To accomplish this, I created a directive, which accepts a destination array destarray which is eventually mapped to the model. I also know I need to handle some very specific gui controls, such as unchecking "None" if anything else gets checked, or checking "None" if everything else gets unchecked. Also, "None" won't be an option in every group of checkboxes, so the directive needs to be generic enough to accept a validation function that can fiddle with the checked state of the checkbox group's inputs based on what's already clicked, but smart enough not to break if there is no option called "NONE". I started to do that by adding an ng-click which invoked a function in the controller, but in looking around Stack Overflow, I read people saying that its bad to put DOM manipulation code inside your controller - it should go in directives. So do I need another directive? So far: (html): <input my-checkbox-group type="checkbox" fieldobj="investmentObjective" ng-click="validationfunc()" validationfunc="clearOnNone()" destarray="investor.investmentObjective" /> Directive code: .directive("myCheckboxGroup", function () { return { restrict: "A", scope: { destarray: "=", // the source of all the checkbox values fieldobj: "=", // the array the values came from validationfunc: "&" // the function to be called for validation (optional) }, link: function (scope, elem, attrs) { if (scope.destarray.indexOf(scope.fieldobj.id) !== -1) { elem[0].checked = true; } elem.bind('click', function () { var index = scope.destarray.indexOf(scope.fieldobj.id); if (elem[0].checked) { if (index === -1) { scope.destarray.push(scope.fieldobj.id); } } else { if (index !== -1) { scope.destarray.splice(index, 1); } } }); } }; }) .js controller snippet: .controller( 'SuitabilityCtrl', ['$scope', function ( $scope ) { $scope.clearOnNone = function() { // naughty jQuery DOM manipulation code that // looks at checkboxes and checks/unchecks as needed }; The above code is done and works fine, except the naughty jquery code in clearOnNone(), which is why I wrote this question. And here is my question: after ALL this, I think to myself - I could be done already if I just manually handled all this GUI logic and validation junk with jQuery written in my controller. At what point does it become foolish to write these complicated directives that future developers will have to puzzle over more than if I had just written jQuery code that 99% of us would understand with a glance? How do other developers draw the line? I see this all over Stack Overflow. 
For example, this question seems like it could be answered with a dozen lines of straightforward jQuery, yet he has opted to do it the angular way, with a directive and a partial... it seems like a lot of work for a simple problem. Specifically, I suppose I would like to know: how SHOULD I be writing the code that checks whether "None" has been selected (if it exists as an option in this group of checkboxes), and then check/uncheck the other boxes accordingly? A more complex directive? I can't believe I'm the only developer that is having to implement code that is more complex than needed just to satisfy an opinionated framework.
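
    For what it's worth, here is a rough sketch of how the "uncheck None when anything else is checked" rule could live inside the directive itself instead of a controller function, keeping the DOM work out of the controller. It reuses destarray/fieldobj and the "NONE" id from the question, assumes Angular 1.2+ for $watchCollection, and leaves out the reverse rule (re-checking None when the group empties):

        .directive("myCheckboxGroup", function () {
          return {
            restrict: "A",
            scope: { destarray: "=", fieldobj: "=" },
            link: function (scope, elem) {
              // keep this checkbox in sync with the model, so cross-box rules need no jQuery
              scope.$watchCollection('destarray', function (arr) {
                elem[0].checked = arr.indexOf(scope.fieldobj.id) !== -1;
              });
              elem.bind('click', function () {
                scope.$apply(function () {
                  var id = scope.fieldobj.id, arr = scope.destarray;
                  if (elem[0].checked) {
                    if (id === "NONE") {
                      arr.length = 0;                       // "None" clears everything else
                    } else {
                      var none = arr.indexOf("NONE");
                      if (none !== -1) { arr.splice(none, 1); }
                    }
                    if (arr.indexOf(id) === -1) { arr.push(id); }
                  } else {
                    var idx = arr.indexOf(id);
                    if (idx !== -1) { arr.splice(idx, 1); }
                  }
                });
              });
            }
          };
        })

    Because every checkbox in the group watches the same destination array, checking "None" unchecks the others purely through the model, which is the kind of separation the directive approach is supposed to buy.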

    Read the article

  • Tip #19 Module Private Visibility in OSGi

    - by ByronNevins
    I hate public and protected methods and classes. It requires so much work to change them in a huge project like GlassFish. Not to mention that you may well have to support those APIs forever. They are highly overused in GlassFish. In fact I'd bet that > 95% of classes are marked as public for no good reason. It's just (bad) habit, is my guess. Private and default visibility (I call it package-private) is easier to maintain. It is much, much easier to change such classes and methods around. If you have ANY public method or public class in GlassFish you'll need to grep through a tremendous amount of source code to find all callers. But even that won't be theoretically reliable. What if a caller is using reflection to access public methods? You may never find such usages. If you have package-private methods, it's easy. Simply grep through all the code in that one package. As long as that package compiles OK you're all set. There can't be any compile errors anywhere else. It's a waste of time to even look around or build the "outside" world. So you may be thinking: "Aha! I'll just make my module have one giant package with all the java files. Then I can use the default visibility and maintenance will be much easier." But there's a problem. You are wasting a very nice feature of Java -- organizing code into separate packages -- which also makes the code much more encapsulated. Unfortunately, to share code between the packages you have no choice but to declare public visibility. What happens in practice is that a module ends up having tons of public classes and methods that are used exclusively inside the module. Which finally brings me to the point of this blog: If Only There Was A Module-Private Visibility Available. Well, surprise! There is such a mechanism -- if your project is running under OSGi, that is. Like GlassFish does! With this mechanism you can easily add another level of visibility by telling OSGi exactly which publics you want exposed outside of the module. You get the best of both worlds: better encapsulation of your code, so that maintenance is easier and productivity is increased, and usage of public visibility inside the module, so that you can still organize the module's internals into packages. How I do this in GlassFish: carefully plan out at least one package that will contain the "true" publics. This is the package that will be exported by OSGi. I recommend just one package. Here is how to tell OSGi to use it in GlassFish -- edit osgi.bundle like so:

        -exportcontents: org.glassfish.mymodule.truepublics; version=${project.osgi.version}

    Now all publics declared in any other packages will be visible module-wide but not outside the module. There is one caveat: accessing "module-private" items outside of the module is controlled at run-time, not compile-time. The compiler has no clue that a public in a dependent module isn't really public. It will happily compile it. At runtime you will definitely see fireworks. The good news is that you don't have to wait for the code path that tries to use the "module-private" items to fire. OSGi will complain loudly when that module gets loaded. OSGi will refuse to load it.
You will see an error like this: remote failure: Error while loading FOO: Exception while adding the new configuration : Error occurred during deployment: Exception while loading the app : org.osgi.framework.BundleException: Unresolved constraint in bundle com.oracle.glassfish.miscreant.code [115]: Unable to resolve 115.0: missing requirement [115.0] osgi.wiring.package; (osgi.wiring.package=org.glassfish.mymodule.unexported). Please see server.log for more details. That is if you accidentally change code in module B to use a public that is really a "module-private" in module A, then you will see the error immediately when you try to test whatever you were changing in module B.
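
    As a sketch, the layout described above might look like this for a hypothetical module (the package and file names here are invented; only the -exportcontents line itself comes from the tip):

        src/org/glassfish/mymodule/truepublics/MyModuleApi.java    <- the real public API, exported by OSGi
        src/org/glassfish/mymodule/impl/MyModuleEngine.java        <- declared public, but usable only inside the module

        # osgi.bundle
        -exportcontents: org.glassfish.mymodule.truepublics; version=${project.osgi.version}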

    Read the article

  • Migrating VB6 to HTML5 is not a fiction - Customer success story

    - by Webgui
    VB developers, past or present, would probably find it hard to believe that old VB code can be migrated and modernized to the latest .NET-based HTML5 without rewriting the application. But we have been working on such tools for the past couple of years and already have several real-world applications that were fully 'transposed' from VB6. The solution is called Instant CloudMove and its main tool is the TranspositionStudio. It is a unique solution that relies on the concept of transposition. Transposition comes from mathematics and music and refers to exchanging elements while everything else remains the same, or moving an element as-is from one environment to another. This means that we take the source code and put it in a modern technological environment with relatively few adjustments. The concept is based on a set of Mapping Expressions, which are essentially links between an element in the source environment and one in the target environment that has the same functionality. About 95% of the code is usually mapped out of the box, and the rest is handled with easy-to-use mapping tools designed for Visual Studio developers, providing them with a familiar environment and concepts for completing the mapping and allowing them to extend and customize existing mapping expressions. The solution is also based on a circular workflow that lets developers make changes as required until the result is satisfactory. As opposed to existing migration solutions that offer automation but are usually a "black box" to the user, the transposition concept gives full visibility, flexibility and control over the code and the process at all times, making it possible to add or change functionality or upgrade the UI within the same process and tools. This is exactly the case with our customer's aging VB6 PMS (Property Management System), which needed a technological update as well as a design refresh. The decision was to move the VB6 application, about 1 million lines of code, to the latest web technology. Since the application was initially written 13 years ago and has been upgraded many times since, the code was patchy and included unused sections. As a result, the company, Mihshuv Group, considered rewriting the entire application in Java, since it already had that expertise in-house. A rewrite would allow starting with a clean slate and designing functionality, database architecture and UI without any constraints. On the other hand, a rewrite entails long and detailed specification work as well as thorough QA, and this translates into a long project with high risk and costs. So the company looked for a migration solution as an alternative; the research led to Gizmox, and after examining the technology it was decided to perform a hybrid project: an automatic transposition of the core of the VB6 application (200,000 lines of code), while redesigning the UI, adding new functionality, deleting unused code and rewriting about 140 Crystal Reports reports would be done manually using Visual WebGui development tools. The migration part of the project was completed in 65 days by 3 developers from Mihshuv Group guided by Gizmox migration experts, while the rewrite and UI upgrade tasks took about the same. So in only a few months Mihshuv Group produced an up-to-date product, written in the latest web technology, with a modern, friendly UI and improved functionality.
    [Screenshots: the guest selection screen of the original VB6 PMS, and the same screen on the new web-based PMS.] Compared to the initial plan to rewrite the entire application in Java, the hybrid migration/rewrite approach taken by Mihshuv Group using Gizmox technology proved to be a great decision. In terms of time and cost there were substantial savings; a project priced at a year or more (without even taking into account the huge risk and uncertainty) became a project of only a few months. More about this and other customer stories can be found here.

    Read the article

  • OpenGL Fast-Object Instancing Error

    - by HJ Media Studios
    I have some code that loops through a set of objects and renders instances of those objects. The list of objects that needs to be rendered is stored as a std::map, where an object of class MeshResource contains the vertices and indices with the actual data, and an object of classMeshRenderer defines the point in space the mesh is to be rendered at. My rendering code is as follows: glDisable(GL_BLEND); glEnable(GL_CULL_FACE); glDepthMask(GL_TRUE); glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT); glEnable(GL_DEPTH_TEST); for (std::map<MeshResource*, std::vector<MeshRenderer*> >::iterator it = renderables.begin(); it != renderables.end(); it++) { it->first->setupBeforeRendering(); cout << "<"; for (unsigned long i =0; i < it->second.size(); i++) { //Pass in an identity matrix to the vertex shader- used here only for debugging purposes; the real code correctly inputs any matrix. uniformizeModelMatrix(Matrix4::IDENTITY); /** * StartHere fix rendering problem. * Ruled out: * Vertex buffers correctly. * Index buffers correctly. * Matrices correct? */ it->first->render(); } it->first->cleanupAfterRendering(); } geometryPassShader->disable(); glDepthMask(GL_FALSE); glDisable(GL_CULL_FACE); glDisable(GL_DEPTH_TEST); The function in MeshResource that handles setting up the uniforms is as follows: void MeshResource::setupBeforeRendering() { glEnableVertexAttribArray(0); glEnableVertexAttribArray(1); glEnableVertexAttribArray(2); glEnableVertexAttribArray(3); glEnableVertexAttribArray(4); glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, iboID); glBindBuffer(GL_ARRAY_BUFFER, vboID); glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, sizeof(Vertex), 0); // Vertex position glVertexAttribPointer(1, 3, GL_FLOAT, GL_FALSE, sizeof(Vertex), (const GLvoid*) 12); // Vertex normal glVertexAttribPointer(2, 2, GL_FLOAT, GL_FALSE, sizeof(Vertex), (const GLvoid*) 24); // UV layer 0 glVertexAttribPointer(3, 3, GL_FLOAT, GL_FALSE, sizeof(Vertex), (const GLvoid*) 32); // Vertex color glVertexAttribPointer(4, 1, GL_UNSIGNED_SHORT, GL_FALSE, sizeof(Vertex), (const GLvoid*) 44); //Material index } The code that renders the object is this: void MeshResource::render() { glDrawElements(GL_TRIANGLES, geometry->numIndices, GL_UNSIGNED_SHORT, 0); } And the code that cleans up is this: void MeshResource::cleanupAfterRendering() { glDisableVertexAttribArray(0); glDisableVertexAttribArray(1); glDisableVertexAttribArray(2); glDisableVertexAttribArray(3); glDisableVertexAttribArray(4); } The end result of this is that I get a black screen, although the end of my rendering pipeline after the rendering code (essentially just drawing axes and lines on the screen) works properly, so I'm fairly sure it's not an issue with the passing of uniforms. If, however, I change the code slightly so that the rendering code calls the setup immediately before rendering, like so: void MeshResource::render() { setupBeforeRendering(); glDrawElements(GL_TRIANGLES, geometry->numIndices, GL_UNSIGNED_SHORT, 0); } The program works as desired. I don't want to have to do this, though, as my aim is to set up vertex, material, etc. data once per object type and then render each instance updating only the transformation information. The uniformizeModelMatrix works as follows: void RenderManager::uniformizeModelMatrix(Matrix4 matrix) { glBindBuffer(GL_UNIFORM_BUFFER, globalMatrixUBOID); glBufferSubData(GL_UNIFORM_BUFFER, 0, sizeof(Matrix4), matrix.ptr()); glBindBuffer(GL_UNIFORM_BUFFER, 0); }
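
    As an aside, one way to make the per-object setup stick without re-running it before every draw call, assuming a GL 3.x context is available, is to capture the attribute pointers and the index-buffer binding in a vertex array object once at load time, and have setupBeforeRendering simply bind that VAO. This is a sketch of the idea (vaoID would be a new GLuint member of MeshResource), not a diagnosis of the black screen:

        // done once, e.g. when the MeshResource is loaded
        glGenVertexArrays(1, &vaoID);
        glBindVertexArray(vaoID);
        glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, iboID);   // the element buffer binding is stored in the VAO
        glBindBuffer(GL_ARRAY_BUFFER, vboID);
        glEnableVertexAttribArray(0);
        glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, sizeof(Vertex), 0);
        // ... enable and set the remaining attributes exactly as in setupBeforeRendering ...
        glBindVertexArray(0);

        // per frame
        void MeshResource::setupBeforeRendering()  { glBindVertexArray(vaoID); }
        void MeshResource::cleanupAfterRendering() { glBindVertexArray(0); }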

    Read the article

  • How to be Agile when new work keeps affecting completed work?

    - by jdln
    The project I'm working on is to re-skin an existing website. The functionality will stay the same; it's just the styles that are changing. The HTML is not changing, I'm only modifying the CSS files. The site is pretty complex. There are dozens of pages. Users can be logged in and have a number of different roles. Depending on their role, the content of the page and which pages they are allowed to see vary. We're using Git and GitHub. I'm trying to write CSS that works as components. So when the same form elements, headings, etc. appear on multiple pages they are already styled and are consistent. Most of the time this is working well. Sadly the format and class names in the HTML are at times messy and unpredictable. When I fix something on one page it can break another. The job is also harder as no one knows exactly all the variations that are possible due to the user roles. As such I'm continuously finding new variations as I go along. I'm making headway by putting a lot of comments in my CSS. If I need to remove a CSS rule I'll comment it out so I can still see it with the Chrome dev tools, and I'll put a comment in the CSS saying why I removed it and for what page this was done. This means that if on another page I'm about to add the rule to fix a different problem, there is more of a chance I will see how this would break the first page. This allows me to either find a different solution that will work for both pages, or I can make the override page specific. This has been working quite well for me. If I had complete free rein and the only deadline was to finish the project by the end then this method would be fine. However my manager is trying to mitigate risk by breaking the work into areas to be completed per sprint. This is counter to how I have been approaching things, as something like my typography styles will affect all other pages on the site. The other issue is that the different stakeholders want to sign off each section as I go along. However once I've finished a section it may change if I change CSS that affects it and also affects a new section I'm working on. I've asked that the stakeholders have a quick unofficial sign off in stages (e.g. per sprint), and have the final official sign off at the end of the project, but this is being met with resistance. I do understand why it would be higher risk to do this, but the only way to guarantee that a signed-off section will not change is to make ALL future changes page specific. In addition to this I'm being told that all work that I push to the Git repo should be ready to go live, and as such should not contain any code comments. This is risky for me as I won't know until I've finished the site if I will ever benefit from these comments or not. Has anyone else been in a similar situation and managed to find a compromise that worked for my development approach and also the desires of management and stakeholders to have a more Agile approach? A more Agile workflow works great when you can break the work into components and know that once something is done it won't be affected by future work. However the nature of this project makes this hard to achieve.
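
    For what it's worth, the page-specific override mentioned above could look something like this (the class names are invented for illustration):

        /* component default, shared by every page */
        .form-heading { margin-bottom: 1em; }

        /* override scoped to one page via a class on <body>, so already signed-off pages stay untouched */
        .page-account .form-heading { margin-bottom: 0; }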

    Read the article

  • Plagued by multithreaded bugs

    - by koncurrency
    On my new team that I manage, the majority of our code is platform, TCP socket, and http networking code. All C++. Most of it originated from other developers that have left the team. The current developers on the team are very smart, but mostly junior in terms of experience. Our biggest problem: multi-threaded concurrency bugs. Most of our class libraries are written to be asynchronous by use of some thread pool classes. Methods on the class libraries often enqueue long running taks onto the thread pool from one thread and then the callback methods of that class get invoked on a different thread. As a result, we have a lot of edge case bugs involving incorrect threading assumptions. This results in subtle bugs that go beyond just having critical sections and locks to guard against concurrency issues. What makes these problems even harder is that the attempts to fix are often incorrect. Some mistakes I've observed the team attempting (or within the legacy code itself) includes something like the following: Common mistake #1 - Fixing concurrency issue by just put a lock around the shared data, but forgetting about what happens when methods don't get called in an expected order. Here's a very simple example: void Foo::OnHttpRequestComplete(statuscode status) { m_pBar->DoSomethingImportant(status); } void Foo::Shutdown() { m_pBar->Cleanup(); delete m_pBar; m_pBar=nullptr; } So now we have a bug in which Shutdown could get called while OnHttpNetworkRequestComplete is occuring on. A tester finds the bug, captures the crash dump, and assigns the bug to a developer. He in turn fixes the bug like this. void Foo::OnHttpRequestComplete(statuscode status) { AutoLock lock(m_cs); m_pBar->DoSomethingImportant(status); } void Foo::Shutdown() { AutoLock lock(m_cs); m_pBar->Cleanup(); delete m_pBar; m_pBar=nullptr; } The above fix looks good until you realize there's an even more subtle edge case. What happens if Shutdown gets called before OnHttpRequestComplete gets called back? The real world examples my team has are even more complex, and the edge cases are even harder to spot during the code review process. Common Mistake #2 - fixing deadlock issues by blindly exiting the lock, wait for the other thread to finish, then re-enter the lock - but without handling the case that the object just got updated by the other thread! Common Mistake #3 - Even though the objects are reference counted, the shutdown sequence "releases" it's pointer. But forgets to wait for the thread that is still running to release it's instance. As such, components are shutdown cleanly, then spurious or late callbacks are invoked on an object in an state not expecting any more calls. There are other edge cases, but the bottom line is this: Multithreaded programming is just plain hard, even for smart people. As I catch these mistakes, I spend time discussing the errors with each developer on developing a more appropriate fix. But I suspect they are often confused on how to solve each issue because of the enormous amount of legacy code that the "right" fix will involve touching. We're going to be shipping soon, and I'm sure the patches we're applying will hold for the upcoming release. Afterwards, we're going to have some time to improve the code base and refactor where needed. We won't have time to just re-write everything. And the majority of the code isn't all that bad. But I'm looking to refactor code such that threading issues can be avoided altogether. One approach I am considering is this. 
For each significant platform feature, have a dedicated single thread where all events and network callbacks get marshalled onto. Similar to COM apartment threading in Windows with use of a message loop. Long blocking operations could still get dispatched to a work pool thread, but the completion callback is invoked on on the component's thread. Components could possibly even share the same thread. Then all the class libraries running inside the thread can be written under the assumption of a single threaded world. Before I go down that path, I am also very interested if there are other standard techniques or design patterns for dealing with multithreaded issues. And I have to emphasize - something beyond a book that describes the basics of mutexes and semaphores. What do you think? I am also interested in any other approaches to take towards a refactoring process. Including any of the following: Literature or papers on design patterns around threads. Something beyond an introduction to mutexes and semaphores. We don't need massive parallelism either, just ways to design an object model so as to handle asynchronous events from other threads correctly. Ways to diagram the threading of various components, so that it will be easy to study and evolve solutions for. (That is, a UML equivalent for discussing threads across objects and classes) Educating your development team on the issues with multithreaded code. What would you do?
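
    To make the dispatcher idea above concrete, here is a minimal sketch of a per-component thread that callbacks get marshalled onto, assuming C++11 is available (with an older toolchain the same shape can be built on the existing thread-pool classes or pthreads):

        #include <condition_variable>
        #include <deque>
        #include <functional>
        #include <mutex>
        #include <thread>

        // All callbacks for a component are posted here and run on one thread,
        // so the component's classes can be written as if single-threaded.
        class ComponentThread {
        public:
            ComponentThread() : stop_(false), worker_(&ComponentThread::Run, this) {}
            ~ComponentThread() {
                { std::lock_guard<std::mutex> lk(m_); stop_ = true; }
                cv_.notify_one();
                worker_.join();
            }
            void Post(std::function<void()> task) {
                { std::lock_guard<std::mutex> lk(m_); q_.push_back(std::move(task)); }
                cv_.notify_one();
            }
        private:
            void Run() {
                for (;;) {
                    std::function<void()> task;
                    {
                        std::unique_lock<std::mutex> lk(m_);
                        cv_.wait(lk, [this] { return stop_ || !q_.empty(); });
                        if (stop_ && q_.empty()) return;
                        task = std::move(q_.front());
                        q_.pop_front();
                    }
                    task();   // e.g. Foo::OnHttpRequestComplete marshalled onto this thread
                }
            }
            std::mutex m_;
            std::condition_variable cv_;
            std::deque<std::function<void()>> q_;
            bool stop_;
            std::thread worker_;
        };

    In that shape, Shutdown becomes just another task posted to the same queue, so it can never interleave with OnHttpRequestComplete on the same component.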

    Read the article

  • Why is 0 false?

    - by Morwenn
    This question may sound dumb, but why does 0 evaluates to false and any other [integer] value to true is most of programming languages? String comparison Since the question seems a little bit too simple, I will explain myself a little bit more: first of all, it may seem evident to any programmer, but why wouldn't there be a programming language - there may actually be, but not any I used - where 0 evaluates to true and all the other [integer] values to false? That one remark may seem random, but I have a few examples where it may have been a good idea. First of all, let's take the example of strings three-way comparison, I will take C's strcmp as example: any programmer trying C as his first language may be tempted to write the following code: if (strcmp(str1, str2)) { // Do something... } Since strcmp returns 0 which evaluates to false when the strings are equal, what the beginning programmer tried to do fails miserably and he generally does not understand why at first. Had 0 evaluated to true instead, this function could have been used in its most simple expression - the one above - when comparing for equality, and the proper checks for -1 and 1 would have been done only when needed. We would have considered the return type as bool (in our minds I mean) most of the time. Moreover, let's introduce a new type, sign, that just takes values -1, 0 and 1. That can be pretty handy. Imagine there is a spaceship operator in C++ and we want it for std::string (well, there already is the compare function, but spaceship operator is more fun). The declaration would currently be the following one: sign operator<=>(const std::string& lhs, const std::string& rhs); Had 0 been evaluated to true, the spaceship operator wouldn't even exist, and we could have declared operator== that way: sign operator==(const std::string& lhs, const std::string& rhs); This operator== would have handled three-way comparison at once, and could still be used to perform the following check while still being able to check which string is lexicographically superior to the other when needed: if (str1 == str2) { // Do something... } Old errors handling We now have exceptions, so this part only applies to the old languages where no such thing exist (C for example). If we look at C's standard library (and POSIX one too), we can see for sure that maaaaany functions return 0 when successful and any integer otherwise. I have sadly seen some people do this kind of things: #define TRUE 0 // ... if (some_function() == TRUE) { // Here, TRUE would mean success... // Do something } If we think about how we think in programming, we often have the following reasoning pattern: Do something Did it work? Yes -> That's ok, one case to handle No -> Why? Many cases to handle If we think about it again, it would have made sense to put the only neutral value, 0, to yes (and that's how C's functions work), while all the other values can be there to solve the many cases of the no. However, in all the programming languages I know (except maybe some experimental esotheric languages), that yes evaluates to false in an if condition, while all the no cases evaluate to true. There are many situations when "it works" represents one case while "it does not work" represents many probable causes. If we think about it that way, having 0 evaluate to true and the rest to false would have made much more sense. 
Conclusion My conclusion is essentially my original question: why did we design languages where 0 is false and the other values are true, taking in account my few examples above and maybe some more I did not think of? Follow-up: It's nice to see there are many answers with many ideas and as many possible reasons for it to be like that. I love how passionate you seem to be about it. I originaly asked this question out of boredom, but since you seem so passionate, I decided to go a little further and ask about the rationale behind the Boolean choice for 0 and 1 on Math.SE :)
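
    To make the strcmp pitfall above concrete, a tiny sketch (plain C, nothing beyond the standard library):

        #include <stdio.h>
        #include <string.h>

        int main(void) {
            const char *a = "apple", *b = "apple";
            if (strcmp(a, b)) {              /* wrong: 0 means "equal" but is falsy, so equal strings skip this */
                puts("strings differ");
            }
            if (strcmp(a, b) == 0) {         /* right: compare the three-way result against 0 explicitly */
                puts("strings are equal");
            }
            return 0;
        }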

    Read the article

  • What is the coolest thing you can do in <10 lines of simple code? Help me inspire beginners!

    - by Tom Ritter
    I'm looking for the coolest thing you can do in a few lines of simple code. I'm sure you can write a Mandelbrot set in Haskell in 15 lines but it's difficult to follow. My goal is to inspire students that programming is cool. We know that programming is cool because you can create anything you imagine - it's the ultimate creative outlet. I want to inspire these beginners and get them over as many early-learning humps as I can. Now, my reasons are selfish. I'm teaching an Intro to Computing course to a group of 60 half-engineering, half business majors; all freshmen. They are the students who came from underprivileged High schools. From my past experience, the group is generally split as follows: a few rock-stars, some who try very hard and kind of get it, the few who try very hard and barely get it, and the few who don't care. I want to reach as many of these groups as effectively as I can. Here's an example of how I'd use a computer program to teach: Here's an example of what I'm looking for: a 1-line VBS script to get your computer to talk to you: CreateObject("sapi.spvoice").Speak InputBox("Enter your text","Talk it") I could use this to demonstrate order of operations. I'd show the code, let them play with it, then explain that There's a lot going on in that line, but the computer can make sense of it, because it knows the rules. Then I'd show them something like this: 4(5*5) / 10 + 9(.25 + .75) And you can see that first I need to do is (5*5). Then I can multiply for 4. And now I've created the Object. Dividing by 10 is the same as calling Speak - I can't Speak before I have an object, and I can't divide before I have 100. Then on the other side I first create an InputBox with some instructions for how to display it. When I hit enter on the input box it evaluates or "returns" whatever I entered. (Hint: 'oooooo' makes a funny sound) So when I say Speak, the right side is what to Speak. And I get that from the InputBox. So when you do several things on a line, like: x = 14 + y; You need to be aware of the order of things. First we add 14 and y. Then we put the result (what it evaluates to, or returns) into x. That's my goal, to have a bunch of these cool examples to demonstrate and teach the class while they have fun. I tried this example on my roommate and while I may not use this as the first lesson, she liked it and learned something. Some cool mathematica programs that make beautiful graphs or shapes that are easy to understand would be good ideas and I'm going to look into those. Here are some complicated actionscript examples but that's a bit too advanced and I can't teach flash. What other ideas do you have?

    Read the article

  • Dismount USB External Drive using powershell

    - by JC
    Hello, I am attempting to dismount an external USB drive using PowerShell and I cannot successfully do this. The following script is what I use:

        # get the Win32_Volume object representing the volume I wish to eject
        $drive = Get-WmiObject Win32_Volume -Filter "DriveLetter = 'F:'"
        # call Dismount on that object, thereby ejecting the drive
        $drive.Dismount($Force, $Permanent)

    I then check my computer to see if the drive is unmounted, but it is not. The boolean parameters $Force and $Permanent have been tried in different permutations to no avail. The return code of the Dismount call changes when the parameters are toggled: (0,0) = 0, (0,1) = 2, (1,0) = 0, (1,1) = 2. The documentation for return code 2 indicates that existing mount points are the reason it cannot dismount, yet I am trying to dismount the only mount point that exists, so I am unsure what this code is trying to tell me. Having already trawled the web for people experiencing similar problems, I have only found one additional command to try, which is the following:

        # executed after the .Dismount() command
        $drive.Put()

    This additional command does not help. I am running out of things to try, so any assistance anyone can give me would be greatly appreciated. Thanks.
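
    One alternative worth trying, offered as a sketch rather than a guaranteed fix: the Shell COM object exposes an "Eject" verb for drives, which goes through the same path as the "Safely Remove Hardware" UI. The drive letter F: is assumed from the question:

        # eject the volume via the Shell COM object (17 = ssfDRIVES, the "My Computer" folder)
        $shell = New-Object -ComObject Shell.Application
        $shell.Namespace(17).ParseName("F:\").InvokeVerb("Eject")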

    Read the article

  • Registry remotely hacked (Win 7) - need help tracking the perp

    - by user577229
    I was writing some .VBS code at the office that would allow certain file extensions to be downloaded without a warning dialog on a W7 x32 system. The system I was writing this on is in a lab on a segmented subnet. All web access is via a proxy server. The only means of accessing my machine is via the internet or from within the lab's MSFT AD domain. While writing and testing my code I found a message of sorts. Upon refreshing the registry to verify that my code had changed a dword, instead the message HELLO was written and visible in regedit where the dword value was called for. I took a screenshot and proceeded to edit my code. This same weird behavior occurred the last time I was writing registry code, except on another internal server. I understand that remote registry access exists for Windows systems. I will block this immediately once I return to the office. What I want to know is: can I trace who made this connection? How would I do this? I suspect the cause of this is the cause of other "odd" behaviors I'm experiencing at work, such as losing control of my Input Director master control for over an hour, and unchanged code that all of a sudden fails for no logical reason. These failures occur at funny times, whenever I'm about to give a demonstration of my test code. I know this sounds crazy, however knowledge of the registry component makes this believable. Once the registry can be accessed, the entire system is compromised. Any help or sanity checking is appreciated.
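
    On the tracing question, one hedged suggestion: Windows 7 can audit registry writes once object-access auditing is enabled and a SACL is set on the key, after which changes show up in the Security event log (event 4657, "A registry value was modified") together with the account used and, for remote access, a matching logon event identifying the source. A sketch of the pieces involved:

        rem enable the audit subcategory (elevated prompt)
        auditpol /set /subcategory:"Registry" /success:enable /failure:enable

        rem then, in regedit: right-click the key of interest -> Permissions -> Advanced -> Auditing
        rem and add an entry (e.g. Everyone) with "Set Value" selected

        rem and to block the vector going forward:
        sc stop RemoteRegistry
        sc config RemoteRegistry start= disabled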

    Read the article

  • Why is there an extra HDD under /dev being added in my Linux Kernel?

    - by user1279156
    I have created a Linux kernel and for some reason an extra drive is always added at bootup. My hard drive is listed as /dev/sdb. /dev/sda is created too, and it is 8 MB in size. I can't find anything in the kernel config that is creating this, but if I use a different kernel it is not there. Kernel logs show it as an attached SCSI device, looks just like my hard drive but only 8 MB, and has no partition table. It also doesn't appear to be a physical device. I've tried the kernel on many different models of PCs and it is always there. Does anyone know how to remove it? /dev/disk/by-id gives me: scsi-1AMCC_U21413034D98EB000584 scsi-1AMCC_U21413034D98EB000584-part1 scsi-353333330000007d0 scsi-SATA_ST3250312AS_5VY7SH42 scsi-SATA_WDC_WD800JD-60L_WD-WMAM9Y085675 scsi-SATA_WDC_WD800JD-60L_WD-WMAM9Y085675-part1 scsi-SATA_WDC_WD800JD-60L_WD-WMAM9Y085675-part2 hdparm -i /dev/sda gives me an "invalid argument". dd if=/dev/sda of=sda.img the resulting file does not have any content sdparm results: /dev/sda: Linux scsi_debug 0004 Device identification VPD page: Addressed logical unit: designator type: T10 vendor identification, code set: ASCII vendor id: Linux vendor specific: scsi_debug 2000 designator type: NAA, code set: Binary 0x53333330000007d0 Target port: designator type: Relative target port, code set: Binary transport: Serial Attached SCSI (SAS) Relative target port: 0x1 designator type: NAA, code set: Binary transport: Serial Attached SCSI (SAS) 0x52222220000007ce designator type: Target port group, code set: Binary transport: Serial Attached SCSI (SAS) Target port group: 0x100 Target device that contains addressed lu: designator type: NAA, code set: Binary transport: Serial Attached SCSI (SAS) 0x52222220000007cd designator type: SCSI name string, code set: UTF-8 transport: Serial Attached SCSI (SAS) SCSI name string: naa.52222220000007CD
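
    A hedged observation that may explain it: the sdparm output above identifies the 8 MB device as "Linux scsi_debug", which is the kernel's simulated-SCSI-disk driver (its default size is 8 MB), so the likely culprit is CONFIG_SCSI_DEBUG being enabled in the custom kernel rather than any real hardware. A quick way to check and get rid of it:

        # see whether the driver is built in or a module
        grep SCSI_DEBUG /boot/config-$(uname -r)      # or: zcat /proc/config.gz | grep SCSI_DEBUG

        # if it is a module, removing it makes the fake disk disappear
        modprobe -r scsi_debug

        # otherwise, rebuild the kernel with the option turned off in .config:
        # CONFIG_SCSI_DEBUG is not set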

    Read the article

  • Faster caching method

    - by pataroulis
    I have a service that provides HTML code which at some point it is not updated anymore. The code is always generated dynamically from a database with 10 million entries so each HTML code page rendering searches there for say 60 or 70 of those entries and then renders the page. So, for those expired pages, I want to use a caching system which will be VERY simple (like just enter a record with the rendered HTML and (if I need) remove it). I tried to do it file-based but the search for the existence of a file and then passing it through php to actually render it , seems like too much for what I want to do. I was thinking of doing it on mysql with a table with MEDIUMBLOBs (each page is around 100k). It would hold about 150000 such records (for now, at least). My question is: Would it be faster to let mysql do the lookup of the file and the passing to php or is the file-based approach faster? The lookup code for the file based version looks like this: $page = @file_get_contents(getCacheFilename($pageId)); if($page!=NULL) { echo $page; } else { renderAndCachePage($pageId); } which does one lookup whether it finds the file or not. The mysql table would just have an ID (the page id) and the blob entry. The disk of the system is a simple SATA raid 1 , the mysql daemon can grab up to 2.5GB of memory (i have a proxy running too, eating the rest of the 16GB of the machine. ) In general the disk is quite busy already. My not using PEAR cache, is because I think (please feel free to correct me on this) it adds overhead I do not need because the page rendering code is called about 2M times per day and I wouldn't want to go through the whole code each time (and yes, I have eaccelerator to cache the code too). Any pointer to what direction I should go, would be greatly welcome. Thanks!
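
    For comparison, a sketch of the MySQL-backed lookup the question describes (the table and column names are invented; the schema would just be a page-id primary key plus a MEDIUMBLOB for the rendered HTML):

        <?php
        // sketch: fetch the cached page by primary key, render and cache on a miss
        $res = mysql_query('SELECT html FROM page_cache WHERE page_id = '
                           . intval($pageId) . ' LIMIT 1');
        if ($res && ($row = mysql_fetch_assoc($res))) {
            echo $row['html'];          // single indexed lookup, no filesystem stat
        } else {
            renderAndCachePage($pageId);
        }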

    Read the article

  • Cannot find protocol declaration in Xcode

    - by edie
    Hi. I ran into something strange today while building my app. I declared a protocol and want to assign a delegate for it, so I adopted the protocol on the class that will implement the protocol's methods, in the usual way:

        @interface MyObject : UIViewController <NameOfDelegate>

    But Xcode says the protocol declaration cannot be found, even though I have checked my code and the protocol is declared. I also want to assign MyObject as the delegate of another object, so I edited my code like this:

        @interface MyObject : UIViewController <UITableViewDelegate, NameOfDelegate>

    but Xcode again says it cannot find the declaration of the protocol NameOfDelegate. I then tried removing NameOfDelegate and adopting only the built-in protocols, and no errors were found:

        @interface MyObject : UIViewController <UITableViewDelegate, UITabBarDelegate>

    Then I added NameOfDelegate back:

        @interface MyObject : UIViewController <UITableViewDelegate, UITabBarDelegate, NameOfDelegate>

    and this time Xcode did not report any error. So I removed UITableViewDelegate and UITabBarDelegate again:

        @interface MyObject : UIViewController <NameOfDelegate>

    and now no error is reported, even though this is the same code I had written before. What could be the cause of this behavior in my code? Thanks.
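
    A sketch of one common cause, assuming the protocol lives in its own header: if the header that declares NameOfDelegate and MyObject.h import each other (directly or through a prefix header), the compiler can reach MyObject's @interface before it has seen the full protocol definition, and whether the error shows up then depends on include order - which would explain the on-again, off-again behavior. Adopting a protocol requires its full definition to be visible, so the usual fix is to import the declaring header in MyObject.h and break any cycle with forward declarations in the protocol header. File and method names below are hypothetical:

        // NameOfDelegate.h
        @class SomeModel;                       // forward-declare classes here instead of importing their headers
        @protocol NameOfDelegate <NSObject>
        - (void)somethingDidFinish:(SomeModel *)sender;   // hypothetical method
        @end

        // MyObject.h
        #import <UIKit/UIKit.h>
        #import "NameOfDelegate.h"              // full protocol definition must be visible here
        @interface MyObject : UIViewController <NameOfDelegate>
        @end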

    Read the article

  • Using QTDesigner with PyQT and Python 2.6

    - by PyNewbie27
    Hi. I'm fairly new to Python and trying to work with the latest versions of Qt Designer, PyQt 4.7 and Qt 4.7 (I downloaded the whole package from the PyQt 4.7 website). I can't figure out how to make Qt Designer integrate closely with Python: i.e. if I select "Form" > "View Code" in Qt Designer's menu, it errors out saying "Unable to launch C:/Python26/Lib/site-packages/PyQT4/bin\uic." If I look in that directory there is a pyuic.py but no "uic". From searching online it seems this doesn't exist because Designer is expecting a C++ install instead of the Python version. Is there any way to make Qt Designer use/call pyuic.py to generate the code, then open an IDE or text editor of my choice to show me the Python code generated by the Qt Designer-pyuic chain? I'd like Designer to integrate closely with Python, so I can make custom slots/signals in Designer while designing, then tweak the Python code directly in my IDE later. If it is not possible to code directly inside Qt Designer using Python, does that mean I have to hand-code my program's entire UI directly in my Python IDE? Using Designer directly would be very nice for a newbie such as myself, since I can see what properties each widget has and visually edit them while still learning the Qt syntax, without constantly having to use web resources to see what properties each widget should have and what their defaults are; it also helps with boilerplate code generation. I've googled and nobody seems to be using Qt Designer and Python in this manner together. It seems most are either hand-coding all the Qt code in their Python IDE of choice, or have found an obvious/easy method of doing what I want and therefore aren't really producing up-to-date tutorials on making this work together. Please enlighten me if you can. Thanks in advance for your time. Please include any suggestions you might have for a newbie trying to use Python with Qt and Qt Designer. Thank you.
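
    A sketch of the usual workflow, under the assumption that the PyQt4 tools are on the PATH (the file names are made up): Designer's "View Code" button only knows about the C++ uic, but the .ui file it saves can be turned into Python with pyuic4, or loaded directly at runtime with PyQt4.uic.

        # from a command prompt: generate Python from the Designer file
        #   pyuic4 -x mainwindow.ui -o ui_mainwindow.py     (-x adds a runnable __main__ block)

        # or skip code generation entirely and load the .ui at runtime
        import sys
        from PyQt4 import QtGui, uic

        app = QtGui.QApplication(sys.argv)
        window = uic.loadUi("mainwindow.ui")   # builds the widget tree described by the .ui file
        window.show()
        sys.exit(app.exec_())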

    Read the article

  • JSF Page does not submit when onclick javascript is added to menu item?

    - by Padmarag
    I show some detail using popup windows. I want to close those when the user clicks on sign-out link. I have a JavaScript function that'll close the windows. The sign-out link is rendered using Navigation MenuModel. The definition in faces-config is as below - <managed-bean> <managed-bean-name>signoutNavigation</managed-bean-name> <managed-bean-class>com.xxx.xxx.framework.NavigationItem</managed-bean-class> <managed-bean-scope>none</managed-bean-scope> <managed-property> <property-name>label</property-name> <value>Sign Out</value> </managed-property> <managed-property> <property-name>viewId</property-name> <value>/signout.jsp</value> </managed-property> <managed-property> <property-name>outcome</property-name> <value>signout</value> </managed-property> <managed-property> <property-name>onclick</property-name> <value>closeOrderWindows()</value> </managed-property> </managed-bean> The problem is when I use the "onclick" property on managed-bean, the page doesn't submit to "signout.jsp" and remains on same page. When I remove/comment the "onclick" part, the page gets submitted properly. I use MyFaces Trinidad.
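
    One hedged thing to check: when a command component renders an onclick, the handler usually runs ahead of the component's own submit script, so a JavaScript error or a false return value from closeOrderWindows() can stop the navigation to signout.jsp. Making the helper fail-safe, roughly like this (openWindows is a hypothetical array in which the popups were registered when they were opened), keeps the sign-out submit going even if a popup reference has gone stale:

        function closeOrderWindows() {
            try {
                for (var i = 0; i < openWindows.length; i++) {
                    if (openWindows[i] && !openWindows[i].closed) {
                        openWindows[i].close();
                    }
                }
            } catch (e) {
                // ignore: never block the sign-out click because a popup is already gone
            }
            return true;
        }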

    Read the article

  • JPA Lookup Hierarchy Mapping

    - by Andy Trujillo
    Given a lookup table: | ID | TYPE | CODE | DESCRIPTION | | 1 | ORDER_STATUS | PENDING | PENDING DISPOSITION | | 2 | ORDER_STATUS | OPEN | AWAITING DISPOSITION | | 3 | OTHER_STATUS | OPEN | USED BY OTHER ENTITY | If I have an entity: @MappedSuperclass @Table(name="LOOKUP") @Inheritance(strategy=InheritanceType.SINGLE_TABLE) @DiscriminatorColumn(name="TYPE", discriminatorType=DiscriminatorType.STRING) public abstract class Lookup { @Id @Column(name="ID") int id; @Column(name="TYPE") String type; @Column(name="CODE") String code; @Column(name="DESC") String description; ... } Then a subclass: @Entity @DiscriminatorValue("ORDER_STATUS") public class OrderStatus extends Lookup { } The expectation is to map it: @Entity @Table(name="ORDERS") public class OrderVO { ... @ManyToOne @JoinColumn(name="STATUS", referencedColumnName="CODE") OrderStatus status; ... } So that the CODE is stored in the database. I expected that if I wrote: OrderVO o = entityManager.find(OrderVO.class, 123); the SQL generated using OpenJPA would look something like: SELECT t0.id, ..., t1.CODE, t1.TYPE, t1.DESC FROM ORDERS t0 LEFT OUTER JOIN LOOKUP t1 ON t0.STATUS = t1.CODE AND t1.TYPE = 'ORDER_STATUS' WHERE t0.ID = ? But the actual SQL that gets generated is missing the AND t1.TYPE = 'ORDER_STATUS' This causes a problem when the CODE is not unique. Is there a way to have the discriminator included on joins or am I doing something wrong here?
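
    A hedged guess at one contributing factor, with a sketch: a class can't really be both a @MappedSuperclass and the root of a SINGLE_TABLE hierarchy, and its TYPE column is mapped twice (once as the discriminator, once as a plain field). Making Lookup an abstract @Entity is the usual shape; whether the provider then adds the discriminator predicate to a join on a non-primary-key column such as CODE still varies by provider, so this may not be the whole story:

        import javax.persistence.*;

        // Lookup as an abstract entity, not a mapped superclass
        @Entity
        @Table(name = "LOOKUP")
        @Inheritance(strategy = InheritanceType.SINGLE_TABLE)
        @DiscriminatorColumn(name = "TYPE", discriminatorType = DiscriminatorType.STRING)
        public abstract class Lookup {
            @Id
            @Column(name = "ID")
            int id;

            @Column(name = "CODE")
            String code;

            @Column(name = "DESC")
            String description;

            // note: TYPE is now only the discriminator column, so it is not mapped as a field
        }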

    Read the article

  • Spring HandlerInterceptor or Spring Security to protect resource

    - by richever
    I've got a basic Spring Security 3 set up using my own login page. My configuration is below. I have the login and sign up page accessible to all as well as most everything else. I'm new to Spring Security and understand that if a user is trying to access a protected resource they will be taken to the defined login page. And upon successful login they are taken to some other page, home in my case. I want to keep the latter behavior; however, I'd like specify that if a user tries to access certain resources they are taken to the sign up page, not the login page. Currently, in my annotated controllers I check the security context to see if the user is logged in and if not I redirect them to the sign up page. I only do this currently with two urls and no others. This seemed redundant so I tried creating a HandlerInterceptor to redirect for these requests but realized that with annotations, you can't specify specific requests to be handled - they all are. So I'm wondering if there is some way to implement this type of specific url handling in Spring Security, or is going the HandlerInterceptor route my only option? Thanks! <http auto-config="true" use-expressions="true"> <intercept-url pattern="/login*" access="permitAll"/> <intercept-url pattern="/signup*" access="permitAll"/> <intercept-url pattern="/static/**" filters="none" /> <intercept-url pattern="/" access="permitAll"/> <form-login login-page="/login" default-target-url="/home"/> <logout logout-success-url="/home"/> <anonymous/> <remember-me/> </http>
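
    One hedged option that avoids sprinkling security-context checks through the controllers: register a custom AuthenticationEntryPoint via entry-point-ref on the <http> element and have it choose between /signup and /login based on the URL that was being requested. A rough sketch, assuming Spring Security 3.x (the /account path is a placeholder for the "sign up first" URLs):

        import java.io.IOException;
        import javax.servlet.ServletException;
        import javax.servlet.http.HttpServletRequest;
        import javax.servlet.http.HttpServletResponse;
        import org.springframework.security.core.AuthenticationException;
        import org.springframework.security.web.AuthenticationEntryPoint;

        public class SignupAwareEntryPoint implements AuthenticationEntryPoint {
            public void commence(HttpServletRequest request, HttpServletResponse response,
                                 AuthenticationException authException)
                    throws IOException, ServletException {
                if (request.getServletPath().startsWith("/account")) {
                    // unauthenticated users hitting these URLs go to sign-up instead of login
                    response.sendRedirect(request.getContextPath() + "/signup");
                } else {
                    response.sendRedirect(request.getContextPath() + "/login");
                }
            }
        }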

    Read the article

  • Print raw data to a thermal-printer using .NET

    - by blauesocke
    I'm trying to print out raw ascii data to a thermal printer. I do this by using this code example: http://support.microsoft.com/kb/322091 but my printer prints always only one character and this not until I press the form feed button. If I print something with notepad the printer will do a form feed automatically but without printing any text. The printer is connected via usb over a lpt2usb adapter and Windows 7 uses the "Generic - Generic / Text Only" driver. Anyone knows what is going wrong? How is it possible to print some words and do some form feeds? Are there some control characters I have to send? And if yes: How do I send them? Edit 14.04.2010 21:51 My code (C#) looks like this: PrinterSettings s = new PrinterSettings(); s.PrinterName = "Generic / Text Only"; RawPrinterHelper.SendStringToPrinter(s.PrinterName, "Test"); This code will return a "T" after I pressed the form feed button (This litte black button here: swissmania.ch/images/935-151.jpg - sorry, not enough reputation for two hyperlinks) Edit 15.04.2010 16:56 I'm using now the code form here: c-sharpcorner.com/UploadFile/johnodonell/PrintingDirectlytothePrinter11222005001207AM/PrintingDirectlytothePrinter.aspx I modified it a bit that I can use the following code: byte[] toSend; // 10 = line feed // 13 carriage return/form feed toSend = new byte[1] { 13 }; PrintDirect.WritePrinter(lhPrinter, toSend, toSend.Length, ref pcWritten); Running this code has the same effekt like pressing the form feed button, it works fine! But code like this still does not work: byte[] toSend; // 10 = line feed // 13 carriage return/form feed toSend = new byte[2] { 66, 67 }; PrintDirect.WritePrinter(lhPrinter, toSend, toSend.Length, ref pcWritten); This will print out just a "B" but I expect "BC" and after running any code I have to reconnect the USB cable to make it work agian. Any ideas?
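
    A hedged guess plus a sketch: the Generic / Text Only driver tends to buffer a page until it sees a page break, and in plain ASCII the form feed is 12 (13 is carriage return and 10 is line feed), so it is worth sending the whole string followed by CR/LF/FF control bytes in a single write. Using the byte-based WritePrinter call and the lhPrinter handle from the question's helper:

        // assumption: PrintDirect.WritePrinter(lhPrinter, byte[], length, ref pcWritten) as in the question
        string text = "Hello thermal printer";
        int pcWritten = 0;
        byte[] payload = new byte[text.Length + 3];
        System.Text.Encoding.ASCII.GetBytes(text, 0, text.Length, payload, 0);
        payload[text.Length]     = 13;  // CR
        payload[text.Length + 1] = 10;  // LF - end the line
        payload[text.Length + 2] = 12;  // FF - force the buffered page out
        PrintDirect.WritePrinter(lhPrinter, payload, payload.Length, ref pcWritten);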

    Read the article

  • TDD vs. Unit testing

    - by Walter
    My company is fairly new to unit testing our code. I've been reading about TDD and unit testing for some time and am convinced of their value. I've attempted to convince our team that TDD is worth the effort of learning and changing our mindsets on how we program but it is a struggle. Which brings me to my question(s). There are many in the TDD community who are very religious about writing the test and then the code (and I'm with them), but for a team that is struggling with TDD does a compromise still bring added benefits? I can probably succeed in getting the team to write unit tests once the code is written (perhaps as a requirement for checking in code) and my assumption is that there is still value in writing those unit tests. What's the best way to bring a struggling team into TDD? And failing that is it still worth writing unit tests even if it is after the code is written? EDIT What I've taken away from this is that it is important for us to start unit testing, somewhere in the coding process. For those in the team who pickup the concept, start to move more towards TDD and testing first. Thanks for everyone's input. FOLLOW UP We recently started a new small project and a small portion of the team used TDD, the rest wrote unit tests after the code. After we wrapped up the coding portion of the project, those writing unit tests after the code were surprised to see the TDD coders already done and with more solid code. It was a good way to win over the skeptics. We still have a lot of growing pains ahead, but the battle of wills appears to be over. Thanks for everyone who offered advice!

    Read the article

  • how to create wizard forms in ruby on rails

    - by Branden Silva
    I'm trying to understand the best options for pulling off a wizard form in ruby on rails. Ideally I'd like to have it so the application signup has a back and next button that allows the user to submit data in steps. So in step 1 they could fill out contact info. Once they are done they could click next and be on step 2 to fill out payment info, etc. If they make a mistake, they can click back and correct it. Some steps will be required, while others will not, but you do have to make it to the last step to submit the data to the database to sign up. They then need the ability to go back and fill out the past steps in the same fashion after completion. (example: perhaps if they clicked on a profile link they could recomplete the steps in the same fashion because they didn't want to complete all the steps right away. Maybe by being given a skip button before they completed the steps to sign up?). I also need validation to happen on what steps have been completed preventing them from moving onto the next step until corrected or completed. Option 1) I've noticed that ajax has been recommended as an option in other questions on stackoverflow. The only problem I have with it is that the user would not be able to sign up if javascript was disabled. Ideally I'd like to have it be native to ruby on rails but I'm willing to work with whatever is necessary to get it to work.

    Read the article

  • How to compile Python scripts for use in FORTRAN?

    - by Vincent Poirier
    Hello, Although I found many answers and discussions about this question, I am unable to find a solution particular to my situation. Here it is: I have a main program written in FORTRAN. I have been given a set of python scripts that are very useful. My goal is to access these python scripts from my main FORTRAN program. Currently, I simply call the scripts from FORTRAN as such: CALL SYSTEM ('python pyexample.py') Data is read from .dat files and written to .dat files. This is how the python scripts and the main FORTRAN program communicate to each other. I am currently running my code on my local machine. I have python installed with numpy, scipy, etc. My problem: The code needs to run on a remote server. For strictly FORTRAN code, I compile the code locally and send the executable to the server where it waits in a queue. However, the server does not have python installed. The server is being used as a number crunching station between universities and industry. Installing python along with the necessary modules on the server is not an option. This means that my “CALL SYSTEM ('python pyexample.py')” strategy no longer works. Solution?: I found some information on a couple of things in thread http://stackoverflow.com/questions/138521/is-it-feasible-to-compile-python-to-machine-code Shedskin, Psyco, Cython, Pypy, Cpython API These “modules”(? Not sure if that's what to call them) seem to compile python script to C code or C++. Apparently not all python features can be translated to C. As well, some of these appear to be experimental. Is it possible to compile my python scripts with my FORTRAN code? There exists f2py which converts FORTRAN code to python, but it doesn't work the other way around. Any help would be greatly appreciated. Thank you for your time. Vincent PS: I'm using python 2.6 on Ubuntu
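
    One hedged route, given that installing Python on the server is not an option: freeze the scripts into self-contained executables on a build machine with the same OS and architecture (tools like PyInstaller and cx_Freeze can bundle numpy/scipy, with varying amounts of fiddling), ship the binaries next to the FORTRAN executable, and keep the existing CALL SYSTEM and .dat-file interface. Roughly (the command line below is for recent PyInstaller releases; older ones are invoked as python pyinstaller.py):

        # on the build machine
        pyinstaller --onefile pyexample.py    # produces dist/pyexample, which runs without Python installed

        ! in the FORTRAN source, call the frozen binary instead of the interpreter
        CALL SYSTEM ('./pyexample')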

    Read the article

  • Automate the signature of the update.rdf manifest for my firefox extension

    - by streetpc
    Hello, I'm developing a Firefox extension and I'd like to provide automatic updates to my beta-testers (who are not tech-savvy). Unfortunately, the update server doesn't provide HTTPS. According to the Extension Developer Guide on signing updates, I have to sign my update.rdf and provide an encoded public key in the install.rdf. There is the McCoy tool to do all of this, but it is an interactive GUI tool and I'd like to automate the extension packaging using an Ant script (as this is part of a much bigger process). I can't find a more precise description of what happens when signing the update.rdf manifest than the one below, and the McCoy source is an awful lot of JavaScript. The doc says: The add-on author creates a public/private RSA cryptographic key pair. The public part of the key is DER encoded and then base 64 encoded and added to the add-on's install.rdf as an updateKey entry. (...) Roughly speaking the update information is converted to a string, then hashed using a sha512 hashing algorithm and this hash is signed using the private key. The resultant data is DER encoded then base 64 encoded for inclusion in the update.rdf as a signature entry. I don't know much about DER encoding, but it seems like it needs some parameters. So, would anyone know either:
    - the full algorithm to sign the update.rdf and install.rdf using a predefined keypair,
    - a scriptable alternative to McCoy, or whether a command-line tool like asn1coding would suffice, or
    - a good/simple developer tutorial on DER encoding?
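
    In case it helps as a starting point, the generic "DER-encode the public key, SHA-512 hash, RSA-sign, base64" steps can all be scripted with OpenSSL. The sketch below illustrates only those generic steps and does not reproduce the exact byte layout McCoy feeds into the signature, so its output is not a drop-in signature entry (the file names are made up):

        # one-time: generate the key pair and the DER/base64 public key for install.rdf's updateKey
        openssl genrsa -out update-key.pem 2048
        openssl rsa -in update-key.pem -pubout -outform DER | openssl base64 -A

        # per release: SHA-512 hash and RSA-sign the serialized update data, then base64 the result
        openssl dgst -sha512 -sign update-key.pem -out sig.bin update-data.bin
        openssl base64 -A -in sig.bin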

    Read the article

  • Working with ieee format numbers in ARM

    - by Jake Sellers
    I'm trying to write an ARM program that will convert an ieee number to a TNS format number. TNS is a format used by some super computers, and is similar to ieee but different. I'm trying to use several masks to place the three different "part" of the ieee number in separate registers so I can move them around accordingly. Here is my unpack subroutine: UnpackIEEE LDR r1, SMASK ;load the sign bit mask into r1 LDR r2, EMASK ;load the exponent mask into r2 LDR r3, GMASK ;load the significand mask into r3 AND r4, r0, r1 ;apply sign mask to IEEE and save into r4 AND r5, r0, r2 ;apply exponent mask to IEEE and save into r5 AND r6, r0, r3 ;apply significand mask to IEEE and save into r6 MOV pc, r14 ;return And here are the masks and number declarations so you can understand: IEEE DCD 0x40300000 ;2.75 decimal or 01000000001100000000000000000000 binary SMASK DCD 0x80000000 ;Sign bit mask EMASK DCD 0x7F800000 ;Exponent mask GMASK DCD 0x007FFFFF ;Significand mask When I step through with the debugger, the results I get are not what I expect after working through it on paper. EDIT: What I mean, is that after the subroutine runs, registers 4, 5, and 6 all remain 0. I can't figure out why the masks are not working. I think I do not fully understand how the number is being stored in the register or using the masks wrong. Any help appreciated. If you need more info just ask. EDIT: entry point: Very simple, just trying to get these subroutines working. ENTRY LDR r1, IEEE ;load IEEE num into r1 BL UnpackIEEE ;call unpack sub SWI SWI_Exit ;finish
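
    One hedged observation from the code as posted: the entry point loads the IEEE word into r1, while UnpackIEEE masks r0 (and immediately overwrites r1 with SMASK), so the masks are being applied to whatever happened to be in r0, which would leave r4-r6 at zero. A sketch of the call with the value passed in r0 instead:

        ENTRY   LDR r0, IEEE       ;load the IEEE word into r0, the register UnpackIEEE actually masks
                BL  UnpackIEEE     ;call unpack sub
                SWI SWI_Exit       ;finish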

    Read the article

  • Suppress Eclipse compiler errors in in a plug-in

    - by Jan Gorzny
    Hi, I'm currently working on a plug-in for Eclipse that translates some custom Java code (which doesn't necessarily run/compile), to runnable Java code. In particular, the plug-in allows code to be written using classes created or imported during the translation. In general, the pre-translation code runs/compiles fine provided the writer uses import statements at the top of their class files. However, it would be convenient for my users if it was not necessary to import these classes. At the moment, the lack of import statements results in (obvious) compiler errors. Would it be possible to empower my plug-in to either a) suppress/ignore these errors, or b) have Eclipse find these classes automatically, without the use of import statements? I should point out that the translated code would these include the required import statements--but this is not a problem for me. I'm also aware that this could lead to lazy programmers and some bad habits. To clarify, consider the following example of pre-translated code: File f = new File("Somefilename.txt"); which clearly requires the possibly imported class File. Without an import statement (import java.io.File;), Eclipse reports that File can not be resolved to a type. This is the error I'd like to hide in files pertaining to projects created for use with my plug-in. (The translated code would include import java.io.File; so that it would be runnable) In closing, I should point out I'm not necessarily looking for code (though I wouldn't be opposed to it), but rather some links to some relevant tutorials (if they exist), or helpful tips/ideas. Also, as this is my first plug-in, it's entirely possible that what I'd like to do is not possible and that I don't realize it--if this is the case, please let me know, preferably with some justification. Thanks!

    Read the article

  • PHP: Array of objects is empty when I come to retrieve one from the array

    - by Tom
    Good morning, I am trying to load rows from a database and then create objects from them and add these objects to a private array. Here are my classes: <?php include("databaseconnect.php"); class stationItem { private $code = ''; private $description = ''; public function setCode($code ){ $this->code = $code; } public function getCode(){ return $this->code; } public function setDescription($description){ $this->description = $description; } public function getDescription(){ return $this->description; } } class stationList { private $stationListing; function __construct() { connect(); $stationListing = array(); $result = mysql_query('SELECT * FROM stations'); while ($row = mysql_fetch_assoc($result)) { $station = new stationItem(); $station->setCode($row['code']); $station->setDescription($row['description']); array_push($stationListing, $station); } mysql_free_result($result); } public function getStation($index){ return $stationListing[$index]; } } ?> As you can see I am creating a stationItem object per database row (which for now has a code and description) and I then push these on to the end of my array which is held as a private variable in stationList. This is code which creates this classes and attempts to access the properties on them: $stations = new stationList(); $station = $stations->getStation(0); echo $station->getCode(); I am finding that the sizeof($stationList) at the end of the constructor is 1 but then it is zero when we come to try to get an object from the array using the index. Therefore the error that I get is: Fatal error: Call to a member function getCode() on a non-object Please can someone explain to me why this is happening? I guess I am misunderstanding how object references work in PHP5.
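
    A hedged sketch of the likely culprit: inside stationList, both the constructor and getStation use a local variable $stationListing rather than the object property, so the array filled in the constructor is discarded when the constructor returns and getStation reads an undefined local. Referring to the property through $this makes the two methods share the same array:

        class stationList {
            private $stationListing = array();

            function __construct() {
                connect();
                $result = mysql_query('SELECT * FROM stations');
                while ($row = mysql_fetch_assoc($result)) {
                    $station = new stationItem();
                    $station->setCode($row['code']);
                    $station->setDescription($row['description']);
                    $this->stationListing[] = $station;   // property, not a local variable
                }
                mysql_free_result($result);
            }

            public function getStation($index) {
                return $this->stationListing[$index];     // reads the same array the constructor filled
            }
        }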

    Read the article
