Search Results

Search found 6887 results on 276 pages for 'internal'.

Page 217/276

  • .NET Test Harness: what should it have?

    - by Conor
    Hi Folks, We have a software house developing code for us on a project, a .NET web service (WCF), and we are also paying for a test harness to be built as a separate billable task on a daily rate. I have just joined the company, am reviewing what we are getting from the software house, and wanted to know what you guys in industry think about it.

    Basically, what we got was a WinForm that calls the web service. It has an input area (Web Service Request) to drop our XML into, a Submit button, and a response area for the result of the web response, and that's it. Our internal BA has created all the XML request documents, so no logic was put into the harness around this. Looking on the net for a definition of a test harness, I found this: http://en.wikipedia.org/wiki/Test_harness It states a harness should do these three things: automate the testing process, execute test suites of test cases, and generate associated test reports. Clearly we have got none of this, apart from a partial "automate the testing process" via a WinForm.

    From my development background, I would have expected someone to produce a WinForm as a test harness five years ago; today there really should be some sort of tooling around this. I explicitly told the software house I expected tooling (NUnit, MbUnit, SoapUI) so we could create a regression test pack for future use. (I didn't get it, but I asked for this after the requirements were signed off, as I wasn't employed then :)) Would someone be able to clarify whether my requirement here is unrealistic? If I did this myself, I would use NUnit and TDD, and then reuse the test harness as a regression test pack in future. I am interested to see what the community thinks. Cheers
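
    For comparison, the sort of tooling-based harness described above stays tiny. A minimal NUnit sketch, with the WCF proxy class, file names and assertion all invented for illustration since the real contract isn't shown:

        using System.IO;
        using NUnit.Framework;

        [TestFixture]
        public class WebServiceRegressionTests
        {
            [Test]
            public void Submit_KnownRequest_MatchesApprovedResponse()
            {
                // Hypothetical generated WCF proxy and a BA-authored request document
                var client = new OrderServiceClient();
                string requestXml = File.ReadAllText(@"Requests\CreateOrder.xml");

                string responseXml = client.Submit(requestXml);

                // Regression check against a stored, previously approved response
                string expectedXml = File.ReadAllText(@"Expected\CreateOrder.xml");
                Assert.AreEqual(expectedXml, responseXml);
            }
        }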

    Read the article

  • Moq and accessing called parameters

    - by lozzar
    I've just started to implement unit tests (using xUnit and Moq) on an already established project of mine. The project makes extensive use of dependency injection via the Unity container. I have two services, A and B. Service A is the one being tested here. Service A calls B and gives it a delegate to an internal function. This 'callback' is used to notify A when a message has been received that it must handle. Hence A calls (where b is an instance of service B):

        b.RegisterHandler(Guid id, Action<byte[]> messageHandler);

    In order to test service A, I need to be able to call messageHandler, as this is the only way it currently accepts messages. Can this be done using Moq? I.e. can I mock service B such that when RegisterHandler is called, the value of messageHandler is passed out to my test? Or do I need to redesign this? Are there any design patterns I should be using in this case? Does anyone know of any good resources on this kind of design?
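
    Moq's Callback can do exactly this: it hands the actual arguments of the invocation to the test. A sketch, assuming service A takes an IServiceB-style interface (the interface and constructor names are invented here):

        using Moq;

        Action<byte[]> capturedHandler = null;

        var mockB = new Mock<IServiceB>();
        mockB.Setup(b => b.RegisterHandler(It.IsAny<Guid>(), It.IsAny<Action<byte[]>>()))
             .Callback<Guid, Action<byte[]>>((id, handler) => capturedHandler = handler);

        var serviceA = new ServiceA(mockB.Object);   // A registers its handler with B

        // The test can now feed A a message directly through the captured delegate
        capturedHandler(new byte[] { 0x01, 0x02 });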

    Read the article

  • OData / WCF Data Service - HTTP 500 Error

    - by Eric
    I have created an OData/WCF Data Service using Visual Studio 2010 on Windows XP SP3 with all current patches installed. When I click on "View in Browser", the service opens and I see the three tables from my EF model. However, when I add a table name ("Commands" in this case) to the end of the query string, rather than seeing the data from the table I get an HTTP 500 error ("This error (HTTP 500 Internal Server Error) means that the website you are visiting had a server problem which prevented the webpage from displaying."). I have not only followed the examples from two sites, but have also tried running the sample application that the blog poster sent me (which works on his machine), and still am not having any luck. The blog post is "Exposing OData from an Entity Framework Model". Does anyone have an idea why this is occurring and how to resolve it? Here is the output of "View in Browser":

        <?xml version="1.0" encoding="utf-8" standalone="yes" ?>
        <service xml:base="http://localhost:1883/VistaDBCommandService.svc/"
                 xmlns:atom="http://www.w3.org/2005/Atom"
                 xmlns:app="http://www.w3.org/2007/app"
                 xmlns="http://www.w3.org/2007/app">
          <workspace>
            <atom:title>Default</atom:title>
            <collection href="Commands">
              <atom:title>Commands</atom:title>
            </collection>
            <collection href="Databases">
              <atom:title>Databases</atom:title>
            </collection>
            <collection href="Statuses">
              <atom:title>Statuses</atom:title>
            </collection>
          </workspace>
        </service>

    Thanks, Eric
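
    When WCF Data Services swallows the real fault you only see a bare 500; turning on verbose errors in InitializeService usually exposes the underlying exception, and the entity-set access rules are worth checking at the same time. A sketch against a hypothetical service class (names assumed from the URI above):

        using System.Data.Services;

        public class VistaDBCommandService : DataService<VistaDBCommandEntities>
        {
            public static void InitializeService(DataServiceConfiguration config)
            {
                // Without an access rule, querying an entity set can fail even
                // though the service document renders
                config.SetEntitySetAccessRule("*", EntitySetRights.AllRead);

                // Temporary diagnostic: return the real exception instead of a bare 500
                config.UseVerboseErrors = true;
            }
        }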

    Read the article

  • ASP.NET MembershipProvider and StructureMap

    - by Ben
    I was using the default AspNetSqlMembershipProvider in my application. Authentication is performed via an AuthenticationService (since I'm also supporting other forms of membership, like OpenID). My AuthenticationService takes a MembershipProvider as a constructor parameter, and I am injecting the dependency using StructureMap like so:

        For<MembershipProvider>().Use(Membership.Provider);

    This will use the MembershipProvider configured in web.config, and all of this works great. However, now I have rolled my own MembershipProvider that makes use of a repository class. Since the MembershipProvider isn't exactly IoC friendly, I added the following code to the MembershipProvider.Initialize method:

        _membershipRepository = ObjectFactory.GetInstance<IMembershipRepository>();

    However, this raises an exception, as if StructureMap hasn't been initialized (it cannot get an instance of IMembershipRepository). Yet if I remove the code and put breakpoints in my MembershipProvider's Initialize method and my StructureMap bootstrapper, it does appear that StructureMap is configured before the MembershipProvider is initialized. My only workaround so far is to add the above code to each method in the MembershipProvider that needs the repository. This works fine, but I am curious as to why I can't get my instance in the Initialize method. Is the MembershipProvider performing some internal initialization that runs before any of my own application code does? Thanks, Ben
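
    One pattern that sidesteps the timing problem is to resolve the repository lazily, on first use, rather than in Initialize. A sketch (hedged, since the provider's other members aren't shown and are elided here, so this won't compile as-is):

        public class CustomMembershipProvider : MembershipProvider
        {
            private IMembershipRepository _membershipRepository;

            // Resolved on first use, long after both ASP.NET provider
            // initialization and the StructureMap bootstrapper have run
            private IMembershipRepository Repository
            {
                get
                {
                    if (_membershipRepository == null)
                        _membershipRepository = ObjectFactory.GetInstance<IMembershipRepository>();
                    return _membershipRepository;
                }
            }

            public override bool ValidateUser(string username, string password)
            {
                var user = Repository.GetUser(username);   // hypothetical repository method
                return user != null;
            }

            // ... remaining MembershipProvider overrides elided
        }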

    Read the article

  • number->string and related procedures in GIMP scheme scripting

    - by dim fish
    I am frustrated with string-to-number and number-to-string conversion in GIMP scripting. I am running GIMP 2.6.8 on Windows Vista. I understand that GIMP's internal Scheme implementation changes between versions, and I can't seem to nail down the documentation. From what I can gather, GIMP's Scheme is a subset of TinyScheme and/or supports the R5RS standard procedures. In any case, I usually just look in the packaged script directory for examples when I want to try something new, because that should work for sure, right? For example, grid-system.scm comes with the latest GIMP release and has the expression

        (string-append (number->string obj) " ")

    which is exactly what I want. However, if I use number->string in my own script, or even type it into GIMP's script console (which is how I usually test out new stuff I want to do), it tells me number->string is an unbound variable:

        > (number->string 3)
        Error: eval: unbound variable: number->string

    Other standard procedures from, say, R5RS work just fine:

        > (string-append "frust" "rated")
        "frustrated"

    So: 1) Is there some lurking documentation for current GIMP Scheme scripting, other than something drastic like searching GIMP's source code? 2) Can I use the GIMP console to spit out a list of all defined procedures to find something I need? 3) Can anyone else confirm that number->string is not defined in the current Windows build, even though it appears in the packaged scripts? My web searches haven't turned up any related problems, and a complete uninstall of all GIMP versions and reinstall of the latest puts me in the same scrape.

    Read the article

  • Django forms, inheritance and order of form fields

    - by Hannson
    I'm using Django forms in my website and would like to control the order of the fields. Here's how I define my forms:

        class edit_form(forms.Form):
            summary = forms.CharField()
            description = forms.CharField(widget=forms.Textarea)

        class create_form(edit_form):
            name = forms.CharField()

    The name is immutable and should only be listed when the entity is created. I use inheritance for consistency and DRY. What happens, which is not erroneous and in fact totally expected, is that the name field is listed last in the view/HTML, but I'd like the name field to be on top of summary and description. I do realize that I could easily fix it by copying summary and description into create_form and losing the inheritance, but I'd like to know if this is possible. Why? Imagine you've got 100 fields in edit_form and have to add 10 fields on the top in create_form; copying and maintaining the two forms wouldn't look so sexy then. (This is not my case, I'm just making up an example.) So, how can I override this behavior?

    Edit: Apparently there's no proper way to do this without going through nasty hacks (fiddling with the .fields attribute). The .fields attribute is a SortedDict (one of Django's internal data structures) which doesn't provide any way to reorder key:value pairs. It does, however, provide a way to insert items at a given index, but that would move the items from the class members into the constructor. This method would work, but make the code less readable. The only other way I see fit is to modify the framework itself, which is less than optimal in most situations. In short, the code would become something like this:

        class edit_form(forms.Form):
            summary = forms.CharField()
            description = forms.CharField(widget=forms.Textarea)

        class create_form(edit_form):
            def __init__(self, *args, **kwargs):
                forms.Form.__init__(self, *args, **kwargs)
                self.fields.insert(0, 'name', forms.CharField())

    That shut me up :)

    Read the article

  • Delete array of size 1

    - by Arne
    This is possibly a candidate for a one-line answer; I would like to know it anyway. I am writing a simple circular buffer, and for reasons that are not important to the question I need to implement it using an array of doubles. In fact I have not investigated other ways to do it, but since an array is required anyway, I have not spent much time looking for alternatives.

        template<typename T>
        class CircularBuffer
        {
        public:
            CircularBuffer(unsigned int size);
            ~CircularBuffer();
            void Resize(unsigned int new_size);
            ...
        private:
            T* buffer;
            unsigned int buffer_size;
        };

    Since I need the buffer to be dynamically sized, buffer_size is neither const nor a template parameter. Now the question: during construction, and in Resize(unsigned int) for that matter, I only require the size to be at least one, although a buffer of size one is effectively no longer a buffer. Of course using a simple double instead would be more appropriate, but anyway. Now, when deleting the internal buffer in the destructor, or in Resize for that matter, I need to delete the allocated memory. Question is, how? The first candidate is of course

        delete[] buffer;

    but then again, if I have allocated a buffer of size one, that is, if the pointer was acquired with buffer = new T[1], is it still appropriate to call delete[] on the pointer, or do I need to call delete buffer; (without brackets)? Thanks, Arne
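
    For the record, the rule in C++ does not depend on the element count: whatever came from new[] must go to delete[], even for one element (or zero). A tiny illustration:

        #include <iostream>

        int main() {
            double* one  = new double[1];   // array new, even with a single element
            double* zero = new double[0];   // legal: yields a valid, non-dereferenceable pointer

            delete[] one;    // must be delete[], matching new[]
            delete[] zero;   // same here; plain delete would be undefined behavior

            std::cout << "done\n";
            return 0;
        }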

    Read the article

  • Speed up bitstring/bit operations in Python?

    - by Xavier Ho
    I wrote a prime number generator using the Sieve of Eratosthenes and Python 3.1. The code runs correctly and gracefully at 0.32 seconds on ideone.com to generate prime numbers up to 1,000,000.

        # from bitstring import BitString

        def prime_numbers(limit=1000000):
            '''Prime number generator. Yields the series
            2, 3, 5, 7, 11, 13, 17, 19, 23, 29 ...
            using Sieve of Eratosthenes.
            '''
            yield 2
            sub_limit = int(limit**0.5)
            flags = [False, False] + [True] * (limit - 2)
            # flags = BitString(limit)
            # Step through all the odd numbers
            for i in range(3, limit, 2):
                if flags[i] is False:
                # if flags[i] is True:
                    continue
                yield i
                # Exclude further multiples of the current prime number
                if i <= sub_limit:
                    for j in range(i*3, limit, i<<1):
                        flags[j] = False
                        # flags[j] = True

    The problem is, I run out of memory when I try to generate numbers up to 1,000,000,000:

        flags = [False, False] + [True] * (limit - 2)
        MemoryError

    As you can imagine, allocating 1 billion boolean values (4 or 8 bytes each in Python, not 1 byte; see the comments) is really not feasible, so I looked into bitstring. I figured using 1 bit for each flag would be much more memory-efficient. However, the program's performance dropped drastically: 24 seconds runtime for primes up to 1,000,000. This is probably due to the internal implementation of bitstring. You can comment/uncomment the marked lines to see what I changed to use BitString in the code snippet above. My question is: is there a way to speed up my program, with or without bitstring?
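
    One well-known middle ground, offered here as a sketch rather than the poster's code: a bytearray costs one byte per flag (roughly 1 GB for 10^9 instead of 8 GB of list references) and supports fast slice assignment, which moves the inner loop into C:

        def prime_numbers(limit=1000000):
            '''Sieve of Eratosthenes over a bytearray: 1 byte per flag,
            with slice assignment doing the striking-out at C speed.'''
            yield 2
            flags = bytearray([1]) * limit          # 1 = "possibly prime"
            for i in range(3, int(limit**0.5) + 1, 2):
                if flags[i]:
                    # strike out i*i, i*i + 2i, ... in one slice assignment
                    flags[i*i::2*i] = bytearray(len(range(i*i, limit, 2*i)))
            for i in range(3, limit, 2):
                if flags[i]:
                    yield i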

    Read the article

  • ACL architecture for a Software as a Service in Spring 3.0

    - by geoaxis
    I am making a software as a service using Spring 3.0 (Spring MVC, Spring Security, Spring Roo, Hibernate) and have to come up with a flexible access control list mechanism. I have three different kinds of users:

    1. System (can do anything to the system; includes admin and internal daemons)
    2. Operations (can add and delete users and organizations, and do maintenance work on behalf of users and organizations)
    3. End Users (they belong to one or more organizations; for each organization, a user can have one or more roles, like organization admin or organization read-only member; a role like orgadmin can also add users for that organization)

    Now my question is, how should I model the User entity? If I just take the End User, it can belong to one or more organizations, so each user can contain a set of references to its organizations. But how do we model the user's roles for each organization? For example, user UX belongs to organizations og1, og2 and og3; for og1 he is both orgadmin and org-read-only-user, whereas for og2 he is only orgadmin and for og3 only org-read-only-user. I have the possibility of making each user belong to one organization alone, but that makes the system bounded and I don't like that idea (although it would still satisfy the requirement). If you have a better, extensible ACL architecture, please suggest it. Since it's a software as a service, one would expect a lot of different organizations to be part of the same system. I had one concern that it is not a good idea to keep og1 and og2 data in the same DB (if og1 decides to spawn a hundred reports on the system, og2 should not suffer), but that is something advanced for now and is not directly related to ACL, rather to the physical distribution of data and the setup of services based on those ACLs. This is a community wiki question, please correct anything you wish. Thanks
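
    The many-to-many-with-payload shape of the problem (user, organization, role) usually ends up as an explicit membership entity. A hedged JPA sketch, all names invented:

        import javax.persistence.*;

        @Entity
        public class Membership {
            @Id @GeneratedValue
            private Long id;

            @ManyToOne(optional = false)
            private User user;

            @ManyToOne(optional = false)
            private Organization organization;

            // e.g. ORG_ADMIN, ORG_READ_ONLY; a user gets one Membership row per
            // (organization, role) pair, so UX can hold several roles in og1
            @Enumerated(EnumType.STRING)
            private OrgRole role;
        }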

    Read the article

  • Flash Security Error Accessing URL with crossdomain.xml

    - by user163757
    Hello, I recently deployed a Flash application to a server and am now experiencing errors when making HTTPService requests. I have put what I believe to be the most permissive crossdomain.xml possible in the wwwroot folder, and still get the errors. Interestingly enough, the error only seems to occur when the request is made from a direct user interaction (i.e. a button click). The application makes other requests that are initiated by other means (i.e. creationComplete), and they seem to work as expected. Does anyone see anything wrong with the crossdomain.xml, or have any other suggestions? The error message:

        [RPC Fault faultString="Security error accessing url"
            faultCode="Channel.Security.Error" faultDetail="Destination: DefaultHTTP"]
            at mx.rpc::AbstractInvoker/http://www.adobe.com/2006/flex/mx/internal::faultHandler()
            at mx.rpc::Responder/fault()
            at mx.rpc::AsyncRequest/fault()
            at DirectHTTPMessageResponder/securityErrorHandler()
            at flash.events::EventDispatcher/dispatchEventFunction()
            at flash.events::EventDispatcher/dispatchEvent()
            at flash.net::URLLoader/redirectEvent()

    The crossdomain.xml:

        <!DOCTYPE cross-domain-policy SYSTEM
            "http://www.macromedia.com/xml/dtds/cross-domain-policy.dtd">
        <cross-domain-policy>
            <site-control permitted-cross-domain-policies="all" />
            <allow-access-from domain="*" secure="false" />
            <allow-http-request-headers-from domain="*" headers="*" secure="false" />
        </cross-domain-policy>

    Read the article

  • Tim Heuer's FloatableWindow issue

    - by MarioEspinoza
    Hi, I'm having some trouble with Tim's FloatableWindow (source code & DLLs). It throws the following exception once the control is closed:

        Object reference not set to an instance of an object
            at System.Windows.Controls.FloatableWindow.b__0(Object s, EventArgs args)
            at System.Windows.CoreInvokeHandler.InvokeEventHandler(Int32 typeIndex, Delegate handlerDelegate, Object sender, Object args)
            at MS.Internal.JoltHelper.FireEvent(IntPtr unmanagedObj, IntPtr unmanagedObjArgs, Int32 argsTypeIndex, String eventName)

    First I created a control using the FloatableWindow template, and then I created the window in code-behind:

        private void Button_Click_1(object sender, RoutedEventArgs e)
        {
            FloatableWindow1 f1 = new FloatableWindow1(); // the templated one
            f1.ShowDialog();
        }

        private void Button_Click_2(object sender, RoutedEventArgs e)
        {
            FloatableWindow f = new FloatableWindow();
            f.Height = 100;
            f.Width = 100;
            f.Background = new SolidColorBrush(Colors.Yellow);
            f.ShowDialog();
        }

    But still the same issue... I'm not trying to access any information in the Closed event handler. I'm running the v3.0.40624.4 release of the DLL on Silverlight v3.0.50106.0 in a C# project with RIA Services. Thanks

    Read the article

  • Difference between GL10 and GLES10 on Android

    - by kayahr
    The GLSurfaceView.Renderer interface of the Android SDK gives me a GL interface as a parameter, which has the type GL10. This interface is implemented by some private internal JNI wrapper class. But there is also the class GLES10, where all the GL methods are available as static methods. Is there an important difference between them? So what if I ignore the gl parameter of onDrawFrame and instead use the static methods of GLES10 everywhere? Here is an example. Instead of doing this:

        void onDrawFrame(GL10 gl) {
            drawSomething(gl);
        }

        void drawSomething(GL10 gl) {
            gl.glLoadIdentity();
            ...
        }

    I could do this:

        void onDrawFrame(GL10 gl) {
            drawSomething();
        }

        void drawSomething() {
            GLES10.glLoadIdentity();
            ...
        }

    The advantage is that I don't have to pass the GL context to all called methods. But even if it works (and it works, I tried it), I wonder if there are any disadvantages or reasons NOT to do it like that.

    Read the article

  • Multiple sendto() using UDP socket

    - by ereOn
    Hi, I have a piece of network software which uses UDP to communicate with other instances of the same program. For various reasons, I must use UDP here. I recently had problems sending huge amounts of data over UDP and had to implement a fragmentation system to split my messages into small data chunks. So far it has worked well, but I now encounter an issue when I have to send a lot of data chunks. I have the following algorithm:

    1. Split the message into small data chunks (around 1500 bytes)
    2. Iterate over the chunk list and send each one using sendto()

    However, when I send a lot of data chunks, the receiver only gets the first 6 messages. Sometimes it misses the sixth and receives the seventh; it depends. Anyway, sendto() always indicates success. This always happens when I test my software over a loopback interface (127.0.0.1) but never over my LAN network. If I add something like

        std::cout << "test" << std::endl;

    between the sendto() calls, then every frame is received. I am aware that UDP allows packet loss and that my frames might be lost for many reasons, and I suppose it has to do with the rate at which I am sending the data chunks. What would be the right approach here?

    1. Implementing some acknowledgement mechanism (just like TCP) seems overkill.
    2. Adding some arbitrary waiting time between the sendto() calls is ugly and will probably decrease performance.
    3. Increasing (if possible) the receiver's internal UDP buffer? I don't even know if this is possible.
    4. Something else?

    I really need your advice here. Thanks very much.
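
    On the third option: the receive buffer can indeed be enlarged with setsockopt, which is often enough to stop loopback drops of this kind. A sketch (the OS may cap or round the value, so read it back; on Linux the ceiling is net.core.rmem_max):

        #include <stdio.h>
        #include <sys/socket.h>

        /* Ask the kernel for a 4 MB receive buffer on the receiving socket. */
        int enlarge_rcvbuf(int sock) {
            int size = 4 * 1024 * 1024;
            if (setsockopt(sock, SOL_SOCKET, SO_RCVBUF, &size, sizeof(size)) < 0) {
                perror("setsockopt(SO_RCVBUF)");
                return -1;
            }
            socklen_t len = sizeof(size);
            getsockopt(sock, SOL_SOCKET, SO_RCVBUF, &size, &len);
            printf("effective receive buffer: %d bytes\n", size);
            return 0;
        }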

    Read the article

  • Turing model vs. von Neumann model

    - by Santhosh
    First some background (based on my understanding). The von Neumann architecture describes the stored-program computer where instructions and data are stored in memory, and the machine works by changing its internal state: an instruction operates on some data and modifies the data. So inherently, there is state maintained in the system. The Turing machine architecture works by manipulating symbols on a tape: a tape with an infinite number of slots exists, and at any one point in time the Turing machine is at a particular slot. Based on the symbol read at that slot, the machine changes the symbol and moves to a different slot. All of this is deterministic. My questions are:

    1. Is there any relation between these two models (was the von Neumann model based on or inspired by the Turing model)?
    2. Can we say that the Turing model is a superset of the von Neumann model?
    3. Does functional programming fit into the Turing model? If so, how? (I assume FP does not lend itself nicely to the von Neumann model.)

    Read the article

  • MDX equivalent to SQL subqueries with aggregation

    - by James Lampe
    I'm new to MDX and trying to solve the following problem. I've investigated calculated members, subselects, scope statements, etc., but can't quite get it to do what I want. Let's say I'm trying to come up with the MDX equivalent of the following SQL query:

        SELECT SUM(netMarketValue) net,
               SUM(CASE WHEN netMarketValue > 0 THEN netMarketValue ELSE 0 END) assets,
               SUM(CASE WHEN netMarketValue < 0 THEN netMarketValue ELSE 0 END) liabilities,
               SUM(ABS(netMarketValue)) gross,
               someEntity1
        FROM (SELECT SUM(marketValue) netMarketValue, someEntity1, someEntity2
              FROM <some set of tables>
              GROUP BY someEntity1, someEntity2) t
        GROUP BY someEntity1

    In other words, I have an account ledger where I hide internal offsetting transactions (within someEntity2), then calculate assets and liabilities after aggregating them by someEntity2. Then I want to see the grand total of those assets and liabilities aggregated by the bigger entity, someEntity1. In my MDX schema I'd presumably have a cube with dimensions for someEntity1 and someEntity2, and marketValue would be my fact table/measure. I suppose I could create another DSV that did what my subquery does (calculating net) and simply create a cube with that as my measure dimension, but I wonder if there is a better way. I'd rather not have two cubes (one for these net calculations and another at a lower level of granularity for other use cases), since it would mean a lot of duplicate info in my database. These will be very large cubes.
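
    For comparison, one way this shape is often written in MDX, offered as a hedged sketch (cube, dimension and measure names invented): the inner GROUP BY becomes a SUM over the someEntity2 members with an IIF picking the sign, evaluated at each someEntity1 row.

        WITH
          MEMBER [Measures].[Assets] AS
            SUM([SomeEntity2].[SomeEntity2].Members,
                IIF([Measures].[Market Value] > 0, [Measures].[Market Value], NULL))
          MEMBER [Measures].[Liabilities] AS
            SUM([SomeEntity2].[SomeEntity2].Members,
                IIF([Measures].[Market Value] < 0, [Measures].[Market Value], NULL))
          MEMBER [Measures].[Gross] AS
            [Measures].[Assets] - [Measures].[Liabilities]
        SELECT
          { [Measures].[Market Value], [Measures].[Assets],
            [Measures].[Liabilities], [Measures].[Gross] } ON COLUMNS,
          [SomeEntity1].[SomeEntity1].Members ON ROWS
        FROM [SomeCube]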

    Read the article

  • How to use an OSGi service from a web application?

    - by Jaime Soriano
    I'm trying to develop a web application that is launched from an HTTP OSGi service. This application needs to use another OSGi service (db4o OSGi), for which I need a reference to a BundleContext. I have tried two different approaches to get the OSGi context in the web application:

    1. Store the BundleContext of the Activator in a static field of a class that the web service can import and use.
    2. Use FrameworkUtil.getBundle(this.getClass()).getBundleContext() (this being an instance of MainPage, a class of the web application).

    I think the first option is completely wrong, but anyway I'm having class loader problems with both options. With the second one, it raises a LinkageError:

        java.lang.LinkageError: loader constraint violation: loader (instance of
        org/apache/felix/framework/ModuleImpl$ModuleClassLoader) previously initiated
        loading for a different type with name "com/db4o/ObjectContainer"

    I also tried with Equinox and got a similar error:

        java.lang.LinkageError: loader constraint violation: loader (instance of
        org/eclipse/osgi/internal/baseadaptor/DefaultClassLoader) previously initiated
        loading for a different type with name "com/db4o/ObjectContainer"

    The code that provokes the exception is:

        ServiceReference reference = context.getServiceReference(Db4oService.class.getName());
        Db4oService service = (Db4oService) context.getService(reference);
        database = service.openFile("foo.db");

    The exception is raised in the last line. The database variable is of type ObjectContainer; if I change its type to Object, the exception is not raised, but it's not useful as an Object :)
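
    That error pattern usually means com.db4o is loaded twice: once by the framework from the db4o bundle backing Db4oService, and once from a copy embedded in the web bundle itself. A hedged sketch of the manifest shape that avoids it, importing the package instead of shipping the classes:

        Manifest-Version: 1.0
        Bundle-SymbolicName: my.web.app
        Bundle-Version: 1.0.0
        Import-Package: com.db4o,
         com.db4o.osgi,
         org.osgi.framework

    The db4o jar must then be absent from the web bundle itself (no Bundle-ClassPath entry and no copy in WEB-INF/lib), so that both sides resolve ObjectContainer from the same exporting bundle.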

    Read the article

  • Software development metrics and reporting

    - by David M
    I've had some interesting conversations recently about software development metrics, in particular how they can be used in a reasonably large organisation to help development teams work better. I know there have been Stack Overflow questions about which metrics are good to use, like this one, but my question is more about which metrics are useful to which stakeholders, and at what level of aggregation. As an example, my view is that code coverage is a useful metric in the following ways (and maybe others):

    1. For a team's own internal use, when combined with other measurements.
    2. For facilitating/enabling/mentoring teams, where it might be instructive when considered on a team-by-team basis as a trend (e.g. if teams A and B have coverage this month of 75 and 50, I'd be more concerned about team A than B if the previous month they'd had 80 and 40).
    3. For senior management, when presented as an aggregated statistic across a number of teams or a whole department.

    But I don't think it's useful for senior management to see this on a team-by-team basis, as this encourages artificial attempts to bolster coverage with tests that merely exercise, rather than test, code. I'm in an organisation with a couple of levels in its management hierarchy, but where the vast majority of managers are technically minded and able (with many still getting their hands dirty). Some of the development teams are leading the way in driving towards agile development practices, but others lag, and there is now a serious mandate from the top for this to be the way the organisation works. A couple of us are starting a programme to encourage this. In this sort of organisation, what sort of metrics do you think are useful, to whom, why, and at what level of aggregation? I don't want people to feel their performance is being assessed based on a metric they can artificially influence; at the same time, senior management are going to want some sort of evidence that progress is being made. What advice or caveats can you provide based on experience in your own organisations?

    Edit: We definitely want to use metrics as a tool for organisational improvement, not as a tool for individual performance measurement.

    Read the article

  • clientaccesspolicy.xml not being requested via HTTPS

    - by Philip
    I have a Silverlight app that has been using HTTP to communicate with self-hosted WCF services during development. I am now securing the services via HTTPS, and I am getting an error I had back at the beginning of the project:

        An error occurred while trying to make a request to URI 'https://localhost:8303/service'.
        This could be due to attempting to access a service in a cross-domain way without a proper
        cross-domain policy in place, or a policy that is unsuitable for SOAP services. You may need
        to contact the owner of the service to publish a cross-domain policy file and to ensure it
        allows SOAP-related HTTP headers to be sent. This error may also be caused by using internal
        types in the web service proxy without using the InternalsVisibleToAttribute attribute.
        Please see the inner exception for more details.

    My clientaccesspolicy.xml file is set up to allow access from http://* and https://*. The only difference is using HTTP vs. HTTPS. The issue is that I can usually see (via Fiddler) the clientaccesspolicy.xml file being requested, but now I cannot. I'm assuming the call is failing because of this. Any ideas?
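
    For reference, Silverlight fetches the policy from the root of the target scheme, host and port, so the self-hosted service must be able to answer https://localhost:8303/clientaccesspolicy.xml itself, which means the host needs an HTTPS base address at the root of that port serving the policy. A sketch of a policy admitting both schemes (this is the standard Silverlight policy format; the endpoint wiring that serves it is assumed):

        <?xml version="1.0" encoding="utf-8"?>
        <access-policy>
          <cross-domain-access>
            <policy>
              <allow-from http-request-headers="*">
                <domain uri="http://*" />
                <domain uri="https://*" />
              </allow-from>
              <grant-to>
                <resource path="/" include-subpaths="true" />
              </grant-to>
            </policy>
          </cross-domain-access>
        </access-policy>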

    Read the article

  • End-to-end kerberos delegated authentication in ASP.NET

    - by Erlend
    I'm trying to set up an internal website that contacts another backend service within the network on behalf of the user, using an HttpWebRequest. I have to use Integrated Windows Authentication on the ASP.NET application, as the backend system only supports this type of authentication. I'm able to set up IWA on the ASP.NET application, and it uses Kerberos as I expect it to. However, when the authentication is delegated to the backend system it no longer works. This is because the backend system only supports Kerberos IWA, but the delegation, for some reason, even though the incoming request is Kerberos authenticated, converts the authentication to NTLM before forwarding it to the backend system. Does anybody know what I need to do on the ASP.NET application in order to forward the identity using Kerberos? I've currently tried the following, but it doesn't seem to work:

        CredentialCache credentialCache = new CredentialCache();
        credentialCache.Add(request.RequestUri, "Negotiate",
            CredentialCache.DefaultCredentials.GetCredential(request.RequestUri, "Kerberos"));
        request.Credentials = credentialCache;

    I've also tried "Kerberos" where it now says "Negotiate", but it doesn't seem to do much.
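
    A fallback to NTLM at the second hop is classically an Active Directory configuration issue (the web server's account must be trusted for delegation and the backend's SPN registered) rather than a coding one. On the code side, a hedged sketch of the knobs WebRequest actually exposes, with the backend URL invented:

        using System.Net;
        using System.Net.Security;
        using System.Security.Principal;

        var request = (HttpWebRequest)WebRequest.Create("https://backend.example/service");

        // Flow the caller's Windows identity (requires <identity impersonate="true"/>
        // or explicit WindowsIdentity impersonation in the ASP.NET app)
        request.Credentials = CredentialCache.DefaultCredentials;

        // Explicitly permit the ticket to be delegated to the next hop
        request.ImpersonationLevel = TokenImpersonationLevel.Delegation;
        request.AuthenticationLevel = AuthenticationLevel.MutualAuthRequested;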

    Read the article

  • Packaging ejb3 swing client

    - by soontobeared
    Hi, I get a "java.lang.ClassNotFoundException: org.jnp.interfaces.NamingContextFactory" error while running my packaged EJB3 Swing client jar. Here's the stack trace:

        G:\Courses\OSUMC\Installables\June 5\New>java -jar MetaDB-Client.jar
        javax.naming.NoInitialContextException: Cannot instantiate class:
        org.jnp.interfaces.NamingContextFactory [Root exception is
        java.lang.ClassNotFoundException: org.jnp.interfaces.NamingContextFactory]
            at javax.naming.spi.NamingManager.getInitialContext(Unknown Source)
            at javax.naming.InitialContext.getDefaultInitCtx(Unknown Source)
            at javax.naming.InitialContext.init(Unknown Source)
            at javax.naming.InitialContext.<init>(Unknown Source)
            at net.massmatrix.metadb.ui.facade.BaseEJBFacade.getInitialContext(BaseEJBFacade.java:26)
            at net.massmatrix.metadb.ui.facade.UserManagerFacade.getUserManager(UserManagerFacade.java:24)
            at net.massmatrix.metadb.ui.facade.UserManagerFacade.isUserNameAvailable(UserManagerFacade.java:44)
            at net.massmatrix.metadb.ui.MainFrame.main(MainFrame.java:269)
        Caused by: java.lang.ClassNotFoundException: org.jnp.interfaces.NamingContextFactory
            at java.net.URLClassLoader$1.run(Unknown Source)
            at java.security.AccessController.doPrivileged(Native Method)
            at java.net.URLClassLoader.findClass(Unknown Source)
            at java.lang.ClassLoader.loadClass(Unknown Source)
            at sun.misc.Launcher$AppClassLoader.loadClass(Unknown Source)
            at java.lang.ClassLoader.loadClass(Unknown Source)
            at java.lang.Class.forName0(Native Method)
            at java.lang.Class.forName(Unknown Source)
            at com.sun.naming.internal.VersionHelper12.loadClass(Unknown Source)
            ... 8 more
        Exception in thread "main" java.lang.NullPointerException
            at net.massmatrix.metadb.ui.facade.UserManagerFacade.isUserNameAvailable(UserManagerFacade.java:44)
            at net.massmatrix.metadb.ui.MainFrame.main(MainFrame.java:269)

    Here are my packaged Swing client jar contents:

        MetaDB-Client.jar
        \lib       - contains all jboss\client jars
        \net\...   - contains class files (from both client and server)
        META-INF\MANIFEST.MF
        jndi.properties

    Here's my jndi.properties:

        java.naming.factory.initial=org.jnp.interfaces.NamingContextFactory
        java.naming.factory.url.pkgs=org.jboss.naming:org.jnp.interfaces
        java.naming.provider.url=localhost:1099

    Here's my MANIFEST.MF:

        Manifest-Version: 1.0
        Main-Class: net.massmatrix.metadb.ui.MainFrame
        Class-Path: lib/*

    Command used to create the jar:

        jar cfm MetaDB-Client.jar MANIFEST.MF net\* lib\* jndi.properties

    What else am I missing? Thanks.
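
    One likely culprit, offered as a hedged observation: a manifest Class-Path does not understand wildcards (lib/* is a feature of the java -cp command line, not of MANIFEST.MF), and jars nested inside another jar are not loaded by the standard class loader either. The manifest would need the client jars listed explicitly, sitting in a lib directory next to MetaDB-Client.jar on disk rather than packed inside it, for example (jar names illustrative):

        Manifest-Version: 1.0
        Main-Class: net.massmatrix.metadb.ui.MainFrame
        Class-Path: lib/jbossall-client.jar lib/jnp-client.jar lib/jboss-common-client.jar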

    Read the article

  • How to use 3rd party libraries in glassfish?

    - by Hank
    I need to connect to a MongoDB instance from my EJB3 application, running on GlassFish 3.0.1. The Mongo project provides a set of drivers, and I'm able to use them in a standalone Java application. How would I use them in a Java EE application? Or maybe better phrasing: how would I make a 3rd party library available to my application when it runs in an EJB container? At the moment, I'm getting a java.lang.NoClassDefFoundError when deploying a bean that tries to import from the library:

        [#|2010-03-24T11:42:15.164+0100|SEVERE|glassfishv3.0|global|_ThreadID=28;_ThreadName=Thread-1;|
            Class [ com/mongodb/DBObject ] not found. Error while loading [ class mvs.core.LocationCacheService ]|#]
        [#|2010-03-24T11:42:15.164+0100|WARNING|glassfishv3.0|javax.enterprise.system.tools.deployment.org.glassfish.deployment.common|_ThreadID=28;_ThreadName=Thread-1;|
            Error in annotation processing: java.lang.NoClassDefFoundError: com/mongodb/DBObject|#]
        [#|2010-03-24T11:42:15.259+0100|SEVERE|glassfishv3.0|javax.enterprise.system.core.com.sun.enterprise.v3.server|_ThreadID=28;_ThreadName=Thread-1;|
            Exception while loading the app org.glassfish.deployment.common.DeploymentException: java.lang.NoClassDefFoundError: com/mongodb/DBObject
            at org.glassfish.weld.WeldDeployer.event(WeldDeployer.java:171)
            at org.glassfish.kernel.event.EventsImpl.send(EventsImpl.java:125)
            at org.glassfish.internal.data.ApplicationInfo.load(ApplicationInfo.java:224)
            at com.sun.enterprise.v3.server.ApplicationLifecycle.deploy(ApplicationLifecycle.java:338)

    I tried adding it to the NetBeans project (Properties > Libraries > Compile > Add JAR, enable 'Package'), and I also tried manually copying the jar file to $GF_HOME/glassfish/domains/domain1/lib (where the mysql-connector already resides). Do I need to 'register' the library with the container? Reference it via an annotation? Extend the classpath of the container to include the library?
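
    A couple of packaging routes typically work here; sketched below is the bundling option, which keeps the driver with the application instead of in the domain's lib directory (the jar name is illustrative). Note also that jars copied into domains/domain1/lib are only picked up after a server restart, which may be all that was missing.

        myapp.ear
            lib/
                mongo-java-driver.jar    <- visible to every module in the EAR
            myapp-ejb.jar                <- can import com.mongodb.* directly

        or, for a plain web application:

        myapp.war
            WEB-INF/lib/
                mongo-java-driver.jar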

    Read the article

  • Resize transparent images using C#

    - by MartinHN
    Does anyone have the secret formula to resizing transparent images (mainly GIFs) without ANY quality loss, whatsoever? I've tried a bunch of stuff; the closest I get is not good enough. Take a look at my main image: http://www.thewallcompany.dk/test/main.gif And then the scaled image: http://www.thewallcompany.dk/test/ScaledImage.gif

        // Internal resize for indexed colored images
        void IndexedRezise(int xSize, int ySize)
        {
            BitmapData sourceData;
            BitmapData targetData;

            AdjustSizes(ref xSize, ref ySize);

            scaledBitmap = new Bitmap(xSize, ySize, bitmap.PixelFormat);
            scaledBitmap.Palette = bitmap.Palette;
            sourceData = bitmap.LockBits(new Rectangle(0, 0, bitmap.Width, bitmap.Height),
                ImageLockMode.ReadOnly, bitmap.PixelFormat);
            try
            {
                targetData = scaledBitmap.LockBits(new Rectangle(0, 0, xSize, ySize),
                    ImageLockMode.WriteOnly, scaledBitmap.PixelFormat);
                try
                {
                    xFactor = (Double)bitmap.Width / (Double)scaledBitmap.Width;
                    yFactor = (Double)bitmap.Height / (Double)scaledBitmap.Height;
                    sourceStride = sourceData.Stride;
                    sourceScan0 = sourceData.Scan0;
                    int targetStride = targetData.Stride;
                    System.IntPtr targetScan0 = targetData.Scan0;
                    unsafe
                    {
                        byte* p = (byte*)(void*)targetScan0;
                        int nOffset = targetStride - scaledBitmap.Width;
                        int nWidth = scaledBitmap.Width;
                        for (int y = 0; y < scaledBitmap.Height; ++y)
                        {
                            for (int x = 0; x < nWidth; ++x)
                            {
                                p[0] = GetSourceByteAt(x, y);
                                ++p;
                            }
                            p += nOffset;
                        }
                    }
                }
                finally
                {
                    scaledBitmap.UnlockBits(targetData);
                }
            }
            finally
            {
                bitmap.UnlockBits(sourceData);
            }
        }

    I'm using the above code to do the indexed resizing. Does anyone have improvement ideas?
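
    A sketch of one alternative, not a drop-in replacement for the code above: scale into a 32bpp ARGB surface so the transparency survives interpolation, then re-quantize back to a palette when saving as GIF (the quantizer itself is out of scope here):

        using System.Drawing;
        using System.Drawing.Drawing2D;
        using System.Drawing.Imaging;

        Bitmap ResizeKeepingTransparency(Image source, int width, int height)
        {
            var target = new Bitmap(width, height, PixelFormat.Format32bppArgb);
            using (var g = Graphics.FromImage(target))
            {
                g.InterpolationMode = InterpolationMode.HighQualityBicubic;
                g.PixelOffsetMode = PixelOffsetMode.HighQuality;
                g.DrawImage(source, new Rectangle(0, 0, width, height));
            }
            return target; // saving as GIF again needs an octree/median-cut quantizer
        }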

    Read the article

  • What is the best way to send structs containing enum values via sockets in C?

    - by Axel
    I've got lots of different structs containing enum members that I have to transmit via TCP/IP. While the communication endpoints are on different operating systems (Windows XP and Linux), meaning different compilers (gcc 4.x.x and MSVC 2008), both program parts share the same header files with type declarations. For performance reasons, the structures should be transmitted directly (see the code sample below) without expensively serializing or streaming the members. So the question is how to ensure that both compilers use the same internal memory representation for the enumeration members (i.e. both use 32-bit unsigned integers), or whether there is a better way to solve this problem.

        // type and enum declarations
        typedef enum
        {
            A = 1,
            B = 2,
            C = 3
        } eParameter;

        typedef enum
        {
            READY = 400,
            RUNNING = 401,
            BLOCKED = 402,
            FINISHED = 403
        } eState;

        #pragma pack(push,1)
        typedef struct
        {
            eParameter mParameter;
            eState mState;
            int32_t miSomeValue;
            uint8_t miAnotherValue;
            ...
        } tStateMessage;
        #pragma pack(pop)

        // ... send via socket
        tStateMessage msg;
        send(iSocketFD, (void*)(&msg), sizeof(tStateMessage), 0);

        // ... receive message on the other side
        tStateMessage msg_received;
        recv(iSocketFD, (void*)(&msg_received), sizeof(tStateMessage), 0);

    Additionally: since both endpoints are little-endian machines, endianness is not a problem here, and the pack #pragma solves alignment issues satisfactorily. Thanks for your answers, Axel
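
    One hedged alternative that removes the question entirely: keep enums out of the wire format and use fixed-width integers in the transmitted struct, converting at the boundary. A sketch reusing the declarations above (the wire-struct and function names are invented):

        #include <stdint.h>

        #pragma pack(push, 1)
        typedef struct
        {
            uint32_t mParameter;    /* carries an eParameter value */
            uint32_t mState;        /* carries an eState value */
            int32_t  miSomeValue;
            uint8_t  miAnotherValue;
        } tStateMessageWire;
        #pragma pack(pop)

        /* convert at the sending boundary; the layout is now compiler-independent */
        static void to_wire(const tStateMessage *in, tStateMessageWire *out)
        {
            out->mParameter     = (uint32_t)in->mParameter;
            out->mState         = (uint32_t)in->mState;
            out->miSomeValue    = in->miSomeValue;
            out->miAnotherValue = in->miAnotherValue;
        }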

    Read the article

  • Winform User Settings - Allow multiple choice values at Runtime

    - by Refracted Paladin
    I created a simple user settings dialog by binding Properties.Settings to a PropertyGrid. This works like a charm, but now I would like to allow only certain choices for some values. I have noticed that some types will give a dropdown of possible choices. This is what I am shooting for, but for, say, strings. For example, one of the settings is UserTheme, which is a string: Black, Blue, Silver. The program reads that string from the settings file and sets the theme on startup. I can type in a correct theme and it works, but if I type in Pink it will not, as there is no pink option. This is my VERY simple UserSettingsForm code:

        #region FIELDS
        internal Settings userSettings;
        #endregion

        #region EVENTS
        private void frmEditUserControl_Load(object sender, EventArgs e)
        {
            userSettings = Settings.Default;
            this.propertyGrid1.SelectedObject = userSettings;
            this.propertyGrid1.PropertySort = PropertySort.Alphabetical;
        }

        private void btnSave_Click(object sender, EventArgs e)
        {
            userSettings.Save();
            //this.DialogResult = DialogResult.OK;
            this.Close();
        }

        private void btnCancel_Click(object sender, EventArgs e)
        {
            userSettings.Reload();
            this.Close();
        }
        #endregion
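
    The dropdowns the PropertyGrid shows for some types come from TypeConverters, and the same mechanism works for strings. A hedged sketch (attaching the attribute to a generated Settings property normally means a partial class or custom provider, which is glossed over here):

        using System.ComponentModel;

        // Gives the PropertyGrid a fixed dropdown of theme names for a string property
        internal class ThemeConverter : StringConverter
        {
            public override bool GetStandardValuesSupported(ITypeDescriptorContext context)
            {
                return true;
            }

            public override bool GetStandardValuesExclusive(ITypeDescriptorContext context)
            {
                return true;   // dropdown only; free typing ("Pink") is rejected
            }

            public override StandardValuesCollection GetStandardValues(ITypeDescriptorContext context)
            {
                return new StandardValuesCollection(new[] { "Black", "Blue", "Silver" });
            }
        }

        // usage on a settable property:
        // [TypeConverter(typeof(ThemeConverter))]
        // public string UserTheme { get; set; }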

    Read the article

  • while(1) blocks my recv thread

    - by zp26
    Hello. I have a problem with this code. As you can see, I launch recv on a separate thread so that the main program can continue executing while the thread blocks waiting for data. I would like my program to keep receiving data on the new_sd socket, so I added an infinite loop (the commented code). The problem is that when I add the while(1), my program blocks before recv; without it, the program correctly receives one string, but then stops. Could someone help me make my recv keep waiting for information? Thanks in advance for your help.

        - (IBAction)Chat {
            [NSThread detachNewThreadSelector:@selector(riceviDatiServer)
                                     toTarget:self
                                   withObject:nil];
        }

        - (void)riceviDatiServer {
            NSAutoreleasePool *pool = [[NSAutoreleasePool alloc] init];
            labelRicevuti.text = [[NSString alloc] initWithFormat:@"In attesa di ricevere i dati"];
            char datiRicevuti[500];
            int ricevuti;

            //while(1) {
            ricevuti = recv(new_sd, &datiRicevuti, 500, 0);
            labelRicevuti.text = [[NSString alloc] initWithFormat:@"%s", datiRicevuti];
            //}

            [pool release];
        }
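
    A hedged sketch of the receive loop, keeping the poster's names: the two usual fixes are null-terminating what recv returns (the buffer is not a C string otherwise) and pushing UILabel updates back to the main thread, since touching UIKit from a background thread can hang or corrupt the UI, which matches the freeze described. aggiornaLabel: is a hypothetical helper that just sets labelRicevuti.text.

        - (void)riceviDatiServer {
            NSAutoreleasePool *pool = [[NSAutoreleasePool alloc] init];
            char datiRicevuti[500];
            ssize_t ricevuti;

            // loop until the peer closes the connection (0) or an error occurs (-1)
            while ((ricevuti = recv(new_sd, datiRicevuti, sizeof(datiRicevuti) - 1, 0)) > 0) {
                datiRicevuti[ricevuti] = '\0';              // recv does not null-terminate
                NSString *testo = [NSString stringWithFormat:@"%s", datiRicevuti];
                [self performSelectorOnMainThread:@selector(aggiornaLabel:)
                                       withObject:testo
                                    waitUntilDone:NO];      // UIKit work on the main thread only
            }

            [pool release];
        }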

    Read the article
