Search Results

Search found 28818 results on 1153 pages for 'main loop'.


  • Android custom categories

    - by marian
    Hello, I have a view that serves as the main screen of the application and contains the application's available actions as icon+text pairs (desktop-like). I want to find out programmatically which activities are defined ONLY in my AndroidManifest.xml. Suppose I have:

        <activity android:name="example.mainActivity" android:label="mainActivity">
            <intent-filter>
                <action android:name="android.intent.action.MAIN" />
                <category android:name="android.intent.category.LAUNCHER" />
            </intent-filter>
        </activity>
        <activity android:name="example.activity1" android:label="Activity1">
            <intent-filter>
                <action android:name="android.intent.action.VIEW" />
                <category android:name="example.custom.ACTIVITY" />
            </intent-filter>
        </activity>
        <activity android:name="example.activity2" android:label="Activity2">
            <intent-filter>
                <action android:name="android.intent.action.VIEW" />
                <category android:name="example.custom.ACTIVITY" />
            </intent-filter>
        </activity>

    I want mainActivity to read Activity1 and Activity2 dynamically, so that when I add Activity3, for example, it is picked up automatically. I thought this could be done by defining a custom category, example.custom.ACTIVITY, and then using packageManager.queryIntentActivities(Intent intent, int flags) in mainActivity, but it doesn't seem to be working. I really would like to discover the installed activities in my application dynamically. Do you have any ideas on how to do this? Thank you
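
    A minimal sketch of that queryIntentActivities call, assuming it runs inside an Activity; the category string mirrors the manifest above, and it must match the manifest exactly for the query to return anything:

        // Imports: android.content.Intent, android.content.pm.PackageManager,
        // android.content.pm.ResolveInfo, java.util.List.
        Intent intent = new Intent(Intent.ACTION_VIEW);
        intent.addCategory("example.custom.ACTIVITY");

        PackageManager pm = getPackageManager();
        List<ResolveInfo> matches = pm.queryIntentActivities(intent, 0);
        for (ResolveInfo info : matches) {
            CharSequence label = info.loadLabel(pm);    // "Activity1", "Activity2", ...
            String className = info.activityInfo.name;  // "example.activity1", ...
            // build the icon+text pair for the desktop-like main view here
        }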

  • How to best design a date/geographic proximity query on GAE?

    - by Dane
    Hi all, I'm building a directory for finding athletic tournaments on GAE with web2py and a Flex front end. The user selects a location, a radius, and a maximum date from a set of choices. I have a basic version of this query implemented, but it's inefficient and slow. One way I know I can improve it is by condensing the many individual queries I'm using to assemble the objects into bulk queries. I just learned that was possible. But I'm also thinking about a more extensive redesign that utilizes memcache.

    The main problem is that I can't query the datastore by location, because GAE won't allow numerical comparison operators (<, <=, >=, >) on more than one property in a single query. I'm already using one for date, and I'd need TWO to check both latitude and longitude, so it's a no-go.

    Currently, my algorithm looks like this (a sketch of steps 2-4 follows the list):

    1.) Query by date and select
    2.) Use the destination function from geopy's distance module to find the max and min latitudes and longitudes for the supplied distance
    3.) Loop through the results and remove all with lat/lng outside the max/min
    4.) Loop through again and use the distance function to check the exact distance, because step 2 will include some areas outside the radius. Remove results outside the supplied distance (is this 2/3/4 combination inefficient?)
    5.) Assemble many-to-many lists and attach them to the objects (this is where I need to switch to bulk operations)
    6.) Return to client

    Here's my plan for using memcache... let me know if I'm way out in left field on this, as I have no prior experience with memcache or server caching in general.

    - Keep a list in the cache filled with "geo objects" that represent all my data. These have five properties: latitude, longitude, event_id, event_type (in anticipation of expanding beyond tournaments), and start_date. This list will be sorted by date.
    - Also keep a dict of pointers in the cache which represent the start and end indices in the cache for all the date ranges my app uses (next week, 2 weeks, month, 3 months, 6 months, year, 2 years).
    - Have a scheduled task that updates the pointers daily at 12am.
    - Add new inserts to the cache as well as the datastore; update pointers.

    Using this design, the algorithm would now look like:

    1.) Use pointers to slice off the appropriate chunk of the list based on the supplied date.
    2-4.) Same as the above algorithm, except with geo objects
    5.) Use a bulk operation to select full tournaments using the remaining geo objects' event_ids
    6.) Assemble many-to-manys
    7.) Return to client

    Thoughts on this approach? Many thanks for reading and any advice you can give. -Dane
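
    A minimal sketch of steps 2-4 using geopy's distance module; the result items and their latitude/longitude attributes are stand-ins for however the step-1 entities actually expose them:

        from geopy.distance import distance

        def within_radius(center, radius_miles, results):
            # Step 2: project the radius north/east/south/west to get a bounding box.
            d = distance(miles=radius_miles)
            north = d.destination(center, 0)
            east = d.destination(center, 90)
            south = d.destination(center, 180)
            west = d.destination(center, 270)
            lat_min, lat_max = south.latitude, north.latitude
            lng_min, lng_max = west.longitude, east.longitude

            # Step 3: cheap rectangle filter first.
            box = [r for r in results
                   if lat_min <= r.latitude <= lat_max
                   and lng_min <= r.longitude <= lng_max]

            # Step 4: exact (costlier) distance check on the survivors only.
            return [r for r in box
                    if distance(center, (r.latitude, r.longitude)).miles <= radius_miles]

    The 3-then-4 ordering is sound: the rectangle test is a few comparisons per item, so running the expensive geodesic computation only on the box's corner regions keeps the combination reasonably efficient.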

  • Mimic CALayer shadow properties found in iPhone OS 3.2 for OS 3.1

    - by niblha
    The CALayer shadow properties like shadowOffset, shadowRadius, and shadowColor are not available in iPhone OS versions below 3.2, and I'm wondering how I could mimic that functionality for use with 3.1 and below. I want to use this to add drop shadows to UIViews in a clean way, so that the shadows are drawn at the layer level somehow, and not by drawing them in a view's -(void)drawRect:(CGRect)rect method, which requires shrinking the actual view's frame to accommodate the shadow. (This shrinking approach has been proposed in the other UIView drop-shadow-related questions I found here on SO.) I was thinking a layered approach would be cleaner.

    For example, I tried subclassing CALayer and adding a separate shadow layer as a sublayer, but then that would be drawn on top of whatever was drawn in the drawRect: method of the UIView that had the main layer as its backing layer. I've also tried implementing the subclassed CALayer's drawInContext: something like this:

        - (void)drawInContext:(CGContextRef)ctx {
            // code to draw shadow for a frame the size of the layer's frame
            [super drawInContext:ctx];
        }

    But then the shadow is still clipped to the current clipping bounding box of the context, which seems to be the layer's own frame. I also had some idea of redirecting the drawing of the main layer to a sublayer, which would be placed above another sublayer that had the shadow drawn onto it. Then I would probably get rid of the clipping, and the shadow would be farthest away. But I couldn't really wrap my head around how I would do that, and it doesn't really feel like a clean approach. Any ideas on how to go about this?

    Just to make clear how my UIView drop shadow question is different from the other ones I found here on SO: I do not want to shrink the actual drawing frame of a UIView to accommodate a shadow. I want the shadow to somehow be on a separate layer in the background, without being clipped.

  • Array variable initialization error in Java

    - by trinity
    Hello, I am trying to write a Java program that reads an input file consisting of URLs, extracts tokens from these, and keeps track of how many times each token appears in the file. I've written the following code:

        import java.io.*;
        import java.net.*;

        public class Main {
            static class Tokens {
                String name;
                int count;
            }

            public static void main(String[] args) {
                String url_str, host;
                String htokens[];
                URL url;
                boolean found = false;
                Tokens t[];
                int i, j, k;
                try {
                    File f = new File("urlfile.txt");
                    FileReader fr = new FileReader(f);
                    BufferedReader br = new BufferedReader(fr);
                    while ((url_str = br.readLine()) != null) {
                        url = new URL(url_str);
                        host = url.getHost();
                        htokens = host.split("\\.|\\-|\\_|\\~|[0-9]");
                        for (i = 0; i < htokens.length; i++) {
                            if (!htokens[i].isEmpty()) {
                                for (j = 0; j < t.length; j++) {
                                    if (htokens[i].equals(t[j].name)) {
                                        t[j].count++;
                                        found = true;
                                    }
                                }
                                if (!found) {
                                    k = t.length;
                                    t[k].name = htokens[i];
                                    t[k].count = 1;
                                }
                            }
                        }
                        System.out.println(t.length + " class tokens:");
                        for (i = 0; i < t.length; i++) {
                            System.out.println("name: " + t[i].name + " frequency: " + t[i].count);
                        }
                    }
                    br.close();
                    fr.close();
                } catch (Exception e) {
                    System.out.println(e);
                }
            }
        }

    But when I run it, it says "variable t not initialized". What should I do to set it right?
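
    Beyond the missing initialization, a Java array cannot grow, so writing to t[t.length] would be out of bounds even after initializing it. A map keyed by token is a simpler fit for this counting job; a minimal sketch (not the asker's code, and without the Tokens class):

        import java.io.*;
        import java.net.*;
        import java.util.*;

        public class TokenCounter {
            public static void main(String[] args) throws IOException {
                Map<String, Integer> counts = new HashMap<String, Integer>();
                BufferedReader br = new BufferedReader(new FileReader("urlfile.txt"));
                String line;
                while ((line = br.readLine()) != null) {
                    String host = new URL(line).getHost();
                    for (String token : host.split("\\.|\\-|\\_|\\~|[0-9]")) {
                        if (!token.isEmpty()) {
                            Integer c = counts.get(token);        // null on first sight
                            counts.put(token, c == null ? 1 : c + 1);
                        }
                    }
                }
                br.close();
                for (Map.Entry<String, Integer> e : counts.entrySet()) {
                    System.out.println("name: " + e.getKey() + " frequency: " + e.getValue());
                }
            }
        }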

  • Why is the compiler not selecting my function-template overload in the following example?

    - by Steve Guidi
    Given the following function templates:

        #include <vector>
        #include <utility>

        struct Base { };
        struct Derived : Base { };

        // #1
        template <typename T1, typename T2>
        void f(const T1& a, const T2& b) { }

        // #2
        template <typename T1, typename T2>
        void f(const std::vector<std::pair<T1, T2> >& v, Base* p) { }

    Why is it that the following code always invokes overload #1 instead of overload #2?

        int main() {
            std::vector<std::pair<int, int> > v;
            Derived derived;
            f(100, 200);    // clearly calls overload #1
            f(v, &derived); // always calls overload #1
        }

    Given that the second argument to f is a pointer to a type derived from Base, I was hoping the compiler would choose overload #2, as it is a better match than the generic types in overload #1. Are there any techniques I could use to rewrite these functions so that the user can write code as displayed in the main function (i.e., leveraging compiler deduction of argument types)?
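
    The cause: for f(v, &derived), overload #1 deduces T2 = Derived* and matches both arguments exactly, while overload #2 needs a Derived*-to-Base* pointer conversion, and an exact match beats a conversion. One workaround is to deduce the pointer type in #2 as well and constrain it; a minimal sketch (C++11 traits shown, the same shape works with boost::enable_if and boost::is_base_of on era-appropriate compilers):

        #include <type_traits>
        #include <vector>
        #include <utility>

        struct Base {};
        struct Derived : Base {};

        // #1
        template <typename T1, typename T2>
        void f(const T1& a, const T2& b) {}

        // #2: T3* deduces exactly (no derived-to-base conversion), and when both
        // overloads tie on conversions, partial ordering prefers this one as the
        // more specialized template.
        template <typename T1, typename T2, typename T3>
        typename std::enable_if<std::is_base_of<Base, T3>::value>::type
        f(const std::vector<std::pair<T1, T2> >& v, T3* p) {}

        int main() {
            std::vector<std::pair<int, int> > v;
            Derived derived;
            f(100, 200);    // overload #1
            f(v, &derived); // overload #2
        }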

  • Get/open temp file in .NET

    - by acidzombie24
    I would like to do something like the code below. What function returns me a unique file that is already open, so I can be sure it is mine and I won't overwrite anything, without having to write a complex filename generate/check loop?

        BinaryWriter w = GetTempFile(out fn); // the function I'm looking for
        w.Close();
        File.Move(fn, newFn);
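
    A minimal sketch of one way to get that guarantee with the standard library: Path.GetTempFileName() atomically creates a uniquely named, empty file on disk and returns its full path, so no retry loop is needed (the file is created but not held open, hence the separate OpenWrite):

        using System.IO;

        class TempFileDemo
        {
            static void Main()
            {
                // Unique, zero-byte file created in the system temp directory.
                string fn = Path.GetTempFileName();

                using (var w = new BinaryWriter(File.OpenWrite(fn)))
                {
                    w.Write(42); // write payload here
                }

                File.Move(fn, "output.bin"); // the question's File.Move(fn, newFn)
            }
        }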

  • Python - Execute Process -> Block till it exits & Suppress Output

    - by Jason
    Hi, I'm using the following to execute a process and hide its output from Python. It's in a loop, though, and I need a way to block until the subprocess has terminated before moving to the next iteration.

        subprocess.Popen(["scanx", "--udp", host],
                         stdin=subprocess.PIPE,
                         stdout=subprocess.PIPE,
                         stderr=subprocess.PIPE)

    Thanks for any help.
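
    A minimal sketch ("host" as in the question): communicate() both consumes the pipes and blocks until the child exits, whereas a bare wait() can deadlock if the child fills a pipe buffer that nothing is reading.

        import subprocess

        p = subprocess.Popen(["scanx", "--udp", host],
                             stdin=subprocess.PIPE,
                             stdout=subprocess.PIPE,
                             stderr=subprocess.PIPE)
        out, err = p.communicate()  # blocks until scanx terminates
        status = p.returncode       # exit status, if the loop needs it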

  • Flexible-length struct array inside another struct using C

    - by renz
    Hi, I am trying to use C to implement a simple struct: two boxes, each containing a different number of particles; the exact numbers of particles are passed in main(). I wrote the following code:

        typedef struct Particle {
            float x;
            float y;
            float vx;
            float vy;
        } Particle;

        typedef struct Box {
            Particle p[];
        } Box;

        void make_box(Box *box, int number_of_particles);

        int main() {
            Box b1, b2;
            make_box(&b1, 5);  // create a box containing 5 particles
            make_box(&b2, 10); // create a box containing 10 particles
        }

    I've tried to implement make_box with the following code:

        void make_box(struct Box *box, int no_of_particles) {
            Particle po[no_of_particles];
            po[0].x = 1;
            po[1].x = 2; // so on and so forth...
            box->p = po;
        }

    It's always giving me "invalid use of flexible array member". I'd greatly appreciate it if someone could shed some light on this.
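
    A minimal sketch of the usual pattern for this: a flexible array member (C99) cannot be assigned to and cannot live in a stack variable like "Box b1"; its storage must be allocated together with the struct in one malloc sized for the struct plus the elements, so the boxes become pointers:

        #include <stdlib.h>

        typedef struct Particle {
            float x, y, vx, vy;
        } Particle;

        typedef struct Box {
            int n;        /* remember how many particles follow */
            Particle p[]; /* flexible array member: must be last */
        } Box;

        Box *make_box(int n) {
            /* one allocation covers the header and all n particles */
            Box *box = malloc(sizeof(Box) + n * sizeof(Particle));
            if (box == NULL) return NULL;
            box->n = n;
            for (int i = 0; i < n; i++)
                box->p[i].x = (float)(i + 1); /* 1, 2, ... as in the question */
            return box;
        }

        int main(void) {
            Box *b1 = make_box(5);  /* box containing 5 particles */
            Box *b2 = make_box(10); /* box containing 10 particles */
            free(b1);
            free(b2);
        }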

  • Need help implementing simple socket server using GIOService (GLib, Glib-GIO)

    - by Mark Renouf
    I'm learning the basics of writing a simple, efficient socket server using GLib. I'm experimenting with GSocketService. So far I can only seem to accept connections, but then they are immediately closed. From the docs I can't figure out what step I am missing. I'm hoping someone can shed some light on this for me.

    When running the following (the same result on every attempt):

        # telnet localhost 4000
        Trying 127.0.0.1...
        Connected to localhost.
        Escape character is '^]'.
        Connection closed by foreign host.

    Output from the server:

        # ./server
        New Connection from 127.0.0.1:36962
        New Connection from 127.0.0.1:36963
        New Connection from 127.0.0.1:36965

    Current code:

        /*
         * server.c
         *
         * Created on: Mar 10, 2010
         * Author: mark
         */
        #include <glib.h>
        #include <gio/gio.h>

        gchar *buffer;

        gboolean network_read(GIOChannel *source, GIOCondition cond, gpointer data)
        {
            GString *s = g_string_new(NULL);
            GError *error;
            GIOStatus ret = g_io_channel_read_line_string(source, s, NULL, &error);
            if (ret == G_IO_STATUS_ERROR)
                g_error("Error reading: %s\n", error->message);
            else
                g_print("Got: %s\n", s->str);
        }

        gboolean new_connection(GSocketService *service, GSocketConnection *connection,
                                GObject *source_object, gpointer user_data)
        {
            GSocketAddress *sockaddr = g_socket_connection_get_remote_address(connection, NULL);
            GInetAddress *addr = g_inet_socket_address_get_address(G_INET_SOCKET_ADDRESS(sockaddr));
            guint16 port = g_inet_socket_address_get_port(G_INET_SOCKET_ADDRESS(sockaddr));

            g_print("New Connection from %s:%d\n", g_inet_address_to_string(addr), port);

            GSocket *socket = g_socket_connection_get_socket(connection);
            gint fd = g_socket_get_fd(socket);
            GIOChannel *channel = g_io_channel_unix_new(fd);
            g_io_add_watch(channel, G_IO_IN, (GIOFunc) network_read, NULL);
            return TRUE;
        }

        int main(int argc, char **argv)
        {
            g_type_init();

            GSocketService *service = g_socket_service_new();
            GInetAddress *address = g_inet_address_new_from_string("127.0.0.1");
            GSocketAddress *socket_address = g_inet_socket_address_new(address, 4000);
            g_socket_listener_add_address(G_SOCKET_LISTENER(service), socket_address,
                                          G_SOCKET_TYPE_STREAM, G_SOCKET_PROTOCOL_TCP,
                                          NULL, NULL, NULL);
            g_object_unref(socket_address);
            g_object_unref(address);
            g_socket_service_start(service);
            g_signal_connect(service, "incoming", G_CALLBACK(new_connection), NULL);

            GMainLoop *loop = g_main_loop_new(NULL, FALSE);
            g_main_loop_run(loop);
        }
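
    A likely culprit, offered as an assumption based on GLib's reference-counting rules for the "incoming" signal: the GSocketConnection is owned by the service and is unreferenced (and therefore closed) as soon as the handler returns, so the handler must take its own reference to keep the socket alive. A minimal sketch of the change:

        gboolean new_connection(GSocketService *service, GSocketConnection *connection,
                                GObject *source_object, gpointer user_data)
        {
            /* Hold the connection beyond the callback, otherwise it is
             * dropped (and the socket closed) when this handler returns. */
            g_object_ref(connection);

            /* ... set up the GIOChannel watch as before, and arrange to
             * g_object_unref(connection) when the channel is done ... */
            return TRUE;
        }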

  • Zooming in an NSView

    - by jasonbogd
    Hi, I have an NSView in which the user can draw circles. These circles are stored as an array of NSBezierPaths, and in drawRect:, I loop through the array and call -stroke on each of the paths. How do I add a button to zoom the NSView in and out? Do I just change the bounds of the view? Thanks.
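
    One option worth sketching: NSView's scaleUnitSquareToSize: multiplies the view's current unit scale, so the stored NSBezierPaths draw larger or smaller without being modified. A sketch, assuming button targets in the window controller and a "circleView" outlet (hypothetical name) for the drawing view:

        - (IBAction)zoomIn:(id)sender {
            [circleView scaleUnitSquareToSize:NSMakeSize(1.25, 1.25)];
            [circleView setNeedsDisplay:YES];
        }

        - (IBAction)zoomOut:(id)sender {
            [circleView scaleUnitSquareToSize:NSMakeSize(0.8, 0.8)];
            [circleView setNeedsDisplay:YES];
        }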

  • Amazon ELB CNAME record not working

    - by BarthQuay
    Hi there, I have set up my EC2 infrastructure behind an ELB instance, and by using the ELB's DNS name everything works as expected. Now I wanted to forward a subdomain of my main project domain to the ELB's DNS name with a CNAME entry. I did this about 12 hours ago and it doesn't seem to work, and I don't know why. The subdomain just can't be resolved. This is the DNS entry, which was processed by my DNS provider without errors yesterday:

        @         IN A     111.111.111.111
        localhost IN A     127.0.0.1
        mail      IN A     111.111.111.111
        www       IN A     111.111.111.111
        ftp       IN CNAME www
        beta      IN CNAME myelbnamehere.eu-west-1.elb.amazonaws.com
        imap      IN CNAME www
        loopback  IN CNAME localhost
        pop       IN CNAME www
        relay     IN CNAME www
        smtp      IN CNAME www
        @         IN MX 10 mail

    Using nslookup, all the subdomains and the main domain get looked up correctly, but beta.domain.com doesn't. I get "** server can't find beta.domain.com: NXDOMAIN". What am I doing wrong? Do I need to wait longer? When I use the ELB DNS name directly, everything works as expected. When I do an nslookup against my provider's DNS server, the CNAME gets resolved, but it looks like no other DNS server can find the subdomain. Thanks in advance
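
    One classic zone-file gotcha worth ruling out first: a CNAME target without a trailing dot is treated as relative to the zone origin, so the ELB name may need to be written as "myelbnamehere.eu-west-1.elb.amazonaws.com." to be fully qualified. To compare what the authoritative server hands out with what public resolvers see (the nameserver hostname here is a placeholder):

        nslookup beta.domain.com ns1.yourprovider.example
        nslookup beta.domain.com 8.8.8.8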

  • Mochiweb's Scalability Features

    - by ErJab
    From all the articles I've read so far about Mochiweb, I've heard over and over again that Mochiweb provides very good scalability. My question is: how exactly does Mochiweb get its scalability property? Is it from Erlang's inherent scalability properties, or does Mochiweb have additional code that explicitly enables it to scale well? Put another way, if I were to write a simple HTTP server in Erlang myself, with a simple 'loop' (recursive function) to handle requests, would it have the same level of scalability as a simple web server built using the Mochiweb framework?
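
    For reference, a minimal sketch of the kind of "simple loop" server the question describes. Much of the scalability story is already visible here: each connection gets its own lightweight Erlang process on top of the VM's non-blocking I/O, which is Erlang's model rather than anything framework-specific (what a framework like Mochiweb layers on top is HTTP parsing, keep-alive handling, and production tuning):

        -module(simple_http).
        -export([start/1]).

        start(Port) ->
            {ok, Listen} = gen_tcp:listen(Port, [binary, {active, false},
                                                 {reuseaddr, true}]),
            accept(Listen).

        accept(Listen) ->
            {ok, Socket} = gen_tcp:accept(Listen),
            Pid = spawn(fun() -> handle(Socket) end),  %% one process per client
            gen_tcp:controlling_process(Socket, Pid),  %% hand the socket over
            accept(Listen).

        handle(Socket) ->
            {ok, _Req} = gen_tcp:recv(Socket, 0),
            gen_tcp:send(Socket,
                         <<"HTTP/1.0 200 OK\r\nContent-Length: 2\r\n\r\nok">>),
            gen_tcp:close(Socket).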

  • Searching for a flexible way to specify the connection string used by ASP.NET membership

    - by bzamfir
    Hi, I am developing an ASP.NET web application and I use the ASP.NET membership provider. The application uses the membership schema and all required objects inside the main application database.

    However, during development I have to switch between various databases for different development and testing scenarios. For this I have an external connection-strings section file and also an external appSettings section, which allow me to switch the DB easily by changing a setting only in the appSettings section, without touching the main web.config. My files are as below:

        <connectionStrings configSource="connections.config">
        </connectionStrings>
        <appSettings file="local.config">
        ....

    connections.config looks as usual:

        <connectionStrings>
            <add name="MyDB1" connectionString="..." ... />
            <add name="MyDB2" connectionString="..." ... />
            ....
        </connectionStrings>

    And local.config as below:

        <appSettings>
            <add key="ConnectionString" value="MyDB2" />

    My code takes this into account and uses the selected connection string. But the membership settings in web.config contain the connection string name directly in the setting, like:

        <add name="MembershipProvider" connectionStringName="MyDB2" ...>
        ....
        <add name="RoleProvider" connectionStringName="MyDB2" ...>

    Because of this, every time I switch I also have to edit them to use the new DB. Is there any way to configure the membership provider to use an appSetting to select the DB connection for the membership DB? Or to "redirect" it to read the connection setting from somewhere else, or at least have it in some external file (like local.config)? Maybe there is an easy way to wrap the ASP.NET membership provider in my own provider, which would just read the connection string from where I want, pass it to the original membership provider, and then delegate the whole membership functionality to it. Thanks.
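
    A minimal sketch of that wrapper idea: subclass the SQL provider and rewrite connectionStringName during Initialize, reading the target from the appSettings key shown above. Everything else is inherited, so no delegation code is needed (the web.config provider entry's type attribute would then point at this class; the same one-line override works for SqlRoleProvider):

        using System.Collections.Specialized;
        using System.Configuration;
        using System.Web.Security;

        public class ConfigurableMembershipProvider : SqlMembershipProvider
        {
            public override void Initialize(string name, NameValueCollection config)
            {
                // Swap in the connection string name chosen in local.config
                // before the base provider reads its configuration.
                config["connectionStringName"] =
                    ConfigurationManager.AppSettings["ConnectionString"];
                base.Initialize(name, config);
            }
        }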

  • Why does ICEfaces send dispose-window request on page unload when using view-scoped bean?

    - by woflrevo
    In our application, ICEfaces always sends a dispose-window request just before navigating to another JSF page. As far as I understand, this should not happen when org.icefaces.lazyWindowScope is set to true and no window-scoped bean is involved in the current request. But it happens on each link and makes our UI less responsive, even though we don't have any window-scoped bean in our application.

    Is this a bug in ICEfaces, that the dispose request is sent when using view-scoped beans? Is it possible to disable? ViewScope is defined in JSF, not in ICEfaces; it should work without this dispose request, I guess...

        @ManagedBean(name="viewScopeBean")
        @ViewScoped
        public class ViewScopeBean {
            public void doSomething() {
                //
            }
        }

    And here the example JSF:

        <ice:form>
            <ice:commandButton value="doSomething" action="#{viewScopeBean.doSomething}"/>
            <h:link outcome="index" value="Link to same page"/>
        </ice:form>

    To reproduce, do the following using the code above:

    1. Open Firebug's Net tab and activate the persist option
    2. Click the doSomething button
    3. Click "Link to same page": the dispose-window request will be sent before navigation

    Dispose request parameters:

        ice.submit.type=ice.dispose.window
        ice.window=4guthcbue
        javax.faces.ViewState=-8138151632882151449%3A-6709064564386098402

    Environment:

        ICEfaces-EE 2.0.0.GA
        ICEpush-EE 2.0.0.GA
        Mojarra 2.1.1
        JRockit 1.6.0_22
        WebLogic Server 10.3.4.0

    ICEfaces configuration:

        org.icefaces.render.auto: true [default]
        org.icefaces.autoid: true [default]
        org.icefaces.aria.enabled: true [default]
        org.icefaces.blockUIOnSubmit: false [default]
        org.icefaces.compressDOM: false [default]
        org.icefaces.compressResources: true [default]
        org.icefaces.connectionLostRedirectURI: /pages/main.jsf
        org.icefaces.deltaSubmit: false [default]
        org.icefaces.lazyPush: true [default]
        org.icefaces.sessionExpiredRedirectURI: /pages/main.jsf
        org.icefaces.standardFormSerialization: false [default]
        org.icefaces.strictSessionTimeout: false [default]
        org.icefaces.windowScopeExpiration: 1000 [default]
        org.icefaces.mandatoryResourceConfiguration: null [default]
        org.icefaces.uniqueResourceURLs: true [default]
        org.icefaces.lazyWindowScope: true [default]
        org.icefaces.disableDefaultErrorPopups: false [default]

  • Progress Bar design patterns?

    - by shoosh
    The application I'm writing performs a lengthy algorithm which usually takes a few minutes to finish. During this time I'd like to show the user a progress bar which indicates how much of the algorithm is done, as precisely as possible. The algorithm is divided into several steps, each with its own typical timing. For instance:

    - initialization (500 milli-sec)
    - reading inputs (5 sec)
    - step 1 (30 sec)
    - step 2 (3 minutes)
    - writing outputs (7 sec)
    - shutting down (10 milli-sec)

    Each step can report its progress quite easily by setting the range it's working on, say [0 to 150], and then reporting the value it completed in its main loop. What I currently have set up is a scheme of nested progress monitors which form a sort of implicit tree of progress reporting.

    All progress monitors inherit from an interface IProgressMonitor:

        class IProgressMonitor {
        public:
            virtual void setRange(int from, int to) = 0;
            virtual void setValue(int v) = 0;
        };

    The root of the tree is the progress monitor which is connected to the actual GUI:

        class GUIBarProgressMonitor : public IProgressMonitor {
            GUIBarProgressMonitor(ProgressBarWidget *);
        };

    Any other node in the tree is a monitor which takes control of a piece of the parent's progress:

        class SubProgressMonitor : public IProgressMonitor {
            SubProgressMonitor(IProgressMonitor *parent, int parentFrom, int parentLength)
            ...
        };

    A SubProgressMonitor takes control of the range [parentFrom, parentFrom+parentLength] of its parent.

    With this scheme I am able to statically divide the top-level progress according to the expected relative portion of each step in the global timing, and each step can then be further subdivided into pieces, etc. The main disadvantage is that the division is static, and it gets painful to make changes according to variables which are discovered at run time.

    So the question: are there any known design patterns for progress monitoring which solve this issue?
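
    For concreteness, a minimal sketch of how such a SubProgressMonitor maps its child range onto the parent slice (an assumption, since the question elides the body; C++11 override syntax, and a non-empty range is assumed):

        class SubProgressMonitor : public IProgressMonitor {
        public:
            SubProgressMonitor(IProgressMonitor *parent, int parentFrom, int parentLength)
                : parent_(parent), parentFrom_(parentFrom),
                  parentLength_(parentLength), from_(0), to_(1) {}

            void setRange(int from, int to) override { from_ = from; to_ = to; }

            void setValue(int v) override {
                // Linear interpolation of the child's progress into the
                // parent's slice [parentFrom, parentFrom + parentLength].
                double frac = double(v - from_) / double(to_ - from_);
                parent_->setValue(parentFrom_ + int(frac * parentLength_));
            }

        private:
            IProgressMonitor *parent_;
            int parentFrom_, parentLength_;
            int from_, to_;
        };

    Because the mapping is pure arithmetic, one way to attack the static-division problem is to give the parent a way to re-split its remaining, not-yet-reported range once runtime sizes are known, rather than fixing all slices up front.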

  • C# File IO with Streams - Best Memory Buffer Size

    - by AJ
    Hi, I am writing a small IO library to assist with a larger (hobby) project. Part of this library performs various functions on a file, which is read/written via a FileStream object. On each StreamReader.Read(...) pass, I fire off an event which will be used in the main app to display progress information. The processing that goes on in the loop is varied, but is not too time-consuming (it could just be a simple file copy, for example, or may involve encryption...).

    My main question is: what is the best memory buffer size to use? Thinking about physical disk layouts, I could pick 2 KB, which would cover a CD sector size and is a nice multiple of a 512-byte hard disk sector. Higher up the abstraction tree, you could go for a larger buffer which could read an entire FAT cluster at a time. I realise that with today's PCs I could go for a more memory-hungry option (a couple of MiB, for example), but then I increase the time between UI updates and the user perceives a less responsive app.

    As an aside, I'm eventually hoping to provide a similar interface to files hosted on FTP/HTTP servers (over a local network / fast-ish DSL). What would be the best memory buffer size for those (again, a "best-case" trade-off between perceived responsiveness and performance)?

    Thanks in advance for any ideas, Adam
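
    For reference, a minimal sketch of the loop under discussion, with the buffer size as the single knob to benchmark (4096 here is just a conservative, page-aligned starting point, not a recommendation; the progress callback fires once per pass, so a bigger buffer directly means fewer UI updates):

        using System;
        using System.IO;

        class CopyWithProgress
        {
            public static void Copy(Stream src, Stream dst, Action<long> onProgress)
            {
                byte[] buffer = new byte[4096]; // the size being debated
                long total = 0;
                int read;
                while ((read = src.Read(buffer, 0, buffer.Length)) > 0)
                {
                    dst.Write(buffer, 0, read);
                    total += read;
                    onProgress(total); // progress event, once per pass
                }
            }
        }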

  • Sequence reduction in R

    - by drknexus
    Assume you have a vector like so:

        v <- c(1,1,1,2,2,2,2,1,1,3,3,3,3)

    How can it best be reduced to a data.frame like this?

        v.df <- data.frame(value=c(1,2,1,3), repetitions=c(3,4,2,4))

    In a procedural language I might just iterate through a loop and build the data.frame as I go, but with a large dataset in R such an approach is inefficient. Any advice?
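
    A minimal sketch using base R's run-length encoding, which computes exactly this value/repetitions pairing in one vectorized call:

        v <- c(1,1,1,2,2,2,2,1,1,3,3,3,3)
        r <- rle(v)
        v.df <- data.frame(value = r$values, repetitions = r$lengths)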

  • Database structure - is MySQL the right choice?

    - by Industrial
    Hi everyone, we are currently planning the database structure of a quite complex e-commerce web app that has flexibility as its main cornerstone. Our app features a large amount of data (products), and we have run into a slight headache trying to keep performance high without compromising normalization rules in the database, or leaving our highly beloved flexibility concept behind when integrating product options (also widely known as product attributes or parameters).

    Based on the various references and sources available, we have made up lists of the pros and cons of all the major, well-known database patterns that solve this. After comparing these, we have come up with two final alternatives:

    EAV (Entity-attribute-value model):
    - Pros: The database is used for all sorting.
    - Cons: All related queries include a number of joins between multiple tables in order to complete the collection of data.

    SLOB (Serialized LOB, also known as Facade?):
    - Pros: Very flexible. Keeps the number of necessary joins low compared to an EAV design pattern. Easy to update/add/remove data for each product.
    - Cons: All sorting is done by the application instead of the database. Uses lots of resources (memory?) when big datasets are processed by a large number of users.

    Our main questions:

    1. Which pattern/structure would you use, or maybe even a different solution?
    2. Are there better databases than MySQL available nowadays to accomplish what we want?

    Thanks a lot!

    Reference: http://stackoverflow.com/questions/695752/product-table-many-kinds-of-product-each-product-has-many-parameters
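
    To make the trade-off concrete, a minimal sketch of the two candidate layouts (table and column names are illustrative only):

        -- EAV: one row per product attribute. Sorting/filtering stays in SQL,
        -- but every product fetch joins product -> product_attribute -> attribute.
        CREATE TABLE product   (id INT PRIMARY KEY, name VARCHAR(255));
        CREATE TABLE attribute (id INT PRIMARY KEY, name VARCHAR(255));
        CREATE TABLE product_attribute (
            product_id   INT NOT NULL REFERENCES product(id),
            attribute_id INT NOT NULL REFERENCES attribute(id),
            value        VARCHAR(255)
        );

        -- Serialized LOB: one row per product. Flexible and join-free,
        -- but the options column is opaque to SQL, so sorting on option
        -- values moves into the application.
        CREATE TABLE product_slob (
            id      INT PRIMARY KEY,
            name    VARCHAR(255),
            options TEXT  -- serialized array / JSON of the attributes
        );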

  • Cocoa Bindings and Application Preferences - Crash

    - by iaefai
    Using the document provided by Apple to create an application preferences window that doesn't require any extra code, I seem to have triggered a crash that I cannot trace. While the material from Apple is older, I believe I have the settings pretty much the same as shown here:

    When I run my application (Hcode) and go to the Preferences menu item, it brings up the proper window with the defaults I specified in the bindings, with the exception of "Spaces per tab", which is blank (no idea how to fix this). When the window is closed, the application crashes with a backtrace similar to this:

        (gdb) bt
        #0  0x00007fff800cb1d4 in objc_msgSend_vtable5 ()
        #1  0x00007fff80447cf3 in -[NSMenu _enableItem:] ()
        #2  0x00007fff80447ad8 in -[NSCarbonMenuImpl _carbonUpdateStatusEvent:handlerCallRef:] ()
        #3  0x00007fff8042b3b0 in NSSLMMenuEventHandler ()
        #4  0x00007fff80e06b57 in DispatchEventToHandlers ()
        #5  0x00007fff80e060a6 in SendEventToEventTargetInternal ()
        #6  0x00007fff80e23d85 in SendEventToEventTarget ()
        #7  0x00007fff80e52e61 in SendHICommandEvent ()
        #8  0x00007fff80e66357 in UpdateHICommandStatusWithCachedEvent ()
        #9  0x00007fff80e02a6d in HIApplication::EventHandler ()
        #10 0x00007fff80e06b57 in DispatchEventToHandlers ()
        #11 0x00007fff80e060a6 in SendEventToEventTargetInternal ()
        #12 0x00007fff80e23d85 in SendEventToEventTarget ()
        #13 0x00007fff80e6599b in SendMenuOpening ()
        #14 0x00007fff80e65388 in DrawTheMenu ()
        #15 0x00007fff80e65149 in MenuChanged ()
        #16 0x00007fff80e643d4 in TrackMenuCommon ()
        #17 0x00007fff80e60dbe in MenuSelectCore ()
        #18 0x00007fff80e60596 in _HandleMenuSelection2 ()
        #19 0x00007fff802fc3b9 in _NSHandleCarbonMenuEvent ()
        #20 0x00007fff802cfeda in _DPSNextEvent ()
        #21 0x00007fff802cf379 in -[NSApplication nextEventMatchingMask:untilDate:inMode:dequeue:] ()
        #22 0x00007fff8029505b in -[NSApplication run] ()
        #23 0x00007fff8028dd7c in NSApplicationMain ()
        #24 0x0000000100001cac in main (argc=1, argv=0x7fff5fbff5e0) at /Users/iaefai/Projects/Hcode/Source/main.m:13

    I am at a complete loss as to what the problem is. Is there potentially a better way of doing this?

  • Java Classpath Issues with Webservices(CXF) and Jboss

    - by JohnC
    I am using CXF (which autogenerates my web services from my WSDL via my pom.xml) with JBoss (Eclipse IDE), and I am having some trouble accessing the web service from my web application. I found this resource: http://blog.progs.be/?p=92, but I am having a really hard time getting

        WSDL_LOCATION = cl.getResource("my/progam/pack/wsdl/myService.wsdl");

    to work properly in my code. I have my WSDLs located in src/main/wsdl and have added the following line to the .classpath file:

        <classpathentry kind="src" path="src/main/wsdl"/>

    I also created the folders my, program, pack, wsdl and dropped my WSDLs into that location, so it is accessible. However, the classloader's getResource call always returns null no matter what. When I specify getResource("/wsdl/myService.wsdl") it does not return null, but I believe it looks at the full file path and not what I need (considering part of the URL contains the path to the WSDL file all the way through the JBoss server directory, and includes the WEB-INF dir). Is my .classpath file set up incorrectly, or am I missing something else? If the WSDL location is not correct, it always throws a ClassCastException like so:

        java.lang.ClassCastException: org.apache.cxf.jaxws.ServiceImpl
            at javax.xml.ws.Service.<init>(Service.java:81)
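
    For reference, a sketch of the two lookup rules worth double-checking (the resource path below is illustrative and must match the real folder spelling character for character; MyService is a hypothetical class name):

        // ClassLoader.getResource: the path is absolute within the classpath
        // but written WITHOUT a leading slash.
        URL wsdl = Thread.currentThread().getContextClassLoader()
                         .getResource("my/program/pack/wsdl/myService.wsdl");

        // Class.getResource: a leading slash means the classpath root;
        // without it, the path is relative to the class's own package.
        URL wsdl2 = MyService.class.getResource("/my/program/pack/wsdl/myService.wsdl");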

  • C++: Explicit DLL Loading: First-chance Exception on non-extern "C" functions

    - by Shiftbit
    I am having trouble importing my C++ functions. If I declare them as C functions, I can successfully import them. With explicit loading, if any of the functions are missing the extern "C" decoration, I get the following exception:

        First-chance exception at 0x00000000 in cpp.exe: 0xC0000005: Access violation.

    DLL.h:

        extern "C" __declspec(dllimport) int addC(int a, int b);
        __declspec(dllimport) int addCpp(int a, int b);

    DLL.cpp:

        #include "DLL.h"

        int addC(int a, int b) {
            return a + b;
        }

        int addCpp(int a, int b) {
            return a + b;
        }

    main.cpp:

        #include "../DLL/DLL.h"
        #include <stdio.h>
        #include <windows.h>

        int main() {
            int a = 2;
            int b = 1;

            typedef int (*PFNaddC)(int, int);
            typedef int (*PFNaddCpp)(int, int);

            HMODULE hDLL = LoadLibrary(TEXT("../Debug/DLL.dll"));
            if (hDLL != NULL) {
                PFNaddC pfnAddC = (PFNaddC)GetProcAddress(hDLL, "addC");
                PFNaddCpp pfnAddCpp = (PFNaddCpp)GetProcAddress(hDLL, "addCpp");
                printf("a=%d, b=%d\n", a, b);
                printf("pfnAddC: %d\n", pfnAddC(a, b));
                printf("pfnAddCpp: %d\n", pfnAddCpp(a, b)); // EXCEPTION ON THIS LINE
            }
            getchar();
            return 0;
        }

    How can I import C++ functions for dynamic loading? I have found that the above code works with implicit loading by referencing the *.lib, but I would like to learn about dynamic loading. Thank you all in advance.
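
    The access violation at address 0x00000000 is a call through a null pointer: without extern "C", the compiler exports addCpp under its decorated (mangled) C++ name, so GetProcAddress(hDLL, "addCpp") finds nothing and returns NULL. The options are to export it as extern "C", look it up by the decorated name, or alias it in a .def file; whichever is chosen, the lookup should be checked before calling, as in this sketch:

        PFNaddCpp pfnAddCpp = (PFNaddCpp)GetProcAddress(hDLL, "addCpp");
        if (pfnAddCpp == NULL) {
            // Mangled-name miss lands here instead of crashing at address 0.
            printf("GetProcAddress failed: %lu\n", GetLastError());
        }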

  • Please suggest the MVVM design (interaction model of view models) for the simple scenario discussed below

    - by Jack
    Data layer: I have an Order class as an entity; this Order entity is my model object. Orders can be of different types, say A, B, C and D. The Order class has common properties like Name, Time of creation, etc., and also fields that are not common and depend on the order type.

    View layer: the view contains the following:

    - Main menu
    - ListView

    The main menu contains a drop-down menu button which is used to create an order of the type selected from the drop-down. The drop-down contains the order types (A, B, C and D). There is a different user control for each order type. For example, if the user chooses to create an order of type A, then a view with type-A input fields is popped up. Hence, there are four user controls, one per order type. If the user selects option A from the drop-down, then an Order of type A is created, and so on.

    Below that is the ListView, which contains the list of orders created so far by the user. To edit a particular order, the user double-clicks its ListView row. Based on the order type of the clicked row, the view for that order type opens in edit mode. For example, if the user selects an order of type A from the ListView, then the view for order type A opens in edit mode.

    Please suggest an interaction model for the view models in the scenario discussed above. Please excuse me if the query is very basic, since I am new to MVVM and WPF.
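
    One common shape for this, as a sketch rather than a prescription (all names hypothetical): one view-model class per order type, sharing a base for the common fields, with the main view-model owning the collection the ListView binds to:

        using System;
        using System.Collections.ObjectModel;

        public abstract class OrderViewModel
        {
            public string Name { get; set; }
            public DateTime CreatedAt { get; set; }
        }

        public class OrderAViewModel : OrderViewModel { /* type-A-only fields */ }
        public class OrderBViewModel : OrderViewModel { /* type-B-only fields */ }

        public class MainViewModel
        {
            public ObservableCollection<OrderViewModel> Orders { get; } =
                new ObservableCollection<OrderViewModel>();
        }

    WPF can then select the matching user control by view-model type via typed DataTemplates, so both the create pop-up and the double-click edit path resolve the right view automatically:

        <DataTemplate DataType="{x:Type vm:OrderAViewModel}">
            <views:OrderAView />
        </DataTemplate>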

  • Adjacency list creation, out-of-memory error

    - by p1
    Hello, I am trying to create an adjacency list to store a graph. The implementation works fine while storing 100,000 records. However, when I tried to store around 1 million records, I ran into an OutOfMemoryError:

        Exception in thread "main" java.lang.OutOfMemoryError: Java heap space
            at java.util.Arrays.copyOfRange(Arrays.java:3209)
            at java.lang.String.<init>(String.java:215)
            at java.io.BufferedReader.readLine(BufferedReader.java:331)
            at java.io.BufferedReader.readLine(BufferedReader.java:362)
            at liarliar.main(liarliar.java:39)

    Following is my implementation:

        HashMap<String, ArrayList<String>> adj = new HashMap<String, ArrayList<String>>(num);

        while ((str = in.readLine()) != null) {
            StringTokenizer Tok = new StringTokenizer(str);
            name = (String) Tok.nextElement();
            cnt = Integer.valueOf(Tok.nextToken());
            ArrayList<String> templist = new ArrayList<String>(cnt);
            while (cnt > 0) {
                templist.add(in.readLine());
                cnt--;
            }
            adj.put(name, templist);
        }
        // done creating the adjacency list

    I am wondering if there is a better way to implement the adjacency list. Also, I know the number of nodes right at the beginning, and in the future I flatten the list as I visit nodes. Any suggestions? Thanks
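
    Before restructuring anything: the trace shows readLine allocating a String when the heap ran out, and the default maximum heap on JVMs of that era is small enough that roughly a million stored strings can exhaust it. Raising the limit is the cheapest experiment, and since node names repeat across many adjacency lists, interning them lets the duplicates share one canonical copy each (a sketch, assuming such repetition):

        // Launch with a larger maximum heap:
        //
        //     java -Xmx1024m liarliar
        //
        // And/or store canonical strings so repeated node names share storage:
        templist.add(in.readLine().intern());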

  • Laravel - public layout not needed in every function

    - by fischer
    I have just recently started working with Laravel. Great framework so far! However, I have a question. I am using a layout template like this:

        public $layout = 'layouts.private';

    This is set in my Base_Controller:

        public function __construct() {
            // Styles
            Asset::add('reset', 'css/reset.css');
            Asset::add('main', 'css/main.css');

            // Scripts
            Asset::add('jQuery', 'http://ajax.googleapis.com/ajax/libs/jquery/1.8.1/jquery.min.js');

            // Switch layout template according to the user's auth credentials.
            if (Auth::check()) {
                $this->layout = 'layouts.private';
            } else {
                $this->layout = 'layouts.public';
            }

            parent::__construct();
        }

    However, I now get an error exception when I try to access functions in my different controllers which should not call any view, i.e. when a user is going to log in:

        class Login_Controller extends Base_Controller {

            public $restful = true;

            public function post_index() {
                $user = new User();
                $credentials = array(
                    'username' => Input::get('email'),
                    'password' => Input::get('password')
                );
                if (Auth::attempt($credentials)) {
                } else {
                }
            }
        }

    The error I get is that I have not set the content of the different variables in my public $layout. But since no view is needed in this function, how do I tell Laravel not to include the layout here? The best solution that I myself have come across (don't know if this is a bad way?) is to call unset($this->layout); in post_index()...

    To sum up my question: how do I tell Laravel not to include public $layout in certain functions where a view is not needed? Thanks in advance, fischer
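
    A sketch of one variant of the same idea (an assumption, intended to be equivalent in effect to the unset() already found, just centralized): clear the layout once in the constructor of any controller whose actions never render views, instead of in each action.

        class Login_Controller extends Base_Controller {

            public $restful = true;

            public function __construct()
            {
                parent::__construct();
                $this->layout = null; // view-less controller: no layout to populate
            }

            // ... post_index() as above ...
        }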

  • Fabric deploy problem

    - by alexarsh
    Hi, I'm trying to deploy a Django app with Fabric and get the following error:

        Alexs-MacBook:fabric alex$ fab config:instance=peergw deploy -H <ip> -u <username> -p <password>
        [192.168.2.93] run: cat /etc/issue
        Traceback (most recent call last):
          File "build/bdist.macosx-10.6-universal/egg/fabric/main.py", line 419, in main
          File "/Users/alex/Rabota/server/mx30/scripts/fabric/fab/commands.py", line 37, in deploy
            checkup()
          File "/Users/alex/Rabota/server/mx30/scripts/fabric/fab/commands.py", line 140, in checkup
            if not 'Ubuntu' in run('cat /etc/issue'):
          File "build/bdist.macosx-10.6-universal/egg/fabric/network.py", line 382, in host_prompting_wrapper
          File "build/bdist.macosx-10.6-universal/egg/fabric/operations.py", line 414, in run
          File "build/bdist.macosx-10.6-universal/egg/fabric/network.py", line 65, in __getitem__
          File "build/bdist.macosx-10.6-universal/egg/fabric/network.py", line 140, in connect
          File "build/bdist.macosx-10.6-universal/egg/paramiko/client.py", line 149, in load_system_host_keys
          File "build/bdist.macosx-10.6-universal/egg/paramiko/hostkeys.py", line 154, in load
          File "build/bdist.macosx-10.6-universal/egg/paramiko/hostkeys.py", line 66, in from_line
          File "build/bdist.macosx-10.6-universal/egg/paramiko/rsakey.py", line 61, in __init__
        paramiko.SSHException: Invalid key
        Alexs-MacBook:fabric alex$

    I can't connect to the server via SSH. What can be my problem? Regards, Arshavski Alexander.
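
    A sketch of one workaround, offered as an assumption from the traceback: paramiko is failing while parsing a malformed entry in ~/.ssh/known_hosts (the crash is inside load_system_host_keys), so either delete the bad line from that file or tell Fabric to skip loading host keys:

        # In the fabfile (or pass --disable-known-hosts on the command line):
        from fabric.api import env

        env.disable_known_hosts = True  # skip the known_hosts parsing that crashes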
