Search Results

Search found 435 results on 18 pages for 'mandatory'.

Page 13/18 | < Previous Page | 9 10 11 12 13 14 15 16 17 18  | Next Page >

  • How do I use a dependency on a Perl module installed in a non-standard location?

    - by Kinopiko
    I need to install two Perl modules on a web host. Let's call them A::B and X::Y. X::Y depends on A::B (needs A::B to run). Both of them use Module::Install. I have successfully installed A::B into a non-system location using perl Makefile.PL PREFIX=/non/system/location make; make test; make install Now I want to install X::Y, so I try the same thing perl Makefile.PL PREFIX=/non/system/location The output is $ perl Makefile.PL PREFIX=/non/system/location/ Cannot determine perl version info from lib/X/Y.pm *** Module::AutoInstall version 1.03 *** Checking for Perl dependencies... [Core Features] - Test::More ...loaded. (0.94) - ExtUtils::MakeMaker ...loaded. (6.54 >= 6.11) - File::ShareDir ...loaded. (1.00) - A::B ...missing. ==> Auto-install the 1 mandatory module(s) from CPAN? [y] It can't seem to find A::B in the system, although it is installed, and when it tries to auto-install the module from CPAN, it tries to write it into the system directory (ignoring PREFIX). I have tried using variables like PERL_LIB and LIB on the command line, after PREFIX=..., but nothing I have done seems to work. I can do make and make install successfully, but I can't do make test because of this problem. Any suggestions? I found some advice at http://servers.digitaldaze.com/extensions/perl/modules.html to use an environment variable PERL5LIB, but this also doesn't seem to work: export PERL5LIB=/non/system/location/lib/perl5/ didn't solve the problem.
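
    A minimal shell sketch of the PERL5LIB approach mentioned at the end of the question (the exact subdirectories are an assumption, since PREFIX installs under a perl-version/architecture-specific path): the key point is that the variable has to point at the directory that actually contains A/B.pm before Makefile.PL and make test run.

        # Sketch only, not a verified fix: adjust the 5.x.x / arch parts to
        # whatever "make install" actually created under the PREFIX.
        export PERL5LIB=/non/system/location/lib/perl5:/non/system/location/lib/perl5/site_perl/5.x.x
        perl -MA::B -e 1        # exits silently once A::B is findable
        perl Makefile.PL PREFIX=/non/system/location
        make && make test && make install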

    Read the article

  • Django - Override admin site's login form

    - by TrojanCentaur
    I'm currently trying to override the default form used in Django 1.4 when logging in to the admin site (my site uses an additional 'token' field required for users who opt in to Two Factor Authentication, and is mandatory for site staff). Django's default form does not support what I need. Currently, I've got a file in my templates/ directory called templates/admin/login.html, which seems to be correctly overriding the template used with the one I use throughout the rest of my site. The contents of the file are simply as below: # admin/login.html: {% extends "login.html" %} The actual login form is as below: # login.html: {% load url from future %}<!DOCTYPE html> <html> <head> <title>Please log in</title> </head> <body> <div id="loginform"> <form method="post" action="{% url 'id.views.auth' %}"> {% csrf_token %} <input type="hidden" name="next" value="{{ next }}" /> {{ form.username.label_tag }}<br/> {{ form.username }}<br/> {{ form.password.label_tag }}<br/> {{ form.password }}<br/> {{ form.token.label_tag }}<br/> {{ form.token }}<br/> <input type="submit" value="Log In" /> </form> </div> </body> </html> My issue is that the form provided works perfectly fine when accessed using my normal login URLs, because I supply my own AuthenticationForm as the form to display, but through the Django admin login route Django likes to supply its own form to this template, and thus only the username and password fields render. Is there any way I can make this work, or am I just better off 'hard coding' the HTML fields into the form?
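
    If the admin site itself must use the custom form, a hedged sketch of one hook that exists for this: AdminSite exposes login_form and login_template attributes. The form class and its import path below are placeholders for the asker's own form, which would still need the staff check that Django's AdminAuthenticationForm performs.

        # e.g. in urls.py or wherever the admin is configured -- a sketch, not a drop-in answer
        from django.contrib import admin
        from myproject.forms import TokenAuthenticationForm   # hypothetical path

        admin.autodiscover()
        admin.site.login_form = TokenAuthenticationForm    # used instead of the default admin form
        admin.site.login_template = 'admin/login.html'     # the overriding template above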

    Read the article

  • Resizing video best practices (frame size)

    - by undefined
    I have read the following which is from Best Practices for Encoding Video with the VP6 Codec on the Adobe website here - http://www.adobe.com/devnet/flash/articles/encoding_video_print.html. It is talking about common video ratios (320x240, 640x480) Although these ratios are standard, and should be used to avoid distorting the video, the size of the encoded video is not set in stone. The original web video sizes used heights and widths that were evenly divisible by 16. This was mandatory for many early codecs. Although this is not necessary for modern codecs, you should stick to even heights and widths. What do they mean by 'even heights and widths'. I am thinking about encoding my video at 400x300 to make it slightly bigger, this is still 4x3 format but should I just stick at 320x240 and resize it on the screen? Clearly there are benefits to this in terms of storage size and delivery costs. In some places on my site I want to show the video at 400x300 but in others I want it to play full screen so this is why I am wondering if a larger original size (400x300) will give better results when blown up. Any thoughts?

    Read the article

  • Sudden issues reading uncompressed video using opencv

    - by JohnSavage
    I have been using a particular pipeline to process video using opencv to encode uncompressed video (fourcc = 0), and opencv python bindings to then open and work on these files. This has been working fine for me on OpenCV 2.3.1a on Ubuntu 11.10 until just a few days ago. For some reason it currently is only allowing me to read the first frame of a given file the first time I open that file. Further frames are not read, and once I touch the file once with my program, it then cannot even read the first frame. More detail: I created the uncompressed video files as follows: out_video.open(out_vid_name, 0, // FOURCC = 0 means record raw fps, Size(640, 480)) Again, these videos worked fine for me until about a week ago. Now, when I try to open one of these I get the following message (from what I think is ffmpeg): Processing video.avi Using network protocols without global network initialization. Please use avformat_network_init(), this will become mandatory later. [avi @ 0x29251e0] parser not found for codec rawvideo, packets or times may be invalid. It reads and displays the first frame fine, but then fails to read the next frame. Then, when I try to run my code on the same video, the capture still opens with the same message as above. However, it cannot even read the very first frame. Here is the code to open the capture: self.capture = cv2.VideoCapture(filename) if not self.capture.isOpened() print "Error: could not open capture" sys.exit() Again, this part is passed without any issue, but then the break happens at: success, rgb = self.capture.read() if not success: print "error: could not read frame" return False This part breaks at the second frame on the first run of the video file, and then on the first frame on subsequent runs. I really don't know where to even begin debugging this. Please help!

    Read the article

  • Looking for recommendations on JavaScript libraries in the league of ExtJS and Qooxdoo for serious web

    - by Kabeer
    Hello. I'm looking for a JavaScript library for my web application. The application is very data intensive and has rich form controls (almost windows-like). AJAX will be used liberally. The development platform is ASP.Net (mostly ASP.Net MVC will be used). I cannot pursue ExtJS due to the price/license factor. I checked Qooxdoo but it is very windows-unfriendly. YUI fell short of my needs w.r.t. the form controls it offers. Other libraries like jQuery do not offer rich form controls. So I am looking for recommendations for a library that satisfies most of the following needs: Rich UI controls Solid API for AJAX handling Employs good programming practices for scripting in the frontend (preferably OO but not mandatory) Free. Else has only development cost and not production Windows friendly (or at least not unfriendly) Not monolithic. Should be independent (Not have development & production dependencies) Theming should be easy (preferably wrapped by the library) I am not mentioning other basic needs (like browser compatibility). I hope any popular library will honor those.

    Read the article

  • tabBarController and navigationControllers in landscape mode, episode II

    - by unforgiven
    I have a UITabBarController, and each tab handles a different UIViewController that pushes new controllers onto the stack as needed. In two of these tabs I need, when a specific controller is reached, the ability to rotate the iPhone and visualize a view in landscape mode. After struggling a lot I have found that subclassing UITabBarController to override shouldAutorotateToInterfaceOrientation is mandatory. However, if I simply return YES in the implementation, the following undesirable side effect arises: every controller in every tab is automatically put in landscape mode when rotating the iPhone. Even overriding shouldAutorotateToInterfaceOrientation in each controller to return NO does not work: when the iPhone is rotated, the controller is put in landscape mode. I implemented shouldAutorotateToInterfaceOrientation as follows in the subclassed UITabBarController: - (BOOL)shouldAutorotateToInterfaceOrientation:(UIInterfaceOrientation)interfaceOrientation { if([self selectedIndex] == 0 || [self selectedIndex] == 3) return YES; return NO; } So that only the two tabs I am interested in actually get support for landscape mode. Is there a way to support landscape mode for a specific controller on the stack of a particular tab? I tried, without success, something like - (BOOL)shouldAutorotateToInterfaceOrientation:(UIInterfaceOrientation)interfaceOrientation { if([self selectedIndex] == 0 || [self selectedIndex] == 3) { if ([[self selectedViewController] isKindOfClass: [landscapeModeViewController class]]) return YES; } return NO; } Also, I tried using the delegate method didSelectViewController, without success. Any help is greatly appreciated. Thank you.
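
    One approach that avoids hard-coding tab indices is to delegate the decision to whatever controller is currently visible in the selected tab; a hedged sketch for the subclassed UITabBarController (the pushed controllers return NO by default, and only the landscape-capable one returns YES):

        // Sketch only: forward the question to the controller on top of the
        // selected tab's navigation stack, so each pushed controller decides for itself.
        - (BOOL)shouldAutorotateToInterfaceOrientation:(UIInterfaceOrientation)interfaceOrientation
        {
            UIViewController *visible = self.selectedViewController;
            if ([visible isKindOfClass:[UINavigationController class]]) {
                visible = [(UINavigationController *)visible topViewController];
            }
            return [visible shouldAutorotateToInterfaceOrientation:interfaceOrientation];
        }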

    Read the article

  • Tool for generating flat files from SQL objects dynamically

    - by Fabio Gouw
    Hello! I'm looking for a tool or component that generates flat files given a SQL Server query result (from a stored procedure or a SELECT * over a table or view). This will be a batch process which runs every day, and every day a new file is created. I could use SQL Server Integration Services (DTS), but I have a mandatory requirement: the output file needs to be dynamic. If a new column is added in my query result, the file must have this new column too, without having to modify my SSIS package. If a column is removed, then the flat file will no longer have it. I’ve tried to do this with SSIS, but when I create a new package I need to specify the number of columns. Another requirement is configuring the format of the output, depending on the data type of the column. If it’s a datetime, the format needs to be YYYY-MM-DD. If it’s a float, then I need to use 2 decimal digits, and so on. Does anyone know a tool that does this job? Thanks
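
    For what it's worth, rolling this by hand is fairly small, since ADO.NET exposes the result's column count and types at run time; a hedged sketch (connection string, query, separator and formats are all assumptions, not the asker's real values):

        // Sketch: columns are discovered from the reader, so a column added to
        // or removed from the view changes the file without touching this code.
        using System;
        using System.Data.SqlClient;
        using System.IO;

        class FlatFileExporter
        {
            static void Main()
            {
                using (var conn = new SqlConnection(@"Server=.;Database=MyDb;Integrated Security=true"))
                using (var cmd = new SqlCommand("SELECT * FROM dbo.MyView", conn))
                using (var writer = new StreamWriter("output.txt"))
                {
                    conn.Open();
                    using (var reader = cmd.ExecuteReader())
                    {
                        while (reader.Read())
                        {
                            var fields = new string[reader.FieldCount];
                            for (int i = 0; i < reader.FieldCount; i++)
                            {
                                object value = reader.GetValue(i);
                                if (value is DateTime)
                                    fields[i] = ((DateTime)value).ToString("yyyy-MM-dd");
                                else if (value is double || value is float || value is decimal)
                                    fields[i] = Convert.ToDecimal(value).ToString("0.00");
                                else
                                    fields[i] = Convert.ToString(value);
                            }
                            writer.WriteLine(string.Join(";", fields));
                        }
                    }
                }
            }
        }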

    Read the article

  • What is the basic pattern for using (N)Hibernate?

    - by Vilx-
    I'm creating a simple Windows Forms application with NHibernate and I'm a bit confused about how I'm supposed to use it. To quote the manual: ISession (NHibernate.ISession) A single-threaded, short-lived object representing a conversation between the application and the persistent store. Wraps an ADO.NET connection. Factory for ITransaction. Holds a mandatory (first-level) cache of persistent objects, used when navigating the object graph or looking up objects by identifier. Now, suppose I have the following scenario: I have a simple classifier which is an MSSQL table with two columns - ID (auto_increment) and Name (nvarchar). To edit this classifier I create a form which contains a single gridview and two buttons - OK and Cancel. The user can edit the table almost directly in the gridview, and when he hits OK the changes he made are persisted to the DB (or if he hits cancel, nothing happens). Now, I have several questions about how to organize this: What should the lifetime of my ISession be? Should I create a single ISession for my whole application; an ISession for each of my forms (the application is single-threaded MDI); or an ISession for every DB operation/transaction? Does NHibernate offer some kind of built-in dirty tracking or must I do this myself? The manual mentions something like it here and there but does not go into details. How is this done? Is there not a huge overhead? Is it somehow tied with the cache(s) that NHibernate has? What are these caches for? Are they not specific to a single ISession? That is, if I use a separate ISession for every transaction, won't it break the dirty tracking? How does the built-in dirty tracking detect deleted objects?
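
    A hedged sketch of the pattern usually recommended for this kind of form (session-per-use-case rather than one session for the whole application): the first-level cache belongs to the ISession, so dirty tracking only covers entities loaded and saved through that same session. Classifier, sessionFactory, userPressedOk and newRows are stand-ins for the asker's own types and state, not an established API.

        // Sketch: open the session when the edit form opens, keep it for that
        // conversation, and flush inside a transaction when the user hits OK.
        using (ISession session = sessionFactory.OpenSession())
        {
            IList<Classifier> rows = session.CreateCriteria<Classifier>().List<Classifier>();
            // ...bind rows to the grid, let the user edit...
            if (userPressedOk)
            {
                using (ITransaction tx = session.BeginTransaction())
                {
                    // entities loaded through this session are dirty-checked
                    // automatically; brand-new rows still need session.Save(...)
                    foreach (var added in newRows)
                        session.Save(added);
                    tx.Commit();   // flushes tracked changes to the database
                }
            }
            // Cancel: just dispose the session without committing.
        }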

    Read the article

  • Struggling to create correct relationships in MS Access

    - by Yandawl
    http://img714.imageshack.us/img714/7820/croppercapture1.png Basically: an award (course) has many units, which can be either optional or core (mandatory), depending on the award. So for example: the unit 'Advanced Software Engineering' may be a core unit for the award 'Software Engineering BSc' but only an optional unit for the course 'Web Technology BSc'. I've used flags for that purpose. A student is enrolled on an award, so I need to get a complete list of core and optional units (bearing in mind that a student chooses 1 out of many possible optional units). Also, these units have events, e.g., a lecture, workshop or seminar, etc., and those events have sessions or instances of events where students enrolled on that particular unit are required to attend, and those attendances are stored in a separate table to form a register. So I guess I need a hierarchy expanding the tables something like this: Awards - Students - Units - Sessions - Attendances Any help with this would be appreciated... It's blowing my mind and I'm really close to going insane! My tutor didn't spot I'd got it wrong when I showed my original data model to him and it's due next week! Thank you :D
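
    Not a full answer, but a SQL-ish sketch of the junction tables this usually boils down to (all names are invented, Access types only approximated; the designer view is probably the more natural place to build these). The core/optional flag lives on the award-unit link, and the optional units a student actually chose get their own link table.

        CREATE TABLE AwardUnit (           -- which units belong to which award
            AwardID   LONG,
            UnitID    LONG,
            IsCore    YESNO,               -- core for this award, optional otherwise
            CONSTRAINT PK_AwardUnit PRIMARY KEY (AwardID, UnitID)
        );
        CREATE TABLE StudentUnit (         -- optional units a student actually chose
            StudentID LONG,
            UnitID    LONG,
            CONSTRAINT PK_StudentUnit PRIMARY KEY (StudentID, UnitID)
        );
        CREATE TABLE EventSession (        -- an instance of a unit's event (lecture, seminar, ...)
            SessionID   LONG CONSTRAINT PK_EventSession PRIMARY KEY,
            EventID     LONG,
            SessionDate DATETIME
        );
        CREATE TABLE Attendance (          -- the register
            SessionID LONG,
            StudentID LONG,
            Attended  YESNO,
            CONSTRAINT PK_Attendance PRIMARY KEY (SessionID, StudentID)
        );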

    Read the article

  • N-tier architecture and unit tests (using Java)

    - by Alexandre FILLATRE
    Hi there, I'd like to have your expert explanations about an architectural question. Imagine a Spring MVC webapp, with the validation API (JSR 303). So for a request, I have a controller that handles the request, then passes it to the service layer, which passes it to the DAO one. Here's my question: at which layer should the validation occur, and how? My thought is that the controller has to handle basic validation (are mandatory fields empty? Is the field length OK? etc.). Then the service layer can do some trickier stuff that involves other objects. The DAO does no validation at all. BUT, if I want to implement some unit testing (i.e. test layers below the service, not the controllers), I'll end up with unexpected behavior because some validations should have been done in the controller layer. As we don't use it for unit testing, there is a problem. What is the best way to deal with this? I know there is no universal answer, but your personal experience is very welcome. Thanks a lot. Regards.
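
    One thing worth noting is that JSR 303 constraints are not tied to Spring MVC binding, so the "basic" rules can live on the domain/command object and still be exercised below the controller; a hedged sketch (the Customer class and its constraints are invented for illustration):

        // Sketch: the same annotations the controller relies on (via @Valid)
        // can be checked programmatically in a plain unit test.
        import java.util.Set;
        import javax.validation.ConstraintViolation;
        import javax.validation.Validation;
        import javax.validation.Validator;
        import javax.validation.constraints.NotNull;
        import javax.validation.constraints.Size;

        public class CustomerValidationTest {

            static class Customer {                  // illustrative only
                @NotNull @Size(min = 1, max = 50)
                String name;
            }

            private final Validator validator =
                    Validation.buildDefaultValidatorFactory().getValidator();

            public void mandatoryFieldsAreReported() {
                Set<ConstraintViolation<Customer>> violations = validator.validate(new Customer());
                assert !violations.isEmpty();        // name is missing, so a violation is expected
            }
        }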

    Read the article

  • Losing URI segments when paginating with CodeIgniter

    - by Danny Herran
    I have a /payments interface where the user should be able to filter via price range, bank, and other stuff. Those filters are standard select boxes. When I submit the filter form, all the post data goes to another method called payments/search. That method performs the validation, saves the post values into a session flashdata and redirects the user back to /payments, passing the flashdata name via the URL. So my standard pagination links with no filters are exactly like this: payments/index/20/ payments/index/40/ payments/index/60/ And if you submit the filter form, the returning URL is: payments/index/0/b48c7cbd5489129a337b0a24f830fd93 This works just great. If I change the zero for something else, it paginates just fine. The only issue, however, is that the << 1 2 3 4 page links won't keep the hash after the pagination offset. CodeIgniter is generating the page links ignoring that additional uri segment. My uri_segment config is already set to 3: $config['uri_segment'] = 3; I cannot set the page offset to 4 because that hash may or may not exist. Any ideas on how I can solve this? Is it mandatory for CI to have the offset as the last segment in the uri? Maybe I am trying an incorrect approach, so I am all ears. Thank you folks.
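
    One common workaround, sketched below and not tested against the asker's controller, is to swap the order so the hash comes before the offset: the offset stays the trailing segment, and CodeIgniter appends it to a base_url that already carries the hash. $hash would be the flashdata key read from the URI, with a fixed placeholder such as '0' when no filter is active.

        // Sketch: URLs become payments/index/<hash>/<offset>
        $hash = $this->uri->segment(3, '0');           // '0' when unfiltered
        $config['base_url']    = site_url('payments/index/' . $hash);
        $config['uri_segment'] = 4;                    // offset now lives in segment 4
        $config['per_page']    = 20;
        $config['total_rows']  = $total_rows;
        $this->pagination->initialize($config);
        $links = $this->pagination->create_links();    // every page link keeps the hash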

    Read the article

  • JSF: how to temporarily disable validators to save a draft

    - by Swiety
    I have a pretty complex form with lots of inputs and validators. It takes the user a pretty long time (even over an hour) to complete it, so they would like to be able to save the draft data, even if it violates rules like mandatory fields not being filled in. I believe this problem is common to many web applications, but I can't find any well-recognised pattern for how it should be implemented. Can you please advise how to achieve that? For now I can see the following options: use immediate=true on the "Save draft" button - doesn't work, as the UI data would not be stored on the bean, so I wouldn't be able to access it. Technically I could find the data in the UI component tree, but traversing that doesn't seem to be a good idea. remove all the field validation from the page and validate the data programmatically in the action listener defined for the form - again, not a good idea; the form is really complex and there are plenty of fields, so validation implemented this way would be very messy. implement my own validators, which would be controlled by some request attribute that would be set for a standard form submission (with full validation expected) and unset for a "save as draft" submission (when validation should be skipped) - again, not a good solution, as I would need to provide my own wrappers for all the validators I am using. But as you can see, none of them is really reasonable. Is there really no simple solution to the problem?
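
    For the third option, the wrapper boilerplate can sometimes be avoided by putting the check in one place; a hedged sketch of a base class the real validators could extend, bailing out when the request carries a draft flag set by the "Save draft" button. The parameter name and the shared-base-class idea are assumptions, not something taken from the question.

        // Sketch: skip hard validation when the request was submitted as a draft.
        public abstract class DraftAwareValidator implements javax.faces.validator.Validator {

            public void validate(javax.faces.context.FacesContext context,
                                 javax.faces.component.UIComponent component,
                                 Object value) throws javax.faces.validator.ValidatorException {
                boolean draft = context.getExternalContext()
                                       .getRequestParameterMap()
                                       .containsKey("saveDraft");   // hypothetical flag set by the draft button
                if (draft) {
                    return;                  // draft save: accept whatever was typed
                }
                validateStrictly(context, component, value);
            }

            protected abstract void validateStrictly(javax.faces.context.FacesContext context,
                                                     javax.faces.component.UIComponent component,
                                                     Object value);
        }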

    Read the article

  • ~1 second TcpListener Pending()/AcceptTcpClient() lag

    - by cpf
    Probably just watch this video: http://screencast.com/t/OWE1OWVkO As you can see, there is a noticeable delay between a connection being initiated (via telnet or Firefox) and my program first getting word of it. Here's the code that waits for the connection: public IDLServer(System.Net.IPAddress addr,int port) { Listener = new TcpListener(addr, port); Listener.Server.NoDelay = true;//I added this just for testing, it has no impact Listener.Start(); ConnectionThread = new Thread(ConnectionListener); ConnectionThread.Start(); } private void ConnectionListener() { while (Running) { while (Listener.Pending() == false) { System.Threading.Thread.Sleep(1); }//this is the part with the lag Console.WriteLine("Client available");//from this point on everything runs perfectly fast TcpClient cl = Listener.AcceptTcpClient(); Thread proct = new Thread(new ParameterizedThreadStart(InstanceHandler)); proct.Start(cl); } } (I was having some trouble getting the code into a code block) I've tried a couple of different things; could it be that I'm using TcpClient/Listener instead of a raw Socket object? It's not a mandatory TCP overhead, I know, and I've tried running everything in the same thread, etc.
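
    Whether or not it explains the full second of latency, the Pending()/Sleep(1) polling loop is not needed, since AcceptTcpClient() blocks until a client connects; a hedged rewrite of the listener loop using the same members as in the question:

        private void ConnectionListener()
        {
            while (Running)
            {
                // Blocks until a connection arrives - no busy wait, no Sleep granularity.
                TcpClient cl = Listener.AcceptTcpClient();
                Console.WriteLine("Client available");
                Thread proct = new Thread(new ParameterizedThreadStart(InstanceHandler));
                proct.Start(cl);
            }
        }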

    Read the article

  • How to ignore the Ivy revision number?

    - by user315228
    Guys, I have certain jar files without a revision number. But as rev is a mandatory attribute for an Ivy dependency, I am providing the revision attribute. I have something like (-[revision]) in the URL resolver, but it's taking the module name instead of ignoring the revision attribute. I know it won't ignore the revision attribute as it's not null. Following is the output that I get: default-cache: no cached resolved revision for perltools#perltools;latest.integration [ivy:retrieve] tried [ivy:retrieve] listing all in [ivy:retrieve] using privateRepo to list all in [ivy:retrieve] ApacheURLLister found URL=[httP://myrepo/ivyRepository/perltools/jars/perltools.jar]. [ivy:retrieve] found 1 resources [ivy:retrieve] found revs: [perltools.jar] [ivy:retrieve] HTTP response status: 404 url=httP://myrepo/ivyRepository/perltools/jars/perltools.jar/perltools-perltools.jar.jar [ivy:retrieve] CLIENT ERROR: Not Found url=httP://myrepo/ivyRepository/perltools/jars/perltools.jar/perltools-perltools.jar.jar Can somebody please explain why it's taking module.ext as the revision when the revision I specified is latest.integration, and my repo doesn't have a revision attribute; it just has [http://myrepo/ivyRepository/perltools/jars//perltools.jar]. Can somebody please help me so that I can avoid the revision attribute?

    Read the article

  • Python - Checking for membership inside nested dict

    - by victorhooi
    heya, This is a followup question to this one: http://stackoverflow.com/questions/2901422/python-dictreader-skipping-rows-with-missing-columns Turns out I was being silly, and using the wrong ID field. I'm using Python 3.x here. I have a dict of employees, indexed by a string, "directory_id". Each value is a nested dict with employee attributes (phone number, surname etc.). One of these values is a secondary ID, say "internal_id", and another is their manager, call it "manager_internal_id". The "internal_id" field is non-mandatory, and not every employee has one. (I've simplified the fields a little, both to make it easier to read, and also for privacy/compliance reasons). The issue here is that we index (key) each employee by their directory_id, but when we look up their manager, we need to find managers by their "internal_id". Before, when employee.keys() was a list of internal_ids, I was using a membership check on this. Now, the last part of my if statement won't work, since the internal_ids are part of the dict values, instead of the key itself. def lookup_supervisor(manager_internal_id, employees): if manager_internal_id is not None and manager_internal_id != "" and manager_internal_id in employees.keys(): return (employees[manager_internal_id]['mail'], employees[manager_internal_id]['givenName'], employees[manager_internal_id]['sn']) else: return ('Supervisor Not Found', 'Supervisor Not Found', 'Supervisor Not Found') So the first question is, how do I check whether the manager_internal_id is present in the dict's values? I've tried substituting employee.keys() with employee.values(), but that didn't work. Also, I'm hoping for something a little more efficient, not sure if there's a way to get a subset of the values, specifically, all the entries for employees[directory_id]['internal_id']. Hopefully there's some Pythonic way of doing this, without using a massive heap of nested for/if loops. My second question is, how do I then cleanly return the required employee attributes (mail, givenname, surname etc.)? My for loop is iterating over each employee, and calling lookup_supervisor. I'm feeling a bit stupid/stumped here. def tidy_data(employees): for directory_id, data in employees.items(): # We really shouldn't be passing employees back and forth like this - hmm, classes? data['SupervisorEmail'], data['SupervisorFirstName'], data['SupervisorSurname'] = lookup_supervisor(data['manager_internal_id'], employees) Thanks in advance =), Victor
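
    For the first part, a hedged sketch of one way to avoid scanning the values on every lookup: build a one-off index keyed by internal_id (field names copied from the question; treating None and empty strings the same way is an assumption handled by the truthiness check):

        def build_internal_index(employees):
            """Map internal_id -> employee record, skipping employees without one."""
            return {
                data['internal_id']: data
                for data in employees.values()
                if data.get('internal_id')          # drops None and ''
            }

        def lookup_supervisor(manager_internal_id, by_internal_id):
            manager = by_internal_id.get(manager_internal_id)
            if manager is None:
                return ('Supervisor Not Found',) * 3
            return (manager['mail'], manager['givenName'], manager['sn'])

        # usage inside tidy_data:
        #   by_internal_id = build_internal_index(employees)
        #   ... = lookup_supervisor(data['manager_internal_id'], by_internal_id)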

    Read the article

  • Maven - 'all' or 'parent' project for aggregation?

    - by disown
    For educational purposes I have set up a project layout like so (flat in order to suit Eclipse better): -product | |-parent |-core |-opt |-all Parent contains an aggregate project with core, opt and all. Core implements the mandatory part of the application. Opt is an optional part. All is supposed to combine core with opt, and has these two modules listed as dependencies. I am now trying to make the following artifacts: product-core.jar product-core-src.jar product-core-with-dependencies.jar product-opt.jar product-opt-src.jar product-opt-with-dependencies.jar product-all.jar product-all-src.jar product-all-with-dependencies.jar Most of them are fairly straightforward to produce. I do have some problems with the aggregating artifacts, though. I have managed to make the product-all-src.jar with a custom assembly descriptor in the 'all' module which downloads the sources for all non-transitive deps, and this works fine. This technique also allows me to make the product-all-with-dependencies.jar. However, I recently found out that you can use the source:aggregate goal in the source plugin to aggregate sources of the entire aggregate project. This is also true for the javadoc plugin, which also aggregates through the usage of the parent project. So I am torn between my 'all' module approach and ditching the 'all' module to just use the 'parent' module for all aggregation. It feels unclean to have some aggregate artifacts produced in 'parent', and others produced in 'all'. Is there a way of making a 'product-all' jar in the parent project, or to aggregate javadoc in the 'all' project? Or should I just keep both? Thanks

    Read the article

  • Why should I override hashCode() when I override equals() method?

    - by Bragaadeesh
    Ok, I have heard from many places and sources that whenever I override the equals() method, I need to override the hashCode() method as well. But consider the following piece of code: package test; public class MyCustomObject { int intVal1; int intVal2; public MyCustomObject(int val1, int val2){ intVal1 = val1; intVal2 = val2; } public boolean equals(Object obj){ return (((MyCustomObject)obj).intVal1 == this.intVal1) && (((MyCustomObject)obj).intVal2 == this.intVal2); } public static void main(String a[]){ MyCustomObject m1 = new MyCustomObject(3,5); MyCustomObject m2 = new MyCustomObject(3,5); MyCustomObject m3 = new MyCustomObject(4,5); System.out.println(m1.equals(m2)); System.out.println(m1.equals(m3)); } } Here the output is true, false, exactly the way I want it to be, and I don't care about overriding the hashCode() method at all. This means that overriding hashCode() is an option rather than a mandatory one, as everyone says. I'd like a second opinion.
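
    The contract only bites once instances end up in hash-based collections, which the main method above never exercises; a hedged illustration (the hash formula shown is just one common choice):

        import java.util.HashSet;
        import java.util.Set;

        public class HashContractDemo {
            public static void main(String[] args) {
                Set<MyCustomObject> set = new HashSet<MyCustomObject>();
                set.add(new MyCustomObject(3, 5));
                set.add(new MyCustomObject(3, 5));
                // Without hashCode(): prints 2, because the two "equal" objects
                // land in different buckets. With a matching hashCode(): prints 1.
                System.out.println(set.size());
            }
        }

        // A matching override for MyCustomObject, e.g.:
        //     @Override
        //     public int hashCode() {
        //         return 31 * intVal1 + intVal2;
        //     }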

    Read the article

  • setting cookies

    - by aharon
    Okay, so I'm trying to set cookies using Ruby. I'm in a Rack environment. response[name]=value will add an HTTP header into the HTTP headers hash rack has. I know that it works. The following method doesn't work: def set_cookie(opts={}) args = { :name => nil, :value => nil, :expires => Time.now+314, :path => '/', :domain => Cambium.uri #contains the IP address of the dev server this is running on }.merge(opts) raise ArgumentError, ":name and :value are mandatory" if args[:name].nil? or args[:value].nil? response['Set-Cookie']="#{args[:name]}=#{args[:value]}; expires=#{args[:expires].clone.gmtime.strftime("%a, %d-%b-%Y %H:%M:%S GMT")}; path=#{args[:path]}; domain=#{args[:domain]}" end Why not? And how can I solve it? Thanks.
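
    If the response object is (or can be wrapped in) a Rack::Response, it already knows how to build the Set-Cookie header, including the expiry formatting and multiple cookies per response, which a plain hash assignment overwrites; a hedged sketch reusing the same args hash:

        # Sketch: Rack::Response#set_cookie does the Set-Cookie bookkeeping.
        # Note the domain: browsers are picky about cookies scoped to a bare IP
        # address, which may be part of why the cookie never sticks.
        response.set_cookie(args[:name],
                            :value   => args[:value],
                            :expires => args[:expires],
                            :path    => args[:path],
                            :domain  => args[:domain])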

    Read the article

  • What is the benefit of using ONLY OpenID authentication on a site?

    - by Peter
    From my experience with OpenID, I see a number of significant downsides: Adds a Single Point of Failure to the site It is not a failure that can be fixed by the site even if detected. If the OpenID provider is down for three days, what recourse does the site have to allow its users to log in and access the information they own? Takes a user to another site's content every time they log on to your site Even if the OpenID provider does not have an error, the user is redirected to their site to log in. The login page has content and links. So there is a chance a user will actually be drawn away from the site to go down the Internet rabbit hole. Why would I want to send my users to another company's website? [ Note: my provider no longer does this and seems to have fixed this problem (for now).] Adds a non-trivial amount of time to the signup To sign up with the site a new user is forced to read a new standard, choose a provider, and sign up. Standards are something that the technical people should agree to in order to make a user experience frictionless. They are not something that should be thrust on the users. It is a Phisher's Dream OpenID is incredibly insecure and stealing the person's ID as they log in is trivially easy. [ taken from David Arno's Answer below ] For all of the downsides, the one upside is to allow users to have fewer logins on the Internet. If a site has opt-in for OpenID then users who want that feature can use it. What I would like to understand is: What benefit does a site get for making OpenID mandatory?

    Read the article

  • Ant: use include and exclude together

    - by Ken
    OK, this seems like it should be really simple. I'm using Apache Ant 1.8, and I have a target which does: <delete file="output/program.tar.bz2"/> <tar basedir="input" destfile="output/program.tar.bz2" compression="bzip2"> <tarfileset dir="input"> <include name="goodfolder1/**"/> <include name="goodfolder2/**"/> <exclude name="**/badfile"/> <exclude name="**/*.badext"/> </tarfileset> </tar> I want it to make a .tar.bz2 of input/goodfolder1 and input/goodfolder2, excluding files named "badfile", and excluding files with extension ".badext". It's giving me a .tar.bz2, but it's including badfile and *.badext -- the excludes seem to be ignored. The order of include/exclude doesn't seem to make a difference. I tried wrapping the includes/excludes in a <patternset> (the docs say it's implicit?), but it made no difference. I'm sure there's something simple I'm missing, since the manual has a very similar example, though in a somewhat different context. EDIT: It looks like it could be related to the dir="input" attribute: it's adding everything in "input", and then adding everything in the tarfileset to that. Files I want appear twice in the program.tar.bz2, but files that are excluded only appear once. But dir is mandatory, and I don't see how this is different from the examples in the manual.
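
    For reference, one thing that would explain both symptoms (ignored excludes and files appearing twice) is that basedir on <tar> acts as its own implicit, unfiltered fileset in addition to the nested tarfileset; a hedged variant with basedir dropped (it is the tarfileset's dir attribute that is required, not basedir on <tar>):

        <delete file="output/program.tar.bz2"/>
        <tar destfile="output/program.tar.bz2" compression="bzip2">
            <tarfileset dir="input">
                <include name="goodfolder1/**"/>
                <include name="goodfolder2/**"/>
                <exclude name="**/badfile"/>
                <exclude name="**/*.badext"/>
            </tarfileset>
        </tar>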

    Read the article

  • Serializable object in intent returning as String

    - by B_
    In my application, I am trying to pass a serializable object through an intent to another activity. The intent is not entirely created by me, it is created and passed through a search suggestion. In the content provider for the search suggestion, the object is created and placed in the SUGGEST_COLUMN_INTENT_EXTRA_DATA column of the MatrixCursor. However, when in the receiving activity I call getIntent().getSerializableExtra(SearchManager.EXTRA_DATA_KEY), the returned object is of type String and I cannot cast it into the original object class. I tried making a parcelable wrapper for my object that calls out.writeSerializable(...) and use that instead but the same thing happened. The string that is returned is like a generic Object toString(), i.e. com.foo.yak.MyAwesomeClass@4350058, so I'm assuming that toString() is being called somewhere where I have no control. Hopefully I'm just missing something simple. Thanks for the help! Edit: Some of my code This is in the content provider that acts as the search authority: //These are the search suggestion columns private static final String[] COLUMNS = { "_id", // mandatory column SearchManager.SUGGEST_COLUMN_TEXT_1, SearchManager.SUGGEST_COLUMN_INTENT_EXTRA_DATA }; //This places the serializable or parcelable object (and other info) into the search suggestion private Cursor getSuggestions(String query, String[] projection) { List<Widget> widgets = WidgetLoader.getMatches(query); MatrixCursor cursor = new MatrixCursor(COLUMNS); for (Widget w : widgets) { cursor.addRow(new Object[] { w.id w.name w.data //This is the MyAwesomeClass object I'm trying to pass }); } return cursor; } This is in the activity that receives the search suggestion: public void onCreate(Bundle savedInstanceState) { super.onCreate(savedInstanceState); Object extra = getIntent().getSerializableExtra(SearchManager.EXTRA_DATA_KEY); //extra.getClass() returns String, when it should return MyAwesomeClass, so this next line throws a ClassCastException and causes a crash MyAwesomeClass mac = (MyAwesomeClass)extra; ... }

    Read the article

  • PowerShell function arguments: Can the first one be optional first?

    - by Johannes Rössel
    I have an advanced function in PowerShell, which roughly looks like this: function Foo { [CmdletBinding] param ( [int] $a = 42, [int] $b ) } The idea is that it can be run with either two, one or no arguments. However, the first argument to become optional is the first one. So the following scenarios are possible to run the function: Foo a b # the normal case, both a and b are defined Foo b # a is omitted Foo # both a and b are omitted However, normally PowerShell tries to fit the single argument into a. So I thought about specifying the argument positions explicitly, where a would have position 0 and b position 1. However, to allow for only b specified I tried putting a into a parameter set. But then b would need a different position depending on the currently-used parameter set. Any ideas how to solve this properly? I'd like to retain the parameter names (which aren't a and b actually), so using $args is probably a last resort. I probably could define two parameter sets, one with two mandatory parameters and one with a single optional one, but I guess the parameter names have to be different in that case, then, right?
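
    A low-tech sketch that keeps both parameter names and avoids parameter sets entirely: accept the positional binding PowerShell wants to do, then reshuffle inside the function when only one value arrived (real names and defaults would replace a, b and 42). Note the limitation in the comment: an explicit Foo -a 7 is reshuffled too, so this only fits if "a alone" is never a meaningful call.

        function Foo {
            [CmdletBinding()]
            param (
                [int] $a = 42,
                [int] $b
            )
            # A single positional argument binds to $a; treat it as $b instead.
            # (PSBoundParameters cannot distinguish positional from named, so an
            # explicit -a without -b takes this branch as well.)
            if ($PSBoundParameters.ContainsKey('a') -and -not $PSBoundParameters.ContainsKey('b')) {
                $b = $a
                $a = 42
            }
            "a=$a b=$b"
        }

        Foo 1 2     # a=1  b=2
        Foo 7       # a=42 b=7
        Foo         # a=42 b=0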

    Read the article

  • On the search for my next great .Net Read

    - by user127954
    Just got done with "The art of unit testing". It was a great read and I think everyone should go buy a copy. With that said, I think the next book I'd like to read would be an architecture/design type book that would focus heavily on building your objects/software in such a way that it would be: Low Coupling High Cohesion Easily Maintainable / Extended Easy to test Easy to Navigate / Debug The above characteristics are the most important ones, but maybe it would also include (but not necessarily) designing for: Performance - I don't want to design a system and at the end find out it's dog slow :) Scalability - Again, I don't want to design something and at the end find out it won't scale. I'd also prefer (but again not necessary): Something newer - Architectural principles seem to gradually evolve / improve over time and I'd like something with current thinking. .Net as the illustrating language - like I said above, it's not mandatory, but since it's what I use every day I'd prefer it to be in .Net. It doesn't really matter if it's in VB.Net or C#. Some of the topics it would talk about are how to minimize dependencies and using interfaces throughout your solution rather than concrete classes. Maybe it would contrast/compare some of the newest design principles like DDD, the Repository Pattern, etc. I already have "Clean Code" (don't know if it's this type of book or not) and "Working effectively with legacy code" on my radar, but I'd like to read a book based upon the topics I talked about above first. Is there such a book?

    Read the article

  • Symfony2 entity field type alternatives to "property" or "__toString()"?

    - by Polmonino
    Using the Symfony2 entity field type one should specify the property option: $builder->add('customers', 'entity', array( 'multiple' => true, 'class' => 'AcmeHelloBundle:Customer', 'property' => 'first', )); But sometimes this is not sufficient: think about two customers with the same name, where displaying the (unique) email would be mandatory. Another possibility is to implement __toString() in the model: class Customer { public $first, $last, $email; public function __toString() { return sprintf('%s %s (%s)', $this->first, $this->last, $this->email); } } The disadvantage of the latter is that you are forced to display the entity the same way in all your forms. Is there any other way to make this more flexible? I mean something like a callback function: $builder->add('customers', 'entity', array( 'multiple' => true, 'class' => 'AcmeHelloBundle:Customer', 'property' => function($data) { return sprintf('%s %s (%s)', $data->first, $data->last, $data->email); }, ));
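
    A middle ground that works with the entity type as-is (sketch only; the getter name is invented): 'property' is read through a property path, so it can point at a purpose-built getter instead of a mapped field, leaving __toString() free and letting each form pick its own label.

        // In the entity -- an illustrative getter, not a mapped column.
        class Customer
        {
            public $first, $last, $email;

            public function getChoiceLabel()
            {
                return sprintf('%s %s (%s)', $this->first, $this->last, $this->email);
            }
        }

        // In the form type:
        $builder->add('customers', 'entity', array(
            'multiple' => true,
            'class'    => 'AcmeHelloBundle:Customer',
            'property' => 'choiceLabel',   // resolved via getChoiceLabel()
        ));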

    Read the article

  • web service slowdown

    - by user238591
    Hi, I have a web service slowdown. My (web) service is in gSOAP & managed C++. It's not IIS/Apache hosted, but it speaks XML. My client is in .NET. The service computation time is light (<0.1s to prepare a reply). I expect the service to be smooth, fast and have good availability. I have about 100 clients, and a 1s response time is mandatory. Clients make about 1 request per minute. Clients check for web service presence with a TCP open-port test. So, to avoid possible congestion, I turned gSOAP KeepAlive to false. Up to there everything runs fine: I barely see connections in TCPView (Sysinternals). A new special synchronisation program now calls the service in a loop. It's a higher load, but everything is processed in less than 30 seconds. With Sysinternals TCPView, I see that about a thousand connections are in TIME_WAIT. They slow down the service, and it now takes seconds for the service to reply. Could it be that I need to reset the SoapHttpClientProtocol connection? Has anyone seen TIME_WAIT ghosts with a web service call in a loop?

    Read the article
