Search Results

Search found 8463 results on 339 pages for 'bad learner'.

Page 41/339 | < Previous Page | 37 38 39 40 41 42 43 44 45 46 47 48  | Next Page >

  • Java error: "Bad version number in .class file" when trying to run Cassandra on OS X

    - by Sam Lee
    I am trying to get Cassandra to work on OS X. When I run bin/cassandra, I get the following error:

    ```
    ~/apache-cassandra-incubating-0.4.1-src > bin/cassandra -f
    Listening for transport dt_socket at address: 8888
    Exception in thread "main" java.lang.UnsupportedClassVersionError: Bad version number in .class file
        at java.lang.ClassLoader.defineClass1(Native Method)
        at java.lang.ClassLoader.defineClass(ClassLoader.java:675)
        at java.security.SecureClassLoader.defineClass(SecureClassLoader.java:124)
        at java.net.URLClassLoader.defineClass(URLClassLoader.java:260)
        at java.net.URLClassLoader.access$100(URLClassLoader.java:56)
        at java.net.URLClassLoader$1.run(URLClassLoader.java:195)
        at java.security.AccessController.doPrivileged(Native Method)
        at java.net.URLClassLoader.findClass(URLClassLoader.java:188)
        at java.lang.ClassLoader.loadClass(ClassLoader.java:316)
        at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:280)
        at java.lang.ClassLoader.loadClass(ClassLoader.java:251)
        at java.lang.ClassLoader.loadClassInternal(ClassLoader.java:374)
    ```

    From what I could determine by searching, this error is related to incompatible versions of Java. However, as far as I can tell, I have the latest version of Java:

    ```
    ~/apache-cassandra-incubating-0.4.1-src > java -version
    java version "1.6.0_13"
    Java(TM) SE Runtime Environment (build 1.6.0_13-b03-211)
    Java HotSpot(TM) 64-Bit Server VM (build 11.3-b02-83, mixed mode)
    ~/apache-cassandra-incubating-0.4.1-src > javac -version
    javac 1.6.0_13
    ~/Downloads/apache-cassandra-incubating-0.4.1-src > echo $JAVA_HOME
    /System/Library/Frameworks/JavaVM.framework/Versions/1.6/Home
    ```

    Any ideas on what I'm doing wrong?
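
    UnsupportedClassVersionError means some .class file on the classpath was compiled for a newer JVM than the one loading it, so a useful first step is to inspect the version stamped into the classes of each bundled jar. A minimal diagnostic sketch (the path argument is a placeholder, not taken from the question):

    ```java
    import java.io.DataInputStream;
    import java.io.FileInputStream;
    import java.io.IOException;

    // Prints the version recorded in a compiled .class file's header.
    // Major 48 = Java 1.4, 49 = Java 5, 50 = Java 6.
    public class ClassVersion {
        public static void main(String[] args) throws IOException {
            try (DataInputStream in = new DataInputStream(new FileInputStream(args[0]))) {
                int magic = in.readInt();           // always 0xCAFEBABE for a class file
                int minor = in.readUnsignedShort(); // minor version comes first in the header
                int major = in.readUnsignedShort();
                System.out.printf("magic=%x major=%d minor=%d%n", magic, major, minor);
            }
        }
    }
    ```

    Running this against a class extracted from each jar Cassandra bundles would show whether one of them was built for a JVM newer than the one actually launching bin/cassandra.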

    Read the article

  • Two objects that depend on each other. Is that bad?

    - by Kasper Grubbe
    Hi SO. I am learning a lot about design patterns these days, and I want to ask you about a design question that I can't find an answer to. Currently I am building a little chat server using sockets, with multiple clients. I have three classes:

    - Person, which holds information like nick, age and a Room object.
    - Room, which holds information like room name, topic and a list of Persons currently in that room.
    - Hotel, which has a list of Persons and a list of Rooms on the server.

    I have made a diagram to illustrate it (sorry for the big size!): http://i.imgur.com/Kpq6V.png

    I keep a list of persons on the server in the Hotel class because it would be nice to keep track of how many are online right now (without having to iterate through all of the rooms). The Persons live in the Hotel class because I would like to be able to search for a specific Person without searching the rooms. Is this bad design? Is there another way of achieving it? Thanks.
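
    One common way to keep such mutual references consistent is to let a single owner update both ends of the association in one method. A hedged sketch of that idea (class and method names are illustrative, not from the question):

    ```java
    import java.util.ArrayList;
    import java.util.HashSet;
    import java.util.List;
    import java.util.Set;

    class Room {
        final String name;
        final Set<Person> occupants = new HashSet<>();
        Room(String name) { this.name = name; }
    }

    class Person {
        final String nick;
        Room room; // the room this person is currently in; may be null
        Person(String nick) { this.nick = nick; }
    }

    // The Hotel owns both collections, so counting or finding people online
    // never requires scanning rooms, and moving a person updates both sides
    // of the Person/Room association in one place.
    class Hotel {
        final List<Person> online = new ArrayList<>();
        final List<Room> rooms = new ArrayList<>();

        void move(Person p, Room target) {
            if (p.room != null) {
                p.room.occupants.remove(p); // detach from the old room
            }
            p.room = target;
            target.occupants.add(p);        // attach to the new room
        }
    }
    ```

    Mutual references are not bad in themselves; the usual pitfall is updating one side and forgetting the other, which funneling every change through one method avoids.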

    Read the article

  • Is it bad practice to make a setter return "this"?

    - by Ken Liu
    Is it a good or bad idea to make setters in Java return "this"?

    ```java
    public Employee setName(String name) {
        this.name = name;
        return this;
    }
    ```

    This pattern can be useful because then you can chain setters like this:

    ```java
    list.add(new Employee().setName("Jack Sparrow").setId(1).setFoo("bacon!"));
    ```

    instead of this:

    ```java
    Employee e = new Employee();
    e.setName("Jack Sparrow");
    // ...and so on...
    list.add(e);
    ```

    ...but it sort of goes against standard convention. I suppose it might be worthwhile just because it can make that setter do something else useful. I've seen this pattern used in some places (e.g. JMock, JPA), but it seems uncommon and generally only used for very well defined APIs where the pattern is applied everywhere. Update: what I've described is obviously valid, but what I am really looking for is some thoughts on whether this is generally acceptable, and whether there are any pitfalls or related best practices. I know about the Builder pattern, but it is a little more involved than what I am describing - as Josh Bloch describes it, there is an associated static Builder class for object creation.
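
    For contrast, the Builder pattern mentioned in the update looks roughly like this - a minimal sketch of Bloch's idiom, reusing the question's Employee for illustration (fields beyond name and id are omitted):

    ```java
    public class Employee {
        private final String name;
        private final int id;

        private Employee(Builder b) {
            this.name = b.name;
            this.id = b.id;
        }

        // The static nested Builder accumulates values fluently;
        // build() then creates the immutable Employee in one step.
        public static class Builder {
            private String name;
            private int id;

            public Builder name(String name) { this.name = name; return this; }
            public Builder id(int id)        { this.id = id;     return this; }
            public Employee build()          { return new Employee(this); }
        }
    }

    // usage: list.add(new Employee.Builder().name("Jack Sparrow").id(1).build());
    ```

    The fluent setter mutates an existing object, while the Builder accumulates values and produces an immutable one; that difference, more than the chaining syntax, usually decides which is appropriate.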

    Read the article

  • Is having a lot of DOM elements bad for performance?

    - by rFactor
    Hi, I am making a button that looks like this:

    ```html
    <!-- Container -->
    <div>
        <!-- Top -->
        <div>
            <div></div>
            <div></div>
            <div></div>
        </div>
        <!-- Middle -->
        <div>
            <div></div>
            <div></div>
            <div></div>
        </div>
        <!-- Bottom -->
        <div>
            <div></div>
            <div></div>
            <div></div>
        </div>
    </div>
    ```

    It has many elements because I want it to be skinnable without limiting the skinners' abilities. However, I am concerned about performance. Does having a lot of DOM elements mean bad performance? Obviously there will always be an impact, but how great is it?

    Read the article

  • Why do I get "Bad File Descriptor" when I try to read a file with Perl?

    - by Magicked
    I'm trying to read a binary file 40 bytes at a time, then check whether all those bytes are 0x00, and if so ignore them. If not, it will write them back out to another file (basically just cutting out large blocks of null bytes). This may not be the most efficient way to do this, but I'm not worried about that. However, right now I'm getting a "Bad File Descriptor" error and I cannot figure out why.

    ```perl
    my $comp = "\x00" * 40;
    my $byte_count = 0;

    my $infile  = "/home/magicked/image1";
    my $outfile = "/home/magicked/image1_short";

    open IN,  "<$infile";
    open OUT, ">$outfile";
    binmode IN;
    binmode OUT;

    my ($buf, $data, $n);
    while (read (IN, $buf, 40)) {    ### Problem is here ###
        $boo = 1;
        for ($i = 0; $i < 40; $i++) {
            if ($comp[$i] != $buf[$i]) {
                $i = 40;
                print OUT $buf;
                $byte_count += 40;
            }
        }
    }
    die "Problems! $!\n" if $!;

    close OUT;
    close IN;
    ```

    I marked with a comment where it is breaking. Thanks for any help!

    Read the article

  • Is there anything bad in declaring a static inner class inside an interface in Java?

    - by Roman
    I have an interface ProductService with a method findByCriteria. This method had a long list of nullable parameters, like productName, maxCost, minCost, producer and so on. I refactored this method by introducing a Parameter Object: I created the class SearchCriteria, and now the method signature looks like this:

    ```java
    findByCriteria(SearchCriteria criteria)
    ```

    I thought that instances of SearchCriteria are only created by method callers and are only used inside the findByCriteria method, i.e.:

    ```java
    void processRequest() {
        SearchCriteria criteria = new SearchCriteria()
                .withMaxCost(maxCost)
                // .......
                .withProducer(producer);
        List<Product> products = productService.findByCriteria(criteria);
        // ....
    }
    ```

    and

    ```java
    List<Product> findByCriteria(SearchCriteria criteria) {
        return doSmthAndReturnResult(criteria.getMaxCost(), criteria.getProducer());
    }
    ```

    So I did not want to create a separate public class for SearchCriteria, and put it inside the ProductService interface:

    ```java
    public interface ProductService {
        List<Product> findByCriteria(SearchCriteria criteria);

        static class SearchCriteria {
            // ...
        }
    }
    ```

    Is there anything bad in this interface? Where would you place the SearchCriteria class?
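
    For what it's worth, a nested class declared inside an interface is implicitly public and static, so the static keyword above is redundant (though harmless). A fleshed-out sketch of the layout, with the builder-style withX methods inferred from the question's usage and Product assumed to be defined elsewhere:

    ```java
    import java.util.List;

    public interface ProductService {

        List<Product> findByCriteria(SearchCriteria criteria);

        // Implicitly public static; callers refer to it as
        // ProductService.SearchCriteria, which reads naturally.
        class SearchCriteria {
            private String productName;
            private Integer maxCost;
            private Integer minCost;
            private String producer;

            public SearchCriteria withProductName(String name) { this.productName = name; return this; }
            public SearchCriteria withMaxCost(Integer cost)    { this.maxCost = cost;     return this; }
            public SearchCriteria withMinCost(Integer cost)    { this.minCost = cost;     return this; }
            public SearchCriteria withProducer(String p)       { this.producer = p;       return this; }

            public String getProductName() { return productName; }
            public Integer getMaxCost()    { return maxCost; }
            public Integer getMinCost()    { return minCost; }
            public String getProducer()    { return producer; }
        }
    }
    ```

    The main trade-off is coupling: if SearchCriteria ever needs to be shared by another service, it has to move out of the interface, and every caller's imports move with it.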

    Read the article

  • What's so bad about building XML with string concatenation?

    - by wsanville
    In the thread What’s your favorite “programmer ignorance” pet peeve?, the following answer appears, with a large number of upvotes: "Programmers who build XML using string concatenation." My question is, why is building XML via string concatenation (such as with a StringBuilder in C#) bad? I've done this several times in the past, as it's sometimes the quickest way for me to get from point A to point B when it comes to the data structures/objects I'm working with. So far, I have come up with a few reasons why this isn't the greatest approach, but is there something I'm overlooking? Why should this be avoided?

    - Probably the biggest reason I can think of is that you need to escape your strings manually, and most programmers will forget this. It will work great for them when they test it, but then "randomly" their apps will fail when someone throws an & symbol into their input somewhere. OK, I'll buy this, but it's really easy to prevent the problem (SecurityElement.Escape, to name one way).
    - When I do this, I usually omit the XML declaration (i.e. <?xml version="1.0"?>). Is this harmful?
    - Performance penalties? If you stick with proper string concatenation (i.e. StringBuilder), is this anything to be concerned about? Presumably, a class like XmlWriter will also need to do a bit of string manipulation...
    - There are more elegant ways of generating XML, such as using XmlSerializer to automatically serialize/deserialize your classes. OK, sure, I agree. C# has a ton of useful classes for this, but sometimes I don't want to make a class for something really quick, like writing out a log file or something. Is this just me being lazy? If I am doing something "real", this is my preferred approach for dealing with XML.
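
    To make the escaping pitfall concrete, here is a minimal sketch in Java (the question is about C#, but the hazard is identical): the concatenated version silently produces malformed XML the moment the input contains a bare ampersand, while a streaming writer escapes it automatically.

    ```java
    import java.io.StringWriter;
    import javax.xml.stream.XMLOutputFactory;
    import javax.xml.stream.XMLStreamWriter;

    public class XmlConcat {
        public static void main(String[] args) throws Exception {
            String input = "Tom & Jerry"; // user input containing a bare ampersand

            // Naive concatenation: produces "<name>Tom & Jerry</name>",
            // which is not well-formed XML and will break any conforming parser.
            String naive = "<name>" + input + "</name>";
            System.out.println(naive);

            // A streaming writer escapes character data for free:
            // produces "<name>Tom &amp; Jerry</name>".
            StringWriter out = new StringWriter();
            XMLStreamWriter w = XMLOutputFactory.newInstance().createXMLStreamWriter(out);
            w.writeStartElement("name");
            w.writeCharacters(input);
            w.writeEndElement();
            w.close();
            System.out.println(out);
        }
    }
    ```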

    Read the article

  • Why would 1,000 subforms in a db be a bad idea?

    - by KlaymenDK
    Warm-up: I'm trying to come up with a good way to implement customized document forms. It's for a tool to request access to applications; each application will want to ask its own specific questions. The thing is, we have one kind of (common) user who needs to fill in and submit documents based on templates, and another kind of (super) user who needs to be able to define what each template must contain. One implementation option would be to use a form (with the basic mandatory stuff) and have that form dynamically include a subform appropriate to the specific task at hand. The gist of the matter is that we could (= will!) quite easily end up having many hundreds of different subforms! (NB: these subforms will be maintained in an automated manner, but that is another topic that may be considered outside the scope of this question.) Question: it's common knowledge that having a lot of views in a Notes database is a Bad Thing. But has anyone tried pushing the number of forms or subforms, and can share any experiences regarding performance?

    Read the article

  • How bad is it to embed JavaScript into the body of HTML?

    - by Andrew
    A team that I am working on has gotten into the habit of using <script> tags in random places in the body of our HTML pages. For example:

    ```html
    <html>
    <head></head>
    <body>
        <div id="some-div">
            <script type="text/javascript">//some javascript here</script>
        </div>
    </body>
    </html>
    ```

    I had not seen this before. It seems to work in the few browsers that I've tested. But as far as I know, it's not valid to put script tags in places like this. Am I wrong? How bad is it that we are putting script tags within div tags like this? Are there any browser compatibility issues I should be aware of?

    Read the article

  • Is it a bad practice to use divs for styling purposes?

    - by caisah
    Lately I've seen a lot of discussions about this new concept called OOCSS, and I was wondering if it is a bad practice to wrap your main tags in divs only for styling/page-layout purposes. I'm asking this because I see some frameworks like Twitter Bootstrap use such a method. What are the implications of such markup from a semantic and accessibility point of view? For example:

    ```html
    <div class="container">
        <div class="row">
            <div class="span4">
                <nav class="nav">...</nav>
            </div>
            <div class="span8">
                <a href="#" class="btn btn-large">...</a>
            </div>
        </div>
    </div>
    ```

    instead of

    ```html
    <div class="menu">
        <nav class="nav">...</nav>
        <a href="#" class="bttn">...</a>
    </div>
    ```

    Read the article

  • int foo(type& bar); is a bad practice?

    - by Earlz
    Well, here we are. Yet another proposed practice that my C++ book has an opinion on. It says "a value-returning (non-void) function should not take reference types as a parameter." So basically, if you were to implement a function like this:

    ```cpp
    int read_file(int& into) { ... }
    ```

    and used the integer return value as some sort of error indicator (ignoring the fact that we have exceptions), then that function would be poorly written, and it should actually be like:

    ```cpp
    void read_file(int& into, int& error) { }
    ```

    Now to me, the first one is much clearer and nicer to use. If you want to ignore the error value, you do so with ease. But this book suggests the latter. Note that this book does not say value-returning functions are bad. It rather says that you should either only return a value or you should only use references. What are your thoughts on this? Is my book full of crap? (again)

    Read the article

  • Fulltext search for Django: MySQL not so bad? (vs Sphinx, Xapian)

    - by Eric
    I am studying fulltext search engines for Django. The engine must be simple to install, with fast indexing, fast index updates, no blocking while indexing, and fast search. After reading many web pages, I short-listed MySQL MyISAM fulltext, djapian/python-xapian, and django-sphinx. I did not choose Lucene because it seems complex, nor Haystack, as it has fewer features than djapian/django-sphinx (like field weighting). Then I made some benchmarks. To do so, I collected many free books on the net to generate a database table with 1,485,000 records (id, title, body); each record is about 600 bytes long. From the database, I also generated a list of 100,000 existing words and shuffled them to create a search list. For the tests, I made two runs on my laptop (4 GB RAM, dual core 2.0 GHz): the first one just after a server reboot to clear all caches, the second immediately afterwards, to test how good cached results are. Here are the "home-made" benchmark results:

    ```
    1485000 records with title (150 bytes) and body (450 bytes)

    MySQL 5.0.75 / Ubuntu 9.04 fulltext:
    ==========================================================================
    Full indexing : 7m14.146s
    1 thread, 1000 searches with single word randomly taken from database :
        First run : 0:01:11.553524        next run : 0:00:00.168508

    MySQL 5.5.4-m3 / Ubuntu 9.04 fulltext:
    ==========================================================================
    Full indexing : 6m08.154s
    1 thread, 1000 searches with single word randomly taken from database :
        First run : 0:01:11.553524        next run : 0:00:00.168508
    1 thread, 100000 searches with single word randomly taken from database :
        First run : 9m09s                 next run : 5m38s
    1 thread, 10000 random strings (random strings should not be found in database),
    just after the 100000-search test : 0:00:15.007353
    1 thread, boolean search : 1000 x (+word1 +word2)
        First run : 0:00:21.205404        next run : 0:00:00.145098

    Djapian fulltext:
    ==========================================================================
    Full indexing : 84m7.601s
    1 thread, 1000 searches with single word randomly taken from database with prefetch :
        First run : 0:02:28.085680        next run : 0:00:14.300236

    python-xapian fulltext:
    ==========================================================================
    1 thread, 1000 searches with single word randomly taken from database :
        First run : 0:01:26.402084        next run : 0:00:00.695092

    django-sphinx fulltext:
    ==========================================================================
    Full indexing : 1m25.957s
    1 thread, 1000 searches with single word randomly taken from database :
        First run : 0:01:30.073001        next run : 0:00:05.203294
    1 thread, 100000 searches with single word randomly taken from database :
        First run : 12m48s                next run : 9m45s
    1 thread, 10000 random strings (random strings should not be found in database),
    just after the 100000-search test : 0:00:23.535319
    1 thread, boolean search : 1000 x (word1 word2)
        First run : 0:00:20.856486        next run : 0:00:03.005416
    ```

    As you can see, MySQL is not so bad at all for fulltext search. In addition, its query cache is very efficient. MySQL seems to me a good choice, as there is nothing to install (I need just to write a small script to synchronize an InnoDB production table to a MyISAM search table) and I do not really need advanced search features like stemming, etc. Here is the question: what do you think about the MySQL fulltext search engine vs Sphinx and Xapian?
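
    The boolean searches in the benchmark correspond to MySQL's MATCH ... AGAINST syntax. For reference, a minimal sketch of such a query issued over JDBC (Java here for consistency with this page's other examples; the connection URL, credentials, and table layout are invented, and a MyISAM FULLTEXT index on the matched columns is assumed):

    ```java
    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;

    public class FulltextDemo {
        public static void main(String[] args) throws Exception {
            Connection conn = DriverManager.getConnection(
                    "jdbc:mysql://localhost/books", "user", "password");
            // Assumes: FULLTEXT KEY (title, body) on a MyISAM table named book.
            // '+word1 +word2' in boolean mode requires both words to be present.
            PreparedStatement ps = conn.prepareStatement(
                    "SELECT id, title FROM book " +
                    "WHERE MATCH(title, body) AGAINST (? IN BOOLEAN MODE)");
            ps.setString(1, "+word1 +word2");
            try (ResultSet rs = ps.executeQuery()) {
                while (rs.next()) {
                    System.out.println(rs.getLong("id") + " " + rs.getString("title"));
                }
            }
        }
    }
    ```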

    Read the article

  • Ubuntu on Pandaboard giving me trouble

    - by Jeroen Jacobs
    I'm trying to install the OMAP4 extras for Ubuntu on my Pandaboard. For some reason, a few packages can't seem to agree with each other. This is what I did so far:

    - Installed Ubuntu 11.10 on an SD card
    - Powered on the Pandaboard and let it finish its initial install
    - Did an "apt-get update" and "apt-get upgrade" to install updates

    So far everything went fine, and I was quite happy with my Pandaboard, but then I made the mistake of typing this:

    ```
    apt-get install ubuntu-omap4-extras
    ```

    At first everything seemed OK, and it started downloading and installing. But then, after a while, it just crashed. I tried it again, but then it gave me this:

    ```
    Reading package lists... Done
    Building dependency tree
    Reading state information... Done
    ubuntu-omap4-extras is already the newest version.
    You might want to run 'apt-get -f install' to correct these:
    The following packages have unmet dependencies:
     gstreamer0.10-openmax : Depends: gstreamer0.10-plugins-bad but it is not going to be installed
     gstreamer0.10-plugin-ducati : Depends: gstreamer0.10-plugins-bad but it is not going to be installed
     ubuntu-omap4-extras-multimedia : Depends: gstreamer0.10-plugins-bad (>= 0.10.22-2ubuntu4+ti1.5) but it is not going to be installed
    E: Unmet dependencies. Try 'apt-get -f install' with no packages (or specify a solution).
    ```

    So I tried the suggestion, apt-get -f install:

    ```
    Reading package lists... Done
    Building dependency tree
    Reading state information... Done
    Correcting dependencies... Done
    The following extra packages will be installed:
      gstreamer0.10-plugins-bad
    The following NEW packages will be installed:
      gstreamer0.10-plugins-bad
    0 upgraded, 1 newly installed, 0 to remove and 4 not upgraded.
    88 not fully installed or removed.
    Need to get 0 B/1,794 kB of archives.
    After this operation, 4,571 kB of additional disk space will be used.
    Do you want to continue [Y/n]? y
    (Reading database ... 143575 files and directories currently installed.)
    Unpacking gstreamer0.10-plugins-bad (from .../gstreamer0.10-plugins-bad_0.10.22-2ubuntu4+ti1.5.4.8+1_armel.deb) ...
    dpkg: error processing /var/cache/apt/archives/gstreamer0.10-plugins-bad_0.10.22-2ubuntu4+ti1.5.4.8+1_armel.deb (--unpack):
     trying to overwrite '/usr/lib/libgstbasecamerabinsrc-0.10.so.0.0.0', which is also in package gstreamer0.10-plugins-good 0.10.30-1ubuntu7.1
    dpkg-deb: error: subprocess paste was killed by signal (Broken pipe)
    Errors were encountered while processing:
     /var/cache/apt/archives/gstreamer0.10-plugins-bad_0.10.22-2ubuntu4+ti1.5.4.8+1_armel.deb
    E: Sub-process /usr/bin/dpkg returned an error code (1)
    ```

    Seems like two packages (plugins-good and plugins-bad) are fighting over the same library. Any idea how to fix this?

    Read the article

  • Streaming binary data to a WCF REST service gives Bad Request (400) when content length is greater than 64 kB

    - by Mikey Cee
    I have a WCF service that takes a stream:

    ```csharp
    [ServiceContract]
    public class UploadService : BaseService
    {
        [OperationContract]
        [WebInvoke(BodyStyle = WebMessageBodyStyle.Bare, Method = WebRequestMethods.Http.Post)]
        public void Upload(Stream data)
        {
            // etc.
        }
    }
    ```

    This method exists to let my Silverlight application upload large binary files, the easiest way being to craft the HTTP request by hand from the client. Here is the code in the Silverlight client that does this:

    ```csharp
    const int contentLength = 64 * 1024; // 64 kB

    var request = (HttpWebRequest)WebRequest.Create("http://localhost:8732/UploadService/");
    request.AllowWriteStreamBuffering = false;
    request.Method = WebRequestMethods.Http.Post;
    request.ContentType = "application/octet-stream";
    request.ContentLength = contentLength;

    using (var outputStream = request.GetRequestStream())
    {
        outputStream.Write(new byte[contentLength], 0, contentLength);
        outputStream.Flush();
        using (var response = request.GetResponse());
    }
    ```

    Now, in the case above, where I am streaming 64 kB of data (or less), this works OK, and if I set a breakpoint in my WCF method I can examine the stream and see 64 kB worth of zeros - yay! The problem arises if I send anything more than 64 kB of data, for instance by changing the first line of my client code to the following:

    ```csharp
    const int contentLength = 64 * 1024 + 1; // 64 kB + 1 B
    ```

    This now throws an exception when I call request.GetResponse(): "The remote server returned an error: (400) Bad Request." In my WCF configuration I have set maxReceivedMessageSize, maxBufferSize and maxBufferPoolSize to 2147483647, but to no avail. Here are the relevant sections from my service's app.config:

    ```xml
    <service name="UploadService">
      <endpoint address="" binding="webHttpBinding"
                bindingName="StreamedRequestWebBinding"
                contract="UploadService"
                behaviorConfiguration="webBehavior">
        <identity>
          <dns value="localhost" />
        </identity>
      </endpoint>
      <host>
        <baseAddresses>
          <add baseAddress="http://localhost:8732/UploadService/" />
        </baseAddresses>
      </host>
    </service>
    <bindings>
      <webHttpBinding>
        <binding name="StreamedRequestWebBinding"
                 bypassProxyOnLocal="true"
                 useDefaultWebProxy="false"
                 hostNameComparisonMode="WeakWildcard"
                 sendTimeout="00:05:00"
                 openTimeout="00:05:00"
                 receiveTimeout="00:05:00"
                 maxReceivedMessageSize="2147483647"
                 maxBufferSize="2147483647"
                 maxBufferPoolSize="2147483647"
                 transferMode="StreamedRequest">
          <readerQuotas maxArrayLength="2147483647" maxStringContentLength="2147483647" />
        </binding>
      </webHttpBinding>
    </bindings>
    <behaviors>
      <endpointBehaviors>
        <behavior name="webBehavior">
          <webHttp />
        </behavior>
      </endpointBehaviors>
    </behaviors>
    ```

    How do I make my service accept more than 64 kB of streamed post data?

    Read the article

  • Install 32-bit gstreamer plugins on 64-bit

    - by Rua
    I am trying to install the 32-bit GStreamer plugins on my 64-bit system (Ubuntu 12.10 based). I can install the packages gstreamer0.10-plugins-base:i386 and gstreamer0.10-plugins-good:i386. However, gstreamer0.10-plugins-bad:i386, gstreamer0.10-plugins-bad-multiverse:i386 and gstreamer0.10-plugins-ugly:i386 conflict with 64-bit packages already installed on my system:

    ```
    $ sudo apt-get install gstreamer0.10-plugins-bad:i386
    Reading package lists... Done
    Building dependency tree
    Reading state information... Done
    Some packages could not be installed. This may mean that you have
    requested an impossible situation or if you are using the unstable
    distribution that some required packages have not yet been created
    or been moved out of Incoming.
    The following information may help to resolve the situation:

    The following packages have unmet dependencies:
     gstreamer0.10-plugins-bad:i386 : Depends: libass4:i386 (>= 0.9.7) but it is not going to be installed
                                      Depends: libdca0:i386 but it is not going to be installed
                                      Depends: libdvdnav4:i386 (>= 4.2.0+20120524) but it is not going to be installed
                                      Depends: libdvdread4:i386 but it is not going to be installed
                                      Depends: libslv2-9:i386 (>= 0.6.4-1~) but it is not going to be installed
    E: Unable to correct problems, you have held broken packages.

    $ sudo apt-get install gstreamer0.10-plugins-bad-multiverse:i386
    Reading package lists... Done
    Building dependency tree
    Reading state information... Done
    The following extra packages will be installed:
      libavcodec53:i386 libavutil51:i386 libfaac0:i386 libfaad2:i386 libgsm1:i386
      libmjpegtools-1.9:i386 libmp3lame0:i386 libquicktime2:i386
      libschroedinger-1.0-0:i386 libswscale2:i386 libva1:i386 libvpx1:i386
      libx264-123:i386 libxvidcore4:i386
    The following packages will be REMOVED:
      gstreamer0.10-plugins-bad-multiverse libfaac0 libmjpegtools-1.9 mint-meta-codecs
    The following NEW packages will be installed:
      gstreamer0.10-plugins-bad-multiverse:i386 libavcodec53:i386 libavutil51:i386
      libfaac0:i386 libfaad2:i386 libgsm1:i386 libmjpegtools-1.9:i386
      libmp3lame0:i386 libquicktime2:i386 libschroedinger-1.0-0:i386
      libswscale2:i386 libva1:i386 libvpx1:i386 libx264-123:i386 libxvidcore4:i386
    0 upgraded, 15 newly installed, 4 to remove and 5 not upgraded.
    Need to get 9,198 kB of archives.
    After this operation, 23.3 MB of additional disk space will be used.
    Do you want to continue [Y/n]? ...

    $ sudo apt-get install gstreamer0.10-plugins-ugly:i386
    Reading package lists... Done
    Building dependency tree
    Reading state information... Done
    Some packages could not be installed. This may mean that you have
    requested an impossible situation or if you are using the unstable
    distribution that some required packages have not yet been created
    or been moved out of Incoming.
    The following information may help to resolve the situation:

    The following packages have unmet dependencies:
     gstreamer0.10-plugins-ugly:i386 : Depends: libdvdread4:i386 but it is not going to be installed
    E: Unable to correct problems, you have held broken packages.
    ```

    This means that I can't play mp3s (among other things) in 32-bit applications that use GStreamer. Is there a way around this?

    Read the article

  • Circular database relationships. Good, Bad, Exceptions?

    - by jim
    I have been putting off developing this part of my app for some time, purely because I want to do this in a circular way, but I get the feeling it's a bad idea from what I remember my lecturers telling me back in school. I have a design for an order system; ignoring everything that doesn't pertain to this example, I'm left with three tables: CreditCard, Customer and Order. I want it so that:

    - Customers can have credit cards (0-n)
    - Customers have orders (1-n)
    - Orders have one customer (1-1)
    - Orders have one credit card (1-1)
    - Credit cards have one customer (1-1) (ids are unique, so we can ignore uniqueness of cc numbers; husband/wife may share cc instances, etc.)

    Basically, the last part is where the issue shows up. Sometimes credit cards are declined and the customer wishes to use a different one; this needs to update which card is the 'current' one, but it can only change the current card used for that order, not the other orders the customer may have on disk. Effectively this creates a circular design between the three tables. Possible solutions:

    - Create the circular design, giving references: cc ref to order, customer ref to cc, customer ref to order; or customer ref to cc, customer ref to order.
    - Create a new table that references all three table ids and put a unique constraint on the order, so that only one cc may be current for that order at any time.

    Essentially both model the same design but translate differently. I am liking the latter option best at this point in time, because it seems less circular and more central. (If that even makes sense.) My questions are: what, if any, are the pros and cons of each? What are the pitfalls of circular relationships/dependencies? Is this a valid exception to the rule? Is there any reason I should pick the former over the latter? Thanks, and let me know if there is anything you need clarified/explained.

    Update/Edit: I have noticed an error in the requirements I stated. Basically I dropped the ball when trying to simplify things for SO. There is another table there for Payments, which adds another layer. The catch: orders can have multiple payments, with the possibility of using different credit cards (if you really want to know, even other forms of payment). Stating this here because I think the underlying issue is still the same and this only adds another layer of complexity.

    Read the article

  • Is it a good or bad practice to call instance methods from a Java constructor?

    - by Steve
    There are several different ways I can initialize complex objects (with injected dependencies and required set-up of injected members); all seem reasonable, but they have various advantages and disadvantages. I'll give a concrete example:

    ```java
    final class MyClass {
        private final Dependency dependency;

        @Inject
        public MyClass(Dependency dependency) {
            this.dependency = dependency;
            dependency.addHandler(new Handler() {
                @Override
                void handle(int foo) {
                    MyClass.this.doSomething(foo);
                }
            });
            doSomething(0);
        }

        private void doSomething(int foo) {
            dependency.doSomethingElse(foo + 1);
        }
    }
    ```

    As you can see, the constructor does three things, including calling an instance method. I've been told that calling instance methods from a constructor is unsafe because it circumvents the compiler's checks for uninitialized members; i.e., I could have called doSomething(0) before setting this.dependency, which would have compiled but not worked. What is the best way to refactor this?

    1. Make doSomething static and pass in the dependency explicitly? In my actual case I have three instance methods and three member fields that all depend on one another, so this seems like a lot of extra boilerplate to make all three of these static.
    2. Move the addHandler and doSomething calls into an @Inject public void init() method. While use with Guice will be transparent, any manual construction must be sure to call init(), or else the object won't be fully functional if someone forgets. Also, this exposes more of the API; both seem like bad ideas.
    3. Wrap a nested class to hold the dependency and make sure it behaves properly without exposing additional API:

    ```java
    class DependencyManager {
        private final Dependency dependency;

        public DependencyManager(Dependency dependency) { ... }

        public void doSomething(int foo) { ... }
    }

    @Inject
    public MyClass(Dependency dependency) {
        DependencyManager manager = new DependencyManager(dependency);
        manager.doSomething(0);
    }
    ```

    This pulls instance methods out of all constructors, but generates an extra layer of classes, and when I already had inner and anonymous classes (e.g. that handler) it can become confusing - when I tried this I was told to move the DependencyManager to a separate file, which is also distasteful because it's now multiple files to do a single thing.

    So what is the preferred way to deal with this sort of situation?
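
    One further option the question does not list: keep the constructor trivial and move the setup into a static factory method, so no instance method runs until the object is fully constructed. A hedged sketch of that idea (Dependency and Handler are assumed to be as in the question; the method reference assumes Handler is a single-method interface):

    ```java
    final class MyClass {
        private final Dependency dependency;

        private MyClass(Dependency dependency) {
            this.dependency = dependency; // the constructor only assigns fields
        }

        // The factory finishes construction first, then performs the setup
        // that requires a fully initialized instance.
        static MyClass create(Dependency dependency) {
            MyClass instance = new MyClass(dependency);
            dependency.addHandler(instance::handle);
            instance.doSomething(0);
            return instance;
        }

        private void handle(int foo) {
            doSomething(foo);
        }

        private void doSomething(int foo) {
            dependency.doSomethingElse(foo + 1);
        }
    }
    ```

    With Guice, this would be wired up through a provider method in a module rather than a bare @Inject constructor, which is a trade-off of its own.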

    Read the article

  • The remote server returned an error: (400) Bad Request - uploading a file smaller than 2 MB?

    - by fiberOptics
    The file uploads successfully when it is 2 kB or smaller. The main reason I use streaming is to be able to upload files of up to at least 1 GB, but when I try to upload a file of less than 1 MB, I get a bad request. It is my first time dealing with downloading and uploading, so I can't easily find the cause of the error.

    The testing part:

    ```csharp
    private void button24_Click(object sender, EventArgs e)
    {
        try
        {
            OpenFileDialog openfile = new OpenFileDialog();
            if (openfile.ShowDialog() == System.Windows.Forms.DialogResult.OK)
            {
                string port = "3445";
                byte[] fileStream;
                using (FileStream fs = new FileStream(openfile.FileName, FileMode.Open, FileAccess.Read, FileShare.Read))
                {
                    fileStream = new byte[fs.Length];
                    fs.Read(fileStream, 0, (int)fs.Length);
                    fs.Close();
                    fs.Dispose();
                }
                string baseAddress = "http://localhost:" + port + "/File/AddStream?fileID=9";
                HttpWebRequest request = (HttpWebRequest)HttpWebRequest.Create(baseAddress);
                request.Method = "POST";
                request.ContentType = "text/plain";
                //request.ContentType = "application/octet-stream";
                Stream serverStream = request.GetRequestStream();
                serverStream.Write(fileStream, 0, fileStream.Length);
                serverStream.Close();
                using (HttpWebResponse response = request.GetResponse() as HttpWebResponse)
                {
                    int statusCode = (int)response.StatusCode;
                    StreamReader reader = new StreamReader(response.GetResponseStream());
                }
            }
        }
        catch (Exception ex)
        {
            MessageBox.Show(ex.Message);
        }
    }
    ```

    The service:

    ```csharp
    [WebInvoke(UriTemplate = "AddStream?fileID={fileID}", Method = "POST", BodyStyle = WebMessageBodyStyle.Bare)]
    public bool AddStream(long fileID, System.IO.Stream fileStream)
    {
        ClasslLogic.FileComponent svc = new ClasslLogic.FileComponent();
        return svc.AddStream(fileID, fileStream);
    }
    ```

    Server code for streaming:

    ```csharp
    namespace ClasslLogic
    {
        public class StreamObject : IStreamObject
        {
            public bool UploadFile(string filename, Stream fileStream)
            {
                try
                {
                    FileStream fileToupload = new FileStream(filename, FileMode.Create);
                    byte[] bytearray = new byte[10000];
                    int bytesRead, totalBytesRead = 0;
                    do
                    {
                        bytesRead = fileStream.Read(bytearray, 0, bytearray.Length);
                        totalBytesRead += bytesRead;
                    } while (bytesRead > 0);
                    fileToupload.Write(bytearray, 0, bytearray.Length);
                    fileToupload.Close();
                    fileToupload.Dispose();
                }
                catch (Exception ex)
                {
                    throw new Exception(ex.Message);
                }
                return true;
            }
        }
    }
    ```

    Web config:

    ```xml
    <system.serviceModel>
      <bindings>
        <basicHttpBinding>
          <binding>
            <readerQuotas maxDepth="32" maxStringContentLength="8192" maxArrayLength="2097152"
                          maxBytesPerRead="4096" maxNameTableCharCount="2097152" />
            <security mode="None" />
          </binding>
          <binding name="ClassLogicBasicTransfer" closeTimeout="00:05:00" openTimeout="00:05:00"
                   receiveTimeout="00:15:00" sendTimeout="00:01:00" allowCookies="false"
                   bypassProxyOnLocal="false" hostNameComparisonMode="StrongWildcard"
                   maxBufferPoolSize="67108864" maxReceivedMessageSize="67108864"
                   messageEncoding="Mtom" textEncoding="utf-8" useDefaultWebProxy="true">
            <readerQuotas maxDepth="32" maxStringContentLength="8192" maxArrayLength="67108864"
                          maxBytesPerRead="4096" maxNameTableCharCount="67108864" />
            <security mode="None">
              <transport clientCredentialType="None" proxyCredentialType="None" realm="" />
              <message clientCredentialType="UserName" algorithmSuite="Default" />
            </security>
          </binding>
          <binding name="BaseLogicWSHTTP">
            <security mode="None" />
          </binding>
          <binding name="BaseLogicWSHTTPSec" />
        </basicHttpBinding>
      </bindings>
      <behaviors>
        <serviceBehaviors>
          <behavior>
            <!-- To avoid disclosing metadata information, set the value below to false
                 and remove the metadata endpoint above before deployment -->
            <serviceMetadata httpGetEnabled="true" />
            <!-- To receive exception details in faults for debugging purposes, set the value below
                 to true. Set to false before deployment to avoid disclosing exception information -->
            <serviceDebug includeExceptionDetailInFaults="true" />
          </behavior>
        </serviceBehaviors>
      </behaviors>
      <serviceHostingEnvironment multipleSiteBindingsEnabled="true" aspNetCompatibilityEnabled="true" />
    </system.serviceModel>
    ```

    I'm not sure whether this affects the streaming function, because I'm using the WCF 4.0 REST template, whose config depends on Global.asax. One more thing: whether or not I run the service passing a stream, the created file always contains "NUL" bytes. How could I remove the "NUL" data? Thanks in advance.

    Edit:

    ```csharp
    public bool UploadFile(string filename, Stream fileStream)
    {
        try
        {
            FileStream fileToupload = new FileStream(filename, FileMode.Create);
            byte[] bytearray = new byte[10000];
            int bytesRead, totalBytesRead = 0;
            do
            {
                bytesRead = fileStream.Read(bytearray, totalBytesRead, bytearray.Length - totalBytesRead);
                totalBytesRead += bytesRead;
            } while (bytesRead > 0);
            fileToupload.Write(bytearray, 0, totalBytesRead);
            fileToupload.Close();
            fileToupload.Dispose();
        }
        catch (Exception ex)
        {
            throw new Exception(ex.Message);
        }
        return true;
    }
    ```

    Read the article

  • Is Visual SourceSafe (the latest version) really that bad? Why? What's the best alternative? Why? [closed]

    - by hanzolo
    Over the years I've constantly heard horror stories, had people say "Real Programmers Don't Use VSS", and so on. BUT, then in the workplace I've worked at two companies: one, a very well known public-facing high-traffic website, and another, a high-end financial services web-based hosted solution catering to some very large, very well known companies, which is where I currently reside, and everything's working just fine (KNOCK KNOCK!!). I'm constantly interfacing with EXTREMELY old technology at some of these financial institutions... OLD LIKE YOU WOULDN'T BELIEVE... which leads me to the conclusion that if it works, "LEAVE IT", and that maybe there's some value in old technology - at least enough value to overrule a rewrite, right?

    Is there something fundamentally flawed with the underlying technology that VSS uses? I have a feeling that if I said "someone said VSS sucks", they would beg to differ, most likely give me this look like I don't know -ish, and I'd never gain back their respect and my credibility (well, that'll be hard to blow... lol). BUT, give me an argument that I can take to someone who's been coding for 30 years, who builds platforms that leverage current technology (.NET 3.5 / SQL 2008 R2), writes their own ORM with scaffolding, and is able to provide a quality platform that supports thousands of concurrent users on a multi-tenant hosted solution, yet does not agree with any benefits from having source control integrated, and still uses the infamous Visual SourceSafe.

    I have extensive experience with TFS up to 2010, and honestly I think it's great when a team (beyond developers) can embrace it. I've worked side by side with someone who's a die-hard SVN'er, and from a purist standpoint I see the beauty in it (I need a bit more out of my source control, but it surely suffices). So why are such smarties not running away from Visual SourceSafe? Surely if it was so bad, it would've been realized by now, and I would not be sitting here with this simple old check-in/check-out, version-resistant, label-intensive system. But here I am... I would love to drop an argument that would be the end-all argument, but if it's a matter of opinion and personal experience, there seems to be too much leeway for keeping VSS.

    UPDATE: I guess the best case is to have the VSS supporters check other people's experiences and draw from that, until we (please, no) experience the breaking factor ourselves. Until then, I won't be engaging in a discussion to migrate off of VSS.

    UPDATE 11-2012: So I was able to convince everyone at my workplace that since MS is sunsetting Visual SourceSafe, it might be time to migrate over to TFS. I was able to convince them, and we have recently upgraded our team to Visual Studio 2012 and TFS 2012. The migration was fairly painless: I had to run analyze.exe, which found a bunch of errors (not sure they'll ever affect the project), and then manually run VSSConverter.exe. Again, painless, except it took 16 hours to migrate 5 years' worth of everything... and now we're on TFS. Much more integrated, much cooler. So, all in all, VSS served its purpose for years without hiccup. There were no horror stories, and Visual SourceSafe as source control worked just fine. So to all the naysayers (me included): there's nothing wrong with using VSS. I wouldn't start a new project with it, and I would definitely consider migrating to TFS (it's really not super difficult, and a new "wizard"-type converter is due out any day now, so migrating should be painless).
    But from my experience, it worked just fine and got the job done.

    Read the article

  • How can I disable DNSSEC for Google Apps (Gmail) MX records on my authoritative domains?

    - by meinemitternacht
    I'm running a BIND master/slave setup with DNSSEC, but some of my domains use Google Apps for e-mail services. Google doesn't support DNSSEC, and BIND doesn't like that at all. Log output:

    ```
    Sep 6 17:12:51 srv549 named[5376]: error (broken trust chain) resolving 'ALT2.ASPMX.L.GOOGLE.COM.dlv.isc.org/DLV/IN': 70.32.45.42#53
    Sep 6 17:12:51 srv549 named[5376]: error (broken trust chain) resolving 'ALT2.ASPMX.L.GOOGLE.COM/A/IN': 70.32.45.42#53
    Sep 6 17:12:51 srv549 named[5376]: error (broken trust chain) resolving 'ALT2.ASPMX.L.GOOGLE.COM/AAAA/IN': 70.32.45.42#53
    Sep 6 17:12:51 srv549 named[5376]: validating @0x7f755cb83950: ALT2.ASPMX.L.GOOGLE.COM AAAA: bad cache hit (ALT2.ASPMX.L.GOOGLE.COM.dlv.isc.org/DLV)
    Sep 6 17:12:51 srv549 named[5376]: error (broken trust chain) resolving 'ALT2.ASPMX.L.GOOGLE.COM/AAAA/IN': 69.147.224.178#53
    Sep 6 17:12:51 srv549 named[5376]: validating @0x7f755ca52c30: ALT2.ASPMX.L.GOOGLE.COM A: bad cache hit (ALT2.ASPMX.L.GOOGLE.COM.dlv.isc.org/DLV)
    Sep 6 17:12:51 srv549 named[5376]: error (broken trust chain) resolving 'ALT2.ASPMX.L.GOOGLE.COM/A/IN': 69.147.224.178#53
    Sep 6 17:12:51 srv549 named[5376]: validating @0x7f755ca52c30: ASPMX2.GOOGLEMAIL.COM AAAA: bad cache hit (ASPMX2.GOOGLEMAIL.COM.dlv.isc.org/DLV)
    Sep 6 17:12:51 srv549 named[5376]: error (broken trust chain) resolving 'ASPMX2.GOOGLEMAIL.COM/AAAA/IN': 70.32.45.42#53
    Sep 6 17:12:51 srv549 named[5376]: validating @0x7f755cb83950: ASPMX2.GOOGLEMAIL.COM A: bad cache hit (ASPMX2.GOOGLEMAIL.COM.dlv.isc.org/DLV)
    Sep 6 17:12:51 srv549 named[5376]: error (broken trust chain) resolving 'ASPMX2.GOOGLEMAIL.COM/A/IN': 70.32.45.42#53
    Sep 6 17:12:51 srv549 named[5376]: validating @0x7f754c1b0bd0: ASPMX2.GOOGLEMAIL.COM A: bad cache hit (ASPMX2.GOOGLEMAIL.COM.dlv.isc.org/DLV)
    Sep 6 17:12:51 srv549 named[5376]: error (broken trust chain) resolving 'ASPMX2.GOOGLEMAIL.COM/A/IN': 70.32.45.42#53
    Sep 6 17:12:51 srv549 named[5376]: validating @0x7f754c1a6a30: ASPMX2.GOOGLEMAIL.COM AAAA: bad cache hit (ASPMX2.GOOGLEMAIL.COM.dlv.isc.org/DLV)
    Sep 6 17:12:51 srv549 named[5376]: error (broken trust chain) resolving 'ASPMX2.GOOGLEMAIL.COM/AAAA/IN': 70.32.45.42#53
    Sep 6 17:12:51 srv549 named[5376]: validating @0x7f755cb83950: ASPMX3.GOOGLEMAIL.COM AAAA: bad cache hit (ASPMX3.GOOGLEMAIL.COM.dlv.isc.org/DLV)
    ```

    I'm not absolutely sure this is what is stopping Google Apps from working, because I just enabled all of the DNSSEC features. Does anyone here have experience with this?

    Read the article

  • Enablement 2.0: Get Specialized!

    - by michaela.seika(at)oracle.com
    Check the new OPN Certified Specialist Exam Study Guides – your quick reference to the training options that guide partners to pass the OPN Specialist Exams!

    What are the advantages of the Exam Study Guides?

    - They cover the Implementation Specialist Exams that count towards the OPN Specialization program.
    - They capture Exam Topics, Exam Objectives and Training Options.
    - They define the Exam Objectives by learner or practitioner level of knowledge:
      - Learner-level questions require the candidate to recall information to derive the correct answer.
      - Practitioner-level questions require the candidate to derive the correct answer from an application of their knowledge.
    - They map, for each Exam Topic, the alternative training options that are available at Oracle.

    Where to find the Exam Study Guides?

    - On Enablement 2.0 > Spotlight
    - On each Knowledge Zone > Implement
    - On each Specialist Implementation Guided Learning Path

    For more information: Oracle Certification Program Beta Exams, OPN Certified Specialist Exams, OPN Certified Specialist FAQ. Contact us: please direct any inquiries you may have to the Oracle Partner Enablement team at [email protected].

    Read the article

  • Display page numbers in an Excel sheet generated using C#.NET

    - by constant learner
    Hello Stackers. Does anyone have an idea how to include page numbers in an Excel sheet generated from C# code? I use the libraries available in Microsoft.Office.Interop.Excel to generate the file; however, by default the output shows no page numbers. I know how to enable this via Excel's options (View -> Header and Footer...), but I want to automate it via C#. Is this possible? If yes, kindly share a snippet for the same. Thanks, Constant Learner
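
    In Excel, page numbers come from header/footer codes: the literal "&P" placed in a header or footer expands to the current page number when the sheet is printed or previewed, and "&N" to the total page count, so automating this amounts to writing such a code into the sheet's page setup (via the Interop API, that would be one of the Worksheet's PageSetup header/footer properties). As a hedged illustration of the same mechanic - shown in Java with Apache POI, deliberately a different library than the question's Interop API, with an invented output file name:

    ```java
    import java.io.FileOutputStream;

    import org.apache.poi.ss.usermodel.Footer;
    import org.apache.poi.ss.usermodel.Sheet;
    import org.apache.poi.ss.usermodel.Workbook;
    import org.apache.poi.xssf.usermodel.XSSFWorkbook;

    public class PageNumberFooter {
        public static void main(String[] args) throws Exception {
            try (Workbook wb = new XSSFWorkbook()) {
                Sheet sheet = wb.createSheet("data");
                // "&P" is Excel's page-number code, "&N" the page count;
                // both are resolved by Excel at print/preview time.
                Footer footer = sheet.getFooter();
                footer.setCenter("Page &P of &N");
                try (FileOutputStream out = new FileOutputStream("report.xlsx")) {
                    wb.write(out);
                }
            }
        }
    }
    ```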

    Read the article

  • java.lang.UnsupportedClassVersionError: Bad version number in .class file?

    - by grmn.bob
    I am getting this error when I include an open-source library that I had to compile from source. Now, all the suggestions on the web indicate that the code was compiled under one version and executed on another (new on old). However, I only have one version of the JRE on my system. If I run the commands:

    ```
    $ javac -version
    javac 1.5.0_18

    $ java -version
    java version "1.5.0_18"
    Java(TM) 2 Runtime Environment, Standard Edition (build 1.5.0_18-b02)
    Java HotSpot(TM) Server VM (build 1.5.0_18-b02, mixed mode)
    ```

    and check the properties of the Java library in Eclipse, I get 1.5.0_18. Therefore, I have to conclude something else, internal to a class itself, is throwing the exception?? Is that even possible?
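
    One thing worth ruling out: Eclipse selects its JVM through its own launch configuration, independently of the shell, so the environment that actually loads the library may not be the one java -version reports. A small sketch to print what the running JVM accepts (java.class.version 49.0 corresponds to Java 5, 50.0 to Java 6):

    ```java
    // Prints version facts about whichever JVM actually runs this class.
    public class JvmInfo {
        public static void main(String[] args) {
            System.out.println("java.version       = " + System.getProperty("java.version"));
            System.out.println("java.class.version = " + System.getProperty("java.class.version"));
            System.out.println("java.home          = " + System.getProperty("java.home"));
        }
    }
    ```

    Running this from the same launch configuration that fails would show whether the loading JVM really is 1.5.0_18, and comparing java.class.version against the major version stamped into the library's compiled classes would confirm or rule out a mismatch.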

    Read the article

  • Is this so bad when using MySQL queries in PHP?

    - by alex
    I need to update a lot of rows per user request. It is a site with products. I could:

    - Delete all old rows for that product, then loop through a string, building a new INSERT query. This, however, will lose all data if the INSERT fails.
    - Perform an UPDATE on each iteration of the loop. This loop currently iterates over 8 items, but in the future it may get up to 15. That many UPDATEs doesn't sound like too good an idea.
    - Change the DB schema and add an auto_increment id to the rows. Then first do a SELECT, collect all the old rows' ids in a variable, perform one INSERT, and then a DELETE WHERE IN SET.

    What is the usual practice here? Thanks
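
    The data-loss worry in the first option is exactly what a transaction removes: the DELETE and the INSERTs either all commit or all roll back. A minimal sketch of that idea, shown in JDBC for consistency with this page's other examples (the connection URL, table and column names are invented; transactions also require a transactional engine such as InnoDB, since MyISAM ignores them):

    ```java
    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.PreparedStatement;
    import java.sql.SQLException;

    public class ReplaceProductRows {
        public static void main(String[] args) throws SQLException {
            try (Connection conn = DriverManager.getConnection(
                    "jdbc:mysql://localhost/shop", "user", "password")) {
                conn.setAutoCommit(false); // start a transaction
                try (PreparedStatement del = conn.prepareStatement(
                             "DELETE FROM product_attr WHERE product_id = ?");
                     PreparedStatement ins = conn.prepareStatement(
                             "INSERT INTO product_attr (product_id, value) VALUES (?, ?)")) {
                    del.setLong(1, 42L);
                    del.executeUpdate();
                    for (String value : new String[] {"red", "large", "cotton"}) {
                        ins.setLong(1, 42L);
                        ins.setString(2, value);
                        ins.addBatch();
                    }
                    ins.executeBatch();
                    conn.commit(); // old rows gone, new rows in place, atomically
                } catch (SQLException e) {
                    conn.rollback(); // nothing is lost: the old rows are still there
                    throw e;
                }
            }
        }
    }
    ```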

    Read the article

< Previous Page | 37 38 39 40 41 42 43 44 45 46 47 48  | Next Page >