Search Results

Search found 22300 results on 892 pages for 'half bit'.


  • Evaluating expressions using Visual Studio 2005 SDK rather than automation's Debugger::GetExpression

    - by brone
    I'm looking into writing an addin (or package, if necessary) for Visual Studio 2005 that needs watch window type functionality -- evaluation of expressions and examination of the types. The automation facilities provide Debugger::GetExpression, which is useful enough, but the information provided is a bit crude. From looking through the docs, it sounds like an IDebugExpressionContext2 would be more useful. With one of these it looks as if I can get more information from an expression -- detailed information about the type and any members and so on and so forth, without having everything come through as strings. I can't find any way of actually getting a IDebugExpressionContext2, though! IDebugProgramProvider2 sort of looks relevant, in that I could start with IDebugProgramProvider2::GetProviderProcessData and then slowly drill down until reaching something that can supply my expression context -- but I'll need to supply a port to this, and it's not clear how to retrieve the port corresponding to the current debug session. (Even if I tried every port, it's not obvious how to tell which port is the right one...) I'm becoming suspicious that this simply isn't a supported use case, but with any luck I've simply missed something crashingly obvious. Can anybody help?

    Read the article

  • Best implementation of Java Queue?

    - by Georges Oates Larsen
    I am working (in Java) on a recursive image processing algorithm that recursively traverses the pixels of the image, outward from a center point. Unfortunately, that causes stack overflows, so I have decided to switch to a queue-based algorithm. Now, this is all fine and dandy -- but considering the fact that the queue will be analyzing THOUSANDS of pixels in a very short amount of time, while constantly popping and pushing, WITHOUT maintaining a predictable size (it could be anywhere between length 100 and 20000), the queue implementation needs to have significantly fast popping and pushing abilities. A linked list seems attractive due to its ability to push elements onto itself without rearranging anything else in the list, but in order for it to be fast enough, it would need easy access to both its head AND its tail (or second-to-last node if it were not doubly linked). Sadly, though, I cannot find any information about the underlying implementation of linked lists in Java, so it's hard to say if a linked list is really the way to go... This brings me to my question: what would be the best implementation of the Queue interface in Java for what I intend to do? (I do not wish to edit or even access anything other than the head and tail of the queue -- I do not wish to do any sort of rearranging, or anything. On the flip side, I DO intend to do a lot of pushing and popping, and the queue will be changing size quite a bit, so preallocating would be inefficient.)
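    For what it's worth, java.util.LinkedList is a doubly-linked list that keeps references to both ends, and java.util.ArrayDeque (Java 6+) is a resizable circular array that also gives amortized O(1) pushes and pops at either end without per-node allocation. Below is a minimal sketch of the kind of queue-based traversal described above using ArrayDeque; the Point type, bounds check and visited handling are placeholders, not the poster's code:

        import java.util.ArrayDeque;
        import java.util.Deque;

        public class QueueTraversal {
            // Hypothetical stand-in for a pixel coordinate.
            static final class Point {
                final int x, y;
                Point(int x, int y) { this.x = x; this.y = y; }
            }

            // Breadth-first walk outward from a centre point using a FIFO queue.
            static void traverse(boolean[][] visited, int startX, int startY) {
                Deque<Point> queue = new ArrayDeque<Point>();   // array-backed, no node objects
                queue.offer(new Point(startX, startY));
                while (!queue.isEmpty()) {
                    Point p = queue.poll();                     // O(1) removal at the head
                    if (p.x < 0 || p.y < 0 || p.x >= visited.length || p.y >= visited[0].length
                            || visited[p.x][p.y]) {
                        continue;                               // out of bounds or already processed
                    }
                    visited[p.x][p.y] = true;
                    // ... process the pixel here ...
                    queue.offer(new Point(p.x + 1, p.y));       // O(1) insertion at the tail
                    queue.offer(new Point(p.x - 1, p.y));
                    queue.offer(new Point(p.x, p.y + 1));
                    queue.offer(new Point(p.x, p.y - 1));
                }
            }
        }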

    Read the article

  • problem in handling menu - submenu based on spring security

    - by Nirmal
    Hi All... I have configured the Spring Security Core plugin using a requestmap table inside the database. Inside the requestmap table I have all the possible URLs and the roles that can access each URL. Now I want to generate menus and submenus based on the URLs stored in the requestmap table. So my requirement is to check the URLs of the menu and submenus against the logged-in user's privileges, and if the logged-in user has any one of those privileges, then I need to display that main menu and the available submenus. For example, I have a menu in my project called Users, which has the following submenus:

        Users (main menu)
            Manage Users (sub menu)
            Import Users (sub menu)

    Now inside my header.gsp I have successfully achieved the above requirement using an if/else condition, like: if ( privs.contains("/users/manageUsers") || privs.contains("/users/importUsers")) -- here privs is the list of URLs from the requestmap table for the logged-in user. But I want to achieve this using the Spring Security tag library, so for comparing URLs I have found the following tag in the Spring Security Core documentation: <sec:access url="/users/manageUsers">. However, I am a bit confused about how I can express the OR condition using the tag library. Is there any tag available which checks multiple URLs and evaluates to true or false? Of course I can do it using the sec:access tag with some flag logic, but are there any tags available which can fulfil my requirement directly? Thanks in advance...

    Read the article

  • problem finding a header with a c++ makefile

    - by Max
    Hi. I've started working with my first makefile. I'm writing a roguelike in C++ using the libtcod library, and have the following hello world program to test if my environment's up and running:

        #include "libtcod.hpp"

        int main()
        {
            TCODConsole::initRoot(80, 50, "PartyHack");
            TCODConsole::root->printCenter(40, 25, TCOD_BKGND_NONE, "Hello World");
            TCODConsole::flush();
            TCODConsole::waitForKeypress(true);
        }

    My project directory structure looks like this:

        /CppPartyHack
            /libtcod-1.5.1      # this is the libtcod root folder
                /include
                    libtcod.hpp
            /PartyHack
                makefile
                partyhack.cpp   # the above code

    (While we're here, how do I do proper indentation? Using those dashes is silly.) And here's my makefile:

        SRCDIR = .
        INCDIR = ../libtcod-1.5.1/include
        CFLAGS = $(FLAGS) -I$(INCDIR) -I$(SRCDIR) -Wall
        CC = gcc
        CPP = g++

        .SUFFIXES: .o .h .c .hpp .cpp

        $(TEMP)/%.o : $(SRCDIR)/%.cpp
            $(CPP) $(CFLAGS) -o $@ -c $<

        $(TEMP)/%.o : $(SRCDIR)/%.c
            $(CC) $(CFLAGS) -o $@ -c $<

        CPP_OBJS = $(TEMP)partyhack.o

        all : partyhack

        partyhack : $(CPP_OBJS)
            $(CPP) $(CPP_OBJS) -o $@ -L../libtcod-1.5.1 -ltcod -ltcod++ -Wl,-rpath,.

        clean :
            \rm -f $(CPP_OBJS) partyhack

    I'm using Ubuntu, and my terminal gives me the following errors:

        max@max-desktop:~/Desktop/Development/CppPartyhack/PartyHack$ make
        g++    -c -o partyhack.o partyhack.cpp
        partyhack.cpp:1:23: error: libtcod.hpp: No such file or directory
        partyhack.cpp: In function 'int main()':
        partyhack.cpp:5: error: 'TCODConsole' has not been declared
        partyhack.cpp:6: error: 'TCODConsole' has not been declared
        partyhack.cpp:6: error: 'TCOD_BKGND_NONE' was not declared in this scope
        partyhack.cpp:7: error: 'TCODConsole' has not been declared
        partyhack.cpp:8: error: 'TCODConsole' has not been declared
        make: *** [partyhack.o] Error 1

    So obviously, the makefile can't find libtcod.hpp. I've double-checked and I'm sure the relative path to libtcod.hpp in INCDIR is correct, but as I'm just starting out with makefiles, I'm uncertain what else could be wrong. My makefile is based off a template that the libtcod designers provided along with the library itself, and while I've looked at a few online makefile tutorials, the code in this makefile is a good bit more complicated than any of the examples the tutorials showed, so I'm assuming I screwed up something basic in the conversion. Thanks for any help.

    Read the article

  • BinaryFormatter in C# a good way to read files?

    - by mr-pac
    I want to read a binary file which was created outside of my program. One obvious way in C# to read a binary file is to define a class representing the file, then use a BinaryReader, read from the file via the Read* methods, and assign the return values to the class properties. What I don't like about that approach is that I manually have to write code that reads the file, even though the defined structure already represents how the file is stored. I also have to keep the order correct when I read. After looking around a bit I came across the BinaryFormatter, which can automatically serialize and deserialize objects in binary format. One great advantage would be that I could read and also write the file without creating additional code. However, I wonder if this approach is suitable for files created by other programs, and not just for serialized .NET objects. Take for example a graphics format file like BMP. Would it be a good idea to read the file with a BinaryFormatter, or is it better to read and write manually via BinaryReader and BinaryWriter? Or are there any other approaches which suit this better? I'm not looking for concrete examples, just for advice on the best way to implement this.

    Read the article

  • What is the best / proper idiom in django for modifying a field during a .save() where you need the old value?

    - by MDBGuy
    Hi, say I've got:

        class LogModel(models.Model):
            message = models.CharField(max_length=512)

        class Assignment(models.Model):
            someperson = models.ForeignKey(SomeOtherModel)

            def save(self, *args, **kwargs):
                super(Assignment, self).save()
                old_person = #?????
                LogModel(message="%s is no longer assigned to %s" % (old_person, self)).save()
                LogModel(message="%s is now assigned to %s" % (self.someperson, self)).save()

    My goal is to save to LogModel some messages about who the Assignment was assigned to. Notice that I need to know the old, pre-save value of this field. I have seen code that suggests, before super().save(), retrieving the instance from the database via its primary key and grabbing the old value from there. This could work, but is a bit messy. In addition, I plan to eventually split this code out of the .save() method via signals -- namely pre_save() and post_save(). Trying to use the above logic (retrieve from the db in pre_save, make the log entry in post_save) seemingly fails here, as pre_save and post_save are two separate methods. Perhaps in pre_save I can retrieve the old value and stick it on the model as an attribute? I was wondering if there was a common idiom for this. Thanks.

    Read the article

  • C++ Changing a class in a dll where a pointer to that class is returned to other dlls...

    - by Patrick
    Hello, Horrible title I know, horrible question too. I'm working with a bit of software where a dll returns a ptr to an internal class. Other dlls (calling dlls) then use this pointer to call methods of that class directly:

        //dll 1
        internalclass m_class;

        internalclass* getInternalObject()
        {
            return &m_class;
        }

        //dll 2
        internalclass* classptr = getInternalObject();
        classptr->method();

    This smells pretty bad to me but it's what I've got... I want to add a new method to internalclass as one of the calling dlls needs additional functionality. I'm certain that all dlls that access this class will need to be rebuilt after the new method is included, but I can't work out the logic of why. My thinking is it's something to do with the already compiled calling dll having the physical address of each function within internalclass in the other dll, but I don't really understand it; is anyone here able to provide a concise explanation of how the dlls (new internal class dll, rebuilt calling dll and calling dll built with previous version of the internal class dll) would fit together? Thanks, Patrick

    Read the article

  • Using items in a list as arguments

    - by Travis Brown
    Suppose I have a function with the following type signature:

        g :: a -> a -> a -> b

    I also have a list of as -- let's call it xs -- that I know will contain at least three items. I'd like to apply g to the first three items of xs. I know I could define a combinator like the following:

        ($$$) :: (a -> a -> a -> b) -> [a] -> b
        f $$$ (x:y:z:_) = f x y z

    Then I could just use g $$$ xs. This makes $$$ a bit like uncurry, but for a function with three arguments of the same type and a list instead of a tuple. Is there a way to do this idiomatically using standard combinators? Or rather, what's the most idiomatic way to do this in Haskell? I thought trying pointfree on a non-infix version of $$$ might give me some idea of where to start, but the output was an abomination with 10 flips, a handful of heads and tails and aps, and 28 parentheses. (NB: I know this isn't a terribly Haskelly thing to do in the first place, but I've come across a couple of situations where it seems like a reasonable solution, especially when using Parsec. I'll certainly accept "don't ever do this in real code" if that's the best answer, but I'd prefer to see some clever trick involving the ((->) r) monad or whatever.)

    Read the article

  • Qt/C++ - confused about caller/callee, object ownership

    - by Isabel
    I am creating a GUI to manipulate a robot arm. The location of the arm can be described by 6 floats (describing the positions of the various arm joints). The interface consists of a QGraphicsView with a diagram of the arm (which can be clicked to change the arm position, adjusting the 6 floats). The interface also has 6 lineEdit boxes, to adjust those values separately. When the graphics view is clicked, and when the line edit boxes are changed, I'd like the line edit boxes / graphics view to stay in synchronisation. This brings me to confusion about how to store the 6 floats and how to trigger events when they're updated. My current idea is this: the robot arm's location should be represented by a class, RobotArmLocation. Objects of this class then have methods such as obj.ShoulderRotation() and obj.SetShoulderRotation(). The MainWindow has a single instance of RobotArmLocation. Next is the bit I'm more confused about: how to join everything up. I am thinking: the MainWindow has an ArmLocationChanged slot, which is signalled whenever the location object is changed. The diagram class has a SetRobotArmLocation(RobotArmLocation &loc). When the diagram is changed, it's free to change the location object and fire a signal to the ArmLocationChanged slot. Likewise, changing any of the text boxes fires a signal to that ArmLocationChanged slot. The slot then has code to synchronise all the elements. This kind of seems like a mess to me; does anyone have any other suggestions? I've also thought of the following -- does it have any merit? The RobotArmLocation class has a ValueChanged slot, and the diagram and textboxes can use that directly, bypassing the MainWindow (seems cleaner?). Thanks for any wisdom!

    Read the article

  • Preloading CSS Background Images

    - by Peanuts
    Hello guys, I have a hidden contact form which is deployed by clicking on a button. Its fields are set as CSS background images, and they always appear a bit later than the div that has been toggled. I was using this snippet in the <head> section, but with no luck (after I cleared the cache):

        <script type="text/javascript">
            $(document).ready(function() {
                pic = new Image();
                pic2 = new Image();
                pic3 = new Image();
                pic.src = "<?php bloginfo('template_directory'); ?>/images/inputs/input1.png";
                pic2.src = "<?php bloginfo('template_directory'); ?>/images/inputs/input2.png";
                pic3.src = "<?php bloginfo('template_directory'); ?>/images/inputs/input3.png";
            });
        </script>

    I'm using jQuery as my library, and it would be cool if I could use it as well for sorting out this issue. Thanks for your thoughts.

    Read the article

  • iPhone --- 3DES Encryption returns "wrong" results?

    - by Jan Gressmann
    Hello fellow developers, I have some serious trouble with a CommonCrypto function. There are two existing applications for BlackBerry and Windows Mobile; both use Triple-DES encryption with ECB mode for data exchange, and on either one the encrypted results are the same. Now I want to implement the 3DES encryption in our iPhone application, so I went straight for CommonCrypto: http://www.opensource.apple.com/source/CommonCrypto/CommonCrypto-32207/CommonCrypto/CommonCryptor.h I get some results if I use CBC mode, but they do not correspond with the results of Java or C#. Anyway, I want to use ECB mode, but I can't get this working at all -- there is a parameter error showing up... This is my call for the ECB mode (I stripped it a little bit):

        const void *vplainText;

        plainTextBufferSize = [@"Hello World!" length];
        bufferPtrSize = (plainTextBufferSize + kCCBlockSize3DES) & ~(kCCBlockSize3DES - 1);
        plainText = (const void *) [@"Hello World!" UTF8String];
        NSString *key = @"abcdeabcdeabcdeabcdeabcd";

        ccStatus = CCCrypt(kCCEncrypt,
                           kCCAlgorithm3DES,
                           kCCOptionECBMode,
                           key,
                           kCCKeySize3DES,
                           nil,                // iv, not used with ECB
                           plainText,
                           plainTextBufferSize,
                           (void *)bufferPtr,  // output
                           bufferPtrSize,
                           &movedBytes);

    It is more or less the code from here: http://discussions.apple.com/thread.jspa?messageID=9017515 But as already mentioned, I get a parameter error each time... When I use kCCOptionPKCS7Padding instead of kCCOptionECBMode and set the same initialization vector in C# and in my iPhone code, the iPhone gives me different results. Is there a mistake in how I get my output from bufferPtr? Currently I get the encrypted data this way:

        NSData *myData = [NSData dataWithBytes:(const void *)bufferPtr length:(NSUInteger)movedBytes];
        result = [[NSString alloc] initWithData:myData encoding:NSISOLatin1StringEncoding];

    It seems I have tried almost every setting twice, different encodings and so on... where is my error?

    Read the article

  • Explanation of `self` usage during dealloc?

    - by Greg
    I'm trying to lock down my understanding of proper memory management within Objective-C. I've gotten into the habit of explicitly declaring self.myProperty rather than just myProperty because I was encountering occasional scenarios where a property would not be set to the reference that I intended. Now, I'm reading Apple documentation on releasing IBOutlets, and they say that all outlets should be set to nil during dealloc. So, I put this in place as follows and experienced crashes as a result:

        - (void)dealloc {
            [self.dataModel close];
            [self.dataModel release], self.dataModel = nil;
            [super dealloc];
        }

    So, I tried taking out the "self" references, like so:

        - (void)dealloc {
            [dataModel close];
            [dataModel release], dataModel = nil;
            [super dealloc];
        }

    This second system seems to work as expected. However, it has me a bit confused. Why would self cause a crash in that case, when I thought self was a fairly benign reference more used as a formality than anything else? Also, if self is not appropriate in this case, then I have to ask: when should you include self references, and when should you not?

    Read the article

  • How is timezone handled in the lifecycle of an ADO.NET + SQL Server DateTime column?

    - by stimpy77
    Using SQL Server 2008. This is a really junior question and I could really use some elaborate information, but the information on Google seems to dance around the topic quite a bit and it would be nice if there was some detailed elaboration on how this works... Let's say I have a datetime column and in ADO.NET I set it to DateTime.UtcNow.

    1) Does SQL Server store DateTime.UtcNow accordingly, or does it offset it again based on the timezone of where the server is installed, and then return it offset-reversed when queried? I think I know that the answer is "of course it stores it without offsetting it again" but want to be certain.

    So then I query for it and cast it from, say, an IDataReader column to a DateTime. As far as I know, System.DateTime has metadata that internally tracks whether it is a UTC DateTime or it is an offsetted DateTime, which may or may not cause .ToLocalTime() and .ToUniversalTime() to have different behavior depending on this state. So,

    2) Does this casted System.DateTime object already know that it is a UTC DateTime instance, or does it assume that it has been offset?

    Now let's say I don't use UtcNow, I use DateTime.Now, when performing an ADO.NET INSERT or UPDATE.

    3) Does ADO.NET pass the offset to SQL Server, and does SQL Server store DateTime.Now with the offset metadata?

    So then I query for it and cast it from, say, an IDataReader column to a DateTime.

    4) Does this casted System.DateTime object already know that it is an offset time, or does it assume that it is UTC?

    Read the article

  • When should EntityManagerFactory instance be created/opened ?

    - by masato-san
    OK, I read a bunch of articles/examples on how to write an EntityManagerFactory as a singleton. One of them was the easiest for me to understand: http://javanotepad.blogspot.com/2007/05/jpa-entitymanagerfactory-in-web.html I learned that the EntityManagerFactory (EMF) should only be created once, preferably in application scope, and also that I should make sure to close the EMF once it's used (?). So I wrote an EMF helper class for business methods to use:

        public class EmProvider {

            private static final String DB_PU = "KogaAlphaPU";
            public static final boolean DEBUG = true;

            private static final EmProvider singleton = new EmProvider();

            private EntityManagerFactory emf;

            private EmProvider() {}

            public static EmProvider getInstance() {
                return singleton;
            }

            public EntityManagerFactory getEntityManagerFactory() {
                if(emf == null) {
                    emf = Persistence.createEntityManagerFactory(DB_PU);
                }
                if(DEBUG) {
                    System.out.println("factory created on: " + new Date());
                }
                return emf;
            }

            public void closeEmf() {
                if(emf.isOpen() || emf != null) {
                    emf.close();
                }
                emf = null;
                if(DEBUG) {
                    System.out.println("EMF closed at: " + new Date());
                }
            }
        }//end class

    And my method using EmProvider:

        public String foo() {
            EntityManager em = null;
            List<Object[]> out = null;
            try {
                em = EmProvider.getInstance().getEntityManagerFactory().createEntityManager();
                Query query = em.createNativeQuery(JPQL_JOIN); //just some random query
                out = query.getResultList();
            }
            catch(Exception e) {
                //handle error....
            }
            finally {
                if(em != null) {
                    em.close(); //make sure to close EntityManager
                }
            }

    I made sure to close the EntityManager (em) within the method as suggested. But when should the EntityManagerFactory be closed, then? And why does the EMF have to be a singleton so badly??? I read about concurrency issues, but as I am not an experienced multi-thread-grammer, I can't really get clear on this idea.
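    For what it's worth, a common way to tie the factory's lifetime to application scope in a web app is a ServletContextListener, so the EMF is built once at startup and closed exactly once at shutdown. A minimal sketch, assuming the javax.servlet API and reusing the EmProvider above (the listener name is hypothetical and it would still need to be registered in web.xml):

        import javax.servlet.ServletContextEvent;
        import javax.servlet.ServletContextListener;

        public class EmfLifecycleListener implements ServletContextListener {

            public void contextInitialized(ServletContextEvent sce) {
                // Build the application-scoped factory up front.
                EmProvider.getInstance().getEntityManagerFactory();
            }

            public void contextDestroyed(ServletContextEvent sce) {
                // Close the factory once, when the whole application is undeployed.
                EmProvider.getInstance().closeEmf();
            }
        }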

    Read the article

  • MySQL developer here -- Nesting with select * finicky in Oracle 10g?

    - by John Sullivan
    I'm writing a simple diagnostic query, then attempting to execute it in the Oracle 10g SQL Scratchpad. EDIT: It will not be used in code. I'm nesting a simple "SELECT *" and it's giving me errors. In the SQL Scratchpad for the Oracle 10g Enterprise Manager Console, this statement runs fine:

        SELECT *
        FROM v$session sess, v$sql sql
        WHERE sql.sql_id(+) = sess.sql_id and sql.sql_text <> ' '

    If I try to wrap that up in SELECT * FROM ( ... ) tb2, I get the error "ORA-00918: Column Ambiguously Defined". I didn't think that could ever happen with this kind of statement, so I am a bit confused.

        select * from
          (SELECT *
           FROM v$session sess, v$sql sql
           WHERE sql.sql_id(+) = sess.sql_id and sql.sql_text <> ' ') tb2

    You should always be able to select * from the result set of another select * statement using this structure, as far as I'm aware... right? Is Oracle/10g/the scratchpad trying to force me to accept a certain syntactic structure to prevent excessive nesting? Is this a bug in Scratchpad, or something about how Oracle works?

    Read the article

  • C# 3 dimensional array definition issue

    - by George2
    Hello everyone, My following code has a compile error:

        Error 1  Cannot implicitly convert type 'TestArray1.Foo[,,*]' to 'TestArray1.Foo[][][]'  C:\Users\lma\Documents\Visual Studio 2008\Projects\TestArray1\TestArray1\Program.cs  17  30  TestArray1

    Does anyone have any ideas? Here is my whole code; I am using VSTS 2008 + Vista 64-bit.

        namespace TestArray1
        {
            class Foo
            {
            }

            class Program
            {
                static void Main(string[] args)
                {
                    Foo[][][] foos = new Foo[1, 1, 1];
                    return;
                }
            }
        }

    EDIT: version 2. I have another version of the code, but it still has a compile error. Any ideas?

        Error 1  Invalid rank specifier: expected ',' or ']'  C:\Users\lma\Documents\Visual Studio 2008\Projects\TestArray1\TestArray1\Program.cs  17  41  TestArray1
        Error 2  Invalid rank specifier: expected ',' or ']'  C:\Users\lma\Documents\Visual Studio 2008\Projects\TestArray1\TestArray1\Program.cs  17  44  TestArray1

        namespace TestArray1
        {
            class Foo
            {
            }

            class Program
            {
                static void Main(string[] args)
                {
                    Foo[][][] foos = new Foo[1][1][1];
                    return;
                }
            }
        }

    EDIT: version 3. I think I want to have a jagged array. After learning from the fellow guys, here is my code fix, and it compiles fine in VSTS 2008. What I want is a jagged array, and currently I need only one element. Could anyone review whether my code is correct to implement my goal, please?

        namespace TestArray1
        {
            class Foo
            {
            }

            class Program
            {
                static void Main(string[] args)
                {
                    Foo[][][] foos = new Foo[1][][];
                    foos[0] = new Foo[1][];
                    foos[0][0] = new Foo[1];
                    foos[0][0][0] = new Foo();
                    return;
                }
            }
        }

    thanks in advance, George

    Read the article

  • Building "isolated" and "automatically updated" caches (java.util.List) in Java.

    - by Aidos
    Hi Guys, I am trying to write a framework which contains a lot of short-lived caches created from a long-living cache. These short-lived caches need to be able to return their entire contents, which is a clone of the original long-living cache. Effectively what I am trying to build is a level of transaction isolation for the short-lived caches. The user should be able to modify the contents of the short-lived cache, but changes to the long-living cache should not be propagated through (there is also a case where the changes should be pushed through, depending on the Cache type). I will do my best to try and explain:

        master-cache contains:               [A,B,C,D,E,F]
        temporary-cache created with state:  [A,B,C,D,E,F]

        1) temporary-cache adds item G:      [A,B,C,D,E,F,G]
        2) temporary-cache removes item B:   [A,C,D,E,F,G]

        master-cache contains:               [A,B,C,D,E,F]

        3) master-cache adds items [X,Y,Z]:  [A,B,C,D,E,F,X,Y,Z]

        temporary-cache contains:            [A,C,D,E,F,G]

    Things get even harder when the values in the items can change and shouldn't always be updated (so I can't even share the underlying object instances; I need to use clones). I have implemented the simple approach of just creating a new instance of the List using the standard Collection constructor on ArrayList, but when you get out to about 200,000 items the system just runs out of memory. I know that 200,000 items is excessive to iterate, but I am trying to stress my code a bit. I had thought that I might be able to somehow "proxy" the list, so the temporary-cache uses the master-cache and stores all of its changes (effectively a Memento for each change); however, that quickly becomes a nightmare when you want to iterate the temporary-cache, or retrieve an item at a specific index. Also, given that I want some modifications to the contents of the list to come through (depending on the type of the temporary-cache, i.e. whether it is "auto-update" or not), I get completely out of my depth. Any pointers to techniques or data-structures or just general concepts to try and research will be greatly appreciated. Cheers, Aidos
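    A very rough sketch of the "proxy plus recorded changes" idea mentioned above, just to make the trade-off concrete (all names are hypothetical, not the framework's code, and it assumes items compare by equals/hashCode): reads merge the base list with the local deltas, which is exactly why iteration stays cheap while indexed access gets awkward.

        import java.util.ArrayList;
        import java.util.HashSet;
        import java.util.Iterator;
        import java.util.List;
        import java.util.NoSuchElementException;
        import java.util.Set;

        // Overlay view: reads fall through to the base (master) list, while adds and
        // removes are recorded locally and never touch the base.
        class OverlayList<T> implements Iterable<T> {
            private final List<T> base;                        // shared master contents
            private final List<T> added = new ArrayList<T>();  // local additions
            private final Set<T> removed = new HashSet<T>();   // local removals

            OverlayList(List<T> base) { this.base = base; }

            void add(T item)    { added.add(item); }
            void remove(T item) { if (!added.remove(item)) removed.add(item); }

            public Iterator<T> iterator() {
                final Iterator<T> baseIt = base.iterator();
                final Iterator<T> addIt  = added.iterator();
                return new Iterator<T>() {
                    private T next;
                    private boolean hasNext = advance();

                    private boolean advance() {
                        while (baseIt.hasNext()) {             // base items not locally removed
                            T candidate = baseIt.next();
                            if (!removed.contains(candidate)) { next = candidate; return true; }
                        }
                        if (addIt.hasNext()) { next = addIt.next(); return true; }
                        return false;
                    }
                    public boolean hasNext() { return hasNext; }
                    public T next() {
                        if (!hasNext) throw new NoSuchElementException();
                        T result = next;
                        hasNext = advance();
                        return result;
                    }
                    public void remove() { throw new UnsupportedOperationException(); }
                };
            }
        }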

    Read the article

  • Crystal Reports error 7 "Out of memory"

    - by pwee167
    Hi guys, I have a VB6 application, reportwriter, that runs a crystal report and outputs a pdf. I have a .net 3.5 asp application that calls the reportwriter dll. I am getting an error saying 7 "Out of memory", when trying to load the report. I have 4gb of memory and running windows 7 (64 bit). I also have Crystals Reports 11 with service pack 4 installed. I have registered the dll and referenced it in my project. I have run out of ideas what could be causing this. I need some help. Any advice will be very much appreciated. I suspect it has nothing to do with Crystal Reports, but I cannot be sure. Has anyone come across this error message/issue? Also, I have a friend running the identical project in almost identical PC (bar a few different personal applications installed). It works for him. Cheers.

    Read the article

  • How strict should I be in the "do the simplest thing that could possibly work" while doing TDD

    - by Support - multilanguage SO
    For TDD you have to:

    1. Create a test that fails.
    2. Do the simplest thing that could possibly work to pass the test.
    3. Add more variants of the test and repeat.
    4. Refactor when a pattern emerges.

    With this approach you're supposed to cover all the cases (that come to my mind, at least), but I wonder if I am being too strict here, and whether it is possible to "think ahead" to some scenarios instead of simply discovering them. For instance, I'm processing a file, and if it doesn't conform to a certain format I am to throw an InvalidFormatException. So my first test was:

        @Test
        void testFormat(){
            // empty doesn't do anything...
            processor.validate("empty.txt");
            try {
                processor.validate("invalid.txt");
                assert false: "Should have thrown InvalidFormatException";
            } catch( InvalidFormatException ife ) {
                assert "Invalid format".equals( ife.getMessage() );
            }
        }

    I run it and it fails because it doesn't throw an exception. So the next thing that comes to my mind is "do the simplest thing that could possibly work", so I:

        public void validate( String fileName ) throws InvalidFormatException {
            if(fileName.equals("invalid.txt")) {
                throw new InvalidFormatException("Invalid format");
            }
        }

    Doh!! (Although the real code is a bit more complicated, I found myself doing something like this several times.) I know that I have to eventually add another file name and another test that would make this approach impractical and force me to refactor to something that makes sense (which, if I understood correctly, is the point of TDD: to discover the patterns the usage unveils), but: Q: am I taking the "Do the simplest thing..." stuff too literally?
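    For illustration only (this example is not from the original post, and the file name is made up): the usual way this plays out is that the very next test forces the hard-coded branch out, because a second malformed file can no longer be special-cased by name:

        @Test
        void testFormatWithAnotherInvalidFile(){
            try {
                // a second malformed file that validate() never mentions by name
                processor.validate("also-invalid.txt");
                assert false: "Should have thrown InvalidFormatException";
            } catch( InvalidFormatException ife ) {
                assert "Invalid format".equals( ife.getMessage() );
            }
        }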

    Read the article

  • Why am I getting a ClassCastException here?

    - by Holly
    I'm creating an Android application and I'm trying to use a PreferenceActivity. I'm getting the java.lang.ClassCastException: java.lang.String error when it gets to this line:

        return PreferenceManager.getDefaultSharedPreferences(context).getInt(USER_TERM, USER_TERM_DEF);

    I thought it might be because I'm not converting it properly from its string value in the EditText box to the int I need, but I can't figure out how to fix this -- or is that even the cause? I'm flummoxed. Here's my preference activity class:

        public class UserPrefs extends PreferenceActivity {
            //option names and default vals
            private static final String USER_TERM = "term_length";
            private static final String USER_YEAR = "year_length";
            private static final int USER_TERM_DEF = 12;
            private static final int USER_YEAR_DEF = 34;

            @Override
            protected void onCreate(Bundle savedInstanceState){
                super.onCreate(savedInstanceState);
                addPreferencesFromResource(R.xml.preferences);
            }

            public static int getTermLength(Context context){
                try{
                    return PreferenceManager.getDefaultSharedPreferences(context).getInt(USER_TERM, USER_TERM_DEF);
                }catch (ClassCastException e){
                    Log.v("getTermLength", "error::: " + e);
                }
                //return 1;//temporary, needed to catch the error and couldn't without adding a return outside the block.//
            }

            public static int getYearLength(Context context){
                return PreferenceManager.getDefaultSharedPreferences(context).getInt(USER_YEAR, USER_YEAR_DEF);
            }

    And here's the bit of code where I'm trying to use the preferences inside another class:

        public float alterForFrequency(Context keypadContext, float enteredAmount, String spinnerPosition){
            int termLength = UserPrefs.getTermLength(keypadContext);
            int yearLength = UserPrefs.getYearLength(keypadContext);
        }

    I've uploaded the complete Android logcat here: http://freetexthost.com/v3t4ta3wbi
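    As an aside (not part of the original question): if term_length is backed by an EditTextPreference in preferences.xml, its value is persisted as a String, so calling getInt on that key throws exactly this ClassCastException. A sketch of the usual workaround, assuming that is indeed how the preference is stored -- a drop-in replacement for getTermLength in the class above (it would also need an android.content.SharedPreferences import):

        public static int getTermLength(Context context) {
            SharedPreferences prefs = PreferenceManager.getDefaultSharedPreferences(context);
            try {
                // EditTextPreference saves its value as a String, even when it looks numeric.
                return Integer.parseInt(prefs.getString(USER_TERM, String.valueOf(USER_TERM_DEF)));
            } catch (NumberFormatException e) {
                return USER_TERM_DEF; // fall back to the default on malformed input
            }
        }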

    Read the article

  • Why do pseudo-elements :before and :after appear in front of my DIV element and not behind?

    - by Dim13i
    I have a DIV element with this given class:

        .slideshow {
            background: white;
            width: 700px;
            height: 300px;
            padding: 10px;
            margin: 0 auto;
            position: relative;
            box-shadow: 0px 0px 5px grey, 0px 10px 20px rgba(0,0,0,.1) inset;
        }

    I define these two pseudo-elements (:before and :after):

        .slideshow:before, .slideshow:after {
            content: " ";
            background: green;
            width: 50%;
            height: 50%;
            position: absolute;
            z-index: -10;
        }

    My problem is that those two pseudo-elements appear in front of my DIV and not behind it. Is there any specific reason for this behaviour? Here is an example: EXAMPLE. The JavaScript part is a bit messy, but I'm still working on it. I've also noticed that if I delete all the JS, I don't have this problem any more, but I don't think there is anything in the code that should modify the slideshow DIV.

    SOLVED: The problem is in the JavaScript part, where I have:

        $('.slideshow').css('-webkit-transform', 'rotate(0deg)');

    Removing this line solved the problem. I guess that the pseudo-elements :before and :after are not compatible with the transform property.

    Read the article

  • Segmentation Fault?

    - by user336808
    Hello, when I run this program while inputting a number greater than 46348, I get a segmentation fault. For any values below it, the program works perfectly. I am using CodeBlocks 8.02 on Ubuntu 10.04 64-bit. The code is as follows:

        int main()
        {
            int number = 46348;
            vector<bool> sieve(number+1,false);
            vector<int> primes;
            sieve[0] = true;
            sieve[1] = true;
            for(int i = 2; i <= number; i++)
            {
                if(sieve[i]==false)
                {
                    primes.push_back(i);
                    int temp = i*i;
                    while(temp <= number)
                    {
                        sieve[temp] = true;
                        temp = temp + i;
                    }
                }
            }
            for(int i = 0; i < primes.size(); i++)
                cout << primes[i] << " ";
            return 0;
        }

    Read the article

  • Checkboxes in ADF are initially null, where I want them to be 0

    - by Mark Tielemans
    I am using ADF in JDeveloper and don't have any experience with either of the two. I've run into quite a bit of trouble already, but for this particular thing I decided to consult the wisdom of stackoverflow. The thing is, I have an edit form for an object that contains 3 checkboxes. The checked values are set to 1, unchecked to 0. In my database, the values are NOT NULL, and I want to keep it that way. The problem is that in the edit form, if the user submits the form leaving any boxes unchecked, it will result in an error, because the unchecked box values apparently remain null. Only after checking and then unchecking the boxes again will their values be '0' rather than null. I've tried some things, including making the attributes mandatory in the domain BCD, but that just gives a slightly neater error message. Any help would be greatly appreciated!!

    EDIT: I made a little progress thanks to the guide provided by Joe, but I still run into problems. I changed the values that should be checkboxes in my model, making them BOOLEANs where the table columns are NUMBERs (all are also mandatory and have a default value of 0). This automatically changed the corresponding View Object too. In the Application Module this now works great: it shows checkboxes, a checked one will return 1, and an untouched one will return 0. However, I deleted the old form and inserted a new one using the corresponding Data Control. I gave these values the checkbox type. I still had to edit the bindings (which I think reflects the problem, as this is not the case with, say, a model-level defined LOV) and gave them 1 for checked and 0 for unchecked. However, now, apart from the original problem still occurring, the checkboxes also cannot be unchecked after checking, and they return 0 when checked (and null when left untouched). Even though this has created new problems, it works correctly in my AM. Does someone know what I'm doing wrong in my Swing form?

    Read the article

  • Android opengl releasing textures

    - by user1642418
    I have a bit of a problem. I am developing a game and engine for Android and I got stuck. I am getting an OpenGL out-of-memory error, and either the app crashes or the phone hangs after loading a scene multiple times. For example: the app launches, shows the main menu, and the 1st level/scene is loaded. Then I go back to the main menu and repeat. It doesn't matter which scene I load; after 4-6 times the error occurs. Some background: each time a scene is loaded, all the resources are released, and the needed resources get loaded when the first frame is rendered. The performance is more or less OK. Note that I am calling the glDeleteTextures method, but I think it's not doing its job of releasing the memory. The thing is, when I minimize the app and open it again the problem doesn't occur, even though almost the same things are executed; this way Android releases the memory. How do I release/get rid of unused textures properly? This happens on an HTC Desire HD (Ice Cream Sandwich 4.0.4). Other games work fine, so I bet the problem is not in the ROM.
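    A sketch only, assuming a GLSurfaceView-based renderer (the question doesn't say what the engine uses): glDeleteTextures only frees memory when it runs on the thread that owns the GL context, so deletions triggered from UI or game-logic code are usually routed through queueEvent. The helper name and structure below are hypothetical:

        // Delete a batch of texture names on the GL thread that owns the context.
        void releaseTextures(android.opengl.GLSurfaceView view, final int[] textureIds) {
            view.queueEvent(new Runnable() {
                @Override
                public void run() {
                    // Without a current EGL context on the calling thread, the delete
                    // call has no effect and the texture memory is never reclaimed.
                    android.opengl.GLES20.glDeleteTextures(textureIds.length, textureIds, 0);
                }
            });
        }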

    Read the article

  • Referenced vector does not pass through functions

    - by kylepayne
    The vector passed to the functions by reference does not hold the information in memory. Do I have to use pointers? Thanks.

        #include <iostream>
        #include <cstdlib>
        #include <vector>
        #include <string>
        using namespace std;

        void menu();
        void addvector(vector<string>& vec);
        void subvector(vector<string>& vec);
        void vectorsize(const vector<string>& vec);
        void printvec(const vector<string>& vec);
        void printvec_bw(const vector<string>& vec);

        int main()
        {
            vector<string> svector;
            menu();
            return 0;
        }

        //functions definitions
        void menu()
        {
            vector<string> svector;
            int choice = 0;
            cout << "Thanks for using this program! \n"
                 << "Enter 1 to add a string to the vector \n"
                 << "Enter 2 to remove the last string from the vector \n"
                 << "Enter 3 to print the vector size \n"
                 << "Enter 4 to print the contents of the vector \n"
                 << "Enter 5 ----------------------------------- backwards \n"
                 << "Enter 6 to end the program \n";
            cin >> choice;
            switch(choice)
            {
                case 1:
                    addvector(svector);
                    menu();
                    break;
                case 2:
                    subvector(svector);
                    menu();
                    break;
                case 3:
                    vectorsize(svector);
                    menu();
                    break;
                case 4:
                    printvec(svector);
                    menu();
                    break;
                case 5:
                    printvec_bw(svector);
                    menu();
                    break;
                case 6:
                    exit(1);
                default:
                    cout << "not a valid choice \n";
                    // menu is structured so that all other functions are called from it.
            }
        }

        void addvector(vector<string>& vec)
        {
            //string line;
            //int i = 0;
            //cin.ignore(1, '\n');
            //cout << "Enter the string please \n";
            //getline(cin, line);
            vec.push_back("the police man's beard is half-constructed");
        }

        void subvector(vector<string>& vec)
        {
            vec.pop_back();
            return;
        }

        void vectorsize(const vector<string>& vec)
        {
            if (vec.empty())
            {
                cout << "vector is empty";
            }
            else
            {
                cout << vec.size() << endl;
            }
            return;
        }

        void printvec(const vector<string>& vec)
        {
            for(int i = 0; i < vec.size(); i++)
            {
                cout << vec[i] << endl;
            }
            return;
        }

        void printvec_bw(const vector<string>& vec)
        {
            for(int i = vec.size(); i > 0; i--)
            {
                cout << vec[i] << endl;
            }
            return;
        }

    Read the article
