Search Results

Search found 6241 results on 250 pages for 'unsigned integer'.

Page 167/250

  • Create new field in a table that already exists - flex/air sqlite?

    - by Adam
    I've got a flex/air app I've been working on; it uses a local sqlite database that is created on the initial application start. I've added some features to the application, and in the process I had to add a new field to one of the database tables. My question is: how do I go about getting the application to create one new field in a table that already exists? This is the line that creates the table:

        stmt.text = "CREATE TABLE IF NOT EXISTS tbl_status (" +
                    "status_id INTEGER PRIMARY KEY AUTOINCREMENT," +
                    " status_status TEXT)";

    And now I'd like to add a status_default field. Thanks!

    @MPelletier: I've added the code you provided and it does add the field, but now the next time I restart my app I get an error: 'status_default' already exists. So how can I go about adding some sort of an IF NOT EXISTS statement to the line you provided?
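    SQLite's ALTER TABLE has no IF NOT EXISTS clause for columns, so one common workaround is simply to attempt the ALTER and swallow the duplicate-column error on later runs. A minimal ActionScript sketch, assuming a synchronous SQLConnection and a prepared SQLStatement named stmt as in the question:

        import flash.errors.SQLError;

        try {
            // Fails harmlessly on every run after the first,
            // once the column already exists.
            stmt.text = "ALTER TABLE tbl_status ADD COLUMN status_default TEXT";
            stmt.execute();
        } catch (e:SQLError) {
            // 'status_default' already exists - nothing to do.
        }

    Alternatively, SQLConnection.loadSchema() can be used to inspect the existing columns before altering.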

    Read the article

  • Instantiating class with custom allocator in shared memory

    - by recipriversexclusion
    I'm pulling my hair out over the following problem: I am following the example given in the boost.interprocess documentation to instantiate, in shared memory, a fixed-size ring buffer class that I wrote. The skeleton constructor for my class is:

        template<typename ItemType, class Allocator >
        SharedMemoryBuffer<ItemType, Allocator>::SharedMemoryBuffer( unsigned long capacity ){
            m_capacity = capacity;
            // Create the buffer nodes.
            m_start_ptr = this->allocator->allocate(); // allocate first buffer node
            BufferNode* ptr = m_start_ptr;
            for( int i = 0 ; i < this->capacity()-1; i++ ) {
                BufferNode* p = this->allocator->allocate(); // allocate a buffer node
            }
        }

    My first question: does this sort of allocation guarantee that the buffer nodes are allocated in contiguous memory locations, i.e. when I try to access the n'th node from address m_start_ptr + n*sizeof(BufferNode) in my Read() method, would it work? If not, what's a better way to keep the nodes - creating a linked list? My test harness is the following:

        // Define an STL compatible allocator of ints that allocates from the managed_shared_memory.
        // This allocator will allow placing containers in the segment
        typedef allocator<int, managed_shared_memory::segment_manager> ShmemAllocator;

        // Alias a vector that uses the previous STL-like allocator so that it allocates
        // its values from the segment
        typedef SharedMemoryBuffer<int, ShmemAllocator> MyBuf;

        int main(int argc, char *argv[])
        {
            shared_memory_object::remove("MySharedMemory");

            // Create a new segment with given name and size
            managed_shared_memory segment(create_only, "MySharedMemory", 65536);

            // Initialize shared memory STL-compatible allocator
            const ShmemAllocator alloc_inst (segment.get_segment_manager());

            // Construct a buffer named "MyBuffer" in shared memory with argument alloc_inst
            MyBuf *pBuf = segment.construct<MyBuf>("MyBuffer")(100, alloc_inst);
        }

    This gives me all kinds of compilation errors related to templates for the last statement. What am I doing wrong?
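    On the contiguity point: STL-style allocators make no guarantee across separate allocate() calls, but boost.interprocess's allocator takes an element count, so a contiguous block can be requested in a single call. A hedged sketch of that allocation (the rebind to BufferNode and the member names are assumptions about the class's internals):

        // One allocate() call yields one contiguous block of 'capacity' nodes,
        // so m_start_ptr + n addressing is then valid. The allocator returns an
        // offset_ptr; &* converts it to a raw pointer for this process.
        typedef typename Allocator::template rebind<BufferNode>::other NodeAllocator;
        NodeAllocator node_alloc(*this->allocator);
        m_start_ptr = &*node_alloc.allocate(capacity);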

    Read the article

  • Master / Detail datagridview with relation from type string in C#

    - by Daniel
    Hello, I want to show a master/detail relationship using two DataGridViews and a DataRelation in C#. The relation between the master and the detail table is an ID of type string (and there is no chance to change the ID to type integer). It seems like the DataGridView is not able to update the detail view when changing the row in the master table. Does anybody know if it is possible to achieve a master/detail view using a string ID, and if yes, how? Or do I have to use an external DataGrid from another company? Thanks for any help. Daniel
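    A DataRelation itself is type-agnostic, so string keys should work; the detail grid only follows the master row when it is bound through the master's BindingSource rather than directly to the detail table. A minimal C# sketch (table and column names hypothetical):

        var relation = new DataRelation("MasterDetail",
            ds.Tables["Master"].Columns["StringId"],
            ds.Tables["Detail"].Columns["StringId"]);
        ds.Relations.Add(relation);

        var masterBinding = new BindingSource(ds, "Master");
        // Binding the detail source to the relation name makes the
        // detail grid track the current master row automatically.
        var detailBinding = new BindingSource(masterBinding, "MasterDetail");

        masterGrid.DataSource = masterBinding;
        detailGrid.DataSource = detailBinding;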

    Read the article

  • Instruments (Leaks) and NSDateFormatter

    - by Cal
    When I run my iPhone app with Instruments Leaks and parse a bunch of NSDates using NSDateFormatter, my memory goes up about 1MB and stays there, even though these NSDates should be dealloc'd after the parsing (I just discard them if they aren't new). I thought the malloc (in my heaviest stack trace below) could become part of the NSDate, but I also thought it could be memory that is only used during some intermediate step in parsing. Does anyone know which one it is, or how to find out? Also, is there a way to put a breakpoint on NSDate dealloc to see if that memory is really being reclaimed? Here's what my date formatter looks like for parsing these dates:

        df = [[NSDateFormatter alloc] init];
        [df setDateFormat:@"EEE, d MMM yyyy H:m:s z"];

    Here's the heaviest stack trace when the memory bumps up and stays there:

        0  libSystem.B.dylib   208.80 Kb  malloc
        1  libicucore.A.dylib  868.19 Kb  icu::ZoneMeta::getSingleCountry(icu::UnicodeString const&, icu::UnicodeString&)
        2  libicucore.A.dylib  868.66 Kb  icu::ZoneMeta::getSingleCountry(icu::UnicodeString const&, icu::UnicodeString&)
        3  libicucore.A.dylib  868.67 Kb  icu::ZoneMeta::getSingleCountry(icu::UnicodeString const&, icu::UnicodeString&)
        4  libicucore.A.dylib  868.67 Kb  icu::DateFormatSymbols::initZoneStringFormat()
        5  libicucore.A.dylib  868.67 Kb  icu::DateFormatSymbols::getZoneStringFormat() const
        6  libicucore.A.dylib  868.67 Kb  icu::SimpleDateFormat::subParse(icu::UnicodeString const&, int&, unsigned short, int, signed char, signed char, signed char*, icu::Calendar&) const
        7  libicucore.A.dylib  868.67 Kb  icu::SimpleDateFormat::parse(icu::UnicodeString const&, icu::Calendar&, icu::ParsePosition&) const
        8  libicucore.A.dylib  868.67 Kb  icu::DateFormat::parse(icu::UnicodeString const&, icu::ParsePosition&) const
        9  libicucore.A.dylib  868.67 Kb  udat_parse
        10 CoreFoundation      868.67 Kb  CFDateFormatterGetAbsoluteTimeFromString
        11 CoreFoundation      868.67 Kb  CFDateFormatterCreateDateFromString
        12 Foundation          868.67 Kb  -[NSDateFormatter getObjectValue:forString:range:error:]
        13 Foundation          868.75 Kb  -[NSDateFormatter getObjectValue:forString:errorDescription:]
        14 Foundation          868.75 Kb  -[NSDateFormatter dateFromString:]

    Thanks!

    Read the article

  • OpenGL triangle instead of square

    - by Dave
    I'm trying to create a spinning square inside of Xcode using OpenGL, but instead for some reason I have a spinning triangle? I'm doing this inside of SIO2 but I don't think this is the problem. Here is the triangle: http://img220.imageshack.us/img220/7051/snapzproxscreensnapz001.png Here is my code:

        void templateRender( void ) {
            const GLfloat squareVertices[] = {
                 100.0f, -100.0f,
                 100.0f, -100.0f,
                -100.0f,  100.0f,
                 100.0f,  100.0f,
            };
            const unsigned char squareColors[] = {
                255, 255,   0, 255,
                  0, 255, 255, 255,
                  0,   0,   0,   0,
                255,   0, 255, 255,
            };

            glMatrixMode( GL_MODELVIEW );
            glLoadIdentity();

            glClear( GL_DEPTH_BUFFER_BIT | GL_COLOR_BUFFER_BIT );

            // Your rendering code here...
            sio2WindowEnter2D( sio2->_SIO2window, 0.0f, 1.0f );
            {
                glVertexPointer( 2, GL_FLOAT, 0, squareVertices );
                glEnableClientState(GL_VERTEX_ARRAY);

                // set up the color array
                glColorPointer( 4, GL_UNSIGNED_BYTE, 0, squareColors );
                glEnableClientState( GL_COLOR_ARRAY );

                glTranslatef( sio2->_SIO2window->scl->x * 0.5f,
                              sio2->_SIO2window->scl->y * 0.5f,
                              0.0f );

                static float rotz = 0.0f;
                glRotatef( rotz, 0.0f, 0.0f, 1.0f );
                rotz += 90.0f * sio2->_SIO2window->d_time;

                glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);
            }
            sio2WindowLeave2D();
        }
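    Formatted in coordinate pairs as above, the vertex array lists (100, -100) twice, so the first triangle of the strip is degenerate and only the second triangle gets drawn. A GL_TRIANGLE_STRIP needs four distinct corners in zig-zag order; a hedged sketch of what the array was probably meant to contain (keeping the question's coordinates):

        const GLfloat squareVertices[] = {
            -100.0f, -100.0f,   // bottom-left
             100.0f, -100.0f,   // bottom-right
            -100.0f,  100.0f,   // top-left
             100.0f,  100.0f,   // top-right
        };

    Note also that the third color (0, 0, 0, 0) has zero alpha, which would make one corner transparent if blending were enabled.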

    Read the article

  • Insert element into a tree from a list in Standard ML

    - by vichet
    I have just started to learn SML on my own and got stuck with a question from the tutorial. Let's say I have:

    a tree datatype

        datatype tree = node of (tree * int * tree) | null

    an insert function

        fun insert (newItem, null) = node (null, newItem, null)
          | insert (newItem, node (left, oldItem, right)) =
                if (newItem <= oldItem)
                then node (insert(newItem, left), oldItem, right)
                else node (left, oldItem, insert(newItem, right))

    an integer list

        val intList = [19,23,21,100,2];

    My question is: how can I write a function that loops through each element in the list and adds it to a tree? Your answer is really appreciated.
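    A fold does exactly this, since insert already takes an (item, tree) pair in the shape List.foldl expects. A minimal Standard ML sketch:

        (* Insert every list element into a tree, starting from the empty tree. *)
        val myTree = List.foldl insert null intList;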

    Read the article

  • Parsing "true" and "false" using Boost.Spirit.Lex and Boost.Spirit.Qi

    - by Andrew Ross
    As the first stage of a larger grammar using Boost.Spirit, I'm trying to parse "true" and "false" to produce the corresponding bool values, true and false. I'm using Spirit.Lex to tokenize the input and have a working implementation for integer and floating point literals (including those expressed in a relaxed scientific notation), exposing int and float attributes.

    Token definitions

        #include <boost/spirit/include/lex_lexertl.hpp>

        namespace lex = boost::spirit::lex;

        typedef boost::mpl::vector<int, float, bool> token_value_type;

        template <typename Lexer>
        struct basic_literal_tokens : lex::lexer<Lexer>
        {
            basic_literal_tokens()
            {
                this->self.add_pattern("INT", "[-+]?[0-9]+");
                int_literal = "{INT}";
                // To be lexed as a float a numeric literal must have a decimal point
                // or include an exponent, otherwise it will be considered an integer.
                float_literal = "{INT}(((\\.[0-9]+)([eE]{INT})?)|([eE]{INT}))";
                literal_true = "true";
                literal_false = "false";

                this->self = literal_true | literal_false | float_literal | int_literal;
            }

            lex::token_def<int> int_literal;
            lex::token_def<float> float_literal;
            lex::token_def<bool> literal_true, literal_false;
        };

    Testing parsing of float literals

    My real implementation uses Boost.Test, but this is a self-contained example.

        #include <string>
        #include <iostream>
        #include <cmath>
        #include <cstdlib>
        #include <limits>

        bool parse_and_check_float(std::string const & input, float expected)
        {
            typedef std::string::const_iterator base_iterator_type;
            typedef lex::lexertl::token<base_iterator_type, token_value_type > token_type;
            typedef lex::lexertl::lexer<token_type> lexer_type;

            basic_literal_tokens<lexer_type> basic_literal_lexer;

            base_iterator_type input_iter(input.begin());
            float actual;
            bool result = lex::tokenize_and_parse(input_iter, input.end(),
                                                  basic_literal_lexer,
                                                  basic_literal_lexer.float_literal,
                                                  actual);
            return result && std::abs(expected - actual) < std::numeric_limits<float>::epsilon();
        }

        int main(int argc, char *argv[])
        {
            if (parse_and_check_float("+31.4e-1", 3.14)) {
                return EXIT_SUCCESS;
            } else {
                return EXIT_FAILURE;
            }
        }

    Parsing "true" and "false"

    My problem is when trying to parse "true" and "false". This is the test code I'm using (after removing the Boost.Test parts):

        bool parse_and_check_bool(std::string const & input, bool expected)
        {
            typedef std::string::const_iterator base_iterator_type;
            typedef lex::lexertl::token<base_iterator_type, token_value_type > token_type;
            typedef lex::lexertl::lexer<token_type> lexer_type;

            basic_literal_tokens<lexer_type> basic_literal_lexer;

            base_iterator_type input_iter(input.begin());
            bool actual;
            lex::token_def<bool> parser = expected ? basic_literal_lexer.literal_true
                                                   : basic_literal_lexer.literal_false;
            bool result = lex::tokenize_and_parse(input_iter, input.end(),
                                                  basic_literal_lexer, parser, actual);
            return result && actual == expected;
        }

    but compilation fails with:

        boost/spirit/home/qi/detail/assign_to.hpp: In function ‘void boost::spirit::traits::assign_to(const Iterator&, const Iterator&, Attribute&) [with Iterator = __gnu_cxx::__normal_iterator<const char*, std::basic_string<char, std::char_traits<char>, std::allocator<char> > >, Attribute = bool]’:
        boost/spirit/home/lex/lexer/lexertl/token.hpp:434: instantiated from ‘static void boost::spirit::traits::assign_to_attribute_from_value<Attribute, boost::spirit::lex::lexertl::token<Iterator, AttributeTypes, HasState>, void>::call(const boost::spirit::lex::lexertl::token<Iterator, AttributeTypes, HasState>&, Attribute&) [with Attribute = bool, Iterator = __gnu_cxx::__normal_iterator<const char*, std::basic_string<char, std::char_traits<char>, std::allocator<char> > >, AttributeTypes = boost::mpl::vector<int, float, bool, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na>, HasState = mpl_::bool_<true>]’
        ... backtrace of instantiation points ...
        boost/spirit/home/qi/detail/assign_to.hpp:79: error: no matching function for call to ‘boost::spirit::traits::assign_to_attribute_from_iterators<bool, __gnu_cxx::__normal_iterator<const char*, std::basic_string<char, std::char_traits<char>, std::allocator<char> > >, void>::call(const __gnu_cxx::__normal_iterator<const char*, std::basic_string<char, std::char_traits<char>, std::allocator<char> > >&, const __gnu_cxx::__normal_iterator<const char*, std::basic_string<char, std::char_traits<char>, std::allocator<char> > >&, bool&)’
        boost/spirit/home/qi/detail/construct.hpp:64: note: candidates are: static void boost::spirit::traits::assign_to_attribute_from_iterators<bool, Iterator, void>::call(const Iterator&, const Iterator&, char&) [with Iterator = __gnu_cxx::__normal_iterator<const char*, std::basic_string<char, std::char_traits<char>, std::allocator<char> > >]

    My interpretation of this is that Spirit.Qi doesn't know how to convert a string to a bool - surely that's not the case? Has anyone else done this before? If so, how?
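    The error does mean exactly that: there is no built-in conversion from a token's iterator range to bool. One approach often suggested for this situation is to supply the missing trait specialization yourself; a hedged C++ sketch (since the token text can only be "true" or "false", the first character decides):

        namespace boost { namespace spirit { namespace traits
        {
            template <typename Iterator>
            struct assign_to_attribute_from_iterators<bool, Iterator>
            {
                static void call(Iterator const& first, Iterator const& /*last*/, bool& attr)
                {
                    attr = (*first == 't');   // "true" -> true, "false" -> false
                }
            };
        }}}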

    Read the article

  • Decimal numbers works in iPhone simulator but NOT on iPhone device

    - by matsoftware
    Hi everybody, I noticed a weird behaviour of iPhone OS when using decimal values. The simulator parses them from strings in a correct way, but when I test the app on my iPhone it loses the decimal part. In particular, I store values in a dictionary that I retrieve in this way:

        NSString *thickStr = [dictionary valueForKey:@"thickness"];
        NSNumber *thickNum = [[[self class] numberFormatter] numberFromString:thickStr];
        [self setSpessore:thickNum];

    where the "numberFormatter" class method is defined as below:

        + (NSNumberFormatter *)numberFormatter {
            static NSNumberFormatter *_formatter;
            if (_formatter == nil) {
                _formatter = [[NSNumberFormatter alloc] init];
                [_formatter setNumberStyle:NSNumberFormatterDecimalStyle];
                [_formatter setFormatterBehavior:NSNumberFormatterBehavior10_4];
                [_formatter setGeneratesDecimalNumbers:TRUE];
            }
            return _formatter;
        }

    But it doesn't work! The app on the iPhone keeps converting the string to a simple integer, forgetting the decimal part, while the app on the iPhone simulator works fine!
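    A classic cause of exactly this simulator/device difference is the locale: if the device's region uses ',' as the decimal separator, numberFromString: stops parsing "12.5" at the '.' and returns 12. A hedged one-line fix, assuming the stored strings always use '.' as the separator (pre-ARC syntax to match the question):

        [_formatter setLocale:
            [[[NSLocale alloc] initWithLocaleIdentifier:@"en_US_POSIX"] autorelease]];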

    Read the article

  • MySQL index cardinality - performance vs storage efficiency

    - by Sean
    Say you have a MySQL 5.0 MyISAM table with 100 million rows, with one index (other than the primary key) on two integer columns. From my admittedly poor understanding of B-tree structure, I believe that a lower cardinality means the storage efficiency of the index is better, because there are fewer parent nodes. Whereas a higher cardinality means less efficient storage, but faster read performance, because it has to navigate through fewer branches to get to whatever data it is looking for to narrow down the rows for the query. (Note - by "low" vs "high", I don't mean e.g. 1 million vs 99 million for a 100 million row table. I mean more like 90 million vs 95 million.) Is my understanding correct? Related question - how does cardinality affect write performance?

    Read the article

  • Java binary files writeUTF... explain specifications...

    - by user69514
    I'm studying Java on my own. One of the exercises is the following; however, I do not really understand what it is asking for... are there any smart Java gurus out there who could explain this in more detail and simpler words? Thanks.

    Suppose that you have a binary file that contains numbers whose type is either int or double. You don't know the order of the numbers in the file, but their order is recorded in a string at the beginning of the file. The string is composed of the letters i for int, and d for double, in the order of the types of the subsequent numbers. The string is written using the method writeUTF. For example the string "iddiiddd" indicates that the file contains eight values, as follows: one integer, followed by two doubles, followed by two integers, followed by three doubles. Read this binary file and create a new text file of the values written one to a line.
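    In other words: the exercise wants readUTF() called once on a DataInputStream to get the type map, then readInt() or readDouble() per letter. A minimal Java sketch (file names hypothetical):

        import java.io.*;

        public class ConvertBinary {
            public static void main(String[] args) throws IOException {
                DataInputStream in = new DataInputStream(new FileInputStream("numbers.bin"));
                PrintWriter out = new PrintWriter(new FileWriter("numbers.txt"));
                String order = in.readUTF();            // e.g. "iddiiddd"
                for (char c : order.toCharArray()) {
                    // 'i' -> next value is an int, 'd' -> next value is a double
                    out.println(c == 'i' ? String.valueOf(in.readInt())
                                         : String.valueOf(in.readDouble()));
                }
                out.close();
                in.close();
            }
        }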

    Read the article

  • log4j.xml configuration with <rollingPolicy> and <triggeringPolicy>

    - by Mike Smith
    I am trying to configure log4j.xml in such a way that the file will be rolled upon file size, and the rolled file's name will be e.g. "C:/temp/test/test_log4j-%d{yyyy-MM-dd-HH_mm_ss}.log". I followed this discussion: http://web.archiveorange.com/archive/v/NUYyjJipzkDOS3reRiMz

    Finally it worked for me only when I added:

        try {
            Thread.sleep(1);
        } catch (InterruptedException e) {
            e.printStackTrace();
        }

    to the method:

        public boolean isTriggeringEvent(Appender appender, LoggingEvent event,
                String filename, long fileLength)

    which makes it work. The question is whether there is a better way to make it work, since this method is called many times and slows my program down. Here is the code:

        package com.mypack.rolling;

        import org.apache.log4j.rolling.RollingPolicy;
        import org.apache.log4j.rolling.RolloverDescription;
        import org.apache.log4j.rolling.TimeBasedRollingPolicy;

        /**
         * Same as org.apache.log4j.rolling.TimeBasedRollingPolicy but acts only as
         * RollingPolicy and NOT as TriggeringPolicy.
         *
         * This allows us to combine this class with a size-based triggering policy
         * (decision to roll based on size, name of rolled files based on time)
         */
        public class CustomTimeBasedRollingPolicy implements RollingPolicy {

            TimeBasedRollingPolicy timeBasedRollingPolicy = new TimeBasedRollingPolicy();

            /**
             * Set file name pattern.
             * @param fnp file name pattern.
             */
            public void setFileNamePattern(String fnp) {
                timeBasedRollingPolicy.setFileNamePattern(fnp);
            }

            /* public void setActiveFileName(String fnp) {
                timeBasedRollingPolicy.setActiveFileName(fnp);
            } */

            /**
             * Get file name pattern.
             * @return file name pattern.
             */
            public String getFileNamePattern() {
                return timeBasedRollingPolicy.getFileNamePattern();
            }

            public RolloverDescription initialize(String file, boolean append) throws SecurityException {
                return timeBasedRollingPolicy.initialize(file, append);
            }

            public RolloverDescription rollover(String activeFile) throws SecurityException {
                return timeBasedRollingPolicy.rollover(activeFile);
            }

            public void activateOptions() {
                timeBasedRollingPolicy.activateOptions();
            }
        }

        package com.mypack.rolling;

        import org.apache.log4j.helpers.OptionConverter;
        import org.apache.log4j.Appender;
        import org.apache.log4j.rolling.TriggeringPolicy;
        import org.apache.log4j.spi.LoggingEvent;
        import org.apache.log4j.spi.OptionHandler;

        /**
         * Copy of org.apache.log4j.rolling.SizeBasedTriggeringPolicy but able to accept
         * a human-friendly value for maximumFileSize, eg. "10MB"
         *
         * Note that sub-classing SizeBasedTriggeringPolicy is not possible because that
         * class is final
         */
        public class CustomSizeBasedTriggeringPolicy implements TriggeringPolicy, OptionHandler {

            /**
             * Rollover threshold size in bytes.
             */
            private long maximumFileSize = 10 * 1024 * 1024; // let 10 MB be the default max size

            /**
             * Set the maximum size that the output file is allowed to reach before
             * being rolled over to backup files.
             *
             * <p>
             * In configuration files, the <b>MaxFileSize</b> option takes a long
             * integer in the range 0 - 2^63. You can specify the value with the
             * suffixes "KB", "MB" or "GB" so that the integer is interpreted as being
             * expressed respectively in kilobytes, megabytes or gigabytes. For example,
             * the value "10KB" will be interpreted as 10240.
             *
             * @param value
             *            the maximum size that the output file is allowed to reach
             */
            public void setMaxFileSize(String value) {
                maximumFileSize = OptionConverter.toFileSize(value, maximumFileSize + 1);
            }

            public long getMaximumFileSize() {
                return maximumFileSize;
            }

            public void setMaximumFileSize(long maximumFileSize) {
                this.maximumFileSize = maximumFileSize;
            }

            public void activateOptions() {
            }

            public boolean isTriggeringEvent(Appender appender, LoggingEvent event,
                    String filename, long fileLength) {
                try {
                    Thread.sleep(1);
                } catch (InterruptedException e) {
                    e.printStackTrace();
                }
                boolean result = (fileLength >= maximumFileSize);
                return result;
            }
        }

    and the log4j.xml:

        <?xml version="1.0" encoding="UTF-8" ?>
        <!DOCTYPE log4j:configuration SYSTEM "log4j.dtd">
        <log4j:configuration xmlns:log4j="http://jakarta.apache.org/log4j/" debug="true">

            <appender name="console" class="org.apache.log4j.ConsoleAppender">
                <param name="Target" value="System.out" />
                <layout class="org.apache.log4j.PatternLayout">
                    <param name="ConversionPattern" value="%d [%t] %-5p %c -> %m%n" />
                </layout>
            </appender>

            <appender name="FILE" class="org.apache.log4j.rolling.RollingFileAppender">
                <param name="file" value="C:/temp/test/test_log4j.log" />
                <rollingPolicy class="com.mypack.rolling.CustomTimeBasedRollingPolicy">
                    <param name="fileNamePattern" value="C:/temp/test/test_log4j-%d{yyyy-MM-dd-HH_mm_ss}.log" />
                </rollingPolicy>
                <triggeringPolicy class="com.mypack.rolling.CustomSizeBasedTriggeringPolicy">
                    <param name="MaxFileSize" value="200KB" />
                </triggeringPolicy>
                <layout class="org.apache.log4j.PatternLayout">
                    <param name="ConversionPattern" value="%d [%t] %-5p %c -> %m%n" />
                </layout>
            </appender>

            <logger name="com.mypack.myrun" additivity="false">
                <level value="debug" />
                <appender-ref ref="FILE" />
            </logger>

            <root>
                <priority value="debug" />
                <appender-ref ref="console" />
            </root>

        </log4j:configuration>
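    The sleep presumably works because the time-based file name pattern only has one-second resolution: two rollovers inside the same second would target the same file name. If so, a cheaper guard than sleeping on every call is to rate-limit the trigger itself. A hedged Java sketch (lastRollTime is a new field; the one-second floor matches the HH_mm_ss pattern):

        private long lastRollTime = 0;

        public boolean isTriggeringEvent(Appender appender, LoggingEvent event,
                String filename, long fileLength) {
            // Trigger only when the size threshold is reached, and at most once
            // per second, so the seconds-resolution file name cannot collide.
            long now = System.currentTimeMillis();
            if (fileLength >= maximumFileSize && now - lastRollTime >= 1000) {
                lastRollTime = now;
                return true;
            }
            return false;
        }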

    Read the article

  • How to use json object notation to retrieve dbpedia json data

    - by Margi
    In my php code, I am retrieving json data as below:

        <?php
        $url = "http://dbpedia.org/data/Los_Angeles.json";
        $data = file_get_contents($url);
        echo $data;
        ?>

    The javascript code consumes this json data returned from php and gets the json object as below:

        var doc = eval('(' + request.responseText + ')');

    How do I retrieve the following json data using dot notation? The keys contain URLs.

        "http://dbpedia.org/ontology/populationTotal" : [
            { "type" : "literal", "value" : 3792621,
              "datatype" : "http://www.w3.org/2001/XMLSchema#integer" } ],
        "http://dbpedia.org/ontology/PopulatedPlace/areaTotal" : [
            { "type" : "literal", "value" : "1301.9688931491348",
              "datatype" : "http://dbpedia.org/datatype/squareKilometre" }
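    Dot notation cannot express keys containing '/', ':' or '#', so property names like these have to be read with bracket notation instead. A short JavaScript sketch against the structure above (the outer resource key is an assumption about dbpedia's usual JSON layout; drop it if doc is already the inner map):

        var resource = doc["http://dbpedia.org/resource/Los_Angeles"];
        // Each predicate maps to an array of value objects; take the first.
        var population = resource["http://dbpedia.org/ontology/populationTotal"][0].value;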

    Read the article

  • SQL Query - Count column values separately

    - by user575535
    I have a hard time getting a query to work right. This is the DDL for my tables:

        CREATE TABLE Agency (
            id SERIAL not null,
            city VARCHAR(200) not null,
            PRIMARY KEY(id)
        );

        CREATE TABLE Customer (
            id SERIAL not null,
            fullname VARCHAR(200) not null,
            status VARCHAR(15) not null CHECK(status IN ('new','regular','gold')),
            agencyID INTEGER not null REFERENCES Agency(id),
            PRIMARY KEY(id)
        );

    Sample data from the tables:

        AGENCY
        id|'city'
        1 |'London'
        2 |'Moscow'
        3 |'Beijing'

        CUSTOMER
        id|'fullname'      |'status' |agencyid
        1 |'Michael Smith' |'new'    |1
        2 |'John Doe'      |'regular'|1
        3 |'Vlad Atanasov' |'new'    |2
        4 |'Vasili Karasev'|'regular'|2
        5 |'Elena Miskova' |'gold'   |2
        6 |'Kim Yin Lu'    |'new'    |3
        7 |'Hu Jintao'     |'regular'|3
        8 |'Wen Jiabao'    |'regular'|3

    I want to produce the following output, but I need to count separately for ('new','regular','gold'):

        'city'   |new_customers|regular_customers|gold_customers
        'Moscow' |1            |1                |1
        'Beijing'|1            |2                |0
        'London' |1            |1                |0
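    Conditional aggregation does the per-status counting in one pass; a sketch against the DDL above:

        SELECT a.city,
               SUM(CASE WHEN c.status = 'new'     THEN 1 ELSE 0 END) AS new_customers,
               SUM(CASE WHEN c.status = 'regular' THEN 1 ELSE 0 END) AS regular_customers,
               SUM(CASE WHEN c.status = 'gold'    THEN 1 ELSE 0 END) AS gold_customers
          FROM Agency a
          JOIN Customer c ON c.agencyID = a.id
         GROUP BY a.city;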

    Read the article

  • How to reserve a set of primary key identifiers for preloading bootstrap data

    - by Joshua
    We would like to reserve a set of primary key identifiers for all tables (e.g. 1-1000) so that we can bootstrap the system with pre-loaded system data. All our JPA entity classes have the following definition for the primary key:

        @Id
        @GeneratedValue(strategy = IDENTITY)
        @Column(name = "id", unique = true, nullable = false, insertable = false, updatable = false)
        private Integer id;

    Is there a way to tell the database that increments should start happening from 1000 (i.e. customer-specific data will start from 1000 onwards)? We support h2, mysql and postgres in our environment, and I would prefer a solution which can be driven via JPA and reverse engineering DDL tools from Hibernate. Let me know if this is the correct approach.
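    With IDENTITY generation the starting value lives in the database, so one hedged option is a per-dialect statement run at schema creation time (e.g. from Hibernate's import.sql); the table and sequence names below are hypothetical:

        -- MySQL
        ALTER TABLE customer AUTO_INCREMENT = 1000;

        -- PostgreSQL (default sequence name for a SERIAL column)
        ALTER SEQUENCE customer_id_seq RESTART WITH 1000;

        -- H2
        ALTER TABLE customer ALTER COLUMN id RESTART WITH 1000;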

    Read the article

  • Help me with a solution for what could be solved by virtual static fields... in FPC

    - by Gregory Smith
    Hi, I'm writing an event manager in Free Pascal. Each event is an object of type TEvent (= object); each kind of event must derive from this class. Events are differentiated by an integer identifier, assigned dynamically. The problem is that I want to retrieve the event id of an instance, and I can't do it well. All instances of a class (object) have a unique id, so it should be a static field. All classes have a different id, so it should be virtual. Event ids are assigned at run time, and can change, so it can't be a simple method. In sum, I can't put all this together. I'm looking for an elegant solution; I don't want to write a hardcoded table, updating it in every constructor... etc. I'd prefer something taking advantage of the polymorphism. Can anyone help me with another technical or design solution? I remark that I don't want to use class instead of the object construct. (property doesn't work on objects? :(
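    One hedged pattern that approximates a "virtual static field" with old-style objects: keep the mutable id in a unit-level variable per event type, and expose it through a virtual method, so polymorphic dispatch finds the right variable while registration code can still reassign it at run time. A Pascal sketch (all names hypothetical):

        type
          TEvent = object
            function EventId: Integer; virtual;
          end;

          TMyEvent = object(TEvent)
            function EventId: Integer; virtual;
          end;

        var
          MyEventId: Integer;  { assigned dynamically when the event type is registered }

        function TEvent.EventId: Integer;
        begin
          EventId := 0;  { base/default id }
        end;

        function TMyEvent.EventId: Integer;
        begin
          EventId := MyEventId;  { one shared, reassignable id for all TMyEvent instances }
        end;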

    Read the article

  • Unique Key in MySQL

    - by Vinodtiru
    I have a table with four columns: Col1, Col2, Col3, and Col4. Col1, Col2 and Col3 are strings, and Col4 is an integer primary key with auto increment. Now my requirement is to have a unique combination of Col2 and Col3. I mean to say, like:

        Insert into table(Col1,Col2,Col3) Values('val1','val2','val3');
        Insert into table(Col1,Col2,Col3) Values('val4','val2','val3');

    the second statement has to throw an error, as the same combination of 'val2','val3' is already present in the table. But I can't make it a primary key, as I need an auto increment column and for that matter Col4 has to be primary. Please let me know an approach by which I can have both in my table. Any kind of help is appreciated. Thanks.
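    A UNIQUE constraint is independent of the primary key, so both can coexist; a one-line sketch (table and constraint names hypothetical):

        ALTER TABLE mytable ADD CONSTRAINT uq_col2_col3 UNIQUE (Col2, Col3);

    The second INSERT above then fails with a duplicate-key error while Col4 keeps auto-incrementing.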

    Read the article

  • Doing 64-bit manipulation using 32-bit data in fixed-point arithmetic in C

    - by Viks
    Hi, I am stuck with a problem. I am working on hardware which only supports 32-bit operations: sizeof(int64_t) is 4 and sizeof(int) is 4, and I am porting an application which assumes the size of int64_t to be 8 bytes. The problem is that it has this macro:

        #define BIG_MULL(a,b) ( (int64_t)(a) * (int64_t)(b) >> 23 )

    The result should be a 32-bit integer, but since my system doesn't support 64-bit operations, it always returns me the LSB of the operation, rounding off all the results and making my system crash. Can someone help me out? Regards, Vikas Gupta
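    If the toolchain genuinely cannot widen to 64 bits, the 64-bit product has to be assembled from 16-bit halves with explicit carry handling. A hedged C sketch of a BIG_MULL replacement for that case (assumes the final result fits in 32 bits, as the macro already implies, and that a != INT32_MIN; signs are handled on magnitudes):

        #include <stdint.h>

        static int32_t big_mull_q23(int32_t a, int32_t b)
        {
            int negative = (a < 0) != (b < 0);
            uint32_t ua = (uint32_t)(a < 0 ? -a : a);
            uint32_t ub = (uint32_t)(b < 0 ? -b : b);

            uint32_t ah = ua >> 16, al = ua & 0xFFFFu;
            uint32_t bh = ub >> 16, bl = ub & 0xFFFFu;

            /* full 64-bit product = hi*2^32 + mid*2^16 + lo */
            uint32_t lo  = al * bl;
            uint32_t m1  = ah * bl;
            uint32_t m2  = al * bh;
            uint32_t mid = m1 + m2;
            uint32_t mc  = (mid < m1);       /* carry out of mid, worth 2^48 */
            uint32_t hi  = ah * bh;

            uint32_t lo64 = lo + (mid << 16);
            uint32_t lc   = (lo64 < lo);     /* carry into the high word */
            uint32_t hi64 = hi + (mid >> 16) + (mc << 16) + lc;

            /* (hi64:lo64) >> 23, truncated to 32 bits */
            uint32_t r = (lo64 >> 23) | (hi64 << 9);
            return negative ? -(int32_t)r : (int32_t)r;
        }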

    Read the article

  • C-macro: set a register field defined by a bit-mask to a given value

    - by geschema
    I've got 32-bit registers with fields defined as bit-masks, e.g.

        #define BM_TEST_FIELD 0x000F0000

    I need a macro that allows me to set a field (defined by its bit-mask) of a register (defined by its address) to a given value. Here's what I came up with:

        #include <stdio.h>
        #include <assert.h>

        typedef unsigned int u32;

        /*
         * Set a given field defined by a bit-mask MASK of a 32-bit register at address
         * ADDR to a value VALUE.
         */
        #define SET_REGISTER_FIELD(ADDR, MASK, VALUE)                                 \
        {                                                                             \
            u32 mask=(MASK); u32 value=(VALUE);                                       \
            u32 mem_reg = *(volatile u32*)(ADDR); /* Get current register value */    \
            assert((MASK) != 0);                  /* Null masks are not supported */  \
            while(0 == (mask & 0x01))             /* Shift the value to the left until */ \
            {                                     /* it aligns with the bit field */  \
                mask = mask >> 1; value = value << 1;                                 \
            }                                                                         \
            mem_reg &= ~(MASK);    /* Clear previous register field value */          \
            mem_reg |= value;      /* Update register field with new value */         \
            *(volatile u32*)(ADDR) = mem_reg; /* Update actual register */            \
        }

        /* Test case */
        #define BM_TEST_FIELD 0x000F0000

        int main()
        {
            u32 reg = 0x12345678;
            printf("Register before: 0x%.8X\n", reg); /* should be 0x12345678 */
            SET_REGISTER_FIELD(&reg, BM_TEST_FIELD, 0xA);
            printf("Register after:  0x%.8X\n", reg); /* should be 0x123A5678 */
            return 0;
        }

    Is there a simpler way to do it?
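    One simpler alternative: the lowest set bit of a mask is MASK & -MASK, and multiplying by it shifts the value into position, which removes the loop entirely. A hedged sketch:

        /* Lowest set bit of MASK, e.g. 0x000F0000 -> 0x00010000 */
        #define FIELD_LSB(MASK)  ((MASK) & (u32)-(u32)(MASK))

        #define SET_REGISTER_FIELD(ADDR, MASK, VALUE)                              \
            do {                                                                   \
                volatile u32 *reg_ = (volatile u32 *)(ADDR);                       \
                /* clear the field, then OR in the shifted (and re-masked) value */\
                *reg_ = (*reg_ & ~(u32)(MASK))                                     \
                      | (((u32)(VALUE) * FIELD_LSB(MASK)) & (u32)(MASK));          \
            } while (0)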

    Read the article

  • What's the easiest way to parse numbers in clojure?

    - by Rob Lachlan
    I've been using Java to parse numbers, e.g.

        (. Integer parseInt numberString)

    Is there a more Clojuriffic way that would handle both integers and floats, and return Clojure numbers? I'm not especially worried about performance here; I just want to process a bunch of whitespace-delimited numbers in a file and do something with them, in the most straightforward way possible. So a file might have lines like:

        5 10 0.0002
        4 12 0.003

    And I'd like to be able to transform the lines into vectors of numbers.
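    A hedged sketch of one straightforward approach: decide int vs. float from the token's shape, then map over the whitespace-split tokens of each line (file name hypothetical):

        (defn parse-num [s]
          (if (re-find #"[.eE]" s)
            (Double/parseDouble s)
            (Integer/parseInt s)))

        ;; one vector of numbers per line of the file
        (defn file->vectors [path]
          (with-open [rdr (clojure.java.io/reader path)]
            (doall
              (for [line (line-seq rdr)]
                (vec (map parse-num (re-seq #"\S+" line)))))))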

    Read the article

  • Debugging NSoperation BAD ACCESS within graphics context

    - by Joe
    I tried everything to debug this one but I can't get to the bottom of it. This code lives in a subclass of NSOperation which is processed from a queue (borders is an ivar NSArray containing 5 UIImage objects):

        NSMutableArray *images = [[NSMutableArray alloc] init];
        for (unsigned i = 0; i < 5; i++) {
            CGSize size = CGSizeMake(60, 60);
            UIGraphicsBeginImageContext(size);

            CGPoint thumbPoint = CGPointMake(6, 6);
            [controller.image drawAtPoint:thumbPoint];

            CGPoint borderPoint = CGPointMake(0, 0);
            [[borders objectAtIndex:i] drawAtPoint:borderPoint];

            [images addObject:UIGraphicsGetImageFromCurrentImageContext()];
            UIGraphicsEndImageContext();
        }
        [images release];

    The code works fine most of the time, but when I push the iPhone by accessing subviews and pressing lots of buttons on the UI, I either get this exception, which is trapped by the operation:

        Exception Load view: *** -[NSCFArray insertObject:atIndex:]: attempt to insert nil

    or I get this:

        Program received signal: "EXC_BAD_ACCESS".

    The exception is caused because UIGraphicsGetImageFromCurrentImageContext() returns nil. I don't know how to debug the EXC_BAD_ACCESS, but I'm guessing that this error (in fact both of these errors) is caused by low memory. The debugger stops at the line:

        [controller.image drawAtPoint:thumbPoint];

    As I mentioned, I've trapped the exception so I can live with that, but the EXC_BAD_ACCESS is more serious. IF this is memory related, how can I tell, and is it possible to increase the memory available to NSOperation?
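    Worth noting, as an assumption about that era's UIKit: UIGraphicsBeginImageContext() and UIKit drawing were only documented as safe on the main thread before iOS 4, and an NSOperation queue runs this code on a background thread, which fits crashes that only appear under load. Whatever the root cause, a nil guard at least avoids the insertObject:atIndex: exception; a minimal Objective-C sketch:

        UIImage *thumb = UIGraphicsGetImageFromCurrentImageContext();
        if (thumb != nil) {
            [images addObject:thumb];   // never insert nil into the array
        }
        UIGraphicsEndImageContext();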

    Read the article

  • How to find a binary logarithm very fast? (O(1) at best)

    - by psihodelia
    Is there any very fast method to find the binary logarithm of an integer number? For example, given a number x = 52656145834278593348959013841835216159447547700274555627155488768, such an algorithm must find y = log(x,2), which is 215. x is always a power of 2. The problem seems to be really simple. All that is required is to find the position of the most significant 1 bit. There is a well-known method FloorLog, but it is not very fast, especially for very long multi-word integers. What is the fastest method?
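    For a single machine word, most compilers expose the hardware count-leading-zeros instruction, which makes this O(1); for a multi-word power of two, only the single non-zero word needs that treatment. A hedged C sketch (GCC/Clang builtins, words stored least-significant first):

        #include <stdint.h>

        /* floor(log2(x)) for one 64-bit word, x != 0 */
        static inline int ilog2_word(uint64_t x)
        {
            return 63 - __builtin_clzll(x);
        }

        /* x is a power of two stored as nwords little-endian 64-bit words:
           exactly one word is non-zero, so find it and add its bit offset. */
        int big_ilog2(const uint64_t *words, int nwords)
        {
            for (int i = nwords - 1; i >= 0; i--)
                if (words[i] != 0)
                    return i * 64 + ilog2_word(words[i]);
            return -1; /* x == 0: undefined */
        }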

    Read the article

  • Pattern matching in Perl ala Haskell

    - by Paul Nathan
    In Haskell (F#, OCaml, and others), I can do this:

        sign x | x > 0  = 1
               | x == 0 = 0
               | x < 0  = -1

    which calculates the sign of a given integer. This can concisely express certain logic flows; I've encountered one of these flows in Perl. Right now what I am doing is:

        sub frobnicator {
            my $frob = shift;
            return "foo" if $frob eq "Foomaticator";
            return "bar" if $frob eq "Barmaticator";
            croak("Unable to frob legit value: $frob received");
        }

    which feels inexpressive and ugly. This code has to run on Perl 5.8.8, but of course I am interested in more modern techniques as well.
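    A dispatch table is the usual Perl idiom for this shape, and it needs nothing newer than 5.8.8; a hedged sketch:

        my %result_for = (
            Foomaticator => "foo",
            Barmaticator => "bar",
        );

        sub frobnicator {
            my $frob = shift;
            croak("Unable to frob legit value: $frob received")
                unless exists $result_for{$frob};
            return $result_for{$frob};
        }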

    Read the article

  • Reducing differences in xibs

    - by tewha
    I've been noticing superfluous changes in my xib files with Interface Builder 3.2.1. Here are a few of them:

        - <reference key="NSNextResponder"/>
        + <nil key="NSNextResponder"/>

        - <reference key="NSSuperview"/>

        - <array class="NSMutableArray" key="IBDocument.EditedObjectIDs">
        -     <integer value="6"/>
        - </array>
        + <array class="NSMutableArray" key="IBDocument.EditedObjectIDs"/>

    Can anyone tell me what these are, and are there any tricks for avoiding them? I'd prefer my checkins to only describe changes I intentionally made.

    Update: I wasn't clear in the original question, but these differences were caused by opening the file in Interface Builder and saving it without making a change.

    Read the article

  • I am getting an exception in the main thread... even when I am handling the exception

    - by fari
    public KalaGame(KeyBoardPlayer player1, KeyBoardPlayer player2) {
            //super(0);
            int key = 0;
            try {
                do {
                    System.out.println("Enter the number of stones to play with: ");
                    BufferedReader br = new BufferedReader(new InputStreamReader(System.in));
                    key = Integer.parseInt(br.readLine());
                    if(key<0 || key>10)
                        throw new InvalidStartingStonesException(key);
                } while(key<0 || key>10);

                player1 = new KeyBoardPlayer();
                player2 = new KeyBoardPlayer();
                this.player1 = player1;
                this.player2 = player2;
                state = new KalaGameState(key);
            }
            catch(IOException e) {
                System.out.println(e);
            }
        }

    When I enter an invalid number of stones I get this error:

        Exception in thread "main" InvalidStartingStonesException: The number of starting stones must be greater than 0 and less than or equal to 10 (attempted 22)
            at KalaGame.<init>(KalaGame.java:27)
            at PlayKala.main(PlayKala.java:10)

    Why isn't the exception handled by the throw I defined?
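    throw only raises the exception; it is handled only where a matching catch exists, and this try block catches IOException alone, so InvalidStartingStonesException propagates out of the constructor and kills main. To keep re-prompting, the catch needs to sit inside the loop; a hedged Java sketch (keeps the outer IOException catch, moves the reader out of the loop):

        BufferedReader br = new BufferedReader(new InputStreamReader(System.in));
        do {
            System.out.println("Enter the number of stones to play with: ");
            try {
                key = Integer.parseInt(br.readLine());
                if (key < 0 || key > 10)
                    throw new InvalidStartingStonesException(key);
            } catch (InvalidStartingStonesException e) {
                System.out.println(e.getMessage());   // report, then loop again
            }
        } while (key < 0 || key > 10);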

    Read the article

  • How to override the equals method in Java

    - by Subash Adhikari
    I am trying to override the equals method in Java. I have a class People which basically has two data fields, name and age. Now I want to override the equals method so that I can check between two People objects. My code is as follows:

        public boolean equals(People other){
            boolean result;
            if((other == null) || (getClass() != other.getClass())){
                result = false;
            } // end if
            else{
                People otherPeople = (People)other;
                result = name.equals(other.name) && age.equals(other.age);
            } // end else
            return result;
        } // end equals

    But when I write age.equals(other.age) it gives me an error, as the equals method can only compare Strings and age is an integer. Please help me fix this. Thanks in advance.
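    Two things worth noting, hedged on age being a primitive int: the parameter must be Object for this to actually override Object.equals (with a People parameter it is only an overload), and primitives are compared with ==, not .equals. A sketch:

        @Override
        public boolean equals(Object other) {
            if (other == null || getClass() != other.getClass()) {
                return false;
            }
            People otherPeople = (People) other;
            // == for a primitive int field; .equals for the String
            return name.equals(otherPeople.name) && age == otherPeople.age;
        }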

    Read the article
