Search Results

Search found 36506 results on 1461 pages for 'unsigned long long int'.

Page 142 of 1461

  • Project Euler Question 14 (Collatz Problem)

    - by paradox
    The following iterative sequence is defined for the set of positive integers: n → n/2 (n is even); n → 3n + 1 (n is odd). Using the rule above and starting with 13, we generate the following sequence: 13 40 20 10 5 16 8 4 2 1. It can be seen that this sequence (starting at 13 and finishing at 1) contains 10 terms. Although it has not been proved yet (the Collatz problem), it is thought that all starting numbers finish at 1. Which starting number, under one million, produces the longest chain? NOTE: once the chain starts, the terms are allowed to go above one million. I tried coding a brute-force solution to this in C. However, it seems that my program stalls when trying to calculate 113383. Please advise :) #include <stdio.h> #define LIMIT 1000000 int iteration(int value) { if(value%2==0) return (value/2); else return (3*value+1); } int count_iterations(int value) { int count=1; //printf("%d\n", value); while(value!=1) { value=iteration(value); //printf("%d\n", value); count++; } return count; } int main() { int iteration_count=0, max=0; int i,count; for (i=1; i<LIMIT; i++) { printf("Current iteration : %d\n", i); iteration_count=count_iterations(i); if (iteration_count>max) { max=iteration_count; count=i; } } //iteration_count=count_iterations(113383); printf("Count = %d\ni = %d\n",max,count); }
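
    The stall is consistent with signed 32-bit overflow rather than an infinite Collatz chain: the sequence starting at 113383 climbs past INT_MAX, and once 3*value+1 wraps to a negative number the loop can never reach 1. A minimal sketch of the usual fix, assuming only that the intermediate terms need a wider type than the starting value:

        #include <cstdio>

        // Collatz chain length computed in 64 bits, so 3n + 1 cannot overflow
        // for any starting value below one million.
        static int count_iterations(unsigned long long value) {
            int count = 1;
            while (value != 1) {
                value = (value % 2 == 0) ? value / 2 : 3 * value + 1;
                count++;
            }
            return count;
        }

        int main(void) {
            int best_start = 0, best_len = 0;
            for (int i = 1; i < 1000000; i++) {
                int len = count_iterations(i);
                if (len > best_len) { best_len = len; best_start = i; }
            }
            printf("Longest chain: %d terms, starting at %d\n", best_len, best_start);
            return 0;
        }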

    Read the article

  • Primitive type 'short' - casting in Java

    - by gemm
    Hello, I have a question about the primitive type short in Java. I am using JDK 1.6. If I have the following: short a = 2; short b = 3; short c = a + b; the compiler does not want to compile it: it says that it "cannot convert from int to short" and suggests that I make a cast to short, so this really works: short c = (short) (a + b); But my question is: why do I need to cast? The values of a and b are in the range of short (the range of short values is -32,768 to 32,767). I also need to cast when I want to perform the operations -, *, and / (I haven't checked the others). If I do the same for the primitive type int, I do not need to cast aa+bb to int. The following works fine: int aa = 2; int bb = 3; int cc = aa + bb; I discovered this while designing a class where I needed to add two variables of type short, and the compiler wanted me to make a cast. If I do this with two variables of type int, I don't need to cast. Thank you very much in advance. A small remark: the same thing also happens with the primitive type byte. So this works: byte a = 2; byte b = 3; byte c = (byte) (a + b); but this does not: byte a = 2; byte b = 3; byte c = a + b; For long, float, double, and int, there is no need to cast. Only for short and byte values.
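
    What the compiler is enforcing is binary numeric promotion: the operands of +, -, *, and / are widened to int (or long) before the operation, so the expression a + b has type int no matter how small the actual values are, and assigning it back to a short is a narrowing conversion that Java only allows with an explicit cast. int escapes this because int + int is already int. C and C++ promote small integer types the same way, so the effect can be sketched there too (C++ shown; the difference is that Java turns the narrowing assignment into a hard error):

        #include <cstdio>
        #include <type_traits>

        int main() {
            short a = 2, b = 3;
            // The usual arithmetic conversions promote both operands to int,
            // so the type of (a + b) is int, not short.
            static_assert(std::is_same<decltype(a + b), int>::value,
                          "short + short yields int");
            printf("sizeof(short) = %zu, sizeof(a + b) = %zu\n",
                   sizeof(short), sizeof(a + b));
            short c = static_cast<short>(a + b); // the explicit narrowing Java demands
            printf("c = %d\n", c);
            return 0;
        }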

    Read the article

  • Displaying an inverted pyramid of asterisks

    - by onyebuoke
    I need help with my program. I am required to create an inverted pyramid of stars, where the number of rows depends on the number the user keys in, but the way I've done it does not give an inverted pyramid: it gives a regular pyramid. #include <stdio.h> #include <conio.h> void printchars(int no_star, char space); int getNo_of_rows(void); int main(void) { int numrows, rownum; rownum=0; numrows=getNo_of_rows(); for(rownum=0;rownum<=numrows;rownum++) { printchars(numrows-rownum, ' '); printchars((2*rownum-1), '*'); printf("\n"); } _getche(); return 0; } void printchars(int no_star, char space) { int cnt; for(cnt=0;cnt<no_star;cnt++) { printf("%c",space); } } int getNo_of_rows(void) { int no_star; printf("\n Please enter the number of stars you want to print\n"); scanf("%d",&no_star); while(no_star<1) { printf("\n number incorrect, please enter correct number"); scanf("%d",&no_star); } return no_star; }
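
    The rows come out point-up because rownum counts upward, so the star count 2*rownum-1 grows from row to row. Flipping the direction of the loop is enough; a minimal sketch with the same printchars idea (numrows hard-coded here in place of getNo_of_rows):

        #include <cstdio>

        // Prints 'count' copies of 'ch' (the same role as printchars above).
        static void printchars(int count, char ch) {
            for (int i = 0; i < count; i++) putchar(ch);
        }

        int main(void) {
            int numrows = 5; // stands in for getNo_of_rows()
            // Widest row first: run the row counter downward.
            for (int rownum = numrows; rownum >= 1; rownum--) {
                printchars(numrows - rownum, ' '); // indentation grows as rows shrink
                printchars(2 * rownum - 1, '*');   // 2n-1, 2n-3, ..., 1 stars
                printf("\n");
            }
            return 0;
        }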

    Read the article

  • Why is this C++ code not doing its job?

    - by hamza
    I want to create a program that writes all the primes into a file (I know that it's a popular algorithm, but I'm trying to make it by myself), but it is still showing all the numbers instead of just the primes, and I don't know why. Could someone please tell me? #include <iostream> #include <stdlib.h> #include <stdio.h> void afficher_sur_un_ficher (FILE* ficher , int nb_bit ); int main() { FILE* p_fich ; char tab[4096] , mask ; int nb_bit = 0 , x ; for (int i = 0 ; i < 4096 ; i++ ) { tab[i] = 0xff ; } for (int i = 0 ; i < 4096 ; i++ ) { mask = 0x01 ; for (int j = 0 ; j < 8 ; j++) { if ((tab[i] & mask) != 0 ) { x = nb_bit ; while (( x > 1 )&&(x < 16384)) { tab[i] = tab[i] ^ mask ; x = x * 2 ; } afficher_sur_un_ficher (p_fich , nb_bit ) ; } mask = mask<<1 ; nb_bit++ ; } } system ("PAUSE"); return 0 ; } void afficher_sur_un_ficher (FILE* ficher , int nb_bit ) { ficher = fopen("res.txt","a+"); fprintf (ficher ,"%d \t" , nb_bit); int static z ; z = z+1 ; if ( z%10 == 0) fprintf (ficher , "\n"); fclose(ficher); }
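
    Two bugs conspire here: the inner while multiplies x by 2 instead of stepping through the multiples of nb_bit, so composites are never crossed out in the right places, and the mask is only ever applied to the current byte tab[i], so multiples living in later bytes are untouched. A sketch of the bit-per-number sieve this code is reaching for, keeping the same res.txt output format (the 32768 limit is simply 4096 bytes times 8 bits):

        #include <cstdio>
        #include <cstring>

        int main() {
            const int N = 32768;            // 4096 bytes * 8 bits, as in the question
            static unsigned char tab[4096]; // one bit per number; set = "still prime"
            memset(tab, 0xff, sizeof(tab));

            FILE *f = fopen("res.txt", "w");
            if (!f) return 1;

            int written = 0;
            for (int n = 2; n < N; n++) {
                if (tab[n / 8] & (1 << (n % 8))) {      // n survived: it is prime
                    fprintf(f, "%d \t", n);
                    if (++written % 10 == 0) fprintf(f, "\n");
                    for (int m = 2 * n; m < N; m += n)  // clear n's multiples, not n*2^k
                        tab[m / 8] &= (unsigned char)~(1 << (m % 8));
                }
            }
            fclose(f);
            return 0;
        }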

    Read the article

  • Populating objects in C#

    - by acadia
    Hello, I have Order, OrderDetails, and OrderStatus objects as shown below: public class Order { public override int OrderId { get; set; } public string FName { get; set; } public string MName { get; set; } public string LName { get; set; } public string Street { get; set; } public string City { get; set; } public List<OrderDetails> OrderDetails { get; set; }; } public class OrderDetails { public override int OrderdetailsId { get; set; } public int OrderId { get; set; } public int ProductID { get; set; } public int Qty { get; set; } public List<OrderStatus> OrderStat { get; set; }; } public class OrderStatus { public override int OrderdetailsStatusId { get; set; } public int OrderdetailsId { get; set; } public int StatusID { get; set; } } I cannot use LINQ, but I want to populate the Order object the way we do in LINQ. How do I populate all the properties of the Order object? For example: Order o = new Order(); o.FName="John"; o.LName="abc"; o.Street="TStreet"; o.City="Atlanta"; and then o.Orderdetails.Add(orderdetails). How do I do that in C# when not using LINQ?

    Read the article

  • Abstract Base Class or Class?

    - by Mohit Deshpande
    For my semester project, my team and I are supposed to make a .jar file (library, not runnable) that contains a game development framework and demonstrates the concepts of OOP. It's supposed to be a FRAMEWORK, and another team is supposed to use our framework and vice versa. So I want to know how we should start. We thought of several approaches: 1. Start with a plain class: public class Enemy { public Enemy(int x, int y, int health, int attack, ...) { ... } ... } public class UserDefinedClass extends Enemy { ... } 2. Start with an abstract class whose abstract members user-defined enemies have to implement: public abstract class Enemy { public Enemy(int x, int y, int health, int attack, ...) { ... } public abstract void draw(); public abstract void destroy(); ... } public class UserDefinedClass extends Enemy { ... public void draw() { ... } public void destroy() { ... } } 3. Create a super ABC (Abstract Base Class) that ALL classes inherit from: public abstract class VectorEntity { ... } public abstract class Enemy extends VectorEntity { ... } public class Player extends VectorEntity { ... } public class UserDefinedClass extends Enemy { ... } Which should I use? Or is there a better way?

    Read the article

  • Better way to design a database

    - by cMinor
    I have a conceptual problem, and I would like to get your ideas on how I'll be able to do what I am aiming for. My goal is to create a database with information about the people who work at a place, depending on their profession and skills, and to keep control of salary and projects (how much a project would cost, summing all the hours of work). I have 3 categories, which can have subcategories: Outsourcing; Technician (welder, turner, assistant); Administrative (supervisor, manager). So each person has their information and the projects they are working on; also, one person may do several jobs... I was thinking about having 5 tables (EMPLOYEE, SKILLS, PROYECTS, SALARY, PROFESSION), but I guess there is a better way of doing this. create table Employee ( PRIMARY KEY [Person_ID] int(10), [Name] varchar(30), [sex] varchar(10), [address] varchar(10), [profession] varchar(10), [Skills_ID] int(10), [Proyect_ID] int(10), [Salary_ID] int(10), [Salary] float ) create table Skills ( PRIMARY KEY [Skills_ID] int(10), FOREIGN KEY [Skills_name] varchar(10) REFERENCES Employee(Person_ID), [Skills_pay] float(10), [Comments] varchar(50) ) create table Proyects ( PRIMARY KEY [Proyect_ID] int(10), FOREIGN KEY [Skills_name] varchar(10) REFERENCES Employee(Person_ID) [Proyect_name] varchar(10), [working_Hours] float(10), [Comments] varchar(50) ) create table Salary ( PRIMARY KEY [Salary_ID] int(10), FOREIGN KEY [Skills_name] varchar(10) REFERENCES Employee(Person_ID) [Proyect_name] varchar(10), [working_Hours] float(10), [Comments] varchar(50) ) So to get the total cost of a project I would just sum the working hours of each employee involved and add some extra costs in an aggregate query. Is there a way to do this more efficiently? What should I add to or delete from this small model? I guess I am missing something in the salary part; maybe I need another table for that?

    Read the article

  • MPI hypercube broadcast error

    - by luvieere
    I've got a one to all broadcast method for a hypercube, written using MPI: one2allbcast(int n, int rank, void *data, int count, MPI_Datatype dtype) { MPI_Status status; int mask, partner; int mask2 = ((1 << n) - 1) ^ (1 << n-1); for (mask = (1 << n-1); mask; mask >>= 1, mask2 >>= 1) { if (rank & mask2 == 0) { partner = rank ^ mask; if (rank & mask) MPI_Recv(data, count, dtype, partner, 99, MPI_COMM_WORLD, &status); else MPI_Send(data, count, dtype, partner, 99, MPI_COMM_WORLD); } } } Upon calling it from main: int main( int argc, char **argv ) { int n, rank; MPI_Init (&argc, &argv); MPI_Comm_size (MPI_COMM_WORLD, &n); MPI_Comm_rank (MPI_COMM_WORLD, &rank); one2allbcast(floor(log(n) / log (2)), rank, "message", sizeof(message), MPI_CHAR); MPI_Finalize(); return 0; } compiling and executing on 8 nodes, I receive a series of errors reporting that processes 1, 3, 5, 7 were stopped before the point of receiving any data: MPI_Recv: process in local group is dead (rank 1, MPI_COMM_WORLD) Rank (1, MPI_COMM_WORLD): Call stack within LAM: Rank (1, MPI_COMM_WORLD): - MPI_Recv() Rank (1, MPI_COMM_WORLD): - main() MPI_Recv: process in local group is dead (rank 3, MPI_COMM_WORLD) Rank (3, MPI_COMM_WORLD): Call stack within LAM: Rank (3, MPI_COMM_WORLD): - MPI_Recv() Rank (3, MPI_COMM_WORLD): - main() MPI_Recv: process in local group is dead (rank 5, MPI_COMM_WORLD) Rank (5, MPI_COMM_WORLD): Call stack within LAM: Rank (5, MPI_COMM_WORLD): - MPI_Recv() Rank (5, MPI_COMM_WORLD): - main() MPI_Recv: process in local group is dead (rank 7, MPI_COMM_WORLD) Rank (7, MPI_COMM_WORLD): Call stack within LAM: Rank (7, MPI_COMM_WORLD): - MPI_Recv() Rank (7, MPI_COMM_WORLD): - main() Where do I go wrong?
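
    A likely culprit is operator precedence: == binds tighter than &, so rank & mask2 == 0 parses as rank & (mask2 == 0), which selects the wrong set of participants each round and leaves the odd ranks blocked in MPI_Recv until LAM declares them dead. (Separately, the receivers overwrite data in place, so passing the string literal "message" from main is risky; a writable buffer is safer.) A sketch of the routine with the test parenthesized:

        #include <mpi.h>

        // Hypercube one-to-all broadcast from rank 0 over n dimensions.
        void one2allbcast(int n, int rank, void *data, int count, MPI_Datatype dtype) {
            MPI_Status status;
            int mask2 = ((1 << n) - 1) ^ (1 << (n - 1));
            for (int mask = 1 << (n - 1); mask; mask >>= 1, mask2 >>= 1) {
                if ((rank & mask2) == 0) {  // parentheses: & binds looser than ==
                    int partner = rank ^ mask;
                    if (rank & mask)
                        MPI_Recv(data, count, dtype, partner, 99, MPI_COMM_WORLD, &status);
                    else
                        MPI_Send(data, count, dtype, partner, 99, MPI_COMM_WORLD);
                }
            }
        }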

    Read the article

  • Is delete p, where p is a pointer to an array, a memory leak?

    - by Eli
    Following a discussion in a software meeting, I set out to find out whether deleting a dynamically allocated primitive array with plain delete causes a memory leak. I wrote this tiny program and compiled it with Visual Studio 2008 running on Windows XP: #include "stdafx.h" #include "Windows.h" const unsigned long BLOCK_SIZE = 1024*100000; int _tmain() { for (unsigned int i =0; i < 1024*1000; i++) { int* p = new int[1024*100000]; for (int j =0;j<BLOCK_SIZE;j++) p[j]= j % 2; Sleep(1000); delete p; } } I then monitored the memory consumption of my application using Task Manager. Surprisingly, the memory was allocated and freed correctly; allocated memory did not steadily increase as expected. I modified my test program to allocate an array of a non-primitive type: #include "stdafx.h" #include "Windows.h" struct aStruct { aStruct() : i(1), j(0) {} int i; char j; } NonePrimitive; const unsigned long BLOCK_SIZE = 1024*100000; int _tmain() { for (unsigned int i =0; i < 1024*100000; i++) { aStruct* p = new aStruct[1024*100000]; Sleep(1000); delete p; } } After running for 10 minutes there was no meaningful increase in memory. I compiled the project with warning level 4 and got no warnings. Is it possible that the Visual Studio runtime keeps track of the allocated objects' types, so there is no difference between delete and delete[] in that environment?
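
    Deleting a new[] allocation with plain delete is undefined behaviour, not a guaranteed leak, so Task Manager staying flat proves little: MSVC's CRT happens to release the underlying heap block either way for these types. What the mismatched form skips is the per-element destructor calls, and aStruct above has no destructor, which is why the second test also looks clean. A sketch that makes the difference observable, assuming a type whose destructor is worth counting:

        #include <cstdio>

        // Counts how many element destructors actually run.
        struct Tracked {
            ~Tracked() { ++destroyed; }
            static int destroyed;
        };
        int Tracked::destroyed = 0;

        int main() {
            Tracked *p = new Tracked[1000];
            delete[] p; // the matching form: all 1000 destructors run
            printf("destructors run: %d\n", Tracked::destroyed);
            return 0;
        }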

    Read the article

  • Please help me with my homework problem in C++

    - by sil3nt
    Hey there, this is part of a question I got in class; I'm at the final stretch, but this has become a major problem. In it I'm given a certain value called the "gold value", which is 40.5 here but changes with the input. And I have these constants: const int RUBIES_PER_DIAMOND = 5; // relative values. * const int EMERALDS_PER_RUBY = 2; const int GOLDS_PER_EMERALDS = 5; const int SILVERS_PER_GOLD = 4; const int COPPERS_PER_SILVER = 5; const int DIAMOND_VALUE = 50; // gold values. * const int RUBY_VALUE = 10; const int EMERALD_VALUE = 5; const float SILVER_VALUE = 0.25; const float COPPER_VALUE = 0.05; This means that for every diamond there are 5 rubies, and for every ruby there are 2 emeralds, and so on and so forth. The "gold value" of a diamond, for example, is 50 (DIAMOND_VALUE = 50): that is how much one diamond is worth in golds. My problem is converting 40.5 into these diamond and ruby values. I know the answer is 4 rubies and 2 silvers, but how do I write the algorithm so that it gives the best breakdown for every gold value that comes along?
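
    This is a change-making problem, and because each denomination here divides the next one up exactly, plain greedy is optimal: convert the gold value to coppers (the smallest unit, 0.05 gold) to dodge floating-point residue, then spend it on the largest denominations first. A sketch with the question's values:

        #include <cstdio>

        int main(void) {
            double goldValue = 40.5;
            // Each denomination expressed in coppers (20 coppers per gold).
            const char *names[] = {"diamond", "ruby", "emerald", "gold", "silver", "copper"};
            const long coppers[] = {50 * 20, 10 * 20, 5 * 20, 20, 5, 1};

            long remaining = (long)(goldValue * 20 + 0.5); // round to whole coppers
            for (int i = 0; i < 6; i++) {
                long n = remaining / coppers[i];
                remaining %= coppers[i];
                if (n > 0) printf("%ld %s(s)\n", n, names[i]);
            }
            return 0; // 40.5 gold prints: 4 ruby(s), 2 silver(s)
        }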

    Read the article

  • 3n+1 problem at UVa

    - by ShahradR
    Hello, I am having trouble with the first question in the Programming Challenges book by Skiena and Revilla. I keep getting a "Wrong Answer" error message, and I don't know why. I've run my code against the sample input, and I keep getting the right answer. Anyway, if anyone could help me, I would really appreciate it :) Here is the problem URL: http://uva.onlinejudge.org/index.php?option=com_onlinejudge&Itemid=8&category=29&page=show_problem&problem=36 And here is the code: import java.util.Scanner; public class Main { static Scanner kb = new Scanner(System.in); public static void main(String[] args) { while (kb.hasNext()) { long[] numericalInput = {0, 0}; long i = kb.nextLong(); long j = kb.nextLong(); if (i > j) { numericalInput[0] = i; numericalInput[1] = j; } else { numericalInput[0] = j; numericalInput[1] = i; } long maxIterations = 0; for (long n = numericalInput[0]; n <= numericalInput[1]; n += 1) { if (maxIterations < returnIterations(n)) maxIterations = returnIterations(n); } System.out.println(i + " " + j + " " + maxIterations); } System.exit(0); } public static long returnIterations(long num) { long iterations = 0; while (num != 1) { if (num % 2 == 0) num = num / 2; else num = 3 * num + 1; iterations += 1; } iterations += 1; return iterations; } } EDIT: I think the problem is with the output. I tried to make it accept all the input first and then display all the answers at once, but I didn't know the terminating condition. I resorted to this method, but I'm not sure that's what the judge wants...
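
    One thing worth noticing before chasing the verdict: the loop bounds are swapped the wrong way round (numericalInput[0] gets the larger of i and j, then the for loop runs from [0] up to [1]), so whenever i > j the loop body never executes and 0 is printed as the maximum. Also, returnIterations is called twice per n, once in the comparison and again in the assignment. For comparison, a sketch of the conventional structure in C++ with a small memo table:

        #include <cstdio>

        static int memo[1000001]; // cached chain lengths for small n

        static int cycleLength(long long n) {
            if (n <= 1000000 && memo[n]) return memo[n];
            int len = (n == 1) ? 1
                    : 1 + cycleLength(n % 2 == 0 ? n / 2 : 3 * n + 1);
            if (n <= 1000000) memo[n] = len;
            return len;
        }

        int main() {
            long long i, j;
            while (scanf("%lld %lld", &i, &j) == 2) {
                long long lo = i < j ? i : j, hi = i < j ? j : i;
                int best = 0;
                for (long long n = lo; n <= hi; n++) {
                    int len = cycleLength(n);
                    if (len > best) best = len;
                }
                printf("%lld %lld %d\n", i, j, best); // echo i and j as read
            }
            return 0;
        }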

    Read the article

  • Project Euler 7 Scala Problem

    - by Nishu
    I was trying to solve Project Euler problem number 7 using Scala 2.8. The first solution I implemented takes ~8 seconds: def problem_7:Int = { var num = 17; var primes = new ArrayBuffer[Int](); primes += 2 primes += 3 primes += 5 primes += 7 primes += 11 primes += 13 while (primes.size < 10001){ if (isPrime(num, primes)) primes += num if (isPrime(num+2, primes)) primes += num+2 num += 6 } return primes.last; } def isPrime(num:Int, primes:ArrayBuffer[Int]):Boolean = { // if n == 2 return false; // if n == 3 return false; var r = Math.sqrt(num) for (i <- primes){ if(i <= r ){ if (num % i == 0) return false; } } return true; } Later I tried the same problem without storing the prime numbers in an ArrayBuffer. That takes .118 seconds: def problem_7_alt:Int = { var limit = 10001; var count = 6; var num:Int = 17; while(count < limit){ if (isPrime2(num)) count += 1; if (isPrime2(num+2)) count += 1; num += 6; } return num; } def isPrime2(n:Int):Boolean = { // if n == 2 return false; // if n == 3 return false; var r = Math.sqrt(n) var f = 5; while (f <= r){ if (n % f == 0) { return false; } else if (n % (f+2) == 0) { return false; } f += 6; } return true; } I tried using various mutable array/list implementations in Scala but was not able to make solution one faster. I do not think that storing Ints in an array of size 10001 can make a program slow. Is there some better way to use lists/arrays in Scala?

    Read the article

  • What is the best practice for using lock within inherited classes

    - by JDMX
    I want to know, if one class inherits from another, whether it is better to have the classes share a lock object defined in the base class or to have a lock object defined at each level of the inheritance hierarchy. A very simple example of a lock object on each level of the class: public class Foo { private object thisLock = new object(); private int ivalue; public int Value { get { lock( thisLock ) { return ivalue; } } set { lock( thisLock ) { ivalue= value; } } } } public class Foo2: Foo { private object thisLock2 = new object(); public int DoubleValue { get { lock( thisLock2 ) { return base.Value * 2; } } set { lock( thisLock2 ) { base.Value = value / 2; } } } } public class Foo6: Foo2 { private object thisLock6 = new object(); public int TripleDoubleValue { get { lock( thisLock6 ) { return base.DoubleValue * 3; } } set { lock( thisLock6 ) { base.DoubleValue = value / 3; } } } } A very simple example of a shared lock object: public class Foo { protected object thisLock = new object(); private int ivalue; public int Value { get { lock( thisLock ) { return ivalue; } } set { lock( thisLock ) { ivalue= value; } } } } public class Foo2: Foo { public int DoubleValue { get { lock( thisLock ) { return base.Value * 2; } } set { lock( thisLock ) { base.Value = value / 2; } } } } public class Foo6: Foo2 { public int TripleDoubleValue { get { lock( thisLock ) { return base.DoubleValue * 3; } } set { lock( thisLock ) { base.DoubleValue = value / 3; } } } } Which example is the preferred way to manage locking within an inherited class?

    Read the article

  • Wrapping FUSE from Go

    - by Matt Joiner
    I'm playing around with wrapping FUSE with Go. However, I've gotten stuck on how to deal with struct fuse_operations. I can't seem to expose the operations struct by declaring type Operations C.struct_fuse_operations, as the members are lower case, and my pure-Go sources would have to use C hackery to set the members anyway. My first error in this case is "can't set getattr" in what looks to be the Go equivalent of a default copy constructor. My next attempt is to expose an interface that expects GetAttr, ReadLink, etc., and then generate C.struct_fuse_operations and bind the function pointers to closures that call the given interface. This is what I've got (explanation continues after the code): package fuse // #include <fuse.h> // #include <stdlib.h> import "C" import ( //"fmt" "os" "unsafe" ) type Operations interface { GetAttr(string, *os.FileInfo) int } func Main(args []string, ops Operations) int { argv := make([]*C.char, len(args) + 1) for i, s := range args { p := C.CString(s) defer C.free(unsafe.Pointer(p)) argv[i] = p } cop := new(C.struct_fuse_operations) cop.getattr = func(*C.char, *C.struct_stat) int {} argc := C.int(len(args)) return int(C.fuse_main_real(argc, &argv[0], cop, C.size_t(unsafe.Sizeof(cop)), nil)) } package main import ( "fmt" "fuse" "os" ) type CpfsOps struct { a int } func (me *CpfsOps) GetAttr(string, *os.FileInfo) int { return -1; } func main() { fmt.Println(os.Args) ops := &CpfsOps{} fmt.Println("fuse main returned", fuse.Main(os.Args, ops)) } This gives the following error: fuse.go:21[fuse.cgo1.go:23]: cannot use func literal (type func(*_Ctype_char, *_Ctype_struct_stat) int) as type *[0]uint8 in assignment I'm not sure what to pass to these members of C.struct_fuse_operations, and I've seen mention in a few places that it's not possible to call from C back into Go code. If it is possible, what should I do? How can I provide the "default" values for interface functions that act as though the corresponding C.struct_fuse_operations member is set to NULL?

    Read the article

  • Why do I get a Detached Entity exception when upgrading Spring Boot 1.1.4 to 1.1.5?

    - by mmeany
    On updating Spring Boot from 1.1.4 to 1.1.5, a simple web application started generating detached entity exceptions. Specifically, a post-authentication interceptor that bumped the number of visits was causing the problem. A quick check of loaded dependencies showed that Spring Data had been updated from 1.6.1 to 1.6.2, and a further check of the change log showed a couple of fixed issues relating to optimistic locking, version fields, and JPA. Well, I am using a version field, and it starts out as null, following the specification's recommendation not to set it. I have produced a very simple test scenario where I get detached entity exceptions if the version field starts as null or zero. If I create an entity with version 1, however, I do not get these exceptions. Is this expected behaviour, or is there still something amiss? Below is the test scenario I have for this condition. In the scenario, the service layer has been annotated @Transactional. Each test case makes multiple calls to the service layer; the tests are working with detached entities, as this is the scenario I am working with in the full-blown application. The test case comprises four tests: Test 1 - versionNullCausesAnExceptionOnUpdate() In this test the version field in the detached object is null. This is how I would usually create the object prior to passing it to the service. This test fails with a Detached Entity exception. I would have expected this test to pass. If there is a flaw in the test, then the rest of the scenario is probably moot. Test 2 - versionZeroCausesExceptionOnUpdate() In this test I have set the version to the value Long(0L). This is an edge-case test, included because I found a reference to zero values being used for the version field in the Spring Data change log. This test fails with a Detached Entity exception. Of interest simply because the following two tests pass, leaving this as an anomaly. Test 3 - versionOneDoesNotCausesExceptionOnUpdate() In this test the version field is set to the value Long(1L). Not something I would usually do, but considering the notes in the Spring Data change log I decided to give it a go. This test passes. I would not usually set the version field, but this looks like a work-around until I figure out why the first test is failing. Test 4 - versionOneDoesNotCausesExceptionWithMultipleUpdates() Encouraged by the result of test 3, I pushed the scenario a step further and performed multiple updates on the entity that started life with a version of Long(1L). This test passes. Reinforcement that this may be a usable work-around. The entity: package com.mvmlabs.domain; import javax.persistence.Column; import javax.persistence.Entity; import javax.persistence.GeneratedValue; import javax.persistence.GenerationType; import javax.persistence.Id; import javax.persistence.Table; import javax.persistence.Version; @Entity @Table(name="user_details") public class User { @Id @GeneratedValue(strategy=GenerationType.AUTO) private Long id; @Version private Long version; @Column(nullable = false, unique = true) private String username; @Column(nullable = false) private Integer numberOfVisits; public Long getId() { return id; } public void setId(Long id) { this.id = id; } public Long getVersion() { return version; } public void setVersion(Long version) { this.version = version; } public Integer getNumberOfVisits() { return numberOfVisits == null ? 0 : numberOfVisits; } public void setNumberOfVisits(Integer numberOfVisits) { this.numberOfVisits = numberOfVisits; } public String getUsername() { return username; } public void setUsername(String username) { this.username = username; } } The repository: package com.mvmlabs.dao; import org.springframework.data.repository.CrudRepository; import com.mvmlabs.domain.User; public interface UserDao extends CrudRepository<User, Long>{ } The service interface: package com.mvmlabs.service; import com.mvmlabs.domain.User; public interface UserService { User save(User user); User loadUser(Long id); User registerVisit(User user); } The service implementation: package com.mvmlabs.service; import org.springframework.beans.factory.annotation.Autowired; import org.springframework.stereotype.Service; import org.springframework.transaction.annotation.Propagation; import org.springframework.transaction.annotation.Transactional; import org.springframework.transaction.support.TransactionSynchronizationManager; import com.mvmlabs.dao.UserDao; import com.mvmlabs.domain.User; @Service @Transactional(propagation=Propagation.REQUIRED, readOnly=false) public class UserServiceJpaImpl implements UserService { @Autowired private UserDao userDao; @Transactional(readOnly=true) @Override public User loadUser(Long id) { return userDao.findOne(id); } @Override public User registerVisit(User user) { user.setNumberOfVisits(user.getNumberOfVisits() + 1); return userDao.save(user); } @Override public User save(User user) { return userDao.save(user); } } The application class: package com.mvmlabs; import org.springframework.boot.SpringApplication; import org.springframework.boot.autoconfigure.EnableAutoConfiguration; import org.springframework.context.annotation.ComponentScan; import org.springframework.context.annotation.Configuration; @Configuration @ComponentScan @EnableAutoConfiguration public class Application { public static void main(String[] args) { SpringApplication.run(Application.class, args); } } The POM: <?xml version="1.0" encoding="UTF-8"?> <project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd"> <modelVersion>4.0.0</modelVersion> <groupId>com.mvmlabs</groupId> <artifactId>jpa-issue</artifactId> <version>0.0.1-SNAPSHOT</version> <packaging>jar</packaging> <name>spring-boot-jpa-issue</name> <description>JPA Issue between spring boot 1.1.4 and 1.1.5</description> <parent> <groupId>org.springframework.boot</groupId> <artifactId>spring-boot-starter-parent</artifactId> <version>1.1.5.RELEASE</version> <relativePath /> <!-- lookup parent from repository --> </parent> <dependencies> <dependency> <groupId>org.springframework.boot</groupId> <artifactId>spring-boot-starter-data-jpa</artifactId> </dependency> <dependency> <groupId>org.hsqldb</groupId> <artifactId>hsqldb</artifactId> <scope>runtime</scope> </dependency> <dependency> <groupId>org.springframework.boot</groupId> <artifactId>spring-boot-starter-test</artifactId> <scope>test</scope> </dependency> </dependencies> <properties> <project.build.sourceEncoding>UTF-8</project.build.sourceEncoding> <start-class>com.mvmlabs.Application</start-class> <java.version>1.7</java.version> </properties> <build> <plugins> <plugin> <groupId>org.springframework.boot</groupId> <artifactId>spring-boot-maven-plugin</artifactId> </plugin> </plugins> </build> </project> The application properties: spring.jpa.hibernate.ddl-auto: create spring.jpa.hibernate.naming_strategy: org.hibernate.cfg.ImprovedNamingStrategy spring.jpa.database: HSQL spring.jpa.show-sql: true spring.datasource.url=jdbc:hsqldb:file:./target/testdb spring.datasource.username=sa spring.datasource.password= spring.datasource.driverClassName=org.hsqldb.jdbcDriver The test case: package com.mvmlabs; import org.junit.Assert; import org.junit.Test; import org.junit.runner.RunWith; import org.springframework.beans.factory.annotation.Autowired; import org.springframework.boot.test.SpringApplicationConfiguration; import org.springframework.test.context.junit4.SpringJUnit4ClassRunner; import com.mvmlabs.domain.User; import com.mvmlabs.service.UserService; @RunWith(SpringJUnit4ClassRunner.class) @SpringApplicationConfiguration(classes = Application.class) public class ApplicationTests { @Autowired UserService userService; @Test public void versionNullCausesAnExceptionOnUpdate() throws Exception { User user = new User(); user.setUsername("Version Null"); user.setNumberOfVisits(0); user.setVersion(null); user = userService.save(user); user = userService.registerVisit(user); Assert.assertEquals(new Integer(1), user.getNumberOfVisits()); Assert.assertEquals(new Long(1L), user.getVersion()); } @Test public void versionZeroCausesExceptionOnUpdate() throws Exception { User user = new User(); user.setUsername("Version Zero"); user.setNumberOfVisits(0); user.setVersion(0L); user = userService.save(user); user = userService.registerVisit(user); Assert.assertEquals(new Integer(1), user.getNumberOfVisits()); Assert.assertEquals(new Long(1L), user.getVersion()); } @Test public void versionOneDoesNotCausesExceptionOnUpdate() throws Exception { User user = new User(); user.setUsername("Version One"); user.setNumberOfVisits(0); user.setVersion(1L); user = userService.save(user); user = userService.registerVisit(user); Assert.assertEquals(new Integer(1), user.getNumberOfVisits()); Assert.assertEquals(new Long(2L), user.getVersion()); } @Test public void versionOneDoesNotCausesExceptionWithMultipleUpdates() throws Exception { User user = new User(); user.setUsername("Version One Multiple"); user.setNumberOfVisits(0); user.setVersion(1L); user = userService.save(user); user = userService.registerVisit(user); user = userService.registerVisit(user); user = userService.registerVisit(user); Assert.assertEquals(new Integer(3), user.getNumberOfVisits()); Assert.assertEquals(new Long(4L), user.getVersion()); } } The first two tests fail with a detached entity exception. The last two tests pass as expected. Now change the Spring Boot version to 1.1.4 and rerun: all tests pass. Are my expectations wrong? Edit: this code is saved on GitHub at https://github.com/mmeany/spring-boot-detached-entity-issue

    Read the article

  • Advice on method overloads

    - by Muhammad Kashif Nadeem
    Please see the following methods. public static ProductsCollection GetDummyData(int? customerId, int? supplierId) { try { if (customerId != null && customerId > 0) { Filter.Add(Customres.CustomerId == customerId); } if (supplierId != null && supplierId > 0) { Filter.Add(Suppliers.SupplierId == supplierId); } ProductsCollection products = new ProductsCollection(); products.FetchData(Filter); return products; } catch { throw; } } public static ProductsCollection GetDummyData(int? customerId) { return ProductsCollection GetDummyData(customerId, (int?)null); } public static ProductsCollection GetDummyData() { return ProductsCollection GetDummyData((int?)null); } 1. Please advise how I can make overloads for both customerId and supplierId, because only one overload can be created with GetDummyData(int?). Should I add another argument to indicate whether the first argument is customerId or supplierId, for example GetDummyData(int?, string)? Or should I use an enum as the second argument to indicate whether the first argument is customerId or supplierId? 2. Is this condition correct, or is just checking > 0 sufficient: if (customerId != null && customerId > 0)? 3. Is using try/catch like this correct? 4. Is passing (int?)null correct, or is there a better approach? Edit: I have found some other posts like this, and because I have no knowledge of generics, that may be why I am facing this problem. Am I right? Following is the post: http://stackoverflow.com/questions/422625/overloaded-method-calling-overloaded-method

    Read the article

  • Problem adding a button in a dynamically created GridView with auto-generated columns set to true

    - by Anuj Koundal
    Hi guys, I am using a GridView with auto-generated columns set to true to display data, and I am using a DataSet to bind the grid, as the DataSet gives me crosstab/pivot data. On the dropdown's SelectedIndexChanged, here is the code I am using: protected void ddl_SelectedIndexChanged(object sender, EventArgs e) { fillGridview(Convert.ToInt32(ddl.SelectedValue)); bindHeader(); } //===================//Bind GridColumns //================= void bindHeader() { GridViewRow headerRow; headerRow = gridDashboard.HeaderRow; foreach (GridViewRow grdRow in gridDashboard.Rows) { int count = grdRow.Cells.Count; int siteId=Convert.ToInt32(grdRow.Cells[4].Text); for (int j = 0; j < count; j++) { if (j >= 5) { int id=Convert.ToInt32(grdRow.Cells[j].Text); string headText =headerRow.Cells[j].Text.ToString(); string[] txtArray=headText.Split('-'); int stepId=Convert.ToInt32(txtArray[0]); //headerRow.Cells[j].Text = txtArray[1].ToString(); string HeadName = txtArray[1].ToString(); LinkButton lb = new LinkButton(); lb.Style.Add("text-decoration","none"); if (id > 0) { string Details = getDashBoardSiteStepDetails(id); lb.Text = Details; } else { lb.Text = " - "; } lb.CommandName = "HideColumn"; lb.CommandArgument = siteId.ToString() + "/" + stepId.ToString(); grdRow.Cells[j].Controls.Add(lb); } } } int cnt = headerRow.Cells.Count; for (int j = 0; j < cnt; j++) { if (j >= 5) { string hdText = headerRow.Cells[j].Text.ToString(); string[] txtArray = hdText.Split('-'); // int stepId = Convert.ToInt32(txtArray[0]); headerRow.Cells[j].Text = txtArray[1].ToString(); } } In the code above I am trying to add a button dynamically in each cell, with the button's text taken from that cell. It works great, but when I click the link button, the link buttons disappear and the original text of the cell displays. Please help; I also want to handle the click of these link buttons. Thanks

    Read the article

  • Trying to make a simple grid class: non-lvalue in assignment

    - by Tyrfing
    I'm implementing a simple C++ grid class. One function it should support is element access through round brackets, so that I can access the elements by writing mygrid(0,0). I overloaded the () operator, and I am getting the error message "non-lvalue in assignment". What I want to be able to do: //main cGrid<cA*> grid(5, 5); grid(0,0) = new cA(); An excerpt of my implementation of the grid class: template <class T> class cGrid { private: T* data; int mWidth; int mHeight; public: cGrid(int width, int height) : mWidth(width), mHeight(height) { data = new T[width*height]; } ~cGrid() { delete data; } T operator ()(int x, int y) { if (x >= 0 && x <= mWidth) { if (y >= 0 && y <= mHeight) { return data[x + y * mWidth]; } } } const T &operator ()(int x, int y) const { if (x >= 0 && x <= mWidth) { if (y >= 0 && y <= mHeight) { return data[x + y * mWidth]; } } } The rest of the code deals with the implementation of an iterator and should not be relevant.
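
    The "non-lvalue in assignment" comes from the non-const operator() returning T by value: grid(0,0) is then a temporary copy, which cannot sit on the left of an assignment. Returning a reference fixes it. Two side issues are also worth noting: the new[] in the constructor should be paired with delete[], and the out-of-range path falls off the end of the function without returning, which is undefined behaviour. A minimal sketch:

        #include <cassert>

        template <class T>
        class Grid {
            T *data;
            int w, h;
        public:
            Grid(int width, int height)
                : data(new T[width * height]), w(width), h(height) {}
            ~Grid() { delete[] data; } // new[] pairs with delete[]
            // Returning T& makes grid(x, y) an lvalue, so it is assignable.
            T &operator()(int x, int y) { return data[x + y * w]; }
            // The const overload serves read-only access on const grids.
            const T &operator()(int x, int y) const { return data[x + y * w]; }
        };

        int main() {
            Grid<int> g(5, 5);
            g(0, 0) = 42; // compiles now: operator() yields an lvalue
            assert(g(0, 0) == 42);
            return 0;
        }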

    Read the article

  • Can you tell me why this generates "time limit exceeded" on SPOJ (Prime Number Generator)?

    - by magiix
    #include<iostream> #include<string.h> #include<math.h> using namespace std; bool prime[1000000500]; void generate(long long end) { memset(prime,true,sizeof(prime)); prime[0]=false; prime[1]=false; for(long long i=0;i<=sqrt(end);i++) { if(prime[i]==true) { for(long long y=i*i;y<=end;y+=i) { prime[y]=false; } } } } int main() { int n; long long b,e; scanf("%d",&n); while(n--) { cin>>b>>e; generate(e); for(int i=b;i<e;i++) { if(prime[i]) printf("%d\n",i); } } return 0; } That's my code for the SPOJ prime generator. Although it generates the same output as another accepted solution...
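
    The TLE is structural rather than a micro-optimisation issue: bool prime[1000000500] is roughly a gigabyte of static storage, and generate(e) re-sieves everything from 0 up to e (as large as 10^9 in this problem) for every test case. (Note also that the print loop uses i<e, which excludes e itself.) The standard fix is a segmented sieve that marks composites only inside [b, e], which the problem guarantees is at most 100000 wide. A sketch under those classic limits:

        #include <cmath>
        #include <cstdio>
        #include <cstring>

        int main() {
            int t;
            scanf("%d", &t);
            while (t--) {
                long long b, e;
                scanf("%lld %lld", &b, &e);
                if (b < 2) b = 2;

                static bool composite[100001];   // covers one segment [b, e]
                memset(composite, 0, sizeof(composite));

                long long root = (long long)sqrt((double)e);
                for (long long p = 2; p <= root; p++) {
                    bool isPrime = true;         // cheap trial check keeps the sketch short
                    for (long long q = 2; q * q <= p; q++)
                        if (p % q == 0) { isPrime = false; break; }
                    if (!isPrime) continue;
                    long long start = ((b + p - 1) / p) * p; // first multiple of p in [b, e]
                    if (start == p) start = 2 * p;           // never cross out p itself
                    for (long long m = start; m <= e; m += p)
                        composite[m - b] = true;
                }
                for (long long n = b; n <= e; n++)
                    if (!composite[n - b]) printf("%lld\n", n);
                if (t) printf("\n");             // blank line between test cases
            }
            return 0;
        }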

    Read the article

  • C#, finding the largest prime factor of a number

    - by Juan
    Hello! I am new to programming and I am practicing my C# programming skills. My application is meant to find the largest prime factor of a number entered by the user, but it is not returning the right answer and I don't really know where the problem is. Can you please help me? using System; using System.Collections.Generic; using System.Linq; using System.Text; namespace ConsoleApplication1 { class Program { static void Main(string[] args) { Console.WriteLine("Calcular máximo factor primo de n. De 60 es 5."); Console.Write("Escriba un numero: "); long num = Convert.ToInt64(Console.ReadLine()); long mfp = maxfactor(num); Console.WriteLine("El maximo factor primo es: " + num); Console.Read(); } static private long maxfactor (long n) { long m=1 ; bool en= false; for (long k = n / 2; !en && k > 1; k--) { if (n % k == 0 && primo(k)) { m = k; en = true; } } return m; } static private bool primo(long x) { bool sp = true; for (long i = 2; i <= x / 2; i++) { if (x % i == 0) sp = false; } return sp; } } }
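
    One bug is right in Main: it prints num, the number the user typed, instead of mfp, the computed result. Beyond that, scanning downward and primality-testing every candidate is needlessly slow; dividing factors out is both simpler and faster, because whatever remains after removing every factor up to sqrt(n) is itself prime (or 1). A sketch of that approach (C++ here; the logic ports to C# directly):

        #include <cstdio>

        // Largest prime factor, found by dividing factors out as they appear.
        static long long maxFactor(long long n) {
            long long largest = 1;
            for (long long f = 2; f * f <= n; f++) {
                while (n % f == 0) {
                    largest = f;
                    n /= f;
                }
            }
            return (n > 1) ? n : largest; // any leftover n is itself prime
        }

        int main() {
            printf("%lld\n", maxFactor(60)); // prints 5, matching the prompt's example
            return 0;
        }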

    Read the article

  • How do I refactor this code using Action&lt;T&gt; or Func&lt;T&gt; delegates?

    - by user330612
    I have a sample program which needs to execute 3 methods in a particular order, with error handling after each method. I did this in a plain fashion, without delegates, like this: class Program { public static void Main() { MyTest(); } private static bool MyTest() { bool result = true; int m = 2; int temp = 0; try { temp = Function1(m); } catch (Exception e) { Console.WriteLine("Caught exception for function1" + e.Message); result = false; } try { Function2(temp); } catch (Exception e) { Console.WriteLine("Caught exception for function2" + e.Message); result = false; } try { Function3(temp); } catch (Exception e) { Console.WriteLine("Caught exception for function3" + e.Message); result = false; } return result; } public static int Function1(int x) { Console.WriteLine("Sum is calculated"); return x + x; } public static int Function2(int x) { Console.WriteLine("Difference is calculated "); return (x - x); } public static int Function3(int x) { return x * x; } } As you can see, this code looks ugly with so many try/catch blocks that all do the same thing, so I decided to use delegates to refactor it so that the try/catch can all be moved into one method and it looks neat. I was looking at some examples online and couldn't figure out whether I should use Action or Func delegates for this. Both look similar, but I'm unable to get a clear idea of how to implement this. Any help is greatly appreciated. I'm using .NET 4.0, so I'm also allowed to use anonymous methods and lambda expressions for this. Thanks
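
    The shape of the refactor is one helper that owns the try/catch and takes each step as a callable; in C# that helper would take a Func&lt;int, int&gt; (Action&lt;T&gt; suits steps that return nothing). The same idea sketched in C++ with std::function and lambdas:

        #include <cstdio>
        #include <functional>
        #include <stdexcept>

        // One error-handling wrapper instead of three copied try/catch blocks.
        static bool tryStep(const char *name, const std::function<int(int)> &step,
                            int input, int &output) {
            try {
                output = step(input);
                return true;
            } catch (const std::exception &e) {
                printf("Caught exception for %s: %s\n", name, e.what());
                return false;
            }
        }

        int main() {
            bool ok = true;
            int temp = 0, unused = 0;
            ok &= tryStep("function1", [](int x) { return x + x; }, 2, temp);
            ok &= tryStep("function2", [](int x) { return x - x; }, temp, unused);
            ok &= tryStep("function3", [](int x) { return x * x; }, temp, unused);
            printf("result: %s\n", ok ? "true" : "false");
            return 0;
        }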

    Read the article

  • Problem cube mapping in OpenGL using DDS compressed images

    - by Paul Jones
    Hi all, I am having trouble cube mapping when using a DDS cube map: I'm just getting a black cube, which leads me to believe I am missing something simple. Here's the code so far: DDS_IMAGE_DATA *pDDSImageData = LoadDDSFile(filename); //compressedTexture = -1; if(pDDSImageData != NULL) { int height = pDDSImageData->height; int width = pDDSImageData->width; int numMipMaps = pDDSImageData->numMipMaps; int blockSize; GLenum cubefaces[6] = { GL_TEXTURE_CUBE_MAP_POSITIVE_X, GL_TEXTURE_CUBE_MAP_NEGATIVE_X, GL_TEXTURE_CUBE_MAP_POSITIVE_Y, GL_TEXTURE_CUBE_MAP_NEGATIVE_Y, GL_TEXTURE_CUBE_MAP_POSITIVE_Z, GL_TEXTURE_CUBE_MAP_NEGATIVE_Z, }; if( pDDSImageData->format == GL_COMPRESSED_RGBA_S3TC_DXT1_EXT ) blockSize = 8; else blockSize = 16; glGenTextures( 1, &textureId ); int nSize; int nOffset = 0; glEnable(GL_TEXTURE_CUBE_MAP); glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_MAG_FILTER, GL_LINEAR); glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR); glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE); glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE); glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_WRAP_R, GL_CLAMP_TO_EDGE); for(int face = 0; face < 6; face++) { for( int i = 0; i < numMipMaps; i++ ) { if( width == 0 ) width = 1; if( height == 0 ) height = 1; glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_GENERATE_MIPMAP, GL_TRUE); nSize = ((width+3)>>2) * ((height+3)>>2) * blockSize; glCompressedTexImage2D(cubefaces[face] , i, pDDSImageData->format, width, height, 0, nSize, pDDSImageData->pixels + nOffset ); nOffset += nSize; // Half the image size for the next mip-map level... width = (width / 2); height = (height / 2); } } } Once this code is called I bind the texture using glBindTexture and draw a cube using GL_QUADS and glTexCoord3f. Thank you for reading; all comments are welcome. Sorry if any of the formatting comes out wrong.
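
    Two things stand out. First, nothing is bound when the uploads run: glGenTextures only reserves a name, so every glTexParameteri and glCompressedTexImage2D here targets whatever cube map happens to be bound (binding "once this code is called" is too late). Second, width and height are halved all the way down face 0's mip chain and never reset, so faces 1 through 5 are uploaded with degenerate sizes. A sketch of the corrected upload loop, assuming the same DDS fields and cubefaces array as above:

        // Bind first so the parameter calls and uploads target this texture.
        glBindTexture(GL_TEXTURE_CUBE_MAP, textureId);

        for (int face = 0; face < 6; face++) {
            int w = pDDSImageData->width;   // reset the mip-level dimensions
            int h = pDDSImageData->height;  // for every face
            for (int i = 0; i < numMipMaps; i++) {
                if (w == 0) w = 1;
                if (h == 0) h = 1;
                int nSize = ((w + 3) / 4) * ((h + 3) / 4) * blockSize;
                glCompressedTexImage2D(cubefaces[face], i, pDDSImageData->format,
                                       w, h, 0, nSize,
                                       pDDSImageData->pixels + nOffset);
                nOffset += nSize; // DDS stores each face's full mip chain in sequence
                w /= 2;
                h /= 2;
            }
        }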

    Read the article

  • Galaxy Nexus (specifically) camera startPreview failed

    - by Roman
    I'm having a strange issue on a Galaxy Nexus when trying to have a live camera view in my app. I get this error in the logcat: 06-29 16:31:26.681 I/CameraClient(133): Opening camera 0 06-29 16:31:26.681 I/CameraHAL(133): camera_device open 06-29 16:31:26.970 D/DOMX (133): ERROR: failed check:(eError == OMX_ErrorNone) || (eError == OMX_ErrorNoMore) - returning error: 0x80001005 - Error returned from OMX API in ducati 06-29 16:31:26.970 E/CameraHAL(133): Error while configuring rotation 0x80001005 06-29 16:31:27.088 I/am_on_resume_called(21274): [0,digifynotes.Activity_Camera] 06-29 16:31:27.111 V/PhoneStatusBar(693): setLightsOn(true) 06-29 16:31:27.205 E/CameraHAL(133): OMX component is not in loaded state 06-29 16:31:27.205 E/CameraHAL(133): setNSF() failed -22 06-29 16:31:27.205 E/CameraHAL(133): Error: CAMERA_QUERY_RESOLUTION_PREVIEW -22 06-29 16:31:27.252 I/MonoDroid(21274): UNHANDLED EXCEPTION: Java.Lang.Exception: Exception of type 'Java.Lang.Exception' was thrown. 06-29 16:31:27.252 I/MonoDroid(21274): at Android.Runtime.JNIEnv.CallVoidMethod (intptr,intptr) <0x00068> 06-29 16:31:27.252 I/MonoDroid(21274): at Android.Hardware.Camera.StartPreview () <0x0007f> 06-29 16:31:27.252 I/MonoDroid(21274): at DigifyNotes.CameraPreviewView.SurfaceChanged (Android.Views.ISurfaceHolder,Android.Graphics.Format,int,int) <0x000d7> 06-29 16:31:27.252 I/MonoDroid(21274): at Android.Views.ISurfaceHolderCallbackInvoker.n_SurfaceChanged_Landroid_view_SurfaceHolder_III (intptr,intptr,intptr,int,int,int) <0x0008b> 06-29 16:31:27.252 I/MonoDroid(21274): at (wrapper dynamic-method) object.4c65d912-497c-4a67-9046-4b33a55403df (intptr,intptr,intptr,int,int,int) <0x0006b> That very same source code works flawlessly on a Samsung Galaxy Ace 2X (4.0.4) and an LG G2X (2.3.7). I will test on a Galaxy S4 later if my friend lends it to me. The Galaxy Nexus runs Android 4.2.2, I believe. Anyone have any ideas? EDIT: Here are my camera classes: [Please note I am using Mono] [The formatting is more readable if you view it as raw] Camera Activity: http://pastebin.com/YPcGXJRB Camera Preview View: http://pastebin.com/zNf8AWDf

    Read the article

  • Need help with implementing collision detection using the Separating Axis Theorem

    - by Eddie Ringle
    So, after hours of Googling and reading, I've found that the basic process of detecting a collision using SAT is: for each edge of poly A project A and B onto the normal for this edge if intervals do not overlap, return false end for for each edge of poly B project A and B onto the normal for this edge if intervals do not overlap, return false end for However, as many ways as I try to implement this in code, I just cannot get it to detect the collision. My current code is as follows: for (unsigned int i = 0; i < asteroids.size(); i++) { if (asteroids.valid(i)) { asteroids[i]->Update(); // Player-Asteroid collision detection bool collision = true; SDL_Rect asteroidBox = asteroids[i]->boundingBox; // Bullet-Asteroid collision detection for (unsigned int j = 0; j < player.bullets.size(); j++) { if (player.bullets.valid(j)) { Bullet b = player.bullets[j]; collision = true; if (b.x + (b.w / 2.0f) < asteroidBox.x - (asteroidBox.w / 2.0f)) collision = false; if (b.x - (b.w / 2.0f) > asteroidBox.x + (asteroidBox.w / 2.0f)) collision = false; if (b.y - (b.h / 2.0f) > asteroidBox.y + (asteroidBox.h / 2.0f)) collision = false; if (b.y + (b.h / 2.0f) < asteroidBox.y - (asteroidBox.h / 2.0f)) collision = false; if (collision) { bool realCollision = false; float min1, max1, min2, max2; // Create a list of vertices for the bullet CrissCross::Data::LList<Vector2D *> bullVerts; bullVerts.insert(new Vector2D(b.x - b.w / 2.0f, b.y + b.h / 2.0f)); bullVerts.insert(new Vector2D(b.x - b.w / 2.0f, b.y - b.h / 2.0f)); bullVerts.insert(new Vector2D(b.x + b.w / 2.0f, b.y - b.h / 2.0f)); bullVerts.insert(new Vector2D(b.x + b.w / 2.0f, b.y + b.h / 2.0f)); // Create a list of vectors of the edges of the bullet and the asteroid CrissCross::Data::LList<Vector2D *> bullEdges; CrissCross::Data::LList<Vector2D *> asteroidEdges; for (int k = 0; k < 4; k++) { int n = (k == 3) ? 0 : k + 1; bullEdges.insert(new Vector2D(bullVerts[k]->x - bullVerts[n]->x, bullVerts[k]->y - bullVerts[n]->y)); asteroidEdges.insert(new Vector2D(asteroids[i]->vertices[k]->x - asteroids[i]->vertices[n]->x, asteroids[i]->vertices[k]->y - asteroids[i]->vertices[n]->y)); } for (unsigned int k = 0; k < asteroidEdges.size(); k++) { Vector2D *axis = asteroidEdges[k]->getPerpendicular(); min1 = max1 = axis->dotProduct(asteroids[i]->vertices[0]); for (unsigned int l = 1; l < asteroids[i]->vertices.size(); l++) { float test = axis->dotProduct(asteroids[i]->vertices[l]); min1 = (test < min1) ? test : min1; max1 = (test > max1) ? test : max1; } min2 = max2 = axis->dotProduct(bullVerts[0]); for (unsigned int l = 1; l < bullVerts.size(); l++) { float test = axis->dotProduct(bullVerts[l]); min2 = (test < min2) ? test : min2; max2 = (test > max2) ? test : max2; } delete axis; axis = NULL; if ( (min1 - max2) > 0 || (min2 - max1) > 0 ) { realCollision = false; break; } else { realCollision = true; } } if (realCollision == false) { for (unsigned int k = 0; k < bullEdges.size(); k++) { Vector2D *axis = bullEdges[k]->getPerpendicular(); min1 = max1 = axis->dotProduct(asteroids[i]->vertices[0]); for (unsigned int l = 1; l < asteroids[i]->vertices.size(); l++) { float test = axis->dotProduct(asteroids[i]->vertices[l]); min1 = (test < min1) ? test : min1; max1 = (test > max1) ? test : max1; } min2 = max2 = axis->dotProduct(bullVerts[0]); for (unsigned int l = 1; l < bullVerts.size(); l++) { float test = axis->dotProduct(bullVerts[l]); min2 = (test < min2) ? test : min2; max2 = (test > max2) ? test : max2; } delete axis; axis = NULL; if ( (min1 - max2) > 0 || (min2 - max1) > 0 ) { realCollision = false; break; } else { realCollision = true; } } } if (realCollision) { player.bullets.remove(j); int numAsteroids; float newDegree; srand ( j + asteroidBox.x ); if ( asteroids[i]->degree == 90.0f ) { if ( rand() % 2 == 1 ) { numAsteroids = 3; newDegree = 30.0f; } else { numAsteroids = 2; newDegree = 45.0f; } for ( int k = 0; k < numAsteroids; k++) asteroids.insert(new Asteroid(asteroidBox.x + (10 * k), asteroidBox.y + (10 * k), newDegree)); } delete asteroids[i]; asteroids.remove(i); } while (bullVerts.size()) { delete bullVerts[0]; bullVerts.remove(0); } while (bullEdges.size()) { delete bullEdges[0]; bullEdges.remove(0); } while (asteroidEdges.size()) { delete asteroidEdges[0]; asteroidEdges.remove(0); } } } } } } bullEdges is a list of vectors of the edges of a bullet, asteroidEdges is similar, and bullVerts and asteroids[i].vertices are, obviously, lists of vectors of each vertex for the respective bullet or asteroid. Honestly, I'm not looking for code corrections, just a fresh set of eyes.
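
    Since the request is a fresh set of eyes: the interval test ((min1 - max2) > 0 || (min2 - max1) > 0) is equivalent to the standard min1 > max2 || min2 > max1, so the SAT math above looks sound. The classic silent killer in this kind of setup is coordinate spaces: if asteroids[i]->vertices are stored relative to the asteroid's centre while the bullet's corners are built in world space, every projection comparison is meaningless even though the algorithm is correct, so that is worth checking first. For comparison, a compact, self-contained version of the pseudocode at the top, assuming convex polygons with vertices listed in order:

        #include <cstdio>
        #include <vector>

        struct Vec2 { float x, y; };

        // Projects every vertex of a polygon onto an axis, returning the interval.
        static void project(const std::vector<Vec2> &poly, Vec2 axis,
                            float &mn, float &mx) {
            mn = mx = poly[0].x * axis.x + poly[0].y * axis.y;
            for (size_t i = 1; i < poly.size(); i++) {
                float d = poly[i].x * axis.x + poly[i].y * axis.y;
                if (d < mn) mn = d;
                if (d > mx) mx = d;
            }
        }

        // SAT for convex polygons: any edge normal that separates the projected
        // intervals proves there is no collision.
        static bool satCollide(const std::vector<Vec2> &a, const std::vector<Vec2> &b) {
            const std::vector<Vec2> *polys[2] = {&a, &b};
            for (int p = 0; p < 2; p++) {
                const std::vector<Vec2> &poly = *polys[p];
                for (size_t i = 0; i < poly.size(); i++) {
                    const Vec2 &v0 = poly[i];
                    const Vec2 &v1 = poly[(i + 1) % poly.size()];
                    Vec2 axis = {-(v1.y - v0.y), v1.x - v0.x}; // edge perpendicular
                    float mn1, mx1, mn2, mx2;
                    project(a, axis, mn1, mx1);
                    project(b, axis, mn2, mx2);
                    if (mn1 > mx2 || mn2 > mx1) return false; // separating axis found
                }
            }
            return true;
        }

        int main() {
            std::vector<Vec2> asteroid = {{0, 0}, {2, 0}, {2, 2}, {0, 2}};
            std::vector<Vec2> bullet   = {{1, 1}, {3, 1}, {3, 3}, {1, 3}};
            printf("overlap: %s\n", satCollide(asteroid, bullet) ? "yes" : "no");
            return 0;
        }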

    Read the article

  • aio_read from file error on OS X

    - by Pyetras
    The following code: #include <fcntl.h> #include <unistd.h> #include <stdio.h> #include <aio.h> #include <errno.h> int main (int argc, char const *argv[]) { char name[] = "abc"; int fdes; if ((fdes = open(name, O_RDWR | O_CREAT, 0600 )) < 0) printf("%d, create file", errno); int buffer[] = {0, 1, 2, 3, 4, 5}; if (write(fdes, &buffer, sizeof(buffer)) == 0){ printf("writerr\n"); } struct aiocb aio; int n = 2; while (n--){ aio.aio_reqprio = 0; aio.aio_fildes = fdes; aio.aio_offset = sizeof(int); aio.aio_sigevent.sigev_notify = SIGEV_NONE; int buffer2; aio.aio_buf = &buffer2; aio.aio_nbytes = sizeof(buffer2); if (aio_read(&aio) != 0){ printf("%d, readerr\n", errno); }else{ const struct aiocb *aio_l[] = {&aio}; if (aio_suspend(aio_l, 1, 0) != 0){ printf("%d, suspenderr\n", errno); }else{ printf("%d\n", *(int *)aio.aio_buf); } } } return 0; } works fine on Linux (Ubuntu 9.10, compiled with -lrt), printing: 1 1 But it fails on OS X (10.6.6 and 10.6.5; I've tested it on two machines), printing: 1 35, readerr Is it possible that this is due to some library error on OS X, or am I doing something wrong?
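
    Errno 35 on OS X is EAGAIN, which aio_read returns when the system's quota of outstanding AIO requests is exhausted, and this code never reaps its requests: POSIX expects one aio_return call per completed request to retrieve the result and release its resources, so the first iteration's request may plausibly still be counted against the limit when the second aio_read is issued. A sketch of the reap step that would follow the aio_suspend (same aiocb as above; note also that buffer2 must outlive the request):

        // After aio_suspend() says the request finished, reap it:
        int err = aio_error(&aio);
        if (err != 0) {
            printf("%d, aio_error\n", err);
        } else {
            ssize_t nread = aio_return(&aio); // retrieves the result, frees the slot
            printf("read %zd bytes: %d\n", nread, *(int *)aio.aio_buf);
        }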

    Read the article
