Search Results

Search found 810 results on 33 pages for 'initializing'.


  • What is the fastest way to Initialize a multi-dimensional array to non-default values in .NET?

    - by AMissico
    How do I initialize a multi-dimensional array of a primitive type as fast as possible? I am stuck with using multi-dimensional arrays. My problem is performance. The following routine initializes a 100x100 array in approx. 500 ticks. Removing the int.MaxValue initialization results in approx. 180 ticks just for the looping. It takes approximately 100 ticks to create the array without looping and without initializing to int.MaxValue. Routines similar to this are called a few hundred-thousand to several million times during a "run". The array size will not change during a run and arrays are created one-at-a-time, used, then discarded, and a new array created. A "run" may last from one minute (using 10x10 arrays) to forty-five minutes (100x100). The application creates arrays of int, bool, and struct. Multiple "runs" can execute at the same time, but they are not run concurrently because performance degrades terribly. I am using 100x100 as a base-line. I am open to suggestions on how to optimize this non-default initialization of an array. One idea I had is to use a smaller primitive type when available. For instance, using byte instead of int saves 100 ticks. I would be happy with this, but I am hoping that I don't have to change the primitive data type. public int[,] CreateArray(Size size) { int[,] array = new int[size.Width, size.Height]; for (int x = 0; x < size.Width; x++) { for (int y = 0; y < size.Height; y++) { array[x, y] = int.MaxValue; } } return array; } Down to 450 ticks with the following: public int[,] CreateArray1(Size size) { int iX = size.Width; int iY = size.Height; int[,] array = new int[iX, iY]; for (int x = 0; x < iX; x++) { for (int y = 0; y < iY; y++) { array[x, y] = int.MaxValue; } } return array; } Down to approximately 165 ticks after a one-time initialization of 2800 ticks. (See my answer below.) If I can get stackalloc to work with multi-dimensional arrays, I should be able to get the same performance without having to initialize the private static array. private static bool _arrayInitialized5; private static int[,] _array5; public static int[,] CreateArray5(Size size) { if (!_arrayInitialized5) { int iX = size.Width; int iY = size.Height; _array5 = new int[iX, iY]; for (int x = 0; x < iX; x++) { for (int y = 0; y < iY; y++) { _array5[x, y] = int.MaxValue; } } _arrayInitialized5 = true; } return (int[,])_array5.Clone(); } Down to approximately 165 ticks without using the "clone technique" above. (See my answer below.) I am sure I can get the ticks lower, if I can just figure out the return of CreateArray9. public unsafe static int[,] CreateArray8(Size size) { int iX = size.Width; int iY = size.Height; int[,] array = new int[iX, iY]; fixed (int* pfixed = array) { int count = array.Length; for (int* p = pfixed; count-- > 0; p++) *p = int.MaxValue; } return array; }
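    One more variation worth benchmarking (a sketch only, not taken from the question; the ArrayFactory name is mine): fill a template array once per run and copy it into each new array with Buffer.BlockCopy, which copies the raw bytes of primitive arrays - rectangular int[,] arrays included - and typically beats an element-by-element loop.

    using System;

    public sealed class ArrayFactory
    {
        private readonly int _width;
        private readonly int _height;
        private readonly int[,] _template;

        public ArrayFactory(int width, int height)
        {
            _width = width;
            _height = height;
            _template = new int[width, height];
            // One-time cost, comparable to the one-time 2800-tick initialization above.
            for (int x = 0; x < width; x++)
                for (int y = 0; y < height; y++)
                    _template[x, y] = int.MaxValue;
        }

        public int[,] Create()
        {
            // Buffer.BlockCopy operates on the underlying bytes of primitive arrays,
            // so the whole rectangular template is copied in a single call.
            var array = new int[_width, _height];
            Buffer.BlockCopy(_template, 0, array, 0, _width * _height * sizeof(int));
            return array;
        }
    }

    Because the array size is fixed within a run, one factory per run amortizes the template cost; unlike the static _array5 approach, it also remains safe if two runs with different sizes ever do execute at the same time.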

    Read the article

  • Spring @Autowired messageSource working in Controller but not in other classes?

    - by Jayaprakash
    New updates: As I could not succeed in configuring messageSource through annotations, I attempted to configure messageSource injection through servlet-context.xml. I still have messageSource as null. Please let me know if you need any more specific info, and I will provide. Thanks for your help in advance. servlet-context.xml <beans:bean id="message" class="com.mycompany.myapp.domain.common.message.Message"> <beans:property name="messageSource" ref="messageSource" /> </beans:bean> Spring gives the below information message about spring initialization. INFO : org.springframework.context.annotation.ClassPathBeanDefinitionScanner - JSR-330 'javax.inject.Named' annotation found and supported for component scanning INFO : org.springframework.beans.factory.support.DefaultListableBeanFactory - Overriding bean definition for bean 'message': replacing [Generic bean: class [com.mycompany.myapp.domain.common.message.Message]; scope=singleton; abstract=false; lazyInit=false; autowireMode=0; dependencyCheck=0; autowireCandidate=true; primary=false; factoryBeanName=null; factoryMethodName=null; initMethodName=null; destroyMethodName=null; defined in file [C:\springsource\tc-server-developer-2.1.0.RELEASE\spring-insight-instance\wtpwebapps\myapp\WEB-INF\classes\com\mycompany\myapp\domain\common\message\Message.class]] with [Generic bean: class [com.mycompany.myapp.domain.common.message.Message]; scope=; abstract=false; lazyInit=false; autowireMode=0; dependencyCheck=0; autowireCandidate=true; primary=false; factoryBeanName=null; factoryMethodName=null; initMethodName=null; destroyMethodName=null; defined in ServletContext resource [/WEB-INF/spring/appServlet/servlet-context.xml]] INFO : org.springframework.beans.factory.annotation.AutowiredAnnotationBeanPostProcessor - JSR-330 'javax.inject.Inject' annotation found and supported for autowiring INFO : org.springframework.beans.factory.support.DefaultListableBeanFactory - Pre-instantiating singletons in org.springframework.beans.factory.support.DefaultListableBeanFactory@1c7caac5: defining beans [org.springframework.web.servlet.mvc.annotation.DefaultAnnotationHandlerMapping#0,org.springframework.format.support.FormattingConversionServiceFactoryBean#0,org.springframework.validation.beanvalidation.LocalValidatorFactoryBean#0,org.springframework.web.servlet.mvc.annotation.AnnotationMethodHandlerAdapter#0,org.springframework.web.servlet.handler.MappedInterceptor#0,org.springframework.web.servlet.mvc.HttpRequestHandlerAdapter,org.springframework.web.servlet.resource.ResourceHttpRequestHandler#0,org.springframework.web.servlet.handler.SimpleUrlHandlerMapping#0,xxxDao,message,xxxService,jsonDateSerializer,xxxController,homeController,org.springframework.context.annotation.internalConfigurationAnnotationProcessor,org.springframework.context.annotation.internalAutowiredAnnotationProcessor,org.springframework.context.annotation.internalRequiredAnnotationProcessor,org.springframework.context.annotation.internalCommonAnnotationProcessor,tilesViewResolver,tilesConfigurer,messageSource,org.springframework.web.servlet.handler.MappedInterceptor#1,localeResolver,org.springframework.web.servlet.view.ContentNegotiatingViewResolver#0,validator,resourceBundleLocator,messageInterpolator]; parent: org.springframework.beans.factory.support.DefaultListableBeanFactory@4f47af3 I have the below definition for message source in 3 classes. 
In debug mode, I can see that in class xxxController, messageSource is initialized to org.springframework.context.support.ReloadableResourceBundleMessageSource. I have annotated Message class with @Component and xxxHibernateDaoImpl with @Repository. I also included context namespace definition in servlet-context.xml. But in Message class and xxxHibernateDaoImpl class, the messageSource is still null. Why is Spring not initializing messageSource in the two other classes though in xxxController classes, it initializes correctly? @Controller public class xxxController{ @Autowired private ReloadableResourceBundleMessageSource messageSource; } @Component public class Message{ @Autowired private ReloadableResourceBundleMessageSource messageSource; } @Repository("xxxDao") public class xxxHibernateDaoImpl{ @Autowired private ReloadableResourceBundleMessageSource messageSource; } <beans:beans xmlns:context="http://www.springframework.org/schema/context" xsi:schemaLocation=" http://www.springframework.org/schema/context http://www.springframework.org/schema/context/spring-context-3.0.xsd"> <beans:bean id="messageSource" class="org.springframework.context.support.ReloadableResourceBundleMessageSource"> <beans:property name="basename" value="/resources/messages/messages" /> </beans:bean> <context:component-scan base-package="com.mycompany.myapp"/> </beans>

    Read the article

  • Concurrency and Calendar classes

    - by fbielejec
    I have a thread (class implementing runnable, called AnalyzeTree) organised around a hash map (ConcurrentMap slicesMap). The class goes through the data (called trees here) in the large text file and parses the geographical coordinates from it to the HashMap. The idea is to process one tree at a time and add or grow the values according to the key (which is just a Double value representing time). The relevant part of code looks like this: // grow map entry if key exists if (slicesMap.containsKey(sliceTime)) { double[] imputedLocation = imputeValue( location, parentLocation, sliceHeight, nodeHeight, parentHeight, rate, useTrueNoise, currentTreeNormalization, precisionArray); slicesMap.get(sliceTime).add( new Coordinates(imputedLocation[1], imputedLocation[0], 0.0)); // start new entry if no such key in the map } else { List<Coordinates> coords = new ArrayList<Coordinates>(); double[] imputedLocation = imputeValue( location, parentLocation, sliceHeight, nodeHeight, parentHeight, rate, useTrueNoise, currentTreeNormalization, precisionArray); coords.add(new Coordinates(imputedLocation[1], imputedLocation[0], 0.0)); slicesMap.putIfAbsent(sliceTime, coords); // slicesMap.put(sliceTime, coords); }// END: key check And the class is called like this (executor is ExecutorService executor = Executors.newFixedThreadPool(NTHREDS) ): mrsd = new SpreadDate(mrsdString); int readTrees = 1; while (treesImporter.hasTree()) { currentTree = (RootedTree) treesImporter.importNextTree(); executor.submit(new AnalyzeTree(currentTree, precisionString, coordinatesName, rateString, numberOfIntervals, treeRootHeight, timescaler, mrsd, slicesMap, useTrueNoise)); // new AnalyzeTree(currentTree, precisionString, // coordinatesName, rateString, numberOfIntervals, // treeRootHeight, timescaler, mrsd, slicesMap, // useTrueNoise).run(); readTrees++; }// END: while has trees Now this is running into troubles when executed in parallel (the commented part running sequentially is fine), I thought it might throw a ConcurrentModificationException, but apparently the problem is in mrsd (instance of SpreadDate object, which is simply a class for date related calculations). 
    The SpreadDate class looks like this: public class SpreadDate { private Calendar cal; private SimpleDateFormat formatter; private Date stringdate; public SpreadDate(String date) throws ParseException { // if no era specified assume current era String line[] = date.split(" "); if (line.length == 1) { StringBuilder properDateStringBuilder = new StringBuilder(); date = properDateStringBuilder.append(date).append(" AD") .toString(); } formatter = new SimpleDateFormat("yyyy-MM-dd G", Locale.US); stringdate = formatter.parse(date); cal = Calendar.getInstance(); } public long plus(int days) { cal.setTime(stringdate); cal.add(Calendar.DATE, days); return cal.getTimeInMillis(); }// END: plus public long minus(int days) { cal.setTime(stringdate); cal.add(Calendar.DATE, -days); //line 39 return cal.getTimeInMillis(); }// END: minus public long getTime() { cal.setTime(stringdate); return cal.getTimeInMillis(); }// END: getDate } And here is the stack trace from when the exception is thrown: java.lang.ArrayIndexOutOfBoundsException: 58 at sun.util.calendar.BaseCalendar.getCalendarDateFromFixedDate(BaseCalendar.java:454) at java.util.GregorianCalendar.computeFields(GregorianCalendar.java:2098) at java.util.GregorianCalendar.computeFields(GregorianCalendar.java:2013) at java.util.Calendar.setTimeInMillis(Calendar.java:1126) at java.util.GregorianCalendar.add(GregorianCalendar.java:1020) at utils.SpreadDate.minus(SpreadDate.java:39) at templates.AnalyzeTree.run(AnalyzeTree.java:88) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471) at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:334) at java.util.concurrent.FutureTask.run(FutureTask.java:166) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1110) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:603) at java.lang.Thread.run(Thread.java:636) If I move the part initializing mrsd into the AnalyzeTree class, it runs without any problems - however, it is not very memory efficient to initialize the class every time the thread runs, hence my concern. How can this be remedied?

    Read the article

  • AES encryption/decryption java bouncy castle explanation?

    - by Programmer0
    Can someone please explain what this program is doing pointing out some of the major points? I'm looking at the code and I'm completely lost. I just need explanation on the encryption/decryption phases. I think it generates an AES 192 key at one point but I'm not 100% sure. I'm not sure what the byte/ivBytes are used for either. import java.security.Key; import javax.crypto.Cipher; import javax.crypto.KeyGenerator; import javax.crypto.spec.IvParameterSpec; public class RandomKey { public static void main(String[] args) throws Exception { byte[] input = new byte[] { 0x00, 0x01, 0x02, 0x03, 0x04, 0x05, 0x06, 0x07, 0x08, 0x09, 0x0a, 0x0b, 0x0c, 0x0d, 0x0e, 0x0f, 0x00, 0x01, 0x02, 0x03, 0x04, 0x05, 0x06, 0x07 }; byte[] ivBytes = new byte[] { 0x00, 0x00, 0x00, 0x01, 0x04, 0x05, 0x06, 0x07, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x01 }; //initializing a new initialization vector IvParameterSpec ivSpec = new IvParameterSpec(ivBytes); //what does this actually do? Cipher cipher = Cipher.getInstance("AES/CTR/NoPadding", "BC"); //what does this do? KeyGenerator generator = KeyGenerator.getInstance("AES","BC"); //I assume this generates a key size of 192 bits generator.init(192); //does this generate a random key? Key encryptKey = generator.generateKey(); System.out.println("input: " +toHex(input)); //encryption phase cipher.init(Cipher.ENCRYPT_MODE, encryptKey, ivSpec); //what is this doing? byte[] cipherText = new byte[cipher.getOutputSize(input.length)]; //what is this doing? int ctLength = cipher.update(input, 0, input.length, cipherText,0); //getting the cipher text length i assume? ctLength += cipher.doFinal (cipherText, ctLength ); System.out.println ("Cipher: " +toHex(cipherText) + " bytes: " + ctLength); //decryption phase cipher.init(Cipher.DECRYPT_MODE, encryptKey, ivSpec); //storing the ciphertext in plaintext i'm assuming? byte[] plainText = new byte[cipher.getOutputSize(ctLength)]; int ptLength = cipher.update(cipherText, 0, ctLength, plainText, 0); //getting plaintextLength i think? ptLength= cipher.doFinal (plainText, ptLength); System.out.println("plain: " + toHex(plainText, ptLength)); } private static String digits = "0123456789abcdef"; public static String toHex(byte[] data, int length) { StringBuffer buf = new StringBuffer(); for (int i=0; i!= length; i++) { int v = data[i] & 0xff; buf.append(digits.charAt(v >>4)); buf.append(digits.charAt(v & 0xf)); } return buf.toString(); } public static String toHex(byte[] data) { return toHex(data, data.length); } }

    Read the article

  • Node.js, Cygwin and Socket.io walk into a bar... Node.js throws ENOBUFS and everyone dies...

    - by A Wizard Did It
    I'm hoping someone here can help me out, I'm not having much luck figuring this out myself. I'm running node.js version 0.3.1 on Cygwin. I'm using Connect and Socket.io. I seem to be having some random problems with DNS or something, I haven't quite figured it out. The end result is that I the server is running fine, but when a browser attempts to connect to it the initial HTTP Request works, Socket.io connects, and then the server dies (output below). I don't think it has anything to do with the HTTP request because the server gets a lot data posted to it, and it was receiving requests and responding up until my connection that killed it. I've googled around and the closest thing I've found is DNS being set improperly. It's a network program meant to run only on an internal network, so I've set the nameserver x.x.x.x in my /etc/resolv.conf to the internal DNS. I've also added nameserver 8.8.8.8 in addition. I'm not sure what else to check, but would be grateful of any help. In node.exe.stackdump Exception: STATUS_ACCESS_VIOLATION at eip=610C51B9 eax=00000000 ebx=00000001 ecx=00000000 edx=00000308 esi=00000000 edi=010FCCB0 ebp=010FCAEC esp=010FCAC4 program=\\?\E:\cygwin\usr\local\bin\node.exe, pid 3296, thread unknown (0xBEC) cs=0023 ds=002B es=002B fs=0053 gs=002B ss=002B Stack trace: Frame Function Args 010FCAEC 610C51B9 (00000000, 00000000, 00000000, 00000000) 010FCBFC 610C5B55 (00000000, 00000000, 00000000, 00000000) 010FCCBC 610C693A (FFFFFFFF, FFFFFFFF, 750334F3, FFFFFFFE) 010FCD0C 61027CB2 (00000002, F4B994D5, 010FCE64, 00000002) 010FCD98 76306B59 (00000002, 010FCDD4, 763069A4, 00000002) End of stack trace Node Output: node.js:50 throw e; // process.nextTick error, or 'error' event on first tick ^ Error: ENOBUFS, No buffer space available at doConnect (net.js:642:19) at net.js:803:9 at dns.js:166:30 at IOWatcher.callback (dns.js:48:15) EDIT I'm hitting an LDAP server using http.createClient immediately after a client connects to get information, and that seems to be where the problem is that is causing ENOBUFS. I've edited the source to include && errno != ENOBUFS which now prevents the server from dying, however now the LDAP request isn't working. I'm not sure what the problem is that would cause that though. As I mentioned this is an internal only application, so I set the DNS servers in /etc/resolv.conf to the DNS servers that are being applied to the host machine. Not sure if this is part of the issue? EDIT 2 Here's some output from gdb --args ./node_g --debug ../myscript.js. I'm not sure if this is related to ENOBUFS, however, as it seems to be disconnecting immediately after connection with Socket.io [New thread 672.0x100] Error: dll starting at 0x76e30000 not found. Error: dll starting at 0x76250000 not found. Error: dll starting at 0x76e30000 not found. Error: dll starting at 0x76f50000 not found. [New thread 672.0xc90] [New thread 672.0x448] debugger listening on port 5858 [New thread 672.0xbf4] 14 Jan 18:48:57 - socket.io ready - accepting connections [New thread 672.0xed4] [New thread 672.0xd68] [New thread 672.0x1244] [New thread 672.0xf14] 14 Jan 18:49:02 - Initializing client with transport "websocket" assertion "b[1] == 0" failed: file "../src/node.cc", line 933, function: ssize_t node::DecodeWrite(char*, size_t, v8::Handle<v8::Value>, node::encoding) Program received signal SIGABRT, Aborted. 
0x7724f861 in ntdll!RtlUpdateClonedSRWLock () from /cygdrive/c/Windows/system32/ntdll.dll (gdb) backtrace #0 0x7724f861 in ntdll!RtlUpdateClonedSRWLock () from /cygdrive/c/Windows/system32/ntdll.dll #1 0x7724f861 in ntdll!RtlUpdateClonedSRWLock () from /cygdrive/c/Windows/system32/ntdll.dll #2 0x75030816 in WaitForSingleObjectEx () from /cygdrive/c/Windows/syswow64/KernelBase.dll #3 0x0000035c in ?? () #4 0x00000000 in ?? () (gdb)

    Read the article

  • Arduino Ethernet Shield Not Connecting to WebServer

    - by new user
    I have a problem getting my Arduino Ethernet shield to communicate with the server; the result on the serial monitor is always the same (shown at the end of this post). My Arduino code is #include <Ethernet.h> //library for ethernet functions #include <SPI.h> #include <Dns.h> #include <Client.h> //library for client functions #include <DallasTemperature.h> //library for temperature sensors // Ethernet settings byte mac[] = {0x09,0xA2,0xDA,0x00,0x01,0x26}; //Replace with your Ethernet shield MAC byte ip[] = { 192,168,0,54}; //The Arduino device IP address byte subnet[] = { 255,255,255,0}; byte gateway[] = { 192,168,0,1}; IPAddress server(192,168,0,53); // IP address of server arduino sends data to EthernetClient client; bool connected = false; void setup(void) { Serial.begin(9600); Serial.println("Initializing Ethernet."); delay(1000); Ethernet.begin(mac, ip , gateway , subnet); } void loop(void) { if(!connected) { Serial.println("Not connected"); if (client.connect(server, 80)) { connected = true; int temp =analogRead(A1); Serial.print("Temp is "); Serial.println(temp); Serial.println(); Serial.println("Sending to Server: "); client.print("GET /formSubmit.php?t0="); Serial.print("GET /formSubmit.php?t0="); client.print(temp); Serial.print(temp); client.println(" HTTP/1.1"); Serial.println(" HTTP/1.1"); client.println("Host: http://localhost/PhpProject1/"); Serial.println("Host: http://localhost/PhpProject1/"); client.println("User-Agent: Arduino"); Serial.println("User-Agent: Arduino"); client.println("Accept: text/html"); Serial.println("Accept: text/html"); //client.println("Connection: close"); //Serial.println("Connection: close"); client.println(); Serial.println(); delay(10000); } else{ Serial.println("Cannot connect to Server"); } } else { delay(1000); while (client.connected() && client.available()) { char c = client.read(); Serial.print(c); } Serial.println(); client.stop(); connected = false; } } The server is an Apache server running on a PC; the server IP address in the code is the PC's IP address. For testing purposes I work on my home network; there's no proxy or firewall, and I also turned off the antivirus and firewall on my PC. The result in the serial monitor is always: Not connected Cannot connect to Server Any thoughts?

    Read the article

  • Setting custom UITableViewCells height

    - by Vijayeta
    I am using a custom UITableViewCell which has some labels, buttons and image views to be displayed. There is one label in the cell whose text is an NSString object, and the length of the string can vary. Because of this, I cannot set a constant height for the cell in the UITableView's heightForCellAtIndex method. The cell's height depends on the label's height, which can be determined using NSString's sizeWithFont method. I tried using it, but it looks like I'm going wrong somewhere. How can this be fixed? Here is the code used for initializing the cell. if (self = [super initWithFrame:frame reuseIdentifier:reuseIdentifier]) { self.selectionStyle = UITableViewCellSelectionStyleNone; UIImage *image = [UIImage imageNamed:@"dot.png"]; imageView = [[UIImageView alloc] initWithImage:image]; imageView.frame = CGRectMake(45.0,10.0,10,10); headingTxt = [[UILabel alloc] initWithFrame: CGRectMake(60.0,0.0,150.0,post_hdg_ht)]; [headingTxt setContentMode: UIViewContentModeCenter]; headingTxt.text = postData.user_f_name; headingTxt.font = [UIFont boldSystemFontOfSize:13]; headingTxt.textAlignment = UITextAlignmentLeft; headingTxt.textColor = [UIColor blackColor]; dateTxt = [[UILabel alloc] initWithFrame:CGRectMake(55.0,23.0,150.0,post_date_ht)]; dateTxt.text = postData.created_dtm; dateTxt.font = [UIFont italicSystemFontOfSize:11]; dateTxt.textAlignment = UITextAlignmentLeft; dateTxt.textColor = [UIColor grayColor]; NSString * text1 = postData.post_body; NSLog(@"text length = %d",[text1 length]); CGRect bounds = [UIScreen mainScreen].bounds; CGFloat tableViewWidth; CGFloat width = 0; tableViewWidth = bounds.size.width/2; width = tableViewWidth - 40; //fudge factor //CGSize textSize = {width, 20000.0f}; //width and height of text area CGSize textSize = {245.0, 20000.0f}; //width and height of text area CGSize size1 = [text1 sizeWithFont:[UIFont systemFontOfSize:11.0f] constrainedToSize:textSize lineBreakMode:UILineBreakModeWordWrap]; CGFloat ht = MAX(size1.height, 28); textView = [[UILabel alloc] initWithFrame:CGRectMake(55.0,42.0,245.0,ht)]; textView.text = postData.post_body; textView.font = [UIFont systemFontOfSize:11]; textView.textAlignment = UITextAlignmentLeft; textView.textColor = [UIColor blackColor]; textView.lineBreakMode = UILineBreakModeWordWrap; textView.numberOfLines = 3; textView.autoresizesSubviews = YES; [self.contentView addSubview:imageView]; [self.contentView addSubview:textView]; [self.contentView addSubview:webView]; [self.contentView addSubview:dateTxt]; [self.contentView addSubview:headingTxt]; [self.contentView sizeToFit]; [imageView release]; [textView release]; [webView release]; [dateTxt release]; [headingTxt release]; } textView = [[UILabel alloc] initWithFrame:CGRectMake(55.0,42.0,245.0,ht)]; This is the label whose height and width are going wrong.

    Read the article

  • Referencing variables in a structure / C++

    - by user1628622
    Below, I provided a minimal example of code I created. I managed to get this code working, but I'm not sure if the practice being employed is sound. In essence, what I am trying to do is have the 'Parameter' class reference select elements in the 'States' class, so variables in States can be changed via Parameters. Questions I have: is the approach taken OK? If not, is there a better way to achieve what I am aiming for? Example code: struct VAR_TYPE{ public: bool is_fixed; // If is_fixed = true, then variable is a parameter double value; // Numerical value std::string name; // Description of variable (to identify it by name) }; struct NODE{ public: VAR_TYPE X, Y, Z; /* VAR_TYPE is a structure of primitive types */ }; class States{ private: std::vector <NODE_ptr> node; // shared ptr to struct NODE std::vector <PROP_DICTIONARY_ptr> property; // CAN NOT be part of Parameter std::vector <ELEMENT_ptr> element; // CAN NOT be part of Parameter public: /* ect */ void set_X_reference ( Parameter &T , int i ) { T.push_var( &node[i]->X ); } void set_Y_reference ( Parameter &T , int i ) { T.push_var( &node[i]->Y ); } void set_Z_reference ( Parameter &T , int i ) { T.push_var( &node[i]->Z ); } bool get_node_bool_X( int i ) { return node[i]->X.is_fixed; } // repeat for Y and Z }; class Parameter{ private: std::vector <VAR_TYPE*> var; public: /* ect */ }; int main(){ States S; Parameter P; /* Here I initialize and set S, and do other stuff */ // Now I assign components in States to Parameters for(int n=0 ; n<S.size_of_nodes() ; n++ ){ if ( S.get_node_bool_X(n)==true ){ S.set_X_reference ( P , n ); }; // repeat if statement for Y and Z }; /* Now P points selected to data in S, and I can * modify the contents of S through P */ return 0; }; Update The reason this issue cropped up is I am working with Fortran legacy code. To sum up this Fotran code - it's a numerical simulation of a flight vehicle. This code has a fairly rigid procedural framework one must work within, which comes with a pre-defined list of allowable Fortran types. The Fortran glue code can create an instance of a C++ object (in actuality, a reference from the perspective of Fortran), but is not aware what is contained in it (other means are used to extract C++ data into Fortran). The problem that I encountered is when a C++ module is dynamically linked to the Fortran glue code, C++ objects have to be initialized each instance the C++ code is called. This happens by virtue of how the Fortran template is defined. To avoid this cycle of re-initializing objects, I plan to use 'State' as a container class. The Fortran code allows a 'State' object, which has an arbitrary definition; but I plan to use it to harness all relevant information about the model. The idea is to use the Parameters class (which is exposed and updated by the Fortran code) to update variables in States.

    Read the article

  • WPF unity Activation error occured while trying to get instance of type

    - by Traci
    I am getting the following error when trying to Initialise the Module using Unity and Prism. The DLL is found by return new DirectoryModuleCatalog() { ModulePath = @".\Modules" }; The dll is found and the Name is Found #region Constructors public AdminModule( IUnityContainer container, IScreenFactoryRegistry screenFactoryRegistry, IEventAggregator eventAggregator, IBusyService busyService ) : base(container, screenFactoryRegistry) { this.EventAggregator = eventAggregator; this.BusyService = busyService; } #endregion #region Properties protected IEventAggregator EventAggregator { get; set; } protected IBusyService BusyService { get; set; } #endregion public override void Initialize() { base.Initialize(); } #region Register Screen Factories protected override void RegisterScreenFactories() { this.ScreenFactoryRegistry.Register(ScreenKeyType.ApplicationAdmin, typeof(AdminScreenFactory)); } #endregion #region Register Views and Various Services protected override void RegisterViewsAndServices() { //View Models this.Container.RegisterType<IAdminViewModel, AdminViewModel>(); } #endregion the code that produces the error is: namespace Microsoft.Practices.Composite.Modularity protected virtual IModule CreateModule(string typeName) { Type moduleType = Type.GetType(typeName); if (moduleType == null) { throw new ModuleInitializeException(string.Format(CultureInfo.CurrentCulture, Properties.Resources.FailedToGetType, typeName)); } return (IModule)this.serviceLocator.GetInstance(moduleType); <-- Error Here } Can Anyone Help Me Error Log Below: General Information Additional Info: ExceptionManager.MachineName: xxxxx ExceptionManager.TimeStamp: 22/02/2010 10:16:55 AM ExceptionManager.FullName: Microsoft.ApplicationBlocks.ExceptionManagement, Version=1.0.3591.32238, Culture=neutral, PublicKeyToken=null ExceptionManager.AppDomainName: Infinity.vshost.exe ExceptionManager.ThreadIdentity: ExceptionManager.WindowsIdentity: xxxxx 1) Exception Information Exception Type: Microsoft.Practices.Composite.Modularity.ModuleInitializeException ModuleName: AdminModule Message: An exception occurred while initializing module 'AdminModule'. - The exception message was: Activation error occured while trying to get instance of type AdminModule, key "" Check the InnerException property of the exception for more information. If the exception occurred while creating an object in a DI container, you can exception.GetRootException() to help locate the root cause of the problem. 
Data: System.Collections.ListDictionaryInternal TargetSite: Void HandleModuleInitializationError(Microsoft.Practices.Composite.Modularity.ModuleInfo, System.String, System.Exception) HelpLink: NULL Source: Microsoft.Practices.Composite StackTrace Information at Microsoft.Practices.Composite.Modularity.ModuleInitializer.HandleModuleInitializationError(ModuleInfo moduleInfo, String assemblyName, Exception exception) at Microsoft.Practices.Composite.Modularity.ModuleInitializer.Initialize(ModuleInfo moduleInfo) at Microsoft.Practices.Composite.Modularity.ModuleManager.InitializeModule(ModuleInfo moduleInfo) at Microsoft.Practices.Composite.Modularity.ModuleManager.LoadModulesThatAreReadyForLoad() at Microsoft.Practices.Composite.Modularity.ModuleManager.OnModuleTypeLoaded(ModuleInfo typeLoadedModuleInfo, Exception error) at Microsoft.Practices.Composite.Modularity.FileModuleTypeLoader.BeginLoadModuleType(ModuleInfo moduleInfo, ModuleTypeLoadedCallback callback) at Microsoft.Practices.Composite.Modularity.ModuleManager.BeginRetrievingModule(ModuleInfo moduleInfo) at Microsoft.Practices.Composite.Modularity.ModuleManager.LoadModuleTypes(IEnumerable`1 moduleInfos) at Microsoft.Practices.Composite.Modularity.ModuleManager.LoadModulesWhenAvailable() at Microsoft.Practices.Composite.Modularity.ModuleManager.Run() at Microsoft.Practices.Composite.UnityExtensions.UnityBootstrapper.InitializeModules() at Infinity.Bootstrapper.InitializeModules() in D:\Projects\dotNet\Infinity\source\Inifinty\Infinity\Application Modules\BootStrapper.cs:line 75 at Microsoft.Practices.Composite.UnityExtensions.UnityBootstrapper.Run(Boolean runWithDefaultConfiguration) at Microsoft.Practices.Composite.UnityExtensions.UnityBootstrapper.Run() at Infinity.App.Application_Startup(Object sender, StartupEventArgs e) in D:\Projects\dotNet\Infinity\source\Inifinty\Infinity\App.xaml.cs:line 37 at System.Windows.Application.OnStartup(StartupEventArgs e) at System.Windows.Application.<.ctorb__0(Object unused) at System.Windows.Threading.ExceptionWrapper.InternalRealCall(Delegate callback, Object args, Boolean isSingleParameter) at System.Windows.Threading.ExceptionWrapper.TryCatchWhen(Object source, Delegate callback, Object args, Boolean isSingleParameter, Delegate catchHandler) 2) Exception Information Exception Type: Microsoft.Practices.ServiceLocation.ActivationException Message: Activation error occured while trying to get instance of type AdminModule, key "" Data: System.Collections.ListDictionaryInternal TargetSite: System.Object GetInstance(System.Type, System.String) HelpLink: NULL Source: Microsoft.Practices.ServiceLocation StackTrace Information at Microsoft.Practices.ServiceLocation.ServiceLocatorImplBase.GetInstance(Type serviceType, String key) at Microsoft.Practices.ServiceLocation.ServiceLocatorImplBase.GetInstance(Type serviceType) at Microsoft.Practices.Composite.Modularity.ModuleInitializer.CreateModule(String typeName) at Microsoft.Practices.Composite.Modularity.ModuleInitializer.Initialize(ModuleInfo moduleInfo) 3) Exception Information Exception Type: Microsoft.Practices.Unity.ResolutionFailedException TypeRequested: AdminModule NameRequested: NULL Message: Resolution of the dependency failed, type = "Infinity.Modules.Admin.AdminModule", name = "". 
Exception message is: The current build operation (build key Build Key[Infinity.Modules.Admin.AdminModule, null]) failed: The parameter screenFactoryRegistry could not be resolved when attempting to call constructor Infinity.Modules.Admin.AdminModule(Microsoft.Practices.Unity.IUnityContainer container, PhoenixIT.IScreenFactoryRegistry screenFactoryRegistry, Microsoft.Practices.Composite.Events.IEventAggregator eventAggregator, PhoenixIT.IBusyService busyService). (Strategy type BuildPlanStrategy, index 3) Data: System.Collections.ListDictionaryInternal TargetSite: System.Object DoBuildUp(System.Type, System.Object, System.String) HelpLink: NULL Source: Microsoft.Practices.Unity StackTrace Information at Microsoft.Practices.Unity.UnityContainer.DoBuildUp(Type t, Object existing, String name) at Microsoft.Practices.Unity.UnityContainer.DoBuildUp(Type t, String name) at Microsoft.Practices.Unity.UnityContainer.Resolve(Type t, String name) at Microsoft.Practices.Composite.UnityExtensions.UnityServiceLocatorAdapter.DoGetInstance(Type serviceType, String key) at Microsoft.Practices.ServiceLocation.ServiceLocatorImplBase.GetInstance(Type serviceType, String key) 4) Exception Information Exception Type: Microsoft.Practices.ObjectBuilder2.BuildFailedException ExecutingStrategyTypeName: BuildPlanStrategy ExecutingStrategyIndex: 3 BuildKey: Build Key[Infinity.Modules.Admin.AdminModule, null] Message: The current build operation (build key Build Key[Infinity.Modules.Admin.AdminModule, null]) failed: The parameter screenFactoryRegistry could not be resolved when attempting to call constructor Infinity.Modules.Admin.AdminModule(Microsoft.Practices.Unity.IUnityContainer container, PhoenixIT.IScreenFactoryRegistry screenFactoryRegistry, Microsoft.Practices.Composite.Events.IEventAggregator eventAggregator, PhoenixIT.IBusyService busyService). (Strategy type BuildPlanStrategy, index 3) Data: System.Collections.ListDictionaryInternal TargetSite: System.Object ExecuteBuildUp(Microsoft.Practices.ObjectBuilder2.IBuilderContext) HelpLink: NULL Source: Microsoft.Practices.ObjectBuilder2 StackTrace Information at Microsoft.Practices.ObjectBuilder2.StrategyChain.ExecuteBuildUp(IBuilderContext context) at Microsoft.Practices.ObjectBuilder2.Builder.BuildUp(IReadWriteLocator locator, ILifetimeContainer lifetime, IPolicyList policies, IStrategyChain strategies, Object buildKey, Object existing) at Microsoft.Practices.Unity.UnityContainer.DoBuildUp(Type t, Object existing, String name) 5) Exception Information Exception Type: System.InvalidOperationException Message: The parameter screenFactoryRegistry could not be resolved when attempting to call constructor Infinity.Modules.Admin.AdminModule(Microsoft.Practices.Unity.IUnityContainer container, PhoenixIT.IScreenFactoryRegistry screenFactoryRegistry, Microsoft.Practices.Composite.Events.IEventAggregator eventAggregator, PhoenixIT.IBusyService busyService). 
Data: System.Collections.ListDictionaryInternal TargetSite: Void ThrowForResolutionFailed(System.Exception, System.String, System.String, Microsoft.Practices.ObjectBuilder2.IBuilderContext) HelpLink: NULL Source: Microsoft.Practices.ObjectBuilder2 StackTrace Information at Microsoft.Practices.ObjectBuilder2.DynamicMethodConstructorStrategy.ThrowForResolutionFailed(Exception inner, String parameterName, String constructorSignature, IBuilderContext context) at BuildUp_Infinity.Modules.Admin.AdminModule(IBuilderContext ) at Microsoft.Practices.ObjectBuilder2.DynamicMethodBuildPlan.BuildUp(IBuilderContext context) at Microsoft.Practices.ObjectBuilder2.BuildPlanStrategy.PreBuildUp(IBuilderContext context) at Microsoft.Practices.ObjectBuilder2.StrategyChain.ExecuteBuildUp(IBuilderContext context) 6) Exception Information Exception Type: Microsoft.Practices.ObjectBuilder2.BuildFailedException ExecutingStrategyTypeName: BuildPlanStrategy ExecutingStrategyIndex: 3 BuildKey: Build Key[PhoenixIT.IScreenFactoryRegistry, null] Message: The current build operation (build key Build Key[PhoenixIT.IScreenFactoryRegistry, null]) failed: The current type, PhoenixIT.IScreenFactoryRegistry, is an interface and cannot be constructed. Are you missing a type mapping? (Strategy type BuildPlanStrategy, index 3) Data: System.Collections.ListDictionaryInternal TargetSite: System.Object ExecuteBuildUp(Microsoft.Practices.ObjectBuilder2.IBuilderContext) HelpLink: NULL Source: Microsoft.Practices.ObjectBuilder2 StackTrace Information at Microsoft.Practices.ObjectBuilder2.StrategyChain.ExecuteBuildUp(IBuilderContext context) at Microsoft.Practices.Unity.ObjectBuilder.NamedTypeDependencyResolverPolicy.Resolve(IBuilderContext context) at BuildUp_Infinity.Modules.Admin.AdminModule(IBuilderContext ) 7) Exception Information Exception Type: System.InvalidOperationException Message: The current type, PhoenixIT.IScreenFactoryRegistry, is an interface and cannot be constructed. Are you missing a type mapping? Data: System.Collections.ListDictionaryInternal TargetSite: Void ThrowForAttemptingToConstructInterface(Microsoft.Practices.ObjectBuilder2.IBuilderContext) HelpLink: NULL Source: Microsoft.Practices.ObjectBuilder2 StackTrace Information at Microsoft.Practices.ObjectBuilder2.DynamicMethodConstructorStrategy.ThrowForAttemptingToConstructInterface(IBuilderContext context) at BuildUp_PhoenixIT.IScreenFactoryRegistry(IBuilderContext ) at Microsoft.Practices.ObjectBuilder2.DynamicMethodBuildPlan.BuildUp(IBuilderContext context) at Microsoft.Practices.ObjectBuilder2.BuildPlanStrategy.PreBuildUp(IBuilderContext context) at Microsoft.Practices.ObjectBuilder2.StrategyChain.ExecuteBuildUp(IBuilderContext context) For more information, see Help and Support Center at http://go.microsoft.com/fwlink/events.asp.
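    The innermost exceptions (6 and 7) above identify the root cause: Unity has no type mapping for PhoenixIT.IScreenFactoryRegistry, so it cannot resolve the screenFactoryRegistry constructor parameter of AdminModule. A hedged sketch of one fix - register concrete implementations for the module's interface dependencies in the bootstrapper before the modules are initialized. ScreenFactoryRegistry and BusyService are placeholder names for whatever concrete classes implement those interfaces in this solution, and CreateShell is omitted because it stays as in the existing Infinity.Bootstrapper.

    using Microsoft.Practices.Unity;
    using Microsoft.Practices.Composite.UnityExtensions;

    public class Bootstrapper : UnityBootstrapper
    {
        protected override void ConfigureContainer()
        {
            base.ConfigureContainer();

            // Without these mappings Unity reports "The current type,
            // PhoenixIT.IScreenFactoryRegistry, is an interface and cannot be
            // constructed. Are you missing a type mapping?" when it builds AdminModule.
            Container.RegisterType<IScreenFactoryRegistry, ScreenFactoryRegistry>(
                new ContainerControlledLifetimeManager());
            Container.RegisterType<IBusyService, BusyService>(
                new ContainerControlledLifetimeManager());
        }

        // CreateShell, ConfigureModuleCatalog, etc. unchanged from the existing bootstrapper.
    }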

    Read the article

  • jQuery and Windows Azure

    - by Stephen Walther
    The goal of this blog entry is to describe how you can host a simple Ajax application created with jQuery in the Windows Azure cloud. In this blog entry, I make no assumptions. I assume that you have never used Windows Azure and I am going to walk through the steps required to host the application in the cloud in agonizing detail. Our application will consist of a single HTML page and a single service. The HTML page will contain jQuery code that invokes the service to retrieve and display set of records. There are five steps that you must complete to host the jQuery application: Sign up for Windows Azure Create a Hosted Service Install the Windows Azure Tools for Visual Studio Create a Windows Azure Cloud Service Deploy the Cloud Service Sign Up for Windows Azure Go to http://www.microsoft.com/windowsazure/ and click the Sign up Now button. Select one of the offers. I selected the Introductory Special offer because it is free and I just wanted to experiment with Windows Azure for the purposes of this blog entry.     To sign up, you will need a Windows Live ID and you will need to enter a credit card number. After you finish the sign up process, you will receive an email that explains how to activate your account. Accessing the Developer Portal After you create your account and your account is activated, you can access the Windows Azure developer portal by visiting the following URL: http://windows.azure.com/ When you first visit the developer portal, you will see the one project that you created when you set up your Windows Azure account (In a fit of creativity, I named my project StephenWalther).     Creating a New Windows Azure Hosted Service Before you can host an application in the cloud, you must first add a hosted service to your project. Click your project on the summary page and click the New Service link. You are presented with the option of creating either a new Storage Account or a new Hosted Services.     Because we have code that we want to run in the cloud – the WCF Service -- we want to select the Hosted Services option. After you select this option, you must provide a name and description for your service. This information is used on the developer portal so you can distinguish your services.     When you create a new hosted service, you must enter a unique name for your service (I selected jQueryApp) and you must select a region for this service (I selected Anywhere US). Click the Create button to create the new hosted service.   Install the Windows Azure Tools for Visual Studio We’ll use Visual Studio to create our jQuery project. Before you can use Visual Studio with Windows Azure, you must first install the Windows Azure Tools for Visual Studio. Go to http://www.microsoft.com/windowsazure/ and click the Get Tools and SDK button. The Windows Azure Tools for Visual Studio works with both Visual Studio 2008 and Visual Studio 2010.   Installation of the Windows Azure Tools for Visual Studio is painless. You just need to check some agreement checkboxes and click the Next button a few times and installation will begin:   Creating a Windows Azure Application After you install the Windows Azure Tools for Visual Studio, you can choose to create a Windows Azure Cloud Service by selecting the menu option File, New Project and selecting the Windows Azure Cloud Service project template. I named my new Cloud Service with the name jQueryApp.     Next, you need to select the type of Cloud Service project that you want to create from the New Cloud Service Project dialog.   
I selected the C# ASP.NET Web Role option. Alternatively, I could have picked the ASP.NET MVC 2 Web Role option if I wanted to use jQuery with ASP.NET MVC or even the CGI Web Role option if I wanted to use jQuery with PHP. After you complete these steps, you end up with two projects in your Visual Studio solution. The project named WebRole1 represents your ASP.NET application and we will use this project to create our jQuery application. Creating the jQuery Application in the Cloud We are now ready to create the jQuery application. We’ll create a super simple application that displays a list of records retrieved from a WCF service (hosted in the cloud). Create a new page in the WebRole1 project named Default.htm and add the following code: <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd"> <html xmlns="http://www.w3.org/1999/xhtml"> <head> <title>Products</title> <style type="text/css"> #productContainer div { border:solid 1px black; padding:5px; margin:5px; } </style> </head> <body> <h1>Product Catalog</h1> <div id="productContainer"></div> <script id="productTemplate" type="text/html"> <div> Name: {{= name }} <br /> Price: {{= price }} </div> </script> <script src="Scripts/jquery-1.4.2.js" type="text/javascript"></script> <script src="Scripts/jquery.tmpl.js" type="text/javascript"></script> <script type="text/javascript"> var products = [ {name:"Milk", price:4.55}, {name:"Yogurt", price:2.99}, {name:"Steak", price:23.44} ]; $("#productTemplate").render(products).appendTo("#productContainer"); </script> </body> </html> The jQuery code in this page simply displays a list of products by using a template. I am using a jQuery template to format each product. You can learn more about using jQuery templates by reading the following blog entry by Scott Guthrie: http://weblogs.asp.net/scottgu/archive/2010/05/07/jquery-templates-and-data-linking-and-microsoft-contributing-to-jquery.aspx You can test whether the Default.htm page is working correctly by running your application (hit the F5 key). The first time that you run your application, a database is set up on your local machine to simulate cloud storage. You will see the following dialog: If the Default.htm page works as expected, you should see the list of three products: Adding an Ajax-Enabled WCF Service In the previous section, we created a simple jQuery application that displays an array by using a template. The application is a little too simple because the data is static. In this section, we’ll modify the page so that the data is retrieved from a WCF service instead of an array. First, we need to add a new Ajax-enabled WCF Service to the WebRole1 project. Select the menu option Project, Add New Item and select the Ajax-enabled WCF Service project item. Name the new service ProductService.svc. Modify the service so that it returns a static collection of products. 
The final code for the ProductService.svc should look like this: using System.Collections.Generic; using System.ServiceModel; using System.ServiceModel.Activation; namespace WebRole1 { public class Product { public string name { get; set; } public decimal price { get; set; } } [ServiceContract(Namespace = "")] [AspNetCompatibilityRequirements(RequirementsMode = AspNetCompatibilityRequirementsMode.Allowed)] public class ProductService { [OperationContract] public IList<Product> SelectProducts() { var products = new List<Product>(); products.Add(new Product {name="Milk", price=4.55m} ); products.Add(new Product { name = "Yogurt", price = 2.99m }); products.Add(new Product { name = "Steak", price = 23.44m }); return products; } } }   In real life, you would want to retrieve the list of products from storage instead of a static array. We are being lazy here. Next you need to modify the Default.htm page to use the ProductService.svc. The jQuery script in the following updated Default.htm page makes an Ajax call to the WCF service. The data retrieved from the ProductService.svc is displayed in the client template. <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd"> <html xmlns="http://www.w3.org/1999/xhtml"> <head> <title>Products</title> <style type="text/css"> #productContainer div { border:solid 1px black; padding:5px; margin:5px; } </style> </head> <body> <h1>Product Catalog</h1> <div id="productContainer"></div> <script id="productTemplate" type="text/html"> <div> Name: {{= name }} <br /> Price: {{= price }} </div> </script> <script src="Scripts/jquery-1.4.2.js" type="text/javascript"></script> <script src="Scripts/jquery.tmpl.js" type="text/javascript"></script> <script type="text/javascript"> $.post("ProductService.svc/SelectProducts", function (results) { var products = results["d"]; $("#productTemplate").render(products).appendTo("#productContainer"); }); </script> </body> </html>   Deploying the jQuery Application to the Cloud Now that we have created our jQuery application, we are ready to deploy our application to the cloud so that the whole world can use it. Right-click your jQueryApp project in the Solution Explorer window and select the Publish menu option. When you select publish, your application and your application configuration information is packaged up into two files named jQueryApp.cspkg and ServiceConfiguration.cscfg. Visual Studio opens the directory that contains the two files. In order to deploy these files to the Windows Azure cloud, you must upload these files yourself. Return to the Windows Azure Developers Portal at the following address: http://windows.azure.com/ Select your project and select the jQueryApp service. You will see a mysterious cube. Click the Deploy button to upload your application.   Next, you need to browse to the location on your hard drive where the jQueryApp project was published and select both the packaged application and the packaged application configuration file. Supply the deployment with a name and click the Deploy button.     While your application is in the process of being deployed, you can view a progress bar.     Running the jQuery Application in the Cloud Finally, you can run your jQuery application in the cloud by clicking the Run button.   It might take several minutes for your application to initialize (go grab a coffee). 
After WebRole1 finishes initializing, you can navigate to the following URL to view your live jQuery application in the cloud: http://jqueryapp.cloudapp.net/default.htm The page is hosted on the Windows Azure cloud and the WCF service executes every time that you request the page to retrieve the list of products. Summary Because we started from scratch, we needed to complete several steps to create and deploy our jQuery application to the Windows Azure cloud. We needed to create a Windows Azure account, create a hosted service, install the Windows Azure Tools for Visual Studio, create the jQuery application, and deploy it to the cloud. Now that we have finished this process once, modifying our existing cloud application or creating a new cloud application is easy. jQuery and Windows Azure work nicely together. We can take advantage of jQuery to build applications that run in the browser and we can take advantage of Windows Azure to host the backend services required by our jQuery application. The big benefit of Windows Azure is that it enables us to scale. If, all of the sudden, our jQuery application explodes in popularity, Windows Azure enables us to easily scale up to meet the demand. We can handle anything that the Internet might throw at us.

    Read the article

  • Coding With Windows Azure IaaS

    - by Hisham El-bereky
    This post focuses on some advanced programming topics concerned with IaaS (Infrastructure as a Service), which Windows Azure provides as virtual machines (with related resources such as virtual disks and virtual networks). Windows Azure started as a PaaS cloud platform, but because some business cases need full control over their virtual machines, Windows Azure moved toward providing IaaS as well. Sometimes you will need to manage your cloud IaaS through code, for example to: build a hyper-cloud system by providing a bursting connector to Windows Azure virtual machines; provide a multi-tenant system which consumes Windows Azure virtual machines; or automate a process in your on-premises or cloud service which needs to utilize some virtual resources. We are going to implement the following basic operations using C# code: List images Create virtual machine List virtual machines Restart virtual machine Delete virtual machine Before implementing the above operations, we need to prepare the client side and the Windows Azure subscription to communicate correctly by providing a management certificate (an X.509 v3 certificate) which permits client access to resources in your Windows Azure subscription; requests made using the Windows Azure Service Management REST API require authentication against a certificate that you provide to Windows Azure. More info about setting up the management certificate is located here. To install the .cer on another client machine you will need the .pfx file; if it does not exist, export the .cer as a .pfx. Note: you will need to install .NET 4.5 on your machine to try the code. So let's start. This post builds on the post by Michael Washam, "Advanced Windows Azure IaaS – Demo Code"; here I clarify some points and add operations which do not exist in Michael's demo. The basic C# class used here as the client to the Azure REST API for the IaaS service is HttpClient (which provides a base class for sending HTTP requests and receiving HTTP responses from a resource identified by a URI); this object must be initialized with the required data such as the certificate, headers, and content if required.
Also I'd like to refer here that the code is based on using Asynchronous programming with calls to azure which enhance the performance and gives us the ability to work with complex calls which depends on more than one sub-call to achieve some operation The following code explain how to get certificate and initializing HttpClient object with required data like headers and content HttpClient GetHttpClient() { X509Store certificateStore = null; X509Certificate2 certificate = null; try { certificateStore = new X509Store(StoreName.My, StoreLocation.CurrentUser); certificateStore.Open(OpenFlags.ReadOnly); string thumbprint = ConfigurationManager.AppSettings["CertThumbprint"]; var certificates = certificateStore.Certificates.Find(X509FindType.FindByThumbprint, thumbprint, false); if (certificates.Count > 0) { certificate = certificates[0]; } } finally { if (certificateStore != null) certificateStore.Close(); }   WebRequestHandler handler = new WebRequestHandler(); if (certificate!= null) { handler.ClientCertificates.Add(certificate); HttpClient httpClient = new HttpClient(handler); //And to set required headers lik x-ms-version httpClient.DefaultRequestHeaders.Add("x-ms-version", "2012-03-01"); httpClient.DefaultRequestHeaders.Accept.Add(new MediaTypeWithQualityHeaderValue("application/xml")); return httpClient; } return null; }  Let us keep the object httpClient as reference object used to call windows azure REST API IaaS service. For each request operation we need to define: Request URI HTTP Method Headers Content body (1) List images The List OS Images operation retrieves a list of the OS images from the image repository Request URI https://management.core.windows.net/<subscription-id>/services/images] Replace <subscription-id> with your windows Id HTTP Method GET (HTTP 1.1) Headers x-ms-version: 2012-03-01 Body None.  C# Code List<String> imageList = new List<String>(); //replace _subscriptionid with your WA subscription String uri = String.Format("https://management.core.windows.net/{0}/services/images", _subscriptionid);  HttpClient http = GetHttpClient(); Stream responseStream = await http.GetStreamAsync(uri);  if (responseStream != null) {      XDocument xml = XDocument.Load(responseStream);      var images = xml.Root.Descendants(ns + "OSImage").Where(i => i.Element(ns + "OS").Value == "Windows");      foreach (var image in images)      {      string img = image.Element(ns + "Name").Value;      imageList.Add(img);      } } More information about the REST call (Request/Response) located here on this link http://msdn.microsoft.com/en-us/library/windowsazure/jj157191.aspx (2) Create Virtual Machine Creating virtual machine required service and deployment to be created first, so creating VM should be done through three steps incase hosted service and deployment is not created yet Create hosted service, a container for service deployments in Windows Azure. A subscription may have zero or more hosted services Create deployment, a service that is running on Windows Azure. A deployment may be running in either the staging or production deployment environment. It may be managed either by referencing its deployment ID, or by referencing the deployment environment in which it's running. Create virtual machine, the previous two steps info required here in this step I suggest here to use the same name for service, deployment and service to make it easy to manage virtual machines Note: A name for the hosted service that is unique within Windows Azure. 
This name is the DNS prefix name and can be used to access the hosted service. For example: http://ServiceName.cloudapp.net// 2.1 Create service Request URI https://management.core.windows.net/<subscription-id>/services/hostedservices HTTP Method POST (HTTP 1.1) Header x-ms-version: 2012-03-01 Content-Type: application/xml Body More details about request body (and other information) are located here http://msdn.microsoft.com/en-us/library/windowsazure/gg441304.aspx C# code The following method show how to create hosted service async public Task<String> NewAzureCloudService(String ServiceName, String Location, String AffinityGroup, String subscriptionid) { String requestID = String.Empty;   String uri = String.Format("https://management.core.windows.net/{0}/services/hostedservices", subscriptionid); HttpClient http = GetHttpClient();   System.Text.ASCIIEncoding ae = new System.Text.ASCIIEncoding(); byte[] svcNameBytes = ae.GetBytes(ServiceName);   String locationEl = String.Empty; String locationVal = String.Empty;   if (String.IsNullOrEmpty(Location) == false) { locationEl = "Location"; locationVal = Location; } else { locationEl = "AffinityGroup"; locationVal = AffinityGroup; }   XElement srcTree = new XElement("CreateHostedService", new XAttribute(XNamespace.Xmlns + "i", ns1), new XElement("ServiceName", ServiceName), new XElement("Label", Convert.ToBase64String(svcNameBytes)), new XElement(locationEl, locationVal) ); ApplyNamespace(srcTree, ns);   XDocument CSXML = new XDocument(srcTree); HttpContent content = new StringContent(CSXML.ToString()); content.Headers.ContentType = new System.Net.Http.Headers.MediaTypeHeaderValue("application/xml");   HttpResponseMessage responseMsg = await http.PostAsync(uri, content); if (responseMsg != null) { requestID = responseMsg.Headers.GetValues("x-ms-request-id").FirstOrDefault(); } return requestID; } 2.2 Create Deployment Request URI https://management.core.windows.net/<subscription-id>/services/hostedservices/<service-name>/deploymentslots/<deployment-slot-name> <deployment-slot-name> with staging or production, depending on where you wish to deploy your service package <service-name> provided as input from the previous step HTTP Method POST (HTTP 1.1) Header x-ms-version: 2012-03-01 Content-Type: application/xml Body More details about request body (and other information) are located here http://msdn.microsoft.com/en-us/library/windowsazure/ee460813.aspx C# code The following method show how to create hosted service deployment async public Task<String> NewAzureVMDeployment(String ServiceName, String VMName, String VNETName, XDocument VMXML, XDocument DNSXML) { String requestID = String.Empty;     String uri = String.Format("https://management.core.windows.net/{0}/services/hostedservices/{1}/deployments", _subscriptionid, ServiceName); HttpClient http = GetHttpClient(); XElement srcTree = new XElement("Deployment", new XAttribute(XNamespace.Xmlns + "i", ns1), new XElement("Name", ServiceName), new XElement("DeploymentSlot", "Production"), new XElement("Label", ServiceName), new XElement("RoleList", null) );   if (String.IsNullOrEmpty(VNETName) == false) { srcTree.Add(new XElement("VirtualNetworkName", VNETName)); }   if(DNSXML != null) { srcTree.Add(new XElement("DNS", new XElement("DNSServers", DNSXML))); }   XDocument deploymentXML = new XDocument(srcTree); ApplyNamespace(srcTree, ns);   deploymentXML.Descendants(ns + "RoleList").FirstOrDefault().Add(VMXML.Root);     String fixedXML = deploymentXML.ToString().Replace(" xmlns=\"\"", ""); 
HttpContent content = new StringContent(fixedXML); content.Headers.ContentType = new System.Net.Http.Headers.MediaTypeHeaderValue("application/xml");   HttpResponseMessage responseMsg = await http.PostAsync(uri, content); if (responseMsg != null) { requestID = responseMsg.Headers.GetValues("x-ms-request-id").FirstOrDefault(); }   return requestID; } 2.3 Create Virtual Machine Request URI https://management.core.windows.net/<subscription-id>/services/hostedservices/<cloudservice-name>/deployments/<deployment-name>/roles <cloudservice-name> and <deployment-name> are provided as input from the previous steps Http Method POST (HTTP 1.1) Header x-ms-version: 2012-03-01 Content-Type: application/xml Body More details about request body (and other information) located here http://msdn.microsoft.com/en-us/library/windowsazure/jj157186.aspx C# code async public Task<String> NewAzureVM(String ServiceName, String VMName, XDocument VMXML) { String requestID = String.Empty;   String deployment = await GetAzureDeploymentName(ServiceName);   String uri = String.Format("https://management.core.windows.net/{0}/services/hostedservices/{1}/deployments/{2}/roles", _subscriptionid, ServiceName, deployment);   HttpClient http = GetHttpClient(); HttpContent content = new StringContent(VMXML.ToString()); content.Headers.ContentType = new System.Net.Http.Headers.MediaTypeHeaderValue("application/xml"); HttpResponseMessage responseMsg = await http.PostAsync(uri, content); if (responseMsg != null) { requestID = responseMsg.Headers.GetValues("x-ms-request-id").FirstOrDefault(); } return requestID; } (3) List Virtual Machines To list virtual machine hosted on windows azure subscription we have to loop over all hosted services to get its hosted virtual machines To do that we need to execute the following operations: listing hosted services listing hosted service Virtual machine 3.1 Listing Hosted Services Request URI https://management.core.windows.net/<subscription-id>/services/hostedservices HTTP Method GET (HTTP 1.1) Headers x-ms-version: 2012-03-01 Body None. More info about this HTTP request located here on this link http://msdn.microsoft.com/en-us/library/windowsazure/ee460781.aspx C# Code async private Task<List<XDocument>> GetAzureServices(String subscriptionid) { String uri = String.Format("https://management.core.windows.net/{0}/services/hostedservices ", subscriptionid); List<XDocument> services = new List<XDocument>();   HttpClient http = GetHttpClient();   Stream responseStream = await http.GetStreamAsync(uri);   if (responseStream != null) { XDocument xml = XDocument.Load(responseStream); var svcs = xml.Root.Descendants(ns + "HostedService"); foreach (XElement r in svcs) { XDocument vm = new XDocument(r); services.Add(vm); } }   return services; }  3.2 Listing Hosted Service Virtual Machines Request URI https://management.core.windows.net/<subscription-id>/services/hostedservices/<service-name>/deployments/<deployment-name>/roles/<role-name> HTTP Method GET (HTTP 1.1) Headers x-ms-version: 2012-03-01 Body None. 
More info about this HTTP request here http://msdn.microsoft.com/en-us/library/windowsazure/jj157193.aspx C# Code async public Task<XDocument> GetAzureVM(String ServiceName, String VMName, String subscriptionid) { String deployment = await GetAzureDeploymentName(ServiceName); XDocument vmXML = new XDocument();   String uri = String.Format("https://management.core.windows.net/{0}/services/hostedservices/{1}/deployments/{2}/roles/{3}", subscriptionid, ServiceName, deployment, VMName);   HttpClient http = GetHttpClient(); Stream responseStream = await http.GetStreamAsync(uri); if (responseStream != null) { vmXML = XDocument.Load(responseStream); }   return vmXML; }  So the final method which can be used to list all virtual machines is: async public Task<XDocument> GetAzureVMs() { List<XDocument> services = await GetAzureServices(); XDocument vms = new XDocument(); vms.Add(new XElement("VirtualMachines")); ApplyNamespace(vms.Root, ns); foreach (var svc in services) { string ServiceName = svc.Root.Element(ns + "ServiceName").Value;   String uri = String.Format("https://management.core.windows.net/{0}/services/hostedservices/{1}/deploymentslots/{2}", _subscriptionid, ServiceName, "Production");   try { HttpClient http = GetHttpClient(); Stream responseStream = await http.GetStreamAsync(uri);   if (responseStream != null) { XDocument xml = XDocument.Load(responseStream); var roles = xml.Root.Descendants(ns + "RoleInstance"); foreach (XElement r in roles) { XElement svcnameel = new XElement("ServiceName", ServiceName); ApplyNamespace(svcnameel, ns); r.Add(svcnameel); // not part of the roleinstance vms.Root.Add(r); } } } catch (HttpRequestException http) { // no vms with cloud service } } return vms; }  (4) Restart Virtual Machine Request URI https://management.core.windows.net/<subscription-id>/services/hostedservices/<service-name>/deployments/<deployment-name>/roles/<role-name>/Operations HTTP Method POST (HTTP 1.1) Headers x-ms-version: 2012-03-01 Content-Type: application/xml Body <RestartRoleOperation xmlns:i="http://www.w3.org/2001/XMLSchema-instance"> <OperationType>RestartRoleOperation</OperationType> </RestartRoleOperation>  More details about this http request here http://msdn.microsoft.com/en-us/library/windowsazure/jj157197.aspx  C# Code async public Task<String> RebootVM(String ServiceName, String RoleName) { String requestID = String.Empty;   String deployment = await GetAzureDeploymentName(ServiceName); String uri = String.Format("https://management.core.windows.net/{0}/services/hostedservices/{1}/deployments/{2}/roleInstances/{3}/Operations", _subscriptionid, ServiceName, deployment, RoleName);   HttpClient http = GetHttpClient();   XElement srcTree = new XElement("RestartRoleOperation", new XAttribute(XNamespace.Xmlns + "i", ns1), new XElement("OperationType", "RestartRoleOperation") ); ApplyNamespace(srcTree, ns);   XDocument CSXML = new XDocument(srcTree); HttpContent content = new StringContent(CSXML.ToString()); content.Headers.ContentType = new System.Net.Http.Headers.MediaTypeHeaderValue("application/xml");   HttpResponseMessage responseMsg = await http.PostAsync(uri, content); if (responseMsg != null) { requestID = responseMsg.Headers.GetValues("x-ms-request-id").FirstOrDefault(); } return requestID; }  (5) Delete Virtual Machine You can delete your hosted virtual machine by deleting its deployment, but I prefer to delete its hosted service also, so you can easily manage your virtual machines from code 5.1 Delete Deployment Request URI https://management.core.windows.net/< 
subscription-id >/services/hostedservices/< service-name >/deployments/<Deployment-Name> HTTP Method DELETE (HTTP 1.1) Headers x-ms-version: 2012-03-01 Body None. C# code async public Task<HttpResponseMessage> DeleteDeployment( string deploymentName) { string xml = string.Empty; String uri = String.Format("https://management.core.windows.net/{0}/services/hostedservices/{1}/deployments/{2}", _subscriptionid, deploymentName, deploymentName); HttpClient http = GetHttpClient(); HttpResponseMessage responseMessage = await http.DeleteAsync(uri); return responseMessage; }  5.2 Delete Hosted Service Request URI https://management.core.windows.net/<subscription-id>/services/hostedservices/<service-name> HTTP Method DELETE (HTTP 1.1) Headers x-ms-version: 2012-03-01 Body None. C# code async public Task<HttpResponseMessage> DeleteService(string serviceName) { string xml = string.Empty; String uri = String.Format("https://management.core.windows.net/{0}/services/hostedservices/{1}", _subscriptionid, serviceName); Log.Info("Windows Azure URI (http DELETE verb): " + uri, typeof(VMManager)); HttpClient http = GetHttpClient(); HttpResponseMessage responseMessage = await http.DeleteAsync(uri); return responseMessage; }  And the following is the method which can used to delete both of deployment and service async public Task<string> DeleteVM(string vmName) { string responseString = string.Empty;   // as a convention here in this post, a unified name used for service, deployment and VM instance to make it easy to manage VMs HttpClient http = GetHttpClient(); HttpResponseMessage responseMessage = await DeleteDeployment(vmName);   if (responseMessage != null) {   string requestID = responseMessage.Headers.GetValues("x-ms-request-id").FirstOrDefault(); OperationResult result = await PollGetOperationStatus(requestID, 5, 120); if (result.Status == OperationStatus.Succeeded) { responseString = result.Message; HttpResponseMessage sResponseMessage = await DeleteService(vmName); if (sResponseMessage != null) { OperationResult sResult = await PollGetOperationStatus(requestID, 5, 120); responseString += sResult.Message; } } else { responseString = result.Message; } } return responseString; }  Note: This article is subject to be updated Hisham  References Advanced Windows Azure IaaS – Demo Code Windows Azure Service Management REST API Reference Introduction to the Azure Platform Representational state transfer Asynchronous Programming with Async and Await (C# and Visual Basic) HttpClient Class
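A note on two helpers that the code above calls but this article does not list: PollGetOperationStatus (used in DeleteVM) and GetAzureDeploymentName. The following is a minimal sketch of how the polling helper could look, built on the Get Operation Status operation (GET https://management.core.windows.net/<subscription-id>/operations/<request-id>). The OperationStatus enum and OperationResult class are assumptions shaped only to match the calls in DeleteVM, and the sketch reuses the _subscriptionid field, the ns namespace and GetHttpClient() from the class above.
public enum OperationStatus { InProgress, Succeeded, Failed }

public class OperationResult
{
    public OperationStatus Status { get; set; }
    public string Message { get; set; }
}

// Polls the Get Operation Status operation until the asynchronous request
// identified by requestId succeeds or fails, or until timeoutSeconds elapses.
async public Task<OperationResult> PollGetOperationStatus(String requestId, int pollIntervalSeconds, int timeoutSeconds)
{
    OperationResult result = new OperationResult { Status = OperationStatus.InProgress };
    String uri = String.Format("https://management.core.windows.net/{0}/operations/{1}", _subscriptionid, requestId);
    HttpClient http = GetHttpClient();
    DateTime deadline = DateTime.UtcNow.AddSeconds(timeoutSeconds);

    while (DateTime.UtcNow < deadline)
    {
        Stream responseStream = await http.GetStreamAsync(uri);
        XDocument xml = XDocument.Load(responseStream);
        String status = xml.Root.Element(ns + "Status").Value;

        if (status == "Succeeded")
        {
            result.Status = OperationStatus.Succeeded;
            result.Message = "Operation " + requestId + " succeeded.";
            return result;
        }
        if (status == "Failed")
        {
            result.Status = OperationStatus.Failed;
            var error = xml.Root.Descendants(ns + "Message").FirstOrDefault();
            result.Message = (error != null) ? error.Value : "Operation " + requestId + " failed.";
            return result;
        }

        // still in progress; wait before polling again
        await Task.Delay(TimeSpan.FromSeconds(pollIntervalSeconds));
    }

    result.Message = "Timed out waiting for operation " + requestId;
    return result;
}
GetAzureDeploymentName can be written in the same style: issue a GET against https://management.core.windows.net/<subscription-id>/services/hostedservices/<service-name>/deploymentslots/Production and read the Name element of the returned Deployment document.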

    Read the article

  • Spring-security not processing pre/post annotations

    - by wuntee
    Trying to get pre/post annotations working with a web application, but for some reason nothing is happening with spring-security. Can anyone see what im missing? web.xml contextConfigLocation /WEB-INF/rvaContext-business.xml /WEB-INF/rvaContext-security.xml <context-param> <param-name>log4jConfigLocation</param-name> <param-value>/WEB-INF/log4j.properties</param-value> </context-param> <!-- Spring security filter --> <filter> <filter-name>springSecurityFilterChain</filter-name> <filter-class>org.springframework.web.filter.DelegatingFilterProxy</filter-class> </filter> <filter-mapping> <filter-name>springSecurityFilterChain</filter-name> <url-pattern>/*</url-pattern> </filter-mapping> <listener> <listener-class>org.springframework.web.context.ContextLoaderListener</listener-class> </listener> <!-- - Publishes events for session creation and destruction through the application - context. Optional unless concurrent session control is being used. --> <listener> <listener-class>org.springframework.security.web.session.HttpSessionEventPublisher</listener-class> </listener> <listener> <listener-class>org.springframework.web.util.Log4jConfigListener</listener-class> </listener> <servlet> <servlet-name>rva</servlet-name> <servlet-class>org.springframework.web.servlet.DispatcherServlet</servlet-class> <load-on-startup>1</load-on-startup> </servlet> <servlet-mapping> <servlet-name>rva</servlet-name> <url-pattern>/rva/*</url-pattern> </servlet-mapping> rvaContext-secuity.xml: <?xml version="1.0" encoding="UTF-8"?> <beans:beans xmlns="http://www.springframework.org/schema/security" xmlns:beans="http://www.springframework.org/schema/beans" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans-3.0.xsd http://www.springframework.org/schema/security http://www.springframework.org/schema/security/spring-security-3.0.xsd"> <global-method-security pre-post-annotations="enabled"/> <http use-expressions="true"> <form-login /> <logout /> <remember-me /> <!-- Uncomment to limit the number of sessions a user can have --> <session-management invalid-session-url="/timeout.jsp"> <concurrency-control max-sessions="1" error-if-maximum-exceeded="true" /> </session-management> <form-login login-page="rva/login" /> </http> ... 
LoginController class: @Controller @RequestMapping("/login") public class LoginController { @RequestMapping(method = RequestMethod.GET) public String login(ModelMap map){ map.addAttribute("title", "Login: AD Credentials"); return("login"); } @RequestMapping("/secure") @PreAuthorize("hasRole('ROLE_USER')") public String secure(ModelMap map){ return("secure"); } } In the logs, there is nothing even related to spring-security: logs: INFO: Initializing Spring FrameworkServlet 'rva' INFO [org.springframework.web.servlet.DispatcherServlet] - FrameworkServlet 'rva': initialization started INFO [org.springframework.web.context.support.XmlWebApplicationContext] - Refreshing WebApplicationContext for namespace 'rva-servlet': startup date [Fri Mar 26 10:28:51 MDT 2010]; parent: Root WebApplicationContext INFO [org.springframework.beans.factory.xml.XmlBeanDefinitionReader] - Loading XML bean definitions from ServletContext resource [/WEB-INF/rva-servlet.xml] INFO [org.springframework.beans.factory.support.DefaultListableBeanFactory] - Pre-instantiating singletons in org.springframework.beans.factory.support.DefaultListableBeanFactory@a2fc31: defining beans [loginController,org.springframework.context.annotation.internalConfigurationAnnotationProcessor,org.springframework.context.annotation.internalAutowiredAnnotationProcessor,org.springframework.context.annotation.internalRequiredAnnotationProcessor,org.springframework.context.annotation.internalCommonAnnotationProcessor,freemarkerConfig,viewResolver]; parent: org.springframework.beans.factory.support.DefaultListableBeanFactory@cc74e7 INFO [org.springframework.web.servlet.view.freemarker.FreeMarkerConfigurer] - ClassTemplateLoader for Spring macros added to FreeMarker configuration INFO [org.springframework.web.servlet.mvc.annotation.DefaultAnnotationHandlerMapping] - Mapped URL path [/login/secure] onto handler [com.cable.comcast.neto.nse.rva.controller.LoginController@79b32a] INFO [org.springframework.web.servlet.mvc.annotation.DefaultAnnotationHandlerMapping] - Mapped URL path [/login/secure.*] onto handler [com.cable.comcast.neto.nse.rva.controller.LoginController@79b32a] INFO [org.springframework.web.servlet.mvc.annotation.DefaultAnnotationHandlerMapping] - Mapped URL path [/login/secure/] onto handler [com.cable.comcast.neto.nse.rva.controller.LoginController@79b32a] INFO [org.springframework.web.servlet.mvc.annotation.DefaultAnnotationHandlerMapping] - Mapped URL path [/login] onto handler [com.cable.comcast.neto.nse.rva.controller.LoginController@79b32a] INFO [org.springframework.web.servlet.mvc.annotation.DefaultAnnotationHandlerMapping] - Mapped URL path [/login.*] onto handler [com.cable.comcast.neto.nse.rva.controller.LoginController@79b32a] INFO [org.springframework.web.servlet.mvc.annotation.DefaultAnnotationHandlerMapping] - Mapped URL path [/login/] onto handler [com.cable.comcast.neto.nse.rva.controller.LoginController@79b32a] INFO [org.springframework.web.servlet.DispatcherServlet] - FrameworkServlet 'rva': initialization completed in 417 ms Mar 26, 2010 10:28:52 AM org.apache.coyote.http11.Http11Protocol start INFO: Starting Coyote HTTP/1.1 on http-8080 Mar 26, 2010 10:28:52 AM org.apache.jk.common.ChannelSocket init INFO: JK: ajp13 listening on /0.0.0.0:8009 Mar 26, 2010 10:28:52 AM org.apache.jk.server.JkMain start INFO: Jk running ID=0 time=0/31 config=null Mar 26, 2010 10:28:52 AM org.apache.catalina.startup.Catalina start INFO: Server startup in 1873 ms WARN [org.springframework.web.servlet.PageNotFound] - No mapping found for 
HTTP request with URI [/rva-web/] in DispatcherServlet with name 'rva'

    Read the article

  • Minecraft Style Chunk building problem

    - by David Torrey
    I'm having some problems with speed in my chunk engine. I timed it out, and in its current state it takes a total ~5 seconds per chunk to fill each face's list. I have a check to see if each face of a block is visible and if it is not visible, it skips it and moves on. I'm using a dictionary (unordered map) because it makes sense memorywise to just not have an entry if there is no block. I've tracked my problem down to testing if there is an entry, and accessing an entry if it does exist. If I remove the tests to see if there is an entry in the dictionary for an adjacent block, or if the block type itself is seethrough, it runs within about 2-4 milliseconds. so here's my question: Is there a faster way to check for an entry in a dictionary than .ContainsKey()? As an aside, I tried TryGetValue() and it doesn't really help with the speed that much. If I remove the ContainsKey() and keep the test where it does the IsSeeThrough for each block, it halves the time, but it's still about 2-3 seconds. It only drops to 2-4ms if I remove BOTH checks. Here is my code: using System; using System.Collections.Generic; using System.Linq; using System.Text; using System.Threading.Tasks; using System.Runtime.InteropServices; using OpenTK; using OpenTK.Graphics.OpenGL; using System.Drawing; namespace Anabelle_Lee { public enum BlockEnum { air = 0, dirt = 1, } [StructLayout(LayoutKind.Sequential,Pack=1)] public struct Coordinates<T1> { public T1 x; public T1 y; public T1 z; public override string ToString() { return "(" + x + "," + y + "," + z + ") : " + typeof(T1); } } public struct Sides<T1> { public T1 left; public T1 right; public T1 top; public T1 bottom; public T1 front; public T1 back; } public class Block { public int blockType; public bool SeeThrough() { switch (blockType) { case 0: return true; } return false ; } public override string ToString() { return ((BlockEnum)(blockType)).ToString(); } } class Chunk { private Dictionary<Coordinates<byte>, Block> mChunkData; //stores the block data private Sides<List<Coordinates<byte>>> mVBOVertexBuffer; private Sides<int> mVBOHandle; //private bool mIsChanged; private const byte mCHUNKSIZE = 16; public Chunk() { } public void InitializeChunk() { //create VBO references #if DEBUG Console.WriteLine ("Initializing Chunk"); #endif mChunkData = new Dictionary<Coordinates<byte> , Block>(); //mIsChanged = true; GL.GenBuffers(1, out mVBOHandle.left); GL.GenBuffers(1, out mVBOHandle.right); GL.GenBuffers(1, out mVBOHandle.top); GL.GenBuffers(1, out mVBOHandle.bottom); GL.GenBuffers(1, out mVBOHandle.front); GL.GenBuffers(1, out mVBOHandle.back); //make new list of vertexes for each face mVBOVertexBuffer.top = new List<Coordinates<byte>>(); mVBOVertexBuffer.bottom = new List<Coordinates<byte>>(); mVBOVertexBuffer.left = new List<Coordinates<byte>>(); mVBOVertexBuffer.right = new List<Coordinates<byte>>(); mVBOVertexBuffer.front = new List<Coordinates<byte>>(); mVBOVertexBuffer.back = new List<Coordinates<byte>>(); #if DEBUG Console.WriteLine("Chunk Initialized"); #endif } public void GenerateChunk() { #if DEBUG Console.WriteLine("Generating Chunk"); #endif for (byte i = 0; i < mCHUNKSIZE; i++) { for (byte j = 0; j < mCHUNKSIZE; j++) { for (byte k = 0; k < mCHUNKSIZE; k++) { Random blockLoc = new Random(); Coordinates<byte> randChunk = new Coordinates<byte> { x = i, y = j, z = k }; mChunkData.Add(randChunk, new Block()); mChunkData[randChunk].blockType = blockLoc.Next(0, 1); } } } #if DEBUG Console.WriteLine("Chunk Generated"); #endif } public void DeleteChunk() { 
//delete VBO references #if DEBUG Console.WriteLine("Deleting Chunk"); #endif GL.DeleteBuffers(1, ref mVBOHandle.left); GL.DeleteBuffers(1, ref mVBOHandle.right); GL.DeleteBuffers(1, ref mVBOHandle.top); GL.DeleteBuffers(1, ref mVBOHandle.bottom); GL.DeleteBuffers(1, ref mVBOHandle.front); GL.DeleteBuffers(1, ref mVBOHandle.back); //clear all vertex buffers ClearPolyLists(); #if DEBUG Console.WriteLine("Chunk Deleted"); #endif } public void UpdateChunk() { #if DEBUG Console.WriteLine("Updating Chunk"); #endif ClearPolyLists(); //prepare buffers //for every entry in mChunkData map foreach(KeyValuePair<Coordinates<byte>,Block> feBlockData in mChunkData) { Coordinates<byte> checkBlock = new Coordinates<byte> { x = feBlockData.Key.x, y = feBlockData.Key.y, z = feBlockData.Key.z }; //check for polygonson the left side of the cube if (checkBlock.x > 0) { //check to see if there is a key for current x - 1. if not, add the vector if (!IsVisible(checkBlock.x - 1, checkBlock.y, checkBlock.z)) { //add polygon AddPoly(checkBlock.x, checkBlock.y, checkBlock.z, mVBOHandle.left); } } else { //polygon is far left and should be added AddPoly(checkBlock.x, checkBlock.y, checkBlock.z, mVBOHandle.left); } //check for polygons on the right side of the cube if (checkBlock.x < mCHUNKSIZE - 1) { if (!IsVisible(checkBlock.x + 1, checkBlock.y, checkBlock.z)) { //add poly AddPoly(checkBlock.x, checkBlock.y, checkBlock.z, mVBOHandle.right); } } else { //poly for right add AddPoly(checkBlock.x, checkBlock.y, checkBlock.z, mVBOHandle.right); } if (checkBlock.y > 0) { //check to see if there is a key for current x - 1. if not, add the vector if (!IsVisible(checkBlock.x, checkBlock.y - 1, checkBlock.z)) { //add polygon AddPoly(checkBlock.x, checkBlock.y, checkBlock.z, mVBOHandle.bottom); } } else { //polygon is far left and should be added AddPoly(checkBlock.x, checkBlock.y, checkBlock.z, mVBOHandle.bottom); } //check for polygons on the right side of the cube if (checkBlock.y < mCHUNKSIZE - 1) { if (!IsVisible(checkBlock.x, checkBlock.y + 1, checkBlock.z)) { //add poly AddPoly(checkBlock.x, checkBlock.y, checkBlock.z, mVBOHandle.top); } } else { //poly for right add AddPoly(checkBlock.x, checkBlock.y, checkBlock.z, mVBOHandle.top); } if (checkBlock.z > 0) { //check to see if there is a key for current x - 1. 
if not, add the vector if (!IsVisible(checkBlock.x, checkBlock.y, checkBlock.z - 1)) { //add polygon AddPoly(checkBlock.x, checkBlock.y, checkBlock.z, mVBOHandle.back); } } else { //polygon is far left and should be added AddPoly(checkBlock.x, checkBlock.y, checkBlock.z, mVBOHandle.back); } //check for polygons on the right side of the cube if (checkBlock.z < mCHUNKSIZE - 1) { if (!IsVisible(checkBlock.x, checkBlock.y, checkBlock.z + 1)) { //add poly AddPoly(checkBlock.x, checkBlock.y, checkBlock.z, mVBOHandle.front); } } else { //poly for right add AddPoly(checkBlock.x, checkBlock.y, checkBlock.z, mVBOHandle.front); } } BuildBuffers(); #if DEBUG Console.WriteLine("Chunk Updated"); #endif } public void RenderChunk() { } public void LoadChunk() { #if DEBUG Console.WriteLine("Loading Chunk"); #endif #if DEBUG Console.WriteLine("Chunk Deleted"); #endif } public void SaveChunk() { #if DEBUG Console.WriteLine("Saving Chunk"); #endif #if DEBUG Console.WriteLine("Chunk Saved"); #endif } private bool IsVisible(int pX,int pY,int pZ) { Block testBlock; Coordinates<byte> checkBlock = new Coordinates<byte> { x = Convert.ToByte(pX), y = Convert.ToByte(pY), z = Convert.ToByte(pZ) }; if (mChunkData.TryGetValue(checkBlock,out testBlock )) //if data exists { if (testBlock.SeeThrough() == true) //if existing data is not seethrough { return true; } } return true; } private void AddPoly(byte pX, byte pY, byte pZ, int BufferSide) { //create temp array GL.BindBuffer(BufferTarget.ArrayBuffer, BufferSide); if (BufferSide == mVBOHandle.front) { //front face mVBOVertexBuffer.front.Add(new Coordinates<byte> { x = (byte)(pX) , y = (byte)(pY + 1), z = (byte)(pZ + 1) }); mVBOVertexBuffer.front.Add(new Coordinates<byte> { x = (byte)(pX) , y = (byte)(pY) , z = (byte)(pZ + 1) }); mVBOVertexBuffer.front.Add(new Coordinates<byte> { x = (byte)(pX + 1), y = (byte)(pY) , z = (byte)(pZ + 1) }); mVBOVertexBuffer.front.Add(new Coordinates<byte> { x = (byte)(pX + 1), y = (byte)(pY) , z = (byte)(pZ + 1) }); mVBOVertexBuffer.front.Add(new Coordinates<byte> { x = (byte)(pX + 1), y = (byte)(pY + 1), z = (byte)(pZ + 1) }); mVBOVertexBuffer.front.Add(new Coordinates<byte> { x = (byte)(pX) , y = (byte)(pY + 1), z = (byte)(pZ + 1) }); } else if (BufferSide == mVBOHandle.right) { //back face mVBOVertexBuffer.back.Add(new Coordinates<byte> { x = (byte)(pX + 1), y = (byte)(pY + 1), z = (byte)(pZ) }); mVBOVertexBuffer.back.Add(new Coordinates<byte> { x = (byte)(pX + 1), y = (byte)(pY) , z = (byte)(pZ) }); mVBOVertexBuffer.back.Add(new Coordinates<byte> { x = (byte)(pX) , y = (byte)(pY) , z = (byte)(pZ) }); mVBOVertexBuffer.back.Add(new Coordinates<byte> { x = (byte)(pX) , y = (byte)(pY) , z = (byte)(pZ) }); mVBOVertexBuffer.back.Add(new Coordinates<byte> { x = (byte)(pX) , y = (byte)(pY + 1), z = (byte)(pZ) }); mVBOVertexBuffer.back.Add(new Coordinates<byte> { x = (byte)(pX + 1), y = (byte)(pY + 1), z = (byte)(pZ) }); } else if (BufferSide == mVBOHandle.top) { //left face mVBOVertexBuffer.left.Add(new Coordinates<byte> { x = (byte)(pX), y = (byte)(pY + 1), z = (byte)(pZ) }); mVBOVertexBuffer.left.Add(new Coordinates<byte> { x = (byte)(pX), y = (byte)(pY) , z = (byte)(pZ) }); mVBOVertexBuffer.left.Add(new Coordinates<byte> { x = (byte)(pX), y = (byte)(pY) , z = (byte)(pZ + 1) }); mVBOVertexBuffer.left.Add(new Coordinates<byte> { x = (byte)(pX), y = (byte)(pY) , z = (byte)(pZ + 1) }); mVBOVertexBuffer.left.Add(new Coordinates<byte> { x = (byte)(pX), y = (byte)(pY + 1), z = (byte)(pZ + 1) }); mVBOVertexBuffer.left.Add(new Coordinates<byte> { x = 
(byte)(pX), y = (byte)(pY + 1), z = (byte)(pZ) }); } else if (BufferSide == mVBOHandle.bottom) { //right face mVBOVertexBuffer.right.Add(new Coordinates<byte> { x = (byte)(pX + 1), y = (byte)(pY + 1), z = (byte)(pZ + 1) }); mVBOVertexBuffer.right.Add(new Coordinates<byte> { x = (byte)(pX + 1), y = (byte)(pY) , z = (byte)(pZ + 1) }); mVBOVertexBuffer.right.Add(new Coordinates<byte> { x = (byte)(pX + 1), y = (byte)(pY) , z = (byte)(pZ) }); mVBOVertexBuffer.right.Add(new Coordinates<byte> { x = (byte)(pX + 1), y = (byte)(pY) , z = (byte)(pZ) }); mVBOVertexBuffer.right.Add(new Coordinates<byte> { x = (byte)(pX + 1), y = (byte)(pY + 1), z = (byte)(pZ) }); mVBOVertexBuffer.right.Add(new Coordinates<byte> { x = (byte)(pX + 1), y = (byte)(pY + 1), z = (byte)(pZ + 1) }); } else if (BufferSide == mVBOHandle.front) { //top face mVBOVertexBuffer.top.Add(new Coordinates<byte> { x = (byte)(pX) , y = (byte)(pY + 1), z = (byte)(pZ) }); mVBOVertexBuffer.top.Add(new Coordinates<byte> { x = (byte)(pX) , y = (byte)(pY + 1), z = (byte)(pZ + 1) }); mVBOVertexBuffer.top.Add(new Coordinates<byte> { x = (byte)(pX + 1), y = (byte)(pY + 1), z = (byte)(pZ + 1) }); mVBOVertexBuffer.top.Add(new Coordinates<byte> { x = (byte)(pX + 1), y = (byte)(pY + 1), z = (byte)(pZ + 1) }); mVBOVertexBuffer.top.Add(new Coordinates<byte> { x = (byte)(pX + 1), y = (byte)(pY + 1), z = (byte)(pZ) }); mVBOVertexBuffer.top.Add(new Coordinates<byte> { x = (byte)(pX) , y = (byte)(pY + 1), z = (byte)(pZ) }); } else if (BufferSide == mVBOHandle.back) { //bottom face mVBOVertexBuffer.bottom.Add(new Coordinates<byte> { x = (byte)(pX) , y = (byte)(pY), z = (byte)(pZ + 1) }); mVBOVertexBuffer.bottom.Add(new Coordinates<byte> { x = (byte)(pX) , y = (byte)(pY), z = (byte)(pZ) }); mVBOVertexBuffer.bottom.Add(new Coordinates<byte> { x = (byte)(pX + 1), y = (byte)(pY), z = (byte)(pZ) }); mVBOVertexBuffer.bottom.Add(new Coordinates<byte> { x = (byte)(pX + 1), y = (byte)(pY), z = (byte)(pZ) }); mVBOVertexBuffer.bottom.Add(new Coordinates<byte> { x = (byte)(pX + 1), y = (byte)(pY), z = (byte)(pZ + 1) }); mVBOVertexBuffer.bottom.Add(new Coordinates<byte> { x = (byte)(pX) , y = (byte)(pY), z = (byte)(pZ + 1) }); } } private void BuildBuffers() { #if DEBUG Console.WriteLine("Building Chunk Buffers"); #endif GL.BindBuffer(BufferTarget.ArrayBuffer, mVBOHandle.front); GL.BufferData(BufferTarget.ArrayBuffer, (IntPtr)(Marshal.SizeOf(new Coordinates<byte>()) * mVBOVertexBuffer.front.Count), mVBOVertexBuffer.front.ToArray(), BufferUsageHint.StaticDraw); GL.BindBuffer(BufferTarget.ArrayBuffer, mVBOHandle.back); GL.BufferData(BufferTarget.ArrayBuffer, (IntPtr)(Marshal.SizeOf(new Coordinates<byte>()) * mVBOVertexBuffer.back.Count), mVBOVertexBuffer.back.ToArray(), BufferUsageHint.StaticDraw); GL.BindBuffer(BufferTarget.ArrayBuffer, mVBOHandle.left); GL.BufferData(BufferTarget.ArrayBuffer, (IntPtr)(Marshal.SizeOf(new Coordinates<byte>()) * mVBOVertexBuffer.left.Count), mVBOVertexBuffer.left.ToArray(), BufferUsageHint.StaticDraw); GL.BindBuffer(BufferTarget.ArrayBuffer, mVBOHandle.right); GL.BufferData(BufferTarget.ArrayBuffer, (IntPtr)(Marshal.SizeOf(new Coordinates<byte>()) * mVBOVertexBuffer.right.Count), mVBOVertexBuffer.right.ToArray(), BufferUsageHint.StaticDraw); GL.BindBuffer(BufferTarget.ArrayBuffer, mVBOHandle.top); GL.BufferData(BufferTarget.ArrayBuffer, (IntPtr)(Marshal.SizeOf(new Coordinates<byte>()) * mVBOVertexBuffer.top.Count), mVBOVertexBuffer.top.ToArray(), BufferUsageHint.StaticDraw); GL.BindBuffer(BufferTarget.ArrayBuffer, mVBOHandle.bottom); 
GL.BufferData(BufferTarget.ArrayBuffer, (IntPtr)(Marshal.SizeOf(new Coordinates<byte>()) * mVBOVertexBuffer.bottom.Count), mVBOVertexBuffer.bottom.ToArray(), BufferUsageHint.StaticDraw); GL.BindBuffer(BufferTarget.ArrayBuffer,0); #if DEBUG Console.WriteLine("Chunk Buffers Built"); #endif } private void ClearPolyLists() { #if DEBUG Console.WriteLine("Clearing Polygon Lists"); #endif mVBOVertexBuffer.top.Clear(); mVBOVertexBuffer.bottom.Clear(); mVBOVertexBuffer.left.Clear(); mVBOVertexBuffer.right.Clear(); mVBOVertexBuffer.front.Clear(); mVBOVertexBuffer.back.Clear(); #if DEBUG Console.WriteLine("Polygon Lists Cleared"); #endif } }//END CLASS }//END NAMESPACE
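A side note on the ContainsKey/TryGetValue cost in the chunk code above: Coordinates<T1> is a generic struct that neither implements IEquatable<Coordinates<T1>> nor overrides Equals/GetHashCode, so Dictionary<Coordinates<byte>, Block> falls back to the default ValueType comparer, which can box the key and walk its fields via reflection on every lookup. A minimal sketch of a dedicated key struct that avoids this is shown below; it assumes byte coordinates, as the chunk code uses, and the name BlockCoord is made up for the example.
[StructLayout(LayoutKind.Sequential, Pack = 1)]
public struct BlockCoord : IEquatable<BlockCoord>
{
    public byte x;
    public byte y;
    public byte z;

    public BlockCoord(byte px, byte py, byte pz) { x = px; y = py; z = pz; }

    // field-by-field comparison: no boxing, no reflection
    public bool Equals(BlockCoord other)
    {
        return x == other.x && y == other.y && z == other.z;
    }

    public override bool Equals(object obj)
    {
        return obj is BlockCoord && Equals((BlockCoord)obj);
    }

    public override int GetHashCode()
    {
        // packs the three bytes into one int, unique for any 16x16x16 chunk
        return (x << 16) | (y << 8) | z;
    }
}
With a Dictionary<BlockCoord, Block> the existing ContainsKey/TryGetValue calls keep working but go through the fast comparer. For a fixed 16x16x16 chunk, a plain Block[16, 16, 16] array indexed directly by x, y, z (with null meaning air) would likely be faster still, since it removes hashing altogether.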

    Read the article

  • How to Stitch two Image objects in Java

    - by Imran
    Hi, I have a scenario in which i`m getting a number of tiles (e.g.12) from my mapping server. Now for buffering and offline functions I need to join them all back again so that we have to deal with 1 single image object instead of 12. I ve tried to do it without JAI my code is below. package imagemerge; import java.awt.*; import java.awt.image.*; import java.awt.event.*; public class ImageSticher extends WindowAdapter { Image tile1; Image tile2; Image result; ColorModel colorModel; int width,height,widthr,heightr; //int t1,t2; int t12[]; public ImageSticher() { } public ImageSticher (Image img1,Image img2,int w,int h) { tile1=img1; tile2=img2; width=w; height=h; colorModel=ColorModel.getRGBdefault(); } public Image horizontalStich() throws Exception { widthr=width*2; heightr=height; t12=new int[widthr * heightr]; int t1[]=new int[width*height]; PixelGrabber p1 =new PixelGrabber(tile1, 0, 0, width, height, t1, 0, width); p1.grabPixels(); int t2[]=new int[width*height]; PixelGrabber p2 =new PixelGrabber(tile2, 0, 0, width, height, t1, 0, width); p2.grabPixels(); int y, x, rp, rpi; int red1, red2, redr; int green1, green2, greenr; int blue1, blue2, bluer; int alpha1, alpha2, alphar; for(y=0;y<heightr;y++) { for(x=0;x<widthr;x++) { //System.out.println(x); rpi=y*widthr+x; // index of resulting pixel; rp=0; //initializing resulting pixel System.out.println(rpi); if(x<(widthr/2)) // x is less than width , copy first tile { //System.out.println("tile1="+x); blue1 = t1[rpi] & 0x00ff; // ERROR occurs here green1=(t1[rpi] >> 8) & 0x00ff; red1=(t1[rpi] >> 16) & 0x00ff; alpha1 = (t1[rpi] >> 24) & 0x00ff; redr = (int)(red1 * 1.0); // copying red band pixel into redresult,,,,1.0 is the alpha valye redr = (redr < 0)?(0):((redr>255)?(255):(redr)); greenr = (int)(green1 * 1.0); // redr = (int)(red1 * 1.0); // greenr = (greenr < 0)?(0):((greenr>255)?(255):(greenr)); bluer = (int)(blue1 * 1.0); bluer = (bluer < 0)?(0):((bluer>255)?(255):(bluer)); alphar = 255; //resulting pixel computed rp = (((((alphar << 8) + (redr & 0x0ff)) << 8) + (greenr & 0x0ff)) << 8) + (bluer & 0x0ff); } else // index is ahead of half way...copy second tile { blue2 = t2[rpi] & 0x00ff; // blue band bit of first tile green2=(t2[rpi] >> 8) & 0x00ff; red2=(t2[rpi] >> 16) & 0x00ff; alpha2 = (t2[rpi] >> 24) & 0x00ff; redr = (int)(red2 * 1.0); // copying red band pixel into redresult,,,,1.0 is the alpha valye redr = (redr < 0)?(0):((redr>255)?(255):(redr)); greenr = (int)(green2 * 1.0); // redr = (int)(red2 * 1.0); // greenr = (greenr < 0)?(0):((greenr>255)?(255):(greenr)); bluer = (int)(blue2 * 1.0); bluer = (bluer < 0)?(0):((bluer>255)?(255):(bluer)); alphar = 255; //resulting pixel computed rp = (((((alphar << 8) + (redr & 0x0ff)) << 8) + (greenr & 0x0ff)) << 8) + (bluer & 0x0ff); } t12[rpi] = rp; // copying resulting pixel in the result int array which will be converted to image } } MemoryImageSource mis; if (t12!=null) { mis = new MemoryImageSource(widthr, heightr, colorModel, t12, 0, widthr); result = Toolkit.getDefaultToolkit().createImage(mis); return result; } return null; } } now to check the my theory Im trying to join or stich two tiles horizontaly but im getting the error : java.lang.ArrayIndexOutOfBoundsException: 90000 at imagemerge.ImageSticher.horizontalStich(ImageSticher.java:69) at imageStream.ImageStream.getImage(ImageStream.java:75) at imageStream.ImageStream.main(ImageStream.java:28) is there some kind of limitation because when stiching two images of 300 x 300 horizontally it means the resulting image will be 600 
x 300... that would make an index size of 180000, but it's giving the error at 90000. What am I doing wrong here?

    Read the article

  • Problem measuring N times the execution time of a code block

    - by Nazgulled
    EDIT: I just found my problem after writing this long post explaining every little detail... If someone can give me a good answer on what I'm doing wrong and how can I get the execution time in seconds (using a float with 5 decimal places or so), I'll mark that as accepted. Hint: The problem was on how I interpreted the clock_getttime() man page. Hi, Let's say I have a function named myOperation that I need to measure the execution time of. To measure it, I'm using clock_gettime() as it was recommend here in one of the comments. My teacher recommends us to measure it N times so we can get an average, standard deviation and median for the final report. He also recommends us to execute myOperation M times instead of just one. If myOperation is a very fast operation, measuring it M times allow us to get a sense of the "real time" it takes; cause the clock being used might not have the required precision to measure such operation. So, execution myOperation only one time or M times really depends if the operation itself takes long enough for the clock precision we are using. I'm having trouble dealing with that M times execution. Increasing M decreases (a lot) the final average value. Which doesn't make sense to me. It's like this, on average you take 3 to 5 seconds to travel from point A to B. But then you go from A to B and back to A 5 times (which makes it 10 times, cause A to B is the same as B to A) and you measure that. Than you divide by 10, the average you get is supposed to be the same average you take traveling from point A to B, which is 3 to 5 seconds. This is what I want my code to do, but it's not working. If I keep increasing the number of times I go from A to B and back A, the average will be lower and lower each time, it makes no sense to me. Enough theory, here's my code: #include <stdio.h> #include <time.h> #define MEASUREMENTS 1 #define OPERATIONS 1 typedef struct timespec TimeClock; TimeClock diffTimeClock(TimeClock start, TimeClock end) { TimeClock aux; if((end.tv_nsec - start.tv_nsec) < 0) { aux.tv_sec = end.tv_sec - start.tv_sec - 1; aux.tv_nsec = 1E9 + end.tv_nsec - start.tv_nsec; } else { aux.tv_sec = end.tv_sec - start.tv_sec; aux.tv_nsec = end.tv_nsec - start.tv_nsec; } return aux; } int main(void) { TimeClock sTime, eTime, dTime; int i, j; for(i = 0; i < MEASUREMENTS; i++) { printf(" » MEASURE %02d\n", i+1); clock_gettime(CLOCK_REALTIME, &sTime); for(j = 0; j < OPERATIONS; j++) { myOperation(); } clock_gettime(CLOCK_REALTIME, &eTime); dTime = diffTimeClock(sTime, eTime); printf(" - NSEC (TOTAL): %ld\n", dTime.tv_nsec); printf(" - NSEC (OP): %ld\n\n", dTime.tv_nsec / OPERATIONS); } return 0; } Notes: The above diffTimeClock function is from this blog post. I replaced my real operation with myOperation() because it doesn't make any sense to post my real functions as I would have to post long blocks of code, you can easily code a myOperation() with whatever you like to compile the code if you wish. As you can see, OPERATIONS = 1 and the results are: » MEASURE 01 - NSEC (TOTAL): 27456580 - NSEC (OP): 27456580 For OPERATIONS = 100 the results are: » MEASURE 01 - NSEC (TOTAL): 218929736 - NSEC (OP): 2189297 For OPERATIONS = 1000 the results are: » MEASURE 01 - NSEC (TOTAL): 862834890 - NSEC (OP): 862834 For OPERATIONS = 10000 the results are: » MEASURE 01 - NSEC (TOTAL): 574133641 - NSEC (OP): 57413 Now, I'm not a math wiz, far from it actually, but this doesn't make any sense to me whatsoever. 
I've already talked about this with a friend who's on this project with me, and he can't understand the differences either. I don't understand why the value gets lower and lower as I increase OPERATIONS. The operation itself should take the same time (on average, of course, not the exact same time), no matter how many times I execute it. You could tell me that it actually depends on the operation itself, on the data being read, and that some data could already be in the cache and so on, but I don't think that's the problem. In my case, myOperation reads 5000 lines of text from a CSV file, separates the values by ; and inserts those values into a data structure. For each iteration, I'm destroying the data structure and initializing it again. Now that I think of it, I also think there's a problem measuring time with clock_gettime(); maybe I'm not using it right. I mean, look at the last example, where OPERATIONS = 10000. The total time it took was 574133641 ns, which would be roughly 0.5 s; that's impossible, it took a couple of minutes, as I couldn't stand looking at the screen waiting and went to eat something.

    Read the article

  • Embedding generic SQL queries into a C# program

    - by Pooja Balkundi
    Okay referring to my first question code in the main, I want the user to enter employee name at runtime and then i take this name which user has entered and compare it with the e_name of my emp table , if it exists i want to display all information of that employee , how can I achieve this ? using System; using System.Collections.Generic; using System.Linq; using System.Windows.Forms; using MySql.Data.MySqlClient; namespace ConnectCsharppToMySQL { public class DBConnect { private MySqlConnection connection; private string server; private string database; private string uid; private string password; string name; //Constructor public DBConnect() { Initialize(); } //Initialize values private void Initialize() { server = "localhost"; database = "test"; uid = "root"; password = ""; string connectionString; connectionString = "SERVER=" + server + ";" + "DATABASE=" + database + ";" + "UID=" + uid + ";" + "PASSWORD=" + password + ";"; connection = new MySqlConnection(connectionString); } //open connection to database private bool OpenConnection() { try { connection.Open(); return true; } catch (MySqlException ex) { //When handling errors, you can your application's response based //on the error number. //The two most common error numbers when connecting are as follows: //0: Cannot connect to server. //1045: Invalid user name and/or password. switch (ex.Number) { case 0: MessageBox.Show("Cannot connect to server. Contact administrator"); break; case 1045: MessageBox.Show("Invalid username/password, please try again"); break; } return false; } } //Close connection private bool CloseConnection() { try { connection.Close(); return true; } catch (MySqlException ex) { MessageBox.Show(ex.Message); return false; } } //Insert statement public void Insert() { string query = "INSERT INTO emp (e_name, age) VALUES('Pooja R', '21')"; //open connection if (this.OpenConnection() == true) { //create command and assign the query and connection from the constructor MySqlCommand cmd = new MySqlCommand(query, connection); //Execute command cmd.ExecuteNonQuery(); //close connection this.CloseConnection(); } } //Update statement public void Update() { string query = "UPDATE emp SET e_name='Peachy', age='22' WHERE e_name='Pooja R'"; //Open connection if (this.OpenConnection() == true) { //create mysql command MySqlCommand cmd = new MySqlCommand(); //Assign the query using CommandText cmd.CommandText = query; //Assign the connection using Connection cmd.Connection = connection; //Execute query cmd.ExecuteNonQuery(); //close connection this.CloseConnection(); } } //Select statement public List<string>[] Select() { string query = "SELECT * FROM emp where e_name=(/*I WANT USER ENTERED NAME TO GET INSERTED HERE*/)"; //Create a list to store the result List<string>[] list = new List<string>[3]; list[0] = new List<string>(); list[1] = new List<string>(); list[2] = new List<string>(); //Open connection if (this.OpenConnection() == true) { //Create Command MySqlCommand cmd = new MySqlCommand(query, connection); //Create a data reader and Execute the command MySqlDataReader dataReader = cmd.ExecuteReader(); //Read the data and store them in the list while (dataReader.Read()) { list[0].Add(dataReader["e_id"] + ""); list[1].Add(dataReader["e_name"] + ""); list[2].Add(dataReader["age"] + ""); } //close Data Reader dataReader.Close(); //close Connection this.CloseConnection(); //return list to be displayed return list; } else { return list; } } public static void Main(String[] args) { DBConnect db1 = new DBConnect(); 
Console.WriteLine("Initializing"); db1.Initialize(); Console.WriteLine("Search :"); Console.WriteLine("Enter the employee name"); db1.name = Console.ReadLine(); db1.Select(); Console.ReadLine(); } } }

    Read the article

  • JSF2 - use a view-scoped managed bean to pass values across navigation

    - by Fekete Kamosh
    Hi all, I am solving how to pass values from one page to another without making use of session scope managed bean. For most managed beans I would like to have only Request scope. I created a very, very simple calculator example which passes Result object resulting from actions on request bean (CalculatorRequestBean) from 5th phase as initializing value for new instance of request bean initialized in next phase lifecycle. In fact - in production environment we need to pass much more complicated data object which is not as primitive as Result defined below. What is your opinion on this solution which considers both possibilities - we stay on the same view or we navigate to the new one. But in both cases I can get to previous value stored passed using view scoped managed bean. Calculator page: <?xml version='1.0' encoding='UTF-8' ?> <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd"> <html xmlns="http://www.w3.org/1999/xhtml" xmlns:h="http://java.sun.com/jsf/html"> <h:head> <title>Calculator</title> </h:head> <h:body> <h:form> <h:panelGrid columns="2"> <h:outputText value="Value to use:"/> <h:inputText value="#{calculatorBeanRequest.valueToAdd}"/> <h:outputText value="Navigate to new view:"/> <h:selectBooleanCheckbox value="#{calculatorBeanRequest.navigateToNewView}"/> <h:commandButton value="Add" action="#{calculatorBeanRequest.add}"/> <h:commandButton value="Subtract" action="#{calculatorBeanRequest.subtract}"/> <h:outputText value="Result:"/> <h:outputText value="#{calculatorBeanRequest.result.value}"/> <h:outputText value="DUMMY" rendered="#{resultBeanView.dummy}"/> </h:panelGrid> </h:form> </h:body> Object to be passed through lifecycle: package cz.test.calculator; import java.io.Serializable; /** * Data object passed among pages. * Lets imagine it holds something much more complicated than primitive int */ public class Result implements Serializable { private int value; public void setValue(int value) { this.value = value; } public int getValue() { return value; } } Request scoped managed bean used on view "calculator.xhtml" package cz.test.calculator; import javax.annotation.PostConstruct; import javax.faces.bean.ManagedBean; import javax.faces.bean.ManagedProperty; import javax.faces.bean.RequestScoped; @ManagedBean @RequestScoped public class CalculatorBeanRequest { @ManagedProperty(value="#{resultBeanView}") ResultBeanView resultBeanView; private Result result; private int valueToAdd; /** * Should perform navigation to */ private boolean navigateToNewView; /** Creates a new instance of CalculatorBeanRequest */ public CalculatorBeanRequest() { } @PostConstruct public void init() { // Remember already saved result from view scoped bean result = resultBeanView.getResult(); } // Dependency injections public void setResultBeanView(ResultBeanView resultBeanView) { this.resultBeanView = resultBeanView; } public ResultBeanView getResultBeanView() { return resultBeanView; } // Getters, setter public void setValueToAdd(int valueToAdd) { this.valueToAdd = valueToAdd; } public int getValueToAdd() { return valueToAdd; } public boolean isNavigateToNewView() { return navigateToNewView; } public void setNavigateToNewView(boolean navigateToNewView) { this.navigateToNewView = navigateToNewView; } public Result getResult() { return result; } // Actions public String add() { result.setValue(result.getValue() + valueToAdd); return isNavigateToNewView() ? 
"calculator" : null; } public String subtract() { result.setValue(result.getValue() - valueToAdd); return isNavigateToNewView() ? "calculator" : null; } } and finally view scoped managed bean to pass Result variable to new page: package cz.test.calculator; import java.io.Serializable; import javax.annotation.PostConstruct; import javax.faces.bean.ManagedBean; import javax.faces.bean.ViewScoped; import javax.faces.context.FacesContext; @ManagedBean @ViewScoped public class ResultBeanView implements Serializable { private Result result = new Result(); /** Creates a new instance of ResultBeanView */ public ResultBeanView() { } @PostConstruct public void init() { // Try to find request bean ManagedBeanRequest and reset result value CalculatorBeanRequest calculatorBeanRequest = (CalculatorBeanRequest)FacesContext.getCurrentInstance().getExternalContext().getRequestMap().get("calculatorBeanRequest"); if(calculatorBeanRequest != null) { setResult(calculatorBeanRequest.getResult()); } } /** No need to have public modifier as not used on view * but only in managed bean within the same package */ void setResult(Result result) { this.result = result; } /** No need to have public modifier as not used on view * but only in managed bean within the same package */ Result getResult() { return result; } /** * To be called on page to instantiate ResultBeanView in Render view phase */ public boolean isDummy() { return false; } }

    Read the article

  • Re-opening a closed dialog puts it back in its start position after moving it

    - by semmelbroesel
    I'm using multiple instances of the JQuery-UI dialog along with draggable and resizable. Whenever I drag or resize one of the dialog boxes, the position and dimensions of the current box are saved to a database and then loaded the next time the page opens. This is working well. However, when I close a dialog box and re-open it using a button, jQuery sets the position of the box back to its original location in the center of the screen. Furthermore, I use a show effect to slide the box off to the left on closing and in from the left on re-opening. I found two ways to update its position when the slide in animation is done, however, it still slides into the center of the screen, and I have yet to find a way to get it to slide in towards the location it is supposed to have. Here are the parts of the code that play a part in this: $('.box').dialog({ closeOnEscape: false, hide: { effect: "drop", direction: 'left' }, beforeClose: function (evt, ui){ var $this = $(this); SavePos($this); // saves the dimensions to db }, dragStop: function() { var $this = $(this); SavePos($this); }, resizeStop: function() { var $this = $(this); SavePos($this); }, open: function() { var $this = $(this); $this.dialog('option', { show: { effect: "drop", direction: 'left'} } ); if (init) // meaning only load this code when the page has finished initializing { // tried it both ways - set the position before and after the effect - no success UpdatePos($this.attr('id')); // I tried this section with promise() and effect / complete - I found no difference $this.parent().promise().done(function() { UpdatePos($this.attr('id')); }); } } }); function UpdatePos(key) { // boxpos is an object holding the position for each box by the box's id var $this = $('#' + key); //console.log('updating pos for box ' + boxid); if ($this && $this.hasClass('ui-dialog-content')) { //console.log($this.dialog('widget').css('left')); $this.dialog('option', { width: boxpos[key].width, height: boxpos[key].height }); $this.dialog('widget').css({ left: boxpos[key].left + 'px', top: boxpos[key].top + 'px' }); //console.log('finished updating pos'); //console.log($this.dialog('widget').css('left')); } } The button that re-opens the box has this code on it to make that happen: var $box = $('#boxid'); if ($box) { if ($box.dialog('isOpen')) { $box.dialog('moveToTop'); } else { $box.dialog("open") } } I don't know what jQuery-UI does to the box as it hides it (other than display:none) or to make it slide in, so maybe there's something I'm missing here that might help... Basically, I need JQuery to remember the box' position and put the box back into that location when it is re-opened. It took me days to make it this far, but this is one obstacle I have yet to overcome. Maybe there's a different way I can re-open the box? Thanks! EDIT: Forgot - this issue ONLY happens when I use my UpdatePos function to set the location of a box (i.e. on page load). When I drag a box with my mouse, close it, and re-open it, everything works. So I'm guessing there's one more storage location for the box' position that I'm missing here... 
EDIT2: After more messing with it, my code for debugging now looks like this: open: function() { var $this = $(this); console.log('box open'); console.log($this.dialog('widget').position()); // { top=0, left=-78.5} console.log($this.dialog('widget').css('left')); $this.dialog('option', { show: { effect: "drop", direction: 'left'} } ); if (init) { UpdatePos($this.attr('id')); $this.parent().promise().done(function() { console.log($this.dialog('widget').position()); // { top=313, left=641.5} console.log($this.dialog('widget').css('left')); UpdatePos($this.attr('id')); console.log($this.dialog('widget').position()); // { top=121, left=107} console.log($this.dialog('widget').css('left')); }); } The results I'm getting are: box open Object { top=0, left=-78.5} -78.5px Object { top=313, left=641.5} 641.5px Object { top=121, left=107} 107px So looks to me as if the widget is being moved off screen (left=-78.5) and then moved for the animation, and then my code moves it into the location that it should be in (121/107). The position() results for $box.position() or $box.dialog().position() do not change during this debugging section. Maybe this will help someone here - I'm still out of ideas here... ... and I just discovered that when I drag the item around myself, then close and re-open it, it is very unpredictable. Sometimes, it will end up in the correct location horizontally, but not vertically. Sometimes, it will end up back in the center of the screen...

    Read the article

  • Error deploying Spring application to JBoss due to ClassCastException

    - by Rafael
    When I try to deploy a spring application in Jboss, I get this error: 11:32:34,045 ERROR [AbstractKernelController] Error installing to Start: name=persistence.unit:unitName=#ehr-punit state=Create java.lang.RuntimeException: Specification violation [EJB3 JPA 6.2.1.2] - You have not defined a jta-data-source for a JTA enabled persistence context named: ehr-punit at org.jboss.jpa.deployment.PersistenceUnitInfoImpl.(PersistenceUnitInfoImpl.java:115) at org.jboss.jpa.deployment.PersistenceUnitDeployment.start(PersistenceUnitDeployment.java:275) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25) at java.lang.reflect.Method.invoke(Method.java:597) at org.jboss.reflect.plugins.introspection.ReflectionUtils.invoke(ReflectionUtils.java:59) at org.jboss.reflect.plugins.introspection.ReflectMethodInfoImpl.invoke(ReflectMethodInfoImpl.java:150) at org.jboss.joinpoint.plugins.BasicMethodJoinPoint.dispatch(BasicMethodJoinPoint.java:66) at org.jboss.kernel.plugins.dependency.KernelControllerContextAction$JoinpointDispatchWrapper.execute(KernelControllerContextAction.java:241) at org.jboss.kernel.plugins.dependency.ExecutionWrapper.execute(ExecutionWrapper.java:47) at org.jboss.kernel.plugins.dependency.KernelControllerContextAction.dispatchExecutionWrapper(KernelControllerContextAction.java:109) at org.jboss.kernel.plugins.dependency.KernelControllerContextAction.dispatchJoinPoint(KernelControllerContextAction.java:70) at org.jboss.kernel.plugins.dependency.LifecycleAction.installActionInternal(LifecycleAction.java:221) at org.jboss.kernel.plugins.dependency.InstallsAwareAction.installAction(InstallsAwareAction.java:54) at org.jboss.kernel.plugins.dependency.InstallsAwareAction.installAction(InstallsAwareAction.java:42) at org.jboss.dependency.plugins.action.SimpleControllerContextAction.simpleInstallAction(SimpleControllerContextAction.java:62) at org.jboss.dependency.plugins.action.AccessControllerContextAction.install(AccessControllerContextAction.java:71) at org.jboss.dependency.plugins.AbstractControllerContextActions.install(AbstractControllerContextActions.java:51) at org.jboss.dependency.plugins.AbstractControllerContext.install(AbstractControllerContext.java:348) at org.jboss.dependency.plugins.AbstractController.install(AbstractController.java:1631) at org.jboss.dependency.plugins.AbstractController.incrementState(AbstractController.java:934) at org.jboss.dependency.plugins.AbstractController.resolveContexts(AbstractController.java:1082) at org.jboss.dependency.plugins.AbstractController.resolveContexts(AbstractController.java:984) at org.jboss.dependency.plugins.AbstractController.install(AbstractController.java:774) at org.jboss.dependency.plugins.AbstractController.install(AbstractController.java:540) at org.jboss.deployers.vfs.deployer.kernel.BeanMetaDataDeployer.deploy(BeanMetaDataDeployer.java:121) at org.jboss.deployers.vfs.deployer.kernel.BeanMetaDataDeployer.deploy(BeanMetaDataDeployer.java:51) at org.jboss.deployers.spi.deployer.helpers.AbstractSimpleRealDeployer.internalDeploy(AbstractSimpleRealDeployer.java:62) at org.jboss.deployers.spi.deployer.helpers.AbstractRealDeployer.deploy(AbstractRealDeployer.java:50) at org.jboss.deployers.plugins.deployers.DeployerWrapper.deploy(DeployerWrapper.java:171) at org.jboss.deployers.plugins.deployers.DeployersImpl.doDeploy(DeployersImpl.java:1439) at 
org.jboss.deployers.plugins.deployers.DeployersImpl.doInstallParentFirst(DeployersImpl.java:1157) at org.jboss.deployers.plugins.deployers.DeployersImpl.doInstallParentFirst(DeployersImpl.java:1178) at org.jboss.deployers.plugins.deployers.DeployersImpl.install(DeployersImpl.java:1098) at org.jboss.dependency.plugins.AbstractControllerContext.install(AbstractControllerContext.java:348) at org.jboss.dependency.plugins.AbstractController.install(AbstractController.java:1631) at org.jboss.dependency.plugins.AbstractController.incrementState(AbstractController.java:934) at org.jboss.dependency.plugins.AbstractController.resolveContexts(AbstractController.java:1082) at org.jboss.dependency.plugins.AbstractController.resolveContexts(AbstractController.java:984) at org.jboss.dependency.plugins.AbstractController.change(AbstractController.java:822) at org.jboss.dependency.plugins.AbstractController.change(AbstractController.java:553) at org.jboss.deployers.plugins.deployers.DeployersImpl.process(DeployersImpl.java:781) at org.jboss.deployers.plugins.main.MainDeployerImpl.process(MainDeployerImpl.java:702) at org.jboss.system.server.profileservice.repository.MainDeployerAdapter.process(MainDeployerAdapter.java:117) at org.jboss.system.server.profileservice.repository.ProfileDeployAction.install(ProfileDeployAction.java:70) at org.jboss.system.server.profileservice.repository.AbstractProfileAction.install(AbstractProfileAction.java:53) at org.jboss.system.server.profileservice.repository.AbstractProfileService.install(AbstractProfileService.java:361) at org.jboss.dependency.plugins.AbstractControllerContext.install(AbstractControllerContext.java:348) at org.jboss.dependency.plugins.AbstractController.install(AbstractController.java:1631) at org.jboss.dependency.plugins.AbstractController.incrementState(AbstractController.java:934) at org.jboss.dependency.plugins.AbstractController.resolveContexts(AbstractController.java:1082) at org.jboss.dependency.plugins.AbstractController.resolveContexts(AbstractController.java:984) at org.jboss.dependency.plugins.AbstractController.change(AbstractController.java:822) at org.jboss.dependency.plugins.AbstractController.change(AbstractController.java:553) at org.jboss.system.server.profileservice.repository.AbstractProfileService.activateProfile(AbstractProfileService.java:306) at org.jboss.system.server.profileservice.ProfileServiceBootstrap.start(ProfileServiceBootstrap.java:271) at org.jboss.bootstrap.AbstractServerImpl.start(AbstractServerImpl.java:461) at org.jboss.Main.boot(Main.java:221) at org.jboss.Main$1.run(Main.java:556) at java.lang.Thread.run(Thread.java:619) 11:32:35,615 INFO [TomcatDeployment] deploy, ctxPath=/ehr-web 11:32:35,986 INFO [[/ehr-web]] Initializing Spring root WebApplicationContext 11:32:35,986 INFO [ContextLoader] Root WebApplicationContext: initialization started 11:32:36,046 INFO [XmlWebApplicationContext] Refreshing org.springframework.web.context.support.XmlWebApplicationContext@1392743: display name [Root WebApplicationContext]; startup date [Mon Jul 20 11:32:36 BRT 2009]; root of context hierarchy 11:32:36,184 INFO [XmlBeanDefinitionReader] Loading XML bean definitions from ServletContext resource [/WEB-INF/applicationContext.xml] 11:32:36,189 ERROR [ContextLoader] Context initialization failed org.springframework.beans.factory.BeanDefinitionStoreException: Unexpected exception parsing XML document from ServletContext resource [/WEB-INF/applicationContext.xml]; nested exception is java.lang.ClassCastException: 
org.apache.xerces.jaxp.DocumentBuilderFactoryImpl cannot be cast to javax.xml.parsers.DocumentBuilderFactory at org.springframework.beans.factory.xml.XmlBeanDefinitionReader.doLoadBeanDefinitions(XmlBeanDefinitionReader.java:420) at org.springframework.beans.factory.xml.XmlBeanDefinitionReader.loadBeanDefinitions(XmlBeanDefinitionReader.java:342) at org.springframework.beans.factory.xml.XmlBeanDefinitionReader.loadBeanDefinitions(XmlBeanDefinitionReader.java:310) at org.springframework.beans.factory.support.AbstractBeanDefinitionReader.loadBeanDefinitions(AbstractBeanDefinitionReader.java:143) at org.springframework.beans.factory.support.AbstractBeanDefinitionReader.loadBeanDefinitions(AbstractBeanDefinitionReader.java:178) at org.springframework.beans.factory.support.AbstractBeanDefinitionReader.loadBeanDefinitions(AbstractBeanDefinitionReader.java:149) at org.springframework.web.context.support.XmlWebApplicationContext.loadBeanDefinitions(XmlWebApplicationContext.java:124) at org.springframework.web.context.support.XmlWebApplicationContext.loadBeanDefinitions(XmlWebApplicationContext.java:92) at org.springframework.context.support.AbstractRefreshableApplicationContext.refreshBeanFactory(AbstractRefreshableApplicationContext.java:123) at org.springframework.context.support.AbstractApplicationContext.obtainFreshBeanFactory(AbstractApplicationContext.java:422) at org.springframework.context.support.AbstractApplicationContext.refresh(AbstractApplicationContext.java:352) at org.springframework.web.context.ContextLoader.createWebApplicationContext(ContextLoader.java:255) at org.springframework.web.context.ContextLoader.initWebApplicationContext(ContextLoader.java:199) at org.springframework.web.context.ContextLoaderListener.contextInitialized(ContextLoaderListener.java:45) at org.apache.catalina.core.StandardContext.listenerStart(StandardContext.java:3910) at org.apache.catalina.core.StandardContext.start(StandardContext.java:4393) at org.jboss.web.tomcat.service.deployers.TomcatDeployment.performDeployInternal(TomcatDeployment.java:310) at org.jboss.web.tomcat.service.deployers.TomcatDeployment.performDeploy(TomcatDeployment.java:142) at org.jboss.web.deployers.AbstractWarDeployment.start(AbstractWarDeployment.java:461) at org.jboss.web.deployers.WebModule.startModule(WebModule.java:118) at org.jboss.web.deployers.WebModule.start(WebModule.java:97) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25) at java.lang.reflect.Method.invoke(Method.java:597) at org.jboss.mx.interceptor.ReflectedDispatcher.invoke(ReflectedDispatcher.java:157) at org.jboss.mx.server.Invocation.dispatch(Invocation.java:96) at org.jboss.mx.server.Invocation.invoke(Invocation.java:88) at org.jboss.mx.server.AbstractMBeanInvoker.invoke(AbstractMBeanInvoker.java:264) at org.jboss.mx.server.MBeanServerImpl.invoke(MBeanServerImpl.java:668) at org.jboss.system.microcontainer.ServiceProxy.invoke(ServiceProxy.java:206) at $Proxy38.start(Unknown Source) at org.jboss.system.microcontainer.StartStopLifecycleAction.installAction(StartStopLifecycleAction.java:42) at org.jboss.system.microcontainer.StartStopLifecycleAction.installAction(StartStopLifecycleAction.java:37) at org.jboss.dependency.plugins.action.SimpleControllerContextAction.simpleInstallAction(SimpleControllerContextAction.java:62) at 
org.jboss.dependency.plugins.action.AccessControllerContextAction.install(AccessControllerContextAction.java:71) at org.jboss.dependency.plugins.AbstractControllerContextActions.install(AbstractControllerContextActions.java:51) at org.jboss.dependency.plugins.AbstractControllerContext.install(AbstractControllerContext.java:348) at org.jboss.system.microcontainer.ServiceControllerContext.install(ServiceControllerContext.java:286) at org.jboss.dependency.plugins.AbstractController.install(AbstractController.java:1631) at org.jboss.dependency.plugins.AbstractController.incrementState(AbstractController.java:934) at org.jboss.dependency.plugins.AbstractController.resolveContexts(AbstractController.java:1082) at org.jboss.dependency.plugins.AbstractController.resolveContexts(AbstractController.java:984) at org.jboss.dependency.plugins.AbstractController.change(AbstractController.java:822) at org.jboss.dependency.plugins.AbstractController.change(AbstractController.java:553) at org.jboss.system.ServiceController.doChange(ServiceController.java:688) at org.jboss.system.ServiceController.start(ServiceController.java:460) at org.jboss.system.deployers.ServiceDeployer.start(ServiceDeployer.java:163) at org.jboss.system.deployers.ServiceDeployer.deploy(ServiceDeployer.java:99) at org.jboss.system.deployers.ServiceDeployer.deploy(ServiceDeployer.java:46) at org.jboss.deployers.spi.deployer.helpers.AbstractSimpleRealDeployer.internalDeploy(AbstractSimpleRealDeployer.java:62) at org.jboss.deployers.spi.deployer.helpers.AbstractRealDeployer.deploy(AbstractRealDeployer.java:50) at org.jboss.deployers.plugins.deployers.DeployerWrapper.deploy(DeployerWrapper.java:171) at org.jboss.deployers.plugins.deployers.DeployersImpl.doDeploy(DeployersImpl.java:1439) at org.jboss.deployers.plugins.deployers.DeployersImpl.doInstallParentFirst(DeployersImpl.java:1157) at org.jboss.deployers.plugins.deployers.DeployersImpl.doInstallParentFirst(DeployersImpl.java:1178) at org.jboss.deployers.plugins.deployers.DeployersImpl.install(DeployersImpl.java:1098) at org.jboss.dependency.plugins.AbstractControllerContext.install(AbstractControllerContext.java:348) at org.jboss.dependency.plugins.AbstractController.install(AbstractController.java:1631) at org.jboss.dependency.plugins.AbstractController.incrementState(AbstractController.java:934) at org.jboss.dependency.plugins.AbstractController.resolveContexts(AbstractController.java:1082) at org.jboss.dependency.plugins.AbstractController.resolveContexts(AbstractController.java:984) at org.jboss.dependency.plugins.AbstractController.change(AbstractController.java:822) at org.jboss.dependency.plugins.AbstractController.change(AbstractController.java:553) at org.jboss.deployers.plugins.deployers.DeployersImpl.process(DeployersImpl.java:781) at org.jboss.deployers.plugins.main.MainDeployerImpl.process(MainDeployerImpl.java:702) at org.jboss.system.server.profileservice.repository.MainDeployerAdapter.process(MainDeployerAdapter.java:117) at org.jboss.system.server.profileservice.repository.ProfileDeployAction.install(ProfileDeployAction.java:70) at org.jboss.system.server.profileservice.repository.AbstractProfileAction.install(AbstractProfileAction.java:53) at org.jboss.system.server.profileservice.repository.AbstractProfileService.install(AbstractProfileService.java:361) at org.jboss.dependency.plugins.AbstractControllerContext.install(AbstractControllerContext.java:348) at org.jboss.dependency.plugins.AbstractController.install(AbstractController.java:1631) at 
org.jboss.dependency.plugins.AbstractController.incrementState(AbstractController.java:934) at org.jboss.dependency.plugins.AbstractController.resolveContexts(AbstractController.java:1082) at org.jboss.dependency.plugins.AbstractController.resolveContexts(AbstractController.java:984) at org.jboss.dependency.plugins.AbstractController.change(AbstractController.java:822) at org.jboss.dependency.plugins.AbstractController.change(AbstractController.java:553) at org.jboss.system.server.profileservice.repository.AbstractProfileService.activateProfile(AbstractProfileService.java:306) at org.jboss.system.server.profileservice.ProfileServiceBootstrap.start(ProfileServiceBootstrap.java:271) at org.jboss.bootstrap.AbstractServerImpl.start(AbstractServerImpl.java:461) at org.jboss.Main.boot(Main.java:221) at org.jboss.Main$1.run(Main.java:556) at java.lang.Thread.run(Thread.java:619) Caused by: java.lang.ClassCastException: org.apache.xerces.jaxp.DocumentBuilderFactoryImpl cannot be cast to javax.xml.parsers.DocumentBuilderFactory at javax.xml.parsers.DocumentBuilderFactory.newInstance(Unknown Source) at org.springframework.beans.factory.xml.DefaultDocumentLoader.createDocumentBuilderFactory(DefaultDocumentLoader.java:89) at org.springframework.beans.factory.xml.DefaultDocumentLoader.loadDocument(DefaultDocumentLoader.java:70) at org.springframework.beans.factory.xml.XmlBeanDefinitionReader.doLoadBeanDefinitions(XmlBeanDefinitionReader.java:396) Does someone know what can I do to deploy this? Thanks
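For reference, a minimal sketch of the kind of META-INF/persistence.xml entry the first error ("You have not defined a jta-data-source for a JTA enabled persistence context named: ehr-punit") is asking for, assuming the application's datasource is deployed in JBoss under a JNDI name such as java:/ehrDS (that name is a placeholder, not taken from the question):

  <persistence xmlns="http://java.sun.com/xml/ns/persistence" version="1.0">
    <persistence-unit name="ehr-punit" transaction-type="JTA">
      <!-- JNDI name of a JBoss-managed datasource; java:/ehrDS is a placeholder -->
      <jta-data-source>java:/ehrDS</jta-data-source>
    </persistence-unit>
  </persistence>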

    Read the article

  • problems mounting an external IDE drive via USB in ubuntu

    - by Roy Rico
    I am having a problem connecting a specific IDE drive to my Linux box. It's an old drive which I just want to get about 3 GB of files off of. INFO I am trying to connect a 200GB IDE Maxtor Drive, internally and externally... externally: I am using a self-powered USB IDE external drive enclosure which I have used to connect various drives, under Ubuntu and Windows, in the past. The other posts stated it could be a problem; I think I may have formatted the /dev/sdc partition instead of the /dev/sdc1 partition when I originally formatted the drive. internally: I only have one machine left that has an internal IDE interface, and it's got XP on it. I plugged this drive internally into this machine with Windows XP and used the ext2/ext3 drivers to mount this drive, but some files have question marks (?) in the file names, which is messing up my copy process in Windows. I can't delete the files under Windows. Ubuntu Linux will not install on my only remaining machine that has an IDE controller. I have tried the suggestions in the questions below: http://superuser.com/questions/88182/mount-an-external-drive-in-ubuntu http://superuser.com/questions/23210/ubuntu-fails-to-mount-usb-drive It looks like I can see the drive in /proc/partitions: $ cat /proc/partitions major minor #blocks name 8 0 78125000 sda 8 1 74894998 sda1 8 2 1 sda2 8 5 3229033 sda5 8 16 199148544 sdb <-- could be my drive? but it's not listed under fdisk -l: $ fdisk -l Disk /dev/sda: 80.0 GB, 80000000000 bytes 255 heads, 63 sectors/track, 9726 cylinders Units = cylinders of 16065 * 512 = 8225280 bytes Disk identifier: 0xd0f4738c Device Boot Start End Blocks Id System /dev/sda1 * 1 9324 74894998+ 83 Linux /dev/sda2 9325 9726 3229065 5 Extended /dev/sda5 9325 9726 3229033+ 82 Linux swap / Solaris And here is my log of /var/log/messages, with a bunch of weird output; can someone let me know what that weird output is? Mar 3 19:49:40 mala kernel: [687455.112029] usb 1-7: new high speed USB device using ehci_hcd and address 3 Mar 3 19:49:41 mala kernel: [687455.248576] usb 1-7: configuration #1 chosen from 1 choice Mar 3 19:49:41 mala kernel: [687455.267450] Initializing USB Mass Storage driver... Mar 3 19:49:41 mala kernel: [687455.269180] scsi4 : SCSI emulation for USB Mass Storage devices Mar 3 19:49:41 mala kernel: [687455.269410] usbcore: registered new interface driver usb-storage Mar 3 19:49:41 mala kernel: [687455.269416] USB Mass Storage support registered.
Mar 3 19:49:46 mala kernel: [687460.270917] scsi 4:0:0:0: Direct-Access Maxtor 6 Y200P0 YAR4 PQ: 0 ANSI: 2 Mar 3 19:49:46 mala kernel: [687460.271485] sd 4:0:0:0: Attached scsi generic sg2 type 0 Mar 3 19:49:46 mala kernel: [687460.278858] sd 4:0:0:0: [sdb] 398297088 512-byte logical blocks: (203 GB/189 GiB) Mar 3 19:49:46 mala kernel: [687460.280866] sd 4:0:0:0: [sdb] Write Protect is off Mar 3 19:50:16 mala kernel: [687460.283784] sdb: Mar 3 19:50:16 mala kernel: [687491.112020] usb 1-7: reset high speed USB device using ehci_hcd and address 3 Mar 3 19:50:47 mala kernel: [687522.120030] usb 1-7: reset high speed USB device using ehci_hcd and address 3 Mar 3 19:51:18 mala kernel: [687553.112034] usb 1-7: reset high speed USB device using ehci_hcd and address 3 Mar 3 19:51:49 mala kernel: [687584.116025] usb 1-7: reset high speed USB device using ehci_hcd and address 3 Mar 3 19:52:02 mala kernel: [687596.170632] type=1505 audit(1267671122.035:31): operation="profile_replace" pid=8426 name=/usr/lib/cups/backend/cups-pdf Mar 3 19:52:02 mala kernel: [687596.171551] type=1505 audit(1267671122.035:32): operation="profile_replace" pid=8426 name=/usr/sbin/cupsd Mar 3 19:52:06 mala kernel: [687600.908056] async/0 D c08145c0 0 7655 2 0x00000000 Mar 3 19:52:06 mala kernel: [687600.908062] e5601d38 00000046 e5774000 c08145c0 e4c2a848 c08145c0 d203973a 0002713d Mar 3 19:52:06 mala kernel: [687600.908072] c08145c0 c08145c0 e4c2a848 c08145c0 00000000 0002713d c08145c0 f0a98c00 Mar 3 19:52:06 mala kernel: [687600.908079] e4c2a5b0 c20125c0 00000002 e5601d80 e5601d44 c056f3be e5601d78 e5601d4c Mar 3 19:52:06 mala kernel: [687600.908087] Call Trace: Mar 3 19:52:06 mala kernel: [687600.908099] [<c056f3be>] io_schedule+0x1e/0x30 Mar 3 19:52:06 mala kernel: [687600.908107] [<c01b2cf5>] sync_page+0x35/0x40 Mar 3 19:52:06 mala kernel: [687600.908111] [<c056f8f7>] __wait_on_bit_lock+0x47/0x90 Mar 3 19:52:06 mala kernel: [687600.908115] [<c01b2cc0>] ? sync_page+0x0/0x40 Mar 3 19:52:06 mala kernel: [687600.908121] [<c020f390>] ? blkdev_readpage+0x0/0x20 Mar 3 19:52:06 mala kernel: [687600.908125] [<c01b2ca9>] __lock_page+0x79/0x80 Mar 3 19:52:06 mala kernel: [687600.908130] [<c015c130>] ? wake_bit_function+0x0/0x50 Mar 3 19:52:06 mala kernel: [687600.908135] [<c01b459f>] read_cache_page_async+0xbf/0xd0 Mar 3 19:52:06 mala kernel: [687600.908139] [<c01b45c2>] read_cache_page+0x12/0x60 Mar 3 19:52:06 mala kernel: [687600.908144] [<c0232dca>] read_dev_sector+0x3a/0x80 Mar 3 19:52:06 mala kernel: [687600.908148] [<c0233d3e>] adfspart_check_ICS+0x1e/0x160 Mar 3 19:52:06 mala kernel: [687600.908152] [<c023339f>] ? disk_name+0xaf/0xc0 Mar 3 19:52:06 mala kernel: [687600.908157] [<c0233d20>] ? adfspart_check_ICS+0x0/0x160 Mar 3 19:52:06 mala kernel: [687600.908161] [<c02334de>] check_partition+0x10e/0x180 Mar 3 19:52:06 mala kernel: [687600.908165] [<c02335f6>] rescan_partitions+0xa6/0x330 Mar 3 19:52:06 mala kernel: [687600.908171] [<c0312472>] ? kobject_get+0x12/0x20 Mar 3 19:52:06 mala kernel: [687600.908175] [<c0312472>] ? kobject_get+0x12/0x20 Mar 3 19:52:06 mala kernel: [687600.908180] [<c039fc43>] ? get_device+0x13/0x20 Mar 3 19:52:06 mala kernel: [687600.908185] [<c03c263f>] ? sd_open+0x5f/0x1b0 Mar 3 19:52:06 mala kernel: [687600.908189] [<c020fda0>] __blkdev_get+0x140/0x310 Mar 3 19:52:06 mala kernel: [687600.908194] [<c020f0ac>] ? 
bdget+0xec/0x100 Mar 3 19:52:06 mala kernel: [687600.908198] [<c020ff7a>] blkdev_get+0xa/0x10 Mar 3 19:52:06 mala kernel: [687600.908202] [<c0232f30>] register_disk+0x120/0x140 Mar 3 19:52:06 mala kernel: [687600.908207] [<c0308b4d>] ? blk_register_region+0x2d/0x40 Mar 3 19:52:06 mala kernel: [687600.908211] [<c03084f0>] ? exact_match+0x0/0x10 Mar 3 19:52:06 mala kernel: [687600.908216] [<c0308cf0>] add_disk+0x80/0x140 Mar 3 19:52:06 mala kernel: [687600.908221] [<c03084f0>] ? exact_match+0x0/0x10 Mar 3 19:52:06 mala kernel: [687600.908225] [<c0308860>] ? exact_lock+0x0/0x20 Mar 3 19:52:06 mala kernel: [687600.908230] [<c03c53df>] sd_probe_async+0xff/0x1c0
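Since the suspicion above is that the filesystem was created on the whole disk (/dev/sdc) rather than on a partition, one thing worth trying is to probe and mount the raw device directly; this is only a rough sketch, and the device name /dev/sdb (taken from the /proc/partitions output) and the ext3 filesystem type are assumptions:

  sudo file -s /dev/sdb                      # report whatever filesystem signature sits on the raw device
  sudo fdisk -l /dev/sdb                     # ask fdisk about this disk explicitly
  sudo mkdir -p /mnt/olddrive
  sudo mount -t ext3 /dev/sdb /mnt/olddrive  # mount the whole device, with no partition number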

    Read the article


  • Our embedded linux system won't recognize a USB Device if it is plugged in before powerup. Suggestions?

    - by Blaine
    We are developing on a small embedded device. This device uses a Gumstix Overo board running OpenEmbedded Linux. We have our development almost completely done, and have run into the strangest of bugs that we can't figure out. We have a USB device (spectrophotometer) that has a USB 2.0 connection and an external power supply for the light source. Typical behavior is that you plug in the power supply, then the USB connection to a host. When the USB connection is detected by the device, the device boots up and enables the light source and fan. The device is then able to be used by the host system. The problem is that if the device is plugged into the Gumstix before we turn on the Gumstix, the USB device apparently is not probed by the system (and hence does not turn on). Under a normal situation, when the connection is initialized by plugging in the USB cable, the spectro turns itself on and becomes available to the system (this can be seen via "lsusb" typically). Neither of these things is happening. There is no device detected via "lsusb" and no dmesg errors of any kind that we can see. It is as if the device is not plugged in. The device does show up and work fine if we unplug the USB cable and plug it back in once the system is booted up. It turns on and shows up on the USB bus, and we can access it with our driver. On any other desktop or laptop, it does not matter if the host system is on or off when we plug in the spectrometer. This behavior is what I would consider to be "normal" - that the USB system is probed and initialized at boot time, and the USB devices come online. In other words, our system is fully functional as long as we plug in the USB device after the system is booted up. Unfortunately this isn't possible in our final product - everything comes on at once. Additional info: 1) We have tried a flash drive attached to the system when the system is turned off. Booting up the system brings the flash drive online, as expected. 2) There are no messages regarding the spectro or USB device (using dmesg). "lsusb" only lists the USB hubs / controllers. It is literally as if the device is not present and not plugged in. 3) We have tried a brand new image from Gumstix and an older image from last year. Both images have this problem. This problem exists on all 3 Gumstix devices we use. Does anyone have any suggestions? From what I can tell it isn't really possible to do a complete "reboot" of the USB system that is a complete emulation of "unplugging" and "replugging" a USB device. I feel like what is happening is that there is no initial probe on the USB bus that would trigger the USB handshaking, but this is somehow specific to the spectro. This seems to be a kernel issue, or at least an issue in how the kernel is initializing the USB subsystem. I'm not really sure, though. I have tried the Gumstix mailing list, but there doesn't seem to be anyone who has seen this issue before. Any advice or suggestions on where to start looking would be fantastic. Thank you! Blaine Output etc.:
$ uname -a Linux overo 2.6.33 #1 Tue Apr 27 08:35:38 PDT 2010 armv7l GNU/Linux When the system is up and running and spectro is plugged in (working as intended), this is lsusb: Bus 001 Device 116: ID 2457:1022 Device Descriptor: bLength 18 bDescriptorType 1 bcdUSB 2.00 bDeviceClass 0 (Defined at Interface level) bDeviceSubClass 0 bDeviceProtocol 0 bMaxPacketSize0 64 idVendor 0x2457 idProduct 0x1022 bcdDevice 0.02 iManufacturer 1 USB4000 1.01.11 iProduct 2 Ocean Optics USB4000 iSerial 0 bNumConfigurations 1 Configuration Descriptor: bLength 9 bDescriptorType 2 wTotalLength 46 bNumInterfaces 1 bConfigurationValue 1 iConfiguration 0 bmAttributes 0x80 (Bus Powered) MaxPower 400mA Interface Descriptor: bLength 9 bDescriptorType 4 bInterfaceNumber 0 bAlternateSetting 0 bNumEndpoints 4 bInterfaceClass 255 Vendor Specific Class bInterfaceSubClass 0 bInterfaceProtocol 0 iInterface 0 Endpoint Descriptor: bLength 7 bDescriptorType 5 bEndpointAddress 0x01 EP 1 OUT bmAttributes 2 Transfer Type Bulk Synch Type None Usage Type Data wMaxPacketSize 0x0200 1x 512 bytes bInterval 0 Endpoint Descriptor: bLength 7 bDescriptorType 5 bEndpointAddress 0x82 EP 2 IN bmAttributes 2 Transfer Type Bulk Synch Type None Usage Type Data wMaxPacketSize 0x0200 1x 512 bytes bInterval 0 Endpoint Descriptor: bLength 7 bDescriptorType 5 bEndpointAddress 0x86 EP 6 IN bmAttributes 2 Transfer Type Bulk Synch Type None Usage Type Data wMaxPacketSize 0x0200 1x 512 bytes bInterval 0 Endpoint Descriptor: bLength 7 bDescriptorType 5 bEndpointAddress 0x81 EP 1 IN bmAttributes 2 Transfer Type Bulk Synch Type None Usage Type Data wMaxPacketSize 0x0200 1x 512 bytes bInterval 0 Device Qualifier (for other device speed): bLength 10 bDescriptorType 6 bcdUSB 2.00 bDeviceClass 0 (Defined at Interface level) bDeviceSubClass 0 bDeviceProtocol 0 bMaxPacketSize0 64 bNumConfigurations 1 Device Status: 0x0000 (Bus Powered) dmesg output: usb usb1: usb auto-resume hub 1-0:1.0: hub_resume usb usb2: usb auto-resume ehci-omap ehci-omap.0: resume root hub hub 1-0:1.0: state 7 ports 1 chg 0000 evt 0000 hub 2-0:1.0: hub_resume hub 2-0:1.0: state 7 ports 3 chg 0000 evt 0000 hub 1-0:1.0: hub_suspend usb usb1: bus auto-suspend hub 2-0:1.0: hub_suspend usb usb2: bus auto-suspend ehci-omap ehci-omap.0: suspend root hub usb usb2: usb resume ehci-omap ehci-omap.0: resume root hub hub 2-0:1.0: hub_resume ehci-omap ehci-omap.0: GetStatus port 2 status 001803 POWER sig=j CSC CONNECT hub 2-0:1.0: port 2: status 0501 change 0001 hub 2-0:1.0: state 7 ports 3 chg 0004 evt 0000 hub 2-0:1.0: port 2, status 0501, change 0000, 480 Mb/s ehci-omap ehci-omap.0: port 2 high speed ehci-omap ehci-omap.0: GetStatus port 2 status 001005 POWER sig=se0 PE CONNECT usb 2-2: new high speed USB device using ehci-omap and address 2 ehci-omap ehci-omap.0: port 2 high speed ehci-omap ehci-omap.0: GetStatus port 2 status 001005 POWER sig=se0 PE CONNECT usb 2-2: default language 0x0409 usb 2-2: udev 2, busnum 2, minor = 129 usb 2-2: New USB device found, idVendor=2457, idProduct=1022 usb 2-2: New USB device strings: Mfr=1, Product=2, SerialNumber=0 usb 2-2: Product: Ocean Optics USB4000 usb 2-2: Manufacturer: USB4000 1.01.11 usb 2-2: uevent usb 2-2: usb_probe_device usb 2-2: configuration #1 chosen from 1 choice usb 2-2: uevent usb 2-2: adding 2-2:1.0 (config #1, interface 0) usb 2-2:1.0: uevent drivers/usb/core/inode.c: creating file '002' dmesg has nothing to say, and lusb simply lists nothing else but the two default usb controllers / hubs if we plug the device in before the system is 
turned on.
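One experiment that may be worth scripting at boot is forcing the host controller to re-enumerate the bus once userspace is up, which roughly emulates an unplug/replug; this is only a sketch, not a confirmed fix, and the sysfs paths are assumptions based on the ehci-omap.0 device name visible in the dmesg output above:

  # run from an init script after boot; requires root
  echo ehci-omap.0 > /sys/bus/platform/drivers/ehci-omap/unbind   # detach the EHCI controller
  sleep 2
  echo ehci-omap.0 > /sys/bus/platform/drivers/ehci-omap/bind     # reattach it, triggering a fresh bus enumeration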

    Read the article

  • e2fsck extremely slow, although enough memory exists

    - by kaefert
    I've got this external USB-Disk: kaefert@blechmobil:~$ lsusb -s 2:3 Bus 002 Device 003: ID 0bc2:3320 Seagate RSS LLC As can be seen in this dmesg output, there is some problem that prevents that disk from beeing mounted: kaefert@blechmobil:~$ dmesg ... [ 113.084079] usb 2-1: new high-speed USB device number 3 using ehci_hcd [ 113.217783] usb 2-1: New USB device found, idVendor=0bc2, idProduct=3320 [ 113.217787] usb 2-1: New USB device strings: Mfr=2, Product=3, SerialNumber=1 [ 113.217790] usb 2-1: Product: Expansion Desk [ 113.217792] usb 2-1: Manufacturer: Seagate [ 113.217794] usb 2-1: SerialNumber: NA4J4N6K [ 113.435404] usbcore: registered new interface driver uas [ 113.455315] Initializing USB Mass Storage driver... [ 113.468051] scsi5 : usb-storage 2-1:1.0 [ 113.468180] usbcore: registered new interface driver usb-storage [ 113.468182] USB Mass Storage support registered. [ 114.473105] scsi 5:0:0:0: Direct-Access Seagate Expansion Desk 070B PQ: 0 ANSI: 6 [ 114.474342] sd 5:0:0:0: [sdb] 732566645 4096-byte logical blocks: (3.00 TB/2.72 TiB) [ 114.475089] sd 5:0:0:0: [sdb] Write Protect is off [ 114.475092] sd 5:0:0:0: [sdb] Mode Sense: 43 00 00 00 [ 114.475959] sd 5:0:0:0: [sdb] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA [ 114.477093] sd 5:0:0:0: [sdb] 732566645 4096-byte logical blocks: (3.00 TB/2.72 TiB) [ 114.501649] sdb: sdb1 [ 114.502717] sd 5:0:0:0: [sdb] 732566645 4096-byte logical blocks: (3.00 TB/2.72 TiB) [ 114.504354] sd 5:0:0:0: [sdb] Attached SCSI disk [ 116.804408] EXT4-fs (sdb1): ext4_check_descriptors: Checksum for group 3976 failed (47397!=61519) [ 116.804413] EXT4-fs (sdb1): group descriptors corrupted! ... So I went and fired up my favorite partition manager - gparted, and told it to verify and repair the partition sdb1. 
This made gparted call e2fsck (version 1.42.4 (12-Jun-2012)) e2fsck -f -y -v /dev/sdb1 Although gparted called e2fsck with the "-v" option, sadly it doesn't show me the output of my e2fsck process (bugreport https://bugzilla.gnome.org/show_bug.cgi?id=467925 ) I started this whole thing on Sunday (2012-11-04_2200) evening, so about 48 hours ago, this is what htop says about it now (2012-11-06-1900): PID USER PRI NI VIRT RES SHR S CPU% MEM% TIME+ Command 3704 root 39 19 1560M 1166M 768 R 98.0 19.5 42h56:43 e2fsck -f -y -v /dev/sdb1 Now I found a few posts on the internet that discuss e2fsck running slow, for example: http://gparted-forum.surf4.info/viewtopic.php?id=13613 where they write that its a good idea to see if the disk is just that slow because maybe its damaged, and I think these outputs tell me that this is not the case in my case: kaefert@blechmobil:~$ sudo hdparm -tT /dev/sdb /dev/sdb: Timing cached reads: 3562 MB in 2.00 seconds = 1783.29 MB/sec Timing buffered disk reads: 82 MB in 3.01 seconds = 27.26 MB/sec kaefert@blechmobil:~$ sudo hdparm /dev/sdb /dev/sdb: multcount = 0 (off) readonly = 0 (off) readahead = 256 (on) geometry = 364801/255/63, sectors = 5860533160, start = 0 However, although I can read quickly from that disk, this disk speed doesn't seem to be used by e2fsck, considering tools like gkrellm or iotop or this: kaefert@blechmobil:~$ iostat -x Linux 3.2.0-2-amd64 (blechmobil) 2012-11-06 _x86_64_ (2 CPU) avg-cpu: %user %nice %system %iowait %steal %idle 14,24 47,81 14,63 0,95 0,00 22,37 Device: rrqm/s wrqm/s r/s w/s rkB/s wkB/s avgrq-sz avgqu-sz await r_await w_await svctm %util sda 0,59 8,29 2,42 5,14 43,17 160,17 53,75 0,30 39,80 8,72 54,42 3,95 2,99 sdb 137,54 5,48 9,23 0,20 587,07 22,73 129,35 0,07 7,70 7,51 16,18 2,17 2,04 Now I researched a little bit on how to find out what e2fsck is doing with all that processor time, and I found the tool strace, which gives me this: kaefert@blechmobil:~$ sudo strace -p3704 lseek(4, 41026998272, SEEK_SET) = 41026998272 write(4, "\212\354K[_\361\3nl\212\245\352\255jR\303\354\312Yv\334p\253r\217\265\3567\325\257\3766"..., 4096) = 4096 lseek(4, 48404766720, SEEK_SET) = 48404766720 read(4, "\7t\260\366\346\337\304\210\33\267j\35\377'\31f\372\252\ffU\317.y\211\360\36\240c\30`\34"..., 4096) = 4096 lseek(4, 41027002368, SEEK_SET) = 41027002368 write(4, "\232]7Ws\321\352\t\1@[+5\263\334\276{\343zZx\352\21\316`1\271[\202\350R`"..., 4096) = 4096 lseek(4, 48404770816, SEEK_SET) = 48404770816 read(4, "\17\362r\230\327\25\346//\210H\v\311\3237\323K\304\306\361a\223\311\324\272?\213\tq \370\24"..., 4096) = 4096 lseek(4, 41027006464, SEEK_SET) = 41027006464 write(4, "\367yy>x\216?=\324Z\305\351\376&\25\244\210\271\22\306}\276\237\370(\214\205G\262\360\257#"..., 4096) = 4096 lseek(4, 48404774912, SEEK_SET) = 48404774912 read(4, "\365\25\0\21|T\0\21}3t_\272\373\222k\r\177\303\1\201\261\221$\261B\232\3142\21U\316"..., 4096) = 4096 ^CProcess 3704 detached around 16 of these lines every second, so 4 read and 4 write operations every second, which I don't consider to be a lot.. And finally, my question: Will this process ever finish? If those numbers from fseek (48404774912) represent bytes, that would be something like 45 gigabytes, with this beeing a 3 terrabyte disk, which would give me 134 days to go, if the speed stays constant, and e2fsck scans the disk like this completly and only once. Do you have some advice for me? 
I have most of the data on that disk elsewhere, but I've put a lot of hours into sorting and merging it to this disk, so I would prefer to getting this disk up and running again, without formatting it anew. I don't think that the hardware is damaged since the disk is only a few months and since I can't see any I/O errors in the dmesg output. UPDATE: I just looked at the strace output again (2012-11-06_2300), now it looks like this: lseek(4, 1419860611072, SEEK_SET) = 1419860611072 read(4, "3#\f\2447\335\0\22A\355\374\276j\204'\207|\217V|\23\245[\7VP\251\242\276\207\317:"..., 4096) = 4096 lseek(4, 43018145792, SEEK_SET) = 43018145792 write(4, "]\206\231\342Y\204-2I\362\242\344\6R\205\361\324\177\265\317C\334V\324\260\334\275t=\10F."..., 4096) = 4096 lseek(4, 1419860615168, SEEK_SET) = 1419860615168 read(4, "\262\305\314Y\367\37x\326\245\226\226\320N\333$s\34\204\311\222\7\315\236\336\300TK\337\264\236\211n"..., 4096) = 4096 lseek(4, 43018149888, SEEK_SET) = 43018149888 write(4, "\271\224m\311\224\25!I\376\16;\377\0\223H\25Yd\201Y\342\r\203\271\24eG<\202{\373V"..., 4096) = 4096 lseek(4, 1419860619264, SEEK_SET) = 1419860619264 read(4, ";d\360\177\n\346\253\210\222|\250\352T\335M\33\260\320\261\7g\222P\344H?t\240\20\2548\310"..., 4096) = 4096 lseek(4, 43018153984, SEEK_SET) = 43018153984 write(4, "\360\252j\317\310\251G\227\335{\214`\341\267\31Y\202\360\v\374\307oq\3063\217Z\223\313\36D\211"..., 4096) = 4096 So the numbers in the lseek lines before the reads, like 1419860619264 are already a lot bigger, standing for 1.29 terabytes if those numbers are bytes, so it doesn't seem to be a linear progress on a big scale, maybe there are only some areas that need work, that have big gaps in between them. UPDATE2: Okey, big disappointment, the numbers are back to very small again (2012-11-07_0720) lseek(4, 52174548992, SEEK_SET) = 52174548992 read(4, "\374\312\22\\\325\215\213\23\0357U\222\246\370v^f(\312|f\212\362\343\375\373\342\4\204mU6"..., 4096) = 4096 lseek(4, 46603526144, SEEK_SET) = 46603526144 write(4, "\370\261\223\227\23?\4\4\217\264\320_Am\246CQ\313^\203U\253\274\204\277\2564n\227\177\267\343"..., 4096) = 4096 so either e2fsck goes over the data multiple times, or it just hops back and forth multiple times. Or my assumption that those numbers are bytes is wrong. UPDATE3: Since it's mentioned here http://forums.fedoraforum.org/showthread.php?t=282125&page=2 that you can testisk while e2fsck is running, i tried that, though not with a lot of success. When asking testdisk to display the data of my partition, this is what I get: TestDisk 6.13, Data Recovery Utility, November 2011 Christophe GRENIER <[email protected]> http://www.cgsecurity.org 1 P Linux 0 4 5 45600 40 8 732566272 Can't open filesystem. Filesystem seems damaged. And this is what strace currently gives me (2012-11-07_1030) lseek(4, 212460343296, SEEK_SET) = 212460343296 read(4, "\315Mb\265v\377Gn \24\f\205EHh\2349~\330\273\203\3375\206\10\r3=W\210\372\352"..., 4096) = 4096 lseek(4, 47347830784, SEEK_SET) = 47347830784 write(4, "]\204\223\300I\357\4\26\33+\243\312G\230\250\371*m2U\t_\215\265J \252\342Pm\360D"..., 4096) = 4096 (times are in CET)
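If the check ever has to be restarted, running it from a terminal instead of through gparted at least makes the progress visible; a small sketch using the same device as above (the -C 0 flag tells e2fsck to print a completion bar as it works):

  sudo umount /dev/sdb1                  # make sure nothing has the filesystem mounted
  sudo e2fsck -f -y -v -C 0 /dev/sdb1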

    Read the article

  • "Can't Connect to Server" from 2nd virtual host on VPS

    - by chaoskreator
    I'm using Debian 7 Wheezy and Apache 2.2.22, and I'm setting up Virtual Hosts for a number of websites on my VPS. I've successfully configured the VirtualHost directives for one of the sites, but the second one continually gives "Problem Loading Page" in Firefox. I've run configtest and it has verified all my syntax is correct, and I've checked all the permissions. Everything on the 2nd domain is pretty much copy/pasted from the first, so I'm not sure what the issue is, as there are no entries into /var/log/apache2/error.log other than where I have reloaded the configurations: /# cat /var/log/apache2/error.log [Thu May 29 01:19:00 2014] [notice] Graceful restart requested, doing restart [Thu May 29 01:19:00 2014] [info] Init: Seeding PRNG with 656 bytes of entropy [Thu May 29 01:19:00 2014] [info] Init: Generating temporary RSA private keys (512/1024 bits) [Thu May 29 01:19:00 2014] [info] Init: Generating temporary DH parameters (512/1024 bits) [Thu May 29 01:19:00 2014] [debug] ssl_scache_shmcb.c(253): shmcb_init allocated 512000 bytes of shared memory [Thu May 29 01:19:00 2014] [debug] ssl_scache_shmcb.c(272): for 511920 bytes (512000 including header), recommending 32 subcaches, 133 indexes each [Thu May 29 01:19:00 2014] [debug] ssl_scache_shmcb.c(306): shmcb_init_memory choices follow [Thu May 29 01:19:00 2014] [debug] ssl_scache_shmcb.c(308): subcache_num = 32 [Thu May 29 01:19:00 2014] [debug] ssl_scache_shmcb.c(310): subcache_size = 15992 [Thu May 29 01:19:00 2014] [debug] ssl_scache_shmcb.c(312): subcache_data_offset = 3208 [Thu May 29 01:19:00 2014] [debug] ssl_scache_shmcb.c(314): subcache_data_size = 12784 [Thu May 29 01:19:00 2014] [debug] ssl_scache_shmcb.c(316): index_num = 133 [Thu May 29 01:19:00 2014] [info] Shared memory session cache initialised [Thu May 29 01:19:00 2014] [info] Init: Initializing (virtual) servers for SSL [Thu May 29 01:19:00 2014] [info] mod_ssl/2.2.22 compiled against Server: Apache/2.2.22, Library: OpenSSL/1.0.1e [Thu May 29 01:19:00 2014] [notice] Apache/2.2.22 (Debian) PHP/5.4.4-14+deb7u9 mod_ssl/2.2.22 OpenSSL/1.0.1e mod_perl/2.0.7 Perl/v5.14.2 configured -- resuming normal operations [Thu May 29 01:19:00 2014] [info] Server built: Mar 4 2013 22:05:16 [Thu May 29 01:19:00 2014] [debug] prefork.c(1023): AcceptMutex: sysvsem (default: sysvsem) I've ensured to enable each vhost with a2ensite {sitename.conf} with no errors there, either. Below are the contents of the configuration files... /etc/apache2/apache2.conf # Global configuration # LockFile ${APACHE_LOCK_DIR}/accept.lock PidFile ${APACHE_PID_FILE} Timeout 300 KeepAlive On MaxKeepAliveRequests 100 KeepAliveTimeout 5 ## ## Server-Pool Size Regulation (MPM specific) ## # prefork MPM # StartServers: number of server processes to start # MinSpareServers: minimum number of server processes which are kept spare # MaxSpareServers: maximum number of server processes which are kept spare # MaxClients: maximum number of server processes allowed to start # MaxRequestsPerChild: maximum number of requests a server process serves <IfModule mpm_prefork_module> StartServers 5 MinSpareServers 5 MaxSpareServers 10 MaxClients 150 MaxRequestsPerChild 0 </IfModule> # worker MPM # StartServers: initial number of server processes to start # MinSpareThreads: minimum number of worker threads which are kept spare # MaxSpareThreads: maximum number of worker threads which are kept spare # ThreadLimit: ThreadsPerChild can be changed to this maximum value during a # graceful restart. 
ThreadLimit can only be changed by stopping # and starting Apache. # ThreadsPerChild: constant number of worker threads in each server process # MaxClients: maximum number of simultaneous client connections # MaxRequestsPerChild: maximum number of requests a server process serves <IfModule mpm_worker_module> StartServers 2 MinSpareThreads 25 MaxSpareThreads 75 ThreadLimit 64 ThreadsPerChild 25 MaxClients 150 MaxRequestsPerChild 0 </IfModule> # event MPM # StartServers: initial number of server processes to start # MinSpareThreads: minimum number of worker threads which are kept spare # MaxSpareThreads: maximum number of worker threads which are kept spare # ThreadsPerChild: constant number of worker threads in each server process # MaxClients: maximum number of simultaneous client connections # MaxRequestsPerChild: maximum number of requests a server process serves <IfModule mpm_event_module> StartServers 2 MinSpareThreads 25 MaxSpareThreads 75 ThreadLimit 64 ThreadsPerChild 25 MaxClients 150 MaxRequestsPerChild 0 </IfModule> # These need to be set in /etc/apache2/envvars User ${APACHE_RUN_USER} Group ${APACHE_RUN_GROUP} # # AccessFileName: The name of the file to look for in each directory # for additional configuration directives. See also the AllowOverride # directive. # AccessFileName .htaccess # # The following lines prevent .htaccess and .htpasswd files from being # viewed by Web clients. # <Files ~ "^\.ht"> Order allow,deny Deny from all Satisfy all </Files> DefaultType None HostnameLookups Off ErrorLog ${APACHE_LOG_DIR}/error.log LogLevel debug # Include module configuration: Include mods-enabled/*.load Include mods-enabled/*.conf # Include list of ports to listen on and which to use for name based vhosts Include ports.conf # # The following directives define some format nicknames for use with # a CustomLog directive (see below). # If you are behind a reverse proxy, you might want to change %h into %{X-Forwarded-For}i # # LogFormat "%v:%p %h %l %u %t \"%r\" %>s %O \"%{Referer}i\" \"%{User-Agent}i\"" vhost_combined LogFormat "%h %l %u %t \"%r\" %>s %O \"%{Referer}i\" \"%{User-Agent}i\"" combined LogFormat "%h %l %u %t \"%r\" %>s %O" common LogFormat "%{Referer}i -> %U" referer LogFormat "%{User-agent}i" agent <Directory "/var/www"> Order allow,deny Allow from all Require all granted </Directory> # Include generic snippets of statements Include conf.d/ # Include the virtual host configurations: Include sites-enabled/*.conf NameVirtualHost *:80 /etc/apache2/sites-available/site1.net.conf <VirtualHost *:80> ServerName site1.net ServerAlias site1.net *.site1.net DocumentRoot "/var/www/site1" ErrorLog "/var/www/site1/logs/error.log" CustomLog "/var/www/site1/logs/access.log" vhost_combined <Directory "/var/www/site1"> Options None AllowOverride All Order allow,deny Allow from all Satisfy Any </Directory> </VirtualHost> /etc/apache2/sites-available/site2.com.conf <VirtualHost *:80> ServerName site2.com ServerAlias site2.com *.site2.com DocumentRoot "/var/www/site2" ErrorLog "/var/www/site2/logs/error.log" CustomLog "/var/www/site2/logs/access.log" vhost_combined <Directory "/var/www/site2"> Options None AllowOverride All Order allow,deny Allow from all Satisfy Any </Directory> </VirtualHost> I've also tried setting NameVirtualHost like: Listen 80 NameVirtualHost 23.88.121.82:80 NameVirtualHost 127.0.0.1:80 and the VirtualHost Directives: <VirtualHost 23.88.121.82:80> ... </VirtualHost> for both sites, but that causes the first site to fail, as well. 
I'm wondering if I need to set up individual IPs for each site, possibly? I have 2 more IPv4 and 3 IPv6 addresses available, if that would make a difference. Also, in the grand scheme of things, I will need to enable SSL for the first site. I've been reading that I'll need to basically just mimic the directives for listening on port 80, only on port 443, and make sure mod_ssl is enabled? EDIT: I just ran apache2 -t to test the config files that way, and got the error: apache2: bad user name ${APACHE_RUN_USER}. However, apachectl configtest returns Syntax OK. There are no other mentions of errors with the mutex anywhere else, however. I was pretty sure if there was an error with the user apache was supposed to run under, the server wouldn't start at all... EDIT 2: Restarting apache fixed the bad user name error.
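Two quick checks that can help narrow this down, using the placeholder host name from the configuration above (site2.com): dump Apache's parsed vhost map, then request the second site directly from the server with an explicit Host header.

  apache2ctl -S                                                                     # shows which vhost answers each name/port and from which config file
  curl -s -o /dev/null -w "%{http_code}\n" -H "Host: site2.com" http://127.0.0.1/   # should return 200 if the name-based vhost resolves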

    Read the article

  • SimpleMembership, Membership Providers, Universal Providers and the new ASP.NET 4.5 Web Forms and ASP.NET MVC 4 templates

    - by Jon Galloway
    The ASP.NET MVC 4 Internet template adds some new, very useful features which are built on top of SimpleMembership. These changes add some great features, like a much simpler and extensible membership API and support for OAuth. However, the new account management features require SimpleMembership and won't work against existing ASP.NET Membership Providers. I'll start with a summary of top things you need to know, then dig into a lot more detail. Summary: SimpleMembership has been designed as a replacement for traditional the previous ASP.NET Role and Membership provider system SimpleMembership solves common problems people ran into with the Membership provider system and was designed for modern user / membership / storage needs SimpleMembership integrates with the previous membership system, but you can't use a MembershipProvider with SimpleMembership The new ASP.NET MVC 4 Internet application template AccountController requires SimpleMembership and is not compatible with previous MembershipProviders You can continue to use existing ASP.NET Role and Membership providers in ASP.NET 4.5 and ASP.NET MVC 4 - just not with the ASP.NET MVC 4 AccountController The existing ASP.NET Role and Membership provider system remains supported as is part of the ASP.NET core ASP.NET 4.5 Web Forms does not use SimpleMembership; it implements OAuth on top of ASP.NET Membership The ASP.NET Web Site Administration Tool (WSAT) is not compatible with SimpleMembership The following is the result of a few conversations with Erik Porter (PM for ASP.NET MVC) to make sure I had some the overall details straight, combined with a lot of time digging around in ILSpy and Visual Studio's assembly browsing tools. SimpleMembership: The future of membership for ASP.NET The ASP.NET Membership system was introduces with ASP.NET 2.0 back in 2005. It was designed to solve common site membership requirements at the time, which generally involved username / password based registration and profile storage in SQL Server. It was designed with a few extensibility mechanisms - notably a provider system (which allowed you override some specifics like backing storage) and the ability to store additional profile information (although the additional  profile information was packed into a single column which usually required access through the API). While it's sometimes frustrating to work with, it's held up for seven years - probably since it handles the main use case (username / password based membership in a SQL Server database) smoothly and can be adapted to most other needs (again, often frustrating, but it can work). The ASP.NET Web Pages and WebMatrix efforts allowed the team an opportunity to take a new look at a lot of things - e.g. the Razor syntax started with ASP.NET Web Pages, not ASP.NET MVC. The ASP.NET Web Pages team designed SimpleMembership to (wait for it) simplify the task of dealing with membership. As Matthew Osborn said in his post Using SimpleMembership With ASP.NET WebPages: With the introduction of ASP.NET WebPages and the WebMatrix stack our team has really be focusing on making things simpler for the developer. Based on a lot of customer feedback one of the areas that we wanted to improve was the built in security in ASP.NET. So with this release we took that time to create a new built in (and default for ASP.NET WebPages) security provider. I say provider because the new stuff is still built on the existing ASP.NET framework. So what do we call this new hotness that we have created? 
Well, none other than SimpleMembership. SimpleMembership is an umbrella term for both SimpleMembership and SimpleRoles. Part of simplifying membership involved fixing some common problems with ASP.NET Membership. Problems with ASP.NET Membership ASP.NET Membership was very obviously designed around a set of assumptions: Users and user information would most likely be stored in a full SQL Server database or in Active Directory User and profile information would be optimized around a set of common attributes (UserName, Password, IsApproved, CreationDate, Comment, Role membership...) and other user profile information would be accessed through a profile provider Some problems fall out of these assumptions. Requires Full SQL Server for default cases The default, and most fully featured providers ASP.NET Membership providers (SQL Membership Provider, SQL Role Provider, SQL Profile Provider) require full SQL Server. They depend on stored procedure support, and they rely on SQL Server cache dependencies, they depend on agents for clean up and maintenance. So the main SQL Server based providers don't work well on SQL Server CE, won't work out of the box on SQL Azure, etc. Note: Cory Fowler recently let me know about these Updated ASP.net scripts for use with Microsoft SQL Azure which do support membership, personalization, profile, and roles. But the fact that we need a support page with a set of separate SQL scripts underscores the underlying problem. Aha, you say! Jon's forgetting the Universal Providers, a.k.a. System.Web.Providers! Hold on a bit, we'll get to those... Custom Membership Providers have to work with a SQL-Server-centric API If you want to work with another database or other membership storage system, you need to to inherit from the provider base classes and override a bunch of methods which are tightly focused on storing a MembershipUser in a relational database. It can be done (and you can often find pretty good ones that have already been written), but it's a good amount of work and often leaves you with ugly code that has a bunch of System.NotImplementedException fun since there are a lot of methods that just don't apply. Designed around a specific view of users, roles and profiles The existing providers are focused on traditional membership - a user has a username and a password, some specific roles on the site (e.g. administrator, premium user), and may have some additional "nice to have" optional information that can be accessed via an API in your application. This doesn't fit well with some modern usage patterns: In OAuth and OpenID, the user doesn't have a password Often these kinds of scenarios map better to user claims or rights instead of monolithic user roles For many sites, profile or other non-traditional information is very important and needs to come from somewhere other than an API call that maps to a database blob What would work a lot better here is a system in which you were able to define your users, rights, and other attributes however you wanted and the membership system worked with your model - not the other way around. Requires specific schema, overflow in blob columns I've already mentioned this a few times, but it bears calling out separately - ASP.NET Membership focuses on SQL Server storage, and that storage is based on a very specific database schema. SimpleMembership as a better membership system As you might have guessed, SimpleMembership was designed to address the above problems. 
Works with your Schema As Matthew Osborn explains in his Using SimpleMembership With ASP.NET WebPages post, SimpleMembership is designed to integrate with your database schema: All SimpleMembership requires is that there are two columns on your users table so that we can hook up to it – an “ID” column and a “username” column. The important part here is that they can be named whatever you want. For instance, username doesn't have to be an alias; it could be an email column - you just have to tell SimpleMembership to treat that as the “username” used to log in. Matthew's example shows using a very simple user table named Users (it could be named anything) with a UserID and Username column, then a bunch of other columns he wanted in his app. Then we point SimpleMembership at that table with a one-liner: WebSecurity.InitializeDatabaseFile("SecurityDemo.sdf", "Users", "UserID", "Username", true); No other tables are needed, the table can be named anything we want, and can have pretty much any schema we want as long as we've got an ID and something that we can map to a username. Broaden database support to the whole SQL Server family While SimpleMembership is not database agnostic, it works across the SQL Server family. It continues to support full SQL Server, but it also works with SQL Azure, SQL Server CE, SQL Server Express, and LocalDB. Everything's implemented as SQL calls rather than requiring stored procedures, views, agents, and change notifications. Note that SimpleMembership still requires some flavor of SQL Server - it won't work with MySQL, NoSQL databases, etc. You can take a look at the code in WebMatrix.WebData.dll using a tool like ILSpy if you'd like to see why - there are places where SQL Server specific SQL statements are being executed, especially when creating and initializing tables. It seems like you might be able to work with another database if you created the tables separately, but I haven't tried it and it's not supported at this point. Note: I'm thinking it would be possible for SimpleMembership (or something compatible) to run on Entity Framework so it would work with any database EF supports. That seems useful to me - thoughts? Note: SimpleMembership has the same database support - anything in the SQL Server family - that Universal Providers brings to the ASP.NET Membership system. Easy to use with Entity Framework Code First The problem with ASP.NET Membership's system for storing additional account information is that it's the gatekeeper. That means you're stuck with its schema and accessing profile information through its API. SimpleMembership flips that around by allowing you to use any table as a user store. That means you're in control of the user profile information, and you can access it however you'd like - it's just data. Let's look at a practical example based on the AccountModel.cs class in an ASP.NET MVC 4 Internet project. Here I'm adding a Birthday property to the UserProfile class. [Table("UserProfile")] public class UserProfile { [Key] [DatabaseGeneratedAttribute(DatabaseGeneratedOption.Identity)] public int UserId { get; set; } public string UserName { get; set; } public DateTime Birthday { get; set; } } Now if I want to access that information, I can just grab the account by username and read the value.
var context = new UsersContext(); var username = User.Identity.Name; var user = context.UserProfiles.SingleOrDefault(u => u.UserName == username); var birthday = user.Birthday; So instead of thinking of SimpleMembership as a big membership API, think of it as something that handles membership based on your user database. In SimpleMembership, everything's keyed off a user row in a table you define rather than a bunch of entries in membership tables that were out of your control. How SimpleMembership integrates with ASP.NET Membership Okay, enough sales pitch (and hopefully background) on why things have changed. How does this affect you? Let's start with a diagram to show the relationship (note: I've simplified by removing a few classes to show the important relationships): So SimpleMembershipProvider is an implementaiton of an ExtendedMembershipProvider, which inherits from MembershipProvider and adds some other account / OAuth related things. Here's what ExtendedMembershipProvider adds to MembershipProvider: The important thing to take away here is that a SimpleMembershipProvider is a MembershipProvider, but a MembershipProvider is not a SimpleMembershipProvider. This distinction is important in practice: you cannot use an existing MembershipProvider (including the Universal Providers found in System.Web.Providers) with an API that requires a SimpleMembershipProvider, including any of the calls in WebMatrix.WebData.WebSecurity or Microsoft.Web.WebPages.OAuth.OAuthWebSecurity. However, that's as far as it goes. Membership Providers still work if you're accessing them through the standard Membership API, and all of the core stuff  - including the AuthorizeAttribute, role enforcement, etc. - will work just fine and without any change. Let's look at how that affects you in terms of the new templates. Membership in the ASP.NET MVC 4 project templates ASP.NET MVC 4 offers six Project Templates: Empty - Really empty, just the assemblies, folder structure and a tiny bit of basic configuration. Basic - Like Empty, but with a bit of UI preconfigured (css / images / bundling). Internet - This has both a Home and Account controller and associated views. The Account Controller supports registration and login via either local accounts and via OAuth / OpenID providers. Intranet - Like the Internet template, but it's preconfigured for Windows Authentication. Mobile - This is preconfigured using jQuery Mobile and is intended for mobile-only sites. Web API - This is preconfigured for a service backend built on ASP.NET Web API. Out of these templates, only one (the Internet template) uses SimpleMembership. ASP.NET MVC 4 Basic template The Basic template has configuration in place to use ASP.NET Membership with the Universal Providers. 
You can see that configuration in the ASP.NET MVC 4 Basic template's web.config:

<profile defaultProvider="DefaultProfileProvider">
  <providers>
    <add name="DefaultProfileProvider"
         type="System.Web.Providers.DefaultProfileProvider, System.Web.Providers, Version=1.0.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35"
         connectionStringName="DefaultConnection" applicationName="/" />
  </providers>
</profile>
<membership defaultProvider="DefaultMembershipProvider">
  <providers>
    <add name="DefaultMembershipProvider"
         type="System.Web.Providers.DefaultMembershipProvider, System.Web.Providers, Version=1.0.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35"
         connectionStringName="DefaultConnection" enablePasswordRetrieval="false"
         enablePasswordReset="true" requiresQuestionAndAnswer="false"
         requiresUniqueEmail="false" maxInvalidPasswordAttempts="5"
         minRequiredPasswordLength="6" minRequiredNonalphanumericCharacters="0"
         passwordAttemptWindow="10" applicationName="/" />
  </providers>
</membership>
<roleManager defaultProvider="DefaultRoleProvider">
  <providers>
    <add name="DefaultRoleProvider"
         type="System.Web.Providers.DefaultRoleProvider, System.Web.Providers, Version=1.0.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35"
         connectionStringName="DefaultConnection" applicationName="/" />
  </providers>
</roleManager>
<sessionState mode="InProc" customProvider="DefaultSessionProvider">
  <providers>
    <add name="DefaultSessionProvider"
         type="System.Web.Providers.DefaultSessionStateProvider, System.Web.Providers, Version=1.0.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35"
         connectionStringName="DefaultConnection" />
  </providers>
</sessionState>

This means that it's business as usual for the Basic template as far as ASP.NET Membership is concerned.

ASP.NET MVC 4 Internet template

The Internet template has a few things set up to bootstrap SimpleMembership:

- \Models\AccountModels.cs defines a basic user account and includes data annotations to define keys and such
- \Filters\InitializeSimpleMembershipAttribute.cs creates the membership database using the above model, then calls WebSecurity.InitializeDatabaseConnection, which verifies that the underlying tables are in place and marks initialization as complete (for the application's lifetime)
- \Controllers\AccountController.cs makes heavy use of OAuthWebSecurity (for OAuth account registration / login / management) and WebSecurity

WebSecurity provides account management services for ASP.NET MVC (and Web Pages)

WebSecurity can work with any ExtendedMembershipProvider. There's one in the box (SimpleMembershipProvider), but you can write your own. Since a standard MembershipProvider is not an ExtendedMembershipProvider, WebSecurity will throw exceptions if the default membership provider is a MembershipProvider rather than an ExtendedMembershipProvider.

Practical example:

1. Create a new ASP.NET MVC 4 application using the Internet application template
2. Install the Microsoft ASP.NET Universal Providers for LocalDB NuGet package
3. Run the application, click on Register, add a username and password, and click submit

You'll get the following exception in AccountController.cs::Register: To call this method, the "Membership.Provider" property must be an instance of "ExtendedMembershipProvider".

This occurs because the ASP.NET Universal Providers packages include a web.config transform that will update your web.config to add the Universal Provider configuration I showed in the Basic template example above. When WebSecurity tries to use the configured ASP.NET Membership Provider, it checks if it can be cast to an ExtendedMembershipProvider before doing anything else.
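To make that concrete, here's a minimal diagnostic sketch - not the framework's actual source - showing the kind of check involved. The MembershipDiagnostics helper is purely illustrative; you could call something like it from a controller action to see what your app is actually configured with:

using System;
using System.Web.Security;
using WebMatrix.WebData;

// Illustrative helper (not part of any template): confirms whether the
// configured membership provider supports the WebSecurity API surface.
public static class MembershipDiagnostics
{
    public static ExtendedMembershipProvider RequireExtendedProvider()
    {
        // WebSecurity needs an ExtendedMembershipProvider; a plain
        // MembershipProvider (e.g. the Universal Providers) won't do.
        var extended = Membership.Provider as ExtendedMembershipProvider;
        if (extended == null)
        {
            throw new InvalidOperationException(
                "The configured Membership.Provider (" + Membership.Provider.GetType().Name +
                ") is not an ExtendedMembershipProvider, so WebSecurity calls will fail.");
        }
        return extended;
    }
}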
So, what do you do? Options:

If you want to use the new AccountController, you'll either need to use the SimpleMembershipProvider or another valid ExtendedMembershipProvider. This is pretty straightforward.

If you want to use an existing ASP.NET Membership Provider in ASP.NET MVC 4, you can't use the new AccountController. You can do a few things:

- Replace the AccountController.cs and AccountModels.cs in an ASP.NET MVC 4 Internet project with the ones from an ASP.NET MVC 3 application (you of course won't have OAuth support). Then, if you want, you can go through and remove other things that were built around SimpleMembership - the OAuth partial view, the NuGet packages (e.g. the DotNetOpenAuthAuth package, etc.)
- Use an ASP.NET MVC 4 Internet application template and add in a Universal Providers NuGet package. Then copy in the AccountController and AccountModel classes.
- Create an ASP.NET MVC 3 project and upgrade it to ASP.NET MVC 4 using the steps shown in the ASP.NET MVC 4 release notes.

None of these are particularly elegant or simple. Maybe we (or just me?) can do something to make this simpler - perhaps a NuGet package. However, this should be an edge case - hopefully the cases where you'd need to create a new ASP.NET application but use legacy ASP.NET Membership Providers should be pretty rare. Please let me (or, preferably, the team) know if that's an incorrect assumption.

Membership in the ASP.NET 4.5 project template

ASP.NET 4.5 Web Forms took a different approach which builds off ASP.NET Membership. Instead of using the WebMatrix security assemblies, Web Forms uses the Microsoft.AspNet.Membership.OpenAuth assembly. I'm no expert on this, but from a bit of time in ILSpy and Visual Studio's (very pretty) dependency graphs, this uses a Membership Adapter to save OAuth data into an EF-managed database while still running on top of ASP.NET Membership.

Note: There may be a way to use this in ASP.NET MVC 4, although it would probably take some plumbing work to hook it up.

How does this fit in with Universal Providers (System.Web.Providers)?

Just to summarize:

- Universal Providers are intended for cases where you have an existing ASP.NET Membership Provider and you want to use it with another database backend in the SQL Server family (something other than full SQL Server). They don't require agents to handle expired session cleanup and other background tasks; they piggyback these tasks on other calls.
- Universal Providers are not really, strictly speaking, universal - at least to my way of thinking. They only work with databases in the SQL Server family.
- Universal Providers do not work with SimpleMembership.
- The Universal Providers packages include some web.config transforms which you would normally want when you're using them.

What about the Web Site Administration Tool?

Visual Studio includes tooling to launch the Web Site Administration Tool (WSAT) to configure users and roles in your application. WSAT is built to work with ASP.NET Membership, and is not compatible with SimpleMembership. There are two main options there:

- Use the WebSecurity and OAuthWebSecurity APIs to manage the users and roles
- Create your own web admin pages using the above APIs

Since SimpleMembership runs on top of your database, you can update your users as you would any other data - via EF or even via direct database edits (in development, of course).
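For the first option, here's a minimal sketch, assuming the UserProfile schema from earlier and the SimpleMembership / SimpleRole providers that the Internet template configures. The UserAdmin class and the "Admin" role name are illustrative, not part of any template:

using System;
using System.Web.Security;
using WebMatrix.WebData;

// Illustrative admin helpers built on WebSecurity and Roles
// (assumes SimpleMembershipProvider / SimpleRoleProvider are the configured providers).
public static class UserAdmin
{
    public static void CreateUser(string userName, string password, DateTime birthday)
    {
        // Creates the UserProfile row and the membership account in one call;
        // the anonymous object fills extra columns on the user table.
        WebSecurity.CreateUserAndAccount(userName, password, new { Birthday = birthday });
    }

    public static void MakeAdmin(string userName)
    {
        if (!Roles.RoleExists("Admin"))
        {
            Roles.CreateRole("Admin");
        }
        Roles.AddUserToRole(userName, "Admin");
    }
}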

    Read the article
