Search Results

Search found 8523 results on 341 pages for 'sun storage 7000 scriptin'.

Page 328/341 | < Previous Page | 324 325 326 327 328 329 330 331 332 333 334 335  | Next Page >

  • How to implement Xml Serialization with inherited classes in C#

    - by liorafar
    I have two classes : base class name Component and inheritd class named DBComponent [Serializable] public class Component { private string name = string.Empty; private string description = string.Empty; } [Serializable] public class DBComponent : Component { private List<string> spFiles = new List<string>(); // Storage Procedure Files [XmlArrayItem("SPFile", typeof(string))] [XmlArray("SPFiles")] public List<string> SPFiles { get { return spFiles; } set { spFiles = value; } } public DBComponent(string name, string description) : base(name, description) { } } [Serializable] public class ComponentsCollection { private static ComponentsCollection instance = null; private List<Component> components = new List<Component>(); public List<Component> Components { get { return components; } set { components = value; } } public static ComponentsCollection GetInstance() { if (ccuInstance == null) { lock (lockObject) { if (instance == null) PopulateComponents(); } } return instance; } private static void PopulateComponents() { instance = new CCUniverse(); XmlSerializer xs = new XmlSerializer(instance.GetType()); instance = xs.Deserialize(XmlReader.Create("Components.xml")) as ComponentsCollection; } } } I want read\write from a Xml file. I know that I need to implement the Serialization for DBComponent class otherwise it will not read it.But i cannot find any simple article for that. all the articles that I found were too complex for this simple scenario. The Xml file looks like this: <?xml version="1.0" encoding="utf-8" ?> <ComponentsCollection> <Components> <DBComponent Name="Tenant Historical Database" Description="Tenant Historical Database"> <SPFiles> <SPFile>Setup\TenantHistoricalSP.sql</SPFile> </SPFiles> </DBComponent> <Component Name="Agent" Description="Desktop Agent" /> </Components> </ComponentsCollection> Can someone please give me a simple example of how to read this kind of xml file and what should be implemented ? Thanks Lior

    Read the article

  • g++ symbol versioning. Set it to GCC_3.0 using version 4 of g++

    - by Ismael
    Hi all I need to implemente a Java class which uses JNI to control a fiscal printer in XUbuntu 8.10 with sun-java6-jdk installed. The structure is the following: EpsonDriver.java loads libEpson.so libEpson is linked dynamically with EpsonFiscalProtocol.so ( provided by Epson, no source available ) and pthread I use javah to generate the header file, and the code compiles. Then I put the libEpson.so in $JAVA_HOME/jre/lib/i386, and EpsonDriver.java uses an static initializar System.loadLibrary("libEpson") That part works, however, when I try to use any of the methods I get an unsatisfiedLinkError exception. Some time ago, a coworker did a version that works, and using objdump -Dslx I got the following: Program Header: LOAD off 0x00000000 vaddr 0x00000000 paddr 0x00000000 align 2**12 filesz 0x0000ccc4 memsz 0x0000ccc4 flags r-x LOAD off 0x0000d000 vaddr 0x0000d000 paddr 0x0000d000 align 2**12 filesz 0x00000250 memsz 0x00044a5c flags rw- DYNAMIC off 0x0000d014 vaddr 0x0000d014 paddr 0x0000d014 align 2**2 filesz 0x000000f0 memsz 0x000000f0 flags rw- NOTE off 0x000000d4 vaddr 0x000000d4 paddr 0x000000d4 align 2**2 filesz 0x00000024 memsz 0x00000024 flags r-- STACK off 0x00000000 vaddr 0x00000000 paddr 0x00000000 align 2**2 filesz 0x00000000 memsz 0x00000000 flags rw- Dynamic Section: NEEDED EpsonFiscalProtocol.so NEEDED libpthread.so.0 NEEDED libstdc++.so.6 NEEDED libm.so.6 NEEDED libc.so.6 SONAME libcom_tichile_jpos_EpsonSerialDriver.so INIT 0x00007254 FINI 0x0000ba08 GNU_HASH 0x000000f8 STRTAB 0x00001f50 SYMTAB 0x00000ae0 STRSZ 0x00002384 SYMENT 0x00000010 PLTGOT 0x0000d108 PLTRELSZ 0x00000008 PLTREL 0x00000011 JMPREL 0x0000724c REL 0x000045c4 RELSZ 0x00002c88 RELENT 0x00000008 TEXTREL 0x00000000 VERNEED 0x00004564 VERNEEDNUM 0x00000002 VERSYM 0x000042d4 RELCOUNT 0x000000ac Version References: required from libstdc++.so.6: 0x056bafd3 0x00 05 CXXABI_1.3 0x08922974 0x00 04 GLIBCXX_3.4 required from libc.so.6: 0x0b792650 0x00 03 GCC_3.0 0x0d696910 0x00 02 GLIBC_2.0 In the recently compiled file I get: Program Header: LOAD off 0x00000000 vaddr 0x00000000 paddr 0x00000000 align 2**12 filesz 0x00005300 memsz 0x00005300 flags r-x LOAD off 0x00005300 vaddr 0x00006300 paddr 0x00006300 align 2**12 filesz 0x00000274 memsz 0x00010314 flags rw- DYNAMIC off 0x00005314 vaddr 0x00006314 paddr 0x00006314 align 2**2 filesz 0x000000e0 memsz 0x000000e0 flags rw- EH_FRAME off 0x00004a00 vaddr 0x00004a00 paddr 0x00004a00 align 2**2 filesz 0x00000154 memsz 0x00000154 flags r-- Dynamic Section: NEEDED libstdc++.so.5 NEEDED libm.so.6 NEEDED libgcc_s.so.1 NEEDED libc.so.6 SONAME EpsonFiscalProtocol.so INIT 0x00001cb4 FINI 0x00004994 HASH 0x000000b4 STRTAB 0x00000da4 SYMTAB 0x000004f4 STRSZ 0x00000acf SYMENT 0x00000010 PLTGOT 0x0000640c PLTRELSZ 0x00000270 PLTREL 0x00000011 JMPREL 0x00001a44 REL 0x000019dc RELSZ 0x00000068 RELENT 0x00000008 VERNEED 0x0000198c VERNEEDNUM 0x00000002 VERSYM 0x00001874 RELCOUNT 0x00000004 Version References: required from libstdc++.so.5: 0x056bafd2 0x00 04 CXXABI_1.2 required from libc.so.6: 0x09691f73 0x00 03 GLIBC_2.1.3 0x0d696910 0x00 02 GLIBC_2.0 So I suspect the main diference is the GCC_3.0 symbol I compile libcom_tichile_EpsonSerialDriver.so with the following command ( from memory as I not at work right now ) g++ -Wl,-soname=.... -shared -I/*jni libraries*/ -o libcom_tichile_jpos_EpsonSerialDriver -lEpsonFiscalProtocol -lpthread Is there any way to tell g++ to use that symbol version? Or any idea in how to make it work? 
EDIT: I have another non-working version with the followin dump: Program Header: LOAD off 0x00000000 vaddr 0x00000000 paddr 0x00000000 align 2**12 filesz 0x0000bf68 memsz 0x0000bf68 flags r-x LOAD off 0x0000cc0c vaddr 0x0000cc0c paddr 0x0000cc0c align 2**12 filesz 0x000005e8 memsz 0x00044df0 flags rw- DYNAMIC off 0x0000cc20 vaddr 0x0000cc20 paddr 0x0000cc20 align 2**2 filesz 0x000000f8 memsz 0x000000f8 flags rw- EH_FRAME off 0x0000b310 vaddr 0x0000b310 paddr 0x0000b310 align 2**2 filesz 0x000002bc memsz 0x000002bc flags r-- STACK off 0x00000000 vaddr 0x00000000 paddr 0x00000000 align 2**2 filesz 0x00000000 memsz 0x00000000 flags rw- RELRO off 0x0000cc0c vaddr 0x0000cc0c paddr 0x0000cc0c align 2**0 filesz 0x000003f4 memsz 0x000003f4 flags r-- Dynamic Section: NEEDED EpsonFiscalProtocol.so NEEDED libpthread.so.0 NEEDED libstdc++.so.6 NEEDED libm.so.6 NEEDED libgcc_s.so.1 NEEDED libc.so.6 SONAME libcom_tichile_jpos_EpsonSerialDriver.so INIT 0x000055d8 FINI 0x0000a968 HASH 0x000000f4 GNU_HASH 0x00000a30 STRTAB 0x00002870 SYMTAB 0x00001410 STRSZ 0x00002339 SYMENT 0x00000010 PLTGOT 0x0000cff4 PLTRELSZ 0x00000168 PLTREL 0x00000011 JMPREL 0x00005470 REL 0x00004ea8 RELSZ 0x000005c8 RELENT 0x00000008 VERNEED 0x00004e38 VERNEEDNUM 0x00000002 VERSYM 0x00004baa RELCOUNT 0x00000001 Version References: required from libstdc++.so.6: 0x056bafd3 0x00 05 CXXABI_1.3 0x08922974 0x00 03 GLIBCXX_3.4 required from libc.so.6: 0x09691f73 0x00 06 GLIBC_2.1.3 0x0d696914 0x00 04 GLIBC_2.4 0x0d696910 0x00 02 GLIBC_2.0 Now I think the main difference is in the GCC_3.0 symbol/ABI EDIT: Luckily, a coworker found a way to talk to the printer using Java

    Read the article

  • UI not updated while using ProgressMonitorInputStream in Swing to monitor compressed file decompress

    - by Bozhidar Batsov
    I'm working on swing application that relies on an embedded H2 database. Because I don't want to bundle the database with the app(the db is frequently updated and I want new users of the app to start with a recent copy), I've implemented a solution which downloads a compressed copy of the db the first time the application is started and extracts it. Since the extraction process might be slow I've added a ProgressMonitorInputStream to show to progress of the extraction process - unfortunately when the extraction starts, the progress dialog shows up but it's not updated at all. It seems like to events are getting through to the event dispatch thread. Here is the method: public static String extractDbFromArchive(String pathToArchive) { if (SwingUtilities.isEventDispatchThread()) { System.out.println("Invoking on event dispatch thread"); } // Get the current path, where the database will be extracted String currentPath = System.getProperty("user.home") + File.separator + ".spellbook" + File.separator; LOGGER.info("Current path: " + currentPath); try { //Open the archive FileInputStream archiveFileStream = new FileInputStream(pathToArchive); // Read two bytes from the stream before it used by CBZip2InputStream for (int i = 0; i < 2; i++) { archiveFileStream.read(); } // Open the gzip file and open the output file CBZip2InputStream bz2 = new CBZip2InputStream(new ProgressMonitorInputStream( null, "Decompressing " + pathToArchive, archiveFileStream)); FileOutputStream out = new FileOutputStream(ARCHIVED_DB_NAME); LOGGER.info("Decompressing the tar file..."); // Transfer bytes from the compressed file to the output file byte[] buffer = new byte[1024]; int len; while ((len = bz2.read(buffer)) > 0) { out.write(buffer, 0, len); } // Close the file and stream bz2.close(); out.close(); } catch (FileNotFoundException e) { e.printStackTrace(); } catch (IOException ex) { ex.printStackTrace(); } try { TarInputStream tarInputStream = null; TarEntry tarEntry; tarInputStream = new TarInputStream(new ProgressMonitorInputStream( null, "Extracting " + ARCHIVED_DB_NAME, new FileInputStream(ARCHIVED_DB_NAME))); tarEntry = tarInputStream.getNextEntry(); byte[] buf1 = new byte[1024]; LOGGER.info("Extracting tar file"); while (tarEntry != null) { //For each entry to be extracted String entryName = currentPath + tarEntry.getName(); entryName = entryName.replace('/', File.separatorChar); entryName = entryName.replace('\\', File.separatorChar); LOGGER.info("Extracting entry: " + entryName); FileOutputStream fileOutputStream; File newFile = new File(entryName); if (tarEntry.isDirectory()) { if (!newFile.mkdirs()) { break; } tarEntry = tarInputStream.getNextEntry(); continue; } fileOutputStream = new FileOutputStream(entryName); int n; while ((n = tarInputStream.read(buf1, 0, 1024)) > -1) { fileOutputStream.write(buf1, 0, n); } fileOutputStream.close(); tarEntry = tarInputStream.getNextEntry(); } tarInputStream.close(); } catch (Exception e) { } currentPath += "db" + File.separator + DB_FILE_NAME; if (!currentPath.isEmpty()) { LOGGER.info("DB placed in : " + currentPath); } return currentPath; } This method gets invoked on the event dispatch thread (SwingUtilities.isEventDispatchThread() returns true) so the UI components should be updated. I haven't implemented this as an SwingWorker since I need to wait for the extraction anyways before I can proceed with the initialization of the program. This method get invoked before the main JFrame of the application is visible. 
I don't won't a solution based on SwingWorker + property changed listeners - I think that the ProgressMonitorInputStream is exactly what I need, but I guess I'm not doing something right. I'm using Sun JDK 1.6.18. Any help would be greatly appreciated.

    Read the article

  • NSLinguisticTagger on the contents of an NSTextStorage- crashing bug

    - by Remy Porter
    I'm trying to use an NSLinguisticTagger to monitor the contents of an NSTextStorage and provide some contextual information based on what the user types. To that end, I have an OverlayManager object, which wires up this relationship: -(void) setView:(NSTextView*) view { _view = view; _layout = view.layoutManager; _storage = view.layoutManager.textStorage; //get the TextStorage from the view [_tagger setString:_storage.string]; //pull the string out, this grabs the mutable version [self registerForNotificationsOn:self->_storage]; //subscribe to the willProcessEditing notification } When an edit occurs, I make sure to trap it and notify the tagger (and yes, I know I'm being annoyingly inconsistent with member access, I'm rusty on Obj-C, I'll fix it later): - (void) textStorageWillProcessEditing:(NSNotification*) notification{ if ([self->_storage editedMask] & NSTextStorageEditedCharacters) { NSRange editedRange = [self->_storage editedRange]; NSUInteger delta = [self->_storage changeInLength]; [_tagger stringEditedInRange:editedRange changeInLength:delta]; //should notify the tagger of the changes [self highlightEdits:self]; } } The highlightEdits message delegates the job out to a pool of "Overlay" objects. Each contains a block of code similar to this: [tagger enumerateTagsInRange:range scheme:NSLinguisticTagSchemeLexicalClass options:0 usingBlock:^(NSString *tag, NSRange tokenRange, NSRange sentenceRange, BOOL *stop) { if (tag == PartOfSpeech) { [self applyHighlightToRange:tokenRange onStorage:storage]; } }]; And that's where the problem is- the enumerateTagsInRange method crashes out with a message: 2014-06-04 10:07:19.692 WritersEditor[40191:303] NSMutableRLEArray replaceObjectsInRange:withObject:length:: Out of bounds This problem doesn't occur if I don't link to the mutable copy of the underlying string and instead do a [[_storage string] copy], but obviously I don't want to copy the entire backing store every time I want to do tagging. This all should be happening in the main run loop, so I don't think this is a threading issue. The NSRange I'm enumerating tags on exists both in the NSTextStorage and in the NSLinguisticTagger's view of the string. It's not even the fact that the applyHighlightToRange call adds attributes to the string, because it crashes before even reaching that line. I attempted to build a test case around the problem, but can't replicate it in those situations: - (void) testEdit { NSAttributedString* str = [[NSMutableAttributedString alloc] initWithString:@"Quickly, this is a test."]; text = [[NSTextStorage alloc] initWithAttributedString:str]; NSArray* schemes = [NSLinguisticTagger availableTagSchemesForLanguage:@"en"]; tagger = [[NSLinguisticTagger alloc] initWithTagSchemes:schemes options:0]; [tagger setString:[text string]]; [text beginEditing]; [[text mutableString] appendString:@"T"]; NSRange edited = [text editedRange]; NSUInteger length = [text changeInLength]; [text endEditing]; [tagger stringEditedInRange:edited changeInLength:length]; [tagger enumerateTagsInRange:edited scheme:NSLinguisticTagSchemeLexicalClass options:0 usingBlock:^(NSString *tag, NSRange tokenRange, NSRange sentenceRange, BOOL *stop) { //doesn't matter, this should crash }]; } That code doesn't crash.

    Read the article

  • Why does Saxon evaluate the result-document URI to be the same?

    - by Jan
    My XSL source document looks like this <Topology> <Environment> <Id>test</Id> <Machines> <Machine> <Id>machine1</Id> <modules> <module>m1</module> <module>m2</module> </modules> </Machine> </Machines> </Environment> <Environment> <Id>production</Id> <Machines> <Machine> <Id>machine1</Id> <modules> <module>m1</module> <module>m2</module> </modules> </Machine> <Machine> <Id>machine2</Id> <modules> <module>m3</module> <module>m4</module> </modules> </Machine> </Machines> </Environment> </Topology> I want to create one result-document per machine, so I use the following stylesheet giving modelDir as path for the result-documents as parameter. <xsl:output method="xml" version="1.0" encoding="UTF-8" indent="yes" name="myXML" doctype-system="http://java.sun.com/dtd/properties.dtd"/> <xsl:template match="/"> <xsl:for-each-group select="/Topology/Environment/Machines/Machine" group-by="Id"> <xsl:variable name="machine" select="Id"/> <xsl:variable name="filename" select="concat($modelDir,$machine,'.xml')" /> <xsl:message terminate="no">Writing machine description to <xsl:value-of select="$filename"/></xsl:message> <xsl:result-document href="$filename" format="myXML"> <xsl:variable name="currentMachine" select="Id"/> <xsl:for-each select="current-group()/LogicalHosts/LogicalHost"> <xsl:variable name="environment" select="normalize-space(../../../../Id)"/> <xsl:message terminate="no">Module <xsl:value-of select="."/> for <xsl:value-of select="$environment"/></xsl:message> </xsl:for-each> </xsl:result-document> </xsl:for-each-group> </xsl:template> As my messages show me this seems to work fine - if saxon would not evaluate the URI of the result-document to be the same and thus give the following output. Writing machine description to target/build/model/m1.xml Module m1 for test Module m2 for test Module m1 for production Module m2 for production Writing machine description to target/build/model/m2.xml Error at xsl:result-document on line 29 of file:/C:/Projekte/.../machine.xsl: XTDE1490: Cannot write more than one result document to the same URI, or write to a URI that has been read: file:/C:/Projekte/.../$filename file:/C:/Projekte/.../machine.xsl(29,-1) : here Cannot write more than one result document to the same URI, or write to a URI that has been read: file:/C:/Projekte/.../$filename ; SystemID: file:/C:/Projekte/.../machine.xsl; Line#: 29; Column#: -1 net.sf.saxon.trans.DynamicError: Cannot write more than one result document to the same URI, or write to a URI that has been read: file:/C:/Projekte/.../$filename at net.sf.saxon.instruct.ResultDocument.processLeavingTail(ResultDocument.java:300) at net.sf.saxon.instruct.Block.processLeavingTail(Block.java:365) at net.sf.saxon.instruct.Instruction.process(Instruction.java:91) Any ideas on how to solve this?

    Read the article

  • Deterministic key serialization

    - by Mike Boers
    I'm writing a mapping class which uses SQLite as the storage backend. I am currently allowing only basestring keys but it would be nice if I could use a couple more types hopefully up to anything that is hashable (ie. same requirements as the builtin dict). To that end I would like to derive a deterministic serialization scheme. Ideally, I would like to know if any implementation/protocol combination of pickle is deterministic for hashable objects (e.g. can only use cPickle with protocol 0). I noticed that pickle and cPickle do not match: >>> import pickle >>> import cPickle >>> def dumps(x): ... print repr(pickle.dumps(x)) ... print repr(cPickle.dumps(x)) ... >>> dumps(1) 'I1\n.' 'I1\n.' >>> dumps('hello') "S'hello'\np0\n." "S'hello'\np1\n." >>> dumps((1, 2, 'hello')) "(I1\nI2\nS'hello'\np0\ntp1\n." "(I1\nI2\nS'hello'\np1\ntp2\n." Another option is to use repr to dump and ast.literal_eval to load. This would only be valid for builtin hashable types. I have written a function to determine if a given key would survive this process (it is rather conservative on the types it allows): def is_reprable_key(key): return type(key) in (int, str, unicode) or (type(key) == tuple and all( is_reprable_key(x) for x in key)) The question for this method is if repr itself is deterministic for the types that I have allowed here. I believe this would not survive the 2/3 version barrier due to the change in str/unicode literals. This also would not work for integers where 2**32 - 1 < x < 2**64 jumping between 32 and 64 bit platforms. Are there any other conditions (ie. do strings serialize differently under different conditions)? (If this all fails miserably then I can store the hash of the key along with the pickle of both the key and value, then iterate across rows that have a matching hash looking for one that unpickles to the expected key, but that really does complicate a few other things and I would rather not do it.) Any insights?

    Read the article

  • Java MapReduce read data

    - by Tatiana
    Hi I am having following map-reduce code by which I am trying to read records from my database. There's code: import java.io.*; import java.util.ArrayList; import java.util.List; import org.apache.hadoop.fs.*; import org.apache.hadoop.io.*; import org.apache.hadoop.mapred.*; import org.apache.hadoop.mapred.lib.db.DBConfiguration; import org.apache.hadoop.mapred.lib.db.DBInputFormat; import org.apache.hadoop.mapred.lib.db.DBWritable; import org.apache.hadoop.util.*; import org.apache.hadoop.conf.*; public class Connection extends Configured implements Tool { public int run(String[] args) throws IOException { JobConf conf = new JobConf(getConf(), Connection.class); conf.setInputFormat(DBInputFormat.class); DBConfiguration.configureDB(conf, "com.sun.java.util.jar.pack.Driver", "jdbc:postgresql://localhost:5432/polyclinic", "postgres", "12345"); String[] fields = { "name" }; DBInputFormat.setInput(conf, MyRecord.class, "doctors", null, null, fields); conf.setMapOutputKeyClass(LongWritable.class); conf.setMapOutputValueClass(MyRecord.class); conf.setOutputKeyClass(LongWritable.class); conf.setOutputValueClass(TextOutputFormat.class); TextOutputFormat.setOutputPath(conf, new Path(args[0])); JobClient.runJob(conf); return 0; } public static void main(String[] args) throws Exception { int exitCode = ToolRunner.run(new Connection(), args); System.exit(exitCode); } } Class Mapper: import java.io.IOException; import org.apache.hadoop.io.IntWritable; import org.apache.hadoop.io.LongWritable; import org.apache.hadoop.io.Text; import org.apache.hadoop.mapred.MapReduceBase; import org.apache.hadoop.mapred.Mapper; import org.apache.hadoop.mapred.OutputCollector; import org.apache.hadoop.mapred.Reporter; public class MyMapper extends MapReduceBase implements Mapper<LongWritable, MyRecord, Text, IntWritable> { public void map(LongWritable key, MyRecord val, OutputCollector<Text, IntWritable> output, Reporter reporter) throws IOException { output.collect(new Text(val.name), new IntWritable(1)); } } Class Record: import java.io.DataInput; import java.io.DataOutput; import java.io.IOException; import java.sql.PreparedStatement; import java.sql.ResultSet; import java.sql.SQLException; import org.apache.hadoop.io.Text; import org.apache.hadoop.io.Writable; import org.apache.hadoop.mapred.lib.db.DBWritable; class MyRecord implements Writable, DBWritable { String name; public void readFields(DataInput in) throws IOException { this.name = Text.readString(in); } public void readFields(ResultSet resultSet) throws SQLException { this.name = resultSet.getString(1); } public void write(DataOutput out) throws IOException { } public void write(PreparedStatement stmt) throws SQLException { } } After this I got error: WARN mapred.JobClient: No job jar file set. User classes may not be found. See JobConf(Class) or JobConf#setJar(String). Can you give me any suggestion how to solve this problem?

    Read the article

  • Issues declaring already existing NSMutableArray in new class

    - by Graeme
    I have a class (DataImporter) which has the code to download an RSS feed. I also have a view and separate class (TableView) which displays the data in a UITableView and starts the parsing process, storing parsed information in an NSMutableArray (items) which is located in the (TableView) subclass. Now I wish to add a UIMapView which displays the items in the (items) NSMutableArray. Herein lies the issue - I need to somehow get the data from the (items) NSMutableArray into the new (mapView) subclass which I'm struggling with - and I preferably don't want to have to create a new class to download the data again for the mapView class when it already is in the applications memory. Is there a way I can transfer the information from the NSMutableArray (items) class to the (mapView) class (i.e. how do I declare the NSMutableArray in the (mapView) class)? Here's a overview of how the system works: App opened Data downloaded (using DataImporter class) when (TableView) viewDidLoad runs Data stored in NSMutableArray accessible by the (TableView) class And from here I need to access and declare the array from a new (mapView) class. Any help greatly appreciated, thanks. Code for viewDidLoad MapKit: Data *data = nil; NSString *ilocation = [data locations]; NSString *ilocation2 = @"New Zealand"; NSString *inewlString; inewlString = [ilocation stringByAppendingString:ilocation2]; NSLog(@"inewlString=%@",inewlString); if(forwardGeocoder == nil) { forwardGeocoder = [[BSForwardGeocoder alloc] initWithDelegate:self]; } // Forward geocode! [forwardGeocoder findLocation: inewlString]; Code for parsing data into original NSMutable Array: - (void)beginParsing { NSLog(@"Parsing has begun"); //self.navigationItem.rightBarButtonItem.enabled = NO; // Allocate the array for song storage, or empty the results of previous parses if (incidents == nil) { NSLog(@"Grabbing array"); self.datas = [NSMutableArray array]; } else { [datas removeAllObjects]; [self.tableView reloadData]; } // Create the parser, set its delegate, and start it. self.parser = [[DataImporter alloc] init]; parser.delegate = self; [parser start]; }

    Read the article

  • how to gzip-compress large Ajax responses (HTML only) in Coldfusion?

    - by frequent
    I'm running Coldfusion8 and jquery/jquery-mobile on the front-end. I'm playing around with an Ajax powered search engine trying to find the best tradeoff between data-volume and client-side processing time. Currently my AJAX search returns 40k of (JQM-enhanced markup), which avoids any client-side enhancement. This way I'm getting by without the page stalling for about 2-3 seconds, while JQM enhances all elements in the search results. What I'm curious is whether I can gzip Ajax responses sent from Coldfusion. If I check the header of my search right now, I'm having this: RESPONSE-header Connection Keep-Alive Content-Type text/html; charset=UTF-8 Date Sat, 01 Sep 2012 08:47:07 GMT Keep-Alive timeout=5, max=95 Server Apache/2.2.21 (Win32) mod_ssl/2.2.21 ... Transfer-Encoding chunked REQUEST-header Accept */* Accept-Encoding gzip, deflate Accept-Language de-de,de;q=0.8,en-us;q=0.5,en;q=0.3 Connection keep-alive Cookie CFID= ; CFTOKEN= ; resolution=1143 Host www.host.com Referer http://www.host.com/dev/users/index.cfm So, my request would accept gzip, deflate, but I'm getting back chunked. I'm generating the AJAX response in a cfsavecontent (called compressedHTML) and run this to eliminate whitespace <cfrscipt> compressedHTML = reReplace(renderedResults, "\>\s+\<", "> <", "ALL"); compressedHTML = reReplace(compressedHTML, "\s{2,}", chr(13), "ALL"); compressedHTML = reReplace(compressedHTML, "\s{2,}", chr(09), "ALL"); </cfscript> before sending the compressedHTML in a response object like this: {"SUCCESS":true,"DATA": compressedHTML } Question If I know I'm sending back HTML in my data object via Ajax, is there a way to gzip the response server-side before returning it vs sending chunked? If this is at all possible? If so, can I do this inside my response object or would I have to send back "pure" HTML? Thanks! EDIT: Found this on setting a 'web.config' for dynamic compression - doesn't seem to work EDIT2: Found thi snippet and am playing with it, although I'm not sure this will work. <cfscript> compressedHTML = reReplace(renderedResults, "\>\s+\<", "> <", "ALL"); compressedHTML = reReplace(compressedHTML, "\s{2,}", chr(13), "ALL"); compressedHTML = reReplace(compressedHTML, "\s{2,}", chr(09), "ALL"); if ( cgi.HTTP_ACCEPT_ENCODING contains "gzip" AND not showRaw ){ cfheader name="Content-Encoding" value="gzip"; bos = createObject("java","java.io.ByteArrayOutputStream").init(); gzipStream = createObject("java","java.util.zip.GZIPOutputStream"); gzipStream.init(bos); gzipStream.write(compressedHTML.getBytes("utf-8")); gzipStream.close(); bos.flush(); bos.close(); encoder = createObject("java","sun.misc. outStr= encoder.encode(bos.toByteArray()); compressedHTML = toString(bos.toByteArray()); } </cfscript> Probably need to try this on the response object and not the compressedTHML variable

    Read the article

  • Concurrency and Calendar classes

    - by fbielejec
    I have a thread (class implementing runnable, called AnalyzeTree) organised around a hash map (ConcurrentMap slicesMap). The class goes through the data (called trees here) in the large text file and parses the geographical coordinates from it to the HashMap. The idea is to process one tree at a time and add or grow the values according to the key (which is just a Double value representing time). The relevant part of code looks like this: // grow map entry if key exists if (slicesMap.containsKey(sliceTime)) { double[] imputedLocation = imputeValue( location, parentLocation, sliceHeight, nodeHeight, parentHeight, rate, useTrueNoise, currentTreeNormalization, precisionArray); slicesMap.get(sliceTime).add( new Coordinates(imputedLocation[1], imputedLocation[0], 0.0)); // start new entry if no such key in the map } else { List<Coordinates> coords = new ArrayList<Coordinates>(); double[] imputedLocation = imputeValue( location, parentLocation, sliceHeight, nodeHeight, parentHeight, rate, useTrueNoise, currentTreeNormalization, precisionArray); coords.add(new Coordinates(imputedLocation[1], imputedLocation[0], 0.0)); slicesMap.putIfAbsent(sliceTime, coords); // slicesMap.put(sliceTime, coords); }// END: key check And the class is called like this (executor is ExecutorService executor = Executors.newFixedThreadPool(NTHREDS) ): mrsd = new SpreadDate(mrsdString); int readTrees = 1; while (treesImporter.hasTree()) { currentTree = (RootedTree) treesImporter.importNextTree(); executor.submit(new AnalyzeTree(currentTree, precisionString, coordinatesName, rateString, numberOfIntervals, treeRootHeight, timescaler, mrsd, slicesMap, useTrueNoise)); // new AnalyzeTree(currentTree, precisionString, // coordinatesName, rateString, numberOfIntervals, // treeRootHeight, timescaler, mrsd, slicesMap, // useTrueNoise).run(); readTrees++; }// END: while has trees Now this is running into troubles when executed in parallel (the commented part running sequentially is fine), I thought it might throw a ConcurrentModificationException, but apparently the problem is in mrsd (instance of SpreadDate object, which is simply a class for date related calculations). 
The SpreadDate class looks like this: public class SpreadDate { private Calendar cal; private SimpleDateFormat formatter; private Date stringdate; public SpreadDate(String date) throws ParseException { // if no era specified assume current era String line[] = date.split(" "); if (line.length == 1) { StringBuilder properDateStringBuilder = new StringBuilder(); date = properDateStringBuilder.append(date).append(" AD") .toString(); } formatter = new SimpleDateFormat("yyyy-MM-dd G", Locale.US); stringdate = formatter.parse(date); cal = Calendar.getInstance(); } public long plus(int days) { cal.setTime(stringdate); cal.add(Calendar.DATE, days); return cal.getTimeInMillis(); }// END: plus public long minus(int days) { cal.setTime(stringdate); cal.add(Calendar.DATE, -days); //line 39 return cal.getTimeInMillis(); }// END: minus public long getTime() { cal.setTime(stringdate); return cal.getTimeInMillis(); }// END: getDate } And the stack trace from when exception is thrown: java.lang.ArrayIndexOutOfBoundsException: 58 at sun.util.calendar.BaseCalendar.getCalendarDateFromFixedDate(BaseCalendar.java:454) at java.util.GregorianCalendar.computeFields(GregorianCalendar.java:2098) at java.util.GregorianCalendar.computeFields(GregorianCalendar.java:2013) at java.util.Calendar.setTimeInMillis(Calendar.java:1126) at java.util.GregorianCalendar.add(GregorianCalendar.java:1020) at utils.SpreadDate.minus(SpreadDate.java:39) at templates.AnalyzeTree.run(AnalyzeTree.java:88) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471) at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:334) at java.util.concurrent.FutureTask.run(FutureTask.java:166) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1110) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:603) at java.lang.Thread.run(Thread.java:636) If a move the part initializing mrsd to the AnalyzeTree class it runs without any problems - however it is not very memory efficient to initialize class each time this thread is running, hence my concerns. How can it be remedied?

    Read the article

  • Listening for TCP and UDP requests on the same port

    - by user339328
    I am writing a Client/Server set of programs Depending on the operation requested by the client, I use make TCP or UDP request. Implementing the client side is straight-forward, since I can easily open connection with any protocol and send the request to the server-side. On the servers-side, on the other hand, I would like to listen both for UDP and TCP connections on the same port. Moreover, I like the the server to open new thread for each connection request. I have adopted the approach explained in: link text I have extended this code sample by creating new threads for each TCP/UDP request. This works correctly if I use TCP only, but it fails when I attempt to make UDP bindings. Please give me any suggestion how can I correct this. tnx Here is the Server Code: public class Server { public static void main(String args[]) { try { int port = 4444; if (args.length > 0) port = Integer.parseInt(args[0]); SocketAddress localport = new InetSocketAddress(port); // Create and bind a tcp channel to listen for connections on. ServerSocketChannel tcpserver = ServerSocketChannel.open(); tcpserver.socket().bind(localport); // Also create and bind a DatagramChannel to listen on. DatagramChannel udpserver = DatagramChannel.open(); udpserver.socket().bind(localport); // Specify non-blocking mode for both channels, since our // Selector object will be doing the blocking for us. tcpserver.configureBlocking(false); udpserver.configureBlocking(false); // The Selector object is what allows us to block while waiting // for activity on either of the two channels. Selector selector = Selector.open(); tcpserver.register(selector, SelectionKey.OP_ACCEPT); udpserver.register(selector, SelectionKey.OP_READ); System.out.println("Server Sterted on port: " + port + "!"); //Load Map Utils.LoadMap("mapa"); System.out.println("Server map ... LOADED!"); // Now loop forever, processing client connections while(true) { try { selector.select(); Set<SelectionKey> keys = selector.selectedKeys(); // Iterate through the Set of keys. for (Iterator<SelectionKey> i = keys.iterator(); i.hasNext();) { SelectionKey key = i.next(); i.remove(); Channel c = key.channel(); if (key.isAcceptable() && c == tcpserver) { new TCPThread(tcpserver.accept().socket()).start(); } else if (key.isReadable() && c == udpserver) { new UDPThread(udpserver.socket()).start(); } } } catch (Exception e) { e.printStackTrace(); } } } catch (Exception e) { e.printStackTrace(); System.err.println(e); System.exit(1); } } } The UDPThread code: public class UDPThread extends Thread { private DatagramSocket socket = null; public UDPThread(DatagramSocket socket) { super("UDPThread"); this.socket = socket; } @Override public void run() { byte[] buffer = new byte[2048]; try { DatagramPacket packet = new DatagramPacket(buffer, buffer.length); socket.receive(packet); String inputLine = new String(buffer); String outputLine = Utils.processCommand(inputLine.trim()); DatagramPacket reply = new DatagramPacket(outputLine.getBytes(), outputLine.getBytes().length, packet.getAddress(), packet.getPort()); socket.send(reply); } catch (IOException e) { e.printStackTrace(); } socket.close(); } } I receive: Exception in thread "UDPThread" java.nio.channels.IllegalBlockingModeException at sun.nio.ch.DatagramSocketAdaptor.receive(Unknown Source) at server.UDPThread.run(UDPThread.java:25) 10x

    Read the article

  • Why does @PostConstruct callback fire every time even though bean is @ViewScoped? JSF

    - by Nitesh Panchal
    Hello, I am using datatable on page and using binding attribute to bind it to my backing bean. This is my code :- <?xml version='1.0' encoding='UTF-8' ?> <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd"> <html xmlns="http://www.w3.org/1999/xhtml" xmlns:h="http://java.sun.com/jsf/html" xmlns:p="http://primefaces.prime.com.tr/ui"> <h:head> <title>Facelet Title</title> </h:head> <h:body> <h:form prependId="false"> <h:dataTable var="item" value="#{testBean.stringCollection}" binding="#{testBean.dataTable}"> <h:column> <h:outputText value="#{item}"/> </h:column> <h:column> <h:commandButton value="Click" actionListener="#{testBean.action}"/> </h:column> </h:dataTable> </h:form> </h:body> </html> This is my bean :- package managedBeans; import java.io.Serializable; import java.util.ArrayList; import java.util.List; import javax.annotation.PostConstruct; import javax.faces.bean.ManagedBean; import javax.faces.bean.ViewScoped; import javax.faces.component.html.HtmlDataTable; @ManagedBean(name="testBean") @ViewScoped public class testBean implements Serializable { private List<String> stringCollection; public List<String> getStringCollection() { return stringCollection; } public void setStringCollection(List<String> stringCollection) { this.stringCollection = stringCollection; } private HtmlDataTable dataTable; public HtmlDataTable getDataTable() { return dataTable; } public void setDataTable(HtmlDataTable dataTable) { this.dataTable = dataTable; } @PostConstruct public void init(){ System.out.println("Post Construct fired!!"); stringCollection = new ArrayList<String>(); stringCollection.add("a"); stringCollection.add("b"); stringCollection.add("c"); } public void action(){ System.out.println("Clicked!!"); } } Please tell me why is the @PostConstruct firing each and every time i click on button? It should fire only once as long as i am on same page beacause my bean is @ViewScoped. Further, if i remove the binding attribute then everything works fine and @PostConstruct callback fires only once. Then why every time when i use binding attribute? I need binding attribute and want to perform initialisation tasks like fetching data from webservice, etc only once. What should i do? Where should i write my initialisation task?

    Read the article

  • Asp.net Mvc - Kigg: Maintain User object in HttpContext.Items between requests.

    - by Pickels
    Hallo, first I want to say that I hope this doesn't look like I am lazy but I have some trouble understanding a piece of code from the following project. http://kigg.codeplex.com/ I was going through the source code and I noticed something that would be usefull for my own little project I am making. In their BaseController they have the following code: private static readonly Type CurrentUserKey = typeof(IUser); public IUser CurrentUser { get { if (!string.IsNullOrEmpty(CurrentUserName)) { IUser user = HttpContext.Items[CurrentUserKey] as IUser; if (user == null) { user = AccountRepository.FindByClaim(CurrentUserName); if (user != null) { HttpContext.Items[CurrentUserKey] = user; } } return user; } return null; } } This isn't an exact copy of the code I adjusted it a little to my needs. This part of the code I still understand. They store their IUser in HttpContext.Items. I guess they do it so that they don't have to call the database eachtime they need the User object. The part that I don't understand is how they maintain this object in between requests. If I understand correctly the HttpContext.Items is a per request cache storage. So after some more digging I found the following code. internal static IDictionary<UnityPerWebRequestLifetimeManager, object> GetInstances(HttpContextBase httpContext) { IDictionary<UnityPerWebRequestLifetimeManager, object> instances; if (httpContext.Items.Contains(Key)) { instances = (IDictionary<UnityPerWebRequestLifetimeManager, object>) httpContext.Items[Key]; } else { lock (httpContext.Items) { if (httpContext.Items.Contains(Key)) { instances = (IDictionary<UnityPerWebRequestLifetimeManager, object>) httpContext.Items[Key]; } else { instances = new Dictionary<UnityPerWebRequestLifetimeManager, object>(); httpContext.Items.Add(Key, instances); } } } return instances; } This is the part where some magic happens that I don't understand. I think they use Unity to do some dependency injection on each request? In my project I am using Ninject and I am wondering how I can get the same result. I guess InRequestScope in Ninject is the same as UnityPerWebRequestLifetimeManager? I am also wondering which class/method they are binding to which interface? Since the HttpContext.Items get destroyed each request how do they prevent losing their user object? Anyway it's kinda a long question so I am gradefull for any push in the right direction. Kind regards, Pickels

    Read the article

  • Unable to execute native sql query

    - by Renjith
    I am developing an application with Spring and hibernate. In the DAO class, I was trying to execute a native sql as follows: SELECT * FROM product ORDER BY unitprice ASC LIMIT 6 OFFSET 0 But the system throws an exception. org.hibernate.HibernateException: No Hibernate Session bound to thread, and configuration does not allow creation of non-transactional one here org.springframework.orm.hibernate3.SpringSessionContext.currentSession(SpringSessionContext.java:63) org.hibernate.impl.SessionFactoryImpl.getCurrentSession(SessionFactoryImpl.java:544) com.dao.ProductDAO.listProducts(ProductDAO.java:15) com.dataobjects.impl.ProductDoImpl.listProducts(ProductDoImpl.java:26) com.action.ProductAction.showProducts(ProductAction.java:53) sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) application-context.xml is show below <bean id="propertyConfigurer" class="org.springframework.beans.factory.config.PropertyPlaceholderConfigurer" p:location="/WEB-INF/jdbc.properties" /> <bean id="dataSource" class="org.springframework.jdbc.datasource.DriverManagerDataSource" p:driverClassName="${jdbc.driverClassName}" p:url="${jdbc.url}" p:username="${jdbc.username}" p:password="${jdbc.password}" /> <!-- Hibernate SessionFactory --> <!-- class="org.springframework.orm.hibernate3.LocalSessionFactoryBean"--> <bean id="sessionFactory" class="org.springframework.orm.hibernate3.LocalSessionFactoryBean"> <property name="dataSource"> <ref local="dataSource"/> </property> <property name="configLocation"> <value>WEB-INF/classes/hibernate.cfg.xml</value> </property> <property name="configurationClass"> <value>org.hibernate.cfg.AnnotationConfiguration</value> </property> <!-- <property name="annotatedClasses"> <list> <value>com.pojo.Product</value> <value>com.pojo.User</value> <value>com.pojo.UserLogin</value> </list> </property> --> <property name="hibernateProperties"> <props> <prop key="hibernate.dialect">${hibernate.dialect}</prop> <prop key="hibernate.show_sql">true</prop> </props> </property> </bean> <!-- User Bean definitions --> <bean name="/logincheck" class="com.action.LoginAction"> <property name="userDo" ref="userDo" /> </bean> <bean id="userDo" class="com.dataobjects.impl.UserDoImpl" > <property name="userDAO" ref="userDAO" /> </bean> <bean id="userDAO" class="com.dao.UserDAO" > <property name="sessionFactory" ref="sessionFactory" /> </bean> <bean name="/listproducts" class="com.action.ProductAction"> <property name="productDo" ref="productDo" /> </bean> <bean id="productDo" class="com.dataobjects.impl.ProductDoImpl" > <property name="productDAO" ref="productDAO" /> </bean> <bean id="productDAO" class="com.dao.ProductDAO" > <property name="sessionFactory" ref="sessionFactory" /> </bean> And DAO class is public class ProductDAO extends HibernateDaoSupport{ public List listProducts(int startIndex, int incrementor) { org.hibernate.Session session = getHibernateTemplate().getSessionFactory().getCurrentSession(); String queryString = "SELECT * FROM product ORDER BY unitprice ASC LIMIT 6 OFFSET 0"; List list = null; try{ session.beginTransaction(); org.hibernate.Query query = session.createQuery(queryString); list = query.list(); session.getTransaction().commit(); } catch(Exception e) { e.printStackTrace(); } finally { session.close(); } return list; } public List getProductCount() { String queryString = "SELECT COUNT(*) FROM Product"; return getHibernateTemplate().find(queryString); } } Any thoughts to fix it up?

    Read the article

  • DB Design Pattern - Many to many classification / categorised tagging.

    - by Robin Day
    I have an existing database design that stores Job Vacancies. The "Vacancy" table has a number of fixed fields across all clients, such as "Title", "Description", "Salary range". There is an EAV design for "Custom" fields that the Clients can setup themselves, such as, "Manager Name", "Working Hours". The field names are stored in a "ClientText" table and the data stored in a "VacancyClientText" table with VacancyId, ClientTextId and Value. Lastly there is a many to many EAV design for custom tagging / categorising the vacancies with things such as Locations/Offices the vacancy is in, a list of skills required. This is stored as a "ClientCategory" table listing the types of tag, "Locations, Skills", a "ClientCategoryItem" table listing the valid values for each Category, e.g., "London,Paris,New York,Rome", "C#,VB,PHP,Python". Finally there is a "VacancyClientCategoryItem" table with VacancyId and ClientCategoryItemId for each of the selected items for the vacancy. There are no limits to the number of custom fields or custom categories that the client can add. I am now designing a new system that is very similar to the existing system, however, I have the ability to restrict the number of custom fields a Client can have and it's being built from scratch so I have no legacy issues to deal with. For the Custom Fields my solution is simple, I have 5 additional columns on the Vacancy Table called CustomField1-5. This removes one of the EAV designs. It is with the tagging / categorising design that I am struggling. If I limit a client to having 5 categories / types of tag. Should I create 5 tables listing the possible values "CustomCategoryItems1-5" and then an additional 5 many to many tables "VacancyCustomCategoryItem1-5" This would result in 10 tables performing the same storage as the three tables in the existing system. Also, should (heaven forbid) the requirements change in that I need 6 custom categories rather than 5 then this will result in a lot of code change. Therefore, can anyone suggest any DB Design Patterns that would be more suitable to storing such data. I'm happy to stick with the EAV approach, however, the existing system has come across all the usual performance issues and complex queries associated with such a design. Any advice / suggestions are much appreciated. The DBMS system used is SQL Server 2005, however, 2008 is an option if required for any particular pattern.

    Read the article

  • ASP.NET MVC 4/Web API Single Page App for Mobile Devices ... Needs Authentication

    - by lmttag
    We have developed an ASP.NET MVC 4/Web API single page, mobile website (also using jQuery Mobile) that is intended to be accessed only from mobile devices (e.g., iPads, iPhones, Android tables and phones, etc.), not desktop browsers. This mobile website will be hosted internally, like an intranet site. However, since we’re accessing it from mobile devices, we can’t use Windows authentication. We still need to know which user (and their role) is logging in to the mobile website app. We tried simply using ASP.NET’s forms authentication and membership provider, but couldn’t get it working exactly the way we wanted. What we need is for the user to be prompted for a user name and password only on the first time they access the site on their mobile device. After they enter a correct user name and password and have been authenticated once, each subsequent time they access the site they should just go right in. They shouldn’t have to re-enter their credentials (i.e., something needs to be saved locally to each device to identify the user after the first time). This is where we had troubles. Everything worked as expected the first time. That is, the user was prompted to enter a user name and password, and, after doing that, was authenticated and allowed into the site. The problem is every time after the browser was closed on the mobile device, the device and user were not know and the user had to re-enter user name and password. We tried lots of things too. We tried setting persistent cookies in JavaScript. No good. The cookies weren’t there to be read the second time. We tried manually setting persistent cookies from ASP.NET. No good. We, of course, used FormsAuthentication.SetAuthCookie(model.UserName, true); as part of the form authentication framework. No good. We tried using HTML5 local storage. No good. No matter what we tried, if the user was on a mobile device, they would have to log in every single time. (Note: we’ve tried on an iPad and iPhone running both iOS 5.1 and 6.0, with Safari configure to allow cookies, and we’ve tried on Android 2.3.4.) Is there some trick to getting a scenario like this working? Or, do we have to write some sort of custom authentication mechanism? If so, how? And, what? Or, should we use something like claims-based authentication and WIF? Or??? Any help is appreciated. Thanks!

    Read the article

  • Difference between Cloud and Virtualization

    - by Akash Kava
    Ops: This does not belong to ServerFault because it focuses on Programing Architecture. I have following questions regarding differences between Cloud and Virtualization.. How Cloud is different then Virtualization? Currently I tried to find out pricing of Rackspace, Amazone and all similar cloud providers, I found that our current 6 dedicated servers came cheaper then their pricing. So how one can claim cloud is cheaper? Is it cheaper only in comparison of normal hosting? We re organized our infrastructure in virtual environment to reduce or configuration overhead at time of failure, we did not have to rewrite any peice of code that is already written for earlier setup. So moving to virtualization does not require any re programming. But cloud is absoltely different and it will require entire reprogramming right? Is it really worth to recode when our current IT costs are 3-4 times lower then cloud hosting including raid backups and all sort of clustering for high availability? New programming architecture means new overheads of training staff, new methods of testing and new deployment schemes, does it justify over "on demand resource usage" words of cloud? We are having current development architecture with simple Server side ASP.NET WebServices with no local context and on client side Flex/Silverlight which offers pretty good REST architecture and its highly scalable. How does cloud differs from REST model of deployment? On storage, SQL Server or MySQL offers pretty good replication and high availibility then what is advantage in cloud? Data guarantee, one of our vendor hosting some other customer's app on cloud (one of most used), lost Entire Hard Disk (the virtual) and entire module in first 6 months. Second provider said its your duty to take backup, fine I agree, but no provider gives SLA for data guarantee, they give 99% uptime. However in most business apps, uptime is less important then data integrity. In our 10 years of dedicated hosting experience we had only one hard disk crash. This makes me little skeptical to go for cloud and loosing control over data. And I feel its just a big marketing buzz to sell virtulization in different form. Size of data, currently all providers charge very heavy for large data, if you are hosting only below 100GB cloud can be good alternative, but I think virtual servers and dedicated servers above 100GB to few TBs are still cheaper. Why would want to pay so high on cloud when there is no data guarentee as well as it doesnt say anything about redundancy. (I wish SO had something for spell check for Internet Explorer, sorry for wrong spellings in my post)

    Read the article

  • Why does this Java code not utilize all CPU cores?

    - by ReneS
    The attached simple Java code should load all available cpu core when starting it with the right parameters. So for instance, you start it with java VMTest 8 int 0 and it will start 8 threads that do nothing else than looping and adding 2 to an integer. Something that runs in registers and not even allocates new memory. The problem we are facing now is, that we do not get a 24 core machine loaded (AMD 2 sockets with 12 cores each), when running this simple program (with 24 threads of course). Similar things happen with 2 programs each 12 threads or smaller machines. So our suspicion is that the JVM (Sun JDK 6u20 on Linux x64) does not scale well. Did anyone see similar things or has the ability to run it and report whether or not it runs well on his/her machine (= 8 cores only please)? Ideas? I tried that on Amazon EC2 with 8 cores too, but the virtual machine seems to run different from a real box, so the loading behaves totally strange. package com.test; import java.util.concurrent.ExecutorService; import java.util.concurrent.Executors; import java.util.concurrent.Future; import java.util.concurrent.TimeUnit; public class VMTest { public class IntTask implements Runnable { @Override public void run() { int i = 0; while (true) { i = i + 2; } } } public class StringTask implements Runnable { @Override public void run() { int i = 0; String s; while (true) { i++; s = "s" + Integer.valueOf(i); } } } public class ArrayTask implements Runnable { private final int size; public ArrayTask(int size) { this.size = size; } @Override public void run() { int i = 0; String[] s; while (true) { i++; s = new String[size]; } } } public void doIt(String[] args) throws InterruptedException { final String command = args[1].trim(); ExecutorService executor = Executors.newFixedThreadPool(Integer.valueOf(args[0])); for (int i = 0; i < Integer.valueOf(args[0]); i++) { Runnable runnable = null; if (command.equalsIgnoreCase("int")) { runnable = new IntTask(); } else if (command.equalsIgnoreCase("string")) { runnable = new StringTask(); } Future<?> submit = executor.submit(runnable); } executor.awaitTermination(1, TimeUnit.HOURS); } public static void main(String[] args) throws InterruptedException { if (args.length < 3) { System.err.println("Usage: VMTest threadCount taskDef size"); System.err.println("threadCount: Number 1..n"); System.err.println("taskDef: int string array"); System.err.println("size: size of memory allocation for array, "); System.exit(-1); } new VMTest().doIt(args); } }

    Read the article

  • Java, LDAP: Make it not ignore blank passwords?

    - by Steve
    I'm maintaining some legacy Java LDAP code. I know next to nothing about LDAP. The program below basically just sends the userid and password to the LDAP server, receives notification back if the credentials are good. If so, it prints out the LDAP attributes received from the LDAP server, if not it prints out an exception. All works well if a bad password is given. An "invalid credentials" exception gets thrown. However, if a blank password is sent to the LDAP Server, authentication will still happen, LDAP attributes will still be returned. Is this unhappy situation due to the LDAP server allowing blank passwords, or does the code below need to be adjusted such a blank password will get fed to the LDAP server in such a way so it will get rejected? I do have data validation in place. I took it off in a testing environment to solve another issue and noticed this problem. I would prefer not to have this problem underneath the data validation. Thanks much in advance for any information import javax.naming.*; import javax.naming.directory.*; import java.util.*; import java.sql.*; public class LDAPTEST { public static void main(String args[]) { String lcf = "com.sun.jndi.ldap.LdapCtxFactory"; String ldapurl = "ldaps://ldap-cit.smew.acme.com:636/o=acme.com"; String loginid = "George.Jetson"; String password = ""; DirContext ctx = null; Hashtable env = new Hashtable(); Attributes attr = null; Attributes resultsAttrs = null; SearchResult result = null; NamingEnumeration results = null; int iResults = 0; int iAttributes = 0; env.put(Context.INITIAL_CONTEXT_FACTORY, lcf); env.put(Context.PROVIDER_URL, ldapurl); env.put(Context.SECURITY_PROTOCOL, "ssl"); env.put(Context.SECURITY_AUTHENTICATION, "simple"); env.put(Context.SECURITY_PRINCIPAL, "uid=" + loginid + ",ou=People,o=acme.com"); env.put(Context.SECURITY_CREDENTIALS, password); try { ctx = new InitialDirContext(env); attr = new BasicAttributes(true); attr.put(new BasicAttribute("uid",loginid)); results = ctx.search("ou=People",attr); while (results.hasMore()) { result = (SearchResult)results.next(); resultsAttrs = result.getAttributes(); for (NamingEnumeration enumAttributes = resultsAttrs.getAll(); enumAttributes.hasMore();) { Attribute a = (Attribute)enumAttributes.next(); System.out.println("attribute: " + a.getID() + " : " + a.get().toString()); iAttributes++; }// end for loop iResults++; }// end while loop System.out.println("Records == " + iResults + " Attributes: " + iAttributes); }// end try catch (Exception e) { e.printStackTrace(); } }// end function main() }// end class LDAPTEST

    Read the article

  • Rewrite code from Threads to AnyEvent

    - by user1779868
    I wrote this code:

        use LWP::UserAgent;
        use HTTP::Cookies;
        use threads;
        use threads::shared;

        $| = 1;
        $threads = 50;

        my @urls : shared = loadf('url.txt');
        my @thread_list = ();
        $thread_list[$_] = threads->create(\&thread) for 0 .. $threads - 1;
        $_->join for @thread_list;
        thread();

        sub thread {
            my ($web, $ck) = browser();
            while (1) {
                my $url = shift @urls;
                if (!$url) { last; }
                $code = $web->get($url)->code;
                print "[+] $url - code: $code\n";
                if ($code == 200) {
                    open F, ">>200.txt"; print F $url . "\n"; close F;
                } elsif ($code == 301) {
                    open F, ">>301.txt"; print F $url . "\n"; close F;
                } else {
                    open F, ">>else.txt"; print F "$url code - $code\n"; close F;
                }
            }
        }

        sub loadf {
            open(F, "<" . $_[0]) or erroropen($_[0]);
            chomp(my @data = <F>);
            close F;
            return @data;
        }

        sub browser {
            my $web = new LWP::UserAgent;
            my $ck = new HTTP::Cookies;
            $web->cookie_jar($ck);
            $web->agent('Opera/9.80 (Windows 7; U; en) Presto/2.9.168 Version/11.50');
            $web->timeout(5);
            return $web, $ck;
        }

    After it has been running for a while, physical memory fills up. Can you help me rewrite it with AnyEvent? I tried, but my code didn't work. I've read that it would help me save some memory. Thanks a lot to any helpers.
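    Not the poster's code, but a minimal AnyEvent sketch of the same loop, assuming AnyEvent and AnyEvent::HTTP are installed. The 50-worker cap is kept with a plain counter, and the response body is dropped in the callback so nothing accumulates:

        use strict;
        use warnings;
        use AnyEvent;
        use AnyEvent::HTTP;

        my $max_concurrent = 50;
        my $active = 0;
        my @urls = loadf('url.txt');
        my $cv = AE::cv;

        sub loadf {
            open my $fh, '<', $_[0] or die "Cannot open $_[0]: $!";
            chomp(my @data = <$fh>);
            close $fh;
            return @data;
        }

        sub log_to {
            my ($file, $line) = @_;
            open my $fh, '>>', $file or die "Cannot open $file: $!";
            print $fh "$line\n";
            close $fh;
        }

        sub next_request {
            while ($active < $max_concurrent) {
                my $url = shift @urls or return;
                $active++;
                $cv->begin;
                http_get $url, timeout => 5, sub {
                    my (undef, $hdr) = @_;   # body discarded; only the status is needed
                    my $code = $hdr->{Status};
                    print "[+] $url - code: $code\n";
                    if    ($code == 200) { log_to('200.txt', $url) }
                    elsif ($code == 301) { log_to('301.txt', $url) }
                    else                 { log_to('else.txt', "$url code - $code") }
                    $active--;
                    $cv->end;
                    next_request();          # pull the next URL from the queue
                };
            }
        }

        $cv->begin;      # guard so an empty url.txt still lets wait() return
        next_request();
        $cv->end;
        $cv->wait;

    A single event loop replaces the 50 OS threads (each of which carries its own Perl interpreter and LWP stack under ithreads), which is where the memory usually goes.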

    Read the article

  • Is there a way to efficiently yield every file in a directory containing millions of files?

    - by Josh Smeaton
    I'm aware of os.listdir, but as far as I can gather, it reads all the filenames in a directory into memory and then returns the list. What I want is a way to yield a filename, work on it, and then yield the next one, without reading them all into memory first. Is there any way to do this?

    I worry about the case where filenames change, new files are added, and files are deleted while using such a method. Some iterators prevent you from modifying the collection during iteration, essentially by taking a snapshot of the collection's state at the beginning and comparing that state on each move operation. If there is an iterator capable of yielding filenames from a path, does it raise an error on filesystem changes (files added, removed, or renamed within the iterated directory) that modify the collection? A few cases could potentially make the iterator fail, and it all depends on how the iterator maintains state.

    Using S.Lott's example:

        filea.txt
        fileb.txt
        filec.txt

    The iterator yields filea.txt. During processing, filea.txt is renamed to filey.txt and fileb.txt is renamed to filez.txt. When the iterator attempts to get the next file, if it used the filename filea.txt to find its current position, and filea.txt is no longer there, what happens? It may not be able to recover its position in the collection. Similarly, if the iterator were to fetch fileb.txt while yielding filea.txt, it could look up fileb.txt's position, fail, and produce an error. If the iterator instead somehow maintained an index (dir.get_file(0)), positional state would be unaffected, but some files could be missed, as their indexes could move to an index 'behind' the iterator.

    This is all theoretical, of course, since there appears to be no built-in (Python) way of iterating lazily over the files in a directory. There are some great answers below, however, that solve the problem using queues and notifications.

    Edit: the OS of concern is Red Hat. My use case is this: process A continuously writes files to a storage location; process B (the one I'm writing) iterates over those files, does some processing based on the filename, and moves the files to another location.

    Edit: definition of valid: adjective, 1. well grounded or justifiable, pertinent. (Sorry S.Lott, I couldn't resist.) I've edited the paragraph in question above.
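    A sketch of the lazy approach (mine, not from the question): newer interpreters expose os.scandir (Python 3.5+; the scandir package on PyPI backports it to older versions), which streams entries from the OS-level readdir() instead of building one big list:

        import os

        def iter_files(path):
            # os.scandir wraps the OS-level readdir(), so names arrive from
            # the kernel in small batches instead of being materialized as
            # one giant in-memory list the way os.listdir does.
            for entry in os.scandir(path):
                if entry.is_file(follow_symlinks=False):
                    yield entry.name

        # Hypothetical usage for the process-B loop described above:
        # for name in iter_files('/var/incoming'):
        #     process_and_move(name)   # illustrative handler, not a real API

    What readdir() reports for files added or removed mid-scan is left to the filesystem, so process B should tolerate both seeing and missing such files, and treat a failed move as "someone else got there first".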

    Read the article

  • sys.path() and PYTHONPATH issues

    - by Justin
    I've been learning Python (I'm working in 2.7.3) and I'm trying to understand import statements. The documentation says that when you attempt to import a module, the interpreter first searches the built-in modules. What is meant by a built-in module? Then, the documentation says, the interpreter searches the directories listed in sys.path, and sys.path is initialized from these sources:

        the directory containing the input script (or the current directory).
        PYTHONPATH (a list of directory names, with the same syntax as the shell variable PATH).
        the installation-dependent default.

    Here is sample sys.path output from my computer, using Python in command-line mode (I deleted a few entries so it wouldn't be huge):

        ['', '/usr/lib/python2.7', '/usr/lib/python2.7/lib-old', '/usr/lib/python2.7/lib-dynload', '/usr/local/lib/python2.7/dist-packages', '/usr/lib/python2.7/dist-packages', '/usr/lib/python2.7/dist-packages/PIL', '/usr/lib/python2.7/dist-packages/gst-0.10', '/usr/lib/python2.7/dist-packages/gtk-2.0', '/usr/lib/pymodules/python2.7', '/usr/lib/python2.7/dist-packages/ubuntuone-couch', '/usr/lib/python2.7/dist-packages/ubuntuone-storage-protocol']

    Now, I'm assuming the '' path refers to the directory containing the script, so I figured the rest would come from my PYTHONPATH environment variable. However, when I go to the terminal and type env, PYTHONPATH doesn't exist as an environment variable. I also tried import os and then os.environ, with the same result. Do I really not have a PYTHONPATH environment variable? I don't believe I ever defined one, but I assumed that installing new packages automatically altered it. If I don't have a PYTHONPATH, how does my sys.path get populated? If I download new packages, how does Python know where to look for them without that variable?

    How do environment variables work in general? From what I understand, environment variables are specific to the process for which they are set; yet if I open multiple terminal windows and run env, they all display a number of identical variables, for example PATH. I know there are file locations for persistent environment variables, for example /etc/environment, which contains my PATH variable. Is it possible to tell where a persistent environment variable is stored? What is the recommended location for storing new persistent environment variables? And how do environment variables actually interact with, say, the Python interpreter? The interpreter looks for PYTHONPATH, but how does that work at the nitty-gritty level?
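    A quick way to see where each entry comes from (my own probe script, not from the question):

        from __future__ import print_function  # keeps the probe runnable on 2.7

        import os
        import sys

        # PYTHONPATH contributes to sys.path only when it is actually set in
        # the environment; when unset it is simply skipped, which is why `env`
        # can show nothing while sys.path is still fully populated.
        print("PYTHONPATH =", os.environ.get("PYTHONPATH", "<not set>"))

        # The remaining entries come from the interpreter build (sys.prefix)
        # and from the site module, which runs at startup and appends the
        # dist-packages/site-packages directories; that is how installed
        # packages are found without any PYTHONPATH.
        print("prefix =", sys.prefix)
        for p in sys.path:
            print(repr(p))  # '' stands for the script's directory (or the cwd)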

    Read the article

  • Android: How to properly exit application when inconsistent condition is unavoidable?

    - by Bevor
    First of all, I have already read the discussion about how it isn't a good idea to exit an Android application manually, but in my case it seems to be needed. I have an AsyncTask which does a lot of work in the background: downloading data, saving it to local storage, and preparing it for use in the application. It can happen that there is no internet connection, or something else goes wrong. For all those cases I have exception handling that returns the result, and if there is an exception the application is unusable, so I need to exit it. My question is: do I have to do any unregistration, unloading, or unbinding when I exit the application from code, or is System.exit(0) OK? I do all this in an AsyncTask; see my example:

        public class InitializationTask extends AsyncTask<Void, Void, InitializationResult> {
            private ProcessController processController = new ProcessController();
            private ProgressDialog progressDialog;
            private Activity mainActivity;

            public InitializationTask(Activity mainActivity) {
                this.mainActivity = mainActivity;
            }

            @Override
            protected void onPreExecute() {
                super.onPreExecute();
                progressDialog = new ProgressDialog(mainActivity);
                progressDialog.setMessage("Preparing the data.\nPlease wait...");
                progressDialog.setIndeterminate(true); // the "loading amount" is not measured
                progressDialog.setCancelable(false);
                progressDialog.show();
            }

            @Override
            protected InitializationResult doInBackground(Void... params) {
                return processController.initializeData();
            }

            @Override
            protected void onPostExecute(InitializationResult result) {
                super.onPostExecute(result);
                progressDialog.dismiss();
                if (!result.isValid()) {
                    AlertDialog.Builder dialog = new AlertDialog.Builder(mainActivity);
                    dialog.setTitle("Initialization error");
                    dialog.setMessage(result.getReason());
                    dialog.setPositiveButton("Ok", new DialogInterface.OnClickListener() {
                        @Override
                        public void onClick(DialogInterface dialog, int which) {
                            dialog.cancel();
                            // TODO cancel application
                            System.exit(0);
                        }
                    });
                    dialog.show();
                }
            }
        }
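    For comparison, a gentler exit than System.exit(0), sketched as a replacement for just the click handler (finishAffinity() needs API level 16 or higher):

        dialog.setPositiveButton("Ok", new DialogInterface.OnClickListener() {
            @Override
            public void onClick(DialogInterface d, int which) {
                d.dismiss();
                mainActivity.finish();        // unwind the hosting activity normally
                // On API 16+ this closes the whole task stack instead:
                // mainActivity.finishAffinity();
            }
        });

    finish() lets onPause/onStop/onDestroy run, so anything the activity registered gets its normal teardown, which System.exit(0) skips entirely.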

    Read the article

  • Java JNI leak in a C++ process

    - by user662056
    Hi all. I am a beginner in Java. My problem is this: I am calling a Java class's method from C++, using JNI. Everything works correctly, but I have memory leaks in the C++ program's process. So I made a simple example:

    1) I create a Java VM (jint res = JNI_CreateJavaVM(&jvm, (void**)&env, &vm_args);)
    2) then I get a pointer to the Java class (jclass cls = env->FindClass("test_jni");)
    3) after that I create a Java object by calling the constructor (testJavaObject = env->NewObject(cls, testConstruct);) AT THIS very moment the C++ process allocates 10 MB of memory
    4) Next I delete the class, the object, and the Java VM. AT THIS very moment the 10 MB of memory is not freed.

    So below are a few lines of code of the C++ program:

        int main() {
            {
                // Env
                JNIEnv *env;
                // Java virtual machine
                JavaVM *jvm;

                JavaVMOption* options = new JavaVMOption[1];
                // class paths
                options[0].optionString = "-Djava.class.path=C:/Sun/SDK/jdk/lib;D:/jms_test/java_jni_leak;";

                // other options
                JavaVMInitArgs vm_args;
                vm_args.version = JNI_VERSION_1_6;
                vm_args.options = options;
                vm_args.nOptions = 1;
                vm_args.ignoreUnrecognized = false;

                // allocate a chunk of memory (for the test) before CreateJavaVM
                char* testMem0 = new char[1000];
                for (int i = 0; i < 1000; ++i) testMem0[i] = 'a';

                // create the Java VM
                jint res = JNI_CreateJavaVM(&jvm, (void**)&env, &vm_args);

                // allocate a chunk of memory (for the test) after CreateJavaVM
                char* testMem1 = new char[1000];
                for (int i = 0; i < 1000; ++i) testMem1[i] = 'b';

                // look up the Java class
                jclass cls = env->FindClass("test_jni");
                // id of the class constructor
                jmethodID testConstruct = env->GetMethodID(cls, "<init>", "()V");
                // calling the constructor allocates 10 MB in the C++ process
                jobject testJavaObject = env->NewObject(cls, testConstruct);

                // DeleteLocalRef: at this very moment the memory is not freed
                env->DeleteLocalRef(testJavaObject);
                env->DeleteLocalRef(cls);

                res = jvm->DestroyJavaVM();

                delete[] testMem0;
                delete[] testMem1;
                // at this very moment the memory is still not freed
            }
            int gg = 0;   // breakpoint anchor
            return 0;
        }

    The Java class (it just allocates some memory):

        import java.util.*;

        public class test_jni {
            ArrayList<String> testStringList;

            test_jni() {
                System.out.println("start constructor");
                testStringList = new ArrayList<String>();
                for (int i = 0; i < 1000000; ++i) {
                    // allocate some memory
                    testStringList.add("TEEEEEEEEEEEEEEEEST");
                }
            }
        }

    Process memory view after creating the JavaVM and the Java object (testMem0 and testMem1 are the test blocks allocated by C++):

        testMem0
        JNI_CreateJavaVM
        testMem1
        jobject testJavaObject = env->NewObject(cls, testConstruct);

    Process memory view after destroying the JavaVM and deleting the object reference (testMem0 and testMem1 are deleted too):

        JNI_CreateJavaVM
        jobject testJavaObject = env->NewObject(cls, testConstruct);

    So testMem0 and testMem1 are deleted, but the JavaVM and the Java object are not. What am I doing wrong, and how can I free that memory in the C++ process?
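    One relevant point, mine rather than from the post: DeleteLocalRef only drops the JNI handle. The objects live on the Java heap, which the VM reserves from the process and manages itself, and HotSpot's DestroyJavaVM does not unload the VM or hand that heap back to the OS, so the reservation outlives the objects. What you can do is bound the heap when creating the VM; a sketch that drops into the existing vm_args setup, using standard HotSpot flags:

        // Sketch: cap the embedded VM's heap so the C++ process cannot grow
        // unbounded. -Xmx is a standard HotSpot option; pick a limit that fits.
        JavaVMOption options[2];
        options[0].optionString = const_cast<char*>("-Djava.class.path=D:/jms_test/java_jni_leak");
        options[1].optionString = const_cast<char*>("-Xmx32m");   // hard cap on the Java heap
        vm_args.nOptions = 2;
        vm_args.options = options;

    After DeleteLocalRef the 10 MB becomes collectible, but the garbage collector returns it to the VM's own heap pool, not to the operating system, which is exactly what the memory view shows.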

    Read the article

  • IntentService android download and return file to Activity

    - by Andrew G
    I have a fairly tricky situation and I'm trying to determine the best design for it. The basics: I'm designing a messaging system with an email-like interface. When a user clicks a message that has an attachment, an activity is spawned showing the text of that message along with a paper clip signaling the attachment. At this point I begin preloading the attachment so that it loads more quickly when the user clicks it. I never want to save attachments to persistent storage. Currently, when the user clicks the attachment, a loading dialog blocks until the download is complete, at which point a separate attachment-viewer activity is launched with the bitmap byte array passed in.

    The difficulty is supporting rotation as well as Home-button presses and so on. The download is currently done with a thread-and-handler setup. Instead, I'd like the flow to be: the user loads the message and the attachment preloads as before (invisible to the user). When the user clicks the attachment link, the viewer activity is spawned right away. If the download is done, the image is displayed; if not, a dialog is shown in THIS activity until it can be. Ideally the download never restarts, or else the preload was wasted.

    Obviously I need some persistent background process that keeps downloading and can call back to arbitrarily bound Activities. IntentService almost fits my needs: it does its work on a background thread and has the Service (non-UI) lifecycle. However, will it work for the rest? Common implementations get a Messenger from the caller Activity so that a Message object can be sent back to a Handler on the caller's thread. But what happens in my case when the caller Activity is stopped or destroyed and the currently active Activity (the attachment viewer) is the one showing? Is there some way to dynamically bind a new Activity to a running IntentService so that I can send a Message back to the new Activity?

    The other question is about the Message object: can I send arbitrarily large data back in it? Rather than report that "the file was downloaded", I need to send back the byte array of the downloaded file itself, since I never want to write it to disk (and yes, this needs to be the case). Any advice on achieving the behavior I want is greatly appreciated. I haven't been working with Android for long, and I often get confused about how best to handle asynchronous work across the Activity lifecycle, especially around orientation changes and Home-button presses.
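    A sketch of the usual workaround for the re-binding question (class and method names here are illustrative, not from the post): a ResultReceiver whose callback can be re-pointed at whichever Activity instance is currently alive, for example after a rotation. The receiver is passed to the service in the Intent and kept across configuration changes:

        import android.os.Bundle;
        import android.os.Handler;
        import android.os.ResultReceiver;

        public class DetachableResultReceiver extends ResultReceiver {
            public interface Receiver {
                void onReceiveResult(int resultCode, Bundle resultData);
            }

            private Receiver mReceiver;

            public DetachableResultReceiver(Handler handler) {
                super(handler);
            }

            // The new Activity calls this in onCreate/onResume; the old one
            // hands the receiver over via a retained fragment or
            // onRetainNonConfigurationInstance, so the running service never
            // notices the swap.
            public void setReceiver(Receiver receiver) {
                mReceiver = receiver;
            }

            @Override
            protected void onReceiveResult(int resultCode, Bundle resultData) {
                if (mReceiver != null) {
                    mReceiver.onReceiveResult(resultCode, resultData);
                }
            }
        }

    One caveat on shipping the raw bytes back through the Bundle: Binder transactions share a buffer of roughly 1 MB, so a large attachment can fail in transit. Keeping the byte array in an Application-scoped cache and sending back only a lookup key avoids both the disk and the size limit.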

    Read the article
