Search Results

Search found 13869 results on 555 pages for 'memory dump'.


  • Help with C# program design implementation: multiple array of lists or a better way?

    - by Bob
    I'm creating a 2D tile-based RPG in XNA and am in the initial design phase. I was thinking of how I want my tile engine to work and came up with a rough sketch. Basically I want a grid of tiles, but at each tile location I want to be able to add more than one tile and have an offset. I'd like this so that I could do something like add individual trees on the world map to give more flair. Or set bottles on a bar in some town without having to draw a bunch of different bar tiles with varying bottles. But maybe my reach is greater than my grasp. I went to implement the idea and had something like this in my Map object: List<Tile>[,] Grid; But then I thought about it. Let's say I had a world map of 200x200, which would actually be pretty small as far as RPGs go. That would amount to 40,000 Lists. To my mind I think there has to be a better way. Now this IS pre-mature optimization. I don't know if the way I happen to design my maps and game will be able to handle this, but it seems needlessly inefficient and something that could creep up if my game gets more complex. One idea I have is to make the offset and the multiple tiles optional so that I'm only paying for them when needed. But I'm not sure how I'd do this. A multiple array of objects? object[,] Grid; So here's my criteria: A 2D grid of tile locations Each tile location has a minimum of 1 tile, but can optionally have more Each extra tile can optionally have an x and y offset for pinpoint placement Can anyone help with some ideas for implementing such a design (don't need it done for me, just ideas) while keeping memory usage to a minimum? If you need more background here's roughly what my Map and Tile objects amount to: public struct Map { public Texture2D Texture; public List<Rectangle> Sources; //Source Rectangles for where in Texture to get the sprite public List<Tile>[,] Grid; } public struct Tile { public int Index; //Where in Sources to find the source Rectangle public int X, Y; //Optional offsets }
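
    A hedged sketch of one way to meet those criteria without paying for 40,000 lists: keep a plain Tile[,] for the mandatory base layer and put the optional extras in a dictionary keyed by grid coordinate, so memory is only spent on cells that actually carry decorations. The DecorTile name and the method names below are illustrative, not from the original post (Map is shown as a class rather than a struct so the dictionary can be initialised inline).

        using System.Collections.Generic;
        using Microsoft.Xna.Framework;          // Point, Rectangle
        using Microsoft.Xna.Framework.Graphics; // Texture2D

        public struct DecorTile
        {
            public int Index;                 // which source rectangle to draw
            public int OffsetX, OffsetY;      // pinpoint placement inside the cell
        }

        public class Map
        {
            public Texture2D Texture;
            public List<Rectangle> Sources;
            public Tile[,] Grid;              // exactly one base tile per cell

            // only cells that really have extras ever get an entry here
            private Dictionary<Point, List<DecorTile>> extras =
                new Dictionary<Point, List<DecorTile>>();

            public void AddExtra(int x, int y, DecorTile tile)
            {
                List<DecorTile> list;
                if (!extras.TryGetValue(new Point(x, y), out list))
                {
                    list = new List<DecorTile>();
                    extras.Add(new Point(x, y), list);
                }
                list.Add(tile);
            }

            public List<DecorTile> GetExtras(int x, int y)
            {
                List<DecorTile> list;
                return extras.TryGetValue(new Point(x, y), out list) ? list : null;
            }
        }

    The draw loop renders Grid[x, y] as usual and only consults GetExtras(x, y) when it returns something, so the common empty case costs one dictionary miss instead of a per-cell allocation.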

    Read the article

  • Neo4j 1.9.4 (REST server, Cypher) performance issue

    - by user2968943
    I have Neo4j 1.9.4 installed on 24 core 24Gb ram (centos) machine and for most queries CPU usage spikes goes to 200% with only few concurrent requests. Domain: some sort of social application where few types of nodes(profiles) with 3-30 text/array properties and 36 relationship types with at least 3 properties. Most of nodes currently has ~300-500 relationships. Current data set footprint(from console): LogicalLogSize=4294907 (32MB) ArrayStoreSize=1675520 (12MB) NodeStoreSize=1342170 (10MB) PropertyStoreSize=1739548 (13MB) RelationshipStoreSize=6395202 (48MB) StringStoreSize=1478400 (11MB) which is IMHO really small. most queries looks like this one(with more or less WITH .. MATCH .. statements and few queries with variable length relations but the often fast): START targetUser=node({id}), currentUser=node({current}) MATCH targetUser-[contact:InContactsRelation]->n, n-[:InLocationRelation]->l, n-[:InCategoryRelation]->c WITH currentUser, targetUser,n, l,c, contact.fav is not null as inFavorites MATCH n<-[followers?:InContactsRelation]-() WITH currentUser, targetUser,n, l,c,inFavorites, COUNT(followers) as numFollowers RETURN id(n) as id, n.name? as name, n.title? as title, n._class as _class, n.avatar? as avatar, n.avatar_type? as avatar_type, l.name as location__name, c.name as category__name, true as isInContacts, inFavorites as isInFavorites, numFollowers it runs in ~1s-3s(for first run) and ~1s-70ms (for consecutive and it depends on query) and there is about 5-10 queries runs for each impression. Another interesting behavior is when i try run query from console(neo4j) on my local machine many consecutive times(just press ctrl+enter for few seconds) it has almost constant execution time but when i do it on server it goes slower exponentially and i guess it somehow related with my problem. Problem: So my problem is that neo4j is very CPU greedy(for 24 core machine its may be not an issue but its obviously overkill for small project). First time i used AWS EC2 m1.large instance but over all performance was bad, during testing, CPU always was over 100%. Some relevant parts of configuration: neostore.nodestore.db.mapped_memory=1280M wrapper.java.maxmemory=8192 note: I already tried configuration where all memory related parameters where HIGH and it didn't worked(no change at all). Question: Where to digg? configuration? scheme? queries? what i'm doing wrong? if need more info(logs, configs) just ask ;)
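
    One thing worth checking (an assumption from the settings shown, not a certain diagnosis): in Neo4j 1.9 the store caches are tuned per store file in conf/neo4j.properties, and only the node store has extra mapped memory here, so the relationship and property stores fall back to small defaults; queries that expand hundreds of relationships per node then thrash those files and burn CPU. A minimal sketch, with the exact figures being illustrative:

        # conf/neo4j.properties -- sizes are illustrative; aim to cover the
        # corresponding *StoreSize figures reported by the console
        neostore.nodestore.db.mapped_memory=50M
        neostore.relationshipstore.db.mapped_memory=250M
        neostore.propertystore.db.mapped_memory=100M
        neostore.propertystore.db.strings.mapped_memory=100M
        neostore.propertystore.db.arrays.mapped_memory=50M

    With a store this small those figures already cover everything, so if the CPU stays pegged afterwards the bottleneck is more likely the queries themselves (and the 5-10 of them fired per impression) than the cache configuration.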

    Read the article

  • Few iPhone noob questions

    - by mshsayem
    Why should I declare local variables as 'static' inside a method? Like: static NSString *cellIdentifier = @"Cell"; Is it a performance advantage? (I know what 'static' does; in C context) What does this syntax mean?[someObj release], someObj = nil; Two statements? Why should I assign nil again? Is not 'release' enough? Should I do it for all objects I allocate/own? Or for just view objects? Why does everyone copy NSString, but retains other objects (in property declaration)? Yes, NSStrings can be changed, but other objects can be changed also, right? Then why 'copy' for just NSString, not for all? Is it just a defensive convention? Shouldn't I release constant NSString? Like here:NSString *CellIdentifier = @"Cell"; Why not? Does the compiler allocate/deallocate it for me? In some tutorial application I observed these (Built with IB): Properties(IBOutlet, with same ivar name): window, someLabel, someTextField, etc etc... In the dealloc method, although the window ivar was released, others were not. My question is: WHY? Shouldn't I release other ivars(labels, textField) as well? Why not? Say, I have 3 cascaded drop-down lists. I mean, based on what is selected on the first list, 2nd list is populated and based on what is selected on the second list, 3rd list is populated. What UI components can reflect this best? How is drop-down list presented in iPhone UI? Tableview with UIPicker? When should I update the 2nd, 3rd list? Or just three labels which have touch events? Can you give me some good example tutorials about Core-Data? (Not just simple data fetching and storing on 2/3 tables with 1/2 relationship) How can I know whether my app is leaking memory? Any tools?
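
    A short manual-retain-release sketch touching a few of these points (assuming non-ARC code, as in the question): the comma form really is two statements, setting the ivar to nil after the release guards against later messages hitting a dangling pointer, string literals are compiler-owned and never released, and every retained IBOutlet ivar should be released in dealloc, not just window.

        // two statements joined by the comma operator: release gives up ownership,
        // the nil assignment makes any later message to someObj a harmless no-op
        [someObj release], someObj = nil;

        // a literal lives for the life of the program; never release it --
        // 'static' only avoids re-initialising the local on every call
        static NSString *CellIdentifier = @"Cell";

        - (void)dealloc {
            // release every retained outlet/ivar this controller owns
            [someLabel release];
            [someTextField release];
            [window release];
            [super dealloc];
        }

    For the cascaded lists, a common pattern is one UITableView (or UIPickerView) per level, reloading the next level from tableView:didSelectRowAtIndexPath:; for leaks, Instruments' Leaks and Allocations templates are the standard tools.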

    Read the article

  • SQL Server CTE referred in self joins slow

    - by Kharlos Dominguez
    Hello, I have written a table-valued UDF that starts by a CTE to return a subset of the rows from a large table. There are several joins in the CTE. A couple of inner and one left join to other tables, which don't contain a lot of rows. The CTE has a where clause that returns the rows within a date range, in order to return only the rows needed. I'm then referencing this CTE in 4 self left joins, in order to build subtotals using different criterias. The query is quite complex but here is a simplified pseudo-version of it WITH DataCTE as ( SELECT [columns] FROM table INNER JOIN table2 ON [...] INNER JOIN table3 ON [...] LEFT JOIN table3 ON [...] ) SELECT [aggregates_columns of each subset] FROM DataCTE Main LEFT JOIN DataCTE BananasSubset ON [...] AND Product = 'Bananas' AND Quality = 100 LEFT JOIN DataCTE DamagedBananasSubset ON [...] AND Product = 'Bananas' AND Quality < 20 LEFT JOIN DataCTE MangosSubset ON [...] GROUP BY [ I have the feeling that SQL Server gets confused and calls the CTE for each self join, which seems confirmed by looking at the execution plan, although I confess not being an expert at reading those. I would have assumed SQL Server to be smart enough to only perform the data retrieval from the CTE only once, rather than do it several times. I have tried the same approach but rather than using a CTE to get the subset of the data, I used the same select query as in the CTE, but made it output to a temp table instead. The version referring the CTE version takes 40 seconds. The version referring the temp table takes between 1 and 2 seconds. Why isn't SQL Server smart enough to keep the CTE results in memory? I like CTEs, especially in this case as my UDF is a table-valued one, so it allowed me to keep everything in a single statement. To use a temp table, I would need to write a multi-statement table valued UDF, which I find a slightly less elegant solution. Did some of you had this kind of performance issues with CTE, and if so, how did you get them sorted? Thanks, Kharlos
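
    For what it is worth, a CTE is only an inline view: referencing DataCTE four times generally makes the optimizer expand and run the underlying joins four times, which matches what the plan showed. Inside a multi-statement table-valued UDF the materialise-once workaround uses a table variable (temp tables are not allowed in functions). A hedged sketch with hypothetical table and column names, just to show the shape:

        CREATE FUNCTION dbo.GetFruitReport (@from DATETIME, @to DATETIME)
        RETURNS @result TABLE (Product VARCHAR(50), TotalQty INT, PremiumBananaQty INT)
        AS
        BEGIN
            -- run the expensive joins and the date filter exactly once
            DECLARE @data TABLE (Product VARCHAR(50), Quality INT, Qty INT);

            INSERT INTO @data (Product, Quality, Qty)
            SELECT s.Product, s.Quality, s.Qty
            FROM dbo.Sales s
            INNER JOIN dbo.Products p ON p.Name = s.Product
            WHERE s.SaleDate BETWEEN @from AND @to;

            -- every later reference reuses the rows already sitting in @data;
            -- conditional aggregation can often replace the self left joins outright
            INSERT INTO @result (Product, TotalQty, PremiumBananaQty)
            SELECT d.Product,
                   SUM(d.Qty),
                   SUM(CASE WHEN d.Product = 'Bananas' AND d.Quality = 100
                            THEN d.Qty ELSE 0 END)
            FROM @data d
            GROUP BY d.Product;

            RETURN;
        END

    If the row count grows, a stored procedure writing to a real #temp table (which can be indexed and gets statistics) tends to behave better than a table variable, at the cost of giving up the single-statement UDF style.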

    Read the article

  • What's wrong with my destructor?

    - by Ahmed Sharara
    // Sparse Array Assignment.cpp : Defines the entry point for the console application. // #include "stdafx.h" #include<iostream> using namespace std; struct node{ int row; int col; int value; node* next_in_row; node* next_in_col; }; class MultiLinkedListSparseArray { private: char *logfile; node** rowPtr; node** colPtr; // used in constructor node* find_node(node* out); node* ins_node(node* ins,int col); node* in_node(node* ins,node* z); node* get(node* in,int row,int col); bool exist(node* so,int row,int col); node* dummy; int rowd,cold; //add anything you need public: MultiLinkedListSparseArray(int rows, int cols); ~MultiLinkedListSparseArray(); void setCell(int row, int col, int value); int getCell(int row, int col); void display(); void log(char *s); void dump(); }; MultiLinkedListSparseArray::MultiLinkedListSparseArray(int rows,int cols){ rowPtr=new node* [rows+1]; colPtr=new node* [cols+1]; for(int n=0;n<=rows;n++) rowPtr[n]=NULL; for(int i=0;i<=cols;i++) colPtr[i]=NULL; rowd=rows;cold=cols; } MultiLinkedListSparseArray::~MultiLinkedListSparseArray(){ cout<<"array is deleted"<<endl; for(int i=rowd;i>=0;i--){ for(int j=cold;j>=0;j--){ if(exist(rowPtr[i],i,j)) delete get(rowPtr[i],i,j); } } // it stops in the last loop & doesnt show the done word cout<<"done"<<endl; delete [] rowPtr; delete [] colPtr; delete dummy; } void MultiLinkedListSparseArray::log(char *s){ logfile=s; } void MultiLinkedListSparseArray::setCell(int row,int col,int value){ if(exist(rowPtr[row],row,col)){ (*get(rowPtr[row],row,col)).value=value; } else{ if(rowPtr[row]==NULL){ rowPtr[row]=new node; (*rowPtr[row]).value=value; (*rowPtr[row]).row=row; (*rowPtr[row]).col=col; (*rowPtr[row]).next_in_row=NULL; (*rowPtr[row]).next_in_col=NULL; } else if((*find_node(rowPtr[row])).col<col){ node* out; out=find_node(rowPtr[row]); (*out).next_in_row=new node; (*((*out).next_in_row)).col=col; (*((*out).next_in_row)).row=row; (*((*out).next_in_row)).value=value; (*((*out).next_in_row)).next_in_row=NULL; } else if((*find_node(rowPtr[row])).col>col){ node* ins; ins=in_node(rowPtr[row],ins_node(rowPtr[row],col)); node* g=(*ins).next_in_row; (*ins).next_in_row=new node; (*((*ins).next_in_row)).col=col; (*(*ins).next_in_row).row=row; (*(*ins).next_in_row).value=value; (*(*ins).next_in_row).next_in_row=g; } } } int MultiLinkedListSparseArray::getCell(int row,int col){ return (*get(rowPtr[row],row,col)).value; } void MultiLinkedListSparseArray::display(){ for(int i=1;i<=5;i++){ for(int j=1;j<=5;j++){ if(exist(rowPtr[i],i,j)) cout<<(*get(rowPtr[i],i,j)).value<<" "; else cout<<"0"<<" "; } cout<<endl; } } node* MultiLinkedListSparseArray::find_node(node* out) { while((*out).next_in_row!=NULL) out=(*out).next_in_row; return out; } node* MultiLinkedListSparseArray::ins_node(node* ins,int col){ while(!((*ins).col>col)) ins=(*ins).next_in_row; return ins; } node* MultiLinkedListSparseArray::in_node(node* ins,node* z){ while((*ins).next_in_row!=z) ins=(*ins).next_in_col; return ins; } node* MultiLinkedListSparseArray::get(node* in,int row,int col){ dummy=new node; dummy->value=0; while((*in).col!=col){ if((*in).next_in_row==NULL){ return dummy; } in=(*in).next_in_row; } return in; } bool MultiLinkedListSparseArray::exist(node* so,int row,int col){ if(so==NULL) return false; else{ while((*so).col!=col){ if((*so).next_in_row==NULL) return false; else so=(*so).next_in_row; } return true; } }
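
    A hedged reading of why it dies near the end: each "delete get(...)" frees a node but leaves the predecessor's next_in_row still pointing at it, so the next exist()/get() call walks straight through freed memory; on top of that, get() allocates a fresh dummy on every call (leaking all but the last), and if no element ever existed, dummy would be deleted while still uninitialised. A destructor that simply walks each row chain sidesteps all of this; it assumes the constructor is also changed to initialise dummy to NULL.

        MultiLinkedListSparseArray::~MultiLinkedListSparseArray() {
            cout << "array is deleted" << endl;
            // free every node by walking the row chains directly --
            // no lookups, no traversal through already-deleted nodes
            for (int i = 0; i <= rowd; ++i) {
                node* cur = rowPtr[i];
                while (cur != NULL) {
                    node* next = cur->next_in_row;
                    delete cur;
                    cur = next;
                }
            }
            cout << "done" << endl;
            delete [] rowPtr;
            delete [] colPtr;
            delete dummy;   // deleting NULL is a no-op, so this is safe either way
        }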

    Read the article

  • DNS and name server on CentOS 6.3 64-bit cannot be reached from outside

    - by user135855
    I got a problem with centOS 6.3 64-bit. I want to setup my nameserver with bind here. I am listing all my configuration [root@izyon92 ~]# cat/etc/hosts -------------- 127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4 ::1 localhost localhost.localdomain localhost6 localhost6.localdomain6 182.19.26.92 izyon92.zyonize1.com izyon92 [root@izyon92 ~]# cat /etc/sysconfig/network --------------------------------------------- NETWORKING=yes HOSTNAME=izyon92.zyonize1.com GATEWAY=182.19.26.89 [root@izyon92 ~]# cat /etc/resolv.conf -------------------------------------------- # Generated by NetworkManager search zyonize1.com nameserver 182.19.26.92 [root@izyon92 ~]# cat /etc/named.conf -------------------------------------------- // // named.conf // // Provided by Red Hat bind package to configure the ISC BIND named(8) DNS // server as a caching only nameserver (as a localhost DNS resolver only). // // See /usr/share/doc/bind*/sample/ for example named configuration files. // options { #listen-on port 53 { 127.0.0.1; }; listen-on-v6 port 53 { none; }; directory "/var/named"; dump-file "/var/named/data/cache_dump.db"; statistics-file "/var/named/data/named_stats.txt"; memstatistics-file "/var/named/data/named_mem_stats.txt"; allow-query { 182.19.26.92; }; recursion yes; dnssec-enable yes; dnssec-validation yes; dnssec-lookaside auto; /* Path to ISC DLV key */ bindkeys-file "/etc/named.iscdlv.key"; managed-keys-directory "/var/named/dynamic"; }; logging { channel default_debug { file "data/named.run"; severity dynamic; }; }; zone "." IN { type hint; file "named.ca"; }; include "/etc/named.rfc1912.zones"; include "/etc/named.root.key"; [root@izyon92 ~]# cat /etc/named.rfc1912.zones -------------------------------------------------- // named.rfc1912.zones: // // Provided by Red Hat caching-nameserver package // // ISC BIND named zone configuration for zones recommended by // RFC 1912 section 4.1 : localhost TLDs and address zones // and http://www.ietf.org/internet-drafts/draft-ietf-dnsop-default-local-zones-02.txt // (c)2007 R W Franks // // See /usr/share/doc/bind*/sample/ for example named configuration files. // zone "localhost.localdomain" IN { type master; file "named.localhost"; allow-update { none; }; }; zone "localhost" IN { type master; file "named.localhost"; allow-update { none; }; }; zone "1.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.ip6.arpa" IN { type master; file "named.loopback"; allow-update { none; }; }; zone "1.0.0.127.in-addr.arpa" IN { type master; file "named.loopback"; allow-update { none; }; }; zone "0.in-addr.arpa" IN { type master; file "named.empty"; allow-update { none; }; }; zone "zyonize1.com" { type master; file "/var/named/zyonize.com.hosts"; }; [root@izyon92 ~]# cat /var/named/zyonize.com.hosts --------------------------------------------------------- $ttl 38400 zyonize1.com. IN SOA 182.19.26.92. dev\.izyon.gmail.com. ( 1347436958 10800 3600 604800 38400 ) zyonize1.com. IN NS 182.19.26.92. zyonize1.com. IN A 182.19.26.92 www.zyonize1.com. IN A 182.19.26.92 izyon92.zyonize1.com. IN A 182.19.26.92 I have disabled selinux and stopped iptables. 
dig and nslookup is working fine in the same machine [root@izyon92 ~]# dig zyonize1.com ---------------------------------------- ; <<>> DiG 9.8.2rc1-RedHat-9.8.2-0.10.rc1.el6_3.2 <<>> zyonize1.com ;; global options: +cmd ;; Got answer: ;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 55751 ;; flags: qr aa rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 1, ADDITIONAL: 0 ;; QUESTION SECTION: ;zyonize1.com. IN A ;; ANSWER SECTION: zyonize1.com. 38400 IN A 182.19.26.92 ;; AUTHORITY SECTION: zyonize1.com. 38400 IN NS 182.19.26.92. ;; Query time: 0 msec ;; SERVER: 182.19.26.92#53(182.19.26.92) ;; WHEN: Fri Sep 14 00:09:19 2012 ;; MSG SIZE rcvd: 72 [root@izyon92 ~]# nslookup zyonize1.com ---------------------------------------------- Server: 182.19.26.92 Address: 182.19.26.92#53 Name: zyonize1.com Address: 182.19.26.92 But here is the problem I am facing, I have windows machine, to test this dns and nameserver I set the first IPv4 DNS server to 182.19.26.92. Here is the details Connection-specific DNS Suffix: Description: Realtek PCIe GBE Family Controller Physical Address: ?14-FE-B5-9F-3A-A8 DHCP Enabled: No IPv4 Address: 192.168.2.50 IPv4 Subnet Mask: 255.255.255.0 IPv4 Default Gateway: 192.168.2.1 IPv4 DNS Servers: 182.19.26.92, 182.19.95.66 IPv4 WINS Server: NetBIOS over Tcpip Enabled: Yes Link-local IPv6 Address: fe80::45cc:2ada:c13:ca42%16 IPv6 Default Gateway: IPv6 DNS Server: when I am pining from this machine it is not finding the server. Where as in another server with another live IP with Fedora ping is working fine.
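
    Two things in the configuration above are worth a hard look (assumptions from reading it, not certainties): allow-query { 182.19.26.92; }; permits queries only from that single source address, so every outside client is refused, and the zone's NS records point at a bare IP address, while NS data must be a hostname that then resolves through its own A record. Sketch of the relevant fragments:

        // /etc/named.conf -- options fragment (everything else unchanged)
        options {
            listen-on port 53 { any; };      // answer on the public interface
            allow-query { any; };            // authoritative data is public
            allow-recursion { localhost; };  // but only recurse for this box
            recursion yes;
        };

        ; /var/named/zyonize.com.hosts -- NS must name a host, never an IP;
        ; the host then gets its own A record
        zyonize1.com.      IN NS  ns1.zyonize1.com.
        ns1.zyonize1.com.  IN A   182.19.26.92

    After a "service named restart", testing from outside with "dig @182.19.26.92 zyonize1.com" (rather than relying on the Windows resolver settings) shows whether port 53 is actually reachable; a plain ping only proves ICMP works, not DNS.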

    Read the article

  • Issues declaring already existing NSMutableArray in new class

    - by Graeme
    I have a class (DataImporter) which has the code to download an RSS feed. I also have a view and separate class (TableView) which displays the data in a UITableView and starts the parsing process, storing parsed information in an NSMutableArray (items) which is located in the (TableView) subclass. Now I wish to add a UIMapView which displays the items in the (items) NSMutableArray. Herein lies the issue - I need to somehow get the data from the (items) NSMutableArray into the new (mapView) subclass which I'm struggling with - and I preferably don't want to have to create a new class to download the data again for the mapView class when it already is in the applications memory. Is there a way I can transfer the information from the NSMutableArray (items) class to the (mapView) class (i.e. how do I declare the NSMutableArray in the (mapView) class)? Here's a overview of how the system works: App opened Data downloaded (using DataImporter class) when (TableView) viewDidLoad runs Data stored in NSMutableArray accessible by the (TableView) class And from here I need to access and declare the array from a new (mapView) class. Any help greatly appreciated, thanks. Code for viewDidLoad MapKit: Data *data = nil; NSString *ilocation = [data locations]; NSString *ilocation2 = @"New Zealand"; NSString *inewlString; inewlString = [ilocation stringByAppendingString:ilocation2]; NSLog(@"inewlString=%@",inewlString); if(forwardGeocoder == nil) { forwardGeocoder = [[BSForwardGeocoder alloc] initWithDelegate:self]; } // Forward geocode! [forwardGeocoder findLocation: inewlString]; Code for parsing data into original NSMutable Array: - (void)beginParsing { NSLog(@"Parsing has begun"); //self.navigationItem.rightBarButtonItem.enabled = NO; // Allocate the array for song storage, or empty the results of previous parses if (incidents == nil) { NSLog(@"Grabbing array"); self.datas = [NSMutableArray array]; } else { [datas removeAllObjects]; [self.tableView reloadData]; } // Create the parser, set its delegate, and start it. self.parser = [[DataImporter alloc] init]; parser.delegate = self; [parser start]; }
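
    One hedged way to hand the already-downloaded array to the map controller without downloading it again: give the map view controller its own property and assign it at the moment the controller is created (an alternative is to park the array on a shared object such as the app delegate). Class and property names below are illustrative.

        // MapViewController.h
        @interface MapViewController : UIViewController {
            NSMutableArray *items;
        }
        @property (nonatomic, retain) NSMutableArray *items;
        @end

        // MapViewController.m
        @synthesize items;

        // In the table view controller, wherever the map screen gets presented:
        MapViewController *mapVC = [[MapViewController alloc]
                                       initWithNibName:@"MapViewController" bundle:nil];
        mapVC.items = self.datas;     // same objects in memory, no second download
        [self.navigationController pushViewController:mapVC animated:YES];
        [mapVC release];

    Because the property retains the array (remember to release items in the map controller's dealloc), both screens see the same parsed objects, even after the table view controller goes away.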

    Read the article

  • IEnumerable<T> ToArray usage, is it a copy or a pointer?

    - by Daniel
    I am parsing an arbitrary length byte array that is going to be passed around to a few different layers of parsing. Each parser creates a Header and a Packet payload just like any ordinary encapsulation. And my problem lies in how the encapsulation holds its packet byte array payload. Say i have a 100 byte array, and it has 3 levels of encapsulation. 3 packet objects will be created and i want to set the payload of these packets to the corresponding position in the byte array of the packet. For example lets say the payload size is 20 for all levels, then imagine it has a public byte[] Payload on each object. However the problem is that this byte[] Payload is a copy of the original 100 bytes. So i'm going to end up with 160 bytes in memory instead of 100. If it were in c++ i could just easily use a pointer however i'm writing this in c#. So i created the following class: public class PayloadSegment<T> : IEnumerable<T> { public readonly T[] Array; public readonly int Offset; public readonly int Count; public PayloadSegment(T[] array, int offset, int count) { this.Array = array; this.Offset = offset; this.Count = count; } public T this[int index] { get { if (index < 0 || index >= this.Count) throw new IndexOutOfRangeException(); else return Array[Offset + index]; } set { if (index < 0 || index >= this.Count) throw new IndexOutOfRangeException(); else Array[Offset + index] = value; } } public IEnumerator<T> GetEnumerator() { for (int i = Offset; i < Offset + Count; i++) yield return Array[i]; } System.Collections.IEnumerator System.Collections.IEnumerable.GetEnumerator() { IEnumerator<T> enumerator = this.GetEnumerator(); while (enumerator.MoveNext()) { yield return enumerator.Current; } } } This way i can simply reference a position inside the original byte array but use positional indexing. However if i do something like: PayloadSegment<byte> something = new PayloadSegment<byte>(someArray, 5, 10); byte[] somethingArray = something.ToArray(); Will the somethingArray be a copy of the bytes, or a reference to the original PayloadSegment which in turn is a reference to the original byte array? Sorry it was hard to word this lol _<
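
    For the concrete question at the end: Enumerable.ToArray() always allocates a fresh array and copies the enumerated elements into it, so somethingArray is an independent copy of those 10 bytes; changing it will not touch the original buffer. Indexing the PayloadSegment itself, on the other hand, reads and writes straight through to the shared array. A tiny check, assuming the PayloadSegment class as posted:

        // requires using System.Linq; for the ToArray() extension method
        byte[] original = new byte[100];
        PayloadSegment<byte> segment = new PayloadSegment<byte>(original, 5, 10);

        segment[0] = 42;                    // writes through to original[5]
        Console.WriteLine(original[5]);     // 42

        byte[] copied = segment.ToArray();  // LINQ ToArray: new array, elements copied
        copied[0] = 7;                      // only the copy changes
        Console.WriteLine(original[5]);     // still 42

    Incidentally, the framework's ArraySegment<T> covers similar ground (array + offset + count without copying), although in the framework versions of that era it exposes only the Array/Offset/Count properties rather than an indexer.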

    Read the article

  • Getting instant crashes in IntelliJ IDEA with the Scala plugin

    - by egervari
    I am building a scala web project using scala test, lift, jpa, hibernate, mercurial plugin, etc. I am getting instant crashes, where the ide just bombs, the window shuts down, and it gives no error messages whatsoever when I am doing any amount of copy/pasting of code. This started happening once my project got to about 100 unit tests. This problem is incredibly annoying, because when the crash happens, 30-60 seconds of activity is not saved. Even IDEA will forget which files were last opened and will forget where the cursor was, which makes it really hard to continue where you left off after the crash. A lot can happen in 60 seconds! Now, I've given up, because it seems like all sorts of things cause the IntelliJ IDEA to crash over and over. For example, if I were to copy and paste this code, to write a similar test for another collection type, it would crash shortly after: it should "cascade save and delete status messages" in { val statusMessage = new StatusMessage("message") var user = userDao.find(1).get user.addToStatusMessages(statusMessage) userDao.save(user) statusMessage.isPersistent should be (true) userDao.delete(user) statusMessageDao.find(statusMessage.id) should equal (None) } There is nothing special about this piece of code. It's code that is working just fine. However, IDEA bombs shortly after I paste something like this. For example, I might change StatusMessage to the new class I want to test cascading on... and then have to import that class into the test... and BOOM... it crashed. On windows 7, the IDEA window literally just minimizes and crashes with no warning. The next time I startup IDEA, it has no memory of what happened. Now, I've had this problem before. I posted it way back on IDEA's YouTrack. I was told to invalidate my caches. That never fixed it then, and it's not fixing it now. Please help. This error is fairly random, but it's happening constantly now. I could program for hours and not see it before... and the fact that my work just gets destroyed and I can't remember what I did during the last minute causes me to swear at my monitor at a db level higher than my stereo can go.

    Read the article

  • writing XML with Xerces 3.0.1 and C++ on windows

    - by Jon
    Hi, i have the following function i wrote to create an XML file using Xerces 3.0.1, if i call this function with a filePath of "foo.xml" or "../foo.xml" it works great, but if i pass in "c:/foo.xml" then i get an exception on this line XMLFormatTarget *formatTarget = new LocalFileFormatTarget(targetPath); can someone explain why my code works for relative paths, but not absolute paths please? many thanks. const int ABSOLUTE_PATH_FILENAME_PREFIX_SIZE = 9; void OutputXML(xercesc::DOMDocument* pmyDOMDocument, std::string filePath) { //Return the first registered implementation that has the desired features. In this case, we are after a DOM implementation that has the LS feature... or Load/Save. DOMImplementation *implementation = DOMImplementationRegistry::getDOMImplementation(L"LS"); // Create a DOMLSSerializer which is used to serialize a DOM tree into an XML document. DOMLSSerializer *serializer = ((DOMImplementationLS*)implementation)->createLSSerializer(); // Make the output more human readable by inserting line feeds. if (serializer->getDomConfig()->canSetParameter(XMLUni::fgDOMWRTFormatPrettyPrint, true)) serializer->getDomConfig()->setParameter(XMLUni::fgDOMWRTFormatPrettyPrint, true); // The end-of-line sequence of characters to be used in the XML being written out. serializer->setNewLine(XMLString::transcode("\r\n")); // Convert the path into Xerces compatible XMLCh*. XMLCh *tempFilePath = XMLString::transcode(filePath.c_str()); // Calculate the length of the string. const int pathLen = XMLString::stringLen(tempFilePath); // Allocate memory for a Xerces string sufficent to hold the path. XMLCh *targetPath = (XMLCh*)XMLPlatformUtils::fgMemoryManager->allocate((pathLen + ABSOLUTE_PATH_FILENAME_PREFIX_SIZE) * sizeof(XMLCh)); // Fixes a platform dependent absolute path filename to standard URI form. XMLString::fixURI(tempFilePath, targetPath); // Specify the target for the XML output. XMLFormatTarget *formatTarget = new LocalFileFormatTarget(targetPath); //XMLFormatTarget *myFormTarget = new StdOutFormatTarget(); // Create a new empty output destination object. DOMLSOutput *output = ((DOMImplementationLS*)implementation)->createLSOutput(); // Set the stream to our target. output->setByteStream(formatTarget); // Write the serialized output to the destination. serializer->write(pmyDOMDocument, output); // Cleanup. serializer->release(); XMLString::release(&tempFilePath); delete formatTarget; output->release(); }
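
    Hard to say anything definitive without the exception text, but two things are worth trying (both hedged): catch the XMLException so the real message becomes visible, and skip the transcode/fixURI detour entirely, because LocalFileFormatTarget expects a plain local file path and fixURI turns an absolute path like "c:/foo.xml" into a file: URI it cannot open -- which would explain why relative paths (left largely untouched) still work. A sketch of the middle of OutputXML under those assumptions:

        try {
            // LocalFileFormatTarget has a const char* constructor, so the
            // path can be handed over directly, relative or absolute
            XMLFormatTarget *formatTarget = new LocalFileFormatTarget(filePath.c_str());

            DOMLSOutput *output = ((DOMImplementationLS*)implementation)->createLSOutput();
            output->setByteStream(formatTarget);
            serializer->write(pmyDOMDocument, output);

            output->release();
            delete formatTarget;
            serializer->release();
        }
        catch (const XMLException& e) {
            char *msg = XMLString::transcode(e.getMessage());
            std::cerr << "Xerces exception: " << msg << std::endl;
            XMLString::release(&msg);
        }

    If the exception persists, the transcoded message usually says outright whether it is a "could not open file" problem (permissions, bad path) or something inside the serializer.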

    Read the article

  • Do You Really Know Your Programming Languages?

    - by Kristopher Johnson
    I am often amazed at how little some of my colleagues know or care about their craft. Something that constantly frustrates me is that people don't want to learn any more than they need to about the programming languages they use every day. Many programmers seem content to learn some pidgin sub-dialect, and stick with that. If they see a keyword or construct that they aren't familiar with, they'll complain that the code is "tricky." What would you think of a civil engineer who shied away from calculus because it had "all those tricky math symbols?" I'm not suggesting that we all need to become "language lawyers." But if you make your living as a programmer, and claim to be a competent user of language X, then I think at a minimum you should know the following: Do you know the keywords of the language and what they do? What are the valid syntactic forms? How are memory, files, and other operating system resources managed? Where is the official language specification and library reference for the language? The last one is the one that really gets me. Many programmers seem to have no idea that there is a "specification" or "standard" for any particular language. I still talk to people who think that Microsoft invented C++, and that if a program doesn't compile under VC6, it's not a valid C++ program. Programmers these days have it easy when it comes to obtaining specs. Newer languages like C#, Java, Python, Ruby, etc. all have their documentation available for free from the vendors' web sites. Older languages and platforms often have standards controlled by standards bodies that demand payment for specs, but even that shouldn't be a deterrent: the C++ standard is available from ISO for $30 (and why am I the only person I know who has a copy?). Programming is hard enough even when you do know the language. If you don't, I don't see how you have a chance. What do the rest of you think? Am I right, or should we all be content with the typical level of programming language expertise? Update: Several great comments here. Thanks. A couple of people hit on something that I didn't think about: What really irks me is not the lack of knowledge, but the lack of curiosity and willingness to learn. It seems some people don't have any time to hone their craft, but they have plenty of time to write lots of bad code. And I don't expect people to be able to recite a list of keywords or EBNF expressions, but I do expect that when they see some code, they should have some inkling of what it does. Few people have complete knowledge of every dark corner of their language or platform, but everyone should at least know enough that when they see something unfamiliar, they will know how to get whatever additional information they need to understand it.

    Read the article

  • Optimizing vector element swaps using CUDA

    - by Orion Nebula
    Hi all, Since I am new to cuda .. I need your kind help I have this long vector, for each group of 24 elements, I need to do the following: for the first 12 elements, the even numbered elements are multiplied by -1, for the second 12 elements, the odd numbered elements are multiplied by -1 then the following swap takes place: Graph: because I don't yet have enough points, I couldn't post the image so here it is: http://www.freeimagehosting.net/image.php?e4b88fb666.png I have written this piece of code, and wonder if you could help me further optimize it to solve for divergence or bank conflicts .. //subvector is a multiple of 24, Mds and Nds are shared memory _shared_ double Mds[subVector]; _shared_ double Nds[subVector]; int tx = threadIdx.x; int tx_mod = tx ^ 0x0001; int basex = __umul24(blockDim.x, blockIdx.x); Mds[tx] = M.elements[basex + tx]; __syncthreads(); // flip the signs if (tx < (tx/24)*24 + 12) { //if < 12 and even if ((tx & 0x0001)==0) Mds[tx] = -Mds[tx]; } else if (tx < (tx/24)*24 + 24) { //if >12 and < 24 and odd if ((tx & 0x0001)==1) Mds[tx] = -Mds[tx]; } __syncthreads(); if (tx < (tx/24)*24 + 6) { //for the first 6 elements .. swap with last six in the 24elements group (see graph) Nds[tx] = Mds[tx_mod + 18]; Mds [tx_mod + 18] = Mds [tx]; Mds[tx] = Nds[tx]; } else if (tx < (tx/24)*24 + 12) { // for the second 6 elements .. swp with next adjacent group (see graph) Nds[tx] = Mds[tx_mod + 6]; Mds [tx_mod + 6] = Mds [tx]; Mds[tx] = Nds[tx]; } __syncthreads(); Thanks in advance ..
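
    Without the swap diagram it is risky to rewrite the second half, but the sign-flip section can at least be made branch-free, so the two halves of each 24-element group no longer send a warp down different paths. A hedged sketch that keeps the original layout assumptions (groups of 24, parity taken from the global thread index):

        int pos        = tx % 24;           // position inside the 24-element group
        int secondHalf = (pos >= 12);       // 0 for the first 12, 1 for the last 12
        int odd        = tx & 0x0001;       // element parity

        // flip when (first half && even) or (second half && odd),
        // i.e. exactly when the parity matches the half we are in
        double sign = (odd == secondHalf) ? -1.0 : 1.0;   // compiles to a select, no branch
        Mds[tx] *= sign;
        __syncthreads();

    For the swaps themselves, writing every shuffled value into Nds and copying Nds back after a __syncthreads() (instead of the three-way in-place exchange) is usually easier to keep divergence- and conflict-free, since each thread then performs exactly one read and one write per array.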

    Read the article

  • Multi-tier applications using L2S, WCF and Base Class

    - by Gena Verdel
    Hi all. One day I decided to build this nice multi-tier application using L2S and WCF. The simplified model is : DataBase-L2S-Wrapper(DTO)-Client Application. The communication between Client and Database is achieved by using Data Transfer Objects which contain entity objects as their properties. abstract public class BaseObject { public virtual IccSystem.iccObjectTypes ObjectICC_Type { get { return IccSystem.iccObjectTypes.unknownType; } } [global::System.Data.Linq.Mapping.ColumnAttribute(Storage = "_ID", AutoSync = AutoSync.OnInsert, DbType = "BigInt NOT NULL IDENTITY", IsPrimaryKey = true, IsDbGenerated = true)] [global::System.Runtime.Serialization.DataMemberAttribute(Order = 1)] public virtual long ID { //get; //set; get { return _ID; } set { _ID = value; } } } [DataContract] public class BaseObjectWrapper<T> where T : BaseObject { #region Fields private T _DBObject; #endregion #region Properties [DataMember] public T Entity { get { return _DBObject; } set { _DBObject = value; } } #endregion } Pretty simple, isn't it?. Here's the catch. Each one of the mapped classes contains ID property itself so I decided to override it like this [global::System.Data.Linq.Mapping.TableAttribute(Name="dbo.Divisions")] [global::System.Runtime.Serialization.DataContractAttribute()] public partial class Division : INotifyPropertyChanging, INotifyPropertyChanged { [global::System.Data.Linq.Mapping.ColumnAttribute(Storage="_ID", AutoSync=AutoSync.OnInsert, DbType="BigInt NOT NULL IDENTITY", IsPrimaryKey=true, IsDbGenerated=true)] [global::System.Runtime.Serialization.DataMemberAttribute(Order=1)] public override long ID { get { return this._ID; } set { if ((this._ID != value)) { this.OnIDChanging(value); this.SendPropertyChanging(); this._ID = value; this.SendPropertyChanged("ID"); this.OnIDChanged(); } } } } Wrapper for division is pretty straightforward as well: public class DivisionWrapper : BaseObjectWrapper<Division> { } It worked pretty well as long as I kept ID values at mapped class and its BaseObject class the same(that's not very good approach, I know, but still) but then this happened: private CentralDC _dc; public bool UpdateDivision(ref DivisionWrapper division) { DivisionWrapper tempWrapper = division; if (division.Entity == null) { return false; } try { Table<Division> table = _dc.Divisions; var q = table.Where(o => o.ID == tempWrapper.Entity.ID); if (q.Count() == 0) { division.Entity._errorMessage = "Unable to locate entity with id " + division.Entity.ID.ToString(); return false; } var realEntity = q.First(); realEntity = division.Entity; _dc.SubmitChanges(); return true; } catch (Exception ex) { division.Entity._errorMessage = ex.Message; return false; } } When trying to enumerate over the in-memory query the following exception occurred: Class member BaseObject.ID is unmapped. Although I'm stating the type and overriding the ID property L2S fails to work. Any suggestions?
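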

    Read the article

  • Differences between matrix implementations in C

    - by tempy
    I created two 2D arrays (matrix) in C in two different ways. I don't understand the difference between the way they're represented in the memory, and the reason why I can't refer to them in the same way: scanf("%d", &intMatrix1[i][j]); //can't refer as &intMatrix1[(i * lines)+j]) scanf("%d", &intMatrix2[(i * lines)+j]); //can't refer as &intMatrix2[i][j]) What is the difference between the ways these two arrays are implemented and why do I have to refer to them differently? How do I refer to an element in each of the arrays in the same way (?????? in my printMatrix function)? int main() { int **intMatrix1; int *intMatrix2; int i, j, lines, columns; lines = 3; columns = 2; /************************* intMatrix1 ****************************/ intMatrix1 = (int **)malloc(lines * sizeof(int *)); for (i = 0; i < lines; ++i) intMatrix1[i] = (int *)malloc(columns * sizeof(int)); for (i = 0; i < lines; ++i) { for (j = 0; j < columns; ++j) { printf("Type a number for intMatrix1[%d][%d]\t", i, j); scanf("%d", &intMatrix1[i][j]); } } /************************* intMatrix2 ****************************/ intMatrix2 = (int *)malloc(lines * columns * sizeof(int)); for (i = 0; i < lines; ++i) { for (j = 0; j < columns; ++j) { printf("Type a number for intMatrix2[%d][%d]\t", i, j); scanf("%d", &intMatrix2[(i * lines)+j]); } } /************** printing intMatrix1 & intMatrix2 ****************/ printf("intMatrix1:\n\n"); printMatrix(*intMatrix1, lines, columns); printf("intMatrix2:\n\n"); printMatrix(intMatrix2, lines, columns); } /************************* printMatrix ****************************/ void printMatrix(int *ptArray, int h, int w) { int i, j; printf("Printing matrix...\n\n\n"); for (i = 0; i < h; ++i) for (j = 0; j < w; ++j) printf("array[%d][%d] ==============> %d\n, i, j, ??????); }
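
    Two separate things are going on here, so a sketch may help. First, the flat index uses the wrong stride: for a lines x columns matrix, element (i, j) lives at i * columns + j, so with lines = 3 and columns = 2 the scanf target should be &intMatrix2[(i * columns) + j]. Second, the two layouts genuinely cannot be addressed the same way: intMatrix1 is an array of separately malloc'd row pointers (so intMatrix1[i][j] is the only sensible access, and passing *intMatrix1 to printMatrix hands over just the first row), while intMatrix2 is one contiguous block (so ptArray[i * w + j] is what fills the ?????? placeholder). A sketch of the two print functions:

        #include <stdio.h>

        /* contiguous matrix: one block, element (i,j) sits at i * columns + j */
        void printFlatMatrix(const int *m, int lines, int columns)
        {
            int i, j;
            for (i = 0; i < lines; ++i)
                for (j = 0; j < columns; ++j)
                    printf("array[%d][%d] ==============> %d\n", i, j, m[i * columns + j]);
        }

        /* pointer-to-pointer matrix: an array of row pointers, so m[i][j] is the
           way in; there is no single flat index across separately malloc'd rows */
        void printPtrMatrix(int **m, int lines, int columns)
        {
            int i, j;
            for (i = 0; i < lines; ++i)
                for (j = 0; j < columns; ++j)
                    printf("array[%d][%d] ==============> %d\n", i, j, m[i][j]);
        }

    If a single print function is a hard requirement, allocating the first matrix as one contiguous block as well (keeping the row-pointer array only as a convenience view into it) is the usual compromise.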

    Read the article

  • DNS resolution problems; dig SERVFAIL error

    - by JustinP
    I'm setting up a couple of dedicated servers, and having problems setting up my nameservers properly. One of these is a LEMP server (LAMP with nginx in place of Apache), and the other will function solely as an email server, running exim/dovecot/ASSP antispam (no Apache). The LEMP server is CentOS 5.5, with no control panel, while the email server is CentOS 5.5 as well, with cPanel/WHM. So, I've had problems getting DNS set up properly. I have two domains, each one pointing to one of these servers. The nameservers are registered correctly with the domain registrar, and the nameserver IPs are entered correctly as well. I've spoken to tech support at the registrar and they confirm that everything is set up on their end. Not knowing much about DNS, I googled nameservers and DNS until I nearly went blind, and spent hours messing with the configuration. Eventually, I got the LEMP server's DNS working properly (no cPanel). Pleased with this triumph, I'm trying to mimic that configuration and repeat the process with the email server, and it's just not happening. The nameserver starts and stops, but the domain doesn't resolve. Things I have tried Going through standard procedures to set up DNS in WHM Clearing all DNS information, uninstalling BIND, then reinstalling all of that and again going through WHM procedures for setting up DNS Clearing all DNS information, and setting up BIND via shell (completely outside of cPanel) by using my config and zone files from the LEMP server as a template named runs just fine, but nothing is resolving. When I "dig any example.com" I get a SERVFAIL message. Nslookups return no information. Here are my config and zone files. named.conf controls { inet 127.0.0.1 allow { localhost; } keys { coretext-key; }; }; options { listen-on port 53 { any; }; listen-on-v6 port 53 { ::1; }; directory "/var/named"; dump-file "/var/named/data/cache_dump.db"; statistics-file "/var/named/data/named_stats.txt"; memstatistics-file "/var/named/data/named_mem_stats.txt"; // Those options should be used carefully because they disable port // randomization // query-source port 53; // query-source-v6 port 53; allow-query { any; }; allow-query-cache { any; }; }; logging { channel default_debug { file "data/named.run"; severity dynamic; }; }; view "localhost_resolver" { match-clients { 127.0.0.0/24; }; match-destinations { localhost; }; recursion yes; //zone "." IN { // type hint; // file "/var/named/named.ca"; //}; include "/etc/named.rfc1912.zones"; }; view "internal" { /* This view will contain zones you want to serve only to "internal" clients that connect via your directly attached LAN interfaces - "localnets" . */ match-clients { localnets; }; match-destinations { localnets; }; recursion yes; zone "." IN { type hint; file "/var/named/named.ca"; }; // include "/var/named/named.rfc1912.zones"; // you should not serve your rfc1912 names to non-localhost clients. 
// These are your "authoritative" internal zones, and would probably // also be included in the "localhost_resolver" view above : zone "example.com" { type master; file "data/db.example.com"; }; zone "3.2.1.in-addr.arpa" { type master; file "data/db.1.2.3"; }; }; view "external" { /* This view will contain zones you want to serve only to "external" clients * that have addresses that are not on your directly attached LAN interface subnets: */ match-clients { any; }; match-destinations { any; }; recursion no; // you'd probably want to deny recursion to external clients, so you don't // end up providing free DNS service to all takers allow-query-cache { none; }; // Disable lookups for any cached data and root hints // all views must contain the root hints zone: //include "/etc/named.rfc1912.zones"; zone "." IN { type hint; file "/var/named/named.ca"; }; zone "example.com" { type master; file "data/db.example.com"; }; zone "3.2.1.in-addr.arpa" { type master; file "data/db.1.2.3"; }; }; include "/etc/rndc.key"; db.example.com $TTL 1D ; ; Zone file for example.com ; ; Mandatory minimum for a working domain ; @ IN SOA ns1.example.com. contact.example.com. ( 2011042905 ; serial 8H ; refresh 2H ; retry 4W ; expire 1D ; default_ttl ) NS ns1.example.com. NS ns2.example.com. ns1 A 1.2.3.4 ns2 A 1.2.3.5 example.com. A 1.2.3.4 localhost A 127.0.0.1 www CNAME example.com. mail CNAME example.com. ; db.1.2.3 $TTL 1D $ORIGIN 3.2.1.in-addr.arpa. @ IN SOA ns1.example.com contact.example.com. ( 2011042908 ; 8H ; 2H ; 4W ; 1D ; ) NS ns1.example.com. NS ns2.example.com. 4 PTR hostname.example.com. 5 PTR hostname.example.com. ; Also of note: both of these servers are managed. Tech support is very responsive, and largely useless. Hours go by with them asking me questions to narrow down what could be wrong, then they pass the ticket to the tech on the next shift, who ignores everything that's happened already and spend his whole shift asking all the same questions the last guy asked. So, in summary: *Nameservers, with IPs, are correctly registered with domain registrar *named is configured and running *...and must not be configured correctly, because nothing resolves. Any help would be great. I changed domains and IPs in the files to generics, but let me know if you need to know the domain in question. Thanks! UPDATE I found that I didn't have 127.0.0.1 in /etc/resolv.conf, so I added it, along with my two public IPs that I have named listening on. resolv.conf search www.example.com example.com nameserver 127.0.0.1 nameserver 7.8.9.10 ;Was in here by default, authoritative nameserver of hosting company nameserver 1.2.3.4 ;Public IP #1 nameserver 1.2.3.5 ;Public IP #2 Now when I DIG example.com from the host, it resolves. If I try to DIG from my other server (in the same datacenter), or from the internet, it times out or I get SERVFAIL.
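
    A few low-level checks that tend to narrow this kind of problem down (standard BIND tooling, nothing specific to this setup): validate the configuration and zone files, confirm named is really listening on the public address, make sure port 53 is open for both UDP and TCP, and query the server explicitly from another host instead of going through its resolver settings. Note also that /etc/resolv.conf only controls which servers this machine uses as a client; it has no bearing on whether remote clients can reach named, so adding 127.0.0.1 there cannot fix an external SERVFAIL.

        # syntax-check the config and the zone data
        named-checkconf /etc/named.conf
        named-checkzone example.com /var/named/data/db.example.com

        # is named bound to the public IP, and is port 53 actually open?
        netstat -lnp | grep :53
        iptables -L -n

        # from another machine: ask this server directly, no recursion involved
        dig @1.2.3.4 example.com SOA
        dig @1.2.3.4 example.com A +norecurse

    Independent of the tool output, the usual zone-file suspects in files like the one above are NS records that name a bare IP address (they must name hosts that have their own A records) and a missing trailing dot on a name that was meant to be absolute.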

    Read the article

  • Function returning MYSQL_ROW

    - by Gabe
    I'm working on a system using lots of MySQL queries and I'm running into some memory problems I'm pretty sure have to do with me not handling pointers right... Basically, I've got something like this: MYSQL_ROW function1() { string query="SELECT * FROM table limit 1;"; MYSQL_ROW return_row; mysql_init(&connection); // "connection" is a global variable if (mysql_real_connect(&connection,HOST,USER,PASS,DB,0,NULL,0)){ if (mysql_query(&connection,query.c_str())) cout << "Error: " << mysql_error(&connection); else{ resp = mysql_store_result(&connection); //"resp" is also global if (resp) return_row = mysql_fetch_row(resp); mysql_free_result(resp); } mysql_close(&connection); }else{ cout << "connection failed\n"; if (mysql_errno(&connection)) cout << "Error: " << mysql_errno(&connection) << " " << mysql_error(&connection); } return return_row; } And function2(): MYSQL_ROW function2(MYSQL_ROW row) { string query = "select * from table2 where code = '" + string(row[2]) + "'"; MYSQL_ROW retorno; mysql_init(&connection); if (mysql_real_connect(&connection,HOST,USER,PASS,DB,0,NULL,0)){ if (mysql_query(&connection,query.c_str())) cout << "Error: " << mysql_error(&conexao); else{ // My "debugging" shows me at this point `row[2]` is already fubar resp = mysql_store_result(&connection); if (resp) return_row = mysql_fetch_row(resp); mysql_free_result(resp); } mysql_close(&connection); }else{ cout << "connection failed\n"; if (mysql_errno(&connection)) cout << "Error : " << mysql_errno(&connection) << " " << mysql_error(&connection); } return return_row; } And main() is an infinite loop basically like this: int main( int argc, char* args[] ){ MYSQL_ROW row = NULL; while (1) { row = function1(); if(row != NULL) function2(row); } } (variable and function names have been generalized to protect the innocent) But after the 3rd or 4th call to function2, that only uses row for reading, row starts losing its value coming to a segfault error... Anyone's got any ideas why? I'm not sure the amount of global variables in this code is any good, but I didn't design it and only got until tomorrow to fix and finish it, so workarounds are welcome! Thanks!
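
    The symptoms fit one explanation (hedged, but it is the classic one): MYSQL_ROW is merely an array of char* pointers into buffers owned by the MYSQL_RES, so the moment function1 calls mysql_free_result (and mysql_close), the memory row[2] points at is gone; the first few calls only appear to work because the freed block has not been reused yet. One fix is to deep-copy the fields that are needed into std::string before releasing the result. A sketch reusing the globals from the question:

        #include <mysql.h>      // or <mysql/mysql.h>, depending on the install
        #include <string>
        #include <vector>
        #include <iostream>

        // copy the first row of a query into strings the caller owns, so the data
        // stays valid after the result set and the connection are released
        std::vector<std::string> fetchFirstRow(const std::string& query)
        {
            std::vector<std::string> values;

            mysql_init(&connection);
            if (!mysql_real_connect(&connection, HOST, USER, PASS, DB, 0, NULL, 0)) {
                std::cout << "connection failed: " << mysql_error(&connection) << "\n";
                return values;
            }
            if (mysql_query(&connection, query.c_str()) == 0) {
                MYSQL_RES* result = mysql_store_result(&connection);
                if (result) {
                    MYSQL_ROW row = mysql_fetch_row(result);
                    if (row) {
                        unsigned int n = mysql_num_fields(result);
                        for (unsigned int i = 0; i < n; ++i)
                            values.push_back(row[i] ? row[i] : "");   // deep copy
                    }
                    mysql_free_result(result);   // safe now: values owns its own memory
                }
            } else {
                std::cout << "Error: " << mysql_error(&connection) << "\n";
            }
            mysql_close(&connection);
            return values;
        }

    function2 then takes a const std::vector<std::string>& (or just the one field it needs) instead of a MYSQL_ROW, and the dangling-pointer problem disappears along with the global resp.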

    Read the article

  • Merge entries in an XML file (SimpleXML in PHP)

    - by Cudos
    Hello. I have this in my XML file: <product name="iphone"> <variant name="iphone" product_number="12345" price="500" picture="iphone.jpg"> <description><![CDATA[iphone]]></description> <short_description><![CDATA[]]></short_description> <deliverytime><![CDATA[]]></deliverytime> <options> <option group="Color" option="Black" /> </options> </variant> </product> <product name="iphone"> <variant name="iphone" product_number="12345" price="500" picture="iphone.jpg"> <description><![CDATA[iphone]]></description> <short_description><![CDATA[]]></short_description> <deliverytime><![CDATA[]]></deliverytime> <options> <option group="Color" option="White" /> </options> </variant> </product> I want to merge it into this (Note that I merge the options tag): <product name="iphone"> <variant name="iphone" product_number="12345" price="500" picture="iphone.jpg"> <description><![CDATA[iphone]]></description> <short_description><![CDATA[]]></short_description> <deliverytime><![CDATA[]]></deliverytime> <options> <option group="Color" option="Black" /> <option group="Color" option="White" /> </options> </variant> </product> Preferably I want to do it all in the memory since I will process it further afterwards.
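
    One hedged in-memory approach with SimpleXML: walk the <product> elements, key them by the variant's product_number, and whenever a duplicate shows up, copy its <option> nodes into the first occurrence and mark the duplicate for removal. SimpleXML cannot graft nodes between elements on its own, so the usual trick is to drop down to DOM via dom_import_simplexml for the append and the removal. The sketch assumes the snippets live under a single <products> root and that product_number is the real identity.

        <?php
        $xml   = simplexml_load_file('products.xml');
        $seen  = array();   // product_number => first <product> occurrence
        $dupes = array();   // duplicate DOM nodes to strip after the loop

        foreach ($xml->product as $product) {
            $key = (string) $product->variant['product_number'];

            if (!isset($seen[$key])) {
                $seen[$key] = $product;
                continue;
            }

            // copy every <option> of the duplicate into the first occurrence
            $target = dom_import_simplexml($seen[$key]->variant->options);
            foreach ($product->variant->options->option as $option) {
                $target->appendChild(dom_import_simplexml($option)->cloneNode(true));
            }
            $dupes[] = dom_import_simplexml($product);
        }

        // remove the redundant <product> elements only after iteration is done
        foreach ($dupes as $node) {
            $node->parentNode->removeChild($node);
        }

        $xml->asXML('products-merged.xml');

    If the name attribute rather than product_number is what makes two products "the same", only the $key line needs to change.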

    Read the article

  • How do I create a thread-safe write-once read-many value in Java?

    - by Software Monkey
    This is a problem I encounter frequently in working with more complex systems and which I have never figured out a good way to solve. It usually involves variations on the theme of a shared object whose construction and initialization are necessarily two distinct steps. This is generally because of architectural requirements, similar to applets, so answers that suggest I consolidate construction and initialization are not useful. By way of example, let's say I have a class that is structured to fit into an application framework like so: public class MyClass { private /*ideally-final*/ SomeObject someObject; MyClass() { someObject=null; } public void startup() { someObject=new SomeObject(...arguments from environment which are not available until startup is called...); } public void shutdown() { someObject=null; // this is not necessary, I am just expressing the intended scope of someObject explicitly } } I can't make someObject final since it can't be set until startup() is invoked. But I would really like it to reflect it's write-once semantics and be able to directly access it from multiple threads, preferably avoiding synchronization. The idea being to express and enforce a degree of finalness, I conjecture that I could create a generic container, like so: public class WoRmObject<T> { private T object; WoRmObject() { object=null; } public WoRmObject set(T val) { object=val; return this; } public T get() { return object; } } and then in MyClass, above, do: private final WoRmObject<SomeObject> someObject; MyClass() { someObject=new WoRmObject<SomeObject>(); } public void startup() { someObject.set(SomeObject(...arguments from environment which are not available until startup is called...)); } Which raises some questions for me: Is there a better way, or existing Java object (would have to be available in Java 4)? Is this thread-safe provided that no other thread accesses someObject.get() until after it's set() has been called. The other threads will only invoke methods on MyClass between startup() and shutdown() - the framework guarantees this. Given the completely unsynchronized WoRmObject container, it is ever possible under either JMM to see a value of object which is neither null nor a reference to a SomeObject? In other words, does has the JMM always guaranteed that no thread can observe the memory of an object to be whatever values happened to be on the heap when the object was allocated.
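
    Taking the questions in order, and conservatively because of the "available in Java 4" constraint (the pre-JSR-133 memory model gives volatile much weaker guarantees than modern Java): the unsynchronized WoRmObject is not safe in general, because nothing orders the write in startup() with reads from other threads unless the framework's hand-off itself synchronizes (starting the worker Thread after startup() returns would do it; a bare "we promise not to call yet" would not). The boring but portable version simply synchronizes both sides, which for a read-mostly reference is normally cheap. A sketch (no generics, to stay 1.4-friendly):

        // write-once holder that is safe on any JVM, including pre-1.5 memory models
        public final class WormRef {
            private Object value;       // guarded by "this"
            private boolean written;

            public synchronized void set(Object v) {
                if (written) {
                    throw new IllegalStateException("value already set");
                }
                value = v;
                written = true;
            }

            public synchronized Object get() {
                return value;           // null until set() has run
            }
        }

    On the last question: a reader can observe a stale null, but never a garbage reference; reads and writes of object references are atomic in every version of the memory model (only unsynchronized long and double fields may tear).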

    Read the article

  • Saving data in custom class via AppDelegate

    - by redspike
    I can't seem to save data to a custom instance object in my AppDelegate. My custom class is very simple and is as follows: Person.h ... @interface Person : NSObject { int _age; } - (void) setAge: (int) age; - (int) age; @end Person.m #import "Person.h" @implementation Person - (void) setAge:(int) age { _age = age; } - (int) age { return _age; } @end I then create an instance of Person in the AppDelegate class: AppDelegate.h @class Person; @interface AccuTaxAppDelegate : NSObject <UIApplicationDelegate> { ... Person *person; } ... @property (nonatomic, retain) Person *person; @end AppDelegate.m ... #import "Person.h" @implementation AccuTaxAppDelegate ... @synthesize person; - (void)applicationDidFinishLaunching:(UIApplication *)application { // Override point for customization after app launch [window addSubview:[navigationController view]]; [window makeKeyAndVisible]; } - (void)applicationWillTerminate:(UIApplication *)application { // Save data if appropriate } #pragma mark - #pragma mark Memory management - (void)dealloc { [navigationController release]; [window release]; [person release]; [super dealloc]; } @end Finally, in my ViewController code I grab a handle on AppDelegate and then grab the person instance, but when I try to save the age it doesn't seem to work: MyViewController ... - (void)textFieldDidEndEditing:(UITextField *)textField { NSString *textAge = [textField text]; int age = [textAge intValue]; NSLog(@"Age from text field::%i", age); AppDelegate *appDelegate = (AppDelegate *)[UIApplication sharedApplication].delegate; Person *myPerson = (Person *)[appDelegate person]; NSLog(@"Age before setting: %i", [myPerson age]); [myPerson setAge:age]; NSLog(@"Age after setting: %i", [myPerson age]); [textAge release]; } ... The output of the above NSLogs are: [Session started at 2010-05-04 18:29:22 +0100.] 2010-05-04 18:29:28.260 AccuTax[16235:207] Age in text field:25 2010-05-04 18:29:28.262 AccuTax[16235:207] Age before setting: 0 2010-05-04 18:29:28.263 AccuTax[16235:207] Age after setting: 0 Any ideas why 'age' isn't being stored? I'm relatively new to Obj-C so please forgive me if I'm missing something very simple!
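
    A hedged guess that fits the log output: the person property is declared but never allocated, so [appDelegate person] returns nil, and in Objective-C messages to nil do nothing and return 0 -- exactly the zeros being printed. Creating the Person before it is first used fixes that; separately, [textAge release] over-releases a string this code does not own ([textField text] came from neither alloc/new/copy nor retain), so that line should go. Sketch:

        // AppDelegate.m
        - (void)applicationDidFinishLaunching:(UIApplication *)application {
            Person *p = [[Person alloc] init];
            self.person = p;            // retained by the property
            [p release];

            [window addSubview:[navigationController view]];
            [window makeKeyAndVisible];
        }

        // MyViewController.m
        - (void)textFieldDidEndEditing:(UITextField *)textField {
            int age = [[textField text] intValue];
            AppDelegate *appDelegate =
                (AppDelegate *)[UIApplication sharedApplication].delegate;
            [[appDelegate person] setAge:age];
            NSLog(@"Age after setting: %i", [[appDelegate person] age]);
            // no release of the text field's string: it was never owned here
        }

    Lazily creating the object inside a custom -person getter works just as well; the only requirement is that something actually allocates it before setAge: is sent.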

    Read the article

  • [N]Hibernate: view-like fetching properties of associated class

    - by chiccodoro
    (Felt quite helpless in formulating an appropriate title...) In my C# app I display a list of "A" objects, along with some properties of their associated "B" objects and properties of B's associated "C" objects: A.Name B.Name B.SomeValue C.Name Foo Bar 123 HelloWorld Bar Hello 432 World ... To clarify: A has an FK to B, B has an FK to C. (Such as, e.g. BankAccount - Person - Company). I have tried two approaches to load these properties from the database (using NHibernate): A fast approach and a clean approach. My eventual question is how to do a fast & clean approach. Fast approach: Define a view in the database which joins A, B, C and provides all these fields. In the A class, define properties "BName", "BSomeValue", "CName" Define a hibernate mapping between A and the View, whereas the needed B and C properties are mapped with update="false" insert="false" and do actually stem from B and C tables, but Hibernate is not aware of that since it uses the view. This way, the listing only loads one object per "A" record, which is quite fast. If the code tries to access the actual associated property, "A.B", I issue another HQL query to get B, set the property and update the faked BName and BSomeValue properties as well. Clean approach: There is no view. Class A is mapped to table A, B to B, C to C. When loading the list of A, I do a double left-join-fetch to get B and C as well: from A a left join fetch a.B left join fetch a.B.C B.Name, B.SomeValue and C.Name are accessed through the eagerly loaded associations. The disadvantage of this approach is that it gets slower and takes more memory, since it needs to created and map 3 objects per "A" record: An A, B, and C object each. Fast and clean approach: I feel somehow uncomfortable using a database view that hides a join and treat that in NHibernate as if it was a table. So I would like to do something like: Have no views in the database. Declare properties "BName", "BSomeValue", "CName" in class "A". Define the mapping for A such that NHibernate fetches A and these properties together using a join SQL query as a database view would do. The mapping should still allow for defining lazy many-to-one associations for getting A.B.C My questions: Is this possible? Is it [un]artful? Is there a better way?
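
    One middle road, offered as a hedged sketch rather than the canonical answer: keep the clean per-table mappings, but feed the list from a projection query that selects only the scalar columns the grid shows, materialised into a small unmapped DTO. That gives view-like cost (one joined SELECT, one flat object per row) without a database view and without the insert="false" pseudo-properties, while A.B stays an ordinary lazy association for the detail case. Class and property names are illustrative; NHibernate also needs an <import class="AListRow"/> mapping so HQL can resolve the DTO by name.

        // unmapped, read-only row for the list view
        public class AListRow
        {
            public AListRow(string aName, string bName, int bSomeValue, string cName)
            {
                AName = aName; BName = bName; BSomeValue = bSomeValue; CName = cName;
            }
            public string AName { get; private set; }
            public string BName { get; private set; }
            public int BSomeValue { get; private set; }
            public string CName { get; private set; }
        }

        // one SQL join, no A/B/C entities hydrated at all
        IList<AListRow> rows = session
            .CreateQuery(@"select new AListRow(a.Name, b.Name, b.SomeValue, c.Name)
                           from A a join a.B b join b.C c")
            .List<AListRow>();

    The same projection can be written with the Criteria API and a result transformer if HQL strings are unwelcome in the code base.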

    Read the article

  • How to update an Android ListActivity when the data of the connected SimpleCursorAdapter changes

    - by 4485670
    I have the following code. What I want to achieve is to update the shown list when I click an entry so I can traverse through the list. I found the two uncommented ways to do it here on stackoverflow, but neither works. I also got the advice to create a new ListActivity on the data update, but that sounds like wasting resources? EDIT: I found the solution myself. All you need to do is call "SimpleCursorAdapter.changeCursor(new Cursor);". No notifying, no things in UI-Thread or whatever. import android.app.ListActivity; import android.database.Cursor; import android.os.Bundle; import android.util.Log; import android.view.View; import android.widget.ListView; import android.widget.SimpleCursorAdapter; public class MyActivity extends ListActivity { private DepartmentDbAdapter mDbHelper; private Cursor cursor; private String[] from = new String[] { DepartmentDbAdapter.KEY_NAME }; private int[] to = new int[] { R.id.text1 }; private SimpleCursorAdapter notes; @Override public void onCreate(Bundle savedInstanceState) { super.onCreate(savedInstanceState); setContentView(R.layout.departments_list); mDbHelper = new DepartmentDbAdapter(this); mDbHelper.open(); // Get all of the departments from the database and create the item list cursor = mDbHelper.fetchSubItemByParentId(1); this.startManagingCursor(cursor); // Now create an array adapter and set it to display using our row notes = new SimpleCursorAdapter(this, R.layout.department_row, cursor, from, to); this.setListAdapter(notes); } @Override protected void onListItemClick(ListView l, View v, int position, long id) { super.onListItemClick(l, v, position, id); // get new data and update the list this.updateData(safeLongToInt(id)); } /** * update data for the list * * @param int departmentId id of the parent department */ private void updateData(int departmentId) { // close the old one, get a new one cursor.close(); cursor = mDbHelper.fetchSubItemByParentId(departmentId); // change the cursor of the adapter to the new one notes.changeCursor(cursor); } /** * safely convert long to in to save memory * * @param long l the long variable * * @return integer */ public static int safeLongToInt(long l) { if (l < Integer.MIN_VALUE || l > Integer.MAX_VALUE) { throw new IllegalArgumentException (l + " cannot be cast to int without changing its value."); } return (int) l; } }

    Read the article

  • Primary language - C++/Qt, C#, Java?

    - by Airjoe
    I'm looking for some input, but let me start with a bit of background (for tl;dr skip to end). I'm an IT major with a concentration in networking. While I'm not a CS major nor do I want to program as a vocation, I do consider myself a programmer and do pretty well with the concepts involved. I've been programming since about 6th grade, started out with a proprietary game creation language that made my transition into C++ at college pretty easy. I like to make programs for myself and friends, and have been paid to program for local businesses. A bit about that- I wrote some programs for a couple local businesses in my senior year in high school. I wrote management systems for local shops (inventory, phone/pos orders, timeclock, customer info, and more stuff I can't remember). It definitely turned out to be over my head, as I had never had any formal programming education. It was a great learning experience, but damn was it crappy code. Oh yeah, by the way, it was all vb6. So, I've used vb6 pretty extensively, I've used c++ in my classes (intro to programming up to algorithms), used Java a little bit in another class (had to write a ping client program, pretty easy) and used Java for some simple Project Euler problems to help learn syntax and such when writing the program for the class. I've also used C# a bit for my own simple personal projects (simple programs, one which would just generate an HTTP request on a list of websites and notify if one responded unexpectedly or not at all, and another which just held a list of things to do and periodically reminded me to do them), things I would've written in vb6 a year or two ago. I've just started using Qt C++ for some undergrad research I'm working on. Now I've had some formal education, I [think I] understand organization in programming a lot better (I didn't even use classes in my vb6 programs where I really should have), how it's important to structure code, split into functions where appropriate, document properly, efficiency both in memory and speed, dynamic and modular programming etc. I was looking for some input on which language to pick up as my "primary". As I'm not a "real programmer", it will be mostly hobby projects, but will include some 'real' projects I'm sure. From my perspective: QtC++ and Java are cross platform, which is cool. Java and C# run in a virtual machine, but I'm not sure if that's a big deal (something extra to distribute, possibly a bit slower? I think Qt would require additional distributables too, right?). I don't really know too much more than this, so I appreciate any help, thanks! TL;DR Am an avocational programmer looking for a language, want quick and straight forward development, liked vb6, will be working with database driven GUI apps- should I go with QtC++, Java, C#, or perhaps something else?

    Read the article

  • Using XmlDiffPatch when writing to stream

    - by Mark Smith
    I am trying to use XmlDiffPatch to compare two XMLs (one from a stream, the other from a file) and write the diff patch to a stream. The first method writes my XML to a MemoryStream. The second method loads an XML from a file and creates a stream for the patched output to be written into. The third method actually compares the two documents and writes the third. The xmldiff.Compare(originalFile, finalFile, dgw) method takes (XmlReader, XmlReader, XmlWriter). I always get the result that both files are identical, even though they are not, so I know I am missing something. Any help is appreciated!

        public MemoryStream FirstXml()
        {
            string[] names = { "John", "Mohammed", "Marc", "Tamara", "joy" };
            MemoryStream ms = new MemoryStream();
            XmlTextWriter xtw = new XmlTextWriter(ms, Encoding.UTF8);
            xtw.WriteStartDocument();
            xtw.WriteStartElement("root");
            foreach (string s in names)
            {
                xtw.WriteStartElement(s);
                xtw.WriteEndElement();
            }
            xtw.WriteEndElement();
            xtw.WriteEndDocument();
            return ms;
        }

        public Stream SecondXml()
        {
            XmlReader finalFile = XmlReader.Create(@"c:\......\something.xml");
            MemoryStream ms = FirstXml();
            XmlReader originalFile = XmlReader.Create(ms);
            MemoryStream ms2 = new MemoryStream();
            XmlTextWriter dgw = new XmlTextWriter(ms2, Encoding.UTF8);
            GenerateDiffGram(originalFile, finalFile, dgw);
            return ms2;
        }

        public void GenerateDiffGram(XmlReader originalFile, XmlReader finalFile, XmlWriter dgw)
        {
            XmlDiff xmldiff = new XmlDiff();
            bool bIdentical = xmldiff.Compare(originalFile, finalFile, dgw);
            dgw.Close();

            StreamReader sr = new StreamReader(SecondXml());
            string xmlOutput = sr.ReadToEnd();
            if (xmlOutput.Contains("</xd:xmldiff>"))
            {
                Console.WriteLine("Xml files are not identical");
                Console.Read();
            }
            else
            {
                Console.WriteLine("Xml files are identical");
                Console.Read();
            }
        }
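
    A hedged guess at what is going wrong, sketched under a few assumptions rather than verified against the original project: XmlTextWriter buffers its output, and writing leaves the MemoryStream positioned at the end of the data, so XmlReader.Create(ms) finds nothing to read; the diffgram stream has the same problem when it is read back, and GenerateDiffGram re-enters SecondXml() instead of reading the stream it just produced. Flushing the writer and rewinding each stream before reading it is usually enough. The sketch below assumes the same using directives as the code above (System, System.IO, System.Text, System.Xml, Microsoft.XmlDiffPatch), and it assumes FirstXml() is changed to call xtw.Flush() before return ms; so the buffered XML actually lands in the stream.

        // Sketch only: SecondXml() reworked with explicit flush/rewind steps.
        public Stream SecondXml()
        {
            MemoryStream ms = FirstXml();           // FirstXml() must call xtw.Flush() before returning
            ms.Position = 0;                        // rewind the generated XML before reading it
            XmlReader originalFile = XmlReader.Create(ms);
            XmlReader finalFile = XmlReader.Create(@"c:\......\something.xml");

            MemoryStream ms2 = new MemoryStream();
            XmlTextWriter dgw = new XmlTextWriter(ms2, Encoding.UTF8);

            XmlDiff xmldiff = new XmlDiff();
            bool identical = xmldiff.Compare(originalFile, finalFile, dgw);
            dgw.Flush();                            // flush, but do not Close(), which would dispose ms2

            ms2.Position = 0;                       // rewind the diffgram before reading it back
            string xmlOutput = new StreamReader(ms2).ReadToEnd();
            Console.WriteLine(identical ? "Xml files are identical" : "Xml files are not identical");

            ms2.Position = 0;                       // rewind again so the caller can read the diffgram
            return ms2;
        }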

    Read the article

  • Windows Phone periodic task, function not executing

    - by Special K.
    I'm trying to execute some code (to parse an XML, to be more precise, and after that toast the user with some new info), but the class function AccDetailsDownloaded is never executed (it is simply skipped). Also, the memory usage is ~2 MB out of 6. Here is my code:

        if (task is PeriodicTask)
        {
            getData();
        }
        else
        {
            getData();
        }

        // If debugging is enabled, launch the agent again in one minute.
        #if DEBUG_AGENT
            ScheduledActionService.LaunchForTest(task.Name, TimeSpan.FromSeconds(60));
        #endif

        // Call NotifyComplete to let the system know the agent is done working.
        NotifyComplete();
        }

        public void getData()
        {
            var settings = IsolatedStorageSettings.ApplicationSettings;
            string url = "http://example.com/example.xml";

            if (!System.Net.NetworkInformation.NetworkInterface.GetIsNetworkAvailable())
            {
                MessageBox.Show("No network connection available!");
                return;
            }

            // start loading XML data
            WebClient downloader = new WebClient();
            Uri uri = new Uri(url, UriKind.Absolute);
            downloader.DownloadStringCompleted += new DownloadStringCompletedEventHandler(AccDetailsDownloaded);
            downloader.DownloadStringAsync(uri);

            string toastTitle = "Periodic ";
            string toastMessage = "Mem usage: " + DeviceStatus.ApplicationPeakMemoryUsage + "/" + DeviceStatus.ApplicationMemoryUsageLimit;

            // Launch a toast to show that the agent is running.
            // The toast will not be shown if the foreground application is running.
            ShellToast toast = new ShellToast();
            toast.Title = toastTitle;
            toast.Content = toastMessage;
            toast.Show();
        }

        void AccDetailsDownloaded(object sender, DownloadStringCompletedEventArgs e)
        {
            if (e.Result == null || e.Error != null)
            {
                MessageBox.Show("There was an error downloading the XML-file!");
            }
            else
            {
                string toastTitle = "Periodic ";
                string toastMessage = "Mem usage: " + DeviceStatus.ApplicationPeakMemoryUsage + "/" + DeviceStatus.ApplicationMemoryUsageLimit;

                // Launch a toast to show that the agent is running.
                // The toast will not be shown if the foreground application is running.
                ShellToast toast = new ShellToast();
                toast.Title = toastTitle;
                toast.Content = toastMessage;
                toast.Show();
            }
        }

    Thank you.
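
    A plausible explanation, offered as a hedged sketch rather than a confirmed diagnosis: DownloadStringAsync returns immediately, and the agent calls NotifyComplete() right after getData(), so the background agent is shut down before the DownloadStringCompleted callback ever fires. Deferring NotifyComplete() until AccDetailsDownloaded has finished usually resolves this; note also that MessageBox.Show is generally not usable from a background agent, so the error branch would need a different reporting mechanism. The OnInvoke body below is a simplified assumption, not the original agent code.

        // Sketch: keep the agent alive until the async download has completed.
        protected override void OnInvoke(ScheduledTask task)
        {
            getData();   // starts DownloadStringAsync and returns immediately
            // NotifyComplete() is intentionally no longer called here.
        }

        void AccDetailsDownloaded(object sender, DownloadStringCompletedEventArgs e)
        {
            if (e.Error == null && e.Result != null)
            {
                // parse the XML and show the toast here, as in the original handler
            }

            // Only now is the agent's work really done.
            NotifyComplete();
        }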

    Read the article

  • Efficient file buffering & scanning methods for large files in python

    - by eblume
    The description of the problem I am having is a bit complicated, and I will err on the side of providing more complete information. For the impatient, here is the briefest way I can summarize it: what is the fastest (least execution time) way to split a text file into ALL (overlapping) substrings of size N (bounded N, e.g. 36) while throwing out newline characters?

    I am writing a module which parses files in the FASTA ASCII-based genome format. These files comprise what is known as the 'hg18' human reference genome, which you can download from the UCSC genome browser (go slugs!) if you like. As you will notice, the genome files are composed of chr[1..22].fa and chr[XY].fa, as well as a set of other small files which are not used in this module.

    Several modules already exist for parsing FASTA files, such as BioPython's SeqIO. (Sorry, I'd post a link, but I don't have the points to do so yet.) Unfortunately, every module I've been able to find doesn't do the specific operation I am trying to do. My module needs to split the genome data ('CAGTACGTCAGACTATACGGAGCTA' could be a line, for instance) into every single overlapping N-length substring. Let me give an example using a very small file (the actual chromosome files are between 355 and 20 million characters long) and N=8:

        import cStringIO
        example_file = cStringIO.StringIO("""\
        header
        CAGTcag
        TFgcACF
        """)

        for read in parse(example_file):
            print read

    which prints:

        CAGTCAGTF
        AGTCAGTFG
        GTCAGTFGC
        TCAGTFGCA
        CAGTFGCAC
        AGTFGCACF

    The function that I found had the absolute best performance of the methods I could think of is this:

        def parse(file):
            size = 8  # of course in my code this is a function argument
            file.readline()  # skip past the header
            buffer = ''
            for line in file:
                buffer += line.rstrip().upper()
                while len(buffer) >= size:
                    yield buffer[:size]
                    buffer = buffer[1:]

    This works, but unfortunately it still takes about 1.5 hours (see note below) to parse the human genome this way. Perhaps this is the very best I am going to see with this method (a complete code refactor might be in order, but I'd like to avoid it, as this approach has some very specific advantages in other areas of the code), but I thought I would turn this over to the community. Thanks!

    Note: this time includes a lot of extra calculation, such as computing the opposing strand read and doing hashtable lookups on a hash of approximately 5G in size.

    Post-answer conclusion: it turns out that using fileobj.read() and then manipulating the resulting string (string.replace(), etc.) took relatively little time and memory compared to the remainder of the program, and so I used that approach. Thanks everyone!
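
    A short sketch of the approach the post-answer conclusion describes: read the whole file once, clean it with plain string operations, then slice out the overlapping windows. The function name and keyword argument below are illustrative assumptions, not taken from the original module.

        def parse_whole_file(fileobj, size=8):
            """Sketch: read the FASTA file in one go, drop the header line,
            strip newlines with str.replace(), then yield every overlapping window."""
            data = fileobj.read()
            body = data[data.find('\n') + 1:]                      # skip the header line
            seq = body.replace('\n', '').replace('\r', '').upper()
            for i in xrange(len(seq) - size + 1):                  # every overlapping substring of length `size`
                yield seq[i:i + size]

    It is used the same way as the original generator, e.g. for read in parse_whole_file(example_file): ...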

    Read the article
