Search Results

Search found 16496 results on 660 pages for 'long cheng'.


  • how to fix it =.= "Third party sources disabled"

    - by Long Pham
    When I tried to upgrade to 13.10, this message appeared on the screen while it was preparing to update: "Third party sources disabled. Some third party entries in your sources.list were disabled. You can re-enable them after the upgrade with the 'software-properties' tool or your package manager." After that it stopped with: W:Failed to fetch bzip2:/var/lib/apt/lists/partial/archive.ubuntu.com_ubuntu_dists_saucy_universe_source_Sources Hash Sum mismatch, E:Some index files failed to download. They have been ignored, or old ones used instead.

    Read the article

  • How to view files from host? (running inside VMWare Fusion)

    - by Dave Long
    I have just finished moving my development server into an Ubuntu 10.04 Server VM in VMware Fusion 3. I have all of my MySQL and Tomcat stuff running and am now trying to connect to my actual site files, which are stored on my Mac under /{User Root}/Workspace/ColdFusion/. I know that normally you should be able to set up a shared folder in VMware and find it under /mnt/hgfs/{Share Name}, but I can't find it. I am not sure if I have to mount it manually or what.

    Read the article

  • Can I minify Javascript that requires copyright notice?

    - by Nathan Long
    I guess this is actually a legal question, but it relates to software. I'm about to include a JS plugin in a project. The comments include: Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimer. * Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution. Is using this in my web site "redistribution?" If I minify this to conserve bandwidth, I assume it will strip all comments. If the answer to #1 is yes, doesn't that imply I'm legally not allowed to minify it? (That would stink, since I was planning to auto-minify all JS as part of the deploy process.)

    Read the article

  • questions on a particular algorithm

    - by paul smith
    Upon searching for a fast prime algorithm, I stumbled upon this: public static boolean isP(long n) { if (n==2 || n==3) return true; if ((n&0x1)==0 || n%3==0 || n<2) return false; long root=(long)Math.sqrt(n)+1L; // we check just numbers of the form 6*k+1 and 6*k-1 for (long k=6;k<=root;k+=6) { if (n%(k-1)==0) return false; if (n%(k+1)==0) return false; } return true; } My questions are: Why is long being used everywhere instead of int? Because with a long type the argument could be much larger than Integer.MAX_VALUE, thus making the method more flexible? In the second 'if', is n&0x1 the same as n%2? If so, why didn't the author just use n%2? To me it's more readable. The line that sets the 'root' variable: why add the 1L? What is the run-time complexity? Is it O(sqrt(n/6)) or O(sqrt(n)/6)? Or would we just say O(n)?
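
    For comparison, here is a minimal illustrative sketch (added for this write-up, not from the original post) of the same 6k±1 test with int arguments and the more readable n % 2. For non-negative n, (n & 0x1) == 0 and n % 2 == 0 are equivalent; long simply widens the usable input range beyond Integer.MAX_VALUE.

        // Hedged sketch: same algorithm as the posted isP(), int instead of long.
        public static boolean isPrime(int n) {
            if (n == 2 || n == 3) return true;
            if (n < 2 || n % 2 == 0 || n % 3 == 0) return false;
            int root = (int) Math.sqrt(n) + 1; // +1 guards against sqrt rounding down
            for (int k = 6; k <= root; k += 6) {   // trial divisors 6k - 1 and 6k + 1
                if (n % (k - 1) == 0 || n % (k + 1) == 0) return false;
            }
            return true;
        }

    The loop body runs roughly sqrt(n)/6 times, so the cost is O(sqrt(n)); constant factors such as the 1/6 are dropped in big-O notation.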

    Read the article

  • Debugging OpenOffice crashes

    - by JD Long
    This is partly an OpenOffice question and partly an Ubuntu question. I'm running OpenOffice 3.2.0 and Ubuntu 10.04. I get frequent crashes of OO, especially the Calc app, although I get crashes in the word processor as well. They are very abrupt and accompanied by no warning or error message. I'm just typing away and then the app is gone. Sometimes I even end up thinking I'm typing in OO and discover that OO has crashed and I'm typing in whatever application was under OO. However, I can't reproduce these crashes on demand. They seem random. I can open the same file and do the exact same thing but it does not crash. In Ubuntu, how do I trace, track, or diagnose these types of crashes? Is there software I can invoke to help diagnose? Can I start OO from a command prompt with debugging of some sort enabled? Note: if someone could add the tag OpenOffice, I would appreciate it.

    Read the article

  • How do I know if I'm getting the most out of my video card?

    - by b.long
    My computer at home is a bit lacking, so I want to make sure I'm getting the most out of it while I can. Generally speaking, here are the specs: 4GB Memory AMD Athlon(tm) 64 X2 Dual Core Processor 5200+ × 2 64-bit Ubuntu The terminal shows me the following: me@home:~$ uname -a Linux home 3.0.0-17-generic #30-Ubuntu SMP Thu Mar 8 20:45:39 UTC 2012 x86_64 x86_64 x86_64 GNU/Linux me@home:~$ lspci | grep VGA 01:00.0 VGA compatible controller: ATI Technologies Inc RV380 [Radeon X600 (PCIE)] me@home:~$ sudo lshw -C video *-display:0 description: VGA compatible controller product: RV380 [Radeon X600 (PCIE)] vendor: ATI Technologies Inc physical id: 0 bus info: pci@0000:01:00.0 version: 00 width: 32 bits clock: 33MHz capabilities: pm pciexpress msi vga_controller bus_master cap_list rom configuration: driver=radeon latency=0 resources: irq:44 memory:e0000000-efffffff ioport:ac00(size=256) memory:fdef0000-fdefffff memory:fdec0000-fdedffff *-display:1 UNCLAIMED description: Display controller product: RV380 [Radeon X600] vendor: ATI Technologies Inc physical id: 0.1 bus info: pci@0000:01:00.1 version: 00 width: 32 bits clock: 33MHz capabilities: pm pciexpress bus_master cap_list configuration: latency=0 resources: memory:fdee0000-fdeeffff me@home:~$ lspci -nn | grep VGA 01:00.0 VGA compatible controller [0300]: ATI Technologies Inc RV380 [Radeon X600 (PCIE)] [1002:5b62] The additional drivers menu in System Settings shows me nothing useful and my attempt at installing ATI's Catalyst Control center (drivers that came with the video card) failed. I believe the latest version of Ubuntu at the time was 9.x. What should I do? Install an old version of Ubuntu 9? Use some alternative driver? UPDATE: I might try my hand at a bit from this answer next: "Installing Catalyst Manually (from AMD/ATI's site)" . From a terminal, fgl_glxgears returns *"fgl_glxgears: command not found"*. Any thoughts?

    Read the article

  • Installing 10.04 server on HP xw9400 Workstation with RAID 5

    - by Dave Long
    I have a workstation that was given to me that is a friggen powerhouse, so I figured I would set it up as my development and demo server. This is my first experience installing Ubuntu onto a RAID array and so far it has not been a fun one. I have been following the Advanced Installation guide for installing Ubuntu 10.04 server, and it says that there will be an option on the Partition Disks screen to manually create the partitions, but the only options I have are: Configure iSCSI volumes Undo changes to partitions Finish partitioning and write changes to disk Just before I got to that screen I got a message that said: One or more drives containing Serial ATA RAID configurations have been found. Do you wish to activate these RAID devices? It doesn't matter whether I answer yes or no to that, I still get the same Partition Disks screen. When I try to select Finish partitioning and write changes to disk I just get the No root file system error. Has anyone else experienced this, and how do I get past it? Can I not run Ubuntu on this machine?

    Read the article

  • Lost connectivity after configuring multiple network adapters on separate networks

    - by Dave Long
    I am trying to set up an Ubuntu hosting server, currently just for development, and the server has two NICs, each sitting on a different network. eth0 is on 192.168.200.* and eth1 is on 192.168.101.* and each one has a static IP. eth0 is the public facing NIC card and eth1 is strictly for internal access to the server. I initially only set up eth0 and added the eth1 card when I needed it. eth0 was working fine until I added eth1; now I can't get any connectivity on eth0 unless I pull eth1 out of the box. The configuration for each interface is as follows: auto eth0 iface eth0 inet static address 192.168.200.94 netmask 255.255.255.0 network 192.168.200.0 broadcast 192.168.200.255 gateway 192.168.200.253 auto eth1 iface eth1 inet static address 192.168.101.64 netmask 255.255.255.0 network 192.168.101.0 broadcast 192.168.101.255 gateway 192.168.101.254 Again, eth0 worked fine until I added eth1. I have seen this happen with Windows servers if you have a Default Gateway set up for both NICs, but I am not sure if this works the same on Ubuntu. My resolv.conf file looks like so: nameserver 192.168.101.59 nameserver 192.168.101.58 domain domain.local search domain.local Per request, here is the routing table: 192.168.101.0 * 255.255.255.0 U 0 0 0 eth1 192.168.200.0 * 255.255.255.0 U 0 0 0 eth0 default 192.168.101.254 0.0.0.0 UG 100 0 0 eth1 default 192.168.200.253 0.0.0.0 UG 100 0 0 eth0

    Read the article

  • Call DB Stored Procedure using @NamedStoredProcedureQuery Injection

    - by anwilson
    Oracle Database Stored Procedures can be called from the EJB business layer to perform complex DB-specific operations. This approach avoids the overhead of frequent network hits, which could impact the end-user experience. A DB Stored Procedure can be invoked from EJB Session Bean business logic using the org.eclipse.persistence.queries.StoredProcedureCall API, but that approach requires more coding to handle the Session and Arguments of the Stored Procedure, thereby increasing the maintenance effort. EJB 3.0 introduces the @NamedStoredProcedureQuery annotation to call Database Stored Procedures as NamedQueries. This blog will take you through the steps to call an Oracle Database Stored Procedure using @NamedStoredProcedureQuery. The EMP_SAL_INCREMENT procedure available in the HR schema will be used in this sample. Create an Entity from the EMPLOYEES table. Add @NamedStoredProcedureQuery above @NamedQueries to Employees.java with the definition given below - @NamedStoredProcedureQuery(name="Employees.increaseEmpSal", procedureName = "EMP_SAL_INCREMENT", resultClass=void.class, resultSetMapping = "", returnsResultSet = false, parameters = { @StoredProcedureParameter(name = "EMP_ID", queryParameter = "EMPID"), @StoredProcedureParameter(name = "SAL_INCR", queryParameter = "SALINCR")} ) Observe how the Stored Procedure's arguments are handled easily in @NamedStoredProcedureQuery using @StoredProcedureParameter. Expose the Entity Bean by creating a Session Facade. A business method needs to be added to the Session Bean to access the Stored Procedure exposed as a NamedQuery. public void salaryRaise(Long empId, Long salIncrease) throws Exception { try{ Query query = em.createNamedQuery("Employees.increaseEmpSal"); query.setParameter("EMPID", empId); query.setParameter("SALINCR", salIncrease); query.executeUpdate(); } catch(Exception ex){ throw ex; } } Expose the business method through the Session Bean Remote Interface. void salaryRaise(Long empId, Long salIncrease) throws Exception; A Session Bean Client is required to invoke the method exposed through the remote interface. Call the exposed method in the Session Bean Client's main method. final Context context = getInitialContext(); SessionEJB sessionEJB = (SessionEJB)context.lookup("Your-JNDI-lookup"); sessionEJB.salaryRaise(new Long(200), new Long(1000)); Deploy the Session Bean and run the Session Bean Client. The salary of the Employee with Id 200 will be increased by 1000.

    Read the article

  • Files in /home deleted

    - by long-time user....2006
    In the most specific, unemotional terms: Reinstalled the OS, using 11.10 (1 month after release, to skip the initial issues that usually crop up). Configured the system to my specifications (just ways of organizing config files, etc). Logged out. Logged back in after an hour or so... to find my home directory obliterated and just a few skeleton files existing. Thought oh well, try again (this has happened before with an install, for reasons I've never been able to pinpoint, usually around install time with some sort of update, but it's never been a major recurring issue). The same thing happened. I thought something was awry, so I reinstalled again (another 20 minutes, meh). Set up the system, arranged the home directory a bit differently, thinking maybe I had trodden on something I shouldn't have. Logged out, came back: the same thing. Most of the directories I added were deleted (e.g. .xmonad, which links to xmonad.hs in my portable config directory). tl;dr: every change I make in my home directory gets deleted. The emotional part: UNACCEPTABLE. I need to configure my system the way I want, not get punched in the face every time I make a change. I'll willingly fill in details as needed; this was just a start to see if anyone can help. I've found no trace of this issue in a search.

    Read the article

  • Disable automatic starting of sshd?

    - by b.long
    Simple question here; what's the correct way to stop the sshd service from starting when the OS boots? I'm not sure if this answer is correct, so I'm hoping some guru(s) can help me out! What I'd like is a configuration that (after boot) allows me to start the service using sudo service ssh start when necessary. Version info: me@home:~$ ssh -V OpenSSH_5.9p1 Debian-5ubuntu1, OpenSSL 1.0.1 14 Mar 2012 me@home:~$ lsb_release -a No LSB modules are available. Distributor ID: Ubuntu Description: Ubuntu 12.04.1 LTS Release: 12.04 Codename: precise

    Read the article

  • Is Updating double operation atomic

    - by Yan Cheng CHEOK
    In Java, updating a double or long variable may not be atomic, as a double/long may be treated as two separate 32-bit variables. http://java.sun.com/docs/books/jls/second_edition/html/memory.doc.html#28733 In C++, if I am using a 32-bit Intel processor + the Microsoft Visual C++ compiler, is updating a double (8 bytes) an atomic operation? I cannot find much mention of this behavior in the specification. When I say "atomic variable", here is what I mean: Thread A tries to write 1 to variable x. Thread B tries to write 2 to variable x. We shall get the value 1 or 2 out of variable x, but not an undefined value.
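
    For the Java half of this question, one common workaround (a sketch that assumes plain reads and writes, not compound read-modify-write, are all that must be atomic) is to store the double's bit pattern in an AtomicLong, whose accesses are guaranteed to be atomic and visible across threads:

        import java.util.concurrent.atomic.AtomicLong;

        // Minimal sketch: atomic reads/writes of a double via its 64-bit pattern.
        class AtomicDouble {
            private final AtomicLong bits = new AtomicLong(Double.doubleToLongBits(0.0));

            void set(double value) { bits.set(Double.doubleToLongBits(value)); }

            double get() { return Double.longBitsToDouble(bits.get()); }
        }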

    Read the article

  • Creating an Objective-C++ Static Library in Xcode

    - by helixed
    So I've developed an engine for the iPhone with which I'd like to build a couple different games. Rather than copy and paste the files for the engine inside of each game's project directory, I'd a way to link to the engine from each game, so if I need to make a change to it I only have to do so once. After reeding around a little bit, it seems like static libraries are the best way to do this on the iPhone. I created a new project called Skeleton and copied all of my engine files over to it. I used this guide to create a static library, and I imported the library into a project called Chooser. However, when I tried to compile the project, Xcode started complaining about some C++ data structures I included in a file called ControlScene.mm. Here's my build errors: "operator delete(void*)", referenced from: -[ControlScene dealloc] in libSkeleton.a(ControlScene.o) -[ControlScene init] in libSkeleton.a(ControlScene.o) __gnu_cxx::new_allocator<operation_t>::deallocate(operation_t*, unsigned long)in libSkeleton.a(ControlScene.o) __gnu_cxx::new_allocator<operation_t*>::deallocate(operation_t**, unsigned long)in libSkeleton.a(ControlScene.o) "operator new(unsigned long)", referenced from: -[ControlScene init] in libSkeleton.a(ControlScene.o) __gnu_cxx::new_allocator<operation_t*>::allocate(unsigned long, void const*)in libSkeleton.a(ControlScene.o) __gnu_cxx::new_allocator<operation_t>::allocate(unsigned long, void const*)in libSkeleton.a(ControlScene.o) "std::__throw_bad_alloc()", referenced from: __gnu_cxx::new_allocator<operation_t*>::allocate(unsigned long, void const*)in libSkeleton.a(ControlScene.o) __gnu_cxx::new_allocator<operation_t>::allocate(unsigned long, void const*)in libSkeleton.a(ControlScene.o) "___cxa_rethrow", referenced from: std::_Deque_base<operation_t, std::allocator<operation_t> >::_M_create_nodes(operation_t**, operation_t**)in libSkeleton.a(ControlScene.o) std::_Deque_base<operation_t, std::allocator<operation_t> >::_M_initialize_map(unsigned long)in libSkeleton.a(ControlScene.o) "___cxa_end_catch", referenced from: std::_Deque_base<operation_t, std::allocator<operation_t> >::_M_create_nodes(operation_t**, operation_t**)in libSkeleton.a(ControlScene.o) std::_Deque_base<operation_t, std::allocator<operation_t> >::_M_initialize_map(unsigned long)in libSkeleton.a(ControlScene.o) "___gxx_personality_v0", referenced from: ___gxx_personality_v0$non_lazy_ptr in libSkeleton.a(ControlScene.o) ___gxx_personality_v0$non_lazy_ptr in libSkeleton.a(MenuLayer.o) "___cxa_begin_catch", referenced from: std::_Deque_base<operation_t, std::allocator<operation_t> >::_M_create_nodes(operation_t**, operation_t**)in libSkeleton.a(ControlScene.o) std::_Deque_base<operation_t, std::allocator<operation_t> >::_M_initialize_map(unsigned long)in libSkeleton.a(ControlScene.o) ld: symbol(s) not found collect2: ld returned 1 exit status If anybody could offer some insight as to why these problems are occuring, I'd appreciate it. Thanks, helixed

    Read the article

  • Adding a valuetype to IDL, compile and it fails with "No factory found"

    - by jim
    I can't figure out why the client keeps complaining about the not finding the factory method. I've tried the IDL with and without the "factory" keyword and that didn't change the behavior. The SDMGeoVT IDL matches other objects used (which run successfully). The SDMGeoVT classes generated match other generated classes in regards to inheritance and methods. The IDL is as follows; The idlj compiler runs w/o error. I implement the function on the server and I see the server code run and serialize the object over the wire (the server code runs fine). The client bombs with the following stack trace (the first couple of lines is from the jacORB library). I've created a small app just to compile and test the code (ArrayClient & ArrayServer). The base app (from the jacORB demo) works fine. I've tried using the base class OFBaseVT and a single object (SDMGeoVT vs the list return) and have the same issue. 2010-05-27 15:37:11.813 FINE read GIOP message of size 100 from ClientGIOPConnection to 127.0.0.1:47030 (1e4853f) 2010-05-27 15:37:11.813 FINE read GIOP message of size 100 from ClientGIOPConnection to 127.0.0.1:47030 (1e4853f) org.omg.CORBA.MARSHAL: No factory found for: IDL:pl/SDMGeoVT:1.0 at org.jacorb.orb.CDRInputStream.read_untyped_value(CDRInputStream.java:2906) at org.jacorb.orb.CDRInputStream.read_typed_value(CDRInputStream.java:3082) at org.jacorb.orb.CDRInputStream.read_value(CDRInputStream.java:2679) at com.helloworld.pl.SDMGeoVTHelper.read(SDMGeoVTHelper.java:106) at com.helloworld.pl.SDMGeoVTListHelper.read(SDMGeoVTListHelper.java:51) at com.helloworld.pl._PLManagerIFStub.getSDMGeos(_PLManagerIFStub.java:28) at com.helloworld.ArrayClient.<init>(ArrayClient.java:40) at com.helloworld.ArrayClient.main(ArrayClient.java:125) valuetype SDMGeoVT : common::OFBaseVT{ private string sdmName; private string zip; private string atz; private long long primaryDeptId; private string deptName; factory instance(in string name,in string ZIP,in string ATZ,in long long primaryDeptId,in string deptName,in string name); string getZIP(); void setZIP(in string ZIP); string getATZ(); void setATZ(in string ATZ); long long getPrimaryDeptId(); void setPrimaryDeptId(in long long primaryDeptId); string getDeptName(); void setDeptName(in string deptName); }; typedef sequence<SDMGeoVT> SDMGeoVTList; interface PLManagerIF : PublicManagerIF { pl::SDMGeoVTList getSDMGeos(in framework::ITransactionHandle tHandle, in long long productionLocationId); };
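
    A "No factory found" error on the client usually means the client-side ORB has no value factory registered for the valuetype it is unmarshalling. A hedged sketch of the usual fix follows; SDMGeoVTImpl is assumed to be your concrete subclass of the generated abstract SDMGeoVT class, and the registration call must run on the client before the first remote call.

        import org.omg.CORBA.ORB;
        import org.omg.CORBA.portable.ValueFactory;
        import com.helloworld.pl.SDMGeoVTHelper;

        // Hedged sketch: register a factory for IDL:pl/SDMGeoVT:1.0 with the client ORB.
        public class SDMGeoVTFactory implements ValueFactory {

            public java.io.Serializable read_value(org.omg.CORBA_2_3.portable.InputStream is) {
                // SDMGeoVTImpl is an assumed application class extending SDMGeoVT
                return is.read_value(new SDMGeoVTImpl());
            }

            // Call once on the client, after ORB.init() and before the lookup/call.
            public static void register(ORB orb) {
                ((org.omg.CORBA_2_3.ORB) orb).register_value_factory(
                        SDMGeoVTHelper.id(), new SDMGeoVTFactory());
            }
        }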

    Read the article

  • How to return a single variable from a CUDA kernel function?

    - by Pooya
    I have a CUDA search function which calculates one single variable. How can I return it back to the host? __global__ void G_SearchByNameID(node* Node, long nodeCount, long start,char* dest, long answer){ answer = 2; } cudaMemcpy(h_answer, d_answer, sizeof(long), cudaMemcpyDeviceToHost); cudaFree(d_answer); For both of these lines I get this error: error: argument of type "long" is incompatible with parameter of type "const void *"

    Read the article

  • Call/Ret in x86 assembly embedded in C++

    - by SP658
    This is probably trivial, but for some reason I can't get it to work. It's supposed to be a simple function that changes the last byte of a dword to 'AA' (10101010), but nothing happens when I call the function. It just returns my original dword: __declspec(naked) long function(unsigned long inputDWord, unsigned long *outputDWord) { _asm{ mov ebx, dword ptr[esp+4] push ebx call SET_AA pop ebx mov eax, dword ptr[esp+8] mov dword ptr[eax], ebx } } __declspec(naked) unsigned long SET_AA( unsigned long inputDWord ) { __asm{ mov eax, [esp+4] mov al, 0xAA ret } }

    Read the article

  • Problem with Precision floating point operation in C

    - by Microkernel
    Hi Guys, For one of my course projects I started implementing a "Naive Bayesian classifier" in C. My project is to implement a document classifier application (especially Spam) using huge training data. Now I have a problem implementing the algorithm because of the limitations of C's datatypes. (The algorithm I am using is given here: http://en.wikipedia.org/wiki/Bayesian_spam_filtering ) PROBLEM STATEMENT: The algorithm involves taking each word in a document and calculating the probability of it being a spam word. If p1, p2, p3 .... pn are the probabilities of words 1, 2, 3 ... n, the probability of the doc being spam or not is calculated using the combining formula shown in that article. Here, a probability value can very easily be around 0.01, so even if I use the datatype "double" my calculation will go for a toss. To confirm this I wrote the sample code given below. #define PROBABILITY_OF_UNLIKELY_SPAM_WORD (0.01) #define PROBABILITY_OF_MOSTLY_SPAM_WORD (0.99) int main() { int index; long double numerator = 1.0; long double denom1 = 1.0, denom2 = 1.0; long double doc_spam_prob; /* Simulating FEW unlikely spam words */ for(index = 0; index < 162; index++) { numerator = numerator*(long double)PROBABILITY_OF_UNLIKELY_SPAM_WORD; denom2 = denom2*(long double)PROBABILITY_OF_UNLIKELY_SPAM_WORD; denom1 = denom1*(long double)(1 - PROBABILITY_OF_UNLIKELY_SPAM_WORD); } /* Simulating lot of mostly definite spam words */ for (index = 0; index < 1000; index++) { numerator = numerator*(long double)PROBABILITY_OF_MOSTLY_SPAM_WORD; denom2 = denom2*(long double)PROBABILITY_OF_MOSTLY_SPAM_WORD; denom1 = denom1*(long double)(1- PROBABILITY_OF_MOSTLY_SPAM_WORD); } doc_spam_prob= (numerator/(denom1+denom2)); return 0; } I tried float, double and even long double datatypes but still the same problem. Hence, say in a 100K word document I am analyzing, if just 162 words have a 1% spam probability and the remaining 99838 are conspicuously spam words, then my app will still say it is a Not Spam doc because of the precision error (as the numerator easily goes to ZERO)!!! This is the first time I am hitting such an issue. So how exactly should this problem be tackled?
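
    The usual way around this underflow, in any language, is to work with sums of logarithms instead of products of probabilities. A minimal sketch of the idea (written in Java purely for illustration; the arithmetic carries over to C unchanged):

        // Hedged sketch: P(spam) = prod(p_i) / (prod(p_i) + prod(1 - p_i)),
        // evaluated in log space so neither product underflows to zero.
        public static double spamProbability(double[] p) {
            double logSpam = 0.0, logHam = 0.0;
            for (double pi : p) {
                logSpam += Math.log(pi);        // log of prod(p_i)
                logHam  += Math.log(1.0 - pi);  // log of prod(1 - p_i)
            }
            double eta = logHam - logSpam;      // log(prod(1-p_i)) - log(prod(p_i))
            return 1.0 / (1.0 + Math.exp(eta)); // equals prod(p)/(prod(p)+prod(1-p))
        }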

    Read the article

  • VirtualQuery gives illegal result. Is it a bug?

    - by Shimon Newman
    My code: MEMORY_BASIC_INFORMATION meminf; ::VirtualQuery(box.pBits, &meminf, sizeof(meminf)); The results: meminf: BaseAddress 0x40001000 void * AllocationBase 0x00000000 void * AllocationProtect 0x00000000 unsigned long RegionSize 0x0de0f000 unsigned long State 0x00010000 unsigned long Protect 0x00000001 unsigned long Type 0x00000000 unsigned long Notes: (1) AllocationBase is NULL while BaseAddress is not NULL (2) AllocationProtect is 0 (not a protection value) Is it a bug in VirtualQuery?

    Read the article

  • Parsing command line options in Perl

    - by Jay Gridley
    Hi guys, I am parsing command line options in Perl using Getopt::Long. I am forced to use the prefix - (one dash) for short options (-s) and -- (double dash) for long options (e.g. --input=file), but the problem is that there is one special option (-r=), which is a long option in that it requires an argument, but it has to have a one-dash (-) prefix, not a double-dash (--) prefix like the other long options. Is it possible to set up Getopt::Long to accept this?

    Read the article

  • How to implement Comet in database side?

    - by Morgan Cheng
    I have been searching for an answer to this question for a long time. How do you implement Comet on the database side? To support Comet, we'd better have a web server stack that supports asynchronous operation, so Apache is not an option. There are some open source web servers, such as Tornado, that can do asynchronous HTTP handling. That covers the web server level. At the database level, how do you make the web server know that some event has happened in the database? There should be an asynchronous way to let the web server know that something was updated in the database. Polling is not an option. Is there any example available?
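
    One concrete pattern, sketched here as an assumption since the question names no particular database, is PostgreSQL's LISTEN/NOTIFY: a trigger or the updating code issues NOTIFY when a row changes, and the web tier waits on that notification instead of polling tables. A minimal JDBC sketch (channel name and connection details are placeholders):

        import java.sql.Connection;
        import java.sql.DriverManager;
        import java.sql.Statement;
        import org.postgresql.PGConnection;
        import org.postgresql.PGNotification;

        // Hedged sketch using the PostgreSQL JDBC driver.
        public class DbEventListener {
            public static void main(String[] args) throws Exception {
                Connection conn = DriverManager.getConnection(
                        "jdbc:postgresql://localhost/mydb", "user", "password"); // placeholders
                PGConnection pgConn = (PGConnection) conn;
                try (Statement st = conn.createStatement()) {
                    st.execute("LISTEN table_updated"); // channel name is an assumption
                }
                while (true) {
                    try (Statement st = conn.createStatement()) {
                        st.execute("SELECT 1"); // older drivers only deliver notifications while reading
                    }
                    PGNotification[] events = pgConn.getNotifications();
                    if (events != null) {
                        for (PGNotification e : events) {
                            System.out.println("database changed: " + e.getName());
                            // hand the event to the asynchronous web tier (Comet push) here
                        }
                    }
                    Thread.sleep(500); // newer drivers also offer a blocking getNotifications(timeout)
                }
            }
        }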

    Read the article

  • pointer being freed was not allocated. Complex malloc history help

    - by Martin KS
    I've followed the guides helpfully linked here: http://stackoverflow.com/questions/295778/iphone-debugging-pointer-being-freed-was-not-allocated-errors but the malloc_history is really throwing me for a loop, can anyone shed any light on the following: ALLOC 0x185c600-0x18605ff [size=16384]: thread_a068a4e0 |start | main | UIApplicationMain | -[UIApplication _run] | CFRunLoopRunInMode | CFRunLoopRunSpecific | PurpleEventCallback | _UIApplicationHandleEvent | -[UIApplication sendEvent:] | -[UIApplication handleEvent:withNewEvent:] | -[UIApplication _reportAppLaunchFinished] | CA::Transaction::commit() | CA::Context::commit_transaction(CA::Transaction*) | CALayerCommitIfNeeded | CALayerCommitIfNeeded | CALayerCommitIfNeeded | CALayerCommitIfNeeded | CALayerCommitIfNeeded | CALayerCommitIfNeeded | CA::Context::commit_layer(_CALayer*, unsigned int, unsigned int, void*) | CA::Render::encode_set_object(CA::Render::Encoder*, unsigned long, unsigned int, CA::Render::Object*, unsigned int) | CA::Render::Layer::encode(CA::Render::Encoder*) const | CA::Render::Image::encode(CA::Render::Encoder*) const | CA::Render::Encoder::encode_data_async(void const*, unsigned long, void (*)(void const*, void*), void*) | CA::Render::Encoder::encode_bytes(void const*, unsigned long) | CA::Render::Encoder::grow(unsigned long) | realloc | malloc_zone_realloc ---- FREE 0x185c600-0x18605ff [size=16384]: thread_a068a4e0 |start | main | UIApplicationMain | -[UIApplication _run] | CFRunLoopRunInMode | CFRunLoopRunSpecific | PurpleEventCallback | _UIApplicationHandleEvent | -[UIApplication sendEvent:] | -[UIApplication handleEvent:withNewEvent:] | -[UIApplication _reportAppLaunchFinished] | CA::Transaction::commit() | CA::Context::commit_transaction(CA::Transaction*) | CALayerCommitIfNeeded | CALayerCommitIfNeeded | CALayerCommitIfNeeded | CALayerCommitIfNeeded | CALayerCommitIfNeeded | CALayerCommitIfNeeded | CA::Context::commit_layer(_CALayer*, unsigned int, unsigned int, void*) | CA::Render::encode_set_object(CA::Render::Encoder*, unsigned long, unsigned int, CA::Render::Object*, unsigned int) | CA::Render::Layer::encode(CA::Render::Encoder*) const | CA::Render::Image::encode(CA::Render::Encoder*) const | CA::Render::Encoder::encode_data_async(void const*, unsigned long, void (*)(void const*, void*), void*) | CA::Render::Encoder::encode_bytes(void const*, unsigned long) | CA::Render::Encoder::grow(unsigned long) | realloc | malloc_zone_realloc ALLOC 0x185e000-0x185e62f [size=1584]: thread_a068a4e0 |start | main | UIApplicationMain | GSEventRun | GSEventRunModal | CFRunLoopRunInMode | CFRunLoopRunSpecific | __NSFireDelayedPerform | -[UITableView _userSelectRowAtIndexPath:] | -[UITableView _selectRowAtIndexPath:animated:scrollPosition:notifyDelegate:] | -[PLAlbumView tableView:didSelectRowAtIndexPath:] | -[PLUIAlbumViewController albumView:selectedPhoto:] | PLNotifyImagePickerOfImageAvailability | -[UIImagePickerController _imagePickerDidCompleteWithInfo:] | -[GalleryViewController imagePickerController:didFinishPickingMediaWithInfo:] | UIImageJPEGRepresentation | CGImageDestinationFinalize | _CGImagePluginWriteJPEG | writeOne | _cg_jpeg_start_compress | _cg_jinit_compress_master | _cg_jinit_c_prep_controller | alloc_sarray | alloc_large | malloc | malloc_zone_malloc ---- FREE 0x185e000-0x185e62f [size=1584]: thread_a068a4e0 |start | main | UIApplicationMain | GSEventRun | GSEventRunModal | CFRunLoopRunInMode | CFRunLoopRunSpecific | __NSFireDelayedPerform | -[UITableView _userSelectRowAtIndexPath:] | -[UITableView 
_selectRowAtIndexPath:animated:scrollPosition:notifyDelegate:] | -[PL AlbumView tableView:didSelectRowAtIndexPath:] | -[PLUIAlbumViewController albumView:selectedPhoto:] | PLNotifyImagePickerOfImageAvailability | -[UIImagePickerController _imagePickerDidCompleteWithInfo:] | -[GalleryViewController imagePickerController:didFinishPickingMediaWithInfo:] | UIImageJPEGRepresentation | CGImageDestinationFinalize | _CGImagePluginWriteJPEG | writeOne | _cg_jpeg_abort | free_pool | free ALLOC 0x185c800-0x185ea1f [size=8736]: thread_a068a4e0 |start | main | UIApplicationMain | GSEventRun | GSEventRunModal | CFRunLoopRunInMode | CFRunLoopRunSpecific | __NSFireDelayedPerform | -[UITableView _userSelectRowAtIndexPath:] | -[UITableView _selectRowAtIndexPath:animated:scrollPosition:notifyDelegate:] | -[PLAlbumView tableView:didSelectRowAtIndexPath:] | -[PLUIAlbumViewController albumView:selectedPhoto:] | PLNotifyImagePickerOfImageAvailability | -[UIImagePickerController _imagePickerDidCompleteWithInfo:] | -[GalleryViewController imagePickerController:didFinishPickingMediaWithInfo:] | -[UIImage initWithData:] | _UIImageRefFromData | CGImageSourceCreateImageAtIndex | makeImagePlus | _CGImagePluginInitJPEG | initImageJPEG | calloc | malloc_zone_calloc

    Read the article

  • Blackberry stopwatch implementation

    - by Michaela
    I'm trying to write a blackberry app that is basically a stopwatch, and displays lap times. First, I'm not sure I'm implementing the stopwatch functionality in the most optimal way. I have a LabelField (_myLabel) that displays the 'clock' - starting at 00:00. Then you hit the start button and every second the _myLabel field gets updated with how many seconds have past since the last update (should only ever increment by 1, but sometimes there is a delay and it will skip a number). I just can't think of a different way to do it - and I am new to GUI development and threads so I guess that's why. EDIT: Here is what calls the stopwatch: _timer = new Timer(); _timer.schedule(new MyTimerTask(), 250, 250); And here is the TimerTask: class MyTimerTask extends TimerTask { long currentTime; long startTime = System.currentTimeMillis(); public void run() { synchronized (Application.getEventLock()) { currentTime = System.currentTimeMillis(); long diff = currentTime - startTime; long min = diff / 60000; long sec = (diff % 60000) / 1000; String minStr = new Long(min).toString(); String secStr = new Long(sec).toString(); if (min < 10) minStr = "0" + minStr; if (sec < 10) secStr = "0" + secStr; _myLabel.setText(minStr + ":" + secStr); timerDisplay.deleteAll(); timerDisplay.add(_timerLabel); } } } Anyway when you stop the stopwatch it updates a historical table of lap time data. When this list gets long, the timer starts to degrade. If you try to scroll, then it gets really bad. Is there a better way to implement my stopwatch?
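
    The degradation described here usually comes from rebuilding the field hierarchy (deleteAll()/add()) four times a second rather than from the time arithmetic. A hedged sketch of the usual shape of the fix, in plain Java with the surrounding RIM UI calls left to the caller: keep one label, recompute the elapsed time from the stored start time, and only touch the UI when the displayed second actually changes.

        // Illustrative sketch only: returns new "mm:ss" text, or null when nothing changed.
        class StopwatchFormatter {
            private final long startTime = System.currentTimeMillis();
            private String lastShown = "";

            String textIfChanged(long now) {
                long diff = now - startTime;
                long min = diff / 60000;
                long sec = (diff % 60000) / 1000;
                String text = (min < 10 ? "0" : "") + min + ":" + (sec < 10 ? "0" : "") + sec;
                if (text.equals(lastShown)) return null; // skip redundant UI work
                lastShown = text;
                return text; // caller does _myLabel.setText(text); no deleteAll()/add()
            }
        }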

    Read the article

  • 3n+1 problem at UVa

    - by ShahradR
    Hello, I am having trouble with the first question in the Programming Challenges book by Skiena and Revilla. I keep getting a "Wrong Answer" error message, and I don't know why. I've ran my code against the sample input, and I keep getting the right answer. Anyways, if anyone could help me, I would really appreciate it :) Here is the problem URL: http://uva.onlinejudge.org/index.php?option=com_onlinejudge&Itemid=8&category=29&page=show_problem&problem=36 And here is the code:` import java.util.Scanner; public class Main { static Scanner kb = new Scanner(System.in); public static void main(String[] args) { while (kb.hasNext()) { long[] numericalInput = {0, 0}; long i = kb.nextLong(); long j = kb.nextLong(); if (i > j) { numericalInput[0] = i; numericalInput[1] = j; } else { numericalInput[0] = j; numericalInput[1] = i; } long maxIterations = 0; for (long n = numericalInput[0]; n <= numericalInput[1]; n += 1) { if (maxIterations < returnIterations(n)) maxIterations = returnIterations(n); } System.out.println(i + " " + j + " " + maxIterations); } System.exit(0); } public static long returnIterations(long num) { long iterations = 0; while (num != 1) { if (num % 2 == 0) num = num / 2; else num = 3 * num + 1; iterations += 1; } iterations += 1; return iterations; } } ` EDIT: I think the problem is with the output. I tried to make it accept all the input first and then display all the answers at once, but I didn't know the terminating condition. I resorted to this method, but I'm not sure that's what the judge wants...
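
    Whatever the judge is objecting to, the cycle-length computation itself is a natural candidate for memoization, since the same chains recur across queries and across overlapping ranges. A hedged sketch (an illustration, not a diagnosis of the Wrong Answer):

        import java.util.HashMap;
        import java.util.Map;

        // Illustrative sketch: cache 3n+1 cycle lengths to avoid recomputing chains.
        class CycleLength {
            private final Map<Long, Long> cache = new HashMap<Long, Long>();

            long of(long n) {
                if (n == 1) return 1;
                Long cached = cache.get(n);
                if (cached != null) return cached;
                long next = (n % 2 == 0) ? n / 2 : 3 * n + 1;
                long result = 1 + of(next);
                cache.put(n, result);
                return result;
            }
        }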

    Read the article

  • Why do I get Detached Entity exception when upgrading Spring Boot 1.1.4 to 1.1.5

    - by mmeany
    On updating Spring Boot from 1.1.4 to 1.1.5 a simple web application started generating detached entity exceptions. Specifically, a post authentication inteceptor that bumped number of visits was causing the problem. A quick check of loaded dependencies showed that Spring Data has been updated from 1.6.1 to 1.6.2 and a further check of the change log shows a couple of issues relating to optimistic locking, version fields and JPA issues that have been fixed. Well I am using a version field and it starts out as Null following recommendation to not set in the specification. I have produced a very simple test scenario where I get detached entity exceptions if the version field starts as null or zero. If I create an entity with version 1 however then I do not get these exceptions. Is this expected behaviour or is there still something amiss? Below is the test scenario I have for this condition. In the scenario the service layer that has been annotated @Transactional. Each test case makes multiple calls to the service layer - the tests are working with detached entities as this is the scenario I am working with in the full blown application. The test case comprises four tests: Test 1 - versionNullCausesAnExceptionOnUpdate() In this test the version field in the detached object is Null. This is how I would usually create the object prior to passing to the service. This test fails with a Detached Entity exception. I would have expected this test to pass. If there is a flaw in the test then the rest of the scenario is probably moot. Test 2 - versionZeroCausesExceptionOnUpdate() In this test I have set the version to value Long(0L). This is an edge case test and included because I found reference to Zero values being used for version field in the Spring Data change log. This test fails with a Detached Entity exception. Of interest simply because the following two tests pass leaving this as an anomaly. Test 3 - versionOneDoesNotCausesExceptionOnUpdate() In this test the version field is set to value Long(1L). Not something I would usually do, but considering the notes in the Spring Data change log I decided to give it a go. This test passes. Would not usually set the version field, but this looks like a work-around until I figure out why the first test is failing. Test 4 - versionOneDoesNotCausesExceptionWithMultipleUpdates() Encouraged by the result of test 3 I pushed the scenario a step further and perform multiple updates on the entity that started life with a version of Long(1L). This test passes. Reinforcement that this may be a useable work-around. The entity: package com.mvmlabs.domain; import javax.persistence.Column; import javax.persistence.Entity; import javax.persistence.GeneratedValue; import javax.persistence.GenerationType; import javax.persistence.Id; import javax.persistence.Table; import javax.persistence.Version; @Entity @Table(name="user_details") public class User { @Id @GeneratedValue(strategy=GenerationType.AUTO) private Long id; @Version private Long version; @Column(nullable = false, unique = true) private String username; @Column(nullable = false) private Integer numberOfVisits; public Long getId() { return id; } public void setId(Long id) { this.id = id; } public Long getVersion() { return version; } public void setVersion(Long version) { this.version = version; } public Integer getNumberOfVisits() { return numberOfVisits == null ? 
0 : numberOfVisits; } public void setNumberOfVisits(Integer numberOfVisits) { this.numberOfVisits = numberOfVisits; } public String getUsername() { return username; } public void setUsername(String username) { this.username = username; } } The repository: package com.mvmlabs.dao; import org.springframework.data.repository.CrudRepository; import com.mvmlabs.domain.User; public interface UserDao extends CrudRepository<User, Long>{ } The service interface: package com.mvmlabs.service; import com.mvmlabs.domain.User; public interface UserService { User save(User user); User loadUser(Long id); User registerVisit(User user); } The service implementation: package com.mvmlabs.service; import org.springframework.beans.factory.annotation.Autowired; import org.springframework.stereotype.Service; import org.springframework.transaction.annotation.Propagation; import org.springframework.transaction.annotation.Transactional; import org.springframework.transaction.support.TransactionSynchronizationManager; import com.mvmlabs.dao.UserDao; import com.mvmlabs.domain.User; @Service @Transactional(propagation=Propagation.REQUIRED, readOnly=false) public class UserServiceJpaImpl implements UserService { @Autowired private UserDao userDao; @Transactional(readOnly=true) @Override public User loadUser(Long id) { return userDao.findOne(id); } @Override public User registerVisit(User user) { user.setNumberOfVisits(user.getNumberOfVisits() + 1); return userDao.save(user); } @Override public User save(User user) { return userDao.save(user); } } The application class: package com.mvmlabs; import org.springframework.boot.SpringApplication; import org.springframework.boot.autoconfigure.EnableAutoConfiguration; import org.springframework.context.annotation.ComponentScan; import org.springframework.context.annotation.Configuration; @Configuration @ComponentScan @EnableAutoConfiguration public class Application { public static void main(String[] args) { SpringApplication.run(Application.class, args); } } The POM: <?xml version="1.0" encoding="UTF-8"?> <project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd"> <modelVersion>4.0.0</modelVersion> <groupId>com.mvmlabs</groupId> <artifactId>jpa-issue</artifactId> <version>0.0.1-SNAPSHOT</version> <packaging>jar</packaging> <name>spring-boot-jpa-issue</name> <description>JPA Issue between spring boot 1.1.4 and 1.1.5</description> <parent> <groupId>org.springframework.boot</groupId> <artifactId>spring-boot-starter-parent</artifactId> <version>1.1.5.RELEASE</version> <relativePath /> <!-- lookup parent from repository --> </parent> <dependencies> <dependency> <groupId>org.springframework.boot</groupId> <artifactId>spring-boot-starter-data-jpa</artifactId> </dependency> <dependency> <groupId>org.hsqldb</groupId> <artifactId>hsqldb</artifactId> <scope>runtime</scope> </dependency> <dependency> <groupId>org.springframework.boot</groupId> <artifactId>spring-boot-starter-test</artifactId> <scope>test</scope> </dependency> </dependencies> <properties> <project.build.sourceEncoding>UTF-8</project.build.sourceEncoding> <start-class>com.mvmlabs.Application</start-class> <java.version>1.7</java.version> </properties> <build> <plugins> <plugin> <groupId>org.springframework.boot</groupId> <artifactId>spring-boot-maven-plugin</artifactId> </plugin> </plugins> </build> </project> The application properties: spring.jpa.hibernate.ddl-auto: create 
spring.jpa.hibernate.naming_strategy: org.hibernate.cfg.ImprovedNamingStrategy spring.jpa.database: HSQL spring.jpa.show-sql: true spring.datasource.url=jdbc:hsqldb:file:./target/testdb spring.datasource.username=sa spring.datasource.password= spring.datasource.driverClassName=org.hsqldb.jdbcDriver The test case: package com.mvmlabs; import org.junit.Assert; import org.junit.Test; import org.junit.runner.RunWith; import org.springframework.beans.factory.annotation.Autowired; import org.springframework.boot.test.SpringApplicationConfiguration; import org.springframework.test.context.junit4.SpringJUnit4ClassRunner; import com.mvmlabs.domain.User; import com.mvmlabs.service.UserService; @RunWith(SpringJUnit4ClassRunner.class) @SpringApplicationConfiguration(classes = Application.class) public class ApplicationTests { @Autowired UserService userService; @Test public void versionNullCausesAnExceptionOnUpdate() throws Exception { User user = new User(); user.setUsername("Version Null"); user.setNumberOfVisits(0); user.setVersion(null); user = userService.save(user); user = userService.registerVisit(user); Assert.assertEquals(new Integer(1), user.getNumberOfVisits()); Assert.assertEquals(new Long(1L), user.getVersion()); } @Test public void versionZeroCausesExceptionOnUpdate() throws Exception { User user = new User(); user.setUsername("Version Zero"); user.setNumberOfVisits(0); user.setVersion(0L); user = userService.save(user); user = userService.registerVisit(user); Assert.assertEquals(new Integer(1), user.getNumberOfVisits()); Assert.assertEquals(new Long(1L), user.getVersion()); } @Test public void versionOneDoesNotCausesExceptionOnUpdate() throws Exception { User user = new User(); user.setUsername("Version One"); user.setNumberOfVisits(0); user.setVersion(1L); user = userService.save(user); user = userService.registerVisit(user); Assert.assertEquals(new Integer(1), user.getNumberOfVisits()); Assert.assertEquals(new Long(2L), user.getVersion()); } @Test public void versionOneDoesNotCausesExceptionWithMultipleUpdates() throws Exception { User user = new User(); user.setUsername("Version One Multiple"); user.setNumberOfVisits(0); user.setVersion(1L); user = userService.save(user); user = userService.registerVisit(user); user = userService.registerVisit(user); user = userService.registerVisit(user); Assert.assertEquals(new Integer(3), user.getNumberOfVisits()); Assert.assertEquals(new Long(4L), user.getVersion()); } } The first two tests fail with detached entity exception. The last two tests pass as expected. Now change Spring Boot version to 1.1.4 and rerun, all tests pass. Are my expectations wrong? Edit: This code saved to GitHub at https://github.com/mmeany/spring-boot-detached-entity-issue
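
    One way to take the new-versus-detached decision out of the version field's hands entirely (a hedged workaround sketch, not a diagnosis of the Spring Data 1.6.2 change) is to implement Spring Data's Persistable and answer isNew() from the generated id; the entity name here is illustrative, not the User class from the test case.

        import javax.persistence.Entity;
        import javax.persistence.GeneratedValue;
        import javax.persistence.GenerationType;
        import javax.persistence.Id;
        import javax.persistence.Transient;
        import javax.persistence.Version;
        import org.springframework.data.domain.Persistable;

        // Illustrative sketch: Spring Data asks the entity itself whether it is new,
        // so a pre-set @Version value no longer steers the persist-vs-merge choice.
        @Entity
        public class VersionedEntity implements Persistable<Long> {

            @Id
            @GeneratedValue(strategy = GenerationType.AUTO)
            private Long id;

            @Version
            private Long version;

            @Override
            public Long getId() {
                return id;
            }

            @Override
            @Transient
            public boolean isNew() {
                return id == null; // rely on the generated id, not the version field
            }
        }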

    Read the article
