Search Results

Search found 2468 results on 99 pages for 'splattered bits'.


  • How to manage multiple python versions ?

    - by Gyom
    short version: how can I get rid of the multiple-versions-of-python nightmare? long version: over the years, I've used several versions of python, and what is worse, several extensions to python (e.g. pygame, pylab, wxPython...). Each time it was on a different setup, with different OSes, sometimes different architectures (like my old PowerPC mac). Nowadays I'm using a mac (OSX 10.6 on x86-64) and it's a dependency nightmare each time I want to revive a script older than a few months. Python itself already comes in three different flavours in /usr/bin (2.5, 2.6, 3.1), but I had to install 2.4 from macports for pygame, something else (cannot remember what) forced me to install all three others from macports as well, so at the end of the day I'm the happy owner of seven (!) instances of python on my system. But that's not the problem; the problem is, none of them has the right (i.e. same set of) libraries installed, some of them are 32-bit, some 64-bit, and now I'm pretty much lost. For example, right now I'm trying to run a three-year-old script (not written by me) which used to use matplotlib/numpy to draw a real-time plot within a rectangle of a wxwidgets window. But I'm failing miserably: py26-wxpython from macports won't install, stock python has wxwidgets included but also has some conflict between 32 bits and 64 bits, and it doesn't have numpy... what a mess! Obviously, I'm doing things the wrong way. How do you usually cope with all that chaos?


  • Output reformatted text within a file included in a JSP

    - by javanix
    I have a few HTML files that I'd like to include via tags in my webapp. Within some of the files, I have pseudo-dynamic code - specially formatted bits of text that, at runtime, I'd like to be resolved to their respective bits of data in a MySQL table. For instance, the HTML file might include a line that says: Welcome, [username]. I want this resolved to (via a logged-in user's data): Welcome, [email protected]. This would be simple to do in a JSP file, but requirements dictate that the files will be created by people who know basic HTML, but not JSP. Simple text tags like this should be easy enough for me to explain to them, however. I have the code set up to do resolutions like that for strings, but can anyone think of a way to do it across files? I don't actually need to modify the file on disk - just load the content, modify it, and output it within the containing JSP file. I've been playing around with trying to load the files into strings via the Apache Commons readFileToString, but I can't figure out how to load files from a specific folder within the webapp's content directory without hardcoding it in and having to worry about it breaking if I deploy to a different system in the future.


  • OpenGL equivalent of GDI's HatchBrush or PatternBrush?

    - by Ptah- Opener of the Mouth
    I have a VB6 application (please don't laugh) which does a lot of drawing via BitBlt and the standard VB6 drawing functions. I am running up against performance issues (yes, I do the regular tricks like drawing to memory). So, I decided to investigate other ways of drawing, and have come upon OpenGL. I've been doing some experimenting, and it seems straightforward to do most of what I want; the application mostly only uses very simple drawing -- relatively large 2D rectangles of solid colors and such -- but I haven't been able to find an equivalent to something like a HatchBrush or PatternBrush. More specifically, I want to be able to specify a small monochrome pixel pattern, choose a color, and whenever I draw a polygon (or whatever), instead of it being solid, have it automatically tiled with that pattern, not translated or rotated or skewed or stretched, with the "on" bits of the pattern showing up in the specified color, and the "off" bits of the pattern left displaying whatever had been drawn under the area that I am now drawing on. Obviously I could do all the calculations myself. That is, instead of drawing as a polygon which will somehow automatically be tiled for me, I could calculate all of the lines or pixels or whatever that actually need to be drawn, then draw them as lines or pixels or whatever. But is there an easier way? Like in GDI, where you just say "draw this polygon using this brush"? I am guessing that "textures" might be able to accomplish what I want, but it's not clear to me (I'm totally new to this and the documentation I've found is not entirely obvious); it seems like textures might skew or translate or stretch the pattern, based upon the vertices of the polygon? Whereas I want the pattern tiled. Is there a way to do this, or something like it, other than brute force calculation of exactly the pixels/lines/whatever that need to be drawn? Thanks in advance for any help.
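
    One fixed-function OpenGL feature that behaves much like a monochrome pattern brush is the polygon stipple: a 32x32, 1-bit-per-pixel mask that is tiled in window coordinates, so it is not stretched, skewed or translated with the polygon's vertices; "on" bits take the current colour and "off" bits leave whatever was already drawn underneath. A minimal sketch, assuming the legacy (compatibility-profile) pipeline is acceptable; setHatchBrush and the 8x8 pattern argument are illustrative, not a GDI equivalent:

        #include <GL/gl.h>
        #include <string.h>

        // Expand an 8x8 monochrome brush (one byte per row, MSB = leftmost pixel)
        // into the 32x32 stipple OpenGL expects, and enable it.
        void setHatchBrush(const unsigned char pattern8x8[8])
        {
            GLubyte stipple[128];                                   // 32 rows x 4 bytes
            for (int row = 0; row < 32; ++row)
                memset(&stipple[row * 4], pattern8x8[row % 8], 4);  // repeat the row byte across 32 bits
            glPolygonStipple(stipple);
            glEnable(GL_POLYGON_STIPPLE);
        }

        void drawHatchedRect(float x0, float y0, float x1, float y1)
        {
            glColor3f(1.0f, 0.0f, 0.0f);   // colour used for the "on" bits
            glRectf(x0, y0, x1, y1);       // "off" bits are simply not drawn
        }

    Calling glDisable(GL_POLYGON_STIPPLE) afterwards restores solid fills; the pattern's alignment to window coordinates is what keeps it tiled rather than stretched.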


  • core dump during std::_List_node_base::unhook()

    - by Ron
    I have a program where std::list is used. The program uses threads which act on the std::list as producers and consumers. When a message is dealt with by the consumer, it is removed from the list using pop_front(). But, during pop_front, there is a core dump. The gdb trace is below; could you help me get some insight into this issue?

        (gdb) bt full
        #0 0xf7531d7b in std::_List_node_base::unhook () from /usr/lib/libstdc++.so.6
            No symbol table info available.
        #1 0x0805c600 in std::list ::_M_erase (this=0x806b08c, __position={_M_node = 0x8075308}) at /opt/target/usr/include/c++/4.2.0/bits/stl_list.h:1169
            __n = (class std::_List_node<myMsg> *) 0x0
        #2 0x0805c6af in std::list ::pop_front (this=0x806b08c) at /opt/target/usr/include/c++/4.2.0/bits/stl_list.h:750
            No locals.
        #3 0x0805afb6 in Base::run () at ../../src/Base.cc:342
            nSentBytes = 130
            tmpnm = {_vptr.myMsg = 0x80652c0, m_msg = 0x8075140 "{0130,MSG_TYPE=ND_FUNCTION,ORG_PNAME=P01vm01Ax,FUNCTION=LOG,PARAM_CNT=3,DATETIME=06/12/2010 02:59:26.187,LOGNAME=N,ENTRY=Debug 0 }", m_from = 0x8096ee0 "P01vm01Ax", m_to = 0x0, static m_logged = false, static m_pLogMutex = {_data = {_lock = 0, __count = 0, __owner = 0, __kind = 0, _nusers = 0, {_spins = 0, _list = {_next = 0x0}}}, __size = '\0' , __align = 0}}
            newMsg = {_vptr.myMsg = 0x80652c0, m_msg = 0x0, m_from = 0x0, m_to = 0x0, static m_logged = false, static m_pLogMutex = {_data = {_lock = 0, __count = 0, __owner = 0, __kind = 0, _nusers = 0, {_spins = 0, _list = {_next = 0x0}}}, __size = '\0' , __align = 0}}
            strBuffer = "{0440,MSG_TYPE=NG_FUNCTION,ORG_PNAME=mach01./opt/abc/VAvsk/abc/comp/DML/gendrs.pl.17560,DST_PNAME=P01vm01Ax,FUNCTION=DRS_REPLICATE,CAUSE_DML_ERROR=N,CORRUPT_DATA=N,CORRUPT_HEADER=N,DEBUG=Y,EXTENDED_RU"...
            fds = {{fd = 5, events = 1, revents = 0}}
            retval = 0
            iWaitTime = 0
        #4 0x0805b277 in startRun () at ../../src/Base.cc:454
            No locals.
        #5 0xf7effe7b in start_thread () from /lib/libpthread.so.0
            No symbol table info available.
        #6 0xf744d82e in clone () from /lib/libc.so.6
            No symbol table info available.
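
    The trace alone cannot prove it, but a very common cause of crashes inside _List_node_base::unhook() is unsynchronised concurrent access to the same std::list from the producer and consumer threads. A minimal sketch of guarding every access with one mutex (pthreads, to match the trace; GuardedQueue and its member names are illustrative, not taken from the program above):

        #include <list>
        #include <pthread.h>

        template <typename T>
        class GuardedQueue {
        public:
            GuardedQueue()  { pthread_mutex_init(&m_lock, NULL); }
            ~GuardedQueue() { pthread_mutex_destroy(&m_lock); }

            void push(const T &msg) {                 // producer side
                pthread_mutex_lock(&m_lock);
                m_items.push_back(msg);
                pthread_mutex_unlock(&m_lock);
            }

            bool pop(T &out) {                        // consumer side; false if the queue was empty
                pthread_mutex_lock(&m_lock);
                bool ok = !m_items.empty();
                if (ok) {
                    out = m_items.front();
                    m_items.pop_front();
                }
                pthread_mutex_unlock(&m_lock);
                return ok;
            }

        private:
            std::list<T>    m_items;
            pthread_mutex_t m_lock;
        };

    The important part is that the empty() check, front() and pop_front() happen under the same lock that protects push_back(); checking emptiness outside the lock leaves the race open.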


  • Core Data, Bindings, value transformers : crash when saving

    - by Gael
    Hi, I am trying to store a PNG image in a Core Data store backed by an SQLite database. Since I intend to use this database on an iPhone I can't store NSImage objects directly. I wanted to use bindings and an NSValueTransformer subclass to handle the transcoding from the NSImage (obtained by an image well on my GUI) to an NSData containing the PNG binary representation of the image. I wrote the following code for the value transformer:

        + (Class)transformedValueClass { return [NSImage class]; }

        + (BOOL)allowsReverseTransformation { return YES; }

        - (id)transformedValue:(id)value {
            if (value == nil) return nil;
            return [[[NSImage alloc] initWithData:value] autorelease];
        }

        - (id)reverseTransformedValue:(id)value {
            if (value == nil) return nil;
            if(![value isKindOfClass:[NSImage class]]) {
                NSLog(@"Type mismatch. Expecting NSImage");
            }
            NSBitmapImageRep *bits = [[value representations] objectAtIndex: 0];
            NSData *data = [bits representationUsingType:NSPNGFileType properties:nil];
            return data;
        }

    The model has a transformable property configured with this NSValueTransformer. In Interface Builder a table column and an image well are both bound to this property and both have the proper value transformer name (an image dropped in the image well shows up in the table column). The transformer is registered and called every time an image is added or a row is reloaded (checked with NSLog() calls). The problem arises when I am trying to save the managed objects. The console output shows the error message:

        [NSImage length]: unrecognized selector sent to instance 0x1004933a0

    It seems like Core Data is using the value transformer to obtain the NSImage back from the NSData and then tries to save the NSImage instead of the NSData. There are probably workarounds such as the one presented in this post, but I would really like to understand why my approach is flawed. Thanks in advance for your ideas and explanations.


  • Can I automatically throw descriptive exceptions with parameter values and class field information?

    - by Robert H.
    I honestly don't throw exceptions often. I catch them even less, ironically. I currently work in a shop where we let them bubble up to Avicode. For whatever reason, however, Avicode isn't configured to capture some of the critical bits I need when these exceptions come bouncing back to my attention. Specifically, I'd like to see the parameter values and the class's field data at the time of the exception. I'd guess that with the large suite of .NET services I could create a static method to crawl up the stack, gather these bits and store them in a string that I could stick in my exception message. I really don't care how long such a method would take to execute, as performance is no longer a concern when I hit one of these scenarios. If it's possible, I'm sure someone has done it. If that's the case, I'm having a hard time finding it. I think any search containing "exception" brings back too many results. Anyway, can this be done? If so, some examples or links would be great. Thanks in advance for your time, Robert


  • Find the Algorithm that generates the checksum

    - by knivmannen
    I have a sensing device that transmits a 6-byte message along with a 1-byte counter and supposedly a checksum. The data looks something like this (DATA | Counter | Checksum?):

        55 FF 00 00 EC FF | 60 | 1F

    The last four bits in the counter are always set to 0, i.e. those bits are probably not used. The last byte is assumed to be the checksum since it has a quite peculiar nature: it tends to randomly change as data changes. Now what I need is to find the algorithm to compute this checksum based on the DATA. What I have tried is all possible CRC-8 polynomials; for each polynomial I have tried to reflect the data, toggle it, initialise it with non-zeroes, etc. I've come to the conclusion that I am not dealing with a normal CRC algorithm. I have also tried some Fletcher and Adler methods without success, and XORed stuff back and forth, but still I have no clue how to generate the checksum. My biggest concern is: how is the counter used? The same data with a different counter value generates a different checksum. I have tried to include the counter in my computations but without any luck. Here are some other data samples:

        55 FF 00 00 F0 FF A0 38
        66 0B EA FF BF FF C0 CA
        5E 18 EA FF B7 FF 60 BD
        F6 30 16 00 FC FE 10 81

    One more thing that might be worth mentioning is that the last byte in the data only takes on the values FF or FE. Please, if you have any tips or tricks that I may try, post them here; I am truly desperate. Thanks
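
    Given the samples above (six data bytes, then the counter, then the presumed checksum byte), one way to chase the "how is the counter used" question is to feed the counter into the message and exhaustively search one checksum family at a time. A minimal sketch for non-reflected CRC-8 variants; the byte values are copied from the question, everything else (function names, the parameter space searched) is illustrative:

        #include <stdint.h>
        #include <stdio.h>

        // Plain (non-reflected) CRC-8 over len bytes.
        static uint8_t crc8(const uint8_t *data, int len, uint8_t poly, uint8_t init)
        {
            uint8_t crc = init;
            for (int i = 0; i < len; i++) {
                crc ^= data[i];
                for (int b = 0; b < 8; b++)
                    crc = (crc & 0x80) ? (uint8_t)((crc << 1) ^ poly) : (uint8_t)(crc << 1);
            }
            return crc;
        }

        int main(void)
        {
            // 6 data bytes + counter, followed by the observed checksum byte.
            static const uint8_t samples[][8] = {
                { 0x55, 0xFF, 0x00, 0x00, 0xEC, 0xFF, 0x60, 0x1F },
                { 0x55, 0xFF, 0x00, 0x00, 0xF0, 0xFF, 0xA0, 0x38 },
                { 0x66, 0x0B, 0xEA, 0xFF, 0xBF, 0xFF, 0xC0, 0xCA },
                { 0x5E, 0x18, 0xEA, 0xFF, 0xB7, 0xFF, 0x60, 0xBD },
                { 0xF6, 0x30, 0x16, 0x00, 0xFC, 0xFE, 0x10, 0x81 },
            };
            const int nsamples = sizeof(samples) / sizeof(samples[0]);

            for (int poly = 0; poly < 256; poly++)
                for (int init = 0; init < 256; init++) {
                    // Derive the final XOR from the first sample, then check the rest.
                    uint8_t xorout = crc8(samples[0], 7, (uint8_t)poly, (uint8_t)init) ^ samples[0][7];
                    int ok = 1;
                    for (int s = 1; s < nsamples && ok; s++)
                        ok = ((crc8(samples[s], 7, (uint8_t)poly, (uint8_t)init) ^ xorout) == samples[s][7]);
                    if (ok)
                        printf("candidate: poly=%02X init=%02X xorout=%02X\n", poly, init, xorout);
                }
            return 0;
        }

    If nothing turns up, the same harness can be pointed at reflected CRCs, Fletcher/Adler variants or plain sums, one family at a time.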


  • How can I get the user object from a service in Symfony2

    - by pogo
    My site is using a third-party service for authentication as well as other bits of functionality, so I have set up a service that does all the API calls for me, and it's this service that needs to be able to access the user object. I've tried injecting the security.context service into my API service but I get a ServiceCircularReferenceException, because my user authentication provider references the API service (it has to in order to authenticate the user). So I get a chain of security.context -> authentication provider -> user provider -> API service -> security.context. I'm struggling to think of another way of getting the user object and I can't see any obvious way of splitting up this chain. My configs are all defined in config.yml; here are the relevant bits:

        myapp.database:
            class: Pogo\MyAppBundle\Service\DatabaseService
            arguments:
                siteid: %siteid%
                entityManager: "@doctrine.orm.entity_manager"
        myapp.apiservice:
            class: Pogo\MyAppBundle\Service\TicketingService
            arguments:
                entityManager: "@myapp.database"
        myapp.user_provider:
            class: Pogo\MyAppBundle\Service\APIUserProvider
            arguments:
                entityManager: "@myapp.database"
                ticketingAdapter: "@myapp.apiservice"
                securityContext: "@security.context"
        myapp.authenticationprovider:
            class: Pogo\MyAppBundle\Service\APIAuthenticationProvider
            arguments:
                userChecker: "@security.user_checker"
                encoderFactory: "@security.encoder_factory"
                userProvider: "@myapp.user_provider"

    myapp.user_provider is the service that I've defined as my user provider service in security.yml, which I presume is how it's being referenced by security.context.


  • Project management and bundling dependencies

    - by Joshua
    I've been looking for ways to learn about the right way to manage a software project, and I've stumbled upon the following blog post. I've learned some of the things mentioned the hard way, others make sense, and yet others are still unclear to me. To sum up, the author lists a bunch of features of a project and how much those features contribute to a project's 'suckiness', for lack of a better term. You can find the full article here: http://spot.livejournal.com/308370.html

    In particular, I don't understand the author's stance on bundling dependencies with your project. These are:

    == Bundling ==

    Your source only comes with other code projects that it depends on [ +20 points of FAIL ]

    Why is this a problem (especially given the last point)?

    If your source code cannot be built without first building the bundled code bits [ +10 points of FAIL ]

    Doesn't this necessarily have to be the case for software built against 3rd party libs? Your code needs that other code to be compiled into its library before the linker can work?

    If you have modified those other bundled code bits [ +40 points of FAIL ]

    If this is necessary for your project, then it naturally follows that you've bundled said code with yours. If you want to customize a build of some lib, say wxWidgets, you'll have to edit that project's build scripts to build the library that you want. Subsequently, you'll have to publish those changes to people who wish to build your code, so why not use a high-level make script with the params already written in, and distribute that? Furthermore (especially in a Windows env), if your code base is dependent on a particular version of a lib (that you also need to custom-compile for your project), wouldn't it be easier to give the user the code yourself (because in this case, it is unlikely that the user will already have the correct version installed)?

    So how would you respond to these comments, and what points may I be failing to take into consideration? Would you agree or disagree with the author's take (or mine), and why?


  • Using an SHA1 with Microsoft CAPI

    - by Erik Jõgi
    I have an SHA1 hash and I need to sign it. The CryptSignHash() method requires a HCRYPTHASH handle for signing. I create it and as I have the actual hash value already then set it: CryptCreateHash(cryptoProvider, CALG_SHA1, 0, 0, &hash); CryptSetHashParam(hash, HP_HASHVAL, hashBytes, 0); The hashBytes is an array of 20 bytes. However the problem is that the signature produced from this HCRYPTHASH handle is incorrect. I traced the problem down to the fact that CAPI actually doesn't use all 20 bytes from my hashBytes array. For some reason it thinks that SHA1 is only 4 bytes. To verify this I wrote this small program: HCRYPTPROV cryptoProvider; CryptAcquireContext(&cryptoProvider, NULL, NULL, PROV_RSA_FULL, 0); HCRYPTHASH hash; HCRYPTKEY keyForHash; CryptCreateHash(cryptoProvider, CALG_SHA1, keyForHash, 0, &hash); DWORD hashLength; CryptGetHashParam(hash, HP_HASHSIZE, NULL, &hashLength, 0); printf("hashLength: %d\n", hashLength); And this prints out hashLength: 4 ! Can anyone explain what I am doing wrong or why Microsoft CAPI thinks that SHA1 is 4 bytes (32 bits) instead of 20 bytes (160 bits).
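
    A hedged note on the verification program, not from the original post: when pbData is NULL, CryptGetHashParam only reports the buffer size needed for the requested parameter, and HP_HASHSIZE is itself returned as a DWORD, so the printed 4 is sizeof(DWORD) rather than the hash length. A minimal sketch that reads the value out (and passes 0 to CryptCreateHash instead of an uninitialised key handle), under the assumption that this is what the test was meant to show:

        #include <windows.h>
        #include <wincrypt.h>
        #include <stdio.h>

        int main(void)
        {
            HCRYPTPROV prov = 0;
            HCRYPTHASH hash = 0;
            if (!CryptAcquireContext(&prov, NULL, NULL, PROV_RSA_FULL, CRYPT_VERIFYCONTEXT))
                return 1;
            if (!CryptCreateHash(prov, CALG_SHA1, 0, 0, &hash))        // 0: not a keyed hash
                return 1;

            DWORD hashSize = 0;
            DWORD len = sizeof(hashSize);
            if (CryptGetHashParam(hash, HP_HASHSIZE, (BYTE *)&hashSize, &len, 0))
                printf("hash size: %lu bytes\n", (unsigned long)hashSize);   // 20 for SHA-1

            CryptDestroyHash(hash);
            CryptReleaseContext(prov, 0);
            return 0;
        }

    If the 20-byte value set with HP_HASHVAL still produces a bad signature, the other usual suspects are the byte order CAPI expects and the parameters passed to CryptSignHash, but that is beyond what this snippet checks.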


  • What OOP pattern to use when only adding new methods, not data?

    - by Jonathon Reinhart
    Hello everyone... In my app, I have to deal with several different "parameters", which derive from an IParameter interface, and also a ParamBase abstract base class. I currently have two different parameter types, call them FooParameter and BarParameter, which both derive from ParamBase. Obviously, I can treat them both as IParameters when I need to deal with them generically, or detect their specific type when I need to handle their specific functionality. My question lies in specific FooParameters. I currently have a few specific ones with their own classes which derive from FooParameter; we'll call them FP12, FP13, FP14, etc. These all have certain characteristics which make me treat them differently in the UI. (Most have names associated with the individual bits, or ranges of bits.) Note that these specific, derived FPs have no additional data associated with them, only properties (which refer to the same data in different ways) or methods. Now, I'd like to keep all of these parameters in a Dictionary<String, IParameter> for easy generic access. The problem is, if I want to refer to a specific one with the special GUI functions, I can't write: FP12 fp12 = (FP12)paramList["FP12"] because you can't downcast to a derived type (rightfully so). But in my case, I didn't add any data, so the cast would theoretically work. What type of programming model should I be using instead? Thanks!


  • Cannot call DLL import entry in C# from C++ project. EntryPointNotFoundException

    - by kriau
    I'm trying to call from C# a function in a custom DLL written in C++. However I'm getting the following warning during code analysis and error at runtime:

    Warning: CA1400 : Microsoft.Interoperability : Correct the declaration of 'SafeNativeMethods.SetHook()' so that it correctly points to an existing entry point in 'wi.dll'. The unmanaged entry point name currently linked to is SetHook.

    Error: System.EntryPointNotFoundException was unhandled. Unable to find an entry point named 'SetHook' in DLL 'wi.dll'.

    Both projects, wi.dll and the C# exe, have been compiled into the same DEBUG folder, and both files reside there. There is only one file with the name wi.dll in the whole file system. The C++ function definition looks like:

        #define WI_API __declspec(dllexport)
        bool WI_API SetHook();

    I can see the exported function using Dependency Walker: as decorated: bool SetHook(void); as undecorated: ?SetHook@@YA_NXZ. The C# DLL import looks like (I've defined these lines using CLRInsideOut from MSDN magazine):

        [DllImport("wi.dll", EntryPoint = "SetHook", CallingConvention = CallingConvention.Cdecl)]
        [return: MarshalAsAttribute(UnmanagedType.I1)]
        internal static extern bool SetHook();

    I've tried without the EntryPoint and CallingConvention definitions as well. Both projects are 32-bit; I'm using Windows 7 64-bit, VS 2010 RC. I believe that I have simply overlooked something... Thanks in advance.
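
    A hedged sketch of one common cause, not from the original post: the ?SetHook@@YA_NXZ name reported by Dependency Walker is a C++-mangled export, and P/Invoke cannot match it against the plain name "SetHook". Giving the export C linkage keeps the undecorated name in the export table; the body below is a placeholder, and __cdecl keeps it consistent with the CallingConvention.Cdecl already used on the C# side:

        // In the C++ DLL (compiled as C++): export with C linkage so the
        // export table lists "SetHook" instead of a mangled name.
        extern "C" __declspec(dllexport) bool __cdecl SetHook()
        {
            return true;   // placeholder body for the sketch
        }

    An alternative with the same effect is to keep the existing declaration and list SetHook in a .def file, which also publishes the undecorated name.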


  • protocol parsing in c

    - by nomad.alien
    I have been playing around with trying to implement some protocol decoders, but each time I run into a "simple" problem and I feel the way I am solving the problem is not optimal and there must be a better way to do things. I'm using C. Currently I'm using some canned data and reading it in as a file, but later on it would be via TCP or UDP. Here's the problem. I'm currently playing with a binary protocol at work. All fields are 8 bits long. The first field (8 bits) is the packet type. So I read in the first 8 bits and, using a switch/case, I call a function to read in the rest of the packet, as I then know its size/structure. BUT... some of these packets have nested packets inside them, so when I encounter that specific packet I then have to read another 8-16 bytes and have another switch/case to see what the next packet type is, and on and on. (Luckily the packets are only nested 2 or 3 deep.) Only once I have the whole packet decoded can I hand it over to my state machine for processing. I guess this can be a more general question as well: how much data do you have to read at a time from the socket? As much as possible? As much as what is "similar" in the protocol headers? So even though this protocol is fairly basic, my code is a whole bunch of switch/case statements and I do a lot of reading from the file/socket, which I feel is not optimal. My main aim is to make this decoder as fast as possible. To the more experienced people out there, is this the way to go or is there a better way which I just haven't figured out yet? Any elegant solution to this problem?
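
    One way to cut down the per-field reads, sketched under the assumption that each packet type has a known fixed length: read a whole packet into a buffer first, then let the decoders (including the nested ones) walk that buffer in memory instead of issuing another read per nesting level. The type codes, lengths and function names below are illustrative only:

        #include <stdint.h>
        #include <stddef.h>

        enum { PKT_A = 0x01, PKT_B = 0x02 };            /* example type codes */

        size_t payload_length(uint8_t type)             /* length known from the type byte */
        {
            switch (type) {
            case PKT_A: return 8;
            case PKT_B: return 16;                      /* payload starts with a nested packet */
            default:    return 0;
            }
        }

        /* Decode one packet already held in buf; returns bytes consumed, 0 if incomplete. */
        size_t decode_packet(const uint8_t *buf, size_t len)
        {
            if (len < 1)
                return 0;
            uint8_t type = buf[0];
            size_t need = 1 + payload_length(type);
            if (len < need)
                return 0;                               /* wait for more bytes from the socket */
            if (type == PKT_B)
                decode_packet(buf + 1, need - 1);       /* nested packet parsed from the same buffer */
            return need;
        }

    Over TCP this also falls out naturally: append whatever recv() returns to the buffer, call decode_packet in a loop, and only discard bytes once a full packet has been consumed.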


  • Using an SHA1 with Microsoft CAPI

    - by Erik Jõgi
    Hello, I have an SHA1 hash and I need to sign it. The CryptSignHash() method requires a HCRYPTHASH handle for signing. I create it and as I have the actual hash value already then set it: CryptCreateHash(cryptoProvider, CALG_SHA1, 0, 0, &hash); CryptSetHashParam(hash, HP_HASHVAL, hashBytes, 0); The hashBytes is an array of 20 bytes. However the problem is that the signature produced from this HCRYPTHASH handle is incorrect. I traced the problem down to the fact that CAPI actually doesn't use all 20 bytes from my hashBytes array. For some reason it thinks that SHA1 is only 4 bytes. To verify this I wrote this small program: HCRYPTPROV cryptoProvider; CryptAcquireContext(&cryptoProvider, NULL, NULL, PROV_RSA_FULL, 0); HCRYPTHASH hash; HCRYPTKEY keyForHash; CryptCreateHash(cryptoProvider, CALG_SHA1, keyForHash, 0, &hash); DWORD hashLength; CryptGetHashParam(hash, HP_HASHSIZE, NULL, &hashLength, 0); printf("hashLength: %d\n", hashLength); And this prints out hashLength: 4 ! Can anyone explain what I am doing wrong or why Microsoft CAPI thinks that SHA1 is 4 bytes (32 bits) instead of 20 bytes (160 bits). Thank you.


  • OpenSocial create activity from submit click

    - by russp
    Hi, I'm "playing with OpenSocial" and think I get a lot of it (well, thanks to Google's bits), but one question if I may. Creating an activity: let's say I have a form like this (simple):

        <form>
            <input type="text" name="" id="testinput" value=""/>
            <input type="submit" name="" id="" value=""/>
        </form>

    And I want to post the value of the text field (and/or a message, i.e. "just posted") to the user's activity. Do I use a function like this?

        function createActivity() {
            if (viewer) {
                var activity = opensocial.newActivity({ title: viewer.getDisplayName() + ' VALUE FROM FORM '});
                opensocial.requestCreateActivity(activity, "HIGH", function() { setTimeout(initAllData,1000); });
            }
        };

    If so, how do I pass the text field value to it - is it something like this?

        var testinput = document.getElementById("testinput");

    so the function may look like:

        function createActivity() {
            if (viewer) {
                var activity = opensocial.newActivity({ title: viewer.getDisplayName() + testinput });
                opensocial.requestCreateActivity(activity, "HIGH", function() { setTimeout(initAllData,1000); });
            }
        };

    And how do I trigger the function by using the submit button? In my basic jQuery I would use

        $('#submitID').submit(function(){ 'bits in here '});

    Is it as simple as that, i.e. use the createActivity function and it will use the OS framework to "post" to the activity.xml?


  • How to maintain long-lived python projects w.r.t. dependencies and python versions ?

    - by Gyom
    short version: how can I get rid of the multiple-versions-of-python nightmare? long version: over the years, I've used several versions of python, and what is worse, several extensions to python (e.g. pygame, pylab, wxPython...). Each time it was on a different setup, with different OSes, sometimes different architectures (like my old PowerPC mac). Nowadays I'm using a mac (OSX 10.6 on x86-64) and it's a dependency nightmare each time I want to revive a script older than a few months. Python itself already comes in three different flavours in /usr/bin (2.5, 2.6, 3.1), but I had to install 2.4 from macports for pygame, something else (cannot remember what) forced me to install all three others from macports as well, so at the end of the day I'm the happy owner of seven (!) instances of python on my system. But that's not the problem; the problem is, none of them has the right (i.e. same set of) libraries installed, some of them are 32-bit, some 64-bit, and now I'm pretty much lost. For example, right now I'm trying to run a three-year-old script (not written by me) which used to use matplotlib/numpy to draw a real-time plot within a rectangle of a wxwidgets window. But I'm failing miserably: py26-wxpython from macports won't install, stock python has wxwidgets included but also has some conflict between 32 bits and 64 bits, and it doesn't have numpy... what a mess! Obviously, I'm doing things the wrong way. How do you usually cope with all that chaos?


  • How do I implement a collection in Scala 2.8?

    - by Simon Reinhardt
    In trying to write an API I'm struggling with Scala's collections in 2.8(.0-beta1). Basically what I need is to write something that:

    adds functionality to immutable sets of a certain type

    where all methods like filter and map return a collection of the same type without having to override everything (which is why I went for 2.8 in the first place)

    where all collections you gain through those methods are constructed with the same parameters the original collection had (similar to how SortedSet hands through an ordering via implicits)

    which is still a trait in itself, independent of any set implementations.

    Additionally I want to define a default implementation, for example based on a HashSet. The companion object of the trait might use this default implementation. I'm not sure yet if I need the full power of builder factories to map my collection type to other collection types. I read the paper on the redesign of the collections API but it seems like things have changed a bit since then and I'm missing some details in there. I've also digged through the collections source code but I'm not sure it's very consistent yet. Ideally what I'd like to see is either a hands-on tutorial that tells me step-by-step just the bits that I need or an extensive description of all the details so I can judge myself which bits I need. I liked the chapter on object equality in "Programming in Scala". :-)


  • Java "Pool" of longs or Oracle sequence with reusable values

    - by Anthony Accioly
    Several months ago I implemented a solution to choose unique values from a range between 1 and 65535 (16 bits). This range is used to generate unique Route Target suffixes, which for this customer's massive network (it's a huge ISP) are a very disputed resource, so any free index needs to become immediately available to the end user. To tackle this requirement I used a BitSet: allocate an RT index with set and deallocate a suffix with clear. The method nextClearBit() can find the next available index. I handle synchronization / concurrency issues manually. This works pretty well for a small range... The entire index is small (around 10k), it is blazing fast and can be easily serialized into a Blob field. The problem is, some new devices can handle RTs of 32 bits (range 1 to 4294967296), which can't be managed with a BitSet (it would, by itself, consume around 600Mb, plus be limited to the int range). Even with this massive range available, the client still wants to free available Route Targets for the end user, mainly because the lowest ones (up to 65535) - which are compatible with old routers - are being heavily disputed. Before I tell the customer that this is impossible and that he will have to make do with my reusable index for lower RTs (up to 65550) and use a database sequence for the other ones (which means that when the user frees one of those Route Targets, it will not become available again), would anyone shed some light? Maybe some kind soul already implemented a high-performance number pool for Java (6 if it matters), or I am missing a killer feature of Oracle database (11gR2 if it matters)... Wishful thinking. Thank you very much in advance.


  • How to improve Visual C++ compilation times?

    - by dtrosset
    I am compiling 2 C++ projects in a buildbot, on each commit. Both are around 1000 files; one is 100 kloc, the other 170 kloc. Compilation times are very different between gcc (4.4) and Visual C++ (2008). Visual C++ compilations for one project take on the order of 20 minutes. They cannot take advantage of the multiple cores because one project depends on the other. In the end, a full compilation of both projects, in Debug and Release, in 32 and 64 bits, takes more than 2 1/2 hours. gcc compilations for one project take on the order of 4 minutes. They can be parallelized on the 4 cores and take around 1 min 10 secs. All 8 builds for the 4 versions (Debug/Release, 32/64 bits) of the 2 projects are compiled in less than 10 minutes. What is happening with Visual C++ compilation times? They are basically 5 times slower. What is the average time that can be expected to compile a C++ kloc? Mine are 7 s/kloc with VC++ and 1.4 s/kloc with gcc. Can anything be done to speed up compilation times on Visual C++?


  • How to Implement Rich Document Editor for iPhone

    - by benjismith
    I'm just getting started on a new iPhone/iPad development project, and I need to display a document with rich styled text (potentially with embedded images). The user will touch the document, dragging to highlight individual words or multiline text spans. When the text is highlighted, a context menu will appear, letting them change the color of highlighting or add margin notes (or other various bits of structured metadata). If you're familiar with adding comments to a Word document (or annotating a PDF), then this is the same sort of thing. But in my case, the typical user will spend many many hours within the app, adding thousands (in some cases, tens of thousands) of small annotations to the central document. All of those bits of metadata will be stored locally awaiting synchronization with a remote web service. I've read other pieces of advice, where developers suggest creating a UIWebView control and passing it an HTML string. But that seems kind of clunky, especially with all the context-sensitivity that I want to include. Anyhow, I'm brand new to iPhone development and Objective-C, though I have ten years of software development experience, using a variety of languages on many different platforms, so I'm not worried about getting my hands dirty writing new functionality from scratch. But if anyone has experience building a similar kind of component, I'm interested in hearing strategies for enabling that kind of rich document markup and annotation.


  • C++, using one byte to store two variables

    - by 2di
    Hi All I am working on representation of the chess board, and I am planning to store it in 32 bytes array, where each byte will be used to store two pieces. (That way only 4 bits are needed per piece) Doing it in that way, results in a overhead for accessing particular index of the board. Do you think that, this code can be optimised or completely different method of accessing indexes can be used? c++ char getPosition(unsigned char* c, int index){ //moving pointer c+=(index>>1); //odd number if (index & 1){ //taking right part return *c & 0xF; }else { //taking left part return *c>>4; } } void setValue(unsigned char* board, char value, int index){ //moving pointer board+=(index>>1); //odd number if (index & 1){ //replace right part //save left value only 4 bits *board = (*board & 0xF0) + value; }else { //replacing left part *board = (*board & 0xF) + (value<<4); } } int main() { char* c = (char*)malloc(32); for (int i = 0; i < 64 ; i++){ setValue((unsigned char*)c, i % 8,i); } for (int i = 0; i < 64 ; i++){ cout<<(int)getPosition((unsigned char*)c, i)<<" "; if (((i+1) % 8 == 0) && (i > 0)){ cout<<endl; } } return 0; } I am equally interested in your opinions regarding chess representations, and optimisation of the method above, as a stand alone problem. Thanks a lot


  • Data conversion from accelerometer

    - by mrigendra
    Hi all

    I am working on an accelerometer, the BMA220, and its datasheet says that data is in two's complement form. So what I had to do was get that 8-bit data into an 8-bit signed char and be done. The BMA220 has an 8-bit register of which the first 6 bits are data and the last two are zero.

        void properdata(int16_t *msgData)
        {
            printf("\nin proper data\n");
            int16_t temp, i;
            for(i=0; i<3; i++)
            {
                temp = *(msgData + i);
                printf("temp = %d sense = %d\n", temp, sense);
                temp = temp >> 2;     // only 6 bits data
                temp = temp / sense;  //decimal value * .0625 = value in g
                printf("temp = %d\n", temp);
            }
        }

    In this program I am taking the data in an unsigned variable msgData and doing all the calculations on a signed variable. I just need to know if this is the correct way to convert the data.
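
    For the two's-complement part specifically, a sketch of extracting the signed 6-bit sample from the raw register byte without relying on implementation-defined right shifts of negative values; decode_sample and the example register values are illustrative, not taken from the BMA220 datasheet:

        #include <stdint.h>
        #include <stdio.h>

        // The sample occupies the top 6 bits of the 8-bit register, two's complement.
        int decode_sample(uint8_t reg)
        {
            int v = reg >> 2;        // keep the 6 data bits (0..63)
            if (v & 0x20)            // sign bit of the 6-bit field set?
                v -= 64;             // sign-extend: 32..63 becomes -32..-1
            return v;                // result is in -32..31
        }

        int main(void)
        {
            printf("%d\n", decode_sample(0xFC));   // 6-bit 0x3F -> -1
            printf("%d\n", decode_sample(0x40));   // 6-bit 0x10 -> 16
            return 0;
        }

    Scaling (the 0.0625 g per count mentioned in the comment) is then best applied to the signed value, ideally in floating point, so that small readings are not truncated to zero by integer division.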


  • Setting up a NAS with Citrix XenServer

    - by JasonBrown
    Just a quick query for anyone who's worked with XenServer. I want to set up a NAS at home but with virtualization (I've looked into VMware Server and KVM; I quite like KVM!) but I was told about XenServer 5.5. I have commodity hardware (ASUS board, dual-core 2.66Ghz CPU with 8Gb RAM), and I need to set up a fileserver to house about 2-3Tb worth of data (big chunky video - not porn!). I need to run Linux (preferably CentOS) but also run Windows virtualised for testing. I was thinking of going the XenServer route, however I want to be able to offer a VM access to the 2-3Tb of HDDs (5 HDD drives) directly so it can do its thing (maybe using FreeNAS). Would this be possible with XenServer? Or will I have to do more work - and another box - to offer this? My goals are to use FreeNAS (ZFS!) for the fileserver, CentOS for SVN and other bits we need to use (LAMP stack), and Windows for our win32 testing, all on one box. I see these iSCSI target bits and get scared.


  • C read X bytes from a file, padding if needed

    - by Hunter McMillen
    I am trying to read in an input file 64 bits at a time, then do some calculations on those 64 bits. The problem is that I need to convert the ASCII text to hexadecimal characters. I have searched around but none of the answers posted seem to work for my situation. Here is what I have:

        int main(int argc, int * argv)
        {
            char buffer[9];
            FILE *f;
            unsigned long long test;
            if(f = fopen("input2.txt", "r"))
            {
                while( fread(buffer, 8, 1, f) != 0) //while not EOF read 8 bytes at a time
                {
                    buffer[8] = '\0';
                    test = strtoull(buffer, NULL, 16); //interpret as hex
                    printf("%llu\n", test);
                    printf("%s\n", buffer);
                }
                fclose(f);
            }
        }

    For an input like this: "testing string to hex conversion" I get results like this:

        0
        testing
        0
        string t
        0
        o hex co
        0
        nversion

    Where I would expect:

        74 65 73 74 69 6e 67 20 <- "testing" in hex
        testing
        73 74 72 69 6e 67 20 74 <- "string t" in hex
        string t
        6f 20 68 65 78 20 63 6f <- "o hex co" in hex
        o hex co
        6e 76 65 72 73 69 6f 6e <- "nversion" in hex
        nversion

    Can anyone see where I misstepped?
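
    For comparison, a minimal sketch of producing the per-byte hex dump shown in the expected output: strtoull() parses text that spells a hexadecimal number, whereas the expected lines are each byte's numeric value printed in hex, which printf's %02x does directly. The file name is reused from the question; everything else is illustrative:

        #include <stdio.h>

        int main(void)
        {
            unsigned char buffer[8];
            size_t n;
            FILE *f = fopen("input2.txt", "rb");
            if (!f)
                return 1;
            while ((n = fread(buffer, 1, sizeof buffer, f)) > 0) {
                for (size_t i = 0; i < n; i++)
                    printf("%02x ", buffer[i]);                      // hex value of each byte
                printf("\n%.*s\n", (int)n, (const char *)buffer);    // the chunk as text
            }
            fclose(f);
            return 0;
        }

    Reading with fread(buffer, 1, 8, f) instead of fread(buffer, 8, 1, f) also means a short final chunk is still reported, so a file whose length is not a multiple of 8 is not silently dropped.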


  • Define a positive number to be isolated if none of the digits in its square are in its cube [closed]

    - by proglaxmi
    Define a positive number to be isolated if none of the digits in its square are in its cube. For example, 163 is an isolated number because 163*163 = 26569 and 163*163*163 = 4330747, and the square does not contain any of the digits 0, 3, 4 and 7, which are the digits used in the cube. On the other hand, 162 is not an isolated number because 162*162 = 26244 and 162*162*162 = 4251528, and the digits 2 and 4 which appear in the square are also in the cube. Write a function named isIsolated that returns 1 if its argument is an isolated number, returns 0 if it is not an isolated number, and returns -1 if it cannot determine whether it is isolated or not (see the note below). The function signature is:

        int isIsolated(long n)

    Note that the type of the input parameter is long. The maximum positive number that can be represented as a long is 63 bits long. This allows us to test numbers up to 2,097,151 because the cube of 2,097,151 can be represented as a long. However, the cube of 2,097,152 requires more than 63 bits to represent it and hence cannot be computed without extra effort. Therefore, your function should test if n is larger than 2,097,151 and return -1 if it is. If n is less than 1 your function should also return -1. Hint: n % 10 is the rightmost digit of n, and n = n/10 shifts the digits of n one place to the right. The first 10 isolated numbers are:

        n     n*n     n*n*n
        2     4       8
        3     9       27
        8     64      512
        9     81      729
        14    196     2744
        24    576     13824
        28    784     21952
        34    1156    39304
        58    3364    195112
        63    3969    250047
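
    A sketch of one possible isIsolated implementation following the hint above: collect the digits of the square and the cube as bit masks and check that the two sets do not intersect. It assumes a platform where long is 64 bits wide, as the problem statement does:

        #include <stdio.h>

        int isIsolated(long n)
        {
            if (n < 1 || n > 2097151)    // n*n*n must still fit in a 63-bit long
                return -1;

            long square = n * n;
            long cube = square * n;

            int squareDigits = 0, cubeDigits = 0;
            do { squareDigits |= 1 << (square % 10); square /= 10; } while (square > 0);
            do { cubeDigits   |= 1 << (cube % 10);   cube /= 10;   } while (cube > 0);

            return (squareDigits & cubeDigits) ? 0 : 1;
        }

        int main(void)
        {
            printf("%d %d\n", isIsolated(163), isIsolated(162));   // expect: 1 0
            return 0;
        }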

