Search Results

Search found 12047 results on 482 pages for 'general debugging tidbits'.


  • dynamically create class in scala, should I use interpreter?

    - by Phil
    Hi, I want to create a class at run time in Scala. For now, just consider a simple case where I want to make the equivalent of a Java bean with some attributes; I only know these attributes at run time. How can I create the Scala class? I am willing to create it from a Scala source file if there is a way to compile and load it at run time, and I may want to, as I sometimes have a complex function I want to add to the class. How can I do it?

    I worry that the Scala interpreter I have read about sandboxes the interpreted code it loads, so that it won't be available to the general application hosting the interpreter. If that is the case, then I wouldn't be able to use the dynamically loaded Scala class.

    Anyway, the question is: how can I dynamically create a Scala class at run time and use it in my application? The best case is to load it from a Scala source file at run time, something like interpreterSource("file.scala"), with the result loaded into my current runtime; the second best case is creation by calling methods, i.e. createClass(...), to create it at run time. Thanks, Phil

    Read the article

  • OpenGL-ES: Change (multiply) color when using color arrays?

    - by arberg
    Following the ideas in "OpenGL ES iPhone - drawing anti aliased lines", I am trying to draw stroked anti-aliased lines, and I have been successful so far. After a line is drawn by the finger, I want to fade the path, that is, I need to change the opacity (color) of the entire path. I have computed a large array of vertex positions, vertex colors, texture coordinates, and indices, and then I give these to OpenGL, but I would like to reduce the opacity of all the drawn triangles without having to change each of the color coordinates. Normally I would use glColor4f(r,g,b,a) before calling drawElements, but it has no effect because of the color array. I am working on Android, but I believe that shouldn't make a big difference, as long as it is OpenGL ES 1.1 (or 1.0). I have the following code:

        gl.glEnable(GL10.GL_BLEND);
        gl.glBlendFunc(GL10.GL_ONE, GL10.GL_ONE_MINUS_SRC_ALPHA);
        gl.glEnableClientState(GL10.GL_COLOR_ARRAY);
        gl.glShadeModel(GL10.GL_SMOOTH);
        gl.glEnableClientState(GL10.GL_VERTEX_ARRAY);
        gl.glEnableClientState(GL10.GL_TEXTURE_COORD_ARRAY);
        gl.glEnable(GL10.GL_TEXTURE_2D);
        // Should set rgb to greyish, and alpha to half-transparent; the greyish is
        // just there to make the question more general, it's the alpha I'm interested in
        gl.glColor4f(.75f, .75f, .75f, 0.5f);
        gl.glVertexPointer(mVertexSize, GL10.GL_FLOAT, 0, mVertexBuffer);
        gl.glColorPointer(4, GL10.GL_FLOAT, 0, mColorBuffer);
        gl.glTexCoordPointer(2, GL10.GL_FLOAT, 0, mTexCoordBuffer);
        gl.glDrawElements(GL10.GL_TRIANGLES, indexCount, GL10.GL_UNSIGNED_SHORT,
                mIndexBuffer.position(startIndex));

    If I disable the color array (do not call gl.glEnableClientState(GL10.GL_COLOR_ARRAY)), then glColor4f works; if I enable the color array it does nothing. Is there any way in OpenGL ES to change the coloring without changing all the color coordinates? I think that in OpenGL one might use a fragment shader, but it seems OpenGL ES 1.1 does not have a fragment shader (not that I know how to use one).

    Read the article

  • overriding enumeration base type using pragma or code change

    - by vprajan
    Problem: I am using a big C/C++ code base that works with the gcc and Visual Studio compilers, where the enum base type is by default 32-bit (integer type). This code also has lots of inline and embedded assembly that treats enums as integer type, and enum data is used as 32-bit flags in many cases. When we compiled this code with the RealView ARM RVCT 2.2 compiler, we started getting many issues, since the RealView compiler decides the enum base type automatically based on the values the enum is set to (see http://www.keil.com/support/man/docs/armccref/armccref_Babjddhe.htm).

    For example, consider the enum below:

        enum Scale {
            TimesOne,   //0
            TimesTwo,   //1
            TimesFour,  //2
            TimesEight, //3
        };

    This enum is used as a 32-bit flag, but the compiler optimizes the base type of this enum down to unsigned char. Using the --enum_is_int compiler option is not a good solution for our case, since it converts all the enums to 32-bit, which will break interaction with any external code compiled without --enum_is_int. This is a warning I found in the RVCT Compilers and Libraries guide:

    The --enum_is_int option is not recommended for general use and is not required for ISO-compatible source. Code compiled with this option is not compliant with the ABI for the ARM Architecture (base standard) [BSABI], and incorrect use might result in a failure at runtime. This option is not supported by the C++ libraries.

    Question: How can I convert the base type of all the enums (by hand-coded changes) to 32-bit without affecting value ordering?

        enum Scale {
            TimesOne = 0x00000000,
            TimesTwo,   // 0x00000001
            TimesFour,  // 0x00000002
            TimesEight, // 0x00000003
        };

    I tried the above change, but the compiler optimizes this too, to our bad luck. :( There is some syntax in .NET like enum Scale : int. Is this ISO C++ standard syntax that the ARM compiler lacks? There is no #pragma to control this enum in the ARM RVCT 2.2 compiler. Is there any hidden pragma available?

    Read the article

  • Python: speed up removal of every n-th element from list.

    - by ChristopheD
    I'm trying to solve this programming riddle, and although the solution (see code below) works correctly, it is too slow for successful submission. Any pointers on how to make this run faster (removal of every n-th element from a list)? Or suggestions for a better algorithm to calculate the same; I can't think of anything other than brute force for now...

    Basically the task at hand is:

    GIVEN: L = [2,3,4,5,6,7,8,9,10,11,........]

    1. Take the first remaining item in list L (in the general case 'n'). Move it to the 'lucky number list'. Then drop every 'n-th' item from the list.
    2. Repeat 1.

    TASK: Calculate the n-th number from the 'lucky number list' (1 <= n <= 3000)

    My current code (it calculates the first 3000 lucky numbers in about a second on my machine - but unfortunately too slow):

        """ SPOJ Problem Set (classical)
            1798. Assistance Required
            URL: http://www.spoj.pl/problems/ASSIST/
        """
        sieve = range(3, 33900, 2)
        luckynumbers = [2]
        while True:
            wanted_n = input()
            if wanted_n == 0:
                break
            while len(luckynumbers) < wanted_n:
                item = sieve[0]
                luckynumbers.append(item)
                items_to_delete = set(sieve[::item])
                sieve = filter(lambda x: x not in items_to_delete, sieve)
            print luckynumbers[wanted_n-1]
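
    For what it's worth, the set-and-filter step above can also be expressed as an in-place extended-slice deletion. This is only a sketch mirroring the code in the question (Python 2, same 33900 bound, input loop omitted); whether it is fast enough for the judge is not verified here:

        # Sketch: same removal step as above, but deleting in place with an
        # extended slice instead of building a set and filtering the whole list.
        sieve = range(3, 33900, 2)
        luckynumbers = [2]
        while len(luckynumbers) < 3000:
            item = sieve[0]
            luckynumbers.append(item)
            del sieve[::item]      # drops indices 0, item, 2*item, ... in one step
        # luckynumbers[n-1] is then the n-th lucky number, as in the original code.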

    Read the article

  • How To Publish Business Objects Query Service

    - by ssorrrell
    We are trying to copy a BO Query Service from one Universe to another. If you use the BO Query As A Service (QAAS) tool you can do this, but you end up basically recreating the query service. It seems like the BusinessObjects.DSWS.* libraries allow you to read and write query services, but those don't appear in the QAAS tool. I think that those queries go into a different Universe than the one the QAAS tool pings. Perhaps there is a Universe for data and another for Web Service Queries.

    Monitoring the QAAS tool for HTTP traffic revealed that the BO Web Service used to run queries for the data they contain is also used to manage the Web Service queries. I was able to copy one Query Service into a new one in a new Universe using a Replace() on the XML string in QuerySpec to change the UniverseID. So we can basically copy one Query Service to another Universe without manually rebuilding it, except for one little thing.

    The QAAS tool includes a Publish button. This does something unknown, but important. Perhaps it creates some SOAP, WSDL or config files so that the copied Query Service is public. There doesn't seem to be any HTTP traffic to snoop on when it's doing this. The BusinessObjects.DSWS.* libraries include a Publish feature, but it's not for Query Services; it's for general files like Excel and PDF. Right now, we are relegated to using two tools. Does anyone know how to publish a BO Query Service programmatically, just like the QAAS tool does?

    Read the article

  • Character encoding issues when generating MD5 hash cross-platform

    - by rogueprocess
    This is a general question about character encoding when using MD5 libraries in various languages. My concern is: suppose I generate an MD5 hash using a native Python string object, like this:

        message = "hello world"
        m = md5()
        m.update(message)

    Then I take a hex version of that MD5 hash using m.hexdigest() and send the message and MD5 hash via a network, let's say a JMS message or an HTTP request. Now I get this message in a Java program in the form of a native Java string, along with the checksum. Then I generate an MD5 hash using Java, like this (using the Commons Codec library):

        String md5 = org.apache.commons.codec.digest.DigestUtils.md5Hex(s);

    My feeling is that this is wrong, because I have not specified the character encoding at either end. So the original hash will be based on the bytes of the Python version of the string; the Java one will be based on the bytes of the Java version of the string, and these two byte sequences will often not be the same - is that right? So really I need to specify "UTF-8" or whatever at both ends, right? (I am actually getting an intermittent error in my code where the MD5 checksum fails, and I suspect this is the reason - but because it's intermittent, it's difficult to say if changing this fixes it or not.) Thank you!
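
    A minimal sketch of the "specify the encoding at both ends" idea on the Python side (the Java side would likewise need to hash the UTF-8 bytes of its string; that half is not shown):

        # Sketch: hash an explicit UTF-8 byte sequence instead of whatever bytes the
        # platform's native string happens to hold.
        import hashlib

        message = u"hello world"
        digest = hashlib.md5(message.encode("utf-8")).hexdigest()
        print(digest)  # both ends hashing the same UTF-8 bytes will agree on this value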

    Read the article

  • Pass table as parameter to SQLCLR TV-UDF

    - by Skeolan
    We have a third-party DLL that can operate on a DataTable of source information and generate some useful values, and we're trying to hook it up through SQLCLR to be callable as a table-valued UDF in SQL Server 2008. Taking the concept here one step further, I would like to program a CLR table-valued function that operates on a table of source data from the DB. I'm pretty sure I understand what needs to happen on the T-SQL side of things; but what should the method signature look like in the .NET (C#) code? What would be the parameter datatype for "table data from SQL Server"? e.g.

        /* Setup */
        CREATE TYPE InTableType AS TABLE (LocationName VARCHAR(50), Lat FLOAT, Lon FLOAT)
        GO
        CREATE TYPE OutTableType AS TABLE (LocationName VARCHAR(50), NeighborName VARCHAR(50), Distance FLOAT)
        GO
        CREATE ASSEMBLY myCLRAssembly FROM 'D:\assemblies\myCLR_UDFs.dll'
            WITH PERMISSION_SET = EXTERNAL_ACCESS
        GO
        CREATE FUNCTION GetDistances(@locations InTableType) RETURNS OutTableType
            AS EXTERNAL NAME myCLRAssembly.GeoDistance.SQLCLRInitMethod
        GO

        /* Execution */
        DECLARE @myTable InTableType
        INSERT INTO @myTable(LocationName, Lat, Lon) VALUES('aaa', -50.0, -20.0)
        INSERT INTO @myTable(LocationName, Lat, Lon) VALUES('bbb', -20.0, -50.0)
        SELECT * FROM @myTable

        DECLARE @myResult OutTableType
        INSERT INTO @myResult
            MyCLRTVFunction @myTable --returns a table result calculated using the input

    The lat/lon - distance thing is a silly example that should of course be better handled entirely in SQL; but I hope it illustrates the general intent of table-in - table-out through a table-valued UDF tied to a SQLCLR assembly. I am not certain this is possible; what would the SQLCLRInitMethod method signature look like in the C#?

        public class GeoDistance
        {
            [SqlFunction(FillRowMethodName = "FillRow")]
            public static IEnumerable SQLCLRInitMethod(<appropriateType> myInputData)
            {
                //...
            }

            public static void FillRow(...)
            {
                //...
            }
        }

    If it's not possible, I know I can use a "context connection=true" SQL connection within the C# code to have the CLR component query for the necessary data given the relevant keys; but that's sensitive to changes in the DB schema. So I hope to just have SQL bundle up all the source data and pass it to the function. Bonus question - assuming this works at all, would it also work with more than one input table?

    Read the article

  • When should one use the following: Amazon EC2, Google App Engine, Microsoft Azure and Salesforce.com

    - by vicky21
    I am asking this in a very general sense, both from the cloud provider's and the cloud consumer's perspective. Also, the question is not about any specific kind of application (in fact the intention is to know which types of applications/domains can fit into which of the cloud slabs - SaaS, PaaS, IaaS). My understanding so far is:

    IaaS: Raw hardware (processors, networks, storage).
    PaaS: OS, system software, development framework, virtual machines.
    SaaS: Software applications.

    It would be great if Stackoverflowers could share their understanding and experiences of the cloud computing concept.

    EDIT: OK, I will put it in a more specific way -

    Amazon EC2: You don't have control over the hardware layer. But you can take your choice of OS image, dev framework (.NET, J2EE, LAMP) and application and put it on EC2 hardware. Can you deploy applications built with Google App Engine or Azure on EC2?

    Google App Engine: You don't have control over hardware and OS, and you get a specific dev framework to build your application. Can you take any existing Java or Python application and port it to GAE? Or vice versa, can applications that were built on GAE be taken out of GAE and ported to any application server like WebSphere or WebLogic?

    Azure: You don't have control over hardware and OS, and you get a specific dev framework to build your application. Can you take any existing .NET application and port it to Azure? Or vice versa, can applications that were built on Azure be taken out of Azure and ported to any application server like BizTalk?

    Read the article

  • MySQL Unique hash insertion

    - by Jesse
    So, imagine a MySQL table with a few simple columns, an auto increment, and a hash (varchar, UNIQUE). Is it possible to give MySQL a query that will add a row and generate a unique hash without multiple queries? Currently, the only way I can think of to achieve this is with a while loop, which I worry would become more and more processor intensive the more entries there are in the db. Here's some pseudo-PHP, obviously untested, but it gets the general idea across:

        while(!query("INSERT INTO table (hash) VALUES (".generate_hash().");")){
            //found conflict, try again.
        }

    In the above example, the hash column would be UNIQUE, and so the query would fail on a collision. The problem is, say there are 500,000 entries in the db and I'm working off of a base36 hash generator with 4 characters. The likelihood of a conflict would be almost 1 in 3, and I definitely can't be running 160,000 queries. In fact, any more than 5 I would consider unacceptable.

    So, can I do this with pure SQL? I would need to generate a base62, 6 char string (like: "j8Du7X", chars a-z, A-Z, and 0-9), and either update the last_insert_id with it, or even better, generate it during the insert. I can handle basic CRUD with MySQL, but even JOINs are a little outside of my MySQL comfort zone, so excuse my ignorance if this is cake. Any ideas? I'd prefer to use either pure MySQL or PHP & MySQL, but hell, if another language can get this done cleanly, I'd build a script and AJAX it too. Thanks!
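
    As a sketch of the last_insert_id idea mentioned at the end (deriving the short string from the already-unique auto-increment id instead of retrying random hashes), here is an integer-to-base62 mapping; the language and the alphabet order are arbitrary choices for illustration:

        # Sketch: map a unique auto-increment id to a short base62 string.
        # Because the id is unique, the encoded string is unique too - no retry loop.
        ALPHABET = "0123456789abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ"

        def base62(n):
            if n == 0:
                return ALPHABET[0]
            digits = []
            while n > 0:
                n, rem = divmod(n, 62)
                digits.append(ALPHABET[rem])
            return "".join(reversed(digits))

        print(base62(123456789))  # -> "8m0Kx"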

    Read the article

  • like exec command in Silverlight (save and load properties of elements dynamically)

    - by Meysam Javadi
    I have some elements in my container and want to save all the properties of these elements. I list the elements with VisualTreeHelper and save their attributes in the DB; the question is how to retrieve these properties and apply them. I was thinking that Silverlight might have some statement that behaves like EXEC in SQL Server. I save the properties in one line, delimited by semicolons (if you have any better suggestion, I'd appreciate it).

    Edit: suppose this scenario: the end user chooses a tool from MyToolbox (a container like Grid), a dialog shows its properties for creation, and finally the Grid is drawn. Next he/she chooses an element (like Button) and drops it on one of the grid's cells. Now I want to save the workspace that he/she created. My RootLayout has one container control, so every element is a child of it. So far I want to create one string that contains the general properties (not all of them) and save it to the DB, and when I load this control, I create an element of the type that I saved and apply the properties that I saved, with something like an EXEC command. Is this possible? Do you have another approach for this scenario? (Please guide me with an example.)

    Read the article

  • Useful training courses that aren't specific to a single technology

    - by Dave Turvey
    I have possibly the best problem in the world. I have about £1600 left in a training budget and I need to find something to spend it on. I can spend it on anything that could be considered training: books, courses, conferences, etc. I would like to find a course that would benefit a software developer but is not about learning a specific programming technology. I don't really want to spend it on a technical training course; these topics are usually best learned with a good book and some trial and error. I have also already been on a general business/management training course and a PRINCE2 project management course.

    I am currently working on a project on my own, so I am responsible for communicating with the client, requirements gathering, project management, etc., as well as the coding. What training have you found useful outside the usual technical stuff? Has anyone done any business analysis courses? What were they like? Are there any courses on some of the practicalities of working with software, e.g. automated test and deployment strategies, or handling technical support? I would prefer a course in the UK but I can travel if necessary.

    Read the article

  • Package creation issues using SQL Developer

    - by Carter
    So I've never worked with stored procedures and don't have a whole lot of DB experience in general, and I've been assigned a task that requires I create a package, and I'm stuck. Using SQL Developer, I'm trying to create a package called JUMPTO with this code:

        create or replace package JUMPTO is
          type t_locations is ref cursor;
          procedure procGetLocations(locations out t_locations);
        end JUMPTO;

    When I run it, it spits out this PL/SQL code block:

        DECLARE
          LOCATIONS APPLICATION.JUMPTO.t_locations;
        BEGIN
          JUMPTO.PROCGET_LOCATIONS(
            LOCATIONS = LOCATIONS
          );
          -- Modify the code to output the variable
          -- DBMS_OUTPUT.PUT_LINE('LOCATIONS = ' || LOCATIONS);
        END;

    A tutorial I found said to take out the comment for that second line there. I've tried with and without the comment. When I hit "OK" I get the error:

        ORA-06550: line 2, column 32:
        PLS-00302: component 'JUMPTO' must be declared
        ORA-06550: line 2, column 13:
        PL/SQL: item ignored
        ORA-06550: line 6, column 18:
        PLS-00320: the declaration of the type of this expression is incomplete or malformed
        ORA-06550: line 5, column 3:
        PL/SQL: Statement ignored
        ORA-06512: at line 58

    I really don't have any idea what's going on; this is all completely new territory for me. I tried creating a body that just selected some stuff from the database, but nothing is working the way it seems like it should in my head. Can anyone give me any insight into this?

    Read the article

  • Extending Python and Objective-C

    - by chpwn
    I'm a fan of clean code. I like my languages to be able to express what I'm trying to do, but I like the syntax to mirror that too. For example, I work on a lot of programs in Objective-C for jailbroken iPhones, which patch other code using the method_setImplementation() function of the runtime. Or, in PyObjC, I have to use the syntax UIView.initWithFrame_(), which is also pretty awful and unreadable with the way the method names are structured. In both cases, the language does not support this in syntax. I've found three basic ways that this is done:

    1. Insane macros. Take a look at this "CaptainHook"; it does what I'm looking for in a usable way, but it isn't quite clean and is a major hack.
    2. There's also "Logos", which implements a very nice syntax, but is written in Perl, parsing my code with a ton of regular expressions. This scares me. I like the idea of adding a %hook ClassName, but not by using regular expressions to parse C or Objective-C.
    3. Finally, there is Cycript. This is an extension to JavaScript which interfaces with the Objective-C runtime and allows you to use Objective-C style code in your JavaScript, and inject that into other processes. This is likely the cleanest, as it actually uses a parser for the JavaScript, but I'm not a huge fan of that language in general.

    Should I, and how should I, create an extension to Python and Objective-C to allow me to do this? Is it worth writing a parser for my language to transform the syntax into something nicer, if it is only in a very specialized niche like this? Should I just live with the horrible syntax of the default Objective-C hooking or PyObjC?

    Read the article

  • EXC_BAD_ACCESS NSUrlConnection

    - by Lars
    Hi all, I get an EXC_BAD_ACCESS when I perform the last line of the function (the webData line).

        -(void)requestSoap{
            NSString *requestUrl = @"http://www.website.com/webservice.php";
            NSString *soapMessage = @"the soap message";
            //website and soapmessage are valid in original code.
            NSError **error;
            NSURLResponse *response;

            //Convert parameter string to url
            NSURL *url = [NSURL URLWithString:requestUrl];
            NSMutableURLRequest *theRequest = [NSMutableURLRequest requestWithURL:url
                cachePolicy:NSURLRequestReloadIgnoringCacheData timeoutInterval:10];
            NSString *msgLength = [NSString stringWithFormat:@"%d", [soapMessage length]];

            //Create an XML message for webservice
            [theRequest addValue: @"text/xml; charset=utf-8" forHTTPHeaderField:@"Content-Type"];
            [theRequest addValue: msgLength forHTTPHeaderField:@"Content-Length"];
            [theRequest setHTTPMethod:@"POST"];
            [theRequest setHTTPBody: [soapMessage dataUsingEncoding:NSUTF8StringEncoding]];

            NSData *webData = [NSURLConnection sendSynchronousRequest:theRequest
                returningResponse:&response error:error];
        }

    I tried not releasing anything, because what I read on the net is that it's almost always a memory thing. When I debug the code (NSZombieEnabled = YES) this is what I get:

        [Session started at 2010-05-31 15:56:13 +0200.]
        GNU gdb 6.3.50-20050815 (Apple version gdb-1461.2) (Fri Mar 5 04:43:10 UTC 2010)
        Copyright 2004 Free Software Foundation, Inc.
        GDB is free software, covered by the GNU General Public License, and you are
        welcome to change it and/or distribute copies of it under certain conditions.
        Type "show copying" to see the conditions.
        There is absolutely no warranty for GDB. Type "show warranty" for details.
        This GDB was configured as "x86_64-apple-darwin".sharedlibrary apply-load-rules all
        Attaching to process 19856.
        test(19856) malloc: recording malloc stacks to disk using standard recorder
        test(19856) malloc: enabling scribbling to detect mods to free blocks
        test(19856) malloc: process 19832 no longer exists, stack logs deleted from /tmp/stack-logs.19832.test.w9Ek4L.index
        test(19856) malloc: stack logs being written into /tmp/stack-logs.19856.test.URRpQF.index
        Program received signal: "EXC_BAD_ACCESS".

    Does anybody have a clue? Thanks a lot! Lars

    Read the article

  • IPhone app with SSL client certs

    - by Pavel Georgiev
    I'm building an iPhone app that needs to access a web service over HTTPS using client certificates. If I put the client cert (in PKCS12 format) in the app bundle, I'm able to load it into the app and make the HTTPS call (largely thanks to stackoverflow.com). However, I need a way to distribute the app without any certs and leave it to the user to provide his own certificate. I thought I would just do that by instructing the user to import the certificate into the iPhone's profiles (Settings - General - Profiles), which is what you get by opening a .p12 file in Mail.app, and then I would access that item in my app. I would expect that the certificates in profiles are available through the keychain API, but I guess I'm wrong on that.

    1) Is there a way to access a certificate that I've already loaded into the iPhone's profiles from my app?

    2) What other options do I have for loading a user-specified certificate in my app? The only thing I can come up with is providing some interface where the user can give a URL to his .p12 certificate, which I can then load into the app's keychain for later use, but that's not exactly user-friendly. I'm looking for something that would allow the user to put the cert on the phone (email it to himself) and then load it in my app.

    Read the article

  • fast similarity detection

    - by reinierpost
    I have a large collection of objects and I need to figure out the similarities between them. To be exact: given two objects I can compute their dissimilarity as a number, a metric - higher values mean less similarity, and 0 means the objects have identical contents. The cost of computing this number is proportional to the size of the smaller object (each object has a given size).

    I need the ability to quickly find, given an object, the set of objects similar to it. To be exact: I need to produce a data structure that maps any object o to the set of objects no more dissimilar to o than d, for some dissimilarity value d, such that listing the objects in the set takes no more time than if they were in an array or linked list (and perhaps they actually are). Typically, the set will be very much smaller than the total number of objects, so it is really worthwhile to perform this computation. It's good enough if the data structure assumes a fixed d, but if it works for an arbitrary d, even better.

    Have you seen this problem before, or something similar to it? What is a good solution? To be exact: a straightforward solution involves computing the dissimilarities between all pairs of objects, but this is slow - O(n²) where n is the number of objects. Is there a general solution with lower complexity?
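
    For reference, a sketch of the straightforward all-pairs construction described in the last paragraph (Python; dissimilarity stands in for the metric from the question, and d is assumed fixed):

        # Sketch of the O(n^2) baseline: precompute, for each object, the list of
        # objects whose dissimilarity to it is at most d.
        def build_neighbor_map(objects, dissimilarity, d):
            neighbors = {id(o): [] for o in objects}
            for i, a in enumerate(objects):
                for b in objects[i + 1:]:
                    if dissimilarity(a, b) <= d:
                        neighbors[id(a)].append(b)
                        neighbors[id(b)].append(a)
            return neighbors  # lookup is then a plain list per object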

    Read the article

  • Where should I declare my CDI resources?

    - by Laird Nelson
    JSR-299 (CDI) introduces the (unfortunately named) concept of a resource: http://docs.jboss.org/weld/reference/1.0.0/en-US/html/resources.html#d0e4373

    You can think of a resource in this nomenclature as a bridge between the Java EE 6 brand of dependency injection (@EJB, @Resource, @PersistenceContext and the like) and CDI's brand of dependency injection. The general gist seems to be that somewhere (and this will be the root of my question) you declare what amounts to a bridge class: it contains fields annotated both with Java EE's @EJB or @PersistenceContext or @Resource annotations and with CDI's @Produces annotations. The net effect is that Java EE 6 injects a persistence context, say, where it's called for, and CDI recognizes that injected PersistenceContext as a source for future injections down the line (handled by @Inject).

    My question is: what is the community's consensus - or is there one - on:

    - what this bridge class should be named
    - where this bridge class should live
    - whether it's best to localize all this stuff into one class or make several of them
    - ...?

    Left to my own devices, I was thinking of declaring a single class called CDIResources and using that as the One True Place to link Java EE's DI with CDI's DI. Many examples do something similar, but I'm not clear on whether they're "just" examples or whether that's a good way to do it. Thanks.

    Read the article

  • Strange rare out-of-order data received using Indy

    - by Jim
    We're having a bizarre problem with Indy10 where two large strings (a few hundred characters each) that we send out one after the other are appearing at the other end intertwined oddly. This happens extremely infrequently. Each string is a complete XML message terminated with a LF and in general the READ process reads an entire XML message, returning when it sees the LF. The call to actually send the message is protected by a critical section around the call to the IOHandler's writeln method and so it is not possible for two threads to send at the same time. (We're certain the critical section is implemented/working properly.)

    This problem happens very rarely. The symptoms are odd... when we send string A followed by string B, what we received at the other end (on the rare occasions where we have failure) is the trailing section of string A by itself (i.e., there's a LF at the end of it) followed by the leading section of string A and then the entire string B followed by a single LF.

    We've verified that the "timed out" property is not true after the partial read - we log that property after every read that returns content. Also, we know there are no embedded LF characters in the string, as we explicitly replace all non-alphanumeric characters in the string with spaces before appending the LF and sending it. We have log mechanisms inside the critical sections on both the transmission and receiving ends and so we can see this behavior at the "wire". We're completely baffled and wondering (although always the lowest possibility) whether there could be some low-level Indy issues that might cause this issue, e.g., buffers being sent in the wrong order... very hard to believe this could be the issue but we're grasping at straws. Does anyone have any bright ideas?

    Read the article

  • Can a call to WaitHandle.SignalAndWait be ignored for performance profiling purposes?

    - by Dan Tao
    I just downloaded the trial version of ANTS Performance Profiler from Red Gate and am investigating some of my team's code. Immediately I notice that there's a particular section of code that ANTS is reporting as eating up to 99% CPU time. I am completely unfamiliar with ANTS or performance profiling in general (that is, aside from self-profiling using what I'm sure are extremely crude and frowned-upon methods such as double timeToComplete = (endTime - startTime).TotalSeconds), so I'm still fiddling around with the application and figuring out how it's used.

    But I did call the developer responsible for the code in question and his immediate reaction was "Yeah, that doesn't surprise me that it says that; but that code calls SignalAndWait [which I could see for myself, thanks to ANTS], which doesn't use any CPU, it just sits there waiting for something to do." He advised me to simply ignore that code and look for anything ELSE I could find.

    My question: is it true that SignalAndWait requires NO CPU overhead (and if so, how is this possible?), and is it reasonable that a performance profiler would view it as taking up 99% CPU time? I find this particularly curious because, if it's at 99%, that would suggest that our application is often idle, wouldn't it? And yet its performance has become rather sluggish lately. Like I said, I really am just a beginner when it comes to this tool, and I don't know anything about the WaitHandle class. So ANY information to help me to understand what's going on here would be appreciated.

    Read the article

  • Calling AddEventListener in a loop with a variable element name

    - by user302209
    Hi, I'm trying to do the following: I have a set of images and select (dropdown) HTML elements, 30 of each one. I'm trying to use AddEventListener on a loop from 1 to 30 so that when I change the value of the select, the image src is updated (and the image changes). The AddEventListener function is this one:

        function AddEventListener(element, eventType, handler, capture) {
            if (element.addEventListener)
                element.addEventListener(eventType, handler, capture);
            else if (element.attachEvent)
                element.attachEvent("on" + eventType, handler);
        }

    I tried this and it worked:

        var urlfolderanimalimages = "http://localhost/animalimages/";
        var testselect = "sel15";
        var testimg = "i15";

        AddEventListener(document.getElementById(testselect), "change", function(e) {
            document.getElementById(testimg).src = urlfolderanimalimages + document.getElementById(testselect).value;
            document.getElementById(testimg).style.display = 'inline';

            if (e.preventDefault) e.preventDefault();
            else e.returnResult = false;
            if (e.stopPropagation) e.stopPropagation();
            else e.cancelBubble = true;
        }, false);

    But then I tried to call it in a loop and it doesn't work. The event is added, but when I change any select, it will update the last one (the image with id i30).

        var urlfolderanimalimages = "http://localhost/animalimages/";

        for (k=1;k<=30;k++) {
            var idselect = "sel" + k;
            var idimage = "i" + k;

            AddEventListener(document.getElementById(idselect), "change", function(e) {
                document.getElementById(idimage).src = urlfolderanimalimages + document.getElementById(idselect).value;
                document.getElementById(idimage).style.display = 'inline';

                if (e.preventDefault) e.preventDefault();
                else e.returnResult = false;
                if (e.stopPropagation) e.stopPropagation();
                else e.cancelBubble = true;
            }, false);
        }

    What am I doing wrong? I'm new to JavaScript (and programming in general), so sorry for the vomit-inducing code :(
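
    The symptom described (every handler acting on the last id) is the classic late-binding closure behaviour, where all the callbacks share the loop variable rather than its value at the time they were created. Purely as an illustration of that mechanism, and not of the JavaScript fix itself, the same effect and the usual per-iteration capture can be reproduced in Python:

        # Sketch (Python, just to illustrate the closure behaviour described above):
        # all three callbacks share the loop variable and see its final value ...
        callbacks = []
        for k in range(1, 4):
            callbacks.append(lambda: "sel%d" % k)
        print([f() for f in callbacks])      # ['sel3', 'sel3', 'sel3']

        # ... whereas binding the current value per iteration keeps each one distinct.
        callbacks = [lambda k=k: "sel%d" % k for k in range(1, 4)]
        print([f() for f in callbacks])      # ['sel1', 'sel2', 'sel3']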

    Read the article

  • Bitwise Interval Arithmetic

    - by KennyTM
    I've recently read an interesting thread on the D newsgroup, which basically asks: given two signed integers a ∈ [a_min, a_max], b ∈ [b_min, b_max], what is the tightest interval of a | b?

    I'm wondering whether interval arithmetic can be applied to the general bitwise operators (assuming infinite bits). Bitwise-NOT and the shifts are trivial, since they just correspond to −1 − x and 2^n·x. But bitwise-AND/OR are a lot trickier, due to the mix of bitwise and arithmetic properties. Is there a polynomial-time algorithm to compute the intervals of bitwise-AND/OR?

    Note: Assume all bitwise operations run in linear time (in the number of bits), and testing/setting a bit is constant time. The brute-force algorithm runs in exponential time. Because ~(a | b) = ~a & ~b and a ^ b = (a | b) & ~(a & b), solving the bitwise-AND and -NOT problems implies bitwise-OR and -XOR are done. Although the content of that thread suggests min{a | b} = max(a_min, b_min), it is not the tightest bound: just consider [2, 3] | [8, 9] = [10, 11].
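
    For reference, the closing example can be checked with the exponential-time brute force the question mentions; a small sketch (Python), which simply enumerates both intervals:

        # Brute force: enumerate both intervals and take min/max of a | b.
        def or_interval(a_min, a_max, b_min, b_max):
            values = [a | b for a in range(a_min, a_max + 1)
                            for b in range(b_min, b_max + 1)]
            return min(values), max(values)

        print(or_interval(2, 3, 8, 9))   # (10, 11), tighter than max(a_min, b_min) alone suggests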

    Read the article

  • An alternative to reading input from Java's System.in

    - by dvanaria
    I'm working on the UVa Online Judge problem set archive as a way to practice Java, and as a way to practice data structures and algorithms in general. They give an example input file to submit to the online judge to use as a starting point (it's the solution to problem 100). Input from the standard input stream (java.lang.System.in) is required as part of any solution on this site, but I can't understand the implementation of reading from System.in they give in their example solution. It's true that the input file could consist of any variation of integers, strings, etc., but every solution program requires reading basic lines of text input from System.in, one line at a time. There has to be a better (simpler and more robust) method of gathering data from the standard input stream in Java than this:

        public static String readLn(int maxLg) {
            byte lin[] = new byte[maxLg];
            int lg = 0, car = -1;
            String line = "";
            try {
                while (lg < maxLg) {
                    car = System.in.read();
                    if ((car < 0) || (car == '\n')) {
                        break;
                    }
                    lin[lg++] += car;
                }
            } catch (java.io.IOException e) {
                return (null);
            }
            if ((car < 0) && (lg == 0)) {
                return (null); // eof
            }
            return (new String(lin, 0, lg));
        }

    I'm really surprised by this. It looks like something pulled directly from K&R's "C Programming Language" (a great book regardless), minus the access level modifier and exception handling, etc. Even though I understand the implementation, it just seems like it was written by a C programmer and bypasses most of Java's object-oriented nature. Isn't there a better way to do this, using the StringTokenizer class or maybe the split method of String or the java.util.regex package instead?

    Read the article

  • Code for decoding/encoding a modified base64 URL

    - by Kirk Liemohn
    I want to base64 encode data to put it in a URL and then decode it within my HttpHandler. I have found that Base64 encoding allows for a '/' character, which will mess up my UriTemplate matching. Then I found that there is a concept of a "modified Base64 for URL" from Wikipedia:

    A modified Base64 for URL variant exists, where no padding '=' will be used, and the '+' and '/' characters of standard Base64 are respectively replaced by '-' and '_', so that using URL encoders/decoders is no longer necessary and has no impact on the length of the encoded value, leaving the same encoded form intact for use in relational databases, web forms, and object identifiers in general.

    Using .NET, I want to modify my current code from doing basic Base64 encoding and decoding to using the "modified Base64 for URL" method. Has anyone done this? To decode, I know it starts out with something like:

        string base64EncodedText = base64UrlEncodedText.Replace('-', '+').Replace('_', '/');
        // Append '=' char(s) if necessary - how best to do this?
        // My normal base64 decoding now uses encodedText

    But I need to potentially add one or two '=' chars to the end, which looks a little more complex. My encoding logic should be a little simpler:

        // Perform normal base64 encoding
        byte[] encodedBytes = Encoding.UTF8.GetBytes(unencodedText);
        string base64EncodedText = Convert.ToBase64String(encodedBytes);

        // Apply URL variant
        string base64UrlEncodedText = base64EncodedText.Replace("=", String.Empty).Replace('+', '-').Replace('/', '_');

    I have seen the "Guid to Base64 for URL" StackOverflow entry, but that has a known length and therefore they can hardcode the number of equal signs needed at the end.
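
    The "append one or two '=' chars" step is just padding the length back up to a multiple of 4. A sketch of the round trip, shown in Python purely to illustrate the arithmetic (its base64 module happens to ship urlsafe helpers for the '-'/'_' alphabet); a .NET version would do the same length computation:

        import base64

        def encode_for_url(data):
            # urlsafe_b64encode already uses '-' and '_'; strip the '=' padding.
            return base64.urlsafe_b64encode(data).rstrip(b"=").decode("ascii")

        def decode_from_url(text):
            # Re-append '=' until the length is a multiple of 4, then decode.
            padded = text + "=" * (-len(text) % 4)
            return base64.urlsafe_b64decode(padded.encode("ascii"))

        token = encode_for_url(b"hello world")   # no '/', '+' or '=' in the result
        assert decode_from_url(token) == b"hello world"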

    Read the article

  • Windows-Mobile Directshow: Specifying bitrate/quality of a WMV video capture

    - by Landstander
    Hi - I'm stumped on this, and I'm really hoping someone can point me in the right direction. I'm currently capturing video in Windows Mobile and encoding it using the WMV 9 DMO (CLSID_CWMV9EncMediaObject). That all works well enough, but the output video's bitrate is too high, resulting in a video file that's much too large for my needs. Ultimately, my goal is to mimic the video settings that Microsoft's Camera Capture Dialog outputs in the "messaging" quality mode (64kbps) from my C++ code. Currently, my code outputs a WMV file with a bitrate of 352kbps. The only example I could find of specifying the capture bitrate with a WMV9 DMO was this. The idea in that code was basically to use a property bag to write a bitrate to a property of the DMO.

    Update: In Windows Mobile, the closest codec property I can find that seems to equate to the bitrate is "g_wszWMVCVBRQuality". Microsoft's documentation of this property is extremely confusing to me: it basically seems to say that a higher number equates to a higher quality, but it gives absolutely no explanation of the specifics for each number. When I attempt to set this property to a value like "1" via a property bag for the WMV9 DMO, I run into a -2147467259 (unknown) error.

    To summarize: what is the basic strategy to specify the bitrate/quality of a video being captured via DirectShow (WMV9) on a Windows Mobile platform? I've heard of (or wondered about) the following methods:

    1. Use the property bag to change the encoder DMO's property that corresponds to bitrate/quality (currently failing).
    2. Create your own custom transcoder/encoder to specify it. This seems unnecessary, since the WMV encoder works well enough - it's just at too high a bitrate.
    3. The VIDEOINFOHEADER has a bitrate property, but I suspect that specifying new settings here will do nothing to alter the actual encoding process, since I wouldn't think file attributes would come into play until after the encoding.

    Any suggestions? PS: I would post specific source code, but at this point it may confuse more than it helps, since I'm floundering so much on how to do this. At this point, I'm just trying to validate the general strategy. THANKS!

    Read the article

  • GH-Unit for unit testing Objective-C code, why am I getting linking errors?

    - by djhworld
    Hi there, I'm trying to dive into the quite frankly terrible world of unit testing using Xcode (such a convoluted process it seems). Basically I have this test class, attempting to test my Show.h class:

        #import <GHUnit/GHUnit.h>
        #import "Show.h"

        @interface ShowTest : GHTestCase { }
        @end

        @implementation ShowTest

        - (void)testShowCreate {
            Show *s = [[Show alloc] init];
            GHAssertNotNil(s, @"Was nil.");
        }

        @end

    However, when I try to build and run my tests it moans with this error:

        Undefined symbols:
          "_OBJC_CLASS_$_Show", referenced from:
              __objc_classrefs__DATA@0 in ShowTest.o
        ld: symbol(s) not found
        collect2: ld returned 1 exit status

    Now I'm presuming this is a linking error. I tried following every step in the instructions located here: http://github.com/gabriel/gh-unit/blob/master/README.md

    And step 2 of these instructions confused me:

    In the Target 'Tests' Info window, General tab:
    - Add a linked library, under Mac OS X 10.5 SDK section, select GHUnit.framework
    - Add a linked library, select your project.
    - Add a direct dependency, and select your project. (This will cause your application or framework to build before the test target.)

    How am I supposed to add my project to the linked library list when all it accepts is .dylib, .framework and .o files? I'm confused! Thanks for any help that is received.

    Read the article
