Search Results

Search found 10536 results on 422 pages for 'cpu usage'.


  • What is the easiest way to add compression to WCF in Silverlight?

    - by caryden
    I have a Silverlight 2 beta 2 application that accesses a WCF web service. Because of this, it currently can only use basicHttpBinding. The web service returns fairly large amounts of XML data, which seems wasteful from a bandwidth standpoint: the response, if zipped, would be smaller by a factor of 5 (I actually pasted the response into a text file and zipped it). The request does carry the header "Accept-Encoding: gzip, deflate". Is there any way to have the WCF service gzip (or otherwise compress) the response? I did find this link, but it sure seems a bit complex for functionality that should be handled out of the box, IMHO.

    OK - at first I marked the solution using System.IO.Compression as the answer, as I could never "seem" to get the IIS7 dynamic compression to work. Well, as it turns out, dynamic compression on IIS7 was working all along. It is just that Nikhil's Web Developer Helper plugin for IE did not show it working. My guess is that since Silverlight hands the web service call off to the browser, the browser handles it "under the covers" and Nikhil's tool never sees the compressed response. I was able to confirm this by using Fiddler, which monitors traffic external to the browser application. In Fiddler, the response was, in fact, gzip compressed!

    The other problem with the System.IO.Compression solution is that System.IO.Compression does not exist in the Silverlight CLR. So from my perspective, the EASIEST way to enable WCF compression in Silverlight is to enable dynamic compression in IIS7 and write no code at all.
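
    For reference, a minimal sketch of the IIS7 web.config toggle described above (an illustration, not the article's own config; the dynamic compression module must also be installed, and the response's MIME type must be enabled under httpCompression/dynamicTypes):

        <!-- web.config -->
        <system.webServer>
          <urlCompression doStaticCompression="true" doDynamicCompression="true" />
        </system.webServer>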

    Read the article

  • How to inject code into C# method calls from a separate app

    - by Fusspawn
    I was curious if anyone knew of a way of monitoring a .NET application's runtime info (what method is being called and such) and injecting extra code to be run on certain methods, from a separately running process. Say I have two applications.

    app1.exe, which for simplicity's sake could be:

        class Program
        {
            static void Main(string[] args)
            {
                while (true) { Somefunc(); }
            }

            static void Somefunc()
            {
                Console.WriteLine("Hello World");
            }
        }

    And a second application that I wish to be able to detect when Somefunc() from application 1 is running, and inject its own code:

        class Program
        {
            static void Main(string[] args)
            {
                while (true)
                {
                    if (App1.SomeFuncIsCalled)
                        InjectCode();
                }
            }

            static void InjectCode()
            {
                App1.Console.WriteLine("Hello World Injected");
            }
        }

    So the result would be that application 1 would show:

        Hello World
        Hello World Injected

    I understand it's not going to be this simple (by a long shot), but I have no idea if it's even possible, and if it is, where to even start. Any suggestions? I've seen similar done in Java, but never in C#.

    EDIT: To clarify, the usage of this would be to add a plugin system to a .NET based game that I do not have access to the source code of.
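
    As an editorial aside: once your own assembly is loaded inside the target process (e.g. by a loader or mod bootstrap), a runtime patching library can intercept a named method. A minimal sketch, assuming the open-source HarmonyLib package; App1.Program here mirrors the example above and stands in for the game's real type:

        using System;
        using HarmonyLib;

        // Runs extra code after every call to App1's Somefunc().
        [HarmonyPatch(typeof(App1.Program), "Somefunc")]
        class SomefuncPatch
        {
            static void Postfix()
            {
                Console.WriteLine("Hello World Injected");
            }
        }

        // In the injected assembly's entry point:
        // new Harmony("example.somefunc.patch").PatchAll();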

    Read the article

  • Java iteration reading & parsing

    - by Patrick Lorio
    I have a log file that I am reading into a string:

        public static String Read(String path) throws IOException {
            StringBuilder sb = new StringBuilder();
            InputStream in = new BufferedInputStream(new FileInputStream(path));
            int r;
            while ((r = in.read()) != -1) {
                sb.append((char) r); // cast, or the int code is appended instead of the character
            }
            return sb.toString();
        }

    Then I have a parser that iterates over the entire string once:

        void Parse() {
            String con = Read("log.txt");
            for (int i = 0; i < con.length(); i++) {
                /* parsing action */
            }
        }

    This is a huge waste of CPU cycles. I loop over all the content in Read. Then I loop over all the content in Parse. I could just place the /* parsing action */ inside the while loop in the Read method, which would be fine, but I don't want to copy the same code all over the place. How can I parse the file in one iteration over the contents and still have separate methods for parsing and reading? In C# I understand there is some sort of yield return mechanism, but I'm stuck with Java. What are my options in Java?
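
    One single-pass shape this can take (a sketch; the CharHandler interface is hypothetical, not part of the original code) is to invert control: the reader owns the loop and hands each character to a parsing callback:

        interface CharHandler {
            void handle(int c);
        }

        public static void read(String path, CharHandler handler) throws IOException {
            InputStream in = new BufferedInputStream(new FileInputStream(path));
            try {
                int r;
                while ((r = in.read()) != -1) {
                    handler.handle(r); // parse as we read; no intermediate string
                }
            } finally {
                in.close();
            }
        }

        // usage:
        // read("log.txt", new CharHandler() {
        //     public void handle(int c) { /* parsing action */ }
        // });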

    Read the article

  • __toString magic and type coercion

    - by TomcatExodus
    I've created a Template class for managing views and their associated data. It implements Iterator and ArrayAccess, and permits "sub-templates" for easy usage like so:

        <p><?php echo $template['foo']; ?></p>
        <?php foreach ($template->post as $post): ?>
        <p><?php echo $post['bar']; ?></p>
        <?php endforeach; ?>

    Anyway, rather than using inline core functions such as hash() or date(), I figured it would be useful to create a class called TemplateData, which would act as a wrapper for any data stored in the templates. This way, I can add a list of common methods for formatting, for example:

        echo $template['foo']->asCase('upper');
        echo $template['bar']->asDate('H:i:s');
        //etc..

    When a value is set via $template['foo'] = 'bar'; in the controllers, the value 'bar' is stored in its own TemplateData object. I've used the magic __toString() so that when you echo a TemplateData object, it casts to (string) and dumps its value. However, despite the mantra that controllers and views should not modify data, whenever I do something like this:

        $template['foo'] = 1;
        echo $template['foo'] + 1; //exception

    it dies with "Object of class TemplateData could not be converted to int", unless I recast $template['foo'] to a string:

        echo ((string) $template['foo']) + 1; //outputs 2

    Having to jump through that hoop sort of defeats the purpose. Are there any workarounds for this sort of behavior, or should I just take this as it is: an incidental prevention of data modification in views?
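
    A sketch of one possible workaround (the raw() accessor is hypothetical, not part of the class above): __toString() only fires in string contexts, so numeric contexts need an explicit route to the underlying value:

        class TemplateData {
            private $value;
            public function __construct($value) { $this->value = $value; }
            public function __toString() { return (string) $this->value; }
            public function raw() { return $this->value; } // for arithmetic contexts
        }

        $data = new TemplateData(1);
        echo $data . "!";      // "1!" - string context invokes __toString()
        echo $data->raw() + 1; // 2    - no coercion needed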

    Read the article

  • Failure remediation strategy for File I/O

    - by Brett
    I'm doing buffered I/O into a file, both read and write, using fopen(), fseeko(), and the other standard ANSI C file I/O functions. In all cases, I'm writing to a standard local file on a disk. How often do these file I/O operations fail, and what should the strategy be for failures? I'm not exactly looking for stats, but for a general-purpose statement on how far I should go to handle error conditions.

    For instance, I think everyone recognizes that malloc() could and probably will fail someday on some user's machine, and that the developer should check for a NULL being returned, but there is no great remediation strategy, since it probably means the system is out of memory. At least, this seems to be the approach taken with malloc() on desktop systems; embedded systems are different. Likewise, is it worth reattempting a file I/O operation, or should I just consider a failure to be basically unrecoverable? I would appreciate some code samples demonstrating proper usage, or a library guide reference that indicates how this is to be handled. Any other data is, of course, welcome.
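
    A minimal sketch of checked ANSI C writes (an illustration of one common convention, not a canonical answer): check every call, report errno, and treat failures as unrecoverable apart from surfacing them to the user:

        #include <stdio.h>
        #include <string.h>
        #include <errno.h>

        int write_file(const char *path, const void *buf, size_t len)
        {
            FILE *f = fopen(path, "wb");
            if (f == NULL) {
                fprintf(stderr, "open %s: %s\n", path, strerror(errno));
                return -1;
            }
            if (fwrite(buf, 1, len, f) != len) {
                fprintf(stderr, "write %s: %s\n", path, strerror(errno));
                fclose(f);
                return -1;
            }
            /* fclose() flushes buffers, so it can fail too (e.g. disk full). */
            if (fclose(f) != 0) {
                fprintf(stderr, "close %s: %s\n", path, strerror(errno));
                return -1;
            }
            return 0;
        }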

    Read the article

  • How to model has_many with polymorphism?

    - by Daniel Abrahamsson
    I've run into a situation that I am not quite sure how to model. Suppose I have a User class, and a user has many services. However, these services are quite different - for example, a MailService and a BackupService - so single table inheritance won't do. Instead, I am thinking of using polymorphic associations together with an abstract base class:

        class User < ActiveRecord::Base
          has_many :services
        end

        class Service < ActiveRecord::Base
          validates_presence_of :user_id, :implementation_id, :implementation_type
          belongs_to :user
          belongs_to :implementation, :polymorphic => true
          delegate :common_service_method, :name, :to => :implementation
        end

        # Base class for service implementations
        class ServiceImplementation < ActiveRecord::Base
          validates_presence_of :user_id, :on => :create
          has_one :service, :as => :implementation
          has_one :user, :through => :service
          after_create :create_service_record

          # Tell Rails this class does not use a table.
          def self.abstract_class?
            true
          end

          # Default name implementation.
          def name
            self.class.name
          end

          protected

          # Sets up a service object.
          def create_service_record
            service = Service.new(:user_id => user_id)
            service.implementation = self
            service.save!
          end
        end

        class MailService < ServiceImplementation
          # validations, etc...
          def common_service_method
            puts "MailService implementation of common service method"
          end
        end

    Example usage:

        MailService.create(..., :user_id => user.id)
        BackupService.create(..., :user_id => user.id)

        user.services.each do |s|
          puts "#{user.name} is using #{s.name}"
        end
        # Daniel is using MailService
        # Daniel is using BackupService

    So, is this the best solution? Or even a good one? How have you solved this kind of problem?

    Read the article

  • What are Code Smells? What is the best way to correct them?

    - by Rob Cooper
    OK, so I know what a code smell is, and the Wikipedia article is pretty clear in its definition:

        In computer programming, code smell is any symptom in the source code of a computer program that indicates something may be wrong. It generally indicates that the code should be refactored or the overall design should be reexamined. The term appears to have been coined by Kent Beck on WardsWiki. Usage of the term increased after it was featured in Refactoring: Improving the Design of Existing Code.

    I know it also provides a list of common code smells. But I thought it would be great if we could get a clear list of not only what code smells there are, but also how to correct them.

    Some rules. Now, this is going to be a little subjective, in that there are differences between languages, programming styles, etc. So let's lay down some ground rules:

    - ONE SMELL PER ANSWER PLEASE, WITH ADVICE ON HOW TO CORRECT IT! See this answer for a good display of what this thread should be.
    - DO NOT downmod if a smell doesn't apply to your language or development methodology. We are all different.
    - DO NOT just quickly smash in as many as you can think of. Think about the smells you want to list and get a good idea down on how to work around them.
    - DO downmod answers that just look rushed, for example "dupe code - remove dupe code". Let's make it useful (e.g. "Duplicate Code - refactor into separate methods or even classes; use these links for help on these common..." etc.).
    - DO upmod answers that you would add yourself. If you wish to expand, then answer with your thoughts linking to the original answer (if it's detailed), or comment if it's a minor point.
    - DO format your answers! Help others be able to read them; use code snippets, headings and markup to make key points stand out!

    Read the article

  • Are programming languages and methods inefficient? (assembler and C knowledge needed)

    - by b-gen-jack-o-neill
    Hi. For a long time I have been thinking about and studying the output of the C language compiler in assembler form, as well as CPU architecture. I know this may be silly to you, but it seems to me that something is very inefficient. Please don't be angry if I am wrong and there is a reason I do not see for all these principles; I will be very glad if you tell me why it is designed this way. I actually truly believe I am wrong - I know the genius minds of the people who put PCs together had a reason to do so. What exactly, you ask? I'll tell you right away, using C as an example.

    1: Stack local-scope memory allocation. Typical local memory allocation uses the stack: just copy esp to ebp and then allocate all the memory via ebp. OK, I would understand this if you explicitly needed to allocate RAM by default stack values, but if I understand it correctly, modern OSes use paging as a translation layer between the application and physical RAM, where the address you request is translated before reaching an actual RAM byte. So why not just say 0x00000000 is int a, 0x00000004 is int b, and so on, and access them just by mov 0x00000000, #10? You won't actually access memory blocks 0x00000000 and 0x00000004, but those the OS's paging tables map them to. Actually, since memory allocation via ebp and esp uses indirect addressing, "my" way would be even faster.

    2: Variable allocation duplicity. When you run an application, the loader loads its code into RAM. When you create a variable or a string, the compiler generates code that pushes these values onto the top of the stack when created in main. So there is an actual instruction for doing so, and that actual number in memory. So there are two entries of the same value in RAM: one in the form of an instruction, the second in the form of the actual bytes in RAM. But why? Why not just, when declaring a variable, compute at which memory block it would live, and then, when it is used, just insert this memory location?
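
    To make point 1 concrete, a sketch of what a typical compiler emits (roughly what unoptimized 32-bit GCC output looks like; exact listings vary by compiler and flags):

        int f(void)
        {
            int a = 10;  /* one 4-byte slot in f's stack frame */
            return a;
        }

    compiles, roughly, to:

        f:
            push ebp
            mov  ebp, esp
            sub  esp, 4                  ; reserve the local's slot
            mov  DWORD PTR [ebp-4], 10   ; a = 10
            mov  eax, DWORD PTR [ebp-4]  ; return value in eax
            leave
            ret

    One reason frames are ebp-relative rather than fixed: the same function can be live several times at once (recursion, threads), so its locals cannot sit at a single fixed address, paged or not.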

    Read the article

  • How to know why an animation stutters?

    - by Patrick Klug
    I have a few fairly simple animations (moving text around, moving ellipses, etc.), and running in full screen (1920x1080 minus the task bar), the WPF Performance Suite reports a good frame rate of around 50 FPS throughout the animation. Dirty rect addition is somewhere around 300 rect/s, the SW frames are between 0 and 4, and the HW frames are between 3 and 5. Video memory usage is around 80 MB. The problem is that the animation stutters every other half second. My machine is a new Dell XPS 15 laptop with a GeForce GT 435 with 2 GB memory - the drivers are up to date. (The same behavior occurs on my netbook (in full screen) as well, so I don't think it is hardware related.) If I make the window smaller, the stutter goes away. The stutter occurs with the simplest of animations - even with just a couple of elements, although adding more elements certainly makes it more noticeable. How can I find out what causes this stutter? Come to think of it, I have not actually seen any WPF animations which run smoothly in full screen. Is this even possible?

    Read the article

  • FileReference.save() duplicates ByteArray

    - by bartekb
    Hi, I've encountered a memory problem using FileReference.save(). My Flash application generates a lot of data in real time and needs to save this data to a local file. As I understand it, Flash 10 (as opposed to AIR) does not support streaming to a file. But what's even worse is that FileReference.save() duplicates all the data before saving it. I was looking for a workaround to this doubled memory usage and thought about the following approach: what if I pass a custom subclass of ByteArray as an argument to FileReference.save(), where this ByteArray subclass overrides all the read*() methods? The overridden read*() methods would wait for a piece of data to be generated by my application, return this piece of data, and immediately remove it from memory. I know how much data will be generated, so I could also override the length/bytesAvailable methods.

    Would it be possible? Could you give me some hint how to do it? I've created a subclass of ByteArray, registered an alias for it, and passed an instance of this subclass to FileReference.save(), but somehow FileReference.save() seems to treat it just as if it were a plain ByteArray instance and doesn't call any of my overridden methods... Thanks a lot for any help!

    Read the article

  • Any merit to a lazy-ish juxt function?

    - by NielsK
    In answering a question about a function that maps over multiple functions with the same arguments (A: juxt), I came up with a function that basically took the same form as juxt, but used map:

        (defn could-be-lazy-juxt [& funs]
          (fn [& args]
            (map #(apply %1 %2) funs (repeat args))))

        => ((juxt inc dec str) 1)
        [2 0 "1"]
        => ((could-be-lazy-juxt inc dec str) 1)
        (2 0 "1")
        => ((juxt * / -) 6 2)
        [12 3 4]
        => ((could-be-lazy-juxt * / -) 6 2)
        (12 3 4)

    As posted in the original question, I have little clue about the laziness or performance of it, but timing in the REPL does suggest something lazy-ish is going on:

        => (time (apply (juxt + -) (range 1 100)))
        "Elapsed time: 0.097198 msecs"
        [4950 -4948]
        => (time (apply (could-be-lazy-juxt + -) (range 1 100)))
        "Elapsed time: 0.074558 msecs"
        (4950 -4948)
        => (time (apply (juxt + -) (range 10000000)))
        "Elapsed time: 1019.317913 msecs"
        [49999995000000 -49999995000000]
        => (time (apply (could-be-lazy-juxt + -) (range 10000000)))
        "Elapsed time: 0.070332 msecs"
        (49999995000000 -49999995000000)

    I'm sure this function is not really that quick (printing the outcome "feels" about as long in both cases). Doing a (take x) on the function only limits the number of functions evaluated, which is probably limited in its applicability, and limiting the other parameters by take should be just as lazy in normal juxt. Is this juxt really lazy? Would a lazy juxt bring anything useful to the table, for instance as a compositing step between other lazy functions? What are the performance (mem / cpu / object count / compilation) implications? Is that why the Clojure juxt implementation is done with a reduce and returns a vector?

    Edit: Somehow things can always be done simpler in Clojure:

        (defn could-be-lazy-juxt [& funs]
          (fn [& args]
            (map #(apply % args) funs)))

    Read the article

  • How to fix my error saying "expected expression before 'else'"

    - by user292489
    This program is intended to read a .txt file (a set of numbers) and write to two other .txt files, called even and odd, as follows:

        #include <stdio.h>
        #include <stdlib.h>

        int main(int argc, char *argv[])
        {
            int i = 0, even, odd;
            int number[i];

            // check to make sure that all the file names are entered
            if (argc != 3) {
                printf("Usage: executable in_file output_file\n");
                exit(0);
            }

            FILE *dog   = fopen(argv[1], "r");
            FILE *feven = fopen(argv[2], "w");
            FILE *fodd  = fopen(argv[3], "w");

            // check whether the file has been opened successfully
            if (dog == NULL) {
                printf("File %s cannot open!\n", argv[1]);
                exit(0);
            }

            //odd = fopen(argv[2], "w");
            {
                if (i % 2 != 1)
                    i++;
            }
            fprintf(feven, "%d", even);
            fscanf(dog, "%d", &number[i]);
            else {
                i % 2 == 1;
                i++;
            }
            fprintf(fodd, "%d", odd);
            fscanf(dog, "%d", &number[i]);
            fclose(feven);
            fclose(fodd);
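
    The compiler error comes from the free-standing else: an else must immediately follow an if statement's block, but here two statements sit between them. A sketch of what the program seems to be aiming at (a guess at intent, not the original author's code; note the original also checks argc != 3 while using argv[3], so three file names need argc == 4):

        #include <stdio.h>
        #include <stdlib.h>

        int main(int argc, char *argv[])
        {
            int n;
            if (argc != 4) {
                printf("Usage: %s in_file even_file odd_file\n", argv[0]);
                return EXIT_FAILURE;
            }
            FILE *in    = fopen(argv[1], "r");
            FILE *feven = fopen(argv[2], "w");
            FILE *fodd  = fopen(argv[3], "w");
            if (in == NULL || feven == NULL || fodd == NULL) {
                printf("Cannot open files!\n");
                return EXIT_FAILURE;
            }
            while (fscanf(in, "%d", &n) == 1) {
                if (n % 2 == 0)
                    fprintf(feven, "%d\n", n);  // evens to one file
                else
                    fprintf(fodd, "%d\n", n);   // odds to the other
            }
            fclose(in);
            fclose(feven);
            fclose(fodd);
            return EXIT_SUCCESS;
        }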

    Read the article

  • Macro to improve callback registration readability

    - by Warren Seine
    I'm trying to write a macro to make a specific usage of callbacks in C++ easier. All my callbacks are member functions bound with this as the first argument, plus a second argument whose type inherits from a common base class. The usual way to go is:

        register_callback(boost::bind(&my_class::member_function, this, _1));

    I'd love to write:

        register_callback(HANDLER(member_function));

    Note that it will always be used within the same class. Even if typeof is considered bad practice, it sounds like a pretty solution to the lack of a __class__ macro to get the current class name. The following code works:

        typedef typeof(*this) CLASS;
        boost::bind(& CLASS :: member_function, this, _1)(my_argument);

    but I can't use this code in a macro which will be given as an argument to register_callback. I've tried:

        #define HANDLER(FUN) \
            boost::bind(& typeof(*this) :: member_function, this, _1);

    which doesn't work for reasons I don't understand. Quoting the GCC documentation: "A typeof-construct can be used anywhere a typedef name could be used." My compiler is GCC 4.4, and even though I'd prefer something standard, GCC-specific solutions are accepted.
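
    An editorial observation (a sketch to verify, not a tested build): the macro as quoted has two mechanical problems independent of typeof - it binds the literal name member_function instead of its FUN parameter, and the trailing semicolon turns the expansion into a statement, so it cannot appear as a function argument:

        // Use the parameter and drop the semicolon:
        #define HANDLER(FUN) \
            boost::bind(&typeof(*this)::FUN, this, _1)

        // usage, inside any member function of the registering class:
        // register_callback(HANDLER(member_function));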

    Read the article

  • Maven - 'all' or 'parent' project for aggregation?

    - by disown
    For educational purposes, I have set up a project layout like so (flat in order to suit Eclipse better):

        -product
         |
         |-parent
         |-core
         |-opt
         |-all

    Parent contains an aggregate project with core, opt and all. Core implements the mandatory part of the application. Opt is an optional part. All is supposed to combine core with opt, and has these two modules listed as dependencies. I am now trying to make the following artifacts:

        product-core.jar
        product-core-src.jar
        product-core-with-dependencies.jar
        product-opt.jar
        product-opt-src.jar
        product-opt-with-dependencies.jar
        product-all.jar
        product-all-src.jar
        product-all-with-dependencies.jar

    Most of them are fairly straightforward to produce. I do have some problems with the aggregating artifacts, though. I have managed to make the product-all-src.jar with a custom assembly descriptor in the 'all' module which downloads the sources for all non-transitive deps, and this works fine. This technique also lets me make the product-all-with-dependencies.jar. However, I recently found out that you can use the source:aggregate goal in the source plugin to aggregate the sources of the entire aggregate project. The same is true for the javadoc plugin, which also aggregates through the usage of the parent project. So I am torn between my 'all' module approach and ditching the 'all' module to just use the 'parent' module for all aggregation. It feels unclean to have some aggregate artifacts produced in 'parent' and others produced in 'all'. Is there a way of making a 'product-all' jar in the parent project, or to aggregate javadoc in the 'all' project? Or should I just keep both? Thanks.
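
    For illustration, a sketch of wiring source aggregation into the parent POM (the plugin coordinates are standard; verify the goal name against your maven-source-plugin version, since aggregate goals have shifted between releases):

        <!-- in parent/pom.xml -->
        <plugin>
          <groupId>org.apache.maven.plugins</groupId>
          <artifactId>maven-source-plugin</artifactId>
          <executions>
            <execution>
              <id>aggregate-sources</id>
              <goals>
                <goal>aggregate</goal>
              </goals>
            </execution>
          </executions>
        </plugin>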

    Read the article

  • Using PHP as template language

    - by Kunal
    I wrote up this quick class to do templating via PHP -- I was wondering if this is easily exploitable if I were ever to open up templating to users (not the immediate plan, but thinking down the road).

        class Template
        {
            private $allowed_methods = array('if', 'switch', 'foreach', 'for', 'while');

            private function secure_code($template_code)
            {
                $php_section_pattern = '/\<\?(.*?)\?\>/';
                $php_method_pattern = '/([a-zA-Z0-9_]+)[\s]*\(/';
                preg_match_all($php_section_pattern, $template_code, $matches);
                foreach (array_unique($matches[1]) as $index => $code_chunk) {
                    preg_match_all($php_method_pattern, $code_chunk, $sub_matches);
                    $code_allowed = true;
                    foreach ($sub_matches[1] as $method_name) {
                        if (!in_array($method_name, $this->allowed_methods)) {
                            $code_allowed = false;
                            break;
                        }
                    }
                    if (!$code_allowed) {
                        $template_code = str_replace($matches[0][$index], '', $template_code);
                    }
                }
                return $template_code;
            }

            public function render($template_code, $params)
            {
                extract($params);
                ob_start();
                eval('?>' . $this->secure_code($template_code) . '<?php ');
                $result = ob_get_contents();
                ob_end_clean();
                return $result;
            }
        }

    Example usage:

        $template_code = '<?= $title ?><? foreach ($photos as $photo): ?><img src="<?= $photo ?>"><? endforeach ?>';
        $params = array('title' => 'My Title', 'photos' => array('img1.jpg', 'img2.jpg'));
        $template = new Template;
        echo $template->render($template_code, $params);

    The idea here is that I'd store the templates (PHP code) in the database, and then run them through the class, which uses regular expressions to only allow permitted constructs (if, for, etc.). Anyone see an obvious way to exploit this and run arbitrary PHP? If so, I'll probably go the more standard route of a templating language such as Smarty...
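
    An editorial note on the whitelist, with a hypothetical bypass sketch (worth testing before relying on the filter): the method pattern only flags name( sequences, but PHP can run code without parentheses - language constructs and the backtick operator both slip through:

        // Neither chunk contains "name(" for $php_method_pattern to catch:
        $template_code = '<? include "/etc/passwd"; ?>';   // construct, no parens
        $template_code = '<? echo `cat /etc/passwd`; ?>';  // shell execution via backticks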

    Read the article

  • Android Signal analysis + some filters.

    - by Profete162
    Hello. As the World Cup is the main sport event and the Vuvuzelas are the most annoying sound in the world, I had an idea to remove them definitively after reading this news article ( http://www.popsci.com/diy/article/2010-06/simple-software-can-filter-out-vuvuzela-whine ), which tells us that the sound has a fundamental frequency of 233 Hz plus harmonics at 466, 932 and 1864 Hz. I have already made a lot of Android applications by myself, but I have never touched the signal analysis and filtering part, so here are a few questions. I do not ask for a precise answer, but maybe links and tutorials to find something to work on. I guess that a new Android phone has the CPU power to do real-time filtering.

    1) How can I intercept the sound coming from the jack microphone (line-in plug)? I plan to link my TV to my phone with a jack-to-jack cable. My question is purely about software and coding; I have all the wires and adapters needed to plug a jack into my Android phone's line-in.

    2) Are there some Fourier analysis libraries? Can I take Java libraries from the web and import them into my Android project?

    I really apologize because my question may seem imprecise, but I think this would be something great. Thank you for your answers.
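
    Worth noting: filtering out a fixed set of frequencies does not actually require Fourier analysis; a per-sample notch (band-stop) filter per frequency is enough. A sketch in Java (coefficients follow the widely used RBJ Audio EQ Cookbook notch formulas; the 44100 Hz sample rate is an assumption):

        // One biquad notch; chain four instances for 233/466/932/1864 Hz.
        class NotchFilter {
            private final double b0, b1, b2, a1, a2;
            private double x1, x2, y1, y2; // previous inputs/outputs

            NotchFilter(double centerHz, double sampleRate, double q) {
                double w0 = 2 * Math.PI * centerHz / sampleRate;
                double alpha = Math.sin(w0) / (2 * q);
                double a0 = 1 + alpha;
                b0 = 1 / a0;
                b1 = -2 * Math.cos(w0) / a0;
                b2 = 1 / a0;
                a1 = -2 * Math.cos(w0) / a0;
                a2 = (1 - alpha) / a0;
            }

            double process(double x) {
                double y = b0 * x + b1 * x1 + b2 * x2 - a1 * y1 - a2 * y2;
                x2 = x1; x1 = x;
                y2 = y1; y1 = y;
                return y;
            }
        }

    On the capture side, android.media.AudioRecord is the usual entry point for raw microphone/line-in samples.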

    Read the article

  • sqlite3 'database is locked' won't go away with retries

    - by Azarias
    I have a sqlite3 database that is accessed by a few threads (3-4). I am aware of the general limitations of sqlite3 with regard to concurrency, as stated at http://www.sqlite.org/faq.html#q6 , but I am convinced that is not the problem. All of the threads both read and write from this database. Whenever I do a write, I have the following construct:

        try:
            Cursor.execute(q, params)
            Connection.commit()
        except sqlite3.IntegrityError:
            Notify
        except sqlite3.OperationalError:
            print sys.exc_info()
            print("DATABASE LOCKED; sleeping for 3 seconds and trying again")
            time.sleep(3)
            Retry

    On some runs I won't even hit this block, but when I do, it never comes out of it (it keeps retrying, but I keep getting the 'database is locked' error from exc_info). If I understand the reader/writer lock usage correctly, some amount of waiting should help with the contention. What this sounds like is deadlock, but I do not use any transactions in my code, and every SELECT or INSERT is simply a one-off. Some threads, however, keep the same connection while they do their operations (which include a mix of SELECTs, INSERTs and other modifiers). I would appreciate it if you could shed some light on this, and also on ways of fixing it (besides using a different database engine).
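
    A sketch of one commonly suggested shape for this (an assumption about the cause, not a confirmed diagnosis): let sqlite3's built-in busy timeout handle contention, and give each thread its own connection so a lingering cursor or implicit transaction in one thread cannot pin the lock another thread is waiting on:

        import sqlite3
        import threading

        local = threading.local()

        def get_conn():
            # One connection per thread; sqlite3 connections are not
            # meant to be shared across threads anyway.
            if not hasattr(local, "conn"):
                local.conn = sqlite3.connect("app.db", timeout=30.0)
            return local.conn

        def write(q, params):
            conn = get_conn()
            try:
                conn.execute(q, params)
                conn.commit()
            except sqlite3.OperationalError:
                conn.rollback()  # release any half-open transaction holding the lock
                raise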

    Read the article

  • Error when linking C executable to OpenCV

    - by Ghilas BELHADJ
    I'm compiling OpenCV under Ubuntu 13.10 using CMake. I've already compiled C++ programs and they work well; now I'm trying to compile a C file using this CMakeLists.txt:

        cmake_minimum_required (VERSION 2.8)
        project (hello)
        find_package (OpenCV REQUIRED)
        add_executable (hello src/test.c)
        target_link_libraries (hello ${OpenCV_LIBS})

    Here is the test.c file:

        #include <stdio.h>
        #include <stdlib.h>
        #include <opencv/highgui.h>

        int main (int argc, char* argv[])
        {
            IplImage* img = NULL;
            const char* window_title = "Hello, OpenCV!";
            if (argc < 2) {
                fprintf (stderr, "usage: %s IMAGE\n", argv[0]);
                return EXIT_FAILURE;
            }
            img = cvLoadImage(argv[1], CV_LOAD_IMAGE_UNCHANGED);
            if (img == NULL) {
                fprintf (stderr, "couldn't open image file: %s\n", argv[1]);
                return EXIT_FAILURE;
            }
            cvNamedWindow (window_title, CV_WINDOW_AUTOSIZE);
            cvShowImage (window_title, img);
            cvWaitKey(0);
            cvDestroyAllWindows();
            cvReleaseImage(&img);
            return EXIT_SUCCESS;
        }

    It gives me this error when running cmake . and then make on the project:

        Linking C executable hello
        /usr/bin/ld: CMakeFiles/hello.dir/src/test.c.o: undefined reference to symbol «lrint@@GLIBC_2.1»
        /lib/i386-linux-gnu/libm.so.6: error adding symbols: DSO missing from command line
        collect2: error: ld returned 1 exit status
        make[2]: *** [hello] Erreur 1
        make[1]: *** [CMakeFiles/hello.dir/all] Erreur 2
        make: *** [all] Erreur 2
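
    As an editorial pointer (a sketch of the usual fix for this class of error, untested against this exact setup): the linker message says symbols from libm.so.6 (lrint) are needed but libm never made it onto the link line, so list m explicitly:

        target_link_libraries (hello ${OpenCV_LIBS} m)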

    Read the article

  • Keeping sync in multiplayer RTS game that uses floating point arithmetic

    - by Calmarius
    I'm writing a 2D space RTS game in C#. Single player works. Now I want to add some multiplayer functionality. I googled for it, and it seems there is only one way to have thousands of units continuously moving without a powerful net connection: send only the commands through the network while running the same simulation for every player. And now there is a problem: the entire engine uses doubles everywhere, and floating-point calculations depend heavily on compiler optimizations and CPU architecture, so it is very hard to keep things synchronized. The game is not grid-based at all, and it has a simple physics engine to move the space ships (the ships have impulse and angular momentum...), so recoding the entire thing to use fixed point would be quite cumbersome (but probably the only solution). So I have two options so far:

    - Say goodbye to the current code and restart from scratch using integers.
    - Make the game LAN-only, where there is enough bandwidth to have 8 players with thousands of units, sending the positions, orientation, etc. in (almost) every frame...

    So I am looking for better ideas (or even tips on migrating the code to fixed point without messing everything up...).
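
    A minimal fixed-point sketch (16.16 format) as one possible starting point for such a migration - an illustration, not the poster's engine code:

        // Deterministic across machines: only integer arithmetic inside.
        struct Fixed
        {
            public readonly int Raw;  // stored as value * 65536
            private Fixed(int raw) { Raw = raw; }

            public static Fixed FromInt(int v) { return new Fixed(v << 16); }

            public static Fixed operator +(Fixed a, Fixed b) { return new Fixed(a.Raw + b.Raw); }
            public static Fixed operator -(Fixed a, Fixed b) { return new Fixed(a.Raw - b.Raw); }
            public static Fixed operator *(Fixed a, Fixed b)
            {
                // widen to 64 bits so the intermediate product cannot overflow
                return new Fixed((int)(((long)a.Raw * b.Raw) >> 16));
            }
            public static Fixed operator /(Fixed a, Fixed b)
            {
                return new Fixed((int)(((long)a.Raw << 16) / b.Raw));
            }

            public override string ToString() { return (Raw / 65536.0).ToString(); }
        }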

    Read the article

  • When is memory actually freed?

    - by zhyk
    Hello all. I'm trying to understand memory management in Objective-C. If I look at the memory usage listed by Activity Monitor, it looks like memory is not being freed (I mean the rsize column), but in "Object Allocations" everything looks fine. Here is my simple code:

        #import <Foundation/Foundation.h>

        int main (int argc, const char * argv[]) {
            NSAutoreleasePool * pool = [[NSAutoreleasePool alloc] init];
            NSInteger i, k = 10000;
            while (k > 0) {
                NSMutableArray *array = [[NSMutableArray alloc] init];
                for (i = 0; i < 1000 * k; i++) {
                    NSString *srtring = [[NSString alloc] initWithString:@"string...."];
                    [array addObject:srtring];
                    [srtring release];
                    srtring = nil;
                }
                [array release];
                array = nil;
                k -= 500;
            }
            [NSThread sleepForTimeInterval:5];
            [pool release];
            return 0;
        }

    As far as retain and release go, it's all balanced. But rsize decreases only after quitting this little program. Is it possible to "clean" memory somehow before quitting?

    Read the article

  • Trying to avoid needing two separate solutions for an x86 and x64 program

    - by Sean Anderson
    Hi all, I have a program which needs to function in both x86 and x64 environments. It uses Oracle's ODBC drivers, and I have a reference to Oracle.DataAccess.dll. This DLL is different depending on whether the system is x64 or x86, though. Currently, I have two separate solutions and I am maintaining the code in both. This is atrocious. I was wondering what the proper solution is? I have my platform set to "Any CPU", and it is my understanding that VS should compile to an intermediate language such that it should not matter whether I use the x86 or x64 version. Yet, if I attempt to use the x64 DLL, I receive the error "Could not load file or assembly 'Oracle.DataAccess, Version=2.102.3.2, Culture=neutral, PublicKeyToken=89b483f429c47342' or one of its dependencies. An attempt was made to load a program with an incorrect format." I am running on a 32-bit machine, so the error message makes sense, but it leaves me wondering how I am supposed to efficiently develop this program when it needs to work on x64. Thanks.
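
    One common single-solution arrangement (a sketch with hypothetical paths; conditional MSBuild references are a standard mechanism) is to keep both copies of the DLL and let the build pick one per platform:

        <!-- in the .csproj -->
        <Reference Include="Oracle.DataAccess" Condition="'$(Platform)' == 'x86'">
          <HintPath>libs\x86\Oracle.DataAccess.dll</HintPath>
        </Reference>
        <Reference Include="Oracle.DataAccess" Condition="'$(Platform)' == 'x64'">
          <HintPath>libs\x64\Oracle.DataAccess.dll</HintPath>
        </Reference>

    The "Any CPU" IL argument holds for pure managed assemblies, but Oracle.DataAccess depends on native Oracle client code, which is why a bitness-specific copy is needed at all.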

    Read the article

  • Pump Messages During Long Operations + C#

    - by Newbie
    Hi, I have a web service call that performs a huge computation and takes more than a minute. I generated the proxy for the web service, and from my client end I am using the resulting proxy DLL. My client-side code is:

        TimeSeries3D t = new TimeSeries3D();
        int portfolioId = 4387919;
        string[] str = new string[2];
        str[0] = "MKT_CAP";
        DateRange dr = new DateRange();
        dr.mStartDate = DateTime.Today;
        dr.mEndDate = DateTime.Today;
        Service1 sc = new Service1();
        t = sc.GetAttributesForPortfolio(portfolioId, true, str, dr);

    But since the server takes too long to compute, after 1 minute I receive this error message:

        The CLR has been unable to transition from COM context 0x33caf30 to COM context 0x33cb0a0 for 60 seconds. The thread that owns the destination context/apartment is most likely either doing a non pumping wait or processing a very long running operation without pumping Windows messages. This situation generally has a negative performance impact and may even lead to the application becoming non responsive or memory usage accumulating continually over time. To avoid this problem, all single threaded apartment (STA) threads should use pumping wait primitives (such as CoWaitForMultipleHandles) and routinely pump messages during long running operations.

    Kindly guide me what to do? Thanks.
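
    A sketch of the usual remedy (names taken from the excerpt; the threading wrapper itself is an assumption): move the blocking call off the STA/UI thread so that thread keeps pumping messages while the server computes:

        var worker = new System.Threading.Thread(() =>
        {
            Service1 sc = new Service1();
            TimeSeries3D t = sc.GetAttributesForPortfolio(portfolioId, true, str, dr);
            // marshal t back to the UI thread here (e.g. Control.Invoke)
        });
        worker.Start();

    Note that increasing the binding timeout alone does not address the warning, which is about the calling thread blocking without pumping, not about the server being slow.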

    Read the article

  • Tool to detect use/abuse of String.Concat (where StringBuilder should be used)

    - by Mark Rushakoff
    It's common knowledge that you shouldn't use a StringBuilder in place of a small number of concatenations:

        string s = "Hello";
        if (greetingWorld) {
            s += " World";
        }
        s += "!";

    However, in loops of a significant size, StringBuilder is the obvious choice:

        string s = "";
        foreach (var i in Enumerable.Range(1, 5000)) {
            s += i.ToString();
        }
        Console.WriteLine(s);

    Is there a tool that I can run on either raw C# source or a compiled assembly to identify where in the source code String.Concat is being called? (If you're not familiar, s += "foo" is mapped to String.Concat in the IL output.) Obviously, I can't realistically search through an entire project and evaluate every += to identify whether the lvalue is a string. Ideally, it would only point out calls inside a for/foreach loop, but I would even put up with all the false positives of noting every String.Concat. Also, I'm aware that there are some refactoring tools that will automatically refactor my code to use StringBuilder, but I am only interested in identifying the Concat usage at this point. I routinely run Gendarme and FxCop on my code, and neither of those tools identifies what I've described.
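
    For the compiled-assembly route, a sketch of a scanner using Mono.Cecil (the library Gendarme itself is built on; the API names match Cecil's public surface, but treat the details as something to verify):

        using Mono.Cecil;
        using Mono.Cecil.Cil;

        var asm = AssemblyDefinition.ReadAssembly("MyApp.exe");
        foreach (var type in asm.MainModule.Types)
        {
            foreach (var method in type.Methods)
            {
                if (!method.HasBody) continue;
                foreach (var ins in method.Body.Instructions)
                {
                    // s += "foo" compiles to a static call to System.String::Concat
                    var mr = ins.Operand as MethodReference;
                    if (ins.OpCode == OpCodes.Call && mr != null
                        && mr.DeclaringType.FullName == "System.String"
                        && mr.Name == "Concat")
                    {
                        System.Console.WriteLine("{0}: IL_{1:x4}", method.FullName, ins.Offset);
                    }
                }
            }
        }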

    Read the article

  • Endianness and C APIs: specifically OpenSSL

    - by Hassan Syed
    I have an algorithm that uses the following OpenSSL calls:

        HMAC_Update() / HMAC_Final()           // ripemd160
        EVP_CipherUpdate() / EVP_CipherFinal() // cbc_blowfish

    These functions take an unsigned char * pointing to the "plain text". My input data comes from a C++ std::string::c_str(), which originates from a protocol buffer object as an encoded UTF-8 string. UTF-8 strings are meant to be endian-neutral. However, I'm a bit paranoid about how OpenSSL may perform operations on the data.

    My understanding is that encryption algorithms work on 8-bit blocks of data, and if an unsigned char * is used for pointer arithmetic when the operations are performed, the algorithms should be endian-neutral and I do not need to worry about anything. My uncertainty is compounded by the fact that I am working on a little-endian machine and have never done any real cross-architecture programming.

    My beliefs/reasoning are based on the following two properties:

    - std::string (not wstring) internally uses an 8-bit pointer, and the resulting c_str() pointer will iterate the same way regardless of the CPU architecture.
    - Encryption algorithms are, either by design or by implementation, endian-neutral.

    I know the best way to get a definitive answer is to use QEMU and do some cross-platform unit tests (which I plan to do). My question is a request for comments on my reasoning, and perhaps it will assist other programmers when faced with similar problems.
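
    For concreteness, a sketch of the byte-oriented usage in question (using the one-shot HMAC() wrapper rather than the update/final pair, with an assumed key; every argument is a plain byte buffer, which is why host endianness never enters into it):

        #include <openssl/hmac.h>
        #include <openssl/evp.h>
        #include <string>

        std::string data = "caf\xc3\xa9";  // UTF-8 bytes; identical on any CPU
        unsigned char md[EVP_MAX_MD_SIZE];
        unsigned int md_len = 0;

        HMAC(EVP_ripemd160(),
             "key", 3,                                              // key bytes
             reinterpret_cast<const unsigned char*>(data.c_str()),  // data bytes
             data.size(),
             md, &md_len);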

    Read the article

  • Socket Read In Multi-Threaded Application Returns Zero Bytes or EINTR (-1)

    - by user309670
    Hi. I've been a C coder for a while now - neither a newbie nor an expert. I have a certain daemonized application in C on PPC Linux. I use PHP's socket_connect as a client to connect to this service locally. The server uses epoll for concurrent connections via a Unix socket. A user-submitted string is parsed for certain characters/words using strstr(), and if they are found, the server spawns 4 joinable threads to different websites simultaneously. I use socket, connect, write and read to interact with the said web servers via TCP on port 80 in each thread. All connections and writes seem successful. Reads from the web server sockets fail, however, with either:

    (A) all 3 other threads seeming to hang, and only one thread returning -1 with errno set to 104. The responding thread takes like 10 minutes - an eternity:-(. I read somewhere that 104 (ECONNRESET) suggests that "the connection was reset by peer"; or

    (B) 0 bytes from 3 threads, and only 1 of the 4 threads actually returning some data.

    Aren't socket read/write calls thread-safe? I otherwise use thread-safe (and reentrant) libc functions such as strtok_r, gethostbyname_r, etc. I doubt that the said web hosts are actually resetting the connection, because when I run a single-threaded standalone version (everything else equal), all things work perfectly.

    There's a second problem too (oops): I can't write back to the clients who connect to my epoll-ed Unix socket. My daemon application will hang and hog the CPU at 100% forever, yet nothing is written to the client's end. I am sure the client (a very typical PHP socket application) hasn't closed the connection whenever this happens - no error(s) detected either. I cannot figure out what is wrong, even with Valgrind or GDB.
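
    One frequently relevant detail for multi-threaded lookups (a sketch, not a diagnosis of this exact daemon): getaddrinfo() is specified as thread-safe, so per-thread resolution plus per-thread sockets removes one classic source of cross-thread corruption:

        #include <sys/types.h>
        #include <sys/socket.h>
        #include <netdb.h>

        struct addrinfo hints = {0}, *res;
        hints.ai_family = AF_INET;
        hints.ai_socktype = SOCK_STREAM;

        if (getaddrinfo("example.com", "80", &hints, &res) == 0) {
            int fd = socket(res->ai_family, res->ai_socktype, res->ai_protocol);
            if (fd >= 0 && connect(fd, res->ai_addr, res->ai_addrlen) == 0) {
                /* write the HTTP request, then read the response on fd */
            }
            freeaddrinfo(res);
        }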

    Read the article
