Search Results

Search found 1226 results on 50 pages for 'improvement'.

Page 37/50 | < Previous Page | 33 34 35 36 37 38 39 40 41 42 43 44  | Next Page >

  • Windows Azure WebRole stuck in a deployment loop

    - by Rob G
    I've been struggling with this one for a couple of days now. My current Windows Azure WebRole is stuck in a loop where the status keeps changing between Initializing, Busy, Stopping and Stopped. It never goes live, and as a result I can never see the website.

    The WebRole is an "out of the box" MVC 2 application with Copy Local set to true on the Mvc dll. I haven't even tried hooking up storage or a WorkerRole yet, and there is nothing really happening inside the Start method that I can see would crash. I've really tried going back to basics to ensure nothing complicates the process, and the website launches without a problem on the Dev Fabric - yes, it looks just like the standard "Home"/"About" MVC app - I just can't get it running in the cloud!

    Funny thing is, a few days ago this exact package worked on the staging area in the cloud, and I could even see it in the browser - but I could never get it swapped over to production, so I deleted everything and started from scratch, and now I can't even get it running on staging...

    Does anyone have any ideas on what I could do to diagnose this problem myself? Since logging it on the forums 2 days ago, there has been no improvement or feedback. Any help appreciated.

    Regards, Rob G
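    One hedged first step for a role stuck in a recycle loop (not from the original post) is to wrap OnStart in a try/catch and push any exception somewhere readable before the role recycles, so the actual failure becomes visible. A minimal sketch, assuming the Azure SDK 1.x diagnostics API of that era and a "DiagnosticsConnectionString" setting defined in the service configuration:

        using System;
        using System.Diagnostics;
        using Microsoft.WindowsAzure.Diagnostics;
        using Microsoft.WindowsAzure.ServiceRuntime;

        public class WebRole : RoleEntryPoint
        {
            public override bool OnStart()
            {
                try
                {
                    // Start the diagnostics agent so trace output lands in Azure storage.
                    DiagnosticMonitor.Start("DiagnosticsConnectionString");
                    return base.OnStart();
                }
                catch (Exception ex)
                {
                    // Any unhandled exception here recycles the role; record it first.
                    Trace.TraceError(ex.ToString());
                    throw;
                }
            }
        }

    With that in place, the transferred logs (or the local Dev Fabric output) usually show whether the loop is caused by a missing assembly, a bad configuration setting, or something else entirely.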

    Read the article

  • Optimal Serialization of Primitive Types

    - by Greg Dean
    We are beginning to roll out more and more WAN deployments of our product (.Net fat client w/ IIS hosted Remoting backend). Because of this we are trying to reduce the size of the data on the wire. We have overridden the default serialization by implementing ISerializable (similar to this), and we are seeing anywhere from 12% to 50% gains. Most of our efforts focus on optimizing arrays of primitive types.

    I would like to know if anyone knows of any fancy way of serializing primitive types, beyond the obvious. For example, today we serialize an array of ints as follows: [4-bytes (array length)][4-bytes][4-bytes]. Can anyone do significantly better? The most obvious example of a significant improvement, for boolean arrays, is putting 8 bools in each byte, which we already do.

    Note: Saving 7 bits per bool may seem like a waste of time, but when you are dealing with large volumes of data (which we are), it adds up very fast.

    Note: We want to avoid general compression algorithms because of the latency associated with them. Remoting only supports buffered requests/responses (no chunked encoding). I realize there is a fine line between compression and optimal serialization, but our tests indicate we can afford very specific serialization optimizations at very little cost in latency, whereas reprocessing the entire buffered response into a new compressed buffer is too expensive.
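    One technique beyond the fixed 4-byte layout (a hedged illustration, not from the original post) is variable-length integer encoding: each byte carries 7 payload bits plus a continuation bit, so values under 128 cost one byte instead of four. It only pays off when the data skews small, and the decode cost has to fit the latency budget described above. A minimal sketch:

        using System.IO;

        static class VarInt
        {
            // Writes an int as 1-5 bytes: 7 payload bits per byte, high bit = "more bytes follow".
            public static void Write(BinaryWriter writer, int value)
            {
                uint v = (uint)value;
                while (v >= 0x80)
                {
                    writer.Write((byte)(v | 0x80));
                    v >>= 7;
                }
                writer.Write((byte)v);
            }

            public static int Read(BinaryReader reader)
            {
                int result = 0, shift = 0;
                byte b;
                do
                {
                    b = reader.ReadByte();
                    result |= (b & 0x7F) << shift;
                    shift += 7;
                } while ((b & 0x80) != 0);
                return result;
            }
        }

    For arrays whose values cluster around a baseline, delta-encoding each element against the previous one before the variable-length step often shrinks things further, again at a small CPU cost per element.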

    Read the article

  • Strange C++ performance difference?

    - by STingRaySC
    I just stumbled upon a change that seems to have counterintuitive performance ramifications. Can anyone provide a possible explanation for this behavior?

    Original code:

        for (int i = 0; i < ct; ++i)
        {
            // do some stuff...
            int iFreq = getFreq(i);
            double dFreq = iFreq;
            if (iFreq != 0)
            {
                // do some stuff with iFreq...
                // do some calculations with dFreq...
            }
        }

    While cleaning up this code during a "performance pass," I decided to move the definition of dFreq inside the if block, as it was only used inside the if. There are several calculations involving dFreq, so I didn't eliminate it entirely as it does save the cost of multiple run-time conversions from int to double. I expected no performance difference, or if any at all, a negligible improvement. However, the performance decreased by nearly 10%. I have measured this many times, and this is indeed the only change I've made. The code snippet shown above executes inside a couple of other loops. I get very consistent timings across runs and can definitely confirm that the change I'm describing decreases performance by ~10%. I would expect performance to increase because the int to double conversion would only occur when iFreq != 0.

    Changed code:

        for (int i = 0; i < ct; ++i)
        {
            // do some stuff...
            int iFreq = getFreq(i);
            if (iFreq != 0)
            {
                // do some stuff with iFreq...
                double dFreq = iFreq;
                // do some stuff with dFreq...
            }
        }

    Can anyone explain this? I am using VC++ 9.0 with /O2. I just want to understand what I'm not accounting for here.

    Read the article

  • Development process for an embedded project with significant hardware change

    - by pierr
    Hi, I have a good idea of the Agile development process, but it seems it does not fit well with an embedded project that involves significant hardware change. I will describe below what we are currently doing (an ad-hoc way, with no defined process yet). The changes are divided into three categories, and a different process is used for each:

    Complete hardware change - example: use a different video codec IP
    a) Study the new IP
    b) RTL/FPGA simulation
    c) Implement the legacy interface - go to b)
    d) Wait until hardware (tape out) is ready
    e) Test on the real hardware

    Hardware improvement - example: enhance the image display quality by improving the underlying algorithm
    a) RTL/FPGA simulation
    b) Wait until hardware and test on the hardware

    Minor change - example: only change the hardware register mapping
    a) Wait until hardware and test on the hardware

    The worry is that we don't have much control over, or confidence in, software maturity for the hardware change, as the bring-up schedule is always very tight and the customer expects a seamless change when updating to a new hardware version. How did you manage this kind of hardware change? Did you solve it with a Hardware Abstraction Layer (HAL)? Did you have an automated test for the HAL layer? How did you test when the hardware platform is not even ready? Do you have a well-documented process for this kind of change? Thanks for your insight.

    Read the article

  • Problem: Sorting for GridView/ObjectDataSource changes depending on page

    - by user148298
    I have a GridView tied to an ObjectDataSource using paging. The paging works fine, except that the sort order changes depending on which page of the results is being viewed. This causes items to reappear on subsequent pages, among other issues. I traced the problem to my DAL, which reads a page at a time and then sorts it. Obviously the sorting is going to change as the result set size changes. Is there an improvement to this algorithm? I would like to use a DataReader if possible:

        [System.ComponentModel.DataObjectMethod(System.ComponentModel.DataObjectMethodType.Select)]
        public static WordsCollection LoadForCriteria(string sqlCriteria, int maximumRows, int startRowIndex, string sortExpression)
        {
            //DEFAULT SORT EXPRESSION
            if (string.IsNullOrEmpty(sortExpression)) sortExpression = "OrderBy";
            //CREATE THE DYNAMIC SQL TO LOAD OBJECT
            StringBuilder selectQuery = new StringBuilder();
            selectQuery.Append("SELECT");
            if (maximumRows > 0) selectQuery.Append(" TOP " + (startRowIndex + maximumRows).ToString());
            selectQuery.Append(" " + Words.GetColumnNames(string.Empty));
            selectQuery.Append(" FROM sw_Words");
            string whereClause = string.IsNullOrEmpty(sqlCriteria) ? string.Empty : " WHERE " + sqlCriteria;
            selectQuery.Append(whereClause);
            selectQuery.Append(" ORDER BY " + sortExpression);
            Database database = Token.Instance.Database;
            DbCommand selectCommand = database.GetSqlStringCommand(selectQuery.ToString());
            //EXECUTE THE COMMAND
            WordsCollection results = new WordsCollection();
            int thisIndex = 0;
            int rowCount = 0;
            using (IDataReader dr = database.ExecuteReader(selectCommand))
            {
                while (dr.Read() && ((maximumRows < 1) || (rowCount < maximumRows)))
                {
                    if (thisIndex >= startRowIndex)
                    {
                        Words varWords = new Words();
                        Words.LoadDataReader(varWords, dr);
                        results.Add(varWords);
                        rowCount++;
                    }
                    thisIndex++;
                }
                dr.Close();
            }
            return results;
        }
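    One common fix (a hedged suggestion, not from the original post) is to push both the sort and the page cut into the query itself, for example with ROW_NUMBER() on SQL Server 2005 or later, so every page is sliced from the same fully sorted result and the reader no longer has to skip rows. A rough sketch of how the dynamic SQL above might be restructured, reusing the question's Words/Token helpers (which are assumptions here, not standard .NET types):

        // Hedged sketch: assumes SQL Server 2005+; like the original, it still
        // interpolates sqlCriteria/sortExpression, so treat those as trusted input.
        StringBuilder selectQuery = new StringBuilder();
        selectQuery.Append("SELECT * FROM (");
        selectQuery.Append(" SELECT " + Words.GetColumnNames(string.Empty) + ",");
        selectQuery.Append(" ROW_NUMBER() OVER (ORDER BY " + sortExpression + ") AS RowNum");
        selectQuery.Append(" FROM sw_Words");
        if (!string.IsNullOrEmpty(sqlCriteria))
            selectQuery.Append(" WHERE " + sqlCriteria);
        selectQuery.Append(") AS Paged");
        selectQuery.Append(" WHERE Paged.RowNum > " + startRowIndex);
        if (maximumRows > 0)
            selectQuery.Append(" AND Paged.RowNum <= " + (startRowIndex + maximumRows));
        selectQuery.Append(" ORDER BY Paged.RowNum");

    The reader loop then simply materializes every row it gets back (the extra RowNum column can be ignored or dropped in an outer projection), and the same ordering is guaranteed no matter which page is requested.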

    Read the article

  • SQL Server - Multi-Column substring matching

    - by hamlin11
    One of my clients is hooked on multi-column substring matching. I understand that Contains and FreeText search for words (and at least in the case of Contains, word prefixes). However, based upon my understanding of this MSDN book, neither of these nor their variants is capable of searching substrings. I have used LIKE rather extensively (Select * from A where A.B Like '%substr%').

    Sample table A:

        ID | Col1     | Col2     | Col3     |
        -------------------------------------
        1  | oklahoma | colorado | Utah     |
        2  | arkansas | colorado | oklahoma |
        3  | florida  | michigan | florida  |
        -------------------------------------

    The following code will give us row 1 and row 2:

        select * from A
        where Col1 like '%klah%'
           or Col2 like '%klah%'
           or Col3 like '%klah%'

    This is rather ugly, probably slow, and I just don't like it very much - probably because the implementations I'm dealing with have 10+ columns that need to be searched. The following may be a slight improvement as far as code readability goes, but performance-wise we're still in the same ball park:

        select * from A
        where (Col1 + ' ' + Col2 + ' ' + Col3) like '%klah%'

    I have thought about simply adding insert, update, and delete triggers that add the concatenated version of the above columns to a separate table that shadows this table.

    Sample Shadow_Table:

        ID | searchtext                  |
        ---------------------------------
        1  | oklahoma colorado Utah      |
        2  | arkansas colorado oklahoma  |
        3  | florida michigan florida    |
        ---------------------------------

    This would allow us to perform the following query to search for '%klah%':

        select * from Shadow_Table where searchtext like '%klah%'

    I really don't like having to remember that this shadow table exists and that I'm supposed to use it when performing multi-column substring matching, but it probably yields pretty quick reads at the expense of writes and storage space. My gut feeling tells me that there is an existing solution built into SQL Server 2008. However, I don't seem to be able to find anything other than research papers on the subject. Any help would be appreciated.

    Read the article

  • My OpenCL kernel is slower on faster hardware... But why?

    - by matdumsa
    Hi folks, as I was finishing the code for my multicore programming class project, I came upon something really weird that I wanted to discuss with you. We were asked to create any program that would show significant improvement from being programmed for a multi-core platform. I decided to try and code something on the GPU to try out OpenCL. I chose the matrix convolution problem since I'm quite familiar with it (I've parallelized it before with open_mpi with great speedup for large images).

    So here it is: I select a large GIF file (2.5 MB) [2816x2112] and run the sequential version (original code), and I get an average of 15.3 seconds. I then run the new OpenCL version I just wrote on my MBP's integrated GeForce 9400M and I get timings of 1.26s on average. So far so good - it's a speedup of 12x!!

    But now I go into my energy saver panel to turn on "Graphic Performance Mode". That mode turns off the GeForce 9400M and turns on the GeForce 9600M GT my system has. Apple says this card is twice as fast as the integrated one. Guess what - my timings using the kick-ass graphics card are 3.2 seconds on average... My 9600M GT seems to be more than two times slower than the 9400M.

    For those of you who are OpenCL inclined: I copy all data to remote buffers before starting, so the actual computation doesn't require a round trip to main RAM. Also, I let OpenCL determine the optimal local work-size, as I've read they've done a pretty good implementation at figuring that parameter out. Anyone has a clue?

    edit: full source code with makefiles here http://www.mathieusavard.info/convolution.zip

        cd gimage
        make
        cd ../clconvolute
        make

    Put a large input.gif in clconvolute and run it to see the results.

    Read the article

  • Download-from-PyPI-and-install script

    - by zubin71
    Hello, I have written a script which fetches a distribution, given the URL. After downloading the distribution, it compares the md5 hashes to verify that the file has been downloaded properly. This is how I do it:

        def download(package_name, url):
            import os
            import urllib2
            downloader = urllib2.urlopen(url)
            package = downloader.read()
            package_file_path = os.path.join('/tmp', package_name)
            package_file = open(package_file_path, "w")
            package_file.write(package)
            package_file.close()

    I wonder if there is any better (more pythonic) way to do what I have done using the above code snippet. Also, once the package is downloaded, this is what is done:

        def install_package(package_name):
            if package_name.endswith('.tar'):
                import tarfile
                archive = tarfile.open('/tmp/' + package_name)
                archive.extractall('/tmp')
            import shlex
            import subprocess
            installation_cmd = 'python %ssetup.py install' % ('/tmp/' + package_name)
            subprocess.Popen(shlex.split(installation_cmd))

    As there are a number of imports in the install_package method, I wonder if there is a better way to do this. I'd love to have some constructive criticism and suggestions for improvement. Also, I have only implemented the install_package method for .tar files; would there be a better manner by which I could install .tar.gz and .zip files too, without having to write separate methods for each of these?

    Read the article

  • Fast JSON serialization (and comparison with Pickle) for cluster computing in Python?

    - by user248237
    I have a set of data points, each described by a dictionary. The processing of each data point is independent and I submit each one as a separate job to a cluster. Each data point has a unique name, and my cluster submission wrapper simply calls a script that takes a data point's name and a file describing all the data points. That script then accesses the data point from the file and performs the computation.

    Since each job has to load the set of all points only to retrieve the point to be run, I wanted to optimize this step by serializing the file describing the set of points into an easily retrievable format. I tried using JSONpickle, using the following method, to serialize a dictionary describing all the data points to file:

        def json_serialize(obj, filename, use_jsonpickle=True):
            f = open(filename, 'w')
            if use_jsonpickle:
                import jsonpickle
                json_obj = jsonpickle.encode(obj)
                f.write(json_obj)
            else:
                simplejson.dump(obj, f, indent=1)
            f.close()

    The dictionary contains very simple objects (lists, strings, floats, etc.) and has a total of 54,000 keys. The json file is ~20 Megabytes in size. It takes ~20 seconds to load this file into memory, which seems very slow to me. I switched to using pickle with the same exact object, and found that it generates a file that's about 7.8 megabytes in size, and can be loaded in ~1-2 seconds. This is a significant improvement, but it still seems like loading of a small object (less than 100,000 entries) should be faster. Aside from that, pickle is not human readable, which was the big advantage of JSON for me.

    Is there a way to use JSON to get similar or better speed ups? If not, do you have other ideas on structuring this? (Is the right solution to simply "slice" the file describing each event into a separate file and pass that on to the script that runs a data point in a cluster job? It seems like that could lead to a proliferation of files). Thanks.

    Read the article

  • Replacing .NET WebBrowser control with a better browser, like Chrome?

    - by Sylverdrag
    Is there any relatively easy way to insert a modern browser into a .NET application? As far as I understand, the WebBrowser control is a wrapper for IE, which wouldn't be a problem except that it looks like it is a very old version of IE, with all that entails in terms of CSS screw-ups, potential security risks (if the rendering engine wasn't patched, can I really expect the zillion buffer overflow problems to be fixed?), and other issues.

    I am using Visual Studio C# (Express edition - does it make any difference here?). I would like to integrate a good web browser in my applications. In some, I just use it to handle the user registration process, interface with some of my website's features and other things of that order, but I have another application in mind that will require more, err... control.

    I need:

    - A browser that can integrate inside a window of my application (not a separate window)
    - Good support for CSS, JS and other web technologies, on par with any modern browser
    - Basic browser functions like "navigate", "back", "reload"...
    - Liberal access to the page code and output

    I was thinking about Chrome, since it comes under the BSD license, but I would be just as happy with a recent version of IE. As much as possible, I would like to keep things simple. The best would be if one could patch the existing WebBrowser control, which already does about 70% of what I need, but I don't think that's possible. I have found an ActiveX control for Mozilla (http://www.iol.ie/~locka/mozilla/control.htm) but it looks like it's an old version, so it's not necessarily an improvement. I am open to suggestions.

    Read the article

  • Code golf: combining multiple sorted lists into a single sorted list

    - by Alabaster Codify
    Implement an algorithm to merge an arbitrary number of sorted lists into one sorted list. The aim is to create the smallest working programme, in whatever language you like. For example:

        input:  ((1, 4, 7), (2, 5, 8), (3, 6, 9))
        output: (1, 2, 3, 4, 5, 6, 7, 8, 9)

        input:  ((1, 10), (), (2, 5, 6, 7))
        output: (1, 2, 5, 6, 7, 10)

    Note: solutions which concatenate the input lists and then use a language-provided sort function are not in keeping with the spirit of golf, and will not be accepted:

        sorted(sum(lists,[])) # cheating: out of bounds!

    Apart from anything else, your algorithm should be (but doesn't have to be) a lot faster! Clearly state the language, any foibles and the character count. Only include meaningful characters in the count, but feel free to add whitespace to the code for artistic / readability purposes.

    To keep things tidy, suggest improvements in comments or by editing answers where appropriate, rather than creating a new answer for each "revision".

    EDIT: if I was submitting this question again, I would expand the "no language-provided sort" rule to "don't concatenate all the lists then sort the result". Existing entries which do concatenate-then-sort are actually very interesting and compact, so I won't retroactively introduce a rule they break, but feel free to work to the more restrictive spec in new submissions.

    Inspired by http://stackoverflow.com/questions/464342/combining-two-sorted-lists-in-python

    Read the article

  • Home link on the menu does not highlight

    - by strangeloops
    My menu shows the active link when it is clicked, except for the home link (http://www.obsia.com), which is never highlighted. I tried playing around but I can't seem to figure it out. This is the jQuery code I used to highlight the links:

        $(function(){
            var path = location.pathname.substring(1);
            if ( path )
                $('.nav a[href$="' + path + '"]').attr('class', 'active');
        });

    I also have another menu on the products pages, where I would like to highlight the parents of the siblings as well as the Our Products item on the global menu. This is the jQuery code for the products menu:

        $(function() {
            var pathname = location.pathname;
            var highlight;
            //highlight home
            if(pathname == "")
                highlight = $('ul#accordion > li:first > a:first');
            else {
                var path = pathname.substring(1);
                if (path)
                    highlight = $('ul#accordion a[href$="' + path + '"]');
            }
            highlight.attr('class', 'active');

            // hide 2nd, 3rd, ... level menus
            $('ul#accordion ul').hide();

            // show child menu on click
            $('ul#accordion > li > a.product_menu').click(function() {
                //minor improvement
                $(this).siblings('ul').toggle("slow");
                return false;
            });

            //open to current group (highlighted link) by showing all parent ul's
            $('a.active').parents('ul').show();
            $('a.active').parents('h2 a').css({'color':'#ff8833'});

            //if you only have a 2 level deep navigation you could use this instead
            //$('a.selected').parents("ul").eq(0).show();
        });

    I tried adding this:

        $(this).parents('ul').addClass('active');

    but that does not seem to do the trick. Does anybody have a simple way of accomplishing this? Any help from you guys would be appreciated. Kind Regards, G

    Read the article

  • Is "Server not found" error related to Activclient?

    - by Kent
    Users are getting sporadic "Server not found" errors after idling in the browser. We have an HTTPS web application (Apache/Tomcat) using NSS for authentication on the server. The error occurs when a user opens the application and then lets it sit idle/untouched for 15 minutes. When they try to access the application again they can get a "Server not found" error. Users use CAC cards with ActivClient software, and our web application uses the certificates for authentication and authorization.

    We have been able to recreate the problem but have been unable to diagnose it. In recreating the problem, the server gets a series of "Unable to find the certificate or key necessary for authentication" errors in the NSS log associated with the browser error. These errors don't occur until the user tries to access the idle application. When the application is idle for 15 minutes the PIN is not requested, yet the PIN cache timeout in ActivClient is set at 15 minutes. All our server-side timeout parameters are set to hours, not minutes. IE 6 is our browser and NSS is using TLS. We have tried modifying "SetEnvIf User-Agent ".MSIE." ssl-unclean-shutdown" with no improvement.

    I understand that the PIN cache timeout and the SSL session don't have a 1:1 relationship, but the timing is suspicious. We can't find anything in the Windows error logs that indicates a problem (the security logs are not accessible to us). Any suggestions as to how to identify the cause of the problem would be appreciated.

    Read the article

  • Polymorphic Queue

    - by metdos
    Hello everyone, I'm trying to implement a polymorphic queue. Here is my attempt:

        QQueue <Request *> requests;

        while(...) {
            QString line = QString::fromUtf8(client->readLine()).trimmed();
            if(...) {
                Request *request = new Request();
                request->tcpMessage = line.toUtf8();
                request->decodeFromTcpMessage(); //this initializes variables in request using tcpMessage
                if(request->requestType == REQUEST_LOGIN) {
                    LoginRequest loginRequest;
                    request = &loginRequest;
                    request->tcpMessage = line.toUtf8();
                    request->decodeFromTcpMessage();
                    requests.enqueue(request);
                }
                //Here pointers in "requests" do not point to the objects I created above,
                //and I noticed that their destructors are also called.
                LoginRequest *loginRequest2 = dynamic_cast<LoginRequest *>(requests.dequeue());
                loginRequest2->decodeFromTcpMessage();
            }
        }

    Unfortunately, I could not manage to make the polymorphic queue work with this code, for the reason I mentioned in the second comment. I guess I need to use smart pointers, but how? I'm open to any improvement of my code or a new implementation of a polymorphic queue. Thanks.

    Read the article

  • Java Scanner won't follow file

    - by Steve Renyolds
    Trying to tail / parse some log files. Entries start with a date and can then span many lines. This works, but never sees new entries to the file:

        File inputFile = new File("C:/test.txt");
        InputStream is = new FileInputStream(inputFile);
        InputStream bis = new BufferedInputStream(is);
        //bis.skip(inputFile.length());
        Scanner src = new Scanner(bis);
        src.useDelimiter("\n2010-05-01 ");
        while (true) {
            while(src.hasNext()){
                System.out.println("[ " + src.next() + " ]");
            }
        }

    It doesn't seem like Scanner's next() or hasNext() detects new entries to the file. Any idea how else I can implement, basically, a tail -f with a custom delimiter?

    ok - using Kelly's advice I'm checking and refreshing the scanner, and this works. Thank you!! If anyone has improvement suggestions, please do share!

        File inputFile = new File("C:/test.txt");
        InputStream is = new FileInputStream(inputFile);
        InputStream bis = new BufferedInputStream(is);
        //bis.skip(inputFile.length());
        Scanner src = new Scanner(bis);
        src.useDelimiter("\n2010-05-01 ");
        while (true) {
            while(src.hasNext()){
                System.out.println("[ " + src.next() + " ]");
            }
            Thread.sleep(50);
            if(bis.available() > 0){
                src = new Scanner(bis);
                src.useDelimiter("\n2010-05-01 ");
            }
        }

    Read the article

  • SEO: Where do I start?

    - by James
    Hi, I am primarily a software developer, but I tend to delve into some web development from time to time. I have recently been asked to have a look at a friend's website as they want to improve their position in search engine results, i.e. Google/Yahoo etc. I am aware there is no guarantee that their position will change; however, I do know there are techniques/ways to make your website more visible to search engine spiders and consequently improve your position in the rankings, i.e. performing SEO.

    Before I started looking at the SEO of the site I did the following prerequisite checks:

    - Ran the website through the W3C Markup Validator and the W3C CSS Validator services.
    - Looked through the markup code manually (checking for meta tags etc).
    - Performed a thorough cross-browser compatibility test.

    From those checks, the following was evident:

    - No SEO has been performed on the site before.
    - The website has been developed using a visual editing tool such as Dreamweaver (it failed the validation services miserably and tables were being used everywhere!).
    - The site is fairly cross-browser compatible (only some slight issues with IE8 which are easily resolved).
    - The site navigation isn't very search-engine friendly (e.g. index.php?page=home).

    I can see right away that a major improvement for SEO (or so I think) would be to change the way the website is structured, i.e. move away from dynamic pages such as "index.php?page=home" to actual pages called "home.html". Other areas would be to add meta tags to identify keywords, and then sprinkle these keywords over the pages. As I am a rookie in this department, could anyone give me some advice on how I could perform thorough SEO on this website? Thanks in advance.

    Read the article

  • Should I expect Comet to be this slow?

    - by Chad Johnson
    I have the following in a Rails controller:

        def poll
          records = []
          start_time = Time.now.to_i
          while records.length == 0 do
            records = Something.uncached{Something.find(:all, :conditions => { :some_condition => false})}
            if records.length > 0
              break
            end
            sleep 1
            if Time.now.to_i - start_time >= 20
              break
            end
          end
          responseData = []
          records.each do |record|
            responseData << { 'something' => record.some_value }
            # Flag message as received.
            record.some_condition = true
            record.save
          end
          render :text => responseData.to_json
        end

    and then I have Javascript performing an AJAX request. The request sits there for 20 seconds or until the controller method finds a record in the database, waiting. That works.

        function poll() {
          $.ajax({
            url: '/my_controller/poll',
            type: 'GET',
            dataType: 'json',
            cache: false,
            data: 'time=' + new Date().getTime(),
            success: function(response) {
              // show response here
            },
            complete: function() {
              poll();
            },
            error: function() {
              alert('error');
              poll();
            }
          });
        }

    When I have 5 - 10 tabs open in my browser, my web application becomes super slow. Is this to be expected? Or is there some obvious improvement(s) I can make?

    Read the article

  • Classic ASP on IIS 7

    - by jagr
    Hi, I am having problems with my app running on IIS 7. The application is a mixture of classic ASP and ASP.NET MVC (don't ask how and why). Anyway, the application is up and running, except for some problems that I am experiencing.

    For example, I have a button on my page and when I click it, JavaScript opens a popup which needs to contain an .asp page. But that doesn't happen. I get a blank popup with the cursor set to busy as it keeps loading. This happens to me almost always in IE. In Firefox it is much better, but sometimes the app jams there too. If I close the opened, blank popup and want to move around the application, my menu buttons (which are also .asp) don't load properly. For example, I have different buttons for different sections and when I move around they should change. When I restart the browser, only then does everything work normally for some time, but the problem occurs again after a while.

    I am very sure that the problem is not in the application itself, because it works properly on my colleagues' machines without these problems. They have the same OS (Vista Professional) and we compared the settings in IIS and they match. So I am very confused, and I really don't know how to solve the problem. I found a bunch of articles and blog posts about classic ASP and IIS 7, but most of them are about enabling ASP, which I already did. So I suspect that something is wrong with IIS, but I don't know what. I tried to reinstall it, hoping for some improvement, but had no luck. If you need more details please ask. Does anyone have any idea what I should try or do?

    Read the article

  • Copy Small Bitmaps on to Large Bitmap with Transparency Blend: What is faster than graphics.DrawImage?

    - by Glenn
    I have identified this call as a bottleneck in a high-pressure function:

        graphics.DrawImage(smallBitmap, x, y);

    Is there a faster way to blend small semi-transparent bitmaps into a larger semi-transparent one? Example usage:

        XY[] locations = GetLocs();
        Bitmap[] bitmaps = GetBmps(); //small image sizes vary, approx 30px x 30px
        using (Bitmap large = new Bitmap(500, 500, PixelFormat.Format32bppPArgb))
        using (Graphics largeGraphics = Graphics.FromImage(large))
        {
            for(var i = 0; i < largeNumber; i++)
            {
                //this is the bottleneck
                largeGraphics.DrawImage(bitmaps[i], locations[i].x, locations[i].y);
            }

            var done = new MemoryStream();
            large.Save(done, ImageFormat.Png);
            done.Position = 0;
            return (done);
        }

    The DrawImage calls take small 32bppPArgb bitmaps and copy them into a larger bitmap at locations that vary, and the small bitmaps might only partially overlap the larger bitmap's visible area. Both images have semi-transparent contents that get blended by DrawImage in a way that is important to the output. I've done some testing with BitBlt but have not seen a significant speed improvement, and the alpha blending didn't come out the same in my tests. I'm open to just about any method, including a better call to BitBlt or unsafe C# code.
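    For what it's worth, one hedged suggestion (not from the original post) before reaching for unsafe code: make sure GDI+ isn't doing extra per-call work. Configuring the destination Graphics once outside the loop, keeping the small bitmaps in the same premultiplied-ARGB format as the target, and drawing to an explicit destination rectangle (which sidesteps DrawImage's resolution-based rescaling of the source) can all shave time off each call. A minimal sketch:

        using System.Drawing;
        using System.Drawing.Drawing2D;
        using System.Drawing.Imaging;

        // Assumes largeGraphics, bitmaps and locations from the example above.
        largeGraphics.CompositingMode = CompositingMode.SourceOver;       // keep alpha blending
        largeGraphics.CompositingQuality = CompositingQuality.HighSpeed;
        largeGraphics.InterpolationMode = InterpolationMode.NearestNeighbor;
        largeGraphics.SmoothingMode = SmoothingMode.None;
        largeGraphics.PixelOffsetMode = PixelOffsetMode.None;

        for (int i = 0; i < bitmaps.Length; i++)
        {
            Bitmap b = bitmaps[i];
            // Destination rectangle matches the source size, so no scaling is requested.
            largeGraphics.DrawImage(b, new Rectangle(locations[i].x, locations[i].y, b.Width, b.Height));
        }

    Whether this is enough depends on the workload; if not, the usual next step is LockBits on both bitmaps and a hand-rolled premultiplied-alpha blend, at the cost of considerably more code.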

    Read the article

  • PHP: Profiling code and strict environment ~ Improving my coding

    - by DavidYell
    I would like to update my local working environment to be stricter in an effort to improve my code. I know that my code is okay, but as with most things there is always room for improvement. I use XAMPP on my local machine; for simplicity's sake, Apache Friends XAMPP (Basic Package) version 1.7.2.

    So I've updated my php.ini: error_reporting is now E_ALL | E_STRICT to help with the code standard. I've also enabled the XDebug extension (zend_extension = "C:\xampp\php\ext\php_xdebug.dll"), which seems to be working, having tested some broken code and got the nice standard orange error notice.

    However, having read this question, http://stackoverflow.com/questions/133686/what-is-the-best-way-to-profile-php-code, and enabled the profiler, I cannot seem to create a cachegrind file. Many of the guides I've looked at seem to think you need to install XDebug in XAMPP, which leads me to think they are out of date, as XDebug is bundled with XAMPP these days.

    So I would appreciate it if anyone can help point me in the right direction with configuring XDebug to output grind files, and/or just a great set of default settings for the XDebug config in XAMPP. There seems to be very little documentation to go on. If people have tips on integrating these tools with NetBeans, that would be awesomesauce. I'm happy to get suggestions on other things that I can do to help tighten up my PHP code, both syntactically and performance-wise. Thanks, and apologies for the rambling question(s)!

    Ninja edit: I should mention that I'm using named vhosts as my Apache configuration, which I think is why running XDebug on port 9000 isn't working for me. I guess I'd need to edit my vhost to include port 9000.
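    As a hedged starting point (not from the original post): in XDebug 2.x the cachegrind profiler is off by default and is controlled by its own php.ini switches, separate from the remote-debugging settings NetBeans talks to on port 9000 (remote debugging connects out from PHP to the IDE, so it does not need a port in the vhost). Example values, to be adjusted for the local XAMPP paths:

        ; assumed php.ini snippet for the XDebug 2.x build bundled with XAMPP
        zend_extension = "C:\xampp\php\ext\php_xdebug.dll"

        ; cachegrind profiler
        xdebug.profiler_enable = 1
        xdebug.profiler_output_dir = "C:\xampp\tmp"
        xdebug.profiler_output_name = "cachegrind.out.%t.%p"

        ; step debugging for NetBeans (unrelated to profiling)
        xdebug.remote_enable = 1
        xdebug.remote_host = "localhost"
        xdebug.remote_port = 9000

    After an Apache restart, each request should drop a cachegrind.out.* file into the output directory, which tools such as WinCacheGrind or KCachegrind can open; setting xdebug.profiler_enable_trigger = 1 instead of profiler_enable lets you profile only requests that carry the XDEBUG_PROFILE parameter.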

    Read the article

  • Consecutive Tables in LaTeX

    - by Tim
    Hi, I wonder how to place several tables consecutively in LaTeX. The page with the text right before the first table has a little space, but not enough for the first table, so the first table gets placed at the top of the next page, although I use "\begin{table}[!h]" for it. The second table does not fit into the space remaining after the first table, so I think I might use longtable for it to span the rest of the page and the top of the next page. Similarly, I use longtable for the third table. The LaTeX code is as follows:

        ... % some text
        \begin{table}[!h]
        \caption{Table 1. \label{tab:1}}
        \begin{center}
        \begin{tabular}{c c}
        ...
        \end{tabular}
        \end{center}
        \end{table}

        \begin{center}
        \begin{longtable}{ c c }
        \caption{Table 2. \label{tab:2}}\\
        ...
        \end{longtable}
        \end{center}

        \begin{center}
        \begin{longtable}{ c c }
        \caption{Table 3. \label{tab:3}}\\
        ...
        \end{longtable}
        \end{center}
        ... % some text

    In the compiled PDF file it turns out that the order of the tables is messed up. The first table is placed after the second and third ones, and the second one spans the page with the text before the tables and the next page, with the third one following it. I would like to know how I can make the three tables appear consecutively, in order, with no blank space left between them or between the text and the tables. Or, if what I hope for is not possible, what is the best strategy then? Thanks and regards!

    EDIT: Removing [!h] does not make any improvement; the first table still comes after the second and the third.

    Read the article

  • Changing the indexing on existing table in SQL Server 2000

    - by Raj
    Guys, here is the scenario: SQL Server 2000 (8.0.2055). The table currently has 478 million rows of data. The primary key column is an INT with IDENTITY. There is a unique constraint imposed on two other columns with a non-clustered index. This is a vendor application and we are only responsible for maintaining the DB.

    Now the vendor has recommended doing the following "to improve performance":

    1. Drop the PK and clustered index
    2. Drop the non-clustered index on the two columns with the UNIQUE CONSTRAINT
    3. Recreate the PK, with a NON-CLUSTERED index
    4. Create a CLUSTERED index on the two columns with the UNIQUE CONSTRAINT

    I am not convinced that this is the right thing to do. I have a number of concerns.

    By dropping the PK and indexes, you will be creating a heap with 478 million rows of data. Then creating a CLUSTERED INDEX on two columns would be a really mammoth task. Would creating another table with the same structure and the new indexing scheme, copying the data over, dropping the old table and renaming the new one be a better approach?

    I am also not sure how the stored procs will react. Will they continue using the cached execution plan, considering that they are not being explicitly recompiled?

    I am simply not able to understand what kind of "performance improvement" this change will provide. I think that it will actually have the reverse effect. All thoughts welcome. Thanks in advance, Raj

    Read the article

  • Can't send SMTP email from network using C#, ASP.NET website

    - by Kaysar
    Hi, I have my code here; it works fine from my home, where my user is an administrator and I am connected to the internet via a cable network. The problem is that when I try this code from my workplace, it does not work. It shows the error: "unable to connect to the remote server". From a different machine on the same network: "A socket operation was attempted to an unreachable network 209.xxx.xx.52:25".

    I checked with our network admin, and he assured me that all the mail ports are open [25, 110, and other ports for gmail]. Then I logged in with administrative privileges and there was a little improvement: it did not show any error, but the actual email was never received. Please note that the code was tested from the development environment, Visual Studio 2005 and 2008. Any suggestion will be much appreciated. Thanks in advance.

        try
        {
            MailMessage mail_message = new MailMessage("[email protected]", txtToEmail.Text, txtSubject.Text, txtBody.Text);
            SmtpClient mail_client = new SmtpClient("SMTP.y7mail.com");
            NetworkCredential Authentic = new NetworkCredential("[email protected]", "xxxxx");
            mail_client.UseDefaultCredentials = true;
            mail_client.Credentials = Authentic;
            mail_message.IsBodyHtml = true;
            mail_message.Priority = MailPriority.High;
            try
            {
                mail_client.Send(mail_message);
                lblStatus.Text = "Mail Sent Successfully";
            }
            catch (Exception ex)
            {
                System.Diagnostics.Debug.WriteLine(ex.Message);
                lblStatus.Text = "Mail Sending Failed\r\n" + ex.Message;
            }
        }
        catch (Exception ex)
        {
            lblStatus.Text = "Mail Sending Failed\r\n" + ex.Message;
        }
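    A couple of hedged things worth ruling out (not from the original post): many corporate networks block outbound port 25 even when the mail ports are nominally open, and hosted SMTP services usually expect the submission port with SSL/TLS rather than plain port 25; also, when supplying explicit credentials, UseDefaultCredentials is normally set to false, and set before Credentials, since assigning it afterwards overwrites them. A sketch of the client setup with those points applied (the port number and SSL requirement are assumptions to verify against y7mail's documentation):

        SmtpClient mail_client = new SmtpClient("SMTP.y7mail.com");
        mail_client.Port = 587;                                   // submission port; 25 is often blocked outbound
        mail_client.EnableSsl = true;
        mail_client.DeliveryMethod = SmtpDeliveryMethod.Network;
        mail_client.UseDefaultCredentials = false;                // set before Credentials
        mail_client.Credentials = new NetworkCredential("[email protected]", "xxxxx");

    If the blocked-port theory is right, a quick "telnet SMTP.y7mail.com 25" (and then 587) from a machine on the office network will show immediately which ports can actually be reached.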

    Read the article

  • Entity Framework vNext wish list

    - by Fred Yang
    I have been intensively studying and using EF4 in my project. I do feel the improvement it brings over version 1, but I have found some things I cannot easily get around. Here is a list of what I want to be better in EF vNext:

    - The model designer should allow multiple views of the same model, so that I don't need to cram all my entities into a single view.
    - Respect the user's manual edits of the edmx. Currently, some database view objects simply cannot be imported into the model because the designer "smartly" thinks the view does not have a primary key, so I have to manually edit the edmx to correct the designer's behavior. But in the next "update from database" task, the designer reverts my customization. For now, I either fall back to editing the edmx file entirely by hand, or I have to use a compare tool to keep the new update, roll back, and merge the new update into my old edmx file manually. The designer should be improved to allow both default behavior and the user's manual control; I want to be able to stop the designer from refreshing the changes to an imported object.
    - Support user-defined table functions. LINQ is about composability, and stored procs do not support composability. I wish I could use user-defined table functions, which do.

    What are your wishes for EF vNext?

    Read the article

  • Is there a standard pattern for scanning a job table and executing some actions?

    - by Howiecamp
    (I realize that my title is poor. If after reading the question you have an improvement in mind, please either edit it or tell me and I'll change it.)

    I have the relatively common scenario of a job table which has 1 row for each thing that needs to be done. For example, it could be a list of emails to be sent. The table looks something like this:

        ID    Completed   TimeCompleted   anything else...
        ----  ---------   -------------   ----------------
        1     No                          blabla
        2     No                          blabla
        3     Yes         01:04:22        ...

    I'm looking for a standard practice/pattern (or code - C#/SQL Server preferred) for periodically "scanning" (I use the term "scanning" very loosely) this table, finding the not-completed items, doing the action and then marking them completed once done successfully.

    In addition to the basic process for accomplishing the above, I'm considering the following requirements:

    - I'd like some means of "scaling linearly", e.g. running multiple "worker processes" simultaneously or threading or whatever. (Just a specific technical thought - I'm assuming that as a result of this requirement, I need some method of marking an item as "in progress" to avoid attempting the action multiple times; see the sketch after this list.)
    - Each item in the table should only be executed once.

    Some other thoughts:

    - I'm not particularly concerned with whether the implementation is done in the database (e.g. in T-SQL or PL/SQL code) or in external program code (e.g. a standalone executable or some action triggered by a web page) executed against the database.
    - Whether the "doing the action" part is done synchronously or asynchronously is not something I'm considering as part of this question.
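    For reference, one widely used pattern for the "in progress" marking (a hedged sketch, not taken from the question) is to let the database hand out work atomically, so two workers rarely collide on the same rows. On SQL Server this is often done with an UPDATE ... OUTPUT driven from C#, roughly:

        using System;
        using System.Data;
        using System.Data.SqlClient;

        // Hedged sketch: table/column names (JobQueue, Status, ClaimedBy, EmailAddress)
        // are illustrative, not from the question. Assumes SQL Server 2005+ for OUTPUT/READPAST.
        static DataTable ClaimBatch(string connectionString, string workerId, int batchSize)
        {
            const string sql = @"
                UPDATE TOP (@batchSize) j
                SET    j.Status = 'InProgress',
                       j.ClaimedBy = @workerId,
                       j.ClaimedAt = GETUTCDATE()
                OUTPUT inserted.ID, inserted.EmailAddress
                FROM   JobQueue j WITH (ROWLOCK, READPAST)
                WHERE  j.Status = 'Pending';";

            using (var conn = new SqlConnection(connectionString))
            using (var cmd = new SqlCommand(sql, conn))
            {
                cmd.Parameters.AddWithValue("@batchSize", batchSize);
                cmd.Parameters.AddWithValue("@workerId", workerId);
                conn.Open();
                var claimed = new DataTable();
                new SqlDataAdapter(cmd).Fill(claimed);   // rows returned here are now owned by this worker
                return claimed;
            }
        }

    Each worker then processes only the rows it got back and flips them to Completed (with TimeCompleted) when done. Because the status filter plus READPAST lets concurrent callers skip rows another worker is updating at that instant rather than blocking on them, several workers or threads can run the same claim-process loop side by side.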

    Read the article
